Data Descriptor

Surface EMG Datasets for Hand Gesture Recognition Under Constant and Three-Level Force Conditions

by Cinthya Alejandra Zúñiga-Castillo, Víctor Alejandro Anaya-Mosqueda, Natalia Margarita Rendón-Caballero, Marcos Aviles *, José M. Álvarez-Alvarado, Roberto Augusto Gómez-Loenzo and Juvenal Rodríguez-Reséndiz *
Facultad de Ingeniería, Universidad Autónoma de Querétaro, Santiago de Querétaro 76010, Mexico
* Authors to whom correspondence should be addressed.
Data 2025, 10(12), 194; https://doi.org/10.3390/data10120194
Submission received: 14 October 2025 / Revised: 10 November 2025 / Accepted: 20 November 2025 / Published: 22 November 2025

Abstract

This work introduces two complementary surface electromyography (sEMG) datasets for hand gesture recognition. Signals were collected from 40 healthy subjects aged 18 to 40 years, divided into two independent groups of 20 participants each. In both datasets, subjects performed five hand gestures. Most of the gestures are the same, although the exact set and the order differ slightly between datasets. For example, Dataset 2 (DS2) includes the simultaneous flexion of the thumb and index finger, which is not present in Dataset 1 (DS1). Data were recorded with three bipolar sEMG sensors placed on the dominant forearm (flexor digitorum superficialis, extensor digitorum, and flexor pollicis longus). A battery-powered acquisition system was used, with sampling rates of 1000 Hz for DS1 and 1500 Hz for DS2. DS1 contains recordings performed at a constant moderate force, while DS2 includes three force levels (low, medium, and high). Both datasets provide raw signals and pre-processed versions segmented into overlapping windows, with clear file structures and annotations, enabling feature extraction for machine learning applications. Together, they constitute a large-scale standardized sEMG resource that supports the development and benchmarking of gesture and force recognition algorithms for rehabilitation, assistive technologies, and prosthetic control.
Dataset License: CC-BY 4.0

1. Introduction

Hand gesture recognition is a key element in the development of surface electromyography (sEMG)-based control systems, such as rehabilitation devices, assistive technologies, and prosthetic controllers [1]. sEMG signals capture the electrical activity of muscle fibers during contraction [2], and their acquisition from multiple forearm muscle groups provides the basis for analyzing both static muscle patterns and dynamic hand and wrist movements. This information is essential for reliable gesture recognition [3].
In recent years, machine learning has become the dominant approach for interpreting sEMG signals, enabling accurate pattern recognition and movement detection [4,5]. However, the performance of these algorithms strongly depends on the availability of sufficiently large datasets, the diversity of gestures, the number of participants, and the standardization of acquisition protocols [6]. Higher classification performance in muscle activity interpretation translates directly into more responsive and robust control systems [7].
To address these needs, this work introduces two complementary datasets specifically designed to support reproducible machine learning research in hand gesture recognition. Both datasets were acquired under controlled conditions from 40 healthy volunteers and follow the same acquisition protocol but differ in force requirements. In DS1, each of the 20 participants performed 20 repetitions of five gestures at a constant moderate force. In DS2, another 20 participants performed 30 repetitions of five gestures at three force intensities (low, medium, and high). Although most gestures are shared, their exact set and acquisition order differ between DS1 and DS2. Together, these datasets constitute a diverse, standardized, and well-documented resource for advancing gesture and force recognition studies, especially rehabilitation applications that rely on low-channel recordings from healthy subjects [8].
The main characteristics of both datasets can be listed as follows:
  • Extensive sEMG signal data: The database is structured and documented to a level of quality comparable to established resources such as the Ninapro database [9], which contains a large number of datasets and movements from 78 subjects (67 intact individuals and 11 amputees). In comparison, the present resource comprises two complementary sEMG datasets recorded from three forearm muscle groups of 40 healthy subjects, providing detailed information on muscle activity during five predefined hand gestures. DS2 additionally incorporates recordings at three force levels (low, medium, and high). These features enable the analysis of complex neuromuscular activation patterns with potential applications in biomechanics, rehabilitation, assistive control systems, and motor control research.
  • Diversity in gestures and subjects: The database contains signals from forty healthy subjects, including both male and female participants, divided into two independent groups of twenty. Each dataset includes five predefined gestures. This diversity facilitates studies of inter-individual variability, gesture recognition, and the estimation of force levels under consistent experimental conditions [3].
  • High reusability for artificial intelligence and data-driven applications: The datasets include both raw and pre-processed sEMG signals, segmented into labeled, overlapping windows, and provided in accessible file formats (.tdms and .mat); a minimal loading sketch is given after this list. This structure supports reproducibility and enables the development and benchmarking of machine learning models for gesture recognition and force classification in sEMG-based control systems.
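As a starting point for reuse, the sketch below shows one way to load the two distributed formats in Python (the language used in this work for feature extraction and formatting). The file names (EMG_S.mat, subject01.tdms) and the variable layout inside the files are assumptions for illustration only; the actual keys should be inspected in the downloaded files.

```python
# Minimal loading sketch. File and variable names are placeholders, not
# guaranteed names from the datasets; inspect the downloads to adapt them.
import numpy as np
from scipy.io import loadmat   # pip install scipy
from nptdms import TdmsFile    # pip install npTDMS

# Pre-processed data (.mat): loadmat returns a dict of variables.
mat = loadmat("EMG_S.mat")                        # hypothetical DS1 file
keys = [k for k in mat if not k.startswith("__")]
print("variables:", keys)
windows = np.asarray(mat[keys[0]])
print("windowed data shape:", windows.shape)      # e.g., (windows, channels, samples)

# Raw recording (.tdms): iterate groups and channels to find the sEMG traces.
tdms = TdmsFile.read("subject01.tdms")            # hypothetical DS2 file
for group in tdms.groups():
    for channel in group.channels():
        print(group.name, channel.name, len(channel[:]))
```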
The remainder of this manuscript is organized as follows. Section 2 provides a detailed description of the datasets, including their structure, recording parameters, organization, and accessibility. Section 3 describes the acquisition system, electrode placement, experimental protocols, and preprocessing procedures used to obtain and format the data. Finally, Section 4 discusses potential applications of the datasets, their limitations, and future directions for research in sEMG-based gesture and force recognition.

2. Data Description

DS1 includes a total of 2000 valid repetitions, while DS2 includes 2863 valid repetitions out of the 3000 expected. In both datasets, each repetition was recorded continuously, but the processed versions retain only the useful samples after discarding the initial and final rest periods; the raw .tdms files keep the remaining samples. Each repetition in DS1 contains 4500 useful samples (4.5 s) recorded across three channels, whereas each repetition in DS2 contains 9000 useful samples (6 s) across the same three channels. Across both datasets, 36 participants were right-hand dominant and 4 were left-hand dominant. The acquisition channels were defined as follows:
  • Channel 1: sEMG signals from the flexor digitorum superficialis.
  • Channel 2: sEMG signals from the extensor digitorum.
  • Channel 3: sEMG signals from the flexor pollicis longus.
For DS1, the repetitions of all subjects were recorded in a single file per gesture, whereas for DS2, the repetitions of each subject were recorded in separate files. Reflecting this file organization, summary statistics of the raw voltage values are presented per gesture in Table 1 and per subject in Table 2.
DS1 is divided into five gestures, each with 400 repetitions pooled across the 20 subjects; the corresponding statistics are shown in Table 1:
Table 1. Statistical values of the gestures of DS1 (raw voltage values, V).

Gesture Number | Mean | Max | Min
1 | −0.0032 | 0.4498 | −0.4255
2 | −0.0042 | 0.6529 | −0.7277
3 | −0.0040 | 0.3250 | −0.3605
4 | −0.0042 | 0.8209 | −0.7080
5 | −0.0044 | 0.5168 | −0.5083
DS2 is organized by subject (20 subjects, approximately 30 repetitions per gesture); the corresponding statistics are shown in Table 2:
Table 2. Statistical values of the subjects of DS2 (raw voltage values, V).

Subject Number | Sex | Age | Mean | Max | Min
1 | M | 22 | −0.0149 | 0.4777 | −0.4279
2 | F | 20 | −0.0177 | 0.4584 | −0.5455
3 | M | 22 | −0.0170 | 0.4013 | −0.3310
4 | M | 22 | −0.0170 | 0.0950 | −0.0667
5 | M | 23 | −0.0063 | 0.2855 | −0.2979
6 | F | 22 | −0.0054 | 0.1628 | −0.2018
7 | M | 23 | −0.0052 | 0.6004 | −0.3318
8 | M | 22 | −0.0073 | 0.6454 | −0.7561
9 | M | 22 | −0.0137 | 0.2907 | −0.2868
10 | M | 22 | −0.0107 | 0.1645 | −0.2688
11 | F | 22 | −0.0061 | 0.0860 | −0.1070
12 | F | 31 | −0.0023 | 0.1032 | −0.0877
13 | M | 21 | −0.0031 | 0.1203 | −0.1135
14 | M | 21 | −0.0032 | 0.0766 | −0.0787
15 | M | 25 | −0.0024 | 0.1289 | −0.1482
16 | M | 22 | −0.0025 | 0.0942 | −0.1027
17 | F | 22 | −0.0054 | 0.1169 | −0.1298
18 | M | 22 | −0.0046 | 1.5892 | −0.9401
19 | M | 21 | −0.0028 | 0.2302 | −0.2001
20 | M | 22 | −0.0047 | 0.1191 | −0.1263
To obtain better classification precision, the gestures under study should be selected specifically for the target application and for the AI algorithm that will classify them [10]. Accordingly, the gestures here were chosen with the development of sEMG-based biomedical equipment in mind. Everyday gestures were excluded from this application to avoid misinterpreting the subject's intentions. The resulting set comprises five gestures with distinct levels of muscle activation, giving AI algorithms discriminative features for classification.
Although both datasets contain five hand gestures, their exact definition and acquisition order differ. The indices Mov1–Mov5 should therefore not be considered equivalent across DS1 and DS2.
  • DS1 gestures:
    Mov1: Fist (flexion of all fingers).
    Mov2: Thumb flexion.
    Mov3: Rest.
    Mov4: Extension of all fingers.
    Mov5: Flexion of the middle and ring fingers.
  • DS2 gestures:
    Mov1: Fist (flexion of all fingers).
    Mov2: Simultaneous flexion of the thumb and index finger.
    Mov3: Flexion of the middle and ring fingers.
    Mov4: Extension of all fingers.
    Mov5: Rest.
As noted above, the thumb flexion gesture differs between datasets. In DS2, greater muscle activation was observed during thumb flexion because participants touched the tip of the thumb to the tip of the little finger, rather than to the base of the little finger as in DS1.
In DS1, all repetitions were performed at a constant moderate force. In DS2, each gesture was executed at three force levels (low, medium, high). Participants received a verbal description of the three levels and interpreted them subjectively. No external measurement system (e.g., a dynamometer) was used, reflecting real-world prosthetic and assistive control scenarios where absolute force references are typically unavailable; algorithms trained only on constant force values tend to struggle when applied in such real environments [11]. Allowing subjective interpretation introduces inter-subject variability, which increases the dataset's value for evaluating the robustness and generalization of machine learning algorithms.

2.1. DS1 Organization

The dataset includes recordings from 20 participants, each performing 20 repetitions per gesture (2000 repetitions in total) at a sampling frequency of 1000 Hz. The files are organized as follows (a loading and label-decoding sketch is given after this list):
  • EMG_DB: Signals of the 20 subjects containing 2000 repetitions × three channels × 4500 samples. Every 400 repetitions correspond to one gesture (Mov1–Mov5).
  • EMG_S: Segmented signals, with 174,000 windows × three channels × 200 samples.
  • Mov1–Mov5: Each file corresponds to one gesture. For example, Mov1 contains 400 repetitions of the fist gesture (three channels, 4.5 s per repetition).
  • TDMS files: One raw file per subject, containing all gestures with markers for valid repetitions.
  • Window Label Gestures: Labels for each window in one-hot encoding. Gestures are coded from 0 to 4.
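A minimal sketch of loading and sanity-checking this structure follows, assuming EMG_S holds an array of shape (174,000, 3, 200) and Window Label Gestures holds one-hot labels of shape (174,000, 5); the correspondence of label indices 0–4 to Mov1–Mov5 in the listed order is an assumption, not confirmed by the dataset documentation.

```python
# Sketch: verify the DS1 window/label shapes and decode the one-hot labels.
# Variable layout and the code-to-gesture mapping below are assumptions.
import numpy as np
from scipy.io import loadmat

def first_array(mat_dict):
    """Return the first non-metadata variable from a loadmat dictionary."""
    return np.asarray(next(v for k, v in mat_dict.items() if not k.startswith("__")))

X = first_array(loadmat("EMG_S.mat"))                  # (windows, channels, samples)
Y = first_array(loadmat("Window_Label_Gestures.mat"))  # one-hot, (windows, 5)
assert X.shape[0] == Y.shape[0] == 174_000

labels = Y.argmax(axis=1)  # one-hot -> integer codes 0..4
assumed_names = ["Fist", "Thumb flexion", "Rest",
                 "Extension of all fingers", "Middle+ring flexion"]
for code, count in zip(*np.unique(labels, return_counts=True)):
    print(f"gesture {code} ({assumed_names[code]}): {count} windows")
```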

2.2. DS2 Organization

DS2 includes recordings from 20 participants, each performing approximately 30 repetitions per gesture at three force intensities (low, medium, and high) with a sampling frequency of 1500 Hz. A total of 2863 repetitions were retained after quality control:
  • Data_All_Raw: Unprocessed data of the 20 subjects, containing 2863 repetitions × three channels × 15,000 samples. Each subject has ∼150 repetitions divided into five gestures (30 per gesture, 10 per force).
  • EMG_WS_All: Segmented signals, with 332,108 windows × 3 channels × 375 samples.
  • Feature_AAV_All: Amplitude Average Value (AAV) features from the 332,108 windows across three channels.
  • Feature_MAV_All: Mean Absolute Value (MAV) features from the 332,108 windows across three channels (see the cross-check sketch after this list).
  • Window_Label_Gestures: Labels for each window. Gestures are coded from 0 to 4.
  • Subject folders: A total of 20 folders (one per subject), each containing the five gestures. The raw .tdms files include both gesture and rest segments, which explains the 15,000 samples per repetition. In the processed datasets, only the useful part of 9000 samples (6 s) per repetition was retained. Subjects 1 and 2 lack the rest gesture (120 repetitions each).
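The distributed features can be cross-checked against the windowed signals using the MAV definition given in Section 3. The sketch below assumes EMG_WS_All has shape (332,108, 3, 375) and Feature_MAV_All has shape (332,108, 3), with matching row order; these shapes and orientations are assumptions to verify against the actual files.

```python
# Sketch: recompute MAV from the DS2 windows and compare with the
# distributed Feature_MAV_All. Shapes and row alignment are assumed.
import numpy as np
from scipy.io import loadmat

def first_array(mat_dict):
    return np.asarray(next(v for k, v in mat_dict.items() if not k.startswith("__")))

W = first_array(loadmat("EMG_WS_All.mat"))       # assumed (332108, 3, 375)
F = first_array(loadmat("Feature_MAV_All.mat"))  # assumed (332108, 3)

mav = np.abs(W).mean(axis=-1)  # mean absolute value over the sample axis
print("max deviation from distributed MAV:", np.max(np.abs(mav - F)))
```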
The main characteristics of the two complementary sEMG datasets are summarized in Table 3. This table consolidates all relevant information regarding participants, acquisition setup, recording parameters, and gesture definitions, as well as the structure and accessibility of the data. It also highlights the differences between DS1 and DS2 in terms of sampling rate, number of repetitions, and applied force levels. Overall, Table 3 provides a comprehensive reference for understanding the organization, scope, and potential reuse of the presented datasets.

3. Materials and Methods

A total of 40 healthy volunteers (26 males and 14 females), aged 18–40 years, participated in the study. Subjects were considered healthy if they had no neurological, musculoskeletal, or dermatological conditions that could affect forearm motor function or skin–electrode contact. The acquisition protocol was approved in advance by the Institutional Ethics Committee, which determined that the procedure poses no harm to participants and that their personal data remain confidential, guaranteeing participant well-being. All participants provided written informed consent covering both the acquisition and the publication of the resulting data.
Signal acquisition was performed using three custom printed circuit boards (PCBs) for sEMG signal conditioning, powered by a rechargeable battery, and connected to a USB-6002 DAQ National Instruments device [12]. A second-order Butterworth band-pass filter (20–400 Hz, Sallen-Key configuration) [13] was implemented on each channel to reduce out-of-band components, and a 60 Hz notch filter was added to suppress powerline interference [14,15]. The acquisition was managed through a LabVIEW graphical interface as shown in Figure 1, which allowed configuration of sampling frequency, acquisition duration, and file storage parameters. The interface also provided real-time visualization of the three channels for continuous quality control, as shown in Figure 2.
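For users replicating the conditioning offline, the following sketch gives a digital approximation of this analog chain using SciPy. The hardware filters are analog (Sallen-Key circuits), so this is an assumed software equivalent, not the authors' implementation; the notch quality factor is likewise an assumed value.

```python
# Sketch: digital approximation of the analog conditioning chain
# (second-order Butterworth band-pass 20-400 Hz plus a 60 Hz notch).
import numpy as np
from scipy import signal

fs = 1000.0  # DS1 sampling rate; use 1500.0 for DS2

# Second-order Butterworth band-pass, 20-400 Hz.
sos = signal.butter(2, [20, 400], btype="bandpass", fs=fs, output="sos")
# 60 Hz notch for powerline interference (Q = 30 is an assumed value).
b_notch, a_notch = signal.iirnotch(60.0, Q=30.0, fs=fs)

def condition(raw: np.ndarray) -> np.ndarray:
    """Band-pass then notch filter a 1-D sEMG trace (zero-phase)."""
    x = signal.sosfiltfilt(sos, raw)
    return signal.filtfilt(b_notch, a_notch, x)

# Demo on synthetic data: 60 Hz hum plus broadband activity.
t = np.arange(0, 1, 1 / fs)
demo = 0.5 * np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.randn(t.size)
print(condition(demo).shape)  # (1000,)
```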
Recordings were obtained using bipolar circular Ag/AgCl ECG T-718 AMBIDERM wet electrodes (3.2 × 3.8 cm) placed in accordance with SENIAM recommendations [16]. Each electrode included conductive gel supplied by the manufacturer, and the skin was cleaned with alcohol prior to placement to reduce impedance. Six electrodes were used in three bipolar pairs, with a seventh serving as reference on the bony region of the wrist. Electrodes were positioned over the flexor digitorum superficialis, extensor digitorum, and flexor pollicis longus, which were identified by palpation while participants performed simple gestures. Their main functions are summarized in Table 4, and the electrode placement scheme is shown in Figure 2. To minimize interference, participants removed metallic or electronic objects prior to acquisition, and the computer was operated on battery power.
For DS1, participants were seated in a standardized position with the forearm resting at elbow height (Figure 3). After 1 s at rest, an audio signal marked the beginning of the gesture, which was performed for 8 s at a constant moderate force. A second audio signal indicated the end of the gesture, followed by 2 s of rest. The sampling rate was 1000 Hz. After every five repetitions, a pause of 50 s was given and signals were visually inspected for quality. Each subject performed 20 repetitions of each of the five gestures. The protocol is summarized in Figure 4.
For DS2, the same posture was adopted. After 2 s of rest, an audio signal marked the start of the gesture, which was maintained for 6 s at one of three voluntary force levels (low, medium, and high), followed by 2 s of rest. Force levels were verbally described and self-interpreted by participants, without external instrumentation, in order to reflect conditions of real-world prosthetic and assistive applications where absolute force values cannot be standardized [20]. Each subject performed 10 repetitions per gesture per force level. The sampling rate was 1500 Hz. As in DS1, pauses of 50 s were given every five repetitions, with visual quality inspection of signals. The protocol is shown in Figure 5.
In the acquisition of both datasets, participants were asked to perform each gesture separately so that the data could be stored by movement type. Because all repetitions of a movement were performed consecutively, a rest period was scheduled every four to five repetitions to avoid muscle fatigue from prolonged repetition of a gesture. At the end of each gesture's acquisition, a new file was created to store the next movement, resulting in five data files per subject.
Both datasets were subjected to identical preprocessing procedures. Rest segments were discarded, and the informative portion of each repetition was segmented into overlapping windows. To accommodate the different sampling rates, DS1 (1000 Hz) used 200 ms windows (200 samples) with 75% overlap, while DS2 (1500 Hz) used 250 ms windows (375 samples) with 80% overlap. It is worth noting that commonly used window lengths for the overlapping-window method range between 225 and 300 ms, with overlap values typically between 70% and 90% of the window length [21].
The total number of windows, N, was computed as follows:

$$ N = \frac{L - w}{s} + 1 $$
where L is the total signal length in samples, w is the window size, and s is the step size (window size minus overlap). This yielded 87 windows per repetition in DS1 and 116 windows per repetition in DS2. In DS2, two time-domain features were also extracted from each window and channel: the Amplitude Average Value (AAV) and the Mean Absolute Value (MAV). These features were calculated over each window of N samples (here N denotes the number of samples per window), where $w_i$ is the i-th sample in the window [22]:
$$ \mathrm{AAV} = \frac{1}{N} \sum_{i=1}^{N} w_i $$

$$ \mathrm{MAV} = \frac{1}{N} \sum_{i=1}^{N} |w_i| $$
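A minimal sketch of this segmentation and the two features follows; it reproduces the window counts stated above (87 per repetition for DS1, 116 for DS2). Function names are illustrative.

```python
# Sketch: overlapping-window segmentation and the AAV/MAV features.
import numpy as np

def segment(x: np.ndarray, w: int, overlap: float) -> np.ndarray:
    """Split a 1-D signal into overlapping windows: N = (L - w) / s + 1."""
    s = round(w * (1 - overlap))  # step size = window size minus overlap
    n = (len(x) - w) // s + 1
    return np.stack([x[i * s : i * s + w] for i in range(n)])

def aav(win: np.ndarray) -> float:
    return float(win.mean())          # Amplitude Average Value

def mav(win: np.ndarray) -> float:
    return float(np.abs(win).mean())  # Mean Absolute Value

# DS1: 4500 useful samples, 200-sample windows, 75% overlap -> 87 windows.
print(segment(np.zeros(4500), 200, 0.75).shape)  # (87, 200)
# DS2: 9000 useful samples, 375-sample windows, 80% overlap -> 116 windows.
print(segment(np.zeros(9000), 375, 0.80).shape)  # (116, 375)
```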
No normalization was applied to the distributed datasets (neither DS1 nor DS2). Signals and features are provided exactly as obtained after filtering, segmentation, and feature extraction. Amplitude values correspond to raw voltages measured at the electrode outputs after amplification and filtering, allowing end users to apply their own normalization procedures according to the needs of their algorithms or applications.
All acquisition and processing steps were carried out using NI LabVIEW 2021 (for data acquisition), MATLAB 2023a (for signal preprocessing), and Python 3.10.18 (Anaconda 2.6.3, Jupyter Notebook 7.2.2) for feature extraction and dataset formatting.

4. Outcomes and Future Scenarios

The two datasets described in this work provide a standardized and reproducible resource for the development of sEMG-based control systems. Usability is enhanced by providing both raw and pre-processed versions of each dataset: the raw data allow researchers to apply their own segmentation methods, while the pre-processed windowed data provide a convenient, standardized baseline for benchmarking classification algorithms, following the practice of other databases such as the work presented in [23]. Their structure supports applications not only in rehabilitation, assistive technologies, and prosthetic control [24,25], but also in broader domains such as human–machine interaction, robotic manipulation, biomechanics research, and educational use in signal processing. Practical applicability for these purposes is supported by the inclusion of foundational gestures (e.g., fist, extension, and rest) common to assistive control schemes [4]. Furthermore, the sampling frequencies ensure high-fidelity data that can be downsampled to benchmark performance against various commercial hardware specifications. It is important to clarify that this resource is intended specifically for developing and benchmarking machine learning (ML) models designed to leverage targeted sEMG signals, and may not be directly applicable to models trained on non-specific, bracelet-type data [26]. Because only healthy volunteers with no neurological disorders participated, these datasets may be most beneficial for training algorithms that use healthy low-channel data, such as rehabilitation projects involving chronic stroke patients or individuals with cerebral palsy [27,28].
DS1, recorded at constant moderate force, provides consistent conditions suitable for benchmarking classification algorithms under controlled effort. DS2, in contrast, introduces three voluntary force levels, allowing the exploration of algorithms that simultaneously recognize gestures and estimate force intensity. This is relevant both for developing gesture classifiers that are robust to force variations and for enabling proportional control, a key feature of functional assistive technologies [29]. In contrast to existing datasets that provide hand gestures only at a constant force, these datasets enable the training and validation of machine learning models for both gesture recognition and proportional control, a capability directly relevant to real-time assistive devices for individuals with motor disabilities.

Limitations

Despite their utility, the datasets present several limitations. Most significantly for the target biomedical applications, such as rehabilitation and prosthetic control, participants were exclusively healthy young adults, which restricts direct generalization to clinical populations or older individuals. While the data are intended for foundational algorithm development and benchmarking, any models trained on them would require substantial further validation and fine-tuning on patient-specific data before clinical use [30]. Additionally, the variety of hand gestures is restricted to five, which may limit applications requiring recognition of complex or subtle movements. In DS2, the force levels were self-interpreted by participants without objective measurement, introducing potential variability between subjects. Regarding reproducibility, the custom acquisition equipment used here may differ from that available at other institutions, and the implemented signal acquisition software may not be directly compatible with other data acquisition tools. In addition, the unequal distribution of male and female participants introduces variation in the signal patterns. Finally, electrode placement was limited to three extrinsic forearm muscles. Although restricting acquisition to forearm muscles is a common approach in research using EMG bracelets [26], this setup excludes intrinsic hand muscles that also contribute to differentiating fine motor gestures. This omission is a limitation, as it can create ambiguity between gestures that rely on similar extrinsic muscle patterns but different intrinsic control [31]. These factors should be considered when reusing the datasets or developing derived applications.

Author Contributions

Conceptualization, J.R.-R. and M.A.; methodology, C.A.Z.-C., V.A.A.-M., N.M.R.-C. and M.A.; software, M.A.; validation, J.M.Á.-A. and R.A.G.-L.; formal analysis, M.A. and J.M.Á.-A.; investigation, C.A.Z.-C., V.A.A.-M. and N.M.R.-C.; resources, J.R.-R., R.A.G.-L. and M.A.; data curation, C.A.Z.-C., V.A.A.-M. and N.M.R.-C.; writing—original draft preparation, C.A.Z.-C., V.A.A.-M. and N.M.R.-C.; writing—review and editing, M.A., J.M.Á.-A., R.A.G.-L. and J.R.-R.; visualization, M.A.; supervision, M.A. and J.R.-R.; project administration, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics Committee of the Faculty of Engineering at the Autonomous University of Querétaro (protocol code CEAIFI-030-2024-TL, approved in 2024). All participants provided written informed consent prior to data collection. Data collection and dissemination adhered to the ethical principles of the Pan-American Health Organization and the Council of International Medical Science Organizations (2016). All datasets were fully anonymized and contain no personally identifiable information, ensuring compliance with open-access data sharing standards.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets are available at https://kaggle.com/datasets/d27a113cf8221f4344e5b6834dedc26fb6fbc16c1a9ad28d56cc5d5d77860c40 (Dataset DS1) (accessed on 19 November 2025) and https://kaggle.com/datasets/d4d7612f90e026339cb1eba7ba2bcb68295fd5cbb11944eadd489d8fe0399584 (Dataset DS2) (accessed on 19 November 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kadavath, M.R.K.; Nasor, M.; Imran, A. Enhanced Hand Gesture Recognition with Surface Electromyogram and Machine Learning. Sensors 2024, 24, 5231. [Google Scholar] [CrossRef]
  2. mDurance Solutions SL. ¿Qué es la Electromiografía de Superficie? 2025. Available online: https://mdurance.com/blog/que-es-la-electromiografia-de-superficie/ (accessed on 7 September 2025).
  3. Challa, K.; AlHmoud, I.W.; Jaiswal, C.; Turlapaty, A.C.; Gokaraju, B. EMG features dataset for arm activity recognition. Data Brief 2025, 60, 111519. [Google Scholar] [CrossRef]
  4. Toledo-Pérez, D.C.; Rodríguez-Reséndiz, J.; Gómez-Loenzo, R.A.; Jauregi-Correa, J.C. Support Vector Machine-Based EMG Signal Classification Techniques: A Review. Appl. Sci. 2019, 9, 4402. [Google Scholar] [CrossRef]
  5. Lee, K.H.; Min, J.Y.; Byun, S. Electromyogram-Based classification of hand and finger gestures using artificial neural networks. Sensors 2021, 22, 225. [Google Scholar] [CrossRef] [PubMed]
  6. Aarotale, P.N.; Rattani, A. Machine Learning-based sEMG Signal Classification for Hand Gesture Recognition. In Proceedings of the 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Lisbon, Portugal, 3–6 December 2024; pp. 6319–6326. [Google Scholar] [CrossRef]
  7. Ekinci, E.; Garip, Z.; Serbest, K. Electromyography based hand movement classification and feature extraction using machine learning algorithms. J. Polytech. 2023, 26, 1621–1633. [Google Scholar] [CrossRef]
  8. Le, H.; Panhuis, M.i.H.; Alici, G. Literature survey on machine learning techniques for enhancing accuracy of myoelectric hand gesture recognition in real-world prosthetic hand control. Biomim. Intell. Robot. 2025, 5, 100250. [Google Scholar] [CrossRef]
  9. Atzori, M.; Muller, H. The Ninapro Database: A Resource for sEMG Naturally Controlled Robotic Hand Prosthetics. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; IEEE: New York, NY, USA, 2015; pp. 7151–7154. [Google Scholar] [CrossRef]
  10. Castro, M.C.F.; Arjunan, S.P.; Kumar, D.K. Selection of suitable hand gestures for reliable myoelectric human computer interface. BioMed. Eng. OnLine 2015, 14, 30. [Google Scholar] [CrossRef] [PubMed]
  11. Young, P.R.; Hong, K.; Winslow, E.J.; Sagastume, G.K.; Battraw, M.A.; Whittle, R.S.; Schofield, J.S. The effects of limb position and grasped load on hand gesture classification using electromyography, force myography, and their combination. PLoS ONE 2025, 20, e0321319. [Google Scholar] [CrossRef]
  12. Aviles, M.; Sánchez-Reyes, L.M.; Fuentes-Aguilar, R.Q.; Toledo-Pérez, D.C.; Rodríguez-Reséndiz, J. A novel methodology for classifying EMG movements based on SVM and genetic algorithms. Micromachines 2022, 13, 2108. [Google Scholar] [CrossRef]
  13. Robertson, D.E.; Dowling, J.J. Design and responses of Butterworth and critically damped digital filters. J. Electromyogr. Kinesiol. 2003, 13, 569–573. [Google Scholar] [CrossRef]
  14. Wang, J.; Tang, L.; Bronlund, J.E. Surface EMG signal amplification and filtering. Int. J. Comput. Appl. 2013, 82. [Google Scholar] [CrossRef]
  15. Storr, W. Sallen and Key Filter Design for Second Order Filters. 2022. Available online: https://www.electronics-tutorials.ws/filter/sallen-key-filter.html (accessed on 5 November 2025).
  16. Hermens, H.J.; Merletti, R.; Freriks, B. (Eds.) European Activities on Surface ElectroMyoGraphy; Roessingh Research and Development b.v.: Torino, Italy, 1996. [Google Scholar]
  17. Okafor, L.; Varacallo, M. Anatomy, Shoulder and Upper Limb, Hand Flexor Digitorum Superficialis Muscle. In StatPearls [Internet]; StatPearls Publishing: Treasure Island, FL, USA, 2022; NBK539723. [Google Scholar]
  18. Ramage, J.; Varacallo, M. Anatomy, Shoulder and Upper Limb, Wrist Extensor Muscles. In StatPearls [Internet]; StatPearls Publishing: Treasure Island, FL, USA, 2023; NBK534805. [Google Scholar]
  19. Benson, D.; Miao, K.; Varacallo, M. Anatomy, Shoulder and Upper Limb, Hand Flexor Pollicis Longus Muscle. In StatPearls [Internet]; StatPearls Publishing: Treasure Island, FL, USA, 2023; NBK538490. [Google Scholar]
  20. Karrenbach, M.; Preechayasomboon, P.; Sauer, P.; Boe, D.; Rombokas, E. Deep learning and session-specific rapid recalibration for dynamic hand gesture recognition from EMG. Front. Bioeng. Biotechnol. 2022, 10, 1034672. [Google Scholar] [CrossRef]
  21. Ashraf, H.; Waris, A.; Gilani, S.O.; Kashif, A.S.; Jamil, M.; Jochumsen, M.; Niazi, I.K. Evaluation of windowing techniques for intramuscular EMG-based diagnostic, rehabilitative and assistive devices. J. Neural Eng. 2020, 18, 016017. [Google Scholar] [CrossRef]
  22. Castruita-López, J.F.; Aviles, M.; Toledo-Pérez, D.C.; Macías-Socarrás, I.; Rodríguez-Reséndiz, J. Electromyography Signals in Embedded Systems: A review of Processing and Classification techniques. Biomimetics 2025, 10, 166. [Google Scholar] [CrossRef]
  23. Kaczmarek, P.; Mańkowski, T.; Tomczyński, J. PUTEMG—A surface Electromyography Hand Gesture Recognition Dataset. Sensors 2019, 19, 3548. [Google Scholar] [CrossRef]
  24. Welihinda, D.; Gunarathne, L.; Herath, H.; Yasakethu, S.; Madusanka, N.; Lee, B. EEG and EMG-based human-machine interface for navigation of mobility-related assistive wheelchair (MRA-W). Heliyon 2024, 10, e27777. [Google Scholar] [CrossRef]
  25. Palumbo, A.; Ielpo, N.; Calabrese, B.; Garropoli, R.; Gramigna, V.; Ammendolia, A.; Marotta, N. An innovative device based on human-machine interface (hmi) for powered wheelchair control for neurodegenerative disease: A proof-of-concept. Sensors 2024, 24, 4774. [Google Scholar] [CrossRef] [PubMed]
  26. Pizzolato, S.; Tagliapietra, L.; Cognolato, M.; Reggiani, M.; Müller, H.; Atzori, M. Comparison of six electromyography acquisition setups on hand movement classification tasks. PLoS ONE 2017, 12, e0186132. [Google Scholar] [CrossRef]
  27. Winursito, A.; Arifin, F.; Muslikhin, M.; Artanto, H.; Caryn, F. Performance Analysis of EMG Signal Classification Methods for Hand Gesture Recognition in Stroke Rehabilitation. Elinvo (Electron. Inform. Vocat. Educ.) 2024, 8, 264–274. [Google Scholar] [CrossRef]
  28. Macintosh, A.; Vignais, N.; Desailly, E.; Biddiss, E.; Vigneron, V. A Classification and Calibration Procedure for Gesture Specific Home-Based Therapy Exercise in Young People With Cerebral Palsy. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 29, 144–155. [Google Scholar] [CrossRef] [PubMed]
  29. Wu, Y.; Liang, S.; Yan, T.; Ao, J.; Zhou, Z.; Li, X. Classification and simulation of process of linear change for grip force at different grip speeds by using supervised learning based on sEMG. Expert Syst. Appl. 2022, 206, 117785. [Google Scholar] [CrossRef]
  30. Côté Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Laviolette, F.; Gosselin, B. Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 760–771. [Google Scholar] [CrossRef] [PubMed]
  31. Ni, S.; Al-qaness, M.A.; Hawbani, A.; Al-Alimi, D.; Abd Elaziz, M.; Ewees, A.A. A survey on hand gesture recognition based on surface electromyography: Fundamentals, methods, applications, challenges and future trends. Appl. Soft Comput. 2024, 166, 112235. [Google Scholar] [CrossRef]
Figure 1. LabVIEW graphical interface.
Figure 2. Placement of electrodes: (a) flexor digitorum superficialis, (b) extensor digitorum, (c) flexor pollicis longus, and (d) electrode used.
Figure 3. Standardized position for DS1 and DS2.
Figure 4. Acquisition protocol for DS1.
Figure 5. Acquisition protocol for DS2.
Table 3. Summary of the two complementary sEMG datasets, including acquisition setup, gesture definitions, participant information, and data accessibility.

Type of data: Raw signals, band-pass filtered (20–400 Hz) signals, and pre-processed segments.

Data collection: Two datasets of sEMG signals were collected from two independent groups of twenty healthy subjects using a battery-powered acquisition system. Three bipolar sEMG sensors were placed on the dominant forearm over the flexor digitorum superficialis (channel 1), extensor digitorum (channel 2), and flexor pollicis longus (channel 3). The sampling rate was 1000 Hz for Dataset 1 (DS1) and 1500 Hz for Dataset 2 (DS2).
  • DS1 gestures: Fist (flexion of all fingers), thumb flexion, rest, extension of all fingers, and flexion of the middle and ring fingers.
  • DS2 gestures: Fist (flexion of all fingers), simultaneous flexion of the thumb and index finger, flexion of the middle and ring fingers, extension of all fingers, and rest.
In DS1, each acquisition lasted 8 s with 2 s of rest at the beginning and 1 s at the end. The raw .tdms files include these rest segments, but the processed dataset retains only the useful part of 4.5 s (4500 samples) per repetition. In DS2, each acquisition lasted 10 s with 2 s of rest at the beginning and at the end. The raw .tdms files include the rest periods, but the processed dataset retains only the useful part of 6 s (9000 samples) per repetition.

Dataset DS1: Recordings from twenty healthy subjects performing five hand gestures at a constant moderate force intensity. Each subject completed twenty repetitions per gesture (20 × 5 = 100 repetitions per subject). The dataset comprises 2000 total repetitions (400 per gesture).

Dataset DS2: Recordings from twenty healthy subjects performing the five gestures at three force intensities (low, medium, and high). Each subject completed ten repetitions per gesture at each force intensity (30 per gesture, 150 per subject). In total, 3000 repetitions were expected, but after quality control, 2863 valid repetitions were retained, with 137 discarded due to artifacts or acquisition errors.

Subjects and acquisition conditions: A total of 40 healthy volunteers (26 males and 14 females), aged between 18 and 40 years, participated. All acquisitions were carried out under the same conditions: subjects were seated comfortably with the dominant arm resting at elbow level, and signals were collected with identical sEMG equipment and sensor placement protocol across both datasets.

Data format: Raw sEMG signals are provided in .tdms format, and pre-processed datasets are available in .mat format.

Data accessibility: Data are permanently accessible on Kaggle (open access).
Direct URL to DS1: https://kaggle.com/datasets/d27a113cf8221f4344e5b6834dedc26fb6fbc16c1a9ad28d56cc5d5d77860c40 (accessed on 19 November 2025)
Direct URL to DS2: https://kaggle.com/datasets/d4d7612f90e026339cb1eba7ba2bcb68295fd5cbb11944eadd489d8fe0399584 (accessed on 19 November 2025)
Table 4. Monitored muscles and their function [17,18,19].

Muscle | Function
Flexor digitorum superficialis | Flexion of the MCP and PIP joints of fingers 2 to 5
Extensor digitorum | Extension of the proximal phalanges over the metacarpus
Flexor pollicis longus | Thumb flexion at the IP and MCP joints