
EMG Pattern Recognition in the Era of Big Data and Deep Learning

Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2018, 2(3), 21;
Submission received: 3 July 2018 / Revised: 20 July 2018 / Accepted: 20 July 2018 / Published: 1 August 2018
(This article belongs to the Special Issue Big Data and Cognitive Computing: Feature Papers 2018)


The increasing amount of data in electromyographic (EMG) signal research has greatly increased the importance of developing advanced data analysis and machine learning techniques that are better able to handle “big data”. Consequently, more advanced applications of EMG pattern recognition have been developed. This paper begins with a brief introduction to the main factors that have expanded EMG data resources into the era of big data, followed by the recent progress of existing shared EMG data sets. Next, we provide a review of recent research and development in EMG pattern recognition methods that can be applied to big data analytics. These modern EMG signal analysis methods can be divided into two main categories: (1) methods based on feature engineering, including a promising big data exploration tool called topological data analysis; and (2) methods based on feature learning, with a special emphasis on “deep learning”. Finally, directions for future research in EMG pattern recognition are outlined and discussed.

1. Introduction

Recognition of human movements using surface electromyographic (EMG) signals generated during muscular contractions, referred to as “EMG Pattern Recognition”, has been employed in a wide array of applications, including, but not limited to, powered upper-limb prostheses [1], electric power wheelchairs [2], human–computer interaction [3], and diagnosis in clinical applications [4]. Compared to other well-known bioelectrical signals (e.g., electrocardiogram, ECG; electrooculogram, EOG; and galvanic skin response, GSR), however, the analysis of surface EMG signals is more challenging given their stochastic nature [5]. For upper-limb myoelectric prosthesis control, as an example, many confounding factors have been shown to greatly influence the characteristics of the EMG signal and thus the performance of EMG pattern recognition systems. These challenges include, among others, the changing characteristics of the signal itself over time, electrode location shift, muscle fatigue, inter-subject variability, and variations in muscle contraction intensity, as well as changes in limb position and forearm orientation [1,6,7,8,9,10].
To capture and describe the complexity and variability of surface EMG signals for more advanced applications, a massive amount of information is therefore necessary.
Thanks to recent advancements in commercial EMG signal acquisition technologies, data storage and management, and file sharing systems, the field is now moving into the era of “big data”. Several factors have contributed to the recent expansion of EMG data resources such that big data approaches are beginning to be viable. First, EMG data sets collected as part of individual research studies are now being made available online instead of residing solely on hard drives within the laboratories of individual researchers (e.g., [10,11,12]). Secondly, as in other research communities, the availability of benchmark EMG databases has been critical to the growth of the field [13]. Thirdly, the development of high-density surface EMG systems has introduced the concept of a surface EMG image and thus dramatically increased the volume of data [14,15]. Lastly, the increasing availability of multi-modality sensor systems has generated larger amounts of data in which the EMG signal is considered one of the most important sources of information [16,17]. Here, we present the current state of existing shared EMG datasets, highlighting the opportunities and challenges in the development of truly big EMG data.
To translate the vast and complex information in EMG signals into useful control signals for prosthetic devices or a meaningful diagnostic tool for identifying neuromuscular diseases, advanced data analysis and machine learning techniques capable of analyzing big data are needed. Existing EMG pattern recognition approaches can be broadly divided into two categories: (1) methods based on feature engineering and (2) methods based on feature learning. “Feature Engineering” and feature extraction have been key parts of conventional machine learning algorithms. In EMG analysis, short time windows of the raw EMG signal are extracted and augmented by extracting time and frequency features aimed at improving information quality and density. Many studies have shown that the quality and quantity of the hand-crafted features have great influence on the performance of EMG pattern recognition [18,19,20,21].
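As a concrete illustration of this feature-engineering step, the following minimal sketch slides an analysis window over one channel of raw EMG and computes two classic time-domain features; the window length and increment are illustrative values, not parameters taken from any of the studies cited here.

```python
import numpy as np

def extract_td_features(emg, win_len=200, step=50):
    """Slide a window over one channel of raw EMG and compute two
    classic time-domain features per window: mean absolute value
    (MAV) and waveform length (WL)."""
    features = []
    for start in range(0, len(emg) - win_len + 1, step):
        w = emg[start:start + win_len]
        mav = np.mean(np.abs(w))         # average rectified amplitude
        wl = np.sum(np.abs(np.diff(w)))  # cumulative length of the waveform
        features.append((mav, wl))
    return np.array(features)

# Toy stand-in for one channel of raw EMG (1 s at 1000 Hz)
rng = np.random.default_rng(0)
emg = rng.standard_normal(1000)
feats = extract_td_features(emg)
print(feats.shape)  # one (MAV, WL) pair per analysis window
```

The resulting feature matrix, rather than the raw samples, is what a conventional classifier would receive as input.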
Here, we review methods that can be applied to big data analytics involving a method rooted in algebraic topology called “topological data analysis”. This method has recently been shown to facilitate the design of an effective sparse EMG feature set across multiple EMG databases and scales well with data set size [22].
Conversely, in “feature learning”, explicit transformation of the raw EMG signals is not required as features are automatically created by the machine learning algorithms as they learn. The use of “deep learning” therefore shifts the focus from manual (human-based) feature engineering to automated feature engineering or learning. Although neural networks have been used in EMG research for several decades, deep learning techniques have only recently been applied to EMG pattern recognition. This is, at least in part, due to the lack of sufficient EMG data availability to train these deep neural networks in the earlier years of the field. With the advent of shared, bigger EMG data sets and recent advances in techniques for addressing overfitting problems, most emerging deep learning architectures and methods have now been employed in EMG pattern recognition systems (e.g., [14,23,24]). In some cases, both feature engineering and learning are combined by inputting pre-processed data or pre-extracted features to a deep learning algorithm, with some benefits having been shown (e.g., [11,23,24]). Here, we provide a comprehensive review of the recent research and development in deep learning for EMG pattern recognition. Directions for future research are also outlined and discussed.

2. Big EMG Data

In addition to the fact that some research questions cannot be answered using single, small data sets, larger samples are generally preferable to account for the large inter- and intra-subject variability in surface EMG signals. Similarly, differences in instrumentation and data collection protocols can introduce biases in small sample sizes. Over the last decade, a long-standing interest in acquiring large-scale EMG data sets has increasingly been fulfilled. The four main factors that have contributed to expanding EMG data resources into the big data discussion are outlined and presented in this section.

2.1. Multiple Datasets

The first step in the successful open sharing of big data resources usually comes from a number of individual researchers and research groups who are motivated to share data collected as part of research studies. Although most EMG studies have collected data from small cohorts of participants (n = 5–40), relatively large EMG data sets of several hundreds to thousands of subjects could easily be gathered if their data were made available online along with their publications. However, researchers have not typically published their data, for a number of reasons. For one, the extra work required to prepare data before making them publicly available may not be worth the perceived benefits. Furthermore, the collection of most data sets requires significant investment in time and effort, and thus researchers may prefer to release them only once they have extracted the maximum perceived value and not at the time of the first publication. Keeping the data set private can preserve the right to re-analyze the data in the future, either to apply different analytical techniques or to investigate different research questions. In some cases, fear may also play a factor, as opening data sets facilitates subsequent analyses that might uncover problems with the data or invalidate previous results. Whatever the reason, many EMG data sets have remained hidden, residing solely on hard drives within the laboratories of individual researchers.
With the advent of “data papers”, which allow researchers to publish their data sets as citable scientific publications [25], more and more EMG data sets have been made available online. Because the performance of EMG pattern recognition can vary depending on subject, experimental protocol, acquisition setup, and differences in pre-processing, multiple datasets are needed to ensure the robustness and generalization of findings [22,26].
For instance, Kamavuako et al. [27] found that there was no consensus on the optimum value of the threshold parameter of two of the most commonly used EMG features: zero crossings (ZC) and slope sign changes (SSC), leading them to investigate the effect of threshold selection on classification performance and on the ability to generalize across multiple data sets. Their results showed that the optimum threshold is highly subject- and dataset-dependent, i.e., each subject had a unique optimum threshold value, and, even within the same subject, the optimum threshold could change over time. In practical use, it is desirable to build models that can generalize from one set of subjects to another, one day to another, and from one setting to another. Therefore, they recommend a global optimum threshold value yielding a good trade-off between classification performance and generalization based on the global minimum classification error rate across four different EMG data sets.
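To make the role of this threshold parameter concrete, the following sketch shows one common formulation of the two features; exact definitions vary across the studies cited above, so this is a simplified illustration rather than a reproduction of any particular implementation.

```python
import numpy as np

def zero_crossings(x, thr):
    """Count sign changes between consecutive samples whose amplitude
    difference also exceeds the threshold `thr`, which suppresses
    crossings caused by low-level noise."""
    return int(np.sum((x[:-1] * x[1:] < 0) &
                      (np.abs(x[:-1] - x[1:]) >= thr)))

def slope_sign_changes(x, thr):
    """Count reversals in slope direction whose magnitude on at least
    one side exceeds the threshold `thr`."""
    d1 = x[1:-1] - x[:-2]  # difference from the previous sample
    d2 = x[1:-1] - x[2:]   # difference from the next sample
    return int(np.sum((d1 * d2 > 0) &
                      ((np.abs(d1) >= thr) | (np.abs(d2) >= thr))))

x = np.array([0.5, -0.4, 0.3, -0.2, 0.1])
print(zero_crossings(x, thr=0.0))  # every sign change is counted
print(zero_crossings(x, thr=0.8))  # small crossings are suppressed
```

As the toy example shows, raising the threshold from 0.0 to 0.8 drops the ZC count from four to one, which is exactly why a threshold tuned on one dataset may generalize poorly to another.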
The performance of many different EMG pattern recognition methods has been evaluated in a host of studies over the last few decades. However, most previous studies have been limited in terms of the relatively small sample sizes used for classification (small datasets) from one highly specific experiment (constrained datasets), and most of them have studied either no or only one practical robustness issue. A comparison of EMG pattern recognition methods using multiple EMG datasets could thus help identify robust feature extraction and classification methods. Scheme and Englehart [21] re-evaluated the performance of the commonly used Hudgins’ time domain features (ZC, SSC, mean absolute value (MAV), and waveform length (WL) [28]) and several additional features (autoregressive coefficients, AR; cepstral coefficients, CC; Willison amplitude, WAMP; and sample entropy, SampEn) using six different EMG data sets containing over 60 subject sessions and 2500 separate contractions. Khushaba et al. [29] proposed a novel set of time domain features that can estimate the EMG signal power spectrum characteristics using five different EMG data sets. Phinyomark et al. [26,30] investigated the effect of sampling rate on EMG pattern recognition and then identified a novel set of features that are more accurate and robust for emerging low-sampling rate EMG systems, using four different EMG data sets containing 40 subject sessions with over 8000 separate contractions.
A summary of the existing shared EMG data sets for the classification of hand and finger movements is provided in Table 1. These fifteen datasets represent over 160 subject sessions with over 16,000 trials and more than 90,000 s of muscle contraction. Three of the datasets used sparse EMG channels (i.e., requiring precise positioning of the electrodes over the corresponding muscle) while the other twelve data sets employed wearable EMG armbands (i.e., multiple EMG sensors positioned radially around the circumference of a flexible band; see [31] for a review). The recent availability of consumer-grade wireless EMG armbands (such as the Myo armband by Thalmic Labs) will enable more researchers to collect EMG data and thus represents a real opportunity for big data sharing. These data sets also include many of the dynamic factors that influence the performance of EMG pattern recognition, including changes in limb position, changes of forearm orientation, varying contraction intensity, and between-day variability. An enormous variety of subjects, experimental protocols, acquisition setups, and pre-processing pipelines is clearly shown in Table 1, and consequently, this group of currently available datasets can be used for a comprehensive investigation of the generalization and robustness of EMG pattern recognition for myoelectric control. It is important to note that some subjects may have participated in more than one study (different subject sessions) for EMG datasets recorded by the same research group. Also, some datasets (e.g., Khushaba et al. 2 [32], Khushaba et al. 3 [33], and Chan et al. [34,35]) are only partially available online and require contacting the researchers who shared the data to access the full dataset.

2.2. Benchmark Datasets

As discussed, multiple datasets can be used to investigate the generalization and robustness of EMG pattern recognition methods, but only to a certain extent. A major limitation of the multiple-dataset investigation approach is the fact that EMG data from different data sets cannot be combined into one larger set due to experimental and equipment differences. Without large EMG datasets being collected using a single standardized protocol, it is difficult to investigate the generality of the findings across gender, age, characteristics of the amputation, etc. Although there are several recommendations for protocols, acquisition setups, and pre-processing pipelines, such as the European recommendations written by the Surface ElectroMyoGraphy for the Non-Invasive Assessment of Muscles (SENIAM) project, no solid benchmarking protocol and experimental setup (e.g., the set of movements, electrode locations, sampling rate, and filtering) has been adopted in previous studies. This is in stark contrast to other research communities that have found substantial benefit in the wide acceptance of protocols, leading to publicly available benchmark databases such as the 1000 Functional Connectomes Project and the Human Connectome Project databases for resting state functional magnetic resonance imaging (rfMRI) [36]. The usefulness and importance of benchmark databases have been clearly acknowledged in many research fields, and the lack of such a benchmark in the EMG community is a major obstacle towards open sharing of big EMG data.
In the earlier years, EMG studies were largely limited to large research centers that possessed the highly specialized and expensive instrumentation and manpower needed to acquire EMG data acceptable to the research community. This made it difficult for small laboratories, or those in developing countries, to contribute meaningfully to the field. Moreover, because of constraints on funding, time, subjects, etc., the volume of data was typically limited to the minimum required to verify a specific scientific hypothesis. In myoelectric control, this has often consisted of approximately ten able-bodied subjects and/or a few amputees.
The creation of a benchmark protocol and database would not only promote comparison between methods, attracting additional researchers from the signal processing and machine learning communities, but would also foster progress in big EMG data by encouraging the contribution of new datasets from other research groups using the same experimental protocols.
Indeed, the EMG research field has lagged behind other biomedical research fields in the development of big data sharing resources. The Non-Invasive Adaptive Prosthetics (NinaPro) database may currently be the biggest and most widely known publicly available benchmark database to date. The Ninapro project was launched in 2014 [13], and to date consists of seven data sets [37,38,39,40,41] containing surface EMG signals from the forearm and upper arm using 10–16 EMG channels, together with several additional modalities, recorded from 117 able-bodied subjects and 13 amputees performing a partial set of 61 pre-defined hand and finger movements (Table 2). In total, there are more than 48,000 trials and 326,000 s of muscle contraction. Additional modalities (depending on dataset) include inertial measurement unit (IMU) or accelerometry data acquired using Delsys Trigno Wireless electrodes or Myo armbands, kinematic hand data acquired using a 22-sensor CyberGlove II data glove, wrist orientation data acquired using a two-axis Kübler IS40 inclinometer, finger force data measured using a Finger-Force Linear Sensor (FFLS) device, and eye movement data acquired using a Tobii Pro Glasses 2 wearable eye tracker. All datasets are fully accessible upon successful registration on the Ninapro website. Data are stored anonymously and subject demographic information is limited to gender, age, height, weight, laterality, and several clinical characteristics of the amputee subjects. Supporting files for the experimental protocol and acquisition setup (e.g., stimulus videos and software) can be obtained on an individual basis by contacting the Ninapro team.
Unfortunately, although a large number of movements and electrode locations have been proposed by the Ninapro project, the maximum number of movements and electrode locations that can be combined across the seven current EMG datasets are seven and eight, respectively. Similarly, there is no consensus on sampling rate, filtering, resolution, gain, etc. due to the use of different EMG acquisition devices, i.e., an Otto Bock MyoBock System for Ninapro 1, a Delsys Trigno Wireless EMG System for Ninapro 2, 3, 6 and 7, a Cometa Wave Plus wireless EMG system for Ninapro 4, and Thalmic Myo armbands for Ninapro 5. Some manipulation and transformation of data are therefore necessary before combining EMG data across the Ninapro data sets.

2.3. High-Density Surface EMG

There are two common approaches to measuring EMG signals. One is to place electrodes precisely over specific muscles (known as sparse multi-channel surface EMG), and the other is to use array-like arrangements of electrodes that are placed over a muscle area. The latter is the more common approach in the myoelectric control literature, as shown in Table 1 and Table 2, but is often limited to a single row of electrodes (i.e., an EMG armband) [31]. To increase the spatial information of electrical muscle activity, high-density surface EMG (HD-sEMG or HD-EMG) has been proposed, which increases the density and coverage of the electrodes. Typically, HD-sEMG employs a large two-dimensional (2D) array of small, closely spaced electrodes. The total number of electrodes that has been proposed for HD-sEMG ranges from 32 [12] to over 350 [42], while the maximum number of electrodes for typical EMG armbands is 16 (Table 1 and Table 2). The existing shared HD-sEMG data sets, which use arrays of 32, 128, and 192 electrodes, are listed in Table 3. Due to the high sampling frequencies used when measuring surface EMG (typically 1000 Hz or above), large three-dimensional arrays, i.e., thousands of 2D images, can be obtained in just a few seconds of muscle contraction from a single subject. For the csl-hdemg dataset, as an example, 6500 trials of 3-s muscle contractions were recorded using a 192-electrode array sampled at 2048 Hz. As a result, there are over 39 million sEMG images in this dataset alone. Hence, the development of HD-sEMG has dramatically increased the volume of data.
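The arithmetic behind that 39-million-image figure is easy to verify from the numbers quoted above:

```python
# Back-of-the-envelope check of the csl-hdemg volume described above:
# 6500 trials x 3 s x 2048 samples/s, each sample forming one
# instantaneous sEMG image across the 192-electrode array.
trials, seconds, fs, electrodes = 6500, 3, 2048, 192
frames = trials * seconds * fs   # instantaneous sEMG images
values = frames * electrodes     # individual sample values stored
print(f"{frames:,} image frames, {values:,} sample values")
```

This works out to just under 40 million image frames, or several billion individual sample values, from a single dataset.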
The collected HD-sEMG data allows the analysis of EMG information in both the temporal and spatial domains, leading to new possibilities for analyzing EMG signals using image processing techniques. Two methods of analyzing these kinds of EMG signals include the HD-sEMG map (a topographical image) and the sEMG image (an instantaneous image). Following conventional EMG pattern recognition methods, the HD-sEMG map can be computed using the root mean square (RMS) [42] or other amplitude-based feature extraction methods (e.g., MAV, WL, etc.) [43], of individual channels distributed in 2D space. This map is thus also sometimes referred to as an intensity or heat map. Often, the active region of the HD-sEMG map associated with a certain muscle, the so-called activation map, is identified using an image segmentation method and used as an input for subsequent feature extraction methods. Features extracted from HD-sEMG maps can be based on intensity information (any signal magnitude and power feature [18,22]) and spatial information (e.g., the mean shift [42] or the coordinates of the centre of gravity and maximum values [44]). These maps and additional spatial-based features can be used to reduce the effect of confounding factors that influence the performance of EMG pattern recognition such as the changing characteristics of the signal itself over time and electrode location shift [45] as well as variations in muscle contraction intensity [44]. However, this remains a relatively new sub-field, and novel image segmentation and spatial feature extraction methods are still needed to improve the performance of robust EMG pattern recognition.
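A minimal sketch of how such an RMS intensity map might be computed is shown below; the 8 × 24 grid and row-major channel ordering are assumptions for illustration, not the layout of any specific dataset cited here.

```python
import numpy as np

def rms_map(window, rows, cols):
    """Collapse a (samples x channels) HD-sEMG window into a 2D
    intensity map: the per-channel root mean square (RMS) reshaped
    onto the electrode grid (row-major channel ordering assumed)."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return rms.reshape(rows, cols)

rng = np.random.default_rng(1)
window = rng.standard_normal((2048, 192))  # 1 s at 2048 Hz, 192 channels
hd_map = rms_map(window, rows=8, cols=24)
# The grid position with the largest RMS marks the most active region,
# a starting point for segmentation of the activation map
peak = np.unravel_index(np.argmax(hd_map), hd_map.shape)
print(hd_map.shape, peak)
```

Spatial features such as the coordinates of the centre of gravity would then be computed from this 2D map rather than from the individual channels.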
Instead of forming an image based on the signal magnitude taken from some time window of raw sEMG signals, as is typically done, the instantaneous sEMG image can also be formed directly from the raw sEMG signals [14]. This sEMG image is equivalent to the HD-sEMG map with a window length of one sample. The number of pixels (resolution) in these sEMG images is then defined by the total number of electrodes (e.g., an electrode array with eight rows and 16 columns forms an image with 8 × 16 pixels), while the number of instantaneous sEMG images captured per second is dictated by the sampling rate used (e.g., a sampling frequency of 1000 Hz with 3 s of muscle contraction provides 3000 sEMG images). Without applying feature extraction methods, instantaneous sEMG images have been treated as an image classification problem and thus classified using deep learning approaches. A simple majority vote over several tens to several hundreds of frames can then be used to further improve the recognition performance [14]. More details about deep learning and sEMG image analysis are discussed in Section 3.2.
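These two steps, forming an instantaneous image from a single time sample and fusing per-frame decisions by majority vote, can be sketched as follows; the array shape matches the 8 × 16 example above, while the per-frame predictions are hypothetical classifier output.

```python
import numpy as np

rng = np.random.default_rng(3)

# One time sample from a hypothetical 8 x 16 electrode array becomes
# a single instantaneous sEMG "image":
sample = rng.standard_normal(128)
image = sample.reshape(8, 16)

def majority_vote(frame_predictions):
    """Fuse per-frame class decisions into one label per contraction
    by keeping the most frequent prediction."""
    labels, counts = np.unique(frame_predictions, return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical per-frame classifier decisions over ten frames:
preds = np.array([2, 2, 1, 2, 3, 2, 2, 1, 2, 2])
print(image.shape, majority_vote(preds))
```

Even with several misclassified frames, the vote recovers the dominant class, which is why post-hoc voting over many frames improves recognition performance.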
It is important to note that increasing the number of electrodes is not strictly necessary to increase the recognition performance. In fact, several studies have shown that there is little need to use all EMG channels (over 100 electrodes), and instead, a properly positioned smaller set of electrodes (e.g., 9 [44] and 20–80 [45]) can provide comparable results. There is, however, no consensus on the global optimum number of electrodes yielding maximum recognition performance. Moreover, the optimal EMG electrode sub-set is highly subject-dependent (even within the same experimental protocol [41,46]), and further research is needed in this area. The use of HD-EMG has also thus far been limited to controlled in-laboratory settings, limiting its practical applications.

2.4. Multiple Modalities

Because EMG captures the activity of muscles as part of the musculoskeletal system, information about the same contractions or motions can also be measured using different types of measuring techniques, instruments, and acquisition setups. The analysis of solely surface EMG signals could therefore be considered as the analysis of a single modality. Due to the increasing availability of multi-modality sensing systems, multi-modal analysis approaches are becoming a viable option. Multiple modalities can be used to capture complementary information which is not visible using a single modality, or to provide context for others. Even when two or more modalities capture similar information, their combination can still improve the robustness of pattern recognition systems when one of the modalities is missing or noisy.
Thus far, myoelectric control of powered prostheses is the most important commercial application of EMG pattern recognition. In this context, accelerometers have been the main supplementary modality and are the most prevalent in shared surface EMG datasets, such as the Khushaba et al. 5, Ninapro 2, 3, 5, 7, and mmGest datasets (see Table 1, Table 2 and Table 3). Accelerometry has been shown to provide additional information to EMG, especially in reducing the effects of limb position [47,48].
Outside of prosthesis control, other applications of EMG pattern recognition for which multi-modality data sets exist include, for example, sleep studies, such as the Cyclic Alternating Pattern (CAP) Sleep Database [49] and the Sleep Heart Health Study (SHHS) Polysomnography Database [50]; biomechanics, such as the cutting movement dataset [51] and the horse gait dataset [52]; and brain computer interfaces, such as the Affective Pacman dataset [53] and the emergency braking assistance dataset [54]. Recently, emotion recognition using multiple physiological modalities has gained attention as another important application that has benefited from the incorporation of surface EMG.

Emotion Recognition

Emotion recognition is one of the larger growing disciplines of multi-modal research, along with audio-visual speech recognition and multimedia content indexing and retrieval. The objective assessment of human emotion can be performed using the analysis of subjects’ emotional expressions and/or physiological signals. Until recently, most studies on emotion recognition and affective computing have focused on the analysis of facial expressions, speech, and multimedia content to identify the emotional state of the subjects. With the growth of wearable technology, however, physiological signals originating from the central and peripheral nervous systems have now gained attention as alternative sources of emotional information.
One of the earliest examples of multi-modal emotion recognition based on physiological signals was the study by Healey and Picard [55]. They recorded EMG from the trapezius muscle (tEMG), several physiological signals involving electrocardiogram (ECG), galvanic skin resistance (GSR), and respiration (Resp), and composite video records of the driver during real-world driving tasks performed by 24 subjects. These signals were collected over a 50-min duration for each subject and used to determine the driver’s level of stress. The data from 17 of the 24 subjects are publicly available via PhysioNet [56]. An alternate common experimental approach is to use multimedia content (e.g., music video clips and/or movie clips) as the stimuli to elicit different emotions in subjects. Table 4 summarizes four such publicly available data sets. While the DEAP (a Database for Emotion Analysis using Physiological signals) [57] and HR-EEG4EMO [17] datasets contain brain signals acquired using electroencephalogram (EEG) sensors, the DECAF (a multimodal dataset for DECoding user physiological responses to AFfective multimedia content) [58] dataset measures brain signals using magnetoencephalogram (MEG) sensors. These datasets, however, are not limited to brain signals; in fact, the BioVid Emo DB dataset [59] includes no brain signals at all. They also include various combinations of the following peripheral nervous system signals: surface EMG from the zygomaticus major muscle (zEMG), corrugator muscle (cEMG), tEMG, blood volume pressure (BVP), ECG, Resp, skin temperature (Temp), peripheral oxygen saturation (SpO2), pulse rate (PR), and electrooculogram (EOG). Facial videos were also recorded for all datasets. Another interesting multi-modal database is the BioVid Heat Pain database [60]. The tEMG, zEMG, cEMG, ECG, GSR, and EEG signals were collected along with facial videos from 86 subjects during exposure to painful heat stimuli.
To gain access to these datasets (other than Healey and Picard’s dataset), the EULA (End User License Agreement) must be printed, signed, scanned, and returned via email to the authors of each dataset. Upon approval, they will then provide a username and password that can be used to download the data.
Compared to the previously discussed surface EMG data sets, the volume of these multimodal data sets is huge. For instance, the raw data from the 60-h MEG and peripheral physiological recordings in the DECAF dataset alone make up more than 300 GB. Unfortunately, either due to instrumentation limitations or to limit the volume of data, some of these datasets sampled EMG signals at lower frequencies, such as 15.5 Hz for the Healey and Picard dataset and 512 Hz for the DEAP and BioVid Emo DB datasets (see Table 4). These lie well below the typical 1000-Hz sampling frequency for EMG signals, below which the performance of EMG pattern recognition has been shown to suffer from the loss of high-frequency information [26,30].

2.5. Discussion

Although the EMG data sets outlined above are not as large as many other forms of big data, these shared datasets are large enough that a single computer cannot process them within a reasonable time (big volume), and they exhibit several other big data qualities [61]. It is important to note that size is only one characteristic of big data, with others being equally important in its definition [62]. Specifically, big variety refers to the diversity of information within a single dataset or the diversity of multiple datasets. This is a critical aspect of both big data and EMG research, since sub-populations and different experimental conditions routinely favor different features and algorithms that are not shared by others. Therefore, no single EMG data set, big or not, should be considered to be comprehensive, and cross-validation across multiple datasets is recommended for the development of robust EMG pattern recognition systems [22,26]. Although larger EMG data sets would be preferable, the current publicly available EMG datasets (Table 1, Table 2 and Table 3) are sufficient to shed some light on the generalizability and robustness of EMG pattern recognition (and, in particular, myoelectric control). Intuitively, big variety also applies when surface EMG is analyzed together with other modalities such as EEG, MEG, and facial video (Table 4).
Big veracity refers to noise, error, incompleteness, or inconsistencies of big data. This can be interpreted in many ways in the context of EMG, and, in particular, myoelectric control, as noisy, incomplete, and inconsistent EMG data, often occurring in human experimentation. From an application standpoint, the attribution of models built from normally-limbed subjects to amputees or spinal cord injury patients, who may have very different or reduced musculature or muscle tone and higher skin impedances, also introduces veracity challenges. As noted in the data sets of Table 2, amputee subjects may not complete experiments due to fatigue or pain, and the number and placement of electrodes is often reduced or changed due to insufficient space. Surface EMG signals are also often corrupted by noise and interference while traveling through different tissues and equipment, requiring dedicated hardware or compensatory pre-processing steps [63]. The development of EMG feature extraction and classification methods that are robust to noise is also important [64,65], as is the reduction of data (or dimensionality) when dealing with large-scale data sets. Determining relevant and meaningful features from a given larger set of features which may contain irrelevant, redundant, or noisy information is commonly accomplished using either feature selection [66,67,68,69] or feature projection methods [70,71,72,73]. When properly executed, these methods not only reduce the impact of noise and irrelevant information, but also the amount of computational time required for classification.
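As an illustration of the feature-projection option mentioned above, the following minimal sketch performs principal component analysis via the singular value decomposition; the matrix dimensions are arbitrary stand-ins for a windowed EMG feature set.

```python
import numpy as np

def pca_project(X, n_components):
    """Project a (samples x features) matrix onto its top principal
    components, a standard feature-projection step used to reduce
    dimensionality before classification."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data yields components ordered by explained variance
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 40))  # e.g., 40 features per analysis window
Z = pca_project(X, n_components=5)
print(Z.shape)  # reduced to 5 projected features per sample
```

Unlike feature selection, which keeps a subset of the original features, such a projection mixes all inputs into a smaller number of new dimensions, trading interpretability for compactness.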
Big velocity refers to the rate at which data are generated and the speed at which they should be analyzed. The speed at which decisions are made is integral to EMG applications, either as support for clinical decisions based on EMG, or in real-time human–machine interfaces, such as with myoelectric control. It is important to note that although real-time “user in the loop” experiments for myoelectric control are important for providing a good representation of the usability of a system, these types of studies are limited in their contributions to the growth of big EMG data. Necessarily, they allow only for the direct comparison of selected methods within a single experimental session, do not allow for later offline use (the data are collected during feedback, and not feed forward, control), and are difficult to reproduce given the number of uncontrolled parameters. On the other hand, while benchmark datasets may not incorporate feedback control, they allow other researchers to easily replicate results, perform data analyses and compare different methods. Moreover, many currently shared EMG data sets now include more realistic and dynamic movements which better approximate real-life conditions, e.g., varying limb position, contraction intensity, etc. Nevertheless, real-time testing remains paramount in the assessment of the true dynamic performance of EMG pattern recognition. Moreover, several key metrics for measuring the efficacy of control can only be measured by performing real-time control experiments, such as motion selection time and motion completion time (the time required to select and complete the desired motion) [74] and the Fitts’ Law test-based metrics [75].
The advantages of big data sharing (or big value) for EMG pattern recognition have been discussed throughout this section. Nevertheless, given the limitations of current benchmark databases, the development of a new, standardized benchmark database for big EMG data would be highly beneficial. Such a benchmark could help to improve the reliability and reproducibility of research, improve research practices, maximize the contribution of research subjects, help to back up valuable data, reduce the cost of research within the EMG research community, and increase accessibility to the field for new researchers.

3. Techniques for Big EMG Data

Many methods for processing and analyzing EMG data have been proposed and tested; however, most have been designed for, and restricted to, smaller datasets. Consequently, many of these traditional methods struggle to handle large-scale data effectively and efficiently. Considering that shared EMG data sets have only recently been released and that only a handful of recent methods are able to handle big EMG data, research based on big EMG data remains relatively new. Novel methods capable of analyzing such data could be developed either by modifying traditional methods to run in parallel computing environments or by proposing new methods that natively leverage parallel computing. These methods will become very important in turning any collected big EMG dataset into a meaningful resource.

3.1. Feature Engineering

EMG pattern recognition systems typically consist of several inter-connected components: data pre-processing, feature extraction, dimensionality reduction, and classification [1,2]. The stochastic and non-stationary characteristics of the EMG signal make its instantaneous value unsuitable for conventional machine learning algorithms [86]. Feature extraction, which transforms short time windows of the raw EMG signal to generate additional information and improve information density, is thus required before a classification output can be computed. During the past several decades, numerous EMG feature extraction methods based on time domain, frequency domain, and time–frequency domain information have been proposed and explored [7,8,18,19,20,22,26,28,29,30]. Notable EMG feature extraction methods include the set of zero crossings (ZC), slope sign changes (SSC), mean absolute value (MAV), and waveform length (WL) (the most commonly used features [28]); autoregressive (AR) and cepstral coefficients (CC) (features robust to EMG electrode location shift, variation in muscle contraction effort, and muscle fatigue [8]); Willison amplitude (WAMP) (a feature robust against noise [64,65]); sample entropy (SampEn) (a feature robust against between-day variability [7]); and L-scale (an optimal feature for wearable EMG devices [26]), to name a few. For extended coverage of window-based EMG feature extraction methods, the reader is encouraged to consult the aforementioned studies [7,8,18,19,20,22,26,28,29,30].
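To make these window-based features concrete, the following NumPy sketch computes MAV, WL, ZC, and SSC over sliding analysis windows. The window length, step, and noise threshold below are illustrative choices for the example, not values prescribed by the cited studies.

```python
import numpy as np

def td_features(x, threshold=0.01):
    """Four classic time-domain EMG features from one analysis window."""
    mav = np.mean(np.abs(x))                      # mean absolute value
    wl = np.sum(np.abs(np.diff(x)))               # waveform length
    # zero crossings: sign change with amplitude change above the noise threshold
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) >= threshold))
    # slope sign changes: local extrema exceeding the threshold
    d1, d2 = x[1:-1] - x[:-2], x[1:-1] - x[2:]
    ssc = np.sum((d1 * d2 > 0) &
                 ((np.abs(d1) >= threshold) | (np.abs(d2) >= threshold)))
    return np.array([mav, wl, float(zc), float(ssc)])

def windowed_features(emg, win=200, step=100):
    """Sliding windows (here 200 samples, 50% overlap) over one EMG channel."""
    n = (len(emg) - win) // step + 1
    return np.stack([td_features(emg[i * step:i * step + win]) for i in range(n)])
```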
To find the best combination of all available features, one would have to evaluate all possible combinations, which is impractical and, for large data sets, infeasible. Moreover, the best combination for one application or scenario is not necessarily the best for others. Instead of evaluating the performance of every possible combination, dimensionality reduction (feature selection and feature projection) approaches have been employed to eliminate irrelevant, redundant, or highly correlated features. More often than not, however, classical dimensionality reduction techniques cannot be applied to big data, and it is therefore necessary to re-design the way these traditional methods are computed.
One possible approach is to create methods that are capable of analyzing big data by modifying traditional methods to work in parallel computing environments. For feature selection, some potential and well-known population-based metaheuristic methods, such as genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO), have been found to be effective in selecting an optimal EMG feature set (e.g., [67,68,69]). These feature selection methods have been developed to work in parallel computing as well as on graphics processing units (GPU) [87,88]. Similarly, novel approaches have been proposed to run standard feature projection (e.g., principal component analysis (PCA)) in parallel or on GPUs for big data dimensionality reduction [89,90]. The size of the data in most current studies, however, can be effectively processed using standard methods in a single high performance computer, and thus, very few studies have concentrated on using either parallel versions or GPU-based implementations [66].
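As a point of reference for what these parallel and GPU implementations accelerate, standard PCA feature projection reduces to a singular value decomposition of the centred feature matrix. A single-node NumPy sketch (illustrative only; the data and dimensions are arbitrary):

```python
import numpy as np

def pca_project(X, k):
    """Project an (n_samples, n_features) feature matrix onto its top-k
    principal components; parallel/GPU variants accelerate this same
    computation for matrices too large for a single node."""
    Xc = X - X.mean(axis=0)                       # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                           # top-k variance directions
    return Xc @ components.T, components

# e.g., reduce 40 hypothetical EMG features to a 5-D projection
rng = np.random.RandomState(0)
Z, components = pca_project(rng.randn(500, 40), k=5)
```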
Another approach is to develop new methods that work natively in a parallel manner. A method called “topological simplification”, a topological data analysis (TDA) method, has recently been shown to produce an effective sparse EMG feature set across multiple EMG databases and to scale well with dataset size [22]. Specifically, topological simplification, as exemplified by the Mapper algorithm [91], is an unsupervised learning method that extracts a topologically simplified skeleton of complex and unstructured data by performing a series of local clusterings in overlapping regions of the data space and then linking together clusters that share common data points. Thanks to the local nature of the clustering, Mapper naturally separates the complete problem into many smaller problems which are immediately amenable to parallelization and are merged only at the final step. Moreover, the local clusterings depend only on the distances between the points in the overlapping regions; hence, a high-dimensional feature matrix is effectively projected down to a small distance matrix. These properties make Mapper a very good tool for the analysis of big data, as the approach maps naturally onto big data analysis frameworks such as Google’s MapReduce paradigm.
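A toy sketch may help make the three Mapper steps (overlapping cover, local clustering, linking of shared points) concrete. The 1-D binning, the crude greedy clusterer, and all parameter values below are illustrative stand-ins, not the implementation of [91]; note that each bin is clustered independently, which is what makes the method parallelizable.

```python
import numpy as np
from collections import defaultdict

def mapper_skeleton(X, filt, n_bins=8, overlap=0.3, link_dist=0.5):
    """Toy Mapper: cover the range of a 1-D filter function with overlapping
    intervals, cluster each pre-image locally, then link clusters that share
    data points. A greedy single-linkage cut at `link_dist` stands in for a
    real clusterer."""
    lo, hi = filt.min(), filt.max()
    width = (hi - lo) / n_bins
    nodes, edges = [], set()
    membership = defaultdict(list)                # point index -> node ids
    for b in range(n_bins):
        a = lo + b * width - overlap * width
        z = lo + (b + 1) * width + overlap * width
        idx = np.where((filt >= a) & (filt <= z))[0]
        clusters = []                             # local, per-bin clustering
        for i in idx:
            placed = False
            for c in clusters:
                if min(np.linalg.norm(X[i] - X[j]) for j in c) <= link_dist:
                    c.append(i); placed = True; break
            if not placed:
                clusters.append([i])
        for c in clusters:
            nid = len(nodes); nodes.append(c)
            for i in c:
                membership[i].append(nid)
    for ids in membership.values():               # link nodes sharing a point
        for u in ids:
            for v in ids:
                if u < v: edges.add((u, v))
    return nodes, edges
```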
The output of this topological simplification approach has often been used to extract non-trivial qualitative information from big data that is hard to discern when studying the dataset globally [92]. For EMG pattern recognition, this approach has been successfully used to create charts of EMG feature spaces that are robust and generalize well across three different EMG data sets containing 58 individual subjects and 27,360 separate contractions [22]. These charts highlight four functional groups of state-of-the-art EMG features that describe meaningful non-redundant information, allowing for a principled and interpretable choice of EMG features for further classification. To use this output for feature engineering and selection, measures such as class separability, robustness, and complexity can be evaluated within the fundamental and most interesting feature groups to choose the best representative features. Experimental results have shown that the Mapper-selected feature set achieves the same (or higher) level of classification accuracy using a support vector machine (SVM) classifier as the set of features selected using an automatic brute-force feature selection method based on sequential forward selection (SFS) [22]. Additionally, based on a ranking of 81 features across 20 subjects, the computational cost of the Mapper method is approximately 21,600 times less than that of the SFS method. Furthermore, these topological feature maps could be used to inform the design of novel EMG features that fall into sparse feature groups or form entirely new groups of their own. For extended coverage of TDA and the Mapper algorithm for biomedical big data, the reader is encouraged to consult the relevant book chapter [93].
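For contrast with the Mapper-based selection, a minimal sketch of the SFS baseline it was compared against: greedily add the single feature that most improves held-out accuracy. A nearest-centroid rule stands in here for the SVM of [22], and the data are hypothetical; the cost grows with the number of classifier evaluations, which is the expense the Mapper approach avoids.

```python
import numpy as np

def nearest_centroid_acc(X, y, feats, Xte, yte):
    """Accuracy of a nearest-class-centroid rule on a feature subset."""
    classes = np.unique(y)
    cents = np.array([X[y == c][:, feats].mean(axis=0) for c in classes])
    d = np.linalg.norm(Xte[:, feats][:, None, :] - cents[None], axis=2)
    return np.mean(classes[d.argmin(axis=1)] == yte)

def sfs(X, y, Xte, yte, k):
    """Sequential forward selection of k features, one greedy pass at a time."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = [(nearest_centroid_acc(X, y, selected + [f], Xte, yte), f)
                  for f in remaining]
        _, best_f = max(scores)
        selected.append(best_f); remaining.remove(best_f)
    return selected
```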
After finding an optimal feature set, conventional machine learning approaches can be applied. In the problem of EMG pattern recognition, commonly used classification algorithms include SVM [26,94], linear discriminant analysis (LDA) [7,8], k-nearest neighbors (KNN) [20,95], multi-layer perceptron neural network (MLP) [28,66], and random forests (RF) [7,96], to name a few.

3.2. Feature Learning

Although feature engineering has been the dominant focus for EMG pattern recognition so far, feature learning, as exemplified by deep learning, has recently started to demonstrate better recognition performance than carefully hand-crafted features. Indeed, in the past few years, deep learning has made great progress in feature learning for big EMG data. In contrast to feature engineering and conventional machine learning approaches, deep learning can take advantage of many samples to extract high-level features by learning representations from low-level inputs. Deep learning algorithms, however, require large training datasets to train large deep networks (a few hidden layers, each with a large number of neurons) as well as an associated large number of parameters (millions of free parameters). To train true deep neural networks, it is therefore necessary to re-consider the way traditional large-scale neural networks are computed using parallel deep learning models, GPU-based implementation, and optimized deep learning models.
One well-known parallel deep learning approach is the deep stacking network (DSN) [97], which stacks simple processing modules. A novel parallel deep learning model called the tensor deep stacking network (T-DSN) [98] has been proposed to further improve the training efficiency of DSNs using clusters of central processing units (CPUs). Combinations of model- and data-parallel schemes have also been implemented in a software framework called DistBelief [99] to deal with very large models (more than a billion parameters). GPU-based frameworks are another important route to parallel deep learning [100,101]. When high performance computing resources (multiple CPU cores or GPUs) are not available, however, additional methods of improving training efficiency are necessary. Model compression techniques, for example, have been successfully applied to pattern recognition applications which commonly require implementation on embedded systems [102]. For real-time control, incremental learning methods can be employed to update parameters as new samples arrive while still preserving the network structure. Extended coverage of general deep learning techniques for big data can be found in several reviews [100,101,103].
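The incremental-learning idea mentioned above for real-time control can be sketched with a toy online learner: each new labelled sample nudges the parameters in place, with no retraining from scratch and no change to the model structure. A single logistic unit stands in here for a full deep network; all names and values are illustrative.

```python
import numpy as np

class OnlineLogistic:
    """Minimal online (incremental) logistic-regression updater: one SGD
    step of the log-loss per incoming labelled EMG feature vector."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def partial_fit(self, x, y):                  # y in {0, 1}
        err = self.predict_proba(x) - y           # gradient of the log-loss
        self.w -= self.lr * err * x
        self.b -= self.lr * err
```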
In general, though, deep learning models can be roughly grouped into three main categories: unsupervised pre-trained networks, convolutional neural networks, and recurrent neural networks. Although their application to surface EMG is relatively new, these three categories of models have already been used to analyze the EMG signal. Table 5 details each of the previous EMG research works utilizing deep learning methods.

3.2.1. Unsupervised Pre-Trained Networks (UPNs)

UPNs can be further divided into stacked auto-encoders and deep belief networks. Auto-encoder neural networks are unsupervised models trained to copy their inputs to their outputs, using a hidden layer as a code representing the input. A deep auto-encoder (a.k.a. stacked auto-encoder, SAE) is constructed by stacking several auto-encoders to learn hierarchical features of the given input. In contrast, a deep belief network (DBN) is composed of a stack of restricted Boltzmann machines (RBMs), generative stochastic neural network models that learn a joint probability distribution over unlabeled training data. Both techniques employ two stages, pre-training and fine-tuning, to train the models, which can help to avoid local optima and alleviate overfitting [104].
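A single auto-encoder layer of the kind stacked in an SAE can be sketched in a few lines of NumPy: an encoder compresses the input into a code, and a decoder reconstructs the input from it. The architecture, learning rate, and plain batch gradient descent below are illustrative choices, not those of any cited study; stacking would train a second such layer on this layer's codes.

```python
import numpy as np

def train_autoencoder(X, n_hidden=4, lr=0.05, epochs=500, seed=0):
    """One-layer auto-encoder (tanh encoder, linear decoder) trained to
    reconstruct its input by minimizing mean-squared error."""
    rng = np.random.RandomState(seed)
    n = X.shape[1]
    W1 = rng.randn(n, n_hidden) * 0.1; b1 = np.zeros(n_hidden)
    W2 = rng.randn(n_hidden, n) * 0.1; b2 = np.zeros(n)
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                  # encoder: code layer
        Xhat = H @ W2 + b2                        # linear decoder
        err = Xhat - X
        losses.append(np.mean(err ** 2))
        # backpropagation of the reconstruction error
        gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1 - H ** 2)
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return (W1, b1), losses
```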
For EMG pattern recognition, DBN has been used to replace conventional machine learning approaches to discriminate a five-wrist-motion problem using hand-crafted time domain features [24]. The results showed that DBN yields a better classification accuracy than LDA, SVM, and MLP, but that the DBN requires lengthy iterations to attain good performance in recognizing EMG patterns without overfitting. Subsequently, the same group of researchers also used split-and-merge algorithms to reduce the overfitting problem and to improve the accuracy, and called the new approach a split-and-merge deep belief network (SM-DBN) [105]. Wand and colleagues [106,107] also compared deep neural networks to commonly used machine learning approaches for EMG-based speech recognition, i.e., Gaussian mixture model (GMM), yielding accuracy improvements in almost all classification cases. The DBN also provides good performance in recognizing human emotional states (valence, arousal, and dominance) even when using the instantaneous value of surface EMG when paired with several other physiological signals from the DEAP dataset [108].
UPNs can also be used instead of traditional unsupervised feature projection methods, such as PCA and independent component analysis (ICA). As a data compression approach, for example, SAE has been used to compress EMG and EEG data, and the results show that it significantly reduces signal distortion for high compression levels compared to traditional EMG data compression techniques, such as discrete wavelet transform (DWT), compressive sensing (CS), and ICA [109]. ICA, however, still performed better than SAE for low compression levels. As a regression approach, DBN has been shown to outperform PCA in the estimation of human lower limb flexion and extension joint angles during walking [110].

3.2.2. Convolutional Neural Network (CNN)

The CNN (or ConvNet) may be the most widely used deep learning model for feature learning and is by far the most popular deep learning method for EMG pattern recognition (Table 5). A CNN is quite similar to an ordinary neural network but makes the explicit assumption that the inputs are image-like, which constrains the model in a tangible way (i.e., neurons are arranged in three dimensions). The hidden layers of a CNN typically consist of convolutional layers, pooling (sub-sampling) layers, and fully connected layers, where the first two types are used for feature learning on large-scale images (i.e., the convolution operation acts as feature extraction and the pooling operation acts as dimensionality reduction).
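These two feature-learning operations can be sketched directly: a "valid" convolution (strictly, cross-correlation, as in most deep learning libraries) acts as feature extraction, and non-overlapping max pooling acts as dimensionality reduction. The kernel and sizes below are illustrative only.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D cross-correlation of a single-channel image, the core
    feature-extraction step of a convolutional layer."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2d(x, size=2):
    """Non-overlapping max pooling: the sub-sampling step."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```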
CNN has been successful in EMG pattern recognition, with better classification accuracies having been found using CNN than with commonly used classification methods including LDA, SVM, KNN, MLP, and RF (Table 5). Specifically, Geng et al. [14] evaluated the performance of CNN in recognizing hand and finger motions based on sEMG from three public databases containing data recorded from either a single row of electrodes or a 2D high-density electrode array. Without using windowed features, the classification accuracy of an eight-motion within-subject problem reached 89.3% on a single frame (1 ms) of an sEMG image and 99.0% and 99.5% using simple majority voting over 40 and 150 frames (40 and 150 ms), respectively. Subsequently, Du et al. [15] employed a similar approach with adaptation to achieve better performance in inter-session and inter-subject scenarios. It should be noted that although CNNs can be quite responsive when used with instantaneous sEMG “images” obtained from HD-sEMG, they still incur a high computational load to handle both the high-density inputs and the large-scale deep neural networks.
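The majority-voting step used to turn frame-wise decisions into a stable output is simple to sketch; the labels below are hypothetical, and the 40-frame window matches the first voting scheme described above.

```python
from collections import Counter

def majority_vote(frame_labels):
    """Most frequent label among frame-wise decisions; ties resolve to the
    label that reached the winning count first."""
    return Counter(frame_labels).most_common(1)[0][0]

def vote_stream(frame_labels, window=40):
    """One voted decision per consecutive block of `window` frames, e.g.
    one output per 40 ms for 1 ms frames."""
    return [majority_vote(frame_labels[i:i + window])
            for i in range(0, len(frame_labels) - window + 1, window)]
```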
Although deep neural networks can be used directly with raw data, both data pre-processing and feature engineering can further improve the performance of deep learning. For instance, use of the right color space is important for image recognition using deep learning. One of the most widely used pre-extracted features for deep learning is the spectrogram. Côté-Allard et al. [111,112], for example, used CNNs with spectrograms as the input. Their results showed that CNN is not only accurate enough to recognize complex motions, but is also robust to many confounding factors, such as short term muscle fatigue, small displacement of electrodes, inter-subject variability, and long term use, without the need for recalibration. They also proposed a transfer learning algorithm to reduce the computational load and improve performance of the CNN, and used continuous wavelet transform (CWT) as pre-extracted features [11]. Zhai et al. [113] proposed a self-recalibrating classifier that can be automatically updated to maintain stable performance over time without the need for subject retraining based on CNN using PCA-reduced spectrogram inputs. The results of this study [113] support those of Côté-Allard et al. [11,111,112], and show that CNN models could be useful in compensating continuous drift in surface EMG signals.
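A spectrogram of the kind used as CNN input in these studies is simply a windowed short-time FFT of the signal. The NumPy sketch below uses an illustrative Hann window and frame sizes, not the exact settings of the cited works.

```python
import numpy as np

def spectrogram(x, n_fft=64, hop=32):
    """Magnitude spectrogram: window the signal, FFT each frame, and stack
    the magnitudes into a (frequency, time) 'image' suitable for a CNN."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)
```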
When short time windows of the raw EMG signal (150–200 ms) have been used as inputs to CNNs with very simple architectures, however, the reported accuracies have been below those of classical classification methods (i.e., RF for Ninapro 1 and 2, and SVM for Ninapro 3) [114]. This suggests that deep learning algorithms are strongly influenced by several factors (including network models and architectures, and optimization parameters), and thus, even after a good model and architecture is found, there is still a need to search for potentially better hyper-parameters to improve the performance of the algorithm.

3.2.3. Recurrent Neural Network (RNN)

In contrast to other deep learning models, RNNs take time series information into account; i.e., rather than purely feed-forward connections, an RNN has connections that feed back into prior layers. This feedback path allows RNNs to store information from previous inputs and to model problems in time. Long short-term memory (LSTM) units and gated recurrent units (GRUs) are two of the prevailing RNN architectures. For EMG pattern recognition, a combination of an RNN and a CNN (RNN + CNN) has been proposed and showed better performance than support vector regression (SVR) or CNN alone for estimating human upper limb joint angles [23]. Furthermore, Laezza [115] evaluated the performance of three different network models, RNN, CNN, and RNN + CNN, for myoelectric control. Their results showed that RNN alone provided the best classification performance (91.81%), compared with CNN (89.01%) and RNN + CNN (90.4%). This may be due to the fact that RNNs and LSTMs have advantages when processing sequential data such as EMG time series.
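The recurrence that gives RNNs their memory can be sketched as a single Elman-style layer: the hidden state computed at one step is fed back in at the next, so the current output depends on the whole input history. LSTM and GRU cells add gating to this same update; the weights and sizes below are illustrative.

```python
import numpy as np

def rnn_forward(x_seq, Wx, Wh, b, h0=None):
    """Single-layer Elman RNN over a sequence of input vectors: the hidden
    state h carries information from earlier EMG frames into later steps."""
    h = np.zeros(Wh.shape[0]) if h0 is None else h0
    states = []
    for x in x_seq:
        h = np.tanh(Wx @ x + Wh @ h + b)          # recurrent update
        states.append(h)
    return np.array(states)
```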

3.3. Discussion

From these more recent works, it is clear that EMG pattern recognition systems based on deep learning can achieve better classification accuracies than their counterparts, e.g., LDA, SVM, KNN, MLP, RF, and GMM. One of the key requirements for deep learning, however, is the availability of large volumes of data. Based on the current size of available EMG data sets, more data recording is necessary. When there is an insufficient amount of training data, the models tend to overfit to the data and end up with poor generalization ability. In the case of EMG pattern recognition, moreover, training can require lengthy iterations to learn relevant features from raw EMG signals, which can further increase the risk of overfitting.
To avoid this overfitting problem, larger EMG training sets are important. In the absence of more training data, techniques such as dropout, batch normalization, and early stopping (e.g., References [102,113]) may be employed.
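Of these, early stopping is easy to sketch in a framework-agnostic way: training halts once the validation loss stops improving for a fixed number of epochs. The `step`/`val_loss` callables and the patience value are illustrative placeholders for whatever training loop is in use.

```python
def train_with_early_stopping(step, val_loss, max_epochs=200, patience=10):
    """Generic early-stopping loop: `step` runs one training epoch and
    `val_loss` evaluates the held-out set (both caller-supplied). Stops
    once `patience` epochs pass without a new best validation loss."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch in range(max_epochs):
        step()
        loss = val_loss()
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best, best_epoch
```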
Another simple strategy to ensure that deep learning models generalize well is to split the dataset into three sets: training, validation, and test sets. Most previous EMG studies using deep learning, however, have approached the model selection and parameter optimization processes without statistical methods (i.e., a single trial run instead of cross validation [116]). Caution should therefore be taken when comparing the classification performance of proposed deep learning algorithms with that of conventional shallow learning algorithms (e.g., LDA and SVM), which require smaller training datasets and have more commonly been evaluated using cross validation.
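The k-fold splitting that underlies such cross validation can be sketched in a few lines: every sample serves in the test fold exactly once, so the reported accuracy is a mean over k runs rather than a single lucky (or unlucky) split. The fold count and seed are arbitrary choices for the example.

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    """Yield (train, test) index arrays for shuffled k-fold cross validation
    over n samples; the k test folds partition the whole index set."""
    idx = np.random.RandomState(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```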
In addition to larger EMG datasets and techniques for addressing overfitting, several studies have combined feature engineering and feature learning by inputting pre-processed EMG data and/or pre-extracted EMG features to deep learning algorithms, and some benefits have been shown. No comparison has yet been made between different types of pre-extracted EMG features, including window-based time domain features, time–frequency representation features, and EMG images. As this area remains relatively new, future research should consider how best to integrate feature engineering and feature learning for maximum benefit.
A key challenge and impediment to the clinical deployment of deep learning methods is their high computational cost (i.e., long training times and high computational complexity). Because of the stringent power and size restrictions of prosthetic components, most devices are built using embedded systems. While it is possible for inference to be carried out on these systems, training with deep learning must likely still be completed in an offline setting. A combined effort from the research community at large is therefore still needed to develop faster algorithms and hardware with even greater processing power to deliver clinically viable deep learning based myoelectric control.
Another key challenge for the clinical use of deep learning methods for myoelectric prosthetic control is the use of unsupervised domain adaptation or transfer learning methods [117] to reduce the effect of confounding factors that alter the characteristics of surface EMG signals. Inter-subject and inter-session variability are the two main factors that have been studied so far (Table 5). These techniques have been used to significantly reduce the amount of training data required for a new subject as well as to alleviate the need for periodic re-calibration. Nevertheless, there are other real-life conditions that must be addressed, including, but not limited to, electrode location shift, muscle fatigue, and variations in muscle contraction intensity, limb position, and forearm orientation. Furthermore, no study has yet demonstrated real-time prosthesis control by amputees using deep learning approaches. Additional efforts are needed in the development and optimization of transfer learning and domain adaptation to leverage suitable information from able-bodied to amputee subjects, in part because amputees exhibit greater variability in musculature than intact-limbed subjects and because it would be impractical to collect a large pre-training dataset from amputees.

4. Conclusions

In recent years, big data and deep learning have become extremely active research topics in many research fields, including EMG pattern recognition. Major advances have been made in the availability of shared surface EMG data, such that there are now at least 33 data sets with surface EMG collected from 662 subject sessions available online. This abundance of EMG data has enabled the resurgence of neural network approaches and the use of deep learning. Even more EMG data is expected to be made available in the near future due to technological advances (e.g., wireless wearable devices, HD-sEMG sensors, and data sharing), and thus big data methods should continue to be investigated and developed. All of the methods discussed in this paper show promise, provide inspiration for future studies, and demonstrate the potential of developing more advanced applications of EMG pattern recognition in the era of big data and deep learning.


This research was funded by the New Brunswick Health Research Foundation and the Natural Sciences and Engineering Research Council of Canada grant number DG 2014-04920.

Conflicts of Interest

The authors declare no conflict of interest.


  1. Scheme, E.; Englehart, K. Electromyogram Pattern Recognition for Control of Powered Upper-limb Prostheses: State of the Art and Challenges for Clinical Use. J. Rehabilit. Res. Dev. 2011, 48, 643–660.
  2. Phinyomark, A.; Phukpattaranont, P.; Limsakul, C. A Review of Control Methods for Electric Power Wheelchairs Based on Electromyography Signals with Special Emphasis on Pattern Recognition. IETE Tech. Rev. 2011, 28, 316–326.
  3. Saponas, T.S.; Tan, D.S.; Morris, D.; Balakrishnan, R.; Turner, J.; Landay, J.A. Enabling Always-Available Input with Muscle-Computer Interfaces. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, Victoria, BC, Canada, 4–7 October 2009; ACM: New York, NY, USA, 2009; pp. 167–176.
  4. Yousefi, J.; Hamilton-Wright, A. Characterizing EMG Data Using Machine-Learning Tools. Comput. Biol. Med. 2014, 51, 1–13.
  5. Padmanabhan, P.; Puthusserypady, S. Nonlinear Analysis of EMG Signals—A Chaotic Approach. In Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Francisco, CA, USA, 1–5 September 2004; Volume 1, pp. 608–611.
  6. Scheme, E.; Englehart, K. Training Strategies for Mitigating the Effect of Proportional Control on Classification in Pattern Recognition–Based Myoelectric Control. JPO J. Prosthet. Orthot. 2013, 25, 76–83.
  7. Phinyomark, A.; Quaine, F.; Charbonnier, S.; Serviere, C.; Tarpin-Bernard, F.; Laurillau, Y. EMG Feature Evaluation for Improving Myoelectric Pattern Recognition Robustness. Expert Syst. Appl. 2013, 40, 4832–4840.
  8. Tkach, D.; Huang, H.; Kuiken, T.A. Study of Stability of Time-Domain Features for Electromyographic Pattern Recognition. J. NeuroEng. Rehabilit. 2010, 7, 21.
  9. Phinyomark, A.; Quaine, F.; Charbonnier, S.; Serviere, C.; Tarpin-Bernard, F.; Laurillau, Y. A Feasibility Study on the Use of Anthropometric Variables to Make Muscle–Computer Interface More Practical. Eng. Appl. Artif. Intell. 2013, 26, 1681–1688.
  10. Khushaba, R.N.; Al-Timemy, A.; Kodagoda, S.; Nazarpour, K. Combined Influence of Forearm Orientation and Muscular Contraction on EMG Pattern Recognition. Expert Syst. Appl. 2016, 61, 154–161.
  11. Côté-Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Laviolette, F.; Gosselin, B. Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning. arXiv, 2018; arXiv:1801.07756.
  12. Georgi, M.; Amma, C.; Schultz, T. Recognizing Hand and Finger Gestures with IMU Based Motion and EMG Based Muscle Activity Sensing. In Proceedings of the International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2015), Lisbon, Portugal, 12–15 January 2015; SCITEPRESS—Science and Technology Publications, Lda: Setúbal, Portugal, 2015; Volume 4, pp. 99–108.
  13. Atzori, M.; Gijsberts, A.; Heynen, S.; Hager, A.G.M.; Deriaz, O.; van der Smagt, P.; Castellini, C.; Caputo, B.; Müller, H. Building the Ninapro Database: A Resource for the Biorobotics Community. In Proceedings of the 4th IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics, Rome, Italy, 24–27 June 2012; pp. 1258–1265.
  14. Geng, W.; Du, Y.; Jin, W.; Wei, W.; Hu, Y.; Li, J. Gesture Recognition by Instantaneous Surface EMG Images. Sci. Rep. 2016, 6, 36571.
  15. Du, Y.; Jin, W.; Wei, W.; Hu, Y.; Geng, W. Surface EMG-Based Inter-Session Gesture Recognition Enhanced by Deep Domain Adaptation. Sensors 2017, 17, 458.
  16. Gruss, S.; Treister, R.; Werner, P.; Traue, H.C.; Crawcour, S.; Andrade, A.; Walter, S. Pain Intensity Recognition Rates via Biopotential Feature Patterns with Support Vector Machines. PLoS ONE 2015, 10, 1–14.
  17. Becker, H.; Fleureau, J.; Guillotel, P.; Wendling, F.; Merlet, I.; Albera, L. Emotion Recognition Based on High-Resolution EEG Recordings and Reconstructed Brain Sources. IEEE Trans. Affect. Comput. 2017.
  18. Phinyomark, A.; Phukpattaranont, P.; Limsakul, C. Feature Reduction and Selection for EMG Signal Classification. Expert Syst. Appl. 2012, 39, 7420–7431.
  19. Boostani, R.; Moradi, M.H. Evaluation of the Forearm EMG Signal Features for the Control of a Prosthetic Hand. Physiol. Meas. 2003, 24, 309.
  20. Zardoshti-Kermani, M.; Wheeler, B.C.; Badie, K.; Hashemi, R.M. EMG Feature Evaluation for Movement Control of Upper Extremity Prostheses. IEEE Trans. Rehabilit. Eng. 1995, 3, 324–333.
  21. Scheme, E.; Englehart, K. On the Robustness of EMG Features for Pattern Recognition Based Myoelectric Control; A Multi-Dataset Comparison. In Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 650–653.
  22. Phinyomark, A.; Khushaba, R.N.; Ibáñez-Marcelo, E.; Patania, A.; Scheme, E.; Petri, G. Navigating Features: A Topologically Informed Chart of Electromyographic Features Space. J. R. Soc. Interface 2017, 14.
  23. Xia, P.; Hu, J.; Peng, Y. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks. Artif. Organs 2018, 42, E67–E77.
  24. Shim, H.M.; Lee, S. Multi-Channel Electromyography Pattern Classification Using Deep Belief Networks for Enhanced User Experience. J. Cent. South Univ. 2015, 22, 1801–1808.
  25. Gorgolewski, K.; Margulies, D.; Milham, M. Making Data Sharing Count: A Publication-Based Solution. Front. Neurosci. 2013, 7, 9.
  26. Phinyomark, A.; Khushaba, R.N.; Scheme, E. Feature Extraction and Selection for Myoelectric Control Based on Wearable EMG Sensors. Sensors 2018, 18, 1615.
  27. Kamavuako, E.N.; Scheme, E.J.; Englehart, K.B. Determination of Optimum Threshold Values for EMG Time Domain Features; A Multi-Dataset Investigation. J. Neural Eng. 2016, 13, 046011.
  28. Hudgins, B.; Parker, P.; Scott, R.N. A New Strategy for Multifunction Myoelectric Control. IEEE Trans. Biomed. Eng. 1993, 40, 82–94.
  29. Khushaba, R.N.; Al-Timemy, A.H.; Al-Ani, A.; Al-Jumaily, A. A Framework of Temporal-Spatial Descriptors-Based Feature Extraction for Improved Myoelectric Pattern Recognition. IEEE Trans. Neural Syst. Rehabilit. Eng. 2017, 25, 1821–1831.
  30. Phinyomark, A.; Scheme, E. A Feature Extraction Issue for Myoelectric Control Based on Wearable EMG Sensors. In Proceedings of the IEEE Sensors Applications Symposium (SAS), Seoul, Korea, 12–14 March 2018; pp. 1–6.
  31. Phinyomark, A.; Quaine, F.; Laurillau, Y. The Relationship between Anthropometric Variables and Features of Electromyography Signal for Human—Computer Interface. In Applications, Challenges, and Advancements in Electromyography Signal Processing; Naik, G.R., Ed.; IGI Global: Hershey, PA, USA, 2014; Chapter 15; pp. 321–353.
  32. Khushaba, R.N.; Kodagoda, S. Electromyogram (EMG) Feature Reduction Using Mutual Components Analysis for Multifunction Prosthetic Fingers Control. In Proceedings of the 2012 12th International Conference on Control Automation Robotics Vision (ICARCV), Guangzhou, China, 5–7 December 2012; pp. 1534–1539.
  33. Khushaba, R.N.; Kodagoda, S.; Liu, D.; Dissanayake, G. Muscle Computer Interfaces for Driver Distraction Reduction. Comput. Methods Prog. Biomed. 2013, 110, 137–149.
  34. Chan, A.D.C.; Green, G.C. Myoelectric Control Development Toolbox. In Proceedings of the 30th Conference of the Canadian Medical & Biological Engineering Society (CMBEC), Toronto, ON, Canada, 16–19 June 2007; p. M0100.
  35. Goge, A.R.; Chan, A.D.C. Investigating Classification Parameters for Continuous Myoelectrically Controlled Prostheses. In Proceedings of the 28th Conference of the Canadian Medical & Biological Engineering Society (CMBEC), Quebec, QC, Canada, 10–11 September 2004; pp. 141–144. [Google Scholar]
  36. Phinyomark, A.; Ibáñez-Marcelo, E.; Petri, G. Resting-State fMRI Functional Connectivity: Big Data Preprocessing Pipelines and Topological Data Analysis. IEEE Trans. Big Data 2017, 3, 415–428. [Google Scholar] [CrossRef]
37. Atzori, M.; Gijsberts, A.; Kuzborskij, I.; Elsig, S.; Hager, A.G.M.; Deriaz, O.; Castellini, C.; Müller, H.; Caputo, B. Characterization of a Benchmark Database for Myoelectric Movement Classification. IEEE Trans. Neural Syst. Rehabilit. Eng. 2015, 23, 73–83. [Google Scholar] [CrossRef] [PubMed]
38. Atzori, M.; Gijsberts, A.; Castellini, C.; Caputo, B.; Hager, A.G.M.; Elsig, S.; Giatsidis, G.; Bassetto, F.; Müller, H. Electromyography Data for Non-Invasive Naturally-Controlled Robotic Hand Prostheses. Sci. Data 2014, 1, 140053. [Google Scholar] [CrossRef] [PubMed]
  39. Pizzolato, S.; Tagliapietra, L.; Cognolato, M.; Reggiani, M.; Müller, H.; Atzori, M. Comparison of Six Electromyography Acquisition Setups on Hand Movement Classification Tasks. PLoS ONE 2017, 12, e0186132. [Google Scholar] [CrossRef] [PubMed]
  40. Palermo, F.; Cognolato, M.; Gijsberts, A.; Müller, H.; Caputo, B.; Atzori, M. Repeatability of Grasp Recognition for Robotic Hand Prosthesis Control Based on sEMG Data. In Proceedings of the 2017 International Conference on Rehabilitation Robotics (ICORR), London, UK, 17–20 July 2017; pp. 1154–1159. [Google Scholar] [CrossRef]
  41. Krasoulis, A.; Kyranou, I.; Erden, M.S.; Nazarpour, K.; Vijayakumar, S. Improved Prosthetic Hand Control with Concurrent Use of Myoelectric and Inertial Measurements. J. NeuroEng. Rehabilit. 2017, 14, 71. [Google Scholar] [CrossRef] [PubMed]
  42. Jordanić, M.; Rojas-Martínez, M.; Mañanas, M.A.; Alonso, J.F.; Marateb, H.R. A Novel Spatial Feature for the Identification of Motor Tasks Using High-Density Electromyography. Sensors 2017, 17, 1597. [Google Scholar] [CrossRef] [PubMed]
  43. Phinyomark, A.; Quaine, F.; Laurillau, Y.; Thongpanja, S.; Limsakul, C.; Phukpattaranont, P. EMG Amplitude Estimators Based on Probability Distribution for Muscle–Computer Interface. Fluct. Noise Lett. 2013, 12, 1350016. [Google Scholar] [CrossRef]
  44. Rojas-Martínez, M.; Mañanas, M.; Alonso, J.; Merletti, R. Identification of Isometric Contractions Based on High Density EMG Maps. J. Electromyogr. Kinesiol. 2013, 23, 33–42. [Google Scholar] [CrossRef] [PubMed]
  45. Amma, C.; Krings, T.; Böer, J.; Schultz, T. Advancing Muscle-Computer Interfaces with High-Density Electromyography. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15), Seoul, Korea, 18–23 April 2015; ACM: New York, NY, USA, 2015; pp. 929–938. [Google Scholar] [CrossRef]
  46. Hwang, H.J.; Hahne, J.M.; Müller, K.R. Channel Selection for Simultaneous and Proportional Myoelectric Prosthesis Control of Multiple Degrees-of-Freedom. J. Neural Eng. 2014, 11, 056008. [Google Scholar] [CrossRef] [PubMed]
47. Fougner, A.; Scheme, E.; Chan, A.D.C.; Englehart, K.; Stavdahl, Ø. Resolving the Limb Position Effect in Myoelectric Pattern Recognition. IEEE Trans. Neural Syst. Rehabilit. Eng. 2011, 19, 644–651. [Google Scholar] [CrossRef] [PubMed]
  48. Radmand, A.; Scheme, E.; Englehart, K. On the Suitability of Integrating Accelerometry Data with Electromyography Signals for Resolving the Effect of Changes in Limb Position during Dynamic Limb Movement. J. Prosthet. Orthot. 2014, 26, 185–193. [Google Scholar] [CrossRef]
  49. Terzano, M.G.; Parrino, L.; Sherieri, A.; Chervin, R.; Chokroverty, S.; Guilleminault, C.; Hirshkowitz, M.; Mahowald, M.; Moldofsky, H.; Rosa, A.; et al. Atlas, Rules, and Recording Techniques for the Scoring of Cyclic Alternating Pattern (CAP) in Human Sleep. Sleep Med. 2001, 2, 537–553. [Google Scholar] [CrossRef]
  50. Quan, S.F.; Howard, B.V.; Iber, C.; Kiley, J.P.; Nieto, F.J.; O’Connor, G.T.; Rapoport, D.M.; Redline, S.; Robbins, J.; Samet, J.M.; et al. The Sleep Heart Health Study: Design, Rationale, and Methods. Sleep 1997, 20, 1077–1085. [Google Scholar] [CrossRef] [PubMed]
  51. Neptune, R.; Wright, I.; van den Bogert, A. Muscle Coordination and Function During Cutting Movements. Med. Sci. Sports Exerc. 1999, 31, 294–302. [Google Scholar] [CrossRef] [PubMed]
  52. Vögele, A.M.; Zsoldos, R.R.; Krüger, B.; Licka, T. Novel Methods for Surface EMG Analysis and Exploration Based on Multi-Modal Gaussian Mixture Models. PLoS ONE 2016, 11, 1–28. [Google Scholar] [CrossRef] [PubMed]
  53. Reuderink, B.; Nijholt, A.; Poel, M. Affective Pacman: A Frustrating Game for Brain-Computer Interface Experiments. In Proceedings of the Intelligent Technologies for Interactive Entertainment; Nijholt, A., Reidsma, D., Hondorp, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 221–227. [Google Scholar]
  54. Haufe, S.; Treder, M.S.; Gugler, M.F.; Sagebaum, M.; Curio, G.; Blankertz, B. EEG Potentials Predict Upcoming Emergency Brakings During Simulated Driving. J. Neural Eng. 2011, 8, 056001. [Google Scholar] [CrossRef] [PubMed]
55. Healey, J.A.; Picard, R.W. Detecting Stress During Real-World Driving Tasks Using Physiological Sensors. IEEE Trans. Intell. Transp. Syst. 2005, 6, 156–166. [Google Scholar] [CrossRef]
  56. Goldberger, A.L.; Amaral, L.A.N.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet. Circulation 2000, 101, e215–e220. [Google Scholar] [CrossRef] [PubMed]
  57. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis Using Physiological Signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31. [Google Scholar] [CrossRef]
  58. Abadi, M.K.; Subramanian, R.; Kia, S.M.; Avesani, P.; Patras, I.; Sebe, N. DECAF: MEG-Based Multimodal Database for Decoding Affective Physiological Responses. IEEE Trans. Affect. Comput. 2015, 6, 209–222. [Google Scholar] [CrossRef]
  59. Zhang, L.; Walter, S.; Ma, X.; Werner, P.; Al-Hamadi, A.; Traue, H.C.; Gruss, S. “BioVid Emo DB”: A Multimodal Database for Emotion Analyses Validated by Subjective Ratings. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–6. [Google Scholar] [CrossRef]
  60. Walter, S.; Gruss, S.; Ehleiter, H.; Tan, J.; Traue, H.C.; Crawcour, S.; Werner, P.; Al-Hamadi, A.; Andrade, A.O. The BioVid Heat Pain Database Data for the Advancement and Systematic Validation of An Automated Pain Recognition System. In Proceedings of the 2013 IEEE International Conference on Cybernetics (CYBCO), Lausanne, Switzerland, 13–15 June 2013; pp. 128–131. [Google Scholar] [CrossRef]
  61. Demchenko, Y.; Grosso, P.; de Laat, C.; Membrey, P. Addressing Big Data Issues in Scientific Data Infrastructure. In Proceedings of the 2013 International Conference on Collaboration Technologies and Systems (CTS), San Diego, CA, USA, 20–24 May 2013; pp. 48–55. [Google Scholar] [CrossRef]
  62. Gandomi, A.; Haider, M. Beyond the Hype: Big Data Concepts, Methods, and Analytics. Int. J. Inf. Manag. 2015, 35, 137–144. [Google Scholar] [CrossRef]
  63. Phinyomark, A.; Phukpattaranont, P.; Limsakul, C. Wavelet-Based Denoising Algorithm for Robust EMG Pattern Recognition. Fluct. Noise Lett. 2011, 10, 157–167. [Google Scholar] [CrossRef]
  64. Phinyomark, A.; Limsakul, C.; Phukpattaranont, P. EMG Feature Extraction for Tolerance of 50 Hz Interference. In Proceedings of the 4th PSU-UNS International Conference on Engineering Technologies, Novi Sad, Serbia, 28–30 April 2009; pp. 289–293. [Google Scholar]
  65. Phinyomark, A.; Limsakul, C.; Phukpattaranont, P. EMG Feature Extraction for Tolerance of White Gaussian Noise. In Proceedings of the International Workshop and Symposium Science Technology, Nong-khai, Thailand, 15–16 December 2008. [Google Scholar]
  66. Luo, W.; Zhang, Z.; Wen, T.; Li, C.; Luo, Z. Features Extraction and Multi-Classification of sEMG Using A GPU-Accelerated GA/MLP Hybrid Algorithm. J. X-ray Sci. Technol. 2017, 25, 273–286. [Google Scholar] [CrossRef] [PubMed]
  67. Karthick, P.; Ghosh, D.M.; Ramakrishnan, S. Surface Electromyography Based Muscle Fatigue Detection Using High-Resolution Time-Frequency Methods and Machine Learning Algorithms. Comput. Methods Prog. Biomed. 2018, 154, 45–56. [Google Scholar] [CrossRef] [PubMed]
68. Purushothaman, G.; Vikas, R. Identification of a Feature Selection Based Pattern Recognition Scheme for Finger Movement Recognition from Multichannel EMG Signals. Australas. Phys. Eng. Sci. Med. 2018, 41, 549–559. [Google Scholar] [CrossRef] [PubMed]
  69. Xi, X.; Tang, M.; Luo, Z. Feature-Level Fusion of Surface Electromyography for Activity Monitoring. Sensors 2018, 18, 614. [Google Scholar] [CrossRef] [PubMed]
  70. Englehart, K.; Hudgin, B.; Parker, P.A. A Wavelet-Based Continuous Classification Scheme for Multifunction Myoelectric Control. IEEE Trans. Biomed. Eng. 2001, 48, 302–311. [Google Scholar] [CrossRef] [PubMed]
  71. Chu, J.U.; Moon, I.; Lee, Y.J.; Kim, S.K.; Mun, M.S. A Supervised Feature-Projection-Based Real-Time EMG Pattern Recognition for Multifunction Myoelectric Hand Control. IEEE/ASME Trans. Mechatron. 2007, 12, 282–290. [Google Scholar] [CrossRef]
  72. Chu, J.U.; Moon, I.; Mun, M.S. A Real-Time EMG Pattern Recognition System Based on Linear-Nonlinear Feature Projection for a Multifunction Myoelectric Hand. IEEE Trans. Biomed. Eng. 2006, 53, 2232–2239. [Google Scholar] [CrossRef] [PubMed]
  73. Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C. Application of Linear Discriminant Analysis in Dimensionality Reduction for Hand Motion Classification. Meas. Sci. Rev. 2012, 12, 82–89. [Google Scholar] [CrossRef]
  74. Kuiken, T.A.; Li, G.; Lock, B.A.; Lipschutz, R.D.; Miller, L.A.; Stubblefield, K.A.; Englehart, K.B. Targeted Muscle Reinnervation for Real-Time Myoelectric Control of Multifunction Artificial Arms. JAMA 2009, 301, 619–628. [Google Scholar] [CrossRef] [PubMed]
  75. Scheme, E.J.; Englehart, K.B. Validation of a Selective Ensemble-Based Classification Scheme for Myoelectric Control Using a Three-Dimensional Fitts’ Law Test. IEEE Trans. Neural Syst. Rehabilit. Eng. 2013, 21, 616–623. [Google Scholar] [CrossRef] [PubMed]
  76. Sapsanis, C.; Georgoulas, G.; Tzes, A.; Lymberopoulos, D. Improving EMG Based Classification of Basic Hand Movements Using EMD. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 5754–5757. [Google Scholar] [CrossRef]
  77. Khushaba, R.N.; Kodagoda, S.; Takruri, M.; Dissanayake, G. Toward Improved Control of Prosthetic Fingers Using Surface Electromyogram (EMG) Signals. Expert Syst. Appl. 2012, 39, 10731–10738. [Google Scholar] [CrossRef]
  78. Ortiz-Catalan, M.; Brånemark, R.; Håkansson, B. BioPatRec: A Modular Research Platform for the Control of Artificial Limbs Based on Pattern Recognition Algorithms. Source Code Biol. Med. 2013, 8, 11. [Google Scholar] [CrossRef] [PubMed]
  79. Mastinu, E.; Ortiz-Catalan, M.; Håkansson, B. Analog Front-Ends Comparison in the Way of A Portable, Low-Power and Low-Cost EMG Controller Based on Pattern Recognition. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2111–2114. [Google Scholar] [CrossRef]
  80. Ortiz-Catalan, M.; Brånemark, R.; Håkansson, B. Evaluation of Classifier Topologies for the Real-Time Classification of Simultaneous Limb Motions. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 6651–6654. [Google Scholar] [CrossRef]
  81. Ortiz-Catalan, M.; Håkansson, B.; Brånemark, R. Real-Time Classification of Simultaneous Hand and Wrist Motions Using Artificial Neural Networks with Variable Threshold Outputs. In Proceedings of the XXXIV International Conference on Artificial Neural Networks (ICANN), Amsterdam, The Netherlands, 15–16 May 2013; pp. 1159–1164. [Google Scholar]
  82. Ortiz-Catalan, M.; Håkansson, B.; Brånemark, R. Real-Time and Simultaneous Control of Artificial Limbs Based on Pattern Recognition Algorithms. IEEE Trans. Neural Syst. Rehabilit. Eng. 2014, 22, 756–764. [Google Scholar] [CrossRef] [PubMed]
  83. Khushaba, R.N.; Takruri, M.; Miro, J.V.; Kodagoda, S. Towards Limb Position Invariant Myoelectric Pattern Recognition Using Time-Dependent Spectral Features. Neural Netw. 2014, 55, 42–58. [Google Scholar] [CrossRef] [PubMed]
  84. Al-Timemy, A.H.; Khushaba, R.N.; Bugmann, G.; Escudero, J. Improving the Performance Against Force Variation of EMG Controlled Multifunctional Upper-Limb Prostheses for Transradial Amputees. IEEE Trans. Neural Syst. Rehabilit. Eng. 2016, 24, 650–661. [Google Scholar] [CrossRef] [PubMed]
  85. Fang, Y.; Liu, H.; Li, G.; Zhu, X. A Multichannel Surface EMG System for Hand Motion Recognition. Int. J. Hum. Robot. 2015, 12, 1550011. [Google Scholar] [CrossRef]
  86. Phinyomark, A.; Phukpattaranont, P.; Limsakul, C. Investigating Long-Term Effects of Feature Extraction Methods for Continuous EMG Pattern Classification. Fluct. Noise Lett. 2012, 11, 1250028. [Google Scholar] [CrossRef]
  87. Cantú-Paz, E.; Goldberg, D.E. Efficient Parallel Genetic Algorithms: Theory and Practice. Comput. Methods Appl. Mech. Eng. 2000, 186, 221–238. [Google Scholar] [CrossRef]
  88. Zhou, Y.; Tan, Y. GPU-Based Parallel Particle Swarm Optimization. In Proceedings of the 2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; pp. 1493–1500. [Google Scholar] [CrossRef]
  89. Zhang, T.; Yang, B. Big Data Dimension Reduction Using PCA. In Proceedings of the 2016 IEEE International Conference on Smart Cloud (SmartCloud), New York, NY, USA, 18–20 November 2016; pp. 152–157. [Google Scholar] [CrossRef]
  90. Vogt, F.; Tacke, M. Fast Principal Component Analysis of Large Data Sets. Chemom. Intell. Lab. Syst. 2001, 59, 1–18. [Google Scholar] [CrossRef]
  91. Singh, G.; Memoli, F.; Carlsson, G. Topological Methods for the Analysis of High Dimensional Data Sets and 3D Object Recognition. In Proceedings of the Eurographics Symposium on Point-Based Graphics, Prague, Czech Republic, 2–3 September 2007. [Google Scholar]
  92. Nicolau, M.; Levine, A.J.; Carlsson, G. Topology Based Data Analysis Identifies A Subgroup of Breast Cancers with A Unique Mutational Profile and Excellent Survival. Proc. Natl. Acad. Sci. USA 2011, 108, 7265–7270. [Google Scholar] [CrossRef] [PubMed]
  93. Phinyomark, A.; Ibáñez-Marcelo, E.; Petri, G. Topological Data analysis of Biomedical Big Data. In Signal Processing and Machine Learning for Biomedical Big Data; Sejdic, E., Falk, T.H., Eds.; CRC Press: Boca Raton, FL, USA, 2018; Chapter 11; pp. 209–234. [Google Scholar]
94. Oskoei, M.A.; Hu, H. Support Vector Machine-Based Classification Scheme for Myoelectric Control Applied to Upper Limb. IEEE Trans. Biomed. Eng. 2008, 55, 1956–1965. [Google Scholar] [CrossRef] [PubMed]
  95. Kim, K.S.; Choi, H.H.; Moon, C.S.; Mun, C.W. Comparison of k-Nearest Neighbor, Quadratic Discriminant and Linear Discriminant Analysis in Classification of Electromyogram Signals Based on the Wrist-Motion Directions. Curr. Appl. Phys. 2011, 11, 740–745. [Google Scholar] [CrossRef]
  96. Verikas, A.; Vaiciukynas, E.; Gelzinis, A.; Parker, J.; Olsson, M.C. Electromyographic Patterns During Golf Swing: Activation Sequence Profiling and Prediction of Shot Effectiveness. Sensors 2016, 16, 592. [Google Scholar] [CrossRef] [PubMed]
  97. Deng, L.; Yu, D.; Platt, J. Scalable Stacking and Learning for Building Deep Architectures. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 2133–2136. [Google Scholar] [CrossRef]
  98. Hutchinson, B.; Deng, L.; Yu, D. Tensor Deep Stacking Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1944–1957. [Google Scholar] [CrossRef] [PubMed]
  99. Dean, J.; Corrado, G.S.; Monga, R.; Chen, K.; Devin, M.; Le, Q.V.; Mao, M.Z.; Ranzato, M.; Senior, A.; Tucker, P.; et al. Large Scale Distributed Deep Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS’12), Lake Tahoe, NV, USA, 3–6 December 2012; Curran Associates Inc.: Red Hook, NY, USA, 2012; Volume 1, pp. 1223–1231. [Google Scholar]
  100. Chen, X.W.; Lin, X. Big Data Deep Learning: Challenges and Perspectives. IEEE Access 2014, 2, 514–525. [Google Scholar] [CrossRef]
  101. Zhang, Q.; Yang, L.T.; Chen, Z.; Li, P. A Survey on Deep Learning for Big Data. Inf. Fusion 2018, 42, 146–157. [Google Scholar] [CrossRef]
  102. Hartwell, A.; Kadirkamanathan, V.; Anderson, S.R. Compact Deep Neural Networks for Computationally Efficient Gesture Classification From Electromyography Signals. arXiv, 2018; arXiv:1806.08641. [Google Scholar]
  103. Gheisari, M.; Wang, G.; Bhuiyan, M.Z.A. A Survey on Deep Learning in Big Data. In Proceedings of the 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), Guangzhou, China, 21–24 July 2017; Volume 2, pp. 173–180. [Google Scholar] [CrossRef]
  104. Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed]
  105. Shim, H.M.; An, H.; Lee, S.; Lee, E.H.; Min, H.K.; Lee, S. EMG Pattern Classification by Split and Merge Deep Belief Network. Symmetry 2016, 8, 148. [Google Scholar] [CrossRef]
  106. Wand, M.; Schultz, T. Pattern Learning with Deep Neural Networks in EMG-Based Speech Recognition. In Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 4200–4203. [Google Scholar] [CrossRef]
  107. Wand, M.; Schmidhuber, J. Deep Neural Network Frontend for Continuous EMG-Based Speech Recognition. In Proceedings of the 17th Annual Conference of the International Speech Communication Association (Interspeech), San Francisco, CA, USA, 8–12 September 2016; pp. 3032–3036. [Google Scholar]
  108. Kawde, P.; Verma, G.K. Deep Belief Network Based Affect Recognition from Physiological Signals. In Proceedings of the 2017 4th IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics (UPCON), Mathura, India, 26–28 October 2017; pp. 587–592. [Google Scholar] [CrossRef]
  109. Said, A.B.; Mohamed, A.; Elfouly, T.; Harras, K.; Wang, Z.J. Multimodal Deep Learning Approach for Joint EEG-EMG Data Compression and Classification. In Proceedings of the 2017 IEEE Wireless Communications and Networking Conference (WCNC), San Francisco, CA, USA, 19–22 March 2017; pp. 1–6. [Google Scholar] [CrossRef]
  110. Chen, J.; Zhang, X.; Cheng, Y.; Xi, N. Surface EMG Based Continuous Estimation of Human Lower Limb Joint Angles By Using Deep Belief Networks. Biomed. Signal Process. Control 2018, 40, 335–342. [Google Scholar] [CrossRef]
  111. Côté-Allard, U.; Nougarou, F.; Fall, C.L.; Giguère, P.; Gosselin, C.; Laviolette, F.; Gosselin, B. A Convolutional Neural Network for Robotic Arm Guidance Using sEMG Based Frequency-Features. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 2464–2470. [Google Scholar] [CrossRef]
  112. Côté-Allard, U.; Fall, C.L.; Campeau-Lecours, A.; Gosselin, C.; Laviolette, F.; Gosselin, B. Transfer Learning for sEMG Hand Gestures Recognition Using Convolutional Neural Networks. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 1663–1668. [Google Scholar] [CrossRef]
  113. Zhai, X.; Jelfs, B.; Chan, R.H.M.; Tin, C. Self-Recalibrating Surface EMG Pattern Recognition for Neuroprosthesis Control Based on Convolutional Neural Network. Front. Neurosci. 2017, 11, 379. [Google Scholar] [CrossRef] [PubMed]
  114. Atzori, M.; Cognolato, M.; Müller, H. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands. Front. Neurorobot. 2016, 10, 9. [Google Scholar] [CrossRef] [PubMed]
  115. Laezza, R. Deep Neural Networks for Myoelectric Pattern Recognition An Implementation for Multifunctional Control. Master’s Thesis, Chalmers University of Technology, Gothenburg, Sweden, 2018. [Google Scholar]
  116. Faust, O.; Hagiwara, Y.; Hong, T.J.; Lih, O.S.; Acharya, U.R. Deep Learning for Healthcare Applications Based on Physiological Signals: A Review. Comput. Methods Prog. Biomed. 2018, 161, 1–13. [Google Scholar] [CrossRef] [PubMed]
  117. Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; Lempitsky, V. Domain-Adversarial Training of Neural Networks. J. Mach. Learn. Res. 2016, 17, 2030–2096. [Google Scholar]
  118. Park, K.H.; Lee, S.W. Movement Intention Decoding Based on Deep Learning for Multiuser Myoelectric Interfaces. In Proceedings of the 2016 4th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Korea, 22–24 February 2016; pp. 1–2. [Google Scholar] [CrossRef]
Table 1. Multiple electromyogram (EMG) data sets for myoelectric control: subject, experimental protocol, acquisition setup, and pre-processing pipeline.
| Dataset | Test Conditions | Subjects | Number of Movements | Number of Repetitions | Total Number of Trials | Time per Trial (s) | Number of Electrodes | Sampling Rate (Hz) | Filtering (Hz) | Resolution (bits) |
|---|---|---|---|---|---|---|---|---|---|---|
| Sapsanis et al. [76] a | A small number of EMG channels | 5N (2M 3F) | 6 | 30 | 900 | 6 | 2 | 500 | BPF 15–500, NF at 50 | 14 |
| Khushaba et al. 1 [77] b | A small number of EMG channels | 8N (6M 2F) | | | | | | | | |
| Ortiz-Catalan et al. 1 [78] c | EMG armband | 20N (10M 10F) | 10 + rest | 3 | 600 | 3 | 4 | 2000 | BPF 20–400, NF at 50 | 14 |
| Mastinu et al. [79] c | EMG armband, Acquisition system (3 sets) | 8N (6M 2F) | 10 + rest | 3 | 720 | 3 | 4 | 2000 | - | 16 |
| Ortiz-Catalan et al. 2 [80] c | EMG armband | 6N (3M 3F) | 26 + rest | 3 | 468 | 3 | 8 | 2000 | BPF 20–400, NF at 50 | 16 |
| Ortiz-Catalan et al. 3 [81] c | EMG armband | 7N (6M 1F) | 26 + rest | 3 | 546 | 3 | 8 | 2000 | BPF 20–400, NF at 50 | 16 |
| Ortiz-Catalan et al. 4 [82] c | EMG armband | 17N (11M 6F) | 26 + rest | 3 | 1326 | 3 | 8 | 2000 | BPF 20–400, NF at 50 | 16 |
| Khushaba et al. 2 [32] b | EMG armband | 8N (6M 2F) | | | | | (out of 6) | | | |
| Khushaba et al. 3 [33] b | EMG armband | 8N (7M 1F) | | | | | (out of 6) | | | |
| Khushaba et al. 4 [83] b | Limb position (5 positions), EMG armband | 11N (9M 2F) | 7 + rest | 6 | 2310 | 5 | 7 | 4000 | - | 12 |
| Khushaba et al. 5 [10] b | Forearm orientation (3 orientations), Contraction intensity (3 levels), EMG armband | 10N | 6 + rest | 3 | 1620 | 5 | 6 | 4000 | - | 12 |
| Al-Timemy et al. [84] b | Amputation, Contraction intensity (3 levels), EMG armband | 9A (7M 2F) | | | | | | | | |
| Côté-Allard et al. [11] d | Between-day (2 days), EMG armband | 40N (28M 12F) | 6 + rest | 4–12 | 3744 | 5 | 8 | 200 | NF at 50 | 8 |
| Chan et al. [34,35] e | Between-day (4 days) | 30N | 6 + rest | 24 | 17,280 | 3 | 8 | 3000 | BPF 1–1000 | 12 |
| ISRMyo-I [85] f | Between-day (10 days), EMG armband | 6N | 12 + rest | 2 | 1440 | 10 | 16 | 1000 | n/a | n/a |
Table 2. Benchmark EMG data sets for myoelectric control: subject, experimental protocol, acquisition setup, and pre-processing pipeline.
| Dataset | Test Conditions | Subjects | Number of Movements | Number of Repetitions | Total Number of Trials | Time per Trial (s) | Number of Electrodes | Sampling Rate (Hz) | Filtering (Hz) | Resolution (bits) |
|---|---|---|---|---|---|---|---|---|---|---|
| Ninapro 1 [37] | EMG armband | 27N (20M 7F) | 52 + rest | 10 | 14,040 | 5 | 10 | 100 | RMS, LPF at 5 | 12 |
| Ninapro 2 [38] | EMG armband | 40N (28M 12F) | 49 + rest | 6 | 11,760 | 5 | 12 | 2000 | NF at 50 | 16 |
| Ninapro 3 [38] | Amputation, EMG armband | 11A | 49 + rest a | 6 | 3234 | 5 | 12 b | 2000 | NF at 50 | 16 |
| Ninapro 4 [39] | EMG armband | 10N (6M 4F) | 52 + rest | 6 | 3120 | 5 | 12 | 2000 | BPF 10–1000, NF at 50 | |
| Ninapro 5 [39] | EMG armband | 10N (8M 2F) | 52 + rest | 6 | 3120 | 5 | 16 | 200 | NF at 50 | 8 |
| Ninapro 6 [40] | Between-day (5 days), EMG armband | 10N (7M 3F) | 7 + rest | 12 × 2 = 24 | 8400 | 4 | 14 | 2000 | NF at 50 | 16 |
| Ninapro 7 [41] | Amputation, EMG armband | 20N 2A | 40 + rest c | 6 | 5280 | 5 | 12 | 2000 | NF at 50 | 16 |

Note that N, able-bodied (non-amputee) subject; A, amputee subject; M, male; F, female; LPF, low-pass filter; NF, notch filter; BPF, band-pass filter. a For subjects 1, 3, and 10, the number of movements was respectively reduced to 39, 49, and 43 movements (including rest) due to fatigue or pain; b For subjects 7 and 8, the number of electrodes was reduced to 10 due to insufficient space; c For subject 21, the number of movements was reduced to 38 (including rest).
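The trial totals in Table 2 follow from movements (excluding rest) × repetitions × subjects. A quick arithmetic check on a few rows, with values transcribed from the table (counts are nominal, before the per-subject reductions described in the footnote):

```python
# Sanity check on the "Total Number of Trials" column of Table 2:
# trials = movements (excluding rest) x repetitions x subjects.
ninapro = {
    "Ninapro 1": (52, 10, 27, 14040),
    "Ninapro 2": (49, 6, 40, 11760),
    "Ninapro 3": (49, 6, 11, 3234),
    "Ninapro 7": (40, 6, 22, 5280),
}
for name, (movements, reps, subjects, reported) in ninapro.items():
    assert movements * reps * subjects == reported, name
```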
Table 3. High-density surface EMG (HD-sEMG) datasets: subject, experimental protocol, acquisition setup, and pre-processing pipeline.
| Dataset | Test Conditions | Subjects | Number of Movements | Number of Repetitions | Total Number of Trials | Time per Trial (s) | Number of Electrodes | Sampling Rate (Hz) | Filtering (Hz) | Resolution (bits) |
|---|---|---|---|---|---|---|---|---|---|---|
| mmGest [12] a | Between-day (5 days), HD-sEMG | 5N (4M 1F) | 12 + rest | 15 | 4500 | ≈1.1 | 8 × 4 = 32 | 1000 | - | 12 |
| CapgMyo (DB-a) [15] b | HD-sEMG | 18N | 8 + rest | 10 | 1440 | 3–10 | 8 × 16 = 128 | 1000 | BPF 20–380, BSF 45–55 | |
| CapgMyo (DB-b) [15] b | Between-day (2 days), HD-sEMG | 10N | 8 + rest | 10 | 1600 | 3 | 8 × 16 = 128 | 1000 | BPF 20–380, BSF 45–55 | |
| CapgMyo (DB-c) [15] b | HD-sEMG | 10N | 12 + rest | 10 | 1200 | 3 | 8 × 16 = 128 | 1000 | BPF 20–380, BSF 45–55 | |
| csl-hdemg [45] a | Between-day (5 days), HD-sEMG | 5N (4M 1F) | 26 + rest | 10 | 6500 | 3 | 8 × 24 = 192 | 2048 | BPF 20–400 | 16 |

Note that N, able-bodied (non-amputee) subject; M, male; F, female; BPF, band-pass filter; BSF, band-stop filter. a; b
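The electrode counts above correspond to rectangular grids (e.g., CapgMyo's 8 × 16 = 128 channels), which is what enables the "sEMG image" inputs used by the CNN studies in Table 5: each time sample of all channels is reshaped onto the grid layout. A minimal sketch, where the grid shape and row-major channel ordering are illustrative assumptions and the synthetic signal stands in for a real trial:

```python
import numpy as np

# CapgMyo-style HD-sEMG: 128 channels laid out as an 8 x 16 grid.
fs, rows, cols = 1000, 8, 16
recording = np.random.randn(3 * fs, rows * cols)  # one 3 s trial, 128 channels

# One "instantaneous sEMG image" per time sample: reshape each
# 128-channel sample onto the electrode grid.
frames = recording.reshape(-1, rows, cols)
```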
Table 4. Multi-modal physiological datasets for emotion recognition: subject, experimental protocol, acquisition setup, and pre-processing pipeline for measuring surface EMG.
| Dataset | Research Problem | Affective States | Types of Data | Subjects | Time Duration | EMG Channels | Sampling Rate (Hz) |
|---|---|---|---|---|---|---|---|
| Healey and Picard [55] a | Driver stress recognition | 3 levels of stress | EMG, ECG, GSR, Resp, facial video | 17 of 24N | 54–93 min | 1 (tEMG) | 15.5 |
| DEAP [57] b | Affect recognition based on music video stimuli | 4 quadrants of the valence–arousal space | EMG, BVP, GSR, Resp, Temp, EOG, EEG, facial video | 32N (16M 16F) | 40 × 1 min | 2 (tEMG, zEMG) | 512 |
| DECAF [58] c | Affect recognition based on music video and movie stimuli | 4 quadrants of the valence–arousal space | EMG, ECG, EOG, MEG, facial video | 30N (16M 14F) | 40 × 1 min, 36 × 80 s | 1 (tEMG) | 1000 |
| HR-EEG4EMO [17] d | Affect recognition based on film stimuli | 2 classes of the valence space | EMG, ECG, GSR, Resp, EEG | 40N (31M 9F) | 26 × 40 s − 6 min | Electrodes located on the cheeks | 1000 |
| BioVid Emo DB [59] e | Affect recognition based on film stimuli | 5 discrete emotions | EMG, ECG, GSR, facial video | 86 of 94N (44M 50F) | 15 × 32–245 s | 1 (tEMG) | 512 |
| BioVid [60] e | Heat pain recognition | 5 levels of pain intensity | EMG, ECG, GSR, EEG, facial video | 86 of 90N (45M 45F) | 80 × 4 s | 3 (tEMG, zEMG, cEMG) | 512 |
Table 5. Summary of EMG research studies that have used deep learning techniques.
| Ref. | Deep Learning Model | Deep Learning Software | Input Data (Window Size/Overlap) | Application | Test Conditions | Dataset (Number of Subjects) | Results |
|---|---|---|---|---|---|---|---|
| [24] | UPN: DBN | DeepLearnToolbox b | Time domain features (166 ms/83 ms) | Motion recognition | - | Local data set (28), 2 EMG channels | |
| [105] | UPN: SM-DBN | DeepLearnToolbox b | Time domain features (166 ms/83 ms) | Motion recognition | - | Local data set (28), 2 EMG channels | |
| [106] | UPN | Original scripts by Hinton a | Time domain features (27 ms/10 ms) | Silent speech interface | - | EMG-Array (20), 2 arrays: 1 × 8, 4 × 8 | |
| [107] | | (in-house toolbox) | Time domain features (27 ms/10 ms) | Silent speech interface | - | EMG-UKA (11), 6 EMG channels | |
| [108] | UPN: DBN | n/a | Raw EMG (1 min) | Emotion recognition | Multiple modalities | DEAP | Multi-modal > EEG |
| [109] | UPN: SAE | n/a | Raw EMG | Data compression | Multiple modalities | DEAP | SAE > DWT, CS |
| [110] | UPN: DBN | n/a | Full-wave rectified EMG (sub-sampled at 100 Hz) | Joint angle estimation | Regression | Local dataset (6), 10 EMG channels | |
| [14] | CNN | MXNet d | sEMG image | Motion recognition | - | CapgMyo DB-a, csl-hdemg, Ninapro 1,2 | CNN > LDA, SVM, KNN, MLP, RF (using instantaneous values) |
| [15] | CNN | MXNet d | sEMG image | Motion recognition | Inter-subject, Between-day | CapgMyo DB-a,b,c, Ninapro 1 | CNN > LDA, SVM, KNN, RF (using instantaneous values) |
| [111] | CNN | Theano e | Time–frequency features (285 ms/20 ms) | Motion recognition | Inter-subject, Between-day | Côté-Allard et al. (18) | Offline: 97.71%, online: 93.14% |
| [112] | CNN | Theano e | Time–frequency features (260 ms/25 ms) | Motion recognition | Inter-subject | Côté-Allard et al. (35) | Offline: 97.81% |
| [11] | CNN | Theano e | Time–frequency features (260 ms/25 ms) | Motion recognition | Inter-subject, Between-day | Côté-Allard et al. (36), Ninapro 5 | Côté-Allard et al.: 98.31% (for 7 motion classes), Ninapro 5: 65.57% (for 18 motion classes) |
| [113] | CNN | MatConvNet c | Time–frequency features (200 ms/100 ms) | Motion recognition | Amputation | Ninapro 2,3 | CNN > SVM |
| [114] | CNN | MatConvNet c | Raw EMG (150 ms) | Motion recognition | Amputation | Ninapro 1,2,3 | Ninapro 1,2: RF > CNN; Ninapro 3: SVM > CNN |
| [118] | CNN | n/a | Raw EMG (200 ms) | Motion recognition | Inter-subject | Ninapro 1 | CNN > SVM |
| [102] | CNN | Keras f + TensorFlow g | Raw EMG (150 ms/5 ms) | Motion recognition | Compact architecture | Local data set (10), 8 + 5 EMG channels | |
| [23] | RNN + CNN | n/a | Time–frequency features (50 ms/30 ms) | Joint angle estimation | Regression | Local data set (8), 5 EMG channels | |
| [115] | RNN | CNTK h | Time domain features (200 ms/150 ms) | Motion recognition | Amputation | Ninapro 7 | RNN > RNN + CNN > CNN |

Note that UPN, unsupervised pre-trained networks; DBN, deep belief network; SM-DBN, split-and-merge deep belief network; SAE, stacked auto-encoder; CNN, convolutional neural network; RNN, recurrent neural network; SVM, support vector machine; SVR, support vector regression; LDA, linear discriminant analysis; GMM, Gaussian mixture model; KNN, k-nearest neighbors; MLP, multi-layer perceptron neural network; RF, random forests; DWT, discrete wavelet transform; CS, compressive sensing; PCA, principal component analysis. a; b; c; d; e; f; g; h
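Most studies in Table 5 feed the network sliding analysis windows rather than whole trials (the "Window Size/Overlap" column, e.g., 200 ms windows with 100 ms overlap). A minimal NumPy sketch of that segmentation step; the function name, parameters, and synthetic signal are illustrative, not taken from any cited toolbox:

```python
import numpy as np

def segment_emg(signal, fs, win_ms, overlap_ms):
    """Slice a 1-D EMG channel into overlapping analysis windows."""
    win = int(fs * win_ms / 1000)             # samples per window
    step = win - int(fs * overlap_ms / 1000)  # hop between window starts
    return np.array([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

emg = np.random.randn(2000)  # 1 s of synthetic EMG at 2 kHz
windows = segment_emg(emg, fs=2000, win_ms=200, overlap_ms=100)
# 200 ms windows with 100 ms overlap -> windows.shape == (9, 400)
```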

Phinyomark, A.; Scheme, E. EMG Pattern Recognition in the Era of Big Data and Deep Learning. Big Data Cogn. Comput. 2018, 2, 21.