# Data Augmentation of Surface Electromyography for Hand Gesture Recognition


## Abstract


## 1. Introduction

- the presentation of common (i.e., additive noise, overlapping windows) and novel (i.e., magnitude warping, wavelet decomposition, synthetic sEMG models) data augmentation methods for sEMG signals
- an extensive and systematic evaluation of the aforementioned augmentation methods
- an improvement in classification accuracy of the state-of-the-art model [19].

## 2. Materials and Methods

#### 2.1. Augmentation Tools

- Augmentor.One (AO): applies only one augmentation method, randomly selected from the available methods; e.g., with two augmentations ($A_1, A_2$), this is equivalent to applying $A_1$ to half of the data and $A_2$ to the other half.
- Augmentor.All (AA): applies all $K$ methods consecutively, i.e., $x_i^* = A_K\left(A_{K-1}(\dots A_1\left(x_i\right))\right)$.
- Augmentor.Random (AR): applies each of the $K$ augmentation methods successively with probability $p_k$, $k\in [1,K]$: $x_i^* = A_K(p_K, A_{K-1}(p_{K-1},\dots A_1(p_1, x_i)))$. An augmentation method $A_k$ is applied if a random number sampled from a uniform distribution $\mathcal{U}(0,1)$ is smaller than $p_k$. In these experiments, all probabilities are set to $p_k = 0.5$.
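The three strategies above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are ours, each augmentation is assumed to be a callable $x \mapsto A(x)$, and AR applies each method with probability $p$.

```python
import random

def augmentor_one(x, augmentations):
    """AO: apply exactly one augmentation, chosen uniformly at random."""
    return random.choice(augmentations)(x)

def augmentor_all(x, augmentations):
    """AA: apply all K augmentations consecutively, x* = A_K(...A_1(x))."""
    for aug in augmentations:
        x = aug(x)
    return x

def augmentor_random(x, augmentations, p=0.5):
    """AR: apply each augmentation independently with probability p."""
    for aug in augmentations:
        if random.random() < p:
            x = aug(x)
    return x
```

With two toy augmentations `A1 = lambda x: x + 1` and `A2 = lambda x: x * 2`, `augmentor_all(3, [A1, A2])` composes both, whereas `augmentor_one` picks exactly one of them.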

#### 2.2. CNN Architectures

#### 2.3. Datasets

## 3. Experiments

#### 3.1. Metrics

- the Silhouette Coefficient (SC) [27]: $$\mathit{SC}=\frac{1}{N}\sum_{i=1}^{N}\frac{b(i)-a(i)}{\max\{a(i),b(i)\}}$$ where $a(i)$ is the mean distance of sample $i$ to the other samples of its own cluster and $b(i)$ is the smallest mean distance of $i$ to the samples of any other cluster;
- the Davies–Bouldin (DB) index [28]: $$\mathit{DB}=\frac{1}{K}\sum_{i=1}^{K}\max_{j\ne i}\left(\frac{\sigma_i+\sigma_j}{d(c_i,c_j)}\right)$$ where $\sigma_i$ is the average distance of the samples of cluster $i$ to its centroid $c_i$ and $d(c_i,c_j)$ is the distance between centroids.
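Both metrics can be computed directly from the formulas above. The NumPy sketch below is ours (the paper does not specify an implementation); it assumes Euclidean distances and that every cluster contains at least two samples.

```python
import numpy as np

def silhouette_coefficient(X, labels):
    """SC = (1/N) * sum_i (b(i) - a(i)) / max{a(i), b(i)}."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        # a(i): mean distance to the other members of i's own cluster
        a = D[i, same & (np.arange(len(X)) != i)].mean()
        # b(i): smallest mean distance to the members of any other cluster
        b = min(D[i, labels == k].mean() for k in set(labels) - {labels[i]})
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def davies_bouldin(X, labels):
    """DB = (1/K) * sum_i max_{j != i} (sigma_i + sigma_j) / d(c_i, c_j)."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    ks = sorted(set(labels))
    cents = [X[labels == k].mean(axis=0) for k in ks]
    # sigma_k: mean distance of cluster k's samples to its centroid
    sig = [np.linalg.norm(X[labels == k] - c, axis=1).mean()
           for k, c in zip(ks, cents)]
    return float(np.mean([
        max((sig[i] + sig[j]) / np.linalg.norm(cents[i] - cents[j])
            for j in range(len(ks)) if j != i)
        for i in range(len(ks))
    ]))
```

Higher SC (closer to 1) and lower DB both indicate tighter, better-separated clusters, which is why the two metrics move in opposite directions in Tables A8 and A9.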

#### 3.2. Model Optimization

## 4. Results

## 5. Discussion

## 6. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## Appendix A

**Figure A1.** Example of smooth curve generation used in the MW method. Random values for $T=6$ equidistant points are sampled from $\mathcal{N}(\mu =1,\sigma =0.2)$ and then interpolated using a cubic spline.
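The curve-generation step described in Figure A1 can be sketched as follows. This is our illustrative reading of the caption (the function name and the per-signal application are assumptions): random magnitudes at $T$ equidistant knots are drawn from $\mathcal{N}(\mu, \sigma)$, interpolated with SciPy's `CubicSpline`, and the resulting smooth curve multiplies the signal.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def magnitude_warp(x, T=6, sigma=0.2, mu=1.0, seed=None):
    """Magnitude warping: multiply the signal by a smooth random curve.

    Random values at T equidistant knots are sampled from N(mu, sigma)
    and interpolated with a cubic spline (cf. Figure A1)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    knots_x = np.linspace(0, n - 1, T)       # T equidistant knot positions
    knots_y = rng.normal(mu, sigma, size=T)  # random knot magnitudes
    curve = CubicSpline(knots_x, knots_y)(np.arange(n))
    return x * curve
```

Setting $\sigma = 0$ makes every knot equal to $\mu = 1$, so the warp reduces to the identity, which is a quick sanity check on the construction.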

**Figure A2.** Examples of the investigated augmentation methods applied to the RMS of an sEMG signal from Ninapro-DB1. The colors correspond to the different electrode channels.

**Figure A4.** Average test accuracy of AtzoriNet* for each method at different augmentation ratios, R, for the Ninapro-DB1 dataset. The solid line corresponds to $\tau =15$ and the dashed line to $\tau =1$ (see also Figure A3).

**Figure A5.** Average test accuracy of AtzoriNet* for each method at different augmentation ratios, R, for the putEMG dataset. The solid line corresponds to $\tau =15$ and the dashed line to $\tau =1$ (see also Figure A3).

**Figure A6.** Average loss and accuracy graphs (**left**), augmentation vs. baseline-1 and baseline-2 accuracy per subject (**middle**), and t-SNE embeddings of a single subject (**right**) for the different augmentation methods on Ninapro-DB1. The investigated method is shown on the left of the figure. In the left plot, the solid lines correspond to training and the dashed lines to testing. In the middle plots, colored points correspond to different subjects and the ellipse shows the general shape of the points. In the t-SNE visualizations, colored points correspond to different gestures, while the values in the title are the SC/DB/accuracy metrics.

**Figure A7.** Average loss and accuracy graphs (**left**), augmentation vs. baseline-1 and baseline-2 accuracy per subject (**middle**), and t-SNE embeddings of a single subject (**right**) for the different augmentation methods on putEMG. The investigated method is shown on the left of the figure. In the left plot, the solid lines correspond to training and the dashed lines to testing. In the middle plots, colored points correspond to different subjects and the ellipse shows the general shape of the points. In the t-SNE visualizations, colored points correspond to different gestures, while the values in the title are the SC/DB/accuracy metrics.

**Figure A8.** F1 scores of AtzoriNet* for each gesture of the Ninapro-DB1 dataset for $\tau =1$ and $R=9$.

Model | AtzoriNet* [22] | WeiNet [19]
---|---|---
Details | Conv2D(32, (1, $C$)) | s = {
 | Conv2D(32, (3,3)) | Conv2D(64, (3,3))
 | MaxPool(3,3) | Conv2D(64, (3,3))
 | Conv2D(64, (5,5)) | LC(64, (1,1))
 | MaxPool(3,3) | LC(64, (1,1))
 | Conv2D(64, (5,1)) | } × $C$
 | Conv2D($G$, (1,1)) | Concat(s)
 | Softmax | FC(512)
 | | FC(512)
 | | FC(128)
 | | FC($G$)
 | | Softmax
Parameters | 84K | 8M

Model | AtzoriNet* [22] | WeiNet [19]
---|---|---
Optimizer | Adam | SGD with momentum
Learning rate | 0.001 | 0.1
Learning rate schedule | constant | divide by 10 after epochs 16 and 24
Weight decay | 0.0005 | 0.0001
Epochs | 100 (early stopping) | 28 (preceded by pretraining)
Batch size | 512 | 1024

**Table A3.** Dataset details and partition into train, validation, and test sets based on gesture repetitions.

Dataset | Sampling (Hz) | sEMG Channels | Subjects | Gestures | Trials | Train | Validation | Test
---|---|---|---|---|---|---|---|---
Ninapro-DB1 [23] | 100 | 10 | 27 | 52+1 | 10 | 1, 3, 4, 6, 8, 9, 10 | 9, 10 | 2, 5, 7
putEMG [25] | 5124 | $3\times 8$ | 44 | 7+1 | 8 | 1, 3, 4, 6, 7, 8 | 8 | 2, 5

**Table A4.** Augmentation hyper-parameter selection and search space for each method. The selected values are shown for each dataset.

Method | Hyper-Parameter | Search Space | Selected Value (DB1/putEMG)
---|---|---|---
WD | $\psi$ | {coif2, db7, sym4} | sym4/db7
 | $l$ | {4, 5} | 5
 | $b$ | {0, 2–5} | 3/0
MW | $\sigma$ | {0.1, 0.2, 0.3} | 0.1
GN | $\mathit{SNR}$ | {25, 30, 35} | 30/35
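The WD method decomposes a signal with a discrete wavelet transform (wavelet $\psi$, level $l$), perturbs coefficients, and reconstructs. As a minimal illustration only, the sketch below uses a hand-rolled single-level Haar transform rather than the paper's $\psi \in$ {coif2, db7, sym4} at level $l$, and stands in for the paper's perturbation by jittering the detail coefficients; the function name and jitter scheme are ours.

```python
import numpy as np

def haar_wd_augment(x, sigma=0.1, seed=None):
    """Illustrative wavelet-decomposition augmentation:
    single-level Haar decompose -> jitter details -> reconstruct."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    assert len(x) % 2 == 0, "even length required for this sketch"
    a = (x[0::2] + x[1::2]) / np.sqrt(2)          # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # detail coefficients
    d = d * rng.normal(1.0, sigma, size=d.shape)  # perturb details only
    y = np.empty_like(x)                          # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

With `sigma=0` the transform pair reconstructs the input exactly, which confirms that only the injected coefficient jitter changes the signal.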

**Table A5.** Augmentation hyper-parameter selection results. For each method and dataset, the higher accuracy is shown in bold.

Method | Hyper-Parameter Value | Accuracy Ninapro-DB1 | Accuracy putEMG
---|---|---|---
GN | $\mathit{SNR}=25$ | 0.5862 | 0.8150
GN | $\mathit{SNR}=30$ | **0.6007** | 0.8509
GN | $\mathit{SNR}=35$ | 0.5994 | **0.8551**
MW | $\sigma =0.1$ | **0.6342** | **0.8529**
MW | $\sigma =0.2$ | 0.6123 | 0.8210
MW | $\sigma =0.3$ | 0.5904 | 0.7486
WD | $\psi =\mathrm{coif}2,l=4,b=0$ | 0.6145 | 0.8377
WD | $\psi =\mathrm{coif}2,l=4,b=2$ | 0.6068 | 0.8576
WD | $\psi =\mathrm{coif}2,l=4,b=3$ | 0.6074 | 0.8456
WD | $\psi =\mathrm{coif}2,l=4,b=4$ | 0.6064 | 0.8629
WD | $\psi =\mathrm{coif}2,l=4,b=5$ | 0.5890 | 0.8763
WD | $\psi =\mathrm{coif}2,l=5,b=0$ | 0.6112 | 0.8611
WD | $\psi =\mathrm{coif}2,l=5,b=2$ | 0.6152 | 0.8664
WD | $\psi =\mathrm{coif}2,l=5,b=3$ | 0.6129 | 0.8752
WD | $\psi =\mathrm{coif}2,l=5,b=4$ | 0.5985 | 0.8473
WD | $\psi =\mathrm{coif}2,l=5,b=5$ | 0.5764 | 0.8383
WD | $\psi =\mathrm{db}7,l=4,b=0$ | 0.6127 | 0.8454
WD | $\psi =\mathrm{db}7,l=4,b=2$ | 0.6120 | 0.8592
WD | $\psi =\mathrm{db}7,l=4,b=3$ | 0.6068 | 0.8617
WD | $\psi =\mathrm{db}7,l=4,b=4$ | 0.5907 | 0.8669
WD | $\psi =\mathrm{db}7,l=4,b=5$ | 0.5751 | 0.8391
WD | $\psi =\mathrm{db}7,l=5,b=0$ | 0.6168 | **0.8770**
WD | $\psi =\mathrm{db}7,l=5,b=2$ | 0.6142 | 0.8429
WD | $\psi =\mathrm{db}7,l=5,b=3$ | 0.6036 | 0.8714
WD | $\psi =\mathrm{db}7,l=5,b=4$ | 0.5858 | 0.8635
WD | $\psi =\mathrm{db}7,l=5,b=5$ | 0.5634 | 0.8213
WD | $\psi =\mathrm{sym}4,l=4,b=0$ | 0.6146 | 0.8567
WD | $\psi =\mathrm{sym}4,l=4,b=2$ | 0.6149 | 0.8640
WD | $\psi =\mathrm{sym}4,l=4,b=3$ | 0.6129 | 0.8446
WD | $\psi =\mathrm{sym}4,l=4,b=4$ | 0.6072 | 0.8757
WD | $\psi =\mathrm{sym}4,l=4,b=5$ | 0.6008 | 0.8632
WD | $\psi =\mathrm{sym}4,l=5,b=0$ | 0.6162 | 0.8579
WD | $\psi =\mathrm{sym}4,l=5,b=2$ | 0.6122 | 0.8759
WD | $\psi =\mathrm{sym}4,l=5,b=3$ | **0.6240** | 0.8372
WD | $\psi =\mathrm{sym}4,l=5,b=4$ | 0.6004 | 0.8521
WD | $\psi =\mathrm{sym}4,l=5,b=5$ | 0.5846 | 0.8590
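The GN entries above add white Gaussian noise scaled to a target signal-to-noise ratio in dB ($\mathit{SNR} \in$ {25, 30, 35} in Table A4). A minimal sketch of that scaling, with a function name of our own choosing:

```python
import numpy as np

def add_gaussian_noise(x, snr_db, seed=None):
    """Add white Gaussian noise at a target signal-to-noise ratio (dB)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    p_signal = np.mean(x ** 2)
    # SNR_dB = 10 * log10(P_signal / P_noise)  =>  solve for the noise power
    p_noise = p_signal / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)
```

Because the noise variance is tied to the signal power, the same `snr_db` setting produces proportionally scaled noise for weak and strong contractions alike.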

**Table A6.** Average accuracy of AtzoriNet* for each method at different augmentation ratios, R, for the Ninapro-DB1 dataset. The value in bold corresponds to the method with the highest accuracy.

R | SS1 | SS2 | WD | MW | GN | WD+MW | AA | AO | AR
---|---|---|---|---|---|---|---|---|---
1 | 0.6937 | 0.7262 | 0.7308 | 0.7337 | 0.7306 | 0.7354 | 0.7284 | 0.7312 | 0.7322
4 | 0.6332 | 0.7166 | 0.7405 | 0.7436 | 0.7251 | 0.7389 | 0.7391 | 0.7377 | 0.7399
9 | 0.5786 | 0.7152 | 0.7345 | **0.7443** | 0.7331 | 0.7399 | 0.7256 | 0.7405 | 0.7383

SW(15) (baseline-1): 0.6305; SW(01) (baseline-2): 0.7273.

**Table A7.** Average accuracy of AtzoriNet* for each method at different augmentation ratios, R, for the putEMG dataset. The value in bold corresponds to the method with the highest accuracy.

R | SS1 | SS2 | WD | MW | GN | WD+MW | AA | AO | AR
---|---|---|---|---|---|---|---|---|---
1 | 0.9442 | 0.9498 | 0.9474 | 0.9477 | 0.9496 | 0.9506 | 0.9423 | 0.9453 | 0.9507
4 | 0.9500 | 0.9622 | 0.9660 | 0.9580 | 0.9569 | 0.9571 | 0.9489 | 0.9621 | 0.9616
9 | 0.9471 | 0.9584 | 0.9586 | 0.9605 | 0.9616 | 0.9612 | 0.9498 | 0.9696 | 0.9660
15 | 0.9406 | 0.9613 | 0.9602 | 0.9650 | 0.9654 | 0.9606 | 0.9482 | **0.9697** | 0.9672

**Table A8.** Average and standard deviation across subjects of the cluster metrics Silhouette Coefficient (SC), Davies–Bouldin (DB), and classification accuracy (row of $R=9$ in Table A6) for the Ninapro-DB1 dataset.

Method | SC | DB | Accuracy
---|---|---|---
SW(15) (baseline-1) | 0.2089 ± 0.0498 | 1.67 ± 0.21 | 0.6305 ± 0.0608
SW(01) (baseline-2) | 0.2783 ± 0.0461 | 1.45 ± 0.17 | 0.7273 ± 0.0581
SS1(01) | 0.1535 ± 0.0395 | 1.90 ± 0.22 | 0.5786 ± 0.0594
SS2(01) | 0.2980 ± 0.0486 | 1.36 ± 0.16 | 0.7152 ± 0.0564
WD(01) | 0.2627 ± 0.0468 | 1.47 ± 0.18 | 0.7345 ± 0.0591
MW(01) | 0.2768 ± 0.0459 | 1.42 ± 0.16 | 0.7443 ± 0.0558
GN(01) | 0.2814 ± 0.0492 | 1.42 ± 0.18 | 0.7331 ± 0.0618
WD+MW(01) | 0.2686 ± 0.0428 | 1.44 ± 0.15 | 0.7399 ± 0.0594
AA(01) | 0.2600 ± 0.0430 | 1.47 ± 0.15 | 0.7256 ± 0.0587
AO(01) | 0.2838 ± 0.0465 | 1.40 ± 0.17 | 0.7405 ± 0.0588
AR(01) | 0.2842 ± 0.0468 | 1.41 ± 0.17 | 0.7383 ± 0.0591

**Table A9.** Average and standard deviation across subjects of the cluster metrics Silhouette Coefficient (SC), Davies–Bouldin (DB), and classification accuracy (row of $R=15$ in Table A7) for the putEMG dataset.

Method | SC | DB | Accuracy
---|---|---|---
SW(15) (baseline-1) | 0.3349 ± 0.0732 | 1.58 ± 0.55 | 0.8071 ± 0.0933
SW(01) (baseline-2) | 0.4998 ± 0.0648 | 0.94 ± 0.22 | 0.9388 ± 0.0602
SS1(01) | 0.4287 ± 0.0903 | 1.11 ± 0.40 | 0.9406 ± 0.0480
SS2(01) | 0.5295 ± 0.0685 | 0.78 ± 0.16 | 0.9613 ± 0.0375
WD(01) | 0.5292 ± 0.0654 | 0.79 ± 0.16 | 0.9602 ± 0.0448
MW(01) | 0.4773 ± 0.0743 | 0.91 ± 0.22 | 0.9650 ± 0.0356
GN(01) | 0.5289 ± 0.0734 | 0.80 ± 0.19 | 0.9654 ± 0.0433
WD+MW(01) | 0.4827 ± 0.0783 | 0.90 ± 0.21 | 0.9606 ± 0.0464
AA(01) | 0.4728 ± 0.0755 | 0.94 ± 0.23 | 0.9482 ± 0.0528
AO(01) | 0.5261 ± 0.0683 | 0.80 ± 0.18 | 0.9697 ± 0.0299
AR(01) | 0.5394 ± 0.0603 | 0.77 ± 0.15 | 0.9672 ± 0.0384

**Table A10.** Pairwise comparisons with Bonferroni correction for the Ninapro-DB1 dataset. The table shows the p-value of the comparison between the classification accuracy of the methods on the corresponding row and column. Values above the diagonal show the comparisons for $R=9$ and below the diagonal for $R=1$ (last and first row of Table A6, respectively). Values in bold correspond to the comparisons of the method with the highest classification accuracy in Table A6 (i.e., MW). An ‘*’ denotes a significant difference ($\alpha =0.05$), while a ‘**’ denotes a p-value $p<0.001$.

 | SW(15) | SW(01) | SS1(01) | SS2(01) | WD(01) | MW(01) | GN(01) | WD+MW(01) | AA(01) | AO(01) | AR(01)
---|---|---|---|---|---|---|---|---|---|---|---
SW(15) | — | ** | ** | ** | ** | ** | ** | ** | ** | ** | **
SW(01) | ** | — | ** | 0.009 * | 0.590 | ** | 1.0 | 0.019 * | 1.0 | 0.004 * | 0.004 *
SS1(01) | ** | ** | — | ** | ** | ** | ** | ** | ** | ** | **
SS2(01) | ** | 1.0 | ** | — | ** | ** | ** | ** | 0.029 * | ** | **
WD(01) | ** | 1.0 | ** | 1.0 | — | 0.007 * | 1.0 | 1.0 | 0.209 | 1.0 | 1.0
MW(01) | ** | 1.0 | ** | 0.270 | 1.0 | — | 0.064 | 1.0 | ** | 1.0 | 0.739
GN(01) | ** | 1.0 | ** | 1.0 | 1.0 | 1.0 | — | 1.0 | 0.554 | 0.123 | 1.0
WD+MW(01) | ** | 0.076 | ** | 0.002 * | 0.866 | 1.0 | 1.0 | — | ** | 1.0 | 1.0
AA(01) | ** | 1.0 | ** | 1.0 | 1.0 | 0.617 | 1.0 | 0.086 | — | ** | 0.001 *
AO(01) | ** | 1.0 | ** | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | — | 1.0
AR(01) | ** | 1.0 | ** | 0.560 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | —

**Table A11.** Pairwise comparisons with Bonferroni correction for the putEMG dataset. The table shows the p-value of the comparison between the classification accuracy of the methods on the corresponding row and column. Values above the diagonal show the comparisons for $R=15$ and below the diagonal for $R=1$ (last and first row of Table A7, respectively). Values in bold correspond to the comparisons of the method with the highest classification accuracy in Table A7 (i.e., AO). An ‘*’ denotes a significant difference ($\alpha =0.05$), while a ‘**’ denotes a p-value $p<0.001$.

 | SW(15) | SW(01) | SS1(01) | SS2(01) | WD(01) | MW(01) | GN(01) | WD+MW(01) | AA(01) | AO(01) | AR(01)
---|---|---|---|---|---|---|---|---|---|---|---
SW(15) | — | ** | ** | ** | ** | ** | ** | ** | ** | ** | **
SW(01) | ** | — | 1.0 | 0.269 | 0.016 * | 0.003 * | ** | 0.022 * | 1.0 | 0.003 * | 0.006 *
SS1(01) | ** | 1.0 | — | 0.102 | 0.250 | 0.003 * | 0.040 * | 0.604 | 1.0 | ** | 0.002 *
SS2(01) | ** | 1.0 | 1.0 | — | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.572 | 1.0
WD(01) | ** | 1.0 | 1.0 | 1.0 | — | 1.0 | 1.0 | 1.0 | 1.0 | 0.840 | 1.0
MW(01) | ** | 1.0 | 1.0 | 1.0 | 1.0 | — | 1.0 | 1.0 | 0.415 | 1.0 | 1.0
GN(01) | ** | 0.050 | 1.0 | 1.0 | 1.0 | 1.0 | — | 1.0 | 0.564 | 1.0 | 1.0
WD+MW(01) | ** | 0.756 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | — | 1.0 | 1.0 | 1.0
AA(01) | ** | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | — | 0.002 * | 0.001 *
AO(01) | ** | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | — | 1.0
AR(01) | ** | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | —

**Table A12.** Pairwise comparisons with Bonferroni correction for the Ninapro-DB1 dataset. The table shows the p-value of the comparison between the clustering metrics of the methods on the corresponding row and column. Values above the diagonal show the comparisons for the SC metric and below the diagonal for the DB metric (second and third columns of Table A8, respectively). An ‘*’ denotes a significant difference ($\alpha =0.05$), while a ‘**’ denotes a p-value $p<0.001$.

 | SW(15) | SW(01) | SS1(01) | SS2(01) | WD(01) | MW(01) | GN(01) | WD+MW(01) | AA(01) | AO(01) | AR(01)
---|---|---|---|---|---|---|---|---|---|---|---
SW(15) | — | ** | ** | ** | ** | ** | ** | ** | ** | ** | **
SW(01) | ** | — | ** | ** | ** | 1.0 | 1.0 | 0.186 | ** | 1.0 | 0.449
SS1(01) | ** | ** | — | ** | ** | ** | ** | ** | ** | ** | **
SS2(01) | ** | ** | ** | — | ** | ** | ** | ** | ** | ** | **
WD(01) | ** | 1.0 | ** | ** | — | 0.060 | ** | 1.0 | 1.0 | ** | **
MW(01) | ** | 1.0 | ** | 0.016 * | 0.220 | — | 1.0 | 0.005 * | ** | 1.0 | 1.0
GN(01) | ** | 0.379 | ** | 0.007 * | 0.001 * | 1.0 | — | 0.011 * | ** | 1.0 | 1.0
WD+MW(01) | ** | 1.0 | ** | ** | 1.0 | 0.703 | 1.0 | — | 0.087 | ** | **
AA(01) | ** | 1.0 | ** | ** | 1.0 | 0.004 * | 0.009 * | 0.324 | — | ** | **
AO(01) | ** | 0.005 * | ** | 0.029 * | ** | 1.0 | 1.0 | 0.014 * | ** | — | 1.0
AR(01) | ** | 0.001 * | ** | 0.002 * | ** | 1.0 | 1.0 | 0.037 * | ** | 1.0 | —

**Table A13.** Pairwise comparisons with Bonferroni correction for the putEMG dataset. The table shows the p-value of the comparison between the clustering metrics of the methods on the corresponding row and column. Values above the diagonal show the comparisons for the SC metric and below the diagonal for the DB metric (second and third columns of Table A9, respectively). An ‘*’ denotes a significant difference ($\alpha =0.05$), while a ‘**’ denotes a p-value $p<0.001$.

 | SW(15) | SW(01) | SS1(01) | SS2(01) | WD(01) | MW(01) | GN(01) | WD+MW(01) | AA(01) | AO(01) | AR(01)
---|---|---|---|---|---|---|---|---|---|---|---
SW(15) | — | ** | ** | ** | ** | ** | ** | ** | ** | ** | **
SW(01) | ** | — | ** | ** | 0.001 * | 0.345 | ** | 1.0 | 0.086 | 0.030 * | **
SS1(01) | ** | 0.082 | — | ** | ** | 0.002 * | ** | ** | 0.006 * | ** | **
SS2(01) | ** | ** | ** | — | 1.0 | ** | 1.0 | ** | ** | 1.0 | 1.0
WD(01) | ** | ** | ** | 1.0 | — | ** | 1.0 | ** | ** | 1.0 | 1.0
MW(01) | ** | 1.0 | 0.008 * | ** | 0.009 * | — | ** | 1.0 | 1.0 | ** | **
GN(01) | ** | ** | ** | 1.0 | 1.0 | 0.029 * | — | ** | ** | 1.0 | 1.0
WD+MW(01) | ** | 1.0 | 0.008 * | ** | 0.003 * | 1.0 | 0.036 * | — | 1.0 | ** | **
AA(01) | ** | 1.0 | 0.022 * | ** | ** | 1.0 | 0.003 * | 1.0 | — | ** | **
AO(01) | ** | ** | ** | 1.0 | 1.0 | ** | 1.0 | 0.001 * | ** | — | 0.329
AR(01) | ** | ** | ** | 1.0 | 1.0 | ** | 1.0 | ** | ** | 1.0 | —

**Table A14.** Average accuracy of WeiNet for each method at different augmentation ratios, R, for the Ninapro-DB1 dataset. The value in bold corresponds to the highest accuracy.

R | WD | MW | WD + MW
---|---|---|---
1 | 0.8439 | 0.8514 | 0.8498
4 | 0.8413 | 0.8556 | 0.8493
9 | 0.8421 | 0.8573 | 0.8541
15 | 0.8239 | **0.8581** | 0.8511

SW(01) (baseline-2): 0.8480.

## References

1. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60.
2. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS'12), Lake Tahoe, NV, USA, 3–6 December 2012; Curran Associates Inc.: Red Hook, NY, USA, 2012; pp. 1097–1105.
3. Gatys, L.A.; Ecker, A.S.; Bethge, M. A Neural Algorithm of Artistic Style. arXiv 2015, arXiv:1508.06576.
4. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS'14), Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA, USA, 2014; pp. 2672–2680.
5. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2242–2251.
6. Lemley, J.; Bazrafkan, S.; Corcoran, P. Smart Augmentation Learning an Optimal Data Augmentation Strategy. IEEE Access 2017, 5, 5858–5869.
7. Cubuk, E.D.; Zoph, B.; Mane, D.; Vasudevan, V.; Le, Q.V. AutoAugment: Learning Augmentation Policies from Data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 113–123.
8. Cubuk, E.D.; Zoph, B.; Shlens, J.; Le, Q.V. RandAugment: Practical Automated Data Augmentation with a Reduced Search Space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 3008–3017.
9. Um, T.T.; Pfister, F.M.J.; Pichler, D.; Endo, S.; Lang, M.; Hirche, S.; Fietzek, U.; Kulić, D. Data augmentation of wearable sensor data for Parkinson's disease monitoring using convolutional neural networks. In Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI), Glasgow, UK, 13–17 November 2017; ACM Press: New York, NY, USA, 2017; pp. 216–220.
10. Krell, M.M.; Seeland, A.; Kim, S.K. Data Augmentation for Brain-Computer Interfaces: Analysis on Event-Related Potentials Data. arXiv 2018, arXiv:1801.02730.
11. Lashgari, E.; Liang, D.; Maoz, U. Data Augmentation for Deep-Learning-Based Electroencephalography. J. Neurosci. Methods 2020, 346, 108885.
12. Kamycki, K.; Kapuscinski, T.; Oszust, M. Data Augmentation with Suboptimal Warping for Time-Series Classification. Sensors 2019, 20, 98.
13. Atzori, M.; Cognolato, M.; Müller, H. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands. Front. Neurorobot. 2016, 10, 9.
14. Geng, W.; Du, Y.; Jin, W.; Wei, W.; Hu, Y.; Li, J. Gesture recognition by instantaneous surface EMG images. Sci. Rep. 2016, 6, 36571.
15. Du, Y.; Jin, W.; Wei, W.; Hu, Y.; Geng, W. Surface EMG-Based Inter-Session Gesture Recognition Enhanced by Deep Domain Adaptation. Sensors 2017, 17, 458.
16. Côté-Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Laviolette, F.; Gosselin, B. Deep learning for electromyographic hand gesture signal classification using transfer learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 760–771.
17. Fan, B.; Liu, X.; Su, X.; Hui, P.; Niu, J. EmgAuth: An EMG-based Smartphone Unlocking System Using Siamese Network. In Proceedings of the 2020 IEEE International Conference on Pervasive Computing and Communications (PerCom), Austin, TX, USA, 23–27 March 2020; pp. 1–10.
18. Anicet Zanini, R.; Luna Colombini, E. Parkinson's Disease EMG Data Augmentation and Simulation with DCGANs and Style Transfer. Sensors 2020, 20, 2605.
19. Wei, W.; Wong, Y.; Du, Y.; Hu, Y.; Kankanhalli, M.; Geng, W. A multi-stream convolutional neural network for sEMG-based gesture recognition in muscle-computer interface. Pattern Recognit. Lett. 2017, 119, 131–138.
20. Clancy, E.; Negro, F.; Farina, D. Single-channel techniques for information extraction from surface EMG signal. In Surface Electromyography: Physiology, Engineering, and Applications; Merletti, R., Farina, D., Eds.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2016; pp. 91–125.
21. Furui, A.; Hayashi, H.; Nakamura, G.; Chin, T.; Tsuji, T. An artificial EMG generation model based on signal-dependent noise and related application to motion classification. PLoS ONE 2017, 12, e0180112.
22. Tsinganos, P.; Cornelis, B.; Cornelis, J.; Jansen, B.; Skodras, A. Deep Learning in EMG-based Gesture Recognition. In Proceedings of the 5th International Conference on Physiological Computing Systems (PhyCS), Seville, Spain, 19–21 September 2018; SCITEPRESS—Science and Technology Publications, 2018; pp. 107–114.
23. Atzori, M.; Gijsberts, A.; Heynen, S.; Hager, A.G.M.; Deriaz, O.; Van Der Smagt, P.; Castellini, C.; Caputo, B.; Müller, H. Building the Ninapro database: A resource for the biorobotics community. In Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics, Rome, Italy, 24–27 June 2012; pp. 1258–1265.
24. Atzori, M.; Gijsberts, A.; Castellini, C.; Caputo, B.; Hager, A.G.M.; Elsig, S.; Giatsidis, G.; Bassetto, F.; Müller, H. Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Sci. Data 2014, 1, 140053.
25. Kaczmarek, P.; Mańkowski, T.; Tomczyński, J. putEMG—A Surface Electromyography Hand Gesture Recognition Dataset. Sensors 2019, 19, 3548.
26. van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
27. Kaufman, L.; Rousseeuw, P.J. (Eds.) Finding Groups in Data: An Introduction to Cluster Analysis; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1990.
28. Davies, D.L.; Bouldin, D.W. A Cluster Separation Measure. IEEE Trans. Pattern Anal. Mach. Intell. 1979, PAMI-1, 224–227.
29. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Tsinganos, P.; Cornelis, B.; Cornelis, J.; Jansen, B.; Skodras, A. Data Augmentation of Surface Electromyography for Hand Gesture Recognition. *Sensors* **2020**, *20*, 4892.
https://doi.org/10.3390/s20174892
