# sEMG-Based Drawing Trace Reconstruction: A Novel Hybrid Algorithm Fusing Gene Expression Programming into Kalman Filter


## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. The Experiment

#### 2.1.1. Participants

#### 2.1.2. Drawn and Written Symbols

#### 2.1.3. Test Muscles

#### 2.1.4. Experimental Protocol

The symbols were printed on paper as templates placed on the work area of the digital tablet under a transparent photo-frame cover. Twenty-four sheets of template paper were prepared for the task phase. In the task phase, the sEMG electrodes were attached to the right arm and hand of one subject. After all signals were confirmed stable and normal, the subject was required to sit at a desk in front of the digital tablet in a comfortable posture, as shown in Figure 3. The subject then clicked and held the starting button of the digital pen to trigger the collection of the drawing trace as X-coordinates and Y-coordinates; at the same time, the sEMG signals were recorded synchronously. When the shape was completed, the subject released the starting button to end the trace collection.

#### 2.1.5. Data Acquisition

#### 2.1.6. Two Experimental Designs

($F(x_n)$ and $F(y_n)$) from the dataset of each shape. Note that we implemented two basic experimental designs for drawn shapes and written characters separately; in total, we constructed four experimental designs.

#### 2.2. Data Pre-Processing

#### 2.3. Reconstruction Algorithm

**Algorithm 1** KF-GEP Parameters Training

**Input:** training set $RMS_{i=1}^{k}$; x-coordinate sequence $x_{i=1}^{k}$; y-coordinate sequence $y_{i=1}^{k}$

**Output:** state-transition model $A$; process noise covariance $Q$; covariance of the observation noise $R$; observation model of $\Delta\widehat{x}$: $\mathrm{GEP}_X(\,)$; observation model of $\Delta\widehat{y}$: $\mathrm{GEP}_Y(\,)$

**Initialize:** initial x-coordinate $x_0 \leftarrow 0$; initial y-coordinate $y_0 \leftarrow 0$; parameters of the GEP algorithm as shown in Table 1

1: **for** $i \leftarrow 1\ \mathrm{to}\ k$ **do**
2: $\quad \Delta x_i \leftarrow x_i - x_{i-1}$
3: $\quad \Delta y_i \leftarrow y_i - y_{i-1}$
4: $\quad \Delta x_i' \leftarrow \Delta x_{i+1}^{T}$
5: $\quad \Delta y_i' \leftarrow \Delta y_{i+1}^{T}$
6: $\quad \Delta xy_i \leftarrow [\Delta x_i^{T},\ \Delta y_i^{T}]$
7: **end for**
8: $\langle \beta_x, u \rangle \leftarrow \mathrm{BiLR}(\Delta xy,\ \Delta x')$
9: $\langle \beta_y, v \rangle \leftarrow \mathrm{BiLR}(\Delta xy,\ \Delta y')$
10: $\langle \mathrm{GEP}_X(\,),\ u' \rangle \leftarrow \mathrm{runGEP}(RMS,\ \Delta x)$
11: $\langle \mathrm{GEP}_Y(\,),\ v' \rangle \leftarrow \mathrm{runGEP}(RMS,\ \Delta y)$
12: $A \leftarrow [\beta_x^{T},\ \beta_y^{T}]^{T}$
13: $Q \leftarrow \mathrm{cov}(u,\ v)$
14: $R \leftarrow \mathrm{cov}(u',\ v')$

The trained observation models $\mathrm{GEP}_X(RMS)$ and $\mathrm{GEP}_Y(RMS)$ respectively map the sEMG signal space into the X-coordinate space and the Y-coordinate space; $u_i'$ and $v_i'$ represent the residuals, from which the covariance of the observation noise $R=\mathrm{cov}(u', v')$ can be deduced (Algorithm 1, Line 14).
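The training routine of Algorithm 1 can be sketched in a few lines of linear algebra. The sketch below is illustrative only: it assumes the coordinate and RMS sequences are NumPy arrays, and a plain least-squares regressor stands in for the evolved symbolic models that runGEP( ) would return. Only the shapes and roles of $A$, $Q$ and $R$ follow the pseudocode.

```python
import numpy as np

def linear_stand_in(rms, target):
    """Least-squares stand-in for runGEP( ): fits the target displacement
    sequence from RMS features and returns (model, residuals)."""
    w, *_ = np.linalg.lstsq(rms, target, rcond=None)
    return (lambda feats: feats @ w), target - rms @ w

def train_kf_gep_parameters(x, y, rms, fit_observation_model):
    """Sketch of Algorithm 1: estimate A, Q, R and the observation models.

    x, y : (k,) pen-coordinate sequences; rms : (k, c) sEMG RMS features.
    fit_observation_model plays the role of runGEP( ).
    """
    # Lines 2-3: per-step displacements, with x_0 = y_0 = 0.
    dx = np.diff(x, prepend=0.0)
    dy = np.diff(y, prepend=0.0)

    # Lines 4-6: current displacements paired with the next step's.
    cur = np.column_stack([dx, dy])[:-1]
    nxt = np.column_stack([dx, dy])[1:]

    # Lines 8-9, 12: bivariate linear regression (BiLR) of the next
    # displacement on the current one gives the state-transition model A.
    beta, *_ = np.linalg.lstsq(cur, nxt, rcond=None)
    A = beta.T
    uv = nxt - cur @ beta          # process residuals (u, v)
    Q = np.cov(uv.T)               # Line 13: process-noise covariance

    # Lines 10-11, 14: observation models and observation-noise covariance.
    gep_x, u_res = fit_observation_model(rms, dx)
    gep_y, v_res = fit_observation_model(rms, dy)
    R = np.cov(np.vstack([u_res, v_res]))
    return A, Q, R, gep_x, gep_y
```

In the paper the two observation models are evolved by GEP rather than fitted linearly; the linear stand-in only makes the parameter-estimation pipeline runnable end to end.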

**Algorithm 2** Continuous Coordinate Reconstruction

**Input:** testing set $RMS_{t=1}^{n}$; observation models $\mathrm{GEP}_X(\,)$ and $\mathrm{GEP}_Y(\,)$; KF parameters $A$, $Q$, $R$

**Output:** updated x-coordinate sequence $\widehat{x}$; updated y-coordinate sequence $\widehat{y}$

**Initialize:** initial x-coordinate $\widehat{x}_0 \leftarrow 0$; initial y-coordinate $\widehat{y}_0 \leftarrow 0$; initial starting state $S_0 \leftarrow 0$; covariance matrix with suitable variances $P_0 \leftarrow I_2$

1: **for** $t \leftarrow 1\ \mathrm{to}\ n$ **do**
2: $\quad \widehat{S}_t \leftarrow A\, S_{t-1}$
3: $\quad \widehat{P}_t \leftarrow A\, P_{t-1} A^{T} + Q$
4: $\quad K_t \leftarrow \widehat{P}_t\, (\widehat{P}_t + R)^{-1}$
5: $\quad P_t \leftarrow \widehat{P}_t - K_t \widehat{P}_t$
6: $\quad \Delta\widehat{x}_t \leftarrow \mathrm{GEP}_X(RMS_t)$
7: $\quad \Delta\widehat{y}_t \leftarrow \mathrm{GEP}_Y(RMS_t)$
8: $\quad C_t \leftarrow [\Delta\widehat{x}_t^{T},\ \Delta\widehat{y}_t^{T}]$
9: $\quad S_t \leftarrow \widehat{S}_t + K_t (C_t - \widehat{S}_t)$
10: $\quad \widehat{x}_t \leftarrow S_t(1) + \widehat{x}_{t-1}$
11: $\quad \widehat{y}_t \leftarrow S_t(2) + \widehat{y}_{t-1}$
12: **end for**

Once trained, the observation models $\mathrm{GEP}_X(\,)$ and $\mathrm{GEP}_Y(\,)$ are available for the Continuous Coordinate Reconstruction routine, which is called in the update step of the KF. Note that in Algorithm 2 the measurement transformation matrix H obtained via a multivariate linear regression equation [3,24] is fully replaced by ${C}_{t}=\left[\begin{array}{c}{\mathrm{GEP}}_{X}\left(RM{S}_{t}\right)\\ {\mathrm{GEP}}_{Y}\left(RM{S}_{t}\right)\end{array}\right]$, as shown in Algorithm 2, Line 8.
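The reconstruction loop of Algorithm 2 can be sketched directly. The code below is a sketch under stated assumptions: the two observation models are passed in as plain Python callables standing in for the evolved GEP models, and, because their outputs already live in the displacement state space, the measurement matrix reduces to the identity in the gain computation.

```python
import numpy as np

def reconstruct_trace(rms_seq, gep_x, gep_y, A, Q, R):
    """Sketch of Algorithm 2: KF update loop with GEP observations.

    rms_seq : (n, c) RMS feature vectors; gep_x/gep_y play the roles
    of the evolved GEP_X( ) and GEP_Y( ) models.
    """
    S = np.zeros(2)            # S_0: state holds the current (dx, dy)
    P = np.eye(2)              # P_0 = I_2
    x_hat, y_hat = 0.0, 0.0    # reconstructed pen position
    xs, ys = [], []
    for rms_t in rms_seq:
        # Lines 2-3: predict state and covariance.
        S_pred = A @ S
        P_pred = A @ P @ A.T + Q
        # Lines 4-5: Kalman gain and covariance update (H acts as the
        # identity, since C_t is already a displacement vector).
        K = P_pred @ np.linalg.inv(P_pred + R)
        P = P_pred - K @ P_pred
        # Lines 6-8: the GEP observation C_t replaces H.
        C = np.array([gep_x(rms_t), gep_y(rms_t)])
        # Line 9: update; Lines 10-11: integrate displacements to positions.
        S = S_pred + K @ (C - S_pred)
        x_hat += S[0]
        y_hat += S[1]
        xs.append(x_hat)
        ys.append(y_hat)
    return np.array(xs), np.array(ys)
```

The cumulative sums in the last two steps turn the filtered per-sample displacements back into a continuous pen trace, which is what smooths the jitter of the raw GEP predictions.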

Reconstruction performance is measured by the coefficient of determination ($R^2$). A more detailed calculation process for the KF, the GEP and $R^2$ can be found in Reference [24]. Finally, the paired-sample Wilcoxon signed-rank test is used to assess whether the reconstruction performance ($R^2$ values) of the KF-GEP tends to be significantly better than those of the KF and the GEP.
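The evaluation step is easy to make concrete. Below is a minimal sketch: $R^2$ computed from recorded versus reconstructed coordinates, and a one-sided paired Wilcoxon signed-rank test via SciPy. The per-trial $R^2$ values are illustrative numbers, not taken from the paper's tables.

```python
import numpy as np
from scipy.stats import wilcoxon

def r_squared(actual, predicted):
    """Coefficient of determination R^2 between a recorded and a
    reconstructed coordinate sequence."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative per-trial R^2 values (hypothetical, one pair per trial).
r2_kf_gep = [0.68, 0.72, 0.65, 0.70, 0.66, 0.74, 0.69, 0.71]
r2_kf = [0.61, 0.66, 0.60, 0.63, 0.62, 0.67, 0.64, 0.65]

# One-sided paired test: does the KF-GEP's R^2 tend to be greater?
stat, p = wilcoxon(r2_kf_gep, r2_kf, alternative="greater")
```

With every paired difference in favour of the KF-GEP, as in this toy sample, the one-sided p-value falls below the 0.05 level used in the paper.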

## 3. Results

#### 3.1. Results of the KF-GEP

#### 3.1.1. Within-Group Reconstruction for Drawing

The $R^2$ values are listed in Table 2. As shown in Table 2, the average accuracy reaches 0.68 ± 0.19 for the reconstructed X-coordinates and 0.62 ± 0.12 for the reconstructed Y-coordinates. Figure 5 shows the reconstruction results of each shape from one participant; the overlapping reconstructed traces shown in Figure 5 were randomly selected from the test trials. Although noise and inaccuracy exist, most of the reconstructed shapes are identifiable, except for the circle, ellipse, square and triangle.

#### 3.1.2. Within-Group Reconstruction for Handwriting

#### 3.1.3. Between-Group Reconstruction for Drawing

#### 3.1.4. Between-Group Reconstruction for Handwriting

#### 3.2. Results of the Kalman Filter

#### 3.2.1. Within-Group Reconstruction for Drawing

#### 3.2.2. Within-Group Reconstruction for Handwriting

#### 3.2.3. Between-Group Reconstruction for Drawing

#### 3.2.4. Between-Group Reconstruction for Handwriting

#### 3.3. Results of the GEP

#### 3.3.1. Within-Group Reconstruction for Drawing

#### 3.3.2. Within-Group Reconstruction for Handwriting

#### 3.3.3. Between-Group Reconstruction for Drawing

#### 3.3.4. Between-Group Reconstruction for Handwriting

#### 3.4. The Comparison of the Three Methods

The average $R^2$ values of all hand-drawn shapes and hand-written characters across subjects achieved by the KF-GEP, the KF and the GEP in within-group reconstruction are shown in Figure 17; those in between-group reconstruction are shown in Figure 18. The statistical differences between the KF-GEP and the KF, and between the KF-GEP and the GEP, measured by the paired-sample Wilcoxon signed-rank test, are also marked in Figure 17 and Figure 18. The figures show directly that, compared to the KF and the GEP, the KF-GEP always reaches significantly higher accuracy at the 0.05 level for both within-group and between-group reconstruction.

## 4. Discussion

$\mathrm{GEP}_X(\,)$ and $\mathrm{GEP}_Y(\,)$ were trained by the GEP algorithm, and the role of the matrix H in the KF was substituted by them. In conclusion, the KF-GEP combines the advantages of the GEP and the KF; thus it can not only improve the reconstruction accuracy but also convert sEMG signals into smoother reconstructions of drawing traces.

1. The reconstruction accuracies of hand-written characters based on the KF in this paper are worse than those in Reference [3]. The main reason may be that the experimental data used in our paper differ from those of [3]: we constructed the models across five subjects, whereas Okorokova et al. (2015) constructed the models for six subjects separately, so inter-subject variability always exists in our method. Training and testing models across subjects is much more feasible, time-saving and natural for the online real-life scenario than doing so within each individual subject independently. In future work, we still need to explore a more advanced method suitable for different participants.
2. The number of participants was relatively small, and the dataset cannot represent the general population. However, the initial findings of this paper are worthy of a further larger-scale study. In future work, we will examine gender, health status, age and so on for potential statistical variation.
3. We pointed out that combining the KF with the GEP can not only smooth the noise but also increase the reconstruction accuracy significantly. However, the average $R^2$ values attained by the KF-GEP are still below 0.85. In future work, we will try other state-of-the-art deep-learning algorithms, such as Convolutional Neural Networks and Recurrent Neural Networks, to improve the prediction performance.
4. Compared to the results of our previous work [24], the average performance over all 14 shapes decreased. The main reason may be that the two additional shapes (triangle and square) always have the lowest prediction performance: they are relatively complicated, involving turns during drawing. In addition, the reconstruction accuracies of the closed shapes (circle and ellipse) remain inferior. Thus, it can be concluded that this method can deduce reconstruction models for basic one-stroke shapes that are neither closed nor involve complicated turns, while closed and complicated shapes can be formed from an arbitrary number of single strokes.

## 5. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

1. Da Silva, H.P.; Fairclough, S.; Holzinger, A.; Jacob, R.; Tan, D. Introduction to the Special Issue on Physiological Computing for Human-Computer Interaction. ACM Trans. Comput. Interact. **2015**, 21, 1–4.
2. Chowdhury, A.; Ramadas, R.; Karmakar, S. Muscle computer interface: A Review. In ICoRD’13; Springer: Berlin, Germany, 2013; pp. 411–421. ISBN 8132210492.
3. Okorokova, E.; Lebedev, M.; Linderman, M.; Ossadtchi, A. A dynamical model improves reconstruction of handwriting from multichannel electromyographic recordings. Front. Neurosci. **2015**, 9, 1–15.
4. Ding, Q.; Han, J.; Zhao, X.; Chen, Y. Missing-Data Classification with the Extended Full-Dimensional Gaussian Mixture Model: Applications to EMG-Based Motion Recognition. IEEE Trans. Ind. Electron. **2015**, 62, 4994–5005.
5. Zhang, X.; Ren, X.; Gao, X.; Chen, X.; Zhou, P. Complexity Analysis of Surface EMG for Overcoming ECG Interference toward Proportional Myoelectric Control. Entropy **2016**, 18, 106.
6. Pan, L.; Zhang, D.; Jiang, N.; Sheng, X.; Zhu, X. Transcranial direct current stimulation versus user training on improving online myoelectric control for amputees. J. Neural Eng. **2017**, 14, 46019.
7. Cheng, J.; Chen, X.; Liu, A.; Peng, H. A novel phonology- and radical-coded Chinese sign language recognition framework using accelerometer and surface electromyography sensors. Sensors **2015**, 15, 23303–23324.
8. Tang, Z.; Zhang, K.; Sun, S.; Gao, Z.; Zhang, L.; Yang, Z. An upper-limb power-assist exoskeleton using proportional myoelectric control. Sensors **2014**, 14, 6677–6694.
9. Farina, D.; Jiang, N.; Rehbaum, H.; Holobar, A.; Graimann, B.; Dietl, H.; Aszmann, O.C. The extraction of neural information from the surface EMG for the control of upper-limb prostheses: Emerging avenues and challenges. IEEE Trans. Neural Syst. Rehabil. Eng. **2014**, 22, 797–809.
10. Field, M.; Gordon, S.; Peterson, E.; Robinson, R.; Stahovich, T.; Alvarado, C. The effect of task on classification accuracy: Using gesture recognition techniques in free-sketch recognition. Comput. Graph. **2010**, 34, 499–512.
11. Mi, G.C. A new gesture recognition algorithm and segmentation method of Korean scripts for gesture-allowed ink editor. Inf. Sci. **2006**, 176, 1290–1303.
12. Notowidigdo, M.; Miller, R.C. Off-Line Sketch Interpretation; American Association for Artificial Intelligence: Palo Alto, CA, USA, 2004.
13. Wobbrock, J.O.; Hall, M.G.; Wilson, A.D. Gestures without Libraries, Toolkits or Training: A $1 Recognizer for User Interface Prototypes. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, Newport, RI, USA, 7–10 October 2007.
14. Kristensson, P.O.; Denby, L.C. Continuous Recognition and Visualization of Pen Strokes and Touch-Screen Gestures. In Proceedings of the Eighth Eurographics Symposium on Sketch-Based Interfaces and Modeling, Vancouver, BC, Canada, 5–7 August 2011; Volume 1.
15. Vikram, S.; Li, L.; Russell, S. Handwriting and Gestures in the Air, Recognizing on the Fly. In Proceedings of the CHI 2013 Extended Abstracts, Paris, France, 27 April–2 May 2013. ACM 978-1-4503-1952-2/13/04.
16. Asano, T.; Honda, S. Visual interface system by character handwriting gestures in the air. In Proceedings of the 19th International Symposium in Robot and Human Interactive Communication, Viareggio, Italy, 13–15 September 2010; pp. 56–61.
17. Zhao, Y. Human Emotion Recognition from Body Language of the Head Using Soft Computing Techniques; University of Ottawa: Ottawa, ON, Canada, 2012.
18. Chen, Y.; Yang, Z.; Wang, J. Eyebrow emotional expression recognition using surface EMG signals. Neurocomputing **2015**, 168, 871–879.
19. Linderman, M.; Lebedev, M.A.; Erlichman, J.S. Recognition of handwriting from electromyography. PLoS ONE **2009**, 4, e6791.
20. Huang, G.; Zhang, D.; Zheng, X.; Zhu, X. An EMG-based handwriting recognition through dynamic time warping. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 4902–4905.
21. Chihi, I.; Abdelkrim, A.; Benrejeb, M. Analysis of Handwriting Velocity to Identify Handwriting Process from Electromyographic Signals. Am. J. Appl. Sci. **2012**, 9, 1742–1756.
22. Li, C.; Ma, Z.; Yao, L.; Zhang, D. Improvements on EMG-based handwriting recognition with DTW algorithm. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 2144–2147.
23. Shih, C.; Lee, W.; Ku, Y. A Vision-Based Fingertip-Writing Character Recognition System. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; pp. 160–168.
24. Chen, Y.; Yang, Z. A Novel Hybrid Model for Drawing Trace Reconstruction from Multichannel Surface Electromyographic Activity. Front. Neurosci. **2017**, 11, 61.
25. Djioua, M.; Plamondon, R. A new methodology to improve myoelectric signal processing using handwriting. In Proceedings of the 11th International Conference on Frontiers in Handwriting Recognition, Montreal, QC, Canada, 19–21 August 2008; pp. 112–118.
26. Englehart, K.; Hudgins, B. A Robust, Real-Time Control Scheme for Multifunction Myoelectric Control. IEEE Trans. Biomed. Eng. **2003**, 50, 848–854.
27. Asghari Oskoei, M.; Hu, H. Myoelectric control systems—A survey. Biomed. Signal Process. Control **2007**, 2, 275–294.
28. Ferreira, C. Gene Expression Programming: Mathematical Modeling by an Artificial Intelligence; Springer: Berlin, Germany, 2006; Volume 21, ISBN 3540328491.
29. Zhang, K.; Sun, S. Web music emotion recognition based on higher effective gene expression programming. Neurocomputing **2013**, 105, 100–106.

**Figure 1.**The fourteen basic one-stroke hand-drawn shapes used in our study. Dots represent starting points, arrows represent directions. The shapes are, from left to right, in order, horizontal line, vertical line, forward slash, backslash, arch, circle, ellipse, reversed horizontal line, reversed vertical line, reversed forward slash, reversed backslash, reversed arch, square and triangle.

**Figure 5.**The randomly selected reconstructed drawing traces of within-group reconstruction by the KF-GEP. The shapes are, from left to right, in order, horizontal line, vertical line, forward slash, backslash, arch, circle, ellipse, reversed horizontal line, reversed vertical line, reversed forward slash, reversed backslash, reversed arch, square and triangle.

**Figure 6.**The randomly selected reconstructed numeric characters of within-group reconstruction by the KF-GEP.

**Figure 7.**The randomly selected reconstructed drawing traces of between-group reconstruction by the KF-GEP.

**Figure 8.**The randomly selected reconstructed numeric characters of between-group reconstruction by the KF-GEP.

**Figure 9.**The randomly selected reconstructed drawing traces of within-group reconstruction by the KF.

**Figure 10.**The randomly selected reconstructed numeric characters of within-group reconstruction by the KF.

**Figure 11.**The randomly selected reconstructed drawing traces of between-group reconstruction by the KF.

**Figure 12.**The randomly selected reconstructed numeric characters of between-group reconstruction by the KF.

**Figure 13.**The randomly selected reconstructed drawing traces of within-group reconstruction by the GEP.

**Figure 14.**The randomly selected reconstructed numeric characters of within-group reconstruction by the GEP.

**Figure 15.**The randomly selected reconstructed drawing traces of between-group reconstruction by the GEP.

**Figure 16.**The randomly selected reconstructed numeric characters of between-group reconstruction by the GEP.

**Figure 17.**The average $R^2$ values in within-group reconstruction achieved by the three methods. The statistical differences between the KF-GEP and the KF, and between the KF-GEP and the GEP, were measured by the paired-sample Wilcoxon signed-rank test: * the KF-GEP's performance tends to be significantly greater than the KF's and the GEP's at the 0.05 level.

**Figure 18.**The average $R^2$ values in between-group reconstruction achieved by the three methods. The statistical differences between the KF-GEP and the KF, and between the KF-GEP and the GEP, were measured by the paired-sample Wilcoxon signed-rank test: * the KF-GEP's performance tends to be significantly greater than the KF's and the GEP's at the 0.05 level.

**Table 1.**The parameters involved in the GEP algorithm.

| Parameter | Value |
|---|---|
| Number of chromosomes | 200 |
| Function set | F = {+, −, ×, /, Sin, Cos, Sqrt, x^{2}, Inv, Exp} |
| Terminal set | T = {T_{1}, T_{2}, T_{3}, T_{4}, T_{5}, T_{6}, T_{7}} |
| Number of genes, head size, gene size | 6, 15, 31 |
| Linking function | Addition |
| Fitness function error type | RMSE |
| Mutation rate | 0.00138 |
| Inversion rate | 0.00546 |
| IS/RIS/gene transposition rate | 0.00546 |
| One-point/two-point recombination rate | 0.00277 |
| Gene recombination rate | 0.00277 |
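The head size and gene size above fix how each GEP gene's Karva string (K-expression) decodes into an expression tree, level by level, with the sub-expressions of a multi-gene chromosome linked by addition. The sketch below illustrates only that decoding step, using a hypothetical three-operator function set and a made-up gene; the paper's full function set and sizes are those of Table 1.

```python
import math

# Hypothetical mini function set: symbol -> (arity, implementation).
# Table 1's full set also has -, /, Sqrt, x^2, Inv, Exp and Cos.
FUNCS = {
    "+": (2, lambda a, b: a + b),
    "*": (2, lambda a, b: a * b),
    "S": (1, math.sin),  # 'S' encodes Sin
}

def eval_kexpr(kexpr, env):
    """Decode a K-expression (Karva notation) breadth-first and evaluate it.

    The first symbol is the root; each function symbol consumes its
    arity's worth of the following symbols as children, level by level.
    A valid gene (head + tail sized as in Table 1) always supplies
    enough terminals, and trailing unused symbols are simply ignored.
    """
    symbols = list(kexpr)
    root = {"sym": symbols.pop(0), "kids": []}
    level = [root]
    while level and symbols:
        nxt = []
        for node in level:
            arity = FUNCS[node["sym"]][0] if node["sym"] in FUNCS else 0
            for _ in range(arity):
                child = {"sym": symbols.pop(0), "kids": []}
                node["kids"].append(child)
                nxt.append(child)
        level = nxt

    def ev(node):
        if node["sym"] in FUNCS:
            return FUNCS[node["sym"]][1](*(ev(k) for k in node["kids"]))
        return env[node["sym"]]  # terminal: look up its value

    return ev(root)

def eval_chromosome(genes, env):
    """Link a multi-gene chromosome's sub-expressions by addition,
    the linking function listed in Table 1."""
    return sum(eval_kexpr(g, env) for g in genes)
```

For example, the gene `"+*cab"` with terminals a, b, c decodes to the tree (a × b) + c: the root `+` takes `*` and `c` as children, and `*` takes `a` and `b`.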

**Table 2.**Within-group reconstruction performance by the KF-GEP: the average accuracy of each shape across the five subjects.

Average performance, $R^2$:

| Shape | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| Horizontal line | 0.99 ± 0.00 | 0.71 ± 0.26 | 0.85 ± 0.13 |
| Vertical line | 0.42 ± 0.27 | 0.95 ± 0.13 | 0.69 ± 0.15 |
| Forward slash | 0.84 ± 0.25 | 0.99 ± 0.01 | 0.92 ± 0.10 |
| Backslash | 0.99 ± 0.01 | 0.99 ± 0.01 | 0.99 ± 0.01 |
| Arch | 0.92 ± 0.08 | 0.49 ± 0.15 | 0.71 ± 0.11 |
| Circle | 0.24 ± 0.13 | 0.42 ± 0.28 | 0.33 ± 0.18 |
| Ellipse | 0.34 ± 0.20 | 0.09 ± 0.05 | 0.22 ± 0.15 |
| Reversed horizontal line | 0.88 ± 0.27 | 0.47 ± 0.32 | 0.68 ± 0.21 |
| Reversed vertical line | 0.53 ± 0.31 | 0.97 ± 0.06 | 0.75 ± 0.18 |
| Reversed forward slash | 0.81 ± 0.26 | 0.98 ± 0.01 | 0.90 ± 0.13 |
| Reversed backslash | 0.95 ± 0.16 | 0.99 ± 0.01 | 0.97 ± 0.08 |
| Reversed arch | 0.82 ± 0.25 | 0.29 ± 0.21 | 0.56 ± 0.18 |
| Square | 0.55 ± 0.30 | 0.19 ± 0.12 | 0.37 ± 0.20 |
| Triangle | 0.28 ± 0.12 | 0.08 ± 0.05 | 0.18 ± 0.08 |
| All | 0.68 ± 0.19 | 0.62 ± 0.12 | 0.65 ± 0.14 |

**Table 3.**Within-group reconstruction performance by the KF-GEP: the average accuracy of each numeric character across the five subjects.

Average performance, $R^2$:

| Character | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| “0” | 0.49 ± 0.27 | 0.45 ± 0.26 | 0.47 ± 0.27 |
| “1” | 0.85 ± 0.12 | 0.99 ± 0.01 | 0.92 ± 0.07 |
| “2” | 0.44 ± 0.28 | 0.93 ± 0.02 | 0.69 ± 0.15 |
| “3” | 0.21 ± 0.21 | 0.95 ± 0.01 | 0.58 ± 0.11 |
| “4” | 0.19 ± 0.18 | 0.09 ± 0.02 | 0.14 ± 0.09 |
| “5” | 0.35 ± 0.22 | 0.93 ± 0.02 | 0.64 ± 0.13 |
| “6” | 0.16 ± 0.11 | 0.15 ± 0.10 | 0.16 ± 0.09 |
| “7” | 0.51 ± 0.17 | 0.92 ± 0.04 | 0.72 ± 0.10 |
| “8” | 0.18 ± 0.18 | 0.32 ± 0.19 | 0.25 ± 0.18 |
| “9” | 0.59 ± 0.23 | 0.20 ± 0.16 | 0.40 ± 0.19 |
| All | 0.40 ± 0.19 | 0.59 ± 0.06 | 0.50 ± 0.13 |

**Table 4.**Between-group reconstruction performance by the KF-GEP: the average accuracy of each shape across the five subjects.

Average performance, $R^2$:

| Shape | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| Horizontal line | 0.99 ± 0.002 | 0.78 ± 0.26 | 0.89 ± 0.15 |
| Vertical line | 0.52 ± 0.26 | 0.99 ± 0.01 | 0.76 ± 0.12 |
| Forward slash | 0.92 ± 0.27 | 0.93 ± 0.20 | 0.93 ± 0.25 |
| Backslash | 0.99 ± 0.01 | 0.99 ± 0.004 | 0.99 ± 0.01 |
| Arch | 0.98 ± 0.01 | 0.68 ± 0.24 | 0.83 ± 0.12 |
| Circle | 0.79 ± 0.26 | 0.72 ± 0.28 | 0.76 ± 0.26 |
| Ellipse | 0.68 ± 0.28 | 0.69 ± 0.22 | 0.69 ± 0.25 |
| Reversed horizontal line | 0.99 ± 0.01 | 0.54 ± 0.34 | 0.77 ± 0.16 |
| Reversed vertical line | 0.71 ± 0.29 | 0.99 ± 0.01 | 0.85 ± 0.18 |
| Reversed forward slash | 0.99 ± 0.01 | 0.99 ± 0.004 | 0.99 ± 0.01 |
| Reversed backslash | 0.99 ± 0.004 | 0.99 ± 0.01 | 0.99 ± 0.004 |
| Reversed arch | 0.98 ± 0.01 | 0.43 ± 0.26 | 0.71 ± 0.02 |
| Square | 0.66 ± 0.32 | 0.46 ± 0.20 | 0.56 ± 0.28 |
| Triangle | 0.56 ± 0.22 | 0.66 ± 0.12 | 0.61 ± 0.16 |
| All | 0.84 ± 0.13 | 0.77 ± 0.15 | 0.81 ± 0.13 |

**Table 5.**Between-group reconstruction performance by the KF-GEP: the average accuracy of each numeric character across the five subjects.

Average performance, $R^2$:

| Character | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| “0” | 0.73 ± 0.24 | 0.72 ± 0.27 | 0.73 ± 0.21 |
| “1” | 0.92 ± 0.06 | 0.99 ± 0.003 | 0.96 ± 0.03 |
| “2” | 0.28 ± 0.18 | 0.93 ± 0.01 | 0.61 ± 0.09 |
| “3” | 0.26 ± 0.06 | 0.97 ± 0.01 | 0.62 ± 0.04 |
| “4” | 0.32 ± 0.26 | 0.54 ± 0.26 | 0.43 ± 0.21 |
| “5” | 0.19 ± 0.06 | 0.94 ± 0.04 | 0.57 ± 0.03 |
| “6” | 0.39 ± 0.27 | 0.64 ± 0.29 | 0.52 ± 0.24 |
| “7” | 0.62 ± 0.25 | 0.92 ± 0.04 | 0.77 ± 0.14 |
| “8” | 0.17 ± 0.12 | 0.59 ± 0.25 | 0.38 ± 0.16 |
| “9” | 0.49 ± 0.29 | 0.51 ± 0.14 | 0.50 ± 0.15 |
| All | 0.44 ± 0.18 | 0.78 ± 0.13 | 0.60 ± 0.12 |

**Table 6.**Within-group reconstruction performance by the KF: the average accuracy of each shape across the five subjects.

Average performance, $R^2$:

| Shape | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| Horizontal line | 0.99 ± 0.07 | 0.64 ± 0.33 | 0.81 ± 0.17 |
| Vertical line | 0.37 ± 0.29 | 0.98 ± 0.03 | 0.67 ± 0.14 |
| Forward slash | 0.95 ± 0.08 | 0.98 ± 0.02 | 0.97 ± 0.04 |
| Backslash | 0.99 ± 0.01 | 0.98 ± 0.05 | 0.98 ± 0.02 |
| Arch | 0.91 ± 0.07 | 0.38 ± 0.15 | 0.65 ± 0.09 |
| Circle | 0.21 ± 0.22 | 0.35 ± 0.15 | 0.28 ± 0.12 |
| Ellipse | 0.36 ± 0.16 | 0.03 ± 0.05 | 0.19 ± 0.08 |
| Reversed horizontal line | 0.84 ± 0.27 | 0.43 ± 0.31 | 0.63 ± 0.22 |
| Reversed vertical line | 0.48 ± 0.34 | 0.98 ± 0.03 | 0.73 ± 0.17 |
| Reversed forward slash | 0.77 ± 0.27 | 0.98 ± 0.01 | 0.88 ± 0.14 |
| Reversed backslash | 0.82 ± 0.28 | 0.99 ± 0.01 | 0.91 ± 0.14 |
| Reversed arch | 0.70 ± 0.30 | 0.20 ± 0.16 | 0.45 ± 0.18 |
| Square | 0.53 ± 0.35 | 0.17 ± 0.07 | 0.35 ± 0.18 |
| Triangle | 0.21 ± 0.15 | 0.02 ± 0.03 | 0.12 ± 0.08 |
| All | 0.65 ± 0.20 | 0.58 ± 0.10 | 0.62 ± 0.13 |

**Table 7.**Within-group reconstruction performance by the KF: the average accuracy of each numeric character across the five subjects.

Average performance, $R^2$:

| Character | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| “0” | 0.36 ± 0.28 | 0.01 ± 0.01 | 0.19 ± 0.14 |
| “1” | 0.60 ± 0.35 | 0.01 ± 0.01 | 0.19 ± 0.14 |
| “2” | 0.15 ± 0.23 | 0.92 ± 0.03 | 0.54 ± 0.12 |
| “3” | 0.08 ± 0.17 | 0.93 ± 0.02 | 0.50 ± 0.08 |
| “4” | 0.16 ± 0.15 | 0.08 ± 0.04 | 0.12 ± 0.09 |
| “5” | 0.30 ± 0.24 | 0.93 ± 0.02 | 0.61 ± 0.08 |
| “6” | 0.12 ± 0.14 | 0.07 ± 0.05 | 0.09 ± 0.08 |
| “7” | 0.60 ± 0.17 | 0.89 ± 0.03 | 0.75 ± 0.10 |
| “8” | 0.09 ± 0.18 | 0.05 ± 0.05 | 0.07 ± 0.09 |
| “9” | 0.16 ± 0.17 | 0.28 ± 0.07 | 0.21 ± 0.12 |
| All | 0.26 ± 0.21 | 0.42 ± 0.03 | 0.28 ± 0.10 |

**Table 8.**Between-group reconstruction performance by the KF: the average accuracy of each shape across the five subjects.

Average performance, $R^2$:

| Shape | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| Horizontal line | 0.99 ± 0.002 | 0.77 ± 0.25 | 0.88 ± 0.12 |
| Vertical line | 0.45 ± 0.25 | 0.99 ± 0.004 | 0.72 ± 0.12 |
| Forward slash | 0.99 ± 0.007 | 0.99 ± 0.006 | 0.99 ± 0.006 |
| Backslash | 0.99 ± 0.004 | 0.99 ± 0.004 | 0.99 ± 0.004 |
| Arch | 0.98 ± 0.01 | 0.65 ± 0.30 | 0.81 ± 0.15 |
| Circle | 0.76 ± 0.26 | 0.74 ± 0.26 | 0.75 ± 0.17 |
| Ellipse | 0.55 ± 0.25 | 0.59 ± 0.23 | 0.57 ± 0.14 |
| Reversed horizontal line | 0.99 ± 0.007 | 0.47 ± 0.33 | 0.73 ± 0.16 |
| Reversed vertical line | 0.59 ± 0.33 | 0.99 ± 0.007 | 0.79 ± 0.16 |
| Reversed forward slash | 0.99 ± 0.003 | 0.99 ± 0.004 | 0.99 ± 0.003 |
| Reversed backslash | 0.99 ± 0.003 | 0.99 ± 0.003 | 0.99 ± 0.003 |
| Reversed arch | 0.98 ± 0.01 | 0.59 ± 0.31 | 0.78 ± 0.16 |
| Square | 0.55 ± 0.36 | 0.34 ± 0.28 | 0.45 ± 0.22 |
| Triangle | 0.22 ± 0.21 | 0.52 ± 0.17 | 0.37 ± 0.10 |
| All | 0.79 ± 0.12 | 0.76 ± 0.15 | 0.77 ± 0.11 |

**Table 9.**Between-group reconstruction performance by the KF: the average accuracy of each numeric character across the five subjects.

Average performance, $R^2$:

| Character | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| “0” | 0.61 ± 0.29 | 0.71 ± 0.29 | 0.66 ± 0.19 |
| “1” | 0.91 ± 0.06 | 0.99 ± 0.002 | 0.95 ± 0.03 |
| “2” | 0.02 ± 0.03 | 0.92 ± 0.01 | 0.47 ± 0.02 |
| “3” | 0.35 ± 0.24 | 0.96 ± 0.01 | 0.66 ± 0.12 |
| “4” | 0.21 ± 0.20 | 0.47 ± 0.24 | 0.34 ± 0.18 |
| “5” | 0.03 ± 0.02 | 0.94 ± 0.01 | 0.49 ± 0.01 |
| “6” | 0.15 ± 0.18 | 0.51 ± 0.30 | 0.33 ± 0.16 |
| “7” | 0.37 ± 0.32 | 0.89 ± 0.03 | 0.63 ± 0.16 |
| “8” | 0.11 ± 0.10 | 0.36 ± 0.25 | 0.23 ± 0.13 |
| “9” | 0.22 ± 0.23 | 0.52 ± 0.16 | 0.37 ± 0.14 |
| All | 0.30 ± 0.16 | 0.72 ± 0.13 | 0.51 ± 0.11 |

**Table 10.**Within-group reconstruction performance by the GEP: the average accuracy of each shape across the five subjects.

Average performance, $R^2$:

| Shape | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| Horizontal line | 0.99 ± 0.01 | 0.59 ± 0.32 | 0.79 ± 0.16 |
| Vertical line | 0.35 ± 0.29 | 0.94 ± 0.13 | 0.65 ± 0.15 |
| Forward slash | 0.76 ± 0.29 | 0.97 ± 0.06 | 0.88 ± 0.14 |
| Backslash | 0.97 ± 0.03 | 0.99 ± 0.01 | 0.98 ± 0.02 |
| Arch | 0.91 ± 0.14 | 0.43 ± 0.17 | 0.67 ± 0.11 |
| Circle | 0.18 ± 0.13 | 0.34 ± 0.27 | 0.26 ± 0.16 |
| Ellipse | 0.23 ± 0.18 | 0.04 ± 0.06 | 0.13 ± 0.09 |
| Reversed horizontal line | 0.89 ± 0.30 | 0.46 ± 0.28 | 0.63 ± 0.23 |
| Reversed vertical line | 0.45 ± 0.32 | 0.97 ± 0.04 | 0.71 ± 0.17 |
| Reversed forward slash | 0.73 ± 0.28 | 0.98 ± 0.03 | 0.85 ± 0.14 |
| Reversed backslash | 0.89 ± 0.14 | 0.99 ± 0.01 | 0.94 ± 0.07 |
| Reversed arch | 0.81 ± 0.24 | 0.28 ± 0.22 | 0.55 ± 0.17 |
| Square | 0.55 ± 0.33 | 0.12 ± 0.10 | 0.33 ± 0.18 |
| Triangle | 0.23 ± 0.16 | 0.02 ± 0.03 | 0.13 ± 0.08 |
| All | 0.64 ± 0.20 | 0.61 ± 0.12 | 0.58 ± 0.14 |

**Table 11.**Within-group reconstruction performance by the GEP: the average accuracy of each numeric character across the five subjects.

Average performance, $R^2$:

| Character | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| “0” | 0.49 ± 0.28 | 0.38 ± 0.24 | 0.44 ± 0.19 |
| “1” | 0.78 ± 0.19 | 0.99 ± 0.01 | 0.88 ± 0.09 |
| “2” | 0.42 ± 0.29 | 0.94 ± 0.02 | 0.68 ± 0.15 |
| “3” | 0.17 ± 0.26 | 0.95 ± 0.02 | 0.56 ± 0.13 |
| “4” | 0.16 ± 0.15 | 0.07 ± 0.04 | 0.12 ± 0.08 |
| “5” | 0.32 ± 0.22 | 0.93 ± 0.02 | 0.63 ± 0.12 |
| “6” | 0.14 ± 0.12 | 0.13 ± 0.08 | 0.13 ± 0.07 |
| “7” | 0.46 ± 0.08 | 0.87 ± 0.09 | 0.66 ± 0.04 |
| “8” | 0.16 ± 0.16 | 0.29 ± 0.17 | 0.22 ± 0.12 |
| “9” | 0.59 ± 0.21 | 0.19 ± 0.17 | 0.40 ± 0.16 |
| All | 0.37 ± 0.20 | 0.57 ± 0.08 | 0.47 ± 0.12 |

**Table 12.**Between-group reconstruction performance by the GEP: the average accuracy of each shape across the five subjects.

Average performance, $R^2$:

| Shape | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| Horizontal line | 0.99 ± 0.003 | 0.74 ± 0.27 | 0.87 ± 0.14 |
| Vertical line | 0.43 ± 0.26 | 0.99 ± 0.004 | 0.71 ± 0.13 |
| Forward slash | 0.99 ± 0.01 | 0.93 ± 0.22 | 0.96 ± 0.10 |
| Backslash | 0.99 ± 0.004 | 0.99 ± 0.004 | 0.99 ± 0.004 |
| Arch | 0.98 ± 0.01 | 0.68 ± 0.28 | 0.83 ± 0.14 |
| Circle | 0.78 ± 0.24 | 0.65 ± 0.31 | 0.71 ± 0.18 |
| Ellipse | 0.61 ± 0.24 | 0.46 ± 0.22 | 0.54 ± 0.12 |
| Reversed horizontal line | 0.99 ± 0.01 | 0.47 ± 0.33 | 0.73 ± 0.17 |
| Reversed vertical line | 0.59 ± 0.32 | 0.99 ± 0.01 | 0.79 ± 0.16 |
| Reversed forward slash | 0.99 ± 0.003 | 0.99 ± 0.003 | 0.99 ± 0.003 |
| Reversed backslash | 0.99 ± 0.003 | 0.99 ± 0.003 | 0.99 ± 0.003 |
| Reversed arch | 0.98 ± 0.01 | 0.53 ± 0.32 | 0.75 ± 0.16 |
| Square | 0.59 ± 0.34 | 0.42 ± 0.26 | 0.51 ± 0.23 |
| Triangle | 0.61 ± 0.19 | 0.55 ± 0.15 | 0.58 ± 0.11 |
| All | 0.82 ± 0.13 | 0.74 ± 0.17 | 0.78 ± 0.13 |

**Table 13.**Between-group reconstruction performance by the GEP: the average accuracy of each numeric character across the five subjects.

Average performance, $R^2$:

| Character | X-Coordinate | Y-Coordinate | Average: (X + Y)/2 |
|---|---|---|---|
| “0” | 0.67 ± 0.27 | 0.67 ± 0.29 | 0.67 ± 0.19 |
| “1” | 0.91 ± 0.06 | 0.99 ± 0.002 | 0.95 ± 0.03 |
| “2” | 0.11 ± 0.16 | 0.94 ± 0.01 | 0.53 ± 0.08 |
| “3” | 0.42 ± 0.22 | 0.96 ± 0.01 | 0.69 ± 0.10 |
| “4” | 0.35 ± 0.29 | 0.43 ± 0.25 | 0.39 ± 0.17 |
| “5” | 0.03 ± 0.06 | 0.94 ± 0.01 | 0.49 ± 0.03 |
| “6” | 0.26 ± 0.32 | 0.54 ± 0.29 | 0.40 ± 0.21 |
| “7” | 0.59 ± 0.26 | 0.92 ± 0.03 | 0.75 ± 0.13 |
| “8” | 0.13 ± 0.12 | 0.59 ± 0.27 | 0.36 ± 0.15 |
| “9” | 0.38 ± 0.30 | 0.43 ± 0.14 | 0.41 ± 0.13 |
| All | 0.39 ± 0.21 | 0.74 ± 0.13 | 0.56 ± 0.12 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Yang, Z.; Wen, Y.; Chen, Y. sEMG-Based Drawing Trace Reconstruction: A Novel Hybrid Algorithm Fusing Gene Expression Programming into Kalman Filter. *Sensors* **2018**, *18*, 3296.
https://doi.org/10.3390/s18103296
