# Improving Human Motion Classification by Applying Bagging and Symmetry to PCA-Based Features

## Abstract


## 1. Introduction

#### 1.1. Challenges in Human Motion Comparison and Classification

#### 1.2. Recent Work in the Application of PCA for Human Motion Analysis

#### 1.3. Contributions of This Research

## 2. Materials and Methods

#### 2.1. Dataset

- blocks: age uke with the left hand, age uke with the right hand, gedan barai with the left hand, gedan barai with the right hand;
- strikes: empi with the left elbow, empi with the right elbow; and
- kicks: hiza geri with the left knee, hiza geri with the right knee, mae geri with the left leg, mae geri with the right leg, yoko geri with the left leg, yoko geri with the right leg.
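Counting left- and right-side variants separately, the list above yields twelve technique classes. A minimal sketch of how they can be encoded as labels (the string identifiers are illustrative, not taken from the published dataset files):

```python
# The 12 technique classes (left/right variants counted separately).
# Identifiers are illustrative, not from the dataset itself.
CLASSES = [
    "age_uke_left", "age_uke_right",
    "gedan_barai_left", "gedan_barai_right",
    "empi_left", "empi_right",
    "hiza_geri_left", "hiza_geri_right",
    "mae_geri_left", "mae_geri_right",
    "yoko_geri_left", "yoko_geri_right",
]

# Each left/right pair is a mirror image of the other, which is what the
# symmetry-based mirroring of Section 2.5 exploits.
MIRROR = {c: (c.replace("left", "right") if c.endswith("left")
              else c.replace("right", "left"))
          for c in CLASSES}
```

The left/right pairing is why the class counts in the result tables run over 4, 6, 8, 10, and 12: subsets of techniques can be merged or kept separate by side.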

#### 2.2. Feature Space Definition

#### 2.3. Applying PCA for MoCap Feature Generation

#### 2.4. Classifier Bagging

#### 2.5. Application of Dataset Augmentation and Symmetry

## 3. Results

## 4. Discussion

## 5. Conclusions

## Funding

## Conflicts of Interest

## Appendix A

- Local planar rotation angles (Equation (2)) between vectors defined by body joints (these are planar rotation angles between certain body parts):$$\begin{array}{c}F_{1t}=\measuredangle\,\overrightarrow{(\mathit{LeftShoulder}_{t}-\mathit{LeftArm}_{t})},\overrightarrow{(\mathit{LeftArm}_{t}-\mathit{LeftForearm}_{t})}\\ F_{2t}=\measuredangle\,\overrightarrow{(\mathit{RightShoulder}_{t}-\mathit{RightArm}_{t})},\overrightarrow{(\mathit{RightArm}_{t}-\mathit{RightForearm}_{t})}\\ F_{3t}=\measuredangle\,\overrightarrow{(\mathit{LeftThigh}_{t}-\mathit{LeftLeg}_{t})},\overrightarrow{(\mathit{LeftLeg}_{t}-\mathit{LeftFoot}_{t})}\\ F_{4t}=\measuredangle\,\overrightarrow{(\mathit{RightThigh}_{t}-\mathit{RightLeg}_{t})},\overrightarrow{(\mathit{RightLeg}_{t}-\mathit{RightFoot}_{t})}\end{array}$$
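Each local angle above is simply the angle between two bone vectors at frame t. A minimal NumPy sketch (the function names are assumptions, not the paper's code):

```python
import numpy as np

def planar_angle(u, v):
    """Angle in radians between vectors u and v, as in F1..F4."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))  # clip guards rounding

def local_elbow_angle(shoulder, arm, forearm):
    """F1/F2: angle between the upper-arm and forearm bone vectors."""
    return planar_angle(shoulder - arm, arm - forearm)

# Example: joints placed so the elbow forms a right angle.
angle = local_elbow_angle(np.array([0.0, 2.0, 0.0]),
                          np.array([0.0, 1.0, 0.0]),
                          np.array([1.0, 1.0, 0.0]))
```

The same `planar_angle` helper also covers the knee angles F3/F4 with the thigh, leg, and foot joints substituted.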
- Global planar rotation angles (Equation (4)) between vectors defined by body joints and the coordinate frame (Equation (3)), which is derived from the initial body position:$$\begin{array}{c}\overrightarrow{x}=\dfrac{\overrightarrow{(\mathit{RightThigh}_{1}-\mathit{LeftThigh}_{1})}}{\left|\overrightarrow{(\mathit{RightThigh}_{1}-\mathit{LeftThigh}_{1})}\right|}\\[2ex] \overrightarrow{z}=\dfrac{[0,1,0]\times\overrightarrow{x}}{\left|[0,1,0]\times\overrightarrow{x}\right|}\\[2ex] \overrightarrow{y}=\dfrac{\overrightarrow{x}\times\overrightarrow{z}}{\left|\overrightarrow{x}\times\overrightarrow{z}\right|}\end{array}$$$$\begin{array}{ll}F_{5t}=\measuredangle\,\overrightarrow{(\mathit{RightThigh}_{t}-\mathit{RightLeg}_{t})},\overrightarrow{x} & F_{8t}=\measuredangle\,\overrightarrow{(\mathit{LeftThigh}_{t}-\mathit{LeftLeg}_{t})},\overrightarrow{x}\\ F_{6t}=\measuredangle\,\overrightarrow{(\mathit{RightThigh}_{t}-\mathit{RightLeg}_{t})},\overrightarrow{y} & F_{9t}=\measuredangle\,\overrightarrow{(\mathit{LeftThigh}_{t}-\mathit{LeftLeg}_{t})},\overrightarrow{y}\\ F_{7t}=\measuredangle\,\overrightarrow{(\mathit{RightThigh}_{t}-\mathit{RightLeg}_{t})},\overrightarrow{z} & F_{10t}=\measuredangle\,\overrightarrow{(\mathit{LeftThigh}_{t}-\mathit{LeftLeg}_{t})},\overrightarrow{z}\\ F_{11t}=\measuredangle\,\overrightarrow{(\mathit{RightShoulder}_{t}-\mathit{RightArm}_{t})},\overrightarrow{x} & F_{14t}=\measuredangle\,\overrightarrow{(\mathit{LeftShoulder}_{t}-\mathit{LeftArm}_{t})},\overrightarrow{x}\\ F_{12t}=\measuredangle\,\overrightarrow{(\mathit{RightShoulder}_{t}-\mathit{RightArm}_{t})},\overrightarrow{y} & F_{15t}=\measuredangle\,\overrightarrow{(\mathit{LeftShoulder}_{t}-\mathit{LeftArm}_{t})},\overrightarrow{y}\\ F_{13t}=\measuredangle\,\overrightarrow{(\mathit{RightShoulder}_{t}-\mathit{RightArm}_{t})},\overrightarrow{z} & F_{16t}=\measuredangle\,\overrightarrow{(\mathit{LeftShoulder}_{t}-\mathit{LeftArm}_{t})},\overrightarrow{z}\\ F_{17t}=\measuredangle\,\overrightarrow{(\mathit{RightArm}_{t}-\mathit{RightForearm}_{t})},\overrightarrow{x} & F_{20t}=\measuredangle\,\overrightarrow{(\mathit{LeftArm}_{t}-\mathit{LeftForearm}_{t})},\overrightarrow{x}\\ F_{18t}=\measuredangle\,\overrightarrow{(\mathit{RightArm}_{t}-\mathit{RightForearm}_{t})},\overrightarrow{y} & F_{21t}=\measuredangle\,\overrightarrow{(\mathit{LeftArm}_{t}-\mathit{LeftForearm}_{t})},\overrightarrow{y}\\ F_{19t}=\measuredangle\,\overrightarrow{(\mathit{RightArm}_{t}-\mathit{RightForearm}_{t})},\overrightarrow{z} & F_{22t}=\measuredangle\,\overrightarrow{(\mathit{LeftArm}_{t}-\mathit{LeftForearm}_{t})},\overrightarrow{z}\\ F_{23t}=\measuredangle\,\overrightarrow{(\mathit{RightLeg}_{t}-\mathit{RightFoot}_{t})},\overrightarrow{x} & F_{26t}=\measuredangle\,\overrightarrow{(\mathit{LeftLeg}_{t}-\mathit{LeftFoot}_{t})},\overrightarrow{x}\\ F_{24t}=\measuredangle\,\overrightarrow{(\mathit{RightLeg}_{t}-\mathit{RightFoot}_{t})},\overrightarrow{y} & F_{27t}=\measuredangle\,\overrightarrow{(\mathit{LeftLeg}_{t}-\mathit{LeftFoot}_{t})},\overrightarrow{y}\\ F_{25t}=\measuredangle\,\overrightarrow{(\mathit{RightLeg}_{t}-\mathit{RightFoot}_{t})},\overrightarrow{z} & F_{28t}=\measuredangle\,\overrightarrow{(\mathit{LeftLeg}_{t}-\mathit{LeftFoot}_{t})},\overrightarrow{z}\end{array}$$
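The coordinate frame of Equation (3) can be computed directly from the first-frame thigh joints with normalized cross products. A minimal sketch, assuming a y-up world axis `[0, 1, 0]` as in the formula:

```python
import numpy as np

def initial_frame(right_thigh_1, left_thigh_1):
    """Body-relative frame (Equation (3)) from the first-frame thigh joints."""
    x = right_thigh_1 - left_thigh_1
    x = x / np.linalg.norm(x)
    z = np.cross([0.0, 1.0, 0.0], x)   # world up crossed with the hip axis
    z = z / np.linalg.norm(z)
    y = np.cross(x, z)
    y = y / np.linalg.norm(y)
    return x, y, z

# Global angles F5..F28 are then planar angles between a bone vector
# (e.g. RightThigh_t - RightLeg_t) and each of x, y, z.
x, y, z = initial_frame(np.array([0.3, 0.9, 0.0]),
                        np.array([-0.3, 0.9, 0.0]))
```

Because the frame is fixed at the first frame, the global angles track how each bone rotates relative to the performer's initial stance rather than relative to the camera or sensor axes.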

## References


**Figure 1.** A participant during a motion capture session in the data collection environment: a typical dojo (karate training room) with mirrors and training equipment. Not all IMU sensors and wiring are visible.

**Figure 2.** The skeleton produced by the MoCap system used to gather the data. Black dots mark the body joint positions returned by the MoCap system (there are more joints than IMU sensors on the MoCap outfit). Red lines depict the body joint hierarchy.

**Figure 3.** The layout of the algorithm for PCA-based feature generation from MoCap data. A detailed description is given in the text.
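The pipeline of Figure 3 reduces each recording's matrix of angle features to a fixed-length "eigen feature" vector via PCA. A minimal scikit-learn sketch under assumed shapes (recordings resampled to a common frame count and flattened to rows; the data here is random, for illustration only):

```python
import numpy as np
from sklearn.decomposition import PCA

# Assumed setup: each recording is resampled to a common frame count and
# flattened (frames x 28 angle features) into one row of the data matrix.
rng = np.random.default_rng(0)
n_recordings, n_frames, n_angles = 60, 50, 28
X = rng.normal(size=(n_recordings, n_frames * n_angles))

# Keep the leading principal-component coefficients as the eigen features
# used by the classifiers (the result tables report 5..30 of them).
pca = PCA(n_components=30)
features = pca.fit_transform(X)
print(features.shape)  # (60, 30)
```

`pca.explained_variance_ratio_` gives the per-component variance percentages of the kind annotated on the axes of Figure 4.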

**Figure 4.** PCA projection of the dataset from Section 2.1 onto 3D space. The percentage of variance explained by each PCA dimension is shown next to its axis. Motion classes are color-coded and distinguished by marker shape.

**Table 1.** Cross-validation classification results of NNg on the karate MoCap dataset.

| #Classes; #Classifiers; #Augmentation | 5 Features | 10 Features | 15 Features | 20 Features | 25 Features | 30 Features |
|---|---|---|---|---|---|---|
| 12; 1; 0 | 0.614 | 0.767 | 0.772 | 0.794 | 0.847 | 0.842 |
| 12; 1; 1 | 0.628 | 0.764 | 0.811 | 0.811 | 0.811 | 0.812 |
| 12; 1; 2 | 0.644 | 0.758 | 0.781 | 0.808 | 0.822 | 0.847 |

**Table 2.** Cross-validation classification results of NNg with bagging on the karate MoCap dataset.

| #Classes; #Classifiers; #Augmentation | 5 Features | 10 Features | 15 Features | 20 Features | 25 Features | 30 Features |
|---|---|---|---|---|---|---|
| 4; 50; 0 | 0.622 | 0.628 | 0.661 | 0.636 | | |
| 4; 50; 1 | 0.581 | 0.614 | 0.619 | 0.631 | | |
| 4; 50; 2 | 0.628 | 0.706 | 0.703 | 0.714 | | |
| 4; 100; 0 | 0.717 | 0.744 | 0.803 | 0.758 | | |
| 4; 100; 1 | 0.742 | 0.742 | 0.697 | 0.722 | | |
| 4; 100; 2 | 0.736 | 0.742 | 0.742 | 0.756 | | |
| 4; 150; 0 | 0.767 | 0.772 | 0.806 | 0.758 | | |
| 4; 150; 1 | 0.789 | 0.783 | 0.789 | 0.806 | | |
| 4; 150; 2 | 0.742 | 0.775 | 0.769 | 0.800 | | |
| 4; 200; 0 | 0.797 | 0.772 | 0.828 | 0.786 | | |
| 4; 200; 1 | 0.794 | 0.828 | 0.781 | 0.781 | | |
| 4; 200; 2 | 0.747 | 0.758 | 0.772 | 0.794 | | |
| 6; 50; 0 | 0.742 | 0.808 | 0.803 | 0.797 | | |
| 6; 50; 1 | 0.711 | 0.808 | 0.808 | 0.783 | | |
| 6; 50; 2 | 0.678 | 0.817 | 0.806 | 0.808 | | |
| 6; 100; 0 | 0.772 | 0.836 | 0.839 | 0.828 | | |
| 6; 100; 1 | 0.711 | 0.825 | 0.856 | 0.803 | | |
| 6; 100; 2 | 0.769 | 0.861 | 0.842 | 0.864 | | |
| 6; 150; 0 | 0.781 | 0.833 | 0.825 | 0.811 | | |
| 6; 150; 1 | 0.742 | 0.831 | 0.825 | 0.794 | | |
| 6; 150; 2 | 0.756 | 0.853 | 0.844 | 0.844 | | |
| 6; 200; 0 | 0.800 | 0.847 | 0.847 | 0.836 | | |
| 6; 200; 1 | 0.731 | 0.858 | 0.847 | 0.831 | | |
| 6; 200; 2 | 0.764 | 0.858 | 0.844 | 0.861 | | |
| 8; 50; 0 | 0.750 | 0.836 | 0.861 | 0.850 | | |
| 8; 50; 1 | 0.706 | 0.864 | 0.794 | 0.831 | | |
| 8; 50; 2 | 0.697 | 0.839 | 0.833 | 0.822 | | |
| 8; 100; 0 | 0.744 | 0.844 | 0.864 | 0.861 | | |
| 8; 100; 1 | 0.683 | 0.858 | 0.803 | 0.831 | | |
| 8; 100; 2 | 0.725 | 0.831 | 0.858 | 0.839 | | |
| 8; 150; 0 | 0.756 | 0.861 | 0.864 | 0.867 | | |
| 8; 150; 1 | 0.692 | 0.861 | 0.814 | 0.817 | | |
| 8; 150; 2 | 0.725 | 0.831 | 0.839 | 0.833 | | |
| 8; 200; 0 | 0.742 | 0.853 | 0.858 | 0.869 | | |
| 8; 200; 1 | 0.717 | 0.861 | 0.808 | 0.817 | | |
| 8; 200; 2 | 0.728 | 0.828 | 0.836 | 0.839 | | |
| 10; 50; 0 | 0.683 | 0.819 | 0.811 | 0.872 | 0.894 | 0.886 |
| 10; 50; 1 | 0.683 | 0.831 | 0.814 | 0.850 | 0.861 | 0.850 |
| 10; 50; 2 | 0.664 | 0.806 | 0.825 | 0.817 | 0.861 | 0.867 |
| 10; 100; 0 | 0.692 | 0.819 | 0.811 | 0.867 | 0.894 | 0.883 |
| 10; 100; 1 | 0.664 | 0.833 | 0.806 | 0.856 | 0.861 | 0.858 |
| 10; 100; 2 | 0.672 | 0.786 | 0.842 | 0.822 | 0.861 | 0.869 |
| 10; 150; 0 | 0.694 | 0.819 | 0.825 | 0.867 | 0.900 | 0.886 |
| 10; 150; 1 | 0.675 | 0.844 | 0.814 | 0.850 | 0.867 | 0.858 |
| 10; 150; 2 | 0.672 | 0.792 | 0.831 | 0.825 | 0.861 | 0.880 |
| 10; 200; 0 | 0.692 | 0.817 | 0.825 | 0.867 | 0.894 | 0.886 |
| 10; 200; 1 | 0.678 | 0.844 | 0.814 | 0.847 | 0.867 | 0.856 |
| 10; 200; 2 | 0.675 | 0.789 | 0.828 | 0.822 | 0.867 | 0.878 |
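The bagging scheme of Section 2.4 trains each base classifier on a bootstrap resample of the training set and combines them by voting. A minimal scikit-learn sketch on random stand-in data (approximating the NNg base classifier with `KNeighborsClassifier` is an assumption, not the paper's exact implementation):

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 25))      # 25 eigen features per example
y = rng.integers(0, 10, size=120)   # 10 technique classes

# 100 base classifiers, each fit on a bootstrap resample, majority-voted.
bag = BaggingClassifier(
    KNeighborsClassifier(n_neighbors=1),
    n_estimators=100,
    bootstrap=True,
    random_state=0,
).fit(X, y)
pred = bag.predict(X[:5])
```

The #Classifiers column in Tables 2 and 3 corresponds to `n_estimators`; the tables sweep it over 50, 100, 150, and 200.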

**Table 3.** Cross-validation classification results of NNg with bagging and feature mirroring on the karate MoCap dataset.

| #Classes; #Classifiers; #Augmentation | 5 Features | 10 Features | 15 Features | 20 Features | 25 Features | 30 Features |
|---|---|---|---|---|---|---|
| 4; 50; 0 | 0.592 | 0.619 | 0.650 | 0.631 | | |
| 4; 100; 0 | 0.639 | 0.731 | 0.783 | 0.781 | | |
| 4; 150; 0 | 0.725 | 0.767 | 0.794 | 0.764 | | |
| 4; 200; 0 | 0.753 | 0.764 | 0.803 | 0.803 | | |
| 6; 50; 0 | 0.639 | 0.808 | 0.811 | 0.817 | | |
| 6; 100; 0 | 0.708 | 0.833 | 0.844 | 0.839 | | |
| 6; 150; 0 | 0.711 | 0.828 | 0.822 | 0.825 | | |
| 6; 200; 0 | 0.728 | 0.831 | 0.847 | 0.847 | | |
| 8; 50; 0 | 0.650 | 0.814 | 0.856 | 0.858 | | |
| 8; 100; 0 | 0.650 | 0.833 | 0.872 | 0.872 | | |
| 8; 150; 0 | 0.669 | 0.844 | 0.867 | 0.872 | | |
| 8; 200; 0 | 0.667 | 0.825 | 0.864 | 0.875 | | |
| 10; 50; 0 | 0.586 | 0.800 | 0.822 | 0.881 | 0.908 | 0.883 |
| 10; 100; 0 | 0.589 | 0.803 | 0.822 | 0.881 | 0.906 | 0.883 |
| 10; 150; 0 | 0.589 | 0.803 | 0.833 | 0.878 | 0.911 | 0.883 |
| 10; 200; 0 | 0.586 | 0.803 | 0.833 | 0.878 | 0.906 | 0.883 |
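The feature mirroring used in Table 3 exploits left/right body symmetry: a feature vector can be mirrored by swapping each left-side feature with its right-side counterpart, so that, for example, a left-hand block becomes a synthetic right-hand one. A minimal sketch over the appendix features (the pair indices follow the F1..F28 definitions; swapping alone is a simplification, and the exact transform used in the paper may differ):

```python
import numpy as np

# 0-based left/right feature pairs for F1..F28, read off the appendix
# definitions: F1<->F2, F3<->F4, F5..F7<->F8..F10, F11..F13<->F14..F16,
# F17..F19<->F20..F22, F23..F25<->F26..F28.
PAIRS = [(0, 1), (2, 3)] + [(i, i + 3) for i in (4, 5, 6, 10, 11, 12,
                                                 16, 17, 18, 22, 23, 24)]

def mirror(frame):
    """Return a left/right-mirrored copy of one 28-element feature frame."""
    out = np.asarray(frame, dtype=float).copy()
    for a, b in PAIRS:
        out[a], out[b] = out[b], out[a]
    return out

frame = np.arange(28, dtype=float)
m = mirror(frame)
```

Mirroring is an involution: applying it twice recovers the original frame, and each left/right technique pair maps onto the other.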

**Table 4.** Cross-validation classification results of SVM with bagging and feature mirroring on the karate MoCap dataset.

| #Classes | #Classifiers | Mirror? | Eigen Features | Kernel | Result |
|---|---|---|---|---|---|
| 10 | 100 | FALSE | 25 | linear | 0.939 |
| 10 | 150 | FALSE | 25 | linear | 0.939 |
| 10 | 200 | FALSE | 25 | linear | 0.939 |
| 10 | 100 | FALSE | 25 | sigmoid | 0.900 |
| 10 | 150 | FALSE | 25 | sigmoid | 0.900 |
| 10 | 200 | FALSE | 25 | sigmoid | 0.903 |
| 10 | 100 | FALSE | 25 | radial | 0.919 |
| 10 | 150 | FALSE | 25 | radial | 0.922 |
| 10 | 200 | FALSE | 25 | radial | 0.922 |
| 10 | 100 | FALSE | 30 | linear | 0.933 |
| 10 | 150 | FALSE | 30 | linear | 0.925 |
| 10 | 200 | FALSE | 30 | linear | 0.925 |
| 10 | 100 | FALSE | 30 | sigmoid | 0.864 |
| 10 | 150 | FALSE | 30 | sigmoid | 0.864 |
| 10 | 200 | FALSE | 30 | sigmoid | 0.858 |
| 10 | 100 | FALSE | 30 | radial | 0.928 |
| 10 | 150 | FALSE | 30 | radial | 0.928 |
| 10 | 200 | FALSE | 30 | radial | 0.928 |
| 10 | 100 | TRUE | 25 | linear | 0.911 |
| 10 | 150 | TRUE | 25 | linear | 0.911 |
| 10 | 200 | TRUE | 25 | linear | 0.911 |
| 10 | 100 | TRUE | 25 | sigmoid | 0.875 |
| 10 | 150 | TRUE | 25 | sigmoid | 0.878 |
| 10 | 200 | TRUE | 25 | sigmoid | 0.875 |
| 10 | 100 | TRUE | 25 | radial | 0.919 |
| 10 | 150 | TRUE | 25 | radial | 0.914 |
| 10 | 200 | TRUE | 25 | radial | 0.917 |
| 10 | 100 | TRUE | 30 | linear | 0.889 |
| 10 | 150 | TRUE | 30 | linear | 0.889 |
| 10 | 200 | TRUE | 30 | linear | 0.886 |
| 10 | 100 | TRUE | 30 | sigmoid | 0.858 |
| 10 | 150 | TRUE | 30 | sigmoid | 0.858 |
| 10 | 200 | TRUE | 30 | sigmoid | 0.853 |
| 10 | 100 | TRUE | 30 | radial | 0.922 |
| 10 | 150 | TRUE | 30 | radial | 0.919 |
| 10 | 200 | TRUE | 30 | radial | 0.917 |
| 12 | 1 | FALSE | 25 | linear | 0.867 |
| 12 | 1 | TRUE | 25 | linear | 0.861 |
| 12 | 1 | FALSE | 25 | sigmoid | 0.880 |
| 12 | 1 | TRUE | 25 | sigmoid | 0.886 |
| 12 | 1 | FALSE | 25 | radial | 0.906 |
| 12 | 1 | TRUE | 25 | radial | 0.897 |
| 12 | 1 | FALSE | 30 | linear | 0.889 |
| 12 | 1 | TRUE | 30 | linear | 0.878 |
| 12 | 1 | FALSE | 30 | sigmoid | 0.844 |
| 12 | 1 | TRUE | 30 | sigmoid | 0.844 |
| 12 | 1 | FALSE | 30 | radial | 0.894 |
| 12 | 1 | TRUE | 30 | radial | 0.894 |

© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Hachaj, T. Improving Human Motion Classification by Applying Bagging and Symmetry to PCA-Based Features. *Symmetry* **2019**, *11*, 1264. https://doi.org/10.3390/sym11101264
