# Classifying Human Leg Motions with Uniaxial Piezoelectric Gyroscopes


## Abstract


## 1. Introduction

## 2. Classified Leg Motions and Experimental Methodology

- M1: standing without moving the legs (Figure 1(a)),
- M2: moving only the lower part of the right leg to the back (Figure 1(b)),
- M3: moving both the lower and the upper part of the right leg to the front while bending the knee (Figure 1(c)),
- M4: moving the right leg forward without bending the knee (Figure 1(d)),
- M5: moving the right leg backward without bending the knee (Figure 1(e)),
- M6: opening the right leg to the right side of the body without bending the knee (Figure 1(f)),
- M7: squatting, moving both the upper and the lower leg (Figure 1(g)),
- M8: moving only the lower part of the right leg upward while sitting on a stool (Figure 1(h)).

## 3. Feature Extraction and Reduction

Each windowed signal segment contains $N_s$ elements and can be represented as an $N_s \times 1$ vector $\mathbf{s} = [s_1, s_2, \ldots, s_{N_s}]^T$. For the 10-second time windows and the 116 Hz sampling rate, $N_s = 1{,}160$. We considered using features such as the minimum and maximum values, the mean value, variance, skewness, kurtosis, autocorrelation sequence, cross-correlation sequence, total energy, peaks of the discrete Fourier transform (DFT) with the corresponding frequencies, and the discrete cosine transform (DCT) coefficients of $\mathbf{s}$. The DCT is a transformation technique widely used in image processing that expresses the data as a sum of cosine functions [56]. The features used are calculated as follows, with an explanation below of why we chose our final set of features:

where $s_i$ is the $i$th element of the discrete-time sequence $\mathbf{s}$, $E\{\cdot\}$ denotes the expectation operator, $\mu_s$ and $\sigma_s$ are the mean and the standard deviation of $\mathbf{s}$, $R_{ss}(\Delta)$ is the unbiased autocorrelation sequence of $\mathbf{s}$, $\mu_u$ is the mean of $\mathbf{u}$, $R_{su}(\Delta)$ is the unbiased cross-correlation sequence between $\mathbf{s}$ and $\mathbf{u}$, and $S_{\mathrm{DFT}}(k)$ and $S_{\mathrm{DCT}}(k)$ are the $k$th elements of the 1-D $N_s$-point DFT and $N_s$-point DCT, respectively.

| Feature(s) | Description |
|---|---|
| 1 | mean value of the gyro 1 signal |
| 2 | mean value of the gyro 2 signal |
| 3 | kurtosis of the gyro 1 signal |
| 4 | kurtosis of the gyro 2 signal |
| 5 | skewness of the gyro 1 signal |
| 6 | skewness of the gyro 2 signal |
| 7 | minimum value of the gyro 1 signal |
| 8 | minimum value of the gyro 2 signal |
| 9 | maximum value of the gyro 1 signal |
| 10 | maximum value of the gyro 2 signal |
| 11 | minimum value of the cross-correlation between the gyro 1 and gyro 2 signals |
| 12 | maximum value of the cross-correlation between the gyro 1 and gyro 2 signals |
| 13–17 | the 5 largest peaks of the DFT of the gyro 1 signal |
| 18–22 | the 5 largest peaks of the DFT of the gyro 2 signal |
| 23–27 | the 5 frequencies corresponding to the 5 largest DFT peaks of the gyro 1 signal |
| 28–32 | the 5 frequencies corresponding to the 5 largest DFT peaks of the gyro 2 signal |
| 33–38 | 6 samples of the autocorrelation function of the gyro 1 signal (the sample at the midpoint and every 25th sample up to the 125th) |
| 39–44 | 6 samples of the autocorrelation function of the gyro 2 signal (the sample at the midpoint and every 25th sample up to the 125th) |
| 45 | minimum value of the autocorrelation function of the gyro 1 signal |
| 46 | minimum value of the autocorrelation function of the gyro 2 signal |
| 47–61 | 15 samples of the cross-correlation between the gyro 1 and gyro 2 signals (every 20th sample) |
| 62–81 | the first 20 DCT coefficients of the gyro 1 signal |
| 82–101 | the first 20 DCT coefficients of the gyro 2 signal |
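The basic statistical features in the table can be computed as in the minimal sketch below. The biased ($1/N$) normalizations and the zero-lag cross-correlation form are assumptions for illustration, since the paper's exact estimator equations are not reproduced here.

```python
import math

def moments(s):
    """Mean, variance, skewness, and kurtosis of a 1-D signal
    (features 1-6 above); biased 1/N normalization assumed."""
    n = len(s)
    mu = sum(s) / n
    var = sum((x - mu) ** 2 for x in s) / n
    sigma = math.sqrt(var)
    skew = sum((x - mu) ** 3 for x in s) / (n * sigma ** 3)
    kurt = sum((x - mu) ** 4 for x in s) / (n * var ** 2)
    return mu, var, skew, kurt

def cross_corr(s, u, delta):
    """Sample cross-correlation between two equal-length signals at a
    nonnegative lag delta, with unbiased 1/(N - delta) normalization."""
    n = len(s)
    mu_s = sum(s) / n
    mu_u = sum(u) / n
    return sum((s[i] - mu_s) * (u[i + delta] - mu_u)
               for i in range(n - delta)) / (n - delta)
```

The minimum, maximum, DFT peaks, and DCT coefficients would be extracted from the same windowed segments in the same fashion.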

- maximum value of gyro 1 signal
- maximum value of the cross-correlation between gyro 1 and gyro 2 signals
- minimum value of gyro 2 signal
- the 3rd maximum peak of DFT of gyro 2 signal
- minimum value of the cross-correlation between gyro 1 and gyro 2 signals
- the 3rd maximum peak of DFT of gyro 1 signal

These form the reduced feature vector $\mathbf{x} = [x_1, \ldots, x_N]^T$.

## 4. Classification Techniques

We associate a class $\omega_i$ with each motion type ($i = 1, \ldots, c$). An unknown motion is assigned to class $\omega_i$ if its feature vector $\mathbf{x} = [x_1, \ldots, x_N]^T$ falls in the region $\Omega_i$. A rule that partitions the decision space into regions $\Omega_i$, $i = 1, \ldots, c$, is called a decision rule. In our work, each of these regions corresponds to a different motion type. Boundaries between these regions are called decision surfaces. The training set contains a total of $I = I_1 + I_2 + \cdots + I_c$ sample feature vectors, where $I_i$ sample feature vectors belong to class $\omega_i$, $i = 1, \ldots, c$. The test set is then used to evaluate the performance of the decision rule.

#### 4.1. Bayesian Decision Making (BDM)

Let $p(\omega_i)$ be the a priori probability that a motion belongs to class $\omega_i$. To classify a motion with feature vector $\mathbf{x}$, the a posteriori probabilities $p(\omega_i \mid \mathbf{x})$ are compared and the motion is assigned to the class $\omega_j$ with the maximum a posteriori probability, such that $p(\omega_j \mid \mathbf{x}) > p(\omega_i \mid \mathbf{x})\ \forall i \neq j$. This is known as Bayes' minimum error rule; here, $\ell(\mathbf{x})$ denotes the label for feature vector $\mathbf{x}$. However, because these a posteriori probabilities are rarely known, they need to be estimated. A more convenient formulation of this rule can be obtained by using Bayes' theorem, where the $p(\mathbf{x} \mid \omega_i)$ are the class-conditional probability density functions (CCPDFs), which are also unknown and must in turn be estimated from the training set. In Equation (3), $p(\mathbf{x}) = \sum_{i=1}^{c} p(\mathbf{x} \mid \omega_i) p(\omega_i)$ is a constant, equal to the same value for all classes. The decision rule then becomes: if $p(\mathbf{x} \mid \omega_j)\,p(\omega_j) > p(\mathbf{x} \mid \omega_i)\,p(\omega_i)\ \forall i \neq j \Rightarrow \mathbf{x} \in \Omega_j$.

If the a priori probabilities $p(\omega_i)$ are assumed to be equal for all classes, the a posteriori probability becomes directly proportional to the likelihood $p(\mathbf{x} \mid \omega_i)$. Under this assumption, the decision rule simplifies to: if $q_j(\mathbf{x}) > q_i(\mathbf{x})\ \forall i \neq j \Rightarrow \mathbf{x} \in \Omega_j$, where the function $q_i$ is called a discriminant function.

Let the training vectors of class $\omega_i$ be $\{\mathbf{x}_{i1}, \ldots, \mathbf{x}_{iI_i}\}$. Then the ML estimates for the mean vector and the covariance matrix are $\hat{\boldsymbol{\mu}}_i = \frac{1}{I_i} \sum_{j=1}^{I_i} \mathbf{x}_{ij}$ and $\hat{\boldsymbol{\Sigma}}_i = \frac{1}{I_i} \sum_{j=1}^{I_i} (\mathbf{x}_{ij} - \hat{\boldsymbol{\mu}}_i)(\mathbf{x}_{ij} - \hat{\boldsymbol{\mu}}_i)^T$. Given a test vector $\mathbf{x}$, the decision rule in Equation (4) is used for classification.
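A minimal sketch of this procedure, assuming equal priors and, for brevity, a diagonal covariance matrix (the paper estimates the full ML covariance):

```python
import math

def fit_gaussian(train):
    """ML estimates of the per-feature mean and variance for one class;
    diagonal covariance assumed here for simplicity."""
    n, d = len(train), len(train[0])
    mu = [sum(x[j] for x in train) / n for j in range(d)]
    var = [sum((x[j] - mu[j]) ** 2 for x in train) / n for j in range(d)]
    return mu, var

def log_likelihood(x, mu, var):
    """log p(x | omega_i) under the diagonal-Gaussian model."""
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, mu, var))

def bdm_classify(x, models):
    """Equal priors assumed, so pick the class of maximum likelihood."""
    return max(models, key=lambda c: log_likelihood(x, *models[c]))
```

With full covariance matrices, the log-likelihood would instead use the quadratic form $(\mathbf{x} - \hat{\boldsymbol{\mu}}_i)^T \hat{\boldsymbol{\Sigma}}_i^{-1} (\mathbf{x} - \hat{\boldsymbol{\mu}}_i)$.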

#### 4.2. Rule-Based Algorithm (RBA)

The RBA consists of rules of the form "is $x_i \leq \tau_i$?", where $\tau_i$ is the threshold value for a given feature and $i = 1, 2, \ldots, T$, with $T$ being the total number of features used [60]. Selecting and calculating the features before using them in the RBA is necessary to make the algorithm independent of the calculation cost of different features. These rules are determined by examining the training vectors of all classes. More discriminative features are used at the nodes higher in the tree hierarchy. Decision-tree algorithms start from the top of the tree and branch at each node into two descendant nodes based on conditions like those above. This process continues until a leaf is reached or a branch is terminated.

- Is the variance of gyro 2 signal < 0.1?
- Is the variance of gyro 1 signal < 0.1?
- Is the min value of gyro 1 signal > 0.6?
- Is $\frac{\text{max value of gyro}\phantom{\rule{0.2em}{0ex}}1\phantom{\rule{0.2em}{0ex}}\text{signal}}{\text{min value of gyro}\phantom{\rule{0.2em}{0ex}}1\phantom{\rule{0.2em}{0ex}}\text{signal}}<0.1$?
- Is $\frac{\text{variance of gyro}\phantom{\rule{0.2em}{0ex}}2\phantom{\rule{0.2em}{0ex}}\text{signal}}{\text{min value of autocorrelation function of gyro}\phantom{\rule{0.2em}{0ex}}2}>1.04$?
- Is max value of cross-correlation function < 0.4?
- Is $\frac{\text{max value of gyro}\phantom{\rule{0.2em}{0ex}}2\phantom{\rule{0.2em}{0ex}}\text{signal}}{\text{min value of gyro}\phantom{\rule{0.2em}{0ex}}2\phantom{\rule{0.2em}{0ex}}\text{signal}}<1.4$?
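The first few rules above can be sketched as a tree fragment like the one below. The leaf labels are hypothetical placeholders, since the paper's full tree appears only in its figure; only the thresholds and feature names come from the rules listed.

```python
def rba_fragment(feat):
    """Top of a threshold decision tree using the first rules above.
    `feat` is a dict of precomputed features; leaf labels other than
    the low-variance case are illustrative placeholders."""
    if feat["var_gyro2"] < 0.1:          # "Is the variance of gyro 2 signal < 0.1?"
        if feat["var_gyro1"] < 0.1:      # "Is the variance of gyro 1 signal < 0.1?"
            return "M1"                  # both gyros quiet: plausibly standing still
        return "leaf_A"                  # placeholder branch
    if feat["min_gyro1"] > 0.6:          # "Is the min value of gyro 1 signal > 0.6?"
        return "leaf_B"                  # placeholder branch
    return "leaf_C"                      # placeholder branch
```

Because every test is a single comparison on a precomputed feature, classifying one vector costs only a handful of comparisons regardless of how expensive the features were to compute.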

#### 4.3. Least-Squares Method (LSM)

where $\mathbf{x} = [x_1, x_2, \ldots, x_N]^T$ represents a test feature vector, $\mathbf{r}_i = [r_{i1}, r_{i2}, \ldots, r_{iN}]^T$ represents the average of the reference feature vectors for each distinct class, and $\mathcal{D}_i^2$ is the square of the distance between these two vectors.
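A minimal nearest-mean sketch of this rule, assuming the squared Euclidean distance for $\mathcal{D}_i^2$:

```python
def class_means(train):
    """Average reference vector r_i for each class; `train` maps a
    class label to its list of training feature vectors."""
    means = {}
    for label, vecs in train.items():
        n, d = len(vecs), len(vecs[0])
        means[label] = [sum(v[j] for v in vecs) / n for j in range(d)]
    return means

def lsm_classify(x, means):
    """Assign x to the class whose average reference vector minimizes
    the squared distance D_i^2."""
    def d2(r):
        return sum((xi - ri) ** 2 for xi, ri in zip(x, r))
    return min(means, key=lambda c: d2(means[c]))
```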

#### 4.4. k-Nearest Neighbor (k-NN) Algorithm

The k-NN algorithm finds the $k$ nearest neighbors of a feature vector $\mathbf{x}$ in a given set of many feature vectors. The neighbors are taken from a set of feature vectors (the training set) for which the correct classification is known. The number of occurrences of each class is counted among these neighbors; suppose that $k_i$ of the $k$ vectors come from class $\omega_i$. Then a k-NN estimator for class $\omega_i$ can be defined as $\widehat{p}(\omega_i \mid \mathbf{x}) = \frac{k_i}{k}$, and $\widehat{p}(\mathbf{x} \mid \omega_i)$ can be obtained from $\widehat{p}(\mathbf{x} \mid \omega_i)\,\widehat{p}(\omega_i) = \widehat{p}(\omega_i \mid \mathbf{x})\,\widehat{p}(\mathbf{x})$. This results in a classification rule such that $\mathbf{x}$ is classified into class $\omega_j$ if $k_j = \max_i(k_i)$, where $i = 1, \ldots, c$. In other words, the $k$ nearest neighbors of the vector $\mathbf{x}$ in the training set are considered and $\mathbf{x}$ is classified into the same class as the majority of its $k$ nearest neighbors [61]. It is common to use the Euclidean distance measure, although other distance measures such as the Manhattan distance could in principle be used instead. The k-NN algorithm is sensitive to the local structure of the data.
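The majority-vote rule can be sketched in a few lines, assuming the Euclidean distance:

```python
import math
from collections import Counter

def knn_classify(x, train, k=3):
    """Majority vote among the k nearest training vectors (Euclidean
    distance); `train` is a list of (vector, label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

Swapping `math.dist` for a Manhattan-distance function would give the alternative metric mentioned above without changing the voting logic.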

#### 4.5. Dynamic Time Warping (DTW)

To compare two feature vectors $\mathbf{x}$ and $\mathbf{y}$ with lengths $N$ and $M$, a distance matrix $\mathbf{d}$ is constructed using all the elements of $\mathbf{x}$ and $\mathbf{y}$. The $(n, m)$th element of this matrix, $d(n, m)$, is the distance between the $n$th element of $\mathbf{x}$ and the $m$th element of $\mathbf{y}$, given by $d(n,m) = \sqrt{(x_n - y_m)^2} = |x_n - y_m|$ [64].

A warping path $\mathbf{W}$ is a contiguous set of matrix elements that defines a mapping between $\mathbf{x}$ and $\mathbf{y}$. Denoting the $l$th element of the warping path by $w_l = (n_l, m_l)$, the warping path $\mathbf{W}$ with length $L$ is given as $\mathbf{W} = w_1, w_2, \ldots, w_L$, subject to the following constraints:

- (monotonicity) The warping function must be monotonic, meaning that the path cannot go "south" or "west": $n_l \geq n_{l-1}$ and $m_l \geq m_{l-1}$.
- (boundary condition) The two vectors/sequences being compared must be matched at the beginning and end points of the warping path: $w_1 = (1, 1)$ and $w_L = (N, M)$.
- (continuity condition) The warping function must not bypass any points: $n_l - n_{l-1} \leq 1$ and $m_l - m_{l-1} \leq 1$.
- (global constraint) The maximum amount of warp is limited by $|n_l - m_l| < G$. This global constraint $G$ is called a window width and is used to speed up DTW and prevent pathological warpings [64]; a good path is unlikely to wander very far from the diagonal.

**D**is constructed starting at (n, m) = (1, 1). D(n, m) represents the cost of the least-cost path that can be obtained until reaching point (n, m). As stated above, the warp path must either be incremented by one or stay the same along the n and m axes. Therefore, the distances of the optimal warp paths one data point smaller than lengths n and m are contained in the matrix elements D(n − 1, m − 1),D(n − 1, m), and D(n, m − 1). Therefore, D(n, m) is calculated by:

An example of a warping path $\mathbf{W}$ is shown in Figure 10.
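The $D(n, m)$ recurrence and the global window constraint can be sketched as a minimal 1-D implementation; the absolute-difference local cost matches $d(n, m)$ above:

```python
def dtw_distance(x, y, window=None):
    """Accumulated-cost DTW between 1-D sequences x and y, with an
    optional global window width G enforcing |n - m| < G."""
    n_len, m_len = len(x), len(y)
    INF = float("inf")
    # D[n][m] holds the cost of the least-cost path reaching (n, m).
    D = [[INF] * (m_len + 1) for _ in range(n_len + 1)]
    D[0][0] = 0.0
    for n in range(1, n_len + 1):
        for m in range(1, m_len + 1):
            if window is not None and abs(n - m) >= window:
                continue                      # outside the global band
            cost = abs(x[n - 1] - y[m - 1])   # d(n, m) = |x_n - y_m|
            D[n][m] = cost + min(D[n - 1][m - 1],   # diagonal step
                                 D[n - 1][m],       # step along n
                                 D[n][m - 1])       # step along m
    return D[n_len][m_len]
```

Backtracking through `D` from $(N, M)$ to $(1, 1)$ would recover the least-cost warping path itself.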

#### 4.6. Support Vector Machines (SVMs)

Consider training data consisting of feature vectors $\mathbf{x}_i$ in some space $X \subseteq \mathcal{R}^N$ and their labels $\ell_i \in \{-1, 1\}$, where $\ell_i = \ell(\mathbf{x}_i)$ and $i = 1, \ldots, I$. Here, $\ell_i$ labels the class of the feature vectors as before: if the feature vector belongs to class 1, then $\ell_i = +1$; if it belongs to class 2, $\ell_i = -1$. The goal in training an SVM is to find the separating hyperplane with the largest margin, so that the classifier generalizes better. All vectors lying on one side of the hyperplane are labeled as $+1$, and all vectors lying on the other side as $-1$. The support vectors are the (transformed) training patterns that lie closest to the hyperplane and are at equal distance from it. They correspond to the training samples that define the optimal separating hyperplane and are the most difficult patterns to classify, yet the most informative for the classification task.

If $f(\mathbf{x}) \geq 0$, we label $\mathbf{x}$ as $+1$, otherwise as $-1$. When $K$ satisfies Mercer's condition, $K(\mathbf{u}, \mathbf{v}) = \phi(\mathbf{u}) \cdot \phi(\mathbf{v})$, where $\phi(\cdot) : X \rightarrow \mathcal{F}$ is a nonlinear mapping and "$\cdot$" denotes the inner or dot product. We can then rewrite $f(\mathbf{x})$ in the transformed space as $f(\mathbf{x}) = \mathbf{a} \cdot \phi(\mathbf{x})$. The linear discriminant function $f(\mathbf{x})$ is based on the hyperplane $\mathbf{a} \cdot \phi(\mathbf{x}) = 0$, where $\mathbf{a} = \sum_{i=1}^{I} \beta_i \phi(\mathbf{x}_i)$ is a weight vector. Thus, by using $K$, the training data are projected into a new feature space $\mathcal{F}$, which is often higher dimensional. The SVM then computes the $\beta_i$'s that correspond to the maximal-margin hyperplane in $\mathcal{F}$. By choosing different kernel functions, we can project the training data from $X$ into spaces $\mathcal{F}$ for which hyperplanes in $\mathcal{F}$ correspond to more complex decision boundaries in the original space $X$. Hence, by nonlinearly mapping the original training patterns into other spaces, decision functions can be found with a linear algorithm in the transformed space, computing only the kernel $K(\mathbf{x}, \mathbf{x}_i)$.

In Figure 12, squares ($\ell_i = +1$) symbolize the first class (class 1) and circles ($\ell_i = -1$) symbolize the second class (class 2). These two types of training vectors can be separated by infinitely many different hyperplanes, three of which are shown in Figure 12(a). For each of these hyperplanes, the correct classification rates may differ when test vectors are presented to the system. To have the smallest classification error at the test stage, the hyperplane should be placed between the support vectors of the two classes with maximum and equal margin for both classes [79]. For an SVM, the optimal hyperplane classifier is unique [60]. The hyperplane used to classify these two classes is $\mathbf{a} \cdot \phi(\mathbf{x}) = 0$, where the weight vector $\mathbf{a}$ and the transformed feature vector $\phi(\mathbf{x}_i)$ have been augmented by one dimension to include a bias weight so that the hyperplane need not pass through the origin. For this hyperplane to have maximum margins, the dotted and dashed margin lines in Figure 12(b) are given by $\mathbf{a} \cdot \phi(\mathbf{x}) = +1$ and $\mathbf{a} \cdot \phi(\mathbf{x}) = -1$, respectively.

Writing $\mathbf{a} = [\mathbf{n}, a_0]$, where $\mathbf{n}$ is the normal vector of the hyperplane, it can be shown that the distance between the two margin lines is $2/\|\mathbf{n}\|$. Therefore, to maximize the separation between these margin lines, $\|\mathbf{n}\|$ should be minimized. Since $a_0$ is a constant, this is equivalent to minimizing $\|\mathbf{a}\|$.

The optimization problem is thus to minimize $\|\mathbf{a}\|^2$ subject to the constraint given by Equation (17) [80]. Using the method of Lagrange multipliers, we construct a functional that is minimized with respect to $\mathbf{a}$ while being maximized with respect to the undetermined Lagrange multipliers $\lambda_i \geq 0$. This can be done by solving the constrained optimization problem with quadratic programming [81] or by other techniques. The solution for the weight vector is $\mathbf{a}^{\ast} = \sum_{i=1}^{I} \ell_i \lambda_i \phi(\mathbf{x}_i)$, corresponding to $\beta_i = \ell_i \lambda_i$. The decision function is then $f(\mathbf{x}) = \sum_{i=1}^{I} \ell_i \lambda_i K(\mathbf{x}, \mathbf{x}_i)$.
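Evaluating this decision function can be sketched as below. The RBF kernel, the support vectors, and the multipliers are illustrative assumptions, not values from this work; in practice the $\lambda_i$ come from solving the quadratic program.

```python
import math

def rbf_kernel(u, v, gamma=1.0):
    """Gaussian RBF kernel K(u, v) = exp(-gamma * ||u - v||^2),
    one common choice of Mercer kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def svm_decision(x, support_vectors, labels, lambdas, bias=0.0, gamma=1.0):
    """f(x) = sum_i l_i * lambda_i * K(x, x_i) + bias;
    the sign of f gives the predicted class (+1 or -1)."""
    f = bias + sum(l * lam * rbf_kernel(x, sv, gamma)
                   for sv, l, lam in zip(support_vectors, labels, lambdas))
    return 1 if f >= 0 else -1
```

Only the support vectors (those with $\lambda_i > 0$) contribute to the sum, which is why the trained classifier depends on them alone.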

#### 4.7. Artificial Neural Networks (ANN)

For an input feature vector $\mathbf{x} \in \mathbb{R}^N$, the target output is 1 for the class that the vector belongs to, and 0 for all other output neurons. The sigmoid function used as the activation function in the hidden and output layers is $f(v) = \frac{1}{1 + e^{-v}}$.

where $\mathbf{w}$ is the weight vector, $t_{ik}$ and $o_{ik}$ are the desired and actual output values for the $i$th training pattern and the $k$th output neuron, and $I$ is the total number of training patterns. When the entire training set has been presented once, an epoch is completed. The error between the desired and actual outputs is computed at the end of each iteration, and these errors are averaged at the end of each epoch (Equation (22)). The training process is terminated when a certain precision goal on the average error is reached or when the specified maximum number of epochs (5,000) is exceeded, whichever occurs earlier; the latter case occurs very rarely. The acceptable average error level is set to 0.06. The weights are initialized randomly with a uniform distribution in the interval [0, 1], and the learning rate is chosen as 0.3.
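The activation and the epoch-average error check can be sketched as follows. The mean-squared normalization over patterns and output neurons is an assumption, since Equation (22) is not reproduced in this excerpt.

```python
import math

def sigmoid(v):
    """Logistic activation used in the hidden and output layers."""
    return 1.0 / (1.0 + math.exp(-v))

def average_epoch_error(targets, outputs):
    """Squared error between desired (t_ik) and actual (o_ik) outputs,
    averaged over all patterns and output neurons at the end of an
    epoch; the exact normalization in Equation (22) may differ."""
    total = sum((t - o) ** 2
                for t_row, o_row in zip(targets, outputs)
                for t, o in zip(t_row, o_row))
    count = sum(len(t_row) for t_row in targets)
    return total / count

def should_stop(avg_error, epoch, goal=0.06, max_epochs=5000):
    """Stopping rule described above: precision goal reached or the
    maximum number of epochs exceeded, whichever occurs earlier."""
    return avg_error <= goal or epoch >= max_epochs
```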

## 5. Experimental Results

#### 5.1. Computational Cost of the Classification Techniques

#### 5.2. Discussion

## 6. Potential Application Areas

## 7. Conclusions and Future Work

## Appendix A: Principal Component Analysis (Karhunen–Loève Transformation)

- The mean of each feature is calculated and subtracted from the training feature vectors.
- The covariance matrix of the training feature vectors is calculated.
- The eigenvalues and eigenvectors of the covariance matrix are calculated.
- The transformation matrix is obtained by arranging the eigenvectors in descending order of their eigenvalues.
- The features are transformed into a new space where they become uncorrelated.
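The steps above can be sketched for the two-dimensional case, where the symmetric $2 \times 2$ covariance matrix has a closed-form eigendecomposition; this is purely illustrative of the procedure.

```python
import math

def pca_2d(data):
    """PCA on 2-D feature vectors: center, form the covariance matrix,
    eigendecompose (closed form for a symmetric 2x2 matrix), and sort
    eigenpairs by descending eigenvalue."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    centered = [(p[0] - mx, p[1] - my) for p in data]
    a = sum(x * x for x, _ in centered) / n   # cov(x, x)
    c = sum(y * y for _, y in centered) / n   # cov(y, y)
    b = sum(x * y for x, y in centered) / n   # cov(x, y)
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    eigvals = [(a + c + disc) / 2, (a + c - disc) / 2]  # descending
    eigvecs = []
    for lam in eigvals:
        if abs(b) > 1e-12:
            vx, vy = b, lam - a      # solves (a - lam) vx + b vy = 0
        elif abs(lam - a) < abs(lam - c):
            vx, vy = 1.0, 0.0        # axis-aligned case
        else:
            vx, vy = 0.0, 1.0
        norm = math.hypot(vx, vy)
        eigvecs.append((vx / norm, vy / norm))
    return eigvals, eigvecs
```

Projecting the centered data onto the leading eigenvectors yields the uncorrelated, reduced feature set used in Section 3.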

## Acknowledgments

## References

- Sukkarieh, S.; Nebot, E.M.; Durrant-Whyte, H.F. A high integrity IMU/GPS navigation loop for autonomous land vehicle applications. IEEE Trans. Rob. Autom.
**1999**, 15, 572–578. [Google Scholar] - Tao, Y.; Hu, H.; Zhou, H. Integration of vision and inertial sensors for 3D arm motion tracking in home-based rehabilitation. Int. J. Rob. Res.
**2007**, 26, 607–624. [Google Scholar] - Zhu, R.; Zhou, Z. A real-time articulated human motion tracking using tri-axis inertial/magnetic sensors package. IEEE Trans. Neural Syst. Rehab. Eng.
**2004**, 12, 295–302. [Google Scholar] - Cox, I.J.; Wilfong, G.T. (Eds.) Autonomous Robot Vehicles; Section on Inertial Navigation; Springer-Verlag: New York, NY, USA, 1990.
- Leondes, C.T. (Ed.) Theory and Applications of Kalman Filtering; Sponsored by NATO Advisory Group for Aerospace Research and Development; Technical Editing and Reproduction: London, UK, 1970.
- Mackenzie, D.A. Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance; MIT Press: Cambridge, MA, USA, 1990. [Google Scholar]
- Barshan, B.; Durrant-Whyte, H.F. Inertial navigation systems for mobile robots. IEEE Trans. Rob. Autom.
**1995**, 11, 328–342. [Google Scholar] - Tan, C.W.; Park, S. Design of accelerometer-based inertial navigation systems. IEEE Trans. Instrum. Meas.
**2005**, 54, 2520–2530. [Google Scholar] - Vaganay, J.; Aldon, M.J. Attitude estimation for a vehicle using inertial sensors. Proceedings of the 1st IFAC International Workshop on Intell. Auton. Vehicles, Southampton, Hampshire, UK, April 18–21, 1993; Charnley, D., Ed.; Pergamon: Oxford, UK, 1993; pp. 89–94. [Google Scholar]
- Nichol, J.G.; Singh, S.P.N.; Waldron, K.J.; Palmer, L.R., III; Orin, D.E. System design of a quadrupedal galloping machine. Int. J. Rob. Res.
**2004**, 23, 1013–1027. [Google Scholar] - Lin, P.C.; Komsuoglu, H.; Koditschek, D.E. Sensor data fusion for body state estimation in a hexapod robot with dynamical gaits. IEEE Trans. Rob.
**2006**, 22, 932–943. [Google Scholar] - Ang, W.T.; Khosla, P.K.; Riviere, C.N. Design of all-accelerometer inertial measurement unit for tremor sensing in hand-held microsurgical instrument. Proceedings of IEEE International Conference on Robotics and Automation, Taipei, Taiwan, September, 2003; 2, pp. 1781–1786.
- Ang, W.T.; Pradeep, P.K.; Riviere, C.N. Active tremor compensation in microsurgery. Proceedings of the 26th Annual International Conference of the IEEE EMBS, San Francisco, CA, USA, September 1–5, 2004; 1, pp. 2738–2741.
- Maenaka, K. MEMS inertial sensors and their applications. Proceeding of the 5th International Conference on Networked Sensing Systems, Kanazawa, Japan, June 17–19, 2008; pp. 71–73.
- Mathie, M.J.; Celler, B.G.; Lovell, N.H.; Coster, A.C.F. Classification of basic daily movements using a triaxial accelerometer. Med. Biol. Eng. Comput.
**2004**, 42, 679–687. [Google Scholar] - Hauer, K.; Lamb, S.E.; Jorstad, E.C.; Todd, C.; Becker, C. Systematic review of definitions and methods of measuring falls in randomised controlled fall prevention trials. Age Ageing
**2006**, 35, 5–10. [Google Scholar] - Noury, N.; Fleury, A.; Rumeau, P.; Bourke, A.K.; Laighin, G.O.; Rialle, V.; Lundy, J.E. Fall detection—principles and methods. Proceedings of the 29th Annual International Conferences of IEEE EMBS, Lyon, France, August 23–26, 2007; pp. 1663–1666.
- Kangas, M.; Konttila, A.; Lindgren, P.; Winblad, I.; Jämsä, T. Comparison of low-complexity fall detection algorithms for body attached accelerometers. Gait Posture
**2008**, 28, 285–291. [Google Scholar] - Wu, W.H.; Bui, A.A.T.; Batalin, M.A.; Liu, D.; Kaiser, W.J. Incremental diagnosis method for intelligent wearable sensor system. IEEE Trans. Inf. Technol. B.
**2007**, 11, 553–562. [Google Scholar] - Jovanov, E.; Milenkovic, A.; Otto, C.; de Groen, P.C. A wireless body area network of intelligent motion sensors for computer assisted physical rehabilitation. J. NeuroEng. Rehab.
**2005**, 2. [Google Scholar] [CrossRef] - Pärkkä, J.; Ermes, M.; Korpipää, P.; Mäntyjärvi, J.; Peltola, J.; Korhonen, I. Activity classification using realistic data from wearable sensors. IEEE Trans. Inf. Technol. B.
**2006**, 10, 119–128. [Google Scholar] - Ermes, M.; Pärkkä, J.; Mäntyjärvi, J.; Korhonen, I. Detection of daily activities and sports with wearable sensors in controlled and uncontrolled conditions. IEEE Trans. Inf. Technol. B.
**2008**, 12, 20–26. [Google Scholar] - Aylward, R.; Paradiso, J.A. Sensemble: A wireless, compact, multi-user sensor system for interactive dance. Proceedings of Conference on New Interfaces for Musical Expression, Paris, France, June 4–8, 2006; pp. 134–139.
- Lee, J.; Ha, I. Real-time motion capture for a human body using accelerometers. Robotica
**2001**, 19, 601–610. [Google Scholar] - Shiratori, T.; Hodgins, J.K. Accelerometer-based user interfaces for the control of a physically simulated character. ACM T. Graphic.
**2008**, 27(123). [Google Scholar] - Zijlstra, W.; Aminian, K. Mobility assessment in older people: new possibilities and challenges. Eur. J. Ageing
**2007**, 4, 3–12. [Google Scholar] - Mathie, M.J.; Coster, A.C.F.; Lovell, N.H.; Celler, B.G. Accelerometry: providing an integrated, practical method for long-term, ambulatory monitoring of human movement. Physiol. Meas.
**2004**, 25, R1–R20. [Google Scholar] - Wong, W.Y.; Wong, M.S.; Lo, K.H. Clinical applications of sensors for human posture and movement analysis: A review. Prosthet. Orthot. Int.
**2007**, 31, 62–75. [Google Scholar] - Sabatini, A.M. Inertial sensing in biomechanics: a survey of computational techniques bridging motion analysis and personal navigation. In Computational Intelligence for Movement Sciences: Neural Networks and Other Emerging Techniques; Begg, R.K., Palaniswami, M., Eds.; Idea Group Publishing: Hershey, PA, USA, 2006; pp. 70–100. [Google Scholar]
- Tunçel, O. Human Activity Classification with Miniature Inertial Sensors. Master's thesis, Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey, July 2009. [Google Scholar]
- Moeslund, T.B.; Granum, E. A survey of computer vision-based human motion capture. Comput. Vis. Image Und.
**2001**, 81, 231–268. [Google Scholar] - Moeslund, T.B.; Hilton, A.; Krüger, V. A survey of advances in vision-based human motion capture and analysis. Comput. Vis. Image Und.
**2006**, 104, 90–126. [Google Scholar] - Wang, L.; Hu, W.; Tan, T. Recent developments in human motion analysis. Pattern Recogn.
**2003**, 36, 585–601. [Google Scholar] - Aggarwal, J.K.; Cai, Q. Human motion analysis: a review. Comput. Vis. Image Und.
**1999**, 73, 428–440. [Google Scholar] - Hyeon-Kyu, L.; Kim, J.H. An HMM-based threshold model approach for gesture recognition. IEEE Trans. Pattern Anal.
**1999**, 21, 961–973. [Google Scholar] - Junker, H.; Amft, O.; Lukowicz, P.; Troester, G. Gesture spotting with body-worn inertial sensors to detect user activities. Pattern Recogn.
**2008**, 41, 2010–2024. [Google Scholar] - Lementec, J.C.; Bajcsy, P. Recognition of arm gestures using multiple orientation sensors: gesture classification. Proceeding of the 7th International Conferences Intelligent Transportation Systems, Washington, DC, USA, October 3–6, 2004; pp. 965–970.
- Uiterwaal, M.; Glerum, E.B.C.; Busser, H.J.; van Lummel, R.C. Ambulatory monitoring of physical activity in working situations, a validation study. J. Med. Eng. Technol.
**1998**, 22, 168–172. [Google Scholar] - Bussmann, J.B.; Reuvekamp, P.J.; Veltink, P.H.; Martens, W.L.; Stam, H.J. Validity and reliability of measurements obtained with an “activity monitor” in people with and without transtibial amputation. Phys. Ther.
**1998**, 78, 989–998. [Google Scholar] - Aminian, K.; Robert, P.; Buchser, E.E.; Rutschmann, B.; Hayoz, D.; Depairon, M. Physical activity monitoring based on accelerometry: validation and comparison with video observation. Med. Biol. Eng. Comput.
**1999**, 37, 304–308. [Google Scholar] - Roetenberg, D.; Slycke, P.J.; Veltink, P.H. Ambulatory position and orientation tracking fusing magnetic and inertial sensing. IEEE Trans. Bio-med. Eng.
**2007**, 54, 883–890. [Google Scholar] - Najafi, B.; Aminian, K.; Loew, F.; Blanc, Y.; Robert, P. Measurement of stand-sit and sit-stand transitions using a miniature gyroscope and its application in fall risk evaluation in the elderly. IEEE Trans. Bio-med. Eng.
**2002**, 49, 843–851. [Google Scholar] - Najafi, B.; Aminian, K.; Paraschiv-Ionescu, A.; Loew, F.; Büla, C.J.; Robert, P. Ambulatory system for human motion analysis using a kinematic sensor: monitoring of daily physical activity in the elderly. IEEE Trans. Bio-med. Eng.
**2003**, 50, 711–723. [Google Scholar] - Viéville, T.; Faugeras, O.D. Cooperation of the Inertial and Visual Systems, 59th Ed. ed; NATO ASI Series; In Traditional and Non-Traditional Robotic SensorsSpringer-Verlag: Berlin, Heidelberg, Germany, 1990; Volume F63, pp. 339–350. [Google Scholar]
- Workshop on Integration of Vision and Inertial Sensors (InerVis).
- Special Issue on the 2nd Workshop on Integration of Vision and Inertial Sensors (InerVis05). Int J Rob Res.
**2007**, 26, 515–517. - Bao, L.; Intille, S.S. Activity recognition from user-annotated acceleration data. Proceedings of Pervasive Computing: Second International Conference of Lecture Notes in Computer Science, Vienna, Austria, April 21–23, 2004; 3001, pp. 1–17. Available online: http://web.media.mit.edu/intille/papers-files/BaoIntille04.pdf (accessed October 14, 2009).
- Veltink, P.H.; Bussman, H.B.J.; de Vries, W.; Martens, W.L.J.; van Lummel, R.C. Detection of static and dynamic activities using uniaxial accelerometers. IEEE Trans Rehabil. Eng.
**1996**, 4, 375–385. [Google Scholar] - Kiani, K.; Snijders, C.J.; Gelsema, E.S. Computerized analysis of daily life motor activity for ambulatory monitoring. Technol. Health Care
**1997**, 5, 307–318. [Google Scholar] - Foerster, F.; Smeja, M.; Fahrenberg, J. Detection of posture and motion by accelerometry: a validation study in ambulatory monitoring. Comput. Hum. Behav.
**1999**, 15, 571–583. [Google Scholar] - Karantonis, D.M.; Narayanan, M.R.; Mathie, M.; Lovell, N.H.; Celler, B.G. Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Trans. Inf. Technol. B.
**2006**, 10, 156–167. [Google Scholar] - Allen, F.R.; Ambikairajah, E.; Lovell, N.H.; Celler, B.G. Classification of a known sequence of motions and postures from accelerometry data using adapted Gaussian mixture models. Physiol. Meas.
**2006**, 27, 935–951. [Google Scholar] - Kern, N.; Schiele, B.; Schmidt, A. Multi-sensor activity context detection for wearable computing. Proceedings of European Symposium on Ambient Intelligence, Lecture Notes in Computer Science, Eindhoven, The Netherlands, November, 2003; 2875, pp. 220–232. Available online: http://citeseer.ist.psu.edu/kern03multisensor.html (accessed October 14, 2009).
- Shelley, T.; Barrett, J. Vibrating gyro to keep cars on route. In Eureka on Campus, Engineering Materials and Design; Spring: Berlin, Germany, 1992; Volume 4, p. 17. [Google Scholar]
- Murata Manufacturing Co., Ltd. Murata Gyrostar ENV-05A Piezoelectric Vibratory Gyroscope Datasheet.; Murata Manufacturing Co., Ltd.: Nagaokakyo, Japan, 1994. [Google Scholar]
- Jain, A.K. Fundamentals of Digital Image Processing; Prentice Hall: Upper Saddle River, NJ, USA, 1989. [Google Scholar]
- Webb, A. Statistical Pattern Recognition; John Wiley & Sons: New York, NY, USA, 2002. [Google Scholar]
- Rosenblatt, M. Remarks on some nonparametric estimates of a density function. Ann. Math. Stat.
**1956**, 27, 832–837. [Google Scholar] - Nadler, M.; Smith, E.P. Pattern Recognition Engineering; John Wiley & Sons: New York, NY, USA, 1993. [Google Scholar]
- Theodoridis, S.; Koutroumbas, K. Pattern Recognition; Academic Press: Orlando, FL, USA, 2006. [Google Scholar]
- Fukunaga, K. Introduction to Statistical Pattern Recognition, 2nd ed.; Academic Press: San Diego, CA, USA, 1990. [Google Scholar]
- Silverman, B.W. Density Estimation for Statistics and Data Analysis; Chapman and Hall: New York, NY, USA, 1986. [Google Scholar]
- Deller, J.R.; Hansen, J.H.L.; Proakis, J.G. Discrete-Time Processing of Speech Signals; IEEE: New York, NY, USA, 2000. [Google Scholar]
- Keogh, E.; Ratanamahatana, C.A. Exact indexing of dynamic time warping. Knowl. Inf. Syst.
**2005**, 7, 358–386. [Google Scholar] - Rath, T.M.; Manmatha, R. Word image matching using dynamic time warping. Proceedings of 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 16–22, 2003; Madison, Wisconsin; 2, pp. 521–527.
- Konidaris, T.; Gatos, B.; Perantonis, S.J.; Kesidis, A. Keyword matching in historical machine-printed documents using synthetic data, word portions and dynamic time warping. Proceedings of the 8th IAPR International Workshop on Document Analysis Systems, Nara, Japan, September 16–19, 2008; pp. 539–545.
- Zanuy, M.F. On-line signature recognition based on VQ-DTW. Pattern Recogn
**2007**, *40*, 981–992. [Google Scholar] - Chang, W.D.; Shin, J. Modified dynamic time warping for stroke-based on-line signature verification. Proceedings of the 9th International Conference on Document Analysis and Recognition, Curitiba, Brazil, September 23–26, 2007; Volume 2, pp. 724–728. [Google Scholar]
- Boulgouris, N.V.; Plataniotis, K.N.; Hatzinakos, D. Gait recognition using dynamic time warping. Proceedings of the IEEE 6th Workshop on Multimedia Signal Processing, Siena, Italy, September 29–October 1, 2004; pp. 263–266. [Google Scholar]
- Huang, B.; Kinsner, W. ECG frame classification using dynamic time warping. Proceedings of the IEEE Canadian Conference on Electrical and Computer Engineering, Winnipeg, MB, Canada, May 12–15, 2002; Volume 2, pp. 1105–1110. [Google Scholar]
- Vullings, H.J.L.M.; Verhaegen, M.H.G.; Verbruggen, H.B. Automated ECG segmentation with dynamic time warping. Proceedings of the 20th Annual International Conference of the IEEE/EMBS, Hong Kong, China, October 29–November 1, 1998; Volume 20, pp. 163–166. [Google Scholar]
- Tuzcu, V.; Nas, S. Dynamic time warping as a novel tool in pattern recognition of ECG changes in heart rhythm disturbances. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Hawaii, USA, October 10–12, 2005; Volume 1, pp. 182–186. [Google Scholar]
- Kovács-Vajna, Z.M. A fingerprint verification system based on triangular matching and dynamic time warping. IEEE Trans. Pattern Anal. Mach. Intell. **2000**, *22*, 1266–1276. [Google Scholar]
- López, L.E.M.; Elías, R.P.; Tavira, J.V. Face localization in color images using dynamic time warping and integral projections. Proceedings of the International Joint Conference on Neural Networks, Orlando, FL, USA, August 12–17, 2007; pp. 892–896. [Google Scholar]
- Parsons, T. Voice and Speech Processing; McGraw-Hill: New York, NY, USA, 1986. [Google Scholar]
- Vapnik, V.N. Estimation of Dependences Based on Empirical Data; Nauka: Moscow, Russia, 1979; in Russian (English translation: Springer-Verlag: New York, NY, USA, 1982). [Google Scholar]
- Hsu, C.W.; Chang, C.C.; Lin, C.J. A Practical Guide to Support Vector Classification; Department of Computer Science, National Taiwan University: Taipei, Taiwan, 2008. [Google Scholar]
- Tong, S.; Koller, D. Support vector machine active learning with applications to text classification. J. Mach. Learn. Res. **2001**, *2*, 45–66. [Google Scholar]
- Schölkopf, B.; Smola, A.J. Learning with Kernels; The MIT Press: Cambridge, MA, USA, 2002. [Google Scholar]
- Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification, 2nd ed.; John Wiley & Sons: New York, NY, USA, 2001. [Google Scholar]
- Duin, R.P.W.; Juszczak, P.; Paclik, P.; Pekalska, E.; de Ridder, D.; Tax, D.M.J. A Matlab Toolbox for Pattern Recognition, PRTools4; Delft University of Technology: Delft, The Netherlands, 2004. [Google Scholar]
- Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. 2001. Available online: http://www.csie.ntu.edu.tw/cjlin/libsvm (accessed October 14, 2009). [Google Scholar]
- Haykin, S. Neural Networks: A Comprehensive Foundation; Macmillan Publishing: New York, NY, USA, 1994. [Google Scholar]
- Hagan, M.T.; Demuth, H.B.; Beale, M.H. Neural Network Design; PWS Publishing: Boston, MA, USA, 1996. [Google Scholar]
- Kil, D.H.; Shin, F.B. Pattern Recognition and Prediction with Applications to Signal Characterization; American Institute of Physics: New York, NY, USA, 1996. [Google Scholar]
- Smith, L.I. A Tutorial on Principal Components Analysis; Technical Report; 2002. Available online: http://cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf (accessed October 14, 2009). [Google Scholar]

**Figure 3.**Position of the two gyroscopes on the human leg (body figure adapted from http://www.answers.com/body breadths).

**Figure 8.**An example of the selection of the parameter k in the k-NN algorithm. The inner circle corresponds to k = 4 and the outer circle corresponds to k = 12, producing different classification results for the test vector.
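The situation in Figure 8 can be reproduced in a few lines: a minimal k-NN sketch on toy 2-D points, chosen so that the majority vote flips between k = 4 and k = 12 (the coordinates and labels below are illustrative, not from the paper's data).

```python
from collections import Counter
import math

def knn_classify(train, test, k):
    """Majority vote among the k nearest training samples (Euclidean)."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], test))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy data mirroring Figure 8: three 'A' samples very close to the test
# point, nine 'B' samples farther away.
train = [((0.1, 0.0), 'A'), ((0.0, 0.1), 'A'), ((-0.1, 0.0), 'A'),
         ((0.0, -0.2), 'B')] + [((math.cos(t), math.sin(t)), 'B')
                                for t in range(8)]
test = (0.0, 0.0)

print(knn_classify(train, test, k=4))   # 'A': 3 of the 4 nearest are A
print(knn_classify(train, test, k=12))  # 'B': 9 of the 12 nearest are B
```

The inner circle of the figure corresponds to the k = 4 vote, the outer circle to the k = 12 vote.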

**Figure 11.**In (a), (c), and (e), the upper curves show reference vectors and the lower curves show test vectors of size 32 × 1. Parts (b), (d), and (f) illustrate the corresponding least-cost warp paths between the two feature vectors. In (a), the reference and test vectors are from different classes; in (c) and (e), they are from the same class.
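The least-cost warp paths shown in Figure 11 can be computed with the textbook dynamic-programming recursion for DTW. The sketch below uses short toy sequences rather than the paper's 32 × 1 feature vectors.

```python
import math

def dtw_distance(s, t):
    """DTW cost between two 1-D sequences via the standard
    O(len(s)*len(t)) dynamic program (not the paper's exact code)."""
    n, m = len(s), len(t)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

a = [0, 1, 2, 1, 0]      # reference
b = [0, 0, 1, 2, 1, 0]   # same shape, stretched in time
c = [2, 2, 2, 2, 2]      # genuinely different shape

print(dtw_distance(a, b))  # 0.0: warping absorbs the time stretch
print(dtw_distance(a, c))  # 6.0: shapes differ regardless of warping
```

Same-class pairs such as (c) and (e) in the figure yield low warp costs like the first case; different-class pairs like (a) yield high costs like the second.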

**Figure 12.**(a) Three different hyperplanes separating two classes; (b) SVM hyperplane (solid line), its margins (dotted and dashed lines), and the support vectors (circled solid squares and dots).
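Figure 12(b) shows that the SVM picks the hyperplane with the largest margin and that the support vectors lie exactly on the margin lines w·x + b = ±1. A minimal numeric sketch with an assumed 2-D hyperplane (the weights and points are illustrative, not fitted to the paper's data):

```python
import math

# A fixed separating hyperplane w.x + b = 0 in 2-D (illustrative values).
w, b = (1.0, 1.0), -3.0

def signed_distance(x, w, b):
    """Signed Euclidean distance from point x to the hyperplane w.x + b = 0."""
    return (w[0] * x[0] + w[1] * x[1] + b) / math.hypot(*w)

# Support vectors lie exactly on the margins w.x + b = +1 and -1.
sv_pos, sv_neg = (2.0, 2.0), (1.0, 1.0)

margin = 2.0 / math.hypot(*w)  # the quantity the SVM maximizes

print(round(signed_distance(sv_pos, w, b), 4))  # +0.7071 = +1/||w||
print(round(signed_distance(sv_neg, w, b), 4))  # -0.7071 = -1/||w||
print(round(margin, 4))                         # 1.4142 = 2/||w||
```

Of the three candidate hyperplanes in Figure 12(a), only the max-margin one generalizes this distance guarantee to both classes.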

**Figure 13.**Correct classification rates of the k-NN algorithm for (a) k = 1, …, 28 (RRSS) and (b) k = 1, …, 55 (LOO).

**Table 1.**Features selected by inspection (left) and by using the covariance matrix (right).

| features selected by inspection | features selected from the covariance matrix |
|---|---|
| 1: min value of cross-correlation | 1: min value of gyro 2 |
| 2: max value of cross-correlation | 2: min value of gyro 1 |
| 3: variance of gyro 2 | 3–8: 6 samples of the autocorrelation function of gyro 2 |
| 4: min value of gyro 1 | 9: 1st max peak of DFT of gyro 2 |
| 5: max value of gyro 1 | 10: max value of gyro 2 |
| 6: skewness of gyro 1 | 11: min value of autocorrelation of gyro 2 |
| 7: skewness of gyro 2 | 12: 3rd max peak of DFT of gyro 2 |
| 8: mean of gyro 2 | 13: max value of gyro 1 |
| 9: min value of gyro 2 | 14: min value of cross-correlation |
| 10–14: maximum 5 peaks of DFT of gyro 2 | |

**Table 2.**Sample SFFS results for two different runs. The percentages are average correct classification rates over all classification techniques.

| features selected (1st run) | % | features selected (2nd run) | % |
|---|---|---|---|
| max value of gyro 1 | 56.7 | max value of gyro 1 | 56.8 |
| max value of cross-correlation | 86.2 | max value of cross-correlation | 86.9 |
| 3rd max peak of DFT of gyro 2 | 93.8 | min value of gyro 2 | 93.8 |
| variance of gyro 2 | 95.0 | 3rd max peak of DFT of gyro 2 | 95.8 |
| min value of cross-correlation | 95.9 | min value of cross-correlation | 96.3 |
| min value of gyro 2 | 96.1 | skewness of gyro 1 | 97.2 |
| skewness of gyro 1 | 96.8 | 2nd DCT coefficient of gyro 2 | 97.4 |
| 5th max peak of DFT of gyro 2 | 97.0 | | |
| 6th DCT coefficient of gyro 2 | 97.2 | | |
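The incremental rates in Table 2 reflect how SFFS grows the feature set one feature at a time, with a floating backward step that can discard a previously chosen feature when doing so improves the criterion. A generic SFFS skeleton is sketched below; the scoring function is a hypothetical stand-in for the average correct classification rate used in the paper.

```python
def sffs(features, score, target_size):
    """Sequential floating forward selection skeleton.
    `score(subset)` returns the criterion to maximize; a full
    implementation would also guard against add/remove cycles."""
    selected = []
    while len(selected) < target_size:
        # Forward step: add the single best remaining feature.
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
        # Floating backward step: drop any feature whose removal
        # improves the criterion, repeating while it helps.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in list(selected):
                reduced = [g for g in selected if g != f]
                if score(reduced) > score(selected):
                    selected = reduced
                    improved = True
    return selected

# Toy criterion (hypothetical): feature 'c' only pays off together with 'a'.
def toy_score(sub):
    base = sum({'a': 3, 'b': 2, 'c': 1, 'd': 0.5}[f] for f in sub)
    return base + (2 if 'a' in sub and 'c' in sub else 0)

print(sffs(['a', 'b', 'c', 'd'], toy_score, target_size=3))
```

The floating step is what distinguishes SFFS from plain sequential forward selection: an early pick can later be revoked once better feature combinations appear.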

**Table 3.**Correct differentiation rates for different feature reduction methods and RRSS cross validation.

| method | by inspection (14 features) | PCA to 14 features (6 features) | covariance matrix (14 features) | PCA to 101 features (8 features) | SFFS (6 features) |
|---|---|---|---|---|---|
| BDM | 97.5 | 97.7 | 96.2 | 98.0 | 97.3 |
| LSM | 97.0 | 96.9 | 91.8 | 88.5 | 94.6 |
| k-NN (k = 1) | 96.9 | 96.9 | 95.3 | 94.9 | 96.4 |
| DTW-1 | 92.1 | 92.2 | 87.9 | 82.6 | 95.4 |
| DTW-2 | 96.9 | 96.3 | 95.1 | 93.6 | 95.7 |
| SVM | 99.2 | 99.1 | 94.6 | 94.6 | 97.2 |
| ANN | 88.6 | 90.2 | 87.7 | 88.8 | 87.8 |

**Table 4.**Correct differentiation rates for different feature reduction methods and P-fold cross validation.

| method | by inspection (14 features) | PCA to 14 features (6 features) | covariance matrix (14 features) | PCA to 101 features (8 features) | SFFS (6 features) |
|---|---|---|---|---|---|
| BDM | 98.9 | 98.5 | 98.1 | 99.1 | 98.1 |
| LSM | 97.3 | 97.5 | 92.1 | 89.5 | 94.6 |
| k-NN (k = 1) | 97.1 | 98.1 | 94.8 | 95.4 | 97.4 |
| DTW-1 | 91.8 | 92.8 | 87.7 | 83.8 | 95.7 |
| DTW-2 | 98.0 | 96.9 | 96.1 | 95.2 | 97.0 |
| SVM | 99.7 | 99.4 | 95.3 | 96.7 | 97.9 |
| ANN | 86.4 | 88.8 | 85.0 | 83.2 | 84.4 |

**Table 5.**Correct differentiation rates for different feature reduction methods and LOO cross validation.

| method | by inspection (14 features) | PCA to 14 features (6 features) | covariance matrix (14 features) | PCA to 101 features (8 features) | SFFS (6 features) |
|---|---|---|---|---|---|
| BDM | 99.1 | 99.3 | 98.2 | 99.1 | 98.2 |
| LSM | 97.1 | 97.3 | 92.0 | 90.4 | 94.2 |
| k-NN (k = 1) | 97.1 | 98.2 | 94.6 | 95.1 | 97.6 |
| DTW-1 | 91.7 | 93.8 | 88.0 | 83.7 | 96.0 |
| DTW-2 | 98.2 | 97.8 | 95.2 | 95.1 | 97.3 |
| SVM | 98.9 | 98.4 | 96.4 | 98.4 | 98.2 |
| ANN | 85.1 | 88.8 | 84.8 | 83.3 | 80.1 |
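The PCA columns in Tables 3–5 project the original feature sets onto their highest-variance directions. A minimal PCA sketch via eigendecomposition of the sample covariance matrix is given below; random data stands in for the actual gyroscope features.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto the top principal components
    (eigenvectors of the sample covariance matrix)."""
    Xc = X - X.mean(axis=0)                  # center each feature
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components] # largest-variance axes first
    return Xc @ top

rng = np.random.default_rng(0)
# 100 samples of 14 hypothetical features, mimicking the size of the
# "by inspection" feature set (the paper's features come from the
# gyroscope signals, not random data).
X = rng.normal(size=(100, 14))
Z = pca_reduce(X, n_components=6)
print(Z.shape)  # (100, 6)
```

The projected columns are ordered by decreasing variance, so truncating to 6 (or 8) components keeps the directions that explain most of the spread in the feature vectors.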

| true \ classified | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 |
|---|---|---|---|---|---|---|---|---|
| M1 | 56 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| M2 | 0 | 55 | 0 | 0 | 0 | 0 | 0 | 1 |
| M3 | 0 | 0 | 56 | 0 | 0 | 0 | 0 | 0 |
| M4 | 0 | 0 | 0 | 54 | 2 | 0 | 0 | 0 |
| M5 | 0 | 0 | 0 | 3 | 53 | 0 | 0 | 0 |
| M6 | 0 | 0 | 0 | 0 | 0 | 56 | 0 | 0 |
| M7 | 0 | 0 | 0 | 0 | 0 | 0 | 56 | 0 |
| M8 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 54 |

| true \ classified | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 |
|---|---|---|---|---|---|---|---|---|
| M1 | 56 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| M2 | 0 | 56 | 0 | 0 | 0 | 0 | 0 | 0 |
| M3 | 0 | 0 | 49 | 0 | 0 | 0 | 7 | 0 |
| M4 | 0 | 0 | 0 | 46 | 10 | 0 | 0 | 0 |
| M5 | 0 | 0 | 0 | 4 | 52 | 0 | 0 | 0 |
| M6 | 0 | 0 | 0 | 0 | 0 | 56 | 0 | 0 |
| M7 | 0 | 0 | 1 | 0 | 0 | 0 | 55 | 0 |
| M8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 56 |

| true \ classified | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 |
|---|---|---|---|---|---|---|---|---|
| M1 | 56 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| M2 | 0 | 46 | 0 | 0 | 0 | 0 | 0 | 10 |
| M3 | 0 | 0 | 54 | 2 | 0 | 0 | 0 | 0 |
| M4 | 0 | 0 | 0 | 50 | 6 | 0 | 0 | 0 |
| M5 | 0 | 0 | 0 | 3 | 53 | 0 | 0 | 0 |
| M6 | 0 | 0 | 0 | 0 | 0 | 56 | 0 | 0 |
| M7 | 0 | 0 | 0 | 0 | 0 | 0 | 56 | 0 |
| M8 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 51 |

| true \ classified | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 |
|---|---|---|---|---|---|---|---|---|
| M1 | 56 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| M2 | 0 | 52 | 0 | 0 | 0 | 0 | 0 | 4 |
| M3 | 0 | 0 | 56 | 0 | 0 | 0 | 0 | 0 |
| M4 | 0 | 0 | 0 | 52 | 4 | 0 | 0 | 0 |
| M5 | 0 | 0 | 0 | 2 | 54 | 0 | 0 | 0 |
| M6 | 0 | 0 | 0 | 0 | 0 | 56 | 0 | 0 |
| M7 | 0 | 0 | 0 | 0 | 0 | 0 | 56 | 0 |
| M8 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 55 |

| true \ classified | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 |
|---|---|---|---|---|---|---|---|---|
| M1 | 56 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| M2 | 0 | 49 | 0 | 0 | 0 | 2 | 0 | 5 |
| M3 | 0 | 0 | 56 | 0 | 0 | 0 | 0 | 0 |
| M4 | 0 | 0 | 0 | 52 | 4 | 0 | 0 | 0 |
| M5 | 0 | 0 | 1 | 3 | 52 | 0 | 0 | 0 |
| M6 | 0 | 0 | 0 | 0 | 0 | 56 | 0 | 0 |
| M7 | 0 | 0 | 2 | 0 | 0 | 0 | 54 | 0 |
| M8 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 55 |

| true \ classified | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 |
|---|---|---|---|---|---|---|---|---|
| M1 | 56 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| M2 | 0 | 54 | 0 | 0 | 0 | 0 | 0 | 2 |
| M3 | 0 | 0 | 56 | 0 | 0 | 0 | 0 | 0 |
| M4 | 0 | 0 | 0 | 53 | 3 | 0 | 0 | 0 |
| M5 | 0 | 0 | 0 | 5 | 51 | 0 | 0 | 0 |
| M6 | 0 | 0 | 0 | 0 | 0 | 56 | 0 | 0 |
| M7 | 0 | 0 | 1 | 0 | 0 | 0 | 55 | 0 |
| M8 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 55 |

**Table 12.**(a) Number of correctly and incorrectly classified feature vectors out of 56 for SVMs (LOO cross validation, 98.2%); (b) the same for the ANN (LOO cross validation, 80.1%).

(a)

| true class | classified correctly | classified incorrectly |
|---|---|---|
| M1 | 56 | 0 |
| M2 | 54 | 2 |
| M3 | 56 | 0 |
| M4 | 53 | 3 |
| M5 | 53 | 3 |
| M6 | 56 | 0 |
| M7 | 56 | 0 |
| M8 | 56 | 0 |

(b)

| true class | classified correctly | classified incorrectly |
|---|---|---|
| M1 | 56 | 0 |
| M2 | 20 | 36 |
| M3 | 52 | 4 |
| M4 | 21 | 35 |
| M5 | 43 | 13 |
| M6 | 56 | 0 |
| M7 | 56 | 0 |
| M8 | 55 | 1 |

**Table 13.**Pre-processing and training times and the storage requirements of the classification methods.

Pre-processing/training time (msec) and storage requirements:

| method | RRSS | P-fold | LOO | storage requirements |
|---|---|---|---|---|
| BDM | 2.144 | 1.441 | 1.706 | mean, covariance, CCPDF |
| RBA | – | – | – | rules |
| LSM | 0.098 | 0.554 | 105.141 | average of training vectors for each class |
| k-NN (k = 1) | – | – | – | all training vectors |
| DTW-1 | 0.098 | 0.554 | 105.141 | average of training vectors for each class |
| DTW-2 | – | – | – | all training vectors |
| SVM | 72.933 | 1880.233 | 5843.133 | SVM models |
| ANN | 151,940 | 145,680 | 189,100 | network structure and connection weights |

Classification time (msec):

| method | RRSS | P-fold | LOO |
|---|---|---|---|
| BDM | 2.588 | 1.220 | 8.188 |
| RBA | 0.003 | 0.003 | 0.003 |
| LSM | 0.070 | 0.074 | 0.063 |
| k-NN (k = 1) | 0.095 | 0.452 | 24.033 |
| DTW-1 | 1.775 | 1.937 | 2.000 |
| DTW-2 | 49.640 | 94.014 | 107.400 |
| SVM | 0.009 | 0.016 | 0.132 |
| ANN | 0.882 | 2.547 | 1.391 |

© 2009 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

Tunçel, O.; Altun, K.; Barshan, B.
Classifying Human Leg Motions with Uniaxial Piezoelectric Gyroscopes. *Sensors* **2009**, *9*, 8508-8546.
https://doi.org/10.3390/s91108508
