# Enhanced Human Activity Recognition Based on Smartphone Sensor Data Using Hybrid Feature Selection Model


## Abstract


## 1. Introduction

- Human activity recognition using smartphone motion sensor data.
- A hybrid feature selection model that combines an SFFS-based filter approach with an SVM-based wrapper approach.
- Effective human activity identification using a multiclass support vector machine.

## 2. Literature Review

## 3. Proposed Model

#### 3.1. Activity Data Collection

During activity data collection, tri-axial accelerometer and gyroscope samples are recorded as $acc_{i}=(x_{i},\ y_{i},\ z_{i})$ and $gyro_{i}=(x_{i},\ y_{i},\ z_{i})$, where $i=(1,2,3,\dots,n)$.
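For illustration, a minimal sketch of how the tri-axial samples $acc_i$ and $gyro_i$ can be segmented into fixed-length windows before feature extraction. The window width, step, and sampling rate below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def sliding_windows(signal, width, step):
    """Split an (n, 3) tri-axial signal into overlapping windows.

    signal: array of samples (x_i, y_i, z_i), i = 1..n
    width:  samples per window
    step:   hop between consecutive window starts
    """
    starts = range(0, len(signal) - width + 1, step)
    return np.stack([signal[s:s + width] for s in starts])

# Hypothetical example: 10 s of 50 Hz accelerometer data,
# 2.56 s windows with 50% overlap (values are illustrative only).
acc = np.random.randn(500, 3)
windows = sliding_windows(acc, width=128, step=64)
print(windows.shape)  # (6, 128, 3)
```

Each window then yields one feature vector per axis, so the recognizer operates on windows rather than raw samples.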

#### 3.2. Heterogeneous Statistical Feature Extraction
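Several of the heterogeneous statistical features used here (mean, RMS, energy, crest factor, IQR, skewness, kurtosis, etc.) reduce to short NumPy expressions. The sketch below computes a subset of them for a single axial window `x`, assuming the standard textbook definitions rather than the paper's exact implementation:

```python
import numpy as np

def time_domain_features(x):
    """A few common statistical features for one axial window x."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "mean": np.mean(x),
        "std": np.std(x, ddof=1),                # 1/(N-1) normalization
        "rms": rms,
        "energy": np.sum(np.abs(x) ** 2),        # sum of squared magnitudes
        "peak_to_peak": np.max(x) - np.min(x),
        "crest_factor": np.max(np.abs(x)) / rms,
        "iqr": np.percentile(x, 75) - np.percentile(x, 25),
        "skewness": np.mean(((x - np.mean(x)) / np.std(x)) ** 3),
        "kurtosis": np.mean(((x - np.mean(x)) / np.std(x)) ** 4),
    }

f = time_domain_features([1.0, 2.0, 3.0, 4.0])
print(round(f["rms"], 4))  # 2.7386
```

Applying such a function to every window of every accelerometer and gyroscope axis produces the per-axis feature pools that the selection stage operates on.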

#### 3.3. Feature Selection

**Algorithm 1.** Sequential floating forward selection (SFFS).

Input: the set of all features $Y=\left\{y_{1},\ y_{2},\ \dots,\ y_{n}\right\}$
Output: a subset of features $X=\left\{x_{j}\ \middle|\ j=1,2,3,\dots,k;\ x_{j}\in Y\right\}$, where $k=\left(0,1,2,\dots,n\right)$
Steps:
1. Initialize $Y_{0}=\{\varnothing\}$, $k=0$.
2. (Inclusion) Select the best feature $X^{+}$ and update: $Y_{k+1}=Y_{k}+X^{+}$; $k=k+1$.
3. (Conditional exclusion) Select the worst feature $X^{-}$.
4. If $J\left(Y_{k}-X^{-}\right)>J\left(Y_{k}\right)$, where $J\left(\cdot\right)$ is the criterion function, then update $Y_{k-1}=Y_{k}-X^{-}$; $k=k-1$; go to step 3. Otherwise, go to step 2.
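A minimal sketch of the floating search in Algorithm 1, assuming a caller-supplied criterion function `J` (in the wrapper stage this would be, e.g., cross-validated SVM accuracy); all names and the toy criterion are illustrative, not the paper's implementation:

```python
def sffs(features, J, k_max):
    """Sequential floating forward selection (sketch).

    features: list of candidate feature identifiers (the set Y)
    J:        criterion function mapping a feature subset to a score
    k_max:    stop once the selected subset reaches this size
    """
    selected = []  # Y_0 = {}
    while len(selected) < k_max:
        # Inclusion: add the single best remaining feature.
        best = max((f for f in features if f not in selected),
                   key=lambda f: J(selected + [f]))
        selected.append(best)
        # Conditional exclusion: drop a feature while doing so
        # strictly improves the criterion.
        while len(selected) > 2:
            worst = max(selected,
                        key=lambda f: J([g for g in selected if g != f]))
            reduced = [g for g in selected if g != worst]
            if J(reduced) > J(selected):
                selected = reduced
            else:
                break
    return selected

# Toy criterion (hypothetical): reward features 'a' and 'b', penalize size.
J = lambda S: sum(f in ("a", "b") for f in S) - 0.1 * len(S)
best = sffs(["a", "b", "c", "d"], J, k_max=2)
print(best)  # ['a', 'b']
```

The backtracking step is what distinguishes SFFS from plain sequential forward selection: a feature added early can later be discarded if it becomes redundant.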

#### 3.4. Objective Function Based on Discriminant Feature

#### 3.5. Activity Recognition for Validation Purpose
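The selected features are validated with a multiclass support vector machine. The sketch below uses scikit-learn's `SVC` (one-vs-one multiclass by default) on synthetic stand-in data; the dataset, kernel, and parameters are illustrative assumptions, not the paper's configuration:

```python
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.datasets import make_classification

# Stand-in for the selected feature matrix (rows = windows, cols = features).
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale features, then fit an RBF-kernel multiclass SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
clf.fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.2f}")
```

In the proposed model, the same classifier also serves as the wrapper's evaluation function, so validation and selection share one scoring mechanism.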

## 4. Experimental Results

The optimal feature set {A_{x} f_{7}, A_{x} f_{15}, A_{y} f_{13}, A_{z} f_{1}, G_{x} f_{14}, G_{z} f_{13}} produces the best result, with an average accuracy of 96.81%, whereas the accuracy without feature selection is 90.84%. Figure 9 depicts the accuracy of individual activities with and without feature selection. In [31], the researchers used neural networks and random forests to detect human activities. In previous research, the average accuracy was below 95% for different activities. Table 4 shows the confusion matrix of classification using the optimal feature set. Figure 10 presents the classification performance of activities using the proposed feature selection model.
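The per-class recall, per-class precision, and overall accuracy reported in the confusion matrices can be reproduced from any square confusion matrix; a generic sketch (toy numbers, not the paper's data):

```python
import numpy as np

def confusion_metrics(cm):
    """Metrics from a confusion matrix cm, where cm[i, j] counts
    samples of actual class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    recall = np.diag(cm) / cm.sum(axis=1)     # per actual-class row totals
    precision = np.diag(cm) / cm.sum(axis=0)  # per predicted-class column totals
    accuracy = np.trace(cm) / cm.sum()        # correct predictions / all samples
    return recall, precision, accuracy

# Toy 2-class example (illustrative numbers only).
cm = [[90, 10],
      [5, 95]]
recall, precision, accuracy = confusion_metrics(cm)
print(recall)    # [0.9  0.95]
print(accuracy)  # 0.925
```

Recall is computed along rows (how much of each actual activity is recovered) and precision along columns (how reliable each predicted label is), which is exactly how the Recall column and Precision row of the tables are laid out.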

## 5. Additional Experiments

## 6. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

1. Osmani, V.; Balasubramaniam, S.; Botvich, D. Human activity recognition in pervasive health-care: Supporting efficient remote collaboration. J. Netw. Comput. Appl. **2008**, 31, 628–655.
2. Tentori, M.; Favela, J. Activity-aware computing for healthcare. IEEE Pervasive Comput. **2008**, 7, 51–57.
3. Shao, L.; Ji, L.; Liu, Y.; Zhang, J. Human action segmentation and recognition via motion and shape analysis. Pattern Recognit. Lett. **2012**, 33, 438–445.
4. Tosato, D.; Spera, M.; Cristani, M.; Murino, V. Characterizing humans on Riemannian manifolds. IEEE Trans. Pattern Anal. Mach. Intell. **2013**, 35, 1972–1984.
5. Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Activity recognition using cell phone accelerometers. ACM SIGKDD Explor. Newsl. **2010**, 12, 74–82.
6. Wang, X.; Rosenblum, D.; Wang, Y. Context-aware mobile music recommendation for daily activities. In Proceedings of the 20th ACM International Conference on Multimedia, Nara, Japan, 29 October–2 November 2012; pp. 99–108.
7. Hsu, Y.L.; Yang, S.C.; Chang, H.C.; Lai, H.C. Human daily and sport activity recognition using a wearable inertial sensor network. IEEE Access **2018**, 6, 31715–31728.
8. Ramamurthy, S.R.; Roy, N. Recent trends in machine learning for human activity recognition—A survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. **2018**, 8, e1245.
9. Iosifidis, A.; Tefas, A.; Pitas, I. View-invariant action recognition based on artificial neural networks. IEEE Trans. Neural Netw. Learn. Syst. **2012**, 23, 412–424.
10. Chung, S.; Lim, J.; Noh, K.J.; Kim, G.; Jeong, H. Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning. Sensors **2019**, 19, 1716.
11. Keogh, E.; Mueen, A. Curse of Dimensionality. In Encyclopedia of Machine Learning and Data Mining; Sammut, C., Webb, G.I., Eds.; Springer: Boston, MA, USA, 2017.
12. Gyllensten, I.C.; Bonomi, A.G. Identifying Types of Physical Activity with a Single Accelerometer: Evaluating Laboratory-trained Algorithms in Daily Life. IEEE Trans. Biomed. Eng. **2011**, 58, 2656–2663.
13. Esfahani, P.; Malazi, H.T. PAMS: A new position-aware multi-sensor dataset for human activity recognition using smartphones. In Proceedings of the 2017 19th International Symposium on Computer Architecture and Digital Systems (CADS), Kish Island, Iran, 21–22 December 2017; pp. 1–7.
14. Patil, C.M.; Jagadeesh, B.; Meghana, M.N. An Approach of Understanding Human Activity Recognition and Detection for Video Surveillance using HOG Descriptor and SVM Classifier. In Proceedings of the 2017 International Conference on Current Trends in Computer, Electrical, Electronics and Communication (CTCEEC), Mysore, India, 8–9 September 2017; pp. 481–485.
15. Ahmed, M.; Das Antar, A.; Ahad, A.R. An Approach to Classify Human Activities in Real-time from Smartphone Sensor Data. In Proceedings of the 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Spokane, WA, USA, 30 May–2 June 2019; pp. 140–145.
16. Chen, Y.; Shen, C. Performance Analysis of Smartphone-Sensor Behavior for Human Activity Recognition. IEEE Access **2017**, 5, 3095–3110.
17. Hassan, M.M.; Uddin, Z.; Mohamed, A.; Almogren, A. A robust human activity recognition system using smartphone sensors and deep learning. Futur. Gener. Comput. Syst. **2018**, 81, 307–313.
18. San-Segundo, R.; Blunck, H.; Moreno-Pimentel, J.; Stisen, A.; Gil-Martín, M. Robust Human Activity Recognition using smartwatches and smartphones. Eng. Appl. Artif. Intell. **2018**, 72, 190–202.
19. Zhu, R.; Xiao, Z.; Cheng, M.; Zhou, L.; Yan, B.; Lin, S.; Wen, H. Deep Ensemble Learning for Human Activity Recognition Using Smartphone. In Proceedings of the 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), Shanghai, China, 19–21 November 2018; pp. 1–5.
20. Alemayoh, T.T.; Lee, J.H.; Okamoto, S. Deep Learning Based Real-time Daily Human Activity Recognition and Its Implementation in a Smartphone. In Proceedings of the 2019 16th International Conference on Ubiquitous Robots (UR), Jeju, Korea, 24–27 June 2019; pp. 179–182.
21. Li, F.; Shui, Y.; Chen, W. Up and down buses activity recognition using smartphone accelerometer. In Proceedings of the 2016 IEEE Information Technology, Networking, Electronic and Automation Control Conference, Chongqing, China, 20–22 May 2016; pp. 761–765.
22. Hsu, Y.-L.; Lin, S.-L.; Chou, P.-H.; Lai, H.-C.; Chang, H.-C.; Yang, S.-C. Application of nonparametric weighted feature extraction for an inertial-signal-based human activity recognition system. In Proceedings of the 2017 International Conference on Applied System Innovation (ICASI), Sapporo, Japan, 13–17 May 2017; pp. 1718–1720.
23. Yin, X.; Shen, W.; Samarabandu, J.; Wang, X. Human activity detection based on multiple smart phone sensors and machine learning algorithms. In Proceedings of the 2015 IEEE 19th International Conference on Computer Supported Cooperative Work in Design (CSCWD), Calabria, Italy, 6–8 May 2015; pp. 582–587.
24. Jain, A.; Kanhangad, V. Human Activity Classification in Smartphones Using Accelerometer and Gyroscope Sensors. IEEE Sens. J. **2018**, 18, 1169–1177.
25. Chen, Y.; Wang, Y.; Cao, L.; Jin, Q. CCFS: A Confidence-based Cost-effective feature selection scheme for healthcare data classification. IEEE/ACM Trans. Comput. Biol. Bioinform. **2019**.
26. Cao, L.; Wang, Y.; Zhang, B.; Jin, Q.; Vasilakos, A.V. GCHAR: An efficient Group-based Context-Aware human activity recognition on smartphone. J. Parallel Distrib. Comput. **2018**, 118, 67–80.
27. Kang, M.; Islam, R.; Kim, J.; Kim, J.-M.; Pecht, M. A Hybrid Feature Selection Scheme for Reducing Diagnostic Performance Deterioration Caused by Outliers in Data-Driven Diagnostics. IEEE Trans. Ind. Electron. **2016**, 63, 3299–3310.
28. Ni, Q.; Patterson, T.; Cleland, I.; Nugent, C. Dynamic detection of window starting positions and its implementation within an activity recognition framework. J. Biomed. Inform. **2016**, 62, 171–180.
29. Islam, R.; Khan, S.A.; Kim, J.M. Discriminant Feature Distribution Analysis-Based Hybrid Feature Selection for Online Bearing Fault Diagnosis in Induction Motors. J. Sens. **2016**, 2016.
30. Reyes-Ortiz, J.L.; Oneto, L.; Samà, A.; Parra, X.; Anguita, D. Transition-Aware Human Activity Recognition Using Smartphones. In Neurocomputing; Springer: Berlin/Heidelberg, Germany, 2015.
31. Cruciani, F.; Cleland, I.; Nugent, C.; Mccullagh, P.; Synnes, K.; Hallberg, J. Automatic Annotation for Human Activity Recognition in Free Living Using a Smartphone. Sensors **2018**, 18, 2203.
32. Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. A Public Domain Dataset for Human Activity Recognition Using Smartphones. In Proceedings of the 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2013, Bruges, Belgium, 24–26 April 2013.
33. Ronao, C.A.; Cho, S.-B. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Syst. Appl. **2016**, 59, 235–244.
34. Myo, W.W.; Wettayaprasit, W.; Aiyarak, P. A Cyclic Attribution Technique Feature Selection Method for Human Activity Recognition. Int. J. Intell. Syst. Appl. **2019**, 10, 25–32.

**Figure 3.** Examples of exceptions in feature distribution. (**a**) Classes are well separated; (**b**) classes are overlapped.

**Figure 7.** Sample accelerometer data (**a**,**c**,**e**,**g**,**i**,**k**) and gyroscope data (**b**,**d**,**f**,**h**,**j**,**l**) for different activities: (**a**,**c**) stand-to-sit activity; (**e**,**g**) sitting activity; (**i**,**k**) walking activity; (**b**,**d**) stand-to-lie activity; (**f**,**h**) lying activity; (**j**,**l**) sit-to-lie activity.

**Figure 8.** Accuracy of the different feature sets produced by the support vector machine in the wrapper method; the highest accuracy is marked with a red circle: (**a**) accelerometer X-axial feature set; (**b**) accelerometer Y-axial feature set; (**c**) accelerometer Z-axial feature set; (**d**) gyroscope X-axial feature set; (**e**) gyroscope Y-axial feature set; (**f**) gyroscope Z-axial feature set.

Features | Equation | Features | Equation
---|---|---|---
Mean | $\mu=\frac{1}{N}\sum_{i=1}^{N}x_{i}$ | Root Mean Square | $x_{rms}=\sqrt{\frac{1}{N}\left(x_{1}^{2}+x_{2}^{2}+\dots+x_{n}^{2}\right)}$
Standard Deviation | $s=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_{i}-\bar{x}\right)^{2}}$ | Energy | $E=\sum_{i=1}^{N}\vert x_{i}\vert^{2}$
Median | $M=\left(\frac{\frac{n}{2}-cf}{f}\right)w+L_{m}$ | SRA | $SRA=\mu\left(\sqrt{abs_{x}}\right)$
Maximum | $MAX(M)$ | Peak to Peak | $PPV=MAX(M)-MIN(M)$
Minimum | $MIN(M)$ | Crest Factor | $c=\frac{\vert x_{peak}\vert}{x_{rms}}$
Interquartile Range | $IQR=\frac{3(n+1)}{4}\text{th term}-\frac{(n+1)}{4}\text{th term}$ | Impulse Factor | $i=\frac{x_{peak}}{x_{mean}}$
Correlation Coefficient | $r=\frac{n\sum xy-\left(\sum x\right)\left(\sum y\right)}{\sqrt{\left[n\sum x^{2}-\left(\sum x\right)^{2}\right]\left[n\sum y^{2}-\left(\sum y\right)^{2}\right]}}$ | Margin Factor | $MF=x_{peak}/x_{sra}$
Skewness | $SV=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{x_{i}-\bar{x}}{\sigma}\right)^{3}$ | Shape Factor | $SF=x_{rms}/\mu\left(abs_{x}\right)$
Kurtosis | $KV=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{x_{i}-\bar{x}}{\sigma}\right)^{4}$ | Frequency Center | $FC=\sqrt{f_{1}f_{2}}$
Cross Correlation | $\rho_{i,j}=\frac{X_{i,j}}{\sqrt{\sigma_{i}^{2}\sigma_{j}^{2}}}$ | RMS Frequency | $RMS_{fr}=\mu\left(\sqrt{fr_{x_{1}}^{2}+fr_{x_{2}}^{2}+\dots+fr_{x_{n}}^{2}}\right)$
Absolute Mean Value | $A_{M}=\mu\left(abs_{x}\right)$ | Root Variant Frequency | $\sigma_{y}^{2}\left(M,T,\tau\right)=\frac{1}{M-1}\left\{\sum_{i=0}^{M-1}\bar{y}_{i}^{2}-\frac{1}{M}\left[\sum_{i=0}^{M-1}\bar{y}_{i}\right]^{2}\right\}$
Variance | $\sigma^{2}=\left[\sum\left(x-\mu\right)^{2}\right]/N$ | |

No. | Feature | Acc-x | Acc-y | Acc-z | Gyro-x | Gyro-y | Gyro-z
---|---|---|---|---|---|---|---
1 | Mean | A_{x} f_{1} | A_{y} f_{1} | A_{z} f_{1} | G_{x} f_{1} | G_{y} f_{1} | G_{z} f_{1}
2 | Standard Deviation | A_{x} f_{2} | A_{y} f_{2} | A_{z} f_{2} | G_{x} f_{2} | G_{y} f_{2} | G_{z} f_{2}
3 | Median | A_{x} f_{3} | A_{y} f_{3} | A_{z} f_{3} | G_{x} f_{3} | G_{y} f_{3} | G_{z} f_{3}
4 | Maximum | A_{x} f_{4} | A_{y} f_{4} | A_{z} f_{4} | G_{x} f_{4} | G_{y} f_{4} | G_{z} f_{4}
5 | Minimum | A_{x} f_{5} | A_{y} f_{5} | A_{z} f_{5} | G_{x} f_{5} | G_{y} f_{5} | G_{z} f_{5}
6 | Interquartile Range | A_{x} f_{6} | A_{y} f_{6} | A_{z} f_{6} | G_{x} f_{6} | G_{y} f_{6} | G_{z} f_{6}
7 | Correlation Coefficient | A_{x} f_{7} | A_{y} f_{7} | A_{z} f_{7} | G_{x} f_{7} | G_{y} f_{7} | G_{z} f_{7}
8 | Skewness | A_{x} f_{8} | A_{y} f_{8} | A_{z} f_{8} | G_{x} f_{8} | G_{y} f_{8} | G_{z} f_{8}
9 | Kurtosis | A_{x} f_{9} | A_{y} f_{9} | A_{z} f_{9} | G_{x} f_{9} | G_{y} f_{9} | G_{z} f_{9}
10 | Cross Correlation | A_{x} f_{10} | A_{y} f_{10} | A_{z} f_{10} | G_{x} f_{10} | G_{y} f_{10} | G_{z} f_{10}
11 | Mean Absolute Value | A_{x} f_{11} | A_{y} f_{11} | A_{z} f_{11} | G_{x} f_{11} | G_{y} f_{11} | G_{z} f_{11}
12 | Variance | A_{x} f_{12} | A_{y} f_{12} | A_{z} f_{12} | G_{x} f_{12} | G_{y} f_{12} | G_{z} f_{12}
13 | Root Mean Square | A_{x} f_{13} | A_{y} f_{13} | A_{z} f_{13} | G_{x} f_{13} | G_{y} f_{13} | G_{z} f_{13}
14 | Energy | A_{x} f_{14} | A_{y} f_{14} | A_{z} f_{14} | G_{x} f_{14} | G_{y} f_{14} | G_{z} f_{14}
15 | SRA | A_{x} f_{15} | A_{y} f_{15} | A_{z} f_{15} | G_{x} f_{15} | G_{y} f_{15} | G_{z} f_{15}
16 | Peak to Peak | A_{x} f_{16} | A_{y} f_{16} | A_{z} f_{16} | G_{x} f_{16} | G_{y} f_{16} | G_{z} f_{16}
17 | Crest Factor | A_{x} f_{17} | A_{y} f_{17} | A_{z} f_{17} | G_{x} f_{17} | G_{y} f_{17} | G_{z} f_{17}
18 | Impulse Factor | A_{x} f_{18} | A_{y} f_{18} | A_{z} f_{18} | G_{x} f_{18} | G_{y} f_{18} | G_{z} f_{18}
19 | Margin Factor | A_{x} f_{19} | A_{y} f_{19} | A_{z} f_{19} | G_{x} f_{19} | G_{y} f_{19} | G_{z} f_{19}
20 | Shape Factor | A_{x} f_{20} | A_{y} f_{20} | A_{z} f_{20} | G_{x} f_{20} | G_{y} f_{20} | G_{z} f_{20}
21 | Frequency Center | A_{x} f_{21} | A_{y} f_{21} | A_{z} f_{21} | G_{x} f_{21} | G_{y} f_{21} | G_{z} f_{21}
22 | RMS Frequency | A_{x} f_{22} | A_{y} f_{22} | A_{z} f_{22} | G_{x} f_{22} | G_{y} f_{22} | G_{z} f_{22}
23 | Root Variant Frequency | A_{x} f_{23} | A_{y} f_{23} | A_{z} f_{23} | G_{x} f_{23} | G_{y} f_{23} | G_{z} f_{23}

**Table 3.** Overall classification accuracy of the final optimal feature sets from the wrapper approach in the hybrid feature selection.

 | Acc-x | Acc-y | Acc-z | Gyro-x | Gyro-y | Gyro-z
---|---|---|---|---|---|---
Best optimal feature set | f_{a_x_2} = {A_{x} f_{2}, A_{x} f_{7}, A_{x} f_{9}, A_{x} f_{13}, A_{x} f_{15}, A_{x} f_{21}} | f_{a_y_9} = {A_{y} f_{2}, A_{y} f_{8}, A_{y} f_{9}, A_{y} f_{13}, A_{y} f_{15}, A_{y} f_{22}} | f_{a_z_5} = {A_{z} f_{1}, A_{z} f_{2}, A_{z} f_{7}, A_{z} f_{8}, A_{z} f_{14}, A_{z} f_{15}, A_{z} f_{21}} | f_{g_x_4} = {G_{x} f_{2}, G_{x} f_{3}, G_{x} f_{7}, G_{x} f_{8}, G_{x} f_{13}, G_{x} f_{14}, G_{x} f_{21}, G_{x} f_{22}} | f_{g_y_7} = {G_{y} f_{2}, G_{y} f_{8}, G_{y} f_{9}, G_{y} f_{14}, G_{y} f_{15}, G_{y} f_{17}, G_{y} f_{22}} | f_{g_z_1} = {G_{z} f_{2}, G_{z} f_{7}, G_{z} f_{9}, G_{z} f_{13}, G_{z} f_{14}, G_{z} f_{15}, G_{z} f_{21}}
Overall accuracy | 94.15% | 92.8% | 92.25% | 93.15% | 90.1% | 92.15%

**Table 4.** Confusion matrix of activity classification using the optimal feature set.

Actual Class / Predicted Class | Walking | Walking Downstairs | Walking Upstairs | Standing | Sitting | Lying | Stand-to-Sit | Sit-to-Stand | Sit-to-Lie | Lie-to-Sit | Stand-to-Lie | Lie-to-Stand | Recall
---|---|---|---|---|---|---|---|---|---|---|---|---|---
walking | 189 | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 97.93%
walking downstairs | 2 | 259 | 0 | 0 | 2 | 0 | 0 | 3 | 0 | 0 | 1 | 0 | 97.00%
walking upstairs | 5 | 5 | 252 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 95.45%
standing | 0 | 0 | 0 | 178 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 98.34%
sitting | 0 | 0 | 0 | 3 | 175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 98.31%
lying | 0 | 0 | 0 | 3 | 0 | 180 | 0 | 0 | 0 | 0 | 2 | 0 | 97.30%
stand-to-sit | 1 | 0 | 1 | 0 | 0 | 0 | 90 | 0 | 0 | 0 | 2 | 0 | 95.74%
sit-to-stand | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 89 | 3 | 0 | 0 | 0 | 95.70%
sit-to-lie | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 90 | 0 | 2 | 0 | 96.77%
lie-to-sit | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 90 | 0 | 3 | 94.74%
stand-to-lie | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 87 | 0 | 96.67%
lie-to-stand | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 87 | 97.75%
Precision | 95.94% | 97.37% | 98.82% | 96.74% | 97.77% | 98.90% | 95.74% | 93.68% | 95.74% | 97.83% | 92.55% | 96.67% | 96.81%

Actual Class / Predicted Class | Walking | Walking Downstairs | Walking Upstairs | Standing | Sitting | Lying | Recall
---|---|---|---|---|---|---|---
walking | 491 | 2 | 3 | 0 | 0 | 0 | 98.99%
walking downstairs | 4 | 413 | 3 | 0 | 0 | 0 | 98.33%
walking upstairs | 11 | 1 | 458 | 0 | 0 | 1 | 97.24%
standing | 0 | 0 | 1 | 517 | 14 | 0 | 97.18%
sitting | 0 | 0 | 0 | 11 | 480 | 0 | 97.76%
lying | 0 | 0 | 0 | 4 | 0 | 533 | 99.26%
Precision | 97.04% | 99.28% | 98.49% | 97.18% | 97.17% | 99.81% | 98.13%

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Ahmed, N.; Rafiq, J.I.; Islam, M.R.
Enhanced Human Activity Recognition Based on Smartphone Sensor Data Using Hybrid Feature Selection Model. *Sensors* **2020**, *20*, 317.
https://doi.org/10.3390/s20010317
