Open Access Article
Sensors 2016, 16(1), 115; doi:10.3390/s16010115

Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

Francisco Javier Ordóñez * and Daniel Roggen

Wearable Technologies, Sensor Technology Research Centre, University of Sussex, Brighton BN1 9RH, UK
* Author to whom correspondence should be addressed.
Academic Editors: Yun Liu, Wendong Xiao, Han-Chieh Chao and Pony Chu
Received: 30 November 2015 / Revised: 31 December 2015 / Accepted: 12 January 2016 / Published: 18 January 2016

Abstract

Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained through heuristic processes. Current research suggests that deep convolutional neural networks are suited to automating feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some of the previously reported results by up to 9%. The framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights into their optimisation.
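
To make the described architecture concrete, the following is a minimal PyTorch sketch of a convolutional-plus-LSTM model of the kind the abstract outlines. It is not the authors' implementation; the channel count, number of classes, filter count, kernel size and LSTM width (n_sensor_channels, n_classes, n_filters, kernel_size, lstm_units) are illustrative assumptions, not values taken from the paper.

    # Sketch of a ConvLSTM-style HAR model: 1-D convolutions extract
    # features along the time axis, LSTM layers model their temporal
    # dynamics, and a linear layer scores the activity classes.
    # All sizes below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ConvLSTMSketch(nn.Module):
        def __init__(self, n_sensor_channels=113, n_classes=18,
                     n_filters=64, kernel_size=5, lstm_units=128):
            super().__init__()
            # Each wearable sensor channel is an input channel of the
            # first convolution, so multimodal fusion happens here.
            self.conv = nn.Sequential(
                nn.Conv1d(n_sensor_channels, n_filters, kernel_size),
                nn.ReLU(),
                nn.Conv1d(n_filters, n_filters, kernel_size),
                nn.ReLU(),
            )
            # Recurrent layers capture the temporal dynamics of the
            # learned feature activations.
            self.lstm = nn.LSTM(n_filters, lstm_units, num_layers=2,
                                batch_first=True)
            self.classifier = nn.Linear(lstm_units, n_classes)

        def forward(self, x):
            # x: (batch, time, sensor_channels)
            x = self.conv(x.transpose(1, 2))       # (batch, filters, time')
            out, _ = self.lstm(x.transpose(1, 2))  # (batch, time', units)
            return self.classifier(out[:, -1])     # scores for the window

    # Example: a batch of 8 sliding windows, 24 time steps, 113 channels.
    model = ConvLSTMSketch()
    scores = model(torch.randn(8, 24, 113))        # -> (8, 18)

Treating each sensor channel as an input channel of the first convolution is what lets such a network fuse multimodal sensors without hand-crafted features, while the LSTM explicitly models the temporal dynamics of the feature activations, mirroring points (ii) and (iv) above.
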
Keywords: human activity recognition; wearable sensors; deep learning; machine learning; sensor fusion; LSTM; neural network
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Ordóñez, F.J.; Roggen, D. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition. Sensors 2016, 16, 115.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
