Open Access Article
Sensors 2016, 16(2), 189; doi:10.3390/s16020189

Recognition of Human Activities Using Continuous Autoencoders with Wearable Sensors

College of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
Academic Editors: Yun Liu, Han-Chieh Chao, Pony Chu and Wendong Xiao
Received: 19 November 2015 / Revised: 6 January 2016 / Accepted: 29 January 2016 / Published: 4 February 2016

Abstract

This paper presents an approach for recognizing human activities with wearable sensors. The continuous autoencoder (CAE) is proposed as a novel stochastic neural network model that improves the ability to model continuous data. The CAE adds Gaussian random units into an improved sigmoid activation function to extract features from nonlinear data. In order to shorten the training time, we propose a new fast stochastic gradient descent (FSGD) algorithm to update the gradients of the CAE. A reconstruction experiment on a Swiss-roll dataset demonstrates that the CAE fits continuous data better than the basic autoencoder and that the training time can be reduced by the FSGD algorithm. In the human activity recognition experiment, a time and frequency domain feature extraction (TFFE) method is proposed to extract features from the raw sensor data. Principal component analysis (PCA) is then applied for feature reduction, reducing the dimension of each data segment from 5625 to 42. The feature vectors extracted from the original signals are used as the input of a deep belief network (DBN) composed of multiple CAEs. The training results show that a correct differentiation rate of 99.3% is achieved. Several comparative experiments, covering different sensor combinations, sensor units at different positions, and training with different numbers of epochs, are designed to validate our approach.
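
To make the idea concrete, below is a minimal, illustrative NumPy sketch of an autoencoder whose hidden units receive additive Gaussian noise inside the sigmoid activation, in the spirit of the CAE described above. This is a sketch under assumptions, not the paper's implementation: the exact "improved sigmoid", the FSGD update rule, and the stacking into a DBN are not reproduced, and names such as CAESketch and noise_std are illustrative only.

import numpy as np

rng = np.random.default_rng(0)

def noisy_sigmoid(z, noise_std=0.1):
    # Sigmoid with zero-mean Gaussian noise injected before squashing,
    # loosely mirroring the "Gaussian random units" idea in the abstract.
    return 1.0 / (1.0 + np.exp(-(z + noise_std * rng.standard_normal(z.shape))))

class CAESketch:
    """Single noisy-sigmoid autoencoder layer with tied weights (illustrative)."""

    def __init__(self, n_visible, n_hidden, lr=0.05, noise_std=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)
        self.b_v = np.zeros(n_visible)
        self.lr = lr
        self.noise_std = noise_std

    def encode(self, x):
        return noisy_sigmoid(x @ self.W + self.b_h, self.noise_std)

    def decode(self, h):
        # Tied weights: the decoder reuses the encoder weights transposed.
        return 1.0 / (1.0 + np.exp(-(h @ self.W.T + self.b_v)))

    def train_step(self, x):
        # Plain stochastic gradient descent on the squared reconstruction error;
        # the paper's FSGD variant accelerates this step but is not shown here.
        # The gradient treats the injected noise as a constant for simplicity.
        h = self.encode(x)
        x_hat = self.decode(h)
        err = x_hat - x
        d_vis = err * x_hat * (1.0 - x_hat)          # back through decoder sigmoid
        d_hid = (d_vis @ self.W) * h * (1.0 - h)     # back through encoder sigmoid
        self.W -= self.lr * (np.outer(x, d_hid) + np.outer(d_vis, h))
        self.b_h -= self.lr * d_hid
        self.b_v -= self.lr * d_vis
        return float(np.mean(err ** 2))

# Usage: reconstruct a 42-dimensional feature vector, matching the dimensionality
# the abstract reports after PCA reduction from 5625 to 42.
cae = CAESketch(n_visible=42, n_hidden=20)
x = rng.random(42)
for _ in range(200):
    loss = cae.train_step(x)
print(f"reconstruction MSE after 200 steps: {loss:.4f}")

In the pipeline described in the abstract, several such layers would then be stacked and trained to form the DBN used for classification.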
Keywords: continuous autoencoder; fast stochastic gradient descent; time and frequency domain feature extraction; human activity recognition; wearable sensors
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

SciFeed Share & Cite This Article

MDPI and ACS Style

Wang, L. Recognition of Human Activities Using Continuous Autoencoders with Wearable Sensors. Sensors 2016, 16, 189.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
