Open Access Article
Sensors 2019, 19(3), 546; https://doi.org/10.3390/s19030546

A Hierarchical Deep Fusion Framework for Egocentric Activity Recognition using a Wearable Hybrid Sensor System

1 College of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China
2 Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
3 School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China
4 Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15213, USA
* Author to whom correspondence should be addressed.
Received: 28 December 2018 / Revised: 19 January 2019 / Accepted: 24 January 2019 / Published: 28 January 2019
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)

Abstract

Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its wide applicability in medical care, smart homes, and security monitoring. In this study, we developed and implemented a deep-learning-based hierarchical fusion framework for the recognition of egocentric activities of daily living (ADLs) in a wearable hybrid sensor system comprising motion sensors and cameras. A long short-term memory (LSTM) network and a convolutional neural network (CNN) perform egocentric ADL recognition in different layers of the framework, based on the motion sensor data and the photo stream, respectively. The motion sensor data are used solely for activity classification according to motion state, while the photo stream is used for further specific activity recognition within the motion state groups. Thus, both the motion sensor data and the photo stream work in their most suitable classification modes, which significantly reduces the negative influence of sensor differences on the fusion results. Experimental results show that the proposed method is not only more accurate than the existing direct fusion method (by up to 6%) but also avoids the time-consuming computation of optical flow required by that method, making the proposed algorithm less complex and more suitable for practical application.
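As a concrete illustration of the hierarchy described above, the following minimal sketch (not the authors' implementation; PyTorch is assumed, and the module names, tensor shapes, and class counts are hypothetical) uses an LSTM to assign the motion-sensor sequence to a coarse motion-state group, and a small per-group CNN to recognize the specific activity from an egocentric image:

```python
# Hypothetical sketch of the two-layer idea: LSTM -> motion-state group,
# then a per-group CNN -> specific activity. Not the paper's actual model.
import torch
import torch.nn as nn

class MotionStateLSTM(nn.Module):
    """Coarse layer: classify the motion state from an IMU sequence."""
    def __init__(self, n_features=6, hidden=64, n_states=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_states)

    def forward(self, x):                    # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])                # logits over motion-state groups

class ActivityCNN(nn.Module):
    """Fine layer: classify the specific activity from an egocentric image;
    one such network is assumed per motion-state group."""
    def __init__(self, n_activities):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, n_activities)

    def forward(self, img):                  # img: (batch, 3, H, W)
        return self.fc(self.features(img).flatten(1))

def hierarchical_predict(imu_seq, image, state_model, group_cnns):
    """Route the image to the CNN of the predicted motion-state group."""
    state = state_model(imu_seq).argmax(dim=1).item()
    activity = group_cnns[state](image).argmax(dim=1).item()
    return state, activity
```

Because, in this sketch, the image stream is only consulted after the motion state has been fixed, no optical flow needs to be computed, which is consistent with the complexity advantage claimed in the abstract.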
Keywords: deep learning; egocentric activity recognition; hierarchical fusion framework; wearable sensor system
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Yu, H.; Pan, G.; Pan, M.; Li, C.; Jia, W.; Zhang, L.; Sun, M. A Hierarchical Deep Fusion Framework for Egocentric Activity Recognition using a Wearable Hybrid Sensor System. Sensors 2019, 19, 546.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
