Open Access Article

A Hierarchical Deep Fusion Framework for Egocentric Activity Recognition using a Wearable Hybrid Sensor System

1. College of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China
2. Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
3. School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China
4. Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15213, USA
* Author to whom correspondence should be addressed.
Sensors 2019, 19(3), 546; https://doi.org/10.3390/s19030546
Received: 28 December 2018 / Revised: 19 January 2019 / Accepted: 24 January 2019 / Published: 28 January 2019
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)
Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its wide applicability in medical care, smart homes, and security monitoring. In this study, we developed and implemented a deep-learning-based hierarchical fusion framework for the recognition of egocentric activities of daily living (ADLs) in a wearable hybrid sensor system comprising motion sensors and cameras. A long short-term memory (LSTM) network and a convolutional neural network (CNN) perform egocentric ADL recognition on the motion sensor data and the photo stream, respectively, in different layers of the framework. The motion sensor data are used solely to classify activities by motion state, while the photo stream is used for further specific activity recognition within each motion state group. Thus, both the motion sensor data and the photo stream operate in their most suitable classification modes, significantly reducing the negative influence of sensor differences on the fusion results. Experimental results show that the proposed method is not only more accurate than the existing direct fusion method (by up to 6%) but also avoids that method's time-consuming optical flow computation, which makes the proposed algorithm less complex and more suitable for practical application.
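The abstract describes the architecture only at a high level. The sketch below is an illustrative PyTorch rendering of the two-stage idea, not the authors' implementation; the class name, layer sizes, and per-state activity counts are assumptions. An LSTM first assigns an IMU window to a coarse motion state, and a small CNN dedicated to that state then labels the concurrent wearable-camera photo with a specific activity.

```python
import torch
import torch.nn as nn

class HierarchicalFusionSketch(nn.Module):
    """Hypothetical two-stage classifier: LSTM -> coarse motion state,
    then a per-state CNN labels the photo stream with a specific activity."""

    def __init__(self, num_states, activities_per_state, imu_channels=6, hidden=64):
        super().__init__()
        # Stage 1: LSTM over raw motion-sensor windows -> motion-state logits.
        self.lstm = nn.LSTM(imu_channels, hidden, batch_first=True)
        self.state_head = nn.Linear(hidden, num_states)
        # Stage 2: one small CNN per motion-state group for the photo stream.
        self.cnn_heads = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, n_act),
            )
            for n_act in activities_per_state
        ])

    def forward(self, imu_window, photo):
        # imu_window: (batch, time, channels); photo: (batch, 3, H, W)
        _, (h_n, _) = self.lstm(imu_window)
        state_logits = self.state_head(h_n[-1])
        states = state_logits.argmax(dim=1)  # predicted coarse motion state
        # Route each sample's photo to the CNN of its predicted state.
        activity_logits = [
            self.cnn_heads[int(s)](photo[i:i + 1]) for i, s in enumerate(states)
        ]
        return state_logits, activity_logits
```

The key design point mirrored here is that the two modalities are never fused into a single feature vector: the motion sensors only gate which image classifier is consulted, so neither sensor's weaknesses contaminate the other's decision.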
Keywords: deep learning; egocentric activity recognition; hierarchical fusion framework; wearable sensor system

MDPI and ACS Style

Yu, H.; Pan, G.; Pan, M.; Li, C.; Jia, W.; Zhang, L.; Sun, M. A Hierarchical Deep Fusion Framework for Egocentric Activity Recognition using a Wearable Hybrid Sensor System. Sensors 2019, 19, 546.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
