Open Access Article

Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning

SW · Contents Basic Technology Research Group, Electronics and Telecommunications Research Institute, Daejeon 34129, Korea
* Author to whom correspondence should be addressed.
Sensors 2019, 19(7), 1716; https://doi.org/10.3390/s19071716
Received: 4 March 2019 / Revised: 5 April 2019 / Accepted: 8 April 2019 / Published: 10 April 2019
(This article belongs to the Special Issue Deep Learning Based Sensing Technologies for Autonomous Vehicles)
In this paper, we perform a systematic study of on-body sensor positioning and data-acquisition details for Human Activity Recognition (HAR) systems. We build a testbed that consists of eight body-worn Inertial Measurement Unit (IMU) sensors and an Android mobile device for activity data collection. We develop a Long Short-Term Memory (LSTM) network framework to support training of a deep learning model on human activity data acquired in both real-world and controlled environments. From the experimental results, we find that activity data sampled at a rate as low as 10 Hz from four sensors, placed on both wrists, the right ankle, and the waist, is sufficient for recognizing Activities of Daily Living (ADLs), including eating and driving. We adopt a two-level ensemble model to combine the class probabilities of multiple sensor modalities and demonstrate that a classifier-level sensor fusion technique can improve classification performance. By analyzing the accuracy of each sensor on different types of activity, we derive custom weights for multimodal sensor fusion that reflect the characteristics of individual activities.
Keywords: mobile sensing; sensor position; human activity recognition; multimodal sensor fusion; classifier-level ensemble; Long Short-Term Memory network; deep learning
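The LSTM pipeline described in the abstract maps fixed-length windows of IMU samples to activity-class probabilities. Below is a minimal sketch in PyTorch; the class count, channel count, window length, and layer sizes are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical LSTM classifier for windowed IMU data; all sizes are assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 8   # assumed number of ADL classes
CHANNELS = 6      # e.g., 3-axis accelerometer + 3-axis gyroscope from one IMU
WINDOW = 30       # 3 s window at the 10 Hz sampling rate reported in the paper

class HARLSTM(nn.Module):
    def __init__(self, channels=CHANNELS, hidden=64, num_classes=NUM_CLASSES):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                # x: (batch, WINDOW, CHANNELS)
        out, _ = self.lstm(x)            # hidden state at every time step
        return self.head(out[:, -1, :])  # classify from the last time step

model = HARLSTM()
logits = model(torch.randn(4, WINDOW, CHANNELS))  # batch of 4 windows
probs = torch.softmax(logits, dim=1)              # per-window class probabilities
```

The classifier-level fusion step then combines such per-sensor probability vectors using weights that reflect how well each sensor recognizes each activity. The sketch below uses a placeholder weight matrix standing in for the accuracy-derived custom weights described in the paper.

```python
# Sketch of weighted classifier-level (late) fusion across S sensor models.
import numpy as np

def fuse(probs_per_sensor, weights):
    """probs_per_sensor, weights: arrays of shape (S, C).
    Weights could be, e.g., each sensor's per-activity validation accuracy."""
    fused = (weights * probs_per_sensor).sum(axis=0)  # weighted vote per class
    return fused / fused.sum()                        # renormalize to a distribution

S, C = 4, 8                                       # four sensors, eight classes
probs = np.random.dirichlet(np.ones(C), size=S)   # dummy per-sensor outputs
w = np.random.rand(S, C)                          # placeholder accuracy-based weights
pred = int(np.argmax(fuse(probs, w)))             # fused activity decision
```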
MDPI and ACS Style

Chung, S.; Lim, J.; Noh, K.J.; Kim, G.; Jeong, H. Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning. Sensors 2019, 19, 1716. https://doi.org/10.3390/s19071716

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
