Open Access Article

Context-Aware Human Activity and Smartphone Position-Mining with Motion Sensors

1 Department of Computer Science and Software Engineering, Xi'an Jiaotong-Liverpool University, Suzhou 215000, China
2 Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 7ZX, UK
3 Department of Electrical and Electronic Engineering, Xi'an Jiaotong-Liverpool University, Suzhou 215000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(21), 2531; https://doi.org/10.3390/rs11212531
Received: 23 September 2019 / Revised: 21 October 2019 / Accepted: 24 October 2019 / Published: 29 October 2019
(This article belongs to the Special Issue Big Data Analytics for Secure and Smart Environmental Services)
Today's smartphones are equipped with embedded sensors, such as accelerometers and gyroscopes, which enable a variety of measurement and recognition tasks. In this paper, we jointly investigate two recognition problems, namely human activity recognition and smartphone on-body position recognition, in order to enable more robust context-aware applications. So far, these two problems have been studied separately, without considering their interactions. In this study, after first applying a novel data preprocessing technique, we propose a joint recognition framework based on a multi-task learning strategy, which reduces computational demand, better exploits the complementary information between the two recognition tasks, and leads to higher recognition performance. We also extend the joint recognition framework so that additional information, such as user identification via biometric motion analysis, can be offered. We evaluate our work systematically and comprehensively on two datasets with real-world settings. Our joint recognition model achieves a promising F1-score of 0.9174 for user identification on the benchmark RealWorld Human Activity Recognition (HAR) dataset. Moreover, in comparison with the conventional approach, the proposed joint model improves human activity recognition and position recognition by 5.1% and 9.6%, respectively.
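The multi-task idea described in the abstract, a shared feature extractor feeding two task-specific classification heads, can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual architecture: the layer sizes, class counts, and random weights are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared trunk: a flattened motion-sensor window (60 values here,
# an arbitrary illustrative size) mapped to 32 shared features.
W_shared = rng.normal(size=(60, 32))
# Two task heads: activity (8 classes) and on-body position (4 classes);
# both class counts are hypothetical.
W_act = rng.normal(size=(32, 8))
W_pos = rng.normal(size=(32, 4))

def joint_forward(x):
    """Run one sensor-window batch through the shared trunk,
    then through both task heads."""
    h = np.tanh(x @ W_shared)  # features shared by both tasks
    return softmax(h @ W_act), softmax(h @ W_pos)

x = rng.normal(size=(5, 60))   # batch of 5 sensor windows
p_act, p_pos = joint_forward(x)
print(p_act.shape, p_pos.shape)  # (5, 8) (5, 4)
```

Because both heads read the same shared features, gradients from either task would update the trunk during training, which is the mechanism by which multi-task learning lets the two recognition problems inform each other.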
Keywords: mobile sensing; human activity recognition; smartphone position detection; multi-task learning; machine learning
MDPI and ACS Style

Gao, Z.; Liu, D.; Huang, K.; Huang, Y. Context-Aware Human Activity and Smartphone Position-Mining with Motion Sensors. Remote Sens. 2019, 11, 2531.

