Fusing Object Information and Inertial Data for Activity Recognition

Data and Web Science Group, University of Mannheim, 68159 Mannheim, Germany
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Diete, A.; Sztyler, T.; Stuckenschmidt, H. Vision and acceleration modalities: Partners for recognizing complex activities. In Proceedings of the IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Kyoto, Japan, 11–14 March 2019.
Sensors 2019, 19(19), 4119; https://doi.org/10.3390/s19194119
Received: 24 August 2019 / Revised: 18 September 2019 / Accepted: 19 September 2019 / Published: 23 September 2019
(This article belongs to the Special Issue Multi-Sensor Fusion in Body Sensor Networks)
In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (such as RFID tags with scanners) are particularly popular data sources. Using interaction sensors, however, has one drawback: they may not differentiate between a proper interaction and merely touching an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction follows. Yet many scenarios, such as medicine intake, rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal, egocentric-vision-based activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As a high-quality camera view cannot always be expected, we enrich the vision features with inertial sensor data that monitors the user's arm movement. In this way, we try to overcome the drawbacks of each respective sensor. We present the results of combining inertial and video features to recognize human activities in different types of scenarios, where we achieve an F1-measure of up to 79.6%.
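As a rough illustration of the fusion idea described in the abstract, the sketch below concatenates hypothetical per-window object-detection confidences with statistical inertial features and trains an off-the-shelf classifier, evaluated with the F1-measure. The feature dimensions, the random data, and the choice of classifier are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of early feature-level fusion (assumed setup, not the paper's implementation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_windows = 500    # hypothetical number of labelled time windows
n_objects = 10     # e.g., one detector confidence per activity-critical object class
n_inertial = 24    # e.g., mean/std/min/max per axis for accelerometer and gyroscope

vision_feats = rng.random((n_windows, n_objects))     # object-detection confidences per window
inertial_feats = rng.random((n_windows, n_inertial))  # statistical features from an arm-worn IMU
labels = rng.integers(0, 4, size=n_windows)           # four example activity classes

# Early fusion: concatenate both modalities into one feature vector per window.
X = np.hstack([vision_feats, inertial_feats])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```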
Keywords: activity recognition; machine learning; multi-modality
Diete, A.; Stuckenschmidt, H. Fusing Object Information and Inertial Data for Activity Recognition. Sensors 2019, 19, 4119.

