Open Access Article
Sensors 2018, 18(9), 2967; https://doi.org/10.3390/s18092967

Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition

Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands
*
Author to whom correspondence should be addressed.
Received: 12 July 2018 / Revised: 16 August 2018 / Accepted: 3 September 2018 / Published: 6 September 2018
(This article belongs to the Special Issue Pervasive Intelligence and Computing)

Abstract

Detection of human activities, along with the associated context, is of key importance for various application areas, including assisted living and well-being. To predict a user's context in daily-life situations, a system needs to learn from multimodal data that are often imbalanced and noisy, with missing values. In real-life conditions, a model is also likely to encounter missing sensors (such as a user not wearing a smartwatch), and it fails to infer the context if any of the modalities used for training is missing. In this paper, we propose a method based on an adversarial autoencoder for handling missing sensory features and synthesizing realistic samples. We empirically demonstrate the capability of our method, in comparison with classical approaches for filling in missing values, on a large-scale activity recognition dataset collected in the wild. We develop a fully connected classification network by extending the encoder and systematically evaluate its multi-label classification performance when several modalities are missing. Furthermore, we show class-conditional artificial data generation, with visual and quantitative analysis on a context classification task, demonstrating the strong generative power of adversarial autoencoders.
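The core idea of reconstructing a missing modality from the remaining ones can be illustrated with a much simpler model than the paper's adversarial autoencoder. The sketch below (a hypothetical toy example, not the authors' implementation) trains a tiny linear denoising autoencoder in NumPy on synthetic two-modality data, zeroing out one "sensor" during training so the network learns to fill it in at test time; all dimensions and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-"modality" data: modality B is a (noisy) linear function
# of modality A, so B is recoverable from A alone.
n, d_a, d_b = 512, 3, 3
M = rng.normal(size=(d_a, d_b))
A = rng.normal(size=(n, d_a))
B = A @ M + 0.05 * rng.normal(size=(n, d_b))
X = np.hstack([A, B])                       # full samples, shape (n, 6)

# Denoising-style corruption: zero out modality B in half the samples,
# mimicking a missing sensor (e.g., a smartwatch not being worn).
missing = rng.random(n) < 0.5
Xc = X.copy()
Xc[missing, d_a:] = 0.0

# Tiny linear autoencoder trained to map corrupted input -> full input.
k = 4                                       # latent size (assumed)
W1 = 0.1 * rng.normal(size=(d_a + d_b, k))  # encoder weights
W2 = 0.1 * rng.normal(size=(k, d_a + d_b))  # decoder weights
lr = 0.05
for _ in range(2000):
    Z = Xc @ W1                             # encode corrupted input
    E = Z @ W2 - X                          # reconstruction error vs. full input
    W2 -= lr * 2.0 / n * Z.T @ E            # gradient step on decoder
    W1 -= lr * 2.0 / n * Xc.T @ E @ W2.T    # gradient step on encoder

# Impute modality B for new samples where it is entirely missing.
A_new = rng.normal(size=(64, d_a))
B_true = A_new @ M
X_new = np.hstack([A_new, np.zeros((64, d_b))])
B_hat = (X_new @ W1 @ W2)[:, d_a:]

err_imputed = np.mean((B_hat - B_true) ** 2)
err_zeros = np.mean(B_true ** 2)            # error of naive zero-filling
print(f"zero-fill MSE {err_zeros:.3f} vs imputed MSE {err_imputed:.3f}")
```

The paper's method replaces this plain reconstruction objective with an adversarial autoencoder, adding a discriminator that shapes the latent space and enables realistic, class-conditional sample generation on top of imputation.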
Keywords: sensor analytics; human activity recognition; context detection; autoencoders; adversarial learning; imputation
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Saeed, A.; Ozcelebi, T.; Lukkien, J. Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition. Sensors 2018, 18, 2967.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.