Open Access Article
Sensors 2018, 18(8), 2639; https://doi.org/10.3390/s18082639

Exploring Semi-Supervised Methods for Labeling Support in Multimodal Datasets

Data and Web Science Group, University of Mannheim, 68131 Mannheim, Germany
* Author to whom correspondence should be addressed.
Received: 28 May 2018 / Revised: 22 July 2018 / Accepted: 8 August 2018 / Published: 11 August 2018
(This article belongs to the Special Issue Annotation of User Data for Sensor-Based Systems)

Abstract

Working with multimodal datasets is challenging because it requires annotations that are often time-consuming and difficult to acquire. This applies in particular to video recordings, which often need to be watched in full before they can be labeled. Additionally, other modalities, such as acceleration data, are often recorded alongside a video. For this purpose, we created an annotation tool that makes it possible to annotate datasets of video and inertial sensor data. In contrast to most existing approaches, we focus on semi-supervised labeling support to infer labels for the whole dataset. This means that after a small set of instances has been labeled, our system is able to provide labeling recommendations. We rely on the acceleration data of a wrist-worn sensor to support the labeling of a video recording. To this end, we apply template matching to identify the time intervals of certain activities. We test our approach on three datasets: one containing warehouse picking activities, one consisting of activities of daily living, and one about meal preparation. Our results show that the presented method is able to give annotators hints about possible label candidates.
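The abstract names template matching on wrist-worn acceleration data as the mechanism for proposing label candidates, but does not spell out the distance measure. The following is a minimal sketch assuming a sliding window compared against a labeled template via z-normalized Euclidean distance; all function names and the threshold value are illustrative, not taken from the paper.

```python
import numpy as np

def acceleration_magnitude(xyz):
    """Collapse an (n, 3) accelerometer stream to a 1-D magnitude signal."""
    return np.linalg.norm(xyz, axis=1)

def znorm(window):
    """Z-normalize a window so matching is robust to offset and scale."""
    std = window.std()
    return (window - window.mean()) / std if std > 0 else window - window.mean()

def match_template(signal, template, threshold=0.6):
    """Slide the template over the signal; return (start, end, distance)
    triples for windows whose normalized distance falls below the threshold.
    These intervals can be shown to the annotator as label recommendations."""
    m = len(template)
    template = znorm(template)
    candidates = []
    for start in range(len(signal) - m + 1):
        window = znorm(signal[start:start + m])
        dist = np.linalg.norm(window - template) / np.sqrt(m)
        if dist < threshold:
            candidates.append((start, start + m, dist))
    return candidates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stream = rng.normal(size=(1000, 3))   # stand-in for a real recording
    signal = acceleration_magnitude(stream)
    template = signal[100:180]            # a manually labeled activity interval
    print(match_template(signal, template)[:3])
```

In this reading, a few labeled seconds of an activity (e.g., a picking motion) serve as the template, and matches elsewhere in the recording become candidate intervals for the annotator to confirm or reject.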
Keywords: machine learning; activity recognition; multimodal labeling

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

MDPI and ACS Style

Diete, A.; Sztyler, T.; Stuckenschmidt, H. Exploring Semi-Supervised Methods for Labeling Support in Multimodal Datasets. Sensors 2018, 18, 2639.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
