Special Issue "Annotation of User Data for Sensor-Based Systems"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 May 2018)

Special Issue Editors

Guest Editor
Dr. Kristina Yordanova

Institute of Computer Science, University of Rostock, 18051 Rostock, Germany
Interests: activity and intention recognition; human behavior models; knowledge elicitation; natural language processing; automatic extraction of behavior models from textual sources
Guest Editor
Dr. Adeline Paiement

Swansea University, Swansea, UK
Interests: computer vision; machine learning; AI-assisted healthcare
Guest Editor
Prof. Jesse Hoey

David R. Cheriton School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1, Canada
Interests: artificial intelligence; computer vision; affective computing; computational social science; Markov decision processes

Special Issue Information

Dear Colleagues,

Labelling user data is a central part of the design and evaluation of sensors and sensor-based systems that aim to support the user through situation-aware reasoning. It is essential both for designing and for training a sensor-based system to recognize and reason about the situation, whether through the design of new sensors, the definition of suitable observation and situation models in knowledge-driven applications, or through the preparation of training data for learning tasks in data-driven models. Hence, the quality of annotations can have a significant impact on the performance of the derived systems. Labelling is also vital for validating and quantifying the performance of sensors and sensor-based applications, as well as for selecting the best performing sensor setup and configuration.

With sensor-based systems relying increasingly on large datasets with multiple sensors, the process of data labelling is becoming a major concern for the community.

To address these problems, this Special Issue contains selected papers from the International Workshop on Annotation of useR Data for UbiquitOUs Systems (ARDUOUS) (2017/2018), with a focus on:

1) intelligent and interactive tools and automated methods for annotating large sensor datasets;
2) the role and impact of annotations in designing sensor-based applications;
3) the process of labelling and the requirements for producing high-quality annotations, especially in the context of large sensor datasets.

In addition, we are looking for outstanding submissions that extend the state of the art in annotation for sensor-based systems. The scope of the issue includes, but is not limited to:

 - methods and intelligent tools for annotating sensor data
 - processes of and best practices in annotating sensor data
 - annotation methods and tools for sensor setup and configuration
 - sensors and sensor-based methods and practices towards automating the annotation process
 - improving and evaluating the annotation quality for better sensor interpretation
 - ethical issues concerning the collection and annotation of sensor data
 - beyond the labels: ontologies for semantic annotation of sensor data
 - high-quality and re-usable annotation for publicly available sensor datasets
 - impact of annotation on a sensor-based system's performance
 - building classifier models that are capable of dealing with multiple (noisy) annotations and/or making use of taxonomies/ontologies
 - the potential value of incorporating modelling of the annotators into predictive models

Dr.-Ing. Kristina Yordanova
Dr. Adeline Paiement
Prof. Jesse Hoey
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (8 papers)


Research

Open Access Article: A Combined Approach to Predicting Rest in Dogs Using Accelerometers
Sensors 2018, 18(8), 2649; https://doi.org/10.3390/s18082649
Received: 25 June 2018 / Revised: 6 August 2018 / Accepted: 8 August 2018 / Published: 13 August 2018
Abstract
The ability to objectively measure episodes of rest has clear application for assessing health and well-being. Accelerometers afford a sensitive platform for doing so and have demonstrated their use in many human-based trials and interventions. Current state of the art methods for predicting sleep from accelerometer signals are either based on posture or low movement. While both have proven to be sensitive in humans, the methods do not directly transfer well to dogs, possibly because dogs are commonly alert but physically inactive when recumbent. In this paper, we combine a previously validated low-movement algorithm developed for humans and a posture-based algorithm developed for dogs. The hybrid approach was tested on 12 healthy dogs of varying breeds and sizes in their homes. The approach predicted state of rest with a mean accuracy of 0.86 (SD = 0.08). Furthermore, when a dog was in a resting state, the method was able to distinguish between head up and head down posture with a mean accuracy of 0.90 (SD = 0.08). This approach can be applied in a variety of contexts to assess how factors, such as changes in housing conditions or medication, may influence a dog’s resting patterns.
(This article belongs to the Special Issue Annotation of User Data for Sensor-Based Systems)
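The hybrid idea described in the abstract — a low-movement criterion combined with a posture criterion over accelerometer windows — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation; the window length, movement threshold, and axis convention are all assumptions:

```python
import numpy as np

def predict_rest(acc, fs=30, move_thresh=0.05, window=5):
    """acc: (N, 3) accelerations in g; returns one rest flag per window."""
    samples = fs * window
    n = (len(acc) // samples) * samples
    windows = acc[:n].reshape(-1, samples, 3)
    # Low-movement criterion: small variation in acceleration magnitude.
    mag = np.linalg.norm(windows, axis=2)
    low_movement = mag.std(axis=1) < move_thresh
    # Posture criterion (assumed axis convention): gravity mostly off the
    # z-axis, i.e. the sensor is recumbent rather than upright.
    recumbent = np.abs(windows.mean(axis=1)[:, 2]) < 0.5
    return low_movement & recumbent
```

Each criterion alone would misclassify a dog that is recumbent but alert, which is why the abstract argues for combining them.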

Open Access Article: Exploring Semi-Supervised Methods for Labeling Support in Multimodal Datasets
Sensors 2018, 18(8), 2639; https://doi.org/10.3390/s18082639
Received: 28 May 2018 / Revised: 22 July 2018 / Accepted: 8 August 2018 / Published: 11 August 2018
Abstract
Working with multimodal datasets is a challenging task as it requires annotations which often are time consuming and difficult to acquire. This includes in particular video recordings which often need to be watched as a whole before they can be labeled. Additionally, other modalities like acceleration data are often recorded alongside a video. For that purpose, we created an annotation tool that enables to annotate datasets of video and inertial sensor data. In contrast to most existing approaches, we focus on semi-supervised labeling support to infer labels for the whole dataset. This means, after labeling a small set of instances our system is able to provide labeling recommendations. We aim to rely on the acceleration data of a wrist-worn sensor to support the labeling of a video recording. For that purpose, we apply template matching to identify time intervals of certain activities. We test our approach on three datasets, one containing warehouse picking activities, one consisting of activities of daily living and one about meal preparations. Our results show that the presented method is able to give hints to annotators about possible label candidates.
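As a rough illustration of the template-matching step described in the abstract (not the authors' code), normalized cross-correlation between a short activity template and a 1-D acceleration signal can propose candidate intervals for the annotator:

```python
import numpy as np

def match_template(signal, template, threshold=0.9):
    """Return start indices where the normalized cross-correlation
    between the template and the signal exceeds the threshold."""
    t = (template - template.mean()) / template.std()
    m = len(t)
    hits = []
    for i in range(len(signal) - m + 1):
        w = signal[i:i + m]
        s = w.std()
        if s == 0:  # flat window, no correlation defined
            continue
        w = (w - w.mean()) / s
        if np.dot(w, t) / m > threshold:
            hits.append(i)
    return hits
```

In practice the matches would only be recommendations, confirmed or rejected by the human annotator.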

Open Access Article: Talk, Text, Tag? Understanding Self-Annotation of Smart Home Data from a User’s Perspective
Sensors 2018, 18(7), 2365; https://doi.org/10.3390/s18072365
Received: 7 June 2018 / Revised: 11 July 2018 / Accepted: 12 July 2018 / Published: 20 July 2018
Abstract
Delivering effortless interactions and appropriate interventions through pervasive systems requires making sense of multiple streams of sensor data. This is particularly challenging when these concern people’s natural behaviours in the real world. This paper takes a multidisciplinary perspective of annotation and draws on an exploratory study of 12 people, who were encouraged to use a multi-modal annotation app while living in a prototype smart home. Analysis of the app usage data and of semi-structured interviews with the participants revealed strengths and limitations regarding self-annotation in a naturalistic context. Handing control of the annotation process to research participants enabled them to reason about their own data, while generating accounts that were appropriate and acceptable to them. Self-annotation provided participants an opportunity to reflect on themselves and their routines, but it was also a means to express themselves freely and sometimes even a backchannel to communicate playfully with the researchers. However, self-annotation may not be an effective way to capture accurate start and finish times for activities, or location associated with activity information. This paper offers new insights and recommendations for the design of self-annotation tools for deployment in the real world.

Open Access Article: Activities of Daily Living Ontology for Ubiquitous Systems: Development and Evaluation
Sensors 2018, 18(7), 2361; https://doi.org/10.3390/s18072361
Received: 7 June 2018 / Revised: 3 July 2018 / Accepted: 6 July 2018 / Published: 20 July 2018
Cited by 1
Abstract
Ubiquitous eHealth systems based on sensor technologies are seen as key enablers in the effort to reduce the financial impact of an ageing society. At the heart of such systems sit activity recognition algorithms, which need sensor data to reason over, and a ground truth of adequate quality used for training and validation purposes. The large set up costs of such research projects and their complexity limit rapid developments in this area. Therefore, information sharing and reuse, especially in the context of collected datasets, is key in overcoming these barriers. One approach which facilitates this process by reducing ambiguity is the use of ontologies. This article presents a hierarchical ontology for activities of daily living (ADL), together with two use cases of ground truth acquisition in which this ontology has been successfully utilised. Requirements placed on the ontology by ongoing work are discussed.

Open Access Article: Consistent Semantic Annotation of Outdoor Datasets via 2D/3D Label Transfer
Sensors 2018, 18(7), 2249; https://doi.org/10.3390/s18072249
Received: 7 June 2018 / Revised: 1 July 2018 / Accepted: 4 July 2018 / Published: 12 July 2018
Abstract
The advance of scene understanding methods based on machine learning relies on the availability of large ground truth datasets, which are essential for their training and evaluation. Construction of such datasets with imagery from real sensor data however typically requires much manual annotation of semantic regions in the data, delivered by substantial human labour. To speed up this process, we propose a framework for semantic annotation of scenes captured by moving camera(s), e.g., mounted on a vehicle or robot. It makes use of an available 3D model of the traversed scene to project segmented 3D objects into each camera frame to obtain an initial annotation of the associated 2D image, which is followed by manual refinement by the user. The refined annotation can be transferred to the next consecutive frame using optical flow estimation. We have evaluated the efficiency of the proposed framework during the production of a labelled outdoor dataset. The analysis of annotation times shows that up to 43% less effort is required on average, and the consistency of the labelling is also improved.
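The projection step the abstract describes — mapping segmented 3D objects into a camera frame to seed a 2D annotation — follows the standard pinhole camera model. A minimal sketch, in which the intrinsic matrix and pose are hypothetical values, not taken from the paper:

```python
import numpy as np

# Hypothetical intrinsics: 500 px focal length, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_3d, R=np.eye(3), t=np.zeros(3)):
    """points_3d: (N, 3) world coordinates; returns (N, 2) pixel coords."""
    cam = points_3d @ R.T + t        # world frame -> camera frame
    uvw = cam @ K.T                  # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide
```

Projecting each labelled 3D object this way yields an initial 2D mask per frame, which the annotator then only needs to refine.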

Open Access Article: Automatic Annotation for Human Activity Recognition in Free Living Using a Smartphone
Sensors 2018, 18(7), 2203; https://doi.org/10.3390/s18072203
Received: 30 May 2018 / Revised: 5 July 2018 / Accepted: 6 July 2018 / Published: 9 July 2018
Abstract
Data annotation is a time-consuming process posing major limitations to the development of Human Activity Recognition (HAR) systems. The availability of a large amount of labeled data is required for supervised Machine Learning (ML) approaches, especially in the case of online and personalized approaches requiring user specific datasets to be labeled. The availability of such datasets has the potential to help address common problems of smartphone-based HAR, such as inter-person variability. In this work, we present (i) an automatic labeling method facilitating the collection of labeled datasets in free-living conditions using the smartphone, and (ii) we investigate the robustness of common supervised classification approaches under instances of noisy data. We evaluated the results with a dataset consisting of 38 days of manually labeled data collected in free living. The comparison between the manually and the automatically labeled ground truth demonstrated that it was possible to obtain labels automatically with an 80–85% average precision rate. Results obtained also show how a supervised approach trained using automatically generated labels achieved an 84% f-score (using Neural Networks and Random Forests); however, results also demonstrated how the presence of label noise could lower the f-score up to 64–74% depending on the classification approach (Nearest Centroid and Multi-Class Support Vector Machine).
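The effect of label noise the abstract reports can be reproduced in miniature on synthetic data. This is an illustrative experiment, not the paper's dataset or protocol; a Random Forest stands in for the classifiers studied, and the noise level is invented:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic binary task standing in for HAR features.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
scores = {}
for noise in (0.0, 0.3):
    # Flip a fraction of training labels to mimic noisy automatic labels.
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_noisy)
    scores[noise] = f1_score(y_te, clf.predict(X_te))
print(scores)
```

Evaluating on a clean test set while training on corrupted labels isolates the cost of noisy annotation, which is the comparison the paper makes at full scale.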

Open Access Article: Automatic Annotation of Unlabeled Data from Smartphone-Based Motion and Location Sensors
Sensors 2018, 18(7), 2134; https://doi.org/10.3390/s18072134
Received: 21 May 2018 / Revised: 24 June 2018 / Accepted: 26 June 2018 / Published: 3 July 2018
Abstract
Automatic data annotation eliminates most of the challenges we faced due to the manual methods of annotating sensor data. It significantly improves users’ experience during sensing activities since their active involvement in the labeling process is reduced. An unsupervised learning technique such as clustering can be used to automatically annotate sensor data. However, the lingering issue with clustering is the validation of generated clusters. In this paper, we adopted the k-means clustering algorithm for annotating unlabeled sensor data for the purpose of detecting sensitive location information of mobile crowd sensing users. Furthermore, we proposed a cluster validation index for the k-means algorithm, which is based on Multiple Pair-Frequency. Thereafter, we trained three classifiers (Support Vector Machine, K-Nearest Neighbor, and Naïve Bayes) using cluster labels generated from the k-means clustering algorithm. The accuracy, precision, and recall of these classifiers were evaluated during the classification of “non-sensitive” and “sensitive” data from motion and location sensors. Very high accuracy scores were recorded from Support Vector Machine and K-Nearest Neighbor classifiers while a fairly high accuracy score was recorded from the Naïve Bayes classifier. With the hybridized machine learning (unsupervised and supervised) technique presented in this paper, unlabeled sensor data was automatically annotated and then classified.
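The hybridized pipeline the abstract describes — unsupervised annotation with k-means followed by supervised training on the generated labels — can be sketched on synthetic two-cluster data (illustrative only; the features and cluster geometry are invented, and this omits the paper's cluster validation index):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Two well-separated synthetic feature clusters standing in for
# "sensitive" vs. "non-sensitive" motion/location features.
rng = np.random.default_rng(42)
sensitive = rng.normal([0.0, 0.0], 0.3, (100, 2))
non_sensitive = rng.normal([5.0, 5.0], 0.3, (100, 2))
X = np.vstack([sensitive, non_sensitive])

# Step 1: unsupervised annotation via k-means cluster labels.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: train a supervised classifier on those automatic labels.
clf = SVC().fit(X, labels)
print(clf.score(X, labels))
```

On real data the k-means labels would first be validated (the paper proposes a Multiple Pair-Frequency index for this) before being trusted as training targets.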

Open Access Article: Smart Annotation of Cyclic Data Using Hierarchical Hidden Markov Models
Sensors 2017, 17(10), 2328; https://doi.org/10.3390/s17102328
Received: 6 September 2017 / Revised: 28 September 2017 / Accepted: 11 October 2017 / Published: 13 October 2017
Cited by 1
Abstract
Cyclic signals are an intrinsic part of daily life, such as human motion and heart activity. The detailed analysis of them is important for clinical applications such as pathological gait analysis and for sports applications such as performance analysis. Labeled training data for algorithms that analyze these cyclic data come at a high annotation cost due to only limited annotations available under laboratory conditions or requiring manual segmentation of the data under less restricted conditions. This paper presents a smart annotation method that reduces this cost of labeling for sensor-based data, which is applicable to data collected outside of strict laboratory conditions. The method uses semi-supervised learning of sections of cyclic data with a known cycle number. A hierarchical hidden Markov model (hHMM) is used, achieving a mean absolute error of 0.041 ± 0.020 s relative to a manually-annotated reference. The resulting model was also used to simultaneously segment and classify continuous, ‘in the wild’ data, demonstrating the applicability of using hHMM, trained on limited data sections, to label a complete dataset. This technique achieved comparable results to its fully-supervised equivalent. Our semi-supervised method has the significant advantage of reduced annotation cost. Furthermore, it reduces the opportunity for human error in the labeling process normally required for training of segmentation algorithms. It also lowers the annotation cost of training a model capable of continuous monitoring of cycle characteristics such as those employed to analyze the progress of movement disorders or analysis of running technique.
