Open Access Article

Person Re-ID by Fusion of Video Silhouettes and Wearable Signals for Home Monitoring Applications

Department of Computer Science, University of Bristol, Bristol BS8 1UB, UK
* Author to whom correspondence should be addressed.
This paper is an extended version of the conference paper: Masullo, A.; Burghardt, T.; Damen, D.; Perrett, T.; Mirmehdi, M. Who Goes There? Exploiting Silhouettes and Wearable Signals for Subject Identification in Multi-Person Environments. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Seoul, Korea, 27 October–2 November 2019.
Sensors 2020, 20(9), 2576; https://doi.org/10.3390/s20092576
Received: 3 April 2020 / Revised: 23 April 2020 / Accepted: 28 April 2020 / Published: 1 May 2020
(This article belongs to the Special Issue Multimodal Data Fusion and Machine-Learning for Healthcare)
The use of visual sensors to monitor people in their living environments is key to obtaining more accurate health measurements, but it is undermined by concerns over privacy. Silhouettes, generated from RGB video, can alleviate the privacy issue to a considerable degree. However, silhouettes make it difficult to discriminate between different subjects, preventing a subject-tailored analysis of the data within a free-living, multi-occupancy home. This limitation can be overcome by a strategic fusion of sensors: wearable accelerometer devices can be used in conjunction with the silhouette video data to match video clips to a specific patient being monitored. The proposed method simultaneously solves the problem of person re-identification (Re-ID) from silhouettes and enables home monitoring systems to employ sensor-fusion techniques for data analysis. We develop a multimodal deep-learning framework that maps short video clips and acceleration signals into a latent space in which the Euclidean distance is measured to match video and acceleration streams. We train our method on the SPHERE Calorie Dataset, for which we show an average area under the ROC curve of 76.3% and an assignment accuracy of 77.4%. In addition, we propose a novel triplet loss that we demonstrate improves performance and convergence speed.
Keywords: sensor fusion; digital health; silhouettes; accelerometer; ambient assisted living
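The matching step the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the two embedding networks have already mapped each silhouette clip and each acceleration window into a shared latent space, and shows only (a) a standard margin-based triplet loss over such embeddings and (b) assignment of each video embedding to the nearest acceleration embedding by Euclidean distance. The function names and the 2-D toy embeddings are hypothetical.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Margin-based triplet loss on latent vectors: pull the matching
    # (video, acceleration) pair together, push the mismatched pair apart.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def match_streams(video_embs, accel_embs):
    # Pairwise Euclidean distances between every video embedding and
    # every acceleration embedding, then nearest-neighbour assignment.
    dists = np.linalg.norm(
        video_embs[:, None, :] - accel_embs[None, :, :], axis=2
    )
    return dists.argmin(axis=1)

# Toy example: two silhouette clips, two wearable streams.
video = np.array([[0.0, 0.0], [1.0, 1.0]])
accel = np.array([[0.1, 0.0], [0.9, 1.0]])
print(match_streams(video, accel))  # each clip assigned to its wearer
```

In the paper this nearest-neighbour assignment is what links an anonymous silhouette clip to the specific monitored person wearing the accelerometer, so the quality of the learned embedding (driven by the triplet loss) directly determines the assignment accuracy reported above.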
MDPI and ACS Style

Masullo, A.; Burghardt, T.; Damen, D.; Perrett, T.; Mirmehdi, M. Person Re-ID by Fusion of Video Silhouettes and Wearable Signals for Home Monitoring Applications. Sensors 2020, 20, 2576. https://doi.org/10.3390/s20092576
