Article

3D Human Pose Estimation with a Catadioptric Sensor in Unconstrained Environments Using an Annealed Particle Filter

1 Arts et Métiers Institute of Technology, LISPEN, HESAM University, 75005 Chalon-sur-Saône, France
2 IBISC Laboratory, University of Evry, 91000 Evry-Courcourronnes, France
* Author to whom correspondence should be addressed.
Sensors 2020, 20(23), 6985; https://doi.org/10.3390/s20236985
Received: 22 October 2020 / Revised: 30 November 2020 / Accepted: 5 December 2020 / Published: 7 December 2020
(This article belongs to the Special Issue Human Activity Recognition Based on Image Sensors and Deep Learning)
The purpose of this paper is to investigate the problem of 3D human tracking in complex environments using a particle filter with images captured by a catadioptric vision system. This issue has been widely studied in the literature for RGB images acquired with conventional perspective cameras, whereas omnidirectional images have seldom been used and published research in this field remains limited. In this study, Riemannian manifolds were considered in order to compute the gradient on spherical images and generate a robust descriptor, used along with an SVM classifier for human detection. Original likelihood functions associated with the particle filter are proposed, using both geodesic distances and overlapping regions between the silhouette detected in the images and the projected 3D human model. Our approach was experimentally evaluated on real data and showed favorable results compared with machine-learning-based techniques in terms of 3D pose accuracy. The Root Mean Square Error (RMSE) was measured by comparing the estimated 3D poses with ground-truth data, resulting in a mean error of 0.065 m for the walking action.
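As a rough illustration of the annealed particle filtering scheme outlined in the abstract, the sketch below resamples and diffuses pose particles over several annealing layers, with the likelihood sharpened at each layer. The pose dimensionality, the noise schedule, and the silhouette_overlap stub are assumptions made for illustration only; they do not reproduce the authors' geodesic-distance and silhouette-overlap likelihood functions on omnidirectional images.

# Minimal sketch of an annealed particle filter for 3D pose estimation.
# `silhouette_overlap`, the pose dimensionality and the noise schedule are
# illustrative assumptions, not the authors' implementation.
import numpy as np

def silhouette_overlap(pose, observation):
    # Placeholder likelihood in (0, 1]: in the paper this would score the
    # overlap between the projected 3D body model and the silhouette
    # detected in the omnidirectional image.
    return np.exp(-np.linalg.norm(pose - observation))

def annealed_particle_filter(observation, n_particles=200, n_layers=5,
                             pose_dim=10, noise=0.1, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    # Initialise particles around a neutral pose.
    particles = rng.normal(0.0, 1.0, size=(n_particles, pose_dim))
    for layer in range(n_layers):
        # Annealing: raise the likelihood to a power that grows with the layer,
        # so early layers explore broadly and later layers concentrate mass.
        beta = (layer + 1) / n_layers
        weights = np.array([silhouette_overlap(p, observation) ** beta
                            for p in particles])
        weights /= weights.sum()
        # Resample according to the annealed weights.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        # Diffuse with decreasing noise so later layers refine the estimate.
        particles += rng.normal(0.0, noise * (1.0 - beta + 1e-2),
                                size=particles.shape)
    # Mean of the final layer as the pose estimate.
    return particles.mean(axis=0)

# Toy usage: the "observation" stands in for the detected silhouette, and the
# RMSE mirrors the evaluation protocol mentioned in the abstract.
true_pose = np.zeros(10)
estimate = annealed_particle_filter(true_pose)
rmse = np.sqrt(np.mean((estimate - true_pose) ** 2))
print(f"RMSE against ground truth: {rmse:.3f}")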
Keywords: human tracking; omnidirectional camera; ego motion; particle filter
Ababsa, F.; Hadj-Abdelkader, H.; Boui, M. 3D Human Pose Estimation with a Catadioptric Sensor in Unconstrained Environments Using an Annealed Particle Filter. Sensors 2020, 20, 6985. https://doi.org/10.3390/s20236985
