Open Access: this article is freely available.
GrabCut-Based Human Segmentation in Video Sequences
Departamento MAIA, Universitat de Barcelona, Gran Via 585, 08007 Barcelona, Spain
Centre de Visió per Computador, Campus UAB, Edifici O, 08193 Bellaterra, Barcelona, Spain
* Author to whom correspondence should be addressed.
Received: 4 September 2012; in revised form: 1 November 2012 / Accepted: 6 November 2012 / Published: 9 November 2012
Abstract: In this paper, we present a fully automatic Spatio-Temporal GrabCut human segmentation methodology that combines tracking and segmentation. GrabCut initialization is performed by HOG-based subject detection, face detection, and a skin color model. Spatial information is included via Mean Shift clustering, whereas temporal coherence is enforced through a history of Gaussian Mixture Models. Moreover, full face and pose recovery is obtained by combining human segmentation with Active Appearance Models and Conditional Random Fields. Results on public datasets and on a new Human Limb dataset show robust segmentation and recovery of both face and pose with the presented methodology.
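To make the color-modelling idea behind GrabCut concrete, the sketch below fits separate Gaussian color models to foreground and background seed pixels and labels each pixel by comparing likelihoods. This is a simplification of the paper's method (single Gaussians stand in for the full Gaussian Mixture Models, and no graph-cut smoothing is applied); all function names here are ours, not the authors'.

```python
import numpy as np

def fit_gaussian(pixels):
    """Fit a single 3D Gaussian (mean, covariance) to an (N, 3) pixel array."""
    mu = pixels.mean(axis=0)
    # Small ridge keeps the covariance invertible for uniform seed regions.
    cov = np.cov(pixels.T) + 1e-6 * np.eye(3)
    return mu, cov

def log_likelihood(pixels, mu, cov):
    """Per-pixel Gaussian log-likelihood for an (N, 3) pixel array."""
    d = pixels - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ni,ij,nj->n', d, inv, d)
    return -0.5 * (quad + logdet + 3 * np.log(2 * np.pi))

def classify(image, fg_seed_mask, bg_seed_mask):
    """Label each pixel foreground (True) or background (False)
    by maximum likelihood under the two seeded color models."""
    h, w, _ = image.shape
    px = image.reshape(-1, 3).astype(float)
    mu_f, cov_f = fit_gaussian(px[fg_seed_mask.ravel()])
    mu_b, cov_b = fit_gaussian(px[bg_seed_mask.ravel()])
    fg = log_likelihood(px, mu_f, cov_f) > log_likelihood(px, mu_b, cov_b)
    return fg.reshape(h, w)
```

In the full method, the unary likelihoods above would feed the data term of a graph cut, and the fitted models would be carried across frames to provide the temporal coherence the abstract describes.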
Keywords: segmentation; human pose recovery; GrabCut; GraphCut; Active Appearance Models; Conditional Random Field
Cite This Article
MDPI and ACS Style
Hernández-Vela, A.; Reyes, M.; Ponce, V.; Escalera, S. GrabCut-Based Human Segmentation in Video Sequences. Sensors 2012, 12, 15376-15393.
AMA Style
Hernández-Vela A, Reyes M, Ponce V, Escalera S. GrabCut-Based Human Segmentation in Video Sequences. Sensors. 2012; 12(11):15376-15393.
Chicago/Turabian Style
Hernández-Vela, Antonio; Reyes, Miguel; Ponce, Víctor; Escalera, Sergio. 2012. "GrabCut-Based Human Segmentation in Video Sequences." Sensors 12, no. 11: 15376-15393.