Human Part Segmentation in Depth Images with Annotated Part Positions
Abstract

We present a method of segmenting human parts in depth images, when provided the image positions of the body parts. The goal is to facilitate per-pixel labelling of large datasets of human images, which are used for training and testing algorithms for pose estimation and automatic segmentation. A common technique in image segmentation is to represent an image as a two-dimensional grid graph, with one node for each pixel and edges between neighbouring pixels. We introduce a graph with distinct layers of nodes to model occlusion of the body by the arms. Once the graph is constructed, the annotated part positions are used as seeds for a standard interactive segmentation algorithm. Our method is evaluated on two public datasets containing depth images of humans from a frontal view. It produces a mean per-class accuracy of 93.55% on the first dataset, compared to 87.91% (random forest and graph cuts) and 90.31% (random forest and Markov random field). It also achieves a per-class accuracy of 90.60% on the second dataset. Future work can experiment with various methods for creating the graph layers to accurately model occlusion.
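The grid-graph construction and seeded segmentation described above can be sketched in a few lines. The following is a minimal, illustrative stand-in, not the paper's method: it builds a 4-connected grid graph over a toy depth map and assigns each pixel the label of its geodesically nearest seed, where edge costs are absolute depth differences. The `segment` function, the toy depth values, and the seed names are all hypothetical; the paper uses a standard interactive segmentation algorithm and a layered graph to handle arm occlusion, neither of which is modelled here.

```python
import heapq

def segment(depth, seeds):
    """Label each pixel of a 2D depth map with the part name of its
    nearest seed, where distance is the accumulated absolute depth
    difference along the 4-connected grid graph (Dijkstra expansion).

    depth: list of lists of numbers (row-major depth image)
    seeds: dict mapping part name -> (row, col) of its annotated position
    """
    h, w = len(depth), len(depth[0])
    dist = {(y, x): float("inf") for y in range(h) for x in range(w)}
    label = {}
    pq = []  # min-heap of (distance, (row, col))
    for part, (y, x) in seeds.items():
        dist[(y, x)] = 0.0
        label[(y, x)] = part
        heapq.heappush(pq, (0.0, (y, x)))
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if d > dist[(y, x)]:
            continue  # stale queue entry
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                # Edge weight: depth discontinuities are expensive to cross;
                # the small constant keeps ties deterministic and finite.
                nd = d + abs(depth[ny][nx] - depth[y][x]) + 1e-6
                if nd < dist[(ny, nx)]:
                    dist[(ny, nx)] = nd
                    label[(ny, nx)] = label[(y, x)]
                    heapq.heappush(pq, (nd, (ny, nx)))
    return label

# Toy example: a 2x4 depth map with a near region (depth 0) and a far
# region (depth 9), seeded with two hypothetical part positions.
depth = [[0, 0, 9, 9],
         [0, 0, 9, 9]]
labels = segment(depth, {"torso": (0, 0), "arm": (0, 3)})
```

Because crossing the depth discontinuity is costly, each seed's label floods only its own depth region, which is the intuition behind using depth-aware edge weights in the grid graph.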
Citation
Hynes, A.; Czarnuch, S. Human Part Segmentation in Depth Images with Annotated Part Positions. Sensors 2018, 18, 1900.