Article

HRDepthNet: Depth Image-Based Marker-Less Tracking of Body Joints

Assistance Systems and Medical Device Technology, Department of Health Services Research, Carl von Ossietzky University Oldenburg, 26129 Oldenburg, Germany
*
Author to whom correspondence should be addressed.
Academic Editor: Matteo Poggi
Sensors 2021, 21(4), 1356; https://doi.org/10.3390/s21041356
Received: 30 November 2020 / Revised: 5 February 2021 / Accepted: 7 February 2021 / Published: 14 February 2021
(This article belongs to the Special Issue Computer Vision for 3D Perception and Applications)
While approaches for detecting joint positions in color images, such as HRNet and OpenPose, are readily available, corresponding approaches for depth images have received limited consideration, even though depth images offer several advantages over color images, such as robustness to light variation and invariance to color and texture. We therefore introduce High-Resolution Depth Net (HRDepthNet), a machine-learning-driven approach to detecting human joints (body, head, and upper and lower extremities) in depth images alone. HRDepthNet retrains the original HRNet on depth images. To this end, a dataset was created holding depth (and RGB) images recorded while subjects conducted the timed up and go test, an established geriatric assessment. The joint positions were manually annotated in the RGB images. Training and evaluation were conducted with this dataset. For accuracy evaluation, the detection of body joints was assessed via COCO’s evaluation metrics, which indicated that the resulting depth image-based model achieved better results than HRNet trained and applied on the corresponding RGB images. An additional evaluation of the position errors showed a median deviation of 1.619 cm (x-axis), 2.342 cm (y-axis), and 2.4 cm (z-axis).
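The COCO keypoint metrics referenced above are built on the Object Keypoint Similarity (OKS), which scores predicted joints by a Gaussian falloff of their distance to the ground truth. The following is a minimal sketch of that formula; the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def oks(pred, gt, visibility, scale, kappa):
    """Object Keypoint Similarity, the basis of COCO's keypoint AP.

    pred, gt   : (N, 2) arrays of predicted / ground-truth joint coordinates
    visibility : (N,) array, > 0 for joints that are annotated
    scale      : object scale (sqrt of the person's segment area)
    kappa      : per-joint constant controlling the falloff tolerance
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)            # squared pixel distances
    sim = np.exp(-d2 / (2 * scale**2 * kappa**2))    # per-joint similarity in (0, 1]
    mask = visibility > 0                            # only annotated joints count
    return float(sim[mask].mean())
```

A prediction that matches the ground truth exactly yields an OKS of 1.0; COCO's average precision is then computed by thresholding OKS (e.g., at 0.50 to 0.95), analogous to IoU thresholds in object detection.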
Keywords: depth camera; timed “up & go” test; TUG; 5 × SST; marker-less tracking; machine learning; algorithm
MDPI and ACS Style

Büker, L.C.; Zuber, F.; Hein, A.; Fudickar, S. HRDepthNet: Depth Image-Based Marker-Less Tracking of Body Joints. Sensors 2021, 21, 1356. https://doi.org/10.3390/s21041356

