Open Access Article

body2vec: 3D Point Cloud Reconstruction for Precise Anthropometry with Handheld Devices

1 Laboratorio de Ciencias de las Imágenes, Departamento de Ingeniería Eléctrica y Computadoras, Universidad Nacional del Sur, and CONICET, Bahía Blanca B8000, Argentina
2 Instituto Patagónico de Ciencias Sociales y Humanas, Centro Nacional Patagónico, CONICET, Puerto Madryn U9120, Argentina
3 Departamento de Informática, Facultad de Ingeniería, Universidad Nacional de la Patagonia San Juan Bosco, Trelew U9100, Argentina
* Author to whom correspondence should be addressed.
J. Imaging 2020, 6(9), 94; https://doi.org/10.3390/jimaging6090094
Received: 31 July 2020 / Revised: 23 August 2020 / Accepted: 31 August 2020 / Published: 11 September 2020
(This article belongs to the Special Issue 3D and Multimodal Image Acquisition Methods)
Current point cloud extraction methods based on photogrammetry generate large amounts of spurious detections that hamper useful 3D mesh reconstruction or, even worse, preclude adequate measurements. Moreover, noise removal methods for point clouds are complex, slow, and incapable of coping with semantic noise. In this work we present body2vec, a model-based body segmentation tool that uses a specifically trained neural network architecture. Body2vec performs human body point cloud reconstruction from videos taken with handheld devices (smartphones or tablets), achieving high-quality anthropometric measurements. The main contribution of the proposed workflow is a background removal step that avoids the generation of spurious points typical of photogrammetric reconstruction. A group of 60 people was filmed with a smartphone, and the corresponding point clouds were obtained automatically with standard photogrammetric methods. As a 3D silver standard, we used the clean meshes obtained at the same time with LiDAR sensors, post-processed and noise-filtered by expert anthropological biologists. As a gold standard, we used anthropometric measurements of the waist and hip of the same people taken by expert anthropometrists. Applying our method to the raw videos significantly improved the quality of the resulting point clouds, assessed against the LiDAR-based meshes, and of the derived anthropometric measurements, assessed against the actual hip and waist perimeters measured by the anthropometrists. In both respects, the quality achieved by body2vec is equivalent to that of the LiDAR reconstruction.
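The workflow described above (per-frame background removal followed by standard photogrammetric reconstruction) can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it substitutes an off-the-shelf DeepLabV3 person-segmentation model for the specifically trained body2vec network, assumes a recent torchvision, and the video path, output directory, and downstream SfM tool are placeholders.

# Hypothetical sketch (not the authors' code): mask the background in each
# video frame with an off-the-shelf person-segmentation network before
# handing the frames to a standard photogrammetry/SfM tool.
import os
import cv2
import numpy as np
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

PERSON_CLASS = 15  # "person" index in the VOC-style label set used by torchvision's DeepLabV3

model = deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def mask_person(frame_bgr: np.ndarray) -> np.ndarray:
    """Return the frame with every non-person pixel set to black."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        scores = model(preprocess(rgb).unsqueeze(0))["out"][0]  # (classes, H, W)
    mask = (scores.argmax(0) == PERSON_CLASS).numpy().astype(np.uint8)
    return frame_bgr * mask[..., None]  # zero out the background

# Extract, mask, and save the frames of one handheld video; the masked frames
# can then be fed to any SfM pipeline (e.g., COLMAP) to obtain a point cloud
# without background-induced spurious points.
os.makedirs("masked_frames", exist_ok=True)
video = cv2.VideoCapture("subject_video.mp4")  # hypothetical input path
frame_idx = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    cv2.imwrite(f"masked_frames/{frame_idx:05d}.png", mask_person(frame))
    frame_idx += 1
video.release()

The point of the sketch is that masking happens per frame, before feature matching, so background features never enter the correspondence stage of the photogrammetric reconstruction.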
Keywords: deep learning; neural networks; structure from motion; 3D point cloud; anthropometry


Trujillo-Jiménez, M.A.; Navarro, P.; Pazos, B.; Morales, L.; Ramallo, V.; Paschetta, C.; De Azevedo, S.; Ruderman, A.; Pérez, O.; Delrieux, C.; Gonzalez-José, R. body2vec: 3D Point Cloud Reconstruction for Precise Anthropometry with Handheld Devices. J. Imaging 2020, 6, 94.

