Article

System for Estimation of Human Anthropometric Parameters Based on Data from Kinect v2 Depth Camera

by Tomasz Krzeszowski 1,*, Bartosz Dziadek 2, Cíntia França 3,4, Francisco Martins 3,4, Élvio Rúbio Gouveia 3,4 and Krzysztof Przednowek 2

1 Faculty of Electrical and Computer Engineering, Rzeszów University of Technology, 35-959 Rzeszów, Poland
2 Institute of Physical Culture Sciences, Medical College of Rzeszów University, 35-959 Rzeszów, Poland
3 Department of Physical Education and Sport, University of Madeira, 9020-105 Funchal, Portugal
4 LARSYS, Interactive Technologies Institute, 9020-105 Funchal, Portugal
* Author to whom correspondence should be addressed.
Sensors 2023, 23(7), 3459; https://doi.org/10.3390/s23073459
Submission received: 15 February 2023 / Revised: 20 March 2023 / Accepted: 22 March 2023 / Published: 25 March 2023

Abstract: Anthropometric measurement of the human body is an important task that affects many aspects of human life. However, it often requires an appropriate measurement procedure and specialized, sometimes expensive, measurement tools, and the procedure itself can be complicated, time-consuming, and dependent on properly trained personnel. This study aimed to develop a system for estimating human anthropometric parameters based on a three-dimensional scan of the complete body made with an inexpensive depth camera, the Kinect v2 sensor. The study included 129 men aged 18 to 28 years. The developed system consists of a rotating platform, a depth sensor (Kinect v2), and a PC that records the 3D data and estimates the individual anthropometric parameters. Experimental studies showed that the precision of the proposed system is satisfactory for a significant proportion of the parameters. The largest error was found for the waist circumference. The results confirm that this method can be used in anthropometric measurements.

1. Introduction

Anthropometric measurements of the human body are applicable to many aspects of human life [1]. Anthropometry is used in scientific research, clinical examinations, and medicine [2,3,4], as well as in dietetics [5], biomechanics [6,7,8], and the clothing industry [9]. The basis of anthropometry is measurement, which requires an appropriate procedure and specialized, sometimes expensive, measuring tools (e.g., anthropometers, measuring tapes, and calipers). In addition, the measurement process is usually complicated, uncomfortable, time-consuming, and dependent on properly trained personnel [1,2,10,11].
In view of the above, it became reasonable to look for other measurement methods that could support or partially replace the existing techniques and tools. Imaging methods commonly used in medicine (DXA, CT, MRI) [1] became an alternative to classical anthropometry, as did the estimation of anthropometric parameters using computer vision [11] and 3D laser body scanners [12]. According to Jaeschke et al. [2], scanners that visualize a three-dimensional human model may prove useful for improving the measurement of human body parameters (length and circumference of the trunk, hips, or other body parts). Liu et al. [13] stated that 3D scanners have fundamentally changed the approach to this type of anthropometric measurement in recent years. In [14], a synthetic data set of human body shapes was used to develop a method for estimating anthropometric parameters with deep neural networks.
The literature contains studies in which images from digital cameras [11,15,16], the Kinect sensor [17,18,19,20], MoCap systems [21], or professional 3D scanners [2,4] were applied to estimate individual types of human anthropometric parameters. Most of these solutions are complex optical systems consisting of multiple cameras, structured-light emitters, or laser beams [2,4,16,21]. They are characterized, among other things, by the ability to perform a relatively quick three-dimensional scan of the object, high resolution, and high accuracy. Unfortunately, they are expensive to purchase, their application is sometimes complicated, and, due to their architecture, they are mainly used in laboratory conditions [22,23].
Among the systems and tools used to estimate the anthropometric parameters of the human body, the Kinect sensor has found wide application. The device, characterized by a low price and equipped with an RGB camera, an infrared camera, and an infrared emitter, has become a significant element of scientific research worldwide [6,24,25].
Among the publications describing the varied applications of the Kinect sensor, there are studies in which the authors characterized the sensor in terms of hardware [6] and software, described procedures for camera calibration and fusion, or proposed methods for improving and smoothing the images obtained with the sensor [24,26,27]. Cai et al. [25] described examples in which the device was used in various support systems for industry, in the detection, recognition, and tracking of objects, and in tracking people and analyzing human activity. The Kinect sensor has also found applications in medicine [28,29,30,31], sport [32,33,34,35,36,37,38], and biomechanics [6], where IMU and EMG sensors have recently become popular for improving the accuracy of human movement pattern recognition [39].
The Kinect has also been used as a tool supporting the estimation of human anthropometric parameters. Studies on this issue have described systems based on one [9,17,20,40,41], three [42], four [43], and even 16 Kinect sensors [19]. In solutions with a single depth camera, elementary anthropometric measures were usually estimated from the analysis of several images recorded by the sensor [17,41], or from a three-dimensional model obtained from a partial or complete scan of the human body [9,20].
He et al. [20] developed a system that performs anthropometric measurements based on a 3D model of the human body obtained from images captured by the Kinect sensor. The system made it possible to measure the volume of the human body and the circumference of the chest, waist, and hips; the results were compared with those achieved by other methods for constructing 3D human scans. A complete body scan was also performed by Kudzia [44], who used a point cloud to calculate the volume and mass of individual body parts. However, that method has a number of limitations, the most serious being the need for manual segmentation of the human body and the long time required for this procedure. In another study in this field, the Kinect sensor was used to perform 3D scans and anthropometric measurements of people wearing clothes and in various body positions [9]. Naufal et al. [41], in turn, used the Kinect sensor to calculate the height and surface area of the human body for weight estimation, applying linear regression and polynomial regression in the estimation process.
The review of the literature shows that the application of depth sensors as a tool supporting anthropometric measurements of the human body may be justified, but comprehensive solutions that measure many anthropometric parameters and are thoroughly tested on many subjects are lacking. This study aimed to develop a system for estimating human anthropometric parameters based on a three-dimensional scan of the complete body made with an inexpensive depth camera, the Kinect v2 sensor. The developed system builds a 3D human model from the data obtained from the depth sensor, then segments this model and estimates seven anthropometric parameters characterizing the human build. It should be noted that the Kinect v2 sensor is used only to acquire depth data and can be replaced with another sensor (e.g., Intel RealSense D455, Azure Kinect) that provides this type of data. In summary, the main contributions of this paper are:
  • The development of a system for the estimation of human anthropometric parameters based on data from a depth camera;
  • The development of a method for estimating anthropometric parameters from 3D scans;
  • The verification of the feasibility of estimating anthropometric parameters with the Kinect v2 sensor.

2. Materials and Methods

2.1. Data Collection

The study included 129 men aged 18 to 28 years, with a body weight of 79.4 ± 11.7 kg and a body height of 180.2 ± 6.5 cm. All participants gave their written consent to the anthropometric examination and to the 3D body scan.
All anthropometric parameters were measured according to the procedures of the International Society for the Advancement of Kinanthropometry (ISAK) [45] and included direct measurement of seven anthropometric parameters (see Figure 1):
  • Body height (BH)—measured with a stadiometer (SECA 213, Hamburg, Germany) with an accuracy of 1 mm.
  • Arm span (AS)—the subject stood with his back to the wall so that his back, buttocks, and heels touched the wall. He then raised both hands horizontally with the fingers of both hands straightened, and the left hand, with straight fingers, touched the corner of the room. The arm span was measured with tape from the corner of the room to a mark on the wall corresponding to the end of the right hand.
  • Waist girth (WC)—the waist circumference was measured with anthropometric tape at the approximate midpoint between the lower margin of the last palpable rib and the top of the iliac crest.
  • Hip girth (HC)—the hip circumference was measured around the widest portion of the buttocks.
  • Arm girth (AC)—the subject was in a relaxed standing position with the arms hanging by the sides. The arm girth was measured with anthropometric tape positioned perpendicular to the long axis of the arm, at the level of the midpoint between the corner of the acromion and the proximal head of the radius.
  • Thigh girth (TC)—the subject stood with his legs slightly apart and his body weight evenly distributed on both feet. The measurement was carried out with anthropometric tape at mid-thigh, in a plane perpendicular to the long axis of the thigh, so that the flexible tape did not indent the skin excessively.
  • Calf girth (CC)—the subject stood with feet slightly apart and body weight evenly distributed. The measurement was made at the point of maximum calf circumference, in a plane perpendicular to the vertical axis of the leg, with the tape wrapped so that it did not indent the skin excessively.

2.2. System for Estimation of Human Somatic Parameters

The system for estimating human somatic parameters (Figure 2) consists of a rotating platform on which the measured person stands, a depth sensor that records a 3D scan, and a PC that stores the 3D data and carries out the calculations related to the estimation of individual parameters. In the proposed solution, the Kinect v2 was used as the depth sensor. The 3D scan of the measured person is recorded while the platform rotates through 360° at a constant speed, which allows a full 3D scan of the human body. The scanned person should stand in a T-pose, and clothing should be kept to a minimum (e.g., tight-fitting underwear). During scanning, the sensor records multiple 3D scans that show the human body from different sides. These scans are analyzed to find correspondences and are merged into one 3D scan. This operation is performed using methods known from the literature and available in point cloud processing libraries [46].
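The paper does not detail the merging step beyond referencing standard point cloud processing methods [46]. As an illustration only, the following minimal sketch shows one common approach, incremental pairwise ICP registration, written with the Open3D Python library rather than the PCL library used by the authors; the function name, voxel size, and correspondence distance are assumptions, and the sketch presumes small motion between consecutive frames.

```python
import open3d as o3d

def merge_rotation_scans(clouds, voxel=0.01, max_dist=0.05):
    """Incrementally align successive scans of the rotating subject
    with ICP and merge them into a single point cloud (sketch)."""
    merged = clouds[0]
    for cloud in clouds[1:]:
        # Align the new view to the model merged so far.
        reg = o3d.pipelines.registration.registration_icp(
            cloud, merged, max_dist,
            estimation_method=o3d.pipelines.registration.
            TransformationEstimationPointToPoint())
        cloud.transform(reg.transformation)
        merged += cloud
        # Down-sample to keep the point density bounded.
        merged = merged.voxel_down_sample(voxel)
    return merged
```

In practice, point-to-plane ICP with estimated normals, or a global registration step for the initial alignment, tends to be more robust for consecutive views of a rotating subject.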

2.3. Segmentation of a 3D Scan of the Human Body

In order to determine somatic parameters from a 3D scan (point cloud), segmentation must be performed to separate the individual body segments. The 3D scan of the human figure is divided into nine parts (Figure 3): head, upper torso, lower torso, right arm, left arm, right thigh, left thigh, right lower leg, and left lower leg. Segmentation is based on the proportions of the individual parts of the body and on finding the characteristic features of the human figure. The necessary proportions for the individual body parts were determined from measurements carried out on the test group. In the segmentation process, the location of the scan in the coordinate system is important (see Figure 3): for the calculations to be performed correctly, the scan should be positioned so that the $z$-axis corresponds to the sagittal axis of the human body, the $y$-axis to the vertical axis, and the $x$-axis to the transversal axis.
First, the geometric center of the point cloud is calculated, which gives the approximate position of the scanned person's hips. The point cloud is then filtered so that only points belonging to the hips remain. Among these points, the point $P_H$ with the smallest value of the $z$ coordinate is found. The $y$ coordinate of $P_H$ corresponds to the height at which the hip circumference (HC) is calculated; this height also defines the dividing line between the lower and upper body, so the scan can be split into two parts corresponding to the person's upper and lower body. Next, among the points whose $y$ value lies in the range $(P_H.y - th_H, P_H.y + th_H)$, the algorithm determines the points with the smallest ($P_{Hmin_x}$) and largest ($P_{Hmax_x}$) value of the $x$ coordinate. The $x$ coordinates of these points are used to calculate $x_H = (P_{Hmin_x}.x + P_{Hmax_x}.x)/2$, which separates the points belonging to the right side ($x < x_H$) from those belonging to the left side ($x > x_H$) of the scanned person. Then, based on the proportions of the body, the $y$ coordinates of the knees are determined for the right and left lower body (right and left leg), which allows the points belonging to the right thigh, right lower leg, left thigh, and left lower leg to be identified.
In order to isolate the arms, the points corresponding to the upper body are projected onto the $XY$ plane and processed with the Concave Hull method to determine the points defining the outline of the 2D projection of the upper body (see Figure 4). The contour points obtained in this way are filtered to isolate those belonging to the right ($x < x_H$) and left ($x > x_H$) part of the figure. In the next step, an analysis is performed to determine the points defining the beginning of the right arm ($P_{RAdown}$ and $P_{RAup}$ in Figure 4) and of the left arm ($P_{LAdown}$ and $P_{LAup}$ in Figure 4). First, the points $P_{RAdown}$ and $P_{LAdown}$, which define the lower beginning of the arms, are determined on the basis of an analysis of the directions of normalized vectors whose start and end points are given by successive contour points. $P_{RAdown}$ is defined as the origin of the first normalized vector whose $x$ coordinate is greater than the $th_{dir}$ parameter, with the assumption that the analysis proceeds from the lowest points. Given $P_{RAdown}$, subsequent points are analyzed to find a point $P_{RAup}$ whose $x$ coordinate is close to that of $P_{RAdown}$. $P_{LAdown}$ and $P_{LAup}$ are determined analogously, except that the points belonging to the outline of the left side are analyzed. With the points $P_{RAdown}$, $P_{RAup}$, $P_{LAdown}$, and $P_{LAup}$, the points belonging to the torso, right arm, left arm, and head can be determined. The torso is then split in half into an upper and a lower torso (Figure 3). The calculations were carried out using the PCL library [46].
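As an illustration of the first segmentation steps described above (locating $P_H$, the hip height, and the left/right split), the following NumPy sketch mirrors the description; it is not the authors' PCL implementation, and the helper name and the band half-width `th_h` are illustrative assumptions.

```python
import numpy as np

def split_at_hips(scan, th_h=0.02):
    """Sketch of hip-based segmentation for an (N, 3) point cloud with
    columns (x, y, z); y is the vertical axis, z the sagittal axis."""
    centroid = scan.mean(axis=0)               # approximate hip position
    # Band of points around the centroid height (the "hip" points).
    hips = scan[np.abs(scan[:, 1] - centroid[1]) < th_h]
    p_h = hips[np.argmin(hips[:, 2])]          # P_H: smallest z in the band
    hip_y = p_h[1]                             # height of the HC measurement
    # Mid-hip x coordinate from the extremes of the band around hip_y.
    band = scan[np.abs(scan[:, 1] - hip_y) < th_h]
    x_h = (band[:, 0].min() + band[:, 0].max()) / 2.0
    lower = scan[scan[:, 1] < hip_y]           # legs
    upper = scan[scan[:, 1] >= hip_y]          # torso, arms, head
    right_leg = lower[lower[:, 0] < x_h]
    left_leg = lower[lower[:, 0] >= x_h]
    return upper, right_leg, left_leg, hip_y, x_h
```

The subsequent splits (knees, arms via the concave hull, torso halves) follow the same filtering pattern on the relevant coordinate.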

2.4. Estimation of Human Somatic Parameters

With a segmented 3D scan, the somatic features can be determined. The body height (BH) of the figure is calculated from the coordinates of the points with the maximum and minimum value on the $y$ axis, while the arm span (AS) is calculated from the coordinates of the points with the maximum and minimum value on the $x$ axis. The procedure for determining the remaining somatic features from the 3D scan is as follows (a code sketch follows the list):
  • In order to calculate a given girth, a fragment of the point cloud is separated from the corresponding segment. These points are determined as follows:
    (a) Arm girth (AC)—the place (point $P_{AC}$) at which the circumference is calculated lies halfway between the beginning of the arm (defined by points $P_{RAdown}$ and $P_{RAup}$ for the right arm, and $P_{LAdown}$ and $P_{LAup}$ for the left arm) and the elbow; the approximate position of the elbow is calculated from the ratio of the length of the upper arm to the forearm, determined from the measurements of the test group. The arm points whose $x$ coordinate is in the range $(P_{AC}.x - th_{cut}, P_{AC}.x + th_{cut})$ are then projected onto the $YZ$ plane.
    (b) Waist girth (WC)—the place (point $P_{WC}$) at which the waist circumference is calculated is estimated from the measurements of the test group, during which the distances between the beginning of the torso (the place of the hip circumference measurement) and the waist, and between the waist and the end of the torso (the beginning of the neck), were measured. Torso points whose $y$ coordinate is in the range $(P_{WC}.y - th_{cut}, P_{WC}.y + th_{cut})$ are projected onto the $XZ$ plane.
    (c) Hip girth (HC)—the value of the $y$ coordinate corresponding to the location of the hip circumference measurement ($P_H.y$) is determined during segmentation. Points whose $y$ coordinate is in the range $(P_H.y - th_{cut}, P_H.y + th_{cut})$ are projected onto the $XZ$ plane.
    (d) Thigh girth (TC)—calculated for points located in the middle of the thigh segment. The $y_{TC}$ coordinate is derived from the points at the beginning and end of the thigh. Points whose $y$ coordinate is in the range $(y_{TC} - th_{cut}, y_{TC} + th_{cut})$ are projected onto the $XZ$ plane.
    (e) Calf girth (CC)—first, the approximate place of the circumference measurement, the $y_{CC}$ coordinate, is determined from the measurements of the test group, during which the distances between the knee and the place of calf girth measurement, and between that place and the foot, were measured. Among the points filtered around this height, the point $P_{CC}$ with the smallest value of the $z$ coordinate is found; the $y$ coordinate of $P_{CC}$ corresponds to the height at which the calf has the greatest circumference. Points whose $y$ coordinate is in the range $(P_{CC}.y - th_{cut}, P_{CC}.y + th_{cut})$ are projected onto the $XZ$ plane.
  • Using the Convex Hull method, an ordered list of points is determined from the points projected onto the plane;
  • The girth is calculated from the equation:
$$L = \sum_{k=1}^{n} d(P_k, P_{k+1}),$$
where $P_{n+1} = P_1$ and $d(P_1, P_2)$ is the Euclidean distance between the points $P_1$ and $P_2$.
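As an illustration of the last two steps (convex hull of the projected slice, then the perimeter formula above), the following NumPy/SciPy sketch computes one girth; the authors' implementation uses PCL, so the function name and the `th_cut` value here are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def girth_from_slice(points, axis, center, th_cut=0.01):
    """Estimate a girth from a thin slice of an (N, 3) point cloud.

    axis   : coordinate index the slice is taken along (1 = vertical y)
    center : coordinate of the measurement plane (e.g., P_H.y for HC)
    th_cut : illustrative slice half-thickness in metres
    """
    slab = points[np.abs(points[:, axis] - center) < th_cut]
    # Project the slice onto the plane perpendicular to `axis`.
    plane = slab[:, [i for i in range(3) if i != axis]]
    ring = plane[ConvexHull(plane).vertices]   # ordered boundary points
    # L = sum_k d(P_k, P_{k+1}) with P_{n+1} = P_1 (closed polygon).
    closed = np.vstack([ring, ring[:1]])
    return np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
```

For a 2D input, `scipy.spatial.ConvexHull` also exposes the hull perimeter directly as its `area` attribute, which provides a convenient cross-check. Body height and arm span follow directly from coordinate ranges, e.g., `scan[:, 1].max() - scan[:, 1].min()` for height.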

2.5. Statistical Analysis

The study used basic statistical measures, i.e., the arithmetic mean, standard deviation, median, and first and third quartiles. In addition, two indices were determined: the absolute difference ($d$) and the relative difference ($\Delta$):
$$d = \frac{1}{n} \sum_{i=1}^{n} \left( GS_i - DC_i \right),$$
$$\Delta = \frac{d}{\frac{1}{n} \sum_{i=1}^{n} GS_i},$$
where $n$ is the total number of patterns, $GS$ is the gold standard value, and $DC$ is the estimated value.
Statistical evaluation of the significance of the differences was performed using the Mann–Whitney U test, with p < 0.05 considered significant. In addition, for a detailed comparison of the obtained results with the gold standard, Bland–Altman analysis was performed. The coefficient of repeatability (CR), defined as two standard deviations of the differences between the paired parameters, and the coefficient of variance (CV), expressed as the percentage quotient of the standard deviation of the differences between the paired parameters and the mean, were determined. Additionally, Pearson correlations between the measured and estimated values were calculated. Statistical analysis was performed in the GNU R software [47].
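The analysis itself was carried out in GNU R [47]; purely as an illustration, the following Python/SciPy sketch computes the same quantities for one parameter under my reading of the definitions above (in particular, normalizing the CV by the mean of the paired means is an assumption).

```python
import numpy as np
from scipy import stats

def compare_methods(gs, dc):
    """Evaluation indices for one parameter; gs and dc are 1-D arrays
    of gold-standard and depth-camera values for the same subjects."""
    diff = gs - dc
    d = diff.mean()                          # absolute difference index
    delta = d / gs.mean()                    # relative difference index
    _, p = stats.mannwhitneyu(gs, dc)        # Mann-Whitney U test
    r, _ = stats.pearsonr(gs, dc)            # correlation between methods
    cr = 2.0 * diff.std(ddof=1)              # coefficient of repeatability
    cv = diff.std(ddof=1) / ((gs + dc) / 2.0).mean()
    return d, delta, p, r, cr, 100.0 * cv    # CV reported in percent
```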

3. Results and Discussion

The experimental study consisted of verifying the accuracy of selected anthropometric parameters estimated using the presented algorithm. The values of the obtained parameters were compared with direct measurement, which was considered the gold standard (GS). The estimated results, along with the errors, are presented in Table 1.
The study shows that the most accurately estimated parameter was body height, for which $d = 0.002$ m and $\Delta = 0.1\%$; notably, the difference with respect to GS was not statistically significant. The estimated median values are close to the GS values, and the situation is similar for quartiles Q1 and Q3. Regarding dispersion, an identical standard deviation was obtained for 5 of the 7 parameters; different standard deviations were noted for arm span and waist girth. The remaining differences are statistically significant. The largest difference was observed for waist circumference ($d = 0.074$ m and $\Delta = 9.2\%$). Correlation analysis of the two measurements showed positive relationships (Table 2), with coefficients ranging from $r = 0.61$ for arm girth to $r = 0.97$ for body height. For four parameters, a very strong correlation ($r > 0.9$) between the measurement results was found.
Analysis using the Bland–Altman method (Table 2) showed that five parameters had a coefficient of variance below 5% (CV < 5%). For calf girth, CV = 5.1% was recorded, while the highest value, CV = 8.8%, was found for arm girth. The body height estimate showed the best precision (CV = 1.1%). The coefficient of repeatability was very small: 0.04 for four parameters (body height, calf girth, hip girth, and thigh girth) and 0.06 for three parameters (arm span, arm girth, and waist girth).
To visualize in detail the differences between the values calculated from the 3D scan and the measured values (gold standard), Bland–Altman charts were used (Figure 5). The vast majority of measurements fall within the CV limits, with only isolated cases deviating from the gold standard.
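For readers who wish to reproduce charts of this kind, a minimal matplotlib sketch follows; the mean ± 2 sd limits match the CR definition from Section 2.5, while the exact appearance of Figure 5 is not specified in the text.

```python
import matplotlib.pyplot as plt

def bland_altman_plot(gs, dc, name):
    """Bland-Altman chart: paired means vs. differences, with the mean
    difference and mean +/- 2 sd agreement limits."""
    mean, diff = (gs + dc) / 2.0, gs - dc
    md, sd = diff.mean(), diff.std(ddof=1)
    plt.scatter(mean, diff, s=12)
    for y, style in [(md, '-'), (md + 2 * sd, '--'), (md - 2 * sd, '--')]:
        plt.axhline(y, linestyle=style, color='gray')
    plt.xlabel(f'mean of GS and DC, {name} (m)')
    plt.ylabel('difference GS - DC (m)')
    plt.show()
```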
The comparative analysis showed that the estimation of selected anthropometric parameters using the depth sensor generates acceptable errors. The main quality criterion adopted in the presented solution was the comparison with direct measurements of anthropometric parameters. Comparing the obtained errors with the errors generated by models presented by other researchers is not straightforward, because different criteria are used to evaluate the methods. An interesting study was presented by Kahelin et al. (2020) [8], who also used a 3D scan model reconstructed from 2.5D information captured by a Kinect. The errors they obtained for hip girth (0.019 m) and thigh girth (0.013 m) were smaller than those presented in this paper: hip girth (0.063 m) and thigh girth (0.036 m) (Table 1).
Another paper that allows a direct comparison with the proposed method is [2], which presents a laser-based body surface scanner (VITUS Smart XXL). This scanner was also compared with manual measurements, and the differences and correlations were evaluated. The results for men were likewise more accurate than the results presented in this paper: the correlation coefficients of the estimated measurements with the manual measurements were 0.97 for waist girth and 0.97 for hip girth, whereas for the method presented in this paper the correlations were 0.94 for waist girth and 0.93 for hip girth (Table 2). However, it should be noted that the laser scanner used in [2] is a more accurate measurement device than the Kinect camera-based scanner (Table 1).
A paper that also used a Kinect camera to obtain a 3D scan of the human silhouette is that of Tong et al. (2012) [42]. Tong and co-authors used two Kinects to capture the upper and lower parts of the body, respectively, and a third Kinect, placed on the opposite side, to capture the middle part of the body. It is worth noting that the scanned subjects were wearing clothes. The errors obtained for waist girth (0.062 m) and hip girth (0.038 m) were also smaller than those presented in this work (Table 1).
The Kinect sensor is very often used as a tool to scan either the entire figure or selected body segments [22,28,29,40,41,43]. However, the direct comparison of errors is complicated by the use of different quality criteria. For example, in [22], the relative technical error of measurement (TEM) criterion was used, with a value of 0.88%. In [28], where a 3D foot scanner based on a Kinect camera was presented, the root mean squared error (RMSE) criterion was used; the RMSE was calculated against a high-resolution laser scanner and amounted to 2.8 mm. In another paper [40], in which Kinect v2 was also used to estimate selected anthropometric parameters, the authors found that the differences between the estimated values and the traditional measurements were statistically significant, which is consistent with the results presented in Table 1. Naufal et al. [41] studied a total of 147 subjects, comparing manually measured results with the automatic estimates obtained with the Kinect camera; the difference between the estimation and the manual measurement amounted to 1.04% and was statistically significant.

4. Conclusions

This paper presents and tests a method for measuring selected anthropometric parameters using an inexpensive depth camera, the Kinect v2 sensor. To evaluate the method, a statistical analysis was carried out using the Mann–Whitney U test and Bland–Altman charts. Experimental studies showed that the accuracy of the proposed system is satisfactory for a significant part of the parameters ($\Delta < 7\%$). The largest error occurred for the waist circumference. The results confirm that the method can find application in anthropometric measurements. The use of newer devices, such as the Azure Kinect, should allow more accurate parameter estimates.
The limitations of this work concern the validation of the method. The proposed method was tested only for selected anthropometric parameters. Another limitation is the research group: parameter estimation was performed only for men, and the method was not tested in a group of women. The research also did not take into account non-standard cases, such as body deformities or missing or shortened limbs; in such cases, the system may give incorrect results.
Future work will involve the development of new functionalities of the algorithm and the use of machine learning methods to classify body composition components and somatotype components. It is also planned to test the proposed method with other depth sensors, e.g., the Azure Kinect. In addition to using a more accurate sensor, we will also work on improving the accuracy of the method itself; for this purpose, various filtering and smoothing algorithms will be tested. Another important element of future work will be the inclusion of women in the study.

5. Patents

T. Krzeszowski, K. Przednowek: “Method for estimating somatic features, somatic indicators, somatotype components, somatotype and body composition components with the use of depth sensor”, Polish patent publication PL240075B1, 2022.

Author Contributions

Conceptualization, T.K. and K.P.; methodology, T.K., K.P. and É.R.G.; software, T.K.; validation, T.K. and K.P.; formal analysis, K.P., C.F. and F.M.; investigation, B.D.; resources, B.D.; data curation, T.K. and K.P.; writing—original draft preparation, T.K., K.P. and B.D.; writing—review and editing, T.K., C.F., F.M., É.R.G. and K.P.; visualization, T.K. and K.P.; supervision, T.K.; project administration, T.K. and K.P.; funding acquisition, T.K. and K.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financed by the Subcarpathian Center for Innovation (Podkarpackie Centrum Innowacyjności-PCI), Teofila Lenartowicza 4 Street, 35-051 Rzeszów, Poland, under grant No. N3_60 11/PRZ/1/DG/PCI/2019.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Bioethics Committee of the University of Rzeszów (Poland).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

É.R.G., C.F. and F.M. acknowledge the support of LARSyS, funded by the Portuguese national funding agency for science, research, and technology (FCT), pluriannual funding 2020–2023 (Reference: UIDB/50009/2020).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fosbøl, M.O.; Zerahn, B. Contemporary methods of body composition measurement. Clin. Physiol. Funct. Imaging 2015, 35, 81–97.
2. Jaeschke, L.; Steinbrecher, A.; Pischon, T. Measurement of waist and hip circumference with a body surface scanner: Feasibility, validity, reliability, and correlations with markers of the metabolic syndrome. PLoS ONE 2015, 10, e0119430.
3. Jones, P.; Baker, A.; Hardy, C.; Mowat, A. Measurement of body surface area in children with liver disease by a novel three-dimensional body scanning device. Eur. J. Appl. Physiol. Occup. Physiol. 1994, 68, 514–518.
4. Giachetti, A.; Lovato, C.; Piscitelli, F.; Milanese, C.; Zancanaro, C. Robust automatic measurement of 3D scanned models for the human body fat estimation. IEEE J. Biomed. Health Inform. 2015, 19, 660–666.
5. Fayet-Moore, F.; Petocz, P.; McConnell, A.; Tuck, K.; Mansour, M. The cross-sectional association between consumption of the recommended five food group "grain (cereal)", dietary fibre and anthropometric measures among Australian adults. Nutrients 2017, 9, 157.
6. Choppin, S.; Wheat, J. The potential of the Microsoft Kinect in sports analysis and biomechanics. Sport. Technol. 2013, 6, 78–85.
7. Vigotsky, A.D.; Bryanton, M.A.; Nuckols, G.; Beardsley, C.; Contreras, B.; Evans, J.; Schoenfeld, B.J. Biomechanical, anthropometric, and psychological determinants of barbell back squat strength. J. Strength Cond. Res. 2019, 33, S26–S35.
8. Kahelin, C.A.; George, N.C.; Gyemi, D.L.; Andrews, D.M. Head, Neck, Trunk, and Pelvis Tissue Mass Predictions for Older Adults using Anthropometric Measures and Dual-Energy X-ray Absorptiometry. Int. J. Kinesiol. Sport. Sci. 2020, 8, 14–23.
9. Xu, H.; Yu, Y.; Zhou, Y.; Li, Y.; Du, S. Measuring accurate body parameters of dressed humans with large-scale motion using a Kinect sensor. Sensors 2013, 13, 11362–11384.
10. Kuehnapfel, A.; Ahnert, P.; Loeffler, M.; Broda, A.; Scholz, M. Reliability of 3D laser-based anthropometry and comparison with classical anthropometry. Sci. Rep. 2016, 6, 26672.
11. Stancic, I.; Supuk, T.; Cecic, M. Computer vision system for human anthropometric parameters estimation. WSEAS Trans. Syst. 2009, 8, 430–439.
12. Lin, J.D.; Chiou, W.K.; Weng, H.F.; Fang, J.T.; Liu, T.H. Application of three-dimensional body scanner: Observation of prevalence of metabolic syndrome. Clin. Nutr. 2004, 23, 1313–1323.
13. Liu, X.; Wu, Y.; Wu, H. Machine Learning Enabled 3D Body Measurement Estimation Using Hybrid Feature Selection and Bayesian Search. Appl. Sci. 2022, 12, 7253.
14. Škorvánková, D.; Riečický, A.; Madaras, M. Automatic estimation of anthropometric human body measurements. arXiv 2021, arXiv:2112.11992.
15. BenAbdelkader, C.; Yacoob, Y. Statistical estimation of human anthropometry from a single uncalibrated image. In Computational Forensics; Franke, K., Petrovic, S., Abraham, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–17.
16. Peyer, K.E.; Morris, M.; Sellers, W.I. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras. PeerJ 2015, 3, e831.
17. Espitia-Contreras, A.; Sanchez-Caiman, P.; Uribe-Quevedo, A. Development of a Kinect-based anthropometric measurement application. In Proceedings of the 2014 IEEE Virtual Reality, Minneapolis, MN, USA, 29 March–2 April 2014; pp. 71–72.
18. Clarkson, S.; Choppin, S.; Hart, J.; Heller, B.; Wheat, J. Calculating body segment inertia parameters from a single rapid scan using the Microsoft Kinect. In Proceedings of the 3rd International Conference on 3D Body Scanning Technologies, Lugano, Switzerland, 16–17 October 2012; pp. 153–163.
19. Soileau, L.; Bautista, D.; Johnson, C.; Gao, C.; Zhang, K.; Li, X.; Heymsfield, S.B.; Thomas, D.; Zheng, J. Automated anthropometric phenotyping with novel Kinect-based three-dimensional imaging method: Comparison with a reference laser imaging system. Eur. J. Clin. Nutr. 2016, 70, 475–481.
20. He, Q.; Ji, Y.; Zeng, D.; Zhang, Z. Volumeter: 3D human body parameters measurement with a single Kinect. IET Comput. Vis. 2018, 12, 553–561.
21. Couvertier, M.; Monnet, T.; Lacouture, P. Identification of Human Body Segment Inertial Parameters. In Proceedings of the 22nd Congress of the European Society of Biomechanics, Lyon, France, 10–13 July 2016.
22. Clarkson, S.; Wheat, J.; Heller, B.; Choppin, S. Assessing the suitability of the Microsoft Kinect for calculating person specific body segment parameters. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 372–385.
23. Cui, Y.; Chang, W.; Nöll, T.; Stricker, D. KinectAvatar: Fully automatic body capture using a single Kinect. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7729, pp. 133–147.
24. Cui, Y.; Stricker, D. 3D body scanning with one Kinect. In Proceedings of the 2nd International Conference on 3D Body Scanning Technologies, Lugano, Switzerland, 25–26 October 2011.
25. Cai, Z.; Han, J.; Liu, L.; Shao, L. RGB-D datasets using Microsoft Kinect or similar sensors: A survey. Multimed. Tools Appl. 2017, 76, 4313–4355.
26. Weiss, A.; Hirshberg, D.; Black, M.J. Home 3D body scans from noisy image and range data. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 1951–1958.
27. Lawin, F.J. Depth Data Processing and 3D Reconstruction Using the Kinect v2. Master's Thesis, Linköping University, Linköping, Sweden, 2015.
28. Rogati, G.; Leardini, A.; Ortolani, M.; Caravaggi, P. Validation of a novel Kinect-based device for 3D scanning of the foot plantar surface in weight-bearing. J. Foot Ankle Res. 2019, 12, 46.
29. Zhao, K.; Luximon, A.; Chan, C. Low cost 3D foot scan with Kinect. Int. J. Digit. Hum. 2018, 2, 97–114.
30. Zain, N.; Rahman, W. Three-dimensional (3D) scanning using Microsoft® Kinect® Xbox 360® scanner for fabrication of 3D printed radiotherapy head phantom. J. Phys. Conf. Ser. 2020, 1497, 012005.
31. Kepski, M.; Kwolek, B. Event-driven system for fall detection using body-worn accelerometer and depth sensor. IET Comput. Vis. 2018, 12, 48–58.
32. Lin, Y.H.; Huang, S.Y.; Hsiao, K.F.; Kuo, K.P.; Wan, L.T. A Kinect-based system for golf beginners' training. In Information Technology Convergence; Springer: Dordrecht, The Netherlands, 2013; pp. 121–129.
33. Ting, H.Y.; Sim, K.S.; Abas, F.S. Kinect-based badminton movement recognition and analysis system. Int. J. Comput. Sci. Sport 2015, 14, 25–41.
34. Flôr, C.A.G.; Silvatti, A.P.; Menzl, H.J.K.; Dalla Bernardina, G.R.; de Souza Vicente, C.M.; de Andrade, A.G.P. Validity and Reliability of the Microsoft Kinect to Obtain the Execution Time of the Taekwondo's Frontal Kick. In Proceedings of the 33rd International Conference of Biomechanics in Sports, Poitiers, France, 29 June–3 July 2015.
35. Ting, H.Y.; Tan, Y.W.D.; Lau, B.Y.S. Potential and limitations of Kinect for badminton performance analysis and profiling. Indian J. Sci. Technol. 2016, 9, 1–5.
36. Tamura, Y.; Yamaoka, K.; Uehara, M.; Shima, T. Capture and Feedback in Flying Disc Throw with use of Kinect. Int. J. Comput. Inf. Eng. 2013, 7, 190–194.
37. Bianco, S.; Tisato, F. Karate moves recognition from skeletal motion. In Proceedings of the Three-Dimensional Image Processing (3DIP) and Applications 2013, Burlingame, CA, USA, 6–8 February 2013; International Society for Optics and Photonics: Bellingham, WA, USA, 2013; Volume 8650, p. 86500K.
38. Marquardt, Z.; Beira, J.; Em, N.; Paiva, I.; Kox, S. Super Mirror: A Kinect interface for ballet dancers. In Proceedings of the CHI'12 Extended Abstracts on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 1619–1624.
39. Li, X.; Liu, J.; Huang, Y.; Wang, D.; Miao, Y. Human Motion Pattern Recognition and Feature Extraction: An Approach Using Multi-Information Fusion. Micromachines 2022, 13, 1205.
40. Mokdad, M.; Mokdad, I.; Bouhafs, M.; Lahcene, B. Estimating anthropometric measurements of Algerian students with Microsoft Kinect. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2019; Volume 826, pp. 496–506.
41. Naufal, A.; Anam, C.; Widodo, C.E.; Dougherty, G. Automated Calculation of Height and Area of Human Body for Estimating Body Weight Using a Matlab-based Kinect Camera. Smart Sci. 2022, 10, 68–75.
42. Tong, J.; Zhou, J.; Liu, L.; Pan, Z.; Yan, H. Scanning 3D full human bodies using Kinects. IEEE Trans. Vis. Comput. Graph. 2012, 18, 643–650.
43. Bragança, S.; Arezes, P.; Carvalho, M.; Ashdown, S.P.; Castellucci, I.; Leão, C. A comparison of manual anthropometric measurements with Kinect-based scanned measurements in terms of precision and reliability. Work 2018, 59, 325–339.
44. Kudzia, P.; Jackson, E.; Dumas, G. Estimating body segment parameters from three-dimensional human body scans. PLoS ONE 2022, 17, e0262296.
45. Marfell-Jones, M.J.; Stewart, A.; De Ridder, J. International Standards for Anthropometric Assessment; International Society for the Advancement of Kinanthropometry: Wellington, New Zealand, 2012.
46. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011.
47. R Core Team. R: A Language and Environment for Statistical Computing; R version 4.0.3; R Foundation for Statistical Computing: Vienna, Austria, 2020.
Figure 1. A 3D scan with marked places of estimation of parameters.
Figure 2. System for scanning human anthropometric parameters.
Figure 3. Segmented 3D scan of the human body.
Figure 4. The contour points of the upper body determined using the Concave Hull method, projected onto the XY plane.
Figure 5. Bland–Altman plots for anthropometric parameters.
Table 1. Characteristics of the anthropometric parameters for the GS and DC methods (N = 129).

| Parameter | GS x̄ | GS sd | GS Me | GS Q1 | GS Q3 | DC x̄ | DC sd | DC Me | DC Q1 | DC Q3 | d | Δ | p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| arm span (m) | 1.84 | 0.07 | 1.84 | 1.80 | 1.89 | 1.83 | 0.08 | 1.83 | 1.78 | 1.89 | −0.012 | −0.7% | 0.001 * |
| body height (m) | 1.80 | 0.07 | 1.80 | 1.76 | 1.85 | 1.80 | 0.07 | 1.80 | 1.76 | 1.84 | −0.002 | −0.1% | 0.224 |
| arm girth (m) | 0.35 | 0.03 | 0.34 | 0.33 | 0.36 | 0.33 | 0.03 | 0.33 | 0.31 | 0.35 | −0.017 | −5.9% | 0.001 * |
| calf girth (m) | 0.39 | 0.03 | 0.39 | 0.38 | 0.41 | 0.38 | 0.03 | 0.38 | 0.36 | 0.40 | −0.013 | −3.4% | 0.001 * |
| hip girth (m) | 1.06 | 0.06 | 1.05 | 1.02 | 1.09 | 0.99 | 0.06 | 0.98 | 0.95 | 1.02 | −0.063 | −6.4% | 0.001 * |
| thigh girth (m) | 0.59 | 0.04 | 0.59 | 0.56 | 0.61 | 0.55 | 0.04 | 0.55 | 0.53 | 0.58 | −0.036 | −6.6% | 0.001 * |
| waist girth (m) | 0.89 | 0.08 | 0.88 | 0.84 | 0.92 | 0.81 | 0.07 | 0.80 | 0.77 | 0.84 | −0.074 | −9.2% | 0.001 * |

GS—gold standard; DC—depth camera estimation; x̄—mean; sd—standard deviation; Me—median; Q1—first quartile; Q3—third quartile; d—index of absolute difference; Δ—index of relative difference (%); p—statistical probability; *—statistical significance.
Table 2. Results of correlation and Bland–Altman analysis.

| Parameter | x̄ (GS + DC) | r | sd (GS − DC) | CR | CV |
|---|---|---|---|---|---|
| arm span (m) | 1.84 | 0.94 | 0.03 | 0.06 | 1.6% |
| body height (m) | 1.80 | 0.97 | 0.02 | 0.04 | 1.1% |
| arm girth (m) | 0.34 | 0.61 | 0.03 | 0.06 | 8.8% |
| calf girth (m) | 0.39 | 0.84 | 0.02 | 0.04 | 5.1% |
| hip girth (m) | 1.03 | 0.93 | 0.02 | 0.04 | 1.9% |
| thigh girth (m) | 0.57 | 0.87 | 0.02 | 0.04 | 3.5% |
| waist girth (m) | 0.85 | 0.94 | 0.03 | 0.06 | 3.5% |

x̄ (GS + DC)—mean of the GS and DC values; r—correlation coefficient; sd (GS − DC)—standard deviation of the differences between GS and DC values; CR—coefficient of repeatability; CV—coefficient of variance.