Article

A Feasibility Study on the Use of a Structured Light Depth-Camera for Three-Dimensional Body Measurements of Dairy Cows in Free-Stall Barns

1 Department of Agroforestry and Landscape, University of Padua, 35020 Legnaro, Italy
2 Department of Environmental Science and Policy, University of Milan, 20123 Milan, Italy
* Author to whom correspondence should be addressed.
Sensors 2018, 18(2), 673; https://doi.org/10.3390/s18020673
Submission received: 6 December 2017 / Revised: 15 February 2018 / Accepted: 21 February 2018 / Published: 24 February 2018
(This article belongs to the Section Physical Sensors)

Abstract

Frequent checks on livestock's body growth can help reduce problems related to cow infertility or other welfare implications, and support early recognition of health anomalies. In the last ten years, optical methods have been proposed to extract information on various parameters while avoiding direct contact with the animals' body, which generally causes stress. This research aims to evaluate a new monitoring system, suitable for frequently checking the growth of calves and cows through a three-dimensional analysis of portions of their bodies. The innovative system is based on multiple acquisitions from a low-cost Structured Light Depth-Camera (Microsoft Kinect™ v1). The metrological performance of the instrument is proved through an uncertainty analysis and a proper calibration procedure. The paper reports application of the depth camera for the extraction of different body parameters. An expanded uncertainty ranging between 3 and 15 mm is reported in the case of ten repeated measurements. Coefficients of determination R² > 0.84 and deviations lower than 6% from manual measurements were in general detected in the case of head size, hip distance, withers to tail length, chest girth, and hip and withers height. Conversely, lower performances were recognized in the case of animal depth (R² = 0.74) and back slope (R² = 0.12).

1. Introduction

Frequent monitoring of animals' body condition in quantitative terms is helpful for early recognition of health anomalies, and consequently for reducing complications related to infertility, lameness, or other animal diseases [1,2,3,4]. With reference to young calves, the first months of life are critical, since growth may be affected by the appearance of several diseases or other stress factors, such as dehorning [5]. Similarly, in the case of young and adult cows, measuring body condition and growth is important in order to monitor welfare state [6,7,8] and so maintain high productivity levels [9,10].
Over the last fifty years, individual animals' live weight and physical development have typically been measured using traditional or electronic weighing systems [11,12]. However, such techniques are time consuming and involve considerable costs, which are borne by dairy farmers. As a result, visual inspections are often preferred, and biometric analyses are made only in the case of visibly suffering animals, when preventive measures are no longer possible and curative interventions are less suitable and more expensive [13,14].
An alternative approach, which allows for overcoming the limits associated with manual direct measurements, is the introduction of techniques based on optical detection instruments [15,16]. The literature reports several studies proposing vision technologies based on optical devices to automatically detect lameness [17], evaluate dairy cows' back posture [18], recognize abnormal behaviour [19,20], and measure body parameters [21,22,23,24].
In recent years, much research effort has been focused on the use of image analysis for automatic animal weighing or for estimating body parameters, combining one or more two-dimensional (2D) views to obtain the required measurements [25,26,27]. However, 2D images only offer a two-dimensional projection of the animal, and the lack of the third dimension limits applications requiring depth information [28]. Furthermore, two-dimensional sensing is strongly influenced by perspective, distance, and specific wavelengths or applied filters [29], which can introduce relevant distortions in collected data. Attention is therefore nowadays focusing on three-dimensional reconstructions, obtained either from two-dimensional sensors or from truly three-dimensional instruments. In the first case, stereoscopic techniques are typically adopted. Such an approach relies on the simultaneous collection of multiple 2D images taken from different perspectives by means of CMOS or CCD cameras; three-dimensional (3D) models are then reconstructed by applying specific photogrammetry algorithms run by fast processors. Stereoscopic techniques have been successfully used to determine three-dimensional models of pigs [30], rabbits [31], broiler chickens [32,33], and cattle [34]. One successful example is a stereo vision system with six 2D cameras and three flash units used to capture the three-dimensional shapes of pigs [35]. This approach minimizes instrumentation costs, but robust algorithms and accurate calibration procedures are needed.
The second case relies on instruments based on technologies such as triangulation or time of flight, which allow direct reconstruction of 3D point clouds. The advantages associated with such approaches are typically high lateral and vertical resolution and ease of use; however, relatively high investments are often required, while harsh environments (e.g., temperature, dust) can reduce sensor life [36].
The present paper discusses the metrological implementation of a low-cost 3D depth-camera technology for the extraction of quantitative cow body parameters, specifically hip and withers height, back slope, body length, hip distance, head size, and chest girth.
Some works have already reported applications of the Kinect sensor to livestock. Maki et al. reported quantification of body condition scoring after applying the Kinect sensor to cows [37]; McPhee et al. reported results on rump fat and muscle score from low- and high-muscled Angus cattle [38], while Kawasue et al. took advantage of 3D depth information to enhance the collection of thermal images from cows [39]. However, the available literature is lacking with regard to metrological performance: this strongly limits actual application of the method, since it is not clear to what extent Kinect 3D data can support or replace manual measurements, also with reference to different positions on the animal body.

2. Material and Methods

2.1. Microsoft Kinect™ v1 RGB-Depth Camera

In the present work, low-cost depth-sensing camera technology is applied. This is not a new invention, but it has recently achieved wide diffusion, especially in video-game production. As a consequence, the technology has spread quickly, and high-resolution depth cameras can be bought at low cost.
Specifically, a Microsoft Kinect™ v1 RGB-depth camera has been implemented here [40]. Since its launch in 2010, the sensor has been used in many applications, including agriculture [41,42] and the livestock sector [43,44,45,46,47].
Microsoft Kinect™ captures synchronized colour and depth images at a rate of 30 frames per second (fps) using an RGB camera (8-bit VGA resolution, 640 × 480 pixels) aligned with a depth imager. The Kinect depth sensor is composed of an IR laser projector and a monochrome 640 × 480 pixel IR CMOS sensor [48].
Depth measurements are obtained through a triangulation process [49]. The laser source emits a single beam, which is split into multiple beams by a diffraction grating to create a constant pattern of speckles projected onto the scene. This pattern is captured by the infrared camera and correlated to a reference pattern. The reference pattern is obtained by capturing a plane at a known distance from the sensor, and is stored in the memory of the sensor. When a speckle is projected onto an object whose distance to the sensor is smaller or larger than that of the reference plane, the position of the speckle in the infrared image is shifted [50]. These shifts are measured for all of the speckles by a simple image correlation procedure; for each pixel, the distance to the sensor can then be retrieved from the corresponding disparity, which allows reconstruction of the scene depth [51].
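To make the triangulation principle concrete, the following minimal Python sketch converts speckle disparities into depth values. The baseline and focal length used here are indicative values reported in the literature for the Kinect v1 (e.g., [40]), not factory calibration data.

```python
import numpy as np

# Indicative Kinect v1 geometry from the literature, not factory calibration.
BASELINE_M = 0.075        # distance between IR projector and IR camera (m)
FOCAL_LENGTH_PX = 580.0   # IR camera focal length (pixels)

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Convert per-pixel disparity (speckle shift, in pixels) to depth (m)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.nan)
    valid = disparity_px > 0          # non-positive shifts carry no depth
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity_px[valid]
    return depth

# Example: a speckle shifted by 43.5 px corresponds to roughly 1 m of depth.
print(depth_from_disparity(np.array([43.5, 87.0])))  # ~[1.0, 0.5] m
```

Note the inverse relation between disparity and depth: it is what makes the vertical resolution degrade at larger working distances, as quantified in Section 2.3.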
The Kinect sensor is priced at less than €100. A new version, allowing higher performance in terms of maximum permissible environmental light, was released in 2014, though with a slight loss of resolution. Due to its recognized higher vertical resolution, the Kinect v1 was preferred for the present work.

2.2. Sensor Positioning

The body of cattle, and of cows in particular, has a complex three-dimensional shape, generally featuring hidden parts and overlapping body portions. Because of the cows' frequent movements, measurement is very difficult, particularly with regard to the head, legs, and tail of the animal.
Therefore, data collection from multiple positions with respect to the animal's body can contribute to a more complete and comprehensive analysis. As a result of an optimization phase, Figure 1 shows a reasonable compromise between the number of measurements and the completeness of the cow's body point cloud. Four distinct scan positions are considered: two to the side (one on the left and one on the right), one on the top, and one in front of the animal.
It has to be noted that acquisitions cannot be made simultaneously, because the pattern projected by one instrument could be misinterpreted by the detector of another sensor. However, this is not limiting, since the Kinect can work at a frequency of over 10 Hz: therefore, if needed, by opportunely positioning four Kinect instruments and synchronizing scan operations, a full measurement cycle could in principle be completed in less than 0.5 s.
For the present research, which was aimed at defining the potential of a low-cost Structured Light Depth-Camera for frequent monitoring of calves and cows, the four sets of scans were obtained using one Kinect device fixed on a tripod. Tests were carried out in a barn, taking advantage of the posture of animals standing in front of the feeding alley. The Kinect instrument and the tripod were manually repositioned by the operator in the four reference positions around the standing animal. The sensor was kept at an average distance ranging between 0.4 and 2 m from the animal (typically 1 m), with the sensor axis roughly perpendicular to the captured animal portion. The measurement cycle typically took between 10 and 15 s, during which the animal was minimally disturbed by the operator, who moved at a minimum distance of about 1.5–3 m. As a consequence, the movements of the animal in such a time interval were in general limited and negligibly affected the estimation of the different body parameters. Indeed, the small relative movements of the animal and the lack of repeatability in sensor positioning and camera perspectives did not in general influence results, thanks to the possibility of relocating collected point clouds and compensating misalignments during the post-processing phase [52].
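As an illustration of the relocation step, the sketch below applies a rigid roto-translation to bring a side-view point cloud into a common reference frame. This is a minimal numpy example, not the software relocation routine of [52]; the rotation angle and translation offsets are hypothetical.

```python
import numpy as np

def relocate(points: np.ndarray, yaw_deg: float,
             translation: np.ndarray) -> np.ndarray:
    """Rotate an (N, 3) point cloud about the vertical axis and translate it."""
    a = np.radians(yaw_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return points @ R.T + translation   # row vectors: p' = R p + t

# Example: bring a right-side scan back into the frame of the top scan,
# with a hypothetical -90 deg yaw and a 1 m lateral offset.
side_scan = np.random.rand(1000, 3)
aligned = relocate(side_scan, yaw_deg=-90.0,
                   translation=np.array([0.0, 1.0, 0.0]))
```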

2.3. Kinect Performance Verification

Microsoft Kinect™ is raising the interest of researchers involved in 3D reconstruction. However, it has been proven that the sensor's output is not linear, and that variability arises due to the specific instrument setup. For a quantitative application of the sensor, random and systematic errors have to be properly quantified and possibly compensated: specific calibration and performance analyses are therefore necessary, eventually allowing the definition of an uncertainty budget [53,54].
For this purpose, a substitution calibration approach was implemented through the measurement of a set of five polystyrene reference surfaces featuring hemispherical geometries, with nominal radii of 50, 100, 150, 200, and 250 mm, respectively [54]. A reference flat plane, with an average root mean square roughness lower than 0.3 mm and a deviation from flatness lower than 1 mm, was additionally considered in order to allow estimation of the background noise (BGN), quantified as the root mean square roughness of the detected vertical signal.
The calibration procedure of the Microsoft Kinect™ v1 was repeated at different working distances, defined on the basis of actual animal measurements, in the range 400–2000 mm. Lateral resolutions ranging between 0.6 and 2 mm were found, with a slight decrease of performance with distance. Similarly, the vertical resolution tends to decrease at higher distances, with a typical value of 4 mm at 2 m from the target surface. Background noise also increases due to the loss of vertical resolution, with BGN ranging between 1.1 and 5.5 mm. A summary of the main sensor performance parameters, averaged over 10 measurements repeated at three different distances, is reported in Table 1. Collected data were corrected through a correction model, as discussed in [55].
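The BGN estimation on the reference flat plane can be sketched as follows: fit a least-squares plane to the measured points and take the root mean square of the vertical residuals. This is an illustrative implementation under those stated assumptions, with synthetic data standing in for real scans; units are assumed in mm.

```python
import numpy as np

def background_noise(x: np.ndarray, y: np.ndarray, z: np.ndarray) -> float:
    """RMS of residuals from the best-fitting plane z = ax + by + c (mm)."""
    A = np.column_stack([x, y, np.ones_like(x)])   # plane design matrix
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None) # least-squares plane fit
    residuals = z - A @ coeffs                     # vertical deviations
    return float(np.sqrt(np.mean(residuals ** 2))) # RMS roughness = BGN

# Synthetic check: a tilted plane plus 3 mm-scale sensor noise.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 500, 5000), rng.uniform(0, 500, 5000)
z = 0.01 * x - 0.02 * y + 1000 + rng.normal(0, 3.0, 5000)
print(round(background_noise(x, y, z), 2))  # close to 3.0 mm
```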

2.4. Measurement Uncertainty

The uncertainty budget was estimated both for the manual and non-contact analyses [56]. In the case of manual measurements, uncertainty is ascribable to the following sources: meter resolution and calibration, Abbe error (due to the misalignment between the meter scale and the cow body), definition of critical measuring points (e.g., hip positions, hooves to floor interface, etc.), and cow posture at the measuring time.
In the case of non-contact sensing, main uncertainty sources are again: sensor resolution and calibration, Abbe error (due to the installation slope of the sensor relatively to the animal surface), algorithm uncertainty on definition of reference points, and cows’ postures during the measuring session.
In order to allow for the estimation of uncertainty related to manual measurement, the manual meter underwent a calibration test on a reference standard (a metal rod with a calibrated 1 m length). Additionally, five different measurements were repeated by three different operators on three different animals (not included in the subsequent weight analysis) using both the manual meter and the Kinect sensor in order to estimate the reference points localization uncertainty contribution. Main results are reported in Table 2.
Expanded uncertainties for the different manual and Kinect measurements related to hip and withers height, back slope, hip distance, head size, and chest girth have been computed considering uncorrelated input quantities, according to Equation (1):
$U = k \sqrt{\sum_i (t \cdot s_i)^2}$ (1)
where sᵢ represents the different uncertainty contributions reported in Table 2, t is the conversion coefficient to standard deviation, depending on the specific distribution type as reported in the last column of Table 2, and k = 1.645 is defined on the basis of a 90% level of confidence. In general, multiple non-contact measurements feature lower expanded uncertainties (U = 3–15 mm) than manual measurements (U = 7–40 mm), thus increasing confidence and reducing the uncertainty of the measured parameters on average by 30–50%.
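A minimal sketch of the combination in Equation (1) follows: each contribution sᵢ is scaled to a standard deviation by the factor t of its distribution, combined in quadrature, and expanded with k = 1.645. The numeric values in the example are illustrative mid-range picks, not the full budget of Table 2.

```python
import math

# Conversion factors t from half-width to standard deviation.
T_FACTORS = {"rectangular": 1 / math.sqrt(3),
             "triangular": 1 / math.sqrt(6),
             "normal": 1.0}

def expanded_uncertainty(contributions, k=1.645):
    """contributions: iterable of (s_i in mm, distribution name) pairs."""
    variance = sum((T_FACTORS[dist] * s) ** 2 for s, dist in contributions)
    return k * math.sqrt(variance)   # Equation (1), 90% confidence for k=1.645

# Illustrative 10-measurement Kinect budget (mid-range values, hypothetical).
budget = [(0.5, "rectangular"),   # lateral resolution
          (1.0, "rectangular"),   # vertical resolution
          (0.8, "triangular"),    # background noise
          (4.0, "normal")]        # reference points localization
print(round(expanded_uncertainty(budget), 1), "mm")  # ~6.7 mm
```

The localization of reference points dominates the budget, which is consistent with averaging over repeated measurements being the most effective way to shrink U.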

2.5. Data Acquisition

The system was implemented in order to allow fast extraction and monitoring of different body parameters of 20 animals (Table 3), in particular hip and withers height, back slope, depth, hip distance, head size, and chest girth (Figure 2). Experiments were carried out in a free-stall barn, with an average light intensity of 95 lx measured in the proximity of the animals using an illuminance meter (Konica Minolta T10; Konica Minolta, Inc., Osaka, Japan).
For comparison, the same cows underwent a series of manual measurements to evaluate the same body parameters. Specifically, a metric tape was used, and data were collected by positioning the tape in contact with, or in close proximity of, the relevant positions of the animal (hips, withers, head, etc.) and reading values by naked eye. Such an approach required a reduced distance (lower than 0.5 m) between the animal and the operator, increasing the nervousness of the cow. Chest girth was measured by positioning the tape around the chest, immediately behind the front legs. In this case, it should be noted that for four adult cows it was not possible to perform the manual chest measurement, due to the frequent cow movements and to the risk of injury that the measurement involved for the operator. Additionally, both in the case of young calves (age < 8 months) and adult cows (age > 8 months), the manual measurements caused stress to the animals; therefore, several attempts had to be repeated to get close to the animal and collect a valid measurement (i.e., with the metric tape firmly positioned on the animal and readable numbers in the line of sight).
On the other hand, several tens of 3D data sets could be collected with the Microsoft Kinect™ v1 sensor in a few seconds. Specifically, 40 measurements were on average available for each animal and used for parameter extraction.

2.6. Data Processing

3D data were post-processed by means of commercially available software (SPIP™), undergoing three main blocks of operations, as depicted in Figure 3: (i) data correction; (ii) image filtering; and (iii) relevant parameters extraction.
In the first phase, a calibration was carried out in order to correct the data sets and allow subsequent extraction of quantitative values. Calibration took advantage of the calibration coefficients calculated through the reference hemispherical geometries, applying a correction model [55]. A best-fitting plane was also defined on the region of interest and used to calculate the average slope, allowing rotation and easy relocation of the 3D data sets. In the second phase, the flattened surfaces were segmented in order to isolate body portions, and were subsequently processed through first and second derivative functions in order to produce gradient and curvature maps. In the third step, relevant positions were isolated by applying thresholds or pass/non-pass filters. In particular, hip, withers, and tail head positions were identified as local maxima (null gradients with negative curvatures); the chest girth position was recognized as the thinnest body flank section (null gradient with positive curvature). Each body parameter was finally computed as the average of the values extracted from ten repeated 3D images captured on the same body portion.
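The gradient/curvature criterion of the third step can be sketched as below on a one-dimensional height profile: landmarks are taken where the first derivative is near null, and the sign of the second derivative separates peaks (hips, withers, tail head) from valleys (thinnest flank section). This is an illustrative reimplementation with numpy, not the SPIP™ routine; the tolerance and the synthetic profile are assumptions.

```python
import numpy as np

def find_landmarks(z: np.ndarray, grad_tol: float = 0.01):
    """Indices where the gradient is near null, split by curvature sign."""
    grad = np.gradient(z)        # first derivative -> slope
    curv = np.gradient(grad)     # second derivative -> curvature
    flat = np.abs(grad) < grad_tol
    peaks = np.flatnonzero(flat & (curv < 0))    # null gradient, negative curvature
    valleys = np.flatnonzero(flat & (curv > 0))  # null gradient, positive curvature
    return peaks, valleys

# Example on a synthetic back profile with two humps (standing in for the
# hips and withers); real data would be a filtered depth-map section.
x = np.linspace(0, 2 * np.pi, 200)
profile = np.sin(2 * x)
peaks, valleys = find_landmarks(profile)  # index clusters near the extrema
```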
The metrological quality of the data extracted in the third step depends on the success of the first two steps. With regard to calibration, details have already been given in the previous paragraphs; the thresholding operation is probably the most challenging one, significantly affecting final data extraction. Thresholding operations were made on the basis of reference points, which were defined depending on the specific body portion.

2.7. Body Parameters

In the case of the hips, withers, and tail head, the reference position was obtained through a gradient operation, applied for automatic detection of the relative maximum, which was then used in the calculation of hip distance, height, head size, body length, depth, and back slope (Figure 2).
The reference hoof-floor interface was determined on the basis of the highest slope variation of the hoof, after application of a second derivative function. Such a position was eventually applied for the estimation of the reference front height (at the withers level) and back height (at the hips level). The back slope was estimated as the arctangent of the ratio between the front-to-back height difference and the length of the animal from the hips to the withers. The depth was estimated as the minimum distance between the chest and the floor, calculated as the difference between the average height and the vertical projection of the girth.
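The two geometric definitions above translate into a short worked example. All numeric values here are hypothetical heights in mm, chosen only to illustrate the arithmetic, not data from the study.

```python
import math

front_height = 1320.0     # height at the withers (mm), hypothetical
back_height = 1350.0      # height at the hips (mm), hypothetical
hips_to_withers = 1100.0  # hips-to-withers length (mm), hypothetical
chest_projection = 640.0  # vertical projection of the girth (mm), hypothetical

# Back slope: arctangent of the front/back height difference over the length.
back_slope_deg = math.degrees(
    math.atan2(front_height - back_height, hips_to_withers))

# Depth: average height minus the vertical projection of the girth.
depth = (front_height + back_height) / 2 - chest_projection

print(f"back slope: {back_slope_deg:.1f} deg, depth: {depth:.0f} mm")
```

Because both quantities combine several measured values, their uncertainties propagate and accumulate, which anticipates the lower R² observed for back slope and depth in Section 3.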
With reference to the head, since the sensor was positioned parallel to the cow's head, the Kinect's maximum detection slope was usefully exploited for the definition of the reference points. Since at the given working distance the applied sensor features a maximum detectable slope of 45°, positions at higher angles are automatically set as void. Therefore, the border of the collected surface lies at the 45° slope position: reference points were consequently defined in the middle of such a border, as the tangential point of a plane having a relative slope of 45° with respect to the mean plane of the cow's face.
Data for the different parameters arising from manual and non-contact measurements were compared by means of linear regression. The aim of the analysis was not only to understand how closely Kinect values can approximate manually captured ones, but also to highlight the most critical parameters. Analyses were carried out taking advantage of Microsoft Excel© statistical tools.
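For reproducibility, the same comparison can be sketched in Python (the study itself used Excel's statistical tools): regress Kinect values against manual ones and report the slope m and the coefficient of determination R². The data below are invented placeholder values.

```python
import numpy as np
from scipy import stats

# Invented manual vs. Kinect values for one body parameter (mm).
manual = np.array([452.0, 480.0, 515.0, 538.0, 560.0, 601.0])
kinect = np.array([449.0, 486.0, 510.0, 545.0, 552.0, 608.0])

fit = stats.linregress(manual, kinect)   # least-squares linear regression
print(f"m = {fit.slope:.3f}, R^2 = {fit.rvalue ** 2:.3f}")
# A slope close to 1 and a high R^2 indicate good agreement between methods.
```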

3. Results and Discussion

The main results of the comparison between measurements carried out manually and measurements performed using the 3D optical sensor are reported in the graphs of Figure 4. Values are drawn with error bars whose size corresponds, case by case, to the expanded uncertainty calculated with Equation (1). The graphs report the linear regression analysis, together with the best-fitting line equations and coefficients of determination, to clarify the relations between contact and non-contact data.
In particular, with reference to hip distance, results obtained by the three-dimensional analysis are quite similar to the manual measurements, as shown by the linear regression coefficient close to one (m = 0.969) and the coefficient of determination R² = 0.983. As highlighted by the graph (Figure 4A), a good agreement can be predicted, especially for cows, whose hip bones are clearly more prominent. Conversely, in the case of young cattle, the deviations are larger but still within uncertainty limits, mainly due to the difficulties in identifying the exact hip positions: indeed, whenever the bone structures are not clearly visible, the localization of anatomical markers can be ambiguous, especially in the case of manual measurements. As reported by Song et al. [57], the consistency of locating these anatomical markers cannot be high, due to the unclearness of the bone structure in young cattle.
A similar trend is highlighted by the body length analysis, where body length is estimated as the distance between the withers and the tail head. In fact, an acceptable agreement with the manual method is shown by a linear regression coefficient equal to one and a coefficient of determination R² = 0.970 (Figure 4B).
Another parameter that is fundamental for the determination and assessment of growth and body weight, respectively, is the height, in particular at the hips and the withers. In this respect, the performed measurements have shown some limits, mainly due to a shadowing phenomenon on the ground caused by the cow's body itself. As a consequence, in the case of images captured from the top of the animal downward, relatively large portions of the floor are not reconstructed, complicating height quantification. It also has to be pointed out that the Kinect sensor exhibits a non-linearity that introduces a distortion proportional to the measured length, as reported in Table 2 and discussed in some papers [58,59]: such non-linearity influences results, especially when performing measurements at large depths. The graph reported in Figure 4C shows how higher residuals cause a drop of the coefficient of determination down to R² = 0.842 for the average height. In order to improve this performance, it can be advisable to use lateral scan approaches rather than measurements from above, thus taking advantage of the higher lateral measurement resolution and linearity compared with the vertical ones.
Slope measurement (Figure 4D) is seldom used for body growth monitoring; however, it may represent an interesting indicator for monitoring animal health problems, such as lameness or other hoof or leg disorders [60,61,62]. This parameter shows an opposite trend in terms of the coefficient of determination (R² = 0.122) compared with hip distance, body length, and height, due to uncertainty propagation.
The chest girth can be measured by combining the top and lateral measurements. An acceptable agreement with the manual method is shown by a linear regression coefficient close to one and a coefficient of determination R² = 0.984 (Figure 4E).
The depth parameter can also be determined as the difference between the height value and the chest projection. During data validation, the coefficient of determination was found to be R² = 0.74, due to uncertainty propagation (Figure 4F).
Head measurement is less used for body growth monitoring purposes; however, it is easily accessible, and this parameter should therefore be taken into account in model developments. Head length was assessed both through manual analysis and the 3D sensor. During data validation, the coefficient of determination was found to be R² > 0.96 (Figure 4G).

4. Conclusions

In the present paper, the Microsoft Kinect™ v1 sensor has been proposed as a low-cost instrument for the three-dimensional analysis of the cow body, as well as for the non-invasive extraction of quantitative parameters.
Hip distance, body length and height, chest girth, depth, back slope, and head size were evaluated on 20 animals and compared with the same measurements carried out manually.
The experimental tests have shown relatively high coefficients of determination, in particular for parameters arising from three-dimensional measurements collected on low-depth surfaces, as in the case of hip distance, withers-to-tail-head length, or head size. Slightly lower accuracy is achieved when analyses are carried out on high-aspect-ratio body shapes or on fully three-dimensional shapes, as highlighted by the average height or chest girth data comparisons. The lowest performances are observed when parameter values are estimated as a combination of multiple parameters, due to uncertainty propagation: this has been shown in the cases of the back slope and depth experiments.
The method clearly needs pre-industrialization engineering activity to allow automatic data collection and extraction. Additionally, studies are needed in livestock environments to understand the effects of prolonged dust exposure on the Kinect infrared projector and sensor, and of high vibration levels due, for instance, to the frequent passage of tractors or other vehicles in the proximity of the device installation. However, considering that dairy farms are generally increasing their stock densities, the possibility of replacing manual with non-contact measurements is of great interest, in order to allow optimization of herd management based on the monitoring of individual cows' health, welfare, and growth data, with no induced stress on the animals.

Acknowledgments

The authors gratefully acknowledge the support provided by Giovanni Rotta, for execution of manual measurements on animals.

Author Contributions

Andrea Pezzuolo, Francesco Marinello and Luigi Sartori conceived and designed the experiments. Andrea Pezzuolo, Francesco Marinello, Marcella Guarino and Luigi Sartori analysed the data. Andrea Pezzuolo, Francesco Marinello and Luigi Sartori contributed reagents/materials/analysis tools. Andrea Pezzuolo, Francesco Marinello, Marcella Guarino and Luigi Sartori wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Van Hertem, T.; Bahr, C.; Schlageter Tello, A.; Viazzi, S.; Steensels, M.; Romanini, C.; Lokhorst, C.; Maltz, E.; Halachmi, I.; Berckmans, D. Lameness detection in dairy cattle: Single predictor v. multivariate analysis of image-based posture processing and behaviour and performance sensing. Animal 2016, 10, 1525–1532. [Google Scholar] [CrossRef] [PubMed]
  2. Tsai, D.M.; Huang, C.Y. A motion and image analysis method for automatic detection of estrus and mating behaviour in cattle. Comput. Electron. Agric. 2014, 104, 25–31. [Google Scholar] [CrossRef]
  3. Salau, J.; Haas, J.H.; Junge, W.; Thaller, G. Automated calculation of udder depth and rear leg angle in Holstein-Friesian cows using a multi -Kinect cow scanning system. Biosyst. Eng. 2017, 160, 154–169. [Google Scholar] [CrossRef]
  4. Norton, T.; Berckmans, D. Developing precision livestock farming tools for precision dairy farming. Anim. Front. 2017, 7, 18–23. [Google Scholar] [CrossRef]
  5. Windeyer, M.C.; Leslie, K.E.; Godden, S.M.; Hodgins, D.C.; Lissemore, K.D.; LeBlanc, S.J. Factors associated with morbidity, mortality, and growth of dairy heifer calves up to 3 months of age. Prev. Vet. Med. 2014, 113, 231–240. [Google Scholar] [CrossRef] [PubMed]
  6. Nilsson, M.; Herlin, A.; Ardö, H.; Guzhva, O.; Åström, K.; Bergsten, C. Development of automatic surveillance of animal behaviour and welfare using image analysis and machine learned segmentation technique. Animal 2015, 9, 1859–1865. [Google Scholar] [CrossRef] [PubMed]
  7. Richard, M.M.; Sloth, K.H.; Veissier, I. Real time positioning to detect early signs of welfare problems in cows. In Proceedings of the 7th European Conference on Precision Livestock Farming, Milan, Italy, 15–18 September 2015. [Google Scholar]
  8. Veissier, I.; Mialon, M.M.; Sloth, K.H. Early modification of the circadian organization of cow activity in relation to disease or estrus. J. Dairy Sci. 2017, 100, 3969–3974. [Google Scholar] [CrossRef] [PubMed]
  9. Azzaro, G.; Caccamo, M.; Ferguson, J.D.; Battiato, S.; Farinella, G.M.; Guarnera, G.C.; Puglisi, G.; Petriglieri, R.; Licitra, G. Objective estimation of body condition score by modeling cow body shape from digital images. J. Dairy Sci. 2011, 94, 2126–2137. [Google Scholar] [CrossRef] [PubMed]
  10. Halachmi, I.; Klopcic, M.; Polak, P.; Roberts, D.J.; Bewley, J.M. Automatic assessment of dairy cattle body condition score using thermal imaging. Comput. Electron. Agric. 2013, 99, 35–40. [Google Scholar] [CrossRef]
  11. Fontana, I.; Tullo, E.; Butterworth, A.; Guarino, M. An innovative approach to predict the growth in intensive poultry farming. Comput. Electron. Agric. 2015, 119, 178–183. [Google Scholar] [CrossRef]
  12. Pezzuolo, A.; Gonzàlez, L.A.; Giora, D.; Sartori, L.; Cillis, D.; Marinello, F. Body measurements of dairy cows using a structure from motion (SfM) photogrammetry approach. In Proceedings of the 8th European Conference on Precision Livestock Farming, Nantes, France, 12–14 September 2017. [Google Scholar]
  13. Grandin, T. Assessment of Stress during Handling and Transport. J. Anim. Sci. 1997, 75, 249–257. [Google Scholar] [CrossRef] [PubMed]
  14. Porto, S.M.C.; Arcidiacono, C.; Anguzza, U.; Cascone, G. The automatic detection of dairy cow feeding and standing behaviours in free-stall barns by a computer vision-based system. Biosyst. Eng. 2015, 133, 46–55. [Google Scholar] [CrossRef]
  15. Spoliansky, R.; Edan, Y.; Parmet, Y.; Halachmi, I. Development of automatic body condition scoring using a low-cost 3-dimensional Kinect camera. J. Dairy Sci. 2016, 99, 7714–7725. [Google Scholar] [CrossRef] [PubMed]
  16. Halachmi, I.; Guarino, M. Editorial: Precision livestock farming: A ‘per animal’ approach using advanced monitoring technologies. Animal 2016, 10, 1482–1483. [Google Scholar] [CrossRef] [PubMed]
  17. Van Hertem, T.; Viazzi, S.; Steensels, M.; Maltz, E.; Antler, A.; Alchanatis, V.; Schlageter-Tello, A.; Lokhorst, C.; Romanini, C.E.B.; Bahr, C.; et al. Automatic lameness detection based on consecutive 3D-video recordings. Biosyst. Eng. 2014, 119, 108–116. [Google Scholar] [CrossRef]
  18. Viazzi, S.; Bahr, C.; Schlageter-Tello, A.; Van Hertem, T.; Romanini, C.E.B.; Pluk, A.; Berckmans, D. Analysis of individual classification of lameness using automatic measurement of back posture in dairy cattle. J. Dairy Sci. 2013, 96, 257–266. [Google Scholar] [CrossRef] [PubMed]
  19. Oczak, M.; Viazzi, S.; Ismayilova, G.; Sonoda, L.T.; Roulston, N.; Fels, M.; Bahr, C.; Hartung, J.; Guarino, M.; Berckmans, D. Classification of aggressive behaviour in pigs by activity index and multilayer feed forward neural network. Biosyst. Eng. 2014, 119, 89–97. [Google Scholar] [CrossRef]
  20. Lee, J.; Jin, L.; Park, D.; Chung, Y. Automatic recognition of aggressive behavior in pigs using a Kinect depth sensor. Sensors 2016, 16, 631. [Google Scholar] [CrossRef] [PubMed]
  21. Wang, Y.; Yang, W.; Winter, P.; Walker, L. Walk-through weighing of pigs using machine vision and an artificial neural network. Biosyst. Eng. 2008, 100, 117–125. [Google Scholar] [CrossRef]
  22. Weber, W.; Salau, J.; Haas, J.H.; Junge, W.; Bauer, U.; Harms, J.; Suhr, O.; Schönrock, K.; Rothfuß, H.; Bieletzki, S.; et al. Estimation of backfat thickness using extracted traits from an automatic 3D optical system in lactating Holstein–Friesian cows. Livest. Sci. 2014, 165, 129–137. [Google Scholar] [CrossRef]
  23. Hoffmann, G.; Schmidt, M.; Ammon, C. First investigations to refine video-based infrared thermography as a non-invasive tool to monitor the body temperature of calves. Animal 2016, 10, 1542–1546. [Google Scholar] [CrossRef] [PubMed]
  24. Da Borso, F.; Chiumenti, A.; Sigura, M.; Pezzuolo, A. Influence of automatic feeding systems on design and management of dairy farms. J. Agric. Eng. 2017, 48, 48–52. [Google Scholar] [CrossRef]
  25. Marchant, J.A.; Schofield, C.P. Extending the snake image processing algorithm for outlining pigs in scenes. Comput. Electron. Agric. 1993, 8, 261–275. [Google Scholar] [CrossRef]
  26. Brandl, N.; Jorgensen, E. Determination of live weight of pigs from dimensions measured using image analysis. Comput. Electron. Agric. 1996, 15, 57–72. [Google Scholar] [CrossRef]
  27. Schofield, C.P.; Marchant, J.A.; White, R.P.; Brandl, N.; Wilson, M. Monitoring pig growth using a prototype imaging system. J. Agric. Eng. Res. 1999, 72, 205–210. [Google Scholar] [CrossRef]
  28. Stajnko, D.; Brus, M.; Hočevar, M. Estimation of bull live weight through thermographically measured body dimensions. Comput. Electron. Agric. 2008, 61, 233–240. [Google Scholar] [CrossRef]
  29. Dubbini, M.; Pezzuolo, A.; De Giglio, M.; Gattelli, M.; Curzio, L.; Covi, D.; Yezekyan, T.; Marinello, F. Last generation instrument for agriculture multispectral data collection. CIGR J. 2017, 19, 158–163. [Google Scholar]
  30. Pastorelli, G.; Musella, M.; Zaninelli, M.; Tangorra, F.; Corino, C. Static spatial requirements of growing-finishing and heavy pigs. Livest. Sci. 2006, 105, 260–264. [Google Scholar] [CrossRef]
  31. Negretti, P.; Bianconi, G.; Finzi, A. Visual image analysis to estimate morphological and weight measurements in rabbits. World Rabbit Sci. 2007, 15, 37–41. [Google Scholar] [CrossRef]
  32. Wet, L.D.; Vranken, E.; Chedad, A.; Aerts, J.M.; Ceunen, J.; Berckmans, D. Computer-assisted image analysis to quantify daily growth rates of broiler chickens. Br. Poult. Sci. 2003, 44, 524–532. [Google Scholar] [CrossRef] [PubMed]
  33. Mollah, M.B.; Hasan, M.A.; Salam, M.A.; Ali, M.A. Digital image analysis to estimate the live weight of broiler. Comput. Electron. Agric. 2010, 72, 48–52. [Google Scholar] [CrossRef]
  34. Ozkaya, S.; Yalcin, B. The relationship of parameters of body measures and body weight by using digital image analysis in pre-slaughter cattle. Arch. Tierz. Dummerstorf 2008, 51, 120–128. [Google Scholar] [CrossRef]
  35. Wu, J.; Tillett, R.; McFarlane, N.; Ju, X.; Siebert, J.P.; Schofield, P. Extracting the three-dimensional shape of live pigs using stereo photogrammetry. Comput. Electron. Agric. 2004, 44, 203–222. [Google Scholar] [CrossRef]
  36. Vázquez-Arellano, M.; Griepentrog, H.W.; Reiser, D.; Paraforos, D.S. 3-D Imaging Systems for Agricultural Applications–A Review. Sensors 2016, 6, 618. [Google Scholar]
  37. Maki, N.; Nakamura, S.; Takano, S.; Okada, Y. 3D Model Generation of Cattle Using Multiple Depth-Maps for ICT Agriculture. In Proceedings of the Complex, Intelligent, and Software Intensive Systems 2017, Torino, Italy, 10–13 July 2017; pp. 768–777. [Google Scholar]
  38. McPhee, M.J.; Walmsley, B.J.; Skinner, B.; Littler, B.; Siddell, J.P.; Cafe, L.M.; Alempijevic, A. Live animal assessments of rump fat and muscle score in Angus cows and steers using 3-dimensional imaging. J. Anim. Sci. 2017, 95, 1847–1857. [Google Scholar] [CrossRef] [PubMed]
  39. Kawasue, K.; Win, K.D.; Yoshida, K.; Tokunaga, T. Black cattle body shape and temperature measurement using thermography and KINECT sensor. Artif. Life Robot. 2017, 22, 464–470. [Google Scholar] [CrossRef]
  40. Khoshelham, K.; Elberink, S.O. Accuracy and resolution of kinect depth data for indoor mapping applications. Sensors 2012, 12, 1437–1454. [Google Scholar] [CrossRef] [PubMed]
  41. Andújar, D.; Dorado, J.; Fernández-Quintanilla, C.; Ribeiro, A. An approach to the use of depth cameras for weed volume estimation. Sensors 2016, 16, 972. [Google Scholar]
  42. Marinello, F.; Proto, A.R.; Zimbalatti, G.; Pezzuolo, A.; Cavalli, R.; Grigolato, S. Determination of forest road surface roughness by Kinect depth imaging. Ann. For Res. 2017, 60, 1–10. [Google Scholar] [CrossRef]
  43. Kongsro, J. Estimation of pig weight using a Microsoft Kinect prototype imaging system. Comput. Electron. Agric. 2014, 109, 32–35. [Google Scholar] [CrossRef]
  44. Kuzuhara, Y.; Kawamura, K.; Yoshitoshi, R.; Tamaki, T.; Sugai, S.; Ikegami, M.; Yasuda, T. A preliminarily study for predicting body weight and milk properties in lactating Holstein cows using a three-dimensional camera system. Comput. Electron. Agric. 2015, 111, 186–193. [Google Scholar] [CrossRef]
  45. Kim, J.; Chung, Y.; Choi, Y.; Sa, J.; Kim, H.; Chung, Y.; Park, D.; Kim, H. Depth-Based Detection of Standing-Pigs in Moving Noise Environments. Sensors 2017, 17, 2757. [Google Scholar] [CrossRef] [PubMed]
  46. Salau, J.; Haas, J.H.; Junge, W.; Thaller, G. A multi-Kinect cow scanning system: Calculating linear traits from manually marked recordings of Holstein-Friesian dairy cows. Biosyst. Eng. 2017, 157, 92–98. [Google Scholar] [CrossRef]
  47. Pezzuolo, A.; Guarino, M.; Sartori, L.; González, L.A.; Marinello, F. On-barn pig weight estimation based on body measurement by means of a Kinect v1 sensor. Comput. Electron. Agric. 2018, in press. [Google Scholar]
  48. Azzari, G.; Goulden, M.L.; Rusu, R.B. Rapid characterization of vegetation structure with a Microsoft Kinect sensor. Sensors 2013, 13, 2384–2398. [Google Scholar] [CrossRef] [PubMed]
  49. Zhang, Z. Microsoft kinect sensor and its effect. IEEE Multimedia 2012, 19, 4–10. [Google Scholar] [CrossRef] [Green Version]
  50. Nissimov, S.; Goldberger, J.; Alchanatis, V. Obstacle detection in a greenhouse environment using the Kinect sensor. Comput. Electron. Agric. 2015, 113, 104–115. [Google Scholar] [CrossRef]
  51. Han, J.; Shao, L.; Xu, D.; Shotton, J. Enhanced computer vision with Microsoft Kinect sensor: A review. IEEE Trans. Cybern. 2013, 43, 1318–1334. [Google Scholar] [PubMed]
  52. Condeço, J.; Christensen, L.H.; Rosén, B.G. Software relocation of 3D surface topography measurements. Int. J. Mach. Tools Manuf. 2001, 41, 2095–2101. [Google Scholar] [CrossRef]
  53. Joint Committee for Guides in Metrology. Guide to the Expression for Uncertainty in Measurement; International Organisation for Standardization (ISO): Genève, Switzerland, 1995. [Google Scholar]
  54. Savio, E.; De Chiffre, L.; Schmitt, R. Metrology of freeform shaped parts. CIRP Ann. Manuf. Technol. 2007, 56, 810–835. [Google Scholar] [CrossRef]
  55. Marinello, F.; Bariani, P.; Carmignato, S.; Savio, E. Geometrical modelling of scanning probe microscopes and characterization of errors. Meas. Sci. Technol. 2009, 20, 084013. [Google Scholar] [CrossRef]
  56. Joint Committee for Guides in Metrology. Guide to the Expression for Uncertainty in Measurement; International Organisation for Standardization (ISO): Genève, Switzerland, 2008. [Google Scholar]
  57. Song, X.; Schutte, J.J.W.; Van der Tol, P.P.J.; Van Halsema, F.E.D.; Groot Koer-kamp, P.W.G. Body measurements of dairy calf using a 3-D camera in an automatic feeding system. In Proceedings of the AgEng 2014: International Conference of Agricultural Engineering, Zurich, Switzerland, 6–10 July 2014. [Google Scholar]
  58. Marinello, F.; Pezzuolo, A.; Gasparini, F.; Arvidsson, J.; Sartori, L. Application of the Kinect sensor for dynamic soil surface characterization. Precis. Agric. 2015, 5, 1–12. [Google Scholar] [CrossRef]
  59. Andersen, M.R.; Jensen, T.; Lisouski, P.; Mortensen, A.K.; Hansen, M.K.; Gregersen, T.; Ahrendt, P. Kinect Depth Sensor Evaluation for Computer Vision Applications; Electrical and Computer Engineering Technical Report ECE-TR-6; Aarhus University: Aarhus, Denmark, 2012; pp. 1–37. [Google Scholar]
  60. Poursaberi, A.; Bahr, C.; Pluk, A.; Van Nuffel, A.; Berckmans, D. Real-time automatic lameness detection based on back posture extraction in dairy cattle: Shape analysis of cow with image processing techniques. Comput. Electron. Agric. 2010, 74, 110–119. [Google Scholar] [CrossRef]
  61. Pluk, A.; Bahr, C.; Poursaberi, A.; Maertens, W.; Van Nuffel, A.; Berckmans, D. Automatic measurement of touch and release angles of the fetlock joint for lameness detection in dairy cattle using vision techniques. J. Dairy Sci. 2012, 95, 1738–1748. [Google Scholar] [CrossRef] [PubMed]
  62. Van Nuffel, A.; Zwertvaegher, I.; Van Weyenberg, S.; Pastell, M.; Thorup, V.M.; Bahr, C.; Sonck, B.; Saeys, W. Lameness detection in dairy cows: Part 2. Use of sensors to automatically register changes in locomotion or behaviour. Animals 2015, 5, 861–885. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Perspective view of the animal, with relative positions of the three-dimensional (3D) optical sensors.
Figure 2. The system was implemented in order to allow for a fast extraction and monitoring of different body parameters, in particular hip and withers height, back slope, depth, hip distance, head size, and chest girth. Relevant point positions are indicated by red circles.
Figure 3. In the first row, a flow chart representation of the applied methodology. In the bottom series of images, three examples of data extraction, respectively for the head size, the hip distance, and the chest girth, as indicated by green arrows.
Figure 4. Assessment between different cow body parameters: (A) hip distance, (B) body length, (C) average height, (D) slope, (E) chest girth, (F) depth, and (G) head length. In the axis labels, “Manual” and “Kinect” refer, respectively, to manual and Kinect measurement.
Table 1. Sensor scanning performance.

Distance (mm) | Lateral Range (mm) | Lateral Resolution (mm) | Vertical Resolution (mm) | BGN (mm)
400           | 415 × 310          | 0.65 × 0.65             | 0.7                      | 1.1
1000          | 730 × 550          | 1.14 × 1.14             | 2                        | 3.2
2000          | 1250 × 950         | 1.95 × 1.95             | 4                        | 5.5
Table 2. Uncertainty sources.

Uncertainty Source                    | Manual Meter (1 Measure) | Kinect Sensor (1 Measure) | Kinect Sensor (10 Measures) | Distribution
Lateral Resolution (mm)               | 1                        | 0.65–2.0                  | 0.3–0.7                     | Rectangular
Vertical Resolution (mm)              | -                        | 1.1–5.5                   | 0.4–1.8                     | Rectangular
Background Noise (mm)                 | -                        | 2.4                       | 0.8                         | Triangular
Length Calibration Non-Linearity (%)  | 0.3                      | 0.5                       | 0.2                         | Normal
Abbe Error (%)                        | 1.6                      | 0.2                       | 0.2                         | Normal
Reference Points Localization (mm)    | 4–15                     | 10–18                     | 2–6                         | Normal
Expanded Uncertainty U (mm)           | 7–30                     | 16–40                     | 3–15                        | Normal
Table 3. Dairy cattle information.

ID | Breed                | Animal Category | Age (d) | Lactations Number
1  | Holstein-Friesian    | Calf            | 14      | 0
2  | Holstein-Friesian    | Calf            | 40      | 0
3  | Holstein-Friesian    | Calf            | 56      | 0
4  | Holstein-Friesian    | Calf            | 13      | 0
5  | Holstein-Friesian    | Calf            | 53      | 0
6  | Holstein-Friesian    | Calf            | 32      | 0
7  | Red & White Holstein | Calf            | 172     | 0
8  | Red & White Holstein | Calf            | 168     | 0
9  | Holstein-Friesian    | Calf            | 127     | 0
10 | Holstein-Friesian    | Calf            | 218     | 0
11 | Holstein-Friesian    | Calf            | 193     | 0
12 | Holstein-Friesian    | Cow             | 245     | 0
13 | Holstein-Friesian    | Cow             | 241     | 0
14 | Holstein-Friesian    | Cow             | 561     | 0
15 | Holstein-Friesian    | Cow             | 456     | 0
16 | Holstein-Friesian    | Cow             | 586     | 0
17 | Holstein-Friesian    | Cow             | 800     | 1
18 | Holstein-Friesian    | Cow             | 2224    | 3
19 | Holstein-Friesian    | Cow             | 1993    | 3
20 | Holstein-Friesian    | Cow             | 1386    | 2

