Article

Practical Aspects of Weight Measurement Using Image Processing Methods in Waterfowl Production

1 University Research and Innovation Center (EKIK), Óbuda University, 1034 Budapest, Hungary
2 Faculty of Informatics, Eötvös Loránd University, 1117 Budapest, Hungary
* Author to whom correspondence should be addressed.
Agriculture 2022, 12(11), 1869; https://doi.org/10.3390/agriculture12111869
Submission received: 21 September 2022 / Revised: 31 October 2022 / Accepted: 2 November 2022 / Published: 8 November 2022
(This article belongs to the Section Digital Agriculture)

Abstract

Precision poultry farming technologies include the analysis of camera images of poultry flocks. In large-scale waterfowl farming, these can be used to determine the individual weights of birds. In our research, conducted in a real farming environment, we used cameras fixed to the metal support structure of the barn, located above suspended bird scales. Top-view camera images of the bird on the weighing cell were matched to the weight data measured by the scale. The algorithms were trained on one part of the database, and the results were validated and tested on the rest (training: 60%, validation: 20%, testing: 20%). Three data science models were compared, and the random forest method achieved the highest accuracy and reliability. Our results show that the random forest method gave the most reliable results for determining the individual weights of birds. We found that the housing environment had a strong influence on the applicability of the data collection and processing technology. We have shown that, by analyzing carefully collected images, it is possible to determine the individual weights of birds and thus provide valuable information about the flock.

1. Introduction

1.1. Main Characteristics of Precision Livestock Farming

Precision livestock farming (PLF) tools can help to provide evidence-based strategies to improve facility design and farm management [1]. Recently, several scientific articles have been published on precision farming methods in large-scale livestock production, but these have hardly been disseminated in a real farming environment. Meanwhile, these technologies could contribute to the achievement of several sustainable development goals (SDGs) [2]. PLF technologies focus on individual animals through data collection and analysis. The added value comes from the evaluation of the results, which helps farmers to increase their livestock income and reduce the negative environmental impact of their farming [3]. The cost of inputs (e.g., feed, drinking water, energy, drugs, and human labor) used in the production of animal products can be optimized, and production conditions can be monitored continuously. Transparent animal product value chains can be achieved from the very first step: the farms.
The efficient and effective use of PLF highly depends on the quality of the data collected by digital devices (Internet of Things, IoT). To solve the practical problem defined previously, the most appropriate data collection tool for the specificities of the farm animal species and the farming method must be found [4]. In most cases, in commercial intensive poultry housing systems, cameras can be used to collect the right amount and quality of individual data. One of the main challenges in the poultry sector is that a poultry house can contain tens of thousands of birds at the same time, making it difficult to distinguish animals from each other and from their environment. During rearing, the birds also change in body size, feather cover, and color. The processing of images and videos of birds is part of machine learning within data science. Machine learning involves several processes and methods, including neural networks. Computer vision systems built on neural networks are specifically designed to analyze visual data, and image recognition is one of the tasks in which deep neural networks (DNNs) excel.
The main components of a computer vision system include cameras, recording units, processing units, and models. In an application of a computer vision system, animals (e.g., cattle, sheep, pigs, poultry, etc.) are monitored by cameras installed at fixed locations, such as ceilings and passageways, or on mobile devices such as rail systems, ground robots, and drones. Recording units (e.g., network video recorders or digital video recorders) collect images or videos from different views (top, side, or front) and of various types (e.g., RGB, depth, thermal, etc.). Recordings are saved and transferred to processing units for further analysis. Processing units are computers or cloud computing servers [5]. The challenges in processing images and videos collected from animal environments are associated with inconsistent illumination and backgrounds, variable animal shapes and sizes, similar appearances among animals, animal occlusion and overlapping, and low resolutions [6].

1.2. Main Characteristics of the Poultry and Foie Gras Sectors

The EU produces approximately 90% of the world’s foie gras. The other main producing countries are China, the United States, and Canada. Approximately 117,979 tons of foie gras were produced in the European Union in 2021. The largest populations of waterfowl are in France and Hungary (68.6% and 20%, respectively) [7]. The bird flu outbreaks in recent years have caused serious economic damage to waterfowl farmers. The reason is that, traditionally, waterfowl are reared in semi-free or free-range systems, and poultry reared in this way are at increased risk of infection with avian influenza. The loss of production caused by such an epidemic is a serious economic loss for farmers, and therefore confinement techniques are increasingly used. Large-scale, commercial waterfowl farming technology is mostly similar to that used for broiler chicken houses. Thousands of birds are kept in closed buildings under automated and controlled housing conditions. The Hungarian poultry sector is characterized by integrated production. The integrator company provides the farmer with the day-old ducks, the feed, and the advisory service. The company delivers the animals for slaughter according to an integrator contract with the producer at the contractually agreed price. The maximum mortality rate and the expected average weight of the slaughtered animals are fixed in the contract [8]. In practice, the integrator’s consultants check the weight of the birds once during the production period by representative manual weighing (10–15 ducks per weighing). The cooperation with the integrator creates a predictable framework for duck farmers, and the right quality of carcass is in the interest of both the integrator company and the duck farmer. Optimizing the amount of inputs used (feed, drinking water, energy, litter, medicine, human resources) and minimizing mortality and animal health risks are in the farmer’s interest.
This includes automating the housing technology, making intensive use of existing equipment, and increasing the speed of production rotation (shortening the length of service periods). The more efficiently the farmer produces ducks, the more profitable the business. Poultry weight provides information about growth, and from it the feed conversion efficiency of the flock can be calculated. Whether the aim is fattening or liver production, the weight and health of the ducks and geese are equally important indicators for farmers. In large-scale poultry production, weight monitoring of the birds is carried out either manually, at a few points during the rearing period, or by using digital scales placed among the birds in the building. In both cases, the average of the individual weight measurements of a small proportion of the flock is used to determine the average weight of the whole flock. When many birds need to be weighed, the traditional method is labor-intensive and time-consuming [9], stressful to birds [10,11], subject to transcription errors, and prone to human errors [12]. It might be useful to automatically collect simultaneous information about the growth trend of all the birds to identify deviations from the expected growth trend [9,13], providing details about the health and welfare status of the animals as well [14].
Our ongoing research is being carried out on a large-scale duck fattening farm in Hungary. Our research objective is to estimate the individual weights of the birds in their live state by machine learning analysis of camera images. In this paper, we present our experience of the IT method used in a real farming environment. The aim of our research was to find the best way to determine the individual weights of ducks, a species of waterfowl, using machine learning methods already known and applied in other research fields.

2. Materials and Methods

The experiment was conducted on fattening ducks housed in a commercial, intensive, indoor system between September 2020 and September 2021 on a private farm in the south-eastern part of Hungary (Kiskundorozsma, GPS coordinates: 46.2667, 19.9167). Our study collected data on individual ducks during the whole fattening period. One period takes seven weeks, followed by a two-week service period before the next fattening period. Our experiment included three periods between September 2020 and September 2021. The farmer fattened ducks all year round. At the end of each fattening period, the slaughter animals were transported to a slaughterhouse under contract with the farm. The poultry farm complies with the current legislation on the keeping of farm animals, animal welfare, and environmental protection, which is continuously monitored by the authorities (Hungarian State Treasury Agriculture and Rural Development Agency, National Food Chain Safety Office, and Hungarian Association of Poultry Farmers). The poultry farm is run as a family farm; there are no permanent employees, the farmer and his family live 5 km from the farm, and they have been fattening ducks for ten years.

2.1. Animals

Cherry Valley ducks are fattened on the poultry farm where our research took place. The Cherry Valley duck is a commercial cross of Pekin ducks and one of the major duck crosses used for commercial duck meat production in Hungary. It has a high growth rate, reaching a market live weight of 3.45 kg at 42 days of age with an FCR of 1.92 for a medium-sized commercial duck (Figure 1). In one period, 8000 ducks are fattened, housed in one building from one day old to two weeks old, after which the ducks are divided into four buildings. Thus, 2000 ducks are fattened in one building until the end of the seven-week fattening period.
The stock density was 23 ducks/m2 during the two weeks of pre-rearing and 5.7 ducks/m2 during post-rearing.

2.2. Housing and Management Conditions

At the duck fattening farm, the ducks are kept in four foil tents, each 7 m wide, 50 m long, and 2.5 m high. There is a long tradition of fattening ducks in Hungary, and the so-called foil tents are popular among duck farmers. The farmers attach the multi-layered, insulated foil to a metal frame structure (Figure 2). The two shorter sides of the foil tents can be partially folded down. This simple building is well suited to the needs of ducks. In the foil tent where our experiment was carried out, the ventilation technology consisted of a single-phase exhaust fan with a capacity of 42,000 m3/h, installed at one of the end walls. The air entered the barn naturally, without automatic control. The fan was switched on and off by a simple controller using a temperature sensor. In winter, a natural gas space heater ensured the right temperature, while in summer, a low-pressure cooling system was used. The latter consisted of a plastic pipe system attached to the longitudinal side wall of the barn and water spray valves. The ducks were supplied with water by two watering lines with valves placed along the length of the shed. A water pressure regulator at the end of the watering lines regulated the water pressure. The design of the watering valves was adapted to the water requirements of the ducks. The feeding technology consisted of a feed bin and a spiral dry-feed filling tube with an overhead track, located at the shorter end of the building close to the outer wall of the barn, from which the feed was delivered through plastic pipes to plastic barrels in the foil tent. A feed level sensor in the last drop tube started and stopped the feed delivery from the silo bin. The feed was fed from the plastic barrels into the rubber trays below (Figure 3). The ducks were fed the commercial feed recommended by the integrator company. Two-phase feeding was used on the farm. The ducks were fed starter feed for the first two weeks and finishing feed for the last five weeks.
Lighting was natural for most of the fattening period, with supplementary artificial lighting, consisting of fluorescent tubes attached to the frame system of the foil house, used during the shorter days in autumn and spring. The foil tents had no concrete floor, and litter straw was placed on the ground, to which a new layer of straw was added every three or four days. After the birds had been transported to the slaughterhouse, a two-week break was observed, during which the foil tents were cleaned, disinfected, rested, and prepared for the next flock of ducks in accordance with veterinary regulations. On average, 5–6 fattening periods are completed in a year. Under the on-farm circumstances, only representative bird weighing was possible; data on bird weights were obtained manually by rounding up and weighing 10–15 ducks per fattening period. The foil tents did not have separate service facilities; these were located in a small building approximately 10 m from the tents.

2.3. Data Collection

The main parts of the system are the data collection system (weight sensors and cameras), the module for manual and automatic processing of the incoming data, and the database of the processed data. Using the database created by combining the images and the measured weight data, a machine-learning-based weight estimation system was created, which can be used to replace the hanging bird scales. We also present the development of the filtering components that run on the data; the aim was for these to integrate easily into the pipeline, to be usable in other machine vision and machine learning projects, and to be well customizable depending on the task.
For image data collection, we used outdoor IP66-rated security IP cameras in the barns, equipped with built-in infrared illumination: fixed dome cameras (BOSCH Flexidome IP 5000i IR NDI-5503-AL, 5-megapixel resolution; BOSCH Flexidome IP 3000i IR NDE-3502-AL, 2-megapixel resolution) and a bullet camera (BOSCH DINION IP 3000i IR NBE-3502-AL, 2-megapixel resolution).
Time stamps were assigned to the collected data so that we could later match the bird-weight-data–camera-image pairs.
The weight data were collected using a bird scale consisting of a suspended weighing plate and a digital data collection unit. The cameras were connected to a Gigabit Ethernet switch in the barn using Category 5e UTP wiring, and power was also supplied over this wire using PoE (power over Ethernet) technology. The cameras and the scale were fixed to the metal roof structure of the barn using a loose bolt connection and HILTI tape (Figure 4).
The measured weight data were transmitted to the data collection computer via an RS485 interface. The camera images were processed, and the measured weight data were recorded, by a PC with an Intel Core i3 processor. The data were recorded in rotation, so that new data overwrote the oldest, providing the capacity needed to store data and images continuously. Data processing was performed on an Asus ROG Zephyrus M15 GU502LW notebook (Intel Core i7-10875H CPU @ 2.30 GHz, 32 GB RAM, NVIDIA GeForce RTX 2070 with Max-Q Design).
Several considerations had to be weighed in the selection of the development environment. The Python language was selected because, in the fields of deep learning, Big Data, and image processing, it has several libraries that make development much easier. The OpenCV library handled our main input data, the images, while TensorFlow and Keras provided the various predefined machine learning models that were essential. In addition, the Pandas module helped us with data processing, and for annotation we used the publicly available LabelMe software [15].

2.4. Processing and Verifying Data

The first task was to create a database for training and validating the weight estimation system, by assembling and saving the bird images and the weight data associated with the bird.
The measured weight data were labeled with a timestamp and then uploaded to the database. In the next step, we filtered out those timestamps for which no weights were recorded by the system. The measurements belonging to the same timestamp were averaged, and finally the relevant data were saved in a separate file.
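The filtering, averaging, and image-to-weight matching steps can be sketched in pure Python as follows. This is an illustrative sketch only: the function names and the 1 s matching tolerance are our assumptions, not details taken from the production system.

```python
from collections import defaultdict

def average_weights_by_timestamp(samples):
    """Group (timestamp, weight) samples and average readings that share
    the same timestamp; timestamps with no recorded weight are dropped."""
    grouped = defaultdict(list)
    for ts, weight in samples:
        if weight is not None:        # skip timestamps with no weight
            grouped[ts].append(weight)
    return {ts: sum(ws) / len(ws) for ts, ws in grouped.items()}

def match_images_to_weights(image_timestamps, weights, tolerance=1.0):
    """Pair each image timestamp with the averaged weight whose
    timestamp is closest, within the given tolerance (seconds)."""
    pairs = []
    for img_ts in image_timestamps:
        best = min(weights, key=lambda ts: abs(ts - img_ts), default=None)
        if best is not None and abs(best - img_ts) <= tolerance:
            pairs.append((img_ts, weights[best]))
    return pairs
```

For example, two readings of 3100 g and 3120 g at the same timestamp are averaged to 3110 g, and an image taken 0.3 s later is paired with that value.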
Since the size, material, and shape of the weighing pan and the birds’ willingness to step on the scale greatly influence the accuracy of the measurement, weighing pans of different materials and shapes were tested. They became contaminated at different rates, and their material and color made it easier or more difficult to detect and accurately segment birds on the weighing pan. The height of the scale was also important, as a high hanging point caused the scale to “swing” when adult, heavier birds stepped on it, causing the birds, especially the larger ones, to avoid the scale, so that very few measurements were taken, and only of smaller-weight ducks. We, therefore, moved the weighing platform closer to the barn floor. The optimal height had to be found because, if the pan was too low, the birds carried litter straw onto the weighing pan, causing inaccurate measurements.
Another problem was the removal of feces (guano) and other dirt accumulating on the weighing platform, because their weight increased the measured weight data, generating a continuous error in the measurements (Figure 5). An automatic zeroing solution was developed, similar in operation to the zeroing of conventional hanging bird scales. The system detects, based on the camera image, when there are no birds on the scale and zeroes the scale at these moments by setting an offset corresponding to the currently measured idle weight. The validation of the camera images to be included in the neural network training dataset was performed in several steps. First, we selected those images in which exactly one bird was visible on the scale, the scale appeared to be at rest, and the bird stood on the scale with its entire body. As a result, only those images were left in the image database that could serve as inputs to the feature extractor component; images where the birds were not on the measuring plate with their full body were removed from the database.
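The offset-zeroing logic can be sketched as follows. This is a minimal sketch under our own naming; the real system derives the pan-is-empty signal from the camera image, which is stubbed here as a boolean input.

```python
class AutoZeroScale:
    """Offset zeroing for a hanging bird scale: whenever the camera
    reports an empty weighing pan, the current raw reading (pan weight
    plus accumulated dirt) is stored as the new zero offset."""

    def __init__(self):
        self.offset = 0.0

    def update(self, raw_reading, pan_is_empty):
        if pan_is_empty:
            # Whatever the empty pan weighs now becomes the new zero.
            self.offset = raw_reading
        return raw_reading - self.offset
```

After dirt accumulates, the next empty-pan frame re-zeroes the scale, so a bird weighed afterwards is reported without the dirt’s weight.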
Originally, we started with 45,000 color images, of which 2500 were extracted into the training set. This was further reduced to 800 due to faulty masks, for example, when there were many ducks in one place, when the edges of the scale were not clearly visible because of the birds, when the reference point was wrong, or when there were no ducks on the scale in the evaluated picture. We also removed images from the database where only the head or the breast was placed on the scale. These cases were filtered during the comparison with the weighed data, and measured values that did not match the growth curve of the ducks, together with the corresponding images, were also removed from the cleaned database. Consecutive, identical images were averaged. We also discarded images that did not carry additional information (e.g., ducks sleeping on the scales, multiple images of the same duck in the same pose, etc.).
Finally, we selected 449 images where a duck was visible on the scale, and from these, 225 pictures where the scale data were also correct and the area had been filtered. The original 3072 × 1728 resolution camera images were downscaled to 40%, resulting in a resolution of 1228 × 691.
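The 40% downscaling step can be sketched with a nearest-neighbour resize in NumPy. This is an illustrative sketch only; the production pipeline presumably used an image library’s resize function rather than this hand-rolled indexing.

```python
import numpy as np

def downscale(image, factor=0.4):
    """Nearest-neighbour downscale of an H x W (x C) image array."""
    h, w = image.shape[:2]
    new_h, new_w = int(h * factor), int(w * factor)
    # Map each output pixel back to its source pixel.
    rows = (np.arange(new_h) / factor).astype(int)
    cols = (np.arange(new_w) / factor).astype(int)
    return image[rows][:, cols]
```

A 3072 × 1728 frame (stored as a 1728 × 3072 array) comes out as 1228 × 691, matching the resolution quoted above.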
We also used a segmentation component to process the images, as annotated data were needed to train the neural network. To reduce the workload of the learning process, we added instance segmentation to the image sorting software, which selected the birds on the scale.
In our experiment, this segmentation represented the polygons containing the scale and the birds. To generate this, we used the publicly available LabelMe software, which offered several inclusion shapes, including the polygon with the unique shape we needed. The LabelMe software has the additional advantage of supporting the concatenation and conversion of the files generated during annotation into more popular formats.
The multi-stage processing pipeline was built up in the following way: the first step was to acquire the camera images (step 1), then segment the images using the instance segmentation neural network (step 2), which located the bird in the image and created a mask for it. If the network recognized a duck on the scale with at least 80% confidence, the next step was the pose estimation and the computation of the features (step 3) used in the weight estimation: the mask area and the area of the ellipse fitted to the mask. These two main features were passed to the weight estimator, and two further features were calculated: wingtip distance and spine length (step 4). As a last step, the weight of the duck seen by the camera was estimated by the selected model (steps 5 and 6) (Figure 6).
In practice, the image processing pipeline was implemented as follows.
Instance segmentation could also be used as object detection, since a bounding box can easily be found for the resulting mask, and some neural networks require a bounding box to be found before the mask is computed. Of course, the animal mask, which contained additional information, could also be useful during the weight estimation process. We tested YOLACT [16] and Mask R-CNN [17], but due to training difficulties with the former, we decided to use the latter, whose implementation is available in the detectron2 library.
In large-scale, intensive conditions, poultry are kept at high densities. To be able to estimate individual body weight with sufficient confidence, only those images were used to train and test the algorithm where the ducks’ bodies were clearly visible individually, with no overlap, so no instance segmentation was necessary.
In the next step of image processing, the desired feature set was extracted from the images; in our first approach, this was the area occupied by the animal on the scale, determined by an instance segmentation component responsible for detecting the scale and the birds. As a starting point, we investigated the accuracy of inferring the weight of the animal from the area occupied by the bird alone, which was tested by linear regression.
However, this solution alone did not provide sufficient accuracy, and further steps had to be taken to increase the reliability of the weight estimation by determining the position of the animals. As a result, a pose estimation was performed, and a bounding box was derived from the results. This seemed to be a less accurate solution in terms of localization, but the information on the position of the body parts was needed anyway, so it was worth trying as an object detection component. For pose detection, the DeepLabCut [18] (DLC) model was investigated. DeepLabCut is an open-source software package that specifically helps to estimate animal poses and even has a separate library (DLC Live) that allows the processing of live recordings. DLC was used to determine the neck point, dorsal midpoint, and tail point, and from these, we calculated the bird’s backbone. The straight line connecting the wing points must intersect the bird’s backbone, and the backbone must lie within the previously defined contour; if the backbone fell outside it, the mask was not accurate. Processing the image with DLC in this way increased the accuracy. The occasional wing flapping of the waterfowl was a problem because it greatly increased the area occupied by the bird in the camera image used for weight estimation. Therefore, we also calculated a wing length from the left- and right-wing tip points to correct the models. Ducks with outstretched necks covered a larger area, which also reduced the accuracy, so we calculated bird areas without heads and necks. The Mask R-CNN [17] is a convolutional neural network (CNN) and a state-of-the-art solution for image segmentation. Using the Mask R-CNN, instance segmentation could be performed, which returned a bounding box in addition to the mask.
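The geometric plausibility check described above, that the line connecting the wing points must intersect the backbone, can be sketched as a simple segment-intersection test. This is a minimal sketch under our own naming; the actual keypoint names and acceptance rules of the pipeline may differ.

```python
def _cross(o, a, b):
    """2-D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    d1 = _cross(q1, q2, p1)
    d2 = _cross(q1, q2, p2)
    d3 = _cross(p1, p2, q1)
    d4 = _cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def pose_is_plausible(neck, back_mid, tail, left_wing, right_wing):
    """Accept the mask only if the wing-tip line crosses the backbone
    (the polyline from the neck point through the dorsal midpoint to
    the tail point)."""
    return (segments_intersect(neck, back_mid, left_wing, right_wing)
            or segments_intersect(back_mid, tail, left_wing, right_wing))
```

A wing-tip line lying entirely to one side of the backbone, as happens with a faulty mask, is rejected by this test.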
(Note: in principle, a bounding box can be derived from the DLC feature, but the Mask-R-CNN-derived bounding box was more reliable in our experience, whilst the DLC was helpful in cases where no object detection was available in the system.) Mask R-CNN detects objects in an image and generates a high-quality segmentation mask for each instance. As a starting point, we used Mask R-CNN to obtain the duck mask and used the headless bird area calculated from it, and the area of the ellipse fitted to the mask as a feature for the weight estimator module. As a final step in the learning process, the two files (images and weight data) were merged together along the timestamps. We calculated average weights and areas as a function of the animals in the image and performed an outlier filtering on them, as all the training samples that were very different from the others could be considered as noise (e.g., a measurement that was corrupted) and deleted.
We did this by expecting an ellipse with a larger area for a larger mask, so we sorted the feature pairs in ascending order by mask area. The expected result was that the ellipse area also increased monotonically. In cases where this did not hold, it was worth examining the mask output, because a measurement error was suspected.
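This monotonicity check can be sketched as follows; the function name and the choice to flag (rather than delete) suspect indices are our own illustrative assumptions.

```python
def flag_suspect_pairs(mask_areas, ellipse_areas):
    """Sort (mask area, ellipse area) feature pairs by mask area and
    flag the indices where the ellipse area fails to increase
    monotonically, which suggests a faulty mask or measurement."""
    order = sorted(range(len(mask_areas)), key=lambda i: mask_areas[i])
    suspects = []
    prev = float("-inf")
    for i in order:
        if ellipse_areas[i] < prev:
            suspects.append(i)        # ellipse area dropped: suspect
        else:
            prev = ellipse_areas[i]
    return suspects
```

A sample whose ellipse area drops below that of a smaller mask is flagged for inspection, mirroring the outlier filtering described above.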
As a result, four features could be used in the weight estimation: the area of the bird mask, the fitted ellipse, the length of the ridge, and the distance between the wing tips provided by the DLC.
The above features were given as input parameters to several models; we investigated three models in particular: the linear regression model, random forest, and the multilayer perceptron (MLP). Since the body size and weight of the birds under investigation increased monotonically during rearing, a larger feature value (the area of the bird) implied a larger weight, and therefore the linear regression model was well suited. In addition, the random forest and multilayer perceptron models were also tested, and the accuracies of the weight estimates obtained with these models were compared. The multilayer perceptron is an artificial neural network with three or more layers of perceptrons: a single input layer, one or more hidden layers, and a single output layer. Random forest is a supervised machine learning algorithm widely used in classification and regression problems; it builds decision trees on different samples and takes their majority vote for classification or their average for regression, which reduces the risk of overfitting.
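A comparison of the three regressors can be sketched as below. The scikit-learn estimators, the synthetic four-feature data, and all hyperparameters here are illustrative assumptions (the paper’s own stack names TensorFlow/Keras); the weights are in kilograms to keep the MLP well conditioned.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the four features: mask area, ellipse area,
# spine length, wingtip distance -- weight grows with all of them.
n = 500
X = rng.uniform(0.5, 2.0, size=(n, 4))
y = (0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.15 * X[:, 2] + 0.1 * X[:, 3]
     + rng.normal(0, 0.02, size=n))      # weight in kg, with noise

# Hold out a test fold (the paper used a 60/20/20 split; the
# validation fold is omitted here for brevity).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(random_state=0),
    "mlp": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 32),
                                      max_iter=2000, random_state=0)),
}
mae = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    mae[name] = mean_absolute_error(y_test, model.predict(X_test))
```

Comparing the mean absolute errors of the fitted models reproduces, in miniature, the model selection described in the Results section.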

2.4.1. Development of Filter Modules and Detection of Repetitive Data

The functionality of the automated image processing pipeline was improved by adding filter modules that could be loaded separately. Several filters were created as stand-alone modules that could be inserted into the processing pipeline, improving its accuracy. The most important module detected identical images. Repetitive images degrade the generalization ability of the network, i.e., it becomes less accurate on data not yet seen by the algorithm, and they also increase the storage capacity required. It was important to develop a solution that scaled well on large data sets. We used difference hash (dHash) values, which had several advantages:
- It was not sensitive to different resolutions and aspect ratios;
- Changing the contrast did not change the hash value, or changed it only slightly, so the hash values of very similar images were close to each other;
- The method worked quickly.
To allow the difference hash values to identify more images as repeats, two changes were introduced: we used grayscale images, and we did not require a complete match between hash values, allowing for small variations.
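A minimal dHash sketch in NumPy is shown below; the 8 × 8 hash size and the 4-bit Hamming tolerance are illustrative assumptions rather than the values used in the pipeline.

```python
import numpy as np

def dhash(image, hash_size=8):
    """Difference hash of a grayscale image: sample the image down to
    hash_size x (hash_size + 1) pixels, then record whether each pixel
    is brighter than its right-hand neighbour."""
    h, w = image.shape
    ys = np.arange(hash_size) * h // hash_size
    xs = np.arange(hash_size + 1) * w // (hash_size + 1)
    small = image[np.ix_(ys, xs)].astype(int)
    return (small[:, 1:] > small[:, :-1]).flatten()

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))

def is_repeat(h1, h2, max_distance=4):
    """Treat near-identical images as repeats: allow a small hash
    variation instead of requiring an exact match."""
    return hamming(h1, h2) <= max_distance
```

Because only brightness *differences* are recorded, a pure contrast change leaves the hash (almost) unchanged, which is the property exploited above.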

2.4.2. Blurred Image Detection

During the measurements, we noticed that either motion or dirt (e.g., spider eggs, cobwebs, insects) on the camera or in front of the camera caused some blurred images, which were not suitable for analysis because they greatly impaired the automatic segmentation.
Therefore, we implemented edge detection on the input images, because the fewer edges found, the more likely the image was blurred. We used the Laplace operator, run on three test images overlaid with an artificial blur. The first image remained sharp everywhere, the second had the scale area blurred, and the last was blurred everywhere. The result of the run is shown in Figure 7.
To obtain a metric for assessing blurriness, we examined the variance of the Laplacian response. The variance value is related to the number of detected edges, so the smaller the variance, the more blurred the image. The variances were as follows for the above pictures:
  • Sharp image: 1141;
  • Blurred scale: 1063;
  • Blurred image: 26.
The results showed that highly blurred images were found by this method, but the difference between the first two images was not significant, and the important case was precisely when the scale area was blurred (Figure 7).
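The Laplacian-variance metric can be sketched with a NumPy-only convolution (the production code presumably used OpenCV; the threshold value here is an illustrative assumption and, as the text notes, must be tuned per installation):

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the Laplacian response over the image interior:
    low variance means few detected edges, i.e., a likely blurred image."""
    g = gray.astype(float)
    # 3x3 Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]] applied to the
    # interior pixels (no padding is needed for a global metric).
    resp = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
            - 4 * g[1:-1, 1:-1])
    return float(resp.var())

def is_blurred(gray, threshold=100.0):
    """Threshold is scene-dependent and must be tuned per camera."""
    return laplacian_variance(gray) < threshold
```

A flat (fully blurred) image gives a variance of zero, while a sharply textured image gives a large variance, matching the ordering of the three test values above.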
Another approach to detecting blur was the Fourier transform. This algorithm also detected only severe blur, so it did not solve the problem by itself. In practice, this was not a major concern, since the filter could be placed in the pipeline after instance segmentation, which explicitly gave the area of the scale. This way, we could detect blur in the most informative area and remove these images before feature computation.
Of course, this removes the option of skipping segmentation on bad images, so it is essential that the filtering runs quickly. With the Fourier transform solution’s drawback thus eliminated, the two methods work nearly equally well, so speed is the deciding factor between them. The measurements showed that the variance-based solution presented previously evaluated an image in half the time. The advantage of the Fourier transform was that its threshold was easier to find and could even work unchanged in other environments, but given the fixed installation of the cameras, this does not compensate for half the speed. It is worth mentioning that the noise/error of the scale mask was filtered by calculating the center and area of the polygon and replacing it with a circle of equal area and the same center. This worked quite well, but unfortunately it is not applicable to non-circular scale pans.
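The circle-replacement step for the scale mask can be sketched with the shoelace formula; function names are our own, and the polygon is assumed to be simple (non-self-intersecting).

```python
import math

def polygon_area_centroid(points):
    """Shoelace area and centroid of a simple polygon given as a list
    of (x, y) vertices in order."""
    a = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5                      # signed area (handles both windings)
    return abs(a), (cx / (6 * a), cy / (6 * a))

def circle_from_mask(points):
    """Replace a noisy, near-circular scale-pan polygon with a circle
    of equal area centred on the polygon centroid."""
    area, center = polygon_area_centroid(points)
    return center, math.sqrt(area / math.pi)
```

For a unit square mask, this returns the centre (0.5, 0.5) and the radius of the equal-area circle, √(1/π) ≈ 0.564.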

3. Results

The weight estimation errors for the models tested are given in Table 1. The accuracy of the models was assessed using the mean absolute error (MAE), the average difference in grams between the actual and estimated values.
The interquartile range (IQR) is an example of a trimmed estimator, defined as the 50% trimmed central range (IQR = Q3 − Q1, the difference between the first and third quartile), which enhances the accuracy of dataset statistics by dropping lower contribution and outlying points (denoted by “IQR 0 filter” in Table 1). The data in the “IQR 0.2 filter” columns are shifted by 0.2 times IQR (the lower bound is Q1 − IQR × 0.2 and the upper bound is Q3 + IQR × 0.2) to investigate the impact on the models of the lower filtering efficiency expected in a real-life application environment. In this case, the best values were obtained by combining ellipse and occupied area, as expected, i.e., ellipse fitting is an additional technique that increased the accuracy of the estimation in practice.
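The two filters and the error metric can be sketched as follows (a minimal reconstruction; the helper names are ours, and only the bound definitions are taken from the text):

```python
import numpy as np

def iqr_filter(values: np.ndarray, shift: float = 0.0) -> np.ndarray:
    """Keep values inside [Q1 - shift*IQR, Q3 + shift*IQR].

    shift=0 corresponds to the strict "IQR 0 filter"; shift=0.2 widens
    the band as in the "IQR 0.2 filter" columns of Table 1.
    """
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    keep = (values >= q1 - shift * iqr) & (values <= q3 + shift * iqr)
    return values[keep]

def mae(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Mean absolute error in grams."""
    return float(np.mean(np.abs(actual - predicted)))
```

With shift = 0.2 a gross outlier (e.g. two birds on the pan at once) is still dropped, while plausible weights near the quartile boundaries survive the wider band.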
The estimation of bird weights can be recast as a classification problem if we do not aim to determine the exact weight of each bird but instead assign it to a 50 g class. The regression problem then becomes a classification problem, which was easier to solve, and this approach improved the accuracy of the estimation by 10–15 g on average, because the 50 g class breakdown allowed meaningful filtering of erroneous values.
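The 50 g grouping amounts to a simple binning step; a sketch is below. The paper does not specify the bin edges, so labelling each class by its lower edge is our assumption.

```python
def to_weight_class(weight_g: float, bin_size: int = 50) -> int:
    """Map a weight to its 50 g class, labelled by the lower bin edge:
    e.g. 1234 g falls into the [1200, 1250) class, labelled 1200."""
    return int(weight_g // bin_size) * bin_size
```

The class labels then serve as classification targets, and predictions landing in an implausible class can be filtered out, which is the source of the 10–15 g improvement described above.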
Figure 8 shows the 50 g class data (the y-axis shows the area computed from the mask and the x-axis shows the weight data sent by the scale), i.e., the distribution of the area occupied by the birds in the image.
Figure 8 is intended to illustrate that the area occupied by a bird does not in itself give a sufficiently accurate estimate of its weight. Therefore, the four features mentioned above are proposed for estimating the weight, which together can significantly reduce the error of the weight estimate. When evaluating the accuracy of the weight estimation, it should be considered that the weight of the birds examined in the compilation of Table 1 ranged from 1100 to 1800 g, so the MAE corresponds to a relative error of between 3.55% and 9.54%.

4. Discussion

PLF techniques in large-scale confined poultry flocks focus on the position of birds within the house and the relationships that can be identified in their behavioral patterns. By analyzing the typical behavioral patterns of birds, the aim is to identify differences that are primarily useful for early detection of diseases. The estimation of individual weights of birds is addressed in a limited amount of literature. Currently, there is no widespread method in practice to estimate the individual weights of birds by analyzing camera images. In our research, under real farming conditions, we encountered several practical problems in both data collection and data processing. Data collection devices (the cameras) that can be used in practice should be able to withstand contamination and should not interfere with the work processes involved in poultry housing systems. At the same time, they must provide accurate and reliable results.
In addition to the accuracy of the weight estimation, the complexity and resource requirements of the model used are also important. The applied bird weight estimation system should use the lowest-power hardware components possible, both for cost reasons and because the intensive air cooling required by a powerful GPU is difficult to provide under field (barn) conditions. The regression model and the random forest model, which run well on a CPU, do not require a costly, cooling-intensive GPU. From a practical point of view, the advantage of the regression model was that it did not require a resource-intensive training phase, which saves significant cost and time if new species are later added to the weight estimation system. The MLP did not perform well, despite having the highest resource requirements among all tested models. We did not even attempt single-feature estimation with it; we only performed the weight estimation from the “Area and Ellipse” feature set, yet it still performed worse than the other two. As a further simplification, we turned the weight estimation into a classification problem by grouping the birds into 50 g classes. An important aspect of our work is that it took place in a real farming environment, so we had to adapt to daily farming practices. Several articles study single animals or animals observed in “boxes” under highly controlled conditions [19,20,21]. In comparison, a real farming environment poses several practical problems that greatly affect the accuracy of the system.
On the use of cameras in commercial poultry meat production, we found several interesting and valuable research results, focusing on welfare status [22,23], on identifying biomechanical variables of broiler chickens during feeding [24], on broiler lameness [25], and on counting broiler flocks [26]. However, there is scant literature on waterfowl.
Research using 3D sensors, mainly in cattle farming [27,28], can be found in the literature, but this is not a cost-effective solution in poultry farming. A 3D-camera-based system that could determine the weight of several broilers at once, or predict the weight of an individual broiler, was developed in [29]; the authors stated that machine vision combined with SVR could promisingly estimate the weight of live broiler chickens. In [30], the then-available Kinect cameras were used, but the applicability of the method has not been tested with the newer Kinect versions now available. In [31], the authors present a system based on inexpensive, off-the-shelf components that can be used to observe and describe animal activity and behavior. Our solution did not use 3D sensors and relied on cheap off-the-shelf components. The recent global pandemic showed the importance of telemonitoring and telemanagement services in various domains, including PLF; therefore, acceleration of these technology spin-offs is expected in the near future [32].

5. Conclusions

Precision farming technologies are also becoming increasingly important in the poultry sector. Their primary aim is to maximize the profitability of livestock production. This requires digital data on individual animals in addition to the production data collected so far. Whatever technology is used in large-scale livestock production, we must consider the conditions under which the animals are kept. These will have a major impact on the range of tools that can be used, the way in which data is collected and processed, and the way the results are fed back to the farmer. In this paper, we present a methodology to determine the individual weight of ducks by camera imaging. The proposed solution is suitable to replace conventional poultry weighing scales, providing similar accuracy at lower cost. In this article, we summarized the practical experience of our research in a real farming environment. We were looking for the simplest practical solution, both in terms of required IT devices and data analysis methods/computing requirements. Regular cleaning of the camera lenses was important for the applicability of the solution. It is advantageous that the artificial illumination was roughly constant for waterfowl, which are predominantly kept in closed conditions. The effectiveness of the weight estimation algorithm was highly dependent on the accuracy of the data used for training, and it is therefore important that the image and weight data used in the construction of the training data set be properly filtered and validated.
The images from the cameras that provided the weight estimation results can also be used for other studies, and the behavior of birds can be observed using this method.

Author Contributions

Conceptualization, S.S. and M.A.; methodology, M.A.; software, S.S.; validation, M.A. and S.S.; formal analysis, S.S.; investigation, M.A. and S.S.; resources, M.A. and S.S.; data curation, S.S.; writing—original draft preparation, S.S. and M.A.; writing—review and editing, M.A. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the National Research, Development, and Innovation Fund of Hungary, financed under the Thematic Excellence Programme TKP2020-NKA-06 (National Challenges Subprogramme) funding scheme, “Application Domain Specific Highly Reliable IT Solutions” project. The APC was funded by Óbuda University.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Special thanks to Beatrix Godó-Butty, a Hungarian private duck farmer who allowed us to carry out the experiments on her farm and who provided us with her own duck flock. We would like to thank Tamás Haidegger, from the University Research and Innovation Centre at Óbuda University, for his professional assistance in the preparation of the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

CPU: Central Processing Unit
DLC: DeepLabCut
DNN: Deep Neural Network
FCR: Feed Conversion Ratio
GPS: Global Positioning System
GPU: Graphics Processing Unit
IQR: Interquartile Range
MAE: Mean Absolute Error
ML: Machine Learning
MLP: Multilayer Perceptron
PLF: Precision Livestock Farming
PoE: Power over Ethernet
R-CNN: Region-Based Convolutional Neural Networks
RGB: Red, Green, and Blue Colors
SDGs: Sustainable Development Goals
UTP: Unshielded Twisted Pair
YOLACT: You Only Look At CoefficienTs

References

  1. Li, G.; Ji, B.; Li, B.; Shi, Z.; Zhao, Y.; Dou, Y.; Brocato, J. Assessment of layer pullet drinking behaviors under selectable light colors using convolutional neural network. Comput. Electron. Agric. 2020, 172, 105333. [Google Scholar] [CrossRef]
  2. Wolfert, S.; Isakhanyan, G. Sustainable agriculture by the Internet of Things—A practitioner’s approach to monitor sustainability progress. Comput. Electron. Agric. 2022, 200, 107226. [Google Scholar] [CrossRef]
  3. Kashiha, M.; Pluk, A.; Bahr, C.; Vranken, E.; Berckmans, D. Development of an early warning system for a broiler house using computer vision. Biosyst. Eng. 2013, 116, 36–45. [Google Scholar] [CrossRef]
  4. Alexy, M.; Haidegger, T. Precision Solutions in Livestock Farming—Feasibility and applicability of digital data collection. In Proceedings of the IEEE 10th Jubilee International Conference on Computational Cybernetics and Cyber-Medical Systems ICCC 2022 Budapest, Reykjavík, Iceland, 6–9 July 2022; Anikó, S., Ed.; IEEE Hungary Section: Budapest, Hungary, 2022; pp. 233–238. [Google Scholar]
  5. Li, G.; Huang, Y.; Chen, Z.; Chesser, G.D., Jr.; Purswell, J.L.; Linhoss, J.; Zhao, Y. Practices and Applications of Convolutional Neural Network-Based Computer Vision Systems in Animal Farming: A Review. Sensors 2021, 21, 1492. [Google Scholar] [CrossRef]
  6. Okinda, C.; Nyalala, I.; Korohou, T.; Okinda, C.; Wang, J.; Achieng, T.; Wamalwa, P.; Mang, T.; Shen, M. A review on computer vision systems in monitoring of poultry: A welfare perspective. Artif. Intell. Agric. 2020, 4, 184–208. [Google Scholar] [CrossRef]
  7. Available online: https://www.eurofoiegras.com/ (accessed on 12 September 2022).
  8. Lourens, A.; Heerkens, J.L.T.; Star, L. Sensors and techniques to monitor and improve welfare and performance in poultry chains. In Precision Technology and SENSOR Applications for Livestock Farming and Companion Animals; van Erpvan der Kooij, E., Ed.; Academic Publishers: Wageningen, The Netherlands, 2021; pp. 131–165. ISBN 978-90-8686-364-8. [Google Scholar]
  9. Mollah, B.R.; Hasan, A.; Salam, A.; Ali, A. Digital image analysis to estimate the live weight of broiler. Comput. Electron. Agric. 2010, 72, 48–52. [Google Scholar] [CrossRef]
  10. Newberry, R.C.; Hunt, J.R.; Gardiner, E.E. Behaviour of roaster chickens towards an automatic weighing perch. Br. Poult. Sci. 1985, 26, 229–237. [Google Scholar] [CrossRef]
  11. Doyle, I.; Leeson, S. Automatic Weighing of Poultry Reared on a Litter Floor. Can. J. Anim. Sci. 1989, 69, 1075–1081. [Google Scholar] [CrossRef]
  12. Feighner, S.D.; Godowski, E.F.; Miller, B.M. Portable Microcomputer-Based Weighing System: Applications in Poultry Sciences. Poult. Sci. 1986, 65, 868–873. [Google Scholar] [CrossRef]
  13. Fontana, I.; Tullo, E.; Butterworth, A.; Guarino, M. Broiler vocalisation analysis used to predict growth. In Proceedings of the Measuring Behavior 2014; Spink, A.J., Loijens, L.W.S., Woloszynowska-Fraser, M., Noldus, L.P.J.J., Eds.; Academic Publishers: Wageningen, The Netherlands, 2014. [Google Scholar]
  14. Fontana, I.; Tullo, E.; Butterworth, A.; Guarino, M. An innovative approach to predict the growth in intensive poultry farming. Comput. Electron. Agric. 2015, 119, 178–183. [Google Scholar] [CrossRef]
  15. Russell, B.C.; Torralba, A.; Murphy, K.P.; Freeman, W.T. LabelMe: A Database and Web-Based Tool for Image Annotation. Int. J. Comput. Vis. 2007, 77, 157–173. [Google Scholar] [CrossRef]
  16. Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. YOLACT: Real-Time Instance Segmentation. arXiv 2019, arXiv:1904.02689. [Google Scholar]
  17. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar] [CrossRef]
  18. Mathis, A.; Mamidanna, P.; Abe, T.; Cury, K.M.; Murthy, V.N.; Mathis, M.W.; Bethge, M. Markerless tracking of user-defined features with deep learning. arXiv 2018, arXiv:1804.03142. [Google Scholar] [CrossRef]
  19. Guo, Y.; Chai, L.; Aggrey, S.E.; Oladeinde, A.; Johnson, J.; Zock, G. A Machine Vision-Based Method for Monitoring Broiler Chicken Floor Distribution. Sensors 2020, 20, 3179. [Google Scholar] [CrossRef] [PubMed]
  20. Fang, C.; Zheng, H.; Yang, J.; Deng, H.; Zhang, T. Study on Poultry Pose Estimation Based on Multi-Parts Detection. Animals 2022, 12, 1322. [Google Scholar] [CrossRef]
  21. Fang, C.; Zhang, T.; Zheng, H.; Huang, J.; Cuan, K. Pose estimation and behavior classification of broiler chickens based on deep neural networks. Comput. Electron. Agric. 2020, 180, 105863. [Google Scholar] [CrossRef]
  22. Dawkins, M.S.; Cain, R.; Merelie, K.; Roberts, S.J. In search of the behavioural correlates of optical flow patterns in the automated assessment of broiler chicken welfare. Appl. Anim. Behav. Sci. 2013, 145, 44–50. [Google Scholar] [CrossRef]
  23. Pereira, D.F.; Miyamoto, B.C.B.; Maia, D.D.N.; Tatiana Sales, G.; Magalhaes, M.M.; Gates, R.S. Machine vision to identify broiler breeder behaviour. Comput. Electron. Agric. 2013, 99, 194–199. [Google Scholar] [CrossRef]
  24. Mehdizadeh, S.A.; Neves, D.; Tscharke, M.; Nääs, I.; Banhazi, T. Image analysis method to evaluate beak and head motion of broiler chickens during feeding. Comput. Electron. Agric. 2015, 114, 88–95. [Google Scholar] [CrossRef]
  25. Aydin, A. Development of an early detection system for lameness of broilers using computer vision. Comput. Electron. Agric. 2017, 136, 140–146. [Google Scholar] [CrossRef]
  26. Cao, L.; Xiao, Z.; Liao, X.; Yao, Y.; Wu, K.; Mu, J.; Li, J.; Pu, H. Automated Chicken Counting in Surveillance Camera Environments Based on the Point Supervision Algorithm: LC-DenseFCN. Agriculture 2021, 11, 493. [Google Scholar] [CrossRef]
  27. Schlageter-Tello, A.; Bokkers, E.; Koerkamp, P.G.; Van Hertem, T.; Viazzi, S.; Romanini, C.; Halachmi, I.; Bahr, C.; Berckmans, D.; Lokhorst, K. Comparison of locomotion scoring for dairy cows by experienced and inexperienced raters using live or video observation methods. Anim. Welf. 2015, 24, 69–79. [Google Scholar] [CrossRef] [Green Version]
  28. Van Hertem, T.; Viazzi, S.; Steensels, M.; Maltz, E.; Antler, A.; Alchanatis, V.; Schlageter-Tello, A.A.; Lokhorst, K.; Romanini, E.C.; Bahr, C.; et al. Automatic lameness detection based on consecutive 3D-video recordings. Biosyst. Eng. 2014, 119, 108–116. [Google Scholar] [CrossRef]
  29. Amraei, S.; Mehdizadeh, S.A.; Sallary, S. Application of computer vision and support vector regression for weight prediction of live broiler chicken. Eng. Agric. Environ. Food 2017, 10, 266–271. [Google Scholar] [CrossRef]
  30. Mortensen, A.K.; Lisouski, P.; Ahrendt, P. Weight prediction of broiler chickens using 3D computer vision. Comput. Electron. Agric. 2016, 123, 319–326. [Google Scholar] [CrossRef]
  31. Balogh, Z.; Magdin, M.; Molnár, G. Motion Detection and Face Recognition using Raspberry Pi, as a Part of, the Internet of Things. Acta Polytech. Hung. 2019, 16, 167–185. [Google Scholar]
  32. Khamis, A.; Meng, J.; Wang, J.; Azar, A.T.; Prestes, E.; Takács, Á.; Rudas, I.J.; Haidegger, T. Robotics and intelligent systems against a pandemic. Acta Polytech. Hung. 2021, 18, 13–35. [Google Scholar] [CrossRef]
Figure 1. Cherry Valley ducks at the experimental site.
Figure 2. The commercial Hungarian duck fattening farm with foil tents (experimental site).
Figure 3. A typical view inside the barn (experimental location).
Figure 4. Attaching the camera above the bird scale to the frame of the foil tent.
Figure 5. The weighing plates used in the experiment.
Figure 6. Image processing flowchart.
Figure 7. Artificial blur.
Figure 8. Distribution of bird weights and area occupied.
Table 1. The accuracy of the models (MAE, grams).

IQR 0 filter     | Linear Regression | Random Forest | MLP
Ellipse          | 79.1              | 76.4          | n/a
Area             | 77.6              | 64            | n/a
Area and Ellipse | 77.6              | 86.76         | 143.6

IQR 0.2 filter   | Linear Regression | Random Forest | MLP
Ellipse          | 110.97            | 97.77         | n/a
Area             | 110.6             | 105           | n/a
Area and Ellipse | 110.47            | 78.88         | 114
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
