Article

The Development of a Sorting System Based on Point Cloud Weight Estimation for Fattening Pigs

1 College of Veterinary Medicine, Nanjing Agricultural University, Nanjing 210014, China
2 Key Laboratory of Livestock Farming Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 211800, China
3 College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 211800, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(4), 365; https://doi.org/10.3390/agriculture15040365
Submission received: 1 January 2025 / Revised: 20 January 2025 / Accepted: 7 February 2025 / Published: 8 February 2025
(This article belongs to the Special Issue Modeling of Livestock Breeding Environment and Animal Behavior)

Abstract

As large-scale and intensive fattening pig farming has become mainstream, the increase in farm size has led to more severe issues related to the hierarchy within pig groups. Due to genetic differences among individual fattening pigs, those that grow faster enjoy a higher social rank. Larger pigs with greater aggression continuously acquire more resources, further restricting the survival space of weaker pigs. Therefore, fattening pigs must be grouped rationally, and the management of weaker pigs must be enhanced. This study, considering current fattening pig farming needs and actual production environments, designed and implemented an intelligent sorting system based on weight estimation. The main hardware structure of the partitioning equipment includes a collection channel, partitioning channel, and gantry-style collection equipment. Experimental data were collected, and the original scene point cloud was preprocessed to extract the back point cloud of fattening pigs. Based on the morphological characteristics of the fattening pigs, the back point cloud segmentation method was used to automatically extract key features such as hip width, hip height, shoulder width, shoulder height, and body length. The segmentation algorithm first calculates the centroid of the point cloud and the eigenvectors of the covariance matrix to reconstruct the point cloud coordinate system. Then, based on the variation characteristics and geometric shape of the consecutive horizontal slices of the point cloud, hip width and shoulder width slices are extracted, and the related features are calculated. Weight estimation was performed using Random Forest, Multilayer perceptron (MLP), linear regression based on the least squares method, and ridge regression models, with parameter tuning using Bayesian optimization. The mean squared error, mean absolute error, and mean relative error were used as evaluation metrics to assess the model’s performance. Finally, the classification capability was evaluated using the median and average weights of the fattening pigs as partitioning standards. The experimental results show that the system’s average relative error in weight estimation is approximately 2.90%, and the total time for the partitioning process is less than 15 s, which meets the needs of practical production.

1. Introduction

Pigs, as social animals, engage in social interactions that are an important aspect of their behavior, often leading to the formation of social hierarchies [1]. In the production of fattening pigs, individual genetic traits vary, and even identical feeding methods can result in different growth rates, leading to significant differences in the social hierarchies of fattening pigs. Larger, more aggressive pigs tend to continuously acquire more resources, further constraining the survival space of weaker pigs [2,3]. Therefore, in fattening pig farming, it is essential to group pigs according to the principle of “weak with weak, strong with strong” and to strengthen the management of weaker individuals within pens.
Traditional grouping methods typically combine weighbridge-based weighing with manual herding, with stockpersons periodically adjusting and regrouping the pigs. However, manual grouping is not only time-consuming and labor-intensive but also inevitably induces stress in the pigs, leading to unnecessary losses. Moreover, the re-establishment of hierarchical order after regrouping can further intensify social oppression within the group, negatively affecting production efficiency [4]. With the advancement of modern technology and the progress of the livestock industry, farming models integrating automatic weighing have emerged, promoting the development of automated grouping systems; for example, the German company Big Dutchman designed a grouping system with Radio Frequency Identification (RFID)-based pig identification, featuring weighbridge weighing, paint marking, and automatic sorting. However, most existing grouping equipment still relies on weighbridge-based weighing, which can easily induce stress responses, increase disease risk, and suffer from reduced weighing accuracy and lifespan due to contamination from manure.
Nowadays, with the development of big data and image processing technologies, contactless weight estimation has gained widespread attention. Traditional weighbridge-based weighing systems are prone to contamination from manure, leading to reduced accuracy and shortened service life. The use of contactless weight estimation technology enables real-time, automated weight monitoring, reducing labor and time costs while improving farm productivity. Dang et al. compared machine learning algorithms, including Multilayer perceptron (MLP), k-Nearest Neighbors (k-NN), the Gradient Boosting Machine (GBM), TabNet, and FT-Transformer, using ten body measurements as input features to estimate cattle live weight [5]. He et al. utilized Red, Green, Blue-Depth (RGB-D) images to design a sheep live weight estimation method based on a lightweight, high-resolution network [6]. Compared to image processing, point cloud data contain richer geometric information, offering higher precision and applicability. Condotta et al. collected depth images of 234 pigs across three breeds—Landrace, Duroc, and Yorkshire—and calculated the volume of pig models to build a linear regression model, achieving an average error of 4.6% [7]. Okayama et al. used point cloud data to measure the volume and spine curvature of 150 pigs, quantitatively evaluating their posture [8]. Kwon et al. addressed the challenges of noisy or incomplete point cloud data from 70 pigs by generating training data through mesh reconstruction and developing a deep neural network for weight estimation [9]. Shi et al. developed a pig weight prediction platform based on a binocular stereo vision system and Laboratory Virtual Instrument Engineering Workbench (LabVIEW), enabling the automatic extraction of the back area and intelligent weight prediction [10]. Cang et al. used the back area from top-view depth images as input and output for pig weight estimation, using a Fast Region-based Convolutional Neural Network (Fast-RCNN) with an additional regression branch to perform pig identification, localization, and weight estimation simultaneously [11]. Ling et al. calculated four parameters—body length, height, width, and abdominal girth—in both standard and non-standard postures, demonstrating that body length measurements in standard posture exhibit higher stability and consistency [12].
The adoption of contactless weight estimation methods helps reduce stress in pigs and improve their welfare. Scholars, both domestically and internationally, have conducted extensive research on pig weight estimation. However, much of this research remains at the laboratory stage, lacking the development of intelligent grouping systems for fattening pigs that integrate weight estimation technology. Compared to traditional automated grouping equipment, grouping systems incorporating contactless weight estimation do not require human intervention, reducing pig stress, improving animal welfare, and enhancing feed conversion ratios and pork quality. Zhang et al. developed a fattening pig grouping system based on two-dimensional image processing and machine vision technology, demonstrating through experiments that it can reduce labor costs [13]. Two-dimensional images have been widely used for livestock monitoring, but they are limited by factors such as lighting conditions, angle variations, and occlusions. In contrast, three-dimensional (3D) point cloud data provide more detailed spatial information, enabling more accurate body measurements and weight estimation. Therefore, this study developed a fattening pig grouping system based on 3D point cloud weight estimation, integrating edge devices and cloud servers, hardware design, and algorithm development to construct a weight estimation model and build an application platform for the grouping system. This system enables contactless weight estimation and intelligent grouping management for fattening pigs. The development of this intelligent grouping system contributes positively to advancing agriculture toward digitization and automation, improving the management level of the farming industry.

2. Materials and Methods

2.1. Experimental Data Collection

2.1.1. Animals and Data Collection Equipment

The video data for this study were collected at Jiaze Farm, Changzhou City, Jiangsu Province, China, from 2 to 5 September 2023, covering the body weight and depth video data of 50 fattening pigs. A gantry-like structure was installed above the collection channel to carry the camera system. The lens of the Kinect depth camera (Microsoft Corp., Redmond, WA, USA) was mounted 1.7 m above the ground and pointed vertically downward, so that the structure did not contact the fattening pigs or the partitioning equipment and did not interfere with the movement of the pigs. The installation diagram is shown in Figure 1.

2.1.2. Data Preprocessing

After preliminary screening, 204 valid video segments were obtained, excluding invalid videos caused by the excessive bending of pig postures or empty channels. For each video segment, RGB-D data were extracted every 5 s, resulting in a color image and a depth image, generating a total of 1488 pairs of RGB-D data samples. The sample images were inspected, and those affected by inaccurate cropping times or redundant cropping were removed. Through the use of the camera’s internal parameter matrix, the color images and depth images were registered and fused, resulting in 1246 original scene point cloud data samples. The transformation method is shown in Equation (1):
Q = D \cdot K^{-1} \cdot P
where Q represents the coordinates of each point in the point cloud, D is the depth image pixel value, K is the camera’s internal parameter matrix, and P is the homogeneous vector of the depth image pixel coordinates.
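For illustration, a minimal sketch of this back-projection in Python/NumPy is given below; the intrinsic matrix values are placeholders rather than the calibrated Kinect parameters.

```python
import numpy as np

def depth_to_point_cloud(depth_m: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Back-project each valid depth pixel to a 3D point via Q = D * K^-1 * P."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Homogeneous pixel coordinates P = [u, v, 1]^T, one column per pixel.
    P = np.stack([u.ravel(), v.ravel(), np.ones(h * w)], axis=0)
    D = depth_m.ravel()
    valid = D > 0                                    # drop pixels with no depth return
    Q = np.linalg.inv(K) @ P[:, valid] * D[valid]    # 3 x N points, in metres
    return Q.T                                       # N x 3 array

# Placeholder intrinsics for illustration only (not the calibrated camera values).
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
```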

2.2. Point Cloud Preprocessing

Due to hardware limitations and external environmental interference, such as the ground and railings present in the collection environment, the generated point cloud often contains a large amount of noise and many outliers, which directly degrade the quality of subsequent point cloud processing. Point cloud denoising must remove this noise while preserving the geometric features of the model and is crucial for the performance of downstream tasks. Therefore, several filtering algorithms are applied to remove these interferences and facilitate the extraction of a suitable pig point cloud.

2.2.1. Spatial Pass-Through Filtering

Pass-through filtering is a commonly used method for the coarse processing of point cloud data and removes invalid points that lie far from the main body of the cloud. Based on the spatial distribution of noise points in the collection environment, a pass-through filter was first defined to directly extract the region of interest within the fattening pig collection channel. The region of interest is a rectangular volume centered on the collection channel; point clouds outside the channel and on the ground are filtered out, while the key pig point cloud and the channel point cloud are retained. The effect of pass-through filtering is shown in Figure 2.
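A minimal pass-through (region-of-interest crop) sketch with Open3D is shown below; the box bounds are illustrative values, not the actual channel dimensions.

```python
import numpy as np
import open3d as o3d

def pass_through_filter(pcd: o3d.geometry.PointCloud,
                        x_range, y_range, z_range) -> o3d.geometry.PointCloud:
    """Keep only the points inside the axis-aligned box spanned by the three ranges."""
    box = o3d.geometry.AxisAlignedBoundingBox(
        min_bound=np.array([x_range[0], y_range[0], z_range[0]]),
        max_bound=np.array([x_range[1], y_range[1], z_range[1]]))
    return pcd.crop(box)

# Usage (example bounds only): restrict the scene to the channel above the floor.
# pig_roi = pass_through_filter(scene_pcd, (-1.0, 1.0), (-0.4, 0.4), (0.2, 1.6))
```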

2.2.2. Statistical Outlier Filtering

Many random noise points remain within the retained collection space, and in some samples the pig back point cloud adheres to the channel, which significantly interferes with subsequent algorithms. In this study, statistical outlier filtering based on point-to-neighbor distances was used to remove these outliers.
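The following Open3D sketch illustrates statistical outlier removal; nb_neighbors and std_ratio are assumed values, not the parameters tuned in this study.

```python
import open3d as o3d

def remove_statistical_outliers(pcd: o3d.geometry.PointCloud) -> o3d.geometry.PointCloud:
    # For each point, the mean distance to its neighbours is compared with the
    # global distribution; points beyond std_ratio standard deviations are removed.
    filtered, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return filtered
```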

2.2.3. Point Cloud Clustering

To remove the background and residual noise points, the statistically filtered point cloud must be clustered. In this study, density-based spatial clustering of applications with noise (DBSCAN) was used. DBSCAN is a typical density-based spatial clustering algorithm: it groups regions of sufficient density into clusters, finds arbitrarily shaped clusters in spatial databases containing noise, and defines a cluster as the largest set of density-connected points. Compared with the K-Means algorithm, DBSCAN does not require the number of clusters, K, as an input and can handle clusters of different sizes and shapes.
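A short DBSCAN sketch with Open3D is given below; the eps and min_points defaults follow the values selected in Section 3.1, and the rule of keeping the largest cluster as the pig back follows the same section.

```python
import numpy as np
import open3d as o3d

def extract_largest_cluster(pcd: o3d.geometry.PointCloud,
                            eps: float = 0.008, min_points: int = 5):
    """Cluster with DBSCAN and keep the largest cluster as the pig back point cloud."""
    labels = np.asarray(pcd.cluster_dbscan(eps=eps, min_points=min_points))
    valid = labels[labels >= 0]                 # label -1 marks noise points
    if valid.size == 0:
        return None                             # nothing but noise was found
    largest = np.bincount(valid).argmax()       # cluster with the most points
    return pcd.select_by_index(np.where(labels == largest)[0])
```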

2.2.4. Point Cloud Downsampling

The point cloud retains the key features of the fattening pigs, but its large data volume limits broader application: the generated back point clouds are large, and the resulting processing times cannot meet the needs of practical production. Therefore, before the weight estimation model is studied, the extracted pig back point cloud needs to be downsampled. Voxel downsampling is a commonly used data processing technique, especially in 3D image processing and computer graphics [14]. In this study, voxel downsampling was applied to the back point cloud of the pigs, reducing the point density so that fewer points remain while the key contour features are retained.
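A voxel downsampling sketch with Open3D follows; the 0.01 voxel size matches the example reported later in Section 3.1.

```python
import open3d as o3d

def downsample_back(pcd: o3d.geometry.PointCloud,
                    voxel_size: float = 0.01) -> o3d.geometry.PointCloud:
    # All points falling into one voxel are replaced by their centroid, which
    # thins the cloud while preserving the overall back contour.
    return pcd.voxel_down_sample(voxel_size=voxel_size)
```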

2.3. Key Feature Extraction

During the growth and development of animals, there is a positive correlation between pig body weight and body measurements. Before the weight of fattening pigs can be estimated, it is necessary to obtain the key features highly correlated with body measurements from the point cloud. Based on the morphological characteristics of the animal, key points are located to achieve point cloud segmentation and feature extraction.

2.3.1. Feature Definition

Through discussions with farm personnel and consultation of relevant pig herd management guidelines, five features (hip width, shoulder width, hip height, shoulder height, and body length) were selected as the parameters for estimating the weight of fattening pigs. Based on the physical characteristics of fattening pigs, the back point cloud was divided into several regions, as shown in Figure 3, including the head, neck, and shoulder (A), the thorax and abdomen (B), and the hindquarters (C). The planes perpendicular to the ground through the widest points of the shoulder and hip were defined as segmentation planes a and b, respectively. Accordingly, regions A and B were divided by the widest point of the shoulder (plane a), while regions B and C were divided by the widest point of the hip (plane b). The five features were defined as the training parameters for the weight estimation model; their specific definitions are given in Table 1.

2.3.2. Research on Feature Extraction Algorithms

Reconstructing a standardized coordinate system for the fattening pig point cloud facilitates subsequent feature extraction using cross-sectional slices. Pig back point clouds captured from different angles were transformed into a standardized coordinate system, with the ground as the plane z = 0, the x-axis parallel to the pig's torso and pointing toward the head, and the y-axis perpendicular to the torso. The reconstructed coordinate system is shown in Figure 4.
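The coordinate-system reconstruction can be sketched as follows, using the centroid and the eigenvectors of the covariance matrix; the mapping of eigenvectors to axes and the sign conventions are assumptions for illustration rather than the exact implementation.

```python
import numpy as np

def reorient_point_cloud(points: np.ndarray) -> np.ndarray:
    """Express an N x 3 point cloud in a frame defined by its principal axes."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = np.cov(centered.T)                    # 3 x 3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    # Order the axes so that x carries the largest variance (torso direction)
    # and z the smallest (roughly vertical for a back surface).
    rotation = eigvecs[:, ::-1]
    return centered @ rotation
```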
After the point cloud was aligned with this coordinate system, the positions of segmentation planes a and b were determined using point cloud cross-sectional slicing so that the features could be calculated. The back point cloud of the fattening pig was cut into thin slices along the x-axis at 3 mm intervals, and the span of each slice in the y-axis direction was calculated. Using the position of each slice along the x-axis as the horizontal coordinate, a curve of the y-coordinate span of each point cloud slice was obtained (as shown in Figure 5). To locate the widest points of the hip and shoulder, polynomial fitting was applied to obtain a fitted curve. The fitted curve exhibits three significant maxima, at the hip, shoulder, and head positions of the pig.
For pig point cloud samples with unsuitable posture features such as bending, redundancy, or incompleteness (as shown in Figure S1), the point cloud cross-sectional slicing method cannot obtain point cloud features. Therefore, after the fitting function was obtained, it was necessary to add steps to appropriately determine the number of extreme points and apply threshold judgment to eliminate unqualified samples. The judgment process for unqualified point clouds is shown in Figure S2.
Based on the fitted curve of the qualified point cloud, the minimum point before the third maximum point was taken as the point cloud segmentation plane for removing the head and extracting the back point cloud of the pig. Starting from the negative x-axis, the point cloud slice corresponding to the first maximum point on the x-axis was defined as the hip width slice, i.e., segmentation plane b. Similarly, the point cloud slice corresponding to the second maximum point was defined as the shoulder width slice, i.e., segmentation plane a. After the hip width and shoulder width point cloud slices were obtained, the key features were calculated. The y-axis span of the hip width slice was taken as the hip width (W1), and similarly, the shoulder width (W2) was obtained. The highest point within each point cloud slice cluster was calculated, with the maximum z-axis coordinate within the cluster taken as the height of the point cloud slice, i.e., hip height (H1) and shoulder height (H2). The distance between the two slices was calculated as the body length (L1).
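A simplified sketch of this slicing procedure is given below: the reoriented back cloud is cut into 3 mm slices along the x-axis, the y-span of each slice is fitted with a polynomial, and candidate widths and heights are read off at the detected extrema. The polynomial degree and the extremum-handling details are assumptions consistent with the description above, not the exact implementation.

```python
import numpy as np

def slice_span_features(points: np.ndarray, step: float = 0.003, degree: int = 8):
    """Return slice centres, y-spans, slice heights, and indices of local maxima."""
    x = points[:, 0]
    edges = np.arange(x.min(), x.max() + step, step)
    centres, spans, heights = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points[(x >= lo) & (x < hi)]
        if len(sl) < 5:                                 # skip nearly empty slices
            continue
        centres.append((lo + hi) / 2.0)
        spans.append(sl[:, 1].max() - sl[:, 1].min())   # y-span (candidate width)
        heights.append(sl[:, 2].max())                  # highest point (candidate height)
    centres, spans, heights = map(np.array, (centres, spans, heights))
    fitted = np.polyval(np.polyfit(centres, spans, degree), centres)
    # Local maxima of the fitted curve: candidate hip, shoulder, and head slices.
    maxima = np.where((fitted[1:-1] > fitted[:-2]) & (fitted[1:-1] > fitted[2:]))[0] + 1
    return centres, spans, heights, maxima
```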

2.4. Research on Weight Estimation Model Based on Feature Parameters

2.4.1. Research on Weight Estimation Model Based on Random Forest

Random Forest is a classic Bagging model whose weak learners are decision tree models [15]. The Random Forest model has good generalization performance, can effectively reduce the risk of overfitting, and can handle missing values and outliers. It also has a strong ability to fit data with nonlinear relationships.
The model parameters were determined through Bayesian optimization. Bayesian optimization, based on Bayes’ theorem, optimizes a black-box function by modeling the posterior distribution of the objective function to infer the region most likely to contain the global optimum [16]. The tuning parameters include the number of decision trees, the maximum depth, the minimum number of samples required to split a node, and the minimum number of samples required for a leaf node.
The Random Forest model was trained with the parameters listed in Table 3. The mean squared error (MSE), mean absolute error (MAE), and mean relative error (MRE) were used as evaluation metrics to assess the model's performance. The error calculation formulas are as follows:
MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|
MRE = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|
In the formulas, n represents the number of samples, y i is the true value, and y ^ i is the predicted value.
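As a sketch of this step, the three metrics above and a Random Forest regressor configured with the fixed hyperparameter values later listed in Table 3 can be written as follows; the Bayesian optimization loop itself (e.g., with a library such as scikit-optimize) is omitted, and the train/test handling mirrors the 8:2 split described in Section 3.3.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def evaluate(y_true: np.ndarray, y_pred: np.ndarray):
    """Compute MSE, MAE, and MRE as defined above."""
    err = y_true - y_pred
    return np.mean(err ** 2), np.mean(np.abs(err)), np.mean(np.abs(err / y_true))

def train_random_forest(X: np.ndarray, y: np.ndarray):
    """X holds the five body features per sample; y holds the measured weights (kg)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=533, max_depth=20,
                                  min_samples_split=94, min_samples_leaf=1)
    model.fit(X_tr, y_tr)
    return model, evaluate(y_te, model.predict(X_te))
```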

2.4.2. Research on Weight Estimation Model Based on Multilayer Perceptron

Multilayer perceptron (MLP) is one of the most common artificial neural network (ANN) models [17]. Bayesian optimization was used to tune the algorithm, with parameters including the number of neurons in the first hidden layer, the number of neurons in the second hidden layer, the initial learning rate, the regularization parameter, the number of samples trained per iteration, and the maximum number of iterations.
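For reference, an MLP regressor configured with the tuned values later reported in Table 3 might look as follows; this is an illustrative configuration rather than the exact training script.

```python
from sklearn.neural_network import MLPRegressor

# Two hidden layers sized per Table 3; remaining settings are scikit-learn defaults.
mlp = MLPRegressor(hidden_layer_sizes=(146, 60),
                   learning_rate_init=0.0001,
                   alpha=0.011,
                   batch_size=52,
                   max_iter=431)
```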

2.4.3. Research on Weight Estimation Model Based on Linear Regression and Ridge Regression

Linear regression based on the least squares method (least squares regression) is a common linear regression approach and one of the simplest and most intuitive regression methods [18]. Its core idea is to fit the data by minimizing the sum of squared residuals between the observed values and the model’s predicted values. It is characterized by simplicity, intuitiveness, and strong interpretability but is prone to overfitting. Ridge regression is a commonly used linear regression method that adds an L2 regularization term to the ordinary least squares method. Adding the regularization term restricts the size of the model parameters, thereby reducing the variance in the model to solve multicollinearity and overfitting issues, making it more robust to outliers and noise [19].
The extracted samples were preprocessed by normalization, and Bayesian optimization was used for algorithm tuning experiments. The tuning parameters include the number of iterations and the regularization parameter.
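A minimal sketch of the two linear baselines with input normalization is shown below, using the degree and regularization values later listed in Table 3; the exact preprocessing pipeline is an assumption.

```python
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Least-squares baseline with degree-2 polynomial features (Table 3), and a
# ridge baseline with alpha = 0.0001; both normalize the inputs first.
least_squares = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2),
                              LinearRegression())
ridge = make_pipeline(StandardScaler(), Ridge(alpha=0.0001))
```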

2.5. Sorting System Equipment Construction

The fattening pig sorting system is designed for group-housed fattening pig sheds, taking into consideration the size of the pig shed and the high-temperature, high-humidity, and dusty production environment. Materials that are heat-resistant, dustproof, and waterproof are used to ensure the stability and reliability of the partitioning equipment. To support the use of the fattening pig sorting system, a play area and two feeding areas are set up separately, with feeders placed in each feeding area. The partitions between the three areas are modified, with an opening left in the middle of the railing for installing the partitioning equipment. The top and bottom ends of the railing are modified into two one-way gates, allowing the fattening pigs to leave through this channel after feeding. Toys and water dispensers are set up in the play area to attract pigs back after feeding. When a fattening pig needs to eat, it enters the data collection channel of the equipment, and after data collection, weight estimation, and fat evaluation, it proceeds to the feeding area through the corresponding exit passage. Once the weight estimation result is obtained, the sorting system uploads the weight information to the database. Every day at midnight, the sorting system automatically reviews the weight data of all pigs from the previous day in the database, calculates the mean weight of the group as the partitioning standard, and saves it to the database. The operation diagram and flowchart of the fattening pig partitioning equipment are shown in Figure 6.

2.5.1. Hardware Design

A 3D diagram of the designed fattening pig sorting equipment is shown in Figure 7. The main body of the equipment is divided into three parts: the collection channel, the partitioning channel, and the gantry-style data collection equipment, which, respectively, achieve the functions of restricting fattening pig movement, intelligent partitioning, and data collection. The entire equipment is made of galvanized stainless steel to enhance its resistance to the high-temperature and high-humidity environment of pig farms, significantly extending its service life and strength. The specific design of the fattening pig sorting equipment is shown in Figures S3–S5.
The main hardware components of the fattening pig partitioning equipment include a control circuit board centered on the STM32F103ZET6 chip (manufactured by STMicroelectronics, Geneva, Switzerland), solenoid valves (manufactured by Delixi Electric Co., Ltd., Yueqing, Zhejiang, China), cylinders, various sensors (manufactured by AOTORO, Yueqing, Zhejiang, China), a computer, a Message Queuing Telemetry Transport (MQTT) edge computing gateway (manufactured by USR IOT Technology Co., Ltd., Jinan, Shandong, China), and temperature and humidity sensors (manufactured by Renke Control Technology Co., Ltd., Jinan, Shandong, China).
The fattening pig sorting equipment uses a 220 V power supply from the pig shed. Considering the power requirements of equipment such as cameras, sensors, the main controller, and the computing gateway, the hardware connection diagram is designed as shown in Figure 8. The main function of the control board is to connect and control electronic components such as sensors and solenoid valves, enabling sensor data collection and controlling the movement process of fattening pigs. It also interacts with the cloud server through the MQTT edge computing gateway, enabling data reporting and command reception, thus allowing operators to control and access the equipment via a web interface remotely.

2.5.2. Software Design

The software design of the fattening pig sorting equipment mainly comprises the main program and subprograms of the main control board. The main program begins by initializing the entire driver, including Input/Output (IO) port initialization, interrupt initialization and priority setting, delay function initialization, and RS-485 serial communication initialization. The initialization module of the main program is shown in Figure S6. The subprograms include the pig detection module, the anti-pinch module, and the three cylinder control modules; their designs are shown in Figures S7–S9.

3. Results

3.1. Point Cloud Extraction Results

For DBSCAN clustering, the impact of the ‘eps’ and ‘min_points’ parameters on clustering performance and computation speed was compared, and the experimental results are shown in Figure 9. Figure 9a shows that ‘eps’ is inversely related to the number of resulting clusters and directly related to computation time. The clustering performance for ‘eps’ values of 0.002, 0.004, 0.008, 0.012, 0.016, and 0.028 was then compared, as shown in Figure 10a–f. When ‘eps = 0.002’, a large portion of the point cloud could not be classified. When ‘eps = 0.028’, the DBSCAN algorithm merged the pig back point cloud and the railing of the collection channel into one cluster, failing to separate the pig back from the railing. For ‘eps’ values between 0.004 and 0.016, the clustering computation time increased significantly, but the effectiveness of point cloud extraction differed little; however, when ‘eps = 0.004’, some edge points of the pig back point cloud were assigned to other clusters, losing edge information. The experimental results indicate that ‘eps = 0.008’ is an appropriate parameter value.
Figure 9b shows that the effect of ‘min_points’ on the number of clusters first decreases and then increases, while its effect on computation time increases, decreases, and then increases again. An experiment was then conducted to analyze the effect of different ‘min_points’ values on DBSCAN clustering with ‘eps = 0.008’. Because ‘min_points’ has a more noticeable impact on clustering time, the clustering performance was compared under three settings: ‘min_points = 5’, ‘30’, and ‘35’. The results show that for ‘min_points = 30’ and ‘35’, small portions of the edge of the pig back point cloud were assigned to other clusters, as shown in Figure 10g–i. Therefore, this study uses ‘eps = 0.008’ and ‘min_points = 5’ as the DBSCAN clustering parameters.
To select the back point cloud of fattening pigs from the clustering results, considering the sample characteristics, the largest cluster in the clustering results was defined and saved as the pig back point cloud. The extraction results are shown in Figure 11a.
For the 1246 sets of point clouds, after filtering and clustering, and the extraction of the pig back point cloud, 1174 complete back point cloud samples were obtained, 9 samples were incomplete, and 63 samples still contained noise points and other interference such as railings that were not removed. The experiment shows that through the use of DBSCAN clustering, the effective data sample collection rate obtained is 94.22%.
Compared to other downsampling methods, voxel downsampling is more efficient and results in evenly distributed sampling points, but the number of collected points is uncontrollable. In this study, the pig back point cloud collected was homogeneous, and the number of points was limited, making the impact of the downsampled point count on subsequent processing negligible. For example, in one sample, the voxel size was set to 0.01, with the number of points before processing being 217,625 and after sampling being 5980. The sampling results are shown in Figure 11b. The results show that after downsampling, the number of points was effectively reduced, while the key contour information of the point cloud was retained.

3.2. Weight Estimation Model Results

For 63 redundant samples and 63 normal samples, the y-span threshold for identifying redundant samples was verified. After algorithm processing, 8 redundant samples were automatically eliminated because they did not have more than two extreme points, leaving 55 redundant samples. The scatter plot of the y-span distribution for the remaining redundant and normal samples is shown in Figure 12, and the classification results under different thresholds are shown in Table 2. The results indicate that the precision for thresholds between 0.45 and 0.55 was 100%, but the recall was below 90%, showing that these thresholds effectively retained normal point clouds while misclassifying some redundant samples as normal. For thresholds of 0.3 and 0.35, the precision was below 90% while the recall was above 90%, indicating that many normal samples were misclassified as redundant. A threshold of 0.4 was ultimately chosen to balance effective data collection efficiency and accuracy.
Through the point cloud cross-sectional slicing method, combined with the morphological analysis of pigs, automatic feature parameter collection and the automatic elimination of abnormal pig point cloud data can be achieved. For the 1246 clustered samples obtained after point cloud filtering, clustering, and downsampling, 847 data samples were extracted after negative samples were removed.

3.3. Analysis of Weight Estimation Model Results

The extracted 847 data samples were divided into training and test sets in an 8:2 ratio and then normalized. Bayesian optimization was used to determine the parameters for each model, as shown in Table 3. A relatively large range was set to maximize the likelihood of finding the optimal parameters.
Using the extracted feature parameters from 847 samples, a comparative analysis of the training results of the Random Forest, MLP, linear regression, and ridge regression models was conducted. The average relative errors for the four models were 4.48%, 5.04%, 4.54%, and 4.64%, respectively, as shown in Table 4. Based on the model evaluation results during training, a comparison of the evaluation results for the fattening pig weight estimation models was obtained.
The experimental results show that the MLP model fits the data worse than the other three models, with the highest MSE, MAE, and MRE. Compared with the Random Forest model, the linear regression and ridge regression models show similar MAE and MSE but a higher MRE, indicating that they produced larger relative errors at some data points, which raises the MRE even though the absolute error metrics remain comparable.
Thirty prediction results were randomly selected from the test set for analysis, comparing the true weight values with the estimated values. The true values were arranged in descending order, and a scatter plot of the true values and the calculated values from each model is shown in Figure 13a. Then, a relative error plot comparing the predicted values from the four models with the actual weight for each point cloud input was created, as shown in Figure 13b.
The experimental results indicate that the linear regression and ridge regression models are less capable than the Random Forest model of handling samples with unusually large or small weights. Therefore, this study chose the Random Forest model as the weight estimation model for fattening pigs. The model achieved an MRE of 4.48%, an MAE of 5.23 kg, and an MSE of 37.63 kg².
After obtaining the weight estimation model, the 847 data samples were analyzed using the median and average weights as the partitioning criteria for fat and lean classification. In normal fattening pig farming, it is necessary to control the weight difference among pigs from the same litter to be less than 10 kg, and the daily feed intake of an individual pig is about 3 to 3.5 kg, leading to fluctuations in body weight. Based on the experience of farm personnel, it was considered that samples within the range of ± 9 kg of the median could be grouped into any partition. The median and average weights of the dataset were 119.5 kg and 116 kg, respectively. A total of 130 samples met the criteria under this classification standard. After calculations, the accuracy of using the median weight and the average weight as classification standards was 90.77% and 91.54%, respectively. Therefore, using the average weight as the partitioning criterion under this standard provides better classification results.
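A sketch of the partition-accuracy check described here is given below, under the stated assumption that a sample counts as correctly grouped when the estimated and true weights fall on the same side of the threshold, or when the true weight lies within ±9 kg of the median.

```python
import numpy as np

def partition_accuracy(true_w: np.ndarray, pred_w: np.ndarray,
                       threshold: float, tolerance: float = 9.0) -> float:
    """Fraction of samples routed to an acceptable pen for a given threshold."""
    median = np.median(true_w)
    same_side = (true_w >= threshold) == (pred_w >= threshold)
    within_band = np.abs(true_w - median) <= tolerance   # may go to either pen
    return float(np.mean(same_side | within_band))

# Example: compare the median (119.5 kg) and mean (116 kg) as thresholds.
# acc_median = partition_accuracy(weights, estimates, 119.5)
# acc_mean = partition_accuracy(weights, estimates, 116.0)
```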

3.4. Sorting System Testing

For functional testing, performance testing, and continuous operation testing, the test contents and results are shown in Table S1. For the functional tests, the response time for four tests of device positioning, leaving detection, and partitioning functions was recorded (as shown in Table S2), with average times of 3.26 s, 4.20 s, and 0.84 s, respectively. For the performance tests, multiple control commands were sent quickly, and the commands were executed without conflict, with all functional modules operating normally. After one hour of continuous operation testing, a simulated partitioning command was sent according to the workflow. The experiment showed that all functions of the partitioning equipment met the design requirements, the structure was stable, and it complied with practical design standards.

3.5. Field Test Results

The test content mainly includes data collection, processing, and weight estimation. First, the system uses the positioning judgment module to achieve the automatic collection of the fattening pig point cloud. Then, through point cloud data processing, the back point cloud of the fattening pig is extracted. Finally, the weight estimation model provides the partitioning judgment result and opens the corresponding partitioning channel.
Because point clouds are filtered automatically and the system's robustness to interference needs to be improved, the system takes multiple shots in a single run to obtain several point clouds for processing. The collection and processing steps are therefore separated, and a multi-threaded architecture is used to optimize the system. When a fattening pig enters the collection channel, the main thread only performs collection, taking three consecutive shots and saving the corresponding RGB-D data. Processing and weight estimation are handled in parallel by subroutines, and, after negative samples are eliminated, the average of the three results is taken to improve the accuracy of weight estimation. After these improvements, the average data collection and processing time was approximately 10.61 s; collecting three times the data increased the time by only 3.88 s, effectively improving data processing efficiency. The specific times are shown in Table S3.
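A minimal sketch of this collection/processing split is shown below: the main thread only hands frames over, while a worker thread runs the processing pipeline in parallel. The capture and estimation functions here are placeholders, not the system's actual implementation.

```python
import queue
import threading

frames = queue.Queue()

def estimate_weight(rgbd):
    """Placeholder for point cloud extraction, feature extraction, and estimation."""
    return rgbd.get("weight")          # stand-in result; None marks a negative sample

def worker():
    results = []
    for _ in range(3):                 # three shots are processed per pig
        w = estimate_weight(frames.get())
        if w is not None:              # negative samples are discarded
            results.append(w)
    if results:
        print("mean estimate (kg):", sum(results) / len(results))

t = threading.Thread(target=worker)
t.start()
for i in range(3):                     # main thread: capture only, then hand off
    frames.put({"weight": 115.0 + i})  # stand-in for a captured RGB-D frame
t.join()
```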
After testing, the sorting system’s actuators performed reliably, successfully executing all commands without any failures or unexpected behavior. The weight estimation time and the total time were less than 12 s and 15 s, respectively, with an average relative error of approximately 2.90%, meeting the expected results. The test results are shown in Table 5. Images of the equipment during the field test and the webpage interface are shown in Figure 14.

4. Discussion

Exploring the relationship between body measurements and body weight is a common method in weight estimation research. By analyzing the relationship between livestock body measurements and known weights, a regression model is established to achieve weight estimation. Fernandes et al. designed an automatic computer vision system to extract body measurement features and predict pig weight [20]. Buayai et al. used boundary detection to extract features from top-view images taken from a feeder and used an artificial neural network to estimate weight [21]. Dohmen et al. selected 26 studies as primary research, finding that seven features—the top-view area, shoulder height, hip height, body length, hip width, volume, and chest girth—were widely used in weight estimation studies [22].
Owing to its flexible field of view, Microsoft's depth camera series is well suited to livestock research, adapting to different farming scenarios and animal sizes and thus enabling more accurate data collection and analysis. Kongsro proposed a prototype pig-weighing system based on the Microsoft Kinect camera that uses depth images from its photoelectric sensor, which is less affected by factors such as lighting and the environment than RGB cameras [23]. Pezzuolo et al. proposed a rapid, non-contact method for taking pig body measurements using the Microsoft Kinect v1 depth camera and, by comparing the predictions of two models with manual measurements, reduced the mean absolute error by over 40% [24]. Salau et al. developed a multi-camera system consisting of six Microsoft Kinect 3D cameras for monitoring dairy cows and obtaining complete cow models [25]. Menesatti et al. developed a low-cost stereoscopic vision system that used a partial least squares (PLS) model for sheep weight estimation and compared manually measured data with the data extracted by stereoscopic vision [26].
This study used Large White pigs in the fattening stage as the research subject, constructing a weight estimation model based on point clouds of the fattening pigs’ backs and developing a fattening pig sorting system based on 3D point cloud weight estimation, achieving some progress. However, the following issues require further study:
(1)
This study develops a weight estimation model for finishing pigs based on the current dataset. Although we have made efforts to ensure the representativeness of the data, we acknowledge that the diversity of the existing dataset remains limited. Specifically, the current dataset primarily focuses on weight data within a specific range of finishing pigs, which may not fully capture the potential impact of weight variations on the partitioning method. Expanding the diversity of the dataset is highly valuable, and we plan to extend the dataset in future research to include a broader weight range and pigs at different growth stages, to evaluate the applicability of the partitioning method in a wider range of scenarios;
(2)
Considering the large data volume and high computational complexity of point clouds, an industrial computer needs to be deployed on the farm, which is costly. In the future, the RGB-D image pairs can be uploaded for computation on a cloud server, or deployment research can be conducted on edge computing devices to reduce equipment costs. For farms located in remote areas with limited or intermittent internet connectivity, this could be challenging. However, solutions such as satellite or wireless communication technologies (e.g., 4G/5G or LoRa) can provide reliable internet connectivity even in rural regions. Alternatively, edge computing devices can be used to process data locally, thus minimizing the reliance on continuous cloud connectivity. These edge devices would collect and preprocess data on-site, transmitting only essential information or updates to the cloud when a stable internet connection is available. In terms of cost-effectiveness, the initial investment required for such a system might be a barrier for smaller farms or those with limited budgets. Long-term savings in labor and increased efficiency, however, could offset these costs, making it a viable solution in the future;
(3)
Multiple depth cameras can be deployed to collect data from different angles, and a registration and fusion algorithm for 3D point clouds of fattening pigs can be studied to obtain a complete pig point cloud. The full 3D point cloud of a pig can improve segmentation accuracy, further enhancing feature extraction and posture determination accuracy, thereby improving weight estimation accuracy. Additionally, as the commercial pig industry primarily uses hybrid breeds, which may exhibit different growth patterns and behaviors, the system’s adaptability to various pig breeds, including hybrids, should be explored in future studies to ensure its broad applicability.

5. Conclusions

This study utilized 3D point cloud weight estimation to achieve the intelligent penning of fattening pigs. Noise points were removed through preprocessing techniques such as pass-through and statistical filtering, while DBSCAN clustering was used to extract the back point cloud of the pigs. Voxel downsampling was applied to reduce computational demands. A segmentation algorithm was developed to analyze the back point cloud based on the morphological characteristics of fattening pigs, enabling the definition and extraction of key features. Weight estimation models were constructed by comparing the performance of the Random Forest, MLP, linear regression, and ridge regression models. Real-time point cloud data were collected using an industrial computer to obtain weight estimation results. Field tests showed that the total process took less than 15 s, with an average relative error of approximately 2.90%.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/agriculture15040365/s1, Figure S1: Unqualified point cloud samples; Figure S2: Unqualified point cloud judgment process; Figure S3: A 3D diagram of the acquisition channel; Figure S4: A 3D diagram of the column channel; Figure S5: A 3D diagram of the gantry-type collection device; Figure S6: The main program is initialized; Figure S7: In-place judgment flowchart; Figure S8: Anti-pinch flow chart; Figure S9: Cylinder control flow chart; Table S1: Column test results; Table S2: Response time statistics of device function tests in columns; Table S3: Results of point cloud collection and processing time of fattening pigs.

Author Contributions

Conceptualization, L.L. (Luo Liu) and Y.O.; methodology, L.L. (Luo Liu), Y.O. and Z.Z.; software, L.L. (Luo Liu) and Y.O.; validation, L.L. (Luo Liu) and Z.Z.; formal analysis, L.L. (Luo Liu); investigation, Y.O. and Z.Z.; resources, L.L. (Longshen Liu); data curation, Y.O. and Z.Z.; writing—original draft preparation, L.L. (Luo Liu) and Y.O.; writing—review and editing, L.L. (Luo Liu); visualization, L.L. (Luo Liu) and Y.O.; supervision, R.Z., M.S. and L.L. (Longshen Liu); project administration, M.S. and L.L. (Longshen Liu); funding acquisition, R.Z. and L.L. (Longshen Liu). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Jiangsu Provincial Key Research and Development Program (Grant No: BE2021363) and Jiangsu Province Modern Agricultural Machinery Equipment and Technology Demonstration and Promotion Project (Grant No: NJ2021-39).

Institutional Review Board Statement

This study involved only observation and did not involve any handling of animals; therefore, ethical approval was not required.

Data Availability Statement

The data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Acknowledgments

The authors would like to acknowledge the support provided by the Jiangsu Lihua Animal Husbandry Co., Ltd. (Changzhou, China) and Jiangxi Zengxin Technology Co., Ltd. (Xinyu, China) in the use of their animals, facilities and equipment.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Moosa, M.M.; Ud-Dean, S.M.M. The Role of Dominance Hierarchy in the Evolution of Social Species. J. Theory Soc. Behav. 2011, 41, 203–208. [Google Scholar] [CrossRef]
  2. Canario, L.; Turner, S.P.; Roehe, R.; Lundeheim, N.; D’Eath, R.B.; Lawrence, A.B.; Knol, E.; Bergsma, R.; Rydhmer, L. Genetic Associations between Behavioral Traits and Direct-Social Effects of Growth Rate in Pigs1. J. Anim. Sci. 2012, 90, 4706–4715. [Google Scholar] [CrossRef]
  3. da Fonseca de Oliveira, A.C.; Webber, S.H.; Ramayo-Caldas, Y.; Dalmau, A.; Costa, L.B. Hierarchy Establishment in Growing Finishing Pigs: Impacts on Behavior, Growth Performance, and Physiological Parameters. Animals 2023, 13, 292. [Google Scholar] [CrossRef] [PubMed]
  4. Li, Y.Z.; Johnston, L.J. Behavior and Performance of Pigs Previously Housed in Large Groups. J. Anim. Sci. 2009, 87, 1472–1478. [Google Scholar] [CrossRef] [PubMed]
  5. Dang, C.; Choi, T.; Lee, S.; Lee, S.; Alam, M.; Park, M.; Han, S.; Lee, J.; Hoang, D. Machine Learning-Based Live Weight Estimation for Hanwoo Cow. Sustainability 2022, 14, 12661. [Google Scholar] [CrossRef]
  6. He, C.; Qiao, Y.; Mao, R.; Li, M.; Wang, M. Enhanced LiteHRNet Based Sheep Weight Estimation Using RGB-D Images. Comput. Electron. Agric. 2023, 206, 107667. [Google Scholar] [CrossRef]
  7. Condotta, I.C.; Brown-Brandl, T.M.; Silva-Miranda, K.O.; Stinn, J.P. Evaluation of a Depth Sensor for Mass Estimation of Growing and Finishing Pigs. Biosyst. Eng. 2018, 173, 11–18. [Google Scholar] [CrossRef]
  8. Okayama, T.; Kubota, Y.; Toyoda, A.; Kohari, D.; Noguchi, G. Estimating Body Weight of Pigs from Posture Analysis Using a Depth Camera. Anim. Sci. J. 2021, 92, e13626. [Google Scholar] [CrossRef]
  9. Kwon, K.; Park, A.; Lee, H.; Mun, D. Deep Learning-Based Weight Estimation Using a Fast-Reconstructed Mesh Model from the Point Cloud of a Pig. Comput. Electron. Agric. 2023, 210, 107903. [Google Scholar] [CrossRef]
  10. Shi, C.; Teng, G.; Li, Z. An Approach of Pig Weight Estimation Using Binocular Stereo System Based on LabVIEW. Comput. Electron. Agric. 2016, 129, 37–43. [Google Scholar] [CrossRef]
  11. Cang, Y.; He, H.; Qiao, Y. An Intelligent Pig Weights Estimate Method Based on Deep Learning in Sow Stall Environments. IEEE Access 2019, 7, 164867–164875. [Google Scholar] [CrossRef]
  12. Ling, Y.; Jimin, Z.; Caixing, L.; Xuhong, T.; Sumin, Z. Point Cloud-Based Pig Body Size Measurement Featured by Standard and Non-Standard Postures. Comput. Electron. Agric. 2022, 199, 107135. [Google Scholar] [CrossRef]
  13. Zhang, J.; Zhou, K.; Teng, G. Design of Automatic Group Sorting System for Fattening Pigs Based on Machine Vision. Trans. Chin. Soc. Agric. Eng. 2020, 36, 174–181. [Google Scholar]
  14. Zhang, B.; Xiong, C. Automatic Point Cloud Registration Based on Voxel Downsampling and Key Point Extraction. Laser Optoelectron. Prog. 2020, 57, 109–117. [Google Scholar]
  15. Ho, T.K. The Random Subspace Method for Constructing Decision Forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 832–844. [Google Scholar] [CrossRef]
  16. Frazier, P.I. A Tutorial on Bayesian Optimization. arXiv 2018, arXiv:1807.02811. [Google Scholar] [CrossRef]
  17. Taud, H.; Mas, J.-F. Multilayer Perceptron (MLP). In Geomatic Approaches for Modeling Land Change Scenarios; Springer: Cham, Switzerland, 2018; pp. 451–455. [Google Scholar] [CrossRef]
  18. Geladi, P.; Kowalski, B.R. Partial Least-Squares Regression: A Tutorial. Anal. Chim. Acta 1986, 185, 1–17. [Google Scholar] [CrossRef]
  19. McDonald, G.C. Ridge Regression. Wiley Interdiscip. Rev. Comput. Stat. 2009, 1, 93–100. [Google Scholar] [CrossRef]
  20. Fernandes, A.F.; Dórea, J.R.; Fitzgerald, R.; Herring, W.; Rosa, G.J. A Novel Automated System to Acquire Biometric and Morphological Measurements and Predict Body Weight of Pigs via 3D Computer Vision. J. Anim. Sci. 2019, 97, 496–508. [Google Scholar] [CrossRef] [PubMed]
  21. Buayai, P.; Piewthongngam, K.; Leung, C.K.; Saikaew, K.R. Semi-Automatic Pig Weight Estimation Using Digital Image Analysis. Appl. Eng. Agric. 2019, 35, 521–534. [Google Scholar] [CrossRef]
  22. Dohmen, R.; Catal, C.; Liu, Q. Computer Vision-Based Weight Estimation of Livestock: A Systematic Literature Review. New Zealand J. Agric. Res. 2022, 65, 227–247. [Google Scholar] [CrossRef]
  23. Kongsro, J. Estimation of Pig Weight Using a Microsoft Kinect Prototype Imaging System. Comput. Electron. Agric. 2014, 109, 32–35. [Google Scholar] [CrossRef]
  24. Pezzuolo, A.; Guarino, M.; Sartori, L.; González, L.A.; Marinello, F. On-Barn Pig Weight Estimation Based on Body Measurements by a Kinect v1 Depth Camera. Comput. Electron. Agric. 2018, 148, 29–36. [Google Scholar] [CrossRef]
  25. Salau, J.; Haas, J.H.; Junge, W.; Thaller, G. Extrinsic Calibration of a Multi-Kinect Camera Scanning Passage for Measuring Functional Traits in Dairy Cows. Biosyst. Eng. 2016, 151, 409–424. [Google Scholar] [CrossRef]
  26. Menesatti, P.; Costa, C.; Antonucci, F.; Steri, R.; Pallottino, F.; Catillo, G. A Low-Cost Stereovision System to Estimate Size and Weight of Live Sheep. Comput. Electron. Agric. 2014, 103, 33–38. [Google Scholar] [CrossRef]
Figure 1. Experimental data collection device diagram.
Figure 2. Pass-through filtering.
Figure 3. Pig back point cloud division.
Figure 4. Coordinate system reconstruction result of point cloud.
Figure 5. x-axis span of a slice.
Figure 6. Schematic diagram of the operation of the column equipment.
Figure 7. A 3D diagram of the column device.
Figure 8. Hardware connection diagram.
Figure 9. Relationship between ‘eps’ and ‘min_points’ and the number of running hours and categories.
Figure 10. DBSCAN clustering of different ‘eps’ and ‘min_points’ values.
Figure 11. DBSCAN clustering and voxel downsampling effect.
Figure 12. Scatter plot of redundant and normal samples.
Figure 13. Model test results and error comparison.
Figure 14. Operation display of sorting equipment and system platform.
Table 1. Body size definition.

Feature Parameter | Character | Definition
Hip Width | W1 | Point cloud width within segmentation plane b
Shoulder Width | W2 | Point cloud width within segmentation plane a
Hip Height | H1 | Height of the highest point in plane b of the point cloud
Shoulder Height | H2 | Height of the highest point in plane a of the point cloud
Body Length | L1 | Distance between planes a and b
Table 2. Classification results of redundant samples under different thresholds.

Threshold | Precision | Recall
0.55 | 100% | 34.55%
0.5 | 100% | 61.82%
0.45 | 100% | 89.09%
0.4 | 96.30% | 94.55%
0.35 | 87.10% | 98.18%
0.3 | 64.29% | 98.18%
Table 3. Thresholds for tuning parameters.

Model Name | Parameter | Tuning Threshold | Fixed Value
Random Forest | n_estimators | (60, 1200) | 533
Random Forest | max_depth | (3, 30) | 20
Random Forest | min_samples_split | (2, 100) | 94
Random Forest | min_samples_leaf | (1, 10) | 1
MLP | hidden_layer_size_1 | (50, 200) | 146
MLP | hidden_layer_size_2 | (50, 200) | 60
MLP | learning_rate_init | (0.0001, 0.1) | 0.0001
MLP | alpha | (0.0001, 0.1) | 0.011
MLP | batch_size | (16, 128) | 52
MLP | max_iter | (100, 1000) | 431
Linear Regression | degree | (1, 10) | 2
Linear Regression | alpha | (0.0001, 0.1) | 0.016
Ridge Regression | alpha | (0.0001, 10) | 0.0001
Table 4. Model training results.

Evaluation Metric | Random Forest | MLP | Linear Regression | Ridge Regression
Mean Squared Error (MSE, kg²) | 37.63 | 50.83 | 40.62 | 38.48
Mean Absolute Error (MAE, kg) | 5.23 | 5.75 | 5.31 | 5.43
Mean Relative Error (MRE) | 4.48% | 5.04% | 4.54% | 4.64%
Table 5. Sorting system test results.

Test Pig ID | True Value (kg) | Prediction Mean Value (kg) | Mean Relative Error | Total Duration (s)
1 | 109 | 114.78 | 5.31% | 11.3
2 | 115 | 116.01 | 0.88% | 10.9
3 | 118.5 | 116.20 | 1.94% | 10.2
4 | 119 | 114.57 | 3.72% | 10.1
5 | 120 | 118.78 | 1.02% | 10.6
6 | 121.5 | 120.27 | 1.02% | 10.3
7 | 123.5 | 120.76 | 2.22% | 10.7
8 | 124 | 119.38 | 3.73% | 14.9
9 | 125 | 122.27 | 2.19% | 13.3
10 | 126 | 117.25 | 6.94% | 13.1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
