Article

Research and Preliminary Evaluation of Key Technologies for 3D Reconstruction of Pig Bodies Based on 3D Point Clouds

State Key Laboratory of Animal Nutrition, Institute of Animal Sciences, Chinese Academy of Agricultural Sciences, Beijing 100193, China
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(6), 793; https://doi.org/10.3390/agriculture14060793
Submission received: 21 March 2024 / Revised: 7 May 2024 / Accepted: 11 May 2024 / Published: 22 May 2024
(This article belongs to the Special Issue Application of Sensor Technologies in Livestock Farming)

Abstract

In precision livestock farming, the non-contact perception of live pig body measurement data is a critical technological branch that can significantly enhance breeding efficiency, improve animal welfare, and effectively prevent and control diseases. Monitoring pig body measurements allows for accurate assessment of their growth and production performance. Currently, traditional sensing methods rely heavily on manual measurement, which not only involves large errors and heavy workloads but can also cause stress responses in pigs, increasing the risk of African swine fever and the costs of its prevention and control. We therefore integrated and developed a system based on a 3D reconstruction model with the following contributions: 1. We developed a non-contact system for perceiving pig body measurements using depth cameras. Tailored to the specific needs of laboratory and on-site pig farming processes, the system can accurately acquire pig body data while avoiding stress and safeguarding animal welfare. 2. Data were preprocessed using Gaussian filtering, mean filtering, and median filtering, followed by effective normal estimation using the least squares method, principal component analysis (PCA), and random sample consensus (RANSAC). These steps enhance the quality and efficiency of point cloud processing, ensuring the reliability of the 3D reconstruction task. 3. Experimental evidence showed that the RANSAC method significantly accelerates 3D reconstruction and effectively reconstructs smooth pig body surfaces. 4. Experimental results indicated that the relative errors for chest girth and hip width were 3.55% and 2.83%, respectively. Faced with complex pigsty application scenarios, the technology presented here can effectively perceive pig body measurement data, meeting the needs of modern production.

1. Introduction

As the Chinese saying goes, “pigs and grain secure the world”: the swine industry is closely tied to people’s daily lives and health. Given the continuous growth of the global population and the rising demand for meat, developing efficient pig farming techniques is especially crucial [1,2,3,4]. Although China is one of the world’s major pork-producing countries, its production still lags in efficiency and in the application of technology. The Chinese swine industry currently faces multiple challenges, including shortages of feed resources, high labor costs, significant biosecurity risks, and low production efficiency. There is therefore an urgent need for a rapid, accurate, and non-invasive method of estimating pig body data. Such a method would not only enhance production efficiency but also strengthen biosecurity measures and improve pig health, thereby supporting the modernization of the pig farming industry [5,6].
Over the past two decades, there has been notable progress in modern data collection methods and advanced Internet of Things (IoT) technologies within the global livestock industry. However, challenges remain, particularly regarding limited intrinsic data analysis capabilities and ongoing animal welfare concerns [6,7,8,9,10,11]. Dynamically perceiving the body measurement data of pigs is therefore of great importance. Using computer vision to estimate livestock body dimensions, i.e., analyzing animal images to calculate their size, not only automates measurement but also markedly reduces disturbance to the animals, greatly enhancing the efficiency of livestock management. With the rapid development of computer technology, 3D perception, modeling, and reconstruction techniques have been widely applied in industrial and agricultural fields. Advances in these technologies have propelled both the development of research equipment and the depth of research into 3D reconstruction [12,13,14,15]. The application of 3D reconstruction technology primarily aims to overcome the difficulty of extracting 3D information from 2D images: 2D imaging captures only flat representations, whereas 3D imaging supports more effective reconstruction. Some studies have employed 3D reconstruction to build 3D animal models from multiple images, thereby constructing more accurate models. Techniques such as X-ray 3D reconstruction have been used to inspect industrial and agricultural products, while ultrasound tomography and image-based methods have been applied to the 3D reconstruction of typical parts [16,17]. Three-dimensional reconstruction technology has a wide range of applications in the agricultural sector and has achieved relatively good research results [18,19,20,21].
In precision pig farming, the physical characteristics of pigs hold significant reference value for genetic breeding, feed conversion, and body condition analysis [22,23]. However, traditional methods still rely on manual tape measurements, which often cause stress to pigs and are inefficient. With technological advancements, image recognition and 3D reconstruction technologies have been widely applied to the perception of physical characteristic information. Depth cameras, such as Kinect DK and RealSense, have been developed, leading to extensive research in pig weight estimation, physical characteristic perception, and body condition analysis. Nonetheless, in field deployments, there are shortcomings such as lengthy processing times and high hardware requirements [24,25,26,27].
In summary, 3D imaging technology is an efficient and precise method for acquiring pig body characteristics in the pig farming industry. Compared with traditional contact measurement, it offers non-invasive monitoring and significantly reduces stress on the animals. By analyzing 3D images of pigs, we constructed 3D pig models and integrated a body measurement system, providing technology and system support for production sites. Moreover, given that current pig testing involves heavy workloads, low efficiency, and a strong reliance on traditional, idealized laboratory environments, developing new image-based methods for sensing and modeling the physical characteristics of pigs offers a new path to industrial deployment. The main contributions of this paper are: 1. Constructing a model for perceiving 3D body size information of pigs. 2. Reconstructing 3D models using depth cameras. 3. Building a system for sensing pig body size information.

2. Materials and Methods

2.1. Environment and Animals

This study was developed and deployed in the laboratory, with on-site verification conducted at a farm in Tianjin, China. The subjects were ten pigs weighing between 52.5 and 121.5 kg, chosen for their well-developed physiques, proportionate builds, and uniform coat colors. This weight range was selected primarily to optimize the model’s learning effectiveness and to minimize the impact of varying body lengths on the model’s performance. The 3D point cloud collection frame measured 2 m by 3 m, with one depth camera at each of the four corners and a fifth at the center of the top, 2 m above the ground. A total of 96,210 point cloud data points were collected. The collected data were first screened manually, primarily to eliminate images with occlusions or poor imaging quality; 9000 depth images from the five perspectives were ultimately retained. The experiment comprised two parts: static model pigs in the laboratory and live pigs in pigsties. The body measurements of the pigs, including body length, chest girth, abdominal girth, back height, and hip width, were also taken with a tailor’s tape. Data labeling was performed with LabelMe (v5.4.1), and corresponding JSON files were generated. The overall technical roadmap of this study is shown below (Figure 1).
In this experiment, the electronic weighing scale used was a CWYC model. The physical measurements of the pigs primarily include chest girth and hip width, among others. The imaging collection equipment used was a LXPS-M4422-79E model. The computer processor was an Intel(R) Core(TM) i7-4210M CPU @ 2.60 GHz, and the RAM was 16 GB (Figure 2).

2.2. Data Analysis and Model Establishment

The acquired images were first preprocessed using Python, including feature detection, registration, and related steps, and then processed further by the model. The data were subsequently analyzed with convolutional neural networks to provide a reference for identifying changes in pig weight at different ages in days.
For camera calibration, we used a method that involves a large calibration board to unify the coordinates of all cameras. In this method, each large calibration board contained several smaller calibration boards, with known relative positions between them, and each camera was able to capture at least one small calibration board. These small boards allowed for the precise calibration of each camera’s internal and external parameters, enabling the alignment of each camera’s coordinates to the coordinate system of the small calibration boards.
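As an illustration of this board-based calibration, the sketch below estimates one camera’s intrinsics and its pose relative to a checkerboard using OpenCV. The board geometry, kernel sizes, and function choices are assumptions of this example, not details reported in the paper.

```python
import cv2
import numpy as np

# Illustrative board geometry (the paper does not report the exact pattern):
# a 7x5 grid of inner checkerboard corners with 30 mm squares.
PATTERN = (7, 5)
SQUARE_MM = 30.0

# 3D corner coordinates in the calibration board's own coordinate system.
board_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
board_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

def calibrate_camera(images):
    """Estimate one camera's intrinsics and its pose relative to the board."""
    obj_pts, img_pts = [], []
    image_size = images[0].shape[1::-1]  # (width, height)
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_pts.append(board_pts)
            img_pts.append(corners)
    # rvecs/tvecs are the board-to-camera transforms, which is what lets all
    # five cameras be expressed in the board-anchored coordinate system.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, image_size, None, None)
    return K, dist, rvecs, tvecs
```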
We performed data labeling with LabelMe (v5.4.1), marking parts such as the ears, shoulders, hips, and tail, creating bounding boxes, and saving the annotation files. Once labeling was completed, the files were saved to the default path, and a JSON file with the same name was automatically generated.
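For reference, a LabelMe JSON file stores its annotations under a `shapes` key. A minimal loader for the rectangle annotations described above might look as follows; the file name in the usage comment is hypothetical.

```python
import json

def load_labelme_boxes(json_path):
    """Read a LabelMe annotation file and return {label: (x1, y1, x2, y2)}."""
    with open(json_path, encoding="utf-8") as f:
        ann = json.load(f)
    boxes = {}
    for shape in ann["shapes"]:
        # Rectangle shapes are stored as two corner points.
        if shape["shape_type"] == "rectangle":
            (x1, y1), (x2, y2) = shape["points"]
            boxes[shape["label"]] = (min(x1, x2), min(y1, y2),
                                     max(x1, x2), max(y1, y2))
    return boxes

# e.g. load_labelme_boxes("pig_0001.json") -> {"ear": (...), "shoulder": (...)}
```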

2.3. Point Cloud Data Acquisition and Processing

Point Cloud Data Acquisition

In this study, point cloud data were acquired using depth cameras, which allow rapid, high-resolution collection of extensive 3D coordinate data from the surface of the measured object. Point cloud acquisition completes within 0.2 s, followed by computation of the results within 1 s on a high-performance industrial computer. A time-of-flight (TOF) camera is a depth camera that measures the distance between the object and the camera from the round-trip time of emitted light, thereby generating a depth image. Combined with RGB (red, green, blue) images, TOF cameras provide RGBD (red, green, blue, depth) data for 3D reconstruction. During capture, point cloud acquisition can be affected by factors such as object occlusion and uneven lighting, leaving scanning blind spots that form holes. Because the point cloud data are collected by five cameras, they contain noise that does not meet the model’s reliability requirements, necessitating preprocessing steps such as denoising and simplification [28].
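To make the RGBD-to-point-cloud step concrete, here is a minimal sketch using Open3D, an illustrative library choice not named by the paper. The intrinsics, depth scale, and truncation distance are placeholder assumptions.

```python
import open3d as o3d

def rgbd_to_cloud(color_path, depth_path, intrinsic):
    """Back-project one RGB-D frame from a TOF camera into a colored point cloud."""
    color = o3d.io.read_image(color_path)
    depth = o3d.io.read_image(depth_path)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth,
        depth_scale=1000.0,  # depth stored in millimetres (assumed)
        depth_trunc=3.0,     # drop returns beyond 3 m (assumed)
        convert_rgb_to_intensity=False)
    return o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

# Placeholder intrinsics; real values come from the calibration step above.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    width=640, height=480, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```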

2.4. Point Cloud Preprocessing

After data collection, obstacles such as railings, baffles, and walls may be present in the scene. The main tasks are therefore to remove noise from the point cloud and to align the point cloud data captured from different angles into a single coordinate system, providing a solid data foundation for subsequent surface construction and the generation of 3D solid models. When depth cameras capture targets, the point cloud inevitably contains noise arising from the surrounding environment, human disturbance, and the characteristics of the pigs themselves. This noise prevents an accurate representation of the scanned object’s spatial position, so the noisy points must be removed. The main approach applies Gaussian filtering, mean filtering, and median filtering to the ordered point cloud data. Gaussian noise is noise whose density function follows a Gaussian distribution; because it is mathematically tractable in both the spatial and frequency domains, this noise model (also known as normal noise) is commonly assumed. Mean filtering smooths an image by computing the average gray value within a neighborhood of each pixel and storing the result as the gray value of the center pixel in the output image. Median filtering instead replaces the center pixel with the median value of its rectangular neighborhood. Figure 3 shows the preprocessing results (panels 1-1 Gaussian filtering, 2-2 mean filtering, and 3-3 median filtering).
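A compact sketch of the three filters, applied to an ordered depth map with OpenCV; this is an illustrative implementation, and the kernel size is an assumed parameter rather than a value reported by the study.

```python
import cv2
import numpy as np

def denoise_depth(depth, ksize=5):
    """Apply the three smoothing filters compared in this study to an
    ordered depth map (one value per pixel). Kernel size is illustrative."""
    depth = depth.astype(np.float32)
    gaussian = cv2.GaussianBlur(depth, (ksize, ksize), 0)  # Gaussian-weighted mean
    mean = cv2.blur(depth, (ksize, ksize))                 # neighborhood average
    median = cv2.medianBlur(depth, ksize)                  # neighborhood median
    return gaussian, mean, median
```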

2.5. Data Model

The estimation of point cloud normals is an important step in point cloud processing, providing a foundation for subsequent tasks such as point cloud reconstruction, feature extraction, and registration. Common methods for normal estimation include the least squares method, principal component analysis (PCA), and random sample consensus (RANSAC) [29,30]. The least squares method estimates the normals of local planes by minimizing the sum of squared residuals between the points and the fitted plane; while intuitive and simple, it may lack precision around sharp features. PCA determines the direction of minimal variance and is suitable for approximately planar surfaces, offering robustness to noise. RANSAC improves accuracy by iteratively excluding outliers; although computationally intensive, it handles noise and outliers effectively. This study adopts the least squares method for data estimation, which is intuitive and applicable to most point cloud processing scenarios but requires caution around sharp features. Combining these methods can effectively enhance the quality and efficiency of point cloud processing, providing reliable support for 3D vision tasks.
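As one way to realize this step in code, the following is a minimal sketch using Open3D, an assumption of this example (the paper does not name its normal estimation toolkit); the search radius and neighbor count are illustrative values.

```python
import open3d as o3d

def estimate_normals(cloud, radius=0.03, max_nn=30):
    """Local plane-fitting normal estimation over k-nearest/radius
    neighborhoods (3 cm radius and 30 neighbors are illustrative)."""
    cloud.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(
            radius=radius, max_nn=max_nn))
    # Flip normals to a consistent orientation so later surface
    # reconstruction does not produce an inside-out mesh.
    cloud.orient_normals_consistent_tangent_plane(k=max_nn)
    return cloud
```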

2.6. Detection of Feature Points in Swine Point Clouds

Feature point identification uses the Mask R-CNN convolutional neural network to detect features in color images and maps them into the point cloud space using the calibration relationship between the color images and the depth point clouds. Mask R-CNN is an instance segmentation algorithm characterized by a multitask network capable of performing object detection, object instance segmentation, and object keypoint detection.

The overall structure of the algorithm is based on the Faster R-CNN framework, with a fully convolutional segmentation network added after the base feature network, extending the original two tasks (classification + regression) to three (classification + regression + segmentation). Mask R-CNN follows the same two-stage approach as Faster R-CNN, using a fully convolutional network (FCN) to perform segmentation on each proposal box generated in the first stage.
Mask R-CNN is an efficient instance segmentation framework that integrates multiple functions, including object detection, instance segmentation, and keypoint detection, through a multi-stage processing flow that precisely identifies and segments individual instances in images. In the first stage, it scans the input image with a region proposal network (RPN) to generate proposals for areas potentially containing objects. In the second stage, it classifies objects within these proposals, performs bounding box regression, and generates binary masks through a fully convolutional network branch, accurately distinguishing object pixels from background pixels. This multi-task processing not only enhances processing efficiency but also optimizes the overall performance of the algorithm by decoupling and refining each sub-task (Figure 4).
The model primarily consists of the following components: it first preprocesses the input image, then extracts feature maps through a pretrained neural network. Each point on the feature map is used to generate multiple candidate region proposals, which are then sent to the RPN network for classification into foreground and background, as well as for bounding box regression, filtering out the higher-quality proposals. Next, the ROI Align technique is used to precisely align the feature maps and candidate regions, ensuring the accuracy of feature extraction. Finally, the filtered region proposals are classified, subjected to bounding box regression, and used to generate masks, outputting each object’s category, location, and detailed pixel-level segmentation results.
Mask R-CNN enhances the detection and segmentation quality of targets in images by using a ResNet-FPN architecture for feature extraction and adding a branch for mask prediction. The algorithm can process multiple targets within an image and provides a high-quality segmentation mask for each, supporting further analysis such as volume estimation or shape analysis. Its flexibility also allows extension to other tasks, such as keypoint detection, further broadening its range of applications. Mask R-CNN is highly regarded for its exceptional feature extraction, object detection, and instance segmentation performance, and has broad application prospects in computer vision.
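A minimal inference sketch in the spirit of the pipeline above, using torchvision’s reference Mask R-CNN (ResNet-50 + FPN). The pretrained COCO weights and the score threshold stand in for the pig-part model and settings trained in this study.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Reference Mask R-CNN; in this study the network would be fine-tuned on
# the LabelMe-annotated pig parts (ears, shoulders, hips, tail).
model = maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def detect(image_tensor, score_thresh=0.7):
    """image_tensor: float32 CHW tensor in [0, 1]. Returns kept detections."""
    out = model([image_tensor])[0]
    keep = out["scores"] > score_thresh
    return {
        "boxes": out["boxes"][keep],    # (N, 4) bounding boxes
        "labels": out["labels"][keep],  # class indices
        "masks": out["masks"][keep],    # (N, 1, H, W) soft instance masks
    }
```

The mask pixels kept here are what get mapped into point cloud space via the color-to-depth calibration described above.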

2.7. Pig Body Point Cloud Registration

In this study, because the captured point clouds are incomplete and misaligned in rotation and translation, partial point clouds must be registered to obtain a complete point cloud. To achieve a comprehensive data model of the measured object, an appropriate coordinate transformation must be determined: point sets obtained from different perspectives are merged into a unified coordinate system, forming a complete data point cloud that supports further operations such as visualization. This is the registration of point cloud data. The study uses automatic point cloud registration, which applies algorithms or statistical rules to compute the misalignment between two point clouds and align them automatically. Essentially, it is the coordinate transformation of data point clouds measured in different coordinate systems into an overall data model.
The key issue in automatic registration is determining the coordinate transformation parameters R (rotation matrix) and T (translation vector) that minimize the distance between 3D data measured from two viewpoints after transformation. Registration algorithms can be classified as global or local according to their implementation. The Point Cloud Library (PCL) includes a dedicated registration module that implements the fundamental data structures used in registration and classic registration algorithms such as ICP, along with routines for estimating corresponding points and removing incorrect correspondences during registration.
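A compact sketch of this step: point-to-point ICP estimating R and T, written here with Open3D rather than PCL purely for brevity. The correspondence threshold is an assumed value, and the fusion loop in the comment is illustrative.

```python
import numpy as np
import open3d as o3d

def icp_register(source, target, threshold=0.02, init=np.eye(4)):
    """Point-to-point ICP: estimate the transform aligning `source` onto
    `target`. The 2 cm correspondence threshold is illustrative."""
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 matrix containing R (3x3) and T (3x1)

# Fusing the five camera views into the first camera's frame might look like:
# merged = clouds[0]
# for c in clouds[1:]:
#     merged += c.transform(icp_register(c, merged))
```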

2.8. Three-Dimensional Reconstruction of Pig Body Point Clouds

Three-dimensional reconstruction based on point cloud data involves voxel grid reconstruction and surface reconstruction (such as triangulation). The ultimate goal is visualization and display, primarily using 3D visualization software or libraries (such as PCL, MeshLab, and Blender) to view and edit the reconstructed 3D models. Subsequently, these 3D models are utilized for further measurements of the pig’s body dimensions. In a pigsty environment, specific software or libraries might be employed to streamline these steps. For instance, the Point Cloud Library (PCL) offers a plethora of tools and algorithms for processing point cloud data, while algorithms like ElasticFusion and BundleFusion are specifically designed for real-time 3D reconstruction.
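As one concrete instance of the surface reconstruction step, the sketch below applies Poisson reconstruction to the registered cloud with Open3D, an illustrative choice among the libraries named above; the octree depth and density cutoff are assumptions of this example.

```python
import numpy as np
import open3d as o3d

def reconstruct_surface(cloud, depth=9):
    """Poisson surface reconstruction of the registered pig cloud.
    `cloud` must already carry normals (Section 2.5); octree depth 9 is an
    illustrative detail-vs-speed trade-off, not a value from the paper."""
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        cloud, depth=depth)
    # Trim low-support triangles that Poisson extrapolates across holes
    # (e.g. the unscanned underside of the abdomen).
    dens = np.asarray(densities)
    mesh.remove_vertices_by_mask(dens < np.quantile(dens, 0.05))
    return mesh
```

The resulting mesh can then be opened in viewers such as MeshLab or Blender for inspection and editing, as described above.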

3. Results and Discussion

We developed a non-contact measurement system using depth cameras to accurately capture pig body size data, suitable for on-site pig farming scenarios. This stress-free method ensures animal welfare while accurately collecting data. We optimized data quality using preprocessing techniques such as Gaussian, mean, and median filtering, followed by robust normal estimation using methods such as least squares, principal component analysis (PCA), and random sample consensus (RANSAC), enhancing the efficiency of point cloud processing. Our experiments demonstrate that the use of the RANSAC method significantly improves the speed of 3D reconstruction, particularly in achieving smooth pig body surfaces.

3.1. Estimation of Point Cloud Normals

Point cloud normals are an essential geometric feature of 3D point cloud surfaces, and the subsequent algorithmic models rely on their estimation; estimating them is therefore a crucial step in point cloud processing. This study explores the least squares method, principal component analysis (PCA), and random sample consensus (RANSAC). The least squares method seeks the best function match for the data by minimizing the sum of squared errors and shows clear advantages in fitting. Under the PCA principle, the process searches for a direction n in which the projections of all neighboring points are most concentrated, i.e., the variance of the projections in that direction is minimized; n is then the eigenvector corresponding to the smallest eigenvalue (Figure 5).
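The PCA step just described reduces to an eigendecomposition of the local covariance matrix; a minimal numpy sketch:

```python
import numpy as np

def pca_normal(neighbors):
    """Normal of a local neighborhood (k x 3 array of nearby points):
    the eigenvector of the covariance matrix with the smallest eigenvalue,
    i.e. the direction in which the projected points vary least."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
    return eigvecs[:, 0]                    # column for the smallest eigenvalue
```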
RANSAC, known for its strengths in scene stitching, operates primarily through feature matching (e.g., SIFT matching) to compute the transformation between successive images; image mapping then overlays the next image onto the coordinate system of the previous one, culminating in fusion of the transformed images. In summary, Figure 5 displays the images processed by these three methods.

3.2. Validation of the Point Cloud Feature Detection Algorithm

The feature point detection in the collected point cloud data of pigs showed satisfactory results, with key body parts such as the head, ears, back, and buttocks being effectively identified; it also performed well in detecting the state of curved surfaces. The algorithm is capable of achieving desirable outcomes in detecting fine details and easily overlooked feature points, providing a scientific foundation for subsequent model reconstruction and holding significant potential for future on-site applications (Figure 6).

3.3. Validation and Result Analysis of Static Pigs

In the preliminary phase, we conducted model construction and analysis under laboratory conditions. After acquiring point cloud data, we proceeded with preprocessing, feature point detection, and 3D reconstruction (Figure 7). Comparing field measurements with model validations, we found that the relative errors for body length, chest girth, abdominal girth, and back height were 0.2%, 1.73%, 4.48%, and 3.28%, respectively.
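For clarity, the reported percentages follow the usual relative error formula; a trivial sketch (the numbers in the usage comment are invented for illustration, not taken from the study):

```python
def relative_error(tape_cm, model_cm):
    """Relative error (%) between a tailor's-tape measurement and the
    value read off the reconstructed 3D model."""
    return abs(model_cm - tape_cm) / tape_cm * 100.0

# e.g. relative_error(tape_cm=102.0, model_cm=103.8) -> ~1.76
```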

3.4. Pig Body Point Cloud Registration Experiment and Result Analysis

Analyzing the point cloud data of the pigs, we fused the data from the five perspectives into one coordinate system, yielding comprehensive and complete point cloud data. Feature point detection significantly improved registration efficiency. Experimental results demonstrate the feasibility and effectiveness of this method, providing a model foundation for subsequent on-site applications (Figure 8).

3.5. Pig Body Point Cloud Reconstruction Experiment and Result Analysis

The 3D reconstruction method proposed in this study achieves point cloud reconstruction in complex pigsty environments. Experimental results demonstrate that the method effectively removes noise from the point clouds and produces good reconstruction results, and it is simple, easy to implement, and fast. Figure 9 shows the surface results from the experiments.

3.6. Parameter Estimation Analysis of Physiological Data

The body measurements of pigs serve as important indicators of growth and development, providing insight into physical condition, feed intake, and genetic performance. Researchers have contributed to this field by analyzing head-to-tail point cloud data [31] or by estimating weight from back point cloud data [32]. The advantage of this paper lies in its comprehensive 3D reconstruction and analysis of all body parts, enabling a thorough, scientific data analysis. The parameters measured include chest girth and hip width, among others: chest girth is the circumference at the widest point behind the front legs, and hip width is the horizontal width at the widest part of the hips. For one measured pig, for instance, the relative errors of chest girth and hip width between measurement and model estimation were 3.55% and 2.83%, respectively. This study identifies and segments feature point regions in depth images, which are then mapped onto the 3D point clouds for training. The training curve is shown in Figure 10.
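One simple way to read a girth off the reconstructed cloud, shown here purely as an illustrative sketch (the study does not publish its measurement code), is to take a thin cross-sectional slab at the chest and measure the perimeter of its 2D convex hull. The body-axis alignment, slab width, and unit assumptions are all part of this example, not the paper.

```python
import numpy as np
from scipy.spatial import ConvexHull

def girth_at(points, x0, slab=0.01):
    """Approximate a girth (e.g. chest girth) from an (N, 3) cloud.
    Assumes the body axis is aligned with x after registration and that
    units are metres; the 1 cm slab width is an illustrative choice."""
    slab_pts = points[np.abs(points[:, 0] - x0) < slab / 2]  # thin slice at x0
    section = slab_pts[:, 1:3]                               # project to y-z plane
    hull = ConvexHull(section)
    # For 2-D input, ConvexHull.area is the perimeter of the hull polygon.
    return hull.area
```

A convex hull slightly overestimates girth across concave sections, so a production pipeline might fit a closed curve to the slice instead; the hull keeps the sketch short.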

3.7. Discussions and Perspectives

Additionally, we developed an image processing and detection result display system with a concise and intuitive human–machine interaction interface. It mainly consists of a communication interaction module, camera interaction module, image processing module, system control module, and human–machine interface module. This system can display real-time recognition results of feature points in images and mark them with rectangular areas. It also processes the images captured by the five cameras, performs point cloud reconstruction, generates 3D models, and displays them (Figure 11).
This paper integrates and innovates a 3D reconstruction model to perceive the physiological data of pigs, providing significant support for the development of precision livestock farming.

4. Conclusions

Building upon the current global research landscape, this paper integrates and innovates on a 3D reconstruction model to perceive the physiological data of pigs. By using multi-angle cameras to capture pig images and construct 3D models, it provides a scientific and reliable technological foundation for subsequently grouping pigs by condition. During 3D stitching, incomplete stitching may occur owing to missing data on the underside of the pig’s abdomen. Comparison of the perceived physiological data with actual measurements shows that the requirements for perceiving pig physiological data can be met, although estimation accuracy may fluctuate. In summary, the approach meets the requirements for on-site deployment and use.

Author Contributions

Data curation, K.L., T.L. and X.Z.; Formal analysis, K.L., X.T. and B.X.; Investigation, K.L. and X.T.; Methodology, K.L., X.L., X.T. and B.X.; Project administration, B.X.; Software, X.L., Q.L. and K.L.; Supervision, X.T. and B.X.; Writing—original draft, K.L. and X.Z.; Writing—review and editing, K.L., T.L. and X.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Major Project of China (2022ZD0115705).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of the Chinese Academy of Agricultural Sciences (protocol code IAS2023-25, approved March 2023).

Data Availability Statement

The data are not publicly available due to privacy and ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, S.; Jiang, H.; Qiao, Y.; Jiang, S.; Lin, H.; Sun, Q. The research progress of vision-based artificial intelligence in smart pig farming. Sensors 2022, 22, 6541. [Google Scholar] [CrossRef] [PubMed]
  2. Tzanidakis, C.; Simitzis, P.; Arvanitis, K.; Panagakis, P. An overview of the current trends in precision pig farming technologies. Livest. Sci. 2021, 249, 104530. [Google Scholar] [CrossRef]
  3. Giersberg, M.F.; Meijboom, F.L.B. Smart technologies lead to smart answers? On the claim of smart sensing technologies to tackle animal related societal concerns in Europe over current pig husbandry systems. Front. Vet. Sci. 2021, 7, 588214. [Google Scholar] [CrossRef] [PubMed]
  4. Guo, Q.; Sun, Y.; Orsini, C.; Bolhuis, J.; Vlieg, J.D.; Bijma, P.; With, P.D.D. Enhanced camera-based individual pig detection and tracking for smart pig farms. Comput. Electron. Agric. 2023, 211, 108009. [Google Scholar] [CrossRef]
  5. Quan, Q.; Palaoag, T.; Sun, H.H. Design and implementation of intelligent pig house environment monitor system based on internet plus. In Proceedings of the 2022 4th Asia Pacific Information Technology Conference, Virtual, 14–16 January 2022. [Google Scholar]
  6. Hua, S.; Han, K.; Xu, Z.; Xu, M.; Ye, H.; Zhou, C.Q. Image processing technology based on internet of things in intelligent pig breeding. Math. Probl. Eng. 2021, 2021, 5583355. [Google Scholar] [CrossRef]
  7. Tedeschi, L.O.; Greenwood, P.L.; Ilan, H. Advancements in sensor technology and decision support intelligent tools to assist smart livestock farming. J. Anim. Sci. 2021, 99, skab038. [Google Scholar] [CrossRef] [PubMed]
  8. Kawasue, K.; Wai, P.P.; Win, K.D.; Lee, G.; Iki, Y. Pig weight prediction system using rgb-d sensor and ar glasses: Analysis method with free camera capture direction. Artif. Life Robot. 2023, 28, 89–95. [Google Scholar] [CrossRef]
  9. Arulmozhi, E.; Bhujel, A.; Moon, B.; Kim, H. The application of cameras in precision pig farming: An overview for swine-keeping professionals. Animals 2021, 11, 2343. [Google Scholar] [CrossRef] [PubMed]
  10. Hou, H.; Shi, W.; Guo, J.; Kou, S. Cow rump identification based on lightweight convolutional neural networks. Information 2021, 12, 361. [Google Scholar] [CrossRef]
  11. Kaewtapee, C.; Rakangtong, C.; Bunchasak, C. Pig weight estimation using image processing and artificial neural networks. J. Adv. Agric. Technol. 2019, 6, 253–256. [Google Scholar] [CrossRef]
  12. Ding, Z.; Xu, H.; Chen, G.; Wang, Z.; Chi, W.; Zhang, H.; Wang, Z.; Sun, L.; Yang, G.; Wen, Y. Three-dimensional reconstruction method based on bionic active sensing in precision assembly. Appl. Opt. 2020, 59, 846–856. [Google Scholar] [CrossRef] [PubMed]
  13. Qiao, Y.; Kong, H.; Clark, C.; Lomax, S.; Sukkarieh, S. Intelligent perception for cattle monitoring: A review for cattle identification, body condition score evaluation, and weight estimation. Comput. Electron. Agric. 2021, 185, 106143. [Google Scholar] [CrossRef]
  14. Rapado-Rincon, D.; Van, H.E.J.; Kootstra, G. Development and evaluation of automated localisation and reconstruction of all fruits on tomato plants in a greenhouse based on multi-view perception and 3d multi-object tracking. Biosyst. Eng. 2023, 231, 78–91. [Google Scholar] [CrossRef]
  15. Dong, W.; Roy, P.; Isler, V. Semantic Mapping for Orchard Environments by Merging Two-Sides Reconstructions of Tree Rows; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2020. [Google Scholar]
  16. Magistri, F.; Marks, E.; Nagulavancha, S.; Vizzo, I.; Lebe, T.; Behley, J.; Halstead, M.; Mccool, C.; Stachniss, C. Contrastive 3D shape completion and reconstruction for agricultural robots using rgb-d frames. IEEE Robot. Autom. Lett. 2022, 7, 10120–10127. [Google Scholar] [CrossRef]
  17. Shi, S.; Yin, L.; Liang, S.; Zhong, H.; Tian, X.; Liu, C.; Sun, A.; Liu, H. Research on 3D surface reconstruction and body size measurement of pigs based on multi-view rgb-d cameras. Comput. Electron. Agric. 2020, 175, 105543. [Google Scholar] [CrossRef]
  18. de Medeiros Esper, I.; Gangsei, L.E.; Cordova-Lopez, L.E.; Romanov, D.; Bjørnstad, P.H.; Alvseike, O.; From, P.J.; Mason, A. 3D model based adaptive cutting system for the meat factory cell: Overcoming natural variability. Smart Agric. Technol. 2024, 7, 100388. [Google Scholar] [CrossRef]
  19. de Medeiros Esper, I.; Smolkin, O.; Manko, M.; Popov, A.; From, P.J.; Mason, A. Evaluation of rgb-d multi-camera pose estimation for 3d reconstruction. Appl. Sci. 2022, 12, 4134. [Google Scholar] [CrossRef]
  20. de Medeiros Esper, I.; Cordova-Lopez, L.E.; Romanov, D.; Alvseike, O.; From, P.J.; Mason, A. Pigs: A stepwise rgb-d novel pig carcass cutting dataset. Data Brief 2022, 41, 107945. [Google Scholar] [CrossRef]
  21. Esper, I.D.M.; Cordova-Lopez, L.E.; From, P.J.; Mason, A. 3d registration of multiple rgb-d cameras on arbitrary position of a symmetric object with no overlapping in a meat factory environment. In Proceedings of the 2021 IEEE 21st International Symposium on Computational Intelligence and Informatics (CINTI), Budapest, Hungary, 18–20 November 2021. [Google Scholar]
  22. Han, K.H.; Lee, W.; Sung, K.Y. Development of a model to analyze the relationship between smart pig-farm environmental data and daily weight increase based on decision tree. J. Korean Inst. Inf. Commun. Eng. 2016, 20, 2348–2354. [Google Scholar]
  23. Buayai, P.; Piewthongngam, K.; Leung, C.K.; Saikaew, K.R. Semi-automatic pig weight estimation using digital image analysis. Appl. Eng. Agric. 2019, 35, 521–534. [Google Scholar] [CrossRef]
  24. Riekert, M.; Klein, A.; Adrion, F.; Hoffmann, C.; Gallmann, E. Automatically detecting pig position and posture by 2d camera imaging and deep learning. Comput. Electron. Agric. 2020, 174, 105391. [Google Scholar] [CrossRef]
  25. Zhang, J.; Zhuang, Y.; Ji, H.; Teng, G. Pig weight and body size estimation using a multiple output regression convolutional neural network: A fast and fully automatic method. Sensors 2021, 21, 3218. [Google Scholar] [CrossRef] [PubMed]
  26. Liu, D.M.K.J. A computer vision-based method for spatial-temporal action recognition of tail-biting behaviour in group-housed pigs. Biosyst. Eng. 2020, 195, 27–41. [Google Scholar] [CrossRef]
  27. Chen, C.W.J.J. Recognition of aggressive episodes of pigs based on convolutional neural network and long short-term memory. Comput. Electron. Agric. 2020, 169, 105166. [Google Scholar] [CrossRef]
  28. Wang, Q.; Tan, Y.; Mei, Z. Computational methods of acquisition and processing of 3d point cloud data for construction applications. Arch. Comput. Methods Eng. 2020, 27, 479–499. [Google Scholar] [CrossRef]
  29. Sanchez, J.; Denis, F.; Coeurjolly, D.; Dupont, F.; Checchin, P. Robust normal vector estimation in 3d point clouds through iterative principal component analysis. ISPRS-J. Photogramm. Remote Sens. 2020, 163, 18–35. [Google Scholar] [CrossRef]
  30. Li, J.; Hu, Q.; Ai, M. Point cloud registration based on one-point ransac and scale-annealing biweight estimation. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9716–9729. [Google Scholar] [CrossRef]
  31. Liu, Z.; Hua, J.; Xue, H.; Tian, H.; Chen, Y.; Liu, H. Body weight estimation for pigs based on 3d hybrid filter and convolutional neural network. Sensors 2023, 23, 7730. [Google Scholar] [CrossRef]
  32. Liu, Y.; Zhou, J.; Bian, Y.; Wang, T.; Xue, H.; Liu, L. Estimation of weight and body measurement model for pigs based on back point cloud data. Animals 2024, 14, 1046. [Google Scholar] [CrossRef]
Figure 1. Overall technology roadmap.
Figure 2. Overview of the 3D point cloud data collection for live pigs.
Figure 3. Preprocessing with Gaussian filtering, mean filtering, and median filtering.
Figure 4. Network structure diagram of the Mask R-CNN model.
Figure 5. Display of images processed by the three methods.
Figure 6. Validation of the point cloud feature detection algorithm. In this diagram, the blue area marks the range for the starting point of body length measurement, the pink area the range for the endpoint of body length measurement, the red area the range for chest circumference measurement, and the yellow area the range for hip circumference measurement.
Figure 7. Validation of static pigs.
Figure 8. Validation of pig body 3D registration (five image perspectives merged into one).
Figure 9. Process diagram of 3D reconstruction of pigs.
Figure 10. Curves showing the mapping onto 3D point clouds for training.
Figure 11. Pig body size information processing system interface.