Article

PGF-Net: A Symmetric Cutting Path Generation and Fitting Optimization Method for Pig Carcasses Under Multi-Medium Interference

1 School of Artificial Intelligence, Henan Institute of Science and Technology, Xinxiang 453000, China
2 School of Computer Science and Technology, Henan Institute of Science and Technology, Xinxiang 453000, China
3 School of Mechanical and Electrical Engineering, Henan University of Technology, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(10), 1757; https://doi.org/10.3390/sym17101757
Submission received: 16 September 2025 / Revised: 7 October 2025 / Accepted: 10 October 2025 / Published: 17 October 2025
(This article belongs to the Section Computer)

Abstract

In the automated cutting process of pork carcasses, asymmetric cutting path planning is critical. However, various substances on the carcass surface, such as blood stains and fascia, severely interfere with the separation boundaries between fresh meat and bones, significantly reducing the accuracy of asymmetric cutting path planning. To address these issues, this paper proposes a method for generating and fitting optimized cutting paths for pork carcasses (PGF-Net). Specifically, this method comprises a cutting path generation module that integrates multi-scale boundary features and a cutting path fitting optimization module. The cutting path generation module extracts asymmetric boundary information by enhancing attention to boundaries across different regions, identifies key cutting points, and generates a coarse cutting path. The cutting path fitting optimization module then performs fitting optimization on the generated key cutting points to ultimately produce a refined asymmetric cutting path. Experimental results demonstrate that PGF-Net achieves mean root mean square errors of 0.4212 cm, 0.4651 cm, and 0.5313 cm across three cutting paths on six different pork carcass images. Findings confirm that this method enhances the yield of premium meat cuts while reducing tool wear costs. It provides an innovative technological solution for automated meat processing, holding significant industrial application value.

1. Introduction

With global meat consumption continuing to grow year on year, technological innovation in pork carcass cutting, a core process in the meat processing industry chain, has become increasingly critical. As industrial intelligence advances, traditional semi-automated pork carcass cutting equipment reliant on manual labor suffers from inefficiency, inconsistent cutting standards, cross-contamination risks, and high meat waste rates [1]. The meat cutting industry is also characterized by harsh working conditions and heavy workloads, leading to labor shortages. Consequently, novel automated methods are urgently needed to address these challenges [2]. Precise asymmetric cutting path generation is the foundation of automated cutting. Current research on pork carcass cutting path generation primarily employs machine vision systems or neural networks, yet significant challenges remain [3]: even where initial path generation has progressed, the resulting paths are not yet adequate for direct application to actual cutting. Although fitting optimization of paths has been studied mainly in fields such as autonomous driving and trajectory correction, those methods remain instructive references. This paper therefore reviews related work from two perspectives: cutting path generation and fitting optimization.
In terms of asymmetric cutting path generation, the primary methods currently include image segmentation and convolutional neural networks. Nisbet et al. [4] introduced a machine vision system for inspecting critical parts of cattle carcasses, which accurately delineated the parts from 3D image data and verified the errors using RMSE. Fernandes et al. [5] applied machine vision to predict pig weight, leanness, and back fat, using 3D images to measure pig biometrics and weight. Machine vision is the basis for automation: it uses image processing to automatically discriminate the cutting parts of pig carcasses and segment the key parts, and planning the cutting paths between those parts is the key step. Lu et al. [6] used an improved three-level color machine vision algorithm that segments chicken parts through color analysis, contour extraction, and pose matching. Yu et al. [7] used a DeepLab+ResNet50 model, exploiting the distinct boundary between the fatty and lean parts of the meat, to segment the longissimus (the longest muscle region of the back) after correcting for image distortion and chromatic aberration. Cong et al. [8] designed a binocular vision-based robotic system for cutting the pig abdomen, using threshold segmentation to detect the carcass contour and principal component analysis on that contour to determine the position of the cutting line; a binocular camera then extracts the depth of the center line, which guides the robot in adjusting the cutting path. Afonso et al. [9] proposed a prediction model for segmenting lamb carcass tissues, which selects six characteristic points (the center, the center of mass, and four vertex coordinates) of the minimum bounding rectangle of each of the four sites according to the positional relationships of muscle, subcutaneous fat, intermuscular fat, and bone.
However, lamb carcass morphology is complex and variable, so image processing adapts poorly and prediction accuracy is low. Where the cutting boundary is clear and the region contour is distinct, the above methods can be used for meat processing. They are far less applicable, however, to pig carcass cutting tasks with large body size differences, ambiguous boundary information, cross-mixing of bone and meat media, and complex cutting paths.
The hierarchical feature extraction mechanism and local perception characteristic of convolutional neural networks enable deep learning-based image processing algorithms to achieve high precision in pixel-level object segmentation tasks [10]. Zhao et al. [11] proposed a real-time semantic segmentation method for sheep carcasses that accurately segments three parts of the carcass. Tran et al. [12] proposed a poultry carcass cutting method based on an end-to-end transformer framework that accurately segments each part of the poultry carcass; however, it did not improve the structure of the segmentation network itself, and considerable room for improvement in segmentation accuracy remains. Ban et al. [13] innovatively introduced a semantic segmentation neural network to the pig carcass cutting task, realizing the cutting of key parts of the spine and ribs. Liu et al. [14] proposed a laser-guided efficient cutting path planning method that combines a segmentation algorithm (SA) with an improved genetic algorithm (GA) to identify the pork belly cutting path from 3D laser point cloud data of pig carcasses. Bao et al. [15] studied the structural characteristics of the sheep carcass profile and the mechanical properties of sheep meat and ribs, constructed a three-dimensional model of the sheep carcass and a dynamic model of the slaughtering robot, and completed the planning of the sheep carcass cutting path from a mathematical modeling perspective. Bu et al. [16] proposed a laser scanner-based 3D point cloud method for cutting pork belly, hind leg, and tenderloin, obtaining meat contour information by constructing a contour-adaptive shaping unit.
Niu and Cai [17] proposed a cutting line construction method for the muscle tissue of pig carcasses: working from blurred and distorted X-ray images, a convolutional neural network extracts the image features and corrects the muscle and bone edge information, and the segmentation contour information is then used to obtain the cutting line. Liu et al. [18] proposed a real-time pork belly cutting path planning method in which laser sensors replace traditional machine vision to acquire 3D laser point cloud data of pig carcasses; the point cloud is downscaled and cutting paths are planned using the image pyramid method. Balkrishna et al. [19] constructed a 3D model from CT scan data and aligned it with point cloud data captured by a camera; cutting paths were generated on the surface of the 3D model, taking into account the skeletal structure of the pig carcass and the cutting characteristics of the tool. The above studies achieved semantic segmentation of key carcass parts, but in planning pig carcass cutting paths they still rely on machine vision and the positional relationships between key parts to determine the paths. Moreover, the pig carcass surface suffers from cross-mixed media and complex backgrounds, and the applicability of these cutting path generation methods under such conditions has not been verified.
With regard to asymmetric cutting path fitting optimization, Wang and Cai [20] proposed a meta-learning-based cutting force shape-following regulation method, which constructs a reinforcement learning-based tool regulation model during pig carcass cutting, obtains the optimal action sequences, and optimizes the cutting path from the tool's feedback information. Yang et al. [21] proposed a visual navigation path extraction method based on neural networks and pixel scanning: a semantic segmentation network was trained based on SegNet and UNet, and the designed scanning method, filtering algorithm, and weighted average method were used to fit the navigation paths. Zheng et al. [22] proposed an improved lightweight YOLOX-Nano architecture to detect fruit tree root points, applied the K-means clustering algorithm to classify the root points into left and right tree rows, and determined the navigation paths through geometric relationships and least squares. Yu et al. [23] proposed a method for training Bézier curve control points using radial basis function neural networks, where initial paths are planned by an arc-line-arc three-segment composite curve and quintic Bézier curve fitting is used to compensate for curvature discontinuities. Dai et al. [24] proposed an online trajectory optimization method for hypersonic vehicles based on convex programming and feedforward neural networks: the feedforward neural network is trained on multiple optimal trajectories generated offline under aerodynamic uncertainties and outputs optimal trajectories in real time. Molina-Leal et al. [25] applied a Long Short-Term Memory (LSTM) neural network to learn the input-output mapping in sample data, obtaining the linear and angular velocities of a mobile robot so that it can find the optimal trajectory between two points. Lin et al.
[26] combined TTCN, an attention mechanism, and a GRU network to construct a hybrid model for ship trajectory prediction; the attention mechanism helps the neural network learn the data features, and the GRU's strong nonlinear fitting ability enables fitting optimization of the ship trajectory. In general, time-series neural networks can be applied effectively to pig carcass cutting: they can fit and optimize the cutting path according to the horizontal and vertical coordinates of the path, the initial direction, and the cutting speed, obtaining the optimal cutting path while ensuring compliance with the cutting criteria.
Therefore, the aim of this paper is to solve the problem of generating and optimizing asymmetric cutting paths for critical parts against the complex background of pig carcasses. A cutting path generation and fitting optimization method for pig carcasses under multi-medium interference (PGF-Net) is proposed, and the accuracy of its cutting paths is validated against current standards. The main contributions of this method are as follows:
  • We propose a cutting path generation method (CGM) that integrates a multi-scale boundary feature extraction module (MBM). This module extracts and fuses boundary information across multiple scales to generate effective cutting paths between key components;
  • We propose a bifurcated cutting path fitting module (BFM). This module performs fitting optimization on the generated cutting paths based on information such as the horizontal and vertical coordinates of the cutting position, as well as the speed and direction of the cutting tool.

2. Materials and Methods

In this section, a generation and fitting optimization method for pig carcass cutting paths (PGF-Net) is described in detail. First, the boundary-constrained cutting path generation method (CGM) is introduced as a whole. Then, the main structure of the multi-scale boundary extraction module (MBM) and its working principle are explained. Finally, a bifurcated cutting path fitting module (BFM) is proposed, and its two-branch recursive networks and implementation process are described in detail. The network framework of the algorithm in this paper is shown in Figure 1.

2.1. Cutting Path Generation Method (CGM)

In this section, a coarse generation method for pig carcass cutting paths is proposed, as shown in Figure 2. The method mainly uses semantic segmentation to obtain the cutting boundary. First, a convolution with a stride of 2 is applied to the input image, which is then fed into the coarse path generation network, where convolution is performed over 5 different stages. The feature map from each scale is input separately into the multi-scale boundary extraction module to enhance the extraction of boundary features. A residual linking mechanism, consisting mainly of two 3 × 3 convolution blocks and feature summation, is used to avoid vanishing gradients. In each multi-scale boundary extraction module, the features are split into five scales after a 3 × 3 DW (depthwise) convolution, and a DW convolution is then performed on each of the five scales. Finally, feature fusion restores the original image size via a 1 × 1 convolution. The enhanced target features from the multi-scale boundary extraction module are fused with the decoding blocks at the corresponding scale, yielding richer boundary detail information.
During the cutting of pig carcasses, the images present problems such as large body size differences, blood stains at critical parts, and boundary information missing due to fascia coverage. In this paper, the MBM module is introduced to capture multi-scale information of the key part features; the MBM module is shown in Figure 3. The MBM module includes a small-kernel convolution to capture local information, followed by a set of parallel depthwise convolutions to capture contextual information at multiple scales, with an Identity branch to preserve the consistency of the feature information. The $n$th MBM block in stage $l$ is represented by the following equations:
$L_{l,n} = \mathrm{Conv}_{k_s \times k_s}(X_{l,n}), \quad (n = 1, \dots, 5;\ k_s = 5, 7, 9, 11, 13),$
$Z_{l,n} = \mathrm{DWConv}_{k_n \times k_n}(L_{l,n}), \quad (n = 1, \dots, 4),$
where $L_{l,n} \in \mathbb{R}^{C_l \times H_l \times W_l}$ is the localized feature extracted through the $k_s \times k_s$ convolution, and $Z_{l,n} \in \mathbb{R}^{C_l \times H_l \times W_l}$ is the contextual feature extracted by the $n$th $k_n \times k_n$ depthwise convolution $\mathrm{DWConv}$. In this experiment, the settings of $n$ and $s$ correspond to each branch. As an example, in the $n$th branch of the $l$th stage, average pooling is applied first, followed by a $1 \times 1$ convolution to obtain the local region features. This can be represented as
$F_{l,n}^{pool} = \mathrm{Conv}_{1 \times 1}(P_{avg}(X_{l,n})), \quad (n = 1, \dots, 5),$
where $P_{avg}$ denotes the average pooling operation. Then, two depthwise strip convolutions are applied as an approximation to a standard large-kernel depthwise convolution. This can be represented as
$F_{l,n}^{w} = \mathrm{DWConv}_{1 \times k_b}(F_{l,n}^{pool}),$
$F_{l,n}^{h} = \mathrm{DWConv}_{k_b \times 1}(F_{l,n}^{w}).$
Depthwise strip convolution is chosen for two main reasons. First, strip convolution is lightweight: a pair of 1D depthwise kernels achieves a similar effect with $2k_b$ parameters, far fewer than the $k_b^2$ of a traditional $k_b \times k_b$ 2D depthwise convolution. Second, strip convolution is better suited to recognizing and extracting thin, elongated structures (e.g., the loin). To adapt to different receptive fields and better extract multi-scale features, $k_b = 5, 7, 9, 11, 13$ is set, so that more relevant semantic linkage of feature information can be obtained during feature extraction; the strip convolution design also avoids, to some extent, an increase in computational cost. Finally, an attention weight $A_{l,n} \in \mathbb{R}^{\frac{1}{2}C_l \times H_l \times W_l}$ is generated to further enhance the output of the MBM module. This is expressed by the following equation:
$A_{l,n} = \mathrm{Sigmoid}(\mathrm{Conv}_{1 \times 1}(F_{l,n}^{h})),$
where the $\mathrm{Sigmoid}$ function ensures that the attention map $A_{l,n}$ stays within the $[0, 1]$ range; $A_{l,n}$ is the output of the $n$th branch of the $l$th stage. The output of the entire module is
$X_{l,out} = \mathrm{Conv}_{1 \times 1}\left( \sum_{n=1}^{5} A_{l,n} \right), \quad (l = 1, \dots, 5),$
where $X_{l,out}$ is the feature enhanced by the module.
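To make the parameter saving of the strip kernels concrete, the following sketch (illustrative only, not the authors' implementation) compares the per-channel parameter count of a $k_b \times k_b$ 2D depthwise kernel with that of the paired $1 \times k_b$ and $k_b \times 1$ strip kernels, for the branch kernel sizes used in the MBM module:

```python
# Illustrative comparison of per-channel kernel parameter counts:
# a k_b x k_b 2D depthwise kernel versus the pair of 1D strip kernels
# (1 x k_b followed by k_b x 1) that approximates it.

def depthwise_2d_params(k_b: int) -> int:
    """Parameters of a single k_b x k_b depthwise kernel (one channel)."""
    return k_b * k_b

def strip_pair_params(k_b: int) -> int:
    """Parameters of the 1 x k_b + k_b x 1 strip-kernel pair (one channel)."""
    return 2 * k_b

for k_b in (5, 7, 9, 11, 13):  # branch kernel sizes used in the MBM module
    full, strip = depthwise_2d_params(k_b), strip_pair_params(k_b)
    print(f"k_b={k_b:2d}: 2D={full:3d} params, strip pair={strip:2d}, saving={full - strip}")
```

The gap widens quadratically with $k_b$, which is why the largest branches (11, 13) benefit most from the strip design.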

2.2. Bifurcated Cutting Path Fitting Module (BFM)

In this section, a double-ended GRU bifurcated cutting path fitting optimization model (BFM) based on forward and backward recurrent networks is proposed; the overall network structure is shown in Figure 4. The framework is divided into three parts: (1) the sequence of cutting path key points extracted by the CGM module, used as input data; (2) a bidirectional linear regression network layer; and (3) a fusion output layer that aggregates data from the two branch networks. The main components are input data preprocessing, a dual-branch network, and summation-and-fusion output. Data preprocessing operates on the raw coarse cutting path data: through data integration, the fused state information of the key cutting points is converted into a cutting path sequence. The forward recurrent network starts from the cutting origin and extracts features from adjacent feature information; the backward recurrent network starts from the cutting endpoint and does the same. Each recurrent network separately fits the input cutting path sequence, and the fusion layer finally outputs the fitted sequence of cutting path points by summing the features. In the BFM module, cutting path fitting can be expressed as an optimization problem whose objective is to minimize the error between the predicted path and the actual path. We define the objective function as
$\mathrm{MSE} = \min_{\theta} \sum_{t=1}^{T} \left\| \hat{y}_t - y_t \right\|^2,$
where $\hat{y}_t$ is the predicted output of the model at time $t$, $y_t$ is the true cutting path point, and $\theta$ denotes the model parameters. By optimizing this objective function, the BFM module can fit the cutting path more accurately.
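The objective above can be sketched in a few lines of plain Python; `predicted` and `actual` are hypothetical stand-ins for the fitted and ground-truth 2D path points:

```python
# Sum of squared errors between predicted and actual cutting path points,
# i.e. the quantity minimized over the model parameters in the BFM objective.

def path_fitting_error(predicted, actual):
    """Sum over t of ||y_hat_t - y_t||^2 for 2D path points."""
    assert len(predicted) == len(actual)
    return sum((px - ax) ** 2 + (py - ay) ** 2
               for (px, py), (ax, ay) in zip(predicted, actual))

# A path that matches the ground truth exactly has zero error.
print(path_fitting_error([(0, 0), (1, 1)], [(0, 0), (1, 1)]))  # -> 0
```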
The coarse cutting path generated by the pig carcass cutting path generation method contains only RGB image information and must be preprocessed before being input to the fitting optimization algorithm, as shown in Figure 5. The preset cutting paths are three paths with known state data: the first path, the second path, and the third path. These three paths are initially just collections of points on the RGB image, so the horizontal coordinate, vertical coordinate, movement speed, direction of travel, and relative time of the cutting device at moment $t$ need to be fused. The position information of the cutting tool at moment $t$ can be represented as $q_t = (abs_t, ord_t, vel_t, pos_t, time_t)$, where $abs_t$, $ord_t$, $vel_t$, $pos_t$, and $time_t$ are, respectively, the horizontal coordinate of the cutting position, the vertical coordinate, the tool's movement speed, the tool's direction of travel, and the relative time at moment $t$.
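The fused tool state at moment $t$ could be represented, for example, as a named tuple whose field names mirror the notation above (an illustrative structure, not the authors' code):

```python
from collections import namedtuple

# Cutting-tool state at moment t: horizontal/vertical coordinates, tool speed,
# travel direction, and relative time, mirroring
# q_t = (abs_t, ord_t, vel_t, pos_t, time_t). Field values are hypothetical.
CutState = namedtuple("CutState", ["abs", "ord", "vel", "pos", "time"])

q_t = CutState(abs=120.5, ord=88.0, vel=0.15, pos=1.57, time=2.4)
path_sequence = [q_t]  # a cutting path is a time-ordered list of such states
print(q_t.abs, q_t.time)
```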
In the cutting process, the feed point and the cutting direction of the front section are extremely important, which requires the forward network to handle complex long sequences well. Therefore, a standard LSTM network is used as the forward network. The pig carcass cutting path data from moment 0 to $t$ are organized into a data vector $T_{Forward}^{LSTM\text{-}input} = (q_0^*, q_1^*, \dots, q_t^*)$, which is used as the input vector of the forward network. The hidden layer is $h = (h_0, h_1, h_2, \dots, h_t)$. The forward network combines the computed cell states and outputs to form the state vector $H = (H_0, H_1, H_2, \dots, H_t)$. $J$, $K$, and $S$ denote the layer-to-layer weight matrices, respectively. The hidden and output layers can be represented as
$h_i = \begin{cases} \tanh(J q_i + b_i^h), & i = 0, \\ \tanh(J q_i + K h_{i-1} + b_i^h), & i = 1, 2, 3, \dots, t, \end{cases}$
where $q_i$ denotes the state data at moment $t = i$, and $b_i^h$ and $b_i^H$ denote the bias values of the hidden and output layers, respectively.
LSTM recognizes long-term dependencies in data better than traditional RNN networks, thanks to the setup and effective cooperation of its input gate, forget gate, output gate, and cell state, denoted $i$, $f$, $o$, and $c$, respectively. The forget gate decides which cutting path point data need to be retained after receiving the previous state data, mainly filtering out the interference of abnormal path points. The input gate determines which data need to be retained and updates the cell state. The output gate controls which information from the current time step is transferred to the next step or the output layer. The cell state is the core of the whole network: it stores and transfers information and controls its flow and updating.
The purpose of the forget gate is to minimize the interference of anomalous path points on subsequently fitted cutting paths, in addition to receiving the data from the previous moment. The recursive input $h_{t-1}$ and the current input $q_t$ are multiplied by their weights and passed to the $\mathrm{Sigmoid}$ function, denoted $\sigma(q) = (1 + e^{-q})^{-1}$. The output $f_t$, a value within $[0, 1]$, is multiplied by the cell state $c_{t-1}$: if $f_t$ outputs 1, the LSTM retains the data completely, and if $f_t$ outputs 0, the LSTM forgets it completely.
$f_t = \sigma(J_f q_t + K_f h_{t-1} + b_f).$
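The forget-gate step can be sketched in NumPy as below. The weights are random and the dimensions hypothetical (5 state fields per $q_t$, hidden size 8); this illustrates the gate mechanism, not the trained forward network:

```python
import numpy as np

# Minimal sketch of the LSTM forget-gate step f_t = sigma(J_f q_t + K_f h_{t-1} + b_f).
rng = np.random.default_rng(0)
d_in, d_hid = 5, 8          # 5 state fields per q_t; hypothetical hidden size

J_f = rng.normal(size=(d_hid, d_in))   # input-to-gate weights
K_f = rng.normal(size=(d_hid, d_hid))  # recurrent weights
b_f = np.zeros(d_hid)                  # gate bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forget_gate(q_t, h_prev):
    """Each entry of the returned gate vector lies in (0, 1)."""
    return sigmoid(J_f @ q_t + K_f @ h_prev + b_f)

q_t = rng.normal(size=d_in)        # fused state vector at moment t
h_prev = np.zeros(d_hid)           # previous hidden state
c_prev = rng.normal(size=d_hid)    # previous cell state

f_t = forget_gate(q_t, h_prev)
c_filtered = f_t * c_prev          # entries scaled toward 0 are "forgotten"
print(f_t.min(), f_t.max())
```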
To balance the efficiency and accuracy of cutting path fitting, the chosen backward recurrent network must preserve the correlation between previous and subsequent moments while processing the data sequence efficiently. In this section, a double-ended GRU network is chosen to fit the path from $q_{t+1}$ to $q_N$. The cutting path data sequence has incremental time information, detailed path information, and small fluctuations between neighboring path points. These characteristics suit the GRU's handling of long data sequences, keeping the Sigmoid function in the update gate in an intermediate state distribution. The training effect of each input sample is thus maximized, reducing gradient vanishing and improving the accuracy of path fitting. In this paper, the input cutting path vector $T_{Backward}^{BiGRU\text{-}input} = (q_{t+1}^*, \dots, q_{N-1}^*, q_N^*)$, after bidirectional GRU computation and summation, yields the combined state vector $H = (H_{t+1}, H_{t+2}, \dots, H_N)$.
The update gate is driven by the $abs$ and $ord$ fields in the cutting path data. When the changes in these two fields are small, the curvature of the cutting path changes relatively little; the update gate then takes a small value close to 0, and the weights likewise take smaller values. The equation is expressed as follows:
$z_N = \tanh(W_z \cdot [h_{N-1}, q_N^*]),$
where $W_z$ denotes the weight of the update gate, $h_{N-1}$ is the state at the previous path point, and $\tanh$ takes values between −1 and 1.
The purpose of the reset gate is to minimize the interference of anomalous path points with subsequently fitted path points. When the reset gate takes a value close to 1, the state values are invalidated and discarded, ensuring the accuracy of the cutting path fitting. The expression for the reset gate is shown below.
$r_N = \tanh(W_r \cdot [h_{N-1}, q_N^*]),$
where $W_r$ is the weight of the reset gate, $h_{N-1}$ is the state of the previous path point, and $\tanh$ takes values between −1 and 1.
The output variable $h_N$ denotes the output vector of the network model at moment $N$. The variable $\tilde{h}_N$ denotes the degree of dependency between the candidate state at the current moment and the cutting path state value at the previous moment. $h_N$ stores the position, attitude direction, and velocity information of the cutting path and serves as the input variable for the next moment. The equations are expressed as follows:
$\tilde{h}_N = \tanh(W_h q_N^* + U_h (r_N \otimes h_{N-1}) + b_h),$
$h_N = z_N \otimes h_{N-1} + (1 - z_N) \otimes \tilde{h}_N,$
where $W_h$ and $U_h$ are weight matrices of the GRU network, $b_h$ is the bias value, and $\otimes$ is the element-wise multiplication operation.
The first cutting path at successive moments is labeled from $q_0$ to $q_N$. Thus, the cutting path at time $t$ can be fitted from $q_0$ to $q_t$, starting from the initial cut-in point, or from $q_{t+M+1}$ to $q_N$, starting from the end of the cut. Finally, the result is fused with the cutting path features fitted by the double-ended GRU network. The double-ended GRU network is represented by the following equations:
$\overrightarrow{h}_n = \mathrm{GRU}(q_n, \overrightarrow{h}_{n-1}; \overrightarrow{W}),$
$\overleftarrow{h}_n = \mathrm{GRU}(q_n, \overleftarrow{h}_{n+1}; \overleftarrow{W}),$
$\bar{Q}_n = W_{\overrightarrow{h}\bar{q}} \overrightarrow{h}_n + W_{\overleftarrow{h}\bar{q}} \overleftarrow{h}_n + b_{\bar{q}},$
where $\overrightarrow{h}_n$ and $\overrightarrow{W}$, and $\overleftarrow{h}_n$ and $\overleftarrow{W}$, denote the hidden states and weight values of the forward and backward structures of the double-headed, double-ended GRU network, respectively. The temporal correlation of the input cutting path points can therefore be extracted in both directions. The detailed framework is shown in Figure 6.
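A minimal NumPy sketch of the double-ended pass is given below. It uses a conventional GRU cell with sigmoid gate activations (the formulation above uses tanh for the gates), random weights, and hypothetical sizes, purely to show the forward/backward scans and the affine fusion of the two hidden states into a fitted point:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid = 5, 6  # 5 fields per path point q_n; hypothetical hidden size

def make_gru_params():
    """Random weights for one GRU direction (illustrative, untrained)."""
    return {k: rng.normal(scale=0.1, size=s) for k, s in {
        "Wz": (d_hid, d_in), "Uz": (d_hid, d_hid), "bz": (d_hid,),
        "Wr": (d_hid, d_in), "Ur": (d_hid, d_hid), "br": (d_hid,),
        "Wh": (d_hid, d_in), "Uh": (d_hid, d_hid), "bh": (d_hid,),
    }.items()}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(p, q_n, h_prev):
    z = sigmoid(p["Wz"] @ q_n + p["Uz"] @ h_prev + p["bz"])   # update gate
    r = sigmoid(p["Wr"] @ q_n + p["Ur"] @ h_prev + p["br"])   # reset gate
    h_tilde = np.tanh(p["Wh"] @ q_n + p["Uh"] @ (r * h_prev) + p["bh"])
    return z * h_prev + (1.0 - z) * h_tilde                   # new hidden state

def bidirectional_gru(seq, p_fwd, p_bwd, W_f, W_b, b_out):
    h_f, fwd = np.zeros(d_hid), []
    for q_n in seq:                        # scan from the cut origin forward
        h_f = gru_step(p_fwd, q_n, h_f)
        fwd.append(h_f)
    h_b, bwd = np.zeros(d_hid), [None] * len(seq)
    for n in range(len(seq) - 1, -1, -1):  # scan from the cut endpoint backward
        h_b = gru_step(p_bwd, seq[n], h_b)
        bwd[n] = h_b
    # Affine fusion of both directions into a fitted 2D path point per step.
    return [W_f @ hf + W_b @ hb + b_out for hf, hb in zip(fwd, bwd)]

seq = [rng.normal(size=d_in) for _ in range(4)]   # 4 path-point state vectors
out = bidirectional_gru(seq, make_gru_params(), make_gru_params(),
                        rng.normal(size=(2, d_hid)), rng.normal(size=(2, d_hid)),
                        np.zeros(2))
print(len(out), out[0].shape)
```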

2.3. Definition of a Loss Function

A semantic segmentation neural network is applied to the pig carcass cutting task. Its main challenge is the lack of feature differentiation caused by the high background similarity of the area to be cut, which makes accurate cutting path segmentation difficult. Moreover, body size differences between pig carcasses lead to an uneven distribution of key part categories during segmentation, and the generated cutting paths struggle to adapt to this variability. Therefore, to optimize the proposed model, the cross-entropy loss $L_{CE}$ is set between the segmentation result and the ground truth, and the dice loss $L_{Dice}$ is set between the generated cutting path and the standard cutting path. Cross-entropy loss excels at pixel-wise classification, particularly for imbalanced datasets, by penalizing misclassifications of small or rare anatomical structures; its logarithmic term ensures gradient stability during backpropagation. Dice loss directly optimizes the overlap between predicted and ground-truth regions, making it suitable for spatial consistency in cutting path generation, and it mitigates the foreground-background asymmetry caused by size variability among carcasses. The two losses therefore complement each other in addressing pig carcass segmentation and cutting path generation. In this paper, the total loss function $L_{total}$, composed of the dice loss and the cross-entropy loss together, is used for the optimization task of fitting pig carcass cutting paths. The equations are expressed as
$L_{CE} = -\frac{1}{N} \sum_{i=1}^{N} \left[ g_i \cdot \log p_i + (1 - g_i) \cdot \log(1 - p_i) \right],$
$L_{Dice} = 1 - \frac{2 \sum_{i=1}^{N} g_i \cdot p_i}{\sum_{i=1}^{N} g_i + \sum_{i=1}^{N} p_i},$
$L_{total} = \alpha L_{Dice} + \beta L_{CE},$
where $g_i$ and $p_i$ denote the ground-truth image and the predicted probability map, respectively, $N$ is the total number of pixels, and $\alpha$ and $\beta$ are the balancing factors.
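Assuming a binary mask flattened to pixel vectors, the loss terms above can be sketched as follows (illustrative NumPy with a small clipping constant to avoid $\log 0$; not the authors' training code):

```python
import numpy as np

# Illustrative NumPy versions of the loss terms (binary case, flattened pixels).

def cross_entropy_loss(g, p, eps=1e-7):
    """L_CE: mean binary cross-entropy over N pixels."""
    p = np.clip(p, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(g * np.log(p) + (1.0 - g) * np.log(1.0 - p))

def dice_loss(g, p, eps=1e-7):
    """L_Dice: 1 - 2|G ∩ P| / (|G| + |P|)."""
    return 1.0 - (2.0 * np.sum(g * p) + eps) / (np.sum(g) + np.sum(p) + eps)

def total_loss(g, p, alpha=0.6, beta=0.4):
    """L_total with the balancing factors reported as best in Table 1."""
    return alpha * dice_loss(g, p) + beta * cross_entropy_loss(g, p)

g = np.array([1.0, 1.0, 0.0, 0.0])   # ground-truth mask
p = np.array([0.9, 0.8, 0.1, 0.2])   # predicted probabilities
print(round(total_loss(g, p), 4))
```

A perfect prediction drives both terms, and hence the total loss, toward zero.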
A higher $\alpha$ prioritizes spatial overlap (critical for cutting paths), while a higher $\beta$ refines pixel-level accuracy. The F1-score ranges from 0 to 1, where 1 indicates the best model output and 0 the worst. As shown in Table 1, when $\alpha$ and $\beta$ are set to 0.6 and 0.4, respectively, the best F1-score (0.89) is obtained.

3. Model Validation and Discussion

3.1. Cutting Path Error Quantification Criteria

For model training, the batch size is set to 64, the number of time steps to 5, the input size to 5, and the number of training epochs to 600. The samples are split into a training set and a test set, comprising 80% and 20% of the total samples, respectively. The distance error between the model's cutting path and the standard cutting path is measured using the root mean square error (RMSE), expressed by the following equation:
$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \mathrm{dist}\left( (\widetilde{abs}_i, \widetilde{ord}_i), (abs_i, ord_i) \right)^2},$
where the $\mathrm{dist}$ function computes the relative distance between the standard cutting path points and the fitted cutting path points, $N$ denotes the total path length, and $i$ is the position on the path. $\widetilde{abs}_i$ and $\widetilde{ord}_i$ are the fitted horizontal and vertical coordinates, respectively; $abs_i$ and $ord_i$ are the actual horizontal and vertical coordinates. A smaller RMSE value means a smaller error between the fitted path points and the actual path points and thus a more accurate fit.
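Assuming Euclidean point-to-point distances, the RMSE above can be sketched as follows (the coordinate values are hypothetical, in cm):

```python
import math

# Illustrative RMSE between fitted and standard cutting path points
# (Euclidean distance per corresponding point pair).

def path_rmse(fitted, actual):
    """sqrt(mean of squared point-to-point distances), in the units of the
    coordinates (cm here)."""
    assert len(fitted) == len(actual)
    sq = [(fa - aa) ** 2 + (fo - ao) ** 2
          for (fa, fo), (aa, ao) in zip(fitted, actual)]
    return math.sqrt(sum(sq) / len(sq))

fitted = [(0.0, 0.0), (1.0, 1.2), (2.0, 1.9)]
actual = [(0.0, 0.1), (1.0, 1.0), (2.0, 2.0)]
print(round(path_rmse(fitted, actual), 4))  # -> 0.1414
```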
The maximum offset distance $D_{max}$, the minimum offset distance $D_{min}$, and the average offset distance $D_{avg}$ of corresponding cutting path points are used to describe the error fluctuation of the model-optimized path relative to the standard path. The equations are as follows:
$D_{max} = \sqrt{(abs_{max1} - abs_{max2})^2 + (ord_{max1} - ord_{max2})^2},$
$D_{min} = \sqrt{(abs_{min1} - abs_{min2})^2 + (ord_{min1} - ord_{min2})^2},$
$D_{avg} = \frac{D_{max} + D_{min}}{2}.$
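A minimal sketch of these offset distances, with hypothetical point pairs:

```python
import math

# Offset distances between corresponding extreme points of the optimized
# and standard paths (illustrative; the point pairs are hypothetical).

def offset(p1, p2):
    """Euclidean distance between two (abs, ord) points."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

d_max = offset((10.0, 5.0), (10.6, 5.8))   # worst-matched point pair
d_min = offset((3.0, 2.0), (3.1, 2.0))     # best-matched point pair
d_avg = (d_max + d_min) / 2.0              # D_avg as defined above
print(round(d_max, 3), round(d_min, 3), round(d_avg, 3))
```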

3.2. Datasets

The pig carcass image dataset used in this study was photographed at Shandong Qianxihe Food Co. (Dezhou, China). The test samples were adult Long White pigs. Photographs were taken with an Intel RealSense D435i camera (Intel) from an overhead angle. To ensure the randomness and validity of the dataset, pig carcasses were randomly selected in five batches of 80 samples each, for a total of 400 pig carcass images at 640 × 480 pixels. To reduce the interference of extraneous factors, the collected images were preprocessed in three steps: data augmentation, normalization, and image annotation. Because the number of collected pig carcass images was limited, the original images were flipped, rotated, and brightness-adjusted to meet deep learning's demand for large datasets. To prevent overfitting, the number of image samples was expanded to five times the original and divided into training and validation sets at a ratio of 7:3.
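The five-fold expansion can be sketched as follows (illustrative NumPy; the paper specifies flipping, rotation, and brightness changes but not the exact transforms or parameters, so the ones below are assumptions):

```python
import numpy as np

# Illustrative five-fold expansion of one image by flipping, rotating,
# and brightness scaling (not the authors' preprocessing pipeline).

def augment_five_fold(img):
    """Return the original image plus four augmented variants."""
    return [
        img,
        np.fliplr(img),                                 # horizontal flip
        np.flipud(img),                                 # vertical flip
        np.rot90(img, k=2),                             # 180-degree rotation
        np.clip(img * 1.2, 0, 255).astype(img.dtype),   # brightness increase
    ]

img = np.zeros((480, 640, 3), dtype=np.uint8)   # one 640 x 480 RGB frame
samples = augment_five_fold(img)
print(len(samples), samples[0].shape)
```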

3.3. Comparison Experiments

Comparison of the Path Generation Effect

In this study, BSCNet [27], CSWin-UNet [28], HiFormer [29], BRAU-Net++ [28], and U-Mamba [30] were used as comparison algorithms, and cutting path verification was performed on six groups of pig carcass images. As the comparison results in Figure 7 show, PGF-Net outperforms the other algorithms in detecting several critical parts, including the fourth and fifth thoracic vertebrae, the lumbosacral vertebral junction, the region between two ribs, and several key locations along the lower part of the spine. These results are analyzed specifically below. The deviations of BSCNet [27] from the actual pig carcass cutting paths occur mainly in the second path along the underside of the spine, in the first path between the fourth and fifth thoracic vertebrae, and where the path overlaps the spine and ribs, which can cause the knife to jam during actual cutting. The deviations of CSWin-UNet [28] occur mainly in the second path along the underside of the spine; in Figure 7d and Figure 7c in particular, the planned portion of the second path overlaps the spine. HiFormer [29] deviates mainly in the first path between the two ribs and in the second path along the lower part of the spine, where the path either overlaps the bones or strays too far from them, degrading the cutting quality. The deviations of BRAU-Net++ [28] and U-Mamba [30] occur mainly in the first path between the two ribs, which overlaps the ribs; in the actual cutting process this leads to knife jamming or poor cutting quality.
To visualize the differences between PGF-Net and the existing networks in more depth, six groups of experiments are taken as examples, and the cutting paths generated by the existing methods are compared with the standard paths. As shown in Figure 8, comparing the key points of the cutting paths gives a more intuitive sense of the error fluctuation of the actual cutting paths and of their deviation from the standard paths. The overall cutting path is divided into the first, second, and third paths, and the comparison results for the three paths are analyzed next. When the camera captures a pig carcass image, the field of view is 160 × 120 cm and the image resolution is 640 × 480 pixels. The actual distance per pixel can be expressed as
Actual = Actual_d / Pixel_p,
where Actual is the actual distance per pixel, Actual_d is the actual length, and Pixel_p is the number of pixels spanning that length. For the setup used here, 160 cm / 640 pixels = 0.25 cm per pixel.
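The conversion is a one-liner; the function name is illustrative:

```python
def pixels_to_cm(px: float, field_cm: float, resolution_px: int) -> float:
    """Convert a pixel measurement to centimetres, given the camera's
    field of view (in cm) and the image resolution along the same axis."""
    return px * (field_cm / resolution_px)

# 160 x 120 cm field of view imaged at 640 x 480 pixels gives
# 0.25 cm per pixel along both axes.
```

For example, a 4-pixel path error along either axis corresponds to 1 cm on the carcass.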
The comparison between the existing methods and the standard cutting path key points on the first path is shown in Figure 9. Note that the distances in Tables 2–4 are reported in pixels; multiplying by 0.25 cm per pixel yields centimetres. As Table 2 shows, in the first path generation error results the minimum distance of PGF-Net in group (b) is slightly behind that of BSCNet [27], but its minimum average error is 0.2909 cm, and PGF-Net outperforms the other algorithms in the root mean square error comparison. Combining Figure 10 and Table 3, PGF-Net also shows superior performance on the second path: in group (c) of Figure 10 it is slightly behind BSCNet [27], but its minimum average error is 0.3439 cm, and it again outperforms the other compared algorithms in root mean square error. From Figure 11 and Table 4, on the third path the minimum distances of PGF-Net in groups (e) and (c) of Figure 11 are slightly behind those of BSCNet [27], while all other metrics are better than those of the other algorithms. Taken together, PGF-Net is the most suitable for cutting path generation and optimization.

3.4. Ablation Experiments

To demonstrate the effectiveness of MBM and BFM in the generation and fitting optimization of pig carcass cutting paths, ablation experiments were designed for validation. Four configurations, Baseline, Baseline+MBM, Baseline+BFM, and Baseline+MBM+BFM (Ours), were validated on the first, second, and third paths of a randomly selected image, as shown in Table 5, Table 6 and Table 7.
According to the experimental results, the MBM module effectively improves the algorithm's attention to boundaries in the complex context of pig carcasses; it strengthens the boundary constraint and improves the spatial consistency of the path generation process. The BFM module performs fitting optimization on the generated paths, achieving a finer adjustment of the cutting path and a more pronounced improvement in the accuracy of cutting path generation for pig carcasses. Relative to the baseline, the average error on the first path is reduced by 1.7488 cm, on the second path by 1.3454 cm, and on the third path by 1.2817 cm. Overall, PGF-Net improves the accuracy of pig carcass cutting path generation to a considerable extent.

4. Conclusions

A generation and fitting optimization method for pig carcass cutting paths (PGF-Net) is proposed to address the cross-mixing of media on pig carcasses, the lack of asymmetric cutting boundary information, and the large deviations in cutting path generation. First, the multi-scale boundary extraction module (MBM) is proposed; it effectively mitigates the effects of medium cross-mixing and missing asymmetric boundary information on cutting path generation. Then, building on existing pig carcass cutting path generation methods, this study proposes a bifurcated cutting path fitting module (BFM), which adopts a two-branch structure and a bidirectional fitting strategy to refine the fit at several critical parts and ensure cutting quality. The experimental results show that the mean root mean square errors of the three asymmetric paths of PGF-Net on six pig carcass images are 0.4212 cm, 0.4651 cm, and 0.5313 cm, respectively, outperforming the other algorithms in the comparison experiments. The method essentially meets the requirements of pig carcass cutting standards and is applicable to the actual cutting process of pig carcasses. In follow-up research, we will further reduce the network parameters without sacrificing accuracy and improve detection efficiency.

Author Contributions

Conceptualization, L.C., J.L. and P.B.; methodology, J.L. and L.C.; software, P.B.; validation, J.L.; Formal analysis, J.L. and P.B.; investigation, J.L.; resources, L.C.; data curation, J.L. and P.B.; writing—original draft preparation, J.L.; writing—review and editing, L.C. and J.L.; supervision, L.C.; project administration, L.C.; funding acquisition, L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Zhongyuan Science and Technology Innovation Leadership Talent Programme (254200510043), the National Scientific and Technological Innovation Teams of Universities in Henan Province (25IRTSTHN018), and the Key Research and Development Project of Henan Province (241111110200).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors acknowledge the editors and reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PGF-Net: Path generation and fitting optimization method
CGM: Cutting path generation method
MBM: Multi-scale boundary feature extraction module
BFM: Bifurcated cutting path fitting module

References

  1. Baéza, E.; Guillier, L.; Petracci, M. Production factors affecting poultry carcass and meat quality attributes. Animal 2022, 16, 100331.
  2. Xu, W.; He, Y.; Li, J.; Zhou, J.; Xu, E.; Wang, W.; Liu, D. Robotization and intelligent digital systems in the meat cutting industry: From the perspectives of robotic cutting, perception, and digital development. Trends Food Sci. Technol. 2023, 135, 234–251.
  3. de Medeiros Esper, I.; From, P.J.; Mason, A. Robotisation and intelligent systems in abattoirs. Trends Food Sci. Technol. 2021, 108, 214–222.
  4. Nisbet, H.; Lambe, N.; Miller, G.A.; Doeschl-Wilson, A.; Barclay, D.; Wheaton, A.; Duthie, C.A. Meat yields and primal cut weights from beef carcasses can be predicted with similar accuracies using in-abattoir 3D measurements or EUROP classification grade. Meat Sci. 2025, 222, 109738.
  5. Fernandes, A.F.; Dórea, J.R.; Fitzgerald, R.; Herring, W.; Rosa, G.J. A novel automated system to acquire biometric and morphological measurements and predict body weight of pigs via 3D computer vision. J. Anim. Sci. 2019, 97, 496–508.
  6. Lu, J.; Lee, K.M.; Ji, J. Color machine vision design methodology of a part-presentation algorithm for automated poultry handling. IEEE/ASME Trans. Mechatron. 2022, 28, 1222–1233.
  7. Yu, H.; Lim, J.; Seo, Y.; Lee, A. Compact imaging system and deep learning based segmentation for objective longissimus muscle area in Korean beef carcass. Meat Sci. 2023, 206, 109325.
  8. Cong, M.; Zhang, J.; Du, Y.; Wang, Y.; Yu, X.; Liu, D. A porcine abdomen cutting robot system using binocular vision techniques based on kernel principal component analysis. J. Intell. Robot. Syst. 2021, 101, 1–10.
  9. Afonso, J.J.; Almeida, M.; Batista, A.C.; Guedes, C.; Teixeira, A.; Silva, S.; Santos, V. Using Image Analysis Technique for Predicting Light Lamb Carcass Composition. Animals 2024, 14, 1593.
  10. Rauf, H.T.; Lali, M.I.U.; Zahoor, S.; Shah, S.Z.H.; Rehman, A.U.; Bukhari, S.A.C. Visual features based automated identification of fish species using deep convolutional neural networks. Comput. Electron. Agric. 2019, 167, 105075.
  11. Zhao, S.; Hao, G.; Zhang, Y.; Wang, S. A Real-Time Semantic Segmentation Method of Sheep Carcass Images Based on ICNet. J. Robot. 2021, 2021, 8847984.
  12. Tran, M.; Truong, S.; Fernandes, A.F.; Kidd, M.T.; Le, N. CarcassFormer: An end-to-end transformer-based framework for simultaneous localization, segmentation and classification of poultry carcass defect. Poult. Sci. 2024, 103, 103765.
  13. Ban, P.; Cai, L.; Ma, H. FSRFS-Net: Fusion of Contour Features and Semantic Information for Rib Segmentation of Porcine Spine. In Proceedings of the 2024 International Conference on Advanced Robotics and Mechatronics (ICARM), Tokyo, Japan, 6–8 July 2024; pp. 455–460.
  14. Liu, Y.; Guo, C.; Er, M.J. Robotic 3-D laser-guided approach for efficient cutting of porcine belly. IEEE/ASME Trans. Mechatron. 2021, 27, 2963–2972.
  15. Bao, X.; Junsong, L.; Mao, J. Kinematics Analysis and Trajectory Planning of Segmentation Robot for Chilled Sheep Carcass. Appl. Eng. Agric. 2021, 37, 1147–1154.
  16. Bu, L.; Tian, H.; Qiao, Z.; Hu, X.; Gao, G.; Qi, B.; Wang, Z.; Hu, J.; Zhang, C.; Zhang, D.; et al. Raw meat 3D laser scanning imaging: Optimized by adaptive contour unit. Food Bioprod. Process. 2025, 151, 103–117.
  17. Niu, H.; Cai, L. Segmentation Line Construction Method for Pig Carcass Musculature Based on Blurred and Distorted X-ray Images. In Proceedings of the 2023 6th International Conference on Image and Graphics Processing, Chongqing, China, 6–8 January 2023; pp. 182–188.
  18. Liu, Y.; Ning, R.; Du, M.; Yu, S.; Yan, Y. Online path planning of pork cutting robot using 3D laser point cloud. Ind. Robot. Int. J. Robot. Res. Appl. 2024, 51, 511–517.
  19. Balkrishna, A.; Pathak, R.; Kumar, S.; Arya, V.; Singh, S.K. Smart agricultural technology. Precis. Agric. 2023, 5, 100318.
  20. Wang, X.; Cai, L. Reinforced meta-learning method for shape-dependent regulation of cutting force in pork carcass operation robots. In Proceedings of the 2023 6th International Conference on Image and Graphics Processing, Chongqing, China, 6–8 January 2023; pp. 223–229.
  21. Yang, Z.; Ouyang, L.; Zhang, Z.; Duan, J.; Yu, J.; Wang, H. Visual navigation path extraction of orchard hard pavement based on scanning method and neural network. Comput. Electron. Agric. 2022, 197, 106964.
  22. Zheng, Z.; Hu, Y.; Li, X.; Huang, Y. Autonomous navigation method of jujube catch-and-shake harvesting robot based on convolutional neural networks. Comput. Electron. Agric. 2023, 215, 108469.
  23. Yu, L.; Wang, X.; Hou, Z.; Du, Z.; Zeng, Y.; Mu, Z. Path planning optimization for driverless vehicle in parallel parking integrating radial basis function neural network. Appl. Sci. 2021, 11, 8178.
  24. Dai, P.; Feng, D.; Feng, W.; Cui, J.; Zhang, L. Entry trajectory optimization for hypersonic vehicles based on convex programming and neural network. Aerosp. Sci. Technol. 2023, 137, 108259.
  25. Molina-Leal, A.; Gómez-Espinosa, A.; Escobedo Cabello, J.A.; Cuan-Urquizo, E.; Cruz-Ramírez, S.R. Trajectory planning for a mobile robot in a dynamic environment using an LSTM neural network. Appl. Sci. 2021, 11, 10689.
  26. Lin, Z.; Yue, W.; Huang, J.; Wan, J. Ship trajectory prediction based on the TTCN-attention-GRU model. Electronics 2023, 12, 2556.
  27. Zhou, Q.; Wang, L.; Gao, G.; Kang, B.; Ou, W.; Lu, H. Boundary-guided lightweight semantic segmentation with multi-scale semantic context. IEEE Trans. Multimed. 2024, 26, 7887–7900.
  28. Liu, X.; Gao, P.; Yu, T.; Wang, F.; Yuan, R.Y. CSWin-UNet: Transformer UNet with cross-shaped windows for medical image segmentation. Inf. Fusion 2025, 113, 102634.
  29. Heidari, M.; Kazerouni, A.; Soltany, M.; Azad, R.; Aghdam, E.K.; Cohen-Adad, J.; Merhof, D. HiFormer: Hierarchical multi-scale representations using transformers for medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 6202–6212.
  30. Ma, J.; Li, F.; Wang, B. U-Mamba: Enhancing long-range dependency for biomedical image segmentation. arXiv 2024, arXiv:2401.04722.
Figure 1. Overall structure of the pig carcass cutting path generation and fitting optimization approach.
Figure 2. Segmentation path generation network architecture.
Figure 3. Multi-scale boundary extraction module (MBM module).
Figure 4. Overall structure of the cut-path fitting optimization method with bidirectional recursion.
Figure 5. Schematic diagram of the cutting path.
Figure 6. Double-ended GRU network structure.
Figure 7. Comparison results of actual pig carcass cutting paths. (a–c) (blue) show the experiments on the left half of the pig carcass; (d–f) (red) show the experiments on the right half.
Figure 8. Comparison results of critical points of the cutting path. (a–c) (blue) show the experiments on the left half of the pig carcass; (d–f) (red) show the experiments on the right half.
Figure 9. First path comparison. (a–f) denote six sets of experiments.
Figure 10. Second path comparison. (a–f) denote six sets of experiments.
Figure 11. Third path comparison. (a–f) denote six sets of experiments.
Table 1. Ablation study on loss weights (α, β). (Optimal: Bold).

α     β     F1-Score   IoU
0.5   0.5   0.85       0.82
0.6   0.4   0.89       0.87
0.7   0.3   0.83       0.84
Table 2. Errors in first path generation results. (a)–(f) denote six sets of experiments. (Optimal: Bold).

Group   Method       D_max    D_min   D_avg    RMSE
(a)     BSCNet       16.693   2.118   5.553    6.512
        CSWin-UNet   13.685   1.733   6.224    7.219
        HiFormer     15.806   1.139   5.453    6.282
        BRAU-Net++   16.960   2.374   7.671    9.028
        U-Mamba      10.896   0.711   5.960    6.531
        Ours         3.026    0.323   1.281    1.418
(b)     BSCNet       5.343    0.342   1.825    2.024
        CSWin-UNet   12.158   1.410   6.997    7.419
        HiFormer     12.139   0.969   4.706    5.339
        BRAU-Net++   14.799   1.243   7.953    8.907
        U-Mamba      12.444   1.882   8.769    9.194
        Ours         3.773    0.362   1.608    1.809
(c)     BSCNet       8.944    0.768   4.409    4.791
        CSWin-UNet   23.280   1.589   8.834    10.317
        HiFormer     16.822   2.828   6.421    7.440
        BRAU-Net++   20.538   1.051   11.971   13.027
        U-Mamba      17.879   2.830   9.284    10.239
        Ours         3.864    0.033   1.842    2.005
(d)     BSCNet       26.817   1.098   12.024   14.808
        CSWin-UNet   15.948   2.614   9.075    9.943
        HiFormer     26.240   0.988   7.952    9.655
        BRAU-Net++   34.752   1.042   10.962   15.747
        U-Mamba      21.151   1.942   12.227   13.618
        Ours         2.526    0.194   1.163    1.277
(e)     BSCNet       11.717   1.476   4.378    5.133
        CSWin-UNet   36.653   4.060   17.667   19.120
        HiFormer     16.623   1.803   5.092    6.475
        BRAU-Net++   17.016   1.255   9.272    10.312
        U-Mamba      24.294   1.541   13.348   14.559
        Ours         2.937    0.475   1.925    2.016
(f)     BSCNet       10.333   1.023   6.124    6.466
        CSWin-UNet   19.736   1.109   7.087    8.339
        HiFormer     11.655   1.885   5.460    6.059
        BRAU-Net++   21.565   3.510   10.621   11.281
        U-Mamba      9.549    0.450   4.115    4.446
        Ours         6.404    0.535   1.473    1.582
Table 3. Errors in second path generation results. (a)–(f) denote six sets of experiments. (Optimal: Bold).

Group   Method       D_max    D_min    D_avg    RMSE
(a)     BSCNet       9.597    0.864    4.864    5.138
        CSWin-UNet   11.527   2.031    6.308    6.680
        HiFormer     27.638   1.880    12.564   14.608
        BRAU-Net++   11.437   2.872    7.456    7.745
        U-Mamba      17.173   1.332    9.319    9.186
        Ours         3.269    0.768    1.375    2.034
(b)     BSCNet       8.618    0.693    4.393    5.056
        CSWin-UNet   9.680    1.055    5.012    5.487
        HiFormer     10.562   1.287    5.156    5.688
        BRAU-Net++   10.351   0.820    5.812    6.315
        U-Mamba      10.528   1.681    5.758    6.018
        Ours         3.3734   0.5297   1.7063   1.8503
(c)     BSCNet       8.607    0.854    3.710    4.087
        CSWin-UNet   14.908   1.483    5.523    6.141
        HiFormer     10.428   1.467    5.785    6.261
        BRAU-Net++   9.041    1.051    4.785    5.077
        U-Mamba      8.210    1.589    4.822    5.051
        Ours         3.344    0.857    1.876    1.971
(d)     BSCNet       8.965    0.737    5.114    5.720
        CSWin-UNet   10.842   2.683    5.968    6.368
        HiFormer     10.500   0.976    4.533    5.328
        BRAU-Net++   14.244   1.226    5.946    6.939
        U-Mamba      12.037   1.678    7.674    7.938
        Ours         2.998    0.299    1.571    1.691
(e)     BSCNet       11.477   1.148    5.126    5.855
        CSWin-UNet   12.188   1.218    6.603    7.335
        HiFormer     14.361   1.625    7.211    7.849
        BRAU-Net++   9.021    1.915    5.247    5.535
        U-Mamba      14.620   3.019    6.669    6.974
        Ours         2.952    0.236    1.628    1.768
(f)     BSCNet       11.513   1.486    5.702    6.387
        CSWin-UNet   11.787   0.968    7.064    7.479
        HiFormer     12.952   1.366    7.921    8.313
        BRAU-Net++   12.505   4.411    7.467    7.810
        U-Mamba      8.340    1.510    5.189    5.425
        Ours         5.265    0.674    1.757    1.848
Table 4. Errors in third path generation results. (a)–(f) denote six sets of experiments. (Optimal: Bold).

Group   Method       D_max    D_min   D_avg    RMSE
(a)     BSCNet       8.564    1.857   4.650    4.938
        CSWin-UNet   13.593   3.355   7.488    8.087
        HiFormer     30.152   8.871   15.882   16.670
        BRAU-Net++   14.547   3.035   7.820    8.453
        U-Mamba      15.125   2.279   9.382    10.219
        Ours         6.129    0.941   3.032    2.154
(b)     BSCNet       7.065    1.058   3.578    3.920
        CSWin-UNet   11.345   1.123   5.842    6.306
        HiFormer     12.248   1.002   4.817    5.561
        BRAU-Net++   9.459    1.866   5.101    5.629
        U-Mamba      12.470   1.344   6.115    6.552
        Ours         3.805    0.910   2.297    2.421
(c)     BSCNet       12.494   0.625   3.969    4.712
        CSWin-UNet   45.340   0.839   10.332   13.441
        HiFormer     10.420   1.142   4.926    5.777
        BRAU-Net++   9.258    4.380   15.898   23.422
        U-Mamba      10.244   1.869   3.827    4.142
        Ours         3.087    1.300   2.195    2.246
(d)     BSCNet       5.273    1.944   3.363    3.526
        CSWin-UNet   20.629   1.682   11.910   13.356
        HiFormer     7.696    0.976   5.014    5.405
        BRAU-Net++   14.139   3.547   7.427    7.766
        U-Mamba      10.603   1.834   6.079    6.513
        Ours         2.774    0.618   1.819    1.934
(e)     BSCNet       6.071    0.411   2.653    2.938
        CSWin-UNet   14.775   0.867   7.233    8.390
        HiFormer     8.109    0.680   4.078    4.477
        BRAU-Net++   7.037    1.330   3.691    3.959
        U-Mamba      10.900   4.068   8.018    8.246
        Ours         2.655    0.674   1.708    1.808
(f)     BSCNet       8.862    1.368   3.827    4.289
        CSWin-UNet   9.140    4.847   7.517    7.613
        HiFormer     16.836   1.852   9.518    10.472
        BRAU-Net++   17.513   3.673   9.634    10.372
        U-Mamba      6.166    0.838   3.564    3.896
        Ours         3.411    0.951   2.091    2.196
Table 5. First path ablation experiments (Optimal: Bold).

Method         D_max    D_min     D_avg     RMSE
Baseline       7.8641   18.4166   10.4512   11.3544
Baseline+MBM   7.0248   15.2348   8.7645    9.1269
Baseline+BFM   4.4589   12.4896   6.1267    6.9815
Ours           1.2591   6.1580    3.4557    4.1341
Table 6. Second path ablation experiments (Optimal: Bold).

Method         D_max    D_min     D_avg    RMSE
Baseline       6.1255   15.3278   7.5611   8.2354
Baseline+MBM   6.1189   13.4671   6.7693   7.8956
Baseline+BFM   2.4482   9.1165    5.2289   6.1147
Ours           0.9144   4.5568    2.1793   2.4632
Table 7. Third path ablation experiments (Optimal: Bold).

Method         D_max    D_min     D_avg    RMSE
Baseline       6.8459   12.4782   7.1358   7.9681
Baseline+MBM   5.6587   10.3648   6.1574   6.4589
Baseline+BFM   4.8963   10.1767   5.1147   5.2354
Ours           1.1714   5.4773    2.5537   2.8413
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Cai, L.; Luo, J.; Ban, P. PGF-Net: A Symmetric Cutting Path Generation and Fitting Optimization Method for Pig Carcasses Under Multi-Medium Interference. Symmetry 2025, 17, 1757. https://doi.org/10.3390/sym17101757