Visual Detection of Portunus Survival Based on YOLOV5 and RCN Multi-Parameter Fusion

Single-frame circulation aquaculture is an important category of sustainable agricultural development. To address the visual-detection problem of the survival rate of Portunus in single-frame three-dimensional aquaculture, a fusion recognition algorithm based on YOLOV5 and RCN (RefineContourNet) image recognition of the residual bait ratio, centroid moving distance, and rotation angle is proposed. Based on the three identified parameters and LWLR (Locally Weighted Linear Regression), a survival rate model of Portunus was established for each parameter. The softmax algorithm was then used to obtain a classification and judgment fusion model of the Portunus survival rate. In YOLOV5 recognition of the residual bait and Portunus centroid, the EIOU (Efficient IOU) loss function was used to improve the recognition accuracy of residual bait in target detection. In RCN Portunus edge detection and recognition, an optimized binary cross-entropy loss function based on double thresholds improved the edge clarity of the Portunus contour. The results showed that after optimization, the mAP (mean Average Precision) of YOLOV5 was improved: the precision and mAP (threshold 0.5:0.95:0.05) of recognition of the residual bait and Portunus centroid improved by 2% and 1.8%, respectively. The loss of the optimized RCN training set was reduced by 4%, and the rotation angle of Portunus was obtained from the contour. The experiments show that the recognition accuracy of the survival rate model was 0.920, 0.840, and 0.955 under the single parameters of centroid moving distance, residual bait ratio, and rotation angle, respectively, while the recognition accuracy of the survival rate model after multi-feature-parameter fusion was 0.960. The accuracy of multi-parameter fusion was 5.5% higher than the average single-parameter accuracy.


Introduction
A new agricultural farming model known as single-frame three-dimensional aquaculture was created on the principle of circulating water; it reuses aquaculture water, reduces agricultural energy use and environmental pollution, and realizes the sustainable development of new agricultural technology. With the development of recirculating water culture technology [1], the Portunus three-dimensional apartment culture technology has become widely popular. As shown in Figure 1, each aquaculture frame holds one Portunus; the frame is covered with sand, and the water depth of the frame is approximately 10-20 cm. Portunus are fed regularly and quantitatively using pellet feed or small fish and shrimp as bait. Currently, a large number of daily inspections in factory-based single-frame three-dimensional culture are handled manually. To improve farming efficiency, a new technological method is urgently needed to manage the daily inspections of large-scale three-dimensional aquaculture. On the other hand, machine vision, as an efficient method for automatic detection [2,3], has been widely used for behavioral detection, quantitative feeding, and feeding tracking in smart breeding [4][5][6]. For example, images have been fused with mathematical models to increase the reliability of acquired information, and 3D coordinate models and tracking infrared reflection (IREF) detection devices [7] have been used for fish-behavior detection. Based on machine learning, differences between camera frames [8] are used to determine differences in fish feeding and quantitative feeding indicators. To increase the reliability of information acquisition and to reduce interference, optical flow has been used to extract behavioral features (speed and rotation angle) for feeding tracking [9]. On this basis, this paper addresses the inefficiency of manual inspections of large-scale single-frame three-dimensional culture by studying the survival rate judgment of Portunus based on visual detection. Visual-feature parameters such as Portunus centroid movement, feed residual bait, and Portunus rotation angle in the frame were identified using machine vision technology. Further, the fusion of the three parameters was used to determine the survival rate of the Portunus.
The recognition of residual bait and Portunus centroid belongs to the target detection problem in visual inspection. In deep-learning target detection, a deeper network, richer feature annotation and extraction, and higher computing power are the key technical issues for improving target detection accuracy and real-time efficiency.
Deeper networks can improve the accuracy of target detection: Rauf [10] proposed deepening the number of CNN convolutional layers with reference to the VGG-16 framework [11], extending the CNN from 16 to 32 layers and improving target-recognition accuracy on the Fish-Pak dataset by 15.42% compared to the VGG-16 network. Richer feature annotation and network frameworks can improve learning efficiency: for the problem of aquaculture fish-action recognition, Måløy [12] proposed a CNN framework that combined spatial information and motion features fused with time-series information to provide richer features of fish behavior, achieving an accuracy of 80% in the task of predicting feeding and non-feeding behavior in fish farms. Combining network depth, higher feature-extraction capability, and real-time requirements, the Faster R-CNN [13] and YOLO [14] frameworks have become the two leading algorithmic frameworks for target detection. They are widely used in crab target identification, fish target identification, and water-quality prediction [15][16][17]. Faster R-CNN, with its RPN network, achieves high accuracy: Li [18] initialized the Faster R-CNN network using pretrained Zeiler and Fergus (ZF) models and optimized the convolutional feature-map window of the ZF model network, resulting in improved detection speed and a 15.1% increase in mAP. YOLO is an end-to-end network framework that is fast and highly accurate. In order to identify fish species in deep water, Xu [19] performed experiments based on the YOLOV3 model [20] for fish detection in high-turbidity, high-velocity, and murky water environments, and evaluated fish-detection accuracy on three datasets under marine and hydrokinetic (MHK) environments. The mAP score reached 0.5392 on this dataset, and YOLOV3 improved the target-detection accuracy. Due to unstable data-transmission conditions in fish farms, Cai [21] used MobileNet to replace the Darknet-53 network in YOLOV3, which reduced the model size and computation by a factor of 8-9. The speed of the optimized algorithm was improved, and the mAP increased by 1.73% on the fish dataset. Both Faster R-CNN and YOLO networks can extract deep features from images, but YOLO is an end-to-end algorithmic framework and is faster. In large-scale three-dimensional factory farming, a faster recognition algorithm means real-time detection of targets in aquaculture. Therefore, an improved YOLO network was chosen for the recognition of residual bait and Portunus centroid targets.
The edge-contour (rotation angle) recognition of Portunus belongs to the target-segmentation problem in visual inspection. Aside from the traditional Canny and Pb algorithms, most target-segmentation problems are now solved by deep-learning architectures. Typically, different backbone architectures and information-fusion methods are built on convolutional neural network frameworks such as GoogLeNet and VGG to extract multi-scale features and achieve more accurate edge detection and segmentation. Typical backbone architectures include multi-stream learning, skip-layer network learning, a single model on multiple inputs, training independent networks, holistically-nested networks, etc. In 2015, Xie [22] proposed the Holistically-Nested Edge Detection (HED) algorithm, which uses VGG-16 as the backbone, initializes the network weights using transfer learning, and achieves simple contour segmentation of the target through multi-scale and multi-level feature learning. The HED algorithm achieves an ODS (optimal dataset scale) of 0.782 on the BSDS500 dataset, reflecting better performance on the training set, better segmentation results, and successful target-contour recognition. Subsequently, the Convolutional Oriented Boundaries (COB) algorithm [23] improved on HED by generating multi-scale, contour-oriented, regionally high-level features for sparse boundary-level segmentation, with an ODS of 0.79 on BSDS500, optimizing contour-information feature extraction on the training set. Further, to reduce the number of deep-learning network parameters while maintaining spatial information in segmentation, Badrinarayanan [24] proposed SegNet, an encoder-decoder-based segmentation network architecture; SegNet transfers the max-pooling indices to the decoder, improving segmentation resolution and accuracy. The CEDN (fully convolutional encoder-decoder network) algorithm [25] detects higher-level object contours, further optimizing the encoder-decoder framework for contour detection; CEDN improved the average recall on the PASCAL VOC dataset from 0.62 to 0.67, and its ODS reached 0.79 on the BSDS500 dataset. In 2015, researchers found that the ResNet framework [26] can extend neural-network depth and extract complex features efficiently. In 2019, Kelm [27] proposed the RCN architecture based on ResNet, which performs contour detection using multi-path refinement and fuses mid-level and low-level features in a specific order with a concise and efficient algorithm, thus becoming a leading framework for edge detection. For example, Abdennour [28] proposed a driver-profile recognition system with 99.3% accuracy for facial-profile recognition based on RCN networks.
In summary, the problem is the visual detection of the survival rate for Portunus three-dimensional apartment culture. The main work of this paper is as follows: (i) Based on YOLOV5, the residual bait and Portunus centroid were detected, and the EIOU loss [29] bounding box loss function was adopted on the basis of CIOU (Complete IOU) loss to optimize the accuracy of YOLOV5 in predicting the centroid moving distance of Portunus and the residual bait. (ii) For the Portunus contour target-segmentation problem, the RCN binary cross-entropy loss was optimized into a double-threshold binary cross-entropy loss function to improve the Portunus contour-detection accuracy, and the contour end-point coordinates (and hence the rotation angle) were calculated to improve the accuracy of the Portunus contour curve. (iii) Based on the YOLOV5 and RCN algorithms, the three parameters of Portunus centroid movement, residual bait, and rotation angle were identified, and locally weighted linear regression (LWLR) and the softmax algorithm were used for information fusion to give a comprehensive determination model of the Portunus survival rate.
The main innovation of this paper lies in the visual detection of three parameters that indirectly reflect the survival of the Portunus using machine vision technology, and the establishment of a single-parameter survival determination model and a three-parameter fusion determination model, respectively; thus, a vision-based survival detection method for Portunus is given. Currently, most vision-inspection techniques for Portunus focus on target detection; in the context of three-dimensional culture, a visual-detection method with multi-parameter fusion has not been reported. Meanwhile, this paper uses the YOLOV5 algorithm to detect the centroid moving distance and residual bait of Portunus and the RCN algorithm to detect the rotation angle of Portunus. In residual bait recognition, pellet bait belongs to small-target recognition; this paper uses the EIOU loss function to enhance the training mAP and improve the recognition accuracy of pellet bait and Portunus. In RCN recognition of the Portunus contour rotation angle, this paper proposes a double-threshold loss function to reduce the loss and improve the training accuracy. Finally, a combined three-parameter LWLR fusion determination algorithm is given, and the fused survival recognition accuracy improved by 5.5% relative to the average single-parameter recognition accuracy.

Materials and Methods
In this paper, YOLOV5 and the RCN framework are applied to visually detect the movement of the Portunus centroid, the residual bait, and the rotation angle. In YOLOV5 small-pellet-feed vision detection, YOLOV5 has poor recognition ability for small targets because it usually uses the CIOU bounding box loss function, so detections are missed during the visual inspection of pellet bait and fish and shrimp feed, as shown in Figure 2. In experiments, the YOLOV5 detection algorithm with the CIOU bounding box loss function has a mAP (0.5) of 69.6% for pellet-feed target identification and 95.2% for tiny fish and shrimp target recognition; the average target identification mAP (0.5) over Portunus, pellet feed, and fish and shrimp feed was 88.1%. As a result, this research upgrades the CIOU bounding box loss function to the EIOU loss function in order to increase the detection accuracy of small baited objects such as pellet feed. Furthermore, as shown in Figure 3, after training with 300 epochs in RCN Portunus rotation angle detection, the average loss of the training set and the average loss of the test set were 0.2857 and 0.3071, respectively. The test results showed significant biases compared to the training results, and the Portunus contour prediction was fuzzy, with a missed-detection problem for contours. In order to enhance Portunus target profile recognition, this work proposes a double-threshold loss function based on the RCN algorithm.

Survival Rate Detection YOLOV5 and RCN Framework
(1) Movement of the centroid and identification of residual bait based on YOLOV5. The movement of the Portunus centroid and the residual bait are indirect indicators of whether or not the Portunus is alive. The typical size of a Portunus farm breeding frame is 500 mm × 400 mm, and the known size of the frame helps to locate residual bait and the centroid coordinates of the Portunus. YOLOV5 is an end-to-end algorithmic framework that meets both speed and accuracy requirements for detection. As a result, this article is built on the YOLOV5 framework for target recognition of Portunus centroid movement and residual bait; Figure 4a shows its network architecture.
As illustrated in Figure 4a, the YOLOV5 network structure consists of four major modules: Data, BackBone, PANet, and Output. Its loss comprises three components: the Classes, Objectness, and Location losses, used in identifying and detecting the Portunus and residual bait. The image data were labeled with three different targets using labeling software: Portunus, pellet feed, and fish and shrimp. After labeling, .xml files were acquired and then converted to .txt files. With reference to the PASCAL VOC dataset, a Portunus target detection dataset was generated, of which 900 images were used for training and 100 for validation. After preprocessing the photos to 640 × 640 size, 300 epochs of network training began. To boost training efficiency, the training Portunus dataset can be re-clustered to build anchor templates based on the targets in the training Portunus dataset. In YOLOV5, the preprocessed data enter the BackBone framework, and BackBone accelerates the extraction of Portunus and residual bait characteristics using convolutional networks. After feature extraction, PANet performs same-channel feature fusion and image-size recovery. Lastly, three target detection layers are produced, corresponding to the identification of small, medium, and large targets, enhancing the accuracy of detecting Portunus and residual baits.
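To make the label-conversion step concrete, the following is a minimal sketch of converting one PASCAL-VOC-style .xml annotation into a YOLO .txt label. The class names and file layout are illustrative assumptions, not the authors' exact tooling.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

CLASSES = ["portunus", "pellet_feed", "fish_shrimp"]  # illustrative class names

def voc_to_yolo(xml_path: str, out_dir: str) -> None:
    """Convert one PASCAL-VOC .xml label to a YOLO .txt label.

    YOLO expects one line per object:
    class_id x_center y_center width height, all normalized by image size.
    """
    root = ET.parse(xml_path).getroot()
    iw = float(root.find("size/width").text)
    ih = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls = CLASSES.index(obj.find("name").text)
        box = obj.find("bndbox")
        x1, y1 = float(box.find("xmin").text), float(box.find("ymin").text)
        x2, y2 = float(box.find("xmax").text), float(box.find("ymax").text)
        # Normalized center coordinates and box size.
        lines.append(f"{cls} {(x1 + x2) / 2 / iw:.6f} {(y1 + y2) / 2 / ih:.6f} "
                     f"{(x2 - x1) / iw:.6f} {(y2 - y1) / ih:.6f}")
    out = Path(out_dir) / (Path(xml_path).stem + ".txt")
    out.write_text("\n".join(lines))
```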
(2) Portunus rotation angle detection based on RCN. The angle of rotation of the Portunus is also an indirect indicator of whether or not the Portunus is alive. Due to the three-dimensional nature of single-frame Portunus culture, large movements of the Portunus are unlikely; instead, feeding is usually accompanied by small movements and rotations in many directions. In this paper, the RCN framework is used to detect the Portunus contour, and the end-point coordinate approach is used to determine the rotation angle from the detected end-point image coordinates. Figure 4b shows the architecture of the RCN network.

(3) Computation of the rotation angle using the end-point coordinate approach. Once the RCN has detected the Portunus contour, the end-point coordinate method is used to determine the rotation angle, as illustrated in Figure 5. First, Figure 5a shows the minimum external rectangular box of the Portunus contour, and Figure 5b shows the coordinates of the Portunus tip points based on this rectangular box. The coordinates of the contour tip points A and B, and hence the rotation angle of the Portunus, were then determined, as illustrated in Figure 5c.
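As a sketch of how the contour output can be turned into a rotation angle, the following uses OpenCV's minimum-area rectangle on a binarized contour mask. Taking the tip points as the midpoints of the rectangle's short sides is an assumption consistent with Figure 5, not necessarily the paper's exact computation.

```python
import cv2
import numpy as np

def rotation_angle(contour_mask: np.ndarray) -> float:
    """Estimate the Portunus rotation angle from a binary contour mask.

    Sketch of the end-point coordinate approach: fit the minimum external
    (rotated) rectangle to the largest detected contour, take tip points
    A and B as the midpoints of the two short sides, and measure the
    angle of line AB against the image x-axis.
    """
    contours, _ = cv2.findContours(contour_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no contour found in mask")
    contour = max(contours, key=cv2.contourArea)  # assume largest blob is the Portunus
    rect = cv2.minAreaRect(contour)               # Figure 5a: minimum external box
    box = cv2.boxPoints(rect)                     # four corner points, in order

    # Figure 5b: tip points A and B are the midpoints of the two short sides.
    if np.linalg.norm(box[0] - box[1]) < np.linalg.norm(box[1] - box[2]):
        a, b = (box[0] + box[1]) / 2, (box[2] + box[3]) / 2
    else:
        a, b = (box[1] + box[2]) / 2, (box[3] + box[0]) / 2

    # Figure 5c: rotation angle of line AB relative to the image x-axis.
    return float(np.degrees(np.arctan2(b[1] - a[1], b[0] - a[0])) % 180.0)
```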
(4) For Portunus survival, a single-parameter determination model and a multi-parameter fusion determination model were proposed. This paper used the YOLOV5 and RCN algorithms to identify the three parameters of Portunus centroid moving distance, residual bait ratio, and rotation angle. Next, the single-parameter survival rate determination model of Portunus was created using the LWLR method. Finally, the softmax method was used to produce the regression prediction model of the Portunus survival rate. Figure 6 shows the general structure of the Portunus survival fusion judgment model.

LWLR is a non-parametric machine learning algorithm, and the regression coefficients $\hat{w}$ are fitted to each sample of centroid moving distance, residual bait, and rotation angle. The visual recognition parameters of Portunus centroid moving distance, residual bait ratio, and rotation angle are $c_1$, $c_2$, and $c_3$, respectively. Take the $c_1$ parameter as an example, and randomly select its sample size as 350. Inputting the values of $c_1$ from the target detection gives the feature matrix of a single sample:

$$x_{1c} = [c_{11}, c_{12}, \ldots, c_{1n}]$$

where $n$ denotes the number of sample features and $c_{1n}$ denotes the different features. The characteristic matrix of the training sample set is:

$$X_c = \left[c_1^{(1)}, c_1^{(2)}, \ldots, c_1^{(m)}\right]^{T}$$

where $m = 350$ represents the sample size of the training set and $c_1^{(m)}$ represents the $m$-th sample. The labels of the training samples are:

$$Y_c = \left[y_c^{(1)}, y_c^{(2)}, \ldots, y_c^{(m)}\right]^{T}$$

where $y_c^{(m)}$ represents the true label of each sample. The LWLR prediction for a single feature sample is then:

$$\hat{y} = x_{1c}\,\hat{w}$$

where $\hat{w}$ is the vector of regression coefficients:

$$\hat{w} = \left(X_c^{T} W X_c\right)^{-1} X_c^{T} W Y_c$$

where $W$ is an $m \times m$ diagonal matrix that gives each data point a weight. LWLR uses a "kernel" to give higher weight to nearby points. In this paper, the Gaussian kernel is selected to construct the weight matrix $W$:

$$W(i, i) = \exp\left(-\frac{\left(c_1 - c_1^{(i)}\right)^2}{2k^2}\right)$$

where $W(i, i)$ is the $i$-th diagonal element of $W$ and represents the weight of the $i$-th training sample, with range (0, 1]. Here, $c_1$ represents the reference (query) point and $c_1^{(i)}$ the $i$-th training sample. The parameter $k$ determines how much weight is given to nearby points. Once $k$ is determined, the one-parameter survival rate determination model for $c_1$ is obtained.
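The following is a minimal LWLR sketch of the formulas above, assuming a one-feature design matrix with a bias column; the training data here are synthetic placeholders for illustration only.

```python
import numpy as np

def lwlr_predict(query: float, X: np.ndarray, y: np.ndarray, k: float) -> float:
    """Locally weighted linear regression for one feature parameter.

    X holds m training values of one visual feature (e.g., centroid
    moving distance c1) with a bias column, y holds the survival labels,
    and k is the Gaussian-kernel bandwidth that weights nearby samples.
    """
    m = X.shape[0]
    W = np.eye(m)
    for i in range(m):
        diff = query - X[i, 1]                           # distance to sample c1^(i)
        W[i, i] = np.exp(-(diff ** 2) / (2.0 * k ** 2))  # Gaussian kernel weight
    XtWX = X.T @ W @ X
    if np.linalg.det(XtWX) == 0.0:
        raise np.linalg.LinAlgError("X^T W X is singular; try a larger k")
    w_hat = np.linalg.inv(XtWX) @ X.T @ W @ y            # regression coefficients w-hat
    return float(np.array([1.0, query]) @ w_hat)         # y-hat = x_query * w-hat

# Example with synthetic data: predict survival from a 14 cm centroid move.
rng = np.random.default_rng(0)
c1 = rng.uniform(0, 30, 350)                 # m = 350 training samples
X = np.column_stack([np.ones_like(c1), c1])  # bias column + feature
y = np.clip(c1 / 13.0, 0, 1)                 # toy labels saturating at the 13 cm threshold
print(lwlr_predict(14.0, X, y, k=2.0))
```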

YOLOV5 Boundary Box Loss Function EIOU
The YOLO bounding box loss has evolved through SSE loss, IOU loss [30], GIOU loss [31], DIOU loss [32], and CIOU loss. DIOU addresses two bounding box factors: the overlapping area and the center-point distance. DIOU is shown in Figure 7a, and its expression is:

$$L_{DIOU} = 1 - IOU + \frac{\rho^2\left(b^{pred}, b^{gt}\right)}{d^2}$$

where $b^{pred}$ and $b^{gt}$ represent the center-point coordinates of the predicted box and the real box, respectively; $\rho$ represents the Euclidean distance; and $d$ represents the diagonal distance of the smallest circumscribed rectangular box. DIOU not only solves the large-error problem between the GIOU prediction box and the real box, but also makes the relative distance between the prediction box and the real box explicit. However, there is no aspect-ratio factor in the DIOU function, so the aspect-ratio influence factor $\alpha v$ is added on the basis of DIOU (i.e., CIOU), as shown in Figure 7b. The expression for CIOU is:

$$L_{CIOU} = 1 - IOU + \frac{\rho^2\left(b^{pred}, b^{gt}\right)}{d^2} + \alpha v$$

where $v$ measures the consistency of the aspect ratio:

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w^{pred}}{h^{pred}}\right)^2$$

and $\alpha$ is the coefficient balancing $v$:

$$\alpha = \frac{v}{(1 - IOU) + v}$$

The formula for measuring the aspect ratio in CIOU is relatively complex, and the aspect ratio cannot represent the real differences in the width and height of the bounding box ($w = kw^{gt}$, $h = kh^{gt}$ leads to $v$ being 0). Taking the partial derivatives of $v$ with respect to $w$ and $h$ gives the relationship:

$$\frac{\partial v}{\partial w} = -\frac{h}{w}\,\frac{\partial v}{\partial h}$$

Therefore, width and height can only change in opposite directions during optimization. The aspect-ratio difference reflected by $v$ is not the real difference between width, height, and confidence, so the training and prediction accuracy for tiny objects decreases, or detections are even missed. The EIOU bounding box loss function is therefore proposed on the basis of CIOU to replace the width and height losses of the $\alpha v$ term, as shown in Figure 7c. The improved EIOU formula is:

$$L_{EIOU} = 1 - IOU + \frac{\rho^2\left(b^{pred}, b^{gt}\right)}{d^2} + \frac{\left(w^{pred} - w^{gt}\right)^2}{c_w^2} + \frac{\left(h^{pred} - h^{gt}\right)^2}{c_h^2}$$

where $w^{pred}$ and $w^{gt}$ represent the widths of the predicted and real labels, $h^{pred}$ and $h^{gt}$ represent the heights of the predicted and real labels, and $c_w$ and $c_h$ represent the width and height of the minimum external rectangular frame. EIOU loss directly minimizes the width and height differences between the predicted bounding box and the real bounding box, and has a better positioning and recognition effect for small objects. Moreover, $L_{EIOU}$ accelerates convergence, improves regression accuracy, and increases the mAP for small-object detection. The comparison between CIOU and EIOU recognition is shown in Figure 7d.
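For reference, the EIOU loss above can be sketched as follows for boxes in (x1, y1, x2, y2) format; this follows the published EIOU formulation and is not necessarily identical to the authors' implementation inside YOLOV5.

```python
import torch

def eiou_loss(pred: torch.Tensor, gt: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """EIOU bounding-box loss for (N, 4) boxes in (x1, y1, x2, y2) format.

    EIOU penalizes IOU, center distance, and the width/height differences
    directly (normalized by the enclosing box), instead of CIOU's
    aspect-ratio term alpha*v.
    """
    # Intersection and union.
    x1 = torch.max(pred[:, 0], gt[:, 0]); y1 = torch.max(pred[:, 1], gt[:, 1])
    x2 = torch.min(pred[:, 2], gt[:, 2]); y2 = torch.min(pred[:, 3], gt[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    wp, hp = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wg, hg = gt[:, 2] - gt[:, 0], gt[:, 3] - gt[:, 1]
    iou = inter / (wp * hp + wg * hg - inter + eps)

    # Smallest enclosing box (width c_w, height c_h, squared diagonal d^2).
    cw = torch.max(pred[:, 2], gt[:, 2]) - torch.min(pred[:, 0], gt[:, 0])
    ch = torch.max(pred[:, 3], gt[:, 3]) - torch.min(pred[:, 1], gt[:, 1])
    d2 = cw ** 2 + ch ** 2 + eps

    # Squared center-point distance rho^2(b_pred, b_gt).
    rho2 = ((pred[:, 0] + pred[:, 2]) - (gt[:, 0] + gt[:, 2])) ** 2 / 4 + \
           ((pred[:, 1] + pred[:, 3]) - (gt[:, 1] + gt[:, 3])) ** 2 / 4

    # L_EIOU = 1 - IOU + rho^2/d^2 + (w_p - w_g)^2/c_w^2 + (h_p - h_g)^2/c_h^2
    return (1 - iou + rho2 / d2
            + (wp - wg) ** 2 / (cw ** 2 + eps)
            + (hp - hg) ** 2 / (ch ** 2 + eps))
```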

RCN Double-Threshold Loss Function
The fundamental cause of fuzzy RCN Portunus contour prediction and missed contour detection is insufficient feature-information extraction under the RCN loss function, the weighted binary cross-entropy loss:

$$L\left(h_\theta(x), y\right) = -\,\mathrm{weights}\cdot y \log h_\theta(x) - (1 - y)\log\left(1 - h_\theta(x)\right)$$

where $h_\theta(x) \in [0, 1]$, $y \in \{0, 1\}$, weights = 10, and $h_\theta(x)$ is the predicted probability from which pixel $x$ obtains its binary contour label. When the value of weights is fixed, inadequate feature extraction due to multiple convolutional stacking and upsampling fusion may lead to inaccurate capturing of contour information, blurred contours, or missed contour detection of the Portunus. In this paper, the value of weights in the RCN loss function was optimized through a double-threshold RCN loss function. The double-threshold binary cross-entropy loss keeps the same form but switches the weight according to thresholds on $h_\theta(x)$:

$$\mathrm{weights} = \begin{cases} A, & a < h_\theta(x) < b \\ B, & c < h_\theta(x) < d \end{cases}$$

where $a$, $b$, $c$, and $d$ are threshold hyperparameters, and the values of $A$ and $B$ are 10 and 1, respectively. The double-threshold binary cross-entropy loss function improves edge learning and extraction in convolutional neural networks and can effectively prevent the loss of key contour-edge information; the value of weights is realized by setting the thresholds on $h_\theta(x)$.
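A sketch of the double-threshold idea is given below. Since the exact four-threshold rule (a, b, c, d) is not fully specified here, the sketch uses a simplified single ambiguous band (a, b) inside which the large weight A = 10 applies and B = 1 elsewhere; the threshold values are illustrative.

```python
import torch

def double_threshold_bce(pred: torch.Tensor, target: torch.Tensor,
                         a: float = 0.3, b: float = 0.7,
                         A: float = 10.0, B: float = 1.0) -> torch.Tensor:
    """Double-threshold weighted binary cross-entropy (simplified sketch).

    pred: per-pixel h_theta(x) in [0, 1]; target: binary contour labels.
    Instead of a fixed weight of 10 on contour pixels, the weight is
    switched between A and B according to thresholds on the predicted
    probability, so uncertain contour pixels are weighted more heavily.
    """
    eps = 1e-7
    pred = pred.clamp(eps, 1 - eps)
    # Pixels predicted inside the ambiguous band (a, b) get the large weight A.
    weights = torch.where((pred > a) & (pred < b),
                          torch.full_like(pred, A),
                          torch.full_like(pred, B))
    loss = -(weights * target * torch.log(pred)
             + (1 - target) * torch.log(1 - pred))
    return loss.mean()
```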

LWLR Single-Parameter Survival Determination Model Computation
Actual measurement data were collected at the farm base for the Portunus dataset used in this study. A total of 500 data points were collected for each characteristic parameter: residual bait ratio, centroid moving distance, and rotation angle. Combined with the experience of aquaculture personnel: (i) The smaller the residual bait ratio, the greater the survival probability of Portunus under a certain feeding amount at each stage; the residual bait ratio is negatively correlated with survival. (ii) Since the size of the breeding frame is known (the maximum diagonal distance is 60 cm), the larger the centroid moving distance, the greater the survival probability of Portunus. The threshold value of the centroid moving distance was fixed at 13 cm (greater than the threshold corresponds to 100% survival); the centroid moving distance is positively correlated with survival. (iii) The larger the rotation angle of the Portunus, the higher the survival rate; the rotation angle is positively correlated with survival. The paper chose a rotation angle threshold of 100° (greater than the threshold corresponds to a 100% survival rate). The above three datasets are mutually independent, as shown in Table 2. There are generalized linear relationships between the visual-detection feature parameters and the Portunus survival rate. The value of $k$ in $\hat{w}$ must be calculated in advance when training the single-parameter LWLR determination model, since it decides whether the LWLR model is suitable and is the only hyperparameter that must be established in LWLR. The SSE (Sum of Squared Errors) was applied to both the training and test sets, and parameter regression was conducted. The SSE loss function is:

$$SSE = \sum_{i=1}^{m}\left(\hat{y}_i - y_i\right)^2$$

where $\hat{y}_i$ and $y_i$ are the predicted and true values of the three characteristic parameters, respectively. The training set and test set were divided 7:3 for the residual bait ratio, centroid moving distance, and rotation angle. The text selects the appropriate value of $k$ through the training and test set loss curves, as shown in Figure 9a.
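Reusing the lwlr_predict sketch from above, the bandwidth k can be chosen by sweeping a candidate grid and comparing train/test SSE, a minimal sketch of the curve-based selection described here (the candidate grid is illustrative).

```python
import numpy as np

def select_k(X_train, y_train, X_test, y_test,
             candidates=(0.5, 1.0, 2.0, 4.0, 8.0)):
    """Sweep candidate bandwidths k and report train/test SSE for each.

    The model is fitted with lwlr_predict (defined earlier) on the 70%
    training split, and SSE = sum_i (y_hat_i - y_i)^2 is evaluated on
    both splits; the k balancing both loss curves is kept.
    """
    def sse(X_eval, y_eval, k):
        preds = np.array([lwlr_predict(x[1], X_train, y_train, k)
                          for x in X_eval])
        return float(np.sum((preds - y_eval) ** 2))

    return {k: {"train_sse": sse(X_train, y_train, k),
                "test_sse": sse(X_test, y_test, k)} for k in candidates}
```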

YOLOV5-EIOU Boundary Box Loss Experiment
In the YOLOV5 framework for Portunus residual bait and centroid movement, applying the EIOU bounding box loss can significantly enhance the identification accuracy of tiny pellet bait and fish and shrimp bait, and hence the survival rate determination accuracy of Portunus. In this paper, the YOLOV5 algorithm with EIOU was experimentally verified.

Firstly, the object-detection data came mainly from photos taken before and after feeding. One thousand photos were labeled with three different targets (Portunus, pellet feed, and small fish and shrimp feed) using labeling software, with 90% used for training and 10% for verification. The image set is shown in Figure 10.
The selection of suitable data-clustering center anchors in the YOLOV5 detection algorithm can significantly increase detection accuracy. As a result, prior to training, the k-means approach was used to cluster the bounding box coordinate information of the Portunus dataset in order to determine the best clustering centers. The algorithm flow is as follows: (i) k samples were randomly selected from all samples as the initial cluster centers, without repetition. (ii) The distance of each sample from each cluster center was calculated, and each sample was assigned to the nearest cluster. (iii) The mean value of all samples in each cluster was calculated as the new cluster center. (iv) Steps ii and iii were repeated until there was no change, or little change, in the cluster centers. A sketch of this procedure is given below.
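The following is a minimal sketch of steps (i)-(iv) using the 1-IOU distance discussed in the next paragraph; box widths and heights are assumed to be in pixels, and the seed and iteration budget are illustrative.

```python
import numpy as np

def kmeans_anchors(boxes: np.ndarray, k: int = 9, iters: int = 300) -> np.ndarray:
    """k-means anchor clustering with the 1 - IOU(bboxes, anchors) distance.

    boxes is an (N, 2) array of label widths and heights; the distance
    between a box and a cluster center is 1 - IOU, so a larger IOU means
    a smaller distance.
    """
    def iou(boxes, centers):
        # IOU between axis-aligned (w, h) boxes sharing a common corner.
        inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
                np.minimum(boxes[:, None, 1], centers[None, :, 1])
        area_b = (boxes[:, 0] * boxes[:, 1])[:, None]
        area_c = (centers[:, 0] * centers[:, 1])[None, :]
        return inter / (area_b + area_c - inter)

    rng = np.random.default_rng(0)
    centers = boxes[rng.choice(len(boxes), k, replace=False)]   # (i) random init
    for _ in range(iters):
        assign = np.argmin(1 - iou(boxes, centers), axis=1)     # (ii) nearest cluster
        new = np.array([boxes[assign == j].mean(axis=0)         # (iii) new centers
                        if np.any(assign == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):                           # (iv) converged
            break
        centers = new
    return centers[np.argsort(centers.prod(axis=1))]            # sort small to large
```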

Meanwhile, in the k-means clustering analysis, the distance between bounding boxes and cluster-center anchors was calculated using 1-IOU(bboxes, anchors): the larger the IOU between a bounding box and the matching cluster center, the smaller the distance between them. The clusters identified were [5,5,7,7,11,10], [10,21,23,11,18,18], […,83,160,151] with 1-IOU(bboxes, anchors) clustering, and [6,6,10,10,10,21], [23,11,18,65,59], [91,94,170,131,151,203] with Euclidean-distance clustering. The clustering results are shown in Figure 11a,b. The 1-IOU and Euclidean-distance clustering indices have fitness values of 0.810 and 0.791, respectively; the 1-IOU clustering anchors are more consistent with the realistic dataset sample distribution.

After the clustering analysis, the training and testing configuration data of YOLOV5 residual bait and Portunus centroid recognition are shown in Table 3. Comparing the network performance of YOLOV5 under the CIOU and EIOU loss functions, the mAP (0.5) values for pellet feed and for fish and shrimp after optimized identification using EIOU loss were 71.3% and 96.2%, respectively, as shown in Figure 12a. Compared to the CIOU loss function, the mAP (0.5) values for pellet feed and for fish and shrimp increased by 1.7% and 1%, respectively. Table 4 shows a comparison of the algorithm training outcomes: the precision and the mAP (IOU threshold 0.5:0.95:0.05) of target detection increased by 2% and 1.8%, respectively. The EIOU loss improvement raised the average recognition accuracy over the whole picture, as demonstrated in Figure 12b.

RCN Double-Threshold Binary Cross-Entropy Loss Contour Recognition Experiment
To address the issues of inaccurate contour information, fuzzy contours, and missed contour detection in RCN Portunus contour recognition, this paper proposes a double-threshold binary cross-entropy loss. The experimental picture collection was the same as for YOLOV5 residual bait identification, and the contour dataset was labeled using LabelMe to differentiate between foreground and background and to generate .json contour label files. The top edge of the crab shell is serrated, which can easily produce negative effects such as noise back-propagation during training; as a result, during dataset labeling, the crab shell (with its serrated form) was flattened and tagged. With reference to the number of images in the BSDS500 contour dataset, this research labeled 520 Portunus contour images. The training set, test set, and validation set accounted for 80%, 17%, and 3%, respectively. The image set is shown in Figure 13.

The binary cross-entropy loss-based RCN network was trained for 300 epochs on the labeled image set; the average loss of the training set was 0.2857 and the average loss of the test set was 0.3071. With the improved double-threshold binary cross-entropy loss function, after 300 training epochs the average loss of the training set was 0.2743 and the average loss of the test set was 0.2879, a reduction of 4% compared to before optimization. Moreover, the pre-optimization RCN network required roughly 500 epochs to reduce the loss to 0.23, whereas the optimized RCN reduced the loss to 0.20 after training for 300 epochs. After the algorithm improvements, the RCN network minimized contour blurring and missed contour detection while improving RCN prediction accuracy. Table 5 compares the ODS metrics, OIS metrics, and AP of the CEDN network, the pre-optimized RCN network, and the optimized RCN network in order to demonstrate the benefits of the algorithm enhancement. The optimized RCN algorithm improved the ODS and OIS by 2% and 1.3%, respectively. Finally, comparing the RCN loss function before and after optimization, the contour prediction of Portunus is shown in Figure 14.

Multi-Parameter Survival Judgment Model
Following the image-based identification of the Portunus centroid, rotation angle, and residual bait using the YOLOV5 and RCN algorithms, the residual amount of bait was calculated. In this paper, a single-parameter survival determination model was developed using LWLR, and experiments on the Portunus survival rate under the effect of single characteristics were conducted. The 500 datasets were divided into training and test sets at 7:3. Figure 15a-c show the predicted and real values of the three image-identification feature parameters of centroid moving distance, residual bait ratio, and rotation angle, respectively. As shown in Figure 15, the accuracy of each single-parameter prediction was more than 80%, as listed in Table 6. To further improve the prediction accuracy, the softmax·c algorithm was used to fuse the three feature parameters for prediction. The softmax·c regression prediction of the Portunus survival rate is based on:

$$z_k = \partial \cdot c_k, \qquad \mathrm{softmax}(z_k) = \frac{e^{z_k}}{\sum_{j=1}^{3} e^{z_j}}$$

where $c_k$ is the single-parameter LWLR model survival rate, and the hyperparameter $\partial \in [1, 2)$ is the scaling factor of $z_k$ and serves as the threshold for multi-parameter fusion. Gridding was used to pick several values of $\partial$, and the fusion of the centroid moving distance, residual bait ratio, and Portunus rotation angle feature parameters was used to generate the optimum regression determination model. First, based on the YOLOV5, RCN, and LWLR networks, the survival rate models of the three different characteristic parameters were obtained. Second, the grid {1.10, 1.20, 1.30, 1.40} was used for the values of $\partial$, as illustrated in Table 7. By extracting 350 and 150 images of the 500 sample datasets for training and testing, respectively, the paper conducted the fusion judgment. The experimental findings revealed that combining the three feature parameters had the maximum accuracy in predicting Portunus survival at $\partial$ = 1.10. Figure 16 shows that the multi-parameter fusion predicted values (blue curve) nearly cover the real values (red curve). Moreover, the prediction result of multi-feature-parameter fusion is better than that of any single feature parameter, with the greatest prediction accuracy of 96.0%.
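A minimal sketch of the fusion step follows, under the assumption that z_k = ∂·c_k and that the softmax outputs weight the three single-parameter survival rates; the exact form of the softmax·c model may differ.

```python
import numpy as np

def fuse_survival(c: np.ndarray, alpha: float = 1.10) -> float:
    """Softmax fusion of the three single-parameter survival rates.

    c holds the LWLR outputs for centroid moving distance, residual bait
    ratio, and rotation angle; alpha plays the role of the scaling
    hyperparameter in [1, 2) selected by grid search.
    """
    z = alpha * c                    # scaled single-parameter survival rates z_k
    w = np.exp(z) / np.exp(z).sum()  # softmax weights over the three parameters
    return float(w @ c)              # fused survival-rate prediction

# Example: centroid-distance, residual-bait, and rotation-angle model outputs.
print(fuse_survival(np.array([0.92, 0.84, 0.955]), alpha=1.10))
```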

Discussion
Based on the inspection demand of Portunus single-frame three-dimensional aquaculture, a multi-parameter fusion judgment model of the Portunus survival rate based on machine vision was studied in this paper. Firstly, based on YOLOV5 and the RCN algorithm, the characteristic parameters of centroid moving distance, residual bait ratio, and rotation angle were obtained. Secondly, based on the LWLR and softmax algorithms, the survival rate regression prediction of Portunus was finally obtained.

Conclusions
To detect Portunus survival in real time, the original YOLOV5 and RCN networks missed detections of both pellet feed and contours, so the original algorithms had to be optimized and fused across multiple parameters. Firstly, the k-means clustering algorithm was used to cluster the sizes of the anchors in YOLOV5, which improved the accuracy of the algorithm model. Through EIOU loss, convergence was accelerated and regression accuracy was improved. On the original basis, the mAP (0.5) values for pellet feed and for fish and shrimp increased by 1.7% and 1%, respectively; EIOU loss increased precision by 2% and mAP (threshold 0.5:0.95:0.05) by 1.8%; and the prediction accuracy of the centroid moving distance and residual bait ratio of Portunus was improved. Secondly, the RCN network was optimized using the double-threshold binary cross-entropy loss function, and its final loss could be reduced to 0.20 within 300 training epochs, whereas before optimization the loss only reached 0.23 after 500 epochs; the RCN average loss was reduced by 4%. After the RCN improvement, the ODS contour-recognition index increased by 2%, the probability of blurred or missing contour predictions was reduced, and the rotation angle of Portunus was obtained. Finally, based on the LWLR and softmax prediction algorithms, multi-parameter fusion was carried out. The recognition accuracy for the survival rate of Portunus was 0.920 for the residual bait ratio parameter, 0.840 for the centroid moving distance parameter, 0.955 for the rotation angle parameter, and 0.960 for multi-parameter fusion; the accuracy of multi-parameter fusion was 5.5% higher than the average single-parameter accuracy. Therefore, multi-parameter fusion greatly improves the accuracy of judging whether the Portunus is alive.

Conflicts of Interest:
The authors declare no conflict of interest.