Article

Design of Monitoring System for River Crab Feeding Platform Based on Machine Vision

1
School of Electrical and Information Engineering, Jiangsu University, Zhenjiang 212013, China
2
Key Laboratory of Facility Agriculture Measurement, Control Technology and Equipment for Machinery Industry, Zhenjiang 212013, China
3
Key Laboratory of Smart Agricultural Technology (Yangtze River Delta), Ministry of Agriculture and Rural Affairs, P.R. China, Nanjing 210044, China
*
Authors to whom correspondence should be addressed.
Fishes 2026, 11(2), 88; https://doi.org/10.3390/fishes11020088
Submission received: 29 December 2025 / Revised: 22 January 2026 / Accepted: 26 January 2026 / Published: 1 February 2026
(This article belongs to the Section Fishery Facilities, Equipment, and Information Technology)

Abstract

Bait costs constitute 40–50% of the total expenditure in river crab aquaculture, highlighting the critical need for accurately assessing crab growth and scientifically determining optimal feeding regimes across different farming stages. Current traditional methods rely on periodic manual sampling to monitor growth status and artificial feeding platforms to observe consumption and adjust bait input. These approaches are inefficient, disruptive to crab growth, and fail to provide comprehensive growth data. Therefore, this study proposes a machine vision-based monitoring system for river crab feeding platforms. Firstly, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm is applied to enhance underwater images of river crabs. Subsequently, an improved YOLOv11 (You Only Look Once) model is introduced and applied for multi-target detection and counting in crab ponds, enabling the extraction of information related to both river crabs and bait. Concurrently, underwater environmental parameters are monitored in real-time via an integrated environmental information sensing system. Finally, an information processing platform is established to facilitate data sharing under a “detection–processing–distribution” workflow. Experiments at a real crab farm show that the river crab mass estimation error remained below 9.57%, while the detection rates for both corn and pellet baits consistently exceeded 90% across varying conditions. These results indicate that the proposed system significantly enhances farming efficiency, elevates the level of automation, and provides technological support for the river crab aquaculture industry.
Key Contribution: (1) To address challenges such as low contrast and target occlusion in underwater images, we enhance the YOLOv11 algorithm through image enhancement, network optimization, and loss function refinement, achieving accurate detection of crabs and bait. (2) To overcome the inefficiency and invasiveness of manual measurement, a computer vision-based method is proposed for real-time, noncontact crab weight estimation and bait detection. (3) An integrated monitoring system combining visual detection, environmental sensing, and a cloud platform is developed to support intelligent management and science-based feeding in river crab aquaculture.

1. Introduction

As the aquaculture sector continues to grow, aquatic products have not only enhanced the nutritional profile of human diets and invigorated rural economies but also contributed to food security and ecological sustainability. River crab farming, predominantly situated along China’s eastern coastline and in rivers and lakes connected to the sea, has emerged as a vital component of China’s agricultural economy. However, the global agricultural sector is increasingly challenged by labor shortages and escalating labor costs. This issue is particularly acute in China, where population aging and structural demographic shifts are exacerbating the situation. To address these challenges, smart agricultural technologies with advanced operational capabilities are garnering increasing attention from researchers across various countries and disciplines [1].
In addition to labor-related challenges, feed waste represents a pervasive and pressing issue in traditional aquaculture [2]. Excess feed in river crab farming not only causes significant economic losses but also leads to water eutrophication and aquatic environment degradation. Conversely, insufficient feeding can provoke aggressive behavior among crabs, causing injury and mortality that ultimately reduce yield. Therefore, accurate and timely monitoring of growth status, feed consumption, and water quality in crab cultivation ponds is essential for improving aquaculture efficiency and sustainability.
At present, machine vision techniques have been extensively utilized in agricultural applications [3,4,5,6]. However, in aquaculture, the primary approach for monitoring river crab growth, feed intake, and pond environmental conditions remains reliant on manual sampling, as shown in Figure 1a. This process comprises two main components: river crab sampling and feed residue assessment. As shown in Figure 1b,c, river crab sampling involves manually harvesting individuals from a designated area within the pond and measuring their body weight to estimate the current average mass of the population. Feed assessment is conducted by regularly examining the amount of residual feed on feeding trays and adjusting subsequent feeding rates accordingly. In addition, aquaculture conditions in crab ponds are commonly assessed by measuring essential environmental factors, including temperature, pH, and dissolved oxygen, using manual sensing devices or basic sensors [7].
However, the current manual monitoring approach suffers from several significant limitations. The unique challenges of the underwater aquaculture environment prevent farmers from directly observing target species’ growth and population size, leading to a disconnect between actual stocking density and feeding practices. This often results in a mismatch between feed supply and actual demand. In this context, machine vision technology holds significant potential for aquaculture applications. Although underwater target detection remains challenging, various solutions have been proposed in recent years. Currently, representative object detection models such as RCNN [8,9], SSD [10], and the YOLO series (v1–v8) [11,12] have been widely adopted. For instance, Banan et al. [13] developed a neural network capable of distinguishing multiple fish species with 100% accuracy; however, the method demands substantial training data and computational resources, and may lack generalizability. Allken et al. [14] applied RetinaNet to analyze underwater videos captured inside trawl nets, achieving detailed image analysis but suffering from performance degradation under poor lighting.
Other researchers have explored alternative approaches for underwater target identification. By integrating speckle detection with Gaussian mixture models and Kalman filtering, Albuquerque et al. [15] developed a fish-counting method that improved accuracy and reduced costs but exhibited sensitivity to environmental noise. Han et al. [16] developed a CNN-based system incorporating max-RGB and gray-world assumptions for image pre-processing, combined with a fusion strategy to enhance detection. While effective in their specific underwater robot application, the method underperformed on other datasets. By embedding multi-directional edge information into an SSD framework with ResNet50, Hu et al. [17] developed a feature-enhanced sea urchin detection method, which improved small-target detection and confidence, though real-time performance remained inadequate.
More recently, a Swin Transformer backbone combined with an optimized path aggregation network was incorporated into YOLOv5 by Lei et al. [18], resulting in better multi-scale feature fusion and improved underwater performance, though at the cost of a larger model. Siripattanadilok et al. [19] applied a Faster R-CNN model enhanced by Grad-CAM for crab detection—including partially obscured individuals—with 98–99% accuracy, though performance varied significantly under different lighting conditions. Zhang et al. [20] proposed YOLOF for soft-shell crab detection, raising the average precision by 5.4% over YOLOv5s, yet the model’s increased parameter size and lack of lightweight design limit its practical deployment.
Beyond the detection and differentiation of river crabs and other aquatic organisms, accurate monitoring of residual bait plays a crucial role in achieving precision aquaculture. To address this challenge, researchers have developed several methodological approaches. The first approach involves physical instrumentation for detecting uneaten bait. Llorens et al. [21] employed a scientific single-beam echosounder to quantify uneaten feed particles within aquaculture floating cages, providing a foundation for automated feeding systems. However, the performance of such echo-based detectors is highly susceptible to environmental variables, including water quality, temperature, and salinity. The second category utilizes pixel segmentation techniques for bait identification. Liu et al. [22] developed an adaptive Otsu thresholding combined with a linear time-component labeling algorithm to calculate the remaining bait. Li et al. [23] introduced a histogram-based adaptive thresholding approach for detecting fish feed in individual underwater images. Their approach incorporated an improved Otsu algorithm to segment images into foreground and background regions. Nevertheless, the detection speed of these methods requires further enhancement. The third strategy employs object detectors for bait monitoring. Hu et al. [24] developed an enhanced YOLOv4-based model for detecting uneaten particles, which reduced model complexity and improved underwater bait-particle detection accuracy by modifying the feature network structure and the residual connectivity of CSPDenseNets. However, it places relatively high demands on device computing power.
To overcome the above limitations, a machine vision-based monitoring system for automated river crab feeding is developed in this study. The primary contributions include:
(1)
An innovative automated framework for aquaculture. This work presents a fully automated, electronically controlled feeding platform for river crab farming, engineered to modernize production methods in the industry.
(2)
Enhanced adaptability to complex conditions. Acknowledging that river crabs are most active during night and early morning, the system integrates the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. This preprocessing step enhances underwater image quality, ensuring stable detection capability across a wide range of lighting scenarios.
(3)
A task-adaptive detection model for non-invasive growth assessment. We propose an online, real-time method to estimate the body length and weight of crabs. To comprehensively assess crab growth and feeding status, the system employs a YOLOv11 model specifically optimized for this task. Key enhancements—including a lightweight backbone, an attention mechanism, and a refined loss function—collectively boost the accuracy of multi-object detection and counting in underwater pond environments.
(4)
A closed-loop information system for users. This work implements a comprehensive cyber-physical system that collects, processes, and visualizes complex data. The integrated platform empowers users with easily accessible information and provides a strong, evidence-based reference for management decisions.

2. Materials and Methods

2.1. Overall System Design

An integrated hardware–software framework is established to support system implementation and experimental validation.
The hardware configuration consisted of: (1) a laptop equipped with an AMD Ryzen R9-6900HX processor (3.3 GHz), 16 GB of RAM, and an NVIDIA RTX 3070 Ti GPU, which was used for model training and computational tasks; (2) a cloud server (AMD EPYC 7K62 @ 2.6 GHz, Windows Server 2012) for hosting the information platform; (3) a custom aluminum feeding platform; and (4) two Android mobile phones for field testing.
The software workflow is divided into three stages. Initially, multiple algorithms are developed and tested on a Windows 11 platform, where PyCharm 2022.3.3 is used for development and PyTorch 2.4.0 for deep learning implementation. Subsequently, the cloud-based information platform is built and deployed on the Windows Server 2012 cloud server. Finally, the trained models are deployed on the mobile devices to conduct experimental analysis in real environments.
Figure 2 illustrates the overall workflow of the river crab feeding platform system. An optical camera acquires data on various targets, including river crabs, bait, and reference objects (Figure 2a). The proposed monitoring system for the river crab feeding platform is built upon a hardware structure that ensures high processing power, enabling reliable operation in both well-lit and poorly lit environments (Figure 2b). The data is used to identify, count, and measure the size of the crabs (Figure 2c). All acquired information is then transmitted to a cloud server, where it is stored in a database, and finally displayed on user terminals. In poorly lit scenarios, low-quality images or videos are first pre-processed to enhance data quality before the same analysis pipeline is executed. Furthermore, the platform incorporates an underwater environment monitoring system (Figure 2d). This subsystem, centered around an embedded chip and supplemented by various sensors, collects environmental data from different crab ponds. The collected data is similarly transferred to the cloud server for storage and is made accessible to multiple types of terminals (Figure 2e). Aquaculture personnel and managers can then access this information through their devices, which present the data in a clear and intuitive interface (Figure 2f).
The physical model of the electronic feeding platform is initially designed in SolidWorks 2023 and subsequently fabricated from stainless steel plates and tubes. The base is a 1 m × 1 m stainless steel plate covered with a white plastic sheet to minimize interference with experimental results. A rectangular frame, approximately 0.8 m in height, is constructed around the perimeter using stainless steel tubes. A 0.5 m × 0.5 m stainless steel waterproof platform is mounted atop this frame to secure the electrical equipment and ensure system reliability. The camera is installed on the upper part of the frame, while the power supply and other components are installed on the waterproof platform, guaranteeing long-term monitoring of river crab feeding activities. The mechanical structure of the electronic feeding platform is shown in Figure 3.
Under adequate lighting, images and videos are captured by optical cameras or mobile devices for information extraction. In low-light conditions, underwater images undergo a pre-processing sequence using the CLAHE algorithm for color and contrast enhancement. This pipeline produces a high-quality, composite output, establishing a robust foundation for multi-target recognition.
The processed images are then annotated using LabelImg to generate TXT-format label files containing target and coordinate information, thereby creating the dataset. The targets are divided into four categories: crab, particle, corn, and reference object. To facilitate the estimation of shell dimensions and mass, 25 mm diameter nickel-plated steel-core coins are employed as reference objects, as they are easily identifiable while posing no interference to feeding crabs. The reference objects are randomly placed on the platform to ensure they can be detected. The annotation process adheres to a standardized workflow and protocol designed to ensure consistency and reliability. This encompasses clear definitions for all categories (crab, particle, corn, and reference objects), the use of tight bounding boxes for each instance, and a consistent approach to handling partially visible or overlapping objects. To maintain inter-annotator consistency, multiple annotators independently annotated subsets of the images and cross-checked and resolved differences to ensure high consistency among annotators. To achieve efficient training and reliable evaluation, the dataset is divided into a training set, validation set and test set in a ratio of 7:2:1. This dataset contains 710 underwater images, including 294 annotated crab instances, 36,056 particle instances, 772 corn instances and 1576 reference object instances. To obtain accurate crab size data across growth stages, a correction function is derived from reference object variations to compensate for optical distortions introduced by the cameras. Based on extensive experimentation, a model correlating shell length, shell width, and mass is developed to estimate crab weight. The resulting mass data is uploaded to the cloud server for database storage. 
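For reproducibility, the 7:2:1 split described above can be sketched as follows; the file-naming pattern and random seed are illustrative, not taken from the study.

```python
import random

def split_dataset(image_files, train=0.7, val=0.2, seed=42):
    """Shuffle and split a list of image files 7:2:1 into train/val/test."""
    files = sorted(image_files)
    random.Random(seed).shuffle(files)  # deterministic shuffle for reproducibility
    n = len(files)
    n_train = round(n * train)
    n_val = round(n * val)
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])

# 710 underwater images, as in the dataset described above
images = [f"img_{i:04d}.jpg" for i in range(710)]
train_set, val_set, test_set = split_dataset(images)
print(len(train_set), len(val_set), len(test_set))  # → 497 142 71
```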
Finally, the underwater environmental monitoring system, centered around an embedded chip, is deployed to track key parameters including temperature, pH, and dissolved oxygen levels.

2.2. Machine Vision Technology Development

2.2.1. Underwater Image Processing

Image quality is significantly impaired in crab ponds, whose depths generally range from 0.6 to 1.5 m, due to the complex underwater environment. Captured images often suffer from low contrast, low brightness, and a blurred, foggy appearance, making them unsuitable for direct target recognition analysis. Suspended plankton and impurities further obscure critical features. Moreover, activity from the crabs and the operation of oxygenators introduce numerous bubbles, creating a “marine snow” effect and false features that complicate analysis. As high-quality images are essential for accurately understanding underwater scenes, this poses a significant practical challenge.
Figure 4 illustrates this by comparing images from well-lit and poorly lit environments, analyzing their pixel value distributions through histograms and 3D distribution maps.
To overcome issues including poor contrast, insufficient brightness, and color distortion in underwater imagery, the proposed system employs CLAHE.
CLAHE incorporates a contrast-limiting mechanism within localized regions to prevent noise amplification. Pixels exceeding a predefined contrast threshold are clipped before equalization, and the excess contrast is redistributed across the remaining histogram bins. After per-tile equalization, bilinear interpolation between neighboring tiles maintains visual coherence and a natural appearance. Figure 5a shows the original image, whereas Figure 5b illustrates the processed result.
To evaluate the effect of image preprocessing, the YOLOv11 algorithm is applied for detection, as shown in Figure 6. Detection performance on the preprocessed images is clearly improved.

2.2.2. Multi-Target Detection Based on Improved YOLOv11

In this study, YOLOv11 is adopted as the baseline detection framework due to its efficient single-stage inference paradigm, which is suitable for real-time monitoring in aquaculture environments. As opposed to two-stage models like Faster R-CNN [25], YOLO-based approaches perform end-to-end prediction of object locations and classes without relying on a separate region proposal step, which significantly reduces computational complexity.
To further improve efficiency and deployment feasibility, the proposed model introduces a lightweight backbone network to enhance feature extraction while reducing parameter count and computational cost. Additionally, a Triplet Attention module is placed ahead of the SPPF layer to strengthen feature discrimination across spatial and channel dimensions. The improved YOLOv11 framework, designed to optimize efficiency alongside detection accuracy, maintains a favorable equilibrium between real-time operation and robustness. Its full network architecture is illustrated in Figure 7.
FasterNet-Based Backbone Network Improvements
Considering the limited computational resources typically available in aquaculture monitoring equipment, FasterNet [26] is selected as the feature extraction backbone in this work. FasterNet employs Partial Convolution (PConv) to reduce redundant spatial computations while maintaining effective feature representation, enabling high inference efficiency on a wide range of hardware platforms. By leveraging this lightweight design, the proposed model achieves fast processing speed without compromising detection accuracy, which makes it especially appropriate for real-time monitoring in underwater environments. The structural design of FasterNet is shown in Figure 8.
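The core idea of FasterNet’s Partial Convolution—convolving only a fraction of the channels and passing the rest through untouched—can be sketched in PyTorch as below. The 1/4 channel ratio is the default reported for FasterNet; the module here is an illustrative simplification, not the exact block used in this study.

```python
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Partial Convolution (PConv): spatial conv on a channel subset only."""

    def __init__(self, channels, ratio=0.25, kernel=3):
        super().__init__()
        self.conv_ch = int(channels * ratio)  # channels that get convolved
        self.conv = nn.Conv2d(self.conv_ch, self.conv_ch, kernel,
                              padding=kernel // 2, bias=False)

    def forward(self, x):
        # Convolve the first conv_ch channels; the remainder is an identity
        # path, which is what saves redundant spatial computation.
        x1, x2 = x[:, :self.conv_ch], x[:, self.conv_ch:]
        return torch.cat([self.conv(x1), x2], dim=1)

x = torch.randn(1, 16, 8, 8)
y = PartialConv(16)(x)  # same shape; only 4 of 16 channels were convolved
```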
Triplet Attention Mechanism
To further enhance feature representation under complex underwater conditions, the proposed network integrates a Triplet Attention mechanism [27]. This module enables cross-dimensional interaction by modeling dependencies across spatial and channel dimensions, allowing the detector to emphasize informative regions while suppressing irrelevant background noise. By applying attention-based feature recalibration prior to subsequent feature aggregation, the model improves its sensitivity to small, partially occluded targets commonly encountered in river crab detection scenarios. Figure 9 presents a detailed view of the Triplet Attention mechanism’s architecture.
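The Triplet Attention design of Misra et al. can be sketched in PyTorch as follows. This is a simplified reimplementation for illustration—the kernel size of 7 and the equal averaging of the three branches follow the original paper’s defaults, not necessarily the exact module integrated here.

```python
import torch
import torch.nn as nn

class ZPool(nn.Module):
    """Concatenate max- and mean-pooling along the channel dimension."""
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True)[0],
                          x.mean(dim=1, keepdim=True)], dim=1)

class AttentionGate(nn.Module):
    """Z-pool -> 7x7 conv -> sigmoid gate applied to the input."""
    def __init__(self, kernel=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel, padding=kernel // 2, bias=False)
        self.bn = nn.BatchNorm2d(1)

    def forward(self, x):
        return x * torch.sigmoid(self.bn(self.conv(self.pool(x))))

class TripletAttention(nn.Module):
    """Cross-dimensional attention over (C,W), (C,H) and (H,W) interactions."""
    def __init__(self):
        super().__init__()
        self.cw, self.hc, self.hw = AttentionGate(), AttentionGate(), AttentionGate()

    def forward(self, x):
        # Branch 1: rotate so H plays the channel role, gate, rotate back
        x1 = self.cw(x.permute(0, 2, 1, 3).contiguous()).permute(0, 2, 1, 3)
        # Branch 2: rotate so W plays the channel role, gate, rotate back
        x2 = self.hc(x.permute(0, 3, 2, 1).contiguous()).permute(0, 3, 2, 1)
        # Branch 3: plain spatial attention
        x3 = self.hw(x)
        return (x1 + x2 + x3) / 3.0

feat = torch.randn(2, 16, 32, 32)
out = TripletAttention()(feat)  # recalibrated features, same shape
```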
SDIoU Loss Function
Compared with conventional IoU loss, which focuses solely on the overlapping region, the Shape-Distance IoU (SDIoU) introduced by Wang et al. [28] offers a more holistic geometric evaluation. SDIoU combines three essential components: the degree of overlap between bounding boxes, the distance between their center points, and differences in aspect ratio. By incorporating these factors, the loss function becomes more responsive to geometric variations in small objects and closely positioned targets, thereby enhancing bounding box regression accuracy. The computation procedure is described as follows:
r_D = D_w / D_l,   r_T = T_w / T_l    (1)
r_S = S_D / S_T if S_T > S_D,  S_T / S_D if S_T < S_D    (2)
d = 1 − ρ²(center_D, center_T) / c²    (3)
SDIoU = |D ∩ T| / |D ∪ T| + r_D · r_T · r_S · d    (4)
In these formulations, D represents the detection bounding box and T the ground-truth (tracking) bounding box. S_D and S_T denote their areas; D is characterized by its length D_l and width D_w, and T by its length T_l and width T_w. c² is the squared diagonal length of the minimum enclosing box covering both D and T, and ρ²(center_D, center_T) is the squared Euclidean distance between the centroids of the two boxes.
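The SDIoU computation above can be transcribed directly for axis-aligned boxes given as (x1, y1, x2, y2). This is an illustrative sketch rather than the authors’ implementation; a training loss would typically be derived from it (e.g., 1 − SDIoU).

```python
def sdiou(box_d, box_t):
    """SDIoU for two axis-aligned boxes (x1, y1, x2, y2): IoU plus an
    aspect-ratio term scaled by a normalized center-distance term."""
    dx1, dy1, dx2, dy2 = box_d
    tx1, ty1, tx2, ty2 = box_t
    dl, dw = dx2 - dx1, dy2 - dy1          # length (x extent), width (y extent)
    tl, tw = tx2 - tx1, ty2 - ty1
    s_d, s_t = dl * dw, tl * tw            # box areas

    r_d, r_t = dw / dl, tw / tl            # aspect ratios of each box
    r_s = min(s_d, s_t) / max(s_d, s_t)    # area ratio, always <= 1

    # 1 - squared center distance over squared enclosing-box diagonal
    cx_d, cy_d = (dx1 + dx2) / 2, (dy1 + dy2) / 2
    cx_t, cy_t = (tx1 + tx2) / 2, (ty1 + ty2) / 2
    rho2 = (cx_d - cx_t) ** 2 + (cy_d - cy_t) ** 2
    ex1, ey1 = min(dx1, tx1), min(dy1, ty1)
    ex2, ey2 = max(dx2, tx2), max(dy2, ty2)
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    d = 1 - rho2 / c2

    # IoU of the two boxes
    ix = max(0.0, min(dx2, tx2) - max(dx1, tx1))
    iy = max(0.0, min(dy2, ty2) - max(dy1, ty1))
    inter = ix * iy
    iou = inter / (s_d + s_t - inter)
    return iou + r_d * r_t * r_s * d
```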

2.2.3. Ablation Experiment and Performance Evaluation

The performance of the proposed enhanced YOLOv11 framework is validated through a set of ablation studies carried out on a self-collected dataset. Model evaluation is conducted using several metrics, including the number of parameters, GFLOPs, Precision, Recall, mean Average Precision (mAP), and frames per second (FPS). All experiments are executed under the same training settings, with an input image size of 640 × 640, a batch size of 16, and 300 training epochs. Precision (P) is defined as the proportion of correctly predicted positive samples among all predicted positives, whereas Recall (R) denotes the ratio of correctly identified positives to the total number of ground-truth positives, as formulated in Equations (5) and (6). The mAP metric reflects the average detection accuracy across all validation categories. In addition, FPS is used to characterize inference efficiency, indicating the image processing throughput of the network.
P = TP / (TP + FP)    (5)
R = TP / (TP + FN)    (6)
where TP denotes the number of true positive samples, FP represents false positives, and FN indicates false negatives.
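These two metrics follow directly from the confusion counts; the numbers below are illustrative, not results from this study.

```python
def precision_recall(tp, fp, fn):
    """Precision and Recall from true-positive, false-positive and
    false-negative detection counts."""
    p = tp / (tp + fp)   # fraction of predicted positives that are correct
    r = tp / (tp + fn)   # fraction of ground-truth positives that are found
    return p, r

# e.g. 90 correct detections, 10 false alarms, 30 missed targets
p, r = precision_recall(tp=90, fp=10, fn=30)
print(f"P = {p:.2f}, R = {r:.2f}")  # → P = 0.90, R = 0.75
```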
Table 1 presents the comparative evaluation results of integrating different lightweight backbone networks into the YOLOv11 framework, including FasterNet, MobileNetV3, and ShuffleNet. The results show that the FasterNet-based model achieves the best balance between detection accuracy and computational efficiency: with only 1.58 M parameters and 3.7 GFLOPs, it reaches a mAP of 97.5% while maintaining a high inference speed of 122 FPS. In contrast, the MobileNetV3 backbone incurs a slightly higher computational cost with slightly lower detection performance, yielding a mAP of 96.9% at 119 FPS. Although ShuffleNet achieves the highest inference speed (133 FPS) and the lowest computational complexity among the evaluated backbones, its detection accuracy drops markedly. These results indicate that FasterNet offers stronger feature representation for complex underwater scenes while remaining suitable for real-time deployment on resource-constrained devices.
Table 2 summarizes the ablation results, showing how each architectural modification affects model performance. By replacing the original Backbone with FasterNet, the model’s complexity was substantially decreased, with parameters and FLOPs reduced by 38.8% and 41.3%, respectively. This change also improved the inference speed, increasing FPS by 20, albeit at a minor cost to precision and recall, which decreased by 0.6 percentage points. Incorporating the Triplet Attention module subsequently improved detection performance, with precision rising by 0.4 percentage points and recall rising by 0.2 percentage points, but at the cost of higher computational overhead, reducing FPS by 15. Finally, replacing the loss function with SDIoU further enhanced the model, yielding additional gains of 0.2 percentage points in precision, a significant 2.0 percentage points in recall, and a 5 FPS improvement in speed.
A comparative assessment against multiple leading detectors—YOLOv3 [29], YOLOv4 [30], YOLOv5, PPYOLO [31], YOLOR [32], and YOLOv8—was conducted to further validate the effectiveness of the proposed approach. As summarized in Table 3, our proposed YOLOv11 model demonstrates a superior balance between efficiency and performance. Although precision and recall see only minimal enhancement, the model demonstrates a notably reduced size and increased FPS relative to YOLOv5, PPYOLO, and YOLOv8, thereby balancing accuracy with computational efficiency.
A comparison of detection performance under two lighting conditions is provided in Figure 10. In the well-lit environment (a-1), the original model (a-2) is prone to generating false positives for small, densely distributed impurities and exhibits repeated detections of crab targets. In contrast, the improved model (a-3) effectively suppresses incorrect responses to tiny impurities and eliminates redundant detections, yielding cleaner and more reliable outputs. Under dim illumination (b-1), compared with the original model (b-2), the refined model (b-3) delivers more stable detection of corn kernels partially occluded by river crabs, demonstrating enhanced robustness in challenging low-light conditions.

2.2.4. River Crab Size Detection

The procedure for river crab size detection utilizes the YOLOv11 network to identify the crab and output its pixel-based dimensions, subsequently applying a correction algorithm to calculate the true shell length, width, and estimated mass. The six key steps are as follows.
Step 1: The (x, y) coordinates of the bounding box’s top-left corner are extracted by detecting the reference object. In the YOLOv11 framework, the image origin is defined as the top-left pixel (0, 0), as shown in Figure 11a. The target’s pixel dimensions are then recovered by rescaling the algorithm’s normalized outputs by the known image width and height.
Step 2: The pixel measurements of the reference object are translated into real-world units via a distortion correction function. To mitigate the inherent optical distortions from the camera, we established empirical correction functions for the X and Y axes through extensive calibration. The functions used to correct the pixel dimensions of the reference object are provided below:
w = 180.998 + 208.061 × 0.999^x + 0.082x    (7)
h = 138.209 + 161.609 × 0.999^y + 0.044y    (8)
where x and y refer to the horizontal and vertical pixel coordinates, respectively, and w and h correspond to the pixel-based values of the reference length and reference width.
Step 3: Given the known true dimensions of the reference object, scale factors k_1 and k_2 for the X and Y axes are calculated separately. Let w_r represent the true physical length of the reference and w its corresponding pixel length in the image; likewise, let h_r represent its true physical width and h its corresponding pixel width. The scale factor k_1 for the X-axis (length) and k_2 for the Y-axis (width) are then computed as follows:
k_1 = w / w_r    (9)
k_2 = h / h_r    (10)
Step 4: The shell length and shell width of the river crab are measured. Corrected dimensions are obtained using image pixel calibration functions along the X-axis (Figure 11b) and Y-axis (Figure 11c).
Step 5: The actual shell length and width are calculated from the proportional relationship between pixel values and physical dimensions within the same image. Let l_c and w_c represent the actual shell length and width of the river crab, respectively, and l_cp and w_cp the corresponding pixel values in the image. The shell length and width are then computed as:
l_c = l_cp / k_1    (11)
w_c = w_cp / k_2    (12)
Step 6: The mass of individual river crabs is calculated from a regression function relating mass to shell length and width. The function is fitted in a series of experiments in which the shell length, shell width, and body weight of 80 juvenile crabs (male-to-female ratio 5:5) are measured (Figure 11d). Following the allometric growth law in biology, a composite power-function model is selected, since this form naturally captures the nonlinear relative-growth relationship between crab body weight and body size. Compared with purely mathematical linear or polynomial approximations, the model not only fits well but its parameters also carry clearer physiological meaning. The resulting relationship is expressed as follows:
m = 0.144 × l_c^3.551 + 1.353 × w_c^0.874    (13)
where m denotes the mass of the river crab.
The R² value of this relationship is 0.98, indicating a good fit. The residual plot and Q-Q plot are shown in Figure 12: most residuals are distributed around y = 0 in the residual plot, and in the Q-Q plot the residuals fall close to the diagonal line, confirming the quality of the fit.
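Steps 2–6 above translate directly into code. The sketch below uses the numeric constants of the correction functions, scale factors, and mass model given in this section, but the unit convention is an assumption: the reference diameter is taken in centimeters (2.5 cm for the 25 mm coin), which makes shell dimensions come out in centimeters and mass in grams for typical crab sizes. The example pixel inputs are illustrative.

```python
def correct_reference_pixels(x, y):
    """Distortion-corrected pixel length/width of the reference object
    located at pixel position (x, y), per the correction functions above."""
    w = 180.998 + 208.061 * 0.999 ** x + 0.082 * x
    h = 138.209 + 161.609 * 0.999 ** y + 0.044 * y
    return w, h

def estimate_crab_mass(x, y, l_pix, w_pix, ref_len=2.5, ref_wid=2.5):
    """Estimate crab mass from its pixel dimensions and a coin reference.

    ref_len/ref_wid are the reference's true size; 2.5 (cm) for the
    25 mm coin is an assumed unit choice, not stated in the paper.
    """
    w, h = correct_reference_pixels(x, y)
    k1, k2 = w / ref_len, h / ref_wid      # scale factors: pixels per unit
    l_c = l_pix / k1                       # actual shell length
    w_c = w_pix / k2                       # actual shell width
    # Composite power-function (allometric) mass model
    return 0.144 * l_c ** 3.551 + 1.353 * w_c ** 0.874

# Illustrative call: reference detected at (320, 240), crab spans 800 x 700 px
mass = estimate_crab_mass(320, 240, l_pix=800, w_pix=700)
```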

2.3. Underwater Environmental Monitoring System Design

Currently, aquaculture practitioners primarily rely on detached sensors or manual observation to assess the underwater environment, often resulting in fragmented and cumbersome data collection. To address these limitations, we developed an underwater environmental monitoring system for real-time tracking of temperature, pH, and dissolved oxygen in crab ponds. The system consolidates this information and transmits it to a cloud database, providing farming users with convenient and continuous access.
Our proposed system is built around an STM32F407ZGT6 microcontroller and integrates temperature, pH, and dissolved oxygen sensors as input interfaces. Data are transmitted wirelessly via a 4G communication module. The system offers broad coverage, strong adaptability, and straightforward maintenance, enabling it to effectively support monitoring tasks characterized by scattered, widely distributed, mobile, or highly real-time data acquisition requirements. The framework for the underwater environmental detection system is shown in Figure 13a.
The pH sensor comprises a reference electrode—filled with a constant potential silver chloride solution—and a glass electrode containing a solution of known pH. This design ensures low noise and highly stable operation. The dissolved oxygen sensor simultaneously measures two key parameters: temperature and dissolved oxygen (Figure 13b). It operates based on the fluorescence quenching principle. Compared with traditional methods, the fluorescence approach supports long-term, continuous monitoring and is widely adopted owing to its high sensitivity and extended service life. Collected data are transmitted via an RS485 serial port driven by an SP3485 chip, enabling real-time online data acquisition on the cloud server.
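The RS485 link delivers raw register values that must be scaled to physical units before upload. The paper does not specify the frame format, so the parser below assumes a hypothetical Modbus-RTU-style read response in which temperature and dissolved oxygen arrive as big-endian 16-bit registers scaled by 0.1 °C and 0.01 mg/L; the layout and scale factors are assumptions for illustration only.

```python
import struct

def parse_do_sensor_frame(frame: bytes) -> dict:
    """Parse a hypothetical Modbus-RTU read response from the dissolved
    oxygen sensor: addr(1) func(1) byte-count(1) temp_reg(2) do_reg(2) crc(2).
    Scale factors (0.1 degC, 0.01 mg/L) are assumed, not from the paper."""
    if len(frame) < 9 or frame[2] != 4:
        raise ValueError("unexpected frame layout")
    temp_raw, do_raw = struct.unpack(">HH", frame[3:7])
    return {
        "temperature_c": temp_raw / 10.0,
        "dissolved_oxygen_mg_l": do_raw / 100.0,
    }

# Example frame: addr 0x01, func 0x03, 4 data bytes,
# temp = 0x00FA (25.0 degC), DO = 0x0320 (8.00 mg/L), dummy CRC.
frame = bytes([0x01, 0x03, 0x04, 0x00, 0xFA, 0x03, 0x20, 0xAA, 0xBB])
reading = parse_do_sensor_frame(frame)
```

On the real device this decoding would run on the STM32 side (or on the cloud server after 4G transmission); the Python version here only demonstrates the register-to-unit conversion step.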

2.4. Information Platform Construction

The information platform, serving as the central hub of the entire system, is built on the SSM framework (Spring 5.2.0, Spring Boot 2.3.0, MyBatis-Plus 3.3.0). It uses the HTTP protocol to communicate with the mobile APP and WEB clients, and the Socket protocol to interact with the acquisition terminals. The WEB and APP design drawings are shown in Figure 14. The platform receives data collected by the YOLOv11-based detection application, processes and aggregates this information—including real-time counts of targets such as river crabs and bait particles—and distributes the results to the display APP and WEB clients. Additionally, it applies correction functions to refine river crab size measurements and disseminates the corrected data. User registration information submitted via the display APP and WEB terminal is also stored on the platform and can be modified by administrative personnel.
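The "detection–processing–distribution" workflow implies a fixed message schema between the detection terminal and the platform. A minimal sketch of such a message is shown below; the field names, the correction function, and the numbers are hypothetical, since the paper does not publish its wire format.

```python
import json

def build_detection_payload(crab_count, avg_mass_g, corn_count, pellet_count,
                            correct=lambda m: m):
    """Assemble the JSON message a detection terminal could push to the
    platform over the Socket channel. Field names are hypothetical; in the
    paper, a correction function refines the size/mass estimate."""
    return json.dumps({
        "crab_count": crab_count,
        "avg_mass_g": round(correct(avg_mass_g), 2),
        "corn_count": corn_count,
        "pellet_count": pellet_count,
    })

# Toy correction function standing in for the platform's calibration step.
payload = build_detection_payload(12, 48.367, corn_count=45, pellet_count=92,
                                  correct=lambda m: 1.02 * m)
```

Keeping the terminal-side payload a flat JSON object makes it easy for the Spring platform to deserialize, store, and redistribute the same record to both WEB and APP clients.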
From a technical perspective, the platform leverages Spring for its Inversion of Control (IOC) and Aspect-Oriented Programming (AOP) capabilities, Spring Boot for its starter dependencies and auto-configuration that simplify development and deployment, and MyBatis-Plus as an enhanced toolkit to further improve MyBatis efficiency.

3. Results

3.1. Experiments

To assess both the operational performance of the proposed system and the accuracy of its algorithms, a complete monitoring system was implemented, comprising the platform mechanical structure, a water quality detection system, a WEB display interface, and an APP display interface.
As shown in Figure 15a, the system achieves real-time collection of water quality monitoring data, which is synchronously displayed on both the mobile APP and the computer WEB interface. The system was developed on the Android platform, enabling portable control and monitoring for crab farming staff. The application consists of two core computer vision modules: target recognition and object sizing. It captures images via the mobile device’s camera and utilizes the deployed improved YOLOv11 weights to identify and classify targets. The detection results are then uploaded to a cloud server and stored in a database, providing farm operators with near real-time access to the detection data. The actual test environment is shown in Figure 15b.
On the WEB side, the data visualization interface (Figure 15c) integrates key parameters transmitted from the testing APP and the water quality system, including temperature, pH, dissolved oxygen, river crab mass, and feed quantity. The user management interface (Figure 15d) displays the profiles of registered users, while Figure 15e shows the corresponding data status within the platform’s database.
On the mobile APP side, three main interfaces are presented: Figure 15f displays the received parameters, including the average mass of river crabs and the counts of corn and pellet baits; Figure 15g displays real-time environmental data (temperature, pH, dissolved oxygen) from the water quality system; and Figure 15h provides the user operation interface.

3.2. Quality Estimation and Bait Quantity Detection

Table 4 presents the measurement outcomes for a randomly sampled group of ten river crabs. The relative estimation errors for individual mass measurements span from 2.18% to 9.57%, yielding an average deviation of 6.1%. All errors fall within an acceptable threshold of 10%, thereby validating the method’s precision and dependability.
Six groups of bait detection experiments involving two feed types (pellets and corn) were analyzed, with the results summarized in Table 5. The detection rate for pellets ranged from 95% to 98% under well-lit conditions and from 92% to 96% under low-light conditions. For corn, the detection rate remained above 90% in both lighting scenarios, ranging from 94% to 96% in well-lit environments and from 90% to 94% in low-light environments. The missed detection rate was therefore kept below 10% across all experiments.
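These summary figures can be reproduced directly from the tabulated counts. A short sketch, using the values reported in Tables 4 and 5:

```python
# Relative mass-error ratios for the ten sampled crabs (Table 4, %).
errors = [6.48, 6.17, 7.53, 2.18, 3.10, 8.87, 4.88, 2.79, 9.23, 9.57]
avg_error = sum(errors) / len(errors)  # reported as ~6.1%

# Worst corn case, low light (Table 5, group 1): 45 of 50 particles detected.
detection_rate = 100 * 45 / 50  # 90%
missed_rate = 100 - detection_rate  # the <10% bound cited in the text
```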

4. Discussion

To address the challenges of feed waste and low manual inspection efficiency in traditional aquaculture, this study introduces a comprehensive monitoring system for river crab feeding platforms based on machine vision. Compared with mainstream object detection models, the proposed system demonstrates superior overall performance in detecting river crabs and baits. Experimental results indicate that the model achieves an mAP of 98.2%, outperforming YOLOv5 (95.1%) and YOLOv8 (97.6%). Moreover, it significantly reduces computational demands, requiring only 1.58 M parameters and 3.7 GFLOPs—substantially lower than models such as YOLOv4 (64.4 M parameters, 142.8 GFLOPs) and PPYOLO (45.1 M parameters, 45.1 GFLOPs). This efficiency is primarily attributed to the integration of FasterNet as a lightweight backbone, which effectively minimizes spatial redundancy through partial convolution. Furthermore, the incorporation of a triplet attention mechanism enhances feature discriminability, enabling bait detection rates above 90% under varying lighting conditions. The use of the SDIoU loss function further improves localization accuracy for small and closely spaced targets. Compared to the baseline model (93.9%), the recall rate is notably increased to 95.5%. Together, these architectural contributions ensure the system can be deployed in real-time on mobile and edge devices—a critical advantage for in situ aquaculture monitoring, where hardware resources are often limited. At the system level, the proposed cloud-based monitoring framework is deployed and operated in a real farming environment, where visual and environmental data are continuously transmitted to the cloud platform and successfully visualized on WEB and mobile terminals, demonstrating stable operation and reliable data communication under normal working conditions.
The estimation results for crab size and mass further demonstrate the practical value of the system. In the weight estimation validation experiment, the average estimation error was 6.1%, and no individual error exceeded 9.57%. The vision-based method therefore shows potential to replace traditional manual sampling.
Currently, many intelligent feeding systems for aquaculture offer relatively limited functionality, focusing on individual tasks such as organism or feed detection, which makes it difficult for them to form a comprehensive feeding decision support system. Meanwhile, some large-scale integrated devices based on Internet of Things technology suffer from high cost and inconvenient deployment. The platform proposed in this study uses a flexible electronic feeding platform as the carrier, integrating machine vision-based biomass estimation, residual feed detection, and environmental monitoring into a unified closed-loop system. This not only builds a more comprehensive farming perception framework but also effectively controls the overall cost through a lightweight and scalable design.
Despite these promising results, several limitations should be noted. First, the underwater environment in crab ponds is highly dynamic: suspended debris, algal blooms, and bubbles can degrade image quality and interfere with target recognition, particularly for small or partially occluded feed particles. Although CLAHE improves contrast and visibility, it may be insufficient under extreme turbidity. Second, the current system does not identify or track individual crabs, which could lead to repeated measurements of the same individual and bias population-level growth statistics. Additionally, the mass estimation model was developed with a limited sample size under specific breeding conditions, which may constrain its generalizability to other regions or crab species. Moreover, overlap between crabs during feeding was rare in our experiments owing to the stocking density and feeding method used, but this issue should be anticipated in practical farming. Changes in crab appearance during molting and the detection of crabs at the edges of the image frame were also not considered, which may affect detection performance. Finally, a systematic quantitative assessment of system reliability, communication latency, and fault modes was beyond the scope of this study; these aspects will be explored in subsequent work to enhance the robustness and scalability of the system.
Future work will focus on addressing these limitations by incorporating more advanced underwater image enhancement techniques and multi-object tracking algorithms, thereby enabling individual crab identification and long-term growth analysis. To enhance model robustness, we will significantly expand the dataset. This includes images captured across different seasons, water conditions, and breeding environments. We will also specifically incorporate challenging cases such as overlapping or molting crabs, as well as scenes with suspended debris, algae, and bubbles. Furthermore, time-filtering and multi-object tracking strategies—like trajectory consistency analysis and frame-to-frame confidence aggregation—will be implemented to improve tracking reliability. These measures will enhance both the detection and quality assessment models. Using multiple camera angles will help minimize the accuracy loss that occurs when crabs are located near the edge of a single camera’s field of view, leading to more reliable assessments. Another promising direction is the development of adaptive feeding strategies driven by historical data and predictive analytics, which could further enhance feeding precision and support intelligent aquaculture management.

5. Conclusions

To overcome the limitations of labor-intensive manual observation and unreliable data acquisition in aquaculture, this paper proposed a machine vision-based platform for monitoring river crab feeding. This system is designed to track the growth of river crabs and assess leftover bait in underwater pond environments. The system integrates an optimized YOLOv11 detection algorithm with CLAHE-based image preprocessing to increase recognition accuracy for river crabs and bait. In practical tests, the relative error in crab mass estimation was less than 9.57%, while the detection rate for both corn and pellets exceeded 90% across different environmental conditions. The results confirm that the system can reliably monitor crab growth and bait presence, showing strong potential for deployment in real farming applications. Furthermore, the system enables farmers to access relevant parameters in real-time via a mobile application and a WEB interface, providing actionable guidance for feeding management.
Despite its promising performance, the proposed method has certain limitations. First, the complex underwater environment, including floating debris that can occlude visual information, poses challenges to image clarity. Our current image preprocessing is limited to clarification and color equalization, which may be insufficient in highly turbid waters with abundant impurities—conditions that can affect the detection accuracy for small and multi-species bait targets. Second, during crab size measurement, we did not classify crabs by size, and repeated measurements of the same individual may have occurred, potentially introducing bias into population-level statistics. In future work, we plan to address these issues by developing more robust image enhancement techniques and introducing individual crab tracking to avoid repeated measurements. To enhance the system’s generalizability and scalability, future research will incorporate crabs spanning various growth stages and reared under different environmental conditions.

Author Contributions

Conceptualization, Y.S.; methodology, Z.L.; software, B.Y.; validation, D.Z.; formal analysis, Z.Y.; investigation, B.Y.; data curation, Y.C.; resources, Z.Y.; writing—original draft preparation, Y.S.; writing—review and editing, Z.L.; visualization, B.Y.; supervision, D.Z.; project administration, N.R.; funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 62173162); the Open Fund Project of the Key Laboratory of Smart Agricultural Technology (Yangtze River Delta), Ministry of Agriculture and Rural Affairs, P.R. China (Grant No. KSAT-YRD2024003); Jiangsu Provincial Agricultural Machinery Research, Manufacturing, Promotion and Application Integration Pilot Project (JSYTH14); the Priority Academic Program Development of Jiangsu Higher Education Institutions (Grant No. PAPD-2022-87).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Schematic diagram of traditional river crab manual proofing. (a) Manual proofing equipment; (b) river crab proofing; (c) river crab weighing.
Figure 2. The workflow of the river crab feeding platform system.
Figure 3. Mechanical structure of the electronic feeding platform.
Figure 4. Images acquired for different environments and their pixel counts. (a) Original image of underwater crab; (b) 3-channel pixel histogram; (c) 3D color distribution map.
Figure 5. Comparison of before and after image processing. (a) Original image; (b) processed image.
Figure 6. Comparison of before and after image enhancement and its detection results. (a) Original image; (b) detection results of the original image; (c) processed image; (d) detection results of the processed image.
Figure 7. The structure of improved YOLOv11 network.
Figure 8. The structure of FasterNet.
Figure 9. Triplet attention mechanism module.
Figure 10. Detection results of images in different environments. (a) The comparisons of test results in well-lit environments; (b) the comparisons of test results in poorly lit environments.
Figure 11. Image correction function relationship. (a) Object measurement diagram; (b) pixel X-axis correction function; (c) pixel Y-axis correction function; (d) river crab mass equation.
Figure 12. Q-Q plot and residual plot of the quality estimation regression function. (a) Q-Q plot; (b) residual plot.
Figure 13. Underwater environmental monitoring platforms. (a) Framework of the underwater environmental detection system; (b) physical view of the dissolved oxygen sensor. The labels on the white housing read: brown wire, positive 12–24 V; black wire, negative 12–24 V.
Figure 14. WEB and APP design drawings. (a) WEB design interface; (b) app design interface.
Figure 15. System experimental demonstration diagram. (a) Experimental test equipment; (b) experimental test environment; (c) WEB display data; (d) WEB System Management; (e) database content; (f) average mass in APP; (g) environmental parameters in APP; (h) user Interface in APP.
Table 1. Results of lightweight network comparison experiments.

Model | Parameters/M | FLOPs/G | Precision/% | Recall/% | mAP/% | FPS
YOLOv11-FasterNet | 1.58 | 3.7 | 94.2 | 93.3 | 97.5 | 122
YOLOv11-MobileNetV3 | 1.73 | 3.9 | 93.9 | 91.9 | 96.9 | 119
YOLOv11-ShuffleNet | 1.21 | 3.3 | 92.8 | 91.8 | 95.8 | 133
Table 2. Ablation experiment results of the proposed YOLOv11 variants.

Model | Parameters/M | FLOPs/G | Precision/% | Recall/% | mAP/% | FPS
YOLOv11 | 2.58 | 6.3 | 94.8 | 93.9 | 97.9 | 102
YOLOv11-FasterNet | 1.58 | 3.7 | 94.2 | 93.3 | 97.5 | 122
YOLOv11-FasterNet-Triplet | 1.58 | 3.7 | 94.6 | 93.5 | 97.6 | 107
YOLOv11-FasterNet-Triplet-SDIoU | 1.58 | 3.7 | 94.8 | 95.5 | 98.2 | 112
Table 3. Detection parameters of comparison algorithms.

Algorithms | Parameters/M | FLOPs/G | Precision/% | Recall/% | mAP/% | FPS
YOLOv3 | 61.5 | 154.6 | 92.1 | 93.1 | 92.6 | 39
YOLOv4 | 64.4 | 142.8 | 93.9 | 94.2 | 94.3 | 52
YOLOv5 | 12.3 | 16.1 | 94.6 | 95.2 | 95.1 | 58
PPYOLO | 45.1 | 45.1 | 94.3 | 94.9 | 94.5 | 46
YOLOR | 52.9 | 120.4 | 94.1 | 95.4 | 95.6 | 31
YOLOv8 | 3.0 | 18.1 | 94.6 | 95.3 | 97.6 | 88
Ours | 1.58 | 3.7 | 94.8 | 95.5 | 98.2 | 112
Table 4. Quality test results for river crabs.

Number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Experiment shell length/cm | 4.8 | 5.7 | 3.8 | 4.9 | 5.1 | 4.6 | 5.9 | 3.8 | 4.6 | 5.5
Experiment shell width/cm | 4.3 | 5.6 | 3.4 | 4.4 | 4.5 | 4.1 | 4.9 | 3.3 | 3.9 | 4.8
Experiment mass/g | 42.69 | 75.77 | 20.45 | 45.66 | 51.98 | 37.18 | 84.18 | 20.35 | 36.98 | 66.7
Real shell length/cm | 4.6 | 5.8 | 3.7 | 5 | 5 | 4.6 | 5.8 | 3.7 | 4.5 | 5.6
Real shell width/cm | 4.2 | 5.3 | 3.2 | 4.4 | 4.2 | 4.2 | 5.1 | 3.3 | 3.8 | 5
Real mass/g | 40.09 | 80.75 | 19.02 | 46.68 | 53.64 | 40.80 | 80.26 | 19.8 | 33.86 | 73.76
Mass error ratio/% | 6.48 | 6.17 | 7.53 | 2.18 | 3.10 | 8.87 | 4.88 | 2.79 | 9.23 | 9.57
Table 5. Test results for pellets and corn.

Category | Number | Experimental Feeds | Detections (Well-Lit) | Detection Rate/% (Well-Lit) | Detections (Poorly Lit) | Detection Rate/% (Poorly Lit)
Pellets | 1 | 100 | 98 | 98 | 92 | 92
Pellets | 2 | 100 | 95 | 95 | 94 | 94
Pellets | 3 | 100 | 96 | 96 | 96 | 96
Corn | 1 | 50 | 47 | 94 | 45 | 90
Corn | 2 | 50 | 48 | 96 | 45 | 90
Corn | 3 | 50 | 48 | 96 | 47 | 94

Share and Cite

MDPI and ACS Style

Sun, Y.; Li, Z.; Yang, Z.; Yuan, B.; Zhao, D.; Ren, N.; Cheng, Y. Design of Monitoring System for River Crab Feeding Platform Based on Machine Vision. Fishes 2026, 11, 88. https://doi.org/10.3390/fishes11020088
