Review

Death Detection and Removal in High-Density Animal Farming: Technologies, Integration, Challenges, and Prospects

Department of Engineering, China Agricultural University, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Agriculture 2025, 15(21), 2249; https://doi.org/10.3390/agriculture15212249
Submission received: 22 September 2025 / Revised: 23 October 2025 / Accepted: 24 October 2025 / Published: 28 October 2025

Abstract

In high-density commercial farms, the timely detection and removal of dead animals are essential to maintain animal well-being and ensure farm productivity. This review systematically synthesizes 128 published studies, 52 of which are directly related to death detection, covering diverse animal species and farming scenarios, and examines existing research on death detection methods, dead body removal systems, and their integration. The death detection process is divided into three key stages: data acquisition, dataset establishment, and data processing. Inspection systems are categorized into fixed and mobile systems, both enabling autonomous imaging for death detection. Regarding removal systems, current research predominantly focuses on hardware design for poultry and aquaculture, but real-farm validation remains limited. Key priorities for future development include enhancing the robustness and adaptability of detection models with high-quality datasets, exploring more feasible removal system designs that adapt to diverse farm conditions, and improving the integration of inspection and removal systems to achieve fully automated detection-removal operations. Ultimately, the successful application of these technologies will reduce labor dependence, enhance biosecurity, and support sustainable, large-scale, high-density animal farming while ensuring both satisfactory production and animal welfare.

1. Introduction

Livestock, poultry, and fishery farming play a crucial role in global food production, contributing significantly to food security and providing high-quality animal protein to ensure dietary balance. As the world’s population is projected to reach 9.8 billion by 2050 [1], the demand for animal-based products, particularly meat, eggs, and dairy, is rising steadily. Consequently, large-scale, high-density animal farms have emerged to meet the increasing demand. However, as animal farms have expanded in both number and stocking density, several issues have arisen. One of the most pressing is managing the health and well-being of animals in such high-density environments. Among the various concerns related to animal health, the timely detection and removal of dead bodies is particularly essential. Although poultry pathology suggests that a reasonable daily loss of chickens is below 0.05% [2], the accumulation of deceased animals in the farming environment can lead to numerous problems, such as disease transmission, environmental contamination, and economic losses. In addition, the presence of dead animals can negatively affect the quality of life of the remaining animals, increasing stress levels and potentially impairing their growth.
Traditionally, dead bodies on animal farms are removed by human workers. Breeders must visually assess the health of animals, relying solely on their breeding experience, to identify and remove deceased individuals. This has always been a labor-intensive, time-consuming, and error-prone task, not to mention the risk of disease spread and economic losses caused by delayed detection. Therefore, as the animal farming industry has grown, manual methods have become increasingly impractical.
The modernization of animal agriculture has spurred advancements in automation, with technologies such as robotics, machine vision, and artificial intelligence driving transformative changes in breeding management [3]. Along with these advancements, the detection and removal of dead bodies have become more intelligent and automated. Over the past two decades, machine vision-based solutions for death detection tasks have been proposed, with continuous refinements following the evolution of image processing techniques. Methods developed within the machine learning community have been successfully applied in the agricultural sector, particularly in death detection scenarios, achieving excellent results with high accuracy and detection speed. Inspection robots with autonomous navigation systems are also widely applied for collecting image data to train these networks, which in turn enhances the robustness of the models. Moreover, dedicated designs have been proposed to automatically pick up dead animals. Despite these innovations, progress remains fragmented, with domain-specific improvements in hardware or software rarely converging into cohesive frameworks. As technology develops and the industry advances as a whole, these designs must balance cost and performance to arrive at solutions suitable for commercial application, with separate improvements across multiple fields jointly advancing the detection-removal process.
Previously available reviews cover the application of agricultural robotics to poultry production [3], machine learning techniques in poultry health and welfare [4], challenges and opportunities for robotics in poultry farming [5], machine vision technology in fish classification [6], and comparative analysis of the YOLO series for dead chicken detection [7]. While some of these reviews include substantial content related to death detection, their focus is often limited to either the software model (detection networks) or the hardware design (robotic systems), and they often contain few details regarding each specific farming scenario. A critical gap remains in systematically synthesizing the interplay between sensing technologies, data acquisition protocols, detection models, and removal systems in the specific scenario of dead animal detection in high-density commercial farms. This review addresses this gap by proposing a holistic framework that bridges these components, offering practical insights for researchers and practitioners to develop integrated, scalable solutions tailored to high-density farming environments. To ensure the comprehensiveness of the work, 128 papers are systematically synthesized, 52 of which are directly related to the death detection topic. These core articles, published between 2009 and 2025, capture the overall development and current state of death detection research, covering diverse animal species as well as farming scenarios. Ultimately, innovations, key trends, and limitations across sensing technologies, popular detection algorithms, and removal mechanisms are identified and summarized.
As shown in Figure 1, death detection and removal are two sequential tasks, since the removal action depends on detection models for locating the deceased. In this article, we divide the detection process into three stages: data acquisition, dataset establishment, and data processing. Each stage is discussed separately, focusing on the specific devices used and typical working scenarios. Existing removal devices are also reviewed and organized, presenting available designs to inform future research. The overall structure of the article is organized as follows. Section 2 discusses specific death detection methods, providing details on how original datasets are established and how detection models are designed and tested. Section 3 presents multiple existing designs of inspection devices, while Section 4 presents removal devices and their integration with detection models. Section 5 contains a discussion based on our findings from the reviewed works. Finally, Section 6 provides our thoughts on current challenges and recommendations for future research. The species-specific detection tasks discussed in this review are presented in Figure 2.

2. Death Detection Methods

2.1. Data Acquisition Methods

Vision-based death detection primarily relies on image data. However, there is a significant scarcity of high-quality datasets containing images of deceased farm animals in high-density farming environments. To address this gap, most researchers resort to collaborating with commercial farms or constructing laboratory-simulated environments to acquire images from scratch and establish their own datasets. The data acquisition devices primarily include a camera (RGB, thermal-infrared (TIR), near-infrared (NIR) camera, etc.) deployed through handheld operation, fixed installations (tripod/ceiling-mounted), or mobile platforms (inspection robot, drone, etc.). The selection of these acquisition devices, parameters, and approaches is affected by the real-world farming scenario or the simulated experimental environment [8].
To review the data acquisition methods employed for death detection purposes, the advantages, limitations, and typical working scenarios of the three most widely used methods are summarized and presented in this section (Section 2.1).

2.1.1. Manual Data Collection

Manual image acquisition involves collecting image data using handheld imaging sensors. This method is characterized by its flexibility and cost-effectiveness, enabling real-time adjustments to accommodate various working scenarios. It proves particularly advantageous in complex environments where images collected by fixed cameras lack diversity, and inspection robots encounter operational difficulties.
For instance, given the complexities of the maritime environment and safety concerns, it is challenging and expensive to install temporary cameras solely for research data collection at sea. In response, Wang [9] addressed this issue by collecting images of deep-sea fish cages using a handheld NIKKOR 24X camera (Nikon Corporation, Tokyo, Japan), accompanied by professional staff in the South China Sea. Though only a limited number of images were taken during the voyage, Wang expanded the dataset with additional images captured on the seashore and in a simulated laboratory setting.
Another scenario in which the manual image collection method proves superior occurs when the targeted samples are rare or infrequent, making large-scale data acquisition inefficient. According to poultry pathology, the reasonable daily loss of chickens is less than 0.05% [2]. In such cases, Luo et al. [10] collected images of dead chickens manually after locating them through human inspection, despite having a self-developed mobile chassis for automated image collection in their experimental commercial farm. Similarly, Hao et al. [11] collected images of dead chickens manually as the training dataset, while using images collected with an autonomous inspection platform to supplement and validate the detection model. Facing the complex environment of chicken coops, such as poor lighting and serious occlusion, Ma et al. [12] adopted two image collection methods, one of which utilized a Xiaomi 13 smartphone camera, offering great portability and flexibility.
It is worth noting that when employing this approach, most researchers ensure consistency across different sets of images by maintaining uniform or similar imaging angles, heights, and camera parameters during image collection. For instance, when collecting infrared images of caged chickens with a handheld infrared thermal sensor, Jia et al. [13] standardized the acquisition process by keeping the camera in auto mode at all times, maintaining a distance of 10–15 cm from the cage, and shooting at a 45° angle relative to the horizontal plane. This setup helped maintain high image quality and internal consistency within the dataset. Similarly, Peng et al. [14] manually collected images of deceased pigs from a distance of 1000–1600 mm to ensure that the pigs filled most of the frame while being fully captured.

2.1.2. Fixed Imaging Systems

Fixed imaging sensors, such as those mounted on tripods or ceilings, collect imaging data from a fixed location or height. Whether using videos from existing overhead surveillance cameras or installing additional ones for research purposes, these systems are easy to construct and are adopted in various scenarios.
Some researchers taking this approach chose to mount cameras at a 90° top-view angle to minimize occlusion caused by cages and animal bodies [15,16]. Others opted for a tilted top-down view to simulate human eye observations and the typical camera angles of inspection robots [17,18,19]. In aquaculture-related death detection studies, researchers also favored a tilted top view to obtain a more comprehensive view of the fish tank while reducing the influence of water surface reflection [20,21,22].
Apart from the advantages mentioned above, cameras mounted on tripods can keep image sets at a fixed location, providing the necessary consistency. For instance, Zhu et al. [23] fixed cameras in front of chicken cages to capture two sequential images at a five-minute interval. They extracted center radiating vectors as contour features of the chicken. Since their SVM model took the vector differences between image pairs as input, it was essential that these differences originated solely from the chicken’s voluntary movements, rather than from variations in camera angle or other external factors. Similarly, taking immobility as the indicator of rabbit deaths, Duan et al. [24] fixed an industrial camera at a height of 1 m and a 45° angle to capture optical flow information of the rabbits. They ensured that the camera stayed in the same position on a tripod for each set of images.
When using this method, the height at which the cameras are fixed is an essential parameter. Bist et al. [25] highlighted the influence of camera heights on death-detection accuracy in their research. They installed eight night-vision network cameras to monitor cage-free hen mortality, with six cameras mounted 3 m above the litter floor, one at 0.5 m, and another at 1 m above ground height. The model’s performance varied from 99% at 0.5 m to only 94% at 3 m, with the best precision, recall, and mean average precision (mAP) achieved at 0.5 m. Despite having the lowest accuracy, the ceiling-height camera provides a broader field of view, offering a more comprehensive view of the farmhouse compared to the ground camera. Li et al. [26] mentioned similar influences as their robotic-arm-mounted camera struggled to capture the complete body of larger broilers when operating at a low height. They recommended elevating the camera in future studies. Overall, these studies suggest that integrating optimal ground and ceiling heights for a comprehensive view can enhance the detection accuracy and removal success rate of deceased animals in high-density farms.

2.1.3. Mobile Imaging Devices

Constructing a high-quality, large-scale image dataset is a laborious and repetitive process in animal farming. To address this challenge and further improve efficiency, self-developed mobile devices, such as rail-mounted systems, vehicles, and drones, are increasingly employed to autonomously navigate and capture images within the animal houses. These devices typically consist of imaging sensors, mobile platforms, and advanced navigation systems, allowing them to execute data collection tasks with minimal human intervention [3].
As most death detection models for poultry farms are ultimately intended to be deployed on inspection robots for real-time monitoring, most related works chose to collect images with a robot-like vehicle [9,10,11,12,27,28,29,30,31]. Training such models with images captured from a similar viewpoint enhances their performance when later integrated into robotic systems. Apart from matching the view of the eventual working scenario, this kind of device also offers an edge in the farming environment. In poultry houses, birds are typically housed in cages arranged in neat, parallel rows, creating simple walking paths for mobile devices to navigate. However, a notable challenge for mobile systems operating in flat-raised poultry houses or cage-free animal farms is the need for slow movement and a highly sensitive obstacle avoidance system to minimize interference with the animals for welfare reasons.
In the context of death detection for aquaculture, most researchers desire a top-view angle for comprehensive observation. As a result, UAVs and rail-mounted cameras are preferred [32,33,34]. Both Tian et al. [34] and Zhang et al. [33] highlighted the importance of capturing images from various heights and angles to increase dataset diversity, thereby enhancing the adaptability and robustness of the detection models. Zhou et al. [32] utilized a computer vision system attached to an elliptical rail to capture underwater videos in a recirculating aquaculture system. The camera, mounted on the end of a pole attached to the rail, was placed at 60–100 cm below the water surface to minimize interference from water surface fluctuations, bubbles, and foam.
From both research and animal welfare perspectives, minimizing human presence in animal facilities is crucial. Human entry may introduce infectious diseases to animal farms, which can have severe consequences for flock health. Using a mobile device, researchers can minimize their time in the poultry house and improve work efficiency without compromising the quantity or quality of the images collected.

2.2. Imaging Sensors

Due to their different imaging principles, different types of sensors capture distinct information. In the context of death detection, the choice of sensors is highly dependent on the type of data researchers use to differentiate deceased individuals from living ones, or to indicate death. For instance, fish often present an abnormal swimming posture, turning over vertically after death [22]. Taking this phenomenon as an indicator of death, researchers may prefer a high-resolution RGB camera to closely observe the detailed state of each fish. On the other hand, thermal infrared (TIR) sensors enable researchers to obtain physiological information such as body temperature, making them an ideal choice for those taking the post-mortem temperature drop as the indicator [27]. Additionally, with advancements in image registration and fusion technologies, an increasing number of researchers are integrating multiple types of image data, leveraging fused images to improve the accuracy and robustness of death detection.
In this section (Section 2.2), we review the various sensor types used in death detection. By focusing on their advantages, limitations, and how the information they provide indicates death, we aim to give a comprehensive understanding of why and how these sensors were used.

2.2.1. RGB Camera

The imaging sensors of RGB cameras, typically based on CMOS or CCD, consist of numerous photodiodes arranged under a Bayer filter mosaic (commonly in an RGGB pattern) to selectively capture red, green, and blue bands. These photodiodes capture light signals coming from different areas of the scene, which are then converted into electrical signals. The signals are processed and digitized by an analog-to-digital converter (ADC), enabling the sensor to record color information within the frame. By combining the red, green, and blue intensities, the digital signals are used to reconstruct a full-color image, or an RGB image. RGB cameras are widely used in everyday applications and have undergone thorough development. As a result, they are highly optimized, cost-effective, and capable of producing high-resolution images [12]. Due to these advantages, RGB cameras have been widely adopted by researchers in the field of animal death detection.
When building datasets based on RGB cameras, some researchers used the collected images directly without extensive pre-processing [2,11,12,14,20,30,32,33,34,35,36,37]. By eliminating the pre-processing stage, models can achieve higher detection speed and better real-time response. However, false detections may occur when relying solely on unprocessed RGB images. For instance, Liu et al. [2] pointed out that the morphological similarity between healthy poultry sitting or lying down and dead poultry is a main cause of detection errors. Facing the disturbance of irrelevant yet visually similar information within the scene, Zheng et al. [36] annotated prior information, such as branches and water ripples, which share a similar appearance with dead fish, to reduce false detections.
To further improve detection accuracy and model robustness, some researchers chose to pre-process the images and establish datasets of specialized RGB-based images. Early works in this area were conducted by a research group at Jiangsu University. Lu [38] converted chicken RGB images into the Lab color space and selected the a* component as a key feature to separate the red chicken comb from the red-brown feathers and the surroundings. She then determined whether a dead chicken was in the cage by judging the presence or absence of a stationary red comb. Using similar methods to segment the red comb, Peng [39] extracted morphological features, namely perimeter, area, eccentricity, complexity, and roundness, as input variables for an SVM classifier to detect dead chickens with machine learning techniques. Using the color intensity information provided by RGB images, Zhu et al. [23] first transformed the color image into a grayscale image and then into a binary image, and used a morphological filter to reduce noise. Once the chicken contours were successfully extracted, they abstracted the contour features with a center radiating vector for subsequent death determination.
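As a minimal sketch of this color-space idea (not the authors' implementation), the snippet below converts a frame to the Lab color space with OpenCV and thresholds the a* channel so that the red comb stands out; the input file name, the threshold value of 140, and the kernel size are illustrative assumptions that would need tuning per farm and camera.

import cv2

img_bgr = cv2.imread("caged_chicken.jpg")            # hypothetical input frame
img_lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)   # 8-bit Lab; the a* channel is centered at 128
a_channel = img_lab[:, :, 1]                          # larger a* values correspond to redder pixels

# Keep only strongly red regions (candidate comb pixels); 140 is an assumed threshold.
_, comb_mask = cv2.threshold(a_channel, 140, 255, cv2.THRESH_BINARY)

# Morphological opening removes small speckles so that only comb-sized blobs remain.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
comb_mask = cv2.morphologyEx(comb_mask, cv2.MORPH_OPEN, kernel)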
Although most CMOS sensors generate color images in the RGB color space, which mimics the imaging principle of human eyes, other color spaces may present color information more effectively. In addition to converting RGB images into the Lab color space as mentioned above [38], the HSV color space is another common alternative. While both HSV and RGB define the color channels of an image, HSV is designed to align more closely with human perception of color. In the context of dead body detection, Li et al. [26] preferred HSV over RGB to detect the shanks of dead birds for removal, because the saturation channel in HSV effectively preserves key features, such as the chicken shank, while filtering out irrelevant elements like litter. Similarly, when aiming to improve dead duck recognition by removing cage structures from the view, Bai et al. [40] also converted RGB images into the HSV color space to enhance accuracy.
During the pre-processing of RGB images, Otsu thresholding and morphological operations such as erosion and dilation were commonly used in earlier years to assist the detection process [23,38,39]. In recent years, researchers have tended to apply more advanced image processing algorithms to increase dataset quality. For instance, the Canny algorithm and the Hough transform [26] were used for high-accuracy edge detection, while noise-generating algorithms that add Gaussian or salt-and-pepper noise [9] can simulate real-world imaging conditions, improving the accuracy and robustness of the detection model.
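The following minimal sketch, not tied to any single cited study, illustrates the classical steps named above: Otsu thresholding, erosion and dilation for clean-up, and synthetic Gaussian and salt-and-pepper noise injected to augment training images; file names and noise levels are assumptions.

import cv2
import numpy as np

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical grayscale frame

# Otsu thresholding chooses the binarization threshold automatically from the histogram.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Erosion removes thin noise structures; dilation restores the object size afterwards.
kernel = np.ones((3, 3), np.uint8)
binary = cv2.dilate(cv2.erode(binary, kernel, iterations=1), kernel, iterations=1)

# Noise augmentation to mimic sensor noise and dust in real houses.
gaussian_noisy = np.clip(gray + np.random.normal(0, 10, gray.shape), 0, 255).astype(np.uint8)
salt_pepper = gray.copy()
mask = np.random.rand(*gray.shape)
salt_pepper[mask < 0.01] = 0      # pepper pixels
salt_pepper[mask > 0.99] = 255    # salt pixels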
Despite these advantages, RGB imaging faces several limitations in death detection tasks. One major issue is its reliance on lighting conditions. While most detection models perform better under high light intensities [25,26,32], farms generally maintain rather low lighting levels (usually below 20 lux) [10], presenting a challenging environment. Ma et al. [12] used fill lights to provide supplementary lighting, but doing so could potentially cause animal stress. Another limitation lies in the sensitivity of RGB sensors to background similarity. When the color and morphology of a dead animal closely match those of the surrounding environment and live animals, it becomes difficult to distinguish the deceased from the background, leading to detection failures [14]. Additionally, occlusion poses another challenge in high-density environments, where objects or other animals may obstruct the line of sight to the dead individual, further complicating detection.

2.2.2. Thermal Infrared Camera

The theoretical basis of Infrared Radiation Thermography (IRT) is that any object with a temperature above absolute zero emits infrared electromagnetic radiation. This radiation can be described by fundamental physical laws, including Planck’s radiation law, Wien’s displacement law, and the Stefan–Boltzmann law. IRT sensors operate in the infrared region of the electromagnetic spectrum to determine the temperature distribution of an object’s surface. As a non-contact, real-time measurement technique, IRT has been widely applied for detecting surface temperature variations, which are then converted into a grayscale image or a false-color image [41]. The greater the infrared radiation intensity of the measured object, the higher the corresponding gray level in the thermal image [42]. Due to the significant drop in body temperature post-mortem, dead animals with notably lower temperatures are easily distinguishable in thermal images. Thus, IRT images are widely used for detecting mortality events in high-density farms.
When using IRT images to determine body temperature, a common challenge lies in converting the radiation distribution into surface temperature and selecting appropriate data values to represent the overall physiological status of the animal. Since multiple parts and regions of the animal’s body are captured at once, it is essential to determine an appropriate region of interest and then extract specific values, such as the maximum, minimum, or average temperature of the selected region, as the final temperature data [43]. Several studies were conducted by a research group at Hebei Agricultural University. Jiang et al. [27] took the maximum temperature of each chicken’s head as the value representing its physiological status and compared it with a set threshold; if the value fell below the threshold, the chicken was identified as dead. Their overall accuracy was 80%, with 83.0% for the upper coop and 77.0% for the lower. A possible reason for this difference is the relatively small temperature difference between the deceased chicken and the environment, since the lower coop is usually cooler. Similarly, Jia et al. [13] also noticed the distinct features of the chicken head in infrared thermography. While both dead and living chickens show notably high temperatures in the head region, their morphological features are quite different. Therefore, after extracting the head contours with a set threshold, morphological features (roundness, compactness, eccentricity, long axis, and short axis) were used as input for classification.
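A minimal sketch of this threshold-based rule is given below: the maximum temperature within a head region of interest is compared against a preset value, and the bird is flagged as dead when it falls below that value. The ROI coordinates and the 30 °C threshold are illustrative assumptions, not values reported in the cited studies.

import numpy as np

def is_dead(temperature_map: np.ndarray, head_roi: tuple, threshold_c: float = 30.0) -> bool:
    """temperature_map holds per-pixel surface temperatures (°C) decoded from a TIR frame."""
    r0, r1, c0, c1 = head_roi                        # row/column bounds of the head region
    head_max = temperature_map[r0:r1, c0:c1].max()   # hottest pixel within the head region
    return head_max < threshold_c                    # cooler than the threshold -> likely dead

# Example with a synthetic 240 x 320 thermal frame containing a warm (living) head region.
frame = np.full((240, 320), 24.0)
frame[50:80, 100:140] = 39.5                         # living chicken head
print(is_dead(frame, (50, 80, 100, 140)))            # False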
Apart from setting thresholds and binarizing the images before establishing datasets, some researchers built thermal infrared image datasets directly without prior image processing [9,19]. However, given the inherently low resolution of TIR images, simple pre-processing operations, such as image enhancement, noise filtering, and contrast adjustment, are commonly used to improve dataset quality.

2.2.3. Multi-Sensor Image Fusion

As previously discussed, RGB images and TIR images each have their own advantages and limitations due to differences in their hardware designs and imaging principles. These limitations can lead to issues such as false negatives, false positives, duplicate detections, and inaccurate localization. However, the two types of images offer complementary information: RGB images provide rich color and morphological details, while TIR images capture temperature-related features [44]. Therefore, by constructing multi-source image datasets, researchers can significantly improve detection performance through information fusion.
Multi-source images generally refer to images obtained by fusing data from multiple sensors or cameras [10]. Image registration is usually the first step in this process, which aligns multiple images captured from different cameras, viewpoints, or resolutions [44]. Over the past two decades, research on image fusion methods has made great progress in theory and has been widely applied in various fields of precision agriculture, such as crop detection and recognition, identifying diseases and pests, as well as monitoring animal health.
In the specific context of death detection, the most widely used multi-source image is the fusion of RGB and TIR images [16,17,18,31,45]. For instance, Muvva et al. [16] fused a grayscale visual (RGB) image with a thermal image. After segmentation, live birds appeared as white blobs in the thermal images, while all birds (both live and dead) appeared in the RGB images. By subtracting the segmented thermal image from the segmented RGB image, the fused image directly reveals the contours of dead birds. To further enhance the visibility of dead chickens in images, Zhao et al. [19] fused TIR and RGB images using a combination of the SURF and RANSAC algorithms. When tested on their improved YOLOv5s-SE model, the model trained on the fused dataset achieved a precision of 97.7%, compared to 79.1% for the TIR image dataset and a mere 39.5% for the RGB image dataset. Similarly, Heng et al. [17] performed comparable experiments between models trained on single-source and multi-source datasets for detecting dead piglets. Their results showed that precision reached 93.8% with fused images, compared with only 77.1% on TIR images and 26.9% on RGB images. These findings demonstrate that, whether applying traditional image processing methods or deep learning-based algorithms, the use of fused images can significantly improve the quality of the dataset, which in turn enhances detection accuracy.
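The subtraction-style fusion attributed to Muvva et al. [16] can be sketched as follows; both frames are assumed to be pre-registered, the thresholds are chosen automatically with Otsu's method rather than the values used in the original study, and the minimum contour area is an assumption.

import cv2

rgb_gray = cv2.imread("rgb_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical registered visual frame
tir_gray = cv2.imread("tir_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical registered thermal frame

# All birds (live and dead) are brighter than the litter in the visual image.
_, all_birds = cv2.threshold(rgb_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Only warm, living birds appear as bright blobs in the thermal image.
_, live_birds = cv2.threshold(tir_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Bird pixels that are not warm are candidate dead-bird regions.
dead_candidates = cv2.subtract(all_birds, live_birds)
contours, _ = cv2.findContours(dead_candidates, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
dead_boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]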
However, while multi-source images offer a wealth of information and can indeed improve dataset quality, fusing more image types does not necessarily yield better performance. Luo et al. [10] introduced a pixel-level registration method to align near-infrared (NIR), thermal infrared (TIR), and depth images. They trained and tested their death detection models using single-source, dual-source, and multi-source image datasets. The results revealed that, compared to the models trained on single-source images, the AP of all models trained on dual-source images showed a notable improvement. Specifically, models using NIR-depth and TIR-NIR combinations, particularly at higher IoU thresholds, not only outperformed all single-source images for dead chicken detection but also showed little difference from the TIR-NIR-depth combination. For example, the NIR-depth combination achieved an AP of 91.1% (IoU = 0.75), compared with 91.0% for the TIR-NIR-depth combination. This suggests that an additional third image source may not always yield substantial gains in overall detection performance.
Despite its advantages, the initial step of creating a multi-source image dataset, the registration process, can be extremely time-consuming, especially when researchers aim for high-quality alignment. However, the future of multi-sensor image fusion looks promising, as image processing algorithms can be integrated into intelligent systems, such as deep learning models, to enhance automation and reduce manual intervention. This integration is expected to streamline the process and significantly improve the efficiency of multi-source image fusion.

2.3. Dataset Establishment

Closely examining the datasets used for death detection tasks in high-density animal farms, Table 1 presents a comprehensive summary of data collection methods, data types, data quality, dataset sizes, annotated object classes, and annotation tools. As can be seen in the table, this information varies among studies. Data collection methods are chosen to suit the breeding environment. For example, in floor-raised farms, fixed imaging sensors are preferred, as mobile devices might harm the animals in a collision. Cage-raised scenarios, whether rabbit or poultry farms, favor mobile platforms, inspection vehicles in particular, as the cages are arranged in strict order, making autonomous inspection tasks less complicated. UAVs are applied in aquaculture scenarios, where there are no rooftop limits and the coverage of farms (ponds) can be quite large. The type of imaging sensor chosen mainly depends on how the researchers intend to identify the dead.
There are no universally defined standards for the quality and quantity of datasets in the context of death detection. Researchers usually make decisions based on the experimental environment, the specific application scenario, pre-existing research in their labs, and available funding. Most researchers apply a data augmentation process before finalizing their dataset. This process involves applying various image transformations to the original images to generate new training samples, which has been proven to enhance model performance in terms of both accuracy and robustness. These transformations can be categorized as follows: (1) geometric transforms (e.g., flip, crop, translation, noise); (2) color-space transforms (e.g., contrast, brightness, sharpening); and (3) image-mixing transforms (e.g., mix-up, cut-out, mosaic) [8]. If the dataset is established for deep learning-based object detection models, the final step is annotation. This step is crucial as it provides the class and location labels needed for training, validation, and testing.
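A minimal sketch of the three augmentation categories, written with plain OpenCV and NumPy rather than any specific library used in the cited studies, is shown below; the file names and transform parameters are illustrative assumptions.

import cv2
import numpy as np

img = cv2.imread("dead_chicken.jpg")                   # hypothetical original sample
h, w = img.shape[:2]

# (1) Geometric transforms: horizontal flip and a random crop resized back to the original size.
flipped = cv2.flip(img, 1)
y0, x0 = np.random.randint(1, h // 10), np.random.randint(1, w // 10)
cropped = cv2.resize(img[y0:h - h // 10, x0:w - w // 10], (w, h))

# (2) Color-space transforms: brightness/contrast jitter and mild sharpening.
bright = cv2.convertScaleAbs(img, alpha=1.2, beta=15)
sharpen_kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
sharpened = cv2.filter2D(img, -1, sharpen_kernel)

# (3) Image-mixing transform: a simple mix-up of two samples with a random ratio
#     (for detection tasks the corresponding labels would be merged as well).
other = cv2.resize(cv2.imread("live_chicken.jpg"), (w, h))  # hypothetical second sample
lam = float(np.random.beta(0.4, 0.4))
mixed = cv2.addWeighted(img, lam, other, 1 - lam, 0)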

2.4. Data Processing Methods

Apart from the quality of the dataset, another crucial factor affecting death detection performance is the method used for detection. The chosen method greatly affects the model’s ability to extract core features from the data, as well as how effectively these extracted features are used to identify dead animals. In this section (Section 2.4), we provide a comprehensive review of the progression of death detection methods, from traditional image processing techniques to more advanced deep learning-based algorithms. The advantages, limitations, and novelty of each method are systematically examined.

2.4.1. Traditional Death Detecting Methods

As mentioned at the beginning of Section 2, death detection has predominantly been a vision-based task. Therefore, initial advancements in data processing for death detection have been closely tied to the development of computer vision and image processing techniques.
Among the earliest death detection methods, image binarization was widely employed due to its simplicity and computational efficiency [16,27,38,39]. It involves converting a color image into a grayscale image and setting thresholds for further segmentation. For instance, Lu [38] captured two consecutive images from the same angle, applying a vibration to the cage before the second capture. Both images were binarized to extract the red combs, and a logical “AND” operation was then applied. Overlapping regions of sufficient size indicated stationary combs within the cage, suggesting a dead chicken, with an identification accuracy exceeding 85%. Similarly, by setting appropriate thresholds, Muvva et al. [16] first extracted the contours of all birds (both live and dead) from RGB images, then segmented the contours of live birds from the TIR images. Dead birds were identified by subtracting the segmented TIR images from the segmented RGB images. Their accuracy was above 81% for broilers aged five weeks or younger, yet decreased for older broilers due to smaller bird-background temperature gradients and more interactions among broilers, which complicate image processing. Focusing on the temperature drop rather than the morphological features of dead chickens, Jiang et al. [27] first used Otsu’s algorithm to segment chickens from the background, then applied an opening operation to remove small noise areas. The maximum temperatures of the remaining regions were extracted after the binarization stage, and dead-or-alive results were obtained directly by comparing the temperature values with a predefined threshold. Their algorithm was simple yet reached an overall accuracy of 80.0%.
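A minimal sketch of the two-frame rule attributed to Lu [38] is shown below: the comb masks binarized from the images taken before and after the cage vibration are combined with a logical AND, and a sufficiently large overlap is treated as a stationary comb, i.e., a suspected dead chicken; the minimum-area value is an illustrative assumption.

import cv2
import numpy as np

def stationary_comb(mask_before: np.ndarray, mask_after: np.ndarray, min_area: int = 300) -> bool:
    """Both inputs are binary (0/255) comb masks captured from the same camera pose."""
    overlap = cv2.bitwise_and(mask_before, mask_after)   # pixels classified as comb in both frames
    return int(np.count_nonzero(overlap)) >= min_area    # large overlap -> the comb did not move

# A live chicken shifts its comb after the vibration, shrinking the overlap below min_area,
# whereas a dead chicken leaves the two masks nearly identical.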
To further improve detection accuracy, many researchers adopted traditional machine learning techniques after the image binarization stage. Among these, the Support Vector Machine (SVM) is a classic pattern recognition technique based on statistical learning theory [52]. Multiple researchers explored the application of SVM to dead body detection in the early years [23,38,39]. Presenting one of the earliest death detection models, Zhu et al. [23] introduced an automated method for detecting dead birds based on SVM. The method involved capturing two sequential images of the same cage, processing the images to extract the chicken contours, and employing a center radiating vector representation to abstract the contour features. The differences between the corresponding vectors of the two images were taken as SVM input features. Their experimental results reached a correct identification rate of over 95%. Peng [39] defined morphological parameters, such as area, eccentricity, complexity, and roundness of the extracted cockscomb, as feature vectors. He selected a Least Squares Support Vector Machine (LS-SVM) classifier and used a grid search method to optimize its kernel width and penalty factor. Although his model achieved a detection accuracy of only 92%, it offered a more timely and efficient alternative. Taking the morphological and color features of chicken feet as input, Qu [28] performed dead chicken detection with a LibSVM classifier. This implementation of SVM is particularly advantageous for classification tasks with small samples; indeed, her accuracy exceeded 90% with a dataset of only 170 images. Jia et al. [13] performed a comparative analysis of the decision tree algorithm, the BP neural network, and SVM. Their accuracy based on the decision tree algorithm was 87.5%, while the accuracy of both the BP neural network and the SVM reached 91.7%. These findings suggest that traditional machine learning methods generally improve detection accuracy by approximately 10%, from around 80% to above 90%, compared with basic image processing alone. However, further improvements require revolutionary changes in the machine learning algorithms themselves.
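The classification stage of these SVM-based pipelines can be sketched as follows: morphological descriptors computed from the segmented regions are fed to an SVM whose hyperparameters are tuned by grid search, echoing the procedure reported for the LS-SVM in [39]. The feature values, labels, and parameter grid below are placeholders for illustration only.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Each row: [area, perimeter, eccentricity, complexity, roundness] of one segmented region.
X = np.array([[820, 130, 0.55, 1.9, 0.61],   # placeholder "alive" samples
              [790, 128, 0.58, 2.0, 0.59],
              [310,  95, 0.91, 3.4, 0.22],   # placeholder "dead" samples
              [295,  92, 0.93, 3.6, 0.20]])
y = np.array([0, 0, 1, 1])                   # 0 = alive, 1 = dead

# Grid search over the RBF kernel width (gamma) and the penalty factor (C).
param_grid = {"C": [1, 10, 100], "gamma": [0.001, 0.01, 0.1]}
clf = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=2).fit(X, y)
print(clf.predict([[300, 94, 0.92, 3.5, 0.21]]))  # -> [1], i.e., classified as dead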
Beyond the morphological differences exploited by image processing solutions, the primary difference between the two life statuses is whether an animal stays motionless. Therefore, researchers have employed optical flow techniques to perform the task. This technique visualizes the distribution of apparent motion velocities across an image sequence. Using this method, Duan et al. [24] proposed a dead rabbit recognition model based on a modified Mask R-CNN and LiteFlowNet. Optical flow analysis and instance segmentation networks processed key frames to obtain rabbit mask flows and center point coordinates. Live (active) rabbits were filtered out by optical flow thresholds, and the density distribution of the remaining points was calculated via kernel density estimation to identify dead rabbits. Their overall accuracy reached 90%, which could be further improved by prolonging the observation period.
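A minimal sketch of such an optical-flow activity filter is given below, using OpenCV's Farneback dense flow as a simple stand-in for LiteFlowNet: animal regions whose mean flow magnitude stays below a threshold across key frames become dead-animal candidates. The frame names and the motion threshold are assumptions.

import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical key frame at time t
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical key frame at time t+1

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
magnitude = np.linalg.norm(flow, axis=2)                  # per-pixel apparent motion magnitude

def region_is_static(instance_mask: np.ndarray, motion_threshold: float = 0.5) -> bool:
    """instance_mask: boolean mask of one animal produced by the segmentation network."""
    return float(magnitude[instance_mask].mean()) < motion_threshold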

2.4.2. Novel Death Detecting Methods

In recent years, deep learning has become the dominant computational paradigm within the machine learning community, achieving remarkable results in various complex tasks. Particularly in the domain of computer vision, object detection based on deep learning algorithms has gained popularity due to its powerful feature extraction and localization abilities. These object detection algorithms are generally categorized into one-stage and two-stage models. The evolution of two-stage object detection algorithms, from Convolutional Neural Networks (CNNs) to Region-based CNNs (R-CNN) and Faster R-CNN, has led to significantly enhanced detection accuracy and efficiency [53,54,55]. Initially, CNNs laid the foundation for efficient feature extraction. The framework was then refined by R-CNN through the introduction of region proposal mechanisms, significantly improving object localization. Subsequently, Faster R-CNN advanced the framework further by integrating a region proposal network, which not only optimized detection speed but also enhanced overall accuracy by enabling end-to-end training. Aside from the R-CNN series, the You Only Look Once (YOLO) algorithm is a notable example of a one-stage object detection algorithm [56]. YOLO demonstrates excellent feature extraction abilities that can be further integrated with other state-of-the-art algorithms for specific applications. From YOLOv1 to YOLOv10, this CNN-based algorithm has undergone several significant upgrades, with the latest versions offering an optimal balance of speed and accuracy. As a result, the YOLO series has also gained widespread popularity in various industrial and agricultural domains.
Within the agricultural sector, these deep learning algorithms have been successfully applied to tasks such as plant disease identification, crop growth monitoring, and animal behavior analysis, with significant practical implications for precision farming. For instance, Soeb et al. [57] applied YOLOv7, one of the fastest single-stage object detection frameworks, to detect tea leaf diseases. Similarly, Rai et al. [58] introduced a YOLO-based agricultural weed identification method, contributing to the advancement of precision agriculture. Regarding behavior monitoring for farm animals, Wang et al. [59] used YOLOv3 for target localization and behavior recognition of egg breeders in self-breeding cages. Nasiri et al. [60] proposed a Convolutional Neural Network (CNN)-based algorithm to estimate broiler feeding time and assist the monitoring of feeding behavior, thereby supporting better farm management through data-driven insights.
Death detection tasks in high-density farms followed this trend by replacing traditional image processing models with advanced convolutional networks. For instance, Hao et al. [11] developed a dead broiler detection model with an improved YOLOv3 network, reaching a precision of 95% and a processing speed of 11 fps. Zhao et al. [22] proposed a YOLOv4-based network to detect dead fish and achieved a detection accuracy of 95.47%. In contrast to the dataset establishment stage, where researchers typically start from scratch, most novel death detection models rely on pre-existing open-source object detection algorithms as their fundamental framework. While much deep learning research focuses on the algorithms themselves, in this review we emphasize the modifications made to baseline deep learning networks to enhance their performance. In particular, the choice of deep learning models, the changes tailored to the specific needs of death detection scenarios, and the resulting performance are reviewed in Table 2. It is worth noting that most researchers performed comparative analyses between several deep learning models, or compared the original versions with their improved versions. Therefore, we present only the best-performing model of each work in Table 2, including its network details and overall performance. However, if the authors did not show a clear preference among the tested models, all relevant models are presented for completeness.
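For readers unfamiliar with these frameworks, the sketch below shows how a YOLO-style detector is typically fine-tuned and run on farm imagery, assuming the off-the-shelf Ultralytics API and a hypothetical dataset configuration file dead_chicken.yaml; the cited studies modified earlier YOLO versions (v3–v5) rather than using this exact interface.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")                          # small pretrained model as a starting point
model.train(data="dead_chicken.yaml",               # classes such as "dead" / "alive"
            epochs=100, imgsz=640, batch=16)
metrics = model.val()                               # reports precision, recall, and mAP@0.5(:0.95)
results = model("inspection_frame.jpg", conf=0.5)   # per-image inference on inspection imagery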
Several observations can be made based on the information summarized in Table 2. First, the choice of a deep learning network is largely influenced by the timing of the research. Since new releases within a series can bring significant improvements to the network, most researchers tend to choose the latest versions of the open-source models as their framework. However, hardware compatibility may also influence the choice of version, as some newly released versions may not be immediately compatible with the robots’ working environments [26].
As summarized in Table 2, most recent model improvements focus on enhancing detection accuracy and inference efficiency through lightweight network structures, attention mechanisms, optimized loss functions, and model compression techniques. Among them, attention modules such as CBAM and SE effectively address visual challenges commonly observed in high-density farms. CBAM strengthens spatial and channel attention, improving feature extraction under occlusion or overlapping conditions within multi-tier cages. SE modules enhance the representation of subtle texture and color cues in low-light or high-humidity environments, particularly for TIR and NIR images where dead bodies show weak contrast with the background. Meanwhile, the CIoU loss function improves localization accuracy for irregular postures or partially visible carcasses. These optimizations are not merely technical refinements but targeted responses to practical difficulties in real-world farming environments, enhancing the robustness and adaptability of detection models under complex conditions.
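As an illustration of these attention modules, the following PyTorch sketch implements the standard Squeeze-and-Excitation (SE) block, which re-weights feature channels so that weak thermal or texture cues are not drowned out; it is the generic formulation rather than any specific cited implementation.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global average per channel
        self.fc = nn.Sequential(                          # excitation: bottleneck MLP with sigmoid gate
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                                # channel-wise re-weighting

# Such a block is typically inserted after a backbone convolution stage of the detector.
feat = torch.randn(2, 64, 40, 40)
print(SEBlock(64)(feat).shape)                            # torch.Size([2, 64, 40, 40])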
In addition, several studies have applied visualization techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM) and class activation heatmaps to better understand the decision-making process of neural networks. For instance, Zhao et al. [51] presented feature map visualizations demonstrating the model’s enhanced feature extraction ability; Hao et al. [45] generated heatmaps showing that the regions with high activation responses closely matched the contours of dead chickens; Wu et al. [29] compared heatmaps between the proposed neck structure and the baseline, revealing clearer and more focused attention on relevant anatomical areas. Similarly, Yang et al. [30] utilized class activation maps to visualize the regions of interest (ROI) corresponding to network predictions, while Ma et al. [12] used Grad-CAM not only to verify successful detections but also to analyze failure cases by identifying areas where the model’s focus deviated from biologically meaningful regions. These interpretability tools provide valuable insights into how models respond to different visual cues, offering both quantitative and qualitative validation of their reliability.
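A generic Grad-CAM computation can be sketched with forward and backward hooks on a chosen convolutional layer, as below; the backbone (a torchvision ResNet-18), the layer choice, and the random input are placeholders, since the cited works applied the technique to their own detectors.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()                  # placeholder network for illustration
target_layer = model.layer4[-1]                        # last convolutional block as the CAM source

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(feat=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(grad=go[0]))

x = torch.randn(1, 3, 224, 224)                        # placeholder input image tensor
score = model(x)[0].max()                              # score of the top predicted class
score.backward()

weights = gradients["grad"].mean(dim=(2, 3), keepdim=True)               # channel importance
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))   # weighted activation map
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)                 # heatmap normalized to [0, 1]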

2.4.3. Summary of the Death Detecting Methods

The development of death detection methods has evolved from simple image processing and traditional machine learning algorithms to advanced deep learning-based frameworks. Early methods, such as image binarization, morphological feature extraction, and Support Vector Machines (SVM), demonstrated good interpretability and low computational cost but were highly sensitive to environmental variations such as illumination, occlusion, and animal postures. While these conventional methods achieved accuracies around 80–90%, their scalability and robustness under real-world, large-scale farming conditions remained limited.
Recent deep-learning-based approaches, particularly those built upon convolutional neural networks (CNNs) and the YOLO series, have significantly improved detection accuracy and efficiency by enabling end-to-end learning and more robust feature extraction. These models are capable of handling complex visual environments and have been successfully applied across multiple species. However, challenges still exist regarding model generalization under varying environmental conditions, high computational demands for edge deployment, and the lack of multi-modal data utilization (e.g., combining thermal, depth, and RGB information). Moreover, most studies still focus on algorithmic optimization rather than the integration of these models into real-time robotic systems for practical on-site deployment.
Several studies have specifically evaluated model robustness under visual disturbances such as occlusion, illumination variation, and cluttered backgrounds, which are common in high-density animal farming environments. These works demonstrate that robustness testing is crucial for ensuring reliable detection performance in real-world applications. Li et al. [26] tested model performance under eight different light intensities (10, 20, 30, 40, 50, 60, 70, and 1000 lux). Results indicate that overall precision rose sharply from 74.5% under 10 lux to 98.8% under 40 lux; the increase then slowed, reaching 99.3% under 1000 lux. Overall, the model showed robust performance across diverse light intensities. Bist et al. [25] performed model evaluation with datasets under different settings, including different levels of litter and feather coverage (0%, indicating no litter or feather coverage; 50%, where half of the hen’s body is covered; and 80%, where 80% of the body is covered). Taking feather coverage as an example, the results show that while precision remained high (99.4% at 0% coverage and 98% at 80%), the mAP@0.5:0.95 score decreased significantly with increased feathering, from 65.5% at 0% coverage to 41.6% at 80% coverage. This indicates that proper cleaning and maintenance of floor-raised poultry houses is essential for such autonomous vision-based detection, and calls for future studies to address this issue. Overall, incorporating robustness assessment into model development therefore represents an essential step toward safe and trustworthy deployment in farm conditions.
Future research should focus on enhancing the adaptability and generalization ability of deep learning models through multi-source data fusion and cross-domain learning. Lightweight and energy-efficient model architectures are needed for on-board inference in mobile inspection or robotic platforms. Additionally, constructing large, species-diverse datasets under real-world farm conditions will be essential for improving robustness and reliability. The integration of death detection algorithms with autonomous inspection, removal, and environmental control systems will represent a key step toward achieving fully intelligent and automated mortality management in high-density animal farming.

3. Inspection System

In recent years, vision-based systems in agriculture have experienced rapid growth, significantly improving the efficiency and accuracy of various tasks across all sectors of precision agriculture. Among these advancements, agricultural inspection systems have shown great potential in ensuring the operational integrity of farming environments and monitoring animal health and welfare. These systems offer a more efficient, reliable, and autonomous alternative to traditional manual inspections. In the specific context of death detection for high-density animal farming, inspection systems play a crucial role by capturing vision-based information and enabling real-time mortality detection through advanced image processing techniques. While numerous vision-based detection models have been proposed for death detection tasks, such as YOLO and Faster R-CNN, their practical implementation in real-world scenarios relies heavily on integration with robust and adaptable hardware systems. Typically, the hardware design of an agricultural inspection system includes one or more imaging sensors, which may be either fixed on stationary structures or mounted on mobile platforms. In this section (Section 3), we present existing designs of inspection systems, focusing on their advantages and limitations for death detection tasks. Typical designs of these systems are presented in Figure 3.

3.1. Fixed Inspection System

In high-density farming scenarios, fixed inspection systems, such as surveillance cameras mounted on ceilings or walls, are commonly used due to their ease of deployment and relatively low cost. However, these fixed systems suffer from several limitations. Firstly, static cameras have a limited field of view and fail to provide comprehensive visual coverage of all animals, especially in multi-tier cage farms. Secondly, occlusion issues occur frequently due to the presence of other animals or structural components, such as cage bars, feeding equipment, and water lines, resulting in incomplete or partial visibility of individuals. Finally, the significant distance between the camera and the target animals results in reduced image resolution, causing dead bodies to appear as small, indistinct targets in the captured images. This significantly increases the probability of false negatives or missed detections, thereby complicating the overall detection process.

3.2. Mobile Inspection System

In contrast to fixed inspection systems, mobile inspection devices, also called inspection robots, have emerged as a more effective solution for death detection in high-density farming environments. By autonomously navigating through the farm, these systems can acquire high-resolution images at close range, and even from multiple viewpoints, to reduce the impact of occlusion [30,62]. Common mobile inspection systems for death detection include ground vehicles, unmanned aerial vehicles (UAVs), and rail systems. These mobile systems typically incorporate advanced navigation technologies, such as LiDAR and Simultaneous Localization and Mapping (SLAM), to support autonomous movement, path planning, and obstacle avoidance. As these robots move through the farming environment, images are collected and transmitted to onboard or remote computing units. This real-time vision-based information is then processed by pre-trained deep learning models to identify deceased animals with minimal human intervention. For instance, Ma et al. [12] designed an autonomous inspection vehicle running their proposed detection network, operating at a speed of 9 m/min and capturing three images per second during automatic inspections. During in-house experiments, the inspection system achieved an overall detection precision of 90.61% with low false detection rates, demonstrating its reliability and robustness under continuous operation in poultry houses.
Beyond academic research, several companies have presented commercially designed solutions. In 2023, Winworld (Henan Winworld Livestock Machinery Co., Ltd., Zhumadian, China) released a commercial poultry farming inspection robot. The robot is equipped with four sets of RGB cameras and an embedded death detection model, and can perform death detection tasks in four-tier cage-raised poultry farms. As for rail-based designs, the ChickenBoy robot developed by Big Dutchman moves along fixed overhead rails and collects environmental data while detecting dead chickens using thermographic imaging. It is designed primarily for flat-floor broiler houses, as its performance is limited in stacked-cage systems due to visual obstructions and the complexity of multi-layer structures. In Japan, Sensyn Robotics (SENSYN ROBOTICS, Inc., Tokyo, Japan) cooperated with TAIHO Industrial Co., Ltd. (Kagawa, Japan) to develop a prototype of an autonomous inspection robot for cage-raised chicken houses. It is capable of autonomously localizing and navigating within the poultry house and identifying dead chickens through AI analysis, achieving high recognition accuracy.

3.3. Summary of Existing Inspection Systems

In summary, existing vision-based inspection systems for death detection in high-density animal farming can be categorized into fixed and mobile designs, each offering distinct advantages and facing unique challenges. Fixed systems are cost-effective and easy to deploy, yet they often suffer from their lack of flexibility, resulting in limited coverage and severe occlusion. In contrast, mobile inspection systems, such as ground vehicles, UAVs, and rail-based devices, enable close-range, multi-view image acquisition and improved detection performance through autonomous navigation and real-time data processing. However, these systems still face practical challenges, including mechanical complexity and high deployment costs. Overall, the evolution from static to mobile platforms marks a crucial transition toward intelligent and autonomous mortality monitoring. Continued integration of advanced sensing technologies, navigation systems, and deep learning models will be essential to achieving reliable and fully automated inspection systems for large-scale farming applications.

4. Automated Removal System

As discussed in Section 2.4, various methods have been proposed for detecting mortality events in high-density animal farms. Almost all of these detection methods, whether already implemented or still under development, aim to support the real-time detection and removal of deceased farm animals. In this section, we review existing automated removal systems that are designed in conjunction with the death detection methods outlined above. Details regarding some of the existing removal systems are presented in Table 3. Specifically, we focus on the various removal devices, their integration with detection methods, their limitations, and their potential for commercial application. Despite the common goal of dead body removal, designs differ significantly depending on the animal species and housing environment. Most removal devices for poultry farms, whether based on a robotic arm or a conveyor belt, are mounted on a mobile platform: the platform locates and navigates to the target, and the device removes the carcass once it appears in the center of the working region, as sketched below.
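The snippet below is an illustrative control skeleton of this locate-centre-remove cycle, not a specific published system; the SimPlatform class, the Detection structure, and the alignment tolerance are hypothetical stand-ins for the real chassis, detector, and removal mechanism.

```python
# Illustrative control skeleton (not a specific published system) for the
# detect-then-remove cycle described above: the mobile platform keeps
# adjusting its position until the detected carcass sits at the centre of
# the device's working region, and only then triggers the removal mechanism.
# Every class and threshold here is a hypothetical stand-in, not a real API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    cx: float  # normalized horizontal position of the carcass in the frame (0..1)

class SimPlatform:
    """Stand-in for the mobile chassis plus its onboard detector."""
    def __init__(self, carcass_cx: float = 0.62):
        self._cx = carcass_cx
    def detect_dead(self) -> Optional[Detection]:
        return Detection(self._cx)
    def move_lateral(self, offset: float) -> None:
        # Moving the platform shifts where the carcass appears in the frame.
        self._cx += offset

def removal_cycle(platform, trigger_removal, x_tol=0.05, max_steps=20) -> str:
    for _ in range(max_steps):
        det = platform.detect_dead()
        if det is None:
            return "no_target"
        if abs(det.cx - 0.5) <= x_tol:
            trigger_removal()            # sweep, grasp, or scrape, device-specific
            return "removed"
        platform.move_lateral(0.5 - det.cx)
    return "alignment_failed"

print(removal_cycle(SimPlatform(), lambda: print("removal mechanism triggered")))
```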

4.1. Removal Systems for Floor-Raised Poultry Houses

Most robotic removal systems reported in the literature target floor-raised poultry environments, where access to individual birds is relatively unobstructed. One of the earliest experimental attempts was conducted by Liu et al. [2]. They designed a vision-based dead broiler removal system equipped with robotic arms, a conveyor belt, and a storage cache. Upon detecting a dead broiler within approximately one meter, two high-torque servos rotate the arms in opposite directions to sweep the dead chicken onto the conveyor belt and further into the storage cache. To increase adaptability to different field conditions, the angle of the conveyor belt can be adjusted. The total operation time per bird is around one minute, accounting for the delay in signal transmission. Owing to the friction of the belt and the capacity of the storage cache, up to two chickens can be removed in a single operation. Although the system worked well in open areas, its single degree of freedom limits its flexibility to reach broilers located near corners or under feeding and drinking lines.
A more flexible system was introduced by Xin et al. [35], who mounted a grasping manipulator on a mobile chassis to remove dead birds in on-site experiments. Their device identified and grasped dead broilers with an average success rate of 81.3%, proving that integrating a robotic arm with an autonomous navigation platform is a feasible solution.

4.2. Removal Systems for Cage-Raised Poultry Houses

In cage-raised poultry farming environments, the physical constraints of cage structures and the limited working space pose significant challenges for automated dead bird removal systems. To address these issues, several solutions have been proposed and tested in recent years.
Wang [9] introduced an automated carcass removal system for caged broilers, which consists of a bar-shaped scraper and a hinged platform. Upon detection of a deceased bird, the scraper traverses the cage from left to right, slowly pushing the dead chicken onto the hinged platform on the right, and then runs backward to chase away the live birds. In this way, the dead bird lies on the platform while the live birds remain on the other side. By opening the platform, the deceased broiler is dropped onto the manure conveyor, and the system then resets for the next cycle. Experimental results showed that the system achieved a higher success rate with younger broilers: 88% for chickens aged 4 weeks and 78% for chickens aged 7 weeks. This is mainly because older broilers have thicker claws and larger skeletons and are therefore more likely to become stuck and struggle to fall onto the conveyor belt. Another cause of failure is that the scraper does not always succeed in chasing away the live chickens, so live birds can also fall onto the manure conveyor. The average removal time was around 85 s. Overall, Wang's design is simple and cost-effective compared with those involving a robotic arm, but its application in commercial poultry houses would require individual installation per cage, making an already dense environment even more complex and diminishing the cost advantage in large-scale operations.
Targeting a similar population, caged broilers aged 3–7 weeks, Hu [63] designed a three-joint, four-finger underactuated end-effector for automated carcass removal. The system achieved a higher success rate for chickens deceased for over 30 min (96.7%) compared with those deceased within 30 min (88.3%), as the broilers' bodies become stiffer post-mortem and better match the cylindrical shape assumed in the end-effector's design. Future research could refine the mathematical model of the carcass's geometric shape so that it better supports real-time removal. Moreover, the success rate was lower when the dead broiler lay with its back facing downward, since its abdomen is broader than its back, resulting in reduced force on the distal phalanx of the manipulator. The average grasping time was 32 s per chicken. Despite the delicate design, this research only provided an end-effector, lacking the components, such as a mobile platform and a robotic arm, needed to operate in a commercial poultry farm.
Further developing the robotic-arm approach, Li et al. [26] designed and constructed a robot using commercially available components. The robot consisted of a two-finger gripper, a robotic arm, a camera mounted on the arm, and a computer controller. After a dead broiler is detected, with the shank as the target anatomical part, the two-finger gripper positions and rotates itself to match the shank's angle (ideally, perpendicular to the shank), grasps the bird, and moves it to a designated storage area. The overall operation time ranged from 70.5 to 77.8 s per round. Zhao [19] conducted in-depth research on the kinematic characteristics of a five-degrees-of-freedom dead-broiler-picking robotic arm. His work focused on trajectory planning of the arm but remained at the theoretical level, with simulation analysis conducted in MATLAB 2020a.
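As a hedged illustration of the shank-alignment step (not the implementation in [26]), the snippet below computes a gripper yaw command perpendicular to a shank whose orientation is given by two detected image keypoints; the keypoint coordinates and the (-90°, 90°] normalization are assumptions made for this sketch.

```python
# Illustrative calculation (not the authors' implementation) of a gripper yaw
# command from a detected shank orientation: given two image keypoints along
# the shank, the gripper is rotated to lie perpendicular to it.
import math

def gripper_yaw_from_shank(p1, p2):
    """p1, p2: (x, y) image points along the shank; returns yaw in degrees."""
    shank_angle = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    yaw = shank_angle + 90.0                 # perpendicular to the shank
    # Normalize to (-90, 90] so the two-finger gripper never over-rotates.
    while yaw > 90.0:
        yaw -= 180.0
    while yaw <= -90.0:
        yaw += 180.0
    return yaw

# Example: a shank lying roughly horizontal in the image gives a near-vertical
# gripper axis (about -78.7 degrees, equivalent to ~101.3 degrees).
print(gripper_yaw_from_shank((120, 340), (180, 352)))
```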

4.3. Dead Fish Removal System in Aquaculture

Some progress has also been made in the aquaculture sector, where the smaller scale and more predictable environmental dynamics make automation feasible. For example, a floating dead fish salvaging device was developed by a research group from Shanghai Ocean University [21]. After a dead fish is detected, a robotic arm with a fishing net attached at the end locates, rotates toward, and pockets the dead fish, then drops it into a collection box and resets for another round. When the removal direction was set against the water inflow, the average success rate reached 77.14%, compared with 60% under still-water conditions without flow. This task is particularly challenging given that the target does not remain in a fixed position and that the salvage action itself introduces flow disturbance, as sketched below.
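To illustrate why a drifting target complicates the salvage action, the sketch below (not taken from [21]) estimates a constant drift velocity from a short track of the carcass centre and aims the net at the predicted position after the arm's travel delay; the sampling times, the constant-velocity drift model, and the travel delay are assumptions for illustration.

```python
# Minimal sketch (illustrative, not from [21]) of compensating for drift when
# salvaging a floating carcass: the fish centre is tracked over a few frames,
# a constant-velocity drift is fitted, and the net is aimed at the predicted
# position after the arm's travel delay.
import numpy as np

def predict_intercept(track_xy, timestamps, travel_delay_s):
    """track_xy: list of (x, y) centres in metres; timestamps in seconds."""
    t = np.asarray(timestamps)
    xy = np.asarray(track_xy)
    # Least-squares constant-velocity fit for each axis: slope = velocity.
    vx, x0 = np.polyfit(t, xy[:, 0], 1)
    vy, y0 = np.polyfit(t, xy[:, 1], 1)
    t_hit = t[-1] + travel_delay_s
    return (x0 + vx * t_hit, y0 + vy * t_hit)

# Example: a carcass drifting ~2 cm/s along x; the arm needs 3 s to reach it.
print(predict_intercept([(1.00, 0.50), (1.02, 0.50), (1.04, 0.50)], [0.0, 1.0, 2.0], 3.0))
# -> approximately (1.10, 0.50)
```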

4.4. Summary of Current Removal Systems and Limitations

Of the five relatively comprehensive works on automated dead bird removal systems reviewed here, only those aimed at picking up floor-raised chickens performed experiments in an actual farm environment. The others, though claiming to present solutions for commercial poultry farms, lacked on-site experiments. In fact, those experiments were performed in the laboratory, where the robotic arm picked the dead broiler up horizontally, whereas in an actual farm the arm needs to reach between cage layers, avoid live broilers on its way, grasp the dead bird precisely, and extract it carefully. Further research should address this full process to ensure commercial applicability.
While various studies have been conducted in the poultry sector, dead body removal devices for livestock and aquaculture are still underdeveloped. The primary reason for the lack of research may be that not all types of animals are suitable for autonomous carcass removal. For instance, mature pigs are large and heavy, so removing their carcasses would require even larger and heavier machinery. The presence and operation of such devices may cause a widespread stress response, which could even lead to aggressive behaviors, posing threats to operating staff as well as the animals.
Table 3. Removal devices.
Reference | Target Farm Animal | Walking (Moving) Speed | Average Removal Time | Average Success Rate
Liu et al., 2021 [2] | Floor-raised broilers | 3.3 cm/s on the litter | approximately 1 min per operation, maximum of 2 chickens per operation | N/A
Xin et al., 2024 [35] | Floor-raised broilers | N/A | N/A | 81.30%
Li et al., 2022 [26] | Cage-raised broilers | N/A | 70.5–77.8 s per round | 90% at 1000 lux light intensity
Hu, 2021 [63] | Cage-raised broilers | N/A | 32 s | 96.7% (deceased for more than 30 min); 88.3% (deceased within 30 min)
Wang, 2023 [9] | Cage-raised broilers | N/A | around 85 s | 83% (for chickens 4–7 weeks old)
Li et al., 2023 [21] | Recirculating aquaculture system | 0.3 m/s | N/A | 77.14% (one-time removal success rate)
Overall, it can be concluded that most poultry removal robots have rather complicated designs and lack universality in handling birds of different ages and sizes. Removal devices with feasible designs for livestock and aquaculture have yet to emerge. Further research should be conducted to simplify designs, lower costs, improve adaptability, and perform more on-site experiments for potential application in real-world production.

5. Discussion

5.1. Wearable-Sensor-Based Death Detection Methods

By attaching sensors to the animals or otherwise directly interacting with them, wearable sensors can measure physiological parameters such as body temperature and movement acceleration. These parameters change significantly after death, thereby providing information for determining mortality. Unlike non-contact methods based on computer vision, where detection is typically conducted at the level of a cage or tank, wearable-sensor-based methods monitor each animal individually, achieving higher precision and reliability.
Bao et al. [64] demonstrated the efficacy of this approach by using foot-ring data to analyze chicken activity intensity through a three-dimensional total-variance calculation. Their system achieved 100% accuracy in detecting dead chickens using five machine learning models trained on a dataset of 40,000 chickens. The models were further tested to recognize sick, active, and weak chickens as indicators of mortality risk, thereby potentially reducing the number of deaths. Although at least one foot ring per chicken is required, the authors stressed that the cost of running the system for four years can be 25% lower than manual operation. Alves et al. [65] explored low-frequency transponders attached to each bird's wings to monitor feeding behavior and predict illness-related mortality events. Although the study showed that electronic feeders offer valuable insights for mortality prediction, further research is needed to investigate the feasibility and cost-effectiveness of implementing such systems in commercial farming practice. Challenges include the logistical complexity of sensor deployment and the need for robust data infrastructure to manage large-scale datasets.
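As a hedged sketch of the kind of feature such wearable systems rely on, broadly in the spirit of, but not identical to, the total-variance calculation in [64], the code below sums the per-axis acceleration variances over sliding windows and raises a flag when activity stays near zero for several consecutive windows; the window length, threshold, and synthetic data are assumptions for illustration.

```python
# Hedged sketch of an activity-intensity feature from a leg-band accelerometer.
# Assumptions (not from [64]): 600-sample windows, a near-zero threshold of
# 1e-3, and an alarm after 3 consecutive quiet windows.
import numpy as np

def activity_intensity(acc_xyz, window=600):
    """acc_xyz: (N, 3) array of acceleration samples; returns one value per window."""
    n_windows = len(acc_xyz) // window
    trimmed = acc_xyz[: n_windows * window].reshape(n_windows, window, 3)
    # Sum of the three per-axis variances = trace of the covariance matrix.
    return trimmed.var(axis=1).sum(axis=1)

def flag_possible_mortality(intensity, threshold=1e-3, consecutive=3):
    """Return the window index where sustained inactivity is first flagged."""
    run = 0
    for i, value in enumerate(intensity):
        run = run + 1 if value < threshold else 0
        if run >= consecutive:
            return i
    return None

# Example with synthetic data: active for the first half, motionless afterwards.
rng = np.random.default_rng(0)
active = rng.normal(0, 0.2, size=(3000, 3))
still = rng.normal(0, 0.001, size=(3000, 3))
signal = np.vstack([active, still])
print(flag_possible_mortality(activity_intensity(signal)))  # flags an early "still" window
```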
A key limitation of wearable sensor methods is their reliance on individual sensor attachment, which not only increases initial setup costs and ongoing maintenance demands, such as periodic battery replacement or charging, but also introduces stress to the animals. The process of attaching sensors and the presence of foreign objects may disrupt normal behavior and lead to increased stress levels, potentially affecting animal welfare and data accuracy. Therefore, this approach is currently suitable only for farms with small capacity. Nevertheless, wearable sensors offer the advantage of real-time, high-resolution physiological data, making them a promising direction for improving mortality detection accuracy, particularly for high-value livestock or breeding animals such as breeder chickens and rabbits. Future work should focus on reducing sensor costs, improving sensor durability, minimizing interference with animal behavior, and integrating these systems with automated removal systems to create fully automated solutions.

5.2. Ethical and Animal Welfare Considerations

With increasing public concern over animal welfare, the design and implementation of mortality detection and removal systems must prioritize ethical considerations to minimize stress and harm to live animals. There are multiple approaches researchers can adopt to ensure better welfare for farm animals, including maintaining comfortable lighting conditions, lowering artificial noise, and using soft materials wherever devices are accessible to living animals.
Though studies have shown that models perform better under higher light intensities, researchers should avoid using supplementary lighting, especially flashlights [26], as it is likely to cause a stress response. Since the demand for sufficient lighting stems from RGB imaging, one solution is to use high-resolution NIR sensors. These sensors are excellent alternatives to RGB cameras: they also capture distinct edges while imposing no special requirements on visible lighting, relying solely on infrared radiation for imaging [10]. Apart from light, noise pollution is another source of stress. To limit artificial noise, Wang [9] selected a wheeled design for his data-collection chassis because it generates less noise when moving within the chicken house, and used soft plastic for the bar-shaped scraper to keep mechanical interactions with live broilers gentle, reducing the likelihood of aggressive behaviors or flight responses.
To ensure ethical compliance, future systems should integrate adaptive algorithms that account for animal behavior, such as adjusting detection intervals during rest periods, and user feedback mechanisms to monitor unintended welfare impacts. Collaborations between engineers, animal scientists, and ethicists will be essential to balance technological innovation with compassionate care.

5.3. Limitations of Current Works and Future Directions

Although significant advancements have been made in death detection and removal systems for high-density animal farming, several limitations remain. Many existing studies have been performed under controlled or small-scale conditions, limiting their applicability to real-world commercial environments characterized by low and variable lighting, high temperature and humidity, and unpredictable animal behaviors. Moreover, contact-based detection methods often rely on individual sensor attachments, resulting in increased cost and maintenance complexity. In addition, most studies have primarily focused on poultry, with limited exploration of other species such as livestock and aquaculture. Furthermore, the overall efficiency of current detection and removal systems remains insufficient for large-scale farms, where rapid and reliable intervention is essential.
To address these challenges, future research should emphasize large-scale field validation and enhanced adaptability under diverse environmental conditions, ensuring system robustness and reliability. Reducing hardware cost through modular, scalable, and multi-source sensing designs will further improve accessibility for small- and medium-sized farms. Developing cross-species applicable solutions, including those adapted for aquaculture and cage-based livestock production, will expand the generalization capability of current approaches.
Beyond detection, integrating inspection, removal, and disinfection functions into unified multi-robot collaborative systems represents a promising direction toward fully automated mortality management. For instance, multi-purpose robots could conduct continuous inspection, dead body removal, and sanitization simultaneously to improve biosecurity and operational efficiency. Enhancing the motion control and trajectory planning of robotic arms will also enable close-range, in-cage detection and precise manipulation during removal tasks. In addition, as current studies tend to overlook technical parameters from an operational point of view, such as power supply requirements, energy consumption, and operator demands, future studies should include detailed analyses of these aspects. Ultimately, with optimizations regarding system endurance, energy efficiency, and user-friendly interfaces, the overall usability and practical application potential of these systems can be realized.
Additionally, incorporating wearable and non-contact sensing technologies may enable early identification of abnormal physiological or behavioral patterns, serving as pre-mortality warning mechanisms that complement vision-based detection. Finally, AI-driven decision-making and environmental control systems could dynamically adjust ventilation, humidity, or temperature based on real-time detection results, achieving more intelligent and welfare-oriented farm management.
Collectively, these directions point toward an integrated, intelligent, and autonomous mortality management framework, contributing to the next generation of precision livestock farming.

6. Conclusions

In this article, we present a comprehensive review of death detection models and dead body removal devices in the context of high-density animal farms. From image binarization to more advanced techniques such as deep learning, death detection methods have undergone several remarkable evolutions, following advances in the image processing and machine learning communities. The development of dead body removal devices is also reviewed, with each study providing a novel solution to the problem.
Based on the reviewed literature, the process of dead body detection can be divided into three key stages: data acquisition, dataset establishment, and data processing. We have discussed the specific devices and methods that have been employed and refined to enhance detection accuracy and efficiency at each of these stages. Furthermore, the hardware design of removal devices and the deployment of algorithms are discussed as essential steps in integrating detection systems with removal devices. Although only a limited number of cases covering this full process have been reported, we have reviewed multiple excellent studies on specific stages within it. Therefore, we highlight the potential for further research aimed at combining these individual components and performing thorough experiments.
Future efforts should focus on enhancing system robustness through large-scale field validation across diverse environmental and species conditions while reducing deployment cost via modular and scalable multi-source sensing. Integration of close-range detection, robotic removal, and disinfection into unified cooperative platforms will be essential to realize fully automated mortality management. Additionally, incorporating pre-mortality physiological or behavioral monitoring and improving system endurance and usability will further promote intelligent, welfare-oriented livestock farming. Altogether, these optimizations will be essential for achieving fully automated mortality management in high-density farming, ultimately improving both production efficiency and animal welfare for the industry as a whole.

Author Contributions

Conceptualization, L.W. and H.W.; writing—original draft, Y.H.; writing—review and editing, L.W.; visualization, Y.H. and W.J.; funding acquisition, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Ministry of Agriculture and Rural Affairs of the People's Republic of China, grant number CARS-43-D-2.

Data Availability Statement

Not applicable. No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Godfray, H.C.J.; Beddington, J.R.; Crute, I.R.; Haddad, L.; Lawrence, D.; Muir, J.F.; Pretty, J.; Robinson, S.; Thomas, S.M.; Toulmin, C. Food Security: The Challenge of Feeding 9 Billion People. Science 2010, 327, 812–818. [Google Scholar] [CrossRef] [PubMed]
  2. Liu, H.W.; Chen, C.H.; Tsai, Y.C.; Hsieh, K.W.; Lin, H.T. Identifying Images of Dead Chickens with a Chicken Removal System Integrated with a Deep Learning Algorithm. Sensors 2021, 21, 3579. [Google Scholar] [CrossRef] [PubMed]
  3. Ren, G.; Lin, T.; Ying, Y.; Chowdhary, G.; Ting, K.C. Agricultural robotics research applicable to poultry production: A review. Comput. Electron. Agric. 2020, 169, 105216. [Google Scholar] [CrossRef]
  4. Ojo, R.O.; Ajayi, A.O.; Owolabi, H.A.; Oyedele, L.O.; Akanbi, L.A. Internet of Things and Machine Learning techniques in poultry health and welfare management: A systematic literature review. Comput. Electron. Agric. 2022, 200, 107266. [Google Scholar] [CrossRef]
  5. Özentürk, U.; Chen, Z.; Jamone, L.; Versace, E. Robotics for poultry farming: Challenges and opportunities. Comput. Electron. Agric. 2024, 226, 109411. [Google Scholar] [CrossRef]
  6. Li, D.; Wang, Q.; Li, X.; Niu, M.; Wang, H.; Liu, C. Recent advances of machine vision technology in fish classification. ICES J. Mar. Sci. 2022, 79, 263–284. [Google Scholar] [CrossRef]
  7. Bumbálek, R.; Umurungi, S.N.; Ufitikirezi, J.D.D.M.; Zoubek, T.; Kuneš, R.; Stehlík, R.; Lin, H.; Bartoš, P. Deep learning in poultry farming: Comparative analysis of Yolov8, Yolov9, Yolov10, and Yolov11 for dead chickens detection. Poult. Sci. 2025, 104, 105440. [Google Scholar] [CrossRef]
  8. Badgujar, C.M.; Poulose, A.; Gan, H. Agricultural object detection with You Only Look Once (YOLO) Algorithm: A bibliometric and systematic literature review. Comput. Electron. Agric. 2024, 223, 109090. [Google Scholar] [CrossRef]
  9. Wang, L. Study on Health Monitoring of Broiler Flocks and Removal Test of Dead Broilers. Master’s Thesis, Shandong University of Technology, Zibo, China, 2023. [Google Scholar]
  10. Luo, S.; Ma, Y.; Jiang, F.; Wang, H.; Tong, Q.; Wang, L. Dead Laying Hens Detection Using TIR-NIR-Depth Images and Deep Learning on a Commercial Farm. Animals 2023, 13, 1861. [Google Scholar] [CrossRef]
  11. Hao, H.; Fang, P.; Duan, E.; Yang, Z.; Wang, L.; Wang, H. A Dead Broiler Inspection System for Large-Scale Breeding Farms Based on Deep Learning. Agriculture 2022, 12, 1176. [Google Scholar] [CrossRef]
  12. Ma, W.; Wang, X.; Yang, S.X.; Xue, X.; Li, M.; Wang, R.; Yu, L.; Song, L.; Li, Q. Autonomous inspection robot for dead laying hens in caged layer house. Comput. Electron. Agric. 2024, 227, 109595. [Google Scholar] [CrossRef]
  13. Jia, Y.; Xue, H.; Zhou, Z.; Zhao, X.; Huo, X.; Li, L. Automatic identification method for dead chicken in cage based on infrared thermal imaging technology. J. Hebei Agric. Univ. 2023, 46, 105–112. [Google Scholar]
  14. Peng, X.; He, X.; Sun, Y.; Liu, R.; Liang, Y.; Zhong, Y.; Pang, J.; Xiong, K. Identification and 3D localization of dead pig head based on improved YOLOv5. Acta Agric. Univ. Jiangxiensis 2024, 46, 763–773. [Google Scholar] [CrossRef]
  15. Khanal, R.; Wu, W.; Lee, J. Automated Dead Chicken Detection in Poultry Farms Using Knowledge Distillation and Vision Transformers. Appl. Sci. 2025, 15, 136. [Google Scholar] [CrossRef]
  16. Muvva, V.V.R.M.; Zhao, Y.; Parajuli, P.; Zhang, S.; Tabler, T.; Purswell, J. Early detection of mortality in poultry production using high resolution thermography. In Proceedings of the 10th International Livestock Environment Symposium (ILES X), Omaha, NE, USA, 25–27 September 2018; pp. 1499–1504. [Google Scholar]
  17. Heng, X.; Shen, M.X.; Liu, L.S.; Yao, W.; Li, P. The detection method of lactating dead pigs based on improved YOLOv7 and image fusion. J. Nanjing Agric. Univ. 2024, 48, 464–475. [Google Scholar]
  18. Zhao, Y.; Shen, M.; Liu, L.; Chen, J.; Zhu, W. Study on the method of detecting dead chickens in caged chicken based on improved YOLO v5s and image fusion. J. Nanjing Agric. Univ. 2024, 47, 369–382. [Google Scholar]
  19. Zhao, W.; Cheng, Y.; Cao, T.; Zhang, X.; Wu, A.; Xu, B. Design and kinematics analysis of robotic arm used for picking up dead chickens. J. Chin. Agric. Mech. 2023, 44, 131–136. [Google Scholar]
  20. Zhang, P.; Zheng, J.; Gao, L.; Li, P.; Long, H.; Liu, H.; Li, D. A novel detection model and platform for dead juvenile fish from the perspective of multi-task. Multimed. Tools Appl. 2024, 83, 24961–24981. [Google Scholar] [CrossRef]
  21. Li, J.; Zhang, Y.; Ni, Q.; Huang, D. Research on intelligent salvaging technology of floating dead fish in circulating water aquaculture based on DeepSORT algorithm. Mar. Fish. 2023, 45, 749–758. [Google Scholar]
  22. Zhao, S.; Zhang, S.; Lu, J.; Wang, H.; Feng, Y.; Shi, C.; Li, D.; Zhao, R. A lightweight dead fish detection method based on deformable convolution and YOLOV4. Comput. Electron. Agric. 2022, 198, 107098. [Google Scholar] [CrossRef]
  23. Zhu, W.; Lu, C.; Li, X.; Kong, L. Dead Birds Detection in Modern Chicken Farm Based on SVM. In Proceedings of the 2009 2nd International Congress on Image and Signal Processing, Tianjin, China, 17–19 October 2009; pp. 1–5. [Google Scholar]
  24. Duan, E.; Wang, L.; Lei, Y.; Hao, H.; Wang, H. Dead Rabbit Recognition Model Based on Instance Segmentation and Optical Flow Computing. Trans. Chin. Soc. Agric. Mach. 2022, 53, 256–264, 273. [Google Scholar]
  25. Bist, R.B.; Subedi, S.; Yang, X.; Chai, L. Automatic Detection of Cage-Free Dead Hens with Deep Learning Methods. AgriEngineering 2023, 5, 1020–1038. [Google Scholar] [CrossRef]
  26. Li, G.; Chesser, G.D.; Purswell, J.L.; Magee, C.L.; Gates, R.S.; Xiong, Y. Design and Development of a Broiler Mortality Removal Robot. Appl. Eng. Agric. 2022, 38, 853–863. [Google Scholar] [CrossRef]
  27. Jiang, L.; Wang, W.; Huo, X.; Wang, H.; Tang, J.; Li, L. Design and experiment of dead chicken recognition robot system. J. Chin. Agric. Mech. 2023, 44, 81–87. [Google Scholar]
  28. Qu, Z. Study on Detection Method of Dead Chicken in Unmanned Chicken Farm. Master’s Thesis, Jilin University, Changchun, China, 2019. [Google Scholar]
  29. Wu, D.; Ying, Y.; Zhou, M.; Pan, J.; Cui, D. DCDNet: A deep neural network for dead chicken detection in layer farms. Comput. Electron. Agric. 2025, 237, 110492. [Google Scholar] [CrossRef]
  30. Yang, J.; Zhang, T.; Fang, C.; Zheng, H.; Ma, C.; Wu, Z. A detection method for dead caged hens based on improved YOLOv7. Comput. Electron. Agric. 2024, 226, 109388. [Google Scholar] [CrossRef]
  31. Xue, H. Design and Implementation of Dead Broiler Identification System Based on Infrared Thermal Imaging Technology. Master’s Thesis, Nanjing Agricultural University, Nanjing, China, 2020. [Google Scholar]
  32. Zhou, C.; Wang, C.; Sun, D.; Hu, J.; Ye, H. An automated lightweight approach for detecting dead fish in a recirculating aquaculture system. Aquaculture 2025, 594, 741433. [Google Scholar] [CrossRef]
  33. Zhang, H.; Tian, Z.; Liu, L.; Liang, H.; Feng, J.; Zeng, L. Real-time detection of dead fish for unmanned aquaculture by yolov8-based UAV. Aquaculture 2025, 595, 741551. [Google Scholar] [CrossRef]
  34. Tian, Q.; Huo, Y.; Yao, M.; Wang, H. A method for detecting dead fish on large water surfaces based on improved YOLOv10. arXiv 2024, arXiv:2409.00388. [Google Scholar] [CrossRef]
  35. Xin, C.; Li, H.; Li, Y.; Wang, M.; Lin, W.; Wang, S.; Zhang, W.; Xiao, M.; Zou, X. Research on an Identification and Grasping Device for Dead Yellow-Feather Broilers in Flat Houses Based on Deep Learning. Agriculture 2024, 14, 1614. [Google Scholar] [CrossRef]
  36. Zheng, J.; Fu, Y.; Zhao, R.; Lu, J.; Liu, S. Dead Fish Detection Model Based on DD-IYOLOv8. Fishes 2024, 9, 356. [Google Scholar] [CrossRef]
  37. Zhao, R.; Wang, Y.; Zhao, S.; Zhang, S.; Duan, Y. Detection and positioning system of dead fish in factory farming. China Agric. Inform. 2024, 36, 31–46. [Google Scholar]
  38. Lu, C.F. Study on Dead Birds Detection System Based on Machine Vision in Modern Chicken Farm. Master’s Thesis, Jiangsu University, Zhenjiang, China, 2009. [Google Scholar]
  39. Peng, Y. Study on Detecting Dead Birds in Modern Chicken Farm Based on SVM. Master’s Thesis, Jiangsu University, Zhenjiang, China, 2010. [Google Scholar]
  40. Bai, Z.; Lv, Y.; Zhu, Y.; Ma, Y.; Duan, E. Dead Duck Recognition Algorithm Based on Improved Mask R-CNN. Trans. Chin. Soc. Agric. Mach. 2024, 55, 305–314. [Google Scholar]
  41. Zaninelli, M.; Redaelli, V.; Luzi, F.; Bronzo, V.; Mitchell, M.; Dell Orto, V.; Bontempo, V.; Cattaneo, D.; Savoini, G. First Evaluation of Infrared Thermography as a Tool for the Monitoring of Udder Health Status in Farms of Dairy Cows. Sensors 2018, 18, 862. [Google Scholar] [CrossRef]
  42. McManus, C.; Bianchini, E.; Paim, T.; De Lima, F.; Neto, J.; Castanheira, M.; Esteves, G.; Cardoso, C.; Dalcin, V. Infrared Thermography to Evaluate Heat Tolerance in Different Genetic Groups of Lambs. Sensors 2015, 15, 17258–17273. [Google Scholar] [CrossRef]
  43. Cai, Z.; Cui, J.; Yuan, H.; Cheng, M. Application and research progress of infrared thermography in temperature measurement of livestock and poultry animals: A review. Comput. Electron. Agric. 2023, 205, 107586. [Google Scholar] [CrossRef]
  44. Li, D.; Song, Z.; Quan, C.; Xu, X.; Liu, C. Recent advances in image fusion technology in agriculture. Comput. Electron. Agric. 2021, 191, 106491. [Google Scholar] [CrossRef]
  45. Hao, H.; Jiang, W.; Luo, S.; Sun, X.; Wang, L.; Wang, H. Detection of Dead Broilers Based on Fusion of Color and Thermal Infrared Image Information. Trans. Chin. Soc. Agric. Mach. 2025, 56, 47–64. [Google Scholar]
  46. Depuru, B.K.; Putsala, S.; Mishra, P. Automating poultry farm management with artificial intelligence: Real-time detection and tracking of broiler chickens for enhanced and efficient health monitoring. Trop. Anim. Health Prod. 2024, 56, 75. [Google Scholar] [CrossRef] [PubMed]
  47. Hao, H.; Zou, F.; Duan, E.; Lei, X.; Wang, L.; Wang, H. Research on Broiler Mortality Identification Methods Based on Video and Broiler Historical Movement. Agriculture 2025, 15, 225. [Google Scholar] [CrossRef]
  48. Wang, L. Research and Application of Dead Fish Recognition Technology Based on Deep Learning. Master’s Thesis, Guangdong Ocean University, Zhanjiang, China, 2022. [Google Scholar]
  49. Fu, T.; Feng, D.; Ma, P.; Hu, W.; Yang, X.; Li, S.; Zhou, C. DF-DETR: Dead fish-detection transformer in recirculating aquaculture system. Aquacult. Int. 2025, 33, 43. [Google Scholar] [CrossRef]
  50. Tong, C.; Li, B.; Wu, J.; Xu, X. Developing a Dead Fish Recognition Model Based on an Improved YOLOv5s Model. Appl. Sci. 2025, 15, 3463. [Google Scholar] [CrossRef]
  51. Zhao, W.; Wang, H.; Pei, R.; Pang, C. Thermal infrared dead rabbit identification method based on improved YOLOF. Heilongjiang Anim. Sci. Veter. Med. 2023, 2023, 118–122. [Google Scholar]
  52. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  53. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  54. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
  55. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  56. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. arXiv 2016, arXiv:1506.02640. [Google Scholar] [CrossRef]
  57. Soeb, M.J.A.; Jubayer, M.F.; Tarin, T.A.; Al Mamun, M.R.; Ruhad, F.M.; Parven, A.; Mubarak, N.M.; Karri, S.L.; Meftaul, I.M. Tea leaf disease detection and identification based on YOLOv7 (YOLO-T). Sci. Rep. 2023, 13, 6078. [Google Scholar] [CrossRef]
  58. Rai, N.; Zhang, Y.; Villamil, M.; Howatt, K.; Ostlie, M.; Sun, X. Agricultural weed identification in images and videos by integrating optimized deep learning architecture on an edge computing technology. Comput. Electron. Agric. 2024, 216, 108442. [Google Scholar] [CrossRef]
  59. Wang, J.; Wang, N.; Li, L.; Ren, Z. Real-time behavior detection and judgment of egg breeders based on YOLO v3. Neural Comput. Appl. 2020, 32, 5471–5481. [Google Scholar] [CrossRef]
  60. Nasiri, A.; Amirivojdan, A.; Zhao, Y.; Gan, H. Estimating the Feeding Time of Individual Broilers via Convolutional Neural Network and Image Processing. Animals 2023, 13, 2428. [Google Scholar] [CrossRef]
  61. De Montis, A.; Pinna, A.; Barra, M.; Vranken, E. Analysis of poultry eating and drinking behavior by software eYeNamic. J. Agric. Eng. 2013, 44, e33. [Google Scholar] [CrossRef]
  62. Jiang, W.; Hao, H.; Wang, H.; Wang, L. Possible application of agricultural robotics in rabbit farming under smart animal husbandry. J. Clean. Prod. 2025, 501, 145301. [Google Scholar] [CrossRef]
  63. Hu, Z. Research on Underactuated End Effector of Dead Chicken Picking Robot. Master’s Thesis, Hebei Agricultural University, Baoding, China, 2021. [Google Scholar]
  64. Bao, Y.; Lu, H.; Zhao, Q.; Yang, Z.; Xu, W. Detection system of dead and sick chickens in large scale farms based on artificial intelligence. Math. Biosci. Eng. 2021, 18, 6117–6135. [Google Scholar] [CrossRef]
  65. Alves, A.A.C.; Fernandes, A.F.A.; Breen, V.; Hawken, R.; Rosa, G.J.M. Monitoring mortality events in floor-raised broilers using machine learning algorithms trained with feeding behavior time-series data. Comput. Electron. Agric. 2024, 224, 109124. [Google Scholar] [CrossRef]
Figure 1. The workflow of integrated death detection and removal systems.
Figure 2. The reviewed species-specific death detection scenarios.
Figure 3. Typical inspection systems for death detection. (a) Fixed inspection robot for poultry farming. (b) Fixed data acquisition systems for dead fish detection [49]. (c) Fixed data acquisition system for chicken house [61]. (d) Video capture system for dead fish detection [32]. (e) Bird monitoring robot, ChickenBoy, developed by Big Dutchman (Big Dutchman, Inc., Vechta, Germany). (f) Close-up view of the unmanned aerial vehicle used for dead fish detection [33]. (g) Inspection robot for dead chicken detection [12]. (h) Inspection robot for dead hen detection [30]. (i) Image acquisition system attached to a mobile chassis for dead hen detection [11]. (j) Mobile robot for dead chicken detection [29].
Table 1. Death detection dataset characteristics. Note: N/A (Not Applicable) indicates that no relevant data were reported for the item.
Sector | Reference | Species | Breeding Mode/Scenario | Collection Method | Imaging Sensor | Data Type | Dataset Size | Raw Image Resolution | Annotation Tool
Poultry farming | Lu, 2009 [38] | Chicken | Cage-raised | N/A | ZC301 CMOS chip camera | RGB image | N/A | N/A | N/A
 | Zhu et al., 2009 [23] | Chicken | Cage-raised | Fixed camera | ZC301 CMOS chip camera | RGB image | 160 sets (2 images each) | N/A | N/A
 | Peng, 2010 [39] | Chicken | Cage-raised | Fixed camera | MV-VS140FM/FC camera | RGB image | 120 sets (2 images each) | 1392 × 1040 | N/A
 | Muvva et al., 2018 [16] | Broiler | Floor-raised | Fixed camera | Duo R camera | Thermal infrared and RGB image | 120 sets (1 thermal infrared image and 1 RGB image each) | 1920 × 1080 | N/A
 | Xue, 2020 [31] | Chicken | Floor-raised | Mobile platform-vehicle | FLIR TAU2 630 | Thermal infrared image | 8050 (10,000 after data augmentation) | 640 × 512 | LabelImg
 | Liu et al., 2021 [2] | Chicken | Floor-raised | Mobile platform-vehicle | Logitech C922 Pro Stream Camera | RGB image | 150 | 1920 × 1080 | N/A
 | Li et al., 2022 [26] | Broiler | Lab simulation | Fixed on a designed robotic arm | Intel RealSense D435 | RGB image | 14,288 images under 8 different lighting conditions | 1920 × 1080 | N/A
 | | | | | | depth image | | 1080 × 720 | N/A
 | Hao et al., 2022 [11] | Broiler | Cage-raised | Mobile platform-vehicle | Sony XCG-CG240C | RGB image | 1310 | 1920 × 2000 | LabelImg
 | | | | Manual collection | | | | |
 | Luo et al., 2023 [10] | Hen | Floor-raised | Mobile platform-vehicle | RealSense L515 | NIR, depth images | 2052 sets (living hens) and 1937 sets (dead hens) (1 TIR, 1 NIR, 1 depth image each) | 640 × 480 | LabelImg
 | | | | | IRay P2 | TIR images | | 256 × 192 |
 | Bist et al., 2023 [25] | Hen | Floor-raised | Fixed camera | PRO-1080MSB camera | Thermal infrared image | 9000 images after data augmentation | 1920 × 1080 | Makesense.AI (image labeling website)
 | | | | | DVR-4580 | | | |
 | Jiang et al., 2023 [27] | Chicken | Cage-raised | Mobile platform-vehicle | Lepton3.5 | Thermal infrared image | N/A | 160 × 120 | N/A
 | Wang, 2023 [9] | Broiler | Cage-raised | Mobile platform-vehicle | HIKMICRO H16 | Thermal infrared image | 1820 | 160 × 120 | LabelImg
 | Jia et al., 2023 [13] | Chicken | Cage-raised | Manual collection | FLIR | Thermal infrared image | 700 | 320 × 240 | N/A
 | Zhao et al., 2024 [18] | Chicken | Cage-raised | Fixed camera | FLIR A6 | Thermal infrared image | 9060 | 640 × 512 | LabelImg
 | | | | | | RGB image | | 1920 × 1080 |
 | Yang et al., 2024 [30] | Hen | Cage-raised | Mobile platform-vehicle | SIYI A8 mini zoom gimbal camera | RGB image | 5000 | 1280 × 720 | LabelImg
 | Bai et al., 2024 [40] | Duck | Cage-raised | Mobile platform-vehicle | IMX camera module | RGB image | 1057 | N/A | SAM-Tool
 | Xin et al., 2024 [35] | Chicken | Floor-raised | | Binocular camera | RGB image | 1565 | 1920 × 1080 | LabelMe
 | Ma et al., 2024 [12] | Hen | Cage-raised | Manual collection | Xiaomi 13 smartphone | RGB image | 17,758 annotated images | 1920 × 1080 | LabelMe
 | | | | Mobile platform-vehicle | Eight LRCP20680_1080P | RGB image | | 1920 × 1080 | N/A
 | Depuru et al., 2024 [46] | Broiler | Floor-raised | N/A | a Lenovo USB, 3.5-inch, 2.1-MP, 90-degree ultrawide, 360-rotation, 30-fps camera | RGB image | N/A | N/A | N/A
 | | | | N/A | HP High-Definition W100 wide-range camera | RGB image | N/A | 640 × 480 | N/A
 | Hao et al., 2025 [47] | Broiler | Cage-raised | Fixed camera | HIKROBOT MV-CA023-10UC (HIKROBOT China Inc., Hangzhou, China) | RGB image | 770 | N/A | LabelMe
 | Hao et al., 2025 [45] | Broiler | Cage-raised | Fixed camera | RealSense L515 | RGB, near-infrared, depth image | 991 sets with 3 images each | depth image 1024 × 768; RGB image 1920 × 1080 | N/A
 | | | | | IRay P2 | Thermal infrared image | N/A | 256 × 192 | N/A
 | Wu et al., 2025 [29] | Chicken | Cage-raised | Mobile platform-vehicle | N/A | RGB image | 3000 | 1920 × 1080 | LabelImg
 | Khanal et al., 2025 [15] | Chicken | Floor-raised | Fixed camera | IPC675LFW-AX4DUPKC-VG camera | RGB image | 1462 | N/A | No explicit labels
Fishery farming | Zhao et al., 2022 [22] | Fish | Lab simulation | Fixed camera | N/A | RGB image | 1150 | 1920 × 1080 | LabelImg
 | Wang, 2022 [48] | Fish | Deep and far sea aquaculture | Manual collection | NIKKOR 42X | RGB image | 1312 | N/A | LabelImg
 | Li et al., 2023 [21] | Fish | Recirculating aquaculture system | Fixed camera | Logitech C920e | RGB image | 8970 | 1280 × 720 | roLabelImg
 | Zhao et al., 2024 [37] | Fish | Lab simulation | Fixed camera | ZED 2 stereo camera | RGB image | 1172 | 1280 × 720 | LabelImg
 | Zheng et al., 2024 [36] | Fish | Aquaculture base | N/A | N/A | RGB image | 958 | N/A | LabelMe
 | Tian et al., 2024 [34] | Fish | Pond | Mobile platform-UAV | N/A | RGB image | 500 | 3840 × 2160 | X-AnyLabeling
 | Zhang et al., 2024 [20] | Fish | Breeding pond (fish-vegetable symbiosis factory) | Fixed camera | N/A | RGB image | 2500 | 1920 × 1080 | LabelImg
 | Zhang et al., 2025 [33] | Fish | Experimental base | Mobile platform-UAV | DJI M30T | RGB image | 4766 | 4000 × 3000 | DarkLabel (software)
 | Zhou et al., 2025 [32] | Fish | Recirculating aquaculture system | Mobile platform-railway | DCY01 | RGB image | 9530 | 1920 × 1080 | LabelImg
 | Fu et al., 2025 [49] | Fish | Recirculating aquaculture system | Fixed camera | Hikvision surveillance camera | RGB image | 1635 | 2560 × 1920 | LabelImg
 | Tong et al., 2025 [50] | Fish | Fish farm | Manual collection | OPPO A96, Huawei Nova 7 Plus, Yingshi CS-H8 camera | RGB image | 670 | 640 × 640 | LabelImg
Livestock farming | Duan et al., 2022 [24] | Rabbit | Cage-raised | Fixed camera | Hikrobot MV CA020 10GC | RGB videos | 40 locations (1 min each) | 1624 × 1240 | LabelMe
 | Zhao et al., 2023 [51] | Rabbit | Cage-raised | Fixed camera | InfiRay T2L-6L | Thermal infrared image | 3443 | 256 × 192 | LabelImg
 | Peng et al., 2024 [14] | Pig | Breeding farm | Manual collection | Mobile phone | RGB image | 2000 | 1792 × 828 | LabelImg
 | | | | | Intel RealSense D435 | RGB, infrared, depth image | 24 | 1280 × 720 |
 | Heng et al., 2024 [17] | Piglet | Breeding farm | Fixed camera | FLIR A6 stereo camera | Thermal infrared image | 2000 sets (1 thermal infrared image, 1 RGB image each) | 640 × 512 | LabelImg
 | | | | | | RGB image | | 1920 × 1080 |
Table 2. Features of deep-learning-based death detection models. Note: N/A (Not Applicable) indicates that no relevant data were reported for the item.
Reference | Animal | Deep Learning Model | Network (Backbone) | Modification | Accuracy | Average Precision (AP) | Mean Average Precision (mAP) | Precision | Recall | F1 Score | Speed
Xue, 2020 [31] | Chicken | YOLOv3 | Darknet-v53 | N/A | N/A | N/A | N/A | 90.8% | 98.9% | 94.7% | 43.6 fps
 | | Faster R-CNN | VGG16 | N/A | N/A | N/A | N/A | 98.1% | 98.7% | 98.4% | 8.8 fps
Liu et al., 2021 [2] | Chicken | YOLOv4 | CSPDarknet53 | N/A | 97.5% | N/A | 100% | 95.24% | 100% | N/A | N/A
Li et al., 2022 [26] | Broiler (Chicken) | YOLOv4 | CSPDarknet53 | N/A | N/A | N/A | N/A | 95.1% | 86.3% | 90.5% | 7 fps
Hao et al., 2022 [11] | Broiler (Chicken) | improved-YOLOv3 | Darknet-v53 | (1) mosaic enhancement: enrich the dataset, reduce GPU consumption; (2) the Swish activation function: achieve stronger activation performance; (3) an SPP module: reduce information loss and integrate features of different sizes; (4) CIoU loss: faster convergence and better performance | N/A | N/A | 98.6% | 95.0% | 96.8% | N/A | 0.09 s/frame
Bist et al., 2023 [25] | Hen (Chicken) | YOLOv5s-MD | a variant of EfficientNet | (1) an improved EfficientNet (backbone): good feature extraction while remaining lightweight and efficient; (2) "YOLO Loss", a customized combination of Objectness, Classification, and Regression Losses: encourage accurate predictions and penalize incorrect ones | N/A | N/A | 99.5% (mAP@0.50); 82.3% (mAP@0.50:0.95) | N/A | 98.4% | N/A | 55.6 fps
Wang, 2023 [9] | Broiler (Chicken) | YOLOv5 | Improved CSPDarknet53 | N/A | N/A | 97.4% (live); 93.1% (dead) | N/A | 97.2% (live); 97.6% (dead) | 93.0% (live); 87.8% (dead) | N/A | 69.4 fps
Zhao et al., 2024 [18] | Chicken | YOLOv5s-SE | Improved CSPDarknet53 | (1) CSP module (backbone): reduce computational load; (2) Focus module (backbone): expand the input channels and preserve information integrity; (3) SE module (attention): improve characterization ability; (4) CIoU loss: better assess box quality; (5) DIoU_NMS: help address occlusions | N/A | N/A | 98.2% (mAP@0.5) | 97.70% | 97.90% | N/A | 5.4 ms/frame
Yang et al., 2024 [30] | Hen (Chicken) | improved-YOLOv7 | MobileNetv3 | (1) CBAM (attention module): make the network concentrate on valuable feature layers; (2) Repulsion loss: mitigate the crowded-hen occlusion issue; (3) DIoU-NMS: filter out more accurate prediction boxes without increasing computational load | N/A | 86.2% (live); 85.1% (dead) | N/A | 95.7% (live); 95.1% (dead) | 86.8% (live); 86.8% (dead) | 91.0% (live); 90.8% (dead) | 103 fps
Bai et al., 2024 [40] | Duck | Mask R-CNN + Swin Transformer | Swin Transformer | (1) Swin Transformer (backbone): improve feature extraction ability; (2) Cross-Entropy Loss: improve classification performance, especially in high-density scenarios | N/A | 95.8% (mAP@0.90) | N/A | N/A | N/A | N/A | N/A
Xin et al., 2024 [35] | Chicken | improved-YOLOv6 + SE | EfficientRep | (1) ASPP module: reduce missed detections in complex backgrounds and improve iteration speed; (2) SE attention mechanism: improve characterization ability | N/A | N/A | 92% | 86% | 89% | 87% | N/A
Ma et al., 2024 [12] | Hen (Chicken) | RTMDet | CSPNeXt | N/A | N/A | N/A | 92.8% (mAP@50) | 90.61% | 84.4% | N/A | N/A
Depuru et al., 2024 [46] | Broiler (Chicken) | YOLOv5s | CSPDarknet53 | N/A | 95% | N/A | mAP@0.50: 98.1% (1WOC); 93.3% (2WOC); 90.7% (3WOC); 96.5% (4WOC) | 94.7% (1WOC); 88.4% (2WOC); 91.2% (3WOC); 95.2% (4WOC) | 95.9% (1WOC); 83.4% (2WOC); 80.1% (3WOC); 90.1% (4WOC) | N/A | N/A
Hao et al., 2025 [45] | Broiler (Chicken) | YOLOv11s | N/A | N/A | 91.7% | N/A | N/A | 93.0% | 87.2% | 89.4% | 6.1 ms/frame
Zhao et al., 2022 [22] | Fish | DM-YOLOv4 | MobileNetV3 | (1) MobileNetV3 (backbone): achieve a lightweight feature extraction network; (2) depthwise separable convolution (replacing standard convolution): reduce the number of parameters of the overall network | N/A | 92.43% | N/A | 95.47% | 77.52% | N/A | 64 fps
Wang, 2022 [48] | Fish | improved-YOLOv5 | GhostNeck | (1) GhostNet (backbone); (2) CIoU loss; (3) SE module | N/A | 97.5% (live); 99% (dead) | 98.40% | N/A | N/A | N/A | 333.3 fps
Li et al., 2023 [21] | Fish | DeepSORT | N/A | (1) rotated appearance features and rotating IoU: obtain more robust information on target dead fish | 79.6% | 68.4% | N/A | N/A | N/A | N/A | N/A
Zhao et al., 2024 [37] | Fish | YOLOv7-PC | improved EfficientNet | (1) PConv module: improve detection speed; (2) Coordinate Attention module (neck): improve feature extraction and reduce false detection caused by occlusion | N/A | N/A | 96.6% | 97.9% | 85.3% | N/A | 49 fps
Zheng et al., 2024 [36] | Fish | DD-IYOLOv8 | Improved CSPDarknet53 | (1) DySnakeConv (neck): adaptively adjust the receptive field, improving feature extraction; (2) an added small-object detection layer and a detection head modified to 4: better focus on small targets and occluded dead fish; (3) Hybrid Attention Mechanism (later backbone stages): refine global feature extraction | N/A | 91.7% | N/A | 92.8% | 89.4% | 91.0% | N/A
Tian et al., 2024 [34] | Fish | improved-YOLOv10 | FasterNet | (1) FasterNet (backbone): reduce model complexity; (2) enhanced connectivity methods and an introduced CSPStage (neck): improve feature fusion; (3) an added compact detection head: enhance detection of smaller objects | N/A | 97.50% | N/A | 95.70% | 94.50% | N/A | 36 fps
Zhang et al., 2024 [20] | Fish | E-YOLOv4 | Improved CSPDarknet53 | (1) Efficient Channel Attention module (backbone): strengthen the feature auxiliary branch | N/A | N/A | 96.91% | 94.85% | 94.96% | 95% | N/A
 | | D-YOLOv4 | Densenet169 | (1) DenseNet169 (backbone): strengthen the transmission and utilization of dead fish target-area features, alleviate gradient disappearance during training | N/A | N/A | 95.93% | 94.81% | 93.44% | 94% | N/A
Zhang et al., 2025 [33] | Fish | improved-YOLOv8 | Improved CSPDarknet53 | (1) Attention scale sequence fusion: enhance multi-scale information extraction and fuse feature maps at different scales; (2) Triple feature encoding: ensure details of feature layers of different sizes are fully considered; (3) Channel and position attention mechanism: better allocate weights for learning information under different environmental conditions; (4) P2 detection head: better capture smaller and finer features | N/A | N/A | 98.10% | 95.60% | 95.20% | N/A | 124 fps
Zhou et al., 2025 [32] | Fish | Deadfish-YOLO | improved lightweight CSPDarknet | (1) a lightweight backbone network: ensure fast computation; (2) an attention mechanism: suppress unimportant features; (3) ReLU-memristor-like activation function: improve neural-network performance | N/A | N/A | 94.6% | N/A | N/A | N/A | 85 fps
Fu et al., 2025 [49] | Fish | DF-DETR | RepNCSPELAN | (1) RepNCSPELAN (backbone): better extract multi-scale features and reduce model parameters; (2) CascadedGroupAttention (feature fusion): capture more target features; (3) CCFM_CSP module: fuse important features using parallel dilated convolutions with different expansion rates | N/A | N/A | N/A | 94.8% | 94.1% | 94.4% | N/A
Tong et al., 2025 [50] | Fish | YOLO-DWM | Improved CSPDarknet53 | (1) DWMConv: better feature extraction; (2) C3-EMA module: enhance feature processing capabilities; (3) C3-Light module: reduce the model's parameters and FLOPs | N/A | N/A | 87.5% (mAP@50) | 93.6% | 77.5% | 84.8% | N/A
Zhao et al., 2023 [51] | Rabbit | improved-YOLOF | MobileNetv3 | (1) MobileNetv3 (backbone): reduce the number of parameters, achieve lightweight performance; (2) SRM-ASPP structure (field enhancement network): assist feature extraction | N/A | 95.0% | N/A | 89.2% | 93.1% | N/A | 79.2 fps
Peng et al., 2024 [14] | Pig | YOLOv5-MobileNetV2 | MobileNetV2 | (1) MobileNetV2 (backbone): reduce the number of parameters, achieve lightweight performance | N/A | N/A | 99.5% | 99.2% | 99.6% | N/A | 10.8 ms/frame
 | | YOLOv5-CBAM | CBAM | (1) CBAM attention mechanism (backbone): improve the model's focus on the pig head | N/A | N/A | 99.5% | 99.6% | 99.8% | N/A | 11.6 ms/frame
Heng et al., 2024 [17] | Piglet (Pig) | YOLOv7-SE | improved EfficientNet | (1) SE attention mechanism (backbone): better feature extraction | N/A | N/A | 92.1% | 93.8% | 90.3% | N/A | 6.8 fps
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
