Review

Application of Convolutional Neural Networks in Animal Husbandry: A Review

by Rotimi-Williams Bello 1,2,*, Roseline Oluwaseun Ogundokun 1, Pius A. Owolawi 1, Etienne A. van Wyk 1 and Chunling Tu 1

1 Department of Computer Systems Engineering, Faculty of Information and Communication Technology, Tshwane University of Technology, Pretoria 0152, South Africa
2 Department of Mathematics and Computer Science, Faculty of Basic and Applied Sciences, University of Africa, Toru-Orua 561101, Nigeria
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(12), 1906; https://doi.org/10.3390/math13121906
Submission received: 12 May 2025 / Revised: 30 May 2025 / Accepted: 5 June 2025 / Published: 6 June 2025
(This article belongs to the Section E: Applied Mathematics)

Abstract

Convolutional neural networks (CNNs) and their application in animal husbandry have in-depth mathematical expressions, which usually revolve around how well they map input data such as images or video frames of animals to meaningful outputs like health status, behavior class, and identification. Likewise, computer vision and deep learning models are driven by CNNs to act intelligently in improving productivity and animal management for sustainable animal husbandry. In animal husbandry, CNNs play a vital role in the management and monitoring of livestock health and productivity due to their high accuracy in analyzing images and videos. Monitoring animals' health is important for their welfare, food abundance, safety, and economic productivity. This paper comprehensively reviews recent advancements and applications of CNN-based models for livestock health monitoring, covering the detection of various diseases and the classification of behavior, for overall management gain. We selected relevant articles with experimental results addressing animal detection, localization, tracking, and behavioral monitoring, validating the high accuracy and efficiency of CNNs. Prominent anchor-based object detection models such as the R-CNN, YOLO, and SSD series, as well as anchor-free (key-point-based and anchor-point-based) object detection models, are often used, demonstrating great versatility and robustness across various tasks. From the analysis, it is evident that CNNs have made significant research contributions to animal husbandry. Notable challenges in applying CNNs to animal husbandry include limited labeled data, variation in data, low-quality or noisy images, complex backgrounds, computational demand, species-specific models, high implementation cost, scalability, modeling complex behaviors, and compatibility with current farm management systems.
Through continued research efforts, these challenges can be addressed to realize sustainable animal husbandry.

1. Introduction

Convolutional neural networks (CNNs) and their application in animal husbandry have in-depth mathematical expressions, which usually revolve around how well they map input data such as images or video frames of animals to meaningful outputs like health status, behavior class, and identification [1]. A generalized formulation is as follows:
Let
$X \in \mathbb{R}^{H \times W \times C}$: input image of height $H$, width $W$, and $C$ channels; (1)
$Y \in \mathbb{R}^{K}$: output vector representing $K$ possible animal states (e.g., healthy/sick, IDs, behaviors); (2)
$F_\theta$: CNN model with learnable parameters $\theta$; (3)
$\hat{Y} = F_\theta(X)$: predicted output. (4)
The objective function for Equations (1)–(4) minimizes the error between predicted and true labels:
$\min_\theta \; L(Y, \hat{Y}) = L(Y, F_\theta(X))$ (5)
where L is the loss function, e.g., cross-entropy for classification:
$L_{CE}(Y, \hat{Y}) = -\sum_{k=1}^{K} Y_k \log(\hat{Y}_k)$ (6)
The CNN function $F_\theta$ is typically a composition of layer functions:
$F_\theta(X) = f_L \circ f_{L-1} \circ \cdots \circ f_1(X)$ (7)
where $f_i$ is the combination of convolution, activation (ReLU), and pooling at layer $i$.
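The layer composition above can be sketched in plain Python; the layer functions here are toy stand-ins (a doubling "convolution", ReLU, and a global-max "pooling"), not real CNN operations:

```python
from functools import reduce

def compose(*layers):
    """Chain layer functions: compose(f1, f2, f3)(x) == f3(f2(f1(x)))."""
    def forward(x):
        return reduce(lambda acc, f: f(acc), layers, x)
    return forward

# Toy stand-ins for the per-layer operations on a list of numbers.
conv = lambda x: [v * 2 for v in x]        # placeholder "convolution"
relu = lambda x: [max(0.0, v) for v in x]  # ReLU activation
pool = lambda x: [max(x)]                  # global max "pooling"

f_theta = compose(conv, relu, pool)        # the composed network function
print(f_theta([-1.0, 2.0, 3.0]))           # [6.0]
```

The same `compose` pattern extends to any number of layers, mirroring how deep networks stack many such functions.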
Each convolution operation is
$Z_{i,j,k} = \sum_{m=1}^{M} \sum_{n=1}^{N} \sum_{c=1}^{C} W_{m,n,c,k} \cdot X_{i+m,\,j+n,\,c} + b_k$ (8)
Here, $W$ is the convolution kernel, $b_k$ is the bias term, and $Z$ is the output feature map.
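The convolution sum above translates into a minimal, NumPy-free sketch for a single-channel image and a single filter (real frameworks vectorize this heavily and add padding, stride, and multiple channels):

```python
def conv2d(X, W, b):
    """Valid 2D convolution of a single-channel image X (list of rows)
    with kernel W (M x N) and scalar bias b:
    Z[i][j] = sum_m sum_n W[m][n] * X[i+m][j+n] + b."""
    H, Wd = len(X), len(X[0])
    M, N = len(W), len(W[0])
    return [[b + sum(W[m][n] * X[i + m][j + n]
                     for m in range(M) for n in range(N))
             for j in range(Wd - N + 1)]
            for i in range(H - M + 1)]

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge_kernel = [[1, -1]]                  # 1 x 2 horizontal-difference filter
print(conv2d(image, edge_kernel, 0))     # [[-1, -1], [-1, -1], [-1, -1]]
```

Each output value responds to a local neighborhood, which is what lets trained kernels pick up edges or textures in animal images.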
The potential of CNNs is also reflected in their performance metrics as follows:
$\mathrm{Precision} = \frac{TP}{TP + FP}$ (9)
$\mathrm{Recall} = \frac{TP}{TP + FN}$ (10)
$\text{F1-score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$ (11)
where $TP$, $FP$, and $FN$ denote true positives, false positives, and false negatives, respectively.
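These metric definitions can be checked with a few lines of Python operating directly on confusion-matrix counts:

```python
def precision(tp, fp):
    """Fraction of positive predictions that were correct."""
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    """Fraction of actual positives that were found."""
    return tp / (tp + fn) if tp + fn else 0.0

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r) if p + r else 0.0

# Example: 8 true positives, 2 false positives, 2 false negatives.
print(precision(8, 2))               # 0.8
print(recall(8, 2))                  # 0.8
print(round(f1_score(8, 2, 2), 3))   # 0.8
```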
$AP = \sum_{i=1}^{n} P(i) \cdot \Delta R(i)$ (12)
where $n$ denotes the total number of recall points, $P(i)$ is the precision at the $i$-th recall point, and $\Delta R(i)$ is the change in recall between the $i$-th and $(i+1)$-th recall points.
$mAP = \frac{1}{n} \sum_{i=1}^{n} AP_i$ (13)
where $AP_i$ denotes the average precision for class $i$, and $n$ denotes the total number of classes.
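A simplified sketch of AP and mAP, treating AP as precision weighted by recall increments (all-point form; it assumes recall starts from 0 and the precision/recall lists are already ordered):

```python
def average_precision(precisions, recalls):
    """AP = sum_i P(i) * (R(i) - R(i-1)), taking R(0) = 0."""
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

def mean_average_precision(ap_per_class):
    """mAP: mean of the per-class AP values."""
    return sum(ap_per_class) / len(ap_per_class)

# One class: precision 1.0 up to recall 0.5, dropping to 0.5 at recall 1.0.
ap = average_precision([1.0, 0.5], [0.5, 1.0])
print(ap)                                    # 0.75
print(mean_average_precision([ap, 0.25]))    # 0.5
```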
$\mathrm{Accuracy} = \frac{\text{Number of correct predictions}}{\text{Total predictions}}$ (14)
The mathematical framework expressed in Equations (1)–(14), particularly the task-dependent metrics in Equations (9)–(13), shows the capability of CNNs to transform raw animal data (images) into structured, actionable insights, making them a powerful and dependable tool in precision livestock farming. Moreover, the application of CNNs in food production encompasses the entire process from the initial cultivation or raising of food sources to the final distribution of food products [2]. Food production is an important aspect of animal farming with several benefits to people; it involves agriculture, processing, and distribution, and considers factors such as environmental impact, sustainability, and food safety.
Animals such as cattle provide meat and milk for sustenance, along with by-products such as blood meal and bone meal (as livestock feed), tallow, hides, horns, hooves, and biogas (for industrial and other uses), and clothing. The economic benefits of animals like cattle in terms of their yields outweigh the cost involved in their husbandry, in addition to their utilization as farm tools for land preparation and transportation (e.g., donkeys and camels) [3]. The livestock industry is expanding worldwide due to factors such as population growth, industrial dependence, urbanization, and changing dietary preferences. According to the Foreign Agricultural Service of the U.S. Department of Agriculture (FAS), beef production in the United States between 2024 and 2025 is estimated to rise to 12.3 million metric tons [4].
This positions the U.S. ahead of other countries, contributing about 20% of global beef production. For the same period, total global beef production is projected at 61.38 million metric tons [4]. Other leading beef-producing countries include Brazil, China, and the European Union, accounting for about 19%, 13%, and 11% of global production, respectively [4]. These statistics reflect the progress made in ensuring livestock products are available for consumption and industrial use [5]. As the livestock sector continues to expand, with annual outputs likely to exceed expectations in the years to come, this expansion has great implications for conventional livestock farming systems [6].
To handle this expansion and various challenges confronting livestock farming, and for sustainable development of the industry, enough consideration has been given to precision livestock farming technologies, thereby enhancing livestock productivity, and their health/welfare monitoring and management [7]. Moreover, several studies have been conducted on the advancement of livestock industry using state-of-the-art technological methods like AI-driven models, data analytics, and automation, thereby transforming livestock farming, boosting efficiency and productivity, for onward satisfaction of ever-increasing market demands while maintaining animal welfare [8,9,10,11,12].
Several animal farming state-of-the-art technological models are based on CNNs, one of the most popular and widely used cutting edge deep learning algorithms for processing and analyzing visual data (like images and videos) automatically without the need for manual feature engineering [13,14,15,16,17].
With the rise of precision agriculture and smart farming technologies, CNNs have become central to modern animal farming, useful for animal health monitoring, automated animal identification, behavior analysis, livestock counting and inventory, weight estimation, monitoring of body condition scores, detection of environmental issues, and smart feeding systems [13]. They also offer several benefits to farmers, including increased productivity, better animal welfare, reduced labor costs, more data-driven decisions, and early interventions that lower medical costs [14]. In furtherance of CNN applications in modern animal farming, this review contributes to the existing literature on CNNs by providing a comprehensive overview of recent research advancements, targeting recent peer-reviewed studies on CNN applications in livestock farming, covering their management and health monitoring. Likewise, the review aims to fill gaps in existing smart agricultural technologies, where continuous modifications (which often turn out to be costly, time-consuming, and inefficient) are required when employing them for animal husbandry solutions, including animal detection, localization, tracking, and behavioral monitoring. Moreover, the lack of consolidated knowledge on CNNs in animal science is another gap this paper aims to address.

2. Relationship Among CNNs, Computer Vision, and Livestock Welfare

Welfare is a term used for something that aids or promotes the well-being of another thing, such as animals, in this context. Animal welfare in animal farming is as important as the welfare of the farmers themselves. Mathematically, this can be expressed as follows:
$\text{Livestock Welfare } (LW) \propto CV(\mathrm{CNN}(I, \theta))$ (15)
LW = livestock welfare;
∝ = “is proportional to”;
CV = computer vision pipeline (including preprocessing, segmentation, detection, etc.);
CNN (I, θ) = convolutional neural network model applied to image data;
I = input image(s) from sensors/cameras on the farm;
θ = trained model parameters.
The interpretation of the above mathematical expressions is as follows:
(a) CNNs are the core computational models extracting patterns such as body posture and disease signs from raw images.
(b) Computer vision systems use CNNs to automate monitoring tasks.
(c) Improved computer vision performance, enabled by accurate CNN models, leads to enhanced livestock welfare through early detection of problems and optimized animal care.
Advocacy for animal welfare has been most pronounced in industrialized Western countries in the early 21st century, particularly those with strong animal welfare movements, where public attention and debate are growing around the conditions of beef and dairy cattle. Some of the most prominent countries where these discussions gained traction include the United States, the United Kingdom, the European Union (especially Germany, the Netherlands, and Denmark), Australia, and New Zealand [18,19,20]. Beef and dairy products are consumed globally, whether for commercial or subsistence purposes, making their market highly internationalized and welfare a global issue.
As opined by [21], livestock welfare can range from ideal conditions to severe suffering, depending on the circumstances, spanning diseases such as mastitis and lameness, body injuries such as fractures and wounds, and behavioral issues such as pain, aggression, fear, distress, reduced feeding and drinking, and restlessness or changes in mating, standing, and resting patterns at different ages. Several well-established measures are used in judging livestock welfare. Globally, the Five Freedoms are the most common benchmark, formulated by the Farm Animal Welfare Council (FAWC) of the United Kingdom and now widely accepted in animal welfare science. The Five Freedoms are as follows:
(1) Freedom from hunger and thirst.
(2) Freedom from discomfort.
(3) Freedom from pain, injury, or disease.
(4) Freedom to express normal behavior.
(5) Freedom from fear and distress.
More recently, many experts have extended these benchmarks with newer frameworks such as the five-domains model, which focuses on (1) nutrition, (2) environment, (3) health, (4) behavior, and (5) mental state. The five domains focus more on positive experiences (not just the absence of suffering), for example, granting animals the feeling of curiosity, pleasure, or satisfaction. In practice, the assessment of livestock welfare is based on measurable indicators such as body condition scores, disease prevalence, injury rates, mortality and morbidity rates, behavior observations such as social interaction, stereotypies like pacing, and resource availability (food, water, bedding, and shelter) [22]. Good practice of animal welfare is not only beneficial to the owner, but to the animals, the environment, and the consumers of animal products (i.e., meat and dairy products). The quantity and quality of meat and milk produced by beef cattle and dairy cattle, respectively, can be negatively affected when little or no attention is given to cattle welfare [23].
Likewise, poor and unhygienic practices when slaughtering cattle and the like can lead to contaminated and poorer quality meat [24,25]. However, it is difficult, in real-farm settings, to assess the behavior (ethological) and normal functioning (physiological) of animals in their natural habitat [26,27]. Moreover, with the increase in livestock production and animal welfare sensitization, it has become more challenging to attain desirable animal productivity [28], further rendering animal monitoring by human surveillance impracticable [29]. Recently, different branches of artificial intelligence have sprung up, providing solutions to the numerous challenges confronting animal farming. Among the solutions are various state-of-the-art technologies for precision livestock farming, such as computer vision-based systems, used for livestock products evaluation, and offering real-time capabilities for livestock monitoring systems [30,31,32,33].
Models and systems driven by computer vision have the capability of handling the above-mentioned challenges by providing high-performing automated systems for animal welfare monitoring [34]. The development of computer vision technologies for livestock monitoring has been explored by several studies, solving issues related to welfare and management [35,36,37]. Computer vision is also widely applied to livestock monitoring [38,39,40]. The CNN, on the other hand, is a deep learning model, a special kind of neural network widely applied by the scientific community and used mainly for image processing and computer vision tasks, i.e., handling data with a grid-like structure such as images. At a high level, the input data are scanned with filters by convolutional layers to detect patterns such as edges, textures, or more complex shapes. The data are downsampled by the pooling layers to reduce their size for faster processing. Predictions are made by the fully connected layers (near the end) based on the extracted features [41].

3. Fundamentals of CNNs

CNNs are artificial neural networks, popular for their capabilities to handle a wide range of artificial intelligence tasks. CNNs are primarily applied in computer vision and image processing for image analysis and classification [42]. The fundamentals of CNNs include the following:
(a) Input layer, which is (i) the layer through which raw data (usually images, such as 28 × 28 grayscale or 224 × 224 color images) are taken in, and (ii) each image is often represented in the form of a tensor (height × width × channels).
(b) Convolutional layer, which is (i) the layer responsible for filters/kernels (small matrices, like 3 × 3, or 5 × 5) application over the input; (ii) different features (e.g., edges, textures) are detected by each trained filter, and (iii) the output is referred to as a feature map.
(c) Activation function (usually rectified linear unit, ReLU), which is (i) a non-linear activation like ReLU (f(x) = max (0, x)) applied after convolution, and (ii) the added non-linearity helps the model in learning complex patterns.
(d) Pooling layer, which is (i) the layer responsible for reducing feature map size (downsampling); (ii) the common type is max pooling, which takes the maximum value in each region, and (iii) by this, computation is reduced, thereby helping the model become position invariant.
(e) Fully connected (dense) layers, which are (i) the layers in which the data are flattened into a vector after several convolution + pooling operations, and (ii) the final predictions (e.g., which class an image belongs to) are made by the dense layers at the end.
(f) For the output layer, (i) CNNs typically use Softmax activation to output probabilities for each class in classification tasks, and (ii) for regression tasks, a single numeric output may be used.
(g) For training, (i) backpropagation and gradient descent are used for CNNs’ training, and (ii) the optimization is guided by loss functions (like cross-entropy for classification) [43,44,45].
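Steps (a)–(g) can be traced end to end with a deliberately tiny, NumPy-free forward pass; the 5 × 5 "image", the hand-picked kernel, and the dense weights below are all made up for illustration, and training (step (g)) is omitted:

```python
import math

def relu(v):
    return [max(0.0, x) for x in v]

def conv_valid(X, K):
    """(b) Single-channel valid convolution of image X with square kernel K."""
    m = len(K)
    return [[sum(K[a][b] * X[i + a][j + b] for a in range(m) for b in range(m))
             for j in range(len(X[0]) - m + 1)]
            for i in range(len(X) - m + 1)]

def max_pool(F, s=2):
    """(d) Non-overlapping s x s max pooling."""
    return [[max(F[i + a][j + b] for a in range(s) for b in range(s))
             for j in range(0, len(F[0]) - s + 1, s)]
            for i in range(0, len(F) - s + 1, s)]

def softmax(logits):
    """(f) Turn logits into class probabilities."""
    e = [math.exp(v) for v in logits]
    s = sum(e)
    return [v / s for v in e]

# (a) input layer: a 5x5 "image" with a vertical stripe
X = [[0, 0, 1, 0, 0]] * 5
# (b) convolution with a hand-picked 2x2 edge kernel
K = [[1, -1],
     [1, -1]]
F = conv_valid(X, K)              # 4x4 feature map
F = [relu(row) for row in F]      # (c) ReLU non-linearity
P = max_pool(F)                   # (d) pooled 2x2 map
flat = [v for row in P for v in row]            # (e) flatten
Wd = [[0.5, 0.5, 0.5, 0.5],                     # (e)/(f) dense weights, class 0
      [-0.5, -0.5, -0.5, -0.5]]                 # class 1
logits = [sum(w * v for w, v in zip(row, flat)) for row in Wd]
probs = softmax(logits)
print([round(p, 3) for p in probs])             # [0.982, 0.018]
```

In practice, the kernel and dense weights would be learned by backpropagation rather than chosen by hand.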
As far back as the early 21st century, there have been many innovative advancements and modifications in the learning techniques and architecture of CNNs, scalable enough to handle complex and heterogeneous problems [46]. Figure 1 illustrates a text-style description of CNN architecture, from input to output. Different domains of agriculture, such as precision livestock farming, have applied CNNs to tasks like animal detection and tracking, image segmentation, and classification, showing exemplary performance [47,48,49,50]. Table 1 presents the CNN models commonly applied in animal farming, detailing their purpose, benefits, and applications in the livestock industry. Other, unlisted models are either rarely applied or yet to be applied in animal farming.
More importantly, a critical observation reveals that while these models have extensively contributed to livestock sectors such as cattle and pig farming for tasks like health monitoring, behavioral analysis, and anomaly detection, less research has been conducted on the challenges involved when applying CNNs in animal husbandry.
As presented in Table 1, precision livestock farming involves employing advanced technologies and data-driven tools for livestock monitoring and management and the improvement of livestock’s health, welfare, productivity, and environmental impact in real-time. It is part of the broader concept of precision agriculture, focused specifically on animals. However, a critical observation reveals that CNN models have shown strong potential for image- and video-based tasks in livestock farming, but they are not yet widely adopted on farms due to several practical and technical barriers such as infrastructure limitations on farms, high cost and complexity, data collection and annotation challenges, environmental variability, generalization and robustness issues, and the preference for simpler solutions.
Therefore, this study advocates wider adoption of CNN models in livestock farming by reducing the cost and complexity of on-farm infrastructure and addressing data collection and annotation challenges, generalization and robustness issues, and environmental variability. This advocacy builds on the progress already demonstrated by CNN models in livestock farming, advancing not only the impact of these technologies but also their sustainability for livestock welfare and production, to the benefit of farmers and consumers.

4. Application of CNNs in Livestock Health Monitoring

Globally, livestock farming and management play a significant role in agriculture, providing an immeasurable contribution to the supply of meat and dairy products. Timely and effective health monitoring facilitates disease detection, intervention, and treatment, thereby preventing potential disease outbreaks, reducing mortality rates, and boosting food safety and economic gain [87]. Physical symptoms of animal disease or injury can be detected by CNNs through image or video analysis of the animal. Common use cases are as follows:
(a) Disease detection from visual cues [88], which includes (i) mastitis detection in dairy cows from udder swelling and posture changes, (ii) foot rot or lameness detection from abnormal gait or leg swelling in sheep or cattle, and (iii) skin lesions or dermatological conditions in pigs.
(b) Lameness and gait analysis [89], which includes (i) detection of uneven weight distribution or irregular walking patterns, and (ii) tracking cow or pig movement using pose estimation.
(c) Respiratory disease monitoring (via cough or facial features) [90], which includes (i) analysis of facial swelling or discharges and sound-based features for early respiratory illness detection, (ii) detection of swine respiratory diseases through nasal discharge and facial analysis, and (iii) detection of cough using spectrograms fed into 2D CNNs.
(d) Behavior-based health monitoring [91], which includes (i) reduced feeding or drinking behavior (digestive issues) monitoring, (ii) isolation or increased lying time (due to lameness or stress) monitoring, and (iii) monitoring of aggression or tail biting in pigs (due to poor welfare conditions).
(e) Body condition scoring (BCS) [92], which includes (i) automating scoring in dairy cows via side and rear-view images, and (ii) monitoring weight loss in sheep or goats.
(f) Facial and thermal image analysis [93], which includes (i) stress recognition in cattle via eye temperature and facial expressions, and (ii) fever detection in livestock using thermal images processed with CNNs.
Below are the mathematical expressions representing the application of CNNs in livestock health monitoring.
Let
$I = \{I_1, I_2, \ldots, I_n\}$: set of input images or video frames of livestock; (16)
$f_\theta(\cdot)$: CNN model with parameters $\theta$; (17)
$y_i \in Y$: ground-truth health status label for image $I_i$; (18)
$\hat{y}_i = f_\theta(I_i)$: predicted health status (e.g., healthy, lame, injured); (19)
$L(y_i, \hat{y}_i)$: loss function (e.g., cross-entropy for classification). (20)
The objective of CNN-based health monitoring is represented as
$\min_\theta \; \frac{1}{n} \sum_{i=1}^{n} L(y_i, f_\theta(I_i))$ (21)
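The minimization above can be illustrated with a one-parameter toy model fitted by gradient descent on the average loss; real CNNs optimize millions of parameters with the same principle (the data and learning rate here are invented for the sketch):

```python
# Toy "model": f_theta(x) = theta * x with squared-error loss,
# trained to minimize the average loss over all samples.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, label); true theta = 2

theta, lr = 0.0, 0.05
for _ in range(200):
    # gradient of (1/n) * sum (theta*x - y)^2 with respect to theta
    grad = sum(2 * (theta * x - y) * x for x, y in data) / len(data)
    theta -= lr * grad                         # gradient-descent update
print(round(theta, 3))                         # 2.0
```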
The training process is defined by Equation (21); the CNN parameters $\theta$ are optimized to minimize the average loss between the predicted and actual health conditions across all input images. If the goal is to classify whether an animal is diseased or not (binary classification), using a Softmax or sigmoid output, the binary cross-entropy loss would be
$L(y_i, \hat{y}_i) = -\left[ y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i) \right]$ (22)
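The binary cross-entropy above translates directly into code; the ε-clipping added here (an implementation detail, not part of the equation) avoids log(0):

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """BCE for one sample: -[y*log(p) + (1-y)*log(1-p)]."""
    p = min(max(y_pred, eps), 1 - eps)   # clip prediction away from 0 and 1
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# A confident correct prediction is cheap; a confident wrong one is costly.
print(round(binary_cross_entropy(1, 0.9), 4))   # 0.1054
print(round(binary_cross_entropy(1, 0.1), 4))   # 2.3026
```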
The mechanism used by CNNs for feature extraction is expressed as follows.
Each image $I_i$ passes through convolutional layers:
$F^{(l)} = \sigma\left(W^{(l)} * F^{(l-1)} + b^{(l)}\right)$ (23)
where
$F^{(l)}$: feature map at layer $l$;
$W^{(l)}, b^{(l)}$: learnable weights and bias of layer $l$;
$*$: convolution operator;
$\sigma$: activation function (e.g., ReLU).
Notable challenges and limitations in applying CNNs to livestock health monitoring are as follows:
(a) Difficulties in obtaining labeled datasets of healthy and unhealthy animals.
(b) Occlusions and noise such as dirt, lighting, and crowding affect the quality of images needed by CNNs.
(c) Models trained on one farm may not generalize to others due to breed, environment, etc.
(d) Real-time deployment requires edge computing or robust cloud infrastructure.

5. CNN Applications in Livestock Behavior Classification and Monitoring

Livestock behavior is monitored for many reasons. Animal behavior is a vital indicator of many conditions on animal farms, including health and disease, stress or pain, welfare, reproductive status, and feeding efficiency [94]. Traditional animal behavior monitoring relies on manual methods, which are time-consuming, subjective, and labor-intensive [95]. With CNNs, however, animal behavior can be recognized automatically and in real-time, especially when the CNN is applied to video and image data from barn cameras or drones. CNNs are ideal for the extraction of spatial patterns in visual data [96]. In behavior monitoring, CNNs are applied to the following:
(a) Still image-based behavior classification, which includes feeding CNNs labeled images of animals performing different behaviors, such as feeding, resting, and standing, and the models classify each image into a behavioral category.
(b) Video-based temporal behavior recognition, in which CNNs extract features from each frame, often combined with RNNs or LSTMs to capture temporal patterns; this is used for continuous monitoring of animal activity over time.
(c) Object detection of behavior events, which includes detection and classification of animals and their actions (such as cow mounting another and pig biting tail) by YOLO or Faster R-CNN. Each detected object is labeled with a behavior type and tracked across frames.
(d) Pose estimation and gait analysis, which includes using keypoint detection CNNs such as OpenPose and DeepLabCut for body parts identification to infer posture and movement. CNNs are often used for detection of behaviors like lying, standing, walking, or mounting [97].
Some real-world applications and research studies include the following:
(a) Pig behavior monitoring, which includes (i) lying versus standing versus feeding detection using CNNs trained on RGB images, (ii) detection of aggressive behavior and tail-biting with YOLOv3, and (iii) classification of behavior sequences, such as stress-free activity versus stress-induced activity, using CNNs + LSTM.
(b) Cow behavior monitoring, which includes (i) estrus detection by CNNs using identification of mounting and restless movement, (ii) rumination and chewing detection from facial motion using video, and (iii) feeding and drinking patterns recognition in dairy cows with YOLO-based object tracking [98]. Table 2 presents the common architecture of CNNs employed in animal farming.
For multimodal integration, CNN-based vision can be combined with audio, such as coughs and distress calls using spectrograms processed by CNNs, and wearable sensors, such as accelerometers and GPS for motion or location data addition [104]. Moreover, CNN-based vision can be combined with thermal cameras for the detection of heat stress or fever-related behavioral changes. The classification and monitoring of livestock behavior using CNNs demonstrates significant progress in animal farming, offering significant and lasting positive changes in various applications [105]. This progress underlines the importance of CNNs in livestock monitoring, facilitating the accurate detection of health issues. The challenges in monitoring the behavior of individual animals using CNNs are presented in Table 3.
The mathematical formulation tailored to CNNs employed in livestock behavior classification and monitoring tasks is expressed in Equation (24). This formulation outlines the general components and processing pipeline of CNNs for the analysis of livestock images or video frames. The mathematical expressions are as follows:
$X \in \mathbb{R}^{H \times W \times C}$: input image of height $H$, width $W$, and $C$ channels (e.g., RGB); (24)
$f_l(\cdot)$: the function representing the operations in the $l$-th CNN layer; (25)
$\theta = \{\theta_1, \theta_2, \ldots, \theta_L\}$: the set of parameters (weights and biases) across $L$ layers. (26)
Convolutional filters and non-linearities are applied by the CNN layers for the extraction of spatial features:
$Z_l = \sigma(W_l * Z_{l-1} + b_l), \quad \text{for } l = 1, 2, \ldots, L$ (27)
where
$W_l$: convolutional filter at layer $l$;
$b_l$: bias term;
$\sigma(\cdot)$: activation function (e.g., ReLU);
$*$: convolution operator;
$Z_0 = X$: input image.
After convolution and pooling, the output is flattened and passed through fully connected layers:
$h = \phi\left(W_{fc}\,\mathrm{vec}(Z_L) + b_{fc}\right)$ (28)
where
$\mathrm{vec}(\cdot)$: vectorization (flattening);
$W_{fc}, b_{fc}$: weights and bias of the fully connected layer;
$\phi(\cdot)$: non-linearity.
Assuming we classify into K behaviors such as eating, lying, standing, and aggressive, the Softmax for behavior classification will be
$\hat{y}_k = \frac{e^{h_k}}{\sum_{j=1}^{K} e^{h_j}}, \quad \text{for } k = 1, \ldots, K$ (29)
where $\hat{y}_k$ is the predicted probability for class $k$.
Given the ground-truth label $y \in \{1, \ldots, K\}$, the loss is
$L(\theta) = -\sum_{k=1}^{K} \mathbb{1}[y = k] \log(\hat{y}_k)$ (30)
The above equations can be interpreted in livestock behavior monitoring as follows:
$X$: a frame/image of a pig or sheep on the farm;
the CNN layers detect posture, body orientation, and relative location;
the output $\hat{y}$ is a probability distribution over behaviors (e.g., standing = 0.75, lying = 0.2, eating = 0.05);
the model is trained to minimize $L(\theta)$ using gradient-descent methods such as Adam or SGD.
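The interpretation above can be traced with a minimal sketch of the classification head, from logits to a behavior label; the logit values are invented for illustration:

```python
import math

BEHAVIORS = ["standing", "lying", "eating"]

def softmax(logits):
    """Logits -> probabilities; max is subtracted for numerical stability."""
    m = max(logits)
    e = [math.exp(v - m) for v in logits]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy(probs, true_idx):
    """L = -log(p_true), the indicator form of the classification loss."""
    return -math.log(probs[true_idx])

logits = [2.0, 0.7, -0.8]               # hypothetical head outputs for one frame
probs = softmax(logits)
pred = BEHAVIORS[probs.index(max(probs))]
print(pred)                             # standing
print(round(cross_entropy(probs, 0), 3))  # low loss when "standing" is correct
```

During training, the loss is back-propagated through the whole network so that the correct behavior's probability rises.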

6. Livestock Management Through Localization and Tracking

Livestock management through localization and tracking is an essential aspect of modern precision agriculture. By tracking the behavior, location, and movement of individual animals in real-time, informed decisions related to safety, health, breeding, feeding, and overall productivity can be made by farmers. The agricultural sector has been revolutionized by the integration of localization and tracking technologies in livestock management, improving animal welfare, optimizing resource utilization, and reducing risks like theft and disease outbreaks. Among the common technologies usually employed are the following:
(a) Ultra-wideband systems, which offer accurate tracking capabilities within confined spaces like barns [106].
(b) Global navigation satellite systems combined with low-power wide-area networks, which facilitate real-time tracking of livestock across extensive grazing areas [107].
(c) Thermal imaging, which is a non-invasive monitoring technique employed for assessing vital signs in livestock [105].
(d) IoT and satellite connectivity, which provides a viable solution for livestock monitoring in areas with limited cellular coverage [108].
These systems enable health monitoring and behavioral analysis through geo-fencing, ensuring comprehensive livestock supervision even in remote regions. Various areas of application are as follows:
(a) Health monitoring, which involves continuous tracking of movement patterns and vital signs for early detection of illnesses, thereby improving the welfare of animals and reducing the costs of veterinary care [109].
(b) Theft prevention using GPS tracking devices, which have been instrumental in reducing stock theft, particularly in regions where livestock theft is a threat to smallholder farmers [110].
(c) Behavioral analysis using advanced systems like Starformer, which utilize transformer-based architectures for monitoring and analyzing livestock behaviors, assisting in identifying anomalies and enhancing farm management strategies [111].
The implementation of the abovementioned technologies in underdeveloped or remote areas may be hindered by the lack of necessary infrastructure, such as reliable internet connectivity [112].
Moreover, the vast amount of data generated necessitates robust data management and analysis tools to derive actionable insights. Adopting localization and tracking technologies in livestock management presents numerous benefits, including improved animal health monitoring, enhanced farm efficiency, and reduced losses from theft or disease [113]. However, it is crucial to address the challenges related to cost, infrastructure, and data management to ensure extensive and successful implementation of these systems. These challenges also motivate researchers to investigate the potential of CNN models, and their combination with thermal imaging technologies, in this field. Employing CNN models to analyze visual data from cameras may offer a non-intrusive method for estimating weight, reducing stress and labor costs while remaining scalable for continuous monitoring [114].
The potential benefits associated with these technologies underline the need for their development and refinement through continued research that addresses the abovementioned challenges and limitations for the overall improvement of livestock farming. The mathematical framework for livestock management through localization and tracking typically combines elements from object tracking, localization (e.g., with sensors or vision), and data association. A generalized mathematical representation is presented as follows.
For localization (position estimation) of livestock, let each animal be denoted by an index i ∈ {1, 2, …, N}, where N is the total number of animals.
Let the true position of animal i at time t be x_i(t) ∈ ℝ² (2D) or ℝ³ (3D).
Observed measurements (from cameras, GPS, RFID, etc.) are given as
z_i(t) = h(x_i(t)) + v_i(t)
where
h(⋅) is the observation function (e.g., projection to 2D from a camera);
v_i(t) ∼ N(0, R) is the measurement noise.
Using a motion model for the next state prediction (e.g., via a particle filter or Kalman Filter),
x_i(t + 1) = f(x_i(t), u_i(t)) + w_i(t)
where
f(⋅) is the motion function (e.g., constant velocity model);
u_i(t) is a control input (often 0 for free-moving livestock);
w_i(t) ∼ N(0, Q) is the process noise.
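As a minimal sketch (ours, not from the cited systems), the observation and motion models above can be written in NumPy, assuming an identity observation function h(⋅) and zero control input u_i(t):

```python
import numpy as np

def predict_position(x, v, dt=1.0):
    """Motion model f(.): constant-velocity prediction of the next state,
    x_i(t+1) = x_i(t) + v*dt, with the control input u_i(t) taken as zero."""
    return x + v * dt

def observe(x, noise_std, rng=None):
    """Observation model h(.): identity mapping from the 2D ground position
    to the measurement, plus Gaussian noise v_i(t) ~ N(0, R)."""
    if rng is None:
        rng = np.random.default_rng(0)
    return x + rng.normal(0.0, noise_std, size=x.shape)

# One animal moving at 0.5 m/s along the x-axis of a 2D pen.
x_t = np.array([2.0, 3.0])                 # true position at time t
vel = np.array([0.5, 0.0])                 # constant velocity
x_next = predict_position(x_t, vel)        # predicted position at t+1
z_next = observe(x_next, noise_std=0.0)    # noiseless measurement
```

With noise_std = 0 the measurement equals the predicted state [2.5, 3.0]; in practice the noise covariances Q and R would feed a Kalman or particle filter as described above.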
To link observations to the correct animal in multi-animal data association, a cost is assigned to each pairing:
Association cost: C(i, j) = ||z_j(t) − x̂_i(t)||²
where
x̂_i(t) is the predicted position of animal i;
z_j(t) is the j-th observation.
Assignment is solved using the Hungarian algorithm or greedy methods.
The objective function (goal) of tracking is to minimize the total tracking error over time:
min_{a_ij(t)} Σ_t Σ_i Σ_j a_ij(t) · ||z_j(t) − x̂_i(t)||²
Subject to
Σ_j a_ij(t) ≤ 1,  Σ_i a_ij(t) ≤ 1,  a_ij(t) ∈ {0, 1}
where a_ij(t) indicates the assignment of observation j to animal i.
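The association cost and the one-to-one constraints can be sketched with the greedy method mentioned above (a minimal NumPy illustration with hand-made positions; for an optimal assignment one would instead use a Hungarian-algorithm implementation):

```python
import numpy as np

def association_cost(z, x_hat):
    """C(i, j) = ||z_j(t) - x_hat_i(t)||^2 for every predicted position i
    (rows) and observation j (columns)."""
    diff = x_hat[:, None, :] - z[None, :, :]
    return np.sum(diff ** 2, axis=-1)

def greedy_assign(cost):
    """Greedy alternative to the Hungarian algorithm: repeatedly take the
    cheapest remaining (animal, observation) pair, enforcing the constraints
    that each animal and each observation is used at most once."""
    cost = cost.astype(float).copy()
    assignment = {}
    while np.isfinite(cost).any():
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        assignment[int(i)] = int(j)
        cost[i, :] = np.inf   # animal i is now assigned
        cost[:, j] = np.inf   # observation j is now consumed
    return assignment

x_hat = np.array([[0.0, 0.0], [5.0, 5.0]])  # predicted positions of 2 animals
z = np.array([[5.1, 4.9], [0.2, -0.1]])     # 2 detections, in shuffled order
assignment = greedy_assign(association_cost(z, x_hat))
# animal 0 is matched to observation 1, and animal 1 to observation 0
```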
However, the CNN-based localization output can be viewed as
x̂_i(t) = CNN(I(t))
where I(t) is the input image (or video frame), and the CNN learns a mapping from raw image pixels to livestock positions.
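One common way a CNN realizes this mapping is to emit a per-animal heatmap whose peak marks the position, so decoding reduces to an argmax (a minimal sketch; the heatmap below is hand-made, not the output of a trained network):

```python
import numpy as np

def decode_heatmap(heatmap):
    """Turn a CNN output heatmap (H x W) into an (x, y) position estimate
    by taking the location of the maximum activation."""
    row, col = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(col), int(row)   # (x, y) in image coordinates

# Hand-made 4x5 heatmap standing in for a trained network's output.
heatmap = np.zeros((4, 5))
heatmap[2, 3] = 1.0                    # peak at row 2, column 3
x_hat = decode_heatmap(heatmap)        # -> (3, 2)
```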

7. Detection and Classification of Livestock Diseases

The importance of CNNs in livestock management cannot be overstated, given their capability to detect and identify various diseases, facilitating early intervention and reducing the spread of infectious diseases [115]. These diseases pose significant threats to animal welfare, global food security, and the agricultural economy. Conventional diagnostic methods, which are oftentimes reliant on manual inspections and laboratory tests, can be labor-intensive, time-consuming, and subject to human error. With the advent of artificial intelligence, particularly computer vision and deep learning, disease detection and classification have been revolutionized, offering rapid, scalable, and accurate solutions for livestock health management [116]. Different deep learning models have been applied for livestock disease detection, such as lumpy skin disease (LSD), a viral disease that affects cattle.
Lumpy skin disease is characterized by fever, the development of nodules on the skin, mucous membranes, and internal organs, and other clinical signs. Recent studies have leveraged CNNs for the detection and classification of lumpy skin disease from images. In one study, 10 pretrained models, including VGG16, MobileNetV2, DenseNet201, and InceptionV3, were evaluated in a comparative analysis: VGG16 achieved an accuracy of 96.07% on one dataset, while MobileNetV2 achieved 96.39% on another, underlining their effectiveness in LSD detection [59]. In another study, MobileNetV2 optimized with the RMSProp optimizer achieved 95% accuracy in classifying healthy and LSD-affected cattle images, outperforming existing benchmarks by 4–10% [117].
Poultry farming faces challenges from diseases such as coccidiosis, Salmonella, and Newcastle disease, and deep learning models have been developed to detect these diseases from fecal images. A system combining YOLO-V3 for object detection and ResNet50 for classification was trained on 10,500 chicken fecal images: YOLO-V3 achieved a mean average precision of 87.48% for detecting regions of interest, while ResNet50 attained a classification accuracy of 98.7% [118]. Skin diseases in livestock such as cattle, sheep, and goats can also be identified using deep learning models. A study applied EfficientNetB7, MobileNetV2, and DenseNet201 models to classify skin diseases, achieving 99.01%, 95.31%, and 97.08% accuracy, respectively, with EfficientNetB7 demonstrating superior performance in disease detection [119].
Another method is infrared thermography (IRT), a non-invasive technique that captures thermal images to detect physiological changes associated with diseases. It has been applied in cattle health assessment, providing a stress-free alternative to conventional diagnostic methods. A novel approach employed deep contrastive learning for image retrieval in the detection of infectious diseases in cattle: using models such as ResNet and ResNeXt, the system achieved real-time detection with high accuracy, indicating the capability of contrastive learning in veterinary diagnostics. Beyond image-based diagnosis, deep learning has been applied to genomic data for disease detection. One study classified genome sequences using graph representations and deep neural networks, achieving approximately 89.7% accuracy in identifying pathogen signatures in bovine metagenome sequences. The recent applications of CNN models for the detection and classification of livestock diseases are presented in Table 4.
Detection and classification of livestock diseases is typically modeled mathematically as a supervised learning problem solved using CNNs. The general mathematical formulation is represented by the following equations.
Let
X = {x_1, x_2, …, x_N} be the set of input images,
where the images are of animals showing disease symptoms, and
Y = {y_1, y_2, …, y_N}
where y_i ∈ {1, 2, …, C} is the label corresponding to one of C disease classes (or “healthy”, if included). The CNN learns a function:
f_θ : ℝ^(H×W×D) → {1, 2, …, C}
where
H, W, and D are the height, width, and depth (channels) of the image.
θ are the learnable parameters (weights and biases) of the CNN.
f_θ(x_i) = ŷ_i is the predicted disease class of image x_i.
The objective is to minimize classification error using a loss function:
L(θ) = −Σ_{i=1}^{N} Σ_{c=1}^{C} 1(y_i = c) · log p_c(x_i; θ)
where
1(y_i = c) is 1 if image x_i belongs to class c, and 0 otherwise;
p_c(x_i; θ) is the probability assigned to class c by the CNN (usually via a Softmax output).
If disease detection includes localization by bounding box regression, the output includes ŷ_i (class) and b̂_i = (x, y, w, h) (bounding box).
The total loss combines the classification loss and the localization loss:
L_total = L_cls + λ · L_loc
where
L_cls is the classification loss (e.g., cross-entropy);
L_loc is the localization loss (e.g., smooth L1 or IoU loss);
λ is the trade-off hyperparameter.
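A minimal NumPy sketch of this combined loss (illustrative values only; λ = 1 and a single image are assumed):

```python
import numpy as np

def cross_entropy(p, y):
    """L_cls: negative log-probability of the true class y, given the
    Softmax probabilities p of one image."""
    return -np.log(p[y])

def smooth_l1(pred_box, true_box):
    """L_loc: smooth L1 loss between the predicted and ground-truth
    bounding boxes (x, y, w, h)."""
    d = np.abs(np.asarray(pred_box) - np.asarray(true_box))
    return float(np.sum(np.where(d < 1.0, 0.5 * d ** 2, d - 0.5)))

def total_loss(p, y, pred_box, true_box, lam=1.0):
    """L_total = L_cls + lambda * L_loc."""
    return cross_entropy(p, y) + lam * smooth_l1(pred_box, true_box)

p = np.array([0.1, 0.8, 0.1])   # Softmax output over 3 disease classes
loss = total_loss(p, 1, [10.0, 10.0, 5.0, 5.0], [10.5, 10.0, 5.0, 5.0])
# L_cls = -ln(0.8), L_loc = 0.5 * 0.5^2 = 0.125
```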
For model optimization, the CNN parameters θ are updated by gradient descent:
θ ← θ − η ∇_θ L(θ)
where η is the learning rate.
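This optimization step can be exercised end to end on a toy stand-in for the CNN classification head, a linear Softmax classifier, to show one gradient descent step reducing the cross-entropy loss (an illustrative sketch, not a training recipe from the reviewed works):

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())   # numerically stable Softmax
    return e / e.sum()

def loss_and_grad(theta, x, y):
    """Cross-entropy loss L(theta) and its gradient for a linear Softmax
    classifier with p_c(x; theta) = softmax(theta @ x)_c."""
    p = softmax(theta @ x)
    loss = -np.log(p[y])
    grad = np.outer(p - np.eye(len(p))[y], x)   # dL/dtheta
    return loss, grad

rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 4))    # 3 disease classes, 4 input features
x = rng.normal(size=4)             # one feature vector (stand-in for an image)
y = 2                              # true class label

eta = 0.1                          # learning rate
loss_before, grad = loss_and_grad(theta, x, y)
theta = theta - eta * grad         # theta <- theta - eta * grad_theta L(theta)
loss_after, _ = loss_and_grad(theta, x, y)
assert loss_after < loss_before    # the update step reduces the loss
```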

8. Challenges in Applying CNNs in Animal Husbandry

The challenges in applying CNNs in animal husbandry, particularly for tasks such as behavior monitoring, welfare assessment, and disease detection, are enumerated as follows:
(a) Imbalanced and limited datasets [122,123], including (i) the scarcity of labeled, high-quality livestock image datasets, such as diseased versus healthy images, different breeds, and varying conditions, and (ii) overfitting, poor generalization, and unreliable predictions due to small or biased datasets. CNNs trained on a narrow dataset may become less effective elsewhere because disease symptoms may vary across breeds or regions.
(b) Lack of standardization [124], including (i) non-standardized image acquisition conditions such as camera type, lighting, and distance, and (ii) variability that affects the generalizability of the model across different farms or geographic locations.
(c) Complex real-world conditions [125], where livestock are kept in dynamic and uncontrolled environments with occlusions by other animals and variable outdoor lighting, reducing the detection and classification accuracy of CNNs under such variability.
(d) Computational resource constraints [126], including (i) the challenge of limited processing power and energy by edge devices on farms, and (ii) the need for significant model compression or hardware upgrades when deploying deep CNN models locally for real-time applications.
(e) Interpretability and trust [127], including (i) the challenge of CNN models being black-box models, and (ii) lack of trust by farmers and veterinarians in the capability of CNNs for diagnosis or classification without clear explanations.
(f) Integration with farm management systems [128], where CNN systems are often siloed rather than integrated with herd management software or IoT systems, limiting actionable insights and real-time decision making on farms.
Our review reveals several key gaps and limitations in the current research that warrant further investigation and development. Addressing these gaps is essential for advancing the application of CNNs in animal husbandry, improving animal welfare, and enhancing farm management practices. Among the gaps found in the current research are the following:
(a) Limited species coverage;
(b) Data scarcity and quality issues;
(c) Technical challenges in model development;
(d) Integration with other technologies;
(e) Ethical and welfare considerations;
(f) Lack of standardization;
(g) Underutilization in real-time monitoring.
In summary, while CNN models look promising for advancing animal husbandry, their real-world applications are limited by data, hardware, and integration challenges. Future research must ensure the adaptability, explainability, and accessibility of these systems so that they deliver practical impact on farms, for example by exploring few-shot learning for animal identification or integrating CNNs with IoT sensors for real-time farm monitoring. These and other mitigation strategies are potential solutions to the challenges presented in this review, including those confronting CNN-based behavior monitoring as presented in Table 3.

9. Conclusions

Significant advancements have been made by CNN models in livestock farming, improving livestock health and farm productivity through real-time monitoring and management. In this review, a comprehensive overview of these advancements has been provided, establishing the importance of CNN models in detecting various diseases that affect livestock and in analyzing and managing their behavior. The application of CNNs in livestock farming facilitates the timely and accurate identification and management of health-related issues and behaviors, thereby enhancing both the welfare of individual livestock and farm productivity. However, a critical observation reveals that although CNN models have shown strong potential for image- and video-based tasks in livestock farming, they are not yet widely adopted on farms due to several practical and technical barriers, such as the following:
(a) Infrastructure limitations on farms;
(b) High cost and complexity;
(c) Data collection and annotation challenges;
(d) Environmental variability;
(e) Generalization and robustness issues;
(f) Preference for simpler solutions.
Furthermore, the widespread and effective application of CNNs is limited by several notable challenges, which are as follows:
(a) Availability and quality of data, including limited annotated datasets, data variability, ethical and privacy concerns;
(b) Constraints of real-time processing, including high computational requirements, and latency issues;
(c) Environmental and operational variability, including (i) outdoor farm conditions such as dust, dirt, lighting changes, and weather variations, which can reduce the quality of the image and CNN performance, and (ii) dynamic animal movement, which makes consistent image capture and tracking more difficult;
(d) Transferability and generalization issues, where CNNs trained on data from one particular species or farm may find it difficult to generalize well to others due to differences in breed, equipment, and management practices;
(e) Difficulty in integrating CNNs with farm management systems, sensors, and IoT devices;
(f) Adoption barriers, as farmers may find it difficult to adopt AI technologies due to training needs, cost, or lack of trust in automated decisions;
(g) Lack of transparency in CNN operation, which makes it difficult for users to trust or understand predictions.
Therefore, this study advocates wider adoption of CNN models in livestock farming, including reducing the cost and complexity of on-farm infrastructure and addressing data collection, annotation, generalization, and robustness issues as well as environmental variability. This advocacy acknowledges the progress already demonstrated by CNN models in livestock farming, advancing not only the impact of these technologies but also their sustainability for livestock welfare and production, for the benefit of farmers and consumers. By addressing the abovementioned challenges and embracing these technologies, CNNs will not only meaningfully impact livestock health monitoring but also define new benchmarks for precision livestock farming.

Author Contributions

Conceptualization, R.-W.B.; methodology, R.-W.B., R.O.O., P.A.O., E.A.v.W. and C.T.; software, R.-W.B., P.A.O., E.A.v.W. and C.T.; validation, R.-W.B., P.A.O. and C.T.; formal analysis, R.-W.B., P.A.O. and C.T.; investigation, R.-W.B.; resources, P.A.O. and C.T.; data curation, R.-W.B.; writing—original draft preparation, R.-W.B.; writing—review and editing, R.-W.B.; visualization, R.-W.B. and C.T.; supervision, P.A.O. and C.T.; project administration, E.A.v.W. and P.A.O.; funding acquisition, P.A.O., E.A.v.W. and C.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, G.; Huang, Y.; Chen, Z.; Chesser, G.D., Jr.; Purswell, J.L.; Linhoss, J.; Zhao, Y. Practices and applications of convolutional neural network-based computer vision systems in animal farming: A review. Sensors 2021, 21, 1492. [Google Scholar] [CrossRef] [PubMed]
  2. Toldrá, F.; Mora, L.; Reig, M. Current developments in meat by-products. In New Aspects of Meat Quality; Woodhead Publishing: Sawston, UK, 2022; pp. 649–665. [Google Scholar]
  3. Derbib, T.; Daru, G.; Kehali, S.; Alemu, S. The Role of Working Animals and Their Welfare Issues in Ethiopia: A Systematic Review and Meta-Analysis. Vet. Med. Int. 2024, 2024, 7031990. [Google Scholar] [CrossRef] [PubMed]
  4. FAS.USDA.GOV. Available online: https://www.fas.usda.gov/data/production/commodity/0111000?utm_source/ (accessed on 5 May 2025).
  5. Mehrabi, Z.; Gill, M.; Wijk, M.V.; Herrero, M.; Ramankutty, N. Livestock policy for sustainable development. Nat. Food 2020, 1, 160–165. [Google Scholar] [CrossRef]
  6. Michalk, D.L.; Kemp, D.R.; Badgery, W.B.; Wu, J.; Zhang, Y.; Thomassin, P.J. Sustainability and future food security—A global perspective for livestock production. Land Degrad. Dev. 2019, 30, 561–573. [Google Scholar] [CrossRef]
  7. Ponnampalam, E.N.; Holman, B.W. Sustainability II: Sustainable animal production and meat processing. In Lawrie’s Meat Science; Woodhead Publishing: Sawston, UK, 2023; pp. 727–798. [Google Scholar]
  8. Akhigbe, B.I.; Munir, K.; Akinade, O.; Akanbi, L.; Oyedele, L.O. IoT technologies for livestock management: A review of present status, opportunities, and future trends. Big Data Cogn. Comput. 2021, 5, 10. [Google Scholar] [CrossRef]
  9. Vlaicu, P.A.; Gras, M.A.; Untea, A.E.; Lefter, N.A.; Rotar, M.C. Advancing livestock technology: Intelligent systemization for enhanced productivity, welfare, and sustainability. AgriEngineering 2024, 6, 1479–1496. [Google Scholar] [CrossRef]
  10. Neethirajan, S. Artificial intelligence and sensor innovations: Enhancing livestock welfare with a human-centric approach. Hum. Centric Intell. Syst. 2024, 4, 77–92. [Google Scholar] [CrossRef]
  11. Rashid, A.B.; Kausik, A.K. AI revolutionizing industries worldwide: A comprehensive overview of its diverse applications. Hybrid Adv. 2024, 7, 100277. [Google Scholar] [CrossRef]
  12. Bhaskaran, H.S.; Gordon, M.; Neethirajan, S. Development of a cloud-based IoT system for livestock health monitoring using AWS and python. Smart Agric. Technol. 2024, 9, 100524. [Google Scholar] [CrossRef]
  13. Bello, R.W.; Mohamed, A.S.A.; Talib, A.Z.; Sani, S.; Ab Wahab, M.N. Behavior recognition of group-ranched cattle from video sequences using deep learning. Indian J. Anim. Res. 2022, 56, 505–512. [Google Scholar] [CrossRef]
  14. Gikunda, P.K.; Jouandeau, N. State-of-the-art convolutional neural networks for smart farms: A review. In Intelligent Computing: Proceedings of the 2019 Computing Conference; Springer International Publishing: Cham, Switzerland, 2019; Volume 1, pp. 763–775. [Google Scholar]
  15. Ferrante, G.S.; Rodrigues, F.M.; Andrade, F.R.; Goularte, R.; Meneguette, R.I. Understanding the state of the Art in Animal detection and classification using computer vision technologies. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021; pp. 3056–3065. [Google Scholar]
  16. Kamilaris, A.; Prenafeta-Boldú, F.X. A review of the use of convolutional neural networks in agriculture. J. Agric. Sci. 2018, 156, 312–322. [Google Scholar] [CrossRef]
  17. Fatoki, O.; Tu, C.; Hans, R.; Bello, R.W. Role of computer vision and deep learning algorithms in livestock behavioural recognition: A state-of-the-art-review. Edelweiss Appl. Sci. Technol. 2024, 8, 6416–6430. [Google Scholar] [CrossRef]
  18. Hocquette, J.F.; Ellies-Oury, M.P.; Lherm, M.; Pineau, C.; Deblitz, C.; Farmer, L. Current situation and future prospects for beef production in Europe—A review. Asian Australas. J. Anim. Sci. 2018, 31, 1017. [Google Scholar] [CrossRef]
  19. Hocquette, J.F.; Chatellier, V. Prospects for the European beef sector over the next 30 years. Anim. Front. 2011, 1, 20–28. [Google Scholar] [CrossRef]
  20. Drouillard, J.S. Current situation and future trends for beef production in the United States of America—A review. Asian Australas. J. Anim. Sci. 2018, 31, 1007. [Google Scholar] [CrossRef]
  21. Nielsen, S.S.; Houe, H.; Denwood, M.; Nielsen, L.R.; Forkman, B.; Otten, N.D.; Agger, J.F. Application of methods to assess animal welfare and suffering caused by infectious diseases in cattle and swine populations. Animals 2021, 11, 3017. [Google Scholar] [CrossRef]
  22. Linstädt, J.; Thöne-Reineke, C.; Merle, R. Animal-based welfare indicators for dairy cows and their validity and practicality: A systematic review of the existing literature. Front. Vet. Sci. 2024, 11, 1429097. [Google Scholar] [CrossRef]
  23. Mandel, R.; Bracke, M.B.; Nicol, C.J.; Webster, J.A.; Gygax, L. Dairy vs beef production–expert views on welfare of cattle in common food production systems. Animal 2022, 16, 100622. [Google Scholar] [CrossRef]
  24. Ahsan, M.I.; Khan, M.B.; Das, M.; Akter, S. Poor hygiene, facilities, and policies at slaughterhouses: A key threat to public health and environment. Bangladesh J. Vet. Anim. Sci. 2020, 8, 1–9. [Google Scholar]
  25. Ovuru, K.F.; Izah, S.C.; Ogidi, O.I.; Imarhiagbe, O.; Ogwu, M.C. Slaughterhouse facilities in developing nations: Sanitation and hygiene practices, microbial contaminants and sustainable management system. Food Sci. Biotechnol. 2024, 33, 519–537. [Google Scholar] [CrossRef]
  26. Neethirajan, S. Affective state recognition in livestock—Artificial intelligence approaches. Animals 2022, 12, 759. [Google Scholar] [CrossRef] [PubMed]
  27. Olczak, K.; Penar, W.; Nowicki, J.; Magiera, A.; Klocek, C. The role of sound in livestock farming—Selected aspects. Animals 2023, 13, 2307. [Google Scholar] [CrossRef] [PubMed]
  28. Bello, R.W.; Mohamed, A.S.A.; Talib, A.Z. Cow image segmentation using mask R-CNN integrated with grabcut. In International Conference on Emerging Technologies and Intelligent Systems; Springer International Publishing: Cham, Switzerland, 2021; pp. 23–32. [Google Scholar]
  29. Dwyer, C.M. Can improving animal welfare contribute to sustainability and productivity? Black Sea J. Agric. 2020, 3, 61–65. [Google Scholar]
  30. Neethirajan, S. Artificial intelligence and sensor technologies in dairy livestock export: Charting a digital transformation. Sensors 2023, 23, 7045. [Google Scholar] [CrossRef]
  31. Victor, N.; Maddikunta, P.K.R.; Mary, D.R.K.; Murugan, R.; Chengoden, R.; Gadekallu, T.R.; Rakesh, N.; Zhu, Y.; Paek, J. Remote Sensing for Agriculture in the Era of Industry 5.0–A survey. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 5920–5945. [Google Scholar] [CrossRef]
  32. Ecer, F.; Ögel, İ.Y.; Dinçer, H.; Yüksel, S. Assessment of Metaverse wearable technologies for smart livestock farming through a neuro quantum spherical fuzzy decision-making model. Expert Syst. Appl. 2024, 255, 124722. [Google Scholar] [CrossRef]
  33. Kleen, J.L.; Guatteo, R. Precision livestock farming: What does it contain and what are the perspectives? Animals 2023, 13, 779. [Google Scholar] [CrossRef]
  34. Wang, Y.; Mücher, S.; Wang, W.; Guo, L.; Kooistra, L. A review of three-dimensional computer vision used in precision livestock farming for cattle growth management. Comput. Electron. Agric. 2023, 206, 107687. [Google Scholar] [CrossRef]
  35. Bello, R.W.; Owolawi, P.A.; van Wyk, E.A.; Tu, C. Cattle Instance Segmentation by Transfer Learning Approach Using Deep Learning Models for Sustainable Livestock Farming; IntechOpen: London, UK, 2025. [Google Scholar] [CrossRef]
  36. Olubummo, D.A.; Bello, R.W. Computer vision-based precision livestock farming: An overview of the challenges and opportunities. World News Nat. Sci. 2024, 54, 26–37. [Google Scholar]
  37. Tedeschi, L.O.; Greenwood, P.L.; Halachmi, I. Advancements in sensor technology and decision support intelligent tools to assist smart livestock farming. J. Anim. Sci. 2021, 99, skab038. [Google Scholar] [CrossRef]
  38. Oliveira, D.A.B.; Pereira, L.G.R.; Bresolin, T.; Ferreira, R.E.P.; Dorea, J.R.R. A review of deep learning algorithms for computer vision systems in livestock. Livest. Sci. 2021, 253, 104700. [Google Scholar] [CrossRef]
  39. Abd Aziz, N.S.N.; Daud, S.M.; Dziyauddin, R.A.; Adam, M.Z.; Azizan, A. A review on computer vision technology for monitoring poultry farm—Application, hardware, and software. IEEE Access 2020, 9, 12431–12445. [Google Scholar] [CrossRef]
  40. Ma, W.; Qi, X.; Sun, Y.; Gao, R.; Ding, L.; Wang, R.; Peng, C.; Zhang, J.; Wu, J.; Xu, Z.; et al. Computer vision-based measurement techniques for livestock body dimension and weight: A review. Agriculture 2024, 14, 306. [Google Scholar] [CrossRef]
  41. Jogin, M.; Madhulika, M.S.; Divya, G.D.; Meghana, R.K.; Apoorva, S. Feature extraction using convolution neural networks (CNN) and deep learning. In Proceedings of the 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India, 18–19 May 2018; pp. 2319–2323. [Google Scholar]
  42. Bello, R.W.; Olubummo, D.A.; Seiyaboh, Z.; Enuma, O.C.; Talib, A.Z.; Mohamed, A.S.A. Cattle identification: The history of nose prints approach in brief. In IOP Conference Series: Earth and Environmental Science; IOP Publishing: Bristol, UK, 2020; Volume 594, p. 012026. [Google Scholar]
  43. Elngar, A.A.; Arafa, M.; Fathy, A.; Moustafa, B.; Mahmoud, O.; Shaban, M.; Fawzy, N. Image classification based on CNN: A survey. J. Cybersecur. Inf. Manag. 2021, 6, 18–50. [Google Scholar] [CrossRef]
  44. Xin, M.; Wang, Y. Research on image classification model based on deep convolution neural network. EURASIP J. Image Video Process. 2019, 2019, 40. [Google Scholar] [CrossRef]
  45. Bhatt, D.; Patel, C.; Talsania, H.; Patel, J.; Vaghela, R.; Pandya, S.; Modi, K.; Ghayvat, H. CNN variants for computer vision: History, architecture, application, challenges and future scope. Electronics 2021, 10, 2470. [Google Scholar] [CrossRef]
  46. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
  47. Bello, R.W.; Mohamed, A.S.A.; Talib, A.Z. Contour extraction of individual cattle from an image using enhanced Mask R-CNN instance segmentation method. IEEE Access 2021, 9, 56984–57000. [Google Scholar] [CrossRef]
  48. Bello, R.W.; Mohamed, A.; Talib, A. Smart animal husbandry: A review of its data, applications, techniques, challenges and opportunities. SSRN Electron. J. 2022, 1–24. [Google Scholar] [CrossRef]
  49. Bello, R.W.; Talıb, A.Z.H.; Mohamed, A.S.A.B. Deep learning-based architectures for recognition of cow using cow nose image pattern. Gazi Univ. J. Sci. 2020, 33, 831–844. [Google Scholar] [CrossRef]
  50. Bello, R.W.; Oladipo, M.A. Mask YOLOv7-based drone vision system for automated cattle detection and counting. Artif. Intell. Appl. 2024, 2, 115–125. [Google Scholar] [CrossRef]
  51. Araújo, V.M.; Rili, I.; Gisiger, T.; Gambs, S.; Vasseur, E.; Cellier, M.; Diallo, A.B. AI-Powered Cow Detection in Complex Farm Environments. Smart Agric. Technol. 2025, 10, 100770. [Google Scholar] [CrossRef]
  52. Qin, Q.; Zhou, X.; Gao, J.; Wang, Z.; Naer, A.; Hai, L.; Alatan, S.; Zhang, H.; Liu, Z. YOLOv8-CBAM: A study of sheep head identification in Ujumqin sheep. Front. Vet. Sci. 2025, 12, 1514212. [Google Scholar] [CrossRef]
  53. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  54. Chen, C.; Zhu, W.; Norton, T. Behaviour recognition of pigs and cattle: Journey from computer vision to deep learning. Comput. Electron. Agric. 2021, 187, 106255. [Google Scholar] [CrossRef]
  55. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar] [CrossRef]
  56. Ahmad, M.; Zhang, W.; Smith, M.; Brilot, B.; Bell, M. Real-time livestock activity monitoring via fine-tuned faster r-cnn for multiclass cattle behaviour detection. In Proceedings of the 2023 IEEE 14th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 12–14 October 2023; pp. 805–811. [Google Scholar]
  57. Myint, B.B.; Onizuka, T.; Tin, P.; Aikawa, M.; Kobayashi, I.; Zin, T.T. Development of a real-time cattle lameness detection system using a single side-view camera. Sci. Rep. 2024, 14, 13734. [Google Scholar] [CrossRef]
  58. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  59. Senthilkumar, C.; Vadivu, G.; Neethirajan, S. Early Detection of Lumpy Skin Disease in Cattle Using Deep Learning—A Comparative Analysis of Pretrained Models. Vet. Sci. 2024, 11, 510. [Google Scholar] [CrossRef]
  60. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning; PMLR: London, UK, 2019; pp. 6105–6114. [Google Scholar]
  61. Otarashvili, L.; Subramanian, T.; Holmberg, J.; Levenson, J.J.; Stewart, C.V. Multispecies Animal Re-ID Using a Large Community-Curated Dataset. arXiv 2024, arXiv:2412.05602. [Google Scholar]
  62. Zhang, R.; Ji, J.; Zhao, K.; Wang, J.; Zhang, M.; Wang, M. A cascaded individual cow identification method based on DeepOtsu and EfficientNet. Agriculture 2023, 13, 279. [Google Scholar] [CrossRef]
  63. Howard, A.G. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  64. Zhang, Y.; Li, X.; Sun, Y.; Xue, A.; Zhang, Y.; Jiang, H.; Shen, W. Real-Time Monitoring Method for Cow Rumination Behavior Based on Edge Computing and Improved MobileNet v3. Smart Agric. 2024, 6, 29. [Google Scholar]
  65. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  66. Polat, H.E.; Koc, D.G.; Ertuğrul, Ö.; Koç, C.; Ekinci, K. Deep Learning based Individual Cattle Face Recognition using Data Augmentation and Transfer Learning. J. Agric. Sci. 2025, 31, 137–150. [Google Scholar] [CrossRef]
  67. Li, X.; Yu, M.; Xu, D.; Zhao, S.; Tan, H.; Liu, X. Non-contact measurement of pregnant sows’ backfat thickness based on a hybrid CNN-ViT model. Agriculture 2023, 13, 1395. [Google Scholar] [CrossRef]
  68. Sun, L.; Liu, G.; Yang, H.; Jiang, X.; Liu, J.; Wang, X.; Yang, H.; Yang, S. LAD-RCNN: A Powerful Tool for Livestock Face Detection and Normalization. Animals 2023, 13, 1446. [Google Scholar] [CrossRef]
  69. Bello, R.W.; Mohamed, A.S.A.; Talib, A.Z.; Olubummo, D.A.; Enuma, O.C. Computer vision-based techniques for cow object recognition. In IOP Conference Series: Earth and Environmental Science; IOP Publishing: Bristol, UK, 2021; Volume 858, p. 012008. [Google Scholar]
  70. Gao, G.; Wang, C.; Wang, J.; Lv, Y.; Li, Q.; Ma, Y.; Zhang, X.; Li, Z.; Chen, G. CNN-Bi-LSTM: A complex environment-oriented cattle behavior classification network based on the fusion of CNN and Bi-LSTM. Sensors 2023, 23, 7714. [Google Scholar] [CrossRef]
  71. Qiao, Y.; Guo, Y.; Yu, K.; He, D. C3D-ConvLSTM based cow behaviour classification using video data for precision livestock farming. Comput. Electron. Agric. 2022, 193, 106650. [Google Scholar] [CrossRef]
  72. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part I 14; Springer International Publishing: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
  73. Huang, X.; Hu, Z.; Wang, X.; Yang, X.; Zhang, J.; Shi, D. An improved single shot multibox detector method applied in body condition score for dairy cows. Animals 2019, 9, 470. [Google Scholar] [CrossRef]
  74. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  75. Ayadi, S.; Ben Said, A.; Jabbar, R.; Aloulou, C.; Chabbouh, A.; Achballah, A.B. Dairy cow rumination detection: A deep learning approach. In International Workshop on Distributed Computing for Emerging Smart Networks; Springer International Publishing: Cham, Switzerland, 2020; pp. 123–139. [Google Scholar]
  76. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  77. Moradeyo, O.M.; Olaniyan, A.S.; Ojoawo, A.O.; Olawale, J.A.; Bello, R.W. YOLOv7 applied to livestock image detection and segmentation tasks in cattle grazing behavior, monitor and intrusions. J. Appl. Sci. Environ. Manag. 2023, 27, 953–958. [Google Scholar] [CrossRef]
  78. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1–9. [Google Scholar] [CrossRef]
  79. Li, S.; Fu, L.; Sun, Y.; Mu, Y.; Chen, L.; Li, J.; Gong, H. Individual dairy cow identification based on lightweight convolutional neural network. PLoS ONE 2021, 16, e0260510. [Google Scholar] [CrossRef] [PubMed]
  80. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  81. Rohan, A.; Rafaq, M.S.; Hasan, M.J.; Asghar, F.; Bashir, A.K.; Dottorini, T. Application of deep learning for livestock behaviour recognition: A systematic literature review. Comput. Electron. Agric. 2024, 224, 109115. [Google Scholar] [CrossRef]
  82. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  83. Çevik, K.K. Deep learning based real-time body condition score classification system. IEEE Access 2020, 8, 213950–213957. [Google Scholar] [CrossRef]
  84. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar]
  85. Zhang, K.; Li, D.; Huang, J.; Chen, Y. Automated video behavior recognition of pigs using two-stream convolutional networks. Sensors 2020, 20, 1085. [Google Scholar] [CrossRef]
  86. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  87. Singh, S.; Sharma, P.; Pal, N.; Sarma, D.K.; Tiwari, R.; Kumar, M. Holistic one health surveillance framework: Synergizing environmental, animal, and human determinants for enhanced infectious disease management. ACS Infect. Dis. 2024, 10, 808–826. [Google Scholar] [CrossRef]
  88. Mohan, A.; Raju, R.D.; Janarthanan, P. Animal disease diagnosis expert system using convolutional neural networks. In Proceedings of the 2019 International Conference on Intelligent Sustainable Systems (ICISS), Palladam, India, 21–22 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 441–446. [Google Scholar]
  89. Zhang, K.; Han, S.; Wu, J.; Cheng, G.; Wang, Y.; Wu, S.; Liu, J. Early lameness detection in dairy cattle based on wearable gait analysis using semi-supervised LSTM-Autoencoder. Comput. Electron. Agric. 2023, 213, 108252. [Google Scholar] [CrossRef]
  90. Lagua, E.B.; Mun, H.S.; Ampode, K.M.B.; Chem, V.; Kim, Y.H.; Yang, C.J. Artificial intelligence for automatic monitoring of respiratory health conditions in smart swine farming. Animals 2023, 13, 1860. [Google Scholar] [CrossRef]
  91. Perez, M.; Toler-Franklin, C. CNN-based action recognition and pose estimation for classifying animal behavior from videos: A survey. arXiv 2023, arXiv:2301.06187. [Google Scholar]
  92. Summerfield, G.I.; De Freitas, A.; van Marle-Koster, E.; Myburgh, H.C. Automated Cow Body Condition Scoring Using Multiple 3D Cameras and Convolutional Neural Networks. Sensors 2023, 23, 9051. [Google Scholar] [CrossRef]
  93. Weng, Z.; Meng, F.; Liu, S.; Zhang, Y.; Zheng, Z.; Gong, C. Cattle face recognition based on a Two-Branch convolutional neural network. Comput. Electron. Agric. 2022, 196, 106871. [Google Scholar] [CrossRef]
  94. Simitzis, P.; Tzanidakis, C.; Tzamaloukas, O.; Sossidou, E. Contribution of precision livestock farming systems to the improvement of welfare status and productivity of dairy animals. Dairy 2021, 3, 12–28. [Google Scholar] [CrossRef]
  95. Siegford, J.M.; Steibel, J.P.; Han, J.; Benjamin, M.; Brown-Brandl, T.; Dórea, J.R.; Morris, D.; Norton, T.; Psota, E.; Rosa, G.J. The quest to develop automated systems for monitoring animal behavior. Appl. Anim. Behav. Sci. 2023, 265, 106000. [Google Scholar] [CrossRef]
  96. Fuentes, A.; Han, S.; Nasir, M.F.; Park, J.; Yoon, S.; Park, D.S. Multiview monitoring of individual cattle behavior based on action recognition in closed barns using deep learning. Animals 2023, 13, 2020. [Google Scholar] [CrossRef]
  97. da Silva Santos, A.; de Medeiros, V.W.C.; Gonçalves, G.E. Monitoring and classification of cattle behavior: A survey. Smart Agric. Technol. 2023, 3, 100091. [Google Scholar] [CrossRef]
  98. Bai, Q.; Gao, R.; Li, Q.; Wang, R.; Zhang, H. Recognition of the behaviors of dairy cows by an improved YOLO. Intell. Robot. 2024, 4, 1–19. [Google Scholar] [CrossRef]
  99. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  100. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  101. Ji, S.; Xu, W.; Yang, M.; Yu, K. 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 221–231. [Google Scholar] [CrossRef]
  102. Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7291–7299. [Google Scholar]
  103. Mathis, A.; Mamidanna, P.; Cury, K.M.; Abe, T.; Murthy, V.N.; Mathis, M.W.; Bethge, M. DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nat. Neurosci. 2018, 21, 1281–1289. [Google Scholar] [CrossRef]
  104. Yang, L.; Amin, O.; Shihada, B. Intelligent wearable systems: Opportunities and challenges in health and sports. ACM Comput. Surv. 2024, 56, 190. [Google Scholar] [CrossRef]
  105. Sadeghi, E.; Guo, Z.; Chiumento, A.; Havinga, P. Non-Invasive Monitoring of Vital Signs in Calves Using Thermal Imaging Technology. arXiv 2024, arXiv:2405.11532. [Google Scholar]
  106. Yin, M.; Ma, R.; Luo, H.; Li, J.; Zhao, Q.; Zhang, M. Non-contact sensing technology enables precision livestock farming in smart farms. Comput. Electron. Agric. 2023, 212, 108171. [Google Scholar] [CrossRef]
  107. García, M.G.; Molina, F.M.; Marín, C.P.; Marín, D.P. Potential for automatic detection of calving in beef cows grazing on rangelands from Global Navigation Satellite System collar data. Animal 2023, 17, 100901. [Google Scholar] [CrossRef]
  108. Mishra, S.; Sharma, S.K. Advanced contribution of IoT in agricultural production for the development of smart livestock environments. Internet Things 2023, 22, 100724. [Google Scholar] [CrossRef]
  109. Famuyiwa, A.S.; Dosunmu, O.P.; Jimi-Olatunji, D. Application of Computer-Based Techniques for Monitoring Animal Health, Behavior and Welfare: A Review. J. Appl. Sci. Environ. Manag. 2024, 28 (Suppl. S12B), 4277–4282. [Google Scholar]
  110. Mulrooney, K.; Harkness, A. Farm crime and security: Evaluating smart tag technology for preventing, tracking and recovering stolen livestock. Int. J. Rural. Criminol. 2023, 8, 107–123. [Google Scholar] [CrossRef]
  111. Qazi, A.; Razzaq, T.; Iqbal, A. AnimalFormer: Multimodal Vision Framework for Behavior-based Precision Livestock Farming. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–18 June 2024; pp. 7973–7982. [Google Scholar]
  112. Shafik, W. Barriers to implementing computational intelligence-based agriculture system. In Computational Intelligence in Internet of Agricultural Things; Springer Nature: Cham, Switzerland, 2024; pp. 193–219. [Google Scholar]
  113. Dayoub, M.; Shnaigat, S.; Tarawneh, R.A.; Al-Yacoub, A.N.; Al-Barakeh, F.; Al-Najjar, K. Enhancing animal production through smart agriculture: Possibilities, hurdles, resolutions, and advantages. Ruminants 2024, 4, 22–46. [Google Scholar] [CrossRef]
  114. Balasubramaniam, S.; Vijesh Joe, C.; Prasanth, A.; Kumar, K.S. Computer Vision Systems in Livestock Farming, Poultry Farming, and Fish Farming: Applications, Use Cases, and Research Directions. In Computer Vision in Smart Agriculture and Crop Management; Wiley: Hoboken, NJ, USA, 2025; pp. 221–258. [Google Scholar]
  115. Shaik, N.; Chitralingappa, P.; Krishna Priya, C.; Sumalatha, P.; Geetha Rani, K.; Harichandana, B.; Shaik, A.S. Deep Learning Approaches for Early Detection of Bovine Respiratory Diseases in Cattle. In International Symposium on Signal and Image Processing; Springer Nature: Singapore, 2024; pp. 227–239. [Google Scholar]
  116. Talebi, E.; Nezhad, M.K. Revolutionizing animal sciences: Multifaceted solutions and transformative impact of AI technologies. CABI Rev. 2024, 19. [Google Scholar] [CrossRef]
  117. Muhammad Saqib, S.; Iqbal, M.; Tahar Ben Othman, M.; Shahazad, T.; Yasin Ghadi, Y.; Al-Amro, S.; Mazhar, T. Lumpy skin disease diagnosis in cattle: A deep learning approach optimized with RMSProp and MobileNetV2. PLoS ONE 2024, 19, e0302862. [Google Scholar] [CrossRef]
  118. Degu, M.Z.; Simegn, G.L. Smartphone based detection and classification of poultry diseases from chicken fecal images using deep learning techniques. Smart Agric. Technol. 2023, 4, 100221. [Google Scholar] [CrossRef]
  119. Girmaw, D.W. Livestock animal skin disease detection and classification using deep learning approaches. Biomed. Signal Process. Control 2025, 102, 107334. [Google Scholar]
  120. Saha, D.K. An extensive investigation of convolutional neural network designs for the diagnosis of lumpy skin disease in dairy cows. Heliyon 2024, 10, e34242. [Google Scholar] [CrossRef] [PubMed]
  121. Machuve, D.; Nwankwo, E.; Mduma, N.; Mbelwa, J. Poultry diseases diagnostics models using deep learning. Front. Artif. Intell. 2022, 5, 733345. [Google Scholar] [CrossRef]
  122. Kellenberger, B.; Marcos, D.; Tuia, D. Detecting mammals in UAV images: Best practices to address a substantially imbalanced dataset with deep learning. Remote Sens. Environ. 2018, 216, 139–153. [Google Scholar] [CrossRef]
  123. Barbedo, J.G.A.; Koenigkan, L.V.; Santos, T.T.; Santos, P.M. A study on the detection of cattle in UAV images using deep learning. Sensors 2019, 19, 5436. [Google Scholar] [CrossRef]
  124. Bello, R.W.; Talib, A.Z.H.; Mohamed, A.S.A.B. Deep belief network approach for recognition of cow using cow nose image pattern. Walailak J. Sci. Technol. (WJST) 2021, 18, 8984. [Google Scholar] [CrossRef]
  125. Mao, A.; Huang, E.; Wang, X.; Liu, K. Deep learning-based animal activity recognition with wearable sensors: Overview, challenges, and future directions. Comput. Electron. Agric. 2023, 211, 108043. [Google Scholar] [CrossRef]
  126. Li, J.; Green-Miller, A.R.; Hu, X.; Lucic, A.; Mohan, M.M.; Dilger, R.N.; Condotta, I.C.; Aldridge, B.; Hart, J.M.; Ahuja, N. Barriers to computer vision applications in pig production facilities. Comput. Electron. Agric. 2022, 200, 107227. [Google Scholar] [CrossRef]
  127. Kumar, P.; Luo, S.; Shaukat, K. A Comprehensive Review of Deep Learning Approaches for Animal Detection on Video Data. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 1420–1437. [Google Scholar] [CrossRef]
  128. Fuentes, S.; Viejo, C.G.; Tongson, E.; Dunshea, F.R. The livestock farming digital transformation: Implementation of new and emerging technologies using artificial intelligence. Anim. Health Res. Rev. 2022, 23, 59–71. [Google Scholar] [CrossRef] [PubMed]
Figure 1. A text-style description of CNN architecture, from input to output.
Table 1. Commonly applied CNN models in animal farming, detailing their purpose, benefits, and application in the livestock industry.
CNN Model | Applications in Livestock Farming | Purpose | Benefits
YOLOv8 + CBAM | Cow detection in complex farm environments [51,52]. | Health monitoring, behavioral analysis, tracking and counting. | Fast, enhanced detection accuracy in challenging conditions, supports smart farming initiatives, real-time detection, and enables early intervention and automated monitoring.
VGG16 [53] | Weight estimation from body images and animal classification in smart agriculture [54]. | Growth tracking and management, and species and breed identification. | Non-invasive, consistent monitoring of growth patterns, high classification accuracy, and supports automated livestock management.
Faster R-CNN [55] | Excretion detection in pigsties and lameness detection via gait analysis [56,57]. | Waste management and emission modeling. | Reliable detection of excretions, aids in environmental management, and detects movement anomalies early to prevent productivity loss.
ResNet (e.g., ResNet50) [58] | Disease detection from skin/eye images [59]. | Disease diagnosis (e.g., pink eye). | Deep feature extraction improves accuracy and early disease detection.
EfficientNet [60] | Multispecies animal classification from images [61,62]. | Breed recognition and sorting. | Efficient computation with high accuracy.
MobileNet [63] | On-farm real-time behavior monitoring [64]. | Activity recognition (e.g., lying, feeding, etc.). | Lightweight and works on mobile/edge devices.
DenseNet [65] | Facial recognition for animal ID [66]. | Automated identification. | Improves record keeping and reduces the need for physical tagging.
CNN-ViT Hybrid | Backfat thickness measurement in sows [67]. | Body condition scoring and health monitoring. | High precision in measurements and facilitates optimal feeding strategies.
LAD-RCNN | Livestock face detection and normalization [68]. | Individual identification. | Accurate face detection and angle normalization, and improves animal tracking systems.
Mask YOLOv7 | Automated livestock detection and counting [69]. | Welfare monitoring. | High accuracy and precision in detecting individual cattle.
CNN-LSTM Hybrid | Behavior recognition in livestock [70]. | Activity monitoring. | Combines spatial and temporal data and improves understanding of animal behaviors.
3D CNN/Time-distributed CNN | Behavior analysis using video sequences [71]. | Aggression detection and heat detection. | Uses temporal information and helps improve animal welfare.
SSD (Single Shot MultiBox Detector) [72] | Animal detection and counting, behavior monitoring, individual animal identification, health monitoring, and video surveillance [73]. | Real-time object detection, localization, and classification. | High speed, decent accuracy, multi-object detection, edge deployment friendly, and adaptability.
R-CNN [74] | Animal detection and tracking, lameness detection, excretion zone detection, health monitoring, behavior recognition, and image-based classification [75]. | Object detection in complex environments, accurate localization, and foundation for automated monitoring. | High detection accuracy, fine-grained analysis, improved animal welfare, real-time monitoring (with Faster R-CNN), and compatibility with large datasets.
Mask R-CNN [76] | Animal detection and counting, behavior monitoring, health monitoring, individual identification, body condition and morphometry, and welfare assessment [77]. | Object detection, instance segmentation, pose estimation, and feature extraction. | High-precision segmentation, handles occlusion well, improved behavior tracking, and robustness in real-world conditions.
AlexNet [78] | Breed classification, individual animal identification, health monitoring, behavior detection, and weight/body condition estimation [79]. | Image-based livestock classification and identification, health and condition monitoring, and behavior recognition. | Efficient feature learning, good performance with limited data, baseline model, and fast inference.
LeNet [80] | Livestock species classification, feed behavior detection, individual ID using ear tags or markings, and health condition pre-screening [81]. | Baseline model with lightweight deployment, applied in early stages of livestock computer vision projects to validate dataset quality and classification feasibility. | Low computational cost, easy to implement and train, and effective for simple tasks.
Inception/GoogLeNet (and variants) [82] | Animal identification, health monitoring, behavior recognition, weight estimation, and animal counting [83]. | To analyze images and video data for precision livestock monitoring, to perform automated identification, classification, and health assessment of animals, and to optimize resource management and improve animal welfare through data-driven insights. | Efficient feature extraction, low computational cost, high accuracy, and scalability.
Inception-ResNet [84] | Animal identification, health monitoring, behavior recognition, and multi-species classification [85]. | High-accuracy feature extraction, improving model convergence, and handling multi-scale patterns. | Improved accuracy, faster convergence, better generalization, and scalability.
Xception [86] | Behavior recognition, health monitoring, individual identification, and video segmentation. | Behavior recognition, health monitoring, and individual identification. | High accuracy, computational efficiency, and versatility.
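As a concrete illustration of the operation that all of the backbones in Table 1 share, the following minimal NumPy sketch composes a single convolution, ReLU activation, and max-pooling step — the basic building block of every CNN listed above. The data and kernel are toy values for illustration only, not any specific model's implementation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, as computed by a CNN convolutional layer."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; ragged edges are dropped."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 "image" and a vertical-edge kernel (hypothetical values)
img = np.arange(36, dtype=float).reshape(6, 6)
k = np.array([[1.0, 0.0, -1.0]] * 3)
feat = max_pool(relu(conv2d(img, k)))
print(feat.shape)  # (2, 2)
```

Stacking many such convolution/activation/pooling stages (with learned kernels) is what lets the models in Table 1 map raw animal images to behavior classes, identities, or health states.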
Table 2. Common architecture of CNNs employed in animal farming.
Architecture | Use Case | Notes
VGG16/VGG19 [53] | Basic behavior classification. | Simple, slower, and good for a baseline.
ResNet-50/101 [58] | Posture and action recognition. | Deeper networks and more accurate.
YOLOv3/v4/v5 [99,100] | Real-time object detection of behaviors. | Fast inference; used with Darknet or PyTorch.
Faster R-CNN [55] | Precise detection of behaviors/events. | Slower but more accurate.
3D CNNs [101] | Video behavior classification. | Capture spatial and temporal information.
OpenPose/DeepLabCut [102,103] | Keypoint detection and pose estimation. | For gait, posture, and locomotion analysis.
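The anchor-based detectors in Table 2 (YOLO, SSD, Faster R-CNN) all rely on a non-maximum suppression (NMS) step to merge overlapping candidate boxes for the same animal. The following is a minimal NumPy sketch of IoU and greedy NMS with illustrative boxes and thresholds, not any specific framework's implementation.

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box [x1, y1, x2, y2] against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Drop candidates that overlap the kept box too strongly
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

# Two overlapping detections of the same cow plus one distant detection
boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]
```

The duplicate lower-scoring box is suppressed, which is why these detectors can count individual animals rather than raw candidate boxes.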
Table 3. Challenges in CNN-based behavior monitoring.
Challenge | Description
Data annotation | Labor-intensive labeling of behavior data.
Occlusion and clutter | Animals may overlap or occlude one another in images.
Lighting and environment | Variable lighting conditions affect accuracy.
Generalization | CNNs trained on one farm may not generalize to others.
Real-time processing | Continuous video analysis may require powerful hardware.
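Several of the challenges in Table 3 — limited labeled data and variable lighting in particular — are commonly mitigated with data augmentation during training. The following is a minimal NumPy sketch of random horizontal flips and brightness jitter; the parameters and image sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Return a randomly flipped, brightness-jittered copy of an
    H x W x C image with float values in [0, 1]."""
    out = image.copy()
    if rng.random() < 0.5:                  # random horizontal flip
        out = out[:, ::-1, :]
    gain = rng.uniform(0.7, 1.3)            # simulate lighting variation
    return np.clip(out * gain, 0.0, 1.0)

img = rng.random((32, 32, 3))               # stand-in for a livestock image
batch = np.stack([augment(img) for _ in range(8)])
print(batch.shape)  # (8, 32, 32, 3)
```

Each training epoch then sees slightly different versions of the same labeled images, which stretches a small annotated dataset and makes the model less sensitive to lighting changes between farms.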
Table 4. Recent applications of CNN models for detection and classification of livestock diseases.
Reference | Purpose | CNN Models Used | Performance Metrics
Saha [120] | Detection and classification of lumpy skin disease (LSD) in dairy cows. | MobileNetV2, DenseNet201, Xception, and InceptionResNetV2 | MobileNetV2 achieved 96% accuracy and 98% AUC; DenseNet201 achieved 94% accuracy; F1 scores up to 96%.
Machuve et al. [121] | Diagnosis of poultry diseases such as Coccidiosis, Salmonella, and Newcastle using fecal images. | VGG16, InceptionV3, MobileNetV2, and Xception | After fine-tuning: MobileNetV2 achieved 98.02% accuracy; Xception achieved 98.24% accuracy; F1 scores above 75% for all classifiers.
Degu and Simegn [118] | Detection and classification of poultry diseases using smartphone images. | YOLO-V3 for object detection and ResNet50 for classification | YOLO-V3 achieved 87.48% mean average precision for ROI detection; ResNet50 achieved 98.7% classification accuracy.
Gao et al. [70] | Classification of cattle behaviors in complex environments. | CNN-Bi-LSTM (combination of CNNs and bi-directional LSTM) | Achieved 94.3% accuracy, 94.2% precision, and 93.4% recall; outperformed Mask R-CNN, CNN-LSTM, and EfficientNet-LSTM models.
Girmaw [119] | Detection and classification of skin diseases in livestock such as cattle, sheep, and goats. | EfficientNetB7, MobileNetV2, and DenseNet201 | EfficientNetB7 achieved 99.01% accuracy; MobileNetV2 and DenseNet201 also demonstrated high performance.
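The accuracy, precision, recall, and F1 figures reported in Table 4 follow directly from a model's confusion matrix. The following is a minimal sketch for a binary disease screen; the predictions are hypothetical and do not come from the cited studies.

```python
import numpy as np

def metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for a binary screen
    (1 = diseased, 0 = healthy)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives
    acc = np.mean(y_pred == y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Hypothetical classifier output on 10 animal images
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
acc, prec, rec, f1 = metrics(y_true, y_pred)
print(round(acc, 2), round(prec, 2), round(rec, 2), round(f1, 2))  # 0.8 0.75 0.75 0.75
```

For disease detection, recall (missed sick animals) and precision (false alarms) often matter more than raw accuracy, which is why the studies in Table 4 report several metrics rather than accuracy alone.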