Search Results (120)

Search Parameters:
Keywords = animal behavior identification

30 pages, 2049 KiB  
Review
Wearable Sensors-Based Intelligent Sensing and Application of Animal Behaviors: A Comprehensive Review
by Luyu Ding, Chongxian Zhang, Yuxiao Yue, Chunxia Yao, Zhuo Li, Yating Hu, Baozhu Yang, Weihong Ma, Ligen Yu, Ronghua Gao and Qifeng Li
Sensors 2025, 25(14), 4515; https://doi.org/10.3390/s25144515 - 21 Jul 2025
Viewed by 579
Abstract
Accurate monitoring of animal behaviors enables improved management in precision livestock farming (PLF), supporting critical applications including health assessment, estrus detection, parturition monitoring, and feed intake estimation. Although both contact and non-contact sensing modalities are utilized, wearable devices with embedded sensors (e.g., accelerometers, pressure sensors) offer unique advantages through continuous data streams that enhance behavioral traceability. Focusing specifically on contact sensing techniques, this review examines sensor characteristics and data acquisition challenges, methodologies for processing behavioral data and implementing identification algorithms, industrial applications enabled by recognition outcomes, and prevailing challenges with emerging research opportunities. Current behavior classification relies predominantly on traditional machine learning or deep learning approaches with high-frequency data acquisition. The fundamental limitation restricting advancement in this field is the difficulty in maintaining high-fidelity recognition performance at reduced acquisition rates, particularly for integrated multi-behavior identification. Considering that computational demands and limited adaptability to complex field environments remain significant constraints, Tiny Machine Learning (TinyML) could present opportunities to guide future research toward practical, scalable behavioral monitoring solutions. In addition, algorithm development for functional applications after behavior recognition may represent a critical future research direction. Full article
(This article belongs to the Section Wearables)
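As a purely illustrative aside (not taken from the review), the fixed-window feature extraction that typically precedes the machine-learning classifiers discussed above can be sketched in Python; the 25 Hz sampling rate, 2 s window, and choice of statistics here are assumptions:

```python
import numpy as np

def window_features(acc, fs=25, win_s=2.0):
    """Split a (N, 3) accelerometer stream into fixed-length windows
    and compute simple per-axis statistics for each window."""
    win = int(fs * win_s)
    n = (len(acc) // win) * win               # drop the ragged tail
    chunks = acc[:n].reshape(-1, win, 3)      # (n_windows, win, 3)
    return np.concatenate(
        [chunks.mean(axis=1),                 # per-axis mean
         chunks.std(axis=1),                  # per-axis std
         chunks.max(axis=1) - chunks.min(axis=1)],  # per-axis range
        axis=1)                               # (n_windows, 9)

# 10 s of synthetic 25 Hz tri-axial data -> 5 windows of 9 features each
rng = np.random.default_rng(0)
X = window_features(rng.normal(size=(250, 3)))
```

Each row of `X` would then be fed to a classifier; lowering the sampling rate shrinks `win` and, as the review notes, tends to degrade recognition fidelity.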

23 pages, 6340 KiB  
Article
Design and Prototyping of a Robotic Structure for Poultry Farming
by Glauber da Rocha Balthazar, Robson Mateus Freitas Silveira and Iran José Oliveira da Silva
AgriEngineering 2025, 7(7), 233; https://doi.org/10.3390/agriengineering7070233 - 11 Jul 2025
Cited by 1 | Viewed by 619
Abstract
The identification and prediction of losses, along with environmental and behavioral analyses and animal welfare monitoring, are key drivers for the use of technologies in poultry farming that help characterize the productive environment. Among these technologies, robotics emerges as a facilitator, as it accommodates several computing tools for data capture, analysis and prediction. This study presents the full methodology for building a robot (the so-called RobôFrango) for application in poultry farming. The construction method was based on evolutionary prototyping, which allowed each physical component (electronic and mechanical) of the robotic structure to be characterized and tested. This approach made it possible to identify the most suitable components for the broiler production system. The results identified the motors, wheels, chassis, batteries and sensors that proved most adaptable to the adversities found in poultry farms. Validation of the final constructed structure was carried out through practical operation of the robot, seeking to understand how each component behaved in a commercial broiler aviary. It was concluded that the best electronic and physical equipment for building a robotic prototype for poultry farms could be identified, and that a final product was generated. Full article
(This article belongs to the Special Issue Precision Farming Technologies for Monitoring Livestock and Poultry)

18 pages, 2228 KiB  
Review
Integrating Deep Learning and Transcriptomics to Assess Livestock Aggression: A Scoping Review
by Roland Juhos, Szilvia Kusza, Vilmos Bilicki and Zoltán Bagi
Biology 2025, 14(7), 771; https://doi.org/10.3390/biology14070771 - 26 Jun 2025
Viewed by 411
Abstract
The presence of aggressive behavior in livestock creates major difficulties for animal welfare, farm safety, economic performance and selective breeding. The two innovative tools of deep learning-based video analysis and transcriptomic profiling have recently appeared to aid the understanding and monitoring of such behaviors. This scoping review assesses the current use of these two methods for aggression research across livestock species and identifies trends while revealing unaddressed gaps in existing literature. A scoping literature search was performed through the PubMed, Scopus and Web of Science databases to identify articles from 2014 to April 2025. The research included 268 original studies which were divided into 250 AI-driven behavioral phenotyping papers and 18 transcriptomic investigations without any studies combining both approaches. Most research focused on economically significant species, including pigs and cattle, yet poultry and small ruminants, along with camels and fish and other species, received limited attention. The main developments include convolutional neural network (CNN)-based object detection and pose estimation systems, together with the transcriptomic identification of molecular pathways that link to aggression and stress. The main barriers to progress in the field include inconsistent behavioral annotation and insufficient real-farm validation together with limited cross-modal integration. Standardized behavior definitions, together with multimodal datasets and integrated pipelines that link phenotypic and molecular data, should be developed according to our proposal. These innovations will speed up the advancement of livestock welfare alongside precision breeding and sustainable animal production. Full article

18 pages, 6678 KiB  
Article
HIEN: A Hybrid Interaction Enhanced Network for Horse Iris Super-Resolution
by Ao Zhang, Bin Guo, Xing Liu and Wei Liu
Appl. Sci. 2025, 15(13), 7191; https://doi.org/10.3390/app15137191 - 26 Jun 2025
Viewed by 262
Abstract
Horse iris recognition is a non-invasive identification method with great potential for precise management in intelligent horse farms. However, horses’ natural vigilance often leads to stress and resistance when exposed to close-range infrared cameras. This behavior makes it challenging to capture clear iris images, thereby reducing recognition performance. This paper addresses the challenge of generating high-resolution iris images from existing low-resolution counterparts. To this end, we propose a novel hybrid-architecture image super-resolution (SR) network. Central to our approach is the design of the Paired Asymmetric Transformer Block (PATB), which incorporates a Contextual Query Generator (CQG) to efficiently capture contextual information and model global feature interactions. Furthermore, we introduce an Efficient Residual Dense Block (ERDB), specifically engineered to effectively extract finer-grained local features inherent in the image data. By integrating PATB and ERDB, our network achieves superior fusion of global contextual awareness and local detail information, thereby significantly enhancing the reconstruction quality of horse iris images. Experimental evaluations on our self-constructed dataset of horse irises demonstrate the effectiveness of the proposed method. In terms of standard image quality metrics, it achieves a PSNR of 30.5988 dB and an SSIM of 0.8552. Moreover, in terms of identity-recognition performance, the method achieves Precision, Recall, and F1-Score of 81.48%, 74.38%, and 77.77%, respectively. This study provides a useful contribution to digital horse farm management and supports the ongoing development of smart animal husbandry. Full article
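For readers unfamiliar with the PSNR figure quoted above, a minimal sketch of the standard definition (10·log10(MAX²/MSE)) follows; this is an illustration, not the authors' code, and the toy images are invented:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

ref = np.full((8, 8), 100.0)      # toy "ground-truth" image
noisy = ref + 10.0                # uniform error of 10 -> MSE = 100
val = psnr(ref, noisy)            # ~28.13 dB
```

SSIM is more involved (local means, variances, and covariances over sliding windows), which is why libraries such as scikit-image are normally used for it.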

48 pages, 9168 KiB  
Review
Socializing AI: Integrating Social Network Analysis and Deep Learning for Precision Dairy Cow Monitoring—A Critical Review
by Sibi Chakravathy Parivendan, Kashfia Sailunaz and Suresh Neethirajan
Animals 2025, 15(13), 1835; https://doi.org/10.3390/ani15131835 - 20 Jun 2025
Viewed by 1005
Abstract
This review critically analyzes recent advancements in dairy cow behavior recognition, highlighting novel methodological contributions through the integration of advanced artificial intelligence (AI) techniques such as transformer models and multi-view tracking with social network analysis (SNA). Such integration offers transformative opportunities for improving dairy cattle welfare, but current applications remain limited. We describe the transition from manual, observer-based assessments to automated, scalable methods using convolutional neural networks (CNNs), spatio-temporal models, and attention mechanisms. Although object detection models, including You Only Look Once (YOLO), EfficientDet, and sequence models, such as Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Long Short-Term Memory (convLSTM), have improved detection and classification, significant challenges remain, including occlusions, annotation bottlenecks, dataset diversity, and limited generalizability. Existing interaction inference methods rely heavily on distance-based approximations (i.e., assuming that proximity implies social interaction), lacking the semantic depth essential for comprehensive SNA. To address this, we propose innovative methodological intersections such as pose-aware SNA frameworks and multi-camera fusion techniques. Moreover, we explicitly discuss ethical challenges and data governance issues, emphasizing data transparency and animal welfare concerns within precision livestock contexts. We clarify how these methodological innovations directly impact practical farming by enhancing monitoring precision, herd management, and welfare outcomes. Ultimately, this synthesis advocates for strategic, empathetic, and ethically responsible precision dairy farming practices, significantly advancing both dairy cow welfare and operational effectiveness. Full article
(This article belongs to the Section Animal Welfare)
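The "distance-based approximation" the review critiques (proximity implies interaction) is easy to make concrete. A minimal sketch, with invented positions and an assumed 2 m contact radius:

```python
import numpy as np

def proximity_network(positions, radius=2.0):
    """Binary adjacency matrix: two animals closer than `radius`
    (same units as `positions`) are treated as interacting."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    adj = (d < radius) & ~np.eye(len(positions), dtype=bool)
    return adj.astype(int)

# Three cows: the first two stand close together, the third far away
pos = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
A = proximity_network(pos)
degree = A.sum(axis=1)   # contacts per animal, a basic SNA measure
```

The pose-aware frameworks the review proposes would replace the single distance threshold with semantic cues (head orientation, contact type) before an edge is added to the network.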

24 pages, 412 KiB  
Review
Application of Convolutional Neural Networks in Animal Husbandry: A Review
by Rotimi-Williams Bello, Roseline Oluwaseun Ogundokun, Pius A. Owolawi, Etienne A. van Wyk and Chunling Tu
Mathematics 2025, 13(12), 1906; https://doi.org/10.3390/math13121906 - 6 Jun 2025
Viewed by 739
Abstract
Convolutional neural networks (CNNs) applied in animal husbandry rest on in-depth mathematical formulations, which usually revolve around how well they map input data such as images or video frames of animals to meaningful outputs such as health status, behavior class, and identity. Likewise, CNNs drive computer vision and deep learning models that act intelligently to improve productivity and animal management for sustainable animal husbandry. In animal husbandry, CNNs play a vital role in the management and monitoring of livestock health and productivity due to their high accuracy in analyzing images and videos. Monitoring animal health is important for welfare, food abundance, safety, and economic productivity. This paper comprehensively reviews recent advancements and applications of CNN-based models for livestock health monitoring, covering the detection of various diseases and the classification of behavior, for overall management gain. We selected relevant articles with experimental results addressing animal detection, localization, tracking, and behavioral monitoring, validating the accuracy and efficiency of CNNs. Prominent anchor-based object detection models such as the R-CNN, YOLO and SSD series, and anchor-free models (key-point-based and anchor-point-based), are often used, demonstrating great versatility and robustness across various tasks. From the analysis, it is evident that CNNs have made significant research contributions to animal husbandry. Limited labeled data, variation in data, low-quality or noisy images, complex backgrounds, computational demand, species-specific models, high implementation cost, scalability, modeling complex behaviors, and compatibility with current farm management systems are among the notable challenges of applying CNNs in animal husbandry. Through continued research efforts, these challenges can be addressed to realize sustainable animal husbandry. Full article
(This article belongs to the Section E: Applied Mathematics)

13 pages, 3369 KiB  
Article
Correlation Between Individual Body Condition and Seasonal Activity in Buresch’s Crested Newt, Triturus ivanbureschi
by Simeon Lukanov and Irena Atanasova
Diversity 2025, 17(5), 350; https://doi.org/10.3390/d17050350 - 15 May 2025
Viewed by 330
Abstract
Body condition is a standard measure of the individual fitness and health status in many animal species and is typically estimated by calculating the body condition indices (BCIs). The present study used capture/recapture data and the BCIs to test whether the activity (number of times an individual has been recaptured) of adult T. ivanbureschi was related to individual body condition. For three consecutive seasons, we set funnel traps in a temporary pond near Sofia, Bulgaria. A ventral pattern was used for individual identification, and the linear regression of lnMass/lnSVL was used for BCI calculation. The overall recapture rate for the population was 52.52%, with males recaptured more often than females. Activity and estimated population size varied across seasons. Body condition generally decreased towards the end of the aquatic phase in all years, with females consistently maintaining higher BCIs than males. There was no relationship between mean BCI per session and population activity for either sex, but individual BCI scores were correlated with individual activity, and this relationship was independent of both sex and temperature. The results suggest that winter activity may carry energetic costs later in the season and highlight potential sex-based differences in aquatic behavior. Full article
(This article belongs to the Special Issue Amphibian and Reptile Adaptation: Biodiversity and Monitoring)
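The BCI described above, residuals of a linear regression of lnMass on lnSVL, can be sketched as follows; the measurement values are invented for illustration:

```python
import numpy as np

def bci_residuals(mass, svl):
    """BCI as residuals of an ordinary least-squares regression of
    ln(mass) on ln(SVL); positive = heavier than expected for length."""
    x, y = np.log(svl), np.log(mass)
    slope, intercept = np.polyfit(x, y, 1)    # fit ln(mass) ~ ln(SVL)
    return y - (slope * x + intercept)        # residual = BCI

svl = np.array([60.0, 65.0, 70.0, 75.0])      # snout-vent length (mm)
mass = np.array([6.0, 7.4, 8.6, 10.2])        # body mass (g)
bci = bci_residuals(mass, svl)
```

Because the regression includes an intercept, residual BCIs are centered on zero within each sample, which makes them comparable across individuals of different body sizes.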

27 pages, 2486 KiB  
Article
From Gaze to Interaction: Links Between Visual Attention, Facial Expression Identification, and Behavior of Children Diagnosed with ASD or Typically Developing Children with an Assistance Dog
by Manon Toutain, Salomé Paris, Solyane Lefranc, Laurence Henry and Marine Grandgeorge
Behav. Sci. 2025, 15(5), 674; https://doi.org/10.3390/bs15050674 - 14 May 2025
Viewed by 528
Abstract
Understanding how children engage with others is crucial for improving social interactions, especially when one of the partners is an animal. We investigated relationships between interaction strategies, visual attention, and facial expression identification of children interacting with an assistance dog, and evaluated the effects of a neurodevelopmental disorder (Autism Spectrum Disorder (ASD)) on these elements. Thus, 20 children (7 with ASD and 13 with typical development, TD) participated in three experimental tasks: (1) face-to-face encounters with the assistance dog while wearing eye-tracking glasses to analyze visual exploration patterns; (2) free interactions with the assistance dog, assessed using ethological methods; and (3) a computerized task evaluating human and canine facial expression identification. The results revealed that (1) visual exploration patterns varied depending on task instructions: ASD children paid less attention to faces and more to the environment than TD children; (2) both groups displayed similar behavioral patterns during free interactions with the assistance dog; (3) facial expression identification data did not differ between groups; and (4) within-group associations emerged between visual attention, spontaneous interaction behaviors, and facial expression identification abilities. These findings highlighted the complex interplay between visual attention, facial expression identification, and social behavior of children, emphasizing the importance of context in shaping interaction strategies. Full article

12 pages, 1420 KiB  
Article
Effectiveness of Non-Invasive Methods in Studying Jaguar (Panthera onca) Hair
by Larissa Pereira Rodrigues, Paul Raad, Daniela Carvalho dos Santos, Alaor Aparecido Almeida, Vladimir Eliodoro Costa and Ligia Souza Lima Silveira da Mota
Animals 2025, 15(10), 1415; https://doi.org/10.3390/ani15101415 - 14 May 2025
Viewed by 877
Abstract
Mammalian hair is a source of biological information and can be used in genetic, toxicological, hormonal, and ecological studies. However, non-invasive collection methods are still little explored. This study aimed to describe and validate a passive methodology for collecting hair from jaguars (Panthera onca) and evaluate its viability for different analyses. This study was conducted in the Northern Pantanal, where synthetic fiber mats were installed in strategic locations to passively capture hair. The presence of animals and the collection of samples were monitored by camera traps over a period of 30 days. The collected samples were subjected to morphological analyses by electron microscopy, molecular tests for genetic and sex identification, and isotopic and heavy metal analyses. The results showed that the collected hairs were well preserved, allowing the structural and molecular identification of the material. The analyses confirmed the viability of DNA for genetic studies and revealed specific concentrations of heavy metals and stable isotopes. The proposed methodology proved to be effective and is a promising alternative for obtaining samples without directly interfering with the behavior of the animals. Full article
(This article belongs to the Section Ecology and Conservation)

24 pages, 3173 KiB  
Article
Longitudinal Evaluation of the Detection Potential of Serum Oligoelements Cu, Se and Zn for the Diagnosis of Alzheimer’s Disease in the 3xTg-AD Animal Model
by Olivia F. M. Dias, Nicole M. E. Valle, Javier B. Mamani, Cicero J. S. Costa, Arielly H. Alves, Fernando A. Oliveira, Gabriel N. A. Rego, Marta C. S. Galanciak, Keithy Felix, Mariana P. Nucci and Lionel F. Gamarra
Int. J. Mol. Sci. 2025, 26(8), 3657; https://doi.org/10.3390/ijms26083657 - 12 Apr 2025
Viewed by 637
Abstract
Alzheimer’s disease (AD) is a progressive neurodegenerative disorder characterized by the accumulation of β-amyloid (Aβ) and hyperphosphorylated tau, leading to neuroinflammation, oxidative stress, and neuronal death. Early detection of AD remains a challenge, as clinical manifestations only emerge in the advanced stages, limiting therapeutic interventions. Minimally invasive biomarkers are essential for early identification and monitoring of disease progression. This study aims to evaluate the sensitivity of the relationship between serum oligoelement levels as biomarkers and the monitoring of AD progression in the 3xTg-AD model. Transgenic 3xTg-AD mice and C57BL/6 controls were evaluated over 12 months through serum oligoelement quantification using inductively coupled plasma mass spectrometry (ICP-MS), Aβ deposition via immunohistochemistry, and cognitive assessments using memory tests (Morris water maze and novel object recognition test), as well as spontaneous locomotion analysis using the open field test. The results demonstrated that oligoelements (copper, zinc, and selenium) were sensitive in detecting alterations in the AD group, preceding cognitive and motor deficits. Immunohistochemistry was performed for qualitative purposes, confirming the presence of β-amyloid in the CNS of transgenic animals. Up to the third month, labeling was moderate and restricted to neuronal cell bodies; from the fifth month onward, evident extracellular deposits emerged. Behavioral assessment indicated impairments in spatial and episodic memory, as well as altered locomotor patterns in AD mice. These findings reinforce that oligoelement variations may be associated with neurodegenerative processes, including oxidative stress and synaptic dysfunction. Thus, oligoelement analysis emerges as a promising approach for the early diagnosis of AD and the monitoring of disease progression, potentially contributing to the development of new therapeutic strategies. Full article

24 pages, 4262 KiB  
Article
PigFRIS: A Three-Stage Pipeline for Fence Occlusion Segmentation, GAN-Based Pig Face Inpainting, and Efficient Pig Face Recognition
by Ruihan Ma, Seyeon Chung, Sangcheol Kim and Hyongsuk Kim
Animals 2025, 15(7), 978; https://doi.org/10.3390/ani15070978 - 28 Mar 2025
Viewed by 632
Abstract
Accurate animal face recognition is essential for effective health monitoring, behavior analysis, and productivity management in smart farming. However, environmental obstructions and animal behaviors complicate identification tasks. In pig farming, fences and frequent movements often occlude essential facial features, while high inter-class similarity makes distinguishing individuals even more challenging. To address these issues, we introduce the Pig Face Recognition and Inpainting System (PigFRIS). This integrated framework enhances recognition accuracy by removing occlusions and restoring missing facial features. PigFRIS employs state-of-the-art occlusion detection with the YOLOv11 segmentation model, a GAN-based inpainting reconstruction module using AOT-GAN, and a lightweight recognition module tailored for pig face classification. In doing so, our system detects occlusions, reconstructs obscured regions, and emphasizes key facial features, thereby improving overall performance. Experimental results validate the effectiveness of PigFRIS. For instance, YOLO11l achieves a recall of 94.92% and an AP50 of 96.28% for occlusion detection, AOT-GAN records an FID of 51.48 and an SSIM of 91.50% for image restoration, and EfficientNet-B2 attains an accuracy of 91.62% with an F1 Score of 91.44% in classification. Additionally, heatmap analysis reveals that the system successfully focuses on relevant facial features rather than irrelevant occlusions, enhancing classification reliability. This work offers a novel and practical solution for animal face recognition in smart farming. It overcomes the limitations of existing methods and contributes to more effective livestock management and advancements in agricultural technology. Full article
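The precision, recall, and F1 figures quoted above follow the standard definitions from raw detection counts. As a reminder (the counts below are illustrative, not taken from the paper):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts only
p, r, f = prf1(tp=90, fp=10, fn=30)   # p = 0.90, r = 0.75
```

F1 is the harmonic mean of precision and recall, so it is pulled toward the weaker of the two, which is why it is often reported alongside both.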

17 pages, 12853 KiB  
Article
Non-Autonomous Amphoteric Metal Hydroxide Oscillations and Pattern Formation in Hydrogels
by Norbert Német, Hugh Shearer Lawson, Masaki Itatani, Federico Rossi, Nobuhiko J. Suematsu, Hiroyuki Kitahata and István Lagzi
Molecules 2025, 30(6), 1323; https://doi.org/10.3390/molecules30061323 - 15 Mar 2025
Cited by 1 | Viewed by 896
Abstract
Oscillations in animate and inanimate systems are ubiquitous phenomena driven by sophisticated chemical reaction networks. Non-autonomous chemical oscillators have been designed to mimic oscillatory behavior using programmable syringe pumps. Here, we investigated the non-autonomous oscillations, pattern formation, and front propagation of amphoteric hydroxide (aluminum (III), zinc (II), tin (II), and lead (II)) precipitates under controlled pH conditions. A continuous stirred-tank reactor with modulated inflows of acidic and alkaline solutions generated pH oscillations, leading to periodic precipitation and dissolution of metal hydroxides in time. The generated turbidity oscillations exhibited ion-specific patterns, enabling their characterization through quantitative parameters such as peak width (W) and asymmetry (As). The study of mixed metal cationic systems showed that turbidity patterns contained signatures of both hydroxides due to the formation of mixed hydroxides and oxyhydroxides. The reaction–diffusion setup in solid hydrogel columns produced spatial precipitation patterns depending on metal cations and their concentrations. Additionally, in the case of tin (II), a propagating precipitation front was observed in a thin precipitation layer. These findings provide new insights into precipitation pattern formation and open avenues for metal ion identification and further exploration of complex reaction–diffusion systems. Full article
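The peak width (W) and asymmetry (As) parameters mentioned above can be sketched for a single turbidity peak; the definitions used here (width at half maximum, right/left half-width ratio) and the Gaussian test signal are illustrative assumptions, and the paper's exact definitions may differ:

```python
import numpy as np

def peak_shape(t, y):
    """Width at half maximum and an asymmetry ratio
    (right half-width / left half-width) for a single peak.
    Assumes the trace crosses half maximum on both sides."""
    i = int(np.argmax(y))
    half = y[i] / 2.0
    left = np.where(y[:i] <= half)[0]    # last sub-half point before the peak
    right = np.where(y[i:] <= half)[0]   # first sub-half point after the peak
    tl = t[left[-1]] if len(left) else t[0]
    tr = t[i + right[0]] if len(right) else t[-1]
    return tr - tl, (tr - t[i]) / (t[i] - tl)

t = np.linspace(0.0, 10.0, 1001)
y = np.exp(-((t - 4.0) ** 2) / 2.0)   # symmetric Gaussian test peak
W, As = peak_shape(t, y)              # W ~ 2.355 (FWHM), As ~ 1.0
```

For a symmetric peak As is close to 1; values above or below 1 indicate a slower rise or decay, which is the kind of ion-specific signature the study exploits.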

32 pages, 6997 KiB  
Article
CFR-YOLO: A Novel Cow Face Detection Network Based on YOLOv7 Improvement
by Guohong Gao, Yuxin Ma, Jianping Wang, Zhiyu Li, Yan Wang and Haofan Bai
Sensors 2025, 25(4), 1084; https://doi.org/10.3390/s25041084 - 11 Feb 2025
Cited by 4 | Viewed by 1121
Abstract
With the rapid development of machine learning and deep learning technology, cow face detection technology has achieved remarkable results. Traditional contact-based cattle identification methods are costly, prone to loss and tampering, and can lead to a series of security problems, such as untimely disease prevention and control, incorrect traceability of cattle products, and fraudulent insurance claims. To solve these problems, this study explores cattle face detection for individual identification, an approach that is particularly important in smart animal husbandry and animal behavior analysis. In this paper, we propose CFR-YOLO, a novel cow face detection network based on improvements to YOLOv7. First, a method for extracting facial feature points of a cow (nose, eye corners, and mouth corners) is constructed. Then, we calculate the bounding-box center of gravity and box size from these feature points to design the CFR-YOLO network model. To optimize performance, the FReLU activation function replaces the original SiLU, and the CBS module is replaced by a CBF module. An RFB module is introduced in the backbone network, and a CBAM convolutional attention module is introduced in the head layer. The performance of CFR-YOLO is compared with other mainstream deep learning models (including YOLOv7, YOLOv5, YOLOv4, and SSD) on a self-built cow face dataset. Experiments indicate that the CFR-YOLO model achieves 98.46% precision, 97.21% recall, and 96.27% mean average precision (mAP), demonstrating excellent performance in cow face detection. In addition, comparative analyses with the other four methods show that CFR-YOLO converges faster at the same detection accuracy and achieves higher accuracy at the same convergence speed. These results should help to further develop cattle identification techniques. Full article
(This article belongs to the Section Smart Agriculture)
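The abstract above describes deriving each face box (center of gravity and size) from detected keypoints before detection-network training. A minimal sketch of that geometric step; the function name and the `scale` padding factor are our own illustrative assumptions, not values from the paper:

```python
def face_box_from_keypoints(keypoints, scale=1.8):
    """Derive a face box (cx, cy, w, h) from facial keypoints.

    keypoints: (x, y) pairs for e.g. the nose, eye corners, and mouth corners.
    scale pads the keypoint extent so the box covers the whole face.
    """
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    cx = sum(xs) / len(xs)           # box center of gravity
    cy = sum(ys) / len(ys)
    w = (max(xs) - min(xs)) * scale  # box size from keypoint spread
    h = (max(ys) - min(ys)) * scale
    return cx, cy, w, h
```

In practice the padding factor would be tuned so the generated boxes match hand-annotated faces.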
25 pages, 6169 KiB  
Article
Elephant Sound Classification Using Deep Learning Optimization
by Hiruni Dewmini, Dulani Meedeniya and Charith Perera
Sensors 2025, 25(2), 352; https://doi.org/10.3390/s25020352 - 9 Jan 2025
Cited by 2 | Viewed by 2435
Abstract
Elephant sound identification is crucial in wildlife conservation and ecological research. The identification of elephant vocalizations provides insights into behavior, social dynamics, and emotional expression, supporting elephant conservation. This study addresses elephant sound classification utilizing raw audio processing. Our focus lies on exploring lightweight models suitable for deployment on resource-constrained edge devices, including MobileNet, YAMNET, and RawNet, alongside introducing a novel model termed ElephantCallerNet. Notably, our investigation reveals that the proposed ElephantCallerNet achieves an impressive accuracy of 89% in classifying raw audio directly, without converting it to spectrograms. Leveraging Bayesian optimization techniques, we fine-tuned crucial parameters such as learning rate, dropout, and kernel size, thereby enhancing the model's performance. Moreover, we scrutinized the efficacy of spectrogram-based training, a prevalent approach in animal sound classification. In our comparative analysis, raw audio processing outperforms spectrogram-based methods. In contrast to other models in the literature that focus on a single caller type or on binary classification (i.e., whether a sound is an elephant call or not), our solution is designed to classify three distinct caller types, namely roar, rumble, and trumpet. Full article
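Classifying raw audio directly, as ElephantCallerNet does, typically starts by slicing the waveform into fixed-length windows that the network consumes; a minimal sketch of that framing step (the frame length and hop values below are illustrative, not the paper's settings):

```python
def frame_audio(samples, frame_len, hop):
    """Split a raw waveform into overlapping fixed-length frames.

    Each frame can be fed directly to a raw-audio classifier,
    with no spectrogram conversion in between.
    """
    return [samples[start:start + frame_len]
            for start in range(0, len(samples) - frame_len + 1, hop)]
```

A real pipeline would frame at the audio sample rate (e.g., thousands of samples per window) and batch the frames for inference.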
13 pages, 6856 KiB  
Article
Mind the Step: An Artificial Intelligence-Based Monitoring Platform for Animal Welfare
by Andrea Michielon, Paolo Litta, Francesca Bonelli, Gregorio Don, Stefano Farisè, Diana Giannuzzi, Marco Milanesi, Daniele Pietrucci, Angelica Vezzoli, Alessio Cecchinato, Giovanni Chillemi, Luigi Gallo, Marcello Mele and Cesare Furlanello
Sensors 2024, 24(24), 8042; https://doi.org/10.3390/s24248042 - 17 Dec 2024
Cited by 5 | Viewed by 2635
Abstract
We present an artificial intelligence (AI)-enhanced monitoring framework designed to assist personnel in evaluating and maintaining animal welfare using a modular architecture. This framework integrates multiple deep learning models to automatically compute metrics relevant to assessing animal well-being. Building on AI-based vision methods adapted from industrial applications and human behavioral analysis, the framework includes modules for markerless animal identification and health status assessment (e.g., locomotion score and body condition score). Methods for behavioral analysis are also included to evaluate how nutritional and rearing conditions affect behavior. These models are initially trained on public datasets and then fine-tuned on original data. We demonstrate the approach through two use cases: a health monitoring system for dairy cattle and a piglet behavior analysis system. The results indicate that scalable deep learning and edge computing solutions can support precision livestock farming by automating welfare assessments and enabling timely, data-driven interventions. Full article
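The modular architecture described above can be pictured as a registry of per-metric modules that are run over each observation; a minimal sketch, where the class name and metric names such as `locomotion_score` are illustrative assumptions rather than the authors' API:

```python
class WelfareMonitor:
    """Modular welfare pipeline: each registered module maps one
    observation (e.g., a video frame record) to one welfare metric."""

    def __init__(self):
        self.modules = {}

    def register(self, name, fn):
        """Plug in a metric module (in practice, a fine-tuned deep model)."""
        self.modules[name] = fn

    def assess(self, observation):
        """Run every registered module and collect its metric."""
        return {name: fn(observation) for name, fn in self.modules.items()}
```

This registry design lets new metrics (or species-specific models) be added without touching the rest of the pipeline, which matches the framework's modular intent.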