Search Results (21)

Search Parameters:
Keywords = cow individual recognition

18 pages, 2436 KiB  
Review
May the Extensive Farming System of Small Ruminants Be Smart?
by Rosanna Paolino, Adriana Di Trana, Adele Coppola, Emilio Sabia, Amelia Maria Riviezzi, Luca Vignozzi, Salvatore Claps, Pasquale Caparra, Corrado Pacelli and Ada Braghieri
Agriculture 2025, 15(9), 929; https://doi.org/10.3390/agriculture15090929 - 24 Apr 2025
Viewed by 838
Abstract
Precision Livestock Farming (PLF) applies a complex of sensor technology, algorithms, and multiple tools for individual, real-time livestock monitoring. In intensive livestock systems, PLF is now quite widespread, allowing for the optimisation of management, thanks to the early recognition of diseases and the possibility of monitoring animals’ feeding and reproductive behaviour, with an overall improvement of their welfare. Similarly, PLF systems represent an opportunity to improve the profitability and sustainability of extensive farming systems, including those of small ruminants, rationalising the use of pastures by avoiding overgrazing and controlling animals. Despite the livestock distribution in several parts of the world, the low profit and the relatively high cost of the devices cause delays in implementing PLF systems in small ruminants compared to those in dairy cows. Applying these tools to animals in extensive systems requires customisation compared to their use in intensive systems. In many cases, the unit prices of sensors for small ruminants are higher than those developed for large animals due to miniaturisation and higher production costs associated with lower production numbers. Sheep and goat farms are often in mountainous and remote areas with poor technological infrastructure and ineffective electricity, telephone, and internet services. Moreover, small ruminant farming is usually associated with advanced age in farmers, contributing to poor local initiatives and delays in PLF implementation. A targeted literature analysis was carried out to identify technologies already applied or at an advanced stage of development for the management of grazing animals, particularly sheep and goats, and their effects on nutrition, production, and animal welfare. The current technological developments include wearable, non-wearable, and network technologies. The review of the technologies involved and the main fields of application can help identify the most suitable systems for managing grazing sheep and goats and contribute to selecting more sustainable and efficient solutions in line with current environmental and welfare concerns. Full article
(This article belongs to the Section Farm Animal Production)

25 pages, 8629 KiB  
Article
Efficient Convolutional Network Model Incorporating a Multi-Attention Mechanism for Individual Recognition of Holstein Dairy Cows
by Xiaoli Ma, Youxin Yu, Wenbo Zhu, Yu Liu, Linhui Gan, Xiaoping An, Honghui Li and Buyu Wang
Animals 2025, 15(8), 1173; https://doi.org/10.3390/ani15081173 - 19 Apr 2025
Viewed by 513
Abstract
Individual recognition of Holstein cows is the basis for realizing precision dairy farming. Current machine-vision individual recognition systems usually rely on fixed vertical illumination and top-view camera perspectives or require complex camera systems, and these requirements limit their adoption in practical applications. To solve this problem, a lightweight Holstein cow individual recognition feature extraction network named CowBackNet is designed in this paper. The network is robust to camera-angle and lighting changes and is therefore suitable for farm environments. In addition, a fused multi-attention approach combining an attention mechanism, an inverted residual structure, and depthwise-separable convolution was used to design a new feature extraction module, LightCBAM, which was placed in the corresponding layer of CowBackNet to enhance the model’s ability to extract the key features of the cow’s back image from different viewpoints. The CowBack dataset, containing Holstein cow back images taken in real production environments from different viewpoints, was also constructed to verify the model’s applicability in real scenarios. The experimental results show that, when using CowBackNet as the feature extraction network, the recognition accuracy reaches 88.30%, the FLOPs are 0.727 G, and the model size is only 6.096 MB. Compared with the classical EfficientNetV2, the accuracy of CowBackNet is improved by 11.69%, the FLOPs are reduced by 0.001 G, and the number of parameters is reduced by 14.6%. The model therefore shows good robustness to shooting angle, lighting changes, and real production data, improving recognition accuracy while optimising computational efficiency, which is of practical value for precision farming. Full article
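
As a rough illustration of the building blocks the abstract names (not the authors' LightCBAM; the class names and layer sizes below are invented for the example), a depthwise-separable convolution followed by CBAM-style channel and spatial attention could be sketched in PyTorch as follows:

```python
# Minimal sketch (assumed names, not the paper's LightCBAM): depthwise-separable
# convolution followed by CBAM-style channel and spatial attention.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return x * torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))

class DepthwiseSeparableCBAM(nn.Module):
    """Depthwise conv -> pointwise conv -> channel + spatial attention."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU()
        self.ca = ChannelAttention(out_ch)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = self.act(self.bn(self.pointwise(self.depthwise(x))))
        return self.sa(self.ca(x))

if __name__ == "__main__":
    block = DepthwiseSeparableCBAM(32, 64)
    print(block(torch.randn(1, 32, 112, 112)).shape)  # torch.Size([1, 64, 112, 112])
```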

29 pages, 13582 KiB  
Article
Individual Identification of Holstein Cows from Top-View RGB and Depth Images Based on Improved PointNet++ and ConvNeXt
by Kaixuan Zhao, Jinjin Wang, Yinan Chen, Junrui Sun and Ruihong Zhang
Agriculture 2025, 15(7), 710; https://doi.org/10.3390/agriculture15070710 - 26 Mar 2025
Cited by 2 | Viewed by 658
Abstract
The identification of individual cows is a prerequisite and foundation for realizing accurate and intelligent farming, but image-based identification is easily affected by the environment and the observation angle. To identify cows more accurately and efficiently, a novel individual recognition method based on anchor point detection and body pattern features extracted from top-view depth images of cows is proposed. First, top-view RGBD images of cows were collected, and the hook and pin bones of the cows were coarsely located with an improved PointNet++ neural network. Second, the curvature variations in the hook and pin bone regions were analyzed to locate the hook and pin bones precisely. Based on the spatial relationship between the hook and pin bones, the critical area was determined, and this key region was transformed from a point cloud into a two-dimensional body pattern image. Finally, body pattern image classification based on an improved ConvNeXt network model was performed for individual cow identification. A dataset comprising 7600 top-view images from 40 cows was created and partitioned into training, validation, and test subsets in a 7:2:1 proportion. The results show that the AP50 value of the point cloud segmentation model is 95.5% and the cow identification accuracy reaches 97.95%. The AP50 metric of the enhanced PointNet++ network exceeded that of the original model by 3 percentage points, and the enhanced ConvNeXt model achieved a 6.11 percentage point increase in classification precision relative to the original model. The method is robust to the position and angle of the cow in the top-view images. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
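
The step that converts the key back region from a point cloud into a two-dimensional body pattern image could, under simplifying assumptions (orthographic top-view projection onto a fixed grid; the function name and grid parameters are invented for this sketch), look like this:

```python
# Minimal sketch (an assumption, not the authors' pipeline): orthographically
# project a key-region point cloud onto a top-view grid to obtain a 2D
# "body pattern" image that a CNN such as ConvNeXt could then classify.
import numpy as np

def pointcloud_to_pattern_image(points, values, resolution=0.01, size=(128, 128)):
    """points: (N, 3) array of x, y, z [m]; values: (N,) grayscale per point."""
    h, w = size
    img = np.zeros((h, w), dtype=np.float32)
    xy = points[:, :2]
    origin = xy.min(axis=0)
    cols = np.clip(((xy[:, 0] - origin[0]) / resolution).astype(int), 0, w - 1)
    rows = np.clip(((xy[:, 1] - origin[1]) / resolution).astype(int), 0, h - 1)
    img[rows, cols] = values            # later points overwrite earlier ones
    return img

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 1.2, size=(5000, 3))     # fake back-region points (m)
    vals = rng.uniform(0, 1, size=5000)           # fake coat intensity
    print(pointcloud_to_pattern_image(pts, vals).shape)  # (128, 128)
```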

13 pages, 1794 KiB  
Article
Exploring Attributions in Convolutional Neural Networks for Cow Identification
by Dimitar Tanchev, Alexander Marazov, Gergana Balieva, Ivanka Lazarova and Ralitsa Rankova
Appl. Sci. 2025, 15(7), 3622; https://doi.org/10.3390/app15073622 - 26 Mar 2025
Cited by 1 | Viewed by 557
Abstract
Face recognition and identification are well established in traffic monitoring, security, human biodata analysis, and similar fields. Given the ongoing development and implementation of digitalization in all spheres of public life, new approaches are being sought to exploit high-technology advancements in animal husbandry and enhance the sector’s sustainability. Using machine learning, the present study investigates the possibility of creating a model for the visual face recognition of farm animals (cows) that could be used in future applications to manage the health, welfare, and productivity of animals at the herd and individual levels in real time. The study provides preliminary results from an ongoing research project that employs attribution methods to identify which parts of a facial image contribute most to cow identification using a triplet loss network. A new dataset for identifying cows in farm environments was created by taking digital images of cows at animal holdings with intensive breeding systems. The images were normalized and then segmented into cow and background regions. Several attribution-analysis methods were then explored to examine whether the cow or the background regions have a greater influence on the network’s performance in identifying the animal. Full article
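
One simple attribution method compatible with an embedding network trained with a triplet loss is occlusion: mask one patch at a time and measure how the embedding distance to a reference image of the same cow changes. The sketch below is an assumed setup, not the project's code:

```python
# Minimal sketch (assumed setup): occlusion-based attribution for a triplet-loss
# embedding network. The larger the distance increase when a patch is masked,
# the more that patch contributed to recognising the cow.
import torch
import torch.nn.functional as F

def occlusion_attribution(model, image, reference, patch=16):
    """image, reference: (3, H, W) tensors; model maps a batch to embeddings."""
    model.eval()
    with torch.no_grad():
        ref_emb = model(reference.unsqueeze(0))
        base = F.pairwise_distance(model(image.unsqueeze(0)), ref_emb).item()
        _, h, w = image.shape
        heat = torch.zeros(h // patch, w // patch)
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                occluded = image.clone()
                occluded[:, i:i + patch, j:j + patch] = 0.0   # mask one patch
                d = F.pairwise_distance(model(occluded.unsqueeze(0)), ref_emb).item()
                heat[i // patch, j // patch] = d - base       # attribution score
    return heat

if __name__ == "__main__":
    # Tiny stand-in backbone, just to make the sketch runnable.
    backbone = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3, stride=2, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 32))
    img, ref = torch.rand(3, 128, 128), torch.rand(3, 128, 128)
    print(occlusion_attribution(backbone, img, ref).shape)  # torch.Size([8, 8])
```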

14 pages, 2171 KiB  
Article
Individual Cow Recognition Based on Ultra-Wideband and Computer Vision
by Aruna Zhao, Huijuan Wu, Daoerji Fan and Kuo Li
Animals 2025, 15(3), 456; https://doi.org/10.3390/ani15030456 - 6 Feb 2025
Cited by 1 | Viewed by 872
Abstract
This study’s primary goal is to use computer vision and ultra-wideband (UWB) localisation techniques to automatically mark numerals in cow photos. In order to accomplish this, we created a UWB-based cow localisation system that involves installing tags on cow heads and placing several base stations throughout the farm. The system can determine the distance between each base station and the cow using wireless communication technology, which allows it to determine the cow’s current location coordinates. The study employed a neural network to train and optimise the ranging data gathered in the 1–20 m range in order to solve the issue of significant ranging errors in conventional UWB positioning systems. The experimental data indicates that the UWB positioning system’s unoptimized range error has an absolute mean of 0.18 m and a standard deviation of 0.047. However, when using a neural network-trained model, the ranging error is much decreased, with an absolute mean of 0.038 m and a standard deviation of 0.0079. The average root mean square error (RMSE) of the positioning coordinates is decreased to 0.043 m following the positioning computation utilising the optimised range data, greatly increasing the positioning accuracy. This study used the conventional camera shooting method for image acquisition. Following image acquisition, the system extracts the cow’s coordinate information from the image using a perspective transformation method. This allows for accurate cow identification and number labelling when compared to the location coordinates. According to the trial findings, this plan, which integrates computer vision and UWB positioning technologies, achieves high-precision cow labelling and placement in the optimised system and greatly raises the degree of automation and precise management in the farming process. This technology has many potential applications, particularly in the administration and surveillance of big dairy farms, and it offers a strong technical basis for precision farming. Full article
(This article belongs to the Section Animal System and Management)
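
Leaving aside the neural-network range correction, the positioning step itself is standard multilateration: given corrected distances to known base stations, the cow's coordinates follow from linearised least squares. A minimal sketch (an illustration, not the authors' system):

```python
# Minimal sketch: estimate a 2D position from UWB ranges to fixed base stations.
# In the paper the raw ranges are first corrected with a trained neural network;
# here corrected ranges are simply taken as given.
import numpy as np

def multilaterate(anchors, ranges):
    """anchors: (M, 2) known base-station coordinates; ranges: (M,) distances."""
    a0, r0 = anchors[0], ranges[0]
    # Subtracting the first equation removes the quadratic unknown terms.
    A = 2.0 * (anchors[1:] - a0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

if __name__ == "__main__":
    stations = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 15.0], [0.0, 15.0]])
    true_pos = np.array([7.5, 4.2])
    dists = np.linalg.norm(stations - true_pos, axis=1) + 0.02  # small range bias
    print(multilaterate(stations, dists))  # approximately [7.5, 4.2]
```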

20 pages, 5811 KiB  
Article
YOLOX-S-TKECB: A Holstein Cow Identification Detection Algorithm
by Hongtao Zhang, Li Zheng, Lian Tan, Jiahui Gao and Yiming Luo
Agriculture 2024, 14(11), 1982; https://doi.org/10.3390/agriculture14111982 - 5 Nov 2024
Cited by 2 | Viewed by 1043
Abstract
Accurate identification of individual cow identity is a prerequisite for the construction of digital farms and serves as the basis for optimized feeding, disease prevention and control, breed improvement, and product quality traceability. Currently, cow identification faces challenges such as poor recognition accuracy, large data volumes, weak model generalization ability, and low recognition speed. Therefore, this paper proposes a cow identification method based on YOLOX-S-TKECB. (1) Based on the characteristics of Holstein cows and their breeding practices, we constructed a real-time acquisition and preprocessing platform for two-dimensional Holstein cow images and built a cow identification model based on YOLOX-S-TKECB. (2) Transfer learning was introduced to improve the convergence speed and generalization ability of the cow identification model. (3) The CBAM attention mechanism module was added to enhance the model’s ability to extract features from cow torso patterns. (4) The alignment between the apriori frame and the target size was improved by optimizing the clustering algorithm and the multi-scale feature fusion method, thereby enhancing the performance of object detection at different scales. The experimental results demonstrate that, compared to the traditional YOLOX-S model, the improved model exhibits a 15.31% increase in mean average precision (mAP) and a 32-frame boost in frames per second (FPS). This validates the feasibility and effectiveness of the proposed YOLOX-S-TKECB-based cow identification algorithm, providing valuable technical support for the application of dairy cow identification in farms. Full article
(This article belongs to the Section Farm Animal Production)
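
Prior ("a priori") boxes are usually matched to the target-size distribution by k-means over ground-truth widths and heights with 1 − IoU as the distance. The sketch below shows that generic procedure, not the paper's specific clustering optimisation:

```python
# Minimal sketch (generic anchor clustering, not the paper's exact algorithm):
# k-means over box widths/heights using IoU as the similarity.
import numpy as np

def iou_wh(boxes, clusters):
    """boxes: (N, 2) widths/heights; clusters: (K, 2). Returns (N, K) IoU."""
    inter = (np.minimum(boxes[:, None, 0], clusters[None, :, 0])
             * np.minimum(boxes[:, None, 1], clusters[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None]
             + (clusters[:, 0] * clusters[:, 1])[None, :] - inter)
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, clusters), axis=1)  # nearest = highest IoU
        for j in range(k):
            if np.any(assign == j):
                clusters[j] = boxes[assign == j].mean(axis=0)
    return clusters[np.argsort(clusters[:, 0] * clusters[:, 1])]

if __name__ == "__main__":
    wh = np.abs(np.random.default_rng(1).normal([120, 80], [40, 30], size=(500, 2)))
    print(kmeans_anchors(wh, k=6))
```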

27 pages, 23565 KiB  
Article
CAMLLA-YOLOv8n: Cow Behavior Recognition Based on Improved YOLOv8n
by Qingxiang Jia, Jucheng Yang, Shujie Han, Zihan Du and Jianzheng Liu
Animals 2024, 14(20), 3033; https://doi.org/10.3390/ani14203033 - 19 Oct 2024
Cited by 4 | Viewed by 2117
Abstract
Cow behavior carries important health information. The timely and accurate detection of standing, grazing, lying, estrus, licking, fighting, and other behaviors is crucial for individual cow monitoring and understanding of their health status. In this study, a model called CAMLLA-YOLOv8n is proposed for Holstein cow behavior recognition. We use a hybrid data augmentation method to provide the model with rich Holstein cow behavior features and improve the YOLOV8n model to optimize the Holstein cow behavior detection results under challenging conditions. Specifically, we integrate the Coordinate Attention mechanism into the C2f module to form the C2f-CA module, which strengthens the expression of inter-channel feature information, enabling the model to more accurately identify and understand the spatial relationship between different Holstein cows’ positions, thereby improving the sensitivity to key areas and the ability to filter background interference. Secondly, the MLLAttention mechanism is introduced in the P3, P4, and P5 layers of the Neck part of the model to better cope with the challenges of Holstein cow behavior recognition caused by large-scale changes. In addition, we also innovatively improve the SPPF module to form the SPPF-GPE module, which optimizes small target recognition by combining global average pooling and global maximum pooling processing and enhances the model’s ability to capture the key parts of Holstein cow behavior in the environment. Given the limitations of traditional IoU loss in cow behavior detection, we replace CIoU loss with Shape–IoU loss, focusing on the shape and scale features of the Bounding Box, thereby improving the matching degree between the Prediction Box and the Ground Truth Box. In order to verify the effectiveness of the proposed CAMLLA-YOLOv8n algorithm, we conducted experiments on a self-constructed dataset containing 23,073 Holstein cow behavior instances. The experimental results show that, compared with models such as YOLOv3-tiny, YOLOv5n, YOLOv5s, YOLOv7-tiny, YOLOv8n, and YOLOv8s, the improved CAMLLA-YOLOv8n model achieved increases in Precision of 8.79%, 7.16%, 6.06%, 2.86%, 2.18%, and 2.69%, respectively, when detecting the states of Holstein cows grazing, standing, lying, licking, estrus, fighting, and empty bedding. Finally, although the Params and FLOPs of the CAMLLA-YOLOv8n model increased slightly compared with the YOLOv8n model, it achieved significant improvements of 2.18%, 1.62%, 1.84%, and 1.77% in the four key performance indicators of Precision, Recall, mAP@0.5, and mAP@0.5:0.95, respectively. This model, named CAMLLA-YOLOv8n, effectively meets the need for the accurate and rapid identification of Holstein cow behavior in actual agricultural environments. This research is significant for improving the economic benefits of farms and promoting the transformation of animal husbandry towards digitalization and intelligence. Full article
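
For orientation, the sketch below computes plain IoU and the CIoU loss that the paper replaces; Shape-IoU additionally reweights the distance and shape terms by the ground-truth box's own scale, which is omitted here:

```python
# Minimal sketch: axis-aligned IoU and the CIoU loss (the term the paper swaps
# for Shape-IoU). Boxes are (x1, y1, x2, y2).
import math

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def ciou_loss(pred, gt):
    i = iou(pred, gt)
    # squared centre distance over squared diagonal of the enclosing box
    cx_p, cy_p = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cx_g, cy_g = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    cw = max(pred[2], gt[2]) - min(pred[0], gt[0])
    ch = max(pred[3], gt[3]) - min(pred[1], gt[1])
    rho2 = (cx_p - cx_g) ** 2 + (cy_p - cy_g) ** 2
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((gt[2] - gt[0]) / (gt[3] - gt[1]))
                              - math.atan((pred[2] - pred[0]) / (pred[3] - pred[1]))) ** 2
    alpha = v / (1 - i + v + 1e-9)
    return 1 - i + rho2 / c2 + alpha * v

if __name__ == "__main__":
    print(round(ciou_loss((10, 10, 60, 40), (12, 14, 62, 44)), 4))
```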

19 pages, 2021 KiB  
Article
Quality Assessment of Reconstructed Cow, Camel and Mare Milk Powders by Near-Infrared Spectroscopy and Chemometrics
by Mariem Majadi, Annamária Barkó, Adrienn Varga-Tóth, Zhulduz Suleimenova Maukenovna, Dossimova Zhanna Batirkhanovna, Senkebayeva Dilora, Matyas Lukacs, Timea Kaszab, Zsuzsanna Mednyánszky and Zoltan Kovacs
Molecules 2024, 29(17), 3989; https://doi.org/10.3390/molecules29173989 - 23 Aug 2024
Cited by 3 | Viewed by 1752
Abstract
Milk powders are becoming a major attraction for many industrial applications due to their nutritional and functional properties. Different types of powdered milk, each with their own distinct chemical compositions, can have different functionalities. Consequently, the development of rapid monitoring methods is becoming an urgent task to explore and expand their applicability. Lately, there is growing emphasis on the potential of near-infrared spectroscopy (NIRS) as a rapid technique for the quality assessment of dairy products. In the present work, we explored the potential of NIRS coupled with chemometrics for the prediction of the main functional and chemical properties of three types of milk powders, as well as their important processing parameters. Mare, camel and cow milk powders were prepared at different concentrations (5%, 10% and 12%) and temperatures (25 °C, 40 °C and 65 °C), and then their main physicochemical attributes and NIRS spectra were analyzed. Overall, high accuracy in both recognition and prediction based on type, concentration and temperature was achieved by NIRS-based models, and the quantification of quality attributes (pH, viscosity, dry matter content, fat content, conductivity and individual amino acid content) also resulted in high accuracy in the models. R2CV and R2pr values ranging from 0.8 to 0.99 and 0.7 to 0.98, respectively, were obtained by using PLSR models. However, SVR models achieved higher R2CV and R2pr values, ranging from 0.91 to 0.99 and 0.80 to 0.99, respectively. Full article
(This article belongs to the Section Food Chemistry)
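
The reported R2CV values come from cross-validated PLS regression on the NIR spectra. A minimal sketch of that workflow, with synthetic spectra standing in for the real measurements (an assumed pipeline, not the authors' chemometric code):

```python
# Minimal sketch: PLS regression on (synthetic) NIR spectra with a
# cross-validated R2, analogous to the "R2CV" reported in the abstract.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 200))                 # 90 samples x 200 wavelengths (fake)
y = X[:, 50] * 0.8 + X[:, 120] * 0.4 + rng.normal(scale=0.1, size=90)  # fake fat content

pls = PLSRegression(n_components=8)
y_cv = cross_val_predict(pls, X, y, cv=5)      # held-out predictions per fold
print(f"R2CV = {r2_score(y, np.ravel(y_cv)):.3f}")
```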

19 pages, 17761 KiB  
Article
Multi-Target Feeding-Behavior Recognition Method for Cows Based on Improved RefineMask
by Xuwen Li, Ronghua Gao, Qifeng Li, Rong Wang, Shanghao Liu, Weiwei Huang, Liuyiyi Yang and Zhenyuan Zhuo
Sensors 2024, 24(10), 2975; https://doi.org/10.3390/s24102975 - 8 May 2024
Cited by 2 | Viewed by 1546
Abstract
To address the low recognition accuracy and large recognition error of existing visual methods in large-scale dairy-cattle breeding, we propose a method for recognizing the feeding behavior of dairy cows based on an improved RefineMask instance-segmentation model, using high-quality detection and segmentation results to recognize feeding behavior. Firstly, the input features are better extracted by incorporating the convolutional block attention module into the residual module of the feature extraction network. Secondly, an efficient channel attention module is incorporated into the neck design to integrate features efficiently while avoiding a surge in parameters and computation. Subsequently, the GIoU loss function is used to enlarge the prediction box and accelerate the convergence of the loss function, thereby improving regression accuracy. Finally, logic that uses the mask information to recognize foraging behavior was designed, and accurate recognition of foraging behavior was achieved from the segmentation results of the model. We constructed, trained, and tested a cow dataset consisting of 1000 images from 50 different individual cows at peak feeding times. The method’s effectiveness, robustness, and accuracy were verified by comparing it with instance-segmentation algorithms such as MSRCNN, Point_Rend, Cascade_Mask, and ConvNet_V2. The experimental results show that the accuracy of the improved RefineMask algorithm in recognizing the bounding box and determining the segmentation mask is 98.3%, 0.7 percentage points higher than the benchmark model, with a model size of 49.96 M parameters, which meets the practical needs of local deployment. In addition, the method performed well in a variety of scenarios and adapted to various lighting environments; this research can provide technical support for analysing the relationship between cow feeding behavior and feed intake during peak feeding periods. Full article
(This article belongs to the Section Smart Agriculture)
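
The "logic of using mask information" is not spelled out in the abstract; one plausible reading, sketched below purely as an assumption, is to count a cow as feeding when enough of the head end of its instance mask overlaps a fixed feed-lane region of the frame:

```python
# Minimal sketch (an assumption about the mask-based rule, not the paper's code):
# a cow is counted as feeding when a sufficient fraction of the head end of its
# instance mask overlaps a fixed feed-lane region.
import numpy as np

def is_feeding(instance_mask, feed_lane_mask, head_rows=0.35, min_overlap=0.3):
    """Both masks are boolean HxW arrays; the top `head_rows` fraction of the
    instance's bounding box is treated as the head region (assumes the feed lane
    lies toward the top of the frame)."""
    ys, _ = np.nonzero(instance_mask)
    if ys.size == 0:
        return False
    y0, y1 = ys.min(), ys.max()
    head_limit = y0 + int((y1 - y0) * head_rows)
    head = np.zeros_like(instance_mask)
    head[y0:head_limit + 1] = instance_mask[y0:head_limit + 1]
    overlap = np.logical_and(head, feed_lane_mask).sum()
    return overlap / max(head.sum(), 1) >= min_overlap

if __name__ == "__main__":
    cow = np.zeros((100, 100), dtype=bool)
    cow[10:60, 40:60] = True
    lane = np.zeros((100, 100), dtype=bool)
    lane[:25] = True
    print(is_feeding(cow, lane))  # True: the head end of the mask lies in the lane
```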

23 pages, 19155 KiB  
Article
Open-Set Recognition of Individual Cows Based on Spatial Feature Transformation and Metric Learning
by Buyu Wang, Xia Li, Xiaoping An, Weijun Duan, Yuan Wang, Dian Wang and Jingwei Qi
Animals 2024, 14(8), 1175; https://doi.org/10.3390/ani14081175 - 14 Apr 2024
Cited by 8 | Viewed by 1961
Abstract
The automated recognition of individual cows is foundational for implementing intelligent farming. Traditional methods of individual cow recognition from an overhead perspective primarily rely on singular back features and perform poorly for cows with diverse orientation distributions and partial body visibility in the frame. This study proposes an open-set method for individual cow recognition based on spatial feature transformation and metric learning to address these issues. Initially, a spatial transformation deep feature extraction module, ResSTN, which incorporates preprocessing techniques, was designed to effectively address the low recognition rate caused by the diverse orientation distribution of individual cows. Subsequently, by constructing an open-set recognition framework that integrates three attention mechanisms, four loss functions, and four distance metric methods and exploring the impact of each component on recognition performance, this study achieves refined and optimized model configurations. Lastly, introducing moderate cropping and random occlusion strategies during the data-loading phase enhances the model’s ability to recognize partially visible individuals. The method proposed in this study achieves a recognition accuracy of 94.58% in open-set scenarios for individual cows in overhead images, with an average accuracy improvement of 2.98 percentage points for cows with diverse orientation distributions, and also demonstrates an improved recognition performance for partially visible and randomly occluded individual cows. This validates the effectiveness of the proposed method in open-set recognition, showing significant potential for application in precision cattle farming management. Full article
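
Open-set recognition with metric learning typically reduces to comparing a query embedding against per-identity gallery centroids and rejecting it as "unknown" beyond a distance threshold. The sketch below illustrates that generic decision rule, not the ResSTN model itself:

```python
# Minimal sketch (generic open-set decision rule, not the paper's model):
# nearest-centroid identification with a rejection threshold.
import numpy as np

def identify_open_set(embedding, gallery, threshold=0.7):
    """embedding: (D,); gallery: dict cow_id -> (D,) L2-normalised centroid."""
    ids = list(gallery)
    centroids = np.stack([gallery[i] for i in ids])
    dists = np.linalg.norm(centroids - embedding, axis=1)
    best = int(np.argmin(dists))
    if dists[best] <= threshold:
        return ids[best], float(dists[best])
    return "unknown", float(dists[best])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    gallery = {}
    for k in range(5):
        v = rng.normal(size=64)
        gallery[f"cow_{k}"] = v / np.linalg.norm(v)
    query = gallery["cow_3"] + rng.normal(scale=0.05, size=64)
    print(identify_open_set(query / np.linalg.norm(query), gallery))
```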

17 pages, 8563 KiB  
Article
Research on the Vision-Based Dairy Cow Ear Tag Recognition Method
by Tianhong Gao, Daoerji Fan, Huijuan Wu, Xiangzhong Chen, Shihao Song, Yuxin Sun and Jia Tian
Sensors 2024, 24(7), 2194; https://doi.org/10.3390/s24072194 - 29 Mar 2024
Cited by 8 | Viewed by 2467
Abstract
With the increase in the scale of breeding at modern pastures, the management of dairy cows has become much more challenging, and individual recognition is the key to the implementation of precision farming. Based on the need for low-cost and accurate herd management and for non-stressful and non-invasive individual recognition, we propose a vision-based automatic recognition method for dairy cow ear tags. Firstly, for the detection of cow ear tags, the lightweight Small-YOLOV5s is proposed, and then a differentiable binarization network (DBNet) combined with a convolutional recurrent neural network (CRNN) is used to achieve the recognition of the numbers on ear tags. The experimental results demonstrated notable improvements: Compared to those of YOLOV5s, Small-YOLOV5s enhanced recall by 1.5%, increased the mean average precision by 0.9%, reduced the number of model parameters by 5,447,802, and enhanced the average prediction speed for a single image by 0.5 ms. The final accuracy of the ear tag number recognition was an impressive 92.1%. Moreover, this study introduces two standardized experimental datasets specifically designed for the ear tag detection and recognition of dairy cows. These datasets will be made freely available to researchers in the global dairy cattle community with the intention of fostering intelligent advancements in the breeding industry. Full article
(This article belongs to the Section Smart Agriculture)
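
After the ear tag is detected, a CRNN's per-timestep outputs are usually turned into a digit string by greedy CTC decoding. The sketch below shows that decoding step under assumed shapes (it is not the authors' DBNet+CRNN code):

```python
# Minimal sketch (assumed output shapes): greedy CTC decoding of per-timestep
# digit logits into an ear-tag number string.
import numpy as np

def ctc_greedy_decode(logits, blank=10, charset="0123456789"):
    """logits: (T, C) array with C = len(charset) + 1 (last index = blank)."""
    best = logits.argmax(axis=1)
    out, prev = [], blank
    for idx in best:
        if idx != blank and idx != prev:      # collapse repeats, drop blanks
            out.append(charset[idx])
        prev = idx
    return "".join(out)

if __name__ == "__main__":
    T, C = 12, 11
    logits = np.full((T, C), -5.0)
    for t, d in zip([0, 1, 3, 4, 6, 8], [3, 3, 0, 0, 4, 7]):  # "3047" with repeats
        logits[t, d] = 5.0
    logits[[2, 5, 7, 9, 10, 11], 10] = 5.0                    # blanks elsewhere
    print(ctc_greedy_decode(logits))  # -> "3047"
```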

21 pages, 713 KiB  
Review
A Review on Information Technologies Applicable to Precision Dairy Farming: Focus on Behavior, Health Monitoring, and the Precise Feeding of Dairy Cows
by Na Liu, Jingwei Qi, Xiaoping An and Yuan Wang
Agriculture 2023, 13(10), 1858; https://doi.org/10.3390/agriculture13101858 - 22 Sep 2023
Cited by 18 | Viewed by 5856
Abstract
Milk production plays an essential role in the global economy. With the development of herds and farming systems, the collection of fine-scale data to enhance efficiency and decision-making on dairy farms still faces challenges. The behavior of animals reflects their physical state and health level. In recent years, the rapid development of the Internet of Things (IoT), artificial intelligence (AI), and computer vision (CV) has made great progress in the research of precision dairy farming. Combining data from image, sound, and movement sensors with algorithms, these methods are conducive to monitoring the behavior, health, and management practices of dairy cows. In this review, we summarize the latest research on contact sensors, vision analysis, and machine-learning technologies applicable to dairy cattle, and we focus on the individual recognition, behavior, and health monitoring of dairy cattle and precise feeding. The utilization of state-of-the-art technologies allows for monitoring behavior in near real-time conditions, detecting cow mastitis in a timely manner, and assessing body conditions and feed intake accurately, which enables the promotion of the health and management level of dairy cows. Although there are limitations in implementing machine vision algorithms in commercial settings, technologies exist today and continue to be developed in order to be hopefully used in future commercial pasture management, which ultimately results in better value for producers. Full article
(This article belongs to the Section Farm Animal Production)

12 pages, 2744 KiB  
Article
Cattle Facial Matching Recognition Algorithm Based on Multi-View Feature Fusion
by Zhi Weng, Shaoqing Liu, Zhiqiang Zheng, Yong Zhang and Caili Gong
Electronics 2023, 12(1), 156; https://doi.org/10.3390/electronics12010156 - 29 Dec 2022
Cited by 6 | Viewed by 3058
Abstract
When facial images of cattle are collected in the field, some features of the collected images are missing because of the cattle’s changing posture, which reduces recognition accuracy or makes recognition impossible. This paper verifies the practical performance of the classical matching algorithms ORB, SURF, and SIFT in cattle face matching recognition. The experimental results show that the traditional matching algorithms perform poorly in terms of matching accuracy and matching time. A new matching recognition model is therefore constructed. The model feeds target cattle facial data from different angles into the feature extraction channel and combines the GMS (grid-based motion statistics) algorithm with a random sample consensus algorithm to achieve accurate recognition of individual cattle, and the recognition process is simple and fast. The recognition accuracy of the model was 85.56% for the Holstein cow face dataset, 82.58% for the Simmental beef cattle, and 80.73% for the mixed Holstein and Simmental beef cattle dataset. The recognition model constructed in this study can achieve individual recognition of cattle in complex environments, is robust to the matching data, and can effectively reduce the effects of viewing-angle changes and partially missing features in cattle facial recognition. Full article
(This article belongs to the Section Artificial Intelligence)
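
A conventional baseline for this kind of matching is ORB keypoints plus a RANSAC-verified homography, with the inlier count serving as a crude match score; GMS (available in opencv-contrib) replaces the ratio test with grid-based motion statistics. The sketch below shows only the conventional baseline, not the paper's model:

```python
# Minimal sketch (standard OpenCV calls, not the paper's GMS-based model):
# ORB matching between two cattle-face images with a RANSAC homography check.
import cv2
import numpy as np

def match_score(img_a, img_b, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])                 # Lowe ratio test
    if len(good) < 4:
        return 0
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return 0 if inliers is None else int(inliers.sum())
```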

13 pages, 5045 KiB  
Article
Development of an Automated Body Temperature Detection Platform for Face Recognition in Cattle with YOLO V3-Tiny Deep Learning and Infrared Thermal Imaging
by Shih-Sian Guo, Kuo-Hua Lee, Liyun Chang, Chin-Dar Tseng, Sin-Jhe Sie, Guang-Zhi Lin, Jih-Yi Chen, Yi-Hsin Yeh, Yu-Jie Huang and Tsair-Fwu Lee
Appl. Sci. 2022, 12(8), 4036; https://doi.org/10.3390/app12084036 - 16 Apr 2022
Cited by 19 | Viewed by 4614
Abstract
This study developed an automated temperature measurement and monitoring platform for dairy cattle. The platform used the YOLO V3-tiny (you only look once, YOLO) deep learning algorithm to identify and classify dairy cattle images. The system included a total of three layers of YOLO V3-tiny identification: (1) dairy cow body; (2) individual number (identity, ID); (3) thermal image of eye socket identification. We recorded each cow’s individual number and body temperature data after the three layers of identification, and carried out long-term body temperature tracking. The average prediction score of the recognition rate was 96%, and the accuracy was 90.0%. The thermal image of eye socket recognition rate was >99%. The area under the receiver operating characteristic curves (AUC) index of the prediction model was 0.813 (0.717–0.910). This showed that the model had excellent predictive ability. This system provides a rapid and convenient temperature measurement solution for ranchers. The improvement in dairy cattle image recognition can be optimized by collecting more image data. In the future, this platform is expected to replace the traditional solution of intrusive radio-frequency identification for individual recognition. Full article
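
The final stage, reading a temperature from the detected eye-socket region of a radiometric thermal frame, can be sketched as taking the maximum value inside the detected box (an assumption about the platform's read-out, with invented array shapes):

```python
# Minimal sketch (assumed read-out, not the platform's code): take the maximum
# temperature inside the detected eye-socket box of a radiometric thermal frame.
import numpy as np

def eye_socket_temperature(thermal_frame, box):
    """thermal_frame: HxW array of temperatures (deg C); box: (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    roi = thermal_frame[y1:y2, x1:x2]
    return float(roi.max()) if roi.size else float("nan")

if __name__ == "__main__":
    frame = np.random.default_rng(3).normal(33.0, 1.0, size=(240, 320))
    frame[100:110, 150:165] += 5.0                       # warm eye-socket region
    print(round(eye_socket_temperature(frame, (148, 98, 168, 112)), 1))
```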

19 pages, 5173 KiB  
Article
Multi-Center Agent Loss for Visual Identification of Chinese Simmental in the Wild
by Jianmin Zhao, Qiusheng Lian and Neal N. Xiong
Animals 2022, 12(4), 459; https://doi.org/10.3390/ani12040459 - 13 Feb 2022
Cited by 1 | Viewed by 2473
Abstract
Visual identification of cattle in the wild provides an essential way for real-time cattle monitoring applicable to precision livestock farming. Chinese Simmental exhibit a yellow or brown coat with individually characteristic white stripes or spots, which makes a biometric identifier for identification possible. This work employed the observable biometric characteristics to perform cattle identification with an image from any viewpoint. We propose multi-center agent loss to jointly supervise the learning of DCNNs by SoftMax with multiple centers and the agent triplet. We reformulated SoftMax with multiple centers to reduce intra-class variance by offering more centers for feature clustering. Then, we utilized the agent triplet, which consisted of the features and the agents, to enforce separation among different classes. As there are no datasets for the identification of cattle with multi-view images, we created CNSID100, consisting of 11,635 images from 100 Chinese Simmental identities. Our proposed loss was comprehensively compared with several well-known losses on CNSID100 and OpenCows2020 and analyzed in an engineering application in the farming environment. It was encouraging to find that our approach outperformed the state-of-the-art models on the datasets above. The engineering application demonstrated that our pipeline with detection and recognition is promising for continuous cattle identification in real livestock farming scenarios. Full article
(This article belongs to the Section Animal System and Management)
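
One loose reading of "SoftMax with multiple centers" is to keep several weight centers per identity, score each image against the best center of each class, and apply cross-entropy on top; the agent-triplet term is omitted. The sketch below is that interpretation only, not the authors' loss:

```python
# Minimal sketch (a loose interpretation, not the paper's multi-center agent
# loss): each class keeps several centers; the class logit is the best cosine
# similarity to any of its centers, followed by standard cross-entropy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiCenterSoftmax(nn.Module):
    def __init__(self, feat_dim, num_classes, centers_per_class=3, scale=16.0):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, centers_per_class, feat_dim))
        self.scale = scale

    def forward(self, features, labels):
        f = F.normalize(features, dim=1)                  # (B, D)
        c = F.normalize(self.centers, dim=2)              # (C, K, D)
        sims = torch.einsum("bd,ckd->bck", f, c)          # cosine similarities
        logits = self.scale * sims.max(dim=2).values      # best center per class
        return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    loss_fn = MultiCenterSoftmax(feat_dim=128, num_classes=100)
    feats, labels = torch.randn(8, 128), torch.randint(0, 100, (8,))
    print(loss_fn(feats, labels).item())
```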
