Article

Revolutionizing Cow Welfare Monitoring: A Novel Top-View Perspective with Depth Camera-Based Lameness Classification

1 Graduate School of Engineering, University of Miyazaki, Miyazaki 889-2192, Japan
2 Organization for Learning and Student Development, University of Miyazaki, Miyazaki 889-2192, Japan
3 Sumiyoshi Livestock Science Station, Field Science Center, Faculty of Agriculture, University of Miyazaki, Miyazaki 889-2192, Japan
* Author to whom correspondence should be addressed.
J. Imaging 2024, 10(3), 67; https://doi.org/10.3390/jimaging10030067
Submission received: 30 January 2024 / Revised: 3 March 2024 / Accepted: 5 March 2024 / Published: 8 March 2024
(This article belongs to the Section Computer Vision and Pattern Recognition)

Abstract: This study advances livestock health management by integrating a top-view 3D depth camera with deep learning for accurate cow lameness detection, classification, and precise segmentation, distinguishing it from 2D systems. It underscores the importance of early lameness detection in cattle and focuses on extracting depth data from the cow’s body, with a specific emphasis on the back region’s maximum value. Precise cow detection and tracking are achieved through the Detectron2 framework and Intersection Over Union (IOU) techniques. Across a three-day testing period, with observations conducted twice daily with varying cow populations (ranging from 56 to 64 cows per day), the study consistently achieves an impressive average detection accuracy of 99.94%. Tracking accuracy remains at 99.92% over the same observation period. Subsequently, the research extracts the cow’s depth region using binary mask images derived from detection results and original depth images. Feature extraction generates a feature vector based on maximum height measurements from the cow’s backbone area. This feature vector is utilized for classification, evaluating three classifiers: Random Forest (RF), K-Nearest Neighbor (KNN), and Decision Tree (DT). The study highlights the potential of top-view depth video cameras for accurate cow lameness detection and classification, with significant implications for livestock health management.

1. Introduction

Lameness in cows is a widespread and costly problem that has a detrimental impact on animal welfare and the dairy industry [1]. It manifests as abnormal gait and posture, resulting in pain, decreased productivity, reproductive issues, and increased mortality rates [2]. The early and accurate detection of cow lameness is crucial to promptly intervene and effectively treat the condition, mitigating its negative consequences [3]. The development of a computer vision-based cow lameness system holds tremendous potential in improving animal welfare and dairy farm economics [4]. Such a system can provide real-time monitoring of cow gait and behavior, facilitating the timely identification of lameness cases and enabling prompt intervention. By automating the detection process, the system reduces reliance on human observers, eliminates subjectivity, and enables the continuous monitoring of large herds. We propose an automated cow lameness detection system utilizing depth image analysis to streamline the process and minimize human surveillance. This system offers advantages such as reduced workload and the early prediction of lameness. Implementing this automated system improves animal welfare, optimizes farm management, and enhances cattle health and productivity, ultimately leading to increased profitability and sustainability in the dairy industry [5].
In our research, we focused on the detection of cow lameness using a depth camera. To achieve this, we employed the Detectron2 framework [6] for the simultaneous detection and segmentation of multiple cows. In our testing farm, the number of cows passing through during one period can range from a minimum of 56 to a maximum of 64. These periods occur both in the morning and evening. Given that 56 to 64 cows traverse the path between the milking station and the rest area twice a day, we utilize the Intersection over Union (IOU) technique for multi-cow tracking. A depth camera is strategically positioned along this pathway. For feature extraction, we calculate the highest points along the cow’s backbone, resulting in a feature vector with a length of 176 derived from a 132 × 176 depth image. To evaluate our approach, we experimented with various machine learning classifiers, including K-Nearest Neighbors (KNN), Random Forest (RF), and Decision Tree (DT). These classifiers were trained on a dataset that encompassed labeled instances of both healthy and lame cows.

2. Research Background and Related Works

Traditional methods for cow lameness detection, such as manual locomotion scoring, often suffer from limitations in terms of accuracy and the ability to promptly identify mild lameness. As a result, there is a growing demand for advanced technologies and automated systems that can improve the accuracy and timeliness of lameness detection and monitoring in cattle. Several approaches have been explored in the realm of cow lameness detection. Some methods involve the use of 2D videos and deep learning algorithms, such as convolutional neural networks (CNNs) [7] and Mask R-CNNs [8]. These approaches have been applied to extract features critical for assessing lameness, such as spine shape and leg distances [9,10]. Additionally, researchers have developed cow lameness prediction models based on sophisticated techniques like the You Only Look Once version 3 (YOLOv3) [11] and long short-term memory (LSTM) networks [12], achieving high accuracy in predicting lameness scores. Furthermore, there have been efforts to use cow back posture as a basis for classifying lameness in dairy cattle [13]. Incorporating sensor technology, some studies have explored the detection of lameness through locomotion or behavior analysis [14,15,16,17]. A neck-mounted mobile sensor system that combines local positioning and activity (acceleration) was tested and validated on a commercial UK dairy farm [18]. Cattle lameness causes considerable animal welfare problems and negatively affects the farm economy. Gait scoring techniques and claw health reports are commonly used for research and surveys, but few daily management solutions exist to monitor gait parameters from individual cows within a herd [19]. These sensor-based approaches provide valuable data for assessing cow health, but they also have their own set of challenges.
Recently, computer vision techniques, particularly depth image analysis, have emerged as promising alternatives for cow lameness detection. Depth image analysis harnesses the capabilities of depth-sensing cameras to extract precise features related to gait patterns and body posture [20,21]. Cattle behavior mainly refers to the animals’ continuous interaction with the environment and the way they express themselves. Hence, it is a valuable indicator in assessing the health and welfare of animals [22]. Utilizing cameras, depth sensors, and advanced algorithms, these techniques excel in discerning variations in posture, gait, and other visual indicators. The working principles involve capturing images or video footage of cows in specific areas, followed by applying image processing techniques like filtering, segmentation, and feature extraction. Extracted features, encompassing limb positions, body posture, and hoof movement, are then subjected to analysis by advanced machine learning algorithms, including convolutional neural networks (CNNs) [23]. Cow gait recordings were made during four consecutive night-time milking sessions on an Israeli dairy farm using a 3D camera. A live, on-the-spot-assessed 5-point locomotion score was the reference for the automatic lameness score evaluation. A dataset of 186 cows with four automatic lameness scores and four live locomotion score repetitions was used for testing three different classification methods [24]. The computer vision technique has been rapidly adopted in cow lameness detection research due to its noncontact characteristic and moderate price [25]. This non-contact monitoring method offers the advantage of early detection. However, challenges in this domain include the need for larger datasets, real-time processing algorithms, and practical integration into dairy farming operations.
To address these challenges, our research presents an innovative approach based on 3D images. The ability to sense 3D space using single depth cameras [26,27] has been a widely investigated topic in image processing and computer vision [28]. Monitoring the growth and body condition of cows is essential for the optimal management of modern dairy farms. However, monitoring is rarely performed on commercial farms. Modern technologies based on three-dimensional (3D) shape analysis could address this problem [29]. By utilizing advanced computer vision techniques, we aim to enhance the accuracy and reliability of cow lameness detection. Body cleanliness is considered an important indicator for evaluating cow welfare. At present, assessing the cleanliness of different cow body parts is considered a subjective and labor-intensive task. Automatic body cleanliness scoring needs to start with body part segmentation [30]. Our method focuses on multi-cow detection and segmentation [31,32], as well as tracking using IOU analysis. Additionally, we extract feature vectors from depth images, specifically targeting the highest points along a cow’s backbone spine. These features serve as input for three different machine learning classifiers, enabling the classification of lameness. This holistic approach seeks to contribute to the field by offering a robust and efficient solution that can effectively handle cow lameness detection, addressing this critical issue in dairy farming operations.

3. Materials and Methods

Our proposed system aims to develop a robust and accurate cow lameness classification system by leveraging depth image analysis. The objective is to automatically identify and classify lameness in cows based on their movement patterns captured through depth imaging. This system offers a non-invasive and objective approach for early lameness detection, enabling timely intervention and improved animal welfare. The proposed system consists of five main components: data preparation, automatic cow detection, tracking, feature extraction, and classification. Figure 1 illustrates the research methodology we propose.

3.1. Data Collection and Preprocessing

The datasets used in this study were captured using a depth camera (ifm03D303) at the Kunneppu Demonstration Farm in Hokkaido Prefecture, Japan. The depth camera was strategically positioned at a height of 3 m from the ground to capture comprehensive information about cow movements. The camera was placed in the middle of the pathway between the entrance and exit gates. Furthermore, the indoor house featured a concrete floor, as illustrated in Figure 2a,b. This camera setting ensured an optimal view of the cows and enabled accurate depth measurements. To collect the data, the depth camera captured three-dimensional (3D) information about the cows’ movements. The distance data obtained by the camera were stored in CSV format, with each row representing a frame. The distance measurements were recorded for various points within the captured field of view. A VGG annotator was used to make the annotation of cow regions. Figure 2c shows the data preparation process.
In the preprocessing stage, the depth data captured by the camera are reshaped into an image size of 132 × 176 pixels. The research work utilized a dataset of 4944 depth data images, which were annotated for cow detection. Among them, 4120 were used for training the customized cow detection model. These training images contained a total of 4302 cow instances. For validation purposes, a subset of 824 images was selected, which included 915 cow instances. Currently, we employ a random training split of 80% and validation split of 20% from a total of 4944 frames. In the future, we plan to enhance the robustness of our model by incorporating validation data from different dates. In Table 1, detailed information about the dataset is presented.
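The reshaping step above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and the assumption that each CSV row holds exactly one 132 × 176 frame of distance values are inferred from the description.

```python
import numpy as np

def load_depth_frames(csv_path, height=132, width=176):
    """Load depth frames stored one per CSV row and reshape each row of
    distance values into a (height, width) depth image."""
    rows = np.atleast_2d(np.loadtxt(csv_path, delimiter=","))
    assert rows.shape[1] == height * width, "unexpected number of points per frame"
    return rows.reshape(-1, height, width)
```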

3.2. Automatic Cow Detection

The proposed system employs the robust Detectron2 framework [33] for the purpose of customized cattle detection. This advanced framework harnesses the power of deep learning techniques to identify cows precisely and automatically within depth images. To achieve this level of accuracy, the system undergoes a fine-tuning process with specialized datasets containing cow-specific visual data. By adapting a pre-trained model to the distinctive visual characteristics of cows, the system significantly enhances its predictive accuracy, ensuring reliable and efficient cattle detection.

3.2.1. Noise Removal

During the cow detection process, our proposed system operates continuously throughout the day, monitoring the movement of 56 to 64 cows between the milking production area and the resting area. This activity occurs during two time periods: in the morning from 5 a.m. to 8 a.m., and in the evening from 2 p.m. to 5 p.m. During these times, the farmer engages in pathway cleaning tasks. In Figure 3a, an example of our detection model identifying a human region as a cow is shown. This is considered a noise region, and it is necessary to eliminate this erroneous detection. To detect the cow regions accurately, the pixel values of each detected region are summed and analyzed. This analysis aids in setting a pixel sum threshold that effectively distinguishes cows from humans, enabling the system to remove the human regions. By excluding these regions, the system focuses on the cow regions, enabling the more reliable and precise analysis of cow lameness. Figure 3b presents the pixel sums of the detected cow and human regions. In our research, we establish the detected cow region as encompassing areas that exceed a defined threshold value (Th > 4000).
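The thresholding idea can be sketched as below, using the Th > 4000 value from the text; the helper name and mask representation are illustrative assumptions.

```python
import numpy as np

PIXEL_SUM_THRESHOLD = 4000  # Th from the text: larger mask pixel sums indicate cows

def filter_cow_regions(masks):
    """Discard detected regions (e.g. humans on the pathway) whose binary-mask
    pixel sum does not exceed the threshold, keeping only cow regions."""
    return [m for m in masks
            if int(np.asarray(m, dtype=np.int64).sum()) > PIXEL_SUM_THRESHOLD]
```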

3.2.2. Cow Depth Region Extraction

After removing the human noise regions, we need to obtain the depth values of the detected cow regions. To accomplish this, a binary mask specific to the cow-detected areas is generated using our detection model. This binary mask is then applied to the original depth image through element-wise multiplication, yielding the depth values for the cow region. Figure 4 presents the cow depth region extraction from detection.
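The mask multiplication described above reduces to a single array operation; the function name below is an illustrative assumption.

```python
import numpy as np

def extract_cow_depth(depth_image, binary_mask):
    """Element-wise multiplication of the original depth image with the cow's
    binary mask: pixels inside the detected region keep their depth values,
    everything else becomes zero."""
    return depth_image * binary_mask.astype(depth_image.dtype)
```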

3.3. Automatic Cow Tracking

For tracking, our system relies on the Intersection over Union (IOU) metric to assess the overlap of bounding boxes across consecutive frames. By analyzing IOU values and adjusting coordinates according to a predefined threshold, the system proficiently tracks the movement of cows. Figure 5 provides a visual representation of the IOU tracking process, showcasing the comparison between IOU values in the current frame and the previous frame with a designated IOU threshold.
When the IOU value between bounding boxes in consecutive frames exceeds or equals the specified threshold, the system retains the same tracking ID. Conversely, if the IOU value falls below the threshold, a new tracking ID is assigned. Following this tracking process, the system efficiently organizes and archives the tracked cows, saving them into individual folders corresponding to their respective track IDs, which are sequentially numbered as 1, 2, 3, and so on. This structured approach streamlines data management and facilitates easy access to and analysis of the tracked cow data within their designated folders. Figure 6 illustrates the process of cow tracking and saving to folders according to tracking IDs.
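The ID-assignment logic above can be sketched as follows. This is a simplified greedy matcher under the stated threshold rule, not the authors' exact implementation; the threshold value 0.5 and the function names are assumptions.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) bounding boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union else 0.0

def update_tracks(prev_tracks, boxes, threshold=0.5, next_id=1):
    """Match each detection to the previous-frame track with the highest IOU.
    A match at or above the threshold keeps the existing ID; otherwise a new
    sequential ID is assigned. Returns (new_tracks, next unused ID)."""
    new_tracks = {}
    for box in boxes:
        best_id, best_iou = None, 0.0
        for tid, prev_box in prev_tracks.items():
            v = iou(box, prev_box)
            if v > best_iou:
                best_id, best_iou = tid, v
        if best_id is not None and best_iou >= threshold and best_id not in new_tracks:
            new_tracks[best_id] = box  # same cow: keep its tracking ID
        else:
            new_tracks[next_id] = box  # new cow: assign the next sequential ID
            next_id += 1
    return new_tracks, next_id
```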

3.4. Cow Lameness Classification

The cow lameness classification system consists of two main components: feature extraction and classification. The feature extraction component analyzes sensor data to extract relevant information related to cow lameness, while the classification component uses three machine learning algorithms to classify the extracted features into different lameness categories. This system aims to enhance the early detection and monitoring of cow lameness, ultimately improving the welfare of cattle.

3.4.1. Feature Extraction

Feature extraction on a cow frame follows specific criteria: selecting frames in which the bounding box spans the full image width (176 pixels), transforming the depth values, applying a Gaussian filter, and finding the maximum values along the cow’s backbone. The process involves several steps. Firstly, frames with the desired bounding box width of 176 are selected. Next, the depth values in these frames are transformed using Equation (1), converting depth into height above the floor. Figure 7 illustrates the depth-to-height transformation.
$\mathrm{transform} = \mathrm{distance} - \mathrm{depth\_img}(x, y)$ (1)
To further process the transformed values, a Gaussian filter is applied, which reduces noise and smooths the data. This is achieved by convolving the transformed values with a Gaussian function calculated using Equation (2). Figure 8 presents an illustration of the filtered image.
$G(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}$ (2)
Following the height transformation and Gaussian filtering, our process culminates in the extraction of the highest points along the cow’s backbone line, performed using Equation (3). These extracted highest points serve as feature vectors, encapsulating essential characteristics for subsequent analysis and classification. Figure 9a illustrates how the highest backbone values are extracted.
$\mathrm{backbone}_j = \max_{i} G(i, j) \quad \text{for } i = 1, 2, \ldots, m, \; 1 \le j \le n$ (3)
where transform: the transformed (height) value from Equation (1); distance: the camera distance, from the ground to 2.8 m above; depth_img(x, y): the depth image value at coordinates (x, y); G(x, y): the value of the Gaussian function at coordinates (x, y); σ (sigma): the standard deviation of the Gaussian distribution; m: the number of rows in G, which in this case is 132; n: the number of columns in G, which in this case is 176.
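Putting Equations (1)–(3) together, the pipeline for one full-width frame can be sketched as below. The sigma value is an assumption (the text does not state it), and the camera distance of 2.8 m is taken from the definitions above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

CAMERA_DISTANCE = 2.8  # distance from the camera to the ground (m), per Equation (1)

def backbone_features(depth_img, sigma=1.0):
    """Feature-extraction pipeline for one full-width frame:
    Eq. (1) depth-to-height transform, Eq. (2) Gaussian smoothing,
    Eq. (3) column-wise maximum -> a feature vector of length n (176)."""
    height = CAMERA_DISTANCE - depth_img             # Equation (1)
    smoothed = gaussian_filter(height, sigma=sigma)  # Equation (2)
    return smoothed.max(axis=0)                      # Equation (3)
```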

Lameness and No-Lameness Cows

In Figure 9b, we can observe that most of the lame cows exhibit a curved backbone, measured from the starting point (their head and neck), mostly with values lower than 1.2. In contrast, for non-lame cows, their highest point in the backbone is straight, predominantly with values greater than 1.2.
The resulting values are then analyzed, and the maximum value in the region of interest corresponds to a prominent feature of the cow’s backbone. After extracting the highest values along the cow’s backbone as feature vectors, our next step involves their classification using methods such as K-Nearest Neighbor (KNN), Random Forest, and Decision Tree.
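A minimal sketch of the classification stage with scikit-learn is shown below; the hyperparameters (tree count, k) are illustrative assumptions, as the text does not specify them.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def train_classifiers(X_train, y_train):
    """Fit the three evaluated classifiers on 176-long backbone feature
    vectors labelled 0 (no lameness) or 1 (lameness)."""
    models = {
        "RF": RandomForestClassifier(n_estimators=100, random_state=0),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "DT": DecisionTreeClassifier(random_state=0),
    }
    for model in models.values():
        model.fit(X_train, y_train)
    return models
```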

4. Performance Evaluation

The performance evaluation section consists of three parts: detection accuracy, tracking accuracy, and classification accuracy.

4.1. Automatic Cow Detection Accuracy

To evaluate the detection accuracy of our system, we collected testing data over a period of three days, encompassing both morning and evening sessions. Specifically, on 3 September (whole day) and 4 September (morning), a total of 56 cows were included in the dataset. For the evening of 4 September and the whole day of 5 September, we expanded the dataset to include 64 cows. These dates were intentionally chosen because during this period, accurate ground truth lameness scores were available from experts at the cow farm. Notably, our system successfully detected all cows during the entire duration of the three days, in both the morning and evening sessions. Remarkably, our system achieved an impressive average detection accuracy of 99.94%, demonstrating its high performance and reliability in accurately identifying and tracking cows. The evaluation results for automatic cow detection are presented in Table 2.
$\mathrm{Accuracy\ of\ Cow\ Detection} = \frac{TP + TN}{TP + FP + TN + FN}$ (4)
where TP: true positives; FP: false positives; TN: true negatives; FN: false negatives.
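Equation (4) is a one-line computation:

```python
def detection_accuracy(tp, fp, tn, fn):
    """Cow detection accuracy, Equation (4): correct predictions over all predictions."""
    return (tp + tn) / (tp + fp + tn + fn)
```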

4.2. Automatic Cow Tracking Accuracy

For evaluating the performance of cow multi-object tracking, we have adopted the Multi-Object Tracking Accuracy (MOTA) metric [34]. The MOTA calculation is defined by Equation (5). The evaluation results for automatic cow tracking are presented in Table 3. The average accuracy was computed for all testing dates, yielding an overall accuracy of 99.92% over a three-day testing period.
$\mathrm{MOTA} = 1 - \frac{\sum_{t} (FN_t + FP_t + IDS_t)}{\sum_{t} GT_t}$ (5)
where IDS: ID switches; GT: ground-truth objects; FN: missed tracks (false negatives); FP: false tracks (false positives).
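Equation (5) can be computed from per-frame error counts as follows; the argument names are illustrative.

```python
def mota(fn_t, fp_t, ids_t, gt_t):
    """Multi-Object Tracking Accuracy, Equation (5): one minus the ratio of
    all per-frame errors (misses, false tracks, ID switches) to the total
    number of ground-truth objects, summed over frames t."""
    errors = sum(fn + fp + ids for fn, fp, ids in zip(fn_t, fp_t, ids_t))
    return 1.0 - errors / sum(gt_t)
```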

4.3. Cow Lameness Classification Accuracy

In this classification task, we classified 45 cows as No Lameness and 31 cows as Lameness. For training, we used 885 frames of No Lameness and 491 frames of Lameness; for testing, 229 frames of No Lameness and 116 frames of Lameness. The RF model achieved an accuracy of 82.3% during training, the KNN model 81.2%, and the DT model 70.4%. In the testing phase, the accuracies were 81.1% for RF, 78.2% for KNN, and 69.2% for DT. The evaluation results for lameness classification are presented in Table 4.
Figure 10, Figure 11 and Figure 12 provide a comprehensive visual representation of our classification results using different algorithms. Figure 10a presents the lameness testing results obtained with the Random Forest (RF) classifier, while Figure 10b displays the associated confusion matrix. Figure 11a showcases the lameness testing results achieved with the K-Nearest Neighbor (KNN) algorithm, and Figure 11b the corresponding confusion matrix. Lastly, Figure 12a illustrates the lameness testing results derived from the Decision Tree (DT) classifier, and Figure 12b the confusion matrix for DT. These figures collectively offer a visual perspective on the effectiveness of each classification method in distinguishing between ‘Lame’ and ‘Not Lame’ cow conditions. In Figures 10a, 11a and 12a, red dotted lines denote instances of incorrect predictions, while all other points represent correctly classified frames. This visual representation helps us discern the accuracy and precision of our classification models, providing valuable insights into their performance.

5. Discussion

In this section, we delve into the details and outcomes of our proposed computer vision system, emphasizing its capabilities in cow automatic detection, depth region extraction, and automatic tracking, particularly in a real-world scenario where cows share their path with farmers. We also discuss the challenges related to human detection and the system’s performance in cow lameness classification. Furthermore, we outline the limitations of our current approach and articulate our plans for future enhancements.
Our computer vision system was rigorously tested in a practical environment over three days, involving the monitoring of a fluctuating population of cows ranging from 56 to 64. The system’s effectiveness in automatic cow detection, depth region extraction, and tracking was assessed, and the results are presented in Table 2 and Table 3, which illustrate the system’s testing accuracy in cow detection and tracking.
One of the notable challenges encountered in this real-world setting is the presence of humans, particularly farmers, in the same passage as the cows. Humans can easily be misidentified as cows, leading to false detections. However, our system incorporates a detection thresholding mechanism, which helps distinguish cows from humans, thereby reducing false positives. This feature contributes to the reliability of our system’s cow detection and tracking capabilities.
Beyond cow detection and tracking, our system addresses the critical issue of cow lameness classification. Lameness in cows is a key indicator of their health and well-being, and timely identification can lead to improved animal welfare. To classify cow lameness, we utilized the highest backbone values of cows as feature vectors and employed three different machine learning algorithms.
However, as shown in Figure 10, Figure 11 and Figure 12, our classification system does exhibit limitations. These stem from our reliance on a single type of feature, the highest backbone values, for classification; consequently, there are instances where our system produces incorrect predictions. To overcome these limitations and further enhance the system’s capabilities, we will explore additional feature extraction methods: histograms of depth, depth gradients, and landmark-based depth features. We expect these methods to yield better accuracy and performance.
In the future, we plan to compare the cow lameness classification results with the population frequency in an average dairy herd in Japan. Our testing classification result achieved 81.1% for our farm. Our testing farm comprises over 100 cows. Human monitoring for all cows incurs substantial costs and yields inaccurate results. However, our system, utilizing only one depth camera, not only saves significant costs but also provides accurate results. We recognize the potential impact of uneven floor surfaces on our testing accuracy, particularly in the context of focusing on cows’ highest points within indoor settings with concrete flooring. We plan to incorporate adjustments in our methodology to account for the uneven floor surface as a contributing factor to any decrease in accuracy. Additionally, we will consider the age of cows in our future studies and commit to integrating this consideration into our research methods.

6. Conclusions

In this study, our proposed system was subjected to rigorous real-world testing, involving a substantial cohort of 56 to 64 cows. Observations were conducted twice daily, encompassing both morning and evening sessions. The aim was to assess the system’s practical applicability and resilience in the realm of cow lameness detection and classification. Our approach involved harnessing the precise characteristics of the cow’s backbone spine line as a feature vector, coupled with the utilization of machine learning algorithms. The remarkable outcomes achieved underscore the effectiveness of this approach in automating the categorization of cow lameness levels. The consistency of our testing across varying times of the day and diverse cow behaviors provided invaluable insights into the system’s reliability and robustness under real-world conditions. This extensive validation further highlights the potential for the seamless integration of the proposed system into contemporary livestock management practices.
In conclusion, our study marks a significant stride forward in the quest for automated cow lameness detection and classification. The fusion of meticulous real-world testing involving 56 to 64 cows, combined with strategic feature selection and machine learning algorithms, underscores the practical viability of our system. This advancement paves the way for improved animal welfare and more efficient farm operations, setting the stage for the adoption of our technology in real-world agricultural settings. As we venture forward, refining and expanding upon these findings will undoubtedly contribute to the ongoing progress in precision livestock monitoring.

Author Contributions

For this research article, the authors made the following contributions: Conceptualization, S.C.T. and T.T.Z.; Methodology, S.C.T.; Software, S.C.T.; Validation, S.C.T. and P.T.; Formal Analysis, S.C.T. and P.T.; Investigation, S.C.T., T.O., M.A. and I.K.; Resources, T.T.Z.; Data Curation, S.C.T.; Writing—Original Draft Preparation, S.C.T.; Writing—Review and Editing, S.C.T., P.T. and T.T.Z.; Visualization, S.C.T.; Supervision, T.T.Z.; Project Administration, T.T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This publication is subsidized by JKA (Grant Number: 2023M-425) through its promotion funds from KEIRIN RACE.

Institutional Review Board Statement

Ethical review and approval were waived for this study, as it imposed no discomforting limitations on the animals. The collection of image data for analysis was conducted using a deployed depth camera, ensuring the undisturbed natural parturient behavior of animals and adhering to regular management practices on the farm.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets featured in this study are available upon request from the corresponding author.

Acknowledgments

This work was supported in part by “The Development and demonstration for the realization of problem-solving local 5G” from the Ministry of Internal Affairs and Communications and the Project of “the On-farm Demonstration Trials of Smart Agriculture” from the Ministry of Agriculture, Forestry and Fisheries (funding agency: NARO). This publication is subsidized by JKA through its promotion funds from KEIRIN RACE.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Van Nuffel, A.; Zwertvaegher, I.; Pluym, L.; Van Weyenberg, S.; Thorup, V.M.; Pastell, M.; Sonck, B.; Saeys, W. Lameness detection in dairy cows: Part 1. How to distinguish between non-lame and lame cows based on differences in locomotion or behavior. Animals 2015, 5, 838–860. [Google Scholar] [CrossRef]
  2. Meseret, S. A review of poultry welfare in conventional production system. Livest. Res. Rural Dev. 2016, 28, 234–245. [Google Scholar]
  3. Jiang, B.; Song, H.; Wang, H.; Li, C. Dairy cow lameness detection using a back curvature feature. Comput. Electron. Agric. 2022, 194, 106729. [Google Scholar] [CrossRef]
  4. Tun, S.C.; Zin, T.T.; Tin, P.; Kobayashi, I.I. Cow Lameness Detection Using Depth Image Analysis. In Proceedings of the IEEE 11th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, 18–21 October 2022; pp. 492–493. [Google Scholar]
  5. Arazo, E.; Aly, R.; McGuinness, K. Segmentation Enhanced Lameness Detection in Dairy Cows from RGB and Depth Video. arXiv 2022, arXiv:2206.04449. [Google Scholar]
  6. Pham, V.; Pham, C.; Dang, T. Road damage detection and classification with detectron2 and faster r-cnn. In Proceedings of the 2020 IEEE International Conference on Big Data (Big Data), Atlanta, GA, USA, 10–13 December 2020; pp. 5592–5601. [Google Scholar]
  7. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49. [Google Scholar] [CrossRef]
  8. Fregonesi, J.A.; Veira, D.M.; Von Keyserlingk, M.A.G.; Weary, D.M. Effects of bedding quality on lying behavior of dairy cows. J. Dairy Sci. 2007, 90, 5468–5472. [Google Scholar] [CrossRef] [PubMed]
  9. Viazzi, S.; Bahr, C.; Schlageter-Tello, A.A.; Van Hertem, T.; Romanini, C.E.B.; Pluk, A.; Halachmi, I.; Lokhorst, C.; Berckmans, D. Analysis of individual classification of lameness using automatic measurement of back posture in dairy cattle. J. Dairy Sci. 2013, 96, 257–266. [Google Scholar] [CrossRef]
  10. Van Nuffel, A.; Zwertvaegher, I.; Van Weyenberg, S.; Pastell, M.; Thorup, V.M.; Bahr, C.; Sonck, B.; Saeys, W. Lameness detection in dairy cows: Part 2. Use of sensors to automatically register changes in locomotion or behavior. Animals 2015, 5, 861–885. [Google Scholar] [CrossRef] [PubMed]
  11. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  12. Wu, D.; Wu, Q.; Yin, X.; Jiang, B.; Wang, H.; He, D.; Song, H. Lameness detection of dairy cows based on the YOLOv3 deep learning algorithm and a relative step size characteristic vector. Biosyst. Eng. 2020, 189, 150–163. [Google Scholar] [CrossRef]
  13. Zin, T.T.; Htet, Y.; Tun, S.C.; Tin, P. Artificial Intelligence Topping on Spectral Analysis for Lameness Detection in Dairy Cattle. Proc. Annu. Conf. Biomed. Fuzzy Syst. Assoc. 2022, 35, C-3. [Google Scholar]
  14. Thorup, V.; Munksgaard, L.; Robert, P.-E.; Erhard, H.; Thomsen, P.; Friggens, N. Lameness detection via leg-mounted accelerometers on dairy cows on four commercial farms. Animal 2015, 9, 1704–1712. [Google Scholar] [CrossRef] [PubMed]
  15. Haladjian, J.; Haug, J.; Nüske, S.; Bruegge, B. A wearable sensor system for lameness detection in dairy cattle. Multimodal Technol. Interact. 2018, 2, 27. [Google Scholar] [CrossRef]
  16. Pastell, M.; Kujala, M.; Aisla, A.M.; Hautala, M.; Poikalainen, V.; Praks, J.; Veermäe, I.; Ahokas, J. Detecting cow’s lameness using force sensors. Comput. Electron. Agric. 2008, 64, 34–38. [Google Scholar] [CrossRef]
  17. Thorup, V.M.; Nielsen, B.L.; Robert, P.-E.; Giger-Reverdin, S.; Konka, J.; Michie, C.; Friggens, N.C. Lameness affects cow feeding but not rumination behavior as characterized from sensor data. Front. Vet. Sci. 2016, 3, 37. [Google Scholar] [CrossRef] [PubMed]
  18. Barker, Z.; Diosdado, J.V.; Codling, E.; Bell, N.; Hodges, H.; Croft, D.; Amory, J. Use of novel sensors combining local positioning and acceleration to measure feeding behavior differences associated with lameness in dairy cattle. J. Dairy Sci. 2018, 101, 6310–6321. [Google Scholar] [CrossRef]
  19. Maertens, W.; Vangeyte, J.; Baert, J.; Jantuan, A.; Mertens, K.C.; De Campeneere, S.; Pluk, A.; Opsomer, G.; Van Weyenberg, S.; Van Nuffel, A. Development of a real time cow gait tracking and analysing tool to assess lameness using a pressure sensitive walkway: The GAITWISE system. Biosyst. Eng. 2011, 110, 29–39. [Google Scholar] [CrossRef]
  20. Zheng, Z.; Zhang, X.; Qin, L.; Yue, S.; Zeng, P. Cows’ legs tracking and lameness detection in dairy cattle using video analysis and Siamese neural networks. Comput. Electron. Agric. 2023, 205, 107618. [Google Scholar] [CrossRef]
  21. Barney, S.; Dlay, S.; Crowe, A.; Kyriazakis, I.; Leach, M. Deep learning pose estimation for multi-cattle lameness detection. Sci. Rep. 2023, 13, 4499. [Google Scholar] [CrossRef]
  22. Venter, Z.S.; Hawkins, H.-J.; Cramer, M.D. Cattle don’t care: Animal behaviour is similar regardless of grazing management in grasslands. Agric. Ecosyst. Environ. 2019, 272, 175–187. [Google Scholar] [CrossRef]
  23. Alharthi, A.S.; Yunas, S.U.; Ozanyan, K.B. Deep learning for monitoring of human gait: A review. IEEE Sens. J. 2019, 19, 9575–9591. [Google Scholar] [CrossRef]
  24. Van Hertem, T.; Viazzi, S.; Steensels, M.; Maltz, E.; Antler, A.; Alchanatis, V.; Schlageter-Tello, A.A.; Lokhorst, K.; Romanini, E.C.; Bahr, C.; et al. Automatic lameness detection based on consecutive 3D-video recordings. Biosyst. Eng. 2014, 119, 108–116. [Google Scholar] [CrossRef]
  25. Kang, X.; Zhang, X.D.; Liu, G. A review: Development of computer vision-based lameness detection for dairy cows and discussion of the practical applications. Sensors 2021, 21, 753. [Google Scholar] [CrossRef] [PubMed]
  26. Iltis, A.; Snoussi, H. The Temporal PET Camera: A New Concept With High Spatial and Timing Resolution for PET Imaging. J. Imaging 2015, 1, 45–59. [Google Scholar] [CrossRef]
  27. Varaksin, A.Y.; Ryzhkov, S.V. Mathematical Modeling of Structure and Dynamics of Concentrated Tornado-like Vortices: A Review. Mathematics 2023, 11, 3293. [Google Scholar] [CrossRef]
  28. De Pellegrini, M.; Orlandi, L.; Sevegnani, D.; Conci, N. Mobile-Based 3D Modeling: An In-Depth Evaluation for the Application in Indoor Scenarios. J. Imaging 2021, 7, 167. [Google Scholar] [CrossRef]
  29. Le Cozler, Y.; Allain, C.; Caillot, A.; Delouard, J.; Delattre, L.; Luginbuhl, T.; Faverdin, P. High-precision scanning system for complete 3D cow body shape imaging and analysis of morphological traits. Comput. Electron. Agric. 2019, 157, 447–453. [Google Scholar] [CrossRef]
  30. Jia, N.; Kootstra, G.; Koerkamp, P.G.; Shi, Z.; Du, S. Segmentation of body parts of cows in RGB-depth images based on template matching. Comput. Electron. Agric. 2021, 180, 105897. [Google Scholar] [CrossRef]
  31. Jabbar, K.A.; Hansen, M.F.; Smith, M.L.; Smith, L.N. Early and non-intrusive lameness detection in dairy cows using 3-dimensional video. Biosyst. Eng. 2017, 153, 63–69. [Google Scholar] [CrossRef]
  32. Miekley, B.; Traulsen, I.; Krieter, J. Principal component analysis for the early detection of mastitis and lameness in dairy cows. J. Dairy Res. 2013, 80, 335–343. [Google Scholar] [CrossRef]
  33. Abhishek, A.V.S.; Kotni, S. Detectron2 object detection & manipulating images using cartoonization. Int. J. Eng. Res. Technol. (IJERT) 2021, 10. [Google Scholar]
  34. Bernardin, K.; Stiefelhagen, R. Evaluating multiple object tracking performance: The clear mot metrics. EURASIP J. Image Video Process. 2008, 2008, 246309. [Google Scholar] [CrossRef]
Figure 1. Flow diagram of the proposed research.
Figure 2. (a) Illustration of camera setting. (b) Testing environment. (c) Data preparation process.
Figure 3. (a) Noise region (human). (b) The process of noise removal (human).
Figure 4. Cow depth region extraction.
Figure 5. Cow tracking with IOU.
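Figure 5 shows frame-to-frame cow tracking via Intersection over Union. As a minimal sketch (not the authors' implementation), the IoU of two axis-aligned boxes in (x1, y1, x2, y2) form can be computed as:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically assigned the ID of the existing track with the highest IoU above some threshold (the common 0.5 cutoff is an assumption here, not a value from the paper).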
Figure 6. Cow tracking and saving to folder according to tracking IDs.
Figure 7. Depth-to-height transformation.
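The depth-to-height transformation of Figure 7 amounts to subtracting each top-view depth reading (camera-to-surface distance) from the camera's mounting height. The 3.0 m mount height below is a placeholder assumption for illustration, not a value from the paper:

```python
def depth_to_height(depth_mm, camera_height_mm=3000.0):
    """Convert a top-view depth reading into a height above the floor.
    camera_height_mm is a hypothetical mounting height used only
    for illustration."""
    return camera_height_mm - depth_mm
```

Under this convention, smaller depth values (surfaces closer to the camera) map to larger heights, so the cow's backbone produces the maximum height in its region.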
Figure 8. Gaussian filter for noise reduction.
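The Gaussian filtering step of Figure 8 can be illustrated on a 1-D backbone height profile. This pure-Python sketch (kernel radius 3σ and edge renormalization are implementation assumptions) performs the usual normalized-kernel convolution:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian weights over [-radius, radius]."""
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def smooth(profile, sigma=1.0):
    """Gaussian-smooth a 1-D height profile, renormalizing at the edges
    where the kernel extends past the signal."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    out = []
    for i in range(len(profile)):
        acc = wsum = 0.0
        for j, kj in enumerate(k):
            idx = i + j - radius
            if 0 <= idx < len(profile):
                acc += kj * profile[idx]
                wsum += kj
        out.append(acc / wsum)
    return out
```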
Figure 9. (a) Extraction of the maximum backbone value. (b) Maximum height points along the backbone.
Figure 10. (a) Lameness testing results with RF; (b) confusion matrix with RF.
Figure 11. (a) Lameness testing results with KNN; (b) confusion matrix with KNN.
Figure 12. (a) Lameness testing results with DT; (b) confusion matrix with DT.
Table 1. Dataset information.

| Dataset    | Date                      | Time        | #Frames | #Instances |
|------------|---------------------------|-------------|---------|------------|
| Training   | 22 January 2023 (Morning) | 05:00–08:00 | 4120    | 4302       |
| Validation | 22 January 2023 (Morning) | 05:00–08:00 | 824     | 915        |
Table 2. Automatic cow detection accuracy.

| Date             | Time | #Cows | TP   | TN | FP | FN | Accuracy (%) |
|------------------|------|-------|------|----|----|----|--------------|
| 3 September 2022 | AM   | 56    | 1217 | 0  | 0  | 0  | 100          |
| 3 September 2022 | PM   | 56    | 1273 | 0  | 4  | 0  | 99.69        |
| 4 September 2022 | AM   | 56    | 1240 | 0  | 0  | 0  | 100          |
| 4 September 2022 | PM   | 64    | 1836 | 1  | 4  | 0  | 99.95        |
| 5 September 2022 | AM   | 64    | 1736 | 0  | 0  | 0  | 100          |
| 5 September 2022 | PM   | 64    | 1477 | 0  | 0  | 0  | 100          |
| Average          |      |       |      |    |    |    | 99.94        |
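The per-session accuracies in Table 2 follow the standard confusion-matrix definition, accuracy = (TP + TN) / (TP + TN + FP + FN). A quick sketch, checked against the 3 September (PM) row:

```python
def detection_accuracy(tp, tn, fp, fn):
    """Detection accuracy from confusion-matrix counts, as a percentage."""
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)
```

For example, `detection_accuracy(1273, 0, 4, 0)` reproduces the 99.69% reported for that session.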
Table 3. Automatic cow tracking accuracy.

| Date             | Time | #Cows | GT   | FP | FN | IDS | MOTA (%) |
|------------------|------|-------|------|----|----|-----|----------|
| 3 September 2022 | AM   | 56    | 1247 | 0  | 0  | 0   | 100      |
| 3 September 2022 | PM   | 56    | 1297 | 0  | 2  | 3   | 99.61    |
| 4 September 2022 | AM   | 56    | 1257 | 0  | 0  | 0   | 100      |
| 4 September 2022 | PM   | 64    | 1843 | 1  | 0  | 1   | 99.89    |
| 5 September 2022 | AM   | 64    | 1778 | 0  | 0  | 0   | 100      |
| 5 September 2022 | PM   | 64    | 1498 | 0  | 0  | 0   | 100      |
| Average          |      |       |      |    |    |     | 99.92    |
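The MOTA column in Table 3 follows the CLEAR MOT definition [34], MOTA = 1 − (FP + FN + IDS) / GT, where GT is the number of ground-truth objects over all frames. As a sketch:

```python
def mota(gt, fp, fn, ids):
    """Multiple Object Tracking Accuracy (CLEAR MOT), as a percentage."""
    return 100.0 * (1.0 - (fp + fn + ids) / gt)
```

Plugging in the 3 September (PM) row, `mota(1297, 0, 2, 3)` reproduces the reported 99.61%.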
Table 4. Performance metrics for training and testing accuracy.

| Dataset  | Date                   | Period                             | RF (%) | KNN (%) | DT (%) |
|----------|------------------------|------------------------------------|--------|---------|--------|
| Training | 3–5 September 2022     | a.m., p.m. (a.m. only on 5 Sept.)  | 82.3   | 81.2    | 70.4   |
| Testing  | 5 September 2022       | p.m.                               | 81.1   | 78.2    | 69.2   |
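Table 4 compares RF, KNN, and DT classifiers trained on the backbone-height feature vectors. As a toy illustration of the nearest-neighbour idea only (the feature values, labels, and k below are invented, not the paper's data), a minimal KNN classifier can be written as:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    neighbours = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in neighbours[:k])
    return votes.most_common(1)[0][0]

# Hypothetical example: maximum backbone heights (cm) at three back
# positions per cow; values are illustrative only.
TRAIN = [
    ([92.0, 93.5, 92.8], "sound"),
    ([91.8, 93.0, 92.5], "sound"),
    ([95.2, 89.0, 94.1], "lame"),
    ([94.8, 88.5, 93.6], "lame"),
]
```

A new profile is then labeled by its closest training profiles, e.g. `knn_classify(TRAIN, [95.0, 88.8, 93.9])`.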
Tun, S.C.; Onizuka, T.; Tin, P.; Aikawa, M.; Kobayashi, I.; Zin, T.T. Revolutionizing Cow Welfare Monitoring: A Novel Top-View Perspective with Depth Camera-Based Lameness Classification. J. Imaging 2024, 10, 67. https://doi.org/10.3390/jimaging10030067