Appl. Sci. 2019, 9(22), 4914; https://doi.org/10.3390/app9224914

Article
Bag-of-Visual-Words for Cattle Identification from Muzzle Print Images
1 Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 971 87 Luleå, Sweden
2 Faculty of Engineering, Al-Azhar University, P.O. Box 83513 Qena, Egypt
3 Centre for Security, Communications and Network Research, University of Plymouth, Plymouth PL4 8AA, UK
4 Department of Computer Science, Faculty of Computers and Information, South Valley University, P.O. Box 83523 Qena, Egypt
* Author to whom correspondence should be addressed.
Received: 7 September 2019 / Accepted: 12 November 2019 / Published: 15 November 2019

Abstract

Cattle, buffalo and cow identification plays an influential role in cattle traceability from birth to slaughter, understanding disease trajectories and large-scale cattle ownership management. Muzzle print images are considered discriminating cattle biometric identifiers for biometric-based cattle identification and traceability. This paper presents an exploration of the performance of the bag-of-visual-words (BoVW) approach in cattle identification using local invariant features extracted from a database of muzzle print images. Two local invariant feature detectors—namely, speeded-up robust features (SURF) and maximally stable extremal regions (MSER)—are used as feature extraction engines in the BoVW model. The performance evaluation criteria include several factors, namely, the identification accuracy, processing time and the number of features. The experimental work measures the performance of the BoVW model under a variable number of input muzzle print images in the training, validation, and testing phases. The identification accuracy values when utilizing the SURF feature detector and descriptor were 75%, 83%, 91%, and 93% when 30%, 45%, 60%, and 75% of the database, respectively, was used in the training phase. However, using MSER as a points-of-interest detector combined with the SURF descriptor achieved accuracies of 52%, 60%, 67%, and 67%, respectively, with the same training sizes. The research findings have proven the feasibility of deploying the BoVW paradigm in cattle identification using local invariant features extracted from muzzle print images.
Keywords:
computer vision; biometrics; cattle identification; bag-of-visual-words; muzzle print images

1. Introduction

Cattle, buffalo and cows are the major sources of meat in the food supply chain and their protection has become a vital need. Cattle identification is the process of accurately recognizing individual animals—buffalo and cows—via a unique physical marker or biometric identifier. Cattle identification is beneficial to different stakeholders, including animal producers, food consumers and the food industry [1]. For instance, cattle identification systems contribute to limiting the spread of animal diseases by allowing a better understanding of disease trajectories and therefore effectively managing cattle vaccination programs. In addition, cattle identification helps in limiting cattle losses, reducing the costs of disease destruction, minimizing trade losses and facilitating cattle ownership management in large-scale farms [2,3].
Conventional buffalo and cow identification methods are divided into three groups: permanent, temporary and electronic identification methods [4]. Traditional identification methods, such as tattooing, branding, ear notching, and radio-frequency identification (RFID) tagging, confront several challenges pertaining to their susceptibility to shape deformations, tag losses, fraud, animal-welfare concerns and limited scalability [5,6]. Furthermore, classic cattle identification methods suffer from installation, operational and security limitations [7]. Therefore, the traditional methods are not sufficiently reliable for cattle identification. The current situation raises the necessity for new cattle identification systems, not only for living cattle but also for animal products [8,9]. Recently, visual cattle biometrics have become an emerging research topic in computer vision [10].
Biometrics is a technology used to recognize humans using physiological or behavioral characteristics in both civilian and forensic applications [11,12]. Applying biometrics to individual cattle identification is a promising technology that overcomes many traditional identification problems. Several cattle identifiers have been studied, such as muzzle print patterns [13], iris patterns [14], retinal vascular patterns [15,16], facial images [17] and DNA profiles [18]. Cattle muzzle print images display distinct grooves, valleys and beaded structures, and they are considered a unique and time-immutable biometric trait that can identify cattle with accuracy similar to that of human fingerprints [19,20].
Unlike human biometrics, cattle biometrics have attracted less research attention for two main reasons—the lack of standard benchmarking databases and the lack of common features such as minutiae and singular points in human fingerprints [21]. Most of the research performed in cattle identification has utilized muzzle print images combined with local invariant features such as scale-invariant feature transform (SIFT) [22] and speeded-up robust features (SURF) [23] to discriminate between cattle. To achieve more robustness and accuracy in cattle-biometric-based identification systems, further investigations are needed.
Although several approaches have been used for extracting and matching features from muzzle print images, bag-of-visual-words (BoVW) has not yet been investigated for cattle identification. This study contributes to the biometric-based cattle identification domain by exploring the performance of the BoVW paradigm for cattle identification purposes. To build the core of the BoVW model, the study utilizes speeded-up robust features (SURF) [24] and maximally stable extremal regions (MSER) [25] as two scale- and rotation-invariant local feature detectors. Identification accuracy, processing time and number of extracted features are considered the performance metrics. The reported findings from this work open the door to further research on cattle identification that can be extended to other types of animals and other biometric identifiers as well.
The rest of this article is organized as follows. Section 2 summarizes the related work on cattle identification using biometric traits. Section 3 explains the proposed cattle identification system and describes the system’s components concerning bag-of-visual-words, feature extraction approaches and the classification phase. Experimental results are reported in Section 4. Section 5 discusses the research findings, highlights the research limitations and proposes future research directions. Finally, concluding remarks and future work are given in Section 6.

2. Related Work

Cattle muzzle prints have received considerable research attention compared to other animal biometric identifiers [26], and several approaches have been applied to muzzle print images for feature extraction and matching. Minagawa et al. [19] utilized the joint pixels of skin grooves as the key feature for muzzle print matching; this approach achieved matching scores of 60% and 12%. A database of 170 images collected from 43 animals was used, and 13 samples were excluded due to a feature extraction failure. The rest of the database was matched against itself, and twenty animals were correctly identified with a total accuracy of 66.6%.
Noviyanto and Arymurthy [23] applied SURF and its variant, upright SURF (U-SURF), in cattle identification for extracting muzzle print image features. A database of 120 muzzle print images was collected from 8 animals (15 images of each animal). The main experimental scenario considered 10 muzzle print images for the training sample, while 5 images were used as testing samples. This method achieved 90% identification accuracy under rotation conditions.
Awad et al. [22] applied SIFT, followed by the random sample consensus (RANSAC) algorithm to improve the robustness of SIFT feature matching. The identification scenario considered a database of 105 muzzle print images collected from 15 cattle (7 muzzle print images from each animal). The 7 images of each animal were swapped between the enrollment and the identification phases and therefore a confusion matrix with a dimension of 105 × 105 was created from the calculated similarity scores. The proposed SIFT with the RANSAC method achieved an identification accuracy of 93.3%. In Reference [27], a cattle classification approach was proposed based on utilizing multiclass support vector machines (SVMs) and texture features extracted by a box-counting scheme.
Furthermore, a SIFT-based method combined with a matching refinement technique and orientation information was introduced by Noviyanto and Arymurthy [28]. The proposed refinement technique was evaluated against the SIFT features using a database of 160 muzzle prints collected from 20 cattle. The achieved accuracy was measured in terms of the equal error rate (EER), where SIFT achieved an EER of 0.0167, while the application of the refinement technique resulted in an EER of 0.0028. Gaber et al. [29] employed Weber’s local descriptor (WLD) for feature extraction from muzzle print images combined with the AdaBoost classifier for developing a cattle identification system. The maximum obtained identification accuracy was 99% using a database of 217 muzzle print images from 31 animals.
Recently, convolutional neural networks (CNNs) and deep learning (DL) methods have been introduced and used in many computer vision-related applications, achieving the most success in object detection, auto-driving and text-processing applications [30,31,32,33]. Consequently, deep learning has gained attention in animal biometrics from some research groups [34,35]. For instance, Andrew et al. [36] applied deep learning methods to bovine identification. The authors showed that the off-the-shelf networks have the ability to perform end-to-end individual identification from top-down images acquired by fixed cameras.
To address the problem of swapped and missed animals as well as false insurance claims, Kumar et al. [37] introduced a DL-based approach for cattle identification using the primary patterns of muzzle print images. In this method, the well-known stacked denoising autoencoder scheme was utilized for encoding the extracted features of the muzzle point images. In Reference [38], a neural network and rolling skew histogram were fused for cow identification in the rotary milking parlor. Zhangyong et al. [39] proposed another automated method based on CNNs for the precise identification of dairy cows. Through the cross-validation of a training set and a test set, the recognition accuracy could reach 87% for a single image. Other researchers, in Reference [40], used deep learning for cattle contour extraction and instance segmentation in a real cattle feedlot management environment.

3. BoVW-Based Cattle Identification

Generally, the bag-of-visual-words (BoVW) technique represents a given image as a collection of local features extracted from image patches or from points of interest in the image; in other words, it maps the image from a set of very high-dimensional features to a list of visual-word indices. Thus, it is logical to first discuss the motivation behind the local feature extractors utilized in the proposed approach and then explain how their output is converted into the visual word space.

3.1. Feature Extraction

3.1.1. Speeded-Up Robust Features (SURF)

The speeded-up robust features (SURF) descriptor [24] was developed as an alternative to the SIFT descriptor. Briefly, the SURF descriptor starts by constructing a square region around a point of interest, oriented along its main orientation. The size of this square is 20s, where s is the scale at which the point of interest is detected. The region inside the square is divided into smaller 4 × 4 subregions, and the Haar wavelet responses in the horizontal (dx) and vertical (dy) directions are computed for each subregion at 5 × 5 regularly spaced sample points, as shown in Figure 1.
To improve the robustness of the descriptor against localization errors and some geometric deformations, these responses are weighted with a Gaussian window. The wavelet responses dx and dy are summed over each subregion and, together with the sums of their absolute values, form the entries of the feature vector Fv; that is,
Fv = ( Σdx, Σ|dx|, Σdy, Σ|dy| )
This procedure is repeated for all the 4 × 4 subregions, resulting in a feature descriptor of 4 × 4 × 4 = 64 dimensions. To reduce illumination effects and make the descriptor invariant to region size, the feature descriptor is normalized to a unit vector. Applying restrictions (e.g., the number of divisions inside the square region) to the regular descriptor Fv (i.e., SURF-64) results in several extended versions of SURF, such as SURF-36, SURF-128 and U-SURF [24,41]. In this work, the regular SURF feature descriptor of length 64 is adopted to describe the image patches due to its balance of computing efficiency and representation capacity: it uses a 64-dimensional feature vector to describe the local features, whereas SIFT uses a 128-dimensional feature vector. Additionally, the SURF feature descriptor is more robust to various image perturbations than the SIFT local feature descriptor.
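The layout of the descriptor can be made concrete with a short sketch. The following Python snippet is a simplification: it assumes the Haar responses have already been computed, Gaussian-weighted and rotated to the dominant orientation, and `surf_like_descriptor` is an illustrative name rather than a library function. It assembles the 64-dimensional vector from 4 × 4 subregions of a 20 × 20 grid of sampled responses:

```python
import numpy as np

def surf_like_descriptor(dx, dy):
    """Assemble a 64-D SURF-style descriptor from Haar wavelet
    responses sampled on a 20x20 grid around an interest point.

    dx, dy : (20, 20) arrays of horizontal/vertical Haar responses,
             assumed already Gaussian-weighted and oriented along
             the point's dominant orientation.
    """
    assert dx.shape == (20, 20) and dy.shape == (20, 20)
    features = []
    # 4x4 subregions, each covering a 5x5 block of sample points
    for i in range(4):
        for j in range(4):
            sx = dx[5 * i:5 * i + 5, 5 * j:5 * j + 5]
            sy = dy[5 * i:5 * i + 5, 5 * j:5 * j + 5]
            # per-subregion entries: (sum dx, sum |dx|, sum dy, sum |dy|)
            features.extend([sx.sum(), np.abs(sx).sum(),
                             sy.sum(), np.abs(sy).sum()])
    v = np.array(features)               # 4 x 4 x 4 = 64 dimensions
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v   # unit length reduces illumination effects

rng = np.random.default_rng(0)
desc = surf_like_descriptor(rng.standard_normal((20, 20)),
                            rng.standard_normal((20, 20)))
print(desc.shape)  # (64,)
```

The final normalization to unit length is what makes the descriptor insensitive to global contrast changes.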

3.1.2. Maximally Stable Extremal Regions (MSER)

The maximally stable extremal regions technique [42] and its fast implementation [25,43] are widely used for detecting blobs in images by extracting a number of covariant regions called MSERs. In this algorithm, the term “extremal” refers to the property that all pixels in an MSER have either higher (i.e., brighter extremal regions) or lower (i.e., darker extremal regions) intensity than all the pixels outside the boundary of that MSER. The extremal regions have two main properties: (i) they are invariant to affine or projective transformations of the image, and (ii) they are invariant to lighting variations. Thus, they are scale and rotation invariant as well. The MSER algorithm detects the regions using a connectivity analysis and by computing connected maximal- and minimal-intensity areas in the region and on its outer boundary. It should be noted that other feature descriptors discussed in Reference [44] can be utilized for feature extraction.
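The stability criterion behind MSER can be illustrated with a toy sketch: sweep an intensity threshold over a synthetic image, track the area of the dark connected region containing a seed pixel, and keep the thresholds where the relative area change is smallest. This is purely didactic (production implementations such as the linear-time algorithm of [25] use component trees, not repeated flood fills), and all names here are illustrative:

```python
import numpy as np

def region_area(img, seed, t):
    """Area of the dark extremal region (pixels <= t) containing seed,
    found by a simple 4-connected flood fill."""
    h, w = img.shape
    if img[seed] > t:
        return 0
    seen, stack = {seed}, [seed]
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen \
                    and img[ny, nx] <= t:
                seen.add((ny, nx))
                stack.append((ny, nx))
    return len(seen)

# toy image: a dark core (20) with a darker-than-background rim (80)
# on a bright background (200)
img = np.full((21, 21), 200, dtype=int)
yy, xx = np.mgrid[:21, :21]
r = np.hypot(yy - 10, xx - 10)
img[r < 7] = 80    # rim
img[r < 5] = 20    # core

seed = (10, 10)
areas = {t: region_area(img, seed, t) for t in range(10, 200, 10)}
# stability: relative area change across neighbouring thresholds;
# a region is "maximally stable" where this change is minimal
stab = {t: abs(areas[t + 10] - areas[t - 10]) / max(areas[t], 1)
        for t in range(20, 190, 10)}
best = min(stab, key=stab.get)
```

At the most stable threshold the extracted region is exactly the dark core, which is why MSERs survive lighting changes: a uniform brightening shifts the thresholds but not the stable region itself.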

3.2. Bag-of-Visual-Words Representation

The BoVW technique has been shown to be successful for a wide range of computer vision applications, including image retrieval [45] and object classification [46,47] as well as action recognition [48,49] with outstanding performance and low storage requirements. Simply, in the basic BoVW model, some local features are extracted from an image using a feature extractor (e.g., SURF) and then the extracted local features are clustered into visual words. That is, the image is described by a histogram of visual word counts instead of low-level features. In this context, this visual vocabulary representation provides a global representation from the local features, or a “mid-level” representation that can bridge the large semantic gap between the low-level features extracted from the image and the high-level concepts.
Suppose we have a sequence X = (x1, x2, ..., xn) of d-dimensional feature vectors obtained by a feature extractor from an image, where xi ∈ R^d. The main objective of the BoVW technique is to quantize each sequence X based on a specific vocabulary dictionary V = {ν1, ν2, ..., νN} ⊂ R^d of N visual words. To achieve this objective, each sequence X can be represented by a histogram of probabilities p(ν|x). In this way, the BoVW histogram H summarizes the whole image by counting how many times each of the visual words occurs in that image:
H = (1/n) Σ_{i=1}^{n} h(xi)
where
h(xi) = [ p(ν1|xi), p(ν2|xi), ..., p(νN|xi) ]
The most well-known method for building the visual vocabulary is k-means clustering, because of its simplicity and convergence speed; other methods, such as hierarchical or spectral clustering, can also be used for this task [48]. In this case, the center of every cluster is used as a visual word. The clustering step quantizes the feature space into a small, discrete number of visual words. It should be noted that the choice of data plays an important role in creating the visual vocabulary [50].
Figure 2 describes the proposed BoVW-based cattle identification approach. The local features of all the images in the database are extracted using SURF/MSER mechanisms. Then, a 500-word visual vocabulary is created by reducing the number of features via feature space quantization using k-means clustering. Finally, a classification mechanism is considered.
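The vocabulary-building and histogram steps above can be sketched in a few lines. The sketch below is a hypothetical miniature (8-dimensional toy descriptors and a 4-word vocabulary instead of 64-dimensional SURF descriptors and the 500-word vocabulary used in this work), with plain k-means for vocabulary construction and hard assignment of each descriptor to its nearest visual word:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; the cluster centres become the visual vocabulary."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest centre
        d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(0)
    return centres

def bovw_histogram(X, vocab):
    """Normalised histogram of visual-word occurrences for one image."""
    d = ((X[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = d.argmin(1)                         # hard assignment
    h = np.bincount(words, minlength=len(vocab)).astype(float)
    return h / h.sum()

# toy 'descriptors' from two images drawn around different centres
rng = np.random.default_rng(1)
img_a = rng.normal(0.0, 0.1, (60, 8))
img_b = rng.normal(1.0, 0.1, (60, 8))
vocab = kmeans(np.vstack([img_a, img_b]), k=4)
ha = bovw_histogram(img_a, vocab)
hb = bovw_histogram(img_b, vocab)
```

Images with different local-feature distributions end up with different word histograms, which is the fixed-length signature that the classification stage consumes.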

3.3. Classification Stage

A kernel support vector machine (KSVM) is applied to the bag-of-visual-words to achieve the classification task in the proposed cattle identification system. Every animal head, with 7 muzzle print images for each animal, is considered a separate class. The identification accuracy is measured using different database sizes for classifier training, validation and evaluation purposes.
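The classification stage can be sketched on toy data. The study uses a kernel SVM (with a linear kernel in the experiments); as an illustrative stand-in, the snippet below trains a one-vs-rest linear SVM with Pegasos-style sub-gradient descent on synthetic BoVW-like histograms. All names and data here are hypothetical, not the paper's actual code:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Binary linear SVM via Pegasos-style sub-gradient descent.
    y must be in {-1, +1}; a constant feature is appended as a bias."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (Xb[i] @ w) < 1:          # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:
                w = (1 - eta * lam) * w
    return w

def one_vs_rest(X, labels, classes):
    """One-vs-rest multi-class wrapper: highest-scoring class wins."""
    ws = {c: train_linear_svm(X, np.where(labels == c, 1, -1))
          for c in classes}
    Xb = np.hstack([X, np.ones((len(X), 1))])
    scores = np.stack([Xb @ ws[c] for c in classes], axis=1)
    return np.array(classes)[scores.argmax(1)]

# toy BoVW-like histograms for 3 'animals', 7 samples each
rng = np.random.default_rng(2)
protos = np.eye(3)
X = np.vstack([p + rng.normal(0, 0.05, (7, 3)) for p in protos])
y = np.repeat([0, 1, 2], 7)
pred = one_vs_rest(X, y, [0, 1, 2])
```

In the actual system, each of the 15 animals is one class and each sample is a 500-bin word histogram rather than a 3-bin toy vector.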

4. Experimental Results

The experimental work in this study was conducted on a regular computer equipped with an Intel® Xeon® E5-2667 v2 CPU running at 3.30 GHz with 64 GB of RAM and a Windows® 64-bit operating system. To build a unified testing environment, MATLAB® R2016b was used for code development and execution. The performance evaluation was measured using a nonstandard muzzle print database of 105 images; the database comprises 7 muzzle print images captured from each of 15 animal heads [22]. Examples of muzzle print images randomly selected from the database are shown in Figure 3.

4.1. Bag-of-Visual-Words with SURF Features

The empirical work starts by checking the performance of the BoVW using the SURF features. In this case, the SURF approach was used for both feature detection and feature description operations. To check the BoVW performance under several training conditions, four scenarios involving 30%, 45%, 60% and 75% of the whole database were used as training input. The rest of the database in every scenario was used for testing and validation purposes. A linear kernel support vector machine was used in the last classification stage. To avoid any bias in the results, the database was randomly partitioned in each scenario. The histogram of visual word occurrences was considered, and the total number of visual words was set to 500 words.
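The partitioning protocol can be sketched as follows, assuming a per-animal (stratified) random split; `split_database` is an illustrative helper, not part of the actual experimental code:

```python
import random

def split_database(images_per_class, n_classes, train_frac, seed=0):
    """Randomly partition a (class -> image ids) database into a
    training subset and a held-out (test/validation) subset, per animal."""
    rng = random.Random(seed)
    train, held_out = {}, {}
    for c in range(n_classes):
        ids = list(range(images_per_class))
        rng.shuffle(ids)                       # random partition per scenario
        n_train = round(images_per_class * train_frac)
        train[c], held_out[c] = ids[:n_train], ids[n_train:]
    return train, held_out

# the four scenarios: 30%, 45%, 60% and 75% of the database for training
totals = {}
for frac in (0.30, 0.45, 0.60, 0.75):
    tr, ho = split_database(7, 15, frac)
    totals[frac] = sum(len(v) for v in tr.values())
print(totals)
```

With 7 images per animal and 15 animals, the four fractions yield 2, 3, 4 and 5 training images per animal, i.e., 30, 45, 60 and 75 of the 105 images, matching the scenarios reported in Table 1.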
The identification accuracy and the processing time were measured in each scenario and are recorded in Table 1. It is apparent that both the identification accuracy and the processing time are proportional to the size of the training dataset. The maximum achieved accuracy is 93%, obtained using 75% of the data (75 of the 105 images) as the training dataset. Although not perfect, the identification accuracy is acceptable; the shortfall is caused by the high similarity between the muzzle print images and hence between the visual words in the whole database.
The confusion matrices, shown in Figure 4, confirm the obtained identification accuracies. The figure is aligned with the accuracies in Table 1, where yellow represents the highest score.

4.2. Bag-of-Visual-Words with MSER Features

The four aforementioned database scenarios used with the SURF mechanism were carried out again using MSER as a feature point detector. In these scenarios, the MSER detector was used to detect the points of interest, while SURF was employed as a feature descriptor for every detected point. The identification accuracy, processing time and error were calculated in every case and are reported in Table 2. Visual word histograms and confusion matrices were measured during the empirical work but are omitted from the paper to avoid figure redundancy. Table 2 illustrates the degradation in the obtained number of features and identification accuracy. It also shows very short processing times compared to Table 1, which is normal due to the small number of extracted features.
On the other hand, the results obtained from using both the SURF and MSER detectors in the BoVW paradigm were compared against other state-of-the-art methods in Table 3. The table confirms that the BoVW approach performs similarly to our previous method, which uses only the SIFT detector to calculate similarity scores between all the database images. Driven by the comparison results in Table 3, utilizing the BoVW technique in cattle identification is highly feasible in terms of accuracy.

5. Discussion

Traditional cattle identification methods such as ear tagging, branding and tattooing are vulnerable to losses, damages and fading. Electronic identification systems such as RFID-based systems involve many security and privacy challenges. Therefore, mapping biometric identifiers for animal identification has emerged as a hot research trend. Biometric-based cattle identification solves several problems in the conventional cattle identification methods [1,51,52].
Central to this study is the evaluation of the performance of the bag-of-visual-words (BoVW) model in cattle, buffalo and cow identification using muzzle print images. The study aims to investigate the feasibility of deploying the BoVW technique for a new biometric-based cattle identification system. To this end, two feature detection and description methods—namely, SURF and MSER—were used as feature extraction engines at the heart of the BoVW technique. The study offers two feature extraction scenarios. Initially, SURF is used as a local feature detector and descriptor. The second scenario considers MSER for point-of-interest detection, while SURF is used for feature description. The proposed BoVW-based system was evaluated using a muzzle print database of 7 images for each of 15 cattle heads. The evaluation database includes 105 images in total, and the database was divided into training and evaluation subsets.
The issue common to both the SURF-based and MSER-based scenarios is the identification accuracy value. The maximum identification accuracy achieved using SURF is 93%, which is promising compared to other published methods, whereas using MSER achieved a drastically lower identification accuracy that is not comparable to that obtained with SURF. The remaining 7% error resulted from the similarity between muzzle print images, which makes it difficult to create distinguishing vocabularies for each image. Since this study is the first attempt to use the BoVW approach in cattle identification, it was hard to find similar comparison methods. Therefore, the comparison was performed with methods that extract local features from cattle muzzle print images.
Although the BoVW approach has achieved reasonable accuracy using the SURF feature detector and descriptor, the achieved results are limited to the small-sized database of 105 images. A standard muzzle print image database and benchmarks for identification accuracy are still missing in the cattle identification domain [1]. The empirical work performed in this study has proven the possibility of utilizing BoVW in cattle identification; however, the BoVW feature extraction method should receive more consideration. Furthermore, this study opens the door for future investigations of cattle identification using BoVW combined with machine and deep learning techniques.
Despite the reasonable achievements by the proposed BoVW approach, the research field of cattle identification is still far from complete, especially in unconstrained environments. Thus, addressing real-world challenges such as occlusion, illumination and cattle viewing distances is a must. To this end, the following directions of future research are suggested—(1) Using k-means clustering and well-defined distance measures could be helpful for further enhancing the performance of the BoVW approach; (2) Exploring other feature extraction algorithms may help solve the lack of discriminative power in the BoVW model; (3) Encoding schemes, pooling and normalization strategies, and fusion techniques are the main steps in any BoVW framework; thus, searching for new alternatives will improve the performance; (4) It is natural to consider combining recent techniques such as convolutional neural networks with the BoVW approach.

6. Conclusions

Cattle identification using animal biometric identifiers is still a challenging problem. A robust and accurate cattle identification mechanism is vital for protecting livestock, limiting livestock producers’ losses to disease and facilitating cattle ownership management. This paper has explored the performance of the BoVW paradigm in cattle identification using SURF and MSER as engines for BoVW feature detection and description from cattle muzzle print images. The experiments have proven the possibility of applying the BoVW model in building a cattle identification system. In addition, the study has confirmed the superiority of using SURF for feature detection and description, with 93% identification accuracy compared to the 67% that was obtained by combining MSER and SURF for points-of-interest detection and description, respectively. The processing time differed considerably between SURF and MSER: processing the 75 training images that yielded the maximum accomplished accuracy took 89.5 s with SURF and 28.0 s with MSER. Although the empirical study has proven the feasibility of applying the BoVW approach in cattle identification using muzzle print images, special attention should be given to the quality of collected muzzle print images, the size of the training dataset and the feature extraction mechanisms. In future work, we will endeavor to build a large-scale muzzle print image dataset to further evaluate and enhance the performance of the proposed BoVW approach.

Author Contributions

Conceptualization, A.I.A.; Methodology, A.I.A. and M.H.; Software and Experiments, A.I.A.; Validation, A.I.A. and M.H.; Investigation, A.I.A. and M.H.; Writing–original draft, A.I.A. and M.H.; Writing–review and editing, A.I.A. and M.H.; Visualization, A.I.A.

Funding

This research received no external funding.

Acknowledgments

The authors would like to express their gratitude to A.E. Hassanien and the team of the Scientific Research Group in Egypt (SRGE), Cairo University, Egypt, for providing the cattle muzzle print image database used in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Awad, A.I. From classical methods to animal biometrics: A review on cattle identification and tracking. Comput. Electron. Agric. 2016, 123, 423–435. [Google Scholar] [CrossRef]
  2. Chen, C.S.; Chen, W.C. Research and development of automatic monitoring system for livestock farms. Appl. Sci. 2019, 9, 1132. [Google Scholar] [CrossRef]
  3. Kumar, S.; Singh, S.K. Monitoring of pet animal in smart cities using animal biometrics. Future Gener. Comput. Syst. 2018, 83, 553–563. [Google Scholar] [CrossRef]
  4. Kumar, S.; Singh, S.K. Visual animal biometrics: Survey. IET Biomed. 2016, 6, 139–156. [Google Scholar] [CrossRef]
  5. Huhtala, A.; Suhonen, K.; Mäkelä, P.; Hakojärvi, M.; Ahokas, J. Evaluation of instrumentation for cow positioning and tracking indoors. Biosyst. Eng. 2007, 96, 399–405. [Google Scholar] [CrossRef]
  6. Bowling, M.B.; Pendell, D.L.; Morris, D.L.; Yoon, Y.; Katoh, K.; Belk, K.E.; Smith, G.C. Review: Identification and traceability of cattle in selected countries outside of North America. Prof. Anim. Sci. 2008, 24, 287–294. [Google Scholar] [CrossRef]
  7. Li, W.; Ji, Z.; Wang, L.; Sun, C.; Yang, X. Automatic individual identification of Holstein dairy cows using tailhead images. Comput. Electron. Agric. 2017, 142, 622–631. [Google Scholar] [CrossRef]
  8. Sofos, J.N. Challenges to meat safety in the 21st century. Meat Sci. 2008, 78, 3–13. [Google Scholar] [CrossRef]
  9. Dalvit, C.; De Marchi, M.; Cassandro, M. Genetic traceability of livestock products: A review. Meat Sci. 2007, 77, 437–449. [Google Scholar] [CrossRef]
  10. Kumar, S.; Singh, S.K. Cattle recognition: A new frontier in visual animal biometrics research. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2019, 1–20. [Google Scholar] [CrossRef]
  11. Jain, A.K.; Ross, A.A.; Nandakumar, K. Introduction to Biometrics; Springer: New York, NY, USA, 2011. [Google Scholar]
  12. Awad, A.I.; Hassanien, A.E. Impact of Some Biometric Modalities on Forensic Science. In Computational Intelligence in Digital Forensics: Forensic Investigation and Applications; Kamilah Muda, A., Choo, Y.H., Abraham, A., Srihari, S.N., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 47–62. [Google Scholar]
  13. Barry, B.; Gonzales Barron, U.; Butler, F.; McDonnell, K.; Ward, S. Using muzzle pattern recognition as a biometric approach for cattle identification. Trans. Am. Soc. Agric. Biol. Eng. 2007, 50, 1073–1080. [Google Scholar] [CrossRef]
  14. Lu, Y.; He, X.; Wen, Y.; Wang, P.S. A new cow identification system based on iris analysis and recognition. Int. J. Biomed. 2014, 6, 18–32. [Google Scholar] [CrossRef]
  15. Barry, B.; Corkery, G.; Barron, U.G.; Mc Donnell, K.; Butler, F.; Ward, S. A longitudinal study of the effect of time on the matching performance of a retinal recognition system for lambs. Comput. Electron. Agric. 2008, 64, 202–211. [Google Scholar] [CrossRef]
16. Barron, U.G.; Corkery, G.; Barry, B.; Butler, F.; McDonnell, K.; Ward, S. Assessment of retinal recognition technology as a biometric method for sheep identification. Comput. Electron. Agric. 2008, 60, 156–166.
17. Kumar, S.; Tiwari, S.; Singh, S.K. Face recognition of cattle: Can it be done? Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2016, 86, 137–148.
18. Jiménez-Gamero, I.; Dorado, G.; Muñoz-Serrano, A.; Analla, M.; Alonso-Moraga, A. DNA microsatellites to ascertain pedigree-recorded information in a selecting nucleus of Murciano-Granadina dairy goats. Small Rumin. Res. 2006, 65, 266–273.
19. Minagawa, H.; Fujimura, T.; Ichiyanagi, M.; Tanaka, K. Identification of beef cattle by analyzing images of their muzzle patterns lifted on paper. Publ. Jpn. Soc. Agric. Inf. 2002, 8, 596–600.
20. Baranov, A.; Graml, R.; Pirchner, F.; Schmid, D. Breed differences and intra-breed genetic variability of dermatoglyphic pattern of cattle. J. Anim. Breed. Genet. 1993, 110, 385–392.
21. Awad, A.I.; Baba, K. Fingerprint singularity detection: A comparative study. In Software Engineering and Computer Systems; Mohamad Zain, J., Wan Mohd, W.M., El-Qawasmeh, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 179, pp. 122–132.
22. Awad, A.I.; Zawbaa, H.M.; Mahmoud, H.A.; Nabi, E.H.H.A.; Fayed, R.H.; Hassanien, A.E. A robust cattle identification scheme using muzzle print images. In Proceedings of the Federated Conference on Computer Science and Information Systems, Kraków, Poland, 8–11 September 2013; pp. 529–534.
23. Noviyanto, A.; Arymurthy, A.M. Automatic cattle identification based on muzzle photo using speed-up robust features approach. In Proceedings of the 3rd European Conference of Computer Science, Paris, France, 2–4 December 2012; WSEAS Press: Athens, Greece, 2012; pp. 110–114.
24. Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L.V. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
25. Nistér, D.; Stewénius, H. Linear time maximally stable extremal regions. In Proceedings of the 10th European Conference on Computer Vision, Marseille, France, 12–18 October 2008; pp. 183–196.
26. Sinha, S.; Agarwal, M.; Singh, R.; Vatsa, M. Animal Biometrics: Techniques and Applications; Springer: Singapore, 2019.
27. Mahmoud, H.A.; El Hadad, H.M.R. Automatic cattle muzzle print classification system using multiclass support vector machine. Int. J. Image Min. 2015, 1, 126–140.
28. Noviyanto, A.; Arymurthy, A.M. Beef cattle identification based on muzzle pattern using a matching refinement technique in the SIFT method. Comput. Electron. Agric. 2013, 99, 77–84.
29. Gaber, T.; Tharwat, A.; Hassanien, A.E.; Snasel, V. Biometric cattle identification approach based on Weber’s local descriptor and AdaBoost classifier. Comput. Electron. Agric. 2016, 122, 55–66.
30. Rivas, A.; Chamoso, P.; González-Briones, A.; Corchado, J. Detection of cattle using drones and convolutional neural networks. Sensors 2018, 18, 2048.
31. Han, J.; Zhang, D.; Cheng, G.; Liu, N.; Xu, D. Advanced deep-learning techniques for salient and category-specific object detection: A survey. IEEE Signal Process. Mag. 2018, 35, 84–100.
32. Ning, Y.; He, S.; Wu, Z.; Xing, C.; Zhang, L.J. A review of deep learning based speech synthesis. Appl. Sci. 2019, 9, 4050.
33. Ju, M.; Luo, H.; Wang, Z.; Hui, B.; Chang, Z. The application of improved YOLO V3 in multi-scale target detection. Appl. Sci. 2019, 9, 3775.
34. Shen, W.; Hu, H.; Dai, B.; Wei, X.; Sun, J.; Jiang, L.; Sun, Y. Individual identification of dairy cows based on convolutional neural networks. Multimed. Tools Appl. 2019, 1–14.
35. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90.
36. Andrew, W.; Greatwood, C.; Burghardt, T. Visual localisation and individual identification of holstein friesian cattle via deep learning. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 2850–2859.
37. Kumar, S.; Pandey, A.; Satwik, K.S.R.; Kumar, S.; Singh, S.K.; Singh, A.K.; Mohan, A. Deep learning framework for recognition of cattle using muzzle point image pattern. Measurement 2018, 116, 1–17.
38. Phyo, C.N.; Zin, T.T.; Hama, H.; Kobayashi, I. A hybrid rolling skew histogram-neural network approach to dairy cow identification system. In Proceedings of the International Conference on Image and Vision Computing New Zealand (IVCNZ), Auckland, New Zealand, 19–21 November 2018; pp. 1–5.
39. Zhangyong, L.; Shen, S.; Ge, C.; Li, X. Cow individual identification based on convolutional neural network. In Proceedings of the International Conference on Algorithms, Computing and Artificial Intelligence, Sanya, China, 21–23 December 2018; p. 45.
40. Qiao, Y.; Truman, M.; Sukkarieh, S. Cattle segmentation and contour extraction based on Mask R-CNN for precision livestock farming. Comput. Electron. Agric. 2019, 165, 104958.
41. Pang, Y.; Li, W.; Yuan, Y.; Pan, J. Fully affine invariant SURF for image matching. Neurocomputing 2012, 85, 6–10.
42. Matas, J.; Chum, O.; Urban, M.; Pajdla, T. Robust wide-baseline stereo from maximally stable extremal regions. Image Vis. Comput. 2004, 22, 761–767.
43. Donoser, M.; Bischof, H. Efficient maximally stable extremal region (MSER) tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; Volume 1, pp. 553–560.
44. Hassaballah, M.; Abdelmgeid, A.A.; Alshazly, H.A. Image features detection, description and matching. In Image Feature Detectors and Descriptors: Foundations and Applications; Awad, A.I., Hassaballah, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 11–45.
45. Jingyan, W.; Yongping, L.; Ying, Z.; Chao, W.; Honglan, X.; Guoling, C.; Xin, G. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting. IEEE Trans. Med. Imaging 2011, 30, 1996–2011.
46. Perronnin, F. Universal and adapted vocabularies for generic visual categorization. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1243–1256.
47. Sheng, X.; Tao, F.; Deren, L.; Shiwei, W. Object classification of aerial images with bag-of-visual words. IEEE Geosci. Remote Sens. Lett. 2010, 7, 366–370.
48. Peng, X.; Wang, L.; Wang, X.; Qiao, Y. Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice. Comput. Vis. Image Underst. 2016, 150, 109–125.
49. Richard, A.; Gall, J. A bag-of-words equivalent recurrent neural network for action recognition. Comput. Vis. Image Underst. 2016, 151, 79–91.
50. Lahrache, S.; El Ouazzani, R.; El Qadi, A. Bag-of-features for image memorability evaluation. IET Comput. Vis. 2016, 10, 577–584.
51. Dziuk, P. Positive, accurate animal identification. Anim. Reprod. Sci. 2003, 79, 319–323.
52. Trevarthen, A. The national livestock identification system: The importance of traceability in E-Business. J. Theor. Appl. Electron. Commer. Res. 2007, 2, 49–62.
Figure 1. Computing the speeded-up robust features (SURF) descriptor over the gradient space by dividing the square region around points of interest into 4 × 4 subregions.
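As a rough illustration of the descriptor layout described in the Figure 1 caption, the sketch below (not the authors' implementation) splits a square patch around an interest point into 4 × 4 subregions and collects (Σdx, Σdy, Σ|dx|, Σ|dy|) from each, yielding the 4 × 4 × 4 = 64-dimensional SURF descriptor. It is simplified: real SURF uses Haar-wavelet responses, Gaussian weighting, and orientation assignment, which are omitted here.

```python
import numpy as np

def surf_like_descriptor(patch):
    """Simplified SURF-style descriptor: 4x4 subregions x 4 gradient sums = 64-D."""
    dy, dx = np.gradient(patch.astype(float))  # image gradients over the patch
    n = patch.shape[0] // 4                    # subregion side length
    feats = []
    for i in range(4):
        for j in range(4):
            sx = dx[i * n:(i + 1) * n, j * n:(j + 1) * n]
            sy = dy[i * n:(i + 1) * n, j * n:(j + 1) * n]
            # Each subregion contributes sums of signed and absolute responses.
            feats += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
    v = np.array(feats)
    return v / (np.linalg.norm(v) + 1e-12)     # unit norm for contrast invariance

patch = np.random.default_rng(0).random((20, 20))  # stand-in for a muzzle patch
desc = surf_like_descriptor(patch)
print(desc.shape)  # (64,)
```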
Figure 2. Outline of the main components of the bag-of-visual-words (BoVW)-based cattle identification system. The input muzzle print image database, feature extraction, feature clustering, bag-of-visual-words histograms and classification steps are highlighted as the main identification system components.
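The pipeline in Figure 2 can be sketched as follows (an illustrative minimal version, not the authors' code): pool local descriptors from the training images, cluster them with k-means to form a visual vocabulary, then encode each image as a normalized histogram of visual-word assignments, which would be passed to a classifier. Random vectors stand in for SURF/MSER descriptors.

```python
import numpy as np

def kmeans(data, k, iters=20, seed=0):
    """Plain Lloyd's k-means; returns the k visual-word centroids."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest centroid, then recompute means.
        dist = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            members = data[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

def bovw_histogram(descriptors, centroids):
    """Encode one image's descriptors as a normalized visual-word histogram."""
    dist = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    hist = np.bincount(dist.argmin(axis=1), minlength=len(centroids)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
# Stand-in for 64-D SURF descriptors pooled from three training images.
train = [rng.normal(size=(150, 64)) for _ in range(3)]
vocab = kmeans(np.vstack(train), k=8)
h = bovw_histogram(train[0], vocab)
print(h.shape, round(float(h.sum()), 6))  # (8,) 1.0
```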
Figure 3. A sample from the nonstandard muzzle prints database of 105 images in total. The sample shows different augmentation parameters, such as image rotation, image blurring and image distortion.
Figure 4. Confusion matrices using the SURF feature detector under different database percentages as the training samples. (a) Using 30% of the database as the training set (30 images out of 105), (b) Using 45% of the database as the training set (45 images out of 105), (c) Using 60% of the database as the training set (60 images out of 105), and (d) Using 75% of the database as the training set (75 images out of 105). The horizontal axis represents the predicted class while the vertical axis represents the true class.
Table 1. Performance of BoVW using SURF as the image feature detector and descriptor. The table represents the number of images used in the training process, while the rest of the database is used for the evaluation purpose. The processing time is measured in seconds.
Training (%)   No. of Images   No. of Features (Average)   Accuracy (%)   Time (s)
30             30              7600                        75             27.7
45             45              7600                        83             32.6
60             60              7600                        91             42.5
75             75              7600                        93             89.5
Table 2. Performance of the BoVW using maximally stable extremal regions (MSER) for points-of-interest detection and the SURF detector for feature description at every detected point. The table represents the number of images used in the training process, while the rest of the database is used for evaluation. The processing time is measured in seconds.
Training (%)   No. of Images   No. of Features (Average)   Accuracy (%)   Time (s)
30             30              208                         52             15.0
45             45              197                         60             19.7
60             60              191                         67             22.1
75             75              192                         67             28.0
Table 3. Comparison of the obtained identification accuracies using the BoVW with SURF and MSER against other state-of-the-art methods.
Method                         No. of Images   No. of Cattle   Total Accuracy (%)
Minagawa et al. [19]           86              30              66.6
Awad et al. [22]               105             15              93.3
Noviyanto & Arymurthy [23]     120             8               90.0
Proposed BoVW (SURF)           105             15              93.0
Proposed BoVW (MSER)           150             15              67.0