Article

Dog Identification Method Based on Muzzle Pattern Image

National Institute of Animal Science, Rural Development Administration, Sejong 339-705, Korea
*
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(24), 8994; https://doi.org/10.3390/app10248994
Submission received: 11 November 2020 / Revised: 10 December 2020 / Accepted: 15 December 2020 / Published: 16 December 2020
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Currently, invasive and external radio frequency identification (RFID) devices and pet tags are widely used for dog identification. However, social problems such as the abandonment and loss of dogs are constantly increasing. A more effective alternative to the existing identification methods is required, and biometrics can be that alternative. This paper proposes an effective dog muzzle recognition method to identify individual dogs. The proposed method consists of preprocessing, feature extraction, matching, and postprocessing. For preprocessing, the proposed resizing and histogram equalization techniques are used. For feature extraction, Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Binary Robust Invariant Scalable Keypoints (BRISK), and Oriented FAST and Rotated BRIEF (ORB) are applied and compared. For matching, the Fast Library for Approximate Nearest Neighbors (FLANN) is used for SIFT and SURF, and the Hamming distance is used for BRISK and ORB. For postprocessing, two techniques to reduce incorrect matches are proposed. The proposed method was evaluated with 55 dog muzzle pattern images acquired from 11 dogs and 990 images augmented by image deformation (i.e., angle, illumination, noise, and affine transform). The best Equal Error Rate (EER) of the proposed method was 0.35%, and ORB was the most appropriate for dog muzzle pattern recognition.

1. Introduction

The number of animals lost or abandoned in South Korea increases every year. In 2019, about 136,000 lost or abandoned animals were rescued, though only 40% were reunited with their original owners [1]. In an effort to deter abandonment, reduce instances of loss, and improve public health, animal registration has been mandatory since 2014. As of 2019, 797,000 dogs had been registered under this law. Under the current law, dogs can be registered with invasive or external radio frequency identification (RFID) devices or pet tags. However, injecting an invasive RFID device under the epidermis is generally not preferred by owners because of concerns about negative side effects and animal welfare issues. Moreover, external RFID devices and pet tags are generally ineffective, as owners often lose the devices or are not diligent about ensuring that their pets are wearing them. In light of these seemingly intractable problems with existing methods, an alternative to invasive and external RFID devices and pet tags is needed. Image-based biometrics could be the solution because biometric information is less likely to be lost and nothing is injected into the dogs.
Human biometrics aims to assign a unique identity to an individual according to certain physiological or behavioral characteristics unique to each person [2]. These characteristics are based on fingerprints, the iris, the face, and others [3,4,5], which are often called biometric modalities, identifiers, or traits. A human biometric system typically consists of four main phases: preprocessing, feature extraction, matching, and postprocessing. The preprocessing stage typically includes image enhancement (e.g., contrast stretching, low-pass filtering, etc.). The feature extraction stage extracts unique characteristics identifying an individual, and a variety of feature extraction algorithms have been introduced and applied. The matching stage matches the features extracted from two different images, and a proper matching algorithm is selected according to the format of the extracted features. The postprocessing stage typically includes methods to reduce noise and improve results.
Adapting human biometrics to animals is a promising technology in the animal identification field. Similar to human biometrics, an animal's iris, face, and muzzle pattern are used for individual identification [6,7,8,9,10]. Among the various biometric identifiers, the muzzle pattern is a special one that can only be applied to certain animals such as cows and dogs. The muzzle pattern has been studied since 1921 [11], and it is considered a unique animal identifier similar to the human fingerprint [12]. Recently, several image-based muzzle pattern recognition systems have been proposed, and most of them were for individual cattle identification [13,14,15,16,17,18]. Previous studies of cattle muzzle pattern recognition have shown that well-known general-purpose feature extraction algorithms such as Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) can be a robust solution for extracting unique features from individual muzzle pattern images [15,16,17,18]. However, no study has answered which algorithm is more appropriate for muzzle pattern recognition. In addition, while various cattle muzzle pattern recognition methods have been proposed, to our knowledge no study has been conducted for dogs.
This study has two main purposes. The first is to show that the proposed methods are effective for dog muzzle pattern recognition. The second is to determine which general-purpose feature extraction algorithm is more appropriate for dog muzzle pattern recognition. To achieve these purposes, a suitable method for dog muzzle pattern recognition based on general-purpose feature extraction algorithms is proposed and evaluated. Like a typical biometric system, the proposed method consists of four main steps: preprocessing, feature extraction, matching, and postprocessing. In the preprocessing step, two techniques related to image resizing and histogram equalization are proposed. In the feature extraction step, four leading general-purpose feature extraction algorithms are applied and compared: Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Binary Robust Invariant Scalable Keypoints (BRISK), and Oriented FAST and Rotated BRIEF (ORB). In the matching step, the Fast Library for Approximate Nearest Neighbors (FLANN) is used for SIFT and SURF, and the Hamming distance is used for BRISK and ORB. In the postprocessing step, another two techniques related to removing incorrect matches are proposed. The proposed method was evaluated with 1045 dog muzzle pattern images, which consist of 55 original images and 990 images augmented by image deformation (i.e., angle, intensity, affine transform, and noise). The Equal Error Rate (EER) of the proposed method was 0.35%, and ORB was the most appropriate for dog muzzle pattern recognition.

2. Related Work

To our knowledge, no study on dog muzzle pattern recognition has been published, but several muzzle pattern recognition methods have been proposed for individual cattle identification.
Minagawa et al. [13] introduced a beef cattle identification method based on the muzzle pattern. In the study, binary transformation processes and morphological approaches were used to extract joint pixels as features. Barry et al. [14] proposed a cattle identification method based on the eigenfaces algorithm with preprocessing similar to Minagawa et al. [13]. Noviyanto and Arymurthy [15] applied Scale Invariant Feature Transform (SIFT) to extract features from cattle muzzle pattern images, and proposed a matching refinement technique to reduce outliers as a postprocessing step. In the study, the performance was evaluated with 160 muzzle pattern images, and robustness to scale and rotation was demonstrated. In addition, they compared the proposed method to Minagawa et al. [13] and Barry et al. [14]. The Equal Error Rates (EERs) of Minagawa et al. [13], Barry et al. [14], and Noviyanto and Arymurthy [15] were 0.429, 0.418, and 0.0028, respectively.
Many other studies using general-purpose feature extraction algorithms have been introduced. Tharwat et al. [16] proposed a cattle identification method using muzzle print images based on a texture features approach. In the study, Local Binary Patterns (LBP) were used to extract local invariant features from muzzle print images. They reported an identification accuracy of 99.5%. Awad et al. [17] proposed a cattle muzzle recognition method using Scale Invariant Feature Transform (SIFT). In the study, Random Sample Consensus (RANSAC) was applied as postprocessing to remove outliers and achieve more robustness. The identification accuracy was 93.3%. In addition, Noviyanto and Arymurthy [18] proposed another cattle identification method using Speeded Up Robust Features (SURF), and the identification accuracy was more than 90%.

3. Materials and Methods

3.1. Dataset

Animal experiment ethics: all procedures for acquiring dog muzzle pattern images were assessed and approved by the IACUC at the National Institute of Animal Science, protocol number 2019-371.

3.1.1. Data Acquisition

The dog muzzle pattern is similar to that of cattle, but its size is much smaller. Because the dog muzzle pattern is small, blurry muzzle pattern images are acquired even with small movements. However, thanks to advances in camera technology, it was possible to obtain muzzle pattern images of sufficient quality for individual identification. In this study, a 6-megapixel monochrome camera with a liquid lens was used. The device used for acquiring muzzle pattern images is shown in Figure 1a.
The muzzle pattern images were acquired from 11 dogs (5 Poodles, 4 Maltese, 1 Shih Tzu, and 1 Yorkshire Terrier) at a distance of about 10–15 cm. Image acquisition was performed indoors to avoid light reflection from direct sunlight. After taking the images, only the area containing the most information for dog identification was segmented from the acquired images. This area is called the region of interest (ROI), and the ROI is defined as the maximum rectangle containing the boundary of the two nostrils in the area of the rhinarium. The ROI is shown in Figure 1b.

3.1.2. Data Screening

As shown in Figure 2, many images unsuitable for individual dog identification were included in the acquired images because of indoor light and the dogs' movement. As a first step toward individual identification, it is necessary to select suitable images with no light reflection and high sharpness.
First, a histogram is used to discard images with light reflection. The dog nose area is typically dark, and thus its brightness values are low. On the other hand, areas where light is reflected have high brightness values. The histograms of images with and without light reflection were manually inspected. After the manual inspection, it was decided to discard images having more than 200 pixels with brightness values between 150 and 255 in the histogram. The histograms of images with and without light reflection are shown in Figure 3.
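The screening rule above reduces to a simple histogram check. The following is a minimal sketch assuming 8-bit grayscale ROI images; the function name and the synthetic example are ours, not from the paper:

```python
import numpy as np

def has_light_reflection(gray, bright_lo=150, bright_hi=255, max_bright_pixels=200):
    """Return True if the image should be discarded due to light reflection.

    Counts pixels whose brightness falls in [bright_lo, bright_hi]; the paper
    discards images with more than 200 such pixels.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    bright_count = int(hist[bright_lo:bright_hi + 1].sum())
    return bright_count > max_bright_pixels

# Example: a dark nose image with a small specular highlight
img = np.full((300, 300), 40, dtype=np.uint8)   # typically dark nose area
img[:15, :15] = 230                             # 225 bright pixels -> reflection
print(has_light_reflection(img))                # True
```

A real pipeline would apply this check to each segmented ROI before it enters the dataset.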
Then, blurry image screening was performed. Histogram equalization was applied to the ROI images to enhance sharpness, and then blur detection was performed with an algorithm using the Haar wavelet transform [19]. An image that did not pass the appropriate threshold of the algorithm was determined to be blurry and discarded. In this study, the threshold was 50. After all these processes, a dataset of 55 ROI images was composed by obtaining 5 images from each of the 11 dogs. The size of the ROI images ranged from a minimum of 290 × 280 pixels to a maximum of 841 × 825 pixels, with an average of 549 × 515 pixels. In this study, this dataset is called the original test dataset.
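The paper uses the Haar-wavelet blur detector of [19]. As a rough stand-in that illustrates the same threshold-on-sharpness idea, the variance of a discrete Laplacian is a common blur measure; everything below is our sketch, not the cited algorithm, and its threshold would need separate tuning:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbor discrete Laplacian; low values indicate blur."""
    g = gray.astype(np.float64)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0)
           + np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    return lap[1:-1, 1:-1].var()   # drop wrap-around borders

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (200, 200)).astype(np.uint8)
# crude blur: average each pixel with its 4 neighbors
g = sharp.astype(np.float64)
blurred = ((g + np.roll(g, 1, 0) + np.roll(g, -1, 0)
            + np.roll(g, 1, 1) + np.roll(g, -1, 1)) / 5)
print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```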

3.1.3. Data Augmentation

Data augmentation is generally used to demonstrate the robustness of a system [15,17,20]. In order to evaluate the robustness of the proposed algorithm and to find the appropriate feature extraction algorithm for dog muzzle pattern recognition, the data were augmented by deforming images from the original test dataset. The image deformation took into account factors that may occur during dog muzzle pattern image acquisition: changes in angle, illumination (intensity), noise, and perspective. In this study, 18 deformed images were created from each original image. Six images were made with angular changes from −15° to 15° at 5° intervals. Four images were made by adding 25, 50, 75, and 100 to all pixel values to simulate environments with high illumination. Four images were made with vertical motion blur, horizontal motion blur, Gaussian blur, and salt-and-pepper noise. The last four images were made with four perspective transformations: up, down, left, and right. In total, the dataset has 1045 images, consisting of 55 original images and 990 deformed images. An example of the test dataset for one image is shown in Figure 4. This dataset is called the augmented test dataset. Therefore, in this study, there are two test sets: (1) the original test dataset with only original images, and (2) the augmented test dataset obtained with the described augmentation process.
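Two of the deformation classes, the intensity shift and the salt-and-pepper noise, can be sketched with plain NumPy (rotation, blur, and perspective warps are typically done with OpenCV routines such as warpAffine and warpPerspective). The function names and parameters below are ours:

```python
import numpy as np

def add_intensity(gray, delta):
    """Brighten by `delta`, clipping to the valid 8-bit range."""
    return np.clip(gray.astype(np.int16) + delta, 0, 255).astype(np.uint8)

def salt_and_pepper(gray, amount=0.01, seed=0):
    """Flip a fraction `amount` of pixels to 0 or 255."""
    rng = np.random.default_rng(seed)
    out = gray.copy()
    mask = rng.random(gray.shape) < amount
    out[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return out

img = np.full((100, 100), 200, dtype=np.uint8)
print(add_intensity(img, 100).max())   # 255 (clipped, not wrapped)
```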

3.2. Proposed Method

The proposed method is illustrated in Figure 5. Preprocessing was applied to the input ROI images to enhance the muzzle pattern images so that they match as well as possible. Then, feature extraction was performed with general-purpose feature extraction algorithms. Matching was performed with the feature descriptors extracted from the muzzle pattern images. Lastly, postprocessing was applied to reduce incorrect matches. The output of the process was the number of good matches for every pair of given images. All the muzzle pattern images were converted to grayscale and processed by the proposed method. In this study, the algorithms were implemented with Python3 and OpenCV3.3.

3.2.1. Image Resize

The sizes of the original test dataset images ranged from a minimum of 290 × 280 pixels to a maximum of 841 × 825 pixels. These size differences may degrade dog identification performance. For this reason, Tharwat et al. [16] and Noviyanto and Arymurthy [18] resized cattle muzzle pattern images to fixed sizes of 300 × 300 pixels and 300 × 400 pixels, respectively. However, the dog muzzle pattern is much smaller than that of cattle, and thus may be more sensitive to image resizing. If the width-to-height ratio of the original image is changed while resizing, the muzzle pattern may be distorted. If the images are resized to be too large or too small, information loss occurs. Such distortion and information loss may decrease dog identification performance. In this study, therefore, images were resized while maintaining the width-to-height ratio of the original image, and a reference value for resizing was chosen experimentally to minimize information loss. The width and height after resizing are calculated by Equations (1) and (2). The smaller of the width and height is resized to a reference value r, and the other side is resized to r multiplied by the scale factor s, as shown in Equation (2). The scale factor is the aspect ratio of the original image and is calculated to be 1 or more, as shown in Equation (1).
$$s = \begin{cases} h/w & \text{if } w \le h \\ w/h & \text{otherwise} \end{cases} \quad (1)$$

where w is the width, h is the height, and s is the scale factor of an image.

$$(w', h') = \begin{cases} (r,\ r \times s) & \text{if } w \le h \\ (r \times s,\ r) & \text{otherwise} \end{cases} \quad (2)$$

where w′ is the resized width, h′ is the resized height, s is the scale factor of the image, and r is the reference value for resizing.
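Equations (1) and (2) amount to a few lines of arithmetic. In this sketch (function name ours), the scaled side is rounded to the nearest pixel, a detail the paper does not specify:

```python
def resized_dims(w, h, r=300):
    """Aspect-preserving resize target from Equations (1) and (2).

    The smaller side becomes the reference value r; the other side
    becomes round(r * s), with s = max(w, h) / min(w, h) >= 1.
    """
    if w <= h:
        s = h / w
        return r, round(r * s)
    s = w / h
    return round(r * s), r

print(resized_dims(290, 280))  # (311, 300): height is the smaller side -> 300
print(resized_dims(300, 600))  # (300, 600): already at the reference width
```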

3.2.2. Contrast Limited Adaptive Histogram Equalization (CLAHE)

Histogram equalization is a representative image enhancement method that uses the cumulative distribution function of the pixel values in an image. Since general equalization increases the contrast of the entire image, the effect is often not satisfactory. CLAHE is an advanced histogram equalization method that divides an image into small areas and equalizes the histogram within each area, removing extreme noise using contrast limits [21]. Thus, it enhances the contrast more in dark areas and less in bright areas.
The contrast of dog muzzle images is usually low because the dog nose is typically dark, and it differs from image to image, even for the same dog, due to differences in lighting conditions. Low contrast and contrast differences decrease dog identification performance. In this study, therefore, CLAHE is applied repeatedly until the histogram is sufficiently stretched in order to reduce the contrast difference between images as well as to enhance the contrast. CLAHE is repeated until there are more than 1000 pixels in both the 0–49 and 206–255 ranges of the histogram. Figure 6 shows images enhanced by the proposed CLAHE. In the implementation of CLAHE, the contrast threshold was set to 2 and the grid size was set to 8 × 8 pixels.

3.2.3. Feature Extraction Algorithm

General-purpose feature extraction algorithms such as SIFT, SURF, BRISK, and ORB have proven to be suitable solutions for object recognition, and each has its own strengths. In order to find which algorithm is better for dog muzzle pattern recognition, SIFT, SURF, BRISK, and ORB were applied in the proposed method to extract unique features from muzzle pattern images. A feature extraction algorithm consists of a keypoint detector and a descriptor. The detector finds keypoints in an image, and the descriptor produces information describing the corresponding keypoints. In the implementation, the OpenCV3.3 default values were used for all the parameters of the algorithms.
SIFT is an algorithm that solves the problem of the Harris corner technique, which is sensitive to scale changes of the image. SIFT creates image scale spaces at different scales and finds the local extrema as keypoints in the space with a Difference of Gaussians (DoG) detector (Figure 7). Equation (3) shows the convolution of the DoG with the image I(x, y). For each keypoint, a descriptor is calculated from the gradient magnitudes and relative orientations in the local neighborhood of pixels [22]. SIFT is robust to scale changes of the image, but it requires a high computational cost [23].
$$D(x, y, \sigma) = \big(G(x, y, k\sigma) - G(x, y, \sigma)\big) * I(x, y) \quad (3)$$

where G represents the Gaussian function.
SURF is an algorithm that uses an integral image in the process of detecting keypoints and generating descriptors in order to reduce the number of computations [24]. SURF is similar to SIFT in that the angle of the dominant orientation is extracted at the detected keypoints and gradient information is used as the descriptor vector in each subregion. SURF uses the Hessian matrix to find keypoints. Equation (4) represents the Hessian matrix at point x = (x, y) at scale σ.
$$H(x, \sigma) = \begin{pmatrix} L_{xx}(x, \sigma) & L_{xy}(x, \sigma) \\ L_{xy}(x, \sigma) & L_{yy}(x, \sigma) \end{pmatrix} \quad (4)$$

where $L_{xx}(x, \sigma)$ is the convolution of the Gaussian second-order derivative with the image I at point x, and similarly for $L_{xy}(x, \sigma)$ and $L_{yy}(x, \sigma)$.
ORB is an algorithm that combines the Features from Accelerated Segment Test (FAST) and Binary Robust Independent Elementary Features (BRIEF) algorithms [25]. FAST detects a keypoint by comparing its brightness value with those of the neighboring pixels within a certain radius (Figure 8a). BRIEF is a binary descriptor that compares the intensities of pairs of pixels sampled around the central pixel. ORB is robust to image rotation because it adds the dominant orientation angle of the keypoint to the binary descriptor.
BRISK is another binary descriptor, and it improves on the concept of BRIEF. BRISK takes four concentric rings around a keypoint and divides them into circular sampling regions smoothed with Gaussian kernels (Figure 8b). The binary descriptor is then constructed according to the brightness of all the sampled circular regions [26]. FAST was used as the keypoint detector. BRISK with FAST is robust to rotation and scale changes of an image.

3.2.4. Matching

The matching algorithm matches the feature descriptors produced by the feature extraction algorithms by calculating distances, and then outputs the number of good matches. The number of good matches is compared against a threshold to determine whether two muzzle pattern images are from the same dog. The FLANN matching algorithm is mainly used for vector descriptor matching, and thus it was used for SIFT and SURF descriptor matching; the FLANN ratio threshold was set to 0.8 for both SIFT and SURF. The Hamming distance is mainly used for binary descriptor matching, and thus it was used for ORB and BRISK; the Hamming distance threshold was set to 64 for ORB and 90 for BRISK.
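For the binary descriptors, Hamming-distance matching can be illustrated with plain NumPy. The distance threshold of 64 for ORB follows the text, while the function names and toy descriptors are ours; in practice, OpenCV's BFMatcher with NORM_HAMMING does the same job:

```python
import numpy as np

def hamming_matrix(a, b):
    """Pairwise Hamming distances between two sets of binary descriptors.

    a: (n, 32) uint8, b: (m, 32) uint8 (32 bytes = 256 bits, as in ORB).
    """
    xor = a[:, None, :] ^ b[None, :, :]
    return np.unpackbits(xor, axis=2).sum(axis=2)

def count_good_matches(a, b, threshold=64):
    """A match is 'good' if its nearest-neighbor Hamming distance <= threshold."""
    d = hamming_matrix(a, b)
    return int((d.min(axis=1) <= threshold).sum())

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (5, 32), dtype=np.uint8)
b = a.copy()
b[0, 0] ^= 0xFF                       # flip 8 bits of one descriptor
print(count_good_matches(a, b))       # 5: nearest distances are 0 or 8, all <= 64
```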

3.2.5. Random Sample Consensus (RANSAC)

The output of the matching algorithm contained many outliers. The outliers falsely increase the number of good matches, and thus increase the false match rate of muzzle pattern recognition. In this study, therefore, RANSAC was used to remove outliers after the matching process. RANSAC estimates the parameters of a mathematical model from randomly selected samples of the observed data, including outliers, and then finds the model with the most inliers through repeated iterations [27]. In the implementation of RANSAC, the threshold was set to 4. The effect of RANSAC is shown in Figure 9.
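The sample-fit-count-inliers loop of RANSAC can be illustrated with a deliberately minimal model, a pure 2D translation fitted from a single correspondence per iteration. This is only an illustration of the principle, not the homography-based RANSAC (e.g., cv2.findHomography with cv2.RANSAC) a muzzle matching pipeline would actually use:

```python
import numpy as np

def ransac_translation(src, dst, thresh=4.0, iters=100, seed=0):
    """Minimal RANSAC: fit a 2D translation src -> dst and keep inliers.

    thresh mirrors the reprojection threshold of 4 used in the paper;
    the translation-only model is a simplification for this sketch.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))        # minimal sample: one correspondence
        t = dst[i] - src[i]               # candidate translation
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

rng = np.random.default_rng(2)
src = rng.uniform(0, 500, (30, 2))
dst = src + np.array([12.0, -7.0])         # true translation for 25 matches
dst[:5] = rng.uniform(0, 500, (5, 2))      # 5 outlier matches
inliers = ransac_translation(src, dst)
print(inliers.sum())                        # the 5 outlier matches are rejected
```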

3.2.6. Duplicate Matching Removal (DMR)

In feature descriptor matching, a cluster of incorrect matches sometimes occurs in which several keypoints are mapped onto one point of the other image. This may happen more often when matching muzzle pattern images because they contain many similar points. Such clusters of incorrect matches are not removed by RANSAC because of their statistical similarity. These incorrect matches are falsely counted as good matches, and thus increase the false match rate of the recognition system. In this study, points onto which two or more matches were mapped were considered incorrect matches and removed. We call this process Duplicate Matching Removal (DMR). The effect of DMR is shown in Figure 10.
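DMR reduces to counting how many matches land on each target keypoint and keeping only the uniquely hit ones. A minimal sketch, with match pairs represented as (query index, train index) tuples (the representation and names are ours):

```python
from collections import Counter

def remove_duplicate_matches(matches):
    """Drop every match whose target keypoint is hit by two or more matches.

    `matches` is a list of (query_idx, train_idx) pairs.
    """
    counts = Counter(t for _, t in matches)
    return [(q, t) for q, t in matches if counts[t] == 1]

# Three keypoints (1, 2, 3) all map onto target point 11: a duplicate cluster
matches = [(0, 10), (1, 11), (2, 11), (3, 11), (4, 12)]
print(remove_duplicate_matches(matches))   # [(0, 10), (4, 12)]
```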

4. Results and Discussion

4.1. Performance Evaluation

The performance of an identification system is typically evaluated with the False Match Rate (FMR) and False Non-Match Rate (FNMR), or the Equal Error Rate (EER) [28]. The FMR measures how often an identification system accepts what it should reject, and the FNMR measures how often the system rejects what it should accept. The FMR and FNMR vary with the matching threshold and are inversely correlated. The EER, on the other hand, is a more objective metric for comparing identification systems [29]. The EER is the failure rate at the optimal matching threshold where the FMR and FNMR are equal. However, the FMR and FNMR cannot be exactly equal because the matching threshold is a discrete value. Therefore, the EER is calculated by averaging EERlow and EERhigh obtained from Equations (5)–(7) [15,30].
$$t_1 = \max\{\, t \mid \mathrm{FNMR}(t) \le \mathrm{FMR}(t) \,\} \quad (5)$$

$$t_2 = \min\{\, t \mid \mathrm{FNMR}(t) \ge \mathrm{FMR}(t) \,\} \quad (6)$$

where t is a threshold.

$$(\mathrm{EER}_{\mathrm{low}},\ \mathrm{EER}_{\mathrm{high}}) = \begin{cases} (\mathrm{FNMR}(t_1),\ \mathrm{FMR}(t_1)) & \text{if } \mathrm{FNMR}(t_1) + \mathrm{FMR}(t_1) \le \mathrm{FNMR}(t_2) + \mathrm{FMR}(t_2) \\ (\mathrm{FMR}(t_2),\ \mathrm{FNMR}(t_2)) & \text{otherwise} \end{cases} \quad (7)$$
In this study, two kinds of identification matching were performed in order to obtain the EER: genuine matching and imposter matching. Genuine matching compares different muzzle pattern images of the same dog, and imposter matching compares muzzle pattern images of different dogs. The FNMR is the false non-match rate in genuine matching, and the FMR is the false match rate in imposter matching. The EER is calculated from the FMR and FNMR obtained from the genuine and imposter matchings.
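The EER computation of Equations (5)–(7) can be sketched directly from lists of genuine and imposter scores, where a score is the number of good matches (higher means more similar, and a pair is accepted when the score reaches the threshold). The score values below are made up for illustration:

```python
import numpy as np

def eer(genuine, imposter):
    """EER from Equations (5)-(7), evaluated over integer thresholds.

    FNMR(t): fraction of genuine pairs rejected (score < t).
    FMR(t):  fraction of imposter pairs accepted (score >= t).
    """
    genuine = np.asarray(genuine)
    imposter = np.asarray(imposter)
    ts = np.arange(0, max(genuine.max(), imposter.max()) + 2)
    fnmr = np.array([(genuine < t).mean() for t in ts])   # falsely rejected
    fmr = np.array([(imposter >= t).mean() for t in ts])  # falsely accepted
    t1 = ts[fnmr <= fmr].max()                            # Equation (5)
    t2 = ts[fnmr >= fmr].min()                            # Equation (6)
    if fnmr[t1] + fmr[t1] <= fnmr[t2] + fmr[t2]:          # Equation (7)
        lo, hi = fnmr[t1], fmr[t1]
    else:
        lo, hi = fmr[t2], fnmr[t2]
    return (lo + hi) / 2

genuine = [30, 42, 55, 61, 28, 90, 33]    # good-match counts, same dog
imposter = [2, 5, 1, 8, 3, 0, 35, 4]      # good-match counts, different dogs
print(eer(genuine, imposter))             # 0.0625
```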

4.2. Effectiveness of the Proposed Methods

The proposed method for dog identification consists of several proposed techniques: resizing according to the original aspect ratio of the image, the repetition of CLAHE, and the removal of duplicate matching points. In order to verify the effectiveness of the proposed techniques, several combinations of them were tested. The techniques were applied in order of increasing performance improvement, and the combinations are defined as follows.
Basic Method = CLAHE + Feature Extraction and Matching + RANSAC
Proposed Method 1 = CLAHE + Feature Extraction and Matching + RANSAC + DMR
Proposed Method 2 = Proposed CLAHE + Feature Extraction and Matching + RANSAC + DMR
Proposed Method 3 = Proposed Resize + Proposed CLAHE + Feature Extraction and Matching + RANSAC + DMR
The evaluation was conducted by matching all combinations of the original test dataset images without duplication. The dataset had 5 images for each of the 11 dogs. The number of combinations of two different images out of the 5 images of one dog is 10; since the dataset had images for 11 dogs, the number of genuine matchings was 110. Similarly, the number of combinations of two images of different dogs was 1375. The algorithms were implemented with Python3 and OpenCV3.3.
The results of the performance evaluation are shown in Table 1. In the table, Min and Max represent the minimum and maximum number of good matches over all matchings. False matching is the number of false matches at the optimal threshold, where the optimal threshold is the threshold at which the FMR and FNMR are equal. The EER is the error rate of the recognition system.

4.2.1. Basic Method

Many studies have used general-purpose feature extraction algorithms such as SIFT and SURF for cattle muzzle pattern recognition. Most of them did not apply any preprocessing but did apply postprocessing to reduce the number of incorrect matches. A system representing previous studies of cattle muzzle pattern recognition was implemented and is called the basic method. The basic method consisted of CLAHE, feature extraction and matching, and RANSAC.
Table 1 shows the results of the performance evaluation of the basic method. The EER was 3.1% for SIFT, 11.2% for SURF, 14.6% for BRISK, and 15.5% for ORB. SIFT was relatively better than the other feature extraction algorithms. However, the identification performance for the dog muzzle pattern images was not satisfactory for any of the four feature extraction algorithms.

4.2.2. Duplicate Matching Removal (DMR)

When DMR was added to the basic method, the dog identification performance was improved for all feature extraction algorithms as shown in Table 1 (Proposed Method 1). The EER was reduced by 1.5% from 3.1% to 1.6% for SIFT, by 5.4% from 11.2% to 5.8% for SURF, by 3.7% from 14.6% to 10.9% for BRISK, and by 9.6% from 15.5% to 5.9% for ORB. ORB had the most improvement.
The average number of good matches was reduced in both genuine matching and imposter matching compared to the basic method for all feature extraction algorithms. However, the ratio of the decreases was much larger in imposter matching than in genuine matching. The large decrease of incorrect matches in imposter matching reduced the false matches and resulted in the reduction of the EER. This result showed that the proposed technique, DMR, was effective for the improvement of the dog muzzle pattern recognition.

4.2.3. Proposed CLAHE

When CLAHE was applied repeatedly instead of once as in proposed method 1, the dog identification performance was improved for SIFT and ORB, as shown in Table 1 (Proposed Method 2). The EER was reduced by 1.6%, from 1.6% to 0%, for SIFT, and by 4.1%, from 5.9% to 1.8%, for ORB compared to proposed method 1. However, the EER was unchanged for SURF and increased by 1.8%, from 10.9% to 12.7%, for BRISK. SIFT performed the best and identified all the dog images correctly. In addition, ORB showed the greatest improvement.
The average number of good matches was increased in both genuine matchings and imposter matchings compared to the proposed method 1 for all feature extraction algorithms except for BRISK. However, the ratio of the increases was much larger in genuine matchings than in imposter matchings. The large increases of correct matches in genuine matchings reduced the false nonmatches and resulted in the reduction of the EER. This result showed that the proposed technique, the repetition of CLAHE, was effective for the improvement of the dog muzzle pattern recognition for all feature extraction algorithms except for BRISK.

4.2.4. Proposed Resize

The resizing stage was added to proposed method 2. Before applying the proposed resizing method, an experiment was performed to compare it with the general resizing method using a fixed width and height, and to choose proper reference values for the proposed resizing. In the experiment, the average number of good matches and the gap between the minimum number of good matches in genuine matching and the maximum number of good matches in imposter matching (i.e., the GAP) were compared for all four feature extraction algorithms. The larger the GAP, the better the identification performance.
Table 2 shows the results of the experiment. The GAP of the proposed resizing was similar to or larger than that of the fixed resizing method. In particular, the proposed resizing greatly improved the GAP for BRISK and ORB. In addition, 300 pixels was selected as the reference value for the proposed resizing by considering both the average number of good matches and the GAP. The average number of good matches was larger with 350 pixels than with 300 pixels for all feature extraction algorithms, but the GAP decreased with 350 pixels, especially for ORB. The experiment verified that the proposed resizing was better than fixed resizing, and that the proper reference value for the proposed resizing was 300.
The proposed resizing was applied to proposed method 2, and this was the final proposed method for dog identification based on muzzle pattern images. The dog identification performance improved dramatically for all feature extraction algorithms, as shown in Table 1 (Proposed Method 3). The EER was 0% for SIFT and ORB, and 0.9% for SURF and BRISK. SIFT and ORB performed better than SURF and BRISK, identifying all the images correctly without false matches or false non-matches. The proposed resizing improved the minimum number of good matches in genuine matching, which resulted in a reduction of the EER. This result showed that the proposed resizing was effective for improving dog muzzle pattern recognition.

4.2.5. Processing Time

The processing time of the final proposed method was measured with the following hardware configuration: Intel i7-9700K CPU @ 3.60 GHz and 64.0 GB RAM. Table 3 shows the minimum, maximum, and average processing times over all the matchings. Since the processing time depends on the number of keypoints detected by the feature extraction algorithm, it differs from image to image; therefore, the range between the minimum and maximum processing times is wide. Overall, SURF was the fastest and BRISK was the slowest.

4.3. Evaluation of the Robustness of the Proposed Method

In order to evaluate the robustness of the final proposed method, dog muzzle pattern image matching was performed on the augmented test dataset containing the deformed images. The test dataset had four classes of image deformation: (1) rotation (−15°, −10°, −5°, +5°, +10°, +15°), (2) intensity change (25, 50, 75, 100), (3) perspective change (up, down, left, right), and (4) noise (salt and pepper, Gaussian blur, vertical motion blur, horizontal motion blur). These deformations were chosen to reflect the factors that may occur during dog muzzle image acquisition. Robustness was evaluated by adding each class of deformed images to the original test dataset.
For the evaluation of robustness to rotation, 330 rotated images and the 55 original test dataset images were used; the number of genuine matchings was 595 and the number of imposter matchings was 67,375. For the evaluations of robustness to intensity, perspective, and noise, 220 images of each deformation class and the 55 original test dataset images were used; the number of genuine matchings was 190 and the number of imposter matchings was 22,000 for each deformation class. Lastly, the dog identification performance was evaluated with the original data and all the deformed data together, giving 49,115 genuine matchings and 496,375 imposter matchings.
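As a consistency check, the genuine and imposter counts for the full evaluation follow from all-pairs matching over 11 dogs with 95 images each (5 originals plus 90 deformed images per dog):

```python
from math import comb

dogs, images_per_dog = 11, 95        # 5 originals + 90 deformed images each
total = dogs * images_per_dog        # 1,045 images in the full test set
genuine = dogs * comb(images_per_dog, 2)   # same-dog pairs
imposter = comb(total, 2) - genuine        # different-dog pairs
# genuine == 49,115 and imposter == 496,375
```

Both values match the counts reported for the combined evaluation.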
Table 4 shows the results of the robustness comparison of the four feature extraction algorithms. For the rotated images, ORB performed best with an EER of 0.1%. For the intensity change, SIFT was the best with an EER of 0%. For the perspective change, SIFT and ORB tied for best with an EER of 0.3%. For the noise, ORB was the best with an EER of 0.4%. In the robustness evaluation with all the deformed images combined, ORB was the best with an EER of 0.35%.
The experimental results show that ORB is the most suitable general-purpose feature extraction algorithm for dog muzzle recognition, and that it is more robust than the other algorithms to the image deformations that may occur during image acquisition.
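The EER figures above can be reproduced from genuine and imposter scores (here, numbers of good matches) with a simple threshold sweep. The sketch below is a generic illustration of how an EER is computed, not the authors' exact evaluation code, and the score lists used in the test are hypothetical.

```python
def equal_error_rate(genuine, imposter):
    """Sweep a decision threshold t over the observed scores:
    a comparison with score >= t is accepted.
    FMR  = accepted imposter comparisons / all imposter comparisons
    FNMR = rejected genuine comparisons  / all genuine comparisons
    The EER is read off where |FMR - FNMR| is smallest."""
    best = None
    for t in sorted(set(genuine) | set(imposter)):
        fmr = sum(s >= t for s in imposter) / len(imposter)
        fnmr = sum(s < t for s in genuine) / len(genuine)
        if best is None or abs(fmr - fnmr) < best[0]:
            best = (abs(fmr - fnmr), (fmr + fnmr) / 2, t)
    return best[1], best[2]  # (EER, operating threshold)
```

With perfectly separated score distributions the sweep finds a threshold where both error rates are zero, which is the EER = 0% case reported for SIFT and ORB on the original test dataset.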

5. Conclusions

In this study, a new dog identification method using the muzzle pattern, based on the ORB approach, has been proposed. The proposed method consists of three key techniques: the proposed resize, the proposed CLAHE, and duplicate matching removal. The proposed resize and CLAHE play a crucial role in improving image matching performance for images of the same dog, while duplicate matching removal is important for reducing mismatches between images of different dogs. The dog identification performance of the proposed method is significantly better than that of the basic method, which represents the previous cattle muzzle identification methods based on general-purpose feature extraction algorithms. While the best EER of the basic method is 3.1% with SIFT, the EER of the proposed method is 0% with ORB on the original test dataset. In addition, the proposed method is robust to changes in angle, intensity, perspective, and noise: its EER is 0.35% on the augmented test dataset including the deformed images.
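The duplicate matching removal step can be sketched as follows. This is our reading of the technique: when several query keypoints are matched to the same reference keypoint, only the closest match is kept and the rest are discarded as likely mismatches. The data layout (plain tuples) and function name are our own.

```python
def remove_duplicate_matches(matches):
    """matches: list of (query_idx, train_idx, distance).
    When several query keypoints match the same reference (train)
    keypoint, keep only the lowest-distance match; the others are
    discarded as probable mismatches."""
    best = {}
    for q, t, d in matches:
        if t not in best or d < best[t][2]:
            best[t] = (q, t, d)
    return sorted(best.values())

# Query keypoints 0 and 1 both match train keypoint 5;
# only the closer match (1, 5, 7.0) survives:
# remove_duplicate_matches([(0, 5, 12.0), (1, 5, 7.0), (2, 6, 9.0)])
# -> [(1, 5, 7.0), (2, 6, 9.0)]
```

Because imposter image pairs tend to produce many such many-to-one matches while genuine pairs do not, pruning them widens the gap between genuine and imposter good-match counts.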
To the best of our knowledge, this is the first research paper on dog identification using muzzle pattern images, and the performance of the proposed method is promising enough to warrant further work. In the future, the proposed method will be evaluated with more dogs, and additional preprocessing such as noise filtering will be tested to improve the performance further. Furthermore, an automatic image acquisition method will be studied for a more practical solution. The dog nose and muzzle pattern are much smaller than those of cattle, which makes it difficult to obtain suitable dog muzzle pattern images. For the development of dog muzzle pattern recognition, therefore, it is necessary to build an automatic system that detects the dog's nose, segments the muzzle pattern ROI, and filters out inappropriate images. With this further work, it is hoped that the proposed method will serve as a means of dog registration and contribute to reducing the number of lost or abandoned dogs.

Author Contributions

Conceptualization, D.-H.J. and J.-B.K.; methodology, D.-H.J. and J.-B.K.; software, D.-H.J. and J.-B.K.; validation, D.-H.J., K.-S.K., and J.-K.K.; formal analysis, D.-H.J., K.-S.K., and J.-B.K.; investigation, D.-H.J., J.-K.K., and J.-B.K.; resources, D.-H.J. and J.-B.K.; data curation, D.-H.J. and K.-Y.Y.; writing—original draft preparation, D.-H.J. and J.-B.K.; writing—review and editing, D.-H.J. and J.-B.K.; visualization, D.-H.J., K.-S.K., and K.-Y.Y.; supervision, J.-B.K.; project administration, J.-B.K.; funding acquisition, J.-B.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was carried out with the support of the "Cooperative Research Program for Agriculture Science & Technology Development (Project No. PJ01398601)" of the Rural Development Administration, Republic of Korea.

Acknowledgments

This research was supported by the “RDA Research Associate Fellowship Program” of the National Institute of Animal Science, Rural Development Administration, Republic of Korea.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. (a) Device for acquiring dog muzzle image; (b) Region of interest.
Figure 2. Discarded images; (a) Images with light reflection; (b) Blurry image.
Figure 3. Muzzle pattern image and histogram; (a) Light reflection; (b) No light reflection.
Figure 4. Image deformation for one muzzle pattern image.
Figure 5. A block diagram of proposed method.
Figure 6. Muzzle pattern image and histogram; (a) Original image; (b) CLAHE; (c) Proposed CLAHE.
Figure 7. The internal Difference of Gaussian of Scale Invariant Feature Transform (SIFT algorithm).
Figure 8. Feature extraction algorithm; (a) Example of Features from Accelerated Segment Test (FAST); (b) Binary Robust Invariant Scaling Keypoints (BRISK) sampling pattern.
Figure 9. Matching result of Random Sample Consensus (RANSAC) algorithm; (a) Before; (b) After.
Figure 10. Matching result of duplicate matching removal; (a) Before; (b) After.
Table 1. Results of performance evaluation of the matching methods.
Method | Evaluation Item | SIFT (Genuine / Imposter) | SURF (Genuine / Imposter) | BRISK (Genuine / Imposter) | ORB (Genuine / Imposter)
Basic MethodMin40402910128
Max75112686345791622266188
Average132496612066840821
False matching714121611619917207
Optimal threshold7.58.1109.929.7
EER(%)3.111.214.615.5
Proposed Method 1Min20102020
Max58165786420319121414
Average107378389562224
False matching2412512134117
Optimal threshold4.84.68.89.8
EER(%)1.65.810.95.9
Proposed Method 2Min71201130
Max88968226378729218021
Average1814120370966856
False matching0071814193228
Optimal threshold64.78.312.3
EER(%)05.812.71.8
Proposed Method 3Min804190820
Max67083116106126174025
Average189476334657256
False matching001011200
Optimal threshold85.71825
EER(%)00.90.90
Table 2. The number of good matches according to the resize techniques.
Method | Size (pixels) | Algorithm | Genuine Avg | Genuine Min | Imposter Avg | Imposter Max | GAP (Min − Max)
Fixed size | 250 × 250 | SIFT | 146 | 7 | 4 | 6 | 1
Fixed size | 250 × 250 | SURF | 44 | 3 | 4 | 6 | −3
Fixed size | 250 × 250 | BRISK | 212 | 3 | 5 | 33 | −30
Fixed size | 250 × 250 | ORB | 529 | 4 | 7 | 36 | −32
Fixed size | 300 × 300 | SIFT | 168 | 4 | 4 | 6 | −2
Fixed size | 300 × 300 | SURF | 63 | 2 | 4 | 7 | −5
Fixed size | 300 × 300 | BRISK | 289 | 4 | 5 | 24 | −20
Fixed size | 300 × 300 | ORB | 680 | 27 | 7 | 39 | −12
Fixed size | 350 × 350 | SIFT | 180 | 6 | 4 | 6 | 0
Fixed size | 350 × 350 | SURF | 79 | 3 | 3 | 6 | −3
Fixed size | 350 × 350 | BRISK | 435 | 2 | 5 | 35 | −33
Fixed size | 350 × 350 | ORB | 719 | 24 | 6 | 28 | −4
Ratio of original size (proposed) | 250 for smaller | SIFT | 163 | 4 | 4 | 6 | −2
Ratio of original size (proposed) | 250 for smaller | SURF | 54 | 3 | 4 | 6 | −3
Ratio of original size (proposed) | 250 for smaller | BRISK | 249 | 21 | 6 | 28 | −7
Ratio of original size (proposed) | 250 for smaller | ORB | 600 | 81 | 7 | 34 | 47
Ratio of original size (proposed) | 300 for smaller | SIFT | 189 | 8 | 4 | 8 | 1
Ratio of original size (proposed) | 300 for smaller | SURF | 76 | 4 | 3 | 6 | −2
Ratio of original size (proposed) | 300 for smaller | BRISK | 346 | 9 | 5 | 32 | −17
Ratio of original size (proposed) | 300 for smaller | ORB | 725 | 82 | 6 | 25 | 57
Ratio of original size (proposed) | 350 for smaller | SIFT | 193 | 6 | 4 | 7 | −1
Ratio of original size (proposed) | 350 for smaller | SURF | 93 | 4 | 3 | 6 | −2
Ratio of original size (proposed) | 350 for smaller | BRISK | 479 | 7 | 5 | 22 | −15
Ratio of original size (proposed) | 350 for smaller | ORB | 765 | 48 | 6 | 24 | 24
Table 3. The processing time for the proposed methods.
Time (ms) | SIFT (Genuine / Imposter) | SURF (Genuine / Imposter) | BRISK (Genuine / Imposter) | ORB (Genuine / Imposter)
Min2131946568555530172108
Max6475142321931,4007903,328210
Average34830699116751639834159
Table 4. Results of the robustness comparison of feature extraction algorithms.
Alg. | Evaluation Item | Rotation (Genuine / Imposter) | Intensity (Genuine / Imposter) | Perspective (Genuine / Imposter) | Noise (Genuine / Imposter) | Total (Genuine / Imposter)
SIFTMin3050300000
Max1367715787120771403715788
Average29043384266421842664
False matching212417113026420394123
Optimal threshold5.965.95.25.6
EER(%)0.30.00.30.90.66
SURFMin1020000000
Max61989618405982179618
Average91316048531024924
False matching154755595610859150902184513062
Optimal threshold5.65.35.94.94.9
EER(%)1.41.94.04.43.70
BRISKMin2010100000
Max218343746748209940294533746751
Average45859736439543054465
False matching6665578859474451361305155217095
Optimal threshold1517.416.89.911.4
EER(%)1.02.41.44.03.10
ORBMin2040204000
Max233237485736211649327135485751
Average750611126730679367316
False matching7942499113121491751856
Optimal threshold2322231919.5
EER(%)0.10.10.30.40.35
Jang, D.-H.; Kwon, K.-S.; Kim, J.-K.; Yang, K.-Y.; Kim, J.-B. Dog Identification Method Based on Muzzle Pattern Image. Appl. Sci. 2020, 10, 8994. https://doi.org/10.3390/app10248994