Article

Weber Texture Local Descriptor for Identification of Group-Housed Pigs

Weijia Huang, Weixing Zhu, Changhua Ma and Yizheng Guo

1 School of Electrical and Information Engineering, Jiangsu University, Zhenjiang 212013, China
2 School of Electronics and Information, Jiangsu University of Science and Technology, Zhenjiang 212003, China
3 Nanjing Normal University Taizhou College, Taizhou 225300, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(16), 4649; https://doi.org/10.3390/s20164649
Submission received: 20 July 2020 / Revised: 12 August 2020 / Accepted: 14 August 2020 / Published: 18 August 2020
(This article belongs to the Section Sensing and Imaging)

Abstract: The individual identification of group-housed pigs plays an important role in breeding process management and individual behavior analysis. Recent livestock identification methods based on side-view or face images impose strict requirements on the position and posture of the animals, which poses a challenge for monitoring scenes of group-housed pigs. To address this issue, a Weber texture local descriptor (WTLD) is proposed for the identification of group-housed pigs by extracting the local features of back hair, skin texture, spots, and so on. By calculating the differential excitation and multi-directional information of pixels, the local structure features of the main direction are fused to enhance the descriptive ability of the features. The experimental results show that the proposed WTLD achieves higher recognition rates with a lower feature dimension. This method can identify individual pigs with different positions and postures in the pig house. Without limitations on pig movement, this method can facilitate the identification of individual pigs with greater convenience and universality.

1. Introduction

The identification of group-housed pigs plays an important role in breeding process management and individual behavior analysis. The spread of infectious diseases is a huge threat to livestock farming. To detect the early symptoms of swine disease in time, group-housed pigs suffering from disease need to be monitored effectively [1,2] through methods such as infrared temperature measurement, cough detection [3,4], calculation of the amount of drinking water [5], eating and drinking behavior recognition [6,7], and behavioral change measurement [8]. In this process, one of the most important basic tasks is to distinguish different pigs and identify pigs with abnormal behavior.
Electronic ear tags based on radio frequency identification (RFID) technology are widely used in livestock identification [9,10]. Readers can identify a large number of tags quickly and efficiently. However, ear tagging is an invasive method; it is costly, and tags may be lost in some cases [11]. Moreover, since pigs live together in groups, it is hard to identify each pig accurately if more than one pig enters the reader's range at the same time.
The identification method based on machine vision overcomes the limitations of electronic ear tags. In early work, pigs were marked on their backs and sides with different colored spray paints. Jover et al. [12] segmented the image in the red-green-blue (RGB) color space and then recognized the color patterns using another color space to detect the position of piglets in a farrowing pen. Kashiha et al. [13] used the ellipse fitting method to locate pigs, and the paint patterns on the pigs' backs were used to identify different individuals. However, due to the dirty environment of the pigsty and the growth of the pigs, these markers are difficult to preserve for long and cannot be effectively applied in practice.
Recent years have seen the development of livestock recognition technology based on biometrics and machine learning [14], such as muzzle print recognition [15], iris recognition [16], face recognition, and so on. Zhao et al. [17] collected side images of cows passing through a fixed narrow channel and used a convolutional neural network (CNN) for cow recognition. Shen et al. [18] employed the you only look once (YOLO) model to detect the cow in the side-view image and fine-tuned a CNN model to classify each individual cow. Hansen et al. [19] collected face images of pigs and used Fisherface, visual geometry group (VGG), and CNN face recognition networks to identify pigs. Marsot et al. [20] automatically detected each pig's face and eyes with cascade classifiers based on Haar features and a shallow convolutional neural network, and then identified pigs with a deep convolutional neural network. These biometrics-based methods are stable and non-invasive. However, there are restrictions on image collection: livestock usually need to go to a specific location or take specific poses, which makes these methods difficult to apply to surveillance videos of group-housed pigs. Furthermore, training a deep convolutional network requires massive data and considerable computation. When a new individual is added to the pig farm, image collection and training must be repeated, which is time-consuming and inconvenient.
To solve the above problems, top-view monitoring videos of group-housed pigs are collected, and a model based on the pigsty is established. Pigs move freely in the pigsty, and the local texture features formed by the hair, skin lines, and spots on the body surface are used for the identification of group-housed pigs. Due to the uneven illumination and complex background in an actual pig farm, the accuracy of color and shape feature extraction is easily affected. Therefore, in this paper, texture features, which do not depend on changes of color or brightness, are extracted, and pig identification based on appearance features is studied. The recognition method based on appearance features has low computational complexity and does not need training. Moreover, it is invariant to texture scale, translation and rotation, and illumination change [21]. Research on more powerful local feature descriptors has always been a hotspot of the appearance feature method [22]. In our previous work, a pig identification method based on Gabor and local binary pattern (LBP) features was proposed [23]. However, the multi-scale Gabor feature has a high feature dimension and needs a large amount of computation. In this paper, a Weber texture local descriptor (WTLD) is proposed to enhance the feature description capability by calculating the differential excitation and multi-directional information of each pixel and fusing the local structure feature of the main direction. An adaptive threshold is used for quantization and encoding, and a dense descriptor is constructed, which achieves higher recognition results with a lower feature dimension.

2. Materials and Methods

This paper proposes a novel method for the recognition of group-housed pigs based on WTLD. The framework is illustrated in Figure 1. Firstly, top-view videos of group-housed pigs are collected. Secondly, the videos are divided into image frames. After image enhancement and segmentation, images of individual pigs are obtained. Then, the local features of the pigs are extracted with WTLD. Finally, a support vector machine (SVM) classifier is used for training and recognition.

2.1. Experimental Setup

2.1.1. Animals and Farm

Experimental videos were captured at a pig farm of the Zhenjiang Xima Development Company, based at Jiangsu University. The fattening pigsties were used in this study. There were several pigsties on the farm, each about two meters long and two meters wide. Six to eight pigs were raised in each pigsty, as shown in Figure 2a. The breed of the pigs was (Yorkshire × Landrace) × Duroc. They were 45 to 60 days old, and their average weight was about 23 kg.

2.1.2. Image Collection

After rebuilding the pigsty, an FL3-U3-88S2C-C camera from Point Grey Research Inc. (Richmond, BC, Canada) was installed 3 m above the experimental pigsty to capture top-view images of the group-housed pigs, as shown in Figure 2b. The resolution of the images was 1760 × 1840 pixels. FlyCap2 (LUSTER LightTech Group Co., Ltd., Beijing, China) from Point Grey was used for camera installation and configuration on a personal computer. The computer processor was an Intel® Core™ i7-2670QM CPU @ 2.20 GHz (Santa Clara, CA, USA), the physical memory was 8 GB, and the operating system was Microsoft Windows 7. On sunny days in June 2015 and May 2017, several videos were collected from the experimental pigsty, each about 3 min long.

2.1.3. Image Preprocessing

The videos were divided into image frames after collection, as shown in Figure 3a. A multi-target extraction method based on adaptive multi-threshold segmentation [24] was used to extract the image of each individual pig. Firstly, image enhancement was carried out, and the maximum entropy global threshold was used for segmentation. Secondly, an "effective region" was set, and mathematical morphology was used to obtain the initial segmentation targets. According to the target centroids, the original image was adaptively divided into several circular sub-blocks. Finally, multi-threshold local-maximum segmentation was performed within the sub-blocks as a second segmentation. The images of each individual pig were normalized to the same size, as shown in Figure 3b.
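As a concrete illustration of the thresholding step, the following is a minimal Kapur-style maximum entropy threshold sketch for an 8-bit grayscale image. It is a generic stand-in for the first stage of the pipeline of [24], not the authors' implementation, which adds enhancement, morphology, and a second block-wise segmentation on top.

```python
import numpy as np

def max_entropy_threshold(gray):
    """Generic Kapur-style maximum entropy threshold for an 8-bit image.

    A sketch of the 'maximum entropy global threshold' step only; the
    full preprocessing pipeline of [24] differs in detail.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()   # gray-level probabilities
    P = np.cumsum(p)                           # cumulative probabilities
    eps = 1e-12
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = P[t], 1.0 - P[t]
        if p0 < eps or p1 < eps:
            continue
        w0 = p[:t + 1] / p0                    # background distribution
        w1 = p[t + 1:] / p1                    # foreground distribution
        h0 = -np.sum(w0[w0 > eps] * np.log(w0[w0 > eps]))
        h1 = -np.sum(w1[w1 > eps] * np.log(w1[w1 > eps]))
        if h0 + h1 > best_h:                   # maximize total entropy
            best_h, best_t = h0 + h1, t
    return best_t
```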

2.1.4. Datasets

In the experiment, two pigsties were taken as samples. In our earlier research, seven pigs selected from other pigsties were mixed in pigsty 1, so their body size, color, and texture differed markedly from one another. After preprocessing, 350 individual pig images were used to establish dataset 1 for pigsty 1. Later, videos of a common pigsty, named pigsty 2, were captured. Ten pigs were bred in it, which were similar in color and body size. A total of 500 individual pig images were used to establish dataset 2 for pigsty 2. This paper takes the more general data of pigsty 2 as an example to illustrate the method and then applies the method to pigsty 1 to solve the identification problem of the group-housed pigs there.

2.2. Weber Local Descriptor (WLD)

Psychologists have observed that the ratio of the intensity change of a stimulus to its original intensity is constant; that is, the ratio of the increment ΔI to the original intensity I is a constant k. This relationship is called Weber's Law [25]:
$$\frac{\Delta I}{I} = k \tag{1}$$
Inspired by this, Weber local descriptor (WLD) [26] calculates the intensity difference between a central pixel and other pixels in its neighborhood. The differential excitation is used to describe the local significant pattern in the image, as shown in Equation (2):
$$\xi(x_c) = \arctan\left[\frac{v_s^{00}}{v_s^{01}}\right] = \arctan\left[\sum_{i=0}^{p-1}\frac{x_i - x_c}{x_c}\right] \tag{2}$$
where $\xi$ denotes the differential excitation, $x_c$ is the central pixel, $x_i$ is the $i$th pixel in the neighborhood of $x_c$, and $p$ represents the number of pixels in the neighborhood. $v_s^{00}$ and $v_s^{01}$ are the outputs of the differential excitation filters $f_{00}$ and $f_{01}$, respectively. Then, $\xi$ is evenly divided into M bands, and each band is uniformly quantized into S intervals.
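For clarity, a minimal NumPy sketch of the differential excitation of Equation (2) over the 3 × 3 neighborhood is given below; the epsilon guard for zero-valued pixels is our assumption, since the ratio is undefined at $x_c = 0$.

```python
import numpy as np

def differential_excitation(gray):
    """Differential excitation xi of Equation (2) over the 8-neighborhood.

    For each pixel, sum (x_i - x_c) / x_c over the eight neighbors and
    take the arctangent; a small epsilon avoids division by zero.
    """
    x = gray.astype(np.float64)
    pad = np.pad(x, 1, mode='edge')
    h, w = x.shape
    acc = np.zeros_like(x)
    for dr in (-1, 0, 1):                 # offsets of the 8 neighbors
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            neighbor = pad[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
            acc += (neighbor - x) / (x + 1e-6)
    return np.arctan(acc)                 # xi lies in (-pi/2, pi/2)
```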
In addition to the differential excitation, the gradient direction of the pixel is also calculated in WLD. The ratio of horizontal and vertical gray gradient is used to describe the local direction information in the image, as shown in Equation (3):
$$\theta(x_c) = \arctan\left(\frac{v_s^{11}}{v_s^{10}}\right) \tag{3}$$
where $\theta$ denotes the direction, while $v_s^{11}$ and $v_s^{10}$ represent the outputs of the horizontal and vertical difference filters $f_{11}$ and $f_{10}$, respectively. Then, $\theta$ is quantized into T directions after interval transformation. Finally, a two-dimensional histogram of T × (M × S) is constructed, where the abscissa is the direction and the ordinate is the differential excitation. The two-dimensional histogram is then concatenated into a one-dimensional histogram.
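Putting the two components together, a hedged sketch of the WLD histogram construction follows. The simple gradients standing in for $f_{10}$/$f_{11}$ and the bin-edge placement are our assumptions (the exact filters are fixed in [26]); it reuses differential_excitation() from the previous sketch.

```python
import numpy as np

def wld_histogram(gray, T=8, M=6, S=5):
    """Sketch of the WLD feature: the orientation of Equation (3) and
    the differential excitation, binned into a T x (M*S) 2-D histogram
    and flattened. Simple gradients stand in for the filters f10/f11.
    """
    x = gray.astype(np.float64)
    gv = np.gradient(x, axis=0)                    # vertical difference
    gh = np.gradient(x, axis=1)                    # horizontal difference
    theta = np.mod(np.arctan2(gv, gh), 2 * np.pi)  # map to [0, 2*pi)
    t_bin = np.minimum((theta / (2 * np.pi) * T).astype(int), T - 1)

    xi = differential_excitation(gray)             # previous sketch
    xi01 = (xi + np.pi / 2) / np.pi                # map to [0, 1)
    e_bin = np.minimum((xi01 * M * S).astype(int), M * S - 1)

    hist, _, _ = np.histogram2d(t_bin.ravel(), e_bin.ravel(),
                                bins=(T, M * S),
                                range=((0, T), (0, M * S)))
    return hist.ravel()                            # length T * M * S
```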

2.3. Weber Texture Local Descriptor (WTLD)

Although WLD computes the differential excitation and direction, only the horizontal and vertical local directions are considered, so the local structure information cannot be fully expressed. To solve these problems, this paper proposes a Weber texture local descriptor, which not only combines multi-directional information with the differential excitation but also contains the principal local structure information. Therefore, WTLD extracts more discriminative and powerful features than WLD. The WTLD computation is shown in Figure 4.
The calculation method of the proposed WTLD is as follows:
1.
The differential excitation of each pixel is calculated by:
$$\gamma(x_c) = \arctan\left[\sum_{i=0}^{p-1}\frac{x_i - x_c}{x_c}\right] \tag{4}$$
where $x_c$ represents the center pixel value, $x_i$ denotes the value of the $i$th pixel in the neighborhood, and $p$ is the number of pixels in the neighborhood. Figure 5 shows a pixel and its eight neighbors. Then, the differential excitation $\gamma$ is evenly divided into M bands, and each band is quantized into S intervals, as sketched below.
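This band-then-interval quantization can be written compactly as follows; uniform edges over the arctangent range (−π/2, π/2) are an assumption, as the paper does not spell out the edge placement.

```python
import numpy as np

def quantize_excitation(gamma, M=6, S=5):
    """Quantize the differential excitation of Equation (4) into M bands,
    each split uniformly into S intervals, giving one index in [0, M*S).
    Uniform edges over (-pi/2, pi/2) are assumed.
    """
    g = (gamma + np.pi / 2) / np.pi                       # normalize to [0, 1)
    band = np.minimum((g * M).astype(int), M - 1)         # one of M bands
    within = g * M - band                                 # position inside band
    interval = np.minimum((within * S).astype(int), S - 1)
    return band * S + interval
```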
2.
In order to extract the local multi-directional information, multi-directional masks are used. The original image is convolved with the multi-directional masks, as shown in Equation (5):
$$R_i = |I \ast M_i|, \quad 0 \le i \le 7 \tag{5}$$
where $I$ represents the original image, $M_i$ denotes the multi-directional mask in the $i$th direction, and $R_i$ is the absolute value of the filtering result in the $i$th direction. Figure 6 shows the Kirsch compass masks in the 8 directions.
After convolving with the multi-directional masks, the response values in multiple directions are obtained, and their absolute values are taken. The main direction of the neighborhood, i.e., the maximum direction, is defined by:
$$D_1 = \arg\max_i \{R_i \mid 0 \le i \le 7\} \tag{6}$$
where $D_1$ denotes the maximum directional number. In a similar way, we can obtain the second, third, and fourth maximum directional numbers: $D_2$, $D_3$, and $D_4$. After that, the two-dimensional histogram of T × (M × S) is constructed and concatenated to form a one-dimensional histogram. A sketch of this step follows.
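Equations (5) and (6) can be sketched as below with the eight 3 × 3 Kirsch masks of Figure 6. The direction-numbering convention (east first, rotating counter-clockwise) is our assumption, since the figure fixes it graphically.

```python
import numpy as np
from scipy.ndimage import convolve

# The eight 3x3 Kirsch compass masks of Figure 6; the ordering
# (east first, rotating counter-clockwise) is an assumed convention.
KIRSCH = [np.array(m) for m in (
    [[-3, -3,  5], [-3,  0,  5], [-3, -3,  5]],   # 0: E
    [[-3,  5,  5], [-3,  0,  5], [-3, -3, -3]],   # 1: NE
    [[ 5,  5,  5], [-3,  0, -3], [-3, -3, -3]],   # 2: N
    [[ 5,  5, -3], [ 5,  0, -3], [-3, -3, -3]],   # 3: NW
    [[ 5, -3, -3], [ 5,  0, -3], [ 5, -3, -3]],   # 4: W
    [[-3, -3, -3], [ 5,  0, -3], [ 5,  5, -3]],   # 5: SW
    [[-3, -3, -3], [-3,  0, -3], [ 5,  5,  5]],   # 6: S
    [[-3, -3, -3], [-3,  0,  5], [-3,  5,  5]],   # 7: SE
)]

def main_directions(gray, k=4):
    """Equations (5)-(6): convolve with the eight masks, take absolute
    responses R_i, and return the k strongest direction numbers
    D1..Dk per pixel, strongest first.
    """
    x = gray.astype(np.float64)
    R = np.stack([np.abs(convolve(x, m)) for m in KIRSCH])  # (8, H, W)
    return np.argsort(-R, axis=0)[:k]                       # (k, H, W)
```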
Figure 7 shows the directional images of WLD and the proposed WTLD. Figure 7a shows original RGB images of individual pigs, and Figure 7b shows the corresponding gray images. Figure 7c shows the directional images of WLD, calculated by horizontal and vertical filtering, while Figure 7d shows the directional images of WTLD, where Kirsch masks are used for multi-directional filtering and the maximum direction number forms the directional image. As can be seen from Figure 7b, the hair, skin texture, and spots that differ from pig to pig can be used to distinguish individuals. Comparing Figure 7c,d, the directional images obtained by WTLD provide more detailed local information on the pig body surface, with obvious light and shade changes visible in many local areas. The red squares highlight some of these areas, though the differences are not limited to them.
In order to verify the effectiveness of the multi-directional information of the WTLD, the correlation coefficients of directional images were calculated for 10 pigs. The definition of the correlation coefficient is as follows:
$$r = \frac{\sum_m \sum_n (A_{mn} - \bar{A})(E_{mn} - \bar{E})}{\sqrt{\left[\sum_m \sum_n (A_{mn} - \bar{A})^2\right]\left[\sum_m \sum_n (E_{mn} - \bar{E})^2\right]}} \tag{7}$$
where $A$ and $E$ are images, $m$ and $n$ index the pixels, and $\bar{A}$ and $\bar{E}$ represent the mean values of $A$ and $E$. Figure 8 shows the correlation coefficient matrices of the directional images based on WLD (Figure 8a) and WTLD (Figure 8b). As can be seen from the results, the correlation coefficients between different individual images based on WLD are relatively large; all coefficients exceed 0.988. Conversely, the differences between pixels become larger when the multi-directional response of each pixel is considered in the WTLD method, so the correlation coefficients between different pig images are reduced. This indicates that multi-directional information provides more discriminative information, which helps to distinguish different pig individuals.
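Equation (7) is the standard two-dimensional correlation coefficient (corr2 in MATLAB terms); a one-function sketch:

```python
import numpy as np

def corr2(a, e):
    """Equation (7): correlation coefficient between two images A and E."""
    a = a.astype(np.float64) - a.mean()
    e = e.astype(np.float64) - e.mean()
    return (a * e).sum() / np.sqrt((a * a).sum() * (e * e).sum())
```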
3.
The differential excitation of the original WLD only calculates the difference between the central pixel and its neighborhood. Intensity variations between pixels within the neighborhood are not considered, resulting in an insufficient expression of local structural information. To solve this problem, the gray intensity difference between pixels in the main direction is calculated, as shown in Equation (8):
$$C_i = \begin{cases} N_{D_i} - N_{D_i+4}, & D_i \in \{0,1,2,3\} \\ N_{D_i} - N_{D_i-4}, & D_i \in \{4,5,6,7\} \end{cases}, \quad i = 1,2,3,4 \tag{8}$$
where $C_i$ is the intensity difference of the pixels and $N_d$ denotes the value of the neighboring pixel in direction $d$, so each $C_i$ compares the two neighbors lying opposite each other along the $i$th main direction. The calculation of the intensity difference in the main direction not only describes the maximum direction of pixel change in the neighborhood but also distinguishes the magnitude of the change.
Since the grayscale differences can take any value, they must be quantized for coding. Therefore, an adaptive threshold $\sigma$ is adopted: the average absolute value of the gray intensity differences over the main directions is taken as the threshold, as shown in Equation (9):
$$M_i = \begin{cases} 1, & |C_i| > \sigma \\ 0, & |C_i| \le \sigma \end{cases}, \quad i = 1,2,3,4 \tag{9}$$
where

$$\sigma = \frac{1}{N}\sum_{i=1}^{N} |C_i|. \tag{10}$$
In Equation (9), $M_i$ is the encoded value of the intensity difference, and N is 4. Then, the main direction number $D_1$ and the local structure information $M_1$ are encoded, as shown in Equation (11):
$$L(x_c) = 2 \times D_1 + M_1 \tag{11}$$
Finally, the image is divided into sub-blocks of the same size, and the local intensity histogram is calculated for each. The differential excitation and direction histogram is concatenated with the local intensity histograms to form the feature vector; a sketch of this coding step follows.
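Equations (8)–(11) together can be sketched as below, reusing main_directions() from above. The pairing of direction numbers with neighbor offsets is an assumption, since the paper fixes it graphically in Figure 5.

```python
import numpy as np

# Neighbor offsets (row, col) for direction numbers 0..7; this pairing
# with the mask ordering above is an assumed convention.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def local_structure_code(gray, directions):
    """Equations (8)-(11): per-pixel differences C_i along the main
    directions, adaptive threshold sigma, binary codes M_i, and the
    final code L = 2*D1 + M1 in [0, 16). `directions` is the (k, H, W)
    output of main_directions().
    """
    h, w = gray.shape
    pad = np.pad(gray.astype(np.float64), 1, mode='edge')
    # neighbor value N_d for every direction d, stacked as (8, H, W)
    N = np.stack([pad[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
                  for dr, dc in OFFSETS])
    rows, cols = np.indices((h, w))
    # Eq. (8): difference of the two opposite neighbors along D_i
    C = np.stack([N[directions[i], rows, cols]
                  - N[(directions[i] + 4) % 8, rows, cols]
                  for i in range(directions.shape[0])])
    sigma = np.abs(C).mean(axis=0)            # Eq. (10), per pixel, N = 4
    M = (np.abs(C) > sigma).astype(int)       # Eq. (9)
    return 2 * directions[0] + M[0]           # Eq. (11)
```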
Figure 9 shows the local structure information coding process. As can be seen from Figure 9b,c, the main directional images reflect details such as muscle concavities and convexities, body surface patches, and so on, while the intensity difference images describe more of the local skin texture formed by the hair. Both provide effective information for distinguishing different individuals.

3. Experimental Results and Analysis

In this paper, two datasets collected on the pig farm were used for the experiments. Since dataset 2 is the more general one, in which the pigs were not specially selected and individual differences are less obvious, the more detailed experimental results are reported on dataset 2. The proposed method is then applied to dataset 1 to solve the identification problem of the group-housed pigs in pigsty 1.
In the experiments, all images were normalized to 100 × 100 pixels. Each image was divided into 4 × 4 sub-blocks to calculate the histograms. The experimental platform was MATLAB R2019b, and an SVM classifier [27] with linear, polynomial, and radial basis function (RBF) kernels was used for feature classification. The images were randomly divided into five groups for five-fold cross-validation: four groups were used for training, and the remaining group was used for testing. The accuracy (Acc) on the test images was recorded, and this process was repeated five times. The average of the five results obtained on the test sets was taken as the final result. Moreover, we also evaluated the performance of our model with precision (PR), specificity (SP), and F1-score (F1).
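The evaluation protocol can be sketched as follows. The paper ran LIBSVM [27] in MATLAB; scikit-learn's SVC wraps the same underlying library, so this Python stand-in mirrors the protocol rather than reproducing the authors' scripts. Here, `features` is assumed to be the (n_samples, 496) WTLD matrix and `labels` the pig identities.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def evaluate(features, labels):
    """Five-fold cross-validated accuracy for the three kernels used
    in the paper (third-order polynomial; RBF with C = 100)."""
    for name, clf in [('linear', SVC(kernel='linear')),
                      ('polynomial', SVC(kernel='poly', degree=3)),
                      ('rbf', SVC(kernel='rbf', C=100))]:
        acc = cross_val_score(clf, features, labels, cv=5,
                              scoring='accuracy')
        print(f'{name}: mean accuracy {acc.mean():.3f}')
```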

3.1. Comparative Experiment and Analysis of WLD and WTLD

In order to verify the effectiveness of the proposed method, Table 1, Table 2 and Table 3 show the experimental results of the original WLD, the original WLD with local structure information added, and the proposed WTLD with different multi-directional masks. "WLD + 1dir" and "WLD + 2dir" represent the original WLD with the local structure information of one and two main directions added, respectively. "WTLD1dir" and "WTLD2dir" represent the proposed WTLD with the local structure information of one and two main directions, respectively. The sizes of the Kirsch, Sobel, and Prewitt masks are 3 × 3, 5 × 5, and 5 × 5, respectively.
It is obvious that the recognition rate of WLD is the lowest, and the results of "WLD + 1dir" and "WLD + 2dir" are higher than those of WLD. This verifies that the local structural information added to WLD provides more useful information and is an effective supplement to WLD. Additionally, compared with "WLD + 1dir" and "WLD + 2dir", the results of "WTLD1dir" and "WTLD2dir" improve further once multi-directional filtering is adopted, which indicates that the multi-directional information extracted in WTLD also provides effective information and further enhances the expressive ability of the features. Furthermore, adding more local structure information is not always better: the results of "WTLD2dir" are higher than those of "WTLD1dir" with the Kirsch mask, but lower with the Sobel and Prewitt masks. Therefore, the local structural information of one direction is extracted, and "WTLD" refers to "WTLD1dir" in the following experiments.

3.2. Experimental Results of Different Multi-Directional Masks and Mask Sizes

In order to verify the influence of different masks and different mask sizes, the multi-directional Kirsch, Sobel, and Prewitt masks were used in the experiment with sizes of 3 × 3, 5 × 5, 7 × 7, and 9 × 9. The experimental results based on the linear, polynomial, and RBF kernel SVM are shown in Figure 10. As can be seen from the results, in general, the recognition rate of the Sobel mask is higher than those of the other masks. The Sobel and Prewitt masks gave better results at sizes of 5 × 5 and 7 × 7, while the Kirsch mask gave better results at 3 × 3 with the linear and RBF kernel SVM.

3.3. Experimental Results of Different Quantization Parameters

In order to analyze the influence of the quantization parameters on the experimental results, results were calculated for different numbers of quantization frequency bands M and different numbers of uniform quantization intervals S per band. Figure 11 shows the results for M = 6 and M = 8 combined with S = 5, S = 8, and S = 10 with the Kirsch, Sobel, and Prewitt masks, respectively. As can be seen from Figure 11, the results are better for the Kirsch mask when M = 6, S = 8 and M = 6, S = 10; for the Sobel mask when M = 6, S = 5; and for the Prewitt mask when M = 6, S = 8. Overall, the results of the different masks at M = 6 are higher than those at M = 8, which indicates that quantizing the differential excitation into low, intermediate, and high frequencies can effectively express its characteristics. Increasing the number of frequency bands M does not always improve the results, and the same holds for the number of intervals S per band, while the feature dimension grows with both M and S. Therefore, M = 6 and S = 5 are used in our experiments.

3.4. Performance Comparison Based on Different Local Descriptors

In this paper, comparative experiments were conducted between the proposed WTLD and other local descriptors: local directional number pattern (LDN) [28], local gradient increasing pattern (LGIP) [29], local binary pattern (LBP) [30], local monotonic pattern (LMP) [31], WLD [26], gradient local ternary pattern (GLTeP) [32], local arc pattern (LAP) [33], improved Weber binary coding (IWBC) [34], and median binary pattern (MBP) [35]. Table 4 shows the results of the different local descriptors based on the linear, polynomial, and RBF kernel SVM, where the polynomial kernel is a third-order polynomial and the penalty coefficient C of the RBF kernel is 100. WTLD_kirsch, WTLD_sobel, and WTLD_prewitt represent the results of WTLD with Kirsch, Sobel, and Prewitt masks, respectively. The experimental results show that the proposed WTLD outperforms the other local descriptors. With the RBF kernel, the accuracies of WTLD_kirsch, WTLD_sobel, and WTLD_prewitt reach 0.938, 0.950, and 0.938, respectively, an improvement of approximately 3% over WLD. Additionally, the precision, specificity, and F1-score of the proposed WTLD method are also higher than those of the other descriptors; in particular, the F1-score of WTLD_sobel exceeds that of WLD by about 3.7%. One reason is that the differential excitation and multi-directional information of the pixels are calculated, and the local structure features of the main direction are fused in WTLD, yielding a more powerful local feature descriptor. Figure 12 and Figure 13 show the WTLD confusion matrices based on the linear kernel and RBF kernel SVM, respectively, where the abscissa represents the actual label and the ordinate represents the label predicted by the classifier.
Table 5 shows the comparison of the feature dimension and feature vector length between WTLD and the other local descriptors. It can be seen from Table 5 that the feature vector of WTLD is the shortest. In WTLD, 16 × 4 × 4 is the length of the local structure information histogram, and 8 × 6 × 5 is the length of the Weber excitation and direction histogram. Combined with the results of Table 4 and Table 5, it can be seen that the proposed WTLD has fewer feature dimensions and achieves higher results.
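As a quick sanity check of the dimensions quoted in Table 5 (our arithmetic, matching the table):

```python
# WTLD feature vector length from Table 5: 16 local-structure codes
# (L in [0, 16)) over a 4x4 block grid, plus the T x (M x S)
# excitation/direction histogram with T = 8, M = 6, S = 5.
blocks, codes = 4 * 4, 16
T, M, S = 8, 6, 5
assert codes * blocks + T * M * S == 496   # 256 + 240
```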

3.5. Results of WTLD Applied to Dataset 1

Table 6 shows the experimental results of pig identification by the proposed method on dataset 1. The experimental results again show that the proposed WTLD outperforms the other local descriptors. On the whole, the results on dataset 1 are higher than those on dataset 2. The accuracies of WTLD_kirsch, WTLD_sobel, and WTLD_prewitt with the linear kernel SVM are 97.14%, 95.71%, and 97.14%, respectively. The F1-scores of WTLD_kirsch and WTLD_prewitt with the linear kernel SVM are 0.970 and 0.969, higher than the 0.963 achieved by IWBC, while that of WTLD_sobel is 0.954. These results are due to the obvious differences in color, texture, and spots on the body surfaces of the pigs in pigsty 1, which make them easier to distinguish from each other, as shown in Figure 14.
Figure 15 shows the confusion matrix of pig identification based on the proposed WTLD. It can be seen that pigs No. 6, No. 5, and No. 1 are easily misidentified as other pigs, while pigs No. 2, No. 3, No. 4, and No. 7 are not easily confused. Figure 16 shows examples of pigs No. 6 and No. 5; there is little visible difference between their body surfaces. At the same time, Figure 16 also shows examples of individuals that are easy to identify: pigs No. 4 and No. 3 have obvious visual features in local body pattern, skin texture, and color, so they are relatively easy to identify correctly.

4. Conclusions

Local features such as color changes, skin texture, and spots on the body surface provide important information for the individual identification of pigs. These local features are shaped by a combination of heredity and the breeding process, which makes them representative of the individual. In order to realize the effective identification of group-housed pigs, a new method based on WTLD was proposed in this paper, which makes full use of the biological features of the pig body to distinguish different individuals. Multi-directional masks are applied to the calculation of the Weber direction information, and the local spatial domain information related to the principal direction is added. Experiments on two datasets show that the proposed method performs well. This method establishes a model based on a pigsty that can automatically identify different pig individuals without requiring them to go to a specific position or maintain a specific posture. It can facilitate the identification of individual pigs with greater convenience and universality.

Author Contributions

Data curation, Y.G., W.H.; Formal analysis, Y.G.; Funding acquisition, W.Z.; Investigation, C.M., W.Z., W.H.; Methodology, W.H. and W.Z.; Project administration, W.Z., C.M.; Software, W.H., Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was part of a project funded by the National Natural Science Foundation of China (Grant No. 31872399), the Doctoral Program of the Ministry of Education of China (Grant No. 2010322711007), the Priority Academic Program Development of Jiangsu Higher Education Institutions, the Graduate Student Scientific Research Innovation Projects of Jiangsu Ordinary University (Grant No. CXLX13_664), and the PhD Research Project of Jiangsu University of Science and Technology (Grant No. 1032931604).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Matthews, S.G.; Miller, A.L.; Clapp, J.; Plötz, T.; Kyriazakis, I. Early detection of health and welfare compromises through automated detection of behavioural changes in pigs. Vet. J. 2016, 217, 43–51.
2. Nasirahmadi, A.; Edwards, S.A.; Matheson, S.M.; Sturm, B. Using automated image analysis in pig behavioural research: Assessment of the influence of enrichment substrate provision on lying behaviour. Appl. Anim. Behav. Sci. 2017, 196, 30–35.
3. Guarino, M.; Jans, P.; Costa, A.; Aerts, J.-M.; Berckmans, D. Field test of algorithm for automatic cough detection in pig houses. Comput. Electron. Agric. 2008, 62, 22–28.
4. Zhao, J.; Li, X.; Liu, W.; Gao, Y.; Lei, M.; Tan, H.; Yang, D. DNN-HMM based acoustic model for continuous pig cough sound recognition. Int. J. Agric. Biol. Eng. 2020, 13, 186–193.
5. Kashiha, M.; Bahr, C.; Haredasht, S.A.; Ott, S.; Moons, C.P.; Niewold, T.A.; Ödberg, F.O.; Berckmans, D. The automatic monitoring of pigs water use by cameras. Comput. Electron. Agric. 2013, 90, 164–169.
6. Zhu, W.-X.; Guo, Y.; Jiao, P.-P.; Ma, C.-H.; Chen, C. Recognition and drinking behaviour analysis of individual pigs based on machine vision. Livest. Sci. 2017, 205, 129–136.
7. Yang, Q.; Xiao, D.; Lin, S. Feeding behavior recognition for group-housed pigs with the Faster R-CNN. Comput. Electron. Agric. 2018, 155, 453–460.
8. Matthews, S.G.; Miller, A.L.; Plötz, T.; Kyriazakis, I. Automated tracking to measure behavioural changes in pigs for health and welfare monitoring. Sci. Rep. 2017, 7, 17582.
9. Voulodimos, A.S.; Patrikakis, C.Z.; Sideridis, A.B.; Ntafis, V.A.; Xylouri, E.M. A complete farm management system based on animal identification using RFID technology. Comput. Electron. Agric. 2010, 70, 380–388.
10. Ruiz-Garcia, L.; Lunadei, L. The role of RFID in agriculture: Applications, limitations and challenges. Comput. Electron. Agric. 2011, 79, 42–50.
11. Fosgate, G.T.; Adesiyun, A.; Hird, D. Ear-tag retention and identification methods for extensively managed water buffalo (Bubalus bubalis) in Trinidad. Prev. Vet. Med. 2006, 73, 287–296.
12. Jover, J.N.; Alcañiz-Raya, M.; Gomez, V.; Balasch, S.; Moreno, J.; Colomer, V.G.; Torres, A. An automatic colour-based computer vision algorithm for tracking the position of piglets. Span. J. Agric. Res. 2009, 7, 535.
13. Kashiha, M.; Bahr, C.; Ott, S.; Moons, C.P.; Niewold, T.A.; Ödberg, F.; Berckmans, D. Automatic identification of marked pigs in a pen using image pattern recognition. Comput. Electron. Agric. 2013, 93, 111–120.
14. Liakos, K.G.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine learning in agriculture: A review. Sensors 2018, 18, 2674.
15. Ibrahim, M.E.; Hazem, M.E.; Hagar, M.E. A new muzzle classification model using decision tree classifier. Int. J. Electron. Inf. Eng. 2017, 6, 12–24.
16. Trokielewicz, M.; Szadkowski, M. Iris and periocular recognition in Arabian race horses using deep convolutional neural networks. In Proceedings of the 2017 IEEE International Joint Conference on Biometrics (IJCB), Denver, CO, USA, 1–4 October 2017; pp. 510–516.
17. Zhao, K.X.; He, D.J. Recognition of individual dairy cattle based on convolutional neural networks. Trans. Chin. Soc. Agric. Eng. 2015, 31, 181–187.
18. Shen, W.; Hu, H.; Dai, B.; Wei, X.; Sun, J.; Jiang, L.; Sun, Y. Individual identification of dairy cows based on convolutional neural networks. Multimed. Tools Appl. 2020, 79, 14711–14724.
19. Hansen, M.; Smith, M.L.; Smith, L.N.; Salter, M.G.; Baxter, E.M.; Farish, M.; Grieve, B. Towards on-farm pig face recognition using convolutional neural networks. Comput. Ind. 2018, 98, 145–152.
20. Marsot, M.; Mei, J.; Shan, X.; Ye, L.; Feng, P.; Yan, X.; Li, C.; Zhao, Y. An adaptive pig face recognition approach using Convolutional Neural Networks. Comput. Electron. Agric. 2020, 173, 105386.
21. Yuetong, L.; Lanying, Z.; Wei, J.; Jing, G.; Feng, X. Palmprint recognition method based on line feature Weber local descriptor. J. Image Graph. 2016, 21, 235–244.
22. Turan, C.; Lam, K.-M. Histogram-based local descriptors for facial expression recognition (FER): A comprehensive study. J. Vis. Commun. Image Represent. 2018, 55, 331–341.
23. Huang, W.; Zhu, W.; Ma, C.; Guo, Y.; Chen, C. Identification of group-housed pigs based on Gabor and Local Binary Pattern features. Biosyst. Eng. 2018, 166, 90–100.
24. Guo, Y.; Zhu, W.-X.; Jiao, P.-P.; Ma, C.-H.; Yang, J.-J. Multi-object extraction from topview group-housed pig images based on adaptive partitioning and multilevel thresholding segmentation. Biosyst. Eng. 2015, 135, 54–60.
25. Jain, A.K. Fundamentals of Digital Image Processing; Prentice Hall: Upper Saddle River, NJ, USA, 1989.
26. Chen, J.; Shan, S.; He, C.; Zhao, G.; Pietikäinen, M.; Chen, X.; Gao, W. WLD: A robust local image descriptor. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 32, 1705–1720.
27. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27.
28. Rivera, A.R.; Castillo, J.R.; Chae, O.O. Local directional number pattern for face analysis: Face and expression recognition. IEEE Trans. Image Process. 2012, 22, 1740–1752.
29. Zhou, L.; Wang, H. Local gradient increasing pattern for facial expression recognition. In Proceedings of the 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 2601–2604.
30. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
31. Mohammad, T.; Ali, M.L. Robust facial expression recognition based on Local Monotonic Pattern (LMP). In Proceedings of the 2011 14th International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, 22–24 December 2011; pp. 572–576.
32. Faisal, A.; Emam, H. Automated facial expression recognition using gradient-based ternary texture patterns. Chin. J. Eng. 2013, 2, 1–8.
33. Islam, M.S.; Auwatanamo, S. Facial expression recognition using local arc pattern. Trends Appl. Sci. Res. 2014, 9, 113–120.
34. Yang, B.-Q.; Zhang, T.; Gu, C.-C.; Wu, K.-J.; Guan, X.-P. A novel face recognition method based on IWLD and IWBC. Multimed. Tools Appl. 2015, 75, 6979–7002.
35. Bashar, F.; Khan, A.; Ahmed, F.; Kabir, H. Robust facial expression recognition based on median ternary pattern (MTP). In Proceedings of the 2013 IEEE International Conference on Electrical Information and Communication Technology (EICT), Khulna, Bangladesh, 13–15 February 2014; pp. 1–5.
Figure 1. The flow diagram of the proposed method.
Figure 2. Pigsties and video capture platform. (a) Pigsties in the farm; (b) video capture platform.
Figure 3. Video frame and samples of individual pig images after preprocessing. (a) One image frame of a video; (b) samples of individual pig images.
Figure 4. Illustration of the computation of the proposed Weber texture local descriptor (WTLD).
Figure 5. Pixel and its eight neighborhoods.
Figure 6. Kirsch compass masks.
Figure 7. Directional images of Weber local descriptor (WLD) and the proposed WTLD. (a) Original images; (b) gray images; (c) directional images of WLD; (d) directional images of the proposed WTLD.
Figure 8. Correlation coefficient matrix of directional images based on WLD and WTLD. (a) Correlation coefficient matrix of directional images based on WLD; (b) correlation coefficient matrix of directional images based on WTLD.
Figure 9. The local structure information coding process. (a) The gray images; (b) the main directional images; (c) the intensity difference of the main direction; (d) the coded images.
Figure 10. Results of different multi-directional masks and mask sizes. (a) Result of linear kernel SVM; (b) result of polynomial kernel SVM; (c) result of RBF kernel SVM.
Figure 11. Results of different quantitative parameters. (a) Results of Kirsch mask; (b) results of Sobel mask; (c) results of Prewitt mask.
Figure 12. The confusion matrix obtained by WTLD_kirsch with SVM of linear kernel function (%).
Figure 13. The confusion matrix obtained by WTLD_kirsch with SVM of RBF kernel function (%).
Figure 14. Video frame of dataset 1.
Figure 15. The confusion matrix obtained by WTLD_kirsch with SVM of linear kernel function of dataset 1 (%).
Figure 16. Examples of pig individuals in dataset 1.
Table 1. Comparison of experimental results between WLD and WTLD with the Kirsch mask (recognition rate, %).

Method        Linear    Polynomial    RBF (C = 100)
WLD           91.4      90.8          91.4
WLD + 1dir    93.2      91.6          93.6
WLD + 2dir    92.6      90.4          93.2
WTLD1dir      93.4      92.4          93.8
WTLD2dir      94.0      92.4          94.0
Table 2. Comparison of experimental results between WLD and WTLD with the Sobel mask (recognition rate, %).

Method        Linear    Polynomial    RBF (C = 100)
WLD           91.4      90.8          91.4
WLD + 1dir    92.8      92.0          93.0
WLD + 2dir    93.0      92.6          93.0
WTLD1dir      94.4      95.0          95.0
WTLD2dir      93.8      93.6          94.0
Table 3. Comparison of experimental results between WLD and WTLD with the Prewitt mask (recognition rate, %).

Method        Linear    Polynomial    RBF (C = 100)
WLD           91.4      90.8          91.4
WLD + 1dir    93.6      91.4          93.8
WLD + 2dir    92.8      91.4          93.4
WTLD1dir      93.8      93.2          93.8
WTLD2dir      93.8      92.6          94.0
Table 4. Comparison of experimental results between WTLD and other local descriptors. Acc: accuracy, PR: precision, SP: specificity, F1: F1-score, LDN: local directional number pattern, LGIP: local gradient increasing pattern, LBP: local binary pattern, LMP: local monotonic pattern, WLD: Weber local descriptor, GLTeP: gradient local ternary pattern, LAP: local arc pattern, IWBC: improved Weber binary coding, MBP: median binary pattern, WTLD_kirsch: the results of WTLD with Kirsch masks, WTLD_sobel: the results of WTLD with Sobel masks, WTLD_prewitt: the results of WTLD with Prewitt masks.

Method          Linear (Acc/PR/SP/F1)      Polynomial (Acc/PR/SP/F1)    RBF, C = 100 (Acc/PR/SP/F1)
LDN             0.890/0.897/0.988/0.889    0.862/0.868/0.985/0.857      0.886/0.895/0.987/0.885
LGIP            0.892/0.903/0.988/0.891    0.880/0.892/0.987/0.878      0.898/0.907/0.989/0.897
LBP             0.896/0.904/0.988/0.894    0.882/0.894/0.987/0.879      0.906/0.911/0.990/0.904
LMP             0.928/0.936/0.992/0.926    0.924/0.933/0.992/0.922      0.932/0.939/0.992/0.930
WLD             0.914/0.923/0.990/0.912    0.908/0.917/0.990/0.906      0.914/0.922/0.990/0.912
GLTeP           0.912/0.921/0.990/0.909    0.908/0.917/0.990/0.905      0.910/0.919/0.990/0.907
LAP             0.896/0.902/0.988/0.892    0.886/0.895/0.987/0.881      0.904/0.912/0.989/0.900
IWBC            0.930/0.937/0.992/0.928    0.930/0.938/0.992/0.928      0.934/0.941/0.993/0.932
MBP             0.894/0.904/0.988/0.892    0.882/0.891/0.987/0.880      0.898/0.908/0.989/0.896
WTLD_kirsch     0.934/0.941/0.993/0.933    0.924/0.933/0.992/0.922      0.938/0.944/0.993/0.937
WTLD_sobel      0.944/0.949/0.994/0.943    0.950/0.957/0.994/0.950      0.950/0.955/0.994/0.949
WTLD_prewitt    0.938/0.945/0.993/0.938    0.932/0.939/0.992/0.931      0.938/0.944/0.993/0.937
Table 5. Comparison of feature dimension and feature vector length between WTLD and other local descriptors.

Method        Feature Dimension        Feature Vector Length
LDN [28]      56                       896
LGIP [29]     37                       592
LBP [30]      59                       944
LMP [31]      256                      4096
WLD [26]      32                       512
GLTeP [32]    512                      8192
LAP [33]      272                      4352
IWBC [34]     2048                     32,768
MBP [35]      256                      4096
WTLD          256 (16 + 8 × 6 × 5)     496 (16 × 4 × 4 + 8 × 6 × 5)
Table 6. Comparison of experimental results between WTLD and other local descriptors on dataset 1.

Method          Linear (Acc/PR/SP/F1)      Polynomial (Acc/PR/SP/F1)    RBF, C = 100 (Acc/PR/SP/F1)
LDN             0.914/0.926/0.986/0.911    0.921/0.936/0.987/0.919      0.921/0.928/0.987/0.917
LGIP            0.914/0.933/0.986/0.909    0.907/0.927/0.985/0.902      0.914/0.933/0.986/0.909
LBP             0.900/0.908/0.983/0.896    0.900/0.908/0.983/0.896      0.900/0.908/0.983/0.896
LMP             0.950/0.964/0.992/0.948    0.950/0.964/0.992/0.948      0.950/0.964/0.992/0.948
WLD             0.921/0.940/0.987/0.916    0.914/0.934/0.986/0.909      0.921/0.940/0.987/0.916
GLTeP           0.950/0.958/0.992/0.948    0.950/0.958/0.992/0.948      0.950/0.958/0.992/0.948
LAP             0.893/0.900/0.982/0.888    0.893/0.903/0.982/0.888      0.893/0.903/0.982/0.888
IWBC            0.964/0.973/0.994/0.963    0.964/0.973/0.994/0.963      0.964/0.973/0.994/0.963
MBP             0.929/0.933/0.988/0.927    0.929/0.933/0.988/0.927      0.929/0.933/0.988/0.927
WTLD_kirsch     0.971/0.979/0.995/0.970    0.971/0.979/0.995/0.970      0.971/0.979/0.995/0.970
WTLD_sobel      0.957/0.968/0.993/0.954    0.957/0.968/0.993/0.954      0.964/0.973/0.994/0.963
WTLD_prewitt    0.971/0.979/0.995/0.969    0.971/0.979/0.995/0.969      0.971/0.979/0.995/0.969
