Article

Combination of LBP Bin and Histogram Selections for Color Texture Classification

1 LISIC laboratory, Université du Littoral Côte d’Opale, 50 rue Ferdinand Buisson, 62228 Calais CEDEX, France
2 Faculty of Information Technology, Ho Chi Minh City Open University, 97 Vo Van Tan, District 3, 700000 Ho Chi Minh City, Vietnam
* Author to whom correspondence should be addressed.
J. Imaging 2020, 6(6), 53; https://doi.org/10.3390/jimaging6060053
Submission received: 21 April 2020 / Revised: 16 June 2020 / Accepted: 19 June 2020 / Published: 23 June 2020

Abstract

LBP (Local Binary Pattern) is a very popular texture descriptor widely used in computer vision. In most applications, LBP histograms are exploited as texture features, leading to a high-dimensional feature space, especially for color texture classification problems. In the past few years, different solutions were proposed to reduce the dimension of the feature space based on the LBP histogram. Most of these approaches apply feature selection methods in order to find the most discriminative bins. Recently, another strategy was proposed that selects the most discriminant LBP histograms in their entirety. This paper aims to improve on these previous approaches, and presents a combination of LBP bin and histogram selections, where a histogram ranking method is applied before processing a bin selection procedure. The proposed approach is evaluated on five benchmark image databases and the obtained results show the effectiveness of the combination of LBP bin and histogram selections, which outperforms the simple LBP bin and LBP histogram selection approaches when they are applied independently.

1. Introduction

Texture analysis is one of the major topics in the field of computer vision and has many important applications including face recognition, object detection, image filtering, segmentation, and content-based access to image databases [1]. Texture classification can be defined as a task that assigns a texture to one of a set of predefined categories. This step requires an efficient descriptor in order to represent and discriminate the different texture classes. In past decades, texture analysis was extensively studied and a wide variety of texture representations were proposed [2]. Among these approaches, the Local Binary Pattern (LBP) descriptor proposed by Ojala et al. is known as one of the most successful statistical approaches due to its efficiency, robustness against illumination intensity changes, and relatively fast calculation [3]. It has been successfully applied to numerous applications, texture classification among them. To encode LBP, the gray level of each pixel is compared with those of its neighbors and the results of these comparisons are then weighted and summed. The obtained texture descriptor is the LBP histogram, whose size depends on the number of neighbors. Since 2002, several extensions of LBP to color have been proposed to take advantage of all the color texture information contained in an image [4]. Considering a particular color space, the LBP descriptor is thus applied on each color component independently or on pairs of color components jointly, leading to several LBP histograms for characterizing a color texture.
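To make the encoding step concrete, the following sketch computes the classical LBP code (P = 8 neighbors, radius 1) and the resulting histogram for a gray level image. The function names and the clockwise neighbor ordering are illustrative choices, not those of the cited papers.

```python
import numpy as np

def lbp_code(img, r, c):
    """LBP code of pixel (r, c) using its 8 immediate neighbors (P = 8, R = 1)."""
    center = img[r, c]
    # Clockwise neighbor offsets, starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr, c + dc] >= center:  # threshold against the center level
            code += 1 << bit               # weight the comparison by 2^bit
    return code

def lbp_histogram(img):
    """2^P = 256-bin LBP histogram of a gray level image (border pixels skipped)."""
    h = np.zeros(256, dtype=int)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            h[lbp_code(img, r, c)] += 1
    return h
```

On a constant image every neighbor equals the center, so every interior pixel receives the code 11111111 = 255, which illustrates why uniform regions concentrate into a few bins.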
To improve the classification performance, many approaches were proposed to reduce the dimension of the feature space based on the LBP histogram(s) [4]. These approaches follow four strategies.
(1)
To obtain more discriminative, robust and compact LBP-based features, the first strategy consists of identifying the most informative pattern groups based on some rules or on the predefinition of patterns of interest. The uniform LBP operator, where a reduced number of discriminant bins are a priori chosen among all the available ones, is an example of predefined compact LBP [5].
(2)
The second strategy is based on feature extraction approaches, which project features into a new feature space with a lower dimensionality, where the newly constructed features are usually combinations of the original features. Chan et al. use a linear discriminant analysis to project high-dimensional color LBP bins into a discriminant space [6]. Banerji et al. apply Principal Component Analysis (PCA) to reduce the feature dimensionality of the concatenated LBP features extracted from different color spaces. Zhao et al. compare different dimensionality reduction methods on LBP features, e.g., PCA, kernel PCA and Laplacian PCA [7]. Hussain et al. exploit the complementarity of three sets of features, including LBP, and apply partial least squares for improving their object class recognition approach [8].
(3)
The third strategy consists of applying feature selection methods in order to find the most discriminative patterns [9]. Smith and Windeatt apply the fast correlation-based filtering algorithm [10] to select the LBP patterns that are the most correlated with the target class [11]. This algorithm starts with the full set of features, calculates feature dependences thanks to symmetrical uncertainty, and finds the best subset using a backward selection technique with a sequential search strategy. Lahdenoja et al. define a discrimination concept of symmetry for uniform patterns to reduce the feature dimensionality [12]. In this approach, the patterns with a higher level of symmetry are shown to have more discriminative power. Maturana et al. use an algorithm based on a Fisher-like class separability criterion to select the neighbors used in the computation of LBP [13]. Liao et al. introduce Dominant Local Binary Patterns (DLBP), which consider the most frequently occurring patterns in a texture image [14]. To compute the DLBP feature vectors from an input image, a pattern histogram which considers all the patterns in the input image is constructed and the histogram bins are sorted in descending order. The occurrence frequencies corresponding to the most frequently occurring patterns in the input image serve as the feature vector. Guo et al. propose a Fisher separation criterion to learn the most reliable and robust patterns by using intra-class and inter-class distances [15]. This approach, which proposes to learn the most reliable and robust dominant bins by considering intra-class similarity and inter-class dissimilarity, is very interesting since it outperforms Ojala’s uniform LBP operator and Liao’s DLBP in the experiments on three texture databases. It has also been recently extended to the multi color space domain [16].
(4)
A fourth strategy to reduce the dimension of the feature space based on LBP histograms was proposed by Porebski et al. in 2013 [17]. In this approach, the most discriminant LBP histograms are selected in their entirety out of the different LBP histograms extracted from a color texture. It fundamentally differs from all the previous approaches, which select the bins of the LBP histograms or project them into a discriminant space. Several scores were proposed in the literature to evaluate the relevance of histograms: the Intra-Class Similarity score (ICS-score), proposed by Porebski et al. [17], which is based on an intra-class similarity measure; the Adapted Supervised Laplacian score (ASL-score) and Adapted Laplacian score (AL-score), proposed by Kalakech et al. [18,19], which evaluate the relevance of the histograms using the local properties of the image data; the Simba-2 score, proposed by Mouhajid et al. [20], which is based on the hypothesis margin and the χ 2 distance; and the Sparse Adapted Supervised Laplacian score (SpASL-score) [21], which is based on the ASL-score and a sparse representation. The LBP histogram selection approach using ICS or ASL scores was recently extended to the multi color space domain and showed its relevance compared to the bin selection approach proposed by Guo [16].
In this paper, we propose improving this last selection strategy, which selects discriminant LBP histograms in their entirety. Indeed, it is clear that not all bins of the selected histograms are meaningful for modelling the characteristics of textures. Since this strategy selects the most discriminating histograms and filters out the rest, redundant bins may remain in the selected histograms while meaningful bins of the discarded histograms are lost. That is why we propose introducing the combination of LBP bin and histogram selections, where a histogram ranking method is applied before processing a bin selection procedure. Two minor contributions and one major contribution are thus proposed in this paper:
  • For the first minor contribution, the LBP histogram selection approach proposed in [21], using the SpASL-score, is extended to the multi color space domain to define the Sparse Multi Color Space Histogram Selection (Sparse-MCSHS).
  • We then propose comparing this Sparse-MCSHS approach with a multi color space bin selection approach, also based on a sparse representation. For this purpose, the sparsity score proposed by Liu [22] is used for selecting the most discriminant bins of LBP histograms extracted from images coded in several color spaces, leading to the Sparse Multi Color Space Bin Selection (Sparse-MCSBS) approach. This second minor contribution corresponds to the MCSBS approach proposed in [16] that was extended to a sparse representation.
  • These two approaches, Sparse-MCSHS and Sparse-MCSBS, are then combined to form the Sparse-MCSHBS approach (Sparse Multi Color Space Histogram and Bin Selection). This combination of bin and histogram selections represents the main contribution of the paper since such a combination for selecting relevant LBP bins was never previously proposed.
In Section 2, the color LBP histograms used to represent color textures are first described. The proposed Sparse-MCSHS and Sparse-MCSBS approaches are then presented in Section 3. Section 4 introduces the combination of bin and histogram selections and, finally, experiments are carried out on benchmark texture databases in Section 5.

2. Color LBP Histograms

The LBP operator is widely applied to define texture features for the classification of gray level images due to its inherent simplicity and robustness [3]. It transforms an image by thresholding the levels of the P neighbors of each pixel and coding the result as a binary number (where P is the number of neighboring pixels). Usually, the histogram of this LBP image is then computed for texture analysis, and many authors take an interest in the reduction of the 2^P-dimensional LBP histograms in order to improve texture classification performance [4].
The original LBP computation, based on gray level images, was then extended to multiple variants [23] and also to color, since it was demonstrated that color information is a highly discriminant visual cue to represent the texture, especially in natural textures [24,25,26,27,28]. In the literature, the extension of LBP to color follows four different strategies: one in which color and texture information is analyzed separately (1), and three others where color and texture are jointly considered during the computation of the LBP descriptor (2-3-4) [29,30,31].
(1)
In the first strategy, the original LBP operator is computed from the luminance image and combined with color features. For example, Mäenpää and Ning proposed representing the color texture by concatenating the 3D color histogram of the color image and the LBP histogram of the corresponding luminance image [31,32]. Cusano et al. propose a texture descriptor which combines a luminance LBP histogram with color features based on the local color contrast [33].
(2)
The second strategy is a marginal approach that consists of applying the original LBP operator independently on each of the three components of the color image, without considering the spatial interactions between the levels of two different color components. The texture descriptor is obtained by concatenating the three resulting LBP histograms. This within component strategy was applied by several authors [34,35,36,37,38].
(3)
The third strategy consists of taking into account the spatial interactions within and between color components. To describe color texture, the Opponent Color LBP (OCLBP) was defined [31]. For this purpose, the LBP operator is applied on each pixel and for each pair of components denoted ( C k , C k′ ), k , k′ ∈ { 1 , 2 , 3 }. In this definition, opposing pairs such as ( C 1 , C 2 ) and ( C 2 , C 1 ) are considered to be highly redundant, and so, only one of each pair is taken into consideration in the analysis. This leads to characterizing a texture with only six histograms ( ( C 1 , C 1 ) , ( C 2 , C 2 ) , ( C 3 , C 3 ) , ( C 1 , C 2 ) , ( C 1 , C 3 ) , ( C 2 , C 3 ) ) out of the nine available ones. However, these a priori chosen six histograms are not always the most relevant for the different considered data sets [17] and it is preferable to consider the Extended Opponent Color LBP (EOCLBP). This way of describing color textures with LBP was proposed by Pietikäinen in 2002 [37]. It consists of taking into account each color component independently and each possible pair of color components, leading to nine different histograms: three within-component ( ( C 1 , C 1 ) , ( C 2 , C 2 ) , ( C 3 , C 3 ) ) and six between-component ( ( C 1 , C 2 ) , ( C 2 , C 1 ) , ( C 1 , C 3 ) , ( C 3 , C 1 ) , ( C 2 , C 3 ) , ( C 3 , C 2 ) ) LBP histograms. These nine histograms are finally concatenated so that a color texture image is represented in a ( 9 × 2^P )-dimensional feature space. The OCLBP and EOCLBP have often been considered to classify color texture images [6,17,18,31,39,40]. Recently, Lee et al. proposed another color LBP variant for face recognition tasks, the local color vector binary pattern [41]. In this approach, each color texture image is characterized by the color norm pattern and the color angular patterns via the LBP texture operation.
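As an illustration of the EOCLBP construction, the sketch below computes one between-component LBP histogram (center pixel taken from component C k, neighbors from component C k′) and concatenates the nine ( C k , C k′ ) histograms. Function names and the neighbor ordering are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def opponent_lbp_hist(center_img, neighbor_img):
    """LBP histogram where the center level comes from one color component
    and the neighbor levels from another (P = 8, R = 1)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h = np.zeros(256, dtype=int)
    rows, cols = center_img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if neighbor_img[r + dr, c + dc] >= center_img[r, c]:
                    code += 1 << bit
            h[code] += 1
    return h

def eoclbp(color_img):
    """EOCLBP: nine histograms, one per (C_k, C_k') pair, concatenated."""
    channels = [color_img[:, :, k] for k in range(3)]
    hists = [opponent_lbp_hist(ck, ckp) for ck in channels for ckp in channels]
    return np.concatenate(hists)  # 9 x 256 = 2304-dimensional descriptor
```

When k = k′ this reduces to the within-component LBP of the second strategy, so the nine pairs cover both the within- and between-component interactions described above.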
(4)
The fourth strategy consists of analyzing the spatial interactions between the colors of the neighboring pixels, based on the consideration of an order relation between colors following a vectorial approach. Instead of comparing the color components of pixels, Porebski et al. represent the color of pixels by a vector and compare the color vectors of the neighboring pixels with the color vector of the central one [42]. They use a partial color order relation based on the Euclidean distance for comparing the ranks of colors. As a result, a single color LBP histogram is obtained instead of the 6 or 9 provided by OCLBP or EOCLBP, respectively [17,18]. Another possible way consists of defining a suitable total ordering in the color space. This strategy was investigated by Ledoux et al., who propose the Mixed Color Order LBP (MCOLBP) [43]. Finally, in order to produce a single LBP code per color, the quaternion representation can be considered. The quaternion is shown to be an efficient mathematical tool for representing color images based on a hypercomplex representation [44]. Lan et al. have thus proposed the Quaternionic Local Binary Pattern (QLBP), which makes use of quaternions to represent each pixel color as a hypercomplex number including all color components at one time. Under this representation, the dimension of QLBP is equal to the dimension of a grayscale LBP. QLBP was applied to person re-identification problems by Lan and Chahla in [45,46] and was then extended in [47] for color image classification.
Among the different extension strategies of LBP to color, MCOLBP and QLBP have the advantage of providing a texture descriptor whose dimension is equal to that of the gray level LBP histogram, and thus a low computation time. However, the classification accuracies obtained with these descriptors on two benchmark texture databases are not as high as those obtained with OCLBP [43,48]. That is the reason why we propose in this paper to consider the color LBP coming from the third strategy, and particularly the EOCLBP, where no a priori choice of the most discriminant LBP histograms is made. To reduce the dimension of this ( 9 × 2^P )-dimensional LBP-based feature space, three selection procedures based on a sparse representation are presented and compared to automatically select the most discriminant features: the Sparse-MCSHS and Sparse-MCSBS approaches, described in the next section, and the combination of bin and histogram selections, presented in Section 4.

3. Sparse-MCSHS and Sparse-MCSBS Approaches

Many authors have led studies on the choice of color space for different applications: machine vision, face recognition, texture analysis, etc. [49,50,51]. Indeed, there exist numerous color spaces and it has been proven that the color space choice impacts the results [52]. In the framework of color texture classification, many authors try to determine the “best” color space in order to improve the results of their proposed classification approach. However, it was shown that it is difficult to a priori determine the best color space suited to all applications of color texture classification [16]. For this reason, an alternative approach emerged: it consists of simultaneously exploiting the properties of several color spaces. Three main strategies are proposed in the literature:
  • Color space fusion, which involves fusing the results from several classifiers, each one operating in a different color space [53,54,55,56],
  • Color space selection, which consists of selecting the most well suited color spaces which are based on some specific quality criteria [57,58,59,60],
  • Color texture feature selection, which evaluates the texture features over different color spaces and selects the set of features that provides the best discrimination between the different texture classes by using a supervised feature selection approach [34,61,62,63,64].
No existing study compares the performance of these color space combination strategies, and such a comparison would be a valuable prospect. In this paper, we propose using the color texture feature selection strategy to compare and combine the approaches of LBP histogram selection and LBP bin selection in a multi color space framework.

3.1. Considered Color Spaces

As explained previously, there exist numerous color spaces. These color spaces take into account different physical, physiological and psycho-visual properties, and can be classified into four families [60]. N S = 9 color spaces are here considered for experiments:
  • R G B and r g b , which belong to the primary space family,
  • Y C b C r and ( w b , r g , b y ) , which are luminance-chrominance spaces,
  • I 1 I 2 I 3 , which is an independent color component space,
  • HSV, HSI, HLS and I-HLS, which belong to the perceptual space family.
These nine color spaces were chosen since they do not require knowledge of the illumination and image acquisition conditions, contrary to the CIE L * a * b * or L * u * v * color spaces for instance. They are also representative of the four different color space families, even if a majority of perceptual spaces were chosen, because these spaces are known to obtain good classification accuracies [63,65].

3.2. Candidate Color Texture Descriptors

To compute the color LBP histograms or bins that are candidates for the selection, each image is first coded in each of the N S = 9 color spaces previously presented. Then, for each of these color spaces, the δ m a x = 9 different LBP histograms of the EOCLBP descriptor are computed from the so-coded images. A color texture is thus represented by δ m a x × N S = 9 × 9 = 81 candidate LBP histograms. When the number of bins Q is equal to 256 for each histogram (with P = 8 , Q = 2^P), the total number of candidate LBP bins is Q × δ m a x × N S = 256 × 9 × 9 = 20,736 bins. Once the color texture descriptors have been computed, the most discriminant features are selected out of the available candidate LBP-based features (histograms or bins) in a supervised context in order to build a feature subspace with a reduced dimension, in which color textures are represented.
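The dimensions involved can be checked with a few lines (the variable names are illustrative, not taken from the paper):

```python
N_S = 9        # number of considered color spaces
DELTA_MAX = 9  # EOCLBP histograms per color space
P = 8          # number of neighbors
Q = 2 ** P     # bins per histogram (256)

n_candidate_histograms = DELTA_MAX * N_S       # 81 candidate histograms
n_candidate_bins = Q * n_candidate_histograms  # 20,736 candidate bins
```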
To evaluate the relevance of the feature subspaces, different models are proposed in the supervised context, where samples with known class labels are available for the learning stage [66]. Wrapper models are defined by a feature selection procedure that uses the classification accuracy as the discrimination power of a feature subspace. They give good results and easily determine the dimension of the feature subspace by searching for the highest classification rate, but involve a long learning time and classifier-dependent results. On the contrary, filter models are built by using feature selection procedures that evaluate the discrimination power of different candidate feature subspaces without classifying the images. They are less time consuming but suffer from the difficulty of determining the dimension of the feature subspace to be selected. To obtain a good compromise between dimension selection, computation time and classification result, embedded models are preferred [67]. These approaches combine a filter model, to determine the most discriminating feature subspaces at different dimensions, and a wrapper model, to determine the dimension of the selected subspace [68]. In this framework, wrapper and embedded selection approaches require splitting up the initial dataset available for the learning stage in order to build a training and a validation subset. The classification stage is then operated in the selected feature subspace with a testing subset.

3.3. Sparse-MCSHS Approach

The Multi Color Space Histogram Selection (MCSHS) approach analyzes LBP histograms computed from texture images coded into several color spaces [16]. Indeed, rather than looking for the best color space, this approach first computes LBP histograms from several color spaces and then selects, out of the different candidate LBP histograms, those which are the most discriminant for the considered application in a supervised context.
MCSHS is an embedded histogram selection approach whose flow chart is represented by Figure 1. During the learning stage, candidate histograms are generated from training images. Then, the proposed histogram selection procedure uses a ranking algorithm. The selection is based on the histogram score evaluated for each of the δ m a x × N S = 81 available histograms. In previous works, several scores were proposed in the literature to evaluate the relevance of histograms: the Intra-Class Similarity score (ICS-score), proposed by Porebski et al. [17], which is based on an intra-class similarity measure; the Adapted Supervised Laplacian score (ASL-score), proposed by Kalakech et al. [18], which evaluates the relevance of histograms using the local properties of the image data; the Simba-2 score, proposed by Mouhajid et al. [20], which is based on the hypothesis margin and the χ 2 distance; and the Sparse Adapted Supervised Laplacian score (SpASL-score) [21]. Only the two first scores were applied in a multi color space framework. In this paper, we propose extending the multi color space domain with the LBP histogram selection approach using the SpASL-score.
The SpASL-score is an extension of the ASL-score proposed by Kalakech et al. [18]. The ASL-score is based on the Jeffrey distance and a similarity matrix which is deduced from the class labels with hard values 0 or 1. The SpASL-score uses a sparse representation to build a soft similarity matrix that takes float values between 0 and 1. Such a value measures the similarity in a subtle way, instead of being binary with just the two values 0 and 1. This leads to more powerful discriminating information [21].
Once the score has been computed for each of the δ m a x × N S = 81 candidate prototype histograms, a ranking is performed. The next step then consists of determining the dimension of the relevant histogram subspace. For this purpose, candidate subspaces of increasing dimension are evaluated (see Figure 1): at the first step of the procedure, the subspace made up of the histogram with the best score; at the second step, the subspace made up of the two first ranked histograms; and so on. The evaluation function at the different dimensions and the stopping criterion of the histogram selection procedure are based on the classification accuracy. In this work, accuracy is measured by using the nearest neighbor classifier associated with the L1 distance as a similarity measure, because it does not require any parameter to be adjusted during the learning stage. This evaluation requires considering, during the learning stage, a subset different from the training subset, i.e., the validation images. For this purpose, the classifier operates in each candidate subspace in order to classify the validation images represented by their prototype histograms. The selected subspace, whose dimension is D ^ = δ ^ × Q, is the one which maximizes the rate R δ of well-classified validation images:
$$\hat{\delta} = \underset{1 \leq \delta \leq \delta_{max} \times N_S}{\operatorname{argmax}} \; R_{\delta}.$$
During the classification stage, textures of testing images are represented in the so-selected relevant histogram subspace in order to determine their class labels.
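The incremental evaluation described in this section can be sketched as follows, with a plain 1-NN classifier using the L1 distance on validation images. The data layout (one array of prototype histograms per ranked histogram, best-scored first) and the function names are illustrative assumptions.

```python
import numpy as np

def one_nn_accuracy(train_X, train_y, val_X, val_y):
    """Accuracy of the 1-NN classifier with the L1 (city-block) distance."""
    correct = 0
    for x, y in zip(val_X, val_y):
        d = np.abs(train_X - x).sum(axis=1)  # L1 distance to every training image
        if train_y[np.argmin(d)] == y:
            correct += 1
    return correct / len(val_y)

def select_histogram_subspace(ranked_hists_train, train_y,
                              ranked_hists_val, val_y):
    """ranked_hists_* : list of (n_images, Q) arrays, best-scored histogram first.
    Returns the number of top-ranked histograms maximizing validation accuracy."""
    best_delta, best_rate = 1, -1.0
    for delta in range(1, len(ranked_hists_train) + 1):
        # Candidate subspace: concatenation of the delta first ranked histograms.
        train_X = np.hstack(ranked_hists_train[:delta])
        val_X = np.hstack(ranked_hists_val[:delta])
        rate = one_nn_accuracy(train_X, train_y, val_X, val_y)
        if rate > best_rate:
            best_delta, best_rate = delta, rate
    return best_delta, best_rate
```

In this sketch the classification stage would then represent the testing images with the `best_delta` first ranked histograms only.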

3.4. Sparse-MCSBS Approach

Like MCSHS, the Multi Color Space Bin Selection (MCSBS) approach analyzes LBP histograms computed from texture images coded into several color spaces. Instead of selecting the most discriminating histograms, MCSBS selects the most discriminating bins of these histograms. A first MCSBS approach was proposed by Porebski et al. in 2018 [16]: it is an extension to the multi color space domain of the bin selection method proposed in 2010 by Guo et al. for gray level image analysis [15]. In this paper, we consider that each bin of a histogram corresponds to a feature of a vector, and we propose applying an embedded model for selecting the most discriminant bins. In the MCSHS approach, the SpASL-score, based on a sparse representation, was considered to select relevant histograms. In the MCSBS approach, we also propose considering a score based on a sparse representation. Among the effective supervised ranking scores, the supervised sparsity score outperforms the other scores, as shown in [22]. We therefore propose extending this score for selecting LBP histogram bins in the multi color space domain. The flow chart of the proposed MCSBS approach is represented by Figure 2.
During the learning stage, candidate histograms are generated from training images and concatenated to form a vector with Q × δ m a x × N S = 20,736 features. The sparsity score proposed by Liu is then computed for each feature [22]. The ranked bins are obtained by sorting all bins according to their score in ascending order.
To illustrate the proposed approach, we introduce an example with three sample histograms H i 1 , H i 2 and H i 3 , i ∈ { 1 , … , N }, extracted from N training images, to compute the sparsity score of each bin (see Figure 3). To represent the bins of each histogram respectively, we use three symbols: a square, a circle and a triangle. We assume that each histogram has six bins, which are numbered from 1 to 6. For example, the square numbered 1 represents the first bin of histogram H i 1 . In our approach, the three histograms are first concatenated to form a feature vector with 18 features. The score associated with each bin is then computed with the sparsity score and the bins are ranked in ascending order according to their value. Note that the score value of each bin is indicated below each symbol.
Once the bin ranking strategy is applied, the dimension of the relevant bin subspace is determined. For this purpose, the relevance of candidate bin subspaces with different dimensions is evaluated. At the first step, the candidate subspace composed of the first ranked bin is considered. Then, at the second step, the candidate subspace composed of the two first ranked bins is considered, and so on. As for the MCSHS approach, the nearest neighbor classifier is considered with the L1 distance during the learning stage and the relevance of each candidate subspace is evaluated by the classification accuracy. This evaluation requires considering a validation subset different from the training images during the learning stage. The classifier operates in each candidate subspace to classify the validation images represented by their prototype bins. The selected subspace, whose dimension is D ^ , is the one which maximizes the rate of well-classified validation images, denoted R D :
$$\hat{D} = \underset{1 \leq D \leq Q \times \delta_{max} \times N_S}{\operatorname{argmax}} \; R_{D}.$$
During the classification stage, textures of testing images are represented in the so-selected relevant bin subspace in order to determine their class labels.
In the following section, our proposed original approach, which combines a histogram selection and a bin selection for the classification task, is introduced. Indeed, it is clear that not all bins of the histograms selected by the MCSHS approach are meaningful for modelling the characteristics of textures. Since this approach selects the most discriminating histograms and filters out the rest, redundant bins may remain in the selected histograms while meaningful bins of the discarded histograms are lost. This leads to our principal contribution, which performs a combination of bin and histogram selections.

4. Combination of Bin and Histogram Selections

The purpose of the Multi Color Space Histogram and Bin Selection (MCSHBS) strategy is to filter out the irrelevant bins of the relevant histograms and, conversely, to find the relevant bins of the irrelevant histograms. The flow chart of this approach is illustrated in Figure 4.
It is also an embedded selection method, where the bin ranking strategy is applied after the histogram ranking. During the learning stage, candidate histograms are generated from training images and a histogram ranking is applied by using a histogram selection score. A bin ranking strategy is then applied within each ranked histogram independently (instead of within the concatenated histogram, as in the MCSBS approach). Here, the SpASL-score is computed to rank the LBP histograms and the sparsity score proposed by Liu is considered to rank the LBP bins. We assume that the first bin of each histogram is more relevant than its other bins. Therefore, we propose ranking first the group made of the first bin of each histogram, in the order of the ranked histograms, then the group of second bins, and so on until the last bin of each histogram. The final bin ranking is a ( Q × δ m a x )-uplet vector, where Q is the total number of bins of each histogram and where the order of the bins in each δ m a x -uplet is based on the histogram ranking.
To illustrate the combination of histogram ranking and bin selection approaches, let us take the same example as in the previous section (see Figure 5). In this illustration, we assume that the histograms are ranked by a histogram selection score as H i 3 , H i 1 and H i 2 . A bin ranking is then achieved within each histogram by the supervised sparsity score, whereas in the previous section, it is done within the concatenated histogram. The final bin ranking is a vector composed of the 6 triplet-bins. The first triplet is composed of the three first bins of H i 3 , H i 1 and H i 2 , respectively. This procedure continues until the last triplet, which is composed of the three last bins of H i 3 , H i 1 and H i 2 respectively, is constituted.
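The interleaving used in this example can be sketched as follows. Bins are identified by their global index in the concatenated descriptor; the function name and argument layout are illustrative assumptions.

```python
def interleaved_bin_ranking(hist_order, per_hist_bin_order, Q):
    """Final bin ranking of the combined histogram-then-bin selection.

    hist_order         : histogram indices sorted by histogram score, best first
    per_hist_bin_order : per_hist_bin_order[h] lists the bins of histogram h
                         sorted by the sparsity score, best first
    Q                  : number of bins per histogram

    Returns global bin indices: first the best bin of every ranked histogram,
    then the second-best bin of every ranked histogram, and so on."""
    ranking = []
    for rank in range(Q):        # the rank-th best bin of each histogram...
        for h in hist_order:     # ...taken in the order of the ranked histograms
            b = per_hist_bin_order[h][rank]
            ranking.append(h * Q + b)  # global index of bin b of histogram h
    return ranking
```

With the toy example above (three histograms of Q = 6 bins, histograms ranked as H i 3 , H i 1 , H i 2 ), the first triplet of the ranking contains the best bin of each of the three histograms, in that histogram order.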
To find the relevant subspace, the selection procedure is carried out in the same way as in Section 3.4: the relevance of candidate bin subspaces with different dimensions is evaluated thanks to the classification accuracy. The selected subspace, whose dimension is D ^ , is the one which maximizes the rate R D of well-classified validation images (see Equation (2)). During the classification stage, the relevant bins previously selected are computed for each testing image to determine its class label.

5. Experiments

In this section, we propose comparing the strategies of sparse LBP histogram selection, sparse LBP bin selection, and the combination of both in the multi color space framework (see Section 5.3). These strategies will be applied and analyzed with five image databases: NewBarktex, Outex-TC-00013, USPTex, STex and Parquet (see Section 5.1). A discussion about processing times required by the proposed selection approaches will be presented in Section 5.4.

5.1. Considered Color Texture Datasets

Outex-TC-00013 is composed of 68 color texture images acquired under controlled conditions by a 3-CCD digital color camera and whose size is 746 × 538 pixels [69]. Each of these 68 textures is split up into 20 disjoint sub-images of size 128 × 128 . Among these 1360 sub-images, 680 are used for the training subset and the remaining 680 are considered as testing images for a holdout evaluation (This decomposition is available at http://www.outex.oulu.fi/index.php?page=classification).
The USPTex set is a more recent database [70]. It contains 191 natural color textures acquired under an unknown but fixed light source. As for the previous set, these images are split up into disjoint sub-images of size 128 × 128 pixels. Since the original image size is here 512 × 384 pixels, this makes a total of 12 sub-images per texture. For our experiments, this initial dataset of 2292 sub-images is split up in order to build a training and a testing image subset: for each texture, 6 sub-images are considered for the training and the 6 others are used as testing images (This decomposition is available at http://www-lisic.univ-littoral.fr/~porebski/USPtex.zip).
The Salzburg texture image database (STex) is a large collection of 476 color texture images, whose acquisition conditions are not defined. Each of the 476 original images is split up into 16 non-overlapping 128 × 128 sub-images. Half of these 7616 sub-images are used for the training subset and the remaining ones are considered as testing images (This decomposition is available at http://www-lisic.univ-littoral.fr/~porebski/Stex.zip).
Although the Outex-TC-00013, USPTex and STex sets are widely used, these image sets present a major drawback: the partitioning applied to build these three sets consists of extracting training and testing sub-images from the same original texture image. Such a partitioning, when it is combined with a classifier such as the nearest neighbor classifier, leads to biased classification results [52]. Indeed, testing images are spatially close to training images and are thus correlated. A simple 3D color histogram is then able to reach a high classification accuracy whereas it only characterizes the color distribution within the color space and does not take into account the spatial relationships between neighboring pixels, as a color texture feature should [31]. That is why we propose considering two other challenging image sets, built from the Parquet and Barktex databases, respectively.
The Parquet database is composed of fourteen varieties of wood for flooring [71]. Each type of wood presents between 2 and 4 different grades, which are considered as independent classes, leading to a total of 38 different classes. The main challenge of this database is that, within each type of wood, the grades are very similar to each other. Moreover, the sizes of the acquired images differ and the number of samples per class varies from 6 to 8. As done in [72], six samples per class are retained and the images are centre-cropped, so that the final dimension of the images ranges from 480 × 480 to 1300 × 1300 pixels. For each texture, 3 images are considered for the training and the others are used as testing images (This decomposition is available at http://www-lisic.univ-littoral.fr/~porebski/Parquet.zip). With the experimental protocol proposed by the authors, the classification accuracy reached on this challenging database does not exceed 75.1%.
The Barktex database includes six tree bark classes, with 68 images per class [73]. Even if the number of classes of this database is limited to 6, the textures of these different classes are close to each other and their discrimination is not easy. To build the NewBarktex set, a region of interest, centered on the bark and whose size is 128 × 128 pixels, is first defined. Then, four sub-images whose size is 64 × 64 pixels are extracted from each region. We thus obtain a set of 68 × 4 = 272 sub-images per class. To ensure that the color texture images used for training and testing are as little correlated as possible, the four sub-images extracted from the same original image all belong either to the training subset or to the testing one [52]: 816 images are thus used as training images and the remaining 816 as testing images (The NewBarktex image test suite can be downloaded at https://www-lisic.univ-littoral.fr/~porebski/NewBarkTex.zip). Table 1 summarizes these databases.
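The parent-aware partitioning used for NewBarkTex, where all sub-images extracted from a same original image fall into the same subset, can be sketched as follows. This is an illustrative sketch, not the authors' code; the function and variable names are ours.

```python
import random

def split_by_parent(samples, train_ratio=0.5, seed=0):
    """Split sub-images into training/testing subsets so that sub-images
    extracted from the same original image never end up in both subsets.

    samples: list of (parent_id, class_label, subimage) tuples.
    """
    by_parent = {}
    for sample in samples:
        by_parent.setdefault(sample[0], []).append(sample)
    parents = sorted(by_parent)
    random.Random(seed).shuffle(parents)      # deterministic shuffle
    n_train = int(len(parents) * train_ratio)
    train = [s for p in parents[:n_train] for s in by_parent[p]]
    test = [s for p in parents[n_train:] for s in by_parent[p]]
    return train, test
```

Partitioning at the level of the parent images, rather than at the level of the sub-images, is what limits the spatial correlation between the training and testing subsets.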

5.2. Performance Evaluation and Comparisons

Since the considered texture benchmark databases were built in order to apply a holdout evaluation, the classification performance is assessed by following this evaluation scheme. For this purpose, the classification results reached by the proposed methods are evaluated by measuring the accuracy, i.e., the rate of well-classified testing images during the classification stage.
During this stage, the relevant histograms previously selected by one of the proposed approaches presented in Section 3 and Section 4 are computed for each testing image and compared to those of the training images in the so-selected relevant histogram or bin subspace to determine the testing image label. Since the purpose of this paper is to show the contribution of different LBP-based feature selection approaches independently of the considered classifier and its parameters (such as the metric), the nearest neighbor classifier associated with the L1 distance as a similarity measure is considered here. Obviously, the classification results are expected to be improved by using more elaborate methods, such as an SVM classifier for example.
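The classification stage described above reduces to a 1-NN rule with the L1 distance, which can be sketched in a few lines (the function name is ours):

```python
import numpy as np

def classify_1nn_l1(train_feats, train_labels, test_feats):
    """Assign to each testing image the label of its nearest training
    image, nearness being measured by the L1 (city-block) distance
    between feature vectors in the selected subspace."""
    # Pairwise L1 distances of shape (n_test, n_train), via broadcasting
    dists = np.abs(test_feats[:, None, :] - train_feats[None, :, :]).sum(axis=2)
    return train_labels[dists.argmin(axis=1)]
```

The choice of 1-NN with L1 keeps the comparison metric-light and parameter-free, so differences in accuracy can be attributed to the selected feature subspace rather than to classifier tuning.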
As previously mentioned, the texture benchmark databases are composed of only two image subsets (training and testing images), whereas the proposed approaches need three subsets, the validation images required by the embedded selection methods being added. To evaluate and compare our experimental results under the same conditions as other works, which do not divide the training subset into two parts, we propose using one subset as the training subset and the second both as the validation and the testing subset. During the classification stage, the supervised classifier thus uses exactly the same training subset as the one used by the compared works for determining the class labels of the testing images.
In the following, the proposed LBP-based feature selection strategies are evaluated, analyzed and compared with state-of-the-art results under the same experimental protocol: the number of classes, image size, number of images per class, total number of images and accuracy evaluation method are the same, and no changes of texture rotation or illumination are involved. Comparisons therefore exclude existing works that apply other protocols to the experimented databases.

5.3. Validation of the Proposed LBP-Based Feature Selection Strategies

Table 2 presents the results obtained with the proposed LBP-based sparse feature selection approaches, on the NewBarktex database. The single color space and multiple color space strategies are also compared, as well as the results with and without selection.
For each color space, the accuracy R D ^ estimated by the rate of well-classified testing images and the dimension D ^ of the selected feature space are presented. First, we can notice that operating a selection significantly improves the accuracy while reducing the dimensionality of the feature space. The improvement reaches on average 7.8% compared to the approach without selection.
We can also notice that even if the considered color texture database is fixed, the color space that enables reaching the best accuracy is not always the same and depends on the considered feature selection strategy: r g b (74.4%) when no selection is performed, H S V (79.5%) when a bin selection is achieved, and R G B with the histogram selection approach (81.3%) and the combination of both selections (83.7%). This confirms that the a priori choice of a well-suited color space is not easy, and hence the interest of the multi-color space strategy. This table also shows that considering several color spaces significantly improves the accuracy, since an improvement of 7% is obtained compared to a single color space strategy.
We can also notice that even if the dimension of the feature space selected by the bin selection strategy is lower, the Sparse-MCSHS approach (87.3%) significantly outperforms the proposed Sparse-MCSBS approach (83.6%). This confirms the result recently obtained by Porebski et al. with the ICS-based MCSHS approach versus the MCSBS approach using Guo’s selection strategy [16]. Finally, we notice that operating a combination of bin and histogram selections (88.4%) improves the accuracy compared to a simple Sparse-MCSBS approach or a simple Sparse-MCSHS approach.
Table 3, Table 4, Table 5, Table 6 and Table 7 present the classification results obtained by our proposed approaches and those obtained by the different studies which applied a color texture classification algorithm on Outex-TC-00013, USPTex, STex, Parquet, and NewBarktex, respectively. To achieve classifier-independent comparisons, only the studies using the nearest neighbor classifier are presented here. We also propose in this paper to ignore wrapper approaches, which use the classification rate as a measure of the discrimination power of feature subspaces and thus require a significant learning time [74]. On the other hand, we carried out experiments with Convolutional Neural Networks (CNN) to compare our approach with the latest popular methods. Since the considered color texture databases have few learning images, we propose using the pretrained AlexNet [75] and GoogleNet [76] networks to achieve this comparison.
The rows highlighted in gray correspond to experiments that are carried out in this work, whereas the other rows correspond to results published by other authors. The first column refers to the related papers and indicates the descriptors which were computed to discriminate the different color texture classes. The color spaces considered to classify the images are presented in the second column of each table. Finally, the last column shows the obtained rate of well-classified testing images (Accuracy).
For the USPTex and STex databases, the best rate is 98.1%. These rates are obtained with our proposed combination of bin and histogram selections and are very encouraging. For Outex-TC-00013, Parquet and NewBarkTex, the results obtained with our approach compare very favorably with state-of-the-art works, since they reach the second position with 95.7%, 83.3%, and 88.4%, respectively. For the Outex database, they are just behind the results obtained by the Multi Color Space Feature Selection proposed in [52], where many more color spaces are considered. For the Parquet and NewBarkTex sets, they follow the rates of well-classified testing images reached by CNN with the pretrained GoogleNet and AlexNet networks, respectively.
Table 8 presents the relative ranking of the considered approaches and the average accuracy reached over the five databases. This table shows that the combination of bin and histogram selections outperforms the simple MCSHS and MCSBS approaches: an average improvement of 0.8% compared to the MCSHS strategy and of 3.1% compared to the MCSBS approach is observed. It also shows the relevance of the proposed approach with respect to the experiments on pretrained CNN, with an average improvement of 8.2%. We can also notice that the results obtained with the proposed MCSHBS approach are more stable than those reached with CNN, since they always rank first or second, whatever the considered database.
As the aim of this paper is to reveal the real contribution of the proposed selection strategy as independently as possible of the impact of the classifier and of the LBP parameters, straightforward methods of texture representation (basic LBP configuration) and classification (1-NN) were preferred, as well as texture databases without changes of observation conditions. Obviously, the representation of color textures could be improved by using other configurations of LBP parameters (for instance by increasing the parameters P and R), by adding other color spaces or by considering an ensemble of several descriptors [93]. However, a strong increase of the dimensionality of the original feature set could lead to a number of features much larger than the number of samples, and so to a risk of overfitting [94]. In such a case, an approach based on a preliminary feature clustering should be relevant to reduce the number of candidate features for selection.
The next section studies the impact of the proposed selection approaches on the processing times.

5.4. Processing Times

A moderate consumption of computation resources is needed for applications that require fast processing and/or low memory storage, such as machine vision applications operating in real time with significant production time constraints, or mobile applications embedded on smartphones and tablets, where the processor and the memory are limited. The supervised selection procedure aims to define a compact representation of color textures by reducing the dimensionality of the feature space during a learning stage. This offline learning stage is carried out prior to the online classification stage and, as with deep learning methods, can last from several minutes to a few days with no impact on the final application. During the online classification stage, the so-defined compact representation then increases the accuracy of the classifier and decreases the computation time.
When no selection is performed, the learning stage only consists of computing the LBP histograms from the training images, but the high dimension of the feature space ( 81 × 256 = 20,736 in our case) leads to a high and crippling computation time for the online classification stage, with a poor accuracy, as illustrated in Table 2 for the NewBarktex dataset (78.2%). When a selection procedure is performed, the learning stage is more time consuming since it consists of the computation of all the available histograms from the training and the validation images, followed by a selection phase to determine the relevant feature space. As an embedded method is used during this selection phase, the dimension determination step (see Figure 1, Figure 2 and Figure 4) is the most time consuming since it requires operating several classifications to evaluate each candidate subspace: Equation (1) shows that δ m a x × N S operations of classification are performed for the MCSHS approach, while MCSBS and MCSHBS execute Q × δ m a x × N S operations (see Equation (2)). As Q = 2 P for a basic LBP descriptor, the learning execution time is strongly impacted by an increase of the LBP parameter P. However, the low dimensional feature subspace determined by the selection procedure during the offline learning stage significantly reduces the online classification time.
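As a back-of-the-envelope illustration of these counts with the paper's basic configuration (P = 8 neighbors, hence Q = 256 bins per histogram and 81 candidate histograms): the reading of N S used below (one classification per validation image and per candidate subspace) and its value are our assumptions, chosen only to make the Q-fold gap between the approaches concrete.

```python
P = 8                   # LBP neighbors (basic configuration)
Q = 2 ** P              # bins per histogram -> 256
delta_max = 81          # candidate histograms
N_S = 816               # assumed for illustration (NewBarkTex validation set)

full_dim = delta_max * Q             # feature space without selection
mcshs_ops = delta_max * N_S          # classifications for MCSHS (Eq. (1))
mcsbs_ops = Q * delta_max * N_S      # for MCSBS / MCSHBS (Eq. (2))
```

Whatever the exact value of N S, the bin-level approaches evaluate Q times more candidate subspaces than the histogram-level one, which is why the learning time grows so quickly with P.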
Compared to a histogram selection approach (MCSHS), the online classification processing time reached by MCSHBS is close, since the dimension of the selected feature subspace is nearly the same: for instance, with the NewBarkTex dataset, the dimension of the selected feature subspace is D ^ = 9472 for the Sparse-MCSHS approach and D ^ = 11,985 for Sparse-MCSHBS (see Table 2). The slight cost in processing time incurred by the Sparse-MCSHBS approach is counterbalanced by a slight gain of accuracy: 87.3% for the Sparse-MCSHS approach and 88.4% for Sparse-MCSHBS.
Compared to a bin selection approach (MCSBS), the online classification processing time required by MCSHBS is higher because the dimensionality of the feature sub-space selected with MCSBS is lower: D ^ = 754 for the Sparse-MCSBS approach and D ^ = 11,985 for Sparse-MCSHBS (see Table 2). However, classification results show that MCSHBS clearly outperforms MCSBS: the accuracy reaches 83.6% for the Sparse-MCSBS approach and 88.4% for Sparse-MCSHBS.
These results show the interest of operating a selection thanks to the proposed Sparse-MCSHBS approach.

6. Conclusions

In this paper, different strategies of LBP-based feature selection were developed:
  • First, we extended the LBP histogram selection approach to the multi-color space domain, using the SpASL-score, to define the Sparse Multi Color Space Histogram Selection (Sparse-MCSHS) approach.
  • We then compared this approach with a new multi color space bin selection approach, also based on a sparse representation: the Sparse Multi Color Space Bin Selection (Sparse-MCSBS) approach.
  • A combination of bin and histogram selections (the Sparse-MCSHBS approach) was finally developed and evaluated on several benchmark texture databases.
The obtained results have shown that the multi-color space strategy contributes to improving the classification performance, and that the proposed Sparse-MCSHBS strategy outperforms the simple MCSBS and MCSHS approaches, while comparing favorably with state-of-the-art works, including CNN-based approaches. Obviously, the classification results are expected to be improved by using more recent and sophisticated methods associated with our strategy, such as the SVM classifier for example.
To operate feature selection on high-dimensional and small-sample data, we are currently working on an approach based on a preliminary feature clustering to reduce the number of candidate features for selection. Associated with a filter method, this approach aims both to reduce the learning time and to improve the classification accuracy as well as the selection stability. Our future work will also consist in analyzing and comparing the different strategies proposed in the literature to simultaneously exploit the properties of several color spaces.

Author Contributions

Investigation, V.T.H.; Software, A.P. and V.T.H.; Supervision, N.V. and D.H.; Writing—original draft, A.P. and V.T.H.; Writing—review & editing, N.V. and D.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mirmehdi, M.; Xie, X.; Suri, J. Handbook of Texture Analysis; Imperial College Press: London, UK, 2009. [Google Scholar]
  2. Liu, L.; Chen, J.; Fieguth, P.; Zhao, G.; Chellappa, R.; Pietikäinen, M. From BOW to CNN: Two decades of texture representation for texture classification. Int. J. Comput. Vis. 2019, 127, 74–109. [Google Scholar] [CrossRef] [Green Version]
  3. Ojala, T.; Pietikainen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
  4. Pietikainen, M.; Hadid, A.; Zhao, G.; Ahonen, T. Computer Vision Using Local Binary Patterns; Springer: London, UK, 2011; Volume 40. [Google Scholar]
  5. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  6. Chan, C.H.; Kittler, J.; Messer, K. Multispectral local binary pattern histogram for component-based color face verification. In Proceedings of the First IEEE International Conference on Biometrics: Theory, Applications, and Systems 2007, Crystal City, VA, USA, 27–29 September 2007; pp. 1–7. [Google Scholar]
  7. Zhao, D.; Lin, Z.; Tang, Z. Laplacian PCA and its applications. In Proceedings of the 11th IEEE International Conference on Computer Vision IEEE 2007, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8. [Google Scholar]
  8. Hussain, S.U.; Triggs, W. Feature sets and dimensionality reduction for visual object detection. In Proceedings of the British Machine Vision Conference, Wales, UK, 30 August–2 September 2010; BMVA Press: Guildford, UK, 2010; p. 112. [Google Scholar]
  9. Huang, D.; Shan, C.; Ardabilian, M.; Wang, Y.; Chen, L. Local binary patterns and its application to facial image analysis: a survey. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2011, 41, 765–781. [Google Scholar] [CrossRef] [Green Version]
  10. Yu, L.; Liu, H. Feature selection for high-dimensional data: A fast correlation-based filter solution. In Proceedings of the 20th International Conference on International Conference on Machine Learning 2003, Washington, DC, USA, 21–24 August 2003; AAAI Press: Menlo Park, CA, USA, 2003; Volume 3, pp. 856–863. [Google Scholar]
  11. Smith, R.S.; Windeatt, T. Facial expression detection using filtered local binary pattern features with ECOC classifiers and platt scaling. In Proceedings of the First Workshop on Applications of Pattern Analysis 2010, Windsor, UK, 1–3 September 2010; pp. 111–118. [Google Scholar]
  12. Lahdenoja, O.; Laiho, M.; Paasio, A. Reducing the feature vector length in local binary pattern based face recognition. In Proceedings of the IEEE International Conference on Image Processing 2005, Genova, Italy, 14 September 2005; Volume 2, p. II-914. [Google Scholar]
  13. Maturana, D.; Mery, D.; Soto, A. Learning discriminative local binary patterns for face recognition. In Proceedings of the 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011), Santa Barbara, CA, USA, 21–25 March 2011; pp. 470–475. [Google Scholar]
  14. Liao, S.; Law, M.W.K.; Chung, A.C.S. Dominant Local Binary Patterns for texture classification. IEEE Trans. Image Process. 2009, 18, 1107–1118. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Guo, Y.; Zhao, G.; Pietikainen, M.; Xu, Z. Descriptor learning based on fisher separation criterion for texture classification. In Proceedings of the 10th Asian Conference on Computer Vision, Queenstown, New Zealand, 8–12 November 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 185–198. [Google Scholar]
  16. Porebski, A.; TruongHoang, V.; Vandenbroucke, N.; Hamad, D. Multi-color space local binary pattern-based feature selection for texture classification. J. Electron. Imaging 2018, 27, 1–15. [Google Scholar]
  17. Porebski, A.; Vandenbroucke, N.; Hamad, D. LBP histogram selection for supervised color texture classification. In Proceedings of the 20th IEEE International Conference on Image Processing 2013, Melbourne, Australia, 15–18 September 2013; pp. 3239–3243. [Google Scholar]
  18. Kalakech, M.; Porebski, A.; Vandenbroucke, N.; Hamad, D. A new LBP histogram selection score for color texture classification. In Proceedings of the 5th IEEE International Conference on Image Processing Theory, Tools and Applications, Orleans, France, 10–13 November 2015; pp. 242–247. [Google Scholar]
  19. Kalakech, M.; Porebski, A.; Vandenbroucke, N.; Hamad, D. Unsupervised Local Binary Pattern histogram selection scores for color texture classification. J. Imaging 2018, 4, 112. [Google Scholar] [CrossRef] [Green Version]
  20. Moujahid, A.; Abanda, A.; Dornaika, F. Feature extraction using block-based Local Binary Pattern for face recognition. Proc. Intell. Robot. Comput. Vis. XXXIII Alg. Tech. 2016, 2016, 1–6. [Google Scholar] [CrossRef]
  21. Hoang, V.T.; Porebski, A.; Vandenbroucke, N.; Hamad, D. LBP histogram selection based on sparse representation for color texture classification. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications 2017, Porto, Portugal, 27 February–1 March 2017; Volume 4, pp. 476–483. [Google Scholar]
  22. Liu, M.; Zhang, D. Sparsity score: a novel graph-preserving feature selection method. Int. J. Pattern Recognit. Artif. Intell. 2014, 28, 1450009. [Google Scholar] [CrossRef]
  23. Liu, L.; Fieguth, P.; Guo, Y.; Wang, X.; Pietikäinen, M. Local binary features for texture classification: Taxonomy and experimental study. Pattern Recognit. 2017, 62, 136–160. [Google Scholar] [CrossRef] [Green Version]
  24. Asada, N.; Matsuyama, T. Color image analysis by varying camera aperture. In Proceedings of the 11th International Conference on Pattern Recognition, Computer Vision and Applications 1992, The Hague, The Netherlands, 30 August–3 September 1992; pp. 466–469. [Google Scholar]
  25. Drimbarean, A.; Whelan, P.F. Experiments in colour texture analysis. Pattern Recognit. Lett. 2001, 22, 1161–1167. [Google Scholar] [CrossRef] [Green Version]
  26. Gonzalez-Rufino, E.; Carrion, P.; Cernadas, E.; Fernandez-Delgado, M.; Dominguez-Petit, R. Exhaustive comparison of colour texture features and classification methods to discriminate cells categories in histological images of fish ovary. Pattern Recognit. 2013, 46, 2391–2407. [Google Scholar] [CrossRef] [Green Version]
  27. Kandaswamy, U.; Schuckers, S.A.; Adjeroh, D. Comparison of texture analysis schemes under nonideal conditions. IEEE Trans. Image Process. 2011, 20, 2260–2275. [Google Scholar] [CrossRef]
  28. Palm, C.; Lehmann, T.M. Classification of color textures by Gabor filtering. Mach. Graph. Vis. Int. J. 2002, 11, 195–219. [Google Scholar]
  29. Cernadas, E.; Fernandez-Delgado, M.; Gonzalez-Rufino, E.; Carrion, P. Influence of normalization and color space to color texture classification. Pattern Recognit. 2017, 61, 120–138. [Google Scholar] [CrossRef]
  30. Khan, F.S.; Anwer, R.M.; Van de Weijer, J.; Felsberg, M.; Laaksonen, J. Compact color-texture description for texture classification. Pattern Recognit. Lett. 2015, 51, 16–22. [Google Scholar] [CrossRef]
  31. Maenpaa, T.; Pietikainen, M. Classification with color and texture: jointly or separately? Pattern Recognit. 2004, 37, 1629–1640. [Google Scholar] [CrossRef] [Green Version]
  32. Ning, J.; Zhang, L.; Zhang, D.; Wu, C. Robust object tracking using joint color-texture histogram. Int. J. Pattern Recognit. Artif. Intell. 2009, 23, 1245–1263. [Google Scholar] [CrossRef] [Green Version]
  33. Cusano, C.; Napoletano, P.; Schettini, R. Combining local binary patterns and local color contrast for texture classification under varying illumination. J. Opt. Soc. Am. A 2014, 31, 1453. [Google Scholar] [CrossRef]
  34. Banerji, S.; Verma, A.; Liu, C. LBP and color descriptors for image classification. In Cross Disciplinary Biometric Systems; Springer: Berlin/Heidelberg, Germany, 2012; pp. 205–225. [Google Scholar]
  35. Choi, J.; Plataniotis, K.N.; Ro, Y.M. Using colour local binary pattern features for face recognition. In Proceedings of the 17th IEEE International Conference on Image Processing 2010, Hong Kong, China, 26–29 September 2010; pp. 4541–4544. [Google Scholar]
  36. Han, G.; Zhao, C. A scene images classification method based on local binary patterns and nearest-neighbor classifier. In Proceedings of the Eighth IEEE International Conference on Intelligent Systems Design and Applications, Kaohsiung, Taiwan, 26–28 November 2008; pp. 100–104. [Google Scholar]
  37. Pietikainen, M.; Maenpaa, T.; Viertola, J. Color texture classification with color histograms and local binary patterns. In Workshop on Texture Analysis in Machine Vision 2002; Computer Science: Oulu, Finland, 2002; pp. 109–112. [Google Scholar]
  38. Zhu, C.; Bichot, C.E.; Chen, L. Image region description using orthogonal combination of local binary patterns enhanced with color information. Pattern Recognit. 2013, 46, 1949–1963. [Google Scholar] [CrossRef]
  39. Chelali, F.Z.; Djeradi, A. CSLBP and OCLBP local descriptors for speaker identification from video sequences. In Proceedings of the IEEE International Conference on Complex Systems 2015, Marrakech, Morocco, 23–25 November 2015; pp. 1–7. [Google Scholar]
  40. Porebski, A.; Vandenbroucke, N.; Hamad, D. A fast embedded selection approach for color texture classification using degraded LBP. In Proceedings of the IEEE International Conference on Image Processing Theory, Tools and Applications 2015, Orleans, France, 10–13 November 2015; pp. 254–259. [Google Scholar]
  41. Lee, S.H.; Choi, J.Y.; Ro, Y.M.; Plataniotis, K.N. Local color vector binary patterns from multichannel face images for face recognition. IEEE Trans. Image Process. 2012, 21, 2347–2353. [Google Scholar] [CrossRef] [PubMed]
  42. Porebski, A.; Vandenbroucke, N.; Macaire, L. Haralick feature extraction from LBP images for color texture classification. In Proceedings of the IEEE International Conference on Image Processing Theory, Tools and Applications 2008, Sousse, Tunisia, 23–26 November 2008; pp. 1–8. [Google Scholar]
  43. Ledoux, A.; Losson, O.; Macaire, L. Color local binary patterns: compact descriptors for texture classification. J. Electron. Imaging 2016, 25, 061404. [Google Scholar] [CrossRef]
  44. Bihan, N.L.; Sangwine, S.J. Quaternion principal component analysis of color images. In Proceedings of the IEEE International Conference on Image Processing 2003, Barcelona, Spain, 14–17 September 2003; Volume 1, p. I-809. [Google Scholar]
  45. Chahla, C.; Snoussi, H.; Abdallah, F.; Dornaika, F. Discriminant quaternion local binary pattern embedding for person re-identification through prototype formation and color categorization. Eng. Appl. Artif. Intell. 2017, 58, 27–33. [Google Scholar] [CrossRef]
  46. Lan, R.; Zhou, Y.; Tang, Y.Y.; Chen, C.P. Person reidentification using quaternionic local binary pattern. In Proceedings of the IEEE International Conference on Multimedia and Expo 2014, Chengdu, China, 14–18 July 2014; pp. 1–6. [Google Scholar]
  47. Lan, R.; Lu, H.; Zhou, Y.; Liu, Z.; Luo, X. An LBP encoding scheme jointly using quaternionic representation and angular information. Neural Comput. Appl. 2019, 32, 4317–4323. [Google Scholar] [CrossRef]
  48. Lan, R.; Zhou, Y. Quaternion-Michelson descriptor for color image classification. IEEE Trans. Image Process. 2016, 25, 5281–5292. [Google Scholar] [CrossRef]
  49. Alata, O.; Quintard, L. Is there a best color space for color image characterization or representation based on Multivariate Gaussian Mixture Model? Comput. Vis. Image Underst. 2009, 113, 867–877. [Google Scholar] [CrossRef]
  50. Bello-Cerezo, R.; Bianconi, F.; Fernandez, A.; Gonzalez, E.; DiMaria, F. Experimental comparison of color spaces for material classification. J. Electron. Imaging 2016, 25, 061406. [Google Scholar] [CrossRef]
  51. Chaves-Gonzalez, J.M.; Vega-Rodriguez, M.A.; Gomez-Pulido, J.A.; Sanchez-Perez, J.M. Detecting skin in face recognition systems: A colour spaces study. Digit. Signal Process. 2010, 20, 806–823. [Google Scholar] [CrossRef]
  52. Porebski, A.; Vandenbroucke, N.; Macaire, L.; Hamad, D. A new benchmark image test suite for evaluating colour texture classification schemes. Multimed. Tools Appl. 2014, 70, 543–556. [Google Scholar] [CrossRef]
  53. Charrier, C.; Lebrun, G.; Lezoray, O. Evidential segmentation of microscopic color images with pixel classification posterior probabilities. J. Multimed. 2007, 2, 18607811. [Google Scholar] [CrossRef]
  54. Chindaro, S.; Sirlantzis, K.; Deravi, F. Texture classification system using colour space fusion. Electron. Lett. 2005, 41, 589–590. [Google Scholar] [CrossRef]
  55. Chindaro, S.; Sirlantzis, K.; Fairhurst, M.C. ICA-based multi-colour space texture classification system. Electron. Lett. 2006, 42, 1208–1209. [Google Scholar] [CrossRef]
  56. Mignotte, M. A de-texturing and spatially constrained K-means approach for image segmentation. Pattern Recognit. Lett. 2011, 32, 359–367. [Google Scholar] [CrossRef]
  57. Busin, L.; Vandenbroucke, N.; Macaire, L. Color spaces and image segmentation. In Advances in Imaging and Electron Physics; Elsevier: Amsterdam, The Netherlands, 2009; Volume 151, pp. 65–168. [Google Scholar]
58. Laguzet, F.; Romero, A.; Gouiffès, M.; Lacassagne, L.; Etiemble, D. Color tracking with contextual switching: Real-time implementation on CPU. J. Real-Time Image Process. 2015, 10, 403–422. [Google Scholar] [CrossRef]
  59. Stern, H.; Efros, B. Adaptive color space switching for tracking under varying illumination. Image Vis. Comput. 2005, 23, 353–364. [Google Scholar] [CrossRef]
  60. Vandenbroucke, N.; Busin, L.; Macaire, L. Unsupervised color-image segmentation by multicolor space iterative pixel classification. J. Electron. Imaging 2015, 24, 023032. [Google Scholar] [CrossRef]
  61. Cointault, F.; Guerin, D.; Guillemin, J.P.; Chopinet, B. In field Triticum aestivum ear counting using color texture image analysis. N. Z. J. Crop. Hortic. Sci. 2008, 36, 117–130. [Google Scholar] [CrossRef] [Green Version]
  62. Nanni, L.; Lumini, A. Fusion of color spaces for ear authentication. Pattern Recognit. 2009, 42, 1906–1913. [Google Scholar] [CrossRef]
  63. Porebski, A.; Vandenbroucke, N.; Macaire, L. Supervised texture classification: color space or texture feature selection? Pattern Anal. Appl. 2013, 16, 1–18. [Google Scholar] [CrossRef]
  64. Vandenbroucke, N.; Macaire, L.; Postaire, J.G. Color image segmentation by pixel classification in an adapted hybrid color space. Application to soccer image analysis. Comput. Vis. Image Underst. 2003, 90, 190–216. [Google Scholar] [CrossRef]
  65. Qazi, I.U.H.; Alata, O.; Burie, J.C.; Moussa, A.; Fernandez-Maloigne, C. Choice of a pertinent color space for color texture characterization using parametric spectral analysis. Pattern Recognit. 2011, 44, 16–31. [Google Scholar] [CrossRef]
  66. Dash, M.; Liu, H. Feature selection for classification. Intell. Data Anal. 1997, 1, 131–156. [Google Scholar] [CrossRef]
  67. Liu, H.; Yu, L. Toward integrating feature selection algorithms for classification and clustering. IEEE Trans. Knowl. Data Eng. 2005, 17, 491–502. [Google Scholar]
  68. Tang, J.; Alelyani, S.; Liu, H. Feature Selection for Classification: A Review. In Data Classification: Algorithms and Applications; CRC Press: Cleveland, OH, USA, 2014; pp. 37–64. [Google Scholar]
  69. Ojala, T.; Maenpaa, T.; Pietikainen, M.; Viertola, J.; Kyllonen, J.; Huovinen, S. Outex—New framework for empirical evaluation of texture analysis algorithms. In Proceedings of the 16th International Conference on Pattern Recognition 2002, Quebec City, QC, Canada, 11–15 August 2002; Volume 1, pp. 701–706. [Google Scholar]
  70. Backes, A.R.; Casanova, D.; Bruno, O.M. Color texture analysis based on fractal descriptors. Pattern Recognit. 2012, 45, 1984–1992. [Google Scholar] [CrossRef]
  71. Bianconi, F.; Fernandez, A.; Gonzalez, E.; Saetta, S. Performance analysis of colour descriptors for parquet sorting. Expert Syst. Appl. 2013, 40, 1636–1644. [Google Scholar] [CrossRef]
  72. Bello-Cerezo, R.; Bianconi, F.; Di Maria, F.; Napoletano, P.; Smeraldi, F. Comparative Evaluation of Hand-Crafted Image Descriptors vs. Off-the-Shelf CNN-Based Features for Colour Texture Classification under Ideal and Realistic Conditions. Appl. Sci. 2019, 9, 738. [Google Scholar] [CrossRef] [Green Version]
  73. Lakmann, R. Barktex Benchmark Database of Color Textured Images. Koblenz-Landau University. 1998. Available online: ftp://ftphost.uni-koblenz.de/outgoing/vision/Lakmann/BarkTex (accessed on 23 June 2020).
  74. Chandrashekar, G.; Sahin, F. A survey on feature selection methods. Comput. Electr. Eng. 2014, 40, 16–28. [Google Scholar] [CrossRef]
  75. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
76. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2015, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  77. Arvis, V.; Debain, C.; Berducat, M.; Benassi, A. Generalization of the cooccurrence matrix for colour images: application to colour texture classification. Image Anal. Stereol. 2004, 23, 63–72. [Google Scholar] [CrossRef] [Green Version]
  78. Alvarez, S.; Vanrell, M. Texton theory revisited: A bag-of-words approach to combine textons. Pattern Recognit. 2012, 45, 4312–4325. [Google Scholar] [CrossRef]
  79. Liu, P.; Guo, J.M.; Chamnongthai, K.; Prasetyo, H. Fusion of color histogram and LBP-based features for texture image retrieval and classification. Inf. Sci. 2017, 390, 95–111. [Google Scholar] [CrossRef]
  80. Cusano, C.; Napoletano, P.; Schettini, R. Illuminant invariant descriptors for color texture classification. In Computational Color Imaging; Springer: Berlin/Heidelberg, Germany, 2013; pp. 239–249. [Google Scholar]
  81. Mehta, R.; Egiazarian, K. Dominant Rotated Local Binary Patterns (DRLBP) for texture classification. Pattern Recognit. Lett. 2016, 71, 16–22. [Google Scholar] [CrossRef]
  82. Guo, J.M.; Prasetyo, H.; Lee, H.; Yao, C. Image retrieval using indexed histogram of Void-and-Cluster Block Truncation Coding. Signal Process. 2016, 123, 143–156. [Google Scholar] [CrossRef]
  83. Aptoula, E.; Lefèvre, S. On morphological color texture characterization. In Proceedings of the International Symposium on Mathematical Morphology 2007, Rio de Janeiro, Brazil, 10–13 October 2007; pp. 153–164. [Google Scholar]
  84. Kabbai, L.; Abdellaoui, M.; Douik, A. Image classification by combining local and global features. Vis. Comput. 2019, 35, 679–693. [Google Scholar] [CrossRef]
  85. Guo, Z.; Zhang, D.; Zhang, D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 2010, 19, 1657–1663. [Google Scholar] [PubMed] [Green Version]
86. Martínez, R.A.; Richard, N.; Fernandez, C. Alternative to colour feature classification using colour contrast occurrence matrix. In The International Conference on Quality Control by Artificial Vision; International Society for Optics and Photonics: Le Creusot, France, 2015; p. 953405. [Google Scholar]
  87. Fernandez, A.; Alvarez, M.X.; Bianconi, F. Texture description through histograms of equivalent patterns. J. Math. Imaging Vis. 2013, 45, 76–102. [Google Scholar] [CrossRef] [Green Version]
  88. Hammouche, K.; Losson, O.; Macaire, L. Fuzzy aura matrices for texture classification. Pattern Recognit. 2015, 53, 212–228. [Google Scholar] [CrossRef] [Green Version]
  89. Naresh, Y.G.; Nagendraswamy, H.S. Classification of medicinal plants: An approach using modified LBP with symbolic representation. Neurocomputing 2016, 173, 1789–1797. [Google Scholar] [CrossRef]
  90. Gonçalves, W.N.; da Silva, N.R.; da Fontoura Costa, L.; Bruno, O.M. Texture recognition based on diffusion in networks. Inf. Sci. 2016, 364–365, 51–71. [Google Scholar] [CrossRef]
  91. Wang, J.; Fan, Y.; Li, N. Combining fine texture and coarse color features for color texture classification. J. Electron. Imaging 2017, 26, 1. [Google Scholar]
  92. Ratajczak, R.; Bertrand, S.; Crispim-Junior, C.; Tougne, L. Efficient bark recognition in the wild. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP’19) 2019, Prague, Czech Republic, 25–27 February 2019. [Google Scholar]
  93. Alimoussa, M.; Vandenbroucke, N.; Porebski, A.; Oulad Haj Thami, R.; El Fkihi, S.; Hamad, D. Compact color texture representation by feature selection in multiple color spaces. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP’19), Prague, Czech Republic, 25–27 February 2019. [Google Scholar]
  94. Dernoncourt, D.; Hanczar, B.; Zucker, J.-D. Analysis of feature selection stability on high dimension and small sample data. Comput. Stat. Data Anal. 2014, 71, 681–693. [Google Scholar] [CrossRef]
Figure 1. The Sparse-MCSHS approach.
Figure 2. The Sparse-MCSBS approach.
Figure 3. Illustration of the bin ranking.
Figure 4. The Sparse-MCSHBS approach.
Figure 5. Illustration of the combination of bin and histogram ranking.
Table 1. Summary of image databases used in experiments.
| Dataset Name | Image Size | Nb of Classes | Nb of Training Images | Nb of Testing Images |
|---|---|---|---|---|
| Outex | 128 × 128 | 68 | 680 | 680 |
| USPTex | 128 × 128 | 191 | 1146 | 1146 |
| STex | 128 × 128 | 476 | 3808 | 3808 |
| Parquet | 480 × 480 to 1300 × 1300 | 38 | 114 | 114 |
| NewBarktex | 64 × 64 | 68 | 816 | 816 |
Table 2. Classification results obtained by the proposed LBP-based sparse feature selection strategies in a single and multiple color spaces on the NewBarktex database (bold values represent the best rates obtained with each color space and values in boxes indicate the best rate obtained for each strategy).
| Color Spaces | Without Selection (R / D̂) | Bin Selection (R / D̂) | Histogram Selection (R / D̂) | Combination of Both (R / D̂) |
|---|---|---|---|---|
| RGB | 73.2 / 2304 | 79.0 / 1109 | 81.3 / 1024 | 83.7 / 1016 |
| rgb | 74.4 / 2304 | 74.4 / 2242 | 77.1 / 768 | 77.9 / 1530 |
| I1I2I3 | 71.7 / 2304 | 74.4 / 1198 | 79.5 / 1792 | 80.5 / 1764 |
| HSV | 70.5 / 2304 | 79.5 / 500 | 81.0 / 768 | 81.0 / 768 |
| (wb, rg, by) | 72.1 / 2304 | 77.5 / 836 | 80.6 / 1536 | 82.1 / 1524 |
| HLS | 70.1 / 2304 | 78.0 / 319 | 81.0 / 768 | 81.0 / 768 |
| I-HLS | 72.1 / 2304 | 72.7 / 1813 | 78.8 / 512 | 79.1 / 762 |
| HSI | 71.7 / 2304 | 79.2 / 533 | 79.8 / 768 | 79.8 / 768 |
| YCbCr | 71.6 / 2304 | 77.0 / 370 | 79.3 / 1792 | 82.5 / 1778 |
| Average in single space | 71.9 / 2304 | 76.9 / 991 | 79.8 / 1081 | 80.8 / 1186 |
| Multiple color spaces | 78.2 / 20,736 | 83.6 / 754 | 87.3 / 9472 | 88.4 / 11,985 |

R: rate of well-classified images (%); D̂: dimension of the feature space.
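The feature dimensions in Table 2 follow from the EOCLBP construction: per colour space, 9 LBP histograms of 256 bins each (3 within-component and 6 between-component) give 2304 bins, and 9 colour spaces give 20,736. A minimal NumPy sketch of this construction (an illustrative reimplementation, not the authors' code):

```python
import numpy as np

def lbp_histogram(channel, ref=None):
    """256-bin LBP histogram: the 8 neighbours (radius 1) taken from
    `channel` are thresholded against the centre pixel of `ref`
    (within-component LBP when ref is the same channel,
    between-component LBP when it is another channel)."""
    if ref is None:
        ref = channel
    c = ref[1:-1, 1:-1]  # centre pixels
    # 8 neighbour shifts in a fixed clockwise order
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        n = channel[1 + dy:channel.shape[0] - 1 + dy,
                    1 + dx:channel.shape[1] - 1 + dx]
        code += (n >= c).astype(np.int32) << bit
    return np.bincount(code.ravel(), minlength=256)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3))  # toy colour texture patch

# 3 within-component + 6 between-component histograms -> 9 x 256 = 2304 bins
pairs = [(i, j) for i in range(3) for j in range(3)]
feature = np.concatenate([lbp_histogram(img[:, :, i], img[:, :, j])
                          for i, j in pairs])
print(feature.shape)  # (2304,), the per-colour-space dimension of Table 2
```

Concatenating this 2304-dimensional vector over the 9 considered colour spaces yields the 20,736-dimensional descriptor of the "Multiple color spaces" row before any selection is applied.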
Table 3. Comparison between the rates of well-classified image reached with the Outex-TC-00013 set and the 1-NN classifier. The accuracy values in italics were obtained with other methods that we implemented.
| Features | Color Space | Accuracy |
|---|---|---|
| RSCCM + MCSFS [52] | 28 color spaces | 96.6 |
| EOCLBP + Sparse-MCSHBS | 9 color spaces | 95.7 |
| EOCLBP + Sparse-MCSHS | 9 color spaces | 95.6 |
| EOCLBP + MCSHS-ICS [16] | 9 color spaces | 95.6 |
| 3D Color histogram [31] | HSV | 95.4 |
| EOCLBP + Sparse-MCSBS | 9 color spaces | 95.2 |
| 3D Color histogram [65] | I-HLS | 94.5 |
| Haralick features [77] | RGB | 94.1 |
| EOCLBP (with selection method) [18] | RGB | 93.4 |
| EOCLBP (with selection method) [17] | RGB | 92.9 |
| EOCLBP + MCSBS-Guo [16] | 9 color spaces | 92.9 |
| RSCCM (with selection method) [63] | HLS | 92.5 |
| Between color component LBP histogram [31] | RGB | 92.5 |
| Quaternion-Michelson Descriptor [48] | RGB | 91.3 |
| Texton [78] | RGB | 90.3 |
| Combine color and LBP-based features [79] | RGB | 90.2 |
| Intensity-Color Contrast Descriptor [80] | RGB | 89.3 |
| DRLBP [81] | RGB | 89.0 |
| Autoregressive models and 3D color histogram [65] | I-HLS | 88.9 |
| Halftoning Local Derivative Pattern and Color Histogram [82] | RGB | 88.2 |
| Autoregressive models [83] | L\*a\*b\* | 88.0 |
| Within color component LBP histogram [31] | RGB | 87.8 |
| CWEUL LTP [84] | RGB | 87.4 |
| Mix color order LBP histogram [43] | RGB | 87.1 |
| Color angles LBP [41] | RGB | 86.2 |
| LBP and local color contrast [33] | RGB | 85.3 |
| CLBP (Completed LBP) [85] | RGB | 84.4 |
| Color contrast occurrence matrix [86] | RGB | 82.6 |
| Soft color descriptors [50] | HSV | 81.4 |
| Histograms of equivalent patterns [87] | RGB | 80.9 |
| Fuzzy aura matrices [88] | RGB | 80.2 |
| Pretrained AlexNet convolutional neural network | RGB | 78.5 |
| Pretrained GoogleNet convolutional neural network | RGB | 77.9 |
| Modified LBP [89] | RGB | 67.3 |
RSCCM: Reduced Size Chromatic Co-occurrence Matrices, MCSFS: Multi Color Space Feature Selection, DRLBP: Dominant Rotated LBP, CWEUL LTP: Color Wavelet Elliptical Upper and Lower Local Ternary Pattern.
Table 4. Comparison between the rates of well-classified images reached with the USPTex database and the 1-NN classifier. The accuracy values in italics were obtained with other methods that we implemented.
| Features | Color Space | Accuracy |
|---|---|---|
| EOCLBP + Sparse-MCSHBS | 9 color spaces | 98.1 |
| EOCLBP + MCSHS-ASL [16] | 9 color spaces | 97.6 |
| EOCLBP + Sparse-MCSHS | 9 color spaces | 97.4 |
| EOCLBP + MCSBS-Guo [16] | 9 color spaces | 97.3 |
| EOCLBP + Sparse-MCSBS | 9 color spaces | 94.8 |
| Quaternion-Michelson Descriptor [48] | RGB | 94.2 |
| Halftoning Local Derivative Pattern and Color Histogram [82] | RGB | 93.9 |
| Quaternionic local angular binary pattern [47] | RGB | 93.8 |
| Pretrained GoogleNet convolutional neural network | RGB | 92.2 |
| DRLBP [81] | RGB | 89.4 |
| Color angles [41] | RGB | 88.8 |
| Local multi-resolution patterns [90] | Luminance | 86.7 |
| Mix color order LBP histogram [43] | RGB | 84.2 |
| LBP and local color contrast [33] | RGB | 82.9 |
| Pretrained AlexNet convolutional neural network | RGB | 78.3 |
| CLBP [85] | RGB | 72.3 |
| Soft color descriptors [50] | L\*a\*b\* | 58.0 |
Table 5. Comparison between the rates of well-classified images reached with the STex database and the 1-NN classifier. The accuracy values in italics were obtained with other methods that we implemented.
| Features | Color Space | Accuracy |
|---|---|---|
| EOCLBP + Sparse-MCSHBS | 9 color spaces | 98.1 |
| EOCLBP + Sparse-MCSHS | 9 color spaces | 96.7 |
| EOCLBP + MCSBS-Guo [16] | 9 color spaces | 96.7 |
| EOCLBP + MCSHS-ASL [16] | 9 color spaces | 96.1 |
| EOCLBP + Sparse-MCSBS | 9 color spaces | 94.7 |
| Pretrained GoogleNet convolutional neural network | RGB | 92.9 |
| Pretrained AlexNet convolutional neural network | RGB | 90.8 |
| DRLBP [81] | RGB | 89.4 |
| Color contrast occurrence matrix [86] | RGB | 76.7 |
| Soft color descriptors [50] | L\*a\*b\* | 55.3 |
Table 6. Comparison between the rates of well-classified images reached with the Parquet database and the 1-NN classifier. The accuracy values in italics were obtained with other methods that we implemented.
| Features | Color Space | Accuracy |
|---|---|---|
| Pretrained GoogleNet convolutional neural network | RGB | 92.9 |
| EOCLBP + Sparse-MCSHBS | 9 color spaces | 83.3 |
| EOCLBP + Sparse-MCSHS | 9 color spaces | 82.5 |
| EOCLBP + Sparse-MCSBS | 9 color spaces | 79.8 |
| EOCLBP + MCSBS-Guo [16] | 9 color spaces | 79.0 |
| EOCLBP + MCSHS-ICS [16] | 9 color spaces | 75.4 |
| EOCLBP (with selection method) [17] | RGB | 71.9 |
| Fisher separation criteria-based learning LBP [15] | RGB | 68.4 |
| Pretrained AlexNet convolutional neural network | RGB | 68.4 |
Table 7. Comparison between the rates of well-classified images reached with the NewBarktex set and the 1-NN classifier. The accuracy values in italics were obtained with other methods that we implemented, while the underlined values indicate the results extracted from [43].
| Features | Color Space | Accuracy |
|---|---|---|
| Pretrained AlexNet convolutional neural network | RGB | 90.6 |
| EOCLBP + Sparse-MCSHBS | 9 color spaces | 88.4 |
| EOCLBP + MCSHS-ICS [16] | 9 color spaces | 88.0 |
| EOCLBP + MCSBS-Guo [16] | 9 color spaces | 87.8 |
| EOCLBP + Sparse-MCSHS | 9 color spaces | 87.3 |
| Completed local binary count [91] | RGB | 84.3 |
| EOCLBP + Sparse-MCSBS | 9 color spaces | 83.6 |
| Pretrained GoogleNet convolutional neural network | RGB | 82.8 |
| EOCLBP (with selection method) [17] | RGB | 81.4 |
| EOCLBP (with selection method) [18] | RGB | 81.4 |
| LBP and local color contrast [41] | RGB | 80.2 |
| Between color component LBP [37] | RGB | 79.9 |
| Light combination of LBP [92] | HSV | 78.8 |
| Mix color order LBP histogram [43] | RGB | 77.7 |
| CWEUL LTP [84] | RGB | 76.6 |
| RSCCM + MCSFS [52] | 20 color spaces | 75.9 |
| CLBP [85] | RGB | 72.8 |
| Color angles [33] | RGB | 71.0 |
| DRLBP [81] | RGB | 61.4 |
| Color histograms [31] | RGB | 58.6 |
Table 8. Relative ranking of the considered approaches and average accuracy reached over the five databases.
| Approach | Outex | USPTex | STex | Parquet | NewBarkTex | Average Accuracy |
|---|---|---|---|---|---|---|
| EOCLBP + Sparse-MCSHBS | 1 | 1 | 1 | 2 | 2 | 92.7 |
| EOCLBP + Sparse-MCSHS | 2 | 2 | 2 | 3 | 3 | 91.9 |
| EOCLBP + Sparse-MCSBS | 3 | 3 | 3 | 4 | 4 | 89.6 |
| Pretrained GoogleNet CNN | 5 | 4 | 4 | 1 | 5 | 87.7 |
| Pretrained AlexNet CNN | 4 | 5 | 5 | 5 | 1 | 81.3 |

The Outex to NewBarkTex columns give the relative ranking of each approach on the corresponding database.
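All accuracies in Tables 3–8 are rates of well-classified images obtained with a 1-NN classifier: each test image is assigned the class of its single nearest training image in the selected feature space. A minimal sketch of this evaluation (the L1 distance used here is an assumption for illustration; the toy data is synthetic):

```python
import numpy as np

def nearest_neighbour_accuracy(train_X, train_y, test_X, test_y):
    """Classify each test feature vector by the label of its nearest
    training vector and return the rate of well-classified images (%)."""
    correct = 0
    for x, y in zip(test_X, test_y):
        # L1 distance from this test sample to every training sample
        d = np.abs(train_X - x).sum(axis=1)
        correct += int(train_y[int(np.argmin(d))] == y)
    return 100.0 * correct / len(test_y)

# toy example: two well-separated classes in an 8-dimensional feature space
rng = np.random.default_rng(1)
train_X = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(10, 1, (20, 8))])
train_y = np.array([0] * 20 + [1] * 20)
test_X = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(10, 1, (5, 8))])
test_y = np.array([0] * 5 + [1] * 5)
print(nearest_neighbour_accuracy(train_X, train_y, test_X, test_y))  # 100.0
```

In the experiments above, `train_X` and `test_X` would hold the selected LBP bins or histograms for the training and testing image splits of Table 1.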

Porebski, A.; Truong Hoang, V.; Vandenbroucke, N.; Hamad, D. Combination of LBP Bin and Histogram Selections for Color Texture Classification. J. Imaging 2020, 6, 53. https://doi.org/10.3390/jimaging6060053
