Color Texture Classification

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (15 March 2022) | Viewed by 26132

Special Issue Editors


Guest Editor
LISIC laboratory, Université du Littoral Côte d’Opale, 50 rue Ferdinand Buisson, 62228 Calais CEDEX, France
Interests: color representation; color and hyperspectral imaging; dimensionality reduction; feature selection; texture classification; image segmentation; machine vision applications

Guest Editor
LISIC laboratory, Université du Littoral Côte d’Opale, 50 rue Ferdinand Buisson, 62228 Calais CEDEX, France
Interests: color texture classification; feature selection; color representation; hyperspectral images

Special Issue Information

Dear Colleagues,

Texture and color are salient visual cues in human perception, and color textures provide essential information for object recognition and scene understanding. Color texture analysis is therefore widely used in many imaging applications, and color texture classification remains an active research topic that has seen major advances in recent decades with the emergence of deep learning. In this context, color texture descriptors have evolved from "handcrafted" descriptors, which compute color texture features from manually defined models, to "learned" descriptors, which are designed directly from image data. Well-known classifiers and their combinations have given way to convolutional neural networks (CNNs) and pre-trained CNNs. Although these deep learning and transfer learning models achieve impressive performance, the representations they generate can be difficult to interpret, and they suffer from their dependence on training data.

When the generated color texture features produce high-dimensional representations, bag-of-words strategies, feature selection approaches, or pooling stages are needed to reduce their dimensionality. The key challenge of color texture classification is to ensure high classification accuracy with low computation times despite a potentially large number of texture classes, high intra-class and low inter-class appearance variations, and limited training data.
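As a concrete illustration of the bag-of-words pooling mentioned above, the sketch below quantizes local descriptors against a codebook and pools them into a fixed-length histogram. It is a minimal numpy-only sketch on hypothetical random data, not the implementation of any particular descriptor:

```python
import numpy as np

def bag_of_words(descriptors, codebook):
    """Quantize local descriptors against a codebook and pool them
    into a fixed-length, L1-normalized histogram (one bin per word)."""
    # Squared Euclidean distance between every descriptor and every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)  # index of the nearest codeword
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.standard_normal((8, 16))      # 8 visual words, 16-D (hypothetical)
descriptors = rng.standard_normal((100, 16)) # 100 local texture features
signature = bag_of_words(descriptors, codebook)  # compact 8-D representation
```

However many local descriptors an image yields, the pooled signature has a fixed size, which is what makes such representations usable by standard classifiers.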

The choice or combination of texture descriptors, color spaces, and classifiers; the integration of handcrafted descriptors into the design of deep learning models; and the suitable adjustment of their parameters to produce interpretable, flexible, robust, invariant, and compact descriptors for color texture classification all remain open problems.

This Special Issue aims to present recent theoretical and practical advances in the field of color texture classification for researchers and practitioners, including new approaches, challenging applications, and future perspectives. Original contributions, state-of-the-art surveys, and comprehensive comparative reviews are welcome.

Dr. Nicolas Vandenbroucke
Dr. Alice Porebski
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • color texture representation
  • color spaces
  • hand-designed descriptors
  • deep learning and hybrid approaches
  • dimensionality reduction
  • comparative evaluations and benchmarks

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

33 pages, 6334 KiB  
Article
On the Quantification of Visual Texture Complexity
by Fereshteh Mirjalili and Jon Yngve Hardeberg
J. Imaging 2022, 8(9), 248; https://doi.org/10.3390/jimaging8090248 - 10 Sep 2022
Cited by 9 | Viewed by 2974
Abstract
Complexity is one of the major attributes of the visual perception of texture. However, very little is known about how humans visually interpret texture complexity. A psychophysical experiment was conducted to visually quantify the seven texture attributes of a series of textile fabrics: complexity, color variation, randomness, strongness, regularity, repetitiveness, and homogeneity. It was found that the observers could discriminate between the textures with low and high complexity using some high-level visual cues such as randomness, color variation, strongness, etc. The results of principal component analysis (PCA) on the visual scores of the above attributes suggest that complexity and homogeneity could be essentially the underlying attributes of the same visual texture dimension, with complexity at the negative extreme and homogeneity at the positive extreme of this dimension. We chose to call this dimension visual texture complexity. Several texture measures including the first-order image statistics, co-occurrence matrix, local binary pattern, and Gabor features were computed for images of the textiles in sRGB, and four luminance-chrominance color spaces (i.e., HSV, YCbCr, Ohta’s I1I2I3, and CIELAB). The relationships between the visually quantified texture complexity of the textiles and the corresponding texture measures of the images were investigated. Analyzing the relationships showed that simple standard deviation of the image luminance channel had a strong correlation with the corresponding visual ratings of texture complexity in all five color spaces. Standard deviation of the energy of the image after convolving with an appropriate Gabor filter and entropy of the co-occurrence matrix, both computed for the image luminance channel, also showed high correlations with the visual data. In this comparison, sRGB, YCbCr, and HSV always outperformed the I1I2I3 and CIELAB color spaces. 
The highest correlations between the visual data and the corresponding image texture features in the luminance-chrominance color spaces were always obtained for the luminance channel of the images, and one of the two chrominance channels always performed better than the other. This result indicates that the arrangement of the image texture elements that impacts the observer’s perception of visual texture complexity cannot be represented properly by the chrominance channels. This must be carefully considered when choosing an image channel to quantify the visual texture complexity. Additionally, the good performance of the luminance channel in the five studied color spaces proves that variation in the luminance of the texture, or, as one might call it, the luminance contrast, plays a crucial role in creating visual texture complexity.
(This article belongs to the Special Issue Color Texture Classification)
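The two luminance-channel measures the abstract reports as most correlated with perceived complexity (standard deviation of luminance, and entropy of a gray-level co-occurrence matrix) can be sketched as follows. This is a simplified numpy-only sketch on a hypothetical random image, not the authors' code; the Rec. 601 luma weights and the 8-level quantization are illustrative choices:

```python
import numpy as np

def luminance(rgb):
    """Rec. 601 luma as a simple luminance channel (values in [0, 1])."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def cooccurrence_entropy(gray, levels=8):
    """Entropy (bits) of the horizontal gray-level co-occurrence matrix."""
    q = np.minimum((gray * levels).astype(int), levels - 1)  # quantize levels
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # right neighbors
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return -(nz * np.log2(nz)).sum()

rng = np.random.default_rng(1)
img = rng.random((64, 64, 3))               # hypothetical color texture
lum = luminance(img)
complexity_std = lum.std()                  # measure 1: std of luminance
complexity_ent = cooccurrence_entropy(lum)  # measure 2: GLCM entropy
```

Both measures are computed on the luminance channel only, consistent with the paper's finding that the chrominance channels contribute little to perceived complexity.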

20 pages, 22310 KiB  
Article
Fuzzy Color Aura Matrices for Texture Image Segmentation
by Zohra Haliche, Kamal Hammouche, Olivier Losson and Ludovic Macaire
J. Imaging 2022, 8(9), 244; https://doi.org/10.3390/jimaging8090244 - 8 Sep 2022
Cited by 3 | Viewed by 2093
Abstract
Fuzzy gray-level aura matrices have been developed from fuzzy set theory and the aura concept to characterize texture images. They have proven to be powerful descriptors for color texture classification. However, using them for color texture segmentation is difficult because of their high memory and computation requirements. To overcome this problem, we propose to extend fuzzy gray-level aura matrices to fuzzy color aura matrices, which would allow us to apply them to color texture image segmentation. Unlike the marginal approach that requires one fuzzy gray-level aura matrix for each color channel, a single fuzzy color aura matrix is required to locally characterize the interactions between colors of neighboring pixels. Furthermore, all works about fuzzy gray-level aura matrices consider the same neighborhood function for each site. Another contribution of this paper is to define an adaptive neighborhood function based on information about neighboring sites provided by a pre-segmentation method. For this purpose, we propose a modified simple linear iterative clustering algorithm that incorporates a regional feature in order to partition the image into superpixels. All in all, the proposed color texture image segmentation boils down to a superpixel classification using a simple supervised classifier, each superpixel being characterized by a fuzzy color aura matrix. Experimental results on the Prague texture segmentation benchmark show that our method outperforms the classical state-of-the-art supervised segmentation methods and is similar to recent methods based on deep learning.
(This article belongs to the Special Issue Color Texture Classification)
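To fix the aura idea, the sketch below builds a crisp (non-fuzzy) gray-level aura matrix over a 4-neighborhood: entry (g, g') counts how many 4-neighbors of sites with level g carry level g'. The fuzzy variant studied in the paper replaces this hard level assignment with membership degrees; the crisp version here is only an illustrative simplification:

```python
import numpy as np

def aura_matrix(gray, levels=4):
    """Crisp gray-level aura matrix over the 4-neighborhood.
    A[g, g'] = number of 4-neighbors of level-g sites that have level g'."""
    q = np.minimum((gray * levels).astype(int), levels - 1)  # hard quantization
    A = np.zeros((levels, levels))
    # horizontal neighbor pairs, then vertical neighbor pairs
    for a, b in ((q[:, :-1], q[:, 1:]), (q[:-1, :], q[1:, :])):
        np.add.at(A, (a.ravel(), b.ravel()), 1)  # neighbor to the right/below
        np.add.at(A, (b.ravel(), a.ravel()), 1)  # symmetric direction
    return A

rng = np.random.default_rng(2)
gray = rng.random((32, 32))  # hypothetical single-channel texture
A = aura_matrix(gray)
```

With both directions accumulated the matrix is symmetric, and its total equals the number of ordered 4-neighbor pairs in the image.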

29 pages, 21933 KiB  
Article
Compact Hybrid Multi-Color Space Descriptor Using Clustering-Based Feature Selection for Texture Classification
by Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Sanaa El Fkihi and Rachid Oulad Haj Thami
J. Imaging 2022, 8(8), 217; https://doi.org/10.3390/jimaging8080217 - 8 Aug 2022
Cited by 5 | Viewed by 2383
Abstract
Color texture classification aims to recognize patterns by the analysis of their colors and their textures. This process requires using descriptors to represent and discriminate the different texture classes. In most traditional approaches, these descriptors are used with a predefined setting of their parameters and computed from images coded in a chosen color space. The prior choice of a color space, a descriptor and its setting suited to a given application is a crucial but difficult problem that strongly impacts the classification results. To overcome this problem, this paper proposes a color texture representation that simultaneously takes into account the properties of several settings from different descriptors computed from images coded in multiple color spaces. Since the number of color texture features generated from this representation is high, a dimensionality reduction scheme by clustering-based sequential feature selection is applied to provide a compact hybrid multi-color space (CHMCS) descriptor. The experimental results carried out on five benchmark color texture databases with five color spaces and manifold settings of two texture descriptors show that combining different configurations always improves the accuracy compared to a predetermined configuration. On average, the CHMCS representation achieves 94.16% accuracy and outperforms deep learning networks and handcrafted color texture descriptors by over 5%, especially when the dataset is small.
(This article belongs to the Special Issue Color Texture Classification)
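The general idea behind clustering-based feature selection (group redundant features, keep one representative per group) can be sketched with a simple greedy correlation grouping. This is an illustrative stand-in, not the authors' exact algorithm; the 0.9 threshold and the variance-based choice of representative are assumptions:

```python
import numpy as np

def select_by_correlation_clusters(X, threshold=0.9):
    """Greedy sketch: group features whose absolute pairwise correlation
    exceeds `threshold` and keep the highest-variance one per group."""
    corr = np.abs(np.corrcoef(X, rowvar=False))  # feature-feature correlation
    var = X.var(axis=0)
    unassigned = set(range(X.shape[1]))
    kept = []
    while unassigned:
        seed = max(unassigned, key=lambda j: var[j])  # cluster representative
        cluster = {j for j in unassigned if corr[seed, j] > threshold}
        kept.append(seed)
        unassigned -= cluster
    return sorted(kept)

# 3 informative features plus 3 near-duplicates (hypothetical data)
rng = np.random.default_rng(2)
base = rng.standard_normal((200, 3))
X = np.hstack([base, base + 0.01 * rng.standard_normal((200, 3))])
kept = select_by_correlation_clusters(X)  # near-duplicates collapse
```

Each near-duplicate pair is merged into one cluster, so only three of the six features survive, which is exactly the compaction effect a hybrid multi-color space representation needs.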

19 pages, 10872 KiB  
Article
Multi-View Learning for Material Classification
by Borhan Uddin Sumon, Damien Muselet, Sixiang Xu and Alain Trémeau
J. Imaging 2022, 8(7), 186; https://doi.org/10.3390/jimaging8070186 - 7 Jul 2022
Cited by 2 | Viewed by 2769
Abstract
Material classification is similar to texture classification and consists in predicting the material class of a surface in a color image, such as wood, metal, water, wool, or ceramic. It is very challenging because of the intra-class variability. Indeed, the visual appearance of a material is very sensitive to the acquisition conditions such as viewpoint or lighting conditions. Recent studies show that deep convolutional neural networks (CNNs) clearly outperform hand-crafted features in this context but suffer from a lack of data for training the models. In this paper, we propose two contributions to cope with this problem. First, we provide a new material dataset with a large range of acquisition conditions so that CNNs trained on these data can provide features that can adapt to the diverse appearances of the material samples encountered in the real world. Second, we leverage recent advances in multi-view learning methods to propose an original architecture designed to extract and combine features from several views of a single sample. We show that such multi-view CNNs significantly improve the performance of the classical alternatives for material classification.
(This article belongs to the Special Issue Color Texture Classification)
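The cross-view combination step in a multi-view architecture is typically an order-invariant pooling of per-view feature vectors: each view is encoded separately (by a shared CNN in practice) and the resulting vectors are fused element-wise. The numpy sketch below stands in for the CNN features with random vectors; the max/mean pooling choices are common options, not necessarily the paper's exact fusion:

```python
import numpy as np

def fuse_views(view_features, mode="max"):
    """Order-invariant fusion of per-view feature vectors into one
    sample descriptor, as in multi-view CNN pipelines."""
    F = np.stack(view_features)  # (n_views, n_features)
    return F.max(axis=0) if mode == "max" else F.mean(axis=0)

rng = np.random.default_rng(3)
views = [rng.standard_normal(128) for _ in range(4)]  # 4 views of one sample
desc = fuse_views(views)                              # single 128-D descriptor
```

Because the pooling is element-wise across views, the descriptor does not depend on the order in which the views are presented, which matters when acquisition conditions vary freely.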

13 pages, 2216 KiB  
Article
Comparison of Different Image Data Augmentation Approaches
by Loris Nanni, Michelangelo Paci, Sheryl Brahnam and Alessandra Lumini
J. Imaging 2021, 7(12), 254; https://doi.org/10.3390/jimaging7120254 - 27 Nov 2021
Cited by 59 | Viewed by 8532
Abstract
Convolutional neural networks (CNNs) have gained prominence in the research literature on image classification over the last decade. One shortcoming of CNNs, however, is their lack of generalizability and tendency to overfit when presented with small training sets. Augmentation directly confronts this problem by generating new data points providing additional information. In this paper, we investigate the performance of more than ten different sets of data augmentation methods, with two novel approaches proposed here: one based on the discrete wavelet transform and the other on the constant-Q Gabor transform. Pretrained ResNet50 networks are finetuned on each augmentation method. Combinations of these networks are evaluated and compared across four benchmark data sets of images representing diverse problems and collected by instruments that capture information at different scales: a virus data set, a bark data set, a portrait data set, and a LIGO glitches data set. Experiments demonstrate the superiority of this approach. The best ensemble proposed in this work achieves state-of-the-art (or comparable) performance across all four data sets. This result shows that varying data augmentation is a feasible way of building an ensemble of classifiers for image classification.
(This article belongs to the Special Issue Color Texture Classification)
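A wavelet-domain augmentation can be sketched by transforming each channel, rescaling the detail subbands, and inverting the transform. The one-level 2-D Haar transform and the fixed detail-scale factor below are illustrative choices, not necessarily the authors' exact DWT variant:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a, b = x[0::2, :], x[1::2, :]
    lo, hi = (a + b) / 2.0, (a - b) / 2.0  # pair rows
    c = {}
    for name, band in (("l", lo), ("h", hi)):
        p, q = band[:, 0::2], band[:, 1::2]  # pair columns
        c[name + "l"], c[name + "h"] = (p + q) / 2.0, (p - q) / 2.0
    return c["ll"], c["lh"], c["hl"], c["hh"]

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    def cols(lo, hi):
        out = np.empty((lo.shape[0], lo.shape[1] * 2))
        out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
        return out
    lo, hi = cols(ll, lh), cols(hl, hh)
    out = np.empty((lo.shape[0] * 2, lo.shape[1]))
    out[0::2, :], out[1::2, :] = lo + hi, lo - hi
    return out

def dwt_augment(img, detail_scale=1.5):
    """Augmentation sketch: amplify/attenuate the detail subbands of
    each channel, then reconstruct (scale 1.0 reproduces the input)."""
    out = np.empty_like(img)
    for ch in range(img.shape[2]):
        ll, lh, hl, hh = haar2(img[..., ch])
        out[..., ch] = ihaar2(ll, detail_scale * lh,
                              detail_scale * hl, detail_scale * hh)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(4)
img = rng.random((32, 32, 3))  # hypothetical training image in [0, 1]
aug = dwt_augment(img)         # texture detail emphasized, layout preserved
```

Scaling only the detail subbands perturbs fine texture while leaving the coarse layout intact, which is the appeal of wavelet-domain augmentation over plain noise injection.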

25 pages, 5028 KiB  
Article
Colour and Texture Descriptors for Visual Recognition: A Historical Overview
by Francesco Bianconi, Antonio Fernández, Fabrizio Smeraldi and Giulia Pascoletti
J. Imaging 2021, 7(11), 245; https://doi.org/10.3390/jimaging7110245 - 19 Nov 2021
Cited by 25 | Viewed by 5265
Abstract
Colour and texture are two perceptual stimuli that determine, to a great extent, the appearance of objects, materials and scenes. The ability to process texture and colour is a fundamental skill in humans as well as in animals; therefore, reproducing such capacity in artificial (‘intelligent’) systems has attracted considerable research attention since the early 70s. Whereas the main approach to the problem was essentially theory-driven (‘hand-crafted’) up to not long ago, in recent years the focus has moved towards data-driven solutions (deep learning). In this overview we retrace the key ideas and methods that have accompanied the evolution of colour and texture analysis over the last five decades, from the ‘early years’ to convolutional networks. Specifically, we review geometric, differential, statistical and rank-based approaches. Advantages and disadvantages of traditional methods vs. deep learning are also critically discussed, including a perspective on which traditional methods have already been subsumed by deep learning or would be feasible to integrate in a data-driven approach.
(This article belongs to the Special Issue Color Texture Classification)
