Classification is a crucial task for reintroducing end-of-life fabrics as raw materials in a circular process, thus reducing reliance on dyeing processes. In this context, this review explores the evolution of automated and semi-automated colour classification methods, emphasizing the transition from deterministic techniques to advanced methods, with a focus on machine learning, deep learning, and particularly Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). These technologies show potential for improving accuracy and efficiency. The results highlight the need for enriched datasets, deeper AI integration into industrial processes, and alignment with circular economy objectives to enhance sustainability without compromising industrial performance. Tested against a case study, the different architectures confirmed the state-of-the-art findings, demonstrating that they are effective for classification, with the best performance reached by the CNN-based method, which outperforms the other methods in most colour families with an average accuracy of 86.1%, indicating its adaptability for this task. The adoption of the proposed AI-based colour-classification roadmap could be effective in reducing dyeing operations, lowering costs, and improving sorting efficiency for textile SMEs.
The textile industry represents excellence for the European economy. It is expected to grow further, driven by the rise in fast fashion and the affordability of products, which encourage consumers to purchase and own larger quantities of clothing. However, textile production significantly impacts the environment due to overconsumption and manufacturing practices that involve chemicals for fabric production, treatment, and dyeing. In the present scenario, the industry’s future relies on devising innovative ways to turn discarded textiles into inputs for new manufacturing techniques [1]. Unfortunately, adopting green practices can be difficult, particularly for small- and medium-sized businesses (SMEs), as they call for complete control over a product’s lifetime, from design and manufacture to distribution and end-of-life management. Due to their high expenses and the requirement for large investments that usually only pay off in the medium to long term, these organizations frequently find it difficult to make such improvements [1]. With innovative businesses actively addressing their impact on ecosystems and society, sustainable businesses have gained traction in recent years. Adopting sustainability principles offers strategic growth potential, but it also comes with costs, investments, and significant creative initiatives that might not yield immediate financial rewards. With this strategy, businesses may launch products with a reduced environmental effect without sacrificing usability or aesthetics, and they can recover materials and energy at the end of a product’s lifecycle. The textile and clothing industry is no exception: leading companies are transitioning from traditional production methods to a systemic approach incorporating circular economy principles. This includes efficient resource utilization, reliance on renewable energy, minimizing or eliminating waste, and effective product end-of-life management. These initiatives are aligned with the concept of Product Lifecycle Thinking, which has recently been acknowledged as a key strategy by the European Community (specifically by the Directorate-General for the Environment, supported by the European Environment Agency). This approach is part of a broader framework for sustainable development, designed to foster an integrated environmental policy across European states. In this context, the European Union came forth with the “Europe 2020 Strategy”, intended to build up inclusive economic growth, whose three main priorities are smart growth (investment in education, knowledge, research, and innovation), sustainable growth (greener and more resource-efficient economies), and inclusive growth (employment, social and regional cohesion, and efforts to combat poverty). Wool textile recycling represents a viable option for waste minimization and the enhancement of a circular economy in the textile industry, thus representing a step towards excellence in textile manufacturing. Since dyeing is one of the most significant processes in terms of anthropogenic negative environmental effects, recycling enables the provision of raw materials that may not require the use of virgin wool or the need to colour the garments using dyeing processes. A standard procedure for businesses that recycle wool is to look through their warehouse for appropriately coloured materials based on the customer’s selection.
These are chosen based on their colour resemblance to the desired one as well as their understanding of the recycling procedure that follows. A fabric’s colour may alter slightly as a result of this treatment. Therefore, the operators of the company who are required to classify the fabrics must understand how the various colours blend to form a new one. Because the selection process mainly depends on the operators’ judgement, it might vary greatly based on the operator’s level of competence, colour perception, and fatigue. Additionally, the productivity of this approach is low and inconsistent. Consequently, automatic or semi-automatic digital methods and tools for effectively classifying recycled textiles by colour are crucial for enhancing recycling efficiency, particularly as they facilitate the sorting of materials for reuse or repurposing.
Early works on this topic (1997–2015) were proposed in the literature, primarily exploring the use of deterministic methods, mainly based on the Kubelka–Munk (K-M) theory [1,2], as well as traditional machine learning (ML) algorithms and Artificial Neural Networks. Given the computational and technological constraints of the time, these methods had the twofold advantage of improving the visual human-based classification and introducing the seed of current AI-based methods. Deterministic approaches were mostly used to predict the spectral reflectance of a mixture of components (colourants) described by the absorption (K) and scattering (S) coefficients. Several experiments on the tristimulus-matching algorithm based on the Stearns–Noechel model [3] (and its implementations [4,5,6]) made it possible to consistently forecast the formula for matching a given colour standard by combining pre-dyed fibres with CMC colour differences below 0.8. In [7], a colour-matching system based on Friele’s hypothesis was created. A methodology for merging and optimizing colour and texture features is offered in [8], and the adoption of methods based on imaging-based colorimetry is suggested in [9] as a partial solution to these problems. Concerning early machine-learning techniques, support vector machines (SVMs) and k-nearest neighbours (k-NN) were frequently employed [10]. Features used in these methods were manually generated and extracted from colour spaces such as RGB, HSV, and LAB. Among the attributes employed in feature selection were histograms, texture-based disparities, and mean intensities. These models were less adaptive than deep learning, but they performed well on smaller datasets and were computationally inexpensive. However, the effectiveness of older approaches can be significantly impacted by changes in lighting, camera quality, and background conditions, requiring extensive pre-processing to normalize images. Scalable or real-time deployment is hindered by these limitations [11]. Because of their versatility and hierarchical learning ability, neural networks have dominated developments in fabric classification. In fact, [12] suggests a completely automated, real-time colour classification method for recycling wool apparel. By combining a feedforward backpropagation Artificial Neural Network (FFBP ANN)-based approach with a statistical technique known as the matrix approach of a self-organizing feature map (SOFM), the tool correctly identifies the clothing by adhering to the selection criteria supplied by human expertise. Since only a few investigations are available in the literature [1], the colours used to manufacture mélange fabrics can be grouped relying on a handful of studies whose results can be applied directly or inferred [13]. A method for real-time classification of both mélange and solid colour woollen fabrics is proposed in [14] to minimize the processing time and the subjectivity of the classification.
Although these methods are computationally efficient for small datasets (a good approach for SMEs operating in several Textile Districts), they perform poorly when applied to large datasets or multi-class classification tasks. Adding new classes or expanding datasets often requires retraining or redesigning feature extraction methods. Deep learning methods are less vulnerable to variations in background, texture, and illumination because they can generalize from large and diverse datasets. Using pre-trained models allows for quick adaptation to new datasets or classes without retraining models from scratch, which can save a significant amount of time and resources [15]. As a result, AI systems can be used as an alternative to traditional methods, especially under difficult conditions. Their real-time colour processing and classification capabilities are ideal for industrial applications such as automated quality control. In other words, by combining high-resolution cameras with artificial intelligence (AI) techniques like machine learning (ML) and deep learning (DL), the automatic and accurate analysis of fabric colours is enabled. These solutions are useful in industries where efficiency and dependability are crucial, such as textile production, quality control, and recycling [16].
Based on the above, there still is a theoretical research gap on the topic of colour classification of fabrics. Therefore, the main Research Question in this field is the following:
Can contemporary deep-learning models outperform traditional ANN approaches in colour-based classification of recycled textiles, and what roadmap enables SMEs to adopt them?
Accordingly, this paper’s primary goal is to present a thorough analysis of current AI-based techniques that are quickly displacing both deterministic and conventional Artificial Neural Network (ANN)-based methods, proposing advantages of contemporary AI techniques, such as their adaptability, scalability, and capacity to handle complicated datasets, by examining recent developments. Moreover, this paper aims to provide a structured roadmap framework for industry adoption of colour classification systems within their sustainable manufacturing cycle.
The remainder of the paper is as follows: in Section 2 the scientific literature is searched to determine the most promising approaches for reliable colour classification of textile fabrics to be reused in a circular process. Common issues related to machine vision systems used to acquire relevant data from recycled fabrics are also explored. The section also presents an overview of AI methods from traditional ANNs-based approaches to modern deep learning-based methods. Finally, in Section 3, the main conclusions and a discussion on possible future developments are carried out.
2. Scientific Literature Research Methodology
The SCOPUS database was queried on 29 October 2024 using the keywords “AI,” “textiles,” “colour,” “classification,” and “recycling.” The full Boolean string adopted is as follows:
(“artificial intelligence” OR “machine learning” OR “deep learning”) AND (fabric OR textile) AND (colour OR color) AND classification AND recycling.
According to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-based analysis of Figure 1, to be included in the state-of-the-art research, studies had to be empirical, which necessitates the collection and analysis of empirical data. Consequently, meta-analyses, theoretical studies, secondary data analysis, and simulated outcomes (from simulation models) were disregarded. Additionally, consideration was limited to publications whose full text was available.
Despite the huge industrial interest in colour-based classification methods for recycled fabrics, the literature research outcomes sum up to less than 200 papers and even the most recent literature studies rely on traditional neural networks. The graph in Figure 2 shows an almost linear increase starting in 2018.
A refinement is carried out by searching for the most often declared terms throughout the complete collection of 678 publications using Scopus’ “refine research” feature. Since many papers may share two or more keywords, it is pointless to display the percentage of them that address a particular topic. However, it is possible to extract the most common keywords in the relevant fields. As shown in Table 1, the primary keywords can be grouped in a collection of macro-areas of interest.
As already mentioned, the objective of this study is to provide an overview of current methods and pertinent research on the use of cutting-edge technologies, as indicated in Table 1’s macro-areas, in light of the previously mentioned categorization. Before exploring the most relevant methods, it should be noted that a reliable classification system relies primarily on a machine vision (MV) architecture able to capture input data, and on the availability of a dataset (or pre-processed inputs) before being trained and tested. Therefore, the next section provides a brief description of MV systems commonly adopted for colour classification and hints at common issues with data.
3. Machine Vision Systems
The main limitation of industrial cameras is that colour is acquired, for each pixel of the image, by only three parameters (RGB); therefore, the conversion in the CIELAB space is dependent on such three pixel-by-pixel values. Spectrophotometers are the best choice for capturing more consistent data since spectral analysis allows acquiring reflectance or transmittance of light at specific wavelengths (typically in the range [400–700 nm], with a step of 10 nm), thus generating spectral curves. Reflectance values can be easily converted into tristimulus values once the illuminant is known, and the CIELAB space is then easily retrievable. However, the main limitation of this acquisition system is related to the small dimensions of the acquired areas of the fabric, typically limited to a small measurement spot. Hyperspectral Imaging Systems overcome this limitation by capturing images across a wide range of wavelengths, from the visible spectrum to the near-infrared (NIR). Hyperspectral cameras usually provide from 0.5 to 2 fps (with the highest spectral resolution) up to 30 fps (usually with lower resolution), each frame consisting of a monochrome image of the scene together with the reflectance factors, commonly in the spectral range 400–1000 nm (with a minimum spectral resolution equal to 2.8 nm) for a given series of points measured in the inspected area.
Combining images with spectral data, it is possible to expand the capabilities of industrial cameras; however, this kind of MV is quite expensive. In conclusion, using different devices, the MV system allows for acquiring different typologies of data (images, reflectance spectra, or hyperspectral images) and retrieving the coordinates in commonly adopted spaces (with particular regard to CIELAB). As an important remark, it should be considered that in most of the colour classification systems overviewed in this work, these acquired data are used as input data for the sorting system, while the target data are given colour classes. Usually, the fabrics are initially grouped by families (e.g., red, blue, white, etc.); afterward, each fabric belonging to a family can be additionally classified into classes (for example four classes for the red family, seven classes for brown, etc.; see Figure 3).
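As a purely illustrative sketch of the conversion mentioned above, the following Python snippet computes tristimulus and CIELAB values from a reflectance curve via the usual weighted sums; the reflectance, illuminant, and colour-matching-function arrays are random placeholders standing in for measured data and the official CIE tables, so the printed values have no physical meaning.

```python
import numpy as np

# Wavelengths sampled every 10 nm in [400, 700] nm, as typical for spectrophotometers.
wavelengths = np.arange(400, 710, 10)

# Placeholder data: in practice R comes from the spectrophotometer, while S (illuminant
# power) and xbar/ybar/zbar (CIE 1931 colour-matching functions) come from standard tables.
rng = np.random.default_rng(0)
R = rng.uniform(0.2, 0.8, wavelengths.size)          # measured reflectance (placeholder)
S = np.ones(wavelengths.size)                        # illuminant power (placeholder)
xbar, ybar, zbar = rng.uniform(0, 1, (3, wavelengths.size))  # CMFs (placeholders)

# Tristimulus values: X = k * sum(R * S * xbar), with k normalising Y of the white to 100.
k = 100.0 / np.sum(S * ybar)
X, Y, Z = (k * np.sum(R * S * cmf) for cmf in (xbar, ybar, zbar))

# CIELAB conversion; Xn, Yn, Zn are the white-point tristimulus values for this illuminant.
Xn, Yn, Zn = k * np.sum(S * xbar), 100.0, k * np.sum(S * zbar)

def f(t):
    d = 6.0 / 29.0
    return np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)

L_star = 116 * f(Y / Yn) - 16
a_star = 500 * (f(X / Xn) - f(Y / Yn))
b_star = 200 * (f(Y / Yn) - f(Z / Zn))
print(L_star, a_star, b_star)
```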
Therefore, for all the classification systems considered in this work, the following data are involved:
Training data—they usually consist of one or more of the following data from the fabric to be classified: (1) acquired images, (2) acquired reflectance values, (3) a combination of image and reflectance values, (4) RGB, CIELAB, or other colour space information directly retrieved using an MV system or available in public databases. The AI-based algorithms can be trained using either images or colorimetric data, i.e., colour coordinates.
Target data—consists of colour classes and, possibly, sub-classes; the classification can be based either on human assessment of the fabrics (e.g., based on the knowledge of the expert operators working in the textile company) or on the use of standards such as PANTONE® or RAL Colour Chart [17].
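A minimal sketch of how a training/target pair could be assembled is given below; the synthetic patch, the use of scikit-image for the RGB-to-CIELAB conversion, and the class label "red_03" are illustrative assumptions rather than elements of the reviewed systems.

```python
import numpy as np
from skimage import color

# Synthetic reddish patch standing in for an image acquired by the MV system
# (values in [0, 1]); the class label would normally come from expert operators
# or from a PANTONE/RAL reference and is purely illustrative here.
patch_rgb = np.tile([0.60, 0.12, 0.10], (32, 32, 1))
target_class = "red_03"

# Convert to CIELAB and summarise the patch with its mean L*, a*, b* values,
# one possible colorimetric feature vector for the classifiers discussed here.
patch_lab = color.rgb2lab(patch_rgb)
feature_vector = patch_lab.reshape(-1, 3).mean(axis=0)
print(feature_vector, "->", target_class)
```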
4. Overview of AI-Based Methods
As mentioned above, the main aim is to provide an overview of current strategies and methods implemented to comply with the colour-based classification of recycled fabrics for their reintroduction in a circular process. Accordingly, also due to the surprisingly limited number of papers retrieved in the bibliographic analysis, the methods are classified based on the main AI-based technique adopted, starting from traditional ANN-based approaches, exploring the most commonly used methods (i.e., CNNs), and closing with the (limited) number of works using RNNs.
4.1. Traditional ANNs-Based Methods
Perhaps unexpectedly, several methods today still adopt traditional Artificial Neural Network-based approaches, especially when the main input of the sorting system is composed of CIELAB data [17]. Most recent works agree that probabilistic networks or competitive layers can be used to “guide” the classification of fabrics toward supervised training. For instance, a “simple, yet effective, machine vision-based system combined with a probabilistic neural network for carrying out reliable classification of plain, regenerated wool fabrics” is proposed in [17]. To classify the recycled wool fabrics, a set of colour classes must be defined, and an appropriately designed acquisition mechanism must also be in place. Following the image acquisition, useful information was extracted using image-processing techniques. Self-organizing maps (SOMs) are also used for classification [18]. A segmentation approach that combines the effective dense subspace clustering with the self-organizing maps neural network is proposed in [19]. Following the fabric image’s pre-processing, the self-organizing maps algorithm was used for the primary clustering, and the effective dense subspace clustering algorithm was used for the secondary clustering. This pre-processing of the image could help perform a subsequent classification using either deterministic approaches or CNN architectures [20] whose input is the clustered image (this would reduce the complexity of the input data). For colour difference detection and assessment, according to [21], other available intelligent techniques used today are “Support Vector Machine (SVM) algorithm and Least-squares SVM (LSSVM), Support Vector Regression (SVR) and Least-squares SVR (LSSVR)” [22] as well as the “Random Vector Functional-link net (RVFL), the Extreme Learning Machine (ELM) and the Kernel ELM (KELM) and the Regularization ELM (RELM) and the Online Sequential ELM (OSELM) learning algorithms” [23].
Additionally, it has been demonstrated that RVFL, ELM, and their variations provide non-iterative training protocols, speed, and reliable predictive models [24]. ELM stands out in particular due to its simple architecture, quick learning speed, and few parameter changes. Moving on to optimization techniques, the PCA approach is widely used in feature extraction and is renowned for its ability to separate major components and reduce dimensions. Genetic Algorithms (GAs), instead, are a search optimization method that mimics the natural selection process.
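To make this kind of pipeline concrete, the following sketch combines PCA-based feature reduction with an SVM classifier in scikit-learn; the synthetic feature vectors, the number of colour families, and the RBF kernel are illustrative choices, not settings taken from the cited works.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in data: 200 fabric samples described by 9 handcrafted features
# (e.g., mean L*a*b*, histogram statistics) and 8 illustrative colour families.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))
y = rng.integers(0, 8, size=200)

# PCA reduces the handcrafted feature space, the SVM performs the classification,
# mirroring the "traditional ML" pipelines cited in this subsection.
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
model.fit(X[:160], y[:160])
print("held-out accuracy:", model.score(X[160:], y[160:]))
```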
4.2. Convolutional Neural Networks
As deep Convolutional Neural Network (CNN) models like AlexNet [24], GoogLeNet [25], VGGNet [26], and ResNet [27] have emerged, the use of CNNs to extract characteristics from images has progressively gained popularity. Therefore, several academics have successfully coupled deep learning with apparel classification in recent years [28]. As widely recognized, Convolutional Neural Networks (CNNs) are “feedforward neural networks that can extract features from data with convolution structures” [29]. Their architecture typically consists of three different layers: convolution, pooling, and fully connected (FC) layers (see Figure 4).
A typical CNN layer can be represented by a mapping between the input data (e.g., vectors whose coordinates are the L*, a*, and b* values of a given fabric, or directly the RGB image acquired using the MV system) and a new space, which is a function of the coordinates $z$ of the weighted sum of input activations, according to Equation (1):

$y = f(z)$ (1)

where

$z = W x + b$ (2)

In Equation (2), $W$ is a weight matrix and $b$ is a bias term. The convolution operation in Equation (2) is defined by Equation (3):

$z_{i,j} = \sum_{m=1}^{M} \sum_{n=1}^{N} w_{m,n}\, x_{i+m,\, j+n}$ (3)

where $w_{m,n}$ is the weight at the coordinate $(m,n)$ in the convolution filter (size $M \times N$) and $x_{i+m,\, j+n}$ is the input value in position $(i+m, j+n)$. The transfer function $f$ can be linear or non-linear, depending on the CNN architecture. Common activation functions include ReLU (Rectified Linear Unit), provided by Equation (4):

$f(z) = \max(0, z)$ (4)

Pooling reduces the spatial dimensions of feature maps, retaining important features while reducing computation. Max pooling, for example, takes the maximum value in each pool region (see Equation (5)):

$y_{i,j} = \max_{(m,n) \in P} x_{i+m,\, j+n}$ (5)

where $m$ and $n$ are the indices within the pooling window $P$. Finally, fully connected layers combine features to make predictions. Each output of the CNN is computed by using Equation (1) with the predicted value $\hat{y}$:

$\hat{y} = f(W_{fc}\, x)$

where $W_{fc}$ is the weights matrix of the fully connected layer.
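A minimal PyTorch sketch of the layer stack described by the equations above is reported below; the input size, channel counts, and the eight output colour families are illustrative assumptions and do not reproduce any specific architecture from the cited works.

```python
import torch
import torch.nn as nn

# Minimal CNN mirroring the layer types above: convolution + ReLU (Equations (2)-(4)),
# max pooling (Equation (5)), and a fully connected layer producing the class scores.
class FabricColourCNN(nn.Module):
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x)                   # convolution, ReLU and pooling stages
        return self.classifier(z.flatten(1))   # class scores; softmax is applied in the loss

model = FabricColourCNN()
logits = model(torch.randn(4, 3, 64, 64))      # a batch of 4 dummy 64x64 RGB patches
print(logits.shape)                            # torch.Size([4, 8])
```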
Recent studies investigated the use of CNNs for fabric colour classification. A new technique in [30] converts fabric images into a “difference space,” which is the separation between the colours of the image and a predetermined reference set. This method generates colour-difference channels for input into the CNN thus improving the accuracy of recognition. The architecture is based on Convolutional and pooling layers, while dense layers with SoftMax activation are used for final classification.
A pre-trained deep learning network is utilized in [31] where woven materials are identified according to their density and texture while inadvertently capturing data. This method demonstrates how reliable transfer learning methods are for complex textile classification. Another study [32] uses CNNs and ensemble learning to manage differences in illumination and fabric textures. A deep learning method based on the Random Forest and K-NN algorithms was used to develop and analyze an intelligent colour-based object sorting system [33], showing that the former algorithm performs better in classification. To finalize the automatic fabric composition recognition in the sorting process, [34] uses the principles and methods of deep learning for the qualitative classification of waste textiles based on the analysis of near-infrared (NIR) spectroscopy.
According to the authors’ experimental results, the Convolutional Network classification approach using normalized and pixelated NIR can automatically classify several typical fabrics, including wool, polyester, cotton, and blends of different materials. Again, referring to [30], a novel architecture based on a group of CNNs with input in the colour difference domain is also suggested. The authors in [35] suggest a clothing classification technique based on a parallel Convolutional Neural Network (PCNN) coupled with an optimized random vector functional link (RVFL) to increase the accuracy of clothing image recognition. The technique extracts features from photos of apparel using the PCNN model.
The issues with conventional Convolutional Neural Networks (such as limited data and overfitting) are then addressed by the structure-intensive, dual-channel Convolutional Neural Network, or PCNN. The transition of traditional RGB spaces into more effective “difference spaces,” which allows CNNs to discern minute differences in fabric images, is another major advance. In circumstances where textural patterns could typically make it difficult to identify colours, this method improves outcomes by computing variations with respect to a pre-set reference and using this augmented input for categorization.
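The following snippet gives a simplified illustration of the “difference space” idea; the reference colours and the plain Euclidean distance in RGB are assumptions made for clarity and do not reproduce the exact formulation used in [30].

```python
import numpy as np

# Simplified sketch of a "difference space": for each reference colour, a channel
# holding the per-pixel deviation of the image from that reference. The reference
# set and the Euclidean RGB distance are illustrative simplifications.
image = np.random.rand(64, 64, 3)                    # stand-in fabric image in [0, 1]
references = np.array([[0.8, 0.1, 0.1],              # illustrative reference colours
                       [0.1, 0.1, 0.8],
                       [0.9, 0.9, 0.9]])

diff_channels = np.stack(
    [np.linalg.norm(image - ref, axis=-1) for ref in references], axis=-1
)
print(diff_channels.shape)   # (64, 64, 3): one difference channel per reference colour
```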
The authors in [36] adopted a deep learning method combined with a support vector machine to automatically match textile fabric colours with Pantone cards. Their method works without manual matching, thereby improving work efficiency and eliminating human observer bias. Table 2 lists some advantages offered by CNNs as a classifier for fabrics.
Summing up, even though CNNs are reliable tools, their performance is dependent on the availability of enough data, pre-processing to reduce problems with texture and lighting, and computational resources.
4.3. Recurrent Neural Networks
The use of Recurrent Neural Networks (RNNs) is another promising approach for colour classification, as demonstrated by the results obtained from the (limited) number of works dealing with this topic. RNNs (see Figure 5) are a type of neural network that captures temporal dependencies and is intended to oversee sequential data. In RNNs, the hidden state at time $t$, i.e., $h_t$, is computed based on the current input $x_t$ (e.g., a vector containing CIELAB coordinates) and the previous hidden state $h_{t-1}$ at time $t-1$.

The core update equation is

$h_t = f(W_x x_t + W_h h_{t-1} + b_h)$

where $W_x$ is a weight matrix for the input $x_t$, $W_h$ is a weight matrix for the hidden state, $b_h$ is a bias vector, and $f$ is the activation function which maps the hidden with the input spaces. The predicted class at time $t$ is computed in Equation (8):

$\hat{y}_t = g(W_y h_t + b_y)$ (8)

where $W_y$ is a weight matrix connecting the hidden state to the output, $b_y$ is the bias vector for the output layer, and $g$ is the activation function for the output (e.g., SoftMax for classification). When managing a sequence of input data $\{x_1, x_2, \ldots, x_T\}$, the RNN can be unfolded over time, leading to the computations for each time step $t = 1, \ldots, T$ (see Equation (9)):

$h_t = f(W_x x_t + W_h h_{t-1} + b_h), \quad \hat{y}_t = g(W_y h_t + b_y), \quad t = 1, \ldots, T$ (9)
The most relevant work adopting the RNN architecture for devising classification is [36], where the authors propose “advanced neural network techniques, including convolutional layers and RNNs, to classify textures and colours of fabrics by exploring lighting and rotation effects on fabric detection”. First, the collection and pre-processing of fabric images are conducted, and the dataset is enlarged using data augmentation techniques. Then, “a pre-trained model is used, where only the newly added layers are trained, and the older layers are left frozen.” The results are categorized using the types of woven fabric (plain, twill, and satin) once the high-level texture information has been extracted [37]. In [38] a “novel RNN model that can effectively analyze hyperspectral pixels as sequential data and then determine information categories via network reasoning is proposed”, paving the way towards “future research, showcasing the potential of deep recurrent networks for hyperspectral data analysis”. Several other works adopt RNN architectures (e.g., [39,40,41,42]), but given the aforementioned lack of studies dealing with classifiers for use in the textile field, it seems evident that there is room for further research on this topic employing this kind of network.
This assumption stems from the fact that RNNs have several advantages, as listed in Table 3. It is important to highlight that for this kind of architecture, in the context of colour classification, pixel rows or columns from the acquired digital image are treated as sequences, where each step corresponds to a pixel or a patch of pixels. This is a mathematical trick to test RNNs with spatial data, even though they are mainly adopted to treat sequential data, as stated in Table 3. Moreover, this kind of network can be used for the estimation of dye usage reduction and/or to estimate emission savings in circular processes. These interesting aspects, however, fall outside the scope of this work.
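A possible implementation of the “pixel rows as sequences” trick is sketched below in PyTorch; the image size, hidden-state dimension, and the number of colour families are illustrative assumptions and do not correspond to any specific cited work.

```python
import torch
import torch.nn as nn

# Each row of a (3 x H x W) image becomes one time step of length 3*W; the final
# hidden state feeds a linear classifier, as in Equation (8). Sizes and the 8
# colour families are illustrative assumptions.
class FabricColourRNN(nn.Module):
    def __init__(self, width: int = 64, hidden: int = 128, n_classes: int = 8):
        super().__init__()
        self.rnn = nn.RNN(input_size=3 * width, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) -> sequence of H rows, each flattened to 3*W values
        batch, _, h, w = x.shape
        seq = x.permute(0, 2, 1, 3).reshape(batch, h, -1)
        _, h_last = self.rnn(seq)              # recurrent update applied row by row
        return self.out(h_last.squeeze(0))     # class scores from the last hidden state

model = FabricColourRNN()
print(model(torch.randn(4, 3, 64, 64)).shape)  # torch.Size([4, 8])
```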
4.4. Quality Control Metrics
Once a fabric is provided, an ideal classification system should be able to identify the correct class. However, there is a great deal of variation in fabrics that can be given to a textile company, particularly recycled materials. As a result, a categorization error should be accepted. Since the error grows as the classification classes move apart, such an error should consider that a classification error between two “adjacent” classes is acceptable. In actuality, a fabric is combined with other fabrics of the same class after being assigned to a particular class. Because of this, the impact of a misclassification involving “similarly” coloured materials is minimal. On the other hand, assigning it to the incorrect class could produce undesirable outcomes if the misclassification is significant.
Therefore, some metrics for measuring the performance of the classification system are needed. For traditional ANNs, a convenient method, introduced in [17], is based on the definition of a “likelihood classification index”.
Another interesting index is the so-called “reliability index” γ [17], defined in Equation (10), where “$N_c$ is the total number of fabrics correctly classified, $N_a$ is the number of fabrics classified in a closer class, and $N$ is the number of samples to be classified” [17]. Both metrics have the advantage of assessing the performance of the classification by, on one side, strongly penalizing incorrect classifications and, on the other side, minimizing the risk of misclassifications by assigning a proper weight to samples which are classified in the class closest to the manual classification.
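A sketch of how such a reliability index could be computed is given below; since the exact weighting used in [17] is not reproduced here, the weight assigned to adjacent-class assignments and the assumption of ordered integer class labels are explicit assumptions of the example.

```python
import numpy as np

def reliability_index(true_cls, pred_cls, w=0.5):
    """Sketch of a reliability index in the spirit of Equation (10): correct
    assignments count fully, assignments to an adjacent class count with a
    reduced weight w. The value w = 0.5 is an illustrative assumption (not the
    weight used in [17]); classes are assumed to be ordered integers so that an
    'adjacent' class is one index away."""
    true_cls, pred_cls = np.asarray(true_cls), np.asarray(pred_cls)
    n_correct = np.sum(pred_cls == true_cls)
    n_adjacent = np.sum(np.abs(pred_cls - true_cls) == 1)
    return (n_correct + w * n_adjacent) / true_cls.size

print(reliability_index([0, 1, 2, 3], [0, 2, 2, 0]))   # 0.625 with the illustrative weight
```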
Dealing with CNNs, the network’s performance may be evaluated using the so-called loss function. Cross-entropy loss is commonly used (see Equation (11)):

$L = -\sum_{c=1}^{C} y_c \log(\hat{p}_c)$ (11)

where $C$ is the number of classes (e.g., families), $y_c$ is the true label (1 if class $c$ is correct, 0 otherwise), and $\hat{p}_c$ is the predicted probability for class $c$. The main benefit of using this loss function for the classification of coloured fabrics is that the log function penalizes incorrect predictions more when the confidence is high, encouraging the CNN model to assign a high probability to the correct class (or, at least, to the closest one).
When an RNN is used, the loss function is computed based on the model’s output $\hat{y}_t$ and the target $y_t$:

$L_t = L(\hat{y}_t, y_t)$

where $L$ is typically a cross-entropy loss for classification tasks.
Once the fabric classification is carried out, both the likelihood classification index and the reliability index γ can be evaluated for CNN and RNN architectures. Accordingly, to provide a preliminary benchmark between the performance obtained using traditional ANN-based, CNN, and RNN methods, they have been assessed against the set of 200 samples provided by the Italian textile company Manteco S.p.A., located in Prato and described in [17]. The dataset used in this study was provided by Manteco S.p.A. under a non-disclosure agreement (NDA) and is therefore not publicly available in its original form. It includes high-resolution images and colorimetric measurements of 200 recycled wool fabric samples, organized across 10 colour families. Researchers may replicate the experiments using publicly available datasets such as DeepFashion2 or FabricNET, which were used for transfer-learning validation in this work.
Each sample represented a particular class belonging to a family. The catalogue used comprised ten overall families, as shown in Table 4. Table 5 shows the values obtained with the method in [17], in [30], and in [37], using CNNs and RNNs trained on the DeepFashion2 [43] and FabricNET [44] datasets. The dataset of 200 fabric colour samples was randomly divided into 70% for training (140 samples), 15% for validation (30 samples), and 15% for testing (30 samples).
Dark and achromatic colours, such as black and grey, frequently lack strong, distinguishing characteristics that allow them to stand out in an image or dataset, according to the scientific literature [45]. This is primarily because of low contrast and texture (i.e., slight variations in intensity or texture when compared to other colours), partial overlap with shadows or lighting variations, and “reduced” information: black and grey colours primarily provide luminance information, which may not be sufficient for confident classification. To strengthen the comparative analysis of artificial intelligence techniques for textile fabric colour classification, a performance comparison of traditional Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs) based on standard classification metrics is here proposed. Results are reported using stratified 5-fold cross-validation on the aforementioned benchmark dataset, consisting of 200 labelled fabric samples grouped across eight colour families.
The following evaluation metrics were computed for each model [46]:
F1-score (F1): harmonic mean of precision and recall.
γ-index (γ): reliability metric accounting for adjacent class misclassifications, as defined in Equation (10).
All models were evaluated using stratified 5-fold cross-validation to ensure class balance within each fold. Performance was averaged over the folds, and standard deviations were reported to assess variance (see Table 6).
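For reference, the evaluation protocol can be sketched as follows with scikit-learn; the synthetic data and the simple MLP placeholder stand in for the actual fabric samples and for the ANN/CNN/RNN models being compared, so the printed scores are not related to those in Table 6.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

# Synthetic stand-in data: 200 samples, 8 colour families, 9 colorimetric features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))
y = rng.integers(0, 8, size=200)

# Stratified 5-fold cross-validation keeps the class balance within each fold;
# performance is averaged over folds and the standard deviation is reported.
scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))

print(f"macro F1: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```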
Figure 6 presents a bar chart comparison of average performance across the three model types. CNN-based methods consistently outperform both traditional ANNs and RNNs across all metrics, with relatively low variance. ANN-based models offer competitive results with slightly higher variance, while RNN-based models underperform slightly, particularly in classification precision and F1-score, likely due to their sensitivity to training data sequence and noise.
A Friedman test revealed significant differences among models (χ2(2) = 10.0, p = 0.007). Post hoc Wilcoxon tests confirmed CNN > ANN (p = 0.043) and CNN > RNN (p = 0.009). The higher accuracy of CNNs is likely due to their good generalization across colour families and efficient extraction of spatial data. They are especially well-suited for industrial use due to their adaptability to changes in texture, lighting, and shadows. Additionally, CNNs demonstrated higher robustness to reduced data volumes when pre-trained models were utilized with transfer learning, even though all approaches benefit from enriched datasets. In fact, for this specific case study, the CNN-based method consistently outperforms the other two methods in most colour families, with an average accuracy of 86.1%, indicating its robustness and adaptability for this task. The ANN-based method follows closely with an average accuracy of 83.2%, showing reliable performance but slightly less effective than the CNN approach. The RNN-based method achieves an average accuracy of 77.9%, suggesting that its sequential processing approach may not be as well-suited for the type of data available for the case study. However, all three architectures provide results averagely comparable to or outperforming traditional methods based on the use of neural networks or deterministic approaches, whose performance “varies in the range 57–80% for a dataset composed of a large number of samples and higher number of colours” [17] concerning the dataset used in this case study. In particular, all three methods perform very well with white and brown families, with the CNN-based method reaching the highest accuracy (91.1% for white and 95.2% for brown), highlighting its strength in classification. With red and violet, the CNN-based and RNN-based methods achieve similar results, with minor differences, while the ANN-based method slightly underperforms.
All methods show a noticeable drop in accuracy for green samples, particularly for the RNN-based method, while the lowest accuracies across all methods (69.2–71.3%) are reached for grey and black, thus indicating difficulty in distinguishing between subtle shades or patterns in this range. Based on the case study, the CNN-based method is the most effective approach overall, providing robust and consistent accuracy. However, further improvements, especially in distinguishing low-contrast or subtle colour families, are necessary for practical applications. The performance of the ANN-based method suggests it can still be a viable alternative in resource-constrained environments, while the RNN-based method may require adaptation or reconsideration for this type of problem. However, it should be noted that the case study contains a relatively small number of classified samples; as a result, the results should be interpreted only qualitatively, that is, as a first assessment of how well the previously discussed techniques performed. More evaluation will be required to compare the three methods more consistently.
4.5. Emerging Architectures
Fabric classification research has recently expanded beyond traditional CNNs to include transformer-based and hybrid systems. Vision Transformers (ViTs) and related attention mechanisms can capture long-range dependencies and fine-grained colour changes that may challenge convolutional models. For instance, the authors in [47] reported higher accuracy utilizing a CNN–ViT hybrid for high-resolution industrial fabric inspection, while the authors in [48] showed that a ViT enhanced colour-pattern detection in intricate woven textiles. For textile flaw detection and colour grading, hybrid ensembles that combine CNN feature extractors with transformer attention blocks show comparable advantages. Although these methods are not directly related to colour assessment, their approaches could outline future directions for this specific topic as well.
5. Roadmap for Adoption in Textile SMEs
The integration of AI-powered colour classification systems into textile recycling workflows presents significant opportunities for small- and medium-sized enterprises (SMEs) aiming to enhance sustainability and operational efficiency. However, the transition from traditional, operator-dependent processes to data-driven, automated systems requires a structured and scalable implementation roadmap. This section outlines an approach tailored to the specific needs and constraints of textile SMEs. Figure 7 summarizes the overall proposed roadmap.
Phase 1: Assessment and Readiness Evaluation
Before implementation, SMEs should assess their current infrastructure, workforce skills, and data availability. Key steps include Process Mapping (identifying where manual colour sorting occurs and the variability it introduces), Data Audit (assessing the availability of digital images or colorimetric data from past production or recycling batches), and Technology Readiness (assessing existing camera systems, lighting setups, and computing resources).
Phase 2: Pilot System Deployment
A small-scale pilot allows testing of AI models in a controlled production environment without disrupting ongoing operations. From the hardware point of view, a basic machine vision system using industrial RGB cameras and standard illumination (e.g., D65 light sources) should be established. Then, using such a system, a representative set of fabric samples, focusing on a few colour families, needs to be properly acquired. Finally, it will be possible to train a CNN model on this limited dataset using transfer learning to minimize data and resource requirements, and the AI outputs can be compared against those of human operators to assess accuracy, consistency, and classification speed.
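A transfer-learning sketch for this pilot phase is shown below; the choice of a pre-trained ResNet-18 backbone (assuming torchvision weights are available), the four pilot colour families, and the dummy batch are illustrative assumptions rather than a prescribed setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning for the pilot: the pre-trained ResNet-18 backbone is frozen and
# only the final layer is retrained on the pilot colour families.
n_pilot_families = 4
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                                       # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, n_pilot_families)      # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for acquired fabric images.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, n_pilot_families, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"pilot batch loss: {loss.item():.3f}")
```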
Phase 3: Integration with Production Workflow
Once validated, the system can be scaled and integrated into operational workflows. System Integration encompasses embedding AI models into existing sorting stations or quality control points, with real-time prediction interfaces. Staff should be upskilled to interact with the system, interpret outputs, and manage exceptions. Furthermore, the company should continuously collect feedback from operators and improve the model with newly classified data.
Phase 4: Expansion and Optimization
To fully benefit from the AI solution, SMEs should plan for long-term scalability and optimization by gradually incorporating more colour families, fabric types, and edge cases into the training data and by tracking key performance indicators such as dye usage reduction, throughput improvement, and misclassification rate.
Phase 5: Collaboration and Standardization
Finally, SMEs can benefit from collaboration with research institutions, industry consortia, or textile hubs, joining collaborative efforts to build open datasets for fabric classification and aligning classification criteria with industry standards (e.g., Pantone, RAL) to ensure interoperability.
It is important to remark that production environments exhibit variable illumination, fabric texture, and surface properties. To maintain robust performance, it is crucial to use standard illumination with calibrated D65 or TL84 light sources, to calibrate camera sensors, and possibly to perform an update of the CNN through periodic transfer-learning sessions using new images.
Regarding the investment costs related to the implementation of a colour assessment system for SMEs, it should be considered that a machine vision station (e.g., composed of an industrial camera, D65 lighting, and an enclosure) can be implemented for approximately EUR 8000–EUR 12,000, while an entry-level GPU workstation suitable for CNN inference (e.g., an NVIDIA RTX-class card) typically costs in the range of EUR 3000–EUR 5000 [49]. This means that small- and medium-sized enterprises can start with low-budget investments and scale incrementally as accuracy and throughput requirements grow.
It should also be noted that while AI-powered classification systems offer advantages in accuracy and repeatability, their deployment in real-world textile environments, particularly within SMEs, must contend with several practical trade-offs and operational challenges. The main challenges are as follows:
Speed vs. Accuracy—AI models prove to be effective in terms of classification accuracy, often exceeding 85–90%. However, models often require more computational resources and longer inference times, which may slow down throughput in fast-paced production environments. On the other hand, lightweight or compressed models, while faster, may compromise on precision—especially in cases involving subtle colour variations or mixed textures. Accordingly, optimization techniques such as model pruning, quantization, and edge computing should be considered to balance this trade-off, enabling rapid classification without significantly sacrificing accuracy.
Cost vs. Scalability—Initial investments in machine vision systems, AI development, and dataset preparation can be significant since high-resolution industrial cameras, hyperspectral sensors, or custom lighting setups can raise capital costs. Moreover, custom AI model development may require specialized expertise, further increasing adoption barriers for SMEs. So, SMEs should start with cost-effective RGB-based systems and apply transfer learning using pre-trained networks.
Lighting Variations—Inconsistent or suboptimal lighting during image acquisition can lead to inaccurate colour representation and model predictions. This is particularly critical for fabrics with low colour contrast or delicate gradients. Standardizing the lighting environment using D65 or TL84 artificial illuminants, along with proper enclosure of the vision setup, could help maintain colour consistency. Incorporating colour calibration targets in each batch can further correct for minor shifts.
Fabric Texture and Surface Properties—Surface textures such as gloss, pile, weave pattern, and shadowing can interfere with colour perception in images; in fact, highly textured or patterned fabrics may introduce noise into the classification process and AI models may unintentionally learn texture patterns instead of pure colour features. Therefore, pre-processing techniques (e.g., texture suppression filters, normalization) or the use of combined image and spectral data (e.g., using hyperspectral imaging) should be considered to improve classification robustness.
Data Quality and Labelling Consistency—AI models require large, well-labelled datasets to perform reliably. However, manual labelling is often subjective and inconsistent, especially in colour-based classification. This should push SMEs to collaborate with experienced operators to define consistent labelling criteria. Semi-supervised learning and data augmentation (e.g., lighting, contrast variation) can enrich the dataset without extensive manual effort (a minimal pre-processing and augmentation sketch is given after this list).
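The pre-processing and augmentation sketch referenced in the list is reported below; the Gaussian kernel size and the brightness/contrast jitter ranges are illustrative assumptions, not tuned values.

```python
import cv2
import numpy as np

# Mild texture suppression via Gaussian blur, normalisation to [0, 1], and simple
# brightness/contrast augmentation of an acquired fabric image (here a random stand-in).
image = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)

smoothed = cv2.GaussianBlur(image, (5, 5), 0)          # suppress fine texture patterns
normalised = smoothed.astype(np.float32) / 255.0       # rescale to [0, 1]

def jitter(img, rng):
    """Random brightness (gain) and contrast perturbation of a [0, 1] image."""
    gain = rng.uniform(0.8, 1.2)
    contrast = rng.uniform(0.8, 1.2)
    out = (img - img.mean()) * contrast + img.mean() * gain
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
augmented = [jitter(normalised, rng) for _ in range(4)]  # enlarge the dataset
print(len(augmented), augmented[0].shape)
```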
The aforementioned roadmap has been adopted by the company Manteco S.p.A. in 2025. An early experiment in colour classification was carried out in the first two weeks of June 2025 by classifying 350 samples into the eight families stated above. Samples were manually classified by a pool of three colourists under a controlled environment (neutral background, D65 illumination). Each expert evaluated independently, without any communication with the other evaluators, and completed an individual evaluation form in which they assigned their classification to each sample. Once all experts had completed their classification, the forms were collected and compared by an independent coordinator or the research team. Any disagreements were then reviewed by the entire team and resolved through subsequent consensus. Once the manual classification was completed, the CNN-based method was applied, achieving a classification accuracy of 97.3% compared to the manual method. Although these results are preliminary and class-wise accuracy has not yet been calculated, the findings demonstrate a good level of accuracy for CNN-based systems.
Table 7 maps the evolutionary progression of AI-based textile colour classification methods highlighting how the field has advanced in terms of methodology, data usage, and industrial application over time.
The main findings of the present paper can be summarized as follows:
While CNNs have been extensively explored and implemented, RNNs remain underutilized in the domain of textile classification despite their potential to enhance the understanding of sequential and contextual data, such as fabric patterns and gradients. RNNs’ ability to capture temporal dependencies enables a nuanced interpretation of colour transitions, making them uniquely suited for scenarios where changes occur across a sequence of image pixels. This capability is especially beneficial for fabrics with intricate designs or colour gradients that evolve spatially.
The “integration of advanced RNN variants, such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), mitigates some of the gradient issues commonly faced in traditional RNNs, enabling the model to learn long-term dependencies more effectively” [50].
Even though accuracy and efficiency have increased with current methods, allowing them to reach an accuracy comparable to that obtainable in other fields (e.g., medical [51] or agricultural [52] ones), there are still issues, such as the computational demands of deep learning model training, the requirement for high-quality annotated datasets, and the vulnerability of AI systems to changing environmental conditions.
The complexity and “black box” nature of AI models hinder interpretability and trust in industrial applications. By designing computationally efficient models optimized for deployment on resource-constrained devices, it could be possible to ensure transparency and reliability in industrial quality assurance by gradually introducing these systems at an industrial scale.
Deep learning models, while more accurate, demand large, well-annotated datasets and significant computational resources; they are prone to overfitting to texture patterns and exhibit “black-box” behaviour that complicates interpretability. These constraints highlight the need for continual dataset enrichment, model regularization, and explainability tools before large-scale industrial deployment.
Regarding the environmental benefits of adopting the proposed roadmap, it is possible to define a set of quantitative sustainability indicators in alignment with ISO 14046 (water footprint), ISO 14064 (greenhouse gas emissions), and the European Commission’s Product Environmental Footprint (PEF) framework.
In terms of energy consumption reduction, Manteco expects to potentially decrease the total energy use (kWh per kg of textile processed) by 10–12% due to decreased reprocessing and improved sorting efficiency. Since correctly pre-classified fabrics can be reused without new dyeing, simulations made by Manteco experts, based on industrial dye baths, indicate potential savings of 20–30% in process water per production batch and a reduction in dye usage of approximately 25–30% per ton of recycled wool processed. In terms of equivalent CO2 reduction (CO2e), preliminary estimates indicate potential reductions between 0.4 and 0.7 kg CO2e/kg fabric.
6. Discussion and Conclusions
This study addressed the critical need for effective and efficient methods to classify recycled textiles based on colour, a key step in advancing circular economy practices in the textile industry. Traditional deterministic and early machine learning methods, while pioneering, sometimes fall short in handling the classification issue.
In particular, deterministic models require controlled lighting and cannot scale easily to complex problems. Machine learning algorithms depend on handcrafted features and are sensitive to camera settings and background variation [53].
For this reason, emerging technologies, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), seem to provide slightly better performances, under the condition that large datasets are available. In general, the findings highlight the advantages of CNNs for hierarchical feature extraction and of RNNs for processing sequential dependencies in fabric colour gradients.
While the review deals with all kinds of novel neural network architectures, highlighting the potential and limitations of all mentioned approaches, this work has some limitations when testing the literature architectures on the case study. In fact, while the dataset provides valuable real-world textile samples, its relatively small size and lack of open access limit reproducibility and generalization of the reported results [54]. Consequently, the comparative performance of CNNs here provided should be interpreted as preliminary and specific to this controlled case study, rather than definitive evidence of overall better performance. In any case, both these advanced methods demonstrate accuracy, robustness to noise, and adaptability for the specific problem of colour classification for fabrics.
With further reference to the proposed case study, despite overall high accuracy, classification of low-contrast fabrics such as grey and black remains challenging due to their sensitivity to lighting. As a consequence, an interesting path for future works seems to be the integration of hyperspectral imaging, which captures subtle reflectance differences beyond the RGB range, or the combination of RGB data with near-infrared spectra for enhanced separability [55]. Moreover, expanding the dataset with targeted augmentation (e.g., brightness and shadow variations) and applying class-balanced loss functions may also be effective approaches to reduce the current performance gap [56]. Statistical analysis confirmed that CNN performance was significantly higher than both ANN and RNN models (p < 0.01), supporting their suitability for textile colour classification under the tested conditions.
Moreover, the present study evaluated a standard RNN to provide a baseline comparison with CNNs. This means that advanced recurrent architectures such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks [57] were not implemented because of the lack of long sequential dependencies and the small dimensions of the dataset. Nevertheless, these variants are, in principle, well-suited to modelling colour gradients or pixel-row sequences. Therefore, these architectures may be effective approaches for future research, especially when larger sequential datasets are available.
To further advance the field, future research can focus on several areas. First, particular attention should be devoted to dataset enrichment by developing comprehensive and diverse datasets representing various fabric textures, lighting conditions, and colour variations to improve the robustness of AI models. Another important aspect is related to the further testing of RNN models and the future exploration of hybrid models, combining them with CNNs to improve performance in complex classification tasks. Furthermore, future work should address the alignment of AI models with sustainability goals by quantifying their environmental impact, optimizing them for minimal resource use during training and deployment, and establishing standardized metrics and evaluation protocols for classification systems to ensure consistent performance across different production environments.
Moreover, as AI research advances, several emerging paradigms offer promising opportunities for improving textile classification and circular economy practices. Some examples are as follows:
-
Self-Supervised Learning (SSL): SSL enables models to learn useful representations from unlabelled data, a critical advantage in the textile industry where large, annotated datasets are rare or expensive to obtain. SSL frameworks such as SimCLR, MoCo, and BYOL [58] use pretext tasks (e.g., predicting image augmentations, colour distortions, or patch ordering) to train encoders without labels.
-
Attention Mechanisms and Transformers: These models learn long-range dependencies across an image, making them adept at identifying subtle colour gradients, fine textures, and pattern repetitions that are common in complex fabric types (e.g., mélange or jacquard). Although not yet widely applied in the textile industry, early experiments with Vision Transformers (ViTs) in defect detection and fashion recognition show substantial improvements over ResNet and VGG baselines [59].
Besides a critical synthesis of AI colour-classification methods, this paper also proposes a practical implementation framework for circular-economy operations.
Finally, although the present paper is in the context of European Global Excellence in Manufacturing, it should be noted that several projects outside Europe demonstrate successful applications of AI to circular textile practices. India is home to several large-scale textile manufacturers and recyclers (e.g., Arvind Mills) that are actively exploring AI for fabric identification, sorting, and dye optimization. Academic institutions like IIT Delhi and NIFT have published studies on automated loom control, defect detection, and texture classification using deep learning [60]. In Bangladesh, the textile sector is beginning to integrate AI for waste reduction and fabric reuse, often in partnership with European sustainability programmes. China leads globally in textile production and AI research. Textile hubs in Zhejiang and Jiangsu provinces use machine vision systems enhanced by CNNs for real-time defect classification and colour sorting [61]. Several factories employ smart dyeing machines optimized by AI to reduce chemical usage and water consumption. Chinese institutions also contribute to deep learning research on colour matching algorithms, spectral data classification, and real-time grading systems, with practical applications in both domestic and export-oriented factories. In the United States, universities such as MIT and UC Berkeley are also involved in developing AI-based quality control systems tailored to sustainable materials [62,63].
The findings of this study also have relevance for contrasting managerial approaches in the fashion industry, where fast fashion and slow fashion represent two distinct strategic logics. Fast fashion brands compete through rapid product turnover and trend-driven collections, relying on frequently changing colour palettes to attract short-term consumer attention. By providing highly accurate, automated colour classification, the AI-based pipeline can shorten design-to-market cycles and enable more precise reuse of pre-coloured textiles, reducing both lead time and dyeing costs.
Slow fashion, on the other hand, emphasizes durability, timeless design, and restrained colour choices intended to remain relevant across multiple seasons. For these companies, colour matching of recycled fibres supports consistent aesthetics while meeting sustainability standards. Thus, the same technological framework can reinforce different strategic priorities: speed and novelty for fast fashion, and quality and consistency for slow fashion. This distinction aligns with the analysis of managerial realities in the fashion sector presented in [61].
Lastly, AI-powered textile recycling is starting to appear in early-stage innovation zones thanks to pilot projects supported by Google AI for Social Good, the H&M Foundation, and UNIDO. These initiatives are intended to adopt technologies and techniques that advance sustainable innovation, increase classification precision, and promote a more inclusive, data-driven approach to textile recycling globally.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest
The author declares no conflicts of interest.
References
Furferi, R.; Carfagni, M. Colour mixing modelling and simulation: Optimization of colour recipe for carded fibres. Model. Simul. Eng. 2010, 2010, 487678.
Amirshahi, S.H.; Pailthorpe, M.T. An algorithm for optimizing color prediction in blends. Text. Res. J. 1995, 65, 632–637.
Stearns, E.I.; Noechel, F. Spectrophotometric prediction of color wool blends. Am. Dyest. Report. 1944, 33, 177–180.
Rong, L.I.; Feng, G.U. Tristimulus algorithm of colour matching for precoloured fibre blends based on the Stearns-Noechel model. Color. Technol. 2006, 122, 74–81.
Thompson, B.; Hammersley, M.J. Prediction of the colour of scoured-wool blends. J. Text. Inst. 1978, 69, 1–7.
Kazmi, S.Z.; Grady, P.L.; Mock, G.N.; Hodge, G.L. On-line color monitoring in continuous textile dyeing. ISA Trans. 1996, 35, 33–43.
Philips-Invernizzi, B.; Dupont, D.; Cazé, C. Formulation of colored fiber blends from Friele’s theoretical model. Color Res. Appl. 2002, 27, 191–198.
Hunt, M.A.; Goddard, J.S., Jr.; Hylton, K.W.; Karnowski, T.P.; Richards, R.K.; Simpson, M.L.; Tobin, K.W., Jr.; Treece, D.A. Imaging tristimulus colorimeter for the evaluation of color in printed textiles. In Proceedings of the SPIE, Machine Vision Applications in Industrial Inspection VII, San Jose, CA, USA, 25–26 January 1999; SPIE Publications: Bellingham, WA, USA, 1999; Volume 3652.
Mäenpää, T.; Viertola, J.; Pietikäinen, M. Optimising colour and texture features for real-time visual inspection. Pattern Anal. Appl. 2003, 6, 169–175.
Amelio, A.; Bonifazi, G.; Cauteruccio, F.; Corradini, E.; Marchetti, M.; Ursino, D.; Virgili, L. DLE4FC: A Deep Learning Ensemble to Identify Fabric Colors. In Proceedings of the SEBD, Pula, Italy, 24–27 June 2023; pp. 13–21.
Furferi, R.; Governi, L. The recycling of wool clothes: An artificial neural network colour classification tool. Int. J. Adv. Manuf. Technol. 2008, 37, 722–731.
da Silva Barros, A.C.; Ohata, E.F.; da Silva, S.P.P.; Almeida, J.S.; Rebouças Filho, P.P. An innovative approach of textile fabrics identification from mobile images using computer vision based on deep transfer learning. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8.
Hirschler, R.; Oliveira, D.F.; Lopes, L.C. Quality of the daylight sources for industrial colour control. Color. Technol. 2011, 127, 88–100.
Furferi, R.; Servi, M. A machine vision-based algorithm for color classification of recycled wool fabrics. Appl. Sci. 2023, 13, 2464.
Qian, M.; Wang, Z.; Huang, X.; Xiang, Z.; Wei, P.; Hu, X. Color segmentation of multicolor porous printed fabrics by conjugating SOM and EDSC clustering algorithms. Text. Res. J. 2022, 92, 3488–3499.
Liu, S.; Liu, Y.K.; Lo, K.Y.C.; Kan, C.W. Intelligent techniques and optimization algorithms in textile colour management: A systematic review of applications and prediction accuracy. Fash. Text. 2024, 11, 13.
Gomes, U.T.; Pinheiro, P.R.; Saraiva, R.D. Dye Schedule Optimization: A Case Study in a Textile Industry. Appl. Sci. 2021, 11, 6467.
Zhang, J.; Zhang, K.; Wu, J.; Hu, X. Color segmentation and extraction of yarn-dyed fabric based on a hyperspectral imaging system. Text. Res. J. 2020, 91, 729–742.
Liu, S.; Lo, C.K.; Kan, C.W. Application of artificial intelligence techniques in textile wastewater decolorisation fields: A systematic and citation network analysis review. Color. Technol. 2022, 138, 117–136.
Liu, X.; Yang, D. Color constancy computation for dyed fabrics via improved marine predators algorithm optimized random vector functional-link network. Color Res. Appl. 2021, 46, 1066–1078.
Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE Press: Piscataway, NJ, USA, 2016; pp. 2818–2826.
Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE Press: Piscataway, NJ, USA, 2016; pp. 770–778.
Zhou, Z.; Deng, W.; Wang, Y.; Zhu, Z. Classification of clothing images based on a parallel convolutional neural network and random vector functional link optimized by the grasshopper optimization algorithm. Text. Res. J. 2022, 92, 1415–1428.
Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6999–7019.
Amelio, A.; Bonifazi, G.; Corradini, E.; Di Saverio, S.; Marchetti, M.; Ursino, D.; Virgili, L. Defining a deep neural network ensemble for identifying fabric colors. Appl. Soft Comput. 2022, 130, 109687.
Das, S.; Shanmugaraja, K. Application of artificial neural network in determining the fabric weave pattern. Zast. Mater. 2022, 63, 291–299.
Penumuru, D.P.; Muthuswamy, S.; Karumbu, P. Identification and classification of materials using machine vision and machine learning in the context of industry 4.0. J. Intell. Manuf. 2020, 31, 1229–1241.
Kiruba, R.; Sneha, V.; Anuragha, S.; Vardhini, S.K.; Vismaya, V. Object color identification and classification using CNN algorithm and machine learning technique. In Proceedings of the 4th International Conference on Pervasive Computing and Social Networking (ICPCSN), Salem, India, 18–19 January 2024; pp. 18–24.
Thakur, R.; Panghal, D.; Jana, P.; Rajan, A.; Prasad, A. Automated fabric inspection through convolutional neural network: An approach. Neural Comput. Appl. 2023, 35, 3805–3823.
Liu, Z.; Li, W.; Wei, Z. Qualitative classification of waste textiles based on near infrared spectroscopy and the convolutional network. Text. Res. J. 2020, 90, 1057–1066.
Das, S.; Wahi, A. Digital Image Analysis Using Deep Learning Convolutional Neural Networks for Color Matching of Knitted Cotton Fabric. J. Nat. Fibers 2022, 19, 15716–15722.
Wang, A.; Zhang, W.; Wei, X. A review on weed detection using ground-based machine vision and image processing techniques. Comput. Electron. Agric. 2019, 158, 226–240.
Iqbal Hussain, M.A.; Khan, B.; Wang, Z.; Ding, S. Woven fabric pattern recognition and classification based on deep convolutional neural networks. Electronics 2020, 9, 1048.
Zhang, H.; Zhu, Q. Color-based image classification and the impact of color similarity and lighting variation. J. Vis. Commun. Image Represent. 2018, 53, 100–110.
Diallo, R.; Edalo, C.; Awe, O.O. Machine Learning Evaluation of Imbalanced Health Data: A Comparative Analysis of Balanced Accuracy, MCC, and F1 Score. In Practical Statistical Learning and Data Science Methods: Case Studies from LISA 2020 Global Network, USA; Springer Nature: Cham, Switzerland, 2024; pp. 283–312.
Dau Sy, H.; Thi, P.D.; Gia, H.V.; Nguyen An, K.L. Automated fabric defect classification in textile manufacturing using advanced optical and deep learning techniques. Int. J. Adv. Manuf. Technol. 2025, 137, 2963–2977.
Bouzidi, S.; Hcini, G.; Jdey, I.; Drira, F. Convolutional neural networks and vision transformers for fashion MNIST classification: A literature review. arXiv 2024, arXiv:2406.03478.
Würschinger, H.; Mühlbauer, M.; Winter, M.; Engelbrecht, M.; Hanenkamp, N. Implementation and potentials of a machine vision system in a series production using deep learning and low-cost hardware. In Proceedings of the Procedia CIRP, Virtual Conference, 24–26 November 2020; Volume 90, pp. 611–616.
Patil, R.; Gudivada, V. A review of current trends, techniques, and challenges in large language models (LLMs). Appl. Sci. 2024, 14, 2074.
Paithane, P. Optimize multiscale feature hybrid-net deep learning approach used for automatic pancreas image segmentation. Mach. Vis. Appl. 2024, 35, 135.
Paithane, P.M. Random forest algorithm use for crop recommendation. ITEGAM-JETIA 2023, 9, 34–41.
Nani, P.; Das, S.; Dey, S. Enhancing object recognition: A comprehensive analysis of CNN based deep learning models considering lighting conditions and perspectives. Evol. Intell. 2025, 18, 72.
Furferi, R. Colour classification method for recycled melange fabrics. J. Appl. Sci. 2011, 11, 236–246.
Khonina, S.N.; Kazanskiy, N.L.; Oseledets, I.V.; Nikonorov, A.V.; Butt, M.A. Synergy between artificial intelligence and hyperspectral imagining—A review. Technologies 2024, 12, 163.
Mumuni, A.; Mumuni, F. Data augmentation: A comprehensive survey of modern approaches. Array 2022, 16, 100258.
Salem, F.M. Gated RNN: Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks. In Recurrent Neural Networks: From Simple to Gated Architectures; Springer International Publishing: Cham, Switzerland, 2021; pp. 85–100.
Khan, A.; AlBarri, S.; Manzoor, M.A. Contrastive self-supervised learning: A survey on different architectures. In Proceedings of the 2022 2nd International Conference on Artificial Intelligence (ICAI), Kandy, Sri Lanka, 24–26 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–6.
Atliha, V.; Šešok, D. Comparison of VGG and ResNet used as Encoders for Image Captioning. In Proceedings of the 2020 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream), Šiauliai, Lithuania, 23 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–4.
Patil, D.; Asra, S. Fabric Defect Detection Systems and Methods in India: A Comprehensive Review. In Proceedings of the 2024 International Conference on Emerging Techniques in Computational Intelligence (ICETCI), Pune, India, 29–30 August 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 269–272.
Xu, C.; Xu, L.; Luo, K.; Zhang, J.; Sitahong, A.; Yang, M.; Zhang, C. The essence and applications of machine vision inspection for textile industry: A review. J. Text. Inst. 2024, 116, 2286–2310.
Malhotra, Y. AI-Machine Learning-Deep Learning-NLP-RPA: Executive Guide: Including Deep Learning, Natural Language Processing, Autonomous Cars, Robotic Process Automation; MIT Computer Science & AI Lab: Cambridge, MA, USA, 2018.
Sellitto, M.A.; Valladares, D.R.F.; Pastore, E.; Alfieri, A. Comparing competitive priorities of slow fashion and fast fashion operations of large retailers in an emerging economy. Glob. J. Flex. Syst. Manag. 2022, 23, 1–19.
Figure 1. PRISMA chart, adapted in this work for the qualitative analysis of the scientific literature dealing with AI-based colour classification of textiles.
Figure 2. Documents dealing with the topic of AI in fabric classification based on colour.
Figure 3. Example of classification in terms of colour performed by the Company Manteco S.p.A. located in Prato (Italy). In this example, four classes of the “pink”, “red”, and “blue” families are depicted.
Figure 4. Architecture of a CNN used for colour classification. Input data are the L*, a*, and b* coordinates of the acquired fabric (computed starting from RGB images or spectral data). The pertaining class of the fabric is the network output.
Figure 5. Architecture of an RNN for colour classification. Input data are the L*, a*, and b* coordinates of the acquired fabric (computed starting from RGB images or spectral data). The output consists of the fabric colour class.
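Since Figures 4 and 5 both take L*, a*, and b* coordinates as input, it may help to show one simple way such features can be computed from an RGB acquisition. The snippet below uses scikit-image and reduces the image to its mean CIELAB coordinates; this is an illustrative choice, not the exact feature extraction used in the reviewed works.

```python
# Minimal sketch (illustrative): computing L*, a*, b* input features, as in
# Figures 4 and 5, from an RGB fabric image using scikit-image.
import numpy as np
from skimage import color

rgb = np.random.rand(64, 64, 3)   # stand-in for an acquired fabric image, values in [0, 1]
lab = color.rgb2lab(rgb)          # shape (64, 64, 3): L* in [0, 100], a*/b* roughly [-128, 127]
l_mean, a_mean, b_mean = lab.reshape(-1, 3).mean(axis=0)
print(f"Mean CIELAB coordinates: L*={l_mean:.1f}, a*={a_mean:.1f}, b*={b_mean:.1f}")
```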
Figure 6. Comparison of model performance metrics (mean across folds) for ANN, CNN, and RNN architectures applied to textile colour classification.
Figure 7. Proposed roadmap.
Table 1. Graph representations.
Macro-Area | Number of Occurrences
Convolutional Neural Networks (CNNs) | 170
Traditional Artificial Neural Networks | 34
Recurrent Neural Networks (RNNs) | 6
Table 2. Main advantages and disadvantages linked to the use of CNNs for classification of fabrics based on colour.
Advantages | Description | Disadvantages | Description
High Accuracy | CNNs’ capacity to extract hierarchical features renders them highly effective at spotting small variations in colour hues and patterns. | Data Dependence | Large, labelled datasets are necessary. Unbalanced or small datasets may cause overfitting.
Robustness to Noise | Advanced CNN designs use numerous feature layers and pre-processing approaches to accommodate changes in lighting, shadows, and fabric texture. | Computational Intensity | CNN training is a resource-intensive process for sophisticated models, requiring high processing capacity such as GPUs or TPUs.
Automatic Feature Extraction | CNNs simplify development and improve scalability by automatically learning the pertinent features from raw images, in contrast to traditional methods that call for laborious feature engineering. | Susceptibility to Representation Issues | If pre-processing is not used, colour fluctuations brought on by various lighting conditions or irregular image acquisition may result in misclassification.
Adaptability | CNNs are resource-efficient for tasks like fabric colour classification because transfer learning enables them to apply previously trained models to new datasets with sparse input. | Overfitting to Patterns | Instead of concentrating only on colour, CNNs can occasionally be overfitted to the texture patterns in textiles, which lowers their capacity for generalization.
Integration with Ensemble Methods | When CNNs and ensemble approaches are combined, generalization is enhanced and overfitting is decreased. This is especially advantageous for textiles with a variety of colours or textures. | Interpretability Challenges | The network design makes it challenging to comprehend the reasoning behind a specific categorization choice, which can be problematic for quality assurance procedures.
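Table 2 notes that transfer learning makes CNNs resource-efficient for fabric colour classification. The sketch below shows the usual recipe: freeze a pretrained backbone and retrain only a new classification head. The torchvision ResNet-18 backbone and the ten colour families (mirroring Table 4) are illustrative assumptions, not the configuration evaluated in this study.

```python
# Minimal transfer-learning sketch (illustrative only): reuse a pretrained
# backbone and retrain only a new head for 10 colour families.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained ImageNet weights (torchvision >= 0.13 weights API assumed)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():                 # freeze the pretrained feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)   # new trainable colour-family head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch of fabric images
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```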
Table 3. Main advantages and disadvantages linked to the use of RNNs for classification of fabrics based on colour.
Advantages | Description | Disadvantages | Description
Sequential Data Processing | RNNs are well suited to tasks involving sequential dependencies, such as examining a sequence of pixel intensities or patterns in images. This is useful when colour gradients or patterns extend across image pixels. | High Computational Costs | RNNs, like CNNs, are computationally intensive, especially on large datasets of high-resolution fabric images.
Contextual Understanding | RNNs can improve classification accuracy by exploiting contextual information; sequential data make it easier to capture subtle texture changes and fabric colour transitions. | Sensitivity to Data Quality | RNNs are prone to overfitting, particularly when there is noise or imbalance in the training dataset. This is crucial in textile manufacturing, since fabric samples may have different perspectives or uneven lighting.
Adaptability | Variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs) mitigate the vanishing gradient issue, enable effective learning of long-term dependencies, and can be useful for identifying intricate colour patterns. | Gradient Problem | Gradient problems may cause standard RNNs to perform poorly on long sequences. Although lessened by GRUs and LSTMs, these solutions make the model more complex.
Automation in Complex Scenarios | Compared to conventional machine learning models, RNN-based models are better at adjusting to changing fabric orientations and dynamic lighting conditions. | Training Times | RNNs frequently require more time to train than feedforward networks, limiting their implementation in industrial situations where speed is crucial.
Table 4. Dataset used to test the ANN-based, CNN, and RNN algorithms.
Families | Family Number | Classes | Number of Fabrics in the Target Set | Number of Fabrics in the Training Set | Number of Fabrics for Each Class (Training Set)
White | 1 | 1, 2, 3, 4 | 4 | 82 | 20, 18, 21, 23
Beige | 2 | 5, 6, 7, 8 | 4 | 78 | 17, 22, 21, 18
Brown | 3 | 9, 10, 11, 12 | 4 | 84 | 21, 19, 21, 23
Orange | 4 | 13, 14, 15, 16 | 4 | 76 | 17, 19, 20, 21
Pink | 5 | 17, 18, 19, 20 | 4 | 77 | 18, 18, 21, 20
Red | 6 | 21, 22, 23, 24 | 4 | 85 | 23, 20, 20, 22
Violet | 7 | 25, 26, 27, 28 | 4 | 81 | 19, 18, 23, 21
Blue | 8 | 29, 30, 31, 32 | 4 | 90 | 23, 21, 22, 24
Green | 9 | 33, 34, 35, 36 | 4 | 75 | 17, 21, 18, 18
Grey/black | 10 | 37, 38, 39, 40 | 4 | 72 | 16, 15, 20, 21
Table 5. Values obtained with three different methods.
Meta-analysis of intelligent textile colour management techniques | Recognition of AI as a standard for colour tasks | 2025 | This Review | CNN, RNN, SSL, FL (theoretical) | Comprehensive roadmap, new trends, and benchmarking | Strategic blueprint for future circular AI systems