Article

Using Deep Principal Components Analysis-Based Neural Networks for Fabric Pilling Classification

1 Department of Industrial Education and Technology, National Changhua University of Education, Changhua County 50007, Taiwan
2 Department of Computer Science & Information Engineering, National Chin-Yi University of Technology, Taichung City 41170, Taiwan
* Author to whom correspondence should be addressed.
Electronics 2019, 8(5), 474; https://doi.org/10.3390/electronics8050474
Submission received: 16 March 2019 / Revised: 21 April 2019 / Accepted: 25 April 2019 / Published: 28 April 2019
(This article belongs to the Section Artificial Intelligence)

Abstract

A manufacturer’s fabric first undergoes an abrasion test and manual visual inspection to grade the fabric prior to shipment and to ensure that no defects are present. Manual visual classification consumes considerable human resources. Furthermore, prolonged visual inspection strains the eyes, often causing occupational injuries and reducing the efficiency of the entire operation. To overcome these problems, this study proposes deep principal components analysis-based neural networks (DPCANNs) for fabric pilling classification. In the proposed DPCANN, the characteristics of the pilling are automatically captured using deep principal components analysis (DPCA), and the pilling grade is identified using a neural network or a support vector machine (SVM). The experimental results showed that the proposed DPCANN achieves an average accuracy of 99.7% in pilling grade classification, which meets the needs of the industry. The results also confirmed that the proposed pilling classification method is superior to other methods.

1. Introduction

The traditional standard fabric inspection method is based on determining the extent of wear resistance and defects in the fabric through manual visual inspection. This method is susceptible to judgment errors in the pilling classification of knitted fabric due to several human factors such as fatigue of the inspector’s eyes and lack of experience. In addition, visual inspection is too subjective and insubstantial. Although accumulated experience can make up for errors in testing, it takes a lot of training and time for inspectors to become experts in visual inspection.
In the past, the pilling evaluation of knitted fabric was performed by manual observation, which caused misjudgments and led to a decrease in efficiency. In order to overcome these problems, several studies have utilized various image processing methods and classifiers to improve the pilling evaluation of knitted fabric. For instance, Deng et al. [1] used the multi-scale two-dimensional dual-tree complex wavelet transform to extract information on the pilling of knitted fabric. This method extracted six parameters at different scales: (1) the energy ratio of the pilling quality; (2) the area of the pilling; (3) the standard deviation of the pilling area; (4) the total number of pills; (5) the standard deviation of the pilling height; and (6) the deviation coefficient of the pilling position. The Levenberg–Marquardt backpropagation (LMBP) neural network was then used as a classifier to distinguish pilling grades. Meanwhile, Saharkhiz et al. [2] used the two-dimensional fast Fourier transform to process the fabric image, and low-pass filtering to blur the texture of the fabric’s surface; then, three parameters, namely, the number, the volume, and the area of the pilling, were extracted from the surface of the knitted fabric. These three parameters were used to develop clustering algorithms for pilling evaluation of the knitted fabric. Similarly, Eldessouki et al. [3] used four strategies, namely, binarization, cutting, quantification, and classification, as an objective method to evaluate the pilling of a knitted fabric. The study extracted the following four parameters: (1) the number of pills; (2) the average pilling area; (3) the area ratio of pilling; and (4) the density of pilling. Finally, an artificial neural network (ANN) was used as a classifier. Moreover, Furferi et al. 
[4] used binarization and B-splines to pre-process the fabric’s image, and extracted five parameters, namely, (1) entropy curve; (2) total skewness; (3) total kurtosis; (4) coefficient of variation; and (5) brightness of the fabric. The ANN was also used as a classifier to evaluate the knitted fabric. Yun et al. [5] used the five standards of the American Society of Testing and Materials (ASTM) for knitted fabric as a reference for research. The fast Fourier and fast wavelet filtering methods were used to process the fabric’s image. A total of three parameters, namely, the number of pilling, the total pixel area, and the total pilling image grayscale value were extracted. Finally, using a statistical method, a grayscale image rule base was established. Meanwhile, Techniková et al. [6,7] proposed a system to objectively evaluate the pilling of unicolor fabrics and fabrics with complex patterns. The pilling evaluation system included a 3D fabric surface reconstructed from shading based on a gradient field method, the use of image analysis tools for pilling detection, and an objective estimation of the pilling grade. The conventional image processing methods mentioned above required the related features of the identified object to be defined in advance.
Due to recent developments in deep learning technology, more people are able to participate in the research of convolutional neural networks (CNNs). As a result, the application of image recognition has become more extensive and more practical. CNNs were introduced in 1998, when LeCun et al. [8] proposed the LeNet-5 model, which used the backpropagation (BP) algorithm to adjust the parameters of the neural network. This was considered a fairly successful convolutional neural network in the early days of CNNs. Later, Krizhevsky et al. [9] proposed the AlexNet model, which greatly deepened the network architecture. It used ReLU as the activation function and introduced the dropout technique. This marked the beginning of the deep learning era. In terms of the convolutional layer architecture, AlexNet was much deeper than LeNet; it constructed a complex model with 60 million parameters and proposed innovative changes to the architecture. Subsequently, Lin et al. [10] proposed a multi-layer perceptron (MLP) convolutional layer to replace the traditional convolutional layer, attaching a fully connected layer to the convolution, which enhanced the ability of each convolutional layer to identify different characteristics. Szegedy et al. [11] proposed GoogLeNet, which improved on the MLP convolutional layer by developing a 1 × 1 convolution kernel to achieve cross-channel message exchange and dimension reduction. As networks deepened, the degradation problem became more obvious; that is, the deepened network could not learn the correct features, which reduced accuracy. To solve this problem, He et al. [12] proposed ResNet, which maps low-level features and connects them directly to high-level layers. 
This allowed the deepened network to retain the representation ability of the first few layers, instead of learning from scratch, making the network less prone to degradation and deep networks easier to train. The latter part of a CNN is a classifier; the fully connected layers of the classifier are the most heavily parameterized part of the CNN, which makes them prone to over-fitting and demanding of hardware resources.
To solve the problems encountered by the different methods mentioned, this study proposes a pilling classification for knitted fabric using deep principal components analysis-based neural networks (DPCANNs). The proposed DPCANN has the automatic extraction feature of a convolutional neural network and it does not require a large amount of hardware resources. The deep principal components analysis (DPCA) of the DPCANN was able to automatically capture the characteristics of the pilling. The neural network or support vector machine (SVM) was used to evaluate the pilling grade. The pilling classification was also performed by a classifier. Further, the architecture of the DPCANN approximates the effects of the CNN with fast execution speed.

2. The Pilling Classification of Knitted Fabric

In order to establish a database, the textiles used in this study complied with the ISO 12945-2:2000 Martindale wear standards for textile grading. The fabric is clamped on the Martindale wear tester, and the face of the sample and the face of another fabric are rubbed against each other according to a certain geometric pattern. The rubbing trajectory begins as a straight line, gradually becomes an ellipse, and finally becomes a straight line perpendicular to the original one; the friction is carried out under gentle pressure until the specified number of turns is reached. After this action, under standard visual conditions, the rubbed fabric is compared with the standard images and the original cloth to observe the extent of its pilling and other appearance changes. In this study, the five grades of pilling detection were obtained by human visual inspection. Although this detection method is subjective, accumulated experience can reduce the extent of detection error. Table 1 shows the five grades for the pilling test under the Martindale standard test: grade one refers to very serious pilling; grade two, serious pilling; grade three, medium pilling; grade four, light pilling; and grade five, no pilling.

3. The Proposed Method

In this section, this study proposes an effective method for the pilling classification of knitted fabric using deep principal components analysis-based neural networks (DPCANNs). The proposed DPCANN is divided into two major steps, as shown in Figure 1. The first step involves automatic extraction of the characteristics of the pilling through deep principal components analysis (DPCA); the second step involves the use of a neural network or SVM classifier to determine the grade of the pilling on the fabric.

3.1. Deep Principal Components Analysis (DPCA)

This section describes the architecture of deep principal components analysis (DPCA), which is designed to automate the extraction of pilling features. Figure 2 shows the three stages of the DPCA.

3.1.1. The First Stage

This study used N training images of size $m \times n$ as the input to the DPCA, with patches of size $K_1 \times K_2$ sampled at stride $s$.
During the first stage of DPCA, a $K_1 \times K_2$ patch $x$ was sampled around each pixel, and all the patch samples of the current image were collected. The number of samples per image is given by the following equation:
$\left(\frac{m - K_1}{s} + 1\right) \times \left(\frac{n - K_2}{s} + 1\right)$ (1)
The mean value of the samples was then subtracted to obtain the following:
$X = [\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_N]$ (2)
Given $L_1$ filters in the first stage, PCA seeks an orthonormal family of filters $V$ that minimizes the reconstruction error:
$\min_{V} \left\| X - V V^T X \right\|_F^2, \quad \text{s.t.} \; V^T V = I_{L_1}$ (3)
The eigenvectors of $X X^T$ were then calculated, and the first $L_1$ eigenvectors were selected as the characteristics of the first stage. The filters are represented by the following equation, where $q_l(X X^T)$ denotes the $l$th principal eigenvector of $X X^T$ and $\mathrm{mat}_{K_1, K_2}(\cdot)$ reshapes a vector into a $K_1 \times K_2$ matrix:
$W_l^1 = \mathrm{mat}_{K_1, K_2}\left(q_l(X X^T)\right) \in \mathbb{R}^{K_1 \times K_2}, \quad l = 1, 2, \ldots, L_1$ (4)
The $l$th filter output of this stage is shown in Equation (5), where the symbol $*$ indicates 2D convolution and the boundary of $I_i$ is zero-padded so that $I_i^l$ matches the size of $I_i$:
$I_i^l = I_i * W_l^1, \quad i = 1, 2, \ldots, N$ (5)
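As an illustration of this first stage, the following sketch learns PCA filters from image patches and applies them by zero-padded convolution. It is a minimal NumPy implementation under stated assumptions: a sampling stride of 1, and illustrative function names (`learn_pca_filters`, `convolve_same`) that do not come from the paper.

```python
import numpy as np

def learn_pca_filters(images, k1=5, k2=5, n_filters=8):
    """Learn first-stage DPCA filters from a list of 2-D grayscale images.

    Stride-1 patch sampling is an assumption; the paper samples a
    K1 x K2 patch around each pixel.
    """
    patches = []
    for img in images:
        m, n = img.shape
        # Collect all K1 x K2 patches (stride 1 assumed).
        for i in range(m - k1 + 1):
            for j in range(n - k2 + 1):
                p = img[i:i + k1, j:j + k2].ravel()
                patches.append(p - p.mean())   # remove the patch mean
    X = np.stack(patches, axis=1)              # shape (K1*K2, total patches)
    # The principal eigenvectors of X X^T, reshaped to K1 x K2, give the filters.
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)
    order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
    return [eigvecs[:, order[l]].reshape(k1, k2) for l in range(n_filters)]

def convolve_same(img, w):
    """Zero-padded 2-D convolution so the output matches the input size."""
    k1, k2 = w.shape
    pad = np.pad(img, ((k1 // 2, k1 // 2), (k2 // 2, k2 // 2)))
    out = np.zeros_like(img, dtype=float)
    wf = w[::-1, ::-1]                          # flip the kernel for true convolution
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + k1, j:j + k2] * wf)
    return out
```

Because the filters are eigenvectors of the patch scatter matrix, they form an orthonormal set, and the leading ones capture the dominant local structure of the pilling images.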

3.1.2. The Second Stage

The second stage is basically similar to the first stage. The output of the first stage was collected, and the mean value was subtracted. This is represented by the following equation.
$Y = [\tilde{Y}_1, \tilde{Y}_2, \ldots, \tilde{Y}_{L_1}]$ (6)
The second-stage filters are described as follows:
$W_l^2 = \mathrm{mat}_{K_1, K_2}\left(q_l(Y Y^T)\right) \in \mathbb{R}^{K_1 \times K_2}, \quad l = 1, 2, \ldots, L_2$ (7)
Each first-stage output $I_i^l$ entering the second stage is convolved with every second-stage filter:
$O_i^l = \left\{ I_i^l * W_\ell^2 \right\}_{\ell=1}^{L_2}$ (8)
This stage produces $L_1 \times L_2$ output feature maps for each input image.

3.1.3. The Third Stage

The third stage is the output stage. The output of the second stage was first binarized in the third stage using the following equation.
$\left\{ H(I_i^l * W_\ell^2) \right\}_{\ell=1}^{L_2}$ (9)
where $H(\cdot)$ is the Heaviside step function, whose value is one for positive inputs and zero otherwise.
Next, the binary vector of length $L_2$ is converted to a decimal value; the binarized second-stage outputs are accumulated as follows:
$T_i^l = \sum_{\ell=1}^{L_2} 2^{\ell-1} H(I_i^l * W_\ell^2)$ (10)
Each $T_i^l$ takes values in the range 0 to $2^{L_2} - 1$. Each $T_i^l$ was divided into $B$ equal blocks; the histogram of each block was then computed, and all the block histograms were concatenated, denoted $\mathrm{Bhist}(T_i^l)$. The feature vector is defined as follows.
$f_i = \left[\mathrm{Bhist}(T_i^1), \ldots, \mathrm{Bhist}(T_i^{L_1})\right]^T \in \mathbb{R}^{(2^{L_2}) L_1 B}$ (11)
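The third stage can be sketched as follows: the second-stage maps are binarized, packed bit-wise into a decimal image, and pooled with block histograms. This is a hedged NumPy sketch; the block layout (equal horizontal strips) and the function name `dpca_features` are assumptions, since the block partitioning is not fully specified here.

```python
import numpy as np

def dpca_features(stage2_maps, n_blocks=4):
    """Hash and pool the second-stage outputs for one first-stage map.

    stage2_maps: list of L2 same-sized 2-D arrays (the maps I_i^l * W^2).
    n_blocks (B) and the strip-shaped blocks are assumptions.
    """
    L2 = len(stage2_maps)
    # Heaviside binarization, then pack the L2 bits into one decimal "image" T;
    # its values lie in [0, 2^L2 - 1].
    T = np.zeros_like(stage2_maps[0], dtype=int)
    for l, o in enumerate(stage2_maps):
        T += (2 ** l) * (o > 0).astype(int)
    # Split T into B blocks, histogram each block, and concatenate (Bhist).
    feats = []
    for block in np.array_split(T, n_blocks, axis=0):
        hist, _ = np.histogram(block, bins=np.arange(2 ** L2 + 1))
        feats.append(hist)
    return np.concatenate(feats)    # length (2^L2) * B per first-stage map
```

Concatenating this vector over all $L_1$ first-stage maps yields the final feature of length $(2^{L_2}) L_1 B$ that is passed to the classifier.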

3.2. Classifier

Using the features obtained automatically by DPCA, the neural network classifier and SVM classifier were used to identify the pilling classification of the knitted fabric.

3.2.1. Neural Network Classifier

A neural network [13] is often used as a classifier because it is simple and easily implemented; therefore, this study employed the neural network as a classifier. The features extracted by DPCA were used as inputs to the neural network classifier, which involves a great number of neurons. A single neuron computes the weighted sum of its input signals, which consist of the output signals of the previous layer, represented as $y_{N_O} = f(x) = \sum_{i=1}^{N_I} w_{ij} x_i$. The weights were initially set to random numbers between +1 and −1; the aim of training was to tune the weights of the neural network classifier.
The multi-layer neural network consisted of an input layer, a hidden layer, and an output layer.
  • Input Layer: The relationship between the input and output signals is represented by the following equation:
    $o_i^{(I)}(k) = x_i(k)$ (12)
  • Hidden Layer: The input signal at node k is expressed as:
    $net_j^{(H)}(k) = \sum_{i=1}^{N_I} o_i^{(I)}(k) \, w_{ij}$ (13)
    where $w_{ij}$, $i = 1, 2, \ldots, N_I$, $j = 1, 2, \ldots, N_H$, represents the weights between the input and hidden layers.
    A sigmoid transfer function transferred the input to the output of the hidden layer. Hence, the output at node k is expressed as:
    $o_j^{(H)}(k) = 1 / \left(1 + \exp\left(-net_j^{(H)}(k)\right)\right)$ (14)
  • Output Layer: The operation in the output layer is given as:
    $net_l^{(O)}(k) = \sum_{j=1}^{N_H} o_j^{(H)}(k) \, w_{jl}$ (15)
    where $w_{jl}$, $l = 1, 2, \ldots, N_O$, represents the weights between the hidden and output layers. The output is expressed as:
    $y_l = o_l^{(O)}(k) = 1 / \left(1 + \exp\left(-net_l^{(O)}(k)\right)\right)$ (16)
In this paper, the backpropagation (BP) algorithm was used to adjust the weights of the neural network classifier.
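A minimal forward pass through the input, hidden, and output layers described above can be sketched as follows. This is an illustrative NumPy implementation only; the layer sizes and the function name `mlp_forward` are assumptions, and the backpropagation training step is omitted.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid transfer function used by the hidden and output layers."""
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, w_ih, w_ho):
    """One forward pass of the classifier.

    x: input feature vector; w_ih: (N_I, N_H) input-to-hidden weights;
    w_ho: (N_H, N_O) hidden-to-output weights. In the paper, the weights
    would be tuned by backpropagation; only the forward pass is shown here.
    """
    o_input = x                  # input layer passes the features through unchanged
    net_h = o_input @ w_ih       # weighted sum into each hidden node
    o_h = sigmoid(net_h)         # sigmoid transfer at the hidden layer
    net_o = o_h @ w_ho           # weighted sum into each output node
    return sigmoid(net_o)        # sigmoid output, one value per class

# Random weights in [-1, 1], as the paper initializes them.
rng = np.random.RandomState(0)
w_ih = rng.uniform(-1, 1, size=(10, 6))
w_ho = rng.uniform(-1, 1, size=(6, 4))   # 4 outputs, one per pilling grade 2-5
y = mlp_forward(rng.rand(10), w_ih, w_ho)
```

The class with the largest output activation would be taken as the predicted pilling grade.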

4. Support Vector Machine (SVM)

The support vector machine (SVM) [14] is a generalized linear classifier for supervised learning that has been widely used in academic research in fields such as text recognition, speech recognition, image recognition, financial time series analysis, and bioinformatics. In SVM, the data set used to train the classification model is transformed into a high-dimensional space, which is then used to find the optimal separating hyperplane (OSH), i.e., the decision function with the maximum margin between the boundaries of the classes. The data classified by the SVM hyperplane are distinguished by sign: if the left-hand side of Equation (17) is positive for a data point, the point belongs to the positive region; otherwise, it belongs to the negative region.
$w \cdot x + b = 0$ (17)
where $w$ represents the vector perpendicular to the classification hyperplane, and $b$ is the bias (displacement) term.
The basic concept of SVM can be written as the following optimization problem:
$\min_{w, b} \; \frac{1}{2} \|w\|^2, \quad \text{s.t.} \; y_i(w^T x_i + b) \geq 1, \quad i = 1, 2, \ldots, N, \; x_i \in A \cup B$ (18)
In this study, slack variables ($\xi_i$) were added to Equation (18), yielding a soft margin classifier. The soft margin tolerates a small number of misclassified points, such as outliers, and can therefore handle most data sets.
$\min_{w, b, \xi} \; \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{N} \xi_i, \quad \text{s.t.} \; y_i(w^T x_i + b) \geq 1 - \xi_i, \; \xi_i \geq 0, \quad i = 1, 2, \ldots, N$ (19)
Ideal data are linearly separable; however, real training data are mostly nonlinear and inseparable. For this study, the input space was therefore mapped to a corresponding feature space. To transfer the data to the high-dimensional space, the data were passed through a mapping function $\Phi: x \in \mathbb{R}^n \rightarrow \Phi(x) \in \mathbb{R}^N$ (where $N > n$). The kernel function is defined as $K(x_i, x_j) = \Phi(x_i) \cdot \Phi(x_j)$, and the common kernel functions are as follows:
  • Linear: When the data are linearly separable, mapping them to a high-dimensional space is not needed. The kernel formula is:
    $K(x_i, x_j) = x_i^T x_j$ (20)
  • Polynomial: The polynomial kernel has degree $d$; when $d = 1$, it is equivalent to the linear kernel up to a constant. The kernel formula is:
    $K(x_i, x_j) = (x_i^T x_j + 1)^d$ (21)
  • Radial Basis Function (RBF): The RBF kernel is the most commonly used SVM kernel; according to some experimental results, its classification ability is better than that of the other kernel functions. The kernel formula is:
    $K(x_i, x_j) = \exp\left(-\gamma \|x_i - x_j\|^2\right)$ (22)
  • Sigmoid: The kernel formula is:
    $K(x_i, x_j) = \tanh\left(\gamma \, x_i^T x_j + r\right)$ (23)
    where $r$ is an offset parameter.
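The four kernel functions above can be written compactly as follows. This is a hedged sketch: the parameter defaults (`d`, `gamma`, `r`) are illustrative values, not values taken from this study.

```python
import numpy as np

def linear_kernel(xi, xj):
    """Linear kernel: plain inner product, no feature-space mapping."""
    return xi @ xj

def poly_kernel(xi, xj, d=2):
    """Polynomial kernel of degree d (illustrative default d=2)."""
    return (xi @ xj + 1) ** d

def rbf_kernel(xi, xj, gamma=0.5):
    """Radial basis function kernel with width parameter gamma."""
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

def sigmoid_kernel(xi, xj, gamma=0.5, r=0.0):
    """Sigmoid kernel with scale gamma and offset r."""
    return np.tanh(gamma * (xi @ xj) + r)
```

In practice an SVM library evaluates one of these kernels on every pair of training points to build the Gram matrix; this study used the linear kernel (Table 2).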

5. Experimental Results

To verify the performance of the proposed classifier, two experiments were performed: one involved 320 fabric images and the other 1600 fabric images. The resolution of the images was 225 dpi, and the size of each image was 320 × 240 pixels. In our experiments, the fabric images of these two sets came from a manufacturer in Taiwan, which only provided fabric images from grade 2 to grade 5. For each grade of fabric pilling in the database, 80 and 400 images were obtained, for totals of 320 and 1600 fabric images in the first and second experiments, respectively. In general, training and testing samples are randomly chosen from the database. With purely random selection, however, the selected images can cluster in particular grades of fabric pilling, and grades under-represented in the training data will produce poor verification results. In order to avoid uneven training and testing data at each grade, we randomly chose 80% of the images of each grade of fabric pilling in the database as training samples and the remaining 20% as testing samples. Several researchers, such as Huang and Fu [15] and Lee and Lin [16], also adopted this method to obtain training and testing samples. As for the ratio of training samples to testing samples, researchers can determine this according to the total image database obtained. In this study, since only a small image database (i.e., 320 images) was available at the beginning, 80% training samples and 20% testing samples of each grade of fabric pilling were used. This study performed 10 verifications for each fabric image set; that is, a total of 10 different training and testing image sets were established for each experiment. The selection of various percentages of each grade of fabric pilling will be taken into consideration and discussed in our future work.
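The per-grade 80/20 sampling described above can be sketched as follows. This is an illustrative NumPy implementation; the function name `stratified_split` and the fixed random seed are assumptions made for reproducibility.

```python
import numpy as np

def stratified_split(labels, train_frac=0.8, seed=0):
    """Split image indices so each grade contributes train_frac to training.

    labels: array of grade labels, one per image. Returns (train, test)
    index arrays; every grade is represented in both sets.
    """
    rng = np.random.RandomState(seed)
    train_idx, test_idx = [], []
    for grade in np.unique(labels):
        idx = np.flatnonzero(labels == grade)   # all images of this grade
        rng.shuffle(idx)                        # random order within the grade
        cut = int(round(train_frac * len(idx)))
        train_idx.extend(idx[:cut])
        test_idx.extend(idx[cut:])
    return np.array(train_idx), np.array(test_idx)

# 80 images per grade for grades 2-5, as in the first experiment.
labels = np.repeat([2, 3, 4, 5], 80)
tr, te = stratified_split(labels)
```

Repeating this with ten different seeds would reproduce the ten training/testing sets used for the reported averages.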
The first experiment involved a total of 320 fabric images. Eighty percent of the data for each grade were randomly selected as training samples, and twenty percent were chosen as testing samples. The DPCANN was then used to identify the pilling grade. The set parameters of the proposed DPCANN are shown in Table 2. The detection results of the proposed DPCANN are shown in Table 3. The total accuracy rate was the average of the accuracy rates of 10 testing data sets. The overall average accuracy rates using the proposed DPCANN with neural network classifier and SVM classifier were 98.6% and 99.7%, respectively. The results obtained met the needs of the industry.
The second experiment involved a total of 1600 fabric images, obtained under different illumination conditions. To verify the performance of the proposed classifier, 400 images were obtained for each grade of fabric pilling in the database, for a total of 1600 fabric images. In this experiment, we used the same method as in the first experiment to obtain the training and testing samples and to set the parameters of the proposed DPCANN method. Table 4 shows the detection results of the proposed DPCANN. The overall average accuracy rates using the proposed DPCANN with the neural network classifier and the SVM classifier were 98.68% and 99.84%, respectively.
The study by Huang and Fu [15] performed textile grading of fleece based on pilling assessment using two image processing methods and two machine learning methods. For the image processing methods, the first involved the discrete Fourier transform combined with Gaussian filtering, and the second involved the Daubechies wavelet. For the machine learning methods, the ANN and the SVM were used to objectively solve the textile grading problem. Meanwhile, Lee and Lin [16] proposed a novel type-2 fuzzy cerebellar model articulation controller (T2FCMAC) based on a hybrid group strategy and artificial bee colony (HGSABC) algorithm to evaluate the pilling grade of knitted fabric. The proposed T2FCMAC classifier embedded a type-2 fuzzy system within a traditional cerebellar model articulation controller (CMAC). The proposed HGSABC learning algorithm was used to adjust the parameters of the T2FCMAC classifier and prevent it from falling into a local optimum; a group search strategy was used to obtain balanced search capabilities and improve the performance of the artificial bee colony algorithm. Moreover, Fu [17] combined the fast Fourier transform with Gaussian filtering for image pre-processing; the k-NN and enhanced k-NN [18] classifiers were then used to evaluate the pilling grade of knitted fabric. For the k-NN and enhanced k-NN [18] classifiers, the value of k is not easily determined. In our experiments, a total of 10 different training and testing data sets were established for each experiment, and k values from 1 to 10 were tested on each of them. The experimental results show that, across the 10 training and testing data sets, the hyper-parameter k for the k-NN and enhanced k-NN [18] classifiers should be set to 2 to obtain the best average accuracy rate.
This study compared the results of DPCANN with those of other methods [2,3,4,15,16,17,18,19,20]. In order to make a fair comparison, we re-implemented the other methods for the classification of pilling grade. The other methods also randomly selected 80% of the images as training samples and the remaining 20% as testing samples, and the experiments were likewise performed ten times; the ten training and testing sample sets were the same as those in Table 3. Table 5 shows the results of the other methods. The results showed that the proposed method has a better average accuracy rate than the other methods in fabric pilling grade detection.

6. Conclusions

This study proposed the use of DPCANN to identify the pilling grade of knitted fabric. The proposed DPCANN has the automatic feature-capture ability of a CNN but does not require a large amount of hardware resources. The DPCA in the DPCANN automatically captures the characteristics of the pilling, and the neural network or SVM is then used to identify the pilling grade. This architecture approximates the effects of a CNN with a fast execution speed. The experimental results confirmed that the average accuracy of the proposed DPCANN for the pilling classification of knitted fabric reached 99.7% with the SVM classifier. Although the obtained results satisfy industry requirements, this study suggests that, to further improve the accuracy rate, the fusion of multiple DPCANN classifiers using fuzzy integrals should be adopted in future work.

Author Contributions

Conceptualization and Methodology, C.-S.Y., W.-J.C. and C.-J.L.; Software, C.-S.Y., C.-J.L.; Writing—Original Draft Preparation, C.-S.Y., C.-J.L.

Funding

This research was funded by the Ministry of Science and Technology of the Republic of China, Taiwan (No. MOST 107-2221-E-167-023).

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this manuscript.

References

  1. Deng, Z.; Wang, L.; Wang, X. An Integrated Method of Feature Extraction and Objective Evaluation of Fabric Pilling. J. Text. Inst. 2010, 102, 1–13. [Google Scholar] [CrossRef]
  2. Saharkhiz, S.; Abdorazaghi, M. The Performance of Different Clustering Methods in the Objective Assessment of Fabric Pilling. J. Eng. Fibers Fabr. 2012, 7, 35–41. [Google Scholar] [CrossRef]
  3. Eldessouki, M.; Hassan, M.; Bukhari, H.A.; Qashqari, K. Integrated Computer Vision and Soft Computing System for Classifying the Pilling Resistance of Knitted Fabrics. Fibres Text. East. Eur. 2014, 22, 106–112. [Google Scholar]
  4. Furferi, R.; Carfagni, M.; Governi, L.; Volpe, Y.; Bogani, P. Towards Automated and Objective Assessment of Fabric Pilling. Int. J. Adv. Robot. Syst. 2014, 11. [Google Scholar] [CrossRef]
  5. Yun, S.Y.; Kim, S.; Park, C.K. Development of an Objective Fabric Pilling Evaluation Method: Characterization of Pilling Using Image Analysis. Fibers Polym. 2013, 14, 832–837. [Google Scholar] [CrossRef]
  6. Techniková, L.; Tunák, M.; Janáček, J. Pilling Evaluation of Patterned Fabrics based on a Gradient Field Method. Indian J. Fibre Text. Res. 2016, 41, 97–101. [Google Scholar]
  7. Techniková, L.; Tunák, M.; Janáček, J. New Objective System of Pilling Evaluation for Various Types of Fabrics. J. Text. Inst. 2017, 108, 123–131. [Google Scholar] [CrossRef]
  8. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2323. [Google Scholar] [CrossRef]
  9. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012, 1–9. [Google Scholar] [CrossRef]
  10. Lin, M.; Chen, Q.; Yan, S. Network in Network. In Proceedings of the ICLR Conference, Scottsdale, AZ, USA, 2–4 May 2013; pp. 1–10. [Google Scholar]
  11. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  12. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  13. Lin, H.Y.; Lin, C.J. Using a Hybrid of Fuzzy Theory and Neural Network Filter for Single Image Dehazing. Appl. Intell. 2017, 47, 1099–1114. [Google Scholar] [CrossRef]
  14. Pal, M.; Foody, G.M. Feature Selection for Classification of Hyperspectral Data by SVM. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2297–2307. [Google Scholar] [CrossRef]
  15. Huang, M.L.; Fu, C.C. Applying Image Processing to the Textile Grading of Fleece Based on Pilling Assessment. Fibers 2018, 6, 73. [Google Scholar] [CrossRef]
  16. Lee, C.L.; Lin, C.J. Integrated Computer Vision and Type-2 Fuzzy CMAC Model for Classifying Pilling of Knitted Fabric. Electronics 2018, 7, 367. [Google Scholar] [CrossRef]
  17. Fu, C.C. Apply Image Processing Methods in Fabrics Objective Grading. Master’s Thesis, National Chin-Yi University of Technology, Taichung, Taiwan, 2017. [Google Scholar]
  18. Nguyen, B.P.; Tay, W.L.; Chui, C.K. Robust Biometric Recognition from Palm Depth Images for Gloved Hands. IEEE Trans. Hum. Mach. Syst. 2015, 45, 799–804. [Google Scholar] [CrossRef]
  19. Jing, J.; Zhang, Z.; Kang, X.; Jia, J. Objective Evaluation of Fabric Pilling Based on Wavelet Transform and the Local Binary Pattern. Text. Res. J. 2012, 82, 1880–1887. [Google Scholar] [CrossRef]
  20. Eldessouki, M.; Hassan, M. Adaptive Neuro-fuzzy System for Quantitative Evaluation of Woven Fabrics’ Pilling Resistance. Expert Syst. Appl. 2015, 42, 2098–2113. [Google Scholar] [CrossRef]
Figure 1. The proposed deep principal components analysis-based neural network (DPCANN) process for fabric pilling evaluation.
Figure 2. Diagram of deep principal components analysis (DPCA).
Table 1. Five grades of pilling detection.

Grade | One | Two | Three | Four | Five
Surface pilling | Very serious | Serious | Medium | Light | No pilling
(Sample images omitted.)
Table 2. The set parameters of the proposed DPCANN.

DPCA: number of filters = 8; patch size = 5 × 5
Neural network: number of hidden nodes = 200; learning rate = 0.1; momentum = 0.8; epochs = 250
SVM: linear kernel
Table 3. Results of the fabric pilling grade in the first experiment using the proposed DPCANN method.

Data Set | Neural Network | SVM
Data set 1 | 93.75% | 96.87%
Data set 2 | 96.87% | 100%
Data set 3 | 100% | 100%
Data set 4 | 100% | 100%
Data set 5 | 100% | 100%
Data set 6 | 95.31% | 100%
Data set 7 | 100% | 100%
Data set 8 | 100% | 100%
Data set 9 | 100% | 100%
Data set 10 | 100% | 100%
Average accuracy rate | 98.6% | 99.7%
Table 4. Results of the fabric pilling grade in the second experiment using the proposed DPCANN method.

Data Set | Neural Network | SVM
Data set 1 | 100% | 100%
Data set 2 | 98.5% | 100%
Data set 3 | 99.25% | 100%
Data set 4 | 98.93% | 100%
Data set 5 | 97.18% | 99.06%
Data set 6 | 98.93% | 100%
Data set 7 | 98.75% | 100%
Data set 8 | 98.43% | 100%
Data set 9 | 98.37% | 100%
Data set 10 | 98.45% | 99.38%
Average accuracy rate | 98.68% | 99.84%
Table 5. Comparison of the results of other pilling evaluation methods.

Method | Average Accuracy Rate
Saharkhiz and Abdorazaghi [2] | 96.8%
Eldessouki et al. [3] | 87.5%
Furferi et al. [4] | 94.3%
Huang and Fu [15] | 96.6%
Lee and Lin [16] | 97.3%
Jing et al. [19] | 95.0%
Eldessouki and Hassan [20] | 85.8%
Fu [17] using k-NN | 96.8%
Fu [17] using enhanced k-NN [18] | 97.1%
Proposed method using k-NN | 98.4%
Proposed method using enhanced k-NN [18] | 98.8%
Proposed method using neural network | 98.6%
Proposed method using SVM | 99.7%

Yang, C.-S.; Lin, C.-J.; Chen, W.-J. Using Deep Principal Components Analysis-Based Neural Networks for Fabric Pilling Classification. Electronics 2019, 8, 474. https://doi.org/10.3390/electronics8050474