Article

Fabric Defect Detection Based on Illumination Correction and Visual Salient Features

1 School of Artificial Intelligence and Computer, Jiangnan University, Wuxi 214122, China
2 School of Information and Engineering, Changzhou University, Changzhou 213164, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(18), 5147; https://doi.org/10.3390/s20185147
Submission received: 8 August 2020 / Revised: 3 September 2020 / Accepted: 8 September 2020 / Published: 9 September 2020
(This article belongs to the Section Sensing and Imaging)

Abstract

Aiming at the influence of uneven illumination on fabric feature extraction and the limitations of traditional frequency-based visual saliency algorithms, we propose a fabric defect detection method that combines illumination correction with visual salient features: (1) construct a multi-scale side window box (MS-BOX) filter to extract the illumination component of the image, correct the illumination with a two-dimensional gamma correction function in the global angle, and then enhance the local contrast of the image in the local angle; (2) use the L0 gradient minimization method to remove the background texture of fabric images and highlight the defects; (3) represent the fabric image as a quaternion image, in which each pixel is represented by a quaternion consisting of color, intensity, and edge features, and obtain the saliency map of the quaternion image with the two-dimensional fractional Fourier transform (2D-FRFT). Experiments show that our method achieves a higher overall recall rate for defect detection on star-patterned, box-patterned, and dot-patterned fabrics, and that its overall recall-precision trade-off is better than that of other existing methods.

1. Introduction

In the process of fabric production, defect detection is very important for the quality control of fabrics. Nowadays, fabric defect detection is mainly aimed at two kinds of fabrics: (1) fabrics with no complex texture pattern and a simple structure, mostly solid-color fabrics, and (2) fabrics with complex patterns, including periodic fabrics.
For the first kind of fabric, the research methods are relatively mature. The main subtypes are: (1) statistical methods, such as the co-occurrence matrix method [1] and the morphological method [2]; (2) spectral methods, such as the Fourier transform [3], the wavelet transform [4], and Gabor filtering [5]; (3) model-based methods, such as the Markov random field model [6]; and (4) deep learning methods, which have been widely used in computer vision [7,8] and which many researchers have begun to apply to fabric defect detection, such as neural networks [9] and Mobile-Unet [10]. Statistical and spectral methods are prone to errors when the detection area is too large or the defects are too small. Model-based methods need to define the model in advance, and different models must be defined for different types of defects, so they are not universal. Deep learning methods need a large number of samples as the training set, and training the parameters takes a long time; when the defect is close to the texture background, Mobile-Unet [10] cannot detect the details of the defect well. For the second kind of fabric, there are few mature and available methods, mainly including Wavelet Preprocessed Golden Image Subtraction (WGIS) [11], Bollinger Bands (BB) [12], Regular Bands (RB) [13], Elo Ranking (ER) [14], and Similarity Relation (SR) [15]. The calculation process of these methods is complicated, and the precision and recall of defect detection need to be improved. Low-rank decomposition [16,17] obtains the sparse defect part by decomposing the image into a low-rank matrix and a sparse matrix. This kind of method needs to construct a feature matrix and then decompose it with low rank; the detection effect depends on the feature selection, and the methods are not robust and are time-consuming.
Visual saliency algorithms simulate human vision: they detect the difference between fabric defects and the normal background from a visual perspective and separate the defect from the background to complete the detection. Frequency-tuned salient region detection (FT) [18] transforms the defect image from the RGB color space to the CIE Lab color space and regards the defect as a region with salient characteristics by using the difference between the defect and the background in color and brightness. The low-level visual saliency detection algorithm based on the wavelet transform [19] applies a multi-directional two-dimensional discrete wavelet transform to the three channels of the CIE Lab color space and fuses global and local features to form the saliency map of the defect region. The FT algorithm only considers global features, and its detection is poor when the defect is close to the background color. The saliency algorithm based on the wavelet transform [19] takes longer to detect defects in periodic fabrics, its calculation process is cumbersome, and its adaptability to different types of defects needs to be improved. The spectral residual method based on the Fourier transform [20] only considers the saliency of images in the frequency domain and ignores their saliency in the time and space domains.
In the process of collecting fabric defect images, the images are easily affected by uneven illumination, which increases the difficulty of feature extraction and raises the false detection rate. The traditional histogram equalization method and the self-quotient image method [21] tend to over-enhance the image, and their illumination correction effect is poor. In recent years, homomorphic filtering methods [22] and Retinex-based methods [23] have been widely used for image illumination correction.
To overcome the above issues, we propose a fabric defect detection method based on the combination of illumination correction and visual salient features. Firstly, to solve the problem that traditional illumination correction methods tend to over-enhance the image, the illumination component of the fabric image is extracted with the multi-scale side window box filter (MS-BOX) and corrected in the global angle, and then the local contrast of the fabric image is enhanced in the local angle. Secondly, to suppress the interference of the texture background, we use the L0 gradient minimization method to remove the background texture of the fabric image. Then, the fabric image is represented as a quaternion image, in which each pixel is represented by a quaternion composed of color, intensity, and edge features. Finally, to overcome the limitation of traditional frequency domain methods, which only consider the salient characteristics of the defect in the frequency domain, the saliency map of the fabric image is obtained with the two-dimensional fractional Fourier transform. The contributions of this paper are summarized as follows:
  • Different from traditional methods that only perform illumination correction locally or globally, our method performs illumination correction on the fabric image in both global and local angles.
  • Different from the traditional method of constructing quaternion images, we choose a color space that is more suitable for fabric images, improve the robustness of the intensity feature channel, and replace the motion feature channel with an edge feature channel.
  • Different from the traditional frequency domain method using simple Fourier transform to obtain the saliency map, we use the two-dimensional fractional Fourier transform to obtain the saliency map of the quaternion image.
The remainder of this paper is organized as follows. In Section 2, the work related to illumination correction and visual salient features is briefly described. In Section 3, we propose a fabric defect detection method based on the combination of illumination correction and visual salient features and discuss the implementation details. In Section 4, we evaluate the performance of our method on a standard dataset and compare it with the existing representative methods WGIS, Mobile-Unet, SHF, ER, CDPA, and SR. Finally, Section 5 concludes the paper.

2. Related Work

2.1. Illumination Correction

According to Retinex theory [23], an image $q(x,y)$ can be decomposed into two different images, the reflected object image $r(x,y)$ and the illumination image $i(x,y)$:
$$q(x,y) = r(x,y) \cdot i(x,y). \qquad (1)$$
The multi-scale rolling guidance filter (RGF) [24] is widely used to extract the illumination component of an image. Ying et al. introduced an Exposure Fusion Framework (EFF) [25] and a Bio-Inspired Multi-Exposure Fusion Framework (BIMEFF) [26] for low-light image enhancement, in which the enhanced result is obtained by fusing the input image and a synthetic image according to a weight matrix. Ren et al. [27] proposed a Joint low-light Enhancement and Denoising (JED) strategy, which enforces spatial smoothness on each component and skilfully uses weight matrices to suppress noise and improve contrast. To solve the problem of low visibility, Guo et al. [28] proposed a simple yet effective low-light image enhancement (LIME) method. Lore et al. [29] proposed a deep autoencoder approach to low-light image enhancement (LLNet). However, the illumination correction effect of these methods on fabric images still needs to be improved.

2.2. Visual Salient Feature

Visual saliency is a fundamental problem in image processing, pattern recognition, and computer vision. In recent years, many scholars have used visual salient features to detect fabric defects.
Li et al. [30] introduced a Saliency Histogram Features (SHF) method, in which saliency histogram features are extracted and selected to discriminate between defective and defect-free fabric images. Zhang et al. [31] proposed a Color Dissimilarity and Positional Aggregation (CDPA) method, in which the defect value is measured based on the color difference and the positional distance between similar color blocks. These methods achieve a certain effect, but their real-time performance is poor. Spectral residual analysis based on the Fourier transform is relatively simple and fast; on this basis, Guo et al. [32] used the phase spectrum instead of the original amplitude spectrum.

3. Methods

The steps of our method include illumination correction, texture background removal, saliency map generation and segmentation. Figure 1 shows the framework of the proposed method.

3.1. Illumination Correction

3.1.1. Illumination Correction in the Global Angle

The traditional method of extracting the image illumination component with a multi-scale rolling guidance filter [24] has some drawbacks; for example, image edges are damaged and halo artifacts appear in the illumination component. The side window filter (SWF) [33] can preserve image edges very well, so we use a multi-scale side window box filter to extract the illumination component of fabric images. Considering that the HSV color space is more consistent with the visual characteristics of the human eye, and that the hue (H), saturation (S), and value (V) channels of the HSV color space are independent of each other, operating on V does not affect the color information of the image. We therefore convert the image from the RGB color space to the HSV color space.
The definition of a side window is shown in Figure 2a. $\theta$ is the angle between the horizontal line and the window, $r$ is the radius of the window, $\rho \in \{0, r\}$, and $(x, y)$ is the position of pixel $i$. By fixing $(x, y)$ and changing $\theta$, we can adjust the direction of the window and align its side with $i$.
In order to simplify the process, we adopt the proposal of [33] and define eight side windows only in the discrete case, as shown in Figure 2b–d. These eight windows correspond to $\theta = k \times \frac{\pi}{2}$, $k \in \{0, 1, 2, 3\}$. By setting $\rho = r$, we obtain the down (D), right (R), up (U) and left (L) side windows, denoted $\omega_i^D$, $\omega_i^R$, $\omega_i^U$ and $\omega_i^L$; their sides are aligned with $i$. By setting $\rho = 0$, we obtain the southwest (SW), southeast (SE), northeast (NE) and northwest (NW) side windows, denoted $\omega_i^{SW}$, $\omega_i^{SE}$, $\omega_i^{NE}$ and $\omega_i^{NW}$; their corners are aligned with $i$.
By applying the filtering kernel $F$ to each side window, we obtain eight different outputs, denoted $I_i^{\theta,\rho}$, where $\theta = k \times \frac{\pi}{2}$, $k \in \{0, 1, 2, 3\}$ and $\rho \in \{0, r\}$:
$$I_i^{\theta,\rho} = F(q_i, \theta, \rho, r), \qquad (2)$$
where $q_i$ and $I_i$ are the intensities of the input image $q$ and the output image $I$ at location $i$, respectively. In order to preserve the edges, we want to minimize the distance between the input and the output at the edge. Consequently, we select the side window output with the minimum $L_2$ distance to the input intensity as the final output:
$$I_{SWF} = \arg\min_{I_i^{\theta,\rho,r}} \left\| q_i - I_i^{\theta,\rho,r} \right\|_2^2, \qquad (3)$$
where I S W F is the output of SWF.
In order to enhance the robustness of the original SWF, we build a multi-scale SWF by varying the window radius $r$ and introduce the box filter (BOX) into the multi-scale SWF. That is, $F$ in Equation (2) is an averaging kernel, and the resulting filter is called the multi-scale side window box filter (MS-BOX):
$$I_{MSBOX} = \sum_{j=1}^{n} \frac{1}{n} \arg\min_{I_i^{\theta,\rho,r_j}} \left\| q_i - I_i^{\theta,\rho,r_j} \right\|_2^2, \qquad (4)$$
where $I_{MSBOX}$ is the output of MS-BOX and $n$ is the number of scales. In this paper, $n = 3$.
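To make the filtering step concrete, the following is a minimal NumPy/OpenCV sketch of MS-BOX on a single channel. It assumes that cv2.boxFilter with shifted anchors is an acceptable stand-in for the eight side windows of Figure 2; the radii (3, 5, 7) follow the scale factors reported in Section 4.1, and the per-pixel minimum-distance selection implements Equation (3).

```python
import cv2
import numpy as np

def side_window_box_filter(img, r):
    """One pass of the side window box filter (SWF with an averaging kernel F).

    The mean over each of the eight side windows is computed with cv2.boxFilter
    using a shifted anchor, and the output closest to the input intensity is
    kept, as in Equation (3)."""
    img = img.astype(np.float32)
    k = 2 * r + 1
    # (kernel width, kernel height), anchor (x, y) for L, R, U, D, NW, NE, SW, SE
    windows = [
        ((r + 1, k), (r, r)), ((r + 1, k), (0, r)),          # left, right
        ((k, r + 1), (r, r)), ((k, r + 1), (r, 0)),          # up, down
        ((r + 1, r + 1), (r, r)), ((r + 1, r + 1), (0, r)),  # NW, NE
        ((r + 1, r + 1), (r, 0)), ((r + 1, r + 1), (0, 0)),  # SW, SE
    ]
    outputs = np.stack([
        cv2.boxFilter(img, -1, ksize, anchor=anchor, normalize=True)
        for ksize, anchor in windows
    ])
    # per pixel, keep the side-window mean closest to the input value
    idx = np.argmin(np.abs(outputs - img[None]), axis=0)
    return np.take_along_axis(outputs, idx[None], axis=0)[0]

def ms_box(v_channel, radii=(3, 5, 7)):
    """Multi-scale side window box filter (Equation (4)): average of the
    single-scale outputs over the chosen radii."""
    return np.mean([side_window_box_filter(v_channel, r) for r in radii], axis=0)
```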
By convolving the value component $V(x,y)$ of the image with $MSBOX(x,y)$, the estimated illumination component $I(x,y)$ is obtained:
$$I(x,y) = MSBOX(x,y) * V(x,y), \qquad (5)$$
where $*$ denotes convolution.
After extracting the illumination component of the image, the gamma correction function can be constructed according to the distribution characteristics of the illumination component. The expression of the two-dimensional gamma correction function constructed in this paper is as follows:
$$O(x,y) = 255\left(\frac{V(x,y)}{255}\right)^{\lambda}, \qquad \lambda = \left(\frac{1}{2}\right)^{\frac{m - I(x,y)}{m}}, \qquad (6)$$
where $O(x,y)$ is the brightness value of the corrected output image, $\lambda$ is the exponent used for brightness enhancement, which encodes the characteristics of the illumination component of the image, and $m$ is the mean value of the estimated illumination component $I(x,y)$.
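A minimal sketch of the two-dimensional gamma correction of Equation (6), assuming the V channel is given in the range [0, 255] and that the illumination estimate I(x, y) comes from the MS-BOX step above.

```python
import numpy as np

def gamma_correct_2d(v_channel, illumination):
    """Two-dimensional gamma correction (Equation (6)) of the V channel."""
    v = v_channel.astype(np.float32)
    illum = illumination.astype(np.float32)
    m = illum.mean()                         # mean of the illumination component
    lam = np.power(0.5, (m - illum) / m)     # pixel-wise exponent lambda
    return 255.0 * np.power(v / 255.0, lam)  # corrected brightness O(x, y)
```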

3.1.2. Enhance the Contrast in the Local Angle

The Local Contrast Enhancement (LCE) algorithm [34] can effectively improve the visibility of detail features while keeping the original details of the image as much as possible. The transformation equations of the LCE algorithm are as follows:
$$Y(m,n) = \begin{cases} \log\left(\dfrac{L(m,n)}{\overline{L(m,n)}}\right), & L(m,n) > \theta,\ \overline{L(m,n)} > \theta \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$
$$\overline{L(m,n)} = \frac{1}{N}\sum_{i,j \in \Omega} L(m+i, n+j), \qquad (8)$$
where $\theta$ is a predefined threshold, $L(m,n)$ is the gray value at pixel $(m,n)$, $\overline{L(m,n)}$ is the local mean gray value of pixel $(m,n)$ in the neighborhood $\Omega$, and $Y(m,n)$ is the adjusted gray value of pixel $(m,n)$. In this experiment we use a 5 × 5 neighborhood, and $N$ is the total number of pixels in the selected neighborhood. Since the values given by Equation (7) can be positive or negative, they must be normalized:
$$f(m,n) = \frac{Y(m,n) - Y_{\min}}{Y_{\max} - Y_{\min}} \times 255. \qquad (9)$$
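A possible NumPy/SciPy sketch of Equations (7)-(9). The 5 × 5 neighborhood follows the text; the threshold value passed as theta is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_enhance(gray, theta=1.0, size=5):
    """Local contrast enhancement (Equations (7)-(9)) on a single-channel image."""
    l = gray.astype(np.float64)
    local_mean = uniform_filter(l, size=size)     # mean over the 5x5 neighborhood
    y = np.zeros_like(l)
    mask = (l > theta) & (local_mean > theta)
    y[mask] = np.log(l[mask] / local_mean[mask])  # Equation (7)
    # the log values can be negative, so normalize back to [0, 255] (Equation (9))
    return (y - y.min()) / (y.max() - y.min() + 1e-12) * 255.0
```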
We combine the above two methods to achieve illumination correction of fabric images in both global and local angles.

3.2. Extract Visual Salient Features of Image

3.2.1. Background Texture Smoothing by L0 Gradient Minimization (LGM)

The diversity of patterns and textures usually brings great difficulty to fabric defect detection. In recent years, because the LGM algorithm [35] is fast and effective, it has been used by many scholars to remove texture. The LGM can not only smooth the background texture but also retain the key information of the image. In brief, the LGM preserves the important edges of the image by increasing the steepness of its transition regions while removing low-amplitude details. Let $I$ be the input image and $S$ the result of the LGM. The partial derivatives of the smoothed image at pixel $p$ in the $x$ and $y$ directions are denoted $\partial_x S_p$ and $\partial_y S_p$, respectively. The gradient of the smoothed output $S$ at pixel $p$ can then be expressed as:
$$\nabla S_p = \left(\partial_x S_p, \partial_y S_p\right)^{T}. \qquad (10)$$
The $L_0$ gradient objective function of the image can then be expressed as:
$$\min_{S,h} \sum_{p} \left( S_p - I_p \right)^2 + \beta \left\| \nabla S_p - h_p \right\|_2^2 + \lambda \left\| h \right\|_0, \qquad (11)$$
where $\lambda$ is a non-negative parameter that controls the degree of image smoothing, $h$ is an auxiliary variable, and $\beta$ is an adaptive parameter. By alternately solving for $h$ and $S$, we obtain the smoothed output.
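For reference, the sketch below follows the standard FFT-based alternating minimization of Xu et al. [35] rather than this paper's own code; kappa and beta_max are the defaults from that reference implementation, and lam = 0.02 matches the value selected in Section 4.2.

```python
import numpy as np

def l0_smooth(img, lam=0.02, kappa=2.0, beta_max=1e5):
    """L0 gradient minimization (Equation (11)) on a single-channel image in [0, 1]."""
    S = img.astype(np.float64)
    H, W = S.shape
    # frequency responses of the circular forward-difference operators
    fx = np.zeros((H, W)); fx[0, 0], fx[0, -1] = -1.0, 1.0
    fy = np.zeros((H, W)); fy[0, 0], fy[-1, 0] = -1.0, 1.0
    otf_x, otf_y = np.fft.fft2(fx), np.fft.fft2(fy)
    denom_grad = np.abs(otf_x) ** 2 + np.abs(otf_y) ** 2
    F_I = np.fft.fft2(S)
    beta = 2.0 * lam
    while beta < beta_max:
        # subproblem 1: hard-threshold the gradients (auxiliary variable h)
        hx = np.roll(S, -1, axis=1) - S
        hy = np.roll(S, -1, axis=0) - S
        small = (hx ** 2 + hy ** 2) < lam / beta
        hx[small], hy[small] = 0.0, 0.0
        # subproblem 2: solve the quadratic problem for S in the Fourier domain
        rhs = np.conj(otf_x) * np.fft.fft2(hx) + np.conj(otf_y) * np.fft.fft2(hy)
        S = np.real(np.fft.ifft2((F_I + beta * rhs) / (1.0 + beta * denom_grad)))
        beta *= kappa  # beta grows every iteration, tightening the approximation
    return S
```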
As shown in Figure 3a, the input image has a complex texture structure, and its mesh diagram is shown in Figure 4a. After the LGM algorithm, the complex texture of the input image is smoothed, and the output is shown in Figure 3b. Note that the important edges of the defect are preserved and the defect becomes more visible; the corresponding mesh diagram is shown in Figure 4b.

3.2.2. Creation of a Quaternion Image

The saliency detection method based on quaternions [32] represents each pixel with a quaternion consisting of color, intensity, and motion features. Compared with the RGB color space, the CIE Luv color space is more suitable for defect detection on single-color fabrics: L represents brightness, while u and v represent chroma. Therefore, we convert the input image I from the RGB color space to the CIE Luv color space. Let l, u, and v represent the different channels of image I in the CIE Luv color space. Equations (12)–(15) create four broadly tuned color channels:
$$L = l - (u + v)/2 \qquad (12)$$
$$U = u - (l + v)/2 \qquad (13)$$
$$V = v - (l + u)/2 \qquad (14)$$
$$Y = (l + u)/2 - |l - u|/2 - v. \qquad (15)$$
In the human brain, there exists a 'color opponent-component' system [36]: in the center of their receptive fields, neurons are excited by one color or chroma and inhibited by its opponent. The opponent chroma channels are obtained by Equations (16) and (17):
$$LU = L - U \qquad (16)$$
$$VY = V - Y. \qquad (17)$$
In order to further reduce the non-saliency of the color and strengthen its biological plausibility, we adjust the intensity channel $F$:
$$F = (\bar{l} + \bar{u} + \bar{v})/3, \qquad (18)$$
where $\bar{l} = l - l_m$, $\bar{u} = u - u_m$, and $\bar{v} = v - v_m$; $l_m$, $u_m$, and $v_m$ are the mean values of $l$, $u$, and $v$, respectively.
Since we are dealing with static images without motion features, we use the Canny operator to extract an edge feature channel $E$ instead of the motion channel. Based on the above four feature channels, the quaternion image $q$ is defined as follows:
$$q = f_1 + f_2 \cdot \mu_2 \qquad (19)$$
$$f_1 = E + LU \cdot \mu_1 \qquad (20)$$
$$f_2 = VY + F \cdot \mu_1, \qquad (21)$$
where $\mu_i$, $i = 1, 2$, satisfies $\mu_i^2 = -1$ and $\mu_1 \perp \mu_2$.
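A minimal OpenCV/NumPy sketch of the channel construction in Equations (12)-(21). The quaternion is stored as the two complex parts f1 and f2 (the symplectic decomposition used in [32]); the Canny thresholds (50, 150) are illustrative assumptions.

```python
import cv2
import numpy as np

def quaternion_channels(bgr_img):
    """Build the quaternion image q = f1 + f2*mu2 (Equations (12)-(21))."""
    luv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2Luv).astype(np.float32)
    l, u, v = luv[..., 0], luv[..., 1], luv[..., 2]

    # broadly tuned color channels (Equations (12)-(15))
    L = l - (u + v) / 2
    U = u - (l + v) / 2
    V = v - (l + u) / 2
    Y = (l + u) / 2 - np.abs(l - u) / 2 - v

    LU = L - U   # opponent chroma channel (Equation (16))
    VY = V - Y   # opponent chroma channel (Equation (17))

    # adjusted intensity channel (Equation (18)): deviations from the channel means
    F = ((l - l.mean()) + (u - u.mean()) + (v - v.mean())) / 3

    # edge channel E replaces the motion channel for static images
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
    E = cv2.Canny(gray, 50, 150).astype(np.float32) / 255.0

    # symplectic form: f1 = E + LU*mu1, f2 = VY + F*mu1 (Equations (19)-(21))
    f1 = E + 1j * LU
    f2 = VY + 1j * F
    return f1, f2
```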

3.2.3. Using 2-D Fractional Fourier Transform to Obtain Saliency Map

The fractional Fourier transform (FRFT) is a generalized form of the traditional Fourier transform; its result contains information from both the time and frequency domains. For an input signal $x(t)$, the FRFT is defined as:
$$X_\alpha(u) = \mathcal{F}^{\alpha}\left[ x(t) \right](u) = \int_{-\infty}^{+\infty} x(t)\, K_\alpha(t,u)\, dt \qquad (22)$$
$$K_\alpha(t,u) = \sqrt{\frac{1 - j\cot\alpha}{2\pi}}\, \exp\!\left( j\,\frac{t^2 + u^2}{2}\cot\alpha - j\,t u \csc\alpha \right), \qquad (23)$$
where $\alpha$ is the rotation angle from the time axis to the frequency axis, $\alpha = p \cdot \pi/2$, and $p$ is the transform order of the fractional Fourier transform. It can be seen from Equations (22) and (23) that when $p = 1$, the rotation angle is $\pi/2$ and the fractional Fourier transform degenerates into the traditional Fourier transform; when $p = 4n$, the rotation angle is an integer multiple of $2\pi$ and the result of the fractional Fourier transform is the signal itself; when $p$ is a fraction, the rotation angle lies between 0 and $\pi/2$ and the signal is rotated between the time axis and the frequency axis. In this case, the result of the FRFT describes the signal characteristics in both the time and frequency domains. Figure 5 shows the transform domain of the fractional Fourier transform, where the axis $t$ represents the time axis and the axis $\varepsilon$ represents the frequency axis.
For a two-dimensional signal $x(s,t)$, its two-dimensional fractional Fourier transform (2D-FRFT) is defined as:
$$X_{\alpha,\beta}(u,v) = \mathcal{F}^{\alpha}_{t \to v}\!\left[ \mathcal{F}^{\beta}_{s \to u}\!\left[ x(s,t) \right] \right], \qquad (24)$$
where α and β represent two independent fractional rotation angles in two-dimensional space, and the two-dimensional transformation result of signal x ( s , t ) is equal to two successive fractional Fourier transforms of the signal with parameters α and β respectively. In this work, we set both α and β to 0.9.
The transform kernel of the two-dimensional fractional Fourier transform can be defined as:
$$K_{(\alpha,\beta)} = K_\alpha \times K_\beta, \qquad (25)$$
where $K_\alpha$ and $K_\beta$ are the discrete forms of the one-dimensional FRFT kernel functions.
For a discrete two-dimensional signal $x(p,q)$ of size $M \times N$, the discrete two-dimensional fractional Fourier transform at point $(m,n)$ is:
$$X_{(\alpha,\beta)}(m,n) = \sum_{p=0}^{M-1} \sum_{q=0}^{N-1} x(p,q)\, K_{(\alpha,\beta)}(p,q,m,n). \qquad (26)$$
The inverse discrete two-dimensional fractional Fourier transform at point $(p,q)$ is:
$$x(p,q) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} X_{(\alpha,\beta)}(m,n)\, K_{(-\alpha,-\beta)}(p,q,m,n). \qquad (27)$$
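The separability in Equation (24) means the 2D-FRFT can be computed by two passes of a one-dimensional FRFT, one per axis. The sketch below assumes an external 1-D routine frft1d(x, order) (for instance an Ozaktas-style implementation), which is not defined here; the assignment of orders to axes follows the s/t convention of Equation (24) and may need to be swapped for a particular data layout. The inverse transform of Equation (27) reuses the same routine with negated orders.

```python
import numpy as np

def frft2d(signal_2d, alpha, beta, frft1d):
    """Separable 2D fractional Fourier transform (Equation (24)).

    `frft1d(x, order)` is an assumed 1-D discrete FRFT routine, not defined here."""
    # 1-D FRFT of order beta along axis 0, then of order alpha along axis 1
    tmp = np.apply_along_axis(lambda col: frft1d(col, beta), 0, signal_2d)
    return np.apply_along_axis(lambda row: frft1d(row, alpha), 1, tmp)

def ifrft2d(spectrum_2d, alpha, beta, frft1d):
    """Inverse 2D fractional Fourier transform (Equation (27)) via negated orders."""
    return frft2d(spectrum_2d, -alpha, -beta, frft1d)
```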

3.2.4. Generation of Saliency Map

The 2D-FRFT of Equation (19) can be written as:
$$Q(u,v) = X_1(u,v) + X_2(u,v)\,\mu_2, \qquad (28)$$
where $X_i(u,v)$, $i = 1, 2$, is the two-dimensional fractional Fourier transform of $f_i$.
$Q(u,v)$ can be represented in polar form as:
$$Q(u,v) = \left\| Q(u,v) \right\| e^{\mu \phi}, \qquad (29)$$
where $\left\| Q(u,v) \right\|$ is the amplitude spectrum, $\phi$ is the phase spectrum, and $\mu$ is a unit pure quaternion.
We calculate the inverse two-dimensional fractional Fourier transform of $Q(u,v)$ using Equation (27); the result is written as $\bar{Q}(u,v)$.
The final saliency map is obtained by Equation (30):
$$S = g * \left\| \bar{Q}(u,v) \right\|^2, \qquad (30)$$
where $g$ is a 2D Gaussian filter ($\sigma = 2.5$).
We use the region growing method to segment the saliency map and separate the defect from the background. Finally, the morphological treatment of the saliency map is carried out to remove the noise points that are easily caused by misdetection.
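A sketch of the saliency computation in Equations (28)-(30), reusing frft2d/ifrft2d from the previous sketch. Following the phase-spectrum idea of Guo et al. [32], it assumes the amplitude spectrum is normalized to 1 before the inverse transform; the region-growing segmentation and the morphological post-processing are not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(f1, f2, frft1d, alpha=0.9, beta=0.9):
    """Saliency map from the quaternion image (Equations (28)-(30))."""
    X1 = frft2d(f1, alpha, beta, frft1d)  # 2D-FRFT of the two symplectic parts
    X2 = frft2d(f2, alpha, beta, frft1d)
    eps = 1e-12
    amplitude = np.sqrt(np.abs(X1) ** 2 + np.abs(X2) ** 2) + eps
    # keep only the phase spectrum (unit amplitude), as in [32]
    Q1 = ifrft2d(X1 / amplitude, alpha, beta, frft1d)
    Q2 = ifrft2d(X2 / amplitude, alpha, beta, frft1d)
    # squared quaternion modulus smoothed by a Gaussian with sigma = 2.5 (Equation (30))
    S = gaussian_filter(np.abs(Q1) ** 2 + np.abs(Q2) ** 2, sigma=2.5)
    return S / (S.max() + eps)
```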

3.2.5. Computation Cost Analysis

The computational cost of our method is mainly determined by the following steps: illumination correction, texture background removal with the LGM, quaternion image construction, and saliency map generation.
Let $N_p = M \times N$, where $M$ and $N$ are the width and height of the input image, respectively. The illumination correction process is a linear calculation, so its computational complexity is $O(N_p)$; the computational complexity of the LGM is mainly determined by Equation (11), so it is $O(N_p \log N_p)$; constructing the quaternion image is also a linear calculation, so its complexity is $O(N_p)$; and the complexity of saliency map generation is mainly determined by Equation (28), so it is $O(N_p \log N_p)$. Therefore, the overall computational complexity of our method is:
$$T(method) = O(N_p) + O(N_p \log N_p) + O(N_p) + O(N_p \log N_p) = O(N_p \log N_p). \qquad (31)$$
Besides, the space complexity of our method is $O(C \times N_p)$, where $C$ is a constant.

4. Experiments and Performance Evaluation

In this section, our work is evaluated using a total of 50 images provided by the fabric database of the Industrial Automation Research Laboratory of Hong Kong University. More specifically, 15 defect images of size 256 × 256 are from the box-patterned fabric database, 15 defect images are from the star-patterned fabric database, and 20 defect images are from the dot-patterned fabric database. In addition, all defect images have corresponding binary ground-truth images, with a value of 1 for defective pixels and 0 for defect-free pixels. WGIS [11] (2005), Mobile-Unet [10] (2020), SHF [30] (2019), ER [14] (2016), CDPA [31] (2018) and SR [15] (2017) are implemented for comparison. The experiments are performed on a personal computer with an Intel Core i5-8300H processor and 8 GB of memory. The testing code is implemented in Matlab 2019a.

4.1. Analysis of Experimental Results of Different Illumination Correction Methods

Different illumination correction methods have different correction effects on fabric images, and comparison experiments of different methods are carried out for this problem. The scale factors r of the multi-scale MS-BOX are 3, 5, and 7, respectively. Figure 6 shows the illumination component extraction effect comparison of multi-scale RGF [24] (2014) and multi-scale MS-BOX. Figure 7 shows the illumination correction effect comparison of BIMEFF [26] (2017), JED [27] (2018), LIME [28] (2017), EFF [25] (2017), LLNet [29] (2017) and Ours.
As Figure 6 shows, compared with the multi-scale RGF, the multi-scale MS-BOX can eliminate the halo phenomenon in the illumination component image to a certain extent. This is because the multi-scale MS-BOX preserves the edge information of the image during filtering. The illumination component extracted by the multi-scale MS-BOX can effectively describe the illumination change information, which meets the feature requirements of illumination component extraction.
As Figure 7 shows, our method is better than the other methods at illumination correction of fabric images and can effectively improve the visibility of detail features. BIMEFF, JED, EFF and LLNet can basically eliminate the influence of illumination, but there are still some regions where the brightness is too dark to extract details effectively. LIME over-enhances the image, which is not conducive to the extraction of detail features and reduces the contrast between the defect and the background.

4.2. Parameter Selection of the L 0 Gradient Minimization Method

We use the L0 gradient minimization method to remove the background texture of dot-patterned fabric images. In the L0 gradient minimization method, the parameter λ affects the defect detection result. We explore the most appropriate value of λ; the results are shown in Figure 8.
As Figure 8 shows, for dot-patterned fabric images, if the parameter λ is set too small, the background texture is hardly removed; conversely, if λ is set too large, the defects are smoothed away as well.
In order to compare the impact of the parameter λ on all dot-patterned defect types, λ is set to 0.005, 0.01, 0.015, 0.02, 0.03, 0.04, and 0.05, respectively. Figure 9 shows the detection accuracy for the four defect types. Experiments show that setting λ between 0.005 and 0.05 can meet the needs of the four defect types. As Figure 9a shows, when λ = 0.05, the broken end defect is mistakenly smoothed out. As Figure 9d shows, when λ is set to 0.04 or 0.05, the thin bar defect is mistakenly smoothed out and the accuracy rate cannot be calculated. When λ = 0.02, the detection accuracy for the four defect types is the best, so we set λ = 0.02.

4.3. Generation of the Saliency Map

The saliency map of star-patterned fabric defect detection is shown in Figure 10, that of box-patterned fabric defect detection in Figure 11, and that of dot-patterned fabric defect detection in Figure 12. As Figure 10, Figure 11 and Figure 12 show, our method can effectively highlight the defect regions with saliency features, and it has strong adaptability and robustness to different types of defects.

4.4. Result Comparison

For each defect type of the fabric image database, an exemplar is randomly selected. The results of WGIS [11], Mobile-Unet [10], SHF [30], ER [14], CDPA [31], SR [15], and Ours are shown in Figure 13, Figure 14 and Figure 15.
For the star-patterned exemplars shown in Figure 13, the detection accuracy of Ours on star-patterned fabrics is visually better than the rest, and the location and shape of the defects are closest to the ground truth; WGIS and ER are basically unable to detect them. For the box-patterned exemplars shown in Figure 14, Mobile-Unet, SHF, CDPA, SR, and Ours can detect the defects, but Ours is closest to the ground truth in the shape of the defects; WGIS and ER cause many false detections of defect-free points. For the dot-patterned exemplars shown in Figure 15, all methods can detect the defects, but the detection effect of Ours is more prominent.

4.5. Quantitative Comparison

In order to test the effectiveness of the method, we also make quantitative and qualitative comparisons. A number of metrics are used to evaluate the effectiveness of the method: we calculate the true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN). From these four quantities, the true positive rate is TPR = TP/(TP + FN); the false positive rate is FPR = FP/(FP + TN); the positive predictive value is PPV = TP/(TP + FP); and the negative predictive value is NPV = TN/(TN + FN). Additionally, we use the f value to evaluate the overall performance:
$$f = \frac{(\gamma^2 + 1) \times TPR \times PPV}{TPR + \gamma^2 \times PPV}, \qquad (32)$$
where $\gamma = 1$ as in [37]. That is to say, Equation (32) can be rewritten as:
$$f = \frac{2 \times TPR \times PPV}{TPR + PPV} = \frac{2}{1/TPR + 1/PPV} = \frac{2}{(TP + FN)/TP + (TP + FP)/TP} = \frac{2 \times TP}{2 \times TP + FP + FN}. \qquad (33)$$
The above equations show that when FN and FP increase, the value of f decreases; when FN and FP decrease, the value of f increases and tends to 1. The f value depends only on TPR and PPV, which avoids misleading evaluations and false inspections caused by small defect regions. Consequently, we choose the f value as an important index for correctly evaluating the performance of the method.
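For completeness, a small sketch of the pixel-level metrics used in this section, computed from a binary detection mask and the binary ground truth; degenerate cases (e.g., an empty detection mask) are not handled.

```python
import numpy as np

def defect_metrics(pred_mask, gt_mask):
    """Pixel-level TPR, FPR, PPV, NPV and the f value (Equation (32) with gamma = 1)."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    tn = np.sum(~pred & ~gt)
    fn = np.sum(~pred & gt)
    tpr = tp / (tp + fn)              # recall
    fpr = fp / (fp + tn)
    ppv = tp / (tp + fp)              # precision
    npv = tn / (tn + fn)
    f = 2 * tp / (2 * tp + fp + fn)   # simplified f value (Equation (33))
    return tpr, fpr, ppv, npv, f
```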
Table 1, Table 2 and Table 3 compare the quantitative results of the different algorithms (WGIS [11], Mobile-Unet [10], SHF [30], ER [14], CDPA [31], SR [15], and Ours) on star-, box- and dot-patterned fabrics; the best results are marked in bold. The first to sixth columns are the defect type, TPR, FPR, PPV, NPV, and f value, respectively. Note that each row of a table is the average test result of one method, and the rows are grouped according to the type of fabric defect.
For the star-patterned results in Table 1, our method achieves the highest overall TPR and PPV, and its overall f value is also the highest, indicating that it achieves better overall recall and precision and the best detection accuracy. For the box-patterned results in Table 2, our method achieves the highest overall TPR, NPV, and f value, indicating that it has the highest detection accuracy. Although Mobile-Unet achieves the lowest overall FPR and the highest overall PPV, its overall TPR is only 60.75% and its f value is not the best, which is not conducive to actual detection. For the dot-patterned results in Table 3, our method achieves the highest overall TPR, NPV, and f value, indicating that its detection performance is similar to that on box-patterned fabric. Although Mobile-Unet achieves the highest overall PPV, its overall TPR is only 64.88%. In summary, our method significantly improves the TPR and f value of star-, box- and dot-patterned fabric defect detection.
Figure 16 shows the TPR-PPV scatter plots of star-, box- and dot-patterned fabrics for the seven methods, in which markers of the same type represent different defect types at different locations. In the scatter diagrams, the closer the TPR and PPV values are to 1 (100%), the better the comprehensive detection effect of the method, and the more concentrated the scatter distribution, the more robust and universal the method. As Figure 16 shows, our method is closest to the upper right corner of the diagrams, that is to say, its comprehensive TPR-PPV performance is better. Besides, the scatter values of our method are the most concentrated, which shows that our method is more robust and adaptable to the detection of different fabric patterns.

4.6. Running Time Comparison

We compare our running time with Mobile-Unet [10], WGIS [11], ER [14], SR [15], SHF [30] and CDPA [31], as shown in Table 4.
As Table 4 shows, compared with the other methods based on image processing (WGIS, ER, SR, SHF and CDPA), our running time is significantly shorter. Mobile-Unet has the best real-time performance; however, it needs a large number of defect images as training data in advance, and the training process also takes a lot of time.

5. Conclusions

This paper proposes a fabric defect detection method based on illumination correction and visual salient features. In view of the limitations of traditional illumination correction methods, we propose a new illumination correction method, which adjusts the brightness according to the illumination component in the global angle and enhances the contrast in the local angle. In order to eliminate the interference of the background, the L0 gradient minimization method is used to remove the texture background and highlight the defects. Traditional frequency-domain visual saliency detection algorithms only consider the saliency of the defects in the frequency domain; in this paper, the image is represented as a quaternion image, and the two-dimensional fractional Fourier transform is used to enhance the saliency of the defects in both the frequency domain and the time domain. Finally, we use the region growing method and morphological processing to segment the saliency map and complete the defect detection. Experimental results on a standard database show that our method is more robust and detects defects better than other methods. However, it should be noted that our method has a high FPR for dot-patterned fabric defects, which needs to be improved in the future.

Author Contributions

Project administration, L.D.; software, H.L.; supervision, J.L.; writing—original draft preparation, L.D. and H.L.; writing—review and editing, L.D. and H.L.; investigation, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Open Project of the Key Laboratory of the Ministry of Public Security for Road Traffic Safety (No. 2020ZDSYSKFKT03-2) and the National Natural Science Foundation of China (No. 71971031).

Acknowledgments

The database employed in this research is kindly provided by Industrial Automation Research Laboratory from Department of Electrical and Electronic Engineering of Hong Kong University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tsai, I.; Lin, C.; Lin, J. Applying an Artificial Neural Network to Pattern Recognition in Fabric Defects. Text. Res. J. 1995, 65, 123–130. [Google Scholar] [CrossRef]
  2. Chetverikov, D.; Hanbury, A. Finding Defects in Texture Using Regularity and Local Orientation. Pattern Recognit. 2002, 35, 2165–2180. [Google Scholar] [CrossRef]
  3. Chan, C.; Pang, G. Fabric Defect Detection by Fourier Analysis. IEEE Trans. Ind. Appl. 2000, 36, 1267–1276. [Google Scholar] [CrossRef] [Green Version]
  4. Yang, X.; Pang, G.; Yung, N. Discriminative Fabric Defect Detection Using Adaptive Wavelets. Opt. Eng. 2002, 41, 3116–3126. [Google Scholar] [CrossRef]
  5. Mak, K.; Peng, P. An Automated Inspection System for Textile Fabrics Based on Gabor Filters. Robot. Comput. Integr. Manuf. 2008, 24, 359–369. [Google Scholar] [CrossRef]
  6. Cohen, F.; Fan, Z.; Attali, S. Automated Inspection of Textile Fabrics Using Textural Models. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 803–808. [Google Scholar] [CrossRef]
  7. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci. 2018, 2018, 1–13. [Google Scholar]
  8. Guo, Y.; Liu, Y.; Oerlemans, A.; Lao, S.; Wu, S.; Lew, M.S. Deep learning for visual understanding: A review. Neurocomputing 2016, 187, 27–48. [Google Scholar] [CrossRef]
  9. Zhang, Y.; Lu, Z.; Li, J. Fabric Defect Classification Using Radial Basis Function Network. Pattern Recognit. Lett. 2010, 31, 2033–2042. [Google Scholar] [CrossRef]
  10. Jing, J.; Wang, Z.; Matthias, R.; Zhang, H. Mobile-Unet: An efficient convolutional neural network for fabric defect detection. Text. Res. J. 2020. [Google Scholar] [CrossRef]
  11. Ngan, H.; Pang, G.; Yung, S.; Ng, M. Wavelet based methods on patterned fabric defect detection. Pattern Recognit. 2005, 38, 559–576. [Google Scholar] [CrossRef]
  12. Ngan, H.; Pang, G. Novel Method for Patterned Fabric Inspection Using Bollinger Bands. Opt. Eng. 2006, 45, 187–202. [Google Scholar]
  13. Ngan, H.; Pang, G. Regularity Analysis for Patterned Texture Inspection. IEEE Trans. Autom. Sci. Eng. 2009, 6, 131–144. [Google Scholar] [CrossRef] [Green Version]
  14. Tsang, C.; Ngan, H.; Pang, G. Fabric inspection based on the Elo rating method. Pattern Recognit. 2016, 51, 378–394. [Google Scholar] [CrossRef] [Green Version]
  15. Liang, J.; Gu, C.; Chang, X. Fabric Defect Detection Based on Similarity Relation. Pattern Recognit. Artif. Intell. 2017, 30, 456–464. [Google Scholar]
  16. Li, C.; Gao, G.; Liu, Z. Defect detection for patterned fabric images based on GHOG and low-rank decomposition. IEEE Access 2019, 7, 83962–83973. [Google Scholar] [CrossRef]
  17. Shi, B.; Liang, J.; Di, L.; Chen, C.; Hou, Z. Fabric Defect Detection via Low-rank Decomposition with Gradient Information. IEEE Access 2019, 7, 130424–130437. [Google Scholar] [CrossRef]
  18. Achanta, R.; Hemami, S.; Estrada, F. Frequency-tuned Salient Region Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1597–1604. [Google Scholar]
  19. Imamoglu, N.; Lin, W.; Fang, Y. A Saliency Detection Model Using Low-Level Features Based on Wavelet Transform. IEEE Trans. Multimed. 2013, 15, 96–105. [Google Scholar]
  20. Hou, X.; Zhang, L. Saliency Detection: A Spectral Residual Approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007. [Google Scholar]
  21. Wang, H.; Li, S.; Wang, Y. Self quotient image for face recognition. In Proceedings of the 2004 International Conference on Image Processing, Singapore, 24–27 October 2004; pp. 1397–1400. [Google Scholar]
  22. Yugander, P.; Tejaswini, C.H.; Meenakshi, J.; Varma, B.S.; Jagannath, M. MR Image Enhancement using Adaptive Weighted Mean Filtering and Homomorphic Filtering. Procedia Comput. Sci. 2020, 167, 677–685. [Google Scholar] [CrossRef]
  23. Land, E. Recent advances in Retinex theory. Vis. Res. 1986, 26, 7–21. [Google Scholar] [CrossRef]
  24. Zhang, Q.; Shen, X.; Xu, L. Rolling Guidance Filter. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 815–830. [Google Scholar]
  25. Ying, Z.; Ren, Y.; Wang, R.; Wang, W. A New Image Contrast Enhancement Algorithm Using Exposure Fusion Framework. In Proceedings of the International Conference on Computer Analysis of Images and Patterns, Ystad, Sweden, 22–24 August 2017; pp. 36–46. [Google Scholar]
  26. Ying, Z.; Li, G.; Gao, W. A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1–10. [Google Scholar]
  27. Ren, X.; Li, M.; Cheng, W.; Liu, J. Joint Enhancement and Denoising Method via Sequential Decomposition. In Proceedings of the IEEE International Symposium on Circuits and Systems, Florence, Italy, 27–30 May 2018. [Google Scholar]
  28. Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993. [Google Scholar] [CrossRef] [PubMed]
  29. Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar]
  30. Li, M.; Wan, S.; Deng, Z.; Wang, Y. Fabric defect detection based on saliency histogram features. Comput. Intell. 2019, 35, 517–534. [Google Scholar] [CrossRef]
  31. Zhang, K.; Yan, Y.; Li, P.; Jing, J.; Liu, P.; Wang, Z. Fabric Defect Detection Using Salience Metric for Color Dissimilarity and Positional Aggregation. IEEE Access 2018, 6, 49170–49181. [Google Scholar] [CrossRef]
  32. Guo, C.; Ma, Q.; Zhang, L. Spatio-temporal Saliency Detection Using Phase Spectrum of Quaternion Fourier Transform. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  33. Yin, H.; Gong, Y.; Qiu, G. Side Window Filtering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 18–20 June 2019. [Google Scholar]
  34. Di, L.; Zhao, S.; He, R. Fabric defect inspection based on illumination preprocessing and feature extraction. CAAI Trans. Intell. Syst. 2019, 14, 716–724. [Google Scholar]
  35. Xu, L.; Lu, C.; Xu, Y.; Jia, J. Image smoothing via L0 gradient minimization. ACM Trans. Graph 2011, 30, 1–12. [Google Scholar]
  36. Engel, S.; Zhang, X.; Wandell, B. Colour tuning in human visual cortex measured with functional magnetic resonance imaging. Nature 1997, 388, 68–71. [Google Scholar] [CrossRef]
  37. Lazarevic-McManus, N.; Renno, J.; Jones, G. Performance evaluation in visual surveillance using the F-measure. In Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, Cairo, Egypt, 1–4 October 2006; pp. 45–52. [Google Scholar]
Figure 1. Framework of the proposed method.
Figure 2. The definition of the side window. r is the radius of the window. (a) Definition of the side window in the continuous case. (b) The left (red rectangle) and right (blue rectangle) side windows. (c) The up (red rectangle) and down (blue rectangle) side windows. (d) The northwest (red rectangle), northeast (blue rectangle), southwest (green rectangle) and southeast (purple rectangle) side windows.
Figure 3. Smoothed background texture information via the L0 Gradient Minimization (LGM). (a) Input fabric image; (b) smoothed fabric image.
Figure 4. Implementation effect of the LGM. (a) Mesh diagram of the fabric image; (b) mesh diagram of the smoothed fabric image.
Figure 5. Transform domain of the fractional Fourier transform.
Figure 6. Comparison of illumination component extraction results.
Figure 7. Comparison of different illumination correction methods.
Figure 8. The optimum parameters of dot-patterned fabric types.
Figure 9. The influence of different parameter λ on the detection accuracy of four dot-patterned defect types. (a) broken end type; (b) hole type; (c) thick bar type; (d) thin bar type.
Figure 10. Saliency map of star-patterned fabric defect detection. (a) broken end type; (b) hole type; (c) netting multiple type.
Figure 11. Saliency map of box-patterned fabric defect detection. (a) hole type; (b) netting multiple type; (c) thin bar type.
Figure 12. Saliency map of dot-patterned fabric defect detection. (a) broken end type; (b) hole type; (c) thin bar type; (d) thick bar type.
Figure 13. Each row depicts the defect inspection exemplars for 7 algorithms on a specific defect type. From top to bottom, these types are Broken End, Hole and Netting Multiple.
Figure 14. Each row depicts the defect inspection exemplars for 7 algorithms on a specific defect type. From top to bottom, these types are Hole, Netting Multiple and Thin Bar.
Figure 15. Each row depicts the defect inspection exemplars for 7 algorithms on a specific defect type. From top to bottom, these types are Broken End, Hole, Thin Bar and Thick Bar.
Figure 16. TPR-PPV scatter plots for (a) star-pattern, (b) box-pattern and (c) dot-pattern.
Table 1. Numerical results of each defect type for star-patterned fabric.

Star Pattern           TPR (%)   FPR (%)   PPV (%)   NPV (%)   f (%)    Methods
Broken End (5)         73.88     4.34      9.88      99.24     17.42    WGIS
                       56.65     0.95      29.11     99.74     38.45    Mobile-Unet
                       58.46     0.91      28.31     99.74     38.14    SHF
                       8.79      1.16      7.17      99.27     7.89     ER
                       48.05     0.78      25.82     99.62     33.59    CDPA
                       58.16     2.67      28.40     99.64     38.16    SR
                       65.81     0.79      34.82     99.75     45.54    Ours
Hole (5)               26.30     7.58      3.27      99.45     5.81     WGIS
                       62.53     0.47      41.95     99.80     50.21    Mobile-Unet
                       61.57     0.46      47.22     99.79     53.44    SHF
                       24.47     1.23      11.68     99.54     15.81    ER
                       57.51     0.47      44.28     99.78     50.03    CDPA
                       59.26     4.60      41.80     99.78     49.02    SR
                       74.06     0.51      45.48     99.86     56.35    Ours
Netting Multiple (5)   36.07     3.25      19.06     98.25     24.94    WGIS
                       83.01     0.69      63.77     99.77     72.12    Mobile-Unet
                       60.48     0.66      54.15     99.25     57.14    SHF
                       16.42     0.82      12.61     98.54     14.26    ER
                       52.84     0.62      58.63     99.16     55.58    CDPA
                       59.80     0.79      55.03     99.43     57.31    SR
                       71.21     0.57      56.22     99.17     62.83    Ours
Overall (15)           45.41     5.06      10.73     98.98     17.35    WGIS
                       67.39     0.70      44.94     99.77     53.92    Mobile-Unet
                       60.17     0.67      43.23     99.59     50.31    SHF
                       16.56     1.07      10.48     99.12     12.83    ER
                       52.80     0.62      42.91     99.52     47.34    CDPA
                       59.07     2.68      41.74     99.61     48.91    SR
                       70.36     0.63      45.51     99.59     55.27    Ours
Table 2. Numerical results of each defect type for box-patterned fabric.

Box Pattern            TPR (%)   FPR (%)   PPV (%)   NPV (%)   f (%)    Methods
Hole (5)               31.17     25.52     0.92      99.31     1.78     WGIS
                       62.44     0.76      41.41     99.75     49.79    Mobile-Unet
                       66.57     1.05      36.49     99.80     47.14    SHF
                       0         0.03      0         97.69     0        ER
                       62.60     0.97      35.55     99.72     45.34    CDPA
                       56.20     0.80      37.20     99.67     44.76    SR
                       83.10     1.33      35.67     99.88     49.91    Ours
Netting Multiple (5)   33.00     25.68     1.28      98.87     2.46     WGIS
                       50.23     0.91      38.65     99.38     43.68    Mobile-Unet
                       53.72     1.33      30.17     99.42     38.63    SHF
                       0.15      0.04      4.00      95.81     0.28     ER
                       51.38     1.52      30.28     99.50     38.10    CDPA
                       44.00     0.16      30.10     99.36     35.74    SR
                       59.76     1.44      32.50     99.46     42.10    Ours
Thin Bar (5)           26.90     24.20     1.02      99.07     1.96     WGIS
                       69.57     0.69      49.35     99.70     57.74    Mobile-Unet
                       65.81     1.05      37.86     99.67     48.06    SHF
                       5.84      4.51      2.36      97.68     3.36     ER
                       57.09     1.13      32.84     99.60     41.69    CDPA
                       60.30     1.60      23.40     99.66     33.71    SR
                       71.10     0.81      49.19     99.72     58.14    Ours
Overall (15)           30.35     25.13     1.07      99.08     2.06     WGIS
                       60.75     0.78      43.13     99.61     50.44    Mobile-Unet
                       62.03     1.14      34.84     99.63     44.61    SHF
                       1.99      1.52      2.12      97.06     2.05     ER
                       57.02     1.21      32.89     99.61     41.71    CDPA
                       53.50     0.85      30.23     99.56     38.63    SR
                       71.32     1.19      39.08     99.68     50.49    Ours
Table 3. Numerical results of each defect type for dot-patterned fabric.

Dot Pattern            TPR (%)   FPR (%)   PPV (%)   NPV (%)   f (%)    Methods
Broken End (5)         54.93     0.18      25.51     93.90     34.84    WGIS
                       68.59     1.87      53.80     98.11     60.30    Mobile-Unet
                       72.09     4.01      47.41     98.70     57.20    SHF
                       32.27     0.01      56.25     91.90     41.01    ER
                       78.32     5.20      45.65     98.94     57.68    CDPA
                       53.36     26.50     20.30     82.60     29.41    SR
                       80.74     5.09      49.05     99.07     61.02    Ours
Hole (5)               75.13     0.17      10.92     99.15     19.06    WGIS
                       77.58     4.01      35.63     99.29     48.83    Mobile-Unet
                       63.94     4.07      32.54     98.97     43.13    SHF
                       69.21     0.05      30.63     98.94     42.46    ER
                       72.18     4.82      29.04     99.04     41.41    CDPA
                       61.17     6.50      22.28     98.95     32.66    SR
                       84.19     5.38      30.95     99.45     45.26    Ours
Thick Bar (5)          71.66     0.17      49.46     96.19     58.52    WGIS
                       65.26     0.27      77.97     95.01     71.05    Mobile-Unet
                       67.13     2.30      73.27     93.11     70.15    SHF
                       84.94     0.15      49.46     96.19     62.51    ER
                       58.85     3.36      70.43     93.92     64.12    CDPA
                       70.68     5.49      27.46     99.23     39.55    SR
                       87.18     3.61      78.95     97.74     82.86    Ours
Thin Bar (5)           66.69     0.16      10.66     98.64     18.38    WGIS
                       48.09     0.17      77.47     98.63     59.34    Mobile-Unet
                       71.34     1.85      48.20     99.22     57.53    SHF
                       81.22     0.07      26.81     99.30     40.31    ER
                       64.88     1.87      45.78     99.02     53.68    CDPA
                       86.42     16.58     47.15     97.43     61.01    SR
                       76.02     2.36      45.67     99.34     57.06    Ours
Overall (20)           67.10     0.17      20.07     96.81     30.89    WGIS
                       64.88     1.58      61.22     97.76     62.99    Mobile-Unet
                       68.62     3.06      50.36     97.50     58.08    SHF
                       66.91     0.07      40.79     96.58     50.68    ER
                       68.56     3.81      47.72     97.58     56.27    CDPA
                       67.90     13.76     29.30     94.55     40.93    SR
                       82.03     4.11      51.16     98.90     63.01    Ours
Table 4. Comparison of running time by seven different methods.

Methods             Average Running Time/s   Hardware
Mobile-Unet [10]    0.021                    One Nvidia TITAN Xp (GPU)
WGIS [11]           12.99                    Intel Core i5-8300H (CPU)
ER [14]             12.13                    Intel Core i5-8300H (CPU)
SR [15]             3.99                     Intel Core i5-8300H (CPU)
SHF [30]            16.46                    Intel Core i5-8300H (CPU)
CDPA [31]           10.43                    Intel Core i5-8300H (CPU)
Ours                2.18                     Intel Core i5-8300H (CPU)
