Article

Quantitative Evaluation Method for the Severity of Surface Fuzz Defects in Carbon Fiber Prepreg

1 School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, China
2 State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China
3 Key Laboratory of Opto-Electronics Information Technology, Ministry of Education, Tianjin 300072, China
4 Composite Test Technology Center, AVIC Composite Corporation Ltd., Beijing 101300, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2025, 15(13), 7478; https://doi.org/10.3390/app15137478
Submission received: 11 June 2025 / Revised: 30 June 2025 / Accepted: 1 July 2025 / Published: 3 July 2025

Featured Application

The proposed method achieves automated and standardized assessment of the severity of fuzz defects in carbon fiber prepreg, significantly improving detection efficiency and reducing human subjectivity in quality control of carbon fiber prepreg production.

Abstract

Fuzz defects are prevalent surface imperfections in carbon fiber-reinforced polymer (CFRP) prepregs. Current manual inspection methods or conventional neural network-based approaches face significant limitations in achieving standardized and accurate severity assessment of such defects. In this article, a methodology comprising three key technical innovations is proposed: First, an adaptive thresholding algorithm is implemented, utilizing local average grayscale values to accurately identify fuzz defect pixels. Second, a grayscale histogram analysis is performed on the identified defect regions to quantify severity levels, effectively mitigating the influence of substrate material variations and illumination conditions on assessment accuracy. Third, a quantitative formula is defined based on the detection boxes drawn by the neural network object detection model and the effective area of defects to evaluate the severity of fuzz defects. Experimental validation shows 90% consistency with practical manual assessment in defect severity ranking tasks, proving its industrial applicability.

1. Introduction

Carbon fiber prepreg is a pre-impregnated sheet material fabricated by impregnating a polymer matrix into reinforcing fibers, serving as an essential intermediate for composite materials [1]. As a critical substrate for high-performance composites, defects in the prepreg—often introduced during the production process—can significantly compromise the reliability of the final product [2]. These defects are usually divided into six categories: fuzz, wrinkles, poor glue, seam separation, foreign objects, and resin-rich areas. They have a substantial adverse effect on the mechanical properties, interfacial bonding, and long-term durability of composite structures. Among these, fuzz defects exhibit considerable morphological variability in the prepreg. Their colors are influenced by different material backgrounds and lighting conditions. Their shapes are affected by the distribution of carbon fibers, manifesting as clustered, strip-shaped, or dispersed patterns. Although deep learning techniques have shown remarkable progress in industrial defect detection, they still face notable technical challenges in accurately evaluating the severity of fuzz defects in carbon fiber prepreg. Traditional machine vision systems are either limited to defect type recognition [3] or lack the ability to grade and assess the severity of defects, both of which are crucial for refined quality inspection in industrial production. Consequently, the urgent need for quantitative severity metrics of fuzz defects in prepreg manufacturing has prompted the development of evaluation methods that standardize surface-imperfection criteria and enable precise inspection systems.
Surface defect detection based on image processing typically encompasses three categories: traditional image processing algorithms, machine learning-based methods, and deep learning approaches [4]. Faced with the demand for defect detection in carbon fiber prepreg, researchers have proposed diverse strategies to address the challenges of defect identification. For instance, Li et al. [5] developed a vision-based automatic recognition system that leverages an artistic conception drawing revert algorithm, exploiting repeated surface pattern features for defect identification, matching, and classification. Similarly, Liu et al. [6] introduced a structural-constrained low-rank decomposition method for fabric defect detection, which constructs a fused image by extracting energy features from both the original and energy images to enhance defect visibility. Houssein et al. [7] proposed a multi-level threshold image segmentation method based on the Black Widow Optimization algorithm, which effectively reduces computational costs by using Otsu or Kapur as the objective function. Versaci et al. [8] proposed a new edge detection method based on fuzzy divergence and fuzzy entropy minimization, introducing an adaptive S-shaped fuzzy membership function and a new fuzziness quantization index that effectively improve edge detection in grayscale images under fuzzy conditions. Shi et al. [9] further combined low-rank decomposition with a structured graph algorithm, segmenting images into defect and defect-free blocks through local feature analysis while incorporating an adaptive threshold mechanism based on cycle counts during block merging.
Neural network-based deep learning models have also been applied to defect detection [10]. Lin et al. [11] introduced the Swin Transformer module into the YOLOv5 framework, significantly improving the detection of small-scale defects on fabric surfaces. Yuan et al. [12] proposed an improved YOLOv5 model, YOLO-HMC, which combines modules such as HorNet, MCBAM, and CARAFE to effectively improve the detection accuracy and efficiency of small defects on PCB surfaces. Building on this, Su et al. [13] optimized the YOLOv8 architecture by introducing Deformable Large Kernel Attention and Global Attention mechanisms, achieving substantial accuracy gains for prepreg surface defect detection. These studies demonstrate the evolving focus from basic defect recognition to more sophisticated feature extraction and model optimization.
Quantification of defects in images typically involves critical steps such as threshold segmentation, histogram analysis, and structural optimization. Zhu et al. [14] proposed an adaptive threshold method based on adaptive cuckoo search optimization, coupled with genetic algorithms to determine optimal Gabor filter parameters for fabric defect extraction. Pallemulla et al. [15] systematically analyzed how gray-level quantization and window size affect the performance of gray-level co-occurrence matrix features in fabric defect detection. Meanwhile, Wiskin et al. [16] applied threshold segmentation in medical imaging to quantify breast density for cancer risk assessment, while Sun et al. [17] developed a color space-based framework for the quantitative evaluation of flame stability. Wang et al. [18] proposed an acne grading method and severity assessment index based on lesion classification, and constructed a lightweight model called Acne RegNet, which achieved high-precision acne lesion recognition and severity assessment close to the level of dermatologists. Rahman et al. [19] developed an automatic acquisition system and image processing algorithm for rice leaf images, which estimates the percentage of infected leaf area based on pixel statistics. These works underscore the importance of accurate quantification for enhancing detection system practicality, though they also reveal inherent limitations in image-based severity assessment due to variations in background-target contrast and lighting conditions.
To address these challenges, researchers have extensively studied image-enhancement techniques [20]. Gray-level histogram equalization has emerged as a pivotal method for improving image quality and analysis reliability. Zhu et al. [21] proposed a threshold segmentation algorithm based on the statistical curve difference method to segment multi-target images in which targets are small and close to the background in grayscale. Uzun et al. [22] optimized adaptive histogram equalization through real-coded genetic algorithms and war strategy optimization, effectively enhancing image clarity. Fan et al. [23] proposed a constrained histogram equalization approach with adjustable contrast parameters to overcome shortcomings in traditional methods. Pang et al. [24] further developed a variable contrast and saturation enhancement model using contrast-limited histogram equalization for local contrast correction. In low-light environments, Oztur et al. [25] introduced a dual-approach enhancement method combining histogram equalization with adaptive gamma correction in HSI color space, effectively resolving brightness inhomogeneity issues. Lv et al. [26] proposed an adaptive high-gray image enhancement algorithm based on logarithmic mapping and simulated exposure, which effectively improves the visual quality and contrast of high-gray, low-contrast images. These advancements highlight the necessity of robust enhancement strategies for accurate detection and evaluation in image-based inspection.
This study proposes a quantitative evaluation metric to assess the severity of surface fuzz defects in carbon fiber prepreg. Most deep learning-based object detectors output rectangular bounding boxes that only indicate the approximate location of defects. These bounding boxes do not provide information about the shape or severity of the defect. To address this limitation, the proposed method incorporates traditional image processing techniques, including adaptive thresholding and histogram normalization, to extract the actual defect region and suppress the background within the detection box. The contributions of this study can be summarized as follows (an end-to-end sketch follows the list):
  • Select pixels of fuzz defects from the detection boxes bounded by the deep learning models;
  • Propose a standard for representing the severity of defects per unit area through grayscale histograms of defective pixels and background pixels;
  • Calculate the effective area of fuzz defects;
  • Obtain an evaluation score that reflects the severity of the fuzz defects.
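As a roadmap, the following minimal Python sketch strings these four steps together. All function names are ours (hypothetical), not the authors'; each helper is elaborated in the code sketches accompanying Sections 2.1, 2.2, 2.3 and 2.4 below.

```python
# End-to-end sketch of the proposed evaluation pipeline (names are ours).
# Each helper is sketched in the corresponding section below.
def evaluate_fuzz_severity(gray_crop, theta=2.0):
    mask = fuzz_mask(gray_crop, p=3)        # Section 2.1: select defect pixels
    s = unit_severity(gray_crop, mask)      # Section 2.2: severity per unit area
    a_eff = effective_area(mask)            # Section 2.3: rotated-rect effective area
    return severity_score(s, a_eff, theta)  # Section 2.4: composite score, Eq. (31)
```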

2. Materials and Methods

2.1. Selection of Defective Pixels

The first step in the proposed method involves converting the color image into a grayscale image, followed by the selection of defective pixels based on the grayscale representation. Due to the inherently uneven surface color distribution in carbon fiber prepreg, global grayscale thresholding proves insufficient for effectively distinguishing defects from the background while suppressing noise and illumination variations. To address this challenge, a localized thresholding strategy is adopted, taking into account the characteristics of the prepreg background—specifically, its irregular horizontal variations and relatively subtle vertical changes. In this approach, the average grayscale value of a local region is utilized as the threshold for segmentation, thereby improving the robustness and accuracy of defect pixel identification under varying background conditions.
To determine whether a given pixel contains a fuzz defect, a local grayscale analysis is performed. Specifically, the average grayscale value of a $(2p+1) \times (2p+1)$ neighborhood centered at the target pixel is calculated. This process is equivalent to applying a mean filter with a square convolution kernel of size $(2p+1) \times (2p+1)$, which helps suppress local noise and smooth intensity variations for more reliable defect detection. The mathematical formulation for the mean grayscale value is given by the following:
$\mathrm{img}_{\mathrm{mean}}(x,y)=\dfrac{1}{(2p+1)^{2}}\sum_{i=x-p}^{x+p}\sum_{j=y-p}^{y+p}\mathrm{img}(i,j)$ (1)
In Equation (1), $m \times n$ denotes the size of the input grayscale image, where m and n are the numbers of rows and columns, respectively. The image imgmean is generated from the original image by applying a mean filter with a $(2p+1) \times (2p+1)$ kernel, thereby achieving local intensity smoothing and background balancing. Next, the pixel grayscale values are averaged along the row direction to produce a $1 \times n$ vector: after padding the image at its borders, the average grayscale of the $m \times (2p+1)$ pixels centered on each column is computed, yielding the $1 \times n$ sequence as follows:
$\mathrm{meangray}_{1}(y)=\dfrac{1}{(2p+1)\,m}\sum_{i=1}^{m}\sum_{j=y-p}^{y+p}\mathrm{img}(i,j)$ (2)
Here, the i-th element in the summation corresponds to the i-th row, and j spans the $(2p+1)$-pixel window centered at column y. This operation provides a column-wise reference of the average grayscale value, which is used for subsequent thresholding. Based on the results of Equations (1) and (2), a binary mask is constructed to identify the fuzz defect regions. The mask is defined using a threshold criterion that compares the local mean grayscale value imgmean with meangray1, as follows:
$\mathrm{mask}_{1}(x,y)=\begin{cases}1, & \mathrm{img}_{\mathrm{mean}}(x,y) > 1.1\,\mathrm{meangray}_{1}(y)\\ 0, & \text{otherwise}\end{cases}$ (3)
The resulting image mask1 is the first-pass mask of the fuzz region. For images whose ratio of m to n exceeds 3, the excessive presence of fuzz within the $m \times (2p+1)$ window inflates the column-wise average, making the resulting threshold too high. In our setup, m is at most 4096 and n at most 1366; within a 1366-pixel range, the horizontal variation of the image is relatively small. Therefore, the average grayscale of the $(2p+1) \times n$ pixels centered on the pixel's row is used as the grayscale threshold instead, as follows:
$\mathrm{meangray}_{1}(x)=\dfrac{1}{(2p+1)\,n}\sum_{j=1}^{n}\sum_{i=x-p}^{x+p}\mathrm{img}(i,j)$ (4)
$\mathrm{mask}_{1}(x,y)=\begin{cases}1, & \mathrm{img}_{\mathrm{mean}}(x,y) > 1.1\,\mathrm{meangray}_{1}(x)\\ 0, & \text{otherwise}\end{cases}$ (5)
In the computation of the first defect mask, the grayscale threshold is derived by incorporating both the fuzz defect regions and the background. However, since this initial threshold may be biased due to the inclusion of fuzz pixels, a second threshold step is introduced to refine the segmentation. In this step, the background is first cleaned by excluding the previously identified fuzz pixels using the first mask, and a new column-wise grayscale threshold is computed based on the purified background. Specifically, the second column-wise average grayscale value, meangray2, is calculated as follows:
$\mathrm{meangray}_{2}(y)=\dfrac{1}{(2p+1)\,m}\sum_{i=1}^{m}\sum_{j=y-p}^{y+p}\mathrm{img}(i,j)\,\bigl(1-\mathrm{mask}_{1}(i,j)\bigr)$ (6)
Here, mask1 is the binary mask obtained from the first threshold step, where a value of 1 indicates a fuzz defect pixel and 0 indicates a background pixel. Equation (6) computes the average grayscale of the background region after removing the fuzz defect areas, thereby providing a more accurate column-wise threshold for the second segmentation stage. Then, a second defect mask2 is generated by comparing the local average grayscale imgmean with the threshold defined by Equation (6):
$\mathrm{mask}_{2}(x,y)=\begin{cases}1, & \mathrm{img}_{\mathrm{mean}}(x,y) > 1.1\,\mathrm{meangray}_{2}(y)\\ 0, & \text{otherwise}\end{cases}$ (7)
The calculation for images with a ratio of m to n greater than 3 is as follows:
$\mathrm{meangray}_{2}(x)=\dfrac{1}{(2p+1)\,n}\sum_{j=1}^{n}\sum_{i=x-p}^{x+p}\mathrm{img}(i,j)\,\bigl(1-\mathrm{mask}_{1}(i,j)\bigr)$ (8)
$\mathrm{mask}_{2}(x,y)=\begin{cases}1, & \mathrm{img}_{\mathrm{mean}}(x,y) > 1.1\,\mathrm{meangray}_{2}(x)\\ 0, & \text{otherwise}\end{cases}$ (9)
To enhance the robustness of the final defect mask and minimize the risk of pixel omission due to improper threshold selection, the mask with the higher number of identified defect pixels among mask1 and mask2 is selected as the final result. This selection is formalized as follows:
$\mathrm{mask}=\operatorname*{arg\,max}_{M\in\{\mathrm{mask}_{1},\,\mathrm{mask}_{2}\}}\sum_{x,y}M(x,y)$ (10)
In the resulting mask, the value 1 denotes a fuzz defect pixel, while 0 indicates a background pixel. This approach ensures that the thresholding process is adaptive to the spatial characteristics of the image, thereby improving the accuracy and reliability of fuzz defect detection on carbon fiber prepreg surfaces.
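For concreteness, the following Python sketch implements the two-pass adaptive thresholding of Equations (1) through (10) using NumPy and SciPy. It assumes an 8-bit grayscale input and reflective border padding; all function and variable names are ours, not the authors'.

```python
import numpy as np
from scipy.ndimage import uniform_filter, uniform_filter1d

def fuzz_mask(img: np.ndarray, p: int = 3) -> np.ndarray:
    """Return a binary mask (1 = fuzz pixel) for a grayscale image; a sketch of Eqs. (1)-(10)."""
    img = img.astype(np.float64)
    m, n = img.shape
    k = 2 * p + 1

    # Eq. (1): local mean over a (2p+1) x (2p+1) window (mean filter).
    img_mean = uniform_filter(img, size=k, mode="reflect")

    # Column-wise thresholds by default; row-wise when the image is tall (m/n > 3).
    axis = 0 if m <= 3 * n else 1

    def banded_mean(image):
        # Eqs. (2)/(4): mean over all rows (or columns) and a (2p+1)-wide band.
        profile = image.mean(axis=axis)
        return uniform_filter1d(profile, size=k, mode="reflect")

    # First pass, Eq. (3)/(5): threshold at 1.1x the banded mean.
    thr1 = banded_mean(img)
    mask1 = (img_mean > 1.1 * (thr1 if axis == 0 else thr1[:, None])).astype(np.uint8)

    # Second pass, Eqs. (6)-(9): recompute the threshold on the cleaned
    # background, with first-pass fuzz pixels zeroed out.
    thr2 = banded_mean(img * (1 - mask1))
    mask2 = (img_mean > 1.1 * (thr2 if axis == 0 else thr2[:, None])).astype(np.uint8)

    # Eq. (10): keep the mask that flags more defect pixels.
    return mask1 if mask1.sum() >= mask2.sum() else mask2
```

Note that, following Equations (6) and (8), the second-pass band mean is still normalized by the full pixel count, so the zeroed-out fuzz pixels simply pull the threshold down toward the background level.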

2.2. Severity of Defects per Unit Area

In practical applications, different images may exhibit variations in grayscale values for the same severity of fuzz defects due to differences in lighting conditions and the inherent color properties of the fuzz itself [27]. To eliminate the influence of these external factors on the defect assessment, this study employs a localized grayscale histogram equalization method. This approach is applied specifically to the regions containing fuzz defects, with the lower bound of the local grayscale range defined as the average grayscale value of the fuzz area. The upper bound is determined based on the 80th percentile of the cumulative grayscale probability distribution within the same region. This technique enhances the comparability of images and improves the objectivity and consistency of defect severity evaluation.
Firstly, the grayscale image is normalized from the integer range [0, 255] to the floating-point range [0, 1], using the following transformation:
$g(k)=\dfrac{k}{255}$ (11)
Here, k denotes the grayscale level, an integer ranging from 0 to 255. Separate grayscale histograms, denoted as histob and histbg, are computed for the fuzz area and the background area as follows:
$\mathrm{hist}_{ob}(k)=\bigl[h_{1}(0),h_{1}(1),\ldots,h_{1}(255)\bigr]$ (12)
$\mathrm{hist}_{bg}(k)=\bigl[h_{2}(0),h_{2}(1),\ldots,h_{2}(255)\bigr]$ (13)
The grayscale probability distributions for the fuzz and background areas are then derived by normalizing the respective histograms:
$P_{ob}(k)=\dfrac{\mathrm{hist}_{ob}(k)}{\sum_{i=0}^{255}h_{1}(i)}=\bigl[p_{1}(0),p_{1}(1),\ldots,p_{1}(255)\bigr]$ (14)
$P_{bg}(k)=\dfrac{\mathrm{hist}_{bg}(k)}{\sum_{i=0}^{255}h_{2}(i)}=\bigl[p_{2}(0),p_{2}(1),\ldots,p_{2}(255)\bigr]$ (15)
Here, meanob and meanbg denote the average grayscale values of the fuzz area and the background area, respectively, computed as follows:
$\mathrm{mean}_{ob}=\dfrac{\sum_{x,y}\mathrm{mask}(x,y)\,\mathrm{img}(x,y)}{\sum_{x,y}\mathrm{mask}(x,y)}$ (16)
$\mathrm{mean}_{bg}=\dfrac{\sum_{x,y}\bigl(1-\mathrm{mask}(x,y)\bigr)\,\mathrm{img}(x,y)}{\sum_{x,y}\bigl(1-\mathrm{mask}(x,y)\bigr)}$ (17)
To determine the lower threshold of the grayscale range for the fuzz area, the grayscale level g(a) that is closest to meanob is identified as follows:
$g(a)=\operatorname*{arg\,min}_{k\in[0,255]}\bigl|g(k)-\mathrm{mean}_{ob}\bigr|$ (18)
The upper threshold b is determined as the 80th percentile of the grayscale probability distribution of the fuzz area. Specifically, b is the smallest grayscale level that satisfies the following conditions:
$\sum_{i=0}^{b-1}P_{ob}(i)<0.8$ (19)
$\sum_{i=0}^{b}P_{ob}(i)\ge 0.8$ (20)
Subsequently, the grayscale histogram and probability distribution for the interval [a, b] are extracted as follows:
$\mathrm{hist}_{ab}(k)=\bigl[h_{1}(a),h_{1}(a+1),\ldots,h_{1}(b)\bigr]$ (21)
$P_{ab}(k)=\dfrac{\mathrm{hist}_{ab}(k)}{\sum_{i=a}^{b}h_{1}(i)}=\bigl[p_{1}(a),p_{1}(a+1),\ldots,p_{1}(b)\bigr]$ (22)
The cumulative distribution function (CDF) over this interval is then computed as follows:
$\mathrm{CDF}(k)=\sum_{i=a}^{k}P_{ab}(i)=\bigl[C(a),C(a+1),\ldots,C(b)\bigr]$ (23)
To perform histogram equalization, the grayscale values in the fuzz area are mapped to a new range [28]. Here, c denotes the maximum grayscale level of the image. The equalized grayscale value g1(k) is calculated as follows:
$g_{1}(k)=g(a)+\bigl(g(c)-g(a)\bigr)\,\mathrm{CDF}(k)$ (24)
After mapping, the updated grayscale histogram and probability distribution for the fuzz area are obtained by the following:
$\mathrm{hist}'_{ob}(k)=\bigl[h'_{1}(0),h'_{1}(1),\ldots,h'_{1}(255)\bigr]$ (25)
$P'_{ob}(k)=\bigl[p'_{1}(0),p'_{1}(1),\ldots,p'_{1}(255)\bigr]$ (26)
It is worth noting that color variations may exist between different materials or even within different regions of the same material. These variations can lead to differences in the grayscale values of fuzz defects, even when their severity is the same. To mitigate the impact of such color differences on the evaluation process, the focus is shifted from the absolute grayscale values of the fuzz region to the relative difference between the fuzz and background regions.
To achieve this, the grayscale level g(b′) that is closest to meanbg is determined as follows:
$g(b')=\operatorname*{arg\,min}_{k\in[0,255]}\bigl|g(k)-\mathrm{mean}_{bg}\bigr|$ (27)
The relative grayscale difference between the fuzz and background is then calculated by the following:
$g_{2}(k)=g_{1}(k)-g(b'),\quad k\in[0,255]$ (28)
Here, g(b′) is the grayscale level closest to the average grayscale of the background region. Since the image of fuzz defects is captured on the background material, the grayscale probability distribution derived from the masked area inherently includes contributions from both the fuzz and the background. To eliminate the background influence on the fuzz severity assessment, the background grayscale probability distribution outside the mask is subtracted from the equalized distribution inside the mask. This operation is defined as follows:
$P_{dif}=P'_{ob}-P_{bg}$ (29)
Finally, the severity of fuzz per unit area is quantified as the sum of the weighted grayscale differences above the average grayscale level, as follows:
$s=\sum_{k=a}^{255}P_{dif}(k)\,g_{2}(k)$ (30)
The metric s effectively captures the deviation of the fuzz region from the background in terms of relative grayscale, thereby providing a robust and material-independent measure of defect severity.
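The chain of Equations (11) through (30) condenses to a few lines of NumPy. The sketch below simplifies one step: instead of explicitly re-binning the equalized histogram (Equations (25) and (26)), it weights the original bins by their remapped levels, which is equivalent for the fuzz term and approximate for the background subtraction. All names are ours.

```python
import numpy as np

def unit_severity(img: np.ndarray, mask: np.ndarray) -> float:
    """Severity per unit area s, Eqs. (11)-(30); a hedged sketch, not the paper's code."""
    img = np.asarray(img, dtype=np.uint8)
    ob, bg = img[mask == 1], img[mask == 0]
    if ob.size == 0 or bg.size == 0:
        return 0.0

    # Eqs. (12)-(15): normalized grayscale distributions of fuzz and background.
    p_ob = np.bincount(ob, minlength=256) / ob.size
    p_bg = np.bincount(bg, minlength=256) / bg.size

    # Eqs. (16)-(20): lower bound a = level nearest the fuzz mean;
    # upper bound b = 80th percentile of the fuzz distribution.
    a = int(round(float(ob.mean())))
    b = int(np.searchsorted(np.cumsum(p_ob), 0.8))

    # Eqs. (21)-(24): equalize levels in [a, b] onto [g(a), g(c)],
    # where c is the image maximum and g(k) = k / 255 (Eq. (11)).
    c = int(img.max())
    seg = p_ob[a:b + 1]
    cdf = np.cumsum(seg) / max(seg.sum(), 1e-12)
    g1 = np.arange(256) / 255.0
    g1[a:b + 1] = (a + (c - a) * cdf) / 255.0

    # Eqs. (27)-(28): subtract the background reference level g(b').
    g2 = g1 - round(float(bg.mean())) / 255.0

    # Eqs. (29)-(30): background-compensated weighted sum above level a.
    p_dif = p_ob - p_bg
    return float(np.sum(p_dif[a:] * g2[a:]))
```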

2.3. Effective Area

The fuzz defect regions identified in the processed image are further analyzed using a neural network-based detector. In this context, the neural network automatically detects potential fuzz defects and annotates them with rectangular bounding boxes. While these bounding boxes provide a convenient and intuitive representation of the defect region, they may not accurately reflect the true extent of the defect, particularly for irregularly shaped or skewed defects.
Specifically, when the fuzz defect is not aligned with the axes of the image, the area of the axis-aligned bounding box tends to overestimate the actual defect size. To address this issue and improve the accuracy of area-based severity calculation, the effective area Aeff is defined as the smallest area of the rotated bounding rectangle that tightly encloses the detected fuzz defect. This is achieved by rotating the image and recalculating the bounding rectangle until the minimum enclosing area is found.
The effective area Aeff serves as a more precise geometric descriptor of the fuzz defect, allowing for a more reliable and robust assessment of its severity. By using Aeff instead of the standard axis-aligned bounding box area, the method reduces the impact of spatial misalignment and improves the consistency of quantitative defect evaluation across different orientations and shapes of fuzz regions.
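Rather than rotating the image and recomputing the bounding rectangle, the same minimum can be obtained directly from the defect pixel coordinates; the sketch below uses OpenCV's minAreaRect for this purpose, with the function name and structure being ours.

```python
import cv2
import numpy as np

def effective_area(mask: np.ndarray) -> float:
    """A_eff: area of the minimum rotated rectangle enclosing all fuzz pixels."""
    ys, xs = np.nonzero(mask)                 # coordinates of mask == 1 pixels
    if xs.size == 0:
        return 0.0
    pts = np.column_stack((xs, ys)).astype(np.float32)
    (_, (w, h), _) = cv2.minAreaRect(pts)     # ((cx, cy), (w, h), angle)
    return float(w * h)
```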

2.4. Quantitative Evaluation Score

To comprehensively evaluate the overall severity of fuzz defects, a composite scoring metric S is introduced, which integrates both the severity per unit area s and the effective area Aeff of the detected defect. This scoring mechanism is formulated as follows:
$S=\sqrt{\theta\,s}\,\cdot\log_{\theta}A_{\mathrm{eff}}$ (31)
In Equation (31), s represents the grayscale deviation-based severity index defined by Equation (30), while Aeff is the effective area defined in Section 2.3. The parameter θ is introduced as an adjustable weighting factor that governs the relative contribution of the unit severity and the spatial extent of the defect in the final assessment. The value of θ should be determined empirically according to the specific application requirements and the desired emphasis on either the intensity of the defect or its physical size.
The use of a square root function on the product of θ and s ensures a balanced contribution between the two components, while the logarithmic term with base θ on the effective area Aeff mitigates the overemphasis of large-area defects and enhances the sensitivity to moderate or small defects with high severity. This formulation allows for a more nuanced and application-adaptive quantification of fuzz defect severity, improving the discriminability and interpretability of the final evaluation results.
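Equation (31) itself reduces to a one-liner; the function name is ours, and θ is the application-chosen weighting factor:

```python
import math

def severity_score(s: float, a_eff: float, theta: float = 2.0) -> float:
    # Eq. (31): S = sqrt(theta * s) * log_theta(A_eff)
    return math.sqrt(theta * s) * math.log(a_eff, theta)
```

With θ = 2, for example, doubling the effective area adds a fixed increment of sqrt(2s) to the score, whereas doubling the unit severity scales the whole score by sqrt(2); this is the balance between extent and intensity described above.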

3. Results

3.1. Image Acquisition

In this study, real-time image acquisition of carbon fiber prepreg was conducted on the production line using a linear array camera. The images have a resolution of 4096 × 4000 pixels and a spatial resolution of 0.06835 mm per pixel, so each image corresponds to a physical prepreg area of 280 mm × 273.5 mm. The collected prepreg images were analyzed using a deep learning network to detect potential fuzz defects. Specifically, the regions identified within the detection bounding boxes are classified as fuzz defects and used as the examples for severity evaluation in this work.
The minimum image resolution required for the effective application of the method is determined by the pixel dimensions corresponding to the smallest defect size that must be detected in the specific industrial context. For example, if the minimum acceptable defect size is Z mm × Z mm and the spatial resolution is 0.06835 mm/pixel, then the minimum defect size in pixels is approximately (Z/0.06835) × (Z/0.06835); for an illustrative Z = 1 mm, this is roughly 15 × 15 pixels. Figure 1a displays the raw images captured by the camera, while Figure 1b presents the same images with annotations indicating the detected fuzz defects. Each bounding box is labeled with its width, height, threshold, and confidence level, providing a visual reference for the detection performance of the deep learning model. In this study, the YOLOv8s architecture was employed as the detection network in the experimental setup. The model was trained on a dataset comprising 1270 annotated samples of fuzz defects, for a total of 400 epochs, to ensure convergence and robust detection performance.
Figure 2 presents an original image containing a linear object and an enlarged view of the region enclosed in the red box. Fuzz defects are usually composed of dark linear objects and light flocculent objects. As observed in Figure 2, the average width of a linear object is 3–5 pixels. When the center of the convolution kernel is aligned with the center of such a linear object, a small kernel may cover only the dark linear region, so the local average grayscale is computed solely from the dark area and the corresponding point is wrongly judged as defect-free. To avoid this misclassification and ensure accurate detection of defect pixels, the kernel width must exceed the width of the linear defect; that is, the kernel size (2p + 1) should be greater than five pixels, implying a minimum p of 3. The effect of different p values is illustrated in the following figures.
Figure 3a shows the original grayscale image. Figure 3b,c display the masks obtained for p = 3 and p = 6, respectively. As shown, increasing p decreases the accuracy of edge extraction, with more background pixels near the defect boundary being incorrectly classified as defect pixels. Therefore, p = 3 is selected as the optimal parameter for subsequent calculations.

3.2. Histogram Equalization

To enhance the consistency of defect evaluation across different lighting conditions, grayscale histogram equalization was applied to the detected fuzz regions. This step is designed to normalize the grayscale distribution within the defect area, reducing the influence of environmental illumination variations.
Figure 4a shows the original grayscale image, and Figure 4b presents the corresponding mask of the fuzz area. Figure 4c displays the image after grayscale equalization of the fuzz area. Figure 5 shows the cumulative distribution of grayscale levels within the fuzz region; in this plot, the red line corresponds to point a, the average grayscale value of the fuzz area in the original image, and the green line marks point b, the 80th percentile of the cumulative grayscale probability distribution. Figure 6 illustrates the probability density of the grayscale range used for local histogram equalization and the resulting equalized image.

3.3. Effective Area

The effective area of the defect is defined as the minimum rotated bounding rectangle that tightly encloses the fuzz defect region. This definition helps to mitigate the overestimation of defect area caused by skewed or irregularly shaped defects. The angle α, which minimizes the bounding rectangle area, is calculated for each defect.
Figure 7a shows the fuzz area selected using the mask derived from Equation (10). Figure 7b presents the same image rotated by angle α to align the defect region for more accurate effective area computation.

3.4. Quantitative Results of the Severity of Fuzz Defects

The overall severity of fuzz defects is quantified using the composite score, which combines the severity per unit area s and the effective area Aeff. This method allows for a more objective and interpretable assessment of defect severity.
To demonstrate the performance of the proposed method, 10 images with varying severity levels of fuzz defects were selected, as shown in Figure 8a. These images were processed by the neural network to detect defects, and the corresponding cropped regions are shown in Figure 8b. The quantitative severity scores, calculated using Equation (31) with θ = 2, are summarized in Table 1.

3.5. Impact of Parameter θ on Severity Ranking

To further investigate the impact of the parameter θ on the final severity ranking, evaluations were conducted on the three images shown in Figure 9 at different θ values (2, 3, and 5). The results demonstrate that the value of θ significantly affects the relative importance of the defect area and the severity per unit area in the final evaluation.
Table 2 presents the quantitative scores and rankings for θ = 2, 3, and 5; at smaller θ, the defect area contributes more heavily to the final score.
These experimental results confirm that the parameter θ allows for flexible tuning of the evaluation criteria, making the proposed method adaptable to different industrial quality requirements.

3.6. Accuracy

The proposed method was applied to rank the severity of defects in a total of twenty fuzz defect images. The ranking results generated by the algorithm were compared with those produced by experienced staff through manual evaluation. To quantitatively assess the consistency between the algorithmic and human rankings, Rank Consistency scores were employed as the evaluation metric. These measures focus on the overlap and alignment between the sets of top-ranked items, rather than requiring exact positional matches, making them particularly suitable for evaluating overall ranking performance in terms of correctness and relevance. As presented in Table 3, the first column after the image identifier gives the manual ranking of defect severity by experienced personnel, the second the ranking generated by the proposed method, and the third the Rank Consistency@K score defined in Equation (32), which evaluates the alignment of the top-k ranked images between the two ranking strategies. An overall accuracy of 92% is obtained through Equation (33).
$\mathrm{Rank\ Consistency@}k=\dfrac{\bigl|\mathrm{Manual\,Top}_{k}\cap\mathrm{Method\,Top}_{k}\bigr|}{k}$ (32)
$\mathrm{Rank\ Accuracy}=\dfrac{1}{20}\sum_{k=1}^{20}\mathrm{Rank\ Consistency@}k=0.92$ (33)
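Equations (32) and (33) can be computed directly from the two ordered lists of image identifiers (best first); a minimal sketch with hypothetical list arguments:

```python
def rank_consistency_at_k(manual: list, method: list, k: int) -> float:
    # Eq. (32): overlap of the two top-k sets, divided by k.
    return len(set(manual[:k]) & set(method[:k])) / k

def rank_accuracy(manual: list, method: list) -> float:
    # Eq. (33): average Rank Consistency@k over all k = 1..n.
    n = len(manual)
    return sum(rank_consistency_at_k(manual, method, k) for k in range(1, n + 1)) / n
```

Applied to the two rankings in Table 3, the average evaluates to approximately 0.92.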

4. Discussion

This article proposes a quantitative evaluation method for the severity of fuzz defects on the surface of carbon fiber prepreg. We selected 20 surface images of prepreg materials exhibiting varying degrees of fuzz defects, identified and labeled with bounding boxes by a neural network-based detection model. We developed an algorithm that separates fuzz areas from the images cropped by the detection bounding boxes, filtering defective pixels by comparing pixel-centered local grayscale averages against an adaptive threshold. The experiments show that this method effectively segments the defect area. Based on the analysis and processing of grayscale histograms, the severity of the fuzz defects was quantified, and the quantification results are unaffected by material differences, the color brightness of different areas, and the lighting conditions of the imaging environment. The defect images were sorted according to the quantified values, and the resulting order matches the results of human judgment. This quantitative method effectively addresses a key limitation of existing neural networks in industrial defect detection: the difficulty of dynamically adjusting judgment criteria according to quality requirements. The ranking results obtained using the proposed method agree with the rankings produced manually by experienced staff with a similarity of 92%. This validates the effectiveness and reliability of the proposed method in assessing defect severity and provides an interpretable decision-making basis for automated quality inspection systems.

Author Contributions

Conceptualization, Y.L. (Yutong Liu) and M.S.; methodology, Y.L. (Yutong Liu); software, Y.L. (Yutong Liu); validation, Y.L. (Yutong Liu), M.S. and X.W.; formal analysis, M.S.; investigation, Y.L. (Yusheng Liu) and H.L.; resources, M.S. and T.L.; data curation, Y.L. (Yutong Liu); writing—original draft preparation, Y.L. (Yutong Liu); writing—review and editing, M.S.; visualization, Y.L. (Yutong Liu); supervision, T.L.; project administration, M.S. and X.W.; funding acquisition, M.S. and T.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Research and Development funding of AVIC Composite Corporation LTD, grant number GC732011602.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Acknowledgments

The authors acknowledge the technical cooperation and support of prepreg production workers for the artificial rating of the fuzz defect samples in the AVIC company.

Conflicts of Interest

Author X.W. was employed by AVIC Composite Corporation LTD. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Liu, X.; Gan, X.; Ping, A. Automatic Flaw Detection of Carbon Fiber Prepreg Using a CFP-SSD Model During Preparation. Meas. Sci. Technol. 2024, 35, 035604.
  2. Galos, J. Novel method of producing in-plane fibre waviness defects in composite test coupons. Compos. Commun. 2020, 17, 1–4.
  3. Bhatt, P.M.; Malhan, R.K.; Rajendran, P.; Shah, B.C.; Thakar, S.; Yoon, Y.J.; Gupta, S.K. Image-Based Surface Defect Detection Using Deep Learning: A Review. ASME J. Comput. Inf. Sci. Eng. 2021, 21, 040801.
  4. Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the Art in Defect Detection Based on Machine Vision. Int. J. Precis. Eng. Manuf.-Green Tech. 2022, 9, 661–691.
  5. Li, L.; Wang, Y.; Qi, J.; Xiao, S.; Gao, H. A Novel High Recognition Rate Defect Inspection Method for Carbon Fiber Plain-Woven Prepreg Based on Image Texture Feature Compression. Polymers 2022, 14, 1855.
  6. Liu, G.; Li, F. Fabric defect detection based on low-rank decomposition with structural constraints. Vis. Comput. 2022, 38, 639–653.
  7. Houssein, E.H.; Helmy, B.E.; Oliva, D. A novel Black Widow Optimization algorithm for multilevel thresholding image segmentation. Expert Syst. Appl. 2021, 167, 114159.
  8. Versaci, M.; Morabito, F.C. Image Edge Detection: A New Approach Based on Fuzzy Entropy and Fuzzy Divergence. Int. J. Fuzzy Syst. 2021, 23, 918–936.
  9. Shi, B.; Liang, J.; Di, L.; Chen, C.; Hou, Z. Fabric defect detection via low-rank decomposition with gradient information and structured graph algorithm. Inf. Sci. 2021, 546, 608–626.
  10. Pal, S.K.; Pramanik, A.; Maiti, J.; Mitra, P. Deep learning in multi-object detection and tracking: State of the art. Appl. Intell. 2021, 51, 6400–6429.
  11. Lin, G.; Liu, K.; Xia, X.; Yan, R. An Efficient and Intelligent Detection Method for Fabric Defects based on Improved YOLOv5. Sensors 2023, 23, 97.
  12. Yuan, M.; Zhou, Y.; Ren, X.; Zhi, H.; Zhang, J.; Chen, H. YOLO-HMC: An Improved Method for PCB Surface Defect Detection. IEEE Trans. Instrum. Meas. 2024, 73, 2001611.
  13. Su, W.; Sang, M.; Liu, Y.; Wang, X. Defect Detection Method of Carbon Fiber Unidirectional Band Prepreg Based on Enhanced YOLOv8s. Sensors 2025, 25, 2665.
  14. Zhu, R.; Xin, B.; Deng, N.; Fan, M. Fabric defect detection using ACS-based thresholding and GA-based optimal Gabor filter. J. Text. Inst. 2025, 115, 1432–1446.
  15. Pallemulla, P.S.H.; Sooriyaarachchi, S.J.; de Silva, C.R.; Gamage, C.D. Defect Detection in Woven Fabrics by Analysis of Co-occurrence Texture Features as a Function of Gray-level Quantization and Window Size. Eng. J. Inst. Eng. 2021, 54, 55–64.
  16. Wiskin, J.; Malik, B.; Natesan, R.; Lenox, M. Quantitative assessment of breast density using transmission ultrasound tomography. Med. Phys. 2019, 46, 2610–2620.
  17. Sun, D.; Lu, G.; Zhou, H.; Li, X.; Yan, Y. A simple index based quantitative assessment of flame stability. In Proceedings of the 2013 IEEE International Conference on Imaging Systems and Techniques (IST), Beijing, China, 22–23 October 2013; pp. 190–193.
  18. Wang, J.; Luo, Y.; Wang, Z.; Hounye, A.H.; Cao, C.; Hou, M.; Zhang, J. A cell phone app for facial acne severity assessment. Appl. Intell. 2023, 53, 7614–7633.
  19. Rahman, K.S.; Rakibul, M.R.I.; Salehin, M.M.; Ali, R.; Rahman, A. Assessment of paddy leaves disease severity level using image processing technique. Smart Agric. Technol. 2024, 7, 100410.
  20. Guo, J.; Ma, J.; García-Fernández, A.F.; Zhang, Y.; Liang, H. A survey on image enhancement for Low-light images. Heliyon 2023, 9, 14558.
  21. Zhu, Y.; Zhang, S.; Wei, Z.; Wang, X. Multi-Objective Image Threshold Segmentation Based on Statistical Curve Difference Method. J. Syst. Simul. 2017, 29, 2927–2933.
  22. Uzun, Y.; Bilgin, M. Medical image enhancement using war strategy optimization algorithm. Biomed. Signal Process. Control 2025, 106, 107740.
  23. Fan, X.; Wang, J.; Wang, H.; Xia, C. Contrast-Controllable Image Enhancement Based on Limited Histogram. Electronics 2022, 11, 3822.
  24. Pang, L.; Zhou, J.; Zhang, W.S. Underwater image enhancement via variable contrast and saturation enhancement model. Multimed. Tools Appl. 2023, 82, 47495–47516.
  25. Oztur, N.; Ozturk, S. A hybrid method for enhancement of both contrast distorted and low-light images. Int. J. Pattern Recognit. Artif. Intell. 2023, 37, 2354012.
  26. Lv, Z.; Wang, P.; Wang, H.; Li, L.; Li, J.; Li, X.; Li, X.; Liu, C.; Sha, B. Adaptive high-gray image enhancement algorithm based on logarithmic mapping and simulated exposure. Infrared Phys. Technol. 2024, 136, 105030.
  27. Su, H.; Yu, L.; Jung, C. Joint Contrast Enhancement and Noise Reduction of Low Light Images Via JND Transform. IEEE Trans. Multimed. 2022, 24, 17–32.
  28. Zhang, H.; Qian, W.; Wan, M.; Zhang, K. Infrared image enhancement algorithm using local entropy mapping histogram adaptive segmentation. Infrared Phys. Technol. 2022, 120, 104000.
Figure 1. (a) Images captured by the camera. (b) The same image annotated with the detection results of the neural network.
Figure 2. Enlarged view of a local region of the original image.
Figure 3. (a) Original grayscale image. (b) Mask of the fuzz area obtained with p = 3. (c) Mask of the fuzz area obtained with p = 6.
Figure 4. (a) Original grayscale image. (b) Mask of the fuzz area. (c) Image after grayscale equalization of the fuzz area.
Figure 5. Cumulative distribution chart of gray levels in the fuzz area.
Figure 6. Gray-level probability density of the grayscale range used for local histogram equalization.
Figure 7. (a) Fuzz area selected using the mask obtained from Equation (10). (b) The same image rotated by α.
Figure 8. (a) The detection results of defects by deep learning models. (b) Enlarged images within the box.
Figure 9. (a) The detection results of defects by deep learning models. (b) Enlarged images within the box.
Table 1. Severity rating of fuzz defects.

Dataset No. | Image Size Within the Box | Quantitative Evaluation Score | Ranking of Method in This Article | Artificial Ranking
#1 | 52 × 163 | 97.2 | 1 | 1
#2 | 68 × 201 | 105.2 | 2 | 2
#3 | 178 × 1307 | 108.0 | 3 | 3
#4 | 295 × 749 | 121.2 | 4 | 4
#5 | 121 × 874 | 126.4 | 5 | 5
#6 | 156 × 509 | 128.6 | 6 | 6
#7 | 312 × 2313 | 136.2 | 7 | 7
#8 | 377 × 3671 | 143.6 | 8 | 8
#9 | 1280 × 879 | 161.2 | 9 | 9
#10 | 903 × 4000 | 200.4 | 10 | 10
Table 2. Ratings and severity rankings of three images when θ is 2, 3, and 5.

Dataset No. | Image Size Within the Box | θ = 2 Score (Rank) | θ = 3 Score (Rank) | θ = 5 Score (Rank)
#11 | 135 × 163 | 109.5 (1) | 34.8 (1) | 13.7 (2)
#12 | 90 × 298 | 109.8 (2) | 35.4 (3) | 14.1 (3)
#13 | 495 × 131 | 115.8 (3) | 35.3 (2) | 13.5 (1)
Table 3. Comparison of manual ranking and method-based ranking with set-based similarity measures at top-k positions.

Image Identifier | Manual Ranking | Algorithm Ranking | Rank Consistency@K
1 | 1 | 1 | 1
2 | 2 | 2 | 1
3 | 3 | 3 | 1
4 | 4 | 4 | 1
5 | 5 | 5 | 1
6 | 6 | 7 | 5/6
7 | 7 | 9 | 6/7
8 | 8 | 10 | 3/4
9 | 9 | 11 | 7/9
10 | 10 | 12 | 4/5
11 | 11 | 13 | 9/11
12 | 12 | 8 | 11/12
13 | 13 | 14 | 12/13
14 | 14 | 15 | 13/14
15 | 15 | 16 | 14/15
16 | 16 | 17 | 15/16
17 | 17 | 19 | 16/17
18 | 18 | 18 | 17/18
19 | 19 | 6 | 1
20 | 20 | 20 | 1

