Article

Machine Vision Framework for Real-Time Surface Yarn Alignment Defect Detection in Carbon-Fiber-Reinforced Polymer Preforms

1 School of Software, Dalian University of Foreign Languages, Dalian 116024, China
2 College of Materials, Xiamen University, Xiamen 361005, China
* Author to whom correspondence should be addressed.
J. Compos. Sci. 2025, 9(6), 295; https://doi.org/10.3390/jcs9060295
Submission received: 27 April 2025 / Revised: 27 May 2025 / Accepted: 5 June 2025 / Published: 7 June 2025
(This article belongs to the Special Issue Carbon Fiber Composites, 4th Edition)

Abstract

Carbon-fiber-reinforced polymer (CFRP) preforms are vital for high-performance composite structures, yet real-time detection of surface yarn alignment defects is hindered by their complex textures. This study introduces a machine vision framework for the precise, real-time identification of such defects in CFRP preforms. We obtain the frequency spectrum by removing the zero-frequency component from the projection curve of carbon fiber fabric images, which aids in identifying the cycle number of the warp and weft yarns. A texture structure recognition method based on the artistic conception drawing (ACD) revert is applied to distinguish the complex and diverse surface texture of the woven carbon fabric prepreg from potential surface defects. Building on linear discriminant analysis for defect area threshold extraction, a defect boundary tracking algorithm was developed to achieve defect localization. Validated against over 1500 images captured from actual production lines, the proposed method significantly outperforms other inspection approaches, achieving a 97.02% recognition rate at 0.38 s per image. This research offers new insight into the correlation between yarn alignment anomalies and machine-vision-based texture analysis in CFRP preforms, advancing the fundamental understanding of defect mechanisms in composite materials and enabling data-driven quality control in advanced manufacturing.

1. Introduction

Carbon-fiber-reinforced polymer (CFRP) components serve as critical enablers of structural lightweighting in aerospace, rail transportation, and related industries, where manufacturing quality directly dictates the service performance and operational safety of the equipment [1,2,3]. During the compression molding process for CFRP components, the key manufacturing stages typically involve the multi-axis layup of core-layer prepreg, orientation-controlled fiber alignment in the reinforcement layers, and thermoset curing under hot-press conditions. Due to the synergistic effects of interlaminar shear stress and thermal gradients during hot-press molding, the outer surfaces of the components are prone to the formation of defects, including delamination, porosity, and fiber misalignment [4]. If left unaddressed, these defects can induce inhomogeneity in the resin flow in subsequent vacuum-assisted resin transfer molding (VARTM) processes, ultimately compromising the cured components’ capacity to meet the prescribed load distribution patterns and fatigue life requirements under operational conditions [5]. In severe cases, such defects may degrade the global stiffness, substantially diminishing the mass-specific load-bearing efficiency of structural components. Consequently, to mitigate the detrimental impacts of fiber fracture and interlaminar debonding on the structural integrity of large composite assemblies, it is imperative to implement robust online surface yarn (warp and weft) alignment defect detection systems. Such systems enable timely defect remediation and process parameter optimization by manufacturing personnel. Unlike conventional non-destructive testing, CFRP prepreg inspection resembles a modern form of textile woven fabric analysis, as CFRP prepregs constitute specialized woven structures [6,7].
Due to the interlacing of warp and weft yarns following specific weaving patterns, their surface exhibits a periodic texture defined by the yarn arrangement. The current computer vision methods for identifying surface yarn alignment defects in such materials are classified into image-feature-based [8,9], database-driven [10], and weave-pattern-based approaches [11,12,13,14].
Kang et al. [15] first introduced image-feature-based recognition, suitable for fabrics with similar warp/weft thicknesses. Zheng et al. [16] utilized a Markov Random Field (MRF) model to infer the texture element locations and classify structures by shape. Jing et al. [17] represented fabric images using binary weave diagrams and digital matrix coding, employing an Improved Distance Matching Function (IDMF) to characterize woven structures. These three approaches rely on recognizing woven patterns through pixel repetitions in the fabric features and yarn feature correlation coefficients. Xiao et al. [18] extended this by extracting feature vectors from weave points on each yarn and calculating the cross-yarn correlation coefficients, deriving the structure and dimensions from the correlation peak sequences. While image-feature-based methods achieve high accuracy, they demand high-quality images, as the uniformity of the illumination and clarity of the texture significantly influence their performance.
Currently, database-driven recognition methods have gained traction in fabric inspections and vision applications. Kuo et al. [19] leveraged the cumulative pixel gray-level values to extract warp/weft texture features, employing a two-stage backpropagation neural network for woven structure classification. Salem et al. [20] compared Gabor wavelets, Local Binary Pattern (LBP) operators, and Gray-Level Co-Occurrence Matrices (GLCMs) for feature extraction, establishing a three-elementary weave database and using Support Vector Machines (SVMs) for classification. Bao et al. [21] developed an improved KIII stimulation model via an Olfactory Neural Network (ONN) to enhance the recognition speed for three-elementary weaves. Pan et al. [22,23] created a comprehensive fabric texture database, achieving recognition by comparing the classification results with database template similarity coefficients—a method applicable to both three-elementary weaves and derived structures. However, a common limitation of these approaches is their heavy reliance on the database’s size and comprehensiveness.
Given the distinct weaving characteristics of different structures, the current research has predominantly focused on weave-pattern-based methods, with periodic texture repeat units serving as critical indicators for woven structure recognition. Zhou et al. [24,25] employed a statistical approach to extracting weave/color repeat units, identifying full weave structures, and detecting/correcting color pattern misrecognitions when present. Wang et al. [26] utilized a principal component analysis (PCA) to optimize the GLCM-based texture features and computed 2D autocorrelation matrices of pre-recognized weave structures for classification. Notably, both methods exhibited a promising performance only when the weave pattern repeat size was below a certain threshold.
When applied to automatic placement production lines, the aforementioned three methods (image-feature-based, database-based, and weave-pattern-based) suffer from poor robustness, low efficiency, and unstable accuracy. Moreover, due to the inherent complexity of woven structures, defect detection in CFRP prepregs remains an unsolved challenge in real-time manufacturing. The commonly employed approaches (filtering-based, feature-based, and learning-based methods) are primarily suitable for pure woven fiber fabrics composed of single-fiber spun yarns, making them poorly suited to inspecting CFRP prepregs [27,28,29,30,31]. Therefore, there is a critical need to develop a systematic, highly integrated framework for recognizing surface yarn alignment defects in prepregs, thereby enhancing the online detection efficiency and quality control in automated placement processes. This paper presents a machine vision framework for yarn defect detection, applied in the forming process of CFRP components. In order to optimize the acquisition of the surface geometry and weaving characteristics of carbon fiber prepregs, texture feature enhancement, a DFT-based yarn quantity detection method, and the artistic conception drawing (ACD) revert algorithm were used. Furthermore, identification rules for defect area extraction were designed to facilitate real-time monitoring and evaluation of manufacturing processes, avoiding potential defects that could affect the structural quality and consistency of CFRP components.

2. Materials and Methods

CFRP prepregs are typically shaped in hyperbolic molds during manufacturing. During automatic placement, inadequate in-plane deformation of the laminate stack to conform to the mold geometry causes the yarns to compress toward the center, inducing buckling that propagates and generates defects such as wrinkles and bubbles [32,33,34,35,36]. A flowchart of the surface yarn alignment defect detection process is presented in Figure 1.
First, five preprocessing algorithms enhance the texture features in the original images. Next, grayscale integral projection and discrete Fourier transform are employed to detect warp and weft yarns, with texture structure identification achieved via the artistic conception drawing (ACD) revert method [37,38]. The ACD revert algorithm generates the matrix OACD, which serves as the input for calculating the classification metrics. These metrics are compared against templates using predefined rules to derive the ACC matrix, which encapsulates core data for defect recognition, localization, and classification. Defect images are identified by comparing element-wise thresholds in the ACC matrix, while defect localization is achieved through detecting and merging defective sub-regions. The specific implementation process is as follows.

2.1. Texture Feature Enhancement

During the image acquisition process, electronic noise, shadows, and light pollution can interfere with image processing [39]. In order to accurately identify the geometric features of the texture structure, warp and weft yarn contours, and warp and weft weave point edges, this paper uses the following preprocessing methods to enhance the texture features. The flowchart is shown in Figure 2.
Step 1. Grayscale equalization: The original image is subjected to grayscale transformation using the cumulative distribution function, aiming to achieve grayscale equalization. To prevent discontinuity and excessive enhancement, an interpolation method is employed to expedite the equalization of the grayscale histogram. This approach accentuates the defect contour without compromising the overall contrast, thereby facilitating the extraction of defect features.
Step 2. Box filter: The box filter can effectively distinguish warp and weft weave points in the prepreg image by calculating the sum of pixels near these points. Additionally, this method suppresses noise while preserving textural features.
Step 3. Morphological transformation: The bottom-hat image is subtracted from the sum of the original and top-hat images to maximize the contrast between objects and gaps, allowing them to be distinguished.
Step 4. Steerable filter: The steerable filter is capable of arbitrary rotation, enabling the extraction of linear combinations at various angles based on the warp or weft direction. This capability facilitates accurate edge detection for the yarns, thereby aiding in the extraction of texture features.
Step 5. Gaussian filter: This filter eliminates Gaussian noise and background texture interference to optimize the extraction of the feature information on the texture structure.
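The five preprocessing steps can be sketched as follows. This is a minimal illustration using SciPy's `ndimage` module in place of the OpenCV calls on the authors' platform; the kernel sizes, the CDF-based equalization, and the oriented-derivative stand-in for the steerable filter are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy import ndimage

def enhance_texture(img, theta_deg=0.0):
    """Texture-feature enhancement sketch (Steps 1-5)."""
    img = img.astype(np.float64)

    # Step 1: grayscale equalization via the cumulative distribution
    # function, mapped back through interpolation
    hist, bins = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    eq = np.interp(img.ravel(), bins[:-1], cdf * 255.0).reshape(img.shape)

    # Step 2: box filter -- local pixel sums suppress noise while
    # preserving textural features
    box = ndimage.uniform_filter(eq, size=3)

    # Step 3: (original + top-hat) - bottom-hat maximizes the contrast
    # between objects and gaps
    tophat = ndimage.white_tophat(box, size=9)
    bottomhat = ndimage.black_tophat(box, size=9)
    contrast = box + tophat - bottomhat

    # Step 4: steerable first-derivative response at angle theta, built
    # as a linear combination of x- and y-derivative basis responses
    t = np.deg2rad(theta_deg)
    gx = ndimage.sobel(contrast, axis=1)
    gy = ndimage.sobel(contrast, axis=0)
    steered = np.cos(t) * gx + np.sin(t) * gy

    # Step 5: Gaussian smoothing against residual noise
    return ndimage.gaussian_filter(np.abs(steered), sigma=1.0)
```

Setting `theta_deg` to 0 or 90 steers the edge response along the warp or weft direction, respectively.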

2.2. Detection of Weft and Warp Yarns

During the process of the automatic recognition of surface defects in CFRP components, the captured image will be affected by the material weaving structure, fiber type, resin content, and external temperature and will present different warp and weft yarn densities in the same image size. Therefore, the detection of the number of warp and weft yarns on the surface of a prepreg is the core step in detecting defects. This paper uses the methods of grayscale integral projection and discrete Fourier transform to segment the prepreg image and detect the number of warp and weft yarns.
Step 1. Grayscale integral projection: The pixel values of each row in the grayscale image matrix enhanced by texture features can be summed up to obtain the horizontal integrated projection, which is the grayscale integrated projection of the weft yarn. Similarly, by summing up the pixel values in each column of the matrix, a vertical integrated projection can be obtained, which is the grayscale integrated projection of the warp yarn. The distribution characteristics of warp and weft yarn weaving are displayed in the integrated projection maps in these two directions [40,41].
Owing to the elliptical cross-sectional shape of the yarn, there is brighter imaging at the axis position and darker imaging at the concave edge position under the illumination of the light source. This leads to a discernible gap between adjacent weft yarns or warp yarns, as illustrated in Figure 3. Therefore, the grayscale integration projection of yarns and gaps with different surface structures can reflect the trend in the grayscale variation in the gaps, which is particularly important for detecting the number of yarns. The expression of the grayscale integrated projection in the horizontal and vertical directions is shown in Equations (1) and (2).
$G(y) = \sum_{x=1}^{n_x} I(x, y)/255$  (1)

$G(x) = \sum_{y=1}^{n_y} I(x, y)/255$  (2)
where I(x, y) is the grayscale image; x and y are the pixel index values in the horizontal and vertical directions (grayscale image matrix coordinates); and nx and ny are the total numbers of pixels in the horizontal and vertical directions, respectively. G(y) is the grayscale integrated projection of row y in the horizontal direction, and G(x) is the grayscale integrated projection of column x in the vertical direction. Division by 255 normalizes the data.
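As a concrete illustration, Equations (1) and (2) amount to normalized row and column sums of the grayscale image (a minimal sketch; the function name is ours):

```python
import numpy as np

def integral_projections(img_gray):
    """Normalized grayscale integral projections, Eqs. (1)-(2)."""
    I = img_gray.astype(np.float64) / 255.0
    G_y = I.sum(axis=1)  # row sums -> horizontal projection (weft yarns)
    G_x = I.sum(axis=0)  # column sums -> vertical projection (warp yarns)
    return G_y, G_x
```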
Step 2. Discrete Fourier transform (DFT): The grayscale pixel values of warp and weft are transformed from the time domain into the frequency domain using the DFT [31]. Firstly, a one-dimensional discrete Fourier transform is applied to the integrated grayscale projection signals in both the vertical and horizontal directions, as shown in Equation (3).
$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi k n / N}, \quad 0 \le k \le N-1$  (3)
where x(n) is the sampled input signal (a discretization of the original signal x(t)) and X(k) is its discrete Fourier transform, with N sample points. Since the sampled signal data are all real numbers, the above equation can be expanded into Equation (4):
$X(k) = \sum_{n=0}^{N-1} x(n) \left[ \cos\!\left(\frac{2\pi k n}{N}\right) - j \sin\!\left(\frac{2\pi k n}{N}\right) \right], \quad 0 \le k \le N-1$  (4)
In addition, since X(0) is the DC component of the signal and has no imaginary part, it produces a dominant large amplitude in the amplitude diagram, which interferes with extracting the coordinates of the maximum-amplitude point. Therefore, the zero-frequency component must be removed, as shown in Equation (5):
$\bar{X}(k) = \sum_{n=0}^{N-1} \left[ x(n) - \frac{1}{N} \sum_{m=0}^{N-1} x(m) \right] e^{-j 2\pi k n / N}, \quad 0 \le k \le N-1$  (5)
Therefore, a frequency spectrum corresponding to the projection curve can be obtained, where the vertical and horizontal directions, respectively, reflect the cyclic periods of warp and weft yarns. The frequency corresponding to the point of the maximum amplitude in the spectrum represents the number of cyclic periods for either the warp or the weft yarns, which is the quantity of warp and weft yarns.
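A minimal sketch of this step: the projection curve is mean-centered (removing the zero-frequency component, as in Equation (5)), and the peak of the one-sided amplitude spectrum gives the yarn count (the function name is ours):

```python
import numpy as np

def count_yarns(projection):
    """Yarn count from a projection curve: remove the DC component,
    take the DFT, and read off the maximum-amplitude frequency."""
    x = np.asarray(projection, dtype=np.float64)
    x = x - x.mean()                   # drop the zero-frequency term
    spectrum = np.abs(np.fft.rfft(x))  # one-sided amplitude spectrum
    spectrum[0] = 0.0                  # guard against residual DC
    return int(np.argmax(spectrum))    # frequency index = cycle count
```

On a synthetic projection with 32 cycles over the image height, the function returns 32, matching the interpretation of the spectrum peak given above.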

2.3. Texture Structure Recognition

The surface texture of a woven carbon fabric prepreg is complex, meaning surface defects are easily misidentified [42,43]. Therefore, in order to accurately and quickly detect surface defects on CFRP components, recognition of the surface texture structure of the component is crucial. This article builds on our previous research findings: a texture recognition method based on the ACD revert is applied within the aforementioned machine vision framework. This method compresses the surface-structure feature-enhanced image matrix while preserving the texture features, converts the complex grayscale image matrix into a simple binary matrix, and matches the data distribution of the binary matrix with the surface structure of the ACD image to identify the texture structure. The specific recognition process comprises weft segmentation, threshold calculation, binary transformation, lattice segmentation, and texture matching, ultimately yielding the matrix OACD that reflects the texture structure. A schematic of this texture structure recognition is shown in Figure 4.

2.4. Defect Location

In order to accurately and efficiently detect defects on the entire surface of CFRP components, the detection process shown in Figure 5a was designed: first, the starting point’s position is determined; then a robot arm equipped with the visual inspection frame collects images area by area. The robot moves at 500–800 mm/s, a single-frame image covers 150 × 180 mm, and the motion trajectory determines the scanning direction; line-by-line inspection then proceeds until surface image acquisition for the whole composite component is completed.
Moreover, carbon fabric prepregs may experience buckling due to pressure during the preform process, resulting in varying degrees of in-plane deformation in different regions. The distribution of the warp (weft) weave points in defect areas is irregular, while areas without defects still conform to the weaving rules of the design. Therefore, it is necessary to extract defect features from the image, establish feature vectors, and achieve rapid defect localization during the preforming process, as shown in Figure 5b,c. Firstly, based on the distribution characteristics of the warp and weft weave points (black-and-white block grids), the defective image is compared with the defect-free image pixel by pixel. The measurement indicator ACC = (TP + TN)/(TP + FN + FP + TN) is used to calculate the accuracy of each 4 × 4 pixel area, and the complete accuracy matrix MACC of the image is obtained by integrating all areas. Finally, the element values of the accuracy matrix MACC are compared with the defect segmentation matrix MLOC. According to the results of the comparison, the non-defect area and the defect area are divided into two subsets, M1 and M2, respectively. Then, the minimum submatrix G2 containing all elements of the M2 region is constructed as the target matrix for the defect area, and the contours of the defect area are extracted through the following boundary tracking algorithm.
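A simplified sketch of the block-wise ACC computation follows. The function names, the default block size, and the fixed 0.75 threshold are illustrative assumptions; in the paper, the threshold in MLOC is obtained via linear discriminant analysis rather than fixed by hand.

```python
import numpy as np

def accuracy_matrix(test_bin, ref_bin, block=4):
    """Block-wise ACC = (TP+TN)/(TP+FP+FN+TN) between a binarized test
    image and a defect-free reference weave-point grid."""
    h, w = test_bin.shape
    H, W = h // block, w // block
    M = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            t = test_bin[i*block:(i+1)*block, j*block:(j+1)*block]
            r = ref_bin[i*block:(i+1)*block, j*block:(j+1)*block]
            M[i, j] = np.mean(t == r)  # matching pixels / all pixels
    return M

def defect_mask(M_acc, threshold=0.75):
    """Split blocks into non-defect (M1) and defect (M2) subsets."""
    return M_acc < threshold  # True marks defective blocks (M2)
```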
The rules of the boundary tracking algorithm are shown in Figure 6. Firstly, the element with the smallest row and column values in the M2 index area is defined as the starting point for boundary tracking. Then, the eight neighboring points of the starting point are explored, and a clockwise element-by-element judgment is performed from the twelve o'clock direction to determine whether each is a boundary point. If a boundary point is found, the loop is terminated, and this point is marked as the starting point for the next boundary tracking and placed in the boundary matrix. If the candidate is the previous boundary point, the search direction is rotated 180° counterclockwise and edge searching continues until the next boundary point is found or the trace coincides with the starting point. Using this rule, all boundary points are found in sequence, and the index values of all elements in the boundary matrix are extracted to define the defect boundary set D. All elements in the set are connected to obtain the contour of the defect area. Then, the defect boundary set D is used to find the center position of the defect according to the following rules:
Step 1: Traverse all elements of D, calculate the distance between all two elements, and return the two elements with the largest distance.
Step 2: Set the line segment D2D8 connecting these two elements as the initial diameter of the circumcircle and obtain the initial center O1.
Step 3: Calculate the distance from all other elements to the center O1. If they are all less than or equal to the radius of the initial circle, O1 is the center position of the defect.
Step 4: If there is another element whose distance from the initial center is greater than the initial radius, a circle is constructed through this element and the initial two elements (a three-point circumcircle), and the position of the new minimum circumcircle center O2 is determined.
Step 5: Repeat step 3 until the center position of the minimum circumcircle is found, which is the center position of the defect.
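The five steps above can be sketched as an iterative search. This simplified version grows the circle toward any element still left outside instead of re-solving the exact three-point circumcircle of Step 4, so it is a heuristic approximation of the rule, not the paper's exact procedure.

```python
import numpy as np

def min_enclosing_center(points, max_iter=100):
    """Approximate minimum-circumscribed-circle search over the defect
    boundary set D, following the spirit of Steps 1-5."""
    pts = np.asarray(points, dtype=np.float64)
    # Steps 1-2: the farthest pair defines the initial diameter
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    center = (pts[i] + pts[j]) / 2.0
    radius = d[i, j] / 2.0
    # Steps 3-5: expand until every boundary element is enclosed
    for _ in range(max_iter):
        dist = np.linalg.norm(pts - center, axis=1)
        k = np.argmax(dist)
        if dist[k] <= radius + 1e-9:
            break  # all elements inside: center found
        # move the center toward the outlier and grow the radius so the
        # outlier lands exactly on the new circle
        step = (dist[k] - radius) / 2.0
        center = center + (pts[k] - center) * (step / dist[k])
        radius = radius + step
    return center, radius
```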
Finally, based on the current position relationship between the camera and the defect center in the world coordinate system, the motion control system can interpolate the two coordinates to rapidly position the defect center.
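The boundary tracking rule of Figure 6 resembles a Moore-neighbor trace. A minimal sketch follows; the backtracking convention used here is a simplification of the 180° rule described above, and the function name is ours.

```python
import numpy as np

# 8-neighbor offsets, clockwise starting from twelve o'clock (up),
# matching the clockwise element-by-element search in the text
OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
           (1, 0), (1, -1), (0, -1), (-1, -1)]

def trace_boundary(mask):
    """Moore-neighbor boundary trace of the defect region M2 (a binary
    mask); returns the ordered defect boundary set D."""
    ys, xs = np.nonzero(mask)
    # start: element with the smallest row, then smallest column
    start = (int(ys.min()), int(xs[ys == ys.min()].min()))
    boundary = [start]
    cur, prev_dir = start, 0
    for _ in range(4 * mask.size):  # safety bound on the trace length
        for s in range(8):
            d = (prev_dir + s) % 8
            ny, nx = cur[0] + OFFSETS[d][0], cur[1] + OFFSETS[d][1]
            if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                    and mask[ny, nx]:
                cur = (ny, nx)
                # restart the next search from "behind" the move
                # (the backtracking turn described in the text)
                prev_dir = (d + 5) % 8
                break
        else:
            return boundary  # isolated pixel, no neighbors
        if cur == start:
            return boundary  # trace closed on the starting point
        boundary.append(cur)
    return boundary
```

On a solid 3 × 3 block, the trace visits exactly the eight border pixels and skips the interior one.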

3. Results and Discussion

The acquisition and processing of high-quality images are essential for extracting surface defects during the preforming process for CFRP components. Nevertheless, several challenges arise in the actual detection process. Firstly, a preforming workshop typically employs a brightfield light source installed on the roof, leading to an uneven distribution of illumination across the detection target. Additionally, the movement of the wire winding device generates trailing shadows on the surface of the prepreg, resulting in light spots, noise, and impurities that detrimentally impact the imaging quality. Finally, the prepreg is a woven fabric produced by impregnating continuous fibers with a resin matrix under strictly controlled conditions. Each bundle of carbon fiber yarn in the prepreg can contain thousands of fibers. When light strikes the surface of the prepreg, its dense layered structure creates multiple optical reflection layers, leading to overexposure and compromising the quality of image collection. To address the above problems, this study proposes a machine vision detection framework building upon previous research findings [44], as depicted in Figure 7, which leverages visual illumination techniques such as darkfield, low-angle, and linear symmetry lighting [45]. The framework mainly consists of a camera, a servo motor, an LED source, and an angle adjustment and slewing mechanism. An angle-adjustable cantilever driven by a servo unit serves as the main motion structure. It can dynamically switch among darkfield, low-angle, linear symmetric, and multi-angle stereo vision lighting and can adjust the relative position of the light source during surface defect detection, making it convenient to extract parameters such as the three-dimensional position of the projected image, the shooting field of view, and the pitch angle of the camera.
In addition, the flange at the end of the device can be installed in the end effector of any automatic preforming system, achieving collaborative work between automatic preforming and automatic detection.
In order to evaluate the detection performance of the proposed method, the visual inspection system collected 1500 images of different types of prepregs on site as the experimental samples. The collected images were 256-color with a resolution of 1440 × 1080, and there were nine patterns with typical features, as shown in Figure 8. The experiment comprised two stages: the first was to apply grayscale integration projection to the warp and weft yarns of the feature-enhanced image to obtain pixel grayscale projection images and then extract the number of warp and weft yarns in the image through the DFT method to detect material parameters such as density and size. In the second stage, the proposed ACD revert algorithm was applied to transform the feature-enhanced image into an ACD image, and the defect patterns were then distinguished through clustering methods. We demonstrate the feasibility and accuracy of this method for defect detection, localization, and classification on the three-elementary woven structures of a carbon fabric prepreg.
Furthermore, this paper describes the approaches to capturing surface images of CFRP prepregs during component preparation; presents identification results for three basic woven structures; details the defect recognition and classification outcomes; and evaluates the performance of the proposed automatic detection method. The three types of testing samples were 1/1 plain, 2/2 twill, and 4-harness satin (4hs), obtained from Weihai Guangwei Composite Material Co., Ltd. These fabric styles had identical fabric parameters in terms of the number of fibers per tow, the number of fibers per inch in the warp and weft directions, and fabric weight (g m−2). The fabric parameters are listed in Table 1. The images, captured using a CCD camera (MV-CH050-10UM, HIKROBOT), had 8-bit gray levels and measured 896 × 702 pixels. The platform was built using Python 3.9 and OpenCV modules with an i7-8565U (1.8 GHz) CPU and 16 GB of RAM.

3.1. The Results and Analysis of Warp and Weft Yarn Detection

Six types of carbon fabric woven prepreg patterns were collected on site, and Table 2 shows a comparison of the results between the automatic detection algorithm proposed in this paper and manual detection by professionals. Using the results of manual detection as the ground truth, it can be seen that the error in the number of warp and weft yarns detected automatically is relatively small. The error rate for the defect-free patterns is less than 1%, with the plain-weave defect-free pattern having the smallest error rate. The error rate for patterns with defects is slightly higher, with the largest, for the satin-weave (defect) pattern, being only 2.12%. This is due to the different stacking stability of different surface structures: when defects occur, structures with poor stability are prone to yarn fluctuations, which affect automatic detection. However, this does not affect the overall detection performance, because the detection method in this paper divides the target into blocks for detection, and the yarn parameters of the same prepreg remain unchanged; the detection data for defect-free areas can be extracted to replace the data for defect areas, keeping these errors within a small range. The data extraction process for the automatic detection of yarn quantity is illustrated further in Figure 9: grayscale integration projection is performed on individual warp and weft yarns in the three original weave structure images. This shows that in plain-weave images, the gaps between warp yarns are brighter and have larger grayscale pixel values compared to those at the warp yarn axis, while the gaps between weft yarns are dimmer and have smaller grayscale pixel values compared to those at the weft yarn axis.
This rule also applies to twill-weave images, but the warp and weft weave points in twill-weave images present a diagonal structure at a certain angle on the fabric’s surface, so there is a texture of diagonal yarns in the gaps, and the brightness and darkness of the gaps are not as obvious as in plain-weave. The difference lies in the satin pattern images, where the warp yarns of the satin fabrics are denser than the weft yarns, resulting in more pronounced gaps between the weft yarns. The gaps between the warp yarns exhibit a texture of alternating light and dark, and both types of gaps have differences in their grayscale values with their corresponding yarns.
Furthermore, grayscale integration projection was applied to all warp and weft yarns in the three-elementary woven structure images, as shown in Figure 10. It can be seen that the curve follows the superposition of the grayscale distribution patterns of the single rows or columns mentioned above, showing the periodic distribution pattern of a time-domain signal. The frequency of occurrence of the local maxima or minima is the same as the number of yarns in the corresponding image. Although the horizontal and vertical grayscale projections have obvious peaks and valleys, the range of the peak changes varies greatly across different areas of the curve, and there is also interference from noise peaks and valleys. Therefore, it is necessary to apply the one-dimensional DFT to the grayscale projection integration signal to detect the number of yarns with different surface structures, as shown in Figure 11. The maximum-amplitude frequencies of the warp and weft yarns are found to be 32 and 26, respectively, meaning the numbers of warp and weft yarns in the image are 32 and 26, the same as the actual numbers in Figure 8. In other words, for the discrete grayscale projection signal of the weft (warp) yarns, whose length N equals the image height (width), and the frequency index sequence k = (0, 1, 2, …), the triangular wave component with the maximum weight vibrates for k cycles within the length N, and this k is the number of weft (warp) yarns present in the image; thus the yarn quantity can be obtained accurately. In summary, the frequency corresponding to the point with the highest amplitude in the spectrum is the number of cycles of the warp or weft yarns.

3.2. The Results and Analysis of Defect Morphology Recognition and Detection

In order to verify the effectiveness of defect recognition and classification, 200 samples of bubbles and wrinkles with different texture structures were selected from the collected image set. The defect morphologies were allocated at a ratio of 6:4, 5:5, and 4:6. The initial and final center points of the clustering results are shown in Table 3 and Table 4, respectively.
Three different internal validation indices were used to evaluate the clustering performance and find the optimal clustering center: the Davies–Bouldin Index (DBI), the Calinski–Harabasz Index (CH), and the silhouette coefficient (SC) [46]. The DBI is the ratio of the sum of within-cluster distances to the between-cluster distance; the smaller the DBI value, the better the clustering effect. The CH is the ratio of the between-class separation to the within-class compactness, and the SC combines the cohesion and separation of the clusters; the larger the CH and SC values, the more compact the clusters and the better separated they are from each other, indicating better clustering results. A comparative analysis of the clustering indices is shown in Table 5. It can be observed that within the same surface structure, there is little difference in the clustering indices among samples with different defect morphology ratios, with the best clustering effect observed at a 6:4 ratio. Therefore, the clustering center obtained at a 6:4 ratio is taken as the final aggregation center for the various weaving structures.
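The three indices can be computed with scikit-learn's reference implementations (the paper does not name its implementation, so the library choice is an assumption):

```python
import numpy as np
from sklearn.metrics import (davies_bouldin_score,
                             calinski_harabasz_score,
                             silhouette_score)

def clustering_indices(X, labels):
    """Internal validation indices used to pick the clustering center:
    lower DBI and higher CH/SC indicate compact, well-separated clusters."""
    return {
        "DBI": davies_bouldin_score(X, labels),
        "CH": calinski_harabasz_score(X, labels),
        "SC": silhouette_score(X, labels),
    }
```

On two tight, well-separated synthetic clusters, DBI comes out near zero while SC approaches one, consistent with the interpretation above.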
Furthermore, Figure 12 shows the images collected and detected by the visual platform designed in this paper at the actual production site. It can be clearly seen from the human–computer interaction interface that the defect area has been boxed out. Comparing these results with the actual center positions in the two-dimensional image model, the deviation from the defect center, the recognition rate, and the detection speed are shown in Table 6. From the perspective of the defect types, the average position error for wrinkle defects is higher than that for bubbles, because the volume of wrinkles is larger than that of bubbles, resulting in a higher degree of fiber buckling. From the perspective of the weaving structure, plain-weaves have high stability, low formability, and the smallest average positional errors. On the contrary, satin-weaves have better formability and are more prone to being affected by the defect morphology, resulting in the highest average positional error. The overall central error for all samples in the table is 3.244%, because there is a significant deviation in the defect edge position, which affects the overall central error but does not affect the main position of the defect.
The defect detection performance of this method was evaluated by comparing its running time and accuracy with those of four other commonly used fabric defect localization methods. The defect detection speeds for bubble and wrinkle defects are shown in Figure 13, and a comparison of the defect detection accuracy is shown in Table 7. The Gauss–Markov method [34] matches the distributions of defect-free and defective texture models. Although it can accurately describe surface textures, the algorithm involves a large computational load, complex data processing, and a high average running time. The Support Vector Data Description (SVDD) method [47,48] is a type of dictionary learning method that reduces the dimensionality of the projection matrix and applies the Support Vector Machine (SVM) method to defect localization and description; however, its adaptability is limited. The Gabor transform method [49] is commonly used in spectral analyses and comprehensively considers the relationship between the local and global characteristics of the image, but its localization performance depends heavily on the choice of filters and degrades on complex texture images. Morphological methods [50] exploit the differences in grayscale distribution characteristics between defect areas and the overall image, a common form of spatial statistical method, but image noise significantly degrades their localization performance. This paper uses precision matrices for threshold segmentation, element comparison, and defect localization; by maximizing the texture feature values, the data volume is greatly compressed. As the figures and tables show, the detection speed (0.38 s per image) and detection accuracy (97.02%) of our method for defect localization are superior to those of the other methods.
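The boundary-based localization step (Figure 6) can be illustrated with a simplified sketch: after threshold segmentation produces a binary defect mask, the boundary pixels are the foreground pixels with at least one background pixel in their 8-neighborhood, and a rough circumcircle follows from the centroid of the boundary. This is a NumPy illustration of the idea, not the paper's exact tracking algorithm:

```python
import numpy as np

def boundary_points(mask):
    """Boundary pixels of a 0/1 binary mask: foreground pixels with at
    least one background pixel in their 8-neighborhood."""
    padded = np.pad(mask, 1)
    nb_min = np.ones_like(padded)
    # take the elementwise minimum over the 8 shifted copies (neighbors)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            nb_min = np.minimum(nb_min, np.roll(np.roll(padded, dy, 0), dx, 1))
    inner = nb_min[1:-1, 1:-1]
    # boundary: foreground pixel whose neighborhood minimum is background
    return np.argwhere((mask == 1) & (inner == 0))

def enclosing_circle(pts):
    """Rough circumcircle estimate: boundary centroid plus maximum
    centroid-to-boundary distance as the radius."""
    c = pts.mean(axis=0)
    r = np.linalg.norm(pts - c, axis=1).max()
    return c, r
```

On a 3 × 3 block of foreground pixels, only the eight perimeter pixels are returned as boundary points, and the estimated circle is centered on the block with radius √2.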
Therefore, the proposed machine vision framework can be applied to real-time CFRP preform inspection in composite manufacturing and in the aerospace and automotive sectors. Future work could enhance its robustness via multi-sensor fusion and Industry 4.0 integration.

4. Conclusions

This study establishes a scientific correlation between yarn alignment anomalies and machine-vision-based texture analysis in CFRP preforms, providing new insights into the mechanisms of defect formation in composite materials from a materials science perspective. It shows that removing the zero-frequency component from the projection curves enables the precise identification of warp/weft yarn cycle numbers via frequency spectrum analysis, while the ACD revert method effectively distinguishes complex surface textures from defects.
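The zero-frequency-removal step above can be sketched compactly: subtracting the mean of the gray-level integral projection removes the DC component, after which the dominant FFT frequency indexes the yarn cycle count across the image. A minimal NumPy sketch, with an illustrative synthetic projection rather than real fabric data:

```python
import numpy as np

def yarn_cycle_count(projection):
    """Estimate the number of warp/weft yarn cycles from a gray-level
    integral projection curve: subtract the mean (the zero-frequency
    component), then take the dominant frequency of the spectrum."""
    p = np.asarray(projection, dtype=float)
    p = p - p.mean()                  # remove the zero-frequency component
    spectrum = np.abs(np.fft.rfft(p))
    spectrum[0] = 0.0                 # guard against any residual DC term
    return int(np.argmax(spectrum))   # bin index = cycles across the curve
```

For a projection containing 24 yarn periods riding on a large constant offset, the estimator returns 24; without the mean subtraction, the zero-frequency bin would dominate the spectrum and mask the yarn periodicity.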
The developed machine vision framework integrates five preprocessing algorithms, grayscale integral projection with the DFT for yarn detection, texture structure recognition via the ACD revert, and an LDA-based defect boundary tracking algorithm. This framework achieves real-time detection of surface yarn alignment defects in CFRP preforms with a 97.02% recognition rate and a processing time of 0.38 s per image, outperforming traditional inspection methods.
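The LDA-based threshold extraction step can be illustrated as a two-class Fisher discriminant over the gray-level histogram, which is equivalent to maximizing Otsu's between-class variance. The sketch below is a generic formulation under that assumption, not the paper's exact precision-matrix method:

```python
import numpy as np

def fisher_threshold(gray):
    """Select the gray-level threshold that maximizes the between-class
    variance of the histogram (Otsu's criterion, the two-class
    Fisher/LDA discriminant on gray levels)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability up to level k
    mu = np.cumsum(p * np.arange(256))       # first moment up to level k
    mu_t = mu[-1]                            # global mean gray level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # undefined at omega = 0 or 1
    return int(np.argmax(sigma_b))
```

On a strongly bimodal image (e.g., background near gray level 50 and defect pixels near 200), the selected threshold falls between the two modes, separating the defect area from the fabric texture.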
The proposed method provides an engineering solution for automated quality control in composite manufacturing, enabling data-driven defect localization and classification on industrial production lines. Its modular design supports integration with robotic inspection systems, facilitating real-time monitoring during CFRP component preforming to prevent the degradation of structural quality caused by yarn buckling and in-plane deformation.

Author Contributions

Conceptualization: L.L. and S.Y.; methodology: L.L. and S.X.; software: L.L.; validation: S.Y. and Z.W.; formal analysis: L.L.; investigation: S.Y.; resources: S.X.; data curation: L.L.; writing—original draft preparation: L.L.; writing—review and editing: L.L. and S.X.; visualization: L.L.; supervision: S.X.; project administration: L.L.; funding acquisition: L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by the Natural Science Foundation Project of Liaoning Provincial Department of Science and Technology, China [grant number 2024-BS-204], and the Research Fund Project of Dalian University of Foreign Languages in 2023 [grant number 2024XJXM36].

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Vazquez-Moreno, J.M.; Sanchez-Hidalgo, R.; Sanz-Horcajo, E.; Vina, J.; Verdejo, R.; Lopez-Manchado, M.A. Preparation and mechanical properties of graphene/carbon fiber-reinforced hierarchical polymer composites. J. Compos. Sci. 2019, 3, 30. [Google Scholar] [CrossRef]
  2. Alshammari, B.A.; Alsuhybani, M.S.; Almushaikeh, A.M.; Alotaibi, B.M.; Alenad, A.M.; Alqahtani, N.B.; Alharbi, A.G. Comprehensive review of the properties and modifications of carbon fiber-reinforced thermoplastic composites. Polymers 2021, 13, 2474. [Google Scholar] [CrossRef]
  3. Akbarpour, A.; Volz, J.; Vemuganti, S. An experimental study incorporating carbon fiber composite bars and wraps for concrete performance and failure insight. J. Compos. Sci. 2024, 8, 174. [Google Scholar] [CrossRef]
  4. Xu, B.; Wei, M.Y.; Wu, X.Y.; Fu, L.Y.; Luo, F.; Lei, J.G. Fabrication of micro-groove on the surface of CFRP to enhance the connection strength of composite part. Polymers 2021, 13, 4039. [Google Scholar] [CrossRef]
  5. Han, S.L.; Li, Q.Z.; Cui, Z.; Xiao, P.; Miao, Y.A.; Chen, L.; Li, Y. Non-destructive testing and structural health monitoring technologies for carbon fiber reinforced polymers: A review. J. Strain Anal. Eng. Des. 2024, 39, 725–761. [Google Scholar] [CrossRef]
  6. Wang, B.; Zhong, S.; Lee, T.L.; Fancey, K.S.; Mi, J.W. Non-destructive testing and evaluation of composite materials/structures: A state-of-the-art review. Adv. Mech. Eng. 2020, 12, 1687814020913761. [Google Scholar] [CrossRef]
  7. Towsyfyan, H.; Biguri, A.; Boardman, R.; Blumensath, T. Successes and challenges in non-destructive testing of aircraft composite structures. Chin. J. Aeronaut. 2020, 33, 771–791. [Google Scholar] [CrossRef]
  8. Deng, N.; Wang, Y.L.; Xin, B.J.; Wang, W.Z. A novel objective wrinkle evaluation method for printed fabrics based on multi-view stereo algorithm. J. Text. Inst. 2021, 153, 367–377. [Google Scholar] [CrossRef]
  9. Xie, G.S.; Xu, Y.; Yu, Z.Q.; Liu, Z.; Sun, Y.Z. Warp-knitted vamp fabric quality detection algorithm based on improved MatchNet. J. Text. Inst. 2022, 114, 1872–1880. [Google Scholar] [CrossRef]
  10. Guarnera, G.C.; Hall, P.; Chesnais, A.; Glencross, M. Woven fabric model creation from a single image. ACM Trans. Graph. 2017, 36, 165. [Google Scholar] [CrossRef]
  11. Walther, J.; Decrette, M.; Tourlonias, M.; Bueno, M.A. 3D weaving process for composite reinforcements: Critical yarn crossings. Appl. Compos. Mater. 2025. [Google Scholar] [CrossRef]
  12. Angelova, R.A.; Sofronova, D. Sustainable textiles: Design of new weave patterns based on texts. Sustainability 2023, 15, 1614. [Google Scholar] [CrossRef]
  13. Fan, M.Z.; Deng, N.; Xin, B.J.; Zhu, R.H. Recognition and analysis of fabric texture by double-sided fusion of transmission and reflection images under compound light source. J. Text. Inst. 2023, 114, 1634–1646. [Google Scholar] [CrossRef]
  14. Khokhuntod, P.; Kaewpoonsuk, A. Moire pattern-based stripe density measurement using low-frequency component detection. IEEE Access 2025, 13, 79150–79168. [Google Scholar] [CrossRef]
  15. Kang, T.J.; Kim, C.H.; Oh, K.W. Automatic recognition of fabric weave patterns by digital image analysis. Text. Res. J. 1999, 69, 77–83. [Google Scholar] [CrossRef]
  16. Zheng, D.; Han, Y.; Hu, J.L. A new method for classification of woven structure for yarn-dyed fabric. Text. Res. J. 2013, 84, 78–95. [Google Scholar] [CrossRef]
  17. Jing, J.; Xu, M.; Li, P. Automatic recognition of weave pattern and repeat for yarn-dyed fabric based on KFCM and IDMF. Optik 2015, 126, 2876–2883. [Google Scholar] [CrossRef]
  18. Xiao, Z.; Nie, X.; Zhang, F.; Geng, L. Recognition for woven fabric pattern based on gradient histogram. J. Text. Inst. 2014, 105, 744–752. [Google Scholar] [CrossRef]
  19. Kuo, C.F.J.; Shih, C.Y.; Ho, C.E.; Peng, K.C. Application of computer vision in the automatic identification and classification of woven fabric weave patterns. Text. Res. J. 2010, 80, 2144–2157. [Google Scholar] [CrossRef]
  20. Salem, Y.B.; Nasri, S. Automatic recognition of woven fabrics based on texture and using SVM. Signal Image Video Process. 2010, 4, 429–434. [Google Scholar] [CrossRef]
  21. Bao, X.; Ni, X.; Wang, Y.; Zhou, Y. Recognition of fabric structures based on improved olfactory. Neural Netw. 2010, 1, 325–328. [Google Scholar]
  22. Pan, R.; Gao, W.; Liu, J.; Wang, H. Automatic recognition of woven fabric patterns based on pattern database. Fiber Polym. 2010, 11, 303–308. [Google Scholar] [CrossRef]
  23. Pan, R.; Gao, W.; Liu, J.; Wang, H.; Zhang, X. Automatic detection of structure parameters of yarn-dyed fabric. Text. Res. J. 2010, 80, 1819–1832. [Google Scholar]
  24. Zhou, D.; Zhou, L.Q.; Sun, J. A novel feedback error-correcting algorithm for automatic recognition of the color and weave pattern of yarn-dyed fabrics. Text. Res. J. 2013, 83, 1673–1689. [Google Scholar]
  25. Zhou, D.; Zhou, L.; Sheng, X.; Sun, J. A novel weave/color repeat extraction method with error tolerance. J. Eng. Fiber. Fabr. 2014, 9, 127–139. [Google Scholar] [CrossRef]
  26. Wang, X.; Georganas, N.D.; Petriu, E.M. Fabric texture analysis using computer vision techniques. IEEE Trans. Instrum. Meas. 2011, 60, 44–56. [Google Scholar] [CrossRef]
  27. Kahraman, Y.; Durmusoglu, A. Deep learning-based fabric defect detection: A review. Text. Res. J. 2023, 93, 1485–1503. [Google Scholar] [CrossRef]
  28. Meeradevi, T.; Sasikala, S.; Gomathi, S.; Prabakaran, K. An analytical survey of textile fabric defect and shade variation detection system using image processing. Multimed. Tools Appl. 2022, 82, 6167–6196. [Google Scholar] [CrossRef]
  29. Li, C.; Li, J.; Li, Y.F.; He, L.M.; Fu, X.K.; Chen, J.J. Fabric defect detection in textile manufacturing: A survey of the state of the art. Secur. Commun. Netw. 2021, 2021, 9948808. [Google Scholar] [CrossRef]
  30. Lu, W.T.; Lin, Q.Q.; Zhong, J.Y.; Wang, C.Q.; Xu, W.Q. Research progress of image processing technology for fabric defect detection. Text. Res. J. 2021, 42, 197–206. [Google Scholar]
  31. Zheng, X.H.; Liu, Z.H.; Chen, F.; Zhang, J.; Wang, J.L. Current status and prospect of intelligent development in textile industry. Text. Res. J. 2023, 44, 205–216. [Google Scholar]
  32. Jia, Z.; Fu, R.; Wang, F. Research advance review of machining technology for carbon fiber reinforced polymer composite components. J. Mech. Eng. 2023, 59, 348–374. [Google Scholar]
  33. Asmael, M.; Safaei, B.; Kalaf, O.; Zeeshan, Q.; Aldakheel, F.; Nasir, T.; Yang, Z.C. Recent developments in tensile properties of friction welding of carbon fiber-reinforced composite: A review. Nanotechnol. Rev. 2022, 11, 1408–1436. [Google Scholar] [CrossRef]
  34. Chu, Y.Y.; Li, S.C.; Chen, C.; Liu, Y.Y.; Huang, W.H.; Zhang, Y.; Chen, X.G. Research progress in bulletproof flexible textile materials and structures. Text. Res. J. 2022, 43, 203–212. [Google Scholar]
  35. Muddebihal, A.; Shivakumar Gouda, P.S.; Uppin, V.S. Techniques of creating in-plane fiber waviness in composites and its mechanical performance—A review. Eng. Res. Express. 2024, 6, 022505. [Google Scholar] [CrossRef]
  36. Al-Furjan, M.S.H.; Shan, L.; Shen, X.; Zarei, M.S.; Hajmohammad, M.H.; Kolahchi, R. A review on fabrication techniques and tensile properties of glass, carbon, and Kevlar Fiber Reinforced Polymer composites. J. Mater. Res. Technol. 2022, 19, 2930–2959. [Google Scholar] [CrossRef]
  37. Li, L.; Xiao, S.L.; Wang, Y.Q.; Qi, J.L.; Gao, H. A novel high recognition rate defect inspection method for carbon fiber plain-woven prepreg based on image texture feature compression. Polymers 2022, 14, 1855. [Google Scholar] [CrossRef]
  38. Li, L.; Wang, Y.Q.; Gao, H.; Qi, J.L.; Zhou, T.Y. Automatic recognition method for the three-elementary woven structures and defects of carbon fabric prepregs. Compos. Struct. 2022, 291, 115527. [Google Scholar] [CrossRef]
  39. Hrechuk, A.; Bushlya, V.; Ståhl, J.E.; Kryzhanivskyy, V. Novel metric “Implenarity” for characterization of shape and defectiveness: The case of CFRP hole quality. Compos. Struct. 2021, 265, 113722. [Google Scholar] [CrossRef]
  40. Quan, C.G.; Miao, H.; Fu, Y. Surface contouring by optical edge projection based on a continuous wavelet transform. Appl. Opt. 2006, 45, 4815–4820. [Google Scholar] [CrossRef]
  41. Yang, Z.B.; Radzienski, M.; Kudela, P.; Ostachowicz, W. Scale-wavenumber domain filtering method for curvature modal damage detection. Compos. Struct. 2016, 154, 396–409. [Google Scholar] [CrossRef]
  42. Guo, Y.C.; Ge, X.; Yu, M.; Yan, G.; Liu, Y. Automatic recognition method for the repeat size of a weave pattern on a woven fabric image. Text. Res. J. 2019, 89, 2754–2775. [Google Scholar] [CrossRef]
  43. Wu, Q.L.; Shi, X.H.; Jiao, F.; Li, W.; Ti, X.T.; Liu, H.L. Three-point bending damage detection of SiC coated C/C composites based on acoustic emission. Mater. Today Commun. 2024, 41, 110655. [Google Scholar] [CrossRef]
  44. Castellini, P.; Cecchini, S.; Stroppa, L.; Paone, N. Adaptive illumination through spatial modulation of light intensity and image inversion. Meas. Sci. Technol. 2013, 24, 055401. [Google Scholar] [CrossRef]
  45. Hamdi, K.; Moreau, G.; Aboura, Z. Digital image correlation, acoustic emission and in-situ microscopy in order to understand composite compression damage behavior. Compos. Struct. 2021, 258, 113424. [Google Scholar] [CrossRef]
  46. Hanmandlu, M.; Choudhury, D.; Dash, S. Detection of defects in fabrics using topothesy fractal dimension features. Signal Image Video Process. 2015, 9, 1521–1530. [Google Scholar] [CrossRef]
  47. Liu, Y.H.; Lin, S.H.; Hsueh, Y.L.; Lee, M.L. Automatic target defect identification for TFT-LCD array process inspection using kernel FCM-based fuzzy SVDD ensemble. Expert Syst. Appl. 2009, 36, 1978–1998. [Google Scholar] [CrossRef]
  48. Bi, M.; Sun, Z.; Li, Y. Textural fabric defect detection using adaptive quantized gray-level co-occurrence matrix and support vector description data. Inf. Technol. J. 2012, 11, 673–685. [Google Scholar]
  49. Zhang, W.Y.; Zhao, Q.S. Research on defect detection of cord fabrics based on Gabor wavelet transform. Comput. Eng. Appl. 2008, 82, 577–585. [Google Scholar]
  50. Song, L.; Li, R.; Chen, S. Fabric defect detection based on membership degree of regions. IEEE Access 2020, 8, 48752–48760. [Google Scholar] [CrossRef]
Figure 1. A flowchart of the surface yarn alignment defect detection process.
Figure 2. Flowchart of image texture feature enhancement.
Figure 3. The gaps between warp (weft) yarns in the three-elementary structure images.
Figure 4. Schematic of texture structure recognition.
Figure 5. (a) The complete area detection process for surface defects on CFRP components; (b) rapid localization process for the defect center; (c) key images of the defect detection process.
Figure 6. The boundary tracking algorithm for extracting the contours and center of defect areas: (a) construct a submatrix G2 containing the M2 region; (b) find the boundary points in the 8-neighborhood of the starting point; (c) search for all boundary points in sequence; (d) obtain the contours of the defect area; (e) the extraction process for the minimum circumcircle; (f) the minimum circumcircle must be determined twice.
Figure 7. The structure of the machine vision device.
Figure 8. The types of patterns collected on site and their processing effects: (a) original grayscale image; (b) equalization; (c) morphology; (d) directional adjustable filtering; (e) Gaussian filtering; (f) block grid image.
Figure 9. Gray-level integral projection images of warp (weft) yarn and gaps.
Figure 10. The gray-level integrated projection of all warp (weft) yarns in the images of the three-elementary woven structures.
Figure 11. Fast Fourier transform spectrograms of all warp (weft) yarns in the images of the three-elementary woven structures.
Figure 12. The machine vision detection software and the hardware platform developed in this paper and the defect images detected.
Figure 13. Comparison of average defect detection speeds of different methods.
Table 1. Fabric parameters.
| Weave Type | Style | Warp Count (Per Inch) | Weft Count (Per Inch) | Warp Tow | Fill Tow | Fabric Weight (g m−2) | Ply Thickness (mm) |
|---|---|---|---|---|---|---|---|
| Plain | WP-3011 | 12 | 12 | 3K, carbon | 3K, carbon | 198 | 0.25 |
| Twill | WP-3022 | 12 | 12 | 3K, carbon | 3K, carbon | 198 | 0.25 |
| Satin | WP-3052 | 12 | 12 | 3K, carbon | 3K, carbon | 197 | 0.24 |
Table 2. Test results for warp/weft yarn quantity.
| Pattern Type | Manual Detection (Weft) | Manual Detection (Warp) | Proposed Method (Weft) | Proposed Method (Warp) | Error Rate, Weft (%) | Error Rate, Warp (%) |
|---|---|---|---|---|---|---|
| Plain-weave (defect-free) | 24.0 | 30.0 | 24.0 | 29.9 | 0 | 0.33 |
| Twill-weave (defect-free) | 24.0 | 30.0 | 24.2 | 29.8 | 0.83 | 0.67 |
| Satin-weave (defect-free) | 24.0 | 30.0 | 23.8 | 30.2 | 0.83 | 0.83 |
| Plain-weave (defect) | 26.0 | 32.0 | 25.7 | 31.9 | 1.16 | 1.25 |
| Twill-weave (defect) | 26.0 | 32.0 | 26.4 | 31.8 | 1.54 | 1.72 |
| Satin-weave (defect) | 26.0 | 32.0 | 25.9 | 32.3 | 2.12 | 1.72 |
Table 3. The initial cluster centers of surface defect morphologies of different proportions.
| Bubble:Wrinkle | Plain-Weave | Twill-Weave | Satin-Weave |
|---|---|---|---|
| 60%:40% | (0.0512, 0.1444, 0.1941) | (0.3077, 0.1390, 0.2623) | (0.1631, 0.2198, 0.2602) |
|  | (0.6556, 0.6637, 0.6878) | (0.7201, 0.2606, 0.3115) | (0.8809, 0.7147, 0.8535) |
| 50%:50% | (0.0925, 0.1360, 0.1182) | (0.2021, 0.1763, 0.0834) | (0.1618, 0.1778, 0.2502) |
|  | (0.7318, 0.7107, 0.8325) | (0.7662, 0.2709, 0.3816) | (0.6483, 0.4223, 0.5154) |
| 40%:60% | (0.0941, 0.2266, 0.1971) | (0.2833, 0.1300, 0.0804) | (0.1321, 0.0165, 0.2830) |
|  | (0.6809, 0.5200, 0.6326) | (0.9682, 0.3633, 0.5238) | (0.5921, 0.4710, 0.9178) |
Table 4. The final cluster centers of surface defect morphologies of different proportions.
| Bubble:Wrinkle | Plain-Weave | Twill-Weave | Satin-Weave |
|---|---|---|---|
| 60%:40% | (0.1732, 0.1640, 0.1774) | (0.2378, 0.1089, 0.1458) | (0.0886, 0.1308, 0.1745) |
|  | (0.7853, 0.7112, 0.7060) | (0.7932, 0.4104, 0.4286) | (0.6722, 0.6357, 0.7792) |
| 50%:50% | (0.1658, 0.1858, 0.1894) | (0.2591, 0.1163, 0.1392) | (0.0895, 0.1428, 0.1620) |
|  | (0.7902, 0.6753, 0.7808) | (0.7788, 0.4051, 0.4298) | (0.6413, 0.6132, 0.7321) |
| 40%:60% | (0.1437, 0.1870, 0.1948) | (0.2546, 0.1078, 0.1459) | (0.0836, 0.1214, 0.1536) |
|  | (0.8003, 0.6932, 0.7460) | (0.7729, 0.4255, 0.4560) | (0.6525, 0.6730, 0.7579) |
Table 5. A comparative analysis of the cluster indexes.
| Woven Structure | Bubble:Wrinkle | DBI | CH | SC |
|---|---|---|---|---|
| Plain-weave | 6:4 | 0.3489 | 1889.9056 | 0.8384 |
|  | 5:5 | 0.3612 | 1706.3648 | 0.8339 |
|  | 4:6 | 0.3648 | 1555.5728 | 0.8242 |
| Twill-weave | 6:4 | 0.3991 | 1119.6768 | 0.7891 |
|  | 5:5 | 0.4029 | 1075.6640 | 0.7813 |
|  | 4:6 | 0.4125 | 972.4578 | 0.7767 |
| Satin-weave | 6:4 | 0.3745 | 1334.9214 | 0.8125 |
|  | 5:5 | 0.3793 | 1309.8451 | 0.8101 |
|  | 4:6 | 0.3821 | 1276.9472 | 0.8096 |
Table 6. The detection performance of the proposed method for different defects.
| Defect Type | Centering Error, Horizontal (%) | Centering Error, Vertical (%) | Centering Error, Average (%) | Detection Rate (%) | Detection Speed (s/pic) |
|---|---|---|---|---|---|
| Bubble (plain-weave) | 1.573 | 0.941 | 1.257 | 98.7 | 0.26 |
| Wrinkle (plain-weave) | 1.830 | 2.876 | 2.353 | 97.5 | 0.35 |
| Bubble (twill-weave) | 1.209 | 2.344 | 1.777 | 97.1 | 0.31 |
| Wrinkle (twill-weave) | 3.882 | 4.652 | 4.267 | 96.8 | 0.39 |
| Bubble (satin-weave) | 3.528 | 2.741 | 3.135 | 95.7 | 0.43 |
| Wrinkle (satin-weave) | 7.387 | 5.963 | 6.675 | 96.3 | 0.55 |
Table 7. Comparison of detection accuracy of different methods.
| Defect Type | Detection Accuracy (%): Gauss–Markov | SVDD | Gabor | Morphology | This Paper |
|---|---|---|---|---|---|
| Bubble | 95.71 | 92.84 | 89.38 | 86.12 | 97.17 |
| Wrinkle | 94.38 | 89.34 | 87.14 | 90.34 | 96.87 |
| Total | 95.05 | 91.09 | 88.26 | 88.23 | 97.02 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
