Communication

Research on the Grinding Quality Evaluation of Composite Materials Based on Multi-Scale Texture Fusion Analysis

1 School of Mechanical and Electrical Engineering, Soochow University, Suzhou 215000, China
2 Institute of Aerospace Materials and Technology, Beijing 100076, China
* Authors to whom correspondence should be addressed.
Materials 2025, 18(15), 3540; https://doi.org/10.3390/ma18153540
Submission received: 16 June 2025 / Revised: 21 July 2025 / Accepted: 24 July 2025 / Published: 28 July 2025
(This article belongs to the Section Advanced Composites)

Abstract

To address the challenges of manual inspection dependency, low efficiency, and high costs in evaluating the surface grinding quality of composite materials, this study investigated machine vision-based surface recognition algorithms. We proposed a multi-scale texture fusion analysis algorithm that innovatively integrated luminance analysis with multi-scale texture features through decision-level fusion. Specifically, a modified Rayleigh parameter was developed during luminance analysis to rapidly pre-segment unpolished areas by characterizing surface reflection properties. Furthermore, we enhanced the traditional Otsu algorithm by incorporating global grayscale mean (μ) and standard deviation (σ), overcoming its inherent limitations of exclusive reliance on grayscale histograms and lack of multimodal feature integration. This optimization enables simultaneous detection of specular reflection defects and texture uniformity variations. To improve detection window adaptability across heterogeneous surface regions, we designed a multi-scale texture analysis framework operating at multiple resolutions. Through decision-level fusion of luminance analysis and multi-scale texture evaluation, the proposed algorithm achieved 96% recognition accuracy with >95% reliability, demonstrating robust performance for automated surface grinding quality assessment of composite materials.

1. Introduction

The growing demand for lightweight, high-performance spacecraft has established fiber-reinforced composites and honeycomb sandwich structures as core materials for primary load-bearing components, owing to their exceptional specific strength and corrosion resistance [1,2,3]. However, the reliability and service life of these structures critically depend on the bonding quality between composite components. Surface grinding is widely employed to enhance interfacial bonding strength and fatigue life by removing contaminants, controlling roughness, and generating microscopic mechanical interlocking structures [4]. Studies confirm that adhesive performance is significantly influenced by the uniformity of surface topography and multi-scale textural characteristics. Conversely, excessive abrasion or uneven texture distribution may induce stress concentration [5], potentially leading to premature interfacial failure. Current surface grinding quality assessment predominantly relies on contact profilometers or empirical visual inspection, methods limited by low efficiency, localized sampling bias, and inadequate characterization of full-field textural features [6]. Particularly for complex-curvature spacecraft components, conventional methods fail to enable non-contact, high-precision, real-time in situ inspection [7]. In recent years, machine vision-based nondestructive evaluation techniques have gained prominence in surface quality assessment due to their high-resolution imaging, full-field analysis capabilities, and superior morphological feature extraction [8,9]. However, significant challenges persist in characterizing complex textures, integrating multi-scale features, and quantifying morphology. For instance, abraded surfaces contain multi-scale textures ranging from micrometer-scale grooves to millimeter-scale undulations, where single-scale analysis tends to overlook critical features [10].
Additionally, illumination conditions, material anisotropy, and surface reflectivity may distort image intensity [11], adversely affecting texture feature extraction accuracy.
In this study, the composite components exhibit surface reflectivity with directional scattering characteristics. Therefore, we proposed a luminance analysis method incorporating a modified Rayleigh distribution, which enables rapid pre-segmentation of non-abraded surfaces under experimental illumination conditions. Subsequently, the conventional Otsu algorithm, which performs global threshold segmentation based on grayscale histograms, was enhanced by integrating texture and gradient modal features. Through multi-scale texture analysis, the hierarchical structural information of abraded surfaces is systematically captured. Unlike traditional evaluation algorithms based on the correlation between image features and surface roughness, this study innovatively incorporates luminance and multi-scale texture fusion analysis, significantly enhancing the recognition accuracy for different surface features of the material.

2. Algorithm Core Framework

Conventional image processing methodologies primarily encompass template matching, edge detection, and texture analysis. For instance, template matching evaluates quality by comparing deviations between reference templates and target images, while texture analysis quantifies surface roughness and uniformity [12]. However, these approaches demonstrate constrained generalization capabilities due to their reliance on manual feature engineering. Statistics-based grayscale histogram analysis reflects surface finish, and edge histogram methods assess edge distribution uniformity, yet such techniques prove inadequate for processing images under high dynamic range (HDR) scenarios or complex illumination conditions [13]. This paper develops a surface polishing quality evaluation algorithm grounded in multi-scale texture fusion, which strategically adapts to varying material dimensions through differentiated assessment protocols. Capitalizing on the discriminative advantages of texture analysis, the framework dynamically selects evaluation metrics based on workpiece scale: direct root mean square error (RMSE) quantification governs local texture variation characterization for smaller specimens, while an integrated luminance–texture fusion decision architecture orchestrates quality assessment for larger-scale materials. The operational paradigm of this scale-adaptive algorithm is systematically depicted in Figure 1, demonstrating the coherent integration of multi-modal texture features through cascaded fusion modules.

2.1. Texture Feature Analysis

For fiber-reinforced polymer (FRP) composites with smaller dimensions where surface luminance exhibits negligible impact on subsequent image processing, localized root mean square error (RMSE) quantification can be adopted as the principal textural metric to characterize regional texture variations. The FRP material, composed of a polymer matrix reinforced with glass fibers, demonstrates three predominant surface quality defects during polishing processes: (1) directional striated textures caused by exposed fiber bundles, (2) cloud-like mottling from resin inhomogeneity, and (3) linear scratch patterns induced by mechanical abrasion. Whereas conventional grayscale statistical parameters prove inadequate in discriminating these defect modalities, the RMSE feature demonstrates enhanced discriminative capacity through systematic quantification of localized intensity fluctuations, thereby facilitating composite texture pattern characterization [2].
$$\mathrm{RMSE}(x,y) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}$$
where $N$ represents the number of data points, $y_i$ denotes the $i$-th actual measurement value, and $\hat{y}_i$ corresponds to the $i$-th predicted or expected value. In fiber-exposed regions of material surfaces, localized high-frequency directional grayscale transitions (caused by fiber bundle protrusions) induce significantly elevated RMSE values, whereas resin-rich areas exhibit low-frequency grayscale gradients due to residual matrix accumulation. This differential response enables preliminary quality assessment through spatial RMSE distribution analysis.
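As a rough illustration, the localized RMSE map can be computed with a sliding window. The paper does not specify the predictor $\hat{y}_i$, so this sketch assumes the window mean plays that role (which reduces the metric to a local standard deviation); Python is used for illustration here, while the study's actual implementation is MATLAB-based.

```python
import numpy as np

def local_rmse_map(img, win=15):
    """Slide a win x win window over a grayscale image and compute, at each
    position, the RMSE between pixel intensities and the window mean.
    Using the window mean as the expected value y_hat is an assumption."""
    h, w = img.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = img[y:y + win, x:x + win].astype(float)
            out[y, x] = np.sqrt(np.mean((patch - patch.mean()) ** 2))
    return out
```

On a striped "fiber-exposed" test pattern this map is uniformly elevated, while on a flat "resin-rich" region it is near zero, matching the differential response described above.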

2.2. Luminance Feature Characterization

The composite materials investigated in this study exhibit distinct optical behaviors depending on their surface conditions. Non-abraded surfaces approximate specular optical surfaces, producing concentrated directional reflectance that manifests as high-luminance regions during imaging. In contrast, abraded surfaces predominantly demonstrate diffuse reflection due to microscale structural modifications [14]. For fiber-reinforced polymers (FRPs), surface abrasion generates microstructural features—including fiber alignment variations and resin–fiber interface irregularities—which induce stochastic light scattering distributions.
The scale parameter σ in the Rayleigh distribution directly correlates with surface roughness metrics. Elevated σ values signify heightened scattering intensity, characteristic of non-abraded regions with high specular reflectance. Conversely, diminished σ values correspond to the uniform diffuse reflectance observed on abraded surfaces, enabling quantitative discrimination of light-scattering behaviors across progressive abrasion stages. To address directional scattering patterns inherent in composite surfaces, a modified Rayleigh distribution is introduced into the luminance analysis framework. This adaptation enhances the precision of reflectance characterization, particularly for surfaces with anisotropic scattering properties. The probability density function of the Rayleigh distribution is defined as follows [15]:
$$p(I) = \frac{2I}{\sigma^2}\, e^{-I^2/\sigma^2}, \quad I \ge 0$$
The scale parameter $\sigma$, central to the Rayleigh distribution, is derived from the standard deviations of two independent orthogonal Gaussian variables $X$ and $Y$. In this experimental framework, $\sigma$ serves as a surface roughness proxy parameter. Post-abrasion surface modifications induce a decrease in $\sigma$ values, corresponding to diminished high-luminance regions caused by reduced specular reflectance. This parametric relationship enables rapid localization of specular reflection zones through adaptive luminance threshold segmentation.
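A minimal sketch of how $\sigma$ might be estimated from luminance samples under the Rayleigh form above, paired with the fixed luminance pre-threshold of 0.65 reported later in Section 3.2. The maximum-likelihood estimator and both function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rayleigh_sigma(intensities):
    """MLE of the scale parameter for the form p(I) = (2I/s^2) exp(-I^2/s^2):
    setting d(log L)/ds = 0 gives sigma_hat = sqrt(mean(I^2))."""
    i = np.asarray(intensities, dtype=float)
    return np.sqrt(np.mean(i ** 2))

def presegment_bright(img, t=0.65):
    """Flag likely non-abraded (specular) pixels above a fixed luminance
    threshold; t = 0.65 follows the experimental setting in Section 3.2,
    assuming intensities normalized to [0, 1]."""
    return img >= t
```

A surface with stronger specular scattering yields a larger estimated $\sigma$ than a diffusely scattering one, which is the discrimination the luminance stage relies on.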

2.3. Enhanced Dual-Threshold Otsu Algorithm

The Otsu algorithm, fundamentally based on grayscale histogram analysis, operates by maximizing inter-class variance to optimally separate foreground features (e.g., non-abraded specular regions) from background materials. However, conventional implementations are constrained by single-modal sensitivity—relying exclusively on grayscale information without incorporating complementary texture and gradient features. This limitation becomes particularly critical in composite material analysis, where localized specular highlights and substrate textures may exhibit analogous grayscale values, inducing misclassification. Furthermore, surface abrasion-induced micro-scratches and noise artifacts distort grayscale distributions, frequently generating multimodal histogram profiles that challenge traditional Otsu’s unimodal threshold selection, as evidenced by experimental specimens with manually abraded surfaces exhibiting heterogeneous abrasion levels:
$$T_{\mathrm{pre}} = \mu + 2\sigma$$
Within the pre-segmented region satisfying $I \ge T_{\mathrm{pre}}$, where $\mu$ denotes the global grayscale mean and $\sigma$ represents the grayscale standard deviation, the optimal secondary threshold $T_b$ is determined through constrained Otsu optimization [12]:
$$T_b = \arg\max_{T}\; \omega_l(T)\,\omega_h(T)\,\bigl(\mu_l(T) - \mu_h(T)\bigr)^2$$
Let $\omega_l$ and $\omega_h$ denote the pixel proportion weights of the low- and high-luminance regions, respectively, with $\mu_l$ and $\mu_h$ representing their intra-class mean intensities. The enhanced algorithm achieves rapid pre-segmentation of high-luminance zones, followed by synergistic fusion of luminance signatures and multi-scale texture descriptors through logical OR operations, enabling concurrent detection of specular anomalies and textural irregularities. A comparison between the traditional Otsu algorithm and the improved Otsu algorithm proposed in this study is shown in Table 1.
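The two-stage thresholding can be sketched as follows: a histogram-based Otsu search maximizing $\omega_l\,\omega_h\,(\mu_l - \mu_h)^2$, applied only to pixels passing the $\mu + 2\sigma$ pre-segmentation. This is a simplified Python reconstruction under stated assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classical Otsu: maximize omega_l * omega_h * (mu_l - mu_h)^2
    over candidate thresholds taken from histogram bin centers."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w_l, w_h = p[:k].sum(), p[k:].sum()
        if w_l == 0 or w_h == 0:
            continue
        mu_l = (p[:k] * centers[:k]).sum() / w_l
        mu_h = (p[k:] * centers[k:]).sum() / w_h
        var = w_l * w_h * (mu_l - mu_h) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

def dual_threshold_otsu(img):
    """Pre-segment with T_pre = mu + 2*sigma, then run Otsu only on
    pixels above T_pre to obtain the secondary threshold T_b."""
    t_pre = img.mean() + 2 * img.std()
    region = img[img >= t_pre]
    if region.size < 2:
        return t_pre, t_pre
    return t_pre, otsu_threshold(region)
```

Restricting the Otsu search to the pre-segmented bright region is what lets the secondary threshold discriminate within the specular zone rather than across the whole multimodal histogram.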

2.4. Multi-Scale Texture Characterization

Multi-scale texture analysis is a technique that extracts textural features across varying spatial resolutions or frequency domains, enabling comprehensive characterization of hierarchical surface structures from macroscopic to microscopic levels. By integrating texture features from different scales, this approach achieves a more complete representation of complex surfaces [14]. In this study, unabraded composite specimens exhibit physical pit defects with significant dimensional variations (0.1–2 mm), which appear as discrete points in imaging data. Under high-resolution (microscale) observation, these features display sharp edges but demonstrate noise sensitivity, whereas lower-resolution (macroscale) analysis yields blurred textures with enhanced noise immunity. Conventional single-scale windowing methods fail to concurrently capture multi-dimensional features: smaller windows exhibit suboptimal window sizing effects—missing large-scale textures while amplifying noise interference—whereas larger windows cause detail loss and positioning inaccuracy.
The proposed algorithm employs a three-tier Gaussian pyramid structure to construct scale space, generating multi-scale image representations at original resolution, 1/2 down sampling, and 1/4 down sampling levels. Each pyramid tier is processed using the following equation [14]:
$$G_L(x,y) = \sum_{m,n} \omega(m,n)\, G_{L-1}(2x+m,\, 2y+n)$$
Let $L \in \{0, 1, 2\}$ denote the pyramid layer index, where $L = 0$ corresponds to the original-resolution image. The scale transition is governed by a $5 \times 5$ Gaussian kernel $\omega(m,n)$ with spatial coordinates $m, n \in [-2, 2]$, ensuring scale-space continuity.
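The pyramid equation amounts to blurring with the $5\times5$ kernel and keeping every second row and column. The paper does not list the kernel weights, so this sketch assumes the standard binomial approximation (outer product of [1, 4, 6, 4, 1]/16), a common choice for Gaussian pyramids.

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed 5x5 Gaussian kernel omega(m, n): separable binomial weights.
_K1 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
KERNEL = np.outer(_K1, _K1)  # sums to 1, preserving mean intensity

def pyramid(img, levels=3):
    """Three-tier Gaussian pyramid: blur with the 5x5 kernel, then keep
    every second row/column, yielding 1, 1/2, and 1/4 resolutions."""
    out = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        blurred = convolve(out[-1], KERNEL, mode='reflect')
        out.append(blurred[::2, ::2])
    return out
```

Evaluating the blurred image at even coordinates $(2x, 2y)$ is exactly the summation $\sum_{m,n}\omega(m,n)\,G_{L-1}(2x+m, 2y+n)$ above.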
Post-abrasion composite surfaces exhibit significant textural complexity gradients, rendering fixed-size windowing ineffective for simultaneously accommodating macroscale characterization and microscale preservation. Unabraded high-reflectance zones necessitate large windows (e.g., 60 × 60 pixels at 1/4 scale) to capture global signatures, while micro-scratches demand smaller windows (15 × 15 pixels at original scale) to retain edge fidelity. To address this, the proposed algorithm implements a scale-adaptive windowing scheme with geometrically progressive dimensions, while maintaining consistent physical dimensions [10]:
$$W_L = W_0 \cdot 2^{L}\,(1 + 0.2L)$$
Let W 0 denote the base window size at the original scale, and 0.2 L represent the scale compensation factor to mitigate window undersizing at higher pyramid levels. The local texture standard deviation at scale L is calculated as follows [10]:
$$\sigma_L(x,y) = \sqrt{\frac{1}{W_L^2} \sum_{i,j} \bigl(I_L(x+i,\, y+j) - \mu_L\bigr)^2}$$
The local mean μ L quantifies the intensity uniformity within the analysis window. The maximum fusion strategy, defined in Equation (7), operates under the principle of saliency preservation [10]:
$$\sigma_{\mathrm{fused}}(x,y) = \max\bigl(\sigma_0(x,y),\, \sigma_1(x,y),\, \sigma_2(x,y)\bigr)$$
This strategy preserves salient features across scales through mathematical equivalence to parallel detector configurations, effectively enhancing defect recall rates.
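The maximum fusion of Equation (7) requires the three per-scale maps to live on a common grid. The paper does not state how coarser maps are brought back to the original resolution; this sketch assumes nearest-neighbor repetition, one simple option.

```python
import numpy as np

def fuse_max(sigma_maps, target_shape):
    """Max-fusion of per-scale texture std-dev maps: upsample each coarser
    map to the original grid by nearest-neighbor repetition (an assumed
    choice), then take the pixel-wise maximum per Eq. (7)."""
    upsampled = []
    for m in sigma_maps:
        ry = max(1, target_shape[0] // m.shape[0])
        rx = max(1, target_shape[1] // m.shape[1])
        up = np.repeat(np.repeat(m, ry, axis=0), rx, axis=1)
        upsampled.append(up[:target_shape[0], :target_shape[1]])
    return np.maximum.reduce(upsampled)
```

Because the maximum is taken pixel-wise, a defect salient at any single scale survives into the fused map, which is the parallel-detector behavior described above.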
The algorithm comprehensively assesses composite surface abrasion quality by fusing luminance and texture features through a logical OR operation, $M_{\mathrm{final}} = M_{\mathrm{bright}} \lor M_{\mathrm{texture}}$, effectively preventing detection omissions inherent to single-feature methods while ensuring high recall rates. Morphological post-processing guarantees geometric compatibility between structural elements and textural characteristics, enhancing detection accuracy and robustness: an opening operation with structuring element $B_{\mathrm{open}} = \mathrm{disk}(r_1)$, $r_1 = 3$ pixels, eliminates discrete noise points, and a closing operation with $B_{\mathrm{close}} = \mathrm{disk}(r_2)$, $r_2 = 5$ pixels, bridges small gaps. The radius relationship is derived as follows [12]:
$$r_2 = r_1 + \frac{\mathrm{median}(W_L)}{4}$$
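The decision-level fusion and morphological cleanup can be sketched as below, using SciPy's binary morphology as a stand-in for the paper's MATLAB operations; the radii follow the stated $r_1 = 3$, $r_2 = 5$ pixels.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def disk(r):
    """Disk-shaped structuring element of radius r."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return (x * x + y * y) <= r * r

def postprocess(mask_bright, mask_texture, r1=3, r2=5):
    """Decision-level fusion M_final = M_bright OR M_texture, then an
    opening with disk(r1) to drop isolated noise points and a closing
    with disk(r2) to bridge small gaps in detected regions."""
    m = mask_bright | mask_texture
    m = binary_opening(m, structure=disk(r1))
    m = binary_closing(m, structure=disk(r2))
    return m
```

The opening discards any detection smaller than the $r_1$ disk, so isolated false positives vanish while extended defect regions pass through intact.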

3. Experiments

3.1. Experimental Image Acquisition

In the present study, all images processed by the algorithm were acquired using the image acquisition system illustrated in Figure 2. This system primarily comprises a Hikrobot MV-CU200-20GM monochrome camera, a Huakang Technology ring light source (model R120-72-23), an 8 mm focal-length C-mount lens, a mounting bracket, computer hardware (Intel i7-12700H, RTX 3060 Laptop GPU), and a MATLAB-based software development environment that integrates the Image Processing Toolbox with custom algorithm modules. Detailed parameters of the camera and lens are provided in Table 2. Prior to image acquisition, different areas of the sample were manually polished in an irregular manner to obtain a rich dataset.
The acquisition scheme is as follows: the composite material sample is placed face-up on the bracket support plate. The resin parameters for the glass fiber reinforced polymer (GFRP) composites are presented in Table 3. After adjusting the sample’s position, it is kept stationary while images are manually captured. The sample position is then adjusted, and the process is repeated. For the lighting scheme, the Huakang Technology ring light source R120-72-23 is used, with the camera capturing the light signals reflected from the sample’s surface after being emitted by the light source. The captured images are shown in Figure 3. Moreover, the algorithm developed in this study exhibits strong adaptability to varying lighting conditions.
Due to variations in the applied polishing force on the sample surface, different texture features emerge, which can be categorized into three conditions: unpolished, moderately polished, and over-polished. In this experiment, twenty groups of 2400 × 1400 BMP sample images were acquired, with each group exhibiting distinct visual characteristics—namely, the simultaneous presence of both polished and unpolished texture features, as illustrated in Figure 3c.

3.2. Experimental Environment and Evaluation Metrics

In this experiment, the computer hardware configuration is as follows: the CPU is an Intel i7-12700H, and the GPU is an RTX 3060 Laptop. The core software environment is MATLAB R2023b, which integrates the Image Processing Toolbox along with custom algorithm modules to support real-time image analysis and visualization. During the experiment, the parameters for multi-scale ratios were set to 1, 0.5, and 0.25; the window size parameters were configured at 15, 30, and 60; the brightness threshold was set to 0.65 based on environmental variables such as the light source; and the overall standard deviation threshold was set to 0.2. The acquired images were subsequently analyzed and processed, where the algorithm designates polished areas in red and unpolished areas in blue. The accuracy of the algorithm, denoted as P, is evaluated by comparing the red-to-blue area ratio A1 with the actual polished-to-unpolished area ratio A2, as illustrated in the following formula:
$$P = 1 - \frac{\lvert A_1 - A_2 \rvert}{A_2}$$
In evaluating reliability, the reliability $C$ is defined as the ratio of the number of experimental groups whose extracted features match the designed features, $N_{\mathrm{satisfying}}$, to the total number of experimental groups, $N_{\mathrm{all}}$, as shown in the following formula:
$$C = \frac{N_{\mathrm{satisfying}}}{N_{\mathrm{all}}}$$
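Both metrics are direct transcriptions of the formulas above; for example, the reported $A_1 = 0.961$ against $A_2 = 1.0$ gives $P = 0.961$, i.e., 96.1% accuracy.

```python
def accuracy(a1, a2):
    """P = 1 - |A1 - A2| / A2: relative agreement between the measured
    polished/unpolished area ratio A1 and the ground-truth ratio A2."""
    return 1 - abs(a1 - a2) / a2

def reliability(n_satisfying, n_all):
    """C = N_satisfying / N_all: fraction of experimental groups whose
    extracted features match the designed features."""
    return n_satisfying / n_all
```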

3.3. Experimental Reliability Analysis

3.3.1. Analysis of RMSE-Based Results

For fiber-reinforced polymer (FRP) composites with smaller dimensions where luminance interference remains negligible, a multi-scale analytical framework incorporating RMSE-based image processing enables effective visual evaluation of surface polishing quality. The assessment system utilizes chromatic encoding where red designates fully polished zones, yellow signifies adequately processed regions, and blue marks unpolished or substandard areas, with systematic quantification of incomplete polishing ratios. Furthermore, each evaluated image is segmented into 16 standardized sectors, triggering re-polishing protocols when any sector exceeds a 30% threshold of defective area coverage. Experimental results validate the technical validity and operational feasibility of this surface quality assessment methodology across diverse polishing scenarios, as empirically validated in Figure 4.
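The sector-based re-polishing rule above can be sketched as a 4 × 4 grid check over a binary defect mask; the grid layout is an assumption (the paper states 16 sectors without specifying their arrangement).

```python
import numpy as np

def sectors_needing_repolish(defect_mask, grid=4, limit=0.30):
    """Split a binary defect mask into grid x grid sectors (16 by default)
    and flag any sector whose defective-area fraction exceeds the 30%
    threshold, triggering the re-polishing protocol."""
    h, w = defect_mask.shape
    flagged = []
    for i in range(grid):
        for j in range(grid):
            sec = defect_mask[i * h // grid:(i + 1) * h // grid,
                              j * w // grid:(j + 1) * w // grid]
            if sec.mean() > limit:
                flagged.append((i, j))
    return flagged
```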

3.3.2. Reliability Analysis Based on Dual Feature Fusion

First, the accuracy of the evaluation algorithm was tested. In the test images, certain areas of the surface were fully polished with A2 set to 1.0. After feature extraction and computation by the evaluation algorithm, A1 was determined to be 0.961, which corresponds to an accuracy of 96.1%. Since this exceeds the predetermined accuracy threshold of 95%, the obtained processing quality characterization parameters are deemed accurate. The experimental results are shown in Figure 5.
Different images were captured from multiple polishing regions, and the evaluation algorithm was applied to extract features from each image. If the extracted features substantially matched the designed features, the corresponding group was counted toward N satisfying . Finally, the reliability of the evaluation algorithm was calculated. Some of the original images and the corresponding recognition results are shown in Figure 6. Comparative analysis of this algorithm versus other algorithms is presented in Table 4.

4. Conclusions

Our algorithm resolves the limitations of traditional methods, such as the low coverage of contact profilometers restricted to point sampling (unable to capture full-field texture gradients) and the subjectivity of visual inspection at macroscopic scales (with blind spots for micro-defects), achieving pixel-level full-field analysis with high resolution, rapid processing, and superior accuracy. Integrated with robotic arms on the production line, the algorithm accomplishes fully automated inspection and grinding operations.
This study utilizes machine vision technology and employs an innovative multi-scale texture fusion analysis method to evaluate the polishing quality of composite material surfaces.
1. Leveraging the optical characteristics of the material surface, the algorithm incorporates brightness analysis combined with a modified Rayleigh distribution to perform preliminary segmentation of unpolished high-brightness regions, thereby improving recognition efficiency.
2. An optimized Otsu algorithm is introduced, incorporating the global gray-value mean $\mu$ and standard deviation $\sigma$. This modification overcomes the traditional Otsu algorithm's limitations, namely its reliance on single gray histograms and its ineffectiveness at integrating multimodal features such as texture and gradient, enabling the simultaneous detection of both specular reflection defects and texture uniformity variations.
3. A maximum-value fusion strategy integrates features extracted across different scales. This approach effectively preserves significant features at window sizes of 15, 30, and 60, improving the detection rate of minor pits.
4. By optimizing algorithm parameters based on environmental conditions, the experimental accuracy was increased to 96%, while algorithm reliability reached over 95%.

Author Contributions

Conceptualization, Y.W.; Methodology, Y.W.; Software, J.L. (Jiacheng Li) and J.L. (Jiachang Liu); Validation, C.W., M.P., J.L. (Jiacheng Li) and J.L. (Jiachang Liu); Formal analysis, C.W. and M.P.; Investigation, J.L. (Jiacheng Li) and J.L. (Jiachang Liu); Data curation, Z.L. and A.G.; Writing—original draft, J.L. (Jiacheng Li), J.L. (Jiachang Liu) and W.S.; Writing—review & editing, J.L. (Jiacheng Li), J.L. (Jiachang Liu) and W.S.; Visualization, C.W. and M.P.; Supervision, Z.L. and L.L.; Project administration, L.L. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Singh, J.; Srivastawa, K.; Jana, S.; Dixit, C. Advancements in lightweight materials for aerospace structures: A comprehensive review. Acceleron Aerosp. J. 2024, 2, 173–183. [Google Scholar] [CrossRef]
  2. Hamzat, A.K.; Murad, S.; Adediran, I.A.; Asmatulu, E.; Asmatulu, R. Fiber-reinforced composites for aerospace, energy, and marine applications: An insight into failure mechanisms under chemical, thermal, oxidative, and mechanical load conditions. Adv. Compos. Hybrid Mater. 2025, 8, 152. [Google Scholar] [CrossRef]
  3. Montgomery-Liljeroth, E.; Schievano, S.; Burriesci, G. Elastic properties of 2D auxetic honeycomb structures-a review. Appl. Mater. Today 2023, 30, 101722. [Google Scholar] [CrossRef]
  4. Liu, Y.; Zhang, C.; Dong, X. A survey of real-time surface defect inspection methods based on deep learning. Artif. Intell. Rev. 2023, 56, 12131–12170. [Google Scholar] [CrossRef]
  5. Naat, N.; Boutar, Y.; Naïmi, S.; Mezlini, S.; Da Silva, L.F.M. Effect of surface texture on the mechanical performance of bonded joints: A review. J. Adhes. 2023, 99, 166–258. [Google Scholar] [CrossRef]
  6. Jha, S.B.; Babiceanu, R.F. Deep CNN-based visual defect detection: Survey of current literature. Comput. Ind. 2023, 148, 103911. [Google Scholar] [CrossRef]
  7. Le, H.F.; Zhang, L.J.; Liu, Y.X. Surface Defect Detection of Industrial Parts Based on YOLOv5. IEEE Access 2022, 10, 130784–130794. [Google Scholar] [CrossRef]
  8. Singh, S.A.; Desai, K.A. Automated surface defect detection framework using machine vision and convolutional neural networks. J. Intell. Manuf. 2023, 34, 1995–2011. [Google Scholar] [CrossRef]
  9. Bayraktar, E. Efficient and Reliable Surface Defect Detection in Industrial Products Using Morphology-Based Techniques. In International Symposium on Intelligent Manufacturing and Service Systems; Springer Nature: Singapore, 2023. [Google Scholar] [CrossRef]
  10. Üzen, H.; Türkoğlu, M.; Yanikoglu, B.; Hanbay, D. Swin-MFINet: Swin transformer based multi-feature integration network for detection of pixel-level surface defects. Expert Syst. Appl. 2022, 209, 118269. [Google Scholar] [CrossRef]
  11. Slavov, D.V.; Hristov, V.D. 3D machine vision system for defect inspection and robot guidance. In Proceedings of the 2022 57th International Scientific Conference on Information, Communication and Energy Systems and Technologies (ICEST), Ohrid, North Macedonia, 16–18 June 2022. [Google Scholar] [CrossRef]
  12. Xia, B.; Luo, H.; Shi, S. Improved Faster R-CNN Based Surface Defect Detection Algorithm for Plates. Comput. Intell. Neurosci. 2022, 2022, 3248722. [Google Scholar] [CrossRef] [PubMed]
  13. Huang, T.; Zhao, L.; Zhang, Y.; Tian, J.; Zhou, W. A Lightweight Fine-Grained Perception Defect Inspection Network Under Multitask Learning on Highly Reflective Surface. IEEE Trans. Instrum. Meas. 2024, 73, 5008310. [Google Scholar] [CrossRef]
  14. Muravyov, S.V.; Nguyen, D.C. Method of interval fusion with preference aggregation in brightness thresholds selection for automatic weld surface defects recognition. Measurement 2024, 236, 114969. [Google Scholar] [CrossRef]
  15. Wang, Y.; Wang, X.; Hao, R.; Lu, B.; Huang, B. Metal Surface Defect Detection Method Based on Improved Cascade R-CNN. ASME J. Comput. Inf. Sci. Eng. 2024, 24, 041002. [Google Scholar] [CrossRef]
Figure 1. Core structure of algorithms.
Figure 2. Image acquisition system.
Figure 3. (a) Fully sanded texture. (b) Completely unsanded texture. (c) Mixed and unsanded texture.
Figure 4. Recognition results.
Figure 5. Accuracy test chart. (a) Original image. (b) Recognition results.
Figure 6. Part of the original image and the recognition result.
Table 1. Comparison between the traditional Otsu algorithm and the enhanced Otsu algorithm.
| Limitation Category | Traditional Otsu | Enhanced Otsu Solution |
|---|---|---|
| Noise Sensitivity | High noise sensitivity | Morphological preprocessing |
| Uneven Illumination | Single global threshold | Local adaptive thresholding |
| Multi-threshold Limitation | Binary segmentation only | Watershed post-processing |
| Material-Specific Adaptation | Fixed threshold failure | Gray histogram redistribution |
Table 2. Image acquisition camera and lens parameter table.
| Property | Value |
|---|---|
| Resolution | 5120 × 3840 |
| Pixel size (μm) | 1.4 |
| Working distance (mm) | 185 ± 1 |
| Frame rate (frame·s⁻¹) | 5.9 |
| Field of view (mm²) | 107.5 × 80.2 |
| Physical distance per pixel (μm) | 20.8 |
Table 3. Resin parameters.
| Property | Value |
|---|---|
| Ply Thickness | 120 ± 5 μm |
| Average Resin Layer Thickness | 15 ± 3 μm |
| Fiber Volume Fraction | 60 ± 2% |
Table 4. Comparison of different algorithms.
| Algorithm | Accuracy | Reliability | Computational Efficiency |
|---|---|---|---|
| Our Method | 96.0% | >95% | 185 ms/frame |
| Otsu Method | 82.3% | 78% | 92 ms/frame |
| Improved Faster R-CNN | 89.7% | 85% | 320 ms/frame |
| Swin-MFINet | 91.5% | 88% | 410 ms/frame |

Share and Cite

MDPI and ACS Style

Wang, Y.; Liu, Z.; Ling, L.; Guo, A.; Li, J.; Liu, J.; Wang, C.; Pan, M.; Song, W. Research on the Grinding Quality Evaluation of Composite Materials Based on Multi-Scale Texture Fusion Analysis. Materials 2025, 18, 3540. https://doi.org/10.3390/ma18153540


