Research on Synthetic Data Methods and Detection Models for Micro-Cracks
Highlights
- A Poisson image editing-based synthesis pipeline produces visually consistent micro-crack samples and improves data availability without degrading realism.
- The proposed CST-YOLO (YOLOv10 + LAPM + SCS-Former + SOFB) achieves 0.926 mAP@0.5:0.95 at 139 FPS on real-image evaluation, with complementary gains verified by ablations.
- The combined “data realism + architecture robustness” strategy enables accurate and efficient micro-crack detection under challenging illumination, shadow, and clutter conditions.
- The real-time performance supports practical deployment in large-scale infrastructure inspection scenarios that require both precision and throughput.
Abstract
1. Introduction
1. Insufficient robustness under complex imaging degradation. Shadows, reflections, low illumination, motion blur, and compression noise are common in bridge inspections and markedly reduce the contrast between cracks and the background. Studies employing no-reference image quality assessment (NR-IQA) have shown that CNN performance varies significantly across image-quality thresholds [8]. However, most current work remains at the enhancement/filtering level, and there is no unified framework that jointly optimizes quality metrics, prediction uncertainty, and detection networks.
2. Continuity expression and multi-scale compatibility for small cracks remain bottlenecks. Cracks often occur in thin structures, so endpoint and bifurcation information is easily lost. Scale variation under UAV perspectives is large, making it difficult to balance recall and false-detection suppression with a single threshold and single-scale features. Multi-scale detection and quantification workflows can partially mitigate this issue [10], but when extremely thin cracks, complex textures, and non-uniform lighting overlap, detections remain prone to fragmentation, misses, or confusion with pseudo-cracks.
3. Cross-scenario generalization and data dependence are prominent issues. Significant differences exist in bridge surface weathering, pollutant types, and repair status, and there are notable domain differences relative to roads, tunnels, and other scenarios. Models often rely on specific data distributions. Data augmentation can alleviate sample shortages and improve multi-class damage detection performance [12], but discrepancies between augmented distributions and real distributions may introduce biases. Furthermore, there are no guidelines for selecting augmentation strategies that account for crack “geometric priors,” leading to instability in cross-bridge-type and cross-season generalization.
4. Lack of a systematic trade-off paradigm between real-time deployability and fine quantification. Object detection is well-suited to real-time deployment, but bounding-box results cannot support metrics such as width or length. Segmentation/3D mapping can be used for quantification and visualization, but the labeling cost, computational requirements, and engineering process complexity are high. Although lightweight designs (e.g., the lightweight YOLO11-based bridge crack detector) have improved edge feasibility [14], there remains a lack of a unified, reproducible engineering evaluation and optimization framework under multi-constraint conditions.
1. Proposed a Poisson Image Editing-based micro-crack synthetic data pipeline to alleviate small sample and distribution inadequacy problems. To address the high annotation cost, scarcity of real samples, and difficulty in covering diverse backgrounds, this paper constructs a crack synthesis strategy based on gradient-domain fusion (Poisson image editing), achieving natural transitions and texture consistency between the crack foreground and different concrete backgrounds. Compared to traditional copy-paste augmentation methods, this strategy improves the usability of training data without significantly reducing realism, thereby enhancing the model’s generalization ability at the data level.
2. Proposed CST-YOLO: An integrated framework for micro-crack detection in complex scenes, forming a design paradigm of “environment decoupling—global perception—micro-feature enhancement”. Building on the YOLOv10 detection framework, this paper introduces Complex-Scene-Tolerant YOLO (CST-YOLO), which addresses three key challenges: environmental disturbances, such as complex illumination/shadows; long-range dependencies in crack topology; and the loss of high-frequency details in small targets. These challenges are modeled and optimized hierarchically, thereby improving the separability and detection stability of micro-cracks in complex structural scenarios.
3. Designed and integrated three key modules (LAPM, SCS-Former, SOFB), corresponding to environmental robustness, long-range modeling, and micro-scale enhancement.
- LAPM (Lighting-Adaptive Preprocessing Module): Inspired by Retinex theory, it suppresses illumination/shadow disturbances and stabilizes shallow feature distributions, achieving “environment decoupling”.
- SCS-Former (Spatial-Channel Sparse Transformer): Efficiently captures the long-range continuity of crack elongation structures through spatial sparse modeling and channel recalibration, enhancing “global perception” while maintaining computational efficiency.
- SOFB (Small Object Focus Block): Strengthens the expression of micro-crack details through high-frequency enhancement and robust channel attention, alleviating the background-dominated feature statistics that arise from sparse small targets and achieving “micro-feature enhancement”.
4. Achieved a balance between accuracy and speed under real-image evaluation protocols, and verified effectiveness and complementary contributions through comparative and ablation experiments. Using a 650-image dataset with both real and synthetic samples, this paper employs an evaluation strategy where “synthetic data is used only for training, and validation/testing use real images,” and conducts systematic comparisons and ablation analyses. The results show that CST-YOLO achieves 0.990 mAP@0.5 and 0.926 mAP@0.5:0.95 on real-image evaluation, with real-time inference at 139 FPS. Ablation experiments validate the complementary gains of LAPM, SCS-Former, and SOFB, providing evidence for the feasibility of large-scale engineering inspection deployment.
2. Poisson Image Editing-Based Image Data Construction Method
3. Robust Detection Model for Complex Scenes
- Feature entanglement caused by environmental interference (lighting, shadows, stains, and crack texture similarity).
- Difficulty in modeling long-range topological dependencies of cracks (elongated structures spanning large areas of pixels).
- Degradation of small high-frequency features in deep networks (down-sampling and semantic aggregation lead to loss of details).
3.1. Overall Model Architecture
1. Input and lighting-adaptive preprocessing layer. Unlike standard YOLO pipelines that feed raw images directly into the network, CST-YOLO introduces a Lighting-Adaptive Preprocessing Module (LAPM) at the front end. Concretely, the input image is first processed by two Stem blocks to perform rapid down-sampling and obtain an initial feature representation. The resulting features are then forwarded to the LAPM, where feature-level illumination decoupling and noise suppression attenuate the influence of lighting on crack feature extraction.
2. Multi-scale feature extraction backbone layer. The backbone adopts a CSP-based convolutional architecture to perform multi-scale feature extraction, generating feature maps at multiple resolutions via progressive down-sampling. Given that micro-cracks are highly sensitive to spatial resolution, CST-YOLO preserves relatively high-resolution shallow features at the backbone output, providing essential spatial detail to support subsequent small-target enhancement.
3. Feature fusion layer with global perception and micro-feature enhancement. To compensate for the limited ability of conventional CNNs to model long-range dependencies, CST-YOLO incorporates a Spatial-Channel Sparse Transformer (SCS-Former) at the end of the backbone to enhance understanding of the overall crack topology. By employing a sparsified attention mechanism, SCS-Former captures the spatial continuity of elongated cracks while keeping the computational overhead under control. To prevent micro-crack cues from being overwhelmed by dominant semantic information in deep layers, CST-YOLO introduces a Small Object Focus Block (SOFB) at the fusion stage. SOFB enhances high-frequency responses and performs channel recalibration, thereby reinforcing crack boundaries and fine-grained details. Together, SCS-Former and SOFB constitute the feature fusion module, enabling CST-YOLO to jointly leverage global structural awareness and local detail modeling.
3.2. LAPM
3.2.1. Theoretical Basis: Retinex-Based Image Decomposition Model
3.2.2. Approximate Modeling and Estimation of the Illumination Component
3.2.3. Reflectance Component Extraction and Differential Enhancement Mechanism
3.2.4. Residual Fusion and Feature Stability Analysis
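The four LAPM stages named in the subsections above admit a minimal sketch under assumed details: a box blur stands in for the paper's learnable illumination estimator, and `alpha` for its learned fusion weights.

```python
import numpy as np

def box_blur(x, k=7):
    """Separable box blur via integral images: a cheap smooth estimate of
    the large-scale illumination component L (Section 3.2.2 stand-in)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def lapm_sketch(img, alpha=0.5, eps=1e-6):
    """Retinex-style decoupling (3.2.1): I = R * L  =>  log R = log I - log L.

    The smooth blur approximates illumination L (3.2.2); the log-difference
    isolates the reflectance (crack texture) component (3.2.3); a residual
    connection keeps the original signal for stability (3.2.4).
    """
    log_i = np.log(img + eps)
    log_l = np.log(box_blur(img) + eps)   # illumination estimate
    reflect = log_i - log_l               # reflectance / high-frequency component
    return img + alpha * reflect          # residual fusion
```

On a uniformly lit region the reflectance term vanishes and the input passes through unchanged, which is the "feature stability" property the residual path is meant to guarantee.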
3.3. Spatial–Channel Sparse Transformer
3.3.1. Design Rationale and Overall Framework
3.3.2. Spatial Sparse Attention Modeling
3.3.3. Channel Attention Modeling
3.3.4. Spatial–Channel Feature Fusion Mechanism
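The spatial-sparse and channel branches described in these subsections can be sketched as follows (single-head, NumPy-only; hard top-k sparsification and a sigmoid-of-mean channel gate are assumptions standing in for the paper's learned sparsity pattern and recalibration):

```python
import numpy as np

def sparse_attention(q, k, v, topk=4):
    """3.3.2 sketch: each query attends only to its top-k keys (spatial sparsity).

    q, k, v: (N, d) token matrices.  Restricting each row of the score matrix
    to its k largest entries keeps long-range crack-continuity modeling while
    cutting the dense O(N^2) softmax cost.
    """
    scores = q @ k.T / np.sqrt(q.shape[1])             # (N, N) similarities
    kth = np.sort(scores, axis=1)[:, -topk][:, None]   # per-row k-th largest
    scores = np.where(scores >= kth, scores, -np.inf)  # drop the rest
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v

def channel_recalibrate(x):
    """3.3.3 sketch: SE-style gate reweighting channels by global statistics."""
    gate = 1.0 / (1.0 + np.exp(-x.mean(axis=0)))       # sigmoid of channel means
    return x * gate

def scs_former(x, topk=4):
    """3.3.4 sketch: fuse the branches -- sparse self-attention, then channel gate."""
    return channel_recalibrate(sparse_attention(x, x, x, topk=topk))
```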
3.4. Small Object Focus Module
3.4.1. Learnable Second-Order Differential High-Frequency Feature Enhancement
3.4.2. High-Frequency Enhancement and Residual Fidelity Mechanism
3.4.3. Batch-Normalization-Free Channel Attention Mechanism
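The three SOFB mechanisms above admit a compact sketch: a fixed 4-neighbour Laplacian stands in for the learnable second-order differential kernel, `gain` for the learned residual weighting, and a sigmoid of each channel's own spread for the BN-free attention.

```python
import numpy as np

def laplacian(x):
    """3.4.1 stand-in: second-order differential (fixed 4-neighbour Laplacian)."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)

def sofb_sketch(x, gain=0.8):
    """3.4.2 stand-in: high-frequency enhancement with residual fidelity.

    Subtracting the (scaled) Laplacian sharpens thin crack edges while the
    residual path keeps the base signal intact.
    """
    return x - gain * laplacian(x)

def bnfree_channel_attention(x):
    """3.4.3 stand-in: channel gate from each channel's own activation spread,
    with no batch statistics involved (hence 'BN-free').  x: (C, H, W)."""
    gate = 1.0 / (1.0 + np.exp(-x.std(axis=(1, 2))))
    return x * gate[:, None, None]
```

Note that flat regions pass through `sofb_sketch` unchanged; only high-frequency structure (crack boundaries) is amplified.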
4. Experimental Design and Result Analysis
4.1. Experimental Setup
4.1.1. Hardware and Software Environment
4.1.2. Training Configuration Parameter Settings and Evaluation Metrics
4.2. Dataset Construction and Composition
4.3. Overall Performance Evaluation of CST-YOLO
- One-stage detectors are the mainstream paradigm that balances speed and accuracy. We include SSD-Lite [38] as a representative lightweight detector, along with several YOLO-family baselines, including YOLOF [23], YOLOX [22], and YOLOv12 [25]. These methods have been widely adopted in both academia and industry and reflect the current mainstream performance of one-stage detectors in terms of efficiency and accuracy.
- Transformer-based detection frameworks [39] represent an emerging paradigm that redesigns the detection pipeline around attention mechanisms. We introduce RT-DETR-R50 [26] as a baseline, using weights pre-trained on the COCO dataset. It performs global feature interaction and bounding-box prediction in a single pass via an encoder–decoder architecture, without NMS post-processing, thereby enabling near-real-time inference while maintaining high accuracy. This provides a direct reference for comparing CNN-based and Transformer-based detectors in the object detection setting.
4.4. Ablation Studies and Module Effectiveness Analysis
4.4.1. Ablation Studies of Each Module of CST-YOLO
- The YOLOv10 baseline, shown as the blue curve in the following figures;
- YOLOv10_LAPM (The YOLOv10 baseline augmented with the LAPM), shown as the orange curve;
- YOLOv10_SCS-Former (The YOLOv10 baseline augmented with SCS-Former), shown as the green curve;
- YOLOv10_SOFB (The YOLOv10 baseline augmented with the SOFB), shown as the red curve;
- YOLOv10_LAPM_SCS-Former (The YOLOv10 baseline augmented with both LAPM and SCS-Former), shown as the purple curve;
- The full CST-YOLO model is shown as the brown curve.
4.4.2. Ablation Studies on Poisson Image Editing-Based Data Fusion
5. Conclusions
6. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zhuang, H.; Cheng, Y.; Zhou, M.; Yang, Z. Deep Learning for Surface Crack Detection in Civil Engineering: A Comprehensive Review. Measurement 2025, 248, 116908. [Google Scholar] [CrossRef]
- Cha, Y.-J.; Ali, R.; Lewis, J.; Büyüköztürk, O. Deep Learning-Based Structural Health Monitoring. Autom. Constr. 2024, 161, 105328. [Google Scholar] [CrossRef]
- Fan, C.; Ding, Y.; Liu, X.; Yang, K. A Review of Crack Research in Concrete Structures Based on Data-Driven and Intelligent Algorithms. Structures 2025, 75, 108800. [Google Scholar] [CrossRef]
- Tang, Y.; Wei, Y.; Qian, L.; Liu, L. Deep Learning Image-Based Fusion Approach for Identifying Multiple Apparent Diseases in Concrete Structure. Sensors 2025, 25, 6796. [Google Scholar] [CrossRef]
- Guo, M.; Tian, W.; Li, Y.; Sui, D. Detection of Road Crack Images Based on Multistage Feature Fusion and a Texture Awareness Method. Sensors 2024, 24, 3268. [Google Scholar] [CrossRef] [PubMed]
- Li, J.; Jiang, X.; Peng, H. Research on a Road Crack Detection Method Based on YOLO11-MBC. Sensors 2025, 25, 7435. [Google Scholar] [CrossRef]
- Wu, J.; Zhang, X. Tunnel Crack Detection Method and Crack Image Processing Algorithm Based on Improved Retinex and Deep Learning. Sensors 2023, 23, 9140. [Google Scholar] [CrossRef] [PubMed]
- Lee, C.; Kim, D.; Kim, D. Optimizing Deep Learning-Based Crack Detection Using No-Reference Image Quality Assessment in a Mobile Tunnel Scanning System. Sensors 2025, 25, 5437. [Google Scholar] [CrossRef]
- Qi, Y.; Lin, P.; Yang, G.; Liang, T. Crack Detection and 3D Visualization of Crack Distribution for UAV-Based Bridge Inspection Using Efficient Approaches. Structures 2025, 78, 109075. [Google Scholar] [CrossRef]
- Zhou, L.; Jia, H.; Jiang, S.; Xu, F.; Tang, H.; Xiang, C.; Wang, G.; Zheng, H.; Chen, L. Multi-Scale Crack Detection and Quantification of Concrete Bridges Based on Aerial Photography and Improved Object Detection Network. Buildings 2025, 15, 1117. [Google Scholar] [CrossRef]
- Yang, S.; Jang, D.; Kim, J.; Jeon, H. Autonomous Concrete Crack Monitoring Using a Mobile Robot with a 2-DoF Manipulator and Stereo Vision Sensors. Sensors 2025, 25, 6121. [Google Scholar] [CrossRef]
- Dunphy, K.; Fekri, M.N.; Grolinger, K.; Sadhu, A. Data Augmentation for Deep-Learning-Based Multiclass Structural Damage Detection Using Limited Information. Sensors 2022, 22, 6193. [Google Scholar] [CrossRef]
- Qiu, Q.; Lau, D. Real-Time Detection of Cracks in Tiled Sidewalks Using YOLO-Based Method Applied to Unmanned Aerial Vehicle (UAV) Images. Autom. Constr. 2023, 147, 104745. [Google Scholar] [CrossRef]
- Dong, X.; Yuan, J.; Dai, J. Study on Lightweight Bridge Crack Detection Algorithm Based on YOLO11. Sensors 2025, 25, 3276. [Google Scholar] [CrossRef]
- Xiong, C.; Zayed, T.; Abdelkader, E.M. A Novel YOLOv8-GAM-Wise-IoU Model for Automated Detection of Bridge Surface Cracks. Constr. Build. Mater. 2024, 414, 135025. [Google Scholar] [CrossRef]
- Xu, X.; Zhao, M.; Shi, P.; Ren, R.; He, X.; Wei, X.; Yang, H. Crack Detection and Comparison Study Based on Faster R-CNN and Mask R-CNN. Sensors 2022, 22, 1215. [Google Scholar] [CrossRef]
- Song, F.; Liu, B.; Yuan, G. Pixel-Level Crack Identification for Bridge Concrete Structures Using Unmanned Aerial Vehicle Photography and Deep Learning. Struct. Control. Health Monit. 2024, 2024, 1299095. [Google Scholar] [CrossRef]
- Zhang, J.; Qian, S.; Tan, C. Automated Bridge Surface Crack Detection and Segmentation Using Computer Vision-Based Deep Learning Model. Eng. Appl. Artif. Intell. 2022, 115, 105225. [Google Scholar] [CrossRef]
- Kheradmandi, N.; Mehranfar, V. A Critical Review and Comparative Study on Image Segmentation-Based Techniques for Pavement Crack Detection. Constr. Build. Mater. 2022, 321, 126162. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar] [CrossRef]
- Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar] [CrossRef]
- Chen, Q.; Wang, Y.; Yang, T.; Zhang, X.; Cheng, J.; Sun, J. You Only Look One-Level Feature. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2021, Nashville, TN, USA, 20–25 June 2021. [Google Scholar] [CrossRef]
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. Adv. Neural Inf. Process. Syst. 2024, 37, 107984–108011. [Google Scholar] [CrossRef]
- Tian, Y.; Ye, Q.; Doermann, D. YOLOv12: Attention-Centric Real-Time Object Detectors. arXiv 2025, arXiv:2502.12524. [Google Scholar] [CrossRef]
- Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs Beat YOLOs on Real-Time Object Detection. arXiv 2023, arXiv:2304.08069. [Google Scholar] [CrossRef]
- Zhou, Z.; Zhang, J.; Gong, C. Automatic Detection Method of Tunnel Lining Multi-Defects via an Enhanced You Only Look Once Network. Comput.-Aided Civ. Infrastruct. Eng. 2022, 37, 762–780. [Google Scholar] [CrossRef]
- Mammeri, S.; Barros, B.; Conde-Carnero, B.; Riveiro, B. From Traditional Damage Detection Methods to Physics-Informed Machine Learning in Bridges: A Review. Eng. Struct. 2025, 330, 119862. [Google Scholar] [CrossRef]
- Zhou, Y.; Guo, X.; Hou, F.; Wu, J. Review of Intelligent Road Defects Detection Technology. Sustainability 2022, 14, 6306. [Google Scholar] [CrossRef]
- Sholevar, N.; Golroo, A.; Esfahani, S.R. Machine Learning Techniques for Pavement Condition Evaluation. Autom. Constr. 2022, 136, 104190. [Google Scholar] [CrossRef]
- Lee, J.; Kim, H.-S.; Kim, N.; Ryu, E.-M.; Kang, J.-W. Learning to Detect Cracks on Damaged Concrete Surfaces Using Two-Branched Convolutional Neural Network. Sensors 2019, 19, 4796. [Google Scholar] [CrossRef] [PubMed]
- Fan, Y.; Mai, J.; Xue, F.; Lau, S.S.Y.; Jiang, S.; Tao, Y.; Zhang, X.; Tsang, W.C. UAV and Deep Learning for Automated Detection and Visualization of Façade Defects in Existing Residential Buildings. Sensors 2025, 25, 7118. [Google Scholar] [CrossRef] [PubMed]
- Li, Y.; Ma, J.; Zhao, Z.; Shi, G. A Novel Approach for UAV Image Crack Detection. Sensors 2022, 22, 3305. [Google Scholar] [CrossRef] [PubMed]
- Baduge, S.K.; Thilakarathna, S.; Perera, J.S.; Arashpour, M.; Sharafi, P.; Teodosio, B.; Shringi, A.; Mendis, P. Artificial Intelligence and Smart Vision for Building and Construction 4.0: Machine and Deep Learning Methods and Applications. Autom. Constr. 2022, 141, 104440. [Google Scholar] [CrossRef]
- Pérez, P.; Gangnet, M.; Blake, A. Poisson Image Editing. ACM Trans. Graph. 2003, 22, 313–318. [Google Scholar] [CrossRef]
- Land, E.H. The Retinex Theory of Color Vision. Sci. Am. 1977, 237, 108–128. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar] [CrossRef]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762. [Google Scholar] [CrossRef]
- Zhang, C.; Bahrami, M.; Mishra, D.K.; Yuen, M.M.F.; Yu, Y.; Zhang, J. SelectSeg: Uncertainty-Based Selective Training and Prediction for Accurate Crack Segmentation under Limited Data and Noisy Annotations. Reliab. Eng. Syst. Saf. 2025, 259, 110909. [Google Scholar] [CrossRef]
- Nguyen, Q.D.; Thai, H.-T.; Nguyen, S.D. Self-Training Method for Structural Crack Detection Using Image Blending-Based Domain Mixing and Mutual Learning. Autom. Constr. 2025, 170, 105892. [Google Scholar] [CrossRef]
- Bhattacharya, S.; Zhang, C.; Mishra, D.K.; Yuen, M.M.F.; Zhang, J. Efficient Unsupervised Domain Adaptation for Crack Segmentation with Interpretable Fourier–Morphology Blending and Uncertainty-Guided Self-Training. Comput.-Aided Civ. Infrastruct. Eng. 2025, 40, 5790–5807. [Google Scholar] [CrossRef]
| Technical Approach | Representative Idea | Advantages | Limitations |
|---|---|---|---|
| Classification/Multi-Defect Recognition | Multi-Defect Fusion Recognition [4], Limited Data Augmentation [12] | More comprehensive semantic coverage, useful for coarse screening and alerting | Weak support for localization and quantification; class imbalance and coexisting defects can cause confusion |
| Object Detection | YOLO Series and Improvements [6,13,14,15], Comparison of Two-Stage Detection [16] | Fast inference, deployment-friendly, suitable for rapid localization | Difficult to delineate true crack boundaries and continuity; trade-off between small crack recall and false detection suppression |
| Pixel-Level Segmentation/Measurement | Pixel-Level Bridge Crack Recognition [17], Detection + Segmentation Joint [18], Segmentation Reviews [19] | Can output boundaries, beneficial for quantification of width/length, etc. | High annotation cost; prone to breaking or merging under low light/complex backgrounds; high computational resource requirements |
| 3D Mapping and Visualization | UAV Crack 3D Visualization [9,10] | Supports full bridge localization and spatial distribution representation | Sensitive to pose/reconstruction quality; complex engineering processes, error propagation difficult to control |
| Experimental Environment | Configuration |
|---|---|
| CPU | Intel Xeon |
| GPU | NVIDIA RTX 4090 |
| GPU memory | 24 GB |
| Operating system | Linux Ubuntu 20.04 LTS |
| CUDA version | CUDA 11.8 |
| Deep-learning framework | PyTorch 2.0.1 |
| Detection Method | Precision | Recall | mAP@0.5 | mAP@0.5:0.95 | FPS | Parameters | FLOPs |
|---|---|---|---|---|---|---|---|
| Faster R-CNN | 0.943 | 1.000 | 1.000 | 0.819 | 40 | 28.12 M | 69.84 G |
| SSD-Lite | 0.986 | 0.967 | 0.978 | 0.662 | 98 | 3.03 M | 2.80 G |
| YOLOF | 0.955 | 0.941 | 0.944 | 0.547 | 47 | 42.06 M | 39.35 G |
| YOLOX | 0.843 | 0.949 | 0.931 | 0.402 | 88 | 0.90 M | 1.24 G |
| YOLOv12 | 0.998 | 0.992 | 0.997 | 0.863 | 185 | 2.57 M | 6.50 G |
| RT-DETR-R50 | 0.133 | 0.589 | 0.201 | 0.084 | 40 | 42.76 M | 130.50 G |
| CST-YOLO | 0.979 | 0.991 | 0.992 | 0.926 | 139 | 2.74 M | 8.90 G |
| Detection Method | Precision | Recall | mAP@0.5 | mAP@0.5:0.95 | FPS |
|---|---|---|---|---|---|
| YOLOv10 | 0.959 | 0.936 | 0.968 | 0.894 | 137 |
| YOLOv10_LAPM | 0.974 | 0.980 | 0.977 | 0.906 | 143 |
| YOLOv10_SCS-Former | 0.974 | 0.970 | 0.976 | 0.909 | 140 |
| YOLOv10_SOFB | 0.970 | 0.965 | 0.982 | 0.909 | 142 |
| YOLOv10_LAPM_SCS-Former | 0.976 | 0.989 | 0.985 | 0.913 | 139 |
| CST-YOLO | 0.979 | 0.991 | 0.992 | 0.926 | 139 |
| Model | Precision | Recall | mAP@0.5 | mAP@0.5:0.95 |
|---|---|---|---|---|
| CST-YOLO-with-PoissonBlending | 0.989 | 0.997 | 0.995 | 0.938 |
| CST-YOLO-without-PoissonBlending | 0.979 | 0.991 | 0.992 | 0.922 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Jiang, Y.; Wang, T.; Chen, X.; Liang, J. Research on Synthetic Data Methods and Detection Models for Micro-Cracks. Sensors 2026, 26, 1883. https://doi.org/10.3390/s26061883
