A Novel Deep Learning Approach for Yarn Hairiness Characterization Using an Improved YOLOv5 Algorithm
Abstract
1. Introduction
- The provision of statistical data on analyzed yarns, generating an analysis report.
- Comprehensive yarn characterization in a production report with a user-friendly interface.
- The complete characterization of defects, including hairiness, neps, and thin and thick places.
- AI integration for the enhanced detection and automatic classification of yarn hairiness, improving detection accuracy and classification reliability.
- The optimization of the YOLOv5s6 algorithm for yarn hairiness detection, resulting in significant improvements in accuracy, especially for protruding and loop fibers, with a 5–6% increase in mAP@0.5 and an 11–12% improvement in mAP@0.5:0.95.
- The integration of advanced modules (C2f, Bot-Transformer, GeLU) for more effective defect detection, particularly in capturing and classifying complex hairiness in yarn images.
- Validation through k-fold cross-validation with 10 splits, ensuring the model’s robustness and consistent performance in detecting loops and protruding fibers.
- The complete and automatic characterization of yarn defects, providing a comprehensive analysis and classification of defects like “hairiness” using advanced Deep Learning techniques.
- Irregular Visual Characteristics of Hairiness Classes: DL can handle complex features, such as the irregular shape of loop fibers and protrusion fibers, which are difficult for traditional methods.
- Accuracy and Adaptability: DL adapts to different types of yarns, colors, and lighting conditions, offering greater flexibility than optical sensors or conventional methods.
- Simultaneous Classification: DL allows the detection and classification of multiple defects (loop fibers and protruding fibers) in a single step, optimizing time and resources.
- Superior Performance: our improved algorithm (YOLOv5s6-Hairiness) showed significant improvements in metrics such as mAP and accuracy, outperforming commercial systems and standard YOLO variants.
- Scalability and Cost-Effectiveness: DL-based solutions are more economical and portable than systems like USTER TESTER, which are very expensive and less flexible.
- Industrial Impact: DL improves the quality of the final yarn and fabric, reducing defects, waste, and operational costs, in addition to enabling real-time inspection.
2. Literature Review on Yarn Quality Analysis
2.1. Systems Based on Image Processing
2.2. Systems Based on Computer Vision and Artificial Intelligence
2.3. Comparative Analysis of Existing Systems
- (a) Mechatronic Prototype Development: System A has developed a mechatronic prototype; Systems B, C, and D do not have this feature.
- (b) Non-destructive Prototype: none of the systems has a non-destructive prototype.
- (c) Yarn Winding and Unwinding System: none of the systems includes a yarn winding/unwinding capability.
- (d) Image or Video Analysis: all systems use image analysis; none uses video.
- (e) Use of Vision System or AI: Systems B, C, and D combine vision systems with AI, but without deep learning techniques.
- (f) Defect Detection: Systems A and E use only vision systems, while B, C, and D combine them with AI.
- (g) Specific Yarn Quality Parameters:
- System A detects thick places, thin places, neps, the yarn diameter, and the hairiness coefficient.
- System B detects neps and the coefficient of variation of mass.
- System C measures elongation at break and tenacity.
- System D measures thick places, thin places, neps, the linear mass, the coefficient of variation of mass, elongation at break, and tenacity.
- System E measures the lengths of the crossed fibers.
- (h) Integration in Production Lines: only Systems A and B are integrable into production lines; C and D depend on the USTER TESTER 3.
- (i) Online or Offline Image Acquisition: Systems A and B allow online image acquisition; C, D, and E are offline only.
- (j) Spectral Analysis Based on Image Processing: none of the systems performs image-based spectral analysis.
- (k) Use of Deep Learning Techniques: none of the systems uses deep learning to detect defects, let alone specifically to detect and classify yarn hairiness.
- (l) Yarn Hairiness Analysis: only System E analyzes yarn hairiness, but it does not cover other parameters.
2.4. Identification of Gaps and Proposed Solution
- Detects various yarn characteristics and defects, including thick places, the yarn diameter, linear mass, volume, and number of loose fibers, among others.
- Conducts an analysis based on images and videos.
- Uses deep learning techniques for defect detection, particularly yarn hairiness classification.
- Is non-destructive, low-cost, and easily integrates into textile production lines.
3. Materials and Methods
3.1. Yarn Hairiness Identification Using the YOLO Algorithm
- The preliminary tests showed that YOLOv5s6 outperformed other variants, including YOLOv8, in the overall results.
- YOLOv5s6 was prioritized for its superior processing speed, crucial for the instant detection and classification of fibers with minimal latency.
- Given the availability of GPU resources, YOLOv5s6 was optimal for its performance on GPUs, accelerating fiber detection.
- YOLOv5s6 offers a competitive balance between accuracy and speed, meeting the needs of the application.
- YOLOv5 is user-friendly, especially with its PyTorch foundation, facilitating easy deployment.
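For context, the baseline variant used in this work is distributed through the official Ultralytics repository and can be loaded via PyTorch Hub. A minimal sketch (the image path and confidence threshold are illustrative, not values from the study):

```python
import torch

# Load the small YOLOv5 variant for 1280-pixel inputs from the official repository.
model = torch.hub.load("ultralytics/yolov5", "yolov5s6", pretrained=True)
model.conf = 0.25  # detection confidence threshold (illustrative value)

# Run inference on a yarn image; the file name is a placeholder.
results = model("yarn_sample.jpg")
results.print()  # per-class summary of the detected objects
```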
3.2. YOLOv5s6 Algorithm Structure
- (1) BACKBONE: YOLOv5 uses a convolutional backbone to extract features from the input images, performing initial convolutions and extracting low-level representations.
- (2) NECK: some variants of YOLOv5 use a “neck”, a sequence of convolutional layers that helps combine features from different scales, improving the detection of objects of different sizes.
- (3) HEAD OR DETECT: the “Head” or “Detect” is the final part of the model, where object detection takes place, predicting the bounding boxes and classes of objects present in the image.
3.3. Improved YOLOv5s6 Yarn Hairiness Structure
- The integration of the C2f Module of the YOLOv8 advanced version: the addition of the Cross-Stage Partial Networks Fusion (C2f) module enhances feature fusion at different stages, significantly improving the detection of fine details in yarn images, crucial for identifying hairiness types.
- Bot-Transformer modules in the neck: these modules, using multi-head self-attention mechanisms, allow the model to focus on relevant features in complex yarn images, improving accuracy in detecting loops and protruding fibers.
- GeLU activation function: Gaussian Error Linear Unit (GeLU) activation functions were used in certain layers to replace ReLU or SiLU, increasing the model’s sensitivity to detect loop and protruding fibers.
- Optimization of hyperparameters: The anchor_t value was set to 5.0 to prioritize larger anchors, enhancing the model’s ability to detect varying sizes of hairiness in high-resolution images. Additionally, the scale factor was adjusted to 2.0, allowing the neural network to resize images and capture more details, making smaller features like loop and protruding fibers more detectable.
- (1) BACKBONE
- Function: extracts relevant features from yarn images, capturing edges, textures, and patterns.
- Impact: an efficient backbone improves the accuracy of detecting distinctive features of loop and protruding fibers.
- (2) NECK
- Function: combines information at different scales, supporting the detection of objects of varying sizes.
- Impact: As loop and protruding fibers can vary in size, the Neck helps the model capture contextual information at multiple scales. This is especially useful for detecting smaller or more diffuse hairiness, improving accuracy by considering the context around these features.
- (3) HEAD
- Function: responsible for object detection, including generating predictions for bounding boxes and object classes.
- Impact: The Head enables the model to accurately locate and classify fibers. Bounding box predictions define the position and size of hairiness, while class predictions differentiate between loop and protruding fibers.
3.4. YOLOv5s6 Yarn Hairiness Improvements
3.4.1. CBG Module—Activation Function
- Improved Non-linearities: GeLU captures complex patterns of loop and protruding fibers better than SiLU, enhancing the detection of various yarn hairiness types.
- Mitigation of Vanishing Gradients: GeLU’s smooth derivatives prevent vanishing gradients, ensuring the more stable training of deep neural networks.
- Improved Training Stability: by approximating the cumulative Gaussian distribution, GeLU enhances the training stability, leading to more precise weight adjustments and better detection accuracy.
- Superior Performance: GeLU outperformed other activation functions in metrics, as shown in Section 4 of this study.
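To make the swap concrete, the following is a minimal PyTorch sketch of a Conv–BatchNorm–GeLU (CBG) block, assuming it mirrors YOLOv5’s standard Conv–BatchNorm–SiLU block with only the activation replaced (channel sizes and kernel parameters are illustrative):

```python
import torch
import torch.nn as nn

class CBG(nn.Module):
    """Conv + BatchNorm + GeLU: a drop-in variant of YOLOv5's Conv (CBS) block."""
    def __init__(self, c_in, c_out, k=1, s=1, p=None):
        super().__init__()
        p = k // 2 if p is None else p  # same-style padding
        self.conv = nn.Conv2d(c_in, c_out, k, s, p, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.GELU()  # replaces the nn.SiLU() of the default block

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

x = torch.randn(1, 3, 1280, 1280)   # batch of one 1280 × 1280 RGB image
y = CBG(3, 32, k=6, s=2, p=2)(x)    # stem-like convolution -> (1, 32, 640, 640)
```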
3.4.2. C2f Module
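The original figure for this module is not reproduced here; as an illustration of its structure, a minimal PyTorch sketch following the YOLOv8 C2f design (split the features, run n bottleneck blocks in sequence, and concatenate every intermediate stage before a final 1 × 1 convolution). The Conv helper is a simplified stand-in for the repository’s Conv block:

```python
import torch
import torch.nn as nn

class Conv(nn.Module):
    """Simplified Conv + BN + activation helper (stand-in for YOLOv5's Conv block)."""
    def __init__(self, c_in, c_out, k=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, 1, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class Bottleneck(nn.Module):
    """Two 3 x 3 convolutions with an optional residual connection."""
    def __init__(self, c, shortcut=True):
        super().__init__()
        self.cv1, self.cv2, self.add = Conv(c, c, 3), Conv(c, c, 3), shortcut

    def forward(self, x):
        y = self.cv2(self.cv1(x))
        return x + y if self.add else y

class C2f(nn.Module):
    """Cross-Stage Partial fusion: split, run n bottlenecks, concatenate all stages."""
    def __init__(self, c_in, c_out, n=1, shortcut=False):
        super().__init__()
        self.c = c_out // 2
        self.cv1 = Conv(c_in, 2 * self.c, 1)
        self.cv2 = Conv((2 + n) * self.c, c_out, 1)
        self.m = nn.ModuleList(Bottleneck(self.c, shortcut) for _ in range(n))

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, dim=1))
        y.extend(m(y[-1]) for m in self.m)  # keep every stage for the final fusion
        return self.cv2(torch.cat(y, dim=1))

out = C2f(64, 64, n=2)(torch.randn(1, 64, 160, 160))  # shape preserved: (1, 64, 160, 160)
```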
3.4.3. Bot-Transformer Module and MHSA Block
- q, k, and v Matrices: these matrices represent queries (q), keys (k), and values (v) and are integral to understanding input sequences.
- qkᵀ (Dot Product): qkᵀ is the dot product between the queries (q) and the transposed keys (kᵀ). This dot product quantifies the similarity between queries and keys, serving as the foundation for calculating attention weights.
- qrᵀ (Relative Position): qrᵀ is the dot product between the queries (q) and the transposed relative positions (rᵀ). The matrix r encodes positional or distance information between sequence elements.
- qkᵀ + qrᵀ (Combined Attention): this expression is the sum of the attention terms derived from queries and keys and from positional information, capturing both the overall attention and the attention relative to the sequence’s positions.
- Softmax (qkᵀ + qrᵀ): the Softmax function is applied to the result of the previous step, normalizing values to the range between 0 and 1; larger values receive higher importance, directing attention towards the most relevant elements in the sequence.
- Weighted Multiplication: the result of the Softmax operation is multiplied by the value matrices (v).
- C2f (Cross-Stage Partial Networks Fusion): enhances feature fusion at different network stages, improves the gradient flow, and extracts detailed features, crucial for detecting fine details like protruding and loop fibers.
- Bot-Transformer: uses multi-head self-attention to focus on relevant features, improving the detection and classification of hairiness defects and increasing the accuracy across various scales.
- GeLU (Gaussian Error Linear Unit): handles non-linearities and provides smooth activation, increasing the sensitivity to fiber variations, stabilizing training, speeding up convergence, and ensuring a better gradient flow, which enhances the overall model performance and accuracy.
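To make the attention steps above concrete, a single-head sketch in PyTorch of the computation Softmax(qkᵀ + qrᵀ)v (dimensions are illustrative; the actual Bot-Transformer block uses multiple heads and two-dimensional relative encodings):

```python
import torch
import torch.nn.functional as F

def attention_with_relative_position(q, k, v, r):
    """Single-head sketch of Softmax(qk^T + qr^T) v.
    q, k, v: (n, d) query/key/value matrices; r: (n, d) relative position encodings.
    """
    d = q.shape[-1]
    content = q @ k.transpose(-2, -1)          # qk^T: query-key similarity
    position = q @ r.transpose(-2, -1)         # qr^T: query-position similarity
    scores = (content + position) / d ** 0.5   # the usual scaling, for stability
    weights = F.softmax(scores, dim=-1)        # normalize each row to [0, 1]
    return weights @ v                         # attention-weighted sum of the values

n, d = 16, 64  # illustrative sequence length and head dimension
q, k, v, r = (torch.randn(n, d) for _ in range(4))
out = attention_with_relative_position(q, k, v, r)  # shape (16, 64)
```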
3.4.4. High-Level Hyperparameters
- (1) anchor_t: 5.0
- The threshold for assigning anchor boxes was set to 5.0, meaning the model considers anchors about five times larger than the original dimensions.
- This adjustment was useful for detecting large or unusually sized objects, like loop and protruding fibers, in 1280 × 1280 pixel images.
- Raising this parameter reduced false positives in noisy or complex backgrounds, enhancing the detection of larger fiber structures.
- (2) scale: 2.0
- Images were resized to approximately 200% to 300% of the standard resolution, increasing the scale to better detect fine details in fibers.
- This scaling improved the model’s ability to identify details in high-resolution images, but required more computational resources.
- These adjustments were crucial for accurate detection in our specific yarn dataset.
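As a sketch of how these two overrides map onto the YOLOv5 training workflow (anchor_t and scale are standard keys in the repository’s hyperparameter files; the dataset YAML and output file names are placeholders):

```python
import yaml

# Start from a default YOLOv5 hyperparameter file and override the two values
# discussed above; the input path follows the ultralytics/yolov5 repository layout.
with open("data/hyps/hyp.scratch-low.yaml") as f:
    hyp = yaml.safe_load(f)

hyp["anchor_t"] = 5.0  # anchor-multiple threshold (repository default: 4.0)
hyp["scale"] = 2.0     # image scale augmentation gain (repository default: 0.5)

with open("hyp.yarn-hairiness.yaml", "w") as f:
    yaml.safe_dump(hyp, f)

# Training would then be launched along the lines of:
#   python train.py --img 1280 --batch 16 --epochs 100 \
#       --data yarn.yaml --weights yolov5s6.pt --hyp hyp.yarn-hairiness.yaml
```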
3.5. Dataset Preparation
- A high cost—exceeding EUR 100k;
- A high weight, limiting portability;
- Destructive sample analysis;
- A limited capability to analyze yarn parameters.
- Flip: horizontal, vertical;
- Saturation: between −25% and +25%;
- Blur: up to 5% of pixels;
- Noise: up to 5% of pixels.
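These augmentations were applied through Roboflow; an approximate offline reproduction with the Albumentations library might look as follows (the parameter mapping is ours, not the exact Roboflow configuration):

```python
import albumentations as A

# Approximate reproduction of the augmentation list above; probabilities and
# limits are our reading of the settings, not the exact Roboflow values.
augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.ColorJitter(brightness=0, contrast=0, saturation=0.25, hue=0, p=1.0),
        A.Blur(blur_limit=3, p=0.5),   # mild blur, roughly "up to 5% of pixels"
        A.GaussNoise(p=0.5),           # additive noise on a fraction of pixels
    ],
    # Keep the YOLO-format bounding boxes consistent with the transformed image.
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# Usage: augmented = augment(image=img, bboxes=boxes, class_labels=labels)
```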
4. Experimental Results and Discussion
4.1. Relevant Evaluation Metrics
- Average Precision (AP)—Measures the detection accuracy at different confidence levels, calculating the area under the Precision–Recall curve. It is computed as AP = ∫ p(r) dr, where p(r) is the precision at recall r, integrated from r = 0 to r = 1.
- Mean Average Precision (mAP)—The average of AP across different object classes, giving an overall performance metric. It is computed as mAP = (1/N) × Σ AP_i, where N is the number of classes and AP_i is the average precision of class i.
- IoU (Intersection over Union)—measures the overlap between the predicted and ground truth bounding boxes: IoU = Area of Intersection/Area of Union.
- Accuracy—The proportion of correct detections relative to total detections, calculated as: Accuracy = (TP + TN)/(TP + TN + FP + FN), where:
  - True Positive (TP): the number of samples correctly classified as positive.
  - False Positive (FP): the number of samples incorrectly classified as positive.
  - False Negative (FN): the number of samples incorrectly classified as negative.
  - True Negative (TN): the number of samples correctly classified as negative.
- Precision—measures the proportion of correct positive detections: Precision = TP/(TP + FP).
- Recall—measures the proportion of correct detections among true objects: Recall = TP/(TP + FN).
- F1-Score—harmonic mean of Precision and Recall: F1 = 2 × (Precision × Recall)/(Precision + Recall).
- Confusion Matrix—a table showing correct and incorrect detections for each class.
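A minimal sketch of how the box-level and count-based metrics above can be computed (boxes are in (x1, y1, x2, y2) format; the counts in the example are illustrative):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def count_metrics(tp, fp, fn, tn):
    """Precision, Recall, F1-Score, and Accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f1, accuracy

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))      # 25 / 175 ≈ 0.143
print(count_metrics(tp=70, fp=30, fn=20, tn=10))
```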
4.2. Confusion Matrix and Analysis
4.3. Performance Metrics Curves
- Convergence of Losses: if both decrease and stabilize closely, the model generalizes well.
- Divergence of Losses: if training loss decreases while validation loss increases, it likely indicates overfitting.
- train/box_loss and val/box_loss: both box loss curves decrease consistently, with no significant difference, indicating good generalization.
- train/obj_loss and val/obj_loss: object loss curves decrease with minor fluctuations in validation, but no significant difference, suggesting no clear overfitting.
- train/cls_loss and val/cls_loss: classification losses decrease similarly for both curves, indicating good model generalization.
- metrics/precision: precision steadily increases and stabilizes for both training and validation, with no large differences, indicating no overfitting.
- metrics/recall: recall increases and stabilizes similarly in both sets, with a minimal difference between training and validation, indicating good generalization.
- metrics/mAP_0.5 and metrics/mAP_0.5:0.95: mean average precision (mAP) metrics increase steadily, with close proximity between the training and validation curves, indicating good generalization and no clear overfitting.
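For reference, these curves can be re-plotted from the results.csv file that the YOLOv5 training script writes in each run directory; a short sketch (the run path is a placeholder, and the column names carry whitespace padding, hence the strip):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("runs/train/exp/results.csv")
df.columns = df.columns.str.strip()  # YOLOv5 pads the CSV header names

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, name in zip(axes, ["box_loss", "obj_loss", "cls_loss"]):
    ax.plot(df["epoch"], df[f"train/{name}"], label="train")
    ax.plot(df["epoch"], df[f"val/{name}"], label="val")
    ax.set_title(name)
    ax.legend()
plt.tight_layout()
plt.show()  # diverging train/val curves would signal overfitting; here they track closely
```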
4.4. Comparison of Results with Other Algorithms
- All baseline models were trained and tested on the same dataset with identical augmentation techniques and hardware settings as the proposed model.
- The default implementations of the models, as provided in their official repositories, were used for training and evaluation.
4.5. Obtained Results
- (a) Yarn hairiness detection with augmentation using the default YOLOv5s6 algorithm.
- (b) Yarn hairiness detection with augmentation using the proposed optimized YOLOv5s6-Yarn Hairiness algorithm.
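A sketch of how such a side-by-side comparison can be run, loading the two trained checkpoints through PyTorch Hub (the weight file names and test image are placeholders):

```python
import torch

baseline = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s6_default.pt")
improved = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s6_hairiness.pt")

img = "test_yarn.jpg"  # the same augmented test image for both models
for name, model in [("default", baseline), ("improved", improved)]:
    results = model(img, size=1280)  # inference at the training resolution
    print(name)
    print(results.pandas().xyxy[0][["name", "confidence"]])  # detections per model
```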
4.6. Visualizing Model Enhancements: Comparative Analysis with Heatmaps and Metrics
4.6.1. Heatmap Qualitative Analysis
4.6.2. Comparative Analysis
- Original YOLOv5s6 Standard: the heatmaps show limited focus areas with low accuracy in detecting loop and protrusion fibers.
- YOLOv5s6 + C2f (backbone): adding the C2f block improved the detection accuracy, especially for subtle patterns like protrusion fibers.
- YOLOv5s6 + Bot-Transformer: the integration of the Bot-Transformer module enhanced the focus on the relevant image parts, improving loop fiber detection.
- YOLOv5s6 + GeLU: the GeLU activation function increased the sensitivity to subtle texture variations, improving the accuracy in complex, noisy backgrounds.
- YOLOv5s6 + C2f + Bot-Transformer: this combination significantly improved focus areas, providing more detailed and accurate detection in complex image parts.
- YOLOv5s6 + C2f + GeLU: enhanced fine detail detection and sensitivity to texture variations, leading to a better overall performance.
- YOLOv5s6 + Bot-Transformer + GeLU: refined focus areas, resulting in greater accuracy and the better handling of complex scenarios, such as protrusion fibers.
- YOLOv5s6 + C2f + Bot-Transformer + GeLU: a comprehensive performance improvement in the focus, accuracy, and sensitivity to detecting loop and protrusion fibers.
- YOLOv5s6 + Hyperparameters: hyperparameter tuning improved focus areas and detection accuracy, reducing false positives in noisy backgrounds, though some defects were only detected in models with previous combinations.
- YOLOv5s6 Improved Hairiness: the final model shows significantly enhanced detection, identifying seven hairiness defects compared to only one in the original model.
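For reference, heatmaps of this kind can be produced with activation-mapping tools; a sketch using EigenCAM from the pytorch-grad-cam package, following its published YOLOv5 example (the layer index, paths, and input size are illustrative):

```python
import cv2
import numpy as np
import torch
from pytorch_grad_cam import EigenCAM
from pytorch_grad_cam.utils.image import show_cam_on_image

model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5s6_hairiness.pt")
model.eval()
target_layers = [model.model.model.model[-2]]  # a late layer; index is illustrative

rgb = cv2.cvtColor(cv2.imread("test_yarn.jpg"), cv2.COLOR_BGR2RGB)
rgb = np.float32(cv2.resize(rgb, (640, 640))) / 255.0
tensor = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)

cam = EigenCAM(model, target_layers)
heatmap = cam(tensor)[0]  # (H, W) activation map for the first (only) image
overlay = show_cam_on_image(rgb, heatmap, use_rgb=True)
cv2.imwrite("heatmap.jpg", cv2.cvtColor(overlay, cv2.COLOR_RGB2BGR))
```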
4.7. Quantitative Metrics Comparison
4.7.1. Comparative Analysis
4.7.2. Conclusions
4.8. K-Fold Cross Validation
4.8.1. Comparative Analysis
- (1) mAP_0.5 and mAP_0.5:0.95: the average values range from approximately 0.43 to 0.79, indicating the model’s strong ability to accurately detect loop and protruding fibers in various scenarios.
- (2) Precision: the average precision varies between approximately 0.69 and 0.74, showing that the model makes a high number of correct detections; however, there is room for improvement, especially regarding false positives.
- (3) Recall: average recall ranges from approximately 0.73 to 0.77, demonstrating good recovery of loop and protruding fibers, consistent with the goal of minimizing false negatives.
- (4) Accuracy: the model shows consistent accuracy levels across the ten folds, ranging from approximately 0.4800 to 0.5200, suggesting a stable performance across different data partitions. The average accuracy is about 0.5064, indicating that the model correctly classifies instances approximately 50.64% of the time.
- (5) Variation in Results: metric values vary slightly between folds, which is expected due to the nature of the method. This variation underscores the importance of k-fold cross-validation in assessing model robustness and generalization.
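A minimal sketch of the 10-split protocol itself (the evaluation helper below is a placeholder for retraining and validating YOLOv5s6-Hairiness on each fold):

```python
import numpy as np
from sklearn.model_selection import KFold

def evaluate_fold(train_idx, val_idx):
    """Placeholder: retrain the model on train_idx and return mAP_0.5 on val_idx."""
    return 0.78 + 0.01 * np.random.randn()

image_ids = np.arange(1000)  # stand-in for the dataset's image identifiers
kf = KFold(n_splits=10, shuffle=True, random_state=0)

scores = [evaluate_fold(tr, va) for tr, va in kf.split(image_ids)]
print("mAP_0.5 per fold:", np.round(scores, 4))
print(f"mean ± std: {np.mean(scores):.4f} ± {np.std(scores):.4f}")
```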
4.8.2. Overfitting Analysis
- Consistency of Metrics: The average metrics for mAP_0.5, mAP_0.5:0.95, precision, recall, and accuracy are consistent across the folds, indicating a stable model performance with low standard deviations. This suggests good generalization without overfitting.
- Comparison between Metrics: mAP_0.5:0.95 is lower than mAP_0.5, as expected. Balanced precision and recall indicate an effective balance between avoiding false positives and capturing true positives.
- Overfitting Analysis: a significant difference between training and validation metrics would indicate overfitting; here, the small variation between folds suggests that the model is not overfitting, showing a consistent performance across data subsets.
- Conclusion: The k-fold cross-validation analysis shows no clear evidence of overfitting. The YOLOv5s6-Hairiness model generalizes well to the validation data, maintaining a good balance between precision and recall. In summary, the results obtained in the k-fold cross-validation are encouraging with the approach showing good performance in the detection of loop fibers and protruding fibers.
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Araújo, M.; Melo, E.M.C. Manual de Engenharia Têxtil; Fundação Calouste Gulbenkian: Lisboa, Portugal, 1987; Volume II. [Google Scholar]
- Kakde, P.C.M.; Patil, C. Minimization of Defects in Knitted Fabric. Int. J. Text. Eng. Process. 2016, 2, 13–18. [Google Scholar]
- Lord, P.R. 11—Quality and quality control. In Handbook of Yarn Production; Lord, P.R., Ed.; Woodhead Publishing Series in Textiles; Woodhead Publishing: Sawston, UK, 2003; pp. 276–300. [Google Scholar] [CrossRef]
- Carvalho, V.H.; Cardoso, P.J.; Belsley, M.S.; Vasconcelos, R.M.; Soares, F.O. Yarn Hairiness Characterization Using Two Orthogonal Directions. IEEE Trans. Instrum. Meas. 2009, 58, 594–601. [Google Scholar] [CrossRef]
- Pinto, R.; Pereira, F.; Carvalho, V.; Soares, F.; Vasconcelos, R. Yarn linear mass determination using image processing: First insights. In Proceedings of the IECON 2019—45th Annual Conference of the IEEE Industrial Electronics Society, Lisboa, Portugal, 14–17 October 2019; pp. 198–203. [Google Scholar] [CrossRef]
- Xu, B.G. 1—Digital technology for yarn structure and appearance analysis. In Computer Technology for Textiles and Apparel; Hu, J., Ed.; Woodhead Publishing Series in Textiles; Woodhead Publishing: Sawston, UK, 2011; pp. 3–22. [Google Scholar] [CrossRef]
- Tyagi, G.K. 5—Yarn structure and properties from different spinning techniques. In Advances in Yarn Spinning Technology; Lawrence, C.A., Ed.; Woodhead Publishing Series in Textiles; Woodhead Publishing: Sawston, UK, 2010; pp. 119–154. [Google Scholar] [CrossRef]
- Wang, X.-H.; Wang, J.-Y.; Zhang, J.-L.; Liang, H.-W.; Kou, P.-M. Study on the detection of yarn hairiness morphology based on image processing technique. In Proceedings of the 2010 International Conference on Machine Learning and Cybernetics, Qingdao, China, 11–14 July 2010; pp. 2332–2336. [Google Scholar] [CrossRef]
- Wang, L.; Xu, B.; Gao, W. 3D measurement of yarn hairiness via multi-perspective images. In Proceedings of the Optics, Photonics, and Digital Technologies for Imaging Applications V, Proceedings of the SPIE Photonic Europe, Strasbourg, France, 22–26 April 2018; SPIE: Bellingham, WA, USA, 2018; pp. 292–309. [Google Scholar] [CrossRef]
- Sun, Y.; Li, Z.; Pan, R.; Zhou, J.; Gao, W. Measurement of long yarn hair based on hairiness segmentation and hairiness tracking. J. Text. Inst. 2017, 108, 1271–1279. [Google Scholar] [CrossRef]
- El Mogahzy, Y.E. 9—Structure and types of yarn for textile product design. In Engineering Textiles; El Mogahzy, Y.E., Ed.; Woodhead Publishing Series in Textiles; Woodhead Publishing: Sawston, UK, 2009; pp. 240–270. [Google Scholar] [CrossRef]
- Krupincová, G.; Meloun, M. Yarn hairiness versus quality of yarn. J. Text. Inst. 2013, 104, 1312–1319. [Google Scholar] [CrossRef]
- Kiron, M.I. Spin Finish in Textile. Textile Learner. Available online: https://textilelearner.net/spin-finish-in-textile/ (accessed on 23 July 2023).
- Busilienė, G.; Lekeckas, K.; Urbelis, V. Pilling Resistance of Knitted Fabrics. Mater. Sci. 2011, 17, 297–301. [Google Scholar] [CrossRef]
- Pereira, F.; Carvalho, V.; Soares, F.; Vasconcelos, R.; Machado, J. 6—Computer vision techniques for detecting yarn defects. In Applications of Computer Vision in Fashion and Textiles; Wong, W.K., Ed.; The Textile Institute Book Series; Woodhead Publishing: Sawston, UK, 2018; pp. 123–145. [Google Scholar] [CrossRef]
- Carvalho, V.; Soares, F.; Belsley, M.; Vasconcelos, R.M. Automatic yarn characterization system. In Proceedings of the 2008 IEEE SENSORS, Lecce, Italy, 26–29 October 2008; pp. 780–783. [Google Scholar] [CrossRef]
- Pereira, F.; Carvalho, V.; Vasconcelos, R.; Soares, F. A Review in the Use of Artificial Intelligence in Textile Industry. In Innovations in Mechatronics Engineering; Machado, J., Soares, F., Trojanowska, J., Yildirim, S., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 377–392. [Google Scholar] [CrossRef]
- Pereira, F.; Macedo, A.; Pinto, L.; Soares, F.; Vasconcelos, R.; Machado, J.; Carvalho, V. Intelligent Computer Vision System for Analysis and Characterization of Yarn Quality. Electronics 2023, 12, 236. [Google Scholar] [CrossRef]
- Pereira, F.; Oliveira, E.L.; Ferreira, G.G.; Sousa, F.; Caldas, P. Textile Yarn Winding and Unwinding System. In Innovations in Mechanical Engineering; Machado, J., Soares, F., Trojanowska, J., Ottaviano, E., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 347–358. [Google Scholar] [CrossRef]
- Caldas, P.; Sousa, F.; Pereira, F.; Lopes, H.; Machado, J. Automatic system for yarn quality analysis by image processing. J. Braz. Soc. Mech. Sci. Eng. 2022, 44, 565. [Google Scholar] [CrossRef]
- GitHub—Ultralytics/Yolov5: YOLOv5 in PyTorch > ONNX > CoreML > TFLite. Available online: https://github.com/ultralytics/yolov5 (accessed on 6 August 2023).
- Chen, S.; Tang, M.; Kan, J. Predicting Depth from Single RGB Images with Pyramidal Three-Streamed Networks. Sensors 2019, 19, 667. [Google Scholar] [CrossRef]
- Jiang, B.; Song, H.; He, D. Lameness detection of dairy cows based on a double normal background statistical model. Comput. Electron. Agric. 2019, 158, 140–149. [Google Scholar] [CrossRef]
- Li, Z.; Fan, B.; Xu, Y.; Sun, R. Improved YOLOv5 for Aerial Images Based on Attention Mechanism. IEEE Access. 2023, 11, 96235–96241. [Google Scholar] [CrossRef]
- Tan, S.; Lu, G.; Jiang, Z.; Huang, L. Improved YOLOv5 Network Model and Application in Safety Helmet Detection. In Proceedings of the 2021 IEEE International Conference on Intelligence and Safety for Robotics (ISR), Tokoname, Japan, 4–6 March 2021; pp. 330–333. [Google Scholar] [CrossRef]
- Liu, Z.; Gao, X.; Wan, Y.; Wang, J.; Lyu, H. An Improved YOLOv5 Method for Small Object Detection in UAV Capture Scenes. IEEE Access 2023, 11, 14365–14374. [Google Scholar] [CrossRef]
- Guo, Y.; Zhang, M. Blood Cell Detection Method Based on Improved YOLOv5. IEEE Access 2023, 11, 67987–67995. [Google Scholar] [CrossRef]
- Li, S.; Li, Y.; Li, Y.; Li, M.; Xu, X. YOLO-FIRI: Improved YOLOv5 for Infrared Image Object Detection. IEEE Access 2021, 9, 141861–141875. [Google Scholar] [CrossRef]
- Li, Y.; Cheng, R.; Zhang, C.; Chen, M.; Ma, J.; Shi, X. Sign language letters recognition model based on improved YOLOv5. In Proceedings of the 2022 9th International Conference on Digital Home (ICDH), Guangzhou, China, 28–30 October 2022; pp. 188–193. [Google Scholar] [CrossRef]
- Pagare, S.; Kumar, R. Object Detection Algorithms Compression CNN, YOLO and SSD. Int. J. Comput. Appl. 2023, 185, 34–38. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. Presented at the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. Available online: https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Redmon_You_Only_Look_CVPR_2016_paper.html (accessed on 6 July 2024).
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. Presented at the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. Available online: https://openaccess.thecvf.com/content_cvpr_2017/html/Redmon_YOLO9000_Better_Faster_CVPR_2017_paper.html (accessed on 6 July 2024).
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020. [Google Scholar] [CrossRef]
- Gašparović, B.; Mauša, G.; Rukavina, J.; Lerga, J. Evaluating YOLOV5, YOLOV6, YOLOV7, and YOLOV8 in Underwater Environment: Is There Real Improvement? In Proceedings of the 2023 8th International Conference on Smart and Sustainable Technologies (SpliTech), Split/Bol, Croatia, 20–23 June 2023; pp. 1–4. [Google Scholar] [CrossRef]
- Wu, T.; Dong, Y. YOLO-SE: Improved YOLOv8 for Remote Sensing Object Detection and Recognition. Appl. Sci. 2023, 13, 12977. [Google Scholar] [CrossRef]
- Sun, J.; Jia, J.; Tang, C.-K.; Shum, H.-Y. Poisson matting. In ACM SIGGRAPH 2004 Papers, Proceeding of the SIGGRAPH’04, Los Angeles, CA, USA, 8–12 August 2004; Association for Computing Machinery: New York, NY, USA, 2004; pp. 315–321. [Google Scholar] [CrossRef]
- El-Geiheini, A.; ElKateb, S.; Abd-Elhamied, M.R. Yarn Tensile Properties Modeling Using Artificial Intelligence. Alex. Eng. J. 2020, 59, 4435–4440. [Google Scholar] [CrossRef]
- Abd-Elhamied, M.R.; Hashima, W.A.; ElKateb, S.; Elhawary, I.; El-Geiheini, A. Prediction of Cotton Yarn’s Characteristics by Image Processing and ANN. Alex. Eng. J. 2022, 61, 3335–3340. [Google Scholar] [CrossRef]
- Li, Z.; Zhong, P.; Tang, X.; Chen, Y.; Su, S.; Zhai, T. A New Method to Evaluate Yarn Appearance Qualities Based on Machine Vision and Image Processing. IEEE Access 2020, 8, 30928–30937. [Google Scholar] [CrossRef]
- Deng, Z.; Yu, L.; Wang, L.; Ke, W. An algorithm for cross-fiber separation in yarn hairiness image processing—The visual computer. Vis. Comput. 2024, 40, 3591–3599. [Google Scholar] [CrossRef]
- Haleem, N.; Bustreo, M.; Del Bue, A. A computer vision based online quality control system for textile yarns. Comput. Ind. 2021, 133, 103550. [Google Scholar] [CrossRef]
- Lu, W.; Yang, M. Face Detection Based on Viola-Jones Algorithm Applying Composite Features. In Proceedings of the 2019 International Conference on Robots & Intelligent System (ICRIS), Haikou, China, 15–16 June 2019; pp. 82–85. [Google Scholar] [CrossRef]
- Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis; Watson, G.A., Ed.; Springer: Berlin/Heidelberg, Germany, 1978; pp. 105–116. [Google Scholar] [CrossRef]
- Casas, E.; Ramos, L.; Bendek, E.; Rivas-Echeverría, F. Assessing the Effectiveness of YOLO Architectures for Smoke and Wildfire Detection. IEEE Access 2023, 11, 96554–96583. [Google Scholar] [CrossRef]
- Guo, P.; Meng, W.; Xu, M.; Li, V.C.; Bao, Y. Predicting Mechanical Properties of High-Performance Fiber-Reinforced Cementitious Composites by Integrating Micromechanics and Machine Learning. Materials 2021, 14, 3143. [Google Scholar] [CrossRef]
- Ghavami, N.; Hu, Y.; Gibson, E.; Bonmati, E.; Emberton, M.; Moore, C.M.; Barratt, D.C. Automatic segmentation of prostate MRI using convolutional neural networks: Investigating the impact of network architecture on the accuracy of volume measurement and MRI-ultrasound registration. Med. Image Anal. 2019, 58, 101558. [Google Scholar] [CrossRef] [PubMed]
- Niu, D.; Liang, Y.; Wang, H.; Wang, M.; Hong, W.-C. Icing Forecasting of Transmission Lines with a Modified Back Propagation Neural Network-Support Vector Machine-Extreme Learning Machine with Kernel (BPNN-SVM-KELM) Based on the Variance-Covariance Weight Determination Method. Energies 2017, 10, 1196. [Google Scholar] [CrossRef]
- Srinivas, A.; Lin, T.-Y.; Parmar, N.; Shlens, J.; Abbeel, P.; Vaswani, A. Bottleneck Transformers for Visual Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 16519–16529. Available online: https://openaccess.thecvf.com/content/CVPR2021/html/Srinivas_Bottleneck_Transformers_for_Visual_Recognition_CVPR_2021_paper.html (accessed on 6 July 2024).
- Hu, H.; Zhu, Z. Sim-YOLOv5s: A method for detecting defects on the end face of lithium battery steel shells. Adv. Eng. Inform. 2023, 55, 101824. [Google Scholar] [CrossRef]
- Roy, A.M.; Bhaduri, J. DenseSPH-YOLOv5: An automated damage detection model based on DenseNet and Swin-Transformer prediction head-enabled YOLOv5 with attention mechanism. Adv. Eng. Inform. 2023, 56, 102007. [Google Scholar] [CrossRef]
- Hendrycks, D.; Gimpel, K. Gaussian Error Linear Units (GELUs). arXiv 2023. [Google Scholar] [CrossRef]
- Yu, G.; Zhou, X. An Improved YOLOv5 Crack Detection Method Combined with a Bottleneck Transformer. Mathematics 2023, 11, 2377. [Google Scholar] [CrossRef]
- Huang, Y.; Fan, J.; Hu, Y.; Guo, J.; Zhu, Y. TBi-YOLOv5: A surface defect detection model for crane wire with Bottleneck Transformer and small target detection layer. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2024, 238, 2425–2438. [Google Scholar] [CrossRef]
- Liu, J.; Qiao, W.; Xiong, Z. OAB-YOLOv5: One-Anchor-Based YOLOv5 for Rotated Object Detection in Remote Sensing Images. J. Sens. 2022, 2022, 8515510. [Google Scholar] [CrossRef]
- Isa, I.S.; Rosli, M.S.A.; Yusof, U.K.; Maruzuki, M.I.F.; Sulaiman, S.N. Optimizing the Hyperparameter Tuning of YOLOv5 for Underwater Detection. IEEE Access 2022, 10, 52818–52831. [Google Scholar] [CrossRef]
- Van, H.-P.-T.; Hoang, V.-D. Insulator Detection in Intelligent Monitoring Based on Yolo Family and Customizing Hyperparameters. J. Tech. Educ. Sci. 2023, 18, 69–77. [Google Scholar] [CrossRef]
- Pereira, F.; Pinto, L.; Machado, J.; Soares, F.; Vasconcelos, R.; Carvalho, V. Yarn Hairiness—Loop & Protruding Fibers Dataset; Mendeley Data: London, UK, 2023. [Google Scholar] [CrossRef]
- Pereira, F.; Pinto, L.; Soares, F.; Vasconcelos, R.; Machado, J.; Carvalho, V. Online yarn hairiness—Loop & protruding fibers dataset. Data Brief. 2024, 54, 110355. [Google Scholar] [CrossRef] [PubMed]
- Roboflow: Computer Vision Tools for Developers and Enterprises. Available online: https://roboflow.com/ (accessed on 6 July 2024).
- Labeling with LabelMe: Step-by-Step Guide [Alternatives + Datasets]. Available online: https://www.v7labs.com/blog/labelme-guide/ (accessed on 6 July 2024).
- Mullen, J.F.; Tanner, F.R.; Sallee, P.A. Comparing the Effects of Annotation Type on Machine Learning Detection Performance. In Proceedings of the 2019 IEEECVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 855–861. [Google Scholar] [CrossRef]
- Lin, G.; Liu, K.; Xia, X.; Yan, R. An Efficient and Intelligent Detection Method for Fabric Defects based on Improved YOLOv5. Sensors 2023, 23, 97. [Google Scholar] [CrossRef]
- Li, L.; Spratling, M. Understanding and combating robust overfitting via input loss landscape analysis and regularization. Pattern Recognit. 2023, 136, 109229. [Google Scholar] [CrossRef]
- Li, H.; Rajbahadur, G.K.; Lin, D.; Bezemer, C.-P.; Jiang, Z.M. Keeping Deep Learning Models in Check: A History-Based Approach to Mitigate Overfitting. IEEE Access 2024, 12, 70676–70689. [Google Scholar] [CrossRef]
- Uddin, S.; Lu, H.; Rahman, A.; Gao, J. A novel approach for assessing fairness in deployed machine learning algorithms. Sci. Rep. 2024, 14, 17753. [Google Scholar] [CrossRef]
- Hassan, A.; Gulzar Ahmad, S.; Ullah Munir, E.; Ali Khan, I.; Ramzan, N. Predictive modelling and identification of key risk factors for stroke using machine learning. Sci. Rep. 2024, 14, 11498. [Google Scholar] [CrossRef]
- Aljalal, M.; Aldosari, S.A.; Molinas, M.; Alturki, F.A. Selecting EEG channels and features using multi-objective optimization for accurate MCI detection: Validation using leave-one-subject-out strategy. Sci. Rep. 2024, 14, 12483. [Google Scholar] [CrossRef]
Feature | A [39] | B [41] | C [37] | D [38] | E [40] |
---|---|---|---|---|---|
Mechatronic prototype developed? | ✓ | - | - | - | - |
Non-destructive prototype? | - | - | - | - | - |
Yarn winding and unwinding system? | - | - | - | - | - |
Image or video analysis in yarn? | Image | Image | Image | Image | Image |
Use of Vision System (VS) or Artificial Intelligence (AI) to detect defects in textile fabric or yarn | VS | VS + AI | VS + AI | VS + AI | VS |
Yarn twist orientation | - | - | - | - | - |
Yarn twist step | - | - | - | - | - |
Thick places | ✓ | - | - | ✓ | - |
Thin places | ✓ | - | - | ✓ | - |
Neps | ✓ | ✓ | - | ✓ | - |
Yarn Diameter | ✓ | - | - | - | - |
Linear mass | - | - | - | ✓ | - |
Volume | - | - | - | - | - |
Number of cables | - | - | - | - | - |
Number of loose fibers | - | - | - | - | - |
Mean deviation of mass U (%) | - | - | - | - | - |
Coefficient of variation mass CV (%) | - | ✓ | - | ✓ | - |
Dataset | Class | Number of Annotations |
---|---|---|
Training | Loop fibers | 4667 |
Training | Protruding fibers | 3035 |
Validation | Loop fibers | 1390 |
Validation | Protruding fibers | 895 |
Test | Loop fibers | 676 |
Test | Protruding fibers | 374 |
Hardware and Operating System (OS) and Specific Environment | Specification |
---|---|
OS | Ubuntu 22.04.2 LTS |
CPU | 2 × Intel(R) Xeon(R) CPU @ 2.00 GHz |
GPU | NVIDIA Tesla T4 with 16 GB |
RAM memory | 16 GB |
Framework | PyTorch 1.13.1 |
CUDA | 12.0 |
cuDNN | 8700 (i.e., 8.7.0) |
Python version | 3.7.9 |
Parameter | Specification |
---|---|
Image size | 1280 × 1280 pixels |
Optimizer | Stochastic gradient descent (SGD) |
Learning rate | 0.01 |
Batch size | 16 |
Epochs | 100 |
Training time | 2 h 21 m 39 s |
Algorithm/Metrics (with AUG) | mAP_0.5:0.95 | mAP_0.5 | F1_Score | Recall | Precision | Accuracy |
---|---|---|---|---|---|---|
Default YOLOv5s6 | 0.3394 | 0.6605 | 0.6446 | 0.6536 | 0.6358 | 0.4615 |
Proposed YOLOv5s6-Yarn Hairiness | 0.3786 | 0.6964 | 0.6685 | 0.7017 | 0.6383 | 0.4667 |
YOLOv8 | 0.3630 | 0.6480 | 0.6255 | 0.6260 | 0.6250 | 0.4333 |
YOLOv7 | 0.3271 | 0.6114 | 0.6331 | 0.6841 | 0.5848 | 0.4233 |
YOLOv5n | 0.3312 | 0.6484 | 0.6315 | 0.6465 | 0.6171 | 0.4415 |
Fast R-CNN | 0.2410 | 0.5220 | 0.3195 | 0.4740 | 0.2410 | 0.4325 |
Algorithm | mAP@0.5:0.95 | mAP@0.5 | F1_Score | Recall | Precision | Accuracy |
---|---|---|---|---|---|---|
Default YOLOv5s6 | 0.3394 | 0.6605 | 0.6445 | 0.6536 | 0.6358 | 0.4615 |
Proposed YOLOv5s6-Yarn Hairiness | 0.3786 | 0.6964 | 0.6685 | 0.7017 | 0.6383 | 0.4666 |
Metrics Increase (%) | +11.55 | +5.43 | +3.71 | +7.36 | +0.39 | +1.11 |
Metric | Results |
---|---|
Model latency per frame | 35 ms |
Throughput | 28.6 FPS |
GPU utilization | 85% |
GPU memory required | 8 GB |
RAM usage | 6 GB |
General hardware requirements | Dedicated GPU with 16 GB VRAM |
Algorithm | mAP@0.5:0.95 | mAP@0.5 | F1-Score | Recall | Precision | Accuracy |
---|---|---|---|---|---|---|
Default YOLOv5s6 | 0.3394 | 0.6605 | 0.6445 | 0.6536 | 0.6358 | 0.4615 |
YOLOv5s6 + C2f | 0.3434 | 0.6427 | 0.6411 | 0.6514 | 0.6312 | 0.4622 |
YOLOv5s6 + Bot-Transformer | 0.3395 | 0.6487 | 0.6531 | 0.6768 | 0.6310 | 0.4579 |
YOLOv5s6 + GeLU | 0.3481 | 0.6528 | 0.6450 | 0.6611 | 0.6297 | 0.4581 |
YOLOv5s6 + C2f + Bot-Transformer | 0.3439 | 0.6521 | 0.6529 | 0.6529 | 0.6529 | 0.4561 |
YOLOv5s6 + C2f + GeLU | 0.3441 | 0.6586 | 0.6493 | 0.6784 | 0.6226 | 0.4657 |
YOLOv5s6 + Bot-Transformer + GeLU | 0.3413 | 0.6548 | 0.6486 | 0.6880 | 0.6135 | 0.4675 |
YOLOv5s6 + Bot-Transformer + GeLU + C2f | 0.3476 | 0.6741 | 0.6511 | 0.6603 | 0.6421 | 0.4654 |
YOLOv5s6 + Hyperparameters | 0.3702 | 0.6773 | 0.6616 | 0.6930 | 0.6329 | 0.4686 |
Proposed YOLOv5s6-Yarn Hairiness | 0.3786 | 0.6964 | 0.6685 | 0.7017 | 0.6383 | 0.4666 |
Metric increases (%) relative to the default YOLOv5s6:
Algorithm/Combination | mAP_0.5:0.95 | mAP_0.5 | F1-Score | Recall | Precision | Accuracy |
---|---|---|---|---|---|---|
Default YOLOv5s6 | N/A | N/A | N/A | N/A | N/A | N/A |
YOLOv5s6 + C2f | 1.18 | −2.70 | −0.53 | −0.34 | −0.72 | 0.15 |
YOLOv5s6 + Bot-Transformer | 0.03 | −1.79 | 1.32 | 3.55 | −0.76 | −0.78 |
YOLOv5s6 + GeLU | 2.56 | −1.17 | 0.07 | 1.15 | −0.96 | −0.73 |
YOLOv5s6 + C2f + Bot-Transformer | 1.33 | −1.27 | 1.29 | −0.11 | 2.69 | −1.17 |
YOLOv5s6 + C2f + GeLU | 1.39 | −0.29 | 0.73 | 3.79 | −2.08 | 0.91 |
YOLOv5s6 + Bot-Transformer + GeLU | 0.56 | −0.86 | 0.63 | 5.26 | −3.51 | 1.30 |
YOLOv5s6 + Bot-Transformer + GeLU + C2f | 2.42 | 2.06 | 1.01 | 1.03 | 0.99 | 0.85 |
YOLOv5s6 + Hyperparameters | 9.08 | 2.54 | 2.64 | 6.03 | −0.46 | 1.54 |
Proposed YOLOv5s6-Yarn Hairiness | 11.55 | 5.43 | 3.71 | 7.36 | 0.39 | 1.11 |
K-Fold 10 Cross-Validation (YOLOv5s6-Hairiness) | mAP_0.5 | mAP_0.5:0.95 | Precision | Recall | Accuracy |
---|---|---|---|---|---|
kfold-10 | 0.7860 | 0.4593 | 0.7040 | 0.7663 | 0.5050 |
kfold-9 | 0.7794 | 0.4541 | 0.6981 | 0.7496 | 0.5067 |
kfold-8 | 0.7901 | 0.4589 | 0.7094 | 0.7655 | 0.5067 |
kfold-7 | 0.7909 | 0.4575 | 0.7327 | 0.7426 | 0.5200 |
kfold-6 | 0.7939 | 0.4568 | 0.7439 | 0.7264 | 0.5050 |
kfold-5 | 0.7881 | 0.4477 | 0.7099 | 0.7528 | 0.5100 |
kfold-4 | 0.7872 | 0.4599 | 0.7200 | 0.7503 | 0.5084 |
kfold-3 | 0.7507 | 0.4330 | 0.6954 | 0.7289 | 0.4800 |
kfold-2 | 0.7862 | 0.4581 | 0.7143 | 0.7570 | 0.5084 |
kfold-1 | 0.7911 | 0.4568 | 0.7197 | 0.7362 | 0.5017 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).