Proceeding Paper

Abaca Blend Fabric Classification Using Yolov8 Architecture †

by Cedrick D. Cinco, Leopoldo Malabanan R. Dominguez and Jocelyn F. Villaverde *
School of Electrical, Electronics, and Computer Engineering, Mapúa University, Manila 1002, Philippines
* Author to whom correspondence should be addressed.
Presented at the 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering, Yunlin, Taiwan, 15–17 November 2024.
Eng. Proc. 2025, 92(1), 42; https://doi.org/10.3390/engproc2025092042
Published: 30 April 2025
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)

Abstract: Advanced deep learning assists operations across many industries. In the textile industry, fabric classification requires trained and experienced professionals. Fabrics such as Abaca are difficult to classify because the same base material is intertwined with different materials. The versatile nature of Abaca is exploited in various products including paper bills, ropes, handwoven handicrafts, and fabric. Abaca fabric is an unsought fabric product due to its rough texture, so blended Abaca fabrics are traditionally mixed with cotton, silk, and polyester. Because a blended fabric combines the characteristics of its constituent materials, fabric classification is prone to human error. Therefore, we created a device capable of classifying blends of Abaca fabric using the YOLOv8 architecture. We used a Raspberry Pi 4B with camera module v3 to capture images for classification. The dataset consisted of four blends, specifically Abaca, Cotton Abaca, Polyester Abaca, and Silk Abaca. A total of 500 images were used to test the model's performance, and the resulting accuracy was 94.6%.

1. Introduction

Abaca fiber, popularly known as Manila hemp, is a natural fiber used in manufacturing textiles. Abaca is a resilient and versatile material applied to a variety of products, such as paper bills, ropes, handwoven products, and fabric. However, manufacturing fabric from Abaca alone is not recommended due to its rigidity and inflexibility. Blending Abaca fibers with other natural or synthetic fibers, such as cotton, silk, or polyester, creates new fabrics that retain the resilient character of Abaca. With the different blends of the fabric, identification and inspection of the fabric become complex. As the Philippines is a major producer of Abaca fabrics, maintaining classification standards is important.
Developments in computer vision technology enable the analysis of surface attributes in images. Computer vision has been used to identify the colors of apples and tomatoes to determine their ripeness [1,2]. However, using color as a feature increases complexity. Image processing techniques are commonly used in classification or object detection to enhance the features of the source image. In grayscale images, where color is reduced to pixel intensity, features such as pattern, texture, and light intensity can be extracted. In Ref. [3], enhanced grayscale images of irises were used to extract individual characteristics. Grayscale images of fresh fish were also used for freshness classification [4], combining histogram of oriented gradients (HOG) feature extraction with a support vector machine (SVM). Another study applied gray-level segmentation to maize disease detection [5], where image processing techniques extracted the specific features needed to identify the disease. Images of epidermal diseases on the skin of felines and humans were examined in [6,7], using the gray-level co-occurrence matrix (GLCM) for texture analysis. For real-time applications of computer vision, a combination of speed and accuracy is important. An architecture that satisfies this requirement is You Only Look Once (YOLO), a deep learning architecture capable of real-time object detection and classification. Object detection has been used to identify people's positions in an enclosed space and flag violations of social distancing protocols [8]. Object detection with YOLO has also been used to identify and monitor whiteflies and fruit flies on farm produce [9] and to detect underwater plastic [10].
The standard components of computer vision systems include processors, sensors, and displays. Depending on the application, additional components are attached for specific purposes. For systems serving visually impaired users, the standard components alone are insufficient [11,12]; audio devices and optical sensors are added to provide feedback and substitute for the impaired senses. In another system, various components were attached to a Raspberry Pi to imitate the functions of a mobile phone [13].
Recent research on Abaca has covered the production of the plant material, its attributes, and its applications. One recent study identified and classified Abaca leaves [14]: a deep learning framework called ResNet was used to classify images of Abaca leaves by health condition. A VGGNet-16 architecture was used to classify the stripping quality of Abaca fibers [15], and geospatial data were used to improve Abaca production on the island of Catanduanes [16]. For Abaca production, a smart farming framework was proposed to ease the Abaca growing process [17]. However, the integration of computer vision into Abaca production and distribution has not been thoroughly examined. Most image processing research in textiles uses computer vision to improve fabric quality by detecting defects [18].
The incorporation of computer vision into fabric production has likewise not been thoroughly investigated. Using TextileNet, different fabrics were classified with deep learning architectures [19]. Two datasets were used: OCT images providing microscopic feature detail and macro images showing surface-level detail. The study used transfer learning to shorten the training period and reduce the complexity of fabric classification.
Based on the previous results and existing requirements, we created a device to classify blends of Abaca fabric using YOLOv8 architecture. Specifically, the blends of Abaca fabric consist of Abaca, Cotton-Abaca, Non-Abaca, Polyester-Abaca, and Silk-Abaca included in the custom dataset. We used Raspberry Pi 4B and its camera module v3 to gather a dataset containing the different Abaca fabric blends. A trained model of YOLOv8 architecture was developed using transfer learning and its performance and accuracy were evaluated using a confusion matrix.
The developed model assists professionals who handle the sale, distribution, or manufacturing of fabric. The model enhances the efficiency of the sorting and distribution process of Abaca and contributes to increasing the knowledge about the classification of Abaca fabric.

2. Methods

Figure 1 shows the framework of the system. The proposed system captured input images for classification using the Raspberry Pi camera module v3. Each image showed a macro-level view of the fabric, providing a close-up of its texture and pattern features. The image was pre-processed to fit the required input parameters of the YOLOv8 architecture: the color image was converted to grayscale. The architecture then processed the image to extract texture and pattern features, and at the final layer the YOLOv8 algorithm classified the image to predict the fabric blend. The developed system output the prediction on a graphical user interface (GUI) shown on an LCD. The non-Abaca fabric images were taken from Roboflow Universe, an open-source collection of datasets [20]. The custom dataset included 4500 images covering four fabric blends. The dataset was split into training and validation sets in a ratio of 80:20, and the test dataset consisted of 100 images per blend.
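The grayscale conversion and normalization steps described above can be illustrated with a minimal NumPy sketch. The function names and the BT.601 luma weights are illustrative assumptions, not taken from the authors' code:

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale.

    Uses the common ITU-R BT.601 luma weights; this is an assumed
    choice, the paper does not specify the conversion formula.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

def normalize(gray: np.ndarray) -> np.ndarray:
    """Scale pixel values from [0, 255] to [0, 1] for the network input."""
    return gray.astype(np.float32) / 255.0

# A dummy 4 x 4 "image": pure red pixels everywhere.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 255
gray = to_grayscale(img)
norm = normalize(gray)
print(gray.shape)                    # (4, 4)
print(round(float(gray[0, 0]), 3))   # 76.245 (0.299 * 255)
```

In a real pipeline these steps would run on the camera frame before it is handed to the YOLOv8 model.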

2.1. Hardware Development

Figure 2 shows the block diagram of the system. The proposed system was composed of a single-board computer, auxiliary components, and a display. A Raspberry Pi 4 was used for image classification. The system components included the power supply, memory card, mouse, and camera module v3, which captured 12-megapixel images. An auxiliary LED strip light was installed to obtain accurate and consistent fabric features. The output was viewed on the LCD through the created GUI. The power source, camera module v3, and mouse provided input to the Raspberry Pi 4, while the strip lights and LCD received output from it. The memory card stored the data and instructions governing how the system operates. All equipment used in the research was sourced either from Makerlab Electronics Manila or from locally manufactured components on Shopee.

2.2. Software Development

Figure 3 shows the main system's operation flowchart. The YOLOv8 architecture resized the image to 640 × 640 pixels, and the colored image was converted to grayscale. Afterward, the image was normalized and transformed into an appropriate input. The YOLOv8 algorithm extracted the features of the image, which were processed through the network layers, the final layer being the classification layer. When multiple bounding boxes were detected in the image, non-maximum suppression was applied to discard overlapping predictions and retain the highest-confidence one. Afterwards, the inference results were output. The classification submodule applied the conditions determining how the output was generated: Abaca, Cotton Abaca, Silk Abaca, Polyester Abaca, or Non-Abaca Fabric as the final prediction.
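The non-maximum suppression step can be sketched in pure NumPy. This is a generic greedy NMS, assuming boxes in (x1, y1, x2, y2) format and an illustrative IoU threshold; it is not the authors' or Ultralytics' exact implementation:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the best-scoring box, then drop any remaining
    box that overlaps it above iou_thresh, and repeat."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        order = np.array([i for i in rest
                          if iou(boxes[best], boxes[i]) < iou_thresh])
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed
```

This keeps the highest-confidence prediction among overlapping boxes rather than averaging them, which is the standard behavior of NMS in YOLO-family detectors.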

2.3. Training

Before training, the images were gathered and annotated using Roboflow v3.0. Training was repeated until the model's performance reached an accuracy of 90%. If the trained model's performance was not satisfactory, the training parameters were adjusted; these included the learning rate, regularization (weight decay), the learning rate scheduler, and warm-up epochs. The images were augmented using HSV-value augmentation, rotation, translation, scaling, perspective, mosaic, and mix-up (Figure 4).

2.4. Experimental Setup

Figure 5 shows the developed system. The fabric sample was placed in the center of the box under the camera with lights. The Raspberry Pi 4 was connected to the camera module. A GUI displayed the results.

2.5. GUI

Figure 6 shows the GUI of the system. The interface was composed of two buttons (Capture and Predict), a preview image, and the resulting prediction. The Capture button initiated the camera module to capture a high-quality image, which was then converted to grayscale using the Pillow library. The Predict button ran the YOLO model to predict a classification. Once predicted, the results were displayed in the Results text box.

3. Results and Discussion

3.1. Data Gathering

Table 1 shows the number of images in the training and test datasets. The test dataset contained 10% of the total dataset. The non-Abaca fabric, a denim fabric whose images were taken from Roboflow Universe [20], was used to test the model but not for training. The dataset was split into training and validation sets in a ratio of 80:20.
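The 80:20 split described above can be sketched as a seeded shuffle of image paths; the file names and seed are hypothetical:

```python
import random

def split_dataset(paths, train_ratio=0.8, seed=42):
    """Shuffle image paths deterministically and split them into
    training and validation lists at the given ratio."""
    rng = random.Random(seed)
    shuffled = paths[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# 1000 hypothetical image files for one fabric class.
paths = [f"abaca_{i:04d}.jpg" for i in range(1000)]
train, val = split_dataset(paths)
print(len(train), len(val))  # 800 200
```

Seeding the shuffle keeps the split reproducible across training runs.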
Figure 7 shows a sample fabric for each class. Preprocessing was conducted to prevent any bias caused by color and to minimize errors in extracting texture and pattern features. The image captured by the camera was resized to 480 × 480 pixels. The actual fabric samples were larger than 5 × 5 inches.

3.2. Statistical Treatment

Figure 8 shows the confusion matrix of the trained model. The gathered information was used to compute the accuracy for each fabric and the overall accuracy of the model using Equations (1) and (2).

$$\text{Accuracy} = \frac{\sum_{i=1}^{5} A_{ii}}{\sum_{i=1}^{5}\sum_{j=1}^{5} A_{ij}} \quad (1)$$

$$\text{Class Accuracy} = \frac{A_{ii}}{\sum_{j=1}^{5} A_{ij}} \quad (2)$$
The overall accuracy of the model was computed by dividing the sum of all true positives by the total number of classifications. The accuracy for each class was computed by dividing the true positives of that fabric by the total number of classifications of that fabric.
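Equations (1) and (2) can be checked with a short NumPy sketch on a hypothetical confusion matrix (rows are the actual class, columns the predicted class; the numbers below are illustrative, not the paper's results):

```python
import numpy as np

def overall_accuracy(cm: np.ndarray) -> float:
    """Equation (1): sum of the diagonal (true positives)
    divided by the total number of classifications."""
    return float(np.trace(cm) / cm.sum())

def class_accuracy(cm: np.ndarray, i: int) -> float:
    """Equation (2): true positives of class i divided by
    the total number of samples of class i (row sum)."""
    return float(cm[i, i] / cm[i].sum())

# Hypothetical 2-class confusion matrix for illustration.
cm = np.array([[95, 5],
               [10, 90]])
print(overall_accuracy(cm))   # (95 + 90) / 200 = 0.925
print(class_accuracy(cm, 0))  # 95 / 100 = 0.95
```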
Table 2 shows the accuracy derived from the confusion matrix. The misclassification of Polyester Abaca was due to its features and texture being similar to those of the other fabrics, while the misclassification of the non-Abaca fabric was caused by its similarity to cotton and silk.

4. Conclusions and Recommendation

We successfully implemented the developed classification system. We created a dataset of Abaca fabric blends consisting of Abaca, Cotton Abaca, Polyester Abaca, and Silk Abaca. Using the YOLOv8 architecture, the model was trained to reach an overall accuracy of 94.6%. Misclassifications were mainly due to the similarity of the Polyester Abaca to the Cotton Abaca fabric. The non-Abaca fabric was not included in training, which might degrade performance on such samples. Hence, it is necessary to increase the number of sample images in the dataset and to include non-Abaca fabric in the training dataset to improve the model's performance.

Author Contributions

Conceptualization, C.D.C., L.M.R.D. and J.F.V.; methodology, C.D.C. and L.M.R.D.; software, C.D.C.; validation, C.D.C. and L.M.R.D.; writing—original draft preparation, C.D.C. and L.M.R.D.; writing—review and editing, C.D.C., L.M.R.D. and J.F.V.; visualization, C.D.C.; supervision, J.F.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gunawan, K.C.; Lie, Z.S. Apple Ripeness Level Detection Based on Skin Color Features with Convolutional Neural Network Classification Method. In Proceedings of the 7th International Conference on Electrical, Electronics and Information Engineering: Technological Breakthrough for Greater New Life, ICEEIE 2021, Malang, Indonesia, 2 October 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021. [Google Scholar] [CrossRef]
  2. Legaspi, J.; Pangilinan, J.R.; Linsangan, N. Tomato Ripeness and Size Classification Using Image Processing. In Proceedings of the 2022 5th International Seminar on Research of Information Technology and Intelligent Systems, ISRITI 2022, Yogyakarta, Indonesia, 8–9 December 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022; pp. 613–618. [Google Scholar] [CrossRef]
  3. Virtusio, D.T.U.; Tapaganao, F.J.D.; Maramba, R.G. Enhanced Iris Recognition System Using Daugman Algorithm and Multi-Level Otsu’s Thresholding. In Proceedings of the International Conference on Electrical, Computer, and Energy Technologies, ICECET 2022, Prague, Czech Republic, 20–22 July 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  4. Abalos, M.R.; Villaverde, J.F. Fresh Fish Classification Using HOG Feature Extraction and SVM. In Proceedings of the 2022 IEEE 14th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, HNICEM 2022, Boracay Island, Philippines, 1–4 December 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  5. Bonifacio, D.J.M.; Pascual, A.M.I.E.; Caya, M.V.C.; Fausto, J.C. Determination of Common Maize (Zea mays) Disease Detection using Gray-Level Segmentation and Edge-Detection Technique. In Proceedings of the 2020 IEEE 12th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, HNICEM 2020, Manila, Philippines, 3–7 December 2020; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2020. [Google Scholar] [CrossRef]
  6. Andujar, B.J.; Ferranco, N.J.; Villaverde, J.F. Recognition of Feline Epidermal Disease using Raspberry-Pi based Gray Level Co-occurrence Matrix and Support Vector Machine. In Proceedings of the 2021 IEEE 13th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, HNICEM 2021, Manila, Philippines, 28–30 November 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021. [Google Scholar] [CrossRef]
  7. Jardeleza, S.G.S.; Jose, J.C.; Villaverde, J.F.; Ann Latina, M. Detection of Common Types of Eczema Using Gray Level Co-occurrence Matrix and Support Vector Machine. In Proceedings of the 2023 15th International Conference on Computer and Automation Engineering, ICCAE 2023, Sydney, Australia, 3–5 March 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023; pp. 231–236. [Google Scholar] [CrossRef]
  8. De Guzman, S.R.C.; Tan, L.C.; Villaverde, J.F. Social Distancing Violation Monitoring Using YOLO for Human Detection. In Proceedings of the 2021 IEEE 7th International Conference on Control Science and Systems Engineering (ICCSSE 2021), Qingdao, China, 30 July–1 August 2021; pp. 216–222. [Google Scholar] [CrossRef]
  9. Legaspi, K.R.B.; Sison, N.W.S.; Villaverde, J.F. Detection and Classification of Whiteflies and Fruit Flies Using YOLO. In Proceedings of the 2021 13th International Conference on Computer and Automation Engineering, ICCAE 2021, Melbourne, Australia, 20–22 March 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021; pp. 1–4. [Google Scholar] [CrossRef]
  10. Reddy, V.B.K.; Basha, S.S.; Swarnalakshmi, J. Augmenting Underwater Plastic Detection: A Study with YOLO-V8m on Enhanced Datasets. In Proceedings of the 3rd International Conference on Applied Artificial Intelligence and Computing, ICAAIC 2024, Salem, India, 5–7 June 2024; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2024; pp. 1309–1315. [Google Scholar] [CrossRef]
  11. Lakshmanan, S.; Divya, B.; Nirmala, K.; Annamalai, M.; Pragadeesh, T.; Sanju Varshini, T. Portable assistive system for visually impaired using raspberry pi. In Proceedings of the 2020 IEEE International Conference on Advances and Developments in Electrical and Electronics Engineering, ICADEE 2020, Coimbatore, India, 10–11 December 2020; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2020. [Google Scholar] [CrossRef]
  12. Christopherson, P.S.; Eleyan, A.; Bejaoui, T.; Jazzar, M. Smart Stick for Visually Impaired People using Raspberry Pi with Deep Learning. In Proceedings of the 2022 International Conference on Smart Applications, Communications and Networking, SmartNets 2022, Palapye, Botswana, 29 November–1 December 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  13. Abd-Elrahim, A.M.; Abu-Assal, A.; Mohammad, A.A.A.A.; Al-Imam, A.I.M.; Hassan, A.H.A.; Muhi-Aldeen, M.A.M. Design and Implementation of Raspberry Pi based Cell phone. In Proceedings of the 2020 International Conference on Computer, Control, Electrical, and Electronics Engineering, ICCCEEE 2020, Khartoum, Sudan, 26 February–1 March 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021. [Google Scholar] [CrossRef]
  14. Buenconsejo, L.T.; Linsangan, N.B. Classification of Healthy and Unhealthy Abaca leaf using a Convolutional Neural Network (CNN). In Proceedings of the 2021 IEEE 13th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, HNICEM 2021, Manila, Philippines, 28–30 November 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021. [Google Scholar] [CrossRef]
  15. Hong, J.; Caya, M.V.C. Development of Convolutional Neural Network Model for Abaca Fiber Stripping Quality Classification System. In Proceedings of the 2022 IEEE 14th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, HNICEM 2022, Boracay Island, Philippines, 1–4 December 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  16. Tapado, B.M. Enhancing Abaca Fiber Production Through a GIS-Based Application. In Proceedings of the 2022 IEEE 7th International Conference on Information Technology and Digital Applications, ICITDA 2022, Yogyakarta, Indonesia, 4–5 November 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  17. Salazar, E.; Morales, A. Smart Irrigation Framework Using Arduino for an Improved Abaca Farming System. In Proceedings of the 2023 6th International Conference on Control, Robotics and Informatics, ICCRI 2023, Danang, Vietnam, 26–28 May 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023; pp. 39–45. [Google Scholar] [CrossRef]
  18. Mehta, A.; Jain, R. An Analysis of Fabric Defect Detection Techniques for Textile Industry Quality Control. In Proceedings of the 2023 World Conference on Communication and Computing, WCONF 2023, Raipur, India, 14–16 July 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  19. Siam, A.S.M.; Arafat, Y.; Talukdar, M.M.; Mehedi Hasan, M.; Rahman, R. Textile Net: A Deep Learning Approach for Textile Fabric Material Identification from OCT and Macro Images. In Proceedings of the 2023 26th International Conference on Computer and Information Technology, ICCIT 2023, Cox’s Bazar, Bangladesh, 13–15 December 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  20. Andersen, I. Jeans v2 Dataset. Roboflow Universe. Available online: https://universe.roboflow.com/irvin-andersen/jeans-v2 (accessed on 20 August 2024).
Figure 1. Framework of this study.
Figure 2. Block Diagram.
Figure 3. Operation flowchart.
Figure 4. Training flowchart.
Figure 5. Developed system.
Figure 6. GUI of system.
Figure 7. Fabric samples for Abaca, Cotton Abaca, Silk Abaca, and Polyester Abaca (clockwise from top left).
Figure 8. Confusion matrix for classification.
Table 1. Dataset used for training and testing.

Blend of Fabric  | Training | Testing
Abaca            | 1000     | 100
Cotton Abaca     | 1000     | 100
Non-Abaca        | N/A      | 100
Polyester Abaca  | 1000     | 100
Silk Abaca       | 1000     | 100
Total            | 4000     | 500
Table 2. Classification accuracy for each fabric.

Class            | Accuracy
Abaca            | 100%
Cotton Abaca     | 93%
Polyester Abaca  | 90%
Silk Abaca       | 100%
Non-Abaca        | 90%
Model's Accuracy | 94.6%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

