Article

Advancing Plastic Waste Classification and Recycling Efficiency: Integrating Image Sensors and Deep Learning Algorithms

Janghee Choi, Byeongju Lim and Youngjun Yoo
1 Research Institute of Clean Manufacturing System, Korea Institute of Industrial Technology, Cheonan 31056, Republic of Korea
2 System Intelligence Group, Korea University College of Informatics, Seoul 02841, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(18), 10224; https://doi.org/10.3390/app131810224
Submission received: 8 August 2023 / Revised: 5 September 2023 / Accepted: 8 September 2023 / Published: 12 September 2023
(This article belongs to the Special Issue Recent Advances in the Plastics Recycling and Upcycling)

Abstract

Plastics, with their versatility and cost-effectiveness, have become indispensable materials across various industries. However, the improper disposal and mismanagement of plastic waste have led to significant environmental issues, including pollution, habitat destruction, and threats to wildlife. To address these challenges, numerous methods for plastic waste sorting and recycling have been developed. While conventional techniques such as near-infrared spectroscopy (NIRS) have been effective to some extent, they struggle to classify chemically similar samples, such as polyethylene terephthalate (PET) and glycol-modified PET (PET-G), which share nearly identical chemical compositions but have distinct physical characteristics. This paper introduces an approach that adapts image sensors and deep learning object detection algorithms, specifically the You Only Look Once (YOLO) model, to classify plastic waste by its shape. Unlike conventional methods that rely solely on spectral analysis, our methodology aims to significantly improve the accuracy and efficiency of classifying plastics with similar chemical compositions but differing physical attributes. The system built around image sensors and the YOLO model proves not only effective but also scalable and adaptable to various industrial and environmental applications. In our experiments, the system achieved a classification accuracy of 91.7% mean Average Precision (mAP) in distinguishing between PET and PET-G, surpassing conventional techniques by a considerable margin. By improving sorting accuracy and reducing misclassification rates, the proposed approach boosts recycling efficiency and contributes to a more sustainable plastic waste management system, alleviating the strain on landfills and mitigating the environmental impact of plastic waste.

1. Introduction

Plastics have become an integral part of our daily lives, playing a crucial role in various industries and applications. However, improper disposal and mismanagement of plastic waste have raised significant environmental concerns. The accumulation of plastic waste poses a severe threat to ecosystems, including marine life [1] and terrestrial habitats [2], making effective waste management and recycling strategies urgent. For plastic waste sorting, conventional techniques such as NIRS [3,4,5], electrostatic separators [6], and magnetic density separation [7] have been employed. NIRS offers several advantages, such as remote and rapid measurement with a high signal-to-noise ratio, making it a global trend for waste plastic sorting systems. Furthermore, recent advances in artificial intelligence (AI) research have led to the integration of AI algorithms and NIRS technology to develop more accurate classification systems for plastic sorting [4,8,9]. However, NIRS relies on variations in the absorption efficiency due to differences in the chemical composition of target samples [10,11], presenting challenges when applied to chemically similar samples.
Therefore, there is a critical need for innovative and accurate plastic waste classification methods that can overcome the limitations of existing techniques and enable efficient recycling. For example, polyethylene terephthalate (PET) and polyethylene terephthalate glycol (PET-G) are both polyester-based plastics used in various applications, but they exhibit differences in their physical properties [12]. In particular, PET melts at approximately 245–255 °C, whereas PET-G softens at a far lower temperature of around 70–80 °C. Consequently, when PET and PET-G are mixed for recycling, recycling purity drops significantly. Recognizing these challenges, the California Legislature has passed a bill that revises the definition of PET to exclude PET-G, its glycol-modified variant, in order to reduce contamination in recycling [13]. Moreover, although accurate information on PET-G in waste plastics is currently unavailable, the global market for PET-G is anticipated to grow steadily from USD 2748 million in 2023 to USD 3819 million in 2033, a compound annual growth rate (CAGR) of 3.3% [14].
Considering their distinct physical properties, PET and PET-G are commonly used in different forms. For instance, PET is widely used in beverage containers, while PET-G is commonly applied in electronic product packaging. Consequently, recycling facilities predominantly receive PET in the form of beverage containers and PET-G in the shape of rectangular packaging materials.
Machine learning plays an important role in decision-making applications [15]. Its use in waste-sorting facilities, combined with robotic technology, has become central to managing recyclable materials. Such robots demand advanced visual and manipulation capabilities to process diverse recyclable materials within complex industrial environments. Conventional automated systems have historically employed machine/computer vision to extract materials such as metal, paper, glass, and plastic from waste streams, and robotic technology [16,17,18] offers a more efficient and autonomous alternative that can support or replace current installations. In the context of machine vision, analyzing multi-object images involves object identification (bounding box specification), localization (masking), and material type classification. This approach can precisely identify, locate, and categorize potentially overlapping recyclable materials within the same image, making it particularly suitable for industrial applications.
Object detection [19,20,21,22,23,24,25,26,27] in deep learning involves both classifying objects into categories (classification) and determining their positions with bounding boxes (localization). This field can be broadly categorized into two main approaches: 1-stage detectors and 2-stage detectors. In 1-stage detectors like YOLO [19,20,21,22] and SSD [23,24], both the localization and classification tasks are performed concurrently, whereas in 2-stage detectors [25,26,27], these tasks are carried out sequentially.
The YOLO model [19,20,21,22], a representative 1-stage detector, has several advantages. First, it efficiently processes the entire image at once, eliminating the separate segmentation and analysis required by earlier R-CNN-based methods. Second, YOLO employs a unified model that combines region proposal, feature extraction, classification, and bounding box regression into one coherent framework, resulting in higher speed and facilitating real-time object detection. Third, by operating on the entire image, YOLO effectively captures contextual information about objects and their surroundings, reducing background errors. Lastly, the YOLO model maintains strong detection accuracy even on images unseen during training, making it well-suited for rapid image detection. In this study, the focus lies on using the YOLO detector, with a conveyor belt speed of 2 m/s, to classify plastics of different shapes, specifically PET and PET-G.
The primary aim of this paper is to explore the recycling of substances with similar compositions but different shapes, which pose challenges for classification using conventional NIRS sorting systems based on chemical composition. To address these challenges, our proposed method incorporates image sensors and object detection deep learning algorithms to enable classification based on the shape and form of the plastics. In particular, with a focus on practical implementation within an industrial setting, a system has been developed through optical design considerations that take into account factors such as sorting speed and conveyor belt characteristics, which are prevalent in the field of plastic waste classification. The anticipated benefits of the proposed method include improved sorting accuracy and purity, reduced misclassification rates, and enhanced recycling efficiency, contributing to a more sustainable and effective plastic waste management system.

2. Background

2.1. Overview of Plastic Waste Classification Techniques

Plastic waste classification techniques play a vital role in the recycling and management of plastic waste (Figure 1a), facilitating the efficient separation and processing of different types of plastics. Conventional approaches to plastic waste classification have predominantly relied on near-infrared spectroscopy (NIRS). This technique utilizes the unique infrared spectra exhibited by various plastic polymers to distinguish and sort them. NIRS-based classification has shown promising results in accurately identifying plastics such as polypropylene (PP), polystyrene (PS), polymethyl methacrylate (PMMA), polyethylene terephthalate (PET), polyethylene (PE), and acrylonitrile butadiene styrene (ABS). However, the effectiveness of NIRS is limited when it comes to plastics with similar compositions but different physical attributes (Figure 1b), making it challenging to differentiate them solely based on their infrared spectra.

2.2. Conventional Near-Infrared Spectroscopy for Plastic Waste Classification: Limitations and Challenges

One significant limitation of NIRS-based plastic waste classification is its reliance solely on compositional analysis, neglecting the influence of the shape and form of the plastics. Plastics with similar compositions but different shapes, such as PET and PET-G, pose a challenge for NIRS-based classification. Despite their comparable molecular compositions, PET and PET-G can have distinct physical properties due to the addition of glycol in PET-G, resulting in differences in transparency and flexibility. As NIRS primarily focuses on molecular composition, it struggles to accurately differentiate PET and PET-G, leading to misclassifications during the recycling process. Consequently, the recycling efficiency and the quality of recycled plastic products can be compromised. Moreover, NIRS requires specialized equipment and expertise for spectral analysis, making it less accessible and cost-effective, particularly for small-scale recycling facilities or regions with limited resources. The limitations of NIRS-based classification highlight the need for alternative approaches that incorporate additional factors, such as the shape and form of plastics, to enhance accuracy and overcome the challenges associated with similar composition plastics.
In light of these limitations, this paper proposes a novel plastic waste classification method that combines image sensors and object detection deep learning algorithms. By integrating shape-based analysis with compositional analysis, the proposed method aims to address the challenges posed by plastics with similar compositions but distinct physical attributes. The utilization of image sensors allows for the capture of visual information, enabling the classification of plastics based on their shape, texture, and other visual characteristics. Object detection deep learning algorithms provide the necessary computational framework for accurate and efficient classification on a conveyor belt system. The subsequent sections will delve into the details of the proposed method, including system design, operation principles, and experimental evaluations, to validate its effectiveness in improving plastic waste classification accuracy and recycling efficiency.

3. Proposed Plastic Waste Classification Method

Figure 2 shows the proposed plastic waste classification system, which integrates a machine/computer vision (optical) module, a deep learning-based image analysis module, and an air nozzle module. In this section, we describe the proposed methodology and the integrated system. In this research, we have integrated a vision-based classification system into an existing NIR-based classification system. Given the abundance of prior research on deep learning-based classification of waste plastics using NIR technology [4,8,9], incorporating a machine vision-based system into the NIRS system offers a valuable avenue for comprehensive elemental and morphological analysis.

3.1. Optical System Design

The optical system design (Figure 3) involves the selection of a depth camera and the construction of a plastic waste classification system. The depth camera (D455, Intel, Santa Clara, CA, USA) was selected based on criteria such as depth resolution, shutter type, pixel size, and frame rate. The D455 depth camera features a pixel size of 3 μm × 3 μm and is capable of measuring objects even at a distance of approximately 20 m. With a frame rate of 90 fps, it is well-suited for capturing images of plastics on a high-speed conveyor belt.
An optical system for morphology-based plastic waste classification was devised by integrating three machine/computer vision cameras into a multi-camera module. The system is designed so that each of the three image sensors captures data from a distance of 1 m. To mitigate the effects of real-world factors such as dust and vibrations, the system was positioned 1 m above the conveyor belt. Furthermore, an optical setup capable of accommodating the practical sorting environment was constructed, covering the 3 m width of the conveyor belt. This setup allows comprehensive coverage and accurate measurement of the plastics on the conveyor belt.
To classify the plastic waste, air nozzles are installed at the end of the conveyor belt. These air nozzles play a crucial role in the classification process by using pneumatic control. They are strategically placed to correspond with specific regions or samples on the conveyor belt. The machine/computer vision data are divided and matched to the number and position of the air nozzles, enabling communication and control through algorithms. This ensures that the appropriate air nozzle is activated based on the classification results.
Based on the selection of key modules and the design of the optical system, the system was built to acquire data in real-world scenarios. The lighting conditions were determined through tests to ensure reliable measurement of objects on the fast-moving conveyor belt, and the exposure function of the RGB camera was used for exposure control. As shown in Figure 4, image sharpness can be adjusted by controlling the exposure time: Figure 4a shows a PET bottle captured with a 10 ms exposure, while Figure 4b was captured at 5 ms. The 5 ms image is visibly sharper. Because shorter exposure times also reduce overall image brightness, the brightness was compensated by adjusting the lighting intensity before being applied to the classification system.
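For illustration, the capture-and-exposure setup described above can be sketched with Intel's pyrealsense2 SDK. The stream mode (848 × 480 at 90 fps), the raw exposure value, and the frame-handling stub are assumptions, since the paper does not publish its capture code:

```python
# A minimal capture-and-exposure sketch using Intel's pyrealsense2 SDK.
# The stream mode and exposure value are assumptions for illustration;
# pick a mode the connected D455 actually advertises.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 90)
profile = pipeline.start(config)

# Fix a short exposure to freeze motion blur on the 2 m/s belt (Section 3.1
# found 5 ms sharper than 10 ms). Exposure units depend on the sensor, so the
# raw option value below is an assumption; query the option range on-device.
color_sensor = profile.get_device().first_color_sensor()
color_sensor.set_option(rs.option.enable_auto_exposure, 0)
color_sensor.set_option(rs.option.exposure, 5000)

try:
    while True:
        frames = pipeline.wait_for_frames()
        color = frames.get_color_frame()
        if color:
            pass  # hand the frame to the YOLO detector (Section 3.3)
finally:
    pipeline.stop()
```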

3.2. Proposed Waste Classification System with Development of Deep Learning Classification Algorithm Based on Field Data

The proposed waste classification system aims to develop a deep learning classification algorithm based on field data. Real-world plastic waste classification data are acquired from the conveyor belt (Figure 5a), which moves at a speed of 2 m/s. These data are essential for training and developing an accurate deep learning algorithm tailored specifically to the classification of the PET and PET-G plastics shown in Figure 5b.
The deep learning algorithm is trained on the augmented dataset (Figure 6), consisting of a wide range of variations in plastic waste samples. The algorithm learns to recognize and classify different types of plastics based on their visual features, patterns, and characteristics. The training process involves optimizing the network’s parameters, adjusting the model architecture, and fine-tuning the algorithm to achieve high accuracy and reliable classification results.
The proposed waste classification system combines the developed deep learning algorithm with the optical system to create an integrated solution. The real-time data captured by the machine/computer vision cameras are fed into the deep learning algorithm, which performs inference and classifies the plastic waste in real-time. The classification results are then communicated to the air nozzle controller, which activates the relevant nozzles for pneumatic sorting of the classified samples.
Data labeling (Figure 7) and augmentation techniques are employed to address potential issues of insufficient data and overfitting during the deep learning process. Despite acquiring a substantial amount of data from the field, additional data augmentation is performed to further enhance the learning process and improve classification accuracy. The captured images were augmented by rotation, zooming, and other transformations, expanding the dataset 25-fold (from 2000 to 50,000 samples) and yielding a more robust and diverse training set, as sketched below.
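As an illustrative sketch of this augmentation step (the toolkit and parameter ranges are assumptions; the paper names only rotation and zooming), the 25-fold expansion could be implemented with the open-source Albumentations library, which rewrites YOLO-format bounding boxes alongside each transformed image:

```python
# Illustrative augmentation sketch using Albumentations (an assumption; the
# paper does not name its augmentation toolkit). YOLO-format bounding boxes
# are rewritten alongside each transformed image.
import albumentations as A
import cv2

transform = A.Compose(
    [
        A.Rotate(limit=15, p=0.7),              # small random rotations
        A.RandomScale(scale_limit=0.2, p=0.7),  # mild zoom in/out
        A.RandomBrightnessContrast(p=0.5),      # belt lighting variation
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("pet_sample.jpg")    # hypothetical file name
bboxes = [[0.52, 0.48, 0.30, 0.55]]     # normalized (x_center, y_center, w, h)
labels = ["PET"]

# 25 augmented variants per source image expands 2000 captures to 50,000.
augmented = [
    transform(image=image, bboxes=bboxes, class_labels=labels)
    for _ in range(25)
]
```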

3.3. YOLO-Based Object Detection

The utilization of the YOLO algorithm for object detection plays a critical role in the real-time image capture and classification process. YOLO-v8 is integrated into the system to enable fast and accurate detection of plastic waste objects.
The depth cameras capture images of the conveyor belt, and these images are fed into the YOLO-v8 algorithm for analysis. YOLO-v8 employs a single neural network architecture that performs object localization and classification simultaneously. Instead of examining the image multiple times, as other algorithms do, the YOLO model divides the image into a grid and predicts the bounding boxes and class probabilities directly.
The YOLO-v8 algorithm processes the images in real-time, swiftly detecting the presence of plastic waste objects. It generates bounding box coordinates, which indicate the precise location of each detected object within the image. Additionally, the algorithm assigns class labels to the detected objects, indicating the type of plastic waste they belong to (e.g., PET or PET-G).
The inference process of the YOLO-v8 algorithm is highly optimized, enabling efficient execution and real-time performance. This capability is of utmost importance in waste-sorting applications where quick decision making is necessary to ensure the timely and accurate classification of plastic waste.
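A minimal inference wrapper of this kind, written against the open-source Ultralytics YOLOv8 API, might look as follows; the weight file name and confidence threshold are assumptions, not the authors' published configuration:

```python
# Minimal sketch of a real-time YOLOv8 inference step with the Ultralytics
# API. The fine-tuned weight file and the 0.5 confidence cut are assumptions.
from ultralytics import YOLO

model = YOLO("pet_petg_yolov8.pt")  # hypothetical fine-tuned weights

def detect(frame):
    """Return (class_name, confidence, [x1, y1, x2, y2]) for one belt image."""
    results = model.predict(frame, conf=0.5, verbose=False)
    detections = []
    for box in results[0].boxes:
        cls_id = int(box.cls[0])
        detections.append(
            (model.names[cls_id], float(box.conf[0]), box.xyxy[0].tolist())
        )
    return detections
```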
Upon obtaining the inference results from the YOLO-v8 algorithm, the system proceeds to communicate these results to the air nozzles positioned at the end of the conveyor belt. This communication is facilitated through the Modbus TCP communication interface, ensuring seamless data transfer.

3.4. Interface of the Field Classification Module

As depicted in Figure 8, the RealSense camera module streams images to a PC via the USB3 interface, where deep learning inference is performed. The inference results from the streamed images are transmitted as word data to the Holding Registers of Modbus TCP; in the case of PET, a value of 1 is sent. The transmitted results are linked to the PLC's Digital Out module to generate a signal that activates the air nozzle. When a PET image is detected, as illustrated in Figure 9a, a digital signal shifts from 0 to 1, shown as the yellow trace on the oscilloscope. The corresponding change in the Holding Registers' value can be observed in the red-boxed value in Figure 9b.
When the data are acquired in real-time from the camera via the USB3 interface to the edge PC, the inference process starts using the internal YOLO-v8 algorithm. The inferred results (Figure 9a) are then communicated to the air nozzles located at the end of the conveyor belt through the Modbus TCP communication interface (Figure 9b).
Based on the object detection inference results, which include the bounding box coordinates and class labels, the system determines the specific regions or samples on the conveyor belt that require classification and sorting. To achieve this, the system sends an “O” signal to the pneumatic valves connected to the corresponding air nozzle controller.
For PET and PET-G classification, the air nozzles are divided into 64 regions, and, based on the inference results, the corresponding regions are activated by sending an “O” signal to the pneumatic valves. Only the samples in the activated regions are classified through pneumatic control.
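The following sketch illustrates this handoff under stated assumptions: the PLC address, register layout, and camera frame width are hypothetical, since the paper specifies only that word data (1 denoting PET) are written to Modbus TCP Holding Registers. It reuses the detection tuples from the inference sketch above and the pymodbus library:

```python
# Sketch of the inference-to-PLC handoff: map each detection onto one of the
# 64 nozzle regions and write the region mask to Modbus TCP holding registers.
from pymodbus.client import ModbusTcpClient

FRAME_WIDTH_PX = 1280  # assumed camera frame width
NUM_REGIONS = 64

def region_of(x_center_px):
    """Map a bounding-box x-center in pixels to a nozzle region index 0..63."""
    return min(NUM_REGIONS - 1, int(x_center_px / FRAME_WIDTH_PX * NUM_REGIONS))

def fire_regions(detections):
    """detections: (class_name, confidence, [x1, y1, x2, y2]) tuples."""
    mask = [0] * NUM_REGIONS
    for name, _conf, (x1, _y1, x2, _y2) in detections:
        if name == "PET":  # a register value of 1 is sent for PET
            mask[region_of((x1 + x2) / 2)] = 1

    client = ModbusTcpClient("192.168.0.10")  # hypothetical PLC address
    if client.connect():
        client.write_registers(address=0, values=mask)  # start address assumed
        client.close()
```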
The activation of the appropriate air nozzles directs focused bursts of air toward the classified plastic waste samples, effectively sorting them pneumatically. This precise and automated process ensures that each plastic waste object is correctly categorized and directed to its designated location for further processing or recycling.
While compositional analysis provides valuable insights into the material composition of plastic waste, it may not be adequate for distinguishing plastics with similar compositions but different shapes. To overcome this challenge, a shape-based classification algorithm is developed and integrated into the system.
The shape-based classification algorithm focuses on the visual characteristics, geometric features, contours, and textures of the plastic waste samples. By extracting relevant shape descriptors such as aspect ratios, circularity, convexity, and texture patterns, the algorithm can differentiate between different types of plastics based on their distinct shapes.
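As an illustration of such descriptors, the sketch below computes aspect ratio, circularity, and convexity with OpenCV contour analysis on a binarized sample mask; the segmentation step is an assumption rather than the authors' published pipeline:

```python
# Sketch of the named shape descriptors computed with OpenCV from a binary
# mask; the segmentation/thresholding step is assumed, not from the paper.
import cv2
import numpy as np

def shape_descriptors(mask):
    """mask: binary uint8 image with the plastic sample in white (255)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)  # largest blob = the sample

    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    _x, _y, w, h = cv2.boundingRect(c)
    hull_area = cv2.contourArea(cv2.convexHull(c))

    return {
        "aspect_ratio": w / h,
        # 1.0 for a perfect circle (e.g., a bottle cross-section), lower for
        # elongated or irregular shapes such as rectangular packaging
        "circularity": 4 * np.pi * area / (perimeter ** 2) if perimeter else 0.0,
        # how completely the contour fills its convex hull
        "convexity": area / hull_area if hull_area else 0.0,
    }
```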
The algorithm leverages the visual information captured by the image sensors to perform shape-based analysis. It analyzes the unique visual properties of each plastic waste sample and compares them against predefined patterns and shape templates to determine the appropriate classification.
By incorporating shape information alongside the compositional analysis, the shape-based classification algorithm enhances the accuracy and reliability of the plastic waste classification system. The integration of both algorithms allows for a more comprehensive analysis of plastic waste, enabling precise classification, even in cases where compositional similarities exist.
The combined approach of compositional analysis and shape-based classification significantly improves the system’s ability to correctly identify and categorize plastic waste. It takes advantage of the strengths of each analysis method, mitigating the limitations of individual approaches and enhancing the overall effectiveness of the waste classification system.

4. Experiments and Results

4.1. Description of Experimental Environment and Conditions

The experiments were conducted with real sorting equipment to assess the performance of the plastic waste classification system (Figure 10). The experimental setup consisted of the conveyor belt system equipped with the multi-camera module and the integrated YOLO-based object detection algorithm (Figure 11). The conveyor belt was set to operate at a constant speed of 2 m/s, simulating a typical industrial scenario. Plastic waste samples, specifically PET and PET-G, were used for the experiments, and the learning algorithm was trained using a dataset created specifically for these plastic types.

4.2. Evaluation of the Performance of the Training Algorithm

Due to the practical limitations of accurately calculating the classification accuracy in a real-world environment, the evaluation of the learning algorithm's performance was conducted using validation data. The trained algorithm was tested (Figure 12a) using PET and PET-G samples moving at a speed of 2 m/s on the testbed.
The performance evaluation focused on assessing the algorithm’s accuracy in correctly classifying the plastic waste samples. The algorithm’s predictions were compared against the ground truth labels assigned to the samples. The classification accuracy was determined by calculating the percentage of correctly classified samples out of the total number of samples.
The experimental results revealed that the learning algorithm achieved a classification accuracy of 91.7% mAP (Figure 12b) for the PET and PET-G samples moving at a speed of 2 m/s. This high accuracy demonstrates the effectiveness and reliability of the developed algorithm in accurately identifying and classifying plastic waste in real-time.

4.3. Analysis of Classification Accuracy and Efficiency Results

The analysis of the classification accuracy and efficiency results provides valuable insights into the performance of the plastic waste classification system. The accuracy was measured by comparing the algorithm’s predictions with the ground truth labels, while the efficiency was assessed in terms of the algorithm’s processing speed and real-time capabilities.
The obtained classification accuracy (Figure 12b) of 91.7% mAP demonstrates the system's ability to accurately identify and classify plastic waste samples. The algorithm's high accuracy ensures minimal misclassifications, contributing to the overall efficiency and effectiveness of the waste-sorting process. In terms of efficiency, the learning algorithm demonstrated real-time performance, keeping up with the conveyor belt speed of 2 m/s. The algorithm's fast processing capabilities enable timely classification decisions, ensuring efficient waste-sorting operations.
Figure 13 provides a graphical representation of the performance metrics achieved by the trained YOLO model. In the Precision graph (Figure 13a), precision reaches a value of 1.0 at a confidence threshold of 0.84. In the Recall graph (Figure 13b), a recall of 0.98 is recorded at a confidence threshold of 0. The Precision–Recall curve (Figure 13c) yields a mean Average Precision (mAP) of 91.7% at the 0.5 threshold (mAP@0.5). In the F1-score graph (Figure 13d), the F1-score peaks at 0.87 at a confidence threshold of 0.519. Figure 14 presents the confusion matrix generated by the trained YOLO model, which categorizes objects into PET and background across the entire dataset. The observed recognition rates are True Positive (TP) 91%, False Negative (FN) 100%, False Positive (FP) 9%, and True Negative (TN) 0%.
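For reference, the curves in Figure 13 follow the standard detection metric definitions (a brief recap rather than material from the original study):

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```

Here, mAP@0.5 averages, over the object classes, the area under the Precision–Recall curve computed with an Intersection-over-Union (IoU) threshold of 0.5.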
The combination of high classification accuracy and real-time efficiency indicates the system’s potential for practical implementation in industrial settings. The accurate and efficient classification of plastic waste aids in optimizing recycling processes, reducing contamination, and promoting sustainable waste management practices.
The experimental results and analysis highlight the success of the developed plastic waste classification system in accurately and efficiently classifying PET and PET-G samples. The high classification accuracy and real-time performance pave the way for the system’s integration into industrial waste sorting operations, contributing to a more sustainable and environmentally conscious approach to plastic waste management.

5. Discussion

5.1. Interpretation and Comparison of Experimental Results

The interpretation and comparison of the experimental results shed light on the effectiveness of the proposed plastic waste classification system. The achieved classification accuracy of 91.7% mAP for PET and PET-G samples moving at 2 m/s demonstrates the system's high precision in distinguishing between different types of plastics. This accuracy is crucial for the proper sorting and recycling of plastic waste materials.
Comparing the achieved accuracy with existing manual sorting processes reveals the superiority of the developed system. Manual sorting processes often suffer from human error and subjective judgment, leading to lower accuracy rates. The automated nature of the proposed system eliminates these drawbacks, providing consistent and reliable classification results.
Moreover, the real-time performance of the learning algorithm enables prompt decision making in waste-sorting operations. The system keeps pace with the conveyor belt’s speed, ensuring that samples are classified on time. This efficiency is essential for maintaining the productivity of waste management facilities and optimizing the recycling process.

5.2. Advantages and Limitations of the Proposed Method

The proposed plastic waste classification method offers several advantages over traditional sorting approaches. Firstly, it eliminates the reliance on manual labor, reducing human error and increasing sorting accuracy. The automated nature of the system ensures consistent results, regardless of the operator’s expertise or subjective judgment.
Furthermore, the integration of depth cameras and the YOLO-based object detection algorithm enables real-time classification. This real-time capability enhances the efficiency of waste-sorting operations, minimizing delays and bottlenecks in the recycling process.
Additionally, the combination of compositional analysis and shape-based classification addresses the challenges posed by plastics with similar compositions but different shapes. By incorporating both compositional and shape information, the system improves classification accuracy, reducing misclassifications and enhancing overall sorting effectiveness.
However, the proposed method also has certain limitations. The reliance on depth cameras may introduce challenges in certain scenarios, such as low lighting conditions or occlusions. Ensuring proper lighting and minimizing occlusions can help mitigate these limitations and maintain accurate classification results.
Furthermore, the system’s performance may be affected by variations in plastic waste samples, such as different colors, textures, or surface conditions. Ensuring a diverse and representative training dataset can help improve the system’s ability to handle such variations and enhance classification accuracy.

5.3. Further Improvements and Research Directions

To further enhance the plastic waste classification system, several improvements and research directions can be explored. Firstly, expanding the range of detectable plastics beyond PET and PET-G can increase the system’s applicability in diverse waste management scenarios. Training the algorithm on additional plastic types and incorporating them into the system can improve its versatility and effectiveness.
Moreover, incorporating advanced machine learning techniques, such as deep neural networks or reinforcement learning, may further enhance the system’s classification accuracy and robustness. These techniques can capture more complex features and patterns, enabling the system to handle challenging scenarios and improve overall performance.
Additionally, integrating sensor fusion techniques, such as combining depth cameras with other types of sensors (e.g., spectroscopic sensors), can provide complementary information about plastic waste materials. This fusion of data from multiple sensors can improve classification accuracy by capturing additional material properties and characteristics.
The proposed method involves sending signals to the pneumatic nozzle controller for the area in which inference is performed. While the communication has been implemented, the classification accuracy for PET and PET-G still needs to be tested with the pneumatic sensor connected. An important consideration here is synchronization between the object detection inference results and the pneumatic valve, which depends on the conveyor speed: the delay before opening the pneumatic nozzle varies with that speed. For example, the on-site system can be calibrated to fire the nozzle 100 ms after detection at a conveyor speed of 1 m/s, but when the speed changes to 2 m/s or 3 m/s, the delay must be recalibrated on-site and the results organized for easy field application. Because of these differences, PET and PET-G classification accuracy may vary, and future work should derive a delay-time function of conveyor speed to obtain reliable results.
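A first-order starting point for that delay-time function is sketched below: the travel time from the camera's field of view to the nozzle bar is simply distance divided by belt speed, less a fixed processing latency. Both constants are illustrative assumptions, not measured values from the installation:

```python
# First-order sketch of the speed-dependent firing delay. Both constants are
# illustrative assumptions; in practice they would be calibrated on-site.
CAMERA_TO_NOZZLE_M = 1.0   # assumed distance along the belt
SYSTEM_LATENCY_S = 0.040   # assumed inference + communication + valve latency

def nozzle_delay_s(belt_speed_m_s):
    """Delay between detection and valve opening for a given belt speed."""
    return CAMERA_TO_NOZZLE_M / belt_speed_m_s - SYSTEM_LATENCY_S

for v in (1.0, 2.0, 3.0):
    print(f"{v:.0f} m/s -> open valve after {nozzle_delay_s(v) * 1000:.0f} ms")
```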
The current system is in the proof-of-concept stage and is being validated at a recycling site; achieving significant accuracy improvements in plastic waste sorting is challenging within a short period and requires extended validation. This paper presents an innovative method that combines image sensors and deep learning (the YOLO model) to classify plastics by shape and form, aiming to improve sorting accuracy, reduce misclassification, and enhance recycling efficiency, thereby addressing environmental concerns related to plastic waste mismanagement. Furthermore, continuous evaluation and optimization of the system's performance through field trials and feedback from waste management facilities are essential. Collaborating with industry partners and incorporating their expertise can help identify real-world challenges and refine the system accordingly.
The development of machine vision technology promises to significantly accelerate the automation of plastic classification, which still heavily relies on complex systems or manual labor in many aspects. In addition to PET and PET-G, PET and PVC, despite their similar chemical compositions, must also undergo classification. Currently, differences in chlorine content are determined through methods involving equipment such as X-rays or manual sorting. However, it is anticipated that machine vision, with its capability to distinguish the distinct shapes of PET and PVC, will enable a more straightforward classification process. Furthermore, it can be readily applied not only in the field of waste plastic classification but also in quality control technologies where precision machining is required [28], as well as in the area of E-waste (electronic waste) [29].
In conclusion, the proposed plastic waste classification system demonstrates high accuracy and real-time performance, offering advantages over traditional manual sorting approaches. While certain limitations exist, ongoing research and improvements can overcome these limitations and enhance the system’s capabilities. The integration of advanced machine learning techniques, sensor fusion, and collaboration with industry stakeholders will contribute to the continuous development and effectiveness of the system in promoting sustainable waste management practices.

6. Conclusions

In conclusion, the proposed plastic waste classification system combines machine/computer vision technology, deep learning algorithms, and pneumatic control to achieve accurate and real-time waste sorting. By integrating depth cameras and the YOLO-based object detection algorithm, the system can identify and classify different types of plastics with high precision on a fast-moving conveyor belt. The inclusion of a shape-based classification algorithm further enhances accuracy by considering visual characteristics and textures. This system offers significant advantages over manual sorting, ensuring consistent and reliable results while eliminating human error. However, improvements are needed to address challenges such as low lighting and variations in plastic samples, requiring appropriate lighting setups and diverse training datasets.
The potential applications of this system extend beyond plastic waste classification. The image-based classification approach can be applied to other industries and waste management fields where object recognition and sorting are necessary. Further research should focus on refining the system through the integration of advanced machine learning techniques and sensor fusion. Collaboration with industry stakeholders and continuous field trials will provide valuable feedback to enhance system performance and tackle real-world challenges. By revolutionizing waste management practices and promoting sustainability, this research contributes to efficient recycling, resource conservation, and environmental preservation. Expanding the system’s capabilities to handle diverse waste materials and fostering collaboration across sectors and international waste management fields will unlock new opportunities and pave the way toward a cleaner and more sustainable future.

Author Contributions

Conceptualization, Y.Y. and J.C.; formal analysis, Y.Y. and J.C.; investigation, J.C.; data curation, Y.Y. and J.C.; writing—original draft preparation, Y.Y. and J.C.; writing—review and editing, B.L.; visualization, Y.Y. and J.C.; supervision, Y.Y. and J.C.; project administration, J.C.; funding acquisition, Y.Y. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Korea Institute of Industrial Technology as "Development of core technology for smart sensing and digital medical process to support medical surgical field diagnosis" (KITECH EH-23-0014) and "Development of AIoT-based intelligent diagnosis platform" (KITECH JH-23-0009).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This study has been conducted with the support of the Korea Institute of Industrial Technology as "Development of core technology for smart sensing and digital medical process to support medical surgical field diagnosis" (KITECH EH-23-0014) and "Development of AIoT-based intelligent diagnosis platform (2/2)" (KITECH JH-23-0009).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Akinwumi, I.I.; Domo-Spiff, A.H.; Salami, A. Marine plastic pollution and affordable housing challenge: Shredded waste plastic stabilized soil for producing compressed earth bricks. Case Stud. Constr. Mater. 2019, 11, e00241. [Google Scholar] [CrossRef]
  2. Reddy, A.V.K.; Shankaraiah, G.; Kumar, P.S. Toxicity Effects of Micro- and Nanoplastics in Terrestrial Environment. Micro Nanoplastics Soil 2023, 2, 191–220. [Google Scholar]
  3. Beigbeder, J.; Perrin, D.; Mascaro, J.-F.; Lopez-Cuesta, J.-M.T. Study of the Physico-Chemical Properties of Recycled Polymers from Waste Electrical and Electronic Equipment (WEEE) Sorted by High Resolution near Infrared Devices. Resour. Conserv. Recycl. 2013, 78, 105–114. [Google Scholar] [CrossRef]
  4. Zheng, Y.; Bai, J.; Xu, J.; Li, X.; Zhang, Y. A Discrimination Model in Waste Plastics Sorting Using NIR Hyperspectral Imaging System. Waste Manag. 2018, 72, 87–98. [Google Scholar] [CrossRef] [PubMed]
  5. Wu, X.; Li, J.; Yao, L.; Xu, Z. Auto-Sorting Commonly Recovered Plastics from Waste Household Appliances and Electronics Using near-Infrared Spectroscopy. J. Clean. Prod. 2020, 246, 118732. [Google Scholar] [CrossRef]
  6. Rybarczyk, D.; Jędryczka, C.; Regulski, R.; Sędziak, D.; Netter, K.; Czarnecka-Komorowska, D.; Barczewski, M.; Barański, M. Assessment of the Electrostatic Separation Effectiveness of Plastic Waste Using a Vision System. Sensors 2020, 20, 7201. [Google Scholar] [CrossRef] [PubMed]
  7. Gent, M.R.; Menendez, M.; Toraño, J.; Diego, I. Recycling of Plastic Waste by Density Separation: Prospects for Optimization. Waste Manag. Res. 2009, 27, 175–187. [Google Scholar] [CrossRef] [PubMed]
  8. Marchesi, C.; Rani, M.; Federici, S.; Lancini, M.; Depero, L.E. Evaluating Chemometric Strategies and Machine Learning Approaches for a Miniaturized Near-Infrared Spectrometer in Plastic Waste Classification. Acta IMEKO 2023, 12, 1–7. [Google Scholar] [CrossRef]
  9. Carrera, B.; Piñol, V.L.; Mata, J.B.; Kim, K. A Machine Learning Based Classification Models for Plastic Recycling Using Different Wavelength Range Spectrums. J. Clean. Prod. 2022, 374, 133883. [Google Scholar] [CrossRef]
  10. Cvetnić, T.S.; Krog, K.; Benković, M.; Jurina, T.; Valinger, D.; Redovniković, I.R.; Kljusurić, J.G.; Tuše, A.J.K. Application of Near-Infrared Spectroscopy for Monitoring and/or Control of Composting Processes. Appl. Sci. 2023, 13, 6419. [Google Scholar] [CrossRef]
  11. Ozaki, Y.; Morisawa, Y. Principles and Characteristics of NIR Spectroscopy. In Near-Infrared Spectroscopy: Theory, Spectral Analysis, Instrumentation, and Applications; Ozaki, Y., Huck, C., Tsuchikawa, S., Engelsen, S.B., Eds.; Springer: Singapore, 2021; pp. 11–35. [Google Scholar]
  12. Kim, I.G.; Hong, S.Y.; Park, B.O.; Choi, H.J.; Lee, J.H. Polyphenylene Ether/Glycol Modified Polyethylene Terephthalate Blends and Their Physical Characteristics. J. Macromol. Sci. Part B 2012, 51, 798–806. [Google Scholar] [CrossRef]
  13. Staub, C. PET Resin Code Changes in California. Resource Recycling News. Available online: https://resource-recycling.com/recycling/2017/10/24/pet-resin-code-changes-california/ (accessed on 7 July 2023).
  14. Polyethylene Terephthalate Glycol (PETG) Market Outlook (2023 to 2033); Future Market Insights: 2023. Available online: https://www.futuremarketinsights.com/reports/polyethylene-terephthalate-glycol-market (accessed on 1 July 2023).
  15. Sharma, H.K.; Majumder, S.; Biswas, A.; Prentkovskis, O.; Kar, S.; Skačkauskas, P. A Study on Decision-Making of the Indian Railways Reservation System during COVID-19. J. Adv. Transp. 2022, 10, 7685375. [Google Scholar] [CrossRef]
  16. Raptopoulos, F.; Koskinopoulou, M.; Maniadakis, M. Robotic pick-and-toss facilitates urban waste sorting. In Proceedings of the IEEE 16th International Conference on Automation Science and Engineering (CASE), Hong Kong, 21–22 August 2020; pp. 1149–1154. [Google Scholar]
  17. Zhihong, C.; Hebin, Z.; Yanbo, W.; Binyan, L.; Yu, L. A vision based robotic grasping system using deep learning for garbage sorting. In Proceedings of the Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; pp. 11223–11226. [Google Scholar]
  18. Bircanoğlu, C.; Atay, M.; Beşer, F.; Genç, Ö.; Kizrak, M.A. Recyclenet: Intelligent waste sorting using deep neural networks. In Proceedings of the Innovations in Intelligent Systems and Applications (INISTA), Thessaloniki, Greece, 3–5 July 2018; pp. 1–7. [Google Scholar]
  19. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. arXiv 2016, arXiv:1506.02640. [Google Scholar] [CrossRef]
  20. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. arXiv 2016, arXiv:1612.08242. [Google Scholar] [CrossRef]
  21. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
  22. Bochkovskiy, A.; Wang, C.; Liao, H.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar] [CrossRef]
  23. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. arXiv 2015, arXiv:1512.02325. [Google Scholar] [CrossRef]
  24. Fu, C.; Liu, W.; Ranga, A.; Tyagi, A.; Berg, A.C. DSSD: Deconvolutional Single Shot Detector. arXiv 2017, arXiv:1701.06659. [Google Scholar] [CrossRef]
  25. Xu, H.; Jiang, C.; Liang, X.; Lin, L.; Li, Z. Reasoning-RCNN: Unifying Adaptive Global Reasoning Into Large-Scale Object Detection. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar] [CrossRef]
  26. He, Z.; Zhang, L. Multi-Adversarial Faster-RCNN for Unrestricted Object Detection. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar] [CrossRef]
  27. He, Z.; Zhang, L. Domain Adaptive Object Detection via Asymmetric Tri-Way Faster-RCNN. arXiv 2020, arXiv:2007.01571. [Google Scholar] [CrossRef]
  28. Xu, S.; Duan, Y.; Yu, Y.; Tian, Z.; Chen, Q. Machine vision-based high-precision and robust focus detection for femtosecond laser machining. Optics Express 2021, 29, 30952–30960. [Google Scholar] [CrossRef] [PubMed]
  29. Johnson, M.; Khatoon, A.; Fitzpatrick, C. Application of AI and Machine Vision to improve battery detection and recovery in E-Waste Management. In Proceedings of the International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), Malé, Maldives, 16–18 November 2022; pp. 1–6. [Google Scholar] [CrossRef]
Figure 1. PET and PET-G classification system: (a) overall classification process; (b) target plastic components to classify.
Figure 2. Proposed integrated system combining the machine/computer vision module, the deep learning-based image analysis module, and the air nozzle module.
Figure 3. Three-camera installation layout for the classification of PET and PET-G.
Figure 4. PET images according to exposure: (a) 10 ms exposure time and (b) 5 ms exposure time.
Figure 5. Description of the proposed waste classification system: (a) installation of lights and cameras for initial testing on the conveyor; (b) image of PET and PET-G under the initial conditions on the conveyor.
Figure 6. The augmented data of PET and PET-G.
Figure 7. PET and PET-G datasets with annotations.
Figure 8. Schematic diagram of the proposed methodology.
Figure 9. Interface with the PLC from inference results: (a) the image area is divided into 64 regions and converted into digital signals; (b) communication with the PLC is verified via Modbus TCP.
Figure 10. Installation site to verify the proposed garbage-sorting system: (a) input side of the waste; (b) conveyor with the lights and machine vision.
Figure 11. Proposed system field test results: (a) classification after dividing the imaged classification area into 64 regions according to the pneumatic line; (b) pneumatic nozzle (red box) control as a result of inference.
Figure 12. Proposed system field test results: (a) inference results (detection label and its detection probability); (b) training results with mAP.
Figure 13. Proposed system metrics: (a) Precision; (b) Recall; (c) Precision–Recall; (d) F1-score.
Figure 14. Proposed system confusion matrix.
