Article

An Inspection and Classification System for Automotive Component Remanufacturing Industry Based on Ensemble Learning

1 Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Mikeletegi 57, 20009 Donostia-San Sebastián, Spain
2 Computational Intelligence Group, Computer Science Faculty, University of the Basque Country (UPV/EHU), 20018 Donostia-San Sebastián, Spain
* Author to whom correspondence should be addressed.
Information 2021, 12(12), 489; https://doi.org/10.3390/info12120489
Submission received: 12 October 2021 / Revised: 16 November 2021 / Accepted: 19 November 2021 / Published: 23 November 2021
(This article belongs to the Special Issue Knowledge Engineering in Industry 4.0)

Abstract

This paper presents an automated inspection and classification system for the automotive component remanufacturing industry, based on ensemble learning. The system consists of several stages that classify each component as good, rectifiable or rejectable according to the manufacturer's criteria. A study of the performance of two deep learning-based models, used individually and as an ensemble, is carried out, with the ensemble improving accuracy by 7%. The results on the test set demonstrate the successful performance of the system in terms of component classification.

1. Introduction

In the face of the environmental crisis that the planet is experiencing, it is in everyone's hands to take urgent action to help mitigate climate change and to promote both sustainable development and environmental protection. From an industrial perspective, one of the lines of action that can be taken to achieve sustainable manufacturing is remanufacturing [1,2,3].

1.1. Remanufacturing Process in the Manufacturing Industry

According to [4], remanufacturing can be defined as a process of returning used products to a functional "as new" state by rebuilding and replacing their components. This process is an effective way to reduce emissions of carbon dioxide (CO2) and other greenhouse gases [5]. Remanufacturing also reduces skilled labour, energy consumption, material waste and overexploitation of natural resources, among other benefits [6,7,8,9]. In practice, companies develop remanufacturing businesses for social, economic and environmental benefits [10,11].
Remanufacturing is the reuse of products that have reached the end of their usable life, by carrying out a series of processes that return the product to its original state with an equivalent or superior quality. Therefore, the warranty of the remanufactured product is identical to that of a new product. The process is also environmentally friendly because it reduces energy consumption by eliminating the need to produce new components. In addition, it can significantly reduce lead times, thereby increasing customer satisfaction.
A typical remanufacturing process can be seen in Figure 1. This cyclical process starts with the availability of a used part (core); initially, this part was manufactured in a linear manufacturing process that complies with specific technical specifications. In the case of products with several components, the product is completely disassembled and the condition of each individual part is assessed through an inspection system. Once the quality status of the component is defined, it is classified according to whether it is usable, needs an additional repair process or is unusable. The parts classified as usable or repairable undergo an intensive cleaning process. The parts are then repaired and upgraded as necessary through a series of advanced manufacturing processes. The cleaned, repaired and processed parts are then reassembled into the final product. Finally, the reconditioned product is tested and evaluated to ensure that its condition meets the technical specifications of new products.
There are various sectors that can benefit from the remanufacturing process [12], such as aerospace, automotive [13] or electronics. Although remanufacturing is experiencing growth, this growth is constrained by factors such as the complex nature of the remanufacturing process [14]. Other constraints include the difficulty in obtaining an adequate supply of used products, as well as the development of efficient tools and techniques to carry out the different steps of the remanufacturing process mentioned above. The remanufacturing industry therefore faces a major challenge in achieving the more sustainable production needed to tackle the environmental crisis.

1.2. Machine Vision Applications for Quality Control

Among the different processes that compose the remanufacturing cycle, this paper is focused on the inspection phase. Inspection is absolutely necessary to determine the degree of deterioration and the quality of the product in order to facilitate the classification between good parts, rectifiable parts and rejectable parts that cannot be remanufactured. In this work, we address a real industrial use case of an inspection system focused on the remanufacturing of an automotive mechanical component.
Traditionally, inspection is performed manually by an operator. To overcome the limitations of human inspection, such as time, high labour cost or individual subjectivity, automated inspection techniques have started to be implemented to assist or replace human decisions. In recent years, methods based on deep learning and computer vision have achieved excellent performance on automated visual inspection problems [15,16,17,18]. Through these neural models, data acquired in production environments can be analysed and learned, with the aim of enhancing the inspection process with human-like skills. As a result, visual inspection has changed from being carried out manually by an operator to being fully automated.
With increasingly demanding quality standards, quality control is a major challenge for the industrial manufacturing sector. The need for strict quality inspection in remanufacturing is critical, as the final quality of the component depends on this process. In addition, achieving good inspection performance within the imposed processing time is a difficult task that requires new neural network architectures [19]. Studies currently being carried out on multiple types of components, such as metal sheets [20,21], plastic pipes [22] or metallic brackets [23], among others, show the suitability of deep learning techniques for the inspection stage. The main problem with deep learning methods is that predictions with a low confidence value generally still require manual review. In order to achieve more stable models that are able to specialize in complex image features, model ensembles are used [24,25]. This strategy allows each individual model to learn certain details that the rest of the models do not have to learn, thus parallelizing the detection tasks and obtaining multiple detection "opinions".

1.3. Main Contributions

We propose to use an ensemble of deep learning models for the inspection and classification of constant velocity joint cages. These approaches are combined in a system that detects whether a cage has wear (a defect) or not, and then classifies it as rejectable or rectifiable according to the manufacturer's criterion on defect size. We demonstrate the benefit of this combination compared with the use of each model in isolation.
The structure of this paper is as follows: In Section 2, the characteristics of the inspected components are described, as well as our proposed inspection and evaluation pipeline. In Section 3, the results of the experimentation are presented, followed by a discussion. Finally, in Section 4, some conclusions and possibilities for future work are presented.

2. Materials and Methods

This section describes in more detail the characteristics of the components analysed in this case study, together with the definition of their defects. In addition, the proposed inspection and evaluation pipeline is described. Finally, the metrics used in the system evaluation stage are detailed.

2.1. Characteristics of Inspected Components

In this paper, a component used in the automotive sector is used as a case study. Specifically, the study focuses on Constant Velocity (CV) joints. The CV joint is a mechanical articulation in which the rotational speed of the output shaft is the same as that of the input shaft, regardless of the transmission angle at which the joint operates. Its design allows the rotational motion to be transmitted through cross grooves located between an outer bell and a grooved inner race. The most commonly used design today, to which our components belong, is the Rzeppa type [26]. In this design, the balls are held in position by small windows in a mounting cage between the outer bell and the inner race.
The design of the joint is such that the position of the balls always bisects the operating angle of the joint; it works like a ball gear. These balls cause wear in the area of contact with the cage, creating small marks on the part. In this paper we refer to this defect as wear. Figure 2 shows an image of the bearing contact zone, which is the region to be inspected.
The size of the wear is the discriminating factor in assessing whether the part can be reused, must be rectified or is not repairable. In this case, the criterion set by the manufacturer is as follows (a minimal sketch of this threshold check is given after the list):
  • If the cage has a wear diameter smaller than 0.25 mm, it is rectifiable;
  • If the cage has a wear diameter equal to or greater than 0.25 mm, it is rejectable.
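For illustration only, the threshold rule above can be expressed as the following minimal Python sketch; the function name, the None value for "no wear detected" and the "good" outcome are assumptions made here for clarity, not part of the manufacturer's specification.

# Minimal sketch of the manufacturer's size criterion. The "good" outcome for
# a contact point with no detected wear is an illustrative assumption.
REJECT_THRESHOLD_MM = 0.25

def classify_contact_point(wear_diameter_mm):
    """Classify one bearing contact point from its measured wear diameter (mm)."""
    if wear_diameter_mm is None:        # no wear detected in the image
        return "good"
    if wear_diameter_mm < REJECT_THRESHOLD_MM:
        return "rectifiable"
    return "rejectable"                 # wear diameter >= 0.25 mm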
There are different models of cages, which are characterised, among other things, by the number and shape of their bearing contact points and windows. Figure 3 shows an example of the variability of the bearing contact zone in the acquired images. The wear is highlighted by the red areas with a dashed border.

2.2. Proposed Inspection and Evaluation Pipeline

The proposed pipeline is shown in Figure 4 and explained in detail in the following sections.

2.2.1. Step 1: Image Acquisition

The first step in the proposed pipeline is to acquire the component images. The proposed acquisition system aims to highlight the wear in the image as much as possible. As shown in Figure 5, it is composed of the following elements: a 5 MP monochrome matrix camera, a lighting bar oblique to the inspection area to maximise contrast, and a centring device to place the cage inside the camera field of view. With this configuration, the obtained images show high contrast in the wear regions, regardless of the geometrical characteristics and the polishing level of the bearing contact zone. An example of images acquired with this setup is shown in Figure 3.

2.2.2. Step 2: Surface Inspection

Our proposal is to use artificial intelligence-based mechanisms for surface inspection. Specifically, the neural networks used in this work are DeepLabV3+ [27] and YOLOv5 [28]. DeepLabV3+ is a semantic segmentation network with a decoder that improves the segmentation results with respect to its predecessor, DeepLabV3. Downsampling reduces the resolution of the feature maps; to counteract this, the network uses atrous convolutions, which allow the effective field of view of the convolution to be controlled. The output of this network is a mask, from which an estimation of the wear dimension can be obtained. The architecture of DeepLabV3+ is shown in Figure 6.
The second network used is YOLOv5. YOLOv5 is an object detection network whose strategy is to divide the image into a grid and detect objects in each grid cell. In terms of architecture, this network is divided into three main blocks: the backbone, the neck and the YOLO head, as shown in Figure 7. The backbone of YOLOv5 is CSPDarknet [29], which employs a CSPNet strategy to partition the feature map of the base layer into two parts and then merge them through a cross-stage hierarchy. This split-and-merge strategy allows more gradient flow through the network. The neck, PANet [30], allows all the features to be merged. Finally, the head is the YOLO layer, which produces the output results. The major improvements of this YOLO version are mosaic data augmentation and auto-learning bounding box anchors.
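The paper does not specify the software stack used to run the two networks. As a hedged illustration, the following sketch loads a custom-trained YOLOv5 model through the public ultralytics/yolov5 torch.hub interface and a DeepLabV3+ model through the segmentation_models_pytorch package; the checkpoint paths, encoder choice and confidence threshold are assumptions.

# Sketch of the surface inspection step under assumed implementations
# (ultralytics/yolov5 via torch.hub and segmentation_models_pytorch for DeepLabV3+).
# Checkpoint paths and thresholds are hypothetical.
import torch
import segmentation_models_pytorch as smp

yolo = torch.hub.load("ultralytics/yolov5", "custom", path="weights/wear_yolov5.pt")
seg = smp.DeepLabV3Plus(encoder_name="resnet50", encoder_weights=None, classes=1)
seg.load_state_dict(torch.load("weights/wear_deeplabv3plus.pth"))
seg.eval()

def inspect(image_np, image_tensor):
    """Run both networks on one bearing contact point image."""
    boxes = yolo(image_np).xyxy[0]                 # (N, 6): x1, y1, x2, y2, confidence, class
    with torch.no_grad():
        logits = seg(image_tensor.unsqueeze(0))    # (1, 1, H, W)
        mask = torch.sigmoid(logits)[0, 0] > 0.5   # binary wear mask
    return boxes, mask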

2.2.3. Step 3: Classification Layer

The last step of the pipeline is the decision layer. The purpose of this layer is to merge the outputs of the surface inspection stage. All the decision logic required by the manufacturer is integrated in this layer. The final result is a single classification to which all the models used contribute. For this purpose, decision trees are used; they are commonly applied to decision making based on a series of successive conditions [31].
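The exact branching of this logic is manufacturer-specific and is encoded in the decision tree of Figure 10 (Section 3.5). As a rough illustration only, a hand-coded merge of the two per-image outputs could look like the sketch below; the tie-breaking rule when only one model detects wear is an assumption.

# Illustrative sketch of a decision layer merging the two per-image outputs into
# one class. The branching is an assumption; the real logic follows the
# manufacturer's criteria encoded in the decision tree of Figure 10.
def merge_image_predictions(yolo_found, yolo_mm, seg_found, seg_mm):
    if not yolo_found and not seg_found:
        return "good"                              # neither model found wear
    if yolo_found and seg_found:
        wear_mm = max(yolo_mm, seg_mm)             # conservative size estimate
    else:
        # Only one model fired: keep its detection, since a missed wear zone is
        # costlier than an over-detection (assumption).
        wear_mm = yolo_mm if yolo_found else seg_mm
    return "rectifiable" if wear_mm < 0.25 else "rejectable"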

2.3. Evaluation Metrics

Different metrics are used to evaluate the models: Intersection over Union (IoU), mean Average Precision (mAP) and accuracy.
A common way to determine whether a prediction proposal is correct is to use the Intersection over Union (IoU) [32]. The IoU specifies the amount of overlap of the bounding boxes or pixels between the prediction and the ground truth, as a value from 0 to 1, with values closer to 1 being better. The IoU is calculated as shown in Equation (1), where A is the pixel set of the predicted object and B is the pixel set of the ground-truth object.
\mathrm{IoU}(A, B) = \frac{|A \cap B|}{|A \cup B|} \qquad (1)
Based on the obtained IoU, each prediction is classified as a True Positive, True Negative, False Positive or False Negative, where:
  • True Positives (TP): the defect is detected as defect;
  • True Negatives (TN): the normality is detected as normality;
  • False Positives (FP): the normality is mistakenly detected as defect;
  • False Negatives (FN): the defect is mistakenly detected as normality.
Traditionally, a prediction is classified as a TP if the IoU is greater than 0.5. With the obtained values of TP, TN, FP and FN, the precision and recall are calculated using Equations (2) and (3), respectively.
\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (2)
\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (3)
The Precision and Recall values are used to create the PR curve, where the Precision is plotted on the Y-axis and the recall on the X-axis. From this curve the AP can be calculated, whose value is the area under the PR curve. The mAP for object detection is the average of the AP value calculated for all classes.
For the classification evaluation, at the individual image level and at the cage level, accuracy is used. Accuracy in binary classification, as in this case, is calculated in terms of positives and negatives. Formally, accuracy is defined as shown in Equation (4):
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (4)
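As a worked illustration of Equations (1)-(4), the following sketch computes the pixel-wise IoU of two binary masks and derives precision, recall and accuracy from TP/TN/FP/FN counts; the example call uses the ensemble counts later reported in Table 2.

# Sketch of the evaluation metrics in Equations (1)-(4).
import numpy as np

def iou(pred, gt):
    """Equation (1): pixel-wise Intersection over Union of two boolean masks."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

def precision_recall_accuracy(tp, tn, fp, fn):
    """Equations (2)-(4) from the TP/TN/FP/FN counts obtained on the test set."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, accuracy

# Example with the ensemble counts of Table 2 (62, 50, 2, 6): accuracy = 112/120 = 93.33%.
print(precision_recall_accuracy(62, 50, 2, 6))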

3. Results and Discussion

The experimentation in this work is composed of different tests that validate which inspection method is the most suitable for the classification of CV joint cages. This experimentation covers different aspects, such as the performance of each model used individually and the benefits obtained by the network ensemble.

3.1. Dataset Generation

The dataset generation is carried out using the data acquisition system described in Section 2.2.1. A total of 55 CV joint cages are acquired, with 12 bearing contact point images taken for each component, so the complete dataset has 660 images. As shown in Table 1, the database is divided into three sets: a training set, a validation set and a test set.
In terms of the quality of the training set, a key factor that has a direct impact on the performance of the neural network is that the set is well balanced. Especially in manufacturing environments, it is very common to have a shortage of defective data. A commonly applied technique to overcome this shortage and make models more robust is data augmentation. This technique consists of increasing the volume of the training set through a series of geometric and photometric operations. In this case, the operations applied are rotation, incorporation of Gaussian noise, flipping, mirroring and brightness variation. The processing operations must resemble the conditions of the working environment in which the image is acquired. Thus, a 15-fold increase in the original data volume was achieved, from the 432 images selected for training to a total of 6480 samples.
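The paper lists the augmentation operations but not the tooling used to apply them. Purely as an illustration, the sketch below implements the same set of operations with the albumentations library and the reported 15 variants per image (432 to 6480); the specific parameter values are assumptions.

# Sketch of the augmentation step using the albumentations library (assumed tooling).
# Fifteen augmented variants per image reproduce the reported 432 -> 6480 expansion.
import albumentations as A

augment = A.Compose([
    A.Rotate(limit=15, p=0.5),                                # rotation
    A.GaussNoise(p=0.3),                                      # Gaussian noise
    A.HorizontalFlip(p=0.5),                                  # mirror effect
    A.VerticalFlip(p=0.5),                                    # flipping effect
    A.RandomBrightnessContrast(contrast_limit=0.0, p=0.5),    # brightness variation
])

def expand(images, factor=15):
    """Return `factor` augmented variants of every training image (masks/boxes omitted for brevity)."""
    return [augment(image=img)["image"] for img in images for _ in range(factor)]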

3.2. Performance Comparison between Traditional Methods and Deep Neural Networks

In this experiment, an evaluation with different traditional machine learning methods was carried out with the test set. Although in this paper a surface inspection based on deep learning is proposed, some classical machine learning techniques such as SVM (Support Vector Machine) [33], Gaussian Naive Bayes [34] and decision trees [35,36] are compared. The objective of this experiment is to conclude whether the use of deep learning is really essential for a surface inspection that is robust and adaptable to extreme production environments and also complies with the strict quality control standards set by the manufacturing industry.
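The feature representation fed to these classical methods is not detailed in the paper; assuming flattened, downscaled grayscale images as input vectors, the baselines could be set up with scikit-learn as in the following sketch.

# Sketch of the classical baselines with scikit-learn. Using flattened grayscale
# image vectors as features is an assumption; the paper does not specify this.
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

baselines = {
    "SVM": SVC(),
    "Gaussian Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(),
}

def evaluate_baselines(X_train, y_train, X_test, y_test):
    """Fit each baseline and report its accuracy on the test set."""
    scores = {}
    for name, model in baselines.items():
        model.fit(X_train, y_train)
        scores[name] = model.score(X_test, y_test)   # fraction of correctly classified samples
    return scores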
The results obtained with all the traditional methods are very similar, as the performance metrics in Table 2 show. A strong tendency to predict samples as faulty is observed, reflected in the significant number of false positives (FP). Similarly, the metric values of the classical methods show that their inspection of non-faulty samples is practically random.
This experimentation with different traditional machine learning methods shows that, in comparison, deep learning-based models such as Unet [37], DeepLabV3+, YOLOv3 [38] and YOLOv5 yield much better results. It can even be observed that the classical methods fail to learn the defects that define the surface inspection problem, given the large number of errors in the form of false positives (FP) they make during the evaluation. FPs lead to an over-detection of defects, which means that the model does not learn the characteristics of the defects properly. In industry, this implies a very large over-rejection, which raises doubts about the reliability of the model in an automatic inspection process, as it would mean an extra cost due to the waste of good parts. Therefore, we can state that more complex deep learning architectures, such as Unet, DeepLabV3+, YOLOv3 and YOLOv5, are needed to achieve high inspection accuracy rates.

3.3. Individual Evaluation of the Deep Neural Networks Models Performance

Based on the defect detection results obtained in the previous experiment using different deep neural network models, we selected the best segmentation and object detection models, DeepLabV3+ and YOLOv5, respectively. In this experimentation two trainings were performed, one with the semantic segmentation model DeepLabV3+ and the other with the defect detection model YOLOv5. For both neural models, the same dataset defined previously in Table 1 was used.
During the training of the DeepLabV3+ and YOLOv5 models, an evaluation was performed after each epoch with the validation set in order to obtain feedback on the training and to ensure convergence to satisfactory results.
The evaluation of each model was performed with the test set. As mentioned in Section 2.3, different evaluation metrics such as IoU, mAP and accuracy were used. The YOLOv5 network achieved an mAP of 90% and the DeepLabV3+ network an IoU of 85%. Both models yielded excellent results in terms of surface wear detection.
Some predictions of the two models are shown in Figure 8. This figure shows the semantic segmentation masks predicted by DeepLabV3+ in Figure 8b, as well as the bounding boxes predicted by YOLOv5 in Figure 8c. The masks and bounding boxes are shown in red over the defective regions, and it can be observed that they closely match the contour of the wear zone, achieving outstanding inspection accuracy.
During the experimentation it was observed that the models have different performance and capabilities in terms of surface inspection. The YOLOv5 network shows higher sensitivity for detecting wear regions, thus achieving better results in the evaluation step. However, it is prone to false positives (FP), which decreases detection reliability. In contrast, the DeepLabV3+ network gains in robustness, as it produces almost no false positives (FP), meaning that the defects it detects are actually defective areas. However, DeepLabV3+ has more false negatives (FN) than YOLOv5, resulting in slightly lower accuracy in the evaluation metrics, as shown in Table 3.

3.4. Analysis of the Model Ensemble Performance

In this experiment, the DeepLabV3+ and YOLOv5 models are combined into an ensemble to obtain a single class output. For this purpose, a decision layer is developed in which the outputs of both networks are combined, the wear of each bearing contact point is measured in millimetres and the classification of the CV joint cage is performed.
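The paper does not give the calibration of the acquisition setup, so the conversion from pixels to millimetres in the sketch below uses a hypothetical mm-per-pixel factor; estimating the diameter from the largest extent of the predicted mask is likewise an assumption.

# Sketch of the wear measurement step: the wear diameter in millimetres is
# estimated from the predicted mask using an assumed mm-per-pixel calibration
# of the acquisition setup described in Section 2.2.1.
import numpy as np

MM_PER_PIXEL = 0.01    # hypothetical calibration value, not given in the paper

def wear_diameter_mm(mask):
    """Approximate the wear diameter as the largest extent of the predicted mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                                   # no wear detected
    extent_px = max(ys.max() - ys.min(), xs.max() - xs.min()) + 1
    return extent_px * MM_PER_PIXEL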
Both models, DeepLabV3+ and YOLOv5, gave excellent results in the individual evaluation on the test set. However, it was observed that the two models show different performance when inspecting the addressed surface. While YOLOv5 tends to detect practically all wear zones, it produces many false positives (FP), although it deals better with intra-class variability. In contrast, DeepLabV3+ produces more false negatives (FN) than YOLOv5, but its detections are more confident and it is able to find very small wear zones.
Therefore, the YOLOv5 and DeepLabV3+ models are combined in order to reduce the mistakes produced in the inspection. Table 3 shows the classification result of the YOLOv5+DeepLabV3+ ensemble during the evaluation phase. The ensemble obtained an accuracy of 93.33%, a higher value than that of the models evaluated individually. Figure 9 shows some results obtained with the YOLOv5+DeepLabV3+ ensemble, where the wear zones of the bearing contact points are highlighted with a red bounding box. In Figure 9b, it can be observed that, when the models are used in isolation rather than as an ensemble, in some samples the surface wear is detected by only one model while the other misses it. In these cases, the YOLOv5+DeepLabV3+ ensemble is an effective method to avoid inspection errors, compensating for the shortcomings shown by the two models individually. This validates the potential of merging more than one neural network to tackle a complicated surface inspection problem with a dataset exhibiting large intra-class variability.

3.5. Component Final Classification Results

In this last experiment, the ability of the system to classify components as rejectable, rectifiable or valid is evaluated. In order to classify a component into these three categories, it is necessary to inspect the 12 bearing contact points of each component individually. This classification per contact point is performed using the ensembled model and considering several quality criteria defined by the manufacturer. To make this classification, a decision tree is proposed, as shown in Figure 10. Through this tree, the class predicted by each model is obtained together with its weight, which was defined based on the individual performance of the models. With these data, a weighted average is calculated to obtain the final class for each bearing contact point image.
The component-level classification was evaluated using a test set composed of 10 different cages. The criterion used to classify a component as defective is that it has more than four wears detected in total. This criterion is defined by the manufacturer due to the geometry of the cage: from a geometrical and functional point of view, it is not possible that, in the same plane, some bearing contact zones have wear and others do not. The proposed deep learning system was able to correctly classify all components, thus achieving an accuracy of 100%. This shows that the inspection based on the YOLOv5+DeepLabV3+ ensemble is effective for the classification of remanufactured cages and that the decision tree is a necessary part of the final decision making.
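To make the cage-level rule concrete, the sketch below aggregates the 12 per-point classes produced by the weighted ensemble vote of Figure 10; the "more than four detected wears" criterion comes from the manufacturer, while the precedence of "rejectable" over "rectifiable" and the "valid" label for a wear-free cage are illustrative assumptions.

# Sketch of the component-level decision over the 12 bearing contact points.
# The ">4 detected wears" rule is the manufacturer's criterion; the rest of the
# mapping below is an illustrative assumption.
def classify_cage(point_classes):
    """Aggregate the 12 per-point classes into a cage-level class."""
    assert len(point_classes) == 12                 # one class per bearing contact point
    worn = [c for c in point_classes if c != "good"]
    if len(worn) > 4 or "rejectable" in worn:
        return "rejectable"
    return "rectifiable" if worn else "valid"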

3.6. Results Summary

Based on the experimentation, it can be concluded that the combination of the models provides stability and reliability in the detection of defects in the components. The preliminary analysis of the traditional machine learning methods and of the different segmentation and object detection networks made it possible to choose the best combination of models. The results obtained with traditional methods are not accurate enough for the requirements of the manufacturing industry. These methods produce more false positives, increasing false rejections. An automatic system based on these techniques would be unreliable and would generate more financial and environmental costs than manual inspection.
The proposed system, based on the ensemble of YOLOv5 and DeepLabV3+, classifies the component from the individual results per bearing contact point region, following the customer's criteria. This system achieves an accuracy of 100% in the overall performance test, making it a promising tool to solve the problem presented by the customer.

4. Conclusions

This work proposes an automatic inspection and classification system for automotive components, using a model ensemble. An inspection pipeline is proposed that allows decisions to be made based on the different component acceptance or rejection criteria established by the manufacturer.
It is demonstrated that deep learning-based algorithms are able to learn complex geometries that traditional algorithms cannot cope with. The DeepLabV3+ and YOLOv5 models are both well suited to detecting wear; however, each model individually has its shortcomings. We validated that the two models combined in an ensemble are able to overcome these deficiencies in terms of FP and FN, thus obtaining a more robust detection in the inspection of each bearing contact point.
In order to perform a classification based on the models' predictions and the defined quality criteria, a final decision layer based on a decision tree is proposed. This decision layer allows the strengths of each individual model to be taken into account and unifies the classification into a single output.
A validation of the proposed system was carried out on a set of cages, where a 100% success rate in classification was obtained. As a future line of work, it is proposed to validate the system with a larger set of samples to consolidate the achieved results.

Author Contributions

Data curation, G.A.; Formal analysis, F.A.S.; Methodology, F.A.S.; Project administration, F.A.S.; Software, G.A.; Supervision, I.B.; Writing—original draft, F.A.S.; Writing—review & editing, I.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Ihobe 2020 grant of the Basque Government.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not available.

Acknowledgments

We would like to thank GKN Driveline Carcastillo for letting us publish the results obtained in the performed studies with its components.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Mete, S.; Çil, Z.A.; Özceylan, E.; Ağpak, K.; Battaïa, O. An optimisation support for the design of hybrid production lines including assembly and disassembly tasks. Int. J. Prod. Res. 2018, 56, 7375–7389. [Google Scholar] [CrossRef]
  2. Nasr, N.; Hilton, B.; German, R. Advances in Sustainable Manufacturing; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011; pp. 189–194. [Google Scholar] [CrossRef]
  3. Ijomah, W.L.; McMahon, C.A.; Hammond, G.P.; Newman, S.T. Development of design for remanufacturing guidelines to support sustainable manufacturing. Robot.-Comput.-Integr. Manuf. 2007, 23, 712–719. [Google Scholar] [CrossRef]
  4. Ijomah, W. A Model-Based Definition of the Generic Remanufacturing Business Process; University of Plymouth: Plymouth, UK, 2002. [Google Scholar]
  5. Zhu, X.; Ren, M.; Chu, W.; Chiong, R. Remanufacturing subsidy or carbon regulation? An alternative toward sustainable production. J. Clean. Prod. 2019, 239, 117988. [Google Scholar] [CrossRef]
  6. Steinhilper, R. Remanufacturing—The Ultimate Form of Recycling; Fraunhofer IRB Verlag: Stuttgart, Germany, 1998. [Google Scholar]
  7. Ijomah, W.L.; McMahon, C.; Childe, S. Remanufacturing—A key strategy for sustainable development. In Design and Manufacture for Sustainable Development 2004; Cambridge University Press: Cambridge, UK, 2004; pp. 51–63. [Google Scholar]
  8. Nasr, N.; Thurston, M. Remanufacturing: A key enabler to sustainable product systems. Rochester Inst. Technol. 2006, 23, 15–18. [Google Scholar]
  9. Sundin, E.; Lee, H.M. In what way is remanufacturing good for the environment? In Design for Innovative Value towards a Sustainable Society; Springer: Kyoto, Japan, 2012; pp. 552–557. [Google Scholar]
  10. Sundin, E.; Bras, B. Making functional sales environmentally and economically beneficial through product remanufacturing. J. Clean. Prod. 2005, 13, 913–925. [Google Scholar] [CrossRef] [Green Version]
  11. Geyer, R.; Van Wassenhove, L.N.; Atasu, A. The economics of remanufacturing under limited component durability and finite product life cycles. Manag. Sci. 2007, 53, 88–100. [Google Scholar] [CrossRef] [Green Version]
  12. Gallo, M.; Romano, E.; Santillo, L.C. A perspective on remanufacturing business: Issues and opportunities. Int. Trade Econ. Policy Perspect. 2012, 209. [Google Scholar] [CrossRef]
  13. Pawlik, E.; Ijomah, W.; Corney, J. Current state and future perspective research on lean remanufacturing–focusing on the automotive industry. In IFIP International Conference on Advances in Production Management Systems; Springer: Berlin/Heidelberg, Germany, 2012; pp. 429–436. [Google Scholar]
  14. Lee, C.M.; Woo, W.S.; Roh, Y.H. Remanufacturing: Trends and issues. Int. J. Precis. Eng.-Manuf.-Green Technol. 2017, 4, 113–125. [Google Scholar] [CrossRef]
  15. Villalba-Diez, J.; Schmidt, D.; Gevers, R.; Ordieres-Meré, J.; Buchwitz, M.; Wellbrock, W. Deep learning for industrial computer vision quality control in the printing industry 4.0. Sensors 2019, 19, 3987. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Chouchene, A.; Carvalho, A.; Lima, T.M.; Charrua-Santos, F.; Osório, G.J.; Barhoumi, W. Artificial intelligence for product quality inspection toward smart industries: Quality control of vehicle non-conformities. In Proceedings of the 2020 9th international conference on industrial technology and management (ICITM), Oxford, UK, 11–13 February 2020; pp. 127–131. [Google Scholar]
  17. Schwebig, A.I.M.; Tutsch, R. Compilation of training datasets for use of convolutional neural networks supporting automatic inspection processes in industry 4.0 based electronic manufacturing. J. Sens. Sens. Syst. 2020, 9, 167–178. [Google Scholar] [CrossRef]
  18. Zheng, P.; Wang, H.; Sang, Z.; Zhong, R.Y.; Liu, Y.; Liu, C.; Mubarok, K.; Yu, S.; Xu, X. Smart manufacturing systems for Industry 4.0: Conceptual framework, scenarios, and future perspectives. Front. Mech. Eng. 2018, 13, 137–150. [Google Scholar] [CrossRef]
  19. Picon Ruiz, A.; Alvarez Gila, A.; Irusta, U.; Echazarra Huguet, J. Why deep learning performs better than classical machine learning? Dyna Ing. Ind. 2020, 95, 119–122. [Google Scholar] [CrossRef] [Green Version]
  20. Nwankpa, C.; Eze, S.; Ijomah, W.; Gachagan, A.; Marshall, S. Achieving remanufacturing inspection using deep learning. J. Remanuf. 2021, 11, 89–105. [Google Scholar] [CrossRef]
  21. Nwankpa, C.; Eze, S.; Ijomah, W.; Gachagan, A.; Marshall, S. Deep learning based vision inspection system for remanufacturing application. In Advances in Manufacturing Technology XXXIII; IOS Press: Amsterdam, The Netherlands, 2019; pp. 535–546. [Google Scholar]
  22. Zheng, Y.; Mamledesai, H.; Imam, H.; Ahmad, R. A Novel Deep Learning-based Automatic Damage Detection and Localization Method for Remanufacturing/Repair. Computer-Aided Design and Applications; Taylor and Francis Ltd.: Abingdon, UK, 2021; Volume 18, pp. 1359–1372. [Google Scholar]
  23. Zheng, Y. Intelligent and Automatic Inspection, Reconstruction and Process Planning Methods for Remanufacturing and Repair; University of Alberta: Edmonton, AB, Canada, 2021. [Google Scholar]
  24. Li, F.; Wu, J.; Dong, F.; Lin, J.; Sun, G.; Chen, H.; Shen, J. Ensemble machine learning systems for the estimation of steel quality control. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 10–13 December 2018; pp. 2245–2252. [Google Scholar]
  25. Hann, E.; Gonzales, R.A.; Popescu, I.A.; Zhang, Q.; Ferreira, V.M.; Piechnik, S.K. Ensemble of Deep Convolutional Neural Networks with Monte Carlo Dropout Sampling for Automated Image Segmentation Quality Control and Robust Deep Learning Using Small Datasets. In Annual Conference on Medical Image Understanding and Analysis; Springer: Oxford, UK, 2021; pp. 280–293. [Google Scholar]
  26. Oh, S.J.; Woscek, J.T. Analysis of rzeppa and cardan joints in monorail drive train system. Int. J. Mech. Eng. Robot. Res. 2015, 4, 1. [Google Scholar]
  27. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
  28. Ultralytics. YOLOv5. 2021. Available online: https://github.com/ultralytics/yolov5 (accessed on 22 November 2021).
  29. Bochkovskiy, A.; Wang, C.; Liao, H.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  30. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar]
  31. Rodríguez, J.J.; Quintana, G.; Bustillo, A.; Ciurana, J. A decision-making tool based on decision trees for roughness prediction in face milling. Int. J. Comput. Integr. Manuf. 2017, 30, 943–957. [Google Scholar] [CrossRef]
  32. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 658–666. [Google Scholar]
  33. Suthaharan, S. Support vector machine. In Machine Learning Models and Algorithms for Big Data Classification; Springer: Boston, MA, USA, 2016; pp. 207–235. [Google Scholar]
  34. Zhang, H. The optimality of naive Bayes. AA 2004, 1, 3. [Google Scholar]
  35. Quinlan, J.R. Decision trees and decision-making. IEEE Trans. Syst. Man, Cybern. 1990, 20, 339–346. [Google Scholar] [CrossRef]
  36. Rokach, L.; Maimon, O. Decision trees. In Data Mining and Knowledge Discovery Handbook; Springer: Boston, MA, USA, 2005; pp. 165–192. [Google Scholar]
  37. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
  38. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
Figure 1. Example of remanufacturing process.
Figure 2. Remanufactured automotive component (CV joint cage).
Figure 3. Some examples of the dataset of remanufactured parts with different degrees of surface wear coloured in red with a dashed border.
Figure 4. Proposed inspection and evaluation pipeline steps.
Figure 5. Image acquisition setup.
Figure 6. The network architecture of DeepLabV3+.
Figure 7. The network architecture of YOLOv5.
Figure 8. Surface wear detection using the DeepLabV3+ semantic segmentation network and the YOLOv5 detection network individually; (a) CV joint cage bearing contact point images with the wear zone indicated by red dotted lines, (b) semantic segmentation prediction of the DeepLabV3+ model shown in red and (c) prediction of the YOLOv5 model depicted with a red bounding box.
Figure 9. Surface wear detection by ensemble DeepLabV3+ and YOLOv5 models; (a) CV joint cage bearing contact point images and the wear zone indicated by red dotted lines, and (b) predictions in red of semantic segmentation mask and detection bounding box.
Figure 10. Wear classification decision tree.
Table 1. Distribution of the database for training and testing of the DeepLabV3+ and YOLOv5 models.

                                                 Total Set   Training Set   Validation Set   Test Set
Number of remanufactured components                     55             36                9         10
Number of images (12 wear zones per component)         660            432              108        120
Table 2. Performance metrics of evaluated methods using the test set.

Method                  TP   TN   FP   FN
Decision Tree           46   20   32   22
Gaussian Naive Bayes    54   28   24   14
SVM                     59   24   28    9
DeepLabV3+              51   50    2   17
UNet                    50   48   13    9
YOLOv3                  53   36   10   21
YOLOv5                  60   44    8    8
YOLOv5+DeepLabV3+       62   50    2    6
Table 3. Results of the Accuracy metric from the evaluation phase of the different YOLOv5 and DeepLabV3+ neural models.

Model           DeepLabV3+   YOLOv5   YOLOv5 + DeepLabV3+
Accuracy (%)         84.17    86.67                 93.33