# A Comparison of Machine Learning Techniques for the Quality Classification of Molded Products


## Abstract


## 1. Introduction

- A real application of Industry 4.0 is demonstrated, proposing the use of ML to automate the quality control of molded plastic products;
- A comparison of six ML techniques for quality prediction is provided. The tested techniques are K-nearest neighbor (KNN), decision tree, random forest, gradient-boosted trees (GBT), support vector machine (SVM), and multi-layer perceptron (MLP). The source code of the comparison is publicly available in a GitHub repository (https://github.com/airtlab/machine-learning-for-quality-prediction-in-plastic-injection-molding, accessed on 15 May 2022);
- A new dataset is presented. It includes real data from the production of road lenses by injection molding and is publicly available in the source code repository. As such, the dataset can be used to benchmark other techniques.

## 2. Literature Review

## 3. Materials and Methods

#### 3.1. Plastic Injection Molding Data


- Waste, with ${U}_{0}<0.4$. All the samples that exhibit a general uniformity less than 0.4 in the photometric analysis should be discarded, as they are not compliant with the standard. The label for this class is 1;
- Acceptable, with $0.4\le {U}_{0}<0.45$. All the samples with a uniformity greater than or equal to 0.4 and less than 0.45 are considered acceptable by iGuzzini Illuminazione, as they comply with the standard. However, the target for the iGuzzini production is a higher quality. The label for this class is 2;
- Target, with $0.45\le {U}_{0}\le 0.5$. All the samples with a uniformity greater than or equal to 0.45 and less than or equal to 0.5 are considered optimal by the company. The label for this class is 3;
- Inefficient, with ${U}_{0}>0.5$. Although a general uniformity far greater than the standard threshold is not a defect in itself, producing lenses with a uniformity greater than 0.5 makes the molding machine use more resources; this quality level should therefore be avoided, given that it is not required. The label for this class is 4.
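The four-class labeling rule above can be summarized as a small function (a sketch; the function name is ours, not taken from the paper's repository):

```python
def quality_class(u0: float) -> int:
    """Map the general uniformity U0 of a lens to its quality label.

    1 = Waste, 2 = Acceptable, 3 = Target, 4 = Inefficient.
    """
    if u0 < 0.4:
        return 1   # Waste: below the standard threshold
    if u0 < 0.45:
        return 2   # Acceptable: compliant, but below the company target
    if u0 <= 0.5:
        return 3   # Target: the optimal quality band
    return 4       # Inefficient: wastes machine resources
```

Note that the boundary values 0.4, 0.45, and 0.5 fall in the Acceptable, Target, and Target classes, respectively, matching the inequalities in the list above.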

#### 3.2. Tested ML Techniques

#### 3.2.1. KNN

- The K value, to establish how many neighbors should be considered to predict the class of an unknown sample. We tested all possible K values between 1 and 100;
- The distance measure to evaluate the similarity between feature vectors. Given that the features are numerical, we tested the Euclidean distance, cosine similarity, and Manhattan distance.
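To illustrate how the two hyperparameters interact, the following is a minimal pure-Python sketch of a KNN prediction with the three tested distance measures (the published experiments used an existing ML toolkit; the function names here are ours):

```python
import math
from collections import Counter

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine_distance(a, b):
    # 1 - cosine similarity, so that smaller values mean "more similar",
    # consistent with the two distances above
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norms

def knn_predict(train, labels, sample, k=4, distance=manhattan):
    # Rank the training samples by distance to the query,
    # then take a majority vote among the k nearest neighbors
    ranked = sorted(range(len(train)), key=lambda i: distance(train[i], sample))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]
```

The defaults reflect the best configuration reported in Table 3 (K = 4, Manhattan distance).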

#### 3.2.2. Decision Tree

- The maximum depth of the tree, testing 11 values obtained with 10 linear steps inside the range [−1, 300], i.e., the set {−1, 29, 59, 89, 119, 150, 180, 210, 240, 270, 300}, where −1 represents “no maximum depth”;
- The splitting criterion to select the features that best separate the input data, testing information gain, gain ratio, Gini index, and accuracy;
- The use of pre-pruning, based on the following three criteria, in addition to the maximum depth:
  - gain when splitting a node lower than 0.01 (i.e., minimal gain = 0.01);
  - number of samples in a leaf lower than 2 (i.e., minimal leaf size = 2);
  - number of samples per split lower than 4 (i.e., minimal size for split = 4).

#### 3.2.3. Random Forest

- No pruning strategy was applied;
- The number of features to evaluate for the split in each tree was equal to $\mathrm{int}(\log(m) + 1)$, where m is the number of features;
- The features to be used for the split in a tree were randomly selected.

- The number of decision trees in the random forest, testing 11 values obtained with 10 linear steps inside the range [1, 300], i.e., the set {1, 31, 61, 91, 121, 151, 180, 210, 240, 270, 300};
- The maximum depth of the trees, testing 11 values obtained with 10 linear steps inside the range [−1, 200], i.e., the set {−1, 19, 39, 59, 79, 100, 120, 140, 160, 180, 200}, where −1 represents “no maximum depth”;
- The splitting criterion to select the features that best separate the input data, testing information gain, gain ratio, Gini index, and accuracy.
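For reference, the per-split feature count can be computed as follows (a sketch; the text does not state the logarithm base, so the natural logarithm is assumed here):

```python
import math

def features_per_split(m: int) -> int:
    # int(log(m) + 1) features are evaluated at each split of each tree,
    # where m is the total number of features.
    return int(math.log(m) + 1)
```

With the 13 process parameters of the dataset, this rule evaluates 3 features per split under the natural-log assumption.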

#### 3.2.4. GBT

- The number of decision trees in the GBT ensemble, testing 11 values obtained with 10 linear steps inside the range [1, 300], i.e., the set {1, 31, 61, 91, 121, 151, 180, 210, 240, 270, 300};
- The maximum depth of the trees, testing 11 values obtained with 10 linear steps inside the range [1, 200], i.e., the set {1, 21, 41, 61, 81, 101, 120, 140, 160, 180, 200}.
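The resulting search space for the GBT is the Cartesian product of the two value sets, i.e., 121 configurations in total (a sketch; the helper name is ours):

```python
def gbt_grid():
    # Every combination of ensemble size and maximum tree depth tested
    # for the GBT: 11 x 11 = 121 configurations.
    trees = [1, 31, 61, 91, 121, 151, 180, 210, 240, 270, 300]
    depths = [1, 21, 41, 61, 81, 101, 120, 140, 160, 180, 200]
    return [(t, d) for t in trees for d in depths]
```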

#### 3.2.5. SVM

- The C hyperparameter, regulating the margin from the decision boundary, testing 4 values obtained with 3 logarithmic steps inside the range [0.1, 100], i.e., the set {0.1, 1, 10, 100};
- The kernel function, testing a linear kernel, a sigmoid kernel, a polynomial kernel, and a radial basis function (RBF) kernel. With the RBF kernel, we compared different values of the $Gamma$ parameter, testing 6 values obtained with 5 logarithmic steps inside the range [0.0001, 10], i.e., the set {0.0001, 0.001, 0.01, 0.1, 1, 10}.
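The logarithmic grids for C and Gamma can be generated as follows (a sketch; the helper name is ours):

```python
def log_steps(lo: float, hi: float, steps: int):
    # "steps with a logarithmic increase" over [lo, hi]: steps + 1
    # candidate values, equally spaced on a logarithmic scale (a factor
    # of 10 apart for the ranges used here).
    ratio = (hi / lo) ** (1.0 / steps)
    return [lo * ratio ** i for i in range(steps + 1)]
```

For example, `log_steps(0.1, 100, 3)` yields the four tested C values, and `log_steps(0.0001, 10, 5)` the six tested Gamma values (up to floating-point rounding).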

#### 3.2.6. MLP

- The learning rate, testing 4 values obtained with 3 logarithmic steps inside the range $[0.0001, 0.1]$, i.e., the set $\{0.0001, 0.001, 0.01, 0.1\}$;
- The momentum for the gradient descent, testing 4 values obtained with 3 linear steps inside the range $[0.6, 0.9]$, i.e., the set $\{0.6, 0.7, 0.8, 0.9\}$;
- The number of epochs for the training process, testing 3 values obtained with 2 linear steps inside the range $[100, 500]$, i.e., the set $\{100, 300, 500\}$.
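The learning rate and momentum interact in each weight update of the gradient descent; a minimal sketch of a single update step (illustrative only, not the paper's training code; the defaults reflect the best configuration reported in Table 8):

```python
def momentum_step(weight, gradient, velocity,
                  learning_rate=0.1, momentum=0.6):
    # One gradient-descent update with momentum: the new velocity keeps
    # a fraction (the momentum) of the previous update and adds the
    # scaled negative gradient; the weight then moves by the velocity.
    velocity = momentum * velocity - learning_rate * gradient
    return weight + velocity, velocity
```

A higher momentum smooths the descent direction across successive updates, while the learning rate scales each individual step.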

## 4. Experimental Evaluation

#### 4.1. Experimental Setup and Evaluation Metrics

- The precision for each class, i.e., the ratio between the number of samples correctly classified as belonging to a class and the total number of samples classified (correctly or not) as that class;
- The recall for each class, i.e., the ratio between the number of samples correctly classified as belonging to a class and the total number of samples available for that class in the test set;
- The macro-averaged ${F}_{1}$ score for each classifier, i.e., the average of the ${F}_{1}$ scores computed for each class.
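These metrics can be computed directly from the true and predicted labels; a minimal sketch (the function name is ours):

```python
def per_class_metrics(y_true, y_pred, classes):
    """Per-class precision and recall, and the macro-averaged F1 score."""
    precision, recall, f1 = {}, {}, {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision[c] = tp / (tp + fp) if tp + fp else 0.0
        recall[c] = tp / (tp + fn) if tp + fn else 0.0
        # Per-class F1: harmonic mean of precision and recall
        f1[c] = (2 * precision[c] * recall[c] / (precision[c] + recall[c])
                 if precision[c] + recall[c] else 0.0)
    # Macro average: unweighted mean of the per-class F1 scores
    macro_f1 = sum(f1.values()) / len(classes)
    return precision, recall, macro_f1
```

The macro average weights every class equally, so it is not dominated by the most populated class.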

#### 4.2. Results and Discussion

#### 4.3. Limitations

## 5. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References


**Figure 1.** The methodology followed for the study proposed in this paper. The production process parameters of the molded road lenses were collected from a real production environment and labeled by analyzing their general uniformity (${U}_{0}$) in lab settings. Then, six different classifiers were compared to understand their capability of predicting the quality class of each sample (a sample is the vector of the process parameters of a lens).

**Figure 2.** Relevance of the features on the class label of a sample computed with the Relief algorithm. The values are normalized between 0 and 1.

**Figure 3.** Relevance of the features on the class label of samples computed with the ANOVA F-values. The values are normalized between 0 and 1.

**Figure 4.** Macro-averaged ${F}_{1}$ scores (± standard deviation) obtained by the compared classifiers on the test set of each fold of the stratified 5-fold cross-validation.

**Figure 5.** Confusion matrices collected by summing up all the test samples in the stratified 5-fold cross-validation scheme, for the six classifiers, i.e., KNN (**a**), decision tree (**b**), random forest (**c**), GBT (**d**), SVM (**e**), and MLP (**f**).

| Feature | Unit | Description |
|---|---|---|
| Melt temperature | °C | Temperature of the polymer before the injection into the mold |
| Mold temperature | °C | Temperature of the mold |
| Filling time | s | Time to fill the mold |
| Plasticizing time | s | Time to plasticize the product |
| Cycle time | s | Time to complete the entire process for a product |
| Closing force | N | Closing force of the mold |
| Clamping force peak value | N | Peak value of the closing force of the mold |
| Torque peak value current cycle | N·m | Peak value of the torque on the injection screw |
| Torque mean value current cycle | N·m | Mean value of the torque on the injection screw |
| Specific back pressure peak value | bar | Peak value of the resistance of the injection screw |
| Specific injection pressure peak value | bar | Peak value of the injection pressure |
| Screw position at the end of hold pressure | cm | Position of the injection screw at the end of the holding cycle |
| Shot volume | cm³ | Injection volume |

| | Waste | Acceptable | Target | Inefficient |
|---|---|---|---|---|
| Label | 1 | 2 | 3 | 4 |
| # Samples | 370 | 406 | 310 | 360 |

**Table 3.** The top six combinations of hyperparameters for the KNN. The best accuracy was achieved with $K=4$ by using the Manhattan distance to measure the similarity between the feature vectors.

| K | Distance | Accuracy |
|---|---|---|
| 4 | Manhattan | 92.21% |
| 6 | Manhattan | 91.73% |
| 3 | Manhattan | 91.73% |
| 5 | Manhattan | 91.73% |
| 8 | Manhattan | 91.46% |
| 7 | Manhattan | 91.32% |

**Table 4.** The top six combinations of hyperparameters for the decision tree. The best performance was achieved by using the accuracy as the splitting criterion for the nodes of the tree, without applying any pre-pruning strategy. The maximum depth had no effect, given that the tree stopped before reaching a depth of 29, which was the minimum tested.

| Pre-Pruning | Split Criterion | Accuracy |
|---|---|---|
| false | accuracy | 91.52% |
| true | accuracy | 91.45% |
| true | Gini index | 90.08% |
| true | information gain | 89.59% |
| false | Gini index | 89.46% |
| false | gain ratio | 89.46% |

**Table 5.** The top six combinations of hyperparameters for the random forest. The best performance was achieved by using the gain ratio as the splitting criterion for the nodes of the trees, with 151 trees and a maximum depth of 79. No pruning strategies were applied, as we used the extremely randomized trees method. A maximum depth of −1 indicates that no maximum depth was enforced.

| # Trees | Max Depth | Split Criterion | Accuracy |
|---|---|---|---|
| 151 | 79 | gain ratio | 95.04% |
| 300 | −1 | gain ratio | 94.97% |
| 210 | 79 | gain ratio | 94.90% |
| 61 | −1 | gain ratio | 94.90% |
| 270 | −1 | gain ratio | 94.90% |
| 121 | 39 | gain ratio | 94.90% |

**Table 6.** The top six combinations of hyperparameters for the GBT. The best performance was achieved by using 300 trees with a maximum depth of 41 (and all the values above 41). The second-best accuracy, slightly lower, was obtained with 270 trees and a maximum depth of 41 (and above).

| # Trees | Max Depth | Accuracy |
|---|---|---|
| 300 | 41 (and above) | 94.21% |
| 270 | 41 (and above) | 94.14% |
| 300 | 21 | 94.07% |
| 240 | 21 (and above) | 94.00% |
| 210 | 41 (and above) | 93.87% |
| 210 | 21 | 93.80% |

**Table 7.** The top six combinations of hyperparameters for the SVM classifier. The best performance was achieved using the RBF kernel, with $C=100$ and $Gamma=0.001$. In terms of accuracy, the top five configurations used the RBF kernel. Among the other kernels, the polynomial kernel worked better than the linear and sigmoid kernels, achieving 89.39% accuracy with $C=0.1$.

| C | Gamma | Kernel | Accuracy |
|---|---|---|---|
| 100.0 | 0.0010 | rbf | 91.73% |
| 10.0 | 0.0100 | rbf | 91.32% |
| 10.0 | 0.0010 | rbf | 90.90% |
| 1.0 | 0.0100 | rbf | 90.63% |
| 100.0 | 0.0001 | rbf | 89.94% |
| 0.1 | - | polynomial | 89.39% |

**Table 8.** The top six combinations of hyperparameters for the MLP. The network trained for 500 epochs with a learning rate of 0.1 and a momentum of 0.6 achieved the best accuracy.

| Learning Rate | Momentum | Epochs | Accuracy |
|---|---|---|---|
| 0.1000 | 0.6 | 500 | 92.08% |
| 0.1000 | 0.7 | 500 | 91.94% |
| 0.0100 | 0.9 | 500 | 91.59% |
| 0.1000 | 0.7 | 300 | 91.39% |
| 0.1000 | 0.8 | 100 | 91.11% |
| 0.1000 | 0.7 | 100 | 90.63% |

**Table 9.** The mean accuracy (and its standard deviation) computed for each classifier using a stratified 5-fold cross-validation scheme. The best accuracy value is highlighted in bold.

| KNN | Decision Tree | Random Forest | GBT | SVM | MLP |
|---|---|---|---|---|---|
| 92.21 ± 1.64% | 91.52 ± 1.63% | **95.04 ± 1.26%** | 94.21 ± 1.37% | 91.73 ± 2.37% | 92.08 ± 1.92% |

**Table 10.** The precision and recall for each class, computed by summing up the samples in the test set of each fold of the stratified 5-fold cross-validation scheme. The best values of precision and recall for each class are highlighted in bold.

| | Waste Precision | Acceptable Precision | Target Precision | Inefficient Precision | Waste Recall | Acceptable Recall | Target Recall | Inefficient Recall |
|---|---|---|---|---|---|---|---|---|
| KNN | 92.14% | 92.42% | 92.48% | 91.84% | 91.89% | 90.15% | 91.29% | 95.62% |
| Decision Tree | 88.65% | 91.05% | 93.71% | 93.14% | 90.81% | 87.68% | 91.29% | 96.71% |
| Random Forest | **97.22%** | **95.11%** | **94.37%** | **93.42%** | **94.59%** | **95.81%** | **91.94%** | **97.36%** |
| GBT | 94.82% | 94.29% | 94.35% | **93.42%** | 94.05% | 93.60% | 91.61% | 97.26% |
| SVM | 91.04% | 89.81% | 93.40% | 93.14% | 87.84% | 91.13% | 91.29% | 96.71% |
| MLP | 92.90% | 89.15% | 94.22% | 92.91% | 88.38% | 93.10% | 89.35% | 96.99% |

**Table 11.** Ranking (1 = best classifier, 6 = worst classifier) of the classifiers in terms of average accuracy, macro-averaged ${F}_{1}$ scores, and class recalls. The last row reports the average rank.

| | KNN | DT | Random Forest | GBT | SVM | MLP |
|---|---|---|---|---|---|---|
| Accuracy | 3 | 6 | 1 | 2 | 5 | 4 |
| ${F}_{1}$ score | 3 | 6 | 1 | 2 | 5 | 4 |
| Waste Recall | 3 | 4 | 1 | 2 | 6 | 5 |
| Acceptable Recall | 5 | 6 | 1 | 2 | 4 | 3 |
| Target Recall | 3 | 3 | 1 | 2 | 3 | 6 |
| Inefficient Recall | 6 | 3 | 1 | 2 | 3 | 5 |
| Average | 3.83 | 4.67 | 1.00 | 2.00 | 4.33 | 4.50 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Polenta, A.; Tomassini, S.; Falcionelli, N.; Contardo, P.; Dragoni, A.F.; Sernani, P.
A Comparison of Machine Learning Techniques for the Quality Classification of Molded Products. *Information* **2022**, *13*, 272.
https://doi.org/10.3390/info13060272

**AMA Style**

Polenta A, Tomassini S, Falcionelli N, Contardo P, Dragoni AF, Sernani P.
A Comparison of Machine Learning Techniques for the Quality Classification of Molded Products. *Information*. 2022; 13(6):272.
https://doi.org/10.3390/info13060272

**Chicago/Turabian Style**

Polenta, Andrea, Selene Tomassini, Nicola Falcionelli, Paolo Contardo, Aldo Franco Dragoni, and Paolo Sernani.
2022. "A Comparison of Machine Learning Techniques for the Quality Classification of Molded Products" *Information* 13, no. 6: 272.
https://doi.org/10.3390/info13060272