Towards Cost-Optimal Zero-Defect Manufacturing in Injection Molding: An Explainable and Transferable Machine Learning Framework
Abstract
1. Introduction
- We provide a reproducible workflow, ranging from sensor data fusion and correlation-based feature selection to hyperparameter optimization.
- We present a comparative analysis of state-of-the-art supervised methods to detect defects, explicitly addressing class imbalance through a cost-sensitive optimization approach that minimizes the economic risk of false negatives.
- We incorporate explainable artificial intelligence (XAI) techniques, specifically SHAP analysis, to illuminate how critical parameters drive defect outcomes. This improves transparency and fosters trust in data-driven decisions.
- We demonstrate the scalability of the proposed framework by showing how knowledge gained from one dataset can be successfully transferred to a second dataset with fewer parameters via Transfer Learning.
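The correlation-based feature selection mentioned in the first contribution can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sensor names, the sample values, and the 0.95 cutoff are all hypothetical.

```python
# Correlation-based feature selection sketch: drop a feature when it is
# almost perfectly correlated with one already kept. Pure-Python Pearson
# correlation; feature names and the 0.95 threshold are illustrative.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def drop_correlated(features, threshold=0.95):
    """Keep the first feature of any highly correlated pair."""
    kept = []
    for name in features:
        if all(abs(pearson(features[name], features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

sensors = {
    "nozzle_temp_1": [220.0, 221.0, 219.5, 222.0],
    "nozzle_temp_2": [220.1, 221.1, 219.6, 222.1],  # near-duplicate channel
    "motor_power":   [5.1, 4.8, 5.4, 4.9],
}
print(drop_correlated(sensors))  # ['nozzle_temp_1', 'motor_power']
```

The redundant second temperature channel is removed because it tracks the first almost exactly, while motor power carries independent information.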
2. Related Work and Theoretical Foundation
2.1. Zero-Defect Manufacturing (ZDM)
2.2. AI-Driven Quality Assurance and the Necessity of Explainability
2.3. Research Gap and Contribution
- We conduct a rigorous comparison of modern algorithms, including gradient boosting variants (CatBoost, LightGBM, XGBoost) and tabular transformers (SAINT), identifying CatBoost as the superior performer for this domain.
- Unlike previous works, we shift the optimization objective from pure accuracy to economic risk minimization. By implementing a cost-sensitive threshold calibration combined with SMOTE, we demonstrate how to minimize the financial impact of defective parts.
- We integrate SHAP to decode model decisions, revealing that specific physical parameters (e.g., nozzle temperature, motor power) are the primary drivers of defects, thus enabling targeted process adjustments.
- We provide empirical evidence that the developed models can be successfully transferred to secondary datasets, proving their adaptability to changing production environments without extensive data collection.
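The shift from accuracy to economic risk minimization described above can be illustrated with a simple threshold calibration: instead of the default 0.5 cutoff, pick the decision threshold that minimizes total cost under an asymmetric cost matrix. The cost values (a false negative ten times as expensive as a false positive) are hypothetical, not taken from the paper.

```python
# Cost-sensitive threshold calibration sketch: grid-search the decision
# threshold that minimizes total economic cost. c_fn and c_fp are
# illustrative costs, not the manufacturer's actual cost matrix.
def total_cost(y_true, y_prob, threshold, c_fn=10.0, c_fp=1.0):
    cost = 0.0
    for y, p in zip(y_true, y_prob):
        pred = 1 if p >= threshold else 0
        if y == 1 and pred == 0:
            cost += c_fn  # defective part shipped: expensive
        elif y == 0 and pred == 1:
            cost += c_fp  # good part scrapped: cheap by comparison
    return cost

def calibrate_threshold(y_true, y_prob, c_fn=10.0, c_fp=1.0):
    grid = [i / 100 for i in range(1, 100)]
    return min(grid, key=lambda t: total_cost(y_true, y_prob, t, c_fn, c_fp))

y_true = [0, 0, 0, 0, 1, 1]                      # 1 = defective part
y_prob = [0.05, 0.10, 0.20, 0.40, 0.35, 0.90]    # model's defect probabilities
t = calibrate_threshold(y_true, y_prob)
# Because false negatives cost 10x more, the calibrated threshold drops
# below 0.40 so the defective part scored 0.35 is still caught.
print(t, total_cost(y_true, y_prob, t))
```

With symmetric costs the optimum would sit near 0.5; the asymmetric cost matrix pulls it down, trading one cheap false positive for avoiding an expensive false negative.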
3. Methodology
3.1. Sample, Sources, and Data
3.2. Data Analysis Procedure
3.2.1. Supervised Learning
- XGBoost: An efficient tree-learning algorithm with explicit handling of sparse features, approximate split finding, and cache-aware block structures for speed. It often excels on tabular datasets thanks to its built-in regularization and sparsity-aware learning.
- LightGBM: A histogram-based learner that reduces computational overhead by bucketing continuous features into discrete bins. This allows faster training and lower memory usage while maintaining competitive performance on large datasets.
- CatBoost: Provides specialized handling of categorical features via ordered target encoding, alleviating the overfitting issues typically associated with naive label encoding. CatBoost is also comparatively insensitive to hyperparameter choices when dealing with mixed data.
- AutoGluon: An end-to-end AutoML toolkit that iteratively trains and refines multiple models, automatically managing hyperparameters based on validation performance.
- AutoSKLearn: An AutoML solution built on scikit-learn that uses Bayesian optimization to explore model configurations.
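The histogram trick behind LightGBM can be sketched in a few lines: continuous features are bucketed into a fixed number of bins, so split finding only scans bin boundaries instead of every unique value. This is an illustrative toy, not LightGBM's actual binning code; the pressure values are made up.

```python
# Equal-frequency binning sketch, illustrating the histogram-based
# approach used by LightGBM: split search operates on a handful of bin
# edges rather than on all raw values.
def build_bins(values, max_bins=4):
    """Equal-frequency bin edges over the observed values."""
    ordered = sorted(values)
    step = len(ordered) / max_bins
    return [ordered[int(i * step)] for i in range(1, max_bins)]

def to_bin(x, edges):
    for i, edge in enumerate(edges):
        if x < edge:
            return i
    return len(edges)

pressures = [101, 99, 230, 105, 228, 97, 103, 231]   # illustrative readings
edges = build_bins(pressures, max_bins=4)
binned = [to_bin(x, edges) for x in pressures]
print(edges, binned)  # [101, 105, 230] [1, 0, 3, 2, 2, 0, 1, 3]
```

A tree learner working on `binned` only has to evaluate three candidate splits per feature, which is where the speed and memory savings noted above come from.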
3.2.2. Hyperparameter Optimization
3.2.3. Performance Evaluation
3.2.4. Dealing with the Class Imbalance
3.2.5. SHAP Analysis
- N is the set of all features;
- S is a subset of features not containing feature i;
- v(S) is the value function representing the model prediction using the feature subset S.
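With the definitions above, the exact Shapley value of a feature i can be computed by enumerating all subsets S of N \ {i}. Libraries like SHAP approximate this sum for real models; the toy below uses a hypothetical additive value function with three features, so the exact computation is feasible and its result is known in advance (each feature recovers its own effect).

```python
# Exact Shapley values phi_i = sum_S |S|!(|N|-|S|-1)!/|N|! * (v(S u {i}) - v(S)),
# using the N, S, v(S) notation defined above. The three-feature additive
# "model" is illustrative, not the paper's CatBoost model.
from itertools import combinations
from math import factorial

def shapley(N, v):
    n = len(N)
    phi = {}
    for i in N:
        others = [j for j in N if j != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Toy additive model: the prediction is the sum of per-feature effects,
# so each feature's Shapley value equals its own effect.
effects = {"motor_power": 0.5, "nozzle_temp": 0.3, "injection_time": 0.1}
v = lambda S: sum(effects[f] for f in S)
print(shapley(list(effects), v))
```

For a genuinely interactive model the weighted average over subsets is what distributes interaction effects fairly among the features; the additive case is just the sanity check where nothing needs distributing.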
3.2.6. Transfer Learning
- D_S = {X_S, P(X_S)} be the source domain with input feature space X_S and distribution P(X_S);
- T_S = {Y_S, f_S} be the source task with label space Y_S and predictive function f_S;
- D_T = {X_T, P(X_T)} be the target domain with input feature space X_T and distribution P(X_T);
- T_T = {Y_T, f_T} be the target task with label space Y_T and predictive function f_T.
| Algorithm 1 Transfer Learning Sample Size Analysis |
| 1: Preprocess source and transfer datasets |
| 2: Split transfer data 80/20 into D_fine, D_test |
| 3: Align features between domains |
| 4: Initialize sample sizes n_1, …, n_k |
| 5: Train base model M_base on the source dataset |
| 6: for each n_j do |
| 7: Sample subset of n_j examples from D_fine |
| 8: Transfer Learning: |
| 9: Initialize M_TL from M_base |
| 10: Fine-tune M_TL on the subset (50 rounds) |
| 11: Test M_TL on D_test |
| 12: Record accuracy acc_TL |
| 13: New Model: |
| 14: Train new model M_NM from scratch on the subset |
| 15: Test M_NM on D_test |
| 16: Record accuracy acc_NM |
| 17: Store (n_j, acc_TL, acc_NM) |
| 18: end for |
| 19: return Sample sizes and corresponding accuracies |
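The structure of Algorithm 1 can be sketched end to end with a deliberately simple stand-in model: a one-dimensional threshold classifier whose "fine-tuning" blends the source-trained threshold with a target-data estimate. Everything here is synthetic and hypothetical (the data generator, the blending rule, the sample sizes); it shows only the TL-vs-new-model loop, not the paper's CatBoost/LightGBM setup.

```python
# Sketch of Algorithm 1's sample-size loop with a toy threshold model.
# All data is synthetic; "fine-tuning" is a simple blend of source and
# target thresholds, standing in for 50 rounds of boosting.
import random

random.seed(0)

def midpoint(X, y):
    pos = [x for x, label in zip(X, y) if label]
    neg = [x for x, label in zip(X, y) if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(t, X, y):
    return sum((x > t) == bool(label) for x, label in zip(X, y)) / len(y)

def draw(n, b):
    """Synthetic 1-D process data: classes centred at b - 1 and b + 1."""
    X, y = [], []
    for i in range(n):
        label = i % 2
        X.append(random.gauss(b + (1 if label else -1), 0.5))
        y.append(label)
    return X, y

source_X, source_y = draw(400, b=0.0)
target_X, target_y = draw(200, b=0.8)            # shifted target process
split = int(0.8 * len(target_X))                 # step 2: 80/20 split
fine_X, fine_y = target_X[:split], target_y[:split]
test_X, test_y = target_X[split:], target_y[split:]

base_t = midpoint(source_X, source_y)            # step 5: base model on source
results = []
for n in (5, 20, 80):                            # step 6: sample sizes
    sub_X, sub_y = fine_X[:n], fine_y[:n]
    tl_t = 0.5 * base_t + 0.5 * midpoint(sub_X, sub_y)  # fine-tune from base
    nm_t = midpoint(sub_X, sub_y)                       # train from scratch
    results.append((n, accuracy(tl_t, test_X, test_y),
                       accuracy(nm_t, test_X, test_y)))
print(results)
```

Plotting the two accuracy columns against n reproduces the kind of stability/plateau comparison reported in Section 4.6, with real gradient-boosted models in place of the toy classifier.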
4. Results
4.1. Hyperparameter Optimization
4.2. Performance Evaluation
4.3. Dealing with Class Imbalance
4.4. Cost Optimization
4.5. Model Explanation
4.6. Transfer Learning and Scalability Analysis
5. Discussion
5.1. Algorithmic Superiority: GBDTs vs. Deep Learning
5.2. From Accuracy to Profitability: The Cost-Sensitive Paradigm
- Threshold Tuning alone reduces risk but introduces high variance, making the system unstable across production runs.
- SMOTE stabilizes the learning process by constructing a robust decision boundary around the minority class.
- The Hybrid Approach (SMOTE + Threshold Moving) proved optimal. It creates a synergy where SMOTE supplies the synthetic minority samples needed for a robust decision boundary, while threshold calibration fine-tunes the sensitivity to align with the manufacturer's specific cost matrix.
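The SMOTE half of the hybrid approach can be sketched in a few lines: new minority samples are synthesized by interpolating between a minority point and one of its nearest minority neighbours. This is a bare-bones analogue of Chawla et al.'s method, not the imbalanced-learn implementation; the two-feature "defective part" points are made up.

```python
# Minimal SMOTE-style oversampling sketch: each synthetic point lies on
# the segment between a minority sample and one of its k nearest minority
# neighbours. Feature values (temperature, power) are illustrative.
import random

def smote(minority, n_new, k=2, seed=42):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest minority neighbours of a (excluding a itself)
        neighbours = sorted(
            (p for p in minority if p is not a),
            key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)),
        )[:k]
        b = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(a, b)))
    return synthetic

defective = [(219.0, 5.6), (221.0, 5.9), (220.5, 6.1)]  # minority class
new_points = smote(defective, n_new=4)
print(new_points)  # each point interpolates between two defective samples
```

Because every synthetic point is a convex combination of two real minority samples, the oversampled class stays inside the region the real defects occupy, which is what stabilizes the decision boundary.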
5.3. Enablers for Industry 4.0: Explainability and Transferability
5.4. Limitations and Future Work
6. Conclusions
Practical Implications and Sustainability Impact
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
| Model | Hyperparameter | Range | Best Value |
|---|---|---|---|
| CatBoost | iterations | [100, 1000] | 970 |
| learning_rate | [1 × 10⁻³, 0.1] | 0.0753 | |
| depth | [4, 10] | 4 | |
| l2_leaf_reg | [1 × 10⁻⁸, 10] | 5.7635 | |
| border_count | [32, 255] | 110 | |
| min_data_in_leaf | [1, 50] | 39 | |
| CatBoostTimeSeries | sequence_length | [1, 100] | 19 |
| iterations | [100, 1000] | 867 | |
| learning_rate | [1 × 10⁻³, 0.1] | 0.0309 | |
| depth | [4, 10] | 4 | |
| l2_leaf_reg | [1 × 10⁻⁸, 10.0] | 3.7892 × 10⁻⁷ | |
| border_count | [32, 255] | 32 | |
| min_data_in_leaf | [1, 50] | 4 | |
| EBM | max_bins | [32, 128] | 114 |
| max_interaction_bins | [16, 64] | 19 | |
| learning_rate | [0.01, 0.1] | 0.017 | |
| min_samples_leaf | [10, 50] | 42 | |
| max_leaves | [3, 30] | 3 | |
| LightGBM | max_depth | [−1, 15] | 7 |
| learning_rate | [1 × 10⁻³, 0.4] | 0.3924 | |
| n_estimators | [50, 500] | 97 | |
| subsample | [0.4, 1.0] | 0.4526 | |
| colsample_bytree | [0.5, 1.0] | 0.5552 | |
| num_leaves | [20, 150] | 48 | |
| min_child_samples | [1, 75] | 59 | |
| lambda_l1 | [0, 10] | 0.0285 | |
| lambda_l2 | [0, 10] | 8.8419 | |
| RandomForest | n_estimators | [50, 1000] | 130 |
| max_depth | [3, 30] | 18 | |
| min_samples_split | [2, 20] | 3 | |
| min_samples_leaf | [1, 20] | 1 | |
| max_features | [sqrt, log2, None] | sqrt | |
| SAINT | mlp_ratio | [2.0, 6.0] | 2.989 |
| threshold | [0.01, 0.5] | 0.477 | |
| embedding_dim | [96, 160] | 96 | |
| num_heads | [2, 8] | 4 | |
| num_layers | [0, 1] | 1 | |
| XGBoost | max_depth | [3, 10] | 7 |
| learning_rate | [1 × 10⁻³, 0.3] | 0.1724 | |
| n_estimators | [50, 500] | 476 | |
| subsample | [0.4, 1.0] | 0.8339 | |
| colsample_bytree | [0.5, 1.0] | 0.8311 |

| Approach | Runtime [s] |
|---|---|
| Native | 308 |
| ClassWeights | 403 |
| Threshold | 328 |
| SMOTE | 1441 |
| SMOTEwithThreshold | 1727 |
| BalancedEnsemble | 1852 |
| FocalLoss | 1869 |
References
- Kim, D.W.; Yoon, S. Special issue on smart automation and manufacturing. Int. J. Comput. Integr. Manuf. 2018, 31, 675–676.
- Gajšek, B.; Stradovnik, S.; Hace, A. Sustainable Move towards Flexible, Robotic, Human-Involving Workplace. Sustainability 2020, 12, 6590.
- Abubakr, M.; Abbas, A.T.; Tomaz, Í.; Soliman, M.; Luqman, M.; Hegab, H. Sustainable and Smart Manufacturing: An Integrated Approach. Sustainability 2020, 12, 2280.
- Sousa, J.; Nazarenko, A.; Grunewald, C.; Psarommatis, F.; Fraile, F.; Meyer, O.; Sarraipa, J. Zero-defect manufacturing terminology standardization: Definition, improvement, and harmonization. Front. Manuf. Technol. 2022, 2, 947474.
- Psarommatis, F. A generic methodology and a digital twin for zero defect manufacturing (ZDM) performance mapping towards design for ZDM. J. Manuf. Syst. 2021, 59, 507–521.
- Li, Y. Digital transformation and pathways for promoting global value position: An empirical study in Chinese manufacturing industries. Int. J. Low-Carbon Technol. 2025, 20, 119–128.
- Yang, J.; Liu, Y.; Morgan, P.L. Human–machine interaction towards Industry 5.0: Human-centric smart manufacturing. Digit. Eng. 2024, 2, 100013.
- Gao, H.; Zhang, Y.; Zhou, X.; Li, D. Intelligent methods for the process parameter determination of plastic injection molding. Front. Mech. Eng. 2018, 13, 85–95.
- Lai, H. Study on Improving the Life of Stereolithography Injection Mold. Adv. Mater. Res. 2012, 468–471, 1013–1016.
- Kumar, S.; Park, H.S.; Lee, C. Data-driven smart control of injection molding process. CIRP J. Manuf. Sci. Technol. 2020, 31, 439–449.
- Kurasov, D. Injection Molding Technology. Mater. Res. Proc. 2022, 21, 251–254.
- Psarommatis, F.; May, G.; Dreyfus, P.A.; Kiritsis, D. Zero defect manufacturing: State-of-the-art review, shortcomings and future directions in research. Int. J. Prod. Res. 2020, 58, 1–17.
- Psarommatis, F.; Sousa, J.; Mendonça, J.P.; Kiritsis, D. Zero-defect manufacturing the approach for higher manufacturing sustainability in the era of industry 4.0: A position paper. Int. J. Prod. Res. 2022, 60, 73–91.
- Azamfirei, V.; Psarommatis, F.; Lagrosen, Y. Application of automation for in-line quality inspection, a zero-defect manufacturing approach. J. Manuf. Syst. 2023, 67, 1–22.
- Psarommatis, F.; May, G.; Azamfirei, V. Zero defect manufacturing in 2024: A holistic literature review for bridging the gaps and forward outlook. Int. J. Prod. Res. 2024, 1–37.
- Getachew, M.; Beshah, B.; Mulugeta, A.; Kitaw, D. Application of artificial intelligence to enhance manufacturing quality and zero-defect using CRISP-DM framework. Int. J. Prod. Res. 2024, 1–25.
- Tsai, M.H.; Fan-Jiang, J.C.; Liou, G.Y.; Cheng, F.J.; Hwang, S.J.; Peng, H.S.; Chu, H.Y. Development of an online quality control system for injection molding process. Polymers 2022, 14, 1607.
- Aminabadi, S.S.; Tabatabai, P.; Steiner, A.; Gruber, D.P.; Friesenbichler, W.; Habersohn, C.; Berger-Weber, G. Industry 4.0 in-line AI quality control of plastic injection molded parts. Polymers 2022, 14, 3551.
- Rousopoulou, V.; Nizamis, A.; Vafeiadis, T.; Ioannidis, D.; Tzovaras, D. Predictive maintenance for injection molding machines enabled by cognitive analytics for industry 4.0. Front. Artif. Intell. 2020, 3, 578152.
- Zhang, J.; Alexander, S. Fault diagnosis in injection moulding via cavity pressure signals. Int. J. Prod. Res. 2008, 46, 6499–6512.
- Tayalati, F.; Boukrouh, I.; Azmani, A.; Azmani, M. Implementation of Digital Twin and Deep Learning for Process Monitoring: Case Study in Injection Molding Manufacturing. In Proceedings of the 10th World Congress on Electrical Engineering and Computer Systems and Sciences (EECSS’24); International Aset Inc.: Ottawa, ON, Canada, 2024.
- Jung, H.; Jeon, J.; Choi, D.; Park, J.Y. Application of machine learning techniques in injection molding quality prediction: Implications on sustainable manufacturing industry. Sustainability 2021, 13, 4120.
- Rønsch, G.Ø.; Kulahci, M.; Dybdahl, M. An investigation of the utilisation of different data sources in manufacturing with application in injection moulding. Int. J. Prod. Res. 2021, 59, 4851–4868.
- Gim, J.; Turng, L.S. Interpretation of the effect of transient process data on part quality of injection molding based on explainable artificial intelligence. Int. J. Prod. Res. 2023, 61, 8192–8212.
- Senoner, J.; Schallmoser, S.; Kratzwald, B.; Feuerriegel, S.; Netland, T. Explainable AI improves task performance in human–AI collaboration. Sci. Rep. 2024, 14, 31150.
- Tonekaboni, S.; Joshi, S.; McCradden, M.D.; Goldenberg, A. What clinicians want: Contextualizing explainable machine learning for clinical end use. In Proceedings of the Machine Learning for Healthcare Conference, Ann Arbor, MI, USA, 8–10 August 2019; pp. 359–380.
- Bussmann, N.; Giudici, P.; Marinelli, D.; Papenbrock, J. Explainable AI in credit risk management. Comput. Econ. 2021, 57, 203–216.
- Bodria, F.; Barbiero, A.; Giudici, P. Benchmarking explainable artificial intelligence methods for intrusion detection systems. Expert Syst. Appl. 2021, 168, 114241.
- Holstein, K.; McLaren, B.M.; Aleven, V. Co-designing AI for teacher assistance: A case study with student learning data. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems; ACM: New York, NY, USA, 2019; pp. 1–15.
- Chen, T.; He, T.; Benesty, M.; Khotilovich, V.; Tang, Y.; Cho, H.; Chen, K.; Mitchell, R.; Cano, I.; Zhou, T. Xgboost: Extreme Gradient Boosting, R Package Version 0.4-2. 2015. Available online: https://CRAN.R-project.org/package=xgboost (accessed on 10 December 2025).
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. Lightgbm: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Process. Syst. 2017, 30, 3146–3154.
- Dorogush, A.V.; Ershov, V.; Gulin, A. CatBoost: Gradient boosting with categorical features support. arXiv 2018, arXiv:1810.11363.
- Rigatti, S.J. Random forest. J. Insur. Med. 2017, 47, 31–39.
- Nori, H.; Jenkins, S.; Koch, P.; Caruana, R. Interpretml: A unified framework for machine learning interpretability. arXiv 2019, arXiv:1909.09223.
- Erickson, N.; Mueller, J.; Shirkov, A.; Zhang, H.; Larroy, P.; Li, M.; Smola, A. Autogluon-tabular: Robust and accurate automl for structured data. arXiv 2020, arXiv:2003.06505.
- Feurer, M.; Eggensperger, K.; Falkner, S.; Lindauer, M.; Hutter, F. Auto-sklearn 2.0: The next generation. arXiv 2020, arXiv:2007.04074.
- Somepalli, G.; Goldblum, M.; Schwarzschild, A.; Bruss, C.B.; Goldstein, T. Saint: Improved neural networks for tabular data via row attention and contrastive pre-training. arXiv 2021, arXiv:2106.01342.
- Akiba, T.; Sano, S.; Yanase, T.; Ohta, T.; Koyama, M. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining; Association for Computing Machinery: New York, NY, USA, 2019; pp. 2623–2631.
- Zeng, X.; Martinez, T.R. Distribution-balanced stratified cross-validation for accuracy estimation. J. Exp. Theor. Artif. Intell. 2000, 12, 1–12.
- Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic Minority Over-sampling Technique. J. Artif. Intell. Res. 2002, 16, 321–357.
- Ling, C.; Sheng, V. Cost-Sensitive Learning and the Class Imbalance Problem. In Encyclopedia of Machine Learning; Springer: Berlin/Heidelberg, Germany, 2010.
- Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4765–4774.
- Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
- Song, Z.; Zou, S.; Zhou, W.; Huang, Y.; Shao, L.; Yuan, J.; Gou, X.; Jin, W.; Wang, Z.; Chen, X.; et al. Clinically applicable histopathological diagnosis system for gastric cancer detection using deep learning. Nat. Commun. 2020, 11, 4294.
- Kabir, H.; Wu, J.; Dahal, S.; Joo, T.; Garg, N. Automated estimation of cementitious sorptivity via computer vision. Nat. Commun. 2024, 15, 9935.
| Category | Main Dataset (M) | Transfer Dataset (T) | Examples |
|---|---|---|---|
| Time Series Attributes | 1 | 1 | M: Cycle Start T: Cycle Start |
| Quality Attributes (Label) | 1 | 1 | M: Binary classification (0/1) T: Binary classification (0/1) |
| Temperature Parameters | 5 | 0 | M: Mold Temperature (1,3), Nozzle Temperature (1,2), Reference Temperature T: – |
| Energy Parameters | 6 | 0 | M: Energy Consumption Total, Energy Total, Heating Energy, Heating Power, Motor Power, Power Total T: – |
| Process Control Parameters | 5 | 5 | M: Injection Time, Injection Speed (1,3), Switching Time, Dosing Time T: Injection Time, Switching Pressure, Dosing Time, Mass Pad, Flow Rating |
| Material Properties | 2 | 0 | M: Shear Rate, Viscosity T: – |
| Process Monitoring | 1 | 0 | M: Switchover Monitoring T: – |
| Dataset | Total Samples | Good Parts | Defective Parts | Dimensionality |
|---|---|---|---|---|
| Main | 5912 | 5715 (96.67%) | 197 (3.33%) | 22 |
| Transfer | 7290 | 6959 (95.46%) | 331 (4.54%) | 7 |
| Model | Relative Improvement |
|---|---|
| CatBoost | 16.40% |
| CatBoostTimeSeries | 0.92% |
| EBM | 13.38% |
| LightGBM | 0.95% |
| Random Forest | 10.62% |
| SAINT | 61.07% |
| XGBoost | 2.84% |
| Model | TN | FP | FN | TP | F1 |
|---|---|---|---|---|---|
| XGBoost | | | | | |
| LightGBM | | | | | |
| EBM | | | | | |
| SAINT | | | | | |
| AutoGluon | | | | | |
| AutoSKLearn | | | | | |
| CatBoost | | | | | |
| CatBoostTS | | | | | |
| Random Forest | | | | | |
| Ground Truth | 1144 | 0 | 0 | 39 | |
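The F1 column in the confusion-matrix table above follows directly from the four counts. As a sanity check, the helper below reuses the ground-truth row (TN = 1144, FP = 0, FN = 0, TP = 39), for which F1 is 1.0 by construction.

```python
# F1-score from confusion-matrix counts, as reported in the table above.
def f1_score(tn, fp, fn, tp):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Ground-truth row: no misclassifications, so F1 is exactly 1.0.
print(f1_score(tn=1144, fp=0, fn=0, tp=39))  # 1.0
```

Note that F1 ignores TN entirely, which is why it is the preferred metric here: with 96%+ good parts, a model that never flags a defect still scores high accuracy but gets F1 = 0.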
| Model | Mean F1-Score |
|---|---|
| Basic | |
| Threshold tuning | |
| SMOTE | |
| SMOTE & Threshold tuning | |
| Approach | TP | FP | FN | TN | Cost |
|---|---|---|---|---|---|
| SMOTE Threshold | | | | | |
| CatBoost | | | | | |
| Threshold | | | | | |
| SMOTE | | | | | |
| Feature | Importance | Insight |
|---|---|---|
| Motor Power | 1 | Higher motor power leads to more defective parts. |
| Nozzle Temperature 2 | 2 | Extreme nozzle temperatures increase defect risk. |
| Energy Consumption Total | 3 | High energy consumption results in more defects. |
| Power Total | 4 | Higher power totals are linked to defective outcomes. |
| Reference Temperature | 5 | Extreme temperatures raise defect risk; optimal mid-range values are preferable. |
| Nozzle Temperature 1 | 6 | Extreme values increase defect likelihood; mid-range conditions are more favorable. |
| Switching Time | 7 | Longer switching times are associated with defects. |
| Injection Speed | 8 | The effect is nuanced—optimal speeds minimize defects. |
| Injection Time | 9 | Has a comparatively lower impact on defects. |
| Heating Power | 10 | Impact varies by model, showing inconsistent trends. |
| Model | Phase | TL (Samples) | NM (Samples) | Key Observation |
|---|---|---|---|---|
| CatBoost | Stability | 180–260 | 190–260 | Minimal impact; TL offers no data efficiency advantage. |
| CatBoost | Plateau | >260 | >260 | Convergence is identical. |
| LightGBM | Stability | 170–200 | 185–450 | TL stabilizes significantly earlier. |
| LightGBM | Plateau | >200 | >450 | NM requires more data to converge. |
| Feature | AutoGluon | CatBoost | LightGBM | Random Forest | XGBoost |
|---|---|---|---|---|---|
| Injection Time | 0.3511 | 0.2234 | 0.3529 | 0.1291 | 0.0607 |
| Dosing Time | 0.1278 | 0.0661 | 0.5269 | 0.0298 | 0.0082 |
| Total Sum | 0.4789 | 0.2895 | 0.8798 | 0.1589 | 0.0689 |
| Metric | Data Savings | Time Reduction | Energy Impact |
|---|---|---|---|
| Improvement | ≈55% | ≈55% faster | Up to 50% reduction |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Citation: Greif, L.; Ortner, J.; Kummert, P.; Kimmig, A.; Kreuzwieser, S.; Bönsch, J.; Ovtcharova, J. Towards Cost-Optimal Zero-Defect Manufacturing in Injection Molding: An Explainable and Transferable Machine Learning Framework. Sustainability 2026, 18, 2001. https://doi.org/10.3390/su18042001