Article

Decision Tree Models for Automated Quality Tools Selection

Beata Starzyńska 1 and Izabela Rojek 2
1 Faculty of Mechanical Engineering, Poznan University of Technology, 3 Piotrowo Street, 60-965 Poznan, Poland
2 Faculty of Computer Science, Kazimierz Wielki University, 30 Chodkiewicza Street, 85-064 Bydgoszcz, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(1), 472; https://doi.org/10.3390/app16010472
Submission received: 23 November 2025 / Revised: 30 December 2025 / Accepted: 31 December 2025 / Published: 2 January 2026
(This article belongs to the Section Mechanical Engineering)

Abstract

Quality tools have a well-established place in business management. They help diagnose, analyze, and solve quality problems. In manufacturing companies, they are also used in process and product improvement projects. However, only the proper selection of quality tools can bring tangible benefits to an organization. Given their diverse content and methodologies, supporting the selection of these tools becomes a crucial issue. A literature review indicates only a few solutions in this area, implemented as decision support systems. Additionally, the challenges of Quality 4.0 and the demands of modern business reveal a research gap in automating the process of selecting quality tools. This is particularly true for less experienced company employees participating in improvement programs. Our previous research shows how machine learning using neural network models supports the development of an expert system in this area. The aim of this paper is to present the results of research in which classifiers in the form of decision trees were developed. At the same time, attempts were made to demonstrate that decision tree classifiers (trained on an extended Excellence Toolbox dataset) can automatically recommend quality tools with higher accuracy than neural networks, while offering interpretable rules. The decision-tree models achieve strong classification performance, with the best tree reporting 96.75% effectiveness. In contrast, the neural network from previous studies achieved 94.87%.

1. Introduction

Quality tools have a well-established place in business management. They cover a wide spectrum of ways of operating in various areas of the enterprise, focused on meeting the requirements of its customers [1,2,3,4].
A comprehensive set of quality management instruments includes principles, methods, and tools [5]. Principles are the most general in nature. They define the attitude of the enterprise and its employees towards broadly understood quality issues. They have been formalized in the ISO 9000 series of standards [6,7,8]. Quality management methods, in turn, are characterized by a planned, repeatable, and scientifically based approach to the implementation of quality management tasks [9,10,11,12]. The third group of quality management instruments consists of quality tools. They constitute the basis for the proper use of data and information generated within the enterprise and its environment; their application is based on the experience and participation of employees. At the operational level, they help diagnose, analyze, and resolve quality problems [13]. In manufacturing companies, they are also used in process and product improvement projects [14,15,16,17].
Quality tools, like other quality management instruments, have their well-established place in enterprise management, regardless of the adopted quality management concept, such as TQM, Kaizen, Six-Sigma, or normative approach (QMS) [18,19,20,21,22,23]. The collection of quality tools is characterized by its size and diversity. They have been developed historically over many years [24]. They constitute the leading instruments of the various quality management concepts mentioned above. Based on already proven tools, methodological variations are being developed. The emergence of hybrid management concepts in the form of Human Six Sigma, Lean Six Sigma (and others) also contributes to the expansion and diversification of their set [25,26,27,28,29]. Additionally, in such a large collection, many tools show methodological similarities or can be used interchangeably.
In the educational context, the key to organizing knowledge about quality tools lies in the various classifications proposed in the literature [24,30]. Placing a tool in a given classification group can be the basis for its correct selection and, consequently, its application. From a practical perspective, assigning quality tools to specific phases of the product life cycle, stages of the production process, or organizational units of enterprises is particularly valuable [31], but often not precise enough.
The problem of proper tool selection was noticed many years ago and remains relevant today. The importance of this issue has been highlighted in studies [13,14] and recent publications [32,33,34,35,36]. The authors emphasize that any failures in the application of quality tools and techniques (collectively referred to as QTT [37,38,39]) are not due to their ineffectiveness, but rather to a lack of understanding of when, where, and how to use them [40], i.e., how to select them in given organizational and production conditions.
This is accompanied by further challenges, related not only to the selection of appropriate tools but also to solutions offering the possibility of quick and easy execution of this task [41,42,43,44,45,46,47].
Available publications, such as books, guides, training materials, etc., present knowledge about quality tools in a format that further complicates their effective use in production environments. They typically consist of tables, matrices, selection diagrams, or descriptive tool selection instructions. Traditional solutions, therefore, require potential users to devote significant time to selecting quality tools, which also depends on their perception of these sources.
Quality tools are recommended as simple and universal. This latter characteristic means that using single search criteria may not always yield a useful tool. Some existing manuals emphasize assigning them to specific stages of methodological improvement programs [13,24]. However, given the specific organizational and production conditions, these guidelines are not always sufficient.
From a user’s perspective, choosing the right tool can depend on many factors (not just the intended use): the type of data being processed, the user’s experience, the way of working (in a team), the type of results visualization, etc. Therefore, it is important to classify such diverse and numerous tools and refine the search criteria.
The aforementioned multitude of quality tools has therefore generated the need to develop solutions that would facilitate the “adaptation” of tools to the needs of their potential users, in the given organizational and production conditions [48].
In an era of pressure to effectively address quality issues and the need to capitalize on emerging opportunities, modern information technologies, and above all, artificial intelligence methods, including machine learning methods, provide invaluable support in the functioning of modern enterprises.
One of the first IT solutions designed to support the selection of quality tools is the Excellence Toolbox [48], which incorporates all the elements of a decision support system. The system’s knowledge base was implemented as a relational database. The creation of the system database was preceded by an analysis of a wide range of tools described in the literature and research on their application in manufacturing enterprises. Surveys were conducted in 100 manufacturing enterprises. The collected data was then synthesized, and subgroups of tools were identified that shared similarities based on selected criteria. In the developed system, the user’s selection of a specific tool was treated as a multi-criteria selection.
It should be emphasized that in previous studies on quality tools, the characteristics of individual tools were presented in a descriptive, and therefore unstructured, manner. Tool descriptions typically include the tool’s name, alternative names, stated purpose, method of use, application examples, and additional notes or comments.
As part of the development of a prototype instrument selection system, quality tool characteristics were developed in the form of a fixed set of features and their values/states. This descriptive, unstructured tool description serves as a supplement to the user’s reference after selecting the tool recommended by the system.
The application allows tools to be searched for by input data type (type of input data—TID), tool nature (tool category—TC), location in DMAIC (place in DMAIC—PDMAIC) and PDCA (place in PDCA—PPDCA), intended use (purpose of use—PU), form of result visualization (form of the output—FO), tool user (TU), tool difficulty level (difficulty level—DL), and selected production process stage [48].
Each of the distinguished attributes describing quality tools in the system database (populated by experts via the interface of the same name) takes on one or more values. For example, depending on the type of input data for processing in the tool, it can be quantitative or qualitative (descriptive); when determining the applicability of a tool in the PDCA cycle, there may be more than one attribute characterizing the same tool (in this case, due to its “universality”).
The application was developed assuming that the potential user of the tool is a “customer”—they formulate their needs and requirements regarding the tool being searched for. On the other hand, the use of an attribute-based approach in tool descriptions allowed for the development of detailed tool characteristics in the form of a set of attributes and their states—a set of which, to a greater or lesser extent, meets the user’s search criteria.
Tool search results (via the user interface) are presented as a percentage, depending on the degree to which they match the characteristics specified during the search. Search results are generated in the form of a ranking list of tools; it is possible to obtain the results in the form of multiple recommendations. This is where the system’s built-in, classic description of each tool comes in handy. Through the expert interface, the database can be expanded with additional characteristics of management instruments. An additional functionality of the system is to advise the user which quality tool can be used next, previous, or interchangeably. The program has been validated and found to be a useful tool supporting the selection of quality tools, especially in SMEs.
Another solution for supporting the selection of quality tools to overcome quality problems and implement improvement actions within an organization is the work of [49]. Specifically, an Intelligent Decision Support System called eCIFOD was developed. The program’s authors mapped various quality tools and techniques, identifying and linking the features of these tools to the categories found in the CIFOD model (category, input, function, output, definition). This data was then entered into an IT system, which allowed the development of a program supporting the selection of quality tools (operating in web browsers). When searching for the appropriate quality tool using this solution, attributes must be selected from each group.
Within this system, other attributes of quality tools were distinguished compared to the previously discussed ones. These include: the type of information needed (in the input category); the impact on quality (in the output category); and the activity requiring QTT and the system level (in the function category).
As already mentioned, innovative technologies such as artificial intelligence (AI), machine learning, and data science are becoming a breakthrough element in the development of decision support systems.
The article [50] points out that the abundance of quality control tools and techniques can make selecting the most appropriate tools difficult, even for qualified personnel. The authors based their research on rational choice theory. Using one of the artificial intelligence algorithms, cosine similarity, they investigated its feasibility in selecting the most appropriate QTT [49,51,52]. In this way, the presented work contributes to the research stream concerning the optimization of the quality tool selection process itself.
An analysis of previous studies in this area reveals a research gap in attempts to automate the process of selecting quality tools. The need for such simplifications is particularly acute for less experienced employees of companies participating in improvement programs. Our previous research demonstrates how machine learning using neural network models supports the development of expert systems in this area [53,54].
The purpose of this article is to present the results of research conducted in which decision tree classifiers were developed. At the same time, attempts were made to demonstrate that decision tree classifiers (using the extended Excellence Toolbox dataset) can automatically recommend quality tools with accuracy better than neural networks, while offering interpretable rules. The decision-tree models achieve strong classification performance, with the best tree reporting 96.75% effectiveness. In contrast, the neural network from previous studies achieved 94.87%.
Decision trees are excellent classifiers that support decision-making. Their characteristic feature is the creation of decision rules based on paths generated from root to leaf. This is achieved by splitting a file containing training examples into smaller files that increasingly clearly identify decision classes. The example files are split based on tests, maximizing the information gain parameter or minimizing the Gini index [55].
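For illustration, the split criteria mentioned above can be computed directly. The following minimal Python sketch (not part of the software used in the study) scores a hypothetical candidate split with both the Gini impurity decrease and the information gain; the labels and the candidate partition are invented for the example.
```python
from collections import Counter
import math

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy of a set of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def split_scores(parent, left, right):
    """Impurity decrease (Gini) and information gain (entropy) for one candidate test."""
    n, nl, nr = len(parent), len(left), len(right)
    gini_decrease = gini(parent) - (nl / n) * gini(left) - (nr / n) * gini(right)
    info_gain = entropy(parent) - (nl / n) * entropy(left) - (nr / n) * entropy(right)
    return gini_decrease, info_gain

# Hypothetical node: examples labelled with the recommended quality tool
parent = ["check_sheet"] * 6 + ["stratification"] * 4
left, right = parent[:6], parent[6:]          # candidate split on some feature test
print(split_scores(parent, left, right))      # (0.48, ~0.971): the split separates the classes perfectly
```
In an induction algorithm, the test with the highest such score is chosen at each node before the procedure recurses on the resulting subsets.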
We map two research questions (RQ1–RQ2) to two contributions (C1–C2) concerning feature effects and comparative performance.
RQ1—Does the article describe the influence of features (predictors) on the structure of trees?
C1—Feature importance is described in Section 2.1.
When developing the models, feature importance was calculated as the sum—over all tree nodes—of the increases (deltas) in node purity (delta(I) in the case of classification) and expressed as a fraction of the maximum sum (for all predictors).
RQ2—Does the article compare the performance of trees, select the best tree (no. 5), and assess the quality of this tree?
C2—These issues are described in Section 2 and Section 3.

2. Materials and Methods

2.1. Development of a Dataset

The Excellence Toolbox system described in Section 1 first enabled the development of a training file for its new implementation as an expert system [48]. The computer program described in the previous section is an extremely useful tool that allows even inexperienced individuals to search for and use quality instruments. Thanks to its comprehensive approach to these instruments, it is possible to search for them and acquire knowledge about them in one place, without the need for further literature searches or internet resources. Due to the emergence of new technologies, there was a need to develop a new solution (an expert system).
The development of the training file consisted of
  • Testing the compliance of the tool characteristics records in the system database with the knowledge of experts;
  • Examining the effectiveness of the program in searching for quality tools when a gradually narrowing number of search criteria was provided;
  • Indicating the minimum number of features necessary to obtain a clear result in the ranking of quality tools generated in the program response.
It should be emphasized that the program’s database is constantly expanded and updated via the aforementioned “expert interface” available within the program. The second step involved coding various combinations of selection features for a given tool and the suggested output generated by the program. The program’s effectiveness in finding quality tools was tested using a gradually narrowing number of search criteria for selected tool groups. These groups were selected based on interviews with experts from industry (employees involved in quality assurance).
The procedure for developing a set of training files involved entering various configurations of individual features into the program across all categories of tools. Operations were conducted based on “matrices” describing each tool in a feature-based system, with the assumed states/values of these features, which, from the system user’s perspective, also served as selection criteria. The experiment involved the user entering the tool’s features in the order suggested by the program (a prototype DSS system).
After completing each step in the system, i.e., specifying the characteristics of the tool being searched for, the user summarized the session using the Summary option. Furthermore, it was recorded at which point in the tool search the user received a 100% suggestion, i.e., a tool that fully matched the specified characteristics.
After coding a given case in a specially prepared table, the tool was searched for again, skipping the next single category (marked as “P” in Table 1), until a clear result was generated by the system. This allowed the user to determine the minimum number of features necessary to obtain a clear score in the quality tool ranking generated in the program’s response. A clear score here is defined as a system suggestion covering a maximum of three selected tools (with a 100% match). If four or more tools were suggested within the ranking, the case was coded as “no_clear_indication”.
Ultimately, 779 examples covering a total of 30 quality tools were used to build the decision tree models. The list of tools included groups representing the division into so-called traditional, new, and statistical tools recognized by practitioners. Table 1 shows a fragment of the training file with examples of quality tool selections. Membership in a given feature category (x) and the states of features (y) are coded in Table 1 using the x_y notation.
In the experiments, the entire set of examples (779) was divided into three files: training (548), validation (77), and test (154). This split was designed to preserve the representation of the individual decision classes (class balancing).
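The exact tooling used for this split is not detailed in the paper. Purely as an illustrative sketch, a stratified split with the same sizes could be produced with scikit-learn; the CSV file name and column layout are assumptions based on Table 1, and stratification presumes every decision class has enough examples.
```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical CSV export of the coded examples; column names follow Table 1
df = pd.read_csv("excellence_toolbox_examples.csv")
X, y = df.drop(columns=["QT"]), df["QT"]

# First split off the test set, then the validation set, stratifying on the
# decision class so that every quality-tool class stays represented
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=154, stratify=y, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=77, stratify=y_rest, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # expected: 548 77 154
```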
The importance of input parameters for the selection of quality tools is shown in Table 2 and Figure 1. When developing the models, feature importance was calculated as the sum—over all tree nodes—of the increases (deltas) in node purity (delta(I) in the case of classification) and expressed as a fraction of the maximum sum (over all predictors).
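The software used in the study computes these importances internally. As a rough open-source analogue (an assumption, not the authors’ toolchain), the same impurity-decrease importances, rescaled to the maximum as in Table 2, can be obtained with scikit-learn; the file name is hypothetical.
```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["TID", "TC", "PPDCA", "PDMAIC", "PU", "FO", "TU", "DL"]

df = pd.read_csv("excellence_toolbox_examples.csv")    # hypothetical export of the training file
enc = OrdinalEncoder()                                  # encode "2_4", "P", ... as integers
X = enc.fit_transform(df[FEATURES])
y = df["QT"]

tree = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)

# feature_importances_ sums the impurity (Gini) decreases over all nodes for each
# predictor; dividing by the maximum gives the 0-1 "validity" scale of Table 2
validity = tree.feature_importances_ / tree.feature_importances_.max()
for name, v in sorted(zip(FEATURES, validity), key=lambda t: -t[1]):
    print(f"{name}: {v:.3f}")
```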

2.2. Methods

The research methodology was divided into stages:
  • Preparing a training file based on actual quality tool selections by company employees;
  • Selecting decision tree algorithms;
  • Developing models by conducting a training phase of these models along with changing two parameters (% pruning and minimum number of examples forming a leaf)—changing these parameters also allowed for reducing the effect of model overfitting;
  • Testing the models on new input dataset;
  • Selecting the most effective model;
  • Implementing the model in an expert system.
In the first stage of the study, Aitech Sphinx 4.0/DeTreex/PC Shell (Katowice, Poland) software was used, and StatSoft Statistica Data Miner (v. 13.3, Tulsa, OK, USA) was used for in-depth analysis. Two hyperparameters were examined: the tree pruning percentage and the minimum leaf size. These are the most commonly used parameters as stopping criteria. In the experiments (the first step), multiple decision tree models were built, and the 12 best decision trees (Table 3) with different structures and generation parameters were selected. The number of inputs (8) and one output were kept constant across the models.
The evaluation parameter was overall accuracy. Each model was tested with a prepared test file. Figure 2 shows the testing of the best tree model (Aitech Sphinx 4.0).
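The trees themselves were generated in Aitech Sphinx 4.0. Purely as an illustrative analogue, a comparable two-parameter scan can be sketched in scikit-learn, where the minimum leaf size maps directly to min_samples_leaf, while the pruning percentage has no exact equivalent and is approximated here by cost-complexity pruning (ccp_alpha). The encoded splits X_train, y_train, X_test, y_test from the earlier sketches are assumed.
```python
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

# Scan an illustrative grid over the two hyperparameters and evaluate each tree
# with overall accuracy on the held-out test file, as in the experiments
results = []
for ccp_alpha in [0.0, 0.005, 0.01, 0.02]:          # rough stand-in for "% pruning"
    for min_leaf in [1, 2, 5]:                       # minimum number of examples forming a leaf
        clf = DecisionTreeClassifier(
            criterion="gini", min_samples_leaf=min_leaf,
            ccp_alpha=ccp_alpha, random_state=0).fit(X_train, y_train)
        acc = accuracy_score(y_test, clf.predict(X_test))
        results.append((ccp_alpha, min_leaf, acc))

best = max(results, key=lambda r: r[2])
print(f"best: ccp_alpha={best[0]}, min_samples_leaf={best[1]}, accuracy={best[2]:.4f}")
```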
In the second stage of the study, Statsoft Statistica Data Miner was used to perform an in-depth analysis of the classifiers.
The misclassification costs were defined as “equal,” the “Gini measure” was selected for goodness of fit, the prior probability was set to “estimated,” the stopping criterion was “prune at misclassification error,” and the stopping parameter was “minimum cardinality = 1.” A v-fold cross-validation (v = 10) was used to validate the models.
An algorithm similar to C&RT was used, with the Gini measure as the partitioning criterion. Handling of missing data was not necessary, as the dataset was complete.
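As a hedged illustration of this configuration (scikit-learn standing in for Statistica’s C&RT, which it does not replicate exactly), a Gini-based tree validated with 10-fold cross-validation might look as follows, reusing the encoded X and y from the feature-importance sketch.
```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# C&RT-like configuration: Gini partitioning criterion, no class weighting
# ("equal" misclassification costs), priors taken from the class frequencies
cart = DecisionTreeClassifier(criterion="gini", random_state=0)

# v-fold cross-validation with v = 10, as used to validate the models
scores = cross_val_score(cart, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```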
The next figure (Figure 3) shows the structure of the best tree no. 5 (blue—root ID = 1, vertices (25); red—leaves (26)).
Due to the complexity of the tree and poor readability, the next figure (Figure 4) shows a fragment of the tree.
Example rules derived from the tree:
0007: QT = “tree_diagram_100” if TC = “2_5”, FO = “6_1”, PPDCA = “3_4”;
0008: QT = “tree_diagram_100” if TC = “2_1”, PPDCA = “3_1”, PU = “5_2”;
0009: QT = “affinity_diagram_100” if TC = “2_1”, PPDCA = “3_2”, PU = “5_2”;
0010: QT = “affinity_diagram_100” if TC = “2_1”, PPDCA = “3_2”, PU = “5_6”;
0011: QT = “no_clear_indication” if TC = “2_5”, FO = “6_1”, PPDCA = “3_1”, DL = “P”;
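The rules above were produced by the Statistica/PC Shell toolchain. As an illustrative stand-in, root-to-leaf rules can also be read off a scikit-learn tree with its text export, again assuming the encoded X, y, and FEATURES from the earlier sketches; note that the printed thresholds refer to the ordinal codes rather than the original x_y labels.
```python
from sklearn.tree import DecisionTreeClassifier, export_text

tree5 = DecisionTreeClassifier(criterion="gini", min_samples_leaf=1,
                               random_state=0).fit(X, y)

# Each root-to-leaf path printed below corresponds to one IF ... THEN rule of the
# kind listed above (e.g. "QT = tree_diagram_100 if TC = 2_5, FO = 6_1, ...")
print(export_text(tree5, feature_names=FEATURES, max_depth=4))
```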
Table 4 and Figure 5 show the cost sequence resulting from the pruning method selected. This allows the selection of the “right-sized” tree, marked with an asterisk, based on the cross-validation cost (CV) and the resubstitution cost for each pruned tree.
Selecting the right tree based on cross-validation cost and resubstitution cost helps avoid overfitting or underfitting the data.
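The cost sequence in Table 4 comes from Statistica’s pruning procedure. An approximate scikit-learn analogue (an assumption, not the authors’ implementation) computes the cost-complexity pruning path and selects the tree with the lowest cross-validation cost, mirroring the choice of the “right-sized” tree.
```python
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Cost-complexity pruning sequence: each ccp_alpha yields one pruned tree,
# analogous to the sequence of trees 1-19 in Table 4
path = DecisionTreeClassifier(criterion="gini",
                              random_state=0).cost_complexity_pruning_path(X, y)

rows = []
for alpha in path.ccp_alphas:
    clf = DecisionTreeClassifier(criterion="gini", ccp_alpha=alpha, random_state=0)
    cv_cost = 1.0 - cross_val_score(clf, X, y, cv=10).mean()        # cross-validation cost
    resub_cost = 1.0 - accuracy_score(y, clf.fit(X, y).predict(X))  # resubstitution cost
    rows.append((alpha, cv_cost, resub_cost))

# The "right-sized" tree (marked * in Table 4) minimizes the cross-validation cost
best = min(rows, key=lambda r: r[1])
print(f"right-sized tree: ccp_alpha={best[0]:.5f}, CV cost={best[1]:.4f}, "
      f"resubstitution cost={best[2]:.4f}")
```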
Evaluation measures were calculated for each class of the best tree (no. 5) (Table 5).
The best tree (no. 5) achieved average values for recall (0.934798535), precision (0.891751946), and F1-score (0.910848951). Low values for these metrics occur when multiple tools are selected together, due to imbalanced classes. However, these values are still above 0.5. Further work on improving the classifiers should include supplementing the training file.
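The per-class measures in Table 5 follow the standard definitions of recall, precision, and F1. For reference, a scikit-learn classification report over the held-out test set (assuming the encoded splits from the earlier sketches) produces the same kind of per-class values together with their averages.
```python
from sklearn.metrics import classification_report
from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier(criterion="gini", min_samples_leaf=1,
                             random_state=0).fit(X_train, y_train)

# Per-class recall, precision and F1 (as in Table 5) plus macro averages;
# zero_division=0 suppresses warnings for classes absent from the predictions
print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```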

3. Results and Discussion

3.1. A Model Based on an Artificial Neural Network

This article contributes to the discussion on selecting the best classifiers to solve a specific research problem, namely, selecting the best quality tools used in enterprises to solve quality problems.
The subject of this article is research on the development of decision tree models, or so-called white box models. Previous research presented in publications [53,54] concerned the development of models in the form of neural networks, or so-called black box models. Decision trees were chosen due to their clear interpretability.
For a given research problem and a set of training examples, various types of machine learning algorithms should be evaluated. In previous experiments [53,54], multiple neural network models were built with different parameters and structures. The input layer (8 inputs) and output layer (1 output) were fixed. The neural network models were differentiated by varying the number of hidden neurons (5–30), the number of training epochs of the BFGS algorithm (10–150), the error function (entropy and SOS), and the activation functions in the hidden and output layers (Softmax, exponential, logistic, linear, Tanh). The best neural network model was selected based on the highest model efficiency and the shortest network training time. The best neural network achieved 94.87% effectiveness.
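The networks themselves were built in Statistica. As an approximate open-source analogue only (the lbfgs solver is a BFGS-family optimizer, but the available error and activation functions differ from Statistica’s), a comparable scan over hidden-layer sizes can be sketched with scikit-learn’s MLPClassifier, again assuming the encoded splits from Section 2.
```python
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier

best = (None, 0.0)
for hidden in range(5, 31, 5):                        # hidden neurons 5-30, as in the experiments
    mlp = MLPClassifier(hidden_layer_sizes=(hidden,), solver="lbfgs",
                        activation="logistic", max_iter=150, random_state=0)
    mlp.fit(X_train, y_train)
    acc = accuracy_score(y_test, mlp.predict(X_test))
    if acc > best[1]:
        best = (hidden, acc)

print(f"best hidden size: {best[0]} neurons, test accuracy: {best[1]:.4f}")
```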
In the first experiment, the training file contained 410 examples, covering 11 quality tools [53]. In the second experiment, 369 records were added to the example file, covering another 19 tools. This file, containing a total of 779 examples and covering a total of 30 quality tools, also served as the basis for developing decision tree models. The list of tools included a group of tools representing the division of tools into the so-called traditional, new, and statistical tools recognized by practitioners.

3.2. Decision Tree Models

Figure 6 illustrates the effectiveness of the 12 selected decision trees on the extended dataset. The best tree, with the fewest classification errors, is the one built with no pruning and a minimum leaf size of 1. Its effectiveness is 96.75%: of the 154 test cases, only 5 were misclassified. In contrast, trees with a minimum leaf size of 5 perform markedly worse, with classification effectiveness dropping to approximately 85%. For comparison, the neural network from previous studies achieved 94.87%, with 22 errors.
Limitations associated with the developed decision tree models include:
  • The size of the dataset; the dataset contains actual data from the enterprise, which is difficult to collect and is a lengthy process.
  • During the research, many steps had to be performed manually.
  • Lack of external validation.

3.3. An Example of Using a Decision Tree Model in an Expert System for Selecting Quality Tools

It was noticed that even for a less experienced employee, decision-making time using decision tree models is several percent shorter than using the manual method or the Excellence Toolbox system.
An expert system can assist a quality engineer in selecting quality tools. Figure 7 shows sample expert system screens. Figure 7a shows the expert system’s startup window, followed by Figure 7b, where input data is entered into the expert system using tree-based decision rules (TC = 2_6, PDMAIC = 4_5), and the solution is provided as output by the system (QT = matrix_data_analysis_100).
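For illustration of how such tree-derived rules can drive a consultation, the following minimal sketch hard-codes two rules: the one demonstrated in Figure 7 and rule 0007 from the listing in Section 2.2. The function and its fallback are hypothetical and do not reproduce the expert system’s actual inference engine.
```python
def recommend(facts: dict) -> str:
    """Very small rule-based stand-in for the expert system consultation."""
    # Rule from the Figure 7 example session
    if facts.get("TC") == "2_6" and facts.get("PDMAIC") == "4_5":
        return "matrix_data_analysis_100"
    # Rule 0007 from the tree-derived rule listing in Section 2.2
    if (facts.get("TC") == "2_5" and facts.get("FO") == "6_1"
            and facts.get("PPDCA") == "3_4"):
        return "tree_diagram_100"
    return "no_clear_indication"

print(recommend({"TC": "2_6", "PDMAIC": "4_5"}))   # -> matrix_data_analysis_100
```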

4. Conclusions

As mentioned above, quality tools have a well-established position in quality management, regardless of the adopted version of this concept. A key issue to address from a practitioner’s perspective is therefore providing support in selecting these tools to ensure they are most useful in a given location, setting, and time.
The aim of this paper was to present the results of research in which classifiers in the form of decision trees were developed. At the same time, attempts were made to demonstrate that decision tree classifiers (trained on an extended Excellence Toolbox dataset) can automatically recommend quality tools with higher accuracy than neural networks, while offering interpretable rules. The decision-tree models achieve strong classification performance, with the best tree reporting 96.75% effectiveness. In contrast, the neural network from previous studies achieved 94.87%.
The results of previous research indicate that further work on improving the classifiers should include supplementing the training file. This will enable the search for more effective models using other types of classifiers. Further research will also be extended to models built using other algorithms (e.g., logistic regression, k-NN).
Thus, future research should aim to increase the training file size, evaluate the models in real industrial settings, and use ensemble methods.
The implementation of the obtained solutions is intended for use in the construction of an expert system.
In times when the pressure to effectively solve quality problems and the need to take advantage of emerging opportunities are so strong, building expert systems for selecting quality tools becomes invaluable in the functioning of modern enterprises, especially for employees with less experience.

Author Contributions

Conceptualization, B.S. and I.R.; Methodology, B.S. and I.R.; Formal Analysis, B.S. and I.R.; Investigation, B.S. and I.R.; Resources, B.S.; Data Curation, B.S.; Writing—Original Draft Preparation, B.S. and I.R.; Writing—Review and Editing, B.S. and I.R.; Visualization, I.R.; Supervision, B.S. and I.R.; Funding Acquisition, B.S. and I.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a statutory activity financed by the Polish Ministry of Science and Higher Education, grant number (0613/SBAD/4940), and a grant to maintain the research potential of Kazimierz Wielki University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article material. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Venkatesh, V.; Samsingh, R.V.; Karthik, P. Application of Quality Tools in a Plastic Based Production Industry to achieve the Continuous Improvement Cycle. Qual. Access Success 2017, 18, 61. [Google Scholar]
  2. Uddin, M.M. Improving product quality and production yield in wood flooring manufacturing using basic quality tools. Int. J. Qual. Res. 2020, 15, 155–170. [Google Scholar] [CrossRef]
  3. Chen, S.-H. Integrated Analysis of the Performance of TQM Tools and Techniques: A Case Study in the Taiwanese Motor Industry. Int. J. Prod. Res. 2013, 51, 1072–1083. [Google Scholar] [CrossRef]
  4. Singh, J.; Singh, H. Quality Control Tools for Enhancing Manufacturing Performance: A Case Study. IUP J. Oper. Manag. 2024, 23, 27–43. [Google Scholar]
  5. Hamrol, A. Quality Management and Engineering. With a Look into Reality 4.0; PWN: Warszawa, Poland, 2023. [Google Scholar]
  6. ISO 9001:2015; Quality Management Systems–Requirements. ISO: Geneva, Switzerland, 2015.
  7. Castello, J.; De Castro, R.; Marimon, F. Use of quality tools and techniques and their integration into ISO 9001: A wind power supply chain case. Int. J. Qual. Reliab. Manag. 2020, 37, 68–89. [Google Scholar] [CrossRef]
  8. Fotopoulos, C.; Psomas, E. The use of quality management tools and techniques in ISO 9001: 2000 certified companies: The Greek case. Int. J. Product. Perform. Manag. 2009, 58, 564–580. [Google Scholar] [CrossRef]
  9. Nedra, A.; Néjib, S.; Yassine, C.; Morched, C. A new lean Six Sigma hybrid method based on the combination of PDCA and the DMAIC to improve process performance: Application to clothing SME. Ind. Textila 2019, 70, 447–456. [Google Scholar] [CrossRef]
  10. Breyfogle, F.W., III. Implementing Six Sigma: Smarter Solutions using Statistical Methods; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  11. Starzyńska, B.; Bryke, M.; Diakun, J. Human lean green method—a new approach toward auditing manufacturing & service companies. Sustainability 2021, 13, 10789. [Google Scholar]
  12. Wittenberger, G.; Teplická, K. The Synergy Model of Quality Tools and Methods and Its Influence on Process Performance and Improvement. Appl. Sci. 2024, 14, 5079. [Google Scholar] [CrossRef]
  13. Hagemeyer, C.; Gershenson, J.K.; Johnson, D.M. Classification and application of problem solving quality tools: A manufacturing case study. TQM Mag. 2006, 18, 455–483. [Google Scholar] [CrossRef]
  14. Okpala, C.; Nwamekwe, C.O.; Ezeanyim, O.C. The Implementation of Kaizen Principles in Manufacturing Processes: A Pathway to Continuous Improvement. Int. J. Eng. Invent. 2024, 13, 116–124. [Google Scholar]
  15. Fonseca, L.; Lima, V.; Silva, M. Utilization of quality tools: Does sector and size matter? Int. J. Qual. Res. 2015, 9, 605–620. [Google Scholar]
  16. Oakland, J.S. Total Quality Management and Operational Excellence: Text With Cases; Routledge: London, UK, 2014. [Google Scholar]
  17. Masoudi, E.; Shahin, A. The influence of TQM on sustainability performance: The mediating role of green technology innovation in manufacturing firms. Technol. Sustain. 2025, 4, 353–379. [Google Scholar] [CrossRef]
  18. Fotopoulos, C.B.; Psomas, E.L. The impact of “soft” and “hard” TQM elements on quality management results. Int. J. Qual. Reliab. Manag. 2009, 26, 150–163. [Google Scholar] [CrossRef]
  19. Safari, A.; Parast, M.M.; Al Ismail, V.B. Lean Six Sigma, ISO 9001, and organizational performance: An integrated approach. Qual. Manag. J. 2025, 32, 180–195. [Google Scholar] [CrossRef]
  20. Athab, K.R. The Impact of Applying Kaizen on Improving Product Quality: A Case Study in the General Company for Electrical Industries. Eur. J. Manag. Econ. Bus. 2025, 2, 99–112. [Google Scholar] [CrossRef] [PubMed]
  21. Widiwati, I.T.B.; Liman, S.D.; Nurprihatin, F. The implementation of Lean Six Sigma approach to minimize waste at a food manufacturing industry. J. Eng. Res. 2025, 13, 611–626. [Google Scholar] [CrossRef]
  22. Ibrahim, A.; Kumar, G. A framework for integrating Lean Six Sigma and Industry 4.0 for sustainable manufacturing. Int. J. Prod. Res. 2025, 1–23. [Google Scholar] [CrossRef]
  23. Emmanuel, S.; Taifa, I.W. Quality Management Systems (QMSs): Exploring the Effect of Risk Factors and Developing the Risk Management Framework for QMS Effectiveness in Service Institutions. Qual. Reliab. Eng. Int. 2025, 41, 3176–3196. [Google Scholar] [CrossRef]
  24. Tague, N.R. The Quality Toolbox; Quality Press: Milwaukee, WI, USA, 2023. [Google Scholar]
  25. Arcidiacono, G.; Pieroni, A. The revolution lean six sigma 4.0. Int. J. Adv. Sci. Eng. Inf. Technol. 2018, 8, 141–149. [Google Scholar] [CrossRef]
  26. Bhattacharya, A.; Nand, A.; Castka, P. Lean-green integration and its impact on sustainability performance: A critical review. J. Clean. Prod. 2019, 236, 117697. [Google Scholar] [CrossRef]
  27. Abualfaraa, W.; Salonitis, K.; Al-Ashaab, A.; Ala’raj, M. Lean-green manufacturing practices and their link with sustainability: A critical review. Sustainability 2020, 12, 981. [Google Scholar] [CrossRef]
  28. Duarte, S.; Mc Dermott, O. The dimensions of lean-green 4.0 readiness a systematic literature review. Prod. Plan. Control. 2025, 36, 1175–1187. [Google Scholar] [CrossRef]
  29. Le, H.; Duffy, G. Human-Centered Lean Six Sigma: Creating a Culture of Integrated Operational Excellence; Productivity Press: New York, NY, USA, 2023. [Google Scholar]
  30. Kowang, T.O.; Binti Afandi, N.N.; Long, C.S.; Rasli, A. Classification of quality tools and techniques: A quality management system approach. Adv. Sci. Lett. 2015, 21, 1329–1332. [Google Scholar] [CrossRef]
  31. Misztal, A.; Ratajszczak, K. Possibilities of using contemporary quality management methods and tools for the sustainable development of the organization. Sustainability 2025, 17, 617. [Google Scholar] [CrossRef]
  32. Tetteh, F.K.; Nyantakyi, B.; Owusu Kwateng, K.; Osei, H.V. The mediation role of innovation in the relationship between total quality management and performance of small and medium scale enterprises. Int. J. Qual. Reliab. Manag. 2025, 42, 676–705. [Google Scholar] [CrossRef]
  33. Tari, J.J.; Sabater, V. Quality tools and techniques: Are they necessary for quality management? Int. J. Prod. Econ. 2004, 92, 267–280. [Google Scholar] [CrossRef]
  34. Żukowska, M.; Buń, P.; Górski, F.; Starzyńska, B. Cyber sickness in industrial virtual reality training. In Proceedings of the International Scientific-Technical Conference Manufacturing; Springer International Publishing: Cham, Switzerland, 2019; pp. 137–149. [Google Scholar]
  35. Shahin, A.; Arabzad, M.S.; Ghorbani, M. Proposing an integrated framework of seven basic and new quality management tools and techniques: A roadmap. Res. J. Int. Stud. 2010, 17, 183–195. [Google Scholar]
  36. Saifuddin, I.; Rizal, H.S. A Study of Quality Tools and Techniques in the Context of Industrial Revolution 4.0 in Malaysia. What’s New? Calitatea 2020, 21, 88–96. [Google Scholar]
  37. Markulik, Š.; Šolc, M.; Fiľo, M. Implementation of Quality Tools in Mechanical Engineering Piece Production. Appl. Sci. 2024, 14, 944. [Google Scholar] [CrossRef]
  38. Pongboonchai-Empl, T.; Antony, J.; Garza-Reyes, J.A.; Komkowski, T.; Tortorella, G.L. Integration of Industry 4.0 technologies into Lean Six Sigma DMAIC: A systematic review. Prod. Plan. Control. 2023, 35, 1–26. [Google Scholar] [CrossRef]
  39. Sader, S.; Husti, I.; Daroczi, M. A review of quality 4.0: Definitions, features, technologies, applications, and challenges. Total Qual. Manag. Bus. Excell. 2022, 33, 1164–1182. [Google Scholar] [CrossRef]
  40. Santos, G.; Sá, J.C.; Félix, M.J.; Barreto, L.; Carvalho, F.; Doiro, M.; Zgodavová, K.; Stefanović, M. New needed quality management skills for quality managers 4.0. Sustainability 2021, 13, 6149. [Google Scholar] [CrossRef]
  41. Carnerud, D.; Mårtensson, A.; Ahlin, K.; Slumpi, T.P. On the inclusion of sustainability and digitalisation in quality management—An overview from past to present. Total Qual. Manag. Bus. Excell. 2025, 36, 199–221. [Google Scholar] [CrossRef]
  42. Turner, M.; Oakland, J. Defining Quality 4.0. Qual. World 2021, 30, 25–31. [Google Scholar]
  43. Alves, K.R.S.; Hansen, É. Application of quality tools and process automation in a PVC analysis laboratory. Int. J. Product. Qual. Manag. 2025, 44, 256–277. [Google Scholar] [CrossRef]
  44. Oliveira, D.; Alvelos, H.; Rosa, M.J. Quality 4.0: Results from a systematic literature review. TQM J. 2025, 37, 379–456. [Google Scholar] [CrossRef]
  45. Armutcu, B.; Majeed, M.U.; Hussain, Z.; Aslam, S. The impact of digital voice of customer and product lifecycle management on Quality 4.0: Moderating role of AI in SMEs. J. Manuf. Technol. Manag. 2025, 36, 1265–1283. [Google Scholar] [CrossRef]
  46. Sarker, T.R.; Dunston, J.K. Implementation of Quality 4.0, a systematic review of approaches, root causes, and challenges. Qual. Eng. 2025, 37, 204–218. [Google Scholar] [CrossRef]
  47. Kushwaha, D.; Talib, F. A bibliometric analysis of Quality 4.0: Current status, trends and future research directions. Int. J. Qual. Reliab. Manag. 2025, 42, 474–503. [Google Scholar] [CrossRef]
  48. Starzyńska, B.; Hamrol, A. Excellence toolbox: Decision support system for quality tools and techniques selection and application. Total Qual. Manag. Bus. Excell. 2013, 24, 577–595. [Google Scholar] [CrossRef]
  49. Amran, M.M.; Khairanum, S.; Roslan, B.R.; Ikbar, A.M.; Anwar, A.F. Development of intelligent decision support system for selection of quality tools and techniques. Int. J. Mach. Learn. Comput. 2019, 9, 893–898. [Google Scholar] [CrossRef]
  50. Ibrahim, S.Z.; Daril, M.A.M.; Wahab, M.I.A.; Subari, K.; Manan, Q.; Irum, S. Optimizing Quality Tool Selection with Cosine Similarity for Continuous Improvement. Pak. J. Life Soc. Sci. 2024, 22, 3228–3239. [Google Scholar] [CrossRef]
  51. Ibrahim, S.Z.; Daril, M.A.M.; Subari, K.; Wahab, M.I.A.; Ali, K.A.M. Development of Attributes of Quality Tools and Techniques for Quality Engineering Improvement. In Advanced Transdisciplinary Engineering and Technology; Springer International Publishing: Cham, Switzerland, 2022; pp. 143–152. [Google Scholar]
  52. Amran, M.; Daril, B.M. Rational Decision for Selection of Quality Tools and Techniques using Cosine Similarity. Asia Proc. Soc. Sci. 2022, 9, 273–274. [Google Scholar] [CrossRef]
  53. Starzyńska, B.; Rojek, I. Supporting the Selection of Quality Tools Using Neural Networks. In Proceedings of the ISPEM 2023: International Conference on Intelligent Systems in Production Engineering and Maintenance, Advances in Production, Wrocław, Poland, 13–15 September 2023; Lecture Notes in Networks and Systems. Burduk, A., Batako, A., Machado, J., Wyczółkowski, R., Antosz, K., Gola, A., Eds.; Springer: Cham, Switzerland, 2023; Volume 790, pp. 133–145. [Google Scholar]
  54. Starzyńska, B.; Rojek, I. Further Research on Neural Networks in Supporting the Selection of Quality Tools. In Proceedings of the International Conference on Intelligent Systems in Production Engineering and Maintenance; Springer Nature Switzerland: Cham, Switzerland, 2025; pp. 278–290. [Google Scholar]
  55. Rameshkumar, K. Effective Decision Tree Algorithm for Reality Mining; Lap Lambert Academic Publishing: Saarbrücken, Germany, 2021. [Google Scholar]
Figure 1. Parameter validity chart.
Figure 2. Test of the best tree model.
Figure 3. Decision tree no. 5 (blue—vertices (25); red—leaves (26)).
Figure 4. Fragment of tree no. 5, where: ID—node number; blue—vertices; red—leaves; N—number of examples from which the node or leaf was created; splitting parameters: TC—tool category, PU—purpose of use, PPDCA—place in PDCA.
Figure 5. Cost sequence (resubstitution cost—blue line, cross-validation (SK) cost—red line).
Figure 6. Effectiveness of classification trees (black—highest value; red—lowest values; blue—intermediate values).
Figure 7. Sample screens of the expert system for selecting quality tools.
Table 1. A fragment of the training file (inputs: TID, TC, PPDCA, PDMAIC, PU, FO, TU, DL; output: QT).
Type of input data (TID) | Tool category (TC) | Place in PDCA (PPDCA) | Place in DMAIC (PDMAIC) | Purpose of use (PU) | Form of the output (FO) | Tool user (TU) | Difficulty level (DL) | Quality Tool (QT)
1_1 | 2_4 | 3_3 | 4_2 | 5_2 | 6_6 | 7_3 | 8_1 | check_sheet_100
1_1 | 2_4 | 3_3 | 4_2 | 5_2 | 6_6 | 7_3 | P | check_sheet_100
1_1 | 2_4 | 3_3 | 4_2 | 5_2 | 6_6 | P | P | check_sheet_100
1_1 | 2_4 | 3_3 | 4_2 | 5_2 | P | P | P | check_sheet_100; stratification_100
1_1 | 2_4 | 3_3 | 4_2 | P | P | P | P | no_clear_indication
Table 2. Validity of parameters.
Parameter | % | Validity
FO | 100 | 1.000000
PPDCA | 96 | 0.963274
PU | 81 | 0.814341
TU | 57 | 0.574614
TC | 45 | 0.445353
PDMAIC | 32 | 0.324221
DL | 25 | 0.252921
TID | 16 | 0.157190
Table 3. Decision tree parameters.
% Tree Pruning | Min. Number of Examples Forming a Leaf | Number of Misclassified Cases | % of Misclassified Cases | Classifier Efficiency (%)
No trim (0%) | 1 | 5 | 3.25 | 96.75
No trim (0%) | 2 | 11 | 7.14 | 92.86
No trim (0%) | 5 | 22 | 14.25 | 85.75
25% | 1 | 17 | 11.04 | 88.96
25% | 2 | 17 | 11.04 | 88.96
25% | 5 | 22 | 14.29 | 85.71
35% | 1 | 13 | 8.44 | 91.56
35% | 2 | 17 | 11.04 | 88.96
35% | 5 | 22 | 14.29 | 85.71
50% | 1 | 9 | 5.84 | 94.16
50% | 2 | 17 | 11.04 | 88.96
50% | 5 | 22 | 14.29 | 85.71
Table 4. Tree building cost sequence (* indicates the best tree).
ID | End Nodes | SK (CV) Cost | SK Std. Error | Resubstitution Cost
tree 1 | 45 | 0.095122 | 0.014489 | 0.024390
tree 2 | 41 | 0.095122 | 0.014489 | 0.026829
tree 3 | 35 | 0.090244 | 0.014151 | 0.031707
tree 4 | 31 | 0.080488 | 0.013435 | 0.036585
* tree 5 | 26 | 0.075610 | 0.013056 | 0.048780
tree 6 | 24 | 0.095122 | 0.014489 | 0.056098
tree 7 | 17 | 0.095122 | 0.014489 | 0.087805
tree 8 | 16 | 0.095122 | 0.014489 | 0.092683
tree 9 | 14 | 0.119512 | 0.016020 | 0.109756
tree 10 | 12 | 0.134146 | 0.016831 | 0.129268
tree 11 | 11 | 0.143902 | 0.017334 | 0.141463
tree 12 | 10 | 0.163415 | 0.018260 | 0.160976
tree 13 | 8 | 0.234146 | 0.020913 | 0.219512
tree 14 | 7 | 0.251220 | 0.021420 | 0.251220
tree 15 | 6 | 0.302439 | 0.022684 | 0.302439
tree 16 | 5 | 0.368293 | 0.023821 | 0.368293
tree 17 | 3 | 0.524390 | 0.024664 | 0.524390
tree 18 | 2 | 0.609756 | 0.024091 | 0.609756
tree 19 | 1 | 0.802439 | 0.019664 | 0.802439
Table 5. Calculation of evaluation measures for individual classes of the best tree (no. 5).
ID | QT | Recall | Precision | F1-Score
1 | tree_diagram_100 | 0.964285714 | 0.900000000 | 0.931034483
2 | no_clear_indication | 0.935897436 | 0.848837209 | 0.890243902
3 | tree_diagram_100;mind_map_100 | 1.000000000 | 0.750000000 | 0.857142857
4 | tree_diagram_100;fishbone_diagram_100 | 1.000000000 | 1.000000000 | 1.000000000
5 | fault_tree_analysis_100;decision_tree_100 | 0.615384615 | 0.615384615 | 0.615384615
6 | SIPOC_diagram_100 | 1.000000000 | 1.000000000 | 1.000000000
7 | process_decision_program_chart_100 | 1.000000000 | 1.000000000 | 1.000000000
8 | decision_tree_100 | 1.000000000 | 1.000000000 | 1.000000000
9 | decision_tree_100;SIPOC_diagram_100 | 0.500000000 | 0.500000000 | 0.500000000
10 | decision_tree_100;decision_tree_for_CCP_100 | 1.000000000 | 0.750000000 | 0.857142857
11 | decision_tree_100;top_down_flowchart_100 | 1.000000000 | 1.000000000 | 1.000000000
12 | decision_tree_100;fault_tree_analysis_100 | 0.800000000 | 0.666666667 | 0.727272727
13 | requirement_table_100 | 1.000000000 | 1.000000000 | 1.000000000
14 | requirement_table_100;QFD_100 | 1.000000000 | 1.000000000 | 1.000000000
15 | affinity_diagram_100 | 1.000000000 | 1.000000000 | 1.000000000
16 | matrix_diagram_100;requirements_and_measures_tree_100 | 1.000000000 | 1.000000000 | 1.000000000
17 | arrow_diagram_100;flowchart_100 | 1.000000000 | 1.000000000 | 1.000000000
18 | matrix_data_analysis_100 | 1.000000000 | 0.987654321 | 0.993788820
19 | matrix_diagram_100 | 1.000000000 | 1.000000000 | 1.000000000
20 | matrix_diagram_100;requirements_and_measures_tree_100 | 1.000000000 | 1.000000000 | 1.000000000
21 | matrix_diagram_100;requirements_and_measures_tree_100;force_field_analysis_100 | 0.750000000 | 0.600000000 | 0.666666667
22 | relations_diagram_100 | 1.000000000 | 1.000000000 | 1.000000000
AVERAGE VALUE | | 0.934798535 | 0.891751946 | 0.910848951
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
