Article

Implementing Sensible Algorithmic Decisions in Manufacturing

by Luis Asunción Pérez-Domínguez 1, Dynhora-Danheyda Ramírez-Ochoa 2,*, David Luviano-Cruz 1,*, Erwin-Adán Martínez-Gómez 1, Vicente García-Jiménez 3 and Diana Ortiz-Muñoz 1

1 Departamento de Ingeniería Industrial y Manufactura, Instituto de Ingeniería y Tecnología, Universidad Autónoma de Ciudad Juárez, Av. Plutarco Elías Calles #1210 Fovissste Chamizal, Ciudad Juárez, Chihuahua C.P. 32310, Mexico
2 Tecnologías de la Información e Innovación Digital, Universidad Tecnológica de Chihuahua, Av. Montes Americanos #9501 Col. Sector 35, Ciudad Chihuahua, Chihuahua C.P. 31216, Mexico
3 Departamento de Ingeniería Eléctrica y Computación, Instituto de Ingeniería y Tecnología, Universidad Autónoma de Ciudad Juárez, Av. Plutarco Elías Calles #1210 Fovissste Chamizal, Ciudad Juárez, Chihuahua C.P. 32310, Mexico
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(16), 8885; https://doi.org/10.3390/app15168885
Submission received: 5 July 2025 / Revised: 31 July 2025 / Accepted: 6 August 2025 / Published: 12 August 2025
(This article belongs to the Special Issue Artificial Intelligence on the Edge for Industry 4.0)

Abstract

A significant component of making intelligent decisions is optimizing algorithms. In this context, it is imperative to develop more efficient algorithms capable of processing large quantities of intricate data quickly and accurately. The main contribution of this study lies in the integration of optimization theory with swarm intelligence through multicriteria decision-making methods (MCDMs). This study shows that combining dimensional analysis (DA) with particle swarm optimization (PSO) can improve analysis and decision making intelligently and efficiently, resolving PSO's shortcomings. A convergence investigation across the bat algorithm (BA), MOORA-PSO, TOPSIS-PSO, DA-PSO, and PSO is carried out to substantiate this assertion. Additionally, the ANOVA method is used to validate data reliability and to evaluate the algorithms' correctness.

1. Introduction

Throughout modern history, transformative shifts have characterized the evolution of companies, propelled by successive revolutions. Originating in the 18th century with the First Industrial Revolution and progressing to what is now recognized as Industry 4.0 [1], this ongoing process has been driven by an unwavering commitment to meeting societal demands and mitigating risks. In this relentless pursuit, companies have given rise to a spectrum of technologies revolutionizing both the personal and professional aspects of daily life. Fundamental to this development is scientific research and technological advancements, which are crucial for enhancing business competitiveness [2,3,4].
In the dynamic landscape of business, significant challenges emerge: addressing pressing societal needs, optimizing resources, streamlining processes, and reducing costs. The escalating complexity of these processes has spurred researchers to advance not only information processing and analysis but also the speed of result interpretation [5,6].
The growth and stability of companies, as well as individuals, hinge on daily decisions aligned with their environment. Consideration of a range of feasible alternatives and potential event variations forms the basis for these decisions. Consequently, decision-making holds paramount importance for companies. Optimal decision-making necessitates pre-analyzed information and the generation of alternative solutions for the presented problem. Each decision involves the evaluation of criteria and alternatives to discern the cost–benefit impact on the company [7,8].
In essence, decision-making constitutes a series of responses to circumstances in human daily life. These responses result from mental analyses where available options are compared. Decision making not only propels the evolution of humanity but also shapes the trajectory of companies, with each decision reflecting the consequences of actions taken or not taken [9,10,11].
In this dynamic context, decision making in the business sphere undergoes significant transformation. Decision makers now encounter a multitude of complex information. The exponential growth in the volume and quality of data poses new demands, spanning classification, analysis, and interpretation, with the aim of transforming these data into valuable information. Additionally, modern consumers expect the rapid delivery of information, products, and services, prompting the industry to devise innovative strategies and models to meet these expectations [12,13]. Consequently, decision making evolves into an intricate process involving the management of both secure and uncertain data, as well as quantitative and qualitative information. This includes historical data, statistical studies, and expert knowledge [14,15,16].
Several decades ago, the decision-making process was straightforward. However, with the integration of technology, it has become increasingly complex. This complexity manifests in the identification of valuable information derived from both complete and incomplete data [17,18]. When evaluating the information, a set of options and at least two characteristics are considered, and the decision maker must determine preferences for these characteristics to establish a starting point for the choice [19,20,21].
Currently, various strategies encompass information management and processing methods. A significant challenge in decision making involves efficiently representing and interrelating data to generate information promptly [8,15,22]. It is important to recognize that values are derived not only from complete information but also from uncertain or missing data, involving the simultaneous incorporation of qualitative and quantitative information [3].
On the other hand, technological advancements drive the development of mathematical models grounded in both probabilistic and non-probabilistic aspects, ensuring precision in the final results [23,24]. However, these advancements also present disadvantages, such as the proliferation of decision-making methods and techniques, as well as the constant search for more sophisticated and practical methods adaptable to various environments while considering conflicting multiple objectives [19,24].
Amidst the diversity of strategies, optimization methods (OMs) emerge with the aim of simplifying decisions and, consequently, reducing losses and/or increasing gains. Within this optimization family, the variety of methods based on the nature of the problem adds a level of challenge to selecting the most appropriate strategy [8,15,22]. Notwithstanding the complexities, optimization methods address real-world problems by applying a fundamental rule: “do more with less”. These methods follow a series of mathematical steps that simultaneously consider priorities, constraints, and situations to arrive at the best solution [15,16,25].
Additionally, in the category of optimization methods are metaheuristics, offering solutions close to the optimal ones and, unlike other methods, having faster processing times. The efficiency of metaheuristics lies in searching for the best solution in a defined space, simulating real-life actions. Among these metaheuristic methods are those based on natural behavior and collective intelligence [15,26,27]. Other optimization methods are designed for multicriteria decision making (MCDM), comparing, selecting, and ranking results to arrive at the best decision [28,29].
As a result, strategies undergo significant advancements with the goal of achieving more with less. However, limitations and opportunities for improvement persist. For instance, the particle swarm optimization (PSO) algorithm stands out for its search capability and ease of implementation but may be limited by easily converging to local results, complicating the attainment of global optimization [6,30]. On the other hand, multicriteria decision-making (MCDM) methods represent another optimization strategy, evaluating multiple conditions through algorithms and mathematical tools to identify the best alternative. Nevertheless, these methods rely on decision makers to assign weights to each criterion, introducing subjectivity into the process [20,28].
In response to the aforementioned challenges, there is a growing interest in improving outcomes for decision makers. In this regard, the adoption of various optimization strategies can significantly contribute to mitigating the difficulties associated with the use of algorithms and methods in their conventional forms [31,32]. Similarly, the implementation of a hybrid approach emerges as an effective way to enhance the effectiveness of results [33,34].
As previously mentioned, the decision-making process is complex and requires careful consideration, especially in crucial decisions. The assessment and selection of intelligent decisions are complicated due to the lack of complete information or the subjectivity of decision makers. To address these challenges, it is necessary to enhance existing decision-making algorithms [14,19]. The efficient combination of the PSO algorithm with an MCDM aims to leverage their strengths, simultaneously mitigating weaknesses, and providing a robust strategy for decision making in complex environments [31,34].
The PSO algorithm, recognized for its simple and effective structure in optimizing nonlinear and convex functions, faces the crucial limitation of quickly converging to local solutions, compromising the ability to achieve global optimization. Additionally, it grapples with the challenge of balancing exploration in the search for new solutions and exploitation to improve existing solutions, potentially leading to suboptimal outcomes [31,35].
In the case of MCDM, despite its solid mathematical foundation and flexibility for application in various contexts, the weighting of criteria introduces biases, heavily relying on the subjectivity of decision-makers. Moreover, in problems with many criteria and alternatives, the application of MCDM can become complex and computationally intensive [36,37].
The utilization of PSO in decision-making carries the risk of premature convergence towards local results, resulting in suboptimal solutions. In MCDM, the challenge lies in assigning weights to criteria due to the inherent subjectivity. The integration of PSO and MCDM, aimed at generating hybrid algorithms, seeks to yield superior results and diminish subjectivity. However, this integration introduces additional challenges, including heightened complexity and effort in algorithm development, as well as results that are more intricate to interpret and communicate. The results and insights derived from these hybrid approaches may represent valuable contributions to future research and development in this field.
As per the literature, hybrid algorithms enable the enhancement of efficiency and result quality while fostering technological innovation. Primarily, hybrid algorithms leverage the strengths of multiple approaches by balancing the capacities of the integrated algorithms, thereby enabling the attainment of optimal and well-founded solutions [31,33].
Furthermore, the intention is to overcome premature convergence, a characteristic limitation of PSO, by combining it with MCDM. The incorporation of multicriteria evaluation helps maintain an appropriate balance between global exploration and local exploitation, enabling the achievement of solutions that transcend local optima [30,35].
One standout characteristic of hybrid algorithms is their adaptability to diverse business contexts. This adaptability allows them to effectively navigate different business environments and cater to varied needs [38,39]. The amalgamation of PSO and MCDM aims to provide flexibility and efficiency, enabling application in complex and dynamic business scenarios while adjusting to the specific needs of each situation.
Moreover, the integration of different approaches optimizes the ability to address specific challenges that might be difficult to handle with a single method. This integrated approach enhances problem-solving capabilities, especially when confronted with intricate business issues. The result is significantly improved efficiency when solving complex business problems.
In addition, the contribution to technological innovation is a direct outcome of the research and development of these hybrid algorithms. The continuous process of integration and improvement to overcome specific challenges not only ensures the evolution of hybrid algorithms but also substantially contributes to the overall advancement in the field of computational intelligence and machine-assisted decision making. This ongoing innovation fosters progress and positive transformations in technology.
In this study, the following innovations and contributions are highlighted, considering key aspects to evaluate the performance of the algorithms:
(a)
The development of three hybrid algorithms with the purpose of mitigating the limitations of the PSO algorithm, enhancing the effectiveness of the obtained results. Key criteria for performance evaluation are identified, including efficiency, accuracy, and convergence time.
(b)
Comparison between PSO and the hybrid algorithms to assess their performance in the same case study, allowing specific metrics such as solution quality and convergence time to be measured.
(c)
The determination of the hybrid algorithm that achieves a balance between global and local population scanning, overcoming PSO limitations related to premature convergence toward the local optimum. The stability and robustness of each algorithm are analyzed against variations in input data, seeking consistent performance under different conditions.
(d)
Implementation of the hybrid algorithms in a computer program that allows adjustments to initial parameters. A comparison and validation of the obtained results are conducted, evaluating efficiency, accuracy, and convergence speed.
(e)
The practical application of the algorithms in a real case within the manufacturing industry, focusing on reducing defects in the plastic injection molding process. The computational efficiency of each algorithm is assessed, considering its ability to provide effective solutions in terms of efficiency and accuracy under specific industry conditions.
This article is structured into six fundamental sections with the purpose of ensuring a clear and logical understanding of the content, as illustrated in Figure 1. In the introduction (Section 1), the relevant objectives of the work are precisely outlined, focusing on the importance of intelligent decision making in the industrial context. Additionally, the inherent challenges of the metaheuristic algorithm PSO are explored, providing an understanding of the obstacles that could arise in the addressed context.

2. Relevant Literature

This section provides detailed information about the algorithms and methods used in the research, including the necessary mathematical formulations to comprehend the algorithms addressed in the study. The analysis begins in Section 2.1 with optimization methods: the particle swarm optimization (PSO) algorithm and the bat algorithm (BA). Subsequently, in Section 2.2, it delves into multicriteria decision-making methods (MCDMs): Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), Multi-Objective Optimization by Ratio Analysis (MOORA), and Dimensional Analysis (DA).

2.1. Optimization Methods

Optimization methods aim to enhance a structure to maximize benefits, profits, and performance while utilizing minimal resources such as time, materials, and costs [20,25,40]. In this context, optimization focuses on finding solutions within a feasible set of options [25,40,41]. In the realm of decision making, optimization methods play an integral role in data analysis strategies, facilitating the tasks of decision makers [15,16,42].
Currently, there is a wide diversity of optimization methods, as there is no single approach that addresses all problems optimally. This multitude of options adds complexity to the task of selecting the most suitable and efficient method for solving a specific problem [42,43]. It is important to note that the majority of optimization problems are complex and involve elements of nonlinearity, discontinuity, and convexity. This scenario drives the evolution of classical algorithms with the goal of improving their efficiency to achieve nearly optimal solutions in shorter times [20,44].
In this context, Figure 2 provides a classification of optimization methods, distinguishing two main categories: heuristics and metaheuristics. Within heuristics, there are subclassifications, including construction and improvement algorithms. On the other hand, metaheuristics are considered high-level procedures due to their strategy of exploring the search space, starting from random solutions that improve with each iteration. The strategy employed by metaheuristics includes diversification (a mechanism for exploring the search space) and intensification (using previously found solutions) [27,41]. Metaheuristics are further divided into those based on populations or trajectories, with Swarm Intelligence and evolutionary algorithms standing out in this category [15,16,42,45].
In view of the above, the algorithms used in this research belong to the family of metaheuristics [27,41]. The proliferation of metaheuristic algorithms in recent years has enhanced solution capabilities, especially for population-based methods, leading to a subdivision known as Swarm Intelligence. This classification is also a field within Artificial Intelligence, mimicking the collective social behavior of living organisms (insects, animals, and bacteria) for communication, organization, or goal achievement [26,46].
Taking into account the above, in this research, we will focus on the particle swarm optimization algorithm (PSO) and conduct a comparison with the bat algorithm (BA) and other hybrid algorithms under development.
In the following sections, we will delve into detailing the advantages and disadvantages, as well as the mathematical structure of the PSO and BA algorithms.

2.1.1. Particle Swarm Optimization Algorithm

In 1986, Glover introduced metaheuristic algorithms, incorporating them into the family of optimization methods. Since then, a little over three decades have passed, giving rise to an extensive variety of metaheuristic algorithms with various classifications [15,26,47]. Among the different categories of metaheuristics, those algorithms that draw inspiration from the collaboration among living organisms to tackle complex problems stand out. These algorithms are characterized by employing random elements to find solutions efficiently, extracting cues from the behavior of natural phenomena [16,48].
The Particle Swarm Optimization (PSO) algorithm constitutes a mathematical model inspired by the social behavior of flocks of birds searching for food in unknown destinations. It is categorized as a swarm intelligence algorithm, focused on operating with solutions to address complex problems [49,50]. In its classical form, PSO has proven effective in solving a variety of problems due to its simplicity of implementation and its ability to successfully tackle complex issues with few required parameters [51,52]. PSO has demonstrated its contributions in various contexts, such as partner selection [53], parameter estimation [48], task allocation using unmanned aerial vehicles for the delivery of medicines and food to victims [50], and the enhancement of welding processes in robots, resulting in cost reduction and increased productivity [54]. Additionally, recent advancements and applications of PSO include the automatic, iterative optimization of a radiation shielding system's design parameters using a PSO extension [55], photovoltaic solar systems [56], breast cancer diagnosis [57], and power prediction in battery systems [58], among others [59].
Furthermore, the implementation of a PSO is reflected in patents ranging from optimal solution finding [60] to accurate device position estimation [61].
However, among PSO's opportunities for improvement is its tendency to fall easily into a local solution, yielding premature results. An MCDM algorithm to model and handle the data would therefore offer more significant global exploration [30].
The general operational procedure of the classic PSO algorithm consists of five steps, as illustrated in Figure 3. These steps begin with the initialization of parameters, positions, and velocities of particles, followed by the evaluation of the objective function, the update of the velocity/position of each particle, and ultimately, the determination of the global optimal position [49,62]. For a deeper understanding of the PSO algorithm, the detailed structure is presented in Algorithm 1, and the following paragraphs contain the corresponding mathematical formulation of the algorithm.
Algorithm 1: Structure of the PSO.
  • Require: Decision matrix, degree of preference of each criterion, and control parameters: $\omega$, $c_1$, $c_2$, $r_1$, $r_2$, and the number of iterations to perform (T)
  • Ensure: best position (pbest) and best optimum (gbest)
1: Build the decision matrix;
2: Swarm initialization;
3: Determine the first position and first velocity of the particles;
4: Evaluate the objective function to obtain the best optimum and local position;
5: Obtain the best global optimum and the best global position;
[The iterative loop of Algorithm 1 appears as an image in the original article.]
As detailed in the flowchart (Figure 3) and shown in the PSO algorithm (Algorithm 1), the initial step involves capturing the input data in a decision matrix, where the alternatives are represented as rows and the criteria as columns. After this, the control parameters are set, including $\omega$, $c_1$, $c_2$, $r_1$, $r_2$, and the total number of iterations (T).
Fundamentally, the inertia weight ($\omega$) plays an important role in updating the particle velocities, aiming to achieve a balance between global and local searches. A smaller value of $\omega$ favors local search, while a larger value facilitates global exploration. For this reason, the literature suggests that the value of $\omega$ should lie in the interval $[0.8, 1.2]$.
Simultaneously, the coefficients $r_1$ and $r_2$ simulate the influence of nature on the particles and are initialized with random numbers within the interval $[0, 1]$. Meanwhile, the learning factors encompass the cognitive acceleration constant ($c_1$) and the social acceleration constant ($c_2$). Cognitive acceleration governs how each particle's personal best position influences its velocity update: a large $c_1$ encourages particles to explore their own search space more aggressively, while a smaller value encourages reliance on their past local experience. Social acceleration regulates the influence of the swarm's best global position on each particle's velocity update: a higher $c_2$ prompts particles to follow the direction of historically successful particles, exploring the search space more globally, whereas a lower value makes the particles less dependent on the global information of the swarm. Typically, the values of $c_1$ and $c_2$ fall within the range of 1.5 to 2.0.
As described in the algorithm (Algorithm 1), the initial current position ($CP$), corresponding to the decision matrix, is established, while the initial velocity ($V$) of each particle is randomly determined. Subsequently, to evaluate the objective function, Equation (1) is applied:

$CF_{N_i}(t) = f(CP_{N_i}(t))$ (1)

In the process of determining the first local best optimum ($LBF$), the current best optimum is used (Equation (2)):

$LBF_{N_i}(t) = CF_{N_i}(t)$ (2)

Furthermore, to obtain the best global optimum, the maximum value is selected from the local best optimum dataset (Equation (3)):

$GBF(t) = \max(LBF_{N_i}(t))$ (3)

Meanwhile, to identify the best global position, the position of the particle achieving the best global optimum is extracted (Equation (4)). Position $z$ provides the value of the local best position and becomes the global best position, where $z$ is the particle position $i$ of $GBF(t)$:

$GBP(t) = LBF(z)$ (4)

To update the velocity, the inertia weight coefficient, the learning factors, and the particle circulation values are used, together with the velocity, local position, and best local and global positions from the previous iteration, applying Equation (5):

$V_{N_i}(t) = \omega V_{N_i}(t-1) + c_1 r_1 \left( LBP_{N_i}(t-1) - CP_{N_i}(t-1) \right) + c_2 r_2 \left( GBP(t-1) - CP_{N_i}(t-1) \right)$ (5)

After calculating the new velocity, the new position is determined using Equation (6):

$CP_{N_i}(t) = CP_{N_i}(t-1) + V_{N_i}(t)$ (6)
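To make Equations (1)–(6) concrete, the following minimal Python sketch implements the classic PSO loop for a maximization objective. It is an illustrative reading of the equations above rather than the authors' implementation; the box bounds, the parameter defaults, and all function names are assumptions of this sketch.

import numpy as np

def pso_maximize(f, lo, hi, n_particles=30, T=100, w=0.9, c1=2.0, c2=2.0, seed=0):
    """Minimal PSO maximizing f over the box [lo, hi], following Eqs. (1)-(6)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    cp = rng.uniform(lo, hi, (n_particles, lo.size))   # initial positions CP
    v = np.zeros_like(cp)                              # initial velocities V
    lbf = np.apply_along_axis(f, 1, cp)                # Eqs. (1)-(2): local best fitness
    lbp = cp.copy()                                    # local best positions LBP
    gbp = lbp[np.argmax(lbf)]                          # Eqs. (3)-(4): global best position
    for _ in range(T):
        r1, r2 = rng.random((2,) + cp.shape)           # circulation values in [0, 1]
        v = w * v + c1 * r1 * (lbp - cp) + c2 * r2 * (gbp - cp)  # Eq. (5)
        cp = np.clip(cp + v, lo, hi)                   # Eq. (6), kept inside the box
        fit = np.apply_along_axis(f, 1, cp)
        better = fit > lbf                             # particles that improved
        lbp[better], lbf[better] = cp[better], fit[better]
        gbp = lbp[np.argmax(lbf)]                      # refresh the global best
    return gbp, lbf.max()

Consistent with the discussion above, a larger w pushes the sketch toward global exploration, while larger c1 or c2 strengthens the pull toward the personal and swarm bests, respectively.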

2.1.2. Bat Algorithm

In 2010, Xin-She Yang proposed the bat algorithm (BA), which originated from the echolocation behavior of bats and has emerged as an important option for solving global optimization problems [63]. When bats fly, they emit high-volume sound pulses and listen to and analyze the echoes that bounce off entities and objects in their environment, allowing them to determine the distance to prey or obstacles. A bat emits high-frequency pulses when hunting its prey, and this frequency decreases as the bat gets closer to the prey [33,64].
During its initial stages, the algorithm generates a random population of bats, which are then classified based on their fitness after evaluating the objective function. The positions of these bats are adapted by flying towards optimal solutions found by other individuals, thus introducing a global exploration component. Updating the frequency and loudness of solutions contributes to the balance between exploration and exploitation. Through repeated iterations, in which the bats continually seek to improve their positions, the algorithm focuses on finding optimal solutions within the search space. This approach, based on bat echolocation, has been successful in a variety of optimization problems [65,66].
The working of the BA and the key equations used in its implementation are detailed in Algorithm 2 and explained in the following paragraphs [40,67].
Algorithm 2: Structure of the BA.
  • Require: Initialize the population of n bats $x_i$, initial speed $v_i$, frequency $f_i$, pulse rate $r_i$, and loudness $A_i$
  • Ensure: classification of results
[The step listing of Algorithm 2 appears as an image in the original article.]
Moreover, each bat starts at a position $x_i$ with a velocity $v_i$ and a frequency $f_i$ (during the first iteration, each bat is randomly assigned a frequency within the range $[f_{min}, f_{max}]$). Subsequently, these are modified to generate new solutions $x_i^t$, velocities $v_i^t$, and frequencies $f_i$, following Equations (7)–(9):

$f_i = f_{min} + (f_{max} - f_{min})\beta$ (7)

$v_i^t = v_i^{t-1} + (x_i^t - x_*) f_i$ (8)

$x_i^t = x_i^{t-1} + v_i^t$ (9)

In the bat algorithm (BA), $\beta$ is defined as a random value within a specific range, typically assigned in the interval $[0, 1]$ [63]. The variable $x_*$ represents the current best global location, determined by comparing all solutions among the $n$ bats in each iteration $t$. Additionally, the variable $f_i$ is employed to adjust both the range and speed of bat movements.
In the context of local search, after selecting a solution from the set of current best solutions, a new solution is generated locally for each bat through a random walk, as per Equation (10). Here, $\epsilon$ is a random value drawn from the range $[-1, 1]$, while $\bar{A}^t = \langle A_i^t \rangle$ denotes the average loudness of all bats during this time interval:

$x_{new} = x_{old} + \epsilon \bar{A}^t$ (10)

As the iterations progress, the loudness $A_i$ and pulse rate $r_i$ must be updated, as indicated in Equations (11) and (12). Here, $\alpha$ and $\gamma$ are constants that normally take the same value; in most simulations in the literature, they are assigned the value 0.9. Loudness and emission rates are updated only if new solutions improve, indicating that the bats are moving towards the optimal solution:

$A_i^{t+1} = \alpha A_i^t$ (11)

$r_i^t = r_i^0 \left[ 1 - \exp(-\gamma t) \right]$ (12)
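As a companion to Equations (7)–(12), the sketch below shows one BA iteration in Python. It follows the update rules as stated above; the population shapes, the local-search trigger, and the shared initial pulse rate r0 are simplifying assumptions of this illustration, and the improvement-conditioned update of loudness and pulse rate is noted in the comments but omitted for brevity.

import numpy as np

def bat_step(x, v, x_best, A, r, t, rng, f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, r0=0.5):
    """One bat algorithm iteration per Eqs. (7)-(12); x has shape (n_bats, n_dims)."""
    n, d = x.shape
    beta = rng.random((n, 1))                     # beta ~ U[0, 1]
    f = f_min + (f_max - f_min) * beta            # Eq. (7): pulse frequencies
    v = v + (x - x_best) * f                      # Eq. (8): velocity update
    x_new = x + v                                 # Eq. (9): position update
    walkers = rng.random(n) > r                   # bats performing a local random walk
    eps = rng.uniform(-1.0, 1.0, (int(walkers.sum()), d))
    x_new[walkers] = x_best + eps * A.mean()      # Eq. (10): walk scaled by mean loudness
    A = alpha * A                                 # Eq. (11); full BA applies this only on improvement
    r = r0 * (1.0 - np.exp(-gamma * t))           # Eq. (12); likewise improvement-conditioned
    return x_new, v, A, r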

2.2. Methods of Multiple Criteria Decision-Making

In the area of multiple criteria decision-making (MCDM) methods, there are techniques to compare, evaluate and classify a set of finite alternatives according to the proposed objective. MCDM accommodates data into criteria with multiple units, accepting both qualitative and quantitative data [36,68,69].
MCDM’s beginnings date back to 1960 and since then its goal has been to help decision makers gain better insights, even in complex situations. MCDM helps to select the best option among several possibilities (alternatives) considering a set of contradictory points of view (criteria) [8,20].
In recent years, the application of MCDM has seen an increase in several areas, leading to the development of new methods. Among the most popular MCDM methods are (a) Elimination and Choice Translating Reality (ELECTRE), where pairs of alternatives are compared; (b) Technique of Order of Preference by Similarity with the Ideal Solution (TOPSIS), which compares the distance between all alternatives with the best and worst solution; (c) Analytical hierarchy process (AHP), which decomposes elements into hierarchies and determines the priority of elements; (d) Multicriteria optimization and compromise solution (VIKOR) method, which classifies alternatives in the face of various conflicts; (e) Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE), calculating the dominant flow of alternatives; (f) Multi-objective Optimization Method Based on Proportion Analysis (MOORA), evaluating each alternative based on the proportion analysis; (g) Combinatorial Evaluation Models Based on Distances (CODAS), using the Euclidean distance and the Taxicab distance of the negative ideal; and (h) Dimensional Analysis (DA), using the dimensional homogeneity of an ideal alternative to compare it with existing ones and determine a similarity index [36,68,70].
In the next section, attention will be focused on three MCDM methods (MOORA, TOPSIS, and DA), and their concepts, advantages, disadvantages, and mathematical structures will be explained.

2.2.1. Multi-Objective Optimization Method Based on the Analysis of Proportions

Brauers and Zavadskas introduced the multi-objective optimization method based on ratio analysis (MOORA) in 2006 as a mathematical calculation method to address complex problems with multiple conflicting criteria [19,71,72]. MOORA allows the simultaneous evaluation of contradictory attributes, considering that the objectives set for each criterion are maximized and minimized independently [20,73].
Certainly, MOORA is a subset of multicriteria decision-making (MCDM) methods and is frequently employed in decision-making processes due to its simplicity and ease of use [72,74]. However, the method offers a high degree of flexibility by allowing criteria to be classified as subjective (profitable or unprofitable) for various decision attributes, with weights assigned to facilitate better decision making [19,75]. Furthermore, MOORA stands out as a reliable and robust method capable of solving concrete problems with minimal mathematical operations [18,19].
The MOORA method, as shown in Figure 4, encompasses six sequential steps: building the decision matrix, defining the objectives, developing the standardized decision matrix, creating the balanced standardized decision matrix based on criteria preferences, estimating the overall evaluations of the benefit and cost criteria, and, ultimately, establishing the value of the contribution.
To estimate the global evaluations of the benefit criteria, Equation (13) is utilized, where $\delta_{max}$ is related to $N_{x_i}$:

$N_{x_i} = \xi_{kl} \,\big|\, \epsilon\, \delta_{max}$ (13)

Simultaneously, the global evaluation of the cost criteria is determined as the sum of the normalized weights using Equation (14), where $\delta_{min}$ is related to $N_{x_j}$:

$N_{x_j} = \xi_{kl} \,\big|\, \epsilon\, \delta_{min}$ (14)

On the other hand, to establish the value of the contribution, Equation (15) is employed:

$N_{y_i} = \sum_{l=1}^{g} N_{x_i} - \sum_{i=g+1}^{m} N_{x_j}$ (15)

where $N_{y_i}$ represents the contribution of each alternative, $i = 1, \dots, g$ are the maximization criteria, and $l = g+1, g+2, \dots, m$ are the minimization criteria.
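Read procedurally, Figure 4 and Equations (13)–(15) amount to a normalize-weight-sum pipeline. The following sketch assumes vector normalization per criterion and a boolean mask separating benefit from cost criteria; both are common MOORA conventions rather than details specified above, and the function name is illustrative.

import numpy as np

def moora_rank(X, weights, benefit_mask):
    """MOORA contribution values per Eqs. (13)-(15) and the ranking they induce.

    X: (m, n) decision matrix; weights: (n,) preference degrees summing to 1;
    benefit_mask: (n,) bool array, True where the criterion is maximized."""
    norm = X / np.sqrt((X ** 2).sum(axis=0))          # standardized decision matrix
    weighted = norm * weights                         # balanced standardized matrix
    benefit = weighted[:, benefit_mask].sum(axis=1)   # Eq. (13): benefit-criteria total
    cost = weighted[:, ~benefit_mask].sum(axis=1)     # Eq. (14): cost-criteria total
    ny = benefit - cost                               # Eq. (15): contribution Ny_i
    return ny, np.argsort(-ny)                        # alternatives ranked best first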

2.2.2. Technique of Preference Order by Similarity with the Ideal Solution

In 1981, the TOPSIS method (Technique for Order of Preference by Similarity to Ideal Solution) was developed by Hwang and Yoon as an MCDM strategy for optimizing multiple alternatives in decision making. This method involves constructing an ideal alternative by considering all available alternatives and then identifying the alternative closest to this ideal, as illustrated in Figure 5. The selection is based on choosing between the closest distance to the positive ideal solution, which maximizes benefit and minimizes cost, and the furthest distance to the negative ideal solution, which maximizes cost and minimizes benefit [76,77,78].
Now, the steps of the TOPSIS method are presented in Algorithm 3 in a descriptive and conceptual manner, whereas Algorithm 4 provides them in a mathematical form. Both are distinguished by their ease of application and clarity of understanding. The process begins with the determination of the decision matrix, which consists of a set of alternatives evaluated according to a set of predefined criteria. The decision matrix is then normalized to ensure the uniformity of elements within the same domain (Equation (16)), and the calculation of the weighted normalized decision matrix follows (Equation (17)).
Algorithm 3: Structure of the TOPSIS method (descriptive and conceptual).
  • Require: Decision matrix
  • Ensure: Ranking preferences
1: Develop the decision matrix;
2: Normalize the decision matrix;
3: Estimate the weighted normalized decision matrix;
4: Determine the positive ideal solution (PIS) and the negative ideal solution (NIS);
5: Calculate the Euclidean distance;
6: Calculate the relative proximity to the ideal solution;
7: Calculate the hierarchy of preferences;
8: return: Hierarchy of preferences.
$n_{ij} = x_{ij} \Big/ \sqrt{\sum_{i=1}^{m} x_{ij}^2}, \quad i = 1, \dots, m$ (16)

$\alpha_{ij} = w_j \cdot n_{ij}, \quad j = 1, \dots, n, \; i = 1, \dots, m$ (17)

The next steps involve determining the positive ideal solution (PIS, Equation (18)) and the negative ideal solution (NIS, Equation (19)):

$\beta^+ = (\alpha_1^+, \dots, \alpha_n^+) = \left\{ \left( \max_i \alpha_{ij} \mid j \in J \right), \left( \min_i \alpha_{ij} \mid j \in J' \right) \right\}$ (18)

$\beta^- = (\alpha_1^-, \dots, \alpha_n^-) = \left\{ \left( \min_i \alpha_{ij} \mid j \in J \right), \left( \max_i \alpha_{ij} \mid j \in J' \right) \right\}$ (19)

Continuing with the calculations, the Euclidean distance between each alternative and the positive ideal solution $t_i^+$ (Equation (20)) and the negative ideal solution $t_i^-$ (Equation (21)) is computed:

$t_i^+ = \sqrt{\sum_{j=1}^{n} (\alpha_{ij} - \alpha_j^+)^2}, \quad i = 1, \dots, m$ (20)

$t_i^- = \sqrt{\sum_{j=1}^{n} (\alpha_{ij} - \alpha_j^-)^2}, \quad i = 1, \dots, m$ (21)

Once the Euclidean distances are obtained, the relative proximity to the positive ideal solution is calculated (Equation (22)):

$R_i = t_i^- \big/ (t_i^+ + t_i^-), \quad i = 1, \dots, m$ (22)

The process concludes by sorting the results in descending order of relative proximity, so that the alternatives closest to the ideal solution rank highest [76,77].
Algorithm 4: Structure of the TOPSIS method (mathematical).
  • Require: Decision matrix $D \in \mathbb{R}^{m \times n}$, criterion weights $\omega \in \mathbb{R}^{n}$;
  • Ensure: Ranking preferences $R = [r_1, r_2, \dots, r_m]$;
1: Develop the decision matrix $D = [d_{ij}]_{m \times n}$;
2: Normalize the decision matrix: $r_{ij} = d_{ij} \big/ \sqrt{\sum_{k=1}^{m} d_{kj}^2}$;
3: Estimate the weighted normalized decision matrix: $v_{ij} = \omega_j \cdot r_{ij}$;
4: Determine the positive ideal solution (PIS) and negative ideal solution (NIS);
5: Calculate the Euclidean distances: $d_i^+ = \sqrt{\sum_{j=1}^{n} (v_{ij} - v_j^+)^2}$, $d_i^- = \sqrt{\sum_{j=1}^{n} (v_{ij} - v_j^-)^2}$;
6: Calculate the relative proximity: $C_i = d_i^- \big/ (d_i^+ + d_i^-)$;
7: Calculate the hierarchy of preferences by ranking $C_i$ in descending order;
8: return Hierarchy of preferences $R$.
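Algorithm 4 translates almost line by line into Python. The sketch below mirrors steps 2–8 under the same notation; the benefit/cost mask deciding which criteria are maximized in the PIS is an assumption of this sketch, since Algorithm 4 leaves that split implicit.

import numpy as np

def topsis_rank(D, w, benefit_mask):
    """TOPSIS per Algorithm 4 and Eqs. (16)-(22): returns proximities and ranking."""
    R = D / np.sqrt((D ** 2).sum(axis=0))                        # step 2: normalize
    V = w * R                                                     # step 3: weight
    pis = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))    # step 4: PIS
    nis = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))    # step 4: NIS
    d_pos = np.sqrt(((V - pis) ** 2).sum(axis=1))                 # step 5: distance to PIS
    d_neg = np.sqrt(((V - nis) ** 2).sum(axis=1))                 # step 5: distance to NIS
    C = d_neg / (d_pos + d_neg)                                   # step 6: relative proximity
    return C, np.argsort(-C)                                      # steps 7-8: descending rank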

2.2.3. Dimensional Analysis Method

The history of dimensional analysis (DA) spans over 300 years, with publications on the topic dating back approximately 200 years. Identifying the true precursor of dimensional analysis has been challenging and controversial for researchers. Bridgman is claimed to have played a crucial role in establishing one of the two fundamental theorems of dimensional analysis, although the method's origins can be traced to Newton, and Fourier is generally credited with originating it. Despite its slow recognition, it is currently a method that allows equations describing any natural phenomenon to be formulated [79,80].
As for the DA method, it simplifies problems through dimensional homogeneity, presenting itself as a strong alternative to other methods. Furthermore, DA has the ability to reduce the dimensions of the variables and to associate the criteria and alternatives involved in a problem. In other words, DA compares each available alternative with an ideal optimal solution, calculates a similarity index (SI), and chooses the alternative with the highest SI as the best one [81,82].
The DA method is recognized as a suitable MCDM technique applicable to optimization and decision-making scenarios with various measurement scales. It boasts several advantages, including mathematical simplicity in its application, the integration of decision makers’ opinions into information processing, a reduction in variable complexity, an enhanced understanding of relationships between variables, and the scalability of results. Consequently, the DA method serves as a tool that streamlines information before analysis or modeling, preserving the generality of data transformation. However, it is worth noting that DA exhibits limitations when dealing with information characterized by fuzzy numbers [79,80,81].
The DA method covers a series of steps: constructing the decision matrix, estimating the degree of preference for the criteria, establishing the ideal solution and the similarity index, and finally ranking the alternatives in descending order (see Figure 6).
Moreover, it presents the following advantages: (1) it considers the opinion of the decision makers about the alternatives and criteria; (2) the importance of the criteria is independent, whether benefit or cost [83]; (3) it reduces the number of variables [81]; (4) it considers the mutual relationship between the criteria and the alternatives [82]; and (5) it works with all kinds of data, so it supports many kinds of methods [81]. However, DA shows disadvantages when handling information characterized by fuzzy numbers [83,84].
Viewed procedurally, the DA method consists of five steps (refer to Algorithm 5): building the decision matrix, estimating the degree of preference for the criteria, establishing the ideal solution, determining the similarity index, and identifying the best solution [82,84,85].
Algorithm 5: Structure of the dimensional analysis (DA) algorithm.
  • Require: Decision matrix $D \in \mathbb{R}^{m \times n}$, decision-maker opinions $\omega = [\omega_1, \omega_2, \dots, \omega_n]^T$;
  • Ensure: Best optimum solution $g_{best}$;
1: Build the normalized decision matrix $D_{norm}$;
2: Determine the importance weight of each criterion $\omega_j$, where $\sum_{j=1}^{n} \omega_j = 1$;
3: Calculate the similarity index $IS_i$ for each alternative $i = 1, 2, \dots, m$;
4: Choose the alternative with the highest similarity index: $g_{best} = \arg\max_i IS_i$;
5: return $g_{best}$.
To calculate the similarity index for each alternative ($IS_i$), Equation (23) is utilized, where the value assigned to each alternative ($a_{ji}$) is divided by the ideal alternative for each criterion ($S_l^*$) and raised to the weight assigned to each criterion by the decision makers ($\omega_j$):

$IS_i(a_{1i}, a_{2i}, \dots, a_{mi}) = \prod_{j=1}^{m} \left( \dfrac{a_{ji}}{S_l^*} \right)^{\omega_j}$ (23)
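Equation (23) is a weighted product of ratios against the ideal alternative. The sketch below forms the ideal criterion-wise and inverts the ratio for cost criteria so that the index remains comparable across criteria; that orientation rule, the strictly positive matrix entries, and the function name are assumptions of this illustration.

import numpy as np

def da_similarity(X, weights, benefit_mask):
    """Similarity index IS_i per Eq. (23) and the winning alternative (Algorithm 5).

    Assumes all entries of X are strictly positive."""
    ideal = np.where(benefit_mask, X.max(axis=0), X.min(axis=0))  # ideal alternative S*
    ratios = np.where(benefit_mask, X / ideal, ideal / X)         # a_ji / S*, oriented per criterion
    IS = np.prod(ratios ** weights, axis=1)                       # Eq. (23): weighted product
    return IS, int(np.argmax(IS))                                 # g_best = argmax_i IS_i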

3. Formulation of the Proposal

In this article, the materials used for implementing the proposed algorithms included software for both the internal logic (backend) and the user interface (frontend), as well as for the execution environment. The backend employed Python 3.9, Visual Studio Code 1.55.2 as the code editor, Flask 1.1.2 as a microframework, NodeJS 14.0 as a framework, and PostgreSQL 12.9 as the database. Angular 13.0 was used as the frontend framework. Additionally, the execution environment ran on the Linux Ubuntu Server 20.04 operating system with the Apache 2 web server.
The methodology used in the development of this article is called the “Strategy for the Development and Evaluation of Algorithms” (SADE). This method consists of five phases, which are detailed in the following paragraphs and illustrated in Figure 7.

3.1. Phase 1: Data Collection

In Phase 1 of the SADE methodology, experimentation is conducted with HyDM-PSO algorithms in a real-world context.

3.2. Phase 2: Development of Hybrid Algorithms

In Phase 2 of the SADE methodology, hybrid algorithms are developed by combining the PSO algorithm with three multicriteria decision-making methods (MCDMs): Multi-Objective Optimization Method based on Ratio Analysis (MOORA), Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), and Dimensional Analysis (DA). This integration results in the creation of the MOORA-PSO, TOPSIS-PSO, and DA-PSO algorithms, collectively named the Hybrid Decision-Making with PSO (HyDM-PSO). Throughout all HyDM-PSO algorithms, MCDM is utilized to preprocess the data and establish an initial search point for PSO. The general structure of the HyDM-PSO algorithms is presented in Algorithm 6, in pseudocode, covering nine fundamental steps.   
Algorithm 6: General structure of the HyDM-PSO algorithms.
  • Require: Decision matrix, degree of preference of the criteria, and control parameters: $\omega$, $c_1$, $c_2$, and the number of iterations to perform (T)
  • Ensure: best position (pbest) and best optimum (gbest)
1: Construct the decision matrix, with the degrees of preference for each criterion, and the inertia weight;
2: Obtain the ideal solution using the objective function of the MCDM;
3: Set the learning factors;
4: Assign the circulation of each particle;
5: Determine the first position and velocity of the particles;
6: Evaluate the objective function to obtain the best optimum and local position;
7: Obtain the best global optimum and the best global position;
[The iterative loop of Algorithm 6 appears as an image in the original article.]
In the context of Algorithm 6, Step 1 involves constructing the decision matrix, incorporating the degrees of preference for each criterion, and determining the inertia weight. The value assigned to the inertia weight plays a crucial role in the process but cannot remain constant due to possible changes in the opinions of decision makers. An examination of the literature reveals the range of values used for the inertia weight in different contexts, where a higher value favors global exploration and a lower value promotes local exploration. Although the ranking of values is similar across several studies [30,86,87], establishing a specific weight for a particular method remains a challenge. This article adopts two different values for the inertia weight: $\omega = 0.3$ and $\omega = 0.7$.
Furthermore, variation in the preference degrees for each criterion depends on the preferences and priorities of decision makers in specific cases, as documented in the literature [20]. This article considers three distinct values for the preference degrees of each criterion ( D f 1 , D f 2 , D f 3 ). The criteria are identified as C1, C2, C3, C4, and C5, as detailed in Table 1.
In Step 2 of Algorithm 6, the objective is to attain the ideal solution by applying the MCDM objective function. In this context, a comprehensive explanation of the specific implementation of each objective function in the HyDM-PSO algorithms is provided in the subsequent paragraphs:
  • MOORA-PSO Algorithm: To compute the objective function of the MOORA-PSO algorithm, refer to Equations (13)–(15). After classifying the alternatives, identify the two top solutions.
  • TOPSIS-PSO Algorithm: To compute the objective function of the TOPSIS-PSO algorithm, the results from Equations (16)–(22) provide the positive ideal solution and the negative ideal solution.
  • DA-PSO Algorithm: For the DA-PSO algorithm, employ Equation (23) to calculate the ideal solution.
As part of Step 3 in Algorithm 6, the learning factors are established by adopting values from the literature. In 2021, a value of 2.0 was employed for both acceleration coefficients, considered a standard reference value for the PSO algorithm [88]. In 2003, usage of 1.5 for the cognitive coefficient and 2.5 for the social coefficient was recorded [86]. When $c_1 > c_2$, there will be more confidence in the particle results, while when $c_2 > c_1$, there will be more confidence in the swarm results. Therefore, for this article, five different pairs of values for the cognitive and social coefficients are selected, as detailed in Table 2. According to [89,90], randomness in the acceleration coefficients can lead to premature convergence, resulting in a suboptimal solution. Likewise, multiple improved PSO variants have been developed in the literature to tackle these difficulties [56].
Now, the degrees of preference for each criterion, the cognitive and social coefficients, and the inertia weight for all the experiments conducted in this article are generated, as shown in Table 3. The parameters used in the experiments are the preference degrees ($Df$), the cognitive and social coefficients ($CC$), and the inertia weight ($\omega$).
In Step 4 of Algorithm 6, circulation values are assigned to each particle, specifically for the cognitive coefficient ($r_1$) and the social coefficient ($r_2$). This assignment stems from the hybridization of the MCDM with the HyDM-PSO algorithm. A detailed description of the process is provided below:
1.
MOORA-PSO algorithm: After the calculation of the objective function using Equations (13)–(15), the alternatives are classified, leading to the identification of the top two solutions. The values corresponding to these solutions are extracted from the decision matrix and assigned to the coefficients simulating the influence of nature on the particles ($r_1$ and $r_2$).
2.
TOPSIS-PSO algorithm: In the objective function, the positive ideal solution (Equation (18)) and the negative ideal solution (Equation (19)) are calculated, and the resulting values are assigned to the coefficients that simulate the influence of nature on the particles ($r_1$ and $r_2$).
3.
DA-PSO algorithm: During the calculation of the objective function with Equation (23), the ideal solution is identified and assigned to $r_1$, while the similarity index is designated as $r_2$.
Regarding the subsequent steps of Algorithm 6, these align with the PSO algorithm. The particle velocity is updated using Equation (5) and the position using Equation (6). The determination of the best local position involves the local optimum; the best global position is updated through Equation (4), and the best global optimum is identified using Equation (3).
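Putting Steps 1–9 together, the skeleton below illustrates how an MCDM stage can seed PSO in the spirit of Algorithm 6: the MCDM objective supplies the circulation values $r_1$ and $r_2$ instead of random draws, and the decision matrix doubles as the initial swarm. The seeding rule shown corresponds to the DA-PSO case (Step 4, item 3), reusing the da_similarity sketch from Section 2.2.3; the scaling of the similarity index into [0, 1] and the weighted-sum fitness are assumptions of this sketch, not details taken from the original implementation.

import numpy as np

def hydm_pso(D, weights, benefit_mask, w=0.7, c1=2.0, c2=2.0, T=100):
    """Sketch of Algorithm 6 with DA seeding: MCDM fixes r1/r2, then plain PSO runs."""
    IS, best = da_similarity(D, weights, benefit_mask)  # Step 2: ideal solution via Eq. (23)
    r1 = IS[best] / IS.max()                            # Step 4: ideal solution -> r1 (assumed scaling)
    r2 = (IS / IS.max())[:, None]                       # Step 4: similarity index -> r2 (assumed scaling)
    cp = D.astype(float)                                # Steps 1/5: decision matrix as initial swarm
    v = np.zeros_like(cp)
    signs = np.where(benefit_mask, 1.0, -1.0)
    fit = lambda P: (P * weights * signs).sum(axis=1)   # assumed illustrative objective
    lbf, lbp = fit(cp), cp.copy()                       # Step 6: local bests
    gbp = lbp[np.argmax(lbf)]                           # Step 7: global best, Eqs. (3)-(4)
    for _ in range(T):
        v = w * v + c1 * r1 * (lbp - cp) + c2 * r2 * (gbp - cp)  # Eq. (5)
        cp = cp + v                                               # Eq. (6)
        f = fit(cp)
        better = f > lbf
        lbp[better], lbf[better] = cp[better], f[better]
        gbp = lbp[np.argmax(lbf)]
    return gbp, lbf.max()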

3.3. Phase 3: Comparative Evaluation of Algorithms

During Phase 3 of the SADE methodology, a comparative analysis of the results is conducted to gain a distinct perspective from an alternative approach, integrating supplementary methods such as the PSO algorithm and the bat algorithm (BA).

3.4. Phase 4: Validation of Results

In Phase 4 of the SADE methodology, the validation of results is initiated. During this stage, the results of all experiments are compared using the statistical ANOVA method. The objective is to assess the solution found and the time required for convergence [91]. Convergence, in the PSO stability literature, concerns how the important parameters affect the particle swarm dynamics and under what conditions the swarm converges to particular stable positions [92].
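For the ANOVA comparison described here, a one-way test over repeated runs of each algorithm is the natural fit. The snippet below uses SciPy's f_oneway; the convergence-time figures are placeholders for illustration, not measured results from this study.

from scipy import stats

# Placeholder convergence times (seconds) from repeated runs of each algorithm
t_pso, t_moora, t_da = [4.1, 3.9, 4.3], [3.2, 3.4, 3.1], [2.8, 2.9, 3.0]

f_stat, p_value = stats.f_oneway(t_pso, t_moora, t_da)  # one-way ANOVA across groups
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")           # p < 0.05: mean times differ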

3.5. Phase 5: Strategic Feedback

In the concluding phase of the SADE methodology, feedback is supplied by comparing results with real-life context and findings from the experimentation, with the aim of improving actions in a specific context.

4. Experimental Case

4.1. Phase 1: Data Collection

The data acquisition used for the experimental case originates specifically from the plastic injection molding process of a manufacturing company situated in Ciudad Juárez, Chihuahua, Mexico. This process, employed in the manufacturing industry to mass-produce plastic parts with precision and intricate details, spans applications from toys and packaging to automotive components and medical devices. In this operation, plastic granules undergo melting in a specialized machine, followed by their precise injection into a closed mold with the desired shape. Within the mold, the plastic swiftly cools and solidifies, assuming the mold’s shape. Subsequently, the mold opens, and the molded part is ejected [93].
Critical characteristics within this process ensure quality and efficiency, including melting temperature, mold temperature, filling time, packaging time, and packaging precision. In practice, an inadequate or non-optimized configuration of these parameters leads to high variability in the final product, resulting in defects such as flash, warpage, internal voids, uneven shrinkage, low mechanical strength, or poor surface finish. These defects not only compromise the functionality of the parts but also result in rework, material waste, line stoppages, and customer rejection—directly impacting the company’s profitability.
The melting temperature affects the plastic’s flow during injection, and proper maintenance prevents defects. Mold temperature influences cooling rate and surface quality. The adequate filling time prevents issues like overfilling, incomplete filling, or the generation of internal stresses within the part. Appropriate packaging time ensures the proper solidification of the plastic in all areas of the mold, improving density and reducing potential defects like shrinkage. Proper packing pressure aids in material compaction, reduces porosity, and improves the homogeneity of the molded part. Each of the parameters interacts with others and cannot be evaluated in isolation.
For this reason, identifying the best possible combination of these critical characteristics is essential, as it allows for stable operating conditions, reduced defective parts, shorter cycle times, minimized energy and raw material consumption, and prolonged mold life. Consequently, the manufacturing company achieves greater efficiency on its production lines, significant improvement in product quality delivered to the customer, reduced operating cost, and enhanced competitiveness in the market.
The precise control of these characteristics is essential to guarantee that the final part meets the required specifications in terms of quality, dimensional accuracy, and physical properties. Optimizing these characteristics is crucial to preventing defects and maintaining efficient, consistent production in plastic injection molding [94,95]. The control variables in Table 4 for injection molding optimization are established primarily on the basis of the defects and operational problems observed in plastic injection processes, as documented in both industry and the scientific literature. Considering the above, a decision matrix is designed (Table 4), encompassing five criteria: melting temperature ($C_1$), mold temperature ($C_2$), filling time ($C_3$), packaging time ($C_4$), and packing pressure ($C_5$), alongside nine alternatives ($A_1, A_2, \dots, A_9$) with assigned values for these characteristics.

4.2. Phase 2: Development of Hybrid Algorithms

The subsequent sections provide a detailed account of the results derived from the experiments conducted with the HyDM-PSO and PSO algorithms. The sequence of experiments begins with the MOORA-PSO algorithm, followed by TOPSIS-PSO, DA-PSO, and culminates with the PSO algorithm.

4.2.1. Experiment 1: Hybrid Algorithm MOORA-PSO

In Experiment 1, the integration of the MOORA algorithm with the PSO algorithm takes place within a computer program (CPgm). Throughout this experiment, various tests are conducted, running CPgm and adjusting the values of parameters such as the particle acceleration coefficients ($CC$) and degrees of preference ($Df$). Additionally, CPgm undergoes three runs with identical parameters to observe patterns, as outlined in Table 3.
Figure 8 illustrates the results of the tests, employing an inertia weight of 0.7 and degrees of preference $Df_1 = [0.40, 0.20, 0.03, 0.07, 0.30]$. In the tests labeled $A_1$–$A_3$, $CC_1$ is utilized with coefficients $c_1 = c_2 = 2.0$, while in $A_4$–$A_6$, $CC_2$ is applied with $c_1 = c_2 = 1.5$. Lastly, in tests $A_7$–$A_9$, $CC_5$ is employed with $c_1 = c_2 = 2.5$.
The results of the tests depicted in Figure 8 are concurrently displayed in Table 5 to enhance result visualization.
Regarding the tests depicted in Figure 9, an inertia weight of 0.7 is also utilized, along with the degrees of preference $Df_1$ and the acceleration coefficients $CC_3$ and $CC_4$. In $CC_3$, the coefficients $c_1 = 1.5$ and $c_2 = 2.5$ are employed, while in $CC_4$, the coefficients are $c_1 = 2.5$ and $c_2 = 1.5$. Tests $A_{10}$–$A_{12}$ utilize $CC_3$, whereas $A_{13}$–$A_{15}$ employ $CC_4$.
Continuing with the tests, an inertia weight of 0.7 is applied, along with the degrees of preference $Df_2 = [0.20, 0.20, 0.20, 0.20, 0.20]$, as specified in Table 1. In Figure 10, tests $B_1$–$B_3$ commence with the acceleration coefficients $CC_1$ from Table 3, using $c_1 = c_2 = 2.0$. In $B_4$–$B_6$, the values of $CC_2$, $c_1 = c_2 = 1.5$, are implemented. Additionally, in $B_7$–$B_9$, where $CC_5$ is used, $c_1 = c_2 = 2.5$. The outcomes of these tests are also presented in Table 6.
Figure 11 displays the tests, maintaining an inertia weight of 0.7 and utilizing the degrees of preference $Df_2$, but modifying the acceleration coefficients. In tests $B_{10}$–$B_{12}$, the values of $CC_3$ ($c_1 = 1.5$, $c_2 = 2.5$) are implemented, and in $B_{13}$–$B_{15}$, the values of $CC_4$ ($c_1 = 2.5$, $c_2 = 1.5$) are employed.
In Figure 12, tests $C_1$–$C_9$ are depicted, employing an inertia weight of 0.7 and degrees of preference $Df_3 = [0.123, 0.099, 0.043, 0.343, 0.392]$. The values of the acceleration coefficients are modified: $CC_1$ is set to 2.0, $CC_2$ to 1.5, and $CC_5$ to 2.5. In tests $C_1$–$C_3$, $CC_1$ is utilized; in $C_4$–$C_6$, $CC_2$ is applied; and in $C_7$–$C_9$, $CC_5$ is employed. The outcomes of these tests are presented in Table 7.
Figure 13 displays the tests, maintaining an inertia weight of 0.7 and utilizing the degrees of preference $Df_3$, but modifying the acceleration coefficients. In tests $C_{10}$–$C_{12}$, the values of $CC_3$ ($c_1 = 1.5$, $c_2 = 2.5$) are implemented, and in $C_{13}$–$C_{15}$, the values of $CC_4$ ($c_1 = 2.5$, $c_2 = 1.5$) are employed.
In the second part of Experiment 1, where the hybrid MOORA-PSO algorithm is implemented, the experimentation begins with an inertia weight of 0.3 . In Figure 14, the degrees of preference Df 1 are employed, as specified in Table 1. In tests D 1 D 3 , C C 1 is utilized; in D 4 D 6 , C C 2 is applied, and in D 7 D 9 , C C 5 is employed. The outcomes of these tests are detailed in Table 8.
Figure 15 illustrates the tests employing D f 1 and ω = 0.3 . In tests D 10 D 12 , C C 3 is utilized with the values of c 1 = 1.5 , c 2 = 2.5 . Additionally, in D 13 D 15 , the values of c 1 = 2.5 , c 2 = 1.5 are applied, corresponding to C C 4 .
The subsequent tests persist in utilizing the inertia weight of 0.3 and the degrees of preference Df 3 = [ 0 . 123 , 0 . 099 , 0 . 043 , 0 . 343 , 0 . 392 ] from Table 1. In Figure 16, the tests E 1 E 9 are presented, employing the acceleration coefficients C C 1 , C C 1 , and C C 1 , as in the preceding tests. The outcomes of these tests are detailed in Table 9.
Figure 17 shows tests E10–E15, which use Df3 and ω = 0.3: E10–E12 apply CC3, and E13–E15 apply CC4.
The tests characterized by an inertia weight of 0.3 and degrees of preference Df2 = [0.20, 0.20, 0.20, 0.20, 0.20] are depicted in Figure 18: F1–F3 utilize CC1, F4–F6 employ CC2, and F7–F9 implement CC5. The outcomes of these tests are detailed in Table 10.
In Figure 19, tests F10–F15 use Df2 and ω = 0.3: F10–F12 apply CC3, and F13–F15 apply CC4.

4.2.2. Experiment 2: Hybrid Algorithm TOPSIS-PSO

In this phase of the experiment, the TOPSIS method is hybridized with the PSO algorithm in a computer program. Nine tests are initiated with an inertia weight of 0.7 to enhance global search. Throughout these tests, the acceleration coefficients of the particles are adjusted to improve convergence and ensure solution stability, and the degrees of preference for the criteria are varied. Figure 20 shows the results of the first nine executions of the computer program with the degrees of preference Df1 = [0.40, 0.20, 0.03, 0.07, 0.30] from Table 1.
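As a reference for how a TOPSIS score can be computed inside such a hybrid, the following minimal sketch implements the classic closeness coefficient. It is a generic illustration under assumed criterion directions, not the paper's actual program.

```python
import numpy as np

def topsis_closeness(X, weights, benefit_mask):
    """Classic TOPSIS: weighted vector normalization, ideal and anti-ideal
    solutions, Euclidean distances, and a closeness coefficient in [0, 1]
    (higher = closer to the ideal)."""
    V = (X / np.sqrt((X ** 2).sum(axis=0))) * weights   # weighted normalized matrix
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))     # distance to ideal
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```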
The results of tests A1–A9 are shown in Table 11. In tests A1–A3, we use the acceleration coefficients c1 = c2 = 2.0, a common reference value in the literature, so that particle velocity grows quickly and convergence is rapid. In tests A4–A6, the acceleration coefficients are c1 = c2 = 1.5, and in tests A7–A9, we use c1 = c2 = 2.5 (Figure 20).
In the following tests, we use acceleration coefficients greater than zero to pull the particles toward the average of pbest and gbest. When c2 > c1, more confidence is placed in the results of the swarm; therefore, for tests A10–A12, we use c1 = 1.5 and c2 = 2.5. Conversely, when c1 > c2, more confidence is placed in each particle's own results, so in tests A13–A15, we use c1 = 2.5 and c2 = 1.5 (Figure 21).
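The role of the acceleration coefficients can be made concrete with a single canonical PSO update step. This is a generic sketch of the standard velocity and position equations, not the paper's CPgm.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=2.5):
    """One canonical PSO update. w is the inertia weight (0.7 favors
    global search, 0.3 local search); c1 weights the cognitive pull
    toward pbest (the particle's own memory) and c2 the social pull
    toward gbest (the swarm's best), so c2 > c1 trusts the swarm and
    c1 > c2 trusts the particle."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```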
In the following tests, we continue to use an inertia weight of 0.7 to benefit global search, but we change the degrees of preference to Df2 = [0.20, 0.20, 0.20, 0.20, 0.20] from Table 1. Figure 22 shows tests B1–B9, where c1 and c2 are kept equal so that particle velocity increases freely and convergence is reached quickly: we start with acceleration coefficients of 2.0 and later modify them to 1.5 and 2.5. The results of these tests are shown in Table 12.
Figure 23 shows tests B10–B12, whose acceleration coefficients c1 = 1.5 and c2 = 2.5 place greater confidence in the swarm, and tests B13–B15, whose coefficients c1 = 2.5 and c2 = 1.5 place greater confidence in the particles.
In Figure 24, tests C1–C9, which use the degrees of preference Df3, have acceleration coefficients of 2.0, 1.5, and 2.5. The results of these tests are shown in Table 13, where the trajectory becomes smoother during the search, without sudden movements.
Figure 25 shows tests C10–C12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests C13–C15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.
In this second part, we continue with the TOPSIS-PSO hybrid. In the first nine tests, we use an inertia weight of 0.3 to benefit local search, and within them we modify the acceleration coefficients and the degrees of preference of the particles. Figure 26 shows the results of the first nine runs of the computer program (Table 14), with the degrees of preference Df1 = [0.40, 0.20, 0.03, 0.07, 0.30] from Table 1.
The first three tests in Figure 27, D10–D12, use the acceleration coefficients c1 = 1.5 and c2 = 2.5, providing greater confidence in the swarm. In tests D13–D15, we use c1 = 2.5 and c2 = 1.5 for greater confidence in the particles.
The following tests use an inertia weight of 0.3 to benefit local search, with the degrees of preference Df3 = [0.123, 0.099, 0.043, 0.343, 0.392] from Table 1.
Figure 28 shows tests E1–E9, where c1 and c2 are kept equal so that particle velocity grows quickly and convergence is rapid: we start with acceleration coefficients of 2.0 and later modify them to 1.5 and 2.5. The results are shown in Table 15, where we observe whether the trajectory becomes smoother during the search or exhibits sudden movements.
Figure 29 shows tests E10–E12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests E13–E15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.
These nine tests use an inertia weight of 0.3 to benefit local search, with the degrees of preference Df2 = [0.20, 0.20, 0.20, 0.20, 0.20] from Table 1.
Figure 30 shows tests F1–F9, with acceleration coefficients of 2.0, 1.5, and 2.5, respectively. Table 16 presents the results of these tests, where the trajectory becomes smoother during the search, without jerky movements.
Figure 31 shows tests F10–F12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests F13–F15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.

4.2.3. Experiment 3: DA-PSO

In this part, the DA-PSO algorithm is implemented in a computer program. The first nine tests use an inertia weight of 0.7 to benefit global search. Within these nine tests, the acceleration coefficients of the particles are modified to improve convergence and maintain solution stability; likewise, the degrees of preference are varied.
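For reference, one common crisp formulation of the dimensional analysis (DA) index multiplies, per criterion, the ratio of each alternative to a reference alternative raised to the criterion weight. The sketch below assumes benefit-type criteria and a per-criterion best-value reference; the exact variant implemented in the paper's program may differ.

```python
import numpy as np

def da_similarity(X, weights, reference=None):
    """Dimensional-analysis index of similarity: the weighted geometric
    product of ratios between each alternative and a reference alternative.
    Assumes benefit-type criteria; the reference defaults to the
    per-criterion best value. Values near 1 indicate similarity to the best."""
    if reference is None:
        reference = X.max(axis=0)        # assumed: best value per criterion
    return np.prod((X / reference) ** weights, axis=1)
```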
In Figure 32, we show the results of the first nine executions of the computer program, using the degrees of preference Df1 = [0.40, 0.20, 0.03, 0.07, 0.30] from Table 1.
The results of tests A1–A9 are shown in Table 17. In tests A1–A3, we use the acceleration coefficients c1 = c2 = 2.0, a reference value from the literature, so that particle velocity grows quickly and convergence is rapid. In tests A4–A6, the acceleration coefficients are c1 = c2 = 1.5, and in tests A7–A9, we use c1 = c2 = 2.5. The purpose of these tests is to observe whether the trajectory becomes smoother during the search, since large acceleration coefficients produce abrupt movements.
As noted above, acceleration coefficients greater than zero attract the particles toward the average of pbest and gbest. Using c1 = 1.5 and c2 = 2.5 places greater confidence in the swarm, whereas c1 = 2.5 and c2 = 1.5 places more confidence in the particles (Figure 33).
In the following tests, we continue to use an inertia weight of 0.7 to benefit global search, but we change the degrees of preference to Df2 = [0.20, 0.20, 0.20, 0.20, 0.20] from Table 1. To improve convergence and maintain solution stability, the acceleration coefficients are modified as in the previous tests.
In Figure 34 are tests B1–B9, where c1 and c2 are kept equal so that particle velocity increases freely and convergence is fast: we start with acceleration coefficients of 2.0 and later modify them to 1.5 and 2.5. The results are shown in Table 18, where it can be observed whether the trajectory becomes smoother during the search or exhibits sudden movements.
Figure 35 shows tests B10–B12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests B13–B15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.
In Figure 36, tests C1–C9 use acceleration coefficients of 2.0, 1.5, and 2.5. The results of these tests are shown in Table 19, where the trajectory becomes smoother during the search, without sudden movements.
Figure 37 shows tests C10–C12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests C13–C15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.
Following the pattern of the previous experiments, the second part of this experiment uses an inertia weight of 0.3. In Figure 38, tests D1–D9 use acceleration coefficients of 2.0, 1.5, and 2.5, with the degrees of preference Df1 from Table 1. The results of these tests are shown in Table 20, where the trajectory becomes smoother during the search, without sudden movements.
Figure 39 shows tests D10–D12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests D13–D15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.
The following tests use an inertia weight of 0.3 to benefit local search, with the degrees of preference Df3 = [0.123, 0.099, 0.043, 0.343, 0.392] from Table 1.
Figure 40 shows tests E1–E9, where c1 and c2 are kept equal so that particle velocity increases freely and convergence is rapid: we start with acceleration coefficients of 2.0 and later modify them to 1.5 and 2.5. The results are shown in Table 21, where it can be observed whether the trajectory becomes smoother during the search or exhibits sudden movements.
Figure 41 shows tests E10–E12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests E13–E15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.
These nine tests use an inertia weight of 0.3 to benefit local search, with the degrees of preference Df2 = [0.20, 0.20, 0.20, 0.20, 0.20] from Table 1.
Figure 42 shows tests F1–F9, with acceleration coefficients of 2.0, 1.5, and 2.5, respectively. Table 22 presents the results of these tests, where the trajectory becomes smoother during the search, without sudden movements.
Figure 43 shows tests F10–F12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests F13–F15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.

4.2.4. Experiment 4: PSO

In the initial stage of the implementation of the PSO algorithm, nine tests begin with an inertia weight of 0.7 to improve global exploration. Within this set of tests, the computer program that incorporates PSO is run 135 times, adjusting the acceleration coefficients and degrees of preference. These modifications aim to enhance convergence and guarantee solution stability.
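To visualize how such a factorial sweep can be organized, the enumeration below lists the reported parameter levels; the 135 executions per inertia weight suggest each labeled test was itself run several times, so this grid is illustrative and all names are hypothetical.

```python
import itertools

# Preference vectors (Df) and acceleration-coefficient pairs (CC)
# reported in the experiments.
Df = {"Df1": [0.40, 0.20, 0.03, 0.07, 0.30],
      "Df2": [0.20, 0.20, 0.20, 0.20, 0.20],
      "Df3": [0.123, 0.099, 0.043, 0.343, 0.392]}
CC = {"CC1": (2.0, 2.0), "CC2": (1.5, 1.5), "CC3": (1.5, 2.5),
      "CC4": (2.5, 1.5), "CC5": (2.5, 2.5)}

# For one inertia weight: 3 Df x 5 CC x 3 repetitions = 45 labeled tests;
# 45 tests x 3 program runs each would account for the 135 executions.
tests = list(itertools.product(Df, CC, range(3)))
print(len(tests))  # 45
```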
The results of the nine runs of the program, using the degrees of preference Df1 = [0.40, 0.20, 0.03, 0.07, 0.30] from Table 1, are shown in Figure 44.
The results of tests A1–A9 are shown in Table 23. In tests A1–A3, the acceleration coefficients c1 = c2 = 2.0 are employed, a reference value found in the literature, so that particle velocity grows quickly and convergence is rapid. In tests A4–A6, the acceleration coefficients are set to c1 = c2 = 1.5, and in tests A7–A9, c1 = c2 = 2.5. The purpose of these tests is to observe whether the trajectory becomes smoother during the search, as large acceleration coefficients tend to induce abrupt movements.
Acceleration coefficients greater than zero allow particles to be attracted toward the average of the local best (pbest) and global best (gbest). Furthermore, when c1 > c2, greater confidence is placed in the particle results, while c2 > c1 establishes confidence in the swarm results. Therefore, in tests A10–A12, the acceleration coefficients are set to c1 = 1.5 and c2 = 2.5 to encourage greater confidence in the swarm, and in tests A13–A15, c1 = 2.5 and c2 = 1.5 are used, promoting the self-confidence of the particles (Figure 45).
In the following tests, we continue to use an inertia weight of 0.7 to benefit global search, but we change the degrees of preference to Df2 = [0.20, 0.20, 0.20, 0.20, 0.20] from Table 1. To improve convergence and maintain solution stability, the acceleration coefficients are modified as in the previous tests.
In Figure 46, tests B1–B9 keep the acceleration coefficients equal so that particle velocity increases freely and convergence is fast: we start with 2.0 and later modify them to 1.5 and 2.5. The results are shown in Table 24, where it can be observed whether the trajectory becomes smoother during the search or exhibits sudden movements.
Figure 47 shows tests B10–B12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests B13–B15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.
In Figure 48, the degrees of preference change to Df3 = [0.123, 0.099, 0.043, 0.343, 0.392] from Table 1, and tests C1–C9 use acceleration coefficients of 2.0, 1.5, and 2.5. The results of these tests are shown in Table 25, where the trajectory becomes smoother during the search, without sudden movements.
Figure 49 shows tests C10–C12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests C13–C15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.
In the second part of the experimentation with the PSO algorithm, the first nine tests use an inertia weight of 0.3 to benefit local search. Within these nine tests, another 135 executions of the computer program are performed, modifying the particles' acceleration coefficients and degrees of preference.
Figure 50 shows the results of the first nine executions of the computer program with the degrees of preference Df1 = [0.40, 0.20, 0.03, 0.07, 0.30] from Table 1. The results of these tests are shown in Table 26.
The first three tests in Figure 51, D10–D12, use the acceleration coefficients c1 = 1.5 and c2 = 2.5, providing greater confidence in the swarm. In tests D13–D15, we use c1 = 2.5 and c2 = 1.5 for greater confidence in the particles.
The following tests use an inertia weight of 0.3 to benefit local search, with the degrees of preference Df3 = [0.123, 0.099, 0.043, 0.343, 0.392] from Table 1.
Figure 52 shows tests E1–E9, where c1 and c2 are kept equal so that particle velocity increases freely and convergence is rapid: we start with acceleration coefficients of 2.0 and later modify them to 1.5 and 2.5. The results are shown in Table 27, where it can be observed whether the trajectory becomes smoother during the search or exhibits sudden movements.
Figure 53 shows tests E10–E12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests E13–E15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.
These nine tests use an inertia weight of 0.3 to benefit local search, with the degrees of preference Df2 = [0.20, 0.20, 0.20, 0.20, 0.20] from Table 1.
Figure 54 shows tests F1–F9, with acceleration coefficients of 2.0, 1.5, and 2.5, respectively. Table 28 presents the results of these tests, where the trajectory becomes smoother during the search, without sudden movements.
Figure 55 shows tests F10–F12, with acceleration coefficients c1 = 1.5 and c2 = 2.5 giving greater confidence to the swarm, and tests F13–F15, with c1 = 2.5 and c2 = 1.5 giving greater confidence to the particles.

4.3. Phase 3: Comparative Evaluation of Algorithms

The obtained results are highly favorable: the intrinsic limitations of the PSO algorithm decrease when it is integrated with an MCDM method. This assertion is supported by the comparison of results, which enables the evaluation of data efficiency and reliability.
In Figure 56, it can be observed that the DA-PSO algorithm achieves the greatest number of solutions, avoiding falling into local optima. In terms of quantity of solutions, DA-PSO exhibits outstanding performance, followed by PSO, TOPSIS-PSO, and finally MOORA-PSO. Throughout the experimentation, the MOORA-PSO and TOPSIS-PSO algorithms most often found between one and three alternative solutions, with 87% and 73% of runs, respectively.
In contrast, the PSO and DA-PSO algorithms fall below 10% in that range; however, they present a greater number of alternative solutions. PSO has a 70% probability of finding between four and six different solutions and a 28% probability of finding between seven and nine alternatives, while DA-PSO reports probabilities of 81% for four to six solutions and 13% for seven to nine, achieving a balance between global and local search.
In order to ensure the efficiency and reliability of the results obtained with 10 iterations, the computer programs containing the PSO and DA-PSO algorithms were executed again, this time with the number of iterations increased to 50. The newly obtained data, reflected in Figure 57, confirm a probability of 22% of finding between four and six solutions.
Based on the findings, the DA-PSO algorithm surpasses the results achieved by PSO. Moreover, DA-PSO provides additional optimal outcomes that can be scrutinized in decision-making processes, helping to mitigate the stagnation of PSO when seeking optimal results.
Concerning the real case in the manufacturing industry, Figure 58 illustrates that the most recommended alternative is A1, as indicated by DA-PSO (28%), MOORA-PSO (20%), and TOPSIS-PSO (26%). In contrast, PSO identifies alternative A2 as the optimal choice, with a probability of 33%, although A2 is also acknowledged as the second-best option with a probability of 19%. A2 is likewise identified as the second recommended option by DA-PSO (12%) and TOPSIS-PSO (17%), even though A4 is also considered by DA-PSO. Furthermore, other alternatives emerge as secondary options, including A3 (for DA-PSO), A4 (for MOORA-PSO), A5 (for TOPSIS-PSO), and A9 (for PSO).
Having compared the results of the HyDM-PSO and PSO algorithms, these findings are now validated against another metaheuristic, the bat algorithm (BA). To accomplish this, the BA algorithm is implemented in a computer program and subjected to the same tests.
The control parameters used in BA are a fixed number of iterations (10), α = 0.90, γ = 0.90, and random values for the remaining parameters: speed, frequency, and pulse rate are drawn from the range [0, 1], while loudness is drawn from [1, 2].
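For orientation, a single bat update in Yang's BA combines a frequency-tuned pull toward the global best with a loudness/pulse-rate schedule. The sketch below uses the reported α = γ = 0.90 but is otherwise a generic illustration, not the exact program; the greedy acceptance test against loudness is noted in a comment rather than implemented.

```python
import numpy as np

rng = np.random.default_rng(1)

def bat_step(x, v, x_best, A, r, t, f_min=0.0, f_max=1.0,
             alpha=0.90, gamma=0.90, r0=0.5):
    """One bat-algorithm update for a single bat.
    x, v: position and velocity arrays; x_best: swarm's best position;
    A: loudness; r: pulse emission rate; t: current iteration index."""
    f = f_min + (f_max - f_min) * rng.random()   # frequency in [f_min, f_max]
    v = v + (x - x_best) * f                     # pull toward the global best
    x_new = x + v
    if rng.random() > r:                         # local random walk near the best
        x_new = x_best + 0.01 * A * rng.standard_normal(x.shape)
    # In the full algorithm, x_new is accepted only if it improves the
    # objective and rand < A; loudness then decays and pulse rate grows:
    A_new = alpha * A
    r_new = r0 * (1.0 - np.exp(-gamma * t))
    return x_new, v, A_new, r_new
```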
The results of the experiment with the BA algorithm are depicted in Figure 59. The top alternatives are A2, A9, A7, A5, and A5, corresponding to tests E1, E2, E3, E4, and E5, respectively. Additionally, alternative A7 exhibits stagnation in test E3, and A5 stagnates slightly in tests E4 and E5.
Moving on to Table 29, the results of each iteration are presented: although no single alternative dominates across iterations, A9 appears in all tests, and A6 appears in four of the five tests.
Considering the experimental values of the BA algorithm, and given that the DA-PSO and PSO algorithms yield the best results, a comparison of these outcomes is depicted in Figure 60. The figure illustrates that DA-PSO remains the optimal choice for exploring new solutions and preventing stagnation in suboptimal solutions, whereas the BA algorithm struggles to provide more than seven solutions.

4.4. Phase 4: Validation of Results

The validation was conducted using ANOVA. In this sense, Table 30 shows that no significant differences exist between any of the groups, with F = 0.00 and p = 1.000.
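As a minimal illustration of this check, SciPy's one-way ANOVA returns the same F and p statistics; the group samples below are hypothetical placeholders, not the paper's data.

```python
from scipy import stats

# Hypothetical final-fitness samples for three of the compared algorithms
# (placeholders, not the experimental data).
da_pso    = [0.91, 0.88, 0.93, 0.90, 0.89]
pso       = [0.90, 0.89, 0.92, 0.91, 0.88]
moora_pso = [0.89, 0.90, 0.91, 0.90, 0.89]

f_stat, p_value = stats.f_oneway(da_pso, pso, moora_pso)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A large p (> 0.05) indicates no significant difference between group
# means, consistent with the F = 0.00, p = 1.000 reported in Table 30.
```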
Table 31 shows the Tukey simultaneous tests for differences of means; the data indicate a high degree of confidence in the results of the statistical analysis.
Table 32 shows the Fisher individual tests for differences of means, which likewise indicate a high level of statistical confidence.
The Fisher least significant difference (LSD) results in Table 33 indicate that means sharing the letter A are statistically similar to the control mean.
In order to verify the time required to execute the computer programs using the HyDM-PSO, PSO, and BA algorithms, the time for 5 runs of each program with 50 iterations was recorded, yielding the results displayed in Table 34. The PSO and DA-PSO algorithms exhibit the best execution times; PSO records the minimum time in four instances.
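Such timings can be collected with a simple wall-clock harness; this sketch is illustrative, and `algorithm` stands for any callable wrapping one of the compared programs (the name and signature are assumptions).

```python
import time

def timed_runs(algorithm, n_runs=5, iterations=50):
    """Record wall-clock times for repeated executions, in the spirit
    of Table 34. `algorithm` is any callable taking an iteration count."""
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        algorithm(iterations)
        times.append(time.perf_counter() - start)
    return times
```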
In the context of validating data reliability, the tests A1, B1, C1, D1, E1, and F1 are selected from the four experiments, and the BA algorithm undergoes six executions to determine the final position of the particles.

4.5. Phase 5: Strategic Feedback

Based on the findings, the utilization of this technology is recommended for intelligent decision making in intricate scenarios, such as the context of the plastic injection molding process, with the aim of ensuring the quality and efficiency of the final product.
The experimentation considers the plastic injection molding process, taking into account criteria such as melting temperature ( C 1 ), mold temperature ( C 2 ), filling time ( C 3 ), packaging time ( C 4 ), and packing pressure ( C 5 ). The experiment results indicate that the optimal alternatives for this context are A 1 and A 2 (Table 35).
In the case of alternatives A1 and A2, the packaging time (C4) and packing pressure (C5) exhibit higher values than in the other alternatives (refer to Table 4); consequently, these values do not have a significant impact on the plastic injection molding process. It is therefore recommended that future experiments place greater focus on criteria such as melting temperature (C1), mold temperature (C2), and filling time (C3), or even consider incorporating additional features.

5. Analysis and Discussions

The methodology, named “Strategy for the Development and Evaluation of Algorithms (SADE)”, composed of five phases, exhibits robustness in addressing an optimization problem. Additionally, the obtained results showcase its reliability and consistency in finding the optimal solution.
The contributions proposed in this article were successfully carried out through the following actions:
(a)
The successful development of three hybrid algorithms that integrate the PSO algorithm with MCDM methods. These algorithms, collectively named “Hybrid Decision Making with PSO (HyDM-PSO)”, encompass MOORA-PSO, TOPSIS-PSO, and DA-PSO.
(b)
The implementation of the HyDM-PSO algorithms in a computer program to adjust initial parameters and compare efficiency, precision, and convergence speed. The program was developed in Python, utilizing various frameworks and other computational tools.
(c)
The comparison of the HyDM-PSO algorithms, not only among themselves but also with the PSO and BA metaheuristics. These comparisons were made in terms of performance, considering the time required to execute the algorithms in computer programs. The evaluation reveals that the DA-PSO algorithm requires little time; although it is slightly behind PSO at the best times, the difference is minimal, maintaining significant efficiency, as detailed in Table 34. Averaging the times, DA-PSO consistently occupies the second position after PSO, as detailed in Table 36.
(d)
The DA-PSO algorithm achieves stability and robustness, surpassing the limitations of PSO in the search for optimal results and mitigating stagnation. The results demonstrate the high effectiveness of the DA-PSO algorithm, as evidenced in the experimental case with 10 and 50 iterations, as illustrated in Figure 59.
(e)
The application of the algorithms in a real-world case, reducing defects in the plastic injection molding process, validates their applicability in practice.
Table 36. Average execution time.

Algorithm     Average
MOORA-PSO     00:02.3
TOPSIS-PSO    00:02.5
DA-PSO        00:02.3
PSO           00:02.1
BA            00:03.0

6. Conclusions

Decision making stands out as a critical factor for achieving business success, with the analysis and interpretation of information playing a fundamental role in selecting optimal solutions. Aligned with the established objectives, this study has produced favorable outcomes, buttressed by the intricacies of the experiments delineated in this article.
The hybridization of algorithms has showcased significant potential in the decision-making process, placing particular emphasis on the efficacy of amalgamating two prominent methods: dimensional analysis (DA) and particle swarm optimization (PSO). Specifically, the DA-PSO hybrid demonstrates a high degree of efficiency in effectively addressing the inherent stagnation of PSO, generating supplementary solutions around the initial solution.
Furthermore, the experiments reveal that the MOORA-PSO and TOPSIS-PSO hybrids are sensitive to control parameters, which exert a noteworthy impact on their performance and, consequently, on the final results. This finding points toward future work aimed at refining and optimizing these hybrid approaches.
An important aspect of the algorithms implemented in the computer program is the creation of a database with the details of each experiment, providing greater detail on the results obtained. The stored detail will be used in later studies, since the data are kept in .xlsx file format. Additionally, the computer program generates the graphs used in the experimental case section of this article. While the program is currently only accessible locally, ongoing efforts aim to make it available on the web.
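A minimal sketch of this persistence step could rely on pandas; all column names and values here are assumptions for illustration, not the program's actual schema.

```python
import pandas as pd

# Illustrative persistence of per-test details to .xlsx for later studies
# (hypothetical columns and values).
results = pd.DataFrame({
    "test": ["A1", "A2", "A3"],
    "inertia_weight": [0.7, 0.7, 0.7],
    "c1": [2.0, 2.0, 2.0],
    "c2": [2.0, 2.0, 2.0],
    "best_alternative": ["A1", "A2", "A1"],
})
results.to_excel("experiment_results.xlsx", index=False)  # requires openpyxl
```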
From a different perspective, this study yields a positive impact on the manufacturing industry through the implementation of these algorithms in a real-world environment, thereby reducing defects in the plastic injection molding process. This work emphasizes the importance of meticulous consideration of the initial parameters to maximize the efficiency of the proposed algorithms. The conclusions not only close the study but also open new avenues for future exploration in the field of MCDM, highlighting the potential to fine-tune decision making in complex industrial environments through the hybridization of algorithms.
The researchers’ future work is directed toward extending these investigations into diverse areas, with specific emphasis on those entailing multicriteria decision making. Envisaged applications span task assignment, personnel selection, and medical treatment diagnosis. The development of hybrid algorithms that integrate additional metaheuristics, notably ant colony optimization (ACO) and the bat algorithm (BA), is also contemplated. Moreover, the research team intends to broaden the scope by incorporating other multicriteria decision-making (MCDM) approaches, including CODAS and BWM. In the same way, it is planned to expand the research by integrating methods capable of handling fuzzy information, such as q-rung orthopair fuzzy sets.

Author Contributions

Conceptualization, L.A.P.-D. and D.-D.R.-O.; methodology, L.A.P.-D. and D.-D.R.-O.; software, D.-D.R.-O.; validation, L.A.P.-D., D.-D.R.-O. and D.L.-C.; formal analysis, L.A.P.-D., D.-D.R.-O. and V.G.-J.; investigation, L.A.P.-D., D.-D.R.-O. and E.-A.M.-G.; resources, D.L.-C. and L.A.P.-D.; data curation, D.O.-M. and E.-A.M.-G.; writing—original draft preparation, L.A.P.-D. and D.-D.R.-O.; writing—review and editing, L.A.P.-D., D.-D.R.-O. and D.L.-C.; visualization, D.-D.R.-O.; supervision, L.A.P.-D. and E.-A.M.-G.; project administration, L.A.P.-D.; funding acquisition, L.A.P.-D., D.-D.R.-O. and D.L.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Consejo Nacional de Ciencia y Tecnología (CONACYT) grant number CVU-405262 and Universidad Autónoma de Ciudad Juárez.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to privacy concerns.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rozo-García, F. Revisión de las tecnologías presentes en la industria 4.0. Rev. UIS Ing. 2020, 19, 177–192. [Google Scholar] [CrossRef]
  2. Yoo, I.; Yi, C.G. Economic innovation caused by digital transformation and impact on social systems. Sustainability 2022, 14, 2600. [Google Scholar] [CrossRef]
  3. Omri, N.; Al Masry, Z.; Mairot, N.; Giampiccolo, S.; Zerhouni, N. Industrial data management strategy towards an SME-oriented PHM. J. Manuf. Syst. 2020, 56, 23–36. [Google Scholar] [CrossRef]
  4. Zhou, G.; Luo, S. Higher education input, technological innovation, and economic growth in China. Sustainability 2018, 10, 2615. [Google Scholar] [CrossRef]
  5. Nair, S.; Gohel, J.V. A review on contemporary hole transport materials for perovskite solar cells. In Nanotechnology for Energy and Environmental Engineering; Springer: Cham, Switzerland, 2020; pp. 145–168. [Google Scholar] [CrossRef]
  6. de Sousa Junior, W.T.; Montevechi, J.A.B.; de Carvalho Miranda, R.; de Oliveira, M.L.M.; Campos, A.T. Shop floor simulation optimization using machine learning to improve parallel metaheuristics. Expert Syst. Appl. 2020, 150, 113272. [Google Scholar] [CrossRef]
  7. Abdel-Basset, M.; Gamal, A.; Chakrabortty, R.K.; Ryan, M. Development of a hybrid multi-criteria decision-making approach for sustainability evaluation of bioenergy production technologies: A case study. J. Clean. Prod. 2021, 290, 125805. [Google Scholar] [CrossRef]
  8. Yıldızbaşı, A.; Ünlü, V. Performance evaluation of SMEs towards Industry 4.0 using fuzzy group decision making methods. SN Appl. Sci. 2020, 2, 355. [Google Scholar] [CrossRef]
  9. Navarrete, F.E.R.; Cabrera, N.Y.R. El panorama de la industria 4.0 en el marco de la formación profesional del talento humano en salud. REDIIS/Rev. Investig. Innov. Salud 2018, 2, 99–111. [Google Scholar] [CrossRef]
  10. Hoßfeld, S. Optimization on decision making driven by digitalization. Econ. World 2017, 5, 120–128. [Google Scholar] [CrossRef]
  11. Gallo Viracucha, E.V. Análisis de las Políticas Públicas de Salud Sexual y Reproductiva y su Incidencia en el Embarazo Adolescente en Ecuador para el Período 2011–2018. Bachelor’s Thesis, The Central University of Ecuador, Quito, Ecuador, 2020. [Google Scholar] [CrossRef]
  12. Kulkarni, A.; Halder, S. A simulation-based decision-making framework for construction supply chain management (SCM). Asian J. Civ. Eng. 2020, 21, 229–241. [Google Scholar] [CrossRef]
  13. Gunal, M.M. Simulation and the fourth industrial revolution. In Simulation for Industry 4.0: Past, Present, and Future; Springer: Cham, Switzerland, 2019; pp. 1–17. [Google Scholar] [CrossRef]
  14. Hopfe, C.J.; McLeod, R.S. Enhancing resilient community decision-making using building performance simulation. Build. Environ. 2021, 188, 107398. [Google Scholar] [CrossRef]
  15. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233. [Google Scholar] [CrossRef]
  16. Maier, H.R.; Razavi, S.; Kapelan, Z.; Matott, L.S.; Kasprzyk, J.; Tolson, B.A. Introductory overview: Optimization using evolutionary algorithms and other metaheuristics. Environ. Model. Softw. 2019, 114, 195–213. [Google Scholar] [CrossRef]
  17. Yang, Z.; Li, X.; He, P. A decision algorithm for selecting the design scheme for blockchain-based agricultural product traceability system in q-rung orthopair fuzzy environment. J. Clean. Prod. 2021, 290, 125191. [Google Scholar] [CrossRef]
  18. Brauers, W.K.M. Multi-objective seaport planning by MOORA decision making. Ann. Oper. Res. 2013, 206, 39–58. [Google Scholar] [CrossRef]
  19. Prayoga, N.D.; Zarlis, M.; Efendi, S. Weighting comparison analysis ROC and Full consistency Method (FUCOM) on MOORA in decision making. Sink. J. Dan Penelit. Tek. Inform. 2022, 7, 2024–2032. [Google Scholar] [CrossRef]
  20. Vinogradova, I. Multi-attribute decision-making methods as a part of mathematical optimization. Mathematics 2019, 7, 915. [Google Scholar] [CrossRef]
  21. Chakraborty, S. Applications of the MOORA method for decision making in manufacturing environment. Int. J. Adv. Manuf. Technol. 2011, 54, 1155–1166. [Google Scholar] [CrossRef]
  22. Sulistianto, S.; Sudradjat, A.; Setiawan, S.; Supendar, H.; Handrianto, Y. Comparison of Job Position Based Promotion Using: VIKOR, ELECTRE and Promethee Method. In Proceedings of the 2018 Third International Conference on Informatics and Computing (ICIC), Palembang, Indonesia, 17–18 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–7. [Google Scholar] [CrossRef]
  23. Siregar, V.M.M.; Tampubolon, M.R.; Parapat, E.P.S.; Malau, E.I.; Hutagalung, D.S. Decision support system for selection technique using MOORA method. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1088, 012022. [Google Scholar] [CrossRef]
  24. Keith, A.J.; Ahner, D.K. A survey of decision making and optimization under uncertainty. Ann. Oper. Res. 2021, 300, 319–353. [Google Scholar] [CrossRef]
  25. Rai, R.S.; Bajpai, V. Optimization in Manufacturing Systems Using Evolutionary Techniques. In Optimization of Manufacturing Processes; Springer: Cham, Switzerland, 2020; pp. 201–229. [Google Scholar] [CrossRef]
  26. Tzanetos, A.; Dounias, G. Nature inspired optimization algorithms or simply variations of metaheuristics? Artif. Intell. Rev. 2021, 54, 1841–1862. [Google Scholar] [CrossRef]
  27. Eskandarpour, M.; Ouelhadj, D.; Fletcher, G. Decision making using metaheuristic optimization methods in sustainable transportation. In Sustainable Transportation and Smart Logistics; Elsevier: Amsterdam, The Netherlands, 2019; pp. 285–304. [Google Scholar] [CrossRef]
  28. Grillone, B.; Danov, S.; Sumper, A.; Cipriano, J.; Mor, G. A review of deterministic and data-driven methods to quantify energy efficiency savings and to predict retrofitting scenarios in buildings. Renew. Sustain. Energy Rev. 2020, 131, 110027. [Google Scholar] [CrossRef]
  29. Bhowmik, S.; Jagadish, C.; Gupta, K. Introduction. In Modeling and Optimization of Advanced Manufacturing Processes; Springer: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
  30. Li, M.; Chen, H.; Wang, X.; Zhong, N.; Lu, S. An improved particle swarm optimization algorithm with adaptive inertia weights. Int. J. Inf. Technol. Decis. Mak. 2019, 18, 833–866. [Google Scholar] [CrossRef]
  31. Sharma, A.; Shoval, S.; Sharma, A.; Pandey, J.K. Path planning for multiple targets interception by the swarm of UAVs based on swarm intelligence algorithms: A review. IETE Tech. Rev. 2022, 39, 675–697. [Google Scholar] [CrossRef]
  32. Bdrany, A.; Sadkhan, S.B. Decision Making Approaches in Cognitive Radio-Status, Challenges and Future Trends. In Proceedings of the 2020 International Conference on Advanced Science and Engineering (ICOASE), Duhok, Iraq, 23–24 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 195–198. [Google Scholar] [CrossRef]
  33. Chen, Y.; Hu, S.; Li, A.; Cao, Y.; Zhao, Y.; Ming, W. Parameters Optimization of Electrical Discharge Machining Process Using Swarm Intelligence: A Review. Metals 2023, 13, 839. [Google Scholar] [CrossRef]
  34. Jamwal, A.; Agrawal, R.; Sharma, M.; Kumar, V. Review on multi-criteria decision analysis in sustainable manufacturing decision making. Int. J. Sustain. Eng. 2021, 14, 202–225. [Google Scholar] [CrossRef]
  35. Wang, J.; Wang, X.; Li, X.; Yi, J. A hybrid particle swarm optimization algorithm with dynamic adjustment of inertia weight based on a new feature selection method to optimize SVM parameters. Entropy 2023, 25, 531. [Google Scholar] [CrossRef]
  36. Taherdoost, H.; Madanchian, M. Multi-Criteria Decision Making (MCDM) Methods and Concepts. Encyclopedia 2023, 3, 77–87. [Google Scholar] [CrossRef]
  37. Pazzini, M.; Corticelli, R.; Lantieri, C.; Mazzoli, C. Multi-Criteria Analysis and Decision-Making Approach for the Urban Regeneration: The Application to the Rimini Canal Port (Italy). Sustainability 2022, 15, 772. [Google Scholar] [CrossRef]
  38. Hirolikar, D.S.; Tirpude, R.R.; Rithik, B.; Varghese, S.; Saraswat, S.; Jayalwal, A. Hybrid Algorithms based Software Development System using Artificial Intelligence for the Business Development. In Proceedings of the 2023 3rd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 12–13 May 2023; pp. 529–532. [Google Scholar] [CrossRef]
  39. Raman, R.; Kafila; Gehlot, A.; Shah, H.; Sanjay, C.P.; Ponnusamy, R. Financial Forecasting Through Hybrid Algorithms of Machine Learning and Deep Learning. In Proceedings of the 2023 3rd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 12–13 May 2023; pp. 1705–1709. [Google Scholar] [CrossRef]
  40. Chakri, A.; Khelif, R.; Benouaret, M.; Yang, X.S. New directional bat algorithm for continuous optimization problems. Expert Syst. Appl. 2017, 69, 159–175. [Google Scholar] [CrossRef]
  41. Banerjee, A.; Singh, D.; Sahana, S.; Nath, I. Impacts of metaheuristic and swarm intelligence approach in optimization. In Cognitive Big Data Intelligence with a Metaheuristic Approach; Elsevier: Amsterdam, The Netherlands, 2022; pp. 71–99. [Google Scholar] [CrossRef]
  42. Ogundoyin, S.O.; Kamil, I.A. Optimization techniques and applications in fog computing: An exhaustive survey. Swarm Evol. Comput. 2021, 66, 100937. [Google Scholar] [CrossRef]
  43. Yang, X.S.; Hossein Gandomi, A. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  44. Bansal, J.C. Particle Swarm Optimization. In Evolutionary and Swarm Intelligence Algorithms; Springer International Publishing: Cham, Switzerland, 2019; pp. 11–23. [Google Scholar] [CrossRef]
  45. Johnvictor, A.C.; Durgamahanthi, V.; Pariti Venkata, R.M.; Jethi, N. Critical review of bio-inspired optimization techniques. Wiley Interdiscip. Rev. Comput. Stat. 2022, 14, e1528. [Google Scholar] [CrossRef]
  46. Ramírez-Ochoa, D.D.; Pérez-Domínguez, L.A.; Martínez-Gómez, E.A.; Luviano-Cruz, D. PSO, a swarm intelligence-based evolutionary algorithm as a decision-making strategy: A review. Symmetry 2022, 14, 455. [Google Scholar] [CrossRef]
  47. Glover, F.; McMillan, C. The general employee scheduling problem. An integration of MS and AI. Comput. Oper. Res. 1986, 13, 563–573. [Google Scholar] [CrossRef]
  48. Jahandideh-Tehrani, M.; Bozorg-Haddad, O.; Loáiciga, H.A. Application of particle swarm optimization to water management: An introduction and overview. Environ. Monit. Assess. 2020, 192, 281. [Google Scholar] [CrossRef] [PubMed]
  49. Dziwiński, P.; Bartczuk, L. A new hybrid particle swarm optimization and genetic algorithm method controlled by fuzzy logic. IEEE Trans. Fuzzy Syst. 2019, 28, 1140–1154. [Google Scholar] [CrossRef]
  50. Geng, N.; Meng, Q.; Gong, D.; Chung, P.W. How good are distributed allocation algorithms for solving urban search and rescue problems? A comparative study with centralized algorithms. IEEE Trans. Autom. Sci. Eng. 2018, 16, 478–485. [Google Scholar] [CrossRef]
  51. Huang, K.W.; Wu, Z.X.; Peng, H.W.; Tsai, M.C.; Hung, Y.C.; Lu, Y.C. Memetic particle gravitation optimization algorithm for solving clustering problems. IEEE Access 2019, 7, 80950–80968. [Google Scholar] [CrossRef]
  52. Zhen, L.; Liu, Y.; Dongsheng, W.; Wei, Z. Parameter estimation of software reliability model and prediction based on hybrid wolf pack algorithm and particle swarm optimization. IEEE Access 2020, 8, 29354–29369. [Google Scholar] [CrossRef]
  53. Wu, C.; Gao, J.; Barnes, D. Sustainable partner selection and order allocation for strategic items: An integrated multi-stage decision-making model. Int. J. Prod. Res. 2022, 61, 1076–1100. [Google Scholar] [CrossRef]
  54. Yifei, T.; Meng, Z.; Jingwei, L.; Dongbo, L.; Yulin, W. Research on intelligent welding robot path optimization based on GA and PSO algorithms. IEEE Access 2018, 6, 65397–65404. [Google Scholar] [CrossRef]
  55. Lei, J.; Yang, C.; Zhang, H.; Liu, C.; Yan, D.; Xiao, G.; He, Z.; Chen, Z.; Yu, T. Radiation shielding optimization design research based on bare-bones particle swarm optimization algorithm. Nucl. Eng. Technol. 2023, 55, 2215–2221. [Google Scholar] [CrossRef]
  56. Sooriamoorthy, D.; Manoharan, A.; Sivanesan, S.K.; Lun, S.K.; Cheong, A.C.H.; Perumal, S.K.S. Optimizing Solar Maximum Power Point Tracking with Adaptive PSO: A Comparative Analysis of Inertia Weight and Acceleration Coefficient Strategies. Results Eng. 2025, 27, 106429. [Google Scholar] [CrossRef]
  57. Awotwe, S.; Dufera, A.T.; Yi, W. Recent advancement of metaheuristic optimization algorithms-based learning for breast cancer diagnosis: A review. Memetic Comput. 2025, 17, 31. [Google Scholar] [CrossRef]
  58. Jafari, S.; Byun, Y.C. AI-driven state of power prediction in battery systems: A PSO-optimized deep learning approach with XAI. Energy 2025, 331, 136764. [Google Scholar] [CrossRef]
  59. Priyadarshi, R.; Kumar, R.R. Evolution of swarm intelligence: A systematic review of particle swarm and ant colony optimization approaches in modern research. Arch. Comput. Methods Eng. 2025, 32, 3609–3650. [Google Scholar] [CrossRef]
  60. Sinde, G.W. Neural Networks for Ingress Monitoring; Trilithic, Inc.: Indianapolis, IN, USA, 2009. [Google Scholar]
  61. Bin, H.; Yi, W. Method, Apparatus, and System for Positioning Terminal Device; Huawei Technologies Co., Ltd.: Shenzhen, China, 2021. [Google Scholar]
  62. Rahman, H.F.; Janardhanan, M.N.; Nielsen, I.E. Real-time order acceptance and scheduling problems in a flow shop environment using hybrid GA-PSO algorithm. IEEE Access 2019, 7, 112742–112755. [Google Scholar] [CrossRef]
  63. Yang, X.S. Bat algorithm for multi-objective optimisation. Int. J. Bio-Inspired Comput. 2011, 3, 267–274. [Google Scholar] [CrossRef]
  64. Kumar, Y.; Kaur, A. Variants of bat algorithm for solving partitional clustering problems. Eng. Comput. 2021, 38, 1973–1999. [Google Scholar] [CrossRef]
  65. Singh, A.; Meyyazhagan, A.; Verma, S. Nature-Inspired Computing: Bat Echolocation to BAT Algorithm. In Nature-Inspired Intelligent Computing Techniques in Bioinformatics; Springer: Singapore, 2022; pp. 163–174. [Google Scholar] [CrossRef]
  66. Alsalibi, B.; Abualigah, L.; Khader, A.T. A novel bat algorithm with dynamic membrane structure for optimization problems. Appl. Intell. 2021, 51, 1992–2017. [Google Scholar] [CrossRef]
  67. Zebari, A.Y.; Almufti, S.M.; Abdulrahman, C.M. Bat algorithm (BA): Review, applications and modifications. Int. J. Sci. World 2020, 8, 1–7. [Google Scholar] [CrossRef]
  68. Chakraborty, S.; Jana, T.; Paul, S. On the application of multi criteria decision making technique for multi-response optimization of metal cutting process. Intell. Decis. Technol. 2019, 13, 101–115. [Google Scholar] [CrossRef]
  69. Zavadskas, E.K.; Antucheviciene, J.; Razavi Hajiagha, S.H.; Hashemi, S.S. The interval-valued intuitionistic fuzzy MULTIMOORA method for group decision making in engineering. Math. Probl. Eng. 2015, 2015, 560690. [Google Scholar] [CrossRef]
  70. Wang, P.; Li, Y.; Wang, Y.H.; Zhu, Z.Q. A new method based on TOPSIS and response surface method for MCDM problems with interval numbers. Math. Probl. Eng. 2015, 2015, 938535. [Google Scholar] [CrossRef]
  71. Maddu, J.; Karrolla, B.; Shaik, R.U.; Vuppala, S. Comparative Study of Optimization Models for Evaluation of EDM Process Parameters on Ti-6Al-4V. Modelling 2021, 2, 555–566. [Google Scholar] [CrossRef]
  72. Lumbantoruan, G.; Purba, E.N. Analisis Nilai Market Jaminan Pinjaman Dengan Metode Moora. METHOMIKA J. Manaj. Inform. Komputerisasi Akunt. 2022, 6, 199–204. [Google Scholar] [CrossRef]
  73. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar] [CrossRef]
  74. Brodny, J.; Tutak, M. Analyzing the level of digitalization among the enterprises of the European Union member states and their impact on economic growth. J. Open Innov. Technol. Mark. Complex. 2022, 8, 70. [Google Scholar] [CrossRef]
  75. Nainggolan, A.; Siregar, A.; Mesran, M. Sistem Pendukung Keputusan Penilaian Indeks Kinerja Sales Marketing Menerapkan Metode MOORA. Hello World J. Ilmu Komput. 2022, 1, 121–129. [Google Scholar] [CrossRef]
  76. Liu, X.; Fan, X.; Guo, Y.; Cao, Y.; Li, C. Multi-objective optimization of GFRP injection molding process parameters, using GA-ELM, MOFA, and GRA-TOPSIS. Trans. Can. Soc. Mech. Eng. 2021, 46, 37–49. [Google Scholar] [CrossRef]
  77. Etedali, S. Ranking of design scenarios of TMD for seismically excited structures using TOPSIS. Front. Struct. Civ. Eng. 2020, 14, 1372–1386. [Google Scholar] [CrossRef]
  78. Wang, L.; Zhou, Y.; Li, Q.; Zuo, Q.; Gao, H.; Liu, J.; Tian, Y. Forest Land Quality Evaluation and the Protection Zoning of Subtropical Humid Evergreen Broadleaf Forest Region Based on the PSOTOPSIS Model and the Local Indicator of Spatial Association: A Case Study of Hefeng County, Hubei Province, China. Forests 2021, 12, 325. [Google Scholar] [CrossRef]
  79. Macagno, E.O. Historico-critical review of dimensional analysis. J. Frankl. Inst. 1971, 292, 391–402. [Google Scholar] [CrossRef]
  80. Conejo, A.N. Fundamentals of Dimensional Analysis: Theory and Applications in Metallurgy; Springer Nature: London, UK, 2021. [Google Scholar]
  81. Shen, W.; Davis, T.; Lin, D.K.; Nachtsheim, C.J. Dimensional analysis and its applications in statistics. J. Qual. Technol. 2014, 46, 185–198. [Google Scholar] [CrossRef]
  82. Villa Silva, A.J.; Pérez-Domínguez, L.; Martínez Gómez, E.; Luviano-Cruz, D.; Valles-Rosales, D. Dimensional Analysis under Linguistic Pythagorean Fuzzy Set. Symmetry 2021, 13, 440. [Google Scholar] [CrossRef]
  83. Villa Silva, A.J.; Pérez Dominguez, L.A.; Martínez Gómez, E.; Alvarado-Iniesta, A.; Pérez Olguín, I.J.C. Dimensional analysis under pythagorean fuzzy approach for supplier selection. Symmetry 2019, 11, 336. [Google Scholar] [CrossRef]
  84. Alcaraz, J.L.G.; Iniesta, A.A.; Macías, A.A.M. Selección de proveedores basada en análisis dimensional. Contaduría y Adm. 2013, 58, 249–278. [Google Scholar] [CrossRef]
  85. Pérez-Domínguez, L. Intuitionistic fuzzy dimensional analysis for multi-criteria decision making. Iran. J. Fuzzy Syst. 2018, 15, 17–40. [Google Scholar] [CrossRef]
  86. Venter, G.; Sobieszczanski-Sobieski, J. Particle Swarm Optimization. AIAA J. 2003, 41, 1583–1589. [Google Scholar] [CrossRef]
  87. Son, N.H.; Hieu, T.T.; Uyen, V.T.N. DOE-MARCOS: A New Approach to Multi-Criteria Decision Making. J. Appl. Eng. Sci. 2023, 21, 263–274. [Google Scholar] [CrossRef]
  88. Pace, F.; Santilano, A.; Godio, A. A review of geophysical modeling based on particle swarm optimization. Surv. Geophys. 2021, 42, 505–549. [Google Scholar] [CrossRef]
  89. Kiani, A.T.; Nadeem, M.F.; Ahmed, A.; Khan, I.A.; Alkhammash, H.I.; Sajjad, I.A.; Hussain, B. An improved particle swarm optimization with chaotic inertia weight and acceleration coefficients for optimal extraction of PV models parameters. Energies 2021, 14, 2980. [Google Scholar] [CrossRef]
  90. Du, Y.; Xu, F. A hybrid multi-step probability selection particle swarm optimization with dynamic chaotic inertial weight and acceleration coefficients for numerical function optimization. Symmetry 2020, 12, 922. [Google Scholar] [CrossRef]
  91. Sahu, K.; Alzahrani, F.A.; Srivastava, R.; Kumar, R. Hesitant fuzzy sets based symmetrical model of decision-making for estimating the durability of web application. Symmetry 2020, 12, 1770. [Google Scholar] [CrossRef]
  92. Xu, G.; Yu, G. On convergence analysis of particle swarm optimization algorithm. J. Comput. Appl. Math. 2018, 333, 65–73. [Google Scholar] [CrossRef]
  93. Öktem, H.; Shinde, D. Determination of Optimal Process Parameters for Plastic Injection Molding of Polymer Materials Using Multi-Objective Optimization. J. Mater. Eng. Perform. 2021, 30, 8616–8632. [Google Scholar] [CrossRef]
  94. Mena Ledezma, J.; Posada Correa, J.C. Definition and evaluation of good manufacturing practices for plastic injection molding. Energy Effic. 2023, 16, 24. [Google Scholar] [CrossRef]
  95. Nguyen, H.T.; Nguyen, M.Q. A Numerical Simulation and Multi-objective Optimization for the Plastic Injection Molding of a Centrifugal Pump Casing. IOP Conf. Ser. Earth Environ. Sci. 2023, 1278, 012026. [Google Scholar] [CrossRef]
Figure 1. Article structure.
Figure 1. Article structure.
Applsci 15 08885 g001
Figure 2. Classification of optimization methods.
Figure 2. Classification of optimization methods.
Applsci 15 08885 g002
Figure 3. Flowchart with the classic PSO algorithm procedure.
Figure 3. Flowchart with the classic PSO algorithm procedure.
Applsci 15 08885 g003
Figure 4. Structure of the MOORA method.
Figure 4. Structure of the MOORA method.
Applsci 15 08885 g004
Figure 5. Structure of the TOPSIS method.
Figure 5. Structure of the TOPSIS method.
Applsci 15 08885 g005
Figure 6. Structure of the DA method.
Figure 6. Structure of the DA method.
Applsci 15 08885 g006
Figure 7. Methodology SADE.
Figure 7. Methodology SADE.
Applsci 15 08885 g007
Figure 8. Experiment 1-P1(MOORA-PSO): degrees of preference Df 1 and ω = 0.7 are employed. In tests A 1 A 3 , C C 1 is utilized. In A 4 A 6 , C C 2 is applied, and in A 7 A 9 , C C 5 is used.
Figure 8. Experiment 1-P1(MOORA-PSO): degrees of preference Df 1 and ω = 0.7 are employed. In tests A 1 A 3 , C C 1 is utilized. In A 4 A 6 , C C 2 is applied, and in A 7 A 9 , C C 5 is used.
Applsci 15 08885 g008
Figure 9. Experiment 1-P1(MOORA-PSO): degrees of preference Df 1 and ω = 0.7 are employed. In tests A 10 A 12 , C C 3 is utilized. In A 13 A 15 , C C 4 is used.
Figure 9. Experiment 1-P1(MOORA-PSO): degrees of preference Df 1 and ω = 0.7 are employed. In tests A 10 A 12 , C C 3 is utilized. In A 13 A 15 , C C 4 is used.
Applsci 15 08885 g009
Figure 10. Experiment 1-P1(MOORA-PSO): Df 2 and ω = 0.7 are used. In the tests B 1 B 3 , we use C C 1 . In B 4 B 6 , C C 2 is used. And in B 7 B 9 , C C 5 is used.
Figure 10. Experiment 1-P1(MOORA-PSO): Df 2 and ω = 0.7 are used. In the tests B 1 B 3 , we use C C 1 . In B 4 B 6 , C C 2 is used. And in B 7 B 9 , C C 5 is used.
Applsci 15 08885 g010
Figure 11. Experiment 1-P1(MOORA-PSO): Df 2 and ω = 0.7 are used. In tests B 10 B 12 , we use C C 3 . In B 13 B 15 , C C 4 is used.
Figure 11. Experiment 1-P1(MOORA-PSO): Df 2 and ω = 0.7 are used. In tests B 10 B 12 , we use C C 3 . In B 13 B 15 , C C 4 is used.
Applsci 15 08885 g011
Figure 12. Experiment 1-P1(MOORA-PSO): Df 3 and ω = 0.7 are used. In tests C 1 C 3 , we use C C 1 . In C 4 C 6 , C C 2 is used. And in C 7 C 9 , C C 5 is used.
Figure 12. Experiment 1-P1(MOORA-PSO): Df 3 and ω = 0.7 are used. In tests C 1 C 3 , we use C C 1 . In C 4 C 6 , C C 2 is used. And in C 7 C 9 , C C 5 is used.
Applsci 15 08885 g012
Figure 13. Experiment 1-P1(MOORA-PSO): Df 3 and ω = 0.7 are used. In tests C 10 C 12 , we use C C 3 . In C 13 C 15 , C C 4 is used.
Figure 13. Experiment 1-P1(MOORA-PSO): Df 3 and ω = 0.7 are used. In tests C 10 C 12 , we use C C 3 . In C 13 C 15 , C C 4 is used.
Applsci 15 08885 g013
Figure 14. Experiment 1-P2 (MOORA-PSO): Df 1 and ω = 0.3 are employed. In tests D 1 D 3 , C C 1 is utilized. In D 4 D 6 , C C 2 is applied, and in D 7 D 9 , C C 5 is used.
Figure 14. Experiment 1-P2 (MOORA-PSO): Df 1 and ω = 0.3 are employed. In tests D 1 D 3 , C C 1 is utilized. In D 4 D 6 , C C 2 is applied, and in D 7 D 9 , C C 5 is used.
Applsci 15 08885 g014
Figure 15. Experiment 1-P2(MOORA-PSO): Df 1 and ω = 0.3 are employed. In tests D 10 D 12 , C C 3 is utilized. In D 13 D 15 , C C 4 is applied.
Figure 15. Experiment 1-P2(MOORA-PSO): Df 1 and ω = 0.3 are employed. In tests D 10 D 12 , C C 3 is utilized. In D 13 D 15 , C C 4 is applied.
Applsci 15 08885 g015
Figure 16. Experiment 1-P2 (MOORA-PSO): Df 3 and ω = 0.3 are employed. In tests E 1 E 3 , C C 1 is utilized. In E 4 E 6 , C C 2 is applied, and in E 7 E 9 , C C 5 is used.
Figure 16. Experiment 1-P2 (MOORA-PSO): Df 3 and ω = 0.3 are employed. In tests E 1 E 3 , C C 1 is utilized. In E 4 E 6 , C C 2 is applied, and in E 7 E 9 , C C 5 is used.
Applsci 15 08885 g016
Figure 17. Experiment 1-P2(MOORA-PSO): Df 3 and ω = 0.3 are employed. In tests E 10 E 12 , C C 3 is utilized. In E 13 E 15 , C C 4 is applied.
Figure 17. Experiment 1-P2(MOORA-PSO): Df 3 and ω = 0.3 are employed. In tests E 10 E 12 , C C 3 is utilized. In E 13 E 15 , C C 4 is applied.
Applsci 15 08885 g017
Figure 18. Experiment 1-P2 (MOORA-PSO): Df 2 and ω = 0.3 are employed. In tests F 1 F 3 , C C 1 is utilized. In F 4 F 6 , C C 2 is applied, and in F 7 F 9 , C C 5 is used.
Figure 18. Experiment 1-P2 (MOORA-PSO): Df 2 and ω = 0.3 are employed. In tests F 1 F 3 , C C 1 is utilized. In F 4 F 6 , C C 2 is applied, and in F 7 F 9 , C C 5 is used.
Applsci 15 08885 g018
Figure 19. Experiment 1-P2(MOORA-PSO), Df 2 and ω = 0.3 are employed. In tests F 10 F 12 , C C 3 is utilized. In F 13 F 15 , C C 4 is applied.
Figure 19. Experiment 1-P2(MOORA-PSO), Df 2 and ω = 0.3 are employed. In tests F 10 F 12 , C C 3 is utilized. In F 13 F 15 , C C 4 is applied.
Applsci 15 08885 g019
Figure 20. Applying TOPSIS-PSO: Df1 and ω = 0.7. In tests A1–A3, we use c1 = c2 = 2.0. In tests A4–A6, c1 = c2 = 1.5. And in tests A7–A9, c1 = c2 = 2.5.
Figure 21. Applying TOPSIS-PSO: Df1 and ω = 0.7. In tests A10–A12, c1 = 1.5, c2 = 2.5. And in tests A13–A15, c1 = 2.5, c2 = 1.5.
Figure 22. Applying TOPSIS-PSO: Df2 and ω = 0.7. In tests B1–B3, we use c1 = c2 = 2.0. In tests B4–B6, c1 = c2 = 1.5. And in tests B7–B9, c1 = c2 = 2.5.
Figure 23. Applying TOPSIS-PSO: Df2 and ω = 0.7. In tests B10–B12, c1 = 1.5, c2 = 2.5. And in tests B13–B15, c1 = 2.5, c2 = 1.5.
Figure 24. Applying TOPSIS-PSO: Df3 and ω = 0.7. In tests C1–C3, we use c1 = c2 = 2.0. In tests C4–C6, c1 = c2 = 1.5. In tests C7–C9, c1 = c2 = 2.5.
Figure 25. Applying TOPSIS-PSO: Df3 and ω = 0.7. In tests C10–C12, c1 = 1.5, c2 = 2.5. And in tests C13–C15, c1 = 2.5, c2 = 1.5.
Figure 26. Applying TOPSIS-PSO: Df1 and ω = 0.3. In tests D1–D3, we use c1 = c2 = 2.0. In tests D4–D6, c1 = c2 = 1.5. And in tests D7–D9, c1 = c2 = 2.5.
Figure 27. Applying TOPSIS-PSO: Df1 and ω = 0.3. In tests D10–D12, c1 = 1.5, c2 = 2.5. And in tests D13–D15, c1 = 2.5, c2 = 1.5.
Figure 28. Applying TOPSIS-PSO: Df3 and ω = 0.3. In tests E1–E3, we use c1 = c2 = 2.0. In tests E4–E6, c1 = c2 = 1.5. In tests E7–E9, c1 = c2 = 2.5.
Figure 29. Applying TOPSIS-PSO: Df3 and ω = 0.3. In tests E10–E12, c1 = 1.5, c2 = 2.5. And in tests E13–E15, c1 = 2.5, c2 = 1.5.
Figure 30. Applying TOPSIS-PSO: Df2 and ω = 0.3. In tests F1–F3, we use c1 = c2 = 2.0. In tests F4–F6, c1 = c2 = 1.5. And in tests F7–F9, c1 = c2 = 2.5.
Figure 31. Applying TOPSIS-PSO: Df2 and ω = 0.3. In tests F10–F12, c1 = 1.5, c2 = 2.5. And in tests F13–F15, c1 = 2.5, c2 = 1.5.
Figure 32. Applying DA-PSO: Df1 and ω = 0.7. In tests A1–A3, we use c1 = c2 = 2.0. In tests A4–A6, c1 = c2 = 1.5. And in tests A7–A9, c1 = c2 = 2.5.
Figure 33. Applying DA-PSO: Df1 and ω = 0.7. In tests A10–A12, c1 = 2.5, c2 = 1.5. And in tests A13–A15, c1 = c2 = 2.5.
Figure 34. Applying DA-PSO: Degrees of preference Df2 and ω = 0.7. In tests B1–B3, we use c1 = c2 = 2.0. In tests B4–B6, c1 = c2 = 1.5. And in tests B7–B9, c1 = c2 = 2.5.
Figure 35. Applying DA-PSO: Degrees of preference Df2 and ω = 0.7. In tests B10–B12, c1 = 2.5, c2 = 1.5. And in tests B13–B15, c1 = c2 = 2.5.
Figure 36. Applying DA-PSO: Df3 and ω = 0.7. In tests C1–C3, we use c1 = c2 = 2.0. In tests C4–C6, c1 = c2 = 1.5. And in tests C7–C9, c1 = 1.5, c2 = 2.5.
Figure 37. Applying DA-PSO: Df3 and ω = 0.7. In tests C10–C12, c1 = 2.5, c2 = 1.5. And in tests C13–C15, c1 = c2 = 2.5.
Figure 38. Applying DA-PSO: Df3 and ω = 0.3. In tests D1–D3, we use c1 = c2 = 2.0. In tests D4–D6, c1 = c2 = 1.5. And in tests D7–D9, c1 = c2 = 2.5.
Figure 39. Applying DA-PSO: Df3 and ω = 0.3. In tests D10–D12, we use c1 = 1.5, c2 = 2.5. And in tests D13–D15, c1 = 2.5, c2 = 1.5.
Figure 40. Applying DA-PSO: Df3 and ω = 0.3. In tests E1–E3, we use c1 = c2 = 2.0. In tests E4–E6, c1 = c2 = 1.5. And in tests E7–E9, c1 = c2 = 2.5.
Figure 41. Applying DA-PSO: Df3 and ω = 0.3. In tests E10–E12, c1 = 2.5, c2 = 1.5. And in tests E13–E15, c1 = c2 = 2.5.
Figure 42. Applying DA-PSO: Df2 and ω = 0.3. In tests F1–F3, we use c1 = c2 = 2.0. In tests F4–F6, c1 = c2 = 1.5. And in tests F7–F9, c1 = c2 = 2.5.
Figure 43. Applying DA-PSO: Df2 and ω = 0.3. In tests F10–F12, c1 = 1.5, c2 = 2.5. And in tests F13–F15, c1 = 2.5, c2 = 1.5.
Figure 44. Applying PSO: Degrees of preference Df1 and ω = 0.7. In tests A1–A3, we use c1 = c2 = 2.0. In tests A4–A6, c1 = c2 = 1.5. And in tests A7–A9, c1 = c2 = 2.5.
Figure 45. Applying PSO: Degrees of preference Df1 and ω = 0.7. In tests A10–A12, c1 = 2.5, c2 = 1.5. And in tests A13–A15, c1 = c2 = 2.5.
Figure 46. Applying PSO: Degrees of preference Df2 and ω = 0.7. In tests B1–B3, we use c1 = c2 = 2.0. In tests B4–B6, c1 = c2 = 1.5. And in tests B7–B9, c1 = c2 = 2.5.
Figure 47. Applying PSO: Degrees of preference Df2 and ω = 0.7. In tests B10–B12, c1 = 2.5, c2 = 1.5. And in tests B13–B15, c1 = c2 = 2.5.
Figure 48. Applying PSO: Df3 and ω = 0.7. In tests C1–C3, we use c1 = c2 = 2.0. In tests C4–C6, c1 = c2 = 1.5. And in tests C7–C9, c1 = 1.5, c2 = 2.5.
Figure 49. Applying PSO: Df3 and ω = 0.7. In tests C10–C12, c1 = 2.5, c2 = 1.5. And in tests C13–C15, c1 = c2 = 2.5.
Figure 50. Applying PSO: Df1 and ω = 0.3. In tests D1–D3, we use c1 = c2 = 2.0. In tests D4–D6, c1 = c2 = 1.5. And in tests D7–D9, c1 = c2 = 2.5.
Figure 51. Applying PSO: Degrees of preference Df1 and ω = 0.3. In tests D10–D12, c1 = 2.5, c2 = 1.5. And in tests D13–D15, c1 = c2 = 2.5.
Figure 52. Applying PSO: Degrees of preference Df3 and ω = 0.3. In tests E1–E3, we use c1 = c2 = 2.0. In tests E4–E6, c1 = c2 = 1.5. And in tests E7–E9, c1 = c2 = 2.5.
Figure 53. Applying PSO: Df3 and ω = 0.3. In tests E10–E12, c1 = 2.5, c2 = 1.5. And in tests E13–E15, c1 = c2 = 2.5.
Figure 54. Applying PSO: Df2 and ω = 0.3. In tests F1–F3, we use c1 = c2 = 2.0. In tests F4–F6, c1 = c2 = 1.5. And in tests F7–F9, c1 = c2 = 2.5.
Figure 55. Applying PSO: Df2 and ω = 0.3. In tests F10–F12, c1 = 1.5, c2 = 2.5. And in tests F13–F15, c1 = 2.5, c2 = 1.5.
Figure 56. Solutions found by each algorithm.
Figure 57. Solutions found by each algorithm, for 50 iterations.
Figure 58. The best solution found for the case study.
Figure 59. Results using the BA algorithm.
Figure 60. Results using the BA algorithm.
Table 1. Degrees of preference for each criterion.

        C1     C2     C3     C4     C5
Df1     0.400  0.200  0.030  0.070  0.300
Df2     0.200  0.200  0.200  0.200  0.200
Df3     0.123  0.099  0.043  0.343  0.392
Table 2. Assigned values for acceleration coefficients.

        c1    c2
CC1     2.0   2.0
CC2     1.5   1.5
CC3     1.5   2.5
CC4     2.5   1.5
CC5     2.5   2.5
Table 3. Parameters used in the experiments. Each experiment combines one degree of preference (Df1, Df2, or Df3) with one acceleration-coefficient setting (CC1–CC5) and one inertia weight (ω = 0.3 or ω = 0.7); all combinations are exercised across the test series A–F.
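These parameter sets feed the canonical PSO velocity and position update, v ← ωv + c1·r1·(pbest − x) + c2·r2·(gbest − x). The minimal Python sketch below shows where ω and the CC pairs from Tables 2 and 3 act; all names are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, omega, c1, c2, rng):
    """One canonical PSO update: inertia, cognitive, and social terms."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

# Example with one configuration from Tables 2 and 3: CC1 (c1 = c2 = 2.0), omega = 0.7.
rng = np.random.default_rng(0)
x = rng.random((20, 5))        # 20 particles in a 5-dimensional search space
v = np.zeros_like(x)
x, v = pso_step(x, v, pbest=x.copy(), gbest=x[0], omega=0.7, c1=2.0, c2=2.0, rng=rng)
```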
Table 4. The matrix of criteria and alternatives of the experiment.

      C1     C2     C3     C4     C5
A1    0.048  0.047  0.070  0.087  0.190
A2    0.053  0.052  0.066  0.081  0.058
A3    0.057  0.057  0.066  0.076  0.022
A4    0.062  0.062  0.063  0.058  0.007
A5    0.066  0.066  0.070  0.085  0.004
A6    0.070  0.071  0.066  0.058  0.003
A7    0.075  0.075  0.066  0.047  0.002
A8    0.079  0.079  0.066  0.035  0.002
A9    0.083  0.083  0.066  0.051  0.000
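To make the MOORA side of the hybrid concrete, the sketch below ranks the Table 4 alternatives with the Df1 weights from Table 1 using the standard MOORA ratio system. Treating all five criteria as benefit-type is our assumption for illustration; the paper does not state the benefit/cost split at this point.

```python
import numpy as np

# Decision matrix from Table 4 (alternatives A1-A9, criteria C1-C5).
X = np.array([
    [0.048, 0.047, 0.070, 0.087, 0.190],
    [0.053, 0.052, 0.066, 0.081, 0.058],
    [0.057, 0.057, 0.066, 0.076, 0.022],
    [0.062, 0.062, 0.063, 0.058, 0.007],
    [0.066, 0.066, 0.070, 0.085, 0.004],
    [0.070, 0.071, 0.066, 0.058, 0.003],
    [0.075, 0.075, 0.066, 0.047, 0.002],
    [0.079, 0.079, 0.066, 0.035, 0.002],
    [0.083, 0.083, 0.066, 0.051, 0.000],
])
w = np.array([0.400, 0.200, 0.030, 0.070, 0.300])   # Df1 weights, Table 1
benefit = np.array([True, True, True, True, True])  # ASSUMPTION: all benefit-type

N = X / np.sqrt((X ** 2).sum(axis=0))               # MOORA vector normalization
V = w * N                                           # weighted normalized matrix
y = V[:, benefit].sum(axis=1) - V[:, ~benefit].sum(axis=1)
print(np.argsort(-y) + 1)                           # alternative indices, best first
```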
Table 5. Results of tests A1–A9 with MOORA-PSO for 10 iterations.

Iteration   A1  A2  A3  A4  A5  A6  A7  A8  A9
t1           1   1   1   1   1   1   1   1   1
t2           1   1   8   4   2   8   1   7   9
t3           7   1   8   2   2   8   8   7   7
t4           7   1   8   4   8   8   3   7   7
t5           7   1   8   4   2   8   3   7   7
t6           7   1   8   4   2   8   3   7   7
t7           7   1   8   4   2   8   3   7   7
t8           7   1   8   4   2   8   3   7   7
t9           7   1   8   4   2   8   3   7   9
t10          7   6   8   4   2   8   3   7   9
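Tables 5–28 record, iteration by iteration, which alternative the swarm currently selects as the global best. A minimal sketch of that bookkeeping follows; the score matrix is a random placeholder, not the paper's data.

```python
import numpy as np

def gbest_trace(scores_per_iter):
    """Return the 1-based index of the best-scoring alternative at each
    iteration; rows t1-t10 in the tables hold exactly this kind of trace."""
    return (np.argmax(scores_per_iter, axis=1) + 1).tolist()

# Placeholder scores: 10 iterations over 9 alternatives.
rng = np.random.default_rng(2)
print(gbest_trace(rng.random((10, 9))))
```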
Table 6. Results of tests B1–B9 with MOORA-PSO for 10 iterations.

Iteration   B1  B2  B3  B4  B5  B6  B7  B8  B9
t1           1   1   1   1   1   1   1   1   1
t2           8   4   7   1   5   5   1   4   2
t3           8   4   8   1   5   3   1   4   2
t4           8   4   8   1   5   3   8   4   2
t5           8   4   8   1   5   3   8   4   8
t6           7   4   8   1   5   3   8   4   2
t7           8   4   4   1   5   8   8   4   2
t8           7   4   4   1   5   8   8   4   8
t9           8   4   6   1   5   8   8   4   2
t10          5   7   7   9   8   3   3   1   1
Table 7. Results of tests C1–C9 with MOORA-PSO for 10 iterations.

Iteration   C1  C2  C3  C4  C5  C6  C7  C8  C9
t1           1   1   1   1   1   1   1   1   1
t2           1   4   1   6   4   5   7   2   7
t3           1   4   2   6   4   5   5   4   7
t4           3   5   5   6   4   2   7   4   6
t5           1   5   5   6   7   5   5   4   7
t6           3   5   6   6   4   2   7   4   6
t7           3   5   5   6   4   5   7   4   6
t8           1   5   5   6   4   5   7   4   6
t9           1   7   5   6   3   5   5   4   6
t10          1   7   9   6   3   5   7   2   7
Table 8. Results of tests D1–D9 with MOORA-PSO for 10 iterations.

Iteration   D1  D2  D3  D4  D5  D6  D7  D8  D9
t1           1   1   1   1   1   1   1   1   1
t2           5   1   1   2   9   2   1   1   1
t3           5   1   1   4   9   2   7   7   1
t4           5   3   1   4   9   2   7   7   1
t5           5   1   1   4   9   2   7   7   1
t6           5   3   1   4   9   2   7   7   1
t7           5   1   1   4   9   2   7   4   1
t8           5   1   1   4   9   2   7   4   1
t9           5   1   1   4   9   2   7   4   1
t10          5   1   1   4   9   2   7   4   1
Table 9. Results of tests E1–E9 with MOORA-PSO for 10 iterations.

Iteration   E1  E2  E3  E4  E5  E6  E7  E8  E9
t1           1   1   1   1   1   1   1   1   1
t2           5   7   1   4   3   9   2   3   1
t3           5   7   1   4   3   9   2   3   5
t4           5   7   1   4   3   9   2   3   5
t5           5   7   1   4   3   9   2   3   5
t6           5   7   1   4   3   9   2   3   5
t7           5   7   1   4   3   9   2   3   5
t8           5   7   1   4   3   9   2   3   5
t9           5   7   1   4   3   9   2   3   5
t10          5   7   1   4   3   9   2   3   5
Table 10. Results of tests F1–F9 with MOORA-PSO for 10 iterations.

Iteration   F1  F2  F3  F4  F5  F6  F7  F8  F9
t1           1   1   1   1   1   1   1   1   1
t2           4   4   9   9   1   9   8   2   9
t3           4   4   9   9   1   9   8   2   9
t4           4   4   9   9   1   9   8   2   9
t5           4   4   9   9   1   9   8   2   9
t6           4   4   9   8   1   9   8   2   9
t7           4   4   9   9   1   9   8   2   9
t8           4   4   9   8   1   9   8   2   9
t9           4   4   9   8   1   9   8   2   9
t10          4   4   9   8   1   9   8   2   9
Table 11. Results of tests A1–A9 with TOPSIS-PSO for 10 iterations.

Iteration   A1  A2  A3  A4  A5  A6  A7  A8  A9
t1           1   1   1   1   1   1   1   1   1
t2           1   3   3   5   9   3   1   7   8
t3           9   3   3   5   9   2   2   5   3
t4           9   3   4   9   1   3   2   7   1
t5           3   3   3   5   1   2   5   5   8
t6           3   3   9   2   1   2   2   7   1
t7           3   3   3   5   1   3   5   5   8
t8           3   3   4   9   1   3   5   7   1
t9           3   3   4   5   1   3   5   5   8
t10          3   3   4   9   1   3   2   2   8
Table 12. Results of tests B1–B9 with TOPSIS-PSO for 10 iterations.

Iteration   B1  B2  B3  B4  B5  B6  B7  B8  B9
t1           1   1   1   1   1   1   1   1   1
t2           4   1   6   2   5   1   6   7   4
t3           4   5   6   2   7   1   6   7   4
t4           3   9   6   2   5   2   6   9   4
t5           4   5   6   3   5   2   8   7   4
t6           4   9   6   2   5   1   7   9   4
t7           3   5   1   2   5   2   4   6   4
t8           4   9   1   2   5   2   7   9   9
t9           4   5   1   2   5   2   5   6   9
t10          3   3   4   9   1   3   2   2   8
Table 13. Results of tests C1–C9 with TOPSIS-PSO for 10 iterations.

Iteration   C1  C2  C3  C4  C5  C6  C7  C8  C9
t1           1   1   1   1   1   1   1   1   1
t2           5   4   9   1   2   5   4   2   1
t3           7   4   9   7   2   5   4   9   1
t4           5   4   1   1   6   7   2   9   1
t5           7   9   7   1   2   5   6   9   1
t6           7   4   1   1   3   5   2   2   1
t7           9   9   6   1   2   5   8   2   9
t8           5   4   9   1   3   5   2   2   1
t9           7   4   7   1   2   5   7   2   9
t10          5   4   9   1   3   7   2   2   1
Table 14. Results of tests D1–D9 with TOPSIS-PSO for 10 iterations.

Iteration   D1  D2  D3  D4  D5  D6  D7  D8  D9
t1           1   1   1   1   1   1   1   1   1
t2           4   1   7   1   9   9   6   8   1
t3           4   1   7   2   9   9   6   8   5
t4           4   6   7   2   6   9   6   1   5
t5           4   3   7   2   6   9   6   8   5
t6           8   6   7   2   6   9   1   4   5
t7           8   3   7   9   6   9   6   8   5
t8           8   6   7   9   6   9   1   9   5
t9           8   3   7   9   6   9   6   8   5
t10          8   6   4   9   6   9   1   9   5
Table 15. Results of tests E1–E9 with TOPSIS-PSO for 10 iterations.

Iteration   E1  E2  E3  E4  E5  E6  E7  E8  E9
t1           1   1   1   1   1   1   1   1   1
t2           1   1   4   4   2   2   1   9   8
t3           1   9   4   4   8   2   1   9   8
t4           1   9   1   8   1   2   1   1   9
t5           6   9   4   4   8   9   6   9   8
t6           7   9   5   8   1   2   7   1   1
t7           7   9   6   4   8   9   9   9   8
t8           5   1   5   8   1   2   5   1   1
t9           7   9   6   4   9   9   7   9   9
t10          5   1   5   8   9   2   3   1   1
Table 16. Results of tests F1–F9 with TOPSIS-PSO for 10 iterations.

Iteration   F1  F2  F3  F4  F5  F6  F7  F8  F9
t1           1   1   1   1   1   1   1   1   1
t2           7   1   1   5   5   7   1   8   3
t3           7   1   1   5   5   7   1   8   2
t4           2   1   9   5   5   7   1   1   6
t5           7   1   1   5   1   7   1   8   2
t6           2   1   9   5   1   7   1   9   2
t7           2   7   1   5   1   7   1   8   6
t8           2   1   9   5   1   7   1   9   9
t9           7   5   1   5   1   7   1   8   2
t10          2   1   9   5   1   2   1   9   9
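For reference, the TOPSIS scores behind these rankings are closeness coefficients to the ideal solution. The following self-contained sketch shows the standard computation; the benefit/cost split passed as `benefit` is an assumption the reader must set, since it is not restated here.

```python
import numpy as np

def topsis_scores(X, w, benefit):
    """Closeness coefficient of each alternative to the ideal solution."""
    N = X / np.sqrt((X ** 2).sum(axis=0))            # vector normalization
    V = w * N                                        # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)        # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                   # higher = closer to ideal
```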
Table 17. Results of tests A1–A9 with DA-PSO for 10 iterations.

Iteration   A1  A2  A3  A4  A5  A6  A7  A8  A9
t1           2   2   2   2   2   2   2   2   2
t2           2   2   3   6   5   2   4   5   7
t3           1   6   1   5   4   8   3   8   2
t4           3   4   2   7   7   4   3   2   5
t5           8   1   4   2   8   5   6   7   4
t6           1   1   3   8   9   4   5   2   4
t7           8   5   2   8   6   9   3   7   2
t8           3   8   6   4   1   9   4   9   9
t9           8   5   6   6   7   4   3   3   8
t10          8   2   6   2   9   9   2   6   2
Table 18. Results of tests B1–B9 with DA-PSO for 10 iterations.

Iteration   B1  B2  B3  B4  B5  B6  B7  B8  B9
t1           2   2   2   2   2   2   2   2   2
t2           1   6   1   9   6   3   2   6   6
t3           8   9   2   6   9   8   9   9   2
t4           1   8   5   1   5   1   9   2   3
t5           6   4   6   2   4   7   8   8   1
t6           7   7   8   3   8   3   3   2   2
t7           5   6   8   4   8   6   4   9   9
t8           5   5   4   4   9   8   4   5   2
t9           9   5   6   8   9   7   3   6   8
t10          6   8   8   3   8   7   7   6   2
Table 19. Results of tests C1–C9 with DA-PSO for 10 iterations.

Iteration   C1  C2  C3  C4  C5  C6  C7  C8  C9
t1           2   2   2   2   2   2   2   2   2
t2           1   2   2   4   4   3   2   6   1
t3           9   4   1   7   6   4   1   8   7
t4           7   6   3   7   1   4   9   1   6
t5           4   3   8   9   5   9   9   4   5
t6           4   6   3   5   7   7   9   4   5
t7           4   1   6   2   4   1   9   4   4
t8           7   8   3   6   7   9   8   4   4
t9           9   9   3   5   3   9   3   2   7
t10          4   3   6   2   1   7   1   9   4
Table 20. Results of tests D1–D9 with DA-PSO for 10 iterations.

Iteration   D1  D2  D3  D4  D5  D6  D7  D8  D9
t1           2   2   2   2   2   2   2   2   2
t2           1   1   3   3   2   9   8   8   4
t3           2   4   2   5   1   3   9   9   7
t4           9   9   9   1   1   1   2   8   1
t5           1   7   5   1   2   1   4   8   2
t6           2   6   5   9   7   7   4   5   8
t7           9   4   9   5   1   8   5   9   1
t8           5   9   9   1   1   4   3   9   3
t9           8   3   5   7   6   1   6   9   3
t10          6   4   8   4   7   8   8   7   3
Table 21. Results of tests E1–E9 with DA-PSO for 10 iterations.

Iteration   E1  E2  E3  E4  E5  E6  E7  E8  E9
t1           2   2   2   2   2   2   2   2   2
t2           2   6   9   1   5   6   2   5   1
t3           1   1   8   2   4   2   6   8   4
t4           2   2   2   1   5   6   7   3   9
t5           6   5   3   6   8   7   8   7   1
t6           7   1   7   5   6   2   5   2   3
t7           7   3   8   6   9   2   7   5   9
t8           6   6   1   9   3   7   9   8   6
t9           8   7   9   5   9   6   9   4   8
t10          9   3   8   7   8   6   9   7   6
Table 22. Results of tests F1–F9 with DA-PSO for 10 iterations.

Iteration   F1  F2  F3  F4  F5  F6  F7  F8  F9
t1           2   2   2   2   2   2   2   2   2
t2           1   3   6   7   6   9   2   8   3
t3           9   1   1   9   1   1   7   6   1
t4           7   1   4   9   4   1   2   4   6
t5           1   4   5   8   7   9   6   9   8
t6           1   5   9   8   1   9   9   1   1
t7           7   1   9   1   1   1   5   4   9
t8           8   4   8   9   1   5   7   9   9
t9           6   9   8   8   7   7   9   7   9
t10          4   9   3   8   8   9   7   9   9
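The DA-PSO variant replaces the TOPSIS score with a dimensional-analysis similarity index. One common formulation is sketched below as a weighted product of ratios against an ideal alternative, with the exponent sign flipped for cost criteria; this is a generic version for orientation, not necessarily the paper's exact definition.

```python
import numpy as np

def da_similarity(X, w, ideal, benefit):
    """Dimensional-analysis index: weighted product of ratios to an ideal
    alternative; cost criteria contribute with a negative exponent."""
    expo = np.where(benefit, w, -w)
    return np.prod((X / ideal) ** expo, axis=1)   # one score per alternative
```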
Table 23. Results of tests A1–A9 with PSO for 10 iterations.

Iteration   A1  A2  A3  A4  A5  A6  A7  A8  A9
t1           2   2   2   2   2   2   2   2   2
t2           2   2   3   6   5   2   4   5   7
t3           1   6   1   5   4   8   3   8   2
t4           3   4   2   7   7   4   3   2   5
t5           8   1   4   2   8   5   6   7   4
t6           1   1   3   8   9   4   5   2   4
t7           8   5   2   8   6   9   3   7   2
t8           3   8   6   4   1   9   4   9   9
t9           8   5   6   6   7   4   3   3   8
t10          8   2   6   2   9   9   2   6   2
Table 24. Results of tests B1–B9 with PSO for 10 iterations.

Iteration   B1  B2  B3  B4  B5  B6  B7  B8  B9
t1           2   2   2   2   2   2   2   2   2
t2           2   2   3   6   5   2   2   6   6
t3           1   6   1   5   4   8   9   9   2
t4           3   4   2   7   7   4   9   2   3
t5           8   1   4   2   8   5   8   8   1
t6           1   1   3   8   9   4   3   2   2
t7           8   5   2   8   6   9   4   9   9
t8           3   8   6   4   1   9   4   5   2
t9           8   5   6   6   7   4   3   6   8
t10          8   2   6   2   9   9   7   6   2
Table 25. Results of tests C1–C9 with PSO for 10 iterations.

Iteration   C1  C2  C3  C4  C5  C6  C7  C8  C9
t1           2   2   2   2   2   2   2   2   2
t2           1   2   2   4   4   3   2   6   1
t3           9   4   1   7   6   4   1   8   7
t4           7   6   3   7   1   4   9   1   6
t5           4   3   8   9   5   9   9   4   5
t6           4   6   3   5   7   7   9   4   5
t7           4   1   6   2   4   1   9   4   4
t8           7   8   3   6   7   9   8   4   4
t9           9   9   3   5   3   9   3   2   7
t10          4   3   6   2   1   7   1   9   4
Table 26. Results of tests D1–D9 with PSO for 10 iterations.

Iteration   D1  D2  D3  D4  D5  D6  D7  D8  D9
t1           2   2   2   2   2   2   2   2   2
t2           1   1   3   3   2   9   8   8   4
t3           2   4   2   5   1   3   9   9   7
t4           9   9   9   1   1   1   2   8   1
t5           1   7   5   1   2   1   4   8   2
t6           2   6   5   9   7   7   4   5   8
t7           9   4   9   5   1   8   5   9   1
t8           5   9   9   1   1   4   3   9   3
t9           8   3   5   7   6   1   6   9   3
t10          6   4   8   4   7   8   8   7   3
Table 27. Results of tests E1–E9 with PSO for 10 iterations.

Iteration   E1  E2  E3  E4  E5  E6  E7  E8  E9
t1           2   2   2   2   2   2   2   2   2
t2           2   6   9   1   1   6   2   5   1
t3           1   1   8   5   2   5   6   8   4
t4           2   2   2   5   6   1   7   3   9
t5           6   5   3   1   1   8   8   7   1
t6           7   1   7   2   1   9   5   2   3
t7           7   3   8   5   7   3   7   5   9
t8           6   6   1   6   7   5   9   8   6
t9           8   7   9   6   6   9   9   4   8
t10          9   3   8   2   8   9   9   7   6
Table 28. Results of tests F1–F9 with PSO for 10 iterations.

Iteration   F1  F2  F3  F4  F5  F6  F7  F8  F9
t1           2   2   2   2   2   2   2   2   2
t2           1   3   6   7   6   9   2   8   3
t3           9   1   1   9   1   1   7   6   1
t4           7   1   4   9   4   1   2   4   6
t5           1   4   5   8   7   9   6   9   8
t6           1   5   9   8   1   9   9   1   1
t7           7   1   9   1   1   1   5   4   9
t8           8   4   8   9   1   5   7   9   9
t9           6   9   8   8   7   7   9   7   9
t10          4   9   3   8   8   9   7   9   9
Table 29. Results of tests E1–E5 with BA for 10 iterations.

Iteration   E1  E2  E3  E4  E5
I-1          9   9   9   9   9
I-2          1   8   9   2   7
I-3          4   9   9   2   5
I-4          4   9   9   6   5
I-5          4   9   9   6   5
I-6          7   9   9   6   5
I-7          7   1   9   6   5
I-8          7   1   9   6   5
I-9          7   1   9   6   5
I-10         7   1   9   6   5
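The BA baseline follows Yang's frequency-tuned update. A simplified sketch is given below; the loudness and pulse-rate bookkeeping of the full algorithm is omitted, and all names are illustrative.

```python
import numpy as np

def bat_step(x, v, best, fmin, fmax, rng):
    """One simplified bat-algorithm move: each bat gets a random frequency
    and its velocity is pulled toward the current best bat."""
    beta = rng.random(len(x))                 # one random frequency factor per bat
    f = fmin + (fmax - fmin) * beta
    v_new = v + (x - best) * f[:, None]
    return x + v_new, v_new

# Example: 20 bats in 5 dimensions, frequencies in [0, 2].
rng = np.random.default_rng(3)
x = rng.random((20, 5))
v = np.zeros_like(x)
x, v = bat_step(x, v, best=x[0], fmin=0.0, fmax=2.0, rng=rng)
```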
Table 30. Analysis of variance.

Source   DF   Adj SS    Adj MS    F-Value   p-Value
Factor    3      0.00    0.0000      0.00     1.000
Error    32   1352.00   42.2500
Total    35   1352.00
Table 31. Tukey simultaneous tests for the differences of means.

Difference of Levels     Difference of Means   SE of Difference   95% CI          T-Value   Adjusted p-Value
MOORA-PSO/DA-PSO         0.00                  3.06               (−8.30, 8.30)   0.00      1.000
TOPSIS-PSO/DA-PSO        0.00                  3.06               (−8.30, 8.30)   0.00      1.000
PSO/DA-PSO               0.00                  3.06               (−8.30, 8.30)   0.00      1.000
TOPSIS-PSO/MOORA-PSO     0.00                  3.06               (−8.30, 8.30)   0.00      1.000
PSO/MOORA-PSO            0.00                  3.06               (−8.30, 8.30)   0.00      1.000
PSO/TOPSIS-PSO           0.00                  3.06               (−8.30, 8.30)   0.00      1.000
Table 32. Fisher individual tests for differences of means.

Difference of Levels     Difference of Means   SE of Difference   95% CI          T-Value   Adjusted p-Value
MOORA-PSO/DA-PSO         0.00                  3.06               (−6.24, 6.24)   0.00      1.000
TOPSIS-PSO/DA-PSO        0.00                  3.06               (−6.24, 6.24)   0.00      1.000
PSO/DA-PSO               0.00                  3.06               (−6.24, 6.24)   0.00      1.000
TOPSIS-PSO/MOORA-PSO     0.00                  3.06               (−6.24, 6.24)   0.00      1.000
PSO/MOORA-PSO            0.00                  3.06               (−6.24, 6.24)   0.00      1.000
PSO/TOPSIS-PSO           0.00                  3.06               (−6.24, 6.24)   0.00      1.000
Table 33. Grouping information using the Fisher LSD method and 95% confidence.

Factor        N   Mean    Grouping
PSO           9   10.00   A
TOPSIS-PSO    9   10.00   A
MOORA-PSO     9   10.00   A
DA-PSO        9   10.00   A
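Results such as those in Tables 30–33 can be reproduced from final-iteration outcomes with a one-way ANOVA followed by Tukey pairwise comparisons. The sketch below uses SciPy and statsmodels; the data are placeholders laid out as nine observations per algorithm (matching N = 9 in Table 33), not the paper's raw values.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder layout: 9 observations per algorithm, 4 algorithms (36 total,
# consistent with DF = 3 / 32 / 35 in Table 30).
rng = np.random.default_rng(1)
groups = {name: rng.integers(1, 10, 9).astype(float)
          for name in ["MOORA-PSO", "TOPSIS-PSO", "DA-PSO", "PSO"]}

f_val, p_val = stats.f_oneway(*groups.values())      # one-way ANOVA (cf. Table 30)
print(f"F = {f_val:.2f}, p = {p_val:.3f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 9)
print(pairwise_tukeyhsd(values, labels))             # pairwise tests (cf. Table 31)
```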
Table 34. Experiment execution time (mm:ss.s).

Algorithm     Run 1    Run 2    Run 3    Run 4    Run 5
MOORA-PSO    00:02.3  00:02.2  00:02.3  00:02.3  00:02.3
TOPSIS-PSO   00:02.6  00:02.6  00:02.5  00:02.5  00:02.4
DA-PSO       00:02.2  00:02.1  00:02.2  00:02.2  00:02.1
PSO          00:02.2  00:02.1  00:02.1  00:02.1  00:02.1
BA           00:03.1  00:03.0  00:02.8  00:02.9  00:03.0
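Per-run wall-clock times like those in Table 34 can be collected with a simple harness; `solver` below is a hypothetical placeholder for any of the five algorithms, not a function from the paper.

```python
import time

def time_runs(solver, *args, repeats=5):
    """Wall-clock time of `solver` over repeated runs, formatted mm:ss.s
    (assuming each run finishes in under one minute)."""
    spans = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        solver(*args)
        spans.append(time.perf_counter() - t0)
    return [f"00:{s:04.1f}" for s in spans]
```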
Table 35. Better alternatives for the plastic injection process.

      C1     C2     C3     C4     C5
A1    0.048  0.047  0.070  0.087  0.190
A2    0.053  0.052  0.066  0.081  0.058