Article

Parallel Simplex, an Alternative to Classical Experimentation: A Case Study

by Francisco Zorrilla Briones 1,*, Inocente Yuliana Meléndez Pastrana 1,2, Manuel Alonso Rodríguez Morachis 1 and José Luís Anaya Carrasco 1

1 División de Estudios de Posgrado e Investigación, Tecnológico Nacional de Mexico/I.T. de Ciudad Juárez, Av. Tecnológico 1340, Fuentes del Valle, Juárez 32500, Chihuahua, Mexico
2 Departamento de Posgrado, Universidad Tecnológica de Ciudad Juárez, Av. Universidad Tecnológica 3051, Col. Lote Bravo, Juárez 32695, Chihuahua, Mexico
* Author to whom correspondence should be addressed.
Data 2024, 9(12), 147; https://doi.org/10.3390/data9120147
Submission received: 9 September 2024 / Revised: 27 November 2024 / Accepted: 29 November 2024 / Published: 10 December 2024

Abstract

Experimentation is a powerful methodology for improving and optimizing processes. Nevertheless, in many cases, the real-life dynamics of production demands and other restrictions inhibit its use, because it implies stopping production, generating scrap, jeopardizing delivery commitments, and other problems. Proposed here is an alternative methodology to search for the best process variable levels and optimize the response of the process without the need to stop production. The algorithm is based on the principles of the Variable Simplex developed by Nelder and Mead and on the continuous iterative process of EVOP developed by Box, later recast in simplex form by Spendley. It is named Parallel Simplex because it searches for the best response with three independent simplexes pursuing the same response at the same time. The algorithm was designed for three simplexes of two input variables each. The case study documented shows that it is efficient and effective.

1. Summary

Traditional experimentation requires designing an experimental array, stopping production, setting up the equipment/machinery for each run, and running all the runs of the array. This process inevitably results in lost production time, scrap, and the use of extraordinary resources. In many real manufacturing scenarios, this is difficult to do because of the complexity of production demand trade-offs and manufacturing process dynamics. The main idea of Box's Evolutionary Operation (EVOP) [1], Spendley and Himsworth's Simplex [2], and Nelder and Mead's Variable Simplex [3] is to find the best combination of process parameters by introducing small changes in the process variables. These small changes allow the experimenter to run the process continuously because the response variable does not move dramatically away from its target, thereby eliminating or minimizing the generation of nonconforming parts (scrap).
These direct search methods (EVOPS, Simplex, Variable Simplex, etc.) have the following disadvantages that have limited their use in a manufacturing process:
  • They are very sensitive to noise (internal noise).
  • The complexity of the computation increases dramatically with the number of variables considered.
  • The number of iterations to find a local optimum grows exponentially with the number of variables in the algorithm.
These drawbacks make these methods not very practical to apply to a manufacturing process.
On the other hand, these procedures are based on heuristic/stochastic algorithms: a series of iterations is developed by introducing small changes in the input variables and evaluating the observed response, and the process continues until the response reaches the desired nominal value. By contrast, a deterministic algorithm is defined as one that exhibits predictable behavior and a fixed sequence of steps: it always produces the same result for a given set of inputs, its execution is not subject to variability or uncertainty, and it follows the same path each time it is executed. Heuristic and stochastic algorithms, in contrast, are not predictable: because of their inherent random or probabilistic nature, their results may vary between runs, and they do not adhere to a fixed sequence of steps, which introduces variability. Two traits characterize them:
  • The utilization of randomness or approximation: Stochastic algorithms frequently employ a random component (for example, in the decisions made during their execution) or an approximation based on heuristics to identify solutions more expediently in complex problems.
  • Objective: In numerous instances, these algorithms do not pursue an exact solution, but a “good” solution within a reasonable time, particularly when the search space is vast or the problem is challenging to solve exactly.
Deterministic algorithms rely on known, well-defined functions to guide their execution: the steps are determined solely by the inputs they receive, without any random factor affecting the outcome, so the same inputs consistently produce the same results. When an algorithm lacks a clearly defined function, or deals with ambiguous or uncertain problems, it may be heuristic or stochastic in nature: its decisions are not strictly predictable and may involve approximation or randomness. Examples of such algorithms are genetic algorithms, simulated annealing, and random search, which are commonly used in complex optimization or search problems with large solution spaces.
The Nelder–Mead algorithm is a heuristic and non-deterministic algorithm. Although it follows a clearly defined set of steps to identify the minimum value of a function without utilizing derivatives, the manner in which it explores the solution space incorporates elements of randomness, particularly during the reflection, expansion, and contraction phases.
Initial points: the algorithm starts from an initial set of points that may be chosen at random or deliberately, the latter being more common.
Space exploration: In the reflection, expansion, and contraction stages, the decisions regarding the movement of the points of the ‘simplex’ (the set of points representing the current solutions) may be contingent upon minor variations in the configuration of the simplex, which introduces variability.
Although the process is defined by a sequence of steps, the behavior of the algorithm may vary slightly between runs due to the initial choice and the nature of the moves in the search space. Accordingly, it is not deemed a deterministic algorithm.
In conclusion, the Nelder–Mead algorithm is not deterministic due to the involvement of randomness in its decision-making processes, such as the selection of initial points. The method is heuristic in nature, seeking an approximate solution without the necessity of gradients or derivatives [3,4,5].
Algorithms of this kind are widely used in the engineering field, from genetic algorithms applied to analyze and improve the location, assignment, and routing of transportation systems [6] to many other applications; the authors of [6] also document several other heuristic and stochastic methods and their objectives. Many algorithm designs exist; mainly, they can be classified as follows:
  • Deterministic: Those that work with a known function.
  • Heuristic: Algorithms based on trial-and-error procedures.
  • Stochastic: They use successive iterations, taking into account changes in the input variables and measuring the change observed in the output variables, with the help of probabilistic theories.
One of the algorithms that seems to work like the Nelder and Mead algorithm presented in this paper is the Genetic Algorithm, widely used to analyze and improve many different systems. These algorithms evolve a population of individuals by subjecting it to random actions similar to those that act in biological evolution (mutation and genetic recombination), as well as to natural selection: according to some criterion, the most adapted individuals survive and the least adapted are discarded. In general terms, an algorithm is a set of organized steps that describes the process to be followed to solve a particular problem, such as an experimental design for optimizing process parameters.
It should be noted that the genetic algorithm and the Nelder–Mead algorithm differ in the way they introduce and manage randomness in the search. Despite this difference, both algorithms exhibit similarities in several aspects.
Optimization objective: Both algorithms are designed to solve optimization problems; that is, they seek a solution (or set of solutions) that minimizes or maximizes an objective function. The objective of both methods is to identify the optimal solution within a given search space, whether that be the global minimum or a satisfactory approximation.
Solution Space Exploration: Both algorithms endeavor to explore the search space in pursuit of optimal or approximate solutions. Despite employing disparate methodologies—one via an evolutionary process and the other through the manipulation of a set of points—both approaches are predicated on the notion of directing the “search” in the space towards more favorable regions.
Adaptation or modification of solutions:
A genetic algorithm is a computational method that employs the principles of genetics to search for optimal solutions, using a process of mutation, crossover, and selection to generate novel solutions from existing ones.
The Nelder–Mead method is a metaheuristic optimization technique that employs a simplex method to iteratively improve the quality of a solution. The process entails reflection, expansion, contraction, and reduction of a set of points (simplex) within the solution space, with the objective of moving the set towards the minimum.
Solution population:
While Nelder–Mead does not operate with “populations” of solutions in the same manner as a genetic algorithm, it nevertheless possesses a comparable collection of points (the simplex) that evolve during the optimization process. In this sense, it can be posited that, at each iteration, the algorithm is handling a set of solutions.
Trial and error: Both algorithms employ an approach based on the evaluation of the objective function and adjust the solutions based on the results obtained. They do not require derivatives of the objective function, which makes them useful in problems where the function is non-differentiable, noisy, or discontinuous, or, as in the application of Nelder and Mead considered here, where the objective function is unknown. There is extensive literature on applications of these algorithms [7,8,9,10].
An exhaustive review of the literature shows evidence of the loss of efficiency of these direct search algorithms when the number of dimensions (variables) is increased. As an example, ref. [11] compares different algorithms (heuristic and deterministic) and their efficiency; our attention is focused on the number of runs required to obtain a local or global optimum. As shown in Table 1, the number of runs (iterations of the algorithm) increases dramatically when the number of dimensions (variables) is increased (n).
As another example, ref. [12] compared the NMS algorithm with five typical problems. Table 2 shows the comparison of results under different sample sizes and the number of variables for five typical problems; it can be seen that the number of iterations increases dramatically as the number of variables in the geometric array (dimensions) increases.
To give the reader a better idea, these algorithms are essentially different from those used to make decisions with multiple objectives, such as those used as multicriteria. To understand these, you can see [13,14,15] and others.
The algorithm discussed here is used to find the best combination of parameters for a process, whatever it may be: a design, a machine, or a manufacturing process. It has to be considered that the increase in the number of iterations of these algorithms is directly related to the number of variables analyzed. The simplex has n + 1 vertices: for two variables it is a triangle, and each additional variable adds a vertex, so three variables require four vertices, four variables five, and so on. As the number of variables increases, so does the number of edges and vertices, which slows down the algorithm and increases the complexity of the computation. In many cases, a high number of variables leads to a non-convergence situation (the algorithm gets "lost").
The proposed methodology tries to overcome exactly this situation, that is, to search for the optimal response (local or global) when the number of input variables increases, minimizing the number of iterations so that the algorithm converges quickly instead of "getting lost". Again, because these algorithms introduce only small changes in the input variables, they can be executed during the normal production process; every process/manufacturing engineer knows the value of this.

2. Methods

Box [1] proposed an iterative experimentation process; basically, his proposal consisted of a two-level factorial design that is modified in search of the best response. Small changes in the levels of the design are estimated so as to minimize the generation of nonconforming products. This way of managing the experimentation process is known as Evolutionary Operation (EVOP). EVOP is mainly used in the third phase of experimentation (optimization), after the process has been characterized.
On the other hand, questions have been raised about the sample size [16]: the Box algorithm does not consider whether the sample size is sufficient to generate acceptable statistical properties. Even so, it is clear that Box's proposal has many advantages and is innovative, because it searches for the best response with small changes during production, minimizing nonconforming product.
Spendley and Himsworth [2] proposed a major change to Box's algorithm, adding a simplex arrangement that replaces the factorial design. It is through this simplex that the levels of each iteration are calculated. The algorithm of [2] is known as simplex EVOP; it iteratively moves the polyhedron (with n + 1 vertices), looking for the optimization of the response. A brief review of the operations of the variable simplex of Nelder and Mead [3] follows.
In the classical array, the levels of the variables considered are pre-designed; that is, once the array is defined, these levels remain fixed during the experimentation process. In the Nelder and Mead Variable Simplex (NMS), the algorithm starts with pre-designed levels, which are then modified by the iterations of the algorithm according to a set of rules (the operations of the simplex). The algorithm uses a simplex (a geometric arrangement), generally a polyhedron with n + 1 vertices (for two input variables, a triangle). The search mechanism uses one of the vertices as a pivot (the worst vertex) to estimate the next vertices (variable levels), always moving towards the best vertex (the one closest to the response objective: maximum, minimum, or target). The algorithm moves through the response surface, calculating new vertices in each iteration, through four basic operations: Reflection, Expansion, Contraction, and Shrinkage. Figure 1 illustrates these operations.
There is ample literature available on the logic of the variable simplex operations. In this section, a brief explanation of the NMS algorithm is given.

2.1. Basic Operations of the Nelder and Mead Algorithm

For the first iteration, k = 0, the algorithm needs an initial simplex $S^0 = \{x_0^0, x_1^0, \ldots, x_n^0\}$, usually chosen as a regular simplex. This regularity is lost immediately, since successive iterations replace the worst vertex $x_h^k$ of the simplex $S^k$ with another vertex: one of the new points $x_r^k$, $x_e^k$, or $x_c^k$. The selection among these three points, at any iteration $k \ge 0$, is defined by one of the three operations of the algorithm: Reflection, Expansion, and Contraction.

2.1.1. Reflection

The worst vertex $x_h^k$ is replaced by its reflection through the centroid $\bar{x}^k$ of the remaining vertices,
$$x_r^k = (1 + \alpha)\,\bar{x}^k - \alpha\, x_h^k,$$
considering that $f_i > f_r > f_l$ for at least one $i$, with $i \neq h$ and $i \neq l$; $\alpha$ is the coefficient of reflection. Figure 2 shows the reflection operation in two dimensions (two process variables), the $n = 2$ case. The distance between the points $x_h^k$ and $\bar{x}^k$ is equal to the distance between $\bar{x}^k$ and $x_r^k$; this is obtained when $\alpha = 1$.

2.1.2. Expansion

The worst vertex $x_h^k$ is replaced by
$$x_e^k = \gamma\, x_r^k + (1 - \gamma)\,\bar{x}^k,$$
considering that (i) $f_r < f_l$ and (ii) $f_e < f_l$. If (i) holds but (ii) does not, the worst vertex $x_h^k$ is replaced by $x_r^k$ instead; $\gamma$ is the expansion coefficient. Figure 3 shows the expansion operation for two dimensions, $n = 2$. The distance between the points $x_h$ and $\bar{x}$ is equivalent to the distances between the points $\bar{x}$ and $x_r$, and between $x_r$ and $x_e$. This is obtained for $\gamma = 2$.

2.1.3. Contraction

In the contraction operation, the worst vertex $x_h^k$ is replaced by
$$x_c^k = \beta\, x_h^k + (1 - \beta)\,\bar{x}^k$$
considering that $f_r \ge f_h$, or by
$$x_c^k = \beta\, x_r^k + (1 - \beta)\,\bar{x}^k$$
when $f_r < f_h$; $\beta$ is the coefficient of contraction. Figure 4 shows the contraction operation for two dimensions, $n = 2$. The distance between the points $x_h$ and $\bar{x}$ corresponds to two times the distance between $\bar{x}$ and $x_{c_1}$, or between $\bar{x}$ and $x_{c_2}$. This is obtained for $\beta = 1/2$.

2.1.4. Shrinkage

A specific form of contraction can be employed within an iteration to prevent over-expansion. This operation is referred to as "shrinkage", and it differs from the standard contraction in that fewer points are retained to generate the new simplex. In a regular contraction, all other vertices are retained; in shrinkage, only one point, the best vertex $x_l^k$, is kept. Shrinkage is performed if $f_c > f_h$; the new simplex is then defined by
$$x_i^{k+1} = \delta\, x_i^k + (1 - \delta)\, x_l^k, \quad \forall i,\ i \neq l,$$
where $\delta$ is the shrinkage coefficient and $0 < \delta < 1$. Figure 5 illustrates a two-dimensional shrinkage step with $n = 2$ and a shrinkage coefficient $\delta$ equal to $1/2$.
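To make the four operations concrete, the following Python sketch performs complete Nelder–Mead iterations for the n = 2 case with the classical coefficients (α = 1, γ = 2, β = 1/2, δ = 1/2). It is an illustration, not the authors' software: the response function f is a stand-in with an assumed optimum (in the application described here, each evaluation would be a run of the live process), and the acceptance logic for contraction is slightly simplified with respect to the two-case rule above.

```python
import numpy as np

ALPHA, GAMMA, BETA, DELTA = 1.0, 2.0, 0.5, 0.5  # classical NM coefficients

def f(x):
    # Illustrative stand-in for the measured response (assumed optimum at (0.19, 0.16));
    # in the real application each evaluation is a run of the live process.
    return (x[0] - 0.19) ** 2 + (x[1] - 0.16) ** 2

def nm_step(simplex):
    """One Nelder-Mead iteration on a list of three 2-D vertices."""
    x_l, x_nw, x_h = sorted(simplex, key=f)          # best, next-to-worst, worst
    x_bar = (x_l + x_nw) / 2.0                       # centroid excluding the worst
    x_r = (1 + ALPHA) * x_bar - ALPHA * x_h          # reflection
    if f(x_r) < f(x_l):                              # very good: try expansion
        x_e = GAMMA * x_r + (1 - GAMMA) * x_bar
        return [x_l, x_nw, x_e if f(x_e) < f(x_l) else x_r]
    if f(x_r) < f(x_h):                              # acceptable: keep the reflection
        return [x_l, x_nw, x_r]
    x_c = BETA * x_h + (1 - BETA) * x_bar            # contraction towards the worst
    if f(x_c) < f(x_h):
        return [x_l, x_nw, x_c]
    # shrinkage: pull every vertex halfway towards the best vertex
    return [x_l, DELTA * x_nw + (1 - DELTA) * x_l, DELTA * x_h + (1 - DELTA) * x_l]

simplex = [np.array([0.20, 0.18]), np.array([0.30, 0.28]), np.array([0.15, 0.15])]
for _ in range(30):
    simplex = nm_step(simplex)
print(min(simplex, key=f))  # approaches the assumed optimum (0.19, 0.16)
```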
In the case studied and presented in this paper, six variables were considered, so the geometric array would be a heptagon (seven vertices) if the traditional NMS algorithm were used. As an alternative to these mathematical and practical problems, the Parallel Simplex is proposed.

3. The Parallel Simplex Proposal

This document concentrates on the proposed alternative of the Parallel Simplex. Figure 6 shows the movements of the simplex (geometric array), with the basic operations of Reflection, Expansion, and Contraction, always moving towards the objective of the response variable. As shown in Figure 6, the Parallel Simplex approaches the objective of the response variable with three simplexes, independent of one another but pursuing the same response and objective. After an exhaustive literature review, nothing similar was found. Many authors have tried to minimize the problems of non-convergence and the high number of iterations by adding to or modifying the original Nelder–Mead simplex; nevertheless, the results are questionable for the following reasons:
  • The complexity of the algorithm still increases with the number of input variables.
  • Almost all the evidence found is generated through known test functions (thus, deterministic approximations).
  • Almost all the cases presented are computer-simulated situations.
Maintaining independence among the three simplexes preserves the efficiency of the algorithm and avoids the growth in computational complexity, thus reducing the number of iterations compared with the traditional heptagon array (n + 1 vertices). Because the three simplexes chase the same objective, the simplex operations are the same for every single simplex; a conceptual sketch of this scheme is given below.
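The following Python sketch illustrates the parallel scheme under stated assumptions: it is not the authors' software, the run_process() surrogate and all initial levels are hypothetical, and only the reflection operation (Spendley-style) is shown for brevity; a full implementation would add the expansion, contraction, and shrinkage rules of the previous sketch. Each run sets all six variables at once, combining vertex j of every triangle, and the single observed response scores the three triangles simultaneously.

```python
import numpy as np

def run_process(setting):
    """Hypothetical surrogate for the live line: defects per 1000 for a 6-vector."""
    target = np.array([0.19, 0.16, 0.17, 0.21, 0.65, 20.75])  # assumed optimum
    return float(np.sum((setting - target) ** 2))

def reflect(simplex, scores):
    """Spendley-style reflection of the worst vertex of one 2-D triangle."""
    order = np.argsort(scores)                 # best ... worst (shared ranking)
    x_l, x_nw, x_h = (simplex[i] for i in order)
    x_bar = (x_l + x_nw) / 2.0
    simplex[order[2]] = 2.0 * x_bar - x_h      # alpha = 1
    return simplex

# three independent triangles, one per variable pair (illustrative initial levels)
simplexes = [np.array([[0.20, 0.18], [0.30, 0.28], [0.15, 0.15]]),   # x1, x2
             np.array([[0.18, 0.22], [0.25, 0.32], [0.15, 0.17]]),   # x3, x4
             np.array([[0.69, 22.4], [0.72, 24.0], [0.65, 21.0]])]   # x5, x6

for iteration in range(24):
    # run j combines vertex j of every triangle into one 6-variable setting;
    # the single observed response scores that vertex in all three simplexes
    scores = [run_process(np.concatenate([s[j] for s in simplexes]))
              for j in range(3)]
    simplexes = [reflect(s, scores) for s in simplexes]

best = int(np.argmin(scores))
print(np.concatenate([s[best] for s in simplexes]))  # final six-variable setting
```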

4. The Analyzed Process

The analyzed process corresponds to the manufacturing of plastic profiles, applied as seals for refrigerators, washing machines, and other domestic equipment. Plastic is melted through a simple plasticizing screw and extruded through a nozzle with the designed profile. Figure 7 shows the selected profile. This profile was chosen because it presented a high rejection rate (100% of lots rejected); 100% inspection and rework were performed on a regular basis.
This seal showed a very high rate of defects, mainly, incomplete shots. This defect is shown in Figure 8.
The main control variables of this process are the cubic feet per minute of air (at a constant 4 bar pressure) injected by four nozzles, used to maintain the profile of the extrusion, as shown in Figure 9. The fifth variable is the height of the puller, i.e., the distance between the rollers that pull the extruded material. The sixth variable is the speed of the extruder and the puller (which are synchronized). The output variable is measured as defects per 1000 parts, λ (the continuous extruded material is cut to the design length).
At this point, it is very important to consider that if a traditional/classic array (two-level factorial design) is used, it will imply the following:
  • The design, setup, and execution of 2⁶ runs, that is, 64 runs.
  • If more replicates are used for better-fit estimations, the number of runs will increase.
  • The majority of products will not be conformant to specifications.
  • Production will be lost because the process must stop (to set up each run).
  • Extra resources (time, technicians, operators, materials, etc.) will be needed.
The principle of evolutionary operation behind these direct search algorithms, such as the NMS, allows the investigator to search for the best response while the process is running. The small changes induced in the process allow for reduction or even elimination of the generation of nonconforming parts.

The Experimental Array

In this case, the objective of the response variable y is zero defects per 1000 parts. The control variables and the initial vertices of the simplexes are set up in the software, as shown in Figure 10.
The first set of geometric arrays, three triangles with three vertices each, is set up: three combinations for each pair of variables, $x_1$ and $x_2$; $x_3$ and $x_4$; $x_5$ and $x_6$. The process is started, and the feedback of the response is input into the software. The software calculates, on every new iteration, a new combination (vertex) for each simplex. (In classical experimentation, these combinations or runs are determined in advance; with luck, some runs will be useful for estimating changes in the response, but many others will not.) Table 3 shows the final results of the experimentation process.
The NMS algorithm categorizes the response as Best (B), Near-Worst (NW), and Worst (W), considering the distance of the response from the objective (in this case, a minimum of zero is intended). This is documented as the Ranking of the response. Many stopping criteria for the algorithm are reported in the literature; nevertheless, the Teta parameter developed in [12] was included because of its simplicity and accuracy:
$$\frac{\sum_{i=1}^{k+1} \left( y_i - \bar{y} \right)^2}{k+1} \le \theta$$
This compares the dispersion of the vertex responses against a predetermined value, typically θ = 1 × 10⁻⁶ for test functions. In real practical processes, a value of θ = 1 × 10⁻³ is more than acceptable.
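A minimal sketch of the stopping check follows, assuming the statistic is computed over the current vertex responses; the example responses are hypothetical, chosen to reproduce the order of magnitude of the final Teta value in Table 3.

```python
def theta_stat(responses):
    """Mean squared deviation of the vertex responses (the Teta statistic)."""
    y_bar = sum(responses) / len(responses)
    return sum((y - y_bar) ** 2 for y in responses) / len(responses)

responses = [0.003, 0.002, 0.002]      # hypothetical responses of the 3 vertices
print(theta_stat(responses))           # about 2.22e-07, well below 1e-3
if theta_stat(responses) <= 1e-3:
    print("stop: the simplex has converged")
```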

5. Results Analysis

Table 3 shows that the algorithm stopped at the 24th iteration, obtaining the desired 0.002 on the response (RD) and a Teta value of 2.22 × 10⁻⁷. Before any other comparison, a two-proportions analysis was made to compare the process with the new setup, using three months of historical data against lots of 1000 pieces. The analysis showed a p-value of 0.0000 for the probability of the alpha error, for the hypothesis $H_0: p_{initial} = p_{final}$; thus, the difference is statistically significant.
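For illustration, the same kind of two-proportions test can be run in Python with statsmodels; the defective counts below are hypothetical, since the paper reports only the resulting p-value.

```python
from statsmodels.stats.proportion import proportions_ztest

count = [350, 2]      # assumed defectives before and after (hypothetical counts)
nobs = [1000, 1000]   # lot sizes compared

z_stat, p_value = proportions_ztest(count, nobs)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # p near 0 rejects H0: p_initial = p_final
```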
Using Minitab® version 17, it can be seen that, for the full factorial design needed to estimate a Response Surface with six variables, 90 runs are required (with the minimum of central, axial, and radial points), and at least 53 runs are required for a half design.
These runs were not executed because they would have implied too many runs, too much material, too much scrap, and so on. Nevertheless, data from the Parallel Simplex results were used to feed the Response Surface array and compare the results. This comparison is shown in Table 4.
The internal operation of the Parallel Simplex is based on the continuous iteration of three independent simplexes, in this case with two variables each. As a further comparison, a Response Surface analysis was made for each pair of variables (each simplex), using the algorithm results (again, these runs were not executed). A full model with main and interaction effects was designed and analyzed:
$$f(x) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1^2 + \beta_4 x_2^2 + \beta_5 x_1 x_2 + e_{ij}$$
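As a sketch, this quadratic model can be fitted by ordinary least squares. The observations below pair the Air Out 01/02 columns with the RD response from a handful of rows of Table 3, used purely for illustration, so the fitted coefficients will differ from the paper's fit, which uses the full dataset.

```python
import numpy as np

# (x1, x2, RD) observations taken from a few rows of Table 3
x1 = np.array([0.2000, 0.3000, 0.1500, 0.3500, 0.3000, 0.2125, 0.1938])
x2 = np.array([0.1800, 0.2800, 0.1500, 0.3100, 0.2700, 0.1794, 0.1603])
y  = np.array([0.005, 0.085, 0.130, 0.500, 0.099, 0.006, 0.005])

# design matrix for b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # estimates of beta_0 ... beta_5
```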
As part of the comparison, a canonical analysis was made to ensure that the optimum found with the Parallel Simplex algorithm corresponds to a local minimum. The regression analysis for the first simplex resulted in the equation
$$R_d = 0.742 - 16.93 x_1 + 10.90 x_2 + 126 x_1^2 + 76 x_2^2 - 190 x_1 x_2,$$
with an adjusted $R^2 = 71.55\%$ and p-values for $x_1$ and $x_2$ of 0.000 and 0.003.
The stationary point estimated for this pair of variables, considering the matrices
$$B = \begin{bmatrix} 126 & -95 \\ -95 & 76 \end{bmatrix} \quad \text{and} \quad b = \begin{bmatrix} -16.93 \\ 10.90 \end{bmatrix},$$
is
$$x_s = -\tfrac{1}{2} B^{-1} b = \begin{bmatrix} 0.227931 \\ 0.213203 \end{bmatrix}.$$
The canonical conversion gives the roots $\lambda_1 = 199.234$ and $\lambda_2 = 2.765$ for $x_1$ and $x_2$; as both roots are positive, this stationary point is a minimum.
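A short NumPy check of this canonical analysis (a sketch reproducing the numbers above):

```python
import numpy as np

# quadratic (B) and linear (b) coefficients of the fitted surface for x1, x2;
# the off-diagonal of B is half the x1*x2 coefficient (-190 / 2 = -95)
B = np.array([[126.0, -95.0],
              [-95.0,  76.0]])
b = np.array([-16.93, 10.90])

x_s = -0.5 * np.linalg.solve(B, b)   # stationary point x_s = -(1/2) B^-1 b
lam = np.linalg.eigvalsh(B)          # canonical roots (eigenvalues of B)

print(np.round(x_s, 6))   # [0.227931 0.213203]
print(np.round(lam, 3))   # [  2.766 199.234] -> both positive: a minimum
```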
The regression analysis for the second simplex resulted in the equation:
$$R_d = 0.075 + 12.4 x_3 - 10.9 x_4 + 454 x_3^2 + 343 x_4^2 - 788 x_3 x_4,$$
with an adjusted $R^2 = 75.22\%$ and p-values for $x_3$ and $x_4$ of 0.000 and 0.723.
The stationary point estimated for this pair of variables, considering the matrices
$$B = \begin{bmatrix} 454 & -394 \\ -394 & 343 \end{bmatrix} \quad \text{and} \quad b = \begin{bmatrix} 12.40 \\ -10.90 \end{bmatrix},$$
is
$$x_s = -\tfrac{1}{2} B^{-1} b = \begin{bmatrix} 0.042593 \\ 0.064815 \end{bmatrix}.$$
The canonical conversion gives the roots $\lambda_1 = 796.390$ and $\lambda_2 = 0.610$ for $x_3$ and $x_4$; as both roots are positive, this stationary point is a minimum.
The canonical analysis demonstrates that the final combination of each pair of variables is a local minimum. Table 5 presents a summary of the regression coefficients for the six variables considered simultaneously and for each pair of variables.

6. Discussion

The case study presented evidence that the Parallel Simplex is an efficient alternative to classical experimentation. It is an experimentation process that does not require stopping production, generating scrap, or using additional resources.
For many years, these direct search algorithms were confined to theoretical settings, with test functions and the like. One of the main reasons is the mathematical complexity of the computations and the sensitivity of the algorithms to the internal noise present in many industrial processes. As a matter of fact, it is difficult to find literature on successful applications to real processes. In addition, an approach such as the one proposed here, in which three independent simplexes seek to optimize the same response in a synchronous manner, was not found in the literature reviewed. A few examples can be reviewed in [17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32], among many others. This alternative is a powerful way to optimize processes, overcoming the restrictions of the original algorithms.
A summary of analogous algorithms, founded upon the principles of Nelder and Mead with selected adaptations, is provided in Table 6, together with the characteristics of these algorithms.

7. Conclusions

The proposed algorithm has demonstrated its efficiency by finding the global minimum for the six variables included in the model in only 24 iterations. A traditional model using the original Nelder and Mead algorithm, which requires a heptagon for the analysis, would require a very large number of iterations, and, as already demonstrated, a traditional factorial design would also require a considerable number of experimental runs. Among the most important contributions is the fact that this process was performed during a normal production run, production was not stopped, no extraordinary resources were used, and the generation of nonconforming parts was minimal.
On the other hand, the canonical analysis shows that a local minimum of the response variable has been found for every pair of variables. It is worth mentioning that, with the new operating parameters, the process currently has a quality efficiency of 98%. Prior to the implementation of the experimental parameters, the production line had to sort 100% of the material produced; following the introduction of the new parameters, a sampling procedure was initiated, which revealed a defect rate of 2%.
The objective of this research is to develop an algorithmic approach that can effectively address the complexities inherent in production processes, machines, and equipment. To achieve this, the proposed algorithm focuses on simple production processes, where no more than eight variables can be identified. A plethora of algorithms exists, encompassing both stochastic and deterministic approaches; however, these are predominantly tailored towards intricate processes and systems, where a multitude of variables interacts simultaneously, or towards systems with pre-defined functions, which also exhibit a high degree of complexity. These complexities and restrictions impede the ability of the average industrial engineer to apply them. The proposed approach offers several advantages, including the following:
The proposed algorithm is simple, effective for simple processes, does not require knowledge of the function to be optimized, runs without stopping production or processes, and does not require large computational resources. Indeed, anyone can develop a macro in commercial software and execute and interpret its analysis.
“The complexity of the solution is directly proportional to the complexity of the mind that is searching for it”.
Anonymous.
Applying the parallel concept to higher dimensions (more variables) remains to be developed, to investigate the efficiency of the algorithm in those settings. Further investigations of this algorithm are underway to accumulate evidence of its practical benefit to the average industrial engineer. Furthermore, the objective is to develop an analogous application for discrete variables.

Author Contributions

Conceptualization: F.Z.B.; methodology: F.Z.B. and M.A.R.M.; software: F.Z.B. and I.Y.M.P.; validation: F.Z.B.; formal analysis: F.Z.B.; writing—review and editing: J.L.A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

This work was published with the support of the Institute of Innovation and Competitiveness of the Ministry of Innovation and Economic Development of the State of Chihuahua, Mexico.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Box, G.E.P. Evolutionary Operation: A Method for Increasing Industrial Productivity. Appl. Stat. 1957, 6, 110–115. [Google Scholar] [CrossRef]
  2. Spendley, W.; Hext, G.R.; Himsworth, F.R. Sequential Application of Simplex Designs in Optimisation and Evolutionary Operation. Technometrics 1962, 4, 441–461. [Google Scholar] [CrossRef]
  3. Nelder, J.A.; Mead, R. A Simplex Method for Function Minimization. Comput. J. 1965, 7, 308–313. [Google Scholar] [CrossRef]
  4. Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. Numerical Recipes: The Art of Scientific Computing, 3rd ed.; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
  5. Lagarias, J.C.; Reeds, J.A.; Wright, M.H.; Wright, P.E. Convergence properties of the Nelder-Mead simplex method in low dimensions. SIAM J. Optim. 1998, 9, 112–147. [Google Scholar] [CrossRef]
  6. Mrad, M.; Bamatraf, K.; Alkahtani, M.; Hidri, L. A Genetic Algorithm for the Integrated Warehouse Location, Allocation and Vehicle Routing Problem in a Pooled Transportation System. Int. J. Ind. Eng. Theory Appl. Pract. 2023, 30, 852–875. [Google Scholar] [CrossRef]
  7. Vineetha, G.R.; Shiyas, C.R. Machine Learning-Enhanced Genetic Algorithm for Robust Layout Design in Dynamic Facility Layout Problems: Implementation of Dynamic Facility Layout Problems. Int. J. Ind. Eng. Theory Appl. Pract. 2023, 30, 1466–1485. [Google Scholar]
  8. Xie, Y.; Jiang, H.; Wang, L.; Wang, C. Sports Ecotourism Demand Prediction Using Improved Fruit Fly Optimization Algorithm. Int. J. Ind. Eng. Theory Appl. Pract. 2023, 30, 1629–1642. [Google Scholar] [CrossRef]
  9. Praga-Alejo, R.J.; de León-Delgado, H.; González-González, D.S.; Cantú-Sifuentes, M.; Tahaei, A. Manufacturing Processes Modeling Using Multivariate Information Criteria for Radial Basis Function Selection. Int. J. Ind. Eng. Theory Appl. Pract. 2022, 29, 117–130. [Google Scholar] [CrossRef]
  10. Bardeji, S.F.; Saghih, A.M.F.; Mirghaderi, S.H. Multi-Objective Inventory and Routing Model for a Multi-Product and Multi-Period Problem of Veterinary Drugs. Int. J. Ind. Eng. Theory Appl. Pract. 2022, 29, 464–486. [Google Scholar] [CrossRef]
  11. Drazic, M.; Drazic, Z.; Dragan, U. Continuous variable neighborhood search with modified Nelder–Mead for non-differentiable optimization. IMA J. Manag. Math. 2014, 27, 75–78. [Google Scholar] [CrossRef]
  12. Fan, S.; Zahara, E. Stochastic response surface optimization via an Enhanced Nelder Mead simplex search procedure. Eng. Optim. 2006, 38, 15–36. [Google Scholar] [CrossRef]
  13. Park, H.; Shin, Y.; Moon, I. Multiple-Objective Scheduling for Batch Process Systems Using Stochastic Utility Evaluation. Int. J. Ind. Eng. Theory Appl. Pract. 2024, 31, 412–428. [Google Scholar] [CrossRef]
  14. Uluskan, M.; Beki, B. Project Selection Revisited: Customized Type-2 Fuzzy ORESTE Approach for Project Prioritization. Int. J. Ind. Eng. Theory Appl. Pract. 2024, 31, 317–339. [Google Scholar] [CrossRef]
  15. Sharma, J.; Tyagi, M.; Bhardwaj, A. Contemplation of Operations Perspectives Encumbering the Production-Consumption Avenues of the Processed Food Supply Chain. Int. J. Ind. Eng. Theory Appl. Pract. 2023, 30, 121–146. [Google Scholar] [CrossRef]
  16. Ryan, T.P. Statistical Methods for Quality Improvement; John Wiley: New York, NY, USA, 1989. [Google Scholar]
  17. Montgomery, D.C. Design and Analysis of Experiments, 4th ed.; John Wiley & Sons: New York, NY, USA, 1997. [Google Scholar]
  18. Kaelo, P.; Ali, M. Some Variants of the Controlled Random Search Algorithm for Global Optimization. J. Optim. Theory Appl. 2006, 130, 253–264. [Google Scholar] [CrossRef]
  19. Lewis, R.; Shepherd, A.; Torczon, V. Implementing Generating Set Search Methods for Linearly Constrained Minimization. Soc. Ind. Appl. Math. 2007, 29, 2507–2530. [Google Scholar] [CrossRef]
  20. Lewis, R.; Torczon, V. Active set identification for linearly constrained minimization without explicit derivatives. Soc. Ind. Appl. Math. 2009, 20, 1378–1405. [Google Scholar] [CrossRef]
  21. Bogani, C.; Gasparo, M.G.; Papini, A. Generating Set Search Methods for Piecewise Smooth Problems. Siam J. Optim. 2009, 20, 321–335. [Google Scholar] [CrossRef]
  22. Khorsandi, A.; Alimardani, A.; Vahidir, B.; Hosseinian, S.H. Hybrid shuffled frog leaping algorithm and Nelder-Mead simplex search for optimal reactive power dispatch. IET Gener. Transm. Distrib. 2010, 5, 249–256. [Google Scholar] [CrossRef]
  23. Bera, S.; Mukherjee, I. A Mahalanobis Distance-based Diversification and Nelder-Mead Simplex Intensification Search Scheme for Continuous Ant Colony Optimization. World Acad. Sci. Eng. Technol. 2010, 69, 1265–1271. [Google Scholar]
  24. Griffin, J.; Kolda, T. Asynchronous parallel hybrid optimization combining DIRECT and GSS. Optim. Methods Softw. 2010, 25, 797–817. [Google Scholar] [CrossRef]
  25. Gao, X.K.; Low, T.S.; Liu, Z.J.; Chen, S.X. Robust Design for Torque Optimization Using Response Surface Methodology. IEEE Trans. Magn. 2002, 38, 1141–1144. [Google Scholar] [CrossRef]
  26. Luo, C.; Yu, B. Low dimensional simplex evolution: A new heuristic for global optimization. J. Glob. Optim. 2011, 52, 45–55. [Google Scholar] [CrossRef]
  27. Barzinpour, F.; Noorossana, R.; Akhavan Niaki, S.; Javad Ershadi, M. A hybrid Nelder–Mead simplex and PSO approach on economic and economic-statistical designs of MEWMA control charts. Int. J. Adv. Manuf. Technol. 2012, 65, 1339–1348. [Google Scholar] [CrossRef]
  28. Hossein Gandomi, A.; Yang, X.; Hossein Alavi, A. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2011, 29, 17–35. [Google Scholar] [CrossRef]
  29. Hussain, S.; Gabbar, H. Fault Diagnosis in Gearbox Using Adaptive Wavelet Filtering and Shock Response Spectrum Features Extraction. Struct. Health Monit. 2013, 12, 169–180. [Google Scholar] [CrossRef]
  30. McCormick, M.; Verghese, T. An Approach to Unbiased Subsample Interpolation for Motion Tracking. Ultrason. Imaging 2013, 35, 76. [Google Scholar] [CrossRef]
  31. Ren, J.; Duan, J. A parameter estimation method based on random slow manifolds. arXiv 2013, arXiv:1303.4600. [Google Scholar]
  32. Tippayawannakorn, N.; Pichitlamken, J. Nelder-Mead Method with Local Selection using Neighborhood and Memory for Stochastic Optimization. J. Comput. Sci. 2013, 9, 463–476. [Google Scholar] [CrossRef]
  33. Xiong, Q.; Jutan, A. Continuous Optimization Using a Dynamic Simplex method. Chem. Eng. Sci. 2003, 58, 3817–3828. Available online: https://www.sciencedirect.com/science/article/abs/pii/S0009250903002367 (accessed on 24 November 2024). [CrossRef]
  34. Luersen, M.; Le Riche, R. Globalized Nelder Mead method for engineering optimization. Comput. Struct. 2004, 82, 2251–2260. Available online: https://www.sciencedirect.com/science/article/abs/pii/S0045794904002378 (accessed on 24 November 2024). [CrossRef]
  35. Fan, S.; Liang, Y.; Zahara, E. A genetic algorithm and a particle swarm optimizer hybridized with Nelder–Mead simplex search. Comput. Ind. Eng. 2006, 50, 401–425. Available online: https://www.sciencedirect.com/science/article/abs/pii/S0360835206000404 (accessed on 24 November 2024).
  36. Chelouah, R.; Siarry, P. A hybrid method combining continuous tabu search and nelder-Mead simplex algorithms for the global optimization of multiminima functions. Eur. J. Oper. Res. 2003, 161, 636–654. Available online: https://www.sciencedirect.com/science/article/abs/pii/S0377221703006301 (accessed on 24 November 2024). [CrossRef]
Figure 1. Operations on the NMS: Reflection, Expansion, Contraction, and Shrinkage.
Figure 2. Reflection Operation.
Figure 3. Expansion Operation.
Figure 4. Contraction Operation.
Figure 5. Shrinkage Operation.
Figure 6. The Parallel Simplex Array for Six Control Variables.
Figure 7. Plastic Seal Profile.
Figure 8. Incomplete Shots.
Figure 9. Air Nozzles on the Mold.
Figure 10. Initial Vertices of the Parallel Simplex.
Table 1. Comparison of Different Algorithms for Differentiable Functions [11].
Columns: Function (Name, n, Fopt); then, for each method (VNS+NM, VNS+RNM, RMNM, EEMNM): Feval (number of function evaluations), Succ. (successful runs out of 10), and Fmin (best value reached when the optimum was not found).
S1. Branin20.39797210 7510 7210 7210
S2. Gldstein and Price23.000015010 21110 10910 10910
S3. Hartman H343−3.862624510 36710 70210 1234−2.1453
S4. Hartman H646−3.3224101910 137510 15478−3.27445905−2.9421
S5. Shekel 4, 104−10.5364136710 313110 17936−7.36792222−4.2124
S6. Shekel 4, 54−10.1532103510 131810 14875−6.40692705−6.4069
S7. Shubert2−186.7306107110 121810 222810 730−23.3780
S8. Ackley100188,72610 324,40510 35,70600.8977880012.4700
S9. Dixon and Price100527310 475210 513810 513810
S10. Griewank60148,52810 161,09610 17,28500.2108601053.1800
S11. Rosenbrock100973810 791710 848280.7974675080.7974
S12. Schwefel1001,173,56810 1,262,06010 16,42601789.88253501818.4500
S13. Zakharov100405610 421810 450810 450810
S14. Rastrigin100130,80210 222,88110 32,18407.56181147088.7500
S15. Powell120609210 597710 634910 623710
Sum 1,671,742150 2,002,001150 134,01697 29,25574
Table 2. Efficiency of NMS of Five Typical Problems [12].
| Problem | Dimension | Step Size | Iterations | L | D | B | A |
|---|---|---|---|---|---|---|---|
| 1 | 2 | 1 | 20 | 6.931 | 0.022 | 0.073 | 0.047 |
| 1 | 10 | 4 | 250 | 9.66 | 0.325 | 0.195 | 0.042 |
| 1 | 18 | 4 | 1250 | 11.340 | 0.797 | 0.096 | 0.023 |
| 1 | Average | | | 9.31 | 0.381 | 0.121 | 0.037 |
| 2 | 2 | 1 | 20 | 7.149 | 0.013 | 0.105 | 0.070 |
| 2 | 10 | 1 | 225 | 9.480 | 0.113 | 0.217 | 0.080 |
| 2 | 18 | 4 | 600 | 10.534 | 0.510 | 0.210 | 0.077 |
| 2 | Average | | | 9.054 | 0.212 | 0.177 | 0.076 |
| 3 | 2 | 1 | 30 | 7.630 | 0.049 | 0.046 | 0.038 |
| 3 | 10 | 4 | 595 | 10.579 | 2.461 | 0.963 | 0.543 |
| 3 | 18 | 4 | 2000 | 11.960 | 6.846 | 1.241 | 0.777 |
| 3 | Average | | | 10.056 | 3.119 | 0.75 | 0.453 |
| 4 | 4 | 1 | 80 | 8.357 | 0.113 | 0.153 | 0.075 |
| 4 | 8 | 1 | 240 | 9.519 | 0.306 | 0.254 | 0.093 |
| 4 | 16 | 4 | 1520 | 11.663 | 0.547 | 0.432 | 0.122 |
| 4 | Average | | | 9.846 | 0.322 | 0.280 | 0.097 |
| 5 | 2 | 1 | 20 | 7.172 | 0.028 | 0.084 | 0.060 |
| 5 | 10 | 4 | 170 | 9.112 | 0.222 | 0.310 | 0.106 |
| 5 | 18 | 4 | 1360 | 11.546 | 0.367 | 0.342 | 0.111 |
| 5 | Average | | | 9.277 | 0.206 | 0.245 | 0.092 |
Table 3. Results of the Parallel Simplex Process.
| Vertex No. | Simplex No. | Air Out 01 | Air Out 02 | Air Out 03 | Air Out 04 | Puller Height | Extrusion Speed | RD | Ranking | TETA |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 0.2000 | 0.1800 | 0.1800 | 0.2200 | 0.6900 | 22.4000 | 0.005 | B | 0.0000000000 |
| 2 | 1 | 0.3000 | 0.2800 | 0.2500 | 0.3200 | 0.7200 | 24.0000 | 0.085 | Nw | 0.0000000000 |
| 3 | 1 | 0.1500 | 0.1500 | 0.1500 | 0.1700 | 0.6500 | 21.0000 | 0.130 | W | 0.0026722222 |
| 4 | 1 | 0.3500 | 0.3100 | 0.2800 | 0.3700 | 0.7600 | 25.4000 | 0.500 | R | 0.0470722222 |
| 5 | 1 | 0.3000 | 0.2700 | 0.2475 | 0.3200 | 0.6775 | 22.1000 | 0.099 | Cw | 0.0017146667 |
| 6 | 2 | 0.2000 | 0.1700 | 0.1775 | 0.2200 | 0.6475 | 20.5000 | 0.050 | R | 0.0014735556 |
| 7 | 3 | 0.1000 | 0.0800 | 0.1100 | 0.1200 | 0.6600 | 20.8000 | 0.075 | R | 0.0008388889 |
| 8 | 3 | 0.1500 | 0.1275 | 0.1444 | 0.1700 | 0.6644 | 21.1250 | 0.065 | Cr | 0.0006500000 |
| 9 | 4 | 0.2500 | 0.2175 | 0.2119 | 0.2700 | 0.6519 | 20.8250 | 0.060 | R | 0.0000389000 |
| 10 | 5 | 0.3000 | 0.2600 | 0.2450 | 0.3200 | 0.6350 | 20.2000 | 0.090 | R | 0.0002888889 |
| 11 | 5 | 0.2625 | 0.2269 | 0.2198 | 0.2825 | 0.6570 | 20.8938 | 0.105 | Cw | 0.0005722222 |
| 12 | 6 | 0.2125 | 0.1794 | 0.1855 | 0.2325 | 0.6527 | 20.5688 | 0.006 | R | 0.0016402222 |
| 13 | 6 | 0.1938 | 0.1603 | 0.1723 | 0.2138 | 0.6530 | 20.4406 | 0.005 | E | 0.0016722222 |
| 14 | 7 | 0.1313 | 0.1034 | 0.1299 | 0.1513 | 0.6435 | 20.0469 | 0.060 | R | 0.0005722222 |
| 15 | 7 | 0.1641 | 0.1343 | 0.1524 | 0.1841 | 0.6469 | 20.2586 | 0.056 | Cr | 0.0005180000 |
| 16 | 8 | 0.2328 | 0.2009 | 0.2000 | 0.2528 | 0.6509 | 20.7117 | 0.003 | R | 0.0005615556 |
| 17 | 8 | 0.2836 | 0.2496 | 0.2350 | 0.3036 | 0.6546 | 21.0441 | 0.101 | E | 0.0005180000 |
| 18 | 9 | 0.2688 | 0.2366 | 0.2251 | 0.2888 | 0.6515 | 20.9531 | 0.025 | R | 0.0003686667 |
| 19 | 10 | 0.3016 | 0.2674 | 0.2476 | 0.3216 | 0.6549 | 21.1648 | 0.106 | R | 0.0019615556 |
| 20 | 10 | 0.2762 | 0.2431 | 0.2300 | 0.2962 | 0.6493 | 20.6662 | 0.096 | Cw | 0.0015748889 |
| 21 | 11 | 0.2402 | 0.2074 | 0.2049 | 0.2602 | 0.6487 | 20.4248 | 0.060 | R | 0.0014660000 |
| 22 | 12 | 0.1969 | 0.1652 | 0.1749 | 0.2169 | 0.6503 | 20.4703 | 0.002 | R | 0.0007348889 |
| 23 | 12 | 0.1572 | 0.1262 | 0.1473 | 0.1772 | 0.6507 | 20.3724 | 0.060 | E | 0.0007220000 |
| 24 | 13 | 0.1895 | 0.1587 | 0.1699 | 0.2095 | 0.6524 | 20.7572 | 0.002 | R | 0.0000002222 |
Table 4. Parallel Simplex vs. Response Surface Comparison.
| Variable | Parallel Simplex | Response Surface |
|---|---|---|
| x1 (Air 1) | 0.19 | 0.2059 |
| x2 (Air 2) | 0.16 | 0.1768 |
| x3 (Air 3) | 0.17 | 0.1813 |
| x4 (Air 4) | 0.21 | (out of the analysis due to lack of effect) |
| x5 (Puller Height) | 0.65 | 0.6514 |
| x6 (Extrusion Speed) | 20.75 | 20.66 |
Table 5. Regression Coefficients for Each Model.
| Model | X1, X2 | X3, X4 | X5, X6 | Global (X1, X2, X3, X4, X5, X6) |
|---|---|---|---|---|
| Adjusted R² | 71.55% | 75.22% | 82.52% | 93.28% |
| Stationary point | Minimum | Minimum | Minimum | Saddle |
Table 6. Algorithms based on the proposal of Nelder and Mead.
| Reference | Method | Application | Characteristics |
|---|---|---|---|
| Xiong and Jutan (2003) [33] | Dynamic Simplex | Chemical processes | The conventional Nelder–Mead method is adapted and expanded to facilitate the tracking of evolving optima. |
| Luersen and Le Riche (2004) [34] | Globalized Nelder–Mead | Engineering design | Adapted to multimodal, discontinuous optimization problems for which a global optimization cannot be guaranteed. Different strategies for restarting the local search; made more robust by reinitializing degenerated simplexes. |
| Fan et al. (2006) [35] | GA and PSO hybridized with Nelder–Mead simplex | Locating global optima of nonlinear continuous-variable functions, mainly in response surface methodology (RSM) | Both the hybrid NM–GA and NM–PSO algorithms incorporate concepts from NM, GA, or PSO; they are readily implemented in practice, and the computation of functional derivatives is not necessary. |
| Chelouah and Siarry (2003) [36] | Continuous Tabu Search and Nelder–Mead | Combinatorial optimization problems | Avoids the risk of being trapped in a local minimum. |
