Article

A Study of Stopping Rules in the Steepest Ascent Methodology for the Optimization of a Simulated Process

by Paulo Eduardo García-Nava *,†, Luis Alberto Rodríguez-Picón, Luis Carlos Méndez-González and Iván Juan Carlos Pérez-Olguín
Department of Industrial Engineering and Manufacturing, Autonomous University of Ciudad Juárez, Av. del Charro no. 450 Nte. Col. Partido Romero, Ciudad Juárez 32310, Chihuahua, Mexico
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2022, 11(10), 514; https://doi.org/10.3390/axioms11100514
Submission received: 5 September 2022 / Revised: 23 September 2022 / Accepted: 24 September 2022 / Published: 29 September 2022
(This article belongs to the Section Mathematical Analysis)

Abstract:
Competitiveness motivates organizations to implement statistical approaches for improvement purposes. The literature offers a variety of quantitative methods intended to analyze and improve processes such as the design of experiments, steepest paths and stopping rules that search optimum responses. The objective of this paper is to run a first-order experiment to develop a steepest ascent path to subsequently apply three stopping rules (Myers and Khuri stopping rule, recursive parabolic rule and recursive parabolic rule enhanced) to identify the optimum experimentation stop from two different simulated cases. The method includes the consideration of the case study, the fitting of a linear model, the development of the steepest path and the application of stopping rules. Results suggest that procedures’ performances are similar when the response obeys a parametric function and differ when the response exhibits stochastic behavior. The discussion section shows a structured analysis to visualize these results and the output of each of the stopping rules in the two analyzed cases.

1. Introduction

Today’s industry demands more optimization in its processes. Weichert et al. [1] note that advances in the manufacturing industry and the resulting available data have brought important progress and large interest in optimization-related methods for improving production processes. In several disciplines, engineers have to make many technological and managerial decisions at different stages for optimization purposes. The literature shows different examples of this, such as [2,3,4], with research that successfully involves optimization for different purposes. The ultimate goal is either to minimize the effort required or to maximize the desired benefit. For instance, Balafkandeh et al. [5] and Juangphanich et al. [6] focus their efforts on optimizing operations to minimize outputs, while Oleksy-Sobczak and Klewicka [7] and Delavar and Naderifar [8] pursue optimization of inputs to maximize results.
However, companies are not only concerned with creating more products with fewer resources; they also intend to perform operations right the first time. Several organizations around the world use statistical approaches as the core of process problem solving. According to Bryant [9], problem solving constitutes the ambition to transcend the limits of ordinary capability, sometimes against rational ideas and the limitation of human capabilities, because people still need to reduce potential complexity and manage cognitive load. In an industrial environment, problem solving is closely related to productivity and is an everyday concern. In this scenario, each company is responsible for its own development. As noted by Apsemidis et al. [10], the complexity of the industrial environment may be large enough to render classical process-monitoring techniques inadequate and to call for new statistical learning methodologies in their place. This is why mathematical and statistical approaches are being designed for organizations to obtain higher performance in their processes.
There are several quantitative methods such as Design Of Experiments (DOE), intended to analyze and improve processes. As explained by Montgomery [11], experiments are useful to understand the performance of a process or any system that combines operations, machines, methods, people and other resources to transform some inputs (commonly a material) into an output with one or more observable response variables. Likewise, experimental designs have found several methods of learning through a series of activities, with the aim of making conjectures about a process to drive innovation in the product realization process, resulting in improvement of process yield, reduction of variability, closer conformance to nominal, reduction of development time and, finally, reducing costs by the optimization of processes.
For instance, Sheoran and Kumar [12] studied a set of processing parameters that needed to be carefully selected for a specific output requirement. It was noted that some of these parameters were more significant for the response variable than the rest; this significance needed to be identified and optimized. Therefore, researchers explored different experimental or statistical approaches such as DOE for optimization and property improvement purposes.
The task of optimizing systems is always complex because of the quantity of factors involved. Beyond DOE, there is a method called Response Surface Methodology (RSM) used for this purpose. As stated by Myers et al. [13], RSM is a collection of statistical techniques useful for developing, improving and optimizing a process. In the case of industrial processes, the method is particularly useful in situations where different input variables influence the performance or quality characteristics of the product (the response variable). It helps solve several problems, such as mapping a response surface over a particular region of interest, selecting operating conditions to achieve specifications and customer requirements and, of course, optimizing a response.
For example, Karimifard and Moghaddam [14] present RSM as a powerful tool for designing experiments and analyzing processes related to different environmental wastewater treatment operations, with successfully optimized outputs.
Beyond these strategic tools, an auxiliary method to analyze the behavior of a response is the Steepest Ascent (or Descent) Method (SADM). It is useful for finding a region where optimization is feasible. Myers et al. [13] remark that this method searches for such regions through experimental design, model building and sequential experimentation. The designs most frequently used are two-level factorial and fractional factorial designs. It is fundamental to remember that the strategy involves sequential movement of the factors from one region to another, resulting in more than one experiment. As mentioned by De Oliveira et al. [15], process optimization commonly involves statistical techniques such as RSM as one of the most effective ways to pursue optimization through modeling.
A great exemplification of this is application of SADM by Chavan and Talange [16], who applied a full factorial statistical design to obtain a model to find which input factors affect the response variables significantly in a process of fuel cells. The steepest ascent method was applied to find the maximum power delivered by these fuel cells within the defined ranges of input factors.
Since the SADM entails consecutive individual experimentation, a procedure is needed that gives mathematical support for recognizing when the response has been improved and no more experimentation needs to be carried out. Such procedures are called Stopping Rules (SRs).
Myers and Khuri [17] presented a procedure that consists in performing a sequence of sets of trials with the information provided by the first-order fitting. This Myers and Khuri Stopping Rule (MKSR) is used to determine a path along which an increasing or decreasing response is observed. The procedure accounts for the random error variation in the response and avoids taking many observations when the true mean response is decreasing. Furthermore, it prevents premature stopping decisions when the true mean response is increasing. The stopping procedure is applied once a steepest path has been developed and has been used in experimental strategies for optimum-seeking methodology. There have been various informal stopping procedures, such as stopping at the first drop in the response, or stopping after three consecutive drops. Nevertheless, due to the presence of random error variation in the observed response, an apparent drop may not reflect the true behavior of the function; hence the need for formal rules such as the MKSR. The most important characteristic of the MKSR is the assumption that $y(t)$ is normally distributed with mean $\eta(t)$ and variance $\sigma^2$, from a sequence of independent normal variables.
Similarly, Miró-Quesada and Del Castillo [18] reported that a first-order experimental design is commonly followed by a steepest ascent search where there is a need for a stopping rule to determine the optimal point in the search direction. This procedure, known as Del Castillo’s Recursive Parabolic Rule (RPR), has been studied for quadratic responses. It is assumed that it is of interest to maximize the response, so the steepest ascent case is considered. In real experimentation, stopping a search before the maximum response over a path means that the optimum value will not be selected and the procedure will not be able to be efficient because of the wasting of resources in experimentation. One of the most important considerations when applying the RPR is that this procedure tries to fit a quadratic behavior to the observed data. It also recursively updates the estimate of the second-order coefficient and tests if the first derivative is negative. However, the necessity of developing a more robust procedure to also consider non-quadratic behavior is starting to be an important issue to solve.
Finally, Del Castillo [19] explains a procedure to also consider non-quadratic behavior, called the Recursive Parabolic Rule Enhanced (RPRE). It has the advantage of being more robust because it becomes more sensitive when the standard deviation of the error is small. It mainly consists of three modifications to the traditional recursive rule. The intercept and the first order term, in addition to the second term, are recursively fitted. Furthermore, only a local parabolic model along the search direction is fitted, defined by a new concept called window. Finally, a coding scheme on the numbers of steps is used to reduce variance.
In this paper, a comprehensive study is carried out. It considers two simulated processes that produce response variables from a set of inputs (different levels and factors) configured according to a factorial design. The application and comparison of the three SRs mentioned above is illustrated, with the objective of identifying the best performance. This analysis is relevant because the current literature lacks recent studies of this nature, in which three of the most important formal stopping rules are applied over a path of improvement built from the coefficients of a linear model. A caveat is that the response variables were obtained from simulators specifically for this experiment, so direct comparison with previous studies is not feasible; the results apply to the particular conditions under which the responses were obtained.
At this point, the originality of the contribution of this paper not only relies on the verification and comparison of performance of SRs, but also on the interaction of SRs with the behavior of the outputs. SR performance is similar when the response obeys a parametric function and differs when the output follows a stochastic behavior. It is important that further analysis over stochastic behaviors continue to pursue an even better procedure to stop over the path of improvement.
After this introduction, Section 2 presents the method used in the paper and explains the application of the rules. Section 3 presents the results obtained for both cases after applying the three SRs. Finally, the discussion and conclusions are presented in Section 4 and Section 5, respectively.

2. Method

This procedure is intended to follow the main objective, which consists in running a first-order designed experiment and subsequently performing a steepest ascent/descent. The core of this analysis is to run the SRs previously addressed to discover the best performance between the MKSR, the RPR and the RPRE under the conditions of the specific experimentation schemes and cases considered.
The method is explained in Figure 1 as a flow chart to easily visualize the progress.
The first stage of the method consists in the considerations of the two cases that will be presented. Second, the first-order model is fitted using a linear regression equation. Then, the steepest path is determined from the first-order model so that the MKSR, RPR and RPRE can be applied. Finally, the rule with the best performance is identified for each of the analyzed cases.
a. Considerations of the case study
The analysis considers two different simulated cases, both of which involve the implementation of a factorial experimental design. This factorial analysis includes seven factors, two levels, six center points and two replicates, giving a total of 262 experimental runs. For ANOVA, the main assumptions of normality, equality of variances and independence of residuals must be assured for a well-performed analysis [20].
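As a quick arithmetic check, the run count follows from the design size. The sketch below (in Python, for illustration only) assumes the six center points are added once rather than replicated, which reproduces the total quoted above:

```python
# Run count for a replicated two-level full factorial with center points.
factors = 7
levels = 2          # two-level design
replicates = 2
center_points = 6   # assumed to be added once, not replicated

runs = levels ** factors * replicates + center_points
print(runs)  # 262
```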
b. Fit the first order model
The execution of the designed experiment starts with the consideration of the levels for each of the seven factors given by the simulators. The factorial design with the replicates and center points is given by a statistical software. Once the design is done, specific values of the factors are set in the simulators so that a response variable can be obtained from each of the two simulators. Once the responses are obtained, a factorial analysis is performed. The analysis offers a Coded Coefficients (CC) table which is used to build the Coded Unit Regression Equation (CURE). This first order model is obtained as the base for the development of the path for the SADM.
c. Determine the steepest path with the first order model
Once the coded unit regression equation is obtained, the steepest path can be built. According to [13], a general algorithm determines the coordinates of a point on the path. Considering that the point $x_1 = x_2 = \cdots = x_k = 0$ is the origin, it is necessary to:
* Select a step size for the path. The variable with the largest absolute regression coefficient is the one selected;
* Calculate the step size in the other variables with (1) as follows:
$\Delta x_j = \dfrac{b_j}{b_i/\Delta x_i}, \quad j = 1, 2, \ldots, k;\ i \neq j \qquad (1)$
where $b_j$ represents the regression coefficient of the factor whose step size is to be estimated, $b_i$ represents the regression coefficient of the factor with the largest absolute coefficient, and $\Delta x_i$ and $\Delta x_j$ are the step sizes of the process variables;
* Convert Δ x j from the coded variables to the natural variables.
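The steps above can be sketched in a few lines of Python; the function and the coefficients below are illustrative assumptions, not values taken from the paper's cases:

```python
def coded_steps(coefs, base_factor, base_step):
    """Coded step sizes along the steepest path via (1):
    Delta x_j = b_j / (b_i / Delta x_i), where i indexes the factor
    with the largest absolute regression coefficient."""
    ratio = coefs[base_factor] / base_step
    return {f: b / ratio for f, b in coefs.items()}

# Hypothetical coefficients; factor "A" has the largest |coefficient|.
coefs = {"A": 0.50, "B": -0.25, "C": 0.10}
print(coded_steps(coefs, "A", 1.0))  # {'A': 1.0, 'B': -0.5, 'C': 0.2}
```

Each coded step would then be converted to natural units using the scaling of the corresponding factor.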
d. Application of MKSR, RPR and RPRE
The intention is to utilize a formal procedure in order to stop at a required value. For this paper, the MKSR, RPR and RPRE will be applied for comparison.
The procedure of Myers and Khuri [17] is as follows:
1. The MKSR assumes the observed response $y(t)$ to be normally distributed; thus $y(t) \sim \mathrm{Normal}(\eta(t), \sigma^2)$;
2. A significance test is run using confidence intervals as follows:
$y(n_{i+1}) - y(n_i) \geq b$: individual experimentation continues;
$a < y(n_{i+1}) - y(n_i) < b$: individual experimentation continues;
$y(n_{i+1}) - y(n_i) \leq a$: individual experimentation stops;
3. A solution is established for the limits $a$ and $b$ of the procedure in (2) as follows:
$a = -b = \Phi^{-1}\!\left(\dfrac{1}{2\kappa}\right)\sigma_\epsilon\sqrt{2} \qquad (2)$
where $a$ and $b$ are the limits of the interval for the significance test, $\Phi$ is the normal cumulative distribution function, $\kappa$ is a guess of the number of individual experimental runs needed to arrive at the improvement and $\sigma_\epsilon$ is the square root of the adjusted mean square of the ANOVA from the factorial analysis;
4. Once the values for a and b are computed, the decision to stop is determined with (3) as follows:
$y(n_i) - y(n_{i-1}) \leq a < 0 \qquad (3)$
where y ( n i ) is a present value from the response variables in the path of improvement from SADM, y ( n i 1 ) is a past value from the path and a is a limit of the interval of the significance test of the procedure;
5. The moment at which individual experimentation stopped marks the response considered to be the best. Nevertheless, if a better response was identified at an earlier step, that value is the new best response.
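As an illustration, the five steps above can be condensed into a short Python sketch. The function name and interface are our own; the limit $a$ is computed from the inverse normal CDF as in (2):

```python
from statistics import NormalDist

def mksr_stop(path, sigma, kappa):
    """Sketch of the Myers-Khuri stopping rule, assuming
    y(t) ~ Normal(eta(t), sigma^2).  `path` holds the responses
    y(n_0), y(n_1), ... along the steepest ascent; `kappa` is the
    guessed number of runs needed to reach the improvement.
    Returns (stop_index, best_index, a)."""
    # Limit a = -b = Phi^{-1}(1/(2*kappa)) * sigma * sqrt(2); note a < 0.
    a = NormalDist().inv_cdf(1.0 / (2.0 * kappa)) * sigma * 2 ** 0.5
    stop = len(path) - 1
    for i in range(1, len(path)):
        if path[i] - path[i - 1] <= a:   # first significant drop: stop
            stop = i
            break
    # Step 5: if a better response occurred earlier, that is the answer.
    best = max(range(stop + 1), key=lambda l: path[l])
    return stop, best, a
```

Scanning the path of responses, the function reports both the stopping step and the best earlier response, mirroring step 5.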
On the other hand, ref. [18] presents the RPR procedure:
1. It assumes the observed response $y(t)$ to behave quadratically, $y(t) = \eta(t) + \epsilon_t = \theta_0 + \theta_1 t + \theta_2 t^2 + \epsilon_t$, and sets the first derivative $y'(t) = \theta_1 + 2\theta_2 t$ to zero; thus $t^* = -\theta_1/(2\theta_2)$;
2. The parameters $\theta_0$ (i.e., $Y(0)$), $\theta_1$ and $\theta_2$ are estimated as follows:
a. θ 0 is obtained by computing the arithmetic mean of center points of the experiment;
b. θ 1 is estimated by calculating (4):
$\theta_1 = \sqrt{b_1^2 + b_2^2 + \cdots + b_k^2} \qquad (4)$
where $\theta_1$ is one of the parameter estimates that assists computation in the procedure and $b_i$ represents the regression coefficients of the linear model;
c. θ 2 must be recursively estimated. Therefore, there will be one θ 2 for each iteration or individual experimentation t. This means the estimation of θ 2 ( t ) ;
3. θ 2 ( t ) should be estimated as follows:
a. For θ 2 ( 0 ) , (5) is considered:
$\theta_2(0) = -\dfrac{\theta_1}{2\, t_{prior}} \qquad (5)$
where θ 2 ( 0 ) works as an estimation for the procedure when t = 0 , and t p r i o r works as an initial guess about the number of iterations or individual experiments that are considered to be necessary to reach the optimum value;
b. For θ 2 ( t ) starting from θ 2 ( 1 ) , the updating is calculated using (6) as follows:
$\theta_2(t) = \theta_2(t-1) + \dfrac{P_{t-1}\, t^2}{1 + t^4 P_{t-1}} \left( Y(t) - Y(0) - \theta_1 t - \theta_2(t-1)\, t^2 \right) \qquad (6)$
where Y ( t ) represents the response variable in t time in the path of improvement.
c. To initialize $P_t$ at $P_0$, Miró-Quesada and Del Castillo [18] propose establishing an initial value, similar to the initial guess $t_{prior}$;
d. For P t starting from P 1 , the updating is calculated using (7), as shown next:
$P_t = \left( 1 - \dfrac{P_{t-1}\, t^4}{1 + t^4 P_{t-1}} \right) P_{t-1} \qquad (7)$
where $P_t$ is considered the scaled variance of $\theta_2(t)$ after $t$ iterations of individual experimentation, noted as $P_t = \frac{1}{\sigma_\epsilon^2}\,\mathrm{Var}(\theta_2(t))$;
e. An estimate of the variance $\sigma^2_{(\theta_1 + 2\theta_2(t)t)}$ is obtained with (8) as follows:
$\sigma^2_{(\theta_1 + 2\theta_2(t)t)} = 4\, \sigma_\epsilon^2\, t^2 P_t. \qquad (8)$
This result will be used next for comparison purposes;
4. A decision rule is applied to determine whether (9) is fulfilled. If so, individual experimentation stops. The inequality is as follows:
$\theta_1 + 2\theta_2(t)\, t < -3 \sqrt{\sigma^2_{(\theta_1 + 2\theta_2(t)t)}}. \qquad (9)$
The intention is to compare both sides of (9) to assure the stopping iteration;
5. The iteration at which $t$ stopped gives the value of $y$ considered to be the best response. Nevertheless, if a better response was identified at an earlier step, that value is the new best response (the same as in the MKSR).
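To make the recursions concrete, here is a minimal Python sketch of the RPR for the maximization case; the interface is ours, and the initial guesses $t_{prior}$ and $P_0$ are passed in as in the text:

```python
def rpr_stop(path, theta1, sigma_eps, t_prior, p0):
    """Sketch of the recursive parabolic rule (RPR).  path[0] is the
    center-point average Y(0); path[t] the response at step t.
    theta1 = sqrt(sum of squared regression coefficients)."""
    theta2 = -theta1 / (2.0 * t_prior)          # theta_2(0), from (5)
    p = p0
    stop = len(path) - 1
    for t in range(1, len(path)):
        resid = path[t] - path[0] - theta1 * t - theta2 * t ** 2
        theta2 += (p * t ** 2) / (1.0 + t ** 4 * p) * resid      # (6)
        p = (1.0 - (p * t ** 4) / (1.0 + t ** 4 * p)) * p        # (7)
        deriv = theta1 + 2.0 * theta2 * t       # estimated slope at t
        sd = (4.0 * sigma_eps ** 2 * t ** 2 * p) ** 0.5          # from (8)
        if deriv < -3.0 * sd:                   # decision rule (9)
            stop = t
            break
    # Step 5: return the best response observed up to the stop.
    best = max(range(stop + 1), key=lambda l: path[l])
    return stop, best
```

On a noise-free parabolic path the recursion recovers the curvature and stops shortly after the estimated derivative turns significantly negative.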
Finally, Del Castillo [19] explains the RPRE, which recursively fits the intercept and the first-order term in addition to the second-order term in (10), as shown here:
$Y(t) = \eta(t) + \epsilon_t = \theta_0 + \theta_1 t + \theta_2 t^2 + \epsilon_t \qquad (10)$
where η ( t ) denotes the operation θ 0 + θ 1 t + θ 2 t 2 .
The procedure can be summarized with the following five main steps:
1. The recursive fitting increases robustness for non-quadratic behavior by specifying a maximum number of experiments in the recursive least squares algorithm. A concept called a “window” is applied to fit only a local parabolic model along the search direction, making the rule less sensitive to large-scale deviations from quadratic behavior. The window size ($N$) is determined using an indicator called the “Signal-to-Noise Ratio” (SNR), estimated with (11) as follows:
$SNR = \dfrac{b}{\sigma_\epsilon} \qquad (11)$
where σ ϵ is the standard deviation from the center points of the experiment.
The variable b is estimated using (12) as follows:
$b = \lVert \beta \rVert = \sqrt{\textstyle\sum_{i=1}^{k} b_i^2}. \qquad (12)$
Finally, it is necessary to identify $N$ in a table of window sizes; for the enhanced stopping rule, this table lists the $N \times 1$ vector $b_N$ and the scalar $v_N$;
2. As in the procedure for the RPR, t p r i o r continues to be an initial guess about the number of individual experiments that are suggested to be necessary to reach the optimum value. Now, for the estimation of parameters when t = 0 , computations (13)–(15) are suggested, as illustrated next:
θ 0 ( 0 ) = Y ( 0 )
where Y ( 0 ) represents the average of the response variables obtained from the center points of the experiment. Furthermore:
θ 1 ( 0 ) = b
where b represents the square root of the sum of the squares of the regression coefficients of the linear model of the experiment. Next:
$\theta_2(0) = -\dfrac{\theta_1(0)}{2\, t_{prior}} \qquad (15)$
where the constant 2 proposed by the author of the rule has the intention of making the value of t p r i o r more robust;
3. The algorithm makes use of the matrix definitions (16)–(19) for updating the three parameters θ 0 , θ 1 and θ 2 :
$\theta(t) = \begin{pmatrix} \theta_0(t) \\ \theta_1(t) \\ \theta_2(t) \end{pmatrix}; \qquad (16)$
$\phi_t = \begin{pmatrix} 1 \\ t \\ t^2 \end{pmatrix}; \qquad (17)$
$d_t = \dfrac{d\phi_t}{dt} = \begin{pmatrix} 0 \\ 1 \\ 2t \end{pmatrix}; \qquad (18)$
$P_0 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 10 \end{pmatrix}. \qquad (19)$
The large value of 10 given to P 0 makes the rule robust against possibly large discrepancies between t p r i o r and t * , giving “adaptation” ability to varying curvature.
Now, (20) is used to update θ ( t ) :
$\theta(t) = \theta(t-1) + \dfrac{P_{t-1}\, \phi_t}{1 + \phi_t' P_{t-1} \phi_t} \left( Y(t) - \phi_t'\, \theta(t-1) \right). \qquad (20)$
Furthermore, (21) is used to update P t :
$P_t = \mathrm{Var}(\theta(t)) / \sigma_\epsilon^2 = \left( I - \dfrac{P_{t-1}\, \phi_t \phi_t'}{1 + \phi_t' P_{t-1} \phi_t} \right) P_{t-1}; \qquad (21)$
4. If (22) is fulfilled, the search stops and returns $t^*$, such that the maximum response is determined by $Y(t^*) = \max_{l=1,\ldots,t} \{ Y(l) \}$. The rule is shown next:
$d_t'\, \theta(t) < -1.645\, \sigma_\epsilon \sqrt{d_t' P_t\, d_t} \qquad (22)$
where $-1.645$ represents a standardized value of the normal distribution for a significance level of 0.05.
Otherwise, the procedure continues for $t \geq N - 1$, following the next steps according to Del Castillo [19]:
a. Perform an experiment at step $t \geq N - 1$;
b. Update vector Y N ( t ) with the observed value Y ( t ) by discarding its first element, shifting the remaining elements one position up in the vector and including Y ( t ) as the last element in Y N ( t ) ;
c. Read $b_N$ and $v_N$ from the table of window sizes for the enhanced stopping rule proposed by [19], which lists the $N \times 1$ vector $b_N$ and the scalar $v_N$. After this, continue individual experimentation until (23) is fulfilled:
$b_N'\, Y_N(t) < -1.645\, \sigma_\epsilon \sqrt{v_N}; \qquad (23)$
5. If the inequality holds, then stop the search and return $t^*$ such that $Y(t^*) = \max_{l=1,\ldots,t} \{ Y(l) \}$.
Next, Figure 2 shows the steps to follow in each of the SRs previously mentioned.
e. Selection of the rule with best performance for each case.
In this last stage of the method, the response value with the best performance is observed for each of the simulators. This means that through the analysis, either the MKSR, RPR or RPRE will have better performance according to the application of each SR and their adjustment with the behavior of the data. For each simulator, the best value and SR is mentioned.

3. Results

The analysis considers two cases. These two simulated cases were selected because of the importance of a well-performed approach to processes with several inputs and only one output. The combination of levels and factors is vital to understanding the interactions in the system; nonetheless, identifying the relevant factors among several candidates is indispensable for optimizing the response. Furthermore, as multiple experiments are required to compare the considered stopping rules, simulated processes are an attractive option, since they offer practical scenarios that are reproducible for optimization purposes. Both cases follow factorial designs with seven factors, two levels, six center points and two replicates, giving a total of 262 experimental runs per case. The first case has factors P, Q, R, S, T, U and V. The low levels for these factors are 11, 68, 67, 275, 0.4, 70 and 16, respectively; the high levels are 13.5, 84, 92, 300, 0.5, 80 and 20, respectively. After the factorial analysis was performed using Minitab®, only factors P, Q, S and U were significant for the response. The second case has factors S, T, Y, Z, E, F and G. The low levels for these factors are 325, 650, −2, 1.4, 1, 28.5 and 9, respectively; the high levels are 350, 700, 0, 1.5, 3.0, 31 and 13, respectively. Only factors Y, Z, E and G were significant. This information was used in both cases to build the steepest path, using the procedure in [13] to determine the step size for each of the significant factors. Whether these experiments are diverse enough to generalize the findings depends on the conditions and properties of the experiments themselves; the conclusions are therefore most valuable for situations with natures and characteristics similar to those presented here.
Next, the three SRs are applied to Case 1.

3.1. Results for Case 1

The coded unit regression equation for Case 1 is:
$y = 4.2943 - 0.0367P + 0.2123Q - 0.0381S + 0.0519U.$
The path for the steepest ascent is built considering:
  • The selection of a step size for this path. The variable with the largest absolute regression coefficient is the one selected. In this case, factor Q is selected;
  • The proposed natural step size for the factor Q is Δ Q = 1 . Through conversion from coded to natural units, the coded step size for Q is Δ X Q = 0.1250 ;
  • The calculation of the coded step sizes for the rest of the variables is performed with (1).
    For example, the coded step size of P is:
    $\Delta X_P = \dfrac{-0.0367}{0.2123/0.1250} = -0.0216.$
    Thus, the coded step sizes for the significant factors are:
    $\Delta X_P = -0.0216$ for P,
    $\Delta X_Q = 0.1250$ for Q,
    $\Delta X_S = -0.0224$ for S and
    $\Delta X_U = 0.0306$ for U.
    The natural step sizes for the same factors are:
    $\Delta P = -0.0270$ for P,
    $\Delta Q = 1$ for Q,
    $\Delta S = -0.2804$ for S and
    $\Delta U = 5$ for U.
    The path for Case 1 is built next.
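As a check on the conversion, each natural step equals the coded step multiplied by half the factor's range, using the low/high levels reported above for Case 1. This is a sketch under that assumption about the coding, shown for P, Q and S, with signs following the negative regression coefficients of P and S:

```python
# Half-ranges computed from the (low, high) levels of Case 1.
half_range = {"P": (13.5 - 11) / 2, "Q": (84 - 68) / 2, "S": (300 - 275) / 2}
coded = {"P": -0.0216, "Q": 0.1250, "S": -0.0224}  # coded step sizes

natural = {f: coded[f] * half_range[f] for f in coded}
print({f: round(v, 4) for f, v in natural.items()})
# {'P': -0.027, 'Q': 1.0, 'S': -0.28}
```

The values match the natural steps quoted in the text up to rounding.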
Application of MKSR to Case 1
1. Assumption of behavior in Case 1. The steepest direction is shown in Table 1, running 15 iterations. It starts with step 0, computing the center points of the experiment.
This procedure assumes a normally distributed behavior in response y ( t ) . Figure 3 shows the behavior of the response in its steepest path from t = 1 to t = 14 . The straight line tries to illustrate the assumption of normality for the response.
2. Significance test in Case 1. Iterations or individual experimentation through the steepest path stop when $y(n_{i+1}) - y(n_i) \leq a$.
3. Estimation of limits $a$ and $b$ in Case 1. Limits $a = -0.744$ and $b = 0.744$ are calculated using (2), as shown in Table 2. Those limits are used to identify the moment when (3) is fulfilled. It is important to remember that the value of $\kappa$ is a guess of the number of individual experimental runs needed to arrive at the improvement; in this case, $\kappa = 15$.
4. Application of decision rule in Case 1. The decision to stop is determined by (3). This means that the time $t$ should stop when $y(n_i) - y(n_{i-1}) \leq -0.744$. The behavior of the data is shown in Table 3.
5. Selection of optimum response in Case 1. If $y(n_i) - y(n_{i-1}) \leq a$, the search stops and returns $t^*$ such that $Y(t^*) = \max_{l=1,\ldots,t} \{ Y(l) \}$. As noted, the best performance for the MKSR is found at iteration 13, with a response of 6.37 units.
Application of RPR to Case 1
1. Assumption of behavior in Case 1. The steepest path applies the same way for this case. Now, Figure 4 shows the behavior of the response in the steepest path from t = 1 to t = 14 . The curved line tries to illustrate the quadratic assumption of the response.
2. Estimation of parameters θ 0 and θ 1 in Case 1. The estimation starts with θ 0 , which is obtained by calculating the arithmetic mean of center points. In this case:
$\theta_0 = \dfrac{4.10 + 4.41 + 4.42 + 4.37 + 3.88 + 4.45}{6}$; thus, $\theta_0 = Y(0) = 4.2710$.
Applying (4) through the coefficients of significant factors,
$\theta_1 = \sqrt{(-0.0367)^2 + (0.2123)^2 + (-0.0381)^2 + (0.0519)^2} = 0.2249.$
This is known as the slope of the response function at the origin in the steepest direction.
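Both estimates can be reproduced directly from the reported center points and coefficients; a quick check in Python ($\theta_0$ matches the reported 4.2710 up to rounding of the center-point responses):

```python
# theta_0: arithmetic mean of the six center-point responses (Case 1)
center_points = [4.10, 4.41, 4.42, 4.37, 3.88, 4.45]
theta0 = sum(center_points) / len(center_points)   # ~4.27

# theta_1: Euclidean norm of the significant regression coefficients, (4)
coefs = [-0.0367, 0.2123, -0.0381, 0.0519]
theta1 = sum(b ** 2 for b in coefs) ** 0.5
print(round(theta0, 2), round(theta1, 4))  # 4.27 0.2249
```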
3. Recursive estimation of parameters in Case 1. Table 4 details the recursive estimation of parameters, which assists the stopping decision. For this case, $P_0 = t_{prior} = 10$.
4. Application of decision rule in Case 1. The decision to stop is given when (9) is fulfilled. The "Status" column of Table 4 shows the moment that this occurs.
5. Selection of optimum response in Case 1. As seen in Table 4, the decision to stop occurred at iteration 5; nevertheless, the best response was 4.62 units, because the rule returns $t^*$ such that $Y(t^*) = \max_{l=1,\ldots,t} \{ Y(l) \}$.
Application of RPRE to Case 1
1. Assumption of behavior in Case 1. Figure 5 shows the response behavior in the steepest path and the assumption of both quadratic and non-quadratic behavior. The straight and curved lines illustrate the capability of this procedure to assume both types of behavior.
In order to estimate the indicator SNR and obtain the N, it is necessary to use (12) to compute:
$b = \sqrt{\textstyle\sum_{i=1}^{k} b_i^2} = \sqrt{(-0.04)^2 + (0.21)^2 + (-0.04)^2 + (0.05)^2} = 0.22.$
This is the slope of the response function at the origin in the steepest direction.
Then, (11) is applied to estimate the indicator SNR:
$SNR = \dfrac{b}{\sigma_\epsilon} = \dfrac{0.22}{0.21} = 0.23.$
This indicator yields $N = 15$, meaning that computations start for $t < N - 1$, i.e., $t < 14$.
2. Estimation of parameters θ 0 , θ 1 and θ 2 when t = 0 in Case 1.
Results for parameters θ 0 , θ 1 and θ 2 when t = 0 are presented next:
$\theta_0(0) = 4.62$;    $\theta_1(0) = 0.22$;    $\theta_2(0) = -0.01$.
3. Recursive estimation of θ 2 and P t when t < N 1 in Case 1.
Table 5 shows the recursive estimation of $\theta_i$ and $P_t$ when $t < N - 1$. For this case, $P_0$ obeys (19) and $t_{prior} = 18$.
4. Application of decision rule in Case 1. The search stops when (22) is fulfilled; if it is not, the modifications for $t \geq N - 1$ are applied. In this case, the inequality $d_t'\, \theta(t) < -1.645\, \sigma_\epsilon \sqrt{d_t' P_t\, d_t}$ is fulfilled, so the search stops and returns $t^*$ such that $Y(t^*) = \max_{l=1,\ldots,t} \{ Y(l) \}$.
5. Selection of optimum response in Case 1. Table 5 shows the decision to stop, which occurred at iteration 5. Nevertheless, the best response was 4.62 units.
Now, the application of the three SRs in Case 2.

3.2. Results for Case 2

The coded unit regression equation for Case 2 is:
$y = 149.7810 + 9.5079Y - 0.2023Z + 31.1119E + 29.8927G.$
The path for the steepest ascent is built considering:
  • The selection of a step size for this path. The variable with the largest absolute regression coefficient is selected; in this case, factor E is used to propose a natural step size;
  • The proposed natural step size for factor E is one unit; thus, Δ E = 1.0000. Converting between coded and natural units, the coded step size for E is Δ X E = 1.0000; by coincidence, it is equal in both coded and natural units;
  • The calculation of the coded step size for the other variables is performed with (1). For example, the coded step size of Z is:
    Δ X_Z = 0.2023 / (31.1119 / 1.0000) = 0.0065.
    Thus, the coded step sizes for the significant factors are:
    Δ X_Y = 0.3056 for Y;
    Δ X_Z = 0.0065 for Z;
    Δ X_E = 1.0000 for E; and
    Δ X_G = 0.9608 for G.
    The natural step sizes for the same factors are:
    Δ Y = 0.3056 for Y;
    Δ Z = 0.0003 for Z;
    Δ E = 1.0000 for E; and
    Δ G = 1.9216 for G.
The path for Case 2 can now be built.
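The step-size calculation above can be sketched compactly: each coefficient is divided by the base factor's coefficient over its chosen step size. A minimal Python check using the coefficient magnitudes from the fitted Case 2 equation:

```python
# Coded step sizes along the steepest path: scale each coefficient by
# the base factor's (E's) coefficient divided by its chosen step size.
coeffs = {"Y": 9.5079, "Z": 0.2023, "E": 31.1119, "G": 29.8927}
base_factor, base_step = "E", 1.0000

scale = coeffs[base_factor] / base_step
steps = {f: round(c / scale, 4) for f, c in coeffs.items()}
print(steps)  # {'Y': 0.3056, 'Z': 0.0065, 'E': 1.0, 'G': 0.9608}
```

These match the coded step sizes listed above; the natural step sizes then follow from each factor's unit conversion.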
Application of MKSR to Case 2
1. Assumption of behavior in Case 2. Table 6 shows the steepest path from iteration 0 to 14.
Next, Figure 6 shows the behavior of the response in its steepest path from t = 1 to t = 14 .
2. Significance test in Case 2. As in Case 1, individual experimentation over the steepest path stops when y(n_{i+1}) − y(n_i) ≤ a.
3. Estimation of limits a and b in Case 2. Limits a = −3.668 and b = 3.668 are calculated using (2) with κ = 30, as shown in Table 7. These limits identify the moment when (3) is fulfilled.
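The entries of Table 7 suggest the limit is a product of √2, σ_ε and a standard normal quantile, with the quantile value −1.83 corresponding to a tail probability of about 1/30. A sketch under that assumption (the exact form of (2) is defined earlier in the paper, so treat this as an approximation):

```python
import math
from statistics import NormalDist

# Assumed form: a = sqrt(2) * sigma_eps * Phi^{-1}(1/kappa), with the
# quantile argument chosen so that Phi^{-1} is about -1.83 as in Table 7.
kappa = 30
sigma_eps = 1.41

a = math.sqrt(2) * sigma_eps * NormalDist().inv_cdf(1 / kappa)
b = -a
print(round(a, 2), round(b, 2))
```

This reproduces the Table 7 limits to within rounding of the tabulated intermediate values.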
4. Application of decision rule in Case 2. The decision to stop occurs when
y(n_i) − y(n_{i−1}) ≤ −3.67. The behavior is shown in Table 8.
5. Selection of optimum response in Case 2. The best performance of the MKSR in Case 2 is the response of 232.75 units, because it stopped at t * such that Y ( t * ) = max l = 1 , , t { Y ( l ) } .
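The MKSR walk just described can be restated compactly: scan the path, stop at the first difference at or below the limit a, and return the best response observed up to the stop. A sketch using the first Case 2 responses (Table 6) and the Table 7 limit:

```python
def mksr_stop(responses, a):
    """Walk the steepest-ascent responses and stop once the first
    difference y(n_i) - y(n_{i-1}) falls to a or below; return the
    stopping iteration and the best response seen up to the stop."""
    for t in range(1, len(responses)):
        if responses[t] - responses[t - 1] <= a:
            return t, max(responses[: t + 1])
    return len(responses) - 1, max(responses)

# First responses of the Case 2 path and the limit a = -3.67
y = [163.41, 212.22, 232.75, 226.16]
stop_t, best = mksr_stop(y, a=-3.67)
print(stop_t, best)  # 3 232.75
```

The stop at iteration 3 with a best response of 232.75 matches Table 8.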
Application of RPR to Case 2
1. Assumption of behavior in Case 2. Figure 7 shows the behavior of the response along the steepest path.
2. Estimation of parameters θ 0 and θ 1 in Case 2. These estimations are shown next:
θ0 = (163.23 + 162.70 + 162.44 + 162.02 + 162.67 + 163.22) / 6 = Y(0) = 162.71.
This next estimation is performed the same way as in the previous case (Case 1):
θ1 = √((9.5079)² + (0.2023)² + (31.1119)² + (29.8927)²) = 44.18.
3. Recursive estimation of parameters in Case 2. Table 9 shows the recursive estimation with P_0 = t_prior = 10.
4. Application of decision rule in Case 2. The “Status” column in Table 9 shows the moment when (9) is fulfilled.
5. Selection of optimum response in Case 2. Table 9 illustrates the decision to stop, which occurred at iteration 4. However, the selection falls at t * such that Y ( t * ) = max l = 1 , , t { Y ( l ) } ; therefore, the best response is 232.75 units.
Application of RPRE to Case 2
1. Assumption of behavior in Case 2. Figure 8 shows the behavior of the response along the steepest path under both quadratic and non-quadratic assumptions.
The slope is b = 44.181.
The indicator SNR = 95.005.
This makes the value N = 3, which means that computations shall start for t < N − 1 ⟹ t < 3 − 1 ⟹ t < 2.
2. Estimation of parameters θ 0 , θ 1 and θ 2 when t = 0 in Case 2.
Results for parameters θ 0 , θ 1 and θ 2 when t = 0 are presented next:
θ0(0) = 163.4100;    θ1(0) = 44.1810;    θ2(0) = −2.2091.
3. Recursive estimation of θ2 and P_t when t < N − 1 in Case 2.
Table 10 shows the recursive procedure of θ_i and P_t when t < N − 1; P_0 follows (19) and t_prior = 10.
4. Application of decision rule in Case 2. If the rule is not fulfilled, the modifications for t ≥ N − 1 are applied.
In this case, the inequality d_tᵀθ(t) < −1.645 σ_ε √(d_tᵀ P_t d_t) is not fulfilled, so the search continues for t ≥ N − 1, as seen in Table 11.
5. Selection of optimum response in Case 2. Table 11 shows the decision to stop, which occurred at iteration 3. Nevertheless, the best response is 232.75 units, given t * in Y ( t * ) = max l = 1 , , t { Y ( l ) } .

4. Discussion

Now that the three SR procedures have been applied to both cases, the discussion and comparison of their performance follow. In Case 1, the method from [17] needed 14 iterations to stop, giving an optimum response of 6.37. The method from [18] stopped at the fifth iteration, obtaining an optimum response of 4.62. The method from [19] produced the same result as the previous one. In Case 2, Ref. [17] needed 3 iterations to stop, delivering a response of 232.75; Ref. [18] stopped at the fourth iteration, also with 232.75; and Ref. [19] obtained 232.75 after 3 iterations. The most important piece of information here is the maximized response. For these cases, the number of iterations is not as critical as the response obtained, because the iteration count can be adapted: the larger the step size along the steepest path, the faster the maximum response is reached. The intention, however, is to carefully analyze the experiment and compare the obtained outcomes.
Table 12 summarizes the results. Furthermore, Figure 9 shows the best results for each of the two analyzed cases.
The behavior of the responses of the individual experiments differs considerably between the two study cases. In the graph on the right side of Figure 9, the shape of the response resembles a simple parametric function, comparable to a quadratic or even a logarithmic model. In such functions it is relatively easy to find a maximum or minimum point, which becomes evident in the way the three SRs located the maximum at the same place. The issue becomes complicated in a case such as the graph on the left side of Figure 9, in which the response does not follow a fixed trend. On the contrary, it shows random variations up and down, so it cannot easily be matched to a specific behavior; it can then be said that the case belongs to a stochastic model. An interesting phenomenon occurs with this type of behavior: the performances of the SRs do not coincide and differ from one another. In short, the evidence shown here suggests that the performances of the SRs are similar when the response obeys a parametric function, and that they differ when the response exhibits stochastic behavior.
In the particular case of this analysis and under the proposed conditions, the MKSR seems to have the better performance, while the RPR and the RPRE show a lower output. This situation is not strange; it is explained by the assumptions of each rule. The MKSR assumes normally distributed behavior, the RPR assumes a quadratic parametric function, and the RPRE accommodates both quadratic and non-quadratic behavior. None of these assumptions easily adjusted to the responses of these individual experiments; however, the MKSR adjusted well enough to give a higher performance.
As previously mentioned, an important task here was to conduct experimentation to simulate behaviors for these three applied stopping procedures. This allowed the possibility of comparison and selection of the procedure with the best performance. The results, with responses and number of needed iterations, are visualized above.
At this point, the complexity of the behavior of the trajectory is particularly important, because it becomes necessary not only to infer using known parametric functions, but also to consider the analysis of these stochastic behaviors. Fortunately, there are several stochastic models that can characterize the behavior of a steepest ascent trajectory. The Wiener process may be a good option, as it has non-monotone increments with drift and diffusion parameters that can characterize the randomness of a trajectory, which is the case for the steepest ascent. Stopping rules can be implemented for the stochastic process by considering specific parametric functions in the drift to define a recursive strategy for determining the stopping time.
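A drifted Wiener process of the kind suggested above is straightforward to simulate on a discrete grid. A minimal sketch; the drift and diffusion values and the starting point below are illustrative only, not fitted to either case:

```python
import random

def wiener_path(mu, sigma, n_steps, dt=1.0, y0=0.0, seed=7):
    """Simulate Y_t = y0 + mu*t + sigma*W_t on a discrete grid of n_steps
    increments. mu (drift) and sigma (diffusion) are illustrative values."""
    rng = random.Random(seed)
    path = [y0]
    for _ in range(n_steps):
        increment = mu * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        path.append(path[-1] + increment)
    return path

# A drifting, non-monotone trajectory loosely resembling the Case 1 behavior
path = wiener_path(mu=0.1, sigma=0.4, n_steps=14, y0=4.62)
print(len(path), round(max(path), 2))
```

Fitting mu and sigma to an observed trajectory, and placing a parametric function in the drift, would be the starting point for the stopping strategy outlined above.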

5. Conclusions

As mentioned in [21], several industrial activities follow Research and Development (R&D) schemes to develop new products or to improve existing ones by analyzing different factorial designs, considering experimental results obtained with repetitions, preliminary results and deviations. The main goal is response improvement, seen as the process of obtaining the optimum experimental result on an initially constructed optimization strategy by applying different methods, procedures or rules.
In this case, the applied procedures were DOE, the SADM and, finally, three SRs (the MKSR, the RPR and the RPRE). The first of these, the MKSR, assumes normality in the data, meaning that linear behavior is the core of the data analysis. The RPR, on the other hand, assumes parabolic behavior, meaning that a quadratic function underlies the procedure. The RPRE, however, is reported to be more robust, as it is able to work properly with normality, non-normality, and quadratic and non-quadratic behavior.
The nature and conditions of the case study and the developed experiment will always be critical for the development of each of the procedures and rules. In this case, the designed experiment considered a full factorial design with seven factors, two levels, six center points and two replicates, giving a total of 262 experimental runs per case (2⁷ points × 2 replicates + 6 center points). The analysis of this factorial design produced the linear equation in coded units as the core piece of information for the steepest path. Once the path was built, the decision to stop was critical, because it determined the efficiency of the applied stopping procedure.
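The run count follows directly from the design's arithmetic:

```python
# Run count for the full factorial design used in each case:
# 2^7 factorial points, 2 replicates, plus 6 center points.
levels, factors, replicates, center_points = 2, 7, 2, 6

runs = levels ** factors * replicates + center_points
print(runs)  # 262
```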
The aim was to maximize the response; other experimentation schemes should be proposed if the intention is to minimize the response or to reach a target value. The illustrated SRs assumed a specific parametric function as their theoretical base.
Considering this, and as previously mentioned, research shall continue toward new exploration schemes relating stochastic processes, with the possibility of modifying the parametric base to better characterize the random behavior of the improvement trajectory in searches for minimum, maximum or target values.

Author Contributions

Conceptualization, P.E.G.-N. and L.A.R.-P.; methodology, L.A.R.-P.; validation, L.A.R.-P.; data curation, P.E.G.-N.; formal analysis, P.E.G.-N.; investigation, P.E.G.-N.; supervision, L.A.R.-P.; resources, P.E.G.-N.; writing—original draft preparation, P.E.G.-N.; writing—review and editing, L.A.R.-P., L.C.M.-G. and I.J.C.P.-O.; visualization, L.C.M.-G. and I.J.C.P.-O.; funding acquisition, P.E.G.-N. and L.A.R.-P. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the Autonomous University of Ciudad Juárez and the Technological University of Chihuahua via the Teacher Professional Development Program through the special program for graduate studies.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Weichert, D.; Link, P.; Stoll, A.; Rüping, S.; Ihlenfeldt, S.; Wrobel, S. A review of machine learning for the optimization of production processes. Int. J. Adv. Manuf. Technol. 2019, 104, 1889–1902. [Google Scholar] [CrossRef]
  2. Rafiee, K.; Feng, Q.; Coit, D.W. Reliability assessment of competing risks with generalized mixed shock models. Reliab. Eng. Syst. Saf. 2017, 159, 1–11. [Google Scholar] [CrossRef]
  3. Gvozdović, N.; Božić-Tomić, K.; Marković, L.; Marković, L.M.; Koprivica, S.; Kovačević, M.; Jovic, S. Application of the Multi-Criteria Optimization Method to Repair Landslides with Additional Soil Collapse. Axioms 2022, 11, 182. [Google Scholar] [CrossRef]
  4. Yeo, J.; Kang, M. Proximal Linearized Iteratively Reweighted Algorithms for Nonconvex and Nonsmooth Optimization Problem. Axioms 2022, 11, 201. [Google Scholar] [CrossRef]
  5. Balafkandeh, S.; Mahmoudi, S.M.S.; Gholamian, E. Design and tri-criteria optimization of an MCFC based energy system with hydrogen production and injection: An effort to minimize the carbon emission. Process. Saf. Environ. Prot. 2022, 166, 299–309. [Google Scholar] [CrossRef]
  6. Juangphanich, P.; De Maesschalck, C.; Paniagua, G. Turbine Passage Design Methodology to Minimize Entropy Production—A Two-Step Optimization Strategy. Entropy 2019, 21, 604. [Google Scholar] [CrossRef] [PubMed]
  7. Oleksy-Sobczak, M.; Klewicka, E. Optimization of media composition to maximize the yield of exopolysaccharides production by Lactobacillus rhamnosus strains. Probiotics Antimicrob. Proteins 2020, 12, 774–783. [Google Scholar] [CrossRef] [PubMed]
  8. Delavar, H.; Naderifar, A. Optimization of ethylene dichloride (EDC) and ethane concentrations to maximize catalytic ethylene oxide production rate and yield: Experimental study and modeling. Chem. Eng. Sci. 2022, 259, 117803. [Google Scholar] [CrossRef]
  9. Bryant, P.T. Problem-Solving. In Augmented Humanity; Springer: Berlin/Heidelberg, Germany, 2021; pp. 103–137. [Google Scholar]
  10. Apsemidis, A.; Psarakis, S.; Moguerza, J.M. A review of machine learning kernel methods in statistical process monitoring. Comput. Ind. Eng. 2020, 142, 106376. [Google Scholar] [CrossRef]
  11. Montgomery, D.C. Design and Analysis of Experiments; John Wiley & Sons: Hoboken, NJ, USA, 2017. [Google Scholar]
  12. Sheoran, A.J.; Kumar, H. Fused Deposition modeling process parameters optimization and effect on mechanical properties and part quality: Review and reflection on present research. Mater. Today Proc. 2020, 21, 1659–1672. [Google Scholar] [CrossRef]
  13. Myers, R.H.; Montgomery, D.C.; Anderson-Cook, C.M. Response Surface Methodology: Process and Product Optimization Using Designed Experiments; John Wiley & Sons: Hoboken, NJ, USA, 2016. [Google Scholar]
  14. Karimifard, S.; Moghaddam, M.R.A. Application of response surface methodology in physicochemical removal of dyes from wastewater: A critical review. Sci. Total Environ. 2018, 640, 772–797. [Google Scholar] [CrossRef] [PubMed]
  15. De Oliveira, L.G.; de Paiva, A.P.; Balestrassi, P.P.; Ferreira, J.R.; da Costa, S.C.; da Silva Campos, P.H. Response surface methodology for advanced manufacturing technology optimization: Theoretical fundamentals, practical guidelines, and survey literature review. Int. J. Adv. Manuf. Technol. 2019, 104, 1785–1837. [Google Scholar] [CrossRef]
  16. Chavan, S.L.; Talange, D.B. Statistical design of experiment approach for modeling and optimization of PEM fuel cell. Energy Sources Part A Recovery Util. Environ. Eff. 2018, 40, 830–846. [Google Scholar] [CrossRef]
  17. Myers, R.; Khuri, A. A new procedure for steepest ascent. Commun. Stat.-Theory Methods 1979, 8, 1359–1376. [Google Scholar] [CrossRef]
  18. Miró-Quesada, G.; Del Castillo, E. An enhanced recursive stopping rule for steepest ascent searches in response surface methodology. Commun. Stat.-Simul. Comput. 2007, 33, 201–228. [Google Scholar] [CrossRef]
  19. Del Castillo, E. Process Optimization: A Statistical Approach; Springer Science & Business Media: New York, NY, USA, 2007; Volume 105. [Google Scholar]
  20. Delacre, M.; Lakens, D.; Mora, Y.; Leys, C. Taking Parametric Assumptions Seriously Arguments for the Use of Welch’s F-test instead of the Classical F-test in One-way ANOVA. Int. Rev. Soc. Psychol. 2019, 32, 1–13. [Google Scholar] [CrossRef]
  21. Popović, B. Planning, analyzing and optimizing experiments. J. Eng. Manag. Compet. (JEMC) 2020, 10, 15–30. [Google Scholar] [CrossRef]
Figure 1. Flow chart for the proposed method of the manuscript from the consideration of the case study to the selection of the rule with the best performance.
Figure 2. Flow chart that relates the steps needed to develop the MKSR, RPR and RPRE.
Figure 3. Graph with the steepest ascent path of the response Y with a straight line, which assumes normality in Case 1.
Figure 4. Graph with the steepest ascent path of the response y with a curved line, which assumes quadratic behavior in Case 1.
Figure 5. Graph with steepest ascent path of the response y with both straight and curved lines, which assumes both quadratic and non-quadratic behavior in Case 1.
Figure 6. Graph with the steepest ascent path of the response Y with a straight line, which assumes normality in Case 2.
Figure 7. Graph with the steepest ascent path of y with a curved line, which assumes quadratic behavior in Case 2.
Figure 8. Graph with the steepest ascent path of y with straight and curved lines, which assumes both quadratic and non-quadratic behavior in Case 2.
Figure 9. Graphs with paths from both cases comparing the three SRs.
Table 1. Implementation of steepest ascent path for Case 1.

t | P | Q | S | U | y
0 | 12.30 | 76.00 | 287.50 | 75.00 | 4.62
1 | 12.20 | 77.00 | 287.20 | 80.00 | 4.44
2 | 12.20 | 78.00 | 286.90 | 85.00 | 4.51
3 | 12.20 | 79.00 | 286.70 | 90.00 | 4.43
4 | 12.10 | 80.00 | 286.40 | 95.00 | 4.05
5 | 12.10 | 81.00 | 286.10 | 100.00 | 4.36
6 | 12.10 | 82.00 | 285.80 | 105.00 | 4.48
7 | 12.10 | 83.00 | 285.50 | 110.00 | 5.16
8 | 12.00 | 84.00 | 285.30 | 115.00 | 4.91
9 | 12.00 | 85.00 | 285.00 | 120.00 | 5.12
10 | 12.00 | 86.00 | 284.70 | 125.00 | 5.13
11 | 12.00 | 87.00 | 284.40 | 130.00 | 4.85
12 | 11.90 | 88.00 | 284.10 | 135.00 | 5.10
13 | 11.90 | 89.00 | 283.90 | 140.00 | 6.37
14 | 11.90 | 90.00 | 283.60 | 145.00 | 4.87
Table 2. Obtained limits of the MKSR in Case 1.

a | b | Φ⁻¹(1/2κ) | σ_ε | √2
−0.74 | 0.74 | −1.83 | 0.28 | 1.41
Table 3. Obtained results from MKSR procedure in Case 1.

t | P | Q | S | U | y | y(n_{i+1}) − y(n_i) | Status
0 | 12.30 | 76.00 | 287.50 | 75.00 | 4.62 | - | Starts
1 | 12.20 | 77.00 | 287.20 | 80.00 | 4.44 | −0.18 | Continues
2 | 12.20 | 78.00 | 286.90 | 85.00 | 4.51 | 0.07 | Continues
3 | 12.20 | 79.00 | 286.70 | 90.00 | 4.43 | −0.08 | Continues
4 | 12.10 | 80.00 | 286.40 | 95.00 | 4.05 | −0.38 | Continues
5 | 12.10 | 81.00 | 286.10 | 100.00 | 4.36 | 0.31 | Continues
6 | 12.10 | 82.00 | 285.80 | 105.00 | 4.48 | 0.12 | Continues
7 | 12.10 | 83.00 | 285.50 | 110.00 | 5.16 | 0.68 | Continues
8 | 12.00 | 84.00 | 285.30 | 115.00 | 4.91 | −0.25 | Continues
9 | 12.00 | 85.00 | 285.00 | 120.00 | 5.12 | 0.21 | Continues
10 | 12.00 | 86.00 | 284.70 | 125.00 | 5.13 | 0.01 | Continues
11 | 12.00 | 87.00 | 284.40 | 130.00 | 4.85 | −0.28 | Continues
12 | 11.90 | 88.00 | 284.10 | 135.00 | 5.10 | 0.25 | Continues
13 | 11.90 | 89.00 | 283.90 | 140.00 | 6.37 | 1.27 | Continues
14 | 11.90 | 90.00 | 283.60 | 145.00 | 4.87 | −1.50 | Stops
Table 4. Obtained results of the RPR procedure in Case 1.

t | y(t) | θ2(t) | P_t | θ1 + 2θ2(t)·t | σ²_{θ1+2θ2(t)·t} | −3σ_{θ1+2θ2(t)·t} | Status
0 | 4.62 | −0.01 | 10 | 0.22 | 0.00 | 0.00 | Starts
1 | 4.44 | −0.05 | 0.91 | 0.12 | 0.30 | −1.64 | Continues
2 | 4.51 | −0.05 | 0.06 | 0.01 | 0.08 | −0.83 | Continues
3 | 4.43 | −0.06 | 0.01 | −0.11 | 0.03 | −0.52 | Continues
4 | 4.05 | −0.07 | 0.00 | −0.31 | 0.01 | −0.37 | Continues
5 | 4.36 | −0.05 | 0.00 | −0.28 | 0.00 | −0.27 | Stops
Table 5. Recursive procedure of θ_i and P_t when t < N − 1 (P_t is listed row by row).

t | Y(t) | θ0 | θ1 | θ2 | P_t | d_tᵀθ(t) | −1.645 σ_ε √(d_tᵀ P_t d_t)
0 | 4.62 | 4.62 | 0.23 | −0.01 | [1 0 0; 0 1 0; 0 0 10] | 0.23 | NA
1 | 4.44 | 4.59 | 0.19 | −0.31 | [0.90 −0.10 −0.80; −0.10 0.90 −0.80; −0.80 −0.80 2.30] | −0.43 | −0.91
2 | 4.51 | 4.51 | 0.15 | −0.08 | [0.70 −0.20 −0.10; −0.20 0.90 −0.40; −0.10 −0.40 0.30] | −0.19 | −0.53
3 | 4.43 | 4.49 | 0.12 | −0.05 | [0.60 −0.30 0.00; −0.30 0.70 −0.20; 0.00 −0.20 0.10] | −0.18 | −0.38
4 | 4.05 | 4.50 | 0.14 | −0.06 | [0.60 −0.30 0.00; −0.30 0.60 −0.10; 0.00 −0.10 0.00] | −0.28 | −0.30
5 | 4.36 | 4.50 | 0.02 | −0.02 | [0.60 −0.30 0.00; −0.30 0.40 −0.10; 0.00 −0.10 0.00] | −0.48 | −0.25
Table 6. Obtained results for the steepest ascent path of Case 2.

t | Y | Z | E | G | y
0 | −1.00 | 1.50 | 2.00 | 11.00 | 163.41
1 | −0.70 | 1.40 | 3.00 | 12.90 | 212.22
2 | −0.40 | 1.40 | 4.00 | 14.80 | 232.75
3 | −0.10 | 1.40 | 5.00 | 16.80 | 226.16
4 | 0.20 | 1.40 | 6.00 | 18.70 | 191.12
5 | 0.50 | 1.40 | 7.00 | 20.60 | 130.32
6 | 0.80 | 1.40 | 8.00 | 22.50 | 42.51
7 | 1.10 | 1.40 | 9.00 | 24.50 | −82.49
8 | 1.40 | 1.40 | 10.00 | 26.40 | −233.67
9 | 1.80 | 1.40 | 11.00 | 28.30 | −405.47
10 | 2.10 | 1.40 | 12.00 | 30.20 | −573.87
11 | 2.40 | 1.40 | 13.00 | 32.10 | −849.67
12 | 2.70 | 1.40 | 14.00 | 34.10 | −1094.76
13 | 3.00 | 1.40 | 15.00 | 36.00 | −1356.20
14 | 3.30 | 1.40 | 16.00 | 37.90 | −1605.82
Table 7. Obtained limits of the MKSR in Case 2.

a | b | Φ⁻¹(1/2κ) | σ_ε | √2
−3.67 | 3.67 | −1.83 | 1.41 | 1.41
Table 8. Results for the MKSR procedure in Case 2.

t | Y | Z | E | G | y | y(n_{i+1}) − y(n_i) | Status
0 | −1.00 | 1.50 | 2.00 | 11.00 | 163.41 | - | Starts
1 | −0.70 | 1.40 | 3.00 | 12.90 | 212.22 | 48.81 | Continues
2 | −0.40 | 1.40 | 4.00 | 14.80 | 232.75 | 20.54 | Continues
3 | −0.10 | 1.40 | 5.00 | 16.80 | 226.16 | −6.60 | Stops
Table 9. Obtained results for the RPR procedure in Case 2.

t | y(t) | θ2(t) | P_t | θ1 + 2θ2(t)·t | σ²_{θ1+2θ2(t)·t} | −3σ_{θ1+2θ2(t)·t} | Status
0 | 163.41 | −2.21 | 10.00 | 44.18 | 0.00 | 0.00 | Starts
1 | 212.22 | 4.64 | 0.91 | 53.46 | 7.27 | −8.09 | Continues
2 | 232.75 | −3.99 | 0.06 | 28.23 | 1.87 | −4.10 | Continues
3 | 226.16 | −7.03 | 0.01 | 1.98 | 0.73 | −2.57 | Continues
4 | 191.12 | −8.65 | 0.00 | −25.02 | 0.36 | −1.80 | Stops
Table 10. Recursive procedure of θ_i and P_t when t < N − 1 (P_t is listed row by row).

t | y(t) | θ0 | θ1 | θ2 | P_t | d_tᵀθ(t) | −1.645 σ_ε √(d_tᵀ P_t d_t)
0 | 163.41 | 163.41 | 44.18 | −2.21 | [1 0 0; 0 1 0; 0 0 10] | 44.18 | −0.70
1 | 212.22 | 163.94 | 44.71 | 3.05 | [0.90 −0.10 −0.80; −0.10 0.90 −0.80; −0.80 −0.80 2.30] | 50.80 | −1.86
Table 11. Recursive procedure for t ≥ N − 1.

t | Y(t) | Y_N(t) | b_N | −1.645 σ_ε √v_N
2 | 232.75 | 163.41, 212.22, 232.75 | 6.40 | −1.78
3 | 226.16 | 212.22, 232.75, 226.16 | −20.16 | −1.78
Table 12. Performance comparison for the three SRs.

SR Procedure (Author and Year) | Case 1: Iterations to Stop | Case 1: Best Response | Case 2: Iterations to Stop | Case 2: Best Response
MKSR, R. Myers and Khuri (1979) | 14 | 6.37 | 3 | 232.75
RPR, Miró-Quesada and Del Castillo (2004) | 5 | 4.62 | 4 | 232.75
RPRE, Del Castillo (2007) | 5 | 4.62 | 3 | 232.75