Differential Evolution with Estimation of Distribution for Worst-Case Scenario Optimization

Worst-case scenario optimization deals with the minimization of the maximum output over all scenarios of a problem, and it is usually formulated as a min-max problem. Employing nested evolutionary algorithms to solve such problems requires numerous function evaluations. This work proposes a differential evolution with an estimation of distribution algorithm. The algorithm has a nested form, where a differential evolution is applied for both the design- and scenario-space optimization. To reduce the computational cost, we estimate the distribution of the best worst-case solutions for the best solutions found so far. The probabilistic model is used to sample part of the initial population of the scenario-space differential evolution, exploiting a priori knowledge from previous generations. The method is compared with a state-of-the-art algorithm on both benchmark problems and an engineering application, and the related results are reported.


Introduction
Many real-world optimization problems, including engineering design optimization, typically involve uncertainty that needs to be considered for a robust solution to be found. The worst-case scenario optimization refers to obtaining the solution that will perform best under the worst possible conditions. This approach gives the most conservative solution but also the most robust solution to the problem under uncertainty.
The formulation of the problem that arises is a special case of a bilevel optimization problem (BOP), where one optimization problem has another optimization problem in its constraints [1,2]. In the worst-case scenario case, the maximization of the function in the uncertain space is nested in the minimization in the design space, leading to a min-max optimization problem. Therefore, optimization can be achieved in a hierarchical way.
Min-max optimization has been solved by classical methods such as mathematical programming [3], branch-and-bound algorithms [4] and approximation methods [5]. These methods have limited application as they require simplifying assumptions about the fitness function, such as linearity and convexity.
In recent years, evolutionary algorithms (EAs) have been developed to solve min-max optimization problems. Using EAs mitigates the problem of making specific assumptions about the underlying problem, as they are population-based and they directly use the objectives. In this way, they can handle mathematically intractable problems that do not follow specific mathematical properties.
A very popular approach to solve min-max problems with EAs is the co-evolutionary approach, where the populations of the design and scenario space co-evolve. In [6], a co-evolutionary genetic algorithm was developed, while in [7], particle swarm optimization was used as the evolution strategy. In such approaches, the optimization search over the design and scenario space is parallelized, significantly reducing the number of function evaluations. In general, they manage to successfully solve symmetrical problems but perform poorly on asymmetrical problems by looping over bad solutions, due to the red queen effect [8]. As the condition of symmetry does not hold in the majority of problems (a definition can be found in Section 2), they are unsuitable for most problems.
One approach to mitigate this problem is to apply a nested structure, solving the problem hierarchically, as, e.g., in [9], where a nested particle swarm optimization is applied. This leads to a prohibitively increased computational cost, as the design and scenario spaces are infinite for continuous problems. Min-max optimization problems solved as bilevel problems with bilevel evolutionary algorithms were presented in [10], where three algorithms known to perform well on bilevel problems were tested on a min-max test function: the BLDE [11], a completely nested algorithm; the BLEAQ [1], an evolutionary algorithm that employs quadratic approximation in the mappings of the two levels; and the BLCMAES [12], a specialized bilevel CMA-ES. They showed good performance in most cases but required a high number of function evaluations. A recently proposed differential evolution (DE) with a bottom-boosting scheme that does not use surrogates was shown to reach superior accuracy, though the number of function evaluations (FEs) needed is still relatively high [13].
Using a surrogate model can lower the computational cost. Surrogate models are approximation functions of the actual evaluation and are quicker and easier to evaluate. Surrogate-assisted EAs have been developed for min-max optimization, such as in [14]. In that work, a surrogate model is built with a Gaussian process to approximate the decision variables and the objective value, under the assumption that evaluating the worst-case scenario performance is expensive. This can be problematic when the real function evaluation is also expensive. In [15], a Kriging-based optimization algorithm is proposed, where Kriging models the objective function as a Gaussian process. A recently proposed surrogate-assisted EA applying multitasking can be found in [16], where a radial basis function is trained and used as a surrogate.
As already explained, there are two ways so far to reduce the computational cost when using EAs for min-max problems: the co-evolutionary approach and the use of surrogates. Both come with disadvantages: the former cannot be applied to all problems, and the latter's final solution may lack accuracy.
DE [17] is one of the most popular EAs because of its efficiency for global optimization. The estimation of distribution algorithm (EDA) is a newer population-based algorithm that relies on estimating a distribution rather than on crossover and mutation, and exhibits good convergence behavior [18]. Hybrid DE-EDAs have been proposed to combine the good exploration and exploitation characteristics of each in several optimization problems, such as in [19] for solving a job-shop scheduling problem and in [20] for the multi-point dynamic aggregation problem. An EDA combined with particle swarm optimization has been developed for bilevel optimization problems, where it served as a hybrid algorithm for the upper level [21].
In this paper, we propose a DE with EDA for solving min-max optimization problems. The algorithm has a nested form, where a DE is applied for both the design and scenario space optimization. To reduce the computational cost, instead of using surrogates, we estimate the distribution of the best worst solution for the best solutions found so far. Then, this distribution is passed to a scenario space optimization, and a part of the population is sampled from it as a priori knowledge. That way, there is a higher probability that the population will contain the best solution, and there is no need for training a surrogate model. We also limit the search for the scenario space. If one solution found is already worse than the best worst scenario, it is skipped.
The rest of this paper is organized as follows: Section 2 introduces the basic concepts of the worst-case scenario and min-max optimization. A brief description of the general DE and EDA algorithm is provided in Section 3, along with a detailed description of the proposed method. In Section 4, we describe the test functions and the parameter settings used in our experiments. In Section 5, the results are presented and discussed. Finally, Section 6 concludes our paper.

Background
In this section, the definitions of the deterministic optimization problem and the worst-case scenario optimization as an instance of robust optimization are presented.

Definition of Classical Optimization Problem
A typical optimization problem is the problem of minimizing an objective over a set of decision variables subject to a set of constraints. The generic mathematical form of an optimization problem is:

min_x f(x) subject to g(x) ≤ 0, x_L ≤ x ≤ x_U,

where x ∈ R^n is a decision vector of dimension n, f(x) is the objective function and g(x) are the inequality constraints. Global optimization techniques solve this problem, giving a deterministic optimal design; usually, no uncertainties are considered. This approach, though widely used, is not very useful when a designer desires the optimal solution given the uncertainties of the system. Therefore, robust optimization approaches are applied [22].

Definition of Worst Case Scenario Optimization Problem
When one seeks the most robust solution under uncertainties, the worst-case scenario approach is applied. Worst-case scenario optimization deals with minimizing the maximum output in the scenario space of a problem, and it is usually formulated as a min-max problem. The general worst-case scenario optimization problem in its min-max formulation is described as:

min_{x∈X} max_{y∈Y} f(x, y),

where X ⊆ R^m represents the set of possible solutions and Y ⊆ R^n the set of possible scenarios. The problem is a special instance of a bilevel optimization problem (BOP), where one optimization problem (the upper level, UL) has another optimization problem in its constraints (the lower level, LL). The reader can find more about BOPs in [1]. Here, the UL and LL share the same objective function f(x, y), where the UL is optimizing with respect to the variables x of the design space and the lower level is optimizing with respect to the uncertain parameters y of the scenario space. If the upper-level problem is a minimization problem, then the worst-case scenario given by the uncertain variables y of a solution x can be found by maximizing f(x, y). From now on, we will refer to the design space as the upper level (UL) and the scenario space as the lower level (LL) interchangeably. In Figure 1, a general sketch of the min-max optimization problem as a bilevel problem is shown, where for every fixed x in the UL, a maximization problem over the scenario space y is activated in the LL. When the problem satisfies the condition

min_{x∈X} max_{y∈Y} f(x, y) = max_{y∈Y} min_{x∈X} f(x, y),

then the problem is symmetrical. Problems that satisfy the symmetry condition are simpler to solve, since the feasible regions of the upper and lower level are independent.
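The symmetry condition can be made concrete with a small, purely illustrative Python sketch (not part of the paper, whose code is in Matlab): it evaluates both orderings of min and max on a coarse grid, using a simple saddle function as a stand-in for the symmetric case (the paper's f8 is a saddle of this kind, though its exact form is not reproduced here) and the two-plane function f9 from Section 4 for the asymmetric case.

```python
def minimax(f, xs, ys):
    # min over x of the worst (max over y) outcome
    return min(max(f(x, y) for y in ys) for x in xs)

def maximin(f, xs, ys):
    # max over y of the best (min over x) outcome
    return max(min(f(x, y) for x in xs) for y in ys)

sym = lambda x, y: (x - 5) ** 2 - (y - 5) ** 2       # saddle: symmetric case
asym = lambda x, y: min(3 - 0.2 * x + 0.3 * y,
                        3 + 0.2 * x - 0.1 * y)       # two planes: the paper's f9
grid = [i * 0.5 for i in range(21)]                  # 0.0, 0.5, ..., 10.0

print(minimax(sym, grid, grid), maximin(sym, grid, grid))    # 0.0 0.0
print(minimax(asym, grid, grid), maximin(asym, grid, grid))  # 3.0 2.5
```

For the saddle, both orderings agree, so the problem is symmetrical; for the two-plane function they differ (3.0 vs. 2.5), which is precisely why approaches that implicitly assume symmetry fail on such problems.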

Algorithm Method
In this section, we briefly describe the differential evolution and estimation of distribution algorithms. Then, we explain the proposed algorithm for obtaining worst-case scenario optimization.

Differential Evolution (DE)
DE [17] is a population-based metaheuristic search algorithm and falls under the category of evolutionary algorithm methods. Following the standard schema of such methods, it is based on an evolutionary process, where a population of candidate solutions goes through mutation, crossover, and selection operations. The main steps of the algorithm can be seen below:

1.
Initialization: A population of NPop individuals is randomly initialized. Each individual is represented by a D-dimensional parameter vector X i,g = (x 1 i,g , x 2 i,g , ..., x D i,g ), where i = 1, 2, ..., NPop and g = 1, 2, ..., MaxGen, with MaxGen the maximum number of generations. Each vector component is subject to the lower and upper bounds X min and X max. The initial values of the i-th individual are generated as:

X i,1 = X min + rand(0, 1) · (X max − X min),

where rand(0,1) is a uniformly distributed random number between 0 and 1.

2.
Mutation: A new individual is generated by adding the weighted difference between two randomly selected population members to a third member. This process is expressed as:

V i,G = X r1,G + F · (X r2,G − X r3,G),

where V i,G is the mutant vector, X is an individual, r1, r2, r3 are randomly chosen integers within the range [1, NPop] with r1 ≠ r2 ≠ r3 ≠ i, G denotes the current generation, and F is the scale factor, usually a positive real number between 0.2 and 0.8. F controls the rate at which the population evolves.

3.
Crossover: After mutation, the binomial crossover operation is applied. The mutant individual V i,G is recombined with the parent vector X i,G in order to generate the offspring U i,G. The elements of the offspring are inherited from X i,G or V i,G depending on a parameter called the crossover probability, C r ∈ [0, 1], as follows:

U t i,G = V t i,G if rand ≤ C r or t = random(i); otherwise U t i,G = X t i,G,

where rand ∈ (0, 1) is a uniformly generated number, random(i) ∈ {1, ..., D} is a randomly chosen index, which ensures that V i,G contributes at least one element to U i,G, and t = 1, ..., D denotes the t-th element of the individual's vector.

4.
Selection: The selection operation is a competition between each individual X i,G and its offspring U i,G, and defines which individual survives into the next generation.
The winner is the one with the better fitness value. For minimization, the operation is expressed as:

X i,G+1 = U i,G if f(U i,G) ≤ f(X i,G); otherwise X i,G+1 = X i,G.

The above steps of mutation, crossover, and selection are repeated each generation until the termination criteria have been met.
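The four steps above can be condensed into a bare-bones DE/rand/1/bin sketch in Python (an illustrative toy, not the paper's Matlab implementation; the function and parameter names are ours):

```python
import random

def de_minimize(f, bounds, np_=20, F=0.5, CR=0.9, gens=100, seed=1):
    """Bare-bones DE/rand/1/bin minimizer (illustrative sketch)."""
    rng = random.Random(seed)
    D = len(bounds)
    # Initialization: uniform random within bounds.
    pop = [[lo + rng.random() * (hi - lo) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(ind) for ind in pop]
    for _ in range(gens):
        for i in range(np_):
            r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
            # Mutation: weighted difference of two members added to a third.
            v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
            v = [min(max(v[d], bounds[d][0]), bounds[d][1]) for d in range(D)]
            # Binomial crossover with one guaranteed component from v.
            jrand = rng.randrange(D)
            u = [v[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                 for d in range(D)]
            # Selection: offspring replaces parent only if not worse.
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(np_), key=fit.__getitem__)
    return pop[best], fit[best]

# Example: minimize the 2-D sphere function.
x_best, f_best = de_minimize(lambda z: sum(t * t for t in z), [(-5, 5)] * 2)
```

With these (arbitrary) settings, the sphere minimum at the origin is found to high accuracy within 100 generations.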

Estimation of Distribution Algorithms (EDAs)
The basic flowchart of the EDA is shown in Figure 3. The general steps of the EDA algorithm are the following:

1.
Initialization: A population is initialized randomly.

2.
Selection: The most promising individuals S(t) from the population P(t), where t is the current generation, are selected.

3.
Estimation of the probabilistic distribution: A probabilistic model M(t) is built from S(t).

4.
Generate new individuals: New candidate solutions are generated by sampling from M(t).

5.
Create new population: The new solutions are incorporated into P(t), and the algorithm proceeds to the next generation. The procedure ends when the termination criteria are met.
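The five steps above can be sketched as a minimal EDA in Python; this is an illustration under our own simplifying assumption of a univariate (per-dimension) Gaussian model with elitism, not any specific EDA variant from the literature:

```python
import random
import statistics

def eda_minimize(f, bounds, npop=30, elite_frac=0.5, gens=60, seed=3):
    """Minimal univariate-Gaussian EDA sketch: fit a normal distribution
    to the best half of the population, resample the rest from it."""
    rng = random.Random(seed)
    D = len(bounds)
    # Step 1: random initialization.
    pop = [[lo + rng.random() * (hi - lo) for lo, hi in bounds] for _ in range(npop)]
    for _ in range(gens):
        # Step 2: select the most promising individuals S(t).
        pop.sort(key=f)
        elite = pop[: int(npop * elite_frac)]
        # Step 3: model M(t): per-dimension mean and stdev of the elite set.
        mu = [statistics.fmean(ind[d] for ind in elite) for d in range(D)]
        sd = [statistics.pstdev([ind[d] for ind in elite]) + 1e-12 for d in range(D)]
        # Steps 4-5: sample new individuals from M(t) and form P(t+1).
        pop = elite + [[min(max(rng.gauss(mu[d], sd[d]), bounds[d][0]), bounds[d][1])
                        for d in range(D)] for _ in range(npop - len(elite))]
    return min(pop, key=f)

# Example: minimize a shifted sphere with optimum at (1, 1).
best = eda_minimize(lambda z: sum((t - 1) ** 2 for t in z), [(-5, 5)] * 2)
```

The population quickly concentrates around the optimum as the fitted variance shrinks; this also illustrates why sampling everything from the model risks premature convergence, a point that matters for the β parameter introduced later.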

Proposed Algorithm
In the proposed algorithm, we keep the hierarchically nested formulation of the min-max problem, which can solve asymmetrical problems. The design space (UL) decision variables are evolved with a DE. For the evaluation of each UL individual, first the scenario space (LL) problem is solved by the DE, and this solution is then transferred to the upper level. To reduce the cost, we apply an estimation of distribution mechanism between the decision space search (UL) and the scenario space search (LL); in that way, we use a priori knowledge obtained during the optimization. To further reduce the FEs, we search only for solutions with good worst-case scenarios. If the objective function of a solution X 1 under any scenario is already worse than the worst-case performance of the best solution X 2 found so far, there is no need to further explore X 1 over the scenario space. Therefore, the mutant individual's performance is checked under the parent's worst-case scenario and is further explored only when it is better in terms of the fitness function. Figure 4 shows the general framework of the proposed approach. The main steps of the proposed algorithm for the UL are the following:

1.
Initialization: A population of size NPop is initialized according to the general DE procedure mentioned in the previous section, where the individuals are representing candidate solutions in the design space X.

2.
Evaluation: To evaluate the fitness function, we need to solve the problem in the scenario space. For a fixed candidate UL solution X i , the LL DE is executed. More detailed steps are given in the next paragraphs. The LL DE returns the solution corresponding to the worst-case scenario for the specific X i . For each individual, the corresponding best solution Y best = argmax y∈Y f (X i , y) is stored, i.e., the scenario y that, for the fixed X i , maximizes the objective function.

3.
Building: The individuals in the population P(i) are sorted in ascending order of their UL fitness values, and the best nPop/2 are selected. From the best nPop/2 individuals, we build the distribution to establish a probabilistic model M G for the LL solutions. The d-dimensional multivariate normal probability density function (pdf) is:

f(x) = (2π)^(−d/2) |Σ|^(−1/2) exp(−(1/2)(x − µ)^T Σ^(−1) (x − µ)),

where x is the d-dimensional random vector, µ is the d-dimensional mean vector and Σ is the d×d covariance matrix. The two parameters are estimated from the best nPop/2 of the population, using the stored lower-level best solutions. In that way, in each generation, we extract statistical information about the LL solutions of the previous UL population. The parameters are updated accordingly in each generation, following the general schema of an estimation of distribution algorithm.

4.
Evolution: Evolve the UL with the standard DE steps of mutation and crossover, producing an offspring U i,G .

5.
Selection: As mentioned above, the selection operation is a competition between each individual X i,G and its offspring U i,G . The offspring is first evaluated under the parent's worst-case scenario and is sent to the LL for scenario-space optimization only if it performs better there; in that way, many unneeded LL optimization calls are avoided, reducing FEs. If the offspring is evaluated in the scenario space, the selection procedure in Equation (6) is applied.

6.
Termination criteria: • Stop if the maximum number of function evaluations MaxFEs is reached. • Stop if the improvement of the best objective value over the last MaxImpGen generations is below a specific number. • Stop if the absolute difference between the best and the known true optimal objective value is below a specific number.

7.
Output: the best worst-case function value f (x * , y * ) and the solution (x * , y * ) corresponding to the best worst-case scenario.

For the LL:

1.
Setting: Set the parameters: the probability of crossover CR, the population size nPop, the mutation rate F, and the sampling probability β.

2.
Initialization: Sample nPop individuals to initialize the population. If random(0, 1) ≤ β, the individual is sampled from the probabilistic model M G UL built in the UL with Equation (7). The model here is sampled with the mvnrnd(mu, Sigma) built-in function of Matlab, which accepts a mean vector mu and a covariance matrix Sigma as input and returns a random vector drawn from the multivariate normal distribution with that mean and covariance [23]. Otherwise, the individual is uniformly sampled in the scenario space according to Equation (3). Please note that for the first UL generation, β is always 0, as no probabilistic model has been built yet. For the following generations, β can take values in (0, 1], where β = 1 means that the population is sampled only from the probabilistic model; this might cause the algorithm to get stuck in local optima and converge prematurely. An example of an initial population generated with the aforementioned method with β = 0.5 is shown in Figure 5. Magenta asterisk points represent the population generated by the probabilistic model M G UL of the previous UL generation. Blue points are samples uniformly distributed in the search space. In Figure 6, the effect of the probabilistic model on the initial population of the LL for f 8 during the optimization is shown. As the iterations increase, the LL population members sampled from the probabilistic distribution reach the promising area that maximizes the function. In the zoomed subplot of each subfigure, one can see that all such members of the population are close to the global maximum, compared to the randomly distributed members.

3.
Mutation, crossover, and selection as in the standard DE.

4.
Termination criteria: • Stop if the maximum number of generations MaxGen is reached. • Stop if the absolute difference between the best and the known true optimal objective value is below a specific number.
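The β-controlled seeding of the LL initial population (step 2 above) can be sketched as follows. This is a simplified illustration using an independent Gaussian per dimension; the paper samples a full covariance model with Matlab's mvnrnd, and the function name and model format here are our own:

```python
import random

def init_ll_population(npop, bounds, model, beta, rng):
    """Seed the LL DE population: with probability beta an individual is
    drawn from the probabilistic model built at the UL, otherwise it is
    sampled uniformly in the scenario space. `model` is a list of
    (mean, stdev) pairs per dimension, or None in the first UL generation."""
    pop = []
    for _ in range(npop):
        if model is not None and rng.random() <= beta:
            ind = [rng.gauss(mu, sd) for mu, sd in model]     # from M_G^UL
        else:
            ind = [lo + rng.random() * (hi - lo) for lo, hi in bounds]
        # Clamp each component into the scenario-space bounds.
        pop.append([min(max(v, lo), hi) for v, (lo, hi) in zip(ind, bounds)])
    return pop

rng = random.Random(7)
model = [(5.0, 0.2)]   # hypothetical: LL best solutions clustered near y = 5
pop = init_ll_population(10, [(0.0, 10.0)], model, beta=0.5, rng=rng)
```

With β = 0.5, roughly half of the initial LL individuals land near the previously observed worst-case region, while the rest preserve uniform coverage of the scenario space.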

Experimental Settings
In this section, we describe the 13 benchmark test functions used for this study and provide the parameter settings for our experiments.

Test Functions
The performance of the proposed algorithm was tested on 13 benchmark problems of min-max optimization. The problems used are collected in [15] along with their referenced optimal values. The first 7 problems f 1 - f 7 are taken from [24], and they are convex in the UL and concave in the LL. The problems are described below.

Test function f 1 : The points x * = (−0.4833, −0.3167) and y * = (0.0833, −0.0833) are the known solutions of f 1 , and the optimal value is approximated at f 1 (x * , y * ) = −1.6833.

Test function f 2 : with x ∈ [−5, 5] 2 , y ∈ [−5, 5] 2 . The points x * = (1.6954, −0.0032) and y * = (0.7186, −0.0001) are the known solutions of f 2 , and the optimal value is approximated at f 2 (x * , y * ) = 1.4039.

Test function f 4 : The points x * = (−1.1807, 0.9128) and y * = (2.0985, 2.666) are the known solutions of f 4 .

Test function f 8 [25]: with x ∈ [0, 10], y ∈ [0, 10]. The points x * = 5 and y * = 5 are the known solutions of f 8 , and the optimal value is approximated at f 8 (x * , y * ) = 0. This test function is a saddle-point function. The function, along with the known optimum, is plotted in Figure 7, and it serves as an example of a symmetric function.

Test function f 9 [25]: min x∈X max y∈Y f 9 (x, y) = min(3 − 0.2x 1 + 0.3y 1 , 3 + 0.2x 1 − 0.1y 1 ) (16) with x ∈ [0, 10], y ∈ [0, 10]. The points x * = 0 and y * = 0 are the known solutions of f 9 , and the optimal value is approximated at f 9 (x * , y * ) = 3. It is a two-plane asymmetrical function. The contour plot and 3-D plot of this function, along with the known optima, are shown in Figure 8, and it serves as an example of an asymmetrical function.

Test function f 10 [25]: with x ∈ [0, 10], y ∈ [0, 10]. The points x * = 10 and y * = 2.1257 are the known solutions of f 10 , and the optimal value is approximated at f 10 (x * , y * ) = 0.097794. It is a damped sine asymmetrical function.

Test function f 11 [25]: with x ∈ [0, 10], y ∈ [0, 10]. The points x * = 7.0441 and y * = 10 or y * = 0 are the known solutions of f 11 , and the optimal value is approximated at f 11 (x * , y * ) = 0.042488. It is a damped cosine wave asymmetrical function.

Test function f 12 [6]: The points x * = (0.5, 0.25) and y * = (0, 0) are the known solutions of f 12 , and the optimal value is approximated at f 12 (x * , y * ) = 0.25.

Test function f 13 [6]: with x ∈ [−1, 3] 2 , y ∈ [0, 10] 2 . The points x * = (1, 1) and arbitrary y * are the known solutions of f 13 , and the optimal value is approximated at f 13 (x * , y * ) = 1.

Parameter Settings
The parameter settings used for all the experiments of this study are shown in Table 1. The population size depends on the dimensionality of the problem: for the UL, max(n x + n y , 5) * 2 is used, and for the LL, max(n y , 5) * 2, where n x and n y are the dimensionalities of the UL and LL, respectively.

Parameter        Upper-Level              Lower-Level
Population size  max(n x + n y , 5) * 2   max(n y , 5) * 2
Crossover        0.9                      0.9
Mutation         uniformly (0.

All the simulations were undertaken on an Intel(R) Core(TM) i7-7500 CPU @ 2.70 GHz, with 16 GB of RAM and the Windows 10 operating system. The code and the experiments were implemented and run in Matlab R2018b.
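As a quick worked example of these sizing rules (the dimensionalities below are hypothetical):

```python
def pop_sizes(nx, ny):
    """Population sizes from Table 1: UL uses max(nx + ny, 5) * 2,
    LL uses max(ny, 5) * 2, for UL/LL dimensionalities nx and ny."""
    return max(nx + ny, 5) * 2, max(ny, 5) * 2

print(pop_sizes(2, 2))   # small problem: both levels fall back to 5 * 2 = 10
print(pop_sizes(5, 5))   # larger problem: UL grows to 20, LL stays at 10
```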

Effectiveness of the Probabilistic Sharing Mechanism
To evaluate the effectiveness of the probabilistic sharing mechanism of the proposed algorithm, we compare three different instances that correspond to three different β values. The first algorithmic instance has β = 0, meaning that the estimation of distribution in the optimization procedure is not activated, and the algorithm becomes a traditional nested DE. This instance serves therefore as the baseline. The second algorithmic instance corresponds to β = 0.5, where half of the initial population of the LL is sampled from the probabilistic model. Last, for the third algorithmic instance, we set a value of β = 0.8, testing the ability of the algorithm when 80% of the initial population of the LL is sampled from the probabilistic model.
Due to the inherent randomness of EAs, repeated experiments are needed for a statistical analysis of the performance of the algorithm. We report results of 30 independent runs, which is the minimum number of samples commonly used for statistical assessment and tests. In Table 2, the statistical results of the 30 runs of the different instances of the algorithm are reported. More specifically, we report the mean, median, and standard deviation of the accuracy of the objective function. We calculate the accuracy as the absolute difference between the best objective function value provided by the algorithm and the known global optimal objective value of each test function. This is expressed as | f − f * |, where f and f * are the best and the true optimal values, respectively. In order to compare the instances, the non-parametric Wilcoxon signed-rank test [26] was carried out at the 5% significance level, where for each test function, the best instance in terms of median accuracy was used as the control algorithm against the other two. A reported value ≤0.05 means that the null hypothesis is rejected and the two samples differ, while >0.05 means the opposite. The best algorithm in terms of median accuracy is shown in bold. We also report the median of the total number of function evaluations; in bold are the lowest median FEs corresponding to the best algorithmic instance in terms of median accuracy. As we can see, the proposed method outperforms the baseline in most of the test functions. More specifically, the second and third instances are significantly better than the first on the test functions f 1 - f 8 and f 13 . For these test functions, the results of these two instances do not differ significantly, so there is a tie. What we can note, though, is that instance 2 repeatedly requires fewer FEs to reach the same results; therefore, it performs better in terms of computational expense.
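As a small worked example of this accuracy measure (the run values below are hypothetical, not taken from Table 2):

```python
import statistics

def accuracy(f_best, f_opt):
    """Accuracy as defined in the text: |f - f*|."""
    return abs(f_best - f_opt)

# Hypothetical best values from five runs on a problem with f* = 1.4039 (cf. f2):
runs = [1.4041, 1.4039, 1.4100, 1.4039, 1.4050]
acc = [accuracy(v, 1.4039) for v in runs]
print(statistics.median(acc))   # median accuracy, roughly 2e-4 here
```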
For test function f 9 , all the instances perform equally in terms of median accuracy, while the baseline instance reports fewer FEs. The third instance is best on test functions f 5 - f 6 . For test function f 7 , there is a tie between the second and third instances. The first instance performs better on test functions f 10 and f 12 , while for f 11 , the first and second instances outperform the third. In many cases, the baseline instance reports a low number of FEs. These cases, where it does not reach the desired accuracy, indicate premature convergence: the "least improvement" termination criterion is activated and the algorithm terminates before reaching the maximum number of evaluations. In 11 out of 13 test functions, instance 2 outperforms at least one other instance or performs equally, which makes β = 0.5 a safe choice.
In Figure 9, the success rate of each algorithmic instance on each test function is reported. As the success rate, we define the percentage of runs in which the algorithm reached the desired accuracy out of the total runs for each test function. It is interesting to note that the baseline first instance did not reach the desired accuracy at all in 9 out of 13 test problems. The performance of the algorithm improves dramatically with the use of the estimation of distribution: the instance with β = 0.5 reaches the desired accuracy in at least one run in 11 out of 13 problems, and the instance with β = 0.8 in 10 out of 13 problems. The second instance reaches a success rate of 100% for the asymmetrical functions f 9 and f 10 . For test functions f 7 and f 12 , none of the instances reach the desired accuracy within the predefined number of FEs; f 7 is one of the test problems with higher dimensionality, and a higher number of function evaluations might be needed to reach higher accuracy. In Figure 10, the convergence plots of the accuracy of the upper level for each algorithmic instance and test function are shown. The red color corresponds to the instance with β = 0.0, magenta to β = 0.5 and blue to β = 0.8. The bumps that can be spotted in the convergence curves are probably due to inaccurate solutions of the worst-case scenario. This can mostly be seen in Figure 10g for test function f 7 , where the convergence seems to go beyond the desired accuracy. In Figure 10j for f 10 , algorithmic instances 2 and 3 converge in even earlier generations, in contrast to the baseline first instance.

Comparison with State-of-the-Art Method MMDE
In this subsection, we compare the proposed method with a state-of-the-art min-max EA. The MMDE [13] employs a differential evolution algorithm along with a bottom-boosting scheme and a regeneration strategy to detect the best worst-case solutions. As the MMDE showed statistically significant superior performance against a number of other min-max EAs, we only compare with the MMDE. For the comparative experiments, the following settings are applied. For the proposed method, the DE parameters of the UL are the same as in Table 1, while for the LL, the population size was set to max(n y , 5) * 3 and β = 0.5. For the MMDE, the settings proposed in the reference paper are used: crossover CR = 0.5 and mutation F = 0.7. The MMDE also has two parameters, K s and T, that control the number of FEs in the bottom-boosting scheme and the partial-regeneration strategy; here, they are set to 190 and 10, respectively, as in the original settings. To have a fair comparison, the termination criterion for both algorithms is only the total number of FEs, set to 10 4 . Since the number of FEs is limited, an additional check was employed for the proposed method: if a new UL solution was already found in a previous population, it is not passed to the lower level, since its worst-case scenario is already known. The algorithms are run 30 times on the test functions f 8 - f 13 . For comparing the two methods, we use the mean square error (MSE) of the obtained solutions in the design space (UL) with respect to the true optimum, a metric commonly used for comparing min-max algorithms. More specifically, the MSE is calculated as:

MSE = (1/D X ) Σ i=1..D X (X best,i − X opt,i ) 2 ,

where X best is the best solution found by the algorithm, X opt is the known optimal solution, and D X is the dimensionality of the solution. In Table 3, we report the mean, median and standard deviation of the MSE. In Figure 11, these values are illustrated as boxplots.
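A worked example of this metric (the solution values below are hypothetical):

```python
def mse_to_optimum(x_best, x_opt):
    """MSE between an obtained UL solution and the true optimum:
    the mean of the squared per-dimension differences."""
    assert len(x_best) == len(x_opt)
    return sum((b - o) ** 2 for b, o in zip(x_best, x_opt)) / len(x_opt)

# Hypothetical obtained solution vs. a known optimum at (1, 1):
print(mse_to_optimum([0.9, 1.1], [1.0, 1.0]))   # 0.01
```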
The Wilcoxon signed-rank test [26] was conducted at the 5% significance level, and we report whether the p-value rejects the null hypothesis. The proposed method outperforms the MMDE on the test functions f 8 and f 10 , while it performs equally well on the asymmetrical test function f 9 . On the test functions f 11 - f 13 , the MMDE performs better than the proposed method.

Conclusions
In this work, we proposed an algorithm for solving worst-case scenario optimization as a min-max problem. The algorithm employs a nested differential evolution with an estimation of distribution between the two levels to enhance the efficiency of solving such problems in terms of both accuracy and computational cost. A probabilistic model is built from the best worst-case solutions found so far and is used to generate samples for the initial population of the lower-level DE to speed up convergence. First, the efficiency is investigated by comparing the nested algorithm with different probabilities of using the probabilistic model on 13 test functions of various dimensions and characteristics. To further investigate the performance of the algorithm, it is compared with the MMDE, a state-of-the-art algorithm known to perform well on these problems, on both benchmark functions and an engineering application. The results show that, most times, the proposed method performs better than or equal to the MMDE.
In future work, the method could be tested with different population-based EAs in the UL or LL, as it is independent of the evolutionary strategy. The parameter β, which defines the probability that the probabilistic model will be used, could be adapted during the optimization. Last, the method could be tested on higher-dimensional test functions and/or engineering applications.

Data Availability Statement:
The code and the related results are available on request from the corresponding author.