From an Optimal Point to an Optimal Region: A Novel Methodology for Optimization of Multimodal Constrained Problems and a Novel Constrained Sliding Particle Swarm Optimization Strategy

Abstract: The present work proposes a novel methodology for an optimization procedure that extends the optimal point to an optimal region based on an uncertainty map of the deterministic optimization. To do so, this work derives a likelihood-based test to draw confidence regions of population-based optimizations. A novel Constrained Sliding Particle Swarm Optimization algorithm is also proposed that can cope with optimization procedures characterized by multiple local minima. These are two open issues in the optimization literature: the uncertainty analysis of deterministic optimization and the application of meta-heuristic algorithms to multi-local-minima problems. The proposed methodology was evaluated in a series of five benchmark tests. The results demonstrated that the methodology is able to identify all the local minima and the global one, if any. Moreover, it was able to draw the confidence regions of all minima found by the optimization algorithm, hence extending the optimal point to an optimal region and providing, with statistical confidence, the set of decision variables that can yield an optimal value. Finally, the methodology is evaluated on a case study from chemical engineering: the optimization of a complex multifunctional process where separation and reaction are processed simultaneously, a true moving bed reactor. The method was able to efficiently identify the two possible optimal operating regions of this process, proving the practical applicability of the methodology.


Introduction
Traditional optimization procedures are usually seen as a methodology that provides a point at which the process meets the desired requirements. An analysis of the confidence of the results provided by the optimization is not usually performed. However, in modern industry, it is important to provide more flexibility and precision in the results of an optimization procedure. It is necessary to evaluate how accurate the optimum provided by the optimizer is, together with an evaluation of all the operating options under which the system can meet the optimal conditions, thus transforming a fixed point into a probable region within which the process can be operated and still satisfy the performance and safety requirements of the problem under study.
In [1], these issues are addressed, and a method based on the bootstrap technique is presented to determine the confidence region of optimal operating conditions in robust process design. In [2], a method based on the likelihood confidence regions is presented to build a map of a feasible operating region of an unconstrained optimization procedure using a population-based meta-heuristic optimizer.
This is an open issue in the literature that needs to be deepened in order to adapt optimization techniques to the new paradigms of the digital revolution; however, few works in the literature develop this topic. A population-based method is an ideal choice for drawing confidence regions because it produces a huge amount of data about the objective function topography. If these data are properly used, they can provide a joint confidence region of the optimal point, and therefore no extra steps are necessary to obtain a confidence region of the optimal value. Moreover, this might solve one of the main issues in the employment of classic statistical methods to build confidence regions, which are usually based on the Gaussian distribution [1,3-5]. This assumption rarely holds in an industrial campaign, where the systems are mostly nonlinear; when the system starts to deviate from normal conditions, the technique becomes limited. On the other hand, through a population-based method it is possible to draw the likelihood confidence region, which is known to be able to represent nonlinear confidence regions with recognizable precision [2,4,6,7].
Finally, optimization procedures based on meta-heuristics methods are characterized by the generation of a large amount of data. These data are usually discarded after obtaining the result. On the other hand, these data contain important information that can be extracted and recycled into useful insights about the system under evaluation. Therefore, a methodology that makes use of this already available data is of importance in the optimization literature, promoting the efficient use of data and computational resources. This issue was introduced by [8], where the authors make use of the results of a metaheuristic optimization procedure to build operating maps of a chemical unit.
In this scenario, this work proposes the mathematical deduction of a statistical test to evaluate the likelihood confidence regions of constrained optimization procedures based on meta-heuristic methods. This work also addresses an important issue found while analyzing optimization results: the existence of multiple local minima. Thus, the present work also proposes a modified version of a particle swarm optimization method, here called the constrained sliding particle swarm optimizer (CSPSO). The main contribution of the CSPSO proposed here is the possibility of expanding the search area throughout the optimization while keeping the past results in memory and considering the constraints of the problem. This strategy allows obtaining a complete picture of the optimization landscape, together with the confidence regions of all minimum points. Several benchmark tests are presented to evaluate the performance of the proposed methodology. Furthermore, the optimization problem of a complex chemical process is presented in order to evaluate the methodology applied to an industrial problem. Therefore, the main contributions of this work are summarized as:

•	A statistical test to evaluate the likelihood confidence regions from constrained meta-heuristic optimization is proposed;
•	A novel constrained sliding particle swarm optimizer is presented to address multi-local-minima problems;
•	The proposed methodology is evaluated using several benchmark tests and using the optimization problem of a chemical process as a practical case study.

Likelihood Confidence Region of Constrained Optimization Problems
To use the resulting population of a meta-heuristic method to map the uncertainties of the optimal point, it is necessary to evaluate the large amount of data generated by the meta-heuristic optimizer, in the present case the CSPSO. To do this, the most appropriate approach is to use the likelihood confidence regions, which can be based on the Fisher-Snedecor criterion. This section presents the mathematical deductions that lead to a Fisher-Snedecor test to analyze the population obtained from the CSPSO around each minimum found by the optimizer, considering the problem constraints. It is important to highlight that "population" here means all the decision-variable points evaluated by the algorithm.
Considering a constrained single-objective optimization problem, it can be represented by the Lagrangian function described in Equation (1):

$$L(\theta_k, \lambda_k) = f_{ob}(\theta_k) + \lambda_k^T C(\theta_k) \quad (1)$$
where f_ob is the objective function with its corresponding decision variables θ, C is the vector of constraints, and λ is the vector of Lagrange multipliers, which allows determining the extremes of a function subject to restrictions. k = 1, 2, 3, …, n_k indexes the iterations performed by the PSO. In addition, considering that a global optimal solution, θ_g, could be obtained by the analytical solution of the objective function, the corresponding Lagrange expression is represented by L(θ_g, λ_g). It is important to highlight that θ_g is a theoretical concept that no optimizer will reach exactly; the optimizer searches for the best approximation to it. Therefore, the optimal solution obtained by the optimizer in the search to approximate θ_g is represented by θ*_k, whose Lagrange expression is defined as L(θ*_k, λ*_k). These concepts are exemplified graphically in Figure 1, where the points are the meta-heuristic population after all iterations; the set of solutions after a further ∆k iterations is denoted θ_{k+∆k}.

Figure 1. The resulting space formed by the overall population originated by the algorithm. The relationship between the value of each particle and a theoretical global minimum is expressed in terms of the Pythagorean theorem, on which the schematic representation of the concepts used in the deduction of the Fisher-Snedecor test is based.
In Figure 1, e represents the distance between the analytical optimum L(θ_g, λ_g) and an optimum found by the CSPSO, L(θ*_k, λ*_k); in other words, it is the residual error of the objective function evaluation. b is the projection of e on a plane tangent to L(θ_{k+∆k}, λ_{k+∆k}), and h represents the difference between the analytical optimum and a given point of the CSPSO population, L(θ_{k+∆k}, λ_{k+∆k}). The error can thus be expressed through the triangle relationship of Figure 1 as:

$$e = L(\theta_k^*, \lambda_k^*) - L(\theta_g, \lambda_g) \quad (2)$$

The squared residual error, e², can be normalized in terms of the variance of the complete population, V_k(θ), according to the following equation:

$$\tilde{e}^2 = \frac{e^2}{V_k(\theta)} \quad (3)$$

The error can be defined over all k minima obtained in the optimization process, as follows:

$$e^2 = \sum_{k=1}^{n_k} \left[ L(\theta_k^*, \lambda_k^*) - L(\theta_g, \lambda_g) \right]^2 \quad (4)$$

The variance, V_k(θ*_k, λ_k), between each minimum found by the CSPSO and the analytical optimum L(θ_g, λ_g), is defined in Equation (5):

$$V_k(\theta_k^*, \lambda_k) = \frac{1}{n_k - 1} \sum_{k=1}^{n_k} \left[ L(\theta_k^*, \lambda_k^*) - L(\theta_g, \lambda_g) \right]^2 \quad (5)$$
where n_k is the total number of iterations evaluated by the CSPSO. Therefore, it is possible to rewrite Equations (4) and (5) in the form of Equation (6):

$$\frac{e^2}{V_k(\theta_k^*, \lambda_k)} = \frac{\sum_{k=1}^{n_k} \left[ L(\theta_k^*, \lambda_k^*) - L(\theta_g, \lambda_g) \right]^2}{V_k(\theta_k^*, \lambda_k)} \quad (6)$$
As e² is the squared residual error of uncorrelated experiments (evaluations of the objective function), it follows a chi-squared distribution, χ². It is important to highlight that an experiment here means a call to the objective function; this call is an isolated event, independent of the algorithm used to perform the optimization. This probability distribution has n_k − 1 degrees of freedom:

$$\frac{e^2}{V_k(\theta)} \sim \chi^2_{n_k - 1} \quad (7)$$

The same deduction can be carried out taking as reference the distance between the global analytical solution and the remaining points of the CSPSO population, represented in Figure 1 by h:

$$h^2 = \sum_{k=1}^{n_k} \left[ L(\theta_{k+\Delta k}, \lambda_{k+\Delta k}) - L(\theta_g, \lambda_g) \right]^2 \quad (8)$$

The variance, V_k(θ_{k+∆k}, λ_{k+∆k}), between a minimum found by the CSPSO, L(θ*_k, λ_k), and a given point in the population is defined by:

$$V_k(\theta_{k+\Delta k}, \lambda_{k+\Delta k}) = \frac{1}{n_k - 1} \sum_{k=1}^{n_k} \left[ L(\theta_{k+\Delta k}, \lambda_{k+\Delta k}) - L(\theta_k^*, \lambda_k^*) \right]^2 \quad (9)$$

As done before with e², Equations (8) and (9) can be transformed into:

$$\frac{h^2}{V_k(\theta_{k+\Delta k}, \lambda_{k+\Delta k})} = \frac{\sum_{k=1}^{n_k} \left[ L(\theta_{k+\Delta k}, \lambda_{k+\Delta k}) - L(\theta_g, \lambda_g) \right]^2}{V_k(\theta_{k+\Delta k}, \lambda_{k+\Delta k})} \quad (10)$$

Therefore, assuming also that h follows a chi-squared distribution, whose degrees of freedom are expressed by n_k − (n_θ − 1):

$$\frac{h^2}{V_k(\theta_{k+\Delta k}, \lambda_{k+\Delta k})} \sim \chi^2_{n_k - (n_\theta - 1)} \quad (11)$$

where n_θ is the population size. Hence, the difference between the errors presented can be expressed by:

$$b^2 = h^2 - e^2 \quad (12)$$

From the Pythagorean theorem, h² − e² is equal to b². Furthermore, h and b are orthogonal to each other, as seen in Figure 1; therefore, they are independent vectors and hence independent distributions. Knowing this, their ratio is equivalent to a Fisher-Snedecor distribution:

$$\frac{(h^2 - e^2)/n_\theta}{e^2/(n_k - n_\theta)} \sim F_\alpha(n_\theta, n_k - n_\theta) \quad (13)$$

where α is the confidence level of the Fisher-Snedecor test.
Finally, the Taylor series of the Lagrangian is expanded around the optimal point in terms of the objective function:

$$L(\theta_{k+\Delta k}, \lambda_{k+\Delta k}) \approx L(\theta_k^*, \lambda_k^*) + \nabla L(\theta_k^*)^T (\theta_{k+\Delta k} - \theta_k^*) + \frac{1}{2} (\theta_{k+\Delta k} - \theta_k^*)^T H(\theta_k^*) (\theta_{k+\Delta k} - \theta_k^*) \quad (14)$$

where ∇L(θ*) is the gradient vector of the Lagrangian and H(θ*) is the Hessian matrix of the objective function, which is correlated to the covariance matrix of the optimal points as [3,7]:

$$H(\theta_k^*) \approx 2\, V(\theta_k^*)^{-1} \quad (15)$$

Replacing Equation (15) in Equation (14), and noting that the gradient vanishes at the optimum, it is possible to obtain:

$$L(\theta_{k+\Delta k}, \lambda_{k+\Delta k}) - L(\theta_k^*, \lambda_k^*) \approx (\theta_{k+\Delta k} - \theta_k^*)^T V(\theta_k^*)^{-1} (\theta_{k+\Delta k} - \theta_k^*) \quad (16)$$

Therefore, it is possible to rewrite Equation (13) through Equation (16), obtaining:

$$\frac{L(\theta_{k+\Delta k}, \lambda_{k+\Delta k}) - L(\theta_k^*, \lambda_k^*)}{L(\theta_k^*, \lambda_k^*)} \sim \frac{n_\theta}{n_k - n_\theta} F_\alpha(n_\theta, n_k - n_\theta) \quad (17)$$

Assuming that V(θ_{k+∆k}, λ_{k+∆k}) is a good approximation of V_k, the following test is obtained:

$$L(\theta_{k+\Delta k}, \lambda_{k+\Delta k}) \le L(\theta_k^*, \lambda_k^*) \left[ 1 + \frac{n_\theta}{n_k - n_\theta} F_\alpha(n_\theta, n_k - n_\theta) \right] \quad (18)$$

The deductions presented here culminate in the above Fisher-Snedecor test, which can be used to evaluate a population provided by a meta-heuristic method, the CSPSO in the present case, around each local minimum found. The population points that satisfy this criterion compose the optimal region of the evaluated minimum, building in this way a map of the multimodal landscape directly from the optimization results.
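To illustrate how the final Fisher-Snedecor test derived in this section can be applied to the raw population, the sketch below filters the evaluated points whose objective values fall inside the likelihood region around the best optimum found. This is a minimal illustration, not the authors' MATLAB implementation; the function name `confidence_region` and the use of SciPy's F quantile are assumptions.

```python
import numpy as np
from scipy.stats import f as f_dist

def confidence_region(population, values, n_theta, alpha=0.95):
    """Keep the population points whose objective value satisfies
    L(theta) <= L* [1 + n_theta/(n_k - n_theta) * F_alpha], i.e., the
    Fisher-Snedecor likelihood test around the best optimum found.
    Assumes a strictly positive optimum value (shift the objective if not)."""
    values = np.asarray(values, dtype=float)
    n_k = values.size                      # number of objective evaluations
    L_star = values.min()                  # best optimum found by the swarm
    F_crit = f_dist.ppf(alpha, n_theta, n_k - n_theta)
    threshold = L_star * (1.0 + n_theta / (n_k - n_theta) * F_crit)
    inside = values <= threshold
    return np.asarray(population)[inside], threshold
```

In a multimodal setting, the same filter would be applied separately around each stored minimum, using only the population points attributed to that basin.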

A Sliding Particle Swarm Optimization for Solving Constrained Optimization Problems
Particle swarm optimization is a meta-heuristic optimization technique based on the concept of a group of p particles that hover over a given optimization landscape, the search region, in order to map its topology. It was first presented by Kennedy and Eberhart (1995) as a method for the optimization of nonlinear functions that mimics the social behavior of living communities, for example bird flocks, which are nowadays the most common association with PSO. In the basic form of this strategy, each particle of the swarm has a specific position, x_p, within the search area and a respective velocity, v_p. This information is stored in an individual and a group memory, which are shared among all members of the swarm at each iteration, k. It can be represented as:

$$v_p^{k+1} = v_p^k + c_1 r_1 \left( x_p^{per} - x_p^k \right) + c_2 r_2 \left( x_p^{glo} - x_p^k \right) \quad (19)$$

$$x_p^{k+1} = x_p^k + v_p^{k+1} \quad (20)$$

where c_1 and c_2 are the acceleration coefficients, and r_1 and r_2 are random numbers uniformly generated between 0 and 1. Through Equations (19) and (20), the velocity and position of each particle after each iteration are computed. These values are based on the best position found among all the particles in all iterations, x_p^glo, and the particle's own best position in the previous iterations, x_p^per. Several works have been published applying this technique in different fields: parameter estimation [5,7], control systems [9-11], and optimization problems [12-14]. For the past two decades, studies have been published with developments and improvements of the PSO technique [12,15-19]. Ratnaweera et al. (2004), for example, proposed a self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC). In the referred work, a PSO was developed to achieve fast convergence. This was done by introducing the necessary momentum for the particles through the elimination of the particle velocity term in Equation (19) and introducing adaptive acceleration coefficients.
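The basic update rules of Equations (19) and (20) can be sketched in a few lines. This is a hedged illustration: the helper name `pso_step`, the coefficient values, the fixed random seed, and the inertia weight `w` (added here for stable convergence; the basic formulation keeps the previous velocity with unit weight) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, x_per, x_glo, c1=1.5, c2=1.5, w=0.7):
    """One velocity/position update in the spirit of Equations (19)-(20).
    x, v: (n_particles, n_dims) arrays; x_per: personal bests; x_glo: global best."""
    r1 = rng.random(x.shape)   # uniform random numbers in [0, 1)
    r2 = rng.random(x.shape)
    # cognitive term pulls toward each particle's own best,
    # social term pulls toward the swarm's best
    v_new = w * v + c1 * r1 * (x_per - x) + c2 * r2 * (x_glo - x)
    return x + v_new, v_new
```

Iterating this step while updating the personal and global bests is enough to minimize a smooth test function.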
The authors showed through benchmark tests that their proposed HPSO was able to outperform previous types of PSO strategies in most of the tests; when not outperforming, the HPSO presented similar results with faster convergence. In [18], another variant of PSO is presented, called parallel PSO. The algorithm was developed to deal with complex process optimizations, which require significant computational effort. The authors demonstrated the efficiency of the approach through the optimization of a chromatographic separation unit.
When the PSO is compared with other meta-heuristic methods, it is considered simpler and easier to implement. Further comparisons in the literature point out some advantages of the PSO strategy. For instance, in Kachitvichyanukul (2012) genetic algorithms (GAs) are compared with PSO-based methods. The referred work evaluated the computational effort of both methods and their respective precision, showing that the GAs present an exponential relation between the population size and the solution time, while the PSO presents a linear relationship. This allows the employment of larger populations with PSO, which might be an advantage in complex optimizations. Furthermore, several works have compared the PSO with different GAs in different fields of application; most of these works point to a better performance of PSO [20-22].
A myriad of issues has been well addressed in the PSO literature, such as the acceleration coefficients, the particles' inertia, the initial velocity, etc. This last point, the initialization of the velocity, has been analyzed in the literature; e.g., Engelbrecht (2012) evaluated this issue, concluding that the particles tend to leave the boundaries of the search space when the velocities are initialized randomly. The author demonstrated that the result is directly dependent on the method of initialization of the particle velocity, pointing to an initial velocity equal to zero, or a random value around zero, as the most adequate choice to avoid the roaming-particles problem.
From one point of view, the literature about PSO is well developed. On the other hand, the application of PSO to solve multi-modal optimization problems is still an open issue. It was addressed in Nogueira et al. (2020), where the authors proposed a PSO that can evaluate unconstrained multi-modal optimization problems and cope with complex optimization systems. However, the constrained case still needs to be deepened.
Dealing with constrained objective functions and multiple local minima is a complex problem found in several practical cases. In such situations, the application of the usual PSO strategy might be limited by the nature of the landscape: a slight variation in the function peak height can generate significant changes in the position of the global optimum (Poli et al., 2007). The flock might therefore not have enough momentum to leave a local minimum, getting stuck in it.
Thus, this work proposes a novel PSO modification that can deal with the multi-minima problem in constrained optimization, here called constrained sliding particle swarm optimization (CSPSO). This was done by introducing a mutation on the individuals' social learning, following a strategy similar to the one proposed by Nogueira et al. (2020).
This idea is presented in Figure 2, where a given constrained function with a total of five local minima is shown. The mutable social component of the PSO will conduct the flock to hover over region R1 up to region R3, inducing the particles to change their search area each time a minimum is found. An ordinary PSO algorithm would find one of those minima and get stuck in that area. This is a problem in two ways: first, the results do not provide the full topography of the search region; second, it compromises the drawing of the feasible operating region, because the intention is to evaluate all possible operating conditions that satisfy the process optimal conditions. The CSPSO proposed here is based on the PSO-TVAC developed by Ratnaweera et al. (2004) and the SPSO developed by Nogueira et al. (2020), using an initial velocity equal to zero in order to achieve a better performance. The first step is to set the acceleration coefficients (c_1i, c_1f, c_2i, and c_2f), the optimization constraints, the number of iterations k, the initial search region limits R_1, and the maximum number of iterations as the criterion to change the region.
Then the initialization procedure begins. The particles' positions are started within the first search region, e.g., R1 in Figure 2. The initialization is done by randomly generating x_p^1 as:

$$x_p^1 = R_{i,min} + rnd \left( R_{i,max} - R_{i,min} \right) \quad (21)$$

where R_i,min is the inferior limit of the initial search region, R_i,max is the superior limit, and rnd is a random value between 0 and 1, introducing randomness into the system. After this, the initial velocities are set to zero and the maximum velocity, v_max, can be calculated as:

$$v_{max} = \frac{X_{p,max} - X_{p,min}}{2} \quad (22)$$

Then the objective function and the set of constraints, C, are evaluated for each particle position, f_ob(C, x_p^1). There are two ways to address the constraints here: through a penalty function or through straight evaluation. The first option introduces a new parameter to the optimization problem, the penalty, which might affect the efficiency by constraining the flight landscape. On the other hand, the straight evaluation of the constraints makes the algorithm computationally heavier. As the developed algorithm is meant to deal with complex optimizations, it was decided to implement both options, which can be chosen by the user according to the case study. This is one of the points on which this PSO variant diverges from the others found in the literature. Thus, the algorithm works as presented in Figure 3, where δ is the constraint value and ∆ is the penalty factor. After the successful evaluation of the constraints, the acceleration coefficients (c_1 and c_2) are computed as:

$$c_1 = (c_{1f} - c_{1i}) \frac{k}{n_k} + c_{1i}, \qquad c_2 = (c_{2f} - c_{2i}) \frac{k}{n_k} + c_{2i} \quad (23)$$

Then it is possible to evaluate the next iteration velocity, v_p^{k+1}, as:

$$v_p^{k+1} = c_1 r_1 \left( x_p^{per} - x_p^k \right) + c_2 r_2 \left( x_p^{glo} - x_p^k \right) \quad (24)$$

where r_1 and r_2 are randomly generated variables. If the computed velocity is higher than v_max, a new velocity value should be calculated using:

$$v_p^{k+1} = \mathrm{sign}\left( v_p^{k+1} \right) v_{max} \quad (25)$$

Then the new position, x_p^{k+1}, can be computed as:

$$x_p^{k+1} = x_p^k + v_p^{k+1} \quad (26)$$

If the computed position does not respect the search bounds or the internal restrictions, x_i^restri, that position is set to the middle of the search area.
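The two constraint-handling options just described (penalty function versus straight evaluation) and the velocity clamping can be sketched as follows. The function names, the penalty value, and the convention that constraints are written as g(x) ≤ 0 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def evaluate_with_penalty(f_obj, constraints, x, penalty=1e3):
    """Evaluate f_obj at x with constraints given as callables g(x) <= 0.
    penalty = 0 selects the straight evaluation: infeasible particles are
    rejected (infinite cost); any other value adds penalty * total violation."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    if penalty == 0.0:
        # straight evaluation: an infeasible particle is simply discarded
        return f_obj(x) if violation == 0.0 else np.inf
    return f_obj(x) + penalty * violation

def clamp_velocity(v, v_max):
    """Keep every velocity component within [-v_max, v_max], as in the
    velocity-limiting step described above."""
    return np.clip(v, -v_max, v_max)
```

The penalty route keeps every particle "alive" with a degraded score, which preserves swarm diversity; the straight route is cheaper per feasible particle but can stall the flock near active constraints.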
At this point, the local-minimum criterion is evaluated based on the overall best point, x_p^glo. If the optimal point remains equal or close to x_p^glo for several iterations, a space restriction is created. This constraint, created each time a minimum is recognized, serves as a social memory for the flock. Then the gradual expansion of the search region of the CSPSO starts: when a minimum is recognized for the first time, the flock moves from R_1 to R_2, as in the example of Figure 2, and then gradually to R_3 and so on after each new minimum is found, until the procedure is finished and the total number of iterations is reached. Before each expansion of the search region, the information about the minimum point found is stored in the social memory. If the algorithm does not find a new minimum, the region expands until the global landscape is explored. At each expansion, the procedure described here is repeated. These last steps are the main difference between the proposed CSPSO and the other PSO variants found in the literature, most of which stop right before these steps. A pseudo-code of the proposed CSPSO is presented in Algorithm 1.

Algorithm 1. A pseudo-code of the proposed CSPSO

Begin
  Set PSO parameters:
    Acceleration coefficient 1 := c_1
    Acceleration coefficient 2 := c_2
  Define the criterion χ for the maximum number of iterations within the same minimum
  Define the range α of a local minimum
  l := 0
  while (termination condition = false)        // expansion of the search domain
    Initialize the population within the search region [R_l,min, R_l,max]:
      v_p := 0
      x_p := R_l,min + rnd (R_l,max − R_l,min)
      v_max,p := (X_p,max − X_p,min)/2
    for j := 1 to number of iterations
      for p := 1 to number of particles
        if ∆ ≠ 0, evaluate the objective function with penalties := f_j(C, x_p)
        for i := 1 to number of dimensions
          update particle position and velocity
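For readers who prefer runnable code to pseudo-code, the sketch below mirrors the overall structure of Algorithm 1 under stated assumptions: a simple distance-based repulsion stands in for the social-memory constraint, the region grows by a fixed factor, and all names, parameter values, and the stall criterion are illustrative choices, not the authors' MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def cspso(f, lo, hi, n_particles=30, iters=200, c1=(2.5, 0.5), c2=(0.5, 2.5),
          stall=30, tol=1e-8, expansions=3, grow=1.5, repel=0.5):
    """Sliding/expanding search: run a TVAC-style PSO inside a region, record
    the best point when the swarm stalls, then enlarge the region while
    repelling the flock from already-found minima (a crude social memory)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    minima = []
    for _ in range(expansions):
        x = lo + rng.random((n_particles, lo.size)) * (hi - lo)
        v = np.zeros_like(x)                       # zero initial velocities
        vmax = (hi - lo) / 2.0                     # maximum velocity
        pbest = x.copy()
        pval = np.array([f(p) for p in x])
        g = pbest[pval.argmin()].copy()
        gval, since = pval.min(), 0
        for k in range(iters):
            a1 = (c1[1] - c1[0]) * k / iters + c1[0]   # time-varying c1
            a2 = (c2[1] - c2[0]) * k / iters + c2[0]   # time-varying c2
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = np.clip(a1 * r1 * (pbest - x) + a2 * r2 * (g - x), -vmax, vmax)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            for m in minima:                       # repel from stored minima
                fx[np.linalg.norm(x - m, axis=1) < repel] += 1e6
            better = fx < pval
            pbest[better], pval[better] = x[better], fx[better]
            if pval.min() < gval - tol:
                g, gval, since = pbest[pval.argmin()].copy(), pval.min(), 0
            else:
                since += 1
            if since > stall:                      # swarm stalled: minimum found
                break
        minima.append(g)
        span = (hi - lo) * (grow - 1.0) / 2.0
        lo, hi = lo - span, hi + span              # slide/expand the region
    return minima
```

Each outer pass records one minimum and enlarges the bounds, so repeated passes accumulate the list of minima that the likelihood test can then turn into confidence regions.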

Results
The CSPSO proposed here, together with the likelihood analysis deduced above, was implemented in MATLAB. The results were compared with standard PSO strategies, and it was noted that, for all benchmarks, the ordinary PSO strategy was able to identify only one local minimum. A series of benchmarks was then used to evaluate the proposed methodology. This section presents the performance of the methodology for five constrained benchmark problems: three characterized by simple constraints, and two characterized by additional constraints in order to evaluate the robustness of the proposed methodology. Table 1 provides the CSPSO parameters used to solve these problems.

Benchmark 1-Ackley Function
The first benchmark is a continuous, multimodal, non-convex function defined on an n-dimensional space. It is a well-known function for testing optimization algorithms. It is characterized by a large, slightly inclined valley area with several small local depressions; at its center, a deep depression configures the function's global minimum, f(x*) = 0 at x = 0. The optimization problem that originates from the Ackley function is written as:

$$\min_x f(x) = -a \exp\left( -b \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(c\, x_i) \right) + a + \exp(1)$$

where a, b, and c are constants, usually a = 20, b = 0.2, and c = 2π, and n = 2. The function area is usually constrained to a hypercube. Table 1 provides the CSPSO parameters used to solve this problem. As can be seen from Figure 4, the CSPSO provided a full picture of the Ackley function topography. The 3D map shows that this function has several local minima distributed along the way toward the global minimum. The 2D map is then built using a projection of the 3D topography onto a perpendicular plane. A contour map of this function was also added to the 2D map to provide a visual idea of where the minima are located; each contour line surrounds the minimum of its region. The contour lines form small "puddles" around the local minima, labeled with the value of the objective function in that surrounding, and these "puddles" are filled by the CSPSO confidence regions. Moving toward the center of Figure 4, the global minimum is found, marked by the contour value 2, and it is also well described by its confidence region. Thus, from the 2D map, it is possible to see that the methodology proposed here can identify all the local minima and the global minimum, while drawing their respective confidence regions.
From the surface map (surface lines), it is possible to see that the confidence regions lie precisely within the local minima and the global minimum. The methodology was able to accurately provide the uncertainty map of each minimum found, thus providing not only the optimal points but also their corresponding uncertainty. The optimal inputs, x_1 and x_2, are therefore not an ordered pair but a region within which several points lead to the same value of the objective function.
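A minimal definition of this benchmark is useful for reproducing the landscape; the vectorized form below follows the constants quoted above (function name and defaults are illustrative):

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """Ackley function for an n-dimensional point x; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    n = x.size
    term1 = -a * np.exp(-b * np.sqrt(np.sum(x ** 2) / n))   # valley term
    term2 = -np.exp(np.sum(np.cos(c * x)) / n)              # oscillatory term
    return term1 + term2 + a + np.e
```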

Benchmark 2-Rastrigin Function
The Rastrigin function is a continuous, non-convex function defined on an n-dimensional space. It is a representative example of a nonlinear, differentiable, multimodal function, commonly used as a benchmark to test the performance of optimization algorithms due to its topography. Its global minimum, f(x*) = 0 at x_i = 0, i = 1, …, n, is surrounded by a large number of local minima. The optimization problem that originates from the Rastrigin function is written as:

$$\min_x f(x) = A\,n + \sum_{i=1}^{n} \left[ x_i^2 - A \cos(2\pi x_i) \right]$$

where A is a constant, in the present case equal to 10, and n = 2.
As can be seen from Figure 5, the CSPSO also provided a full picture of the Rastrigin function topography. The 3D map shows that this function has several local minima completely surrounded by maximum points; therefore, this is a more complex landscape. The 2D map is then built using a projection of the 3D topography onto a perpendicular plane. A contour map of this function was also added to the 2D map to provide a visual idea of where the minima and maxima are located. In the 2D map, it is possible to see the contour lines surrounding the minimum and maximum areas of the objective function landscape; through the values on the contour lines, one can identify the maxima and minima. Again, the contour lines form small "puddles" around the local minima and around the maximum points. It is interesting to see that the proposed method was able to fill the "puddles" where the local minima are located, bypassing the maximum barriers. Moreover, from the 2D map, it is possible to see that the methodology proposed here can identify all the local minima while drawing their respective confidence regions. The evidence provided by the first benchmark test is reinforced by these new results: the methodology accurately provides the uncertainty map of each minimum found, and the surface map shows that the confidence regions lie precisely within the function minima.
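The Rastrigin landscape just described can be reproduced from its formula; a minimal sketch (names and defaults are illustrative) that also shows the lattice of local minima near integer coordinates:

```python
import numpy as np

def rastrigin(x, A=10.0):
    """Rastrigin function; global minimum 0 at the origin, with local minima
    near every integer lattice point (e.g., f(1, 1) = 2 for A = 10)."""
    x = np.asarray(x, dtype=float)
    return A * x.size + np.sum(x ** 2 - A * np.cos(2.0 * np.pi * x))
```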

Benchmark 3-Cross-in-Tray Function
The Cross-in-Tray function is characterized as continuous, non-convex, multimodal, non-differentiable, and defined on a 2-dimensional space. This function has a very peculiar shape in which a cross-like formation crosses the middle of its topography, forming four separate quarters; in each quarter, several local minima are found. The optimization problem that originates from the Cross-in-Tray function is written as:

$$\min_x f(x_1, x_2) = -0.0001 \left( \left| \sin(x_1)\sin(x_2) \exp\left( \left| 100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi} \right| \right) \right| + 1 \right)^{0.1}$$

In this case, a shallow surface containing several minima is presented; therefore, it is a landscape more prone to hold the optimizer in one of the local minima. As before, the 2D map was built based on the projection of the 3D topography onto a perpendicular plane. Through the contour lines, it is possible to see that there are only slight differences between the minima; even so, the proposed methodology was able to fill the "puddles" where the minima are found. In this final test of this series of input-constrained functions, the CSPSO also provided a full picture of the Cross-in-Tray function topography (Figure 6). In the 2D map, it is possible to see that the proposed methodology can identify all the local minima even under peculiar topographies like this one. By drawing the local minima confidence regions, substantial evidence is provided that the methodology accurately determines the uncertainty map of each minimum found. From the surface map, it is possible to see that the confidence regions lie precisely within the function minima.
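A minimal sketch of this benchmark is given below; the commonly reported global minima are f ≈ −2.06261 at the four symmetric points (±1.34941, ±1.34941) (function name is illustrative):

```python
import numpy as np

def cross_in_tray(x1, x2):
    """Cross-in-Tray function; four symmetric global minima of about -2.06261."""
    inner = abs(np.sin(x1) * np.sin(x2)
                * np.exp(abs(100.0 - np.sqrt(x1 ** 2 + x2 ** 2) / np.pi)))
    return -0.0001 * (inner + 1.0) ** 0.1
```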

Benchmark 4-Egg Crate Function constrained
The Egg Crate function is characterized as continuous, non-convex, multimodal, differentiable, separable, and defined on a 2-dimensional space. In this case, an additional constraint was added to reduce the objective function area and evaluate the robustness of the methodology. The optimization problem that originates from the Egg Crate function with input and output constraints is written as:

$$\min_x f(x_1, x_2) = x_1^2 + x_2^2 + 25\left( \sin^2 x_1 + \sin^2 x_2 \right)$$

subject to the additional constraint shown in Figure 7. Figure 7 shows this new benchmark function, in which constraints were included within the landscape. As can be seen from the 3D map, this function has a global minimum surrounded by maximum points and local minima; therefore, this is a complex landscape in which it should be difficult to identify the global minimum. The 3D map provides an overall idea of this landscape. A contour map of this function was also added to the 2D map, providing a more detailed visualization of this topography in the 2D plane. Through the values on the contour lines, it is possible to see the maxima, the minima, and the global minimum; the maximum contours surround the global minimum, providing only a small access to this area. Furthermore, in this case, the proposed methodology was able to fully identify all the local minima and the global one. In this test, it is possible to see that the CSPSO also provided a full picture of the constrained landscape. As can be seen in Figure 7, the introduced constraint removed four local minima from the landscape. It is observed that the CSPSO pointed, at the constraint, in the direction of the local minima that lie outside the constrained area; this can be seen in the 2D map and is further evidence of the efficiency of the proposed methodology. In this case, it is also demonstrated that it was possible to precisely draw the confidence regions of each local minimum.

Benchmark 5-Keane Function constrained
The Keane function is defined as continuous, non-convex, multimodal, differentiable, non-separable, and defined on a 2-dimensional space. As in the previous case, an additional constraint was added to reduce the objective function area and evaluate the robustness of the methodology. The optimization problem that originates from the Keane function is written as:

$$\min_x f(x_1, x_2) = -\frac{\sin^2(x_1 - x_2)\, \sin^2(x_1 + x_2)}{\sqrt{x_1^2 + x_2^2}}$$

subject to the additional constraint shown in Figure 8. Figure 8 shows this new benchmark function, in which constraints were included within the landscape. In this test, it is possible to see that the CSPSO also provided a full picture of the constrained landscape. As can be seen in Figure 8, the introduced constraint removed seven local minima from the landscape. In this case, there were two minima beyond the constraint, but close to it; however, the CSPSO could not point out the existence of these minima beyond the constraint. This is probably due to the landscape of the constrained Keane function, which has a deep depression close to the restriction; this depression might work as a gravitational pole that attracts all the particles close by. On the other hand, the CSPSO was able to effectively identify all minima within the constrained landscape, proving the efficiency of the proposed method.

Benchmark 6-Himmelblau Function
The Himmelblau Function is defined as continuous, non-convex, multimodal, differentiable, and defined on a 2-dimensional space. This function is characterized by a flat landscape, which makes the search for the minima more difficult. The optimization problem originating from the Himmelblau Function is written as: min f(x1, x2) = (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2 (39). Figure 9 shows this final benchmark, which has different properties than the previous ones. As can be seen, this function has a very flat landscape in which four local minima are found: f(3, 2) = 0; f(-2.805, 3.131) = 0; f(-3.779, -3.283) = 0; f(3.584, -1.848) = 0. The contour map is also provided for this function, giving a more detailed visualization of this topography in the 2D plane. In this case, the above-mentioned "puddles" are not found in the contours. Instead, two large areas where the objective function equals 100 are observed, and the local minima are found within these areas. This is a representation of how flat/shallow the landscape is, which barely changes its slope until the minima are found. Moreover, in this case the proposed methodology was able to fully identify all the local minima, which here all share the global minimum value of zero.

A case study-Optimization of a True Moving Bed Reactor
The true moving bed reactor (TMBR) is a multifunctional unit capable of integrating reaction and separation simultaneously, an alternative to the traditional production scheme in which one dedicated unit is used for each step (reaction and separation) of the production route. It is highlighted as an efficient way to leverage the potential of systems in which the reactions are limited by equilibrium [23][24][25][26]. The equilibrium conversion is not a limitation in those units because one or more of the products are continuously removed from the reaction zone [27]. However, this system presents complex dynamics and nonlinearity, which make unit optimization and control very difficult tasks.
In [28], a true moving bed reactor was proposed and optimized to perform the synthesis of n-propyl propionate; the reader is referred to that work for more details about the process, the phenomenological model, and the system. The optimization problem for the synthesis of n-propyl propionate in a TMBR unit is 5-dimensional and was formulated in [28] in terms of the extract stream purity, Pur_x; the raffinate stream purity, Pur_r; the eluent consumption, EC; the unit productivity, Pr; and the minimum purity requirement, P_st. As it is an input/output system, the process outputs depend on the process inputs, which are here the decision variables, x, in this case the operating flow rates of the process: feed flow rate (Q_f), eluent flow rate (Q_e), recycling flow rate (Q_IV), extract flow rate (Q_x), and solid flow rate (Q_s). The goal is therefore to maximize the productivity while minimizing the eluent consumption and keeping the purities as close as possible to their targets. This objective function has two local minima. Figure 10 presents the results obtained after applying the proposed methodology to the TMBR optimization problem. The figure provides the objective function landscape, the confidence regions of the optimal points, and the contour of the expected region for the local minima. The information about the contour of the expected region was obtained from [29] and used as a reference to check the CSPSO result. As can be seen in Figure 10, the algorithm was able to identify the two local minima of the objective function and to describe the confidence region of each optimal point with precision, at a confidence level of 99%. This means that any point inside the confidence region will generate an optimal operating condition in the process.
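The TMBR model itself is not reproduced here, but the confidence-region idea can be sketched on a stand-in landscape (Himmelblau, from the previous benchmark). The sketch below assumes one simplified reading of the likelihood test, namely that the increase in the objective over the best value behaves like a likelihood-ratio statistic, so sampled points within the χ2 quantile of the best value are kept. The sampled cloud stands in for the final swarm population, and the acceptance rule is an illustrative assumption, not the paper's exact statistic.

```python
import numpy as np

def himmelblau(p):
    # Stand-in multimodal objective (four minima, all with value zero)
    return (p[..., 0]**2 + p[..., 1] - 11)**2 + (p[..., 0] + p[..., 1]**2 - 7)**2

rng = np.random.default_rng(0)
pts = rng.uniform(-6.0, 6.0, size=(20000, 2))  # stand-in for the swarm
f = himmelblau(pts)

f_opt = f.min()
chi2_99_2dof = 9.210  # chi-squared quantile, 2 d.o.f., 99% confidence
# Likelihood-ratio style cut: keep points close enough (in objective value)
# to the best point found; these outline the confidence regions.
region = pts[f - f_opt <= chi2_99_2dof]
print(len(region), float(f_opt))
```

Plotting `region` would show clouds around each minimum rather than a single point, which is the "optimal region" picture that Figure 10 gives for the TMBR.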

Conclusions
This work presented the mathematical deduction of a likelihood test to evaluate confidence regions of optimal points provided by population-based methods. A novel particle swarm optimization algorithm, called the constrained sliding particle swarm optimization (CSPSO), was proposed to apply this likelihood test to the identification of multiple local minima while concomitantly mapping their associated uncertainties in constrained optimizations.
A series of benchmark tests was carried out to evaluate the efficiency, consistency, and robustness of the proposed methodology. Six optimization problems involving multimodal functions were tested. The CSPSO was able to identify all local minima and the global minimum, when one existed, for all the benchmarks tested here. Furthermore, through the population and using the proposed likelihood test, it was possible to draw the confidence regions of each local minimum and of the global one when it existed. A final test involving the optimization of a chemical unit, a true moving bed reactor, was then provided. This last optimization case demonstrated the application of the proposed methodology to a practical problem. The method was able to identify all the local minima of this optimization problem. Furthermore, as the methodology identified the optimal regions with precision, this corroborates that the errors, e and h, follow a χ2 distribution, in accordance with the proposed methodology.
Therefore, it was possible to map the uncertainties of the optimal points, extending the result of the optimization algorithm from a single value to a feasible region. All the decision variables mapped within the optimal regions lead to an optimal value of the objective function while satisfying the problem constraints.

Funding: This work was financially supported by: Project NORTE-01-0145-FEDER-029384, funded by FEDER funds through NORTE 2020-Programa Operacional Regional do NORTE-and by national funds (PIDDAC) through FCT/MCTES; Base Funding UIDB/50020/2020 of the Associate Laboratory LSRE-LCM, funded by national funds through FCT/MCTES (PIDDAC); Capes, financial code 001; and FCT-Fundação para a Ciência e Tecnologia under the CEEC Institucional program.