A Comparative Study of Stochastic Model Predictive Controllers

Abstract: A comparative study of two state-of-the-art stochastic model predictive controllers for linear systems with parametric and additive uncertainties is presented. On the one hand, Stochastic Model Predictive Control (SMPC) is based on analytical methods and solves an optimal control problem (OCP) similar to that of a classic Model Predictive Control (MPC) with constraints. SMPC defines probabilistic constraints on the states, which are transformed into equivalent deterministic ones. On the other hand, Scenario-based Model Predictive Control (SCMPC) solves an OCP for a specified number of random realizations of the uncertainties, also called scenarios. In this paper, classic MPC, SMPC and SCMPC are compared through two numerical examples. By means of several Monte Carlo simulations, their performances are evaluated using criteria such as the number of successful runs, the number of constraint violations, the integral absolute error and the computational cost. Moreover, a Stochastic Model Predictive Control Toolbox was developed by the authors and is available on MATLAB Central; with it, an SMPC or an SCMPC can be simulated to control multivariable linear systems with additive disturbances. This software was used to carry out part of the simulations of the numerical examples in this article and can be used to reproduce the results.


Introduction
Model Predictive Control (MPC) is a widely used strategy for the control of industrial processes [1][2][3], robotics and automation [4][5][6], energy efficiency of buildings and renewable energies [7][8][9][10][11]. This is due to its "ability to predict" the future behavior of the real process using an explicit model of it. At each sampling time, with the current information of the process variables, it makes predictions of the process dynamics along a prediction horizon [12][13][14]. These predictions, which depend on the (unknown) future controls, are incorporated into a cost index to solve an open-loop Optimal Control Problem (OCP) subject to constraints. As a solution, the sequence of optimal controls that minimizes such an index is obtained. Finally, the moving horizon (or receding horizon) technique is used, where only the first element of the solution (corresponding to the input at the current instant) is applied to the process, discarding the rest and repeating the OCP with updated information at the next sampling time.
A classic MPC requires a model of the process to be controlled. Since uncertainties, either parametric or external disturbances, are always present, using a model that does not represent the real process can be troublesome, leading to erroneous predictions and affecting control performance, stability and feasibility [12,15].
Robust Model Predictive Control (RMPC) approaches [15][16][17] were devised to control systems under the premise that process uncertainties are bounded with known limits, which allows building a family of process models, one of which represents the current system. In that case, a polytopic system containing the nominal model is formed. The OCP is then formulated not for a single predicted trajectory, but for as many trajectories as the polytope has vertices, forming a band around the trajectory of the nominal model. Finally, the set of optimal sequences is calculated by minimizing the worst case (min-max).
Stochastic predictive control strategies [18][19][20] consider that most processes have parametric uncertainties and independent disturbances of stochastic nature (bounded or not) with known probability distributions, mostly of the Gaussian type. As mentioned in [21,22], most of these techniques can be grouped into two approaches, depending on how this statistical information is incorporated into the OCP: the deterministic strategies (stochastic MPC or SMPC) and the scenario-based approaches (or SCMPC).
Those of the first group (SMPC) propose an OCP based on the expected value of a quadratic cost index, replacing the hard constraints on the states with probabilistic ones. In SMPC, it is required that the probability of constraint violation in a state does not exceed an allowed risk level [23][24][25][26]. Based on the knowledge of the covariances of the random variables and their propagation along the prediction horizon, an SMPC problem is converted into a deterministic one. The resulting problem is an OCP with a cost index on the nominal state trajectory subject to deterministic constraints, considering that the uncertainties are a function of the nominal states. Because the uncertainties in SMPC are not necessarily bounded, recursive feasibility cannot be guaranteed [19,27], as a terminal cost and terminal set cannot be obtained. However, by bounding the uncertainties, it is possible to generate a minimal invariant set [23,24] and tightened constraints in such a way as to ensure feasibility and stability.
The scenario-based group (SCMPC) presents similarities to RMPC, since optimization is performed for various scenarios based on a nominal trajectory or a worst-case scenario. Unlike in the RMPC approach, the scenarios to be optimized are not the vertices of a polytope; instead, a number of models given by a formula based on an established confidence level [22,28,29,30] is generated randomly at each control period for different values of the uncertainties. These techniques are widely used when the uncertainties do not obey any known type of distribution. However, in addition to their high computational load (due to the random generation of scenarios and their subsequent optimization), their "randomness" can lead to erratic closed-loop behaviors [29], since most scenarios may not represent the behavior of the current system.
The aim of this article is to present a comparative study of two stochastic model predictive controllers, belonging to the SMPC and SCMPC groups, for linear systems with parametric and additive uncertainties. The main highlights are:

• A detailed description of the theoretical background of each strategy is presented. Emphasis is placed on the formulation of the optimal control problem and on how the uncertainties are addressed. In addition, the SCMPC formulation for the worst-case OCP is analyzed.

• The ways in which the OCPs are stated in each strategy are compared with respect to the cost function and the constraints on the states according to the statistical information. The structural similarity between the SMPC approach and classic MPC is shown by transforming probabilistic constraints into deterministic ones.

• The viability of these two control strategies is analyzed through two numerical examples: a two-mass spring SISO system with parametric and additive uncertainties and a nonlinear quadruple-tank system with additive uncertainties. The controllers are compared using performance indices such as the number of successful runs, the number of constraint violations, the mean integral absolute error and the computational cost.

• As complementary material, the specialized software Stochastic Model Predictive Control Toolbox, developed by the authors, is available on MATLAB Central [31]. This tool allows readers to reproduce part of the results presented here, or to tune and simulate SMPCs or SCMPCs for controlling multivariable systems with additive disturbances following Gaussian probability distributions.
The results of the numerical examples show that the two stochastic schemes have a higher probability of success than the classic MPC. The SCMPC presents the highest probability of success, but also the highest computational cost. The SMPC has a computational cost similar to that of the classic MPC, but with a considerable increase in the probability of success.
This article is organized as follows. Section 2 presents a review of classic MPC. Stochastic MPC approaches based on probabilistic constraints on states (SMPC) and scenario-based (SCMPC) for control of linear systems are shown in Section 3. Two numerical examples comparing the performance of stochastic MPCs and classic MPC are presented in Section 4, using simulations to control mechanical and liquid level systems. For comparison purposes, a set of indicators related to control performance, constraint violations and computational cost, among others, was calculated for both examples. Finally, the conclusions are presented in Section 5.

Model Predictive Control Strategy
MPC is a strategy that uses an explicit model of the process: at each sampling time, with the current information of the process variables, predictions of the future behavior of the process along a horizon are made. Such predictions are incorporated into a cost index to solve an open-loop Optimal Control Problem (OCP) subject to constraints, which results in the sequence of future optimal controls.
Consider the discrete linear system (1), where the variables x k+i ∈ R n x and u k+i ∈ R n u represent the state vector and the system inputs, respectively, at instant k + i ∈ N; A ∈ R n x ×n x is the state matrix and B ∈ R n x ×n u is the system input matrix.

For every instant k, based on (1) and with the availability of the current state of the plant x k , predictions are made for the states x k+i+1|k and inputs u k+i|k along a prediction horizon N, ∀i ∈ {0, 1, . . . , N − 1}. These predictions are included in the quadratic cost index (2), which is subsequently minimized in the OCP (3), where the current state x k is known, so x k|k = x k . The matrices Q x ∈ R n x ×n x , R u ∈ R n u ×n u and P N ∈ R n x ×n x are weighting matrices that penalize the first N predicted states, the predicted inputs and the terminal state, respectively. Q x and R u are defined by the designer such that Q x ≥ 0 and R u > 0, and P N is obtained from a quadratic stability analysis [13,32,33]. Notice that the subscript k + i|k indicates the predicted value of the variable at instant k + i, based on the information available at time k.

Each control period, the problem stated in (3) is solved, obtaining the vector of optimal decision variables u * k = {u * k|k , u * k+1|k , . . . , u * k+N−1|k }, which drives the system states to a desired operating point [34] or, as a regulator, towards the origin. Next, applying the receding horizon strategy, only the first element u * k|k of the control sequence u * k is used, and the OCP (3) is solved again at time k + 1. The cost function (2) obeys the dual-mode prediction paradigm [12,13,15], which ensures stability for an appropriate (or long enough) horizon N by incorporating a terminal cost x k+N|k P N x k+N|k that penalizes the terminal state x k+N|k .
Another condition, also used to ensure closed-loop stability, is to complement the dual mode by adding the terminal constraint x k+N|k ∈ X T [12,14] to the OCP (3), with X being the set of allowed values for the states. The purpose of this is to force the terminal state x k+N|k and the following states to remain within a safe zone or terminal set X T , positively invariant [32,35,36], under the terminal control law u k+i|k = Kx k+i|k , ∀i ∈ {N, N + 1, . . . , ∞}, such that X T ⊂ X.
If the sequence of future controls u k = {u k|k , u k+1|k , . . . , u k+N−1|k } is governed by the state feedback control law (4) [13,32,33], where K ∈ R n u ×n x is a feedback matrix that stabilizes the system, then the cost index (2) can be expressed in terms of v k ∈ R n u by replacing (4) in (2). The index is now expressed in terms of the constant matrices Q xx = Q x + K R u K and Q xv = 2K R u and the new decision variables v k+i|k .

Classic MPC: Given the current state x k|k = x k , the model (1) and the control law (4), classic MPC recursively makes predictions of the states x k+i+1|k and inputs u k+i|k along a horizon N, ∀i ∈ {0, 1, . . . , N − 1}. With the matrix A cl = (A + BK) strictly stable, x k+i+1|k as a function of v k+i|k is given by (6). Incorporating (6) in (5), the OCP (7a) is stated subject to the constraints (7b)-(7f):
x k+i+1|k = A cl x k+i|k + Bv k+i|k (7b) Notice that constraint (7b) indicates that the predicted state x k+i+1|k is computed recursively using the state and input from the previous step k + i|k, where x k|k = x k . However, since (7b) is implicit in (7a), its incorporation in the set of constraints is not required. The linear inequalities (7c) and (7d) are constraints on the predicted state and input trajectories, respectively, with H ∈ R c x ×n x and D ∈ R c u ×n u ; the vectors h ∈ R c x and d ∈ R c u represent the constraint limits; and c x and c u are the numbers of state and input constraints, respectively. Given the quadratic and convex nature of (5), the linear model (6) and the type of constraints (7c) and (7d), a finite-horizon OCP stated as in (7a) can be solved at each control period. This OCP is a quadratic programming (QP) problem with a global optimum, whose solution results in the optimal control vector v * k , of which only the first element v * k|k is applied to the input at that instant. That is, u k|k = Kx k + v * k|k .
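The classic MPC just described can be sketched numerically in condensed (stacked) form. The following is a minimal sketch with an assumed toy system and a hand-picked gain K; for brevity it penalizes the decision variables v directly instead of u = Kx + v (dropping the cross term Q xv of (5)) and solves the unconstrained minimizer in closed form. Adding the linear inequalities (7c) and (7d) would turn the problem into the QP mentioned above.

```python
import numpy as np

# Toy system x_{k+1} = A x_k + B u_k with prefeedback u = K x + v (assumed values).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-1.0, -1.5]])           # stabilizing gain, chosen by hand
A_cl = A + B @ K                       # closed-loop prediction matrix (6)
N, nx, nu = 6, 2, 1
Qx, Ru, PN = np.eye(nx), 0.1 * np.eye(nu), np.eye(nx)

# Condensed prediction: stacked x_{k+1..k+N} = Phi x_k + Gam v.
Phi = np.vstack([np.linalg.matrix_power(A_cl, i + 1) for i in range(N)])
Gam = np.zeros((N * nx, N * nu))
for i in range(N):
    for j in range(i + 1):
        Gam[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A_cl, i - j) @ B

Qbar = np.kron(np.eye(N), Qx)
Qbar[-nx:, -nx:] = PN                  # terminal weight P_N on x_{k+N|k}
Rbar = np.kron(np.eye(N), Ru)

def solve_ocp(xk):
    """Unconstrained minimiser of the stacked cost; with (7c)-(7d) this is a QP."""
    H = Gam.T @ Qbar @ Gam + Rbar
    f = Gam.T @ Qbar @ Phi @ xk
    return np.linalg.solve(H, -f).reshape(N, nu)

# Receding horizon: apply only the first element, u = K x + v*_{k|k}.
x = np.array([1.0, 0.0])
for _ in range(30):
    v = solve_ocp(x)
    u = K @ x + v[0]
    x = A @ x + B @ u
```

After 30 receding-horizon steps the regulator has driven the state close to the origin, illustrating the loop described in the text.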

Stochastic MPC
Let us consider the dynamics of an uncertain system defined by Equation (8), where w ∈ R n w are additive disturbances, G ∈ R n x ×n w is a matrix that relates the influence of w on the system states and δ are parametric uncertainties. Stochastic model predictive control approaches were inspired by this type of system, in which δ and w are stochastic in nature, independent and with known probability distributions. Since this statistical information is taken into account in the solution of the OCP [18][19][20][21], stochastic predictive control has been widely accepted and applied in different areas such as building air conditioning [37][38][39], renewable energy management [40,41], process control [3,42], robotics and automotive [5,22,[43][44][45]. A more extensive review of these and other applications is presented in [18,19,21,25,46], where network control systems, air traffic, finance, path planning and training control are discussed.

Most stochastic model predictive control strategies can be classified into two groups: (1) those based on analytical methods (SMPC) [23][24][25]47], which solve an OCP based on the expected value of a cost index, subject to probabilistic constraints (usually on the predicted states); and (2) those based on random scenarios (SCMPC) [22,28,29], which solve an OCP for a determined number of random realizations of the uncertainties, also called scenarios.

SMPC Strategy
In the SMPC approach, the cost index is expressed as the expected value of a quadratic index, and hard constraints on the states are replaced by probabilistic ones. The aim of SMPC is that the probability of constraint violation in a state does not exceed a permitted risk level. By obtaining a deterministic equivalent of this stochastic approach, it is possible to solve an OCP similar to (7a).
Let us consider the uncertain system defined in (8), such that the state prediction x k+i+1|k , ∀i ∈ {0, 1, . . . , N − 1}, is given by (9), where δ is associated with bounded parametric uncertainties [30] with probability distribution P δ , while w (which is not necessarily bounded [21,24,26]) has a distribution P w and the sequence {w k , w k+1 , . . . , w k+N−1 } has zero mean (E[w k+i ] = 0) with its values independent and identically distributed (i.i.d.).

Let us define the variables z k+i|k = E k [x k+i|k ] and e k+i|k = x k+i|k − z k+i|k , representing the expected value of x k+i|k and its deviation, respectively. The prediction of x k+i|k is thus split in (10) into two parts: a deterministic part involving its predicted nominal value z k+i|k , and a stochastic part corresponding to the effect of the disturbances, represented by e k+i|k , whose mean is zero.

Replacing (10) in the state feedback control law (4) [23,47] (a common alternative in SMPC is based on feedback of the deviation [24,25], as u k+i|k = Ke k+i|k + v k+i|k ), expression (11) is obtained. The predictions ∀i ∈ {0, 1, . . . , N − 1} for the nominal state and the deviation are given by Equations (12) and (13), respectively, where z k|k = x k , e k|k = 0 and A cl (δ k+i ) = (A(δ k+i ) + B(δ k+i )K) is strictly stable.

Due to the random nature of δ and w and their influence on x k+i|k and u k+i|k , the cost index to minimize, given by the expected value of (5), is defined in (14). Replacing (10) in (14), recalling that E k [x k+i|k ] = z k+i|k and that the expected value of a product involving e k+i|k is zero (since e k+i|k has zero mean), a new cost index (15) is obtained containing the nominal trajectories of the predicted states z k+i|k , where the term E k [∑(e k+i|k Q xx e k+i|k ) + e k+N|k P N e k+N|k ] is a constant that can be excluded from the cost index, since it does not depend on the decision variables v k+i|k and does not influence the optimum.
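The deviation dynamics (13) allow the covariance of e k+i|k to be propagated offline. A minimal sketch follows, assuming a fixed closed-loop matrix (i.e., no parametric uncertainty) and the disturbance covariance used later in Example 1; the system matrices themselves are illustrative.

```python
import numpy as np

# Offline propagation of the deviation covariance Sigma_i = Cov[e_{k+i|k}]
# following (13) with a fixed A_cl (no parametric uncertainty); values assumed.
A_cl = np.array([[0.9, 0.1], [0.0, 0.8]])
G = np.eye(2)
Sigma_w = np.diag([0.022**2, 0.022**2])   # disturbance covariance (Example 1 value)
N = 6

Sigmas = [np.zeros((2, 2))]               # e_{k|k} = 0, so Sigma_0 = 0
for i in range(N):
    # Sigma_{i+1} = A_cl Sigma_i A_cl' + G Sigma_w G'
    S = A_cl @ Sigmas[-1] @ A_cl.T + G @ Sigma_w @ G.T
    Sigmas.append(S)
```

Because e k+i|k does not depend on the state or on the decision variables, this recursion runs once offline, and the resulting covariances feed the constraint-tightening step described next.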
Chance Constraints: In view of the stochastic behavior of the uncertainty and its influence on the predicted states, the state constraints in the OCP can be formulated as probabilistic constraints [21,26] of the form (16), based on a risk level or allowed probability of violation ε ∈ [0, 1]. The constraints expressed as in (16) can be interpreted as follows: the probability that a predicted state does not violate the jth constraint must be greater than or equal to the jth desired probability level 1 − ε j .

Substituting Equation (10) into (16) and separating the deterministic variable H j z k+i+1|k from the stochastic one, h j − H j e k+i+1|k , the constraints can be rewritten as (17). A new bound η j k+i+1|k is then set, calculated in such a way that H j z k+i+1|k ≤ η j k+i+1|k . This bound can be obtained by solving a stochastic optimization problem stated as in (18a), subject to chance constraints and whose decision variable is η j k+i+1|k .

As an alternative to solving (18a), let us reorder Equation (17) as (19). Assume that the probability distributions of w k and δ k are known. Then, the random variable H j e k+i+1|k − h j has a known cumulative distribution function (CDF) F He−h [24,25], and its respective inverse function F −1 He−h is also known. For this reason, the equivalent of (19) is (20), and the value of η j k+i+1|k can be calculated by evaluating the inverse function F −1 He−h .

It should be noted that the calculation of η j k+i+1|k , through either (18a) or (20), presents an advantage in computational terms when it is performed offline, since the random variable e k+i+1|k does not depend on the state x k+i|k or the decision variable v k+i|k (see (13)).
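For the Gaussian case the inverse CDF has a closed form, so the tightened bound can be computed directly. A sketch under assumed values: since H j e is zero-mean Gaussian with variance H j Σ i H j (Σ i the propagated deviation covariance), the chance constraint reduces to H j z ≤ η j with η j = h j − Φ⁻¹(1 − ε)·√(H j Σ i H j ).

```python
import numpy as np
from statistics import NormalDist

# Gaussian constraint tightening: P[H_j(z+e) <= h_j] >= 1-eps reduces to
# H_j z <= eta_j with eta_j = h_j - PhiInv(1-eps) * std(H_j e). Values assumed.
def tightened_bound(Hj, hj, Sigma_i, eps):
    s = np.sqrt(Hj @ Sigma_i @ Hj)        # standard deviation of H_j e_{k+i+1|k}
    return hj - NormalDist().inv_cdf(1.0 - eps) * s

Sigma = np.diag([0.02**2, 0.02**2])       # deviation covariance at step i (assumed)
Hj = np.array([0.0, 1.0])                 # constraint on the second state
hj, eps = 0.12, 0.05                      # limit and risk level (Example 1 values)
eta = tightened_bound(Hj, hj, Sigma, eps)
```

The tightened limit η is strictly smaller than h j ; the smaller the allowed risk ε, the stronger the tightening, which is exactly how the deterministic equivalent (21) inherits the probabilistic guarantee.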
Therefore, the deterministic equivalent of the chance constraints (16) is given by (21). Based on the nominal index (15) and the deterministic constraints (21), the deterministic equivalent of the SMPC with finite prediction horizon N can be stated as (22a). Notice that the inclusion of (22b) in the set of constraints is not required, since it is implicit in (22a). This OCP, like (7a), can be solved using quadratic programming, where at each sampling time only the first element v * k|k of the optimal sequence v * k is applied to the system, that is, u k|k = Kx k + v * k|k .

SCMPC Strategy
The aim of SCMPC is to solve a convex OCP whose cost index for a prediction horizon N is composed not of a single state trajectory, but of the sum of a set of M trajectories generated by random realizations of the disturbances, also known as scenarios [28,29]. The optimal controls u * k = {u * k|k , u * k+1|k , . . . , u * k+N−1|k } are those that minimize such an index while satisfying the constraints for each scenario. For this reason, the selection of the number of scenarios, M, requires special attention in order to guarantee a defined minimum level of confidence of non-violation of the constraints.
Let us consider the uncertain system defined in (8), with control inputs calculated with the control law (4). For a given instant k, the state predictions x k+i+1|k and future inputs u k+i|k , ∀i ∈ {0, 1, . . . , N − 1}, are given by (23) and (24), where the stochastic variables δ and w are independent, δ is bounded and the sequence {w k|k , . . . , w k+N−1|k } is independent and identically distributed (i.i.d.). Replacing (24) in (23), the discrete model (25) is obtained as a function of v k+i|k , where A cl (δ k+i|k ) = (A(δ k+i|k ) + B(δ k+i|k )K), with A cl (δ k+i|k ) strictly stable.

For a given instant k, let us define the jth scenario, denoted by the pair γ [j] k = (δ [j] k , w [j] k ), where δ [j] k and w [j] k are random and known realizations along N of the parametric and additive uncertainties, respectively. Thus, the set Γ k of M random and independent scenarios for instant k is Γ k = {γ [1] k , γ [2] k , . . . , γ [M] k }. An advantage of SCMPC is that knowledge of the probability distributions P δ and P w is not required, since the sequence Γ k is obtained at each instant k from experimental data or by means of a random number generator. Therefore, the predictions expressed by Equations (24) and (25) for the jth scenario are given by (26) and (27), where the initial condition x [j] k|k = x k is the same for all scenarios.

Then, taking (5), a cost index J [j] N (x k , v k ) can be formulated for each scenario j, representing the trajectory of x [j] k+i|k . From (28), a global cost index J Sc (x k , v k ) is defined in (29) as the sum of the M generated trajectories, which form a band around a nominal trajectory. Under (29), a scenario-based optimal control problem is stated in (30a), whose solution at time k yields the optimal controls {v * k|k , v * k+1|k , . . . , v * k+N−1|k } that minimize J Sc (x k , v k ) and do not violate the constraints, ∀i ∈ {0, 1, . . . , N − 1}, ∀j ∈ {1, 2, . . . , M} (notice that constraint (30b) is implicit in (30a) and is not required to solve the optimization problem). The constraints (30c) and (30d), whose matrices are the same as in (7c) and (7d), must be met for each scenario j.
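For linear dynamics with additive scenarios, the global index summed over scenarios remains quadratic in v and its unconstrained minimizer has a closed form. A minimal sketch under assumed values, with stacked predictions x[j] = Φ x0 + Γ v + Ψ w[j]:

```python
import numpy as np

rng = np.random.default_rng(0)

# SCMPC sketch (numerical values assumed): M disturbance scenarios give M
# stacked predicted trajectories x^j = Phi x0 + Gam v + Psi w^j; the summed
# quadratic index is minimised in closed form (constraints omitted).
A_cl = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
G = np.eye(2)
N, M, nx, nu = 4, 50, 2, 1
x0 = np.array([1.0, 0.0])
Qbar = np.eye(N * nx)                     # stage/terminal state weights
Rbar = 0.1 * np.eye(N * nu)               # input weight per scenario

Phi = np.vstack([np.linalg.matrix_power(A_cl, i + 1) for i in range(N)])
Gam = np.zeros((N * nx, N * nu))
Psi = np.zeros((N * nx, N * nx))
for i in range(N):
    for j in range(i + 1):
        Aij = np.linalg.matrix_power(A_cl, i - j)
        Gam[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = Aij @ B
        Psi[i*nx:(i+1)*nx, j*nx:(j+1)*nx] = Aij @ G

W = [rng.normal(0.0, 0.02, size=N * nx) for _ in range(M)]  # M scenarios of w

# Setting the gradient of sum_j ||Phi x0 + Gam v + Psi w^j||^2 + M ||v||_R^2
# to zero gives the normal equations below.
H = M * (Gam.T @ Qbar @ Gam + Rbar)
f = Gam.T @ Qbar @ sum(Phi @ x0 + Psi @ w for w in W)
v_star = np.linalg.solve(H, -f)
```

In the full SCMPC, the per-scenario state constraints (30c) must also hold, which turns this closed-form solve into the QP described in the text.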
A robust SCMPC approach is achieved by replacing (30a) with (31), so that the sequence v k that minimizes the worst case (min-max optimization) [21,28] of the M realizations is calculated, as opposed to (30a), in which a nominal trajectory is minimized. This robust approach is computationally expensive, because the worst case over all the scenarios must be obtained first, and a minimization is subsequently performed on it. Another drawback is that the worst case does not always correspond to the reality at that moment, so the optimum applied to the real process could lead to poor behavior.
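The min-max structure can be illustrated in the simplest possible setting: horizon N = 1 with a scalar input, so the decision variable can be scanned on a grid. All numerical values below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Robust (min-max) SCMPC sketch: minimise the worst-case scenario cost
# max_j J^j(v) over a grid of candidate inputs v. Values assumed.
a_cl, b, M = 0.9, 0.5, 20
x0 = 1.0
W = rng.normal(0.0, 0.1, size=M)           # one disturbance draw per scenario

def scenario_cost(v, w):                    # J^j(v) for N = 1, Qx = 1, Ru = 0.1
    x1 = a_cl * x0 + b * v + w
    return x1**2 + 0.1 * v**2

grid = np.linspace(-3.0, 3.0, 1201)
worst = np.array([max(scenario_cost(v, w) for w in W) for v in grid])
v_minmax = grid[np.argmin(worst)]           # arg min_v max_j J^j(v)
```

Note that the inner maximum makes the objective nonsmooth, which is one reason the robust variant is handled with specialized solvers (the article uses Matlab's fminimax) rather than standard QP routines.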
Number of Scenarios: The value of M [18,19] demands special attention, in order to guarantee compliance with the state constraints with at least a specified probability level. By establishing an allowable violation probability ε ∈ [0, 1], a very small parameter β ∈ [0, 1] (e.g., β = 10 −9 , so that the guarantee holds with confidence at least 1 − β) and the number of decision variables d = n u N, the probabilistic constraints P k [x k+i+1|k ∈ X] ≥ 1 − ε, where X is the set of allowed values for the states, can be transformed into the M deterministic constraints (30c), with M given by (32). With the minimum number of scenarios given by (32), the state constraints are satisfied with probability at least 1 − ε [21,48]; the lower the allowed probability of violation, the greater the number of scenarios M.
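As an illustration of how M scales with ε, β and d, the following sketch uses one widely cited sufficient bound from the scenario-approach literature, M ≥ (2/ε)(ln(1/β) + d). This is an assumption on our part: Equation (32) in the article may use a tighter variant, so the numbers produced here need not match the M values reported in the examples.

```python
import math

# A common sufficient scenario bound (assumed; the article's Equation (32)
# may differ): M >= (2/eps) * (ln(1/beta) + d), with d = n_u * N.
def min_scenarios(eps, beta, d):
    return math.ceil((2.0 / eps) * (math.log(1.0 / beta) + d))

# Example 1 settings: eps = 0.05, beta = 1e-9, d = n_u * N = 1 * 6.
M = min_scenarios(eps=0.05, beta=1e-9, d=6)
```

The qualitative behavior matches the text: halving ε roughly doubles M, while the dependence on the confidence parameter β is only logarithmic.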
As pointed out in [18,29], a disadvantage is that the random nature of the scenarios can produce unlikely circumstances (i.e., scenarios far from the reality of the process), so the closed-loop system may exhibit erratic behavior. For this reason, in [29], the removal of R scenarios from the previously defined M is proposed. In [22], the concept of scenarios is exploited to generate reachable probabilistic sets offline, which are used in an MPC formulation based on probabilistic tubes.

Numerical Examples, Results and Discussion
In this section, two examples are presented in which various MPCs were evaluated by performing N r Monte Carlo simulations for each control scheme, always starting from the same initial state x 0|0 and assuming that the state measurements are accurate. The performance of the MPCs was analyzed through the indices listed in Table 1. A 64-bit Windows 10 computer with 16 GB of RAM and a 2.5 GHz Intel Core i7 processor was used. Simulations were run in Matlab R2018b; the control actions for classic MPC, SMPC and SCMPC were calculated using the quadprog interface of the Mosek 9.2 optimization software, and Matlab's fminimax function was used for the Robust SCMPC. Furthermore, for reproducibility, the specialized Stochastic Model Predictive Control Toolbox software was developed in Matlab to carry out part of the simulations in this article. This software allows simulating an SMPC or an SCMPC (Matlab's quadprog solver is used for optimization) to control multivariable systems with additive disturbances, and it is available on MATLAB Central [31] so that the reader can reproduce the results of this section or apply it to other systems.

Example 1
Consider the two-mass spring system [16,49] with frictionless sliding of Figure 1, where the masses m 1 , with position x 1 , and m 2 , with position x 2 , are linked by a spring with elastic constant k s . The control input u is a force acting on m 1 , and w 1 and w 2 are external disturbances acting on m 1 and m 2 , respectively.
Setting the state vector as x = [x 1 x 2 x 3 x 4 ] , where x 3 and x 4 are the velocities of the masses, and the additive disturbances as w = [w 1 w 2 ] , its representation in the discrete state space, obtained by Euler's approximation method for a sampling time T s , is given below, where m 1 , m 2 = 1 kg are constant and T s = 0.1 s (selected to fulfill the Nyquist-Shannon sampling theorem, considering the highest frequency of the system poles), and the constraints |x 3 |, |x 4 | ≤ 0.12 m/s must be satisfied. The elastic constant k s is associated with the parametric uncertainties δ and has a uniform probability distribution k s ∼ U ([0.5, 2.0]) N/m. The additive disturbances w have a normal distribution w ∼ N (0, Σ w ) N, with zero mean and covariance matrix Σ w = diag(0.022 2 , 0.022 2 ). For all controllers, N = 6, Q x = diag(1, 1, 4, 6), R u = 1, ε SMPC = 0.05, ε SCMPC = 0.05 and β SCMPC = 10 −9 were applied; by means of (32), M = 896.
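The Euler-discretized model can be sketched as follows, assuming the standard two-mass spring benchmark dynamics m 1 ẍ 1 = −k s (x 1 − x 2 ) + u + w 1 and m 2 ẍ 2 = k s (x 1 − x 2 ) + w 2 (the article does not reproduce the continuous equations, so this form is our assumption):

```python
import numpy as np

# Euler discretisation of the (assumed) two-mass-spring benchmark dynamics:
# m1*x1'' = -ks(x1-x2) + u + w1,  m2*x2'' = ks(x1-x2) + w2.
def discrete_matrices(ks, m1=1.0, m2=1.0, Ts=0.1):
    Ac = np.array([[0.0,    0.0,    1.0, 0.0],
                   [0.0,    0.0,    0.0, 1.0],
                   [-ks/m1, ks/m1,  0.0, 0.0],
                   [ks/m2, -ks/m2,  0.0, 0.0]])
    Bc = np.array([[0.0], [0.0], [1.0/m1], [0.0]])
    Gc = np.array([[0.0, 0.0], [0.0, 0.0], [1.0/m1, 0.0], [0.0, 1.0/m2]])
    return np.eye(4) + Ts * Ac, Ts * Bc, Ts * Gc   # Euler: A = I + Ts*Ac

rng = np.random.default_rng(2)
ks = rng.uniform(0.5, 2.0)                 # draw ks ~ U([0.5, 2.0]) N/m (Cases 2/3)
A, B, G = discrete_matrices(ks)
w = rng.multivariate_normal(np.zeros(2), np.diag([0.022**2, 0.022**2]))
x_next = A @ np.array([0.5, 0.5, 0.0, 0.0]) + B @ np.array([0.0]) + G @ w
```

Redrawing ks at every step reproduces Case 2, while drawing it once per simulation reproduces Case 3.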
Three cases were established based on the parameter k s to evaluate the performance of five MPCs (classic without constraints (MPC n/c), classic with constraints (MPC w/c), SMPC, SCMPC and Robust SCMPC):
• Case 1: k s is known, constant and set to its nominal value k s = 1.25.
• Case 2: k s is unknown and varies according to its probability distribution at every control period.
• Case 3: k s is unknown and remains constant along each simulation; its value varies according to its probability distribution from one simulation to the next.
For each case, N r = 100 simulations, each with a duration of N T = 100 sampling times, were performed, starting from the initial state x 0|0 = x 0 = [0.5 0.5 0 0] towards the origin as the desired state.
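The Monte Carlo evaluation protocol can be sketched generically. The toy closed loop below (a fixed gain u = Kx on an assumed second-order system, standing in for the actual MPCs) shows how the Table 1 style indicators N v , N s , p s and IAE avg are accumulated over N r runs:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte-Carlo evaluation sketch (toy closed loop u = Kx, values assumed):
# counts constraint violations N_v, successful runs N_s (no violation in a
# run) and the mean integral absolute error over N_r runs.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[-1.0, -1.5]])
x0, limit, Nr, T = np.array([0.5, 0.0]), 0.25, 100, 100

Nv, Ns, iae = 0, 0, []
for _ in range(Nr):
    x, err, violated = x0.copy(), 0.0, False
    for _ in range(T):
        w = rng.normal(0.0, 0.005, size=2)     # additive disturbance
        x = A @ x + B @ (K @ x) + w
        err += abs(x[0])                       # IAE on the first state
        if abs(x[1]) > limit:                  # velocity-style constraint
            Nv += 1
            violated = True
    Ns += not violated
    iae.append(err)

ps = Ns / Nr                                   # probability of success
iae_avg = float(np.mean(iae))
```

The same loop structure applies to all five controllers in the examples; only the control-action computation (the OCP solve) changes.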
Case 1 Results: For Case 1, the feedback matrix K and the terminal state weighting P N are obtained from solving a Quadratic Optimal Control Problem (LQR) [13,20]. Table 2 shows the high probabilities of success p s (0.84, 0.74 and 0.64) achieved by the stochastic MPCs. However, the greater success of the SCMPC and Robust SCMPC required longer times t avg (79.0 ms and 235.7 ms) to obtain the control sequence than the SMPC (5.7 ms), which works as a classic MPC with constraints. This makes sense since, at each iteration, the SCMPC and Robust SCMPC select 896 random realizations of w and solve the OCP (30a) for all of them using (29) or (31).

Figure 2 shows the N r = 100 trajectories of the real states and input (thin solid lines), starting from x 0 = [0.5 0.5 0 0] towards the origin. The MPC n/c (Figure 2a), having no constraints, presents the most violations (N v = 2416) and the largest overshoots (PO avg = 20.98%), while the mean trajectories (thick solid lines) of the constrained MPCs do not exceed the limits (black dashed lines). Notice how, for the stochastic approaches (Figure 2c-e), the mean trajectories plus or minus one standard deviation (a band covering about 68% of the runs) remain within these limits or barely exceed them. The largest standard deviations σ max occurred in the scenario-based MPCs (0.036 and 0.054), due to the randomness mentioned in [29] and because the worst case is not always the closest to reality (hence their highest IAE avg (52.50) and IAU avg (8.70) of all).

Case 2 Results:
The performance indices and the real state trajectories are shown in Table 3 and Figure 3, respectively. As in Case 1, the stochastic MPCs had the highest p s , with an increase for the scenario-based schemes (0.94 and 0.86) despite the randomness of k s . Nevertheless, these schemes take the longest to compute the solution (t avg of 220.3 ms and 311.8 ms), since, in addition to selecting 896 random scenarios for w, the same must be done for k s .
As seen in Figure 3, the mean trajectories including plus or minus one standard deviation (a band covering about 68% of the runs) of the stochastic MPCs are the only ones that stay within the limits. This robustness of the stochastic strategies implies higher values of the IAE avg and IAU avg indicators compared to the others.
Case 3 Results: As can be seen in Table 4 and Figure 4, it is corroborated once again that the highest p s and the mean trajectories with standard deviations within the limits belong to the stochastic MPCs. Compared to Case 2, the t avg indicators are very similar, but this case shows the highest standard deviations σ max of the three cases. The high PO avg (28.23%) presented by the Robust SCMPC can be seen in the trajectories of the state x 3 (Figure 4e); however, this strategy presents the lowest N v (27) of all the MPCs.

Example 2
Consider the quadruple-tank system [50,51] of Figure 5, in which the aim is to control the level of liquid h i in tank i, ∀i ∈ {1, 2, 3, 4}, by means of pumps 1 and 2, whose flows are proportional to the applied voltages (Q P 1 = k 1 v 1 , Q P 2 = k 2 v 2 ) and are distributed by the valves in proportions determined by γ 1 , γ 2 ∈ [0, 1]. Let A i be the cross section of tank i; a i and a 12 the areas of the tank outlet pipes; and g the acceleration due to gravity. By performing a mass balance and applying Bernoulli's law, the equations that describe the behavior of the nonlinear system are given by (33).

Let us define the state and input vectors as deviations from the operating point P 0 = {8.187 cm, 7.720 cm, 8.039 cm, 4.0 V, 3.5 V}. Linearizing around P 0 and using Euler's approximation for a sampling period T s , the linear model in the discrete state space can be represented with a 1 , a 2 , a 12 = 0.352 cm 2 ; a 3 , a 4 = 1.006 cm 2 ; k 1 , k 2 = 33.333 cm 3 /(V s); γ 1 = 0.6, γ 2 = 0.7; g = 981 cm/s 2 ; and T s = 5 s.
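The nonlinear tank dynamics can be sketched as follows. We assume the classic Johansson quadruple-tank equations (the article's variant additionally includes the coupling orifice a 12 between the lower tanks, which is omitted here), with an assumed cross section A i since that value is not recoverable from the text:

```python
import numpy as np

# Classic quadruple-tank dynamics (Johansson's model assumed as a sketch; the
# article's variant also includes a coupling orifice a12, omitted here).
A1 = A2 = A3 = A4 = 28.0          # tank cross sections in cm^2 (assumed value)
a1, a2 = 0.352, 0.352             # outlet pipe areas from the article, cm^2
a3, a4 = 1.006, 1.006
k1 = k2 = 33.333                  # pump gains, cm^3/(V s)
g1, g2, g = 0.6, 0.7, 981.0       # valve ratios gamma_1, gamma_2; gravity cm/s^2

def tank_dynamics(h, v1, v2):
    """dh/dt from mass balance + Bernoulli's law, h = [h1, h2, h3, h4]."""
    s = np.sqrt(2.0 * g * np.maximum(h, 0.0))     # Torricelli outflow terms
    return np.array([
        (-a1 * s[0] + a3 * s[2] + g1 * k1 * v1) / A1,
        (-a2 * s[1] + a4 * s[3] + g2 * k2 * v2) / A2,
        (-a3 * s[2] + (1.0 - g2) * k2 * v2) / A3,
        (-a4 * s[3] + (1.0 - g1) * k1 * v1) / A4,
    ])

# One Euler step with Ts = 5 s from an assumed initial level.
h = np.array([8.0, 8.0, 4.0, 4.0])
h_next = h + 5.0 * tank_dynamics(h, 4.0, 3.5)
```

Linearizing this vector field around an operating point and applying A = I + T s ·∂f/∂h yields the discrete linear model used by the controllers in this example.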
To analyze the effect of the allowed violation probability of the states, ε, on the number of considered scenarios M and on the different performance indicators, this example compares the performance of an SCMPC for five values of ε (0.4, 0.3, 0.2, 0.1 and 0.05) against a constrained classic MPC (MPC w/c). For each scheme, N r = 100 runs were carried out on the nonlinear model (33), each with a duration of N T = 40 sampling times, starting from the initial state x 0|0 = x 0 = [−5.5 − 6.9 − 0.5 − 0.2] towards the origin as the desired state.
For all controllers, N = 5, Q x = diag(10, 10, 1, 1), R u = diag(1, 1) and β = 10 −9 . The matrices K and P N are obtained from solving a Quadratic Optimal Control Problem (LQR) [13,20].

Results: Table 5 shows the performance indices of the MPC w/c and the SCMPCs for the N r = 100 runs, while Figure 6 depicts the closed-loop simulations. The SCMPCs showed the fewest violations, in both quantity N v (15, 14, 10, 7 and 6) and percentage of deviation PO avg (1.55%, 1.60%, 1.78%, 1.06% and 0.48%), compared to the MPC w/c (N v = 263 and PO avg = 5.76%). Furthermore, the stochastic scheme shows high numbers of successful runs N s (87, 88, 91, 93 and 94) and probabilities of success p s (0.87, 0.88, 0.91, 0.93 and 0.94), which increase as ε decreases. However, this improvement in p s leads to an increase in the number of scenarios M (133, 177, 266, 531 and 1062) to be considered in the OCP (30a), and hence an increase in the average time the algorithm takes to find a solution at each instant k (t avg of 12.3 ms, 14.2 ms, 18.1 ms and 30.3 ms). Regarding the indices IAE avg , IAU avg and σ max , although no considerable difference between the controllers is observed, it can be concluded that the lower the ε, the lower these values.
As can be seen in Figure 6, all N r = 100 state trajectories (thin solid lines) start from the initial state x̄ 0 and converge towards the origin. The mean trajectories (thick solid lines) of the SCMPCs remain considerably far from the constraints (black dashed lines), in contrast to that of the MPC w/c, which passes very close to them and even violates them (hence its highest PO avg = 5.76%). On the other hand, for the SCMPCs, the mean trajectories plus/minus one standard deviation (dashed lines) remain within the limits, meaning that, for a normal distribution, the probability that a state is within the allowed limits is at least 68%. As for the applied inputs, they start at their maximum allowed values and decrease as the states converge towards the origin.
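The 68% figure comes from the one-standard-deviation rule for Gaussian variables, which a quick Monte-Carlo check reproduces (the sample size and seed below are arbitrary):

```python
import random
import statistics

# Draw standard-normal samples and count how many fall within
# one empirical standard deviation of the empirical mean.
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mu = statistics.fmean(samples)
sd = statistics.stdev(samples)
frac = sum(abs(x - mu) <= sd for x in samples) / len(samples)
# frac is close to 0.6827, the mass of N(0,1) within one standard deviation
```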

Conclusions and Future Work
Two stochastic predictive control approaches are compared in this article: one based on analytical methods (SMPC) and the other based on scenarios (SCMPC). The low computational cost of the SMPC is due to the fact that the statistical information of the uncertainty is used offline to adjust the state constraints. However, if this information changes, it cannot be taken into account during operation. In contrast, new statistical information can be incorporated into the SCMPC to generate the scenarios online, but this leads to a high computational cost.
It is shown that the SMPC can be reduced to a deterministic OCP (22a) whose structure is similar to that of a classic MPC with constraints (7a), with a similar computational cost t avg but a considerably higher probability of success p s, thanks to the offline constraint adjustment.
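The offline constraint adjustment can be sketched as follows for a scalar Gaussian state: a chance constraint P(x ≤ x max) ≥ 1 − ε is replaced by the deterministic bound x ≤ x max − Φ⁻¹(1 − ε)σ, where Φ⁻¹ is the standard-normal quantile. This is a minimal illustration of the tightening idea, not the paper's OCP (22a); the values of x max and σ below are arbitrary.

```python
from statistics import NormalDist

def tightened_bound(x_max, sigma, eps):
    """Deterministic equivalent of P(x <= x_max) >= 1 - eps for a
    Gaussian x with standard deviation sigma: shift the bound inward
    by the (1 - eps) quantile of the standard normal distribution."""
    return x_max - NormalDist().inv_cdf(1.0 - eps) * sigma

# Illustrative values: a smaller allowed violation eps tightens the bound more.
b_loose = tightened_bound(1.0, 0.2, 0.4)   # eps = 0.4
b_tight = tightened_bound(1.0, 0.2, 0.05)  # eps = 0.05
```

Because the quantile is computed once from fixed statistics, the online problem stays a plain constrained QP, which is consistent with the SMPC's near-MPC computational cost.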
The scenario-based approaches, SCMPC and robust SCMPC, gave the highest probabilities of success compared to the others, at the expense of a high computational cost, since they need to solve an OCP with M random scenarios at each control period. From the results shown in Table 5, it can be concluded that decreasing the parameter ε, related to the probability of violation allowed for the state constraints, increases the probability of success p s. However, this improvement results in an increase in the number of scenarios M to be considered in the OCP and, therefore, in the average time t avg that the algorithm takes to find a solution at each sampling time.
It is clear that the consideration of statistical process information, through its inclusion in the OCP, significantly improves the probability of success p s. This can be verified in Figures 2-4 and 6, where it is observed that only the stochastic approaches reached mean trajectories with standard deviations within the limits or barely exceeding them. Thus, for a normal distribution, it means that the probability that a state is within the allowed limits is at least 68%.
Scenario-based schemes are attractive in the sense that they achieve a greater probability of success and allow the inclusion of new statistical information online, without necessarily requiring known probability distributions. However, due to the randomness of the scenarios, their solutions can show undesirable behaviors despite their high computational cost. These drawbacks would prevent their implementation for the control of systems with fast dynamics, such as the two-mass spring system of the example, whose T s (0.1 s) is close to the lowest t avg (79.0 ms) of the SCMPCs in the three cases. Given these drawbacks, future work continues on the proposal of new SCMPC schemes that reduce the computational load and discard unlikely scenarios.
Finally, the specialized software Stochastic Model Predictive Control Toolbox, created by the authors to carry out part of the simulations in this article, is available on MATLAB Central for interested readers. It allows readers to reproduce the results presented here, or to tune and simulate SMPCs or SCMPCs for controlling multivariable linear systems with additive disturbances that follow Gaussian probability distributions.