Article

A Comparative Study of Stochastic Model Predictive Controllers

Instituto Universitario de Automática e Informática Industrial, Universitat Politècnica de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Electronics 2020, 9(12), 2078; https://doi.org/10.3390/electronics9122078
Submission received: 15 October 2020 / Revised: 26 November 2020 / Accepted: 3 December 2020 / Published: 6 December 2020
(This article belongs to the Special Issue Model Predictive Control and Optimization Applied to Process Control)

Abstract:
A comparative study of two state-of-the-art stochastic model predictive controllers for linear systems with parametric and additive uncertainties is presented. On the one hand, Stochastic Model Predictive Control (SMPC) is based on analytical methods and solves an optimal control problem (OCP) similar to that of a classic Model Predictive Control (MPC) with constraints. SMPC defines probabilistic constraints on the states, which are transformed into equivalent deterministic ones. On the other hand, Scenario-based Model Predictive Control (SCMPC) solves an OCP for a specified number of random realizations of the uncertainties, also called scenarios. In this paper, classic MPC, SMPC and SCMPC are compared through two numerical examples. By means of several Monte Carlo simulations, their performances are assessed using criteria such as the number of successful runs, the number of constraint violations, the integral absolute error and the computational cost. Moreover, a Stochastic Model Predictive Control Toolbox, developed by the authors and available on MATLAB Central, makes it possible to simulate an SMPC or an SCMPC for the control of multivariable linear systems with additive disturbances. This software was used to carry out part of the simulations of the numerical examples in this article and can be used to reproduce the results.

1. Introduction

Model Predictive Control (MPC) is a widely used strategy for the control of industrial processes [1,2,3], robotics and automation [4,5,6], and energy efficiency of buildings and renewable energies [7,8,9,10,11]. This is due to its “ability to predict” the future behavior of the real process using an explicit model of it. At each sampling time, with the current information of the process variables, it makes predictions of the process dynamics along a prediction horizon [12,13,14]. These predictions, which depend on the (unknown) future controls, are incorporated into a cost index to solve an open-loop Optimal Control Problem (OCP) subject to constraints. As a solution, the sequence of optimal controls that minimizes such an index is obtained. Finally, the moving-horizon (or receding-horizon) technique is used, whereby only the first element of the solution (corresponding to the input at the current instant) is applied to the process, discarding the rest and repeating the OCP with updated information at the next sampling time.
A classic MPC requires a model of the process to be controlled. Since uncertainties, either parametric or external disturbances, are always present, using a model that does not represent the real process can be problematic, leading to erroneous predictions and affecting the control performance, stability and feasibility [12,15].
Robust Model Predictive Control (RMPC) approaches [15,16,17] were devised to control systems under the premise that process uncertainties are bounded with known limits, so that a family of process models can be built, one of which represents the current system. In that case, a polytopic system containing the nominal one is formed. The OCP is then formulated not for a single predicted trajectory, but for as many trajectories as the polytope has vertices, forming a band around the trajectory of the nominal model. Finally, the set of optimal sequences is calculated by minimizing the worst case (min–max).
Stochastic predictive control strategies [18,19,20] consider that most processes have parametric uncertainties and independent disturbances of stochastic nature (bounded or not) with known probability distributions, mostly of the Gaussian type. As mentioned in [21,22], most of these techniques can be grouped into two approaches, depending on how this statistical information is incorporated into the OCP: the deterministic strategies (stochastic MPC or SMPC) and the scenario-based approaches (or SCMPC).
Those of the first group (SMPC) propose an OCP based on the expected value of a quadratic cost index, replacing the hard constraints on the states with probabilistic ones. In SMPC, it is required that the probability of constraint violation of a state does not exceed an allowed risk level [23,24,25,26]. Based on knowledge of the covariances of the random variables and their propagation along the prediction horizon, an SMPC problem is converted into a deterministic one. The resulting problem is an OCP with a cost index on the nominal state trajectory subject to deterministic constraints, considering that the uncertainties are a function of the nominal states. When the uncertainties of an SMPC are unbounded, recursive feasibility cannot be guaranteed [19,27], as it is not possible to obtain a terminal cost and terminal set. However, by bounding the uncertainties, it is possible to generate a minimal invariant set [23,24] and tightened constraints in such a way as to ensure feasibility and stability.
The scenario-based group (SCMPC) presents similarities to RMPC, since optimization is performed for various scenarios based on a nominal trajectory or a worst-case scenario. Unlike in the RMPC approach, the scenarios to be optimized are not the vertices of a polytope; instead, the number of models obeys a formula based on an established confidence level [22,28,29,30], and at each control period the scenarios are generated randomly for different values of the uncertainties. These techniques are widely used when the uncertainties do not obey any known distribution. However, in addition to their high computational load (due to the random generation of scenarios and their subsequent optimization), their “randomness” can lead to erratic closed-loop behaviors [29], since most scenarios may not represent the behavior of the current system.
The aim of this article is to present a comparative study of two stochastic model predictive controllers, for linear systems with parametric and additive uncertainties, belonging to the SMPC and SCMPC groups. The main highlights are:
  • A detailed description of the theoretical background of each strategy is presented. Emphasis is made on the formulation of the optimal control problem and how uncertainties are addressed. In addition, the SCMPC formulation for worst case OCP is analyzed.
  • The ways in which OCPs are stated in each strategy are compared with respect to the cost function and constraints on the states according to the statistical information. The structural similarity between the SMPC approach and classic MPC is shown by transforming probabilistic constraints into deterministic ones.
  • The viability of these two control strategies is analyzed through two numerical examples: a two-mass-spring SISO system with parametric and additive uncertainties and a nonlinear quadruple-tank system with additive uncertainties. The comparison of the controllers is made using performance indices such as the number of successful runs, the number of constraint violations, the mean value of the integral absolute error and the computational cost.
  • As complementary material, the specialized software Stochastic Model Predictive Control Toolbox, developed by the authors, is available in MATLAB Central [31]. This tool allows readers to reproduce part of the results here presented or to tune and simulate SMPCs or SCMPCs for controlling multivariable systems with additive disturbances which present Gaussian probability distributions.
The results of the numerical examples show that the two stochastic schemes have higher probability of success than the classic MPC. The SCMPC presents the highest probability of success, but with the highest computational cost. The SMPC has similar computational cost to the classic MPC, but with a considerable increase in the probability of success.
This article is organized as follows. Section 2 presents a review of classic MPC. Stochastic MPC approaches based on probabilistic constraints on states (SMPC) and scenario-based (SCMPC) for control of linear systems are shown in Section 3. Two numerical examples comparing the performance of stochastic MPCs and classic MPC are presented in Section 4, using simulations to control mechanical and liquid level systems. For comparison purposes, a set of indicators related to control performance, constraint violations and computational cost, among others, was calculated for both examples. Finally, the conclusions are presented in Section 5.

2. Model Predictive Control Strategy

MPC is a strategy that uses an explicit model of the process where, at each sampling time and with the current information of the process variables, predictions of the future behavior of the process along a horizon are made. Such predictions are incorporated into a cost index to solve an open-loop Optimal Control Problem (OCP) subject to constraints, which results in the sequence of future optimal controls.
Consider the discrete linear system (1), where $x_{k+i} \in \mathbb{R}^{n_x}$ and $u_{k+i} \in \mathbb{R}^{n_u}$ represent the state vector and system inputs, respectively, at instant $k+i \in \mathbb{N}$; $A \in \mathbb{R}^{n_x \times n_x}$ is the state matrix and $B \in \mathbb{R}^{n_x \times n_u}$ is the system input matrix
$x_{k+i+1} = A x_{k+i} + B u_{k+i}$  (1)
For every instant k, based on (1) and with the availability of the current state of the plant $\hat{x}_k$, predictions are made for the states $x_{k+i+1|k}$ and inputs $u_{k+i|k}$ along a prediction horizon N, $i \in \{0, 1, \dots, N-1\}$. These predictions are included in the quadratic cost index (2), which is subsequently minimized in the OCP (3)
$J_N(\hat{x}_k, \mathbf{u}_k) = \sum_{i=0}^{N-1} \left( x_{k+i|k}^\top Q_x x_{k+i|k} + u_{k+i|k}^\top R_u u_{k+i|k} \right) + x_{k+N|k}^\top P_N x_{k+N|k}$  (2)
$\min_{u_{k|k}, u_{k+1|k}, \dots, u_{k+N-1|k}} J_N(\hat{x}_k, \mathbf{u}_k)$  (3)
where the current state $\hat{x}_k$ is known, thus $x_{k|k} = \hat{x}_k$; $Q_x \in \mathbb{R}^{n_x \times n_x}$, $R_u \in \mathbb{R}^{n_u \times n_u}$ and $P_N \in \mathbb{R}^{n_x \times n_x}$ are weighting matrices that penalize the first N predicted states, the predicted inputs and the terminal state, respectively. The matrices $Q_x$ and $R_u$ are defined by the designer such that $Q_x \succeq 0$ and $R_u \succ 0$, and $P_N$ is obtained from a quadratic stability analysis [13,32,33]. Notice that the subscript $k+i|k$ indicates the predicted value of the variable for instant $k+i$, based on the information available at time k.
At each control period, the problem stated in (3) is solved, obtaining the vector of optimal decision variables $\mathbf{u}_k^* = \{u_{k|k}^*, u_{k+1|k}^*, \dots, u_{k+N-1|k}^*\}$, which drives the system states to a desired operating point [34] or, as a regulator, towards the origin. Next, applying the receding-horizon strategy, only the first element $u_{k|k}^*$ of the control sequence $\mathbf{u}_k^*$ is used, and the OCP (3) is solved again at time $k+1$.
The cost function (2) obeys the dual-mode prediction paradigm [12,13,15], which ensures stability for an appropriate (or long enough) horizon N by incorporating a terminal cost $x_{k+N|k}^\top P_N x_{k+N|k}$ that penalizes the terminal state $x_{k+N|k}$. Another condition, also used to ensure closed-loop stability, is to complement the dual mode by adding the terminal constraint $x_{k+N|k} \in X_T$ [12,14] to the OCP (3), with X being the set of allowed values for the states. The purpose of this is to force the terminal state $x_{k+N|k}$ and the following states to remain within a safe zone or terminal set $X_T$, positively invariant [32,35,36] under the terminal control law $u_{k+i|k} = K x_{k+i|k}$, $i \in \{N, N+1, \dots\}$, such that $X_T \subseteq X$.
If the sequence of future controls $\mathbf{u}_k = \{u_{k|k}, u_{k+1|k}, \dots, u_{k+N-1|k}\}$ is ruled by the state feedback control law (4) [13,32,33]
$u_{k+i|k} = K x_{k+i|k} + v_{k+i|k}$  (4)
where $K \in \mathbb{R}^{n_u \times n_x}$ is a feedback matrix that stabilizes the system, then the cost index (2) can be expressed in terms of $v_k \in \mathbb{R}^{n_u}$ by replacing (4) in (2). Now, the index is expressed in terms of the constant matrices $Q_{xx} = Q_x + K^\top R_u K$ and $Q_{xv} = 2 K^\top R_u$ and the new decision variables $\mathbf{v}_k = \{v_{k|k}, v_{k+1|k}, \dots, v_{k+N-1|k}\}$
$J_N(\hat{x}_k, \mathbf{v}_k) = \sum_{i=0}^{N-1} \left( x_{k+i|k}^\top Q_{xx} x_{k+i|k} + x_{k+i|k}^\top Q_{xv} v_{k+i|k} + v_{k+i|k}^\top R_u v_{k+i|k} \right) + x_{k+N|k}^\top P_N x_{k+N|k}$  (5)
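As a quick numerical check of this reparametrization, the identity $x^\top Q_x x + u^\top R_u u = x^\top Q_{xx} x + x^\top Q_{xv} v + v^\top R_u v$ (with $u = Kx + v$) can be verified for arbitrary test matrices; this is a sketch, and the values below are illustrative, not those of the article's examples:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nu = 4, 2

# Arbitrary weights and a feedback gain (illustrative values only)
Qx = np.diag([1.0, 1.0, 4.0, 6.0])
Ru = np.eye(nu)
K = rng.standard_normal((nu, nx))

# Matrices of the reparametrized index (5)
Qxx = Qx + K.T @ Ru @ K
Qxv = 2 * K.T @ Ru

# Check: x'Qx x + u'Ru u equals x'Qxx x + x'Qxv v + v'Ru v when u = Kx + v
x = rng.standard_normal(nx)
v = rng.standard_normal(nu)
u = K @ x + v
lhs = x @ Qx @ x + u @ Ru @ u
rhs = x @ Qxx @ x + x @ Qxv @ v + v @ Ru @ v
assert np.isclose(lhs, rhs)
```

The identity follows from expanding $(Kx+v)^\top R_u (Kx+v)$, which is why $Q_{xv}$ carries the factor of two.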
Classic MPC: Given the current state $x_{k|k} = \hat{x}_k$, the model (1) and the control law (4), classic MPC recursively makes predictions of the states $x_{k+i+1|k}$ and inputs $u_{k+i|k}$ along a horizon N, $i \in \{0, 1, \dots, N-1\}$. With the matrix $A_{cl} = (A + BK)$ strictly stable, $x_{k+i+1|k}$ as a function of $v_{k+i|k}$ is given by
$x_{k+i+1|k} = A_{cl} x_{k+i|k} + B v_{k+i|k}$  (6)
Incorporating (6) in (5), the OCP (7a) is stated subject to constraints (7b)–(7f)
$\min_{v_{k|k}, v_{k+1|k}, \dots, v_{k+N-1|k}} J_N(\hat{x}_k, \mathbf{v}_k)$  s.t.  (7a)
$x_{k+i+1|k} = A_{cl} x_{k+i|k} + B v_{k+i|k}$  (7b)
$H x_{k+i+1|k} \le h$  (7c)
$D u_{k+i|k} \le d$  (7d)
$x_{k|k} = \hat{x}_k$  (7e)
$i \in \{0, 1, \dots, N-1\}$  (7f)
Notice that constraint (7b) indicates that the predicted state $x_{k+i+1|k}$ is computed recursively using the state and inputs from the previous step $k+i|k$, where $x_{k|k} = \hat{x}_k$. However, since (7b) is implicit in (7a), its incorporation in the set of constraints is not required. The linear inequalities (7c) and (7d) are constraints on the predicted state and input trajectories, respectively, with $H \in \mathbb{R}^{c_x \times n_x}$ and $D \in \mathbb{R}^{c_u \times n_u}$; the vectors $h \in \mathbb{R}^{c_x}$ and $d \in \mathbb{R}^{c_u}$ represent the constraint limits; and $c_x$ and $c_u$ are the numbers of state and input constraints, respectively.
Given the quadratic and convex nature of (5), the linear model (6) and the type of constraints (7c) and (7d), the finite-horizon OCP stated in (7a) can be solved at each control period. This OCP is a quadratic programming (QP) problem with a global optimum whose solution is the optimal control vector $\mathbf{v}_k^* = \{v_{k|k}^*, v_{k+1|k}^*, \dots, v_{k+N-1|k}^*\}$, of which only the first element $v_{k|k}^*$ is applied to the input at that instant, that is, $u_{k|k} = K \hat{x}_k + v_{k|k}^*$.
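To illustrate how such an OCP is solved in a receding-horizon loop, the sketch below condenses the predictions into batch form and solves the unconstrained case in closed form, taking $K = 0$ so that $v = u$ and (5) reduces to (2); adding the inequalities (7c)–(7d) would turn this into a QP for a numerical solver. The double-integrator model and all values are illustrative assumptions, not taken from the article:

```python
import numpy as np

# Double integrator (illustrative), sampled at Ts = 0.1 s
Ts = 0.1
A = np.array([[1.0, Ts], [0.0, 1.0]])
B = np.array([[0.5 * Ts**2], [Ts]])
nx, nu, N = 2, 1, 10

Qx = np.eye(nx)
Ru = 0.1 * np.eye(nu)
PN = np.eye(nx)

# Batch prediction: X = F x0 + G U, with X = [x_{k+1|k}; ...; x_{k+N|k}]
F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((N * nx, N * nu))
for i in range(N):
    for j in range(i + 1):
        G[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B

# Block-diagonal weights: x_{k+1}..x_{k+N-1} weighted by Qx, x_{k+N} by PN
# (the x_{k|k} term of (2) is a constant and does not affect the optimum)
Qbar = np.kron(np.eye(N), Qx)
Qbar[-nx:, -nx:] = PN
Rbar = np.kron(np.eye(N), Ru)

def mpc_step(x):
    # Unconstrained minimizer of X'Qbar X + U'Rbar U with X = F x + G U
    Hqp = G.T @ Qbar @ G + Rbar
    f = G.T @ Qbar @ F @ x
    U = np.linalg.solve(Hqp, -f)
    return U[:nu]              # receding horizon: apply only the first input

x = np.array([1.0, 0.0])
for _ in range(60):
    u = mpc_step(x)
    x = A @ x + B @ u          # plant update (no disturbance here)

assert np.linalg.norm(x) < 0.05   # state regulated toward the origin
```

The receding-horizon structure is what distinguishes this loop from a one-shot optimal control computation: the whole sequence is recomputed from the measured state at every step.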

3. Stochastic MPC

Let us consider the dynamics of an uncertain system defined by Equation (8)
$x_{k+i+1} = A(\delta_{k+i}) x_{k+i} + B(\delta_{k+i}) u_{k+i} + G(\delta_{k+i}) w_{k+i}$  (8)
where $w \in \mathbb{R}^{n_w}$ are additive disturbances, $G \in \mathbb{R}^{n_x \times n_w}$ is a matrix that relates the influence of w on the system states and $\delta$ are parametric uncertainties.
Stochastic model predictive control approaches were inspired by this type of system, in which $\delta$ and w are stochastic in nature, independent and with known probability distributions. Since this statistical information is taken into account in the solution of the OCP [18,19,20,21], stochastic predictive control has been widely accepted and has been applied in different areas such as building air conditioning [37,38,39], renewable energy management [40,41], process control [3,42], and robotics and automotive systems [5,22,43,44,45]. A more extensive review of these and other applications is presented in [18,19,21,25,46], where network control systems, air traffic, finance, path planning and training control are discussed.
Most stochastic model predictive control strategies can be classified into two groups: (1) those based on analytical methods (SMPC) [23,24,25,47], which solve an OCP based on the expected value of a cost index, subject to probabilistic constraints (usually on the predicted states); and (2) those based on random scenarios (SCMPC) [22,28,29], which solve an OCP for a determined number of random realizations of uncertainties also called scenarios.

3.1. SMPC Strategy

In the SMPC approach, the cost index is expressed as the expected value of a quadratic index, and hard constraints on the states are replaced by probabilistic ones. The aim of SMPC is that the probability of constraint violation of a state does not exceed a permitted risk level. By obtaining a deterministic equivalent of this stochastic approach, it is possible to solve an OCP similar to (7a).
Let us consider the uncertain system defined in (8), such that the state prediction $x_{k+i+1|k}$, $i \in \{0, 1, \dots, N-1\}$, is given by
$x_{k+i+1|k} = A(\delta_{k+i}) x_{k+i|k} + B(\delta_{k+i}) u_{k+i|k} + G(\delta_{k+i}) w_{k+i}$  (9)
where $\delta$ is associated with bounded parametric uncertainties [30] with probability distribution $P_\delta$, while w (which is not necessarily bounded [21,24,26]) has a distribution $P_w$ and the sequence $\{w_k, w_{k+1}, \dots, w_{k+N-1}\}$ has zero mean ($E[w_{k+i}] = 0$), with its values independent and identically distributed (i.i.d.).
Let us define the variables $z_{k+i|k} = E_k[x_{k+i|k}]$ and $e_{k+i|k} = x_{k+i|k} - z_{k+i|k}$, representing the expected value of $x_{k+i|k}$ and its deviation, respectively. Let us split the prediction $x_{k+i|k}$ into two parts: a deterministic part, which involves its predicted nominal value $z_{k+i|k}$, and a stochastic one corresponding to the effect of the disturbances, represented by $e_{k+i|k}$, whose mean is zero
$x_{k+i|k} = z_{k+i|k} + e_{k+i|k}$  (10)
Replacing (10) in the state feedback control law (4) [23,47] (a common alternative in SMPC is based on feedback of the deviation [24,25], as $u_{k+i|k} = K e_{k+i|k} + v_{k+i|k}$), expression (11) is obtained
$u_{k+i|k} = K (z_{k+i|k} + e_{k+i|k}) + v_{k+i|k}$  (11)
Predictions for $i \in \{0, 1, \dots, N-1\}$ of the nominal state and the deviation are given by Equations (12) and (13), respectively, where $z_{k|k} = \hat{x}_k$, $e_{k|k} = 0$ and $A_{cl}(\delta_{k+i}) = (A(\delta_{k+i}) + B(\delta_{k+i}) K)$ is strictly stable
$z_{k+i+1|k} = A_{cl}(\delta_{k+i}) z_{k+i|k} + B(\delta_{k+i}) v_{k+i|k}$  (12)
$e_{k+i+1|k} = A_{cl}(\delta_{k+i}) e_{k+i|k} + G(\delta_{k+i}) w_{k+i}$  (13)
Due to the random nature of $\delta$ and w and their influence on $x_{k+i|k}$ and $u_{k+i|k}$, the cost index to minimize, given by the expected value of (5), is defined as $J_S(\hat{x}_k, \mathbf{v}_k)$ in (14)
$J_S(\hat{x}_k, \mathbf{v}_k) = E_k\!\left[ \sum_{i=0}^{N-1} \left( x_{k+i|k}^\top Q_{xx} x_{k+i|k} + x_{k+i|k}^\top Q_{xv} v_{k+i|k} + v_{k+i|k}^\top R_u v_{k+i|k} \right) + x_{k+N|k}^\top P_N x_{k+N|k} \right]$  (14)
Replacing (10) in (14), remembering that $E_k[x_{k+i|k}] = z_{k+i|k}$ and that the expected value of a product involving $e_{k+i|k}$ is zero (since $e_{k+i|k}$ is zero mean), a new cost index (15) is obtained containing the nominal trajectories of the predicted states $z_{k+i|k}$
$J_S(\hat{x}_k, \mathbf{v}_k) = \sum_{i=0}^{N-1} \left( z_{k+i|k}^\top Q_{xx} z_{k+i|k} + z_{k+i|k}^\top Q_{xv} v_{k+i|k} + v_{k+i|k}^\top R_u v_{k+i|k} \right) + z_{k+N|k}^\top P_N z_{k+N|k} + c$  (15)
where $c = E_k\!\left[ \sum_{i=0}^{N-1} \left( e_{k+i|k}^\top Q_{xx} e_{k+i|k} \right) + e_{k+N|k}^\top P_N e_{k+N|k} \right]$ is a constant term that can be excluded from the cost index, since it does not depend on the decision variables $v_{k+i|k}$ and does not influence the optimum.
Chance Constraints: In view of the stochastic behavior of the uncertainty and its influence on the predicted states, the state constraints of the OCP can be formulated as probabilistic constraints [21,26] of the form (16), based on a risk level or allowed probability of violation $\varepsilon_j \in [0, 1]$
$P_k[H_j x_{k+i+1|k} \le h_j] \ge 1 - \varepsilon_j, \quad j \in \{1, 2, \dots, c_x\}$  (16)
The constraints expressed as in (16) can be interpreted as follows: the probability that a predicted state does not violate the jth constraint must be greater than or equal to the desired probability level $1 - \varepsilon_j$.
Substituting Equation (10) into (16) and separating the deterministic term $H_j z_{k+i+1|k}$ from the stochastic one, $h_j - H_j e_{k+i+1|k}$, the constraints can be rewritten as
$P_k[H_j z_{k+i+1|k} \le h_j - H_j e_{k+i+1|k}] \ge 1 - \varepsilon_j$
Setting a new bound $\eta_{j,k+i+1|k}$, calculated in such a way that $H_j z_{k+i+1|k} \le \eta_{j,k+i+1|k}$, then
$P_k[\eta_{j,k+i+1|k} \le h_j - H_j e_{k+i+1|k}] \ge 1 - \varepsilon_j$  (17)
The bound $\eta_{j,k+i+1|k}$ can be obtained by solving a stochastic optimization problem stated as in (18a), subject to chance constraints and whose decision variable is $\eta$
$\eta_{j,k+i+1|k} = \max_{\eta} \eta$  s.t.  (18a)
$P_k[\eta \le h_j - H_j e_{k+i+1|k}] \ge 1 - \varepsilon_j$  (18b)
$i \in \{0, 1, \dots, N-1\}, \quad j \in \{1, 2, \dots, c_x\}$  (18c)
As an alternative to solving (18a), let us reorder Equation (17)
$P_k[H_j e_{k+i+1|k} \le h_j - \eta_{j,k+i+1|k}] \ge 1 - \varepsilon_j$  (19)
Let us assume that the probability distributions of $w_k$ and $\delta_k$ are known. Thus, the random variable $H_j e_{k+i+1|k} - h_j$ has a known cumulative distribution function (CDF) $F_{He-h}$ [24,25], and its inverse function $F_{He-h}^{-1}$ is also known. For this reason, the equivalent of (19) would be
$F_{He-h}(-\eta_{j,k+i+1|k}) \ge 1 - \varepsilon_j$
Therefore, the value of $\eta_{j,k+i+1|k}$ can be calculated by evaluating the inverse function $F_{He-h}^{-1}$ at $1 - \varepsilon_j$
$\eta_{j,k+i+1|k} = -F_{He-h}^{-1}(1 - \varepsilon_j)$  (20)
It should be noted that the calculation of $\eta_{j,k+i+1|k}$, through either (18a) or (20), presents an advantage in computational terms when it is performed offline, since the random variable $e_{k+i+1|k}$ depends neither on the state $x_{k+i|k}$ nor on the decision variable $v_{k+i|k}$ (see (13)). Therefore, the deterministic equivalent of the chance constraints (16) is
$H_j z_{k+i+1|k} \le \eta_{j,k+i+1|k}$  (21)
Based on the nominal index (15) and deterministic constraints (21), the deterministic equivalent of the SMPC with finite prediction horizon N can be stated as
$\min_{v_{k|k}, v_{k+1|k}, \dots, v_{k+N-1|k}} J_S(\hat{x}_k, \mathbf{v}_k)$  s.t.  (22a)
$z_{k+i+1|k} = A_{cl}(\delta_{k+i}) z_{k+i|k} + B(\delta_{k+i}) v_{k+i|k}$  (22b)
$H_j z_{k+i+1|k} \le \eta_{j,k+i+1|k}$  (22c)
$D u_{k+i|k} \le d$  (22d)
$z_{k|k} = \hat{x}_k$  (22e)
$i \in \{0, 1, \dots, N-1\}, \quad j \in \{1, 2, \dots, c_x\}$  (22f)
Notice that the inclusion of (22b) in the set of constraints is not required, since it is implicit in (22a). This OCP, like (7a), can be solved using quadratic programming, where at each sampling time only the first element $v_{k|k}^*$ of the optimal sequence $\mathbf{v}_k^*$ is applied to the system, that is, $u_{k|k} = K \hat{x}_k + v_{k|k}^*$.
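For the common case of Gaussian additive disturbances and no parametric uncertainty, the offline tightening (18a)/(20) reduces to subtracting a quantile of the deviation's standard deviation, with the covariance of $e_{k+i|k}$ propagated via (13). The sketch below assumes that case; all system data are illustrative, not those of the article's examples:

```python
import numpy as np
from statistics import NormalDist

# Illustrative data (not the article's example): stable closed loop,
# Gaussian additive disturbance w ~ N(0, Sw), no parametric uncertainty
Acl = np.array([[0.9, 0.1], [0.0, 0.8]])
Gd = np.eye(2)
Sw = np.diag([0.02**2, 0.01**2])

H = np.array([[1.0, 0.0]])   # single constraint H x <= h on the first state
h = np.array([1.0])
eps = 0.05                   # allowed probability of violation

# Propagate the covariance of e_{k+i+1|k} with (13): Se <- Acl Se Acl' + G Sw G'
Se = np.zeros((2, 2))        # e_{k|k} = 0
q = NormalDist().inv_cdf(1 - eps)   # Gaussian quantile (~1.645 for eps = 0.05)
eta = []
for i in range(6):           # horizon N = 6 (illustrative)
    Se = Acl @ Se @ Acl.T + Gd @ Sw @ Gd.T
    sigma = np.sqrt(H @ Se @ H.T).item()   # std of H e_{k+i+1|k}
    # Tightened bound for (21): H z <= h - q*sigma enforces (16)
    eta.append(h[0] - q * sigma)

assert all(e < h[0] for e in eta)   # every bound is tighter than h
```

Because the covariance recursion is monotone for a stable $A_{cl}$, the tightening grows (and $\eta$ shrinks) along the horizon, which matches the intuition that later predictions are more uncertain.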

3.2. SCMPC Strategy

The aim of SCMPC is to solve a convex OCP whose cost index for a prediction horizon N is composed not of a single state trajectory, but of the sum of a set of M trajectories generated by random realizations of the disturbances, also known as scenarios [28,29]. The optimal controls $\mathbf{u}_k^* = \{u_{k|k}^*, u_{k+1|k}^*, \dots, u_{k+N-1|k}^*\}$ are those that minimize such an index while satisfying the constraints for each scenario. For this reason, the selection of the number of scenarios, M, requires special attention in order to guarantee a defined minimum level of confidence or non-violation of the constraints.
Let us consider the uncertain system defined in (8), with control inputs calculated with the control law (4). For a given instant k, the state predictions $x_{k+i+1|k}$ and future inputs $u_{k+i|k}$, $i \in \{0, 1, \dots, N-1\}$, are given by
$x_{k+i+1|k} = A(\delta_{k+i|k}) x_{k+i|k} + B(\delta_{k+i|k}) u_{k+i|k} + G(\delta_{k+i|k}) w_{k+i|k}$  (23)
$u_{k+i|k} = K x_{k+i|k} + v_{k+i|k}$  (24)
where the stochastic variables $\delta$ and w are independent, $\delta$ is bounded and the sequence $\{w_{k|k}, \dots, w_{k+N-1|k}\}$ is independent and identically distributed (i.i.d.). Replacing (24) in (23), the discrete model (25) is obtained as a function of $v_{k+i|k}$, where $A_{cl}(\delta_{k+i|k}) = (A(\delta_{k+i|k}) + B(\delta_{k+i|k}) K)$ is strictly stable
$x_{k+i+1|k} = A_{cl}(\delta_{k+i|k}) x_{k+i|k} + B(\delta_{k+i|k}) v_{k+i|k} + G(\delta_{k+i|k}) w_{k+i|k}$  (25)
For a given instant k, let us define the jth scenario, denoted by the pair $\gamma_k^{[j]} = (\delta_k^{[j]}, w_k^{[j]})$, $j \in \{1, 2, \dots, M\}$, as
$\delta_k^{[j]} = \{\delta_{k|k}^{[j]}, \delta_{k+1|k}^{[j]}, \dots, \delta_{k+N-1|k}^{[j]}\}, \quad w_k^{[j]} = \{w_{k|k}^{[j]}, w_{k+1|k}^{[j]}, \dots, w_{k+N-1|k}^{[j]}\}$
where $\delta_k^{[j]}$ and $w_k^{[j]}$ are random and known realizations along N of the parametric and additive uncertainties, respectively. Thus, the set $\Gamma_k$ of M random and independent scenarios for the instant k is $\Gamma_k = \{\gamma_k^{[1]}, \gamma_k^{[2]}, \dots, \gamma_k^{[M]}\}$.
An advantage of SCMPC is that knowledge of the probability distributions $P_\delta$ and $P_w$ is not required, since the set $\Gamma_k$ is obtained at each instant k from experimental data or by means of a random number generator. Therefore, the predictions expressed by Equations (24) and (25) for the jth scenario are given by (26) and (27), where the initial condition is $x_{k|k}^{[j]} = \hat{x}_k$ and the decision variables $\{v_{k|k}, v_{k+1|k}, \dots, v_{k+N-1|k}\}$ are the same for each j
$x_{k+i+1|k}^{[j]} = A_{cl}(\delta_{k+i|k}^{[j]}) x_{k+i|k}^{[j]} + B(\delta_{k+i|k}^{[j]}) v_{k+i|k} + G(\delta_{k+i|k}^{[j]}) w_{k+i|k}^{[j]}$  (26)
$u_{k+i|k}^{[j]} = K x_{k+i|k}^{[j]} + v_{k+i|k}$  (27)
Then, from (5), a cost index $J_N^{[j]}(\hat{x}_k, \mathbf{v}_k)$ can be formulated for each scenario j, representing the trajectory of $x_{k+i|k}^{[j]}$ along N
$J_N^{[j]}(\hat{x}_k, \mathbf{v}_k) = \sum_{i=0}^{N-1} \left( x_{k+i|k}^{[j]\top} Q_{xx} x_{k+i|k}^{[j]} + x_{k+i|k}^{[j]\top} Q_{xv} v_{k+i|k} + v_{k+i|k}^\top R_u v_{k+i|k} \right) + x_{k+N|k}^{[j]\top} P_N x_{k+N|k}^{[j]}$  (28)
From (28), a global cost index $J_{Sc}(\hat{x}_k, \mathbf{v}_k)$ is defined as the sum of the M generated trajectories, which form a band around a nominal trajectory
$J_{Sc}(\hat{x}_k, \mathbf{v}_k) = \sum_{j=1}^{M} J_N^{[j]}(\hat{x}_k, \mathbf{v}_k)$  (29)
Under (29), a scenario-based optimal control problem (30a) is stated, whose solution at time k yields the optimal controls $\{v_{k|k}^*, v_{k+1|k}^*, \dots, v_{k+N-1|k}^*\}$ that minimize $J_{Sc}(\hat{x}_k, \mathbf{v}_k)$ and do not violate the constraints (notice that constraint (30b) is implicit in (30a) and is not required to solve the optimization problem)
$\min_{v_{k|k}, v_{k+1|k}, \dots, v_{k+N-1|k}} J_{Sc}(\hat{x}_k, \mathbf{v}_k)$  s.t.  (30a)
$x_{k+i+1|k}^{[j]} = A_{cl}(\delta_{k+i|k}^{[j]}) x_{k+i|k}^{[j]} + B(\delta_{k+i|k}^{[j]}) v_{k+i|k} + G(\delta_{k+i|k}^{[j]}) w_{k+i|k}^{[j]}$  (30b)
$H x_{k+i+1|k}^{[j]} \le h$  (30c)
$D u_{k+i|k}^{[j]} \le d$  (30d)
$x_{k|k}^{[j]} = \hat{x}_k$  (30e)
$i \in \{0, 1, \dots, N-1\}, \quad j \in \{1, 2, \dots, M\}$  (30f)
The constraints (30c) and (30d), whose matrices are the same as in (7c) and (7d), must be met for each scenario j.
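A sketch of how the summed scenario cost (29) is evaluated for a candidate input sequence: M i.i.d. disturbance realizations are drawn and each trajectory cost (28) is accumulated. The system and all values are illustrative assumptions; the actual controller would minimize this cost over the inputs subject to (30c)–(30e):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative system with additive disturbance only (no parametric uncertainty)
Acl = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
Gd = np.eye(2)
Qxx = np.eye(2)
Ru = np.eye(1)
PN = np.eye(2)

N, M = 5, 20                 # horizon and number of scenarios (toy values)
x0 = np.array([1.0, 0.0])
v = np.zeros((N, 1))         # fixed candidate sequence, shared by all scenarios
                             # (the cross term x'Qxv v of (28) vanishes for v = 0)

def scenario_cost(w_seq):
    # Trajectory cost (28) for one realization w_seq of the disturbances
    x, J = x0.copy(), 0.0
    for i in range(N):
        J += x @ Qxx @ x + v[i] @ Ru @ v[i]
        x = Acl @ x + B @ v[i] + Gd @ w_seq[i]   # prediction model (26)
    return J + x @ PN @ x                        # terminal cost

# Draw M i.i.d. scenarios w^{[j]} and sum their costs as in (29)
scenarios = rng.normal(0.0, 0.02, size=(M, N, 2))
J_sc = sum(scenario_cost(w) for w in scenarios)
assert J_sc > 0
```

Since the decision variables are shared across scenarios, minimizing this sum under the linear constraints of (30a) remains a single convex QP, only with M times as many predicted trajectories.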
A robust SCMPC approach is achieved by replacing (30a) with (31), so that the sequence $\mathbf{v}_k$ that minimizes the worst case (min–max optimization) [21,28] of the M realizations is calculated, as opposed to (30a), in which a nominal trajectory is minimized
$\min_{v_{k|k}, v_{k+1|k}, \dots, v_{k+N-1|k}} \; \max_{j = 1, 2, \dots, M} J_N^{[j]}(\hat{x}_k, \mathbf{v}_k)$  (31)
This robust approach is computationally expensive, because the worst case of all the scenarios must be obtained first and a minimization subsequently performed on it. Another drawback is that the worst case does not always correspond to the reality at that moment, so an optimum applied to the real process could lead to poor behavior.
Number of Scenarios: The value of M [18,19] demands special attention in order to guarantee compliance with the state constraints with at least a specified probability level. By establishing an allowable violation probability $\varepsilon \in [0, 1]$, a very low confidence level $\beta \in [0, 1]$ (e.g., $\beta = 10^{-9}$) and the number of decision variables $d = n_u N$, the probabilistic constraints $P_k[x_{k+i+1|k} \in X] \ge 1 - \varepsilon$, where X is the set of allowed values for the states, can be transformed into the M deterministic constraints (30c), where
$M \ge \dfrac{d + 1 + \ln(1/\beta) + \sqrt{2 (d + 1) \ln(1/\beta)}}{\varepsilon}$  (32)
With the minimum number of scenarios given by (32), the non-violation of the state constraints is met with at least a level $(1 - \varepsilon)$ [21,48]; the lower the allowed probability of violation, the greater the number of scenarios M.
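Bound (32) is straightforward to evaluate; the sketch below reproduces the number of scenarios $M = 896$ used later in Example 1 ($\varepsilon = 0.05$, $\beta = 10^{-9}$, $d = n_u N = 6$):

```python
import math

def min_scenarios(eps, beta, d):
    """Minimum number of scenarios M according to bound (32)."""
    lb = math.log(1.0 / beta)
    return math.ceil((d + 1 + lb + math.sqrt(2 * (d + 1) * lb)) / eps)

# Values used in Example 1: n_u = 1, N = 6, hence d = 6
M = min_scenarios(eps=0.05, beta=1e-9, d=6)
assert M == 896
```

Note how weakly M depends on $\beta$ (logarithmically) and how strongly it depends on $\varepsilon$: halving the allowed violation probability roughly doubles the number of scenarios.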
As pointed out in [18,29], a disadvantage is that the random nature of the scenarios can produce unlikely circumstances (i.e., scenarios far from the reality of the process), so the closed-loop system may exhibit erroneous behavior. For this reason, the removal of R scenarios from the previously defined M is proposed in [29]. In [22], the concept of scenarios is exploited to generate achievable probabilistic sets offline, which are used in an MPC formulation based on probabilistic tubes.

4. Numerical Examples, Results and Discussion

In this section, two examples are presented in which various MPCs were evaluated by performing $N_r$ Monte Carlo simulations for each control scheme, always starting from the same initial state $x_{0|0}$ and assuming that the state measurements are accurate. The performance of the MPCs was analyzed through the performance indices listed in Table 1.
A 64-bit Windows 10 computer with 16 GB of RAM and a 2.5 GHz Intel Core i7 processor was used. Simulations were run in Matlab R2018b; the control actions for classic MPC, SMPC and SCMPC were calculated using the quadprog function of the Mosek 9.2 optimization toolbox, and Matlab's fminimax function was used for the Robust SCMPC. Furthermore, for reproducibility, the specialized Stochastic Model Predictive Control Toolbox software was developed in Matlab to carry out part of the simulations in this article. This software allows simulating an SMPC or SCMPC (Matlab's quadprog solver is used for optimization) to control multivariable systems with additive disturbances, and it is available on MATLAB Central [31] so that the reader can reproduce the results of this section or use it on other systems.

4.1. Example 1

Consider the two-mass-spring system [16,49] with frictionless sliding of Figure 1, where the mass $m_1$ with position $x_1$ and the mass $m_2$ with position $x_2$ are linked by a spring with elastic constant $k_s$. The control input u is a force acting on $m_1$, and $w_1$ and $w_2$ are external disturbances acting on $m_1$ and $m_2$, respectively.
The system’s equations of motion, using $x_3$ as the linear velocity of $m_1$ and $x_4$ as the linear velocity of $m_2$, are
$\dot{x}_1 = x_3, \quad \dot{x}_2 = x_4, \quad m_1 \dot{x}_3 = -k_s (x_1 - x_2) + u + w_1, \quad m_2 \dot{x}_4 = k_s (x_1 - x_2) + w_2$
Setting the state vector as $x = [x_1 \; x_2 \; x_3 \; x_4]^\top$ and the additive disturbances as $w = [w_1 \; w_2]^\top$, its discrete state-space representation by Euler's approximation method, for a sampling time $T_s$, is
$x_{k+1} = A x_k + B u_k + G w_k$, with
$A = \begin{bmatrix} 1 & 0 & T_s & 0 \\ 0 & 1 & 0 & T_s \\ -k_s T_s / m_1 & k_s T_s / m_1 & 1 & 0 \\ k_s T_s / m_2 & -k_s T_s / m_2 & 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ T_s / m_1 \\ 0 \end{bmatrix}, \quad G = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ T_s / m_1 & 0 \\ 0 & T_s / m_2 \end{bmatrix}$
where $m_1 = m_2 = 1$ kg are constant, $T_s = 0.1$ s (selected fulfilling the Nyquist–Shannon sampling theorem, taking the highest frequency of the poles of the system) and the constraints $|x_3|, |x_4| \le 0.12$ m/s must be satisfied. The elastic constant $k_s$ is associated with the parametric uncertainties $\delta$ and has a uniform probability distribution $k_s \sim U([0.5, 2.0])$ N/m. The additive disturbances w have a normal distribution $w \sim N(0, \Sigma_w)$ N, with zero mean and covariance matrix $\Sigma_w = \mathrm{diag}(0.022^2, 0.022^2)$. For all controllers, $N = 6$, $Q_x = \mathrm{diag}(1, 1, 4, 6)$, $R_u = 1$, $\varepsilon_{SMPC} = 0.05$, $\varepsilon_{SCMPC} = 0.05$, $\beta_{SCMPC} = 10^{-9}$ and, by means of (32), $M = 896$.
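The forward-Euler discretization above can be sketched as follows, using the nominal $k_s = 1.25$ of Example 1 and assuming the sign convention of the standard two-mass-spring benchmark:

```python
import numpy as np

def two_mass_spring(ks, m1=1.0, m2=1.0, Ts=0.1):
    # Forward-Euler discretization of the equations of motion
    A = np.array([
        [1.0,        0.0,        Ts,  0.0],
        [0.0,        1.0,        0.0, Ts ],
        [-ks*Ts/m1,  ks*Ts/m1,   1.0, 0.0],
        [ ks*Ts/m2, -ks*Ts/m2,   0.0, 1.0],
    ])
    B = np.array([[0.0], [0.0], [Ts/m1], [0.0]])
    G = np.array([[0.0, 0.0], [0.0, 0.0], [Ts/m1, 0.0], [0.0, Ts/m2]])
    return A, B, G

A, B, G = two_mass_spring(ks=1.25)   # nominal elastic constant
# Sanity check: equal positions and zero velocities -> no spring force,
# so one step of (8) with zero input and disturbance leaves the state unchanged
x = np.array([0.5, 0.5, 0.0, 0.0])
x_next = A @ x
assert np.allclose(x_next, [0.5, 0.5, 0.0, 0.0])
```

Redrawing $k_s$ from $U([0.5, 2.0])$ before each call is how the parametric uncertainty of Cases 2 and 3 would enter this model.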
Three cases were established, based on the parameter $k_s$, to evaluate the performance of five MPCs (classic without constraints (MPC n/c), classic with constraints (MPC w/c), SMPC, SCMPC and Robust SCMPC):
  • Case 1: $k_s$ is known, constant and set to its nominal value $k_s = 1.25$.
  • Case 2: $k_s$ is unknown and varies according to its probability distribution at every control period.
  • Case 3: $k_s$ is unknown and remains constant for all instants along each simulation, with its value varying according to its probability distribution from one simulation to the next.
For each case, $N_r = 100$ simulations with a duration of $N_{T_s} = 100$ sampling times were performed, starting from the initial state $x_{0|0} = \hat{x}_0 = [0.5 \; 0.5 \; 0 \; 0]^\top$ towards the origin as the desired state.
For Case 1, the feedback matrix K and the terminal state weighting $P_N$ are obtained by solving a Linear Quadratic Regulator (LQR) problem [13,20]
$$K = \begin{bmatrix} 2.876 & 1.683 & 3.187 & 0.904 \end{bmatrix}, \qquad P_N = \begin{bmatrix} 155.739 & 102.335 & 43.512 & 77.451 \\ 102.335 & 117.489 & 26.755 & 30.134 \\ 43.512 & 26.755 & 40.392 & 15.371 \\ 77.451 & 30.134 & 15.371 & 126.794 \end{bmatrix}$$
For Cases 2 and 3, they are obtained by solving an eigenvalue problem (EVP) derived from a Lyapunov stability analysis [13,32,33]; the resulting matrices are
$$K = \begin{bmatrix} 56.563 & 41.356 & 15.171 & 45.156 \end{bmatrix}, \qquad P_N = \begin{bmatrix} 4509 & 3280 & 971 & 3801 \\ 3280 & 2450 & 709 & 2706 \\ 971 & 709 & 244 & 791 \\ 3801 & 2706 & 791 & 3462 \end{bmatrix}$$
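As an independent check (a sketch under our own assumptions, not the authors' toolbox code), an LQR gain for the nominal Case 1 model can be obtained by iterating the discrete algebraic Riccati equation; the exact numbers depend on sign conventions and on the solver, so this is illustrative only:

```python
import numpy as np

# Sketch: solve the discrete algebraic Riccati equation by fixed-point
# iteration for the nominal two-mass-spring model (k_s = 1.25, Ts = 0.1 s)
# with the weights Q_x = diag(1, 1, 4, 6), R_u = 1 given in the text.
ks, m1, m2, Ts = 1.25, 1.0, 1.0, 0.1
A = np.array([[1, 0, Ts, 0],
              [0, 1, 0, Ts],
              [-ks*Ts/m1,  ks*Ts/m1, 1, 0],
              [ ks*Ts/m2, -ks*Ts/m2, 0, 1]])
B = np.array([[0.0], [0.0], [Ts/m1], [0.0]])
Q = np.diag([1.0, 1.0, 4.0, 6.0])
R = np.array([[1.0]])

P = Q.copy()
for _ in range(20000):  # Riccati recursion until convergence
    Pn = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B,
                                                         B.T @ P @ A)
    if np.max(np.abs(Pn - P)) < 1e-12:
        P = Pn
        break
    P = Pn

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # control law u = -K x
```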
Case 1 Results: Table 2 shows high probabilities of success $p_s$ (0.84, 0.74 and 0.64) for the stochastic MPCs. However, the higher success rates of SCMPC and Robust SCMPC came at the cost of longer times $t_{avg}$ (79.0 ms and 235.7 ms) to obtain the control sequence than SMPC (5.7 ms), which behaves like a classic MPC with constraints. This makes sense since, at each iteration, the scenario-based controllers select 896 random realizations of w and solve the OCP (30a) for all of them using (29) or (31).
Figure 2 shows the $N_r = 100$ trajectories of the real states and input (thin solid lines), starting from $\hat{x}_0 = [0.5\;\; 0.5\;\; 0\;\; 0]^\top$ towards the origin. The MPC n/c (Figure 2a), having no constraints, presents the most violations ($N_v = 2416$) and the largest overshoots ($PO_{avg} = 20.98\%$), while the mean trajectories (thick solid lines) of the constrained MPCs do not exceed the limits (black dashed lines). Notice how the stochastic approaches (Figure 2c–e) remain within these limits or barely exceed them, showing that the probability that a state is within the allowed limits is at least 68%. The largest standard deviations $\sigma_{max}$ occurred in the scenario-based MPCs (0.036 and 0.054), due to the randomness discussed in [29] and because the worst case is not always the closest to reality (hence the highest IAEavg (52.50) and IAUavg (8.70) of all).
Case 2 Results: The performance indices and the real state trajectories are shown in Table 3 and Figure 3, respectively. As in Case 1, the stochastic MPCs had the highest $p_s$, with an increase for the scenario-based schemes (0.94 and 0.86) despite the randomness of $k_s$. Nevertheless, these schemes are the ones that take the longest to compute the solution, $t_{avg}$ (220.3 ms and 311.8 ms), because, in addition to selecting 896 random scenarios for w, they must also do so for $k_s$.
As seen in Figure 3, the mean trajectories including the standard deviations of the stochastic MPCs are the only ones that stay within the limits (the probability that a state is within the allowed limits is at least 68%). This kind of robustness of the stochastic strategies implies high values of the IAEavg and IAUavg indicators compared to the others.
Case 3 Results: As can be seen in Table 4 and Figure 4, it is once again confirmed that the highest $p_s$ and the mean trajectories whose standard deviation bands stay within the limits belong to the stochastic MPCs. Compared to Case 2, the $t_{avg}$ indicators are very similar, but this case exhibits the highest standard deviations $\sigma_{max}$ of the three cases. The high POavg (28.23%) of the robust SCMPC is visible in the trajectories of state $x_3$ (Figure 4e); however, this strategy presents the lowest $N_v$ (27) of all MPCs.

4.2. Example 2

Consider the quadruple-tank system [50,51] of Figure 5, in which the aim is to control the liquid level $h_i$ in tank $i$, $i \in \{1, 2, 3, 4\}$, by means of pumps 1 and 2, whose flows are proportional to the applied voltages ($Q_{P1} = k_1 v_1$, $Q_{P2} = k_2 v_2$) and are distributed by the valves in proportions determined by $\gamma_1, \gamma_2 \in [0, 1]$.
Let A i be the cross section of tank i; a i and a 12 are the areas of the tank outlet pipes; and g is the acceleration due to gravity. By performing a mass balance and applying Bernoulli’s law, the equations that describe the behavior of the nonlinear system are given by
$$\frac{dh_1}{dt} = -\frac{a_1}{A_1}\sqrt{2gh_1} - \frac{a_{12}}{A_1}\,\mathrm{sgn}(h_1 - h_2)\sqrt{2g|h_1 - h_2|} + \frac{(1-\gamma_2)k_2}{A_1}v_2$$
$$\frac{dh_2}{dt} = -\frac{a_2}{A_2}\sqrt{2gh_2} + \frac{a_{12}}{A_2}\,\mathrm{sgn}(h_1 - h_2)\sqrt{2g|h_1 - h_2|} + \frac{(1-\gamma_1)k_1}{A_2}v_1$$
$$\frac{dh_3}{dt} = \frac{a_1}{A_3}\sqrt{2gh_1} - \frac{a_3}{A_3}\sqrt{2gh_3} + \frac{\gamma_1 k_1}{A_3}v_1$$
$$\frac{dh_4}{dt} = \frac{a_2}{A_4}\sqrt{2gh_2} - \frac{a_4}{A_4}\sqrt{2gh_4} + \frac{\gamma_2 k_2}{A_4}v_2$$
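The nonlinear dynamics can be sanity-checked numerically: evaluated at the operating point quoted further below, all four derivatives should be (approximately) zero, since the operating point is an equilibrium. This is our own helper, not the authors' code:

```python
import math

# Sketch: evaluate the nonlinear quadruple-tank dynamics at the operating
# point P0 and check that it is approximately an equilibrium.
A_tank = 144.0                        # tank cross section [cm^2]
a1 = a2 = a12 = 0.352                 # outlet pipe areas [cm^2]
a3 = a4 = 1.006
k1 = k2 = 33.333                      # pump gains [cm^3/(V s)]
gam1, gam2 = 0.6, 0.7                 # valve ratios gamma_1, gamma_2
g = 981.0                             # gravity [cm/s^2]

def tank_dynamics(h, v):
    h1, h2, h3, h4 = h
    v1, v2 = v
    s = math.copysign(1.0, h1 - h2)
    qx = a12 * s * math.sqrt(2 * g * abs(h1 - h2))   # cross-coupling flow
    return [(-a1 * math.sqrt(2*g*h1) - qx + (1 - gam2) * k2 * v2) / A_tank,
            (-a2 * math.sqrt(2*g*h2) + qx + (1 - gam1) * k1 * v1) / A_tank,
            ( a1 * math.sqrt(2*g*h1) - a3 * math.sqrt(2*g*h3)
              + gam1 * k1 * v1) / A_tank,
            ( a2 * math.sqrt(2*g*h2) - a4 * math.sqrt(2*g*h4)
              + gam2 * k2 * v2) / A_tank]

dh = tank_dynamics([7.873, 8.187, 7.720, 8.039], [4.0, 3.5])
```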
Let us define the state vector $x = [x_1\; x_2\; x_3\; x_4]^\top$ and input vector $u = [u_1\; u_2]^\top$, where $x_i = h_i - h_i^0$ and $u_i = v_i - v_i^0$ represent the deviations from the operating point $P_0 = \{h_1^0, h_2^0, h_3^0, h_4^0, v_1^0, v_2^0\} = \{7.873\text{ cm}, 8.187\text{ cm}, 7.720\text{ cm}, 8.039\text{ cm}, 4.0\text{ V}, 3.5\text{ V}\}$. Linearizing around $P_0$ and applying Euler's approximation with a sampling period $T_s$, the linear model in the discrete state space can be represented by
$$x_{k+1} = A x_k + B u_k$$

$$A = \begin{bmatrix} 1 - \frac{\beta_1 + \beta_x}{A_1} & \frac{\beta_x}{A_1} & 0 & 0 \\ \frac{\beta_x}{A_2} & 1 - \frac{\beta_2 + \beta_x}{A_2} & 0 & 0 \\ \frac{\beta_1}{A_3} & 0 & 1 - \frac{\beta_3}{A_3} & 0 \\ 0 & \frac{\beta_2}{A_4} & 0 & 1 - \frac{\beta_4}{A_4} \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & \frac{(1-\gamma_2)k_2}{A_1}T_s \\ \frac{(1-\gamma_1)k_1}{A_2}T_s & 0 \\ \frac{\gamma_1 k_1}{A_3}T_s & 0 \\ 0 & \frac{\gamma_2 k_2}{A_4}T_s \end{bmatrix}$$
where $\beta_i = a_i\sqrt{\frac{g}{2h_i^0}}\,T_s$ and $\beta_x = a_{12}\sqrt{\frac{g}{2|h_1^0 - h_2^0|}}\,T_s$; $A_i = 144$ cm², $a_1, a_2, a_{12} = 0.352$ cm², $a_3, a_4 = 1.006$ cm²; $k_1, k_2 = 33.333$ cm³/(V s); $\gamma_1 = 0.6$, $\gamma_2 = 0.7$; $g = 981$ cm/s², $T_s = 5$ s.
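A quick sketch (our own check, not the authors' code) builds the linearized A matrix from these coefficients and confirms the discrete-time model is stable, i.e. its spectral radius is below one:

```python
import math

# Build the linearized quadruple-tank A matrix from the beta coefficients
# and check stability of the discrete-time model.
A_t, Ts, grav = 144.0, 5.0, 981.0
a1 = a2 = a12 = 0.352
a3 = a4 = 1.006
h0 = [7.873, 8.187, 7.720, 8.039]     # operating levels [cm]

beta = [a * math.sqrt(grav / (2 * h)) * Ts
        for a, h in zip([a1, a2, a3, a4], h0)]
beta_x = a12 * math.sqrt(grav / (2 * abs(h0[0] - h0[1]))) * Ts

A = [[1 - (beta[0] + beta_x) / A_t, beta_x / A_t, 0, 0],
     [beta_x / A_t, 1 - (beta[1] + beta_x) / A_t, 0, 0],
     [beta[0] / A_t, 0, 1 - beta[2] / A_t, 0],
     [0, beta[1] / A_t, 0, 1 - beta[3] / A_t]]

# A is block lower-triangular: its eigenvalues are those of the upper-left
# 2x2 block plus the two lower-right diagonal entries.
a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
disc = math.sqrt(((a - d) / 2) ** 2 + b * c)   # real here since b*c > 0
eigs = [(a + d) / 2 + disc, (a + d) / 2 - disc, A[2][2], A[3][3]]
spectral_radius = max(abs(e) for e in eigs)
```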
The constraints $|x_3|, |x_4| \leq 1.5$ cm and $|u_1|, |u_2| \leq 1.0$ V must be satisfied. The process has additive disturbances w on its states, with normal distribution $w \sim N(0, \Sigma_w)$ cm, zero mean and covariance matrix $\Sigma_w = \mathrm{diag}(0.1^2, 0.1^2, 0.1^2, 0.1^2)$, truncated so that $|w_i| \leq 0.3$ cm, and $G = \mathrm{diag}(1, 1, 1, 1)$.
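One simple way to draw such truncated disturbances is rejection sampling; this is a sketch under our own assumptions, as the paper does not specify the sampling mechanism:

```python
import random

# Zero-mean Gaussian samples with sigma = 0.1 cm, rejected until the
# truncation bound |w_i| <= 0.3 cm is satisfied.
SIGMA, BOUND = 0.1, 0.3

def truncated_gauss(rng, sigma=SIGMA, bound=BOUND):
    while True:                       # rejection sampling
        w = rng.gauss(0.0, sigma)
        if abs(w) <= bound:
            return w

rng = random.Random(1)
w_k = [truncated_gauss(rng) for _ in range(4)]  # one sample per state
```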
To analyze the effect of the allowed state-constraint violation probability $\varepsilon$ on the number of considered scenarios M and on the different performance indicators, this example compares the performance of an SCMPC for five values of $\varepsilon$ (0.4, 0.3, 0.2, 0.1, 0.05) with a constrained classic MPC (MPC w/c). For each scheme, $N_r = 100$ runs were carried out on the nonlinear model (33), each of $N_{T_s} = 40$ sampling times, starting from the initial state $x_{0|0} = \hat{x}_0 = [5.5\;\; 6.9\;\; 0.5\;\; 0.2]^\top$ with the origin as the desired state.
For all controllers, $N = 5$, $Q_x = \mathrm{diag}(10, 10, 1, 1)$, $R_u = \mathrm{diag}(1, 1)$, $\beta = 10^{-9}$. The matrices K and $P_N$, obtained by solving a Linear Quadratic Regulator (LQR) problem [13,20], are
$$K = \begin{bmatrix} 0.745 & 0.649 & 0.153 & 0.054 \\ 0.556 & 0.662 & 0.045 & 0.245 \end{bmatrix}, \qquad P_N = \begin{bmatrix} 13.275 & 3.341 & 0.839 & 0.811 \\ 3.341 & 13.438 & 0.884 & 0.809 \\ 0.839 & 0.884 & 1.870 & 0.140 \\ 0.811 & 0.809 & 0.140 & 1.689 \end{bmatrix}$$
Results: Table 5 shows the performance indices of the MPC w/c and the SCMPC for the $N_r = 100$ runs, while Figure 6 depicts the closed-loop simulations. The SCMPCs showed the fewest violations, in both quantity $N_v$ (15, 14, 10, 7 and 6) and percentage of deviation $PO_{avg}$ (1.55%, 1.60%, 1.78%, 1.06% and 0.48%), compared to the MPC w/c ($N_v = 263$ and $PO_{avg} = 5.76\%$). Furthermore, the stochastic scheme shows high numbers of successful runs $N_s$ (87, 88, 91, 93 and 94) and probabilities of success $p_s$ (0.87, 0.88, 0.91, 0.93 and 0.94), which increase as $\varepsilon$ decreases. However, this improvement in $p_s$ leads to an increase in the number of scenarios M (133, 177, 266, 531 and 1062) to be considered in the OCP (30a), and hence an increase in the average time the algorithm takes to find a solution at each instant k, $t_{avg}$ (12.3 ms, 14.2 ms, 18.1 ms, 30.3 ms and 57.3 ms). Regarding the indices $IAE_{avg}$, $IAU_{avg}$ and $\sigma_{max}$, although no considerable difference between controllers is observed, the lower the $\varepsilon$, the lower these performance values.
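The inverse scaling between $\varepsilon$ and M reported in Table 5 can be checked directly: the product $\varepsilon \cdot M$ is essentially constant, so halving the allowed violation probability roughly doubles the number of scenarios in the OCP.

```python
# Scenario counts from Table 5 versus the allowed violation probability.
eps = [0.4, 0.3, 0.2, 0.1, 0.05]
M   = [133, 177, 266, 531, 1062]

products = [e * m for e, m in zip(eps, M)]   # ~53.1 for every pair
```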
As can be seen in Figure 6, all the $N_r = 100$ trajectories made by the states (thin solid lines) start from the initial state $\hat{x}_0$ and converge towards the origin. The mean trajectories (thick solid lines) of the SCMPCs stay considerably far from the constraints (black dashed lines), in contrast to the MPC w/c, which passes very close to them and even violates them (hence its highest $PO_{avg} = 5.76\%$). Moreover, for the SCMPCs, the mean trajectories including the standard deviations (dotted lines) are within the limits, meaning that the probability that a state is within the allowed limits is at least 68%. As for the applied inputs, they start at their maximum allowed values and decrease as the states converge towards the origin.

5. Conclusions and Future Work

Two stochastic model predictive control approaches have been compared in this article: one based on analytical methods (SMPC) and the other based on scenarios (SCMPC). The low computational cost of SMPC stems from the fact that the statistical information about the uncertainty is used offline to tighten the state constraints; however, if this information changes, it cannot be incorporated during operation. In contrast, new statistical information can be incorporated into the SCMPC to generate the scenarios online, at the price of a high computational cost.
It was shown that SMPC reduces to a deterministic OCP (22a) whose structure is similar to that of a classic MPC with constraints (7a), with a similar computational cost $t_{avg}$ but a considerable increase in the probability of success $p_s$, thanks to the offline constraint tightening.
The scenario-based approaches, SCMPC and robust SCMPC, gave the highest probabilities of success of all controllers, at the expense of a high computational cost, since they must solve an OCP with M random scenarios at each control period. From the results in Table 5, it can be concluded that decreasing the parameter $\varepsilon$, the allowed violation probability for the state constraints, increases the probability of success $p_s$. However, this improvement comes with an increase in the number of scenarios M in the OCP and therefore in the average time $t_{avg}$ the algorithm takes to find a solution at each sampling time.
It is clear that the consideration of statistical process information, through its inclusion in the OCP, significantly improves the probability of success $p_s$. This can be verified in Figure 2, Figure 3, Figure 4 and Figure 6, where only the stochastic approaches reached mean trajectories whose standard deviation bands stay within the limits or barely exceed them. For a normal distribution, this means that the probability that a state is within the allowed limits is at least 68%.
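The 68% figure quoted throughout is the standard one-sigma mass of a normal distribution, which can be checked in one line:

```python
import math

# Probability mass within one standard deviation of the mean for a normal
# distribution: P(|z| <= 1) = erf(1 / sqrt(2)) ~= 0.6827.
p_one_sigma = math.erf(1.0 / math.sqrt(2.0))
```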
Scenario-based schemes are attractive in the sense that they achieve a greater probability of success and allow the inclusion of new statistical information online, without necessarily knowing the probability distributions. However, due to the randomness of the scenarios, their solutions can show undesirable behavior despite their high computational cost. These drawbacks would prevent their implementation for the control of systems with fast dynamics, such as the two-mass spring system of the example, whose $T_s$ (0.1 s) is close to the lowest $t_{avg}$ (79.0 ms) obtained by the SCMPCs across the three cases. Given these drawbacks, future work will focus on new SCMPC schemes that reduce the computational load and discard unlikely scenarios.
Finally, the specialized software Stochastic Model Predictive Control Toolbox is available on MATLAB Central for interested readers. This tool was created by the authors to carry out part of the simulations in this article. It also allows readers to reproduce the results presented here, or to tune and simulate SMPCs and SCMPCs for controlling multivariable linear systems with additive disturbances that follow Gaussian probability distributions.

Author Contributions

Conceptualization, Methodology and Writing, E.G. and J.S. (Javier Sanchis); Review and Supervision S.G.-N. and J.S. (José Salcedo); and Software developed, E.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Richalet, J.; Rault, A.; Testud, J.; Papon, J. Model predictive heuristic control: Applications to industrial processes. Automatica 1978, 14, 413–428. [Google Scholar] [CrossRef]
  2. Forbes, M.G.; Patwardhan, R.S.; Hamadah, H.; Gopaluni, R.B. Model Predictive Control in Industry: Challenges and Opportunities. IFAC Pap. 2015, 48, 531–538. [Google Scholar] [CrossRef]
  3. Jurado, I.; Millán, P.; Quevedo, D.; Rubio, F. Stochastic MPC with Applications to Process Control. Int. J. Control 2014, 88, 792–800. [Google Scholar] [CrossRef]
  4. Richalet, J.; Abu El Ata-Doss, S.; Arber, C.; Kuntze, H.; Jacubasch, A.; Schill, W. Predictive Functional Control-Application to Fast and Accurate Robots. IFAC Proc. Vol. 1987, 20, 251–258. [Google Scholar] [CrossRef]
  5. Tan, Y.; Cai, G.; Li, B.; Teo, K.L.; Wang, S. Stochastic Model Predictive Control for the Set Point Tracking of Unmanned Surface Vehicles. IEEE Access 2020, 8, 579–588. [Google Scholar] [CrossRef]
  6. Mueller, M.W.; D’Andrea, R. A model predictive controller for quadrocopter state interception. In Proceedings of the 2013 European Control Conference (ECC), Zurich, Switzerland, 17–19 July 2013; pp. 1383–1389. [Google Scholar] [CrossRef]
  7. Lamoudi, M.Y.; Alamir, M.; Béguery, P. Model predictive control for energy management in buildings Part 2: Distributed Model Predictive Control. IFAC Proc. Vol. 2012, 45, 226–231. [Google Scholar] [CrossRef] [Green Version]
  8. Salsbury, T.; Mhaskar, P.; Qin, S.J. Predictive control methods to improve energy efficiency and reduce demand in buildings. Comput. Chem. Eng. 2013, 51, 77–85. [Google Scholar] [CrossRef]
  9. Sultana, W.R.; Sahoo, S.K.; Sukchai, S.; Yamuna, S.; Venkatesh, D. A review on state-of-the-art development of model predictive control for renewable energy applications. Renew. Sustain. Energy Rev. 2017, 76, 391–406. [Google Scholar] [CrossRef]
  10. Luchini, E.; Schirrer, A.; Kozek, M. A hierarchical MPC for multi-objective mixed-integer optimisation applied to redundant refrigeration circuits. IFAC Pap. 2017, 50, 9058–9064. [Google Scholar] [CrossRef]
  11. Mercorelli, P.; Kubasiak, N.; Liu, S. Model predictive control of an electromagnetic actuator fed by multilevel PWM inverter. In Proceedings of the 2004 IEEE International Symposium on Industrial Electronics, Ajaccio, France, 4–7 May 2004; pp. 531–535. [Google Scholar] [CrossRef]
  12. Mayne, D.; Rawlings, J.; Rao, C.; Scokaert, P. Constrained Model Predictive Control: Stability and Optimality. Automatica 2000, 36, 789–814. [Google Scholar] [CrossRef]
  13. Maciejowski, J. Predictive Control: With Constraints, 1st ed.; Prentice Hall: London, UK, 2002. [Google Scholar]
  14. Kouvaritakis, B.; Cannon, M. Model Predictive Control. Classical, Robust and Stochastic, 1st ed.; Springer International Publishing: London, UK, 2016. [Google Scholar] [CrossRef]
  15. Mayne, D.Q. Model Predictive Control: Recent Developments and Future Promise. Automatica 2014, 50, 2967–2986. [Google Scholar] [CrossRef]
  16. Kothare, M.V.; Balakrishnan, V.; Morari, M. Robust Constrained Model Predictive Control Using Linear Matrix Inequalities. Automatica 1996, 32, 1361–1379. [Google Scholar] [CrossRef] [Green Version]
  17. Bemporad, A.; Morari, M. Robust Model Predictive Control: A Survey. In Robustness in Identification and Control; Springer: London, UK, 1999; pp. 207–226. [Google Scholar] [CrossRef]
  18. Mesbah, A. Stochastic Model Predictive Control: An Overview and Perspectives for Future Research. IEEE Control Syst. Mag. 2016, 36, 30–44. [Google Scholar] [CrossRef] [Green Version]
  19. Mayne, D. Robust and Stochastic MPC: Are We Going in the Right Direction? IFAC Pap. 2015, 48, 1–8. [Google Scholar] [CrossRef]
  20. Kouvaritakis, B.; Cannon, M. Stochastic Model Predictive Control. In Encyclopedia of Systems and Control, 1st ed.; Springer: London, UK, 2015; pp. 1350–1357. [Google Scholar] [CrossRef]
  21. Farina, M.; Giulioni, L.; Scattolini, R. Stochastic Linear Model Predictive Control with Chance Constraints—A Review. J. Process. Control 2016, 44, 53–67. [Google Scholar] [CrossRef]
  22. Hewing, L.; Zeilinger, M.N. Scenario-Based Probabilistic Reachable Sets for Recursively Feasible Stochastic Model Predictive Control. IEEE Control Syst. Lett. 2020, 4, 450–455. [Google Scholar] [CrossRef]
  23. Cannon, M.; Kouvaritakis, B.; Raković, S.V.; Cheng, Q. Stochastic Tubes in Model Predictive Control With Probabilistic Constraints. IEEE Trans. Autom. Control 2011, 56, 194–200. [Google Scholar] [CrossRef]
  24. Lorenzen, M.; Dabbene, F.; Tempo, R.; Allgöwer, F. Constraint-Tightening and Stability in Stochastic Model Predictive Control. IEEE Trans. Autom. Control 2017, 62, 3165–3177. [Google Scholar] [CrossRef] [Green Version]
  25. Heirung, T.A.N.; Paulson, J.A.; O’Leary, J.; Mesbah, A. Stochastic Model Predictive Control-How Does It Work? Comput. Chem. Eng. 2018, 114, 158–170. [Google Scholar] [CrossRef]
  26. Paulson, J.A.; Buehler, E.A.; Braatz, R.D.; Mesbah, A. Stochastic Model Predictive Control with Joint Chance Constraints. Int. J. Control 2017, 93, 126–139. [Google Scholar] [CrossRef]
  27. Mayne, D.Q.; Falugi, P. Stabilizing Conditions for Model Predictive Control. Int. J. Robust Nonlinear Control 2018, 29, 894–903. [Google Scholar] [CrossRef] [Green Version]
  28. Calafiore, G.C.; Fagiano, L. Robust Model Predictive Control Via Scenario Optimization. IEEE Trans. Autom. Control 2013, 58, 219–224. [Google Scholar] [CrossRef] [Green Version]
  29. Schildbach, G.; Fagiano, L.; Frei, C.; Morari, M. The Scenario Approach for Stochastic Model Predictive Control with Bounds on Closed-Loop Constraint Violations. Automatica 2014, 50, 3009–3018. [Google Scholar] [CrossRef] [Green Version]
  30. Calafiore, G.; Campi, M.C. The Scenario Approach to Robust Control Design. IEEE Trans. Autom. Control 2006, 51, 742–753. [Google Scholar] [CrossRef]
  31. González Querubín, E.A. Stochastic Model Predictive Control Toolbox. Version 1.0.4. Available online: https://www.mathworks.com/matlabcentral/fileexchange/75803 (accessed on 5 December 2020).
  32. Boyd, S.; El Ghaoui, L.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1994. [Google Scholar] [CrossRef] [Green Version]
  33. Cuzzola, F.A.; Geromel, J.C.; Morari, M. An Improved Approach for Constrained Robust Model Predictive Control. Automatica 2002, 38, 1183–1189. [Google Scholar] [CrossRef]
  34. Maeder, U.; Borrelli, F.; Morari, M. Linear Offset-Free Model Predictive Control. Automatica 2009, 45, 2214–2222. [Google Scholar] [CrossRef]
  35. Blanchini, F.; Miani, S. Set-Theoretic Methods in Control, 1st ed.; Birkhäuser Boston: Boston, MA, USA, 2008. [Google Scholar] [CrossRef]
  36. Khlebnikov, M.V.; Polyak, B.T.; Kuntsevich, V.M. Optimization of Linear Systems Subject to Bounded Exogenous Disturbances: The Invariant Ellipsoid Technique. Autom. Remote. Control 2011, 72, 2227–2275. [Google Scholar] [CrossRef]
  37. Zhang, X.; Grammatico, S.; Schildbach, G.; Goulart, P.; Lygeros, J. On the Sample Size of Randomized MPC for Chance-Constrained Systems with Application to Building Climate Control. In Proceedings of the 2014 European Control Conference (ECC), Strasbourg, France, 24–27 June 2014; pp. 478–483. [Google Scholar] [CrossRef]
  38. Parisio, A.; Varagnolo, D.; Molinari, M.; Pattarello, G.; Fabietti, L.; Johansson, K.H. Implementation of a Scenario-based MPC for HVAC Systems: An Experimental Case Study. IFAC Proc. Vol. 2014, 47, 599–605. [Google Scholar] [CrossRef] [Green Version]
  39. Wang, X.; Liu, Y.; Xu, L.; Liu, J.; Sun, H. A Chance-Constrained Stochastic Model Predictive Control for Building Integrated with Renewable Resources. Electr. Power Syst. Res. 2020, 184, 106348. [Google Scholar] [CrossRef]
  40. Raimondi Cominesi, S.; Farina, M.; Giulioni, L.; Picasso, B.; Scattolini, R. A Two-Layer Stochastic Model Predictive Control Scheme for Microgrids. IEEE Trans. Control Syst. Technol. 2018, 26, 1–13. [Google Scholar] [CrossRef]
  41. Ramandi, M.Y.; Bigdeli, N.; Afshar, K. Stochastic Economic Model Predictive Control for Real-Time Scheduling of Balance Responsible Parties. Int. J. Electr. Power Energy Syst. 2020, 118, 105800. [Google Scholar] [CrossRef]
  42. Velarde, P.; Tian, X.; Sadowska, A.D.; Maestre, J.M. Scenario-Based Hierarchical and Distributed MPC for Water Resources Management with Dynamical Uncertainty. Water Resour. Manag. 2018, 33, 677–696. [Google Scholar] [CrossRef]
  43. Velasco, J.; Liniger, A.; Zhang, X.; Lygeros, J. Efficient Implementation of Randomized MPC for Miniature Race Cars. In Proceedings of the 2016 European Control Conference (ECC), Aalborg, Denmark, 29 June 2016; pp. 957–962. [Google Scholar] [CrossRef]
  44. Piovesan, J.L.; Tanner, H.G. Randomized Model Predictive Control for Robot Navigation. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 17 May 2009; pp. 94–99. [Google Scholar] [CrossRef] [Green Version]
  45. Goodwin, G.C.; Medioli, A.M. Scenario-Based, Closed-Loop Model Predictive Control With Application to Emergency Vehicle Scheduling. Int. J. Control 2013, 86, 1338–1348. [Google Scholar] [CrossRef]
  46. Mesbah, A. Stochastic Model Predictive Control with Active Uncertainty Learning: A Survey on Dual Control. Annu. Rev. Control 2018, 45, 107–117. [Google Scholar] [CrossRef]
  47. Cannon, M.; Kouvaritakis, B.; Ng, D. Probabilistic Tubes in Linear Stochastic Model Predictive Control. Syst. Control Lett. 2009, 58, 747–753. [Google Scholar] [CrossRef]
  48. Calafiore, G.C. Random Convex Programs. SIAM J. Optim. 2010, 20, 3427–3464. [Google Scholar] [CrossRef] [Green Version]
  49. Wie, B.; Bernstein, D.S. Benchmark Problems for Robust Control Design. J. Guid. Control Dyn. 1992, 15, 1057–1059. [Google Scholar] [CrossRef] [Green Version]
  50. Johansson, K.H. The Quadruple-Tank Process: A Multivariable Laboratory Process With an Adjustable Zero. IEEE Trans. Control Syst. Technol. 2000, 8, 456–465. [Google Scholar] [CrossRef] [Green Version]
  51. Numsomran, A.; Tipsuwanporn, V.; Tirasesth, K. Modeling of the Modified Quadruple-Tank Process. In Proceedings of the 2008 SICE Annual Conference, Tokyo, Japan, 20–22 August 2008; pp. 818–823. [Google Scholar] [CrossRef]
Figure 1. Two-mass spring system.
Figure 2. Case 1: Two-mass spring system controlled with different MPCs, for 100 Monte Carlo simulations. Trajectories of the real states and input (thin solid lines), mean value (thick solid lines), mean value with standard deviation (dotted lines), minimum and maximum values (blue dashed lines) and constraints (black dashed lines) | x 3 | , | x 4 | 0.12 .
Figure 3. Case 2: Two-mass spring system controlled with different MPCs, for 100 Monte Carlo simulations. Trajectories of the real states and input (thin solid lines), mean value (thick solid lines), mean value with standard deviation (dotted lines), minimum and maximum values (blue dashed lines) and constraints (black dashed lines) | x 3 | , | x 4 | 0.12 .
Figure 4. Case 3: Two-mass spring system controlled with different MPCs, for 100 Monte Carlo simulations. Trajectories of the real states and input (thin solid lines), mean value (thick solid lines), mean value with standard deviation (dotted lines), minimum and maximum values (blue dashed lines) and constraints (black dashed lines) | x 3 | , | x 4 | 0.12 .
Figure 5. Quadruple-tank system.
Figure 6. Quadruple-tank system controlled with a constrained MPC and a SCMPC for multiple ε values, for 100 Monte Carlo simulations. Trajectories of the real states and inputs (thin solid lines), mean value (thick solid lines), mean value with standard deviation (dotted lines), minimum and maximum values (blue dashed lines) and constraints (black dashed lines) | x 3 | , | x 4 | 1.5 , | u 1 | , | u 2 | 1.0 .
Table 1. Performance indices.
Index | Description
Ns | Number of simulations out of Nr in which no constraints were violated.
ps | Probability of success of a simulation, ps = Ns/Nr.
Nv | Number of constraint violations in all Nr simulations.
IAEavg | Mean value of the integral absolute error of the states, based on the closed-loop responses of the Nr simulations.
IAUavg | Mean value of the integral of the absolute value of the applied inputs over all Nr simulations.
tavg | Average time taken by the algorithm to obtain a solution.
σmax | Maximum standard deviation of the constrained states.
POavg | Average percentage overshoot of the constraints in the violated states.
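The success-related indices above can be computed from batches of simulated trajectories along these lines (the helper name and the toy data are ours, not the authors' toolbox code):

```python
# trajs[r][k] holds the constrained states at step k of run r.
def success_indices(trajs, limit):
    n_runs = len(trajs)
    n_s = 0          # N_s: runs with no violation at all
    n_v = 0          # N_v: violated samples across all runs
    overshoots = []  # percentage overshoot of each violated sample
    for run in trajs:
        violations = [abs(x) for step in run for x in step if abs(x) > limit]
        n_v += len(violations)
        if not violations:
            n_s += 1
        overshoots += [100.0 * (v - limit) / limit for v in violations]
    p_s = n_s / n_runs
    po_avg = sum(overshoots) / len(overshoots) if overshoots else 0.0
    return n_s, p_s, n_v, po_avg

# Toy example: 2 runs with limit 0.12; the second run violates once.
demo = [[[0.05, 0.10], [0.02, 0.01]],
        [[0.05, 0.18], [0.03, 0.02]]]
ns, ps, nv, po = success_indices(demo, 0.12)
```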
Table 2. MPCs performance indices for Case 1.
MPC | Ns | ps | Nv | IAEavg | IAUavg | tavg | σmax | POavg
MPC n/c | 0 | 0.00 | 2416 | 43.88 | 5.35 | 5.6 ms | 0.009 | 20.98%
MPC w/c | 3 | 0.03 | 702 | 49.01 | 7.29 | 5.9 ms | 0.031 | 2.45%
SMPC | 64 | 0.64 | 135 | 49.85 | 7.39 | 5.7 ms | 0.029 | 1.78%
SCMPC | 84 | 0.84 | 168 | 50.61 | 7.55 | 79.0 ms | 0.036 | 6.78%
SCMPC (robust) | 74 | 0.74 | 30 | 52.50 | 8.70 | 235.7 ms | 0.054 | 6.01%
Table 3. MPCs performance indices: Case 2.
MPC | Ns | ps | Nv | IAEavg | IAUavg | tavg | σmax | POavg
MPC n/c | 0 | 0.00 | 611 | 50.58 | 6.16 | 6.3 ms | 0.010 | 20.97%
MPC w/c | 4 | 0.04 | 362 | 51.37 | 5.59 | 6.4 ms | 0.027 | 1.75%
SMPC | 65 | 0.65 | 77 | 51.98 | 5.87 | 6.3 ms | 0.031 | 1.13%
SCMPC | 94 | 0.94 | 7 | 52.91 | 6.96 | 220.3 ms | 0.045 | 13.67%
SCMPC (robust) | 86 | 0.86 | 21 | 53.14 | 8.70 | 311.8 ms | 0.055 | 6.29%
Table 4. MPCs performance indices for Case 3.
MPC | Ns | ps | Nv | IAEavg | IAUavg | tavg | σmax | POavg
MPC n/c | 0 | 0.00 | 754 | 51.27 | 6.14 | 5.8 ms | 0.028 | 19.38%
MPC w/c | 1 | 0.01 | 462 | 52.16 | 5.74 | 6.1 ms | 0.032 | 2.07%
SMPC | 57 | 0.57 | 107 | 52.56 | 5.73 | 5.9 ms | 0.036 | 1.90%
SCMPC | 85 | 0.85 | 28 | 53.42 | 6.95 | 220.9 ms | 0.046 | 11.22%
SCMPC (robust) | 79 | 0.79 | 27 | 53.97 | 8.67 | 327.5 ms | 0.062 | 28.23%
Table 5. Performance indices.
MPC | Ns | ps | Nv | IAEavg | IAUavg | tavg | σmax | POavg
MPC w/c | 2 | 0.02 | 263 | 78.63 | 21.51 | 6.7 ms | 0.218 | 5.76%
SCMPC (ε = 0.4, M = 133) | 87 | 0.87 | 15 | 78.40 | 21.22 | 12.3 ms | 0.218 | 1.55%
SCMPC (ε = 0.3, M = 177) | 88 | 0.88 | 14 | 78.39 | 21.21 | 14.2 ms | 0.218 | 1.60%
SCMPC (ε = 0.2, M = 266) | 91 | 0.91 | 10 | 78.38 | 21.18 | 18.1 ms | 0.218 | 1.78%
SCMPC (ε = 0.1, M = 531) | 93 | 0.93 | 7 | 78.36 | 21.15 | 30.3 ms | 0.217 | 1.06%
SCMPC (ε = 0.05, M = 1062) | 94 | 0.94 | 6 | 78.35 | 21.14 | 57.3 ms | 0.217 | 0.48%