
A New Approach to Off-Line Robust Model Predictive Control for Polytopic Uncertain Models

School of Electrical and Electronic Engineering, Shanghai Institute of Technology, Shanghai 201418, China
* Authors to whom correspondence should be addressed.
Designs 2018, 2(3), 31; https://doi.org/10.3390/designs2030031
Submission received: 9 July 2018 / Revised: 25 July 2018 / Accepted: 6 August 2018 / Published: 20 August 2018

Abstract

Concerning robust model predictive control (MPC) for constrained systems with polytopic model characterization, several approaches have been given in the literature. One well-known approach is off-line MPC, which computes off-line a sequence of state feedback laws with corresponding ellipsoidal domains of attraction. Originally, each law in the sequence was calculated by fixing the infinite-horizon control moves as a single state feedback law. This paper, instead, optimizes the feedback law in the larger ellipsoid by foreseeing that, if it is applied at the current instant, then better feedback laws in the smaller ellipsoids will be applied at the following instants. In this way, the new approach achieves a larger domain of attraction and better control performance. A simulation example shows the effectiveness of the new technique.

1. Introduction

Model predictive control (MPC) has attracted considerable attention, since it is an effective control algorithm for multivariable constrained control problems. The nominal MPC for constrained linear systems was solved in systematic ways around 2000 [1,2]. Recently, some approaches have been extended to distributed implementations [3,4]. After the nominal case, synthesizing robust MPC for constrained uncertain systems attracted great attention and has become a significant branch of MPC. The lack of robustness of MPC based on a nominal model [5] calls for robust MPC techniques based on uncertainty models. Up to now, robust MPC has been solved in several ways [6,7,8]. A good technique for robust MPC, however, requires not only guaranteed stability, but also a low computational burden, a large (at least the desired) domain of attraction, and a low performance cost value [9].
The authors in [6] first solved a min-max optimization problem over an infinite horizon for systems with polytopic description, by fixing the control moves as a state feedback law that was calculated on-line. The authors in [9] calculated off-line a sequence of feedback laws with corresponding ellipsoidal domains of attraction, and on-line interpolated the control law at each sampling time using this sequence; the on-line computational burden is thus largely reduced. In addition, a nominal performance cost is used in [9] in place of the "worst-case" one so that feasibility can be improved; a heuristic varying-horizon formulation is used and the feedback gains can be obtained in a backward manner.
In this paper, the off-line technique in [9] is further studied. Originally, each off-line feedback law was calculated by treating the infinite-horizon control moves as a single state feedback law. This paper, instead, optimizes the feedback law in the larger ellipsoid by considering that, if it is applied at the current time, the feedback laws in the smaller ellipsoids will be applied at the following times. In a sense, the new technique is equivalent to a varying-horizon MPC, i.e., the control horizon (say $M$) gradually changes from $M > 1$ to $M = 1$, while the technique in [9] can be taken as a fixed-horizon MPC with $M = 1$. Hence, the new technique achieves better control performance and can control a wider class of systems, i.e., it improves both control performance and feasibility.
So far, the state feedback approach is popular in most robust MPC problems, where the full state is assumed to be exactly measured to act as the initial condition for future predictions [10,11,12,13,14]. However, in many control problems, not all states can be measured exactly, and only the output information is available for feedback. In this case, an output feedback robust MPC design is necessary (e.g., [3,15]). The output feedback MPC approach parallel to that in [6] has been proposed in [16], and the off-line robust MPC has been studied in [17]. The new approach in this paper may be applied to improve the procedure in [17], which will be studied in the near future.

Notation: The notations are fairly standard. $\mathbb{R}^n$ is the $n$-dimensional space of real-valued vectors. $W > 0$ ($W \ge 0$) means that $W$ is symmetric positive-definite (symmetric non-negative-definite). For a vector $x$ and a positive-definite matrix $W$, $\|x\|_W^2 = x^T W x$. $x(k+i|k)$ is the value of vector $x$ at the future time $k+i$, predicted at time $k$. The symbol $*$ induces a symmetric structure, e.g., when $H$ and $R$ are symmetric matrices, $\begin{bmatrix} H+S+* & * \\ T & R \end{bmatrix} := \begin{bmatrix} H+S+S^T & T^T \\ T & R \end{bmatrix}$.

2. Problem Statement

Consider the following time-varying model:

$$x(k+1) = A(k)x(k) + B(k)u(k) \quad (1)$$

with input and state constraints, i.e.,

$$-\bar{u} \le u(k+i) \le \bar{u}, \quad -\bar{\psi} \le \Psi x(k+i+1) \le \bar{\psi}, \quad i \ge 0 \quad (2)$$

where $u \in \mathbb{R}^m$ and $x \in \mathbb{R}^n$ are the input and the measurable state, respectively; $\bar{u} := [\bar{u}_1, \bar{u}_2, \dots, \bar{u}_m]^T$ and $\bar{\psi} := [\bar{\psi}_1, \bar{\psi}_2, \dots, \bar{\psi}_q]^T$ with $\bar{u}_i > 0,\ i = 1 \dots m$ and $\bar{\psi}_j > 0,\ j = 1 \dots q$; $\Psi \in \mathbb{R}^{q \times n}$. Input constraints are common in practice, arising from physical and technological limitations; it is well known that neglecting them usually leads to limit cycles or parasitic equilibrium points, or even causes instability. Moreover, we assume that $[A(k)|B(k)] \in \Omega,\ \forall k \ge 0$, where $\Omega = \mathrm{Co}\{[A_1|B_1], [A_2|B_2], \dots, [A_L|B_L]\}$, i.e., there exist $L$ nonnegative coefficients $\omega_l(k),\ l = 1 \dots L$, such that

$$\sum_{l=1}^{L} \omega_l(k) = 1, \quad [A(k)|B(k)] = \sum_{l=1}^{L} \omega_l(k)\, [A_l|B_l] \quad (3)$$
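The convex-combination structure of $\Omega$ can be exercised numerically. Below is a minimal sketch with two illustrative vertices chosen to resemble Example 1 of Section 4; the vertex values and the weights are assumptions for illustration, not part of the derivation:

```python
import numpy as np

# Two illustrative vertices [A_l | B_l] of the polytope Omega (L = 2).
A1 = np.array([[0.8, 0.2], [0.5, 0.8]])
A2 = np.array([[0.8, 0.2], [2.5, 0.8]])
B1 = B2 = np.array([[1.0], [0.0]])

def realization(omega):
    """Form [A(k)|B(k)] = sum_l omega_l [A_l|B_l] for nonnegative weights summing to 1."""
    w = np.asarray(omega, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    A = w[0] * A1 + w[1] * A2
    B = w[0] * B1 + w[1] * B2
    return A, B

def step(x, u, omega):
    """One-step prediction x(k+1) = A(k)x(k) + B(k)u(k) for a given realization."""
    A, B = realization(omega)
    return A @ x + B @ u

x0 = np.array([[1.0], [0.0]])
x1 = step(x0, np.array([[0.5]]), omega=[0.5, 0.5])  # midpoint realization (beta = 1.5)
```

Any time-varying trajectory of the uncertain system is obtained by picking a (possibly different) weight vector at each step.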
A predictive controller is proposed to drive systems (1) and (2) to the origin $(x, u) = (0, 0)$ by solving, at each time $k$, the following optimization problem:

$$\min_{u(k)} \ \max_{[A(k+i)|B(k+i)] \in \Omega,\ i \ge 0} J(k) = \sum_{i=0}^{\infty} \left[ \|x(k+i|k)\|_Q^2 + \|u(k+i|k)\|_R^2 \right] \quad (4a)$$

subject to the following constraints:

$$x(k+i+1|k) = A(k+i)x(k+i|k) + B(k+i)u(k+i|k), \quad x(k|k) = x(k) \quad (4b)$$

$$-\bar{u} \le u(k+i|k) \le \bar{u}, \quad -\bar{\psi} \le \Psi x(k+i+1|k) \le \bar{\psi} \quad (4c)$$

for all $i \ge 0$. In (4), $Q > 0$ and $R > 0$ are weighting matrices and $u(k) = [u(k|k)^T, u(k+1|k)^T, u(k+2|k)^T, \dots]^T$ are the decision variables. At time $k$, $u(k) = u(k|k)$ is implemented and the optimization (4) is repeated at time $k+1$.
The authors in [9] simplified problem (4) by fixing $u(k)$ as a state feedback law, i.e., $u(k+i|k) = F(k)x(k+i|k),\ i \ge 0$. Define a quadratic function

$$V(i,k) = x(k+i|k)^T P(k)\, x(k+i|k), \quad P(k) > 0, \quad k \ge 0 \quad (5)$$

with the robust stability constraint

$$V(i+1,k) - V(i,k) \le -\left[ \|x(k+i|k)\|_Q^2 + \|u(k+i|k)\|_R^2 \right], \quad \forall [A(k+i)|B(k+i)] \in \Omega, \ i \ge 0 \quad (6)$$

For a stable closed-loop system, $x(\infty|k) = 0$ and $V(\infty,k) = 0$. Summing (6) from $i = 0$ to $i = \infty$ gives

$$\max_{[A(k+i)|B(k+i)] \in \Omega,\ i \ge 0} J(k) \le V(0,k) \le \gamma \quad (7)$$

where $\gamma > 0$. Define $Q = \gamma P(k)^{-1}$ and $F(k) = Y Q^{-1}$; then Equations (4c), (6) and (7) are satisfied if

$$\begin{bmatrix} 1 & * \\ x(k) & Q \end{bmatrix} \ge 0, \quad Q > 0 \quad (8)$$

$$\begin{bmatrix} Q & * & * & * \\ A_l Q + B_l Y & Q & * & * \\ Q^{1/2} Q & 0 & \gamma I & * \\ R^{1/2} Y & 0 & 0 & \gamma I \end{bmatrix} \ge 0, \quad l = 1 \dots L \quad (9)$$

$$\begin{bmatrix} Z & Y \\ Y^T & Q \end{bmatrix} \ge 0, \quad Z_{jj} \le \bar{u}_j^2, \quad j = 1 \dots m \quad (10)$$

$$\begin{bmatrix} Q & * \\ \Psi(A_l Q + B_l Y) & \Gamma \end{bmatrix} \ge 0, \quad \Gamma_{ss} \le \bar{\psi}_s^2, \quad l = 1 \dots L; \ s = 1 \dots q \quad (11)$$

where $Z_{jj}$ ($\Gamma_{ss}$) is the $j$th ($s$th) diagonal element of $Z$ ($\Gamma$) [18]. In this way, problem (4) is approximated by

$$\min_{\gamma, Q, Y, Z, \Gamma} \gamma, \quad \text{s.t. Equations (8)–(11)} \quad (12)$$

The closed-loop system is exponentially stable if (12) is feasible at the initial time $k = 0$.
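Solving (12) itself requires a semidefinite-programming (LMI) solver. A candidate feedback $F = YQ^{-1}$ returned by such a solver can, however, be sanity-checked with plain linear algebra: since $\Phi \mapsto \Phi^T P \Phi$ is matrix-convex for $P \ge 0$, verifying $(A_l + B_l F)^T P (A_l + B_l F) - P < 0$ at the $L$ vertices certifies quadratic stability over the whole polytope. The sketch below uses hand-picked illustrative values of $F$ and $P$ standing in for solver output; they are not results from the paper:

```python
import numpy as np

# Illustrative vertices, feedback gain and Lyapunov matrix (hand-picked,
# standing in for the (Q, Y) an LMI solver would return).
A1 = np.array([[0.8, 0.2], [0.5, 0.8]])
A2 = np.array([[0.8, 0.2], [2.5, 0.8]])
B  = np.array([[1.0], [0.0]])
F  = np.array([[-0.8, -0.2]])
P  = np.diag([20.0, 1.0])

def quadratically_stable(A_vertices, B, F, P):
    """Check Phi_l^T P Phi_l - P < 0 at every vertex, Phi_l = A_l + B F.
    By matrix convexity of Phi -> Phi^T P Phi this certifies the whole polytope."""
    for A in A_vertices:
        Phi = A + B @ F
        M = Phi.T @ P @ Phi - P
        if np.max(np.linalg.eigvalsh(M)) >= 0:  # M must be negative definite
            return False
    return True

print(quadratically_stable([A1, A2], B, F, P))  # True for this data
```

The same check fails for $F = 0$ here, since the second vertex is open-loop unstable.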
Based on [6], the authors in [9] off-line determined a look-up table of feedback laws with corresponding ellipsoidal domains of attraction. The control law was determined on-line from the look-up table. A linear interpolation of the two corresponding off-line feedback laws was chosen when the state stayed between two ellipsoids and an additional condition was satisfied.
Algorithm 1 The Basic Off-Line MPC [9]
1: Off-line, generate state points $x_1, x_2, \dots, x_N$, where $x_{h-1}$, $h = N \dots 2$, is nearer to the origin than $x_h$.
2: For $h = N \dots 1$, substitute $x(k)$ in (8) by $x_h$ and solve (12) to obtain $Q_h, Y_h, \gamma_h$, the ellipsoids $\varepsilon_h = \{x \in \mathbb{R}^n \,|\, x^T Q_h^{-1} x \le 1\}$ and the feedback laws $F_h = Y_h Q_h^{-1}$. Take appropriate choices to ensure $\varepsilon_{h-1} \subset \varepsilon_h$, $h = N \dots 2$.
3: On-line, if for each $x_h$ the following condition is satisfied:

$$Q_h^{-1} - (A_l + B_l F_{h-1})^T Q_h^{-1} (A_l + B_l F_{h-1}) > 0, \quad l = 1, 2, \dots, L \quad (13)$$

then at each time $k$ adopt the following control law:

$$F(k) = \begin{cases} F(\alpha_h(k)), & x(k) \in \varepsilon_h, \ x(k) \notin \varepsilon_{h-1}, \\ F_1, & x(k) \in \varepsilon_1, \end{cases} \quad (14)$$

where $F(\alpha_h(k)) = \alpha_h(k) F_h + (1 - \alpha_h(k)) F_{h-1}$, with $x(k)^T [\alpha_h(k) Q_h^{-1} + (1 - \alpha_h(k)) Q_{h-1}^{-1}] x(k) = 1$ and $0 \le \alpha_h(k) \le 1$.
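The interpolation coefficient $\alpha_h(k)$ in (14) solves a scalar equation. Because $\varepsilon_{h-1} \subset \varepsilon_h$ implies $x^T Q_{h-1}^{-1} x \ge x^T Q_h^{-1} x$, the left-hand side is nonincreasing in $\alpha_h(k)$ and bisection applies. A sketch with illustrative nested ellipsoids (the matrices are assumptions, not solver output):

```python
import numpy as np

def interp_alpha(x, Qh_inv, Qhm1_inv, tol=1e-10):
    """Solve x^T [a*Qh_inv + (1-a)*Qhm1_inv] x = 1 for a in [0, 1] by bisection.
    Assumes eps_{h-1} subset eps_h, so g(a) is nonincreasing with g(0) >= 1 >= g(1)."""
    def g(a):
        return (x.T @ (a * Qh_inv + (1 - a) * Qhm1_inv) @ x).item()
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) >= 1.0:
            lo = mid        # still on or outside the interpolated ellipsoid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative nested ellipsoids: eps_{h-1} has Q = I, eps_h has Q = 4I,
# and x lies strictly between their boundaries.
Qh_inv   = 0.25 * np.eye(2)   # Q_h = 4I (outer ellipsoid)
Qhm1_inv = np.eye(2)          # Q_{h-1} = I (inner ellipsoid)
x = np.array([[1.5], [0.0]])
alpha = interp_alpha(x, Qh_inv, Qhm1_inv)
```

For this data the equation is linear in $\alpha$ and the bisection recovers the closed-form root $\alpha = 20/27$.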
Compared with [6], the on-line computational burden is reduced, but the optimization problem gives worse control performance. In this paper, we propose a new algorithm with better control performance and larger domains of attraction.

3. The Improved Off-Line Technique

In calculating $F_h$, Algorithm 1 does not consider $F_i$, $i < h$. However, for the smaller ellipsoids, $F_i$, $i < h$, are better feedback laws than $F_h$. In the following, the selection of $Q_1, F_1, \gamma_1$ is the same as in Algorithm 1, but a different technique is adopted to calculate $Q_h, F_h, \gamma_h$, $h \ge 2$. For $x_h$, $h \ge 2$, we choose $Q_h, F_h$ such that, for all $x(k) \in \varepsilon_h$, the sequence $\{F_{h-1}, \dots, F_2, F_1\}$ is applied at the following times and $x(k+h-1|k) \in \varepsilon_1$.

3.1. Calculating Q2, F2

Define the optimization problem

$$\min_{u(k+i|k),\ i \ge 1} \ \max_{[A(k+i)|B(k+i)] \in \Omega,\ i \ge 1} J_{2,\mathrm{tail}}(k) = \sum_{i=1}^{\infty} \left[ \|x(k+i|k)\|_Q^2 + \|u(k+i|k)\|_R^2 \right], \quad \text{s.t. (4b), (4c) for all } i \ge 1 \quad (15)$$

and solve this problem by

$$u(k+i|k) = F_1 x(k+i|k), \quad i \ge 1 \quad (16)$$

By analogy to Equation (7),

$$\max_{[A(k+i)|B(k+i)] \in \Omega,\ i \ge 1} J_{2,\mathrm{tail}}(k) \le x(k+1|k)^T P_1 x(k+1|k) \le \gamma_1 \quad (17)$$

where $P_1 = \gamma_1 Q_1^{-1}$. In this way, problem (4a) is turned into the min-max optimization of (also refer to [18])

$$\bar{J}_2(k) := \bar{J}(k) = \|x(k)\|_Q^2 + \|u(k)\|_R^2 + \|x(k+1|k)\|_{P_1}^2 \quad (18)$$
Solve $u(k)$ by $u(k) = F_2 x(k)$ and define

$$\bar{J}_2(k) = x(k)^T \left\{ Q + F_2^T R F_2 + [A(k) + B(k) F_2]^T P_1 [A(k) + B(k) F_2] \right\} x(k) \le \gamma_2 \quad (19)$$

Introduce a slack variable $P_2$ such that

$$\gamma_2 - x_2^T P_2 x_2 \ge 0 \quad (20)$$

and

$$Q + F_2^T R F_2 + [A(k) + B(k) F_2]^T P_1 [A(k) + B(k) F_2] \le P_2 \quad (21)$$

Moreover, $u(k) = F_2 x(k)$ should satisfy the hard constraints

$$-\bar{u} \le F_2 x(k) \le \bar{u}, \quad -\bar{\psi} \le \Psi [A(k) + B(k) F_2] x(k) \le \bar{\psi}, \quad \forall x(k) \in \varepsilon_2 \quad (22)$$

and the terminal constraint

$$x(k+1|k) \in \varepsilon_1, \quad \forall x(k) \in \varepsilon_2 \quad (23)$$

Equation (23) is equivalent to $[A(k) + B(k) F_2]^T Q_1^{-1} [A(k) + B(k) F_2] \le Q_2^{-1}$. Define $Q_2 = \gamma_2 P_2^{-1}$ and $F_2 = Y_2 Q_2^{-1}$; then the following LMIs can be obtained:
$$\begin{bmatrix} 1 & * \\ x_2 & Q_2 \end{bmatrix} \ge 0, \quad Q_2 > 0 \quad (24)$$

$$\begin{bmatrix} Q_2 & * & * & * \\ A_l Q_2 + B_l Y_2 & \gamma_2 P_1^{-1} & * & * \\ Q^{1/2} Q_2 & 0 & \gamma_2 I & * \\ R^{1/2} Y_2 & 0 & 0 & \gamma_2 I \end{bmatrix} \ge 0, \quad l = 1 \dots L \quad (25)$$

$$\begin{bmatrix} Q_2 & * \\ A_l Q_2 + B_l Y_2 & Q_1 \end{bmatrix} \ge 0, \quad l = 1 \dots L \quad (26)$$

Constraint (22) is satisfied if [6]

$$\begin{bmatrix} Z_2 & Y_2 \\ Y_2^T & Q_2 \end{bmatrix} \ge 0, \quad Z_{2,jj} \le \bar{u}_j^2, \quad j = 1 \dots m \quad (27)$$

$$\begin{bmatrix} Q_2 & * \\ \Psi(A_l Q_2 + B_l Y_2) & \Gamma_2 \end{bmatrix} \ge 0, \quad \Gamma_{2,ss} \le \bar{\psi}_s^2, \quad l = 1 \dots L; \ s = 1 \dots q \quad (28)$$

Thus, $Y_2$, $Q_2$ and $\gamma_2$ can be obtained by solving

$$\min_{\gamma_2, Y_2, Q_2, Z_2, \Gamma_2} \gamma_2, \quad \text{s.t. Equations (24)–(28)} \quad (29)$$

3.2. Calculating Qh, Fh, h ≥ 3

The rationale in Section 3.1 is applied, with a small change. Define the optimization problem

$$\min_{u(k+i|k),\ i \ge h-1} \ \max_{[A(k+i)|B(k+i)] \in \Omega,\ i \ge h-1} J_{h,\mathrm{tail}}(k) = \sum_{i=h-1}^{\infty} \left[ \|x(k+i|k)\|_Q^2 + \|u(k+i|k)\|_R^2 \right] \quad (30)$$

s.t. Equations (4b) and (4c) for all $i \ge h-1$. By analogy to Equation (18), problem (4a) is turned into the min-max optimization of

$$\bar{J}_h(k) := \bar{J}(k) = \sum_{i=0}^{h-2} \left[ \|x(k+i|k)\|_Q^2 + \|u(k+i|k)\|_R^2 \right] + \|x(k+h-1|k)\|_{P_1}^2 \quad (31)$$

which is solved by

$$u(k+i|k) = F_{h-i}\, x(k+i|k), \quad i = 0 \dots h-2; \qquad u(k+i|k) = F_1 x(k+i|k), \quad i \ge h-1 \quad (32)$$

By analogy to Equation (19), define

$$\bar{J}_h(k) = x(k)^T \left\{ Q + F_h^T R F_h + [A(k) + B(k) F_h]^T P_{1,l_2 \dots l_{h-1}} [A(k) + B(k) F_h] \right\} x(k) \le \gamma_h \quad (33)$$

where, by induction, for $h \ge 3$,

$$P_{1,l_2 \dots l_{h-1}} = \left[ \prod_{i=2}^{h-1} (A_{l_i} + B_{l_i} F_i) \right]^T P_1 \left[ \prod_{i=2}^{h-1} (A_{l_i} + B_{l_i} F_i) \right] + \sum_{i=3}^{h-1} \left\{ \left[ \prod_{j=i}^{h-1} (A_{l_j} + B_{l_j} F_j) \right]^T (Q + F_{i-1}^T R F_{i-1}) \left[ \prod_{j=i}^{h-1} (A_{l_j} + B_{l_j} F_j) \right] \right\} + Q + F_{h-1}^T R F_{h-1} \quad (34)$$

and

$$P_{1,l_2 \dots l_h} = (A_{l_h} + B_{l_h} F_h)^T P_{1,l_2 \dots l_{h-1}} (A_{l_h} + B_{l_h} F_h) + Q + F_h^T R F_h \quad (35)$$

By Equation (33), introduce a slack variable $P_h = \gamma_h Q_h^{-1}$ and define $F_h = Y_h Q_h^{-1}$ such that
$$\begin{bmatrix} 1 & * \\ x_h & Q_h \end{bmatrix} \ge 0 \quad (36)$$

and

$$\begin{bmatrix} Q_h & * & * & * \\ A_{l_h} Q_h + B_{l_h} Y_h & \gamma_h P_{1,l_2 \dots l_{h-1}}^{-1} & * & * \\ Q^{1/2} Q_h & 0 & \gamma_h I & * \\ R^{1/2} Y_h & 0 & 0 & \gamma_h I \end{bmatrix} \ge 0, \quad l_i = 1 \dots L; \ i = 2 \dots h \quad (37)$$

Moreover, the terminal constraint should be equivalent to

$$[A(k) + B(k) F_h]^T S_{1,l_2 \dots l_{h-1}} [A(k) + B(k) F_h] \le Q_h^{-1} \quad (38)$$

where, by induction,

$$S_{1,l_2 \dots l_{h-1}} = \left[ \prod_{i=2}^{h-1} (A_{l_i} + B_{l_i} F_i) \right]^T Q_1^{-1} \left[ \prod_{i=2}^{h-1} (A_{l_i} + B_{l_i} F_i) \right] \quad (39)$$

Equation (38) means that, for $x(k) \in \varepsilon_h$, if the sequence $\{F_{h-1}, \dots, F_2, F_1\}$ is applied at the following times, then $x(k+h-1|k) \in \varepsilon_1$. Equation (38) can be transformed into

$$\begin{bmatrix} Q_h & * \\ \prod_{i=2}^{h-1} (A_{l_i} + B_{l_i} F_i)(A_{l_h} Q_h + B_{l_h} Y_h) & Q_1 \end{bmatrix} \ge 0, \quad l_i = 1 \dots L; \ i = 2 \dots h \quad (40)$$
Again, $u(k|k) = F_h x(k|k)$ should satisfy

$$\begin{bmatrix} Z_h & Y_h \\ Y_h^T & Q_h \end{bmatrix} \ge 0, \quad Z_{h,jj} \le \bar{u}_j^2, \quad j = 1 \dots m \quad (41)$$

and

$$\begin{bmatrix} Q_h & * \\ \Psi(A_{l_h} Q_h + B_{l_h} Y_h) & \Gamma_h \end{bmatrix} \ge 0, \quad \Gamma_{h,ss} \le \bar{\psi}_s^2, \quad l_h = 1 \dots L; \ s = 1 \dots q \quad (42)$$

Thus, $Y_h$, $Q_h$ and $\gamma_h$ can be obtained by solving

$$\min_{\gamma_h, Y_h, Q_h, Z_h, \Gamma_h} \gamma_h, \quad \text{s.t. Equations (36), (37) and (40)–(42)} \quad (43)$$
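The role of the terminal condition (38)/(40) can be illustrated by forward simulation: starting from $x(k) \in \varepsilon_h$ and applying the stored gains in order, the state should land in $\varepsilon_1$ after $h-1$ steps. A sketch with hand-picked gains and an illustrative $\varepsilon_1$ (none of these values are solver output from the paper):

```python
import numpy as np

def reaches_eps1(x0, gains, A_of_k, B, Q1_inv):
    """Apply u = F x with gains = [F_h, F_{h-1}, ..., F_2] for h-1 steps and
    test whether the resulting state lies in eps_1 = {x : x^T Q1_inv x <= 1}."""
    x = x0
    for k, F in enumerate(gains):
        A = A_of_k(k)            # any realization inside the polytope
        x = (A + B @ F) @ x
    return (x.T @ Q1_inv @ x).item() <= 1.0

# Illustrative data: one gain reused for every step (h = 4, so 3 steps).
A = np.array([[0.8, 0.2], [1.5, 0.8]])
B = np.array([[1.0], [0.0]])
F = np.array([[-0.8, -0.2]])
Q1_inv = np.eye(2) / 4.0         # eps_1 is a ball of radius 2 (assumption)
x0 = np.array([[1.9], [0.0]])
ok = reaches_eps1(x0, [F, F, F], lambda k: A, B, Q1_inv)  # True for this data
```

With only two steps the same state has not yet entered $\varepsilon_1$, which mirrors why (40) is indexed over the whole gain sequence.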
Algorithm 2 The Improved Off-Line MPC
1: Off-line, generate state points $x_1, x_2, \dots, x_N$, where $x_{h-1}$, $h = N \dots 2$, is nearer to the origin than $x_h$.
2: Substitute $x(k)$ in (8) by $x_1$ and solve (12) to obtain $Y_1$, $Q_1$, $\gamma_1$, $F_1 = Y_1 Q_1^{-1}$ and $P_1 = \gamma_1 Q_1^{-1}$.
3: For $x_2$, solve (29) to obtain $Q_2$, $Y_2$ and $F_2 = Y_2 Q_2^{-1}$.
4: For $x_h$, $h = 3 \dots N$, solve (43) to obtain $Q_h$, $Y_h$ and $F_h = Y_h Q_h^{-1}$.
5: Take appropriate choices to ensure $\varepsilon_{h-1} \subset \varepsilon_h$, $h = N \dots 2$.
6: On-line, at each time $k$ adopt the control law (14).
Theorem 1.
For systems (1) and (2) and any initial state $x(0) \in \varepsilon_N$, the off-line constrained robust MPC in Algorithm 2 robustly asymptotically stabilizes the closed-loop system.
Proof. 
Similar to [9], when $x(k)$ satisfies $\|x(k)\|_{Q_h^{-1}}^2 \le 1$ and $\|x(k)\|_{Q_{h-1}^{-1}}^2 \ge 1$, $h \ne 1$, let $Q(\alpha_h(k))^{-1} = \alpha_h(k) Q_h^{-1} + (1 - \alpha_h(k)) Q_{h-1}^{-1}$, $Z(\alpha_h(k)) = \alpha_h(k) Z_h + (1 - \alpha_h(k)) Z_{h-1}$ and $\Gamma(\alpha_h(k)) = \alpha_h(k) \Gamma_h + (1 - \alpha_h(k)) \Gamma_{h-1}$. By linear interpolation, $\begin{bmatrix} Z(\alpha_h(k)) & * \\ F(\alpha_h(k))^T & Q(\alpha_h(k))^{-1} \end{bmatrix} \ge 0$ and $\begin{bmatrix} Q(\alpha_h(k))^{-1} & * \\ \Psi(A_l + B_l F(\alpha_h(k))) & \Gamma(\alpha_h(k)) \end{bmatrix} \ge 0$, which means that $F(\alpha_h(k))$ satisfies the input and state constraints. Since $\{F_{h-1}, F_{h-2}, \dots, F_1\}$ is a stable feedback law sequence for all initial states inside $\varepsilon_{h-1}$, it can be shown that

$$\begin{bmatrix} Q_{h-1}^{-1} & * \\ \prod_{i=2}^{h-1} (A_{l_i} + B_{l_i} F_i)(A_{l_h} + B_{l_h} F_{h-1}) & Q_1 \end{bmatrix} \ge 0 \quad (44)$$

Moreover, Equation (40) is equivalent to $\begin{bmatrix} Q_h^{-1} & * \\ \prod_{i=2}^{h-1} (A_{l_i} + B_{l_i} F_i)(A_{l_h} + B_{l_h} F_h) & Q_1 \end{bmatrix} \ge 0$. Hence, by linear interpolation,

$$\begin{bmatrix} Q(\alpha_h(k))^{-1} & * \\ \prod_{i=2}^{h-1} (A_{l_i} + B_{l_i} F_i)(A_{l_h} + B_{l_h} F(\alpha_h(k))) & Q_1 \end{bmatrix} \ge 0$$

which means that $\{F(\alpha_h(k)), F_{h-1}, \dots, F_1\}$ is a stable control law sequence for all $x(k) \in \varepsilon_{h,\alpha_h(k)} = \{x \in \mathbb{R}^n \,|\, x^T Q(\alpha_h(k))^{-1} x \le 1\}$ and is guaranteed to drive $x(k+h-1|k)$ into $\varepsilon_1$, with the constraints satisfied. Inside $\varepsilon_1$, $F_1$ is applied, which is stabilizing and drives the state to the origin. ☐
If Equation (38) is required to hold automatically, more off-line feedback laws may be needed in order for $\varepsilon_N$ to cover a desired region of the state space; with this automatic satisfaction, however, better control performance can be obtained. Hence, we give the following alternative algorithm.
Algorithm 3 The Improved Off-Line MPC with an Automatic Condition
1: Off-line, proceed as in Algorithm 2 to obtain $Q_h$, $Y_h$, $\gamma_h$ and $F_h$, $h = N \dots 1$, ensuring $\varepsilon_{h-1} \subset \varepsilon_h$, $h = N \dots 2$.
2: On-line, if for each $x_h$, $h = N \dots 3$, the following condition is satisfied:

$$(A_{l_h} + B_{l_h} F_h)^T S_{1,l_2 \dots l_{h-1}} (A_{l_h} + B_{l_h} F_h) \le Q_h^{-1}, \quad l_i = 1 \dots L; \ i = 2 \dots h$$

and for $x_2$ the following condition is satisfied:

$$(A_l + B_l F_2)^T Q_1^{-1} (A_l + B_l F_2) \le Q_2^{-1}, \quad l = 1 \dots L$$

then at each time $k$ adopt the control law (14).

4. Numerical Example

4.1. Example 1

Consider the system

$$\begin{bmatrix} x^{(1)}(k+1) \\ x^{(2)}(k+1) \end{bmatrix} = \begin{bmatrix} 0.8 & 0.2 \\ \beta(k) & 0.8 \end{bmatrix} \begin{bmatrix} x^{(1)}(k) \\ x^{(2)}(k) \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(k)$$

where $\beta(k)$ is an uncertain parameter. Use $0.5 \le \beta(k) \le 2.5$ to form the polytopic description and $\beta(k) = 1.5$ to calculate the state evolution. The constraints are $|u(k+i|k)| \le 2,\ i \ge 0$. Choose the weighting matrices as $Q = I$ and $R = 1$. Consider the following two cases.

Case 1. Choose $x_h = [\xi_h, 0]^T$, $h = 1 \dots 4$, with $\xi_1 = 1$ and $\xi_h = \xi_{h-1} + \Delta\xi_h$, $h = 2 \dots 4$. Choose $\Delta\xi_h$ as large as possible such that: (i) condition (13) is satisfied, (ii) optimization problem (12) is feasible for $x_h$, and (iii) optimization problem (29) or (43) is feasible for $x_h$. Thus, we obtain $\xi_2 = 1.1$, $\xi_3 = 1.5$ and $\xi_4 = 1.9$. The initial state is $x(0) = [1.9, 0]^T$.
Apply Algorithms 1 and 2. The state and input responses for the two algorithms are shown in Figure 1 and Figure 2, respectively. The upper bounds $\gamma_h$ of the cost value for the two algorithms are $[15.4987, 18.7479, 35.5955, 66.6209]$ and $[15.4987, 18.7425, 34.9721, 58.1560]$, respectively. Moreover, denote

$$\hat{J} = \sum_{i=0}^{\infty} \left[ \|x(i)\|_Q^2 + \|u(i)\|_R^2 \right]$$

Then $\hat{J}^* = 16.3221$ for Algorithm 1 and $\hat{J}^* = 15.1033$ for Algorithm 2. The simulation results show that Algorithm 2 achieves better control performance.
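The closed-loop cost $\hat{J}$ is straightforward to evaluate by simulation. The sketch below uses the Example 1 data with a single hand-picked robustly stabilizing gain rather than the interpolated gains of Algorithms 1 and 2, so the resulting cost is only an illustration of the evaluation procedure and does not reproduce the reported values:

```python
import numpy as np

# Example 1 data: A(beta) with beta = 1.5 for simulation, Q = I, R = 1.
A = np.array([[0.8, 0.2], [1.5, 0.8]])
B = np.array([[1.0], [0.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# A hand-picked gain (assumption, not the paper's optimized gain); along this
# trajectory |u| stays within the bound |u| <= 2.
F = np.array([[-0.8, -0.2]])

def closed_loop_cost(x0, steps=200):
    """Accumulate J_hat = sum_i ||x(i)||_Q^2 + ||u(i)||_R^2 under u = F x."""
    x, J = x0, 0.0
    for _ in range(steps):
        u = F @ x
        J += (x.T @ Q @ x).item() + (u.T @ R @ u).item()
        x = A @ x + B @ u
    return J

J_hat = closed_loop_cost(np.array([[1.9], [0.0]]))
```

Because this fixed gain is conservative, its cost exceeds the interpolated-law values reported above, which is consistent with the motivation for Algorithm 2.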
Case 2. Increase $N$. Algorithm 1 becomes infeasible at $\xi_h = 3.4658$. However, Algorithm 2 is still feasible by choosing $\xi_1 = 1$, $\xi_2 = 1.1$, $\xi_3 = 1.5$, $\xi_4 = 1.9$, $\xi_5 = 3.8$, $\xi_6 = 9.0$, $\xi_7 = 12.8$, etc.

4.2. Example 2

Directly consider the system in [9]:

$$\begin{bmatrix} x^{(1)}(k+1) \\ x^{(2)}(k+1) \end{bmatrix} = \begin{bmatrix} 1 & 0.1 \\ 0 & 1 - 0.1\beta(k) \end{bmatrix} \begin{bmatrix} x^{(1)}(k) \\ x^{(2)}(k) \end{bmatrix} + \begin{bmatrix} 0 \\ 0.0787 \end{bmatrix} u(k)$$

where $\beta(k)$ is an uncertain parameter. Use $0.1 \le \beta(k) \le 10$ to form the polytopic description and $\beta(k) = 9$ to calculate the state evolution. The constraints are $|u(k+i|k)| \le 0.2$ and $|x^{(2)}(k+i+1|k)| \le 0.03$, $i \ge 0$. Choose the weighting matrices as $Q = I$ and $R = 0.00002$.

Case 1. Choose $x_h = [0.01 + 0.00025(h-1), 0]^T$ and $x_N = [0.05, 0]^T$. The initial state is $x(0) = [0.05, 0]^T$. Apply Algorithms 1 and 3. The state trajectories, state responses and input responses for the two algorithms are shown in Figure 3, Figure 4 and Figure 5, respectively. Moreover, denote $\hat{J} = \sum_{i=0}^{\infty} [\|x(i)\|_Q^2 + \|u(i)\|_R^2]$; then $\hat{J}^* = 0.0492$ for Algorithm 1 and $\hat{J}^* = 0.0975$ for Algorithm 3.

Case 2. Choose $x_h = [0.01 + 0.00025(h-1), 0]^T$, $h = 1 \dots N_N$, with $N_N$ such that the optimization problem is infeasible for $x_{N_N}$. Then, the first component of $x_{N_N}$ is $19.6006$ for Algorithm 1 and $0.0505$ for Algorithm 3.

5. Conclusions

In this paper, we have given a new algorithm for off-line robust MPC. Instead of fixing the infinite-horizon control moves as a single state feedback law, each off-line state feedback law is optimized by foreseeing that the feedback laws of the smaller ellipsoids will be applied at the following times. The new algorithm amounts to an MPC with a varying horizon, i.e., the control horizon (say $M$) varies from $M > 1$ to $M = 1$, while the original Algorithm 1 can be taken as an approach with $M = 1$. Simulation results show that the new algorithm achieves better control performance. Our future research on this topic will extend it to output feedback MPC approaches.

Author Contributions

Conceptualization, X.M.; Methodology, X.M.; Software, X.M. and H.B.; Validation, X.M., H.B. and N.Z.; Formal Analysis, X.M.; Investigation, H.B. and N.Z.; Resources, H.B.; Data Curation, X.M. and N.Z.; Writing-Original Draft Preparation, X.M.; Writing-Review & Editing, X.M. and H.B.; Visualization, H.B.; Supervision, X.M.; Project Administration, X.M.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank Baocang Ding for answering some questions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bemporad, A.; Morari, M.; Dua, V.; Pistikopoulos, E.N. The explicit linear quadratic regulator for constrained systems. Automatica 2002, 38, 3–20. [Google Scholar] [CrossRef]
  2. Mayne, D.Q.; Rawlings, J.B.; Rao, C.V.; Scokaert, P.O.M. Constrained model predictive control: Stability and optimality. Automatica 2000, 36, 789–814. [Google Scholar] [CrossRef]
  3. Gao, Y.; Dai, L.; Xia, Y.; Liu, Y. Distributed model predictive control for consensus of nonlinear second-order multi-agent systems. Int. J. Robust Nonlinear Control 2017, 27, 830–842. [Google Scholar] [CrossRef]
  4. Gao, Y.; Xia, Y.; Dai, L. Cooperative distributed model predictive control of multiple coupled linear systems. IET Control Theory Appl. 2015, 9, 2561–2567. [Google Scholar] [CrossRef]
  5. Bemporad, A.; Morari, M. Robust model predictive control: A survey. Robustness Ident. Control 1999, 245, 207–246. [Google Scholar]
  6. Kothare, M.V.; Balakrishnan, V.; Morari, M. Robust constrained model predictive control using linear matrix inequalities. Automatica 1996, 32, 1361–1379. [Google Scholar] [CrossRef] [Green Version]
  7. Kothare, M.V.; Balakrishnan, V.; Morari, M. Efficient robust predictive control. IEEE Trans. Autom. Control 2000, 45, 1545–1549. [Google Scholar]
  8. Wan, Z.; Kothare, M.V. Efficient robust constrained model predictive control with a time varying terminal constraint set. Syst. Control Lett. 2003, 48, 375–383. [Google Scholar] [CrossRef]
  9. Wan, Z.; Kothare, M.V. An efficient off-line formulation of robust model predictive control using linear matrix inequalities. Automatica 2003, 39, 837–846. [Google Scholar] [CrossRef]
  10. Li, D.; Xi, Y.; Zheng, P. Constrained robust feedback model predictive control for uncertain systems with polytopic description. Int. J. Control 2009, 82, 1267–1274. [Google Scholar] [CrossRef]
  11. Garone, E.; Casavola, A. Receding horizon control strategies for constrained LPV systems based on a class of nonlinearly parameterized Lyapunov functions. IEEE Trans. Autom. Control 2012, 57, 2354–2360. [Google Scholar] [CrossRef]
  12. Huang, H.; Li, D.; Lin, Z.; Xi, Y. An improved robust model predictive control design in the presence of actuator saturation. Automatica 2011, 47, 861–864. [Google Scholar] [CrossRef]
  13. Shi, T.; Su, H.; Chu, J. An improved model predictive control for uncertain systems with input saturation. J. Franklin Inst. 2013, 350, 2757–2768. [Google Scholar] [CrossRef]
  14. Chisci, L.; Zappa, G. Feasibility in predictive control of constrained linear systems: The output feedback case. Int. J. Robust Nonlinear Control 2002, 12, 465–487. [Google Scholar] [CrossRef]
  15. Ding, B.; Pan, H. Output feedback robust MPC for LPV system with polytopic model parametric uncertainty and bounded disturbance. Int. J. Control 2016, 89, 1554–1571. [Google Scholar] [CrossRef]
  16. Ping, X.; Ding, B. Off-line approach to dynamic output feedback robust model predictive control. Syst. Control Lett. 2013, 62, 1038–1048. [Google Scholar] [CrossRef]
  17. Ding, B.; Xi, Y.; Li, S. A synthesis approach of on-line constrained robust model predictive control. Automatica 2004, 40, 163–167. [Google Scholar] [CrossRef]
  18. Ding, B.; Xi, Y.; Cychowski, M.T.; O’Mahony, T. Improving off-line approach to robust MPC based-on nominal performance cost. Automatica 2007, 43, 158–163. [Google Scholar] [CrossRef]
Figure 1. The state responses of the closed-loop systems (dashed line for Algorithm 1, solid Algorithm 2).
Figure 2. The input responses of the closed-loop system (dashed line for Algorithm 1, solid Algorithm 2).
Figure 3. The state trajectories of the closed-loop systems (dashed line for Algorithm 1, solid Algorithm 3).
Figure 4. The state responses of the closed-loop systems (dashed line for Algorithm 1, solid Algorithm 3).
Figure 5. The input responses of the closed-loop systems (dashed line for Algorithm 1, solid Algorithm 3).

Ma, X.; Bao, H.; Zhang, N. A New Approach to Off-Line Robust Model Predictive Control for Polytopic Uncertain Models. Designs 2018, 2, 31. https://doi.org/10.3390/designs2030031