Mathematics 2018, 6(5), 86; doi:10.3390/math6050086

Article
Enhancing Strong Neighbor-Based Optimization for Distributed Model Predictive Control Systems
1 Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China
2 Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China
* Authors to whom correspondence should be addressed.
Received: 1 April 2018 / Accepted: 8 May 2018 / Published: 22 May 2018

Abstract:
This paper considers a class of large-scale systems composed of many interacting subsystems, where each subsystem is controlled by an individual controller. For this type of system, improving the optimization performance of the entire closed-loop system in a distributed framework, without requiring the entire system's information or an overly complicated communication network, is always an important topic. To this end, a distributed model predictive control (DMPC) design method is proposed in this paper, where each local model predictive controller (MPC) considers the optimization performance of its strongly coupled subsystems and communicates with them. A method to determine the strength of the coupling relationships, based on the closed-loop system's performance and the subsystem network connectivity, is proposed for selecting each subsystem's neighbors. Finally, by integrating a steady-state calculation, the designed DMPC is able to guarantee the recursive feasibility and asymptotic stability of the closed-loop system both when tracking a set point and when stabilizing the system to zero. Simulation results show the efficiency of the proposed DMPC.
Keywords:
model predictive control; distributed model predictive control; large-scale systems; neighborhood optimization

1. Introduction

There is a class of complex large-scale industrial control systems which are composed of many interacting and spatially distributed subsystems, and each subsystem is controlled by an individual controller (e.g., large-scale chemical process [1], smart micro-grid [2,3] systems, distributed generation systems [4]), where the controllers exchange information with each other through a communication network. The objective is to achieve a good global performance of the entire closed-loop system or a common goal of all subsystems by the controller network. This objective is usually to track setpoints with minimized total error or to stabilize the entire system to zeroes in the dynamic control layer.
Distributed model predictive control (DMPC) controls each subsystem by an individual local model predictive controller (MPC), and is one of the most important distributed control or optimization algorithms [1,5,6,7,8], since it not only inherits MPC's ability to achieve good optimization performance and explicitly accommodate constraints [9,10], but also has the advantages of a distributed framework: fault tolerance, less computation, and flexibility with respect to the system structure [7,11,12,13,14]. However, for coupled systems in a peer-to-peer distributed control framework, its performance is still not as good as that of centralized MPC.
Many algorithms and design methods have appeared in the literature for different types of systems and for different problems in the design of DMPCs: for example, the design of DMPC for nonlinear systems [15,16], DMPC for uncertain systems [15,17], DMPC for networked systems with time delay [18], a decentralized optimization algorithm for solving DMPC [19], the design of cooperative strategies for improving the performance of DMPC [20], the design of an event-based communication DMPC for reducing the load on the communication network [21], as well as the design of a DMPC control structure [22]. Among these algorithms, several DMPC algorithms aim at improving the closed-loop optimization performance while considering the information connectivity [5,21,23,24,25,26]. Information connectivity is considered because it directly affects the structural flexibility and error-tolerance ability. Reference [27] proposed a DMPC where each subsystem-based MPC only communicates with its directly-impacted neighbors and uses an iterative algorithm to obtain "Nash optimality". References [20,28,29] proposed cooperative DMPC, where each MPC considers the cost of the entire system and communicates with all the other MPCs to obtain "Pareto optimality". To reduce the information connectivity and increase the structural flexibility, Reference [30] proposed that each subsystem optimize all the subsystems impacted by it over the optimization horizon. The solution of this method is equal to that of the cooperative DMPC, while its communication effort is smaller, especially for sparse systems. References [31,32] gave a strategy to dynamically adjust the weighting of the performance index in cooperative MPC to avoid bad performance occurring in some subsystems.
In an effort to achieve a trade-off between the optimization performance of the entire system and the information connectivity, an intuitively appealing strategy, called impacted-region cost optimization-based DMPC, is proposed in [33,34,35], where each subsystem-based MPC only considers the cost of its own subsystem and those of the subsystems directly impacted by it. Consequently, each MPC only communicates with its neighboring MPCs. In addition, some papers pay more attention to control flexibility and information connectivity. References [14,36] provide a tube-based DMPC where all interactions are considered as disturbances and each subsystem-based MPC is solved independently. It exchanges not the state and input trajectories but the interaction constraints, to avoid the interaction consistency problem. This method is able to improve the flexibility and fault-tolerance ability of the control network [37]. References [25,37] proposed reconfigurable DMPC and plug-and-play DMPC based on dissipativity theory, which focus on the problem of how to design a DMPC that allows the addition or deletion of subsystems without any change to the existing controllers. It can be seen that the optimization performance of the entire system and structural flexibility are two conflicting key points in DMPC design. The selection of the range of each subsystem's neighbors to be optimized in each subsystem-based MPC is important in order to obtain good optimization performance without unnecessary information connections. Thus, the aim of this paper is to design an algorithm that determines the optimization range of each subsystem from the point of view of enlarging each subsystem MPC's feasible region, and thereby improves the entire system's optimization performance without overly complicated network connectivity.
Then, based on the result of this algorithm, we aim to design a stabilized neighborhood optimization-based DMPC that handles state constraints and is able to be used in target tracking.
As for target tracking, the difficulty in DMPC is to guarantee recursive feasibility. References [38,39,40] provide tracking algorithms for a series of MPC systems, where a steady-state target optimizer (SSTO) is integrated into the design of the cost function. The proposed controller is able to drive the whole system to any admissible setpoint in an admissible way, ensuring feasibility under any change of setpoint. As for distributed systems, [38] gives a DMPC for tracking based on the method introduced in [39] and a cooperative DMPC strategy. Reference [41] proposes another method based on global calculation of the tracking target; it does not require a feasible starting point for each distributed predictive controller. These methods provide good references and possible approaches for designing a tracking DMPC that considers both optimization performance improvement and network connectivity.
In this paper, a strong-coupling neighbor-based optimization DMPC is proposed. With this method, each local MPC coordinates and communicates with its strongly coupled neighbors. It takes the cost functions of its strongly coupled downstream subsystems into account in its own cost function to improve the performance of the entire closed-loop system. To reduce unnecessary network connectivity, the interaction terms of weakly coupled upstream neighbors are ignored in its prediction model and treated as bounded disturbances. In addition, the closed-loop optimization performance is used to determine which interactions should be regarded as strong couplings and be considered in the DMPC. The strategy proposed in [38] is used to guarantee recursive feasibility and stability in the target tracking problem. An asymptotically stable closed-loop system with state constraints is guaranteed.
The remainder of this paper is organized as follows. Section 2 describes the problem to be solved. Section 3 describes the design of the proposed DMPC. Section 4 analyzes the stability of the closed-loop system. Section 5 presents the simulation results to demonstrate the effectiveness of the proposed algorithm. Finally, a brief conclusion to the paper is drawn in Section 6.

2. Problem Description

Consider a large-scale discrete-time linear system composed of many interacting subsystems. The overall system model is:
$x^+ = Ax + Bu, \quad y = Cx,$ (1)
where $x \in \mathbb{R}^{n_x}$ is the system state, $u \in \mathbb{R}^{n_u}$ is the current control input, $y \in \mathbb{R}^{n_y}$ is the controlled output, and $x^+$ is the successor state. The state and control input applied at sample time $t$ are denoted as $x(t)$ and $u(t)$, respectively. Moreover, there are hard constraints on the system state and control input. That is, for $t \geq 0$:
$x(t) \in X, \quad u(t) \in U,$ (2)
where $X \subset \mathbb{R}^{n_x}$ and $U \subset \mathbb{R}^{n_u}$ are compact convex polyhedra containing the origin in their interiors.
Given Model (1), without loss of generality, the overall system is divided into $m$ subsystems, denoted as $S_i$, $i \in \mathcal{I}_{0:m}$. Thus, $u = (u_1, u_2, \ldots, u_m)$ and $x = (x_1, x_2, \ldots, x_m)$; the subsystem model for $S_i$, $i \in \mathcal{I}_{0:m}$, is:
$x_i^+ = A_{ii} x_i + B_{ii} u_i + \sum_{j \in N_i} B_{ij} u_j,$ (3)
where $N_i$ is the set of subsystems that send inputs to the current subsystem $S_i$. For subsystem $S_j$, $j \in N_i$, $S_j$ couples with $S_i$ by sending its control input $u_j$ to $S_i$. In particular, $j \in N_i$ if $B_{ij} \neq 0$. Given the overall system constraint sets $X$, $U$, $x_i$ and $u_i$ satisfy the hard constraints $x_i(t) \in X_i$, $u_i(t) \in U_i$.
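As a concrete illustration of the decomposition from (1) to (3), the following sketch (with matrices invented for this example, not taken from the paper) checks that the stacked subsystem updates reproduce the overall model when the state matrix is block-diagonal and the interaction enters only through the inputs:

```python
import numpy as np

# Illustrative 2-subsystem partition (matrices are made up for this sketch).
# Overall model (1): x+ = A x + B u, with block-diagonal A and input coupling
# through the off-diagonal blocks of B.
A11 = np.array([[0.5, 0.6], [0.0, 0.66]])
A22 = np.array([[0.6, 0.1], [0.0, 0.71]])
A = np.block([[A11, np.zeros((2, 2))], [np.zeros((2, 2)), A22]])

B11 = np.array([[0.1], [0.7]]); B12 = np.array([[0.0], [0.04]])
B21 = np.array([[0.0], [0.3]]); B22 = np.array([[0.5], [1.0]])
B = np.block([[B11, B12], [B21, B22]])

x1 = np.array([1.0, -1.0]); x2 = np.array([0.5, 0.2])
u1 = np.array([0.3]); u2 = np.array([-0.2])

# Subsystem update (3): x_i+ = A_ii x_i + B_ii u_i + sum_{j in N_i} B_ij u_j
x1_next = A11 @ x1 + B11 @ u1 + B12 @ u2   # N_1 = {2}
x2_next = A22 @ x2 + B22 @ u2 + B21 @ u1   # N_2 = {1}

# Global update (1) must agree with the stacked subsystem updates.
x_next = A @ np.concatenate([x1, x2]) + B @ np.concatenate([u1, u2])
```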
In this paper, for ease of analysis, the definitions of neighbor (upstream neighbor) and downstream neighbor are given here.
Definition 1.
Given subsystem $S_i$ with state evolution Equation (3), define $S_j$, $j \in N_i$, which sends input information to $S_i$, as a neighbor (upstream neighbor) of $S_i$. Moreover, for arbitrary $S_j$, $j \in N_i$, since $S_i$ receives input information from $S_j$, $S_i$ is defined as a downstream neighbor of $S_j$.
Denote the tracking target as $y_t$. Assume that $(A, B)$ is stabilizable and the state is measurable. The aim of a tracking problem with a given target $y_t$ is to design a controller which drives $y(t) \to y_t$ in an admissible way as $t \to \infty$. Hence, the original control objective function of the overall system is:
$V_N^{origin}(x, y_t; \mathbf{u}) = \sum_{k=0}^{N-1} \left( \|Cx(k) - \hat{y}_t\|_{Q_o}^2 + \|u(k) - \hat{u}_t\|_{R}^2 \right) + \|Cx(N) - \hat{y}_t\|_{P_o}^2,$ (4)
where $P_o > 0$, $Q_o > 0$, and $R > 0$ are the weighting matrices, and $\hat{u}_t$ is the steady input corresponding to $y_t$.
The problem considered here is to design a DMPC algorithm to control a physical network, in which the controllers coordinate with each other while considering the following performance indicators:
  • to achieve a good optimization performance of the entire closed-loop system.
  • to guarantee the feasibility of target tracking.
  • to simplify the information connectivity among controllers to guarantee good structural flexibility and error-tolerance of the distributed control framework.
To solve this problem, in this paper, an enhanced strong neighbor-based optimization DMPC is designed, and is detailed in the next section.

3. DMPC Design

In an interacting distributed system, the state evolution of each subsystem is affected by the optimal control decisions of its upstream neighbors. Each subsystem considers whether these effects will help to improve the performance of the entire closed-loop system. On the other hand, these impacts have different strengths for different downstream subsystems; some of the effects are so small that they can be ignored. If these weakly coupled downstream subsystems' cost functions were involved in each subsystem's optimization problem, additional information connections would arise with little improvement in the performance of the closed-loop system, and the increase in information connections would hinder the error tolerance and flexibility of the distributed control system. Thus, each subsystem-based MPC takes the cost functions of its strongly interacting downstream subsystems into account to improve the closed-loop performance of the entire system, and receives information from its strongly coupled neighbors.

3.1. Strong-Coupling Neighbor-Based Optimization for Tracking

Given that the coupling degrees between different subsystems differ substantially, here we let each subsystem cooperate with its strongly coupled neighbors while treating the weakly coupled ones as disturbances. Define $N_i^{strong}$ as the set of strongly coupled neighboring subsystems and $N_i^{weak}$ as the set of weakly coupled neighbors. The rule for deciding the strongly coupled subsystems is detailed in Section 3.4.
Then, for $S_i$, we have:
$x_i^+ = A_{ii} x_i + B_{ii} u_i + \sum_{j \in N_i^{strong}} B_{ij} u_j + w_i,$ (5)
where
$w_i = \sum_{j \in N_i^{weak}} B_{ij} u_j,$
$w_i \in W_i, \quad W_i = \bigoplus_{j \in N_i^{weak}} (B_{ij} U_j),$
$N_i^{weak} \cup N_i^{strong} = N_i = \{ j \mid B_{ij} \neq 0,\ j \neq i \}.$
The deviation $w_i$ collects the influence of the weakly coupled upstream neighbors in $N_i^{weak}$. $w_i$ is contained in a convex and compact set $W_i$ which contains the origin.
If the weak coupling influence $w_i$ is neglected, a simplified model of $S_i$ is acquired. That is:
$\bar{x}_i^+ = A_{ii} \bar{x}_i + B_{ii} \bar{u}_i + \sum_{j \in N_i^{strong}} B_{ij} \bar{u}_j.$ (6)
Here $\bar{x}_i$, $\bar{u}_i$, and $\bar{u}_j$, $j \in N_i^{strong}$, represent the state and inputs of the simplified subsystem model which neglects the weakly coupled upstream neighbors' influence $w_i$.
The simplified overall system model with the new coupling relation matrix $\bar{B}$ is:
$\bar{x}^+ = A \bar{x} + \bar{B} \bar{u},$ (7)
where $\bar{x} = (\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_m)$ and $\bar{u} = (\bar{u}_1, \bar{u}_2, \ldots, \bar{u}_m)$ represent the states and inputs of this simplified model.
Considering the target-tracking problem for the simplified model, terminal constraints on the state prediction are needed to ensure output tracking of a given target $y_t$. If the current target $y_t$ were used directly as the tracking target in the controller optimization, then whenever $y_t$ changes the terminal constraints would have to change immediately; the optimal solution from the previous time may not satisfy the terminal constraints brought by the changed $y_t$, which violates the recursive feasibility of the system. Thus, a steady-state optimization is integrated into the MPC for tracking, where an artificial feasible tracking target $y_s$ is introduced as an intermediate variable and treated as an optimized variable. By allowing the tracking point $y_s$ to remain equal to the previous target, recursive feasibility is not violated by a target change.
The intermediate target $y_s$ and its state $\bar{x}_s$ and input $\bar{u}_s$ should satisfy the simplified system's steady-state equations:
$\begin{bmatrix} A - I_{n_x} & \bar{B} & 0 \\ C & 0 & -I \end{bmatrix} \begin{bmatrix} \bar{x}_s \\ \bar{u}_s \\ y_s \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$ (8)
$\begin{bmatrix} \bar{x}_s \\ \bar{u}_s \end{bmatrix} = M_y\, y_s.$ (9)
Here $M_y$ is a suitable matrix. That is, the inputs $\bar{u}_s$ and states $\bar{x}_s$ of the simplified model corresponding to target $y_s$ can be expressed in terms of $y_s$. The equation is based on the premise of Lemma 1.14 in [42]. If Lemma 1.14 does not hold, an $M_\theta$ and $\theta$ satisfying $[\bar{x}_s; \bar{u}_s] = M_\theta \theta$ can be found, which replaces $y_s$ as the variable to be solved.
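The steady-state relation (8) reduces to a single linear solve. A minimal sketch, using the $S_1$ matrices from Section 5 with the couplings dropped for simplicity:

```python
import numpy as np

# Steady-state target calculation (8)-(9) for a single subsystem, using the
# S1 matrices from Section 5 and ignoring couplings for this sketch.
A = np.array([[0.5, 0.6], [0.0, 0.66]])
B = np.array([[0.1], [0.7]])
C = np.array([[0.0, 1.0]])
nx, nu = 2, 1

y_s = np.array([1.0])  # desired admissible output target

# Stack equation (8):
# [A - I  B] [x_s]   [0  ]
# [C      0] [u_s] = [y_s]
M = np.block([[A - np.eye(nx), B], [C, np.zeros((1, nu))]])
z = np.linalg.solve(M, np.concatenate([np.zeros(nx), y_s]))
x_s, u_s = z[:nx], z[nx:]
```

The solve succeeds whenever the stacked matrix is nonsingular, which is the condition behind the existence of $M_y$ in (9).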
For a manual tracking target $y_s$ for the overall system, we have $y_s = \{y_{1,s}, \ldots, y_{i,s}, \ldots, y_{m,s}\}$. That is, given $y_s$, each subsystem $S_i$ gets a sub-tracking target $y_{i,s}$. Similar to (9), $\bar{x}_{i,s}$ and $\bar{u}_{i,s}$ are solved.
With the simplified model and the artificial tracking target $y_{i,s}$, according to (9), in the strong-coupling neighbor-based optimization MPC algorithm, the objective function optimized in subsystem $S_i$, $i \in [1, m]$, is set as $V_i^N(x_i, y_t; x_i, \mathbf{u}_{i,0:N-1}, y_s)$ as follows:
$V_i^N(x_i, y_t; x_i, \mathbf{u}_{i,0:N-1}, y_s) = \sum_{k=0}^{N-1} \left( \|x_i(k) - \bar{x}_{i,s}\|_{Q_i}^2 + \|u_i(k) - \bar{u}_{i,s}\|_{R_i}^2 \right) + \|x_i(N) - \bar{x}_{i,s}\|_{P_i}^2 + V_0(y_{i,s}, y_{i,t}) + \sum_{k=0}^{N-1} \sum_{h \in H_i} \|x_h(k) - \bar{x}_{h,s}\|_{Q_h}^2 + \|x_h(N) - \bar{x}_{h,s}\|_{P_h}^2,$ (10)
where $x_i$, $y_t$ are the given initial state and target, $\mathbf{u}_{i,0:N-1}$ are the input predictions over the next $N$ sample times, and $y_s$ is the admissible target. $Q_i = C_i^\top Q_{o,i} C_i > 0$ and
$H_i = \{ h \mid i \in N_h^{strong},\ h \in [1, m],\ h \neq i \}.$ (11)
Here, $S_i$'s controller design takes the strongly coupled downstream neighbors' performance as part of its optimization objective. That is, the current subsystem $S_i$'s optimal solution is decided by its own cost and by those of the downstream neighbors in the set $H_i$, which are strongly impacted by $S_i$.
Next, we will use the simplified model in (6) with only strong couplings to solve the tracking problem (10) for each subsystem. To guarantee feasibility and stability, the following definitions and assumptions are given.
One important issue is how to deal with the deviation caused by neglecting the weakly coupled neighbors' inputs. Here, robust positively invariant sets are adopted so that the deviation of the states is bounded and the real system's states are kept in $X$.
Definition 2.
(Robust positively invariant set control law). Given $e = x - \bar{x}$, which represents the dynamics of the error between the original plant and the simplified model:
$e^+ = A_K e + w,$ (12)
with $A_K = A + BK$. A set $\phi$ is called a robust positively invariant set for system (12) if $A_K \phi \oplus W \subseteq \phi$, and the corresponding control law is called a robust positively invariant set control law.
The definition of a robust positively invariant set illustrates that for the system $x^+ = Ax + Bu + w$, if $\phi$ and a robust positively invariant set control law $K$ exist, then for $e(0) = x(0) - \bar{x}(0) \in \phi$, the trajectory of the original system at arbitrary time $t$, denoted $x(t)$, can be kept in $x(t) \in \bar{x}(t) \oplus \phi$.
Based on this definition, the dynamics of the deviation $x_i - \bar{x}_i$ introduced by neglecting weakly coupled neighbors can be handled. For subsystem $S_i$ given by (5), the deviation evolves as:
$e_i^+ = A_{ii} e_i + B_{ii} u_{i,e} + w_i,$
where $e_i = x_i - \bar{x}_i$ is the deviation of the simplified model from the original model and $u_{i,e}$ is the control law. A set $\phi_i$ is a robust positively invariant set for $S_i$ if $(A_{ii} + B_{ii} K_i) e_i + w_i \in \phi_i$ for all $e_i \in \phi_i$ and all $w_i \in W_i$. Here $u_{i,e} = K_i e_i$ is a feedback control input, and we denote $K_i$ as the robust positively invariant set control law for $S_i$. Then, it is easy to obtain $x_i(t) \in \bar{x}_i(t) \oplus \phi_i$ for all times $t$. Let $(\bar{x}_i(t), \bar{u}_i(t)) \in F_i$, where $F_i = (X_i \times U_i) \ominus (\phi_i \times K_i \phi_i)$; then the original system state and input satisfy $(x_i(t), K_i(x_i(t) - \bar{x}_i(t)) + \bar{u}_i(t)) \in X_i \times U_i$. Thus, with the help of the robust positively invariant set, the optimization of the original system is transferred to the simplified model. For the overall system, we have $K = \mathrm{diag}(K_1, K_2, \ldots, K_m)$.
With Definition 2, if the deviation brought by omitting weakly coupled neighbors is controlled in a robust positively invariant (RPI) set $\phi_i$ with control law $K_i$, and the simplified model (7) has its control law and state $\bar{u}_i$, $\bar{x}_i$ confined in $U_i \ominus K_i \phi_i$ and $X_i \ominus \phi_i$, respectively, the local subsystem will have a feasible solution for the optimization.
As for the manually selected tracking target $y_s$, based on the overall simplified model in (7), the following definition is given:
Definition 3.
(Tracking invariant set control law). Consider that the overall system (7) is controlled by the following control law:
$\bar{u} = \bar{K}(\bar{x} - \bar{x}_s) + \bar{u}_s = \bar{K} \bar{x} + L y_s.$ (13)
Let $A + \bar{B} \bar{K}$ be Hurwitz; then this control law steers system (7) to the steady state and input $(\bar{x}_s, \bar{u}_s) = M_y y_s$. $\bar{K}$ is denoted as the tracking invariant set control law.
Denote by $\Omega_{\bar{K}}$ the invariant set for tracking, i.e., the set of initial states and steady outputs that can be stabilized by control law (13) while fulfilling the system constraints throughout the evolution. For any $(x(0), y_s) \in \Omega_{\bar{K}}$, the trajectory of the system $\bar{x}^+ = A \bar{x} + \bar{B} \bar{u}$ controlled by $\bar{u} = \bar{K} \bar{x} + L y_s$ is confined in $\Omega_{\bar{K}}$ and tends to $(\bar{x}_s, \bar{u}_s) = M_y y_s$.
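A minimal numerical illustration of the tracking control law (13), again with the $S_1$ matrices from Section 5. Since this $A$ is already stable, $\bar{K} = 0$ is an admissible choice for the sketch; any stabilizing gain behaves the same way:

```python
import numpy as np

# Sketch of the tracking control law (13) with the S1 matrices from Section 5.
# A is Schur stable, so K_bar = 0 is admissible here.
A = np.array([[0.5, 0.6], [0.0, 0.66]])
B = np.array([[0.1], [0.7]])
C = np.array([[0.0, 1.0]])

# Steady pair (x_s, u_s) for the target y_s = 1, via the steady-state equations (8).
M = np.block([[A - np.eye(2), B], [C, np.zeros((1, 1))]])
z = np.linalg.solve(M, np.array([0.0, 0.0, 1.0]))
x_s, u_s = z[:2], z[2:]

K_bar = np.zeros((1, 2))
x = np.zeros(2)
for _ in range(200):
    u = K_bar @ (x - x_s) + u_s   # control law (13)
    x = A @ x + B @ u             # closed loop converges to (x_s, u_s)
```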
Under Definitions 2 and 3, before introducing the enhanced strong neighbor-based optimization DMPC, some assumptions for closed-loop feasibility and stability are given as follows. The corresponding theorem and the analysis of stability and feasibility are given in Section 4.
Assumption 1.
The eigenvalues of $A_{ii} + B_{ii} K_i$ are in the interior of the unit circle. $\phi_i$ is an admissible robust positively invariant set for $S_i$'s deviation $x_i - \bar{x}_i$ subject to constraints $F_i$, and the corresponding feedback control law is $u_{i,e} = K_i e_i$.
Assumption 2.
Let $\Omega_{\bar{K}}$ be a tracking invariant set for the simplified system (7) subject to constraints $F = \{ \{(\bar{x}_1, \bar{u}_1), \ldots, (\bar{x}_m, \bar{u}_m)\} \mid \forall i,\ (\bar{x}_i, \bar{u}_i) \in (X_i \times U_i) \ominus (\phi_i \times \bar{K}_i \phi_i) \}$, and let the corresponding feedback gain matrix be $\bar{K} = \{\bar{K}_1, \bar{K}_2, \ldots, \bar{K}_m\}$.
Assumption 3.
For $Q = \mathrm{blockdiag}\{Q_1, Q_2, \ldots, Q_m\}$, $R = \mathrm{blockdiag}\{R_1, R_2, \ldots, R_m\}$, and $P = \mathrm{blockdiag}\{P_1, P_2, \ldots, P_m\}$, it holds that:
$(A + \bar{B} \bar{K})^\top P (A + \bar{B} \bar{K}) - P = -(Q + \bar{K}^\top R \bar{K}).$ (14)
Assumption 1 ensures that, with the feedback control law $u_{i,e} = K_i e_i$, $i \in \mathcal{I}_{0:m}$, the state estimated by the simplified model (7) stays near the real system's trajectory before the system reaches the target. In Assumption 2, $\Omega_{\bar{K}}$ is set as a terminal constraint of the DMPC. Assumption 3 is used in the proof of the convergence of the system, presented in Appendix A.
So far, the strong-coupling neighbor-based optimization DMPC algorithm, which is solved iteratively, can be defined as follows:
Firstly, denote the optimal objective of subsystem $S_i$ as $V_i^N$. According to (10), at iteration step $p$, $V_i^N$ satisfies:
$V_i^N(x_i, y_t, p; \bar{x}_i, \bar{\mathbf{u}}_{i,0:N-1}, y_{i,s}) = \sum_{k=0}^{N-1} \left( \|\bar{x}_i(k) - \bar{x}_{i,s}\|_{Q_i}^2 + \|\bar{u}_i(k) - \bar{u}_{i,s}\|_{R_i}^2 \right) + \|\bar{x}_i(N) - \bar{x}_{i,s}\|_{P_i}^2 + V_0(y_{i,s}, y_{i,t}) + \sum_{k=0}^{N-1} \sum_{h \in H_i} \|\bar{x}_h(k) - \bar{x}_{h,s}^{[p-1]}\|_{Q_h}^2 + \|\bar{x}_h(N) - \bar{x}_{h,s}^{[p-1]}\|_{P_h}^2.$ (15)
Compute the optimal solution
$(\bar{x}_i^*(0), \bar{\mathbf{u}}_{i,0:N-1}^*, y_{i,s}^*) = \arg\min V_i^N(x_i, y_t, p; \bar{x}_i, \bar{\mathbf{u}}_{i,0:N-1}, y_{i,s}),$ (16)
subject to the constraints:
$\bar{x}_{h_i}(k+1) = A_{h_i h_i} \bar{x}_{h_i}(k) + \sum_{h_j \in N_{h_i}^{strong}} B_{h_i h_j} \bar{u}_{h_j}^{[p]}(k) + B_{h_i h_i} \bar{u}_{h_i}(k),$ (17)
$(\bar{x}_{h_i}(k), \bar{u}_{h_i}(k)) \in F, \quad F: (X_{h_i} \times U_{h_i}) \ominus (\phi_{h_i} \times K_{h_i} \phi_{h_i}),$
$(\bar{x}(N), y_s) \in \Omega_{\bar{K}},$
$x_i - \bar{x}_i(0) \in \phi_i,$
$M_y y_{i,s} = (\bar{x}_{i,s}, \bar{u}_{i,s}),$
with $h_i \in H_i \cup \{i\}$, and $\phi_i$, $\Omega_{\bar{K}}$ defined in Assumptions 1 and 2, respectively. The optimization (16) updates $S_i$'s initial state, its inputs over $N$ steps $\bar{\mathbf{u}}_{i,0:N-1}$, and its current tracking target $y_{i,s}$, based on the information from the subsystems in $H_i$.
Secondly, set
$\bar{\mathbf{u}}_{i,0:N-1}^{[p]} = \gamma_i \bar{\mathbf{u}}_{i,0:N-1}^* + (1 - \gamma_i) \bar{\mathbf{u}}_{i,0:N-1}^{[p-1]},$ (18)
$y_{i,s}^{[p]} = \gamma_i y_{i,s}^* + (1 - \gamma_i) y_{i,s}^{[p-1]},$ (19)
$\bar{x}_i^{[p]}(0) = \gamma_i \bar{x}_i^*(0) + (1 - \gamma_i) \bar{x}_i^{[p-1]}(0),$ (20)
$\sum_{i=1}^{m} \gamma_i = 1, \quad \gamma_i > 0.$ (21)
Here $\gamma_i \in \mathbb{R}$, $0 < \gamma_i < 1$, guarantees the consistency of the optimization problem. That is, at the end of the current sample time, all shared variables converge.
After that, we take
$p = p + 1$
and iterate until the solutions converge. Then, we have $\bar{x}_i^* = \bar{x}_i^{[p]}$, $\bar{\mathbf{u}}_i^* = \bar{\mathbf{u}}_{i,0:N-1}^{[p]}$, $y_{i,s}^* = y_{i,s}^{[p]}$.
Finally, when the solution has converged, according to Assumption 1, take the control law of $S_i$ as
$u_{i,0}^* = \bar{u}_{i,0}^* + K_i (x_i - \bar{x}_i^*),$ (22)
where $K_i$ is the robust positively invariant set control law and $\bar{u}_{i,0}^*$ is the first element of $\bar{\mathbf{u}}_i^*$. For better understanding, the algorithm is also summarized in Algorithm 1.
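The convex-combination updates (18)–(20) can be illustrated on a toy coupled quadratic cost (invented for this sketch; in the algorithm each agent solves the MPC problem (16) instead):

```python
# Toy illustration of the iterative scheme: two agents repeatedly minimize a
# coupled quadratic cost over their own variable, then blend the new solution
# with the previous iterate using weights gamma_i, as in (18)-(20).
#   f(u1, u2) = (u1 - 1)^2 + (u2 + 1)^2 + (u1 - u2)^2
g1 = g2 = 0.5                      # gamma_1 + gamma_2 = 1, gamma_i > 0
u1 = u2 = 0.0                      # warm start
for _ in range(100):
    u1_star = (1.0 + u2) / 2.0     # argmin over u1 with u2 at iterate p-1
    u2_star = (u1 - 1.0) / 2.0     # argmin over u2 with u1 at iterate p-1
    u1 = g1 * u1_star + (1 - g1) * u1   # convex-combination update
    u2 = g2 * u2_star + (1 - g2) * u2
# The iterates converge to the joint optimum (u1, u2) = (1/3, -1/3).
```

The blending keeps the shared variables consistent across agents while still converging to the jointly optimal point, which is the role $\gamma_i$ plays in the algorithm.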
Algorithm 1 Enhancing Strong Neighbor-Based Optimization DMPC
(Algorithm listing provided as an image in the original article.)
In this algorithm, an iterative strategy is used to guarantee that the distributed control solution $(\bar{x}(0), \bar{\mathbf{u}}_{0:N-1}, y_s)$ is consistent. The selection of the warm start, the solution given to each subsystem at initial iteration step 0, is described in the next section.

3.2. Warm Start

At a new sample time, with updated system states, the warm start is chosen on the principle that it satisfies the simplified system's constraints in (17), so that the feasibility of the real subsystem's solution is guaranteed. The warm start is designed by the following algorithm:
Algorithm 2 Warm Start for Iterative Algorithm
(Algorithm listing provided as an image in the original article.)
The algorithm shows that two choices are provided for the warm start. One is to acquire a solution from the tracking invariant set control law $\bar{K}$, with the simplified model prediction $(\bar{x}_i^*(1|t), y_{i,s}^*(t))$ as the initial state and tracking target, respectively. The other is to take the solution from the simplified model prediction at time $t$. Both of them satisfy the constraints of (17). Note that the second option is only considered once the subsystem enters the tracking invariant set.

3.3. RPI Control Law and RPI Set

Here the constraints of a single subsystem are considered. Given that for $S_i$ we have $x_i \in X_i$ and $u_i \in U_i$, express the constraints as inequalities: $X_i = \{x_i : |h_i^\top x_i| \leq 1\}$ and $U_i = \{u_i : |l_i^\top u_i| \leq 1\}$. The robust positively invariant set $\phi_i$ is denoted as $\phi_i = \{x_i : x_i^\top P_i x_i \leq 1\}$.
With the definition of a robust positively invariant set in Definition 2, $\phi_i$ should ensure that $x_i \in X_i$ for all $x_i \in \phi_i$. That is:
$|h_i^\top x_i| \leq 1, \quad \forall x_i \in \phi_i.$ (23)
Based on the definitions of $N_i^{strong}$ and $N_i^{weak}$, $W_i$ is decided according to the constraints of $N_i^{weak}$. For the deviation caused by neglecting the subsystems in $N_i^{weak}$, a minimization of the robust positively invariant set $\phi_i$ can be obtained by introducing a parameter $\gamma_i \in [0, 1]$.
The parameter $\gamma_i$ controls the size of the robust positively invariant set $\phi_i$ by further restricting $\phi_i \subseteq \gamma_i X_i$. That is:
$\min \gamma_i \quad \mathrm{s.t.} \quad |h_i^\top x_i| \leq \gamma_i, \quad \forall x_i \in \phi_i.$ (24)
We should also consider the input constraint $U_i$:
$|l_i^\top K_i x_i| \leq 1, \quad \forall x_i \in \phi_i,$ (25)
and the constraint brought by the robust positive invariance of $\phi_i$ itself should be considered.
Based on the above analysis, and referring to [43], we can obtain $\gamma_i$ and $K_i$ by solving the following linear matrix inequality (LMI) optimization problem:
$\min_{W_i, Y_i, \gamma_i} \gamma_i,$ (26)
$\begin{bmatrix} \lambda_i W_i & * & * \\ 0 & 1 - \lambda_i & * \\ A_{ii} W_i + B_{ii} Y_i & w_i & W_i \end{bmatrix} > 0, \quad \forall w_i \in \mathrm{vert}(W_i),$ (27)
$\begin{bmatrix} 1 & * \\ Y_i^\top l_i & W_i \end{bmatrix} > 0,$ (28)
$\begin{bmatrix} \gamma_i & * \\ W_i h_i & W_i \end{bmatrix} > 0,$ (29)
and $K_i = Y_i W_i^{-1}$. Thus, we get the RPI control law $K_i$ and the value $\gamma_i$, which indicates the size of $\phi_i$. To obtain $\phi_i$ itself, we use the procedure in Reference [43].
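The LMIs (26)–(29) require an SDP solver. For intuition only, the following sketch instead bounds the size of the error set numerically for an assumed gain $K_i = 0$ (admissible here because the example $A_{ii}$, taken from $S_1$ in Section 5, is already stable) and a weak coupling $B_{ij} = [0, 0.04]^\top$ with $|u_j| \leq 1$; it is not the paper's LMI procedure:

```python
import numpy as np

# Numerically bound the error set for e+ = (A_ii + B_ii K_i) e + w with K_i = 0.
A_K = np.array([[0.5, 0.6], [0.0, 0.66]])
w_max = np.array([0.0, 0.04])          # elementwise bound on w

# ||e(t)|| <= sum_k |A_K^k| w_max elementwise (geometric series, truncated;
# the tail is negligible since rho(A_K) = 0.66).
bound = np.zeros(2)
Ak = np.eye(2)
for _ in range(200):
    bound += np.abs(Ak) @ w_max
    Ak = Ak @ A_K

# Monte-Carlo check: the error trajectory stays inside the box bound.
rng = np.random.default_rng(0)
e = np.zeros(2)
worst = np.zeros(2)
for _ in range(2000):
    w = rng.uniform(-1, 1) * np.array([0.0, 0.04])
    e = A_K @ e + w
    worst = np.maximum(worst, np.abs(e))
```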

3.4. Determination of Strong Coupling

There are many measures of the strength of the interactions among subsystems, and different measures lead to different optimization performance. This paper focuses on the performance and connectivity of subsystems. Thus, the determination of strongly coupled neighbors is based on their influence on the size of the current subsystem's robust positively invariant (RPI) set and on the subsystem connectivity.
On the one hand, as defined in Definition 2, $\phi_i$ is a robust positively invariant set for subsystem $S_i$ described by $x_i^+ = A_{ii} x_i + B_{ii} u_i + \sum_{j \in N_i} B_{ij} u_j$ when $u_j$ is set to zero. Given that $\phi_i$ absorbs the deviation caused by neglecting some of the inputs $u_j$, $j \in N_i$, the size of $\phi_i$ is expected to be sufficiently small. The benefit is that the solution of (15) gets a larger feasible domain; we consider that a larger feasible domain gives the solution more degrees of freedom and brings better subsystem performance. Based on this idea, to decide which neighbors to place in the weak-coupling set $N_i^{weak}$, we choose a neighbor collection that results in a small robust positively invariant set $\phi_i$. The measurement of the size of $\phi_i$ via $\gamma_i$ was introduced in the previous section. On the other hand, connectivity, as a measure of subsystem topological complexity, is easy to obtain. Next, we give the numerical analysis.
Denote an arbitrary option for splitting the neighbors into strong- and weak-coupling sets as $C_{i,(d)}$, $d \in D_i$. $D_i = \{1, \ldots, d_{max}\}$ is the label set of the possible distributions of $S_i$'s neighbors, where $d_{max}$ is the number of feasible distributions, satisfying $d_{max} \leq 2^{\mathrm{size}(N_i)}$. For a better understanding of $C_{i,(d)}$, take an arbitrary neighbor set $N_i = \{j_1, j_2, j_3\}$ as an example. If we treat $j_1$ as a strongly coupled neighbor and $j_2$, $j_3$ as weak, we have, for some $d \in D_i$:
$C_{i,(d)} = \{ (N_i^{strong}, N_i^{weak}) \mid N_i^{strong} = \{j_1\},\ N_i^{weak} = \{j_2, j_3\} \}.$ (30)
Option $C_{i,(d)}$ results in a specified (normalized) connectivity amount $c_{i,(d)} \in [0, 1]$ and an RPI set denoted as $\phi_{i,(d)} \subseteq \gamma_{i,(d)} X_i$. Here $c_{i,(d)}$ is defined as:
$c_{i,(d)} = \frac{\mathrm{size}(N_i^{strong})}{\mathrm{size}(N_i)} \in [0, 1].$ (31)
To find the optimal distribution $C_{i,(d^*)}$ of strongly and weakly coupled neighbors, we take:
$C_{i,(d^*)} = \arg\min_{C_{i,(d)}, W_{i,(d)}, Y_{i,(d)}, \gamma_{i,(d)}} \left( (\gamma_{i,(1)} + \mu_i c_{i,(1)}), \ldots, (\gamma_{i,(d)} + \mu_i c_{i,(d)}), \ldots, (\gamma_{i,(d_{max})} + \mu_i c_{i,(d_{max})}) \right),$ (32)
where, for $d \in D_i$,
$\begin{bmatrix} \lambda_i W_{i,(d)} & * & * \\ 0 & 1 - \lambda_i & * \\ A_{ii} W_{i,(d)} + B_{ii} Y_{i,(d)} & w_i & W_{i,(d)} \end{bmatrix} > 0, \quad \forall w_i \in \mathrm{vert}(W_{i,(d)}),$ (33)
$\begin{bmatrix} 1 & * \\ Y_{i,(d)}^\top l_i & W_{i,(d)} \end{bmatrix} > 0,$ (34)
$\begin{bmatrix} \gamma_{i,(d)} & * \\ W_{i,(d)} h_i & W_{i,(d)} \end{bmatrix} > 0,$ (35)
$0 \leq \gamma_{i,(d)} \leq 1.$ (36)
In this optimization, $\mu_i$ is a weight coefficient, and $\gamma_{i,(d)}$, $W_{i,(d)}$, $Y_{i,(d)}$, $X_{i,(d)}$ represent $\gamma_i$, $W_i$, $Y_i$, $X_i$ under distribution $C_{i,(d)}$. $C_{i,(d^*)}$ is the optimal solution.
This optimization means that, to make the optimal decision on strongly and weakly coupled neighbors while taking both connectivity and performance into account, the combination of subsystem connectivity and the size of $\phi_i$ is minimized. To decide whether a neighbor $S_j$, $j \in N_i$, is strongly or weakly coupled, the size of $\phi_i$ is expected to be small so that the solution of (15) gets a larger feasible domain; at the same time, the connectivity is expected to be small to reduce the system's topological complexity.
The optimization thus chooses the neighbors that result in a smaller robust positively invariant set $\phi_i$ and lower connectivity, and the solution $C_{i,(d^*)}$ reflects both considerations. With this method, even though "weak-coupling" neighbors are omitted and a deviation is introduced, the simplified model retains a large degree of freedom for designing the tracking control law while reducing the connectivity at the same time. Thus, good system performance and error tolerance can be obtained.
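The selection rule (32) can be sketched as a plain enumeration over neighbor splits. The `rpi_gamma` values below are invented stand-ins for the LMI solutions (33)–(36):

```python
from itertools import combinations

# Hedged sketch of the neighbor-selection rule (32): enumerate all splits of
# N_i into strong/weak sets and pick the one minimizing gamma + mu * c.
# rpi_gamma stands in for the LMI solution; its values are invented.
N_i = frozenset({1, 2, 3})
rpi_gamma = {                      # gamma_{i,(d)}: smaller = larger feasible region
    frozenset(): 0.50, frozenset({1}): 0.10, frozenset({2}): 0.30,
    frozenset({3}): 0.45, frozenset({1, 2}): 0.06, frozenset({1, 3}): 0.09,
    frozenset({2, 3}): 0.20, frozenset({1, 2, 3}): 0.05,
}
mu = 0.2                           # weight between RPI size and connectivity

def score(strong):
    c = len(strong) / len(N_i)     # normalized connectivity (31)
    return rpi_gamma[strong] + mu * c

options = [frozenset(s) for r in range(len(N_i) + 1)
           for s in combinations(sorted(N_i), r)]
best = min(options, key=score)     # optimal strong-coupling set C_{i,(d*)}
```

With these illustrative values, keeping only neighbor 1 strong wins: its RPI set is nearly as small as the all-strong option while it needs only one communication link.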

4. Stability and Convergence

In this section, the feasibility and stability results for the strong-coupling neighbor-based DMPC are given. Denote
$$X_N = \left\{ x \in X \;\middle|\; \exists\, v = (x, \mathbf{u}_{0:N-1}, y_s),\ u(k) \in U,\ k \in I_{0:N-1},\ y_s \in Y_s,\ \text{s.t. } v \in Z_N \right\},$$
$$Z_N = \left\{ v \;\middle|\; u(k) \in U,\ k \in I_{0:N-1},\ y_s \in Y_s,\ x(k; x, \mathbf{u}) \in X,\ k \in I_{0:N-1},\ x(N; x, \mathbf{u}) \in \Omega_{\bar{K}} \right\}.$$
Here $x(k; x, \mathbf{u})$ denotes the state predicted $k$ sampling times ahead from the current state, and $Y_s$ is the set of feasible tracking targets under the hard constraints on $x$ and $u$.
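Membership in $Z_N$ can be checked by forward simulation: propagate the candidate input sequence through the model and test the state, input, and terminal constraints. A minimal sketch follows, where box constraints and a terminal ball stand in for the sets $X$, $U$, and $\Omega_{\bar K}$ of the paper; the bounds and radius are illustrative assumptions.

```python
# Check v = (x, u_{0:N-1}, y_s) in Z_N by simulating x(k; x, u) forward.

def in_Z_N(x0, u_seq, A, B, x_bound, u_bound, term_center, term_radius):
    x = list(x0)
    for u in u_seq:
        if abs(u) > u_bound:                    # u(k) in U
            return False
        if any(abs(xi) > x_bound for xi in x):  # x(k; x, u) in X, k = 0..N-1
            return False
        # x(k+1) = A x(k) + B u(k)
        x = [sum(A[i][j] * x[j] for j in range(len(x))) + B[i] * u
             for i in range(len(x))]
    # terminal constraint: x(N; x, u) in Omega_Kbar (here: a small ball)
    return sum((xi - ci) ** 2
               for xi, ci in zip(x, term_center)) <= term_radius ** 2

A = [[0.5, 0.6], [0.0, 0.66]]   # S_1 state matrix from Section 5
B = [0.1, 0.7]
feasible = in_Z_N([0.2, 0.1], [0.0] * 10, A, B,
                  x_bound=5.0, u_bound=1.0,
                  term_center=[0.0, 0.0], term_radius=0.5)
```

Since this $A$ is Schur stable (eigenvalues 0.5 and 0.66), the zero input sequence steers the small initial state into the terminal ball.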
Theorem 1.
Assume that Assumptions 1–3 hold. Then, for every initial state $x(0)$ with tracking target $y_t$ such that $v(0) \in Z_N$, the closed-loop system under the strong-coupling neighbor-based DMPC algorithm is recursively feasible and asymptotically stable, and converges to $\hat{y}_s \oplus C\phi_k$, where $\hat{y}_s = (\hat{y}_{1,s}, \ldots, \hat{y}_{m,s})$ and $\hat{y}_{i,s} = \arg\min V_0(y_{i,s}, y_{i,t})$ among the feasible targets.
Proof. 
Feasibility is proved by Lemmas A1 and A2; stability is proved by Lemmas A3 and A4 in Appendix A. ☐

5. Simulation

The simulation takes as an example an industrial system model with five interacting subsystems, whose coupling strengths vary substantially. The relationships of the subsystems and the designed MPCs are shown in Figure 1.
In Figure 1, dotted lines represent weak couplings and solid lines represent strong couplings. Under the strategy defined in this paper, weak couplings are neglected; as a result, only some of the subsystems participate in cooperation, as can be seen in Figure 1. The subsystem models are given as follows:
$$S_1:\ x_{1,t+1} = \begin{bmatrix} 0.5 & 0.6 \\ 0 & 0.66 \end{bmatrix} x_{1,t} + \begin{bmatrix} 0.1 \\ 0.7 \end{bmatrix} u_{1,t} + \begin{bmatrix} 0 \\ 0.04 \end{bmatrix} u_{2,t}, \quad y_{1,t} = \begin{bmatrix} 0 & 1 \end{bmatrix} x_{1,t},$$
$$S_2:\ x_{2,t+1} = \begin{bmatrix} 0.6 & 0.1 \\ 0 & 0.71 \end{bmatrix} x_{2,t} + \begin{bmatrix} 0.5 \\ 1 \end{bmatrix} u_{2,t} + \begin{bmatrix} 0 \\ 0.3 \end{bmatrix} u_{1,t} + \begin{bmatrix} 0 \\ 0.01 \end{bmatrix} u_{3,t}, \quad y_{2,t} = \begin{bmatrix} 0 & 1 \end{bmatrix} x_{2,t},$$
$$S_3:\ x_{3,t+1} = \begin{bmatrix} 0.7 & 0.2 \\ 0.1 & 0.4 \end{bmatrix} x_{3,t} + \begin{bmatrix} 0.9 \\ 1 \end{bmatrix} u_{3,t} + \begin{bmatrix} 0 \\ 0.4 \end{bmatrix} u_{2,t} + \begin{bmatrix} 0 \\ 0.05 \end{bmatrix} u_{4,t}, \quad y_{3,t} = \begin{bmatrix} 0 & 1 \end{bmatrix} x_{3,t},$$
$$S_4:\ x_{4,t+1} = \begin{bmatrix} 0.9 & 0.7 \\ 0 & 0.6 \end{bmatrix} x_{4,t} + \begin{bmatrix} 0.4 \\ 0.4 \end{bmatrix} u_{4,t} + \begin{bmatrix} 0.3 \\ 0.6 \end{bmatrix} u_{3,t} + \begin{bmatrix} 0 \\ 0.01 \end{bmatrix} u_{5,t}, \quad y_{4,t} = \begin{bmatrix} 0 & 1 \end{bmatrix} x_{4,t},$$
$$S_5:\ x_{5,t+1} = \begin{bmatrix} 0.8 & 0 \\ 0.5 & 0.78 \end{bmatrix} x_{5,t} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_{5,t} + \begin{bmatrix} 0.4 \\ 0.2 \end{bmatrix} u_{4,t}, \quad y_{5,t} = \begin{bmatrix} 0 & 1 \end{bmatrix} x_{5,t}.$$
By the strong-coupling neighbor-based DMPC, the connections $S_2$–$S_1$, $S_3$–$S_2$, $S_4$–$S_3$, and $S_5$–$S_4$ are neglected. For the five subsystems of the given model, the values $\gamma_1$, $\gamma_2$, $\gamma_3$, $\gamma_4$, and $\gamma_5$, which evaluate the system performance, are obtained by the optimization in Section 3.4; they are:
$$(\gamma_1, \gamma_2, \gamma_3, \gamma_4, \gamma_5) = (0.54, 0.66, 0.72, 0.53, 0).$$
Among them, $\gamma_5 = 0$ indicates that subsystem $S_5$ has no weak-coupling upstream neighbors. Additionally, the robust positively invariant set feedback control laws are
$$\{K_1, K_2, K_3, K_4\} = \left\{ [0.119 \ \ 0.762]^T, [0.171 \ \ 0.434]^T, [0.316 \ \ 0.251]^T, [0.724 \ \ 0.966]^T \right\}.$$
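As a sanity check, one can verify that the terminal feedback renders the decoupled $S_1$ dynamics Schur stable. The sign convention $u_1 = -K_1 x_1$ is an assumption here (the extraction does not preserve the signs of the gains); under it, the closed-loop matrix $A_{11} - B_1 K_1$ has spectral radius below one:

```python
# Spectral radius of the S_1 closed loop under the assumed law u = -K_1 x.
import math

A = [[0.5, 0.6], [0.0, 0.66]]
B = [0.1, 0.7]
K = [0.119, 0.762]

# closed-loop matrix Acl = A - B K  (outer product B K is 2x2)
Acl = [[A[i][j] - B[i] * K[j] for j in range(2)] for i in range(2)]

# spectral radius of a 2x2 matrix from its trace and determinant
tr  = Acl[0][0] + Acl[1][1]
det = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]
disc = tr * tr - 4 * det
if disc >= 0:
    rho = max(abs((tr + math.sqrt(disc)) / 2),
              abs((tr - math.sqrt(disc)) / 2))
else:
    rho = math.sqrt(det)   # complex conjugate pair: |lambda| = sqrt(det)

stable = rho < 1.0   # Schur stability of the closed loop
```

Here the eigenvalues form a complex pair with modulus about 0.32, so the closed loop is comfortably stable under the assumed sign convention.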
The optimization horizon is $N = 10$ sampling times. Take $Q = I_{10 \times 10}$ and $R = I_{5 \times 5}$. To accelerate the iterative process, in both iterative algorithms the termination conditions are $\|u_i^{[p]} - u_i^{[p-1]}\|_2 \le 10^{-3}$ or $p > 100$; the iteration terminates as soon as either condition is satisfied.
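The termination rule can be sketched as a generic iteration wrapper. The "solver" used below is a toy contraction map standing in for one round of the DMPC iteration, purely to exercise the stopping logic:

```python
# Stop when ||u^[p] - u^[p-1]||_2 <= 1e-3 or p > 100, whichever comes first.
import math

def iterate_until_converged(u0, one_iteration, tol=1e-3, p_max=100):
    u_prev, p = u0, 0
    while True:
        p += 1
        u = one_iteration(u_prev)
        if math.sqrt(sum((a - b) ** 2 for a, b in zip(u, u_prev))) <= tol:
            return u, p          # converged within tolerance
        if p > p_max:
            return u, p          # iteration cap reached
        u_prev = u

# toy fixed-point map u <- 0.5 u + 0.1 (each entry converges to 0.2)
u_final, iters = iterate_until_converged(
    [1.0] * 5, lambda u: [0.5 * ui + 0.1 for ui in u])
```

The error halves every round, so convergence to the $10^{-3}$ tolerance takes 11 iterations, well under the cap of 100.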
The following shows the system performance under the strong-coupling neighbor-based DMPC algorithm. Different set-points were chosen to test stability: three groups of set-points were given to verify the feasibility and stability of the system. For comparison, control results of the cooperative DMPC strategy, which cooperates with all neighbors, are also presented. The simulation took a total of 74.3 s for 90 sampling times. The performance comparison of strong-coupling neighbor-based DMPC (SCN-DMPC) with cooperative DMPC, in which each subsystem uses the full system's information in its controller, is shown in Figure 2, Figure 3 and Figure 4.
Figure 2 shows the state evolution of each subsystem. The curves of SCN-DMPC and cooperative DMPC are close to each other because the weak couplings in the given example are tiny compared with the strong couplings and thus have little impact on the system dynamics. Moreover, the SCN-DMPC optimization remained feasible throughout and kept the system stable under a changing tracking target. Figure 3 shows the input difference between the two algorithms; their control laws are almost identical. Tracking results are shown in Figure 4. A small offset appeared in subsystems $S_1$ and $S_3$, which could be eliminated by adding an observer; all other subsystems tracked the steady-state target without offset. The simulation results of Figure 2, Figure 3 and Figure 4 verify the stability and good optimization performance of the closed-loop system under SCN-DMPC.
The reason the curves of SCN-DMPC and cooperative DMPC are so close in Figure 2, Figure 3 and Figure 4 is that the weak couplings in the given example are tiny compared with the strong couplings, even though a small difference exists. Specifically, given the input weight coefficients in each subsystem's state equation, the coupling deviations of the subsystems satisfy $w_1 \le 0.04$, $w_2 \le 0.01$, $w_3 \le 0.05$, and $w_4 \le 0.01$. The effects of these disturbances are very small compared with those of each subsystem's own inputs, and under the robust feedback control law they have little influence on the system dynamics. Moreover, the performance of the simplified model under SCN-DMPC equals that under the control law optimizing the global performance of the simplified system. As a result, the system performance under SCN-DMPC is close to that under cooperative DMPC. In circumstances where the weak interactions are comparable to the impact of each subsystem's own inputs, omitting the weak couplings sacrifices part of the performance to achieve lower network connectivity; the influence on the system dynamics is then greater, and the simulation results can differ.
Moreover, the mean square errors between the outputs of the closed-loop systems under strong-coupling neighbor-based DMPC and cooperative DMPC are listed in Table 1. The total error over the five subsystems is only about 3.5, which illustrates the good optimization performance of SCN-DMPC.
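The quoted total is simply the sum of the per-subsystem MSEs reported in Table 1:

```python
# Per-subsystem output MSEs from Table 1; their sum is the total error
# quoted as "about 3.5" in the text.
mse = {"S1": 0.5771, "S2": 1.1512, "S3": 0.7111, "S4": 0.1375, "S5": 0.9162}
total = sum(mse.values())   # 3.4931
```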
The connectivities are compared in Table 2, which shows that when strong-coupling neighbor-based DMPC is applied, the total number of information connections is reduced to eight; that is, five connections are avoided compared with cooperative DMPC.
Above all, the simulation results show that the proposed SCN-DMPC achieves performance close to that of cooperative DMPC with a significant reduction in information connectivity.

6. Conclusions

In this paper, a strong-coupling neighbor-based optimization DMPC method is proposed to decide the cooperation among subsystems, where each subsystem's MPC considers the optimization performance and evolution of its strong-coupling downstream subsystems and communicates with them. For strongly coupled subsystems, the influence on the state and objective function is considered; for weakly coupled subsystems, the influence is neglected in the cooperative design. A method based on the closed-loop system's performance and network connectivity is proposed to determine the strength of the coupling relationships among subsystems. The feasibility and stability of the closed-loop system in the target-tracking case are analyzed. Simulation results show that the proposed SCN-DMPC achieves performance similar to that of the DMPC which does not neglect the information or influence of weakly coupled subsystems, while the connectivity is significantly decreased.

Author Contributions

Shan Gao developed the main algorithm, contributed to the stability analysis, designed the simulation and prepared the draft of the paper. Yi Zheng and Shaoyuan Li proposed the idea of Enhancing Strong Neighbor-based coordination strategy. They contributed to the main theory of the work and gave the inspiration and guidance of the strong-coupling neighbors’ determination, the algorithm design and stability analysis.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (61673273, 61590924).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In the strong-coupling neighbor-based optimization DMPC algorithm proposed in this paper, the optimal solution of each subsystem equals the solution obtained by optimizing the overall system objective function. That is:
$$\arg\min V_N(x_i, y_t; \bar{x}_i, \bar{\mathbf{u}}_{i,0:N-1}, y_s) = \arg\min V_{iN}(x_i, y_t; \bar{x}_i, \bar{\mathbf{u}}_{i,0:N-1}, y_s),$$
where
$$V_N(x_i, y_t; \bar{x}_i, \bar{\mathbf{u}}_{i,0:N-1}, y_s) = \sum_{k=0}^{N-1} \left( \|x(k) - \bar{x}_s\|_Q^2 + \|u(k) - \bar{u}_s\|_R^2 \right) + \|x(N) - \bar{x}_s\|_P^2 + V_0(y_s, y_t).$$
Thus, for ease of analysis, here we take the overall objective function $V_N$ to prove feasibility and stability, and define:
$$v^{[p]} = \{ \bar{x}_1^{[p]}, \ldots, \bar{x}_m^{[p]}, \bar{\mathbf{u}}_{1,0:N-1}^{[p]}, \ldots, \bar{\mathbf{u}}_{m,0:N-1}^{[p]}, y_{1,s}^{[p]}, \ldots, y_{m,s}^{[p]} \},$$
$$v^{*} = \{ \bar{x}_1^{*}, \ldots, \bar{x}_m^{*}, \bar{\mathbf{u}}_{1,0:N-1}^{*}, \ldots, \bar{\mathbf{u}}_{m,0:N-1}^{*}, y_{1,s}^{*}, \ldots, y_{m,s}^{*} \}.$$
Denote v i [ p ] as
$$v_i^{[p]} = \big( \bar{x}_1^{[p-1]}, \ldots, \bar{x}_i^{[p]}, \ldots, \bar{x}_m^{[p-1]}, \bar{\mathbf{u}}_{1,0:N}^{[p-1]}, \ldots, \bar{\mathbf{u}}_{i,0:N}^{[p]}, \ldots, \bar{\mathbf{u}}_{m,0:N}^{[p-1]}, y_{1,s}^{[p-1]}, \ldots, y_{i,s}^{[p]}, \ldots, y_{m,s}^{[p-1]} \big).$$
Lemma A1.
Feasibility. If $v(t) \in Z_N$, then $v(t+1) \in Z_N$.
Proof. 
Feasibility is proved by analyzing the simplified model.
Consider the warm-start algorithm. At time $t+1$, the warm start for an arbitrary $S_i$ is $\bar{v}_i^{[0]}(t+1) = \big( \bar{x}_i^{[0]}(0 \mid t+1), \bar{\mathbf{u}}_{i,0:N}^{[0]}(t+1), y_{i,s}^{[0]}(t+1) \big)$. With this warm start, all constraints in (17) are satisfied.
From the $p$th to the $(p+1)$th iteration, given that
$$\big( v_1^{[p+1]}, v_2^{[p+1]}, \ldots, v_m^{[p+1]} \big)$$
are feasible, their convex combination $v^{[p+1]}$ is obviously feasible as well. As a result, the converged simplified solution $v^*(t+1)$ is feasible. Based on this, the real system solution stays within the invariant set around $v^*(t+1)$ and satisfies the system constraints. That is, $(x(t+1), v(t+1)) \in Z_N$. ☐
Lemma A2.
Convergence. Here we have
$$V_N(x(t), y_t; v^{[p+1]}(t)) \le V_N(x(t), y_t; v^{[p]}(t)).$$
Proof. 
Since it has
$$V_N(x, y_t; v^{[p+1]}) \le \gamma_1 V_N(x, y_t; v_1^{[p+1]}) + \ldots + \gamma_m V_N(x, y_t; v_m^{[p+1]}) \le \gamma_1 V_N(x, y_t; v^{[p]}) + \ldots + \gamma_m V_N(x, y_t; v^{[p]}) = V_N(x, y_t; v^{[p]}),$$
convergence is proved. ☐
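The first inequality in the chain above rests on the convexity of $V_N$ in the decision variable: for weights $\gamma_i \ge 0$ summing to one, the cost of the convex combination never exceeds the combination of the costs. A toy quadratic cost (not the paper's $V_N$) illustrates this numerically:

```python
# Convexity check: V(sum_i gamma_i v_i) <= sum_i gamma_i V(v_i)
# for a simple convex (quadratic) cost and arbitrary convex weights.

def V(v):
    return sum(vi ** 2 for vi in v)

gammas = [0.2, 0.3, 0.5]                       # nonnegative, sum to 1
vs = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]      # three candidate solutions

v_mix = [sum(g * v[k] for g, v in zip(gammas, vs)) for k in range(2)]
lhs = V(v_mix)                                  # cost of the combination
rhs = sum(g * V(v) for g, v in zip(gammas, vs)) # combination of costs
```

Here `lhs` is 1.70 while `rhs` is 2.40, consistent with Jensen's inequality and with the monotonicity argument of Lemma A2.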
According to the Lemma above, we have:
$$V_N(x(t), y_t; v^{[p+1]}(t)) \le V_N(x(t), y_t; v^{[0]}(t)).$$
Lemma A3.
Local Boundedness. When $(x(t), y_s^{[0]}(t)) \in \Omega_{\bar{K}}$, then
$$V_N(x(t), y_t; v^{[p]}(t)) \le \|x(t) - \bar{x}_s^{[0]}(t)\|_P^2 + V_0(y_s^{[0]}(t), y_t).$$
Proof. 
Firstly,
$$V_N(x(t), y_t; v^{[0]}(t)) \le \|x(t) - \bar{x}_s^{[0]}(t)\|_P^2 + V_0(y_s^{[0]}(t), y_t)$$
will be proved.
According to the definition of warm start and Assumption 3, here we have:
$$\begin{aligned} V_N(x(t), y_t; v^{[0]}(t)) &\le \sum_{k=0}^{N-1} \left( \|x(k;t) - \bar{x}_s^{[0]}(t)\|_Q^2 + \|u(k) - \bar{u}_s^{[0]}(t)\|_R^2 \right) + \|x(N;t) - \bar{x}_s^{[0]}(t)\|_P^2 + V_0(y_s^{[0]}(t), y_t) \\ &= \sum_{k=0}^{N-1} \|x(k;t) - \bar{x}_s^{[0]}(t)\|_{Q + \bar{K}^T R \bar{K}}^2 + \|x(N;t) - \bar{x}_s^{[0]}(t)\|_P^2 + V_0(y_s^{[0]}(t), y_t) \\ &= \|x(t) - \bar{x}_s^{[0]}(t)\|_P^2 + V_0(y_s^{[0]}(t), y_t). \end{aligned}$$
Thus, we have:
$$V_N(x(t), y_t; v^{[p]}(t)) \le V_N(x(t), y_t; v^{[0]}(t)) \le \|x(t) - \bar{x}_s^{[0]}(t)\|_P^2 + V_0(y_s^{[0]}(t), y_t).$$
 ☐
Lemma A4.
Convergence. Let Assumption 3 hold. For any feasible solution $z(0) = (x(0), v(0)) \in Z_N$, the system converges to the equilibrium point $z_s$. That is,
$$V_N(x(t+1), y_t; \bar{v}^*(t+1)) - V_N(x(t), y_t; \bar{v}^*(t)) \le -\|x(t) - \bar{x}_s(t)\|_Q^2.$$
The final tracking points of the simplified system (the optimal solution of $V_N$) are $(\bar{x}^*(x_s, y_t), \bar{u}^*(x_s, y_t)) = (x_s, u_s)$, which coincide with the centralized optimal solution.
Proof. 
For simplified system optimization, we have
$$V_N(x(t+1), y_t; v^*(t+1)) \le V_N(x(t+1), y_t; v^{[0]}(t+1)),$$
and also
$$V_N(x(t+1), y_t; v^{[0]}(t+1)) \le V_N(x(t), y_t; v^*(t)) - \|x(t) - \bar{x}_s(t)\|_Q^2 - \|u(t) - \bar{u}_s(t)\|_R^2.$$
According to (A11) and (A12), we have
$$V_N(x(t+1), y_t; v^*(t+1)) - V_N(x(t), y_t; v^*(t)) \le -\|x(t) - \bar{x}_s(t)\|_Q^2 - \|u(t) - \bar{u}_s(t)\|_R^2 \le -\|x(t) - \bar{x}_s(t)\|_Q^2.$$
Since the robust positively invariant set feedback control law $K = \mathrm{diag}(K_1, K_2, \ldots, K_m)$ keeps the real states within the invariant set around the simplified model's trajectory, the stability of the real system is proved. ☐

Figure 1. An illustration of the structure of a distributed system and its distributed control framework. MPC: model predictive control; DMPC: distributed MPC.
Figure 2. States of each subsystem under the control of strong-coupling neighbor-based DMPC (SCN-DMPC) and cooperative DMPC.
Figure 3. Inputs of each subsystem under the control of SCN-DMPC and cooperative DMPC.
Figure 4. Output of each subsystem under the control of SCN-DMPC and cooperative DMPC.
Table 1. Mean square error (MSE) of outputs between SCN-DMPC and cooperative DMPC.
Item    S_1       S_2       S_3       S_4       S_5
MSE     0.5771    1.1512    0.7111    0.1375    0.9162
Table 2. Comparison of system connectivity with different control methods.
System      SCN-DMPC    Cooperative DMPC
S_1         1           2
S_2         2           3
S_3         2           4
S_4         2           3
S_5         1           1
Total (Σ)   8           13

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).