Distributed Model Predictive Control of Steam / Water Loop in Large Scale Ships

Abstract: In modern steam power plants, ever-increasing complexity requires great reliability and flexibility of the control system. Hence, in this paper, the feasibility of a distributed model predictive control (DiMPC) strategy with an extended prediction self-adaptive control (EPSAC) framework is studied, in which the multiple controllers allow each sub-loop to fulfill its own flexible requirements. Meanwhile, model predictive control can guarantee good performance for a system with constraints. The performance is compared against a decentralized model predictive control (DeMPC) and a centralized model predictive control (CMPC). In order to improve the computing speed, a multiple objective model predictive control (MOMPC) is proposed. For the stability of the control system, the convergence of the DiMPC is discussed. Simulation tests are performed on the five different sub-loops of the steam/water loop. The results indicate that the DiMPC may achieve similar performance to the CMPC while outperforming the DeMPC method.


Introduction
The steam/water loop is an important part of a steam power plant, which plays a role in feed water supply and recycling processes. It is a highly complex and constrained system with multiple variables and interactions [1]. Meanwhile, due to the harsh and challenging operating environment (sea winds, sea waves and sea currents) and the various operating modes (automatic start-up, reverse, stop, setting speed, emergency stop and reduction of revolutions) [2], it is difficult to design a controller that delivers satisfactory performance for the steam/water loop. In order to design an effective approach that overcomes the difficulties mentioned above and improves the reliability and flexibility of the system, the feasibility of a distributed model predictive control is studied in this paper.
Nowadays, the major concern in a steam power plant is not only tracking performance, but also other criteria such as consumed energy or safety in harsh conditions. Apart from realizing load tracking, the controllers should also fulfill the flexibility requirements of each sub-loop. A general way to improve flexibility is to apply distributed controllers in the system [3]. Multiple controllers also improve the reliability of the system [4]. Concurrently, the properties that can be proved for the equivalent CMPC problem are enjoyed by the DiMPC. Then, by the means provided in [33], the stability of the CMPC is proved; meanwhile, the stability of the DiMPC is guaranteed.
The rest of the paper is organized as follows: Section 2 describes the steam/water loop, and the modeling of the system is shown. The details about the DiMPC, CMPC and DeMPC are introduced in Section 3. Section 4 shows the results and analysis. Finally, some conclusions are drawn in Section 5.

Description of the Steam/Water Loop
As shown in Figure 1, the steam/water loop is composed of two main loops: the water loop, indicated by the green line, and the steam loop, indicated by the red line. The system works as follows. Firstly, the water from the water tank goes to the condenser. Then it is deoxygenated in the deaerator. After being pumped to the boiler, the feed water goes into the mud drum due to its high density. The feed water is turned into a mixture of steam and water in the risers. Following this, the steam is separated from the mixture and heated in the superheater. Finally, the steam, with a certain pressure and temperature, is used in the steam turbine. The used steam is sent back to the exhaust manifold and most of it is condensed in the condenser, while the remaining part is used in the deaerator for deoxygenation [28]. The sub-loops have strong interactions with each other, such as the water levels of the deaerator and the condenser, and the pressures of the deaerator and the exhaust manifold system. Hence, it is challenging to obtain a suitable controller for the steam/water loop.
In order to explore the characteristics of the steam/water loop, staircase experiments are conducted on the system around the operating point. The normalized outputs and corresponding static gains are shown in Figures 2 and 3, respectively. In the experiment, 10% step changes are imposed on one input variable at a time, while keeping the other inputs constant. The results show that the static gains change considerably with the input changes, which indicates the nonlinearity of the system.
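The static gain estimation behind Figures 2 and 3 can be sketched as follows. This is a minimal illustration, not the paper's code; the numeric data are hypothetical values chosen only to show how drifting gains reveal nonlinearity.

```python
def static_gains(u_steps, y_steady, u0, y0):
    """Estimate static gains from a staircase test: each gain is the
    steady-state output change divided by the input change, both
    measured relative to the operating point (u0, y0)."""
    return [(y - y0) / (u - u0) for u, y in zip(u_steps, y_steady)]

# Hypothetical data: +10%, +20%, +30% steps around operating point u0 = 0.5
u_steps = [0.6, 0.7, 0.8]
y_ss = [1.10, 1.25, 1.45]        # assumed steady-state outputs after each step
gains = static_gains(u_steps, y_ss, u0=0.5, y0=1.0)
# gains that drift with the input amplitude indicate a nonlinear process
```

For a linear process all three gains would coincide; here they drift from 1.0 to 1.5, which is the kind of behavior the staircase test exposes.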

By linearization around the operating point, the model of the system is obtained as shown in (1), with five inputs and five outputs. The input vector u = [u1, u2, u3, u4, u5] contains the positions of the valves that control the flow rates of feedwater to the drum (u1), exhaust steam from the exhaust manifold (u2), exhaust steam to the deaerator (u3), water from the deaerator (u4) and water to the condenser (u5), respectively. The output vector y = [y1, y2, y3, y4, y5] contains the water level in the drum (y1), the pressure in the exhaust manifold (y2), the water level (y3) and pressure (y4) in the deaerator, and the water level of the condenser (y5), respectively. The ranges and operating points of the output variables are listed in Table 1; the operating points are obtained according to a real large-scale ship.
where G11 = 0.0000987 ... The rates and amplitudes of the five inputs are constrained. The input units are normalized percentage values of the valve opening (i.e., 0 represents a fully closed valve and 1 a completely open one); the input rates are measured in percentage per second.
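The amplitude and rate constraints on the valve inputs can be illustrated with a small saturation routine. This is a generic sketch (the limit value 0.05 per sample is a made-up example, not a number from the paper):

```python
def clip_input(u_prev, u_req, du_max, u_min=0.0, u_max=1.0):
    """Enforce the rate limit (at most du_max change per sample) first,
    then the amplitude limit (0 = fully closed valve, 1 = fully open)."""
    u = min(max(u_req, u_prev - du_max), u_prev + du_max)   # rate constraint
    return min(max(u, u_min), u_max)                        # amplitude constraint

# Valve at 0.50 opening; the controller requests 0.70, but the
# (hypothetical) rate limit of 0.05 per sample allows only 0.55.
u_next = clip_input(0.50, 0.70, du_max=0.05)
```

In an MPC setting these limits enter the optimization as hard constraints rather than being applied after the fact, but the saturation form above is what the normalized ranges describe.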

Introduction for Centralized MPC (CMPC)
The following is a short summary of the extended prediction self-adaptive control (EPSAC); more details are found in [34]. Consider a discrete-time system described below, where k is the discrete-time index, y(k) is the measured output of the system, x(k) is the model output, and w(k) is the model/process disturbance. The output of the model x(k) depends on the past outputs and inputs, and can be expressed generically as in (3). In EPSAC, the future input consists of two parts:

u(k + l|k) = u_base(k + l|k) + δu(k + l|k),

where u_base(k + l|k) is the lth predicted basic future control scenario and δu(k + l|k) are the optimizing future control actions, both based on the states and inputs at time k. Then the lth predicted output is obtained by applying (5) as the control effort.
y(k + l|k) = y_base(k + l|k) + y_opt(k + l|k),

where y_base(k + l|k) is the effect of the base future control and y_opt(k + l|k) is the effect of the optimizing future control actions δu(k|k), ..., δu(k + Nc − 1|k). The term y_opt(k + l|k) can be expressed as a discrete-time convolution:

y_opt(k + l|k) = h_l δu(k|k) + h_{l−1} δu(k + 1|k) + ... + g_{l−Nc+1} δu(k + Nc − 1|k),

where h_1, ..., h_Np are the impulse response coefficients, g_1, ..., g_Np are the step response coefficients, and Nc, Np are the control horizon and prediction horizon, respectively. Thus the matrix formulation is obtained, where N1 indicates the time delay in the system. The disturbance term w(k) is defined as a filtered white noise signal [30]. When there is no information concerning the noise, the disturbance model used in (3) is chosen as an integrator, to ensure zero steady-state error in reference tracking, where e(k) denotes the white noise sequence. In order to apply EPSAC to a multiple-input multiple-output (MIMO) system, the individual error of each output is minimized separately. The cost function for the steam/water system with five sub-loops is given in (11), where r_i (i = 1, 2, ..., 5) are the setpoints for the five loops. By defining G_ij as the influence from the jth input on the ith output, (11) is rewritten as (13), with R_i denoting the reference for loop i and Y_i the predicted output of loop i. Taking the input and output constraints into account, finding the minimum of the cost function becomes an optimization problem solved by quadratic programming.
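The prediction structure y = y_base + G δu can be made concrete with a small example. The code below is an illustrative sketch, not the paper's implementation: the step-response coefficients are invented, and only the unconstrained least-squares solution is shown (the paper solves a constrained QP).

```python
import numpy as np

def step_response_matrix(g, Np, Nc):
    """Lower-triangular dynamic matrix built from step-response
    coefficients g[0..Np-1], so that y_opt = G @ delta_u."""
    G = np.zeros((Np, Nc))
    for i in range(Np):
        for j in range(Nc):
            if i >= j:
                G[i, j] = g[i - j]
    return G

g = np.array([0.2, 0.5, 0.8, 0.9, 1.0])   # hypothetical step-response coefficients
G = step_response_matrix(g, Np=5, Nc=2)
du = np.array([1.0, 0.0])                  # a single optimizing move at time k
y_opt = G @ du                             # reproduces the step response itself

# Unconstrained EPSAC move: least-squares fit of G @ du to (r - y_base)
r, y_base = np.ones(5), np.zeros(5)
du_star = np.linalg.lstsq(G, r - y_base, rcond=None)[0]
```

With input or output constraints added, the same G matrix appears in the quadratic program that the paper minimizes.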
where A is a matrix and b a vector defined according to the constraints, and U_i is the input for sub-loop i. Figure 4 shows the conceptual representation of the centralized MPC [35]. To obtain the optimal solution for sub-loop i, the interaction u_{j∈N_i}, with N_i = {j ∈ N : G_ij ≠ 0}, from the other sub-loops is taken into account, as shown in (13).
Hence, the optimal centralized solution U = [U1 U2 U3 U4 U5] is obtained by solving the following global cost function, where the J_i are defined in (11) and the p_i are weighting factors. In our case, the p_i are chosen as values that normalize the cost functions J_i.


Proposed Distributed MPC (DiMPC)
It is noteworthy that the centralized approach implies that all the information regarding the sub-systems (or sub-loops) is gathered in a single controller, as shown in Figure 4. The advantage is straightforward, since the cost function (14) has an optimal solution. However, if one sub-loop malfunctions, the entire steam/water loop collapses, with serious consequences for the safety of the large-scale ship.
One solution is provided by the distributed MPC (DiMPC) method, which regards the sub-systems as independent modules, each controlled by an individual controller. The inherent interactions are taken into account through the communication network.
Thus, the same local cost function (13) is minimized locally by each controller, in which the coupling term Σ_{j=1, j≠i}^{5} G_ij U_j is computed with the input trajectories U_j received from the neighbors, and several iterations are performed until the local optimal solution is reached. For the sake of clarity, a conceptual representation of the distributed MPC architecture is shown in Figure 5, and a pseudo-code is provided:

Algorithm 1 The Iterative DiMPC
Step 1: Sub-loop i computes an optimal local control action δU_i at iteration iter = 0 according to EPSAC; the local control action is denoted δU_i^iter, where δU_i is the vector of optimizing future control actions with length N_ci;
Step 2: The δU_j^iter (j ∈ N_i) are communicated to loop i, and δU_i^{iter+1} is calculated again using the δU_j^iter from the other loops;
Step 3: If the termination conditions ‖δU_i^{iter+1} − δU_i^{iter}‖ ≤ ε_i ∨ iter + 1 > iter_max hold, where ε_i is a positive value and iter_max is the upper bound on the number of iterations, go to Step 4; otherwise, set iter = iter + 1 and return to Step 2;
Step 4: Calculate the optimal control effort as U_t = U_base + δU^iter, and apply it to the system;
Step 5: Set t = t + 1 and return to Step 1.
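The iterative scheme of Algorithm 1 can be sketched for the unconstrained case as a Jacobi-style fixed-point iteration: each sub-loop solves its local least-squares problem with the neighbors' latest trajectories held fixed, then broadcasts its result. The two scalar "sub-loops" and their coupling gains below are hypothetical, chosen only so the iteration visibly converges to the centralized solution.

```python
import numpy as np

def dimpc_iterate(G, r, eps=1e-6, max_iter=50):
    """Sketch of Algorithm 1 with unconstrained local problems.
    G[i][j] couples input j into output i; r[i] is loop i's reference."""
    n = len(r)
    du = [np.zeros_like(ri) for ri in r]          # iter = 0 initial guess
    for _ in range(max_iter):
        du_new = []
        for i in range(n):
            # coupling term from the neighbors' last broadcast iterates
            coupling = sum(G[i][j] @ du[j] for j in range(n) if j != i)
            du_new.append(np.linalg.lstsq(G[i][i], r[i] - coupling,
                                          rcond=None)[0])
        # termination: successive iterates closer than eps (Step 3)
        converged = max(np.linalg.norm(a - b) for a, b in zip(du_new, du)) <= eps
        du = du_new
        if converged:
            break
    return du

# Two hypothetical, weakly coupled scalar sub-loops
G = [[np.array([[1.0]]), np.array([[0.2]])],
     [np.array([[0.1]]), np.array([[1.0]])]]
r = [np.array([1.0]), np.array([2.0])]
du = dimpc_iterate(G, r)   # converges to the centralized solution of (14)
```

For weak coupling the iteration converges quickly; Section "Convergence Issue" formalizes when this holds for the real loop.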

Classical Decentralized MPC (DeMPC)
In Figure 6, the conceptual representation of the DeMPC is presented. Compared with the distributed strategy in Figure 5, the main difference is that the controllers do not exchange information, although the physical coupling remains. Hence, the local cost function to be minimized by each controller is derived from (13) by removing the coupling influence between the sub-loops.


Multiple Objective Distributed Model Predictive Control (MODiMPC)
Nowadays, setpoint tracking is not the only target for a control system. For fast dynamic systems, the computing speed of the control strategy has an important influence. Here, a small loss in tracking performance is accepted in order to achieve fast computation. The scheme of the MODiMPC is shown in Figure 7. The structure has three layers, with the priority ordered as safety > tracking performance > energy.

The algorithm starts from initial zero conditions and computes the optimal control effort as an unconstrained solution of the optimization problem that aims to minimize the tracking error.
If the predicted inputs or outputs are not safe (generally, outside the hard constraints), the constraints are included in the optimization to ensure safety. If the variables are within the safe interval, then, according to the tracking error, two options are available: (i) if Error > ε, the focus is on performance, and the control effort is kept as δU_i^iter (i.e., the one that minimizes the cost function (11)); or (ii) if Error < ε, the focus is on energy, and the control effort is set to δU_i^iter = 0 (i.e., the actuator does not need to make any change). Here ε is a tolerance, chosen in this paper as 1% of the upper bound of the corresponding output. According to the end conditions of the DiMPC (‖δU_i^{iter+1} − δU_i^{iter}‖ ≤ ε_i ∨ iter + 1 > iter_max), the procedure stops or continues to obtain a new result.
Because each valve in the steam/water loop is driven by a hydraulic cylinder, frequent valve changes mean frequent cylinder movements, which result in large energy costs. In this sense, energy is saved when the valves do not need to change under the condition Error < ε.
In traditional MPC, the constraints are always considered when obtaining the optimal inputs for the system; in our study, quadratic programming is applied. However, the setpoint does not always change during operation, and most of the time the system operates at a stable operating point. During such periods, the only objective to be considered is energy, for which the control effort is kept the same as at the last sampling time. Hence, no optimization process is needed, and there is a large reduction in computing time.
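The three-layer decision logic described above can be condensed into a few lines. This is a sketch of the priority ordering only (safety > tracking > energy); the safety branch is represented by an exception standing in for the constrained re-optimization, and all numeric values are hypothetical.

```python
def modimpc_move(error, du_opt, inside_hard_constraints, eps):
    """Pick the control move for the current sample, honoring the
    priority safety > tracking performance > energy."""
    if not inside_hard_constraints:
        # safety layer: fall back to re-solving the QP with hard constraints
        raise RuntimeError("re-solve the optimization with hard constraints")
    if abs(error) > eps:
        return du_opt    # tracking layer: keep the optimizing move
    return 0.0           # energy layer: hold the valve where it is

# eps chosen as 1% of the output's upper bound, as in the paper;
# here the tracking error is below eps, so the valve is held still.
move = modimpc_move(error=0.002, du_opt=0.03,
                    inside_hard_constraints=True, eps=0.01)
```

Because the energy branch skips the QP entirely, this is where the computing-time reduction reported later comes from.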

Convergence Issue
In order to analyze the stability of the optimal solution of the distributed control system, the convergence issue is discussed first. A standard MPC formulation is written as a series of static optimization problems, where S is the vector of decision variables, including the state variables X and the control variables U, M(S) is the prediction model, and C(S) denotes the constraints. Although our process model has an input-output formulation, it can easily be translated into a state-space form.
In the DiMPC, Equation (16) is decomposed into subproblems, one for each sub-loop i. According to [35], if the distributed control methodology satisfies certain conditions, the properties that can be proved for the equivalent CMPC problem are enjoyed by the solution obtained with the DiMPC implementation, and the convergence of the DiMPC is equivalent to that of the CMPC: the DiMPC results converge to the global optimal point. The conditions are as follows: (1) the sub-loops completely cover the full system; (2) J_i and C_i are convex; (3) the sub-loops work sequentially; (4) the starting point is in the interior of the feasible region; (5) each sub-loop cooperates with its neighbors by broadcasting its latest iteration to them; (6) each sub-loop uses the same optimization method to generate its iterations.
However, conditions 2 and 3 are overly strict, as many real systems are nonconvex and nonlinear. Further, [36,37] show that these two conditions can be relaxed to nonconvex optimization problems with nonlinearity.
Moreover, the convergence of the DiMPC is further analyzed using the study in [33]. Starting from the unconstrained optimal solution of the distributed algorithm, the idea is to rewrite it in a recursive matrix formulation. After some matrix manipulation, a compact description is obtained:

U*(k) = Ĥ U*(k − 1) + F(k),   (18)

where U*(k) consists of the optimal sequences of all the sub-loops computed at sampling time k, while U*(k − 1) contains the shifted optimal trajectories computed at the previous sampling time k − 1, with the last term doubled to ensure dimensional consistency. The term F(k) is variable and computed at each sampling instant from the prediction error, whereas Ĥ is a constant matrix computed off-line in the initialization stage of the algorithm (see [33] for further details).
Note that, using Equation (18), the convergence of the local optimal solutions can be checked by verifying that all the eigenvalues of Ĥ are inside the unit circle. Additionally, Equation (18) can be reformulated in the classical system form

(I − q⁻¹ Ĥ) U*(k) = F(k),   (19)

where q⁻¹ is the operator that shifts the data backward one sampling period, F(k) is regarded as the system input, and U*(k) as the system output. Equation (19) can be used to analyze the stability of the optimal solution U*(k) at sampling time k in the classical linear time-invariant framework, by verifying that all the eigenvalues of the equivalent system matrix (I − q⁻¹Ĥ) are inside the unit circle. Furthermore, if this condition is satisfied in the equality case, the optimal solution of the distributed algorithm is marginally stable. Hence, using this simple approach, the evolution of the system is computed with F(k) as input, calculated from the prediction error at each sampling period. Although this is an analytical approach to recursively propagate the system in time, it can be computed straightforwardly and automatically with the simulation tools available to a control engineer. Moreover, all the computations are performed in a distributed manner: using Equation (18), each sub-loop computes the optimal trajectories of its coupling neighbors and, knowing this information, computes its own optimal trajectory at each sampling instant.
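The eigenvalue test on Ĥ is straightforward to automate. The sketch below uses a small hypothetical coupling matrix (the paper's actual Ĥ_i matrices are given later in the results section); the criterion itself, a spectral radius strictly below one, is exactly the condition stated above.

```python
import numpy as np

def dimpc_convergent(H):
    """The recursion U*(k) = H @ U*(k-1) + F(k) converges when every
    eigenvalue of H lies strictly inside the unit circle."""
    rho = max(abs(np.linalg.eigvals(H)))   # spectral radius
    return rho < 1.0, rho

# Hypothetical 2x2 coupling matrix standing in for H_hat
H_hat = np.array([[0.0, 0.2],
                  [0.1, 0.0]])
ok, rho = dimpc_convergent(H_hat)   # rho = sqrt(0.02), well inside the circle
```

If rho equals one exactly, the solution is only marginally stable, matching the equality case discussed above.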

Simulation Results and Analysis
According to our previous work, the parameter configuration for the EPSAC method is shown in Table 2, where T_s is the sampling time; N_c1, N_c2, ..., N_c5 are the control horizons of the five loops (selected as a good trade-off between tracking performance and computation time for each loop); N_p1, N_p2, ..., N_p5 are the prediction horizons (selected taking into account the specific transient dynamics of each loop); and N_s is the number of samples. The step setpoints are provided in Table 3. In the experiments, the initial condition was set at the operating point of the steam/water loop. The simulation results are shown in Figure 8, including the system outputs and the corresponding control efforts. In order to test which case provides the best result, performance indexes averaged over the five sub-loops were compared, including the integrated absolute relative error (IARE), integral secondary control output (ISU), ratio of integrated absolute relative error (RIARE), ratio of integral secondary control output (RISU) and a combined index (J). These indexes are calculated with the following expressions, where u_ssi is the steady-state value of the ith input, C_1 and C_2 are the compared controllers, and the weighting factors w_1 and w_2 in (24) are chosen as w_1 = w_2 = 0.5. As depicted in Figure 8, the CMPC has similar performance to the DiMPC, and both outperform the DeMPC. This conclusion is not only valid for this process, but also for others, since the DeMPC strategy does not take the interactions into account, which leads to severe fluctuations when the setpoint changes in other variables. Although the control efforts in the DiMPC are obtained separately, the performance is still good, due to the iterative communication between the controllers of each sub-loop.
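The printed expressions for the indexes were lost in extraction, so the sketch below uses assumed standard forms consistent with the index names in the text (relative error integral for IARE, squared control deviation for ISU, controller ratios for RIARE/RISU, and a weighted sum for J); treat the exact formulas as hypothetical.

```python
import numpy as np

def iare(r, y):
    """Integrated absolute relative error (assumed form)."""
    return np.sum(np.abs(r - y) / np.abs(r))

def isu(u, u_ss):
    """Integral secondary control output (assumed form)."""
    return np.sum((u - u_ss) ** 2)

def combined_index(iare_c1, iare_c2, isu_c1, isu_c2, w1=0.5, w2=0.5):
    """J = w1 * RIARE + w2 * RISU, comparing controllers C1 and C2."""
    return w1 * iare_c1 / iare_c2 + w2 * isu_c1 / isu_c2

# Sanity check with identical hypothetical responses: the ratios are 1,
# so the combined index takes the neutral value J = 1.
r = np.ones(100)
y = np.full(100, 0.99)
u = np.full(100, 0.52)
J = combined_index(iare(r, y), iare(r, y), isu(u, 0.5), isu(u, 0.5))
```

With these forms, J < 1 would favor controller C1 and J > 1 controller C2, which is how the comparison in Tables 4 and 5 is read.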
In real-life operations, where the effects of noise and stochastic disturbances must be added, perhaps together with periodic disturbances from the sea dynamics, the DeMPC may even lead to instability of the overall system.
The iterations during the optimization of the DiMPC are shown in Figure 9. In the algorithm, the two conditions designed to end the iteration are: (i) the difference between consecutive optimal inputs fulfills ‖δU_i^{iter+1} − δU_i^{iter}‖ ≤ 0.002; and (ii) the maximum number of iterations is five, i.e., iter > 5.
The same conclusion is also obtained from the numerical values in Tables 4 and 5. As the index J implies, the DiMPC and CMPC have similar results. However, there is only one controller in the CMPC, which means the system may be out of service if there is any problem with that controller. On the contrary, the DiMPC has more fault tolerance and flexibility, without much performance loss compared with the CMPC.
As the DiMPC only needs a part of the entire model, it is much easier to find a feasible solution, while the CMPC needs the entire model to obtain all the solutions at one time. In this context, the DiMPC is more robust than the CMPC; hence, the system model required for the DiMPC can be less accurate than for the CMPC. In an industry context, a staggering 60-70% of project time is spent on model development, while the rest is claimed for controller design and validation [38]. Hence, any reduction in identification time greatly diminishes the overall control-loop maintenance costs. The analysis given in this paper provides a trade-off solution, with acceptable performance and significant cost reductions in control design and validation time. The results for the DiMPC and the multiple objective distributed model predictive control (MODiMPC) are shown in Figure 10, and the performance indexes in Table 6. The computing time is 2.81 s for the MODiMPC and 29.36 s for the DiMPC, respectively. The results show a large improvement in computing time once the MODiMPC is applied, without much loss in tracking performance.
As previously mentioned, in order to guarantee the stability of the optimal solution in the DiMPC framework, the convergence of the optimal solution is discussed first. Since there are only constraints on the input variables in our study, the feasibility of the steam/water loop belongs to the trivial case (according to [33], the existence of a feasible solution of the optimization problem is ensured at each sampling period). The Ĥ_i matrices for the five loops are calculated as follows (for more details on the convergence issue, please refer to [33]). Using the eig function in MATLAB (R2016b, MathWorks, Natick, MA, USA, 2016), the eigenvalues are calculated in a centralized manner for the steam/water loop, and the maximum magnitude is ρ_max < 1, which indicates that the DiMPC is convergent. In the steam/water loop, the five sub-loops cover the full system, and in the DiMPC the information is exchanged iteratively; hence, conditions 1 and 3-6 are satisfied. In order to cover the worst circumstances, the sufficient conditions in Section 3.5 tend to be conservative, and in some cases convexity is not necessary [37]. Hence, it is concluded that the DiMPC has the same convergence as the CMPC, and the convergence of the DiMPC is guaranteed.

Conclusions
Regarding the multiple sub-loops in the steam/water loop, this paper introduced a distributed model predictive control based on the EPSAC framework. Different types of MPC were applied to the steam/water loop system, including the DeMPC, CMPC and DiMPC. According to the simulation results, the DiMPC had similar performance to the CMPC and outperformed the DeMPC. Due to its multiple controllers, the DiMPC offers better fault tolerance and flexibility than the CMPC, which improves the reliability of the steam/water loop. By proving the equivalence in stability between the DiMPC and the CMPC, and the stability of the CMPC itself, the stability of the DiMPC is guaranteed. Meanwhile, a multiple objective MPC was proposed, and the computing speed was improved without much loss in tracking performance.