Open Access
*Mathematics* **2018**, *6*(4), 51; doi:10.3390/math6040051

Article

Data Driven Economic Model Predictive Control

Department of Chemical Engineering, McMaster University, Hamilton, ON L8S 4L7, Canada


Received: 7 March 2018 / Accepted: 22 March 2018 / Published: 2 April 2018

## Abstract

This manuscript addresses the problem of data-driven, model-based economic model predictive control (MPC) design. To this end, a data-driven Lyapunov-based MPC is first designed and shown to be capable of stabilizing a system at an unstable equilibrium point. The data-driven Lyapunov-based MPC utilizes a linear time invariant (LTI) model, cognizant of the fact that the training data, owing to the unstable nature of the equilibrium point, have to be obtained from closed-loop operation or experiments. Simulation results are first presented demonstrating closed-loop stability under the proposed data-driven Lyapunov-based MPC. The underlying data-driven model is then utilized as the basis to design an economic MPC. The economic improvements yielded by the proposed method are illustrated through simulations on a nonlinear chemical process system example.

Keywords: Lyapunov-based model predictive control (MPC); subspace-based identification; closed-loop identification; model predictive control; economic model predictive control

## 1. Introduction

Control systems designed to manage chemical process operations often face numerous challenges such as inherent nonlinearity, process constraints and uncertainty. Model predictive control (MPC) is a well-established control method that can handle these challenges. In MPC, the control action is computed by solving an open-loop optimal control problem at each sampling instance over a time horizon, subject to the model that captures the dynamic response of the plant, and constraints [1]. In early MPC designs, the objective function was often utilized as a parameter to ensure closed-loop stability. In subsequent contributions, Lyapunov-based MPC was proposed where feasibility and stability from a well characterized region was built into the MPC [2,3].

With the increasing recognition (and ability) of MPC designs to focus on economic objectives, the notion of economic MPC (EMPC) was developed for linear and nonlinear systems [4,5,6], and several important issues (such as input rate-of-change constraints and uncertainty) were addressed. The key idea in EMPC designs is that the controller is directly given the economic objective to work with, and the controller internally determines the process operation (including, if needed, a set point) [7].

Most of the existing MPC formulations, economic or otherwise, have been illustrated using first principles models. With the growing availability of data, there exists the possibility of enhancing MPC implementation in situations where a first principles model may not be available and simple ‘step-test’, transfer-function-based model identification approaches may not suffice. Among the widely utilized approaches in the general direction of model identification are latent variable methods, where the correlation between subsequent measurements is used to model and predict the process evolution [8,9]. In one direction, Dynamic Mode Decomposition with control (DMDc) has been utilized to extract low-order models from high-dimensional, complex systems [10,11]. In another direction, subspace-based system identification methods have been adapted for the purpose of model identification, where state-space models are identified from measured data using projection methods [12,13,14]. To handle the resultant plant-model mismatch with data-driven model based approaches, monitoring of the model validity becomes especially important.

One approach to monitor the process is to focus on control performance [15], where the control performance is monitored and compared against a benchmark control design. To focus more explicitly on the model behavior, in a recent result [16], an adaptive data-driven MPC was proposed to evaluate model prediction performance and trigger model identification in the case of poor model prediction. In another direction, an EMPC using an empirical model was proposed [17]. The approach relies on linearization, resulting in closed-loop stability guarantees for regions where the plant-model mismatch is sufficiently small, and illustrates results on stabilization around nominally stable equilibrium points. In summary, data-driven MPC or EMPC approaches that utilize appropriate modeling techniques, with models identified from closed-loop tests, to handle operation around nominally unstable equilibrium points remain to be addressed.

Motivated by the above considerations, in this work, we address the problem of data-driven model based predictive control at an unstable equilibrium point. In order to identify a model around an unstable equilibrium point, the system is perturbed under closed-loop operation. Having identified a model, a Lyapunov-based MPC is designed to achieve local and practical stability. The Lyapunov-based design is then used as the basis for a data-driven Lyapunov-based EMPC design to achieve economic objectives while ensuring boundedness. The rest of the manuscript is organized as follows: first, the general mathematical description of the systems considered in this work and a representative formulation for Lyapunov-based model predictive control are presented. Then, the proposed approach for closed-loop model identification is explained. Subsequently, a Lyapunov-based MPC is designed and illustrated through a simulation example. Next, an economic MPC is designed to address economic objectives. The efficacy of the proposed method is illustrated through implementation on a nonlinear continuous stirred-tank reactor (CSTR) with input rate-of-change constraints. Finally, concluding remarks are presented.

## 2. Preliminaries

This section presents a brief description of the general class of processes considered in this manuscript, followed by closed-loop subspace identification and the Lyapunov-based MPC formulation.

#### 2.1. System Description

We consider multi-input multi-output (MIMO) controllable systems where $u\in {\mathbb{R}}^{{n}_{u}}$ denotes the vector of constrained manipulated variables, taking values in a nonempty convex subset $\mathcal{U}\subset {\mathbb{R}}^{{n}_{u}}$, where $\mathcal{U}=\left\{u\in {\mathbb{R}}^{{n}_{u}}\mid {u}_{\mathrm{min}}\le u\le {u}_{\mathrm{max}}\right\}$, ${u}_{\mathrm{min}}\in {\mathbb{R}}^{{n}_{u}}$ and ${u}_{\mathrm{max}}\in {\mathbb{R}}^{{n}_{u}}$ denote the lower and upper bounds of the input variables, and $y\in {\mathbb{R}}^{{n}_{y}}$ denotes the vector of measured output variables. In keeping with the discrete implementation of MPC, u is piecewise constant and defined over an arbitrary sampling instant k as:

$$u(t)=u(k),\phantom{\rule{5.69054pt}{0ex}}k\Delta t\le t<(k+1)\Delta t,$$

where $\Delta t$ is the sampling time and ${x}_{k}$ and ${y}_{k}$ denote the state and output at the kth sample time. The central problem that the present manuscript addresses is that of data-driven modeling and control design for economic MPC.

#### 2.2. System Identification

In this section, a brief review of conventional subspace-based state-space system identification methods is presented [16,18,19]. These methods are used to identify the system matrices of a discrete-time linear time invariant (LTI) system of the following form:

$$\begin{array}{cc}\hfill {x}_{k+1}& =A{x}_{k}+B{u}_{k}+{w}_{k},\hfill \end{array}$$

$$\begin{array}{cc}\hfill {y}_{k}& =C{x}_{k}+D{u}_{k}+{v}_{k},\hfill \end{array}$$

where $x\in {\mathbb{R}}^{{n}_{x}}$ and $y\in {\mathbb{R}}^{{n}_{y}}$ denote the vectors of state variables and measured outputs, and $w\in {\mathbb{R}}^{{n}_{x}}$ and $v\in {\mathbb{R}}^{{n}_{y}}$ are zero-mean white vectors of process noise and measurement noise with the following covariance matrices:

$$\begin{array}{cc}\hfill E[\left(\begin{array}{c}{w}_{i}\\ {v}_{j}\end{array}\right)\left(\begin{array}{cc}{w}_{i}^{T}& {v}_{j}^{T}\end{array}\right)]& =\left(\begin{array}{cc}Q& S\\ {S}^{T}& R\end{array}\right){\delta}_{ij},\hfill \end{array}$$

where $Q\in {\mathbb{R}}^{{n}_{x}\times {n}_{x}}$, $S\in {\mathbb{R}}^{{n}_{x}\times {n}_{y}}$ and $R\in {\mathbb{R}}^{{n}_{y}\times {n}_{y}}$ are covariance matrices, and ${\delta}_{ij}$ is the Kronecker delta function. The subspace-based system identification techniques utilize Hankel matrices constructed by stacking the output measurements and manipulated variables as follows:

$$\begin{array}{c}\hfill {U}_{1|i}=\left[\begin{array}{cccc}{u}_{1}& {u}_{2}& \cdots & {u}_{j}\\ {u}_{2}& {u}_{3}& \cdots & {u}_{j+1}\\ \cdots & \cdots & \cdots & \cdots \\ {u}_{i}& {u}_{i+1}& \cdots & {u}_{i+j-1}\end{array}\right],\end{array}$$

where i is a user-specified parameter that limits the maximum order of the system (n), and j is determined by the number of sample times of data. By using Equation (4), the past and future Hankel matrices for input and output are defined:

$$\begin{array}{c}\hfill {U}_{p}={U}_{1|i},\phantom{\rule{1.em}{0ex}}{U}_{f}={U}_{i+1|2i},\phantom{\rule{1.em}{0ex}}{Y}_{p}={Y}_{1|i},\phantom{\rule{1.em}{0ex}}{Y}_{f}={Y}_{i+1|2i}.\end{array}$$
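As a concrete illustration, the block-Hankel construction of Equation (4) can be sketched in Python/NumPy (a minimal sketch; the function name and the toy data are ours, not part of the cited identification methods):

```python
import numpy as np

def block_hankel(signal, i, j):
    """Build the block-Hankel matrix of Eq. (4): block row r (r = 0..i-1)
    holds samples r+1, ..., r+j stacked column-wise.
    `signal` is (num_samples, n) with one sample per row."""
    n = signal.shape[1]
    H = np.zeros((i * n, j))
    for r in range(i):
        H[r * n:(r + 1) * n, :] = signal[r:r + j, :].T
    return H

# Past and future partitions then follow by calling block_hankel on the
# original and on the i-sample-shifted data.
u = np.arange(6.0).reshape(-1, 1)   # toy scalar input sequence
U_p = block_hankel(u, 2, 3)
U_f = block_hankel(u[2:], 2, 3)
```

The same helper applied to the output measurements yields $Y_p$ and $Y_f$.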

Block-Hankel matrices for the process and measurement noises, ${W}_{p},{W}_{f}\in {\mathbb{R}}^{i{n}_{x}\times j}$ and ${V}_{p},{V}_{f}\in {\mathbb{R}}^{i{n}_{y}\times j}$, are defined in a similar way. The state sequences are defined as follows:

$$\begin{array}{cc}\hfill {X}_{p}& =\left[\begin{array}{cccc}{x}_{1}& {x}_{2}& \cdots & {x}_{j}\end{array}\right],\hfill \end{array}$$

$$\begin{array}{cc}\hfill {X}_{f}& =\left[\begin{array}{cccc}{x}_{i+1}& {x}_{i+2}& \cdots & {x}_{i+j}\end{array}\right].\hfill \end{array}$$

Furthermore, the following stacked matrices are used in the algorithm:

$$\begin{array}{c}\hfill {\Psi}_{p}=\left[\begin{array}{c}{Y}_{p}\\ {U}_{p}\end{array}\right],\phantom{\rule{1.em}{0ex}}{\Psi}_{f}=\left[\begin{array}{c}{Y}_{f}\\ {U}_{f}\end{array}\right],\phantom{\rule{1.em}{0ex}}{\Psi}_{pr}=\left[\begin{array}{c}{R}_{f}\\ {\Psi}_{p}\end{array}\right],\end{array}$$

where ${R}_{f}$ denotes the block-Hankel matrix of future setpoints, used later for closed-loop identification.

By recursive substitution into the state space model Equations (1) and (2), it is straightforward to show:

$$\begin{array}{cc}\hfill {Y}_{f}& ={\Gamma}_{i}{X}_{f}+{\Phi}_{i}^{d}{U}_{f}+{\Phi}_{i}^{s}{W}_{f}+{V}_{f},\hfill \end{array}$$

$$\begin{array}{cc}\hfill {Y}_{p}& ={\Gamma}_{i}{X}_{p}+{\Phi}_{i}^{d}{U}_{p}+{\Phi}_{i}^{s}{W}_{p}+{V}_{p},\hfill \end{array}$$

$$\begin{array}{cc}\hfill {X}_{f}& ={A}^{i}{X}_{p}+{\Delta}_{i}^{d}{U}_{p}+{\Delta}_{i}^{s}{W}_{p},\hfill \end{array}$$

where:

$$\begin{array}{cc}\hfill {\Gamma}_{i}& =\left[\begin{array}{c}C\\ CA\\ C{A}^{2}\\ \vdots \\ C{A}^{i-1}\end{array}\right],\phantom{\rule{1.em}{0ex}}{\Phi}_{i}^{d}=\left[\begin{array}{ccccc}D& 0& 0& \cdots & 0\\ CB& D& 0& \cdots & 0\\ CAB& CB& D& \cdots & 0\\ \cdots & \cdots & \cdots & \cdots & \cdots \\ C{A}^{i-2}B& C{A}^{i-3}B& C{A}^{i-4}B& \cdots & D\end{array}\right],\hfill \end{array}$$

$$\begin{array}{cc}\hfill {\Phi}_{i}^{s}& =\left[\begin{array}{cccccc}0& 0& 0& \cdots & 0& 0\\ C& 0& 0& \cdots & 0& 0\\ CA& C& 0& \cdots & 0& 0\\ \cdots & \cdots & \cdots & \cdots & 0& 0\\ C{A}^{i-2}& C{A}^{i-3}& C{A}^{i-4}& \cdots & C& 0\end{array}\right],\hfill \end{array}$$

$$\begin{array}{cc}\hfill {\Delta}_{i}^{d}& =\left[\begin{array}{ccccc}{A}^{i-1}B& {A}^{i-2}B& \cdots & AB& B\end{array}\right],\phantom{\rule{1.em}{0ex}}{\Delta}_{i}^{s}=\left[\begin{array}{ccccc}{A}^{i-1}& {A}^{i-2}& \cdots & A& I\end{array}\right].\hfill \end{array}$$
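For concreteness, the extended observability matrix ${\Gamma}_{i}$ and the deterministic Toeplitz matrix ${\Phi}_{i}^{d}$ defined above can be assembled from given state-space matrices as follows (a sketch; the function names are ours):

```python
import numpy as np

def extended_observability(A, C, i):
    # Gamma_i = [C; CA; CA^2; ...; CA^{i-1}]
    blocks, M = [], C.copy()
    for _ in range(i):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

def phi_d(A, B, C, D, i):
    # Phi_i^d: lower block-triangular Toeplitz matrix of the Markov
    # parameters D, CB, CAB, CA^2 B, ...
    ny, nu = D.shape
    markov = [D] + [C @ np.linalg.matrix_power(A, k) @ B for k in range(i - 1)]
    Phi = np.zeros((i * ny, i * nu))
    for r in range(i):
        for c in range(r + 1):
            Phi[r * ny:(r + 1) * ny, c * nu:(c + 1) * nu] = markov[r - c]
    return Phi
```

For a scalar system with $A=2$, $B=C=1$, $D=0$, these reproduce the patterns shown above (rows $C$, $CA$, $CA^2$ and diagonals $D$, $CB$, $CAB$).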

Equation (9) can be rewritten in the following form so that the input and output data appear on the left-hand side of the equation [20]:

$$\begin{array}{c}\hfill \left[\begin{array}{cc}I& -{\Phi}_{i}^{d}\end{array}\right]\left[\begin{array}{c}{Y}_{f}\\ {U}_{f}\end{array}\right]={\Gamma}_{i}{X}_{f}+{\Phi}_{i}^{s}{W}_{f}+{V}_{f}.\end{array}$$

In open-loop identification methods, the next step is the orthogonal projection of Equation (15) onto ${\Psi}_{p}$:

$$\begin{array}{c}\hfill \left[\begin{array}{cc}I& -{\Phi}_{i}^{d}\end{array}\right]{\Psi}_{f}/{\Psi}_{p}={\Gamma}_{i}{X}_{f}/{\Psi}_{p}.\end{array}$$

Note that the last two terms on the RHS of Equation (15) are eliminated, since the noise terms are independent of, and hence orthogonal to, the past data ${\Psi}_{p}$. Equation (16) indicates that:

$$\begin{array}{c}\hfill Column\_Space({({\Psi}_{f}/{\Psi}_{p})}^{\perp})=Column\_Space({({({\Gamma}_{i}^{\perp})}^{T}\left[\begin{array}{cc}I& -{\Phi}_{i}^{d}\end{array}\right])}^{T}).\end{array}$$

Therefore, ${\Gamma}_{i}$ and ${\Phi}_{i}^{d}$ can be calculated from Equation (17) using decomposition methods. These can in turn be utilized to determine the system matrices (some of these details are deferred to Section 3.1). For further discussion on system matrix extraction, the reader is referred to [18,19].
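The orthogonal projection used in Equations (16) and (17) follows the standard subspace-identification definition $A/B=A{B}^{T}{(B{B}^{T})}^{\dagger}B$, the projection of the row space of A onto the row space of B. A minimal sketch:

```python
import numpy as np

def project(A, B):
    # Orthogonal projection of the row space of A onto the row space of B:
    # A/B = A B^T (B B^T)^† B
    return A @ B.T @ np.linalg.pinv(B @ B.T) @ B
```

For example, projecting the row `[1, 1]` onto the row space spanned by `[1, 0]` keeps only the first component.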

#### 2.3. Lyapunov-Based MPC

The Lyapunov-based MPC (LMPC) for linear systems has the following form:

$$\begin{array}{cc}\hfill \underset{{\tilde{u}}_{k},\dots ,{\tilde{u}}_{k+{N}_{u}}}{min}& {\displaystyle \sum _{j=1}^{{N}_{y}}}||{\tilde{y}}_{k+j}-{y}_{k+j}^{\mathrm{SP}}{||}_{{Q}_{y}}^{2}+{\displaystyle \sum _{j=1}^{{N}_{u}}}||{\tilde{u}}_{k+j}-{\tilde{u}}_{k+j-1}{||}_{{R}_{du}}^{2},\hfill \end{array}$$

$$\begin{array}{c}\hfill \mathrm{subject}\phantom{\rule{4.pt}{0ex}}\mathrm{to}:\phantom{\rule{4.pt}{0ex}}\end{array}$$

$$\begin{array}{c}{\tilde{x}}_{k+1}=A{\tilde{x}}_{k}+B{\tilde{u}}_{k},\hfill \end{array}$$

$$\begin{array}{c}{\tilde{y}}_{k}=C{\tilde{x}}_{k}+D{\tilde{u}}_{k},\hfill \end{array}$$

$$\begin{array}{c}\tilde{u}\in \mathcal{U},\phantom{\rule{2.em}{0ex}}\Delta \tilde{u}\in {\mathcal{U}}_{\circ},\phantom{\rule{2.em}{0ex}}\tilde{x}(k)={\widehat{x}}_{k},\hfill \end{array}$$

$$\begin{array}{c}V({\tilde{x}}_{k+1})\le \alpha V({\tilde{x}}_{k})\phantom{\rule{0.277778em}{0ex}}\forall \phantom{\rule{0.277778em}{0ex}}V({\tilde{x}}_{k})>{\epsilon}^{\ast},\hfill \end{array}$$

$$\begin{array}{c}V({\tilde{x}}_{k+1})\le {\epsilon}^{\ast}\phantom{\rule{0.277778em}{0ex}}\forall \phantom{\rule{0.277778em}{0ex}}V({\tilde{x}}_{k})\le {\epsilon}^{\ast},\hfill \end{array}$$

where ${\tilde{x}}_{k+j}$, ${\tilde{y}}_{k+j}$, ${y}_{k+j}^{SP}$ and ${\tilde{u}}_{k+j}$ denote the predicted state and output, the output set-point and the calculated manipulated input variables j time steps ahead computed at time step k, ${\widehat{x}}_{k}$ is the current estimate of the state, and $0<\alpha <1$ is a user-defined parameter. The operator ${||\cdot ||}_{W}^{2}$ denotes the weighted Euclidean norm, defined for an arbitrary vector x and weighting matrix W as ${||x||}_{W}^{2}={x}^{T}Wx$. Furthermore, ${Q}_{y}>0$ and ${R}_{du}\ge 0$ denote the positive definite and positive semi-definite weighting matrices penalizing deviations in the output predictions and the rate of change of the manipulated inputs, respectively. Moreover, ${N}_{y}$ and ${N}_{u}$ denote the prediction and control horizons, respectively, and the input rate of change, given by $\Delta {\tilde{u}}_{k+j}={\tilde{u}}_{k+j}-{\tilde{u}}_{k+j-1}$, takes values in a nonempty convex subset ${\mathcal{U}}_{\circ}\subset {\mathbb{R}}^{{n}_{u}}$, where ${\mathcal{U}}_{\circ}=\left\{\Delta u\in {\mathbb{R}}^{{n}_{u}}\mid \Delta {u}_{\mathrm{min}}\le \Delta u\le \Delta {u}_{\mathrm{max}}\right\}$. Note finally that, while the system dynamics are described in continuous time, the objective function and constraints are defined in discrete time to be consistent with the discrete implementation of the control action.

Equations (23) and (24) represent the Lyapunov-based stability constraint [21,22], where $V({x}_{k})$ is a suitable control Lyapunov function, and $\alpha ,{\epsilon}^{\ast}>0$ are user-specified parameters. In the presented formulation, ${\epsilon}^{\ast}>0$ enables practical stabilization to account for the discrete nature of the control implementation.

**Remark 1.**

Existing Lyapunov-based MPC approaches exploit the fact that the feasibility (and stability) region can be pre-determined. The feasibility region depends, among other things, on the choice of the parameter α, the requested decay factor in the value of the Lyapunov function at each time step. If (reasonably) good first principles models are available, then these features of the MPC formulation provide excellent confidence regarding the operating region under closed-loop operation. In contrast, in the presence of significant plant-model mismatch (as is possibly the case with data-driven models), the imposition of such decay constraints could result in unnecessary infeasibility issues. In designing the LMPC formulation with a data-driven model, this possible lack of feasibility must be accounted for (as is done in Section 3.2).

## 3. Integrating Lyapunov-Based MPC with Data Driven Models

In this section, we first present the identification approach necessary to obtain good models for operation around an unstable equilibrium point. The data-driven Lyapunov-based MPC design is presented next.

#### 3.1. Closed-Loop Model Identification

Note that, when interested in identifying the system around an unstable equilibrium point, open-loop data would not suffice. To begin with, nominal open-loop operation around an unstable equilibrium point is not possible. If the nominal operation is under closed-loop, but the loop is opened to perform step tests, the system would move to the stable equilibrium point corresponding to the new input value, thereby not providing dynamic information around the desired operating point. The training data, therefore, has to be obtained using closed-loop step tests, and an appropriate closed-loop model identification method employed. Such a method is described next.

In employing closed-loop data, note that the assumption that future inputs are independent of future disturbances no longer holds and, if not recognized, can bias the system identification results [18]. In order to handle this issue, the closed-loop identification approach utilizes a different variable, ${\Psi}_{pr}$, instead of ${\Psi}_{p}$ in the projection. This new instrumental variable, which satisfies the independence requirement, is used to project both sides of Equation (15), and the result is used to determine the LTI model matrices. For further details, refer to [16,18,23].

By projecting Equation (15) onto ${\Psi}_{pr}$ we get:

$$\begin{array}{c}\hfill \left[\begin{array}{cc}I& -{\Phi}_{i}^{d}\end{array}\right]{\Psi}_{f}/{\Psi}_{pr}={\Gamma}_{i}{X}_{f}/{\Psi}_{pr}+{\Phi}_{i}^{s}{W}_{f}/{\Psi}_{pr}+{V}_{f}/{\Psi}_{pr}.\end{array}$$

Since the future process and measurement noises are independent of the past input/output and future setpoint in Equation (25), the noise terms cancel, resulting in:

$$\begin{array}{c}\hfill \left[\begin{array}{cc}I& -{\Phi}_{i}^{d}\end{array}\right]{\Psi}_{f}/{\Psi}_{pr}={\Gamma}_{i}{X}_{f}/{\Psi}_{pr}.\end{array}$$

By multiplying Equation (26) by ${({\Gamma}_{i}^{\perp})}^{T}$, the transpose of the orthogonal complement of the extended observability matrix, the state term is eliminated:

$$\begin{array}{c}\hfill {({\Gamma}_{i}^{\perp})}^{T}\left[\begin{array}{cc}I& -{\Phi}_{i}^{d}\end{array}\right]{\Psi}_{f}/{\Psi}_{pr}=0.\end{array}$$

Therefore, the column space of ${\Psi}_{f}/{\Psi}_{pr}$ is orthogonal to the row space of $\left[\begin{array}{cc}{({\Gamma}_{i}^{\perp})}^{T}& -{({\Gamma}_{i}^{\perp})}^{T}{\Phi}_{i}^{d}\end{array}\right]$. Performing a singular value decomposition (SVD) of ${\Psi}_{f}/{\Psi}_{pr}$:

$$\begin{array}{c}\hfill {\Psi}_{f}/{\Psi}_{pr}=U\Sigma {V}^{T}=\left[\begin{array}{cc}{U}_{1}& {U}_{2}\end{array}\right]\left[\begin{array}{cc}{\Sigma}_{1}& 0\\ 0& 0\end{array}\right]\left[\begin{array}{c}{{V}_{1}}^{T}\\ {{V}_{2}}^{T}\end{array}\right],\end{array}$$

where ${\Sigma}_{1}$ contains the dominant singular values of ${\Psi}_{f}/{\Psi}_{pr}$ and, theoretically, has order ${n}_{u}i+n$ [18,23].

Therefore, the order of the system can be determined from the number of dominant singular values of ${\Psi}_{f}/{\Psi}_{pr}$ [20]. The orthogonal column space of ${\Psi}_{f}/{\Psi}_{pr}$ is ${U}_{2}M$, where $M\in {\mathbb{R}}^{({n}_{y}-n)i\times ({n}_{y}-n)i}$ is any constant nonsingular matrix, typically chosen as the identity matrix [18,23]. One approach to determine the LTI model is as follows [18]:
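A minimal numerical sketch of this order-determination step, with a random rank-3 matrix standing in for ${\Psi}_{f}/{\Psi}_{pr}$ (the threshold and dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-3 stand-in for Psi_f / Psi_pr (in practice: the projected data matrix).
M = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 50))

s = np.linalg.svd(M, compute_uv=False)
order = int(np.sum(s > 1e-8 * s[0]))   # number of dominant singular values
```

With noisy data, the trailing singular values are small but nonzero, so the cutoff becomes a tuning decision rather than an exact threshold.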

$$\begin{array}{c}\hfill {(\left[\begin{array}{cc}{({\Gamma}_{i}^{\perp})}^{T}& -{({\Gamma}_{i}^{\perp})}^{T}{\Phi}_{i}^{d}\end{array}\right])}^{T}={U}_{2}M.\end{array}$$

From Equation (29), ${\Gamma}_{i}$ and ${\Phi}_{i}^{d}$ can be estimated:

$$\begin{array}{c}\hfill \left[\begin{array}{c}{{\Gamma}_{i}}^{\perp}\\ -{({\Phi}_{i}^{d})}^{T}{{\Gamma}_{i}}^{\perp}\end{array}\right]={U}_{2},\end{array}$$

which results in (using MATLAB (2017a, MathWorks, Natick, MA, USA) matrix index notation):

$$\begin{array}{c}\hfill \phantom{\rule{0.277778em}{0ex}}\left\{\begin{array}{c}{\widehat{\Gamma}}_{i}={({U}_{2}(1:{n}_{y}i,:))}^{\perp},\hfill \\ {\widehat{\Phi}}_{i}^{d}=-{({{U}_{2}(1:{n}_{y}i,:)}^{T})}^{\dagger}{{U}_{2}({n}_{y}i+1:end,:)}^{T}.\hfill \end{array}\right.\end{array}$$
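Equation (31) can be translated to NumPy roughly as follows (a sketch; the helper name is ours, SciPy's `null_space` supplies the orthogonal complement, and the matrices used in the check are synthetic rather than real process data):

```python
import numpy as np
from scipy.linalg import null_space

def extract_gamma_phi(U2, ny, i):
    """Split U2 per Eq. (30) and recover estimates per Eq. (31):
    the top block spans Gamma_i^perp, the bottom block is
    -(Phi_i^d)^T Gamma_i^perp."""
    top = U2[:ny * i, :]                        # U2(1:ny*i, :)
    bottom = U2[ny * i:, :]                     # U2(ny*i+1:end, :)
    Gamma_hat = null_space(top.T)               # orthogonal complement of top
    Phi_d_hat = -np.linalg.pinv(top.T) @ bottom.T
    return Gamma_hat, Phi_d_hat
```

By construction, the columns of the returned `Gamma_hat` are orthogonal to ${\Gamma}_{i}^{\perp}$, as the method requires.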

The past state sequence can be calculated as follows:

$$\begin{array}{c}\hfill {\widehat{X}}_{i}={\widehat{\Gamma}}_{i}^{\dagger}\left[\begin{array}{cc}I& -{\widehat{\Phi}}_{i}^{d}\end{array}\right]{\Psi}_{f}/{\Psi}_{pr}.\end{array}$$

The future state sequence can be calculated by shifting the data Hankel matrices as follows [18]:

$$\begin{array}{cc}\hfill {R}_{f}& ={R}_{i+2|2i},\hfill \end{array}$$

$$\begin{array}{cc}\hfill {U}_{p}& ={U}_{1|i+1},\hfill \end{array}$$

$$\begin{array}{cc}\hfill {Y}_{p}& ={Y}_{1|i+1},\hfill \end{array}$$

$$\begin{array}{cc}\hfill {U}_{f}& ={U}_{i+2|2i},\hfill \end{array}$$

$$\begin{array}{cc}\hfill {Y}_{f}& ={Y}_{i+2|2i},\hfill \end{array}$$

$$\begin{array}{cc}\hfill \Rightarrow {\widehat{X}}_{i+1}& ={\underline{\widehat{\Gamma}}}_{i}^{\dagger}\left[\begin{array}{cc}I& -{\underline{\widehat{\Phi}}}_{i}^{d}\end{array}\right]{\Psi}_{f}/{\Psi}_{pr},\hfill \end{array}$$

where ${\underline{\widehat{\Gamma}}}_{i}$ is obtained by eliminating the last ${n}_{y}$ rows of ${\widehat{\Gamma}}_{i}$, and ${\underline{\widehat{\Phi}}}_{i}^{d}$ is obtained by eliminating the last ${n}_{y}$ rows and the last ${n}_{u}$ columns of ${\widehat{\Phi}}_{i}^{d}$. Then, the model matrices can be estimated using least squares:

$$\begin{array}{c}\hfill \left[\begin{array}{c}{X}_{i+1}\\ {Y}_{i|i}\end{array}\right]=\left[\begin{array}{cc}A& B\\ C& D\end{array}\right]\left[\begin{array}{c}{X}_{i}\\ {U}_{i|i}\end{array}\right]+\left[\begin{array}{c}{W}_{i|i}\\ {V}_{i|i}\end{array}\right].\end{array}$$

Note that the difference between the method proposed in [18] and the method described here is that, in order to ensure that the observer is stable (i.e., the eigenvalues of $A-KC$ are inside the unit circle), Equations (1) and (2) are used [16] to derive the extended state space equations, instead of the innovation form of the LTI model. The system matrices can then be calculated as follows:

$$\begin{array}{c}\hfill \left[\begin{array}{cc}\widehat{A}& \widehat{B}\\ \widehat{C}& \widehat{D}\end{array}\right]=\left[\begin{array}{c}{X}_{i+1}\\ {Y}_{i|i}\end{array}\right]{\left[\begin{array}{c}{X}_{i}\\ {U}_{i|i}\end{array}\right]}^{\dagger}.\end{array}$$
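The least-squares step of Equation (39) can be sketched with noise-free synthetic data from a known toy system (all matrices below are illustrative assumptions, not the paper's example):

```python
import numpy as np

# Toy LTI system (illustrative values only)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Generate state and I/O sequences under a random input
rng = np.random.default_rng(1)
N = 200
U = rng.standard_normal((1, N))
X = np.zeros((2, N + 1))
for k in range(N):
    X[:, k + 1] = A @ X[:, k] + (B @ U[:, [k]]).ravel()
Y = C @ X[:, :N] + D @ U

# Least-squares estimate of [A B; C D] as in Eq. (39)
top = np.vstack([X[:, 1:], Y])        # [X_{i+1}; Y_{i|i}]
bottom = np.vstack([X[:, :N], U])     # [X_i; U_{i|i}]
Theta = top @ np.linalg.pinv(bottom)
```

With noise-free data, the residual of this fit is zero; with real data, the residual yields the noise estimates used for the covariance calculation.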

With the proposed approach, the process and measurement noise Hankel matrices can be calculated as the residual of the least-squares solution of Equation (39):

$$\begin{array}{c}\hfill \left[\begin{array}{c}{\widehat{W}}_{i|i}\\ {\widehat{V}}_{i|i}\end{array}\right]=\left[\begin{array}{c}{X}_{i+1}\\ {Y}_{i|i}\end{array}\right]-\left[\begin{array}{cc}\widehat{A}& \widehat{B}\\ \widehat{C}& \widehat{D}\end{array}\right]\left[\begin{array}{c}{X}_{i}\\ {U}_{i|i}\end{array}\right].\end{array}$$

Then, the covariances of plant noises can be estimated as follows:

$$\begin{array}{c}\hfill \left[\begin{array}{cc}\widehat{Q}& \widehat{S}\\ {\widehat{S}}^{T}& \widehat{R}\end{array}\right]=E(\left[\begin{array}{c}{\widehat{W}}_{i|i}\\ {\widehat{V}}_{i|i}\end{array}\right]\left[\begin{array}{cc}{\widehat{W}}_{i|i}^{T}& {\widehat{V}}_{i|i}^{T}\end{array}\right]).\end{array}$$

Model identification using closed-loop data has a positive impact on the predictive capability of the model (see the simulation section for a comparison with a model identified using open-loop data).

#### 3.2. Control Design and Implementation

Having identified an LTI model for the system (with its associated states), the MPC implementation first requires a determination of the state estimates. To this end, an appropriate state estimator needs to be utilized. In the present manuscript, a Luenberger observer is utilized for the purpose of illustration. Thus, at the time of control implementation, state estimates ${\widehat{x}}_{k}$ are generated as follows:

$$\begin{array}{c}\hfill {\widehat{x}}_{k+1}=A{\widehat{x}}_{k}+B{u}_{k}+L({y}_{k}-C{\widehat{x}}_{k}),\end{array}$$

where L is the observer gain, computed using the pole placement method, and ${y}_{k}$ is the vector of measured variables (in deviation form, from the set point).
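A sketch of the observer-gain computation via pole placement on the dual system, using SciPy's `place_poles` (the model matrices and desired pole locations below are hypothetical):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[1.1, 0.2], [0.0, 0.9]])   # hypothetical identified model (unstable)
C = np.array([[1.0, 0.0]])

# Choose L so that the eigenvalues of A - L C lie inside the unit circle;
# pole placement on the dual pair (A^T, C^T) gives L as the transposed gain.
desired_poles = [0.2, 0.3]
L = place_poles(A.T, C.T, desired_poles).gain_matrix.T
```

Since eig(A - LC) = eig((A - LC)^T) = eig(A^T - C^T L^T), placing the poles of the dual system places the observer poles.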

In order to stabilize the system at an unstable equilibrium point, a Lyapunov-based MPC is designed. The control calculation is achieved using a two-tier approach (to decouple the problems of stability enforcement and objective function tuning). The first tier calculates the minimum value of the Lyapunov function that can be reached subject to the constraints. This tier is formulated as follows:

$$\begin{array}{c}\hfill \begin{array}{cc}\hfill {V}_{min}=\underset{{\tilde{u}}_{k}^{1}}{min}& (V({\tilde{x}}_{k+1})),\hfill \\ \hfill \mathrm{subject}\phantom{\rule{4.pt}{0ex}}\mathrm{to}:\phantom{\rule{4.pt}{0ex}}& \\ & {\tilde{x}}_{k+1}=A{\tilde{x}}_{k}+B{{\tilde{u}}^{1}}_{k},\hfill \\ & {\tilde{y}}_{k}=C{\tilde{x}}_{k}+D{{\tilde{u}}^{1}}_{k},\hfill \\ & {\tilde{u}}^{1}\in \mathcal{U},\phantom{\rule{2.em}{0ex}}\Delta {\tilde{u}}^{1}\in {\mathcal{U}}_{\circ},\phantom{\rule{2.em}{0ex}}\tilde{x}(k)={\widehat{x}}_{k}-{x}^{SP},\hfill \end{array}\end{array}$$

where $\tilde{x}$ and $\tilde{y}$ are the predicted state and output and ${\tilde{u}}^{1}$ is the candidate input computed in the first tier. ${x}^{SP}$ is the underlying state setpoint (in deviation form from the nominal equilibrium point), which here is the desired unstable equilibrium point (and therefore zero in terms of deviation variables). For setpoint tracking, this value can be calculated using the target calculation method; readers are referred to [24] for further details.

Note that the first tier has a prediction horizon of 1 because the objective is only to compute the immediate control action that would minimize the value of the Lyapunov function at the next time step. V is chosen as a quadratic Lyapunov function of the following form:

$$\begin{array}{c}\hfill V(\tilde{x})={\tilde{x}}^{T}P\tilde{x},\end{array}$$

where P is a positive definite matrix computed by solving the Riccati equation with the LTI model matrices as follows:

$$\begin{array}{c}\hfill {A}^{T}PA-P-{A}^{T}PB{({B}^{T}PB+R)}^{-1}{B}^{T}PA+Q=0,\end{array}$$

where $Q\in {\mathbb{R}}^{{n}_{x}\times {n}_{x}}$ and $R\in {\mathbb{R}}^{{n}_{u}\times {n}_{u}}$ are positive definite matrices. Then, in the second tier, this minimum value is used as a constraint (an upper bound on the Lyapunov function value at the next time step). The second tier is formulated as follows:

$$\begin{array}{c}\hfill \begin{array}{cc}\hfill \underset{{\tilde{u}}_{k}^{2},\dots ,{\tilde{u}}_{k+{N}_{p}}^{2}}{min}& {\displaystyle \sum _{j=1}^{{N}_{y}}}||{\tilde{y}}_{k+j}-{\tilde{y}}_{k+j}^{\mathrm{SP}}{||}_{{Q}_{y}}^{2}+||{\tilde{u}}_{k+j}^{2}-{\tilde{u}}_{k+j-1}^{2}{||}_{{R}_{du}}^{2},\hfill \\ \hfill \mathrm{subject}\phantom{\rule{4.pt}{0ex}}\mathrm{to}:\phantom{\rule{4.pt}{0ex}}& \\ & {\tilde{x}}_{k+1}=A{\tilde{x}}_{k}+B{\tilde{u}}_{k},\hfill \\ & {\tilde{y}}_{k}=C{\tilde{x}}_{k}+D{\tilde{u}}_{k},\hfill \\ & {\tilde{u}}^{2}\in \mathcal{U},\phantom{\rule{2.em}{0ex}}\Delta {\tilde{u}}^{2}\in {\mathcal{U}}_{\circ},\phantom{\rule{2.em}{0ex}}\tilde{x}(k)={\widehat{x}}_{k},\hfill \\ & V({\tilde{x}}_{k+1})\le {V}_{min}\phantom{\rule{0.277778em}{0ex}}\forall \phantom{\rule{0.277778em}{0ex}}V({\tilde{x}}_{k})>{\epsilon}^{\ast},\hfill \\ & V({\tilde{x}}_{k+1})\le {\epsilon}^{\ast}\phantom{\rule{0.277778em}{0ex}}\forall \phantom{\rule{0.277778em}{0ex}}V({\tilde{x}}_{k})\le {\epsilon}^{\ast}\hfill \end{array}\end{array}$$

where ${N}_{p}$ is the prediction horizon and ${\tilde{u}}^{2}$ denotes the control action computed by the second tier. In essence, in the second tier, the controller calculates a control action sequence that takes the process to the setpoint in an optimal fashion while ensuring that the system reaches the minimum achievable Lyapunov function value at the next time step. Note that, in both tiers, the input sequence is a decision variable in the optimization problem, but only the first value of the input sequence of the second tier is implemented on the process. The solution of the first tier, however, is used to generate a guaranteed feasible initial guess for the second tier. The two-tiered control structure is schematically presented in Figure 1.
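A sketch of the Lyapunov function construction (P from the Riccati equation via SciPy's `solve_discrete_are`) and of the first tier for a single-input toy model. With a scalar input, the one-step objective $V(A\tilde{x}+B\tilde{u})$ is a convex quadratic in $\tilde{u}$, so the input bound reduces to clipping the unconstrained minimizer; this shortcut, and all numerical values, are our illustrative assumptions (the tier is posed above as a general constrained optimization, and the rate-of-change constraint is omitted here):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.2, 0.1], [0.0, 0.8]])   # hypothetical identified model (unstable)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                            # positive definite tuning matrices
R = np.eye(1)
P = solve_discrete_are(A, B, Q, R)       # V(x) = x^T P x

def tier1(x, u_min=-2.0, u_max=2.0):
    """First tier: minimize V(Ax + Bu) over the (scalar) input bounds."""
    u_unc = -np.linalg.solve(B.T @ P @ B, B.T @ P @ A @ x)  # unconstrained minimizer
    u = np.clip(u_unc, u_min, u_max)     # exact only for a scalar input
    x_next = A @ x + B @ u
    return u, float(x_next @ P @ x_next) # (input move, V_min)
```

The returned V_min then serves as the upper bound in the second-tier constraint $V({\tilde{x}}_{k+1})\le {V}_{min}$.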

**Remark 2.**

Note that Tiers 1 and 2 are executed sequentially within the same sampling instant, and the implementation does not require a time scale separation. The overall optimization is split into two tiers to guarantee feasibility of the optimization problem. In particular, the first tier computes an input move with the objective function focusing only on minimizing the Lyapunov function value at the next time step. Notice that the constraints in the first tier are such that the optimization problem is guaranteed to be feasible. With this feasible solution, the second tier is used to determine the input trajectory that achieves the best performance, while requiring the Lyapunov function to decay. Again, since the second tier optimization problem uses the solution from Tier 1 to impose the stability constraint, feasibility of the second tier optimization problem, and, hence, of the MPC optimization problem, is guaranteed. In contrast, if one were to require the Lyapunov function to decay by an arbitrarily chosen factor, determining that factor in a way that guarantees feasibility of the optimization problem would be a non-trivial task.

**Remark 3.**

It is important to recognize that, in the present formulation, feasibility of the optimization problem does not guarantee closed-loop stability. A superficial (and incorrect) reason for this is as follows: the first tier computes the control action that minimizes the value of the Lyapunov function at the next step, but does not require that it be smaller than at the previous time step, leading to potentially destabilizing control action. The key point to realize here, however, is that if a control action were to exist that would lower the value of the Lyapunov function at the next time step, the optimization problem would determine it by virtue of the Lyapunov function being the objective function, leading to closed-loop stability. There are two reasons why closed-loop stability may nevertheless not be achieved: (1) the current state might be such that closed-loop stability is not achievable for the given system dynamics and constraints; and (2) plant-model mismatch, where the control action that causes the Lyapunov function to decay for the identified model does not do so for the system in question. The first reason points to a fundamental limitation due to the presence of input constraints, while the second is due to the lack of availability of the ‘correct’ system dynamics, and as such will hold in general for data-driven MPC formulations. Note that the inclusion of a noise/plant-model mismatch term in the model may help with the predictive capability of the model; however, unless a bound on the uncertainty can be assumed, closed-loop stability cannot be guaranteed.

**Remark 4.**

Along similar lines, consider the scenario where, based on the model and constraints, an input value exists for which $V(x(k))\le V(x(k-1))$ is achievable. It can be readily shown that any solution computed by the first tier of the optimization problem will also result in $V(x(k))\le V(x(k-1))$, by virtue of the objective function being the Lyapunov function at the next time step. Thus, in such a case, the explicit incorporation of the constraint $V(x(k))\le V(x(k-1))$ (as is traditionally done in Lyapunov-based MPC) does not help, and is not required. On the other hand, in the scenario where such an input does not exist, the inclusion of the constraint will render the optimization problem infeasible. In contrast, in the proposed formulation, the MPC will compute a control action for which the value of the Lyapunov function may be greater than the previous value, but greater by the smallest possible margin. The real impact of this property is in making the MPC formulation more pliable, especially when dealing with plant-model mismatch. In such scenarios, the proposed MPC continues to compute feasible (best possible, in terms of stabilizing behavior) solutions, and, should the process move into a region from where stabilization is possible, smoothly transitions to computing stabilizing control action.

**Remark 5.**

In the current manuscript, we focus on cases where a first principles model is not available. If a good first principles model were available, it could be utilized directly in a nonlinear MPC design, or linearized if one were to implement a linear MPC. In the case of linearization, the applicability would be limited to the region over which the linearization holds. In contrast, note that the model utilized in the present manuscript does not result from a linearization of a nonlinear model. Instead, it is a linear model, possibly with more states than the original nonlinear model, that is identified over, and applicable to, a ‘larger’ region of operation compared to a linearized model.

**Remark 6.**

To account for possible plant-model mismatch, model validity can be monitored with model monitoring methods [16], triggering re-identification when model predictions degrade. In another direction, in line with control performance monitoring approaches, the Lyapunov function value could be utilized; thus, unacceptable increases in the Lyapunov function value could serve as a trigger for re-identification.
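One possible realization of such a Lyapunov-based trigger is sketched below; the rule (flag re-identification after a user-chosen number of consecutive Lyapunov increases) and the `patience` and `tol` parameters are illustrative assumptions, not a rule prescribed in the text.

```python
# Hypothetical monitoring rule: count consecutive Lyapunov function
# increases and flag re-identification once a (user-chosen) patience
# threshold is exceeded.
def reident_trigger(V_history, patience=3, tol=1e-8):
    """Return True if V(x) increased on each of the last `patience` steps."""
    if len(V_history) < patience + 1:
        return False
    recent = V_history[-(patience + 1):]
    return all(b > a + tol for a, b in zip(recent, recent[1:]))
```

For example, a decaying sequence such as `[5.0, 4.0, 3.0, 2.5]` does not trigger, while three consecutive increases, e.g. `[1.0, 1.2, 1.5, 2.0]`, do.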

**Remark 7.**

As mentioned previously, in order to create rich training data around unstable operating points, closed-loop data must be generated. In turn, since open-loop identification methods yield biased estimates when applied to closed-loop data [25,26], a suitable closed-loop identification method is utilized, and adapted to ensure that the model accurately captures the key dynamics.

## 4. Simulation Results

We next illustrate the proposed approach using a nonlinear CSTR example [27]. To this end, consider a CSTR in which a first-order, exothermic, irreversible reaction of the form $A\stackrel{k}{\to}B$ takes place. The mass and energy conservation laws result in the following mathematical model:

$$\begin{array}{c}\hfill \begin{array}{cc}\hfill {\dot{C}}_{A}& =\frac{F}{V}({C}_{A0}-{C}_{A})-{k}_{0}{e}^{\frac{-E}{R{T}_{R}}}{C}_{A},\hfill \\ \hfill {\dot{T}}_{R}& =\frac{F}{V}({T}_{A0}-{T}_{R})+\frac{(-\Delta H)}{\rho {c}_{p}}{k}_{0}{e}^{\frac{-E}{R{T}_{R}}}{C}_{A}+\frac{Q}{\rho {c}_{p}V}.\hfill \end{array}\end{array}$$
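For reference, the model above can be simulated directly. The sketch below uses the parameter values from Table 1 (so that $E/R = 10^{4}$ K), with the inputs held at illustrative constant values; in particular, the heat input is assumed zero here, which is an assumption for this open-loop sketch rather than the paper's operating condition.

```python
import numpy as np
from scipy.integrate import solve_ivp

# CSTR parameters, taken from Table 1.
F, Vr = 0.2, 0.2          # flow rate (m^3/min), reactor volume (m^3)
k0 = 72e9                 # pre-exponential constant (1/min)
E_over_R = 1e4            # activation energy over gas constant (K)
T_A0 = 352.6              # inlet temperature (K)
minus_dH = 4.78e4         # heat of reaction -(Delta H) (kJ/kmol)
rho, cp = 1e3, 0.239      # density (kg/m^3), heat capacity (kJ/(kg K))

def cstr_rhs(t, x, C_A0=0.787, Q=0.0):
    # Mass and energy balances from the equation above.
    C_A, T = x
    k = k0 * np.exp(-E_over_R / T)            # Arrhenius rate constant
    dC_A = F / Vr * (C_A0 - C_A) - k * C_A
    dT = (F / Vr * (T_A0 - T)
          + minus_dH / (rho * cp) * k * C_A
          + Q / (rho * cp * Vr))
    return [dC_A, dT]

# Integrate from the (unstable) equilibrium point for 5 min, open loop.
sol = solve_ivp(cstr_rhs, (0.0, 5.0), [0.573, 395.3], method="LSODA")
print(sol.y[:, -1])  # state after 5 min of uncontrolled operation
```

Because the equilibrium point is open-loop unstable, the uncontrolled trajectory drifts away from it, which is precisely why a stabilizing controller (and closed-loop identification data) is needed.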

The description of the process variables and the values of the system parameters are presented in Table 1. The control objective is to stabilize the system at an unstable equilibrium point using the inlet concentration, ${C}_{A0}$, and the rate of heat input, Q. The manipulated inputs are constrained as $|{C}_{A0}|\le 1$ kmol/m^{3} and $|Q|\le 9\times {10}^{3}$ kJ/min, and the input rates are constrained as $|\Delta {C}_{A0}|\le 0.1$ kmol/m^{3} and $|\Delta Q|\le 200$ kJ/min. We assume that both states are measured. The system has an unstable equilibrium point at ${C}_{A}=0.573$ kmol/m^{3} and $T=395.3$ K, and the goal is to stabilize the system at this point. To this end, first an LTI model is identified using closed-loop data; then, an MPC is designed to stabilize the system at the unstable equilibrium point.

For system identification of the CSTR model, proportional–integral (PI) controllers (pairing ${C}_{A}$ with ${C}_{A,in}$ and T with Q) are implemented on the process, with pseudo-random binary signals used as set-points for the PI controllers. The identified LTI model order is selected as $n=4$ and $i=12$ to achieve the best fit in model prediction (using cross-validation). Note that these four states are the states of the identified LTI model; when dealing with set-point tracking, they can be augmented with additional states and utilized as part of an offset-free MPC design. Model validation results, under a set of set-point changes different from the training data, are presented in Figure 2 and Figure 3. The identified system is unstable, with eigenvalue magnitudes $\left[\begin{array}{cccc}0.9311& 0.9311& 0.9998& 1.0002\end{array}\right]$, i.e., one eigenvalue lies outside the unit circle. The unstable nature of the identified model is consistent with the operation of the system around the unstable equilibrium point.
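The instability check reported above amounts to inspecting the eigenvalue magnitudes of the identified discrete-time $A$ matrix. A minimal sketch follows, using a stand-in diagonal matrix whose eigenvalue magnitudes match those reported in the text; the actual identified matrix is not reproduced in the paper.

```python
import numpy as np

# Stand-in A matrix: diagonal, with eigenvalue magnitudes matching the
# values reported for the identified model (illustrative only).
A = np.diag([0.9311, 0.9311, 0.9998, 1.0002])

mags = np.sort(np.abs(np.linalg.eigvals(A)))
print(mags)  # sorted eigenvalue magnitudes

# A discrete-time LTI model is unstable iff any eigenvalue lies outside
# the unit circle.
unstable = bool(np.any(mags > 1.0))
print(unstable)  # True
```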

For model validation, a steady-state Kalman filter (with gain calculated by the identification method) is initially utilized to update the state estimate until $t=0.8$ min; after convergence of the states (gauged via convergence of the outputs), the model and the input trajectory (without the state estimator) are used to predict the future output. Figure 2 illustrates the results of the model validation, and compares the model against one identified from open-loop data with a pseudo-random binary sequence (PRBS) input. As expected, the model identified using closed-loop data yields better predictions.
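The validation procedure above can be sketched as follows: run the steady-state Kalman filter for the first samples, then switch to pure open-loop prediction with the identified $(A, B, C, D)$ model. Here `L` denotes the precomputed steady-state Kalman gain; the function and its arguments are illustrative placeholders, not the identified CSTR model.

```python
import numpy as np

def validate(A, B, C, D, L, u, y, n_filter):
    """Filter for n_filter samples, then free-run predict the outputs."""
    x = np.zeros(A.shape[0])
    y_pred = []
    for k in range(len(u)):
        y_hat = C @ x + D @ u[k]
        y_pred.append(y_hat)
        if k < n_filter:
            # Filtering phase: correct with the measurement innovation.
            x = A @ x + B @ u[k] + L @ (y[k] - y_hat)
        else:
            # Prediction phase: propagate the model open loop.
            x = A @ x + B @ u[k]
    return np.array(y_pred)
```

When the model matches the data-generating system and the estimate has converged by the end of the filtering phase, the subsequent free-run predictions reproduce the measured outputs; with plant-model mismatch, the free-run segment exposes the prediction error that the filter would otherwise mask.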

Next, closed-loop simulation results for the proposed controller and a conventional MPC (i.e., an MPC without the Lyapunov constraint) with horizons 1 and 10 are presented in Figure 4, Figure 5, Figure 6 and Figure 7. The controller parameters are presented in Table 2. As can be seen, the LMPC has the best performance in stabilizing the system at the unstable equilibrium point. The MPC with a horizon of 1 is not capable of stabilizing the system, and the controller with a horizon of 10 reaches the set-point later than the LMPC. In addition, the evolution of the subspace states indicates better performance under the proposed LMPC.

## 5. Data-Driven EMPC Design and Illustration

Having illustrated the ability of the LMPC to achieve stabilization, it is next utilized to achieve economic objectives while ensuring stability. The Lyapunov-based EMPC formulation is as follows:

$$\begin{array}{c}\hfill \begin{array}{cc}\hfill \underset{{\tilde{u}}_{k},\dots ,{\tilde{u}}_{k+P}}{max}& {\displaystyle \sum _{j=1}^{{N}_{y}}}{c}_{y}^{T}{\tilde{y}}_{k+j}-{c}_{u}^{T}{\tilde{u}}_{k+j},\hfill \\ \hfill \mathrm{subject}\phantom{\rule{4.pt}{0ex}}\mathrm{to}:\phantom{\rule{4.pt}{0ex}}& \\ & {\tilde{x}}_{k+1}=A{\tilde{x}}_{k}+B{\tilde{u}}_{k},\hfill \\ & {\tilde{y}}_{k}=C{\tilde{x}}_{k}+D{\tilde{u}}_{k},\hfill \\ & \tilde{u}\in \mathcal{U},\phantom{\rule{2.em}{0ex}}\Delta \tilde{u}\in {\mathcal{U}}_{\circ},\phantom{\rule{2.em}{0ex}}\tilde{x}(k)={\widehat{x}}_{l},\hfill \\ & V({\tilde{x}}_{k+j})\le \rho \phantom{\rule{0.277778em}{0ex}}\mathrm{for}\phantom{\rule{0.277778em}{0ex}}j=1,\cdots ,P,\hfill \end{array}\end{array}$$

where the value of $\rho $ dictates the neighborhood within which the process states are allowed to evolve, and ${c}_{y}$ and ${c}_{u}$ denote the output and input cost vectors. The remaining variables have the same definitions as in Equation (47).
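A minimal sketch of this kind of LEMPC, via direct transcription of the optimization, is shown below. The two-state model, cost vectors, level set $\rho$, horizon, and input bounds are illustrative placeholders, not the CSTR model or the paper's tuning, and $V(x)=x^{T}x$ is assumed.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative model and economics (placeholders).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
c_y = np.array([1.0, 0.0])   # revenue weight on the first output
c_u = np.array([0.1])        # input cost
rho, P = 1.0, 5              # Lyapunov level set bound and horizon

def lempc(x0):
    def neg_profit(useq):
        # Economic objective: scipy minimizes, so negate the profit.
        x, J = x0.copy(), 0.0
        for u in useq:
            x = A @ x + B @ [u]
            J += c_y @ (C @ x) - c_u @ [u]
        return -J

    def level_set_margins(useq):
        # rho - V(x_{k+j}) for j = 1..P, with V(x) = x^T x;
        # every entry must be nonnegative.
        x, m = x0.copy(), []
        for u in useq:
            x = A @ x + B @ [u]
            m.append(rho - x @ x)
        return np.array(m)

    res = minimize(neg_profit, np.zeros(P), bounds=[(-1.0, 1.0)] * P,
                   constraints=[{'type': 'ineq', 'fun': level_set_margins}])
    return res.x

useq = lempc(np.array([0.0, 0.0]))
```

Starting inside the level set, the computed input sequence earns at least the profit of the do-nothing policy while keeping every predicted state within $V(x)\le \rho$, mirroring how the LEMPC trades off economics against the stability region constraint.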

**Remark 8.**

In recent contributions [17,28], a Lyapunov-based EMPC was proposed that utilizes data-driven methods to identify an empirical model of the system, where the number of empirical model states equals the order of the plant model. In contrast, in the present work, the order of the model is selected based on the ability of the model to fit and predict dynamic behavior over a suitable range of operation, in turn allowing for an EMPC design that can reliably operate over a larger region.

**Remark 9.**

The EMPC formulation in the present manuscript utilizes a linear form of the cost function for the purpose of illustration. The proposed approach is not limited by this particular choice. Any other form of the cost function, including those where the costs are time dependent, could be readily utilized within the proposed formulation. In such scenarios, the presence of the stability constraints provides the safeguards that allow the EMPC to move the process as needed to achieve economic goals.

**Remark 10.**

The use of linear models in the control design opens up the possibility of utilizing MPC formulations [3,29] that enable stabilization from the entire null controllable region (NCR), i.e., the region from which stabilization is achievable subject to input constraints. The use of the NCR can, in turn, maximize the region over which the EMPC can be implemented, thereby maximizing the potential economic benefit. Such an implementation, however, needs to account for potential plant-model mismatch owing to the use of the linear model, and remains the subject of future work.

Next, the proposed Lyapunov-based EMPC (LEMPC) is implemented on the CSTR simulation example and compared to the LMPC implementation. The closed-loop results are presented in Figure 8, Figure 9, Figure 10 and Figure 11. Exploiting the flexibility of operation within a neighborhood of the origin, the LEMPC drives the system to a point on the border of that neighborhood, which happens to be the optimal operating point, instead of the nominal operating point. Figure 12 shows the comparison of the LEMPC and LMPC. As expected, the LEMPC achieves improved economic returns compared to the LMPC.

## 6. Conclusions

In this study, a novel data-driven MPC is developed that enables stabilization at nominally unstable equilibrium points. This LMPC is then utilized within an economic MPC formulation to yield a data driven EMPC formulation. The proposed approach is described and compared against a representative MPC, and shown to be able to provide improved closed-loop performance.

## Acknowledgments

Financial support from the McMaster Advanced Control Consortium (MACC) is gratefully acknowledged.

## Author Contributions

Masoud Kheradmandi, as the lead author, was the primary contributor: he contributed to conceiving and designing the framework, performed all the simulations, wrote the first draft of the manuscript, and made subsequent revisions. The advisor, Prashant Mhaskar, contributed to conceiving and designing the framework, analyzing the data and revising the paper.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Rawlings, J.B.; Mayne, D.Q. Model Predictive Control: Theory and Design; Nob Hill Publishing: Madison, WI, USA, 2009.
2. Mhaskar, P.; El-Farra, N.H.; Christofides, P.D. Predictive control of switched nonlinear systems with scheduled mode transitions. IEEE Trans. Autom. Control **2005**, 50, 1670–1680.
3. Mahmood, M.; Mhaskar, P. Constrained control Lyapunov function based model predictive control design. Int. J. Robust Nonlinear Control **2014**, 24, 374–388.
4. Angeli, D.; Amrit, R.; Rawlings, J.B. On average performance and stability of economic model predictive control. IEEE Trans. Autom. Control **2012**, 57, 1615–1626.
5. Bayer, F.A.; Lorenzen, M.; Müller, M.A.; Allgöwer, F. Improving Performance in Robust Economic MPC Using Stochastic Information. IFAC-PapersOnLine **2015**, 48, 410–415.
6. Liu, S.; Liu, J. Economic model predictive control with extended horizon. Automatica **2016**, 73, 180–192.
7. Müller, M.A.; Angeli, D.; Allgöwer, F. Economic model predictive control with self-tuning terminal cost. Eur. J. Control **2013**, 19, 408–416.
8. Golshan, M.; MacGregor, J.F.; Bruwer, M.J.; Mhaskar, P. Latent Variable Model Predictive Control (LV-MPC) for trajectory tracking in batch processes. J. Process Control **2010**, 20, 538–550.
9. MacGregor, J.; Bruwer, M.; Miletic, I.; Cardin, M.; Liu, Z. Latent variable models and big data in the process industries. IFAC-PapersOnLine **2015**, 48, 520–524.
10. Narasingam, A.; Siddhamshetty, P.; Kwon, J.S.I. Handling Spatial Heterogeneity in Reservoir Parameters Using Proper Orthogonal Decomposition Based Ensemble Kalman Filter for Model-Based Feedback Control of Hydraulic Fracturing. Ind. Eng. Chem. Res. **2018**.
11. Narasingam, A.; Kwon, J.S.I. Development of local dynamic mode decomposition with control: Application to model predictive control of hydraulic fracturing. Comput. Chem. Eng. **2017**, 106, 501–511.
12. Huang, B.; Kadali, R. Dynamic Modeling, Predictive Control and Performance Monitoring: A Data-Driven Subspace Approach; Springer: Berlin/Heidelberg, Germany, 2008.
13. Li, W.; Han, Z.; Shah, S.L. Subspace identification for FDI in systems with non-uniformly sampled multirate data. Automatica **2006**, 42, 619–627.
14. Hajizadeh, I.; Rashid, M.; Turksoy, K.; Samadi, S.; Feng, J.; Sevil, M.; Hobbs, N.; Lazaro, C.; Maloney, Z.; Littlejohn, E.; et al. Multivariable Recursive Subspace Identification with Application to Artificial Pancreas Systems. IFAC-PapersOnLine **2017**, 50, 886–891.
15. Shah, S.L.; Patwardhan, R.; Huang, B. Multivariate controller performance analysis: Methods, applications and challenges. In AIChE Symposium Series; American Institute of Chemical Engineers: New York, NY, USA, 1998; Volume 2002, pp. 190–207.
16. Kheradmandi, M.; Mhaskar, P. Model predictive control with closed-loop re-identification. Comput. Chem. Eng. **2017**, 109, 249–260.
17. Alanqar, A.; Ellis, M.; Christofides, P.D. Economic model predictive control of nonlinear process systems using empirical models. AIChE J. **2015**, 61, 816–830.
18. Huang, B.; Ding, S.X.; Qin, S.J. Closed-loop subspace identification: An orthogonal projection approach. J. Process Control **2005**, 15, 53–66.
19. Qin, S.J. An overview of subspace identification. Comput. Chem. Eng. **2006**, 30, 1502–1513.
20. Wang, J.; Qin, S.J. A new subspace identification approach based on principal component analysis. J. Process Control **2002**, 12, 841–855.
21. Mhaskar, P.; El-Farra, N.H.; Christofides, P.D. Stabilization of nonlinear systems with state and control constraints using Lyapunov-based predictive control. Syst. Control Lett. **2006**, 55, 650–659.
22. Mayne, D.Q.; Rawlings, J.B.; Rao, C.V.; Scokaert, P.O. Constrained model predictive control: Stability and optimality. Automatica **2000**, 36, 789–814.
23. Qin, S.J.; Ljung, L. Closed-loop subspace identification with innovation estimation. IFAC Proc. Vol. **2003**, 36, 861–866.
24. Pannocchia, G.; Rawlings, J.B. Disturbance models for offset-free model-predictive control. AIChE J. **2003**, 49, 426–437.
25. Ljung, L. System identification. In Signal Analysis and Prediction; Springer: Boston, MA, USA, 1998.
26. Forssell, U.; Ljung, L. Closed-loop identification revisited. Automatica **1999**, 35, 1215–1241.
27. Wallace, M.; Pon Kumar, S.S.; Mhaskar, P. Offset-free model predictive control with explicit performance specification. Ind. Eng. Chem. Res. **2016**, 55, 995–1003.
28. Alanqar, A.; Durand, H.; Christofides, P.D. Fault-Tolerant Economic Model Predictive Control Using Error-Triggered Online Model Identification. Ind. Eng. Chem. Res. **2017**, 56, 5652–5667.
29. Mahmood, M.; Mhaskar, P. Enhanced Stability Regions for Model Predictive Control of Nonlinear Process Systems. AIChE J. **2008**, 54, 1487–1498.

**Figure 2.** Data driven model validation results: measured outputs (dash-dotted line), state and output estimates using the linear time invariant (LTI) model identified from closed-loop data (dashed line), state and output estimates using the LTI model identified from open-loop data (dotted line), and observer stopping point (vertical dashed line).

**Figure 4.**Closed-loop profiles of the measured variables obtained from the proposed Lyapunov-based MPC (continuous line), MPC with horizon 1 (dash-dotted line), MPC with horizon 10 (dashed line), and MPC with horizon 1 and open-loop identification (narrow dash-dotted line) and set-point (dashed line).

**Figure 5.**Closed-loop profiles of the manipulated variables obtained from the proposed LMPC (continuous line), MPC with horizon 1 (dash-dotted line), MPC with horizon 1 and open-loop identification (narrow dash-dotted line) and MPC with horizon 10 (dashed line).

**Figure 6.**Closed-loop profiles of the LTI model states obtained from the proposed LMPC (continuous line), MPC with horizon 1 (dash-dotted line) and MPC with horizon 10 (dashed line).

**Figure 7.**Closed-loop Lyapunov function profiles obtained from the proposed LMPC (continuous line), MPC with horizon 1 (dash-dotted line) and MPC with horizon 10 (dashed line).

**Figure 8.**Closed-loop profiles of the measured variables obtained from the proposed Lyapunov-based economic MPC (continuous line) and the nominal equilibrium point (dashed line).

**Figure 9.**Closed-loop profiles of the manipulated variables obtained from the proposed LEMPC (continuous line).

**Figure 10.**Closed-loop profiles of the identified model states obtained from the proposed LEMPC (continuous line).

**Figure 11.**Closed-loop Lyapunov function profiles obtained from the proposed LEMPC (continuous line). Note that the LEMPC drives the system to a point within the acceptable neighborhood of the origin.

**Figure 12.**A comparison of the economic cost between the LEMPC (continuous line) and LMPC (dotted line).

**Table 1.**Variable and parameter description and values for the continuous stirred-tank reactor (CSTR) example.

| Variable | Description | Unit | Value |
|---|---|---|---|
| ${C}_{A,S}$ | Nominal Value of Concentration | $\frac{kmol}{{m}^{3}}$ | $0.573$ |
| ${T}_{R,S}$ | Nominal Value of Reactor Temperature | K | 395 |
| F | Flow Rate | $\frac{{m}^{3}}{min}$ | $0.2$ |
| V | Volume of the Reactor | ${m}^{3}$ | $0.2$ |
| ${C}_{A0,S}$ | Nominal Inlet Concentration | $\frac{kmol}{{m}^{3}}$ | $0.787$ |
| ${k}_{0}$ | Pre-Exponential Constant | ${min}^{-1}$ | $72\times {10}^{9}$ |
| E | Activation Energy | $\frac{kJ}{kmol}$ | $8.314\times {10}^{4}$ |
| R | Ideal Gas Constant | $\frac{kJ}{kmol\cdot K}$ | $8.314$ |
| ${T}_{A0}$ | Inlet Temperature | K | $352.6$ |
| $\Delta H$ | Enthalpy of the Reaction | $\frac{kJ}{kmol}$ | $4.78\times {10}^{4}$ |
| $\rho $ | Fluid Density | $\frac{kg}{{m}^{3}}$ | ${10}^{3}$ |
| ${c}_{p}$ | Heat Capacity | $\frac{kJ}{kg\cdot K}$ | $0.239$ |

**Table 2.** Controller parameter values.

| Variable | Value |
|---|---|
| $\Delta t$ | $0.2$ min |
| ${Q}_{x}$ | $\left[\begin{array}{cc}1& 0\\ 0& 1\end{array}\right]$ |
| ${Q}_{x,MPC}$ | $10\times diag([1/{C}_{A,s},1/{T}_{R,S}])$ |
| ${R}_{\Delta u,MPC}$ | $diag([1/{C}_{A0,max},1/{Q}_{max}])$ |
| ${Q}_{K}$ | $diag([{10}^{3},{10}^{3}])$ |
| ${R}_{K}$ | $diag([{10}^{-3},{10}^{-3}])$ |
| ${\tau}_{min}$ | 0 |
| ${\tau}_{max}$ | 5 |
| ${\epsilon}_{i}$ | ${10}^{-3}\times {x}_{i,Sp}$ |
| $\Delta {u}_{min}$ | $\left[\begin{array}{cc}-0.1& -200\end{array}\right]$ |
| $\Delta {u}_{max}$ | $\left[\begin{array}{cc}0.1& 200\end{array}\right]$ |
| ${\epsilon}^{\ast}$ | 1 |
| $V(x)$ | ${(x-{x}_{sp})}^{T}(x-{x}_{sp})$ |
| ${c}_{y}$ | ${\left[\begin{array}{cc}{10}^{8}& 0\end{array}\right]}^{T}$ |
| ${c}_{u}$ | ${\left[\begin{array}{cc}0& 0.1\end{array}\right]}^{T}$ |
| $\rho $ | $7.83\times {10}^{5}$ |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).