Sensitivity-Based Economic NMPC with a Path-Following Approach
Eka Suwartadi 1,
Vyacheslav Kungurtsev 2
Johannes Jäschke 1,*
Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway
Department of Computer Science, Czech Technical University in Prague, 12000 Praha 2, Czech Republic
Correspondence: Tel.: +47-735-93691
Received: 26 November 2016 / Accepted: 13 February 2017 / Published: 27 February 2017
We present a sensitivity-based predictor-corrector path-following algorithm for fast nonlinear model predictive control (NMPC) and demonstrate it on a large case study with an economic cost function. The path-following method is applied within the advanced-step NMPC framework to obtain fast and accurate approximate solutions of the NMPC problem. In our approach, we solve a sequence of quadratic programs to trace the optimal NMPC solution along a parameter change. A distinguishing feature of the path-following algorithm in this paper is that the strongly-active inequality constraints are included as equality constraints in the quadratic programs, while the weakly-active constraints are left as inequalities. This leads to close tracking of the optimal solution. The approach is applied to an economic NMPC case study consisting of a process with a reactor, a distillation column and a recycler. We compare the path-following NMPC solution with an ideal NMPC solution, which is obtained by solving the full nonlinear programming problem. Our simulations show that the proposed algorithm effectively traces the exact solution.
The idea of economic model predictive control (MPC) is to integrate the economic optimization layer and the control layer of the process control hierarchy into a single dynamic optimization layer. While classic model predictive control approaches typically employ a quadratic objective to minimize the error between setpoints and selected measurements, economic MPC adjusts the inputs to minimize the economic cost of operation directly. This makes it possible to optimize the cost during transient operation of the plant. In recent years, this has become increasingly desirable, as stronger competition, volatile energy prices and rapidly changing product specifications require agile plant operations, where transients, too, are optimized to maximize profit.
The first industrial implementations of economic MPC were reported in [1,2] for oil refinery applications. The development of theory and stability analysis for economic MPC arose almost a decade afterwards; see, e.g., [3,4]. Recent progress on economic MPC is reviewed and surveyed in [5,6]. Most of the current research activities focus on the stability analysis of economic MPC and do not discuss its performance (an exception is ).
Because nonlinear process models are often used for economic optimization, a potential drawback of economic MPC is that it requires solving a large-scale nonlinear optimization problem (NLP) associated with the nonlinear model predictive control (NMPC) problem at every sample time. The solution of this NLP may take a significant amount of time , and this can lead to performance degradation and even to instability of the closed-loop system .
To reduce the detrimental effect of computational delay in NMPC, several sensitivity-based methods were proposed [10,11,12]. All of these fast sensitivity approaches exploit the fact that the NMPC optimization problems are identical at each sample time, except for one varying parameter: the initial state. Instead of solving the full nonlinear optimization problem when new measurements of the state become available, these approaches use the sensitivity of the NLP solution at a previously-computed iteration to obtain fast approximate solutions to the new NMPC problem. These approximate solutions can be computed and implemented in the plant with minimal delay. A recent overview of the developments in fast sensitivity-based nonlinear MPC is given in , and a comparison of different approaches to obtain sensitivity updates for NMPC is compiled in the paper by Wolf and Marquardt .
Diehl et al.  proposed the concept of real-time iteration (RTI), in which the full NLP is not solved at all during the MPC iterations. Instead, at each NMPC sampling time, a single quadratic program (QP) related to the sequential quadratic programming (SQP) iteration for solving the full NLP is solved. The real-time iteration scheme contains two phases: (1) the preparation phase and (2) the feedback phase. In the preparation phase, the model derivatives are evaluated using a predicted state measurement, and a QP is formulated based on the data of this predicted state. In the feedback phase, once the new initial state is available, the QP is updated to include the new initial state and solved for the control input that is injected into the plant. The real-time iteration scheme has been applied to economic NMPC in the context of wind turbine control [16,17]. Similar to the real-time iteration scheme are the approaches by Ohtsuka  and the early paper by Li and Biegler , where one single Newton-like iteration is performed per sampling time.
A different approach, the advanced-step NMPC (asNMPC), was proposed by Zavala and Biegler . The asNMPC approach involves solving the full NLP at every sample time. However, the full NLP solution is computed in advance for a predicted initial state. Once the new state measurement is available, the NLP solution is corrected using a fast sensitivity update to match the measured or estimated initial state. A simple sensitivity update scheme is implemented in the software package sIPOPT . However, active set changes are handled rather heuristically; see  for an overview. Kadam and Marquardt  proposed a similar approach, where nominal NLP solutions are updated by solving QPs in a neighboring extremal scheme; see also [12,23].
The framework of asNMPC was also applied by Jäschke and Biegler , who use a multiple-step predictor path-following algorithm to correct the NLP predictions. Their approach included measures to handle active set changes rigorously, and their path-following advanced-step NMPC algorithm is also the first one to handle non-unique Lagrange multipliers.
The contribution of this paper is to apply an improved path-following method for correcting the NLP solution within the advanced-step NMPC framework. In particular, we replace the predictor path-following method from  by a predictor-corrector method and demonstrate numerically that the method works efficiently on a large-scale case study. We present how the asNMPC with the predictor-corrector path-following algorithm performs in the presence of measurement noise and compare it with a pure predictor path-following asNMPC approach and an ideal NMPC approach, where the NLP is assumed to be solved instantly. We also give a brief discussion about how our method differs from previously published approaches.
The structure of this paper is the following. We start by introducing the ideal NMPC and advanced-step NMPC frameworks in Section 2 and give a description of our path-following algorithm together with some relevant background material and a brief discussion in Section 3. The proposed algorithm is applied to a process with a reactor, distillation and recycling in Section 4, where we consider the cases with and without measurement noise and discuss the results. The paper is closed with our conclusions in Section 5.
2. NMPC Problem Formulations
2.1. The NMPC Problem
We consider a nonlinear discrete-time dynamic system:

$$x_{k+1} = f(x_k, u_k), \qquad (1)$$

where $x_k \in \mathbb{R}^{n_x}$ denotes the state variable, $u_k \in \mathbb{R}^{n_u}$ is the control input and $f : \mathbb{R}^{n_x} \times \mathbb{R}^{n_u} \to \mathbb{R}^{n_x}$ is a continuous model function, which calculates the next state $x_{k+1}$ from the previous state $x_k$ and control input $u_k$, where $k \in \mathbb{N}$. This system is optimized by a nonlinear model predictive controller, which solves the problem:

$$\begin{aligned} \min_{z_l,\, v_l} \quad & \sum_{l=k}^{k+N-1} \psi(z_l, v_l) + \Psi(z_{k+N}) \\ \text{s.t.} \quad & z_{l+1} = f(z_l, v_l), \quad l = k, \dots, k+N-1, \\ & z_k = x_k, \\ & g(z_l, v_l) \le 0, \quad l = k, \dots, k+N-1, \\ & z_{k+N} \in \mathbb{X}_f, \end{aligned} \qquad (2)$$

at each sample time. Here, $z_l$ is the predicted state variable; $v_l$ is the predicted control input; and $z_{k+N}$ is the final predicted state variable restricted to the terminal region $\mathbb{X}_f$. The stage cost is denoted by $\psi$ and the terminal cost by $\Psi$. Further, $g$ denotes the path constraints, i.e., $g(z_l, v_l) \le 0$ for $l = k, \dots, k+N-1$.
The solution of optimization Problem (2) is denoted $\{z^*_k, \dots, z^*_{k+N},\ v^*_k, \dots, v^*_{k+N-1}\}$. At sample time $k$, an estimate or measurement of the state $x_k$ is obtained, and Problem (2) is solved. Then, the first part of the optimal control sequence is assigned as plant input, such that $u_k = v^*_k$. This first part of the solution to (2) defines an implicit feedback law $u_k = \kappa(x_k)$, and the system will evolve according to $x_{k+1} = f(x_k, \kappa(x_k))$. At the next sample time $k+1$, when the measurement of the new state $x_{k+1}$ is obtained, the procedure is repeated. The NMPC algorithm is summarized in Algorithm 1.
Algorithm 1: General NMPC algorithm.
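To make the receding-horizon loop of Algorithm 1 concrete, the following is a minimal sketch in Python. The scalar linear plant, the quadratic stage cost, the horizon length and the use of scipy.optimize.minimize are illustrative assumptions for this sketch, not part of the paper's case study.

```python
# Minimal receding-horizon (NMPC) loop in the spirit of Algorithm 1.
import numpy as np
from scipy.optimize import minimize

def f(x, u):                      # discrete-time model x_{k+1} = f(x_k, u_k)
    return 0.9 * x + u

def solve_ocp(x0, N=5):
    """Solve the horizon-N optimal control problem for initial state x0."""
    def cost(v):
        z, J = x0, 0.0
        for u in v:               # simulate the predicted trajectory
            J += z**2 + u**2      # quadratic stage cost (illustrative)
            z = f(z, u)
        return J + z**2           # terminal cost
    res = minimize(cost, np.zeros(N))
    return res.x                  # optimal input sequence v*

x = 5.0
for k in range(10):               # closed-loop simulation
    v_star = solve_ocp(x)         # solve the NMPC problem at x_k
    u = v_star[0]                 # inject only the first input u_k = v*_k
    x = f(x, u)                   # plant evolves; repeat with the new state

print(abs(x))                     # the controller drives the state to zero
```

Note that only the first element of each optimal input sequence reaches the plant, exactly as in the implicit feedback law described above.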
2.2. Ideal NMPC and Advanced-Step NMPC Framework
To achieve optimal economic performance and good stability properties, Problem (2) needs to be solved instantly, so that the optimal input can be injected without time delay as soon as the values of the new states are available. We refer to this hypothetical case without computational delay as ideal NMPC.
In practice, there will always be some time delay between obtaining the updated values of the states and injecting the updated inputs into the plant. The main reason for this delay is the time required to solve the optimization Problem (2). As the process models become more advanced, solving the optimization problems requires more time, and the computational delay cannot be neglected any more. This has led to the development of fast sensitivity-based NMPC approaches. One such approach, which will be adopted in this paper, is the advanced-step NMPC (asNMPC) approach . It is based on the following steps:
1. Solve the NMPC problem at time $k$ with a predicted state value $\hat{x}_{k+1}$ for time $k+1$,
2. When the measurement $x_{k+1}$ becomes available at time $k+1$, compute an approximation of the NLP solution at $x_{k+1}$ using fast sensitivity methods,
3. Update $k \leftarrow k+1$, and repeat from Step 1.
Zavala and Biegler proposed a fast one-step sensitivity update that is based on solving a linear system of equations . Under some assumptions, this corresponds to a first-order Taylor approximation of the optimal solution. In particular, this approach requires strict complementarity of the NLP solution, which ensures no changes in the active set. A more general approach involves allowing for changes in the active set and making several sensitivity updates. This was proposed in  and will be developed further in this paper.
3. Sensitivity-Based Path-Following NMPC
In this section, we present some fundamental sensitivity results from the literature and then use them in a path-following scheme for obtaining fast approximate solutions to the NLP.
3.1. Sensitivity Properties of NLP
The dynamic optimization Problem (2) can be cast as a general parametric NLP problem:

$$\min_{\chi} \ F(\chi, t) \quad \text{s.t.} \quad c(\chi, t) = 0, \quad g(\chi, t) \le 0, \qquad (3)$$

where $\chi$ are the decision variables (which generally include the state variables $z_l$ and the control inputs $v_l$) and $t$ is the parameter, which is typically the initial state variable $x_k$. In addition, $F$ is the scalar objective function; $c$ denotes the equality constraints; and finally, $g$ denotes the inequality constraints. The instances of Problem (3) that are solved at each sample time differ only in the parameter $t$.
The Lagrangian function of this problem is defined as:

$$\mathcal{L}(\chi, t, \lambda, \mu) = F(\chi, t) + \lambda^T c(\chi, t) + \mu^T g(\chi, t), \qquad (4)$$

and the KKT (Karush–Kuhn–Tucker) conditions are:

$$\begin{aligned} \nabla_\chi \mathcal{L}(\chi, t, \lambda, \mu) &= 0, \\ c(\chi, t) &= 0, \\ g(\chi, t) &\le 0, \\ \mu &\ge 0, \\ \mu^T g(\chi, t) &= 0. \end{aligned} \qquad (5)$$
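As a quick numerical illustration, the KKT conditions (5) can be verified for a tiny equality-constrained instance of (3); the problem below and its solution are our own illustrative choices.

```python
import numpy as np

# Check the KKT conditions for:  min 0.5*||x||^2  s.t.  x1 + x2 = 1.
def F_grad(x):
    return x                              # gradient of 0.5*||x||^2

def c(x):
    return np.array([x[0] + x[1] - 1.0])  # equality constraint c(x) = 0

c_jac = np.array([[1.0, 1.0]])            # constant Jacobian of c

x_star = np.array([0.5, 0.5])             # candidate primal solution
lam = np.array([-0.5])                    # candidate multiplier

stationarity = F_grad(x_star) + c_jac.T @ lam   # grad_x L at the KKT point
print(stationarity, c(x_star))                  # both vanish at the solution
```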
In order for the KKT conditions to be a necessary condition of optimality, we require a constraint qualification (CQ) to hold. In this paper, we will assume that the linear independence constraint qualification (LICQ) holds:
Definition 1 (LICQ).
Given a parameter $t$ and a point χ, the LICQ holds at χ if the set of gradient vectors of the equality constraints and of the active inequality constraints, $\{\nabla_\chi c_i(\chi, t)\}_{i=1,\dots,n_c} \cup \{\nabla_\chi g_j(\chi, t) : g_j(\chi, t) = 0\}$, is linearly independent.
The LICQ implies that the multipliers satisfying the KKT conditions are unique. If, additionally, a suitable second-order condition holds, then the KKT conditions guarantee a unique local minimum. A suitable second-order condition requires the Hessian matrix to be positive definite on a set of appropriate directions, defined as follows:
Definition 2 (SSOSC).
The strong second-order sufficient condition (SSOSC) holds at χ with multipliers λ and μ if $d^T \nabla^2_{\chi\chi} \mathcal{L}(\chi, t, \lambda, \mu)\, d > 0$ for all $d \neq 0$, such that $\nabla_\chi c(\chi, t)^T d = 0$ and $\nabla_\chi g_i(\chi, t)^T d = 0$ for $i$, such that $g_i(\chi, t) = 0$ and $\mu_i > 0$.
For a given $t$, denote the solution to (3) by $\chi^*(t)$, and if no confusion is possible, we omit the argument and write simply $\chi^*$. We are interested in knowing how the solution $\chi^*(t)$ changes with a perturbation in the parameter $t$. Before we state a first sensitivity result, we define another important concept:
Definition 3 (SC).
Given a parameter $t$ and a solution $\chi^*$ with vectors of multipliers λ and μ, strict complementarity (SC) holds if $\mu_i > 0$ for each $i$ such that $g_i(\chi^*, t) = 0$.
Now, we are ready to state the result below given by Fiacco .
Theorem 1 (Implicit function theorem applied to optimality conditions).
Let $(\chi^*, \lambda^*, \mu^*)$ be a KKT point that satisfies (5), and assume that LICQ, SSOSC and SC hold at $\chi^*$. Further, let the functions $F$, $c$, $g$ be at least $(k+1)$-times differentiable in χ and $k$-times differentiable in $t$. Then:
$\chi^*$ is an isolated minimizer, and the associated multipliers λ and μ are unique.
for $t$ in a neighborhood of $t_0$, the set of active constraints remains unchanged.
for $t$ in a neighborhood of $t_0$, there exists a $k$-times differentiable function $\sigma(t) = (\chi^*(t), \lambda^*(t), \mu^*(t))$ that corresponds to a locally unique minimum of (3).
Using this result, the sensitivity of the optimal solution in a small neighborhood of $t_0$ can be computed by solving a system of linear equations that arises from applying the implicit function theorem to the KKT conditions of (3):

$$\begin{bmatrix} \nabla^2_{\chi\chi}\mathcal{L} & \nabla_\chi c & \nabla_\chi g_{\mathbb{A}} \\ \nabla_\chi c^T & 0 & 0 \\ \nabla_\chi g_{\mathbb{A}}^T & 0 & 0 \end{bmatrix} \begin{bmatrix} \Delta\chi \\ \Delta\lambda \\ \Delta\mu_{\mathbb{A}} \end{bmatrix} = - \begin{bmatrix} \nabla^2_{\chi t}\mathcal{L} \\ \nabla_t c^T \\ \nabla_t g_{\mathbb{A}}^T \end{bmatrix} \Delta t. \qquad (6)$$

Here, the constraint gradients with subscript $\mathbb{A}$ indicate that we only include the vectors and components of the Jacobian corresponding to the active inequality constraints at χ, i.e., those $i$ with $g_i(\chi, t) = 0$. Denoting the solution of the equation above as $(\Delta\chi, \Delta\lambda, \Delta\mu_{\mathbb{A}})$, for a small perturbation $\Delta t = t - t_0$, we obtain a good estimate:

$$\chi^*(t) \approx \chi^*(t_0) + \Delta\chi, \quad \lambda(t) \approx \lambda(t_0) + \Delta\lambda, \quad \mu_{\mathbb{A}}(t) \approx \mu_{\mathbb{A}}(t_0) + \Delta\mu_{\mathbb{A}}$$

of the solution to the NLP Problem (3) at the parameter value $t$. This approach was applied by Zavala and Biegler .
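The sensitivity system (6) and the resulting first-order update can be illustrated on a small parametric equality-constrained QP (our own toy data). Because the KKT system of a QP is linear in the parameter, the one-step sensitivity estimate reproduces the perturbed solution exactly, which makes the mechanics easy to verify:

```python
import numpy as np

# Parametric QP:  min 0.5 x'Hx  s.t.  Ax = b + E*t   (illustrative stand-in
# for (3); here c(x, t) = Ax - b - E*t, so grad_t c = -E).
H = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
E = np.array([1.0])

def solve(t):
    """Solve the KKT system of the QP at parameter value t."""
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([np.zeros(2), b + E * t])
    sol = np.linalg.solve(K, rhs)
    return sol[:2], sol[2:]            # primal x, multiplier lambda

x0, lam0 = solve(0.0)
dt = 0.3
# Sensitivity system (6): same KKT matrix, right-hand side from the
# parameter derivative of the constraints (-grad_t c * dt = E * dt).
K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
rhs = np.concatenate([np.zeros(2), E * dt])
step = np.linalg.solve(K, rhs)
x_pred = x0 + step[:2]                 # first-order sensitivity update

x_new, _ = solve(dt)                   # re-solve at the perturbed parameter
print(np.linalg.norm(x_pred - x_new))  # ~0 for this linear-quadratic case
```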
If $\Delta t$ becomes large, the approximate solution may no longer be accurate enough, because the SC assumption implies that the active set cannot change. While that is usually true for small perturbations, large changes in $t$ may very well induce active set changes.
It can be seen that the sensitivity system (6) corresponds to the stationarity conditions for a particular QP. This is not coincidental. It can be shown that for $\tau > 0$ small enough, the optimal active set is constant along $t = t_0 + \tau \Delta t$. Thus, we can form a QP wherein we are potentially moving off of weakly-active constraints while staying on the strongly-active ones. The primal-dual solution of this QP is in fact the directional derivative of the primal-dual solution path .
Theorem 2. Let $F$, $c$, $g$ be twice continuously differentiable in $t$ and χ near $(\chi^*(t_0), t_0)$, and let the LICQ and SSOSC hold at $\chi^*(t_0)$. Then, the solution $(\chi^*(t), \lambda(t), \mu(t))$ is Lipschitz continuous in a neighborhood of $t_0$, and the solution function is directionally differentiable.
Moreover, the directional derivative uniquely solves the following quadratic problem:

$$\begin{aligned} \min_{\Delta\chi} \quad & \tfrac{1}{2}\, \Delta\chi^T \nabla^2_{\chi\chi}\mathcal{L}\, \Delta\chi + \Delta\chi^T \nabla^2_{\chi t}\mathcal{L}\, \Delta t \\ \text{s.t.} \quad & \nabla_\chi c^T \Delta\chi + \nabla_t c^T \Delta t = 0, \\ & \nabla_\chi g_{K_+}^T \Delta\chi + \nabla_t g_{K_+}^T \Delta t = 0, \\ & \nabla_\chi g_{K_0}^T \Delta\chi + \nabla_t g_{K_0}^T \Delta t \le 0, \end{aligned} \qquad (10)$$

where $K_+ = \{ i : g_i(\chi^*, t_0) = 0,\ \mu_i > 0 \}$ is the strongly-active set and $K_0 = \{ i : g_i(\chi^*, t_0) = 0,\ \mu_i = 0 \}$ denotes the weakly-active set.
See  (Sections 5.1 and 5.2) and  (Proposition 3.4.1). ☐
The theorem above gives the solution of the perturbed NLP (3) by solving a QP problem. Note that regardless of the inertia of the Lagrangian Hessian, if the SSOSC holds, it is positive definite on the null space of the equality constraints, and thus, the QP defined is convex with an easily obtainable finite global minimizer. In , it is noted that as the solution to this QP is the directional derivative of the primal-dual solution of the NLP, it is a predictor step: a tangential, first-order estimate of the change in the solution with respect to a change in the parameter. We refer to the QP (10) as a pure-predictor. Note that obtaining the sensitivity via (10) instead of (6) has the advantage that changes in the active set can be accounted for correctly, and strict complementarity (SC) is not required. On the other hand, when SC does hold, (6) and (10) are equivalent.
3.2. Path-Following Based on Sensitivity Properties
Equation (6) and the QP (10) describe the change in the optimal solution for small perturbations. They cannot be guaranteed to reproduce the optimal solution accurately for larger perturbations, because of curvature in the solution path and active set changes that happen further away from the linearization point. One approach to handle such cases is to divide the overall perturbation into several smaller intervals and to iteratively use the sensitivity to track the path of optimal solutions.
The general idea of a path-following method is to reach the solution of the problem at a final parameter value $t_f$ by tracing a sequence of solutions $\chi^*(t_1), \chi^*(t_2), \dots, \chi^*(t_N)$ for a series of parameter values $t_1, t_2, \dots, t_N$ along the segment from $t_0$ to $t_f$, with $t_N = t_f$. The new direction is found by evaluating the sensitivity at the current point. This is similar to an Euler integration for ordinary differential equations.
However, just as in the case of integrating differential equations with an Euler method, a path-following algorithm that is only based on the sensitivity calculated by the pure-predictor QP may fail to track the solution accurately enough and may lead to poor solutions. To address this problem, a common approach is to include elements that are similar to a Newton step, which force the path-following algorithm towards the true solution. It has been found that such a corrector element can be easily included into a QP that is very similar to the predictor QP (10). Consider approximating (3) by a QP, linearizing with respect to both χ and $t$, but again enforcing the equality of the strongly-active constraints, as we expect them to remain strongly active at the perturbed NLP:

$$\begin{aligned} \min_{\Delta\chi} \quad & \tfrac{1}{2}\, \Delta\chi^T \nabla^2_{\chi\chi}\mathcal{L}\, \Delta\chi + \nabla_\chi F^T \Delta\chi + \Delta\chi^T \nabla^2_{\chi t}\mathcal{L}\, \Delta t \\ \text{s.t.} \quad & c(\chi, t) + \nabla_\chi c^T \Delta\chi + \nabla_t c^T \Delta t = 0, \\ & g_{K_+}(\chi, t) + \nabla_\chi g_{K_+}^T \Delta\chi + \nabla_t g_{K_+}^T \Delta t = 0, \\ & g_{K_0}(\chi, t) + \nabla_\chi g_{K_0}^T \Delta\chi + \nabla_t g_{K_0}^T \Delta t \le 0. \end{aligned} \qquad (11)$$
In our NMPC problem (2), the parameter $t$ corresponds to the current “initial” state, $x_k$. Moreover, the cost function is independent of $t$, and since the parameter enters the constraints linearly, $\nabla_t c$ and $\nabla_t g$ are constants and $\nabla^2_{\chi t}\mathcal{L} = 0$. With these facts, the above QP simplifies to:

$$\begin{aligned} \min_{\Delta\chi} \quad & \tfrac{1}{2}\, \Delta\chi^T \nabla^2_{\chi\chi}\mathcal{L}\, \Delta\chi + \nabla_\chi F^T \Delta\chi \\ \text{s.t.} \quad & c(\chi, t) + \nabla_\chi c^T \Delta\chi + \nabla_t c^T \Delta t = 0, \\ & g_{K_+}(\chi, t) + \nabla_\chi g_{K_+}^T \Delta\chi + \nabla_t g_{K_+}^T \Delta t = 0, \\ & g_{K_0}(\chi, t) + \nabla_\chi g_{K_0}^T \Delta\chi + \nabla_t g_{K_0}^T \Delta t \le 0. \end{aligned} \qquad (12)$$
We denote the QP formulation (12) as the predictor-corrector. We note that this QP is similar to the QP proposed in the real-time iteration scheme . However, it is not quite the same, as we enforce the strongly-active constraints as equality constraints in the QP. As explained in , this particular QP estimates how the NLP solution changes with the parameter (the predictor component) and refines the estimate by more closely satisfying the KKT conditions at the new parameter (the corrector component).
The predictor-corrector QP (12) is well suited for use in a path-following algorithm, where the optimal solution path is tracked from $t_0$ to a final value $t_f$ along a sequence of parameter points $t_1, \dots, t_N$ with $t_N = t_f$. At each point $t_j$, the QP is solved and the primal-dual solutions updated as:

$$\chi_{j+1} = \chi_j + \Delta\chi, \qquad \lambda_{j+1} = \bar{\lambda}, \qquad \mu_{j+1} = \bar{\mu},$$

where $\Delta\chi$ is obtained from the primal solution of QP (12) and where $\bar{\lambda}$ and $\bar{\mu}$ correspond to the Lagrange multipliers of QP (12).
Changes in the active set along the path are detected by the QP as follows: If a constraint becomes inactive at some point along the path, the corresponding multiplier will first vanish, i.e., the constraint will be added to the weakly-active set $K_0$. Since it is then not included as an equality constraint, the next QP solution can move away from the constraint. Similarly, if a new constraint becomes active along the path, it will make the corresponding linearized inequality constraint in the QP active and be tracked further along the path.
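The active-set mechanism just described can be seen on a scalar toy problem (our own construction, not from the paper): minimize $\tfrac{1}{2}(x - t)^2$ subject to $x \le 1$, whose solution path is $x^*(t) = \min(t, 1)$. For this problem, the predictor-corrector QP at each step reduces to a Newton step clipped at the linearized bound, and the QP activates the bound as soon as the path hits it:

```python
# Path-following on  min 0.5*(x - t)^2  s.t.  x <= 1,  as t goes 0 -> 2.
# The QP at (x_k, t_{j+1}) is  min_dx 0.5*dx^2 + (x_k - t_{j+1})*dx
# s.t. (x_k - 1) + dx <= 0, solved here in closed form.
def pf_step(x, t_next):
    dx_unc = t_next - x            # unconstrained QP minimizer (Newton step)
    dx_max = 1.0 - x               # linearized bound x_k + dx <= 1
    return x + min(dx_unc, dx_max)

x, xs = 0.0, []
for j in range(1, 5):              # four path-following steps, t_j = 0.5*j
    x = pf_step(x, 0.5 * j)
    xs.append(x)

print(xs)                          # the bound becomes active once t > 1
```

The iterates follow $\min(t_j, 1)$ exactly: the constraint is inactive for the first step and is picked up by the QP for all later steps.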
The resulting path-following algorithm is summarized with its main steps in Algorithm 2, and we are now in the position to apply it in the advanced-step NMPC setting described in Section 2.2. In particular, the path-following algorithm is used to find a fast approximation of the optimal NLP solution corresponding to the new available state measurement, which is done by following the optimal solution path from the predicted state to the measured state.
Algorithm 2: Path-following algorithm.
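The difference between the pure-predictor and predictor-corrector updates can be sketched numerically on an unconstrained scalar instance with a curved solution path (our own construction: $F(x, t) = \tfrac{1}{2}(x - t^2)^2$, so $x^*(t) = t^2$). The pure predictor is an Euler step along the sensitivity, while the predictor-corrector adds the Newton-like gradient term and tracks the path more closely:

```python
# Track the solution path x*(t) = t^2 of min_x 0.5*(x - t^2)^2, t: 0 -> 1.
H = 1.0                                 # Hessian d2F/dx2
def dFdx(x, t):  return x - t * t       # gradient in x
def d2Fdxdt(t):  return -2.0 * t        # cross derivative d2F/(dx dt)

N, dt = 4, 0.25
x_pred = x_pc = 0.0
for j in range(N):
    t = j * dt
    # pure predictor (10): Euler step along the sensitivity direction
    x_pred += -(d2Fdxdt(t) * dt) / H
    # predictor-corrector: adds the gradient (corrector) term to the step
    x_pc += -(dFdx(x_pc, t) + d2Fdxdt(t) * dt) / H

err_pred = abs(x_pred - 1.0)            # true solution at t = 1 is 1.0
err_pc = abs(x_pc - 1.0)
print(err_pred, err_pc)                 # the corrector reduces the drift
```

With four steps, the pure predictor accumulates the Euler drift of the curved path, while the corrector term pulls each iterate back towards the true solution manifold.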
3.3. Discussion of the Path-Following asNMPC Approach
In this section, we discuss some characteristics of the path-following asNMPC approach presented in this paper. We also present a small example to demonstrate the effect of including the strongly-active constraints as equality constraints in the QP.
A reader who is familiar with the real-time iteration scheme  will have realized that the QPs (12) that are solved in our path-following algorithm are similar to the ones proposed and solved in the real-time iteration scheme. However, there are some fundamental differences between the standard real-time iteration scheme as described in  and the asNMPC with a path-following approach.
This work is set in the advanced-step NMPC framework, i.e., at every time step, the full NLP is solved for a predicted state. When the new measurement becomes available, the precomputed NLP solution is updated by tracking the optimal solution curve from the predicted initial state to the new measured or estimated state. Any numerical homotopy algorithm can be used to update the NLP solution, and we have presented a suitable one in this paper. Note that the solution of the last QP along the path corresponds to the updated NLP solution, and only the inputs computed in this last QP will be injected into the plant.
The situation is quite different in the real-time iteration (RTI) scheme described in . Here, the NLP is not solved at all during the MPC sampling times. Instead, at each sampling time, a single QP is solved, and the computed input is applied to the plant. This requires very fast sampling times, and if the QP fails to track the true solution due to very large disturbances, similar measures as in the advanced-step NMPC procedure (i.e., solving the full NLP) must be taken to get the controller “on track” again. Note that the inputs computed from every QP are applied to the plant, whereas in our path-following asNMPC, only the input computed in the last QP along the homotopy is applied.
Finally, in the QPs of the previously published real-time iteration schemes , all inequality constraints are linearized and included as QP inequality constraints. Our approach in this paper, however, distinguishes between strongly- and weakly-active inequality constraints. Strongly-active inequalities are included as linearized equality constraints in the QP, while weakly-active constraints are linearized and added as inequality constraints to the QP. This ensures that the true solution path is tracked more accurately also when the full Hessian of the optimization problem becomes non-convex. We illustrate this in the small example below.
Consider the following parametric “NLP”
for which we have plotted the constraints at in Figure 1a.
The feasible region lies in between the parabola and the horizontal line. Changing the parameter t from zero to one moves the lower constraint up from to .
The objective gradient is , and the Hessian of the objective is always indefinite . The constraint gradients are . For , a (local) primal solution is given by . The first constraint is active, the second constraint is inactive, and the dual solution is . At we thus have the optimal primal solution and the optimal multiplier .
We consider starting from an approximate solution at the point with dual variables , such that the first constraint is strongly active, while the second one remains inactive. The linearized constraints for this point are shown in Figure 1b. Now, consider a change , going from to .
Recalling that we enforce the strongly-active constraint as an equality, the pure-predictor QP (10) takes the form:
This QP is convex with a unique solution resulting in the subsequent point .
The predictor-corrector QP (12), which includes a linear term in the objective that acts as a corrector, is given for this case as
Again, this QP is convex with a unique primal solution . The step computed by this predictor-corrector QP moves the update to the true optimal solution .
Now, consider a third QP, which is the predictor-corrector QP (12), but without enforcing the strongly active constraints as equalities. That is, all constraints are included in the QP as they were in the original NLP (16),
This QP is non-convex and unbounded; we can decrease the objective arbitrarily by setting and letting a scalar go to infinity. Although there is a local minimizer at , a QP solver that behaves “optimally” should find the unbounded “solution”.
This last approach cannot be expected to work reliably if the full Hessian of the optimization problem may become non-convex, which can easily be the case when optimizing economic objective functions. We note, however, that if the Hessian is positive definite, it is not necessary to enforce the strongly-active constraints as equality constraints in the predictor-corrector QP (12).
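The effect described above can be reproduced with a two-variable toy QP (our own data, not the paper's Example (16)): with an indefinite Hessian, keeping the strongly-active constraint as an equality restricts the step to a subspace on which the Hessian is positive definite, whereas relaxing it to an inequality leaves a feasible direction of negative curvature:

```python
import numpy as np

# Indefinite Lagrangian Hessian: the "all-inequality" QP is unbounded,
# while the equality-constrained version is convex on its feasible set.
H = np.diag([1.0, -1.0])          # indefinite Hessian
g = np.array([1.0, 0.0])          # QP gradient
# Strongly-active constraint gradient a = (0, 1): a'd = 0 (equality case)
# restricts d to span{e1}, where H is positive definite.

def qp_obj(d):
    return 0.5 * d @ H @ d + g @ d

# (a) Equality case: on {d2 = 0} the QP is 0.5*d1^2 + d1, minimized at d1 = -1.
d_eq = np.array([-1.0, 0.0])

# (b) Inequality case a'd >= 0: along the feasible ray d = (-1, s), s -> inf,
# the negative-curvature direction drives the objective to -infinity.
vals = [qp_obj(np.array([d_eq[0], s])) for s in (0.0, 10.0, 100.0)]
print(vals)                       # strictly decreasing -> QP unbounded below
```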
4. Numerical Case Study
4.1. Process Description
We demonstrate the path-following NMPC (pf-NMPC) on an isothermal reactor and separator process depicted in Figure 2. The continuously-stirred tank reactor (CSTR) is fed with a stream containing 100% component A and a recycle stream R from the distillation column. A first-order reaction A → B takes place in the CSTR, where B is the desired product, and the product stream with flow rate F is fed to the column. In the distillation column, the unreacted raw material A is separated from the product and recycled into the reactor. The desired product B leaves the distillation column as the bottom product, which is required to have a certain purity. Reaction kinetic parameters for the reactor are described in Table 1. The distillation column model is taken from . Table 2 summarizes the parameters used in the distillation column. In total, the model has 84 state variables, of which 82 are from the distillation column (concentration and holdup for each stage) and two from the CSTR (one concentration and one holdup).
The stage cost of the economic objective function to optimize under operation is:
where is the feed cost, is the steam cost and is the product price. The price setting is , , . The operational constraints are the concentration of the bottom product (), as well as the liquid holdup at the bottom and top of the distillation column and in the CSTR ( kmol). The control inputs are reflux flow (), boil-up flow (), feeding rate to the distillation (F), distillate (top) and bottom product flow rates (D and B). These control inputs have bound constraints as follows:
First, we run a steady-state optimization with the following feed rate (kmol/min). This gives us the optimal values for the control inputs and state variables. The optimal steady-state input values are . The optimal state and control inputs are used to construct a regularization term that is added to the objective function (20). The regularized stage cost becomes:
The weights and are selected to make the rotated stage cost of the steady state problem strongly convex; for details, see . This is done to obtain an economic NMPC controller that is stable.
Secondly, we set up the NLP for calculating the predicted state variables and predicted control inputs . We employ a direct collocation approach on finite elements using Lagrange collocation to discretize the dynamics, where we use three collocation points in each finite element. By using the direct collocation approach, the state variables and control inputs become optimization variables.
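To illustrate the discretization step, the following is a self-contained sketch (not the paper's CasADi implementation) of Lagrange collocation with three Radau points on a single finite element, applied to the linear test dynamics dx/dt = −x on [0, 1]; the node choice and test problem are our own assumptions.

```python
import numpy as np

# Lagrange collocation on one finite element with three Radau points.
sqrt6 = np.sqrt(6.0)
tau = np.array([0.0, (4 - sqrt6) / 10, (4 + sqrt6) / 10, 1.0])  # t0 + Radau points

def lagrange_deriv(nodes, i, t):
    """Derivative at t of the i-th Lagrange basis polynomial on the nodes."""
    n = len(nodes)
    V = np.vander(nodes, increasing=True)         # Vandermonde: V a = e_i
    a = np.linalg.solve(V, np.eye(n)[i])          # polynomial coefficients
    return sum(m * a[m] * t ** (m - 1) for m in range(1, n))

# Collocation conditions: sum_i L_i'(tau_j) x_i = f(x_j) = -x_j, j = 1..3,
# with x_0 = 1 fixed; for this linear ODE they form a linear system in x.
D = np.array([[lagrange_deriv(tau, i, tau[j]) for i in range(4)]
              for j in range(1, 4)])              # 3 x 4 differentiation matrix
M = D[:, 1:] + np.eye(3)                          # move f(x_j) = -x_j left
rhs = -D[:, 0] * 1.0                              # contribution of x_0 = 1
x = np.linalg.solve(M, rhs)                       # collocation-point states

print(x[-1], np.exp(-1.0))                        # end point vs exact e^{-1}
```

In the full NMPC problem, the same construction is repeated on every finite element of the prediction horizon, which is why the collocation-point states and the inputs all become optimization variables.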
The economic NMPC case study is initialized with the steady state values for a production rate kmol/min, such that the economic NMPC controller is effectively controlling a throughput change from kmol/min to kmol/min. We simulate 150 MPC iterations, with a sample time of 1 min. The prediction horizon of the NMPC controller is set to 30 min. This setting results in an NLP with 10,314 optimization variables. We use CasADi  (Version 3.1.0-rc1) with IPOPT  as the NLP solver. For the QPs, we use MINOS QP  from TOMLAB.
4.2. Comparison of the Open-Loop Optimization Results
In this section, we compare the solutions obtained from the path-following algorithm with the “true” solution of the optimization problem obtained by solving the full NLP. To do this, we consider the second MPC iteration, where the path-following asNMPC is used for the first time to correct the one-sample-ahead prediction (in the first MPC iteration, to start up the asNMPC procedure, the full NLP is solved twice). We focus on the interesting case where the predicted state is corrupted by noise, such that the path-following algorithm is required to update the solution. In Figure 3, we have plotted the difference between a selection of predicted states, obtained by applying the path-following NMPC approaches, and the ideal NMPC approach.
We observe that the one-step pure-predictor tracks the ideal NMPC solution worst, and the four-step path-following with predictor-corrector tracks it best. This happens because the predictor-corrector path-following QP has an additional linear term in the objective function and constraints for the purpose of moving closer to the solution of the NLP (the “corrector” component), as well as tracing the first-order estimate of the change in the solution (the “predictor”). The four-step path-following performs better because a smaller step size gives a finer approximation of the parametric NLP solution.
This is also reflected in the average approximation errors given in Table 3. The average approximation error has been calculated by averaging the error one-norm over all MPC iterations.
We observe that in this case study, the accuracy of a single predictor-corrector step is almost as good as performing four predictor-corrector steps along the path. That is, a single predictor-corrector QP update may be sufficient for this application. However, in general, in the presence of larger noise magnitudes and longer sampling intervals, which cause poorer predictions, a single-step update may no longer lead to good approximations. We also note that the solution error of the pure-predictor path-following method is several orders of magnitude larger.
On the other hand, given that the optimization vector χ has dimension 10,164 for our case study, the average one-norm approximation error of ca. 4.5 corresponds to very small errors in the individual variables.
4.3. Closed-Loop Results: No Measurement Noise
In this section, we compare the results for closed loop process operation. We consider first the case without measurement noise, and we compare the results for ideal NMPC with the results obtained by the path-following algorithm with the pure-predictor QP (10) and the predictor-corrector QP (12). Figure 4 shows the trajectories of the top and bottom composition in the distillation column and the reactor concentration and holdup. Note that around 120 min, the bottom composition constraint in the distillation column becomes active, while the CSTR holdup is kept at its upper bound all of the time (any reduction in the holdup will result in economic and product loss).
In this case (without noise), the prediction and the true solution only differ due to numerical noise. There is no need to update the prediction, and all approaches give exactly the same closed-loop behavior. This is also reflected in the accumulated stage cost, which is shown in Table 4.
The closed-loop control inputs are given in Figure 5. Note here that the feed rate into the distillation column is adjusted such that the reactor holdup is at its constraint all of the time.
4.4. Closed-Loop Results: With Measurement Noise
Next, we run simulations with measurement noise on all of the holdups in the system. The noise is taken to have a normal distribution with zero mean and a variance of one percent of the steady state values. This will result in corrupted predictions that have to be corrected for by the path-following algorithms. Again, we perform simulations with one and four steps of pure-predictor and predictor-corrector QPs.
Figure 6 shows the top and bottom compositions of the distillation column, together with the concentration and holdup in the CSTR. The states are obtained under closed-loop operation with the ideal and path-following NMPC algorithms. Due to the noise, violations of the active constraints on the CSTR holdup and on the bottom composition in the distillation column cannot be avoided. This holds for both the ideal NMPC and the path-following approaches.
The input variables shown in Figure 7 also reflect the measurement noise, and again, the fast sensitivity-based NMPC inputs are very close to the ideal NMPC inputs.
Finally, we compare the accumulated economic stage cost in Table 5.
Here, we observe that our proposed predictor-corrector path-following algorithm performs identically to the ideal NMPC. This is expected, since the predictor-corrector algorithm aims to reproduce the true NLP solution. Interestingly, in this case, the larger error of the pure-predictor path-following NMPC leads to better closed-loop economic performance. This is because the random measurement noise can have a positive or a negative effect on the operation, which is not accounted for by the ideal NMPC (nor by the predictor-corrector NMPC). Here, the inaccuracy of the pure-predictor variant happened to be beneficial, but it could equally have been detrimental.
5. Discussion and Conclusions
We applied the path-following ideas developed in Jäschke et al. and Kungurtsev and Diehl to a large-scale process containing a reactor, a distillation column and a recycle stream. Compared with single-step updates based on solving a single linear system of equations, our path-following approach requires somewhat more computational effort. However, it has the advantage that active-set changes are handled rigorously. Moreover, solving a sequence of a few QPs can be expected to be much faster than solving the full NLP, especially since they can be initialized very well; the computational delay between obtaining the new state and injecting the updated input into the plant therefore remains sufficiently small. In our computations, we have used a fixed step size for the path-following, such that the number of QPs to be solved is known in advance.
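The fixed-step path-following scheme described above can be sketched as a short loop. This is a minimal illustration only: the function names are hypothetical, and `solve_qp` stands in for the predictor-corrector QP (12), whose exact form is defined in the paper rather than here.

```python
import numpy as np

def path_following(solve_qp, z0, lam0, p_pred, p_meas, n_steps=4):
    """Trace the NLP solution from the predicted parameter p_pred to the
    measured parameter p_meas in n_steps fixed-size steps, solving one QP
    per step.  solve_qp(z, lam, p) must return the primal step dz and the
    updated multipliers at the intermediate parameter p."""
    z = np.asarray(z0, dtype=float)
    lam = np.asarray(lam0, dtype=float)
    for k in range(1, n_steps + 1):
        t = k / n_steps                                  # fixed step size 1/n_steps
        p = (1.0 - t) * np.asarray(p_pred) + t * np.asarray(p_meas)
        dz, lam = solve_qp(z, lam, p)                    # one QP along the path
        z = z + dz
    return z, lam
```

Because the number of steps is fixed in advance, the number of QP solves per sampling instant (and hence the worst-case computation time) is known a priori, which is the property the paragraph above relies on.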
The case without noise does not require the path-following algorithm to correct the solution, because the prediction and the true measurement are identical, except for numerical noise. However, when measurement noise is added to the holdups, the situation becomes different. In this case, the prediction and the measurements differ, such that an update is required. All four approaches track the ideal NMPC solution to some degree; however, in terms of accuracy, the predictor-corrector performs consistently better. Given that the pure sensitivity QP and the predictor-corrector QP are very similar in structure, it is recommended to use the latter in the path-following algorithm, especially for highly nonlinear processes and cases with significant measurement noise.
We have presented basic path-following algorithms, and they work well for the cases we have studied, in the sense that they do not diverge from the true solution. In principle, however, the path-following iterates may lose track of the solution path, and more sophisticated implementations need to include checks and safeguards. We note, however, that applying the path-following algorithm within the advanced-step NMPC framework has the desirable property that the solution of the full NLP acts as a corrector: if the path-following algorithm diverges from the true solution, this will most likely persist for only one sample time, until the next full NLP is solved.
The path-following algorithm in this paper (and the corresponding QPs) still relies on the assumption of linearly-independent constraint gradients. If there are path-constraints present in the discretized NLP, care must be taken to formulate them in such a way that LICQ is not violated. In future work, we will consider extending the path-following NMPC approaches to handle more general situations with linearly-dependent inequality constraints.
Acknowledgments
Vyacheslav Kungurtsev was supported by the Czech Science Foundation Project 17-26999S. Eka Suwartadi and Johannes Jäschke were supported by the Research Council of Norway Young Research Talent Grant 239809.
Author Contributions
V.K. and J.J. contributed the algorithmic ideas for the paper; E.S. implemented the algorithm and simulated the case study; E.S. primarily wrote the paper, with periodic assistance from V.K.; J.J. supervised the work, analyzed the simulation results and contributed to writing and correcting the paper.
Conflicts of Interest
The authors declare no conflict of interest.
Zanin, A.C.; Tvrzská de Gouvêa, M.; Odloak, D. Industrial implementation of a real-time optimization strategy for maximizing production of LPG in a FCC unit. Comput. Chem. Eng. 2000, 24, 525–531.
Zanin, A.C.; Tvrzská de Gouvêa, M.; Odloak, D. Integrating real-time optimization into the model predictive controller of the FCC system. Control Eng. Pract. 2002, 10, 819–831.
Rawlings, J.B.; Amrit, R. Optimizing process economic performance using model predictive control. In Nonlinear Model Predictive Control; Springer: Berlin/Heidelberg, Germany, 2009; Volume 384, pp. 119–138.
Rawlings, J.B.; Angeli, D.; Bates, C.N. Fundamentals of economic model predictive control. In Proceedings of the 51st IEEE Conference on Decision and Control (CDC), Maui, HI, USA, 10–13 December 2012.
Ellis, M.; Durand, H.; Christofides, P.D. A tutorial review of economic model predictive control methods. J. Process Control 2014, 24, 1156–1178.
Tran, T.; Ling, K.-V.; Maciejowski, J.M. Economic model predictive control—A review. In Proceedings of the 31st ISARC, Sydney, Australia, 9–11 July 2014.
Angeli, D.; Amrit, R.; Rawlings, J.B. On average performance and stability of economic model predictive control. IEEE Trans. Autom. Control 2012, 57, 1615–1626.
Idris, E.A.N.; Engell, S. Economics-based NMPC strategies for the operation and control of a continuous catalytic distillation process. J. Process Control 2012, 22, 1832–1843.
Findeisen, R.; Allgöwer, F. Computational delay in nonlinear model predictive control. In Proceedings of the International Symposium on Advanced Control of Chemical Processes (ADCHEM'03), Hong Kong, China, 11–14 January 2004.
Zavala, V.M.; Biegler, L.T. The advanced-step NMPC controller: Optimality, stability, and robustness. Automatica 2009, 45, 86–93.
Diehl, M.; Bock, H.G.; Schlöder, J.P. A real-time iteration scheme for nonlinear optimization in optimal feedback control. SIAM J. Control Optim. 2005, 43, 1714–1736.
Würth, L.; Hannemann, R.; Marquardt, W. Neighboring-extremal updates for nonlinear model-predictive control and dynamic real-time optimization. J. Process Control 2009, 19, 1277–1288.
Biegler, L.T.; Yang, X.; Fischer, G.A.G. Advances in sensitivity-based nonlinear model predictive control and dynamic real-time optimization. J. Process Control 2015, 30, 104–116.
Wolf, I.J.; Marquardt, W. Fast NMPC schemes for regulatory and economic NMPC—A review. J. Process Control 2016, 44, 162–183.
Diehl, M.; Bock, H.G.; Schlöder, J.P.; Findeisen, R.; Nagy, Z.; Allgöwer, F. Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. J. Process Control 2002, 12, 577–585.
Gros, S.; Quirynen, R.; Diehl, M. An improved real-time economic NMPC scheme for wind turbine control using spline-interpolated aerodynamic coefficients. In Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA, 15–17 December 2014; pp. 935–940.
Gros, S.; Vukov, M.; Diehl, M. A real-time MHE and NMPC scheme for wind turbine control. In Proceedings of the 52nd IEEE Conference on Decision and Control, Firenze, Italy, 10–13 December 2013; pp. 1007–1012.
Ohtsuka, T. A continuation/GMRES method for fast computation of nonlinear receding horizon control. Automatica 2004, 40, 563–574.
Li, W.C.; Biegler, L.T. Multistep, Newton-type control strategies for constrained nonlinear processes. Chem. Eng. Res. Des. 1989, 67, 562–577.
Pirnay, H.; López-Negrete, R.; Biegler, L.T. Optimal sensitivity based on IPOPT. Math. Program. Comput. 2012, 4, 307–331.
Yang, X.; Biegler, L.T. Advanced-multi-step nonlinear model predictive control. J. Process Control 2013, 23, 1116–1128.
Kadam, J.; Marquardt, W. Sensitivity-based solution updates in closed-loop dynamic optimization. In Proceedings of the DYCOPS 7 Conference, Cambridge, MA, USA, 5–7 July 2004.
Würth, L.; Hannemann, R.; Marquardt, W. A two-layer architecture for economically optimal process control and operation. J. Process Control 2011, 21, 311–321.
Jäschke, J.; Yang, X.; Biegler, L.T. Fast economic model predictive control based on NLP-sensitivities. J. Process Control 2014, 24, 1260–1272.
Fiacco, A.V. Introduction to Sensitivity and Stability Analysis in Nonlinear Programming; Academic Press: New York, NY, USA, 1983.
Bonnans, J.F.; Shapiro, A. Optimization problems with perturbations: A guided tour. SIAM Rev. 1998, 40, 228–264.
Levy, A.B. Solution sensitivity from general principles. SIAM J. Control Optim. 2001, 40, 1–38.
Kungurtsev, V.; Diehl, M. Sequential quadratic programming methods for parametric nonlinear optimization. Comput. Optim. Appl. 2014, 59, 475–509.
Skogestad, S.; Postlethwaite, I. Multivariable Feedback Control: Analysis and Design; Wiley-Interscience: Hoboken, NJ, USA, 2005.
Andersson, J. A General Purpose Software Framework for Dynamic Optimization. Ph.D. Thesis, Arenberg Doctoral School, KU Leuven, Leuven, Belgium, October 2013.
Wächter, A.; Biegler, L.T. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 2006, 106, 25–57.
Murtagh, B.A.; Saunders, M.A. A projected Lagrangian algorithm and its implementation for sparse nonlinear constraints. Math. Program. Study 1982, 16, 84–117.
(a) Constraints of NLP (16) in Example 1 and (b) their linearization at and t = 0.
Diagram of continuously-stirred tank reactor (CSTR) and distillation column.
The difference in predicted state variables between ideal NMPC (iNMPC) and path-following NMPC (pf-NMPC) from the second iteration.