On the Use of Nonlinear Model Predictive Control without Parameter Adaptation for Batch Processes

Optimization techniques are typically used to improve the economic performance of batch processes while meeting product and environmental specifications and safety constraints. Offline methods suffer from inaccurate model parameters, while re-identification of the parameters may not be possible due to the absence of persistency of excitation. A practical solution is therefore Nonlinear Model Predictive Control (NMPC) without parameter adaptation, where the measured states serve as new initial conditions for a re-optimization problem with a diminishing horizon. In such schemes, the optimum clearly cannot be reached due to plant-model mismatch. However, this paper goes one step further and shows that such re-optimization can in certain cases, especially with an economic cost, lead to results worse than simply applying the offline optimal input. On the other hand, in the absence of process noise and for small parametric variations, if the cost function corresponds to tracking a trajectory feasible for the plant, re-optimization always improves performance. This demonstrates the inherent robustness associated with the tracking cost. A batch reactor example presents and analyzes the different cases: re-optimization led to worse results in some cases with an economic cost function, while no such problem occurred with a tracking cost.


Introduction
Batch processes are widely used in specialty industries, such as pharmaceuticals, due to their flexibility in operation. As opposed to continuous processes, their operating conditions vary with time in order to meet the specifications and respect safety and environmental constraints. Additionally, in order to improve process operation efficiency and reduce costs, numerical optimization based on phenomenological models is used to obtain the time-varying schedule [1].
However, an optimum computed offline suffers from the problem that the model does not exactly represent reality. Very often, it is hard to obtain a precise model due to a lack of quality or quantity in the experimental data. In addition, in many cases, parameters are estimated from lab experiments and are thus not very accurate when scaled up to industrial processes.
To address this problem, the use of measurements in the framework of optimization is recommended [2,3]. The idea is to repeatedly re-optimize, changing the optimization problem appropriately using the information obtained from measurements. The initial conditions of the optimization problem are adapted based on the current measurements. In addition, it is also possible to identify the parameters of the system from the measurements and update them. Thus, two main categories need to be distinguished, though there is some inconsistency in the nomenclature reported in the literature. If only the initial conditions are updated, the schemes are referred to as Model Predictive Control (MPC) [4-11], while Dynamic Real-Time Optimization (D-RTO) schemes incorporate adaptation of both initial conditions and parameters [12].
MPC schemes incorporate feedback by re-optimization when computation is not prohibitive [4-7]. In this case, the model is not adapted, while a new optimum is computed from the initial conditions obtained from current measurements. Most real systems are better represented by a nonlinear model [8,9], in which case Nonlinear Model Predictive Control (NMPC) is more appropriate [10,11].
In D-RTO, the parameters of the model are also adapted. The major problem with the adaptation of model parameters is the persistency of excitation. The optimal input is typically not persistently exciting, and adding an excitation for the purpose of identification would cause sub-optimality [13]. Thus, in short, D-RTO is very difficult to implement except in special cases [14,15].
NMPC schemes do not reach the optimum due to plant-model mismatch, while D-RTO is not practical to implement. An intermediate solution is the robust NMPC reported in the literature [16,17]. The best known is the min-max method, which considers the worst-case scenario for optimization [18]. This method, however, is very conservative and clearly not optimal. Other methods, such as multi-stage NMPC [18], seek a compromise between conservatism and optimality. Stochastic NMPC [19] considers a probabilistic setting for the parameter uncertainties and seeks an optimum in a stochastic sense.
The current study takes a different approach and explores the pertinence of re-optimizing with adapted initial conditions but without adapting the model (NMPC) for batch process optimization with parametric errors. The main question asked is: "Given that the true optimum will not be reached due to plant-model parameter mismatch, is re-optimizing worthwhile? Will there be an improvement compared to simple implementation of the offline optimal solution?" It is shown that NMPC re-optimization may deteriorate the performance, especially with an economic cost function. On the other hand, no such effect is present when the cost function is the squared deviation from a desired trajectory feasible for the plant and the active constraints are invariant. In the absence of process noise, the tracking objective shows robustness, and repeated optimization can be used even when the model is subject to small parametric errors. This paper thus highlights the difference in robustness between the economic and tracking objectives.
This paper first presents the basics of NMPC. Then, an analysis points out why re-optimizing without parameter adaptation can give worse results. A demonstration showing that this situation does not arise for a quadratic tracking cost follows. Finally, an example illustrates the different possible situations.

Problem Formulation-Model Predictive Control without Parameter Adaptation
Model Predictive Control consists of repeatedly optimizing a given cost function based on a model of the system, using the state information obtained from measurements. Two types of formulations are found in the literature: the receding horizon [20], typically used for continuous processes, and the diminishing horizon [21], used for batch processes. In this paper, the diminishing horizon for a batch process with fixed final time $t_f$ is studied. Thus, at a given time $t_k$, the state obtained from the measurements is $x_k$, and the optimization problem is given as follows:

$$\min_{u[t_k, t_f]} \; J = \phi(x(t_f), \theta) + \int_{t_k}^{t_f} L(x, u, \theta)\, dt$$
$$\text{s.t.} \quad \dot{x} = F(x, u, \theta) + \dot{v}, \quad x(t_k) = x_k, \quad S(x, u) \le 0, \quad T(x(t_f)) \le 0, \tag{1}$$

where $J$ is the function to minimize, $u$ the input variables, $x$ the states, $F$ the equations describing the system dynamics, $v$ the process noise, $\phi$ the terminal cost, $L$ the integral cost, $\theta$ the parameters, and $S$ and $T$, respectively, the path and terminal constraints. The initial conditions are obtained from the measured values $x_k$; $\dot{x}$ and $\dot{v}$ represent, respectively, the differentiated states and noise.
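As a minimal sketch of the diminishing-horizon scheme above, the snippet below re-optimizes a toy scalar system at every sampling instant and applies only the first input interval to a "plant" whose parameter differs from the model's. The dynamics, parameter values, and cost are illustrative assumptions, not the paper's reactor example:

```python
import numpy as np
from scipy.optimize import minimize

DT, N = 0.2, 10                        # sampling interval and number of intervals
THETA_MODEL, THETA_PLANT = 1.0, 1.2    # model vs. "plant" parameter (illustrative)

def step(x, u, theta):
    """One Euler step of the toy dynamics x_dot = -theta*x + u."""
    return x + DT * (-theta * x + u)

def terminal_cost(u_seq, x0, theta):
    """Drive x(t_f) to 1 with a small input penalty, predicted with the model."""
    x = x0
    for u in u_seq:
        x = step(x, u, theta)
    return (x - 1.0) ** 2 + 1e-4 * float(np.sum(np.asarray(u_seq) ** 2))

x, u_applied = 0.0, []
for k in range(N):
    n_left = N - k                                  # diminishing horizon, t_f fixed
    res = minimize(terminal_cost, np.zeros(n_left), args=(x, THETA_MODEL))
    u_applied.append(res.x[0])                      # apply only the first interval
    x = step(x, res.x[0], THETA_PLANT)              # plant uses the true parameter
xf = x
```

Because the model underestimates the decay rate, the terminal state misses the target slightly even with feedback, which is the mismatch effect analyzed in the remainder of the paper.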
The above formulation reaches the optimum, to the extent allowed by the sampling, when there is process noise but no parametric error. The process noise moves the states away from their predicted values, but the repeated optimization ensures that an optimum is found even from the deviated values.
In contrast, this paper considers the case where the functional form of the model is assumed to be correct, but the parameters $\theta$ are uncertain, so that the parametric error $\Delta\theta = \theta - \theta_{real}$ is non-zero. This also causes a variation in the states, but it might not be sufficient to simply re-optimize from the new states with the wrong parameters. Additionally, the excitation present in the system might not be sufficient to identify the parameters online. In this work, the influence of such a parametric error on the operation of the NMPC is studied.

Variational Analysis of Model Predictive Control without Parameter Adaptation
Let us consider an appropriate input and state parameterization (e.g., piecewise constant), with parameterized input vector $U$ and parameterized states $X$. Additionally, assume that the active constraints are invariant with respect to parametric variations, so that they become additional algebraic equations that reduce the dimension of the search space. Let $U_k$ represent the reduced vector of inputs from time $t_k$ until $t_f$, and $X_k$ the states over this interval. The dynamic relationships can be written in a nonlinear static form, and the dynamic optimization problem becomes the following static nonlinear programming problem:

$$\min_{U_k} \; J = \Phi(X_k, U_k) \quad \text{s.t.} \quad X_k = \Psi(U_k, x_k, \theta) + d_k, \tag{2}$$

where $\Psi$ denotes the integrated system dynamics from the measured state $x_k$ and $d_k$ is the difference between the predicted and observed measurements, caused by process noise and parametric variations.
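To make the map $\Psi$ and the mismatch $d$ concrete, the following sketch (toy scalar dynamics with illustrative values, not taken from the paper) builds $X = \Psi(U, \theta)$ by forward simulation under a piecewise-constant input and checks numerically that a small parametric error produces a mismatch $d \approx (\partial\Psi/\partial\theta)\,\Delta\theta$:

```python
import numpy as np

DT = 0.1  # sampling interval (illustrative)

def psi(U, theta, x0=1.0):
    """Static map X = Psi(U, x0, theta): stacked states of the Euler-discretized
    toy dynamics x_dot = -theta*x + u under piecewise-constant inputs U."""
    x, X = x0, []
    for u in U:
        x = x + DT * (-theta * x + u)
        X.append(x)
    return np.array(X)

U = np.array([0.5, 0.5, 0.2, 0.0])
theta, dtheta = 1.0, 0.05

# Mismatch between "plant" (theta + dtheta) predictions and model predictions
d = psi(U, theta + dtheta) - psi(U, theta)

# First-order prediction of the mismatch via a finite-difference dPsi/dtheta
eps = 1e-6
dpsi_dtheta = (psi(U, theta + eps) - psi(U, theta)) / eps
d_lin = dpsi_dtheta * dtheta
```

The agreement between `d` and `d_lin` is the first-order relation between state mismatch and parametric error that the variational analysis below relies on.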
In what follows, a variational analysis is carried out assuming that the parametric variations are "small", so that higher-order terms can be neglected; the results obtained are therefore valid for small parametric variations. In the presence of parametric uncertainties and disturbances, the cost variation $\Delta J$ can be written as a second-order development:

$$\Delta J = \frac{\partial J}{\partial U} \Delta U + \frac{1}{2} \Delta U^T \frac{\partial^2 J}{\partial U^2} \Delta U + \Delta U^T \frac{\partial^2 J}{\partial U \partial d} \Delta d + \Delta U^T \frac{\partial^2 J}{\partial U \partial \theta} \Delta \theta + \text{terms in } \Delta d \text{ and } \Delta \theta \text{ only}. \tag{3}$$

In this equation, the terms involving only $\Delta d$ and $\Delta\theta$ are constant, since $\Delta d$ and $\Delta\theta$ cannot be affected by manipulating the process. Furthermore, the first term is zero by the optimality of the offline solution. Removing these terms and renaming the modifiable part $\Delta J$, the equation becomes:

$$\Delta J = \frac{1}{2} \Delta U^T \frac{\partial^2 J}{\partial U^2} \Delta U + \Delta U^T \frac{\partial^2 J}{\partial U \partial d} \Delta d + \Delta U^T \frac{\partial^2 J}{\partial U \partial \theta} \Delta \theta. \tag{4}$$

The necessary condition of optimality is obtained by differentiating with respect to the input variation and equating to zero:

$$\frac{\partial^2 J}{\partial U^2} \Delta U + \frac{\partial^2 J}{\partial U \partial d} \Delta d + \frac{\partial^2 J}{\partial U \partial \theta} \Delta \theta = 0, \tag{5}$$

so that the optimal input variation can be calculated as:

$$\Delta U_{opt} = -\left(\frac{\partial^2 J}{\partial U^2}\right)^{-1} \left(\frac{\partial^2 J}{\partial U \partial d} \Delta d + \frac{\partial^2 J}{\partial U \partial \theta} \Delta \theta\right). \tag{6}$$

Define:

$$t_d = \left(\frac{\partial^2 J}{\partial U^2}\right)^{-1/2} \frac{\partial^2 J}{\partial U \partial d} \Delta d, \qquad t_\theta = \left(\frac{\partial^2 J}{\partial U^2}\right)^{-1/2} \frac{\partial^2 J}{\partial U \partial \theta} \Delta \theta, \tag{7}$$

which are mathematical constructs representing the parts of (6) that correspond to $\Delta d$ and $\Delta\theta$, respectively. Under the standard assumption that the Hessian is positive definite, the square root exists. The units of $t_\theta$ and $t_d$ are those of $J^{0.5}$, so a physical interpretation is difficult. This paper considers the case where the parameters are not adapted, principally due to the absence of persistency of excitation. It is well known that the optimum cannot be reached in such a case.
The following proposition goes one step further and shows that re-optimization may even be harmful under certain circumstances.

Proposition 1. Consider the repeated dynamic optimization problem (1) solved using the corresponding static nonlinear programming problem (2). Let the variations in the measured states be caused by both parametric variations and process noise. Furthermore, assume that the active constraints are invariant with respect to parametric variations. If the correction is based only on state measurements and the parameters are not adapted, then re-optimization will be worse than the offline solution when the terms $t_\theta$ and $t_d$ point in opposing directions, satisfying $t_d^T t_\theta \le -\frac{1}{2} t_d^T t_d$.

Proof. If only $\Delta d$ is measured and corrected, then:

$$\Delta U = -\left(\frac{\partial^2 J}{\partial U^2}\right)^{-1} \frac{\partial^2 J}{\partial U \partial d} \Delta d = -\left(\frac{\partial^2 J}{\partial U^2}\right)^{-1/2} t_d,$$

and:

$$\Delta J_{opt} = -\frac{1}{2} t_d^T t_d - t_d^T t_\theta.$$

Obviously, $-\frac{1}{2} t_d^T t_d \le 0$, while $-t_d^T t_\theta$ is sign indefinite. If $t_d$ and $t_\theta$ point in the same direction, i.e., $\angle(t_d, t_\theta) \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ rad, then $\Delta J_{opt} \le 0$. However, if they point in different directions, $\Delta J_{opt}$ could still be negative as long as the first term $\left(-\frac{1}{2} t_d^T t_d\right)$ dominates. Yet, if $t_d^T t_\theta \le -\frac{1}{2} t_d^T t_d$, then $\Delta J_{opt}$ is positive, making the offline solution better than the re-optimization. □

In the absence of process noise, $\Delta d = \frac{\partial \Psi}{\partial \theta} \Delta\theta$, and thus the terms $t_\theta$ and $t_d$ can be written as:

$$t_d = \left(\frac{\partial^2 J}{\partial U^2}\right)^{-1/2} \frac{\partial^2 J}{\partial U \partial d} \frac{\partial \Psi}{\partial \theta} \Delta\theta, \qquad t_\theta = \left(\frac{\partial^2 J}{\partial U^2}\right)^{-1/2} \frac{\partial^2 J}{\partial U \partial \theta} \Delta\theta.$$

Since for a general cost function (such as an economic objective) there is no relationship between these two terms, the result of Proposition 1 holds. However, it will be shown in the following proposition that, for a tracking cost with a trajectory feasible for the plant, re-optimization is beneficial even if the parameters are not adapted. This, in other words, expresses the inherent robustness associated with a tracking cost function.
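The sign condition of Proposition 1 can be checked with a two-dimensional numerical example (illustrative vectors only), using the expression $\Delta J_{opt} = -\frac{1}{2} t_d^T t_d - t_d^T t_\theta$ that holds when only the state mismatch is corrected:

```python
import numpy as np

def delta_J_opt(t_d, t_theta):
    """Cost change -1/2 t_d^T t_d - t_d^T t_theta when only the state
    mismatch is corrected and the parameters are not adapted."""
    return -0.5 * t_d @ t_d - t_d @ t_theta

t_d = np.array([1.0, 0.0])                            # t_d^T t_d = 1
dJ_aligned = delta_J_opt(t_d, np.array([0.5, 0.0]))   # t_d^T t_theta = 0.5 > -1/2
dJ_opposed = delta_J_opt(t_d, np.array([-1.0, 0.0]))  # t_d^T t_theta = -1 <= -1/2
```

When the two terms are aligned, re-optimization improves the cost; when they oppose each other strongly enough, the offline solution is better, exactly as the proposition states.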

Proposition 2. Consider the repeated dynamic optimization problem (1) solved using the corresponding static nonlinear programming problem (2) with the tracking cost

$$J = (X - X_{ref})^T (X - X_{ref}) + w\, (U - U_{ref})^T (U - U_{ref}),$$

with $X_{ref}$ a trajectory feasible for the plant, $U_{ref}$ the corresponding input, and $w$ the weight on the input variations. Let the variations in the measured states be caused by parametric variations only. Furthermore, assume that the active constraints are invariant with respect to parametric variations. If the correction is based only on state measurements and the parameters are not adapted, then, for small enough parametric variations, $t_\theta = t_d$, and the re-optimization will be better than or equal to the offline solution, i.e., $\Delta J_{opt} \le 0$.
Proof. If the variation in the states is caused only by parametric uncertainties, then $\Delta d = \frac{\partial \Psi}{\partial \theta} \Delta\theta$. The partial derivatives for this case are given by:

$$\frac{\partial^2 J}{\partial U \partial d} = 2 \left(\frac{\partial \Psi}{\partial U}\right)^T, \qquad \frac{\partial^2 J}{\partial U \partial \theta} = 2 \left(\frac{\partial^2 \Psi}{\partial U \partial \theta}\right)^T (X - X_{ref}) + 2 \left(\frac{\partial \Psi}{\partial U}\right)^T \frac{\partial \Psi}{\partial \theta}. \tag{16}$$

With these, the two terms $t_d$ and $t_\theta$ can be written as:

$$t_d = 2 \left(\frac{\partial^2 J}{\partial U^2}\right)^{-1/2} \left(\frac{\partial \Psi}{\partial U}\right)^T \frac{\partial \Psi}{\partial \theta}\, \Delta\theta, \tag{17}$$

$$t_\theta = 2 \left(\frac{\partial^2 J}{\partial U^2}\right)^{-1/2} \left[\left(\frac{\partial^2 \Psi}{\partial U \partial \theta}\right)^T (X - X_{ref}) + \left(\frac{\partial \Psi}{\partial U}\right)^T \frac{\partial \Psi}{\partial \theta}\right] \Delta\theta. \tag{18}$$

At the optimum, since $X_{ref}$ is assumed to be feasible for the plant, $X - X_{ref} = 0$. Outside the optimum, $X - X_{ref}$ grows with $\Delta\theta$, and the first term in Equation (18) becomes proportional to $\Delta\theta^2$. For small enough parametric variations, this term can be neglected. Then, $t_\theta$ and $t_d$ are the same, and this gives:

$$\Delta J_{opt} = -\frac{1}{2} t_d^T t_d - t_d^T t_d = -\frac{3}{2} t_d^T t_d \le 0.$$

In this case, the re-optimization is always at least as good as the offline solution. □

Such a robustness result cannot be established when process noise is present. Inclusion of process noise would give $\Delta d = \frac{\partial \Psi}{\partial \theta} \Delta\theta + v$, which adds a term to Equation (17). This in turn prevents $t_\theta$ from being equal to $t_d$, which could lead to a degradation in performance. Thus, robustness can only be established mathematically for a tracking cost without process noise.
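The robustness stated in Proposition 2 can be observed numerically. The sketch below (toy scalar dynamics with illustrative values, not the paper's reactor) builds a reference trajectory that is feasible for the "plant", then compares open-loop application of the offline optimum (computed with a slightly wrong model parameter) against diminishing-horizon re-optimization without parameter adaptation; in this experiment, the re-optimized tracking cost is lower:

```python
import numpy as np
from scipy.optimize import minimize

DT, N = 0.2, 5
TH_MODEL, TH_PLANT = 1.0, 1.05   # small parametric error (illustrative values)

def simulate(x0, U, theta):
    """States of the Euler-discretized toy dynamics x_dot = -theta*x + u."""
    x, traj = x0, []
    for u in U:
        x = x + DT * (-theta * x + u)
        traj.append(x)
    return np.array(traj)

# Reference generated with the PLANT parameter, hence feasible for the plant
U_ref = np.full(N, 0.8)
X_ref = simulate(0.0, U_ref, TH_PLANT)

def cost(U, x0, theta, ref):
    """Quadratic tracking cost of the predicted trajectory against the reference."""
    return float(np.sum((simulate(x0, U, theta) - ref) ** 2))

# Offline optimum computed with the (wrong) model, applied open-loop to the plant
U_off = minimize(cost, np.zeros(N), args=(0.0, TH_MODEL, X_ref)).x
J_off = cost(U_off, 0.0, TH_PLANT, X_ref)

# Diminishing-horizon re-optimization: measured state fed back, parameters NOT adapted
x, J_mpc = 0.0, 0.0
for k in range(N):
    Uk = minimize(cost, np.zeros(N - k), args=(x, TH_MODEL, X_ref[k:])).x
    x = x + DT * (-TH_PLANT * x + Uk[0])   # plant evolves with the true parameter
    J_mpc += (x - X_ref[k]) ** 2
```

Open-loop application lets the tracking error accumulate, while re-optimization resets it at every sampling instant, consistent with the proposition.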

Illustrative Example
To illustrate the impact of parametric errors on NMPC, six different cases are treated. The first three use an economic cost, while the last three track a trajectory. In both situations, cases with a terminal constraint, a path constraint, and no constraints are considered. Barrier functions are used to handle the constraints.
For each case, a batch reactor with the two reactions A → B and A + B → C is studied (inspired by Reference [12]). From a mass balance, the following model is derived for the system:

$$\dot{c}_A = -k_1 c_A - k_2 c_A c_B, \qquad \dot{c}_B = k_1 c_A - k_2 c_A c_B, \qquad \dot{c}_C = k_2 c_A c_B,$$

where $c_X$ is the concentration of species X (mol/L) and $k_1$ and $k_2$ are the kinetic reaction coefficients (h$^{-1}$), obtained from the Arrhenius equation:

$$k_i = k_{i0}\, e^{-E_i/(R T)}, \quad i = 1, 2.$$

Using the scaled temperature

$$u = e^{-E_1/(R T)}$$

as the input and considering

$$\alpha = E_2 / E_1,$$

the kinetic coefficients are expressed as:

$$k_1 = k_{10}\, u, \qquad k_2 = k_{20}\, u^{\alpha}.$$

The nominal values of all parameters, as well as the constraint values, for these simulations are given in Table 1. For each case, the parameters subject to error are $\alpha$ and $k_{10}$.
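A minimal simulation of this reactor model can be sketched as follows. The parameter values below are illustrative placeholders, not the nominal values of Table 1, and the scaled-temperature input is held constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (placeholders, not the Table 1 nominal values)
K10, K20, ALPHA = 1.0, 0.5, 2.0
U_TEMP = 0.8  # scaled-temperature input, held constant for this simulation

def reactor_rhs(t, c, u):
    """Mass balances for A -> B and A + B -> C with k1 = k10*u, k2 = k20*u**alpha."""
    cA, cB, cC = c
    k1, k2 = K10 * u, K20 * u ** ALPHA
    r1, r2 = k1 * cA, k2 * cA * cB     # rates of the two reactions
    return [-r1 - r2, r1 - r2, r2]

sol = solve_ivp(reactor_rhs, (0.0, 1.0), [1.0, 0.0, 0.0],
                args=(U_TEMP,), rtol=1e-8, atol=1e-10)
cA, cB, cC = sol.y[:, -1]
```

Note that the stoichiometry conserves $c_A + c_B + 2 c_C$, which provides a convenient consistency check on the integration.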
Table 1. Model parameters, operating bounds, and initial conditions for Cases 1 to 6.

The objective of the first three cases is to maximize the final concentration of B.

Case 1: Unconstrained system with economic cost
In the first case, there are no constraints on the system, which gives the following optimization problem:

$$\max_{u} \; c_B(t_f).$$

Case 2: System with terminal constraint and economic cost
The objective is to maximize the final concentration of B, in this case with a constraint on the final concentration of A:

$$\max_{u} \; c_B(t_f) \quad \text{s.t.} \quad c_A(t_f) \le c_A^{max}.$$

The terminal constraint is included in the numerical optimization using a barrier function $b(c)$ for the constraint $c_A(t_f) - c_A^{max} \le 0$.

Case 3: System with path constraint and economic cost
The objective is to maximize the final concentration of B, in this case with a lower bound on the input:

$$\max_{u} \; c_B(t_f) \quad \text{s.t.} \quad u(t) \ge u^{min}.$$

The path constraint is once again included in the numerical optimization using a barrier function.
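As an illustration of how a terminal constraint can be folded into the cost, one common choice is a logarithmic barrier; the paper does not specify its exact $b(c)$, so the form and the weight `mu` below are assumptions:

```python
import numpy as np

def terminal_barrier(cA_tf, cA_max, mu=1e-3):
    """Log-barrier for the terminal constraint cA(tf) - cA_max <= 0.
    mu is a hypothetical barrier weight; the paper's exact b(c) is not given."""
    if cA_tf >= cA_max:
        return np.inf                       # infeasible point: infinite penalty
    return -mu * np.log(cA_max - cA_tf)     # grows as the constraint becomes active

inside = terminal_barrier(0.2, 0.5)         # well inside the feasible region
near = terminal_barrier(0.499, 0.5)         # close to the constraint boundary
```

Adding such a term to the objective keeps the unconstrained optimizer away from the constraint boundary while leaving the interior nearly unchanged.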
Case 4: Unconstrained system with trajectory cost
The objective of the last three cases is to minimize the deviation of the concentration of B from a reference trajectory. In this case, there are no constraints on the system, which gives the following optimization problem:

$$\min_{u} \int_{0}^{t_f} \left(c_B - c_B^{ref}\right)^2 dt,$$

where $c_B^{ref}$ is the trajectory to follow. For the three tracking cases, $c_B^{ref}$ is a path following 90% of the maximal production (model optimum). Additionally, in these three cases, the inputs are not penalized, mainly because no measurement noise was considered.
Case 5: System with terminal constraint and trajectory cost
The objective is to minimize the deviation of the concentration of B from the reference, in this case with a constraint on the final concentration of A:

$$\min_{u} \int_{0}^{t_f} \left(c_B - c_B^{ref}\right)^2 dt \quad \text{s.t.} \quad c_A(t_f) \le c_A^{max}.$$

The terminal constraint is included in the numerical optimization using a barrier function. $c_B^{ref}$ is the trajectory to follow and is not a function of time.

Case 6: System with path constraint and trajectory cost
The objective is to minimize the deviation of the concentration of B from the reference, in this case with a lower bound on the input:

$$\min_{u} \int_{0}^{t_f} \left(c_B - c_B^{ref}\right)^2 dt \quad \text{s.t.} \quad u(t) \ge u^{min}.$$

The path constraint is once again included in the numerical optimization using a barrier function. $c_B^{ref}$ is the trajectory to follow and is not a function of time.

Results
The terminal cost obtained for each simulation is shown in Table 2. The simulations in which the feedback re-optimization gave worse results than simply applying the offline optimum are indicated in bold. The parametric errors considered are all ±20%, except for Case 3. This particular case was harder to optimize, and a larger parametric error was required for the feedback's impact to surpass the optimization difficulties. Note that scenarios where re-optimization is worse than the offline solution only occur with the economic cost. Additional simulations with different parametric errors all led to the same observation. The trajectory-tracking problems with a trajectory feasible for the plant always resulted in re-optimization being better. However, if a path more demanding than the maximal production was chosen for $c_B^{ref}$, i.e., an infeasible trajectory, then the tracking problem suffered the same difficulties as the economic cost.
The simulation for a +20% error on $\alpha$ and a −20% error on $k_{10}$ in Case 1 is shown in Figure 1. It shows how re-optimization is actually worse than the offline solution: the input is pulled away from its optimal value with each re-optimization.


Conclusions
Optimization is frequently used for processes, whether offline or online within a control method such as NMPC. In this paper, the impact of using NMPC in the presence of parametric errors is studied. An analysis of the mathematical formulation of NMPC has shown that situations can occur where online optimization leads to results worse than the offline solution. The example studied presented this case in particular: deterioration of the performance occurred only for an economic cost, while online optimization always helped with the tracking cost. A theoretical analysis supports this result, showing that, for a quadratic tracking cost, online re-optimization improves performance under small parametric uncertainties.

Table 2. Comparison of the offline, re-optimization, and plant-optimum solutions for the six cases with parametric errors. The cost is maximized for Cases 1-3 and minimized for Cases 4-6.