#### 4.1. Example 1

Let the advection field be given by

that is, it is space-independent and at first constantly points in the direction ${(1,1)}^{\top}$ before smoothly rotating towards the direction ${(-1,-1)}^{\top}$. The diffusion coefficient is set to $d=1$, the reaction term is chosen as $r=0$, that is, there is no reaction in the system, and the control input $u\in {L}^{2}(0,1;{\mathbb{R}}^{4})$ acts on all four edges of the domain individually.
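The formula for the advection field is elided above; as an illustration, one plausible realization of such a field can be sketched as follows. The transition window and the smoothstep profile are assumptions for illustration only; the paper's exact formula (14) may differ.

```python
import numpy as np

def advection(t):
    """Space-independent advection field: constant direction (1, 1) for
    small t, then a smooth rotation towards (-1, -1).  The transition
    window [0.25, 0.75] and the smoothstep profile are assumptions for
    illustration; the paper's exact formula is not reproduced here."""
    theta0, theta1 = np.pi / 4, 5 * np.pi / 4  # angles of (1,1) and (-1,-1)
    s = np.clip((t - 0.25) / 0.5, 0.0, 1.0)    # ramp from 0 to 1
    s = 3 * s**2 - 2 * s**3                    # C^1 smoothstep
    theta = (1 - s) * theta0 + s * theta1
    return np.sqrt(2) * np.array([np.cos(theta), np.sin(theta)])
```

By construction, `advection(0.0)` equals $(1,1)^{\top}$ and `advection(1.0)` equals $(-1,-1)^{\top}$, matching the qualitative description above.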

Furthermore, we choose ${y}_{\circ}\left(x\right)=15$, ${y}_{1}^{2}\left(x\right):=15+{x}_{1}{x}_{2}$ and set ${\sigma}_{1}^{1}=1$, ${\sigma}_{1}^{2}=0.1$, ${\sigma}_{1}^{3}=0.001$ in the cost function ${\widehat{J}}_{1}$ and ${\sigma}_{2}^{1}=0$, ${\sigma}_{2}^{2}=0$, ${\sigma}_{2}^{3}=1$ in the cost function ${\widehat{J}}_{2}$. The linear mappings ${\mathcal{C}}_{1}^{1}\in \mathcal{L}\left({L}^{2}(0,T;H)\right)$ and ${\mathcal{C}}_{1}^{2}\in \mathcal{L}\left(H\right)$ are both chosen to be the identity. This is a typical framework for MOCPs: the first cost function penalizes the deviation from a desired state, whereas the second cost function measures the control costs.
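The cost functions ${\widehat{J}}_{1}$, ${\widehat{J}}_{2}$ are not written out in this excerpt; a minimal discrete sketch of the typical tracking/control structure they describe, assuming the standard form with a running tracking term, a terminal deviation term and a control-energy term weighted by ${\sigma}_{i}^{1}$, ${\sigma}_{i}^{2}$, ${\sigma}_{i}^{3}$:

```python
import numpy as np

def J_hat(y, y_d, y_T, u, sigma, dt):
    """Tracking-type cost on a discretized trajectory (assumed form for
    illustration; the paper's exact definition of J_hat_i is not shown):
    sigma = (running tracking weight, terminal weight, control weight).
    y, y_d: state / desired state, shape (nt, nx); y_T: desired terminal
    state, shape (nx,); u: control snapshots, shape (nt, m)."""
    track = 0.5 * sigma[0] * dt * np.sum((y - y_d) ** 2)
    term = 0.5 * sigma[1] * np.sum((y[-1] - y_T) ** 2)
    ctrl = 0.5 * sigma[2] * dt * np.sum(u ** 2)
    return track + term + ctrl

# weights from the text: J1 tracks the desired state with small control
# costs, J2 is a pure control cost (sigma_2^1 = sigma_2^2 = 0)
sigma1 = (1.0, 0.1, 0.001)
sigma2 = (0.0, 0.0, 1.0)
```

With `sigma2`, only the control term survives, reflecting that ${\widehat{J}}_{2}$ measures pure control costs.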

Given these data we can apply Theorem 1 and show the external stability of the sets ${\widehat{J}}^{n}({U}^{n},{y}_{n}\left({t}_{n}\right))$, which implies the feasibility of steps 4 and 6 in Algorithm 3.

However, we can in general not expect that there is a feedback $\kappa $ fulfilling (7). The reason is that such a $\kappa $ would have to fulfill the inequality

due to the choices ${\sigma}_{2}^{1}={\sigma}_{2}^{2}=0$ in the second cost function, which implies $\kappa =0$. Plugging this into (7) for $i=1$, we see that the inequality is fulfilled only if the uncontrolled system moves towards the desired temperatures on its own. This is unlikely to happen in our setting, since the desired temperatures are larger than the initial condition and increasing in time. Nevertheless, we can still compute a minimizer of (8) and use it to define $\tilde{u}$ in (6). Note that in this case the assumptions of Theorem 3 are not fulfilled, so this choice of $\kappa $ and $\tilde{u}$ is of a heuristic nature.

Another heuristic approach for determining $\tilde{u}$ that we want to test in the following is motivated by the criteria used in step 2 of Algorithm 4 in Reference [11], which was designed for problems without terminal condition and translates to our problem as follows: During the $n$-th loop iteration of Algorithm 3 ($n\in \{0,\dots ,L-\ell \}$) compute $\kappa \in {\mathcal{U}}_{opt}^{({t}_{n-1}^{\ell},{t}_{n+1}^{\ell})}\left({y}_{n}\left({t}_{n-1}^{\ell}\right)\right)$ in step 9 such that

holds. Then we set

Here we do not impose the inequality (15) on $\kappa $ explicitly. The reason is that demanding (15) would not guarantee any performance results for our framework, but only increase the computational time.

Therefore, we again use the gradient descent method presented in Algorithm 2 for computing a control $\kappa \in {\mathcal{U}}_{opt}^{({t}_{n-1}^{\ell},{t}_{n+1}^{\ell})}$. As initial control $u$ we choose

Again, it can be shown that for any accumulation point $\overline{u}$ of the sequence ${\left({u}_{k}\right)}_{k\in \mathbb{N}}$ produced by Algorithm 2, it holds that $\overline{u}\in {\mathcal{U}}_{opt}^{({t}_{n-1}^{\ell},{t}_{n+1}^{\ell})}\left({y}_{n}\left({t}_{n-1}^{\ell}\right)\right)$. Moreover, although the inequality (15) is not guaranteed directly by this method, we still get ${\widehat{J}}^{({t}_{n-1}^{\ell},{t}_{n+1}^{\ell})}(\kappa ,{y}_{n}\left({t}_{n-1}^{\ell}\right))\le {\widehat{J}}^{({t}_{n-1}^{\ell},{t}_{n+1}^{\ell})}(u,{y}_{n}\left({t}_{n-1}^{\ell}\right))$. In particular, this implies

since the time grid is equidistant. Therefore, it can be expected that (15) is satisfied in most cases.
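Algorithm 2 is not reproduced in this excerpt; its core ingredient, a common descent direction obtained from a minimizer $\widehat{\alpha}$ of a problem like (13), can be sketched for two objectives as follows. This is the standard multiobjective steepest-descent construction; the actual Algorithm 2 may differ in its details.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Common descent direction for two objectives with gradients g1, g2:
    q = -(a*g1 + (1-a)*g2), where a in [0, 1] minimizes the norm of the
    convex combination.  The weights (a, 1-a) play the role of alpha_hat:
    if a is close to 1, q mostly follows the negative gradient -g1."""
    d = g1 - g2
    denom = float(d @ d)
    # closed-form minimizer of ||a*g1 + (1-a)*g2||^2 over a in [0, 1]
    a = 1.0 if denom == 0.0 else float(np.clip(-(g2 @ d) / denom, 0.0, 1.0))
    q = -(a * g1 + (1.0 - a) * g2)
    return q, (a, 1.0 - a)
```

For orthogonal unit gradients the weights come out as $(0.5, 0.5)$ and the direction is a descent direction for both objectives simultaneously.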

**Test 1.** In our first test the main focus is on investigating how well the MOMPC Algorithm 3 performs compared to the open-loop problem on the time interval $[0,1]$ for different MPC horizons. Therefore, we perform Algorithm 3 with all controls from ${\mathcal{U}}_{opt,appr}^{0}\left({y}_{\circ}\right)$ for the MPC horizon lengths $\ell =25$, $\ell =50$ and $\ell =99$ with two different strategies in step 9:

1. Compute $\kappa $ by minimizing (8) and set $\tilde{u}$ as in (6).

2. Compute $\kappa $ by using Algorithm 2 as described above and set $\tilde{u}$ according to (16).

Note that the set ${\mathcal{U}}_{opt,appr}^{0}\left({y}_{\circ}\right)$ contains 42, 34 and 40 elements for the horizon lengths $\ell =25$, $\ell =50$ and $\ell =99$, respectively.
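Schematically, the receding-horizon loop of Algorithm 3 with a pluggable step-9 strategy looks as follows. This is a bare skeleton for orientation, not the paper's algorithm; `solve_open_loop`, `apply_control` and `strategy` are placeholder callbacks.

```python
def mompc(y0, L, ell, solve_open_loop, apply_control, strategy):
    """Schematic MOMPC loop (illustration only).
    solve_open_loop(y): Pareto-optimal control on the first horizon;
    apply_control(y, u_piece): advance the state one sampling step;
    strategy(n, y, u): step-9 choice of the next control (kappa / u_tilde,
    via strategy 1 or strategy 2 from the text).
    Returns the concatenated MOMPC feedback control mu."""
    y = y0
    mu = []
    u = solve_open_loop(y0)               # initial open-loop solve
    for n in range(L - ell + 1):
        mu.append(u[0])                   # apply the first control piece
        y = apply_control(y, u[0])
        u = strategy(n, y, u)             # construct the next horizon control
    return mu

# toy instantiation: scalar dynamics, shift-and-pad strategy
mu = mompc(
    y0=0.0, L=5, ell=3,
    solve_open_loop=lambda y: [1.0, 2.0, 3.0],
    apply_control=lambda y, u: y + u,
    strategy=lambda n, y, u: u[1:] + [0.0],
)
```

The toy strategy simply shifts the previous horizon control and pads with zero, which mimics the warm-start idea behind (16).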

The results obtained when using strategy 1 for computing $\kappa $ are shown in the top row of Figure 1a–c. It is clearly visible that an increasing MPC horizon length has the positive effect that the MOMPC points produced by Algorithm 3 are located closer to the Pareto front ${\mathcal{J}}_{opt}^{(0,1)}\left({y}_{\circ}\right)$. This can be explained by the fact that a larger MPC horizon allows for a better prediction of the future behavior of the system dynamics. However, we see a clustering of MOMPC points in the middle of the Pareto front, which improves only slightly as the MPC horizon length is increased. So it is not possible to obtain the whole extent of the Pareto front by varying the initial control ${\overline{u}}_{0}\in {\mathcal{U}}_{opt,appr}^{0}\left({y}_{\circ}\right)$.

On the other hand, we can see the results of strategy 2 in the bottom row of Figure 1a–c. Let us first note that the inequality (15) is fulfilled in most steps of Algorithm 3 for all initial controls ${\overline{u}}_{0}\in {\mathcal{U}}_{opt,appr}^{0}\left({y}_{\circ}\right)$. Only for some initial controls ${\overline{u}}_{0}$, for which ${\widehat{J}}^{0}({\overline{u}}_{0},{y}_{\circ})$ is located on the top left part of the Pareto front ${\mathcal{J}}_{opt}^{0}\left({y}_{\circ}\right)$, that is, for initial controls corresponding to the part of the Pareto front where the main goal is to minimize ${\widehat{J}}_{1}^{0}(\cdot ,{y}_{\circ})$ almost regardless of the function values of ${\widehat{J}}_{2}^{0}(\cdot ,{y}_{\circ})$, is the condition (15) not fulfilled in some steps of the MOMPC algorithm.

The reason for this is the following: When using Algorithm 2 for computing a feedback $\kappa \in {\mathcal{U}}_{opt}^{({t}_{n-1}^{\ell},{t}_{n+1}^{\ell})}$, the resulting minimizer $\widehat{\alpha}$ of (13) represents the weighting of the cost functions ${\widehat{J}}^{({t}_{n-1}^{\ell},{t}_{n+1}^{\ell})}(\cdot ,{y}_{n}\left({t}_{n-1}^{\ell}\right))$ at the Pareto optimal point $\kappa \in {\mathcal{U}}_{opt}^{({t}_{n-1}^{\ell},{t}_{n+1}^{\ell})}\left({y}_{n}\left({t}_{n-1}^{\ell}\right)\right)$. If we start with a point ${\overline{u}}_{0}$ for which ${\widehat{J}}^{0}({\overline{u}}_{0},{y}_{\circ})$ is located on the top left part of the Pareto front ${\mathcal{J}}_{opt}^{0}\left({y}_{\circ}\right)$, the weighting of the cost functions will be $\widehat{\alpha}\approx {(1,0)}^{\top}$. Therefore, the descent direction ${q}^{n}\left(u\right)$ from Algorithm 2 will mostly point in the direction of the negative gradient of the first cost function, which might lead to (15) not being satisfied.

Looking at the results, we observe again that a larger MPC horizon length leads to a better result in the sense that the MOMPC points are closer to the Pareto front. In contrast to strategy 1, we do not see a clustering of MOMPC points in the middle of the Pareto front, but rather in its lower right part. This clustering improves as the MPC horizon length increases, so that the entire scale of the Pareto front is obtained for a horizon length of $\ell =99$.

The difference in the clustering behavior of strategy 1 compared to strategy 2 can be seen by looking at the following relation: Every initial control vector ${\overline{u}}_{0}\in {\mathcal{U}}_{opt,appr}^{0}\left({y}_{\circ}\right)$ is the solution to a weighted-sum problem

for some weight vector ${\alpha}^{init}\in {\mathbb{R}}_{\ge 0}^{2}$ with ${\alpha}_{1}^{init}+{\alpha}_{2}^{init}=1$; see, for example, Reference [12]. This weight vector can be determined easily when using the Euclidean reference point method to compute ${\mathcal{U}}_{opt,appr}^{0}\left({y}_{\circ}\right)$; cf. Reference ([19], Lemma 5). After executing Algorithm 3 we can check to which Pareto optimal point $\overline{y}={\widehat{J}}^{(0,1)}(\overline{u},{y}_{\circ})\in {\mathcal{J}}_{opt}^{(0,1)}\left({y}_{\circ}\right)$ the MOMPC point ${\widehat{J}}^{(0,1)}(\mu ,{y}_{\circ})$ has the smallest distance. Again, $\overline{u}$ is the solution to a weighted-sum problem

for a weight vector ${\alpha}^{end}\in {\mathbb{R}}_{\ge 0}^{2}$ with ${\alpha}_{1}^{end}+{\alpha}_{2}^{end}=1$, which can again be determined. The mapping ${\alpha}_{1}^{init}\mapsto {\alpha}_{1}^{end}$ is shown in Figure 2 for both strategies.
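The mapping ${\alpha}_{1}^{init}\mapsto {\alpha}_{1}^{end}$ amounts to a nearest-neighbour lookup in cost space; a minimal sketch, assuming the Pareto front is available as an array of cost vectors with their weighted-sum weights:

```python
import numpy as np

def end_weight(J_mompc, front_J, front_alpha1):
    """Return alpha_1^end: the first weighted-sum weight of the Pareto
    front point closest (in Euclidean distance) to the MOMPC point.
    front_J: (k, 2) array of front values J_hat^{(0,1)};
    front_alpha1: (k,) array of the corresponding weights alpha_1."""
    dists = np.linalg.norm(np.asarray(front_J) - np.asarray(J_mompc), axis=1)
    return float(front_alpha1[int(np.argmin(dists))])
```

Applying this to every MOMPC point yields the data plotted in Figure 2.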

Since the mapping displays the first component of the weight vector, a small value in the plot corresponds to a small weighting of the first cost function (the tracking term) and a large weighting of the second cost function (the control costs), and vice versa.

Note that the ‘ideal’ result of this mapping would be the identity, since this would imply that the weight ${\alpha}^{init}$ of the initial control ${\overline{u}}_{0}$ remains constant throughout the MOMPC algorithm.

From this plot the clustering of the MOMPC points for both strategies can be deduced. For strategy 1, the clustering of the points happens at a weight of ${\alpha}_{1}^{end}\in [0.7,0.8]$ for all MPC horizon lengths. Up to an initial weight of ${\alpha}_{1}^{init}\approx 0.8$, all initial controls lead more or less to the same result of the MOMPC algorithm. Only for initial controls with an initial weight of ${\alpha}_{1}^{init}>0.8$ do we observe that the upper part of the Pareto front can be reached.

For strategy 2, the clustering in the lower part of the Pareto front for MPC horizon lengths of $\ell =25$ and $\ell =50$ can be deduced: up to an initial weight of ${\alpha}_{1}^{init}\approx 0.8$, the value of ${\alpha}_{1}^{end}$ stays below 0.1, which means that the resulting MOMPC point lies in a region of the Pareto front where the cost function ${\widehat{J}}_{2}^{(0,1)}(\cdot ,{y}_{\circ})$ is weighted much more heavily than the first cost function ${\widehat{J}}_{1}^{(0,1)}(\cdot ,{y}_{\circ})$. Moreover, for these two MPC horizons we can see a clear cut-off behavior at a value of 0.8: if ${\alpha}_{1}^{init}$ is larger than 0.8, the value of ${\alpha}_{1}^{end}$ is larger than 0.9. For an MPC horizon length of $\ell =99$, we do not observe such a clear cut-off behavior. Although the plot is still far from being the identity, it is visible that one can control the outcome of the MOMPC algorithm more precisely by varying the initial control.

In conclusion, we can say that for strategy 2 a larger MPC horizon leads to a better distribution of the MOMPC points on the Pareto front, and that it is possible to approximate the full scale of the Pareto front by choosing different initial controls ${\overline{u}}_{0}\in {\mathcal{U}}_{opt,appr}^{0}\left({y}_{\circ}\right)$. Strategy 1, however, seems not to be well-suited for this problem framework, since it is not possible to reach the whole extent of the Pareto front by choosing different initial controls. On the contrary, the MOMPC algorithm steers almost all initial controls to the same region of the Pareto front, independently of the MPC horizon length.

**Test 2.** Now we want to show why the use of MPC is needed in the multiobjective context. For this we consider the following setup: Imagine that we want to compute the Pareto front ${\mathcal{J}}_{opt}^{(0,1)}\left({y}_{\circ}\right)$. However, only a prediction of the advection field is available, which is given by

In the end it turns out that the prediction of the advection field is not very accurate, and the true advection is given by the function $c$ in (14), that is, it deviates from the prediction ${c}^{pred}$ after $t=0.5$.

Denote by ${U}_{opt,pred}^{(0,1)}\left({y}_{\circ}\right)$ the Pareto set of the problem (MOCP) using the predicted advection ${c}^{pred}$. Again we used the Euclidean reference point method to compute approximations both of ${U}_{opt}^{(0,1)}\left({y}_{\circ}\right)$ and of ${U}_{opt,pred}^{(0,1)}\left({y}_{\circ}\right)$.

Figure 3a displays the Pareto front ${\mathcal{J}}_{opt}^{(0,1)}\left({y}_{\circ}\right)$ together with the function values ${\widehat{J}}^{(0,1)}({U}_{opt,pred}^{(0,1)}\left({y}_{\circ}\right),{y}_{\circ})$. One can clearly see that the controls in ${U}_{opt,pred}^{(0,1)}\left({y}_{\circ}\right)$ are far from being Pareto optimal for the problem with advection $c$, especially in the upper part of the Pareto front. This can be explained by the fact that the changing advection completely alters the strategy required to come close to the desired temperature, which is not captured by the open-loop problem using the prediction ${c}^{pred}$.

We compare this to the results obtained by using the MOMPC Algorithm 3 together with strategy 2 from the first test, where we assumed that the true advection is known to the open-loop problems in the MPC algorithm. This is justified since the MPC horizons are small enough to allow a precise prediction of the future behavior of the advection field. As an example, we look at an MPC horizon length of $\ell =50$ in Figure 3b and see that, in contrast to the predicted Pareto optimal controls ${U}_{opt,pred}^{(0,1)}\left({y}_{\circ}\right)$, the MOMPC feedback control $\mu $ comes quite close to the true Pareto front. This underlines that the use of MOMPC is necessary in situations in which only predictions of the data are available.

#### 4.2. Example 2

The parameter choices of the previous experiment imply that a feedback $\kappa $ fulfilling (7) cannot exist. In this section we present numerical results for a setup that allows such a feedback in principle.

To this end we choose the following parameter values: The diffusion coefficient is set to $d=0.5$, the reaction coefficient to $r=0.5$, and the advection field is chosen as

As initial condition we choose

which is a smooth approximation of the discontinuous function

It can be shown that $y=0$ is a stable steady state of the PDE (1) for the control $u=0$. Therefore, we choose the desired temperatures ${y}_{1}^{1}={y}_{2}^{1}=0\in {L}^{2}(0,1;H)$ and ${y}_{1}^{2}={y}_{2}^{2}=0\in H$ together with the parameter values ${\sigma}_{1}^{1}={\sigma}_{2}^{1}=1$, ${\sigma}_{1}^{2}={\sigma}_{2}^{2}=0.1$ and ${\sigma}_{1}^{3}={\sigma}_{2}^{3}=0.1$. The linear functionals in the cost functions are given by

Thus, both cost functions measure the deviation of the state from the steady state $y=0$ together with some control costs. To make the cost functions conflicting, the linear functionals are chosen such that the first cost function penalizes deviations from the steady state mostly in the left half of the domain, whereas the second cost function penalizes the deviation in the right half of the domain.
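The linear functionals themselves are elided above; one way to realize the described left/right weighting, assuming they act as multiplication by smoothed indicator functions of the two halves of the unit square (the logistic smoothing and its steepness are assumptions for illustration):

```python
import numpy as np

def half_domain_weights(x1, steepness=50.0):
    """Smoothed indicators of the left (x1 < 0.5) and right (x1 > 0.5)
    halves of the domain via a logistic ramp (assumed smoothing)."""
    right = 1.0 / (1.0 + np.exp(-steepness * (x1 - 0.5)))
    return 1.0 - right, right

def tracking_terms(y, x1):
    """Weighted L^2-type deviations from the steady state y = 0 on a
    uniform grid: the first value feeds J_1 (left half of the domain),
    the second J_2 (right half)."""
    left, right = half_domain_weights(x1)
    dx = 1.0 / (len(x1) - 1)
    return dx * np.sum(left * y**2), dx * np.sum(right * y**2)
```

Because the two weights sum to one pointwise, a deviation concentrated in the left half drives mainly the first term, making the two cost functions conflict as described.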

If we run Algorithm 3 with this parameter setting, we still observe that in many iterations there is no feedback $\kappa $ fulfilling (7), especially for the larger MPC horizons of $\ell =25$ and $\ell =50$. The reason for this is that for an MPC horizon of $\ell =25$ or $\ell =50$ the initial Pareto front ${\mathcal{J}}_{opt}^{0}\left({y}_{\circ}\right)$ is quite close to the Pareto front ${\mathcal{J}}_{opt}^{(0,1)}\left({y}_{\circ}\right)$, or even dominates it in some points; see Figure 4.

Since the MOMPC points cannot perform better than any point on the Pareto front ${\mathcal{J}}_{opt}^{(0,1)}\left({y}_{\circ}\right)$, it is not possible to achieve the performance result from Theorem 3 in those parts. As all the other assumptions of Theorem 3 are satisfied, the reason for not achieving the performance result must be that it is not possible to find a feedback $\kappa $ fulfilling (7) in all steps of the algorithm. This is what we observe numerically.

However, Figure 5 shows that we still obtain the performance result from Theorem 3 for many points, although a feedback $\kappa $ cannot be found for all of them. The black lines indicate which performance value $\widehat{J}$ corresponds to which initial cost. Therefore, if a black line points to the top right (starting from the red point), the performance result is satisfied for this point. We observe that the performance result holds for all points for an MPC horizon length of $\ell =13$ and for most points for $\ell =25$. Even for $\ell =50$ there are some points in the middle of the Pareto front for which the performance result holds, although there are only very few steps in which a feedback $\kappa $ fulfilling (7) can be found.

Figure 6 indicates that this performance behavior does not change drastically if we increase the time horizon ${t}_{\mathsf{end}}$.

The reason for this is that at time ${t}_{\mathsf{end}}=1$ the steady state $y=0$ is almost reached, so that almost no further costs are incurred after this point.

If we compare the MOMPC results obtained from Algorithm 3 with the Pareto front ${\mathcal{J}}_{opt}^{(0,1)}\left({y}_{\circ}\right)$ of the open-loop problem in Figure 7, we observe that increasing the MPC horizon has two positive effects on the results: Firstly, the MOMPC points are located closer to the Pareto front, and secondly, they are spread more evenly over the entire Pareto front. Already for an MPC horizon length of $\ell =50$ we can see that the Pareto front is almost perfectly approximated.