Open Access Article

*Axioms* **2018**, *7*(2), 31; doi:10.3390/axioms7020031

Final Value Problems for Parabolic Differential Equations and Their Well-Posedness

^{1} Unit of Epidemiology and Biostatistics, Aalborg University Hospital, Hobrovej 18-22, DK-9000 Aalborg, Denmark
^{2} Department of Mathematics, Aalborg University, Skjernvej 4A, DK-9220 Aalborg Øst, Denmark
^{*} Correspondence: jjohnsen@math.aau.dk; Tel.: +45-9940-8847
^{†} These authors contributed equally to this work.

Received: 29 March 2018 / Accepted: 28 April 2018 / Published: 9 May 2018

## Abstract


This article concerns the basic understanding of parabolic final value problems, and a large class of such problems is proved to be well posed. The clarification is obtained via explicit Hilbert spaces that characterise the possible data, giving existence, uniqueness and stability of the corresponding solutions. The data space is given as the graph normed domain of an unbounded operator occurring naturally in the theory. It induces a new compatibility condition, which relies on the fact, shown here, that analytic semigroups are always invertible in the class of closed operators. The general set-up is evolution equations for Lax–Milgram operators in spaces of vector distributions. As a main example, the final value problem of the heat equation on a smooth open set is treated, and non-zero Dirichlet data are shown to require a non-trivial extension of the compatibility condition by addition of an improper Bochner integral.

**Keywords:** parabolic boundary problem; final value; compatibility condition; well posed; non-selfadjoint; hyponormal

**MSC:** 35A01; 47D06

## 1. Introduction

In this article, we establish well-posedness of final value problems for a large class of parabolic differential equations. Seemingly, this fills a longstanding gap in the comprehension of such problems.

Taking the heat equation as a first example, we address the problem of characterising the functions $u(t,x)$ that, in a ${C}^{\infty}$-smooth bounded open set $\mathsf{\Omega}\subset {\mathbb{R}}^{n}$ with boundary $\partial \mathsf{\Omega}$, fulfil the following equations, in which $\Delta ={\partial}_{{x}_{1}}^{2}+\cdots +{\partial}_{{x}_{n}}^{2}$ denotes the Laplacian:

$$\left\{\begin{aligned} {\partial}_{t}u(t,x)-\Delta u(t,x)&=f(t,x) &&\text{for}\ t\in\,]0,T[\,,\ x\in \mathsf{\Omega},\\ u(t,x)&=g(t,x) &&\text{for}\ t\in\,]0,T[\,,\ x\in \partial \mathsf{\Omega},\\ u(T,x)&={u}_{T}(x) &&\text{for}\ x\in \mathsf{\Omega}.\end{aligned}\right.\tag{1}$$

Motivation could be given by imagining a nuclear power plant hit by a power failure at time $t=0$. Once power is regained at time $t=T$, and a measurement of the reactor temperature ${u}_{T}\left(x\right)$ is obtained, it is of course desirable to calculate backwards in time to answer the question: were the temperatures $u(t,x)$ around some earlier time ${t}_{0}<T$ high enough to cause a meltdown of the fuel rods?

We provide here a theoretical analysis of such problems and prove that they are well-posed, that is, they have existence, uniqueness and stability of solutions $u\in X$ for given data $(f,g,{u}_{T})\in Y$, in certain normed spaces X, Y to be specified below. The results were announced without proofs in the short note [1].

Although well-posedness is of decisive importance for the interpretation and accuracy of numerical schemes, which one would use in practice, such a theory has seemingly not been worked out before. Explained roughly, our method is to provide a useful structure on the reachable set for a general class of parabolic differential equations.

#### 1.1. Background

Let us first describe the case $f=0$, $g=0$. Then the mere heat equation $({\partial}_{t}-\Delta )u=0$ is clearly solved for all $t\in \mathbb{R}$ by the function $u(t,x)={e}^{(T-t)\lambda}v\left(x\right)$, if $v\left(x\right)$ is an eigenfunction of the Dirichlet realization $-{\Delta}_{D}$ of the Laplace operator with eigenvalue $\lambda $.

In view of this, the homogeneous final value problem (1) would obviously have the above u as a basic solution if, coincidentally, the final data ${u}_{T}\left(x\right)$ were given as the eigenfunction $v\left(x\right)$. The theory below includes the set $\mathcal{B}$ of such basic solutions u together with its linear hull $\mathcal{E}=span\mathcal{B}$ and a certain completion $\overline{\mathcal{E}}$.

It is easy to describe $\mathcal{E}$ in terms of the eigenvalues $0<{\lambda}_{1}\le {\lambda}_{2}\le \dots $ and the associated ${L}_{2}\left(\mathsf{\Omega}\right)$-orthonormal basis ${e}_{1},{e}_{2},\dots $ of eigenfunctions of $-{\Delta}_{D}$: corresponding to final data ${u}_{T}$ in $\operatorname{span}\left({e}_{j}\right)$, which are the ${u}_{T}$ having finite expansions ${u}_{T}\left(x\right)={\sum}_{j}({u}_{T}\,|\,{e}_{j}){e}_{j}\left(x\right)$ in ${L}_{2}\left(\mathsf{\Omega}\right)$, the space $\mathcal{E}$ consists of the solutions $u(t,x)$ given by the finite sums

$$u(t,x)={\sum}_{j}{e}^{(T-t){\lambda}_{j}}({u}_{T}\,|\,{e}_{j}){e}_{j}\left(x\right).\tag{2}$$

Moreover, at time $t=0$ there is, because of the finiteness, a vector $u(0,x)$ in ${L}_{2}\left(\mathsf{\Omega}\right)$ that trivially fulfills

$${\|u(0,\cdot)\|}^{2}={\sum}_{j}{e}^{2T{\lambda}_{j}}{\left|({u}_{T}\,|\,{e}_{j})\right|}^{2}<\infty .\tag{3}$$

However, when the summation is extended to all $j\in \mathbb{N}$, condition (3) becomes very strong, as it is satisfied only for special ${u}_{T}$: Weyl's law for the counting function, cf. ([2], Chapter 6.4), entails the well-known ${\lambda}_{j}=\mathcal{O}\left({j}^{2/n}\right)$, so a single term in (3) yields $|({u}_{T}\,|\,{e}_{j})|\le c\exp\left(-T{j}^{2/n}\right)$; i.e., the ${L}_{2}$-coordinates of such ${u}_{T}$ decay rapidly for $j\to \infty $.
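
In detail, the decay bound follows by comparing a single term of the sum in (3) with the whole sum; a short worked estimate, where $c'$ is the constant from the Weyl lower bound ${\lambda}_{j}\ge c'{j}^{2/n}$ (absorbed into the exponent in the statement above):

```latex
e^{2T\lambda_j}\,\big|(u_T\,|\,e_j)\big|^2
  \;\le\; \sum_{k=1}^{\infty} e^{2T\lambda_k}\,\big|(u_T\,|\,e_k)\big|^2
  \;=\; \|u(0,\cdot)\|^2 ,
\qquad\text{so}\qquad
\big|(u_T\,|\,e_j)\big|
  \;\le\; \underbrace{\|u(0,\cdot)\|}_{=\,c}\; e^{-T\lambda_j}
  \;\le\; c\, e^{-c'\,T j^{2/n}} .
```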

Condition (3) has been known at least since the 1950s; the works of John [3] and Miranker [4] are the earliest we know of. While many authors have avoided an analysis of it, Payne found it scientifically intolerable, because ${u}_{T}$ is likely to be imprecisely measured; cf. his treatise [5] on the variety of methods applied to (1) until the mid-1970s.

More recently, e.g., Isakov [6] emphasized the classical observation, found already in [4], that (2) implies a phenomenon of instability. Indeed, the sequence of final data ${u}_{T,k}={e}_{k}$ has constant length 1, yet via (2) it gives the initial states ${u}_{k}(0,x)={e}^{T{\lambda}_{k}}{e}_{k}\left(x\right)$ having ${L}_{2}$-norms ${e}^{T{\lambda}_{k}}$, which clearly blow up rapidly for $k\to \infty $.
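
The blow-up is simple to reproduce numerically. A minimal sketch, assuming the hypothetical one-dimensional model $\mathsf{\Omega}=\,]0,1[\,$, whose Dirichlet eigenvalues are ${\lambda}_{k}={(k\pi)}^{2}$; already for $k=3$ the initial norm exceeds ${10}^{38}$:

```python
import math

T = 1.0  # length of the time interval (illustrative choice)

def initial_norm(k: int, T: float = T) -> float:
    """L2-norm of u_k(0) when the final data are u_{T,k} = e_k:
    by (2) this is exp(T * lambda_k), with lambda_k = (k*pi)**2 on ]0,1[."""
    return math.exp(T * (k * math.pi) ** 2)

for k in (1, 2, 3):
    print(k, initial_norm(k))   # grows from ~1.9e4 past ~3.4e38
```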

This ${L}_{2}$-instability cannot be explained away, of course, but it does not rule out that (1) is well-posed. It rather indicates that the ${L}_{2}$-norm is an insensitive choice for (1).

In fact, here there is an analogy with the classical stationary Dirichlet problem

$$-\Delta u=f\quad \text{in}\ \mathsf{\Omega},\qquad u=g\quad \text{on}\ \partial \mathsf{\Omega}.\tag{4}$$

This is unsolvable for $u\in {C}^{2}(\overline{\mathsf{\Omega}})$ for certain $f\in {C}^{0}(\overline{\mathsf{\Omega}})$, $g\in {C}^{0}(\partial \mathsf{\Omega})$: Günther proved prior to 1934, cf. ([7], p. 85), that when $f\left(x\right)=\chi \left(x\right)(3{x}_{3}^{2}|x{|}^{-2}-1)/\log|x|$ for some radial cut-off function $\chi \in {C}_{0}^{\infty}\left(\mathsf{\Omega}\right)$ equal to 1 around the origin, $\mathsf{\Omega}$ being the unit ball of ${\mathbb{R}}^{3}$, then $f\in {C}^{0}(\overline{\mathsf{\Omega}})$, while the convolution $w=\frac{1}{4\pi \left|x\right|}\ast f$ is in ${C}^{1}(\overline{\mathsf{\Omega}})$ but not in ${C}^{2}$ at $x=0$; so $w\in {C}^{1}(\overline{\mathsf{\Omega}})\setminus {C}^{2}(\overline{\mathsf{\Omega}})$. Yet $w$ is the unique ${C}^{1}(\overline{\mathsf{\Omega}})$-solution of (4) in the distribution space ${\mathcal{D}}^{\prime}\left(\mathsf{\Omega}\right)$ when $g$ is given as $g=w{|}_{\partial \mathsf{\Omega}}$. Thus the ${C}^{k}$-scales constitute an insensitive choice for (4). Nonetheless, replacing ${C}^{2}(\overline{\mathsf{\Omega}})$ by its completion ${H}^{1}\left(\mathsf{\Omega}\right)$ in the Sobolev norm $\big({\sum}_{\left|\alpha \right|\le 1}{\int}_{\mathsf{\Omega}}|{D}^{\alpha}u{|}^{2}\,dx\big)^{1/2}$, it is classical that (4) is well posed with $u$ in ${H}^{1}\left(\mathsf{\Omega}\right)$.

To obtain similarly well-adapted spaces for (1) with $f=0$, $g=0$, one could base the analysis on (3). Indeed, along with the above space $\mathcal{E}$ of basic solutions, a norm $|||{u}_{T}|||$ on the space of final data ${u}_{T}\in \operatorname{span}\left({e}_{j}\right)$ can be defined by (3), leading to the norm $|||{u}_{T}|||=\big({\sum}_{j=1}^{\infty}{e}^{2T{\lambda}_{j}}|({u}_{T}\,|\,{e}_{j})|^{2}\big)^{1/2}$ on the ${u}_{T}$ that correspond to solutions $u$ in the completion $\overline{\mathcal{E}}$. This would give well-posedness of (1) with $u\in \overline{\mathcal{E}}$; cf. Remark 16.
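
In a truncated expansion the norm $|||{u}_{T}|||$ is easy to evaluate. The sketch below (again with the hypothetical one-dimensional eigenvalues ${\lambda}_{j}={(j\pi)}^{2}$) contrasts final data whose coefficients decay like ${e}^{-T{\lambda}_{j}}/j$, for which the partial sums of (3) stabilise, with coefficients $1/j$ that are merely square-summable, for which they blow up:

```python
import math

T = 0.01
lam = lambda j: (j * math.pi) ** 2   # hypothetical 1-D Dirichlet eigenvalues

def triple_norm_sq(coeff, N):
    """Partial sum of sum_j e^{2*T*lam_j} |(u_T|e_j)|^2, cf. |||u_T|||^2."""
    return sum(math.exp(2 * T * lam(j)) * coeff(j) ** 2 for j in range(1, N + 1))

admissible   = lambda j: math.exp(-T * lam(j)) / j   # decays fast enough: terms are 1/j^2
inadmissible = lambda j: 1.0 / j                     # square-summable, but not in the data space

print(triple_norm_sq(admissible, 10), triple_norm_sq(admissible, 40))     # ~1.55, ~1.62
print(triple_norm_sq(inadmissible, 10), triple_norm_sq(inadmissible, 40)) # explodes
```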

But the present paper goes much beyond this. For one thing, we have freed the discussion from $-{\Delta}_{D}$ and its specific eigenvalue distribution by using sesquilinear forms, cf. the Lax–Milgram lemma, which has allowed us to extend the proofs to a general class of elliptic operators $A$.

Secondly we analyse the fully inhomogeneous problem (1) for general f, g in Section 5. In this situation well-posedness is not just a matter of choosing the norm on the data $(f,g,{u}_{T})$ suitably (as one might think from the above $\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|{u}_{T}\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|$). In fact, prior to this choice, one has to restrict the $(f,g,{u}_{T})$ to a subspace characterised by certain compatibility conditions. While such conditions are well known in the theory of parabolic boundary problems, they are shown here to have a new and special form for final value problems.

Indeed, the compatibility conditions stem from the unbounded operator ${u}_{T}\mapsto u\left(0\right)$, which maps the final data to the corresponding initial state in the presence of the source term f. The fact that this operator is well defined, and that its domain endowed with the graph norm yields the data space, is the leitmotif of this article.

#### 1.2. The Abstract Final Value Problem

Let us outline our analysis for a Lax–Milgram operator $A$ defined in $H$ from a $V$-elliptic sesquilinear form $a(\cdot,\cdot)$ in a Gelfand triple, i.e., in a set-up of three Hilbert spaces $V\hookrightarrow H\hookrightarrow {V}^{\ast}$ having norms denoted $\|\cdot\|$, $|\cdot|$ and ${\|\cdot\|}_{\ast}$, and where $V$ is the form domain of $a$.

In this framework, we consider the following general final value problem: given data

$$f\in {L}_{2}(0,T;{V}^{\ast}),\qquad {u}_{T}\in H,\tag{5}$$

determine the $V$-valued vector distributions $u\left(t\right)$ on $\,]0,T[\,$, that is, the $u\in {\mathcal{D}}^{\prime}(0,T;V)$, fulfilling

$$\left\{\begin{aligned} {\partial}_{t}u+Au&=f &&\text{in}\ {\mathcal{D}}^{\prime}(0,T;{V}^{\ast}),\\ u\left(T\right)&={u}_{T} &&\text{in}\ H.\end{aligned}\right.\tag{6}$$

Classically, a wealth of parabolic Cauchy problems with homogeneous boundary conditions has been treated efficiently with the triples $(H,V,a)$ and the ${\mathcal{D}}^{\prime}(0,T;{V}^{\ast})$ set-up in (6). For this the reader may consult the works of Lions and Magenes [8], Tanabe [9], Temam [10], and Amann [11]. Recently, e.g., Almog, Grebenkov, Helffer and Henry studied variants of the complex Airy operator via such triples [12,13,14], and our results should at least extend to final value problems for those of their realisations that have non-empty spectrum.

To compare (6) with the analogous Cauchy problem, we recall that whenever ${u}^{\prime}+Au=f$ is solved under the initial condition $u\left(0\right)={u}_{0}\in H$, for some $f\in {L}_{2}(0,T;{V}^{\ast})$, there is a unique solution $u$ in the Banach space

$$\begin{aligned} X&={L}_{2}(0,T;V)\cap C([0,T];H)\cap {H}^{1}(0,T;{V}^{\ast}),\\ {\|u\|}_{X}&={\Big({\int}_{0}^{T}{\|u\left(t\right)\|}^{2}\,dt+\sup_{0\le t\le T}{\left|u\left(t\right)\right|}^{2}+{\int}_{0}^{T}\big({\|u\left(t\right)\|}_{\ast}^{2}+{\|{u}^{\prime}\left(t\right)\|}_{\ast}^{2}\big)\,dt\Big)}^{1/2}.\end{aligned}\tag{7}$$

For (6) it would thus be natural to envisage solutions $u$ in the same space $X$. This turns out to be true, but only under substantial further conditions on the data $(f,{u}_{T})$.

To formulate these, we exploit that $-A$ generates an analytic semigroup ${e}^{-tA}$ in $\mathbb{B}\left(H\right)$. This is crucial for the entire article, because analytic semigroups are always invertible in the class of closed operators, as we show in Proposition 1. We denote the inverse by ${e}^{tA}$, consistently with the case in which $-A$ generates a group,

$${\left({e}^{-tA}\right)}^{-1}={e}^{tA}.\tag{8}$$

Its domain is the Hilbert space $D\left({e}^{tA}\right)=R\left({e}^{-tA}\right)$, normed by $\|u\|={\left(|u{|}^{2}+|{e}^{tA}u{|}^{2}\right)}^{1/2}$. In Proposition 10 we show that a non-empty spectrum, $\sigma \left(A\right)\ne \varnothing $, yields the strict inclusions

$$D\left({e}^{{t}^{\prime}A}\right)\subsetneq D\left({e}^{tA}\right)\subsetneq H\qquad \text{for}\ 0<t<{t}^{\prime}.\tag{9}$$

For $t=T$ these domains play a crucial role in the well-posedness result below, cf. (11), where also the full yield ${y}_{f}$ of the source term $f$ on the system appears, namely

$${y}_{f}={\int}_{0}^{T}{e}^{-(T-s)A}f\left(s\right)\,ds.\tag{10}$$

The map $f\mapsto {y}_{f}$ takes values in $H$, and it is a continuous surjection of ${L}_{2}(0,T;{V}^{\ast})$ onto $H$.

**Theorem 1.** For the final value problem (6) to have a solution $u$ in the space $X$ in (7), it is necessary and sufficient that the data $(f,{u}_{T})$ belong to the subspace $Y$ of ${L}_{2}(0,T;{V}^{\ast})\oplus H$ defined by the condition

$${u}_{T}-{\int}_{0}^{T}{e}^{-(T-t)A}f\left(t\right)\,dt\ \in\ D\left({e}^{TA}\right).\tag{11}$$

Moreover, in $X$ the solution $u$ is unique and depends continuously on the data $(f,{u}_{T})$ in $Y$, that is, ${\|u\|}_{X}\le c{\|(f,{u}_{T})\|}_{Y}$ holds when $Y$ is given the graph norm

$${\|(f,{u}_{T})\|}_{Y}={\left(|{u}_{T}{|}^{2}+{\int}_{0}^{T}{\|f\left(t\right)\|}_{\ast}^{2}\,dt+{\Big|{e}^{TA}\Big({u}_{T}-{\int}_{0}^{T}{e}^{-(T-t)A}f\left(t\right)\,dt\Big)\Big|}^{2}\right)}^{1/2}.\tag{12}$$

(The full statements are found in Theorems 7 and 8 below.)
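
In a finite-dimensional model every vector lies in $D({e}^{TA})$, so the membership condition is vacuous there; what a matrix sketch can still show is that the last term of the graph norm recovers the initial state, $u(0)={e}^{TA}({u}_{T}-{y}_{f})$, and amplifies errors in the final data by the factor ${e}^{T{\lambda}_{\max}}$. All choices below (a diagonal $A$, a constant-in-time $f$) are hypothetical:

```python
import numpy as np

T = 1.0
lam = np.array([1.0, 5.0, 20.0])       # eigenvalues of a hypothetical diagonal A
f = np.array([1.0, 1.0, 1.0])          # constant-in-time source term

# y_f = int_0^T e^{-(T-s)A} f ds = A^{-1} (I - e^{-TA}) f  for constant f
yf = (1.0 - np.exp(-T * lam)) / lam * f

u0 = np.array([1.0, 1.0, 1.0])         # some initial state
uT = np.exp(-T * lam) * u0 + yf        # the final state it produces (Duhamel)

recovered = np.exp(T * lam) * (uT - yf)       # e^{TA}(u_T - y_f)
print(recovered)                              # reproduces u0 = [1, 1, 1]

# a final-data error of size 1e-9 in the stiff coordinate is blown up by e^{20}:
error = np.exp(T * lam) * (uT + np.array([0.0, 0.0, 1e-9]) - yf) - recovered
print(error[2])                               # ~ 0.49
```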

Condition (11) is a fundamental novelty for the above class of final value problems, but more generally it also gives an important clarification for parabolic differential equations.

As for its nature, we note that the data $(f,{u}_{T})$ fulfilling (11) form a Hilbert(-able) space Y embedded into ${L}_{2}(0,T;{V}^{\ast})\oplus H$, in view of its norm in (12).

Using the above ${y}_{f}$, (12) is seen to be the graph norm of $(f,{u}_{T})\mapsto {e}^{TA}({u}_{T}-{y}_{f})$, which in terms of $\mathsf{\Phi}(f,{u}_{T})={u}_{T}-{y}_{f}$ is the unbounded operator ${e}^{TA}\mathsf{\Phi}$ from ${L}_{2}(0,T;{V}^{\ast})\oplus H$ to H. As (11) means that the operator ${e}^{TA}\mathsf{\Phi}$ must be defined at $(f,{u}_{T})$, the space Y is its domain. Thus ${e}^{TA}\mathsf{\Phi}$ is a key ingredient in the rigorous treatment of (6).

The role of ${e}^{TA}\mathsf{\Phi}$ is easy to elucidate in control theoretic terms: its value ${e}^{TA}\mathsf{\Phi}(f,{u}_{T})$ simply equals the particular initial state $u\left(0\right)$ which is steered by f to the final state $u\left(T\right)={u}_{T}$ at time T; cf. (13) below.

Because of ${e}^{-(T-t)A}$ and the integral over $[0,T]$, condition (11) involves operators that are non-local in both space and time, an inconvenient aspect that is exacerbated by the use of the abstract domain $D\left({e}^{TA}\right)$, which for longer lengths $T$ of the time interval gives increasingly stricter conditions; cf. (9).

Anyhow, we propose to regard (11) as a compatibility condition on the data $(f,{u}_{T})$, and thus we generalise the notion of compatibility.

For comparison we recall that Grubb and Solonnikov [15] made a systematic investigation of a large class of initial-boundary problems of parabolic (pseudo-)differential equations and worked out compatibility conditions, which are necessary and sufficient for well-posedness in full scales of anisotropic ${L}_{2}$-Sobolev spaces. Their conditions are explicit and local at the curved corner $\partial \mathsf{\Omega}\times \left\{0\right\}$, except for half-integer values of the smoothness $s$ that were shown to require so-called coincidence, which is expressed in integrals over the product of the two boundaries $\left\{0\right\}\times \mathsf{\Omega}$ and $\,]0,T[\,\times \partial \mathsf{\Omega}$; hence it is also a non-local condition.

However, while the conditions of Grubb and Solonnikov [15] are decisive for the solution’s regularity, condition (11) is crucial for the existence question; cf. the theorem.

Previously, uniqueness was shown by Amann ([11], Section V.2.5.2) in a t-dependent set-up, but injectivity of $u\left(0\right)\mapsto u\left(T\right)$ was proved much earlier for problems with t-dependent sesquilinear forms by Lions and Malgrange [16].

Showalter [17] attempted to characterise the possible ${u}_{T}$ in terms of Yosida approximations for $f=0$ and $A$ having half-angle $\frac{\pi}{4}$. As an ingredient, invertibility of analytic semigroups was claimed in [17] for such $A$, but the proof was flawed, as $A$ can have half-angle $\pi /4$ even if ${A}^{2}$ is not accretive; cf. our example in Remark 9.

Theorem 1 is proved largely by comparing with the corresponding problem ${u}^{\prime}+Au=f$, $u\left(0\right)={u}_{0}$. It is well known in functional analysis, cf. (7), that this is well-posed for $f\in {L}_{2}(0,T;{V}^{\ast})$, ${u}_{0}\in H$, with solutions $u\in X$. However, as shown below by adaptation of a classical argument, u is also in this set-up necessarily given by Duhamel’s principle, or the variation of constants formula, for the analytic semigroup ${e}^{-tA}$ in ${V}^{\ast}$,

$$u\left(t\right)={e}^{-tA}u\left(0\right)+{\int}_{0}^{t}{e}^{-(t-s)A}f\left(s\right)\,ds.\tag{13}$$
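
For a matrix generator, the variation-of-constants formula can be checked against a direct numerical integration; a minimal sketch, using a hypothetical non-selfadjoint $2\times 2$ matrix $A$ and a constant source, for which the Duhamel integral has the closed form ${A}^{-1}(I-{e}^{-tA})f$:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # hypothetical non-selfadjoint generator
u0 = np.array([1.0, -1.0])
f = np.array([0.5, 0.5])              # constant-in-time source
t = 0.7

# Duhamel: u(t) = e^{-tA} u(0) + int_0^t e^{-(t-s)A} f ds
duhamel = expm(-t * A) @ u0 + np.linalg.solve(A, (np.eye(2) - expm(-t * A)) @ f)

# independent check: integrate u' = f - A u with an ODE solver
sol = solve_ivp(lambda s, u: f - A @ u, (0.0, t), u0, rtol=1e-10, atol=1e-12)
print(np.max(np.abs(sol.y[:, -1] - duhamel)))   # tiny
```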

For $t=T$ this yields a bijective correspondence $u\left(0\right)\leftrightarrow u\left(T\right)$ between the initial and terminal states (in particular, backwards uniqueness of the solutions in the large class $X$), but this relies crucially on the previously mentioned invertibility of ${e}^{-tA}$; cf. (8).

As a consequence of (13) one finds the necessity of (11), as the difference $\mathsf{\Phi}(f,{u}_{T})={u}_{T}-{y}_{f}$ in (11) must equal the vector ${e}^{-TA}u\left(0\right)$, which obviously belongs to $D\left({e}^{TA}\right)$.

Moreover, (13) yields that $u\left(T\right)$ in a natural way consists of two parts that differ radically even when $A$ has nice properties:

First, ${e}^{-tA}u\left(0\right)$ solves the semi-homogeneous problem with $f=0$, and for $u\left(0\right)\ne 0$ there is the precise property in non-selfadjoint dynamics that the "height" function

$$h\left(t\right)=|{e}^{-tA}u\left(0\right)|\tag{14}$$

is strictly convex. This is shown in Proposition 4 when $A$ belongs to the broad class of hyponormal operators studied by Janas [18], or in case ${A}^{2}$ is accretive; then $h\left(t\right)$ is also strictly decreasing, with ${h}^{\prime}\left(0\right)\le -m\left(A\right)$, where $m\left(A\right)$ is the lower bound of $A$.

The stiffness inherent in strict convexity is supplemented by the fact that $u\left(T\right)={e}^{-TA}u\left(0\right)$ is confined to a dense, but very small space, as by a well-known property of analytic semigroups,

$$u\left(T\right)\in {\bigcap}_{n\in \mathbb{N}}D\left({A}^{n}\right).\tag{15}$$

Secondly, for ${u}_{0}=0$ the integral in (13) solves the initial value problem, and it has a rather different nature, since its final value ${y}_{f}$ in (10) ranges over all of $H$ as $f$ runs through ${L}_{2}(0,T;{V}^{\ast})$, hence can be anywhere in $H$, regardless of the Lax–Milgram operator $A$ in our set-up. This we show in Proposition 6 using a kind of control-theoretic argument in case $A$ is self-adjoint with compact inverse, and for general $A$ by means of the Closed Range Theorem, cf. Proposition 5.

As for the reachable set of the equation ${u}^{\prime}+Au=f$, or rather the possible final data ${u}_{T}$, these will be the sum of an arbitrary vector ${y}_{f}$ in $H$ and a term ${e}^{-TA}u\left(0\right)$ of great stiffness (cf. (15)). Thus ${u}_{T}$ can be prescribed in the affine space ${y}_{f}+D\left({e}^{TA}\right)$. As any ${y}_{f}\ne 0$ will push the dense set $D\left({e}^{TA}\right)\subset H$ in some arbitrary direction, $u\left(T\right)$ can be expected anywhere in $H$ (unless ${y}_{f}\in D\left({e}^{TA}\right)$ is known a priori). Consequently, neither $u\left(T\right)\in D\left({e}^{TA}\right)$ nor (15) can be expected to hold for ${y}_{f}\ne 0$, not even if its norm $|{y}_{f}|$ is much smaller than $|{e}^{-TA}u\left(0\right)|$.

As for final state measurements in real-life applications, we would like to prevent a misunderstanding by noting that (15) would be a valid expectation on $u\left(T\right)$ only under the peculiar circumstance that ${y}_{f}=0$ is known a priori to be an exact identity.

Indeed, even if f is so small that it is (quantitatively) insignificant for the time development of the system governed by ${u}^{\prime}+Au=f$, so that $f=0$ is a valid dynamical approximation, the (qualitative) mathematical expectation that $u\left(T\right)$ should fulfill (15) cannot be justified from such an approximation; cf. the above.

In view of this fundamental difference between the problems that are truly and merely approximately homogeneous, it seems that proper understanding of final value problems is facilitated by treating inhomogeneous problems from the very beginning.

#### 1.3. The Inhomogeneous Heat Problem

For (1) with general data $(f,g,{u}_{T})$ the above is applied with $A=-{\Delta}_{D}$, that is the Dirichlet realisation of the Laplacian. The results are analogous, but less simple to state and more demanding to obtain.

First of all, even though it is a linear problem, the compatibility condition (11) destroys the old trick of reducing to boundary data $g=0$, for when $w\in {H}^{1}$ fulfils $w=g\ne 0$ on the curved boundary $S=\phantom{\rule{0.166667em}{0ex}}]0,T[\phantom{\rule{0.166667em}{0ex}}\times \partial \mathsf{\Omega}$, then w lacks the regularity needed to test (11) on the data $(\tilde{f},0,{\tilde{u}}_{T})$ of the reduced problem; cf. (127) ff.

Secondly, it is therefore non-trivial to show that every $g\ne 0$ does give rise to an extra term ${z}_{g}$, in the sense that (11) is replaced by the compatibility condition

$${u}_{T}-{y}_{f}+{z}_{g}\in D\left({e}^{-T{\Delta}_{D}}\right).\tag{16}$$

Thirdly, due to the low regularity, it requires technical diligence to show that ${z}_{g}$, despite the singularity of $\Delta {e}^{(T-s){\Delta}_{D}}$ at $s=T$, has the structure of a single convergent improper Bochner integral, namely

$${z}_{g}=\fint_{0}^{T}\Delta {e}^{(T-s){\Delta}_{D}}{K}_{0}\,g\left(s\right)\,ds.\tag{17}$$

The reader is referred to Section 5 for the choice of the Poisson operator ${K}_{0}$ and for an account of the results on the fully inhomogeneous problem in (1), especially Theorem 10 and Corollary 3, which we sum up here:

**Theorem 2.** For given data $f\in {L}_{2}(0,T;{H}^{-1}\left(\mathsf{\Omega}\right))$, $g\in {H}^{1/2}\left(S\right)$, ${u}_{T}\in {L}_{2}\left(\mathsf{\Omega}\right)$, the final value problem (1) is solved by a function $u$ in ${X}_{1}={L}_{2}(0,T;{H}^{1}\left(\mathsf{\Omega}\right))\cap C([0,T];{L}_{2}\left(\mathsf{\Omega}\right))\cap {H}^{1}(0,T;{H}^{-1}\left(\mathsf{\Omega}\right))$ if and only if the data, in terms of (10) and (17), satisfy the compatibility condition (16). In the affirmative case, $u$ is uniquely determined in ${X}_{1}$ and has the representation, with all terms in ${X}_{1}$,

$$u\left(t\right)={e}^{t{\Delta}_{D}}{e}^{-T{\Delta}_{D}}({u}_{T}-{y}_{f}+{z}_{g})+{\int}_{0}^{t}{e}^{(t-s){\Delta}_{D}}f\left(s\right)\,ds-\fint_{0}^{t}\Delta {e}^{(t-s){\Delta}_{D}}{K}_{0}\,g\left(s\right)\,ds.\tag{18}$$

The unique solution $u$ in ${X}_{1}$ depends continuously on the data $(f,g,{u}_{T})$ in the Hilbert space ${Y}_{1}$, when these are given the norms in (130) and (158) below, respectively.

#### 1.4. Contents

Our presentation is aimed at describing methods and consequences in a concise way, readable for a broad audience within evolution problems. Therefore we have preferred a simple set-up, leaving many examples and extensions to future work, cf. Section 6.

Notation is given in Section 2 together with the set-up for Lax–Milgram operators and semigroup theory. Some facts on forward evolution problems are recalled in Section 3, followed by our analysis of abstract final value problems in Section 4. The heat equation and its final and boundary value problems are treated in Section 5. Section 6 concludes with remarks on the method’s applicability and notes on the literature of the problem.

## 2. Preliminaries

In the sequel specific constants will appear as ${C}_{j}$, $j\in \mathbb{N}$, whereas constants denoted by c may vary from place to place. ${\mathbf{1}}_{S}$ denotes the characteristic function of the set S.

Throughout, $V$ and $H$ denote two separable Hilbert spaces such that $V$ is algebraically, topologically and densely contained in $H$. Then there is a similar inclusion into the anti-dual ${V}^{\ast}$, i.e., the space of conjugate-linear functionals on $V$,

$$V\subseteq H\equiv {H}^{\ast}\subseteq {V}^{\ast}.\tag{19}$$

$(V,H,{V}^{\ast})$ is also known as a Gelfand triple. Denoting the norms by $\|\cdot\|$, $|\cdot|$ and ${\|\cdot\|}_{\ast}$, respectively, there are constants such that for all $v\in V$,

$${\|v\|}_{\ast}\le {C}_{1}\left|v\right|\le {C}_{2}\|v\|.\tag{20}$$

The inner product on $H$ is denoted by $(\cdot\,|\,\cdot)$, and the sesquilinear scalar product on ${V}^{\ast}\times V$ by ${\langle \cdot,\cdot\rangle}_{{V}^{\ast},V}$ or $\langle \cdot,\cdot\rangle $; it fulfils $\left|\langle u,v\rangle \right|\le {\|u\|}_{\ast}\|v\|$. The second inclusion in (19) means that for $u\in H$,

$$\langle u,v\rangle =(u\,|\,v)\qquad \text{for all}\ v\in V.\tag{21}$$

For a linear transformation $A$ in $H$, the domain is written $D\left(A\right)$, while $R\left(A\right)$ denotes its range and $Z\left(A\right)$ its null-space. $\rho \left(A\right)$, $\sigma \left(A\right)$ and $\nu \left(A\right)=\{\,(Au\,|\,u)\mid u\in D\left(A\right),\ \left|u\right|=1\,\}$ denote the resolvent set, spectrum and numerical range, respectively, while $m\left(A\right)=\inf \operatorname{Re}\nu \left(A\right)$ is the lower bound of $A$. Throughout, $\mathbb{B}\left(H\right)$ stands for the Banach space of bounded linear operators on $H$.

For a given Banach space $B$ and $T>0$, we denote by ${L}_{1}(0,T;B)$ the space of equivalence classes of functions $f:[0,T]\to B$ that are strongly measurable with ${\int}_{0}^{T}\|f\left(t\right)\|\,dt$ finite. For such $f$ the Bochner integral is denoted by ${\int}_{0}^{T}f\left(t\right)\,dt$, cf. [19]; it fulfils $\langle {\int}_{0}^{T}f\left(t\right)\,dt,\lambda \rangle ={\int}_{0}^{T}\langle f\left(t\right),\lambda \rangle \,dt$ for every functional $\lambda$ in the dual space ${B}^{\prime}$. Likewise ${L}_{2}(0,T;B)$ consists of the strongly measurable $f$ with finite norm $\big({\int}_{0}^{T}{\|f\left(t\right)\|}^{2}\,dt\big)^{1/2}$.

On an open set $\mathsf{\Omega}\subset {\mathbb{R}}^{n}$, $n\ge 1$, the space ${C}_{0}^{\infty}\left(\mathsf{\Omega}\right)$ consists of the infinitely differentiable functions having compact support in Ω; it is given the usual $\mathcal{L}\mathcal{F}$-topology, cf. [20,21]. The dual space of continuous linear functionals ${\mathcal{D}}^{\prime}\left(\mathsf{\Omega}\right)$ is the distribution space on Ω. We use the standard distribution theory as exposed by Grubb [20] and Hörmander [22].

More generally, the space of B-valued vector distributions is denoted by ${\mathcal{D}}^{\prime}(\mathsf{\Omega};B)$; it consists of the continuous linear maps $\mathsf{\Lambda}:{C}_{0}^{\infty}\left(\mathsf{\Omega}\right)\to B$, cf. [21], the value of which at $\phi \in {C}_{0}^{\infty}\left(\mathsf{\Omega}\right)$ is indicated by $\langle \phantom{\rule{0.166667em}{0ex}}\mathsf{\Lambda},\phantom{\rule{0.166667em}{0ex}}\phi \phantom{\rule{0.166667em}{0ex}}\rangle $. If Ω is the interval $\phantom{\rule{0.166667em}{0ex}}]0,T[\phantom{\rule{0.166667em}{0ex}}$ we also write ${\mathcal{D}}^{\prime}(\mathsf{\Omega};B)={\mathcal{D}}^{\prime}(0,T;B)$.

The Sobolev space ${H}^{1}(0,T;B)$ consists of the $u\in {\mathcal{D}}^{\prime}(0,T;B)$ for which both $u$ and ${u}^{\prime}$ belong to ${L}_{2}(0,T;B)$; it is normed by $\big({\int}_{0}^{T}\big({\|u\|}^{2}+{\|{u}^{\prime}\|}^{2}\big)\,dt\big)^{1/2}$. More generally, ${W}^{1,1}(0,T;B)$ is defined by replacing ${L}_{2}$ by ${L}_{1}$.

#### 2.1. Lax–Milgram Operators

Our main tool will be the Lax–Milgram operator associated to an elliptic sesquilinear form, cf. the set-up in ([20], Section 12.4). For the reader’s sake we review this, also to establish a few additional points from the proofs in [20].

We let $a(\cdot ,\cdot )$ be a bounded, V-elliptic sesquilinear form on V, i.e., there exist constants ${C}_{3},{C}_{4}>0$ such that for all $u,v\in V$

$$\begin{array}{c}\hfill \left|a(u,v)\right|\le {C}_{3}\parallel u\parallel \parallel v\parallel ,\phantom{\rule{2.em}{0ex}}Rea(v,v)\ge {C}_{4}{\parallel v\parallel}^{2}.\end{array}$$

Obviously, the adjoint sesquilinear form ${a}^{\ast}(u,v)=\overline{a(v,u)}$ inherits these properties (with the same ${C}_{3}$, ${C}_{4}$), and so does the “real part”, ${a}_{Re}(u,v)=\frac{1}{2}(a(u,v)+{a}^{\ast}(u,v))$. Since ${a}_{Re}(u,u)=Re\phantom{\rule{0.166667em}{0ex}}a(u,u)\ge {C}_{4}{\parallel u\parallel}^{2}>0$ for $u\ne 0$, the form ${a}_{Re}$ is an inner product on V, inducing the equivalent norm

$$\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|u\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|={a}_{Re}{(u,u)}^{1/2},\phantom{\rule{1.em}{0ex}}\mathrm{for}\phantom{\rule{4.pt}{0ex}}u\in V.$$

We recall that $s(u,v)={\left(Su\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)}_{V}$ gives a bijective correspondence between bounded sesquilinear forms $s(\cdot ,\cdot )$ on V and bounded operators $S\in \mathbb{B}\left(V\right)$, which is isometric since $\parallel S\parallel $ equals the operator norm of the sesquilinear form $\left|s\right|=sup\left\{\phantom{\rule{0.166667em}{0ex}}|s(u,v)|\phantom{\rule{0.166667em}{0ex}}\mid \phantom{\rule{0.166667em}{0ex}}\parallel u\parallel =1=\parallel v\parallel \right\}$. So the given form a induces an ${\mathcal{A}}_{0}\in \mathbb{B}\left(V\right)$ given by

$$\begin{array}{c}\hfill a(u,v)={\left({\mathcal{A}}_{0}u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)}_{V}\phantom{\rule{1.em}{0ex}}\forall u,v\in V;\end{array}$$

and the adjoint form ${a}^{\ast}$ similarly induces an operator ${\mathcal{A}}_{0}^{\ast}\in \mathbb{B}\left(V\right)$, which is seen at once to be the adjoint of ${\mathcal{A}}_{0}$ in the sense that ${\left({\mathcal{A}}_{0}^{\ast}v\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}u)}_{V}={\left(v\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{\mathcal{A}}_{0}u)}_{V}$.

The V-ellipticity in (22) shows that ${\mathcal{A}}_{0}$, ${\mathcal{A}}_{0}^{\ast}$ are both injective with positive lower bounds $m\left({\mathcal{A}}_{0}\right),\phantom{\rule{0.166667em}{0ex}}m\left({\mathcal{A}}_{0}^{\ast}\right)\ge {C}_{4}$, so ${\mathcal{A}}_{0}$, ${\mathcal{A}}_{0}^{\ast}$ are in fact bijections on V (cf. ([20], Theorem 12.9)).

By Riesz’s representation theorem, there exists a bijective isometry $J\in \mathbb{B}(V,{V}^{\ast})$ such that for every ${v}^{\ast}=J\tilde{v}$ one has $\langle J\tilde{v},v\rangle ={\left(\tilde{v}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)}_{V}$ for all $v\in V$. Therefore $\mathcal{A}:=J\circ {\mathcal{A}}_{0}$ is an operator in $\mathbb{B}(V,{V}^{\ast})$, for which (24) gives

$$\begin{array}{c}\hfill \langle \mathcal{A}u,v\rangle =a(u,v),\phantom{\rule{1.em}{0ex}}\forall u,v\in V.\end{array}$$

Similarly ${\mathcal{A}}^{\prime}:=J\circ {\mathcal{A}}_{0}^{\ast}$ fulfils $\langle {\mathcal{A}}^{\prime}u,v\rangle ={\left({\mathcal{A}}_{0}^{\ast}u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)}_{V}={a}^{\ast}(u,v)$ for all $u,v\in V$.

Clearly $\mathcal{A}$ and ${\mathcal{A}}^{\prime}$ are bijections, as composites of such. Hence they give rise to a Hilbert space structure on ${V}^{\ast}$ with the inner product

$$\begin{array}{c}\hfill {\left({w}_{1}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{w}_{2})}_{{V}^{\ast}}={a}_{Re}({\mathcal{A}}^{-1}{w}_{1},{\mathcal{A}}^{-1}{w}_{2}),\end{array}$$

inducing the norm $\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|w{\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|}_{\ast}={a}_{Re}{({\mathcal{A}}^{-1}w,{\mathcal{A}}^{-1}w)}^{1/2}=\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|{\mathcal{A}}^{-1}w\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|$ on ${V}^{\ast}$, equivalent to ${\parallel w\parallel}_{\ast}$.

The Lax–Milgram operator A is defined by restriction of $\mathcal{A}$ to an operator in H, i.e.,

$$Av=\mathcal{A}v\phantom{\rule{1.em}{0ex}}\mathrm{for}\phantom{\rule{4.pt}{0ex}}v\in D\left(A\right):={\mathcal{A}}^{-1}\left(H\right).$$

So $D\left(A\right)$ consists of the $u\in V$ for which some $f\in H$ fulfils $\left(f\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)=a(u,v)$ for all $v\in V$.

The reader may consult ([20], Section 12.4) for elementary proofs of the following: A is closed in H, with $D\left(A\right)$ dense in H as well as in V; the restriction of ${\mathcal{A}}^{\prime}$ to H has the same properties, and it equals the adjoint of A in H, i.e., ${\mathcal{A}}^{\prime}{|}_{{\mathcal{A}}^{\prime -1}\left(H\right)}={A}^{\ast}$. As A is closed, $D\left(A\right)$ is a Hilbert space with the graph norm ${\parallel v\parallel}_{D\left(A\right)}^{2}={\left|v\right|}^{2}+{\left|Av\right|}^{2}$, and $D\left(A\right)\hookrightarrow V$ is bounded due to (22). Geometrically, the spectrum $\sigma \left(A\right)$ and the numerical range $\nu \left(A\right)$ are contained in the sector of $z\in \mathbb{C}$ given by

$$\begin{array}{c}\hfill |Imz|\le {C}_{3}{C}_{4}^{-1}Rez.\end{array}$$

Actually $0\in \rho \left(A\right)$ since a is V-elliptic, so ${A}^{-1}\in \mathbb{B}\left(H\right)$; moreover $m\left(A\right)\ge {C}_{1}{C}_{4}/{C}_{2}>0$.
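As a purely finite-dimensional illustration of this sector containment (with an arbitrarily chosen matrix, not part of the theory above), one can sample the numerical range of a non-selfadjoint matrix M on $V=H={\mathbb{C}}^{n}$, where the form is $a(u,v)=\left(Mu\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)$, ${C}_{3}=\parallel M\parallel $ bounds the form, and ${C}_{4}$ is the smallest eigenvalue of the Hermitian part:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Hermitian positive definite part plus a skew-Hermitian perturbation
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = B @ B.conj().T + n * np.eye(n)          # Hermitian, positive definite
K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = M + 0.5 * (K - K.conj().T)              # add skew part -> non-selfadjoint

H = 0.5 * (M + M.conj().T)                  # Hermitian ("real") part of the form
C3 = np.linalg.norm(M, 2)                   # bound: |a(u,v)| <= C3 |u||v|
C4 = np.linalg.eigvalsh(H).min()            # ellipticity: Re a(v,v) >= C4 |v|^2

in_sector = []
for _ in range(1000):
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v /= np.linalg.norm(v)
    z = v.conj() @ M @ v                    # numerical-range sample a(v,v)
    in_sector.append(abs(z.imag) <= (C3 / C4) * z.real + 1e-12)
all_in_sector = all(in_sector)
print(C4 > 0, all_in_sector)
```

Every sampled value satisfies $|Imz|\le ({C}_{3}/{C}_{4})Rez$, since $Rez\ge {C}_{4}$ while $|Imz|\le {C}_{3}$.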

Both the closed operator A in H and its extension $\mathcal{A}\in \mathbb{B}(V,{V}^{\ast})$ are used throughout. (For simplicity, they were both denoted by A in the introduction, though.)

#### 2.2. The Self-Adjoint Case

As is well known, if A is selfadjoint, i.e., ${A}^{\ast}=A$ (or ${a}^{\ast}=a$), and has compact inverse, then H has an orthonormal basis of eigenvectors of A, which can be scaled to orthonormal bases of V and ${V}^{\ast}$. This is recalled, because our results can be given a more explicit form in this case, e.g., for $-\Delta $ in (1).

The properties that A is selfadjoint, closed, and densely defined with dense range in H carry over to ${A}^{-1}$ (e.g., ([20], Theorem 12.7)), so when ${A}^{-1}$ in addition is compact in H (e.g., if $V\hookrightarrow H$ is compact), then the spectral theorem for compact selfadjoint operators states that H has an orthonormal basis $\left({e}_{j}\right)$ consisting of eigenvectors of ${A}^{-1}$, where the eigenvalues ${\mu}_{j}$ of ${A}^{-1}$ by the positivity can be ordered such that

$$\begin{array}{c}\hfill {\mu}_{1}\ge {\mu}_{2}\ge \dots \ge {\mu}_{j}\ge \cdots >0,\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}{\mu}_{j}\to 0\phantom{\rule{4.pt}{0ex}}\mathrm{if}\phantom{\rule{4.pt}{0ex}}j\to \infty .\end{array}$$

The orthonormal basis $\left({e}_{j}\right)$ also consists of eigenvectors of A with eigenvalues ${\lambda}_{j}=1/{\mu}_{j}$. Hence $\sigma \left(A\right)={\sigma}_{point}\left(A\right)=\{\phantom{\rule{0.166667em}{0ex}}{\lambda}_{j}\mid j\in \mathbb{N}\phantom{\rule{0.166667em}{0ex}}\}$. Indeed, ${\sigma}_{res}\left(A\right)=\varnothing $ as ${A}^{\ast}=A$; and ${A}^{-1}\in \mathbb{B}\left(H\right)$ while $A-\nu I=({\nu}^{-1}I-{A}^{-1})\nu A$ has a bounded inverse for $\nu \ne {\lambda}_{j}$, as ${\nu}^{-1}\notin \sigma \left({A}^{-1}\right)$.

As ${a}_{Re}=a$ here, V is now renormed by $\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|v{\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|}^{2}=a(v,v)$. Moreover, when V is considered with $a(u,v)$ as inner product, $\mathcal{A}:V\to {V}^{\ast}$ is the Riesz isometry; and one has

**Fact**

**1.**

For every $v\in V$ the H-expansion $v={\sum}_{j=1}^{\infty}\left(v\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}_{j}){e}_{j}$ converges in V. Moreover, the sequence ${({e}_{j}/\sqrt{{\lambda}_{j}})}_{j\in \mathbb{N}}$ is an orthonormal basis for V, and $\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|v{\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|}^{2}={\sum}_{j=1}^{\infty}{\lambda}_{j}{\left|\left(v\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}_{j})\right|}^{2}$.

**Proof.**

The ${e}_{j}/\sqrt{{\lambda}_{j}}$ are orthonormal in V since $a({e}_{j},{e}_{k})=\langle \mathcal{A}{e}_{j},{e}_{k}\rangle ={\lambda}_{j}\left({e}_{j}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}_{k})$, cf. (25). They also yield a basis for V since similarly $w\in V\ominus span({e}_{j}/\sqrt{{\lambda}_{j}})$ implies $w=0$. As ${\lambda}_{j}>0$, expansion of any v in V gives

$$\begin{array}{cc}\hfill v& =\sum _{j=1}^{\infty}a(v,{\lambda}_{j}^{-1/2}{e}_{j}){\lambda}_{j}^{-1/2}{e}_{j}=\sum _{j=1}^{\infty}\overline{a({e}_{j},v)}{\lambda}_{j}^{-1}{e}_{j}=\sum _{j=1}^{\infty}\left(v\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}_{j}){e}_{j},\hfill \end{array}$$

whence the rightmost side converges in V. This means that $v={\sum}_{j=1}^{\infty}\sqrt{{\lambda}_{j}}\left(v\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}_{j}){e}_{j}/\sqrt{{\lambda}_{j}}$ is an orthogonal expansion in V, whence $\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|v{\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|}^{2}$ has the stated expression. ☐
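Fact 1 is easy to check numerically in a finite-dimensional model: with $H={\mathbb{R}}^{n}$ and a symmetric positive definite matrix A representing the form, $a(u,v)={v}^{T}Au$, the H-orthonormal eigenvectors ${e}_{j}$ become, after rescaling by $1/\sqrt{{\lambda}_{j}}$, orthonormal for the a-inner product, and the norm identity holds. A sketch (matrix chosen arbitrarily, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # symmetric positive definite "Lax-Milgram" matrix

lam, E = np.linalg.eigh(A)          # columns of E: H-orthonormal eigenvectors e_j
positive = bool(np.all(lam > 0))

V_basis = E / np.sqrt(lam)          # columns e_j / sqrt(lambda_j)
gram = V_basis.T @ A @ V_basis      # a-inner products of the rescaled vectors
ok = np.allclose(gram, np.eye(n))   # orthonormal in the a-inner product

# the V-norm identity |||v|||^2 = sum_j lambda_j |(v|e_j)|^2
v = rng.standard_normal(n)
lhs = v @ A @ v
rhs = np.sum(lam * (E.T @ v) ** 2)
norm_identity = bool(np.isclose(lhs, rhs))
print(positive, ok, norm_identity)
```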

For ${V}^{\ast}$ the set-up (26), (25) here gives ${\left({w}_{1}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{w}_{2})}_{{V}^{\ast}}=a({\mathcal{A}}^{-1}{w}_{1},{\mathcal{A}}^{-1}{w}_{2})=\langle {w}_{1},{\mathcal{A}}^{-1}{w}_{2}\rangle $.

**Fact**

**2.**

For every $w\in {V}^{\ast}$ the expansion $w={\sum}_{j=1}^{\infty}\langle w,{e}_{j}\rangle {e}_{j}$ converges in ${V}^{\ast}$. Moreover, the sequence ${\left(\sqrt{{\lambda}_{j}}{e}_{j}\right)}_{j\in \mathbb{N}}$ is an orthonormal basis of ${V}^{\ast}$ and $\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|w{\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|}_{\ast}^{2}={\sum}_{j=1}^{\infty}{\lambda}_{j}^{-1}{\left|\langle w,{e}_{j}\rangle \right|}^{2}$.

**Proof.**

$\left(\sqrt{{\lambda}_{j}}{e}_{j}\right)$ is orthonormal as ${\left({e}_{j}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}_{k})}_{{V}^{\ast}}=\langle {e}_{j},{\mathcal{A}}^{-1}{e}_{k}\rangle ={\lambda}_{k}^{-1}\left({e}_{j}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}_{k})$; and if $w\in {V}^{\ast}$ for all j fulfils $0=\langle {e}_{j},{\mathcal{A}}^{-1}w\rangle =\left({e}_{j}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{\mathcal{A}}^{-1}w)$, then $w=0$ as ${\mathcal{A}}^{-1}$ is injective. Therefore

$$\begin{array}{c}\hfill w=\sum _{j=1}^{\infty}{\left(w\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{\lambda}_{j}^{1/2}{e}_{j})}_{{V}^{\ast}}{\lambda}_{j}^{1/2}{e}_{j}=\sum _{j=1}^{\infty}\langle w,{\mathcal{A}}^{-1}{e}_{j}\rangle {\lambda}_{j}{e}_{j}=\sum _{j=1}^{\infty}\langle w,{e}_{j}\rangle {e}_{j},\end{array}$$

so the rightmost side converges in ${V}^{\ast}$, and the expression for $\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|w{\left|\phantom{\rule{-1.6pt}{0ex}}\right|\phantom{\rule{-1.6pt}{0ex}}|}_{\ast}^{2}$ results. ☐

#### 2.3. Semigroups

Assuming that the reader is familiar with the theory of semigroups ${e}^{t\mathbf{A}}$, we review a few needed facts in a setting with a general complex Banach space B. The books of Pazy [23], Tanabe [9] and Yosida [19] may serve as general references.

The generator is $\mathbf{A}x={lim}_{t\to {0}^{+}}\frac{1}{t}({e}^{t\mathbf{A}}x-x)$, with domain $D\left(\mathbf{A}\right)$ consisting of the $x\in B$ for which the limit exists. $\mathbf{A}$ is a densely defined, closed linear operator in B that for certain $\omega \ge 0$ and $M\ge 1$ satisfies ${\parallel (\mathbf{A}-\lambda )}^{-n}{\parallel}_{\mathbb{B}\left(B\right)}\le M/{(\lambda -\omega )}^{n}$ for $\lambda >\omega $, $n\in \mathbb{N}$.

The corresponding semigroup of operators is written ${e}^{t\mathbf{A}}$; it belongs to $\mathbb{B}\left(B\right)$ with

$$\begin{array}{c}\hfill \parallel {e}^{t\mathbf{A}}{\parallel}_{\mathbb{B}\left(B\right)}\le M{e}^{\omega t}\phantom{\rule{1.em}{0ex}}\phantom{\rule{4.pt}{0ex}}\mathrm{for}\phantom{\rule{4.pt}{0ex}}0\le t<\infty .\end{array}$$

Its basic properties are that ${e}^{t\mathbf{A}}{e}^{s\mathbf{A}}={e}^{(s+t)\mathbf{A}}$ for $s,t\ge 0$, ${e}^{0\mathbf{A}}=I$, and ${lim}_{t\to {0}^{+}}{e}^{t\mathbf{A}}x=x$ for $x\in B$; the first of these gives at once the range inclusions

$$R\left({e}^{(s+t)\mathbf{A}}\right)\subset R\left({e}^{t\mathbf{A}}\right)\subset B.$$
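For a matrix generator, where all operators are bounded, the semigroup identities above can be verified directly with `scipy.linalg.expm`; a small sketch with an arbitrary test matrix:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))     # bounded generator: any matrix will do

s, t = 0.3, 0.7
# semigroup law e^{tA} e^{sA} = e^{(s+t)A} (tA and sA commute)
semigroup_ok = np.allclose(expm(t * A) @ expm(s * A), expm((s + t) * A))
# e^{0A} = I
identity_ok = np.allclose(expm(0 * A), np.eye(4))
print(semigroup_ok, identity_ok)
```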

The following well-known theorem gives a criterion for $\mathbf{A}$ to generate an analytic semigroup that is uniformly bounded, i.e., has $\omega =0$. It summarises the most relevant parts of Theorems 1.7.7 and 2.5.2 in [23], and it involves sectors of the form

$$\Sigma :=\left\{\lambda \in \mathbb{C}\mid |arg\lambda |<\frac{\pi}{2}+\theta \right\}\cup \left\{0\right\}.$$

**Theorem**

**3.**

If $\theta \in \phantom{\rule{0.166667em}{0ex}}]0,\frac{\pi}{2}[\phantom{\rule{0.166667em}{0ex}}$ and $M>0$ are such that the resolvent set $\rho \left(\mathbf{A}\right)\supseteq \Sigma $ and

$${\parallel (\lambda I-\mathbf{A})}^{-1}{\parallel}_{\mathbb{B}\left(B\right)}\le \frac{M}{\left|\lambda \right|},\phantom{\rule{1.em}{0ex}}\phantom{\rule{4.pt}{0ex}}\mathrm{for}\phantom{\rule{4.pt}{0ex}}\lambda \in \Sigma ,\phantom{\rule{4.pt}{0ex}}\lambda \ne 0,$$

then $\mathbf{A}$ generates an analytic semigroup ${e}^{z\mathbf{A}}$ for $|argz|<\theta $, for which $\parallel {e}^{z\mathbf{A}}\parallel $ is bounded for $|argz|\le {\theta}^{\prime}<\theta $, and ${e}^{t\mathbf{A}}$ is differentiable in $\mathbb{B}\left(B\right)$ for $t>0$ with ${\left({e}^{t\mathbf{A}}\right)}^{\prime}=\mathbf{A}{e}^{t\mathbf{A}}$. Here

$$\begin{array}{c}\hfill \parallel \mathbf{A}{e}^{t\mathbf{A}}{\parallel}_{\mathbb{B}\left(B\right)}\le \frac{c}{t}\phantom{\rule{1.em}{0ex}}\mathrm{for}\phantom{\rule{4.pt}{0ex}}t>0.\end{array}$$

Furthermore, when ${e}^{t\mathbf{A}}$ is analytic, the Cauchy problem ${u}^{\prime}=\mathbf{A}u$, $u\left(0\right)={u}_{0}$ is uniquely solved by $u\left(t\right)={e}^{t\mathbf{A}}{u}_{0}$ for every ${u}_{0}\in B$.
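The bound (36) can be observed numerically in the selfadjoint model case $\mathbf{A}=-A$ with A a symmetric positive definite matrix: there $t\parallel \mathbf{A}{e}^{t\mathbf{A}}\parallel ={max}_{j}\phantom{\rule{0.166667em}{0ex}}t{\lambda}_{j}{e}^{-t{\lambda}_{j}}\le 1/e$, since ${sup}_{x>0}x{e}^{-x}=1/e$. A finite-dimensional sketch (matrix and time grid chosen for illustration only):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
A = B @ B.T + np.eye(5)             # SPD Lax-Milgram matrix; generator is -A

worst = 0.0
for t in np.logspace(-3, 2, 60):    # sample many time scales
    # t * ||A e^{-tA}|| = max_j (t lambda_j) e^{-t lambda_j} in the symmetric case
    val = t * np.linalg.norm(-A @ expm(-t * A), 2)
    worst = max(worst, val)

bounded = worst <= 1 / np.e + 1e-9  # since sup_x x e^{-x} = 1/e
print(worst, bounded)
```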

#### 2.3.1. Injectivity

Often it is crucial to know whether the semigroup ${e}^{t\mathbf{A}}$ consists of injective operators. Injectivity is, e.g., equivalent to the geometric property that the trajectories of two solutions ${e}^{t\mathbf{A}}{v}_{0}$ and ${e}^{t\mathbf{A}}{w}_{0}$ of ${u}^{\prime}=\mathbf{A}u$ have no point of confluence in B for ${v}_{0}\ne {w}_{0}$.

However, the literature seems to have focused on examples of non-invertibility of ${e}^{t\mathbf{A}}$, e.g., ([23], Example 2.2.1). Nevertheless, injectivity always holds in the analytic case, as we now show:

**Proposition**

**1.**

When a semigroup ${e}^{z\mathbf{A}}$ on a complex Banach space B is analytic $S\to \mathbb{B}\left(B\right)$ in the sector $S=\left\{z\in \mathbb{C}\mid |argz|<\theta \right\}$ for some $\theta >0$, then ${e}^{z\mathbf{A}}$ is injective for all $z\in S$.

**Proof.**

Let ${e}^{{z}_{0}\mathbf{A}}{u}_{0}=0$ hold for some ${u}_{0}\in B$, ${z}_{0}\in S$. The analyticity of ${e}^{z\mathbf{A}}$ in S carries over to the map $f:z\mapsto {e}^{z\mathbf{A}}{u}_{0}$, and to ${g}_{v}:z\mapsto \langle \phantom{\rule{0.166667em}{0ex}}v,\phantom{\rule{0.166667em}{0ex}}f\left(z\right)\phantom{\rule{0.166667em}{0ex}}\rangle $ for arbitrary v in the dual space ${B}^{\prime}$. So ${g}_{v}$ has in a ball $B({z}_{0},r)\subset S$ the Taylor expansion

$$\begin{array}{c}\hfill {g}_{v}\left(z\right)=\sum _{n=0}^{\infty}\frac{1}{n!}\langle \phantom{\rule{0.166667em}{0ex}}v,\phantom{\rule{0.166667em}{0ex}}{f}^{\left(n\right)}\left({z}_{0}\right)\phantom{\rule{0.166667em}{0ex}}\rangle {(z-{z}_{0})}^{n}.\end{array}$$

By the properties of analytic semigroups (cf. ([23], Lemma 2.4.2)) and of ${u}_{0}$,

$$\begin{array}{c}\hfill {f}^{\left(n\right)}\left({z}_{0}\right)={\mathbf{A}}^{n}{e}^{{z}_{0}\mathbf{A}}{u}_{0}=0\phantom{\rule{1.em}{0ex}}\phantom{\rule{4.pt}{0ex}}\mathrm{for}\phantom{\rule{4.pt}{0ex}}\mathrm{all}\phantom{\rule{4.pt}{0ex}}n\ge 0,\end{array}$$

so that ${g}_{v}\equiv 0$ holds on $B({z}_{0},r)$ and consequently on S by unique analytic extension.

Now $f\left({z}_{1}\right)\ne 0$ for some ${z}_{1}\in S$ would yield ${g}_{v}\left({z}_{1}\right)\ne 0$ for a suitable v in ${B}^{\prime}$, hence $f\equiv 0$ on S and

$$\begin{array}{c}\hfill {u}_{0}=\underset{t\to 0}{lim}{e}^{t\mathbf{A}}{u}_{0}=\underset{t\to 0}{lim}f\left(t\right)=0,\end{array}$$

since ${e}^{t\mathbf{A}}$ is a strongly continuous semigroup. Altogether $Z\left({e}^{{z}_{0}\mathbf{A}}\right)=\left\{0\right\}$ is proved. ☐

**Remark**

**1.**

We have only been able to track a claim of the injectivity in Proposition 1 in case $z>0$, $\theta \le \pi /4$ and B is a Hilbert space; cf. Showalter’s paper [17]. However, his proof is flawed, as ${\mathbf{A}}^{2}$ is non-accretive for some $\mathbf{A}$ with $\theta \le \pi /4$, cf. the counter-example in Remark 9 below.

**Remark**

**2.**

Injectivity also follows directly when $\mathbf{A}$ is defined on a Hilbert space H having an orthonormal basis ${\left({e}_{n}\right)}_{n\in \mathbb{N}}$ such that $\mathbf{A}{e}_{j}={\lambda}_{j}{e}_{j}$: Clearly ${e}^{t\mathbf{A}}{e}_{j}={e}^{t{\lambda}_{j}}{e}_{j}$ as both sides satisfy ${x}^{\prime}-\mathbf{A}x=0$, $x\left(0\right)={e}_{j}$. So if ${e}^{t\mathbf{A}}v=0$, boundedness of ${e}^{t\mathbf{A}}$ gives

$$\begin{array}{c}\hfill 0={e}^{t\mathbf{A}}v=\sum \left(v\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}_{j}){e}^{t\mathbf{A}}{e}_{j}=\sum \left(v\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}_{j}){e}^{t{\lambda}_{j}}{e}_{j},\end{array}$$

so that $v\perp {e}_{j}$ for all j, and thus $v\in span{\left({e}_{n}\right)}^{\perp}={H}^{\perp}=\left\{0\right\}$. Hence ${e}^{t\mathbf{A}}$ is injective for such $\mathbf{A}$.

We have chosen to use the symbol ${e}^{-t\mathbf{A}}$ to denote the inverse of the analytic semigroup ${e}^{t\mathbf{A}}$ generated by $\mathbf{A}$, consistent with the case in which ${e}^{t\mathbf{A}}$ does form a group in $\mathbb{B}\left(B\right)$, i.e.,

$${e}^{-t\mathbf{A}}:={\left({e}^{t\mathbf{A}}\right)}^{-1}\phantom{\rule{2.em}{0ex}}\mathrm{for}\phantom{\rule{4.pt}{0ex}}\mathrm{all}\phantom{\rule{4.pt}{0ex}}t\in \mathbb{R}.$$

This notation is convenient for our purposes (with some diligence).
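In finite dimensions ${e}^{t\mathbf{A}}$ always embeds in a genuine group, so there the notation is unambiguous: the inverse of `expm(t*A)` is `expm(-t*A)`. A quick check with an arbitrary matrix:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))     # arbitrary bounded generator

t = 0.9
# e^{-tA} e^{tA} = I, and the matrix inverse of e^{tA} is e^{-tA}
inverse_ok = np.allclose(expm(-t * A) @ expm(t * A), np.eye(4))
group_ok = np.allclose(np.linalg.inv(expm(t * A)), expm(-t * A))
print(inverse_ok, group_ok)
```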

For simplicity we observe the following when $B=H$ is a Hilbert space and $t>0$: clearly ${e}^{-t\mathbf{A}}$ maps $D\left({e}^{-t\mathbf{A}}\right)=R\left({e}^{t\mathbf{A}}\right)$ bijectively onto H, and it is an unbounded closed operator in H. As ${\left({e}^{t\mathbf{A}}\right)}^{\ast}={e}^{t{\mathbf{A}}^{\ast}}$ also is analytic, Proposition 1 yields $Z\left({e}^{t{\mathbf{A}}^{\ast}}\right)=\left\{0\right\}$; and since $R{\left({e}^{t\mathbf{A}}\right)}^{\perp}=Z\left({e}^{t{\mathbf{A}}^{\ast}}\right)$, we have $\overline{D\left({e}^{-t\mathbf{A}}\right)}=H$, i.e., the domain is dense in H.

A partial group phenomenon and other algebraic properties are collected here:

**Proposition**

**2.**

The inverses ${e}^{-t\mathbf{A}}$ in (41) form a semigroup of unbounded operators,

$${e}^{-t\mathbf{A}}{e}^{-s\mathbf{A}}={e}^{-(t+s)\mathbf{A}}\phantom{\rule{2.em}{0ex}}\mathit{for}\phantom{\rule{4.pt}{0ex}}t,s\ge 0.$$

This extends to $(s,t)\in \phantom{\rule{0.166667em}{0ex}}]-\infty ,0]\times \mathbb{R}$, but the right-hand side may be unbounded for $t+s>0$.

Moreover, as unbounded operators the ${e}^{-t\mathbf{A}}$ commute with ${e}^{s\mathbf{A}}\in \mathbb{B}\left(H\right)$, i.e.,

$${e}^{s\mathbf{A}}{e}^{-t\mathbf{A}}\subset {e}^{-t\mathbf{A}}{e}^{s\mathbf{A}}\phantom{\rule{2.em}{0ex}}\mathit{for}\phantom{\rule{4.pt}{0ex}}t,s\ge 0,$$

and there is a descending chain of domain inclusions

$$\begin{array}{c}\hfill D\left({e}^{-{t}^{\prime}\mathbf{A}}\right)\subset D\left({e}^{-t\mathbf{A}}\right)\subset H\phantom{\rule{2.em}{0ex}}\mathit{for}\phantom{\rule{4.pt}{0ex}}0<t<{t}^{\prime}.\end{array}$$

**Proof.**

When $s,t\ge 0$, clearly ${e}^{-t\mathbf{A}}{e}^{-s\mathbf{A}}{e}^{(s+t)\mathbf{A}}={I}_{H}$ holds, so that ${e}^{-(s+t)\mathbf{A}}\subset {e}^{-t\mathbf{A}}{e}^{-s\mathbf{A}}$; but equality necessarily holds, as the injection ${e}^{-t\mathbf{A}}{e}^{-s\mathbf{A}}$ cannot be a proper extension of the surjection ${e}^{-(s+t)\mathbf{A}}$. Whence (42). For $t+s\ge 0\ge s$ this yields ${e}^{-t\mathbf{A}}{e}^{-s\mathbf{A}}={e}^{-(t+s)\mathbf{A}}{e}^{s\mathbf{A}}{e}^{-s\mathbf{A}}={e}^{-(t+s)\mathbf{A}}$. The case $-s>t\ge 0$ is similar.

Also the commutation follows at once, for the semigroup property gives

$${e}^{s\mathbf{A}}{e}^{-t\mathbf{A}}={e}^{-t\mathbf{A}}{e}^{t\mathbf{A}}{e}^{s\mathbf{A}}{e}^{-t\mathbf{A}}={e}^{-t\mathbf{A}}{e}^{(s+t)\mathbf{A}}{e}^{-t\mathbf{A}}={e}^{-t\mathbf{A}}{e}^{s\mathbf{A}}{I}_{R\left({e}^{t\mathbf{A}}\right)},$$

where the right-hand side is a restriction of ${e}^{-t\mathbf{A}}{e}^{s\mathbf{A}}$. Finally (33) yields (44). ☐

#### 2.3.2. Some Regularity Properties

As a preparation we treat a few regularity questions for $s\mapsto {e}^{(t-s)\mathbf{A}}f\left(s\right)$, where the analytic operator function $E\left(s\right)={e}^{(t-s)\mathbf{A}}$ has a singularity at $s=t$; cf. (36). This will be controlled when $f\in {L}_{1}(0,t;B)$.

That $Ef={e}^{(t-\cdot )\mathbf{A}}f$ also is in ${L}_{1}(0,t;B)$ is undoubtedly known. So let us briefly recall how to prove it strongly measurable, i.e., how to find a sequence of simple functions converging pointwise to $E\left(s\right)f\left(s\right)$ for a.e. $s\in [0,t]$; cf. [19]. Now f can be so approximated by a sequence $\left({f}_{n}\right)$; and by its continuity $[0,t[\phantom{\rule{0.166667em}{0ex}}\to \mathbb{B}\left(B\right)$, E can be approximated pointwise for $s<t$ by the step functions ${E}_{n}$ that on each subinterval $[t(j-1){2}^{-n},tj{2}^{-n}[\phantom{\rule{0.166667em}{0ex}}$, $j=1,\cdots ,{2}^{n}$, take the value of E at the left end point. Then $Ef={lim}_{n}{E}_{n}{f}_{n}$ a.e. on $[0,t]$. Therefore ${e}^{(t-\cdot )\mathbf{A}}f\in {L}_{1}(0,t;B)$ follows directly from (32),

$$\begin{array}{c}\hfill \parallel {e}^{(t-\cdot )\mathbf{A}}{f\parallel}_{{L}_{1}(0,t;B)}\le {\int}_{0}^{t}\parallel {e}^{(t-s)\mathbf{A}}\parallel \parallel f\left(s\right)\parallel \phantom{\rule{0.166667em}{0ex}}ds\le M{e}^{\omega t}{\parallel f\parallel}_{{L}_{1}(0,t;B)}.\end{array}$$

Moreover, $\langle \eta ,{e}^{(t-\cdot )\mathbf{A}}f\rangle $ is seen to be in ${L}_{1}(0,t)$ by majorizing with $\parallel {e}^{(t-s)\mathbf{A}}{f\left(s\right)\parallel}_{B}{\parallel \eta \parallel}_{{B}^{\prime}}$, for strong measurability implies weak measurability; cf. ([24], Section IV.5 appendix).

The main concern is to obtain a Leibniz rule for the derivative:
For $w\in {C}^{1}(0,T;B)$ this is unproblematic for $s<T$: $w(s+h)=w\left(s\right)+h{\partial}_{s}w\left(s\right)+o\left(h\right)$, where $o\left(h\right)/h\to 0$ for $h\to 0$; and the operator is differentiable in $\mathbb{B}\left(B\right)$ for $s<T$, cf. Theorem 3, so that ${e}^{(T-(s+h\left)\right)\mathbf{A}}={e}^{(T-s)\mathbf{A}}+h(-\mathbf{A}){e}^{(T-s)\mathbf{A}}+o\left(h\right)$. Hence a multiplication of the two expansions gives the right-hand side of (47) to the first order in h. The Leibniz rule is more generally valid in the vector distribution sense:

$${\partial}_{s}\left({e}^{(T-s)\mathbf{A}}w\left(s\right)\right)=(-\mathbf{A}){e}^{(T-s)\mathbf{A}}w\left(s\right)+{e}^{(T-s)\mathbf{A}}{\partial}_{s}w\left(s\right).$$
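For a matrix generator the rule (47) can be sanity-checked by finite differences: below, the derivative of $s\mapsto {e}^{(T-s)\mathbf{A}}w\left(s\right)$ is compared with the right-hand side of (47) for a smooth vector-valued w. The generator G and the curve w are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
G = rng.standard_normal((3, 3))                 # bounded generator playing "A"
T, s, h = 1.0, 0.4, 1e-6

def w(s):                                       # smooth B-valued function
    return np.array([np.sin(s), np.cos(2 * s), s ** 2])

def wprime(s):                                  # its derivative
    return np.array([np.cos(s), -2 * np.sin(2 * s), 2 * s])

def F(s):                                       # s -> e^{(T-s)G} w(s)
    return expm((T - s) * G) @ w(s)

lhs = (F(s + h) - F(s - h)) / (2 * h)           # central difference of F
rhs = -G @ (expm((T - s) * G) @ w(s)) + expm((T - s) * G) @ wprime(s)
leibniz_ok = np.allclose(lhs, rhs, atol=1e-4)
print(leibniz_ok)
```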

**Proposition**

**3.**

When $\mathbf{A}$ generates an analytic semigroup on a Banach space B and $w\in {H}^{1}(0,T;B)$, then the Leibniz rule (47) holds in ${\mathcal{D}}^{\prime}(0,T;B)$.

**Proof.**

It suffices to cover the case $\omega =0$, for the other cases then follow by applying the formula to the semigroup ${e}^{-\omega t}{e}^{t\mathbf{A}}$ generated by $\mathbf{A}-\omega I$. For $w\in {H}^{1}(0,T;B)$ the standard convolution procedure gives a sequence $\left({w}_{k}\right)$ in ${C}^{1}([0,T];B)$ such that

$${w}_{k}\to w\phantom{\rule{1.em}{0ex}}\mathrm{in}\phantom{\rule{1.em}{0ex}}{L}_{2}(0,T;B),\phantom{\rule{2.em}{0ex}}{w}_{k}^{\prime}\to {w}^{\prime}\phantom{\rule{1.em}{0ex}}\mathrm{in}\phantom{\rule{1.em}{0ex}}{L}_{2,\mathrm{loc}}(0,T;B).$$

For arbitrary $\varphi \in {C}_{0}^{\infty}\left(\phantom{\rule{0.166667em}{0ex}}\right]0,T\left[\phantom{\rule{0.166667em}{0ex}}\right)$, we find using the Bochner inequality that

$$\begin{array}{c}\hfill \parallel {\int}_{0}^{T}{e}^{(T-s)\mathbf{A}}(w\left(s\right)-{w}_{k}\left(s\right)){\varphi \left(s\right)\phantom{\rule{0.166667em}{0ex}}ds\parallel}_{B}\le C{\parallel w-{w}_{k}\parallel}_{{L}_{2}(0,T;B)},\end{array}$$

with $C=M({\int}_{supp\varphi}{\left|\varphi \left(s\right)\right|}^{2}{\phantom{\rule{0.166667em}{0ex}}ds)}^{1/2}$, where M is the constant in (32).

Hence ${e}^{(T-s)\mathbf{A}}{w}_{k}\to {e}^{(T-s)\mathbf{A}}w$ in ${\mathcal{D}}^{\prime}(0,T;B)$, so via the ${C}^{1}$-case above, as ${\partial}_{s}$ is continuous in ${\mathcal{D}}^{\prime}$, we get

$$\begin{array}{cc}\hfill {\partial}_{s}\left({e}^{(T-s)\mathbf{A}}w\right)& =\underset{k\to \infty}{lim}\left({\partial}_{s}\left({e}^{(T-s)\mathbf{A}}{w}_{k}\right)\right)\hfill \\ & =\underset{k\to \infty}{lim}\left((-\mathbf{A}){e}^{(T-s)\mathbf{A}}{w}_{k}\right)+\underset{k\to \infty}{lim}\left({e}^{(T-s)\mathbf{A}}{\partial}_{s}{w}_{k}\right)=(-\mathbf{A}){e}^{(T-s)\mathbf{A}}w+{e}^{(T-s)\mathbf{A}}{\partial}_{s}w.\hfill \end{array}$$

Indeed, the last limits exist in ${\mathcal{D}}^{\prime}(0,T;B)$ by the choice of ${w}_{k}$, for if $\epsilon >0$ is small enough,

$$\begin{array}{c}\parallel {\int}_{supp\varphi}{e}^{(T-s)\mathbf{A}}({w}^{\prime}\left(s\right)-{w}_{k}^{\prime}\left(s\right)){\varphi \left(s\right)\phantom{\rule{0.166667em}{0ex}}ds\parallel}_{B}\le c{\int}_{\epsilon}^{T-\epsilon}{\parallel {w}^{\prime}\left(s\right)-{w}_{k}^{\prime}\left(s\right)\parallel}_{B}\phantom{\rule{0.166667em}{0ex}}ds,\end{array}$$

and similarly

$$\begin{array}{c}\parallel {\int}_{0}^{T}(-\mathbf{A}){e}^{(T-s)\mathbf{A}}(w\left(s\right)-{w}_{k}\left(s\right)){\varphi \left(s\right)\phantom{\rule{0.166667em}{0ex}}ds\parallel}_{B}\le \tilde{C}{\parallel w-{w}_{k}\parallel}_{{L}_{2}(0,T;B)}\end{array}$$

with $\tilde{C}={({\int}_{supp\varphi}{|\frac{c\varphi \left(s\right)}{T-s}|}^{2}\phantom{\rule{0.166667em}{0ex}}ds)}^{1/2}$, using the bound on $(-\mathbf{A}){e}^{(T-s)\mathbf{A}}$ in Theorem 3. ☐

## 3. Functional Analysis of Initial Value Problems

Having set the scene by recalling elliptic Lax–Milgram operators $\mathcal{A}$ in Gelfand triples $(V,H,{V}^{\ast})$ in Section 2.1, we now discuss solutions of the classical initial value problem

$$\left\{\begin{array}{ccc}\hfill {\partial}_{t}u+\mathcal{A}u& =f\hfill & \mathrm{in}\phantom{\rule{4.pt}{0ex}}{\mathcal{D}}^{\prime}(0,T;{V}^{\ast})\hfill \\ \hfill u\left(0\right)& ={u}_{0}\hfill & \mathrm{in}\phantom{\rule{4.pt}{0ex}}H.\hfill \end{array}\right.$$

By definition of vector distributions, the above equation means that for every scalar test function $\phi \in {C}_{0}^{\infty}\left(\phantom{\rule{0.166667em}{0ex}}\right]0,T\left[\phantom{\rule{0.166667em}{0ex}}\right)$ one has $\langle \phantom{\rule{0.166667em}{0ex}}u,\phantom{\rule{0.166667em}{0ex}}-{\phi}^{\prime}\phantom{\rule{0.166667em}{0ex}}\rangle +\langle \phantom{\rule{0.166667em}{0ex}}\mathcal{A}u,\phantom{\rule{0.166667em}{0ex}}\phi \phantom{\rule{0.166667em}{0ex}}\rangle =\langle \phantom{\rule{0.166667em}{0ex}}f,\phantom{\rule{0.166667em}{0ex}}\phi \phantom{\rule{0.166667em}{0ex}}\rangle $ as an identity in ${V}^{\ast}$.

First we recall the fundamental theorem for vector functions from ([10], Lemma III.1.1). Further below, it will be crucial for obtaining a solution formula for (53).

**Lemma**

**1.**

For a Banach space B and $u,g\in {L}_{1}(a,b;B)$ the following are equivalent:

- (i)
- u is a.e. equal to a primitive function of g, i.e., for some vector $\xi \in B$$$u\left(t\right)=\xi +{\int}_{a}^{t}g\left(s\right)\phantom{\rule{0.166667em}{0ex}}ds\phantom{\rule{1.em}{0ex}}\mathit{for}\phantom{\rule{4.pt}{0ex}}a.e.\phantom{\rule{4.pt}{0ex}}\phantom{\rule{4.pt}{0ex}}t\in [a,b].$$
- (ii)
- For each test function $\varphi \in {C}_{0}^{\infty}\left(\right]a,b\left[\right)$ one has ${\int}_{a}^{b}u\left(t\right){\varphi}^{\prime}\left(t\right)\phantom{\rule{0.166667em}{0ex}}dt=-{\int}_{a}^{b}g\left(t\right)\varphi \left(t\right)\phantom{\rule{0.166667em}{0ex}}dt$.
- (iii)
- For each η in the dual space ${B}^{\prime}$, $\frac{d}{dt}\langle \eta ,u\rangle =\langle \eta ,g\rangle $ holds in ${\mathcal{D}}^{\prime}(a,b)$.

In the affirmative case, u moreover satisfies

$$\begin{array}{c}\hfill \underset{a\le t\le b}{sup}{\parallel u\left(t\right)\parallel}_{B}\le {(b-a)}^{-1}{\parallel u\parallel}_{{L}_{1}(a,b;B)}+{\parallel g\parallel}_{{L}_{1}(a,b;B)}.\end{array}$$

**Remark**

**4.**

Lemma 1 is proved in [10], except for the estimate (55): the continuous function ${\parallel u\left(t\right)\parallel}_{B}$ attains its minimum at some ${t}_{0}\in [a,b]$, so applying the Bochner inequality in (i) and the Mean Value Theorem,

$$\begin{array}{c}\hfill {\parallel u\left(t\right)\parallel}_{B}\le \parallel u\left({t}_{0}\right){\parallel}_{B}+|{\int}_{{t}_{0}}^{t}\parallel g\left(s\right){\parallel}_{B}\phantom{\rule{0.166667em}{0ex}}ds|\le \frac{1}{b-a}{\int}_{a}^{b}\parallel u\left(t\right){\parallel}_{B}\phantom{\rule{0.166667em}{0ex}}dt+{\int}_{a}^{b}{\parallel g\left(t\right)\parallel}_{B}\phantom{\rule{0.166667em}{0ex}}dt.\end{array}$$

This yields (55), hence the Sobolev embedding ${W}^{1,1}(a,b;B)\hookrightarrow C([a,b];B)$. If furthermore $u,g\in {L}_{2}(a,b;B)$, we get the Sobolev embedding ${H}^{1}(a,b;B)\hookrightarrow C([a,b];B)$ similarly,

$$\begin{array}{c}\hfill \underset{a\le t\le b}{sup}{\parallel u\left(t\right)\parallel}_{B}\le {(b-a)}^{-1/2}{\parallel u\parallel}_{{L}_{2}(a,b;B)}+{(b-a)}^{1/2}{\parallel g\parallel}_{{L}_{2}(a,b;B)}\le c{\parallel u\parallel}_{{H}^{1}(a,b;B)}.\end{array}$$
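The estimate in Remark 4 is elementary to test for scalar functions, e.g. $u\left(t\right)=sint$ on $[0,\pi ]$ with $g={u}^{\prime}=cost$: then $sup\left|u\right|=1$, while the right-hand side evaluates to approximately $2/\pi +2$. A hedged numerical check by a simple Riemann sum:

```python
import numpy as np

a, b = 0.0, np.pi
t = np.linspace(a, b, 200001)
dt = t[1] - t[0]
u = np.sin(t)                       # primitive of g with u(a) = 0
g = np.cos(t)                       # g = u'

sup_u = np.max(np.abs(u))
L1_u = np.abs(u).sum() * dt         # ||u||_{L1} (approx. 2)
L1_g = np.abs(g).sum() * dt         # ||g||_{L1} (approx. 2)
rhs = L1_u / (b - a) + L1_g         # approx. 2/pi + 2

estimate_holds = sup_u <= rhs + 1e-9
print(sup_u, rhs, estimate_holds)
```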

Secondly we recall the Leibniz rule $\frac{d}{dt}\left(f\left(t\right)\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}g\left(t\right))=\left({f}^{\prime}\left(t\right)\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}g\left(t\right))+\left(f\left(t\right)\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{g}^{\prime}\left(t\right))$ valid for $f,g\in {C}^{1}([0,T];H)$. The well-known generalization below was proved in real vector spaces in ([10], Lemma III.1.2) for $u=v$. We briefly extend this to the general complex case, which we mainly use to obtain that ${\partial}_{t}{\left|u\right|}^{2}=2Re\langle \phantom{\rule{0.166667em}{0ex}}{u}^{\prime},\phantom{\rule{0.166667em}{0ex}}u\phantom{\rule{0.166667em}{0ex}}\rangle $, though also $u\ne v$ will be needed.

**Lemma**

**2.**

If $u,v\in {L}_{2}(0,T;V)\cap {H}^{1}(0,T;{V}^{\ast})$, then $t\mapsto \left(u\left(t\right)\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v\left(t\right))$ is in ${L}_{1}(0,T)$ and

$$\frac{d}{dt}\left(u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)=\langle {u}^{\prime},v\rangle +\overline{\langle {v}^{\prime},u\rangle}\phantom{\rule{1.em}{0ex}}\phantom{\rule{4.pt}{0ex}}\mathit{in}\phantom{\rule{4.pt}{0ex}}{\mathcal{D}}^{\prime}(0,T).$$

Furthermore, u and v have continuous representatives on $[0,T]$, i.e., $u,v\in C([0,T];H)$.

**Proof.**

Let $u,v\in {L}_{2}(0,T;V)$ with distributional derivatives ${u}^{\prime},{v}^{\prime}\in {L}_{2}(0,T;{V}^{\ast})$. As in the proof of Proposition 3 we obtain ${u}_{m}\in {C}^{\infty}([0,T];V)$ such that ${u}_{m}\to u$ in ${L}_{2}(0,T;V)$ while ${u}_{m}^{\prime}\to {u}^{\prime}$ in ${L}_{2,\mathrm{loc}}(0,T;{V}^{\ast})$. Similarly, v gives rise to ${v}_{m}$.

By continuity of inner products, the function $\left(u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)$ is measurable on $[0,T]$ for $u,v\in {L}_{2}(0,T;V)$, and ${\int}_{0}^{T}\left|\left(u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)\right|\phantom{\rule{0.166667em}{0ex}}dt<\infty $. Sesquilinearity yields $\left({u}_{m}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{v}_{m})\to \left(u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)$ in ${L}_{1}(0,T)$ for $m\to \infty $, while both $\langle {u}_{m}^{\prime},{v}_{m}\rangle \to \langle {u}^{\prime},v\rangle $ and $\langle {v}_{m}^{\prime},{u}_{m}\rangle \to \langle {v}^{\prime},u\rangle $ hold in ${L}_{2,\mathrm{loc}}(0,T)$, hence in ${\mathcal{D}}^{\prime}(0,T)$.

As differentiation is continuous in ${\mathcal{D}}^{\prime}(0,T)$, one finds from the ${C}^{1}$-case and (21) that

$$\begin{array}{c}\hfill \frac{d}{dt}\left(u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)=\underset{m}{lim}\frac{d}{dt}\left({u}_{m}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{v}_{m})=\underset{m}{lim}\left({u}_{m}^{\prime}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{v}_{m})+\underset{m}{lim}\overline{\left({v}_{m}^{\prime}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{u}_{m})}=\langle {u}^{\prime},v\rangle +\overline{\langle {v}^{\prime},u\rangle}.\end{array}$$

Taking $v=u$, the function $t\mapsto {\left|u\left(t\right)\right|}^{2}$ is seen to be in ${W}^{1,1}(0,T)\subset C\left([0,T]\right)$, and since any $u\in {H}^{1}(0,T;{V}^{\ast})$ is continuous in ${V}^{\ast}$ by Remark 4, one can also here obtain from Lemma III.1.4 in [10] that $u:[0,T]\to H$ is continuous. Similarly for v. ☐
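For later use, the special case $v=u$ can be written out explicitly; this is only a restatement of how the lemma specialises, using $z+\overline{z}=2\,\mathrm{Re}\,z$:

```latex
% Taking v = u in Lemma 2 and noting z + \overline{z} = 2 Re z:
\frac{d}{dt}\,|u(t)|^{2}
  = \langle u',u\rangle + \overline{\langle u',u\rangle}
  = 2\,\mathrm{Re}\,\langle u'(t),\,u(t)\rangle
  \quad \text{in } \mathcal{D}'(0,T).
```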

#### 3.1. Existence and Uniqueness

In our presentation the following result is a cornerstone, relying on the full framework in Section 2.1; in particular A need not be selfadjoint:

**Theorem**

**4.**

Let V be a separable Hilbert space with $V\subseteq H$ algebraically, topologically and densely, cf. (19) and (20), and let $\mathcal{A}:V\to {V}^{\ast}$ be the bounded Lax–Milgram operator induced by a V-elliptic sesquilinear form, cf. (25). When ${u}_{0}\in H$ and $f\in {L}_{2}(0,T;{V}^{\ast})$ are given, then (53) has a uniquely determined solution $u\left(t\right)$ belonging to the space

$$\begin{array}{c}\hfill X={L}_{2}(0,T;V)\bigcap C([0,T];H)\bigcap {H}^{1}(0,T;{V}^{\ast}).\end{array}$$

We omit a proof of this theorem, as it is a special case of a more general result of Lions and Magenes ([8], Section 3.4.4) on t-dependent forms $a(t;u,v)$. Clearly the conjunction of $u\in {L}_{2}(0,T;V)$ and ${u}^{\prime}\in {L}_{2}(0,T;{V}^{\ast})$, which appears in [8], is equivalent to the claim in (60) that u belongs to the intersection of ${L}_{2}(0,T;V)$ and ${H}^{1}(0,T;{V}^{\ast})$.

Alternatively one can use Theorem III.1.1 in Temam’s book [10], where a proof is given using Lemma 1 to reduce to the scalar differential equation ${\partial}_{t}\langle u,\eta \rangle +a(u,\eta )=\langle f,\eta \rangle $ in ${\mathcal{D}}^{\prime}(0,T)$, for $\eta \in V$, which is treated by Faedo–Galerkin approximation and basic functional analysis. His proof extends straightforwardly, from a specific triple $(H,V,a)$ for the Navier–Stokes equations, to the general set-up in Section 2.1, also when ${A}^{\ast}\ne A$.

However, either way, we need the finer theory described in the next two subsections.

#### 3.2. Well-Posedness

We now substantiate that the unique solution from Theorem 4 depends continuously on the data, so that (53) is well-posed in the sense of Hadamard. First we note that the solution in Theorem 4 is an element of the space X in (60), which is a Banach space when normed, as done throughout, by

$$\begin{array}{c}\hfill {\parallel u\parallel}_{X}={\left({\parallel u\parallel}_{{L}_{2}(0,T;V)}^{2}+\underset{0\le t\le T}{sup}{\left|u\left(t\right)\right|}^{2}+{\parallel u\parallel}_{{H}^{1}(0,T;{V}^{\ast})}^{2}\right)}^{1/2}.\end{array}$$

To clarify a redundancy in this choice, we need a Sobolev inequality for vector functions.

**Lemma**

**3.**

There is an inclusion ${L}_{2}(0,T;V)\cap {H}^{1}(0,T;{V}^{\ast})\subset C([0,T];H)$ and

$$\underset{0\le t\le T}{sup}{\left|u\left(t\right)\right|}^{2}\le (1+\frac{{C}_{2}^{2}}{{C}_{1}^{2}T}){\int}_{0}^{T}{\parallel u\parallel}^{2}\phantom{\rule{0.166667em}{0ex}}dt+{\int}_{0}^{T}{\parallel {u}^{\prime}\parallel}_{\ast}^{2}\phantom{\rule{0.166667em}{0ex}}dt.$$

**Proof.**

If u belongs to the intersection, the continuity follows from Lemma 2, where the formula gives ${\partial}_{t}{\left|u\right|}^{2}=2Re\langle \phantom{\rule{0.166667em}{0ex}}{u}^{\prime},\phantom{\rule{0.166667em}{0ex}}u\phantom{\rule{0.166667em}{0ex}}\rangle $. By Lemma 1, integration of both sides entails

$${\left|u\left(t\right)\right|}^{2}\le |u\left({t}_{0}\right){|}^{2}+{\int}_{0}^{T}{(\parallel u\parallel}^{2}+\parallel {u}^{\prime}{\parallel}_{\ast}^{2})\phantom{\rule{0.166667em}{0ex}}dt,$$

which by use of the Mean Value Theorem as in Remark 4 leads to the estimate. ☐
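The final step can be unwound as follows; this is a hedged reconstruction, in which $C_1$, $C_2$ are taken to be the embedding constants from (19) and (20), so that $|v|\le (C_2/C_1)\|v\|$ is assumed. The Mean Value Theorem for integrals supplies a suitable $t_0$:

```latex
% Choice of t_0 by the Mean Value Theorem for integrals (hedged sketch):
|u(t_0)|^{2} = \frac{1}{T}\int_{0}^{T} |u(t)|^{2}\,dt
            \le \frac{C_2^{2}}{C_1^{2}\,T}\int_{0}^{T}\|u(t)\|^{2}\,dt,
% which, inserted into the previous display, yields the lemma:
\sup_{0\le t\le T}|u(t)|^{2}
  \le \Bigl(1+\frac{C_2^{2}}{C_1^{2}T}\Bigr)\int_{0}^{T}\|u\|^{2}\,dt
    + \int_{0}^{T}\|u'\|_{*}^{2}\,dt.
```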

**Remark**

**5.**

In our solution set X in (60) one can safely omit the space $C([0,T];H)$, according to Lemma 3. Likewise $sup\left|u\right|$ can be removed from ${\parallel \cdot \parallel}_{X}$, as one just obtains an equivalent norm (similarly for the term ${\int}_{0}^{T}{\parallel u\left(t\right)\parallel}_{\ast}^{2}\phantom{\rule{0.166667em}{0ex}}dt$ in (7)). Thus X is more precisely a Hilbertable space; we omit this detail in the sequel for the sake of simplicity. However, we shall keep X as stated in order to emphasize the properties of the solutions.

The next result on stability is well known among experts, and while it may be derived from the abstract proofs in [8], we shall give a direct proof based on explicit estimates:

**Corollary**

**1.**

The unique solution u of (53), given by Theorem 4, depends continuously as an element of X on the data $(f,{u}_{0})\in {L}_{2}(0,T;{V}^{\ast})\oplus H$, i.e.,

$$\begin{array}{c}\hfill {\parallel u\parallel}_{X}^{2}\le c\left(|{u}_{0}{|}^{2}+{\parallel f\parallel}_{{L}_{2}(0,T;{V}^{\ast})}^{2}\right).\end{array}$$

That is, the solution operator $(f,{u}_{0})\mapsto u$ is a bounded linear map ${L}_{2}(0,T;{V}^{\ast})\oplus H\to X$.

**Proof.**

Clearly $u\in {L}_{2}(0,T;V)$ while the functions ${u}^{\prime},f$ and $\mathcal{A}u$ belong to ${L}_{2}(0,T;{V}^{\ast})$, so as an identity of integrable functions,

$$\begin{array}{c}\hfill Re\langle {\partial}_{t}u,u\rangle +Re\langle \mathcal{A}u,u\rangle =Re\langle f,u\rangle .\end{array}$$

Hence Lemma 2 and the V-ellipticity give

$$\begin{array}{c}\hfill {\partial}_{t}{\left|u\right|}^{2}+2{C}_{4}{\parallel u\parallel}^{2}\le 2\left|\langle f,u\rangle \right|\le {C}_{4}^{-1}{\parallel f\parallel}_{\ast}^{2}+{C}_{4}{\parallel u\parallel}^{2}.\end{array}$$

Using again that ${\left|u\left(t\right)\right|}^{2}$ and ${\partial}_{t}{\left|u\left(t\right)\right|}^{2}$ are in ${L}_{1}(0,T)$, taking $B=\mathbb{C}$ in Lemma 1 yields

$$\begin{array}{c}\hfill {\left|u\left(t\right)\right|}^{2}+{C}_{4}{\int}_{0}^{t}{\parallel u\left(s\right)\parallel}^{2}\phantom{\rule{0.166667em}{0ex}}ds\le |{u}_{0}{|}^{2}+{C}_{4}^{-1}{\parallel f\parallel}_{{L}_{2}(0,T;{V}^{\ast})}^{2}.\end{array}$$

For the first two contributions to the X-norm this gives

$$\begin{array}{cc}\hfill \underset{0\le t\le T}{sup}{\left|u\left(t\right)\right|}^{2}& \le |{u}_{0}{|}^{2}+{C}_{4}^{-1}{\parallel f\parallel}_{{L}_{2}(0,T;{V}^{\ast})}^{2},\hfill \end{array}$$

$$\begin{array}{cc}\hfill {\parallel u\parallel}_{{L}_{2}(0,T;V)}^{2}& \le {C}_{4}^{-1}|{u}_{0}{|}^{2}+{C}_{4}^{-2}{\parallel f\parallel}_{{L}_{2}(0,T;{V}^{\ast})}^{2}.\hfill \end{array}$$

Since u solves (53), it is clear that $\parallel {\partial}_{t}{u\left(t\right)\parallel}_{\ast}^{2}\le {(\parallel f\left(t\right)\parallel}_{\ast}+{\parallel \mathcal{A}u\parallel}_{\ast}{)}^{2}$, so we get

$$\begin{array}{c}\hfill {\int}_{0}^{T}\parallel {\partial}_{t}{u\left(t\right)\parallel}_{\ast}^{2}\phantom{\rule{0.166667em}{0ex}}dt\le 2{\int}_{0}^{T}{\parallel f\left(t\right)\parallel}_{\ast}^{2}\phantom{\rule{0.166667em}{0ex}}dt+{2\parallel \mathcal{A}\parallel}_{\mathbb{B}(V,{V}^{\ast})}^{2}{\int}_{0}^{T}{\parallel u\parallel}^{2}\phantom{\rule{0.166667em}{0ex}}dt,$$

which upon substitution of (69) altogether shows (64). ☐

#### 3.3. The First Order Solution Formula

We now supplement the well-posedness by a direct proof of the variation of constants formula, which requires that the extended Lax–Milgram operator $\mathcal{A}$ generates an analytic semigroup in ${V}^{\ast}$. This is known, cf. [9], but lacking a concise proof in the literature, we begin by analysing A in H:

**Lemma**

**4.**

For a V-elliptic Lax–Milgram operator A, both $-A$ and $-{A}^{\ast}$ have the sector Σ in (34) in their resolvent sets for $\theta =arccot({C}_{3}/{C}_{4})$ and they generate analytic semigroups on H. This holds verbatim for the extensions $-\mathcal{A}$ and $-{\mathcal{A}}^{\prime}$ in ${V}^{\ast}$.

**Proof.**

To apply Theorem 3, we let $\lambda \ne 0$ be given in the sector Σ for some angle θ satisfying $0<\theta <arccot({C}_{3}/{C}_{4})$. Then it is clear that $\delta =-sgn(Im\lambda )\theta $ or $\delta =0$ gives

$$Re\left({e}^{i\delta}\lambda \right)\ge 0.$$

In case $\delta \in \left\{\pm \theta \right\}$ a multiplication of the inequalities (28) by $-sin\delta $ yields

$$\begin{array}{c}\hfill -sin\delta Ima(u,u)\ge -{C}_{3}{C}_{4}^{-1}sin\theta Rea(u,u).\end{array}$$

In addition ${C}_{\theta}:={C}_{4}cos\theta -{C}_{3}sin\theta >0$, because $cot\theta >{C}_{3}{C}_{4}^{-1}$. So for $u\in D\left(A\right)$,

$$\begin{array}{cc}\hfill Re\left({e}^{i\delta}(a(u,u)+\lambda \left(u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}u))\right)& \ge Re\left({e}^{i\delta}a(u,u)\right)=cos\delta Rea(u,u)-sin\delta Ima(u,u)\hfill \\ & \ge (cos\theta -{C}_{3}{C}_{4}^{-1}sin\theta )Rea(u,u)\hfill \\ & \ge {C}_{\theta}{\parallel u\parallel}^{2}.\hfill \end{array}$$

This V-ellipticity holds also if $\delta =0$, cf. (71), so ${e}^{i\delta}(A+\lambda I)$ is in any case bijective; and so is $-A-\lambda I$.

To bound $-{(A+\lambda I)}^{-1}$, we see from (73) that for $u\in D\left(A\right)$,

$$\begin{array}{cc}\hfill \left|\lambda \right|\left(u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}u)& \le \left|\left((A+\lambda )u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}u)\right|+\left|a(u,u)\right|\le \left|\left((A+\lambda )u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}u)\right|+{C}_{3}{\parallel u\parallel}^{2}\hfill \\ & \le (1+{C}_{3}{C}_{\theta}^{-1})\left|\left((A+\lambda )u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}u)\right|.\hfill \end{array}$$

This implies (35) for $-A$. Since ${A}^{\ast}$ is the Lax–Milgram operator associated to the elliptic form ${a}^{\ast}$, the above also entails the statement for $-{A}^{\ast}$.

For $\mathcal{A}$ it follows at once from (73) that $Re\langle {e}^{i\delta}(\mathcal{A}+\lambda )u,u\rangle \ge {C}_{\theta}{\parallel u\parallel}^{2}$ for $u\in V$. Hence $R(\mathcal{A}+\lambda I)$ is closed in ${V}^{\ast}$, and it is also dense since $R(\mathcal{A}+\lambda I)\supset R(A+\lambda I)=H$ by the above; i.e., $\mathcal{A}+\lambda I$ is surjective. Mimicking (74), we get for $u\ne 0$, $\parallel w\parallel =1$, both in V,

$$\begin{array}{c}\hfill \left|\lambda \right|\cdot {\parallel u\parallel}_{\ast}\le \underset{w}{sup}\left|\langle (\mathcal{A}+\lambda )u,w\rangle \right|+{C}_{3}{C}_{\theta}^{-1}\left|\langle (\mathcal{A}+\lambda )u,\frac{1}{\parallel u\parallel}u\rangle \right|\le c{\parallel (\mathcal{A}+\lambda )u\parallel}_{\ast}.\end{array}$$

This yields injectivity of $\mathcal{A}+\lambda I$ and the resolvent estimate. ${\mathcal{A}}^{\prime}$ is covered through ${a}^{\ast}$. ☐

We denote by ${e}^{-t\mathcal{A}}$ the semigroup generated by $-\mathcal{A}$ on ${V}^{\ast}$, to distinguish it from ${e}^{-tA}$ on H. Analogously for ${e}^{-t{\mathcal{A}}^{\prime}}\in \mathbb{B}\left({V}^{\ast}\right)$. As $A\subset \mathcal{A}$ implies that ${(\mathcal{A}+\lambda I)}^{-1}{|}_{H}={(A+\lambda I)}^{-1}$, and since A and $\mathcal{A}$ have the same sector Σ by Lemma 4, the well-known Laplace transformation formula, cf. ([23], Theorem 1.7.7), yields the corresponding fact, say ${e}^{-t\mathcal{A}}{|}_{H}={e}^{-tA}$ for the semigroups:

**Lemma**

**5.**

For all $x\in H$ one has ${e}^{-t\mathcal{A}}x={e}^{-tA}x$ as well as ${e}^{-t{\mathcal{A}}^{\prime}}x={e}^{-t{A}^{\ast}}x$.

We could add that A and ${A}^{\ast}$ are dissipative, as $m\left(A\right)>0$, $m\left({A}^{\ast}\right)>0$ in H, so ${e}^{-tA}$, ${e}^{-t{A}^{\ast}}$ are contractions for $t\ge 0$ by the Lumer–Phillips theorem; cf. ([20], Corollary 14.12).

Using Lemmas 4 and 5, the announced formula results as an addendum to Theorem 4:

**Theorem**

**5.**

The unique solution u in X provided by Theorem 4 satisfies

$$u\left(t\right)={e}^{-tA}{u}_{0}+{\int}_{0}^{t}{e}^{-(t-s)\mathcal{A}}f\left(s\right)\phantom{\rule{0.166667em}{0ex}}ds\phantom{\rule{2.em}{0ex}}\mathrm{for}\phantom{\rule{4.pt}{0ex}}0\le t\le T,$$

where each of the three terms belongs to X.

**Proof.**

Once (76) has been shown, Theorem 4 applies in particular to cases with $f=0$, yielding that $u\left(t\right)$ and hence ${e}^{-tA}{u}_{0}$ belongs to X. For general data $(f,{u}_{0})$ this means that the last term containing f necessarily is a member of X too.

To derive Formula (76) in the present general context, one should note that all terms in the equation ${\partial}_{t}u+\mathcal{A}u=f$ belong to the space ${L}_{2}(0,T;{V}^{\ast})$. Therefore the operator ${e}^{-(T-t)\mathcal{A}}$ applies to both sides as an integration factor, yielding

$${e}^{-(T-t)\mathcal{A}}{\partial}_{t}u\left(t\right)+{e}^{-(T-t)\mathcal{A}}\mathcal{A}u\left(t\right)={e}^{-(T-t)\mathcal{A}}f\left(t\right).$$

Now ${e}^{-(T-t)\mathcal{A}}u\left(t\right)$ is in ${L}_{1}(0,T;{V}^{\ast})$, cf. the argument prior to (46). For its derivative in ${\mathcal{D}}^{\prime}(0,T;{V}^{\ast})$ the Leibniz rule in Proposition 3 gives, as $u\left(t\right)\in V=D\left(\mathcal{A}\right)$ for a.e. t,

$${\partial}_{t}\left({e}^{-(T-t)\mathcal{A}}u\left(t\right)\right)={e}^{-(T-t)\mathcal{A}}{\partial}_{t}u\left(t\right)+{e}^{-(T-t)\mathcal{A}}\mathcal{A}u\left(t\right).$$

As both terms on the right-hand side are in ${L}_{2}(0,T;{V}^{\ast})$, the implication (ii)⇒(i) in Lemma 1 gives

$${e}^{-(T-t)\mathcal{A}}u\left(t\right)={e}^{-T\mathcal{A}}{u}_{0}+{\int}_{0}^{t}{e}^{-(T-s)\mathcal{A}}f\left(s\right)\phantom{\rule{0.166667em}{0ex}}ds.$$

From this identity in $C([0,T];{V}^{\ast})$, Formula (76) results in case $t=T$ by evaluation, when also Lemma 5 is used for the term containing ${u}_{0}$. However, the above argument obviously applies to any subinterval $[0,{T}_{1}]\subset [0,T]$, whence (76) is valid for all t in $[0,T]$. ☐

Alternatively one could conclude by applying ${e}^{-(T-s)\mathcal{A}}={e}^{-(T-t)\mathcal{A}}{e}^{-(t-s)\mathcal{A}}$ in (79) and use the Bochner identity to commute ${e}^{-(T-t)\mathcal{A}}$ with the integral: as analytic semigroups like ${e}^{-(T-t)\mathcal{A}}$ are always injective, cf. Proposition 1, formula (76) then results at once.
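The variation of constants formula (76) can be illustrated numerically in finite dimensions, where the semigroup is a matrix exponential. The following is a hedged sketch, not the paper's setting: the matrix A, the data u0 and f, and the Taylor-series `expm` are all choices made for this illustration. For constant f the Duhamel integral evaluates in closed form, and the resulting u satisfies u' + Au = f up to discretisation error.

```python
import numpy as np

def expm(M, terms=60):
    """Plain Taylor-series matrix exponential; adequate for the
    small, moderate-norm matrices used in this illustration."""
    E = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        E += term
    return E

# Hypothetical finite-dimensional stand-in for the Lax-Milgram operator:
# any matrix with positive definite Hermitian part will do.
A  = np.array([[2.0, 1.0], [0.0, 3.0]], dtype=complex)
u0 = np.array([1.0, -1.0], dtype=complex)
f  = np.array([1.0, 2.0], dtype=complex)   # constant source term

def u(t):
    # For constant f the Duhamel integral has the closed form
    #   int_0^t e^{-(t-s)A} f ds = A^{-1} (I - e^{-tA}) f.
    E = expm(-t * A)
    return E @ u0 + np.linalg.solve(A, (np.eye(2) - E) @ f)

# Check u' + A u = f at a few interior times via central differences.
h = 1e-6
for t in (0.1, 0.3, 0.7):
    du = (u(t + h) - u(t - h)) / (2 * h)
    assert np.linalg.norm(du + A @ u(t) - f) < 1e-6
```

The same closed form is what the integration-factor argument above produces when A is a matrix; the check only confirms consistency of the formula, not any of the infinite-dimensional assertions.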

For later reference we show similarly the next inequality:

**Corollary**

**2.**

The solution ${e}^{-t\mathcal{A}}{u}_{0}$ to the problem with $f=0$ in Theorem 4 belongs to ${L}_{2}(0,T;V)$ and fulfils, for every ${u}_{0}\in H$,

$$\underset{0\le t\le T}{sup}(T-t)|{e}^{-t\mathcal{A}}{u}_{0}{|}^{2}\le {C}_{5}{\int}_{0}^{T}{\parallel {e}^{-t\mathcal{A}}{u}_{0}\parallel}^{2}\phantom{\rule{0.166667em}{0ex}}dt.$$

**Proof.**

It is seen from Theorem 5 that $u\left(t\right)={e}^{-t\mathcal{A}}{u}_{0}$ always is in ${L}_{2}(0,T;V)$, as a member of X. By taking scalar products with $(T-\cdot )u$ on both sides of the differential equation, one obtains in ${L}_{1}(0,T)$ the identity

$$(T-t)\langle \phantom{\rule{0.166667em}{0ex}}{u}^{\prime}\left(t\right),\phantom{\rule{0.166667em}{0ex}}u\left(t\right)\phantom{\rule{0.166667em}{0ex}}\rangle +(T-t)a(u\left(t\right),u\left(t\right))=0.$$

Taking real parts here, applying Lemma 2 to u and integrating partially on $[t,T]$, one obtains

$${\int}_{t}^{T}{\left|u\left(s\right)\right|}^{2}\phantom{\rule{0.166667em}{0ex}}ds-(T-t){\left|u\left(t\right)\right|}^{2}=-2{\int}_{t}^{T}(T-s)Rea(u\left(s\right),u\left(s\right))\phantom{\rule{0.166667em}{0ex}}ds.$$

By reorganising this, a crude estimate yields the result at once for ${C}_{5}=\frac{{C}_{2}}{{C}_{1}}+2T{C}_{3}$. ☐

#### 3.4. Non-Selfadjoint Dynamics

It is classical that ${e}^{-tA}{u}_{0}$ in (76) is a term that decays exponentially for $t\to \infty $ if A is self-adjoint and has compact inverse on H. This follows from the eigenfunction expansions, cf. the formulas in the introduction and Section 2.2, which imply for the “height” function $h\left(t\right)=|{e}^{-tA}{u}_{0}|$ that $h\left(t\right)=\mathcal{O}\left({e}^{-tRe{\lambda}_{1}}\right)$.

However, it is a much more precise dynamical property that $h\left(t\right)$ is a strictly convex function for ${u}_{0}\ne 0$ (we refer to [25] for a lucid account of convex functions). Strict convexity is established below for wide classes of non-self-adjoint A, namely if A is hyponormal or such that ${A}^{2}$ is accretive.

Moreover, it seems to be a novelty that the injectivity of ${e}^{-tA}$ provided by Proposition 1 implies the strict convexity. For simplicity we first explain this for the square $h{\left(t\right)}^{2}$.

Indeed, differentiating twice for $t>0$ one finds for $u={e}^{-tA}{u}_{0}$,

$${\left({h}^{2}\right)}^{\prime \prime}={(-2Re\left(A{e}^{-tA}{u}_{0}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}^{-tA}{u}_{0}))}^{\prime}=2Re\left({A}^{2}u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}u)+2\left(Au\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}Au).$$

In case ${A}^{2}$ is accretive, that is when $m\left({A}^{2}\right)\ge 0$, we may keep only the last term in (83) to get that ${\left({h}^{2}\right)}^{\prime \prime}\left(t\right)\ge 2{\left|A{e}^{-tA}{u}_{0}\right|}^{2}$, which for ${u}_{0}\ne 0$ implies ${\left({h}^{2}\right)}^{\prime \prime}>0$ as both A and ${e}^{-tA}$ are injective; cf. (34) and Proposition 1. Hence ${h}^{2}$ is strictly convex for $t>0$ if $m\left({A}^{2}\right)\ge 0$.
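For the reader's convenience, the two differentiations behind (83) can be spelled out, using only ${u}^{\prime}=-Au$ for $u={e}^{-tA}{u}_{0}$, $t>0$:

```latex
% First derivative of h(t)^2 = (u | u), with u = e^{-tA}u_0 and u' = -Au:
(h^{2})'(t) = (u'\,|\,u) + (u\,|\,u') = -2\,\mathrm{Re}\,(Au\,|\,u).
% Differentiating once more, with (Au)' = -A^{2}u:
(h^{2})''(t) = 2\,\mathrm{Re}\,(A^{2}u\,|\,u) + 2\,(Au\,|\,Au).
```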

Another case is when A is hyponormal. For an unbounded operator A this means, cf. the work of Janas [18], that

$$D\left(A\right)\subset D\left({A}^{\ast}\right)\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{1.em}{0ex}}|{A}^{\ast}u|\le |Au|\phantom{\rule{1.em}{0ex}}\mathrm{for}\phantom{\rule{4.pt}{0ex}}\mathrm{all}\phantom{\rule{4.pt}{0ex}}u\in D\left(A\right).$$

Note that if both A, ${A}^{\ast}$ are hyponormal, then A is normal. This is a quite general class, but it fits most naturally into the present discussion:

For hyponormal A we have $R\left({e}^{-tA}\right)\subset D\left(A\right)\subset D\left({A}^{\ast}\right)$, which shows that ${A}^{\ast}{e}^{-tA}{u}_{0}$ is defined. Using this and hyponormality once more in (83), we get

$${\left({h}^{2}\right)}^{\prime \prime}\left(t\right)\ge \left(Au\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{A}^{\ast}u)+\left({A}^{\ast}u\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}Au)+{\left|Au\right|}^{2}+|{A}^{\ast}{u|}^{2}={\left|(A+{A}^{\ast}){e}^{-tA}{u}_{0}\right|}^{2}.$$

Now ${\left({h}^{2}\right)}^{\prime \prime}>0$ follows for ${u}_{0}\ne 0$ from injectivity of ${e}^{-tA}$ and of $A+{A}^{\ast}$; the latter holds since $2{a}_{Re}$ is V-elliptic. So ${h}^{2}$ is also strictly convex for hyponormal A.

Also on the closed half-line with $t\ge 0$ there is a result on non-selfadjoint dynamics. Here we return to $h\left(t\right)$ itself and normalise, at no cost, to $|{u}_{0}|=1$ to get cleaner statements:

**Proposition**

**4.**

Let A denote a V-elliptic Lax–Milgram operator, defined from a triple $(H,V,a)$, such that A is hyponormal, as above, or such that ${A}^{2}$ is accretive, and let u be the solution from Theorem 4 for $f=0$ and $|{u}_{0}|=1$. Then $h\left(t\right)=\left|u\left(t\right)\right|$ is strictly decreasing and strictly convex for $t\ge 0$ and differentiable from the right at $t=0$ with

$${h}^{\prime}\left(0\right)=-Re\left(A{u}_{0}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{u}_{0})\phantom{\rule{2.em}{0ex}}\mathit{for}\phantom{\rule{4.pt}{0ex}}{u}_{0}\in D\left(A\right),$$

and generally

$${h}^{\prime}\left(0\right)\le -m\left(A\right).$$

**Remark**

**6.**

The derivative ${h}^{\prime}\left(0\right)$ might be $-\infty $ if ${u}_{0}\in H\setminus D\left(A\right)$.

**Proof.**

By the convexity shown above, ${\left({h}^{2}\right)}^{\prime}$ is increasing. Since $m\left(A\right)>0$ holds by the V-ellipticity, ${h}^{2}$ is strictly decreasing (and so is h) for $t>0$, as

$${\left({h}^{2}\right)}^{\prime}\left(t\right)=-2Re\left(A{e}^{-tA}{u}_{0}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}^{-tA}{u}_{0})\le -2m\left(A\right){\left|{e}^{-tA}{u}_{0}\right|}^{2}<0.$$

These properties give that ${h}^{\prime}={\left({h}^{2}\right)}^{\prime}/\left(2\sqrt{{h}^{2}}\right)$ is strictly increasing for $t>0$, so the Mean Value Theorem yields that $(h(t)-h(s))/(t-s)<(h(u)-h(t))/(u-t)$ for $0<s<t<u$; i.e., h is strictly convex on $\phantom{\rule{0.166667em}{0ex}}]0,\infty [\phantom{\rule{0.166667em}{0ex}}$.

The inequality $h\left(\right(1-\theta )t+\theta s)\le (1-\theta )h\left(t\right)+\theta h\left(s\right)$, $\theta \in \phantom{\rule{0.166667em}{0ex}}]0,1[\phantom{\rule{0.166667em}{0ex}}$ now extends by continuity to $t=0$. So does strict convexity of h, using twice that the slope function is increasing.

By convexity ${h}^{\prime}$ is increasing for $t>0$, so ${lim}_{t\to {0}^{+}}{h}^{\prime}\left(t\right)=inf{h}^{\prime}\ge -\infty $. For each $0<s<1$ the continuity of h yields $|{e}^{-tA}{u}_{0}|\ge s|{u}_{0}|=s$ for all sufficiently small $t\ge 0$. By the above formulas for ${h}^{\prime}$ and ${\left({h}^{2}\right)}^{\prime}$ we have ${h}^{\prime}\left(t\right)=-Re\left(A{e}^{-tA}{u}_{0}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}^{-tA}{u}_{0})/\left|{e}^{-tA}{u}_{0}\right|$, so the Mean Value Theorem gives for some $\tau \in \phantom{\rule{0.166667em}{0ex}}]0,t[\phantom{\rule{0.166667em}{0ex}}$,

$${t}^{-1}(h\left(t\right)-h\left(0\right))={h}^{\prime}\left(\tau \right)\le -m\left(A\right)s<0.$$

Hence $h\left(0\right)>h\left(t\right)$ for all $t>0$. Moreover, the limit of ${h}^{\prime}\left(\tau \right)$ was shown above to exist for $\tau \to {0}^{+}$, so ${h}^{\prime}\left(0\right)$ exists in $[-\infty ,-m\left(A\right)]$. If ${u}_{0}\in D\left(A\right)$ we may commute A with the semigroup in the formula for ${h}^{\prime}\left(\tau \right)$, which by continuity gives ${h}^{\prime}\left(0\right)=-Re\left(A{u}_{0}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{u}_{0})$. ☐

Proposition 4 is a stiffness result for $u={e}^{-tA}{u}_{0}$, due to strict convexity of $|{e}^{-tA}{u}_{0}|$. It is noteworthy that when $A\ne {A}^{\ast}$, then Proposition 4 gives conditions under which the eigenvalues in $\mathbb{C}\backslash \mathbb{R}$ (if any) never lead to oscillations in the size of the solution.
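The decay and convexity in Proposition 4 can be observed numerically in a matrix model. This is a hedged illustration with freely chosen numbers: the matrix below is non-normal but its square is accretive (the $m(A^2)>0$ hypothesis is verified in the code), and $h(t)=|e^{-tA}u_0|$ is sampled on a grid and tested for monotone decrease and discrete convexity.

```python
import numpy as np

def expm(M, terms=60):
    """Plain Taylor-series matrix exponential; adequate for these small matrices."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E += term
    return E

# An illustrative non-normal matrix whose square is accretive (all numbers
# are choices for this sketch, not taken from the paper):
A = np.array([[2.0, 1.0], [0.0, 3.0]])
A2 = A @ A
assert np.linalg.eigvalsh((A2 + A2.T) / 2).min() > 0   # m(A^2) > 0
u0 = np.array([0.0, 1.0])                               # |u0| = 1

# Sample h(t) = |e^{-tA} u0| and test monotonicity and convexity discretely.
ts = np.linspace(0.0, 2.0, 201)
h = np.array([np.linalg.norm(expm(-t * A) @ u0) for t in ts])

assert np.all(np.diff(h) < 0)       # strictly decreasing
assert np.all(np.diff(h, 2) > 0)    # positive second differences: convex
```

Positive second differences on a grid are of course only a discrete surrogate for strict convexity, but they match the conclusion of Proposition 4 for this example.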

**Remark**

**7.**

Since ${h}^{\prime}\left(0\right)$ is estimated in terms of the lower bound $m\left(A\right)$, it is the numerical range $\nu \left(A\right)$, rather than $\sigma \left(A\right)$, that controls short-time decay of the solutions ${e}^{-tA}{u}_{0}$.

**Remark**

**8.**

In Proposition 4 we note that when ${A}^{2}$ is accretive, i.e., $m\left({A}^{2}\right)\ge 0$, then A is necessarily sectorial with half-angle $\pi /4$; that is, $\nu \left(A\right)\subset \left\{z\in \mathbb{C}:\left|arg\left(z\right)\right|\le \pi /4\right\}$. This may be seen as in ([17], Lemma 3), where reduction to bounded operators was made in order to invoke the operator monotonicity of the square root.

**Remark**

**9.**

We take the opportunity to point out an error in ([17], Lemma 3), where it was incorrectly claimed that having half-angle $\pi /4$ also is sufficient for $m\left({A}^{2}\right)\ge 0$. A counter-example is available already for A in $\mathbb{B}\left(H\right)$ (if $dimH\ge 2$), as $A=X+iY$ for self-adjoint X, $Y\in \mathbb{B}\left(H\right)$: here $m\left(A\right)\ge 0$ if and only if $X\ge 0$, and we can even arrange that A has half-angle $\pi /4$, that is $|Im\left(Av\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)|\le Re\left(Av\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)$ or $|\left(Yv\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)|\le \left(Xv\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)$, by designing Y so that $-X\le Y\le X$. Here we may take $Y=\delta X+{\lambda}_{1}U$, where $\delta >0$ is small enough and U is a partial isometry that interchanges two eigenvectors ${v}_{1}$, ${v}_{2}$ of X with eigenvalues ${\lambda}_{2}>{\lambda}_{1}>0$, while $U=0$ on $H\ominus span({v}_{1},{v}_{2})$. In fact, writing $v={c}_{1}{v}_{1}+{c}_{2}{v}_{2}+{v}_{\perp}$ for ${v}_{\perp}\in H\ominus span({v}_{1},{v}_{2})$, since ${v}_{1}\perp {v}_{2}$, the above inequalities for Y are equivalent to $2{\lambda}_{1}|Re\left({c}_{1}{\overline{c}}_{2}\right)|\le {\lambda}_{1}(1-\delta )|{c}_{1}{|}^{2}+(1-\delta ){\lambda}_{2}{\left|{c}_{2}\right|}^{2}+(1-\delta )\left(X{v}_{\perp}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{v}_{\perp})$, which by the positivity of X and Young’s inequality is implied by $1/(1-\delta )\le (1-\delta )\frac{{\lambda}_{2}}{{\lambda}_{1}}$, that is, if $0<\delta \le 1-\sqrt{{\lambda}_{1}/{\lambda}_{2}}$.
Now, $m\left({A}^{2}\right)\ge 0$ if and only if ${\left|Xv\right|}^{2}\ge {\left|Yv\right|}^{2}$ for all v in H, but this will always be violated, as one can see from ${\left|Yv\right|}^{2}={\delta}^{2}{\left|Xv\right|}^{2}+{\lambda}_{1}^{2}{\left|Uv\right|}^{2}+2\delta {\lambda}_{1}Re\left(Xv\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}Uv)$ by inserting $v={v}_{1}$, for the last term drops out as ${v}_{1}\perp {v}_{2}=U{v}_{1}$, so that actually $|Y{v}_{1}{|}^{2}=({\delta}^{2}+1){\lambda}_{1}^{2}>{\left|X{v}_{1}\right|}^{2}$. Thus $A=\left(\begin{array}{cc}\lambda & 0\\ 0& 4\lambda \end{array}\right)+i\lambda \left(\begin{array}{cc}\delta & 1\\ 1& 4\delta \end{array}\right)$ is a counter-example in ${\mathbb{C}}^{2}$ for any $\lambda >0$, $0<\delta \le 1/2$.
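The explicit $2\times 2$ counter-example above lends itself to a direct numerical check. The sketch below takes $\lambda =1$ and the admissible choice $\delta =0.4$ (any $0<\delta \le 1/2$ works here, since ${\lambda}_{1}/{\lambda}_{2}=1/4$), and verifies both that the half-angle condition $-X\le Y\le X$ holds and that the Hermitian part ${X}^{2}-{Y}^{2}$ of ${A}^{2}$ is indefinite:

```python
import numpy as np

# Counter-example data: lambda = 1, delta = 0.4 (admissible: 0 < delta <= 1/2).
lam, delta = 1.0, 0.4
X = np.diag([lam, 4 * lam])                           # eigenvalues lambda_1 < lambda_2
Y = lam * np.array([[delta, 1.0], [1.0, 4 * delta]])
A = X + 1j * Y

# Half-angle pi/4, i.e. |Im(Av|v)| <= Re(Av|v) for all v, amounts to -X <= Y <= X:
assert np.linalg.eigvalsh(X - Y).min() > 0
assert np.linalg.eigvalsh(X + Y).min() > 0

# Yet A^2 is not accretive: its Hermitian part equals X^2 - Y^2, which is indefinite.
A2 = A @ A
A2_herm = (A2 + A2.conj().T) / 2
assert np.allclose(A2_herm, X @ X - Y @ Y)
assert np.linalg.eigvalsh(A2_herm).min() < 0
```

This only confirms the finite-dimensional counter-example; the operator-theoretic statements are as in the remark.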

**Remark**

**10.**

It is perhaps useful to emphasize the benefit from joining the two methods. Within semigroup theory the “mild solution” given in (76) is the only possible solution to (53); but as our class of solutions is larger, the extension of the old uniqueness argument in Theorem 5 was needed. Existence of a solution is for analytic semigroups classical if $f:[0,T]\to H$ is Hölder continuous, cf. ([23], Corollary 4.3.3). Using functional analysis, this gap to the weaker condition $f\in {L}_{2}(0,T;{V}^{\ast})$ is bridged by Theorem 5, which states that the mild solution is indeed the solution in the space of vector distributions in Theorem 4; albeit at the expense that the generator A is a V-elliptic Lax–Milgram operator.

## 4. Abstract Final Value Problems

In this section, we show for a Lax–Milgram operator $\mathcal{A}$ that the final value problem

$$\left\{\begin{array}{ccc}\hfill {\partial}_{t}u+\mathcal{A}u& =f\hfill & \phantom{\rule{1.em}{0ex}}\phantom{\rule{4.pt}{0ex}}\mathrm{in}\phantom{\rule{4.pt}{0ex}}{\mathcal{D}}^{\prime}(0,T;{V}^{\ast}),\hfill \\ \hfill u\left(T\right)& ={u}_{T}\hfill & \phantom{\rule{1.em}{0ex}}\phantom{\rule{4.pt}{0ex}}\mathrm{in}\phantom{\rule{4.pt}{0ex}}H,\hfill \end{array}\right.$$

is well-posed when the final data belong to an appropriate space, to be identified below. This is obtained via comparison with the initial value problem treated in Section 3.

#### 4.1. A Bijection From Initial to Terminal States

According to Theorem 4, the solutions to the differential equation ${u}^{\prime}+\mathcal{A}u=f$ are for fixed f parametrised by the initial states $u\left(0\right)\in H$. To study the terminal states $u\left(T\right)$ we note that (76) yields

$$\begin{array}{c}\hfill u\left(T\right)={e}^{-TA}u\left(0\right)+{\int}_{0}^{T}{e}^{-(T-s)\mathcal{A}}f\left(s\right)\phantom{\rule{0.166667em}{0ex}}ds.\end{array}$$

This representation of $u\left(T\right)$ is essential in what follows, as it gives a bijective correspondence $u\left(0\right)\leftrightarrow u\left(T\right)$ between the initial and terminal states, as accounted for below.

First we analyse the integral term above by introducing the yield map $f\mapsto {y}_{f}$ given by

$$\begin{array}{c}\hfill {y}_{f}={\int}_{0}^{T}{e}^{-(T-s)\mathcal{A}}f\left(s\right)\phantom{\rule{0.166667em}{0ex}}ds,\phantom{\rule{2.em}{0ex}}f\in {L}_{2}(0,T;{V}^{\ast}).\end{array}$$

Clearly ${y}_{f}$ is a vector in ${V}^{\ast}$ by definition of the integral (and since $C([0,T];{V}^{\ast})\subset {L}_{1}(0,T;{V}^{\ast})$). But actually it is in the smaller space H, for ${y}_{f}=u\left(T\right)$ holds in H when u is the solution for ${u}_{0}=0$ of (53), and then Corollary 1 yields an estimate of ${sup}_{t\in [0,T]}\left|u\left(t\right)\right|$ by the ${L}_{2}$-norm of f; cf. (61). In particular, we have

$$\begin{array}{c}\hfill |{y}_{f}{|\le c\parallel f\parallel}_{{L}_{2}(0,T;{V}^{\ast})}.\end{array}$$

Moreover, $f\mapsto {y}_{f}$ is by (93) bounded ${L}_{2}(0,T;{V}^{\ast})\to H$, and it has dense range in H, containing all $x\in D\left({e}^{\epsilon A}\right)$ for every $\epsilon >0$: if in (92) we insert the piecewise continuous function

$${f}_{\epsilon}\left(s\right)={\mathbf{1}}_{[T-\epsilon ,T]}\left(s\right){e}^{(T-\epsilon -s)A}\left(\frac{1}{\epsilon}{e}^{\epsilon A}x\right),$$

then the semigroup property gives ${y}_{{f}_{\epsilon}}={\int}_{T-\epsilon}^{T}{e}^{-\epsilon A}\left(\frac{1}{\epsilon}{e}^{\epsilon A}x\right)\phantom{\rule{0.166667em}{0ex}}ds=\frac{1}{\epsilon}{\int}_{T-\epsilon}^{T}x\phantom{\rule{0.166667em}{0ex}}ds=x$. However, standard operator theory gives the optimal result, that is, surjectivity:

**Proposition**

**5.**

The yield map $f\mapsto {y}_{f}$ is in $\mathbb{B}({L}_{2}(0,T;{V}^{\ast}),H)$ and it is surjective, $R\left({y}_{f}\right)=H$. Its adjoint in $\mathbb{B}(H,{L}_{2}(0,T;V))$ is the orbit map given by $v\mapsto {e}^{-(T-\xb7){A}^{\ast}}v$.

**Proof.**

To determine the adjoint of $f\mapsto {y}_{f}$, we first calculate for $f\in {L}_{2}(0,T;H)$, so that the integrand in (92) belongs to $C([0,T];H)$. For $v\in H$ we get, using the Bochner identity twice,

$$\left({y}_{f}\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)={\int}_{0}^{T}\left({e}^{-(T-s)A}f\left(s\right)\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}v)\phantom{\rule{0.166667em}{0ex}}ds={\int}_{0}^{T}\left(f\left(s\right)\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}^{-(T-s){A}^{\ast}}v)\phantom{\rule{0.166667em}{0ex}}ds=\langle \phantom{\rule{0.166667em}{0ex}}f,\phantom{\rule{0.166667em}{0ex}}{e}^{-(T-s){A}^{\ast}}v\phantom{\rule{0.166667em}{0ex}}\rangle .$$

The last scalar product makes sense because $s\mapsto {e}^{-(T-s){A}^{\ast}}v$ is in ${L}_{2}(0,T;V)$, as seen by applying Corollary 2 to the Lax–Milgram operator ${A}^{\ast}$, and ${L}_{2}(0,T;V)$ is the dual space to ${L}_{2}(0,T;{V}^{\ast})$; cf. Remark 11 below. Since ${L}_{2}(0,T;H)$ is dense in ${L}_{2}(0,T;{V}^{\ast})$, it follows by closure that the left- and right-hand sides are equal for every $f\in {L}_{2}(0,T;{V}^{\ast})$ and $v\in H$. Hence $v\mapsto {e}^{-(T-\cdot ){A}^{\ast}}v$ is the adjoint of $f\mapsto {y}_{f}$.
Applying Corollary 2 to ${A}^{\ast}$ for $t=0$, a change of variables yields for every $v\in H$,
This estimate from below of the adjoint is equivalent to closedness of the range of ${y}_{f}$, as the range is dense by (94). This follows from the Closed Range Theorem; cf. ([26], Theorem 3.1) for a general result on this. ☐

$${\left|v\right|}^{2}\le \frac{{C}_{5}}{T}{\int}_{0}^{T}{\parallel {e}^{-(T-s){A}^{\ast}}v\parallel}^{2}\phantom{\rule{0.166667em}{0ex}}ds.$$
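The adjoint relation in the proof can be illustrated numerically in a hypothetical diagonal model, with $H=\mathbb{C}^{N}$ and $A=\mathrm{diag}(\lambda_{j})$ for sample eigenvalues with $\mathrm{Re}\,\lambda_{j}>0$; the source term f and the vector v below are arbitrary illustrative choices, not taken from the article. A minimal Python sketch:

```python
import cmath
import random

# Hypothetical diagonal model: H = C^N, A = diag(lam_j) with Re lam_j > 0,
# so e^{-tA} acts mode-wise as multiplication by e^{-t lam_j}.
random.seed(0)
N, T, M = 5, 1.0, 2000
lam = [complex(1 + j, random.uniform(-2, 2)) for j in range(N)]
v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

def f(s):  # an arbitrary smooth source term, chosen only for illustration
    return [cmath.exp(1j * (j + 1) * s) for j in range(N)]

def inner(x, y):  # (x | y) = sum_j x_j * conj(y_j)
    return sum(a * b.conjugate() for a, b in zip(x, y))

h = T / M
s_pts = [k * h for k in range(M + 1)]

def trap(vals):  # trapezoidal quadrature over [0, T]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# y_f = int_0^T e^{-(T-s)A} f(s) ds, computed mode by mode
yf = [trap([cmath.exp(-(T - s) * lam[j]) * f(s)[j] for s in s_pts])
      for j in range(N)]
lhs = inner(yf, v)

# rhs: int_0^T ( f(s) | e^{-(T-s)A^*} v ) ds, with A^* = diag(conj(lam_j))
rhs = trap([inner(f(s), [cmath.exp(-(T - s) * l.conjugate()) * vj
                         for l, vj in zip(lam, v)]) for s in s_pts])
print(abs(lhs - rhs))  # close to 0
```

Since the identity $(e^{-(T-s)A}f(s)\,|\,v)=(f(s)\,|\,e^{-(T-s)A^{\ast}}v)$ holds pointwise in s, the two quadratures agree up to rounding.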

**Remark**

**11.**

The Banach spaces $L_{2}(0,T;V)$ and $L_{2}(0,T;V^{\ast})$ are in duality, and $L_{2}(0,T;V)^{\ast}$ identifies with $L_{2}(0,T;V^{\ast})$: for each $\Lambda\in L_{2}(0,T;V)^{\ast}$ the inner product $a_{Re}$ and Riesz' theorem yield an $h\in L_{2}(0,T;V)$ such that every $g\in L_{2}(0,T;V)$ fulfils $\langle\,\Lambda,\,g\,\rangle=\int_{0}^{T}a_{Re}(h,g)\,dt$; so $\langle\,\Lambda,\,g\,\rangle=\int_{0}^{T}\langle\, f,\,g\,\rangle\,dt$ for $f=\frac{1}{2}(\mathcal{A}+\mathcal{A}^{\prime})h$ in $L_{2}(0,T;V^{\ast})$; cf. (23) and (25).

The surjectivity of ${y}_{f}$ can be shown in important cases using an explicit construction, which is of interest in control theory (cf. Remark 12), and given here for completeness:

**Proposition**

**6.**

If ${A}^{\ast}=A$ and ${A}^{-1}$ is compact, every $v\in H$ equals ${y}_{f}$ for some computable $f\in {L}_{2}(0,T;{V}^{\ast})$.

**Proof.**

Fact 1 yields an orthonormal basis $(e_{n})_{n\in\mathbb{N}}$ so that $Ae_{n}=\lambda_{n}e_{n}$; hence any v in H fulfils $v=\sum_{j}\alpha_{j}e_{j}$ with $\sum_{j}|\alpha_{j}|^{2}<\infty$. By Fact 2 every $f\in L_{2}(0,T;V^{\ast})$ has an expansion

$$f(t)=\sum_{j=1}^{\infty}\beta_{j}(t)e_{j}=\sum_{j=1}^{\infty}\langle f(t),e_{j}\rangle e_{j},$$

converging in $V^{\ast}$ for a.e. t. Since $e^{-(T-s)\mathcal{A}}e_{j}=e^{-(T-s)\lambda_{j}}e_{j}$, cf. Remark 2, such f fulfil

$$y_{f}=\int_{0}^{T}e^{-(T-s)\mathcal{A}}f(s)\,ds=\sum_{j=1}^{\infty}e^{-T\lambda_{j}}\Big(\int_{0}^{T}\beta_{j}(s)e^{s\lambda_{j}}\,ds\Big)e_{j}.$$

Hence $y_{f}=v$ is equivalent to the validity of $\int_{0}^{T}\beta_{j}(s)e^{s\lambda_{j}}\,ds=\alpha_{j}e^{T\lambda_{j}}$ for $j\in\mathbb{N}$. So if, in terms of some $\theta_{j}\in\,]0,1[\,$ to be determined, we take the coefficients of $f(t)$ as

$$\beta_{j}(t)=k_{j}\mathbf{1}_{[\theta_{j}T,T]}(t)\exp\big(t(\sqrt{\lambda_{j}}-\lambda_{j})\big),$$

then the condition will be satisfied if and only if $k_{j}=\alpha_{j}e^{T\lambda_{j}}\sqrt{\lambda_{j}}\,\big(e^{T\sqrt{\lambda_{j}}}-e^{\theta_{j}T\sqrt{\lambda_{j}}}\big)^{-1}$.

Moreover, using the equivalent norm $|||\cdot|||_{\ast}$ on $V^{\ast}$ in Fact 2,

$$\parallel f\parallel_{L_{2}(0,T;V^{\ast})}^{2}=\int_{0}^{T}|||f(t)|||_{\ast}^{2}\,dt=\sum_{j=1}^{\infty}\lambda_{j}^{-1}\int_{0}^{T}|\beta_{j}(t)|^{2}\,dt.$$

Therefore f is in $L_{2}(0,T;V^{\ast})$ whenever $\int_{0}^{T}|\beta_{j}|^{2}\,dt\le C\lambda_{j}|\alpha_{j}|^{2}$ holds eventually for some $C>0$, and here a direct calculation gives

$$\int_{0}^{T}\frac{|\beta_{j}|^{2}}{|k_{j}|^{2}}\,dt=\frac{e^{2T(\sqrt{\lambda_{j}}-\lambda_{j})}-e^{2\theta_{j}T(\sqrt{\lambda_{j}}-\lambda_{j})}}{2\sqrt{\lambda_{j}}-2\lambda_{j}}=\frac{e^{2T\sqrt{\lambda_{j}}}\big(e^{2T(1-\theta_{j})(\lambda_{j}-\sqrt{\lambda_{j}})}-1\big)}{2e^{2T\lambda_{j}}(\lambda_{j}-\sqrt{\lambda_{j}})}.$$

So in view of the expression for $k_{j}$, the quadratic integrability of f follows if the $\theta_{j}$ can be chosen so that the above numerator is estimated by $C(\lambda_{j}-\sqrt{\lambda_{j}})\big(e^{T\sqrt{\lambda_{j}}}-e^{\theta_{j}T\sqrt{\lambda_{j}}}\big)^{2}$ with C independent of $j\ge J$ for a suitable J, or more simply if

$$e^{2T(1-\theta_{j})(\lambda_{j}-\sqrt{\lambda_{j}})}-1\le C(\lambda_{j}-\sqrt{\lambda_{j}})\big(1-e^{-(1-\theta_{j})T\sqrt{\lambda_{j}}}\big)^{2}.$$

We may take J so that $\lambda_{j}>3$ for all $j\ge J$, since at most finitely many eigenvalues do not fulfil this. Then $\theta_{j}:=1-(\lambda_{j}-\sqrt{\lambda_{j}})^{-1}$ belongs to $\,]0,1[\,$, and the above is reduced to

$$\exp(2T)-1\le C(\lambda_{j}-\sqrt{\lambda_{j}})\Big(1-\exp\big(-\tfrac{T}{\sqrt{\lambda_{j}}-1}\big)\Big)^{2}.$$

Applying the Mean Value Theorem to exp on $[-\frac{T}{\sqrt{\lambda_{j}}-1},0]$, we obtain the inequality

$$(\lambda_{j}-\sqrt{\lambda_{j}})\Big(1-\exp\big(\tfrac{-T}{\sqrt{\lambda_{j}}-1}\big)\Big)^{2}\ge\exp\big(-\tfrac{2T}{\sqrt{3}-1}\big)\frac{T^{2}\lambda_{j}}{\lambda_{j}-\sqrt{\lambda_{j}}}>\exp(-4T)\,T^{2}>0.$$

Hence (103) is fulfilled for $C=\exp(6T)/T^{2}$. ☐
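The single-mode construction in the proof can be checked numerically; the values $T=1$, $\lambda=50$ and $\alpha=0.3$ below are arbitrary sample choices (a sketch under these assumptions, not part of the article). The moment condition $\int_{0}^{T}\beta_{j}(s)e^{s\lambda_{j}}\,ds=\alpha_{j}e^{T\lambda_{j}}$ with the stated $k_{j}$ is confirmed by quadrature:

```python
import math

# Numerical check of the construction in Proposition 6 for a single mode j,
# with hypothetical eigenvalue lam and target coefficient alpha.
T, lam, alpha = 1.0, 50.0, 0.3
r = math.sqrt(lam)
theta = 1.0 - 1.0 / (lam - r)          # theta_j in ]0,1[ since lam > 3
k = alpha * math.exp(T * lam) * r / (math.exp(T * r) - math.exp(theta * T * r))

def beta(t):  # coefficient beta_j, supported on [theta*T, T]
    return k * math.exp(t * (r - lam)) if t >= theta * T else 0.0

# Simpson quadrature of int_0^T beta(s) e^{s lam} ds over [theta*T, T]
M = 4000  # even number of subintervals
a, b = theta * T, T
h = (b - a) / M
vals = [beta(a + i * h) * math.exp((a + i * h) * lam) for i in range(M + 1)]
integral = h / 3 * (vals[0] + vals[-1]
                    + 4 * sum(vals[1:-1:2]) + 2 * sum(vals[2:-1:2]))

target = alpha * math.exp(T * lam)     # required moment: alpha_j e^{T lam_j}
print(abs(integral - target) / target)  # small relative error
```

All quadrature terms are positive, so no cancellation occurs and the relative error stays near machine precision.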

**Remark**

**12.**

In the above proof $\operatorname{supp}\beta_{j}\subset[\theta_{j}T,T]$, so the given v can be attained by $y_{f}$ by activating the coefficients $\beta_{j}$ in one dimension after another as time approaches T; indeed, $\theta_{j}\nearrow 1$ in (99) when the eigenvalues are counted so that $\lambda_{j}\nearrow\infty$. The activation can even be postponed to any given $T_{0}<T$, for $\operatorname{supp}\beta_{j}\subset[T_{0},T]$ holds whenever $\theta_{j}T\ge T_{0}$, and for the finitely many remaining j we may reset $\theta_{j}=T_{0}/T$ and adjust the $k_{j}$ accordingly. Both themes may be of interest in infinite-dimensional control theory.

In order to isolate $u\left(0\right)$ in (91), it will of course be decisive that the operator ${e}^{-TA}$ has an inverse, as was shown for general analytic semigroups in Proposition 1.

For our Lax–Milgram operator A with analytic semigroup ${e}^{-tA}$ generated by $\mathbf{A}=-A$, it is the symbol ${e}^{tA}$ that denotes the inverse, consistent with the sign convention in (41). Hence the properties of ${e}^{tA}$ can be read off from Proposition 2, where (43) gives

$${e}^{-tA}{e}^{TA}\subset {e}^{(T-t)A}\phantom{\rule{2.em}{0ex}}\phantom{\rule{4.pt}{0ex}}\mathrm{for}\phantom{\rule{4.pt}{0ex}}0\le t\le T.$$

Moreover, it is decisive for the interpretation of the compatibility conditions in Section 4.2 below to know that the domain inclusions in Proposition 2 are strict. We include a mild sufficient condition along with a characterisation of the domain $D\left({e}^{tA}\right)$.

**Proposition**

**7.**

If H has an orthonormal basis of eigenvectors $(e_{j})_{j\in\mathbb{N}}$ of A so that the corresponding eigenvalues fulfil $\operatorname{Re}\lambda_{j}\to\infty$ for $j\to\infty$, then the inclusions in (44) are both strict, and $D(e^{tA})$ is the completion of $\operatorname{span}(e_{j})_{j\in\mathbb{N}}$ with respect to the graph norm,

$$\parallel x\parallel_{D(e^{tA})}^{2}=\sum_{j=1}^{\infty}\big(1+e^{2\operatorname{Re}\lambda_{j}t}\big)\,|(x\,|\,e_{j})|^{2}.$$

The domain $D(e^{tA})$ equals the subspace $S\subset H$ in which the right-hand side is finite.

**Proof.**

If $x\in S$ the vector $v={\sum}_{j=1}^{\infty}{e}^{{\lambda}_{j}t}\left(x\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{e}_{j}){e}_{j}$ is well defined in H, and with methods from Remark 2 it follows that ${e}^{-tA}v=x$; i.e., $x\in D\left({e}^{tA}\right)$.

Conversely, for $x\in D(e^{tA})$ there is a vector $y\in H$ such that $x=e^{-tA}y=\sum_{j=1}^{\infty}(y\,|\,e_{j})e^{-t\lambda_{j}}e_{j}$. That is, $e^{\lambda_{j}t}(x\,|\,e_{j})=(y\,|\,e_{j})\in\ell_{2}$, so $x\in S$. Then $|e^{tA}x|^{2}=\sum_{j}e^{2\operatorname{Re}\lambda_{j}t}|(x\,|\,e_{j})|^{2}$ yields (106).

Now any $x\in D(e^{t^{\prime}A})$ is also in $D(e^{tA})$ for $t<t^{\prime}$, since $\operatorname{Re}\lambda_{j}>0$ holds in (106) for all j by V-ellipticity. As $\operatorname{Re}\lambda_{j}\to\infty$, we may choose a subsequence so that $\operatorname{Re}\lambda_{j_{n}}>n$ and set

$$x=\sum_{n=1}^{\infty}\frac{1}{n}e^{-\lambda_{j_{n}}t}e_{j_{n}}.$$

Here $x\in D(e^{tA})$, as it is in S by construction for $t\ge 0$; but not in $D(e^{t^{\prime}A})$ for $t^{\prime}>t$, as

$$\sum_{j=1}^{\infty}e^{2\operatorname{Re}\lambda_{j}t^{\prime}}|(x\,|\,e_{j})|^{2}=\sum_{n=1}^{\infty}e^{2\operatorname{Re}\lambda_{j_{n}}(t^{\prime}-t)}\frac{1}{n^{2}}>\sum_{n=1}^{\infty}\frac{e^{2n(t^{\prime}-t)}}{n^{2}}=\infty.$$

Furthermore, using orthogonality, it follows for any $x\in D(e^{tA})$ that, for $N\to\infty$,

$$\Big\|x-\sum_{j\le N}(x\,|\,e_{j})e_{j}\Big\|_{D(e^{tA})}^{2}=\sum_{j>N}\big(1+e^{2\operatorname{Re}\lambda_{j}t}\big)|(x\,|\,e_{j})|^{2}\to 0.$$

Hence the space $D(e^{tA})$ has $\operatorname{span}(e_{j})_{j\in\mathbb{N}}$ as a dense subspace. That is, the completion of the latter with respect to the graph norm identifies with the former. ☐
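The strictness argument can be made concrete in a hypothetical diagonal model with real eigenvalues $\lambda_{n}=n^{2}$ (so $\operatorname{Re}\lambda_{j}\to\infty$); the times t and t' below are illustrative sample values. Partial sums of the graph norm (106) for the constructed vector x stabilise at t but blow up at $t^{\prime}>t$:

```python
import math

# Illustration of Proposition 7 in a hypothetical diagonal model:
# eigenvalues lam_n = n^2, and x = sum_n (1/n) e^{-lam_n t} e_n,
# i.e. (x | e_n) = e^{-lam_n t} / n.
t, tp = 0.10, 0.12   # tp > t, both illustrative

def graph_norm_sq(s, N):
    """Partial sum of sum_n (1 + e^{2 lam_n s}) |(x|e_n)|^2, cf. (106)."""
    total = 0.0
    for n in range(1, N + 1):
        lam = n * n
        # (1 + e^{2 lam s}) e^{-2 lam t} = e^{-2 lam t} + e^{2 lam (s - t)}
        total += (math.exp(-2 * lam * t) + math.exp(2 * lam * (s - t))) / n**2
    return total

# At s = t the partial sums stabilise: x is in D(e^{tA}).
print(graph_norm_sq(t, 50), graph_norm_sq(t, 200))
# At s = tp > t they grow without bound: x is not in D(e^{tp A}).
print(graph_norm_sq(tp, 40), graph_norm_sq(tp, 80))
```

Rewriting the summand as $e^{-2\lambda_{n}t}+e^{2\lambda_{n}(s-t)}$ avoids floating-point overflow for moderate N.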

After this study of the map $y_{f}$, the injectivity of the operator $e^{-tA}$ and the domain $D(e^{tA})$, cf. Propositions 1, 2, 5 and 7, we address the final value problem (90) by solving (91) for the vector $u(0)$. This is done by considering the map

$$u(0)\mapsto e^{-TA}u(0)+y_{f}.$$

This is composed of the bijection $e^{-TA}$ and a translation by the vector $y_{f}$, hence is bijective from H to the affine space $R(e^{-TA})+y_{f}$. In fact, using (41), inversion gives

$$u(0)=e^{TA}\Big(u(T)-\int_{0}^{T}e^{-(T-s)\mathcal{A}}f(s)\,ds\Big)=e^{TA}\big(u(T)-y_{f}\big).$$
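The inversion formula can be sanity-checked in a hypothetical diagonal model in which the Duhamel term has a closed form; the eigenvalues and the constant-in-time source below are sample values, not from the article. A short Python sketch:

```python
import math
import random

# Sanity check of u(0) = e^{TA}(u(T) - y_f) in a hypothetical diagonal
# model: A = diag(lam_j), and f(s) = c constant in time.
random.seed(1)
T = 1.0
lam = [1.0, 4.0, 9.0]                 # sample eigenvalues, Re lam_j > 0
u0 = [random.gauss(0, 1) for _ in lam]
c = [random.gauss(0, 1) for _ in lam]

# Variation-of-constants formula (91): u(T) = e^{-TA} u(0) + y_f, where
# y_f = int_0^T e^{-(T-s)A} c ds = (1 - e^{-T lam_j}) c_j / lam_j per mode.
yf = [(1 - math.exp(-T * l)) * b / l for l, b in zip(lam, c)]
uT = [math.exp(-T * l) * a + y for l, a, y in zip(lam, u0, yf)]

# Recover the initial state: u(0) = e^{TA} (u(T) - y_f).
u0_rec = [math.exp(T * l) * (x - y) for l, x, y in zip(lam, uT, yf)]
print(max(abs(a - b) for a, b in zip(u0, u0_rec)))  # close to 0
```

For larger $\lambda_{j}T$ the factor $e^{T\lambda_{j}}$ amplifies rounding errors in the difference $u_{T}-y_{f}$, a finite-dimensional shadow of the ill-posedness that the compatibility condition below addresses.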

This may be summed up thus:

**Theorem**

**6.**

For the set of solutions u in X of the differential equation $({\partial}_{t}+\mathcal{A})u=f$ with fixed data $f\in {L}_{2}(0,T;{V}^{\ast})$, the Formulas (91) and (111) give a bijective correspondence between the initial states $u\left(0\right)$ in H and the terminal states $u\left(T\right)$ in ${y}_{f}+D\left({e}^{TA}\right)$.

In view of the linearity, the affine space ${y}_{f}+D\left({e}^{TA}\right)$ might seem surprising. However, a suitable reinterpretation gives the compatibility condition introduced in the next section.

#### 4.2. Well-Posedness of the Final Value Problem

Since $R(e^{TA})\subset H$, the initial state in (111) can be inserted into Formula (76), so any solution u of (90) must satisfy

$$u(t)=e^{-tA}e^{TA}(u_{T}-y_{f})+\int_{0}^{t}e^{-(t-s)\mathcal{A}}f(s)\,ds.$$

Here one could contract the first term a bit, as $e^{-tA}e^{TA}\subset e^{(T-t)A}$ by (105). But we refrain from this because $e^{-tA}e^{TA}$ rather obviously applies to $u_{T}-y_{f}$ if and only if this vector belongs to $D(e^{TA})$; and the following theorem corroborates that this is equivalent to the unique solvability in X of the final value problem (90):

**Theorem**

**7.**

Let V be a separable Hilbert space contained algebraically, topologically and densely in H, and let A be the Lax–Milgram operator defined in H from a bounded V-elliptic sesquilinear form a, having bounded extension $\mathcal{A}:V\to V^{\ast}$. For given $f\in L_{2}(0,T;V^{\ast})$ and $u_{T}\in H$, the condition

$$u_{T}-y_{f}\in D(e^{TA})$$

is necessary and sufficient for the existence of some $u\in X$, cf. (60), that solves the final value problem (90). Such a function u is uniquely determined and given by (112), where all terms belong to X as functions of t.

**Proof.**

When (90) has a solution $u\in X$, then ${u}_{T}$ is reachable from the initial state $u\left(0\right)$ determined from the bijection in Theorem 6, which gives that ${u}_{T}-{y}_{f}={e}^{-TA}u\left(0\right)\in D\left({e}^{TA}\right)$. Hence (113) is necessary and (112) follows by insertion, as explained prior to (112). Uniqueness is obvious from the right-hand side of (112).

When $u_{T}$ and f fulfil (113), then $u_{0}=e^{TA}(u_{T}-y_{f})$ defines a vector in H, so Theorem 4 yields a function $u\in X$ solving $(\partial_{t}+\mathcal{A})u=f$ and $u(0)=u_{0}$. According to Theorem 6, this u has final state $u(T)=e^{-TA}e^{TA}(u_{T}-y_{f})+y_{f}=u_{T}$, hence solves (90).

Finally, the fact that the integral in (112) defines a function in X follows at once from Theorem 5, for it states that it equals the solution in X of ${\tilde{u}}^{\prime}+\mathcal{A}\tilde{u}=f$, $\tilde{u}\left(0\right)=0$. Since $u\in X$ in (112), also ${e}^{-tA}{e}^{TA}({u}_{T}-{y}_{f})$ is a function in X. ☐

**Remark**

**13.**

**Remark**

**14.**

Writing condition (113) as $u_{T}=e^{-TA}u(0)+y_{f}$, cf. Remark 13, this part of Theorem 7 is natural inasmuch as every admissible terminal state $u_{T}$ is in effect a sum of the terminal state $e^{-TA}u(0)$ of the semi-homogeneous initial value problem (53) with $f=0$ and the terminal state $y_{f}$ of the semi-homogeneous problem (53) with $u(0)=0$. Moreover, the $u_{T}$ fill at least a dense set in H: for fixed $u(0)$ this follows from Proposition 5; for fixed f from the density of $D(e^{TA})$ seen prior to Proposition 2.

**Remark**

**15.**

To elucidate the criterion $u_{T}-y_{f}\in D(e^{TA})$ in formula (113) of Theorem 7, we consider the matrix operator $P_{\mathcal{A}}=\Big(\begin{array}{c}\partial_{t}+\mathcal{A}\\ r_{T}\end{array}\Big)$, with $r_{T}$ denoting restriction at $t=T$, and the “forward” map $\Phi(f,u_{T})=u_{T}-y_{f}$, which by (61) and Proposition 5 give bounded operators

$$X\overset{\ P_{\mathcal{A}}\ }{\longrightarrow}\begin{array}{c}L_{2}(0,T;V^{\ast})\\ \oplus\\ H\end{array}\overset{\ \Phi\ }{\longrightarrow}H.$$

Then, in terms of the range $R(P_{\mathcal{A}})$, clearly (90) has a solution if and only if $\Big(\begin{array}{c}f\\ u_{T}\end{array}\Big)\in R(P_{\mathcal{A}})$, so the compatibility condition (113) means that $R(P_{\mathcal{A}})=\Phi^{-1}\big(D(e^{TA})\big)=D(e^{TA}\Phi)$.

The paraphrase at the end of Remark 15 is convenient for the choice of a useful norm on the data. Indeed, we now introduce the space of admissible data $Y=D(e^{TA}\Phi)$, i.e.,

$$Y=\big\{(f,u_{T})\in L_{2}(0,T;V^{\ast})\oplus H\ \big|\ u_{T}-y_{f}\in D(e^{TA})\big\},$$

endowed with the graph norm on $D(e^{TA}\Phi)$ given by

$$\parallel(f,u_{T})\parallel_{Y}^{2}=|u_{T}|^{2}+\parallel f\parallel_{L_{2}(0,T;V^{\ast})}^{2}+\big|e^{TA}(u_{T}-y_{f})\big|^{2}.$$

Using the equivalent norm $|||\cdot|||_{\ast}$ from (26) for $V^{\ast}$, the above is induced by the inner product

$$(u_{T}\,|\,v_{T})+\int_{0}^{T}\big(f(s)\,\big|\,g(s)\big)_{V^{\ast}}\,ds+\big(e^{TA}(u_{T}-y_{f})\,\big|\,e^{TA}(v_{T}-y_{g})\big).$$

This space Y is complete: as Φ in Remark 15 is bounded, the composite map $e^{TA}\Phi$ is a closed operator from $L_{2}(0,T;V^{\ast})\oplus H$ to H, so its domain $D(e^{TA}\Phi)=Y$ is complete with respect to the graph norm given in (116). Hence Y is a Hilbert(-able) space, but we shall often just work with the equivalent norm on the Banach space Y obtained by using simply $\parallel\cdot\parallel_{\ast}$ on $V^{\ast}$.

Moreover, the norm in (116) also leads to continuity of the solution operator for (90):

**Theorem**

**8.**

The solution $u\in X$ in Theorem 7 depends continuously on the data $(f,u_{T})$ in the Hilbert space Y in (115); equivalently, for some constant c we have

$$\int_{0}^{T}\parallel u(t)\parallel^{2}\,dt+\sup_{t\in[0,T]}|u(t)|^{2}+\int_{0}^{T}\parallel\partial_{t}u(t)\parallel_{\ast}^{2}\,dt\le|u_{T}|^{2}+c\Big(\int_{0}^{T}\parallel f(t)\parallel_{\ast}^{2}\,dt+\Big|e^{TA}\Big(u_{T}-\int_{0}^{T}e^{-(T-t)\mathcal{A}}f(t)\,dt\Big)\Big|^{2}\Big).$$

Another equivalent norm on the Hilbert space Y is obtained by omitting the term $|u_{T}|^{2}$.

**Proof.**

This follows from Corollary 1 by inserting $u_{0}=e^{TA}(u_{T}-y_{f})$ from (111) into (64), for this gives $\parallel u\parallel_{X}^{2}\le c|e^{TA}(u_{T}-y_{f})|^{2}+c\parallel f\parallel_{L_{2}(0,T;V^{\ast})}^{2}$, where one can add $|u_{T}|^{2}$ on the right-hand side. Conversely, the boundedness of $y_{f}$ and $e^{-TA}$ yields $|u_{T}|^{2}\le c\parallel f\parallel_{L_{2}(0,T;V^{\ast})}^{2}+c\,|e^{TA}(u_{T}-y_{f})|^{2}$. ☐

Of course, Theorems 7 and 8 together mean that the final value problem in (90) is well posed in the spaces X and Y.

## 5. The Heat Equation With Final Data

To apply the theory in Section 4, we treat the heat equation and its final value problem. In the sequel Ω stands for a smooth, open bounded set in ${\mathbb{R}}^{n}$, $n\ge 2$ as described in ([20], Appendix C). In particular Ω is locally on one side of its boundary $\mathsf{\Gamma}:=\partial \mathsf{\Omega}$.

For such sets we consider the problem of finding the u satisfying

$$\left\{\begin{array}{ccc}\hfill {\partial}_{t}u(t,x)-\Delta u(t,x)& =f(t,x)\hfill & \phantom{\rule{4.pt}{0ex}}\mathrm{in}\phantom{\rule{4.pt}{0ex}}Q:=]0,T[\times \mathsf{\Omega},\hfill \\ \hfill {\gamma}_{0}u(\end{array}\right.$$