This section is dedicated to the proof of the main theorem.
Proof. Step 1: The regularization procedure
First, we regularize the problem starting with the initial data. We find a sequence such that for all and in for . Since , there is an such that . Due to the continuous embedding , we also have that, given that is small enough.
Now, we define by . Since in , in , and the normal trace , we also have that in .
For , we define the scaling . It is easy to check that is also divergence free. Due to , the set is open and a neighborhood of . In particular, we have that on . Due to in , we get for small enough the inclusion , where a neighborhood . Hence, provided that is small enough, we also have on . Finally, we note the convergence
Now, let be such that in and for all . Moreover, we can regularize in by functions with , on , and on . The latter can be assumed due to on . In summary, we get that and the convergences
Since the unknown domain depends on the solution h, we first replace it with a regularized domain and use a fixed point argument in Step 6 below to find . To be more precise, let be such that and for all with m as before, i.e., , and will be chosen later.
For any auxiliary sequence with for all , we take a space-time regularization of such that, if in and in as , then . Note that we use the same parameter introduced in the beginning of Step 1 for , for , the sequence , as well as for ; moreover, depends on via .
The operator can be constructed as follows: consider for each a regularization such that uniformly on bounded subsets of as and if . Then, we set . Since as and on , we can assume that by considering small enough. For further technical details, we refer to Reference [18]. The properties (24) of will be crucial in Step 6.3 below in the analysis of the fixed point operator , as in (52).
Furthermore, in order to regularize the non-linearity, we also introduce the space-time regularization for such that . For example, we can set with being an approximate identity with compact support in , and denoting the extension by zero on .
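For orientation, a regularization by an approximate identity of this type can be sketched in generic notation; the mollifier $\eta_\varepsilon$ and the zero-extension operator $E_0$ below are assumed placeholder names, not symbols from the text:

```latex
% Standard mollifier: \eta \in C_c^\infty(\mathbb{R}^{d+1}), \ \eta \ge 0, \ \int \eta \,\mathrm{d}(t,x) = 1,
% scaled as \eta_\varepsilon(t,x) := \varepsilon^{-(d+1)}\, \eta(t/\varepsilon,\, x/\varepsilon).
(R_\varepsilon f)(t,x)
  := \bigl(\eta_\varepsilon * E_0 f\bigr)(t,x)
  = \int_{\mathbb{R}^{d+1}} \eta_\varepsilon(t-s,\, x-y)\,(E_0 f)(s,y)\,\mathrm{d}(s,y),
\qquad
R_\varepsilon f \to f \ \text{in } L^p \ \text{as } \varepsilon \to 0 .
```

The convolution with the compactly supported kernel yields a smooth function, and the stated convergence is the standard approximation property of mollifiers.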
Step 2: The approximate, almost linearized problem
Given , , and (for an arbitrary ), as in Step 1, we consider the following almost linearized, approximate problem: find such that
- (i) ,
- (ii) ,
- (iii) on ,
- (iv) ,
- (v) ,
- (vi) , and ,
- (vii) for all and such that on , we have
Here, we linearized the term with respect to b via , with being the derivative of , i.e., . However, is still non-linear in the unknown . In addition, we wrote as , which is possible due to Lemma 6. Furthermore, comparing (28) with (20), we reversed the integration by parts with respect to time. For , we used , which explains the new term in (28). For , we used the Reynolds transport theorem to get . This, in combination with the replacement of the term in (20) by the more regular term , explains the term in (28).
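For reference, the Reynolds transport theorem invoked here reads, in generic notation (the moving domain $\Omega(t)$, boundary velocity $v$, and outer normal $n$ below are assumed symbols):

```latex
\frac{\mathrm{d}}{\mathrm{d}t} \int_{\Omega(t)} f(t,x)\,\mathrm{d}x
  = \int_{\Omega(t)} \partial_t f(t,x)\,\mathrm{d}x
  + \int_{\partial\Omega(t)} f(t,x)\,\bigl(v \cdot n\bigr)\,\mathrm{d}S .
```

The boundary integral is exactly the extra term produced by the motion of the domain, which is why an additional term appears after reversing the integration by parts in time.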
Note that we are allowed to take the solution as test function in (28). This yields . Here, we used that , which is true since the upper boundary of moves at the velocity and on , see (iii) above. Similar to the considerations in Section 2, we get , with depending only on the data and T but not on , M or m. Due to being bounded, we also deduce by Poincaré's inequality, as in Lemma 7, that is bounded in by a constant independent of .
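The version of Poincaré's inequality used for such bounds can be stated generically (the domain $\Omega$ and exponent $p$ below are assumptions for illustration):

```latex
\|u\|_{L^p(\Omega)} \le C(\Omega, p)\, \|\nabla u\|_{L^p(\Omega)}
\qquad \text{for all } u \in W^{1,p}_0(\Omega),
```

with a constant depending only on the (bounded) domain and the exponent, not on $u$; this is what turns the gradient bound into a bound on the function itself.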
Now, we transform the domain to the reference configuration, the cylinder , via . Note that is a smooth diffeomorphism with for all . For the sake of abbreviation, we set for a function in the following.
Then, the transformed system reads for all , such that . Here, , and . The interface condition transfers to
Step 3: The Galerkin procedure for the approximate, linearized problem
Now, we construct a Galerkin basis of the space by the eigenfunctions of the Stokes problem, as in Reference [19] (Chapter 1, Section 2.6). Setting , we get for any a basis of the space . Let be a basis of . Then, we construct such that and for all . This can be achieved by solving the t-dependent modified Stokes problem in a weak sense. Furthermore, . Note that both and are smooth with respect to time since , via , has this property. Moreover, on , all basis functions , are independent of t, and even on .
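The Stokes eigenvalue problem generating such a basis, as in the cited construction, can be sketched as follows (the domain $\Omega$ and eigenpairs $(w_k, \lambda_k)$ are generic placeholder symbols):

```latex
-\Delta w_k + \nabla p_k = \lambda_k\, w_k, \qquad
\operatorname{div} w_k = 0 \quad \text{in } \Omega, \qquad
w_k = 0 \quad \text{on } \partial\Omega,
```

with eigenvalues $0 < \lambda_1 \le \lambda_2 \le \dots \to \infty$ and eigenfunctions $\{w_k\}$ forming an orthogonal basis of the divergence-free subspace of $L^2(\Omega)$.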
Now, we consider the equations for and for , combined with the initial conditions ; here, and denote the orthogonal projections of and onto the finite dimensional spaces span and span , respectively. Due to the smoothness of the involved functions with respect to time, we get the existence of a solution of the form on . In order to prove the unique existence of the solution, we first plug (37) into (35) and (36) to obtain a system that we want to solve for the unknowns and . This, in turn, can be done by reducing it to a system of first order with respect to time. This first-order system for (without initial conditions) reads . Here, denotes all lower-order terms that do not involve or . Since the equation is linear, so is . Furthermore, is smooth with respect to time. Obviously, is given by , where . Moreover, is smooth with respect to time, too, and it is easy to check that is positive definite and, hence, invertible. Therefore, we deduce the existence of a unique solution of the form as in (37) that satisfies the mentioned initial conditions.
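Schematically, such a Galerkin system is a linear ODE with a time-dependent, positive definite mass matrix; the matrix names below are assumptions used only for illustration:

```latex
M(t)\, y'(t) = A(t)\, y(t) + g(t), \qquad y(0) = y_0 .
```

Since $M(t)$ is positive definite (hence invertible) and smooth in $t$, the system is equivalent to $y'(t) = M(t)^{-1}\bigl(A(t)\,y(t) + g(t)\bigr)$, and the Picard-Lindelöf theorem yields a unique solution on the whole time interval because the right-hand side is linear in $y$.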
Now, we multiply equation (35) with and sum over . By analogy, we multiply (36) with and sum over . Adding those two terms yields . Since , we get, with the exterior normal vector of , that . Therefore, we deduce . Plugging this into (38), we get . Similarly as done in Section 2, along with and using that is elliptic with a constant independent of n, we can integrate in time and apply Grönwall's inequality to conclude that . Here, the constant depends on and M. However, if we return to the problem in the deformed configuration, we obtain, similarly as done in Section 2, that with independent of , and M.
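The form of Grönwall's inequality used to close such estimates is, in generic notation:

```latex
y(t) \le a + \int_0^t b(s)\, y(s)\,\mathrm{d}s
\quad \Longrightarrow \quad
y(t) \le a\, \exp\!\Bigl(\int_0^t b(s)\,\mathrm{d}s\Bigr),
```

for a constant $a \ge 0$ and a nonnegative integrable function $b$; after integrating the energy identity in time, this absorbs the linear growth terms into the exponential constant.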
Step 4: Uniform estimates of time derivatives in n
In order to obtain a solution for , we still need additional estimates of the largest time derivatives. To be more specific, we need the estimate with independent of n but possibly dependent on the given data, , and . In order to achieve this estimate, we multiply (35) with and sum over . In a similar fashion, we multiply (36) with and sum over . For simplicity, we omit the indices and n in the following and obtain, after adding the two sums, with . Hence, we get that . Then, an integration in time implies that . Now, we have to estimate every term on the right-hand side of (41) by a constant independent of n, but possibly depending on M and on and v. The first three pose no problem since they depend on the initial data, which are smooth. For the fourth term, Hölder's and Young's inequalities imply that
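The two elementary inequalities combined in this step are, for conjugate exponents $1/p + 1/q = 1$:

```latex
\int_\Omega |f\,g| \,\mathrm{d}x \le \|f\|_{L^p(\Omega)}\, \|g\|_{L^q(\Omega)}
\quad \text{(H\"older)},
\qquad
ab \le \delta\, a^p + C(\delta)\, b^q \quad (a, b \ge 0,\ \delta > 0)
\quad \text{(Young)} .
```

The weighted form of Young's inequality allows one summand to be absorbed into the left-hand side at the price of a large constant in front of the other.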
Let us estimate the second summand of the right-hand side of (42). Remembering that , we get that . Concerning the second term, we recall that solves the Stokes-like problem (33) and, hence, solves in the weak sense. Due to the duality estimate for all test functions , we get that . Now, using the equation solved by and their continuous dependence on the given data, as in (34), we can estimate further: . Combining the last two estimates, we have . The term in (43) is uniformly bounded on due to the smoothness and boundedness of . The bound may depend on , though. Hence, we get that , where we use the estimate (44) for the term involving and (39) for the term involving . Therefore, we get from (42) that . The integral is bounded independently of n since is bounded uniformly in , as in (39), and is bounded in due to being independent of n. Concerning the term , we use again (43) and deduce that is bounded due to (46) and (45).
For the next term in (41) involving , as in (27), we use and the boundedness of in to get that . For the second integral involving , integration by parts implies that . The first integral on the right-hand side can be estimated as , while, for the second one, we can write , where we used that , on , and that, by (39), is bounded in . For the next term in (41), we easily have . Concerning the integrals involving h, we start with the estimate . The term is bounded in view of (39). For the next term, we have, due to and (39), that . Moreover, due to the boundedness of , . Finally, the estimate holds. So, in total, we conclude . Due to , with the second term bounded in because of (43), (45) and (46), and , we have shown (in the original notation, that is to say, with indices n and ) that
Step 5: Convergence of to a weak solution
The bounds from (39) and (48) imply that there is a subsequence of , which we again denote by , such that as for some . The last convergence in (49) is implied by the second-to-last one (up to a subsequence), which, in turn, is implied by and the Aubin-Lions lemma. Using the above convergences, we let n tend to infinity in (35) and (36) after an integration in time. For the convergence for all , we use that , the pointwise almost everywhere convergence of to on , and the dominated convergence theorem.
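The compactness result invoked here, the Aubin-Lions lemma, can be stated generically for Banach spaces $X \hookrightarrow\hookrightarrow B \hookrightarrow Y$ (compact, then continuous, embeddings; the space names are placeholders):

```latex
\bigl\{\, u \in L^p(0,T;X) \;:\; \partial_t u \in L^q(0,T;Y) \,\bigr\}
\;\hookrightarrow\hookrightarrow\; L^p(0,T;B)
\qquad \text{for } 1 \le p < \infty,\ q \ge 1 .
```

A uniform bound on the sequence and on its time derivatives in the respective spaces thus yields strong convergence of a subsequence in the intermediate space.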
For the interface condition, we note that, by construction, as in (37), we have . The first two convergences in (49) imply via the Aubin-Lions lemma that, as , we have ; thus, . We already know that as . Therefore, by possibly choosing a suitable subsequence, we deduce . For the initial values, we have by design for . Due to with , we deduce that and .
Now, in order to combine the equations (35) and (36) and to solve the weak formulation (32) on Z as a whole, we take a relevant test function , as in (32). Let denote the projection of b onto span . As before, denotes the solution of the stationary Stokes-like system (33), and denotes the weak solution to the similar system but with boundary value instead of . Due to the linearity of this system and the continuous dependence of solutions on the data, we get . This enables us to multiply (36) with , integrate in time, take the limit of n to infinity for the solution, as justified above, take the sum over , and then consider the limit for the test function. This allows us to essentially replace with b and with in (36).
Then, we consider the Dirichlet part , which satisfies on and . For this reason, we can consider the projection of onto span . Analogously to before, we multiply (35) with , integrate over time, take the limit for the solution, take the sum over , and then take the limit for the test function. Next, using that , we can add the two resulting equations to conclude that solves (28). Since the diffeomorphism is smooth, we obtain a weak solution of the problem (28), formulated on , by setting .
Furthermore, weak solutions of (28) are unique. For the proof, take two solutions of (28) with the same given data and initial values. In the equation solved by the difference , we test with the difference itself. After integration by parts, we obtain . Since is the derivative of a convex function, it is monotonically increasing. Therefore, every term on the left-hand side is non-negative, and we conclude that and . The latter implies due to . □
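For completeness, the monotonicity fact used in the uniqueness argument is the following elementary property of convex functions (stated in generic notation):

```latex
\bigl(\varphi'(a) - \varphi'(b)\bigr)\,(a - b) \ge 0
\qquad \text{for all } a, b \in \mathbb{R},
\quad \varphi \in C^1(\mathbb{R}) \text{ convex} ,
```

which is why testing the equation for the difference of two solutions with the difference itself produces a non-negative term that can be discarded from the left-hand side.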