4.1. Accuracy Tests and Qualitative Agreements
In Table 1, we define the tests presented in this paper, obtained by varying the parameters of the systems. For these choices of parameters, a typical time scale is of order unity, while after time 15 the solution reaches the steady state.
First, we check the accuracy of the adopted schemes. In Table 2 and Table 3, we report the errors for the conductivity variables, computed with Richardson extrapolation (see, e.g., [13]). We show the error for the modulus of the vector and of the tensor; the chosen parameters are defined in TestA, TestB and TestC.
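The Richardson-extrapolation error estimate mentioned above can be sketched as follows. This is a minimal illustration, not the authors' actual procedure; the grid spacings and the assumption that all solutions are available on a common grid are ours:

```python
import numpy as np

def richardson_error_and_order(u_h, u_h2, u_h4):
    """Estimate the convergence order and the error from three solutions
    computed with grid spacings h, h/2 and h/4, restricted to a common grid.
    Returns (error estimate for u_h2, observed order)."""
    e1 = np.linalg.norm(u_h - u_h2)
    e2 = np.linalg.norm(u_h2 - u_h4)
    p = np.log2(e1 / e2)            # observed convergence order
    err = e1 / (2.0**p - 1.0)       # error estimate for the finer solution u_h2
    return err, p
```

For a second-order scheme, the observed order `p` should approach 2 as the grids are refined.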
Here, we define the initial conditions for the two conductivity variables, together with the source function, written in terms of the identity matrix.
In Figure 1, we show three different quantities for TestD: the modulus of the variables at the final time (first column), the two components of the flux at the final time (second column) and the energy as a function of time (third column). In the first row, we have the results for the variable m, and in the second row, the results for the variable C are shown. As expected, the energy decays in time for both variables, and it is very small at the final time, which indicates that we are close to the steady state of the systems. The main difference between the two variables is the shape of the network, with a Y shape for the conductivity vector and a V shape for the tensor.
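The observation that a small energy at the final time signals proximity to the steady state can be monitored in practice. One simple proxy (an assumption of ours, not necessarily the diagnostic used for Figure 1) is the norm of the discrete time derivative:

```python
import numpy as np

def near_steady_state(u_new, u_old, dt, tol=1e-8):
    """Return True when the discrete time derivative of the solution is
    below tol, i.e. the solution has (numerically) stopped evolving."""
    return np.linalg.norm(u_new - u_old) / dt < tol
```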
In Figure 2, we show the results obtained when varying the parameter D in Equations (8) and (9) and in Equations (3)–(6). The tests we consider are TestG (first column), TestD (second column) and TestE (third column), with the results for the variable m in the first row and for the variable C in the second row. The ramifications become more evident when decreasing the diffusivity, and they become thinner and thinner. For the first value of D chosen, we are not able to see those ramifications for the vector m, because the time scale associated with the diffusion is too fast to capture the details.
In Figure 3, we observe the dependence on the relaxation exponent. In the first column, we report the results of TestH; in the second, those corresponding to TestD; and in the third, those corresponding to TestF. Again, in the first row we show the results for the variable m, and in the second row those for the variable C. For some values of the relaxation exponent, the results do not show the details of the network, and one of the tested values appears to better represent the leaf network.
In Figure 4, we show the behavior of the solution for several values of the parameter. Again, we notice that for the largest value we are not able to see any ramification, while for the smaller values we are close to the asymptotic behavior.
4.2. Quantitative Agreement
In this section, we show some quantitative comparisons between the two models. To this end, we consider well-prepared initial data and look for compatible parameters. The goal of this part is to choose a convenient set of parameters in order to compare the two systems, making them as close as possible. In what follows, we distinguish the parameters of the two systems, one set for the C model and one for the m model.
For simplicity, the choice of parameters is performed by comparing the two models in one space dimension. In 1D, the systems (3), (6), (8) and (9) read
Now, we suppose that C has the following form: where B is a measure of the discrepancy between the two models, and we set the initial conditions so that this discrepancy vanishes at the initial time. If we substitute Equation (33) into Equation (31), we have At this point, we multiply Equation (32) by a suitable factor, and we obtain After some manipulation, Equations (34) and (35) become:
where the last term is the residual. We made use of the following identity in the equation for m: Now, we consider the difference between Equations (36) and (37), and we obtain
Since we want the residual to be small in absolute value, a convenient choice for the sets of variables is the following: and for the initial conditions, we choose data such that, initially, In this way, the first derivative in space is also equal to 0 after one time step.
In order to show some results in 2D, we need to define the initial conditions for the two conductivity variables such that, at the initial time, the discrepancy again vanishes, while the values of the parameters are defined in TestM in Table 4. In 2D, the equivalent expression of (33) is
In this subsection, we comment on the solutions of the following tests.
In Table 5, we show the time evolution of the norm of the difference between the two solutions, to see how they move away from each other as time increases. In this table, we see that the two solutions are very different even after a few time steps, and for this reason they are difficult to compare. The quantity reported is the norm of the difference between the two solutions after a given number of time steps. As the table shows, the two models quickly move far apart from each other, suggesting intrinsically different behavior.
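The distance monitored in Table 5 can be sketched as follows. The normalization below is one plausible choice of ours, since the printed definition is incomplete here, and is not necessarily the one used in the table:

```python
import numpy as np

def relative_difference(m, c, eps=1e-14):
    """Relative L2 distance between two solution snapshots.
    The denominator (sum of the two norms) is an assumed normalization."""
    return np.linalg.norm(m - c) / (np.linalg.norm(m) + np.linalg.norm(c) + eps)
```

Evaluating this quantity after each time step gives the kind of time series reported in the table: values growing toward order one indicate that the two solutions diverge from each other.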
Now, we are interested in showing the solution of the m model in the case of zero diffusivity. Since the randomness of the network is common in nature but is also very effective in stabilizing the equations, we want to see if there is some analogy between the two cases considered, and we compare the two corresponding solutions. In Figure 5, we show the agreement of the two solutions for two different parameter choices (left and right panels). The other parameters are defined in TestN and TestO in Table 6, while the initial condition for Figure 5 is defined in Equation (29).
In Figure 6, we illustrate the long-time solution of the m model obtained with the following space-dependent initial condition:
In order to show a quantitative comparison between the results, we calculate the difference between the solutions at the final time with the following expression: The values obtained for the comparisons shown in Figure 5 and in Figure 6 are both small. If we take the size of the spatial step into account, these values are not surprising, because they are both of the order of the spatial step.
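A standard way to compute such a final-time difference is a mesh-weighted discrete L2 norm. Since the exact expression in the text did not survive extraction, the weighting below is our assumption:

```python
import numpy as np

def final_time_difference(u1, u2, dx, dy=None):
    """Discrete L2 norm of u1 - u2 on a uniform 2D grid; the cell-area
    weighting makes the value comparable with the spatial step."""
    dy = dx if dy is None else dy
    return np.sqrt(dx * dy * np.sum((u1 - u2) ** 2))
```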
As we explained in Section 2, the Dirichlet integral is important to describe the randomness of the network, and it is essential for the m model, because, without that term, there is no network formation for the m model. In particular, if the initial condition for m has compact support, there is no mechanism that extends the support, since m will remain a zero vector outside it. From the analytical point of view, the Dirichlet integral is also necessary to regularize the model, and this is the main reason why we have a diffusion term, coming from the gradient flow of that Dirichlet integral.
We also note that, if we set the diffusion coefficient equal to zero, the model for m reduces to a reaction equation for the conductivity. This means that, with zero diffusion, the support of the unknown m remains unchanged. In particular, it cannot extend, while in some regions the numerical support (i.e., the region in which the modulus of m stays above a given small threshold) may shrink.
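The remark that the support cannot extend follows because, without diffusion, the evolution of m is purely local and the reaction vanishes wherever m vanishes. The sketch below illustrates this with a generic activation-minus-relaxation reaction term; its precise form, and the parameters `c2` and `gamma`, are our assumptions for illustration only, not the paper's exact right-hand side:

```python
import numpy as np

# Hypothetical pointwise reaction for the conductivity vector m:
# activation driven by the pressure gradient minus algebraic relaxation.
def reaction(m, grad_p, c2=1.0, gamma=0.75):
    mag = np.linalg.norm(m)
    relax = mag ** (2.0 * (gamma - 1.0)) * m if mag > 0 else np.zeros_like(m)
    return c2 * np.dot(m, grad_p) * grad_p - relax

# Forward Euler at a point outside the initial support: m starts at zero,
# the reaction is identically zero there, so m stays at zero.
m = np.zeros(2)
for _ in range(100):
    m = m + 0.01 * reaction(m, np.array([1.0, 0.5]))
```

After the loop, `m` is still exactly zero: with zero diffusivity, no mechanism activates the conductivity outside the initial support.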
Alternative Boundary Conditions
Furthermore, in the case of zero diffusivity for the m model, we observe anomalous behavior of the solution near the boundaries. In order to overcome this problem, we propose an ad hoc boundary condition, as illustrated below.
Let us consider the equations for m with zero diffusivity and in the limit of the steady state. We have which we can write as follows: Thus, we can deduce that m is proportional to the pressure gradient. This means that there exists a constant such that If we substitute Equation (43) into Equation (42), we obtain and, solving for that constant, we have the boundary condition for m. Analogously, we can also find a boundary condition for the tensor C. Again, starting from zero diffusivity and at the steady state, we have which means that the conductivity is proportional to the tensor product of the pressure gradient with itself. Again, we look for a constant such that
Now, we substitute Equation (47) into Equation (46) and solve for that constant. In this way, as before, we find the expression for the conductivity tensor at the boundary. Conditions (45) and (48) might be a reasonable choice in the case of zero diffusivity. This treatment has the drawback of introducing an additional non-linearity into the system. Alternative boundary conditions are currently under investigation.
Another aspect we want to focus on is the steady state of the m model in two different parameter regimes (as the authors show in [7]). For this reason, we define different initial conditions for the vector m, and in Figure 7 we plot the following quantity: where the norm involved is the Frobenius norm. In this way, we see the difference between the solutions, for two different pairs of initial conditions (left and right panels), as a function of time. These results support with numerical evidence that, for the m model and in the first regime, the steady state is unique and does not depend on the initial conditions, as expected (see, e.g., [7]).
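The uniqueness check for Figure 7 compares two runs started from different initial data; a sketch of the monitored quantity (our reconstruction, since the printed formula is incomplete) is:

```python
import numpy as np

def snapshot_distance(sol_a, sol_b):
    """Frobenius-norm distance between two sequences of solution snapshots
    (shape (nt, ...)), one value per time step."""
    nt = len(sol_a)
    diff = (np.asarray(sol_a) - np.asarray(sol_b)).reshape(nt, -1)
    return np.linalg.norm(diff, axis=1)
```

If the steady state is unique, this distance decays to zero in time regardless of the initial conditions; if it saturates at a positive value, the two runs have settled into different steady states.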
In Figure 8, which corresponds to the other regime, we see that two different steady states are reached when choosing two different initial conditions, suggesting that the steady-state solution is not unique in that case.