
Adaptive Iterative Splitting Methods for Convection-Diffusion-Reaction Equations

1 The Institute of Theoretical Electrical Engineering, Ruhr University of Bochum, Universitätsstrasse 150, D-44801 Bochum, Germany
2 Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(3), 302; https://doi.org/10.3390/math8030302
Received: 8 January 2020 / Revised: 18 February 2020 / Accepted: 20 February 2020 / Published: 25 February 2020

Abstract

This article proposes adaptive iterative splitting methods to solve multiphysics problems related to convection–diffusion–reaction equations. The splitting techniques are based on iterative splitting approaches combined with adaptive ideas. By shifting the time-steps with additional adaptive time-ranges, we embed the adaptive techniques into the splitting approach. We present the numerical analysis of the adapted iterative splitting schemes and develop the underlying error estimates for the application of the adaptive schemes. The performance of the method with respect to accuracy and acceleration is evaluated in different numerical experiments. We test the benefits of the adaptive splitting approach on highly nonlinear Burgers’ and Maxwell–Stefan diffusion equations.
Keywords: time adaptive integration; adaptive iterative splitting; operator-splitting method; error control; convection–diffusion–reaction equations; iterative solver method; nonlinear equations

1. Introduction

In this paper, we propose adaptive splitting schemes to solve nonlinear differential equations. We consider spatially discretized convection–diffusion–reaction equations, which we treat as semi-discretized nonlinear systems of ordinary differential equations. Because of the nonlinearities, it is important to use adaptive schemes, with which we can control the local errors of the underlying methods, see [1,2,3,4]. In general, splitting methods have local splitting errors, which can be controlled with the time or spatial steps of the underlying schemes, see [1,2,5,6].
In this work, we consider time-splitting methods, and here, we distinguish between
  • Non-iterative methods, e.g., Lie–Trotter, see [7]; Strang-splitting methods, see [8]; or exponential splitting schemes, see [1,9].
  • Iterative methods, e.g., iterative splitting methods, see [3,10], or waveform-relaxation methods, see, e.g., [11,12].
For the non-iterative methods, e.g., Lie–Trotter and Strang-splitting schemes, first works exist that discuss adaptive time-splitting ideas, see [6,8]. For the iterative methods, only some first ideas exist, see [13], based on different time-step approaches. Our new contribution is based on the novel strategy of ϵ-shifting, see [14], of the underlying splitting method. We obtain so-called shifted iterative splitting methods, which can be compared with the standard iterative splitting methods, i.e., without the shifting. The local error estimate can be computed from the solutions of the shifted and non-shifted iterative splitting approaches. Then, the effective error control is given as a function of the local error estimates and an error-tolerance parameter.
Such a novel adaptive splitting approach is important to reduce the computational time, as we decompose into sub-operators, which can be simulated with faster and more accurate numerical schemes. Further, the novel approach can be optimized with an effective and maximal splitting time step, see also [5,14]. We combine the two ideas of adaptivity and splitting into a novel strategy to control and reduce the local splitting error of the iterative splitting methods. Then, we can obtain more accurate results with a maximal time step and reduce the computational cost.
In this paper, we present the novel adaptive splitting techniques as follows.
  • In the first step, we consider the standard splitting techniques with the underlying error analysis, see [3], and
  • in the second step, we introduce the adaptive techniques, which are based on the ϵ-shift technique, see [14], such that we can control the local splitting error.
Then, an error estimate is computed, which allows us to evaluate a maximal splitting time-step with respect to the shifted and non-shifted iterative splitting approaches. The analysis is based on the standard iterative splitting approaches, see [13,15], and the additional adaptive techniques, see [5,6]. In the numerical applications, which are based on convection–diffusion–reaction equations, we present the verification and the benefits of the novel adaptive iterative splitting approaches.
The paper is outlined as follows. In Section 2, we explain the adaptive splitting approaches. In Section 3, we discuss the error analysis of the adaptive splitting schemes. The applications to different convection–diffusion–reaction equations are presented in Section 4. In Section 5, we discuss the theoretical and practical results.

2. Adaptive Splitting Approaches

We take inspiration for our studies, which are presented below, from real-life simulations of nonlinear convection–diffusion–reaction equations with the help of splitting approaches, see [3,16,17,18].
We deal with convection–diffusion–reaction equations, which can be written as
\partial_t u(t) + \nabla_x \cdot v(u) - \nabla_x \cdot ( D(u) \nabla_x u ) = f(u(t)), \quad x \in \mathbb{R}^d, \; t > 0,
u(x, 0) = u_0(x), \quad x \in \mathbb{R}^d, \; t = 0,
where f : \mathbb{R}^n \to \mathbb{R}^n is the reaction term, u : \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}^n is the solution, D(u) is the diffusion matrix, which is a tensor of order d \times d \times n, and v is the velocity vector, which is of order d \times n.
In the present paper, we apply spatial discretization methods, such that we consider the spatially discretized partial differential equations with the boundary conditions included, which are given as ordinary differential equations:
\partial_t u(t) = A(u(t)) u(t) + B(u(t)) u(t), \quad t \in (0, T),
where u(0) = u_0 is the initial condition. A(u) and B(u) are spatially discretized operators. For example, A(u) is the spatially discretized convection operator and B(u) is the spatially discretized diffusion operator. For convenience, the nonlinear operators are assumed to be bounded, e.g., bounded matrices.
In our proposed scheme, we embed the idea of the ϵ-shift, which is explained in [14], into the iterative splitting methods, see [3]. Then, we obtain a new, so-called ϵ-shifted iterative splitting method, which is a new contribution. Such shifted iterative splitting approaches are used to design new adaptive time-splitting methods, which can be applied with maximal time-steps and compute the error controls of the local splitting errors.
In the following, we discuss the standard and the shifted splitting approaches.

2.1. Standard Splitting Approaches

In this section, we describe the standard splitting methods.
We deal with two splitting schemes:
  • Non-iterative splitting scheme (Strang splitting), see [8].
  • Iterative splitting scheme (fixpoint scheme), see [3].

2.1.1. Strang-Marchuk Splitting (SMS)

In the SMS method, in the first step, the operator A is solved in the left half of the interval [t^n, t^{n+1}]; then, in the second step, the operator B is solved in the whole interval [t^n, t^{n+1}]; and in the third step, the operator A is solved in the right half of the interval [t^n, t^{n+1}]. The three subproblems are connected by the initial conditions, see
\frac{d \tilde{u}(t)}{dt} = A(\tilde{u}(t)) \tilde{u}(t), \quad \text{with } \tilde{u}(t^n) = u(t^n), \; t \in [t^n, t^n + \tau/2],
\frac{d \tilde{\tilde{u}}(t)}{dt} = B(\tilde{\tilde{u}}(t)) \tilde{\tilde{u}}(t), \quad \text{with } \tilde{\tilde{u}}(t^n) = \tilde{u}(t^n + \tau/2), \; t \in [t^n, t^{n+1}],
\frac{d u(t)}{dt} = A(u(t)) u(t), \quad \text{with } u(t^n + \tau/2) = \tilde{\tilde{u}}(t^{n+1}), \; t \in [t^n + \tau/2, t^{n+1}],
where \tau = \Delta t^n = t^{n+1} - t^n is the time step.
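As a concrete illustration of the A–B–A pattern above, the following sketch performs one Strang step for a linear toy problem du/dt = (A + B)u with 2×2 matrices and compares it with the exact propagator. The matrices, the step size, and the plain Taylor-series matrix exponential are assumptions made for this example only.

```python
# One Strang-Marchuk (A-B-A) step for du/dt = (A + B) u with 2x2 matrices,
# compared against the exact propagator exp((A + B) tau).

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(X, v):
    return [sum(X[i][k] * v[k] for k in range(2)) for i in range(2)]

def scale(X, s):
    return [[x * s for x in row] for row in X]

def expm(X, terms=30):
    # Plain Taylor series exp(X) = sum_k X^k / k!; adequate for the small
    # norms used here, not a production matrix exponential.
    E = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        P = scale(mat_mul(P, X), 1.0 / k)
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

A = [[-1.0, 0.5], [0.0, -0.5]]   # illustrative non-commuting operators
B = [[-0.2, 0.0], [0.3, -0.4]]
u0 = [1.0, 1.0]
tau = 0.1

# Strang step: half step with A, full step with B, half step with A.
u = mat_vec(expm(scale(A, tau / 2)), u0)
u = mat_vec(expm(scale(B, tau)), u)
u_strang = mat_vec(expm(scale(A, tau / 2)), u)

AB = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
u_exact = mat_vec(expm(scale(AB, tau)), u0)

err = max(abs(a - b) for a, b in zip(u_strang, u_exact))
print(err)  # local error of order tau^3 for non-commuting A, B
```

For commuting operators the splitting is exact; the O(τ³) local error here comes entirely from the commutator terms.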

2.1.2. Iterative Splitting Methods

The iterative splitting methods defined in [13] and extensively studied for ordinary and partial differential equations in [3,10,15,19,20] are alternative operator splitting methods, which are based on iterative techniques.
We apply two versions of the iterative splitting methods:
  • Linear Iterative Splitting (LIS)
    The LIS solves in the first equation the linear part with operator A, with the given right-hand side built from operator B. Then, it solves in the second equation the linear part with operator B, with the right-hand side built from operator A, using the solution of the first equation. The two solver steps are iterated m times before we pass to the next interval.
    \frac{d \tilde{u}_i(t)}{dt} = A(u_{i-1}(t)) \tilde{u}_i(t) + B(u_{i-1}(t)) u_{i-1}(t), \quad \text{with } \tilde{u}_i(t^n) = u(t^n),
    \frac{d u_i(t)}{dt} = A(\tilde{u}_i(t)) \tilde{u}_i(t) + B(\tilde{u}_i(t)) u_i(t), \quad \text{with } u_i(t^n) = u(t^n),
    where i = 1, 2, \ldots, m. For the initialization of the iteration, we start with a function u_0(t), which satisfies the initial condition u_0(0) = u_0. After we have performed m iterations of the LIS, we apply the approximated solution u(t^{n+1}) = u_m(t^{n+1}) for the next time-step, until the final step n + 1 = N.
  • Quasilinear Iterative Splitting (QIS)
    The QIS solves in the first equation the nonlinear part with operator A, with the given right-hand side built from operator B. Then, it solves in the second equation the nonlinear part with operator B, with the right-hand side built from operator A, using the solution of the first equation. The two solver steps are iterated m times before we pass to the next interval.
    \frac{d \tilde{u}_i(t)}{dt} = A(\tilde{u}_i(t)) \tilde{u}_i(t) + B(u_{i-1}(t)) u_{i-1}(t), \quad \text{with } \tilde{u}_i(t^n) = u(t^n),
    \frac{d u_i(t)}{dt} = A(\tilde{u}_i(t)) \tilde{u}_i(t) + B(u_i(t)) u_i(t), \quad \text{with } u_i(t^n) = u(t^n),
    where i = 1, 2, \ldots, m. For the initialization of the iteration, we start with a function u_0(t), which satisfies the initial condition u_0(0) = u_0. After we have performed m iterations of the QIS, we apply the approximated solution u(t^{n+1}) = u_m(t^{n+1}) for the next time-step, until the final step n + 1 = N.
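To make the LIS recursion concrete, here is a minimal sketch for the scalar model u' = a u + b u² (so that A(u) = a and B(u) u = b u²), discretizing each sub-equation with one backward Euler step. The model, the parameter values, and freezing the previous iterate at the right endpoint are simplifying assumptions for illustration only.

```python
# Linear iterative splitting (LIS) sketch on the scalar model
# u' = a*u + b*u**2, one backward Euler step per sub-equation.

def lis_step(u_n, a, b, tau, m=3):
    u_prev = u_n                      # iterate u_{i-1}, started with u(t^n)
    for _ in range(m):
        # first sub-equation: u~' = A(u_{i-1}) u~ + B(u_{i-1}) u_{i-1},
        # backward Euler: (u~ - u_n)/tau = a*u~ + b*u_prev**2
        u_tilde = (u_n + tau * b * u_prev ** 2) / (1.0 - tau * a)
        # second sub-equation: u' = A(u~) u~ + B(u~) u,
        # backward Euler: (u - u_n)/tau = a*u_tilde + b*u_tilde*u
        u_new = (u_n + tau * a * u_tilde) / (1.0 - tau * b * u_tilde)
        u_prev = u_new                # next iterate u_i
    return u_new

# one step on u' = -u - 0.5*u**2, u(0) = 1 (illustrative parameters)
u1 = lis_step(1.0, a=-1.0, b=-0.5, tau=0.01)
print(u1)  # approx 0.98529; the exact value 1/(1.5*e**0.01 - 0.5) is approx 0.98515
```

The difference of roughly 10⁻⁴ to the exact value reflects the first-order backward Euler sub-solver, not the splitting itself.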

2.2. Shifted Splitting Approaches for Error Estimations

In this section, we describe the modified splitting methods used to obtain error estimates. We propose shifted splitting as a novel method to design error estimates, see [14].
We deal with two shifted splitting schemes:
  • Non-iterative splitting scheme (Strang splitting), see [14].
  • Iterative splitting scheme (fixpoint scheme), see [3].

2.2.1. Shifted Strang-Marchuk Splitting (SSMS)

In this method, in the interval [t^n, t^{n+1}], we first solve for operator A with a half time step minus a small ϵ, i.e., \tau/2 - \epsilon; then, we solve for B with the full time-step \tau; and again for A with a half time step plus a small ϵ, i.e., \tau/2 + \epsilon. The three subproblems are connected by the initial conditions, according to
\frac{d \tilde{u}(t)}{dt} = A(\tilde{u}(t)) \tilde{u}(t), \quad \text{with } \tilde{u}(t^n) = u(t^n), \; t \in [t^n, t^n + \tau/2 - \epsilon], \; \text{time step } \tau/2 - \epsilon,
\frac{d \tilde{\tilde{u}}(t)}{dt} = B(\tilde{\tilde{u}}(t)) \tilde{\tilde{u}}(t), \quad \text{with } \tilde{\tilde{u}}(t^n) = \tilde{u}(t^n + \tau/2 - \epsilon), \; t \in [t^n, t^{n+1}], \; \text{time step } \tau,
\frac{d u(t)}{dt} = A(u(t)) u(t), \quad \text{with } u(t^n + \tau/2 - \epsilon) = \tilde{\tilde{u}}(t^{n+1}), \; t \in [t^n + \tau/2 - \epsilon, t^{n+1}], \; \text{time step } \tau/2 + \epsilon.
The ϵ value is a small fraction of τ , for example ϵ = 0.005 τ , so that the shifted interval is close to the original one. The error estimate is given as
err = \| u_{Strang}(t^{n+1}) - u_{Strang,\epsilon}(t^{n+1}) \|.
If the error estimate e r r is higher than a given tolerance η , we redo the computations with a smaller step according to the refinement scheme
\Delta t_{new} = \nu \, \Delta t \, \frac{\eta}{err},
where we apply \nu > 0, near 1, as a safety factor. Otherwise, if err \le \eta, we accept the obtained value u_{Strang}(t^{n+1}) and proceed with the next time interval. In this case, in order to avoid unnecessarily small time steps, we apply a coarsening scheme
\Delta t_{new} = (1 + \kappa) \, \Delta t,
where κ is a small positive value depending on the tolerance η .
The Algorithm is given in Algorithm 1:
Algorithm 1.
1. Compute the local time-steps with the Strang and the shifted Strang method, i.e., u_{Strang}(t^{n+1}) and u_{Strang,\epsilon}(t^{n+1}).
2. Compute the error estimate according to (7).
3. If err > \eta, reject the time-step and restart the current time-interval with \Delta t^n = \Delta t_{new} obtained from (8).
If err \le \eta, the error tolerance is satisfied; proceed with the next time interval with the increased time step \Delta t^{n+1} = \Delta t_{new} given by (9).
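The accept/reject logic of Algorithm 1 can be sketched as follows on the scalar model u' = a u + b u², whose A- and B-flows are both exactly solvable. The model, parameters, tolerance, and safety/coarsening factors are illustrative assumptions, not the article's experimental settings.

```python
import math

# Adaptive shifted Strang splitting (Algorithm 1 sketch) for the scalar
# model u' = a*u + b*u**2, split into flow_A (linear part) and flow_B
# (quadratic part); both sub-flows are solved exactly.

def flow_A(u, dt, a):
    return u * math.exp(a * dt)

def flow_B(u, dt, b):
    # exact flow of u' = b*u**2
    return u / (1.0 - b * dt * u)

def strang(u, dt, a, b, eps=0.0):
    u = flow_A(u, dt / 2 - eps, a)
    u = flow_B(u, dt, b)
    return flow_A(u, dt / 2 + eps, a)

def adaptive_strang(u0, t_end, dt, a, b, eta=1e-7, nu=0.9, kappa=0.1):
    t, u, steps = 0.0, u0, 0
    while t < t_end:
        dt = min(dt, t_end - t)
        u_plain = strang(u, dt, a, b)
        u_shift = strang(u, dt, a, b, eps=0.005 * dt)
        err = abs(u_plain - u_shift)
        if err > eta:
            dt = nu * dt * (eta / err)      # reject: refine as in (8)
        else:
            t, u = t + dt, u_plain          # accept the step
            dt = (1 + kappa) * dt           # coarsen as in (9)
            steps += 1
    return u, steps

u_end, n = adaptive_strang(1.0, 1.0, 0.1, a=-1.0, b=-0.5)
u_exact = 1.0 / (1.5 * math.exp(1.0) - 0.5)  # exact solution at t = 1
print(abs(u_end - u_exact), n)
```

The controller oscillates around the largest step for which the shift estimate stays below the tolerance, which is exactly the intended "maximal time step" behavior.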
Remark 1. 
The theoretical results for Algorithm 1, which is based on the ϵ-shifted Strang-splitting method, are given in the literature [4,5]. There, the authors applied the shifted Strang-splitting method and could derive a local error estimate from the first and second splitting resolutions, see [14].
Figure 1 gives a graphical illustration of the shifting idea.

2.2.2. Shifted Iterative Splitting Methods

In this section, we apply the shifted time-step idea to the iterative splitting methods. First, we solve the first iterative step with time-step \tau - \epsilon; then, we solve the second iterative step with time-step \tau + \epsilon.
We modify the two versions of the iterative splitting methods:
  • Shifted Linear Iterative Splitting (SLIS)
    The SLIS solves in the first equation the linearized part with operator A, with the given right-hand side built from operator B, for a minus shift of ϵ in the time-step. Then, it solves in the second equation the linearized part with operator B, with the right-hand side built from operator A, for a plus shift of ϵ in the time-step, using the solution of the first equation. The two solver steps are iterated m times before we pass to the next interval.
    \frac{d \tilde{u}_i(t)}{dt} = A(u_{i-1}(t)) \tilde{u}_i(t) + B(u_{i-1}(t)) u_{i-1}(t), \quad \text{with } \tilde{u}_i(t^n) = u(t^n), \; \text{time step } \tau - \epsilon,
    \frac{d u_i(t)}{dt} = A(\tilde{u}_i(t)) \tilde{u}_i(t) + B(\tilde{u}_i(t)) u_i(t), \quad \text{with } u_i(t^n) = u(t^n), \; \text{time step } \tau + \epsilon,
    where i = 1 , 2 , , m . For the initialization of the iteration, we start with function u 0 ( t ) , which verifies the initial condition u 0 ( 0 ) = u 0 . After we have performed m iterations of the SLIS, we apply the approximated solution u ( t n + 1 ) = u m ( t n + 1 ) for the next time-step, until the final step n + 1 = N .
    We apply the error estimates as in Algorithm 2 and then go to the next time-step.
    Here, we chose i = 1, but the error estimates also work for i = 1, 2, \ldots, m.
  • Shifted Quasilinear Iterative Splitting (SQIS)
    The SQIS solves in the first equation the nonlinear part with operator A, with the given right-hand side built from operator B, for a minus shift of ϵ in the time-step. Then, it solves in the second equation the nonlinear part with operator B, with the right-hand side built from operator A, for a plus shift of ϵ in the time-step, using the solution of the first equation. The two solver steps are iterated m times before we pass to the next interval.
    \frac{d \tilde{u}_i(t)}{dt} = A(\tilde{u}_i(t)) \tilde{u}_i(t) + B(u_{i-1}(t)) u_{i-1}(t), \quad \text{with } \tilde{u}_i(t^n) = u(t^n), \; \text{time step } \tau - \epsilon,
    \frac{d u_i(t)}{dt} = A(\tilde{u}_i(t)) \tilde{u}_i(t) + B(u_i(t)) u_i(t), \quad \text{with } u_i(t^n) = u(t^n), \; \text{time step } \tau + \epsilon,
    where i = 1, 2, \ldots, m. For the initialization of the iteration, we start with a function u_0(t), which satisfies the initial condition u_0(0) = u_0. After we have performed m iterations of the SQIS, we apply the approximated solution u(t^{n+1}) = u_m(t^{n+1}) for the next time-step, until the final step n + 1 = N.
    We apply the error estimates as in Algorithm 2 and then go to the next time-step.
    Here, we chose i = 1, but the error estimates also work for i = 1, 2, \ldots, m.
The error estimate is given as
err = \| u_i(t^{n+1}) - u_{i,\epsilon}(t^{n+1}) \| \le \eta,
where \eta is a given error tolerance, e.g., \eta = 10^{-5}.
Further the adaptive time-stepping is
\Delta t_{new} = \nu \, \Delta t \left( \frac{\eta}{err} \right)^{1/(2i)},
where we apply \nu > 0, near 1, as a safety factor.
The Algorithm is given in Algorithm 2:
Algorithm 2. 
1. Compute the local time-steps with the iterative and the shifted iterative method, i.e., u_i(t^{n+1}) and u_{i,\epsilon}(t^{n+1}).
2. Compute the error estimate
err = \| u_i(t^{n+1}) - u_{i,\epsilon}(t^{n+1}) \|.
3. If err \le \eta, the error tolerance is satisfied and we accept the time-step, i.e., u(t^{n+1}) = u_i(t^{n+1}); the next time-step is \Delta t^{n+1} = \Delta t_{new}.
Otherwise, we reject the time-step and restart the current time-interval with \Delta t^n = \Delta t_{new}.
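One step of the shifted linear iterative splitting with its error estimate and the step-size update above can be sketched as follows, again on the scalar model u' = a u + b u² with backward Euler sub-steps; the model, parameters, and tolerance are illustrative assumptions.

```python
# One SLIS step (Algorithm 2 sketch) for u' = a*u + b*u**2: the standard
# sweep uses time steps (tau, tau), the shifted sweep (tau - eps, tau + eps);
# their difference serves as the local error estimate.

def slis_step(u_n, a, b, tau, eps, m=1):
    def sweep(dt1, dt2):
        u_prev = u_n
        for _ in range(m):
            # first sub-equation with time step dt1
            u_tilde = (u_n + dt1 * b * u_prev ** 2) / (1.0 - dt1 * a)
            # second sub-equation with time step dt2
            u_new = (u_n + dt2 * a * u_tilde) / (1.0 - dt2 * b * u_tilde)
            u_prev = u_new
        return u_new
    u_plain = sweep(tau, tau)                 # standard iterative step
    u_shift = sweep(tau - eps, tau + eps)     # shifted iterative step
    return u_plain, abs(u_plain - u_shift)    # value and error estimate

tau = 0.01
u1, err = slis_step(1.0, a=-1.0, b=-0.5, tau=tau, eps=0.005 * tau)
eta, nu, i = 1e-6, 0.9, 1
dt_new = nu * tau * (eta / err) ** (1.0 / (2 * i))  # step-size update above
print(err, dt_new)
```

Depending on whether err exceeds the tolerance η, the step is either restarted with dt_new or accepted with dt_new as the next step size.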
Figure 2 gives a graphical illustration of the shifting idea.

3. Error Analysis

The error analysis of the methods is presented in the following.
We make the following assumptions on the nonlinear operators:
Assumption 1. 
  • Estimation of the nonlinear operators:
    \| A(e_i(t)) \| \le \| \tilde{A} \|, \quad t^n \le t \le t^{n+1}, \; i = 0, 1, \ldots, I,
    \| B(e_i(t)) \| \le \| \tilde{B} \|, \quad t^n \le t \le t^{n+1}, \; i = 0, 1, \ldots, I,
    where \tilde{A} and \tilde{B} are bounded operators, such that the Taylor expansion of the operators can be applied.
  • For the nonlinear operators A and B, we estimate the linearized parts by bounded operators \tilde{A}, \tilde{B} : X \to X, where X is an appropriate Banach space. Further, we have a Banach norm for vectors and matrices, which is denoted by \| \cdot \|.
In Theorem 1, we derive the consistency order of the shifted iterative operator-splitting.
Theorem 1. 
Let the operators A, B \in L(X) be nonlinear bounded operators satisfying Assumption 1, and let the abstract Cauchy problem be given as
\partial_t c(t) = A(c(t)) c(t) + B(c(t)) c(t), \quad 0 < t \le T, \quad c(0) = c_0,
where the abstract Cauchy problem (17) has a unique solution. Then, the shifted iterative splitting method (11) is consistent with consistency order O(\tau_n^{2i}), with i = 1, \ldots, m.
Proof. 
We assume \tilde{A} + \tilde{B} \in L(X) and that the linear operators are generators of a uniformly continuous semigroup, such that we have a unique solution c(t) = \exp((\tilde{A} + \tilde{B}) t) c_0.
In the following, we consider the local time-interval [ t n , t n + 1 ] .
Let e_i(t) = c(t) - \tilde{c}_i(t) and e_{i+1}(t) = c(t) - c_i(t) be the local error functions.
The error functions for the shifted time-intervals are computed as
\partial_t e_i(t) = \tilde{A} e_i(t) + \tilde{B} e_{i-1}(t), \quad t \in (t^n, t^{n+1} - \epsilon], \quad e_i(t^n) = 0,
and
\partial_t e_{i+1}(t) = \tilde{A} e_i(t) + \tilde{B} e_{i+1}(t), \quad t \in (t^n, t^{n+1} + \epsilon], \quad e_{i+1}(t^n) = 0,
for i = 1, 3, 5, \ldots, with e_1(0) = 0 and e_0(t) = c(t).
Based on Assumption 1, we can assume that the linearized operators \tilde{A} and \tilde{B} are generators of one-parameter C_0 semigroups, which are given as (\exp(\tilde{A} t))_{t \ge 0} and (\exp(\tilde{B} t))_{t \ge 0}.
In the following, we can write the abstract Cauchy problems with homogeneous initial conditions as
e_i(t) = \int_{t^n}^{t} \exp(\tilde{A}(t - s)) \, \tilde{B} \, e_{i-1}(s) \, ds, \quad t \in [t^n, t^{n+1} - \epsilon],
e_{i+1}(t) = \int_{t^n}^{t} \exp(\tilde{B}(t - s)) \, \tilde{A} \, e_i(s) \, ds, \quad t \in [t^n, t^{n+1} + \epsilon].
We apply the norms for vectors and matrices and can estimate
\| e_i(t) \| \le \| \tilde{B} \| \, \| e_{i-1} \| \int_{t^n}^{t} \| \exp(\tilde{A}(t - s)) \| \, ds, \quad t \in [t^n, t^{n+1} - \epsilon],
\| e_{i+1}(t) \| \le \| \tilde{A} \| \, \| e_i \| \int_{t^n}^{t} \| \exp(\tilde{B}(t - s)) \| \, ds, \quad t \in [t^n, t^{n+1} + \epsilon].
We assume that \tilde{A} and \tilde{B} are generators of semigroups and apply the so-called growth estimation. Then, we can estimate
\| \exp(\tilde{A} t) \| \le K \exp(\omega t), \quad t \ge 0,
\| \exp(\tilde{B} t) \| \le \tilde{K} \exp(\tilde{\omega} t), \quad t \ge 0,
where the estimates hold for some numbers K, \tilde{K} \ge 0 and \omega, \tilde{\omega} \in \mathbb{R}.
In the following, we distinguish between the following two operator-types.
  • We assume that \tilde{A} and \tilde{B} are bounded operators that generate stable semigroups, i.e., \omega, \tilde{\omega} \le 0, see [13,21], or
  • we assume that \tilde{A} and \tilde{B} are operators that generate semigroups with exponential growth, i.e., \omega, \tilde{\omega} > 0, see [13,21].
Then, we have the following two estimates of the two groups of operators:
  • Bounded operators.
    They are estimated as
    \| \exp(\tilde{A} t) \| \le K, \quad t \ge 0, \qquad \| \exp(\tilde{B} t) \| \le \tilde{K}, \quad t \ge 0,
    and we apply these estimates to (21) and obtain the relations
    \| e_i(t) \| \le K \| \tilde{B} \| (\tau_n - \epsilon) \| e_{i-1} \|, \quad t \in [t^n, t^{n+1} - \epsilon],
    \| e_{i+1}(t) \| \le \tilde{K} \| \tilde{A} \| (\tau_n + \epsilon) \| e_i \|, \quad t \in [t^n, t^{n+1} + \epsilon].
  • Operators with exponential growth.
    Here, we assume that (\exp(\tilde{A} t))_{t \ge 0} and (\exp(\tilde{B} t))_{t \ge 0} grow exponentially with some \omega > 0, \tilde{\omega} > 0. Therefore, we can estimate
    \int_{t^n}^{t} \| \exp(\tilde{A}(t - s)) \| \, ds \le K_{\omega}(t), \quad t \in [t^n, t^{n+1} - \epsilon],
    \int_{t^n}^{t} \| \exp(\tilde{B}(t - s)) \| \, ds \le \tilde{K}_{\tilde{\omega}}(t), \quad t \in [t^n, t^{n+1} + \epsilon],
    where
    K_{\omega}(t) = \frac{K}{\omega} \left( \exp(\omega (t - t^n)) - 1 \right), \quad t \in [t^n, t^{n+1} - \epsilon],
    \tilde{K}_{\tilde{\omega}}(t) = \frac{\tilde{K}}{\tilde{\omega}} \left( \exp(\tilde{\omega} (t - t^n)) - 1 \right), \quad t \in [t^n, t^{n+1} + \epsilon].
    Further, we apply
    K_{\omega}(t) \le \frac{K}{\omega} \left( \exp(\omega \tau_n) - 1 \right) = K \tau_n + O(\tau_n^2),
    \tilde{K}_{\tilde{\omega}}(t) \le \frac{\tilde{K}}{\tilde{\omega}} \left( \exp(\tilde{\omega} \tau_n) - 1 \right) = \tilde{K} \tau_n + O(\tau_n^2).
The estimates (24) and (30) result in
\| e_i \| \le K \| \tilde{B} \| (\tau_n - \epsilon) \| e_{i-1} \| + O((\tau_n - \epsilon)^2),
\| e_{i+1} \| \le \tilde{K} \| \tilde{A} \| (\tau_n + \epsilon) \| e_i \| + O((\tau_n + \epsilon)^2),
and applying the error estimates of Equations (31) and (32) recursively, we obtain
\| e_{i+1} \| \le K \tilde{K} \| \tilde{A} \| \| \tilde{B} \| \, \| e_{i-1} \| \, (\tau_n^2 - \epsilon^2) + O(\tau_n^3) + O(\epsilon \tau_n^2).
Applying Equation (33) recursively then yields the stated result. □
Remark 2. 
Based on the derivation of the error of the shifted iterative method, we obtain the error estimate
err = \| u_i(t^{n+1}) - u_{i,\epsilon}(t^{n+1}) \| \le \eta,
where err = C \Delta t^{2i-1} and also \eta = C \Delta t_{new}^{2i-1}, and we obtain
\Delta t_{new} = \nu \, \Delta t \left( \frac{\eta}{err} \right)^{1/(2i-1)}.
Remark 3. 
In realistic applications, an optimal relation between the time-step \tau_n and the number of iterative steps 2i - 1, see Equation (35), is necessary. In practical experiments, we observed that \nu > 0 but near 1 and i = 3, 4, 5 iterations are sufficient to obtain an optimal new time-step \Delta t_{new}. To improve the stopping criterion of the iterative process, we can additionally define an error bound; for example, | c_i - c_{i-1} | \le err with err = 10^{-4} can be used to restrict us to an appropriately low number of iterative steps.
The order of accuracy can be improved by the choice of the initial iteration function, e.g., additional pre-steps with standard splitting approaches, see [13].
Based on our assumption about the initial solutions, we initialize with exact solutions or we apply higher-order interpolated split solutions. This assumption allows us to derive a theory for the exactness of the iterative methods, see also [13].

4. Numerical Results

In this section, we present the numerical results based on our novel iterative splitting methods for nonlinear ordinary and partial differential equations. We verify our theoretical error estimates and apply the shifted iterative splitting methods as a new solver class.

4.1. First Numerical Example: Bernoulli Equation

In the first example, we apply a nonlinear differential equation, which is given as the Bernoulli equation, see
\frac{\partial u(t)}{\partial t} = (\lambda_1 + \lambda_3) u(t) + (\lambda_2 + \lambda_4) (u(t))^p, \quad t \in [0, T], \quad \text{with } u(0) = 1.
For the Bernoulli equation, we can derive analytical solutions as reference solutions, see [15,22]. The analytical solutions are given as
u(t) = \exp((\lambda_1 + \lambda_3) t) \left[ -\frac{\lambda_2 + \lambda_4}{\lambda_1 + \lambda_3} \exp((\lambda_1 + \lambda_3)(p - 1) t) + c \right]^{1/(1-p)}.
Using u(0) = 1, we find that c = 1 + \frac{\lambda_2 + \lambda_4}{\lambda_1 + \lambda_3}, so
u(t) = \exp((\lambda_1 + \lambda_3) t) \left[ 1 + \frac{\lambda_2 + \lambda_4}{\lambda_1 + \lambda_3} \left( 1 - \exp((\lambda_1 + \lambda_3)(p - 1) t) \right) \right]^{1/(1-p)}.
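The closed-form solution above can be sanity-checked numerically: the following snippet verifies by central differences that it satisfies u' = a u + b u^p with a = λ1 + λ3 and b = λ2 + λ4. The parameter values here are illustrative, not those of the experiment below.

```python
import math

# Check that u(t) = exp(a*t) * [1 + (b/a)*(1 - exp(a*(p-1)*t))]**(1/(1-p))
# satisfies the Bernoulli equation u' = a*u + b*u**p (illustrative parameters).

a, b, p = 1.0, 0.3, 2

def u(t):
    core = 1.0 + (b / a) * (1.0 - math.exp(a * (p - 1) * t))
    return math.exp(a * t) * core ** (1.0 / (1 - p))

t, h = 0.2, 1e-6
lhs = (u(t + h) - u(t - h)) / (2 * h)   # central-difference derivative
rhs = a * u(t) + b * u(t) ** p
print(abs(lhs - rhs))  # small residual: the formula satisfies the ODE
```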
For the applications, we apply the following parameters, p = 2 , λ 1 = 1 , λ 2 = 0.5 , λ 3 = 100 , λ 4 = 20 , T = 0.2 , and u ( 0 ) = 1 .
We apply the following operators for the splitting.
  • operator A: A = ( λ 1 + λ 3 ) ,
  • operator B: B ( u ) = ( λ 2 + λ 4 ) ( u ( t ) ) p 1 .
We apply the backward Euler method to approximate the derivative in each subinterval [t^n, t^{n+1}], n = 0, 1, \ldots, N, and solve the resulting equation by using the fixed-point method and Newton's method with tolerance 10^{-12}, allowing a maximum of three iterations. The accuracy of the methods is assessed by comparing the numerical result u_{num} with the analytical solution u given by (37). We compute the maximum and mean differences at the nodes t^n, according to
e_{max} = \max_n | u_{num}(t^n) - u(t^n) |,
and
e_{mean} = \frac{1}{N} \sum_n | u_{num}(t^n) - u(t^n) |.
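The two accuracy measures can be computed directly from arrays of nodal values; the data here are hypothetical, for illustration only.

```python
# e_max and e_mean from nodal values (hypothetical data)
u_num = [1.0, 0.80, 0.65, 0.53]
u_ref = [1.0, 0.81, 0.66, 0.52]
N = len(u_num)

e_max = max(abs(x - y) for x, y in zip(u_num, u_ref))
e_mean = sum(abs(x - y) for x, y in zip(u_num, u_ref)) / N
print(e_max, e_mean)  # 0.01 and 0.0075, up to floating-point rounding
```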
For the Shifted Strang–Marchuk splitting, we analyze the accuracy (with respect to the analytic solution) and the cost of the algorithm for different tolerances \eta and coarsening factors 1 + \kappa, where we have set \kappa = 4 \eta^{1/4}. Taking larger values of \kappa reduces the number of time intervals but increases the number of tentative steps, where \Delta t^n must be reduced in order to satisfy the error tolerance criterion. The value of \kappa has been chosen experimentally in order to minimize the total number of steps.
In each splitting step, the differential equation is approximated by the backward Euler method and solved using the fixed-point (BEFP) or Newton's (BEN) method. The cost of the algorithm and the final accuracy depend on the error tolerance \eta and the coarsening factor 1 + \kappa. We compare the shifted ABA operator splitting method with the shifted variants of the iterative splitting methods considered above.
Table 1 shows that the accuracy is roughly proportional to the square root of the tolerance \eta, and the number of functional evaluations is inversely proportional to the same quantity. Newton's method requires fewer iterations to fulfill the tolerance; thus, if the number of time steps is similar to that of the fixed-point method, it needs fewer functional evaluations. Nevertheless, Newton's method also evaluates the derivative, which reduces its advantage over the fixed-point method.
For the Shifted Linear Iterative Splitting method, we take \kappa = 4 \eta. We obtain accuracies similar to Strang–Marchuk's algorithm, now working with higher error tolerances \eta, as shown in Table 2. The accuracy is of the same order as \eta, whereas the computational cost is slightly higher than that of Strang–Marchuk's algorithm.
The results for the Shifted Quasilinear Iterative Splitting, see Table 3, are quite similar to those of the linear splitting. Increasing the number of iterations, iter = 2, 3, \ldots, results in a linear increase of the cost without any accuracy improvement.

4.2. Second Numerical Example: Mixed Convection–Diffusion and Burgers Equation

In the second numerical example, we apply a coupled partial differential equation (PDE): a convection–diffusion equation coupled with a Burgers' equation in 2D, called the mixed convection–diffusion and Burgers equation (MCDB), given as
\partial_t u = -\frac{1}{2} u (\partial_x u + \partial_y u) - \frac{1}{2} (\partial_x u + \partial_y u) + \mu (\partial_{xx} u + \partial_{yy} u) + f(x, y, t), \quad (x, y, t) \in \Omega \times [0, T],
u(x, y, 0) = u_{ana}(x, y, 0), \quad (x, y) \in \Omega,
u(x, y, t) = u_{ana}(x, y, t), \quad (x, y, t) \in \partial\Omega \times [0, T],
where the domains are given as Ω = [ 0 , 1 ] × [ 0 , 1 ] and T = 1.25 . The viscosity is μ .
For such a mixed PDE, we can derive an analytical solution, which is
u_{ana}(x, y, t) = \frac{1}{1 + \exp\left( \frac{x + y - t}{2 \mu} \right)},
from which we can derive the right-hand side f(x, y, t).
By considering the following operators,
A(u) v = -\frac{1}{2} u (\partial_x v + \partial_y v) + \frac{1}{2} \mu (\partial_{xx} v + \partial_{yy} v),
B v = -\frac{1}{2} (\partial_x v + \partial_y v) + \frac{1}{2} \mu (\partial_{xx} v + \partial_{yy} v) + f(x, y, t),
the MCDB Equation (38) is split into the Burgers' term, A, and the convection–diffusion term, B, and we obtain
\partial_t u = A(u) u + B u.
We deal with different viscosities: low viscosity \mu = 0.5 and high viscosity \mu = 5. The spatial domain is discretized by taking a rectangular mesh with n_x = n_y = 16 intervals and applying standard second-order divided-difference approximations. The resulting differential system is solved by the same methods as in the previous example. The coarsening strategy applied here when err < \eta is
\Delta t_{new} = \min(1 + \eta^2 / err, \; 2) \, \Delta t,
where err is computed with the vector norm of the values u_{i,j} at the nodes (x_i, y_j) at each time step.
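A minimal sketch of this coarsening rule (the numbers are hypothetical):

```python
def coarsen(dt, err, eta):
    # growth factor 1 + eta**2/err, capped at 2, applied on acceptance
    return min(1.0 + eta ** 2 / err, 2.0) * dt

print(coarsen(0.01, 1e-4, 1e-3))  # factor 1.01 -> 0.0101
print(coarsen(0.01, 1e-6, 1e-3))  # factor capped at 2 -> 0.02
```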
For the shifted Strang–Marchuk splitting method we take ϵ = 0.05 and different values of η .
Table 4 shows the results of solving the equation with low viscosity μ = 0.5 using different tolerances. The solutions of the differential equations are approximated by using back-Euler fixed point method (BEFP) or back-Euler–Newton’s method (BEN). Both methods perform similarly in cost and accuracy in this case.
The corresponding results for solving the equation with high viscosity, \mu = 5, are shown in Table 5. BEFP requires many more time steps than BEN but reaches higher accuracy.
For the linear and quasilinear shifted iterative splitting methods, we take \epsilon = 0.5 and the same coarsening strategy. Table 6, Table 7, Table 8 and Table 9 show the cost and accuracy in the low- and high-viscosity cases for the shifted linear and quasilinear iterative splitting methods, using the backward Euler fixed-point method and the backward Euler Newton's method as solvers.
The shifted linear and quasilinear iterative splitting methods give similar results in all the considered cases. The behavior of the back-Euler fixed point method is worse in the low viscosity case than in the high viscosity case, as in the shifted Strang–Marchuk splitting method.

4.3. Third Numerical Example: Convection-Diffusion-Reaction Equation

In the third numerical example, we deal with a PDE, which is a convection–diffusion–reaction equation in 3D (CDR), see the example in [23]:
\partial_t u = -v \cdot \nabla u + \nabla \cdot (D \nabla u) - k u, \quad (x, y, z, t) \in \Omega \times [t_0, T],
u(x, y, z, t_0) = u_0(x, y, z), \quad (x, y, z) \in \Omega,
u(x, y, z, t) = 0, \quad (x, y, z, t) \in \partial\Omega \times [t_0, T],
where v = (v_x, v_y, v_z)^t is the velocity, D \in \mathbb{R}^{3 \times 3} is a diffusion matrix, u is the concentration, k is a reaction parameter, and \Omega = [0, 4]^3, with T = 10.0.
We have a special analytical solution for an instantaneous point source, which is given as
u_{ana}(x, y, z, t) = \frac{M}{(4 \pi t)^{3/2} \sqrt{D_{11} D_{22} D_{33}}} \exp\left( -\frac{(x - x_1 - v_x t)^2}{4 D_{11} t} - \frac{(y - y_1)^2}{4 D_{22} t} - \frac{(z - z_1)^2}{4 D_{33} t} \right).
We have the following parameters.
  • instantaneous point source: ( x 1 , y 1 , z 1 ) = ( 1 , 1 , 1 ) , M = 1.0 ,
  • initial start at t 0 = 1 , where we initialise the equation with u 0 ( x , y , z ) = u ana ( x , y , z , t 0 ) ,
  • the diffusion parameters are given as D_{11} = 0.01, D_{22} = 0.02, D_{33} = 0.03; all other entries are 0,
  • the velocity is given as ( v x , v y , v z ) = ( 0.1 , 0 , 0 ) ,
  • the reaction parameter is given as k = 0.1 .
By considering the following operators, we decouple into the fast velocity–reaction part and the slow diffusion part,
A u = \nabla \cdot (D \nabla u), \qquad B u = -v \cdot \nabla u - k u,
and we split (50) into fast and slow parts:
\partial_t u = A u + B u.
The equation is spatially discretized taking a number, n x , n y , n z , of equal subintervals in each direction in Ω , and approximating the spatial derivatives by standard second order divided differences, resulting in a linear differential system.
We first check that the discretization error decreases with the size of the spatial subintervals by solving the differential system using Heun's method and the Strang–Marchuk method with different numbers of spatial subintervals. The numerical results u_{num} in the node points (x_i, y_j, z_k) are compared with the analytical solution u_{ana} in the same points at the final time T = 10, computing the maximum and the mean absolute differences as before. Table 10 shows that there is no significant difference between the two methods.
Now we fix the number of spatial subintervals n x = n y = n z = 16 , and analyze the performance of the adaptive methods for the CDR example. To estimate the convergence of the methods, we compare their results with the approximation obtained by integrating the differential equation by Heun’s method using the same time steps. Table 11 shows the results of the shifted Strang–Marchuk splitting and the shifted linear iterative splitting for different tolerances, η . Lower tolerances produce lower maximum and mean errors but require more time steps. The relationship between the number of time steps and the mean error is depicted in Figure 3.
Remark 4. 
Figure 3 shows the difference in convergence behaviour between the shifted Strang–Marchuk splitting (SSMS) and the shifted linear iterative splitting (SLIS) method. The SSMS method only exhibits convergence order 1, whereas the SLIS method converges with a higher order, here at least order 2. The adaptive iterative scheme is therefore considerably more effective and accurate than the noniterative splitting scheme. This result confirms the proposition that the iterative splitting scheme is a higher-order scheme, see [3,15], and that the higher order is preserved in the adaptive version.
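The observed orders can be read off from pairs of (time steps, mean error) values as in Figure 3; a small helper sketch (the numbers below are illustrative only and are not taken from the tables):

```python
import math

# Observed convergence order from two (time steps, mean error) pairs:
# error ~ C * n^(-p)  =>  p = log(e1/e2) / log(n2/n1).
def observed_order(n1, e1, n2, e2):
    return math.log(e1 / e2) / math.log(n2 / n1)

# Halving the error when doubling the steps indicates order 1;
# quartering it indicates order 2.
order_one = observed_order(100, 1.0e-3, 200, 5.0e-4)
order_two = observed_order(100, 1.0e-3, 200, 2.5e-4)
```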

4.4. Fourth Numerical Example: Nonlinear Diffusion Equation

Our fourth numerical example is a partial differential equation, a nonlinear diffusion equation; see the example in [24].
The multicomponent diffusion equation is based on the idea of a Maxwell–Stefan diffusion equation, which is highly nonlinear, see [20,24]:
\[
\partial_t u = \nabla \cdot (A(u) \nabla u), \quad (x, t) \in \Omega \times [t_0, T],
\]
\[
u(x, t_0) = u_0(x), \quad x \in \Omega, \qquad u(x, t) = 0, \quad (x, t) \in \partial\Omega \times [t_0, T],
\]
where \( A(u) \) is a nonlinear diffusion matrix and \( \Omega = [0, 1] \), \( t_0 = 0 \), \( T = 1.0 \).
An application of such a nonlinear diffusion (NLD) equation is given by
\[
\partial_t u_1 = \nabla \cdot ( D_{12} \nabla u_1 ), \quad (x, t) \in \Omega \times [t_0, T],
\]
\[
\partial_t u_2 = \nabla \cdot \left( \left( \frac{1}{D_{23}} + \beta u_1 \right)^{-1} \left( \nabla u_2 + \beta D_{12} u_2 \nabla u_1 \right) \right), \quad (x, t) \in \Omega \times [t_0, T],
\]
\[
u(x, t) = 0, \quad (x, t) \in \partial\Omega \times [t_0, T],
\]
where we have \( \alpha = \frac{1}{D_{12}} - \frac{1}{D_{13}} \) and \( \beta = \frac{1}{D_{12}} - \frac{1}{D_{23}} \).
Further, we apply the following parameters in the NLD Equations (41) and (42).
The parameters and the initial and boundary conditions are given as follows:
  • Uphill example, known as the semi-degenerate Duncan and Toor experiment, see [25]:
    D 12 = D 13 = 0.833 and D 23 = 0.168 , where we have α = 0 .
  • Asymptotic example, known as the asymptotic Duncan and Toor experiment, see [25]:
    \( D_{12} = 0.0833 \), \( D_{13} = 0.680 \) and \( D_{23} = 0.168 \), where we have \( \alpha \neq 0 \).
  • We apply J = 140 , where J is the number of spatial grid points.
  • Based on the explicit discretization method, we have to fulfill the time-step restriction given by the CFL condition:
    \[
    \Delta t \le \frac{(\Delta x)^2}{2 \max\{ D_{12}, D_{13}, D_{23} \}}.
    \]
  • The computational domains are given with: Ω = [ 0 , 1 ] is the spatial domain and [ 0 , T ] = [ 0 , 1 ] is the time domain.
  • The initial conditions are as follows.
    • Uphill example
      \[
      u_1^{in}(x) = \begin{cases} 0.8 & \text{if } 0 \le x < 0.25, \\ 1.6\,(0.75 - x) & \text{if } 0.25 \le x < 0.75, \\ 0.0 & \text{if } 0.75 \le x \le 1.0, \end{cases}
      \]
      \[
      u_2^{in}(x) = 0.2 \quad \text{for all } x \in \Omega = [0, 1].
      \]
    • Asymptotic example
      \[
      u_1^{in}(x) = \begin{cases} 0.8 & \text{if } 0 \le x \le 0.5, \\ 0.0 & \text{else}, \end{cases}
      \]
      \[
      u_2^{in}(x) = 0.2 \quad \text{for all } x \in \Omega = [0, 1].
      \]
  • For the boundary conditions, we apply no-flux type conditions:
    \[
    \partial_x u_1 = \partial_x u_2 = 0 \quad \text{on } \partial\Omega \times [0, 1].
    \]
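As a quick sanity check of the CFL-type restriction above, the admissible time step for the uphill parameters can be computed directly; taking the grid spacing as \( \Delta x = 1/(J - 1) \) for \( J \) grid points on \( [0, 1] \) is an assumption of this sketch:

```python
# CFL-type bound dt <= dx^2 / (2 * max{D12, D13, D23}) for the uphill
# parameters; dx = 1/(J - 1) is an assumed interpretation of the J points.
J = 140
dx = 1.0 / (J - 1)
D12, D13, D23 = 0.833, 0.833, 0.168
dt_max = dx ** 2 / (2.0 * max(D12, D13, D23))  # on the order of 3e-5
```

Such a small bound explains why the explicit, nonadaptive runs in Table 12 need very large numbers of time steps.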
We apply the following splitting of the operators with the one-dimensional spatial derivatives:
\[
\partial_t u = (A(u) + B(u)) u,
\]
where the operators give the following decomposition of the \( u_1 \) and \( u_2 \) parts with \( \xi \in [0, 1] \):
\[
A(u) = \begin{pmatrix} \xi D_{12} \frac{\partial^2}{\partial x^2} & 0 \\ \xi \frac{\partial}{\partial x} \left( \frac{1}{D_{23}} + \beta u_1 \right)^{-1} \beta D_{12} u_2 \frac{\partial}{\partial x} & (1 - \xi) \frac{\partial}{\partial x} \left( \frac{1}{D_{23}} + \beta u_1 \right)^{-1} \frac{\partial}{\partial x} \end{pmatrix},
\]
\[
B(u) = \begin{pmatrix} (1 - \xi) D_{12} \frac{\partial^2}{\partial x^2} & 0 \\ (1 - \xi) \frac{\partial}{\partial x} \left( \frac{1}{D_{23}} + \beta u_1 \right)^{-1} \beta D_{12} u_2 \frac{\partial}{\partial x} & \xi \frac{\partial}{\partial x} \left( \frac{1}{D_{23}} + \beta u_1 \right)^{-1} \frac{\partial}{\partial x} \end{pmatrix},
\]
where \( \xi = 0.5 \) yields a symmetric decomposition.
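The role of the weight ξ can be illustrated on a frozen (linearized) toy operator: for every ξ the two split operators recombine to the full operator, and ξ = 0.5 gives the symmetric choice. A minimal sketch with placeholder matrices `L1`, `L2` standing in for the two spatial blocks (not the paper's operators):

```python
import numpy as np

# Toy xi-weighted decomposition of a frozen operator L1 + L2.
rng = np.random.default_rng(0)
L1 = rng.standard_normal((4, 4))
L2 = rng.standard_normal((4, 4))

def split(xi):
    A = xi * L1 + (1.0 - xi) * L2
    B = (1.0 - xi) * L1 + xi * L2
    return A, B

for xi in (0.0, 0.25, 0.5, 0.75, 1.0):
    A, B = split(xi)
    assert np.allclose(A + B, L1 + L2)  # splitting is consistent for all xi
```

For ξ = 0.5 the two operators coincide, which mirrors the symmetric decomposition used in the experiments.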
We first observe that the nonadaptive methods require a very small time step to converge, and we estimate their convergence by successively doubling the number of time steps and comparing the results. The results are shown in Table 12 for the direct integration and for the unshifted Strang–Marchuk method. The errors are computed by measuring the difference between the result obtained with a given number of time steps and the result obtained with twice that number at every shared temporal and spatial node. The error estimates for the Strang–Marchuk splitting in the case of 40,000 time steps are not available because the method diverges with 20,000 time steps.
Figure 4 illustrates the uphill phenomenon, where the solutions u 1 and u 2 increase before reaching the stationary state.
The adaptive methods result in an important reduction of the number of time steps while obtaining similar error estimates. Table 13 and Table 15 show the results for the uphill case and for the asymptotic case of the nonlinear diffusion equation, respectively. Here, the errors are computed by comparing the solution of the shifted methods with the one obtained by direct integration, using the same time steps as the adaptive method.
The shifted Strang–Marchuk method behaves better for \( \epsilon = 0.03 \), whereas the shifted linear and quasilinear splitting methods work well for \( \epsilon = 0.01 \). In Table 14, the behavior of the considered splitting methods is studied for different splitting weights \( \xi \).
Figure 5 depicts the regions in the space-time plane where the uphill phenomenon takes place, that is, where \( N_2 \) and \( \partial_x u_2 \) have the same sign. The equation is solved by the shifted linear iterative splitting with \( \eta = 1.0 \times 10^{-5} \).
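The sign test behind Figure 5 can be sketched as follows; the arrays `N2` and `u2` below are illustrative placeholders, not the computed solution:

```python
import numpy as np

# Mark nodes where the flux N2 and the gradient du2/dx share the same sign
# (uphill diffusion); placeholder data stands in for the computed solution.
x = np.linspace(0.0, 1.0, 11)
u2 = 0.2 + 0.1 * np.sin(np.pi * x)     # placeholder concentration profile
du2dx = np.gradient(u2, x)
N2 = 0.05 * du2dx.copy()               # placeholder flux
N2[:3] *= -1.0                         # force a counter-gradient patch
uphill = N2 * du2dx > 0                # True where diffusion runs uphill
```

Collecting this Boolean mask over all time steps yields the space-time regions shown in the figure.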

5. Conclusions and Discussion

We presented a novel adaptive iterative splitting approach for partial differential equations of convection–diffusion–reaction type. The numerical analysis shows the convergence of the schemes, for which we apply a shift in time of the methods. In the numerical experiments, we consider different state-of-the-art nonlinear convection–diffusion equations and obtain benefits in both the computational time and the accuracy of the methods. The adaptive splitting schemes allow us to control the errors of the scheme and to reduce the computational time by applying smaller and larger time steps where appropriate.

Author Contributions

The theory, the formal analysis and the methodology presented in this paper were developed by J.G. The software development and the numerical validation of the methods were done by J.L.H. and E.M. The paper was written by J.G., J.L.H. and E.M. and was corrected and edited by J.G., J.L.H. and E.M. The writing review was done by J.G. The supervision and project administration were done by J.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by German Academic Exchange Service grant number 91588469.

Acknowledgments

We acknowledge support by the DFG Open Access Publication Funds of the Ruhr-Universität Bochum, Germany, and by the Ministerio de Economía y Competitividad, Spain, under grant PGC2018-095896-B-C21-C22.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Auzinger, W.; Herfort, W. Local error structures and order conditions in terms of Lie elements for exponential splitting schemes. Opusc. Math. 2014, 34, 243–255. [Google Scholar] [CrossRef]
  2. Auzinger, W.; Koch, O.; Quell, M. Adaptive high-order splitting methods for systems of nonlinear evolution equations with periodic boundary conditions. Numer. Algor. 2017, 75, 261–283. [Google Scholar] [CrossRef]
  3. Geiser, J. Iterative Splitting Methods for Differential Equations; Numerical Analysis and Scientific Computing Series; Taylor & Francis Group: Boca Raton, FL, USA; London, UK; New York, NY, USA, 2011. [Google Scholar]
  4. Descombes, S.; Massot, M. Operator splitting for nonlinear reaction-diffusion systems with an entropic structure: Singular perturbation and order reduction. Numer. Math. 2004, 97, 667–698. [Google Scholar] [CrossRef]
  5. Descombes, S.; Dumont, T.; Louvet, V.; Massot, M. On the local and global errors of splitting approximations of reaction-diffusion equations with high spatial gradients. Int. J. Comput. Math. 2007, 84, 749–765. [Google Scholar] [CrossRef]
  6. McLachlan, R.I.; Quispel, G.R.W. Splitting methods. Acta Numer. 2002, 11, 341–434. [Google Scholar] [CrossRef]
  7. Trotter, H.F. On the product of semi-groups of operators. Proc. Am. Math. Soc. 1959, 10, 545–551. [Google Scholar] [CrossRef]
  8. Strang, G. On the construction and comparison of difference schemes. SIAM J. Numer. Anal. 1968, 5, 506–517. [Google Scholar] [CrossRef]
  9. Jahnke, T.; Lubich, C. Error bounds for exponential operator splittings. BIT Numer. Math. 2000, 40, 735–745. [Google Scholar] [CrossRef]
  10. Geiser, J. Iterative Semi-implicit Splitting Methods for Stochastic Chemical Kinetics. In Finite Difference Methods: Theory and Applications; Dimov, I., Faragó, I., Vulkov, L., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 35–47. [Google Scholar]
  11. Nevanlinna, O. Remarks on Picard-Lindelöf Iteration, Part I. BIT 1989, 29, 328–346. [Google Scholar] [CrossRef]
  12. Vandewalle, S. Parallel Multigrid Waveform Relaxation for Parabolic Problems; Teubner Skripten zur Numerik, B.G. Teubner Stuttgart: Stuttgart, Germany, 1993. [Google Scholar]
  13. Farago, I.; Geiser, J. Iterative Operator-Splitting Methods for Linear Problems. Int. J. Comput. Sci. Eng. 2007, 3, 255–263. [Google Scholar] [CrossRef]
  14. Descombes, S.; Duarte, M.; Dumont, T.; Louvet, V.; Massot, M. Adaptive time splitting method for multi-scale evolutionary partial differential equations. Confluentes Math. 2011, 3, 413–443. [Google Scholar] [CrossRef]
  15. Geiser, J. Iterative Operator-Splitting Methods with higher order Time-Integration Methods and Applications for Parabolic Partial Differential Equations. J. Comput. Appl. Math. 2008, 217, 227–242. [Google Scholar] [CrossRef]
  16. Dimov, I.; Farago, I.; Havasi, A.; Zlatev, Z. Different splitting techniques with application to air pollution models. Int. J. Environ. Pollut. 2008, 32, 174–199. [Google Scholar] [CrossRef]
  17. Karlsen, K.H.; Lie, K.-A.; Natvig, J.R.; Nordhaug, H.F.; Dahle, H.K. Operator splitting methods for systems of convection–diffusion equations: Nonlinear error mechanisms and correction strategies. J. Comput. Phys. 2001, 173, 636–663. [Google Scholar] [CrossRef]
  18. Vabishchevich, P. Additive Operator-Difference Schemes: Splitting Schemes; De Gruyter: Berlin, Germany, 2014. [Google Scholar]
  19. Geiser, J. Iterative operator-splitting methods for nonlinear differential equations and applications. Numer. Methods Partial Differ. Equ. 2011, 27, 1026–1054. [Google Scholar] [CrossRef]
  20. Geiser, J. Iterative solvers for the Maxwell–Stefan diffusion equations: Methods and applications in plasma and particle transport. Cogent Math. 2015, 2, 1092913. [Google Scholar] [CrossRef]
  21. Engel, K.-J.; Nagel, R. One-Parameter Semigroups for Linear Evolution Equations; Springer: New York, NY, USA, 2000. [Google Scholar]
  22. Geiser, J.; Hueso, J.L.; Martinez, E. New versions of iterative splitting methods for the momentum equation. J. Comput. Appl. Math. 2017, 309, 359–370. [Google Scholar] [CrossRef]
  23. Socolofsky, S.A.; Jirka, G.H. Environmental fluid mechanics. Part I: Mass transfer and diffusion. In Engineering-Lectures, 2nd ed.; University of Karlsruhe, Institute of Hydromechanics: Karlsruhe, Germany, 2004. [Google Scholar]
  24. Boudin, L.; Grec, B.; Salvarani, F. A mathematical and numerical analysis of the Maxwell–Stefan diffusion equations. Discrete Contin. Dyn. Syst. Ser. B 2012, 17, 1427–1440. [Google Scholar] [CrossRef]
  25. Duncan, J.B.; Toor, H.L. An experimental study of three component gas diffusion. AIChE J. 1962, 8, 38–41. [Google Scholar] [CrossRef]
Figure 1. Standard Strang splitting and shifted Strang splitting method.
Figure 2. Unshifted iterative splitting and shifted iterative splitting method.
Figure 3. Mean error e m e a n of the shifted Strang-Marchuk splitting (SSMS) and the shifted linear iterative splitting (SLIS) for the CDR equation for different tolerances, η .
Figure 4. Evolution of the magnitudes u 1 and u 2 in the uphill example.
Figure 5. Regions in the space-time domain where N 2 and x u 2 have the same sign.
Table 1. Shifted Strang–Marchuk splitting method for Bernoulli’s equation.
Solver | Tolerance η | Coarsening 1 + κ | Time Steps | Tentative Steps | Total Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
BEFP | 1.0 × 10^−6 | 1.1265 | 47 | 16 | 63 | 1091 | 6.1055 × 10^−3 | 2.5916 × 10^−3
 | 1.0 × 10^−8 | 1.0400 | 344 | 80 | 424 | 7530 | 7.3088 × 10^−4 | 3.2822 × 10^−4
 | 1.0 × 10^−10 | 1.0126 | 3048 | 303 | 3351 | 59,938 | 9.4175 × 10^−5 | 3.5575 × 10^−5
 | 1.0 × 10^−12 | 1.0040 | 29,706 | 1055 | 30,761 | 550,518 | 1.2504 × 10^−5 | 3.6420 × 10^−6
BEN | 1.0 × 10^−6 | 1.1265 | 47 | 16 | 63 | 838 | 6.1709 × 10^−3 | 2.6494 × 10^−3
 | 1.0 × 10^−8 | 1.0400 | 341 | 79 | 420 | 5742 | 7.6231 × 10^−4 | 3.2987 × 10^−4
 | 1.0 × 10^−10 | 1.0126 | 3047 | 303 | 3350 | 40,100 | 9.4168 × 10^−5 | 3.5588 × 10^−5
 | 1.0 × 10^−12 | 1.0040 | 29,706 | 1055 | 30,761 | 368,648 | 1.2526 × 10^−5 | 3.6422 × 10^−6
Table 2. Shifted linear iterative splitting method for Bernoulli’s equation.
Solver | Tolerance η | Coarsening 1 + κ | Time Steps | Tentative Steps | Total Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
BEFP | 1.0 × 10^−4 | 1.0400 | 109 | 7 | 116 | 1358 | 7.0454 × 10^−3 | 3.0242 × 10^−3
 | 1.0 × 10^−5 | 1.0126 | 692 | 39 | 731 | 8656 | 8.4372 × 10^−4 | 4.1488 × 10^−4
 | 1.0 × 10^−6 | 1.0040 | 5752 | 164 | 5916 | 70,530 | 9.1785 × 10^−5 | 4.7166 × 10^−5
 | 1.0 × 10^−7 | 1.0013 | 54,105 | 585 | 54,690 | 654,195 | 9.5495 × 10^−6 | 4.9235 × 10^−6
BEN | 1.0 × 10^−4 | 1.0400 | 108 | 7 | 115 | 912 | 7.1531 × 10^−3 | 3.0507 × 10^−3
 | 1.0 × 10^−5 | 1.0126 | 690 | 39 | 729 | 5824 | 8.5046 × 10^−4 | 4.1542 × 10^−4
 | 1.0 × 10^−6 | 1.0040 | 5748 | 164 | 5912 | 47,288 | 9.2284 × 10^−5 | 4.7179 × 10^−5
 | 1.0 × 10^−7 | 1.0013 | 54,170 | 586 | 54,756 | 438,040 | 9.3885 × 10^−6 | 4.9213 × 10^−6
Table 3. Shifted quasilinear iterative splitting method for Bernoulli’s equation.
Solver | Tolerance η | Coarsening 1 + κ | Time Steps | Tentative Steps | Total Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
BEFP | 1.0 × 10^−4 | 1.0400 | 108 | 7 | 115 | 1350 | 7.1191 × 10^−3 | 3.0513 × 10^−3
 | 1.0 × 10^−5 | 1.0126 | 691 | 39 | 730 | 8652 | 8.4635 × 10^−4 | 4.1525 × 10^−4
 | 1.0 × 10^−6 | 1.0040 | 5750 | 164 | 5914 | 70,534 | 9.1939 × 10^−5 | 4.7178 × 10^−5
 | 1.0 × 10^−7 | 1.0013 | 54,179 | 586 | 54,765 | 655,207 | 9.4160 × 10^−6 | 4.9210 × 10^−6
BEN | 1.0 × 10^−4 | 1.0400 | 107 | 7 | 114 | 1108 | 7.2075 × 10^−3 | 3.0781 × 10^−3
 | 1.0 × 10^−5 | 1.0126 | 689 | 39 | 728 | 7146 | 8.5547 × 10^−4 | 4.1556 × 10^−4
 | 1.0 × 10^−6 | 1.0040 | 5747 | 164 | 5911 | 58,464 | 9.2240 × 10^−5 | 4.7190 × 10^−5
 | 1.0 × 10^−7 | 1.0013 | 54,168 | 586 | 54,754 | 438,030 | 9.3864 × 10^−6 | 4.9214 × 10^−6
Table 4. Solution of mixed convection–diffusion and Burgers (MCDB) equation for μ = 0.5 using the shifted Strang–Marchuk splitting method.
Solver | Tolerance η | Time Steps | Tentative Steps | Total Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
BEFP | 1.0 × 10^−1 | 336 | 211 | 547 | 8752 | 1.8537 × 10^−2 | 1.5565 × 10^−3
 | 1.0 × 10^−2 | 493 | 38 | 531 | 8496 | 7.7173 × 10^−3 | 1.0212 × 10^−3
 | 1.0 × 10^−3 | 1581 | 5 | 1586 | 25,376 | 2.8237 × 10^−3 | 4.6368 × 10^−4
 | 1.0 × 10^−4 | 6356 | 2 | 6358 | 101,728 | 9.6438 × 10^−4 | 2.6878 × 10^−4
BEN | 1.0 × 10^−1 | 65 | 28 | 93 | 1302 | 4.6585 × 10^−2 | 6.1648 × 10^−3
 | 1.0 × 10^−2 | 381 | 18 | 399 | 5586 | 9.8225 × 10^−3 | 1.2524 × 10^−3
 | 1.0 × 10^−3 | 1566 | 5 | 1571 | 21,994 | 2.8012 × 10^−3 | 4.6639 × 10^−4
 | 1.0 × 10^−4 | 6353 | 2 | 6355 | 88,970 | 9.6547 × 10^−4 | 2.6882 × 10^−4
Table 5. Solution of MCDB equation for μ = 5 using the shifted Strang–Marchuk splitting method.
Solver | Tolerance η | Time Steps | Tentative Steps | Total Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
BEFP | 1.0 × 10^−1 | 4899 | 1077 | 5976 | 95,616 | 7.9463 × 10^−3 | 1.0352 × 10^−4
 | 1.0 × 10^−2 | 3375 | 977 | 4352 | 69,632 | 6.2542 × 10^−4 | 1.4242 × 10^−5
 | 1.0 × 10^−3 | 2980 | 159 | 3139 | 50,224 | 1.7561 × 10^−5 | 8.0790 × 10^−6
 | 1.0 × 10^−4 | 3382 | 4 | 3386 | 54,176 | 1.2602 × 10^−5 | 6.7706 × 10^−6
BEN | 1.0 × 10^−1 | 11 | 0 | 11 | 154 | 1.6393 × 10^−3 | 4.0286 × 10^−4
 | 1.0 × 10^−2 | 43 | 1 | 44 | 616 | 9.7024 × 10^−4 | 3.0109 × 10^−4
 | 1.0 × 10^−3 | 339 | 0 | 339 | 4746 | 1.4041 × 10^−4 | 6.0483 × 10^−5
 | 1.0 × 10^−4 | 1118 | 1 | 1119 | 15,662 | 2.3620 × 10^−5 | 1.9871 × 10^−5
Table 6. Results of the shifted linear iterative splitting for the MCDB equation with μ = 0.5 .
Solver | Tolerance η | Time Steps | Tentative Steps | Total Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
BEFP | 1.0 × 10^−1 | 646 | 303 | 949 | 11,388 | 5.0943 × 10^−3 | 9.7351 × 10^−4
 | 5.0 × 10^−2 | 642 | 238 | 880 | 10,560 | 2.7441 × 10^−3 | 3.9256 × 10^−4
 | 2.5 × 10^−2 | 788 | 172 | 960 | 11,520 | 1.6284 × 10^−3 | 1.5681 × 10^−4
 | 1.25 × 10^−2 | 1343 | 152 | 1495 | 17,940 | 5.9673 × 10^−4 | 1.5654 × 10^−4
BEN | 1.0 × 10^−1 | 222 | 164 | 386 | 3860 | 1.4076 × 10^−2 | 6.1344 × 10^−4
 | 5.0 × 10^−2 | 388 | 157 | 545 | 5450 | 4.7347 × 10^−3 | 2.0174 × 10^−4
 | 2.5 × 10^−2 | 706 | 152 | 858 | 8580 | 1.5222 × 10^−3 | 1.3188 × 10^−4
 | 1.25 × 10^−2 | 1337 | 150 | 1487 | 14,870 | 2.6003 × 10^−3 | 1.5871 × 10^−4
Table 7. Results of the shifted linear iterative splitting for the MCDB equation with μ = 5 .
Solver | Tolerance η | Time Steps | Tentative Steps | Total Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
BEFP | 1.0 × 10^−1 | 6530 | 2962 | 9492 | 113,904 | 1.9099 × 10^−2 | 1.0672 × 10^−3
 | 5.0 × 10^−2 | 6458 | 2022 | 8480 | 101,760 | 1.9099 × 10^−2 | 5.1457 × 10^−4
 | 2.5 × 10^−2 | 6440 | 1250 | 7690 | 92,280 | 1.1951 × 10^−3 | 2.2108 × 10^−4
 | 1.25 × 10^−2 | 6407 | 807 | 7214 | 86,568 | 5.1211 × 10^−4 | 8.9769 × 10^−5
BEN | 1.0 × 10^−1 | 37 | 21 | 58 | 580 | 5.3262 × 10^−2 | 6.0472 × 10^−3
 | 5.0 × 10^−2 | 63 | 20 | 83 | 830 | 2.6259 × 10^−2 | 2.5666 × 10^−3
 | 2.5 × 10^−2 | 109 | 18 | 127 | 1270 | 9.7793 × 10^−3 | 9.2603 × 10^−4
 | 1.25 × 10^−2 | 188 | 15 | 203 | 2030 | 3.4315 × 10^−3 | 3.1950 × 10^−4
Table 8. Results of the shifted quasi linear iterative splitting for the MCDB equation with μ = 0.5 .
Solver | Tolerance η | Time Steps | Tentative Steps | Total Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
BEFP | 1.0 × 10^−1 | 647 | 321 | 968 | 11,616 | 5.2855 × 10^−3 | 9.7675 × 10^−4
 | 5.0 × 10^−2 | 643 | 238 | 881 | 10,572 | 2.4622 × 10^−3 | 3.9273 × 10^−4
 | 2.5 × 10^−2 | 788 | 173 | 961 | 11,532 | 1.7081 × 10^−3 | 1.5881 × 10^−4
 | 1.25 × 10^−2 | 6419 | 809 | 7228 | 86,736 | 7.2082 × 10^−4 | 8.9011 × 10^−5
BEN | 1.0 × 10^−1 | 221 | 163 | 384 | 3840 | 1.3667 × 10^−2 | 5.5944 × 10^−4
 | 5.0 × 10^−2 | 388 | 157 | 545 | 5450 | 4.6486 × 10^−3 | 2.0269 × 10^−4
 | 2.5 × 10^−2 | 706 | 152 | 858 | 8580 | 2.2658 × 10^−3 | 1.3493 × 10^−4
 | 1.25 × 10^−2 | 1343 | 152 | 1495 | 17,940 | 6.2250 × 10^−4 | 1.5775 × 10^−4
Table 9. Results of the shifted quasi linear iterative splitting for the MCDB equation with μ = 5 .
Solver | Tolerance η | Time Steps | Tentative Steps | Total Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
BEFP | 1.0 × 10^−1 | 6525 | 2974 | 9499 | 113,988 | 1.9100 × 10^−2 | 1.0647 × 10^−3
 | 5.0 × 10^−2 | 6461 | 2024 | 8485 | 101,820 | 1.9100 × 10^−2 | 5.1301 × 10^−4
 | 2.5 × 10^−2 | 6445 | 1235 | 7680 | 92,160 | 1.1921 × 10^−3 | 2.1909 × 10^−4
 | 1.25 × 10^−2 | 6419 | 809 | 7228 | 86,736 | 7.2082 × 10^−4 | 8.9011 × 10^−5
BEN | 1.0 × 10^−1 | 37 | 21 | 58 | 580 | 5.3071 × 10^−2 | 6.0407 × 10^−3
 | 5.0 × 10^−2 | 63 | 20 | 83 | 830 | 2.6196 × 10^−2 | 2.5650 × 10^−3
 | 2.5 × 10^−2 | 109 | 18 | 127 | 1270 | 9.7690 × 10^−3 | 9.2559 × 10^−4
 | 1.25 × 10^−2 | 188 | 15 | 203 | 2030 | 3.4299 × 10^−3 | 3.1941 × 10^−4
Table 10. Differences between the analytical solution of convection–diffusion–reaction (CDR) equation and the numerical solutions obtained by direct (Heun) integration and by Strang–Marchuk method with different number of spatial subintervals.
Spatial Subintervals | Time Steps | Heun e_max | Heun e_mean | Strang–Marchuk e_max | Strang–Marchuk e_mean
8 | 16 | 0.342564 | 0.019429 | 0.338664 | 0.019334
12 | 24 | 0.108504 | 0.006381 | 0.104289 | 0.006366
16 | 32 | 0.065986 | 0.002939 | 0.063730 | 0.002901
20 | 40 | 0.044801 | 0.001792 | 0.043494 | 0.001774
Table 11. Results of the shifted Strang–Marchuk’s splitting, (SSMS), and the shifted linear iterative splitting, SLIS, for the CDR equation for different tolerances.
Shifted Method | Tolerance η | Time Steps | Tentative Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
SSMS | 1.0 × 10^−4 | 28 | 4 | 372 | 4.1517 × 10^−3 | 1.0990 × 10^−4
 | 2.5 × 10^−5 | 54 | 11 | 768 | 9.1570 × 10^−4 | 2.5527 × 10^−5
 | 6.25 × 10^−6 | 107 | 24 | 1560 | 2.2147 × 10^−4 | 6.2970 × 10^−6
 | 1.5625 × 10^−6 | 211 | 52 | 3144 | 5.6628 × 10^−5 | 1.6200 × 10^−6
 | 3.90625 × 10^−7 | 416 | 103 | 6216 | 1.5064 × 10^−5 | 4.3245 × 10^−7
SLIS | 6.4 × 10^−2 | 33 | 3 | 280 | 1.1334 × 10^−2 | 1.0145 × 10^−3
 | 1.6 × 10^−2 | 68 | 4 | 568 | 1.2867 × 10^−2 | 9.4605 × 10^−4
 | 4.0 × 10^−3 | 150 | 17 | 1328 | 9.4607 × 10^−3 | 6.7794 × 10^−4
 | 1.0 × 10^−3 | 570 | 130 | 5592 | 6.1880 × 10^−3 | 3.3441 × 10^−4
 | 2.5 × 10^−4 | 2256 | 577 | 22,656 | 1.4651 × 10^−3 | 8.0392 × 10^−5
Table 12. Cost and error estimates of the Heun integration (HI) and the Strang–Marchuk (SM) splitting for the nonlinear diffusion equation.
Solver | Time Steps | Functional Evaluations | Uphill e_max | Uphill e_mean | Asymptotic e_max | Asymptotic e_mean
HI | 120,000 | 240,000 | 1.2050 × 10^−2 | 7.0883 × 10^−7 | 1.0045 × 10^−1 | 5.7176 × 10^−7
 | 240,000 | 480,000 | 2.1710 × 10^−3 | 1.1072 × 10^−7 | 1.8856 × 10^−2 | 1.0060 × 10^−7
 | 480,000 | 960,000 | 4.5606 × 10^−4 | 1.8113 × 10^−8 | 4.1844 × 10^−3 | 2.0626 × 10^−8
SM | 40,000 | 240,000 | 1.9853 × 10^−2 | 7.3680 × 10^−7 | — | —
 | 80,000 | 480,000 | 4.3103 × 10^−3 | 1.7807 × 10^−7 | 2.2560 × 10^−1 | 7.4157 × 10^−7
 | 160,000 | 960,000 | 8.2574 × 10^−4 | 4.2266 × 10^−8 | 3.3259 × 10^−2 | 2.4397 × 10^−7
Table 13. Results of the shifted Strang–Marchuk’s splitting (SSMS) the shifted linear iterative splitting (SLIS), and the shifted quasilinear iterative splitting (SQIS), for the uphill case of the NLD equation.
Shifted Method (Parameter) | Tolerance η | Time Steps | Tentative Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
SSMS (ϵ = 0.03) | 1.0 × 10^−2 | 22,711 | 477 | 278,256 | 4.4092 × 10^−1 | 5.4509 × 10^−2
 | 1.0 × 10^−3 | 12,113 | 1297 | 160,920 | 8.4409 × 10^−2 | 4.1336 × 10^−3
 | 1.0 × 10^−4 | 12,105 | 1290 | 160,740 | 4.5592 × 10^−3 | 1.6081 × 10^−4
 | 1.0 × 10^−5 | 12,109 | 1314 | 161,076 | 6.9255 × 10^−4 | 1.4140 × 10^−5
SLIS (ϵ = 0.01) | 1.0 × 10^−2 | 19,788 | 3177 | 183,720 | 9.3208 × 10^−2 | 1.6424 × 10^−2
 | 1.0 × 10^−3 | 19,787 | 3133 | 183,360 | 7.2316 × 10^−3 | 9.9959 × 10^−4
 | 1.0 × 10^−4 | 19,878 | 3158 | 184,288 | 1.1644 × 10^−3 | 9.4958 × 10^−5
 | 1.0 × 10^−5 | 25,126 | 4880 | 240,048 | 1.1369 × 10^−4 | 8.2151 × 10^−6
SQIS (ϵ = 0.01) | 1.0 × 10^−2 | 20,021 | 3119 | 185,120 | 8.3612 × 10^−1 | 1.5261 × 10^−1
 | 1.0 × 10^−3 | 19,784 | 3194 | 183,824 | 7.0992 × 10^−3 | 1.5841 × 10^−3
 | 1.0 × 10^−4 | 19,879 | 3145 | 184,192 | 1.1644 × 10^−3 | 9.5553 × 10^−5
 | 1.0 × 10^−5 | 25,125 | 4895 | 240,160 | 1.1370 × 10^−4 | 8.1793 × 10^−6
Table 14. Results of the shifted Strang–Marchuk’s splitting (SSMS), the shifted linear iterative splitting (SLIS), and the shifted quasilinear iterative splitting (SQIS), for the uphill case of the NLD equation with η = 1 × 10 5 and different values of the weight parameter ξ .
Shifted Method (Parameter) | Weight ξ | Time Steps | Tentative Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
SSMS (ϵ = 0.03) | 0 | 40,160 | 4775 | 539,220 | 2.2711 × 10^−1 | 4.0969 × 10^−2
 | 0.25 | 21,036 | 3991 | 300,324 | 6.1904 × 10^−2 | 1.2819 × 10^−2
 | 0.5 | 12,105 | 1290 | 160,740 | 4.5592 × 10^−3 | 1.6081 × 10^−4
 | 0.75 | 12,094 | 527 | 151,452 | 3.8271 × 10^−3 | 1.9800 × 10^−5
 | 1 | 16,435 | 1581 | 216,192 | 3.9292 × 10^−3 | 1.4966 × 10^−4
SLIS (ϵ = 0.01) | 0 | 39,313 | 7195 | 372,064 | 9.0557 × 10^−4 | 1.5864 × 10^−4
 | 0.25 | 29,515 | 5859 | 282,992 | 9.5854 × 10^−4 | 2.1259 × 10^−4
 | 0.5 | 19,878 | 3158 | 184,288 | 1.1644 × 10^−3 | 9.4958 × 10^−5
 | 0.75 | 22,010 | 2272 | 194,256 | 1.8200 × 10^−3 | 2.9126 × 10^−5
 | 1 | 30,867 | 2064 | 263,448 | 2.5255 × 10^−3 | 2.4335 × 10^−5
SQIS (ϵ = 0.01) | 0 | 39,313 | 7233 | 372,368 | 8.2133 × 10^−4 | 1.6467 × 10^−4
 | 0.25 | 29,515 | 5933 | 283,584 | 1.0667 × 10^−3 | 2.2256 × 10^−4
 | 0.5 | 19,879 | 3145 | 184,192 | 1.1644 × 10^−3 | 9.5553 × 10^−5
 | 0.75 | 22,077 | 2384 | 195,688 | 1.8200 × 10^−3 | 3.0349 × 10^−5
 | 1 | 30,906 | 2084 | 263,920 | 2.5255 × 10^−3 | 2.5535 × 10^−5
Table 15. Results of the shifted Strang–Marchuk’s splitting (SSMS), the shifted linear iterative splitting (SLIS), and the shifted quasilinear iterative splitting (SQIS) for the asymptotic case of the NLD equation.
Shifted Method (Parameter) | Tolerance η | Time Steps | Tentative Steps | Functional Evaluations | Max Error e_max | Mean Error e_mean
SSMS (ϵ = 0.03) | 1.0 × 10^−2 | 31,442 | 238 | 380,160 | 9.1699 × 10^−1 | 1.2015 × 10^−2
 | 1.0 × 10^−3 | 12,810 | 546 | 160,272 | 5.1000 × 10^−1 | 4.5127 × 10^−3
 | 1.0 × 10^−4 | 10,223 | 511 | 128,808 | 7.6494 × 10^−2 | 1.6126 × 10^−4
 | 1.0 × 10^−5 | 10,508 | 854 | 136,344 | 7.2481 × 10^−3 | 8.4180 × 10^−6
SLIS (ϵ = 0.01) | 1.0 × 10^−2 | 15,087 | 1748 | 134,680 | 1.5719 × 10^−1 | 2.7535 × 10^−2
 | 1.0 × 10^−3 | 14,736 | 1702 | 131,504 | 1.2306 × 10^−2 | 5.0463 × 10^−4
 | 1.0 × 10^−4 | 15,047 | 1815 | 134,896 | 2.2724 × 10^−3 | 3.9500 × 10^−5
 | 1.0 × 10^−5 | 24,371 | 5071 | 235,536 | 2.2461 × 10^−4 | 8.5634 × 10^−6
SQIS (ϵ = 0.01) | 1.0 × 10^−2 | 16,502 | 1509 | 144,088 | 2.7868 × 10^−1 | 7.1696 × 10^−2
 | 1.0 × 10^−3 | 14,709 | 1683 | 131,136 | 1.2001 × 10^−2 | 2.1926 × 10^−3
 | 1.0 × 10^−4 | 15,049 | 1819 | 134,944 | 1.9668 × 10^−3 | 4.4380 × 10^−5
 | 1.0 × 10^−5 | 24,381 | 5042 | 235,384 | 1.9277 × 10^−4 | 6.6005 × 10^−6