Stable, Explicit, Leapfrog-Hopscotch Algorithms for the Diffusion Equation

In this paper, we construct novel numerical algorithms to solve the heat or diffusion equation. We start with 10^5 = 100,000 different leapfrog-hopscotch algorithm combinations and narrow this selection down to five during subsequent tests. We demonstrate the performance of these top five methods in the case of large systems with random parameters and discontinuous initial conditions, by comparing them with other methods. We verify the methods by reproducing an analytical solution using a non-equidistant mesh. Then, we construct a new nontrivial analytical solution containing the Kummer functions for the heat equation with time-dependent coefficients, and also reproduce this solution. The new methods are then applied to the nonlinear Fisher equation. Finally, we analytically prove that the order of accuracy of the methods is two, and present evidence that they are unconditionally stable.


Introduction
This paper can be considered a continuation of our previous work [1][2][3][4], in which we developed novel numerical algorithms to solve the heat or diffusion equation. The phenomenon of diffusion is described by the heat or diffusion equation, which, in its simplest form, is the following linear parabolic partial differential equation (PDE):

∂u/∂t = α∇²u, (1)

where u is the concentration or temperature, and α is the diffusion coefficient or thermal diffusivity in the case of particle or heat transport, respectively. If the medium in which the diffusion takes place is not homogeneous, one can use the more general form

cρ ∂u/∂t = ∇·(k∇u), (2)

where, in the case of conductive heat transfer, k = k(r, t), c = c(r, t), and ρ = ρ(r, t) are the heat conductivity, specific heat, and mass density, respectively (r being the position vector). The relation α = k/(cρ) connects these four nonnegative quantities. The heat equation, either in its simplest version (1) or in similar but more difficult forms, has an increasing number of analytical solutions. Unfortunately, most consider only systems with relatively simple geometries, such as a cylindrical inclusion in an infinite medium [5]. Moreover, parameters such as the diffusion coefficient or the heat conductivity are typically taken as constants or, at best, as fixed and relatively simple functions of the space variables [6]. This means that, for intricate geometries, and in the case of space- and time-dependent coefficients in general, numerical calculations cannot be avoided. However, approaches such as the finite difference (FDM) or finite element (FEM) methods require the full spatial discretization of the system; thus, they are computationally demanding. We explained in our previous papers [3,4,7], and also demonstrate in this paper, that the widely used conventional solvers, either explicit or implicit, have serious difficulties.
The explicit methods are usually conditionally stable, so when the stiffness of the problem is high, very small time step sizes have to be used. By comparison, when the number of cells is large, which is almost always the case in two or three dimensions, the implicit solvers become very slow and use a significant amount of memory. This information highlights the fact that finding effective numerical methods is still important.
One of the very few easily parallelizable explicit and unconditionally stable methods is the two-stage odd-even hopscotch (OEH) algorithm [3,[8][9][10]. In our previous papers [3,4], we showed that this method is robust and powerful for spatially homogeneous grids but, in the case of stiff systems, it can be disastrously inaccurate for large time step sizes. We constructed and tested new hopscotch combinations and found [1][2][3][4] that some of them behave much better, not only for large, but also for medium and small time step sizes. In this paper, we extend our research by further modifying the underlying space and time structure as explained below.
Our current work was inspired by the well-known leapfrog method [11], used by the molecular dynamics community to solve Newton's equations of motion. In their books, Hockney and Eastwood [12] (p. 28) and, later, Frenkel and Smit [13] (p. 75) introduced this method in the following form:

v(t + ∆t/2) = v(t − ∆t/2) + F(x(t))∆t/m,
x(t + ∆t) = x(t) + v(t + ∆t/2)∆t.

In Figure 1, a diagram of this method is presented, and one can immediately understand why the method is called leapfrog. The leapfrog scheme was successfully adapted and also generalized to hyperbolic PDEs [14], but for parabolic equations it is unstable. This can be addressed by a modification that yields the Dufort-Frankel method [15] (Subsection 8.3). In all of these publications, the nodes of the spatial mesh are treated equivalently, unlike in the OEH method. Verwer and Sommeijer [16] presented a composite method for the advection-diffusion problem, in which the leapfrog scheme was used for the horizontal advection part, while the Dufort-Frankel and Crank-Nicolson methods were used for the other parts. Their method is thus significantly different from ours. In addition, we have been unable to find any combination of the leapfrog and hopscotch structures in the literature that is even remotely similar to ours.
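As a minimal illustration (in Python rather than the MATLAB used elsewhere in this paper), the kick-drift structure of the two formulas above can be sketched as follows; the harmonic-oscillator force and all parameter values are our own assumptions for the demonstration:

```python
def leapfrog(x0, v0, force, m, dt, steps):
    """Leapfrog (kick-drift) integration of Newton's equations.
    Velocities live at half-integer time levels, positions at integer ones."""
    x = x0
    v = v0 - 0.5 * dt * force(x0) / m      # shift v back to t0 - dt/2
    for _ in range(steps):
        v = v + dt * force(x) / m          # v(t + dt/2) = v(t - dt/2) + F(x(t)) dt/m
        x = x + dt * v                     # x(t + dt)   = x(t) + v(t + dt/2) dt
    return x, v

# assumed demo: harmonic oscillator F = -k*x with k = m = 1, one period ~ 2*pi
x, v = leapfrog(1.0, 0.0, lambda x: -x, 1.0, 0.01, 628)
```

The half-step offset between positions and velocities is exactly what makes the scheme time-reversible and second-order accurate.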
The remainder of this paper is organized as follows. In Section 2, we introduce the new algorithms, both for the simplest, one-dimensional, equidistant mesh and for general, arbitrary meshes. In Section 3, we first very briefly present the results of the numerical tests used for the first assessment of the 10^5 combinations, in order to obtain a manageable number of methods. Then, in Sections 3.3 and 3.4, two numerical experiments are presented for two-dimensional stiff systems consisting of 10,000 cells. We choose the top five combinations with the highest accuracy and compare them with selected other methods. In Section 3.5, we reproduce an analytical solution with the top five methods using a non-equidistant mesh for verification. In Section 3.6, we construct a new analytical solution for time-dependent diffusivity and then, in Section 3.7, the top five methods are verified again by comparing their numerical results with this solution. In Section 3.8, the convergence and stability properties of these methods are analytically investigated. In Section 4, we summarize the conclusions and our research plans for the future.

The New Methods
To implement the OEH algorithm, we need to construct a bipartite grid, in which the set of nodes (or cells) is divided into two similar subsets with odd and even space indices, such that all nearest neighbors of odd nodes are even, and vice versa. In one space dimension, where the space variable is discretized as usual, x_i = i∆x, i = 0, ..., N − 1, this is straightforward. Let us fix the time discretization for the remainder of the paper to t_n = t_0 + nh, n = 1, ..., T, T = (t_fin − t_0)/h. In the original OEH method, odd and even time steps can also be distinguished: in odd time steps, the odd nodes are treated in the first stage, and only the u values belonging to the beginning of the time step are used. Then, in the second stage, when the even nodes are calculated, the new values of their odd neighbors, which have just been calculated, are used. In even time steps, the roles of the nodes are interchanged, as shown in Figure 2a. The original OEH method always applies the standard explicit Euler formula in the first stage and the implicit Euler formula in the second stage. However, in each stage of this structure, the latest available u values of the neighbors are used; thus, when the second-stage calculations begin, the new values of the neighbors u_{i−1}^{n+1} and u_{i+1}^{n+1} are known, which makes the implicit formula effectively explicit. Therefore, it is obvious that the OEH method is fully explicit, and the previous u values do not need to be stored at all, which means that one array is sufficient to store the variable u in memory. We emphasize that all of our algorithms described in this paper have this property, irrespective of the modifications. In one of our previous papers [1], we applied the UPFD method in stage one and the explicit Euler method in stage two, which we then called the UPFD + explicit Euler odd-even hopscotch method.
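The odd-even double time step described above can be sketched in a few lines; the following Python illustration is our own simplified reading (constant α, 1D equidistant grid, end values held fixed), not the paper's MATLAB code:

```python
import numpy as np

def oeh_double_step(u, r):
    """One odd + one even time step of the OEH method for u_t = alpha*u_xx,
    r = alpha*h/dx^2, with the two end values held fixed. A single array
    suffices: the implicit-Euler stage only uses already-updated neighbours,
    so it is effectively explicit."""
    N = len(u)
    for p in (1, 0):                       # odd time step first, then even
        for i in range(1, N - 1):          # stage 1: explicit Euler, parity p
            if i % 2 == p:
                u[i] += r * (u[i - 1] - 2.0 * u[i] + u[i + 1])
        for i in range(1, N - 1):          # stage 2: implicit Euler, other parity
            if i % 2 != p:
                u[i] = (u[i] + r * (u[i - 1] + u[i + 1])) / (1.0 + 2.0 * r)
    return u

# decaying sine mode of u_t = u_xx on [0, 1] with zero Dirichlet ends
N, dx = 101, 0.01
h = 0.25 * dx**2                           # r = 0.25
x = np.linspace(0.0, 1.0, N)
u = np.sin(np.pi * x)
for _ in range(200):                       # 400 time steps, t = 400*h = 0.01
    oeh_double_step(u, h / dx**2)
exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * 400 * h)
```

Note how the parity treated explicitly alternates between the two time steps, exactly as in Figure 2a.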
However, this can simply be called the reversed hopscotch method, as one only needs to change the order of the two formulas in the code of the original OEH to obtain the code of this method. Then, we modified the space and time structures and constructed the so-called shifted-hopscotch method [4].
In the case of the new leapfrog-hopscotch method, the calculation starts by taking a half-sized time step for the odd nodes using the initial u_i^0 values, which is symbolized by a green box containing the number 0 in Figure 2b. Then, full time steps are taken for the even and odd nodes strictly alternately until the end of the last time step (orange box in Figure 2b), which must also be halved for the odd nodes to reach exactly the same time point as the even nodes. In Figure 2b, the number "0" means that this stage is performed only once and is not repeated. By comparison, stages 1-4 are repeated T/2 times (including the last, halved stage 4), so T must be an even integer. Thus, finally, we have five different stages 0...4, in which five different formulas can be applied to maximize efficiency. Let us turn our attention to these formulas. It is relatively common [17] (p. 112) to spatially discretize Equation (1) in one dimension by applying the central difference formula to obtain a system of ordinary differential equations (ODEs) for the nodes i = 1, ..., N − 2:

du_i/dt = α(u_{i−1} − 2u_i + u_{i+1})/∆x². (3)

Suppose now that one needs to solve Equation (2) when the material properties are not constants but functions of the space variables, and the mesh is possibly unstructured. We have shown in our previous papers [3,7] (based on, e.g., Chapter 5 of the book [18]) that Equation (3) can be generalized to arbitrary grids consisting of cells of various shapes and properties:

du_i/dt = Σ_{j≠i} (u_j − u_i)/(R_{i,j} C_i), (4)

where the heat capacity and the thermal resistance can be calculated approximately as

C_i ≈ c_i ρ_i V_i, R_{i,j} ≈ d_{i,j}/(k_{i,j} A_{i,j}), (5)

respectively, where V_i is the volume of the cell, and A_{i,j} and d_{i,j} are the surface between the cells and the distance between the cell centers, respectively. Figure 3 can help the reader to understand these quantities. Although the grid should be topologically rectangular to maintain the explicit property of the OEH-type methods, we emphasize that the geometrical shape of the cells is not necessarily rectangular.
Equations (3) and (4) can be written in a condensed matrix form:

du/dt = Mu. (6)

The matrix M is tridiagonal in the one-dimensional case of Equation (3), with the following elements:

m_{i,i±1} = α/∆x², m_{i,i} = −2α/∆x². (7)

In the general case of Equation (4), the nonzero elements of the matrix can be given as:

m_{i,j} = 1/(R_{i,j} C_i) for j ≠ i, m_{i,i} = −Σ_{j≠i} m_{i,j}. (8)

Let us introduce the characteristic time or time constant τ_i of cell i, which is always a non-negative quantity:

τ_i = −1/m_{i,i} = C_i / Σ_{j≠i} (1/R_{i,j}). (9)

Using this, we introduce the following notations:

r_i = h/(2τ_i), A_i = h Σ_{j≠i} u_j/(R_{i,j} C_i), (10)

where the neighbors u_j are taken at their latest available time level. Here, r_i is the generalization of r = αh/∆x² = −m_{i,i}h/2, the usual mesh ratio. We recall the so-called theta method,

u_i^{n+1} = u_i^n + h[θ(Mu^n)_i + (1 − θ)(Mu^{n+1})_i],

where θ ∈ [0, 1]. If θ = 1, this scheme is the explicit (Euler) or forward-time central-space (FTCS) scheme. Otherwise, the theta method is implicit, and for θ = 0, 1/2, one has the implicit (Euler) and the Crank-Nicolson methods, respectively [17]. However, we remind the reader that in our leapfrog-hopscotch scheme, the neighbors are always taken into account at the same, most recent time level; thus, we can express the new value of the u variable in the following brief form in the 1D case:

u_i^{n+1} = [(1 − 2θr)u_i^n + r(u_{i−1}^μ + u_{i+1}^μ)] / [1 + 2(1 − θ)r], (11)

and in the general case:

u_i^{n+1} = [(1 − 2θr_i)u_i^n + A_i] / [1 + 2(1 − θ)r_i]. (12)

Here, μ = n for the even cells, and μ ∈ {n, n − 1/2, n − 1} for the odd cells, depending on where we are in the leapfrog-hopscotch structure. For easier understanding, see Figure 2b, in addition to Example 1 below.
The other formula we use is derived from the constant-neighbor (CNe) method, which was introduced in our previous papers [7,19]. For the leapfrog-hopscotch method in the one-dimensional case, this can be written as follows:

u_i^{n+1} = u_i^n e^{−2r} + [(u_{i−1}^μ + u_{i+1}^μ)/2](1 − e^{−2r}), (13)

whereas for general grids it is:

u_i^{n+1} = u_i^n e^{−2r_i} + [A_i/(2r_i)](1 − e^{−2r_i}), (14)

and for halved time steps, r_i and A_i must obviously be divided by 2.
Example 1.
Stage 0. Take a half time step for the odd cells using Formula (12) with θ = 1/4.
Repeat the following four stages for n = 0, ..., T − 3:
Stage 1. Take a full time step for the even cells using Formula (12) with θ = 1/2.
Stage 2. Take a full time step for the odd cells using Formula (14).
Stage 3. Take a full time step for the even cells using Formula (12) with θ = 1/2.
Stage 4. If n + 2 < T − 1, take a full time step for the odd cells using Formula (12) with θ = 1/2; otherwise, take a half time step for the odd cells using Formula (12) with θ = 1/2.
Based on this example, all other combinations can be easily constructed. In our previous paper [4], the best shifted-hopscotch combination was S4 (0, 1/2, 1/2, 1/2, 1) and, in the category of positivity-preserving algorithms, S1 (C, C, C, C, C). These will be tested against the new leapfrog-hopscotch algorithms in the following section.
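The whole leapfrog-hopscotch structure can be sketched compactly. The following Python illustration (our own sketch, not the paper's MATLAB code) implements the combination (0, 1/2, 1/2, 1/2, 1/2) on a 1D equidistant grid, using the theta update in the explicit closed form of Formula (11), u_new = [(1 − 2θr)u + r(u_left + u_right)]/[1 + 2(1 − θ)r], with Dirichlet end values held fixed:

```python
import numpy as np

def theta_update(u, i, r, theta):
    """Hopscotch theta formula: neighbours enter at their latest level;
    only u_i itself is treated theta-weighted, so the step stays explicit."""
    return ((1.0 - 2.0 * theta * r) * u[i] + r * (u[i - 1] + u[i + 1])) \
        / (1.0 + 2.0 * (1.0 - theta) * r)

def leapfrog_hopscotch(u, r, T):
    """Leapfrog-hopscotch combination (0, 1/2, 1/2, 1/2, 1/2) on a 1D
    equidistant grid; end values are held fixed and T must be even."""
    N = len(u)
    odd, even = range(1, N - 1, 2), range(2, N - 1, 2)
    for i in odd:                          # stage 0: half step, theta = 0
        u[i] = theta_update(u, i, r / 2.0, 0.0)
    for n in range(2 * T - 1):             # full sweeps: even, odd, even, ...
        for i in (even if n % 2 == 0 else odd):
            u[i] = theta_update(u, i, r, 0.5)
    for i in odd:                          # closing half step, theta = 1/2
        u[i] = theta_update(u, i, r / 2.0, 0.5)
    return u

# decaying sine mode of u_t = u_xx on [0, 1]; r = 2 is four times the CFL limit
N, dx = 101, 0.01
h, T = 2e-4, 50                            # r = h/dx^2 = 2, final time t = 0.01
x = np.linspace(0.0, 1.0, N)
u = leapfrog_hopscotch(np.sin(np.pi * x), h / dx**2, T)
exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * h * T)
```

Running the sketch with r well above the CFL limit illustrates the unconditional stability claimed for these schemes.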

On the Methods and Circumstances of the Investigation
In Sections 3.2-3.4, two-dimensional, topologically rectangular lattices with N = N_x × N_z cells are investigated (see Figure 3 for a visualization). We consider the system as thermally isolated (zero Neumann boundary conditions), which is implemented by omitting those terms of the sum in Equation (4) that have an infinite resistance in the denominator due to the isolated boundary. Let us denote by λ_MIN and λ_MAX the eigenvalues of the system matrix M with the (nonzero) smallest and largest absolute values, respectively. Now, λ_MAX/λ_MIN gives the stiffness ratio of the system, and the maximum possible time step size for the FTCS (explicit Euler) scheme is exactly h_MAX^FTCS = 2/|λ_MAX|, above which the solutions are expected to diverge due to instability. This threshold is frequently called the CFL limit, and it also holds for the second-order explicit Runge-Kutta (RK) methods [20].
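These quantities can be computed directly from the system matrix. The following Python sketch does so for an assumed toy one-dimensional chain with random capacities and resistances (not the paper's 2D lattice); note the zero eigenvalue of the isolated system, which is why the smallest nonzero eigenvalue enters the stiffness ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
C = 10.0 ** rng.uniform(-1.0, 1.0, N)        # random heat capacities
R = 10.0 ** rng.uniform(-1.0, 1.0, N - 1)    # random inter-cell resistances

# system matrix of du/dt = M u with m_ij = 1/(R_ij C_i), isolated ends
M = np.zeros((N, N))
for i in range(N - 1):
    M[i, i + 1] = 1.0 / (R[i] * C[i])
    M[i + 1, i] = 1.0 / (R[i] * C[i + 1])
M -= np.diag(M.sum(axis=1))                  # zero row sums: isolated system

lam = np.linalg.eigvals(M)
lam_nz = lam[np.abs(lam) > 1e-12 * np.abs(lam).max()]
stiffness_ratio = np.abs(lam).max() / np.abs(lam_nz).min()
h_max_ftcs = 2.0 / np.abs(lam).max()         # CFL limit of the FTCS scheme
```

Since M is similar to a symmetric negative-semidefinite matrix, all eigenvalues are real and nonpositive, as the stability discussion assumes.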
In Sections 3.2-3.4, the reference solution is calculated by the ode15s built-in solver of MATLAB, with very strict error tolerance and, therefore, high precision. In Section 3.5, Section 3.6, and Section 3.8, the reference solution is an analytical solution. The (global) numerical error is the absolute difference of the numerical solution u_j^num produced by the examined method and the reference solution u_j^ref at the final time t_fin. We use these individual errors of the nodes or cells to calculate the maximum error,

Error(L_∞) = max_j |u_j^ref − u_j^num|,

the average error,

Error(L_1) = (1/N) Σ_j |u_j^ref − u_j^num|,

and the so-called energy error,

Error(energy) = Σ_j C_j |u_j^ref − u_j^num|,

which, in the case of heat conduction, yields the error in terms of energy. However, for different algorithms, these errors depend on the time step size in different ways. If we want to evaluate the overall performance of the new methods compared with the original OEH method, we consider a fixed value of the final time t_fin, and first calculate the solution with a large time step size (typically t_fin/4), then repeat the whole calculation with subsequently halved time step sizes S times, until h reaches a very small value (typically around 2 × 10^−6).
Then, the aggregated relative error (ARE) quantities are calculated for each type of error defined above. For instance, in the case of the L_∞ error, it has the following form:

ARE(L_∞) = [1/(S + 1)] Σ_{s=0}^{S} log_10 [Error_OEH(L_∞, h_s) / Error(L_∞, h_s)], (19)

where h_s denotes the subsequently halved time step sizes. We can also take the simple average of these ARE values to obtain one single quantity for a method applied to a concrete problem, which will help us to select the best combinations. For example, if ARE = 2, then the examined method produces errors smaller by two orders of magnitude than the OEH method.
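A sketch of these error measures in Python; the capacity weighting of the energy error and the exact averaging inside the ARE are our assumptions, chosen so that ARE = 2 indeed corresponds to errors two orders of magnitude below those of OEH:

```python
import numpy as np

def errors(u_num, u_ref, C=None):
    """Maximum, average, and energy error of a numerical solution; the
    capacity weighting C in the energy error is this sketch's assumption."""
    d = np.abs(np.asarray(u_num) - np.asarray(u_ref))
    weights = 1.0 if C is None else np.asarray(C)
    return d.max(), d.mean(), (weights * d).sum()

def aggregated_relative_error(err_method, err_oeh):
    """ARE: mean decadic-log ratio of the OEH errors to the examined
    method's errors over the sequence of halved time step sizes."""
    return float(np.mean(np.log10(np.asarray(err_oeh) / np.asarray(err_method))))
```

For instance, a method that is consistently 100 times more accurate than OEH at every tested time step size yields ARE = 2.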
For the simulations in which running times were measured, we used a desktop computer with an Intel Core i5-9400 CPU and 16.0 GB RAM, running MATLAB R2020b, whose built-in tic-toc functions measured the total running time of the tested algorithms.

Preliminary Tests
The following nine values of the parameter theta are substituted into Formula (12): θ ∈ {0, 1/5, 1/4, 1/3, 1/2, 2/3, 3/4, 4/5, 1}. Thus, together with the CNe Formula (14), we have 10 different formulas. There are five stages in the leapfrog-hopscotch structure and, by inserting these 10 formulas in all possible combinations, we obtain 10^5 = 100,000 different algorithm combinations in total. We wrote MATLAB code that constructs and tests all of these combinations in a highly automated manner under the following circumstances.
We stress that, until this point, we have not stated anything precisely. The only purpose of these preliminary tests on the small systems is the reduction of the large number of algorithm combinations by eliminating those which are probably not worthy of more careful investigation.

Comparison with Other Methods for a Large, Moderately Stiff System
We examine a grid similar to the one in Figure 2 with an isolated boundary, but where the sizes are fixed to N_x = 100 and N_z = 100; thus, the total number of cells is 10,000, and the final time is t_fin = 0.1. The exponents introduced above were set such that, for example, log-uniformly distributed values between 0.01 and 100 were given to the capacities. The generated system can be characterized by its stiffness ratio and h_MAX^FTCS values, which are 3.1 × 10^7 and 7.3 × 10^−4, respectively. The performance of the new algorithms is compared with the following, widely used MATLAB solvers:
• ode15s, a first- to fifth-order (implicit) numerical differentiation formula with variable step size and variable order (VSVO), developed for solving stiff problems;
• ode23s, a second-order modified (implicit) Rosenbrock formula;
• ode23t, which applies the implicit trapezoidal rule using a free interpolant;
• ode23tb, a combination of the backward differentiation formula and the trapezoidal rule;
• ode45, a fourth-/fifth-order explicit Runge-Kutta-Dormand-Prince formula;
• ode23, a second-/third-order explicit Runge-Kutta-Bogacki-Shampine method;
• ode113, a first- to 13th-order VSVO Adams-Bashforth-Moulton solver.
In the case of the MATLAB solvers, we could not directly set the time step size, but changed the tolerances instead, starting from a large value, such as 'AbsTol' = 'RelTol' = 'Tol' = 10^3, towards a small minimum value, for example, 'Tol' = 10^−5. In addition to these solvers, we also used the following methods for comparison purposes: the original UPFD, the CNe, the two- and three-stage linear-neighbor (LNe and LNe3) methods [7], and the Dufort-Frankel (DF) algorithm [17] (p. 120),

u_i^{n+1} = [(1 − 2r)u_i^{n−1} + 2r(u_{i−1}^n + u_{i+1}^n)] / (1 + 2r),

all of which are explicit, unconditionally stable schemes. The DF scheme is a two-step method that is not self-starting; thus, we calculated the first time step by two subsequent UPFD steps with halved time step sizes. The Heun method, also called the explicit trapezoidal rule, is one of the most common second-order RK schemes [21]. Finally, the widely used Crank-Nicolson scheme (abbreviated here as CrN) is an implicit scheme:

A u^{n+1} = B u^n, A = I − (h/2)M, B = I + (h/2)M,

where I is the N × N unit matrix. The A and B matrices here are time-independent; thus, it is sufficient to calculate them only once, before the first time step. We implement this scheme in two different ways: one is to calculate Y = A^{−1}B before the first time step, after which each time step is just a simple matrix multiplication; the other is to solve the linear system for u^{n+1} at each time step. We plotted the L_∞, L_1, and energy errors as a function of the effective time step size h_EFF and, based on this (and on similar data presented in the next subsection), we selected the following top five combinations from those listed in (21) and discarded the remainder: L1 (C, C, C, C, C), L2 (0, 1/2, 1/2, 1/2, 1/2), L3 (1/5, 1/2, 1/2, 1/2, 1/2), L4 (1/4, 1/2, C, 1/2, 1/2), L5 (1/5, 1/2, C, 1/2, 1/2).
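The two implementation variants of the Crank-Nicolson scheme mentioned above can be contrasted in a short Python sketch (toy 1D Laplacian system and all parameter values are our own choices); both variants produce the same trajectory:

```python
import numpy as np

# toy system du/dt = M u: 1D Laplacian with alpha/dx^2 = 1 (our choice)
N = 40
M = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
h, steps = 0.1, 20
I = np.eye(N)
A, B = I - 0.5 * h * M, I + 0.5 * h * M      # A u^{n+1} = B u^n
u0 = np.sin(np.pi * np.arange(1, N + 1) / (N + 1))

# variant 1: precompute Y = A^{-1} B once; each step is one mat-vec
Y = np.linalg.solve(A, B)
u1 = u0.copy()
for _ in range(steps):
    u1 = Y @ u1

# variant 2: solve the linear system anew in every time step
u2 = u0.copy()
for _ in range(steps):
    u2 = np.linalg.solve(A, B @ u2)
```

Precomputing Y trades a one-off dense inversion for cheap steps, which pays off when many time steps are taken with a time-independent M, exactly the situation described in the text.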
One can see that L2-3 and L4-5 are highly similar; only the zeroth stage is different. We tried to optimize the value of theta for this zeroth stage by calculating the errors for a few fixed time step sizes for θ = k/N_θ, k = 0, ..., N_θ, where N_θ = 1000. In the case of the L2-3 (θ, 1/2, 1/2, 1/2, 1/2) methods, we found that the errors are larger for large values of theta and smallest when θ ≈ 0. In the case of the L4-5 (θ, 1/2, C, 1/2, 1/2) combination, we found that there is a nontrivial minimum of the error function at around θ ≈ 0.2, but its position slightly depends on the time step size, as shown in Figure 4. We can conclude that the L4 combination is usually slightly more accurate than L5. In Figure 5, we present the energy error as a function of the time step size for these top five combinations and other methods, and Figure 6 shows the L_1 errors vs. the total running times. Furthermore, Table 1 presents some results obtained by our numerical schemes and the "ode" routines of MATLAB. One can see that, as expected, the implicit Crank-Nicolson scheme is the most accurate, but the accuracy of the L2 method, in addition to the L5 and S4 methods, approaches it, and is similar to that of the Heun method below the CFL limit.

Comparison with Other Methods for a Large, Very Stiff System
In the second case study, new values were set for the α and β exponents, which means that the widths of the distributions of the capacities and thermal resistances were increased, and a non-negligible anisotropy appeared because, on average, the resistances in the x direction are two orders of magnitude larger than those in the z direction. Note that the geometric mean of the C and R quantities is still equal to one. With this modification, we obtained a system with a much higher stiffness ratio, 2.5 × 10^11, and the CFL limit for the standard FTCS scheme was h_MAX^FTCS = 1.6 × 10^−6, which, we stress again, also holds for the Heun method. All other parameters and circumstances are the same as in the previous subsection. In Figures 7 and 8, the energy and the L_∞ errors are presented as functions of the time step size and the total running time, respectively. As expected, the implicit methods gained a slight advantage compared with the previous, less stiff case, but the new L2 method outperforms all other examined methods provided that not only the accuracy but also the speed is taken into account. In Table 2, we report the data related to this numerical experiment, and in Table 3 we summarize the ARE error quantities, defined in Equation (19), of the explicit stable methods for both case studies.

Verification 1. Comparison to Analytical Results Using a Non-Uniform Mesh
We consider a very recent [22] nontrivial analytical solution of Equation (1), given on the whole real number line for positive values of t, where, for the sake of simplicity, we use the value α = 1. In our last paper [4], we reproduced this solution by prescribing Dirichlet boundary conditions calculated from the analytical solution at the two ends of the interval. Now, we do not use this kind of information, but construct a large-scale non-equidistant spatial grid according to the following procedure. First, we define the coordinates of the cell borders by a stretching formula with parameter γ = 10^−11. Thus, we have a relatively dense system of nodes close to the origin, which becomes increasingly less dense as one moves away from the origin towards +5922.3, the right boundary of the mesh. The cell centers are then taken halfway between consecutive borders. It is straightforward to reflect this structure through the origin to create the mirror image of the mesh on the negative side of the x axis, thus obtaining 2000 cells in total. In the vicinity of the origin, we have small cells with diameter 0.01, whose size increases as we move away from the origin, first very slowly, then increasingly rapidly, until it reaches ∆x_±1000 = 211.6. The resistances and the cell capacities can then be calculated using Formula (5) in Section 2. We take into account the zero Neumann boundary conditions, as in the previous two subsections, which is a good approximation because the values of the initial function are extremely close to zero far from the origin. The stiffness ratio is 5.7 × 10^11 for this mesh, and h_MAX^FTCS = 5 × 10^−5. As in our last paper [4], and also in the next subsection, we reproduce the analytical solution in the finite time interval t ∈ [t_0, t_fin], where t_0 = 0.5 and t_fin = 1. Obviously, the generalized Formulas (12) and (14) must be used.
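The discretization step based on Formula (5) can be sketched as follows. Note that the geometric stretching rule for the borders below is a simple stand-in (the paper's γ-formula is not reproduced here), and a homogeneous material with unit cross-section is assumed:

```python
import numpy as np

# assumed stand-in stretched mesh: cells of width 0.01 near the origin,
# growing geometrically away from it (the paper's gamma-formula differs)
widths = 0.01 * 1.05 ** np.arange(200)
borders = np.concatenate(([0.0], np.cumsum(widths)))
centers = 0.5 * (borders[:-1] + borders[1:])

k = c = rho = 1.0                        # homogeneous material, unit cross-section
V = np.diff(borders)                     # 1D cell volumes = cell widths
C = c * rho * V                          # capacities C_i ~ c*rho*V_i (Formula (5))
R = np.diff(centers) / k                 # resistances R_ij ~ d_ij/(k*A_ij), A_ij = 1

N = len(C)                               # assemble m_ij = 1/(R_ij C_i), Neumann ends
M = np.zeros((N, N))
for i in range(N - 1):
    M[i, i + 1] = 1.0 / (R[i] * C[i])
    M[i + 1, i] = 1.0 / (R[i] * C[i + 1])
M -= np.diag(M.sum(axis=1))
```

A useful sanity check on such an assembly is that the isolated system conserves the total heat content, i.e., the capacity-weighted column sums of M vanish along with the row sums.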
In Figure 9, the L_∞ errors as a function of the time step size h are presented for the u solution for the top five leapfrog-hopscotch algorithms, a first-order "reference curve" for the original CNe method, and the Heun method. These results verify not only the second-order convergence of the numerical methods, but also the procedure of generalizing the calculations to nonuniform grids (see Equations (4) and (5) above). One can also see that the L2 and L3 algorithms reach the minimum error (determined by the space discretization) at time step sizes larger than the CFL limit of the Heun method.

New Analytical Solution with Time-Dependent Diffusion Coefficient
Now we investigate the analytical solutions of the diffusion equation in which the diffusion coefficient has a power-law time dependence:

∂u/∂t = α̂ t^n ∂²u/∂x². (25)

To avoid confusion, we use the "hat" notation for the "generalized diffusion coefficient" α̂, which is considered constant and whose dimension is that of α divided by t^n. We apply the self-similar Ansatz, first introduced by Sedov [23], with the form

u(x, t) = t^{−a} f(η), η = x/t^b,

where a, b ∈ ℝ are the self-similar exponents describing the decay and the spreading of the solution in time and space. These properties make this Ansatz physically extraordinarily relevant. In the previous decade, we applied this Ansatz to numerous physical systems described by partial differential equations, most of hydrodynamical origin [24,25]. The constraint among the exponents is a + 1 = a + 2b − n; therefore, a can have an arbitrary real value, but b = (1 + n)/2 must be fulfilled. These relations dictate the reduced ODE

α̂ f″(η) + [(1 + n)/2] η f′(η) + a f(η) = 0.

The solution can be evaluated with the Maple 12 software and reads:

f(η) = c_1 M(a/(n + 1), 1/2, −(n + 1)η²/(4α̂)) + c_2 U(a/(n + 1), 1/2, −(n + 1)η²/(4α̂)), (26)

where M and U are the Kummer functions [26], whereas c_1 and c_2 are arbitrary real constants. Note that the larger the exponent n, the quicker the spreading of the solutions. Figure 10 shows numerous shape functions for various values of n. In the next subsection, we address the first term on the right-hand side of (26); thus, we give this part of the solution (c_1 = 1, c_2 = 0) as a function of the x and t variables:

u(x, t) = t^{−a} M(a/(n + 1), 1/2, −(n + 1)x²/(4α̂ t^{n+1})). (27)

Verification 2. Comparison to Analytical Results with Time-Dependent Material Properties
We examine Equation (25) and take the solution given in (27) for a = 1, n = 1 as a reference solution:

u(x, t) = (1/t) exp(−x²/(2α̂t²)),

(here the Kummer function degenerates into an exponential, since M(1/2, 1/2, z) = e^z), whose 3D graph can be seen in Figure 10b. We reproduce this solution in the finite space and time intervals x ∈ [x_1, x_2] and t ∈ [t_0, t_fin], where x_1 = −5, x_2 = 5, while t_0 = 0.5, t_fin = 1, and α̂ = 1. The space and time intervals are discretized as usual: x_j = x_1 + j∆x, j = 0, ..., 1000, ∆x = 0.01 and t_n = t_0 + nh, n = 1, ..., T, T = (t_fin − t_0)/h; thus, we can return to the equidistant theta and CNe Formulas (11) and (13). For this grid, h_MAX^FTCS = 5 × 10^−5 again, and the stiffness ratio is only 4.1 × 10^5. We use the analytical solution to prescribe the appropriate Dirichlet boundary conditions at the two ends of the interval. We take into consideration the continuous change of the thermal diffusivity by recalculating the mesh ratio at each time step, multiplying its original value by the physical time at the middle of the actual time step, i.e., r_n = (t_n + h/2)·α̂h/∆x² is used in the nth time step in Formulas (11) and (13). The values of the Kummer M function were calculated by the hypergeom function of MATLAB. The L_∞ errors as a function of the time step size h, presented in Figure 11, show that our methods, particularly L2 and L3, can also easily solve this problem. Figure 11 is highly similar to Figure 9, and we also obtained similar error functions for several other values of the parameters t_0, t_fin, x_{1,2}, ∆x, ... for both problems. We also present the graphs of the initial function u(x, t = 0.5), the reference analytical solution u_ref(x, t_fin), and a corresponding numerical solution for h = 7.8 × 10^−3 in Figure 12.
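Because M(1/2, 1/2, z) = e^z, the a = 1, n = 1 reference solution is a spreading Gaussian, and one can verify by central differences that it indeed satisfies ∂u/∂t = α̂ t ∂²u/∂x². The following Python sketch performs this check with α̂ = 1; the grid and step sizes are arbitrary choices:

```python
import numpy as np

def u(x, t):
    """Reference solution for a = 1, n = 1 (alpha-hat = 1):
    u = M(1/2, 1/2, -x^2/(2 t^2))/t = exp(-x^2/(2 t^2))/t."""
    return np.exp(-x**2 / (2.0 * t**2)) / t

# central-difference residual of u_t = t * u_xx at t = 0.8
x = np.linspace(-3.0, 3.0, 61)
t, dx, dt = 0.8, 1e-4, 1e-5
u_t = (u(x, t + dt) - u(x, t - dt)) / (2.0 * dt)
u_xx = (u(x + dx, t) - 2.0 * u(x, t) + u(x - dx, t)) / dx**2
residual = float(np.max(np.abs(u_t - t * u_xx)))
```

The residual is limited only by the finite-difference truncation and rounding errors, confirming the closed form independently of the symbolic derivation.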
We intentionally chose such a large time step size (more than two orders of magnitude larger than h_MAX^FTCS), for which the L_∞ error is 0.096, to demonstrate the robustness of the algorithms: one can see that there is nothing unphysical in the numerical solution. We observed that, for even larger time step sizes, the numerical solution approaches the initial function, which means that increasing the time step size is roughly equivalent to slowing the numerical diffusion process. The second conclusion we can draw is that the CNe formula is less effective than the θ = 1/2 formula if we have small, nearly equally sized cells directly adjacent to one another.

Solution for the Nonlinear Fisher Equation
Here, we examine a nonlinear reaction-diffusion equation, the so-called Fisher equation [27], in the following form: This equation is also known as the Kolmogorov-Petrovsky-Piskunov or Fisher-KPP equation [28,29], and was originally developed to describe the spread of advantageous gene variants in space. We found the following analytical solution for α = 1 in the literature [30,31]: As in Section 3.7, we use this solution to gain the initial condition: . and the Dirichlet boundary conditions at the ends of the interval: In principle, the nonlinear term βu(1 − u) can be incorporated into the schemes in many different ways. We chose the following semi-implicit treatment: which can be rearranged into a fully explicit form: where u pred i is the most recent value of u i after performing the previous stage calculations which describe the diffusion term. The operation (31) is performed in two extra, separate stages, where a loop goes through all of the nodes (both odd and even). The first extra stage is after Stage 1, whereas the second is after (the original) Stage 4, at integer time levels, at the same moments in which the boundary conditions are recalculated.
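The nonlinear stage (31) alone can be exercised on the pure logistic ODE du/dt = βu(1 − u), whose exact solution is known. The following Python sketch (parameter values are our own) also shows that the formula keeps u inside [0, 1] for any h > 0:

```python
import numpy as np

def reaction_stage(u_pred, beta, h):
    """Fully explicit rearrangement of the semi-implicit treatment
    (u_new - u_pred)/h = beta*u_pred*(1 - u_new); maps [0, 1] into [0, 1]."""
    return u_pred * (1.0 + beta * h) / (1.0 + beta * h * u_pred)

# compare with the exact logistic solution of du/dt = beta*u*(1-u)
beta, h, steps, u0 = 6.0, 1e-3, 1000, 0.1
u = u0
for _ in range(steps):
    u = reaction_stage(u, beta, h)
t = h * steps
exact = u0 * np.exp(beta * t) / (1.0 + u0 * (np.exp(beta * t) - 1.0))
```

The denominator 1 + βh·u_pred is what prevents overshoot above u = 1, which is one plausible reason for the observed robustness of the schemes for this equation.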
With this procedure, we observed that the new methods are very stable for Fisher's equation and behave very similarly to the linear case. We solved Equation (29) for several different values of the parameters β, x_1, x_2, t_fin, and ∆x. We present the L_∞ errors as a function of the time step size for β = 6, x_1 = 0, x_2 = 4, t_fin = 1, and ∆x = 0.01 in Figure 13, noting that very similar curves were produced for other values of the parameters. We underline that the goal of this subsection is not the systematic investigation of the new methods in the case of nonlinear equations, which is planned for our future work. Here, we only show that these leapfrog-hopscotch methods can be successfully applied to nonlinear equations, even if the coefficient of the nonlinear term is large.

Analytical Investigations
We start with the analysis of the convergence properties of the five most advantageous methods in the one-dimensional case for constant values of the mesh ratio r, that is, when the equidistant spatial discretization is fixed.

Theorem 1. The order of convergence of the L1...L5 leapfrog-hopscotch algorithms is two for the system (6) of linear ODEs, du/dt = Mu, where M is introduced in (7) and u(t_0) = u_0 is an arbitrary initial vector.
The proof of Theorem 1 is presented in Appendix A.
When analyzing the stability of the methods, we distinguish two types of algorithms. In the first type, only the CNe and the θ = 0 formulas are present, in which the new u_i^{n+1} values are convex combinations of the old u_j^n values; see Remark 1. We also recall a lemma [32] (p. 28) stating that a convex combination of convex combinations is again a convex combination. These statements imply that all of these algorithms, and particularly L1 (C, C, C, C, C), are positivity preserving (more precisely, they satisfy the minimum and maximum principles), which is a significantly stronger property than mere unconditional stability.

Discussion and Summary
The present paper is the natural continuation of our previous work on explicit algorithms for solving the time-dependent diffusion equation. In the current work, we combined the hopscotch space structure with leapfrog time integration. By applying the well-known theta method with nine different values of θ and the recently invented CNe method, we constructed 10^5 combinations. Via subsequent numerical experiments, this huge number was decreased by excluding the combinations that underperformed until, finally, only the top five were retained. We used two two-dimensional stiff systems containing 10,000 cells with completely discontinuous random parameters and initial conditions, and presented the results of these five algorithms, in addition to those of several other methods, for these systems. For the purpose of verification, two analytical solutions were used. One was reproduced using a non-equidistant grid. The second was a completely new solution containing the Kummer M function for time-dependent thermal diffusivity, presented here for the first time. We also solved the nonlinear Fisher equation and demonstrated that our algorithms can handle the nonlinearity of this equation without losing stability. Then, we provided exact proof that the top five new methods have second-order convergence for fixed spatial discretization, and also showed by analytical calculations that they have very good stability properties.
We found that, in general, the L2 (0, 1/2, 1/2, 1/2, 1/2) combination is the most competitive. This combination yields accurate results orders of magnitude faster than the well-optimized MATLAB routines, and it is more accurate than all of the other examined explicit and stable methods. Although, unlike the L1 (C, C, C, C, C) algorithm, L2 is not positivity preserving, it is still surprisingly robust for relatively large time step sizes, which is not the case for the original OEH algorithm. Moreover, this new L2 algorithm is easy to implement and requires even fewer function evaluations and less computer memory than the conventional explicit second-order Runge-Kutta methods, such as the Heun method. We can conclude that it combines the most important advantages of the standard explicit and implicit methods.
Our next aim is to search for more accurate algorithms for nonlinear parabolic equations. We plan to construct new analytical solutions for circumstances in which the heat conduction coefficients depend on space, and to use these to test both the new and old methods. These results will be of significant interest for the simulation of real problems in engineering, such as heat transfer by conduction and radiation in elements of machine tools.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A
Proof of Theorem 1. We start from the Taylor series of u_i:

u_i(t + h) = u_i + h(du_i/dt) + (h²/2)(d²u_i/dt²) + O(h³),

where all u terms on the right-hand side are taken at time t. Let us substitute the spatially discretized ODE (3),

du_i/dt = α(u_{i−1} − 2u_i + u_{i+1})/∆x²,

to obtain:

u_i(t + h) ≈ u_i + r(u_{i−1} − 2u_i + u_{i+1}) + (r²/2)(u_{i−2} − 4u_{i−1} + 6u_i − 4u_{i+1} + u_{i+2}).

After simplification, one obtains:

u_i^{n+1} ≈ (1 − 2r + 3r²)u_i^n + (r − 2r²)(u_{i−1}^n + u_{i+1}^n) + (r²/2)(u_{i−2}^n + u_{i+2}^n).

Thus, for a double time step:

u_i^{n+2} ≈ (1 − 4r + 12r²)u_i^n + (2r − 8r²)(u_{i−1}^n + u_{i+1}^n) + 2r²(u_{i−2}^n + u_{i+2}^n).
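The double-step coefficients can also be checked mechanically by composing the single-step stencil with itself and truncating at O(r³); a small Python/NumPy sketch of this bookkeeping:

```python
import numpy as np

# one-step stencil coefficients as polynomials in r (powers r^0, r^1, r^2):
# (1 - 2r + 3r^2) u_i + (r - 2r^2)(u_{i-1} + u_{i+1}) + (r^2/2)(u_{i-2} + u_{i+2})
one = {0: [1.0, -2.0, 3.0], 1: [0.0, 1.0, -2.0], -1: [0.0, 1.0, -2.0],
       2: [0.0, 0.0, 0.5], -2: [0.0, 0.0, 0.5]}

def mul(p, q):
    """Multiply two polynomials in r, truncating at O(r^3)."""
    return list(np.convolve(p, q)[:3])

two = {}                                   # the stencil composed with itself
for o1, c1 in one.items():
    for o2, c2 in one.items():
        acc = two.get(o1 + o2, [0.0, 0.0, 0.0])
        two[o1 + o2] = [a + b for a, b in zip(acc, mul(c1, c2))]
```

Up to second order in r, the composed stencil reproduces exactly the coefficients 1 − 4r + 12r², 2r − 8r², and 2r² of the double-step formula above.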