A Special Multigrid Strategy on Non-Uniform Grids for Solving 3D Convection–Diffusion Problems with Boundary/Interior Layers

Abstract: Boundary and interior layer problems of high-dimensional convection–diffusion equations exhibit distinct asymmetry. Consequently, the computational grid distributions and the linear algebraic systems arising from finite difference schemes for them are also asymmetric, and numerically solving such problems is more complicated than solving symmetric ones. In this paper, we extend our previous work on the partial semi-coarsening multigrid method combined with a high-order compact (HOC) difference scheme for solving two-dimensional (2D) convection–diffusion problems on non-uniform grids to three-dimensional (3D) cases. The main merit of the present method is that the multigrid method on non-uniform grids can be performed with different numbers of grid points along different coordinate axes, which is more efficient than a multigrid method on non-uniform grids with the same number of grid points along every axis. Numerical experiments are carried out to validate the accuracy and efficiency of the present method. They show that, without losing high precision, the present method is very effective at reducing computing cost by cutting down the number of grid points in the direction(s) that do not contain boundary or interior layers.


Introduction
In this paper, we consider the three-dimensional (3D) steady convection-diffusion equation in the form of Equation (1), where the meaning of each symbol in Equation (1) and throughout this paper is listed in the nomenclature. Equation (1) can be regarded as the model equation of many transport processes in fluid flow and heat transfer [1,2]. It can describe the convection and diffusion of various physical quantities, such as concentration, heat, momentum, etc. Numerical simulation of fluid dynamics problems is computationally intensive, and 3D problems may remain a huge challenge even on the most advanced computers because of limits on computing power and storage capacity. Over the past three decades, higher-precision computational methods combined with faster iterative approaches have proven promising for obtaining solutions of satisfactory precision within acceptable CPU time [3-7].
High-order compact (HOC) difference methods have been developed and used extensively to solve various convection-diffusion problems [3-5,8-15]. The attractive merits of HOC schemes include high accuracy, good stability, and easy treatment of boundary conditions. However, the majority of existing HOC schemes are built on uniform grids [3,8,10,12]. Many realistic fluid flow and heat transfer problems exhibit asymmetric variation over their domains, such as boundary or interior layer problems [4,5,9,11,13-15]; if uniform grids are used over the whole domain, numerical results deteriorate unless very fine grids are employed to obtain acceptable accuracy. Such fine uniform grids are unnecessary, because a reasonable non-uniform grid distribution handles these problems very effectively. To this end, Kalita et al. [11] developed an HOC scheme on rectangular non-uniform grids to solve 2D convection-diffusion problems. The scheme shows very good scale resolution for various non-uniform grid spacing patterns. Afterwards, this HOC scheme was extended to the 3D case by Ge and Cao [5] and Shanab et al. [15]. At the same time, we notice that the same number of grid points along every coordinate axis is used in all the numerical examples they consider [5,11,15]. However, for problems with locally large gradients, the physical quantities change steeply in just one direction (for 2D) or one or two directions (for 3D), while in the other direction(s) the solution changes smoothly; using the same number of grid points in all directions is then unnecessary, and it wastes storage space and computing cost. Thus, one aim of this paper is to design a special grid distribution strategy to fix this defect.
To efficiently solve the linear algebraic systems resulting from finite difference schemes, the multigrid method was proposed [16]. Up to now, the multigrid method has been employed to solve various elliptic equations discretized by HOC difference schemes [5-7,9,14,17-24]. However, since most HOC schemes are proposed on uniform grids, the corresponding multigrid methods are carried out on uniform grids [6,18-21,24]. Ge and Zhang et al. [4,9] employed fourth-order compact discretization schemes and the multigrid method to solve 2D and 3D convection-diffusion equations with boundary layers on non-uniform grids. Because a grid transformation was used to map the non-uniform grid to a uniform one, the multigrid method was still implemented on uniform grids in the computational plane. In addition, it is worth pointing out that, in Ref. [22], a multigrid method was developed to solve the system of linear equations arising from the HOC scheme [11] on non-uniform grids for 2D problems, and it was extended to 3D cases [5,7]. Unfortunately, the same number of grid points must be used along every coordinate axis because the restriction and interpolation operators needed by the multigrid method are constructed only for a full-coarsening grid (see Refs. [5,7] for more details). In fact, it is not necessary to distribute the same number of grid points for boundary or interior layer problems when locally large gradients occur only in some of the coordinate directions.
To overcome this defect, based on the idea of Zhang [19] on the partial semi-coarsening strategy for solving the 2D Poisson equation, Cao et al. [23] developed a special method to solve 2D boundary or interior layer problems of the convection-diffusion equation. The main merit of the grid distribution strategy is that it permits setting different numbers of grid points along the x- and y-coordinate axes. Under such circumstances, the computing cost is greatly reduced by setting a small number of grid points in the non-dominant direction(s), while the computed accuracy is not lost. In this paper, we generalize the method in [23] to 3D cases and develop a special multigrid method on non-uniform grids to solve 3D convection-diffusion boundary or interior layer problems of the form (1). The outline of this paper is as follows. In Section 2, the HOC scheme proposed by Ge and Cao [5] is extended to solve the 3D convection-diffusion Equation (1) on non-uniform grids. In Section 3, we develop a special multigrid strategy, including the partial semi-coarsening procedure and specially designed restriction and interpolation operators. After that, numerical examples are employed to illustrate the accuracy and efficiency of the present method in Section 4. Section 5 presents the conclusions of this paper.

HOC Scheme on Non-Uniform Grids
We consider a cubic domain Ω = [a_1, a_2] × [b_1, b_2] × [c_1, c_2] and divide the intervals [a_1, a_2], [b_1, b_2] and [c_1, c_2] into sub-intervals by the points a_1 = x_0, x_1, x_2, . . . , x_{N_x−1}, x_{N_x} = a_2; b_1 = y_0, y_1, y_2, . . . , y_{N_y−1}, y_{N_y} = b_2; and c_1 = z_0, z_1, z_2, . . . , z_{N_z−1}, z_{N_z} = c_2. On the x-axis, the backward and forward step lengths are given by x_b = x_i − x_{i−1} and x_f = x_{i+1} − x_i; similarly, on the y- and z-axes, y_b = y_j − y_{j−1}, y_f = y_{j+1} − y_j, z_b = z_k − z_{k−1} and z_f = z_{k+1} − z_k. If and only if x_f = x_b, y_f = y_b and z_f = z_b does the grid become uniform. Figure 1 shows a non-uniform cubic grid stencil in a subdomain. The discrete value of Φ(x, y, z) at a reference point (x, y, z) is denoted by Φ_0, and the values at the other 18 neighboring grid points are defined analogously, as in Figure 1. The HOC difference scheme on non-uniform grids for the 3D convection-diffusion Equation (1), developed by Ge and Cao [5], is defined on this 19-point stencil; the finite difference operators appearing in the scheme are given in Appendix A. Note that the truncation error of the scheme is fourth order on uniform grids (x_f = x_b, y_f = y_b and z_f = z_b) and at least third order on non-uniform grids (x_f ≠ x_b or y_f ≠ y_b or z_f ≠ z_b). Shanab et al. [15] also obtained this scheme with a similar derivation (for more details, see Refs. [5,15]).
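To make the non-uniform stencil concrete, the sketch below implements the standard second-order central difference approximations for the first and second derivatives on a non-uniform grid; both reduce to the familiar uniform-grid formulas when x_f = x_b. These are illustrative only: the operators actually used in the scheme of Ge and Cao [5] are the ones listed in Appendix A.

```python
def delta_x(phi_w, phi_0, phi_e, xb, xf):
    """Non-uniform central first-derivative approximation at the reference
    point, exact for quadratics. A common textbook choice; the operators
    actually used in Ref. [5] are listed in its Appendix A."""
    return (xb**2 * phi_e + (xf**2 - xb**2) * phi_0 - xf**2 * phi_w) / (
        xf * xb * (xf + xb))

def delta2_x(phi_w, phi_0, phi_e, xb, xf):
    """Non-uniform central second-derivative approximation; reduces to
    (phi_w - 2*phi_0 + phi_e) / h**2 when xf == xb == h."""
    return 2.0 * (xb * phi_e - (xf + xb) * phi_0 + xf * phi_w) / (
        xf * xb * (xf + xb))
```

For Φ(x) = x² sampled at x = 0.2, 0.3, 0.5 (so x_b = 0.1, x_f = 0.2), the formulas recover Φ'(0.3) = 0.6 and Φ''(0.3) = 2 exactly, since a quadratic is reproduced without error.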

Partial Semi-Coarsening Multigrid Method
The multigrid method is a very efficient iterative method for solving the large systems of algebraic equations arising from the discretization of elliptic boundary value problems [25]. Its main logic is based on the recognition that classical relaxation methods strongly damp the oscillatory error components but converge slowly for the smooth ones. Oscillation and smoothness are relative, however: smooth error components on a fine grid become oscillatory when transferred to a coarser grid. Therefore, the multigrid method splits the errors into high-frequency and low-frequency components and solves them in different subspaces. Firstly, we perform a few relaxation sweeps on the fine grid, then transfer the residual of the fine grid to the next coarser grid and smooth it again there. This process is repeated successively down to the coarsest grid. Then, the corrections are interpolated back to the finer grids successively, and relaxation sweeps are applied again until the process reaches the finest grid. This procedure is called a multigrid V-cycle. The restriction operator, interpolation operator and relaxation operator are the three essential components of the multigrid method; we discuss them in turn in the following parts.
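The V-cycle just described can be sketched in a deliberately minimal setting, a 1D Poisson problem on a uniform grid; the paper applies the same structure to 3D non-uniform grids with the operators of the following subsections.

```python
import numpy as np

def smooth(u, f, h, sweeps):
    # Gauss-Seidel relaxation for -u'' = f, Dirichlet values held in u[0], u[-1]
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # full weighting: coarse value = (1/4, 1/2, 1/4) average of fine neighbours
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec, n_fine):
    # linear interpolation of the coarse-grid correction back to the fine grid
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h, nu1=2, nu2=2):
    if len(u) == 3:                  # coarsest grid: one unknown, solve exactly
        u[1] = 0.5 * h * h * f[1] + 0.5 * (u[0] + u[2])
        return u
    u = smooth(u, f, h, nu1)         # pre-smoothing damps oscillatory error
    r = residual(u, f, h)
    rc = restrict(r)                 # transfer the residual to the coarse grid
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h, nu1, nu2)
    u += prolong(ec, len(u))         # coarse-grid correction
    return smooth(u, f, h, nu2)      # post-smoothing
```

A few V(2, 2) cycles reduce the algebraic error to the level of the discretization error, independently of the grid size, which is the property that makes the method attractive for the fine layer-resolving grids used in this paper.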
Mulder [26] first proposed a semi-coarsening multigrid method to solve convection problems. After that, Liu and Liu [27] employed it to simulate the whole process of flow transition in 3D flat-plate boundary layers. In 2002, Zhang [19] developed a partial semi-coarsening multigrid method for solving the 2D Poisson equation. Partial semi-coarsening means that semi-coarsening is applied until, on a certain coarse grid, the number of grid points in both directions becomes the same; the standard full-coarsening process is then carried out in both directions until the coarsest grid is reached. Numerical results indicate that the computational cost of the partial semi-coarsening strategy is greatly decreased by using fewer grid points along the non-dominant axis than along the dominant axis, while the computed accuracy is not lost compared with full coarsening on a grid with the same number of points in both directions. Ge [20] extended this method to the 3D case. However, the grid distribution in [19,20] is uniform, although unequal mesh sizes are employed in different coordinate directions.
In this paper, we develop a partial semi-coarsening multigrid method on non-uniform grids to solve the 3D convection-diffusion problems. The merit of computational methods on non-uniform grids is their flexibility in treating locally large gradients, such as boundary layers or local singularities. It is necessary to show the 3D partial semi-coarsening process here. For example, we assume that the computational domain is a unit cube and that the numbers of grid points along the x-, y- and z-axes are N_x, N_y and N_z, respectively. Writing N_x = 128, N_y = 32, N_z = 8 as 128 × 32 × 8 for brevity, Figure 2 gives the partial semi-coarsening process for 3D non-uniform grids, in which steps 1 through 4 are semi-coarsening and steps 5 and 6 are full coarsening. The whole process is called partial semi-coarsening. The key technique for implementing the partial semi-coarsening multigrid method is to construct new restriction and interpolation operators for the different grid coarsening processes.
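The coarsening schedule described above can be computed mechanically: at each level, only the direction(s) currently holding the maximum number of grid points are halved, so the grid first semi-coarsens toward a cube and then full-coarsens. The helper below is a sketch of that rule.

```python
def coarsening_schedule(nx, ny, nz):
    """Partial semi-coarsening schedule: halve only the direction(s) with the
    current maximum number of grid points until all three are equal
    (semi-coarsening), then halve all three together (full coarsening) down
    to the coarsest level. A sketch of the strategy described in the text."""
    levels = [(nx, ny, nz)]
    while max(nx, ny, nz) > 2:
        m = max(nx, ny, nz)
        nx = nx // 2 if nx == m else nx
        ny = ny // 2 if ny == m else ny
        nz = nz // 2 if nz == m else nz
        levels.append((nx, ny, nz))
    return levels
```

For the 128 × 32 × 8 example of Figure 2, this reproduces the six coarsening steps of the text: four semi-coarsening steps (down to 8 × 8 × 8) followed by two full-coarsening steps (down to 2 × 2 × 2).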

Restriction Operator
For convenience but without loss of generality, we suppose that the x- and y-axes are dominant, i.e., the boundary or interior layers exist along them, while the z-axis is non-dominant. Under these circumstances, it is reasonable to put more grid points along the x- and y-axes than along the z-axis. Because the numbers of grid points along the x-, y- and z-axes may differ, the restriction operators must be built individually. Firstly, we use r_{i,j,k} to denote the residual at the fine-grid point (i, j, k) and r̄_{ī,j̄,k̄} to denote the corresponding residual at the coarse-grid point (ī, j̄, k̄). Note that i = 2ī, j = 2j̄ and k = 2k̄. The restriction operator for the residual is designed as follows: (i) If N_x > N_y > N_z, the number of grid points is largest in the x-direction, so we conduct grid coarsening only in the x-direction, using a one-way weighted-average strategy. In Figure 3, x_f and x_b are the forward and backward step lengths of the fine grid; the two big black points denote grid points on the fine grid level, while the small black point represents a grid point on the coarse grid level. Its residual is evaluated from its own value and those of its two neighboring fine-grid points, following the one-way average strategy introduced by Cao et al. [23]. (ii) If N_x = N_y > N_z, there exist two dominant directions, the x- and y-axes. The two-way average strategy introduced by Ge and Cao [22] is employed to evaluate the residual on the coarse grid level; its basic principle is the area law described by Liu [28]. Figure 4 gives the areas S_i (i = 0, 1, . . . , 8) for the non-uniform subdomain, from which the corresponding full-weighting restriction operator is given explicitly. (iii) If N_x = N_y = N_z, Ref. [7] proposed, for each grid point on the coarse grid level, a full-weighting restriction operator on a non-uniform grid based on the volume law [28]. Under this circumstance, we use that full-weighting restriction operator directly (see Appendix B).
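For case (i), a 1D sketch conveys the idea: the coarse-grid residual is a weighted average of the coarse point's own fine-grid value and its two fine-grid neighbours, with weights depending on the non-uniform step lengths. The length-weighted form below is an illustrative assumption that reduces to the classical (1/4, 1/2, 1/4) full weighting on a uniform grid; the precise weights of Cao et al. [23] are given in that reference.

```python
def restrict_1d(r_w, r_0, r_e, xb, xf):
    """One-way weighted restriction in the x-direction only (case Nx > Ny > Nz).
    r_w, r_0, r_e are the fine-grid residuals at the west neighbour, the point
    coinciding with the coarse-grid point, and the east neighbour; xb, xf are
    the backward and forward fine-grid step lengths. Length-weighted average;
    an illustrative sketch, not necessarily the exact operator of Ref. [23]."""
    return (xb * r_w + (xb + xf) * r_0 + xf * r_e) / (2.0 * (xb + xf))
```

By construction the operator preserves constants for any step-length ratio, which is the minimal consistency requirement for a restriction operator.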

Interpolation Operator
(i) If N_x > N_y > N_z, interpolation only needs to be conducted along the x-axis. Figure 5 shows the coarse grid points (big black points), the fine grid points (small and big black points), and the step length L_x of the coarse grid stencil. The interpolation operator follows the average strategy introduced by Cao et al. [23]. (ii) If N_x = N_y > N_z, interpolation should be performed along both the x- and y-axes. Figure 6 shows the areas S_i (i = 1, 2, 3, 4) used by the non-uniform interpolation operator; big black points represent grid points on the coarse grid level and small black points those on the fine grid level. The interpolation operator is given explicitly from these areas. (iii) If N_x = N_y = N_z, we directly use the tri-linear interpolation operator on a non-uniform grid based on the volume law proposed in [7] (see Appendix C).
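Case (i) again admits a compact 1D sketch: a fine-grid point lying between two coarse-grid points receives a correction interpolated linearly from its two coarse neighbours, with weights inversely proportional to distance. This is a sketch of the average strategy of Cao et al. [23]; the exact form is given in that reference.

```python
def interpolate_1d(e_left, e_right, d_left, d_right):
    """Linear interpolation of the coarse-grid correction to a fine-grid point
    between two coarse points (x-direction only, case Nx > Ny > Nz).
    d_left / d_right are the distances from the fine point to the left / right
    coarse points; their sum is the coarse step length L_x. Illustrative sketch."""
    L = d_left + d_right
    return (d_right * e_left + d_left * e_right) / L
```

Fine-grid points that coincide with coarse-grid points simply copy the coarse correction; only the in-between points need this formula.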

Relaxation Operator
The role of relaxation operators (such as Jacobi or Gauss-Seidel iteration) in the multigrid method is not to remove the errors but to damp the high-frequency components of the errors on the current grid, leaving the low-frequency components to be removed on coarser grids. In [19,20], line Gauss-Seidel relaxation (for 2D) and plane Gauss-Seidel relaxation (for 3D) were used to remove the high-frequency error components in the dominant direction(s) and proved to be very efficient and robust. In [7], point Gauss-Seidel relaxation, red-black Gauss-Seidel relaxation, four-color Gauss-Seidel relaxation and alternating-direction plane Gauss-Seidel relaxation were used as smoothers for the multigrid method, and their smoothing effects were compared in numerical experiments. In this paper, we use line Gauss-Seidel relaxation as the smoother if the boundary or interior layers exist in only one direction, as done in [19]; plane Gauss-Seidel relaxation is used if the layers exist in two directions, as done in [20]. If boundary or interior layers exist in all three directions, alternating plane Gauss-Seidel relaxation is applied as the smoother, as done in [7].
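Line Gauss-Seidel relaxation solves all unknowns on one grid line simultaneously (a tridiagonal system), which damps high-frequency error strongly along that line. The sketch below shows one x-line sweep for the 2D 5-point Poisson stencil on a uniform grid; it is a minimal illustration of the smoother class, not the paper's 3D non-uniform implementation.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a, b, c are the sub-, main and
    super-diagonals and d is the right-hand side (Thomas algorithm)."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def line_gs_sweep(u, f, h):
    """One x-line Gauss-Seidel sweep for -u_xx - u_yy = f on a uniform grid,
    with Dirichlet boundary values held in u. Each interior line j is solved
    exactly as a tridiagonal system, using the freshest values from the
    neighbouring lines. Illustrative sketch of a line smoother."""
    ny, nx = u.shape
    for j in range(1, ny - 1):
        n = nx - 2
        a = np.full(n, -1.0)
        b = np.full(n, 4.0)
        c = np.full(n, -1.0)
        d = h * h * f[j, 1:-1] + u[j - 1, 1:-1] + u[j + 1, 1:-1]
        d[0] += u[j, 0]       # fold the Dirichlet boundary values into the RHS
        d[-1] += u[j, -1]
        u[j, 1:-1] = thomas(a, b, c, d)
    return u
```

A plane smoother for 3D follows the same pattern one dimension up: each z-plane is solved (approximately) as a 2D problem, with the neighbouring planes supplying the right-hand side.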

Numerical Experiments
The following four convection-diffusion problems with boundary or interior layers are computed to validate the high accuracy and efficiency of the present method. The definition domain of all problems is Ω = [0, 1] × [0, 1] × [0, 1], and Dirichlet boundary conditions are considered. All computations are coded in Fortran 77, and the programs are stopped when the residuals in the L2 norm on the finest grids are reduced by a factor of 10^10. The maximum absolute error (Error) on the finest grid, the CPU time in seconds, the number of multigrid V(ν_1, ν_2) cycles (Num), and the convergence order are reported. The convergence order is computed by Order = log(Error1/Error2)/log(N2/N1), where Error1 and Error2 are the maximum absolute errors estimated on two different grids with numbers of grid points N1 and N2, respectively.

Problem 1

The exact solution of the first problem has a locally large gradient at x = 0.5 for big σ, so we refine the grid only along the x-axis while setting fewer grid points along the y- and z-axes, using a grid distribution in which λ_x ∈ (−1, 1) is a stretching parameter that controls the distribution of grid points along the x-axis. For example, −1 < λ_x < 0 indicates that more grid points are clustered near the boundaries x = 0 and x = 1, whereas 0 < λ_x < 1 indicates that more grid points are clustered around the middle of the region, x = 0.5. If λ_x = 0, the grid is uniform in the computational region.
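The paper's grid distribution function is given by its (omitted) equations; a commonly used sine-stretched mapping with exactly the stated properties of λ_x is sketched below. The specific formula is an assumption for illustration and may differ from the one in the paper.

```python
import math

def stretched_grid(N, lam):
    """A sine-stretched grid on [0, 1] with stretching parameter lam in (-1, 1):
    lam > 0 clusters points near the centre x = 0.5, lam < 0 clusters them near
    the boundaries x = 0 and x = 1, and lam = 0 gives a uniform grid. An assumed
    mapping with the behaviour described in the text, not necessarily the
    paper's own grid function."""
    return [i / N + lam / (2.0 * math.pi) * math.sin(2.0 * math.pi * i / N)
            for i in range(N + 1)]
```

The local spacing is proportional to 1 + lam·cos(2πξ), so it shrinks to about (1 − lam)/N near the centre for lam > 0, which is what concentrates resolution inside the interior layer at x = 0.5.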
We set Re = 10^2, σ = 10^3 and use multigrid V(2, 2) cycles. Table 1 shows the computed maximum errors, multigrid iteration numbers, CPU times and comparisons with the full-coarsening grids [5]. We find that the computed accuracy is not lost when fewer grid points are used in the two non-dominant directions. For example, when λ_x = 0.6, the maximum error on a 32 × 16 × 16 grid is almost the same as that on a 32 × 32 × 32 grid. This is not surprising, because the locally large gradient occurs only in the x-direction, not in the y- or z-direction; we can therefore set fewer grid points in those directions to reduce the computational cost and make the computation more efficient. For instance, on a non-uniform grid, the 32 × 16 × 16 grid has only a quarter as many points as the 32 × 32 × 32 grid, and the CPU time on the 32 × 16 × 16 grid is likewise only about a quarter of that on the 32 × 32 × 32 grid. Comparing the multigrid iteration numbers and the CPU times, the partial semi-coarsening multigrid is much more efficient than full coarsening. We note that the 64 × 64 × 64, 32 × 32 × 32 and 16 × 16 × 16 grids are still included for the partial semi-coarsening algorithm to show that it reduces to the full-coarsening algorithm when N_x = N_y = N_z. Table 2 gives the numerical results at Re = 10^2, σ = 10^4 with λ_x = 0.75. When σ is bigger, the solution in the interior layer changes more violently; nevertheless, the computed results on non-uniform grids are still very accurate. On the other hand, when the number of grid points in the dominant direction (the x-axis) is fixed, reducing the number of grid points along the y- and z-axes still yields solutions almost as accurate as with fine grids in all three directions.
Comparing the numerical results on the 64 × 64 × 64 full-coarsening grid and the 64 × 16 × 16 partial semi-coarsening grid, we find that, for this case, a grid with only one sixteenth as many points is enough to achieve nearly the same accuracy as the grid that is fine in all three directions. As a result, the CPU time of the corresponding partial semi-coarsening multigrid method is only about one sixteenth of that of the full-coarsening multigrid method. Figure 7a,b show the 32 × 32 × 32 and 32 × 16 × 16 grids with λ_x = 0.75 in the xoy plane, respectively. Figure 7c,d show the computed solutions on the 32 × 32 × 32 and 32 × 16 × 16 grids at the plane z = 0.375 for σ = 10^4. The solution on the 32 × 16 × 16 grid is almost indistinguishable from that on the 32 × 32 × 32 grid. This again demonstrates that distributing more grid points in the non-dominant directions is unnecessary; computational efficiency is improved and computing cost is saved because fewer grid points are involved. Table 1. Maximum errors, multigrid iteration numbers, and CPU time for the Problem in Section 4.1 (σ = 10^3, λ_x = 0.6).

Problem 2
The exact solution is such that, when ε is small, a large gradient occurs only near the boundary y = 1, not along the x- and z-axes, so we use grid functions in which the closer λ_y is to 1, the more grid points are clustered near y = 1. Tables 3 and 4 list the computed results for ε = 10^−2 and ε = 10^−3, with λ_y = 0.55 and λ_y = 0.9, respectively. By observing the maximum errors, we find that the numerical solution precision does not decrease when we set fewer grid points along the x- and z-axes while the number of grid points along the y-axis is fixed. In other words, if the large gradient occurs along only one coordinate axis, it is not necessary to use the same number of grid points for the other two axes; computational cost and storage space are thereby saved. Additionally, for this problem, the multigrid V(2, 2) cycles are very efficient and more cost-effective than the full-coarsening multigrid method [5], with fewer multigrid iterations and less CPU time. When ε = 10^−3, the boundary layer is steeper, but we can still obtain very accurate solutions on the non-uniform grids with few grid points along the non-dominant axes. The partial semi-coarsening multigrid method still works well and remains superior to the full-coarsening multigrid method [5]. Figure 8a,b show the 32 × 32 × 32 and 16 × 32 × 16 grids with λ_y = 0.9 in the xoy plane, respectively. Figure 8c,d show the computed solutions on the 32 × 32 × 32 and 16 × 32 × 16 grids at the plane z = 0.5 for ε = 10^−3. The solutions computed on the 16 × 32 × 16 and 32 × 32 × 32 grids show no perceivable difference. This also illustrates that the partial semi-coarsening grid is a very effective strategy to reduce computational cost and storage space.
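For a boundary layer at y = 1, a one-sided stretching is needed rather than the symmetric mapping of Problem 1. The mapping below is an assumed example with the stated behaviour of λ_y (spacing near y = 1 shrinks like (1 − λ_y)/N as λ_y approaches 1); the paper's actual grid function for this problem may differ.

```python
import math

def boundary_clustered_grid(N, lam):
    """A one-sided sine-stretched grid on [0, 1] with lam in [0, 1): the closer
    lam is to 1, the more points cluster near the right boundary y = 1. The
    local spacing is proportional to 1 + lam*cos(pi*xi), smallest at xi = 1.
    An assumed mapping, not necessarily the paper's grid function."""
    return [i / N + lam / math.pi * math.sin(math.pi * i / N)
            for i in range(N + 1)]
```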

Problem 3

The exact solution has steep interior layers near the planes x = 0.5 and y = 0.5 when σ is large. Thus, we distribute many more non-uniform grid points around these planes while using fewer uniform grid points along the z-axis. Table 5 shows the numerical results obtained by the full-coarsening and partial semi-coarsening multigrid methods for the Problem in Section 4.3 with σ = 10^3. We choose λ_x = λ_y = 0.55, and a multigrid V(2, 2) cycle is used. We find that the computational cost (CPU time) drops dramatically because comparatively fewer grid points are used in the non-dominant (z) direction, while the computational accuracy does not drop distinctly. Fewer V(2, 2) cycles are needed for the partial semi-coarsening multigrid method than for full coarsening, indicating that partial semi-coarsening is more efficient and cost-effective. Table 6 gives the computed results for σ = 10^4; λ_x = λ_y = 0.7 is used for both the full-coarsening and the partial semi-coarsening multigrid methods. Under such circumstances, the solution in the interior layer changes more violently. Partial semi-coarsening on a 64 × 64 × 16 grid achieves almost the same accuracy as full coarsening on a 64 × 64 × 64 grid; however, the number of V(2, 2) cycles consumed by partial semi-coarsening is only half that of full coarsening, and the CPU time is only about one tenth. Figure 9 shows the computed solution (c) on the full-coarsening non-uniform 64 × 64 × 64 grid (a), and the computed solution (d) with partial semi-coarsening on the non-uniform 64 × 64 × 16 grid (b), with λ_x = λ_y = 0.7 for σ = 10^4 and Re = 10^2, at the plane x = 0.5, for the Problem in Section 4.3.
We observe that the difference between them is almost indistinguishable. Thus, we conclude that using fewer grid points along the non-dominant direction does not influence the precision of the numerical solution, while the computational cost is lowered since relatively fewer grid points are used.

Problem 4

The exact solution of the Problem in Section 4.4 has a steep gradient near the boundaries x = 1, y = 1 and z = 1 when ε is small. Under this circumstance, we use grid distribution functions that place more grid points in the boundary layers along the x-, y- and z-axes. When λ_x = λ_y = λ_z = 0.75, the 32 × 32 × 32 grid in the xoy plane is shown in Figure 10a. Table 7 gives the maximum error, convergence order, number of multigrid V(2, 2) cycles, and CPU time of the full-coarsening multigrid method with uniform and non-uniform grids for the Problem in Section 4.4. We choose ε = 0.1, 0.05 and 0.01; correspondingly, the grid stretching parameters are set to λ_x = λ_y = λ_z = 0.35, 0.5 and 0.8, respectively. We notice that the maximum errors on non-uniform grids are superior to those on uniform grids. In addition, the multigrid method is very efficient on both uniform and non-uniform grids, although a few more multigrid V(2, 2) cycles are needed on non-uniform grids than on uniform grids. When ε = 0.01, the computed results on the uniform grids are inaccurate, whereas very accurate solutions are obtained on the non-uniform grids. This fully demonstrates that the partial semi-coarsening multigrid method in this paper reduces to full coarsening when boundary or interior layers exist along all three coordinate axes; in other words, the full-coarsening multigrid method in [5] is a special case of the present partial semi-coarsening multigrid method. Figure 10 shows the exact solution (b), the numerical solution on the uniform grid (c), and the numerical solution on the non-uniform grid (d) for ε = 10^−2, λ_x = λ_y = λ_z = 0.8 at the plane z = 0.8125. Comparing the computed results of Figure 10c,d with the exact solution of Figure 10b, the numerical solution computed on the non-uniform grid is clearly more accurate than that computed on the uniform grid.

Conclusions
In this study, we have extended the partial semi-coarsening multigrid method introduced in Ref. [23] for solving 2D boundary or interior layer problems of convection-diffusion equations to 3D cases. The highlight of this method is that it allows different numbers of grid points along different coordinate axes, so it is cost-effective and well suited to problems whose locally large gradients or boundary/interior layers exist along only one or two axes in 3D. It overcomes the defect of our previous work in Ref. [5], in which the same number of grid points is used along all coordinate axes, leading to a big waste of CPU time and storage. On the other hand, if locally large gradients or boundary layers exist along all three coordinate axes, the present partial semi-coarsening multigrid method reduces to the full-coarsening multigrid method [5]. Thus, the present partial semi-coarsening strategy is more flexible for different types of boundary or interior layer problems. The numerical experiments validate that the partial semi-coarsening multigrid method, by using fewer grid points in the non-dominant direction(s), is very efficient and cost-effective compared with the full-coarsening multigrid method, while the computed accuracy is not lost.
We emphasize that, although this work is a generalization of the 2D work in Ref. [23], the generalization is not straightforward and the implementation is nontrivial, because only one dominant direction exists for 2D boundary or interior layer problems, whereas two dominant directions may exist in 3D. This inevitably adds complexity to the algorithm design and program implementation. In addition, the present method can also be extended to solve 2D and 3D boundary layer fluid flow problems governed by the incompressible Navier-Stokes equations. This is beyond the scope of this paper, and we plan to pursue it in the near future.

Acknowledgments:
We would like to thank the editors and the referees, whose constructive comments and suggestions helped improve the quality of this paper.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
c_1, c_2    computational domain bounds on the z-axis, c_1 < c_2
x_i    discrete point coordinate on the x-axis
y_j    discrete point coordinate on the y-axis
z_k    discrete point coordinate on the z-axis
N_x    number of discrete grid points on the x-axis
N_y    number of discrete grid points on the y-axis
N_z    number of discrete grid points on the z-axis
i    discrete grid index on the x-axis on the fine grid
j    discrete grid index on the y-axis on the fine grid
k    discrete grid index on the z-axis on the fine grid
ī    discrete grid index on the x-axis on the coarse grid
j̄    discrete grid index on the y-axis on the coarse grid
k̄    discrete grid index on the z-axis on the coarse grid
x_b    backward step length on the x-axis
x_f    forward step length on the x-axis
y_b    backward step length on the y-axis
y_f    forward step length on the y-axis
z_b    backward step length on the z-axis
z_f    forward step length on the z-axis
δ_x, δ²_x    first- and second-order central difference operators on the x-axis
δ_y, δ²_y    first- and second-order central difference operators on the y-axis
δ_z, δ²_z    first- and second-order central difference operators on the z-axis
r    residual on the fine grid
r̄    residual on the coarse grid
S_i (i = 1, . . . , 8)    area of the i-th subregion
ν_1    number of pre-smoothing sweeps
ν_2    number of post-smoothing sweeps
Error    maximum absolute error
Num    number of multigrid iterations
Order    convergence order
log    base-10 logarithm
Re    Reynolds number
σ    boundary or interior layer parameter
ε    diffusion coefficient
λ_x    grid stretching parameter on the x-axis
λ_y    grid stretching parameter on the y-axis
λ_z    grid stretching parameter on the z-axis
h, k, l    averaged step lengths, h = (x_f + x_b)/2, k = (y_f + y_b)/2, l = (z_f + z_b)/2