Article

Adaptive Lavrentiev Regularization of Singular and Ill-Conditioned Discrete Boundary Value Problems in the Robust Multigrid Technique

by
Sergey I. Martynenko
* and
Aleksey Yu. Varaksin
Joint Institute for High Temperatures of the Russian Academy of Sciences, Moscow 125412, Russia
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(18), 2919; https://doi.org/10.3390/math13182919
Submission received: 30 July 2025 / Revised: 31 August 2025 / Accepted: 5 September 2025 / Published: 9 September 2025

Abstract

The paper presents a multigrid algorithm with an effective procedure for finding the problem-dependent components of smoothers. A discrete Neumann-type boundary value problem is taken as the model problem. To overcome the difficulties caused by the singularity of the coefficient matrix of the resulting system of linear equations, the discrete Neumann-type boundary value problem is solved by direct Gauss elimination on the coarsest level. At the finer grid levels, Lavrentiev (shift) regularization is used to construct approximate solutions of the singular or ill-conditioned problems. For unperturbed systems, the regularization parameter can be defined using the proximity of the solutions obtained at the coarser grid levels. The paper presents the multigrid algorithm, an analysis of convergence and perturbation errors, a procedure for defining a starting guess for the Neumann boundary value problem that satisfies the compatibility condition, and an extrapolation of the solutions of the regularized linear systems. This robust algorithm with the least number of problem-dependent components will be useful for solving industrial problems.

1. Introduction

Let $Ax = b$ denote a discrete analogue of a linear boundary value problem (BVP) $\mathcal{L}u = f$ with eliminated boundary conditions. Pure Neumann boundary conditions, which specify the normal derivative of the solution on the boundary, result in a singular coefficient matrix $A$. Three properties follow immediately: (1) the determinant of a singular matrix is equal to zero; (2) its inverse $A^{-1}$ is not defined; and (3) a singular matrix always has at least one zero eigenvalue. As a result, if a solution $x$ of $Ax = b$ exists, then it is not unique: any constant vector $c$ solves the homogeneous system $Ac = 0$. This singular behavior of the coefficient matrix $A$ has to be taken into account separately by any iterative solver [1,2,3,4,5,6].
The Neumann boundary value problem (NBVP) for the Poisson equation has numerous applications in heat transfer, fluid dynamics, electrostatics, solid mechanics and other branches of physics and engineering. Various numerical methods for solving singular linear systems have been proposed and developed [7]. One of them, Lavrentiev regularization, adds a regularization term to the original system $Ax = b$, transforming it into a well-posed problem [8]. The choice of the regularization parameter is critical for the success of Lavrentiev regularization: a larger value leads to a more stable but potentially less accurate solution [9,10,11]. The advantages of this approach are that it is simple and that it does not affect the smoothing used in multigrid methods.
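To make the shift idea concrete, here is a minimal sketch (not from the paper; NumPy-based, with an illustrative shift value) that regularizes a small singular Neumann-type system:

```python
import numpy as np

# Singular 1D Neumann-type Laplacian: zero row sums, so A @ ones(n) = 0.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0

rng = np.random.default_rng(0)
b = rng.standard_normal(n)
b -= b.mean()                        # enforce the discrete compatibility condition

alpha = 1e-8                         # Lavrentiev shift (illustrative value)
u = np.linalg.solve(A + alpha * np.eye(n), b)
print(np.linalg.norm(b - A @ u))     # small residual even though det(A) = 0
```

Note that the same solve without the shift (`np.linalg.solve(A, b)`) fails, since $A$ is exactly singular.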
The Robust Multigrid Technique (RMT) has been developed for solving industrial and real-life problems [12]. For this purpose, the RMT has the following advantages:
(1)
Robustness: the only additional problem-dependent component (the number of smoothing iterations on coarse levels) compared to the single-grid smoothers;
(2)
Efficiency: close-to-optimal algorithmic complexity;
(3)
Almost full parallelism;
(4)
Low memory implementation.
The goal of this work is to construct an effective procedure for finding the problem-dependent components of iterative algorithms for black-box software. The numerical solution of a singular system arising from the discretization of the NBVP is chosen as a model problem. Practitioners often use the following procedure when solving industrial problems. Let
$$A_\beta\, u_\beta = b_\beta$$
be a parameter-dependent problem, where $\beta$ is an under-relaxation (for example, in the pressure correction equation of the SIMPLE method) or over-relaxation parameter. The simplest iterative scheme for this parameter-dependent problem is
$$u_\beta^{(\nu+1)} = G_\beta\, u_\beta^{(\nu)} + \tilde b_\beta,$$
where $\nu$ is the iteration counter. If the parameter-dependent system is solved for different values of $\beta$ starting from the same initial guess $u_\beta^{(0)}$, then an empirical dependence $\nu = \nu(\beta)$ can be constructed. The optimal value of $\beta$ minimizes the number of iterations: $\nu(\beta_{\mathrm{opt}}) \to \min$. The advantage of this procedure is its robustness: the optimal value $\beta_{\mathrm{opt}}$ can be determined without knowing the spectrum of the matrix $G_\beta$. Its disadvantage is that the computation of $\beta_{\mathrm{opt}}$ forms the dominant fraction of the overall cost of solving the given parameter-dependent problem.
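A sketch of this empirical procedure (illustrative matrix and names; an SOR smoother stands in for any parameter-dependent scheme):

```python
import numpy as np

def sor_count(A, b, beta, tol=1e-8, max_iter=100_000):
    """Count SOR iterations nu(beta) until the relative residual drops below tol."""
    u = np.zeros_like(b)
    for nu in range(1, max_iter + 1):
        for i in range(len(b)):
            s = b[i] - A[i, :i] @ u[:i] - A[i, i + 1:] @ u[i + 1:]
            u[i] += beta * (s / A[i, i] - u[i])
        if np.linalg.norm(b - A @ u) < tol * np.linalg.norm(b):
            return nu
    return max_iter

n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.01 * np.eye(n)
b = np.ones(n)
betas = np.linspace(0.6, 1.9, 14)
counts = [sor_count(A, b, beta) for beta in betas]
print("beta_opt ~", betas[int(np.argmin(counts))])
```

The scan itself dominates the cost, which is exactly the disadvantage noted above.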
The RMT uses the independent systems
$$A^{(l,k)}_{\beta^{(l,k)}}\; u^{(l,k)}_{\beta^{(l,k)}} = b^{(l,k)}_{\beta^{(l,k)}}, \quad k = 1, 2, \ldots, 3^{dl}, \quad l = L_3^+,\, L_3^+ - 1, \ldots, 0, \quad d = 2, 3$$
for the (parallel) solution of the discrete BVP, where the zero level $l = 0$ is the finest grid and $l = L_3^+$ is the coarsest level. The basic idea behind the RMT-based algorithm is that the discrete BVP can be solved by direct Gauss elimination on the coarsest level without regularization. After that, the regularized discrete BVPs are used for smoothing on the finer levels, and only a few iterations are needed for convergence. Since all computational grids of the same level are similar to each other, it is natural to assume that the optimal values of the problem-dependent components will be approximately the same for all grids of the same level:
$$\beta^{(l)}_{\mathrm{opt}} \approx \beta^{(l,1)}_{\mathrm{opt}} \approx \beta^{(l,2)}_{\mathrm{opt}} \approx \cdots \approx \beta^{(l,3^{dl})}_{\mathrm{opt}}.$$
In other words, the optimal value $\beta^{(l)}_{\mathrm{opt}}$ can be found on one or several grids of level $l$ and then taken for all grids of that level, so the amount of extra work on the level is negligible. This assumption is usually the most general one, and thus the results may be the strongest that can be obtained. The coarse-level solution is a sufficiently accurate approximation to the fine-level solution, and it is used as a priori information for determining the regularization parameter. On the finest grid ($l = 0$), the optimal value $\beta^{(0)}_{\mathrm{opt}}$ is computed by extrapolating the optimal values $\beta^{(1)}_{\mathrm{opt}}, \beta^{(2)}_{\mathrm{opt}}, \ldots$ obtained at the other levels $l = 1, 2, \ldots$.
A similar procedure can be used to adapt the computational grid to the features of the numerical solution. Numerous attempts are currently being made to improve computational algorithms for BVPs using artificial-intelligence-based technologies. The authors hope that the opportunities for developing these computational algorithms are not yet exhausted and that the solvers can be improved without artificial intelligence. Various problem-dependent strategies for finding the optimal value of the regularization parameter are given in [13,14,15,16] and elsewhere.
The article is organized as follows: first, some theoretical results on the iterative solution of regularized systems and a 1D illustration are presented. After that, some theoretical estimates for a two-grid algorithm are given. In Section 4, the RMT is used for solving a 3D NBVP; multigrid preconditioning, the extrapolation of the solution and the choice of the initial guess for the iterative solver are then discussed.
The development of parallel and high-performance computational techniques for the mathematical simulation of physical and chemical processes will have an impressive influence on the UN SDGs (Sustainable Development Goals: Clean Water and Sanitation; Affordable and Clean Energy; Industry, Innovation and Infrastructure; Climate Action; Sustainable Cities and Communities; and others), the United Nations' chief initiative for advancing basic living standards and addressing a range of global issues.

2. Single-Grid Solver

Here, the Neumann boundary value problem (NBVP) for the Poisson equation
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} = f(x, y, z), \qquad \left.\frac{\partial u}{\partial n}\right|_{\partial\Omega} = g,$$
is considered in the unit cube $\Omega = (0,1)^3$, where $g$ is a known function. If a solution of the boundary value problem exists, then it is not unique. Integration of (1) over the closed domain $\overline{\Omega} = \Omega \cup \partial\Omega$ results in the following compatibility condition:
$$\int_0^1\!\!\int_0^1 \left( \left.\frac{\partial u}{\partial x}\right|_{x=1} - \left.\frac{\partial u}{\partial x}\right|_{x=0} \right) dy\,dz + \int_0^1\!\!\int_0^1 \left( \left.\frac{\partial u}{\partial y}\right|_{y=1} - \left.\frac{\partial u}{\partial y}\right|_{y=0} \right) dx\,dz + \int_0^1\!\!\int_0^1 \left( \left.\frac{\partial u}{\partial z}\right|_{z=1} - \left.\frac{\partial u}{\partial z}\right|_{z=0} \right) dx\,dy = \int_0^1\!\!\int_0^1\!\!\int_0^1 f(x,y,z)\,dx\,dy\,dz.$$
Solutions of the continuous boundary value problem (1) exist if (and only if) the compatibility condition is satisfied.
The standard seven-point discrete Laplacian on a uniform mesh with mesh size $h = 1/N$, where $N$ is the discretization parameter, becomes
$$\frac{u^h_{i-1jk} - 2u^h_{ijk} + u^h_{i+1jk}}{h^2} + \frac{u^h_{ij-1k} - 2u^h_{ijk} + u^h_{ij+1k}}{h^2} + \frac{u^h_{ijk-1} - 2u^h_{ijk} + u^h_{ijk+1}}{h^2} = f(x_i, y_j, z_k),$$
where the function $u^h$ is a discrete analogue of $u$ in (1).
The elimination of Neumann boundary conditions and lexicographic ordering of the unknowns make it possible to rewrite the discrete NBVP in matrix form
A u = b
with a singular symmetric $N \times N$ coefficient matrix $A$ (all diagonal entries $a_{ii} > 0$). This means that $\det A = 0$ and $A$ is non-invertible (i.e., $A^{-1}$ is not defined). Many real-life applications lead to systems of algebraic equations with a singular ($\det A = 0$) or ill-conditioned ($\det A \approx 0$) coefficient matrix.
Together with (2), the regularized system
$$(A + \alpha I)\, u_\alpha = b$$
will be considered and analyzed, where $I$ is the identity matrix and $\alpha > 0$ is a regularization parameter (Lavrentiev method, [8]). The regularized problem (3) is constructed in such a way that the major difficulty of the original problem (2) is overcome: the term $\alpha I$ shifts all eigenvalues of the coefficient matrix $A$ by $\alpha$, so $A + \alpha I$ is invertible.
Let
$$D_i(A) = |a_{ii}| - \sum_{j \neq i} |a_{ij}|, \quad i = 1, 2, \ldots, N,$$
denote the value of diagonal dominance in each row of the matrix A, and
$$\underline{D}(A) = \min_{1 \le i \le N} D_i(A), \qquad \overline{D}(A) = \max_{1 \le i \le N} D_i(A).$$
Theorem 1. 
For a diagonally dominant matrix $A$ ($\underline{D}(A) \ge 0$) with positive diagonal entries ($a_{ii} > 0$) and non-positive off-diagonal entries ($a_{ij} \le 0$, $i \neq j$), the equality
$$\|A^{-1}\|_\infty = D^{-1}$$
holds if $\underline{D}(A) = \overline{D}(A) = D$.
The proof of this theorem is outlined in [17]. For the coefficient matrix $A + \alpha I$ in (3), $\underline{D}(A + \alpha I) = \overline{D}(A + \alpha I) = \alpha$ holds, and therefore
$$\alpha\, \|(A + \alpha I)^{-1}\|_\infty = 1.$$
It is possible to propose several iterative methods for solving the system (3), but Theorem 1 predicts that some of them will diverge:
Lemma 1. 
The iterations
$$(A + \alpha I)\left( u^{(\nu+1)} - u^{(\nu)} \right) = b - A u^{(\nu)}$$
diverge for all $\alpha > 0$ in the $\|\cdot\|_\infty$ norm.
Proof of Lemma 1. 
The iteration can be rewritten as follows:
$$u^{(\nu+1)} = S u^{(\nu)} + (A + \alpha I)^{-1} b,$$
where
$$S = I - (A + \alpha I)^{-1} A$$
is an $N \times N$ iteration matrix. Since
$$S = I - (A + \alpha I)^{-1} A = (A + \alpha I)^{-1}\left( A + \alpha I - A \right) = \alpha\, (A + \alpha I)^{-1},$$
Equation (4) results in $\|S\|_\infty = 1$. A sufficient condition for the convergence of the iterations is $\|S\|_\infty < 1$; therefore, the iterations (5) diverge ($\alpha > 0$). □
If A is an invertible matrix and α = 0 , then (5) becomes a direct solution algorithm.
Lemma 2. 
The iterations
$$(A + \alpha I)\, u^{(\nu+1)} = b + \alpha\, u^{(\nu)}$$
diverge for all $\alpha > 0$ in the $\|\cdot\|_\infty$ norm.
The main difficulty in using Lavrentiev regularization is the determination of $\alpha$. If (3) could be solved for $\alpha \to 0$, then we would immediately obtain the solution $u_\alpha \to u$. However, it is as difficult to solve (3) for $\alpha \to 0$ as it is to solve (2). On the other hand, increasing the parameter $\alpha$ results in a well-conditioned matrix $A + \alpha I$ but a large solution error $\|u_\alpha - u\|$. In general, it is not possible to obtain a formula for the optimal value of the regularization parameter $\alpha$, but a robust approach to adaptive Lavrentiev (shift) regularization is proposed here.
To illustrate these remarks, consider the 1D NBVP
$$u'' = e^x, \qquad u'(0) = 1, \quad u'(1) = e,$$
with the exact solution $u_e(x) = e^x + C$, where $C$ is an arbitrary constant. The points
$$x^v_i = \frac{i-1}{N} = (i-1)h, \quad i = 1, 2, \ldots, N+1, \qquad x^f_i = \frac{2i-1}{2N} = \frac{(2i-1)h}{2}, \quad i = 1, 2, \ldots, N,$$
define a uniform Cartesian grid, where $h = 1/N$. The finite-difference analogue of $u'' = e^x$ becomes
$$\frac{u^h_{i-1} - 2u^h_i + u^h_{i+1}}{h^2} = \exp(x^f_i).$$
The discrete Poisson equation with eliminated Neumann boundary conditions and Lavrentiev regularization reads:
(1)
$i = 1$:
$$-(1+\alpha)\, u^h_1 + u^h_2 = h^2 \exp(x^f_1) + h\, u'(0);$$
(2)
$1 < i < N$:
$$u^h_{i-1} - (2+\alpha)\, u^h_i + u^h_{i+1} = h^2 \exp(x^f_i);$$
(3)
$i = N$:
$$u^h_{N-1} - (1+\alpha)\, u^h_N = h^2 \exp(x^f_N) - h\, u'(1);$$
and can be written in the matrix form (3). In order to measure the convergence and the accuracy of the numerical solution, the following parameters are introduced (a code sketch follows the list):
(a)
The relative residual norms
$$\theta_\alpha^{(\nu)} = \frac{\left\| b - A u_\alpha^{(\nu)} \right\|}{\| b \|}, \qquad \hat\theta_\alpha^{(\nu)} = \frac{\left\| b - A u_\alpha^{(\nu)} \right\|}{\left\| b - A u_\alpha^{(0)} \right\|},$$
where $u_\alpha^{(\nu)}$ is the approximation to the solution $u_\alpha = (A + \alpha I)^{-1} b$ of (3) after $\nu$ iterations of the basic iterative method or after a direct solve ($\nu = 1$);
(b)
The error of the numerical solution
$$\chi = \max_{1 \le i \le N} \left| u_e(x^f_i) - u^h_i \right|,$$
where the arbitrary constant is fixed by setting $u^h_1 = u_e(x^f_1)$.
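The following sketch assembles the regularized tridiagonal system as reconstructed in (8) (the exact right-hand-side scaling is our reading of the source) and solves it directly, with SciPy's banded solver standing in for the Thomas algorithm:

```python
import numpy as np
from scipy.linalg import solve_banded

N, alpha = 100, 1e-6
h = 1.0 / N
xf = (2 * np.arange(1, N + 1) - 1) * h / 2       # face-centred points x_i^f
d = h**2 * np.exp(xf)                            # right-hand side of (8)
d[0] += h * 1.0                                  # + h*u'(0), with u'(0) = 1
d[-1] -= h * np.e                                # - h*u'(1), with u'(1) = e

ab = np.zeros((3, N))                            # banded storage of A + alpha*I
ab[0, 1:] = 1.0                                  # super-diagonal
ab[1, :] = -(2.0 + alpha)                        # main diagonal
ab[1, 0] = ab[1, -1] = -(1.0 + alpha)            # Neumann rows
ab[2, :-1] = 1.0                                 # sub-diagonal
u = solve_banded((1, 1), ab, d)                  # direct (Thomas-type) solve

ue = np.exp(xf)                                  # exact solution up to a constant
chi = np.max(np.abs((ue - ue[0]) - (u - u[0])))  # error (10), constant anchored
print(chi)
```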
First, we solve the 1D NBVP (6) in the matrix form (3) for different values of the regularization parameter $\alpha$ by a direct (Thomas) algorithm ($\nu = 1$). Figure 1 demonstrates the convergence $u^h_i \to u(x^f_i)$ for sufficiently small $\alpha$ when the coefficient matrix $A + \alpha I$ and the right-hand side vector $b$ are given exactly. In general, the divergence of the iterative/direct algorithm has to be expected for $\alpha \to 0$, because $A$ is non-invertible.
The strong influence of $\alpha$ on the convergence rate of iterative methods is obvious. To illustrate what $\alpha \to 0$ means, (7) should be rewritten as
$$a_i u^h_{i-1} + b_i u^h_i + c_i u^h_{i+1} = d_i, \quad i = 1, 2, \ldots, N.$$
The Gauss–Seidel iteration becomes
$$u^{h\,(\nu+1)}_i = \frac{1}{b_i}\left( d_i - a_i u^{h\,(\nu+1)}_{i-1} - c_i u^{h\,(\nu)}_{i+1} \right), \quad i = 1, 2, \ldots, N,$$
where $\nu$ is the iteration counter. The stopping criterion is defined as
$$\frac{\left\| b - (A + \alpha I)\, u_\alpha^{(\nu)} \right\|}{\| b \|} < \varepsilon.$$
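A compact sketch of this experiment (illustrative tolerance; the pointwise sweep (9) is written equivalently in splitting form with $W = D + L$):

```python
import numpy as np
from scipy.linalg import solve_triangular

def gs_count(M, b, tol=1e-6, max_iter=200_000):
    """Gauss-Seidel sweeps (W = D + L splitting) until ||b - M u|| / ||b|| < tol."""
    u = np.zeros_like(b)
    W = np.tril(M)
    for nu in range(1, max_iter + 1):
        r = b - M @ u
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return nu
        u += solve_triangular(W, r, lower=True)
    return max_iter

N = 50
h = 1.0 / N
xf = (2 * np.arange(1, N + 1) - 1) * h / 2
main = np.full(N, 2.0); main[0] = main[-1] = 1.0
A = np.diag(main) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
b = -(h**2 * np.exp(xf)); b[0] -= h; b[-1] += h * np.e   # system (8) times -1
for alpha in (1e-1, 1e-2, 1e-3):
    print(alpha, gs_count(A + alpha * np.eye(N), b))     # count grows as alpha -> 0
```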
Figure 2 demonstrates the number of Gauss–Seidel iterations required to reach the stopping criterion (11) for different $\alpha$, starting from $u_\alpha^{(0)} = 0$. The following estimate holds:
$$\frac{\left\| u - u_\alpha^{(\nu)} \right\|}{\| u \|} \le \| A + \alpha I \| \cdot \left\| (A + \alpha I)^{-1} \right\|\; \frac{\left\| b - (A + \alpha I)\, u_\alpha^{(\nu)} \right\|}{\| b \|},$$
where $u = (A + \alpha I)^{-1} b$ is the exact solution of (7). In order to assess the numerical efficiency of an iterative solver, it is necessary to take into account the condition number $\mathrm{cond}(A + \alpha I) = \| A + \alpha I \| \cdot \| (A + \alpha I)^{-1} \|$. Because of Theorem 1 and (4), the condition number becomes
$$\mathrm{cond}(A + \alpha I) = \| A + \alpha I \|\; \frac{1}{\alpha}\; \alpha\, \left\| (A + \alpha I)^{-1} \right\| = \frac{1}{\alpha}\, \| A + \alpha I \| \approx \frac{1}{\alpha}\, \| A \|.$$
It is easy to see that
$$\mathrm{cond}(A + \alpha I) \to +\infty \quad \text{for} \quad \alpha \to 0,$$
and the estimate (12) becomes
$$\frac{\left\| u - u_\alpha^{(\nu)} \right\|}{\| u \|} \le \frac{\varepsilon}{\alpha}\, \| A \|.$$
The parameter $\varepsilon$ in the stopping criterion can therefore be roughly estimated as
$$\varepsilon \sim \frac{\alpha}{\| A \|},$$
i.e., the stopping criterion depends on $\alpha$. The regularization significantly increases the number of iterations required for sufficiently small $\alpha$; i.e., the convergence rate of iterative algorithms deteriorates as $\alpha \to 0$ (Figure 2).
For the NBVP (1), discretized on a uniform grid with mesh size $h$ and regularized by the parameter $\alpha$ as in (3), the smallest eigenvalue of $A + \alpha I$ satisfies $\lambda_{\min} + \alpha = \alpha$ and, for isotropic problems, it corresponds to eigenfunctions that are geometrically very smooth in all spatial directions. On the other hand, the largest eigenvalue satisfies $\lambda_{\max} + \alpha$ and corresponds to geometrically non-smooth eigenvectors. Here, $\lambda_{\min} = 0$ and $\lambda_{\max}$ are the smallest and largest eigenvalues of the matrix $A$ in (2), respectively.
In multigrid methods, the largest eigenvalue of the smoothing operator defines the reduction of the high-frequency errors in each smoothing step. Since $\lambda_{\max} \gg \alpha$, the Lavrentiev regularization does not affect the smoothing, as shown in Figure 3. To highlight the smoothing effect of the Gauss–Seidel iterations, the initial guess is taken as the oscillating function
$$v^{(0)}_i = \frac{1 + (-1)^i}{2} = \begin{cases} 0, & i \text{ odd}, \\ 1, & i \text{ even}. \end{cases}$$
As a result, $h$-independent convergence of the multigrid algorithm is expected, since $\lambda_{\max} \gg \alpha$ (Figure 4).
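The smoothing claim is easy to check in a few lines (a sketch; $b = 0$, so the Gauss–Seidel iterate is the error itself, and the first difference of the iterate measures its high-frequency content):

```python
import numpy as np
from scipy.linalg import solve_triangular

N, alpha = 100, 1e-5
main = np.full(N, 2.0); main[0] = main[-1] = 1.0
M = np.diag(main) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1) \
    + alpha * np.eye(N)
W = np.tril(M)                                   # Gauss-Seidel splitting W = D + L
i = np.arange(1, N + 1)
v = (1 + (-1.0) ** i) / 2                        # oscillating initial error (14)
for sweep in range(1, 4):
    v += solve_triangular(W, -(M @ v), lower=True)
    print(sweep, np.max(np.abs(np.diff(v))))     # oscillations decay rapidly
```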
The development of robust multigrid solvers for singular and ill-conditioned discrete boundary value problems is a new challenge for scientific computing. Equation (3) can be rewritten as
$$b - A u_\alpha = \alpha\, u_\alpha.$$
Assuming that a good estimate $\tilde u_\alpha$ of $u_\alpha$ and a stopping criterion
$$\frac{\| b - A u_\alpha \|}{\| b \|} \le \epsilon$$
are given in advance, the regularization parameter becomes
$$\alpha = \epsilon\, \frac{\| b \|}{\| \tilde u_\alpha \|}.$$
After that, the regularized system (3) can be solved by well-known multigrid methods. The main difficulty of this approach is a sufficiently accurate a priori estimate of the solution norm $\| \tilde u_\alpha \|$. To overcome this difficulty, the idea is to exploit the proximity of the solutions of a discrete BVP obtained at adjacent grid levels of the Robust Multigrid Technique (RMT) [12].
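In code, the rule (16) is a one-liner; the only non-trivial input is the estimate of the solution norm, which the RMT takes from the coarser level (a sketch):

```python
import numpy as np

def adaptive_alpha(b, u_coarse, eps):
    """Rule (16): alpha = eps * ||b|| / ||u_tilde||, with the coarse-level
    solution norm standing in for the unknown ||u_alpha||."""
    return eps * np.linalg.norm(b) / np.linalg.norm(u_coarse)
```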

3. The Two-Grid Algorithm

The simplest multigrid algorithm uses two grids: a fine grid and a coarse grid. The iteration formula of the multigrid methods reads
$$\hat u_0^{(q+1)} = c_0^{(\nu)} + \hat u_0^{(q)},$$
where $c_0^{(\nu)}$ is the coarse-grid correction after $\nu$ smoothing iterations on the fine grid, $q$ is the multigrid iteration counter, and $\hat u_0^{(q)}$ and $\hat u_0^{(q+1)}$ are the approximations to the solution before and after the $q$th multigrid iteration, respectively. The subscript 0 denotes affiliation with the fine grid. Multiplication by the matrix $A_0 + \alpha_0 I$ and subtraction from the right-hand side vector $b_0$ result in
$$b_0 - (A_0 + \alpha_0 I)\,\hat u_0^{(q+1)} = -(A_0 + \alpha_0 I)\, c_0^{(\nu)} + b_0 - (A_0 + \alpha_0 I)\,\hat u_0^{(q)}.$$
Assuming that the correction is given by
$$c_0^{(\nu)} = C_0 \left( b_0 - (A_0 + \alpha_0 I)\,\hat u_0^{(q)} \right),$$
where $C_0$ is some matrix, one multigrid step takes the form
$$b_0 - (A_0 + \alpha_0 I)\,\hat u_0^{(q+1)} = \left( I - (A_0 + \alpha_0 I)\, C_0 \right) \left( b_0 - (A_0 + \alpha_0 I)\,\hat u_0^{(q)} \right).$$
After $q$ multigrid steps, the residual equation becomes
$$b_0 - (A_0 + \alpha_0 I)\,\hat u_0^{(q)} = \left( I - (A_0 + \alpha_0 I)\, C_0 \right)^q \left( b_0 - (A_0 + \alpha_0 I)\,\hat u_0^{(0)} \right).$$
A straightforward computation yields an estimate of the form
$$0 < \rho(q) \le \left\| I - (A_0 + \alpha_0 I)\, C_0 \right\|,$$
where
$$\rho(q) = \left( \frac{\left\| b_0 - (A_0 + \alpha_0 I)\,\hat u_0^{(q)} \right\|}{\left\| b_0 - (A_0 + \alpha_0 I)\,\hat u_0^{(0)} \right\|} \right)^{1/q}$$
is the average reduction factor of the residual after $q$ multigrid iterations. It remains to determine the matrix $C_0$ and to estimate the norm $\left\| I - (A_0 + \alpha_0 I)\, C_0 \right\|$.
For completely consistent smoothers, the resulting equation reads
$$c_0 - c_0^{(\nu)} = S_{\alpha_0}^{\nu} \left( c_0 - c_0^{(0)} \right),$$
where
$$c_0 = (A_0 + \alpha_0 I)^{-1} \left( b_0 - (A_0 + \alpha_0 I)\,\hat u_0^{(q)} \right)$$
is the exact solution of the defect equation, $\nu$ is the smoothing iteration counter and $S_{\alpha_0}$ is the smoothing iteration matrix. The starting guess $c_0^{(0)}$ for smoothing on the fine grid is the exact coarse-grid solution
$$c_1 = (A_1 + \alpha_1 I)^{-1} R_0^1 \left( b_0 - (A_0 + \alpha_0 I)\,\hat u_0^{(q)} \right)$$
prolongated to the fine grid:
$$c_0^{(0)} = P_1^0\, c_1,$$
where $R_0^1$ and $P_1^0$ are the restriction and prolongation operators, respectively. The subscript 1 denotes affiliation with the coarse grid.
Substitution of (19) into (18) yields the matrix $C_0$:
$$C_0 = (A_0 + \alpha_0 I)^{-1} - S_{\alpha_0}^{\nu} \left( (A_0 + \alpha_0 I)^{-1} - P_1^0 (A_1 + \alpha_1 I)^{-1} R_0^1 \right)$$
and
$$I - (A_0 + \alpha_0 I)\, C_0 = (A_0 + \alpha_0 I)\, S_{\alpha_0}^{\nu} \left( (A_0 + \alpha_0 I)^{-1} - P_1^0 (A_1 + \alpha_1 I)^{-1} R_0^1 \right).$$
The classical multigrid convergence analysis is based on two propositions [18]:
(1)
Smoothing property: there exists a monotonically decreasing function $\eta(\nu): \mathbb{R}^+ \to \mathbb{R}^+$ such that $\eta(\nu) \to 0$ as $\nu \to \infty$ and
$$\left\| (A_0 + \alpha_0 I)\, S_{\alpha_0}^{\nu} \right\| \le \eta(\nu)\, \left\| A_0 + \alpha_0 I \right\|;$$
(2)
Approximation property: there exists a constant $C_A > 0$, independent of $l = 0, 1$, such that
$$\left\| (A_0 + \alpha_0 I)^{-1} - P_1^0 (A_1 + \alpha_1 I)^{-1} R_0^1 \right\| \le C_A\, \left\| A_0 + \alpha_0 I \right\|^{-1}.$$
Here, $\alpha > 0$ for a singular matrix $A$ and $\alpha \ge 0$ for an invertible matrix $A$. The smoothing and approximation properties should be proved for each particular case.
If the smoothing and approximation properties hold, then the norm of the matrix (20) can be estimated as
$$\left\| I - (A_0 + \alpha_0 I)\, C_0 \right\| \le \left\| (A_0 + \alpha_0 I)\, S_{\alpha_0}^{\nu} \right\| \cdot \left\| (A_0 + \alpha_0 I)^{-1} - P_1^0 (A_1 + \alpha_1 I)^{-1} R_0^1 \right\| \le C_A\, \eta(\nu).$$
The $h$-independent convergence of the multigrid iterations,
$$0 < \rho(q) \le C_A\, \eta(\nu) < 1,$$
is expected after a sufficient number of smoothing iterations ν on the finest grid.
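The two-grid cycle analyzed above can be sketched as follows (piecewise-constant prolongation over triples and averaging restriction are generic stand-ins, not the RMT operators; with $R = P^T/3$, the coarse matrix below coincides with the Galerkin matrix $R(A_0 + \alpha_0 I)P$, so the coarse-grid correction is an energy-orthogonal projection):

```python
import numpy as np
from scipy.linalg import solve_triangular

def smooth(M, b, u, sweeps):
    """Gauss-Seidel smoothing in splitting form, W = D + L."""
    W = np.tril(M)
    for _ in range(sweeps):
        u = u + solve_triangular(W, b - M @ u, lower=True)
    return u

def two_grid(A0, b0, alpha0, alpha1, R, P, u0, sweeps=2):
    M0 = A0 + alpha0 * np.eye(A0.shape[0])
    d1 = R @ (b0 - M0 @ u0)                      # restrict the defect
    A1 = R @ A0 @ P                              # coarse-grid matrix
    c1 = np.linalg.solve(A1 + alpha1 * np.eye(A1.shape[0]), d1)
    return smooth(M0, b0, u0 + P @ c1, sweeps)   # correct, then post-smooth

n1 = 20; n0 = 3 * n1                             # coarsening ratio 3, as in the RMT
main = np.full(n0, 2.0); main[0] = main[-1] = 1.0
A0 = np.diag(main) - np.diag(np.ones(n0 - 1), 1) - np.diag(np.ones(n0 - 1), -1)
P = np.kron(np.eye(n1), np.ones((3, 1)))         # piecewise-constant prolongation
R = P.T / 3.0                                    # averaging restriction
b0 = np.random.default_rng(1).standard_normal(n0); b0 -= b0.mean()
u = np.zeros(n0)
for q in range(8):
    u = two_grid(A0, b0, 1e-6, 1e-6, R, P, u)
    print(q, np.linalg.norm(b0 - (A0 + 1e-6 * np.eye(n0)) @ u))
```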
The numerical solution of ill-conditioned industrial problems requires an analysis of the computational errors, expressed as perturbations of the coefficient matrix and of the vectors of the resulting linear system. The perturbed defect equations on the fine and coarse grids become
$$(A_0 + \alpha_0 I + \delta A_0 + \delta\alpha_0 I)(c_0 + \delta c_0) = b_0 + \delta b_0 - (A_0 + \alpha_0 I + \delta A_0 + \delta\alpha_0 I)\,\hat u_0^{(q)},$$
$$(A_1 + \alpha_1 I + \delta A_1 + \delta\alpha_1 I)(c_1 + \delta c_1) = R_0^1 \left( b_0 + \delta b_0 - (A_0 + \alpha_0 I + \delta A_0 + \delta\alpha_0 I)\,\hat u_0^{(q)} \right),$$
where the perturbations $\delta c_0$ and $\delta c_1$ of the corrections are caused by the perturbations $\delta \tilde A_0 = \delta A_0 + \delta\alpha_0 I$ and $\delta \tilde A_1 = \delta A_1 + \delta\alpha_1 I$ of the coefficient matrices, as well as by the perturbations $\delta b_0$ and $\delta b_1$ of the right-hand side vectors. These equations result in the estimate
$$\begin{aligned} \left\| \delta c_0 - P_1^0\, \delta c_1 \right\| \le\; & \left\| (A_0 + \alpha_0 I)^{-1} - P_1^0 (A_1 + \alpha_1 I)^{-1} R_0^1 \right\| \cdot \left\| \delta b_0 - (\delta A_0 + \delta\alpha_0 I)\,\hat u_0^{(q)} \right\| \\ & + \left\| (A_0 + \alpha_0 I)^{-1} \right\| \cdot \left\| \delta \tilde A_0 \right\| \cdot \left\| c_0 \right\| + \left\| (A_0 + \alpha_0 I)^{-1} \right\| \cdot \left\| \delta \tilde A_0 \right\| \cdot \left\| \delta c_0 \right\| \\ & + \left\| P_1^0 (A_1 + \alpha_1 I)^{-1} \right\| \cdot \left\| \delta \tilde A_1 \right\| \cdot \left\| c_1 \right\| + \left\| P_1^0 (A_1 + \alpha_1 I)^{-1} \right\| \cdot \left\| \delta \tilde A_1 \right\| \cdot \left\| \delta c_1 \right\|. \end{aligned}$$
An elementary transformation of the equations leads to the error estimate
$$\frac{\left\| \delta c_0 - P_1^0\, \delta c_1 \right\|}{\left\| c_0 \right\|} \le \left\| (A_0 + \alpha_0 I)^{-1} - P_1^0 (A_1 + \alpha_1 I)^{-1} R_0^1 \right\| \cdot \frac{\left\| \delta b_0 - (\delta A_0 + \delta\alpha_0 I)\,\hat u_0^{(q)} \right\|}{\left\| b_0 \right\|}\, \left\| A_0 + \alpha_0 I \right\| + 2\, \mathrm{cond}(A_0 + \alpha_0 I)\, \frac{\left\| \delta A_0 + \delta\alpha_0 I \right\|}{\left\| A_0 + \alpha_0 I \right\|},$$
where $\mathrm{cond}(A_0 + \alpha_0 I)$ is the condition number of the coefficient matrix $A_0 + \alpha_0 I$. First, let all perturbations of the coefficient matrix be zero ($\delta A_0 = 0$ and $\delta\alpha_0 = 0$); then
$$\frac{\left\| \delta c_0 - P_1^0\, \delta c_1 \right\|}{\left\| c_0 \right\|} \le \left\| (A_0 + \alpha_0 I)^{-1} - P_1^0 (A_1 + \alpha_1 I)^{-1} R_0^1 \right\| \frac{\left\| \delta b_0 \right\|}{\left\| b_0 \right\|}\, \left\| A_0 + \alpha_0 I \right\|,$$
and, if the approximation property holds,
$$\frac{\left\| \delta c_0 - P_1^0\, \delta c_1 \right\|}{\left\| c_0 \right\|} \le C_A\, \frac{\left\| \delta b_0 \right\|}{\left\| b_0 \right\|}.$$
In this case, fast convergence of the multigrid solver is expected. On the other hand, if the perturbation of the right-hand side vector is zero ($\delta b_0 = 0$) and the approximation property holds, then the estimate can be rewritten as
$$\frac{\left\| \delta c_0 - P_1^0\, \delta c_1 \right\|}{\left\| c_0 \right\|} \le \left( C_A\, \frac{\left\| \hat u_0^{(q)} \right\|}{\left\| b_0 \right\|} + \frac{2}{\alpha_0} \right) \left\| \delta A_0 + \delta\alpha_0 I \right\|.$$
This rough estimate predicts that for each perturbed problem there is an optimal value of the regularization parameter $\alpha_0$ such that $\left\| \delta A_0 + \delta\alpha_0 I \right\| / \alpha_0 \to \min$.
The Lavrentiev method for the regularization of linear ill-posed problems with noisy data has been studied by many researchers. A set of parameter choice rules that give good results even when the noise level is under- or overestimated many times has been proposed [19,20].

4. Robust Multigrid Technique

In the late 1980s, multigrid methods based on a multiple coarse grid correction strategy were proposed and developed. P. O. Frederickson and O. A. McBryan studied the efficiency of the Parallel Superconvergent Multigrid Method (PSMG); the basic idea behind the PSMG is that the even and odd points of the fine grid form two coarse grids [21,22]. The Robust Multigrid Technique (RMT) has been developed for advanced black-box software for the numerical simulation of physical and chemical processes by solving the “robustness–efficiency–parallelism” problem [12]. The RMT has the least number of problem-dependent components, full parallelism and close-to-optimal algorithmic complexity in a low-memory implementation. Figure 5 represents the triple coarsening used in the RMT: three coarse grids $G^1_1$, $G^1_2$ and $G^1_3$ are generated by agglomeration of the finite volumes of the fine grid $G^0_1$. The fine and coarse grids in the RMT have the following properties:
1.
All coarse grids $G^1_1$, $G^1_2$ and $G^1_3$ have no common grid points:
$$G^1_n \cap G^1_m = \varnothing, \quad n \neq m;$$
2.
The finest grid $G^0_1$ can be represented as a union of the coarse grids $G^1_1$, $G^1_2$ and $G^1_3$:
$$G^0_1 = \bigcup_{k=1}^{3} G^1_k;$$
3.
All grids are geometrically similar, but the mesh size of the coarse grids $G^1_1$, $G^1_2$ and $G^1_3$ is three times larger than the mesh size of the finest grid $G^0_1$;
4.
Independently of the grid function assignment, each finite volume of the coarse grids $G^1_1$, $G^1_2$ and $G^1_3$ is a union of three finite volumes of the fine grid $G^0_1$.
Figure 5. The finest grid ($G^0_1$) and the three coarse grids ($G^1_1$, $G^1_2$ and $G^1_3$).
The classical multigrid methods use two types of coarsening: vertex-centred coarsening consists of deleting every other vertex in each direction, and cell-centred coarsening consists of taking unions of fine-grid cells to obtain coarse-grid cells. The triple coarsening uses both vertex deletion and finite-volume agglomeration. The basic idea of triple coarsening is to agglomerate $3^d - 1$ ($d = 1, 2, 3$) neighbouring finite volumes with the selected one. As a result, each coarse volume consists of $3^d$ fine volumes, independently of the grid function assignment to the grid points (Figure 6). In combination with the additive property of the definite integral, the triple coarsening results in an accurate finite-volume discretization of BVPs on the multigrid structures. All coefficients of the PDEs and the right-hand side function are discretized only on the finest grid (the so-called composite formula). Theoretical analysis shows $l$-independent accuracy of the definite integral evaluation on the multigrid structure [12]. This is a very important property for industrial problems, where some coefficients of the PDEs may have boundary layers or be highly oscillating functions. Such features make it difficult to evaluate these coefficients of the discrete BVPs on the coarse grids of classical multigrid methods.
The RMT uses the most powerful coarse grid correction strategy, in which as many modes as possible are approximated on the coarse grids in order to make the task of the smoother the least demanding.
The finest grid $G^0_1$ forms the zero grid level, and the coarse grids $G^1_1$, $G^1_2$ and $G^1_3$ form the first grid level. The triple coarsening is carried out recursively: each grid $G^l_i$, $i = 1, \ldots, 3^l$, of level $l$ is considered to be the finest grid for three coarse grids of level $l+1$. The nine coarser grids obtained from the three coarse grids of the first level form the second level. The grid hierarchy $G^l_m$, $m = 1, \ldots, 3^l$, $l = 0, \ldots, L_3^+$, is called a multigrid structure $\mathcal{MS}(G^0_1)$ generated by the grid $G^0_1$:
$$\mathcal{MS}(G^0_1) = \left\{ G^l_m \;\middle|\; m = 1, 2, \ldots, 3^l;\; l = 0, 1, \ldots, L_3^+ \right\},$$
where $L_3^+$ is the number of the coarsest level. Each grid $G^l_k \in \mathcal{MS}(G^0_1)$ generates its own multigrid structure $\mathcal{MS}(G^l_k)$.
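A 1D multigrid structure is easy to enumerate (a sketch; grids are index sets, and taking every third index three ways realizes properties 1 and 2 above):

```python
def multigrid_structure(n_points, n_levels):
    """Level l holds 3**l grids; the three children of a grid partition it."""
    levels = [[list(range(n_points))]]           # level 0: the finest grid
    for _ in range(n_levels):
        coarse = []
        for grid in levels[-1]:
            coarse += [grid[k::3] for k in range(3)]   # triple coarsening
        levels.append(coarse)
    return levels

ms = multigrid_structure(27, 3)
print([len(level) for level in ms])              # -> [1, 3, 9, 27]
```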
Let $u$, $u^h_l$ and $u^h_{l+1}$ denote the exact solution of the system $Au = b$ (the discrete BVP) and the solutions of the systems $A_l u_l = b_l$ and $A_{l+1} u_{l+1} = b_{l+1}$ obtained on levels $l$ and $l+1$, respectively. For a second-order discretization, the estimate
$$\| u - u^h_l \| \le C\, (3^l h)^2$$
holds, where $h$ is the mesh size of the finest grid and $l = 0, 1, \ldots, L_3^+$. The triangle inequality allows us to estimate the difference between the solutions $u^h_l$ and $u^h_{l+1}$ as follows:
$$\| u^h_l - u^h_{l+1} \| \le 10\, \| u - u^h_l \| = O(h^2).$$
This means that the solution $u^h_{l+1}$ obtained on the coarse level $l+1$ approximates well the solution $u^h_l$ obtained on the fine level $l$. The discussion of the convergence, stability, error analysis and parallelization of the RMT is given in numerous papers and summarized in [12].
For clarity, a 1D problem on the four-level multigrid structure ($L_3^+ = 3$) shown in Figure 7 is considered as an illustration of the algorithm. The multigrid iteration of the RMT starts on the coarsest level $L_3^+$ (Figure 8), where a direct method should be used for solving the discrete NBVP (2):
$$A_{L_3^+}\, u^h_{L_3^+} = b_{L_3^+}.$$
The numerical solution prolongated to the finer level $L_3^+ - 1$ becomes the starting guess
$$u^{h(0)}_{L_3^+-1} = P_{L_3^+}^{L_3^+-1}\, u^h_{L_3^+} = P_{L_3^+}^{L_3^+-1}\, A_{L_3^+}^{-1}\, b_{L_3^+},$$
where $P_{L_3^+}^{L_3^+-1}$ is the problem-independent prolongation operator of the RMT [12]. Figures 9 and 10 illustrate this problem-independent prolongation.
The resulting discrete problem that has to be solved on all grids of level $L_3^+ - 1$ is
$$\left( A_{L_3^+-1} + \alpha_{L_3^+-1} I \right) u^h_{L_3^+-1} = b_{L_3^+-1},$$
where
$$\alpha_{L_3^+-1} = \epsilon\, \frac{\left\| b_{L_3^+-1} \right\|}{\left\| u^h_{L_3^+} \right\|}.$$
After a few smoothing steps
$$W_{L_3^+-1} \left( u^{(\nu)}_{L_3^+-1} - u^{(\nu-1)}_{L_3^+-1} \right) = b_{L_3^+-1} - \left( A_{L_3^+-1} + \alpha_{L_3^+-1} I \right) u^{(\nu-1)}_{L_3^+-1},$$
the error becomes smooth, where $\nu$ is the smoothing iteration counter and $W_{L_3^+-1}$ is a splitting matrix. In multigrid methods, it is assumed that the smoother is a convergent iterative method, i.e.,
$$\left\| I - W_l^{-1} (A + \alpha I) \right\| < 1 \quad \text{for} \quad \alpha > 0.$$
The obtained function $u^{(\nu)}_{L_3^+-1}$ is the result of the first multigrid iteration, i.e.,
$$u^{[1]}_{L_3^+-1} = u^{(\nu)}_{L_3^+-1}.$$
After that, the stopping criterion
$$\frac{\left\| b_{L_3^+-1} - \left( A_{L_3^+-1} + \alpha_{L_3^+-1} I \right) u^{[1]}_{L_3^+-1} \right\|}{\left\| b_{L_3^+-1} \right\|} < \varepsilon$$
is checked; if necessary, the next multigrid iteration is performed or the computation is transferred to the finer grid. The substitution of $u^{[1]}_{L_3^+-1}$ into (23) gives
$$\left( A_{L_3^+-1} + \alpha_{L_3^+-1} I \right) u^{[1]}_{L_3^+-1} = b_{L_3^+-1} + r_{L_3^+-1},$$
where $r_{L_3^+-1}$ is some residue. Adding to the approximation $u^{[1]}_{L_3^+-1}$ the correction $c_{L_3^+-1}$, required to eliminate the residue $r_{L_3^+-1}$ and to reach the solution $u^h_{L_3^+-1}$ of (23), results in the defect equation
$$\left( A_{L_3^+-1} + \alpha_{L_3^+-1} I \right) c_{L_3^+-1} = b_{L_3^+-1} - \left( A_{L_3^+-1} + \alpha_{L_3^+-1} I \right) u^{[1]}_{L_3^+-1}.$$
On the coarsest level, this defect equation becomes
$$A_{L_3^+}\, c_{L_3^+} = R_{L_3^+-1}^{L_3^+} \left( b_{L_3^+-1} - \left( A_{L_3^+-1} + \alpha_{L_3^+-1} I \right) u^{[1]}_{L_3^+-1} \right),$$
where $R_{L_3^+-1}^{L_3^+}$ is the problem-independent restriction operator of the RMT (Figure 11). After the solution of this defect equation, the correction is prolongated to the finer level:
$$c_{L_3^+-1} = P_{L_3^+}^{L_3^+-1}\, A_{L_3^+}^{-1}\, R_{L_3^+-1}^{L_3^+} \left( b_{L_3^+-1} - \left( A_{L_3^+-1} + \alpha_{L_3^+-1} I \right) u^{[1]}_{L_3^+-1} \right).$$
After a few smoothing steps (24) starting from the initial guess
$$u^{(0)}_{L_3^+-1} = u^{[1]}_{L_3^+-1} + c_{L_3^+-1},$$
the error becomes smooth and small, and the new approximation to the solution after the second multigrid iteration is
$$u^{[2]}_{L_3^+-1} = u^{(\nu)}_{L_3^+-1}.$$
The multigrid schedule of the RMT is a V-cycle without pre-smoothing (a sawtooth cycle) with a dynamic finest grid, as shown in Figure 12 [12].
Since each coarse level consists of independent (no common points) and almost identical grids (Figure 5), all unknown problem-dependent parameters of the algorithm (for example, the number of smoothing iterations, under- and over-relaxation or regularization parameters, and others) can be found on one or several grids so as to achieve the fastest convergence rate or the highest possible accuracy, starting the computations from the same initial guess [12]. The obtained pseudo-optimal value of each problem-dependent component is taken to be the same for all grids of the level. In general, each coarse level $l$ consists of $3^{dl}$ grids, $l = 1, 2, \ldots, L_3^+$, $d = 2, 3$, so the additional effort for optimizing the RMT on one or several grids is negligible. This approach, based on numerical experiments, makes it possible to optimize the RMT effectively when solving real-life problems.
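The parameter search itself is trivial to express (a sketch with a dummy cost curve; in practice, the cost function would run the smoother on one grid of the level and return its iteration count):

```python
import numpy as np

def pseudo_optimal(candidates, cost_on_one_grid):
    """Try each candidate on ONE grid of the level; reuse the best for all."""
    costs = [cost_on_one_grid(beta) for beta in candidates]
    return candidates[int(np.argmin(costs))]

beta_opt = pseudo_optimal(np.linspace(0.8, 1.8, 11),
                          lambda beta: 50 + 40 * (beta - 1.4) ** 2)  # dummy curve
print(beta_opt)                                  # -> 1.4 on this dummy curve
```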
The NBVP (1) with the exact solution $u(x,y,z) = e^{x+y+z} + C$ is used as a benchmark problem. Substitution of the exact solution into (1) gives the right-hand side function $f(x,y,z) = 3e^{x+y+z}$ and the corresponding boundary conditions on the faces of the unit cube $\overline\Omega = [0,1]^3$. Figure 13 demonstrates the convergence of the RMT during five multigrid iterations ($q = 5$), starting from the zero iterant, on $101 \times 101 \times 101$ ($h = 1/100$, $L_3^+ = 3$) and $251 \times 251 \times 251$ ($h = 1/250$, $L_3^+ = 4$) grids with the point Gauss–Seidel smoother and
(1)
The accuracy criterion (15):
$$\frac{\| b_l - A_l u_l \|}{\| b_l \|} \le \epsilon = 10^{-5}, \quad l = 0, 1, \ldots, L_3^+ - 1;$$
(2)
The regularization parameter (16) on the finer levels:
$$\alpha_l = \epsilon\, \frac{\| b_l \|}{\| u_{l+1} \|}, \quad l = 0, 1, \ldots, L_3^+ - 1;$$
(3)
The level stopping criterion (11):
$$\frac{\| b_l - (A_l + \alpha_l I)\, u_l \|}{\| b_l \|} < 10^{-5}, \quad l = 1, 2, \ldots, L_3^+ - 1.$$
The error of the numerical solution (10) is computed on the finest grid after each multigrid iteration.
The multigrid cycle shown in Figure 12 makes it difficult to compare the convergence rate of the RMT when the multigrid structures have different numbers of levels. However, it is easy to see that the number of multigrid iterations $q$ needed for solving (3) is approximately the same; i.e., $h$-independent convergence has been achieved.
Figure 13. Convergence of the RMT after q multigrid iterations.

5. Multigrid Preconditioning

In the process of solving applied problems, it is often necessary to change the order or type of approximation of the (initial-)boundary value problem and/or the computational grid in order to adapt to the features of the solution. For clarity, let us consider the linear boundary value problem
$$L_\Omega\, u(x) = f_\Omega(x), \quad x \in \Omega,$$
$$L_{\partial\Omega}\, u(x) = f_{\partial\Omega}(x), \quad x \in \partial\Omega.$$
Here, $x = (x_1, \ldots, x_d)^T$ and $\Omega \subset \mathbb{R}^d$ is a given open bounded domain with boundary $\partial\Omega$; $L_\Omega$ is a linear (elliptic) differential operator on $\Omega$ and $L_{\partial\Omega}$ represents one or several linear boundary operators on $\partial\Omega$; $f_\Omega$ denotes a given function on $\Omega$ and $f_{\partial\Omega}$ represents one or several functions on $\partial\Omega$; $u(x)$ is the solution; $d = 2, 3$.
The generation of a computational grid $\Omega^h$ in the domain $\overline\Omega = \Omega \cup \partial\Omega$, some approximation (FDM, FVM, FEM, DG or other) of the BVP (25), some ordering of the unknowns and the elimination of the boundary conditions allow us to rewrite the discrete analogue of (25) in the matrix form
$$\hat A\, \hat u = \hat b.$$
Within an arbitrary iterative process for the solution of the given discrete problem, the solution is
$$\hat u = \hat A^{-1}\, \hat b.$$
However, this approach is not always convenient: if it is necessary to increase the order of approximation of the BVP (25) on $\Omega^h$, then a significant part of the computer code has to be rewritten. In general, it is more convenient not to solve the original system (26) directly, but to use the defect correction approach and the auxiliary space method [18,23]. Let (26) be the matrix form of a high-order approximation of the BVP (25) on an unstructured grid $\Omega^h$. A body-unfitted Cartesian grid $\tilde\Omega^h$ (the auxiliary space) can be generated in the domain $\Omega$, and the discrete analogue of the BVP (25) on $\tilde\Omega^h$ can be written in the matrix form
$$A u = b.$$
The unstructured body-fitted grid $\Omega^h$ makes it possible to obtain a more accurate approximation of the BVP (25); on the other hand, the auxiliary structured grid $\tilde\Omega^h$ allows us to construct a highly efficient (geometric multigrid) algorithm for solving the systems of (non)linear algebraic equations. The basic idea is to exploit the advantages of both $\Omega^h$ and $\tilde\Omega^h$ with small changes in the computer code.
The system of linear equations should be rewritten as
$$A c = \hat R_{\Omega^h}^{\tilde\Omega^h} \left( \hat b - \hat A\, \hat u^{(q)} \right),$$
where $c = u - \hat u^{(q)}$ is a correction, $\hat b - \hat A \hat u^{(q)}$ is the residual computed on the unstructured grid $\Omega^h$, $\hat R_{\Omega^h}^{\tilde\Omega^h}$ is a restriction operator transferring the residual from the unstructured grid $\Omega^h$ to the structured grid $\tilde\Omega^h$ (the auxiliary space), and $q$ is the intergrid iteration counter.
The two-grid algorithm can be represented as:
(1)
Computation of the residual $\hat b - \hat A \hat u^{(q)}$ on the unstructured grid $\Omega^h$;
(2)
Restriction of the residual $\hat b - \hat A \hat u^{(q)}$ from the unstructured grid $\Omega^h$ to the structured grid $\tilde\Omega^h$ (auxiliary space), i.e., computation of the right-hand side $\hat R_{\Omega^h}^{\tilde\Omega^h} \left( \hat b - \hat A \hat u^{(q)} \right)$ of (28);
(3)
Solution of (28) by a geometric multigrid method on the structured grid $\tilde\Omega^h$ (auxiliary space):
$$c = A^{-1}\, \hat R_{\Omega^h}^{\tilde\Omega^h} \left( \hat b - \hat A\, \hat u^{(q)} \right);$$
(4)
Prolongation of the correction $c$ from the structured grid $\tilde\Omega^h$ (auxiliary space) to the unstructured grid $\Omega^h$, i.e., computation of $\hat P_{\tilde\Omega^h}^{\Omega^h} c$;
(5)
Computation of the starting guess for the smoothing iterations:
$$\hat u^{(0)} = \hat P_{\tilde\Omega^h}^{\Omega^h}\, c + \hat u^{(q)}, \qquad c := 0;$$
(6)
Smoothing on the unstructured grid $\Omega^h$:
$$\hat W \left( \hat u^{(\nu+1)} - \hat u^{(\nu)} \right) = \hat b - \hat A\, \hat u^{(\nu)},$$
where $\hat W$ is a splitting matrix and $\nu$ is the smoothing iteration counter;
(7)
Computation of a new approximation to the solution (27):
$$\hat u^{(q+1)} = \hat u^{(\nu+1)}.$$
The above two-grid algorithm can be represented in the matrix form
$$\hat b - \hat A\, \hat u^{(q+1)} = \hat A\, S^{\nu} \left( \hat A^{-1} - \hat P_{\tilde\Omega^h}^{\Omega^h}\, A^{-1}\, \hat R_{\Omega^h}^{\tilde\Omega^h} \right) \left( \hat b - \hat A\, \hat u^{(q)} \right),$$
where $S = I - \hat W^{-1} \hat A$ is the smoothing iteration matrix. If the smoothing and approximation properties hold, then $h$-independent convergence of the intergrid iteration (21) is expected.
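Steps (1)-(7) map directly onto code (a sketch with dense stand-ins: in a real implementation, the coarse solve `np.linalg.solve(A, ...)` would be replaced by a geometric multigrid cycle on the structured grid, and `R_hat`/`P_hat` by the actual intergrid operators):

```python
import numpy as np
from scipy.linalg import solve_triangular

def intergrid_iteration(A_hat, b_hat, A, R_hat, P_hat, u_hat, sweeps=2):
    r = b_hat - A_hat @ u_hat          # (1) residual on the unstructured grid
    c = np.linalg.solve(A, R_hat @ r)  # (2)-(3) restrict; solve on the auxiliary grid
    u = u_hat + P_hat @ c              # (4)-(5) prolongate; starting guess
    W = np.tril(A_hat)                 # (6) Gauss-Seidel splitting matrix
    for _ in range(sweeps):
        u = u + solve_triangular(W, b_hat - A_hat @ u, lower=True)
    return u                           # (7) new approximation
```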
Another issue is the definition of the problem-dependent components of the algorithm (under- and over-relaxation parameters, the regularization parameter, the number of smoothing iterations, etc.). All problem-dependent components can be determined experimentally, starting the computations from the same initial guess. In practice, such an approach is justified when solving a series of systems with different right-hand sides: $A x = b_k$, $k = 1, 2, \ldots, K \gg 1$. Since all grids of each level of a multigrid structure are similar to each other, the problem-dependent components of the algorithm can be determined on one or several grids of each level. The obtained pseudo-optimal values are taken to be the same for all grids of the level. Theoretical analysis shows that this approach increases the total execution time by several percent compared to the case when all problem-dependent components are known in advance. Figure 14 represents a generalization of the multigrid cycle shown in Figure 12 to the two-grid (structured–unstructured) algorithm.
Another problem is the generation of adaptive grids. In practice, the computational grids are often (locally or globally) refined, and the effect of grid refinement on the numerical solution is investigated for optimal adaptation of the grid to the numerical solution. MLAT is an example of local refinement of structured grids [18]. The iterative process of structured grid refinement can be formalized on the multigrid structure. Thus, after the first multigrid iteration on the structured grid $\tilde\Omega^h$, an approximation to the solution on the unstructured grid $\Omega^h$ and a priori information about the unstructured grid $\Omega^h$ (the density of vertices in the region $\Omega$) are obtained. The procedure for rebuilding the auxiliary grid $\tilde\Omega^h$ into the adaptive grid $\Omega^h$, which simplifies the intergrid operators $\hat P_{\tilde\Omega^h}^{\Omega^h}$ and $\hat R_{\Omega^h}^{\tilde\Omega^h}$, is presented in [12]. The FAS approach allows the multigrid cycle shown in Figure 14 to be used for solving nonlinear BVPs [12].
Thus, the combination of defect correction, the auxiliary space and the adaptive determination of the problem-dependent components of the algorithm by means of numerical experiments on one or several grids of each level, starting from the same initial guess (except for the finest grid, where all problem-dependent components are determined by extrapolation from the coarse levels), allows a wide class of problems to be solved in a unified way. The FAS method generalizes this approach to the nonlinear case.

6. Extrapolation of the Solution

Extrapolation can be used to improve the accuracy of the solution of the regularized system (3). Let
$$b - A u_{\alpha_1} = \alpha_1 u_{\alpha_1},$$
$$b - A u_{\alpha_2} = \alpha_2 u_{\alpha_2}$$
be two linear systems with the singular matrix $A$ from (2) and different regularization parameters $\alpha_1 \neq \alpha_2$. A linear combination of (29a) multiplied by $1 - \omega$ and (29b) multiplied by $\omega$ gives
$$b - A \left( (1-\omega)\, u_{\alpha_1} + \omega\, u_{\alpha_2} \right) = (1-\omega)\,\alpha_1 u_{\alpha_1} + \omega\,\alpha_2 u_{\alpha_2},$$
where
$$\bar u = (1-\omega)\, u_{\alpha_1} + \omega\, u_{\alpha_2}$$
is the extrapolated solution and $\omega$ is a parameter chosen to minimize the right-hand side of (30). The basic idea is to minimize the residual $b - A\bar u$ by an optimal choice of $\omega$ using the Euclidean scalar product $(\cdot\,,\cdot)$:
$$\left( b - A\bar u,\; b - A\bar u \right) = \left( (1-\omega)\alpha_1 u_{\alpha_1} + \omega\alpha_2 u_{\alpha_2},\; (1-\omega)\alpha_1 u_{\alpha_1} + \omega\alpha_2 u_{\alpha_2} \right).$$
The right-hand side of (31) can be represented as
$$F(\omega) = (1-\omega)^2 \alpha_1^2\, (u_{\alpha_1}, u_{\alpha_1}) + 2(1-\omega)\,\omega\,\alpha_1\alpha_2\, (u_{\alpha_1}, u_{\alpha_2}) + \omega^2 \alpha_2^2\, (u_{\alpha_2}, u_{\alpha_2}).$$
The second derivative of the function $F(\omega)$,
$$F_{\omega\omega} = 2 \left( \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2},\; \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2} \right) > 0,$$
predicts a single minimum of the function $F(\omega)$:
$$F_{\omega}(\omega) = -2(1-\omega)\,\alpha_1^2\, (u_{\alpha_1}, u_{\alpha_1}) + 2(1-2\omega)\,\alpha_1\alpha_2\, (u_{\alpha_1}, u_{\alpha_2}) + 2\omega\,\alpha_2^2\, (u_{\alpha_2}, u_{\alpha_2}) = 0.$$
This leads to the value
$$\omega^* = \frac{\left( \alpha_1 u_{\alpha_1},\; \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2} \right)}{\left( \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2},\; \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2} \right)},$$
which minimizes the right-hand side of (31):
$$F(\omega^*) = \frac{\alpha_1^2\, \alpha_2^2 \left[ (u_{\alpha_1}, u_{\alpha_1})\, (u_{\alpha_2}, u_{\alpha_2}) - (u_{\alpha_1}, u_{\alpha_2})^2 \right]}{\left( \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2},\; \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2} \right)}.$$
It is easy to see that
$$F(\omega^*) = F(0) - \frac{\left( \alpha_1 u_{\alpha_1},\; \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2} \right)^2}{\left( \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2},\; \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2} \right)} < F(0),$$
$$F(\omega^*) = F(1) - \frac{\left( \alpha_2 u_{\alpha_2},\; \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2} \right)^2}{\left( \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2},\; \alpha_1 u_{\alpha_1} - \alpha_2 u_{\alpha_2} \right)} < F(1).$$
Obviously, $F(\omega^*) < \min\left( F(0),\, F(1) \right)$ for $\alpha_1 \neq \alpha_2$. Extrapolation is thus one way to obtain a more accurate approximation $\bar u$ to the solution of the singular system (2), in the sense of minimizing $\| b - A\bar u \|$.
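The closed form for the optimal weight translates directly into code (a sketch; note that the residuals are available for free, since $b - Au_\alpha = \alpha u_\alpha$):

```python
import numpy as np

def extrapolate(u1, alpha1, u2, alpha2):
    """Residual-minimizing combination (1 - w) * u1 + w * u2."""
    r1, r2 = alpha1 * u1, alpha2 * u2            # residuals b - A u_alpha
    d = r1 - r2
    w = float(np.dot(r1, d) / np.dot(d, d))      # omega* from the derivation above
    return (1.0 - w) * u1 + w * u2
```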

7. Choice of Initial Guess for Iterative Solver

As a rule, the zero approximation to the solution of a BVP is used as the starting guess for the iterative process. The NBVP (1) with the exact solution $u(x,y,z) = e^{x+y+z} + C$, the right-hand side function $f(x,y,z) = 3e^{x+y+z}$ and the corresponding boundary conditions is used to illustrate a more accurate definition of the starting guess. Integration of (1) over the square $\Omega_{yz} = \{ (y,z) \mid 0 \le y \le 1,\; 0 \le z \le 1 \}$ leads to a variant of the compatibility condition:
$$\int_0^1\!\!\int_0^1 \frac{\partial^2 u}{\partial x^2}\, dy\, dz + \int_0^1\!\!\int_0^1 \frac{\partial^2 u}{\partial y^2}\, dy\, dz + \int_0^1\!\!\int_0^1 \frac{\partial^2 u}{\partial z^2}\, dy\, dz = \int_0^1\!\!\int_0^1 3\, e^{x+y+z}\, dy\, dz,$$
or
$$\int_0^1\!\!\int_0^1 \frac{\partial^2 u}{\partial x^2}\, dy\, dz + \int_0^1 \left( \left.\frac{\partial u}{\partial y}\right|_{y=1} - \left.\frac{\partial u}{\partial y}\right|_{y=0} \right) dz + \int_0^1 \left( \left.\frac{\partial u}{\partial z}\right|_{z=1} - \left.\frac{\partial u}{\partial z}\right|_{z=0} \right) dy = 3\, e^x (e-1)^2.$$
Taking into account the Neumann boundary conditions,
$$\left.\frac{\partial u}{\partial y}\right|_{y=0} = e^{x+z}, \quad \left.\frac{\partial u}{\partial y}\right|_{y=1} = e^{x+1+z}, \quad \left.\frac{\partial u}{\partial z}\right|_{z=0} = e^{x+y}, \quad \left.\frac{\partial u}{\partial z}\right|_{z=1} = e^{x+y+1},$$
this equation becomes
$$\int_0^1\!\!\int_0^1 \frac{\partial^2 u}{\partial x^2}\, dy\, dz + (e-1) \int_0^1 e^{x+z}\, dz + (e-1) \int_0^1 e^{x+y}\, dy = 3\, e^x (e-1)^2.$$
Exact integration simplifies this expression to
$$\int_0^1\!\!\int_0^1 \frac{\partial^2 u}{\partial x^2}\, dy\, dz = (e-1)^2\, e^x.$$
The basic assumption is to represent the unknown solution $u(x,y,z)$ as a sum of three one-dimensional functions $\psi_x(x)$, $\psi_y(y)$ and $\psi_z(z)$:
$$u(x, y, z) = \psi_x(x) + \psi_y(y) + \psi_z(z).$$
After substitution of the representation (33) into (32), Equation (32) becomes an ordinary differential equation (ODE)
$$\frac{d^2 \psi_x}{d x^2} = (e-1)^2\, e^x.$$
The boundary conditions are treated in the same manner; for example,
$$\left.\frac{\partial u}{\partial x}\right|_{x=0} = e^{y+z} \quad\Longrightarrow\quad \left.\frac{d \psi_x}{d x}\right|_{x=0} = \int_0^1\!\!\int_0^1 \left.\frac{\partial u}{\partial x}\right|_{x=0} dy\, dz = \int_0^1\!\!\int_0^1 e^{y+z}\, dy\, dz = (e-1)^2.$$
Finally, the one-dimensional auxiliary NBVP
$$\left.\frac{d \psi_x}{d x}\right|_{x=0} = (e-1)^2, \qquad \frac{d^2 \psi_x}{d x^2} = (e-1)^2\, e^x, \qquad \left.\frac{d \psi_x}{d x}\right|_{x=1} = e\, (e-1)^2$$
has the exact solution
$$\psi_x(x) = (e-1)^2\, e^x + C.$$
The functions $\psi_y(y)$ and $\psi_z(z)$ in (33) are determined in the same way. The resulting starting guess is
$$\tilde u^{(0)}(x, y, z) = (e-1)^2 \left( e^x + e^y + e^z \right) + C.$$
It is instructive to compare the zero ($u^{(0)} = 0$) and improved (35) starting guesses:
(a)
The difference between the exact solution $u(x,y,z) = e^{x+y+z}$ ($C = 0$) and the zero starting guess ($u^{(0)} = 0$) at the point $(x, y, z) = (1, 1, 1)$ is
$$\max_{[0,1]^3} u(x,y,z) - \max_{[0,1]^3} u^{(0)}(x,y,z) = e^3 \approx 20.08;$$
(b)
The difference between the exact solution $u(x,y,z) = e^{x+y+z}$ ($C = 0$) and the starting guess (35) at the point $(x, y, z) = (1, 1, 1)$ is
$$\max_{[0,1]^3} u(x,y,z) - \max_{[0,1]^3} \tilde u^{(0)}(x,y,z) = e^3 - 3e\,(e-1)^2 \approx -3.99.$$
The above approach makes it possible to obtain a starting guess for the solution of the NBVP that is more accurate than the zero one.
This simple algorithm is summarized as follows (a code sketch follows the list):
(1)
Integration of the BVP (1) over $\Omega_{yz} = \{ (y,z) \mid 0 \le y, z \le 1 \}$ and the decomposition (33) result in the ODE (34); $\psi_x(x)$ is the solution of this ODE;
(2)
Integration of the BVP (1) over $\Omega_{xz} = \{ (x,z) \mid 0 \le x, z \le 1 \}$ and the decomposition (33) result in an ODE; $\psi_y(y)$ is the solution of this ODE;
(3)
Integration of the BVP (1) over $\Omega_{xy} = \{ (x,y) \mid 0 \le x, y \le 1 \}$ and the decomposition (33) result in an ODE; $\psi_z(z)$ is the solution of this ODE;
(4)
Formation of the initial guess as the sum (33).
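For the benchmark problem, the improved guess (35) can be evaluated in a few lines (a sketch on a coarse sample grid, $C = 0$):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u_exact = np.exp(X + Y + Z)
u_tilde = (np.e - 1) ** 2 * (np.exp(X) + np.exp(Y) + np.exp(Z))  # guess (35)
print(u_exact.max())                     # e^3 ~ 20.09: gap of the zero guess
print(u_exact.max() - u_tilde.max())     # e^3 - 3e(e-1)^2 ~ -3.99, as in (b)
```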

8. Conclusions

An efficient multigrid algorithm for solving singular or ill-conditioned discrete boundary value problems can be constructed on the basis of a direct solver on the coarsest level and an iterative smoother for the regularized system on the finer levels.
The main results can be summarized as follows:
(1)
If the numerical solution $u_\alpha$ of a singular or ill-conditioned system $A u_\alpha = b$ (2) satisfies the condition (15), where the value of the parameter $\epsilon$ is given in advance, then the regularization parameter $\alpha$ can be determined as (16), where the function $\tilde u_\alpha$ is some approximation to the solution $u_\alpha$ of the original system;
(2)
If the stopping criterion of the iterations for solving the regularized system (3) is given as (11), then the parameter $\varepsilon$ depends on the regularization parameter $\alpha$ as in (13);
(3)
For perturbed singular or ill-conditioned systems, the regularization parameter $\alpha_0$ must satisfy the condition $\| \delta A_0 + \delta\alpha_0 I \| / \alpha_0 \to \min$;
(4)
The Lavrentiev regularization does not affect the smoothing, and h-independent convergence of the multigrid algorithm is expected;
(5)
The regularization parameter α in multigrid algorithms can be defined using the proximity of solutions obtained at the coarser grid levels;
(6)
A numerical solution of the regularized system with different parameters α can be improved by extrapolation;
(7)
The representation (33) makes it possible to formulate a more accurate starting guess for BVPs than the zero one.
The proposed RMT-based procedures can be used for the adaptive determination of the problem-dependent components of the multigrid algorithm and for the adaptation of the computational grid to the features of the numerical solution in black-box software. The advantages of this approach are its simplicity of implementation, the negligible additional effort and its robustness based on the analysis of the experimental convergence rate.

Author Contributions

Conceptualization, methodology, software S.I.M.; formal analysis, review and editing, A.Y.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Higher Education of the Russian Federation (State Assignment No. 075-00269-25-00).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Trottenberg, U.; Oosterlee, C.W.; Schüller, A. Multigrid; Academic Press: London, UK, 2000.
  2. Liao, Z.; Hayami, K.; Morikuni, K.; Yin, J.F. A stabilized GMRES method for singular and severely ill-conditioned systems of linear equations. Jpn. J. Ind. Appl. Math. 2022, 39, 717–751.
  3. Yu, X.; Ma, C. The modified Uzawa methods for solving singular linear systems. Comput. Math. Appl. 2021, 104, 71–86.
  4. Wang, M.; Bertsekas, D.P. On the Convergence of Simulation-based Iterative Methods for Solving Singular Linear Systems. Stoch. Syst. 2017, 3, 38–95.
  5. Hong, L.Y.; Zhang, N. On the preconditioned MINRES method for solving singular linear systems. Comput. Appl. Math. 2022, 41, 304.
  6. Eldén, L.; Simoncini, V. Solving Ill-Posed Linear Systems with GMRES and a Singular Preconditioner. SIAM J. Matrix Anal. Appl. 2012, 33, 1369–1394.
  7. Buzhabadi, R. New Algorithms for Solving Singular Linear System. Comput. Math. Model. 2018, 29, 71–82.
  8. Lavrentiev, M.M. Some Improperly Posed Problems of Mathematical Physics; Springer-Verlag: New York, NY, USA, 1967.
  9. Sheela, S.; Singh, A. Lavrentiev regularization of a singularly perturbed elliptic PDE. Appl. Math. Comput. 2004, 148, 189–205.
  10. Morigi, S.; Reichel, L.; Sgallari, F. An iterative Lavrentiev regularization method. BIT Numer. Math. 2006, 46, 589–606.
  11. Morozov, V.; Mukhamadiev, E.M.; Nazimov, A.B. Regularization of singular systems of linear algebraic equations by shifts. Comput. Math. Math. Phys. 2007, 47, 1885–1892.
  12. Martynenko, S.I. Numerical Methods for Black-Box Software in Computational Continuum Mechanics; De Gruyter: Berlin, Germany, 2023.
  13. Kaltenbacher, B. On the regularizing properties of a full multigrid method for ill-posed problems. Inverse Probl. 2001, 17, 767–788.
  14. Zeng, C.; Luo, X.; Yang, S.; Li, F. A parameter choice strategy for a multilevel augmentation method in iterated Lavrentiev regularization. J. Inverse Ill-Posed Probl. 2018, 26, 153–170.
  15. George, S.; Padikkal, J.; Remesh, K.; Argyros, M. A New Parameter Choice Strategy for Lavrentiev Regularization Method for Nonlinear Ill-Posed Equations. Mathematics 2022, 10, 3365.
  16. Plato, R.; Mathé, P.; Hofmann, B.M. Optimal rates for Lavrentiev regularization with adjoint source conditions. Math. Comput. 2016, 87, 785–801.
  17. Volkov, Y.S.; Miroshnichenko, V.L. Norm estimates for the inverses of matrices of monotone type and totally positive matrices. Sib. Math. J. 2009, 50, 982–987.
  18. Hackbusch, W. Multi-Grid Methods and Applications; Springer: Berlin/Heidelberg, Germany, 1985.
  19. Raus, T.; Hämarik, U. On numerical realization of quasioptimal parameter choices in (iterated) Tikhonov and Lavrentiev regularization. Math. Model. Anal. 2009, 14, 99–108.
  20. Hämarik, U.; Palm, R.; Raus, T.A. A family of rules for the choice of the regularization parameter in the Lavrentiev method in the case of rough estimate of the noise level of the data. J. Inverse Ill-Posed Probl. 2012, 20, 831–854.
  21. Frederickson, P.O.; McBryan, O.A. Parallel superconvergent multigrid. In Multigrid Methods: Theory, Applications and Supercomputing; McCormick, S., Ed.; Marcel Dekker: New York, NY, USA, 1988; pp. 195–210.
  22. Decker, N.H. Note on the Parallel Efficiency of the Frederickson-McBryan Multigrid Algorithm. SIAM J. Sci. Stat. Comput. 2005, 12, 208–220.
  23. Xu, J. The auxiliary space method and optimal multigrid preconditioning techniques for unstructured grids. Computing 1996, 56, 215–235.
Figure 1. The influence of the regularization parameter α on the accuracy of the numerical solution of the 1D NBVP (6) ($N = 100$, $h = 1/100$, $\nu = 1$).
Figure 2. The influence of the regularization parameter α on the number of Gauss–Seidel iterations ($N = 100$, $h = 1/100$).
Figure 3. Convergence during the first Gauss–Seidel iterations ($N = 100$, $h = 1/100$).
Figure 4. $h$-independent convergence during the first Gauss–Seidel iterations ($\alpha = 10^{-5}$).
Figure 6. The triple coarsening in the RMT: vertex • and volume face + (left); vertex + and volume face • (right).
Figure 7. The multigrid structure $\mathcal{MS}(G^0_1)$, $L_3^+ = 3$.
Figure 8. Numerical solution of (22) by a direct method on the coarsest level (•).
Figure 9. Prolongation from the coarsest level.
Figure 10. The problem-independent prolongation in the RMT.
Figure 11. Restriction from level $L_3^+ - 1$.
Figure 12. Multigrid cycle of the RMT with two multigrid iterations on each level.
Figure 14. Multigrid preconditioning: original (★) and auxiliary (∘) grids.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
