Article

Two Extrapolation Techniques on Splitting Iterative Schemes to Accelerate the Convergence Speed for Solving Linear Systems

1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(7), 440; https://doi.org/10.3390/a18070440
Submission received: 21 June 2025 / Revised: 11 July 2025 / Accepted: 16 July 2025 / Published: 18 July 2025
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)

Abstract

For the splitting iterative scheme to solve a system of linear equations, an equivalent form in terms of descent and residual vectors is formulated. We propose an extrapolation technique using the new formulation, such that a new splitting iterative scheme (NSIS) can be generated from the original one simply by inserting an acceleration parameter preceding the descent vector. The spectral radius of the NSIS is proven to be smaller than that of the original scheme, so the NSIS converges faster. The orthogonality of consecutive residual vectors is built into the second NSIS, from which a stepwise varying orthogonalization factor can be derived explicitly. Multiplying the descent vector by this factor, the second NSIS is proven to be absolutely convergent. The modification is based on the maximal reduction of the residual vector norm. Two-parameter and three-parameter NSISs are investigated, wherein the optimal value of one parameter is obtained by a maximization technique. The splitting iterative schemes are unified to have the same iterative form, but are endowed with different governing equations for the descent vector. Several examples are examined to exhibit the performance of the proposed extrapolation techniques used in the NSIS.

1. Introduction

In this paper, we improve some splitting iteration methods for solving a system of linear equations:
$$ A x = b, \quad b, x \in \mathbb{R}^n, \quad A \in \mathbb{R}^{n \times n}. \tag{1} $$
Many well-developed methods for solving Equation (1) are discussed in the textbooks [1,2,3,4,5]. In general, Krylov subspace methods are employed for large-scale linear problems. An m-dimensional Krylov subspace is considered:
$$ \mathbb{K}_m(A, r_0) := \mathrm{Span}\{ r_0, A r_0, \ldots, A^{m-1} r_0 \}. \tag{2} $$
In the generalized minimal residual (GMRES) method [6], $r = r_0 - A u$ is perpendicular to $\mathbb{L}_m = A \mathbb{K}_m$ defined by Equation (2), i.e.,
$$ r \perp \mathbb{L}_m. \tag{3} $$
The best descent vector $u \in \mathbb{K}_m$ is sought to minimize the residual [3]:
$$ \min \| r \|, $$
where
$$ r = b - A x = r_0 - A u $$
is the residual vector, whose Euclidean norm is denoted by $\| r \|$. Equation (3) is one of the Petrov–Galerkin methods to search $u \in \mathbb{K}_m$. The perpendicularity is a vital guidance for developing effective iterative algorithms, which was employed in [7,8] to enhance the stability of GMRES and SOR.
Dongarra and Sullivan [9] listed the Krylov subspace method among the top ten algorithms. Bai [10] demonstrated the motivations and realizations of Krylov subspace methods for large sparse linear systems. Recently, Bouyghf et al. [11] provided a unified approach to the Krylov subspace methods for solving linear systems.
Suppose that A is decomposed by
$$ A = M - N, \tag{4} $$
where M is a nonsingular matrix. The (M, N) splitting iterative scheme for Equation (1) is [12]
$$ M x_{k+1} = N x_k + b, \tag{5} $$
where $x_k$ is the kth step value of $x$. The convergence of the iteration in Equation (5) is guaranteed if
$$ \rho = \rho(G) < 1, \tag{6} $$
where $G = M^{-1} N$ is the iteration matrix, and $\rho$ is the spectral radius of $G$.
The Gaussian elimination method is a classical direct method, but it is not suitable for large-scale problems in scientific or engineering practice. The Gauss–Seidel method is a semi-exact method, finding the exact values of the variables at each iteration step by a forward-substitution technique. Its main advantage is that no extra parameter is needed; however, it converges slowly.
Many methods are special cases of the ( M , N ) splitting iterative scheme, namely the Jacobi method, the Gauss–Seidel method, the successive overrelaxation (SOR) method, and the accelerated overrelaxation (AOR) method. The SOR method was developed in [13]. For multi-parameter splitting methods, the two-parameter AOR method [14]:
$$ (D - w L) x_{k+1} = r b + [(1 - r) D + r U + (r - w) L] x_k, \tag{7} $$
is a generalization of SOR, which is equipped with a relaxation parameter w and an acceleration parameter r for controlling the convergence behavior.
The ( M , N ) splitting for Equation (7) is
$$ A = M - N = D - U - L, \tag{8} $$
where
$$ M = \frac{1}{r} (D - w L), \quad N = \frac{1}{r} [(1 - r) D + r U + (r - w) L]. $$
If r = w, the SOR iterative method
$$ (D - w L) x_{k+1} = w b + (1 - w) D x_k + w U x_k \tag{9} $$
is recovered from Equation (7).
Generalizations of the AOR method to different linear problems can be found in [15,16,17]; the convergence behaviors were analyzed in [18,19]. To improve the convergence speed, preconditioned AOR methods were developed, and different kinds of preconditioners have been examined in a growing body of literature [20,21,22,23,24,25,26,27].
Liu and his co-workers [28,29] reformulated SOR and AOR in terms of descent and residual vectors. The reaccelerated version of AOR was discussed in [30], and the generalization to the three-parameter method of AOR was carried out in [31].
Given an initial guess $x_0$ for an iterative scheme, Equation (1) yields a residual vector $r_0 = b - A x_0$. Knowing $r_0$, one attempts to find a descent vector $u$ that corrects the solution to $x = x_0 + u$ and obeys the rule $\| r \| < \| r_0 \|$. The residual and descent vectors are two fundamental concepts in the formulation of iterative schemes. We will point out that the purpose of the extrapolation techniques [32,33,34,35] is essentially to design a feasible manner of searching for a better descent vector, such that the residuals can be reduced quickly.
In this paper we unify the splitting iteration methods and simplify the extrapolation techniques through the formulation in terms of descent and residual vectors. The new ideas developed in this formulation improve the existing splitting iteration methods, and involve a maximization technique and the spectral analysis of the iteration matrix. Absolute convergence of the new splitting iterative scheme (NSIS) is proven.

2. A Reduction of Spectral Radius

Through the descent vector and the kth step residual vector
$$ u = x - x_k, \tag{10} $$
$$ r_k = b - A x_k, \tag{11} $$
Equation (1) can be written as
$$ A u = r_k. \tag{12} $$
Equation (12) shows that the best descent vector is $u = A^{-1} r_k$. However, if we could find $A^{-1}$, the solution of Equation (1) would already be given exactly by $x = A^{-1} b$. The main difficulty is that computing $A^{-1}$ is not an easy task.
Lemma 1. 
The ( M , N ) splitting iterative scheme (5) for solving Equation (1) can be reformulated as
$$ x_{k+1} = x_k + u_k, \quad M u_k = r_k. \tag{13} $$
Proof. 
By means of Equation (5), we have
$$ M x_{k+1} = M x_k + (N - M) x_k + b; $$
it becomes, with the aid of Equations (4) and (11),
$$ M (x_{k+1} - x_k) = r_k. $$
We end the proof of Lemma 1 by setting $u_k = x_{k+1} - x_k$ in view of Equation (10).  □
We modify Equation (13) and prove the following result.
Theorem 1. 
Inserting an acceleration parameter α preceding u k in Equation (13), a new splitting iterative scheme for Equation (1) reads as
$$ x_{k+1} = x_k + \alpha u_k, \quad M u_k = r_k. \tag{14} $$
Under the following condition:
$$ 1 < \alpha < \frac{1}{1 - \rho(G)}, \tag{15} $$
where $G = M^{-1} N$ is the iteration matrix for Equation (5), the spectral radius for Equation (14), denoted by $\rho_\alpha$, satisfies
$$ \rho_\alpha < \rho(G). $$
Proof. 
When the first equation in Equation (14) is multiplied by M, and Equations (4) and (11) together with the second equation in Equation (14) are taken into account, it generates
$$ M x_{k+1} = M x_k + \alpha M u_k = M x_k + \alpha r_k = M x_k + \alpha (b - A x_k) = M x_k + \alpha b - \alpha (M - N) x_k = (1 - \alpha) M x_k + \alpha N x_k + \alpha b. $$
The associated iteration matrix is
$$ G_\alpha = (1 - \alpha) I_n + \alpha G. $$
It follows that
$$ \rho_\alpha = 1 - \alpha + \alpha \rho(G). $$
If $\alpha > 1$ and $0 < \rho(G) < 1$ owing to Equation (6), then
$$ (\alpha - 1) \rho(G) < \alpha - 1 $$
implies
$$ \rho_\alpha = 1 - \alpha + \alpha \rho(G) < \rho(G). $$
To meet the requirement of
$$ \rho_\alpha = 1 - \alpha + \alpha \rho(G) > 0, $$
$\alpha < 1/[1 - \rho(G)]$ follows readily.  □
Equation (14) can also be derived from Equation (5) by multiplying it by α and using a substitution technique:
$$ \alpha M x_{k+1} = \alpha N x_k + \alpha b, \tag{19} $$
$$ x_{k+1} \rightarrow \frac{1}{\alpha} x_{k+1} + \left( 1 - \frac{1}{\alpha} \right) x_k. \tag{20} $$
It results in
$$ \begin{aligned} M x_{k+1} + (\alpha - 1) M x_k &= \alpha N x_k + \alpha b, \\ M x_{k+1} - M x_k &= \alpha (N - M) x_k + \alpha b, \\ M x_{k+1} - M x_k &= \alpha (b - A x_k), \\ M (x_{k+1} - x_k) &= \alpha r_k, \\ x_{k+1} - x_k &= \alpha M^{-1} r_k = \alpha u_k. \end{aligned} \tag{21} $$
This is the usual extrapolation method used in the iterative scheme with α as an extrapolation parameter [33].
The extrapolation technique consists of a multiplication in Equation (19), and an extrapolation in Equation (20), which will be named an extrapolation technique for the ( M , N ) splitting iterative scheme (5) with the parameter α , or simply, an α -extrapolation.
We now realize the extrapolation technique for the SOR method in Equation (9) as follows:
$$ \begin{aligned} \alpha (D - w L) x_{k+1} &= w \alpha b + (1 - w) \alpha D x_k + w \alpha U x_k, \\ \alpha (D - w L) \left[ \frac{1}{\alpha} x_{k+1} + \left( 1 - \frac{1}{\alpha} \right) x_k \right] &= w \alpha b + (1 - w) \alpha D x_k + w \alpha U x_k, \\ (D - w L) x_{k+1} &= w \alpha b + (1 - w) \alpha D x_k + w \alpha U x_k + (1 - \alpha)(D - w L) x_k, \\ (D - w L) x_{k+1} &= w \alpha b + (1 - w \alpha) D x_k + w \alpha U x_k + (w \alpha - w) L x_k. \end{aligned} \tag{22} $$
Setting w α = r , we can derive Equation (7). Therefore, the AOR in Equation (7) is an α -extrapolation of the SOR in Equation (9).
Suppose that the spectral radius of SOR is ρ SOR . According to Equation (15), when ( r , w ) in the AOR method are taken as
$$ 1 < \frac{r}{w} < \frac{1}{1 - \rho_{\mathrm{SOR}}}, $$
AOR converges faster than SOR.
Remark 1. 
Compared to Equation (13), α appears only in the first equation of (14), preceding $u_k$, while the second equation is unchanged. The α-extrapolation in Equation (14), within the formulation in terms of descent and residual vectors, is simpler than the extrapolation technique applied to the original iterative equation, like that in Equation (21) for the general (M, N) method, and that in Equation (22) for the SOR method to obtain the AOR method. AOR is the α = r/w-extrapolation of SOR.
By absorbing α into $u_k$ and renaming it $u_k$ again, Equation (14) can also be written as
$$ x_{k+1} = x_k + u_k, \quad M u_k = \alpha r_k. \tag{24} $$
Compared to Equation (13), it has the same iterative form, but the governing equations of $u_k$ are different; $u_k$ in Equation (24) is enhanced by an acceleration factor α. Both Equations (14) and (24) are the α-extrapolation of Equation (13); they are equivalent.
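The following short numerical check is a minimal sketch (not part of the paper) of Theorem 1: for the Gauss–Seidel splitting of the one-dimensional Laplacian, the α-extrapolated iteration matrix $G_\alpha = (1 - \alpha) I_n + \alpha G$ has a smaller spectral radius than $G$ when α satisfies Equation (15).

```python
import numpy as np

# Minimal sketch of Theorem 1 (illustrative only, not from the paper):
# Gauss-Seidel splitting of the 1D Laplacian and its alpha-extrapolation.
n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiag(-1, 2, -1)

M = np.tril(A)                        # Gauss-Seidel splitting: M = D - L, N = M - A
N = M - A
G = np.linalg.solve(M, N)             # iteration matrix G = M^{-1} N
rho = max(abs(np.linalg.eigvals(G)))

alpha = 1.5                           # lies inside 1 < alpha < 1/(1 - rho) here
G_alpha = (1.0 - alpha) * np.eye(n) + alpha * G
rho_alpha = max(abs(np.linalg.eigvals(G_alpha)))

print(f"rho(G)       = {rho:.4f}")
print(f"rho(G_alpha) = {rho_alpha:.4f}")   # smaller, as Theorem 1 predicts
```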

3. On a Modification of the Splitting Iterative Scheme and Determining η_k

Inspired by Equation (3), the iterative scheme in Equation (13) is orthogonal if its descent and residual vectors satisfy
$$ A u_k \cdot r_{k+1} = 0. \tag{25} $$
Usually this is not true. For the original splitting iterative scheme in Equation (13), we have
$$ r_{k+1} = r_k - A u_k, \tag{26} $$
which results in
$$ r_{k+1}^{\mathrm{T}} A u_k = r_k^{\mathrm{T}} A u_k - \| A u_k \|^2 \neq 0. $$
We define a quantity $\eta_k$ to measure the degree of non-orthogonality of the iterative scheme:
$$ \eta_k = \frac{r_k^{\mathrm{T}} A u_k}{\| A u_k \|^2}. \tag{27} $$
Now the iterative scheme in Equation (13) is orthogonal if and only if η k = 1 .
Rather than the constant value of α in Equation (14), in this section we derive a more powerful version of the splitting iterative scheme (5) with a stepwise varying factor η k in Equation (27).
Theorem 2. 
If a factor η k is inserted preceding u k in the first one of Equation (13), then a new splitting iteration method for solving Equation (1) reads as
$$ x_{k+1} = x_k + \eta_k u_k, \quad M u_k = r_k. \tag{28} $$
The optimal value for the orthogonalization factor η k at each step is given by Equation (27).
Proof. 
Multiplying the first one in Equation (28) by A and using Equation (11) yields
$$ r_{k+1} = r_k - \eta_k A u_k; $$
the squared norm is
$$ \| r_{k+1} \|^2 = \| r_k \|^2 - 2 \eta_k r_k^{\mathrm{T}} A u_k + \eta_k^2 \| A u_k \|^2. $$
Let
$$ g = 2 \eta_k r_k^{\mathrm{T}} A u_k - \eta_k^2 \| A u_k \|^2; \tag{29} $$
we have
$$ \| r_{k+1} \|^2 = \| r_k \|^2 - g. \tag{30} $$
The following maximization problem is established to maximally decrease the residual vector norm at each step:
$$ \max_{\eta_k} \left\{ g = 2 \eta_k r_k^{\mathrm{T}} A u_k - \eta_k^2 \| A u_k \|^2 \right\}. \tag{31} $$
Using $d g / d \eta_k = 0$ for the maximality of g in Equation (31), we can derive Equation (27).  □
Remark 2. 
In contrast to the α-extrapolation in Equation (14), Equation (28) is an $\eta_k$-extrapolation of Equation (13). While α in Equation (14) is a constant, $\eta_k$ varies from step to step.
Theorem 3. 
The splitting iterative scheme in Equation (28) is absolutely convergent, i.e.,
$$ \| r_{k+1} \| < \| r_k \|, \quad k = 0, 1, \ldots. \tag{32} $$
Proof. 
Inserting Equation (27) for the optimal value of η k into Equation (29) yields
$$ g = \frac{(r_k^{\mathrm{T}} A u_k)^2}{\| A u_k \|^2}. $$
It follows from Equation (30) that
$$ \| r_{k+1} \|^2 = \| r_k \|^2 - \frac{(r_k^{\mathrm{T}} A u_k)^2}{\| A u_k \|^2} < \| r_k \|^2, $$
due to $(r_k^{\mathrm{T}} A u_k)^2 / \| A u_k \|^2 > 0$. Hence, we proved Equation (32).  □
Theorem 4. 
In the splitting iterative scheme in Equations (28) and (27), we have
$$ r_{k+1}^{\mathrm{T}} A u_k = 0, \tag{33} $$
which means that the consecutive residual vector r k + 1 is perpendicular to A u k .
Proof. 
By means of Equations (28) and (27), we have
$$ x_{k+1} = x_k + \frac{r_k^{\mathrm{T}} A u_k}{\| A u_k \|^2} u_k. \tag{34} $$
Multiplying Equation (34) by A and using Equation (11) yields
$$ r_{k+1} = r_k - \frac{r_k^{\mathrm{T}} A u_k}{\| A u_k \|^2} A u_k, $$
which, upon taking the inner product with $A u_k$, renders
$$ r_{k+1}^{\mathrm{T}} A u_k = r_k^{\mathrm{T}} A u_k - \frac{r_k^{\mathrm{T}} A u_k}{\| A u_k \|^2} \| A u_k \|^2 = 0. $$
The proof of Equation (33) for Theorem 4 is complete.  □
Equation (26) indicates that the consecutive residual vectors of the original splitting iterative scheme are not perpendicular to A u k . Hence, the absolute convergence is not guaranteed for the original splitting iterative scheme.
The property of
$$ r_{k+1} \perp A u_k $$
is well-known in the literature. The orthogonal property guarantees that the residual vector norm is decreased per step. Refer to [7] for a detailed development of the re-orthogonalized GMRES technique for solving the linear systems.
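The next sketch (illustrative, not part of the paper) performs one step of the $\eta_k$-extrapolated scheme (28) with the optimal factor (27), using a Gauss–Seidel-type choice of M as an assumption, and checks the orthogonality of Theorem 4 and the residual decrease of Theorem 3.

```python
import numpy as np

# One step of the eta_k-extrapolated scheme (28) with the factor (27);
# a sketch assuming a Gauss-Seidel-type M (any nonsingular M would do).
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = np.zeros(n)

M = np.tril(A)
r = b - A @ x
u = np.linalg.solve(M, r)            # M u_k = r_k
Au = A @ u
eta = (r @ Au) / (Au @ Au)           # optimal orthogonalization factor (27)
x_new = x + eta * u
r_new = b - A @ x_new

print("r_{k+1}^T A u_k =", r_new @ Au)                  # ~ 0  (Theorem 4)
print("||r_{k+1}|| < ||r_k||:",
      np.linalg.norm(r_new) < np.linalg.norm(r))        # True (Theorem 3)
```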

4. New Form of QAOR and Optimal Value of r

Wu and Liu [17] proposed a quasi accelerated overrelaxation method (QAOR) by taking
$$ [(1 + r) D - w L] x_{k+1} = r b + [D + r U + (r - w) L] x_k. \tag{35} $$
Dividing by 1 + r and renaming the parameters as
$$ w_0 = \frac{w}{1 + r}, \quad r_0 = \frac{r}{1 + r}, $$
Equation (35) can be recast to
$$ (D - w_0 L) x_{k+1} = r_0 b + [(1 - r_0) D + r_0 U + (r_0 - w_0) L] x_k. $$
It is just the AOR method in Equation (7). However, Equation (35) has the advantage that the parameter r is permitted in a larger range.

4.1. A New Form of QAOR

The original form of quasi accelerated overrelaxation method (QAOR) in Equation (35) is not easy to handle. For the purpose of comparison we derive a new form of QAOR in terms of descent and residual vectors as follows.
Theorem 5. 
A new iterative form of the QAOR method is
$$ x_{k+1} = x_k + u_k, \quad [(1 + r) D - w L] u_k = r r_k. \tag{36} $$
Proof. 
We rewrite Equation (35) to
$$ [(1 + r) D - w L] x_{k+1} = [(1 + r) D - w L] x_k + r b + [D + r U + (r - w) L] x_k - [(1 + r) D - w L] x_k, $$
and arrange it to
$$ [(1 + r) D - w L] x_{k+1} = [(1 + r) D - w L] x_k + r b - r [D - U - L] x_k. $$
With the aid of Equations (8) and (11), we have
$$ [(1 + r) D - w L] x_{k+1} = [(1 + r) D - w L] x_k + r r_k, $$
and also
$$ [(1 + r) D - w L] (x_{k+1} - x_k) = r r_k. $$
The proof of Equation (36) is complete upon setting x k + 1 x k = u k .  □

4.2. Determining r in QAOR

Theorem 6. 
Given an initial value $x_0$, $r_0 = b - A x_0$ is known. For the QAOR method in Equation (36), if w is given, then
$$ r = \frac{r_0^{\mathrm{T}} A v_0}{\| A v_0 \|^2} \tag{37} $$
is the optimal value of r, where $v_0$ is obtained from a forward solution of
$$ [(1 + r) D - w L] v_0 = r_0. \tag{38} $$
Proof. 
Upon letting
$$ v_0 = \frac{u_0}{r}, $$
Equation (38) is a direct result of the second equation in Equation (36) with k = 0. The first equation in Equation (36) with k = 0 is
$$ x_1 = x_0 + r v_0. \tag{39} $$
Multiplying Equation (39) by A and using Equation (11) yields
$$ r_1 = r_0 - r A v_0, $$
the squared norm of which is
$$ \| r_1 \|^2 = \| r_0 \|^2 - 2 r\, r_0^{\mathrm{T}} A v_0 + r^2 \| A v_0 \|^2. $$
Let
$$ f = 2 r\, r_0^{\mathrm{T}} A v_0 - r^2 \| A v_0 \|^2 $$
be the decrease of the squared residual norm at the first step; we have
$$ \| r_1 \|^2 = \| r_0 \|^2 - f. $$
We encounter a maximization problem to determine r via
$$ \max_{r} \left\{ f = 2 r\, r_0^{\mathrm{T}} A v_0 - r^2 \| A v_0 \|^2 \right\}. $$
Adopting d f / d r = 0 , Equation (37) is derived.  □
Equation (37) is a nonlinear equation for r, because $v_0 = [(1 + r) D - w L]^{-1} r_0$ obtained from Equation (38) is also a function of r. It is a fixed-point problem to seek the solution r, which can be solved by an iteration method to determine the optimal value of r.
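A sketch of this fixed-point iteration is given below (the function name and stopping rule are illustrative; the same loop appears as Algorithm 4 in Section 7). The matrices D and L follow the sign convention of Equation (8).

```python
import numpy as np

# Fixed-point iteration for the optimal r of QAOR (a sketch of Equations (37)-(38)).
def optimal_r_qaor(A, b, x0, w, r_guess=1.0, eps1=1e-2, max_it=50):
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)                 # A = D - U - L convention of Equation (8)
    r0 = b - A @ x0
    r = r_guess
    for _ in range(max_it):
        v0 = np.linalg.solve((1.0 + r) * D - w * L, r0)            # Equation (38)
        r_new = (r0 @ (A @ v0)) / np.linalg.norm(A @ v0) ** 2      # Equation (37)
        if abs(r_new - r) < eps1:
            return r_new
        r = r_new
    return r
```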

4.3. Accelerated QAOR

According to Theorem 1, we can accelerate the convergence speed of QAOR by utilizing the following corollary.
Corollary 1. 
A new iterative form of the accelerated QAOR method is
$$ x_{k+1} = x_k + \alpha u_k, \quad [(1 + r) D - w L] u_k = r r_k, $$
where α > 1 , and r is determined by Equation (37).

5. Reaccelerated over Relaxation Method

Vatti et al. [30] proposed a reaccelerated over relaxation (ROR) method as follows:
$$ (D - w L) x_{k+1} = r b + [(1 - r) D + r U + (r - w) L] x_k + r w (A x_k - b). \tag{41} $$
Theorem 7. 
A new iterative form of the ROR method is
$$ x_{k+1} = x_k + u_k, \quad (D - w L) u_k = r (1 - w) r_k. \tag{42} $$
Proof. 
With the aid of Equation (11), we rewrite Equation (41) to
$$ (D - w L) x_{k+1} = (D - w L) x_k + r b + [(1 - r) D + r U + (r - w) L] x_k - (D - w L) x_k - r w r_k, $$
and then to
$$ (D - w L) x_{k+1} = (D - w L) x_k + r b - r [D - U - L] x_k - r w r_k. $$
It follows from Equations (8) and (11) that
$$ (D - w L) x_{k+1} = (D - w L) x_k + r b - r A x_k - r w r_k = (D - w L) x_k + r r_k - r w r_k, $$
and that
$$ (D - w L) (x_{k+1} - x_k) = r r_k - r w r_k; $$
Upon using $u_k = x_{k+1} - x_k$, Equation (42) is proven.  □
Theorem 8. 
The AOR method can be recast to
$$ x_{k+1} = x_k + u_k, \quad (D - w L) u_k = r r_k. \tag{43} $$
The ROR method is obtained from the AOR method upon applying the extrapolation technique (19) and (20) with a parameter β = 1 − w to Equation (43); it generates the following iterative form:
$$ x_{k+1} = x_k + (1 - w) u_k, \quad (D - w L) u_k = r r_k. \tag{44} $$
Proof. 
Equation (43) is obtained from Equation (42) by deleting the term $- r w r_k$ on the right-hand side of the second equation. Applying the extrapolation technique (19) and (20) to Equation (43) leads to
$$ \beta x_{k+1} = \beta x_k + \beta u_k, \quad \beta \left[ \frac{1}{\beta} x_{k+1} + \left( 1 - \frac{1}{\beta} \right) x_k \right] = \beta x_k + \beta u_k, \quad x_{k+1} = x_k + \beta u_k, $$
which is the first equation in Equation (44), if we take β = 1 − w.
In Equation (42), let
$$ w_k = \frac{u_k}{1 - w}; $$
we have
$$ x_{k+1} = x_k + (1 - w) w_k, \quad (D - w L) w_k = r r_k. \tag{45} $$
Comparing Equation (45), with w k replaced by u k , to Equation (44), they are the same.  □
Theorem 9. 
Given an initial value $x_0$, $r_0 = b - A x_0$ is known. For the ROR method in Equation (42), if w is given, then
$$ r = \frac{r_0^{\mathrm{T}} A v_0}{\| A v_0 \|^2} \tag{46} $$
is the optimal value of r, where $v_0$ is obtained from a forward solution of
$$ (D - w L) v_0 = (1 - w) r_0. \tag{47} $$
Proof. 
Letting
$$ v_0 = \frac{u_0}{r}, $$
Equation (47) is a direct result of the second equation in Equation (42) with k = 0. The first equation in Equation (42) with k = 0 is
$$ x_1 = x_0 + r v_0. $$
Other processes are similar to that in Theorem 6.  □
Equation (46) is a simple equation for r, because $v_0 = (1 - w)(D - w L)^{-1} r_0$ is not a function of r. We can compute the optimal value of r from Equation (46) directly.
Remark 3. 
We have mentioned in Remark 1 that AOR is the α = r/w-extrapolation of SOR. It is interesting that ROR is the β = 1 − w-extrapolation of AOR. Consequently, ROR is a secondary extrapolation of SOR by the two parameters α = r/w and β = 1 − w. An advantage of ROR is that the optimal value of r can be obtained from Equation (46) directly without any iteration.

6. Three-Parameter Splitting Iterative Scheme

A generalization of Equation (35) was proposed in [31,36] as follows:
$$ [(1 + \sigma) D - w L] x_{k+1} = r b + [(1 + \sigma - r) D + r U + (r - w) L] x_k, \tag{48} $$
which is named a parametric-AOR (POR) method, accompanied by an extra parameter σ. The QAOR method in Equation (35) is a special case of Equation (48) with σ = r.
Theorem 10. 
A new iterative form of the POR method is
$$ x_{k+1} = x_k + u_k, \quad [(1 + \sigma) D - w L] u_k = r r_k. \tag{49} $$
Proof. 
Equation (48) is rewritten as
$$ [(1 + \sigma) D - w L] x_{k+1} = [(1 + \sigma) D - w L] x_k + r b + [(1 + \sigma - r) D + r U + (r - w) L] x_k - [(1 + \sigma) D - w L] x_k, $$
which can be rearranged to
$$ [(1 + \sigma) D - w L] x_{k+1} = [(1 + \sigma) D - w L] x_k + r b - r [D - U - L] x_k. $$
Equations (8) and (11) lead to
$$ [(1 + \sigma) D - w L] x_{k+1} = [(1 + \sigma) D - w L] x_k + r r_k. $$
The proof of Equation (49) is complete, upon moving $[(1 + \sigma) D - w L] x_k$ to the left-hand side and setting $x_{k+1} - x_k = u_k$.  □
Isah et al. [31] further generalized Equation (48) to
$$ [(1 + \sigma) D - w L] x_{k+1} = r b + [(1 + \sigma - r) D + r U + (r - w) L] x_k - r w r_k, \tag{50} $$
which is named a parametric reaccelerated overrelaxation (PROR) method. Like that in Theorem 9, we can prove the following result.
Theorem 11. 
The PROR method is obtained from the POR method by applying the extrapolation technique (19) and (20) with a parameter β = 1 − w; it generates the following iterative form:
$$ x_{k+1} = x_k + (1 - w) u_k, \quad [(1 + \sigma) D - w L] u_k = r r_k. \tag{51} $$
Proof. 
As done in Theorem 10, Equation (50) can be expressed as
$$ x_{k+1} = x_k + w_k, \quad [(1 + \sigma) D - w L] w_k = r (1 - w) r_k. \tag{52} $$
If we take
$$ u_k = \frac{w_k}{1 - w}, $$
Equation (52) changes to Equation (51).
By applying the extrapolation technique (19) and (20) to Equation (49) with a parameter β = 1 − w, Equation (51) can be derived again.  □
When w k in Equation (52) is replaced by u k , the PROR method can also be written as
$$ x_{k+1} = x_k + u_k, \quad [(1 + \sigma) D - w L] u_k = r (1 - w) r_k. $$
Remark 4. 
QAOR is a special case of POR with σ = r. The PROR method is the β = 1 − w-extrapolation of POR.

7. Algorithms

The iterative schemes can be adjusted to have the same iterative form:
$$ x_{k+1} = x_k + u_k, $$
but the governing equations for u k are different. In Table 1, we summarize the iterative schemes discussed in this paper.
According to the different methods of the splitting iterative scheme we have the following iterative algorithms.
Algorithms 1 and 2 can be applied to the iterative schemes in Table 1. To save notation we abbreviate them as A1-SOR, A2-SOR, and so on; they are, respectively, Algorithms 1 and 2 applied to SOR, and so on. In Algorithm 2, α is added to further accelerate the convergence speed, and needs to satisfy the constraint (15). For most cases we take α = 1, unless otherwise specified.
We can reduce the number of unknown parameters by using the maximization techniques, as in Theorem 6 for the QAOR method with the optimal value of r.
Algorithm 1:  α -acceleration
1: Give A , M , x 0 , α > 1 , and ε
2: Do k = 0, 1, …, until $\| r_k \| < \varepsilon$
3: $r_k = b - A x_k$
4: Solve $M u_k = r_k$
5: $x_{k+1} = x_k + \alpha u_k$
Algorithm 2:  η k -acceleration
1: Give A , M , x 0 , α , and ε
2: Do k = 0, 1, …, until $\| r_k \| < \varepsilon$
3: $r_k = b - A x_k$
4: Solve $M u_k = r_k$
5: $\eta_k = \dfrac{r_k^{\mathrm{T}} A u_k}{\| A u_k \|^2}$
6: $x_{k+1} = x_k + \alpha \eta_k u_k$
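The two algorithms translate directly into code; the following dense-matrix sketch (with numpy, not part of the paper) shows both, where M is any nonsingular splitting matrix, e.g. D − wL for SOR.

```python
import numpy as np

def a1_iterate(A, b, M, x0, alpha, eps=1e-12, max_it=10000):
    """Algorithm 1 (alpha-acceleration): x_{k+1} = x_k + alpha * u_k."""
    x = x0.copy()
    for k in range(max_it):
        r = b - A @ x
        if np.linalg.norm(r) < eps:
            break
        u = np.linalg.solve(M, r)        # M u_k = r_k
        x = x + alpha * u
    return x, k

def a2_iterate(A, b, M, x0, alpha=1.0, eps=1e-12, max_it=10000):
    """Algorithm 2 (eta_k-acceleration): x_{k+1} = x_k + alpha * eta_k * u_k."""
    x = x0.copy()
    for k in range(max_it):
        r = b - A @ x
        if np.linalg.norm(r) < eps:
            break
        u = np.linalg.solve(M, r)
        Au = A @ u
        eta = (r @ Au) / (Au @ Au)       # stepwise factor of Equation (27)
        x = x + alpha * eta * u
    return x, k
```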
In Algorithm 3 for w of SOR, and Algorithm 4 for r of QAOR, we need a few iterations with a loose convergence criterion, say $\varepsilon_1 = 10^{-2}$; Algorithm 5 for r of AOR and POR, and Algorithm 6 for r of ROR and PROR, do not need any iteration.
Algorithm 3: for w of SOR
1: Give $x_0$, $w^{(0)}$ and $\varepsilon_1$
2: $r_0 = b - A x_0$
3: Do k = 0, 1, …
4: Solve $(D - w^{(k)} L) v_0 = r_0$
5: $w^{(k+1)} = \dfrac{r_0^{\mathrm{T}} A v_0}{\| A v_0 \|^2}$
6: Enddo, if $| w^{(k+1)} - w^{(k)} | < \varepsilon_1$
Algorithm 4: for r of QAOR
1: Give $x_0$, w, $r^{(0)}$ and $\varepsilon_1$
2: $r_0 = b - A x_0$
3: Do k = 0, 1, …
4: Solve $[(1 + r^{(k)}) D - w L] v_0 = r_0$
5: $r^{(k+1)} = \dfrac{r_0^{\mathrm{T}} A v_0}{\| A v_0 \|^2}$
6: Enddo, if $| r^{(k+1)} - r^{(k)} | < \varepsilon_1$
Algorithm 5: for r of AOR and POR
1: Give x 0 , w, σ ( σ = 0 for AOR)
2: $r_0 = b - A x_0$
3: Solve $[(1 + \sigma) D - w L] v_0 = r_0$
4: $r = \dfrac{r_0^{\mathrm{T}} A v_0}{\| A v_0 \|^2}$
Algorithm 6: for r of ROR and PROR
1: Give x 0 , w, σ ( σ = 0 for ROR)
2: $r_0 = b - A x_0$
3: Solve $[(1 + \sigma) D - w L] v_0 = (1 - w) r_0$
4: $r = \dfrac{r_0^{\mathrm{T}} A v_0}{\| A v_0 \|^2}$
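Algorithms 5 and 6 reduce to a single linear solve and an inner product; a combined sketch (illustrative, the helper name is not from the paper) is:

```python
import numpy as np

# Direct computation of r for AOR/POR (Algorithm 5) and ROR/PROR (Algorithm 6);
# sigma = 0 gives AOR or ROR, and reaccelerated=True selects Algorithm 6.
def direct_r(A, b, x0, w, sigma=0.0, reaccelerated=False):
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)                            # Equation (8) convention
    r0 = b - A @ x0
    rhs = (1.0 - w) * r0 if reaccelerated else r0
    v0 = np.linalg.solve((1.0 + sigma) * D - w * L, rhs)
    return (r0 @ (A @ v0)) / np.linalg.norm(A @ v0) ** 2
```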
In Algorithm 5 for AOR if the optimization of r is carried out at all iteration steps it is equivalent to A2-SOR with r = η k .

8. Results and Discussion

For a given linear problem A x = b with size n, the first step is constructing D = { D i j } , U = { U i j } and L = { L i j } from A (refer to Algorithm 7), which are used in all algorithms. The number of operations is n 2 .
Algorithm 7: for D , U and L
1: Give A = { A i j }
2: Do i = 1, …, n
3: Do j = 1, …, n
4: If j = i, $D_{ij} = A_{ij}$; otherwise $D_{ij} = 0$
5: If j < i, $L_{ij} = -A_{ij}$; otherwise $L_{ij} = 0$
6: If j > i, $U_{ij} = -A_{ij}$; otherwise $U_{ij} = 0$
7: Enddo
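In a matrix language the triple loop of Algorithm 7 collapses to three masked copies; a vectorized numpy sketch (the function name is illustrative) reads:

```python
import numpy as np

def split_DUL(A):
    """Vectorized Algorithm 7: A = D - U - L with the convention of Equation (8)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)      # strictly lower part, negated
    U = -np.triu(A, k=1)       # strictly upper part, negated
    return D, U, L
```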
In order to compare the proposed NSIS to the methods in the existing literature, some simple examples are taken and the sizes n of the linear systems are the same as in the literature.

8.1. Example 1

Consider an example of Equation (1) with $A = [a_{ij}]$, $i, j = 1, \ldots, n$; $a_{ij} = 1/(10 j) - 1/20$ for $i > j$, $a_{ij} = 1/[10 (i - j)] - 1/20$ for $i < j$, and $a_{ij} = 1$ for $i = j$ [17]. We take $b_i = 1$, $i = 1, \ldots, n$ and $x_i^0 = 0$, $i = 1, \ldots, n$, and fix n = 20.
With the convergence criterion $\varepsilon = 10^{-12}$, Table 2 compares the number of iterations (NI) for SOR, A2-SOR and A2-AOR, and that obtained in [17] by using Quasi-AOR (QAOR) and Quasi-SOR (QSOR), which are subject to a looser convergence criterion $10^{-6}$. In SOR the optimal value of w is computed from Algorithm 3; through three iterations, w = 0.715496 is obtained; the spectral radius of the iteration matrix is a small value of 0.2898. If an ad hoc value w = 0.5 is used in SOR, it needs 269 iterations.
In A2-SOR, we take w = 0.715496 and α = 1.6 ; w = 0.15 and r = 0.759399 are used in A2-AOR; w = 0.3 and r = 0.9 are used in AOR and QAOR. In AOR we take w = 1.5 , and if the optimal r = 0.641305 is computed from Algorithm 5, then NI reduces to 83.
In Table 3, we demonstrate the usefulness of the A1-SOR method with w = 0.5. A1-SOR with α = 1.98 roughly doubles the convergence speed of the original SOR.
In Table 4 the test is for the AOR method with w = 0.3 and r = 0.9 . The convergence is improved by increasing the value of α .
Comparing Table 2, Table 3 and Table 4, the Algorithm 2 type iterative schemes converge faster than the Algorithm 1 type iterative schemes.
As given in [17], we take r = 0.9 and w = 0.3 in the original QAOR; it needs NI = 421 to satisfy a stringent convergence criterion $\varepsilon = 10^{-15}$, where $\rho_{\mathrm{QAOR}} = 0.5341$. If Algorithm 4 for r is adopted to find the optimal value r = 3.22823, $\rho_{\mathrm{QAOR}} = 0.5341$ is greatly reduced to $\rho_{\mathrm{QAOR}} = 0.2843$. Under the same convergence criterion, NI is reduced to 264.
In Table 5, the NI obtained by Algorithm 1 for the QAOR method are compared for different values of α .
In Table 6, the NI obtained by Algorithm 2 for the QAOR is compared for different values of w. Under a loose convergence criterion $\varepsilon_1 = 10^{-2}$, r is obtained from Algorithm 4 with $r^{(0)} = 1$ and different values of w. The best value of w is w = 0.3 for the QAOR method.
We take w = 1.5 ; r is computed by Algorithm 5 for r of AOR, and r is computed by Algorithm 6 for r of ROR. In Table 7, NI obtained by different methods are compared. α = 2 is used in A1-ROR; w = 0.1 is used in A2-ROR.
In Table 8, NI obtained by POR and A1-POR are compared for different values of σ . r is obtained from Algorithm 5. The best value of σ is σ = 0.1 for the POR method.
In Table 9, NI obtained by PROR and A1-PROR are compared for different values of σ . r is obtained from Algorithm 6.
Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 demonstrate that the proposed NSIS and the optimization technique for determining the optimal value of the parameter can significantly reduce the number of iterations (NI).

8.2. Example 2

In Equation (1), we take [37]
$$ A = \begin{pmatrix} 1 & 0 & \frac{1}{5} & \frac{1}{5} \\ 0 & 1 & -\frac{71}{10} & \frac{113}{10} \\ \frac{16}{5} & \frac{1}{5} & 1 & 0 \\ 2 & \frac{1}{5} & 0 & 1 \end{pmatrix}, \quad b = \begin{pmatrix} 1.4 \\ 5.2 \\ 4.4 \\ 3.2 \end{pmatrix}, \quad x = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}. $$
Under $x_0 = 0$ and $\varepsilon = 10^{-10}$, we take w = 5/3 and r = 14/3 for AOR, w = 5/3 and r = 4.03 for ROR, and α = 0.6 in A1-AOR. For AOR, if r = 1.97713 is computed from Algorithm 5 with w = 5/3, then NI is reduced to 62. The values w = 5/3 and r = 14/3 for AOR were claimed in [30,31] to be the optimal ones; however, the AOR method with w = 5/3 and r = 1.97713 converges faster.
For A1-AOR, if r = 1.97713 is computed from Algorithm 5 with w = 5/3 and α = 1.45 is taken, then NI is reduced to 45, as shown in Table 10. For A1-ROR, if r = 2.965691 is computed from Algorithm 6, and w = 5/3 and α = 1.45 are taken, then NI is reduced to 45.
Table 11 demonstrates the A1-PROR method for different values of α . r = 0.23726 is computed from Algorithm 6 with w = 1 / 6 and σ = 0.9 . The optimal value r = 0.3446 is derived in [31].

8.3. Example 3

In Equation (1), we take
$$ A = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 3 & 4 & 5 & 1 \\ 3 & 4 & 5 & 1 & 2 \\ 4 & 5 & 1 & 2 & 3 \\ 5 & 1 & 2 & 3 & 4 \end{pmatrix}, \quad x = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}. $$
A is a symmetric positive definite matrix. When A is a positive definite matrix,
$$ w_{\mathrm{opt}} = \frac{2}{1 + \sqrt{1 - \rho^2 (I_n - D^{-1} A)}} \tag{53} $$
is the optimal value for the SOR method [38], where ρ is the spectral radius of $I_n - D^{-1} A$.
We find that the spectral radius of $I_n - D^{-1} A$ is 2.5233, which according to Equation (53) means that the SOR method for this problem is divergent. Indeed, for this problem the other algorithms AOR, QAOR, ROR, POR, and PROR are also divergent.
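The divergence check can be reproduced with a few lines (a sketch, using the matrix of this example):

```python
import numpy as np

# Spectral radius of the Jacobi iteration matrix I - D^{-1} A for Example 3,
# the quantity entering Equation (53).
A = np.array([[1, 2, 3, 4, 5],
              [2, 3, 4, 5, 1],
              [3, 4, 5, 1, 2],
              [4, 5, 1, 2, 3],
              [5, 1, 2, 3, 4]], dtype=float)
B = np.eye(5) - np.diag(1.0 / np.diag(A)) @ A
rho = max(abs(np.linalg.eigvals(B)))
print(f"rho(I - D^(-1) A) = {rho:.4f}")   # > 1, in line with the value 2.5233 quoted above
```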
We take $x_0 = 0$. Under $\varepsilon = 10^{-5}$, Table 12 demonstrates the NI of the A2-SOR method for different values of α. w = 0.20078 is computed from Algorithm 3.
Table 13 demonstrates the A2-PROR method for different values of α . r = 0.865643 is computed from Algorithm 6 with w = 0.5 and σ = 0.5 .
There exists an optimal value of α for which NI is minimal. In general the relation between α and NI is not simple.
Table 14 demonstrates the A2-ROR method for different values of w but α = 1.3 fixed. r is computed from Algorithm 6 at each iteration step.
This problem shows that the factor η k in the splitting iterative schemes can stabilize the original splitting iterative schemes, which are unstable for this problem.

8.4. Example 4

We solve a complex Helmholtz equation:
$$ -\Delta u(x, y) + \sigma u(x, y) = f(x, y), \quad 0 < x < 1, \quad 0 < y < 1, \tag{54} $$
with homogeneous boundary conditions, where σ is a complex-valued impedance coefficient.
After five-point discretization, Equation (54) becomes a complex linear system:
$$ (W + i T)(w + i t) = f + i g, \tag{55} $$
where $W, T \in \mathbb{R}^{N \times N}$ are symmetric positive semi-definite with W > 0 (positive definite), and $i^2 = -1$. Upon letting
$$ A = \begin{pmatrix} W & -T \\ T & W \end{pmatrix}, \quad y = \begin{pmatrix} w \\ t \end{pmatrix}, \quad h = \begin{pmatrix} f \\ g \end{pmatrix}, \tag{56} $$
Equation (55) becomes
$$ A x = b, \quad x, b \in \mathbb{R}^n, \quad A \in \mathbb{R}^{n \times n}, \tag{57} $$
where n = 2 N .
Equation (57) is a two-block linear system; in general, special splitting techniques are designed to effectively solve this sort of linear system, for instance, the optimal two-block splitting iterative scheme [8], the generalized successive overrelaxation (GSOR) method [39], the symmetric block triangular splitting (SBTS) iteration method [40], the Hermitian and skew-Hermitian splitting (HSS) iteration method [41], and the modified Hermitian and skew-Hermitian splitting (MHSS) iteration method [42].
Suppose that $K = I_{n_0} \otimes C + C \otimes I_{n_0}$ is the centered difference matrix approximation of the negative Laplacian operator in Equation (54); the mesh size is $h = 1/(n_0 + 1)$, and $C = \mathrm{tridiag}(-1, 2, -1)/h^2$; then we have
$$ \left[ \left( K + \frac{3 - \sqrt{3}}{h} I_N \right) + i \left( K + \frac{3 + \sqrt{3}}{h} I_N \right) \right] u = f + i g, $$
where $N = n_0^2$ and $n = 2 n_0^2$;
$$ W = K + \frac{3 - \sqrt{3}}{h} I_N, \quad T = K + \frac{3 + \sqrt{3}}{h} I_N, \quad f_j = \frac{j}{h (j + 1)^2}, \quad g_j = -\frac{j}{h (j + 1)^2}, \quad j = 1, \ldots, N. $$
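A sketch of the assembly (assuming the $(3 \pm \sqrt{3})/h$ shifts and the right-hand side as written above; a small grid is used for illustration) is:

```python
import numpy as np

# Assemble K, W, T and the two-block real system (56)-(57) for Example 4.
n0 = 8                                   # small grid for illustration
h = 1.0 / (n0 + 1)
C = (2.0 * np.eye(n0) - np.eye(n0, k=1) - np.eye(n0, k=-1)) / h**2
K = np.kron(np.eye(n0), C) + np.kron(C, np.eye(n0))   # negative Laplacian, N = n0^2

N = n0 * n0
W = K + (3.0 - np.sqrt(3.0)) / h * np.eye(N)
T = K + (3.0 + np.sqrt(3.0)) / h * np.eye(N)

j = np.arange(1, N + 1)
f = j / (h * (j + 1) ** 2)
g = -j / (h * (j + 1) ** 2)

A = np.block([[W, -T], [T, W]])          # Equation (56)
b = np.concatenate([f, g])               # right-hand side of Equation (57)
```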
We apply the AOR in Equation (43) with w = 0.6 and r optimized by Algorithm 5. Table 15 compares the optimal value of r, the number of iterations (NI) and the CPU time obtained by AOR, under $\varepsilon = 10^{-4}$ and $x_0 = 0$.
We apply the AOR in Equation (43) with w = 1.2 and r optimized by Algorithm 5 at every iteration step. It is equivalent to A2-SOR. Table 16 compares NI and CPU time under $\varepsilon = 10^{-4}$ and $x_0 = 0$. Compared to Table 15, the A2-SOR saves CPU time and also needs fewer iterations.
Under $n_0 = 32$ (n = 2048), $\| r \| / \| b \| \le \varepsilon = 10^{-6}$, where $\| b \| = \sqrt{\| b_1 \|^2 + \| b_2 \|^2}$, $b_1 = f + g = 0$ and $b_2 = g$, we apply the A1-AOR with α = 1.95, w = 1.6 and r optimized by Algorithm 5 at every iteration step to solve this problem.
We compare the NI obtained by different methods in Table 17, some of which are listed in [40], with HSS from [41] and MHSS from [42]. The GSOR result was obtained from Table 1 in [39]. Owing to its easy formulation and low computational cost, the A1-AOR is good even though it is slower than the other iterative methods in Table 17. Without needing a complicated spectral analysis to determine the optimal value of the parameter, the A1-AOR is a competitive method.

8.5. Example 5

We solve a Hilbert linear problem with the following coefficient matrix:
$$ A_{ij} = \frac{1}{i + j - 1}, \quad i, j = 1, \ldots, n. \tag{58} $$
In a practical application, the problem is to find an (n − 1)th-order polynomial $p(x) = a_0 + a_1 x + \cdots + a_{n-1} x^{n-1}$ that best matches a continuous function f(x) in the interval $x \in [0, 1]$:
$$ \min_{\deg(p) \le n - 1} \int_0^1 | f(x) - p(x) |^2 \, dx, $$
which leads to a problem governed by Equation (1). A is the n × n Hilbert matrix defined by Equation (58), x is composed of the n coefficients $a_0, a_1, \ldots, a_{n-1}$ appearing in p(x), and
$$ b = \left( \int_0^1 f(x) \, dx, \ \int_0^1 x f(x) \, dx, \ \ldots, \ \int_0^1 x^{n-1} f(x) \, dx \right)^{\mathrm{T}} $$
is uniquely determined by the function f ( x ) .
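A sketch constructing the Hilbert system used below (here b is generated from the exact solution $x_j = 1$ adopted later in this example, which corresponds to $f(x) = 1 + x + \cdots + x^{n-1}$):

```python
import numpy as np

# Hilbert matrix of Equation (58) and a right-hand side with known solution x = 1.
n = 20
i, j = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
A = 1.0 / (i + j - 1)
x_exact = np.ones(n)
b = A @ x_exact

print("cond(A) =", np.linalg.cond(A))    # illustrates the severe ill-conditioning
```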
The Hilbert matrix is notoriously ill-conditioned, as can be seen from Table 18. The condition number of the Hilbert matrix grows roughly as $e^{3.5 n}$.
Because the Hilbert matrix is severely ill-conditioned, a loose convergence criterion with $\varepsilon = 10^{-3}$ is considered. We suppose that $x_j = 1$, $j = 1, \ldots, n$ is the exact solution. We first apply the Gauss–Seidel method to solve this problem. Table 19 lists NI, the maximum error (ME) $= \max_{j=1,\ldots,n} | x_j - 1 |$, and the root-mean-square error (RMSE) $= \sqrt{\sum_{j=1}^n (x_j - 1)^2 / n}$ obtained by the Gauss–Seidel method.
Table 20 lists NI, ME and RMSE obtained by the SOR with w determined by Algorithm 3 at the first step. It is apparent that the SOR is superior to the Gauss–Seidel method.

9. Conclusions

In this paper the splitting iterative schemes were reformulated in terms of descent and residual vectors. In the new formulation for the new splitting iterative scheme (NSIS) the extrapolation technique becomes easy to follow. The NSIS can be obtained from the original splitting iterative scheme by either multiplying the descent vector by a parameter α , or multiplying the right-hand side of the governing equation for the descent vector by a parameter α . Different splitting iterative schemes can be unified to have the same iterative form, but they are different in the governing equations for the descent vector.
We proved that the spectral radius of the NSIS is smaller than that of the original scheme if α > 1. In addition to a constant value of α, varying values of α were examined by preserving the orthogonality and maximizing the decrease of the residual vector norm. A varying orthogonalization factor was introduced in the second NSIS to enhance the stability, and the property of absolute convergence was proven.
Main points for the novel contributions are summarized as follows.
  • The splitting iterative schemes were unified in terms of descent and residual vectors.
  • An extrapolation parameter α > 1 in the splitting iterative scheme can improve the convergence speed.
  • The NSISs were developed to maximally decrease the residual and to preserve the orthogonal property.
  • The second method, by adding a stepwise varying factor $\eta_k$, can stabilize the splitting iterative scheme, even when the original one is unstable.
  • The acceleration parameter r can be obtained readily by a maximization technique.
  • The improvement of convergence speed was observed by adopting the proposed NSISs together with the optimization technique for determining the optimal value of the parameter.

Author Contributions

Methodology, C.-S.L.; software, C.-S.L.; validation, C.-S.L.; formal analysis, C.-S.L.; writing—original draft preparation, C.-S.L.; writing—review and editing, B.L.; funding acquisition, C.-S.L. All authors have read and agreed to the published version of the manuscript.

Funding

Taiwan’s National Science and Technology Council project NSTC 113-2221-E-019-043-MY3 granted to the first author is highly appreciated.

Data Availability Statement

All data that support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Björck, A. Numerical Methods for Least Squares Problems; SIAM Publisher: Philadelphia, PA, USA, 1996. [Google Scholar]
  2. Meurant, G.; Tebbens, J.D. Krylov Methods for Non-Symmetric Linear Systems: From Theory to Computations; Springer Series in Computational Mathematics; Springer: Berlin/Heidelberg, Germany, 2020; Volume 57. [Google Scholar]
  3. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
  4. Sogabe, T. Krylov Subspace Methods for Linear Systems: Principles of Algorithms; Springer: Singapore, 2023. [Google Scholar]
  5. van der Vorst, H.A. Iterative Krylov Methods for Large Linear Systems; Cambridge University Press: New York, NY, USA, 2003. [Google Scholar]
  6. Saad, Y.; Schultz, M.H. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7, 856–869. [Google Scholar] [CrossRef]
  7. Liu, C.S.; Chang, C.W.; Kuo, C.L. Re-orthogonalized/affine GMRES and orthogonalized maximal projection algorithm for solving linear systems. Algorithms 2024, 17, 266. [Google Scholar] [CrossRef]
  8. Liu, C.S.; Chang, C.W. Enhance stability of successive over-relaxation method and orthogonalized symmetry successive over-relaxation in a larger range of relaxation parameter. Symmetry 2024, 16, 907. [Google Scholar] [CrossRef]
  9. Dongarra, J.; Sullivan, F. Guest editors’ introduction to the top 10 algorithms. Comput. Sci. Eng. 2000, 2, 22–23. [Google Scholar] [CrossRef]
  10. Bai, Z.Z. Motivations and realizations of Krylov subspace methods for large sparse linear systems. J. Comput. Appl. Math. 2015, 283, 71–78. [Google Scholar] [CrossRef]
  11. Bouyghf, F.; Messaoudi, A.; Sadok, H. A unified approach to Krylov subspace methods for solving linear systems. Numer. Algor. 2024, 96, 305–332. [Google Scholar] [CrossRef]
  12. Varga, R.S. Matrix Iterative Analysis; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  13. Young, D. Iterative methods for solving partial difference equations of elliptic type. Trans. Am. Math. Soc. 1954, 76, 92–111. [Google Scholar] [CrossRef]
  14. Hadjidimos, A. Accelerated overrelaxation method. Math. Comput. 1978, 32, 149–157. [Google Scholar] [CrossRef]
  15. Li, Y.; Dai, P. Generalized AOR methods for linear complementarity problem. Appl. Math. Comput. 2007, 188, 7–18. [Google Scholar] [CrossRef]
  16. Zhang, C.H.; Wang, X.; Tang, X.B. Generalized AOR method for solving a class of generalized saddle point problems. J. Comput. Appl. Math. 2019, 350, 69–79. [Google Scholar] [CrossRef]
  17. Wu, S.L.; Liu, Y.J. A new version of the accelerated overrelaxation iterative method. J. Appl. Math. 2014, 2014, 725360. [Google Scholar] [CrossRef]
  18. Cvetkovic, L.; Kostic, V. A note on the convergence of the AOR method. Appl. Math. Comput. 2007, 194, 394–399. [Google Scholar] [CrossRef]
  19. Gao, Z.X.; Huang, T.Z. Convergence of AOR method. Appl. Math. Comput. 2006, 176, 134–140. [Google Scholar] [CrossRef]
  20. Huang, Z.G.; Xu, Z.; Lu, Q.; Cui, J.J. Some new preconditioned generalized AOR methods for generalized least-squares problems. Appl. Math. Comput. 2015, 269, 87–104. [Google Scholar]
  21. Yun, J.H. Comparison results of the preconditioned AOR methods for L-matrices. Appl. Math. Comput. 2011, 218, 3399–3413. [Google Scholar] [CrossRef]
  22. Beik, P.A.F.; Shams, N.N. On the modified iterative methods for M-matrix linear systems. Bull. Iranian Math. Soc. 2015, 41, 1519–1535. [Google Scholar]
  23. Dehghan, M.; Hajarian, M. Modified AOR iterative methods to solve linear systems. J. Vib. Control 2014, 20, 661–669. [Google Scholar] [CrossRef]
  24. Huang, T.Z.; Cheng, G.H.; Evans, D.J.; Cheng, X.Y. AOR type iterations for solving preconditioned linear systems. Int. J. Comput. Math. 2005, 82, 969–976. [Google Scholar] [CrossRef]
  25. Moghadam, M.M.; Beik, F.P.A. Comparison results on the preconditioned mixed-type splitting iterative method for M-matrix linear systems. Bull. Iranian Math. Soc. 2012, 38, 349–367. [Google Scholar]
  26. Wang, L.; Song, Y. Preconditioned AOR iterative method for M-matrices. J. Comput. Appl. Math. 2009, 226, 114–124. [Google Scholar] [CrossRef]
  27. Wu, M.; Wang, L.; Song, Y. Preconditioned AOR iterative method for linear systems. Appl. Numer. Math. 2007, 57, 672–685. [Google Scholar] [CrossRef]
  28. Liu, C.S.; El-Zahar, E.R.; Chang, C.W. An optimal combination of the splitting-linearizing method to SSOR and SAOR for solving the system of nonlinear equations. Mathematics 2024, 12, 1808. [Google Scholar] [CrossRef]
  29. Liu, C.S.; Chang, C.W. The SOR and AOR methods with stepwise optimized values of parameters for the iterative solutions of linear systems. Contemp. Math. 2024, 5, 4013–4028. [Google Scholar] [CrossRef]
  30. Vatti, V.B.K.; Rao, G.C.; Pai, S.S. Reaccelerated over relaxation (ROR) method. Bull. Int. Math. Virtual Inst. 2020, 10, 315–324. [Google Scholar]
  31. Isah, I.O.; Ndanusa, M.D.; Shehu, M.D.; Yusuf, A. Parametric reaccelerated overrelaxation (PROR) method for numerical solution of linear systems. Sci. World J. 2022, 17, 59–64. [Google Scholar]
  32. Hadjidimos, A.; Yeyios, A. The principle of extrapolation in connection with the accelerated overrelaxation method. Linear Alg. Appl. 1980, 30, 115–128. [Google Scholar] [CrossRef]
  33. Albrecht, P.; Klein, M.P. Extrapolated iterative methods for linear systems. SIAM J. Numer. Anal. 1984, 21, 192–201. [Google Scholar] [CrossRef]
  34. Hadjidimos, A. A survey of the iterative methods for the solution of linear systems by extrapolation, relaxation and other techniques. J. Comput. Appl. Math. 1987, 20, 37–51. [Google Scholar] [CrossRef]
  35. Hadjidimos, A. On the equivalence of extrapolation and Richardson’s iteration and its applications. Linear Alg. Appl. 2005, 402, 165–192. [Google Scholar] [CrossRef]
  36. Vatti, V.B.K.; Rao, G.C.; Pai, S.S. Parametric overrelaxation (PAOR) method. In Numerical Optimization in Engineering and Sciences; Advances in Intelligent Systems and Computing; Springer: Singapore, 2020; Volume 979, pp. 283–288. [Google Scholar]
  37. Avdelas, G.; Hadjidimos, A. Optimum accelerated overrelaxation method in a special case. Math. Comput. 1981, 36, 183–187. [Google Scholar] [CrossRef]
  38. Quarteroni, A.; Sacco, R.; Saleri, F. Numerical Mathematics; Springer Science: New York, NY, USA, 2000. [Google Scholar]
  39. Salkuyeh, D.L.; Hezari, D.; Edalatpour, V. Generalized successive overrelaxation iterative method for a class of complex symmetric linear system of equations. Int. J. Comput. Math. 2015, 92, 802–815. [Google Scholar] [CrossRef]
  40. Li, X.A.; Zhang, W.H.; Wu, J.Y. On symmetric block triangular splitting iteration method for a class of complex symmetric system of linear equations. Appl. Math. Lett. 2018, 79, 131–137. [Google Scholar] [CrossRef]
  41. Bai, Z.Z.; Benzi, M.; Chen, F. Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 2010, 87, 93–111. [Google Scholar] [CrossRef]
  42. Bai, Z.Z.; Benzi, M.; Chen, F. On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algor. 2011, 56, 297–317. [Google Scholar] [CrossRef]
Table 1. Comparing the governing equations for $u_k$ for different iterative schemes; $P := D - w L$ and $Q := (1 + \sigma) D - w L$; QAOR is a special case of POR with σ = r.
SOR | AOR | ROR | POR | PROR
$P u_k = w r_k$ | $P u_k = r r_k$ | $P u_k = r (1 - w) r_k$ | $Q u_k = r r_k$ | $Q u_k = r (1 - w) r_k$
Table 2. Example 1, NI for QAOR, QSOR, AOR, SOR, A2-SOR and A2-AOR; op. r means the optimal value of r, and op. w means the optimal value of w.
Method | QAOR | QSOR | AOR | AOR (op. r) | SOR (op. w) | A2-SOR | A2-AOR
NI | 155 | 141 | 161 | 83 | 167 | 72 | 54
Table 3. Example 1, NI and spectral radius obtained by A1-SOR for different values of α.
α | 1 | 1.2 | 1.5 | 1.8 | 1.98
ρ | 0.504 | 0.407 | 0.267 | 0.154 | 0.147
NI | 269 | 223 | 178 | 147 | 133
Table 4. Example 1, NI obtained by A1-AOR for different values of α.
α | 1 | 1.2 | 1.3 | 1.4 | 1.5 | 1.6 | 1.7
NI | 161 | 133 | 121 | 111 | 103 | 96 | 89
Table 5. Example 1, NI and spectral radius ρ obtained by A1-QAOR for different values of α.
α | 1 | 1.2 | 1.5 | 1.8 | 2
ρ | 0.2843 | 0.2070 | 0.2775 | 0.4891 | 0.6495
NI | 264 | 218 | 171 | 140 | 124
Table 6. Example 1, the value of r and NI obtained by A2-QAOR for different values of w.
w | 0.1 | 0.3 | 0.5 | 0.8 | 1
r | 3.28909 | 3.22941 | 3.16874 | 3.06835 | 3.00518
NI | 91 | 72 | 88 | 94 | 111
Table 7. Example 1, the value of r and NI obtained by different methods of AOR and ROR.
Method | AOR | ROR | A1-ROR | A2-ROR
r | 0.64131 | −1.28261 | −1.28261 | 0.84773
NI | 102 | 102 | 40 | 74
Table 8. Example 1, NI obtained by POR and A1-POR for different values of σ; w = 1.9 fixed, and α = 2 for A1-POR.
σ | 0.1 | 0.5 | 0.8 | 1.2
POR | 117 | 152 | 178 | 198
A1-POR | 58 | 67 | 80 | 90
Table 9. Example 1, NI for PROR and A1-PROR with different values of σ; w = 1.5 fixed, and α = 2 for A1-PROR.
σ | 0.1 | 0.3 | 0.5 | 0.8
PROR | 137 | 167 | 185 | 200
A1-PROR | 59 | 75 | 83 | 93
Table 10. Example 2, NI for AOR and ROR.
Method | AOR | AOR (op. r) | A1-AOR | A1-AOR (op. r) | ROR | A1-ROR (op. r)
NI | 96 | 62 | 47 | 45 | 46 | 45
Table 11. Example 2, comparing NI for A1-PROR with different values of α.
α | 1 | 1.2 | 1.3 | 1.4 | 1.5 | 1.6
NI | 62 | 51 | 49 | 48 | 46 | 48
Table 12. Example 3, NI for A2-SOR with different values of α.
α | 1.3 | 1.4 | 1.45 | 1.5 | 1.6
NI | 113 | 128 | 109 | 140 | 165
Table 13. Example 3, NI for A2-PROR with different values of α.
α | 1.25 | 1.3 | 1.4 | 1.45 | 1.5 | 1.6
NI | 114 | 77 | 100 | 85 | 98 | 142
Table 14. Example 3, NI for A2-ROR with different values of w.
w | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8
α | 1.3 | 1.4 | 1.3 | 1.5 | 1.5 | 1.6
NI | 97 | 88 | 81 | 74 | 58 | 76
Table 15. Example 4, r, NI and CPU obtained by AOR with different n.
n | 200 | 450 | 800 | 1250 | 1800 | 2450
r | 0.737113 | 0.740454 | 0.738657 | 0.736292 | 0.734101 | 0.732191
NI | 64 | 93 | 98 | 133 | 148 | 158
CPU | 0.35 | 0.67 | 2.44 | 7.05 | 15.86 | 33.89
Table 16. Example 4, NI and CPU obtained by A2-SOR with different n.
n | 200 | 450 | 800 | 1250 | 1800 | 2450
NI | 40 | 57 | 73 | 86 | 92 | 100
CPU | 0.34 | 0.67 | 1.59 | 4.41 | 10.12 | 20.85
Table 17. Example 4, NI obtained by different methods.
Method | HSS | MHSS | SBTS | GSOR | A1-AOR
NI | 65 | 54 | 31 | 22 | 106
Table 18. The condition numbers of the Hilbert matrix.
n | cond(A) | n | cond(A)
3 | 5.24 × 10^2 | 7 | 4.57 × 10^8
4 | 1.55 × 10^4 | 8 | 1.53 × 10^10
5 | 4.77 × 10^5 | 9 | 4.93 × 10^11
6 | 1.50 × 10^7 | 10 | 1.60 × 10^13
Table 19. Example 5, NI, ME and RMSE obtained by the Gauss–Seidel method with different n.
n | 20 | 40 | 60 | 80 | 100
NI | 328 | 514 | 448 | 725 | 757
ME | 1.08 × 10^{−1} | 9.79 × 10^{−2} | 2.68 × 10^{−1} | 1.38 × 10^{−1} | 1.42 × 10^{−1}
RMSE | 5.92 × 10^{−2} | 5.10 × 10^{−2} | 6.83 × 10^{−2} | 5.00 × 10^{−2} | 4.89 × 10^{−2}
Table 20. Example 5, NI, ME and RMSE obtained by the SOR with different n.
n | 20 | 40 | 60 | 80 | 100
w | 0.156725 | 0.111880 | 0.082611 | 0.073455 | 0.063021
NI | 76 | 248 | 299 | 298 | 294
ME | 1.31 × 10^{−1} | 1.11 × 10^{−1} | 9.59 × 10^{−2} | 9.78 × 10^{−2} | 1.00 × 10^{−1}
RMSE | 5.91 × 10^{−2} | 4.43 × 10^{−2} | 3.42 × 10^{−2} | 3.57 × 10^{−2} | 3.79 × 10^{−2}
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
