Abstract
For the splitting iterative scheme used to solve a system of linear equations, an equivalent form in terms of descent and residual vectors is formulated. We propose an extrapolation technique based on the new formulation, such that a new splitting iterative scheme (NSIS) can be generated from the original one simply by inserting an acceleration parameter preceding the descent vector. The spectral radius of the NSIS is proven to be smaller than that of the original scheme, so the NSIS converges faster. The orthogonality of consecutive residual vectors is built into the second NSIS, from which a stepwise varying orthogonalization factor can be derived explicitly. Multiplying the descent vector by this factor, the second NSIS is proven to be absolutely convergent. The modification is based on the maximal reduction of the residual vector norm. Two-parameter and three-parameter NSIS are investigated, wherein the optimal value of one parameter is obtained by a maximization technique. The splitting iterative schemes are unified to share the same iterative form, but are endowed with different governing equations for the descent vector. Several examples are examined to exhibit the performance of the proposed extrapolation techniques used in the NSIS.
1. Introduction
In this paper, we improve some splitting iteration methods for solving a system of linear equations:
        \mathbf{A}\mathbf{x} = \mathbf{b}.   (1)
For solving Equation (1), many well-developed methods are discussed in the textbooks [,,,,]. In general, Krylov subspace methods are employed for solving large-scale linear problems. An m-dimensional Krylov subspace is considered:
        \mathcal{K}_m := \operatorname{span}\{\mathbf{r}_0, \mathbf{A}\mathbf{r}_0, \ldots, \mathbf{A}^{m-1}\mathbf{r}_0\}.   (2)
In the generalized minimal residual (GMRES) method [],  is perpendicular to  defined by Equation (2), i.e.,
      
        
      
      
      
      
    
The best descent vector  is sought to minimize the residual []:
      
        
      
      
      
      
    
      where
      
        
      
      
      
      
    
      is the residual vector, whose Euclidean norm is denoted by . Equation (3) is one of the Petrov–Galerkin methods used to search for . The perpendicularity is a vital guide for developing effective iterative algorithms, and it was employed in [,] to enhance the stability of GMRES and SOR.
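As a minimal illustration of the minimal-residual idea (not taken from the paper), the following sketch solves a small, made-up nonsymmetric system with SciPy's GMRES and reports the relative residual of the returned iterate:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# A small nonsymmetric test system (made-up data for illustration only).
n = 50
rng = np.random.default_rng(0)
A = np.diag(np.full(n, 4.0)) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

# GMRES builds an m-dimensional Krylov subspace and returns the iterate
# whose residual norm ||b - A x_m|| is minimal over that subspace.
x, info = gmres(A, b, atol=1e-12)

print("GMRES flag (0 = converged):", info)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```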
Dongarra and Sullivan [] listed the Krylov subspace method among the top ten algorithms. Bai [] demonstrated the motivations and realizations of Krylov subspace methods for large sparse linear systems. Recently, Bouyghf et al. [] provided a unified approach to the Krylov subspace methods for solving linear systems.
Suppose that  is decomposed by
        \mathbf{A} = \mathbf{M} - \mathbf{N},   (4)
      where  is a nonsingular matrix. The  splitting iterative scheme for Equation (1) is []
        \mathbf{M}\mathbf{x}^{(k+1)} = \mathbf{N}\mathbf{x}^{(k)} + \mathbf{b}, \quad k = 0, 1, \ldots,   (5)
      where  is the kth step value of . The convergence of the iteration in Equation (5) is guaranteed if
        \rho(\mathbf{M}^{-1}\mathbf{N}) < 1,   (6)
      where  is the iteration matrix, and  is the spectral radius of .
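As a concrete illustration, the following sketch (my own, using the standard splitting notation A = M − N assumed in the reconstructed formulas above, with the decomposition A = D − L − U) runs the iteration M x^{(k+1)} = N x^{(k)} + b for the Jacobi choice M = D and the Gauss–Seidel choice M = D − L on a made-up test matrix, and reports the spectral radius of the iteration matrix together with the observed number of iterations:

```python
import numpy as np

def splitting_iteration(A, b, M, x0=None, tol=1e-10, maxit=1000):
    """Run M x^{(k+1)} = N x^{(k)} + b with N = M - A; stop when ||b - A x|| <= tol."""
    N = M - A
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for k in range(maxit):
        x = np.linalg.solve(M, N @ x + b)
        if np.linalg.norm(b - A @ x) <= tol:
            return x, k + 1
    return x, maxit

# A small diagonally dominant test matrix (made up for illustration).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(A))   # diagonal part
L = -np.tril(A, -1)       # sign convention such that A = D - L - U
U = -np.triu(A, 1)

for name, M in [("Jacobi (M = D)", D), ("Gauss-Seidel (M = D - L)", D - L)]:
    T = np.linalg.solve(M, M - A)              # iteration matrix M^{-1} N
    rho = max(abs(np.linalg.eigvals(T)))
    x, it = splitting_iteration(A, b, M)
    print(f"{name}: rho = {rho:.4f}, iterations = {it}, "
          f"residual = {np.linalg.norm(b - A @ x):.2e}")
```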
The Gaussian elimination method is a classical direct method, but it is not suitable for the large-scale problems encountered in scientific and engineering practice. The Gauss–Seidel method is a semi-exact method that finds the exact values of the variables at each iteration step by a forward-substitution technique. Its main advantage is that no extra parameter is needed; however, it converges slowly.
Many methods are special cases of the  splitting iterative scheme, namely the Jacobi method, the Gauss–Seidel method, the successive overrelaxation (SOR) method, and the accelerated overrelaxation (AOR) method. The SOR method was developed in []. Among the multi-parameter splitting methods, the two-parameter AOR method []:
        (\mathbf{D} - r\mathbf{L})\mathbf{x}^{(k+1)} = [(1-w)\mathbf{D} + (w-r)\mathbf{L} + w\mathbf{U}]\mathbf{x}^{(k)} + w\mathbf{b}   (7)
      is a generalization of SOR, which is equipped with a relaxation parameter w and an acceleration parameter r for controlling the convergence behavior.
If r = w, the SOR iterative method
        (\mathbf{D} - w\mathbf{L})\mathbf{x}^{(k+1)} = [(1-w)\mathbf{D} + w\mathbf{U}]\mathbf{x}^{(k)} + w\mathbf{b}   (9)
      is recovered from Equation (7).
Generalizations of the AOR method to different linear problems can be found in [,,]; the convergence behaviors were analyzed in [,]. To improve the convergence speed, preconditioned AOR methods were developed. Different kinds of preconditioners have been examined to improve the convergence speed of the iterative methods, and there is growing interest in studying preconditioners that ameliorate the convergence of iterative methods [,,,,,,,].
Liu and his co-workers [,] reformulated SOR and AOR in terms of descent and residual vectors. The reaccelerated version of AOR was discussed in [], and the generalization to the three-parameter method of AOR was carried out in [].
Given an initial guess  for an iterative scheme, Equation (1) yields a residual vector . Knowing , one attempts to search for a descent vector , which corrects the solution to  so as to satisfy . The residual and descent vectors are two fundamental concepts in the formulation of iterative schemes. We will point out that the purpose of the extrapolation techniques [,,,] is essentially to design a feasible way to search for a better descent vector, such that the residuals can be reduced quickly.
In this paper we unify the splitting iteration methods and simplify the extrapolation techniques through the formulation in terms of descent and residual vectors. The new ideas in this formulation for solving the linear system are developed by improving the existing splitting iteration methods, and they involve the maximization technique and the spectral analysis of the iteration matrix. The absolute convergence of the new splitting iterative scheme (NSIS) is proven.
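To fix ideas before Section 2, here is a minimal sketch of the descent/residual form of the splitting iteration assumed above: subtracting M x^{(k)} from both sides of M x^{(k+1)} = N x^{(k)} + b gives M u_k = r_k with u_k = x^{(k+1)} − x^{(k)} and r_k = b − A x^{(k)}, so the scheme can be driven entirely by the residual. The particular splitting (Gauss–Seidel) and the test data below are my own choices for illustration:

```python
import numpy as np

def splitting_residual_form(A, b, M, x0=None, tol=1e-10, maxit=1000):
    """Equivalent residual form: solve M u_k = r_k, then x_{k+1} = x_k + u_k."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    for k in range(maxit):
        u = np.linalg.solve(M, r)   # descent vector obtained from the residual
        x = x + u                   # correct the solution
        r = b - A @ x               # new residual
        if np.linalg.norm(r) <= tol:
            return x, k + 1
    return x, maxit

# Made-up test matrix; Gauss-Seidel splitting M = D - L (lower triangle of A).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, it = splitting_residual_form(A, b, np.tril(A))
print("iterations:", it, " residual:", np.linalg.norm(b - A @ x))
```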
2. A Reduction of Spectral Radius
Equation (12) shows that the best descent vector is . However, if we can find , the solution of Equation (1) is already given by  exactly. The main difficulty is that finding  is absolutely not an easy task.
Proof.  
We end the proof of Lemma 1 by setting  in view of Equation (10).    □
We modify Equation (13) and prove the following result.
Theorem 1.  
Under the following condition:
        
      
        
      
      
      
      
    
      where  is the iteration matrix for Equation (5), the spectral radius for Equation (14), denoted by , satisfies.
      
        
      
      
      
      
    
Proof.  
When the first one in Equation (14) is multiplied by , and Equations (4) and (11) and the second one in Equation (14) are taken into account, we obtain
      
        
      
      
      
      
    
The associated iteration matrix is
        
      
        
      
      
      
      
    
It follows that
        
      
        
      
      
      
      
    
To meet the requirement of
      
        
      
      
      
      
     follows readily.    □
Equation (14) can also be derived from Equation (5) after multiplying by  and using a substitution technique: 
      
        
      
      
      
      
    
      
        
      
      
      
      
    
It results in
      
        
      
      
      
      
    
This is the usual extrapolation method used in the iterative scheme with  as an extrapolation parameter [].
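Numerically, the effect of the extrapolation on the spectral radius can be checked directly. The sketch below (my own illustration; it assumes the extrapolated iteration matrix is (1 − α)I + αT for the original iteration matrix T, the usual form mentioned above) evaluates ρ for several α on a made-up Jacobi example; a well-chosen α lowers the spectral radius, while a poor choice can destroy convergence:

```python
import numpy as np

def spectral_radius(T):
    return max(abs(np.linalg.eigvals(T)))

# Made-up test matrix; Jacobi splitting M = D, original iteration matrix T = I - D^{-1} A.
A = np.array([[3.0, -1.0, 1.0],
              [-1.0, 3.0, -1.0],
              [1.0, -1.0, 3.0]])
D = np.diag(np.diag(A))
T = np.eye(3) - np.linalg.solve(D, A)
print("rho(T) =", round(spectral_radius(T), 4))          # original scheme (alpha = 1)

for alpha in (0.6, 0.86, 1.0, 1.2):
    T_alpha = (1.0 - alpha) * np.eye(3) + alpha * T      # extrapolated iteration matrix
    print(f"alpha = {alpha}: rho = {spectral_radius(T_alpha):.4f}")
```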
The extrapolation technique consists of a multiplication in Equation (19), and an extrapolation in Equation (20), which will be named an extrapolation technique for the  splitting iterative scheme (5) with the parameter , or simply, an -extrapolation.
We now realize the extrapolation technique for the SOR method in Equation (9) as follows:
      
        
      
      
      
      
    
Setting , we can derive Equation (7). Therefore, the AOR in Equation (7) is an -extrapolation of the SOR in Equation (9).
Suppose that the spectral radius of SOR is . According to Equation (15), when  in the AOR method are taken as
      
        
      
      
      
      
    
AOR converges faster than SOR.
Remark 1.  
Compared with Equation (13), α only appears in the first one before  in Equation (14), and the second ones are unchanged. The α-extrapolation in Equation (14), within the formulation in terms of descent and residual vectors, is simpler than the extrapolation technique applied to the original iterative equation, like that in Equation (21) for the general  method and that in Equation (22) for the SOR method to obtain the AOR method. AOR is the -extrapolation of SOR.
By absorbing α into  and renaming it to  again, Equation (14) can also be written as
      
        
      
      
      
      
    
3. On a Modification of the Splitting Iterative Scheme and Determining ηk
Inspired by Equation (3), the iterative scheme in Equation (13) is orthogonal if its descent and residual vectors satisfy
      
        
      
      
      
      
    
Usually this is not true. For the original splitting iterative scheme in Equation (13), we have
      
        
      
      
      
      
    
      which results in
      
        
      
      
      
      
    
We define a quantity  to measure the degree of the non-orthogonality of the iterative scheme:
      
        
      
      
      
      
    
Now the iterative scheme in Equation (13) is orthogonal if and only if .
Rather than the constant value of  in Equation (14), in this section we derive a more powerful version of the splitting iterative scheme (5) with a stepwise varying factor  in Equation (27).
Theorem 2.  
If a factor  is inserted preceding  in the first one of Equation (13), then a new splitting iteration method for solving Equation (1) reads as
      
        
      
      
      
      
    
The optimal value for the orthogonalization factor  at each step is given by Equation (27).
Proof.  
Let
      
        
      
      
      
      
    
        we have
      
        
      
      
      
      
    
The following maximization problem is established to maximally decrease the residual vector norm at each step by
      
        
      
      
      
      
    
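Equation (27) itself is not reproduced here; as a stand-in that realizes the same idea, the sketch below chooses the stepwise factor as the one-dimensional minimizer of ||r_k − η A u_k||, namely η_k = r_k · (A u_k)/||A u_k||², which makes the new residual orthogonal to A u_k and can only decrease the residual norm. This formula is my own assumption for illustration and is not necessarily the paper's Equation (27):

```python
import numpy as np

def nsis_stepwise(A, b, M, x0=None, tol=1e-10, maxit=1000):
    """Splitting iteration with a stepwise factor eta_k chosen to minimize
    ||r_k - eta * A u_k|| (a stand-in for the paper's orthogonalization factor)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    for k in range(maxit):
        u = np.linalg.solve(M, r)          # descent vector: M u_k = r_k
        Au = A @ u
        eta = (r @ Au) / (Au @ Au)         # minimizer of ||r - eta*Au|| over eta
        x = x + eta * u
        r = r - eta * Au                   # new residual, orthogonal to Au
        if np.linalg.norm(r) <= tol:
            return x, k + 1
    return x, maxit

# Made-up test problem; Gauss-Seidel-type splitting M = lower triangle of A.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, it = nsis_stepwise(A, b, np.tril(A))
print("iterations:", it, " residual:", np.linalg.norm(b - A @ x))
```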
Remark 2.  
Theorem 3.  
The splitting iterative scheme in Equation (28) is absolutely convergent, i.e.,
      
        
      
      
      
      
    
Proof.  
Theorem 4.  
Proof.  
Multiplying Equation (34) by  and using Equation (11) yields
      
        
      
      
      
      
    
        which, upon taking the inner product with , renders
      
        
      
      
      
      
    
The proof of Equation (33) for Theorem 4 is complete.    □
Equation (26) indicates that the consecutive residual vectors of the original splitting iterative scheme are not perpendicular to . Hence, the absolute convergence is not guaranteed for the original splitting iterative scheme.
The property of
      
        
      
      
      
      
    
      is well known in the literature. The orthogonality property guarantees that the residual vector norm decreases at each step. Refer to [] for a detailed development of the re-orthogonalized GMRES technique for solving linear systems.
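For completeness, the reason the orthogonality forces the norm to decrease is a one-line Pythagorean argument (written under the assumption that the new residual is orthogonal to the correction r_k − r_{k+1}):

```latex
% If r_{k+1} \perp (r_k - r_{k+1}), then by the Pythagorean theorem
\|r_k\|^2 = \|r_{k+1} + (r_k - r_{k+1})\|^2
          = \|r_{k+1}\|^2 + \|r_k - r_{k+1}\|^2
          \;\ge\; \|r_{k+1}\|^2 ,
% with equality only when the residual no longer changes.
```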
4. New Form of QAOR and Optimal Value of r
Wu and Liu [] proposed a quasi accelerated overrelaxation method (QAOR) by taking
      
        
      
      
      
      
    
Dividing by  and renaming the parameters as
      
        
      
      
      
      
    
Equation (35) can be recast to
      
        
      
      
      
      
    
It is just the AOR method in Equation (7). However, Equation (35) has the advantage that the parameter r is permitted to vary over a large range.
4.1. A New Form of QAOR
The original form of quasi accelerated overrelaxation method (QAOR) in Equation (35) is not easy to handle. For the purpose of comparison we derive a new form of QAOR in terms of descent and residual vectors as follows.
Theorem 5.  
A new iterative form of the QAOR method is
      
        
      
      
      
      
    
4.2. Determining r in QAOR
Theorem 6.  
Given an initial value ,  is known. For the QAOR method in Equation (36) if w is given, then
      
        
      
      
      
      
    is the optimal value of r, where  is obtained from a forward solution of
      
        
      
      
      
      
    
Proof.  
Upon letting
      
        
      
      
      
      
    
Equation (38) is a direct result of the second one in Equation (36) with . The first one in Equation (36) with  is
      
        
      
      
      
      
    
Let
      
        
      
      
      
      
    
          be a decreasing function of the residual vector norm at the first step; we have
      
        
      
      
      
      
    
We encounter a maximization problem to determine r via
      
        
      
      
      
      
    
Adopting , Equation (37) is derived.    □
4.3. Accelerated QAOR
According to Theorem 1, we can accelerate the convergence speed of QAOR by utilizing the following corollary.
Corollary 1.  
A new iterative form of the accelerated QAOR method is
      
        
      
      
      
      
    where , and r is determined by Equation (37).
5. Reaccelerated over Relaxation Method
Vatti et al. [] proposed a reaccelerated over relaxation (ROR) method as follows:
      
        
      
      
      
      
    
Theorem 7.  
A new iterative form of the ROR method is
      
        
      
      
      
      
    
Proof.  
Upon using , Equation (42) is proven.    □
Theorem 8.  
The AOR method can be recast to
      
        
      
      
      
      
    
Proof.  
Theorem 9.  
Given an initial value ,  is known. For the ROR method in Equation (42) if w is given, then
      
        
      
      
      
      
    is the optimal value of r, where  is obtained from a forward solution of
      
        
      
      
      
      
    
Proof.  
Letting 
      
        
      
      
      
      
    
Equation (47) is a direct result of the second one in Equation (42) with . The first one in Equation (42) with  is
      
        
      
      
      
      
    
The remaining steps are similar to those in Theorem 6.    □
Equation (46) is a simple equation for r, because  is not a function of r. We can compute the optimal value of r by Equation (46) directly.
Remark 3.  
We have mentioned in Remark 1 that AOR is the -extrapolation of SOR. It is interesting that ROR is the -extrapolation of AOR. Consequently, ROR is the secondary extrapolation of SOR by two parameters  and . An advantage of ROR is that the optimal value of r can be obtained from Equation (46) directly without any iteration.
6. Three-Parameter Splitting Iterative Scheme
A generalization of Equation (35) was proposed in [,] as follows:
      
        
      
      
      
      
    
      which is named a parametric-AOR (POR), accompanied by an extra parameter . The QAOR method in Equation (35) is a special case of Equation (48) with .
Theorem 10.  
A new iterative form of the POR method is
      
        
      
      
      
      
    
Proof.  
The proof for Equation (49) is complete, upon moving  to the left-hand side, and setting .    □
Isah et al. [] further generalized Equation (48) to
      
        
      
      
      
      
    
      which is named a parametric reaccelerated overrelaxation (PROR) method. Like that in Theorem 9, we can prove the following result.
Theorem 11.  
Proof.  
As was done in Theorem 10, Equation (50) can be expressed as
      
        
      
      
      
      
    
If we take
      
        
      
      
      
      
    
When  in Equation (52) is replaced by , the PROR method can also be written as
      
        
      
      
      
      
    
Remark 4.  
QAOR is a special case of POR with . The PROR method is the -extrapolation of POR.
7. Algorithms
The iterative schemes can be adjusted to have the same iterative form:
      
        
      
      
      
      
    
      but the governing equations for  are different. In Table 1, we summarize the iterative schemes discussed in this paper.
       
    
    Table 1.
    Comparing the governing equations for  for different iterative schemes;  and ; QAOR is a special case of POR with .
  
According to the different methods of the splitting iterative scheme we have the following iterative algorithms.
Algorithms 1 and 2 can be applied to the iterative schemes in Table 1. To save notation we abbreviate them as A1-SOR, A2-SOR, and so on; these denote Algorithms 1 and 2 applied to SOR, and so on. In Algorithm 2,  is added to further accelerate the convergence speed, and it needs to satisfy the constraint (15). For most cases we take , unless otherwise specified.
We can reduce the number of unknown parameters by using the maximization techniques, as in Theorem 6 for the QAOR method with the optimal value of r.
      
| Algorithm 1: -acceleration | 
1: Give , , , , and  2: Do , until  3:  4: Solve  5:   | 
| Algorithm 2: -acceleration | 
1: Give , , , , and  2: Do , until  3:  4: Solve  5:  6:   | 
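A compact sketch of this unified viewpoint is given below (my own reading of Algorithms 1 and 2, since their detailed steps are not reproduced here): the driver applies x_{k+1} = x_k + η_k u_k with a scheme-dependent routine for the descent vector u_k, using either a fixed acceleration factor or the stepwise residual-minimizing factor of Section 3; the SOR-type governing equation (D − wL)u_k = w r_k is my assumption for the SOR row of Table 1:

```python
import numpy as np

def unified_nsis(A, b, solve_u, gamma=1.0, stepwise=False, tol=1e-10, maxit=1000):
    """Unified driver: x_{k+1} = x_k + eta_k * u_k, where u_k = solve_u(r_k) is the
    scheme-dependent descent vector (cf. Table 1).  eta_k is a fixed factor gamma,
    or, if stepwise=True, the residual-minimizing factor (my stand-in for the
    stepwise optimization used in the Algorithm 2 type schemes)."""
    x = np.zeros(len(b))
    r = b - A @ x
    for k in range(maxit):
        u = solve_u(r)
        Au = A @ u
        eta = (r @ Au) / (Au @ Au) if stepwise else gamma
        x = x + eta * u
        r = b - A @ x
        if np.linalg.norm(r) <= tol:
            return x, k + 1
    return x, maxit

# Example: SOR-type descent vector solving (D - w L) u = w r (assumed form),
# for a made-up test matrix.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
D, Lm = np.diag(np.diag(A)), -np.tril(A, -1)
w = 1.1
solve_u = lambda r: np.linalg.solve(D - w * Lm, w * r)

for label, kw in [("fixed eta", dict(gamma=1.0)), ("stepwise eta", dict(stepwise=True))]:
    x, it = unified_nsis(A, b, solve_u, **kw)
    print(label, "-> iterations:", it)
```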
In Algorithm 3 for w of SOR and in Algorithm 4 for r of QAOR, we need some iterations with a loose convergence criterion, say ; Algorithm 5 for r of AOR and POR and Algorithm 6 for r of ROR do not need any iteration.
      
| Algorithm 3: for w of SOR | 
1: Give ,  and  2:  3: Do  4: Solve  5:  6: Enddo, if   | 
| Algorithm 4: for r of QAOR | 
1: Give , w,  and  2:  3: Do  4: Solve  5:  6: Enddo, if   | 
| Algorithm 5: for r of AOR and POR | 
1: Give , w,  ( for AOR) 2:  3: Solve  4:   | 
| Algorithm 6: for r of ROR and PROR | 
1: Give , w,  ( for ROR) 2:  3: Solve  4:   | 
In Algorithm 5 for AOR, if the optimization of r is carried out at all iteration steps, it is equivalent to A2-SOR with .
8. Results and Discussion
For a given linear problem  with size n, the first step is constructing ,  and  from  (refer to Algorithm 7), which are used in all algorithms. The number of operations is .
      
| Algorithm 7: for , and | 
1: Give  2: Do  3: Do  4: If , ; otherwise  5: If , ; otherwise  6: If , ; otherwise  7: Enddo  | 
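A compact NumPy version of this construction is sketched below, assuming Algorithm 7 extracts the diagonal part D and the negatives of the strictly lower and strictly upper parts, so that A = D − L − U as in the splittings above:

```python
import numpy as np

def build_D_L_U(A):
    """Split A into D - L - U: D diagonal, L strictly lower, U strictly upper
    (with the sign convention L = -tril(A, -1), U = -triu(A, 1))."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return D, L, U

A = np.array([[4.0, -1.0, 2.0],
              [1.0, 5.0, -2.0],
              [-1.0, 2.0, 6.0]])
D, L, U = build_D_L_U(A)
assert np.allclose(A, D - L - U)   # consistency check: the three parts recover A
```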
In order to compare the proposed NSIS with the methods in the existing literature, some simple examples are taken, and the sizes n of the linear systems are the same as in the literature.
8.1. Example 1
Consider an example of Equation (1) with ; , , and  []. We take ,  and , and fix .
With the convergence criterion , Table 2 compares the number of iterations (NI) for SOR, A2-SOR and A2-AOR with that obtained in [] by using Quasi-AOR (QAOR) and Quasi-SOR (QSOR), which are subject to a loose convergence criterion . In SOR the optimal value of w is computed from Algorithm 3; after three iterations  is obtained; the spectral radius of the iteration matrix is as small as 0.2898. If an ad hoc value  is used in SOR, 269 iterations are needed.
       
    
    Table 2.
    Example 1, NI for QAOR, QSOR, AOR, SOR, A2-SOR and A2-AOR; op. r means the optimal value of r, and op. w means the optimal value of w.
  
In A2-SOR, we take  and ;  and  are used in A2-AOR;  and  are used in AOR and QAOR. In AOR we take , and if the optimal  is computed from Algorithm 5, then NI reduces to 83.
In Table 3, we demonstrate the usefulness of the A1-SOR method with . A1-SOR with  improves the convergence speed by about a factor of two compared with the original SOR.
       
    
    Table 3.
    Example 1, NI and spectral radius obtained by A1-SOR for different values of .
  
In Table 4 the test is for the AOR method with  and . The convergence is improved by increasing the value of .
       
    
    Table 4.
    Example 1, NI obtained by A1-AOR for different values of .
  
Comparing Table 2, Table 3 and Table 4, the Algorithm 2 type iterative schemes converge faster than the Algorithm 1 type iterative schemes.
As given in [], we take  and  in the original QAOR; it needs NI = 421 to satisfy a stringent convergence criterion , where . If Algorithm 4 for r is adopted to find the optimal value ,  is greatly reduced to . Under the same convergence criterion, NI is reduced to 264.
In Table 5, the NI obtained by Algorithm 1 for the QAOR method are compared for different values of .
       
    
    Table 5.
    Example 1, NI and spectral radius  obtained by A1-QAOR for different values of .
  
In Table 6, NI obtained by Algorithm 2 for the QAOR is compared for different values of r. Under a loose convergence criterion , r is obtained from Algorithm 4 for r with  and different values of w. The best value of w is  for the QAOR method.
       
    
    Table 6.
    Example 1, the value of r and NI obtained by A2-QAOR for different values of w.
  
We take ; r is computed by Algorithm 5 for r of AOR, and r is computed by Algorithm 6 for r of ROR. In Table 7, NI obtained by different methods are compared.  is used in A1-ROR;  is used in A2-ROR.
       
    
    Table 7.
    Example 1, the value of r and NI obtained by different methods of AOR and ROR.
  
In Table 8, NI obtained by POR and A1-POR are compared for different values of . r is obtained from Algorithm 5. The best value of  is  for the POR method.
       
    
    Table 8.
    Example 1, NI obtained by POR and A1-POR for different values of ;  fixed, and  for A1-POR.
  
In Table 9, NI obtained by PROR and A1-PROR are compared for different values of . r is obtained from Algorithm 6.
       
    
    Table 9.
    Example 1, NI for PROR and A1-PROR with different values of ;  fixed, and  for A1-PROR.
  
8.2. Example 2
In Equation (1), we take []
      
        
      
      
      
      
    
Under  and , we take  and  for AOR,  and  for ROR, and  used in A1-AOR. For AOR if  is computed from Algorithm 5 with , then NI is reduced to 62. The values of  and  for AOR were claimed in [,] to be the optimal ones; however, the AOR method with the values of  and  converges faster.
For A1-AOR, if  is computed from Algorithm 5 with  and we take , then NI is reduced to 45, as shown in Table 10. For A1-ROR, if  is computed from Algorithm 6 and  and  are taken, then NI is reduced to 45.
       
    
    Table 10.
    Example 2, NI for AOR and ROR.
  
Table 11 demonstrates the A1-PROR method for different values of .  is computed from Algorithm 6 with  and . The optimal value  is derived in [].
       
    
    Table 11.
    Example 2, comparing NI for A1-PROR with different values of .
  
8.3. Example 3
In Equation (1), we take
      
        
      
      
      
      
    
 is a symmetric positive definite matrix. When  is a positive definite matrix,
        w_{\mathrm{opt}} = \frac{2}{1 + \sqrt{1 - \rho(\mathbf{B}_J)^2}}   (53)
        is the optimal value for the SOR method [], where  is the spectral radius of .
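A small sketch of how Equation (53) can be evaluated in practice is given below (my own illustration, assuming the relevant matrix is the Jacobi iteration matrix B_J = I − D^{-1}A):

```python
import numpy as np

def sor_optimal_w(A):
    """Evaluate w_opt = 2 / (1 + sqrt(1 - rho(B_J)^2)) with B_J = I - D^{-1} A.
    Returns (w_opt, rho); w_opt is None when rho >= 1 and the formula does not apply."""
    D = np.diag(np.diag(A))
    B_J = np.eye(A.shape[0]) - np.linalg.solve(D, A)
    rho = max(abs(np.linalg.eigvals(B_J)))
    if rho >= 1.0:
        return None, rho
    return 2.0 / (1.0 + np.sqrt(1.0 - rho**2)), rho

# Made-up symmetric positive definite tridiagonal example (rho < 1 here).
A = np.diag(np.full(10, 2.0)) + np.diag(np.full(9, -1.0), 1) + np.diag(np.full(9, -1.0), -1)
w_opt, rho = sor_optimal_w(A)
print("rho(B_J) =", round(rho, 4), " w_opt =", round(w_opt, 4))
```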
We find that the spectral radius of  is 2.5233, which according to Equation (53) means that the SOR method for this problem is divergent. Indeed, for this problem the other algorithms AOR, QAOR, ROR, POR, and PROR are also divergent.
We take . Under , Table 12 demonstrates the NI of the A2-SOR method for different values of .  is computed from Algorithm 3.
       
    
    Table 12.
    Example 3, NI for A2-SOR with different values of .
  
Table 13 demonstrates the A2-PROR method for different values of .  is computed from Algorithm 6 with  and .
       
    
    Table 13.
    Example 3, NI for A2-PROR with different values of .
  
There exists an optimal value of , for which NI is minimal. In general the relation between  and NI is not simple.
Table 14 demonstrates the A2-ROR method for different values of w but  fixed. r is computed from Algorithm 6 at each iteration step.
       
    
    Table 14.
    Example 3, NI for A2-ROR with different values of w.
  
This problem shows that the factor  in the splitting iterative schemes can stabilize the original splitting iterative schemes, which are unstable for this problem.
8.4. Example 4
We solve a complex Helmholtz equation: 
      
        
      
      
      
      
     with homogeneous boundary condition, where  is a complex-valued impedance coefficient.
After five-point discretization, Equation (54) becomes a complex linear system:
      
        
      
      
      
      
    
        where  are symmetric positive semi-definite with  positive, and . Upon letting
      
        
      
      
      
      
    
Equation (55) becomes
      
        
      
      
      
      
    
        where .
Equation (57) is a two-block linear system; in general, special splitting techniques are designed to effectively solve this sort of linear system, for instance, the optimal two-block splitting iterative scheme [], the generalized successive overrelaxation (GSOR) method [], the symmetric block triangular splitting (SBTS) iteration method [], the Hermitian and skew-Hermitian splitting (HSS) iteration method [], and the modified Hermitian and skew-Hermitian splitting (MHSS) iteration method [].
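For orientation, one common real two-block arrangement of the complex system (W + iT)(x + iy) = p + iq is sketched below; this particular arrangement is my assumption for illustration, and the paper's Equation (57) may use a different but equivalent form:

```python
import numpy as np

def real_two_block(W, T, p, q):
    """Real equivalent of (W + iT)(x + iy) = p + iq:
       [ W  -T ] [x]   [p]
       [ T   W ] [y] = [q]   (one common arrangement; others are equivalent)."""
    K = np.block([[W, -T], [T, W]])
    rhs = np.concatenate([p, q])
    return K, rhs

# Tiny made-up symmetric blocks for a consistency check.
W = np.array([[3.0, 1.0], [1.0, 2.0]])
T = np.array([[1.0, 0.0], [0.0, 1.0]])
p = np.array([1.0, 0.0])
q = np.array([0.0, 1.0])

K, rhs = real_two_block(W, T, p, q)
z = np.linalg.solve(K, rhs)                          # z = [x; y]
x_c = z[:2] + 1j * z[2:]                             # reassemble the complex solution
print(np.allclose((W + 1j * T) @ x_c, p + 1j * q))   # True: both forms agree
```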
Suppose that  is the centered difference matrix approximation of the negative Laplacian operator in Equation (54); the mesh size is , and ; then we have 
      
        
      
      
      
      
     where  and ;  
      
        
      
      
      
      
    
We apply the AOR in Equation (43) with  and with r optimized by Algorithm 5. Table 15 compares the optimal value of r, the number of iterations (NI) and the CPU time obtained by AOR, under  and .
       
    
    Table 15.
    Example 4, r, NI and CPU obtained by AOR with different n.
  
We apply the AOR in Equation (43) with  and with r optimized by Algorithm 5 at every iteration step. It is equivalent to A2-SOR. Table 16 compares NI and the CPU time under  and . Compared with Table 15, the A2-SOR saves CPU time and also requires a smaller NI.
       
    
    Table 16.
    Example 4, NI and CPU obtained by A2-SOR with different n.
  
Under  (), , where ,  and , we apply the A1-AOR with ,  and with r optimized by Algorithm 5 at every iteration step to solve this problem.
We compare the NI obtained by different methods in Table 17, some of which are listed in [], with HSS from [] and MHSS from []. The GSOR results were obtained from Table 1 in []. Owing to its easy formulation and low computational cost, the A1-AOR performs well even though it is slower than the other iterative methods in Table 17. Since it does not need a complicated spectral analysis to determine the optimal value of the parameter, the A1-AOR is a competitive method.
       
    
    Table 17.
    Example 4, NI obtained by different methods.
  
8.5. Example 5
We solve a Hilbert linear problem with the following coefficient matrix:
        A_{ij} = \frac{1}{i+j-1}, \quad i, j = 1, \ldots, n.   (58)
In a practical application the problem is finding an -order polynomial function  to best match a continuous function  in the interval of :
      
        
      
      
      
      
    
        which leads to a problem governed by Equation (1).  is the  Hilbert matrix defined by Equation (58),  is composed of the n coefficients  appearing in , and
      
        
      
      
      
      
    
        is uniquely determined by the function .
The Hilbert matrix is notoriously ill-conditioned, as can be seen from Table 18. The condition number of the Hilbert matrix grows as .
       
    
    Table 18.
    The condition numbers of Hilbert matrix.
  
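To make the origin of the Hilbert system explicit (a sketch with a made-up target function and the interval [0, 1] assumed): minimizing ∫(p(x) − f(x))² dx over polynomials p(x) = Σ_j c_j x^{j−1} leads to normal equations whose coefficient matrix has entries ∫ x^{i−1} x^{j−1} dx = 1/(i + j − 1), i.e., the Hilbert matrix, with right-hand side b_i = ∫ x^{i−1} f(x) dx. The rapid growth of the condition number is also visible:

```python
import numpy as np
from scipy.linalg import hilbert
from scipy.integrate import quad

# Condition number of the Hilbert matrix grows extremely fast with n.
for n in (4, 6, 8, 10, 12):
    print(f"n = {n:2d}:  cond = {np.linalg.cond(hilbert(n)):.2e}")

# Normal equations for the best L2 polynomial fit on [0, 1] (assumed interval):
# A_ij = 1/(i+j-1) (Hilbert matrix), b_i = integral of x^{i-1} f(x) over [0, 1].
f = np.exp                        # made-up target function for illustration
n = 6
A = hilbert(n)
b = np.array([quad(lambda x, i=i: x**i * f(x), 0.0, 1.0)[0] for i in range(n)])
c = np.linalg.solve(A, b)         # coefficients of the fitted polynomial
print("fitted coefficients:", np.round(c, 4))
```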
Because the Hilbert matrix is seriously ill-conditioned, a loose convergence criterion with  is considered. We suppose that  are the solutions. We first apply the Gauss–Seidel method to solve this problem. Table 19 lists NI, the maximum error (ME) = , and the root-mean-square-error (RMSE) =  obtained by the Gauss–Seidel method.
       
    
    Table 19.
    Example 5, NI, ME and RMSE obtained by Gauss–Seidel method with different n.
  
Table 20 lists the NI, ME and RMSE obtained by the SOR with w determined by Algorithm 3 at the first step. It is apparent that the SOR is superior to the Gauss–Seidel method.
       
    
    Table 20.
    Example 5, NI, ME and RMSE obtained by the SOR with different n.
  
9. Conclusions
In this paper the splitting iterative schemes were reformulated in terms of descent and residual vectors. In the new formulation for the new splitting iterative scheme (NSIS) the extrapolation technique becomes easy to follow. The NSIS can be obtained from the original splitting iterative scheme by either multiplying the descent vector by a parameter , or multiplying the right-hand side of the governing equation for the descent vector by a parameter . Different splitting iterative schemes can be unified to have the same iterative form, but they are different in the governing equations for the descent vector.
We proved that the spectral radius of the NSIS is smaller than that of the original scheme if . In addition to a constant value of , varying values of  were examined by preserving the orthogonality and maximizing the decrease of the residual vector norm. A varying orthogonalization factor was introduced in the second NSIS to enhance the stability, and the property of absolute convergence was proven.
The main novel contributions are summarized as follows.
- The splitting iterative schemes were unified in terms of descent and residual vectors.
 - An extrapolation parameter in the splitting iterative scheme can improve the convergence speed.
 - The NSISs were developed to maximally decrease the residual and to preserve the orthogonality property.
 - The second method, by adding a stepwise varying factor, can stabilize the splitting iterative scheme even when the original one is unstable.
 - The acceleration parameter r can be obtained readily by a maximization technique.
 - The improvement of convergence speed was observed by adopting the proposed NSISs together with the optimization technique for determining the optimal value of the parameter.
Author Contributions
Methodology, C.-S.L.; software, C.-S.L.; validation, C.-S.L.; formal analysis, C.-S.L.; writing—original draft preparation, C.-S.L.; writing—review and editing, B.L.; funding acquisition, C.-S.L. All authors have read and agreed to the published version of the manuscript.
Funding
Taiwan’s National Science and Technology Council project NSTC 113-2221-E-019-043-MY3 granted to the first author is highly appreciated.
Data Availability Statement
All data that support the findings of this study are included within the article.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Björck, A. Numerical Methods for Least Squares Problems; SIAM: Philadelphia, PA, USA, 1996.
- Meurant, G.; Duintjer Tebbens, J. Krylov Methods for Nonsymmetric Linear Systems: From Theory to Computations; Springer Series in Computational Mathematics; Springer: Berlin/Heidelberg, Germany, 2020; Volume 57.
- Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Philadelphia, PA, USA, 2003.
- Sogabe, T. Krylov Subspace Methods for Linear Systems: Principles of Algorithms; Springer: Singapore, 2023.
- van der Vorst, H.A. Iterative Krylov Methods for Large Linear Systems; Cambridge University Press: New York, NY, USA, 2003.
- Saad, Y.; Schultz, M.H. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7, 856–869.
- Liu, C.S.; Chang, C.W.; Kuo, C.L. Re-orthogonalized/affine GMRES and orthogonalized maximal projection algorithm for solving linear systems. Algorithms 2024, 17, 266.
- Liu, C.S.; Chang, C.W. Enhance stability of successive over-relaxation method and orthogonalized symmetry successive over-relaxation in a larger range of relaxation parameter. Symmetry 2024, 16, 907.
- Dongarra, J.; Sullivan, F. Guest editors' introduction to the top 10 algorithms. Comput. Sci. Eng. 2000, 2, 22–23.
- Bai, Z.Z. Motivations and realizations of Krylov subspace methods for large sparse linear systems. J. Comput. Appl. Math. 2015, 283, 71–78.
- Bouyghf, F.; Messaoudi, A.; Sadok, H. A unified approach to Krylov subspace methods for solving linear systems. Numer. Algor. 2024, 96, 305–332.
- Varga, R.S. Matrix Iterative Analysis; Springer: Berlin/Heidelberg, Germany, 2000.
- Young, D. Iterative methods for solving partial difference equations of elliptic type. Trans. Am. Math. Soc. 1954, 76, 92–111.
- Hadjidimos, A. Accelerated overrelaxation method. Math. Comput. 1978, 32, 149–157.
- Li, Y.; Dai, P. Generalized AOR methods for linear complementarity problem. Appl. Math. Comput. 2007, 188, 7–18.
- Zhang, C.H.; Wang, X.; Tang, X.B. Generalized AOR method for solving a class of generalized saddle point problems. J. Comput. Appl. Math. 2019, 350, 69–79.
- Wu, S.L.; Liu, Y.J. A new version of the accelerated overrelaxation iterative method. J. Appl. Math. 2014, 2014, 725360.
- Cvetkovic, L.; Kostic, V. A note on the convergence of the AOR method. Appl. Math. Comput. 2007, 194, 394–399.
- Gao, Z.X.; Huang, T.Z. Convergence of AOR method. Appl. Math. Comput. 2006, 176, 134–140.
- Huang, Z.G.; Xu, Z.; Lu, Q.; Cui, J.J. Some new preconditioned generalized AOR methods for generalized least-squares problems. Appl. Math. Comput. 2015, 269, 87–104.
- Yun, J.H. Comparison results of the preconditioned AOR methods for L-matrices. Appl. Math. Comput. 2011, 218, 3399–3413.
- Beik, P.A.F.; Shams, N.N. On the modified iterative methods for M-matrix linear systems. Bull. Iranian Math. Soc. 2015, 41, 1519–1535.
- Dehghan, M.; Hajarian, M. Modified AOR iterative methods to solve linear systems. J. Vib. Control 2014, 20, 661–669.
- Huang, T.Z.; Cheng, G.H.; Evans, D.J.; Cheng, X.Y. AOR type iterations for solving preconditioned linear systems. Int. J. Comput. Math. 2005, 82, 969–976.
- Moghadam, M.M.; Beik, F.P.A. Comparison results on the preconditioned mixed-type splitting iterative method for M-matrix linear systems. Bull. Iranian Math. Soc. 2012, 38, 349–367.
- Wang, L.; Song, Y. Preconditioned AOR iterative method for M-matrices. J. Comput. Appl. Math. 2009, 226, 114–124.
- Wu, M.; Wang, L.; Song, Y. Preconditioned AOR iterative method for linear systems. Appl. Numer. Math. 2007, 57, 672–685.
- Liu, C.S.; El-Zahar, E.R.; Chang, C.W. An optimal combination of the splitting-linearizing method to SSOR and SAOR for solving the system of nonlinear equations. Mathematics 2024, 12, 1808.
- Liu, C.S.; Chang, C.W. The SOR and AOR methods with stepwise optimized values of parameters for the iterative solutions of linear systems. Contemp. Math. 2024, 5, 4013–4028.
- Vatti, V.B.K.; Rao, G.C.; Pai, S.S. Reaccelerated over relaxation (ROR) method. Bull. Int. Math. Virtual Inst. 2020, 10, 315–324.
- Isah, I.O.; Ndanusa, M.D.; Shehu, M.D.; Yusuf, A. Parametric reaccelerated overrelaxation (PROR) method for numerical solution of linear systems. Sci. World J. 2022, 17, 59–64.
- Hadjidimos, A.; Yeyios, A. The principle of extrapolation in connection with the accelerated overrelaxation method. Linear Alg. Appl. 1980, 30, 115–128.
- Albrecht, P.; Klein, M.P. Extrapolated iterative methods for linear systems. SIAM J. Numer. Anal. 1984, 21, 192–201.
- Hadjidimos, A. A survey of the iterative methods for the solution of linear systems by extrapolation, relaxation and other techniques. J. Comput. Appl. Math. 1987, 20, 37–51.
- Hadjidimos, A. On the equivalence of extrapolation and Richardson's iteration and its applications. Linear Alg. Appl. 2005, 402, 165–192.
- Vatti, V.B.K.; Rao, G.C.; Pai, S.S. Parametric overrelaxation (PAOR) method. In Numerical Optimization in Engineering and Sciences; Advances in Intelligent Systems and Computing; Springer: Singapore, 2020; Volume 979, pp. 283–288.
- Avdelas, G.; Hadjidimos, A. Optimum accelerated overrelaxation method in a special case. Math. Comput. 1981, 36, 183–187.
- Quarteroni, A.; Sacco, R.; Saleri, F. Numerical Mathematics; Springer Science: New York, NY, USA, 2000.
- Salkuyeh, D.L.; Hezari, D.; Edalatpour, V. Generalized successive overrelaxation iterative method for a class of complex symmetric linear system of equations. Int. J. Comput. Math. 2015, 92, 802–815.
- Li, X.A.; Zhang, W.H.; Wu, J.Y. On symmetric block triangular splitting iteration method for a class of complex symmetric system of linear equations. Appl. Math. Lett. 2018, 79, 131–137.
- Bai, Z.Z.; Benzi, M.; Chen, F. Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 2010, 87, 93–111.
- Bai, Z.Z.; Benzi, M.; Chen, F. On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algor. 2011, 56, 297–317.