Article

Re-Orthogonalized/Affine GMRES and Orthogonalized Maximal Projection Algorithm for Solving Linear Systems

Chein-Shan Liu, Chih-Wen Chang and Chung-Lun Kuo
1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 Department of Mechanical Engineering, National United University, Miaoli 36063, Taiwan
* Author to whom correspondence should be addressed.
Algorithms 2024, 17(6), 266; https://doi.org/10.3390/a17060266
Submission received: 19 May 2024 / Revised: 9 June 2024 / Accepted: 14 June 2024 / Published: 15 June 2024
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)

Abstract: GMRES is one of the most powerful and popular methods for solving linear systems in the Krylov subspace. We examine it from two viewpoints: maximizing the decrease in the length of the residual vector, and maintaining the orthogonality of consecutive residual vectors. A stabilization factor, η, measuring the deviation from orthogonality of the residual vector is inserted into GMRES to preserve the orthogonality automatically. The resulting re-orthogonalized GMRES (ROGMRES) method guarantees absolute convergence, even when the orthogonality is gradually lost during the GMRES iteration. When η < 1/2, the residual lengths of GMRES and GMRES(m) no longer decrease; hence, η < 1/2 can be adopted as a stopping criterion to terminate the iterations. We prove that η = 1 for the ROGMRES method; it automatically keeps the orthogonality and maintains the maximal reduction in the length of the residual vector. We further improve GMRES by seeking the descent vector that minimizes the residual in a larger space, the affine Krylov subspace. The resulting orthogonalized maximal projection algorithm (OMPA) is identified as having good performance. We also derive the iterative formulas obtained by extending the GMRES method to the affine Krylov subspace; these equations differ slightly from those derived by Saad and Schultz (1986). Combining the affine GMRES method with the orthogonalization technique generates a powerful affine GMRES (A-GMRES) method with high performance.

1. Introduction

In this paper, we revisit GMRES and GMRES(m) to solve

$$ A x = b, \qquad x,\ b \in \mathbb{R}^{n}, \quad A \in \mathbb{R}^{n \times n}. \tag{1} $$

In terms of an initial guess, x_0, and the initial residual r_0 = b − A x_0, Equation (1) is equivalent to

$$ A u = r_0, \tag{2} $$

where

$$ u = x - x_0 \tag{3} $$

is the descent vector, also called a corrector.
Equation (2) can be used to search for a proper descent vector, u, which is inserted into Equation (3) to produce a solution, x, closer to the exact solution. We consider an m-dimensional Krylov subspace:

$$ K_m(A, r_0) := \operatorname{Span}\{ r_0, A r_0, \ldots, A^{m-1} r_0 \}. \tag{4} $$
Krylov subspaces are named after A.N. Krylov, who in [1] described a method for solving a secular equation to determine the frequencies of small vibrations of material systems.
Many well-developed Krylov subspace methods for solving Equation (1) have been discussed in the textbooks [2,3,4,5]. Godunov and Gordienko [6] used an optimal representation of vectors in the Krylov space with the help of a variational problem; the extremum of the variational problem is the solution of the Kalman matrix equation, in which the 2-norm minimal solution is used as a characteristic of the Krylov space.
Among the most important classes of numerical methods for iteratively solving Equation (1) are the Krylov subspace methods [7,8] and the preconditioned Krylov subspace methods [9]. Recently, Bouyghf et al. [10] presented a comprehensive framework for unifying the Krylov subspace methods for Equation (1).
The GMRES method in [11] used the Petrov–Galerkin method to search for u ∈ K_m via a perpendicular property:

$$ r_0 - A u \perp L_m = A K_m. \tag{5} $$

The descent vector u ∈ K_m is achieved by minimizing the length of the residual vector [2]:

$$ \min \| r \|, \tag{6} $$

where

$$ r = b - A x = r_0 - A u, \tag{7} $$

and ‖r‖ is the Euclidean norm of the residual vector, r.
In many Krylov subspace methods, the central idea is projection. The GMRES method is a special case of the Krylov subspace methods with an oblique projection. Let V and W = A V be the matrix representations of K_m and L_m, respectively. We project the residual Equation (2) from an n-dimensional space to an m-dimensional subspace:

$$ W^T A u = W^T r_0. \tag{8} $$

We seek u in the subspace K_m by

$$ u = V y; \tag{9} $$

hence, Equation (8) becomes

$$ W^T W y = W^T r_0. \tag{10} $$
Solving y from Equation (10) and inserting it into Equation (9), we can obtain u and, hence, the next x = x_0 + u; x is a correction of x_0, and we hope that ‖r‖ < ‖r_0‖.
Meanwhile, Equation (7) becomes

$$ r = r_0 - W y, \tag{11} $$

where W y = A u. By means of Equations (5), (7), and (11), we enforce the perpendicular property by

$$ r_0 - W y \perp L_m; \tag{12} $$

it is equivalent to the projection of r_0 on L_m. W y = A u is the projection of r_0 on L_m, and r is perpendicular to L_m, which ensures that the length of the residual vector decreases. To obtain a fast convergent iterative algorithm, we must keep the orthogonality and simultaneously maximize the length of the projection W y = A u. Later on, we will modify the GMRES method from these two aspects.
Liu [12] derived an effective iterative algorithm based on the maximal projection method to solve Equation (1) in the column space of A . Recently, Liu et al. [13] applied the maximal projection method and the minimal residual norm method to solve the least-squares problems.
There are two causes of the slowdown and even stagnation of GMRES: one is the loss of orthogonality, and the other is that its descent vector, u = V y, is not the best one. There are different methods to accelerate and improve GMRES [14,15,16,17]. The restart technique is often applied to the Krylov subspace methods; however, it generally slows their convergence. Another way to accelerate the convergence is updating the initial value [18,19]. Preconditioning technology is often used to speed up GMRES and other Krylov subspace methods [20,21,22,23,24]. The multipreconditioned GMRES [25] was also proposed, using two or more preconditioners simultaneously. The polynomial preconditioned GMRES and GMRES-DR were also developed in [26]. As an augmentation of the Krylov subspace, some authors sought better descent vectors in a larger space to accelerate the convergence of GMRES [27,28,29]. Recently, Carson et al. [30] presented mathematically important and practically relevant phenomena that uncover the behavior of GMRES through a discussion of computed examples; they considered the conjugate gradient (CG) and GMRES methods, as well as other Krylov subspace methods, crucially important for further development and practical applications.
Contribution and Novelty:
The major contribution and novelty of the present paper are that we propose a quite simple strategy to overcome the slowdown and stagnation behavior of GMRES by inserting a stabilization factor, η = r^T A u / ‖A u‖², into the iterative equation. In doing so, the orthogonality property of the residual vector and the stepwise decreasing property of the residual length are automatically preserved. We extend the space of GMRES to an affine Krylov subspace to seek a better descent vector. When the maximal projection method and the affine technique are combined with the orthogonalization technique, two very powerful iterative algorithms for solving linear systems with high performance are developed.
Highlight:
  • Examine the iterative algorithm for linear systems from the viewpoint of maximizing the decreasing quantity of the residual and maintaining the orthogonality of the consecutive residual vector.
  • A stabilization factor to measure the deviation from the orthogonality is inserted into GMRES to preserve the orthogonality automatically.
  • The re-orthogonalized GMRES guarantees the preservation of orthogonality, even if the orthogonality is gradually lost in the iterations by means of GMRES.
  • Improve GMRES by seeking the descent vector to minimize the length of the residual vector in a larger space of the affine Krylov subspace.
  • The new algorithms all converge absolutely.
Outline:
The Arnoldi process and the conventional GMRES and GMRES(m) are described in Section 2. In Section 3, we propose a new algorithm modified from GMRES by inserting a stabilization factor, η , which is named a re-orthogonalized GMRES (ROGMRES). We discuss ROGMRES from two aspects: preserving the orthogonality and maximally decreasing the length of the residual vector; we also prove that ROGMRES can automatically maintain η = 1 , and preserve the good property of orthogonality and the maximality of reducing the residual. In Section 4, a better descent vector is sought in a larger affine Krylov subspace by using the maximal projection method; upon combining it with the orthogonalization technique, we propose a new algorithm, the orthogonalized maximal projection algorithm (OMPA). The iterative equations in GMRES are extended to that in the affine Krylov subspace by using the minimum residual method in Section 5, which results in an affine GMRES (A-GMRES) method. The numerical tests of some examples are given in Section 6. Section 7 concludes the achievements, novelties, and contributions of this paper.

2. GMRES and GMRES(m)

The Arnoldi process [2] is often used to orthonormalize the Krylov vectors A^j r_0, j = 0, …, m−1, in Equation (4), such that the resultant vectors v_i, i = 1, …, m, satisfy v_i · v_j = δ_ij, i, j = 1, …, m, where δ_ij is the Kronecker delta symbol. The full orthonormalization procedure, Algorithm 1, is set up as follows.
Algorithm 1: Arnoldi process
1:  Select m and give an initial r_0
2:  v_1 = r_0 / ‖r_0‖
3:  Do j = 1 : m
4:    w_j = A v_j
5:    Do i = 1 : j
6:      h_ij = v_i · w_j
7:      w_j = w_j − h_ij v_i
8:    Enddo of i
9:    h_{j+1,j} = ‖w_j‖
10:   v_{j+1} = w_j / h_{j+1,j}
11: Enddo of j
A dot between two vectors represents the inner product, as in a · b = a^T b. V denotes the Arnoldi matrix whose jth column is v_j:

$$ V := [v_1, \ldots, v_m] \in \mathbb{R}^{n \times m}. \tag{13} $$
After m steps of the Arnoldi process, we can construct an upper Hessenberg matrix, H_m, as follows:

$$ H_m = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1m} \\ h_{21} & h_{22} & \cdots & h_{2m} \\ & h_{32} & \cdots & h_{3m} \\ & & \ddots & \vdots \\ & & h_{m,m-1} & h_{mm} \end{bmatrix}. \tag{14} $$

Utilizing H_m, the Arnoldi process can be expressed in a matrix product form:

$$ A V = V H_m + h_{m+1,m} v_{m+1} e_m^T, \tag{15} $$

where H_m ∈ R^{m×m} and e_m = [0, …, 0, 1]^T ∈ R^m. Now, the augmented upper Hessenberg matrix, H̄_m ∈ R^{(m+1)×m}, can be formulated as

$$ \bar{H}_m = \begin{bmatrix} H_m \\ h_{m+1,m} e_m^T \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1m} \\ h_{21} & h_{22} & \cdots & h_{2m} \\ & h_{32} & \cdots & h_{3m} \\ & & \ddots & \vdots \\ & & h_{m,m-1} & h_{mm} \\ 0 & \cdots & 0 & h_{m+1,m} \end{bmatrix}. \tag{16} $$
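For concreteness, the following is a minimal NumPy sketch of the Arnoldi process of Algorithm 1, returning the basis V of Equation (13) and the augmented upper Hessenberg matrix of Equation (16); the function name arnoldi and the breakdown guard are illustrative choices, not part of the original algorithm.

```python
import numpy as np

def arnoldi(A, r0, m):
    """Arnoldi process: orthonormal basis V (n x m) and H_bar ((m+1) x m)."""
    n = len(r0)
    V = np.zeros((n, m + 1))
    H_bar = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)            # v_1 = r_0 / ||r_0||
    for j in range(m):
        w = A @ V[:, j]                          # w_j = A v_j
        for i in range(j + 1):
            H_bar[i, j] = V[:, i] @ w            # h_ij = v_i . w_j
            w = w - H_bar[i, j] * V[:, i]        # orthogonalize against v_i
        H_bar[j + 1, j] = np.linalg.norm(w)      # h_{j+1,j} = ||w_j||
        if H_bar[j + 1, j] > 1e-14:              # guard against breakdown
            V[:, j + 1] = w / H_bar[j + 1, j]    # v_{j+1}
    return V[:, :m], H_bar
```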
According to [2], the GMRES method (Algorithm 2) is given as follows.
Algorithm 2: GMRES
1: Give m_0, x_0, and ε
2: Do m = m_0, … (k = m − m_0), until ‖r_k‖ < ε
3:   r_k = b − A x_k
4:   v_1^k = r_k / ‖r_k‖
5:   V_k = [v_1^k, …, v_m^k]  (by Arnoldi process)
6:   Solve (H̄_k^T H̄_k) y_k = ‖r_k‖ H̄_k^T e_1
7:   x_{k+1} = x_k + V_k y_k
In the above, x_k is the kth step value of x; r_k is the kth residual vector; y_k is the vector of m expansion coefficients used in V_k y_k; e_1 is the first column of I_{m+1}; V_k denotes the kth step V in Equation (13); H̄_k ∈ R^{(m+1)×m} is the kth step augmented upper Hessenberg matrix in Equation (16); and V_k y_k is the kth step descent vector. Notice that V_k y_k ∈ K_m(A, r_k).
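As a rough illustration, one correction of Algorithm 2 (steps 3–7) can be sketched in NumPy as follows, reusing the arnoldi() helper sketched above; the small system of step 6 is solved in its normal-equation form, exactly as written in the listing.

```python
import numpy as np

def gmres_step(A, b, x, m):
    """One GMRES correction: x_{k+1} = x_k + V_k y_k (steps 3-7 of Algorithm 2)."""
    r = b - A @ x                                # r_k = b - A x_k
    V, H_bar = arnoldi(A, r, m)                  # V_k and H_bar_k
    e1 = np.zeros(m + 1)
    e1[0] = 1.0
    rhs = np.linalg.norm(r) * (H_bar.T @ e1)     # ||r_k|| H_bar^T e_1
    y = np.linalg.solve(H_bar.T @ H_bar, rhs)    # step 6
    return x + V @ y                             # step 7
```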
In addition to Algorithm 2, there is a restarted version, GMRES(m) (Algorithm 3) [11,31], described as follows.
Algorithm 3: GMRES(m)
1: Give m_0, m_1, x_0, and ε
2: Do j = 1, …
3:   Do m = m_0, …, m_1 (k = m − m_0)
4:     r_k = b − A x_k
5:     v_1^k = r_k / ‖r_k‖
6:     V_k = [v_1^k, …, v_m^k]  (by Arnoldi process)
7:     Solve (H̄_k^T H̄_k) y_k = ‖r_k‖ H̄_k^T e_1
8:     x_{k+1} = x_k + V_k y_k
9:     If ‖r_{k+1}‖ < ε, stop
10:    Otherwise, x_0 = x_{k+1}; go to 2
Instead of m_0 = 1 used in the original GMRES and GMRES(m), a suitable value of m_0 > 1 can speed up the convergence; hence, m = m_1 − m_0 + 1 is the restart frequency. Unlike the original GMRES and GMRES(m) [2,11,31], we take x_k, not x_0, in step 7 of Algorithm 2 and in step 8 of Algorithm 3. Because x_0 is not updated in the original GMRES and GMRES(m), they converge slowly. In Algorithm 3, if we take m_1 = m_0, then we return to the usual iterative algorithm for GMRES with a fixed dimension, m = m_0, of the Krylov subspace.
The restart remedies the long-term recurrence and the accumulation of round-off errors of GMRES. In order to improve the restart process, several improvement techniques have been proposed, as mentioned above. Some techniques based on adaptively determining the restart frequency can be found in [32,33,34,35]. For quicker convergence, we update the current value of x at each iterative step. Because x_k is new information for determining the next value x_{k+1}, the result of x_k should not be wasted, which saves computational cost.

3. Maximally Decreasing Residual and Automatically Preserving Orthogonality

In this section, we develop a new iterative algorithm based on GMRES to solve Equation (1). The iterative form is

$$ x_{k+1} = x_k + u_k, \tag{17} $$

where u_k is the kth step descent vector, which is determined by the iterative algorithm. For instance, u_k in GMRES is u_k = V_k y_k.
Lemma 1.
A better iterative scheme than that in Equation (17) is

$$ x_{k+1} = x_k + \frac{r_k^T A u_k}{\|A u_k\|^2}\, u_k. \tag{18} $$

Proof. 
Let

$$ x_{k+1} = x_k + \eta u_k; \tag{19} $$

it follows that

$$ r_{k+1} = r_k - \eta A u_k. \tag{20} $$

Taking the squared norms of both sides yields

$$ \|r_{k+1}\|^2 = \|r_k\|^2 - \left( 2 \eta\, r_k^T A u_k - \eta^2 \|A u_k\|^2 \right) =: \|r_k\|^2 - h. \tag{21} $$

To maximally reduce the residual, we consider

$$ \max_{\eta}\ h = 2 \eta\, r_k^T A u_k - \eta^2 \|A u_k\|^2, \tag{22} $$

which leads to

$$ \eta_k = \frac{r_k^T A u_k}{\|A u_k\|^2}. \tag{23} $$

Inserting η_k into Equation (19), the proof of Equation (18) is finished.    □
Lemma 2.
In Equation (18), the following orthogonality property holds:

$$ r_{k+1} \cdot (A u_k) = 0. \tag{24} $$

Proof. 
Apply A to Equation (18):

$$ A x_{k+1} = A x_k + \frac{r_k^T A u_k}{\|A u_k\|^2}\, A u_k. \tag{25} $$

Using A x_{k+1} = b − r_{k+1} and A x_k = b − r_k, Equation (25) changes to

$$ r_{k+1} = r_k - \frac{r_k^T A u_k}{\|A u_k\|^2}\, A u_k. \tag{26} $$

Taking the inner product with A u_k yields

$$ r_{k+1} \cdot (A u_k) = r_k^T A u_k - \frac{r_k^T A u_k}{\|A u_k\|^2}\, \|A u_k\|^2 = 0. \tag{27} $$

Equation (24) is proven.    □
It follows from Equation (5) that η = 1 is implied by GMRES. However, during the GMRES iteration the numerical values of η may deviate from 1 to a great extent, even being zero or a negative value. Therefore, we propose a re-orthogonalized version of GMRES, namely the ROGMRES method (Algorithm 4), as follows.
Algorithm 4: ROGMRES
1: Give m_0, x_0, and ε
2: Do m = m_0, … (k = m − m_0), until ‖r_k‖ < ε
3:   r_k = b − A x_k
4:   v_1^k = r_k / ‖r_k‖
5:   V_k = [v_1^k, …, v_m^k]  (by Arnoldi process)
6:   Solve (H̄_k^T H̄_k) y_k = ‖r_k‖ H̄_k^T e_1
7:   u_k = V_k y_k
8:   x_{k+1} = x_k + (r_k^T A u_k / ‖A u_k‖²) u_k
Correspondingly, the ROGMRES(m) Algorithm 5 is given as follows.
Algorithm 5: ROGMRES(m)
1: Give m_0, m_1, x_0, and ε
2: Do j = 1, …
3:   Do m = m_0, …, m_1 (k = m − m_0)
4:     r_k = b − A x_k
5:     v_1^k = r_k / ‖r_k‖
6:     V_k = [v_1^k, …, v_m^k]  (by Arnoldi process)
7:     Solve (H̄_k^T H̄_k) y_k = ‖r_k‖ H̄_k^T e_1
8:     u_k = V_k y_k
9:     x_{k+1} = x_k + (r_k^T A u_k / ‖A u_k‖²) u_k
10:    If ‖r_{k+1}‖ < ε, stop
11:    Otherwise, x_0 = x_{k+1}; go to 2
Compared with GMRES(m) in Section 2, the computational cost of ROGMRES(m) is only slightly increased by computing an extra factor, r_k^T A u_k / ‖A u_k‖², at each iteration; it needs one matrix-vector product and two inner products of vectors.
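A minimal sketch of this extra step is given below; the function name reorthogonalized_update is an illustrative choice, and the inputs are assumed to be the current iterate x, residual r, and the descent vector u delivered by the underlying GMRES step.

```python
import numpy as np

def reorthogonalized_update(A, x, r, u):
    """Re-orthogonalized update of Equation (18): x_{k+1} = x_k + eta_k u_k."""
    Au = A @ u                     # one extra matrix-vector product
    eta = (r @ Au) / (Au @ Au)     # two extra inner products
    return x + eta * u             # plain GMRES would simply use x + u
```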
Theorem 1.
For GMRES, if η_k ≠ 1 happens at the kth step, the orthogonality in Equation (5) cannot be continued after the kth step, i.e.,

$$ r_{k+1} \cdot (A u_k) \neq 0; \tag{28} $$

moreover, if η_k < 1/2, the residual does not decrease, i.e.,

$$ \|r_{k+1}\| > \|r_k\|. \tag{29} $$

For ROGMRES in Equation (18), the residual is absolutely decreased:

$$ \|r_{k+1}\| < \|r_k\|, \tag{30} $$

and the orthogonality r_{k+1} · (A u_k) = 0 holds.
Proof. 
Applying A to x_{k+1} = x_k + V_k y_k in Algorithm 2 (GMRES) yields

$$ r_{k+1} = r_k - A u_k, \tag{31} $$

where u_k = V_k y_k.
It follows from Equation (23) that

$$ (r_k - \eta_k A u_k) \cdot (A u_k) = 0, \tag{32} $$

which can be rearranged to

$$ (r_k - A u_k) \cdot (A u_k) = (\eta_k - 1) \|A u_k\|^2, \tag{33} $$

and then, by Equation (31),

$$ r_{k+1} \cdot (A u_k) = (\eta_k - 1) \|A u_k\|^2. \tag{34} $$

If η_k ≠ 1, Equation (28) is proven. Taking the squared norm of Equation (31) and using Equation (23) generates

$$ \|r_{k+1}\|^2 = \|r_k\|^2 - 2\, r_k \cdot (A u_k) + \|A u_k\|^2 = \|r_k\|^2 + (1 - 2 \eta_k) \|A u_k\|^2. \tag{35} $$

If η_k < 1/2, then (1 − 2η_k) ‖A u_k‖² > 0 and Equation (29) is proven.
For ROGMRES, taking the squared norm of Equation (26), we have

$$ \|r_{k+1}\|^2 = \|r_k\|^2 - \frac{(r_k^T A u_k)^2}{\|A u_k\|^2}. \tag{36} $$

Because (r_k^T A u_k)² / ‖A u_k‖² > 0, Equation (30) is proven. The orthogonality r_{k+1} · (A u_k) = 0 was proven in Equation (24) of Lemma 2.    □
Corollary 1.
For GMRES, the orthogonality of the consecutive residual vectors is equivalent to the maximality of h in Equation (22), where

$$ \eta_k = 1 \ \Longrightarrow\ r_k^T A u_k = \|A u_k\|^2. \tag{37} $$

Proof. 
By means of Equation (34), to satisfy the orthogonality condition of the residual vector we require η_k = 1; it implies Equation (37) by the definition in Equation (23). At the same time, Equation (36) changes to

$$ \|r_{k+1}\|^2 = \|r_k\|^2 - \|A u_k\|^2. \tag{38} $$

Inserting η_k = 1 into Equation (22), we have

$$ \max h = 2\, r_k^T A u_k - \|A u_k\|^2 = \|A u_k\|^2, \tag{39} $$

in which r_k^T A u_k = ‖A u_k‖² from Equation (37) was used. For GMRES, obtaining the maximal value of h and preserving the orthogonality are the same.    □
Remark 1.
Maintaining η_k = 1 in GMRES is a key issue for preserving the orthogonality and for the maximal reduction of the residual. However, the traditional GMRES cannot always maintain η_k = 1 during the iteration process.
Theorem 2.
Let

$$ w_k = \frac{r_k^T A u_k}{\|A u_k\|^2}\, u_k \tag{40} $$

be the descent vector of ROGMRES in

$$ x_{k+1} = x_k + w_k. \tag{41} $$

The following identity holds:

$$ \eta_k = \frac{r_k^T A w_k}{\|A w_k\|^2} = 1; \tag{42} $$

hence, ROGMRES can automatically maintain the orthogonality of the residual vector and achieve the maximal decrease in the length of the residual vector.
Proof. 
Let

$$ \gamma = \frac{r_k^T A u_k}{\|A u_k\|^2}; \tag{43} $$

Equation (40) is written as

$$ w_k = \gamma u_k. \tag{44} $$

Inserting it into Equation (42) produces

$$ \eta_k = \frac{1}{\gamma} \frac{r_k^T A u_k}{\|A u_k\|^2}; \tag{45} $$

by means of Equation (43), it follows that

$$ \eta_k = \frac{\|A u_k\|^2}{r_k^T A u_k} \frac{r_k^T A u_k}{\|A u_k\|^2} = 1. \tag{46} $$

Thus, Equation (42) is proven. The proof that the orthogonality condition of the consecutive residual vectors and the maximality of h are satisfied is the same as that given in Corollary 1.    □
Remark 2.
These properties of ROGMRES are crucial: it can automatically maintain η_k = 1, thereby preserving its good properties of orthogonality and the maximal reduction of the residual. The method in Equation (18) can be viewed as an automatically orthogonality-preserving method. Numerical experiments will verify these points.
Theorem 3.
For ROGMRES in Equation (18), the iterative point, x_k, is located on an invariant manifold:

$$ \|r_k\|^2 = \frac{\|r_0\|^2}{Q_k}, \tag{47} $$

where Q_0 = 1, and Q_k > 0 with Q_{k+1} > Q_k is an increasing sequence.
Proof. 
We begin with the following time-invariant manifold:

$$ \|r\|^2 = \frac{\|r_0\|^2}{Q(t)}, \tag{48} $$

where Q(0) = 1, and Q(t) is an increasing function of t.
We suppose that the evolution of x in time is driven by a descent vector, u:

$$ \dot{x} = \gamma u. \tag{49} $$

Equation (48) is a time-invariant manifold, such that its time differential is zero, i.e.,

$$ \frac{\dot{Q}(t)}{2 Q(t)} \|r\|^2 - r \cdot (A \dot{x}) = 0, \tag{50} $$

where ṙ = −A ẋ was used.
Inserting Equation (49) into Equation (50) yields

$$ \gamma = \frac{\dot{Q}(t)}{2 Q(t)} \frac{\|r\|^2}{r^T A u}. \tag{51} $$

Inserting γ into Equation (49) and using v = A u, we can derive

$$ \dot{x} = q(t) \frac{\|r\|^2}{r^T v}\, u, \tag{52} $$

where

$$ q(t) := \frac{\dot{Q}(t)}{2 Q(t)} > 0. \tag{53} $$

By applying the forward Euler scheme to Equation (52), we have

$$ x(t + \Delta t) = x(t) + \beta \frac{\|r\|^2}{r^T v}\, u, \tag{54} $$

where

$$ \beta = q(t) \Delta t > 0. \tag{55} $$

An iterative form of Equation (54) is

$$ x_{k+1} = x_k + \beta_k \frac{\|r_k\|^2}{r_k^T v_k}\, u_k. \tag{56} $$

It generates the residual form:

$$ r_{k+1} = r_k - \beta_k \frac{\|r_k\|^2}{r_k^T v_k}\, v_k. \tag{57} $$

Taking the squared norms of both sides yields

$$ \|r_{k+1}\|^2 = \|r_k\|^2 - 2 \beta_k \|r_k\|^2 + \beta_k^2 \frac{\|r_k\|^4}{(r_k^T v_k)^2} \|v_k\|^2. \tag{58} $$

Dividing both sides by ‖r_k‖² renders

$$ a_k \beta_k^2 - 2 \beta_k + 1 - \frac{\|r_{k+1}\|^2}{\|r_k\|^2} = 0, \tag{59} $$

where

$$ a_k := \frac{\|r_k\|^2 \|v_k\|^2}{(r_k^T v_k)^2} \ge 1. \tag{60} $$

By means of Equation (47), Equation (59) changes to

$$ a_k \beta_k^2 - 2 \beta_k + 1 - \frac{Q_k}{Q_{k+1}} = 0. \tag{61} $$

If we take

$$ \beta_k = \frac{1}{a_k} = \frac{(r_k^T v_k)^2}{\|r_k\|^2 \|v_k\|^2}, \qquad \frac{Q_k}{Q_{k+1}} = 1 - \frac{1}{a_k}, \tag{62} $$

Equation (61) is satisfied. In terms of β_k, Equation (56) is recast as

$$ x_{k+1} = x_k + \frac{(r_k^T v_k)^2}{\|r_k\|^2 \|v_k\|^2} \frac{\|r_k\|^2}{r_k^T v_k}\, u_k = x_k + \frac{r_k^T v_k}{\|v_k\|^2}\, u_k. \tag{63} $$

Noticing v_k = A u_k, Equation (63) is just the iterative Equation (18) for ROGMRES.    □
Remark 3.
For the factor defined in Equation (23), η_k plays a vital role in Equation (34), dominating the orthogonality property of GMRES, and also in Equation (35), controlling the residual-decreasing property of GMRES. When GMRES blows up for some problems, introducing η_k into GMRES stabilizes the iterations and avoids the blow-up. From this viewpoint, η_k is a stabilization factor for GMRES.
Inspired by Remark 3, two simple methods can be developed for GMRES and GMRES(m): we compute η_k at each step and take |η_k − 1| > 10^{-10} or η_k < 1/2 as the stopping criteria of the iterations; they are labeled GMRES(η) and GMRES(m, η), respectively.
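These stopping tests can be sketched as follows; the helper name eta_stop and the tolerance argument are illustrative only.

```python
import numpy as np

def eta_stop(r, Au, tol=1e-10):
    """Stopping tests of GMRES(eta)/GMRES(m, eta): stop when eta_k drifts from 1,
    or when eta_k < 1/2 (the residual would no longer decrease, by Theorem 1)."""
    eta = (r @ Au) / (Au @ Au)
    return abs(eta - 1.0) > tol or eta < 0.5
```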
Remark 4.
The quantity h given in Equation (21) is the decrease of the residual, whose maximal value, by means of Equation (36), is

$$ \max h = \frac{(r_k^T A u_k)^2}{\|A u_k\|^2}. \tag{64} $$

Upon comparing with a_k in Equation (60), we have

$$ \max h = \frac{\|r_k\|^2}{a_k}. \tag{65} $$

Because a_k ≥ 1, the best value of max h that can be obtained is ‖r_k‖² when a_k = 1; in this situation, it directly leads to the exact solution at the (k+1)th step, owing to r_{k+1} = 0, which is obtained by inserting A u_k = r_k into Equation (38). In this sense, a_k can also be an objective function. a_0 correlates to f = (r_0^T A u)² / ‖A u‖², which is the quantity of the projection of r_0 on L_m = A K_m, by

$$ a_0 = \frac{\|r_0\|^2}{f}. \tag{66} $$

In the next section, we will maximize f, i.e., minimize a_0, to find the best descent vector u.

4. Orthogonalized Maximal Projection Algorithm

As seen from Equation (38), we must make ‖A u‖² as large as possible. As mentioned in Section 1, A u = A V y is the projection of r_0 on L_m = A K_m. Therefore, we consider

$$ \max_{u} f = \frac{(r_0 \cdot v)^2}{\|v\|^2} \tag{67} $$

to be the maximal projection of r_0 on the direction v/‖v‖, where v = A u. In view of Equation (66), the maximization of f is equivalent to the minimization of a_0.
We can expand u ∈ r_0 + K_m via

$$ u = r_0 + V y, \tag{68} $$

which is slightly different from the descent vector u = V y ∈ K_m used in GMRES.
Theorem 4.
For u ∈ r_0 + K_m(A, r_0), r_0 ≠ 0, the optimal u approximately satisfying Equation (2) and subject to Equation (67) is given by Equation (68), where

$$ W = A V, \quad C = W^T W, \tag{69} $$

$$ C y = W^T (r_0 - A r_0). \tag{70} $$

Proof. 
Due to v = A u and Equation (68),

$$ v = v_0 + W y, \tag{71} $$

where W = A V and

$$ v_0 = A r_0. \tag{72} $$

By Equation (71), one has

$$ r_0 \cdot v = r_0 \cdot v_0 + r_0^T W y, \tag{73} $$

$$ \|v\|^2 = \|v_0\|^2 + 2\, v_0^T W y + y^T C y, \tag{74} $$

where

$$ C := W^T W > 0 \in \mathbb{R}^{m \times m}. \tag{75} $$

Resorting to the maximality condition for f in Equation (67), we can obtain

$$ (r_0 \cdot v)\, y_2 - \|v\|^2\, y_1 = 0, \tag{76} $$

where

$$ y_1 := \frac{\partial}{\partial y} (r_0 \cdot v) = W^T r_0, \tag{77} $$

$$ y_2 := \frac{1}{2} \frac{\partial}{\partial y} \|v\|^2 = W^T v_0 + C y. \tag{78} $$

From Equation (76), y_2 is equal to y_1:

$$ y_2 = \eta^{-1} y_1 \ \Longrightarrow\ y_2 = y_1, \tag{79} $$

because of

$$ \eta^{-1} = \frac{\|v\|^2}{r_0 \cdot v} = \frac{\|A u\|^2}{r_0^T A u} = 1, \tag{80} $$

according to Corollary 1.
Then, it follows from Equations (77)–(79) that

$$ C y + W^T v_0 = W^T r_0, \tag{81} $$

which can be arranged into Equation (70).    □
We pull back the minimization problem used in the GMRES method to the following one:

$$ \min \left\{ \|r_0 - v\|^2 \right\} = \min \left\{ \|r_0 - A u\|^2 \right\}, \tag{82} $$

which is derived from Equations (6) and (7).
Theorem 5.
For u ∈ r_0 + K_m(A, r_0), derived from the minimization problem in Equation (82), the u in Equation (2) is optimized to be that in Theorem 4.
Proof. 
From

$$ \frac{\partial}{\partial y} \|r_0 - v\|^2 = \frac{\partial}{\partial y} \|v\|^2 - 2 \frac{\partial}{\partial y} (r_0 \cdot v) = 0, \tag{83} $$

and Equations (78) and (77), it follows that

$$ W^T v_0 + C y = W^T r_0, \tag{84} $$

which is just Equation (70).    □
According to Theorems 4 and 5, the maximal projection is equivalent to the minimization of the residual. Therefore, the maximal projection algorithm (MPA) is an extension of GMRES to a larger space, the affine Krylov subspace, rather than merely the Krylov subspace.
By considering Lemma 1 and Theorem 5, we arrive at an improvement of ROGMRES, as well as a double improvement of GMRES, with the following iterative formulas for the orthogonalized maximal projection algorithm (OMPA):

$$ (W^T W) y = W^T (r_0 - A r_0), \tag{85} $$

$$ u = r_0 + V y, \tag{86} $$

$$ x = x_0 + \frac{r_0^T A u}{\|A u\|^2}\, u. \tag{87} $$

Equation (85) can be further simplified in the next section. The algorithms OMPA (Algorithm 6) and OMPA(m) (Algorithm 7) are given as follows.
Algorithm 6: OMPA
1: Give m_0, x_0, and ε
2: Do m = m_0, … (k = m − m_0), until ‖r_k‖ < ε
3:   r_k = b − A x_k
4:   v_1^k = r_k / ‖r_k‖
5:   V_k = [v_1^k, …, v_m^k]  (by Arnoldi process)
6:   W_k = A V_k
7:   Solve (W_k^T W_k) y_k = W_k^T (r_k − A r_k)
8:   u_k = r_k + V_k y_k
9:   x_{k+1} = x_k + (r_k^T A u_k / ‖A u_k‖²) u_k
Algorithm 7: OMPA(m)
1: Give m_0, m_1, x_0, and ε
2: Do j = 1, …
3:   Do m = m_0, …, m_1 (k = m − m_0)
4:     r_k = b − A x_k
5:     v_1^k = r_k / ‖r_k‖
6:     V_k = [v_1^k, …, v_m^k]  (by Arnoldi process)
7:     W_k = A V_k
8:     Solve (W_k^T W_k) y_k = W_k^T (r_k − A r_k)
9:     u_k = r_k + V_k y_k
10:    x_{k+1} = x_k + (r_k^T A u_k / ‖A u_k‖²) u_k
11:    If ‖r_{k+1}‖ < ε, stop
12:    Otherwise, x_0 = x_{k+1}; go to 2
In Algorithm OMPA(m), if we take m_1 = m_0, then we return to the usual iterative algorithm with a fixed dimension, m = m_0, of the Krylov subspace. In Algorithm OMPA(m), m = m_1 − m_0 + 1 is the restart frequency.
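For illustration, one OMPA correction (steps 3–9 of Algorithm 6) can be sketched in NumPy as follows, again reusing the arnoldi() helper from Section 2; the function name ompa_step is an illustrative choice.

```python
import numpy as np

def ompa_step(A, b, x, m):
    """One OMPA correction: affine descent vector plus the re-orthogonalized update."""
    r = b - A @ x                                        # r_k
    V, _ = arnoldi(A, r, m)                              # Krylov basis V_k
    W = A @ V                                            # W_k = A V_k
    y = np.linalg.solve(W.T @ W, W.T @ (r - A @ r))      # Equation (85)
    u = r + V @ y                                        # Equation (86): u in r_k + K_m
    Au = A @ u
    return x + ((r @ Au) / (Au @ Au)) * u                # Equation (87)
```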
Remark 5.
In view of Equations (67) and (80), f and η = 1 have the following relation:

$$ f = (r_0 \cdot v)\, \eta = r_0 \cdot v = \|A u\|^2. \tag{88} $$

Therefore, the maximization of f is equivalent to the maximization of the length of the projection vector A u. By means of Equation (64),

$$ f = \max h; \tag{89} $$

hence,

$$ \max_{u} f = \max_{u} \max_{\eta} \left( 2 \eta\, r_0^T A u - \eta^2 \|A u\|^2 \right) \tag{90} $$

follows from Equation (22). Through this double maximization, we have derived the maximal projection algorithm.

5. An Affine GMRES Method

In the derivation of GMRES, the following technique was employed [11]:

$$ \min_{y} \| r_0 e_1 - \bar{H}_m y \|, \tag{91} $$

where r_0 = ‖r_0‖.
Instead of Equation (91), used in the GMRES method to develop the iterative algorithm with u ∈ K_m(A, r_0), we seek the descent vector, u, in the larger subspace u ∈ r_0 + K_m(A, r_0) by

$$ \min_{y} \| r_0 - A r_0 - W y \|, \tag{92} $$

which is obtained from Equation (82) by inserting Equation (86) for u and using W = A V.
Now, Equation (92) is re-written as

$$ \min_{y} \| r_0 - A r_0 - A V_m y \|. \tag{93} $$

We denote by V_m the n × m-dimensional Arnoldi matrix and by V_{m+1} the n × (m+1)-dimensional Arnoldi matrix. It is known that

$$ A V_m = V_{m+1} \bar{H}_m, \quad V_m^T V_m = I_m, \quad V_{m+1}^T V_{m+1} = I_{m+1}. \tag{94} $$
Theorem 6.
For u ∈ r_0 + K_m(A, r_0), derived from Equation (93), the orthogonalized affine GMRES possesses the following iterative formulas:

$$ (\bar{H}_m^T \bar{H}_m) y = r_0 \bar{H}_m^T e_1 - r_0 [\bar{H}_m^T \bar{H}_m]_1, \tag{95} $$

$$ u = r_0 + V_m y, \tag{96} $$

$$ x = x_0 + \frac{r_0^T A u}{\|A u\|^2}\, u, \tag{97} $$

where r_0 = ‖r_0‖, and [H̄_m^T H̄_m]_1 is the first column of H̄_m^T H̄_m.
Proof. 
The objective ‖r_0 − A r_0 − A V_m y‖ in Equation (93) can be simplified as follows. In terms of the first Krylov vector v_1 = r_0/‖r_0‖, where r_0 = ‖r_0‖, and using the first identity in Equation (94), we have

$$ \| r_0 - A r_0 - A V_m y \| = \| r_0 v_1 - r_0 A v_1 - V_{m+1} \bar{H}_m y \|. \tag{98} $$

We write

$$ v_1 = V_{m+1} e_1, \qquad A v_1 = A V_m e_2 = V_{m+1} \bar{H}_m e_2, \tag{99} $$

where e_2 is the first column of I_m. Then, it follows that

$$ r_0 - A r_0 - A V_m y = V_{m+1} \left[ r_0 e_1 - r_0 \bar{H}_m e_2 - \bar{H}_m y \right]. \tag{100} $$

Taking the squared norms of both sides yields

$$ \| r_0 - A r_0 - A V_m y \|^2 = [ r_0 e_1 - r_0 \bar{H}_m e_2 - \bar{H}_m y ]^T V_{m+1}^T V_{m+1} [ r_0 e_1 - r_0 \bar{H}_m e_2 - \bar{H}_m y ] = \| r_0 e_1 - r_0 \bar{H}_m e_2 - \bar{H}_m y \|^2, \tag{101} $$

where the last identity in Equation (94) was used.
The minimization problem in Equation (93) is now simplified to

$$ \min_{y} \| r_0 e_1 - r_0 \bar{H}_m e_2 - \bar{H}_m y \|. \tag{102} $$

To satisfy the minimality condition, we can derive Equation (95); using the technique in Lemma 1, Equation (97) can be derived.    □
Remark 6.
Notice that the minimization problem in Equation (102) is slightly different from that in Equation (91): an extra term, r_0 H̄_m e_2, appears in Equation (102). However, the resulting affine GMRES (A-GMRES) method (Algorithm 8) in the affine Krylov subspace can significantly improve the performance over GMRES when the orthogonality is taken into account. We remind the reader that steps (6)–(8) of the following Algorithm A-GMRES are slightly more computationally expensive than steps (6) and (7) of Algorithm GMRES. Also, steps (7)–(9) of the following Algorithm A-GMRES(m) (Algorithm 9) are slightly more computationally expensive than steps (7) and (8) of Algorithm GMRES(m).
Algorithm 8: A-GMRES
1: Give m_0, x_0, and ε
2: Do m = m_0, … (k = m − m_0)
3:   r_k = b − A x_k
4:   v_1^k = r_k / ‖r_k‖
5:   V_k = [v_1^k, …, v_m^k]  (by Arnoldi process)
6:   Solve (H̄_k^T H̄_k) y_k = ‖r_k‖ H̄_k^T e_1 − ‖r_k‖ [H̄_k^T H̄_k]_1
7:   u_k = r_k + V_k y_k
8:   x_{k+1} = x_k + (r_k^T A u_k / ‖A u_k‖²) u_k
9:   If ‖r_{k+1}‖ < ε, stop
Algorithm 9: A-GMRES(m)
1: Give m_0, m_1, x_0, and ε
2: Do j = 1, …
3:   Do m = m_0, …, m_1 (k = m − m_0)
4:     r_k = b − A x_k
5:     v_1^k = r_k / ‖r_k‖
6:     V_k = [v_1^k, …, v_m^k]  (by Arnoldi process)
7:     Solve (H̄_k^T H̄_k) y_k = ‖r_k‖ H̄_k^T e_1 − ‖r_k‖ [H̄_k^T H̄_k]_1
8:     u_k = r_k + V_k y_k
9:     x_{k+1} = x_k + (r_k^T A u_k / ‖A u_k‖²) u_k
10:    If ‖r_{k+1}‖ < ε, stop
11:    Otherwise, x_0 = x_{k+1}; go to 2
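For illustration, steps (3)–(9) of one A-GMRES correction can be sketched in NumPy as follows, reusing the arnoldi() helper from Section 2; the function name a_gmres_step is an illustrative choice.

```python
import numpy as np

def a_gmres_step(A, b, x, m):
    """One A-GMRES correction (Theorem 6): affine small system plus re-orthogonalized update."""
    r = b - A @ x                                  # r_k
    rho = np.linalg.norm(r)                        # ||r_k||
    V, H_bar = arnoldi(A, r, m)                    # V_k and H_bar_k
    G = H_bar.T @ H_bar                            # m x m matrix of the small system
    e1 = np.zeros(m + 1)
    e1[0] = 1.0
    rhs = rho * (H_bar.T @ e1) - rho * G[:, 0]     # Equation (95): extra term -||r_k|| [G]_1
    y = np.linalg.solve(G, rhs)
    u = r + V @ y                                  # Equation (96)
    Au = A @ u
    return x + ((r @ Au) / (Au @ Au)) * u          # Equation (97)
```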

6. Examples

In this section, we apply the algorithms GMRES, GMRES(m), A-GMRES, A-GMRES(m), ROGMRES, ROGMRES(m), OMPA, and MPA(m) to solve some examples endowed with a symmetric cyclic matrix, a randomized matrix, an ill-posed Cauchy problem, a diagonally dominant matrix, and a highly ill-conditioned matrix. They are subjected to the convergence criterion ‖r_k‖ < ε, where ε is the error tolerance.

6.1. Example 1: Cyclic Matrix Linear System

A cyclic matrix, A, is generated from the first row (1, …, n). The exact solution is assumed to be x_i = 1, i = 1, …, n, and b can be obtained from Equation (1). First, we apply GMRES to this problem with n = 300 and ε = 10^{-10}. The initial values are x_i^0 = 1 + 1/i, i = 1, …, n. For this problem, we encounter the situation that η_k < 1 after 150 steps, not η_k = 1, as shown in Figure 1a by a solid line. As shown in Figure 1b, the residuals blow up for GMRES.
This problem shows that the original GMRES method cannot be applied to solve this linear system, since η_k < 1 occurs after 150 steps, and the iteration of GMRES blows up after around 157 steps. In this situation, by Equation (35),

$$ \|r_{k+1}\| > \|r_k\|, $$

the residual obtained by GMRES grows step by step; hence, the GMRES method is no longer stable after 157 steps.
To overcome this drawback of GMRES, we can either relax the convergence criterion or employ the GMRES(η) version, which terminates before the divergence.
Table 1 compares the maximum error (ME) and number of steps (NS) for different methods under ε = 3 × 10^{-10}. m_0 = 10 is used in GMRES and ROGMRES; m_0 = 1 is used in GMRES(η); m_0 = 30 and m_1 = 40 are used in GMRES(m); m_0 = 35 and m_1 = 40 are used in ROGMRES(m) and GMRES(m, η); m_0 = 50 and m_1 = 50 are used in MPA(m) and A-GMRES(m). As shown in Figure 1a by a dashed line and a dashed–dotted line, both ROGMRES and ROGMRES(m) can keep the value η_k = 1 up to the termination; as shown in Figure 1b, they converge faster than GMRES. ROGMRES(m) converges faster than ROGMRES. This example shows that ROGMRES can stabilize GMRES when it is unstable, as shown in Figure 1b.
It is interesting that MPA(m), with a fixed value m = m_0 = m_1 = 50, can attain a highly accurate solution within ten steps, as shown in Figure 1b; A-GMRES(m) obtains ME = 5.88 × 10^{-14} within 15 steps by using m = m_0 = m_1 = 50. If the orthogonality is not adopted in A-GMRES(m), ME = 7.51 × 10^{-14} is obtained within 14 steps; this means that η_k = 1 can be kept even though the orthogonalization technique was not employed. Within ten steps, A-GMRES with m_0 = 50 can obtain ME = 7.06 × 10^{-14}. Within nine steps, A-GMRES(m) with m_0 = 50 and m_1 = 51 can obtain ME = 8.02 × 10^{-14}.
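For reference, the test problem above can be set up as sketched below. The paper does not spell out the cyclic construction; building a circulant matrix whose rows are cyclic shifts of the first row (1, …, n) is one plausible reading, and the variable names are illustrative.

```python
import numpy as np

n = 300
first_row = np.arange(1.0, n + 1)                         # first row (1, 2, ..., n)
A = np.array([np.roll(first_row, i) for i in range(n)])   # rows as cyclic shifts (assumed)
x_exact = np.ones(n)
b = A @ x_exact                                           # b from Equation (1)
x0 = 1.0 + 1.0 / np.arange(1, n + 1)                      # initial values x_i^0 = 1 + 1/i
```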

6.2. Example 2: Random Matrix Linear System

The coefficient matrix A of size 50 × 50 is randomly generated with 1 < a_ij < 2. The exact solution is x_i = 1, i = 1, …, 50, and b can be obtained from Equation (1).
We consider the initial values x_i^0 = 0.5 and take ε = 10^{-10}. Table 2 compares ME and NS for different methods under ε = 10^{-10}. m_0 = 10 is used in ROGMRES, OMPA, A-GMRES, and GMRES(η); m_0 = 30 and m_1 = 40 are used in GMRES(m), ROGMRES(m), and GMRES(m, η). ROGMRES is better than OMPA and A-GMRES; the other methods are not accurate. For this example, the restarted ROGMRES(m) does not perform well.

6.3. Example 3: Inverse Cauchy Problem and Boundary Value Problem

Consider the inverse Cauchy problem:
$$ \Delta u = u_{rr} + \frac{1}{r} u_r + \frac{1}{r^2} u_{\theta\theta} = 0, $$

$$ u(\rho, \theta) = h(\theta), \quad 0 \le \theta \le \pi, $$

$$ u_n(\rho, \theta) = g(\theta), \quad 0 \le \theta \le \pi. $$

The method of fundamental solutions is taken as

$$ u(x) = \sum_{j=1}^{n} c_j U(x, s_j), \quad s_j \in \Omega^c, $$

where

$$ U(x, s_j) = \ln r_j, \quad r_j = \| x - s_j \|. $$

We consider

$$ u(x, y) = \cos x \cosh y + \sin x \sinh y, $$

$$ \rho(\theta) = \exp(\sin\theta) \sin^2(2\theta) + \exp(\cos\theta) \cos^2(2\theta), $$

where u(x, y) is an exact solution of the Laplace equation, and ρ(θ), 0 ≤ θ ≤ 2π, describes the boundary contour in polar coordinates.
By GMRES with n = 60, the residual over 60 iterations is shown in Figure 2a; it does not satisfy the convergence criterion with ε = 10^{-2}. If we run it to 70 steps, GMRES blows up. In contrast, ROGMRES converges within 72 steps. As compared in Figure 2b and shown in Figure 2c, the ME obtained by ROGMRES is smaller than 0.27. However, GMRES obtains a worse result with ME = 0.5.
Because the inverse Cauchy problem is highly ill-posed, GMRES does not converge even within the loose convergence criterion ε = 10^{-2}, and after 70 steps it blows up. It is apparent that GMRES cannot solve the Cauchy problem adequately. The ROGMRES method is stable for the Cauchy problem and improves the accuracy to 0.27; however, there is still room to improve the methods based on the Krylov subspace for solving the Cauchy problem.
Next, we consider the mixed boundary value problem. Under ε = 10^{-3}, for GMRES(m) with m_0 = 3 and m_1 = 10, as shown in Figure 3a, η_k becomes irregular after 12 steps; this causes the residuals to decrease non-monotonically, as shown in Figure 3b. GMRES(m) does not converge within 200 steps and obtains an incorrect solution with ME = 0.92. If η_k < 1/2 is used as a stopping criterion, ME = 6.64 × 10^{-3} is obtained after 14 steps. When the ROGMRES method is applied, we obtain ME = 4.16 × 10^{-4} with 187 steps. As shown in Figure 3a by a dashed line, η_k = 1 is kept well, and the residuals decrease monotonically, as shown in Figure 3b by a dashed line for ROGMRES. ME = 4.87 × 10^{-4} is obtained after 167 steps by OMPA with m_0 = 4, which converges faster than ROGMRES. ME = 4.09 × 10^{-4} is obtained after 109 steps by A-GMRES with m_0 = 4, which converges faster than ROGMRES and OMPA.

6.4. Example 4: Diagonal Dominant Linear System

Consider an example borrowed from Morgan [27]. The matrix A has main diagonal elements running from 1 to 1000, superdiagonal elements equal to 0.1, and all other elements zero. We take b = 1.
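A minimal sketch of this test matrix is given below; the variable names are illustrative.

```python
import numpy as np

n = 1000
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * np.diag(np.ones(n - 1), k=1)  # diag 1..1000, superdiag 0.1
b = np.ones(n)                                                            # right-hand side b = 1
```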
Table 3 compares the number of steps (NS) for different methods under ε = 10^{-10}. m_0 = 1 is used in OMPA and A-GMRES; m_0 = 1 and m_1 = 25 are used in the original GMRES(m), ROGMRES(m), and the current GMRES(m). The NS of ROGMRES(m) and the current GMRES(m) are the same, because η_k = 1 holds for the current GMRES(m) before convergence. Figure 4 compares the residuals.
The method in Equation (18) is an automatically orthogonality-preserving method. When the OMPA and A-GMRES methods do not employ it, the results are drastically different, as shown in Figure 5. Because η = 1 is not preserved, both OMPA and A-GMRES diverge very quickly after 40 steps.

6.5. Example 5: Densely Clustered Non-Symmetric Linear System

Consider an example with a_ij = (i + j/2)/n, i, j = 1, …, n, where n = 2000. Suppose that x_i = 1, i = 1, …, 2000, is the exact solution, and the initial values are x_i^0 = 0, i = 1, …, 2000. The values of the matrix elements are densely clustered in the narrow range a_ij ∈ [0.00075, 1.5]. This problem is a highly ill-conditioned non-symmetric linear system with a condition number over the order of 10^{15}. Table 4 compares the maximal errors (MEs) and number of steps (NS) for different methods under ε = 10^{-10}. m_0 = 1 is used in OMPA and A-GMRES; m_0 = 1 and m_1 = 3 are used in the current GMRES(m), and m_0 = 2 and m_1 = 3 are used in ROGMRES(m). ROGMRES(m) converges faster than the other algorithms. Even for this highly ill-conditioned problem, the proposed novel methods are highly efficient in finding accurate solutions.
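A minimal sketch of this densely clustered test matrix is given below; the variable names are illustrative.

```python
import numpy as np

n = 2000
i = np.arange(1, n + 1).reshape(-1, 1)     # row indices i = 1..n
j = np.arange(1, n + 1).reshape(1, -1)     # column indices j = 1..n
A = (i + j / 2) / n                        # a_ij = (i + j/2)/n, entries in [0.00075, 1.5]
x_exact = np.ones(n)
b = A @ x_exact
x0 = np.zeros(n)                           # initial values x_i^0 = 0
```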

7. Conclusions

In this paper, the GMRES method was re-examined from the two aspects of preserving the orthogonality and maximizing the decrease in the length of the residual vector, both of which can significantly enhance the stability and accelerate the convergence speed of GMRES. If η_k = r_k^T A u_k / ‖A u_k‖² in GMRES is kept at η_k = 1 during the iterations, GMRES is stable; otherwise, it is unstable. For any iterative form x_{k+1} = x_k + u_k, we can improve it to x_{k+1} = x_k + (r_k^T A u_k / ‖A u_k‖²) u_k, such that the orthogonality is preserved automatically and, simultaneously, the length of the residual vector is reduced maximally. This brings the GMRES method to the re-orthogonalized GMRES (ROGMRES) method, which preserves the orthogonality automatically and has the property of absolute convergence. We also derived the new iterative form x_{k+1} = x_k + (r_k^T A u_k / ‖A u_k‖²) u_k from the invariant-manifold theory.
In order to find a better descent vector, we solved a maximal projection problem in a larger, affine Krylov subspace to derive the descent vector. We proved that the maximal projection is equivalent to the minimal residual length in the m-dimensional affine Krylov subspace. After taking the automatically orthogonality-preserving method into account, a new orthogonalized maximal projection algorithm (OMPA) was developed. We derived formulas similar to those of GMRES in the affine Krylov subspace, namely the affine GMRES (A-GMRES) method, which had superior performance for the problems tested in the paper; it is even better than OMPA for some problems. The A-GMRES algorithm possesses two advantages: it solves a simpler least-squares problem in the affine Krylov subspace, and it preserves the orthogonality automatically.
Through the studies conducted in this paper, the novelties and main contributions are summarized as follows.
  • GMRES was examined from the viewpoints of maximizing the decrease in the residual and maintaining the orthogonality of consecutive residual vectors.
  • A stabilization factor, η, measuring the deviation from orthogonality was inserted into GMRES to preserve the orthogonality automatically.
  • GMRES was improved by seeking the descent vector in a larger space, the affine Krylov subspace.
  • A new orthogonalized maximal projection algorithm, OMPA, was derived; a new affine A-GMRES was derived.
  • The new algorithms ROGMRES, OMPA, and A-GMRES guarantee absolute convergence.
  • Through examples, we showed that the orthogonalization techniques are useful to stabilize the methods of GMRES, MPA, and A-GMRES.
  • Numerical tests for five different problems confirmed that the methods ROGMRES, ROGMRES(m), OMPA, OMPA(m), A-GMRES, and A-GMRES(m) can significantly accelerate the convergence speed compared with the original methods GMRES and GMRES(m).

Author Contributions

Conceptualization, C.-S.L. and C.-W.C.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L., C.-W.C. and C.-L.K.; Validation, C.-S.L., C.-W.C. and C.-L.K.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L., C.-W.C. and C.-L.K.; Resources, C.-S.L. and C.-W.C.; Data curation, C.-S.L., C.-W.C. and C.-L.K.; Writing—original draft, C.-S.L. and C.-W.C.; Writing—review & editing, C.-W.C.; Visualization, C.-S.L., C.-W.C. and C.-L.K.; Supervision, C.-S.L. and C.-W.C.; Project administration, C.-W.C.; Funding acquisition, C.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Krylov, A.N. On the numerical solution of equation by which are determined in technical problems the frequencies of small vibrations of material systems. Izv. Akad. Nauk. SSSR 1931, 7, 491–539.
  2. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Pittsburgh, PA, USA, 2003.
  3. van der Vorst, H.A. Iterative Krylov Methods for Large Linear Systems; Cambridge University Press: New York, NY, USA, 2003.
  4. Meurant, G.; Tabbens, J.D. Krylov Methods for Non-Symmetric Linear Systems: From Theory to Computations; Springer Series in Computational Mathematics; Springer: Berlin/Heidelberg, Germany, 2020; Volume 57.
  5. Godunov, S.K.; Rozhkovskaya, T. Modern Aspects of Linear Algebra; Translations of Mathematical Monographs; AMS: Little Rock, AR, USA, 1998.
  6. Godunov, S.K.; Gordienko, V.M. The Krylov space and the Kalman equation. Sib. Zhurnal Vychislitel'Noi Mat. 1998, 1, 5–10.
  7. van Den Eshof, J.; Sleijpen, G.L.G. Inexact Krylov subspace methods for linear systems. SIAM J. Matrix Anal. Appl. 2004, 26, 125–153.
  8. Bai, Z.Z. Motivations and realizations of Krylov subspace methods for large sparse linear systems. J. Comput. Appl. Math. 2015, 283, 71–78.
  9. Simoncini, V.; Szyld, D.B. Recent computational developments in Krylov subspace methods for linear systems. Numer. Linear Algebra Appl. 2007, 14, 1–59.
  10. Bouyghf, F.; Messaoudi, A.; Sadok, H. A unified approach to Krylov subspace methods for solving linear systems. Numer. Algor. 2024, 96, 305–332.
  11. Saad, Y.; Schultz, M.H. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7, 856–869.
  12. Liu, C.S. A maximal projection solution of ill-posed linear system in a column subspace, better than the least squares solution. Comput. Math. Appl. 2014, 67, 1998–2014.
  13. Liu, C.S.; Kuo, C.L.; Chang, C.W. Solving least-squares problems via a double-optimal algorithm and a variant of Karush-Kuhn-Tucker equation for over-determined system. Algorithms 2024, 17, 211.
  14. Morgan, R.B. Implicitly restarted GMRES and Arnoldi methods for nonsymmetric systems of equations. SIAM J. Matrix Anal. Appl. 2000, 21, 1112–1135.
  15. Baker, A.H.; Jessup, E.R.; Manteuffel, T. A technique for accelerating the convergence of restarted GMRES. SIAM J. Matrix Anal. Appl. 2005, 26, 962–984.
  16. Zou, Q. GMRES algorithms over 35 years. Appl. Math. Comput. 2023, 445, 127869.
  17. Thomas, S.; Carson, E.; Rozložník, M. Iterated Gauss–Seidel GMRES. SIAM J. Sci. Comput. 2023, 46, S254–S279.
  18. Imakura, A.; Sogabe, T.; Zhang, S.L. An efficient variant of the GMRES(m) method based on the error equations. East Asian J. Appl. Math. 2012, 2, 19–32.
  19. Imakura, A.; Sogabe, T.; Zhang, S.L. A look-back-type restart for the restarted Krylov subspace methods to solve non-Hermitian systems. Jpn. J. Ind. Appl. Math. 2018, 35, 835–859.
  20. Benzi, M.; Meyer, C.D.; Tuma, M. A sparse approximate inverse preconditioner for the conjugate gradient method. SIAM J. Sci. Comput. 1996, 17, 1135–1149.
  21. Benzi, M.; Tuma, M. A sparse approximate inverse preconditioner for nonsymmetric linear systems. SIAM J. Sci. Comput. 1998, 19, 968–994.
  22. Benzi, M.; Cullum, J.K.; Tuma, M. Robust approximate inverse preconditioning for the conjugate gradient method. SIAM J. Sci. Comput. 2000, 22, 1318–1332.
  23. Benzi, M.; Bertaccini, D. Approximate inverse preconditioning for shifted linear systems. BIT Numer. Math. 2003, 43, 231–244.
  24. Baglama, J.; Calvetti, D.; Golub, G.H.; Reichel, L. Adaptively preconditioned GMRES algorithms. SIAM J. Sci. Comput. 1998, 20, 243–269.
  25. Bakhos, T.; Kitanidis, P.; Ladenheim, S.; Saibaba, A.K.; Szyld, D.B. Multipreconditioned GMRES for shifted systems. SIAM J. Sci. Comput. 2017, 39, S222–S247.
  26. Liu, Q.; Morgan, R.B.; Wilcox, W. Polynomial preconditioned GMRES and GMRES-DR. SIAM J. Sci. Comput. 2015, 37, S407–S428.
  27. Morgan, R.B. A restarted GMRES method augmented with eigenvectors. SIAM J. Matrix Anal. Appl. 1995, 16, 1154–1171.
  28. Baglama, J.; Reichel, L. Augmented GMRES-type methods. Numer. Linear Alg. Appl. 2007, 14, 337–350.
  29. Dong, Y.; Garde, H.; Hansen, P.C. R3GMRES: Including prior information in GMRES-type methods for discrete inverse problems. Electron. Trans. Numer. Anal. 2014, 42, 136–146.
  30. Carson, E.; Liesen, J.; Strakoš, Z. Towards understanding CG and GMRES through examples. Linear Alg. Appl. 2024, 692, 241–291.
  31. Erhel, J.; Burrage, K.; Pohl, B. Restarted GMRES preconditioned by deflation. J. Comput. Appl. Math. 1996, 69, 303–318.
  32. Baker, A.H.; Jessup, E.R.; Kolev, T.Z. A simple strategy for varying the parameter in GMRES(m). J. Comput. Appl. Math. 2009, 230, 751–761.
  33. Moriya, K.; Nodera, T. The deflated-GMRES(m,k) method with switching the restart frequency dynamically. Numer. Linear Alg. Appl. 2000, 7, 569–584.
  34. Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F. A new adaptive GMRES algorithm for achieving high accuracy. Numer. Linear Alg. Appl. 1998, 5, 275–297.
  35. Zhang, L.; Nodera, T. A new adaptive restart for GMRES(m) method. Austr. N. Z. Ind. Appl. Math. J. 2005, 46, 409–426.
Figure 1. For Example 1: (a) η of GMRES, ROGMRES, ROGMRES(m), and MPA(m); (b) the residuals: the residual obtained by GMRES blows up, while those obtained by ROGMRES, ROGMRES(m), and MPA(m) do not.
Figure 2. For Example 3, the inverse Cauchy problem: (a) the residuals, (b) comparison of the solutions, and (c) the errors obtained by GMRES and ROGMRES.
Figure 3. For Example 3 with mixed boundary conditions: (a) η and (b) the residuals obtained by GMRES(m), ROGMRES, OMPA, and A-GMRES; the residuals obtained by GMRES(m) fail to converge.
Figure 4. For Example 4, the residuals obtained by OMPA, A-GMRES, ROGMRES(m), the original GMRES(m), and the current GMRES(m), which coincides with that obtained by ROGMRES(m).
Figure 5. For Example 4, η obtained by OMPA and A-GMRES when the automatically orthogonality-preserving method is not taken into account; they fail.
Table 1. For Example 1, ME and NS obtained by different methods.

|    | GMRES(m) | ROGMRES | ROGMRES(m) | GMRES(η) | GMRES(m, η) | MPA(m) | A-GMRES(m) |
|----|----------|---------|------------|----------|-------------|--------|------------|
| ME | 7.39 × 10^{-14} | 7.22 × 10^{-14} | 6.50 × 10^{-14} | 8.22 × 10^{-14} | 9.04 × 10^{-14} | 5.66 × 10^{-14} | 5.88 × 10^{-14} |
| NS | 28 | 44 | 18 | 16 | 16 | 10 | 15 |
Table 2. For Example 2, ME and NS obtained by different methods.

|    | GMRES(m) | ROGMRES | OMPA | A-GMRES | ROGMRES(m) | GMRES(η) | GMRES(m, η) |
|----|----------|---------|------|---------|------------|----------|-------------|
| ME | 3.51 × 10^{-2} | 9.77 × 10^{-14} | 1.88 × 10^{-13} | 3.36 × 10^{-9} | 3.51 × 10^{-2} | 3.65 × 10^{-2} | 3.25 × 10^{-2} |
| NS | >550 | 51 | 43 | 52 | >550 | 45 | 163 |
Table 3. For Example 4, NS obtained by different methods.

|    | Original GMRES(m) | ROGMRES(m) | OMPA | A-GMRES | [27] | Current GMRES(m) |
|----|-------------------|------------|------|---------|------|------------------|
| NS | >300 | 44 | 32 | 31 | 300 | 44 |
Table 4. For Example 5, ME and NS obtained by different methods.

| Method | OMPA | A-GMRES | Current GMRES(m) | ROGMRES(m) |
|--------|------|---------|------------------|------------|
| NS | 4 | 3 | 9 | 2 |
| ME | 2.10 × 10^{-9} | 1.46 × 10^{-11} | 1.18 × 10^{-13} | 5.98 × 10^{-13} |
