Article

A Randomized Q-OR Krylov Subspace Method for Solving Nonsymmetric Linear Systems

Gérard Meurant, Retired Researcher, 75012 Paris, France
Mathematics 2025, 13(12), 1953; https://doi.org/10.3390/math13121953
Submission received: 22 April 2025 / Revised: 4 June 2025 / Accepted: 11 June 2025 / Published: 12 June 2025
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing for Applied Mathematics)

Abstract

The most popular iterative methods for solving nonsymmetric linear systems are Krylov methods. Recently, an optimal Quasi-ORthogonal (Q-OR) method was introduced, which yields the same residual norms as the Generalized Minimum Residual (GMRES) method, provided GMRES is not stagnating. In this paper, we study how to introduce matrix sketching in this algorithm. It allows us to reduce the dimension of the problem in one of the main steps of the algorithm.

1. Introduction

Let A be a real nonsingular nonsymmetric matrix of order n. So far, the most popular iterative methods for solving a nonsymmetric linear system A x = b , where b is a given real vector, are Krylov methods. Many of them can be classified as Quasi-ORthogonal (Q-OR) methods or Quasi-Minimal Residual (Q-MR) methods (see, for instance, [1]). All these methods use the same framework but differ by the basis which is chosen. Different possibilities for computing the basis are described in [1] (Chapter 4). Well-known examples of Krylov methods are FOM [2,3] and GMRES [4], which use an orthonormal basis. In [5], a Q-OR optimal method that minimizes the residual norm using a non-orthogonal basis was proposed. In most cases, it must give the same residual norms as GMRES, which also minimizes the residual norm, but uses an orthonormal basis computed with the Arnoldi process (see [4]).
In recent years, randomization techniques have been proposed to reduce the dimension of some problems in numerical linear algebra (see [6,7,8]). In this paper, we study how to introduce randomization and matrix sketching in the Q-OR optimal algorithm. Sketching is used to solve a least squares subproblem that must be solved at each iteration.
Section 2 recalls the Q-OR optimal method [1,5]. In Section 3, we describe some known techniques for matrix sketching. Section 4 shows how to use these techniques in the Q-OR optimal method. This is illustrated by a few numerical experiments described in Section 5, showing that, even though some monotonicity properties are lost, convergence is preserved for the randomized algorithm.

2. The Q-OR Optimal Method

Let $r_0 = b - A x_0$ be the initial residual vector. Let us assume that we have an ascending basis of the nested Krylov subspaces $K_k(A, r_0)$, which are defined as
$$K_k(A, r_0) = \mathrm{span}\{ r_0,\; A r_0,\; A^2 r_0,\; \ldots,\; A^{k-1} r_0 \}, \qquad k = 1, 2, \ldots$$
The dimension of these subspaces increases with $k$ up to $k_{\max} \le n$, where $k_{\max}$ is known as the grade of $A$ with respect to $r_0$. This means that, if $v_1, \ldots, v_k$ are the basis vectors of $K_k(A, r_0)$, then $v_1, \ldots, v_k, v_{k+1}$ are the basis vectors of $K_{k+1}(A, r_0)$, as long as $k + 1 \le k_{\max}$.
Such basis vectors satisfy what is called an Arnoldi relation,
$$A V_k = V_k H_k + h_{k+1,k}\, v_{k+1} e_k^T = V_{k+1} \underline{H}_k, \qquad (1)$$
where $H_k$ is an upper Hessenberg matrix with entries $h_{i,j}$, the columns of $V_k$ are the basis vectors $v_1, \ldots, v_k$, and $e_k$ is the last column of the identity matrix of order $k$. The matrix $\underline{H}_k$ is $H_k$ appended at the bottom with a $(k+1)$st row equal to $h_{k+1,k}\, e_k^T$.
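As an illustration of relation (1), the following sketch builds an orthonormal Krylov basis with the Arnoldi process (modified Gram–Schmidt variant) and checks the relation numerically on a random matrix; the function name and test data are purely illustrative, and the Q-OR optimal method described below uses a different, non-orthogonal basis.

```python
import numpy as np

def arnoldi(A, r0, k):
    """Build an orthonormal Krylov basis V_{k+1} and the (k+1) x k Hessenberg
    matrix H such that A V_k = V_{k+1} H (modified Gram-Schmidt Arnoldi)."""
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Check the Arnoldi relation on a random nonsymmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
r0 = rng.standard_normal(50)
V, H = arnoldi(A, r0, 10)
print(np.linalg.norm(A @ V[:, :10] - V @ H))   # should be at roundoff level
```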
The iterates $x_k$, $k \ge 1$, of Q-OR and Q-MR methods are sought as
$$x_k = x_0 + V_k y_k, \qquad (2)$$
for some unique vector $y_k \in \mathbb{R}^k$. Since we choose $v_1 = r_0 / \|r_0\|$, the residual vector $r_k$, defined as $r_k = b - A x_k$, is
$$r_k = b - A x_k = b - A x_0 - A V_k y_k = \|r_0\|\, V_k e_1 - A V_k y_k = V_k \big( \|r_0\|\, e_1 - H_k y_k \big) - h_{k+1,k}\, [y_k]_k\, v_{k+1}. \qquad (3)$$
In a Q-OR method, the $k$th iterate $x_k^O$ is defined (provided that $H_k$ is nonsingular) by computing $y_k = y_k^O$ in (2) as the solution of the linear system
$$H_k y_k = \|r_0\|\, e_1. \qquad (4)$$
This annihilates the term within the parentheses on the right side of (3). The iterates of the Q-OR method are $x_k^O = x_0 + \|r_0\|\, V_k H_k^{-1} e_1$, the residual vector $r_k^O$ is proportional to $v_{k+1}$, and
$$\|r_k^O\| = h_{k+1,k}\, \big| [y_k^O]_k \big|. \qquad (5)$$
In the case where $H_k$ is singular and $x_k^O$ is not defined, we define the residual norm as being infinite, $\|r_k^O\| = \infty$.
The residual vector in relation (3) can also be written as
$$r_k = V_{k+1} \big( \|r_0\|\, e_1 - \underline{H}_k y_k \big). \qquad (6)$$
Instead of removing the term within the parentheses on the right side of (3), we would like to minimize the norm of the residual itself. This is what is carried out in GMRES with an orthonormal basis [4]. Minimizing the norm of the residual may seem costly when the columns of the matrix $V_{k+1}$ are not orthonormal. However, we have
$$\|r_k\| \le \|V_{k+1}\|\, \big\|\, \|r_0\|\, e_1 - \underline{H}_k y_k \,\big\|.$$
In a general Q-MR method, the vector $y_k^M$ is computed as the solution of the least squares problem
$$\min_{y} \big\|\, \|r_0\|\, e_1 - \underline{H}_k y \,\big\|.$$
Note that $y_k^M$ does not minimize the norm of the residual, but the norm of what is called the quasi-residual, as follows:
$$z_k^M = \|r_0\|\, e_1 - \underline{H}_k y_k^M.$$
The Q-MR iterates are always defined, contrary to the Q-OR iterates when $H_k$ is singular. Note that the preceding definitions do not depend on the choice of the basis. It is a general framework that could use any basis. Q-OR and Q-MR methods, as well as their many interesting mathematical properties, are studied in detail in [1].
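To make the two choices of $y_k$ concrete, here is a minimal sketch (with hypothetical names) that, given the $(k+1) \times k$ matrix $\underline{H}_k$ and $\|r_0\|$, computes the Q-OR coordinates by solving the square Hessenberg system (4) and the Q-MR coordinates by solving the small least squares problem above; it also evaluates the Q-OR residual norm through relation (5).

```python
import numpy as np

def qor_qmr_steps(Hbar, beta):
    """Given the (k+1) x k Hessenberg matrix Hbar and beta = ||r_0||,
    return the Q-OR and Q-MR coordinate vectors y_k."""
    k = Hbar.shape[1]
    rhs = np.zeros(k + 1)
    rhs[0] = beta
    # Q-OR: solve the square system H_k y = beta * e_1 (H_k assumed nonsingular).
    y_or = np.linalg.solve(Hbar[:k, :], rhs[:k])
    # Q-MR: minimize the quasi-residual norm || beta * e_1 - Hbar y ||.
    y_mr = np.linalg.lstsq(Hbar, rhs, rcond=None)[0]
    # Relation (5): the Q-OR residual norm is h_{k+1,k} |[y_or]_k|.
    qor_resnorm = abs(Hbar[k, k - 1] * y_or[-1])
    return y_or, y_mr, qor_resnorm

# Small demo with a random 6 x 5 upper Hessenberg matrix.
rng = np.random.default_rng(0)
Hbar = np.triu(rng.standard_normal((6, 5)), k=-1)
y_or, y_mr, res_or = qor_qmr_steps(Hbar, 1.0)
```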
The Hessenberg matrices $H_k$ are unreduced since $h_{j+1,j} \ne 0$ for $j = 1, \ldots, k-1$. Therefore, they are nonderogatory and can be factorized as $H_k = U_k C^{(k)} U_k^{-1}$, where $U_k$ is an upper triangular matrix with $[U_k]_{1,1} = 1$, and $C^{(k)}$ is a companion matrix corresponding to the characteristic polynomial of $H_k$ (see [1]). The matrix $U_k$ is, in fact, a Krylov matrix:
$$U_k = \begin{pmatrix} e_1 & H_k e_1 & H_k^2 e_1 & \cdots & H_k^{k-1} e_1 \end{pmatrix}.$$
Clearly, $U_k$ is the principal submatrix of order $k$ of $U_{k+1}$. Let $\vartheta_{1,j}$ be the entries of the first row of $U_{k+1}^{-1}$. It is proved in [1,5] that, whatever the basis of the Krylov subspace is, the Q-OR residual norms satisfy
$$\frac{\|r_k^O\|}{\|r_0\|} = \frac{1}{|\vartheta_{1,k+1}|}, \qquad k = 0, 1, \ldots$$
As shown in [1,5], there exists a non-orthogonal basis such that $|\vartheta_{1,k+1}|$ is maximized; therefore, this basis minimizes the Q-OR residual norm. Assuming that $\vartheta_{1,k} \ne 0$ and $v_k^T A v_k \ne 0$, the next basis vector can be computed as follows:
$$\tilde v_k = A v_k - V_k s - \beta v_k, \qquad v_{k+1} = \frac{\tilde v_k}{\|\tilde v_k\|},$$
with
$$V_k^T V_k\, s = V_k^T A v_k, \qquad (9)$$
and
$$\beta = \frac{\alpha}{v_k^T A v_k}, \qquad \alpha = \|A v_k\|^2 - (V_k^T A v_k)^T s.$$
The first $k$ entries of the $k$th column of the upper Hessenberg matrix $\underline{H}_k$ are $h_{1:k,k} = s + \beta e_k$, and $h_{k+1,k} = \|\tilde v_k\|$. Moreover, we have $\vartheta_{1,1} = 1$, and
$$\vartheta_{1,k+1} = -\frac{1}{h_{k+1,k}} \sum_{j=1}^{k} \vartheta_{1,j} h_{j,k}, \qquad k = 1, \ldots, n-1.$$
At iteration $k$, we have to solve the linear system (9), whose matrix is symmetric positive definite as long as $V_k$ has rank $k$. In [5], this linear system was solved by incrementally computing the inverses of the triangular factors of the Cholesky factorization of $V_k^T V_k$. The details of the method, as described in [5], are shown as Algorithm 1. In this algorithm, the matrix $L_k$ contains the inverse of the Cholesky factor of $V_k^T V_k$. Preconditioning can be easily incorporated each time we have a product of the matrix $A$ with a vector.
Note that the modulus of ϑ 1 , k + 1 gives the inverse of the (relative) norm of the Q-OR residual at iteration k. Hence, we can compute the basis vectors v k , stop the iterations using ϑ 1 , k + 1 , and then reduce the upper Hessenberg matrix to an upper triangular form to compute the final approximate solution.
This method is named Q-ORoptinv because it minimizes the residual norm and uses the inverses of Cholesky factors. When, for all $k$, $v_k^T A v_k \ne 0$, it must give the same residual norms as GMRES. The reader may wonder why we have derived an algorithm which delivers the same residual norms as GMRES but with more floating point operations. The reason is that the dot products in Q-ORoptinv are all independent and they can be computed in parallel, contrary to the dot products in the modified Gram–Schmidt (MGS) implementation of GMRES.
As in GMRES, the storage increases at every iteration, so the algorithm can be restarted every m iterations to limit the needed storage.
Algorithm 1 Q-ORoptinv.
1: input $A$, $b$, $x_0$
2: —Initialization
3: $r_0 = b - A x_0$
4: $v_1 = r_0 / \|r_0\|$, $v_1^A = A v_1$, $L_1 = 1$
5: $\omega = v_1^T v_1^A$, $\alpha = (v_1^A)^T v_1^A - \omega^2$
6: $h_{1,1} = \omega + \alpha/\omega$
7: $\tilde v = v_1^A - h_{1,1} v_1$, $h_{2,1} = \|\tilde v\|$
8: $v_2 = \tilde v / h_{2,1}$, $v_2^A = A v_2$
9: $V_2 = \begin{pmatrix} v_1 & v_2 \end{pmatrix}$
10: $\vartheta_{1,1} = 1$, $\vartheta_{1,2} = -h_{1,1}/h_{2,1}$
11: $\vartheta = \begin{pmatrix} \vartheta_{1,1} & \vartheta_{1,2} \end{pmatrix}^T$
12: —End of initialization
13: for $k = 2, \ldots$ until convergence do
14:   $v_k^V = V_{k-1}^T v_k$, $v_k^{tA} = V_k^T v_k^A$
15:   $\ell_k = L_{k-1} v_k^V$, $y_k^T = \ell_k^T L_{k-1}$
16:   if $\ell_k^T \ell_k < 1$ then
17:     $\ell_{k,k} = \sqrt{1 - \ell_k^T \ell_k}$
18:   else
19:     $(p_k^v)^T = y_k^T V_{k-1}^T$, $\ell_{k,k} = \|v_k - p_k^v\|$
20:   end if
21:   $L_k = \begin{pmatrix} L_{k-1} & 0 \\ -\frac{1}{\ell_{k,k}} y_k^T & \frac{1}{\ell_{k,k}} \end{pmatrix}$
22:   $\ell^A = L_k v_k^{tA}$, $s = L_k^T \ell^A$
23:   $\alpha = (v_k^A)^T v_k^A - (\ell^A)^T \ell^A$, $\beta = \alpha / (v_k^{tA})_k$
24:   $h_{1:k,k} = \begin{pmatrix} h_{1,k} & \cdots & h_{k,k} \end{pmatrix}^T = s + \beta e_k$
25:   $\tilde v = v_k^A - V_k h_{1:k,k}$, $h_{k+1,k} = \|\tilde v\|$
26:   $\vartheta_{1,k+1} = -\frac{1}{h_{k+1,k}} \vartheta^T h_{1:k,k}$
27:   $\vartheta = \begin{pmatrix} \vartheta_{1,1} & \cdots & \vartheta_{1,k+1} \end{pmatrix}^T$
28:   $v_{k+1} = \tilde v / h_{k+1,k}$, $v_{k+1}^A = A v_{k+1}$
29:   $V_{k+1} = \begin{pmatrix} V_k & v_{k+1} \end{pmatrix}$
30:   if needed, solve $H_k y^{(k)} = \|r_0\|\, e_1$, $x_k = x_0 + V_k y^{(k)}$
31: end for
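For readers who prefer code, the following dense NumPy sketch is a direct, unoptimized transcription of Algorithm 1, without restarting or preconditioning; the function name and the stopping test based on $|\vartheta_{1,k+1}|$ are illustrative choices, and $A v_k$ is simply recomputed at the top of the loop instead of being carried over from the previous iteration.

```python
import numpy as np

def qor_opt_inv(A, b, x0, maxit=100, tol=1e-10):
    """Sketch of Algorithm 1 (Q-ORoptinv).  L holds the inverse of the Cholesky
    factor of V_k^T V_k; theta holds the first row of U_{k+1}^{-1}, so that
    1/|theta[-1]| is the relative Q-OR residual norm ||r_k^O|| / ||r_0||."""
    r0 = b - A @ x0
    beta0 = np.linalg.norm(r0)
    v1 = r0 / beta0
    v1A = A @ v1
    L = np.array([[1.0]])
    omega = v1 @ v1A
    alpha = v1A @ v1A - omega**2
    H = np.zeros((maxit + 1, maxit))
    H[0, 0] = omega + alpha / omega
    vt = v1A - H[0, 0] * v1
    H[1, 0] = np.linalg.norm(vt)
    V = np.column_stack((v1, vt / H[1, 0]))
    theta = np.array([1.0, -H[0, 0] / H[1, 0]])
    k = 1
    for k in range(2, maxit + 1):
        vk = V[:, k - 1]
        vkA = A @ vk                           # v_k^A, recomputed for simplicity
        vkV = V[:, :k - 1].T @ vk              # V_{k-1}^T v_k
        vktA = V[:, :k].T @ vkA                # V_k^T A v_k
        lk = L @ vkV                           # statements 15-20
        yk = L.T @ lk
        t = 1.0 - lk @ lk
        lkk = np.sqrt(t) if t > 0 else np.linalg.norm(vk - V[:, :k - 1] @ yk)
        L = np.block([[L, np.zeros((k - 1, 1))],
                      [-(yk / lkk)[None, :], np.array([[1.0 / lkk]])]])
        lA = L @ vktA
        s = L.T @ lA                           # solves (V_k^T V_k) s = V_k^T A v_k
        alpha = vkA @ vkA - lA @ lA
        beta = alpha / vktA[k - 1]             # (V_k^T A v_k)_k = v_k^T A v_k
        h = s.copy()
        h[k - 1] += beta                       # h_{1:k,k} = s + beta * e_k
        H[:k, k - 1] = h
        vt = vkA - V[:, :k] @ h
        H[k, k - 1] = np.linalg.norm(vt)       # h_{k+1,k}
        theta = np.append(theta, -(theta @ h) / H[k, k - 1])
        V = np.column_stack((V, vt / H[k, k - 1]))
        if 1.0 / abs(theta[-1]) < tol:         # relative Q-OR residual norm
            break
    e1 = np.zeros(k); e1[0] = beta0
    y = np.linalg.solve(H[:k, :k], e1)         # statement 30
    return x0 + V[:, :k] @ y, beta0 / abs(theta[-1])
```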
The solution s of Equation (9) is also the solution of the least squares problem:
$$\min_{y \in \mathbb{R}^k} \| V_k y - A v_k \|, \qquad (10)$$
since (9) is the normal equation corresponding to (10). Hence, we can use the economy-size QR factorization of $V_k$ to solve (10) with an upper triangular matrix $R$ of order $k$, instead of using the inverses of the Cholesky factors of $V_k^T V_k$. Since the method is often restarted with $m \ll n$, meaning that the number of columns $k$ is small compared to the number of rows, $V_k$ is what is called a tall-and-skinny matrix. There exist special algorithms for computing the QR factorization of such matrices that can be used on parallel computers (see [9,10]). Note that the columns of $Q$ give an orthogonal basis of the Krylov subspace, so, if we use the QR factorization, we are more or less back to what is done in GMRES. When the restart parameter $m$ is large, or when there is no restart, using the QR factorization may be too expensive. However, at each iteration, we only add one more column to the matrix $V_k$. There exist algorithms for updating the QR factorization when a new column is added to the matrix (see [11,12]). This can be done, for instance, by orthogonalizing the new column against the columns of the previous matrix $Q$ with the modified Gram–Schmidt algorithm; this is what we used in our numerical experiments.
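As an illustration, here is a minimal sketch of such a column-append update, assuming an economy-size factorization $QR$ of the current tall-and-skinny matrix is available; the function name is hypothetical, and in practice a reorthogonalization step may be added when the columns are nearly linearly dependent.

```python
import numpy as np

def qr_append_column(Q, R, w):
    """Update an economy-size QR factorization Q R = M when a new column w is
    appended to M, by orthogonalizing w against Q with modified Gram-Schmidt."""
    m, k = Q.shape
    r = np.zeros(k)
    w = w.copy()
    for i in range(k):
        r[i] = Q[:, i] @ w
        w -= r[i] * Q[:, i]
    rho = np.linalg.norm(w)
    Q_new = np.column_stack((Q, w / rho))
    R_new = np.block([[R, r[:, None]],
                      [np.zeros((1, k)), np.array([[rho]])]])
    return Q_new, R_new

# Example: extend the QR factorization of a 1000 x 5 matrix by one column.
rng = np.random.default_rng(0)
M = rng.standard_normal((1000, 5))
Q, R = np.linalg.qr(M)
w = rng.standard_normal(1000)
Q, R = qr_append_column(Q, R, w)
print(np.linalg.norm(Q @ R - np.column_stack((M, w))))   # roundoff level
```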

3. Random Sketching

Since the matrix $V_k$ in the least squares problem (10) is tall and skinny, it may be useful to use random sketching, a technique that was introduced during the last twenty years. It is used to reduce the dimension of the problem (see, for instance, [6]). A sketching matrix $S$ is of order $\ell \times n$ with $\ell \ll n$. Let $\mathcal{V}$ be a subspace of $\mathbb{R}^n$. The matrix $S$ is an $\varepsilon$-embedding of $\mathcal{V}$ if
$$\big|\, \|S v\| - \|v\| \,\big| \le \varepsilon\, \|v\|, \qquad \forall\, v \in \mathcal{V}, \qquad (11)$$
where 0 < ε < 1 . Generally, ε -embeddings are constructed with probabilistic techniques to be independent of the subspace V with a high probability. They are called oblivious ε -embeddings. There are several distributions for constructing such embeddings, such as Gaussian ones and the subsampled randomized Hadamard transform (SRHT) [13].
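As a small illustration of an oblivious embedding, the following sketch draws a Gaussian matrix with entries of variance $1/\ell$ and checks the distortion of a few vectors of a random $k$-dimensional subspace; the dimensions are arbitrary and chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, ell, k = 2000, 200, 20

# Gaussian sketching matrix: an oblivious embedding of any fixed k-dimensional
# subspace with high probability.
S = rng.standard_normal((ell, n)) / np.sqrt(ell)

# Random k-dimensional subspace and a few random vectors inside it.
V = np.linalg.qr(rng.standard_normal((n, k)))[0]
for _ in range(3):
    v = V @ rng.standard_normal(k)
    distortion = abs(np.linalg.norm(S @ v) - np.linalg.norm(v)) / np.linalg.norm(v)
    print(distortion)   # typically much smaller than 1
```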
SRHT is constructed with Hadamard matrices. These matrices are defined recursively: starting with $H = \begin{pmatrix} 1 \end{pmatrix}$ and having a Hadamard matrix $H$, the next matrix is
$$\begin{pmatrix} H & H \\ H & -H \end{pmatrix}.$$
Therefore, their order is always a power of 2. Let $p$ be an integer such that $2^p$ is the smallest power of 2 larger than or equal to $n$. The $\ell \times 2^p$ SRHT matrix $\tilde S$ is
$$\tilde S = \frac{1}{\sqrt{\ell}}\, P H D,$$
where $D$ is a random diagonal matrix with diagonal entries $\pm 1$, $H$ is a Hadamard matrix, and $P$ is a random uniform subsampling matrix. The constant in front of $P H D$ depends on the way the Hadamard matrix is scaled. For our purposes, the sketching matrix $S$ is made of the first $n$ columns of $\tilde S$; equivalently, $\tilde S$ is applied to vectors of length $2^p$ whose only possibly nonzero components are the first $n$ ones. The multiplication by $H$ is carried out using the fast Walsh–Hadamard transform, which exploits the recursive structure of $H$ to evaluate the product in $N \log_2(N)$ operations, with $N = 2^p$. The problem with this sketching matrix is that $2^p$ can be much larger than $n$.
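A possible implementation sketch of the SRHT operator is shown below; it assumes the unnormalized ($\pm 1$) Hadamard matrix produced by the fast Walsh–Hadamard transform, so that the scaling constant becomes $1/\sqrt{\ell}$, and the function names are illustrative.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform: returns H x for the unnormalized (+-1)
    Hadamard matrix of order len(x) = 2**p, in N log2(N) operations."""
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

def srht_apply(v, rows, signs):
    """Apply the ell x n SRHT sketch to v: zero-pad to N = 2**p, apply D (signs),
    H (fast transform), P (row subsampling), and the 1/sqrt(ell) scaling."""
    N = len(signs)
    x = np.zeros(N)
    x[:len(v)] = signs[:len(v)] * v
    y = fwht(x)
    return y[rows] / np.sqrt(len(rows))

# Setup for n = 680 (so N = 1024) and ell = n // 4, as in the first experiment.
rng = np.random.default_rng(2)
n, ell = 680, 170
N = 1 << int(np.ceil(np.log2(n)))
signs = rng.choice([-1.0, 1.0], size=N)
rows = rng.choice(N, size=ell, replace=False)
v = rng.standard_normal(n)
print(np.linalg.norm(srht_apply(v, rows, signs)) / np.linalg.norm(v))  # close to 1
```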
Another possibility is to use the Clarkson–Woodruff transform [8,14]. The matrix $S$ is an $\ell \times n$ sparse matrix with only one nonzero entry in each column, which is $\pm 1$ with probability $1/2$. The row index of this entry is chosen randomly. For the first $\ell$ columns of $S$, a random permutation of $[1, 2, \ldots, \ell]$ is used.
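A sketch of this construction with SciPy sparse matrices is given below; the function name is illustrative and row indices are 0-based.

```python
import numpy as np
from scipy import sparse

def clarkson_woodruff(ell, n, rng):
    """Build a sparse ell x n Clarkson-Woodruff sketching matrix: one nonzero
    entry equal to +-1 per column, placed in a randomly chosen row, with a
    random permutation of the ell rows used for the first ell columns."""
    rows = np.empty(n, dtype=np.int64)
    rows[:ell] = rng.permutation(ell)
    rows[ell:] = rng.integers(0, ell, size=n - ell)
    signs = rng.choice([-1.0, 1.0], size=n)
    return sparse.csr_matrix((signs, (rows, np.arange(n))), shape=(ell, n))

rng = np.random.default_rng(3)
S = clarkson_woodruff(170, 680, rng)
# Applying S to a vector costs O(n) operations since S has n nonzero entries.
```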
A delicate issue with matrix sketching is the choice of $\ell$. It is known that inequality (11) is satisfied for SRHT with probability $1 - \delta$ if
$$\ell = O\!\left( \varepsilon^{-2} \left( k + \log \frac{N}{\delta} \right) \log \frac{k}{\delta} \right),$$
where $k$ is the dimension of the subspace $\mathcal{V}$. However, this is of little help for us, since we need the same sketching matrix $S$ for all iterations and the subspace dimension increases by one at every iteration. If the Q-OR method is restarted, $k$ may be chosen as the restart parameter $m$. However, this may be too small to obtain fast convergence. We will show experimentally in Section 5 how the choice of $\ell$ influences the convergence of the sketched Q-OR method.
In numerical linear algebra, matrix sketching has mainly been used, with some success, for solving large least squares problems. In recent years, randomization has also been used in different Krylov methods for solving linear systems. However, methods such as randomized GMRES [15] or sketched GMRES [7] do not minimize the residual norm as GMRES does. Hence, they are misnamed. In fact, some of them are Q-MR methods with non-orthogonal bases.

4. The Randomized Q-OR Method

A randomized variant of the Q-OR optimal method can be obtained by simply replacing (10) with
$$\min_{y \in \mathbb{R}^k} \| S V_k y - S A v_k \|, \qquad (12)$$
where $S$ is an $\ell \times n$ sketching matrix that is computed before running the algorithm. The matrix $S V_k$ can be computed incrementally since $S V_k = \begin{pmatrix} S V_{k-1} & S v_k \end{pmatrix}$. Thus, there are only two matrix–vector products with $S$ per iteration. Now, it makes more sense to use a QR factorization to solve the least squares problem (12), because it is of a smaller dimension than (10). Of course, the basis that is obtained is no longer optimal, and the method does not minimize the residual norm. However, since $\| S (V_k y - A v_k) \| \approx \| V_k y - A v_k \|$, the convergence of the method must not be too different, even though the decrease in the residual norm may not be monotone, as we will see with the numerical experiments detailed in the next section.
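The randomized step can be sketched as follows; for clarity, the QR factorization of $S V_k$ is recomputed from scratch here, whereas in Algorithm 2 it is updated column by column, and a Gaussian sketching matrix with arbitrary toy dimensions is used only to compare the solutions of (12) and (10).

```python
import numpy as np

def sketched_basis_step(S, V_k, Av_k):
    """Solve the sketched least squares problem (12), min_y ||S V_k y - S A v_k||,
    which replaces problem (10) in the randomized method."""
    Q, R = np.linalg.qr(S @ V_k)              # economy QR of the ell x k matrix
    return np.linalg.solve(R, Q.T @ (S @ Av_k))

# Toy comparison between the sketched solution of (12) and the solution of (10).
rng = np.random.default_rng(4)
n, k, ell = 1000, 30, 120
A = rng.standard_normal((n, n)) / np.sqrt(n) + np.eye(n)
V_k = np.linalg.qr(rng.standard_normal((n, k)))[0]   # basis of a k-dim subspace
Av_k = A @ V_k[:, -1]
S = rng.standard_normal((ell, n)) / np.sqrt(ell)     # Gaussian sketch for the demo
s_sketched = sketched_basis_step(S, V_k, Av_k)
s_exact = np.linalg.lstsq(V_k, Av_k, rcond=None)[0]  # solution of (10)
print(np.linalg.norm(s_sketched - s_exact) / np.linalg.norm(s_exact))
```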
The sketched algorithm is described as Algorithm 2. In statement 12, the QR factorization is simply a normalization of the vector $v_1^S$, and the initial matrix $R$ is a scalar, the norm of $v_1^S$. Statement 17 is an update of the QR factorization when we append a new vector $v_k^S$ to the previous matrix. This can be done in different ways; in the numerical experiments, we use a modified Gram–Schmidt implementation of the update. Note that the first dimension $\ell$ of the sketching matrix must be larger than the iteration number $k$.
Algorithm 2 Q-ORsketch.
1: input $A$, $b$, $x_0$, $S$
2: —Initialization
3: $r_0 = b - A x_0$
4: $v_1 = r_0 / \|r_0\|$, $v_1^A = A v_1$, $v_1^S = S v_1$
5: $\omega = v_1^T v_1^A$, $\alpha = (v_1^A)^T v_1^A - \omega^2$
6: $h_{1,1} = \omega + \alpha/\omega$
7: $\tilde v = v_1^A - h_{1,1} v_1$, $h_{2,1} = \|\tilde v\|$
8: $v_2 = \tilde v / h_{2,1}$, $v_2^A = A v_2$
9: $V_2 = \begin{pmatrix} v_1 & v_2 \end{pmatrix}$
10: $\vartheta_{1,1} = 1$, $\vartheta_{1,2} = -h_{1,1}/h_{2,1}$
11: $\vartheta = \begin{pmatrix} \vartheta_{1,1} & \vartheta_{1,2} \end{pmatrix}^T$
12: $[Q, R] = \mathrm{QR}(v_1^S)$
13: —End of initialization
14: for $k = 2, \ldots$ until convergence do
15:   $v_k^V = V_{k-1}^T v_k$, $v_k^{tA} = V_k^T v_k^A$
16:   $v_k^S = S v_k$, $v_k^{SA} = S v_k^A$
17:   $[Q, R] = \mathrm{update\_QR}(Q, R, v_k^S)$
18:   $s = R^{-1} (Q^T v_k^{SA})$
19:   $\alpha = (v_k^A)^T v_k^A - (v_k^A)^T (V_k s)$, $\beta = \alpha / (v_k^{tA})_k$
20:   $h_{1:k,k} = \begin{pmatrix} h_{1,k} & \cdots & h_{k,k} \end{pmatrix}^T = s + \beta e_k$
21:   $\tilde v = v_k^A - V_k h_{1:k,k}$, $h_{k+1,k} = \|\tilde v\|$
22:   $\vartheta_{1,k+1} = -\frac{1}{h_{k+1,k}} \vartheta^T h_{1:k,k}$
23:   $\vartheta = \begin{pmatrix} \vartheta_{1,1} & \cdots & \vartheta_{1,k+1} \end{pmatrix}^T$
24:   $v_{k+1} = \tilde v / h_{k+1,k}$, $v_{k+1}^A = A v_{k+1}$
25:   $V_{k+1} = \begin{pmatrix} V_k & v_{k+1} \end{pmatrix}$
26:   if needed, solve $H_k y^{(k)} = \|r_0\|\, e_1$, $x_k = x_0 + V_k y^{(k)}$
27: end for

5. Numerical Experiments

For the first experiment, we consider the matrix fs_680_1 (https://sparse.tamu.edu, URL accessed on 1 January 2025). We scale this matrix to have a unit diagonal and name it fs_680_1c. This sparse matrix of order 680 has 21,184 nonzero entries and a condition number equal to $8.6944 \times 10^3$. Figure 1 shows the true residual norms $\|b - A x_k\|$ for the standard Q-ORoptinv method using the inverses of Cholesky factors and for the randomized method using SRHT sketching, without preconditioning and without restarting. The initial iterate is the zero vector. Note that for SRHT, $N = 1024$ when $n = 680$. The value of $\ell$ is $n/4 = 170$. Using Clarkson–Woodruff sketching provides almost the same results. The residual norms of the two algorithms are very close, but since the method with sketching does not minimize the residual norm, its residual norms are slightly larger and show small oscillations.
Figure 2 displays the true residual norms for the method with SRHT sketching for $\ell = n/2$, $n/4$, $n/8$, and $n/16$. Note that $680/8 = 85$ and $680/16 \approx 43$. This limits the number of iterations that we can perform with these small values of $\ell$. In fact, one can see that, after 43 iterations, the algorithm with $\ell = n/16$ does not converge. The results with $n/2$, $n/4$, and $n/8$ are more or less the same, showing that the algorithm depends only weakly on the choice of $\ell$. However, with $\ell = n/8$, we cannot perform many more than 85 iterations.
For the second example, we consider the matrix rajat27 (https://sparse.tamu.edu) of order 20,640. Since this matrix has some zero entries on the diagonal and this can be a problem for some preconditioners, we add 2 I to the matrix, and we name it rajat27b. This matrix has 101,681 nonzero entries and an estimated condition number equal to 4.8588 × 10 7 . We use a diagonal preconditioner.
Figure 3 shows the computed residual norms (using relation (5)) for the standard Q-ORoptinv method and the randomized method using SRHT sketching. Once again, the method with sketching converges similarly to the standard method.
Figure 4 displays the computed residual norms for the method with SRHT sketching for $\ell = n/4$, $n/8$, $n/16$, and $n/32$. Note that all these values of $\ell$ are larger than the number of iterations we have to perform. The results with these values of $\ell$ are more or less the same, showing, once again, that the randomized algorithm depends only weakly on the choice of $\ell$.
Figure 5 compares SRHT and Clarkson–Woodruff sketching. The two algorithms converge similarly, but more oscillations occur with Clarkson–Woodruff sketching. However, it is cheaper than SRHT.
The third example corresponds to the finite difference discretization of a convection–diffusion equation,
$$-\frac{\partial}{\partial x}\!\left( \lambda(x,y)\, \frac{\partial u}{\partial x} \right) - \frac{\partial}{\partial y}\!\left( \lambda(x,y)\, \frac{\partial u}{\partial y} \right) + \frac{\partial u}{\partial x} + \frac{\partial u}{\partial y} = f \qquad \text{in } [0,1]^2,$$
with homogeneous Dirichlet boundary conditions. The diffusion coefficient λ ( x , y ) is piecewise constant, being equal to 100 in [ 1 / 4 , 3 / 4 ] 2 and 1 elsewhere. The mesh size is h = 1 / 151 , providing a matrix of order 22,500. Its estimated condition number is 9.3909 × 10 5 . The right-hand side is a random vector.
We use an incomplete LU preconditioner without fill-in (ILU(0)) and we restart the methods every 100 iterations. Figure 6 shows that, even though there are some oscillations with the method using sketching, the convergence is very similar to that of Q-ORoptinv.

6. Conclusions

In this paper, we have shown how to use matrix sketching in the Krylov method Q-ORoptinv for solving nonsymmetric linear systems. This reduces the cost of an important part of the algorithm. Even though the sketched method does not minimize the norm of the residual, it converges almost as fast as the original method, as demonstrated by the numerical experiments. This new variant of the method can be interesting when solving large nonsymmetric linear systems.

Funding

This research received no external funding.

Data Availability Statement

No available data.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Meurant, G.; Duintjer Tebbens, J. Krylov Methods for Nonsymmetric Linear Systems; Springer Series in Computational Mathematics; Springer International Publishing: Cham, Switzerland, 2020; Volume 57. [Google Scholar]
  2. Saad, Y. Krylov subspace methods for solving large nonsymmetric linear systems. Math. Comput. 1981, 37, 105–126. [Google Scholar] [CrossRef]
  3. Saad, Y. Practical use of some Krylov subspace methods for solving indefinite and nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1984, 5, 203–228. [Google Scholar] [CrossRef]
  4. Saad, Y.; Schultz, M.H. GMRES: A generalized minimum residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7, 856–869. [Google Scholar] [CrossRef]
  5. Meurant, G. An optimal Q-OR Krylov subspace method for solving linear systems. Electron. Trans. Numer. Anal. 2017, 47, 127–152. [Google Scholar] [CrossRef]
  6. Halko, N.; Martinsson, P.G.; Tropp, J.A. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 2011, 53, 217–288. [Google Scholar] [CrossRef]
  7. Nakatsukasa, Y.; Tropp, J.A. Fast and accurate randomized algorithms for linear systems and eigenvalue problems. SIAM J. Matrix Anal. Appl. 2024, 45, 1183–1214. [Google Scholar] [CrossRef]
  8. Woodruff, D.P. Sketching as a tool for numerical linear algebra. Found. Trends Theor. Comput. Sci. 2014, 10, 1–157. [Google Scholar] [CrossRef]
  9. Buttari, A.; Langou, J.; Kurzak, J.; Dongarra, J. Parallel tiled QR factorization for multicore architectures. Concurr. Comput.-Pract. Exp. 2008, 20, 1573–1590. [Google Scholar] [CrossRef]
  10. Demmel, J.W.; Grigori, L.; Hoemmen, M.; Langou, J. Communication-optimal parallel and sequential QR and LU factorizations. SIAM J. Sci. Comput. 2012, 34, A206–A239. [Google Scholar] [CrossRef]
  11. Hammarling, S.; Higham, N.J.; Lucas, C. LAPACK-style codes for pivoted Cholesky and QR updating. In Proceedings of the International Workshop on Applied Parallel Computing, Umea, Sweden, 18–21 June 2006; Springer: Berlin/Heidelberg, 2006; pp. 137–146. [Google Scholar]
  12. Hammarling, S.; Lucas, G. Updating the QR Factorization and the Least Squares Problem; Technical Report MIMS Eprint 2008-111; Manchester Institute for Mathematical Sciences, University of Manchester: Manchester, UK, 2008. [Google Scholar]
  13. Tropp, J. Improved analysis of the subsampled randomized Hadamard transform. Adv. Adapt. Data Anal. 2011, 3, 115–126. [Google Scholar] [CrossRef]
  14. Clarkson, K.L.; Woodruff, D.P. Low-rank approximation and regression in input sparsity time. J. ACM 2017, 63, 1–45. [Google Scholar] [CrossRef]
  15. Balabanov, O.; Grigori, L. Randomized Gram-Schmidt process with application to GMRES. SIAM J. Sci. Comput. 2022, 44, A1450–A1474. [Google Scholar] [CrossRef]
Figure 1. fs_680_1c, true residual norms, Q-ORoptinv (solid), Q-OR with SRHT sketching, $\ell = n/4$ (dashed).
Figure 2. fs_680_1c, true residual norms, Q-OR with SRHT sketching, $\ell = n/2$ (solid), $n/4$ (dashed), $n/8$ (dash-dotted), $n/16$ (dotted).
Figure 3. rajat27b, diagonal preconditioner, computed residual norms, Q-ORoptinv (solid), Q-OR with SRHT sketching, $\ell = n/4$ (dashed).
Figure 4. rajat27b, diagonal preconditioner, computed residual norms, Q-OR with SRHT sketching, $\ell = n/4$ (solid), $n/8$ (dashed), $n/16$ (dash-dotted), $n/32$ (dotted).
Figure 5. rajat27b, diagonal preconditioner, computed residual norms, Q-OR with sketching, SRHT (solid), Clarkson–Woodruff (dot-dashed).
Figure 6. convection–diffusion, ILU(0) preconditioner, computed residual norms, Q-ORoptinv (solid), Q-OR with SRHT sketching, $\ell = n/4$ (dashed), $m = 100$.