PHSS Iterative Method for Solving Generalized Lyapunov Equations

Abstract: Building on previous results, we propose a new preconditioned HSS iteration method (PHSS) for the generalized Lyapunov equation and, with applications in mind, give the corresponding inexact PHSS algorithm (IPHSS). Convergence proofs are provided for all the new methods presented in this paper. Numerical experiments comparing the new methods with existing ones show a clear improvement. The feasibility and effectiveness of the proposed method are thus established both theoretically and computationally.


Introduction
We consider the system of large sparse linear equations

Ax = b,  (1)

where A ∈ C^{n×n} is a non-Hermitian positive definite matrix and x, b ∈ C^n. The practical background of such problems can be found in [1–7] and the references therein. For (1), Bai, Golub and Ng put forward the HSS iteration method in 2003 [8]. Any matrix can be decomposed into the sum of its Hermitian and skew-Hermitian parts,

A = H(A) + S(A),  H(A) = (A + A*)/2,  S(A) = (A − A*)/2,

which leads to the following scheme. Let x^(0) ∈ C^n be an initial guess. For k = 0, 1, 2, ..., until the sequence of iterates x^(k) converges, compute the next iterate x^(k+1) through the following procedure:

(αI + H(A)) x^(k+1/2) = (αI − S(A)) x^(k) + b,
(αI + S(A)) x^(k+1) = (αI − H(A)) x^(k+1/2) + b,

where α is a positive constant. Bai et al. proved in [8] that this iteration converges unconditionally to the unique solution of (1).
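To make the scheme concrete, here is a minimal pure-Python sketch of the HSS iteration on a toy 2×2 system (the matrix, right-hand side and α are illustrative choices of ours, not from the paper, whose experiments use Matlab):

```python
def solve2(M, v):
    # Solve the 2x2 linear system M y = v by Cramer's rule.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - v[0] * M[1][0]) / det]

# Toy non-symmetric system A x = b with exact solution x = (1, 1).
A = [[3.0, 1.0], [-1.0, 2.0]]
b = [4.0, 1.0]
H = [[(A[i][j] + A[j][i]) / 2 for j in range(2)] for i in range(2)]  # Hermitian part H(A)
S = [[(A[i][j] - A[j][i]) / 2 for j in range(2)] for i in range(2)]  # skew-Hermitian part S(A)
alpha = 2.0

x = [0.0, 0.0]
for _ in range(50):
    # First half-step: (alpha I + H) x_half = (alpha I - S) x + b
    M1 = [[alpha * (i == j) + H[i][j] for j in range(2)] for i in range(2)]
    r1 = [sum((alpha * (i == j) - S[i][j]) * x[j] for j in range(2)) + b[i] for i in range(2)]
    xh = solve2(M1, r1)
    # Second half-step: (alpha I + S) x_new = (alpha I - H) x_half + b
    M2 = [[alpha * (i == j) + S[i][j] for j in range(2)] for i in range(2)]
    r2 = [sum((alpha * (i == j) - H[i][j]) * xh[j] for j in range(2)) + b[i] for i in range(2)]
    x = solve2(M2, r2)
print(x)  # close to [1.0, 1.0]
```

Here the iterates converge quickly because max |α − λ|/(α + λ) over the eigenvalues of H is small for this α.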
In order to speed up the HSS iteration method, Bai et al. put forward the PHSS iteration method [9–11]. Splitting the coefficient matrix A into its Hermitian and skew-Hermitian parts as above, one obtains the iterative scheme

(αP(A) + H(A)) x^(k+1/2) = (αP(A) − S(A)) x^(k) + b,
(αP(A) + S(A)) x^(k+1) = (αP(A) − H(A)) x^(k+1/2) + b,  (2)

where P(A) ∈ C^{n×n} is a Hermitian positive definite preconditioning matrix and α is a positive constant. Bai et al. proved in [10] that this iteration converges unconditionally to the unique solution of (1).
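The only change from HSS is that αI is replaced by αP(A). A minimal pure-Python sketch with a simple diagonal preconditioner follows (all numerical values are illustrative assumptions of ours; taking P = I recovers plain HSS):

```python
def solve2(M, v):
    # Solve the 2x2 linear system M y = v by Cramer's rule.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - v[0] * M[1][0]) / det]

A = [[3.0, 1.0], [-1.0, 2.0]]   # non-symmetric, positive definite Hermitian part
b = [4.0, 1.0]                  # exact solution of A x = b is (1, 1)
H = [[3.0, 0.0], [0.0, 2.0]]    # H(A) = (A + A^T)/2
S = [[0.0, 1.0], [-1.0, 0.0]]   # S(A) = (A - A^T)/2
P = [[2.0, 0.0], [0.0, 2.0]]    # an illustrative diagonal SPD preconditioner
alpha = 1.2

x = [0.0, 0.0]
for _ in range(60):
    # First half-step: (alpha P + H) x_half = (alpha P - S) x + b
    M1 = [[alpha * P[i][j] + H[i][j] for j in range(2)] for i in range(2)]
    r1 = [sum((alpha * P[i][j] - S[i][j]) * x[j] for j in range(2)) + b[i] for i in range(2)]
    xh = solve2(M1, r1)
    # Second half-step: (alpha P + S) x_new = (alpha P - H) x_half + b
    M2 = [[alpha * P[i][j] + S[i][j] for j in range(2)] for i in range(2)]
    r2 = [sum((alpha * P[i][j] - H[i][j]) * xh[j] for j in range(2)) + b[i] for i in range(2)]
    x = solve2(M2, r2)
print(x)  # close to [1.0, 1.0]
```

A good choice of P clusters the eigenvalues of P^{−1}H(A), which is what shrinks the convergence factor relative to plain HSS.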

The PHSS Iterative Method for the Generalized Lyapunov Equation
Many methods for solving the standard Lyapunov equation have been put forward in [12–19]. In [12], Xu et al. put forward the HSS iterative solution of the generalized Lyapunov equation. Inspired by this, this paper proposes the PHSS iterative solution of the generalized Lyapunov equation.
Consider the generalized Lyapunov equation

AX + XA^T + Σ_{j=1}^m N_j X N_j^T + C = 0,  (3)

where A, N_j, C ∈ R^{n×n}, A is a nonsymmetric positive definite matrix, C is a symmetric matrix and ‖N_j‖₂ ≤ ‖A‖₂ (j = 1, 2, ..., m). When N_j = 0 (j = 1, 2, ..., m), Equation (3) degenerates to the standard Lyapunov equation.
Then we apply the PHSS iterative method to solve the generalized Lyapunov Equation (3). Let us suppose that α is a positive constant and split A, as in (2), into A = H + S with H = (A + A^T)/2 and S = (A − A^T)/2. This yields the iterative scheme: given X_k, compute X_{k+1/2} and X_{k+1} from

(αP + H)X_{k+1/2} + X_{k+1/2}(αP + H)^T = (αP − S)X_k + X_k(αP − S)^T − Σ_{j=1}^m N_j X_k N_j^T − C,
(αP + S)X_{k+1} + X_{k+1}(αP + S)^T = (αP − H)X_{k+1/2} + X_{k+1/2}(αP − H)^T − Σ_{j=1}^m N_j X_{k+1/2} N_j^T − C.  (4)

According to the properties of the Kronecker product, with x_k = vec(X_k) and c = vec(C), the scheme (4) is equivalent to

(αP + H)x_{k+1/2} = (αP − S − N)x_k − c,
(αP + S)x_{k+1} = (αP − H − N)x_{k+1/2} − c,  (5)

where, with a slight abuse of notation, on the vector level H = I⊗H + H⊗I, S = I⊗S + S⊗I, N = Σ_{j=1}^m N_j⊗N_j and P = I⊗P + P⊗I. The convergence of the iterative scheme (4) is equivalent to the convergence of the iterative scheme (5), and their convergence factors are the same.

Theorem 1. Let us suppose that A ∈ R^{n×n} is a nonsymmetric positive definite matrix, let K = ‖P^{−1/2}NP^{−1/2}‖₂, and let λ_max and λ_min be the maximum and minimum eigenvalues of the matrix P^{−1}H, respectively. Then the convergence factor of the PHSS iterative method (4) is the spectral radius of the matrix

G = (αP + S)^{−1}(αP − H − N)(αP + H)^{−1}(αP − S − N).

Its upper bound is

σ₀(α) = max_{λ_min ≤ λ ≤ λ_max} |α − λ|/(α + λ) + 2K/(α + λ_min).

When λ_min > K and α̂ = √(λ_min λ_max), σ₀(α) reaches its minimum,

σ₀(α̂) = (√(λ_min λ_max) − λ_min + 2K)/(√(λ_min λ_max) + λ_min) < 1.

Therefore, the PHSS iterative method for solving the generalized Lyapunov equation is convergent.
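The Kronecker-product identity behind the equivalence of the matrix and vector forms, vec(MXB) = (B^T⊗M)vec(X) with column-stacking vec, can be checked numerically. The pure-Python sketch below (the small matrices are arbitrary examples of ours) verifies that vec(AX + XA^T + NXN^T) = (I⊗A + A⊗I + N⊗N)vec(X):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def kron(A, B):
    # Kronecker product A (x) B
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

def vec(X):
    # column-stacking vec operator
    return [X[i][j] for j in range(len(X[0])) for i in range(len(X))]

A = [[3.0, 1.0], [-1.0, 2.0]]
N = [[0.1, 0.0], [0.2, 0.1]]
X = [[1.0, 2.0], [0.5, -1.0]]
I = [[1.0, 0.0], [0.0, 1.0]]

# Left side: vec(A X + X A^T + N X N^T)
AX = matmul(A, X)
XAt = matmul(X, transpose(A))
NXNt = matmul(matmul(N, X), transpose(N))
lhs = vec([[AX[i][j] + XAt[i][j] + NXNt[i][j] for j in range(2)] for i in range(2)])

# Right side: (I kron A + A kron I + N kron N) vec(X)
IA, AI, NN = kron(I, A), kron(A, I), kron(N, N)
K = [[IA[i][j] + AI[i][j] + NN[i][j] for j in range(4)] for i in range(4)]
xv = vec(X)
rhs = [sum(K[i][j] * xv[j] for j in range(4)) for i in range(4)]
print(max(abs(l - r) for l, r in zip(lhs, rhs)))  # zero up to rounding
```

The same identity is what turns each half-step of (4) into the corresponding linear system in (5).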
Proof. Substituting the first equation of the iteration scheme (5) into the second, we obtain the iteration matrix

G = (αP + S)^{−1}(αP − H − N)(αP + H)^{−1}(αP − S − N),

where N = Σ_{j=1}^m N_j⊗N_j and P, H, S denote the Kronecker-level matrices of (5). The convergence factor of the iterative scheme (5) is therefore ρ(G), which is the same as the convergence factor of the iterative scheme (4).
Because P is a symmetric positive definite matrix, P^{1/2} and P^{−1/2} are well defined, and we may set

H̃ = P^{−1/2}HP^{−1/2},  S̃ = P^{−1/2}SP^{−1/2},  Ñ = P^{−1/2}NP^{−1/2}.

Then G is similar to

G₁ = (αI + S̃)^{−1}(αI − H̃ − Ñ)(αI + H̃)^{−1}(αI − S̃ − Ñ),

and G₁ is in turn similar to

G₂ = (αI − H̃ − Ñ)(αI + H̃)^{−1}(αI − S̃ − Ñ)(αI + S̃)^{−1},

so that

ρ(G) = ρ(G₂) ≤ ‖(αI − H̃ − Ñ)(αI + H̃)^{−1}‖₂ · ‖(αI − S̃ − Ñ)(αI + S̃)^{−1}‖₂.  (6)

Because H is a symmetric positive definite matrix and P^{−1/2} is nonsingular, for any non-zero column vector x the vector P^{−1/2}x is also non-zero, and

x^T H̃ x = (P^{−1/2}x)^T H (P^{−1/2}x) > 0,

so H̃ is symmetric positive definite. Similarly, S̃^T = P^{−1/2}S^T P^{−1/2} = −S̃, so S̃ is skew-symmetric. Meanwhile, since

P^{−1/2} H̃ P^{1/2} = P^{−1}H  and  P^{−1/2} S̃ P^{1/2} = P^{−1}S,

we can conclude that H̃ is similar to P^{−1}H and S̃ is similar to P^{−1}S; in particular, the eigenvalues of H̃ lie in [λ_min, λ_max].
Let us suppose that Q = (αI − S̃)(αI + S̃)^{−1}. Since S̃ is skew-symmetric, Q^T = (αI − S̃)^{−1}(αI + S̃), and because (αI + S̃) and (αI − S̃) commute it is easy to deduce that Q^T Q = QQ^T = I. So Q is an orthogonal matrix and

‖(αI − S̃)(αI + S̃)^{−1}‖₂ = 1.  (7)

Let us suppose that L = (αI − H̃)(αI + H̃)^{−1}. Since H̃ is symmetric, L is a normal (indeed symmetric) matrix, and from the eigenvalues of H̃ we deduce that

‖(αI − H̃)(αI + H̃)^{−1}‖₂ = max_{λ_min ≤ λ ≤ λ_max} |α − λ|/(α + λ).  (8)

It is easy to see that (αI + H̃)^{−1} is symmetric and (αI + S̃)^{−1} is normal. Because H̃ is positive definite and S̃ is skew-symmetric, we can easily deduce that

‖(αI + H̃)^{−1}‖₂ = 1/(α + λ_min),  ‖(αI + S̃)^{−1}‖₂ ≤ 1/α.  (9)

Combining the Formulas (6), (8) and (9) with ‖Ñ‖₂ = K, we can see that

ρ(G) ≤ σ₀(α) = max_{λ_min ≤ λ ≤ λ_max} |α − λ|/(α + λ) + 2K/(α + λ_min).

The following proves that, when λ_min > K and α̂ = √(λ_min λ_max), σ₀(α) reaches its minimum at α = α̂ and σ₀(α̂) < 1.
In fact, for fixed α, the function |α − λ|/(α + λ) attains its maximum over [λ_min, λ_max] at an endpoint, so

σ₀(α) = max{ |α − λ_min|/(α + λ_min), |α − λ_max|/(α + λ_max) } + 2K/(α + λ_min).

For 0 < α ≤ α̂ the maximum is attained at λ_max, and both (λ_max − α)/(λ_max + α) and 2K/(α + λ_min) decrease monotonically in α, so σ₀(α) decreases on (0, α̂]. For α ≥ α̂ the maximum is attained at λ_min, and

d/dα [ (α − λ_min)/(α + λ_min) + 2K/(α + λ_min) ] = 2(λ_min − K)/(α + λ_min)²,

which is positive when λ_min > K, so σ₀(α) increases on [α̂, +∞), with σ₀(α) → 1 as α → +∞. Therefore, the minimum over α > 0 is attained at α̂ = √(λ_min λ_max), where

σ₀(α̂) = (√λ_max − √λ_min)/(√λ_max + √λ_min) + 2K/(√(λ_min λ_max) + λ_min) = (√(λ_min λ_max) − λ_min + 2K)/(√(λ_min λ_max) + λ_min),

which is less than 1 exactly when λ_min > K. Summing up the above, we can conclude that the PHSS iterative method for the generalized Lyapunov equation is convergent, and the upper bound of the convergence factor is σ₀(α), which is associated only with the matrix P^{−1/2}(Σ_{j=1}^m N_j⊗N_j)P^{−1/2} (through K) and with the eigenvalues of the matrix P^{−1}H. In addition, when α = α̂, the upper bound σ₀(α) of the convergence factor of the PHSS iterative method for the generalized Lyapunov Equation (3) is minimal, but the convergence factor ρ(G) does not necessarily reach its minimum at this point; that is to say, when α = α̂, the PHSS iteration does not necessarily converge the fastest. How to obtain the optimal parameter needs to be studied further.
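This endpoint-and-monotonicity argument is easy to check numerically. The short Python sketch below (λ_min, λ_max and K are illustrative values of ours with λ_min > K, not taken from the paper) scans α over a grid and confirms that σ₀ is minimized at α̂ = √(λ_min λ_max):

```python
import math

# Illustrative values satisfying lam_min > K (assumed, not from the paper).
lam_min, lam_max, K = 1.0, 4.0, 0.5

def sigma0(alpha):
    # Upper bound of Theorem 1: the max of |alpha - lam|/(alpha + lam)
    # over [lam_min, lam_max] is attained at an endpoint.
    first = max(abs(alpha - lam) / (alpha + lam) for lam in (lam_min, lam_max))
    return first + 2 * K / (alpha + lam_min)

alpha_hat = math.sqrt(lam_min * lam_max)      # theoretical minimizer (= 2.0 here)
grid = [a / 100.0 for a in range(1, 1001)]    # scan alpha over (0, 10]
best_alpha = min(grid, key=sigma0)
print(best_alpha, sigma0(best_alpha))         # grid minimizer matches alpha_hat
```

With these values σ₀(α̂) = 2/3 < 1, in line with the condition λ_min > K.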
To sum up, the PHSS iterative method is convergent for any generalized Lyapunov Equation (3) satisfying the stated conditions.

Inexact PHSS (IPHSS) Iterative Algorithm
In order to reduce the computational cost of the HSS iterative method for solving the generalized Lyapunov equation, Xu et al. proposed an IHSS iteration method in [12]. Similarly, an IPHSS iteration method for solving the generalized Lyapunov equation can be derived from the PHSS iteration method.
Taking X_k as the initial value, the following generalized Lyapunov equation is solved approximately by an inner iterative method, and X_{k+1/2} is obtained:

(αP + H)X_{k+1/2} + X_{k+1/2}(αP + H)^T = (αP − S)X_k + X_k(αP − S)^T − Σ_{j=1}^m N_j X_k N_j^T − C.  (10)

Because the coefficient matrix αP + H of the Lyapunov Equation (10) is symmetric and positive definite, the approximate solution can be obtained by the CG algorithm.
Next, using X_{k+1/2} as the initial value, we approximately solve the following Lyapunov equation and get X_{k+1}:

(αP + S)X_{k+1} + X_{k+1}(αP + S)^T = (αP − H)X_{k+1/2} + X_{k+1/2}(αP − H)^T − Σ_{j=1}^m N_j X_{k+1/2} N_j^T − C.  (11)

For the Lyapunov Equation (11), whose coefficient matrix αP + S is nonsymmetric, the approximate solution can be obtained by the CGNE algorithm. Similarly to the inexact HSS iterative method of [12], the inexact PHSS iteration method for solving the generalized Lyapunov equation can be summarized as Algorithm 1 as follows:
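As an illustration of the inner solver for the symmetric positive definite half-step, here is a generic pure-Python conjugate-gradient routine (a sketch only; the matrix and right-hand side below are arbitrary stand-ins of ours, and the relative-residual tolerance plays the role of ε_k in Algorithm 1):

```python
def cg(matvec, b, x0, tol=1e-10, maxit=100):
    # Conjugate gradients for a symmetric positive definite system,
    # stopped once ||r|| <= tol * ||r0|| (an inexact inner solve).
    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, matvec(x))]
    p = list(r)
    rs = sum(ri * ri for ri in r)
    r0 = rs ** 0.5
    for _ in range(maxit):
        if rs ** 0.5 <= tol * r0:
            break
        Ap = matvec(p)
        a = rs / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + a * pi for xi, pi in zip(x, p)]
        r = [ri - a * qi for ri, qi in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Example SPD system (a stand-in for the vectorized operator of Equation (10)).
M = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(lambda v: [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)],
       b, [0.0, 0.0])
print(x)  # close to [1/11, 7/11]
```

Loosening `tol` gives a cheaper but less accurate inner solve, which is exactly the trade-off that ε_k and η_k control in the IPHSS method.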
In Algorithm 1, ε_k and η_k are used to control the accuracy of the inner iterations, and the stopping criterion of step (ii) is chosen only to make the following convergence theorem more concise. In fact, the criterion can be changed to ‖q_{k+1}‖₂ ≤ η_k ‖Pz_{k+1}‖₂.

Theorem 2. Let us suppose that A ∈ R^{n×n} is a nonsymmetric positive definite matrix and that α is chosen, according to Theorem 1, so that the PHSS iterative method converges. Let {X_k} be the iterative sequence generated by Algorithm 1 and let X* be the exact solution of the generalized Lyapunov Equation (3). Then, with x_k = vec(X_k) and x* = vec(X*), the error x_k − x* contracts in the norm |·| defined, for any vector y, by |y| = ‖(αI + P^{−1}S)y‖₂.
In particular, if ε_max = max{ε_k} and η_max = max{η_k} are small enough that the contraction factor stays below one, the iterative sequence {x_k} converges to x*, that is, {X_k} converges to X*.

Proof. Let r_k = vec(R_k) and q_{k+1} = vec(Q_{k+1}) denote the residuals of the two inner solves. By the stopping criteria of Algorithm 1, the first inner residual is bounded by ε_k‖r_k‖₂ and ‖q_{k+1}‖₂ ≤ 2αη_k‖Pz_{k+1}‖₂. Substituting the Formula (13) into (12) gives the perturbed iteration. Let X* be the exact solution of the generalized Lyapunov equation, that is, x* is the exact solution of the two half-step equations (14). Solving the first equation in (14) for the intermediate iterate gives (15); substituting (15) into the second equation of (14) and subtracting the perturbed iteration yields a recursion for the error x_{k+1} − x*. Let the vector norm be |y| = ‖(αI + P^{−1}S)y‖₂ and let the matrix norm be the one it induces. Because (αI + P^{−1}H) and (αI − P^{−1}H) commute, (αI + P^{−1}H)^{−1} and (αI − P^{−1}H) also commute. Combining this with the Formula (9), the stated contraction bound follows.

If the Lyapunov Equations (10) and (11) are solved exactly, the corresponding {ε_k} and {η_k} are identically zero, so ε_max = η_max = 0; at this point, the convergence factor of the IPHSS iteration method is the same as that of the PHSS iteration method. The theorem also shows that, to guarantee the convergence of the IPHSS iterative method, we only need the stated condition to be satisfied; we do not need {ε_k} and {η_k} to tend to zero as k increases. Therefore, when solving the generalized Lyapunov equation, {ε_k} and {η_k} should be chosen so that the computational work is as small as possible while the contraction factor of the IPHSS iteration stays as close as possible to the convergence factor of the PHSS iteration.

Numerical Experiments
In this section, we test the IPHSS algorithm for solving the generalized Lyapunov equation by numerical examples.
Here is a theoretical numerical example as a simple test of the numerical performance of the algorithm:

Example 1. We consider the generalized Lyapunov equation as follows: where N is a random matrix that satisfies the condition of Theorem 1; the coefficient matrices are tridiagonal, with h = 1/√n; x^(0) = vec(X^(0)) is taken as the zero vector; and the program is executed in Matlab. The order of the coefficient matrix A is n. The relative error is denoted by Res; the stopping criterion is Res < 10^{−6}; Iter is the number of iterations; and CPU is the iteration time. The parameter of the IHSS method is taken as α = 0.8, and the parameters of the IPHSS method are taken as α = β = 0.2. The preconditioning matrix P is selected as the diagonal part of the coefficient matrix A. Running the IPHSS algorithm, we obtain Table 1. The numerical results in Table 1 show that, in this example, the IPHSS method converges faster and is more stable than the IHSS method.
The following numerical example is given to test the numerical performance of the algorithm in practice:

Example 2.
Consider the finite element discretization of a self-heat-conduction problem [20]. Computing its Gramian matrix requires solving the following generalized Lyapunov equation: The parameter of the IHSS method is taken as α = 0.9, and the parameters of the IPHSS method are taken as α = β = 0.2. Running the IPHSS algorithm, we obtain Table 2. The numerical results in Table 2 show that the iteration counts of both the IHSS and the IPHSS iterations vary little across problem sizes, which indicates that both methods are very stable; however, the iteration count and CPU time of the IPHSS iteration are far smaller than those of the IHSS iteration, and its relative error is also smaller. Moreover, the gap between the CPU times of the two methods widens as the order of the matrix grows. Thus the IPHSS iterative method is more effective than the IHSS iteration for solving the generalized Lyapunov equation.

Conclusions
In this paper, a new PHSS iterative method for solving the generalized Lyapunov equation is proposed and its convergence is proved. Then, the IPHSS algorithm for solving the generalized Lyapunov equation is put forward, and its convergence is also proved. Finally, numerical experiments are carried out to compare the new method with existing methods. It is found that, compared with the IHSS iteration method, the IPHSS iteration method yields an obvious improvement.