Article

The Four-Parameter PSS Method for Solving the Sylvester Equation

Department of Mathematics, College of Sciences, Northeastern University, Shenyang 110819, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(1), 105; https://doi.org/10.3390/math7010105
Submission received: 16 November 2018 / Revised: 20 December 2018 / Accepted: 26 December 2018 / Published: 20 January 2019

Abstract

In order to solve Sylvester equations more efficiently, a new four-parameter positive-definite and skew-Hermitian splitting (FPPSS) iterative method is proposed in this paper, building on previous research on the positive-definite and skew-Hermitian splitting (PSS) iterative method. We prove that when the coefficient matrices A and B satisfy certain conditions, the FPPSS iterative method converges for parameters in the specified region. The numerical experiment results show that, compared with previous iterative methods, the FPPSS iterative method is more effective in terms of the iteration number (IT) and runtime (CPU).

1. Introduction

In this paper, we mainly consider the problem of solving the continuous Sylvester equations with the following form:
$$AX + XB = C, \qquad (1)$$
where $A \in \mathbb{C}^{m \times m}$, $B \in \mathbb{C}^{n \times n}$, and $C \in \mathbb{C}^{m \times n}$ are given matrices that satisfy the following conditions:
(I)
$A$, $B$, and $C$ are large-scale and sparse matrices;
(II)
at least one of $A$ and $B$ is a non-Hermitian matrix;
(III)
$A$ and $B$ are positive semidefinite, and at least one of them is positive definite.
The solution of Equation (1) exists and is unique.
This kind of matrix equation has a wide range of applications in scientific computing and engineering. Problems such as digital image restoration, control systems, electromagnetic field processing, neural networks, and model reduction eventually reduce to the solution of large-scale Sylvester equations [1,2,3]. Because the time required to solve the Sylvester equation directly affects how quickly such practical problems can be solved, designing an effective solution method is of both theoretical and practical value.
Over the past few decades, many direct and iterative methods have been proposed for this class of problems. However, because the coefficient matrices are usually large and sparse, direct methods are less suitable than iterative ones. In 1952, the conjugate gradient (CG) method was proposed for solving symmetric positive definite linear systems [4].
In 1986, to handle nonsymmetric coefficient matrices, Saad and Schultz put forward the well-known Generalized Minimal Residual (GMRES) algorithm, which has better stability and lower storage requirements than earlier Krylov subspace methods [5].
In 2003, Bai et al. proposed the Hermitian and skew-Hermitian splitting iterative method, namely the HSS iterative method [6]. Since then, many researchers have improved this class of methods, for example the iteration method based on a positive-definite and skew-Hermitian splitting of the coefficient matrix, i.e., the PSS iteration method [7]; the NSS iteration method based on a normal and skew-Hermitian splitting [8]; and, using various preconditioning techniques, the preconditioned HSS iterative method [9,10], the lopsided HSS iterative method [11], the modified generalized HSS iterative method [12], and so on.
The HSS iterative method and its variants have been extended in many mature and effective ways to solve the continuous Sylvester equation.
In 2011, based on the Hermitian and skew-Hermitian splitting of the coefficient matrices, Bai applied the HSS iteration method to the continuous Sylvester equation for the first time [6].
In 2013, Wang et al. solved the Sylvester equation by the PSS iteration method [13].
In 2014, Zheng and Ma used the NSS iteration method to solve the Sylvester equation [14].
In 2015, the MHSS and GHSS iteration methods were proposed successively [15,16].
In 2017, the PMHSS iteration method was proposed [17].
It can be seen that most methods for solving the Sylvester equation are improvements and generalizations of the HSS iteration method, and there is still room for further development and application of PSS-type algorithms. For these reasons, and in order to further improve the speed of solving the Sylvester equation, a new four-parameter PSS iteration method, namely the FPPSS iteration method, is proposed for the continuous Sylvester equation. The parameters that minimize the upper bound of the spectral radius of the iteration matrix are derived, and the effectiveness and stability of the method are demonstrated by numerical experiments.
The structure of this paper is as follows. In Section 2, the iterative scheme of the FPPSS method for solving large-scale continuous Sylvester equations with non-Hermitian positive definite/semidefinite coefficient matrices is given, and the exact range of parameters guaranteeing convergence of the FPPSS iterative method is derived. Moreover, the optimal iteration parameters that minimize the upper bound of the spectral radius of the iteration matrix are obtained. In Section 3, numerical experiments compare the FPPSS iterative method with the PSS iterative method to demonstrate its effectiveness and stability. Finally, some conclusions are given in Section 4.

2. The Four-Parameter PSS Iterative Method

In order to further improve the convergence speed of the PSS iterative method, a four-parameter PSS iterative method, namely FPPSS iterative method, is proposed to solve the continuous Sylvester equation.
Now, we use $P(V)$ and $S(V)$ to denote the positive-definite and skew-Hermitian parts of a matrix $V \in \mathbb{C}^{n \times n}$, respectively. Obviously, $V$ admits the positive-definite and skew-Hermitian splitting on which the PSS iterative method is based [7]:
$$V = P(V) + S(V).$$
In analogy with the PSS method, the matrices $A$ and $B$ admit the following splittings:
$$A = (\alpha_1 I + P(A)) - (\alpha_1 I - S(A)) = (\beta_1 I + S(A)) - (\beta_1 I - P(A)),$$
$$B = (\alpha_2 I + P(B)) - (\alpha_2 I - S(B)) = (\beta_2 I + S(B)) - (\beta_2 I - P(B)),$$
where $\alpha_j\,(j=1,2)$ are given non-negative constants, $\beta_j\,(j=1,2)$ are positive constants, and $I$ is the identity matrix of appropriate dimension.
Then Equation (1) can be rewritten equivalently as:
$$\begin{cases} (\alpha_1 I + P(A))X + X(\alpha_2 I + P(B)) = (\alpha_1 I - S(A))X + X(\alpha_2 I - S(B)) + C, \\ (\beta_1 I + S(A))X + X(\beta_2 I + S(B)) = (\beta_1 I - P(A))X + X(\beta_2 I - P(B)) + C. \end{cases}$$
Under assumptions (I)–(III), the matrices $\alpha_1 I + P(A)$ and $-(\alpha_2 I + P(B))$ have no common eigenvalues, and the matrices $\beta_1 I + S(A)$ and $-(\beta_2 I + S(B))$ likewise have no common eigenvalues, so each of the two equations above has a unique solution for any given right-hand side. This leads to the following four-parameter positive-definite and skew-Hermitian splitting iterative method for solving the continuous Sylvester Equation (1), namely the FPPSS iterative method.
Theorem 1.
Given any initial matrix $X^{(0)} \in \mathbb{C}^{m \times n}$, for $k = 0, 1, 2, \ldots$, compute $X^{(k+1)} \in \mathbb{C}^{m \times n}$ by the following scheme until the iteration sequence $\{X^{(k)}\}_{k=0}^{\infty}$ satisfies the stopping criterion:
$$\begin{cases} (\alpha_1 I + P(A))X^{(k+\frac{1}{2})} + X^{(k+\frac{1}{2})}(\alpha_2 I + P(B)) = (\alpha_1 I - S(A))X^{(k)} + X^{(k)}(\alpha_2 I - S(B)) + C, \\ (\beta_1 I + S(A))X^{(k+1)} + X^{(k+1)}(\beta_2 I + S(B)) = (\beta_1 I - P(A))X^{(k+\frac{1}{2})} + X^{(k+\frac{1}{2})}(\beta_2 I - P(B)) + C, \end{cases} \qquad (2)$$
where $\alpha_j\,(j=1,2)$ are given non-negative constants, $\beta_j\,(j=1,2)$ are positive constants, and $I$ is the identity matrix of appropriate dimension.
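For concreteness, the following sketch shows one sweep of scheme (2) in Python, using SciPy's Bartels–Stewart Sylvester solver for the two half-step subproblems; the function name fppss_step and its argument names are ours, not part of the paper.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def fppss_step(X, PA, SA, PB, SB, C, a1, a2, b1, b2):
    """One FPPSS sweep of scheme (2).

    PA, SA (resp. PB, SB) are the positive-definite and skew-Hermitian parts
    of A (resp. B); a1, a2 are the non-negative parameters alpha_1, alpha_2,
    and b1, b2 are the positive parameters beta_1, beta_2.
    """
    m, n = X.shape
    Im, In = np.eye(m), np.eye(n)
    # First half-step:
    # (a1*I + P(A)) X + X (a2*I + P(B)) = (a1*I - S(A)) X + X (a2*I - S(B)) + C
    rhs = (a1 * Im - SA) @ X + X @ (a2 * In - SB) + C
    X_half = solve_sylvester(a1 * Im + PA, a2 * In + PB, rhs)
    # Second half-step:
    # (b1*I + S(A)) X + X (b2*I + S(B)) = (b1*I - P(A)) X + X (b2*I - P(B)) + C
    rhs = (b1 * Im - PA) @ X_half + X_half @ (b2 * In - PB) + C
    return solve_sylvester(b1 * Im + SA, b2 * In + SB, rhs)
```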
Let P ( A ) , P ( B ) and S ( A ) , S ( B ) be the positive definite and skew-Hermitian parts of matrices A and B , respectively.
Let
$$\lambda_{\max}(P(A)) = \max_{\lambda_j \in sp(P(A))} |\lambda_j|, \quad \mu_{\max}(P(B)) = \max_{\mu_k \in sp(P(B))} |\mu_k|,$$
$$\lambda_{\min}(P(A)) = \min_{\lambda_j \in sp(P(A))} |\lambda_j|, \quad \mu_{\min}(P(B)) = \min_{\mu_k \in sp(P(B))} |\mu_k|,$$
$$\xi_{\max}(S(A)) = \max_{i\xi_j \in sp(S(A))} |\xi_j|, \quad \zeta_{\max}(S(B)) = \max_{i\zeta_k \in sp(S(B))} |\zeta_k|,$$
$$\xi_{\min}(S(A)) = \min_{i\xi_j \in sp(S(A))} |\xi_j|, \quad \zeta_{\min}(S(B)) = \min_{i\zeta_k \in sp(S(B))} |\zeta_k|,$$
where $i = \sqrt{-1}$, and
$$\Theta_{\max} = \lambda_{\max}(P(A)) + \mu_{\max}(P(B)), \quad \Upsilon_{\max} = \xi_{\max}(S(A)) + \zeta_{\max}(S(B)),$$
$$\Theta_{\min} = \lambda_{\min}(P(A)) + \mu_{\min}(P(B)), \quad \Upsilon_{\min} = \xi_{\min}(S(A)) + \zeta_{\min}(S(B)).$$
In addition, let $\mathcal{A} = \mathcal{P} + \mathcal{S}$, in which
$$\mathcal{P} = P(\mathcal{A}) = I \otimes P(A) + P(B)^T \otimes I, \quad \mathcal{S} = S(\mathcal{A}) = I \otimes S(A) + S(B)^T \otimes I,$$
where $\mathcal{A} = I \otimes A + B^T \otimes I$ is the Kronecker form of the coefficient matrix of Equation (1).
According to [18], $\Theta_{\max}$, $\Upsilon_{\max}$ and $\Theta_{\min}$, $\Upsilon_{\min}$ are, respectively, upper and lower bounds for the eigenvalues of $\mathcal{P}$ and for the moduli of the eigenvalues of $\mathcal{S}$.
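As a small illustration of these definitions (not part of the original paper), the bounds $\Theta_{\min}$, $\Theta_{\max}$, $\Upsilon_{\min}$, $\Upsilon_{\max}$ can be computed numerically for dense matrices; here we take $P(\cdot)$ to be the Hermitian part and $S(\cdot)$ the skew-Hermitian part, which is one admissible PSS splitting.

```python
import numpy as np

def spectral_bounds(A, B):
    """Return (Theta_min, Theta_max, Upsilon_min, Upsilon_max) for dense A, B (a sketch)."""
    PA, SA = (A + A.conj().T) / 2, (A - A.conj().T) / 2   # P(A), S(A)
    PB, SB = (B + B.conj().T) / 2, (B - B.conj().T) / 2   # P(B), S(B)
    lam = np.linalg.eigvalsh(PA)                # eigenvalues lambda_j of P(A) (real, >= 0)
    mu = np.linalg.eigvalsh(PB)                 # eigenvalues mu_k of P(B)
    xi = np.abs(np.linalg.eigvals(SA).imag)     # |xi_j| for the eigenvalues i*xi_j of S(A)
    zeta = np.abs(np.linalg.eigvals(SB).imag)   # |zeta_k| for the eigenvalues i*zeta_k of S(B)
    Theta_min, Theta_max = lam.min() + mu.min(), lam.max() + mu.max()
    Ups_min, Ups_max = xi.min() + zeta.min(), xi.max() + zeta.max()
    return Theta_min, Theta_max, Ups_min, Ups_max
```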
The convergence theorem of the FPPSS iterative method for solving the continuous Sylvester Equation (1) is proved as follows.
Theorem 2.
Suppose $A \in \mathbb{C}^{m \times m}$ and $B \in \mathbb{C}^{n \times n}$ are positive semidefinite matrices, and at least one of them is a positive definite matrix. $\alpha_j\,(j=1,2)$ are given non-negative constants and $\beta_j\,(j=1,2)$ are positive constants. Let:
$$M(\alpha,\beta) = (\beta I + \mathcal{S})^{-1}(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}(\alpha I - \mathcal{S}), \qquad (4)$$
and
$$\alpha = \alpha_1 + \alpha_2, \quad \beta = \beta_1 + \beta_2, \qquad (5)$$
then the upper bound of the spectral radius ρ ( M ( α , β ) ) of the iterative matrix (4) of the iterative method (2) is:
$$\sigma(\alpha,\beta) = \max_{\Theta}\left|\frac{\beta - \Theta}{\alpha + \Theta}\right| \cdot \max_{\Upsilon}\sqrt{\frac{\alpha^2 + \Upsilon^2}{\beta^2 + \Upsilon^2}}. \qquad (6)$$
In the meantime, if parameters α and β satisfy:
$$(\alpha,\beta) \in \bigcup_{i=1}^{4}\Omega_i, \qquad (7)$$
with
$$\Omega_1 = \{(\alpha,\beta) \mid \alpha \le \beta \le \beta(\alpha)\}, \quad \Omega_2 = \{(\alpha,\beta) \mid \beta \ge \max\{\alpha, \beta(\alpha)\},\ \phi_1(\alpha,\beta) > 0\},$$
$$\Omega_3 = \{(\alpha,\beta) \mid \beta(\alpha) \le \beta \le \alpha\}, \quad \Omega_4 = \{(\alpha,\beta) \mid \beta < \min\{\alpha, \beta(\alpha)\},\ \phi_2(\alpha,\beta) > 0\},$$
where functions ϕ 1 ( α , β ) , ϕ 2 ( α , β ) and β ( α ) are as follows:
$$\phi_1(\alpha,\beta) = (\beta - \alpha)(\Theta_{\min}^2 - \Upsilon_{\max}^2) + 2\alpha\beta\Theta_{\min} + 2\Upsilon_{\max}^2\Theta_{\min},$$
$$\phi_2(\alpha,\beta) = (\beta - \alpha)(\Theta_{\max}^2 - \Upsilon_{\min}^2) + 2\alpha\beta\Theta_{\max} + 2\Upsilon_{\min}^2\Theta_{\max},$$
$$\beta(\alpha) = \frac{\alpha(\Theta_{\max} + \Theta_{\min}) + 2\Theta_{\max}\Theta_{\min}}{2\alpha + \Theta_{\max} + \Theta_{\min}} \in [\Theta_{\min}, \Theta_{\max}], \qquad (8)$$
we can prove that σ ( α , β ) < 1 , that is, the FPPSS iterative method (2) converges to the exact solution X of the continuous Sylvester Equation (1).
Proof. 
Using the Kronecker product, the FPPSS iterative method (2) can be transformed into
$$\begin{cases} [I \otimes (\alpha_1 I + P(A)) + (\alpha_2 I + P(B))^T \otimes I]\,x^{(k+\frac{1}{2})} = [I \otimes (\alpha_1 I - S(A)) + (\alpha_2 I - S(B))^T \otimes I]\,x^{(k)} + c, \\ [I \otimes (\beta_1 I + S(A)) + (\beta_2 I + S(B))^T \otimes I]\,x^{(k+1)} = [I \otimes (\beta_1 I - P(A)) + (\beta_2 I - P(B))^T \otimes I]\,x^{(k+\frac{1}{2})} + c, \end{cases} \qquad (9)$$
and Equation (9) can be further turned into:
$$\begin{cases} [(\alpha_1 + \alpha_2)I + I \otimes P(A) + P(B)^T \otimes I]\,x^{(k+\frac{1}{2})} = [(\alpha_1 + \alpha_2)I - I \otimes S(A) - S(B)^T \otimes I]\,x^{(k)} + c, \\ [(\beta_1 + \beta_2)I + I \otimes S(A) + S(B)^T \otimes I]\,x^{(k+1)} = [(\beta_1 + \beta_2)I - I \otimes P(A) - P(B)^T \otimes I]\,x^{(k+\frac{1}{2})} + c, \end{cases} \qquad (10)$$
which can be rewritten equivalently as:
$$\begin{cases} (\alpha I + \mathcal{P})\,x^{(k+\frac{1}{2})} = (\alpha I - \mathcal{S})\,x^{(k)} + c, \\ (\beta I + \mathcal{S})\,x^{(k+1)} = (\beta I - \mathcal{P})\,x^{(k+\frac{1}{2})} + c. \end{cases} \qquad (11)$$
Rearranging Formula (11), we get:
$$x^{(k+1)} = \big[(\beta I + \mathcal{S})^{-1}(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}(\alpha I - \mathcal{S})\big]x^{(k)} + \big[(\alpha + \beta)(\beta I + \mathcal{S})^{-1}(\alpha I + \mathcal{P})^{-1}\big]c = M(\alpha,\beta)\,x^{(k)} + N(\alpha,\beta)\,c, \qquad (12)$$
where M ( α , β ) is an iterative matrix.
According to [19], $\mathcal{P}$ is a positive definite matrix, $\mathcal{S}$ is a skew-Hermitian matrix, $\alpha$ is a non-negative constant, and $\beta$ is a positive constant.
The spectral radius of the iterative matrix M ( α , β ) satisfies:
$$\rho(M(\alpha,\beta)) = \rho\big((\beta I + \mathcal{S})^{-1}(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}(\alpha I - \mathcal{S})\big) \le \big\|(\beta I + \mathcal{S})^{-1}(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}(\alpha I - \mathcal{S})\big\|_2. \qquad (13)$$
Because
$$(\beta I + \mathcal{S})^{-1}(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}(\alpha I - \mathcal{S})$$
is similar to
$$(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}(\alpha I - \mathcal{S})(\beta I + \mathcal{S})^{-1},$$
(13) can be rewritten as:
$$\rho(M(\alpha,\beta)) = \rho\big((\beta I + \mathcal{S})^{-1}(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}(\alpha I - \mathcal{S})\big) \le \big\|(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}(\alpha I - \mathcal{S})(\beta I + \mathcal{S})^{-1}\big\|_2 \le \big\|(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}\big\|_2\,\big\|(\alpha I - \mathcal{S})(\beta I + \mathcal{S})^{-1}\big\|_2 = \|V_1(\alpha)\|_2\,\|V_2(\alpha)\|_2. \qquad (14)$$
(1) Consider $\|V_1(\alpha)\|_2 = \|(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}\|_2$:
$$\|V_1(\alpha)\|_2^2 = \max\lambda\big\{[(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}]^T[(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}]\big\} = \max\lambda\big\{[(\alpha I + \mathcal{P})^T]^{-1}(\beta I - \mathcal{P})^T(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}\big\}; \qquad (15)$$
since
$$[(\alpha I + \mathcal{P})^T]^{-1}(\beta I - \mathcal{P})^T(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}$$
is similar to
$$(\beta I - \mathcal{P})^T(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}[(\alpha I + \mathcal{P})^T]^{-1},$$
(15) can be rewritten as:
$$\begin{aligned} \|V_1(\alpha)\|_2^2 &= \max\lambda\big\{(\beta I - \mathcal{P})^T(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}[(\alpha I + \mathcal{P})^T]^{-1}\big\} \\ &= \max\lambda\big\{[(\beta I - \mathcal{P})^T(\beta I - \mathcal{P})][(\alpha I + \mathcal{P})^T(\alpha I + \mathcal{P})]^{-1}\big\} \\ &= \max\lambda\big\{[(\beta I - \mathcal{P}^T)(\beta I - \mathcal{P})][(\alpha I + \mathcal{P}^T)(\alpha I + \mathcal{P})]^{-1}\big\} \\ &= \max\lambda\big\{[\beta^2 I - \beta(\mathcal{P} + \mathcal{P}^T) + \mathcal{P}^T\mathcal{P}][\alpha^2 I + \alpha(\mathcal{P} + \mathcal{P}^T) + \mathcal{P}^T\mathcal{P}]^{-1}\big\}. \qquad (16) \end{aligned}$$
(16) can be equivalently rewritten as
$$\|V_1(\alpha)\|_2^2 = \max\frac{\lambda[\beta^2 I - \beta(\mathcal{P} + \mathcal{P}^T) + \mathcal{P}^T\mathcal{P}]}{\lambda[\alpha^2 I + \alpha(\mathcal{P} + \mathcal{P}^T) + \mathcal{P}^T\mathcal{P}]} = \max\frac{\beta^2 - 2\beta\Theta + \Theta^2}{\alpha^2 + 2\alpha\Theta + \Theta^2} = \max\left(\frac{\beta - \Theta}{\alpha + \Theta}\right)^2, \qquad (17)$$
so $\|V_1(\alpha)\|_2 = \max_{\Theta}\left|\dfrac{\beta - \Theta}{\alpha + \Theta}\right|$.
(2) Consider $\|V_2(\alpha)\|_2 = \|(\alpha I - \mathcal{S})(\beta I + \mathcal{S})^{-1}\|_2$:
in the same way as for $\|V_1(\alpha)\|_2 = \|(\beta I - \mathcal{P})(\alpha I + \mathcal{P})^{-1}\|_2$, we get
$$\|V_2(\alpha)\|_2^2 = \max\frac{\lambda[\alpha^2 I - \alpha(\mathcal{S} + \mathcal{S}^T) + \mathcal{S}^T\mathcal{S}]}{\lambda[\beta^2 I + \beta(\mathcal{S} + \mathcal{S}^T) + \mathcal{S}^T\mathcal{S}]} = \max\frac{\alpha^2 + \Upsilon^2}{\beta^2 + \Upsilon^2}, \qquad (18)$$
so $\|V_2(\alpha)\|_2 = \max_{\Upsilon}\sqrt{\dfrac{\alpha^2 + \Upsilon^2}{\beta^2 + \Upsilon^2}}$.
Substituting (17) and (18) into (14), we get:
$$\rho(M(\alpha,\beta)) \le \max_{\Theta}\left|\frac{\beta - \Theta}{\alpha + \Theta}\right| \cdot \max_{\Upsilon}\sqrt{\frac{\alpha^2 + \Upsilon^2}{\beta^2 + \Upsilon^2}}, \qquad (19)$$
which gives the upper bound $\sigma(\alpha,\beta) = \max_{\Theta}\left|\frac{\beta - \Theta}{\alpha + \Theta}\right| \cdot \max_{\Upsilon}\sqrt{\frac{\alpha^2 + \Upsilon^2}{\beta^2 + \Upsilon^2}}$ of the spectral radius of the iteration matrix $M(\alpha,\beta)$.
In the following, arguing as in the proof of Theorem 2.2 of [20], we obtain:
$$\max_{\Theta}\left|\frac{\beta - \Theta}{\alpha + \Theta}\right| = \max\left\{\left|\frac{\beta - \Theta_{\max}}{\alpha + \Theta_{\max}}\right|, \left|\frac{\beta - \Theta_{\min}}{\alpha + \Theta_{\min}}\right|\right\}. \qquad (20)$$
Equating the two expressions on the right-hand side of (20), with the absolute value signs removed, gives:
$$\frac{\beta - \Theta_{\min}}{\alpha + \Theta_{\min}} = \frac{\Theta_{\max} - \beta}{\alpha + \Theta_{\max}}. \qquad (21)$$
Solving Formula (21), we obtain:
$$\beta(\alpha) = \frac{\alpha(\Theta_{\max} + \Theta_{\min}) + 2\Theta_{\max}\Theta_{\min}}{2\alpha + \Theta_{\max} + \Theta_{\min}} \in [\Theta_{\min}, \Theta_{\max}]. \qquad (22)$$
It follows that:
$$\|V_1(\alpha)\|_2 = \max_{\Theta}\left|\frac{\beta - \Theta}{\alpha + \Theta}\right| = \begin{cases} \dfrac{\Theta_{\max} - \beta}{\Theta_{\max} + \alpha}, & \beta < \beta(\alpha), \\[2mm] \dfrac{\beta - \Theta_{\min}}{\Theta_{\min} + \alpha}, & \beta \ge \beta(\alpha). \end{cases} \qquad (23)$$
Similarly, we obtain:
$$\|V_2(\alpha)\|_2 = \max_{\Upsilon}\sqrt{\frac{\alpha^2 + \Upsilon^2}{\beta^2 + \Upsilon^2}} = \begin{cases} \sqrt{\dfrac{\alpha^2 + \Upsilon_{\max}^2}{\beta^2 + \Upsilon_{\max}^2}}, & \alpha \le \beta, \\[2mm] \sqrt{\dfrac{\alpha^2 + \Upsilon_{\min}^2}{\beta^2 + \Upsilon_{\min}^2}}, & \alpha > \beta. \end{cases} \qquad (24)$$
At this point we can divide the region $\Omega = \{(\alpha,\beta) \mid \alpha \ge 0, \beta > 0\}$ into the following four parts according to (23) and (24):
$$\Omega_1 = \{(\alpha,\beta) \mid \alpha \le \beta < \beta(\alpha)\}, \quad \Omega_2 = \{(\alpha,\beta) \mid \beta \ge \max\{\alpha, \beta(\alpha)\}\},$$
$$\Omega_3 = \{(\alpha,\beta) \mid \beta(\alpha) \le \beta \le \alpha\}, \quad \Omega_4 = \{(\alpha,\beta) \mid \beta < \min\{\alpha, \beta(\alpha)\}\}.$$
From (19), (23), and (24) we can know:
(1) For $(\alpha,\beta) \in \Omega_1 = \{(\alpha,\beta) \mid \alpha \le \beta < \beta(\alpha)\}$,
$$\rho(M(\alpha,\beta)) \le \frac{\Theta_{\max} - \beta}{\Theta_{\max} + \alpha}\sqrt{\frac{\alpha^2 + \Upsilon_{\max}^2}{\beta^2 + \Upsilon_{\max}^2}} < 1.$$
(2) For $(\alpha,\beta) \in \Omega_2 = \{(\alpha,\beta) \mid \beta \ge \max\{\alpha, \beta(\alpha)\}\}$,
$$\rho(M(\alpha,\beta)) \le \frac{\beta - \Theta_{\min}}{\Theta_{\min} + \alpha}\sqrt{\frac{\alpha^2 + \Upsilon_{\max}^2}{\beta^2 + \Upsilon_{\max}^2}}; \qquad (25)$$
the right-hand side of (25) is less than 1 if and only if
$$\phi_1(\alpha,\beta) = (\beta - \alpha)(\Theta_{\min}^2 - \Upsilon_{\max}^2) + 2\alpha\beta\Theta_{\min} + 2\Upsilon_{\max}^2\Theta_{\min} > 0.$$
(3) For $(\alpha,\beta) \in \Omega_3 = \{(\alpha,\beta) \mid \beta(\alpha) \le \beta \le \alpha\}$,
$$\rho(M(\alpha,\beta)) \le \frac{\beta - \Theta_{\min}}{\alpha + \Theta_{\min}}\sqrt{\frac{\alpha^2 + \Upsilon_{\min}^2}{\beta^2 + \Upsilon_{\min}^2}} < \frac{\beta}{\alpha}\cdot\frac{\alpha}{\beta} = 1.$$
(4) For $(\alpha,\beta) \in \Omega_4 = \{(\alpha,\beta) \mid \beta < \min\{\alpha, \beta(\alpha)\}\}$,
$$\rho(M(\alpha,\beta)) \le \frac{\Theta_{\max} - \beta}{\Theta_{\max} + \alpha}\sqrt{\frac{\alpha^2 + \Upsilon_{\min}^2}{\beta^2 + \Upsilon_{\min}^2}}; \qquad (26)$$
the right-hand side of (26) is less than 1 if and only if
$$\phi_2(\alpha,\beta) = (\beta - \alpha)(\Theta_{\max}^2 - \Upsilon_{\min}^2) + 2\alpha\beta\Theta_{\max} + 2\Upsilon_{\min}^2\Theta_{\max} > 0.$$
In summary, we conclude that:
$$\rho(M(\alpha,\beta)) \le \sigma(\alpha,\beta) < 1, \qquad \forall\,(\alpha,\beta) \in \bigcup_{i=1}^{4}\Omega_i.$$
Theorem 2 is verified. □
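As a quick numerical sanity check of Theorem 2 (our own script, not part of the paper), one can form $\mathcal{P}$ and $\mathcal{S}$ via Kronecker products for a small test problem and compare $\rho(M(\alpha,\beta))$ with the bound from (19); here $\Theta$ and $\Upsilon$ are taken as the exact extreme eigenvalues of $\mathcal{P}$ and $\mathcal{S}$, which the sums defined above bound.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 5
# Small test matrices in the spirit of (I)-(III): positive definite Hermitian part plus a skew part.
A = np.diag(rng.uniform(1.0, 3.0, m)) + 0.3 * (lambda W: W - W.T)(rng.standard_normal((m, m)))
B = np.diag(rng.uniform(1.0, 3.0, n)) + 0.3 * (lambda W: W - W.T)(rng.standard_normal((n, n)))
PA, SA = (A + A.T) / 2, (A - A.T) / 2
PB, SB = (B + B.T) / 2, (B - B.T) / 2
P = np.kron(np.eye(n), PA) + np.kron(PB.T, np.eye(m))   # the matrix script-P
S = np.kron(np.eye(n), SA) + np.kron(SB.T, np.eye(m))   # the matrix script-S
I = np.eye(m * n)

alpha, beta = 1.0, 2.0                                   # an arbitrary pair for the spot check
M = np.linalg.solve(beta * I + S, beta * I - P) @ np.linalg.solve(alpha * I + P, alpha * I - S)
rho = np.abs(np.linalg.eigvals(M)).max()

Theta = np.linalg.eigvalsh(P)                 # exact eigenvalues of script-P
Ups = np.abs(np.linalg.eigvals(S).imag)       # moduli of the eigenvalues of script-S
v1 = max(abs((beta - Theta.max()) / (alpha + Theta.max())),
         abs((beta - Theta.min()) / (alpha + Theta.min())))
v2 = max(np.sqrt((alpha**2 + Ups.max()**2) / (beta**2 + Ups.max()**2)),
         np.sqrt((alpha**2 + Ups.min()**2) / (beta**2 + Ups.min()**2)))
sigma = v1 * v2
print(rho, sigma, rho <= sigma + 1e-12)       # expect rho <= sigma, as in (19)
```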
Theorem 3.
The theoretically optimal parameters that minimize $\sigma(\alpha,\beta)$ are:
$$(\alpha^*,\beta^*) = \arg\min_{\alpha,\beta}\{\sigma(\alpha,\beta)\} = \begin{cases} (\alpha_1, \beta(\alpha_1)), & \Theta_{\max}\Theta_{\min} \le \Upsilon_{\min}^2, \\ (\alpha_0, \beta(\alpha_0)), & \Upsilon_{\min}^2 < \Theta_{\max}\Theta_{\min} < \Upsilon_{\max}^2, \\ (\alpha_2, \beta(\alpha_2)), & \Theta_{\max}\Theta_{\min} \ge \Upsilon_{\max}^2, \end{cases}$$
where
$$\alpha_1 = \frac{\Upsilon_{\min}^2 - \Theta_{\max}\Theta_{\min} + \sqrt{(\Upsilon_{\min}^2 + \Theta_{\max}^2)(\Upsilon_{\min}^2 + \Theta_{\min}^2)}}{\Theta_{\max} + \Theta_{\min}}, \quad \alpha_0 = \sqrt{\Theta_{\max}\Theta_{\min}}, \quad \alpha_2 = \frac{\Upsilon_{\max}^2 - \Theta_{\max}\Theta_{\min} + \sqrt{(\Upsilon_{\max}^2 + \Theta_{\max}^2)(\Upsilon_{\max}^2 + \Theta_{\min}^2)}}{\Theta_{\max} + \Theta_{\min}}.$$
The upper bound of the spectral radius of the corresponding iterative matrix is:
$$\sigma(\alpha^*,\beta^*) = \begin{cases} \sigma(\alpha_1), & \Theta_{\max}\Theta_{\min} \le \Upsilon_{\min}^2, \\ \sigma(\alpha_0), & \Upsilon_{\min}^2 < \Theta_{\max}\Theta_{\min} < \Upsilon_{\max}^2, \\ \sigma(\alpha_2), & \Theta_{\max}\Theta_{\min} \ge \Upsilon_{\max}^2, \end{cases}$$
where
$$\sigma(\alpha) = \sigma(\alpha, \beta(\alpha)) = \begin{cases} \dfrac{\beta(\alpha) - \Theta_{\min}}{\alpha + \Theta_{\min}}\sqrt{\dfrac{\alpha^2 + \Upsilon_{\min}^2}{\beta(\alpha)^2 + \Upsilon_{\min}^2}}, & \alpha > \alpha_0, \\[3mm] \dfrac{\beta(\alpha) - \Theta_{\min}}{\alpha + \Theta_{\min}}\sqrt{\dfrac{\alpha^2 + \Upsilon_{\max}^2}{\beta(\alpha)^2 + \Upsilon_{\max}^2}}, & \alpha \le \alpha_0. \end{cases}$$
Proof. 
From (23) and (24) we can know:
$$\sigma(\alpha,\beta) = \begin{cases} \dfrac{\Theta_{\max} - \beta}{\Theta_{\max} + \alpha}\sqrt{\dfrac{\alpha^2 + \Upsilon_{\max}^2}{\beta^2 + \Upsilon_{\max}^2}}, & (\alpha,\beta) \in \Omega_1, \\[3mm] \dfrac{\beta - \Theta_{\min}}{\Theta_{\min} + \alpha}\sqrt{\dfrac{\alpha^2 + \Upsilon_{\max}^2}{\beta^2 + \Upsilon_{\max}^2}}, & (\alpha,\beta) \in \Omega_2, \\[3mm] \dfrac{\beta - \Theta_{\min}}{\alpha + \Theta_{\min}}\sqrt{\dfrac{\alpha^2 + \Upsilon_{\min}^2}{\beta^2 + \Upsilon_{\min}^2}}, & (\alpha,\beta) \in \Omega_3, \\[3mm] \dfrac{\Theta_{\max} - \beta}{\Theta_{\max} + \alpha}\sqrt{\dfrac{\alpha^2 + \Upsilon_{\min}^2}{\beta^2 + \Upsilon_{\min}^2}}, & (\alpha,\beta) \in \Omega_4. \end{cases} \qquad (27)$$
We observe from (27) that $\frac{\partial\sigma}{\partial\beta}(\alpha,\beta) < 0$ when $(\alpha,\beta) \in \Omega_1$ or $(\alpha,\beta) \in \Omega_4$, and $\frac{\partial\sigma}{\partial\beta}(\alpha,\beta) > 0$ when $(\alpha,\beta) \in \Omega_2$ or $(\alpha,\beta) \in \Omega_3$; hence, for each fixed $\alpha$, $\sigma(\alpha,\beta)$ attains its minimum over $\beta$ at $\beta = \beta(\alpha)$.
Substituting $\beta = \beta(\alpha)$ into (27), we obtain:
$$\sigma(\alpha) = \sigma(\alpha, \beta(\alpha)) = \begin{cases} \dfrac{\beta(\alpha) - \Theta_{\min}}{\alpha + \Theta_{\min}}\sqrt{\dfrac{\alpha^2 + \Upsilon_{\min}^2}{\beta(\alpha)^2 + \Upsilon_{\min}^2}}, & \alpha > \alpha_0, \\[3mm] \dfrac{\beta(\alpha) - \Theta_{\min}}{\alpha + \Theta_{\min}}\sqrt{\dfrac{\alpha^2 + \Upsilon_{\max}^2}{\beta(\alpha)^2 + \Upsilon_{\max}^2}}, & \alpha \le \alpha_0; \end{cases} \qquad (28)$$
thus, computing the minimum value of (27) reduces to computing the minimum value of (28).
Differentiating (28) with respect to $\alpha$ gives:
$$\sigma'(\alpha) = \begin{cases} c_1(\alpha)\,\eta_1(\alpha), & \alpha > \alpha_0, \\ c_2(\alpha)\,\eta_2(\alpha), & \alpha < \alpha_0, \end{cases} \qquad (29)$$
where $c_1(\alpha)$ and $c_2(\alpha)$ are two positive functions, and:
$$\eta_1(\alpha) = (\Theta_{\max} + \Theta_{\min})\alpha^2 + 2\alpha(\Theta_{\max}\Theta_{\min} - \Upsilon_{\min}^2) - \Upsilon_{\min}^2(\Theta_{\max} + \Theta_{\min}),$$
$$\eta_2(\alpha) = (\Theta_{\max} + \Theta_{\min})\alpha^2 + 2\alpha(\Theta_{\max}\Theta_{\min} - \Upsilon_{\max}^2) - \Upsilon_{\max}^2(\Theta_{\max} + \Theta_{\min}). \qquad (30)$$
It can be observed that $\eta_1(\alpha)$ and $\eta_2(\alpha)$ have the same form, and each has one positive root and one negative root; the positive roots are denoted by $\alpha_1$ and $\alpha_2$, respectively, and because $\Upsilon_{\max} > \Upsilon_{\min}$ we have $\alpha_1 < \alpha_2$. Also note that $\Theta_{\max} + \Theta_{\min} \neq 0$.
Substituting $\alpha = \alpha_0$ into (30) gives:
$$\eta_1(\alpha_0) = (\sqrt{\Theta_{\max}} + \sqrt{\Theta_{\min}})^2(\Theta_{\max}\Theta_{\min} - \Upsilon_{\min}^2), \quad \eta_2(\alpha_0) = (\sqrt{\Theta_{\max}} + \sqrt{\Theta_{\min}})^2(\Theta_{\max}\Theta_{\min} - \Upsilon_{\max}^2). \qquad (31)$$
According to (31), we can find:
(1) When $\Theta_{\max}\Theta_{\min} \le \Upsilon_{\min}^2$, we have $\eta_1(\alpha_0) \le 0$ and $\eta_2(\alpha_0) < 0$, so $\alpha_0 \le \alpha_1 < \alpha_2$; in this case $\sigma(\alpha,\beta)$ attains its minimum at $(\alpha_1, \beta(\alpha_1))$.
(2) When $\Upsilon_{\min}^2 < \Theta_{\max}\Theta_{\min} < \Upsilon_{\max}^2$, we have $\eta_1(\alpha_0) > 0$ and $\eta_2(\alpha_0) < 0$, so $\alpha_1 < \alpha_0 < \alpha_2$; in this case $\sigma(\alpha,\beta)$ attains its minimum at $(\alpha_0, \beta(\alpha_0))$.
(3) When $\Theta_{\max}\Theta_{\min} \ge \Upsilon_{\max}^2$, we have $\eta_1(\alpha_0) > 0$ and $\eta_2(\alpha_0) \ge 0$, so $\alpha_1 < \alpha_2 \le \alpha_0$; in this case $\sigma(\alpha,\beta)$ attains its minimum at $(\alpha_2, \beta(\alpha_2))$.
In summary, Theorem 3 is verified. □
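The case distinction in Theorem 3 is straightforward to evaluate numerically. The sketch below (our own helper, not part of the paper) returns the theoretically quasi-optimal pair $(\alpha^*, \beta^*)$ from the four bounds; note that the analysis only constrains the sums $\alpha = \alpha_1 + \alpha_2$ and $\beta = \beta_1 + \beta_2$, so how each sum is split between the two factors is left to the user.

```python
import numpy as np

def optimal_parameters(Theta_min, Theta_max, Ups_min, Ups_max):
    """Quasi-optimal (alpha, beta) according to Theorem 3 (a sketch)."""
    s = Theta_max + Theta_min
    prod = Theta_max * Theta_min

    def beta_of(a):
        # beta(alpha) as defined in Theorem 2
        return (a * s + 2.0 * prod) / (2.0 * a + s)

    a1 = (Ups_min**2 - prod + np.sqrt((Ups_min**2 + Theta_max**2) * (Ups_min**2 + Theta_min**2))) / s
    a0 = np.sqrt(prod)
    a2 = (Ups_max**2 - prod + np.sqrt((Ups_max**2 + Theta_max**2) * (Ups_max**2 + Theta_min**2))) / s

    if prod <= Ups_min**2:        # case (1) of Theorem 3
        a = a1
    elif prod < Ups_max**2:       # case (2)
        a = a0
    else:                         # case (3)
        a = a2
    return a, beta_of(a)
```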

3. Numerical Experiments

In this part, we use numerical experiments to compare the FPPSS, PSS, and HSS iterative methods for solving the continuous Sylvester Equation (1) in terms of the number of iteration steps (IT) and the computing time (CPU).
In the implementation of the algorithms, for convenience, the initial matrix $X^{(0)}$ is taken to be the zero matrix, and the stopping criterion is $\frac{\|C - AX^{(k)} - X^{(k)}B\|_F}{\|C\|_F} \le 10^{-6}$. In addition, in each step of the iterative methods, the subproblems are solved by the direct method of [20].
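Putting the pieces together, a minimal driver for these experiments could look as follows (a sketch, reusing the fppss_step function from the listing in Section 2; the zero initial guess and the stopping rule are as stated above, and the dense Bartels–Stewart solver stands in for the direct method of [20]).

```python
import numpy as np

def fppss_solve(A, B, C, a1, a2, b1, b2, tol=1e-6, max_it=1000):
    """Run the FPPSS iteration until ||C - A X - X B||_F / ||C||_F <= tol."""
    # Hermitian / skew-Hermitian parts used as the PSS splitting of A and B.
    PA, SA = (A + A.conj().T) / 2, (A - A.conj().T) / 2
    PB, SB = (B + B.conj().T) / 2, (B - B.conj().T) / 2
    X = np.zeros((A.shape[0], B.shape[0]), dtype=C.dtype)   # X^(0) = 0
    normC = np.linalg.norm(C, 'fro')
    for it in range(1, max_it + 1):
        X = fppss_step(X, PA, SA, PB, SB, C, a1, a2, b1, b2)
        if np.linalg.norm(C - A @ X - X @ B, 'fro') / normC <= tol:
            return X, it
    return X, max_it
```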
Example 1.
In order to generate large sparse matrices A and B, we construct them as follows (see also [13]):
$$A = \begin{pmatrix} 10 & 1 & & & 1 \\ 2 & 10 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 2 & 10 & 1 \\ 1 & & & 2 & 10 \end{pmatrix}, \qquad B = \begin{pmatrix} 8 & 1 & & & 1 \\ 3 & 8 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 3 & 8 & 1 \\ 1 & & & 3 & 8 \end{pmatrix}.$$
Table 1 and Table 2 list the numerical results of the FPPSS, PSS, and HSS iterative methods when using experimental quasi-optimal parameters; $\alpha_1$, $\beta_1$, and $\alpha$ (where $\beta = \alpha$) denote the experimental quasi-optimal parameters of the FPPSS, PSS, and HSS iterative methods, respectively.
Example 2.
The continuous Sylvester Equation (1) with $m = n$ and the matrices:
$$A = \mathrm{diag}(1, 2, \ldots, n) + 10^{-3}L^T, \qquad B = 2^{-t}I + \mathrm{diag}(1, 2, \ldots, n) + 10^{-3}L^T + 2^{-t}L,$$
where $L$ is the strictly lower triangular matrix having ones in its lower triangular part and $t$ is a problem parameter to be specified in the actual computations.
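A sketch of how these test matrices can be assembled (the helper name and the dense construction are ours; the factors $10^{-3}$ and $2^{-t}$ follow the form given above):

```python
import numpy as np

def example2_matrices(n, t):
    """Build A and B of Example 2 for a given size n and problem parameter t."""
    L = np.tril(np.ones((n, n)), k=-1)        # strictly lower triangular matrix of ones
    D = np.diag(np.arange(1.0, n + 1.0))      # diag(1, 2, ..., n)
    A = D + 1e-3 * L.T
    B = 2.0 ** (-t) * np.eye(n) + D + 1e-3 * L.T + 2.0 ** (-t) * L
    return A, B
```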
Table 3 and Table 4 list the numerical results of the FPPSS, PSS, and HSS iterative methods when using experimental quasi-optimal parameters; $\alpha_1$, $\beta_1$, and $\alpha$ (where $\beta = \alpha$) denote the experimental quasi-optimal parameters of the FPPSS, PSS, and HSS iterative methods, respectively.
Example 3.
Consider Equation (1) with $m = n$, $A = B = M + qN + \frac{100}{(n+1)^2}I$, where $M, N \in \mathbb{R}^{n \times n}$ are the tridiagonal matrices $M = \mathrm{tridiag}(-1, 2, -1)$ and $N = \mathrm{tridiag}(0.5, 0, -0.5)$. Table 5 and Table 6 list the numerical results of the FPPSS, PSS, and HSS iterative methods when using experimental quasi-optimal parameters; $\alpha_1$ (where $\alpha_1 = \alpha_2$), $\beta_1$ (where $\beta_1 = \beta_2$), and $\alpha$ (where $\beta = \alpha$) denote the experimental quasi-optimal parameters of the FPPSS, PSS, and HSS iterative methods, respectively.
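For reference, the Example 3 coefficient matrix can be generated as below (a sketch; the signs in tridiag(−1, 2, −1) and tridiag(0.5, 0, −0.5) follow the form given above, and the helper name is ours):

```python
import numpy as np
from scipy.sparse import diags, identity

def example3_matrix(n, q):
    """Build A = B = M + q*N + 100/(n+1)^2 * I of Example 3 as a dense array."""
    M = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n))   # tridiag(-1, 2, -1)
    N = diags([0.5, 0.0, -0.5], offsets=[-1, 0, 1], shape=(n, n))    # tridiag(0.5, 0, -0.5)
    A = M + q * N + 100.0 / (n + 1) ** 2 * identity(n)
    return A.toarray()
```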
The above three examples show that, although the runtime of the FPPSS iteration method is slightly higher than that of the earlier iteration methods when the dimension of the coefficient matrices is small, it is lower when the dimension is large. Moreover, the FPPSS iteration method requires fewer iteration steps than the earlier methods regardless of the dimension of the coefficient matrices. Even when the matrix dimension is high, the results can still be computed in a shorter runtime and with fewer iteration steps. These numerical experiments show that the FPPSS iteration method is an effective improved algorithm.

4. Conclusions

In this paper, a new four-parameter positive-definite and skew-Hermitian splitting iterative method, namely the FPPSS iterative method, is applied to solve the Sylvester equation of the form $AX + XB = C$; it is a generalization of the classical PSS iterative method [7]. This paper proves that when the parameters satisfy certain conditions, the iterative sequence generated by the FPPSS method converges to the unique solution of the Sylvester equation, and that the PSS method is a special case of the FPPSS method. We also give the theoretically optimal parameter sums that minimize the upper bound of the spectral radius of the iteration matrix. In addition, the experimental data show that the FPPSS iterative method is superior to the PSS and HSS iterative methods in most cases in terms of CPU and IT, which indicates that the newly constructed FPPSS iterative method is an effective iterative method for solving the Sylvester equation.

Author Contributions

Conceptualization, H.-L.S. and Y.-R.L.; methodology, H.-L.S.; software, Y.-R.L.; validation, H.-L.S., Y.-R.L. and X.-H.S.; formal analysis, H.-L.S.; data curation, Y.-R.L.; writing–original draft preparation, H.-L.S.; writing–review & editing, Y.-R.L.; project administration, X.-H.S.; funding acquisition, H.-L.S.

Funding

Project supported by the National Natural Science Foundation of China (No. 11371081) and the Natural Science Foundation of Liaoning Province (No. 20170540323).

Acknowledgments

The authors would like to express their sincere thanks to the referees for their comments and constructive suggestions, which were valuable in improving the quality of the original paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, X. HSS Based Iterative Methods and Accelerating Techniques for Some Linear and Nonlinear Equations and a Class of Continuous Sylvester Equations; Lanzhou University: Gansu, China, 2013.
2. Calvetti, D.; Reichel, L. Application of ADI iterative methods to the restoration of noisy images. Soc. Ind. Appl. Math. 1996, 17, 165–186.
3. Obinata, G.; Anderson, B. Model Reduction for Control System Design; Springer: Berlin, Germany, 2001; Volume 54.
4. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand. 1952, 49, 409–436.
5. Saad, Y.; Schultz, M.H. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7, 856–869.
6. Bai, Z.Z. On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations. J. Comput. Math. 2011, 29, 185–198.
7. Bai, Z.Z.; Golub, G.H.; Lu, L.Z.; Yin, J.F. Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J. Sci. Comput. 2006, 26, 844–863.
8. Bai, Z.Z.; Golub, G.H.; Ng, M.K. On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations. Numer. Linear Algebra Appl. 2007, 14, 319–335.
9. Bai, Z.Z.; Golub, G.H.; Pan, J.Y. Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numerische Mathematik 2004, 428, 413–440.
10. Bertaccini, D.; Golub, G.H.; Serra Capizzano, S.; Tablino Possio, C. Preconditioned HSS methods for the solution of non-Hermitian positive definite linear systems. Numerische Mathematik 2005, 99, 441–484.
11. Li, L.; Huang, T.Z.; Liu, X.P. Asymmetric Hermitian and skew-Hermitian splitting methods for positive definite linear systems. Comput. Math. Appl. 2010, 14, 217–235.
12. He, J.; Huang, T.Z.; Cheng, G.H. A modified generalization of the Hermitian and skew-Hermitian splitting iteration. Bulletin Mathematique de la Societe des Sciences Mathematiques de Roumanie 2012, 2, 147–155.
13. Wang, X.; Li, W.W.; Mao, L.Z. On positive-definite and skew-Hermitian splitting iteration methods for continuous Sylvester equation AX + XB = C. East Asian J. Appl. Math. 2013, 7, 55–69.
14. Zheng, Q.Q.; Ma, C.F. On normal and skew-Hermitian splitting iteration methods for large sparse continuous Sylvester equations. J. Comput. Appl. Math. 2014, 268, 145–154.
15. Zhou, D.; Chen, G.; Cai, Q. On modified HSS iteration methods for continuous Sylvester equations. Appl. Math. Comput. 2015, 263, 84–93.
16. Zhou, R.; Wang, X.; Tang, X. A generalization of the Hermitian and skew-Hermitian splitting iteration method for solving Sylvester equations. Appl. Math. Comput. 2015, 271, 609–617.
17. Dong, Y.; Gu, C. On PMHSS iteration method for continuous Sylvester equation. J. Comput. Math. 2017, 35, 600–619.
18. Horn, R.A. Topics in Matrix Analysis; Cambridge University Press: Cambridge, UK, 1994; pp. 239–297.
19. Xu, Z. Concise Course on Matrix Theory, 3rd ed.; Science Press: Beijing, China, 2014; pp. 160–167.
20. Bartels, R.H.; Stewart, G.W. Solution of the matrix equation AX + XB = C. Commun. ACM 1972, 15, 820–826.
Table 1. IT and CPU for the four-parameter positive-definite and skew-Hermitian splitting (FPPSS), positive-definite and skew-Hermitian splitting (PSS), and Hermitian and skew-Hermitian splitting (HSS) iterative methods for Example 1 when using experimental quasi-optimal parameters.

Method      FPPSS           PSS             HSS
n           IT    CPU       IT    CPU       IT    CPU
n = 8       6     1.312     16    1.153     15    1.249
n = 16      6     1.318     16    1.147     16    1.166
n = 32      6     1.332     17    1.298     16    1.250
n = 64      6     1.424     17    1.559     16    1.495
n = 128     6     2.230     17    3.134     16    3.655
n = 256     6     8.406     17    19.187    16    26.811
n = 512     6     86.889    17    192.139   16    230.645
n = 1024    6     818.956   -     -         -     -
Table 2. The practical optimal values for FPPSS, PSS, and HSS for Example 1.

Method      FPPSS               PSS        HSS
n           α1        β1        α = β      α = β
n = 8       0.3794    9         4.7843     4.7843
n = 16      0.4275    9         4.7750     4.7750
n = 32      0.4402    9         4.7714     4.7714
n = 64      0.4434    9         4.7702     4.7702
n = 128     0.4442    9         4.7698     4.7698
n = 256     0.4444    9         4.7697     4.7697
n = 512     0.4444    9         4.7697     4.7697
n = 1024    0.4444    9         -          -
Table 3. IT and CPU for FPPSS, PSS, and HSS for Example 2 when using experimental quasi-optimal parameters.

Method      FPPSS            PSS             HSS
n           IT    CPU        IT    CPU       IT    CPU
n = 8       2     1.280      40    1.132     30    1.368
n = 16      3     1.335      54    1.317     44    1.200
n = 32      3     1.399      73    1.702     65    1.443
n = 64      3     1.383      100   3.305     93    3.493
n = 128     4     1.863      139   22.177    134   27.289
n = 256     5     6.862      196   361.342   191   438.372
n = 512     6     69.461     -     -         -     -
n = 1024    8     1122.085   -     -         -     -
Table 4. The practical optimal values for FPPSS, PSS, and HSS for Example 2.

Method      FPPSS                        PSS        HSS
n           α1             β1            α = β      α = β
n = 8       9.9114 × 10^-6    2.5500     1.4142     1.4142
n = 16      3.7485 × 10^-5    2.7500     2.0000     2.0000
n = 32      1.4448 × 10^-4    2.8678     2.8284     2.8284
n = 64      5.6589 × 10^-4    2.9323     4.0000     4.0000
n = 128     0.0022            2.9675     5.6569     5.6569
n = 256     0.0089            2.9912     8.0000     8.0000
n = 512     0.0351            3.0259     -          -
n = 1024    0.1358            3.1305     -          -
Table 5. IT and CPU for FPPSS, PSS, and HSS for Example 3 when using experimental quasi-optimal parameters.

Method            FPPSS           PSS            HSS
q        n        IT    CPU       IT    CPU      IT    CPU
q = 1    n = 8    8     1.302     20    1.192    28    1.124
         n = 16   15    1.344     39    1.298    47    1.334
         n = 32   34    1.439     81    1.673    93    1.874
         n = 64   62    3.029     164   6.281    203   7.193
         n = 128  104   13.175    -     -        -     -
         n = 256  175   256.191   -     -        -     -
q = 10   n = 8    9     1.385     23    1.268    33    1.227
         n = 16   14    1.464     43    1.259    60    1.436
         n = 32   21    1.501     83    1.647    123   2.041
         n = 64   30    2.119     166   5.409    251   10.950
         n = 128  44    9.310     -     -        -     -
         n = 256  67    152.779   -     -        -     -
q = 100  n = 8    11    1.544     24    1.290    39    1.151
         n = 16   25    1.502     43    1.472    69    1.291
         n = 32   61    1.824     84    1.585    133   1.900
         n = 64   91    4.032     167   6.187    265   10.966
         n = 128  123   25.951    -     -        -     -
         n = 256  182   365.007   -     -        -     -
Table 6. The practical optimal values for FPPSS, PSS, and HSS for Example 3.

Method            FPPSS               PSS        HSS
q        n        α1        β1        α = β      α = β
q = 1    n = 8    0.2730    3.2346    1.3163     1.3163
         n = 16   0.4119    2.3460    0.6010     0.6401
         n = 32   0.4737    2.0918    0.3209     0.3209
         n = 64   0.4930    2.0237    0.1617     0.1617
         n = 128  0.4982    2.0060    -          -
         n = 256  0.4995    2.0015    -          -
q = 10   n = 8    3.2346    3.2346    1.3163     1.3163
         n = 16   2.3460    2.3460    0.6401     0.6401
         n = 32   2.0918    2.0918    0.3209     0.3209
         n = 64   2.0237    2.0237    0.1617     0.1617
         n = 128  2.0060    2.0060    -          -
         n = 256  2.0015    2.0015    -          -
q = 100  n = 8    3.2346    3.2346    1.3163     1.3163
         n = 16   2.3460    2.3460    0.6401     0.6401
         n = 32   2.0918    2.0918    0.3209     0.3209
         n = 64   2.0237    2.0237    0.1617     0.1617
         n = 128  2.0060    2.0060    -          -
         n = 256  2.0015    2.0015    -          -
