Article

Two-Step Simplified Modulus-Based Matrix Splitting Iteration Method for Linear Complementarity Problems

School of Mathematics and Statistics, Zhaoqing University, Zhaoqing 526000, China
Symmetry 2024, 16(9), 1210; https://doi.org/10.3390/sym16091210
Submission received: 14 August 2024 / Revised: 4 September 2024 / Accepted: 5 September 2024 / Published: 14 September 2024

Abstract

A two-step simplified modulus-based matrix splitting iteration method is presented for solving the linear complementarity problem. According to general matrix splitting and special matrix splitting, a general convergence analysis and a specific convergence analysis are described, respectively. Numerical experiments show that the iteration method is effective and that the convergence theories are valid.

1. Introduction

The linear complementarity problem, LCP($A,q$), is to find a real column vector $z$ such that
$$z^{T}(Az+q)=0 \quad\text{with}\quad z\ge 0,\; Az+q\ge 0, \qquad(1)$$
where $A=(a_{ij})\in\mathbb{R}^{n\times n}$ and $q\in\mathbb{R}^{n}$ are known, and the superscript "$T$" denotes the transpose of a vector. Applications of the LCP($A,q$) arise in scientific computing (linear and quadratic programming), engineering (journal-bearing free-boundary problems), and economics (market equilibrium problems); see [1,2,3,4,5,6,7].
To compute the numerical solution of a large, sparse LCP($A,q$), various iterative methods have been proposed in recent years. One of them is the class of modulus-based matrix splitting iteration methods, for which the research literature is abundant. This class includes the modulus-based matrix splitting (MMS) iteration method [8], the general modulus-based matrix splitting (GMMS) iteration method [9], the accelerated modulus-based matrix splitting (AMMS) iteration method [10], and so on. The characteristics of this kind of method are mainly reflected in two aspects: one is to reformulate the LCP as a fixed-point equation by introducing a parameter matrix and a new vector $x$, and the other is to develop an iterative method by combining the reformulation with matrix splitting [8,9,10,11,12,13]. At present, this kind of MMS iteration method has been used to solve other complementarity problems, such as implicit complementarity problems [14,15], horizontal complementarity problems [16,17,18], vertical linear complementarity problems [19,20,21], and nonlinear complementarity problems [22,23,24,25]. There are other efficient iteration methods for solving the LCP($A,q$), such as the fixed-point method [5], the projection algorithm [26], and the modulus-based nonsmooth Newton method [27].
In [2], the authors proposed a new modulus-based matrix splitting (NMMS) iteration method for solving the LCP($A,q$) based on a fixed-point equation. The major feature of this method is that no new vector $x$ is introduced in the iterative process. Compared with the original MMS iteration method [8], the NMMS iteration method has a simpler pattern, and numerical experiments show that it sometimes has advantages over the MMS iteration methods. In view of these characteristics, the NMMS iteration method has been used to deal with implicit complementarity problems [15] and nonlinear complementarity problems [28]. In this paper, we further study the numerical solution of the LCP($A,q$) based on the NMMS iteration method. The contributions of this paper are threefold. First, the two-step simplified modulus-based matrix splitting (TSMMS) iteration method is proposed; it is a novel method, and its convergence conditions are presented in terms of the spectral radius $\rho$ and the matrix 2-norm $\|\cdot\|_2$. Second, some specific convergence conditions related to special matrix splittings are also provided. Third, numerical experiments are presented to verify the TSMMS iteration method and the convergence theory. The NMMS iteration method involves one matrix splitting, while the TSMMS iteration method involves two; the latter is a generalization of the former and is applicable to a wider range of problems.
The rest of this article is arranged as follows. Some preliminaries are introduced in Section 2, and the TSMMS iteration method, as well as its convergence analysis, is presented in Section 3. The corresponding numerical experiments are described in Section 4. Finally, concluding remarks are provided in Section 5.

2. Preliminaries

Some definitions and notations are involved in our discussion, such as the P-matrix, the Z-matrix, the M-matrix, the $H_+$-matrix, the comparison matrix $\langle A\rangle$ of $A$, H-splitting, and H-compatible splitting. Most of them can be found in many works in the literature, such as [2,8,29]. In the following passages, we briefly review some lemmas and the modulus-based matrix splitting iteration method of [2].
Lemma 1 
([30]). Let $A\in\mathbb{R}^{n\times n}$ be an H-matrix, let $D$ be the diagonal part of $A$, and let $A=D-B$. Then, $A$ and $|D|$ are nonsingular, $|A^{-1}|\le\langle A\rangle^{-1}$, and $\rho(|D|^{-1}|B|)<1$, where "$|\cdot|$" denotes the entrywise absolute value.
Lemma 2 
([30]). Let $A\in\mathbb{R}^{n\times n}$ be an M-matrix, and let $B\in\mathbb{R}^{n\times n}$ be a Z-matrix with $A\le B$. Then, $B$ is an M-matrix.
Lemma 3 
([2]). Let $A\in\mathbb{R}^{n\times n}$ with $a_{ij}\ge 0$. If there exists $u\in\mathbb{R}^{n}$ with $u>0$ such that $Au<u$, then $\rho(A)<1$.
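As a quick numerical illustration of Lemma 3 (a sketch with an arbitrarily chosen nonnegative matrix and positive vector, not taken from the paper):

```python
import numpy as np

# A nonnegative matrix A and a positive vector u with A u < u entrywise.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
u = np.array([1.0, 1.0])
assert (A >= 0).all() and (A @ u < u).all()

# Lemma 3 then guarantees rho(A) < 1.
rho = max(abs(np.linalg.eigvals(A)))
print(rho)  # approximately 0.5, which is indeed < 1
```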
For any positive diagonal matrix $\Phi$, the LCP (1) is equivalent to the LCP
$$(\Phi z)^{T}(Az+q)=0 \quad\text{with}\quad \Phi z\ge 0,\; Az+q\ge 0. \qquad(2)$$
It follows that the LCP (2) can be reformulated as a fixed-point equation:
$$(\Phi+A)z = |(A-\Phi)z+q| - q. \qquad(3)$$
Set $A=F-G$; then, Equation (3) can be rewritten as
$$(\Phi+F)z = Gz + |(A-\Phi)z+q| - q. \qquad(4)$$
Additionally, Wu and Li presented the new modulus-based matrix splitting (NMMS) iteration method for solving the LCP (1) in [2]. The iterative pattern of this method is described below.
Method 1. 
(The new modulus-based matrix splitting (NMMS) iteration method)
Let $A=F-G$ be a splitting of $A$, and let the matrix $\Phi+F$ be nonsingular, where $\Phi$ is a positive diagonal matrix. Given any initial vector $z^{(0)}\in\mathbb{R}^{n}$, for $k=0,1,2,\ldots$, compute $z^{(k+1)}$ by solving the linear system
$$(\Phi+F)z^{(k+1)} = Gz^{(k)} + |(A-\Phi)z^{(k)}+q| - q. \qquad(5)$$
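Method 1 is straightforward to implement: each step solves one linear system with the fixed matrix $\Phi+F$. Below is a minimal NumPy sketch; the function name `nmms` and the small test problem are our own illustrative choices, not from the paper:

```python
import numpy as np

def nmms(A, q, Phi, F, tol=1e-10, max_iter=1000):
    """NMMS iteration: (Phi + F) z_{k+1} = G z_k + |(A - Phi) z_k + q| - q, with A = F - G."""
    G = F - A
    z = np.zeros(A.shape[0])
    for k in range(max_iter):
        z_new = np.linalg.solve(Phi + F, G @ z + np.abs((A - Phi) @ z + q) - q)
        if np.linalg.norm(z_new - z, np.inf) < tol:
            return z_new, k + 1
        z = z_new
    return z, max_iter

# Small LCP with known solution z* = (0.5, 0): take F = D (Jacobi-type) and Phi = I.
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-2.0, 1.0])
z, iters = nmms(A, q, Phi=np.eye(2), F=np.diag(np.diag(A)))
print(z)  # approximately [0.5, 0.0]
```

At the returned $z$, one can check $z\ge 0$, $Az+q\ge 0$, and $z^{T}(Az+q)=0$ directly, confirming that the fixed point solves the LCP.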
In [2], the authors explained the difference between the NMMS iteration method and the MMS iteration method [8], and they compared the two methods through numerical experiments.

3. Main Results

In this section, based on the idea of the NMMS iteration method, that is, Method 1, we consider two splittings of A. First, we propose the two-step simplified modulus-based matrix splitting (TSMMS) iteration method, as well as its particular cases in turn. Then, we analyze the convergence.
The TSMMS iteration method. Let $A=F_1-G_1=F_2-G_2$ be two splittings of $A$, with $\Phi_1+F_1$ and $\Phi_2+F_2$ being nonsingular, where $\Phi_1,\Phi_2$ are two positive diagonal matrices. Given any initial vector $z^{(0)}\in\mathbb{R}^{n}$, for $k=0,1,2,\ldots$, compute $z^{(k+1)}$ by solving the following two linear systems:
$$\begin{aligned}(\Phi_1+F_1)z^{(k+1)} &= G_1 z^{(k+\frac12)} + \big|(A-\Phi_1)z^{(k+\frac12)}+q\big| - q,\\ (\Phi_2+F_2)z^{(k+\frac12)} &= G_2 z^{(k)} + \big|(A-\Phi_2)z^{(k)}+q\big| - q.\end{aligned} \qquad(6)$$
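A minimal NumPy sketch of the TSMMS iteration (6), with an illustrative test problem (the Gauss–Seidel-type splittings $F_1=D-L$, $F_2=D-U$ and the choice $\Phi_1=\Phi_2=D$ are our own, not prescribed by the paper):

```python
import numpy as np

def tsmms(A, q, Phi1, F1, Phi2, F2, tol=1e-10, max_iter=1000):
    """TSMMS iteration (6): one half-sweep with (Phi2, F2), then one with (Phi1, F1)."""
    G1, G2 = F1 - A, F2 - A
    z = np.zeros(A.shape[0])
    for k in range(max_iter):
        z_half = np.linalg.solve(Phi2 + F2, G2 @ z + np.abs((A - Phi2) @ z + q) - q)
        z_new = np.linalg.solve(Phi1 + F1, G1 @ z_half + np.abs((A - Phi1) @ z_half + q) - q)
        if np.linalg.norm(z_new - z, np.inf) < tol:
            return z_new, k + 1
        z = z_new
    return z, max_iter

# Small LCP with known solution z* = (0.5, 0).
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-2.0, 1.0])
D = np.diag(np.diag(A))
L, U = -np.tril(A, -1), -np.triu(A, 1)   # A = D - L - U
z, iters = tsmms(A, q, D, D - L, D, D - U)
print(z)  # approximately [0.5, 0.0]
```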
Compared with the two-step modulus-based matrix splitting (TMMS) iteration method suggested in [11], this iteration form is simpler. We call it the two-step simplified modulus-based matrix splitting (TSMMS) iteration method.
Accelerated over-relaxation (AOR) splitting is a particular form of matrix splitting, defined as
$$A = \frac{1}{\omega}(D-\gamma L) - \frac{1}{\omega}\big[(1-\omega)D + (\omega-\gamma)L + \omega U\big],$$
where $\omega>0$ and $\gamma\ge 0$, $D=\operatorname{diag}(A)$ is the diagonal matrix of $A$, and $L$ and $U$ are the strictly lower triangular and strictly upper triangular parts of $A$ (so that $A=D-L-U$), respectively. For the TSMMS iteration method (6), if we consider AOR splitting, that is,
$$\begin{aligned}F_1 &= \frac{1}{\omega_1}(D-\gamma_1 L), & G_1 &= \frac{1}{\omega_1}\big[(1-\omega_1)D + (\omega_1-\gamma_1)L + \omega_1 U\big],\\ F_2 &= \frac{1}{\omega_2}(D-\gamma_2 U), & G_2 &= \frac{1}{\omega_2}\big[(1-\omega_2)D + (\omega_2-\gamma_2)U + \omega_2 L\big],\end{aligned} \qquad(7)$$
then (6) yields the two-step simplified modulus-based accelerated over-relaxation (TSMAOR) iteration method
$$\begin{aligned}(\omega_1\Phi_1+D-\gamma_1 L)z^{(k+1)} &= \big[(1-\omega_1)D+(\omega_1-\gamma_1)L+\omega_1 U\big]z^{(k+\frac12)} + \omega_1\big|(A-\Phi_1)z^{(k+\frac12)}+q\big| - \omega_1 q,\\ (\omega_2\Phi_2+D-\gamma_2 U)z^{(k+\frac12)} &= \big[(1-\omega_2)D+(\omega_2-\gamma_2)U+\omega_2 L\big]z^{(k)} + \omega_2\big|(A-\Phi_2)z^{(k)}+q\big| - \omega_2 q.\end{aligned} \qquad(8)$$
Specifically, when $\omega_i=\gamma_i$ for $i=1,2$, the TSMAOR iteration method reduces to the two-step simplified modulus-based successive over-relaxation (TSMSOR) iteration method:
$$\begin{aligned}(\omega_1\Phi_1+D-\omega_1 L)z^{(k+1)} &= \big[(1-\omega_1)D+\omega_1 U\big]z^{(k+\frac12)} + \omega_1\big|(A-\Phi_1)z^{(k+\frac12)}+q\big| - \omega_1 q,\\ (\omega_2\Phi_2+D-\omega_2 U)z^{(k+\frac12)} &= \big[(1-\omega_2)D+\omega_2 L\big]z^{(k)} + \omega_2\big|(A-\Phi_2)z^{(k)}+q\big| - \omega_2 q.\end{aligned} \qquad(9)$$
When $\omega_i=\gamma_i=1$ for $i=1,2$, it reduces to the two-step simplified modulus-based Gauss–Seidel (TSMGS) iteration method:
$$\begin{aligned}(\Phi_1+D-L)z^{(k+1)} &= Uz^{(k+\frac12)} + \big|(A-\Phi_1)z^{(k+\frac12)}+q\big| - q,\\ (\Phi_2+D-U)z^{(k+\frac12)} &= Lz^{(k)} + \big|(A-\Phi_2)z^{(k)}+q\big| - q.\end{aligned} \qquad(10)$$
Finally, when $\omega_i=1$ and $\gamma_i=0$ for $i=1,2$, it reduces to the two-step simplified modulus-based Jacobi (TSMJ) iteration method:
$$\begin{aligned}(\Phi_1+D)z^{(k+1)} &= (L+U)z^{(k+\frac12)} + \big|(A-\Phi_1)z^{(k+\frac12)}+q\big| - q,\\ (\Phi_2+D)z^{(k+\frac12)} &= (U+L)z^{(k)} + \big|(A-\Phi_2)z^{(k)}+q\big| - q.\end{aligned} \qquad(11)$$
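The four variants above differ only in the splitting matrices. A small sketch that builds the two AOR splittings above from $D$, $L$, and $U$ and checks that both are genuine splittings of $A$ (the matrix and parameter values are arbitrary illustrations):

```python
import numpy as np

def aor_splittings(A, omega1, gamma1, omega2, gamma2):
    """Build the two AOR splittings F1 - G1 = F2 - G2 = A used by TSMAOR."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)   # strictly lower part, with A = D - L - U
    U = -np.triu(A, 1)    # strictly upper part
    F1 = (D - gamma1 * L) / omega1
    G1 = ((1 - omega1) * D + (omega1 - gamma1) * L + omega1 * U) / omega1
    F2 = (D - gamma2 * U) / omega2
    G2 = ((1 - omega2) * D + (omega2 - gamma2) * U + omega2 * L) / omega2
    return F1, G1, F2, G2

A = np.array([[4.0, -1.0], [-2.0, 5.0]])
F1, G1, F2, G2 = aor_splittings(A, omega1=1.2, gamma1=1.0, omega2=0.9, gamma2=0.8)
# Both pairs are genuine splittings of A.
print(np.allclose(A, F1 - G1), np.allclose(A, F2 - G2))  # True True
```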
Next, we discuss the general convergence conditions for the TSMMS iteration method, and then we discuss some concrete convergence conditions related to special matrix splittings in terms of the spectral radius $\rho$ and the matrix 2-norm $\|\cdot\|_2$.
Theorem 1. 
Let $A$ be a P-matrix, and let $A=F_1-G_1=F_2-G_2$ be two splittings of $A$, with $\Phi_1+F_1$ and $\Phi_2+F_2$ being nonsingular, where $\Phi_1,\Phi_2$ are two positive diagonal matrices. If any of the following conditions holds, then, for any initial vector $z^{(0)}\in\mathbb{R}^{n}$, the iteration sequence $\{z^{(k)}\}_{k=1}^{+\infty}$ generated by the TSMMS iteration method (6) converges to the unique solution $z^{(*)}$ of the LCP($A,q$):
$$\mathrm{(i)}\quad \rho\Big(\big(\big|(\Phi_1+F_1)^{-1}G_1\big| + \big|(\Phi_1+F_1)^{-1}\big|\,|A-\Phi_1|\big)\big(\big|(\Phi_2+F_2)^{-1}G_2\big| + \big|(\Phi_2+F_2)^{-1}\big|\,|A-\Phi_2|\big)\Big) < 1, \qquad(12)$$
$$\mathrm{(ii)}\quad \rho\Big(\big|(\Phi_1+F_1)^{-1}\big|\big(|G_1|+|A-\Phi_1|\big)\,\big|(\Phi_2+F_2)^{-1}\big|\big(|G_2|+|A-\Phi_2|\big)\Big) < 1, \qquad(13)$$
$$\mathrm{(iii)}\quad \rho\Big(\big|(\Phi_1+F_1)^{-1}\big|\big(2|G_1|+|\Phi_1-F_1|\big)\,\big|(\Phi_2+F_2)^{-1}\big|\big(2|G_2|+|\Phi_2-F_2|\big)\Big) < 1. \qquad(14)$$
Proof. 
Since $A$ is a P-matrix, the LCP($A,q$) has a unique solution; let $z^{(*)}$ be this solution. Then,
$$\begin{aligned}(\Phi_1+F_1)z^{(*)} &= G_1 z^{(*)} + \big|(A-\Phi_1)z^{(*)}+q\big| - q,\\ (\Phi_2+F_2)z^{(*)} &= G_2 z^{(*)} + \big|(A-\Phi_2)z^{(*)}+q\big| - q.\end{aligned}$$
Combining the previous equations with (6), we obtain
$$\begin{aligned}z^{(k+1)}-z^{(*)} &= (\Phi_1+F_1)^{-1}G_1\big(z^{(k+\frac12)}-z^{(*)}\big) + (\Phi_1+F_1)^{-1}\Big(\big|(A-\Phi_1)z^{(k+\frac12)}+q\big| - \big|(A-\Phi_1)z^{(*)}+q\big|\Big),\\ z^{(k+\frac12)}-z^{(*)} &= (\Phi_2+F_2)^{-1}G_2\big(z^{(k)}-z^{(*)}\big) + (\Phi_2+F_2)^{-1}\Big(\big|(A-\Phi_2)z^{(k)}+q\big| - \big|(A-\Phi_2)z^{(*)}+q\big|\Big).\end{aligned}$$
Hence,
$$\begin{aligned}\big|z^{(k+1)}-z^{(*)}\big| &\le \big|(\Phi_1+F_1)^{-1}G_1\big|\,\big|z^{(k+\frac12)}-z^{(*)}\big| + \big|(\Phi_1+F_1)^{-1}\big|\,|A-\Phi_1|\,\big|z^{(k+\frac12)}-z^{(*)}\big|\\ &= \Big(\big|(\Phi_1+F_1)^{-1}G_1\big| + \big|(\Phi_1+F_1)^{-1}\big|\,|A-\Phi_1|\Big)\big|z^{(k+\frac12)}-z^{(*)}\big|\\ &\le \big|(\Phi_1+F_1)^{-1}\big|\big(|G_1|+|A-\Phi_1|\big)\big|z^{(k+\frac12)}-z^{(*)}\big|\\ &\le \big|(\Phi_1+F_1)^{-1}\big|\big(2|G_1|+|\Phi_1-F_1|\big)\big|z^{(k+\frac12)}-z^{(*)}\big|,\end{aligned}$$
and
$$\begin{aligned}\big|z^{(k+\frac12)}-z^{(*)}\big| &\le \big|(\Phi_2+F_2)^{-1}G_2\big|\,\big|z^{(k)}-z^{(*)}\big| + \big|(\Phi_2+F_2)^{-1}\big|\,|A-\Phi_2|\,\big|z^{(k)}-z^{(*)}\big|\\ &= \Big(\big|(\Phi_2+F_2)^{-1}G_2\big| + \big|(\Phi_2+F_2)^{-1}\big|\,|A-\Phi_2|\Big)\big|z^{(k)}-z^{(*)}\big|\\ &\le \big|(\Phi_2+F_2)^{-1}\big|\big(|G_2|+|A-\Phi_2|\big)\big|z^{(k)}-z^{(*)}\big|\\ &\le \big|(\Phi_2+F_2)^{-1}\big|\big(2|G_2|+|\Phi_2-F_2|\big)\big|z^{(k)}-z^{(*)}\big|.\end{aligned}$$
Then,
$$\begin{aligned}\big|z^{(k+1)}-z^{(*)}\big| &\le \Big(\big|(\Phi_1+F_1)^{-1}G_1\big| + \big|(\Phi_1+F_1)^{-1}\big|\,|A-\Phi_1|\Big)\Big(\big|(\Phi_2+F_2)^{-1}G_2\big| + \big|(\Phi_2+F_2)^{-1}\big|\,|A-\Phi_2|\Big)\big|z^{(k)}-z^{(*)}\big|,\\ \big|z^{(k+1)}-z^{(*)}\big| &\le \big|(\Phi_1+F_1)^{-1}\big|\big(|G_1|+|A-\Phi_1|\big)\,\big|(\Phi_2+F_2)^{-1}\big|\big(|G_2|+|A-\Phi_2|\big)\big|z^{(k)}-z^{(*)}\big|,\\ \big|z^{(k+1)}-z^{(*)}\big| &\le \big|(\Phi_1+F_1)^{-1}\big|\big(2|G_1|+|\Phi_1-F_1|\big)\,\big|(\Phi_2+F_2)^{-1}\big|\big(2|G_2|+|\Phi_2-F_2|\big)\big|z^{(k)}-z^{(*)}\big|.\end{aligned}$$
So, from the above three inequalities, we know that, if any of the conditions (12)–(14) in Theorem 1 holds, the iteration sequence $\{z^{(k)}\}_{k=1}^{+\infty}$ generated by the TSMMS iteration method converges to $z^{(*)}$ for any initial vector $z^{(0)}\in\mathbb{R}^{n}$. □
From the proof of Theorem 1, if we consider the matrix 2-norm $\|\cdot\|_2$, the following theorem can be obtained easily.
Theorem 2. 
Let $A$ be a P-matrix, and let $A=F_1-G_1=F_2-G_2$ be two splittings of $A$, with $\Phi_1+F_1$ and $\Phi_2+F_2$ being nonsingular, where $\Phi_1,\Phi_2$ are two positive diagonal matrices. If any of the following conditions holds, then, for any initial vector $z^{(0)}\in\mathbb{R}^{n}$, the iteration sequence $\{z^{(k)}\}_{k=1}^{+\infty}$ generated by the TSMMS iteration method (6) converges to the unique solution $z^{(*)}$ of the LCP($A,q$):
$$\mathrm{(i)}\quad \big(\|(\Phi_1+F_1)^{-1}G_1\|_2 + \|(\Phi_1+F_1)^{-1}\|_2\,\|A-\Phi_1\|_2\big)\big(\|(\Phi_2+F_2)^{-1}G_2\|_2 + \|(\Phi_2+F_2)^{-1}\|_2\,\|A-\Phi_2\|_2\big) < 1, \qquad(15)$$
$$\mathrm{(ii)}\quad \|(\Phi_1+F_1)^{-1}\|_2\big(\|G_1\|_2+\|A-\Phi_1\|_2\big)\,\|(\Phi_2+F_2)^{-1}\|_2\big(\|G_2\|_2+\|A-\Phi_2\|_2\big) < 1, \qquad(16)$$
$$\mathrm{(iii)}\quad \|(\Phi_1+F_1)^{-1}\|_2\big(2\|G_1\|_2+\|\Phi_1-F_1\|_2\big)\,\|(\Phi_2+F_2)^{-1}\|_2\big(2\|G_2\|_2+\|\Phi_2-F_2\|_2\big) < 1. \qquad(17)$$
Based on Lemma 1 and the spectral radius theory of nonnegative matrices, if $F_i$ in the splitting $A=F_i-G_i$ satisfies the condition that $\Phi_i+F_i$ is an H-matrix for $i=1,2$, then, since
$$\big|(\Phi_1+F_1)^{-1}\big|\big(|G_1|+|A-\Phi_1|\big)\,\big|(\Phi_2+F_2)^{-1}\big|\big(|G_2|+|A-\Phi_2|\big) \le \langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big),$$
the second condition (ii) in Theorem 1 can be replaced by the following:
$$\rho\Big(\langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big)\Big) < 1. \qquad(18)$$
For this case, we arrive at the following conclusion.
Corollary 1. 
Let $A$ be a P-matrix, and let $A=F_1-G_1=F_2-G_2$ be two H-splittings of $A$. If
$$\rho\Big(\langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big)\Big) < 1 \qquad(19)$$
holds, for any initial vector z ( 0 ) R n , the iteration sequence { z ( k ) } k = 1 + generated using the TSMMS iteration method (6) converges to the unique solution z ( * ) of the LCP( A , q ).
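The spectral-radius condition of Corollary 1 is easy to test numerically for a concrete splitting. The sketch below checks it for Gauss–Seidel-type splittings of a small M-matrix with the illustrative choice $\Phi_1=\Phi_2=D$; in this example $\Phi+F_i$ is itself an M-matrix, so the comparison matrix $\langle\Phi+F_i\rangle$ coincides with $\Phi+F_i$, but we compute it explicitly anyway:

```python
import numpy as np

def comparison(M):
    """Comparison matrix <M>: |m_ii| on the diagonal, -|m_ij| off the diagonal."""
    C = -np.abs(M)
    np.fill_diagonal(C, np.abs(np.diag(M)))
    return C

A = np.array([[4.0, -1.0], [-2.0, 5.0]])   # an M-matrix, hence a P-matrix
D = np.diag(np.diag(A))
L, U = -np.tril(A, -1), -np.triu(A, 1)
F1, F2 = D - L, D - U                       # H-splittings of A
G1, G2 = F1 - A, F2 - A
Phi = D

T1 = np.linalg.inv(comparison(Phi + F1)) @ (np.abs(G1) + np.abs(A - Phi))
T2 = np.linalg.inv(comparison(Phi + F2)) @ (np.abs(G2) + np.abs(A - Phi))
rho = max(abs(np.linalg.eigvals(T1 @ T2)))
print(rho)  # approximately 0.1 < 1, so Corollary 1 guarantees convergence here
```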
Proof. 
Since $A=F_i-G_i$ is an H-splitting, we know that $\langle F_i\rangle-|G_i|$ is an M-matrix. Combining this with the fact that $A$ has positive diagonal elements, we can easily show that $F_i$ has positive diagonal elements. In addition, from
$$\langle F_i\rangle \ge \langle F_i\rangle - |G_i|$$
and Lemma 2, we know that $\langle F_i\rangle$ is an M-matrix and that $\langle\Phi_i+F_i\rangle = \Phi_i+\langle F_i\rangle$ is an M-matrix. It follows that $\Phi_i+F_i$ is an $H_+$-matrix. Therefore,
$$\big|(\Phi_i+F_i)^{-1}\big| \le \langle\Phi_i+F_i\rangle^{-1}, \quad i=1,2.$$
Thus, the second condition (13) in Theorem 1 can be changed as in (18). □
Next, we set Φ 1 = Φ 2 in specific cases of the TSMMS iteration method, and we discuss the concrete convergence conditions related to (17) in Theorem 2.
Theorem 3. 
Let $A$ be a P-matrix, and let $F_i$ in $A=F_i-G_i$ be a symmetric positive definite matrix for $i=1,2$. Denote by $\lambda_1^{(i)}$ and $\lambda_n^{(i)}$ the smallest and largest eigenvalues of $F_i$, respectively, and denote $\tau^{(i)}=\|G_i\|_2$ for $i=1,2$; set $\Phi_1=\Phi_2=\phi I$ with $\phi>0$. Then, if any of the following conditions holds, for any initial vector $z^{(0)}\in\mathbb{R}^{n}$, the iteration sequence $\{z^{(k)}\}_{k=1}^{+\infty}$ generated by the TSMMS iteration method (6) converges to the unique solution $z^{(*)}$ of the LCP (1):
$$\mathrm{(i)}\quad \text{when } \Upsilon = \frac{\big(2\tau^{(1)}+\lambda_n^{(1)}\big)\big(2\tau^{(2)}+\lambda_n^{(2)}\big)-\lambda_1^{(1)}\lambda_1^{(2)}}{2\tau^{(1)}+\lambda_n^{(1)}+\lambda_1^{(1)}+2\tau^{(2)}+\lambda_n^{(2)}+\lambda_1^{(2)}} < \min\Big\{\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\,\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\Big\},\quad \Upsilon < \phi < \min\Big\{\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\,\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\Big\};$$
$$\mathrm{(ii)}\quad \text{let } \Gamma = \frac{\big(2\tau^{(1)}-\lambda_1^{(1)}\big)\big(2\tau^{(2)}-\lambda_1^{(2)}\big)-\lambda_1^{(1)}\lambda_1^{(2)}}{2\lambda_1^{(1)}+2\lambda_1^{(2)}-2\tau^{(1)}-2\tau^{(2)}} \text{ and } \Theta = \lambda_1^{(1)}+\lambda_1^{(2)}-\tau^{(1)}-\tau^{(2)};\; \text{when } \Theta>0,\; \phi > \max\Big\{\Gamma,\,\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\,\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\Big\};\; \text{when } \Theta<0,\; \max\Big\{\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\,\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\Big\} < \phi < \Gamma;$$
$$\mathrm{(iii)}\quad \text{let } \Delta_1 = \big(\lambda_1^{(1)}+2\tau^{(2)}-\lambda_n^{(1)}-2\tau^{(1)}\big)^2 - 8\big[\lambda_1^{(1)}\lambda_1^{(2)}-\big(2\tau^{(1)}+\lambda_n^{(1)}\big)\big(2\tau^{(2)}-\lambda_1^{(2)}\big)\big];\; \phi > \frac{-\big(\lambda_1^{(1)}+2\tau^{(2)}-\lambda_n^{(1)}-2\tau^{(1)}\big)+\sqrt{\Delta_1}}{4},\quad \tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\le\phi<\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2};$$
$$\mathrm{(iv)}\quad \text{let } \Delta_2 = \big(\lambda_1^{(2)}+2\tau^{(1)}-\lambda_n^{(2)}-2\tau^{(2)}\big)^2 - 8\big[\lambda_1^{(1)}\lambda_1^{(2)}-\big(2\tau^{(2)}+\lambda_n^{(2)}\big)\big(2\tau^{(1)}-\lambda_1^{(1)}\big)\big];\; \phi > \frac{-\big(\lambda_1^{(2)}+2\tau^{(1)}-\lambda_n^{(2)}-2\tau^{(2)}\big)+\sqrt{\Delta_2}}{4},\quad \tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2}\le\phi<\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}.$$
Proof. 
Since $\Phi_1=\Phi_2=\phi I$, the left-hand side of (17) in Theorem 2 can be reformulated as
$$\begin{aligned}&\|(\Phi_1+F_1)^{-1}\|_2\big(2\|G_1\|_2+\|\Phi_1-F_1\|_2\big)\,\|(\Phi_2+F_2)^{-1}\|_2\big(2\|G_2\|_2+\|\Phi_2-F_2\|_2\big)\\ &\quad= \|(\phi I+F_1)^{-1}\|_2\big(2\|G_1\|_2+\|\phi I-F_1\|_2\big)\,\|(\phi I+F_2)^{-1}\|_2\big(2\|G_2\|_2+\|\phi I-F_2\|_2\big)\\ &\quad= \frac{1}{\phi+\lambda_1^{(1)}}\big(2\tau^{(1)}+\|\phi I-F_1\|_2\big)\cdot\frac{1}{\phi+\lambda_1^{(2)}}\big(2\tau^{(2)}+\|\phi I-F_2\|_2\big)\\ &\quad= \frac{2\tau^{(1)}+\max\{|\phi-\lambda_1^{(1)}|,|\phi-\lambda_n^{(1)}|\}}{\phi+\lambda_1^{(1)}}\cdot\frac{2\tau^{(2)}+\max\{|\phi-\lambda_1^{(2)}|,|\phi-\lambda_n^{(2)}|\}}{\phi+\lambda_1^{(2)}}\end{aligned}$$
$$= \begin{cases}\dfrac{2\tau^{(1)}+\lambda_n^{(1)}-\phi}{\phi+\lambda_1^{(1)}}\cdot\dfrac{2\tau^{(2)}+\lambda_n^{(2)}-\phi}{\phi+\lambda_1^{(2)}}, & \text{when } \phi<\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2} \text{ and } \phi<\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2},\\[2mm] \dfrac{2\tau^{(1)}+\phi-\lambda_1^{(1)}}{\phi+\lambda_1^{(1)}}\cdot\dfrac{2\tau^{(2)}+\phi-\lambda_1^{(2)}}{\phi+\lambda_1^{(2)}}, & \text{when } \phi\ge\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2} \text{ and } \phi\ge\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2},\\[2mm] \dfrac{2\tau^{(1)}+\lambda_n^{(1)}-\phi}{\phi+\lambda_1^{(1)}}\cdot\dfrac{2\tau^{(2)}+\phi-\lambda_1^{(2)}}{\phi+\lambda_1^{(2)}}, & \text{when } \tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\le\phi<\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\\[2mm] \dfrac{2\tau^{(1)}+\phi-\lambda_1^{(1)}}{\phi+\lambda_1^{(1)}}\cdot\dfrac{2\tau^{(2)}+\lambda_n^{(2)}-\phi}{\phi+\lambda_1^{(2)}}, & \text{when } \tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2}\le\phi<\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}.\end{cases}$$
Thus, the following inequalities need to be solved:
$$\mathrm{(I)}\;\begin{cases}\dfrac{2\tau^{(1)}+\lambda_n^{(1)}-\phi}{\phi+\lambda_1^{(1)}}\cdot\dfrac{2\tau^{(2)}+\lambda_n^{(2)}-\phi}{\phi+\lambda_1^{(2)}}<1,\\ \phi<\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\;\phi<\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2},\end{cases}\qquad \mathrm{(II)}\;\begin{cases}\dfrac{2\tau^{(1)}+\phi-\lambda_1^{(1)}}{\phi+\lambda_1^{(1)}}\cdot\dfrac{2\tau^{(2)}+\phi-\lambda_1^{(2)}}{\phi+\lambda_1^{(2)}}<1,\\ \phi\ge\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\;\phi\ge\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2},\end{cases}$$
$$\mathrm{(III)}\;\begin{cases}\dfrac{2\tau^{(1)}+\lambda_n^{(1)}-\phi}{\phi+\lambda_1^{(1)}}\cdot\dfrac{2\tau^{(2)}+\phi-\lambda_1^{(2)}}{\phi+\lambda_1^{(2)}}<1,\\ \tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\le\phi<\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\end{cases}\qquad \mathrm{(IV)}\;\begin{cases}\dfrac{2\tau^{(1)}+\phi-\lambda_1^{(1)}}{\phi+\lambda_1^{(1)}}\cdot\dfrac{2\tau^{(2)}+\lambda_n^{(2)}-\phi}{\phi+\lambda_1^{(2)}}<1,\\ \tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2}\le\phi<\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}.\end{cases}$$
Accordingly, we can obtain the following relations for $\phi$:
$$\mathrm{(i)}\quad \Upsilon < \phi < \min\Big\{\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\,\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\Big\},\quad \Upsilon < \min\Big\{\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\,\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\Big\},\quad \Upsilon = \frac{\big(2\tau^{(1)}+\lambda_n^{(1)}\big)\big(2\tau^{(2)}+\lambda_n^{(2)}\big)-\lambda_1^{(1)}\lambda_1^{(2)}}{2\tau^{(1)}+\lambda_n^{(1)}+\lambda_1^{(1)}+2\tau^{(2)}+\lambda_n^{(2)}+\lambda_1^{(2)}};$$
$$\mathrm{(ii)}\quad \phi > \max\Big\{\Gamma,\,\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\,\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\Big\} \;\text{ if } \Theta = \lambda_1^{(1)}+\lambda_1^{(2)}-\tau^{(1)}-\tau^{(2)} > 0,$$
$$\phantom{\mathrm{(ii)}}\quad \max\Big\{\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\,\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\Big\} < \phi < \Gamma \;\text{ if } \Theta < 0,\quad \Gamma = \frac{\big(2\tau^{(1)}-\lambda_1^{(1)}\big)\big(2\tau^{(2)}-\lambda_1^{(2)}\big)-\lambda_1^{(1)}\lambda_1^{(2)}}{2\lambda_1^{(1)}+2\lambda_1^{(2)}-2\tau^{(1)}-2\tau^{(2)}};$$
$$\mathrm{(iii)}\quad \Delta_1 = \big(\lambda_1^{(1)}+2\tau^{(2)}-\lambda_n^{(1)}-2\tau^{(1)}\big)^2 - 8\big[\lambda_1^{(1)}\lambda_1^{(2)}-\big(2\tau^{(1)}+\lambda_n^{(1)}\big)\big(2\tau^{(2)}-\lambda_1^{(2)}\big)\big] \ge 0,$$
$$\phantom{\mathrm{(iii)}}\quad \phi < \frac{-\big(\lambda_1^{(1)}+2\tau^{(2)}-\lambda_n^{(1)}-2\tau^{(1)}\big)-\sqrt{\Delta_1}}{4} \;\text{ or }\; \phi > \frac{-\big(\lambda_1^{(1)}+2\tau^{(2)}-\lambda_n^{(1)}-2\tau^{(1)}\big)+\sqrt{\Delta_1}}{4},\quad \tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\le\phi<\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2};$$
$$\mathrm{(iv)}\quad \Delta_2 = \big(\lambda_1^{(2)}+2\tau^{(1)}-\lambda_n^{(2)}-2\tau^{(2)}\big)^2 - 8\big[\lambda_1^{(1)}\lambda_1^{(2)}-\big(2\tau^{(2)}+\lambda_n^{(2)}\big)\big(2\tau^{(1)}-\lambda_1^{(1)}\big)\big] \ge 0,$$
$$\phantom{\mathrm{(iv)}}\quad \phi < \frac{-\big(\lambda_1^{(2)}+2\tau^{(1)}-\lambda_n^{(2)}-2\tau^{(2)}\big)-\sqrt{\Delta_2}}{4} \;\text{ or }\; \phi > \frac{-\big(\lambda_1^{(2)}+2\tau^{(1)}-\lambda_n^{(2)}-2\tau^{(2)}\big)+\sqrt{\Delta_2}}{4},\quad \tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2}\le\phi<\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}.$$
According to (i)–(iv), when noting that both of the following conditions hold, we know that Theorem 3 is established:
$$\sqrt{\Delta_1} - \big|\lambda_1^{(1)}+2\tau^{(2)}-\lambda_n^{(1)}-2\tau^{(1)}\big| \ge 0 \quad\text{and}\quad \sqrt{\Delta_2} - \big|\lambda_1^{(2)}+2\tau^{(1)}-\lambda_n^{(2)}-2\tau^{(2)}\big| \ge 0. \;\square$$
We note that the admissible sets in the four conditions of Theorem 3 may be empty. Moreover, it can be seen from the proof that, when solving a set of inequalities, some of them admit equality while others do not, so the values at the endpoints should be discussed case by case. For example, in the first case of condition (ii), when
$$\Gamma = \frac{\big(2\tau^{(1)}-\lambda_1^{(1)}\big)\big(2\tau^{(2)}-\lambda_1^{(2)}\big)-\lambda_1^{(1)}\lambda_1^{(2)}}{2\lambda_1^{(1)}+2\lambda_1^{(2)}-2\tau^{(1)}-2\tau^{(2)}} < \max\Big\{\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\,\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\Big\},$$
the domain of $\phi$ should be modified as
$$\phi \ge \max\Big\{\Gamma,\,\tfrac{\lambda_1^{(1)}+\lambda_n^{(1)}}{2},\,\tfrac{\lambda_1^{(2)}+\lambda_n^{(2)}}{2}\Big\}.$$
For Theorem 3, there is a particular case, that is, Corollary 2, as follows.
Corollary 2. 
Let $A$ be a P-matrix, and let $F_i$ in $A=F_i-G_i$ be a positive scalar matrix, $F_i=s^{(i)}I$ with $s^{(i)}>0$, for $i=1,2$. Denote $\tau^{(i)}=\|G_i\|_2$ for $i=1,2$, and set $\Phi_1=\Phi_2=\phi I$ with $\phi>0$. Then, if any of the following conditions holds, for any initial vector $z^{(0)}\in\mathbb{R}^{n}$, the iteration sequence $\{z^{(k)}\}_{k=1}^{+\infty}$ generated by the TSMMS iteration method (6) converges to the unique solution $z^{(*)}$ of the LCP($A,q$):
$$\mathrm{(i)}\quad \text{when } \Upsilon = \frac{\big(2\tau^{(1)}+s^{(1)}\big)\big(2\tau^{(2)}+s^{(2)}\big)-s^{(1)}s^{(2)}}{2\big(\tau^{(1)}+s^{(1)}+\tau^{(2)}+s^{(2)}\big)} < \min\{s^{(1)},s^{(2)}\},\quad \Upsilon < \phi < \min\{s^{(1)},s^{(2)}\};$$
$$\mathrm{(ii)}\quad \text{when } s^{(1)}+s^{(2)}-\tau^{(1)}-\tau^{(2)}>0,\; \phi > \max\Big\{\frac{\big(2\tau^{(1)}-s^{(1)}\big)\big(2\tau^{(2)}-s^{(2)}\big)-s^{(1)}s^{(2)}}{2\big(s^{(1)}+s^{(2)}-\tau^{(1)}-\tau^{(2)}\big)},\,s^{(1)},\,s^{(2)}\Big\};$$
$$\phantom{\mathrm{(ii)}}\quad \text{when } s^{(1)}+s^{(2)}-\tau^{(1)}-\tau^{(2)}<0,\; \max\{s^{(1)},s^{(2)}\} < \phi < \frac{\big(2\tau^{(1)}-s^{(1)}\big)\big(2\tau^{(2)}-s^{(2)}\big)-s^{(1)}s^{(2)}}{2\big(s^{(1)}+s^{(2)}-\tau^{(1)}-\tau^{(2)}\big)};$$
$$\mathrm{(iii)}\quad \text{let } \Delta_1 = \big(2\tau^{(2)}-2\tau^{(1)}\big)^2 - 8\big[s^{(1)}s^{(2)}-\big(2\tau^{(1)}+s^{(1)}\big)\big(2\tau^{(2)}-s^{(2)}\big)\big];\; \phi > \frac{-\big(2\tau^{(2)}-2\tau^{(1)}\big)+\sqrt{\Delta_1}}{4},\quad s^{(2)}\le\phi<s^{(1)};$$
$$\mathrm{(iv)}\quad \text{let } \Delta_2 = \big(2\tau^{(1)}-2\tau^{(2)}\big)^2 - 8\big[s^{(1)}s^{(2)}-\big(2\tau^{(2)}+s^{(2)}\big)\big(2\tau^{(1)}-s^{(1)}\big)\big];\; \phi > \frac{-\big(2\tau^{(1)}-2\tau^{(2)}\big)+\sqrt{\Delta_2}}{4},\quad s^{(1)}\le\phi<s^{(2)}.$$
Proof. 
Similar to the proof of Theorem 3, the left-hand side of (17) in Theorem 2 can be reformulated as
$$\begin{aligned}&\|(\Phi_1+F_1)^{-1}\|_2\big(2\|G_1\|_2+\|\Phi_1-F_1\|_2\big)\,\|(\Phi_2+F_2)^{-1}\|_2\big(2\|G_2\|_2+\|\Phi_2-F_2\|_2\big)\\ &\quad= \|(\phi I+s^{(1)}I)^{-1}\|_2\big(2\|G_1\|_2+\|\phi I-s^{(1)}I\|_2\big)\,\|(\phi I+s^{(2)}I)^{-1}\|_2\big(2\|G_2\|_2+\|\phi I-s^{(2)}I\|_2\big)\\ &\quad= \frac{2\tau^{(1)}+|\phi-s^{(1)}|}{\phi+s^{(1)}}\cdot\frac{2\tau^{(2)}+|\phi-s^{(2)}|}{\phi+s^{(2)}}\end{aligned}$$
$$= \begin{cases}\dfrac{2\tau^{(1)}+s^{(1)}-\phi}{\phi+s^{(1)}}\cdot\dfrac{2\tau^{(2)}+s^{(2)}-\phi}{\phi+s^{(2)}}, & \text{when } \phi<s^{(1)} \text{ and } \phi<s^{(2)},\\[2mm] \dfrac{2\tau^{(1)}+\phi-s^{(1)}}{\phi+s^{(1)}}\cdot\dfrac{2\tau^{(2)}+\phi-s^{(2)}}{\phi+s^{(2)}}, & \text{when } \phi\ge s^{(1)} \text{ and } \phi\ge s^{(2)},\\[2mm] \dfrac{2\tau^{(1)}+s^{(1)}-\phi}{\phi+s^{(1)}}\cdot\dfrac{2\tau^{(2)}+\phi-s^{(2)}}{\phi+s^{(2)}}, & \text{when } s^{(2)}\le\phi<s^{(1)},\\[2mm] \dfrac{2\tau^{(1)}+\phi-s^{(1)}}{\phi+s^{(1)}}\cdot\dfrac{2\tau^{(2)}+s^{(2)}-\phi}{\phi+s^{(2)}}, & \text{when } s^{(1)}\le\phi<s^{(2)}.\end{cases}$$
Thus, the following inequalities can be solved:
$$\mathrm{(I)}\;\begin{cases}\dfrac{2\tau^{(1)}+s^{(1)}-\phi}{\phi+s^{(1)}}\cdot\dfrac{2\tau^{(2)}+s^{(2)}-\phi}{\phi+s^{(2)}}<1,\\ \phi<s^{(1)},\;\phi<s^{(2)},\end{cases}\qquad \mathrm{(II)}\;\begin{cases}\dfrac{2\tau^{(1)}+\phi-s^{(1)}}{\phi+s^{(1)}}\cdot\dfrac{2\tau^{(2)}+\phi-s^{(2)}}{\phi+s^{(2)}}<1,\\ \phi\ge s^{(1)},\;\phi\ge s^{(2)},\end{cases}$$
$$\mathrm{(III)}\;\begin{cases}\dfrac{2\tau^{(1)}+s^{(1)}-\phi}{\phi+s^{(1)}}\cdot\dfrac{2\tau^{(2)}+\phi-s^{(2)}}{\phi+s^{(2)}}<1,\\ s^{(2)}\le\phi<s^{(1)},\end{cases}\qquad \mathrm{(IV)}\;\begin{cases}\dfrac{2\tau^{(1)}+\phi-s^{(1)}}{\phi+s^{(1)}}\cdot\dfrac{2\tau^{(2)}+s^{(2)}-\phi}{\phi+s^{(2)}}<1,\\ s^{(1)}\le\phi<s^{(2)}.\end{cases}$$
Accordingly, we can obtain the following relations for $\phi$:
$$\mathrm{(i)}\quad \frac{\big(2\tau^{(1)}+s^{(1)}\big)\big(2\tau^{(2)}+s^{(2)}\big)-s^{(1)}s^{(2)}}{2\big(\tau^{(1)}+s^{(1)}+\tau^{(2)}+s^{(2)}\big)} < \phi < \min\{s^{(1)},s^{(2)}\},\quad \frac{\big(2\tau^{(1)}+s^{(1)}\big)\big(2\tau^{(2)}+s^{(2)}\big)-s^{(1)}s^{(2)}}{2\big(\tau^{(1)}+s^{(1)}+\tau^{(2)}+s^{(2)}\big)} < \min\{s^{(1)},s^{(2)}\};$$
$$\mathrm{(ii)}\quad \phi > \max\Big\{\frac{\big(2\tau^{(1)}-s^{(1)}\big)\big(2\tau^{(2)}-s^{(2)}\big)-s^{(1)}s^{(2)}}{2s^{(1)}+2s^{(2)}-2\tau^{(1)}-2\tau^{(2)}},\,s^{(1)},\,s^{(2)}\Big\} \;\text{ if } s^{(1)}+s^{(2)}-\tau^{(1)}-\tau^{(2)}>0,$$
$$\phantom{\mathrm{(ii)}}\quad \max\{s^{(1)},s^{(2)}\} < \phi < \frac{\big(2\tau^{(1)}-s^{(1)}\big)\big(2\tau^{(2)}-s^{(2)}\big)-s^{(1)}s^{(2)}}{2s^{(1)}+2s^{(2)}-2\tau^{(1)}-2\tau^{(2)}} \;\text{ if } s^{(1)}+s^{(2)}-\tau^{(1)}-\tau^{(2)}<0;$$
$$\mathrm{(iii)}\quad \Delta_1 = \big(2\tau^{(2)}-2\tau^{(1)}\big)^2 - 8\big[s^{(1)}s^{(2)}-\big(2\tau^{(1)}+s^{(1)}\big)\big(2\tau^{(2)}-s^{(2)}\big)\big] \ge 0,\quad \phi < \frac{-\big(2\tau^{(2)}-2\tau^{(1)}\big)-\sqrt{\Delta_1}}{4} \;\text{ or }\; \phi > \frac{-\big(2\tau^{(2)}-2\tau^{(1)}\big)+\sqrt{\Delta_1}}{4},\quad s^{(2)}\le\phi<s^{(1)};$$
$$\mathrm{(iv)}\quad \Delta_2 = \big(2\tau^{(1)}-2\tau^{(2)}\big)^2 - 8\big[s^{(1)}s^{(2)}-\big(2\tau^{(2)}+s^{(2)}\big)\big(2\tau^{(1)}-s^{(1)}\big)\big] \ge 0,\quad \phi < \frac{-\big(2\tau^{(1)}-2\tau^{(2)}\big)-\sqrt{\Delta_2}}{4} \;\text{ or }\; \phi > \frac{-\big(2\tau^{(1)}-2\tau^{(2)}\big)+\sqrt{\Delta_2}}{4},\quad s^{(1)}\le\phi<s^{(2)}.$$
From (i)–(iv), we know that Corollary 2 is established. □
Next, we let $A$ be an $H_+$-matrix, and we let the matrix splitting be an H-splitting. Some convergence conclusions related to Corollary 1 are presented below.
Theorem 4. 
Let $A$ be an $H_+$-matrix, and let $A=F_i-G_i$ be an H-compatible splitting, i.e., $\langle A\rangle=\langle F_i\rangle-|G_i|$, with $\Phi_i+F_i$ being nonsingular, where $\Phi_i$ is a positive diagonal matrix for $i=1,2$. If
$$\Phi_i \ge D \quad\text{for } i=1,2,$$
then, for any initial vector z ( 0 ) R n , the iteration sequence { z ( k ) } k = 1 + generated using the TSMMS iteration method (6) converges to the unique solution z ( * ) of the LCP( A , q ).
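To illustrate Theorem 4 numerically: for a small $H_+$-matrix with Gauss–Seidel-type H-compatible splittings, the spectral-radius condition of Corollary 1 holds for every $\Phi=cD$ with $c\ge 1$. The matrix and the values of $c$ below are arbitrary illustrative choices; here $\Phi+F_i$ is an M-matrix, so it equals its own comparison matrix and can be inverted directly:

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])          # an M-matrix, hence H_+
D = np.diag(np.diag(A))
L, U = -np.tril(A, -1), -np.triu(A, 1)
F1, F2 = D - L, D - U                        # H-compatible: <F_i> - |G_i| = <A>
G1, G2 = F1 - A, F2 - A

radii = []
for c in (1.0, 1.5, 3.0):                    # Phi = c*D >= D in each case
    Phi = c * D
    T1 = np.linalg.inv(Phi + F1) @ (np.abs(G1) + np.abs(A - Phi))
    T2 = np.linalg.inv(Phi + F2) @ (np.abs(G2) + np.abs(A - Phi))
    radii.append(max(abs(np.linalg.eigvals(T1 @ T2))))

print([r < 1 for r in radii])  # [True, True, True], as Theorem 4 predicts
```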
Proof. 
Since $A$ is an $H_+$-matrix, which is also a P-matrix, we know that the LCP($A,q$) has a unique solution for any $q\in\mathbb{R}^{n}$. Since $A=F_i-G_i$ is an H-compatible splitting, it is also an H-splitting because $\langle A\rangle$ is an M-matrix for $i=1,2$. Thus, the condition
$$\rho\Big(\langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big)\Big) < 1$$
in Corollary 1 can be considered. In what follows, we prove that the above condition holds when $\Phi_i\ge D$ for $i=1,2$.
Since $\Phi_i\ge D$, we have $|A-\Phi_i| = |D-\Phi_i|+|L|+|U| = \Phi_i-D+|L|+|U|$, and, using $|G_i|=\langle F_i\rangle-\langle A\rangle$ and $\langle A\rangle=D-|L|-|U|$,
$$0 \le \langle\Phi_i+F_i\rangle^{-1}\big(|G_i|+|A-\Phi_i|\big) = \langle\Phi_i+F_i\rangle^{-1}\big(\Phi_i+\langle F_i\rangle-\langle A\rangle-D+|L|+|U|\big) = I - 2\langle\Phi_i+F_i\rangle^{-1}\langle A\rangle.$$
Since $\langle A\rangle$ is an M-matrix, there must exist a positive vector $v$ such that $\langle A\rangle v>0$. It follows that
$$\begin{aligned}\langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big)v &= \big(I-2\langle\Phi_1+F_1\rangle^{-1}\langle A\rangle\big)\big(I-2\langle\Phi_2+F_2\rangle^{-1}\langle A\rangle\big)v\\ &< \big(I-2\langle\Phi_1+F_1\rangle^{-1}\langle A\rangle\big)v < v,\end{aligned}$$
and, by Lemma 3,
$$\rho\Big(\langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big)\Big) < 1$$
holds since
$$\langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big) \ge 0.$$
Thus, Theorem 4 is established. □
It is well known that, if $A$ is an $H_+$-matrix, then, when $0\le\gamma\le\omega\le 1$ with $\omega\ne 0$, the AOR splitting of $A$ is an H-compatible splitting. So, according to Theorem 4, for the TSMAOR iteration method, we have Corollary 3 as follows.
Corollary 3. 
Let $A$ be an $H_+$-matrix, and let $A=F_i-G_i$ be the AOR splitting for $i=1,2$. If
$$\Phi_i \ge D \quad\text{and}\quad 0\le\gamma_i\le\omega_i\le 1 \text{ with } \omega_i\ne 0, \quad i=1,2,$$
then, for any initial vector z ( 0 ) R n , the iteration sequence { z ( k ) } k = 1 + generated using the TSMAOR iteration method (8) converges to the unique solution z ( * ) of the LCP( A , q ).
In fact, if
$$\rho\big(D^{-1}(|L|+|U|)\big) < \tfrac{1}{2}$$
in Theorem 4, the domain of $\Phi$ can be enlarged; that is, we have Theorem 5 as follows.
Theorem 5. 
Let $A$ be an $H_+$-matrix with $\rho(D^{-1}(|L|+|U|))<\frac12$, and let $A=F_i-G_i$ be an H-compatible splitting, i.e., $\langle A\rangle=\langle F_i\rangle-|G_i|$, with $\Phi_i+F_i$ being nonsingular, where $\Phi_i$ is a positive diagonal matrix for $i=1,2$. If
$$\Phi_i \ge \tfrac{1}{2}D \quad\text{for } i=1,2,$$
then, for any initial vector z ( 0 ) R n , the iteration sequence { z ( k ) } k = 1 + generated using the TSMMS iteration method (6) converges to the unique solution z ( * ) of the LCP( A , q ).
Proof. 
Similar to the proof of Theorem 4, we will prove that
$$\rho\Big(\langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big)\Big) < 1$$
holds when $\Phi_i\ge\frac12 D$ for $i=1,2$ if $\rho(D^{-1}(|L|+|U|))<\frac12$.
Since $\Phi_i\ge\frac12 D$, we have
$$\begin{aligned}0 &\le \langle\Phi_i+F_i\rangle^{-1}\big(|G_i|+|A-\Phi_i|\big)\\ &= I - \langle\Phi_i+F_i\rangle^{-1}\big(\Phi_i+\langle A\rangle-|A-\Phi_i|\big)\\ &= I - \langle\Phi_i+F_i\rangle^{-1}\big(\Phi_i-|D-\Phi_i|+D-2(|L|+|U|)\big)\\ &= I - \langle\Phi_i+F_i\rangle^{-1}\big(\Phi_i-|D-\Phi_i|+D(I-2D^{-1}(|L|+|U|))\big).\end{aligned}$$
So, under the conditions of Theorem 5, that is, $\Phi_i\ge\frac12 D$ and $\rho(D^{-1}(|L|+|U|))<\frac12$, we know that $\Phi_i-|D-\Phi_i|\ge 0$ and that $I-2D^{-1}(|L|+|U|)$ is an M-matrix. Thus, there exists a positive vector $v$ such that
$$\big(I-2D^{-1}(|L|+|U|)\big)v > 0.$$
It follows that
$$\begin{aligned}&\langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big)v\\ &\quad= \Big(I-\langle\Phi_1+F_1\rangle^{-1}\big(\Phi_1-|D-\Phi_1|+D(I-2D^{-1}(|L|+|U|))\big)\Big)\Big(I-\langle\Phi_2+F_2\rangle^{-1}\big(\Phi_2-|D-\Phi_2|+D(I-2D^{-1}(|L|+|U|))\big)\Big)v\\ &\quad< \Big(I-\langle\Phi_1+F_1\rangle^{-1}\big(\Phi_1-|D-\Phi_1|+D(I-2D^{-1}(|L|+|U|))\big)\Big)v < v,\end{aligned}$$
and
$$\rho\Big(\langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big)\Big) < 1$$
holds. Thus, the conclusion is proven. □
Similar to Corollary 3, according to the above theorem, we can easily obtain Corollary 4 as follows.
Corollary 4. 
Let $A$ be an $H_+$-matrix with $\rho(D^{-1}(|L|+|U|))<\frac12$, and let $A=F_i-G_i$ be the AOR splitting for $i=1,2$. If
$$\Phi_i \ge \tfrac{1}{2}D \quad\text{and}\quad 0\le\gamma_i\le\omega_i\le 1 \text{ with } \omega_i\ne 0, \quad i=1,2,$$
then, for any initial vector z ( 0 ) R n , the iteration sequence { z ( k ) } k = 1 + generated using the TSMAOR iteration method (8) converges to the unique solution z ( * ) of the LCP( A , q ).
Corollaries 3 and 4 apply to the case $\omega_i\le 1$ for the TSMAOR iteration method (8). In the following, we discuss the case $\omega_i>1$.
Theorem 6. 
Let $A$ be an $H_+$-matrix, and let $A=F_i-G_i$ be the AOR splitting with $\omega_i=\omega>1$, $i=1,2$. If either of the following conditions holds, then, for any initial vector $z^{(0)}\in\mathbb{R}^{n}$, the iteration sequence $\{z^{(k)}\}_{k=1}^{+\infty}$ generated by the TSMAOR iteration method (8) converges to the unique solution $z^{(*)}$ of the LCP($A,q$):
$$\mathrm{(i)}\quad \Phi_i \ge D, \quad 0\le\gamma_i\le\omega \text{ with } 1<\omega<\frac{1}{\rho\big(D^{-1}(|L|+|U|)\big)} \text{ for } i=1,2, \qquad(25)$$
$$\mathrm{(ii)}\quad \tfrac{1}{2}\Big(3-\tfrac{2}{\omega}\Big)D \le \Phi < D, \quad 0\le\gamma_i\le\omega \text{ and } 1<\omega<2 \text{ if } \rho\big(D^{-1}(|L|+|U|)\big)<\tfrac{1}{2} \text{ for } i=1,2, \qquad(26)$$
where $\Phi_1=\Phi_2=\Phi$.
Proof. 
From the proof of Corollary 1, we know that, if $F_i$ is an $H_+$-matrix for $i=1,2$, the condition
$$\rho\Big(\langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big)\Big) < 1$$
can be used. Since $A$ is an $H_+$-matrix, it is easy to observe that $F_i$ in the AOR splitting is an $H_+$-matrix for $i=1,2$. So, the above condition can be considered.
We consider $0\le\gamma_i\le\omega_i$ and $\omega_i=\omega>1$ for $i=1,2$, and we have
$$\begin{aligned}0 &\le \langle\Phi_i+F_i\rangle^{-1}\big(|G_i|+|A-\Phi_i|\big)\\ &= I - \langle\Phi_i+F_i\rangle^{-1}\Big(\Phi_i+\frac{1-|1-\omega_i|}{\omega_i}D-|D-\Phi_i|-2(|L|+|U|)\Big)\\ &= I - \langle\Phi_i+F_i\rangle^{-1}\Big(\Phi_i+\frac{2-\omega_i}{\omega_i}D-|D-\Phi_i|-2(|L|+|U|)\Big)\\ &= \begin{cases}I - \langle\Phi_i+F_i\rangle^{-1}\,2D\Big(\dfrac{1}{\omega}I-D^{-1}(|L|+|U|)\Big), & \text{when } \Phi_i\ge D,\\[2mm] I - \langle\Phi_i+F_i\rangle^{-1}\Big(2\Phi_i+\Big(\dfrac{2}{\omega}-3\Big)D+D\big(I-2D^{-1}(|L|+|U|)\big)\Big), & \text{when } \Phi_i<D.\end{cases}\end{aligned}$$
From the last relation above, we know that, when
$$\rho\big(D^{-1}(|L|+|U|)\big) < \frac{1}{\omega},$$
the matrix $2D\big(\frac{1}{\omega}I-D^{-1}(|L|+|U|)\big)$ is an M-matrix, and, when
$$2\Phi_i+\Big(\frac{2}{\omega}-3\Big)D \ge 0,$$
i.e., $\Phi_i\ge\frac12\big(3-\frac{2}{\omega}\big)D$ with $1<\omega<2$, and $\rho(D^{-1}(|L|+|U|))<\frac12$, the matrix $2\Phi_i+\big(\frac{2}{\omega}-3\big)D+D\big(I-2D^{-1}(|L|+|U|)\big)$ is an M-matrix. So, using the same proof approach as in Theorem 4, there exist positive vectors $u$ and $v$ such that
$$2D\Big(\frac{1}{\omega}I-D^{-1}(|L|+|U|)\Big)u > 0 \quad\text{and}\quad \Big(2\Phi+\Big(\frac{2}{\omega}-3\Big)D+D\big(I-2D^{-1}(|L|+|U|)\big)\Big)v > 0.$$
Here, we consider the case of $\Phi_1=\Phi_2=\Phi$ for the second relationship. Thus, we have
$$\begin{aligned}&\Big(I-\langle\Phi_1+F_1\rangle^{-1}\,2D\Big(\tfrac{1}{\omega}I-D^{-1}(|L|+|U|)\Big)\Big)\Big(I-\langle\Phi_2+F_2\rangle^{-1}\,2D\Big(\tfrac{1}{\omega}I-D^{-1}(|L|+|U|)\Big)\Big)u\\ &\quad< \Big(I-\langle\Phi_1+F_1\rangle^{-1}\,2D\Big(\tfrac{1}{\omega}I-D^{-1}(|L|+|U|)\Big)\Big)u < u\end{aligned}$$
and
$$\begin{aligned}&\Big(I-\langle\Phi+F_1\rangle^{-1}\Big(2\Phi+\Big(\tfrac{2}{\omega}-3\Big)D+D\big(I-2D^{-1}(|L|+|U|)\big)\Big)\Big)\Big(I-\langle\Phi+F_2\rangle^{-1}\Big(2\Phi+\Big(\tfrac{2}{\omega}-3\Big)D+D\big(I-2D^{-1}(|L|+|U|)\big)\Big)\Big)v\\ &\quad< \Big(I-\langle\Phi+F_2\rangle^{-1}\Big(2\Phi+\Big(\tfrac{2}{\omega}-3\Big)D+D\big(I-2D^{-1}(|L|+|U|)\big)\Big)\Big)v < v.\end{aligned}$$
Thus, for the two cases, the condition
$$\rho\Big(\langle\Phi_1+F_1\rangle^{-1}\big(|G_1|+|A-\Phi_1|\big)\,\langle\Phi_2+F_2\rangle^{-1}\big(|G_2|+|A-\Phi_2|\big)\Big) < 1$$
always holds, and then Theorem 6 is established. □
For the second condition (26) in Theorem 6, we note that the domain of Φ is connected with ω, which means that ω determines the domain. For example, if ω = 6/5, the domain of Φ is (2/3)D ≤ Φ < D. Conversely, if Φ = (2/3)D, the domain of ω is 1 < ω ≤ 6/5. In addition, since the TSMSOR iteration method, the TSMGS iteration method, and the TSMJ iteration method are particular cases of the TSMAOR iteration method, Corollary 3 and Theorem 6 above also apply to these three methods. In the discussion of the TSMAOR iteration method above, the cases 0 < ω ≤ 1 and ω > 1 were studied separately under γ_i ≤ ω_i, and the case γ_i > ω_i was not considered. Next, we discuss this case, and we obtain the following conclusion.
Theorem 7. 
Let A be an H₊-matrix, and let A = F_i − G_i be the AOR splitting of A. Denote ω_i = ω and γ_i = γ, i = 1, 2. If γ > ω and either of the following conditions holds, then, for any initial vector z⁽⁰⁾ ∈ Rⁿ, the iteration sequence {z⁽ᵏ⁾}, k = 1, 2, …, generated by the TSMAOR iteration method (8) converges to the unique solution z* of the LCP (1):
$$
\begin{aligned}
(\mathrm{i})\;& \Phi_i\ge D,\quad \frac{\gamma}{\min\{1,\omega\}}<\frac{1}{\rho\left(D^{-1}(|L|+|U|)\right)};\\
(\mathrm{ii})\;& \Phi_1=\Phi_2=\Phi<D,\quad \rho\left(\Phi^{-1}(|L|+|U|)\right)<\frac{\omega}{\gamma}\ \text{with}\ 0<\omega\le 1,\\
&\text{and}\ \rho\left(\left(\Phi+\tfrac{1-\omega}{\omega}D\right)^{-1}(|L|+|U|)\right)<\frac{\omega}{\gamma}\ \text{with}\ \Phi>\left(1-\tfrac{1}{\omega}\right)D\ \text{and}\ \omega>1.
\end{aligned}
$$
Proof. 
Similar to the idea of Theorem 6, we will prove that the condition
$$\rho\left(\left|(\Phi_1+F_1)^{-1}\right|\left(|G_1|+|A-\Phi_1|\right)\left|(\Phi_2+F_2)^{-1}\right|\left(|G_2|+|A-\Phi_2|\right)\right)<1$$
is satisfied under these conditions.
Since γ > ω > 0, for the TSMAOR iteration method, we can prove that
$$
0\le\left|(\Phi_i+F_i)^{-1}\right|\left(|G_i|+|A-\Phi_i|\right)=
\begin{cases}
I-\left|(\Phi_1+F_1)^{-1}\right|\left(\Phi_1+\dfrac{1-|1-\omega|}{\omega}D-|D-\Phi_1|-\dfrac{2\gamma}{\omega}|L|-2|U|\right), & i=1,\\[4pt]
I-\left|(\Phi_2+F_2)^{-1}\right|\left(\Phi_2+\dfrac{1-|1-\omega|}{\omega}D-|D-\Phi_2|-\dfrac{2\gamma}{\omega}|U|-2|L|\right), & i=2,
\end{cases}
$$
and, since 2γ/ω > 2,
$$
\left|(\Phi_i+F_i)^{-1}\right|\left(|G_i|+|A-\Phi_i|\right)\le I-\left|(\Phi_i+F_i)^{-1}\right|\left(\Phi_i+\frac{1-|1-\omega|}{\omega}D-|D-\Phi_i|-\frac{2\gamma}{\omega}(|L|+|U|)\right)
$$
if Φ_i + ((1 − |1 − ω|)/ω)D − |D − Φ_i| − (2γ/ω)(|L| + |U|) is an M-matrix.
Now, we discuss the conditions that guarantee that the matrix Φ_i + ((1 − |1 − ω|)/ω)D − |D − Φ_i| − (2γ/ω)(|L| + |U|) is an M-matrix.
When Φ_i ≥ D,
$$
\begin{aligned}
\Phi_i+\frac{1-|1-\omega|}{\omega}D-|D-\Phi_i|-\frac{2\gamma}{\omega}(|L|+|U|)
&=\Phi_i+\frac{1-|1-\omega|}{\omega}D-\Phi_i+D-\frac{2\gamma}{\omega}(|L|+|U|)\\
&=\frac{1+\omega-|1-\omega|}{\omega}D-\frac{2\gamma}{\omega}(|L|+|U|)\\
&=2D\left(\frac{\min\{1,\omega\}}{\omega}I-\frac{\gamma}{\omega}D^{-1}(|L|+|U|)\right),
\end{aligned}
$$
which does not depend on Φ_i. So, we know that, if
$$\frac{\gamma}{\omega}\,\rho\left(D^{-1}(|L|+|U|)\right)<\frac{\min\{1,\omega\}}{\omega},$$
that is,
$$\frac{\gamma}{\min\{1,\omega\}}<\frac{1}{\rho\left(D^{-1}(|L|+|U|)\right)},$$
the matrix Φ + ((1 − |1 − ω|)/ω)D − |D − Φ| − (2γ/ω)(|L| + |U|) is an M-matrix.
When Φ₁ = Φ₂ = Φ < D,
$$
\begin{aligned}
\Phi+\frac{1-|1-\omega|}{\omega}D-|D-\Phi|-\frac{2\gamma}{\omega}(|L|+|U|)
&=\Phi+\frac{1-|1-\omega|}{\omega}D-D+\Phi-\frac{2\gamma}{\omega}(|L|+|U|)\\
&=2\Phi+\frac{1-\omega-|1-\omega|}{\omega}D-\frac{2\gamma}{\omega}(|L|+|U|)\\
&=\begin{cases}
2\left(\Phi-\dfrac{\gamma}{\omega}(|L|+|U|)\right), & 0<\omega\le 1,\\[4pt]
2\left(\Phi+\dfrac{1-\omega}{\omega}D-\dfrac{\gamma}{\omega}(|L|+|U|)\right), & \omega>1.
\end{cases}
\end{aligned}
$$
So, if
$$\rho\left(\Phi^{-1}(|L|+|U|)\right)<\frac{\omega}{\gamma}\ \ (\text{for}\ 0<\omega\le 1)\quad\text{or}\quad\rho\left(\left(\Phi+\tfrac{1-\omega}{\omega}D\right)^{-1}(|L|+|U|)\right)<\frac{\omega}{\gamma}\ \text{with}\ \Phi>\left(1-\tfrac{1}{\omega}\right)D\ \ (\text{for}\ \omega>1),$$
the matrix above is an M-matrix.
Under the conditions proven above, for the matrix that appeared in (18) of Corollary 1, we have
$$
\begin{aligned}
0&\le\left|(\Phi_1+F_1)^{-1}\right|\left(|G_1|+|A-\Phi_1|\right)\cdot\left|(\Phi_2+F_2)^{-1}\right|\left(|G_2|+|A-\Phi_2|\right)\\
&=\left(I-\left|(\Phi_1+F_1)^{-1}\right|\left(\Phi_1+\tfrac{1-|1-\omega_1|}{\omega_1}D-|D-\Phi_1|-\tfrac{2\gamma_1}{\omega_1}|L|-2|U|\right)\right)\cdot\left(I-\left|(\Phi_2+F_2)^{-1}\right|\left(\Phi_2+\tfrac{1-|1-\omega_2|}{\omega_2}D-|D-\Phi_2|-\tfrac{2\gamma_2}{\omega_2}|U|-2|L|\right)\right)\\
&\le\left(I-\left|(\Phi_1+F_1)^{-1}\right|\left(\Phi_1+\tfrac{1-|1-\omega|}{\omega}D-|D-\Phi_1|-\tfrac{2\gamma}{\omega}(|L|+|U|)\right)\right)\cdot\left(I-\left|(\Phi_2+F_2)^{-1}\right|\left(\Phi_2+\tfrac{1-|1-\omega|}{\omega}D-|D-\Phi_2|-\tfrac{2\gamma}{\omega}(|L|+|U|)\right)\right)\\
&=\begin{cases}
\left(I-\left|(\Phi_1+F_1)^{-1}\right|2D\left(\tfrac{\min\{1,\omega\}}{\omega}I-\tfrac{\gamma}{\omega}D^{-1}(|L|+|U|)\right)\right)\cdot\left(I-\left|(\Phi_2+F_2)^{-1}\right|2D\left(\tfrac{\min\{1,\omega\}}{\omega}I-\tfrac{\gamma}{\omega}D^{-1}(|L|+|U|)\right)\right), & \Phi_i\ge D,\\[4pt]
\left(I-\left|(\Phi+F_1)^{-1}\right|\left(2\Phi+\tfrac{1-\omega-|1-\omega|}{\omega}D-\tfrac{2\gamma}{\omega}(|L|+|U|)\right)\right)\cdot\left(I-\left|(\Phi+F_2)^{-1}\right|\left(2\Phi+\tfrac{1-\omega-|1-\omega|}{\omega}D-\tfrac{2\gamma}{\omega}(|L|+|U|)\right)\right), & \Phi_1=\Phi_2=\Phi<D.
\end{cases}
\end{aligned}
$$
Thus, since 2D((min{1,ω}/ω)I − (γ/ω)D⁻¹(|L|+|U|)) and 2Φ + ((1 − ω − |1 − ω|)/ω)D − (2γ/ω)(|L|+|U|) are two M-matrices, there exist two positive vectors, α and β, such that
$$2D\left(\frac{\min\{1,\omega\}}{\omega}I-\frac{\gamma}{\omega}D^{-1}(|L|+|U|)\right)\alpha>0$$
and
$$\left(2\Phi+\frac{1-\omega-|1-\omega|}{\omega}D-\frac{2\gamma}{\omega}(|L|+|U|)\right)\beta>0,$$
respectively. It follows that
$$
\begin{aligned}
&\left(I-\left|(\Phi_1+F_1)^{-1}\right|2D\left(\tfrac{\min\{1,\omega\}}{\omega}I-\tfrac{\gamma}{\omega}D^{-1}(|L|+|U|)\right)\right)\cdot\left(I-\left|(\Phi_2+F_2)^{-1}\right|2D\left(\tfrac{\min\{1,\omega\}}{\omega}I-\tfrac{\gamma}{\omega}D^{-1}(|L|+|U|)\right)\right)\alpha\\
&\qquad<\left(I-\left|(\Phi_1+F_1)^{-1}\right|2D\left(\tfrac{\min\{1,\omega\}}{\omega}I-\tfrac{\gamma}{\omega}D^{-1}(|L|+|U|)\right)\right)\alpha<\alpha
\end{aligned}
$$
and
$$
\begin{aligned}
&\left(I-\left|(\Phi+F_1)^{-1}\right|\left(2\Phi+\tfrac{1-\omega-|1-\omega|}{\omega}D-\tfrac{2\gamma}{\omega}(|L|+|U|)\right)\right)\cdot\left(I-\left|(\Phi+F_2)^{-1}\right|\left(2\Phi+\tfrac{1-\omega-|1-\omega|}{\omega}D-\tfrac{2\gamma}{\omega}(|L|+|U|)\right)\right)\beta\\
&\qquad<\left(I-\left|(\Phi+F_1)^{-1}\right|\left(2\Phi+\tfrac{1-\omega-|1-\omega|}{\omega}D-\tfrac{2\gamma}{\omega}(|L|+|U|)\right)\right)\beta<\beta
\end{aligned}
$$
hold. So, from Lemma 3, we know that
$$\rho\left(\left(I-\left|(\Phi_1+F_1)^{-1}\right|2D\left(\tfrac{\min\{1,\omega\}}{\omega}I-\tfrac{\gamma}{\omega}D^{-1}(|L|+|U|)\right)\right)\cdot\left(I-\left|(\Phi_2+F_2)^{-1}\right|2D\left(\tfrac{\min\{1,\omega\}}{\omega}I-\tfrac{\gamma}{\omega}D^{-1}(|L|+|U|)\right)\right)\right)<1$$
and
$$\rho\left(\left(I-\left|(\Phi+F_1)^{-1}\right|\left(2\Phi+\tfrac{1-\omega-|1-\omega|}{\omega}D-\tfrac{2\gamma}{\omega}(|L|+|U|)\right)\right)\cdot\left(I-\left|(\Phi+F_2)^{-1}\right|\left(2\Phi+\tfrac{1-\omega-|1-\omega|}{\omega}D-\tfrac{2\gamma}{\omega}(|L|+|U|)\right)\right)\right)<1.$$
Then, according to spectral radius theory for nonnegative matrices, we know that
$$\rho\left(\left|(\Phi_1+F_1)^{-1}\right|\left(|G_1|+|A-\Phi_1|\right)\left|(\Phi_2+F_2)^{-1}\right|\left(|G_2|+|A-\Phi_2|\right)\right)<1$$
is established for the TSMAOR iteration method. So, the conclusion is proven. □
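The convergence condition above can also be checked numerically. The following Python/NumPy sketch is an illustration only: it assumes the standard AOR splitting form F₁ = (D − γL)/ω, with the second sweep using U in place of L (this concrete form is an assumption, not quoted from the method's definition), and evaluates the spectral radius appearing in the condition.

```python
import numpy as np

def tsmaor_radius(A, Phi, omega, gamma):
    """Spectral radius of |(Phi1+F1)^-1|(|G1|+|A-Phi1|) |(Phi2+F2)^-1|(|G2|+|A-Phi2|)
    for AOR-type splittings of A = D - L - U; a value < 1 indicates convergence."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                  # strictly lower part of A = D - L - U
    U = -np.triu(A, 1)                   # strictly upper part
    F1 = (D - gamma * L) / omega         # assumed AOR form, forward sweep
    F2 = (D - gamma * U) / omega         # backward sweep uses U
    G1, G2 = F1 - A, F2 - A
    T1 = np.abs(np.linalg.inv(Phi + F1)) @ (np.abs(G1) + np.abs(A - Phi))
    T2 = np.abs(np.linalg.inv(Phi + F2)) @ (np.abs(G2) + np.abs(A - Phi))
    return max(abs(np.linalg.eigvals(T1 @ T2)))

# Small H+-matrix example: tridiag(-1, 4, -1), Phi = 2D, omega = gamma = 1
A = np.diag(np.full(4, 4.0)) - np.diag(np.ones(3), -1) - np.diag(np.ones(3), 1)
r = tsmaor_radius(A, 2 * np.diag(np.diag(A)), 1.0, 1.0)
print(r < 1)   # True: the sufficient condition holds for this small example
```

For this small matrix the radius is well below one, consistent with the sufficient conditions derived above.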

4. Numerical Experiments

In this section, we provide some examples related to the convergence conditions. IT denotes the number of iteration steps, and CPU denotes the elapsed time in seconds. We denote by RES(z) the norm of the residual vector, defined as
$$\mathrm{RES}(z)=\left\|\min(z,\,Az+q)\right\|_2,$$
where "min" is taken component-wise and ‖·‖₂ denotes the Euclidean norm. Since z is the solution to the LCP if and only if
$$\min(z,\,Az+q)=0,$$
we stop the iteration when RES(z⁽ᵏ⁾) < 10⁻⁵ or when IT reaches 300, where z⁽ᵏ⁾ is the kth approximate solution produced by the iteration method. The matrix A in the LCP is generated via
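The residual measure above can be sketched in a few lines of Python/NumPy; the 2 × 2 matrix below is made up for illustration only.

```python
import numpy as np

def residual(z, A, q):
    """RES(z) = || min(z, Az + q) ||_2, with the min taken component-wise."""
    return np.linalg.norm(np.minimum(z, A @ z + q))

# Tiny made-up problem: choose q so that z_star solves the LCP exactly.
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
z_star = np.array([1.0, 0.0])
q = -A @ z_star                    # then A z_star + q = 0, so RES(z_star) = 0
print(residual(z_star, A, q))      # 0.0
```

A stopping loop would simply iterate until `residual(z, A, q) < 1e-5` or the step count reaches 300.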
$$A(\xi,\zeta)=\hat{A}+\xi B+\zeta C,$$
where
$$\hat{A}=\mathrm{Tridiag}(-I,\,S,\,-I)\in\mathbb{R}^{n\times n}$$
is a block-tridiagonal matrix, with S = tridiag(−1, 4, −1) ∈ R^{m×m} being a tridiagonal matrix. B = tridiag(0, 0, −1) ∈ R^{n×n} is also a tridiagonal matrix, and C is a given diagonal matrix of order n, with m² = n. ξ and ζ are two given numbers chosen to guarantee that A(ξ, ζ) is a P-matrix. For convenience, we set z⁽⁰⁾ = zeros(n, 1) as the initial vector in all of our experiments.
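A Python/NumPy sketch of this test-matrix construction follows; the minus signs on the off-diagonals are taken from the standard test problem in this literature and are an assumption here.

```python
import numpy as np

def tridiag(m, lo, d, up):
    """m x m tridiagonal matrix with constant sub-, main, and super-diagonal."""
    return (np.diag(np.full(m - 1, float(lo)), -1)
            + np.diag(np.full(m, float(d)))
            + np.diag(np.full(m - 1, float(up)), 1))

def build_A(m, xi, zeta, c):
    """A(xi, zeta) = A_hat + xi*B + zeta*C with n = m^2 and C = diag(c)."""
    n = m * m
    S = tridiag(m, -1.0, 4.0, -1.0)
    # A_hat = Tridiag(-I, S, -I): diagonal blocks S, off-diagonal blocks -I
    A_hat = np.kron(np.eye(m), S) + np.kron(tridiag(m, -1.0, 0.0, -1.0), np.eye(m))
    B = np.diag(np.full(n - 1, -1.0), 1)       # tridiag(0, 0, -1)
    return A_hat + xi * B + zeta * np.diag(c)
```

With c = (1, 2, 3, 1, 2, 3, …), A(0, 3) comes out symmetric, while A(1, 2) does not, matching the description in Example 1.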
Example 1. 
In this example, we consider the convergence condition of Theorem 3. We let A in the LCP (1) be A(0, 3) and A(1, 2), with C = diag([1, 2, 3, 1, 2, 3, …]) ∈ R^{n×n} and n = 1600. Then, A(0, 3) is a symmetric P-matrix, and A(1, 2) is a nonsymmetric P-matrix. We set Φ = ϕI with ϕ > 0 in the TSMMS iteration method (6). For A(0, 3), we set
F 1 = triu ( A , 1 ) triu ( A , 2 )
and
F 2 = A ( triu ( A , 1 ) triu ( A , 0 ) ) T ( triu ( A , 1 ) triu ( A , 0 ) ) .
For A ( 1 , 2 ) , we set
F 1 = ( triu ( A , 1 ) triu ( A , 0 ) ) T + ( triu ( A , 1 ) triu ( A , 1 ) )
and
F 2 = ( triu ( A , m ) triu ( A , 0 ) ) T + ( triu ( A , m ) triu ( A , 1 ) ) ,
where "triu" is a function in Matlab. Then, we find that both cases satisfy conditions (19) and (20) in Theorem 3. So, the convergence region of ϕ is
$$\phi\in(\Upsilon,+\infty).$$
To see the numerical results, we set q = −Az* in the LCP (1), with z* = (1, 0, 1, 0, …)ᵀ being the true solution. We select values of ϕ in the convergence region, and we denote
$$\sigma=\left\|(\Phi_1+F_1)^{-1}(2G_1+\Phi_1-F_1)\right\|\left\|(\Phi_2+F_2)^{-1}(2G_2+\Phi_2-F_2)\right\|$$
(see Theorem 2) and
$$\delta=\lambda_1^{(1)}+\lambda_1^{(2)}-\tau^{(1)}-\tau^{(2)}.$$
We set ϕ = Υ : 1 : Υ + 9 in Matlab colon notation, i.e., ϕ_k = Υ + k − 1 for k = 1, …, 10. The convergence domain of ϕ and the numerical results are shown in Tables 1 and 2 and Figure 1 below, respectively.
In Figure 1, subfigures (a) and (b) are related to the case A(0, 3), and subfigures (c) and (d) are related to the case A(1, 2). The horizontal axis denotes ϕ, and the vertical axes denote IT and the function σ, respectively.
Example 2. 
In this example, we consider the convergence of the TSMAOR iteration method. We set A to be A(1, 3) with n = 1600 and C the same as in Example 1. Then, A is an H₊-matrix, and
$$\rho\left(D^{-1}(|L|+|U|)\right)=0.2470<\frac{1}{2}.$$
Then, from Corollaries 3 and 4 and Theorem 6, we know that, if Φ = 2D, one sufficient region of (ω, γ) is
$$0\le\gamma\le\omega\le\frac{1}{\rho\left(D^{-1}(|L|+|U|)\right)}\quad\text{with}\ \omega\neq 0,$$
and if Φ = (2/3)D, the sufficient region of (ω, γ) is
$$0\le\gamma\le\omega\le\frac{6}{5}.$$
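The quantity ρ(D⁻¹(|L|+|U|)) that governs these regions is straightforward to compute. The Python/NumPy sketch below uses a small tridiagonal stand-in rather than A(1, 3) itself, so it does not reproduce the value 0.2470; it only illustrates the computation.

```python
import numpy as np

def jacobi_radius(A):
    """rho(D^{-1}(|L| + |U|)) for the splitting A = D - L - U."""
    D = np.diag(np.diag(A))
    off = np.abs(A - D)                       # |L| + |U|
    return max(abs(np.linalg.eigvals(np.linalg.solve(D, off))))

# Small stand-in matrix: tridiag(-1, 4, -1); its radius is also below 1/2.
A = np.diag(np.full(4, 4.0)) - np.diag(np.ones(3), -1) - np.diag(np.ones(3), 1)
rho = jacobi_radius(A)
print(rho < 0.5)   # True, so omega may range up to 1/rho > 2 here
```

For the stand-in matrix the exact value is (1/2)cos(π/5) ≈ 0.40, comfortably inside the required bound.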
To see the numerical results, we consider some concrete values of ω and γ for the TSMAOR iteration method. For Φ = 2D, we set
$$\omega=\frac{1}{l\rho}:\frac{1}{l\rho}:\frac{1}{\rho}-\frac{1}{l\rho},\qquad \gamma=\frac{1}{l\rho}:\frac{1}{l\rho}:\frac{1}{\rho}-\frac{1}{l\rho}.$$
For Φ = (2/3)D, we set
$$\omega=\frac{6}{5l}:\frac{6}{5l}:\frac{6}{5}-\frac{6}{5l},\qquad \gamma=\frac{6}{5l}:\frac{6}{5l}:\frac{6}{5}-\frac{6}{5l}.$$
We let l = 10, and we denote the function in the second condition (26) of Theorem 6 as
$$\rho(\omega,\gamma)=\rho\left(\left|(\Phi_1+F_1)^{-1}\right|\left(|G_1|+|A-\Phi_1|\right)\left|(\Phi_2+F_2)^{-1}\right|\left(|G_2|+|A-\Phi_2|\right)\right).$$
Then, we obtain Table 3 and Table 4, as well as Figure 2, as follows.
In Figure 2, subfigures (a) and (b) correspond to Tables 3 and 4, respectively, that is, the case A(1, 3) with Φ = 2D and the case A(1, 3) with Φ = (2/3)D. From Tables 3 and 4 and Figure 2, we find that, when γ ≤ ω, the TSMAOR iteration method converges, while when γ > ω, there are cases in which the convergence condition ρ(ω, γ) < 1 is not satisfied but the TSMAOR iteration method still converges. Moreover, the numerical example shows that the size of the spectral radius ρ(ω, γ) is not exactly consistent with the number of iteration steps (IT), as can be seen in subfigure (a). This example also shows that the TSMGS iteration method (ω = γ = 1) usually performs better.
Example 3. 
In this example, we compare the two-step simplified modulus-based Gauss–Seidel (TSMGS) iteration method with the new modulus-based Gauss–Seidel (NMGS) iteration method in [2]. We consider four cases, A(0, 3), A(1, 2), A(1, 3), and A(2, 3). We set n = 2500; then, all four cases satisfy the condition
$$\rho=\rho\left(D^{-1}(|L|+|U|)\right)<\frac{1}{2}$$
in Corollary 4. Then, the TSMGS iteration method converges when Φ ≥ (1/2)D. We choose some concrete values of Φ for the two iteration methods, i.e.,
$$\Phi=\frac{1}{2}D:\frac{1}{4}D:3D.$$
We denote
$$\rho_{\mathrm{TSMGS}}=\rho\left(\left|(\Phi+D-L)^{-1}\right|\left(|U|+|A-\Phi|\right)\left|(\Phi+D-U)^{-1}\right|\left(|L|+|A-\Phi|\right)\right)$$
and
$$\rho_{\mathrm{NMGS}}=\rho\left(\left|(\Phi+D-L)^{-1}\right|\left(|U|+|A-\Phi|\right)\right),$$
which determine the convergence of the TSMGS iteration method and the NMGS iteration method, respectively. Then, we obtain Figure 3 as follows.
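The two spectral radii can be evaluated directly. The Python/NumPy sketch below uses a small tridiagonal stand-in matrix, so it does not reproduce the values behind Figure 3; it only shows the computation of (ρ_TSMGS, ρ_NMGS).

```python
import numpy as np

def gs_radii(A, Phi):
    """Return (rho_TSMGS, rho_NMGS) for A = D - L - U and parameter matrix Phi."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M1 = np.abs(np.linalg.inv(Phi + D - L)) @ (np.abs(U) + np.abs(A - Phi))
    M2 = np.abs(np.linalg.inv(Phi + D - U)) @ (np.abs(L) + np.abs(A - Phi))
    rho = lambda M: max(abs(np.linalg.eigvals(M)))
    return rho(M1 @ M2), rho(M1)

A = np.diag(np.full(4, 4.0)) - np.diag(np.ones(3), -1) - np.diag(np.ones(3), 1)
r_tsmgs, r_nmgs = gs_radii(A, np.diag(np.diag(A)))   # Phi = D >= (1/2)D
print(r_tsmgs < 1 and r_nmgs < 1)   # True: both methods converge here
```

Here Φ = D satisfies the sufficient condition Φ ≥ (1/2)D, so both radii come out below one for this stand-in matrix.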
In Figure 3, the horizontal axis denotes ϕ with Φ = ϕD, and the vertical axis denotes IT. Subfigures (a) and (b) are related to A(0, 3), (c) and (d) to A(1, 2), (e) and (f) to A(1, 3), and (g) and (h) to A(2, 3). This example shows that the TSMGS iteration method usually outperforms the NMGS iteration method.

5. Concluding Remarks

We have presented the two-step simplified modulus-based matrix splitting (TSMMS) iteration method for solving the LCP. General convergence conditions and some concrete convergence conditions were supplied, and numerical experiments verified the effectiveness of the presented method and the validity of the convergence conditions. Applying the TSMMS iteration method to other complementarity problems, such as the implicit complementarity problem and the horizontal linear complementarity problem, is our future research topic.

Funding

This research was supported by the Natural Science Foundation of Guangdong Province (No. 2023A1515011911), the Guangdong Basic and Applied Basic Research Foundation (No. 2022A151501108), the Guangdong Province Education Science Planning Project (No. 2024GXJK249), and the Innovative Research Team Project of Zhaoqing University.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The author thanks the anonymous reviewers for providing valuable suggestions to make this paper more readable.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Billups, S.C.; Murty, K.G. Complementarity problems. J. Comput. Appl. Math. 2000, 124, 303–328.
  2. Wu, S.L.; Li, C.X. A class of new modulus-based matrix splitting methods for linear complementarity problem. Optim. Lett. 2022, 16, 1427–1443.
  3. Ferris, M.C.; Pang, J.S. Complementarity and Variational Problems: State of the Art; SIAM: Philadelphia, PA, USA, 1997.
  4. Hadjidimos, A.; Lapidakis, M.; Tzoumas, M. On iterative solution for linear complementarity problem with an H+-matrix. SIAM J. Matrix Anal. Appl. 2012, 33, 97–110.
  5. Shi, X.J.; Yang, L.; Huang, Z.H. A fixed point method for the linear complementarity problem arising from American option pricing. Acta Math. Appl. Sin. 2016, 32, 921–932.
  6. Cottle, R.W.; Sacher, R.S. On the solution of large, structured linear complementarity problems: The tridiagonal case. Appl. Math. Opt. 1976, 3, 321–340.
  7. Zheng, N.; Hayami, K.; Yin, J.F. Modulus-type inner outer iteration methods for nonnegative constrained least squares problems. SIAM J. Matrix Anal. Appl. 2016, 37, 1250–1278.
  8. Bai, Z.Z. Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 2010, 17, 917–933.
  9. Li, W. A general modulus-based matrix splitting method for linear complementarity problems of H-matrices. Appl. Math. Lett. 2013, 26, 1159–1164.
  10. Zheng, N.; Yin, J.F. Accelerated modulus-based matrix splitting iteration methods for linear complementarity problem. Numer. Algor. 2013, 64, 245–262.
  11. Zhang, L.L. Two-step modulus-based matrix splitting iteration method for linear complementarity problems. Numer. Algor. 2011, 57, 83–99.
  12. Cvetković, L.; Hadjidimos, A.; Kostić, V. On the choice of parameters in MAOR type splitting methods for the linear complementarity problem. Numer. Algor. 2014, 67, 793–806.
  13. Zheng, H.; Cui, J.J. Accelerated relaxation two-sweep modulus-based matrix splitting iteration method for linear complementarity problems. Int. J. Comp. Meth. 2024, 21, 2350025.
  14. Zheng, H.; Vong, S. A modified modulus-based matrix splitting iteration method for solving implicit complementarity problems. Numer. Algor. 2019, 82, 573–592.
  15. Fang, X.M. The convergence of modulus-based matrix splitting iteration methods for implicit complementarity problems. Comput. Appl. Math. 2022, 411, 114241.
  16. Mezzadri, F.; Galligani, E. Modulus-based matrix splitting methods for a class of horizontal nonlinear complementarity problems. Numer. Algor. 2021, 87, 667–687.
  17. Mezzadri, F.; Galligani, E. Modulus-based matrix splitting methods for horizontal linear complementarity problems. Numer. Algor. 2020, 83, 201–219.
  18. Liao, S.W.; Zhang, G.F.; Liang, Z.Z. A preconditioned general modulus-based matrix splitting iteration method for solving horizontal linear complementarity problems. Numer. Algor. 2023, 93, 919–947.
  19. Guo, W.X.; Zheng, H.; Peng, X.F. New convergence results of the modulus-based methods for vertical linear complementarity problems. Appl. Math. Lett. 2023, 135, 108444.
  20. Mezzadri, F. A modulus-based formulation for the vertical linear complementarity problems. Numer. Algor. 2022, 90, 1547–1568.
  21. Zheng, H.; Zhang, Y.X.; Lu, X.P.; Vong, S. Modulus-based synchronous multisplitting iteration methods for large sparse vertical linear complementarity problems. Numer. Algor. 2023, 93, 711–729.
  22. Hong, J.T.; Li, C.L. Modulus-based matrix splitting iteration methods for a class of nonlinear complementarity problem. Numer. Linear Algebra Appl. 2016, 23, 629–641.
  23. Huang, N.; Ma, C.F. The modulus-based matrix splitting algorithms for a class of weakly nonlinear complementarity problems. Numer. Linear Algebra Appl. 2016, 23, 558–569.
  24. Wang, G.B.; Tan, F.P. Modulus-based multisplitting iteration method for a class of weakly nonlinear complementarity problems. Commun. Appl. Math. Comput. 2021, 3, 419–428.
  25. Fang, X. Convergence of modulus-based matrix splitting iteration method for a class of nonlinear complementarity problems. Numer. Algor. 2022, 90, 931–950.
  26. Sharaf, I.M. A projection algorithm for positive definite linear complementarity problems with applications. Int. J. Manag. Sci. Eng. 2016, 12, 141–147.
  27. Zheng, H.; Li, W. The modulus-based nonsmooth Newton’s method for solving linear complementarity problems. J. Comput. Appl. Math. 2015, 288, 116–126.
  28. Fang, X.M. The simplified modulus-based matrix splitting iteration method for the nonlinear complementarity problem. AIMS Math. 2024, 9, 8594–8609.
  29. Frommer, A.; Szyld, D.B. H-splittings and two-stage iterative methods. Numer. Math. 1992, 63, 345–356.
  30. Varga, R.S. Matrix Iterative Analysis; Springer: Berlin/Heidelberg, Germany, 2000.
Figure 1. Comparison of IT and σ for Method 1 with Φ = ϕI.
Figure 2. Comparison of IT and the spectral radius ρ(ω, γ) in the TSMAOR iteration method.
Figure 3. Comparison between the TSMGS iteration method and the NMGS iteration method.
Table 1. The convergence domain of ϕ in the TSMMS iteration method.

n = 1600   (λ₁⁽¹⁾+λₙ⁽¹⁾)/2   (λ₁⁽²⁾+λₙ⁽²⁾)/2   Υ        Γ         δ        ϕ
A(0, 3)    10.00            10.00            5.5367   −1.9941   8.9265   (5.5367, +∞)
A(1, 2)    8.00             8.00             5.5764   −2.0698   4.8122   (5.5764, +∞)
Table 2. The numerical results of the TSMMS iteration method when Φ = ϕI.

A(0, 3)
       ϕ1      ϕ2      ϕ3      ϕ4      ϕ5      ϕ6      ϕ7      ϕ8      ϕ9      ϕ10
σ      1.00    0.71    0.51    0.35    0.24    0.22    0.25    0.28    0.31    0.33
IT     11      9       7       7       6       6       7       8       9       9
CPU    0.012   0.006   0.004   0.004   0.006   0.003   0.009   0.006   0.009   0.005

A(1, 2)
       ϕ1      ϕ2      ϕ3      ϕ4      ϕ5      ϕ6      ϕ7      ϕ8      ϕ9      ϕ10
σ      1.00    0.66    0.44    0.38    0.42    0.45    0.48    0.51    0.53    0.55
IT     7       6       6       7       8       9       9       10      11      12
CPU    0.025   0.009   0.009   0.011   0.012   0.014   0.014   0.016   0.017   0.019
Table 3. Numerical results of the TSMAOR iteration method with Φ = 2D.

A(1, 3)
ρ(ω, γ)   ω1      ω2      ω3      ω4      ω5      ω6      ω7      ω8      ω9
γ1        0.428   0.277   0.345   0.511   0.643   0.747   0.831   0.898   0.956
γ2        0.494   0.267   0.337   0.507   0.639   0.741   0.830   0.896   0.955
γ3        0.571   0.299   0.329   0.501   0.636   0.742   0.828   0.897   0.955
γ4        0.655   0.335   0.357   0.496   0.633   0.741   0.824   0.892   0.951
γ5        0.751   0.375   0.387   0.527   0.628   0.737   0.825   0.896   0.956
γ6        0.861   0.421   0.423   0.559   0.659   0.736   0.823   0.895   0.952
γ7        0.994   0.469   0.457   0.593   0.693   0.764   0.821   0.895   0.953
γ8        1.139   0.522   0.497   0.629   0.724   0.796   0.852   0.894   0.955
γ9        1.299   0.578   0.538   0.666   0.756   0.825   0.879   0.920   0.950

IT        ω1      ω2      ω3      ω4      ω5      ω6      ω7      ω8      ω9
γ1        19      13      11      9       9       8       8       8       7
γ2        19      12      10      9       9       8       8       8       7
γ3        18      12      10      9       8       8       8       7       7
γ4        17      12      10      9       8       8       8       7       7
γ5        17      11      10      9       8       8       7       7       7
γ6        16      11      9       9       8       8       7       7       7
γ7        15      11      9       8       8       7       7       7       7
γ8        15      10      9       8       8       7       7       7       7
γ9        14      10      9       8       7       7       7       7       7
Table 4. Numerical results of the TSMAOR iteration method with Φ = (2/3)D.

A(1, 3)
ρ(ω, γ)   ω1      ω2      ω3      ω4      ω5      ω6      ω7      ω8      ω9
γ1        0.821   0.680   0.568   0.480   0.406   0.345   0.298   0.253   0.315
γ2        0.870   0.675   0.565   0.475   0.403   0.341   0.292   0.250   0.314
γ3        0.922   0.718   0.559   0.470   0.397   0.338   0.287   0.245   0.309
γ4        0.979   0.761   0.592   0.464   0.392   0.332   0.281   0.242   0.305
γ5        1.040   0.804   0.629   0.492   0.385   0.326   0.277   0.235   0.300
γ6        1.104   0.855   0.667   0.521   0.408   0.321   0.273   0.232   0.295
γ7        1.170   0.904   0.706   0.553   0.435   0.339   0.268   0.226   0.290
γ8        1.238   0.956   0.749   0.585   0.460   0.361   0.283   0.220   0.287
γ9        1.315   1.019   0.791   0.620   0.485   0.383   0.301   0.236   0.281

IT        ω1      ω2      ω3      ω4      ω5      ω6      ω7      ω8      ω9
γ1        43      21      13      10      7       7       6       8       10
γ2        41      20      13      10      8       7       6       7       10
γ3        40      20      14      10      8       7       6       7       9
γ4        39      20      14      11      8       7       6       7       9
γ5        40      21      14      11      8       7       6       7       9
γ6        40      21      14      11      8       7       6       7       9
γ7        40      21      14      10      8       7       6       7       8
γ8        40      21      14      10      8       7       6       6       8
γ9        40      20      14      10      8       7       6       6       8