1. Introduction
The absolute value equation (AVE) is to find a vector $x\in\mathbb{R}^{n}$ such that
$$Ax-|x|=b, \qquad (1)$$
where $A\in\mathbb{R}^{n\times n}$, $b\in\mathbb{R}^{n}$, and $|x|=(|x_{1}|,\ldots,|x_{n}|)^{T}$, with $|x_{l}|$ being the absolute value of the $l$-th entry of $x$. The AVE (1) is a special case of the generalized AVE (GAVE)
$$Ax+B|x|=b, \qquad (2)$$
with $A,B\in\mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^{n}$, which was introduced in [1] and further studied in [2,3,4]. In fact, if $B$ is nonsingular, then (2) can be converted into (1). The AVE (1) is closely related to the linear complementarity problem (LCP), bimatrix games and others (see, e.g., [1,2,3,4,5,6,7,8] and the references therein).
In general, solving the AVE (1) is NP-hard [2]. Furthermore, when the AVE (1) is solvable, testing whether it has a unique solution or multiple solutions is NP-complete [9]. The existence of its solutions has been studied in [1,10,11,12,13,14], and an outstanding and commonly used sufficient condition for the unique solvability of the AVE (1) can be found in [11], as follows.
Lemma 1 ([11]). Assume that $A\in\mathbb{R}^{n\times n}$ is invertible. If $\|A^{-1}\|_{2}<1$, then the AVE (1) is uniquely solvable for any $b\in\mathbb{R}^{n}$.

For solving the AVE (1), a great number of numerical methods have been proposed, such as the Newton-type iteration methods [4,7,15,16,17,18,19,20,21,22,23], the Picard iteration method [24], the preconditioned AOR iteration method [25], the generalized Gauss–Seidel iteration method [26], the Levenberg–Marquardt methods [27,28], the exact and inexact Douglas–Rachford splitting methods [29], the dynamical systems [30,31,32,33,34,35], the modified multivariate spectral gradient algorithm [36], the modified HS conjugate gradient method [37], and others (see, e.g., [3,38,39,40,41] and the references therein).
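To make the simplest of these schemes concrete, the Picard iteration computes $x^{(k+1)}=A^{-1}(|x^{(k)}|+b)$, which is a contraction precisely under the condition $\|A^{-1}\|_{2}<1$ of Lemma 1. The sketch below is a minimal Python illustration; the random test matrix, tolerance, and iteration cap are our own illustrative choices, not those of [24].

```python
import numpy as np

def picard_ave(A, b, tol=1e-10, max_iter=1000):
    # Picard iteration x^{k+1} = A^{-1}(|x^k| + b) for the AVE Ax - |x| = b.
    # The fixed-point map is a contraction when ||A^{-1}||_2 < 1, which is
    # exactly the unique-solvability condition of Lemma 1.
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        x_new = np.linalg.solve(A, np.abs(x) + b)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x_new)):
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Demo on a synthetic AVE with a known solution x_star; the diagonal shift
# pushes the smallest singular value of A well above 1.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + 4.0 * np.sqrt(n) * np.eye(n)
x_star = rng.standard_normal(n)
b = A @ x_star - np.abs(x_star)
x, it = picard_ave(A, b)
```

Factoring $A$ once (instead of calling `solve` per step) is the natural optimization for large problems; it is omitted here for brevity.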
In recent years, the SOR-like iteration methods have attracted considerable attention. Ke and Ma [42] first developed an SOR-like iteration method (Algorithm 1) by converting the AVE (1) into a two-by-two block nonlinear equation, and proved the convergence of Algorithm 1 under a sufficient condition on the relaxation parameter $\omega$ involving $\nu=\|A^{-1}\|_{2}$.

Algorithm 1 ([42]). (The SOR-like iteration method) Let the matrix $A$ be nonsingular. Given two initial guesses $x^{(0)},y^{(0)}\in\mathbb{R}^{n}$, for $k=0,1,2,\ldots$ until the generated sequence is convergent, compute
$$x^{(k+1)}=(1-\omega)x^{(k)}+\omega A^{-1}\bigl(y^{(k)}+b\bigr),\qquad y^{(k+1)}=(1-\omega)y^{(k)}+\omega\,|x^{(k+1)}|, \qquad (3)$$
where $\omega>0$ is the relaxation parameter.
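For reference, a minimal Python sketch of this scheme follows. It reflects our reading of the update formulas (3); the stopping rule and the synthetic test problem are illustrative assumptions, not the setup of [42].

```python
import numpy as np

def sor_like_ave(A, b, omega, tol=1e-8, max_iter=5000):
    # SOR-like iteration (our reading of Algorithm 1 in [42]):
    #   x^{k+1} = (1 - omega) x^k + omega A^{-1}(y^k + b)
    #   y^{k+1} = (1 - omega) y^k + omega |x^{k+1}|
    # where the auxiliary vector y tracks |x|.
    x = np.zeros_like(b, dtype=float)
    y = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        x = (1.0 - omega) * x + omega * np.linalg.solve(A, y + b)
        y = (1.0 - omega) * y + omega * np.abs(x)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter

# Demo: a shifted random matrix whose smallest singular value exceeds 1,
# so that the AVE has a unique solution (Lemma 1).
rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, n)) + 5.0 * np.sqrt(n) * np.eye(n)
x_star = rng.standard_normal(n)
b = A @ x_star - np.abs(x_star)
x, it = sor_like_ave(A, b, omega=1.0)
```

With $\omega=1$ the scheme reduces to an alternating Picard-type update, which is why convergence on this well-conditioned test problem is fast.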
In order to further explore the convergence conditions of the SOR-like iteration method for solving the AVE (1) in [42], Guo et al. [43] proved the convergence of Algorithm 1 from the perspective of the spectral radius and obtained the corresponding optimal relaxation parameter.
Subsequently, Chen et al. [44] investigated the theoretical optimal parameter and the approximate optimal parameter of Algorithm 1 for solving the AVE (1), resulting in the expressions (4) and (5), respectively.
Meanwhile, by reformulating the AVE (1) as a two-by-two block nonlinear equation, a fixed point iteration (FPI) method was suggested for solving the AVE (1) in [45], but the convergence of the FPI method is only guaranteed for the case $\nu<\tfrac{1}{3}$. Furthermore, Yu et al. [46] put forward a modified fixed point iteration (MFPI) method by introducing a nonsingular parameter matrix $Q$, which guarantees convergence for solving the AVE (1) by selecting an appropriate matrix $Q$. In addition, Dong et al. [47] proposed a new SOR-like (NSOR) iteration method by rewriting the AVE (1) as a new two-by-two block nonlinear system, and the convergence conditions of the NSOR iteration method were proven from the perspective of the spectrum. In this paper, by reformulating the AVE (1) into an alternative two-by-two block nonlinear system, we propose an alternative SOR-like (ASOR-like) iteration method for solving the AVE (1) and prove its convergence from the viewpoints of the iteration error and of the spectrum, respectively. Furthermore, the selection of the optimal iteration parameter is also discussed. In addition, numerical experiments demonstrate the feasibility and effectiveness of the ASOR-like iteration method.
The layout of this paper is as follows. Section 2 reviews the mathematical notation and the lemmas used in the later proofs. Section 3 and Section 4 present the iterative scheme, the convergence conditions and the optimal iteration parameter selection of the ASOR-like iteration method. In Section 5, numerical experiments are conducted to demonstrate the effectiveness of the proposed method by comparing it with some existing algorithms. Finally, a brief conclusion is given in Section 6.
3. An Alternative SOR-like Iteration Method
In this section, we put forward an alternative two-by-two block nonlinear system (7) that is equivalent to the AVE (1), and derive from it, by a matrix splitting depending on a parameter $\omega>0$, the fixed point Equation (8). Based on (8), we establish the following matrix splitting iteration method to solve the AVE (1), called the alternative SOR-like (ASOR-like) iteration method. The algorithmic framework for this method is as follows.
Algorithm 2 (The ASOR-like iteration method). Let the matrix $A$ be nonsingular. Given two initial guesses $x^{(0)},y^{(0)}\in\mathbb{R}^{n}$, for $k=0,1,2,\ldots$ until the generated sequence is convergent, compute the iterates according to the scheme (9).
In the following, we present the main results of this paper. Theorems 1 and 2 are inspired by Theorem 3.1 in [42] and Theorem 2.1 in [44], respectively. Let $(x^{*},y^{*})$ be the solution pair of the nonlinear system (7); then the identities (10) and (11) hold. Let the vector pair $(x^{(k)},y^{(k)})$ be generated by (9), and define the iteration errors as in (12), namely $e_{k}^{x}=x^{*}-x^{(k)}$ and $e_{k}^{y}=y^{*}-y^{(k)}$. Then, the convergence results of the ASOR-like iteration method can be obtained as follows.
Theorem 1. Let the matrix $A$ be invertible. If the iteration parameter satisfies the stated condition, then the iteration errors of Algorithm 2 tend to zero as $k\to\infty$.

Proof. From (9), (10), (11), and (12), we obtain the error recursions (13) and (14). According to (13), (14) and Lemma 2, we get (15). Multiplying (15) from the left by the nonnegative matrix $P$ and using Lemma 2 again, it holds that (16). In the light of (16), the errors are contracted at every step, and under the stated condition they tend to zero. This completes the proof. □
Theorem 2. Let the matrix $A$ be invertible. If condition (17) holds, then the inequality (18) holds for the iteration errors.

Proof. According to the proof of Theorem 1, we get (19). Multiplying (19) from the left by the nonnegative matrix $Q$, we get (20), from which the bound (18) can be concluded. Next, we discuss the selection of the iteration parameter $\omega$ such that the contraction factor is less than one, so that the inequality (18) yields convergence. Because the involved matrix is symmetric positive semidefinite, its eigenvalues are nonnegative and can be bounded, from which we obtain a restriction on $\omega$; hence a sufficient condition for the convergence follows. From (20), the desired estimate holds provided (17), which completes the proof. □
Note that if the conditions of Theorem 2 are satisfied, then the iteration errors tend to zero. Therefore, the iteration sequence generated by (9) converges to the solution of the AVE (1).
In order to further study the existence of the parameter $\omega$ for solving the AVE (1), we analyze, from the perspective of the spectrum, the range and the optimal choice of $\omega$ under the convergence condition of Algorithm 2. To determine the spectrum of the iteration matrix, we consider the eigenvalue problem (21), where $\lambda$ is an arbitrary eigenvalue of the iteration matrix; this allows a good approximation of the optimal choice of $\omega$. We then focus on the eigenvalue Equation (22). The goal is to find the optimal parameter $\omega_{\mathrm{opt}}$ that minimizes the spectral radius of the iteration matrix of Algorithm 2. To this end, we need the following auxiliary lemmas.
Lemma 3 ([50]). Consider the quadratic equation $x^{2}-bx+c=0$, where $b$ and $c$ are real numbers. Both roots of the equation are less than one in modulus if and only if $|c|<1$ and $|b|<1+c$.
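The criterion of Lemma 3 is easy to verify numerically; the following sketch cross-checks it against a direct root computation on a grid of $(b,c)$ values (points exactly on the boundary of the criterion are skipped, since there the strict inequalities are decided by rounding error).

```python
import numpy as np

def roots_inside_unit_disk(b, c):
    # Direct check: do both roots of x^2 - b x + c = 0 have modulus < 1?
    return bool(np.all(np.abs(np.roots([1.0, -b, c])) < 1.0))

def lemma3(b, c):
    # Algebraic criterion of Lemma 3: |c| < 1 and |b| < 1 + c.
    return abs(c) < 1.0 and abs(b) < 1.0 + c

for b in np.linspace(-2.5, 2.5, 41):
    for c in np.linspace(-2.5, 2.5, 41):
        # Skip boundary points of the criterion (ties under floating point).
        if abs(abs(c) - 1.0) < 1e-9 or abs(abs(b) - (1.0 + c)) < 1e-9:
            continue
        assert roots_inside_unit_disk(b, c) == lemma3(b, c)
```

Since the criterion involves $|b|$, it is insensitive to the sign convention chosen for the linear term.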
Lemma 4. Under the stated assumptions on the eigenpair, there exists a nonzero vector satisfying the required identity for any matrix $B$.
Proof. The existence of such a vector follows directly from the assumptions. □
The following derivation is inspired by [51]. Notice that the absolute value term can be written by means of a diagonal matrix $D$. Without loss of generality, suppose the eigenvector is normalized. From (21), it holds that (22). By Lemma 4, there exists a vector with the required property; multiplying both sides of (22) from the left by this vector, we obtain the scalar quadratic Equation (23), whose roots are given by (24). According to Lemma 3, a sufficient condition for both roots of (23) to be less than one in modulus consists of the two inequalities (25) and (26). Condition (25) is easy to check and reduces to a simple restriction on $\omega$. Condition (26) seems harder to verify at first sight, so we discuss it further: a sufficient condition for (26) follows from an elementary estimate, and solving the resulting quadratic inequality in $\omega$ yields the admissible interval. In conclusion, if $\omega$ lies in this interval, the roots of (23) are strictly less than one in modulus.
Remark 1. It is well-known that $0<\omega<2$ is the admissible range of the parameter ω for the classical SOR iteration method and for the SOR-like iteration method in [42], and this is also the basic necessary condition for convergence. Considering the relationship between the convergence conditions of the ASOR-like method from the two perspectives, it is easy to check that (25) yields a condition which is sufficient for (20). This also shows that the convergence condition from the spectral perspective based on [51] is tighter than those from the norm perspective based on [42,44].

4. Optimal Parameter for the ASOR-like Iteration Method
In this section, we consider the choice of the iteration parameter $\omega$. According to (24), we get (27), from which we minimize the larger root modulus to approximately obtain condition (28). A direct case analysis of (28) shows that for one range of the parameters the admissible set of $\omega$ is nonempty, while the other case leads to a contradictory inequality and is therefore excluded.
In addition, in the remaining case, according to (28) we arrive at the quadratic Equation (29), which characterizes the candidates for the optimal parameter. After some simple algebraic operations, we obtain the existence of the optimal parameter in the corresponding range. The roots of (29) can be solved by the function roots in MATLAB to get the theoretical optimal parameter $\omega_{\mathrm{opt}}$.
In order to explore the characteristics of the roots of the quadratic Equation (29), we plot the contours of the two roots in Figure 1. In the relevant parameter ranges both roots are real, so the complex case is not considered. From Figure 1, the ordering of the two roots over the parameter region, and hence the choice between them, can be read off directly.
Now, we devote our attention to investigating the approximate optimal parameter $\omega_{\mathrm{aopt}}$. Replacing the minimized quantity by an upper bound, we find that the bound is strictly monotonically decreasing before its minimizer and strictly monotonically increasing after it. By simply plotting and analyzing the two functions involved, we derive the expression (32) for $\omega_{\mathrm{aopt}}$. This is the reason that $\omega_{\mathrm{aopt}}$ is found by minimizing the upper bound rather than the exact quantity.
Remark 2. Considering the range of values of ω obtained from the above convergence conditions, according to (27), (30), (31), and (32), we plot Figure 2. It is easy to see that the blue curve divides the green area into two parts; the top part corresponds to the condition for $\omega_{\mathrm{opt}}$, and the bottom part to the condition for $\omega_{\mathrm{aopt}}$. According to condition (27), the convergence conditions can be refined accordingly for the respective parameter ranges. Comparing $\omega_{\mathrm{opt}}$ and $\omega_{\mathrm{aopt}}$, the right panel of Figure 2 illustrates the gap between them.

5. Numerical Results
In this section, we present three numerical examples that compare the ASOR-like iteration method with existing algorithms, to illustrate its feasibility and effectiveness. The following six algorithms are tested.
SOR-like-exp method [42]: the iteration format is (3). We choose the experimental optimal parameter as the value with the smallest iteration count of the corresponding method over a prescribed parameter set (one set in Example 1, another in Examples 2 and 3).
ASOR-like-exp method: its iteration format is (9). The optimal parameter selection of the ASOR-like-exp method is consistent with that of the SOR-like-exp method.
SOR-like-opt method [44]: its iteration format is consistent with the SOR-like-exp method, where the theoretical optimal parameter follows (4). It can be calculated by the classical bisection method, with a termination criterion on the residual or on the updated ends of the bracketing interval; see [44] for the specific operations.
ASOR-like-opt method: its iteration format is consistent with the ASOR-like-exp method, and the theoretical optimal parameter is calculated in accordance with (31).
SOR-like-aopt method [44]: its iteration format is consistent with the SOR-like-exp method, where the approximate optimal parameter follows (5).
ASOR-like-aopt method: its iteration format is consistent with the ASOR-like-exp method, and the approximate optimal parameter is calculated in accordance with (32).
The numerical experiments are explained in several respects in the following. On the one hand, the choice of the parameters is particularly important, since it greatly affects the CPU time of the numerical experiments. On the other hand, in order to facilitate the comparison of the algorithms, we select three test problems that satisfy the unique-solvability property of the AVE (1).
All tests are conducted under MATLAB R2016a on a personal computer with a 1.19 GHz central processing unit (Intel(R) Core(TM) i5-1035U), 8.00 GB memory and the Windows 10 operating system. The report for each method includes the number of iteration steps (denoted by “IT”), the CPU time (denoted by “CPU”) and the relative residual error (denoted by “RES”). The iteration stops when RES falls below the prescribed tolerance or the prescribed maximal iteration number is exceeded (“–” is used in the following tables to indicate this case). All tests are started from the initial zero vector.
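A common definition of the relative residual for the AVE is $\mathrm{RES}=\|Ax^{(k)}-|x^{(k)}|-b\|_{2}/\|b\|_{2}$; we assume this is the quantity behind “RES”, and sketch it below (the tolerance used in the experiments is not reproduced here).

```python
import numpy as np

def ave_res(A, x, b):
    # Relative residual RES = ||A x - |x| - b||_2 / ||b||_2 for the AVE (1).
    # (Assumed form of the "RES" quantity reported in the tables.)
    return np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)

# At an exact solution the residual vanishes up to rounding.
A = np.array([[3.0, 1.0], [1.0, 3.0]])
x_star = np.array([1.0, -2.0])
b = A @ x_star - np.abs(x_star)
```

Normalizing by $\|b\|_{2}$ makes the stopping test scale-invariant, which is why the same tolerance can be used across problems of very different sizes.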
Example 1. Consider the random AVE (1) as in [16,44]. The influence of the condition number and the density of A on the tests is discussed during the numerical experiments. The condition-number parameter takes the values 1, 10, or a larger value, and the results are used to analyze the superiority of the ASOR-like method under the different optimal parameters. A known solution is prescribed, and b is generated accordingly. For Example 1, the information (the order n, the approximate density of A, and the associated norms) of the random AVE problems under the specific conditions obtained in the numerical experiments is shown in Table 1, Table 2 and Table 3.
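Since the paper's exact random generator is not reproduced above, the following Python sketch shows one way to build a test AVE with a prescribed 2-norm condition number and a known solution; all constants here are our own illustrative choices.

```python
import numpy as np

def random_ave(n, cond, seed=0):
    # Build A = U diag(s) V^T with prescribed condition number s_max/s_min,
    # scaled so that s_min > 1, i.e. ||A^{-1}||_2 < 1 (unique solvability
    # by Lemma 1).
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = 2.0 * np.geomspace(cond, 1.0, n)       # singular values in [2, 2*cond]
    A = U @ np.diag(s) @ V.T
    x_star = rng.standard_normal(n)            # prescribed exact solution
    b = A @ x_star - np.abs(x_star)            # right-hand side of the AVE
    return A, b, x_star

A, b, x_star = random_ave(200, cond=100.0)
```

Generating $b$ from a prescribed $x^{*}$ makes the exact solution available, so iteration errors (not just residuals) can be reported.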
From the numerical results displayed in Table 1, Table 2 and Table 3, we find that the “CPU” of the ASOR-like-opt and ASOR-like-aopt iteration methods is generally smaller than that of the SOR-like-opt and SOR-like-aopt iteration methods, although the ASOR-like-opt iteration method requires more iteration steps than the SOR-like-opt iteration method; with the approximate optimal parameter or the experimental optimal parameter, the two methods keep essentially the same iteration counts. In brief, the ASOR-like iteration method is superior to the SOR-like iteration method when an appropriate optimal parameter is chosen.
Example 2 ([24]). Consider the two-dimensional convection–diffusion equation on the unit square $\Omega$ with homogeneous Dirichlet conditions on its boundary $\partial\Omega$, where q is a nonnegative constant and p is a real number. Applying the five-point finite difference scheme to the diffusive terms and the central difference scheme to the convective terms, with equidistant step $h$ and the corresponding mesh Reynolds number, we acquire a system of linear equations whose coefficient matrix $R$ is assembled from two tridiagonal matrices and two identity matrices via the Kronecker product. For our numerical experiments, we define the matrix A in the AVE (1) by making use of the matrix R as follows. For any positive numbers p and q, the matrix R is nonsymmetric positive definite. When $p=q=0$, the matrix R is symmetric positive definite. We set A in terms of R and L, where L is the strictly lower triangular part of R. It is not hard to find that the matrix A is nonsymmetric positive definite. A known solution is prescribed, and b is generated accordingly. We present the numerical results for different values of m, p, q in Table 4 and Table 5.
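The Kronecker-product assembly described above can be sketched as follows. The stencil entries come from the standard central-difference discretization of $-\Delta u+p\,u_{x}+q\,u_{y}$ and are our assumption; the paper's exact scaling may differ.

```python
import numpy as np

def convection_diffusion_matrix(m, p, q):
    # Five-point central differences on an m-by-m interior grid of the unit
    # square: R = I (x) Tx + Ty (x) I, with tridiagonal Tx, Ty.
    h = 1.0 / (m + 1)                  # equidistant step
    rp, rq = p * h / 2.0, q * h / 2.0  # mesh Reynolds numbers
    def tridiag(lo, d, up):
        return (np.diag(np.full(m, d))
                + np.diag(np.full(m - 1, lo), -1)
                + np.diag(np.full(m - 1, up), 1))
    Tx = tridiag(-1.0 - rp, 2.0, -1.0 + rp)
    Ty = tridiag(-1.0 - rq, 2.0, -1.0 + rq)
    I = np.eye(m)
    return np.kron(I, Tx) + np.kron(Ty, I)

R = convection_diffusion_matrix(8, 1.0, 1.0)
```

For $p=q=0$ this construction reduces to the symmetric positive definite discrete Laplacian, matching the symmetric case mentioned above; for positive $p,q$ the symmetric part is unchanged, so $R$ stays positive definite while becoming nonsymmetric.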
From Table 4 and Table 5 we can see that all iteration methods successfully produce an approximation of the unique solution to the AVE (1) for appropriately selected problem scales and convective measurements q. In the cases of convergence to the unique solution of the AVE (1), the ASOR-like-opt and ASOR-like-aopt iteration methods are superior to the SOR-like-opt and SOR-like-aopt iteration methods in “CPU”, respectively, and the numerical results with the theoretical optimal parameters are much better than those with the experimental optimal parameters.
Example 3. Consider the AVE (1), where the sparse, symmetric matrix A comes from five different test problems in [42]. A known solution is prescribed, and b is generated accordingly. In Table 6, we present the numerical results of the ASOR-like iteration method together with those of the SOR-like iteration method, corresponding to these optimal parameters. Obviously, all iteration methods can compute an approximate solution of the problems in [42]. In particular, the ASOR-like-opt iteration method outperforms the SOR-like-opt iteration method for all small-scale full data matrix problems.