Abstract
We present a modified Picard-like method to solve absolute value equations by equivalently expressing the implicit fixed-point equation form of the absolute value equations as a two-by-two block nonlinear equation. This unifies some existing matrix splitting algorithms and improves the efficiency of the algorithm by introducing the parameter ω. For the choice of ω in the new method, we give a way to determine quasi-optimal values. Numerical examples are given to show the feasibility of the proposed method. It is also shown that the new method performs better than those proposed by Ke and Ma in 2017 and by Dehghan and Shirilord in 2020.
MSC:
65H10
1. Introduction
The absolute value equation (AVE)

$$Ax - B|x| = b, \qquad (1)$$

where $A, B \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^n$, and $|x| = (|x_1|, |x_2|, \ldots, |x_n|)^T$, was introduced by Rohn [1], and has attracted the attention of many scholars, such as Mangasarian [2,3], Mangasarian and Meyer [4], Noor et al. [5,6,7], and Ketabchi and Moosaei [8,9,10,11], because it can be used as an important tool in the optimization field, such as in the complementarity problem, linear programming, and convex quadratic programming [12]. When $B = I$, (1) becomes

$$Ax - |x| = b, \qquad (2)$$

which can be obtained from a class of ordinary differential equations [5].
Obviously, when B is the zero matrix, AVE (1) reduces to a linear system. In general, the matrix B in AVE (1) is supposed to be nonzero.
One of the important problems in the AVE is the existence and uniqueness of its solution. There are a variety of existence and uniqueness results; for instance, in [13], Rohn proved that if $\sigma_{\max}(B) < \sigma_{\min}(A)$ or, equivalently, $\|A^{-1}\|\|B\| < 1$ (in this paper, we consider $\|\cdot\| = \|\cdot\|_2$, the 2-norm), then AVE (1) has a unique solution for any $b \in \mathbb{R}^n$, where $\sigma_{\max}(\cdot)$ and $\sigma_{\min}(\cdot)$ denote the largest and smallest singular values of a matrix; in [14], Wu and Guo proved that if A can be expressed in a suitable split form in which M is a nonsingular M-matrix, then AVE (1) has a unique solution for any vector b; in addition, in [15], Wu and Li proved that AVE (1) has a unique solution for any vector b if and only if the matrix $A - BD$ is nonsingular for any diagonal matrix $D = \mathrm{diag}(d_i)$ with $d_i \in [-1, 1]$. We refer to [4,16] for other sufficient or necessary conditions for the existence and uniqueness results.
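As a quick, hedged illustration (the helper below is ours, not from the paper), the singular-value sufficient condition above can be checked numerically:

```python
import numpy as np

def unique_solvability_sufficient(A, B):
    """Check sigma_max(B) < sigma_min(A), a sufficient condition for
    AVE (1) to have a unique solution for every right-hand side b."""
    smin_A = np.linalg.svd(A, compute_uv=False)[-1]  # smallest singular value of A
    smax_B = np.linalg.svd(B, compute_uv=False)[0]   # largest singular value of B
    return smax_B < smin_A
```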
In this paper, we focus on another problem in the AVE (1), that is, how to solve the AVE (1). In fact, various numerical methods have been developed; for instance, Mangasarian, in [17], proposed a generalized Newton method for solving the AVE, which generates a sequence formally stated as

$$x^{(k+1)} = \bigl(A - D(x^{(k)})\bigr)^{-1} b, \qquad k = 0, 1, 2, \ldots,$$

where the diagonal matrix $D(x) = \mathrm{diag}(\mathrm{sign}(x))$. It was proved to be convergent under certain conditions; Lian et al., in [18], further studied the generalized Newton method and obtained weaker convergence results; Wang et al., in [19], proposed some modified Newton-type iteration methods for generalized absolute value equations. Since the convergence requirements of Newton's method are strict, Edalatpour et al., in [20], proposed a generalization of the Gauss–Seidel iteration method; in addition, Ke and Ma, in [21], proposed an SOR-like iteration method by equivalently reformulating the AVE (1) as the two-by-two block nonlinear equation

$$\begin{pmatrix} A & -B \\ -D(x) & I \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix}, \qquad (3)$$

where $y = |x|$.
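As a hedged illustration of the generalized Newton step quoted above (a minimal dense-algebra sketch for the case B = I; the function name and defaults are ours):

```python
import numpy as np

def generalized_newton(A, b, x0=None, tol=1e-8, max_iter=100):
    """Mangasarian's generalized Newton iteration for Ax - |x| = b:
    x_{k+1} = (A - D(x_k))^{-1} b with D(x) = diag(sign(x))."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    for k in range(max_iter):
        D = np.diag(np.sign(x))              # generalized Jacobian of |x|
        x_new = np.linalg.solve(A - D, b)    # one Newton step
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```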
Guo, Wu, and Li, in [22], presented some new convergence conditions derived from the iteration matrix of the SOR-like iteration method in [21]; also based on (3), Li and Wu, in [23], improved the SOR-like iteration method proposed by Ke and Ma in [21] and obtained a modified SOR-like iteration method; Ali et al., in [24], proposed two modified generalized Gauss–Seidel (MGGS) iteration techniques for solving the AVE (1). To accelerate the convergence, in [25], Salkuyeh proposed the Picard-HSS iteration method for the AVE; Zheng extended this method to the Picard-HSS-SOR method in [26]; in [27], Lv and Ma proposed Picard methods for solving the AVE by combining matrix splitting iteration algorithms, such as the Jacobi, SSOR, or SAOR; Dehghan and Shirilord, in [28], proposed the following matrix multisplitting Picard-iterative method (PIM), see Algorithm 1, under the condition (15) stated in Section 2.3. In addition, for the case where A is an M-matrix, Ali et al., in [29], presented two new generalized Gauss–Seidel iteration methods; finally, in [30], Yu et al. proposed an inverse-free dynamical system to solve AVE (1). We also refer to [8,9,10,14,25,31,32,33,34,35] for other methods of finding the solution to the AVE.
| Algorithm 1: Picard Iteration Method (PIM). |

Suppose that $x^{(0)}$ is an initial guess for the solution of (3), $\varepsilon$ is the target accuracy, and $A = M_i - N_i$ ($i = 1, 2, \ldots, p$, where $p$ is a positive integer) is a splitting of the matrix $A$ for which $M_i^{-1}$ exists. We compute $x^{(k+1)}$ by using the following iteration:
1. Input $x^{(0)}$;
2. Solve the inner splitting equations to obtain $x^{(k+1)}$, where the number of inner steps is a positive integer;
3. Set $k := k + 1$;
4. End the iteration when $\mathrm{Re}(k) \le \varepsilon$;
5. Output $x^{(k+1)}$.
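A minimal runnable sketch of this outer-inner structure follows; the naming and the single Jacobi-style splitting are our assumptions (the paper allows p splittings used in a multisplitting fashion):

```python
import numpy as np

def picard_avem(A, B, b, x0=None, tol=1e-8, max_outer=500, inner_steps=5):
    """Hedged sketch of a Picard-type iteration for Ax - B|x| = b.

    Outer step: solve A x_{k+1} = B|x_k| + b, approximated here by
    `inner_steps` sweeps of a single splitting A = M - N with M the
    diagonal part of A (a Jacobi-style choice for illustration)."""
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    M = np.diag(np.diag(A))          # splitting: A = M - N
    N = M - A
    for k in range(max_outer):
        rhs = B @ np.abs(x) + b      # freeze the nonlinearity at x_k
        z = x.copy()
        for _ in range(inner_steps): # inexact inner solve M z = N z + rhs
            z = np.linalg.solve(M, N @ z + rhs)
        x = z
        res = np.linalg.norm(A @ x - B @ np.abs(x) - b)
        if res <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_outer
```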
The main contributions of this paper are listed as follows.
(1) Based on a splitting of the two-by-two block coefficient matrix, the Modified Picard-Like Iteration Method (MPIM) is proposed for solving AVE (1). It unifies some existing matrix splitting algorithms, such as the algorithms in [21,23,24,25,26,28];
(2) Compared with Algorithm 1, we introduce a parameter, ω, to accelerate the convergence of the proposed MPIM method. The quasi-optimal values of ω for the MPIM method are also given for various cases.
The present paper proceeds as follows. In Section 2, based on the equivalent form (3), we present a general matrix splitting method for solving the AVE (1). It unifies some existing matrix splitting methods. Furthermore, by combining this with the Picard technique, we propose an MPIM method. In Section 3, some numerical examples are given to show the feasibility and efficiency of the new methods.
2. Modified Picard-like Iteration Method
In this section, based on (3), we first give a general matrix splitting iteration method. By combining this with the Picard technique, we propose a Modified Picard-Like iteration method named MPIM. We also discuss the choice of the quasi-optimal parameter ω of the MPIM.
2.1. General Matrix Splitting Iteration
As stated in Section 1, by letting $y = |x|$, AVE (1) is equivalent to the form (3). Note that the matrix form of (3) can be written as $\mathcal{A}z = \bar{b}$, where

$$\mathcal{A} = \begin{pmatrix} A & -B \\ -D(x) & I \end{pmatrix}, \qquad z = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad \bar{b} = \begin{pmatrix} b \\ 0 \end{pmatrix},$$

and $D(x) = \mathrm{diag}(\mathrm{sign}(x))$. Hence, if we let $A = M - N$, where $M^{-1}$ exists, then $\mathcal{A} = \mathcal{M} - \mathcal{N}$ for the corresponding block splitting. Furthermore, the form (3) can be reformulated as the fixed-point equation

$$z = \mathcal{M}^{-1}(\mathcal{N}z + \bar{b}).$$
This provides an iteration method to solve AVE (1) as below, see Algorithm 2.
| Algorithm 2: General Splitting Iteration Method (GSIM). |

Suppose that $(x^{(0)}, y^{(0)})$ is an initial guess for the solution of (3), $\varepsilon$ is the target accuracy, $\omega > 0$, and $A = M - N$ is a splitting of the matrix $A$. We compute $(x^{(k+1)}, y^{(k+1)})$ by using the following iteration:
1. Input $(x^{(0)}, y^{(0)})$.
2. Solve the splitting equations to obtain $(x^{(k+1)}, y^{(k+1)})$.
3. End the iteration when $\mathrm{Re}(k) \le \varepsilon$.
4. Output $(x^{(k+1)}, y^{(k+1)})$.
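For concreteness, here is a minimal runnable sketch of the best-known special case of the GSIM, the SOR-like iteration of Ke and Ma [21] for the case B = I; the function name and defaults are ours:

```python
import numpy as np

def sor_like(A, b, omega=1.0, tol=1e-8, max_iter=5000):
    """SOR-like iteration of Ke and Ma [21] for Ax - |x| = b:
        x_{k+1} = (1 - omega) x_k + omega * A^{-1}(y_k + b)
        y_{k+1} = (1 - omega) y_k + omega * |x_{k+1}|
    """
    n = len(b)
    x, y = np.zeros(n), np.zeros(n)
    for k in range(max_iter):
        x = (1 - omega) * x + omega * np.linalg.solve(A, y + b)
        y = (1 - omega) * y + omega * np.abs(x)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter
```

With ω = 1 this reduces to a plain alternation between the linear solve and the absolute-value update.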
Remark 1.
The GSIM unifies some existing matrix splitting algorithms for the AVE. For example, for one choice of M, N, and ω, it becomes the SOR-like method (SOR) proposed by Ke and Ma in [21]; for another choice, it becomes the Modified SOR-like method (M-SOR) proposed by Li and Wu in [23]; a third choice yields the MGGS of Ali et al. in [24]; choosing M to be the diagonal part of A yields a new algorithm named the Modified Jacobi iteration method; and an SSOR-type splitting yields another new algorithm named the Modified SSOR-Like method (M-SSOR). It should be pointed out here that, by choosing other splittings of the matrix A, more methods can be extracted.
Next, a theoretical analysis of the convergence of the GSIM is performed based on the following result, which can be found in [23].
Lemma 1
([23]). Let λ be any root of the real quadratic equation $\lambda^2 - b\lambda + c = 0$ with $b, c \in \mathbb{R}$. Then, $|\lambda| < 1$ if and only if $|c| < 1$ and $|b| < 1 + c$.
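A quick numerical sanity check of Lemma 1 as reconstructed above (the sampling scheme is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    c = rng.uniform(-1, 1)             # |c| < 1
    b = rng.uniform(-(1 + c), 1 + c)   # |b| < 1 + c
    roots = np.roots([1.0, -b, c])     # roots of lambda^2 - b*lambda + c
    assert np.all(np.abs(roots) < 1.0)
print("Lemma 1 condition verified on random samples.")
```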
Theorem 1.
Let A be nonsingular, let B, M, N, and ω be defined as in Algorithm 2, and adopt the notation used below.
If
then
where
Proof.
This implies
and
Thus,
Let
Then, the spectral radius $\rho(T) < 1$. In fact, assume that λ is an eigenvalue of the matrix T; then,
which is equal to
By applying Lemma 1 to (10), $|\lambda| < 1$ is equivalent to
Therefore, $\rho(T) < 1$; consequently,
The proof is complete. □
Theorem 1 tells us that the iteration sequence produced by the GSIM converges to the solution of (3) under the stated conditions. This also holds true for all the methods in Remark 1.
2.2. Modified Picard-like Iteration Method
Remark 2.
The MPIM unifies some existing Picard-like methods for the AVE. For example, choosing the splittings $M_i - N_i$ in SSOR form with a positive constant γ reduces this method to the Picard-SSOR method in [28]; an SAOR-type splitting with positive constants γ and μ gives the Picard-SAOR method in [28]; the Hermitian/skew-Hermitian choice $M_1 = \gamma I + H$, $N_1 = \gamma I - S$, $M_2 = \gamma I + S$, $N_2 = \gamma I - H$, where $H = (A + A^*)/2$, $S = (A - A^*)/2$, and γ is a positive constant, gives the Picard-HSS method in [25]; combining this choice with an SOR step gives the Picard-HSS-SOR method in [26]. Obviously, by choosing other splittings of the matrix A, more methods can also be extracted.
Now, we consider the convergence of the MPIM. In fact, similarly to the computation in [28], Equation (20) in Step 2 of Algorithm 3 can be rewritten as
where . Thus, the corresponding iteration matrix for the MPIM is
We use the notations presented in Section 2.1, and set , , , and . Obviously, as with the GSIM, the MPIM is convergent if
2.3. Range of ω
As stated in [28], the PIM may not be convergent if the matrices A and B do not satisfy
Next, we analyze in detail how to select ω such that the MPIM converges even when (15) does not hold.
Theorem 2.
Suppose
if, then (14) holds, i.e., the MPIM is convergent.
Proof.
Note that
implies
and
Hence,
This means
since . Furthermore, because
we obtain
i.e.,
hence
Theorem 3.
Suppose
if, then (14) holds, i.e., the MPIM is convergent.
Proof.
This means
Furthermore, since , we obtain ; hence,
which implies
2.4. Quasi-Optimal Value of ω
It can be seen from Section 2.3 that, for a given ω, we can always adopt a specific splitting form to make the algorithm converge. Next, we consider how to select ω to make the algorithm converge faster when the splitting form of the matrix has been given. For the MPIM, the iteration matrix
Then,
We denote
Then,
Apparently, both bounds involve ω and the norms introduced above. It is well known that the smaller the spectral radius of the iteration matrix is, the faster the MPIM converges. However, the spectral radius is not easy to determine in general. So, we next discuss minimizing an upper bound on it instead, to provide some practical choices for ω.
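When no closed-form value is at hand, a coarse grid scan over ω is a practical stand-in; this hedged sketch reuses the sor_like function from Section 2.1 purely for illustration:

```python
import numpy as np

def scan_omega(A, b, grid=None):
    """Empirically pick the omega with the fewest iterations on a grid;
    a pragmatic stand-in for the quasi-optimal analysis below."""
    if grid is None:
        grid = np.linspace(0.1, 1.9, 19)
    best_w, best_iters = None, np.inf
    for w in grid:
        _, iters = sor_like(A, b, omega=w)  # sketch from Section 2.1
        if iters < best_iters:
            best_w, best_iters = w, iters
    return best_w, best_iters
```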
Theorem 4.
Suppose that is given with
then,
where .
Proof.
Then, the axis of symmetry of the parabola is
We notice that
This leads to
i.e.,
since the axis of symmetry of the parabola is . The proof is complete. □
Proof of Theorem 5.
From (17), we obtain
Since , we have
i.e.,
(I) Since
then
Note that
is equal to
which means
i.e., . So,
then,
(II) From (I), we know when
we have , i.e.
Thus,
Similar to the proof of Theorem 4, we obtain
The proof is complete. □
It can be seen that, under the condition of Theorem 4, we can choose an appropriate parameter ω to accelerate the convergence of the algorithm, even though the PIM does not converge, as shown in Section 3.2. Based on Theorem 5 (II), it can be seen that when its condition holds, the bound becomes a linear function of ω, and the rate of convergence is almost completely determined by the splitting. This means that, under this condition, the splitting plays a more important role than ω, and changing the value of ω will not cause great changes in the convergence, as shown in Section 3.2. Based on Theorem 5 (I), it can also be seen that when its condition holds, we can choose an appropriate parameter ω to accelerate the convergence of the algorithm, as shown in Section 3.2.
3. Numerical Examples
In this section, we demonstrate the performance of Algorithms 1–3 with numerical examples.
The numerical experiments were performed using MATLAB on an Intel(R) Pentium(R) CPU N3700 (1.60 GHz, 4 GB RAM). The zero vector was used as the initial guess, and the iterations were terminated when the current iterate satisfied

$$\mathrm{Re}(k) := \frac{\|Ax^{(k)} - B|x^{(k)}| - b\|_2}{\|b\|_2} \le \varepsilon,$$

where $x^{(k)}$ is the k-th iterate for the PIM, GSIM, and MPIM algorithms. For all mesh sizes m and n, we set the same tolerance ε and the same maximum number of iterations in all the methods. Moreover, we took the vector b such that the exact solution $x^*$ is known in advance. For the matrices A and B, we took
in the GSIM algorithm, with tridiagonal matrices S and T (entries as in the standard benchmark; see the sketch below), where m and n are positive integers, ⊗ is the Kronecker product, and $n = m^2$. In addition, we took the same matrices A and B in the PIM and MPIM algorithms to compare the efficiency of these methods.
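Since the exact stencil entries did not survive extraction, the following hedged sketch builds the Kronecker-product test matrix in the form that is standard for this benchmark family; the names S, T and the center value 8 are our assumptions (following, e.g., [21]):

```python
import numpy as np

def tridiag(m, lo, di, up):
    """Dense m-by-m tridiagonal matrix with constant diagonals."""
    return (np.diag(np.full(m - 1, lo), -1)
            + np.diag(np.full(m, di))
            + np.diag(np.full(m - 1, up), 1))

def test_matrix(m, center=8.0):
    """Block-tridiagonal Kronecker-product test matrix of order n = m*m:
    A = I_m (x) S + T (x) I_m, with S = tridiag(-1, center, -1) and
    T = tridiag(-1, 0, -1), i.e., A = tridiag(-I, S, -I) blockwise.
    `center` = 8 is an assumed stencil value, not taken from the paper."""
    I = np.eye(m)
    S = tridiag(m, -1.0, center, -1.0)
    T = tridiag(m, -1.0, 0.0, -1.0)
    return np.kron(I, S) + np.kron(T, I)
```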
| Algorithm 3: Modified Picard-Like Iteration Method (MPIM). |

Suppose that $x^{(0)}$ is an initial guess for the solution of (3), $\varepsilon$ is the target accuracy, $\omega > 0$, and $A = M_i - N_i$ ($i = 1, 2, \ldots, p$) is a splitting of the matrix $A$ for which $M_i^{-1}$ exists. We compute $x^{(k+1)}$ by using the following iteration:
1. Input $x^{(0)}$.
2. Solve the following equations to obtain $x^{(k+1)}$.
3. Set $y^{(k+1)} = |x^{(k+1)}|$.
4. Set $k := k + 1$.
5. End the iteration when $\mathrm{Re}(k) \le \varepsilon$.
6. Output $x^{(k+1)}$.
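Because the update formulas in Algorithm 3 did not survive extraction, the sketch below shows only a plausible outer structure, an ω-damped Picard step around an inexact inner splitting solve; this is our assumption for illustration, not the paper's exact scheme:

```python
import numpy as np

def mpim_sketch(A, B, b, omega=1.0, tol=1e-8, max_outer=500, inner_steps=5):
    """Hedged guess at the MPIM outer loop: damp the Picard update by omega,
    x_{k+1} = (1 - omega) x_k + omega * (approximate solve of A z = B|x_k| + b)."""
    n = len(b)
    x = np.zeros(n)
    M = np.diag(np.diag(A))       # one inner splitting A = M - N (Jacobi-style)
    N = M - A
    for k in range(max_outer):
        rhs = B @ np.abs(x) + b
        z = x.copy()
        for _ in range(inner_steps):
            z = np.linalg.solve(M, N @ z + rhs)
        x = (1 - omega) * x + omega * z   # omega-damped outer update
        if np.linalg.norm(A @ x - B @ np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_outer
```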
3.1. An Application
We solved the following ordinary differential equations:
to show that our method is feasible. Similarly to Example 3.1 in [5], we discretize the above equation using the finite difference method to obtain (2), where A of size 10 is given by:
and the exact solution is
We compared the SOR algorithm, the M-SOR, and the MPIM (each with its respective parameters) on the above AVE problem; they required 72, 2156, and 18 iterations, respectively; see Figure 1.
Figure 1.
Number of iterations.
3.2. General Splitting Iteration Method
We solved (2), where A and B are as mentioned above, by Algorithms 2 and 3 with the following special methods: the SOR-like (SOR), the Modified SOR-Like (M-SOR), the Modified SSOR-Like (M-SSOR), the Modified Picard-SSOR (MP-SSOR), and the MGGS, each with its respective parameters; see Remarks 1 and 2.
From Table 1, we can see that the M-SSOR and MP-SSOR proposed in this paper performed better than the previous algorithms [21,23,24]. Note that, for all the iterative methods, we did not choose the best parameters to further optimize the algorithms (in fact, the splitting parameter of A can be chosen differently to speed up the algorithm, as discussed in Remark 2).
Table 1.
Comparison of the iteration number and the CPU time in (2).
3.3. Modified Picard-like Iteration Method
Moreover, we solved (1) by the MPIM and PIM with a special splitting (denoted as splitting 1). Since the condition of Theorem 4 holds for this splitting, we chose ω accordingly. Some numerical results, such as the number of iterations, the parameter ω, the CPU time needed to run the iterative methods, and the residual error Re(k), are reported in Table 2.
Table 2.
Comparison of the iteration number and the CPU time for splitting 1.
From Table 2, we can see that with the increase in i, i.e., the increase in the maximum singular value of B, the Picard method gradually tended to be non-convergent for the given splitting form. However, with ω chosen as above, the algorithm was always convergent. This is consistent with the analysis of Theorem 4. The spectral radius of the iteration matrix is recorded in Figure 2.
Figure 2.
The spectral radius of the iteration matrix for splitting 1.
In addition, we changed the splitting of A. For the PIM and MPIM, we adopted a second splitting (denoted as splitting 2). Since the condition of Theorem 5 (II) holds for this splitting, we chose ω accordingly.
It can be seen from Table 3 that, when the splitting form was modified, the MPIM was still better than the PIM. We selected several values of ω and computed the corresponding spectral radii, as shown in Figure 3.
Table 3.
Comparison of the iteration number and the CPU time for splitting 2.
Figure 3.
The spectral radius of the iteration matrix for splitting 2.
Now, for the MPIM and the PIM, we chose a third splitting (denoted as splitting 3). We selected several values of ω and computed the corresponding spectral radii, as shown in Figure 4. Since the condition of Theorem 5 (I) holds for this splitting, we chose ω accordingly. We then see that, when the relevant splitting quantity was equal to 1, the difference between the spectral radius values corresponding to different ω was very small.
Figure 4.
The spectral radius of the iteration matrix for splitting 3.
Finally, we used the PIM algorithm, the MPIM algorithm, the SOR algorithm, the M-SSOR algorithm, and the M-SOR algorithm, each with its respective parameters, to solve AVE (1), as shown in Figure 5.
Figure 5.
The rate of convergence.
The proposed MPIM and Modified SSOR algorithms were significantly better than the methods in the related literature [21,23,24,28].
4. Conclusions
In this paper, we solved AVE (1) by equivalently expressing the implicit fixed-point equation form of the absolute value equations as a two-by-two block nonlinear equation and then proposed an MPIM method for solving it. We also proved the convergence of the MPIM method under suitable choices of the involved parameters and presented a choice of quasi-optimal parameters. Finally, we performed some numerical experiments to demonstrate that the MPIM method is feasible and effective. However, the convergence analysis and the choice of the optimal parameters of the whole iteration method remain unresolved and will be considered in future work.
Author Contributions
Writing—original draft, Y.L.; Writing—review & editing, C.L. All authors have read and agreed to the published version of the manuscript.
Funding
Yunnan Provincial Ten Thousands Plan Young Top Talents.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Rohn, J. A theorem of the alternatives for the equation Ax+B|x|=b. Linear Multilinear Algebra 2004, 52, 421–426. [Google Scholar] [CrossRef]
- Mangasarian, O.L. Absolute value equation solution via concave minimization. Optim. Lett. 2007, 1, 3–8. [Google Scholar] [CrossRef]
- Mangasarian, O.L. A hybrid algorithm for solving the absolute value equation. Optim. Lett. 2015, 9, 1469–1474. [Google Scholar] [CrossRef]
- Mangasarian, O.L.; Meyer, R.R. Absolute value equations. Linear Algebra Appl. 2006, 419, 359–367. [Google Scholar] [CrossRef]
- Noor, M.A.; Iqbal, J.; Al-Said, E. Residual iterative method for solving absolute value equations. Abstr. Appl. Anal. 2012, 2012, 406232. [Google Scholar] [CrossRef]
- Noor, M.A.; Iqbal, J.; Noor, K.I.; Al-Said, E. On an iterative method for solving absolute value equations. Optim. Lett. 2012, 6, 1027–1033. [Google Scholar] [CrossRef]
- Noor, M.A.; Iqbal, J.; Noor, K.I.; Al-Said, E. Generalized AOR method for solving absolute complementarity problems. J. Appl. Math. 2012, 2012, 743861. [Google Scholar] [CrossRef]
- Ketabchi, S.; Moosaei, H. An efficient method for optimal correcting of absolute value equations by minimal changes in the right hand side. Comput. Math. Appl. 2012, 64, 1882–1885. [Google Scholar] [CrossRef]
- Ketabchi, S.; Moosaei, H. Minimum norm solution to the absolute value equation in the convex case. J. Optim. Theory Appl. 2012, 154, 1080–1087. [Google Scholar] [CrossRef]
- Ketabchi, S.; Moosaei, H.; Fallhi, S. Optimal error correction of the absolute value equations using a genetic algorithm. Math. Comput. Model. 2013, 57, 2339–2342. [Google Scholar] [CrossRef]
- Moosaei, H.; Ketabchi, S.; Noor, M.A.; Iqbal, J.; Hooshyarbakhsh, V. Some techniques for solving absolute value equations. Appl. Math. Comput. 2015, 268, 696–705. [Google Scholar] [CrossRef]
- Cottle, R.W.; Pang, J.S.; Stone, R.E. The Linear Complementarity Problem; Academic Press: New York, NY, USA, 1992. [Google Scholar]
- Rohn, J. On unique solvability of the absolute value equation. Optim. Lett. 2009, 3, 603–606. [Google Scholar] [CrossRef]
- Wu, S.L.; Guo, P. On the unique solvability of the absolute value equation. J. Optim. Theory Appl. 2016, 169, 705–712. [Google Scholar] [CrossRef]
- Wu, S.L.; Li, C.X. The unique solution of the absolute value equations. Appl. Math. Lett. 2018, 76, 195–200. [Google Scholar] [CrossRef]
- Li, C.X.; Wu, S.L. A note on unique solvability of the absolute value equation. Optim. Lett. 2020, 14, 1957–1960. [Google Scholar]
- Mangasarian, O.L. A generalized Newton method for absolute value equations. Optim. Lett. 2009, 3, 101–108. [Google Scholar] [CrossRef]
- Lian, Y.Y.; Li, C.X.; Wu, S.L. Weaker convergent results of the generalized Newton method for the generalized absolute value equations. J. Comput. Appl. Math. 2018, 338, 221–226. [Google Scholar] [CrossRef]
- Wang, A.; Cao, Y.; Chen, J.X. Modified Newton-type iteration methods for generalized absolute value equations. J. Optim. Theory Appl. 2019, 181, 216–230. [Google Scholar] [CrossRef]
- Edalatpour, V.; Hezari, D.; Salkuyeh, D.K. A generalization of the Gauss-Seidel iteration method for solving absolute value equations. Appl. Math. Comput. 2017, 293, 156–167. [Google Scholar] [CrossRef]
- Ke, Y.F.; Ma, C.F. SOR-like iteration method for solving absolute value equations. Appl. Math. Comput. 2017, 311, 195–202. [Google Scholar] [CrossRef]
- Guo, P.; Wu, S.L.; Li, C.X. On the SOR-like iteration method for solving absolute value equations. Appl. Math. Lett. 2019, 97, 107–113. [Google Scholar] [CrossRef]
- Li, C.X.; Wu, S.L. Modified SOR-like iteration method for absolute value equations. Math. Probl. Eng. 2020, 2020, 9231639. [Google Scholar]
- Ali, R.; Ahmad, A.; Ahmad, I.; Ali, A. The modification of the generalized Gauss-Seidel iteration techniques for absolute value equations. Comput. Algor. Numer. Dimen. 2022, 1, 130–136. [Google Scholar]
- Salkuyeh, D.K. The Picard-HSS iteration method for absolute value equations. Optim. Lett. 2014, 8, 2191–2202. [Google Scholar] [CrossRef]
- Zheng, L. The Picard-HSS-SOR iteration method for absolute value equations. J. Inequal. Appl. 2020, 2020, 258. [Google Scholar] [CrossRef]
- Lv, C.Q.; Ma, C.F. Picard splitting method and Picard CG method for solving the absolute value equation. J. Nonlinear Sci. Appl. 2017, 10, 3643–3654. [Google Scholar] [CrossRef]
- Dehghan, M.; Shirilord, A. Matrix multisplitting Picard-iterative method for solving generalized absolute value matrix equation. Appl. Numer. Math. 2020, 158, 425–438. [Google Scholar] [CrossRef]
- Ali, R.; Khan, I.; Ali, M.A.A. Two new generalized iteration methods for solving absolute value equations using M-matrix. AIMS Math. 2022, 7, 8176. [Google Scholar] [CrossRef]
- Yu, D.; Chen, C.; Yang, Y.; Han, D. An inertial inverse-free dynamical system for solving absolute value equations. J. Ind. Manag. Optim. 2023, 19, 2549–2559. [Google Scholar] [CrossRef]
- Hu, S.L.; Huang, Z.H.; Zhang, Q. A generalized Newton method for absolute value equations associated with second order cones. J. Comput. Appl. Math. 2011, 235, 1490–1501. [Google Scholar] [CrossRef]
- Iqbal, J.; Arif, M. Symmetric SOR method for absolute complementarity problems. J. Appl. Math. 2013, 2013, 172060. [Google Scholar] [CrossRef]
- Li, C.X. A modified generalized Newton method for absolute value equations. J. Optim. Theory Appl. 2016, 170, 1055–1059. [Google Scholar] [CrossRef]
- Wang, H.; Liu, H.; Cao, S. A verification method for enclosing solutions of absolute value equations. Collect. Math. 2013, 64, 17–18. [Google Scholar] [CrossRef]
- Cruz, J.Y.B.; Ferreira, O.P.; Prudente, L.F. On the global convergence of the inexact semi-smooth Newton method for absolute value equation. Comput. Optim. Appl. 2016, 65, 93–108. [Google Scholar] [CrossRef]