Article

Gauss Quadrature Method for System of Absolute Value Equations

Lei Shi, Javed Iqbal, Faiqa Riaz and Muhammad Arif
1 School of Mathematics and Statistics, Anyang Normal University, Anyang 455002, China
2 Department of Mathematics, Abdul Wali Khan University Mardan, Mardan 23200, Pakistan
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2069; https://doi.org/10.3390/math11092069
Submission received: 13 March 2023 / Revised: 21 April 2023 / Accepted: 25 April 2023 / Published: 27 April 2023
(This article belongs to the Special Issue Theory and Applications of Numerical Analysis)

Abstract: In this paper, an iterative method is considered for solving the absolute value equation (AVE). We suggest a two-step method in which the well-known Gauss quadrature rule is the corrector step and the generalized Newton method is the predictor step. The convergence of the proposed method is established under certain suitable conditions. Numerical examples demonstrate the consistency and effectiveness of the new method.

1. Introduction

Consider the AVE of the form:
$$Ax - |x| = b, \tag{1}$$
where $A \in \mathbb{R}^{n \times n}$, $x, b \in \mathbb{R}^n$, and $|x|$ denotes the vector in $\mathbb{R}^n$ whose components are $|x_l|$, $l = 1, 2, \ldots, n$. AVE (1) is a particular case of
$$Ax + B|x| = b \tag{2}$$
and was introduced by Rohn [1]. AVE (1) arises in linear and quadratic programming, network equilibrium problems, complementarity problems, and economies with institutional restrictions on prices. Recently, several iterative methods have been investigated for finding approximate solutions of (1). For instance, Khan et al. [2] proposed a new technique based on Simpson's rule for solving the AVE. Feng et al. [3,4] considered certain two-step iterative techniques for the AVE. Shi et al. [5] proposed a two-step Newton-type method for solving the AVE and discussed its linear convergence. Noor et al. [6,7] studied the solution of the AVE using minimization techniques and proved the convergence of those techniques. The Gauss quadrature rule is a powerful technique for evaluating integrals; in [8], it was used to solve systems of nonlinear equations. For other interesting methods for solving the AVE, the interested reader may refer to [9,10,11,12,13,14,15,16,17,18] for details.
The notations are defined in the following. For $x \in \mathbb{R}^n$, $\|x\|$ denotes the two-norm $(x^T x)^{1/2}$. Let $\mathrm{sign}(x)$ be the vector with entries $0, \pm 1$ according as the corresponding entries of $x$ are zero, positive, or negative, and let $\mathrm{diag}(\mathrm{sign}(x))$ be the associated diagonal matrix. A generalized Jacobian $\sigma|x|$ of $|x|$ is given by
$$D(x) = \sigma|x| = \mathrm{diag}(\mathrm{sign}(x)). \tag{3}$$
For $A \in \mathbb{R}^{n \times n}$, $\mathrm{svd}(A)$ will represent the $n$ singular values of $A$.
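As a quick illustration of this notation (our own example, using NumPy conventions; it is not part of the original paper), note that $D(x)x = |x|$:
```python
import numpy as np

x = np.array([-2.0, 0.0, 5.0])
D = np.diag(np.sign(x))     # D(x) = diag(sign(x)) = diag(-1, 0, 1)
two_norm = np.sqrt(x @ x)   # ||x|| = (x^T x)^(1/2)
print(D @ x)                # D(x) x = |x| = [2, 0, 5]
```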
In the present paper, the Gauss quadrature rule is combined with the generalized Newton method to solve (1). Under the condition that $\|A^{-1}\| < \frac{1}{7}$, we establish the convergence of the proposed method. A few numerical examples are given to demonstrate its performance.

2. Gauss Quadrature Method

Consider
$$g(x) = Ax - |x| - b. \tag{4}$$
A generalized Jacobian of $g$ is given by
$$g'(x) = \sigma(g(x)) = A - D(x), \tag{5}$$
where $D(x) = \mathrm{diag}(\mathrm{sign}(x))$, as defined in (3). Let $\zeta$ be a solution of (1). The two-point Gauss quadrature rule gives
$$\int_{x_k}^{\zeta} g'(t)\,dt = \frac{\zeta - x_k}{2}\left[g'\!\left(\frac{\zeta + x_k}{2} + \frac{1}{\sqrt{3}}\,\frac{\zeta - x_k}{2}\right) + g'\!\left(\frac{\zeta + x_k}{2} - \frac{1}{\sqrt{3}}\,\frac{\zeta - x_k}{2}\right)\right]. \tag{6}$$
Now, by the fundamental theorem of calculus, we have
$$\int_{x_k}^{\zeta} g'(t)\,dt = g(\zeta) - g(x_k). \tag{7}$$
As $\zeta$ is a solution of (1), that is, $g(\zeta) = 0$, (7) can be written as
$$\int_{x_k}^{\zeta} g'(t)\,dt = -g(x_k). \tag{8}$$
From (6) and (8), we obtain
$$\frac{\zeta - x_k}{2}\left[g'\!\left(\frac{\zeta + x_k}{2} + \frac{1}{\sqrt{3}}\,\frac{\zeta - x_k}{2}\right) + g'\!\left(\frac{\zeta + x_k}{2} - \frac{1}{\sqrt{3}}\,\frac{\zeta - x_k}{2}\right)\right] = -g(x_k). \tag{9}$$
Thus,
$$\zeta = x_k - 2\left[g'\!\left(\frac{\zeta + x_k}{2} + \frac{1}{\sqrt{3}}\,\frac{\zeta - x_k}{2}\right) + g'\!\left(\frac{\zeta + x_k}{2} - \frac{1}{\sqrt{3}}\,\frac{\zeta - x_k}{2}\right)\right]^{-1} g(x_k). \tag{10}$$
From the above, the Gauss quadrature method (GQM) can be written as follows (Algorithm 1):
Algorithm 1: Gauss Quadrature Method (GQM)
1: Select $x_0 \in \mathbb{R}^n$.
2: For $k = 0, 1, 2, \ldots$, calculate $\eta_k = \left(A - D(x_k)\right)^{-1} b$.
3: Using Step 2, calculate
$$x_{k+1} = x_k - 2\left[g'\!\left(\frac{\eta_k + x_k}{2} + \frac{1}{\sqrt{3}}\,\frac{\eta_k - x_k}{2}\right) + g'\!\left(\frac{\eta_k + x_k}{2} - \frac{1}{\sqrt{3}}\,\frac{\eta_k - x_k}{2}\right)\right]^{-1} g(x_k).$$
4: If $\|x_{k+1} - x_k\| < \mathrm{Tol}$, stop. Otherwise, return to Step 2.
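For concreteness, the following is a minimal Python/NumPy sketch of Algorithm 1. This is our own illustration, not the authors' code (the paper's experiments were run in MATLAB); the function name gqm, the default tolerance, and the iteration cap are assumptions.
```python
import numpy as np

def gqm(A, b, x0, tol=1e-10, max_iter=100):
    """Gauss quadrature method (Algorithm 1) for A x - |x| = b."""
    x = x0.astype(float)
    g = lambda v: A @ v - np.abs(v) - b   # g(x) from Eq. (4)
    D = lambda v: np.diag(np.sign(v))     # D(x) = diag(sign(x)), Eq. (3)
    for k in range(max_iter):
        # Predictor (generalized Newton step): eta_k = (A - D(x_k))^{-1} b.
        eta = np.linalg.solve(A - D(x), b)
        # Two-point Gauss quadrature nodes, Eqs. (13) and (14).
        tau = (eta + x) / 2 + (eta - x) / (2 * np.sqrt(3))
        theta = (eta + x) / 2 - (eta - x) / (2 * np.sqrt(3))
        # Corrector: x_{k+1} = x_k - 2 [g'(tau_k) + g'(Theta_k)]^{-1} g(x_k),
        # where g'(tau_k) + g'(Theta_k) = 2A - D(tau_k) - D(Theta_k), Eq. (15).
        J = 2 * A - D(tau) - D(theta)
        x_new = x - 2 * np.linalg.solve(J, g(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```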

3. Analysis of Convergence

In this section, the convergence of the suggested technique is investigated. The predictor step
$$\eta_k = \left(A - D(x_k)\right)^{-1} b \tag{11}$$
is well defined; see Lemma 2 of [14]. To prove that
$$g'\!\left(\frac{\eta_k + x_k}{2} + \frac{1}{\sqrt{3}}\,\frac{\eta_k - x_k}{2}\right) + g'\!\left(\frac{\eta_k + x_k}{2} - \frac{1}{\sqrt{3}}\,\frac{\eta_k - x_k}{2}\right) \tag{12}$$
is nonsingular, first we set
$$\tau_k = \frac{\eta_k + x_k}{2} + \frac{1}{\sqrt{3}}\,\frac{\eta_k - x_k}{2} \tag{13}$$
and
$$\Theta_k = \frac{\eta_k + x_k}{2} - \frac{1}{\sqrt{3}}\,\frac{\eta_k - x_k}{2}. \tag{14}$$
Then,
$$g'(\tau_k) + g'(\Theta_k) = 2A - D(\tau_k) - D(\Theta_k), \tag{15}$$
where $D(\tau_k)$ and $D(\Theta_k)$ are diagonal matrices with entries $0$ or $\pm 1$.
Lemma 1. 
If the singular values of $A$ exceed 1, then $\left(2A - D(\tau_k) - D(\Theta_k)\right)^{-1}$ exists for any diagonal matrices $D(\tau_k)$ and $D(\Theta_k)$ of the form (3).
Proof. 
Suppose, on the contrary, that $2A - D(\tau_k) - D(\Theta_k)$ is singular. Then, $\left(2A - D(\tau_k) - D(\Theta_k)\right)u = 0$ for some $u \neq 0$, that is, $Au = \frac{1}{2}\left(D(\tau_k) + D(\Theta_k)\right)u$. As the singular values of $A$ are greater than one, using Lemma 1 of [14], we have
$$u^T u < u^T A^T A u = \frac{1}{4}\,u^T \left(D(\tau_k) + D(\Theta_k)\right)^T \left(D(\tau_k) + D(\Theta_k)\right) u = \frac{1}{4}\,u^T \left(D(\tau_k)^2 + 2 D(\tau_k) D(\Theta_k) + D(\Theta_k)^2\right) u \le \frac{1}{4} \cdot 4\, u^T u = u^T u,$$
which is a contradiction; hence, $2A - D(\tau_k) - D(\Theta_k)$ is nonsingular, and the sequence in Algorithm 1 is well defined.  □
Lemma 2. 
If the singular values of $A$ exceed 1, then the sequence generated by the GQM is bounded and well defined. Hence, an accumulation point $\tilde{x}$ exists such that
$$\left[g'(\tau_k) + g'(\Theta_k)\right]\tilde{x} = \left[g'(\tau_k) + g'(\Theta_k)\right]\tilde{x} - 2\,g(\tilde{x}), \tag{16}$$
and
$$\left(A - \tilde{D}(\tilde{x})\right)\tilde{x} = b. \tag{17}$$
Proof. 
The proof follows along the same lines as in [14] and is hence omitted.  □
Theorem 1. 
If $\left\|\left[g'(\tau_k) + g'(\Theta_k)\right]^{-1}\right\| < \frac{1}{6}$, then the GQM converges to a solution $\zeta$ of (1).
Proof. 
Consider
$$x_{k+1} - \zeta = x_k - \zeta - 2\left[g'(\tau_k) + g'(\Theta_k)\right]^{-1} g(x_k),$$
or
$$\left[g'(\tau_k) + g'(\Theta_k)\right](x_{k+1} - \zeta) = \left[g'(\tau_k) + g'(\Theta_k)\right](x_k - \zeta) - 2\,g(x_k). \tag{18}$$
As $\zeta$ is the solution of (1), therefore,
$$g(\zeta) = A\zeta - |\zeta| - b = 0. \tag{19}$$
From (18) and (19), we have
$$\begin{aligned}
\left[g'(\tau_k) + g'(\Theta_k)\right](x_{k+1} - \zeta) &= \left[g'(\tau_k) + g'(\Theta_k)\right](x_k - \zeta) - 2\,g(x_k) + 2\,g(\zeta)\\
&= \left[g'(\tau_k) + g'(\Theta_k)\right](x_k - \zeta) - 2\left(A x_k - |x_k| - A\zeta + |\zeta|\right)\\
&= \left[g'(\tau_k) + g'(\Theta_k) - 2A\right](x_k - \zeta) + 2\left(|x_k| - |\zeta|\right)\\
&= 2\left(|x_k| - |\zeta|\right) - \left[D(\tau_k) + D(\Theta_k)\right](x_k - \zeta),
\end{aligned} \tag{20}$$
where the last equality uses (15). It follows that
$$x_{k+1} - \zeta = \left[g'(\tau_k) + g'(\Theta_k)\right]^{-1}\left(2\left(|x_k| - |\zeta|\right) - \left[D(\tau_k) + D(\Theta_k)\right](x_k - \zeta)\right).$$
Using Lemma 5 in [14], we know $\left\| |x_k| - |\zeta| \right\| \le 2\left\| x_k - \zeta \right\|$. Thus,
$$\left\| x_{k+1} - \zeta \right\| \le \left\| \left[g'(\tau_k) + g'(\Theta_k)\right]^{-1} \right\| \left( 4\left\| x_k - \zeta \right\| + \left\| D(\tau_k) + D(\Theta_k) \right\| \cdot \left\| x_k - \zeta \right\| \right). \tag{21}$$
Since $D(\tau_k)$ and $D(\Theta_k)$ are diagonal matrices with entries $0$ or $\pm 1$, we have
$$\left\| D(\tau_k) + D(\Theta_k) \right\| \le 2. \tag{22}$$
Combining (21) and (22), we have
$$\left\| x_{k+1} - \zeta \right\| \le 6 \left\| \left[g'(\tau_k) + g'(\Theta_k)\right]^{-1} \right\| \cdot \left\| x_k - \zeta \right\|. \tag{23}$$
From the assumption that $\left\| \left[g'(\tau_k) + g'(\Theta_k)\right]^{-1} \right\| < \frac{1}{6}$, we obtain
$$\left\| x_{k+1} - \zeta \right\| < \left\| x_k - \zeta \right\|. \tag{24}$$
Hence, the sequence $\{x_k\}$ converges linearly to $\zeta$.  □
Theorem 2. 
Suppose that $D(\tau_k)$ and $D(\Theta_k)$ are nonzero and $\|A^{-1}\| < \frac{1}{7}$. Then, the solution of (1) is unique for any $b$. Furthermore, the GQM is well defined and converges to the unique solution of (1) for any initial starting point $x_0 \in \mathbb{R}^n$.
Proof. 
The unique solvability follows directly from $\|A^{-1}\| < \frac{1}{7}$; see [13]. Since $A^{-1}$ exists, therefore, by Lemma 2.3.2 (p. 45) of [16], we have
$$\left\| \left[g'(\tau_k) + g'(\Theta_k)\right]^{-1} \right\| = \left\| \left(2A - D(\tau_k) - D(\Theta_k)\right)^{-1} \right\| \le \frac{\left\| (2A)^{-1} \right\|}{1 - \left\| (2A)^{-1} \right\| \cdot \left\| D(\tau_k) + D(\Theta_k) \right\|} \le \frac{\|A^{-1}\|}{1 - \|A^{-1}\|} < \frac{1}{6}. \tag{25}$$
By Theorem 1, the GQM therefore converges to the unique solution of (1).
Hence, the proof is complete.  □

4. Numerical Results

In this section, we compare the GQM with other existing approaches. The initial starting point for each example was taken from the reference cited there. K, CPU, and RES denote the number of iterations, the time in seconds, and the norm of the residual, respectively. All computations were carried out in MATLAB (R2018a) on a machine with an Intel(R) Core(TM) i5-3327 CPU at 1.00 GHz and 4 GB of RAM.
Example 1 
([11]). Consider the AVE in (1) with
$$A = \mathrm{tridiag}(1.5, 4, 0.5) \in \mathbb{R}^{n \times n}, \quad x \in \mathbb{R}^n, \quad \text{and} \quad b = (1, 2, \ldots, n)^T.$$
The comparison of the GQM with the MSOR-like method [11], the GNM [14], and the residual method (RIM) [15] is given in Table 1.
From the last row of Table 1, it can be seen that the GQM converges to the solution of (1) very quickly. The residuals show that the GQM is more accurate than the MSOR-like method [11], the GNM [14], and the RIM [15].
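For reproducibility, the Example 1 data can be assembled as follows. This is a sketch under two assumptions of ours: that tridiag(1.5, 4, 0.5) lists the sub-, main, and superdiagonal values, and that the zero vector is an acceptable starting point. It reuses the gqm sketch from Section 2.
```python
import numpy as np

n = 1000
A = (np.diag(4.0 * np.ones(n))
     + np.diag(1.5 * np.ones(n - 1), -1)   # subdiagonal
     + np.diag(0.5 * np.ones(n - 1), 1))   # superdiagonal
b = np.arange(1.0, n + 1)                  # b = (1, 2, ..., n)^T
x, iters = gqm(A, b, np.zeros(n))
print(iters, np.linalg.norm(A @ x - np.abs(x) - b))   # K and RES
```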
Example 2 
([3]). Consider
$$A = \mathrm{round}\left(p \times \left(\mathrm{eye}(p, p) - 0.02 \times \left(2 \times \mathrm{rand}(p, p) - 1\right)\right)\right).$$
Select a random $\mu \in \mathbb{R}^p$ and set $b = A\mu - |\mu|$.
Now, we compare the GQM with the TSI [4] and INM [3] in Table 2.
From Table 2, we see that our suggested method converges in two iterations to the approximate solution of (1) with high accuracy. The other two methods are also two-step methods but performed slightly worse on this problem.
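The random test problem of Example 2 translates directly from the MATLAB expression above; here is a NumPy sketch (the distribution used to sample mu and the zero starting point are our assumptions):
```python
import numpy as np

p = 1000
A = np.round(p * (np.eye(p) - 0.02 * (2 * np.random.rand(p, p) - 1)))
mu = 2 * np.random.rand(p) - 1      # a random mu in R^p
b = A @ mu - np.abs(mu)             # b built so that mu solves (1)
x, iters = gqm(A, b, np.zeros(p))
print(iters, np.linalg.norm(x - mu))
```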
Example 3 
([10]). Let
$$A = \mathrm{tridiag}(-1, 8, -1) \in \mathbb{R}^{n \times n}, \quad b = Au - |u| \quad \text{for} \quad u = (-1, 1, -1, \ldots)^T \in \mathbb{R}^n,$$
with the same initial vector as given in [10].
We compared our proposed method with the modified iteration method (MIM) [9] and the generalized iteration methods (GIMs) [10].
The last row of Table 3 reveals that the GQM converges to the solution of (1) in two iterations. Moreover, it is evident from the residuals that the GQM is more accurate than the MIM and the GIMs.
Example 4. 
Consider the Euler–Bernoulli equation of the form:
$$\frac{d^4 x}{d s^4} - |x| = e^s,$$
with boundary conditions:
$$x(0) = 0, \quad x(1) = 0, \quad x'(1) = 0, \quad x'(0) = 0.$$
We used the finite difference method to discretize the Euler–Bernoulli equation. The comparison of the GQM with the Maple solution is given in Figure 1.
From Figure 1, we see that the two curves overlap one another, which shows the efficiency and applicability of the GQM for solving (1).
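The paper does not spell out its finite-difference scheme, so the following is one standard construction (our own sketch, under stated assumptions): central differences for $d^4x/ds^4$ at the interior grid points give the pentadiagonal stencil $(1, -4, 6, -4, 1)/h^4$, and the clamped-end conditions are imposed by mirroring the ghost points, which adds 1 to the first and last diagonal entries.
```python
import numpy as np

h = 0.02
n = int(round(1.0 / h)) - 1     # number of interior grid points
s = np.linspace(h, 1.0 - h, n)  # interior nodes of [0, 1]
# Pentadiagonal matrix for the fourth derivative.
B = (6.0 * np.eye(n)
     - 4.0 * np.eye(n, k=1) - 4.0 * np.eye(n, k=-1)
     + np.eye(n, k=2) + np.eye(n, k=-2))
B[0, 0] += 1.0                  # ghost-point reflection at s = 0
B[-1, -1] += 1.0                # ghost-point reflection at s = 1
A = B / h**4                    # discrete AVE: A x - |x| = e^s
b = np.exp(s)
x, iters = gqm(A, b, np.zeros(n))
```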
Example 5 
([6]). Consider the following AVE with
$$a_{mm} = 4p, \quad a_{m,m+1} = a_{m+1,m} = p, \quad a_{mn} = 0.5 \ \text{otherwise}, \qquad m = 1, 2, \ldots, p.$$
Choose the constant vector $b$ such that $\zeta = (1, 1, \ldots, 1)^T$ is the actual solution of (1). We took the same initial starting vector as given in [6].
The comparison of the GQM with the MMSGP [17] and the MM [6] is given in Table 4.
From Table 4, we observe that the GQM is more effective for solving (1). Moreover, as p increases, our proposed method remains very consistent, while the other two methods require more iterations for large systems.

5. Conclusions

In this paper, we considered a two-step method for solving the AVE, in which the well-known generalized Newton method was taken as the predictor step and the Gauss quadrature rule as the corrector step. The convergence was proven under certain suitable conditions. The method was shown to be effective for solving AVE (1) in comparison with other similar methods. This idea can be extended to solve generalized absolute value equations. It would also be interesting to study the three-point Gauss quadrature rule as the corrector step for solving the AVE.

Author Contributions

The idea of the present paper was proposed by J.I.; L.S. and F.R. wrote and completed the calculations; M.A. checked all the results. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Rohn, J. A theorem of the alternatives for the equation Ax + B|x| = b. Linear Multilinear Algebra 2004, 52, 421–426.
2. Khan, A.; Iqbal, J.; Akgul, A.; Ali, R.; Du, Y.; Hussain, A.; Nisar, K.S.; Vijayakumar, V. A Newton-type technique for solving absolute value equations. Alex. Eng. J. 2023, 64, 291–296.
3. Feng, J.M.; Liu, S.Y. An improved generalized Newton method for absolute value equations. SpringerPlus 2016, 5, 1–10.
4. Feng, J.M.; Liu, S.Y. A new two-step iterative method for solving absolute value equations. J. Inequal. Appl. 2019, 2019, 39.
5. Shi, L.; Iqbal, J.; Arif, M.; Khan, A. A two-step Newton-type method for solving system of absolute value equations. Math. Probl. Eng. 2020, 2020, 2798080.
6. Noor, M.A.; Iqbal, J.; Khattri, S.; Al-Said, E. A new iterative method for solving absolute value equations. Int. J. Phys. Sci. 2011, 6, 1793–1797.
7. Noor, M.A.; Iqbal, J.; Noor, K.I.; Al-Said, E. On an iterative method for solving absolute value equations. Optim. Lett. 2012, 6, 1027–1033.
8. Srivastava, H.M.; Iqbal, J.; Arif, M.; Khan, A.; Gasimov, Y.M.; Chinram, R. A new application of Gauss quadrature method for solving systems of nonlinear equations. Symmetry 2021, 13, 432.
9. Ali, R. Numerical solution of the absolute value equation using modified iteration methods. Comput. Math. Methods 2022, 2022, 2828457.
10. Ali, R.; Khan, I.; Ali, A.; Mohamed, A. Two new generalized iteration methods for solving absolute value equations using M-matrix. AIMS Math. 2022, 7, 8176–8187.
11. Huang, B.; Li, W. A modified SOR-like method for absolute value equations associated with second order cones. J. Comput. Appl. Math. 2022, 400, 113745.
12. Liang, Y.; Li, C. Modified Picard-like method for solving absolute value equations. Mathematics 2023, 11, 848.
13. Mangasarian, O.L.; Meyer, R.R. Absolute value equations. Linear Algebra Appl. 2006, 419, 359–367.
14. Mangasarian, O.L. A generalized Newton method for absolute value equations. Optim. Lett. 2009, 3, 101–108.
15. Noor, M.A.; Iqbal, J.; Al-Said, E. Residual iterative method for solving absolute value equations. Abstr. Appl. Anal. 2012, 2012.
16. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970.
17. Yu, Z.; Li, L.; Yuan, Y. A modified multivariate spectral gradient algorithm for solving absolute value equations. Appl. Math. Lett. 2021, 121, 107461.
18. Zhang, Y.; Yu, D.; Yuan, Y. On the alternative SOR-like iteration method for solving absolute value equations. Symmetry 2023, 15, 589.
Figure 1. Comparison of the GQM with the Maple solution for h = 0.02 (step size).
Table 1. Numerical comparison of the GQM with the RIM, the MSOR-like method, and the GNM.

Method      Metric   n = 1000     n = 2000     n = 3000     n = 4000     n = 5000     n = 6000
RIM         K        24           25           25           25           25           25
            CPU      7.084206     54.430295    150.798374   321.604186   581.212038   912.840059
            RES      7.6844e-07   4.9891e-07   6.3532e-07   7.6121e-07   8.8041e-07   9.9454e-07
MSOR-like   K        30           31           32           32           33           33
            CPU      0.0067390    0.0095621    0.0215634    0.0541456    0.0570134    0.0791257
            RES      5.5241e-07   7.0154e-07   5.8684e-07   9.0198e-07   5.6562e-07   7.4395e-07
GNM         K        5            5            5            5            5            5
            CPU      0.0059651    0.0073330    0.0115038    0.0330345    0.0551818    0.0783684
            RES      3.1777e-10   7.8326e-09   2.6922e-10   3.7473e-09   8.3891e-09   5.8502e-08
GQM         K        2            2            2            2            2            2
            CPU      0.001816     0.003410     0.018771     0.0326425    0.031539     0.069252
            RES      6.1366e-12   1.7588e-11   3.1143e-11   2.8152e-11   3.04866e-11  3.1723e-11
Table 2. Comparison of the GQM with the TSI and INM.

Method   Metric   p = 200      p = 400      p = 600      p = 800      p = 1000
TSI      K        3            3            3            4            4
         RES      7.6320e-12   9.0622e-12   1.9329e-11   4.0817e-11   7.1917e-11
         CPU      0.031619     0.120520     0.32591      0.83649      1.00485
INM      K        3            3            3            4            4
         RES      2.1320e-12   6.6512e-12   3.0321e-11   2.0629e-11   8.0150e-11
         CPU      0.012851     0.098124     0.156810     0.638421     0.982314
GQM      K        2            2            2            2            2
         RES      1.1623e-12   4.4280e-12   1.0412e-11   1.9101e-11   2.8061e-11
         CPU      0.012762     0.031733     0.118001     0.204804     0.273755
Table 3. Comparison of the GQM with the MIM and GIM.

Method   Metric   n = 1000     n = 2000      n = 3000     n = 4000      n = 5000
MIM      K        7            8             8            8             8
         RES      6.7056e-09   7.30285e-10   7.6382e-10   9.57640e-10   8.52425e-10
         CPU      0.215240     0.912429      0.916788     1.503518      4.514201
GIM      K        6            6             6            6             6
         RES      3.6218e-08   5.1286e-08    6.2720e-08   7.2409e-08    8.0154e-08
         CPU      0.238352     0.541264      0.961534     1.453189      2.109724
GQM      K        2            2             2            2             2
         RES      3.1871e-14   4.5462e-14    5.7779e-14   6.53641e-14   7.26571e-14
         CPU      0.204974     0.321184      0.462869     0.819503      1.721235
Table 4. Comparison for Example 5.

         MMSGP                          MM                             GQM
p        K    CPU        RES            K   CPU        RES             K   CPU        RES
2        24   0.005129   5.6800e-07     2   0.029965   1.2079e-12      1   0.005161   0
4        37   0.008701   9.7485e-07     4   0.027864   5.5011e-08      1   0.007681   5.0242e-15
8        45   0.009217   5.5254e-07     6   0.045387   6.9779e-08      1   0.005028   3.4076e-14
16       66   0.012458   5.8865e-07     7   0.356930   2.0736e-08      1   0.005253   7.2461e-14
32       55   0.031597   8.2514e-07     8   0.033277   4.9218e-08      1   0.004498   2.0885e-13
64       86   0.085621   7.6463e-07     9   0.185753   9.0520e-09      1   0.007191   6.6775e-13
128      90   0.521056   6.3326e-07     9   0.452394   1.7912e-08      1   0.262364   3.2435e-12