Abstract
In this paper, a two-step method is considered, with the generalized Newton method as the predictor step and the three-point Newton–Cotes formula as the corrector step. The convergence of the proposed method is discussed in detail. The method is very simple and is therefore well suited to solving large systems. As a numerical application, we consider a beam equation, transform it into a system of absolute value equations, and then solve that system with the proposed method. Numerical experiments show that our method is accurate and faster than existing methods.
Keywords:
absolute value equations; Newton–Cotes open formula; convergence analysis; numerical results; beam equation
MSC:
65F10; 65H10
1. Introduction
Suppose an AVE of the form
Ax − |x| = b, (1)
where A ∈ R^{n×n}, b ∈ R^n, and |x| denotes the componentwise absolute value of x. The AVE
Ax + B|x| = b
is the generalized form of Equation (1) for B ∈ R^{n×n}, which was first presented by Rohn []. The AVE (1) has many applications in pure and applied sciences []. It is difficult to find the exact solution of Equation (1) because of the absolute value of x. For some works on this aspect, we refer to [,,]. Many iterative methods have been proposed to study the AVE (1), for example [,,,,,,,,,].
Nowadays, two-step techniques are very popular for solving the AVE (1). Liu [,] presented two-step iterative methods to solve AVEs. Khan et al. [] suggested a new method based on the generalized Newton technique and Simpson's rule for solving AVEs. Shi et al. [] developed a two-step Newton-type method with linear convergence for AVEs. Noor et al. [] suggested minimization techniques for AVEs and discussed the convergence of these techniques under suitable conditions. In [], a two-step Gauss quadrature method was suggested for solving AVEs. When the coefficient matrix A in the AVE (1) has Toeplitz structure, Gu et al. [] suggested the nonlinear CSCS-like method and the Picard–CSCS method for solving this problem.
In this paper, the Newton–Cotes open method, combined with the generalized Newton technique [], is suggested to solve Equation (1). This new method is straightforward and very effective. The convergence of the proposed method is proved in Section 3 under a suitable condition on A. To demonstrate its effectiveness, we consider several examples in Section 4. The main aim of this new method is to obtain the solution of (1) in a few iterations with good accuracy. The new method successfully solves large systems of AVEs; in most cases, it requires just one iteration to find an approximate solution of Equation (1) to high accuracy. The following notation is used. Let sign(x) denote the vector with entries −1, 0 or 1 according to whether the corresponding entry of x is negative, zero or positive. The generalized Jacobian of |x|, based on a subgradient [,] of the entries of |x|, is the diagonal matrix D given by
D(x) = ∂|x| = diag(sign(x)). (3)
σ_1(A), …, σ_n(A) denote the n singular values of A, ‖A‖ represents the 2-norm of A, i.e., ‖A‖^2 is the maximum eigenvalue of A^T A in absolute value, and ‖x‖ is the 2-norm of the vector x; for more detail, see [].
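These conventions can be checked numerically. The following sketch assumes NumPy; `gen_jacobian` is our illustrative name, not notation from the paper:

```python
import numpy as np

def gen_jacobian(x):
    """Generalized Jacobian of |x|: the diagonal matrix D(x) = diag(sign(x))."""
    return np.diag(np.sign(x))

x = np.array([3.0, -2.0, 0.0])
D = gen_jacobian(x)
print(np.allclose(D @ x, np.abs(x)))  # True: D(x) x = |x| componentwise

# the 2-norm of A is the square root of the largest eigenvalue of A^T A
A = np.array([[2.0, 1.0], [0.0, 3.0]])
print(np.isclose(np.linalg.norm(A, 2),
                 np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))))  # True
```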
2. Proposed Method
We develop a new two-step (NTS) method for the AVE (1) in this section. Let
F(x) = Ax − |x| − b.
Then, the generalized Jacobian F'(x) is given by:
F'(x) = A − D(x),
where D(x) is the diagonal matrix defined in Equation (3).
Consider the predictor step as:
y^k = x^k − (A − D(x^k))^{-1}(Ax^k − |x^k| − b).
Now, using the three-point Newton–Cotes open formula to average the Jacobian along the segment from x^k to y^k, with nodes u_j^k = x^k + (j/4)(y^k − x^k) for j = 1, 2, 3, we have
∫_0^1 F'(x^k + t(y^k − x^k)) dt ≈ (1/3)[2F'(u_1^k) − F'(u_2^k) + 2F'(u_3^k)].
Thus,
x^{k+1} = y^k − 3[2F'(u_1^k) − F'(u_2^k) + 2F'(u_3^k)]^{-1} F(y^k). (10)
From Equation (10), the NTS method can be written as (Algorithm 1):
Algorithm 1: NTS Method
1: Choose an initial guess x^0.
2: For k = 0, 1, 2, …, calculate y^k = x^k − (A − D(x^k))^{-1}(Ax^k − |x^k| − b).
3: Using Step 2, calculate x^{k+1} = y^k − 3[2F'(u_1^k) − F'(u_2^k) + 2F'(u_3^k)]^{-1} F(y^k), where u_j^k = x^k + (j/4)(y^k − x^k).
4: If ‖x^{k+1} − x^k‖ < ε for a prescribed tolerance ε, then stop. Otherwise, go to Step 2.
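The two steps above can be sketched in NumPy. This is a minimal sketch, not the authors' implementation: the corrector uses the open Newton–Cotes weights 2, −1, 2 at the quarter points of the segment [x^k, y^k], and that node placement is our assumption; the test matrix is strongly diagonally dominant so that the singular values of A comfortably exceed 1.

```python
import numpy as np

def nts(A, b, x0, tol=1e-10, max_iter=50):
    """Sketch of the NTS method: a generalized Newton predictor followed by
    a corrector built from the three-point open Newton-Cotes weights 2, -1, 2.
    The placement of the quadrature nodes is our assumption."""
    D = lambda v: np.diag(np.sign(v))        # generalized Jacobian of |v|
    F = lambda v: A @ v - np.abs(v) - b      # residual of the AVE
    x = x0.astype(float)
    for k in range(max_iter):
        # predictor: one generalized Newton step
        y = x - np.linalg.solve(A - D(x), F(x))
        # corrector: weighted Jacobian average at the quarter points of [x, y]
        u = [x + j * (y - x) / 4 for j in (1, 2, 3)]
        M = (2 * (A - D(u[0])) - (A - D(u[1])) + 2 * (A - D(u[2]))) / 3
        x_new = y - np.linalg.solve(M, F(y))
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# usage: a random AVE whose matrix is strongly diagonally dominant
rng = np.random.default_rng(0)
n = 100
A = rng.random((n, n)) + 2 * n * np.eye(n)
x_true = rng.standard_normal(n)
b = A @ x_true - np.abs(x_true)
sol, iters = nts(A, b, np.zeros(n))
print(np.linalg.norm(A @ sol - np.abs(sol) - b))  # residual near machine precision
```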
3. Convergence
Now, we examine the convergence of the NTS method. The predictor step
y^k = x^k − (A − D(x^k))^{-1}(Ax^k − |x^k| − b)
is well defined; see Lemma 2 []. To prove that
A − (1/3)(2D_1 − D_2 + 2D_3)
is nonsingular, first we consider
2F'(u_1^k) − F'(u_2^k) + 2F'(u_3^k).
Now
2F'(u_1^k) − F'(u_2^k) + 2F'(u_3^k) = 2(A − D_1) − (A − D_2) + 2(A − D_3) = 3[A − (1/3)(2D_1 − D_2 + 2D_3)],
where D_1, D_2 and D_3 are diagonal matrices defined in Equation (3), evaluated at u_1^k, u_2^k and u_3^k, respectively.
Lemma 1.
If , then exists for any diagonal matrix D defined in Equation (3).
Proof.
If is singular, then
for some . As , therefore, using Lemma 1 [], we have
which is a contradiction, hence is nonsingular. □
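The nonsingularity argument can be checked numerically. The sketch below illustrates the general principle behind such lemmas (our formulation, not the paper's exact condition): when the smallest singular value of A exceeds ‖D‖ for a diagonal D with entries in {−1, 0, 1}, A − D cannot be singular, since (A − D)v = 0 would give ‖Av‖ = ‖Dv‖ ≤ ‖v‖ while ‖Av‖ ≥ σ_min(A)‖v‖ > ‖v‖.

```python
import numpy as np

# Sample many sign-pattern matrices D and verify A - D stays nonsingular
# whenever sigma_min(A) > 1 (our illustrative condition).
rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + 3 * n * np.eye(n)
assert np.linalg.svd(A, compute_uv=False)[-1] > 1   # sigma_min(A) > 1
ok = all(
    np.linalg.svd(A - np.diag(rng.choice([-1.0, 0.0, 1.0], size=n)),
                  compute_uv=False)[-1] > 0          # nonsingular
    for _ in range(200)
)
print(ok)  # True
```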
Lemma 2.
If , then the sequence of the NTS method is well defined and bounded with an accumulation point such that
or it is equivalent to
Hence, there exists an accumulation point with
for some diagonal matrix D whose diagonal entries are 0, 1 or −1, depending on whether the corresponding component of the accumulation point is zero, positive, or negative, as defined in (3).
Proof.
The proof is the same as given in []. Thus, it is skipped. □
Theorem 1.
If , then the NTS method converges to a solution v of Equation (1).
Proof.
Consider
It is seen that
It follows that
Thus, we know
This leads to
Since , and are diagonal matrices, therefore
We also use the Lipschitz continuity of the absolute value (see Lemma 5 []), that is, ‖|x| − |y|‖ ≤ ‖x − y‖.
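The absolute-value map is nonexpansive, which is exactly this Lipschitz property with constant 1; a quick numerical check:

```python
import numpy as np

# Verify || |x| - |y| || <= || x - y || on many random pairs, i.e. the
# Lipschitz continuity (constant 1) of the absolute-value map.
rng = np.random.default_rng(1)
pairs = [rng.standard_normal((2, 8)) for _ in range(500)]
ok = all(np.linalg.norm(np.abs(x) - np.abs(y)) <= np.linalg.norm(x - y)
         for x, y in pairs)
print(ok)  # True
```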
Lemma 3.
Under suitable conditions on A, for any b, the NTS method converges to the unique solution of Equation (1) for any initial guess x^0.
4. Numerical Results
In this section, several examples are presented to demonstrate the efficiency of the suggested method. We use Matlab R2021a on a Core(TM) i5 @ 1.70 GHz machine. The CPU time in seconds, the number of iterations and the 2-norm of the residual are denoted by time, K and RES, respectively.
Example 1
([]). Consider
A comparison of the NTS method with the MSOR-like method [], generalized Newton method (GNM) [] and RIM [] is given in Table 1.

Table 1.
NTS method versus MSOR-like method and RIM.
Table 1 shows that the NTS method finds the solution of Equation (1) very quickly. The RES of the NTS method shows that the new method is more accurate than all the methods stated in Table 1.
Example 2
([]). Consider
Choose a random and .
We compare the NTS method with INM [], the GQ method [] and TSI [] in Table 2.

Table 2.
Numerical results for Example 2.
It is clear that the NTS method converges in one iteration in most cases, whereas the other methods require at least three iterations to solve Equation (1) to the given accuracy.
Example 3
([]). Let
where the initial vector is taken from [].
We compare the NTS method with GGS [], MGS [] and Method II [].
As seen in Table 3, the suggested method approximates the solution of Equation (1) in just one iteration. The residual shows that the NTS method is very accurate.
Example 4.
Consider the beam equation of the form
with boundary conditions
where is the length of the beam, the modulus of elasticity , the intensity of uniform load , the stress at the ends is 1000 lb and the central moment of inertia .

Table 3.
Comparison of NTS method with GGS, MGS and Method II.
We use the finite difference method (FDM) to discretize this equation. A comparison of the NTS method with the solution obtained by Maple is illustrated in Figure 1.

Figure 1.
Deflection of beam for (step size).
Figure 1 shows the effectiveness and accuracy of the NTS method. Clearly, the deflection of the beam is maximum at the center.
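The discretization step can be sketched with the classical beam-deflection boundary value problem w″ = (S/(EI))w + qx(x − l)/(2EI), w(0) = w(l) = 0. The equation form and all parameter values below are our illustrative assumptions, not data reproduced from the paper:

```python
import numpy as np

# Sketch: central-difference discretization of an assumed beam BVP
#   w''(x) = (S/(E*I)) * w(x) + q*x*(x - l) / (2*E*I),  w(0) = w(l) = 0.
# All parameter values are illustrative assumptions.
l = 120.0         # beam length (in)
E = 3.0e7         # modulus of elasticity (lb/in^2)
I = 625.0         # central moment of inertia (in^4)
S = 1000.0        # stress at the ends (lb)
q = 100.0 / 12.0  # uniform load intensity (lb/in)

n = 119                      # interior grid points, step size h = l/(n+1)
h = l / (n + 1)
x = np.linspace(h, l - h, n)

# w'' ~ (w_{i-1} - 2*w_i + w_{i+1}) / h^2 gives a tridiagonal linear system
main = (-2.0 - h * h * S / (E * I)) * np.ones(n)
Amat = np.diag(main) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
rhs = h * h * q * x * (x - l) / (2 * E * I)
w = np.linalg.solve(Amat, rhs)

# the deflection magnitude is largest at the center of the beam
print(abs(int(np.argmax(np.abs(w))) - n // 2) <= 1)  # True
```

In the paper's setting, the absolute-value terms enter the discretized system, turning it into an AVE of the form (1) that the NTS method can solve.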
Example 5
([]). Consider an AVE of the form
Choose a constant vector b, and the initial guess is taken from []. A comparison of the NTS method with MM [] and MMSGP [] is presented in Table 4.

Table 4.
The numerical results for Example 5.
We observe that the NTS method is very successful for solving Equation (1). Furthermore, the NTS method is very consistent when n increases (large systems), whereas the other two methods need more iterations.
5. Conclusions
In this paper, we have presented a two-step method for AVEs. In this new method, a three-point Newton–Cotes open formula is taken as the corrector step, while the generalized Newton method is taken as the predictor. The convergence of the NTS method is proved in Section 3, and Theorem 1 establishes the linear convergence of the proposed method. The comparisons show that this method is very accurate and converges in just one iteration in most cases. In the future, this idea can be used to solve generalized AVEs and to find all solutions of AVEs.
Author Contributions
The idea of the present paper came from J.I. and S.M.G.; P.G., J.I., S.M.G., M.A. and R.K.A. wrote and completed the calculations; L.S. checked all the results. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported in part by the Key Scientific Research Projects of Universities in Henan Province under Grant 22A110005.
Data Availability Statement
Not applicable.
Acknowledgments
The authors would like to extend their sincere appreciation to Researchers Supporting Project number RSPD2023R802 KSU, Riyadh, Saudi Arabia.
Conflicts of Interest
The authors declare that they have no conflict of interest.
References
- Rohn, J. A theorem of the alternatives for the equation Ax + B|x| = b. Linear Multilinear Algebra 2004, 52, 421–426. [Google Scholar] [CrossRef]
- Mangasarian, O.L.; Meyer, R.R. Absolute value equations. Linear Algebra Its Appl. 2006, 419, 359–367. [Google Scholar] [CrossRef]
- Mansoori, A.; Eshaghnezhad, M.; Effati, S. An efficient neural network model for solving the absolute value equations. IEEE Tran. Circ. Syst. II Express Briefs 2017, 65, 391–395. [Google Scholar] [CrossRef]
- Chen, C.; Yang, Y.; Yu, D.; Han, D. An inverse-free dynamical system for solving the absolute value equations. Appl. Numer. Math. 2021, 168, 170–181. [Google Scholar] [CrossRef]
- Chen, C.; Yu, D.; Han, D. Exact and inexact Douglas–Rachford splitting methods for solving large-scale sparse absolute value equations. IMA J. Numer. Anal. 2023, 43, 1036–1060. [Google Scholar]
- Ali, R. Numerical solution of the absolute value equation using modified iteration methods. Comput. Math. Methods 2022, 2022, 2828457. [Google Scholar] [CrossRef]
- Ali, R.; Khan, I.; Ali, A.; Mohamed, A. Two new generalized iteration methods for solving absolute value equations using M-matrix. AIMS Math. 2022, 7, 8176–8187. [Google Scholar] [CrossRef]
- Edalatpour, V.; Hezari, D.; Salkuyeh, D.K. A generalization of the Gauss-Seidel iteration method for solving absolute value equations. Appl. Math. Comput. 2017, 293, 156–167. [Google Scholar]
- Huang, B.; Li, W. A modified SOR-like method for absolute value equations associated with second order cones. J. Comput. Appl. Math. 2022, 400, 113745. [Google Scholar] [CrossRef]
- Mansoori, A.; Erfanian, M. A dynamic model to solve the absolute value equations. J. Comput. Appl. Math. 2018, 333, 28–35. [Google Scholar] [CrossRef]
- Noor, M.A.; Iqbal, J.; Al-Said, E. Residual Iterative Method for Solving Absolute Value Equations. Abstr. Appl. Anal. 2012, 2012, 406232. [Google Scholar] [CrossRef]
- Salkuyeh, D.K. The Picard-HSS iteration method for absolute value equations. Optim. Lett. 2014, 8, 2191–2202. [Google Scholar] [CrossRef]
- Abdallah, L.; Haddou, M.; Migot, T. Solving absolute value equation using complementarity and smoothing functions. J. Comput. Appl. Math. 2018, 327, 196–207. [Google Scholar] [CrossRef]
- Yu, Z.; Li, L.; Yuan, Y. A modified multivariate spectral gradient algorithm for solving absolute value equations. Appl. Math. Lett. 2021, 121, 107461. [Google Scholar] [CrossRef]
- Zhang, Y.; Yu, D.; Yuan, Y. On the Alternative SOR-like Iteration Method for Solving Absolute Value Equations. Symmetry 2023, 15, 589. [Google Scholar] [CrossRef]
- Feng, J.; Liu, S. An improved generalized Newton method for absolute value equations. SpringerPlus 2016, 5, 1042. [Google Scholar] [CrossRef]
- Feng, J.; Liu, S. A new two-step iterative method for solving absolute value equations. J. Inequal. Appl. 2019, 2019, 39. [Google Scholar] [CrossRef]
- Khan, A.; Iqbal, J.; Akgul, A.; Ali, R.; Du, Y.; Hussain, A.; Nisar, K.S.; Vijayakumar, V. A Newton-type technique for solving absolute value equations. Alex. Eng. J. 2023, 64, 291–296. [Google Scholar] [CrossRef]
- Shi, L.; Iqbal, J.; Arif, M.; Khan, A. A two-step Newton-type method for solving system of absolute value equations. Math. Prob. Eng. 2020, 2020, 2798080. [Google Scholar] [CrossRef]
- Noor, M.A.; Iqbal, J.; Khattri, S.; Al-Said, E. A new iterative method for solving absolute value equations. Int. J. Phys. Sci. 2011, 6, 1793–1797. [Google Scholar]
- Shi, L.; Iqbal, J.; Riaz, F.; Arif, M. Gauss quadrature method for absolute value equations. Mathematics 2023, 11, 2069. [Google Scholar] [CrossRef]
- Gu, X.-M.; Huang, T.-Z.; Li, H.-B.; Wang, S.-F.; Li, L. Two CSCS-based iteration methods for solving absolute value equations. J. Appl. Anal. Comput. 2017, 7, 1336–1356. [Google Scholar]
- Mangasarian, O.L. A generalized Newton method for absolute value equations. Optim. Lett. 2009, 3, 101–108. [Google Scholar] [CrossRef]
- Polyak, B.T. Introduction to Optimization; Optimization Software Inc., Publications Division: New York, NY, USA, 1987. [Google Scholar]
- Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA; London, UK, 1970. [Google Scholar]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).