A New Efficient Method for Absolute Value Equations

Abstract: In this paper, a two-step method is considered, with the generalized Newton method as the predictor step and the three-point Newton–Cotes formula as the corrector step. The convergence of the proposed method is discussed in detail. The method is very simple and is therefore effective for solving large systems. In the numerical section, we consider a beam equation, transform it into a system of absolute value equations and then solve it with the proposed method. Numerical experiments show that our method is very accurate and faster than existing methods.


Introduction
Consider an absolute value equation (AVE) of the form

Ax − |x| = b, (1)

where A ∈ R^{n×n}, x, b ∈ R^n and | · | denotes the componentwise absolute value. The generalized AVE Ax + B|x| = b with B ∈ R^{n×n}, of which Equation (1) is a special case, was first presented by Rohn [1]. The AVE Equation (1) has many applications in pure and applied sciences [2]. Because of the absolute value of x, it is difficult to find the exact solution of Equation (1); for some works on this aspect, we refer to [3–5]. Many iterative methods have been proposed to study the AVE Equation (1), for example [6–15]. Nowadays, two-step techniques are very popular for solving the AVE Equation (1). Liu [16,17] presented two-step iterative methods to solve AVEs. Khan et al. [18] suggested a new method based on the generalized Newton technique and Simpson's rule for solving AVEs. Shi et al. [19] developed a two-step Newton-type method with linear convergence for AVEs. Noor et al. [20] suggested minimization techniques for AVEs and discussed their convergence under suitable conditions. In [21], a two-step Gauss quadrature method was suggested for solving AVEs. When the coefficient matrix A in the AVE Equation (1) has Toeplitz structure, Gu et al. [22] suggested the nonlinear CSCS-like method and the Picard-CSCS method for solving this problem.
In this paper, the Newton–Cotes open method, together with the generalized Newton technique [23], is suggested to solve Equation (1). The new method is straightforward and very effective. Its convergence is proved in Section 3 under the condition ‖A^{−1}‖ < 1/10. To demonstrate its effectiveness, we consider several examples in Section 4. The main aim of the new method is to obtain the solution of Equation (1) in a few iterations with good accuracy. The method successfully solves large systems of AVEs and, in most cases, requires just one iteration to find an approximate solution of Equation (1) with accuracy up to 10^{−13}.

The following notation is used. Let sign(x) be the vector with entries 1, 0, −1 according to whether the corresponding entries of x are positive, zero, or negative. The generalized Jacobian ∂|x| of |x|, based on a subgradient [24,25] of the entries of |x|, is the diagonal matrix D(x) given by

D(x) = diag(sign(x)). (3)

svd(A) denotes the n singular values of A, ‖A‖ = λ^{1/2} denotes the 2-norm of A, where λ is the largest eigenvalue of A^T A in absolute value, and ‖x‖ = (x^T x)^{1/2} is the 2-norm of the vector x; for more details, see [26].
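For concreteness, the notation above can be realized in a few lines. The following Python/NumPy sketch (the helper names `D` and `residual` are ours, not from the paper) builds the diagonal generalized Jacobian and the AVE residual:

```python
import numpy as np

def D(x):
    """Generalized Jacobian of |x|: D(x) = diag(sign(x)), entries in {-1, 0, 1}."""
    return np.diag(np.sign(x))

def residual(A, x, b):
    """Residual of the AVE: F(x) = Ax - |x| - b."""
    return A @ x - np.abs(x) - b

# Sanity check: (A - D(x)) x = Ax - |x|, so if b = Ax - |x| then x solves the AVE.
A = np.array([[4.0, -1.0], [0.0, 3.0]])
x = np.array([2.0, -1.0])
b = A @ x - np.abs(x)
print(residual(A, x, b))                               # -> [0. 0.]
print(np.allclose((A - D(x)) @ x, A @ x - np.abs(x)))  # -> True
```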

Proposed Method
We develop a new two-step (NTS) method for the AVE Equation (1) in this section. Let

F(x) = Ax − |x| − b. (4)

Then, J(x) is given by:

J(x) = A − D(x), (5)

with D(x) as defined in Equation (3). Consider the predictor step (one generalized Newton step [23]) as:

y^k = (A − D(x^k))^{−1} b. (6)

Let v be the solution of Equation (1). To construct the corrector step, we proceed as follows:

F(v) = F(x^k) + ∫₀¹ J(x^k + t(v − x^k))(v − x^k) dt. (7)

Now, using the three-point Newton–Cotes open formula with nodes at t = 1/4, 1/2, 3/4, we have

∫₀¹ J(x^k + t(v − x^k)) dt ≈ (1/3)[2J(φ^k) − J(δ^k) + 2J(τ^k)], (8)

where, replacing the unknown v by the predictor y^k,

φ^k = x^k + (y^k − x^k)/4, δ^k = x^k + (y^k − x^k)/2, τ^k = x^k + 3(y^k − x^k)/4. (9)

From Equations (7) and (8), with F(v) = 0 and v replaced by the new iterate x^{k+1}, we have

0 ≈ F(x^k) + (1/3)[2J(φ^k) − J(δ^k) + 2J(τ^k)](x^{k+1} − x^k). (10)

From Equation (10), the NTS method can be written as (Algorithm 1):

Algorithm 1 (NTS method). Given x^0 ∈ R^n, for k = 0, 1, 2, . . . compute
y^k = (A − D(x^k))^{−1} b,
x^{k+1} = x^k − 3[2J(φ^k) − J(δ^k) + 2J(τ^k)]^{−1}(Ax^k − |x^k| − b).
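A predictor–corrector iteration of this type can be sketched as follows in Python/NumPy (our reconstruction: the predictor is one generalized Newton step and the corrector averages the Jacobian over [x, y] with the three-point open Newton–Cotes weights (2, −1, 2)/3; the function name `nts` and the random test instance are ours):

```python
import numpy as np

def nts(A, b, x0, tol=1e-12, max_iter=100):
    """Sketch of an NTS-type predictor-corrector iteration for Ax - |x| = b."""
    D = lambda z: np.diag(np.sign(z))
    J = lambda z: A - D(z)                   # generalized Jacobian of F(x) = Ax - |x| - b
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        y = np.linalg.solve(A - D(x), b)     # predictor: generalized Newton step
        phi, delta, tau = (x + t * (y - x) for t in (0.25, 0.5, 0.75))
        M = 2.0 * J(phi) - J(delta) + 2.0 * J(tau)
        x = x - 3.0 * np.linalg.solve(M, A @ x - np.abs(x) - b)   # corrector
        if np.linalg.norm(A @ x - np.abs(x) - b) < tol:
            return x, k + 1
    return x, max_iter

# Illustration on a well-conditioned random instance (smallest singular value well above 1)
rng = np.random.default_rng(0)
n = 100
A = 8.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x, iters = nts(A, b, np.zeros(n))
print(iters, np.linalg.norm(A @ x - np.abs(x) - b))
```

On such strongly diagonally dominant instances the sign pattern of the iterates settles almost immediately, which is why the method typically terminates after one or two iterations.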

Convergence
Now, we examine the convergence of the NTS method. The predictor step is well defined; see Lemma 2 of [23]. To prove that 2J(φ^k) − J(δ^k) + 2J(τ^k) is nonsingular, first we consider

2J(φ^k) − J(δ^k) + 2J(τ^k) = 3A − (2D_{φ^k} − D_{δ^k} + 2D_{τ^k}),

where D_{φ^k}, D_{δ^k} and D_{τ^k} are diagonal matrices as defined in Equation (3).

Lemma 2.
If the singular values of A exceed 1, then the sequence generated by the NTS method is well defined and bounded, with an accumulation point x̄ such that

(A − D)x̄ = b,

or, equivalently, Ax̄ − |x̄| = b. Hence, there exists an accumulation point x̄ with (A − D)x̄ = b for some diagonal matrix D whose diagonal entries are 0 or ±1 depending on whether the corresponding component of x̄ is zero, positive, or negative, as defined in Equation (3).
Proof. The proof is the same as that given in [23] and is therefore omitted.
Theorem 1. If ‖[2J(φ^k) − J(δ^k) + 2J(τ^k)]^{−1}‖ < 1/9 for every k, then the NTS method converges to a solution v of Equation (1).

Proof. Consider the corrector step and write M_k = 2J(φ^k) − J(δ^k) + 2J(τ^k), so that

x^{k+1} = x^k − 3M_k^{−1}(Ax^k − |x^k| − b). (18)

As the solution to Equation (1) is v, therefore

Av − |v| − b = 0. (19)

From Equations (18) and (19), we have

x^{k+1} − v = M_k^{−1}[M_k(x^k − v) − 3A(x^k − v) + 3(|x^k| − |v|)]. (20)

Since D_{φ^k}, D_{δ^k} and D_{τ^k} are diagonal matrices with entries in {−1, 0, 1}, M_k = 3A − (2D_{φ^k} − D_{δ^k} + 2D_{τ^k}), and therefore

‖M_k(x^k − v) − 3A(x^k − v)‖ = ‖(2D_{φ^k} − D_{δ^k} + 2D_{τ^k})(x^k − v)‖ ≤ 5‖x^k − v‖. (21)

We also use the Lipschitz continuity of the absolute value (see Lemma 5 of [23]), that is,

‖|x^k| − |v|‖ ≤ ‖x^k − v‖. (22)

From Equations (20)–(22), we have

‖x^{k+1} − v‖ ≤ ‖M_k^{−1}‖ (5 + 3)‖x^k − v‖ < (8/9)‖x^k − v‖. (23)

In Equation (23), the supposition ‖[2J(φ^k) − J(δ^k) + 2J(τ^k)]^{−1}‖ < 1/9 is used. Hence x^k converges linearly to the solution of Equation (1). □

Lemma 3. Let ‖A^{−1}‖ < 1/10 and let D_{φ^k}, D_{δ^k} and D_{τ^k} be non-zero. Then, for any b, the NTS method converges to the unique solution of Equation (1) for any initial guess x^0 ∈ R^n.
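The hypothesis of Lemma 3 is easy to check numerically, since ‖A^{−1}‖₂ = 1/σ_min(A). A short Python/NumPy sketch (the matrices below are illustrative values of ours, not from the paper):

```python
import numpy as np

def satisfies_lemma3_condition(A):
    """Check ||A^{-1}||_2 < 1/10, i.e. the smallest singular value of A exceeds 10."""
    return np.linalg.svd(A, compute_uv=False).min() > 10.0

print(satisfies_lemma3_condition(np.diag([12.0, 15.0, 20.0])))  # sigma_min = 12 -> True
print(satisfies_lemma3_condition(np.eye(3)))                    # sigma_min = 1  -> False
```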

Numerical Result
In this section, several examples are presented to demonstrate the efficiency of the suggested method. We use Matlab R2021a on an Intel Core(TM) i5 @ 1.70 GHz. The CPU time in seconds, the number of iterations and the 2-norm of the residual are denoted by time, K and RES, respectively.

Example 1 ([9]). Consider A = tridiag(−1.5, 4, −0.5) ∈ R^{s×s}, x ∈ R^s and b = (1, 2, · · · , s)^T.
A comparison of the NTS method with the MSOR-like method [9], the generalized Newton method (GNM) [23] and RIM [11] is given in Table 1. Table 1 shows that the NTS method finds the solution of Equation (1) very quickly, and its RES values show that it is more accurate than the other methods listed in Table 1.
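The data of Example 1 can be generated directly; the following Python/NumPy sketch (the helper name `example1_data` is ours) also verifies that the smallest singular value of A exceeds 1, so the AVE is uniquely solvable:

```python
import numpy as np

def example1_data(s):
    """A = tridiag(-1.5, 4, -0.5) in R^{s x s}, b = (1, 2, ..., s)^T."""
    A = -1.5 * np.eye(s, k=-1) + 4.0 * np.eye(s) - 0.5 * np.eye(s, k=1)
    b = np.arange(1, s + 1, dtype=float)
    return A, b

A, b = example1_data(200)
sigma_min = np.linalg.svd(A, compute_uv=False).min()
# sigma_min(A) >= 4 - ||offdiagonal part|| >= 4 - 2 = 2 > 1
print(sigma_min > 1.0)   # -> True
```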
We compare the NTS method with INM [7], the GQ method [21] and TSI [16] in Table 2. It is clear that the NTS method converges in one iteration in most cases, whereas the other methods require at least three iterations to solve Equation (1) to the given accuracy.

Example 2 ([7]). Consider the problem given in Equation (26), where the initial vector is taken from [7].

Example 3 ([7]). Let the problem data be as given in [7].

We compare the NTS method with GGS [8], MGS [6] and Method II [7]. As seen in Table 3, the suggested method approximates the solution of Equation (1) in just one iteration, and the residuals show that the NTS method is very accurate.
Example 4. Consider the beam equation

y''(x) = (S/(EI)) y(x) + q x(x − L)/(2EI), 0 < x < L,

with boundary conditions y(0) = y(L) = 0, where L = 120 in. is the length of the beam, the modulus of elasticity is E = 3 × 10^7 lb/in.^2, the intensity of the uniform load is q = 100 lb/ft, the stress at the ends is S = 1000 lb and the central moment of inertia is I = 625 in.^4.
We use the finite difference method (FDM) to discretize this equation. A comparison of the NTS method with the solution obtained by Maple is illustrated in Figure 1, which shows the effectiveness and accuracy of the NTS method. Clearly, the deflection of the beam is largest at the center.
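The discretization step can be sketched as follows in Python/NumPy, assuming the classical beam-deflection model y'' = (S/(EI)) y + q x(x − L)/(2EI) with y(0) = y(L) = 0 and the data of Example 4 (a sketch of the FDM linear system only; the AVE reformulation used in the paper is not reproduced here, and the grid size is our choice):

```python
import numpy as np

# Data of Example 4 (q converted from lb/ft to lb/in so all lengths are in inches)
L, E, I, q, S = 120.0, 3.0e7, 625.0, 100.0 / 12.0, 1000.0

n = 119                                  # interior grid points, step h = 1 in
h = L / (n + 1)
xs = h * np.arange(1, n + 1)

# Centered differences: (y[i-1] - 2 y[i] + y[i+1])/h^2 - (S/(E I)) y[i] = q x (x - L)/(2 E I)
diag = -2.0 / h**2 - S / (E * I)
A = np.eye(n, k=-1) / h**2 + diag * np.eye(n) + np.eye(n, k=1) / h**2
rhs = q * xs * (xs - L) / (2.0 * E * I)
y = np.linalg.solve(A, rhs)

print(f"max deflection {np.abs(y).max():.6f} in at x = {xs[np.abs(y).argmax()]:.0f} in")
```

As expected from the symmetry of the load, the computed deflection peaks at the midpoint of the beam.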
Example 5 ([20]). Choose a constant vector b; the initial guess is taken from [20].

A comparison of the NTS method with MM [20] and MMSGP [1] is presented in Table 4. We observe that the NTS method is very successful for solving Equation (1). Furthermore, the NTS method remains very consistent as n increases (large systems), whereas the other two methods need more iterations.

Conclusions
In this paper, we have presented a two-step method for AVEs. In this new method, the three-point Newton–Cotes open formula is taken as the corrector step, while the generalized Newton method is taken as the predictor. The local convergence of the NTS method is proved in Section 3, and Theorem 1 establishes the linear convergence of the proposed method. The comparisons show that this method is very accurate and converges in just one iteration in most cases. In the future, this idea can be used to solve generalized AVEs and also to find all solutions of AVEs.