Some High-Order Convergent Iterative Procedures for Nonlinear Systems with Local Convergence

: In this study, we present the local convergence analysis of three iterative schemes that work for systems of nonlinear equations. In earlier results, such as those of Amiri et al. (see also the works by Behl et al., Argyros et al., Chicharro et al., Cordero et al., Geum et al., Gutiérrez, Sharma, Weerakoon and Fernando, and Awadeh), the authors used hypotheses on high-order derivatives that do not appear in these iterative procedures. Therefore, those results have a restricted region of applicability. The main difference between our study and earlier ones is that we adopt only the first-order derivative in the convergence analysis (which is the only derivative appearing in the proposed iterative procedures). No results on computable error distances or uniqueness have been given in the aforementioned studies on R^k; we address these problems too. Moreover, by working in a Banach space setting, the applicability of the iterative procedures is extended even further. We have examined the convergence criteria on several real-life problems, along with a counter problem that completes this study.


Introduction
The most common and difficult problem in the field of computational mathematics is to obtain the solutions of the equation F(x) = 0, where F : Ω ⊂ B1 → B2 is a Fréchet-differentiable operator, B1 and B2 are Banach spaces, and Ω is a nonempty convex set. It is hard to obtain the exact solution in analytic form for such problems or, in simple words, it is rarely attainable in practice. This is one of the main reasons why we must obtain an approximate and efficient solution, up to any specified degree of accuracy, by means of an iterative procedure. Therefore, researchers have been putting great effort into developing new iterative methods over the past few decades. In addition, the accuracy of a solution also depends on several factors, among them: the choice of iterative method, the initial approximation(s), and the structure of the considered problem, together with software such as Maple, Fortran, MATLAB, Mathematica, and so forth. Further, users of these iterative schemes face several issues, some of which include: the choice of starting point, the derivative being zero near the root (in the case of derivative-free multi-point schemes), difficulty near the initial point, slow convergence, divergence, convergence to an undesired solution, oscillation, failure of the iterative method, and so forth (for further information, please see [1][2][3][4][5]).
A radius of convergence r shall be shown to exist. Notice that 0 ≤ ψ0(θ) < 1 for all θ ∈ [0, r). Let S̄(a, b) stand for the closure of the ball S(a, b) with center a ∈ Ω and radius b > 0. The conditions (B) are used in the local convergence analysis of iterative procedure (2), provided the "ψ" functions are as given previously. Assume: Set Ω2 = Ω ∩ S̄(x*, r̄). Next, we develop the analysis of iterative procedure (2) using the preceding notation and conditions (B). Theorem 1. Under the conditions (B) for r̄ = r, further suppose that x0 ∈ S(x*, r) − {x*}. Then, the sequence {xσ} generated by iterative scheme (2) is well defined, remains in S(x*, r) for all σ = 0, 1, 2, 3, . . . and converges to x*. Moreover, the assertions (14)–(16) hold, where the "Gi" functions are given previously and r is defined by (9). Furthermore, x* is the only solution of the equation F(x) = 0 in Ω2 by (B6).
Proof. The sequence {xσ} shall be shown to be well defined, to remain in S(x*, r) and to converge to x* using mathematical induction. In order to achieve this, we shall also show estimates (14)–(16). Let us assume that x ∈ S(x*, r) − {x*}. Using (B2), (8) and (9), we obtain the required bounds. The Banach perturbation lemma on invertible operators [6], together with estimate (16), ensures the existence of F′(x)⁻¹. The induction for assertions (14)–(16) is completed by simply substituting xσ, yσ, zσ and xσ+1 by xσ+1, yσ+1, zσ+1 and xσ+2, respectively, in the preceding calculations. The convergence then follows from the preceding estimates. Secondly, we study iterative procedure (3) in an analogous way. There is no change in the function G1. However, we must re-define the functions G2 and G3 as Ḡ2 and Ḡ3, respectively, with Ḡ1 = G1.
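For completeness, the Banach perturbation lemma invoked in this step is the following standard result (stated here for reference; see [6]):

```latex
\textbf{Lemma (Banach perturbation).}
Let $A$ be a bounded linear operator on a Banach space satisfying
$\|I - A\| < 1$. Then $A$ is invertible and
\[
  \|A^{-1}\| \le \frac{1}{1 - \|I - A\|}.
\]
```

Applied with A = F′(x*)⁻¹F′(x), it yields the invertibility of F′(x) for x sufficiently close to x*, which is exactly what the estimate above requires.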
Then, with these changes, we arrive at the following theorem: Theorem 2. Under the conditions (B) for r̄ = r̃, further suppose that x0 ∈ S(x*, r̃) − {x*}. Then, the sequence {xσ} generated by iterative scheme (3) is well defined, remains in S(x*, r̃) for all σ = 0, 1, 2, 3, . . . and converges to x*. Moreover, the corresponding assertions hold, where the "Ḡi" functions are given previously. Furthermore, x* is the only solution of the equation F(x) = 0 in Ω2 by (B6).
Proof. By simply repeating the proof of Theorem 1 but using iterative procedure (3) instead of method (2), we get the corresponding estimates. The proof of the uniqueness of the solution is given in Theorem 1.
Next, in order to study the local convergence of iterative procedure (4), we add condition (B′) to (B) as follows: Again, there is no change in the function G1. However, we have to re-define the functions Ḡ2 and Ḡ3, with Ḡ1 = G1. We define the radius of convergence r̂ for method (4) in the following way: where r̂4 is the smallest positive solution of the corresponding equation. With these new functions, we arrive at the following theorem: Theorem 3. Under the conditions (B′) for r̄ = r̂, further suppose that x0 ∈ S(x*, r̂) − {x*}. Then, the sequence {xσ} generated by iterative scheme (4) is well defined, remains in S(x*, r̂) for all σ = 0, 1, 2, 3, . . . and converges to x*. Moreover, the corresponding assertions hold, where the "Ḡi" functions are given previously. Furthermore, x* is the only solution of the equation F(x) = 0 in Ω2 by (B6).
Proof. By simply repeating the proof of Theorem 1 but using iterative procedure (4) instead of method (2), we get the corresponding estimates. The proof of the uniqueness of the solution is given in Theorem 1.

Numerical Examples
Here, we present computational results based on the theoretical results developed in this paper.
We also compare iterative procedures (2)–(4) on the basis of their radii of convergence. By the preceding definition of H(θ), we choose it accordingly for method (4), so that hypothesis (B′) is satisfied. We use the divided difference [x, y; F] = ∫₀¹ F′(y + µ(x − y)) dµ.
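The integral form of the divided difference above can be evaluated numerically. The following is a minimal sketch, assuming the Jacobian F′ is available as a callable; the use of Gauss–Legendre quadrature is our own choice for illustration, not prescribed by the paper:

```python
import numpy as np

def divided_difference(Fprime, x, y, nodes=5):
    """Approximate [x, y; F] = integral_0^1 F'(y + mu (x - y)) d(mu)
    by Gauss-Legendre quadrature on [0, 1].  Fprime(v) returns the
    Jacobian of F evaluated at the point v."""
    t, w = np.polynomial.legendre.leggauss(nodes)  # nodes/weights on [-1, 1]
    mu = 0.5 * (t + 1.0)                           # map nodes to [0, 1]
    w = 0.5 * w                                    # rescale weights
    return sum(wi * Fprime(y + mi * (x - y)) for wi, mi in zip(w, mu))

# Illustrative example: F(v) = (v1^2, v2^3) has Jacobian diag(2 v1, 3 v2^2).
Fp = lambda v: np.diag([2.0 * v[0], 3.0 * v[1] ** 2])
A = divided_difference(Fp, np.array([1.0, 1.0]), np.array([0.0, 0.0]))
# A divided difference must satisfy [x, y; F](x - y) = F(x) - F(y);
# here F(x) - F(y) = (1, 1), and indeed A @ (1, 1) = (1, 1).
```

Since the integrand is polynomial in µ for this example, the 5-node rule reproduces the integral exactly up to rounding.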
We choose a balanced mixture of standard and applied-science problems for the computational results, which are illustrated in Examples 1–5. The results are listed in Tables 1–5. Additionally, we obtain the computational order of convergence (COC), or the approximated computational order of convergence (ACOC) [19]. In addition, we adopt ε = 10⁻¹⁰⁰ as the error tolerance, and the terminating criteria for solving nonlinear systems or scalar equations are: (i) ‖xσ+1 − xσ‖ < ε, and (ii) ‖F(xσ)‖ < ε. The computations are performed with the package Mathematica 11 using multiple-precision arithmetic.
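The ACOC of [19] can be estimated from three consecutive differences of iterates. Below is a short sketch using the standard formula ρ ≈ ln(‖xσ+1 − xσ‖/‖xσ − xσ−1‖) / ln(‖xσ − xσ−1‖/‖xσ−1 − xσ−2‖); Newton's method on a scalar equation stands in for the schemes of the paper, purely for illustration:

```python
import math

def acoc(xs):
    """Approximated computational order of convergence (ACOC) from the
    last four iterates in xs, via the ratio of logarithms of successive
    differences e_n = |x_{n+1} - x_n|."""
    e = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

# Newton's method on f(x) = x^2 - 2 converges quadratically, so ACOC ~ 2.
f, df = lambda x: x * x - 2.0, lambda x: 2.0 * x
xs = [1.5]
for _ in range(4):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
print(round(acoc(xs), 1))  # -> 2.0
```

In practice, the differences must stay above the working precision, which is why the paper's experiments use multiple-precision arithmetic with a tolerance as small as 10⁻¹⁰⁰.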
In Table 1, we present the radii for Example 1. It is straightforward to see that method (2) is better than the other mentioned methods because it has a larger radius of convergence.
Example 2. Let B1 = B2 = R³ and Ω = S̄(0, 1). Define F on Ω, for v = (x, y, z)^T, where u = (u1, u2, u3)^T. Then, we obtain the convergence radii mentioned in Table 2. On the basis of this table, method (2) has a larger radius of convergence, and hence a larger domain of convergence, than methods (3) and (4). This means that method (2) admits a wider domain for the choice of starting points: every starting point admissible for (3) or (4) is also admissible for (2), but not conversely.
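The practical meaning of a convergence radius can be illustrated with a simple experiment. The sketch below does not reproduce methods (2)–(4); plain Newton iteration stands in, and the scalar equation x³ − x = 0 (roots 0 and ±1) is purely illustrative. Starting points inside a small ball around the root x* = 0 converge to it, while starting points outside that ball are captured by a different root:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration; returns the final iterate, or None on failure."""
    x = x0
    for _ in range(max_iter):
        d = df(x)
        if d == 0.0:
            return None  # derivative vanished; iteration breaks down
        x_new = x - f(x) / d
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return None

# Illustrative equation f(x) = x^3 - x with target root x* = 0.
f, df = lambda x: x ** 3 - x, lambda x: 3.0 * x ** 2 - 1.0
# Points near x* = 0 converge to it; points farther out are attracted
# to the other roots -1 and +1 (an "undesired solution").
for x0 in (0.1, 0.5, 0.7):
    print(x0, newton(f, df, x0))
```

Here x0 = 0.1 converges to 0, while x0 = 0.5 and x0 = 0.7 land on −1 and +1, respectively. A method with a larger guaranteed radius certifies more of these starting points in advance, which is the sense in which method (2) outperforms (3) and (4).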
We provide the radii of convergence for Example 3 in Table 3.
We present the radii of convergence for Example 4 in Table 4.
We list the radii of convergence for Example 5 in Table 5.

Remark 1.
We have noticed that, in all five examples, method (2) has a larger radius of convergence than the other mentioned methods. So, we conclude that method (2) is better than methods (3) and (4) in terms of convergent points and domain of convergence.

Conclusions
A comparative study was presented for three methods of high convergence order utilizing only the first derivative (and the divided difference of order one), which are the only operators appearing in these methods. Our analysis generated error bounds and results on the uniqueness of x* that can be computed using majorant functions. In earlier studies, these concerns were not addressed, and the applicability of the procedures was limited to operators with ninth-order derivatives that do not appear in these methods. Since our technique is general, it can be extended to other procedures. In our numerical experiments, a comparison was given between the convergence radii.