Local Convergence for Multi-Step High Order Solvers under Weak Conditions

Abstract: Our aim in this article is to present an extended local convergence study for a class of multi-step solvers for nonlinear equations valued in a Banach space. In contrast to earlier studies, which adopt hypotheses involving up to the seventh Fréchet derivative, we restrict the hypotheses to the first-order derivative of the operators involved and to Lipschitz constants. Hence, we enlarge the applicability region of these solvers and provide computable radii of convergence. At the end of this study, we present a variety of numerical problems illustrating that our results apply to nonlinear problems where the earlier ones do not.


Introduction
Finding an approximate solution µ of the nonlinear equation F(x) = 0 (1) is one of the top priorities in the field of numerical analysis. We assume that F : A ⊂ E1 → E2 is a Fréchet-differentiable operator, where E1, E2 are Banach spaces and A is a convex subset of E1. B(E1, E2) denotes the set of bounded linear operators from E1 to E2. The problem of finding an approximate unique solution µ is very important, since many problems can be written in the form of Equation (1); see References [1-8]. However, it is not always possible to obtain the solution µ in explicit form, so most solvers are iterative in nature. The analysis of such solvers involves local convergence, which relies on information around µ and ensures the convergence of the iteration procedure. One of the most significant tasks in the analysis of iterative procedures is to determine the convergence region; hence, it is essential to estimate the radius of convergence.
We redefine the iterative solver suggested in Reference [7], for all σ = 0, 1, 2, . . ., as scheme (2), where x0 ∈ A is a starting guess, zσ = φ1(xσ, yσ) is a λ-order iteration function (for λ ≥ 1) and F′ stands for the first-order Fréchet derivative of F. The study of these methods is important for various reasons already stated in Reference [7]; for brevity, we refer the reader to Reference [7] and the references therein. On top of those reasons, we also mention that method (2) generalizes existing, widely used Newton-type methods such as Newton's, Traub's and others, so it is important to study all of them under the same set of convergence criteria. Keeping the linear operator frozen is also a very cheap and efficient way of increasing the order of convergence. The convergence order of (2) was given in Reference [7], but using hypotheses up to the 7th-order derivative of the function F, even though only the 1st-order derivative appears in scheme (2). Such conditions hamper the applicability of solver (2): for instance, one can construct a function F whose 3rd-order derivative is unbounded in A, so that the analysis of Reference [7] does not apply.
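Since the display of scheme (2) was lost, the sketch below only illustrates the frozen-operator idea described above: the linear operator F′(xσ) is evaluated once per outer step and reused in m inexpensive inner corrections. The function names, the value of m and the small test system are hypothetical, not the paper's.

```python
import numpy as np

def frozen_multistep(F, J, x0, m=2, tol=1e-12, max_iter=100):
    """Sketch of a multi-step Newton-type solver: the Jacobian J is
    evaluated once per outer step and reused, 'frozen', in the m inner
    corrections. The exact scheme (2) additionally involves a lambda-order
    iteration function phi_1, not shown here."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Jx = J(x)                                 # frozen linear operator
        z = x - np.linalg.solve(Jx, F(x))         # Newton predictor
        for _ in range(m):
            z = z - np.linalg.solve(Jx, F(z))     # cheap frozen corrections
        if np.linalg.norm(z - x) < tol:
            return z
        x = z
    return x

# Usage on a small diagonal test system with root (1, 2):
F = lambda x: np.array([x[0]**2 - 1.0, x[1]**2 - 4.0])
J = lambda x: np.diag([2.0 * x[0], 2.0 * x[1]])
root = frozen_multistep(F, J, [1.5, 2.5])
print(root)
```

Reusing the factored operator keeps the cost per sub-step at one function evaluation plus one triangular solve, which is the cheap way of raising the order mentioned above.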
There are plenty of research articles on iterative solvers. The local convergence analysis of these solvers traditionally requires Taylor expansions, so the operator involved must be sufficiently many times differentiable in a neighborhood of the solution µ. In this way, the convergence order is established, but derivatives of order higher than one, which do not appear in these solvers, restrict their applicability, as we saw with the motivational example. Another problem is that this approach does not provide error estimates on ‖x_n − µ‖ that can be used to predetermine the number of steps required to attain a prescribed error tolerance. The uniqueness of the solution µ also cannot be established in any set containing it. Moreover, the choice of the starting guess is a shot in the dark. Therefore, it is important to find a technique other than the preceding one, and this is what we offer in this article. Furthermore, the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [27] are used to compute the convergence order (explained in Remark 1 (d)). These formulas do not require derivatives of order higher than one, and in the case of the ACOC, knowledge of µ is not needed. It is worth noting that the iterates are obtained by using (2), which involves the first derivative; hence, these iterates also depend only on the first derivative (see Remark 1 (d)). Our techniques can be applied to other solvers to extend their applicability in a similar fashion.

Local Convergence
Here, we present a study of local convergence for solver (2). For this, we consider a function ϕ0 : [0, ∞) → [0, ∞) that is nondecreasing and continuous with ϕ0(0) = 0, and we assume that the equation ϕ0(t) = 1 has a minimal positive solution r0. Define functions g1, g2, h1 and h2 on the interval [0, r0) in terms of a function ϕ, which is also nondecreasing and continuous and satisfies ϕ(0) = 0. We have h1(0) = h2(0) = −1, and h1(t), h2(t) → ∞ as t → r0−. Then, by the intermediate value theorem, the functions h1 and h2 have solutions in the interval (0, r0); call r1 and r2 the smallest such solutions of h1 and h2, respectively. Assume that the equation p(t) = 1 has a minimal positive solution rp, and consider functions defined on the interval [0, r), where r = min{r0, rp}. Consider functions g(i) and h(i), i = 1, 2, . . ., m, on [0, r). Then, h(i)(0) = −1 and h(i)(ζ) → ∞ as ζ → r−. Define r(i) to be the minimal solution of h(i) in (0, r).
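The radii r1, r2 and r(i) are roots of scalar equations with the stated sign pattern (h(0) = −1 and h(t) → ∞ near the right endpoint), so bisection recovers them numerically. A minimal sketch under assumed constant Lipschitz data L0 = L = 1 and the classical Newton majorant for g1 (our choice of majorant, not a display from the text):

```python
def smallest_root(h, right, tol=1e-12):
    """Bisection for the smallest solution of h(t) = 0 in (0, right),
    using the sign pattern from the text: h(0) = -1 < 0 and h(t) -> +inf
    as t -> right (so, for an increasing majorant, there is one root)."""
    a, b = tol, right - tol
    while b - a > tol:
        mid = 0.5 * (a + b)
        if h(mid) < 0.0:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

# Hypothetical constant data: phi0(t) = L0*t, phi(t) = L*t with L0 = L = 1.
L0, L = 1.0, 1.0
r0 = 1.0 / L0                                        # minimal solution of phi0(t) = 1
h1 = lambda t: L * t / (2.0 * (1.0 - L0 * t)) - 1.0  # h1 = g1 - 1, Newton majorant
r1 = smallest_root(h1, r0)
print(r1)                                            # root of t/(2(1-t)) = 1, i.e. 2/3
```

The same routine applies unchanged to h2 and to each h(i) once their defining majorant functions are supplied.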

Remark 1.
(a) It is clear from (13) that we can drop hypothesis (15) and use (4) instead, provided that the function ϕ0 is strictly increasing. (c) If ϕ0, w, v are constant functions, then the radii admit closed forms, where r1 is the radius of convergence for Newton's solver [14]. Rheinboldt [26] and Traub [6] also provided radii of convergence, which are smaller than r1, as did Argyros [1,2], where ϕ1 is a constant for (9) on D. (d) By adopting conditions up to the 7th-order derivative of the operator F, the order of convergence of solver (2) was given in Reference [7]; we assume hypotheses only on the 1st-order derivative of F. To estimate the order of convergence, we adopted the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [28,29], respectively; these definitions can also be found in Reference [27]. They do not require derivatives of order higher than one. Indeed, notice that to generate the iterates x_n, and therefore to compute ξ and ξ*, we only need formula (2), which involves the first derivative. It is vital to note that the ACOC does not require prior knowledge of the exact root µ. (e) Consider F satisfying the autonomous differential equation [1,2] F′(x) = P(F(x)), where P is a given continuous operator. Then F′(x*) = P(F(x*)) = P(0), so our results apply without knowledge of x*. For example, choose F(x) = e^x − 1; then we can select P(x) = x + 1.
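The COC and ACOC of Remark 1 (d) reduce to simple logarithm ratios of consecutive error norms and step norms, respectively. A minimal sketch (the Newton test problem and iterate count are our choices):

```python
import math

def coc(xs, mu):
    """COC from the last three iterates and the known root mu (Remark 1(d))."""
    e = [abs(x - mu) for x in xs[-3:]]            # last three error norms
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

def acoc(xs):
    """ACOC from the last four iterates; no knowledge of mu is required."""
    d = [abs(b - a) for a, b in zip(xs[-4:-1], xs[-3:])]   # last three step norms
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])

# Newton's method on f(x) = x^2 - 2 as a stand-in iteration:
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
print(acoc(xs), coc(xs[:4], math.sqrt(2)))        # both near 2 (quadratic order)
```

Both estimates approach Newton's theoretical order 2; note that the ACOC uses only computable step norms, matching the remark that µ is not needed.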

Concrete Applications
Here, we illustrate the theoretical consequences suggested in Section 2. We choose λ = 1 and ϕ1(xσ, yσ) = yσ − F′(yσ)⁻¹F(yσ) in all examples. Example 1. We study the mixed Hammerstein-like equation [4,18], whose solution µ(s) = 0 coincides with a zero of (1) for an operator F : A → A. The functions ϕ0, ϕ and v then follow by Remark 1. However, F′ is not Lipschitz, so the earlier studies [4,7] are not applicable to this problem; our technique, on the other hand, does not exhibit this kind of limitation. The different radii of convergence are listed in Table 1. We notice that the radius of convergence decreases as i increases, as expected, since we trade a higher order of convergence for a smaller domain of starting points.
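The Hammerstein kernel and nonlinearity did not survive extraction, so the discretization below uses hypothetical stand-ins (the Green's-function kernel G and f(u) = u²/2 are our choices), purely to illustrate a Nyström-plus-Newton treatment of such integral equations with solution µ(s) = 0:

```python
import numpy as np

# Hypothetical stand-ins for the lost kernel and nonlinearity:
def G(s, t):
    # Green's-function kernel on [0, 1] x [0, 1]
    return np.where(t <= s, (1.0 - s) * t, s * (1.0 - t))

def f(u):
    return 0.5 * u**2

n = 20
nodes, weights = np.polynomial.legendre.leggauss(n)
t = 0.5 * (nodes + 1.0)                     # Gauss-Legendre nodes mapped to [0, 1]
w = 0.5 * weights
K = G(t[:, None], t[None, :]) * w[None, :]  # K[i, j] = w_j * G(t_i, t_j)

def F(x):
    """Nystrom discretization of F(x)(s) = x(s) - int_0^1 G(s,t) f(x(t)) dt."""
    return x - K @ f(x)

def J(x):
    return np.eye(n) - K * x[None, :]       # since f'(u) = u

x = 0.1 * np.ones(n)                        # start near the solution mu(s) = 0
for _ in range(10):
    x = x - np.linalg.solve(J(x), F(x))
print(float(np.max(np.abs(x))))             # Newton drives x to the zero solution
```

The quadrature collapses the integral operator into the matrix K, after which any of the solvers discussed above can be applied to the finite-dimensional system.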
Example 2. The movement of a particle in three dimensions is described by a system of differential equations, from which we obtain a nonlinear system F(x) = 0 with solution µ = (0, 0, 0)ᵀ, along with the corresponding functions ϕ0, ϕ and v. The different radii of convergence are listed in Table 2. We notice that the radius of convergence decreases as i increases, as expected, since we trade a higher order of convergence for a smaller domain of starting points. Example 3. Let us choose E1 = E2 = S, equipped with the max norm. Set A = Ū(0, 1) and choose a function F on A. Then, we have that ϕ0(ζ) = 15ζ, ϕ(ζ) = 30ζ and v(ζ) = 2. So, we obtain Table 3, where we calculated the distinct radii of convergence.
Table 3. Distinct radii of convergence.
i    r1          r2        r(i)        r
1    0.0333333   0.0625    0.0625      0.0625
2    0.0333333   0.0625    0.0324524   0.0324524
3    0.0333333   0.0625    0.0296809   0.0296809
4    0.0333333   0.0625    0.0270781   0.0270781
We notice that the radius of convergence decreases as i increases, as expected, since we trade a higher order of convergence for a smaller domain of starting points. The radii for Example 4, listed in Table 4, exhibit the same behavior.
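For constant data as in Remark 1 (c), r1 also has a classical closed form (Argyros's Newton radius; the formula is an assumption on our part, but it reproduces the r1 column above exactly):

```python
# With constant data phi0(t) = L0*t, phi(t) = L*t, the Newton radius is
# r1 = 2 / (2*L0 + L) (Argyros). Using the Example-3 constants L0 = 15, L = 30:
L0, L = 15.0, 30.0
r1 = 2.0 / (2.0 * L0 + L)
print(round(r1, 7))   # 0.0333333, matching the r1 column of Table 3
```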

Application of Our Scheme on Large System of Nonlinear Equations
In Tables 5-7 we denote by j, ‖F(x_j)‖, ‖x_{j+1} − x_j‖ and ξ* the iteration index, the absolute residual error, the error between two consecutive iterations and the approximate computational order of convergence, respectively.
We depict the numerical outcomes for Example 5, a boundary value problem, in Table 5. We have computed the ACOC and observed that, as i increases, so does the ACOC.
Example 6. We choose the prominent 2D Bratu problem [31,32], given by PDE (62). Let us assume that Θi,j = u(xi, tj) is the numerical result over the grid points of the mesh. In addition, we consider τ1 and τ2 to be the numbers of steps in the x and t directions, respectively, and h and k the corresponding step sizes. In order to find the solution of PDE (62), we adopt a finite-difference approach, which yields the succeeding system of nonlinear equations (SNE). By choosing τ1 = τ2 = 11, h = 1/11, and C = 0.1, we get a large SNE of size 100 × 100. The starting point is x0 = 0.1(sin(πh) sin(πk), sin(2πh) sin(2πk), . . ., sin(10πh) sin(10πk))ᵀ, and the results are depicted in Table 6.
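A minimal finite-difference sketch of this construction, assuming the Bratu form u_xx + u_tt + C·e^u = 0 on the unit square with zero Dirichlet data, a 5-point stencil and plain Newton; the grid follows Example 6 (h = k = 1/11, 100 unknowns), though we replace the sinusoidal starting vector above by zero for simplicity:

```python
import numpy as np

C, n = 0.1, 10                      # 10 interior points per direction -> 100 unknowns
h = 1.0 / (n + 1)

def F(u):
    """Residual of the 5-point discretization of u_xx + u_tt + C*exp(u) = 0."""
    U = np.zeros((n + 2, n + 2))    # embed with zero Dirichlet boundary values
    U[1:-1, 1:-1] = u.reshape(n, n)
    lap = (U[:-2, 1:-1] + U[2:, 1:-1] + U[1:-1, :-2] + U[1:-1, 2:]
           - 4.0 * U[1:-1, 1:-1]) / h**2
    return (lap + C * np.exp(U[1:-1, 1:-1])).ravel()

def J(u):
    """Jacobian: 2-D Laplacian (Kronecker form) plus the diagonal C*exp(u)."""
    T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    A = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))
    return A + np.diag(C * np.exp(u))

u = np.zeros(n * n)                 # zero starting guess (x0 in the text differs)
for _ in range(20):
    du = np.linalg.solve(J(u), -F(u))
    u += du
    if np.linalg.norm(du) < 1e-12:
        break
print(float(np.linalg.norm(F(u))))  # residual norm at the computed solution
```

Any of the multi-step solvers discussed earlier can replace the plain Newton loop here; only the linear solves change.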
Example 7. In order to assess a large system of nonlinear equations of size 200 × 200, we pick σ = 200. In addition, we consider a suitable starting approximation for this problem; the results are given in Table 7. We have computed the ACOC and observed that, as i increases, so does the ACOC.

Concluding Remarks
Recently, there has been a surge in the development of multi-step solvers for nonlinear equations. In this article, we presented a unifying local convergence analysis of solver (2), relying only on the first derivative. In this way, we expand the applicability of these solvers. Notice that earlier studies of special cases of (2) use derivatives of order higher than one, which do not appear in the solver; moreover, they provide no bounds on the distances ‖xσ − µ‖ and no uniqueness theorems. We, in contrast, provide computable bounds and uniqueness results, and this is where the novelty of our article lies. Numerical examples and applications are also given to test the convergence conditions; in particular, we solve the 2D Bratu problem, a boundary value problem, and a system of nonlinear equations of size 200 × 200.

Author Contributions: R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing-Original Draft Preparation; Writing-Review & Editing. All authors have read and agreed to the published version of the manuscript.
Funding: Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. D-237-130-1440.

Table 1. Distinct radii of convergence.

Table 2. Distinct radii of convergence.

Table 4. Distinct radii of convergence.

Table 5. Computational results on the boundary value problem in Example 5.

Table 6. Computational results of the 2D Bratu problem in Example 6.

Table 7. Computational results on Example 7.