Convergence of Higher Order Jarratt-Type Schemes for Nonlinear Equations from Applied Sciences

Symmetries are important in studying the dynamics of physical systems, and such studies in turn reduce to solving equations. Jarratt's method and its variants have been used extensively for this purpose. That is why, in the present study, a unified local convergence analysis of higher order Jarratt-type schemes is developed for equations given on a Banach space. Such schemes have been studied on the multidimensional Euclidean space provided that high order derivatives (not appearing in the schemes) exist. In addition, no error estimates or results on the uniqueness of the solution that can be computed are given. These problems restrict the applicability of the methods. We address all these problems by using only the first order derivative (the only derivative appearing in the schemes). Hence, the region of applicability of existing schemes is enlarged. Our technique can be used on other methods due to its generality. Numerical experiments from chemistry and other disciplines of applied sciences complete this study.


Introduction
Problems from applied sciences such as mathematics, biology, chemistry, and physics (including symmetries), to mention a few, are converted to nonlinear equations which are solved by iterative methods, since exact solutions are hard to find. Let T_1 and T_2 denote Banach spaces and D ⊂ T_1 stand for an open and convex set. Moreover, we use the notation L(T_1, T_2) for the space of continuous linear operators mapping T_1 into T_2. The task of determining a solution x* of the equation

F(x) = 0, (1)

where F : D → T_2 is differentiable in the sense of Fréchet, is of extreme significance in computational disciplines. Finding x* in closed form is desirable but rarely attainable. That is why one resorts to developing iterative schemes approximating x*, provided certain convergence criteria hold. One of the most basic and popular iterative methods is Newton's method [1], defined by

x_{σ+1} = x_σ − F′(x_σ)^{−1} F(x_σ),

which has second order convergence. However, it is a one-point method, and one-point methods have several issues concerning order and computational efficiency. For instance, if we want to attain a third order one-point iterative method, then we need the second derivative, which is expensive to compute in general. That is why multipoint schemes are considered, such as scheme (2) with α ∈ R − {0}, where A_λ : T_1 × T_1 → T_1 is a continuous operator of a scheme of order λ ≥ 2, B : D → L(T_1, T_2), and C : D → L(T_1, T_2). We have left A_λ as general as possible in order to include numerous special cases. As an example, A_λ can be A_4(x_k, y_k) = y_k − F′(y_k)^{−1} F(y_k). Other choices are given by (31), (32), and in the paper [33]. Schemes (2) and (3) were shown to be of order (λ + 2) and 2j, respectively, in [33], when T_1 = T_2 = R^j, j = 1, 2, 3, . . . . However, high order derivatives were used there in order to demonstrate the convergence order.
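For illustration, a minimal Python sketch of Newton's method applied to the scalar example F(x) = e^x − 1 (whose simple root is x* = 0, the example revisited in Remark 1 below) might look as follows; the starting point and tolerance here are arbitrary choices made for the sketch, not values taken from this study.

```python
import math

def newton(f, fprime, x0, tol=1e-12, maxit=100):
    # Newton's method: x_{k+1} = x_k - F'(x_k)^{-1} F(x_k)
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:  # stop once the correction is negligible
            return x
    return x

# F(x) = e^x - 1 with F'(x) = e^x; the simple root is x* = 0
root = newton(lambda x: math.exp(x) - 1.0, math.exp, 0.5)
```

Starting from x_0 = 0.5, the iterates converge quadratically toward x* = 0, consistent with the second order convergence stated above.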
Moreover, we refer the reader to [33] for a plethora of choices of α, B σ , C σ leading to already studied schemes or new schemes. Some choices are also given by us in the numerical section of this study. The computational efficiencies and other benefits were also presented in [33].
We have some concerns with the aforementioned studies: (a) The convergence order was established by utilizing Taylor series expansions requiring higher order derivatives (not appearing in the schemes); (b) Lack of computable estimates on the distances ‖x_σ − x*‖; (c) Results related to the uniqueness of the solution are not given; (d) We do not know in advance how many iterates are needed to achieve a prescribed error tolerance; (e) Earlier studies have been made only on the multidimensional Euclidean space.
The novelty of our study lies in the fact that we address concerns (a)-(e) using only the derivative appearing in the schemes, as well as very general conditions. This way, we also provide computable upper bound estimates on ‖x_σ − x*‖ and results on the uniqueness of solutions. Moreover, our results are obtained in the more general setting of a Banach space. Hence, the region of applicability of these schemes is extended. It is worth noticing that computing the convergence radii shows how difficult (and how limited) the choice of initial points is. Our idea is so general that it can be used on other schemes in a similar fashion. We suppose from now on that x* is a simple solution of Equation (1).
It is worth noticing that symmetry principles are foundational in quantum physics and the study of the micro-world. Once these problems are converted to equations of the form (1), their solutions are hard to obtain in closed form or analytically. That is why such schemes are important to study.
The rest of the study includes: the local convergence of these schemes in Section 2 and the numerical experiments in Section 3. In particular, work similar to that in our Section 3 (see Example 2 and Example 4) can be done for the problems in [13,14,25]. Finally, Section 4 is devoted to the concluding remarks.
Local Convergence

The local convergence analysis of scheme (2) relies on conditions (A), with the scalar functions ψ_k as previously defined.

Theorem 1. Suppose conditions (A) hold and choose x_0 ∈ S(x*, ρ) − {x*}. Then, the sequence {x_σ} starting from x_0 and generated by scheme (2) is well defined in S(x*, ρ), stays in S(x*, ρ) for all σ = 0, 1, 2, . . ., and converges to x*. Moreover, the only solution of Equation (1) in the set S_1 is x*.
Proof. The following assertions shall be shown using mathematical induction, where the functions ψ_k are as given previously and the radius ρ is as defined by (7).

Remark 1.
(a) By (A_1) and the estimation ‖F′(x*)^{−1} F′(x)‖ ≤ 1 + ‖F′(x*)^{−1}(F′(x) − F′(x*))‖ ≤ 1 + µ_0(‖x − x*‖), the condition involving µ_1 can be dropped and µ_1(τ) can be replaced by µ_1(τ) = 1 + µ_0(τ).

(b) The results obtained here can be used for operators F satisfying the autonomous differential equation [9,10] of the form

F′(x) = P(F(x)),

where P is a known continuous operator. Since F′(x*) = P(F(x*)) = P(0), we can apply the results without actually knowing the solution x*. Let us consider the example F(x) = e^x − 1.
Then, we can choose P(x) = x + 1.

(c) If µ_0(τ) = K_0 τ and µ(τ) = K τ, then r_1 = 2/(2K_0 + K) was shown in [9,10] to be the convergence radius for Newton's method. It follows from (7) and the definition of r_1 that the convergence radius ρ of the method (2) cannot be larger than the convergence radius r_1 of the second order Newton's method. As already noted in [9,10], r_1 is at least as large as the convergence ball given by Rheinboldt [28], namely r_R = 2/(3K_1). In particular, for K_0 < K_1 (where K_1 is the Lipschitz constant on D), we have r_R < r_1, and our convergence ball r_1 is at most three times larger than Rheinboldt's. The same value for r_R is given by Traub [1].

(d) Scheme (2) is not changing if we use the conditions of Theorem 1 instead of the stronger conditions given in [33]. Moreover, for the error bounds in practice we can use the Computational Order of Convergence (COC) [32]

COC = ln(‖x_{σ+1} − x*‖ / ‖x_σ − x*‖) / ln(‖x_σ − x*‖ / ‖x_{σ−1} − x*‖),

or the Approximate Computational Order of Convergence (ACOC) [32]

ACOC = ln(‖x_{σ+1} − x_σ‖ / ‖x_σ − x_{σ−1}‖) / ln(‖x_σ − x_{σ−1}‖ / ‖x_{σ−1} − x_{σ−2}‖).

So, the convergence order is obtained in this way without evaluations of derivatives higher than the first Fréchet derivative.
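As a small sketch, the two order estimates above can be computed in Python as follows; the Newton iterates for f(x) = x² − 2 serve only as illustrative data (they are not one of the paper's schemes), and both estimates should come out close to the theoretical order 2.

```python
import math

def coc(errs):
    # Computational Order of Convergence from known errors e_k = |x_k - x*|
    return math.log(errs[-1] / errs[-2]) / math.log(errs[-2] / errs[-3])

def acoc(xs):
    # Approximate COC from the iterates alone (knowledge of x* not needed)
    d = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Illustrative data: Newton iterates for f(x) = x^2 - 2 (order 2 expected)
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

errs = [abs(x - math.sqrt(2.0)) for x in xs]
```

Here `coc(errs)` and `acoc(xs)` both evaluate to roughly 2, confirming the quadratic order without any derivative beyond the first.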
Next, we present the local convergence of scheme (3) in an analogous way. Define the functions ψ̄_k on Ω_0, where q : Ω_0 → Ω is a continuous and non-decreasing function, and suppose the equations ψ̄_k(τ) − 1 = 0 have smallest positive solutions defining the radius ρ̄. It shall be shown that ρ̄ is a convergence radius for scheme (3).
Moreover, replace the second condition in (A_2) by the corresponding condition for scheme (3). Let us call the resulting conditions (A′). Then, as in Theorem 1, we obtain in turn the estimates that also motivate the introduction of the "ψ̄" functions.
Hence, we get the local convergence result for scheme (3).

Numerical Applications
We present computational results based on the theoretical results suggested in this paper. We choose (2) in order to obtain fifth and sixth order iterative procedures; in particular, we have schemes (31) and (32). For more details on the parameter values, please see the article by Zhanlav and Otgondorj [33]. Next, we show how to choose the "ψ" functions for schemes (31) and (32), respectively.
Suppose the equation q_1(τ) = 0 has a smallest solution R_{q_1} ∈ Ω_0 − {0}, where q_1 is defined as follows.

Case β = 0. Set Ω_1 = [0, min{R_0, R_{q_1}}). Define the function ψ_2 : Ω_1 → Ω by: The choice of the functions q_1 and ψ_2 is justified by the estimates: where we also used:

Case β ≠ 0. The preceding calculations for ψ_2 suggest that in this case the function can be defined by: since:

Moreover, suppose the equation q_2(τ) = 0 has a smallest solution R_{q_2} ∈ Ω_1 − {0}. Define the function ψ_3, where p(τ) = µ_0(τ) + µ_0(ψ_1(τ)τ). The choice of the functions q_2 and ψ_3 is justified by the estimates: where we also adopted the estimates:

Next, we find the functions ψ̄_2 and ψ̄_3 (with ψ̄_1 = ψ_1), but for scheme (32), in an analogous way. We can write: where we also used the following estimate: Hence, we define the function ψ̄_2 by: In view of the previous calculations for finding ψ_3 and the third substep of schemes (31) and (32), we define ψ̄_3 : Ω_0 → Ω.

Moreover, to find the radius ρ for schemes (31) and (32), we use equations involving ψ_1, ψ_2, ψ_3 and ψ̄_1, ψ̄_2, ψ̄_3 as defined previously in this section. We compare these methods on the basis of their radii of convergence. In addition, we choose ϵ = 10^{−100} as the error tolerance. The terminating criteria to solve nonlinear systems or scalar equations are: The computations are performed with the package Mathematica 11 using multiple precision arithmetic.

Example 1. Following the example presented in the introduction, for x* = 1, we can set: This way, conditions (A) are satisfied. Then, by solving the equations ψ_k(τ) − 1 = 0, we find the solutions ρ_k and, using (7), we determine ρ. Hence, the conclusions of Theorem 1 hold. In Table 1, we present the numerical values of the radii ρ_k and ρ for Example 1. On the basis of Table 1, method (32) (for α = 1, δ = −1) has a larger radius of convergence than the other mentioned methods. So, we conclude that it is better than the other mentioned methods.
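To make the radius computation concrete, the following Python sketch solves an equation of the form ψ(τ) − 1 = 0 by bisection. The Lipschitz constants K_0, K and the majorant ψ_1(τ) = Kτ/(2(1 − K_0τ)) for the Newton step are hypothetical placeholders (the paper's actual ψ functions depend on the chosen example); they are picked so the result can be checked against the closed form r_1 = 2/(2K_0 + K) from Remark 1(c).

```python
def radius(psi, hi, tol=1e-12):
    # smallest positive root of psi(t) - 1 = 0 via bisection on (0, hi),
    # assuming psi is increasing with psi(0+) < 1 and psi(hi) > 1
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

K0, K = 2.0, 3.0  # hypothetical Lipschitz constants for the sketch
psi1 = lambda t: (K * t / 2.0) / (1.0 - K0 * t)  # assumed Newton majorant
r1 = radius(psi1, hi=1.0 / K0 - 1e-9)
# Remark 1(c) gives the closed form r1 = 2/(2*K0 + K) for this choice
```

For each scheme, the radius ρ is then the minimum of the radii ρ_k obtained from the individual ψ_k, in line with (7).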
The above functions clearly satisfy conditions (A). Then, by solving the equations ψ_k(τ) − 1 = 0, we find the solutions ρ_k and, using (7), we determine ρ. Hence, the conclusions of Theorem 1 hold. We provide the computed radii of convergence for Example 2 in Table 2. Method (32) (for α = 1, δ = −1) has a larger radius of convergence than the other special cases of (31) and (32). So, we conclude that method (32) has a larger set of convergent points than the other mentioned methods.
We can easily verify that the above functions satisfy conditions (A). Then, by solving the equations ψ_k(τ) − 1 = 0, we find the solutions ρ_k and, using (7), we determine ρ. Hence, the conclusions of Theorem 1 hold. We provide the computed radii of convergence for Example 3 in Table 4. We notice from this table that method (32) (for α = 1, δ = −1) has better choices of starting points than the other subcases of (31) and (32), since the other subcases have a smaller domain of convergence in contrast to method (32) (for α = 1, δ = −1). It also shows a consistent computational order of convergence. No doubt the particular case of scheme (31) for α = 2/3, β = 1, δ = −1 consumes the lowest number of iterations. This is to be expected because the initial guess is very close to the required solution.

Example 4.
We choose the prominent 2D Bratu problem [7,30], which is given by

u_xx + u_tt + C e^u = 0, (x, t) ∈ [0, 1] × [0, 1], (34)

with u = 0 on the boundary. Let us assume that Θ_{i,j} = u(x_i, t_j) is the numerical solution at the grid points of the mesh. In addition, we consider that τ_1 and τ_2 are the numbers of steps in the directions of x and t, respectively, and that h and k are the corresponding step sizes. In order to find the solution of PDE (34), we adopt the central finite-difference approach, which yields the succeeding system of nonlinear equations (SNE):

Θ_{i+1,j} + Θ_{i−1,j} + Θ_{i,j+1} + Θ_{i,j−1} − 4Θ_{i,j} + h² C e^{Θ_{i,j}} = 0, i = 2, 3, . . . , τ_1, j = 2, 3, . . . , τ_2. (36)

By choosing τ_1 = τ_2 = 11, h = k = 1/11, and C = 0.1, we get a large SNE of 100 equations in 100 unknowns, which converges to the required root x* (a column vector). Choose T_1 = T_2 = R^100 and D = S̄(x*, 1/5). Then, we have µ_0(τ) = µ(τ) = 7τ and µ_1(τ) = 2.
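A minimal numerical sketch of this discretized system can be written in Python as follows. The zero Dirichlet boundary values, the forward-difference Jacobian, and the use of plain Newton iteration are assumptions made for this baseline sketch; it reproduces the setting of the example, not the higher order schemes (31) and (32) themselves.

```python
import numpy as np

N, H, C = 10, 1.0 / 11.0, 0.1  # 10x10 interior grid, h = k = 1/11, C = 0.1

def bratu_residual(theta):
    # residual of the five-point discretization of u_xx + u_tt + C e^u = 0,
    # zero Dirichlet boundary values assumed
    U = np.zeros((N + 2, N + 2))
    U[1:-1, 1:-1] = theta.reshape(N, N)
    F = (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2]
         - 4.0 * U[1:-1, 1:-1] + H * H * C * np.exp(U[1:-1, 1:-1]))
    return F.ravel()

def newton_solve(theta0, tol=1e-10, maxit=30):
    # plain Newton with a forward-difference Jacobian (baseline only)
    theta = theta0.astype(float).copy()
    for _ in range(maxit):
        F = bratu_residual(theta)
        if np.linalg.norm(F) < tol:
            break
        J = np.empty((theta.size, theta.size))
        eps = 1e-7
        for j in range(theta.size):
            e = np.zeros_like(theta)
            e[j] = eps
            J[:, j] = (bratu_residual(theta + e) - F) / eps
        theta -= np.linalg.solve(J, F)
    return theta

sol = newton_solve(np.zeros(N * N))  # 100 unknowns, started from the zero vector
```

For the small value C = 0.1, the problem is only mildly nonlinear and the iteration converges quickly from the zero vector; the higher order schemes of this paper would replace the Newton step inside the loop.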
This way, conditions (A) are satisfied. Then, by solving the equations ψ_k(τ) − 1 = 0, we find the solutions ρ_k and, using (7), we determine ρ. Hence, the conclusions of Theorem 1 hold. The computational results are depicted in Table 5. On the basis of this table, method (32) (for α = 1, δ = −1) has a larger domain of convergence in contrast to the other particular cases of (31) and (32). It also consumes the same number of iterations as the other mentioned methods.

Remark 2.
We have observed from Examples 1-4 that method (32) (for α = 1, δ = −1) has a larger radius of convergence than the other particular cases of (31) and (32). So, we deduce that this particular case is better than the other particular cases of (31) and (32) in terms of convergent points and domain of convergence. It also shows a consistent computational order of convergence.

Conclusions
A unified local convergence analysis is presented for a family of higher order Jarratt-type schemes on Banach spaces. Our analysis uses only the derivative appearing in these schemes, in contrast to other approaches using derivatives of order λ + 3 and 2j + 1 (not appearing in these schemes) for Schemes (2) and (3), respectively. Hence, the applicability of these schemes is extended. Moreover, our analysis gives computable error distances and answers on the uniqueness of the solution. This was not done in the earlier work [33]. Our idea provides a new way of looking at iterative schemes, so it can extend the applicability of this and other schemes [3-10,13-26]. Finally, numerical experiments are conducted to solve problems from chemistry and other disciplines of applied sciences. Notice in particular that many problems from the micro-world are symmetric.