High Convergence Order Iterative Procedures for Solving Equations Originating from Real Life Problems

Abstract: The foremost aim of this paper is to present a local convergence study of high-order iterative procedures for solving nonlinear problems involving Banach space valued operators. We deploy suppositions only on the first-order derivative of the operator. Our conditions generalize the Lipschitz and Hölder cases considered earlier. Moreover, when specialized to these cases, they provide a larger radius of convergence, tighter bounds on the distances, more precise information on the solution and smaller Lipschitz or Hölder constants. Hence, we extend the applicability of these procedures. Our new technique can also be used to broaden the usage of existing iterative procedures. Finally, we check our results on a good number of numerical examples, which demonstrate that they can handle problems where the earlier studies do not apply.


Introduction
One of the most fundamental problems in numerical analysis concerns how to approximate a locally unique zero λ* of S(λ) = 0, where S : ∆ ⊂ E_1 → E_2 is a Fréchet-differentiable operator, E_1, E_2 are Banach spaces and ∆ is a convex subset of E_1. We denote by L(E_1, E_2) the space of bounded linear operators from E_1 to E_2. Approximating a unique solution λ* is vital, since several problems can be transformed to Equation (1) by mathematical modeling [1–8]. However, it is not always possible to obtain λ* in closed form. Therefore, most schemes for solving such problems are iterative. A convergence study of an iterative scheme that uses information about λ* is known as local convergence. Determining the convergence domain of an iterative method is an important task in guaranteeing convergence. Hence, it is essential to estimate the radius of convergence.
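As a concrete illustration of such an iterative scheme, the classical Newton iteration λ_{l+1} = λ_l − S′(λ_l)^{−1} S(λ_l) can be sketched in a finite-dimensional setting. The function names and the scalar test problem below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def newton(S, dS, lam0, tol=1e-12, max_iter=50):
    """Approximate a locally unique zero lam* of S(lam) = 0.

    S    : callable R^n -> R^n (finite-dimensional stand-in for the
           Banach space valued operator)
    dS   : callable returning the Jacobian (Frechet derivative) of S
    lam0 : initial guess, which must be close enough to lam*
    """
    lam = np.atleast_1d(np.asarray(lam0, dtype=float))
    for _ in range(max_iter):
        # Newton step: solve S'(lam) * step = S(lam)
        step = np.linalg.solve(np.atleast_2d(dS(lam)), np.atleast_1d(S(lam)))
        lam = lam - step
        if np.linalg.norm(step) < tol:
            break
    return lam

# illustrative scalar problem: S(lam) = lam^3 - 2, zero at 2^(1/3)
root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0)
```

As the text notes, such an iteration only converges when the initial guess lam0 is close enough to the zero, which is exactly why estimating the radius of convergence matters.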
Then, obviously, the third-order derivative S‴(t) is unbounded on ∆. There is a plethora of research articles on iterative schemes. The initial guess λ_0 must be close enough to the required solution for guaranteed convergence. However, such an analysis gives no idea of how to choose λ_0, how to find a convergence radius, the bounds on ‖λ_l − λ*‖ or the uniqueness results. We address these problems for the method in Equation (2) in Section 2.
We enlarge the applicability of the scheme in Equation (2) by adopting hypotheses only on the first-order derivative of S together with generalized conditions. In addition, we avoid the use of Taylor series expansions, so higher-order derivatives are not needed to establish the convergence order of the scheme in Equation (2). We adopt the COC and ACOC for the order of convergence, which avoid higher-order derivatives (see Remark 1 (d)). When the generalized conditions are specialized to the Lipschitz case (see Remark 1 (a)) or the Hölder case [1] (see Remark 1 (c)), the advantages mentioned in the Introduction are obtained.
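In the Lipschitz or Hölder specializations, a constant w with ‖S′(λ) − S′(η)‖ ≤ w‖λ − η‖^β must be available. A minimal numerical sketch of estimating such a constant for a scalar operator by sampling grid pairs; the sampling approach and all names here are illustrative assumptions, not the paper's method:

```python
import numpy as np

def holder_constant(dS, a, b, beta=1.0, n=801):
    """Grid estimate of the smallest w with
    |S'(x) - S'(y)| <= w * |x - y|**beta on [a, b] (beta = 1: Lipschitz)."""
    xs = np.linspace(a, b, n)
    vals = dS(xs)
    dx = np.abs(xs[:, None] - xs[None, :])   # all pairwise |x - y|
    dv = np.abs(vals[:, None] - vals[None, :])
    mask = dx > 0                             # exclude x = y
    return float(np.max(dv[mask] / dx[mask] ** beta))

# S(lam) = e^lam - 1 on [0, 1]: S'(lam) = e^lam, with Lipschitz constant e
w = holder_constant(np.exp, 0.0, 1.0)
```

Such a grid estimate only bounds w from below; the theoretical constants used in the paper are obtained analytically on ∆ (or the smaller set ∆_0).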
Let U(z, ρ) and Ū(z, ρ) stand, respectively, for the open and closed balls in E_1 with center z ∈ E_1 and radius ρ > 0.

Remark 1.
(a) It is clear from Equation (12) that the condition in Equation (14) can be relaxed and adopted as follows, since ∆_0 ⊆ ∆. Further, Singh et al. [1] considered the corresponding conditions for each λ, η ∈ ∆ in the Hölder case for β ∈ [4/5, 5/4] (corresponding to Equation (4)). In our case, the analogous condition holds, since ∆_0 ⊆ ∆. Hence, the improvements stated in the Abstract of this paper hold for w < w̄ (see also the numerical examples).
Therefore, the convergence radius τ attains its maximum value τ_1, and τ_1 is the convergence radius of Newton's method. Rheinboldt [22] and Traub [8] provided the following convergence radius instead of τ_1:

τ_TR = 2/(3 w_1).

On the other hand, Argyros [2,3] proposed the convergence radius

τ_A = 2/(2 w_0 + w_1),

where w_1 is the Lipschitz constant for Equation (8) on ∆. However, we have τ_TR ≤ τ_A. The convergence radius q adopted in [24] is smaller than the radius τ_DS proposed by Dennis and Schnabel [3]. However, q cannot be computed using the Lipschitz constants.

(d) By adopting the fifth-order derivative of S, the convergence order of the scheme in Equation (2) was demonstrated in [24]. In contrast, our approach requires hypotheses only on the first-order derivative of S. To obtain the convergence order, we adopt the computational order of convergence (COC)

ξ = ln(‖λ_{l+2} − λ*‖ / ‖λ_{l+1} − λ*‖) / ln(‖λ_{l+1} − λ*‖ / ‖λ_l − λ*‖), for each l = 0, 1, 2, . . . (49)

or the approximate computational order of convergence (ACOC) [19]

ξ* = ln(‖λ_{l+2} − λ_{l+1}‖ / ‖λ_{l+1} − λ_l‖) / ln(‖λ_{l+1} − λ_l‖ / ‖λ_l − λ_{l−1}‖), for each l = 1, 2, . . .

Neither technique requires any kind of derivative(s). It is also vital to note that the exact zero λ* is not needed in the case of ξ*.

(e) Consider an operator S satisfying the autonomous differential equation [2,3]

S′(λ) = T(S(λ)),

where T is a given continuous operator. Since S′(λ*) = T(S(λ*)) = T(0), we can use our results without prior knowledge of the required solution λ*. For example, for S(λ) = e^λ − 1 we obtain T(λ) = λ + 1.

(f) In view of the estimates above, we can similarly replace the terms w((1 + φ_1(α))α) and w((1 + φ_2(α))α) in the definitions of the functions φ_2 and φ_3. If w_0 and w are constants, then the new functions φ_2 and φ_3 are tighter than the old ones, leading to a larger τ and tighter error bounds on the distances ‖λ_l − λ*‖ (if w_0 < w).
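The COC and ACOC of Remark 1 (d) can be computed from the iterates alone. A minimal sketch, using Newton iterates for the scalar problem λ² − 2 = 0 as illustrative data (not an example from the paper):

```python
import numpy as np

def coc(iters, lam_star):
    """Computational order of convergence (needs the exact zero lam*)."""
    e = np.abs(np.asarray(iters) - lam_star)
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

def acoc(iters):
    """Approximate COC: only successive differences, no lam* required."""
    d = np.abs(np.diff(np.asarray(iters)))
    return np.log(d[2:] / d[1:-1]) / np.log(d[1:-1] / d[:-2])

# illustrative data: Newton iterates for lam^2 - 2 = 0 from lam_0 = 1
its = [1.0]
for _ in range(4):
    x = its[-1]
    its.append(x - (x * x - 2) / (2 * x))

order = acoc(its)[-1]  # tends to 2 for this second-order scheme
```

Note that `acoc` never references the zero lam*, matching the remark that the exact solution is not needed for ξ*.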

Concrete Examples
Here, we test the convergence conditions using concrete examples.
We use Equation (54), where t_k and w_k are the abscissas and weights, respectively. Denoting the approximations of x(t_i) by x_i (i = 1, 2, 3, ..., 8), we obtain the following 8 × 8 system of nonlinear equations. Using the Gauss–Legendre quadrature formula, we obtained the values of t_k and w_k for k = 8, which are depicted in Table 1.
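The abscissas t_k and weights w_k of Table 1 come from the 8-point Gauss–Legendre rule. A sketch of generating them with NumPy, assuming (as is usual for such integral-equation discretizations) that the rule is mapped from the reference interval [−1, 1] to [0, 1]:

```python
import numpy as np

# 8-point Gauss-Legendre nodes and weights on the reference interval [-1, 1]
t, w = np.polynomial.legendre.leggauss(8)

# map to [0, 1], the integration interval assumed for the discretization
t01 = 0.5 * (t + 1.0)   # abscissas t_k
w01 = 0.5 * w           # weights w_k

# sanity check: the rule is exact for low-degree polynomials,
# e.g. the integral of t^3 over [0, 1] is 1/4
approx = float(np.sum(w01 * t01**3))
```

An n-point Gauss–Legendre rule integrates polynomials up to degree 2n − 1 exactly, so the 8-point rule is more than sufficient for the kernel discretization here.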
The corresponding radii are described in Table 6. Example 5. By the example in the Introduction, we get L = L_0 = 96.662907 and M = 2. The radii of convergence of the method in Equation (2) for Example 5 are described in Table 7.
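For Example 5, with L = L_0 = 96.662907, candidate convergence radii can be evaluated directly. The sketch below assumes the classical Rheinboldt/Traub radius 2/(3L) and the Argyros radius 2/(2L_0 + L); these are the standard formulas for such local convergence studies, and they coincide here because L_0 = L:

```python
# Convergence radii for Example 5 with L = L_0 = 96.662907 (assumed
# classical formulas: Rheinboldt/Traub 2/(3L), Argyros 2/(2L_0 + L))
L0 = L = 96.662907

tau_TR = 2.0 / (3.0 * L)       # Rheinboldt/Traub radius
tau_A = 2.0 / (2.0 * L0 + L)   # Argyros radius; equals tau_TR when L0 = L

print(tau_TR, tau_A)
```

The tiny radius (about 0.0069) reflects the large Lipschitz constant of this example, which is precisely why tighter constants on ∆_0 ⊆ ∆ pay off.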