Convergence Analysis and Dynamical Nature of an Efficient Iterative Method in Banach Spaces

Abstract: We study the local convergence of a fifth order method and its multi-step version in Banach spaces. The hypotheses used are based on the first Fréchet derivative only. The new approach provides a computable radius of convergence, error bounds on the distances involved, and estimates on the uniqueness of the solution. Such estimates are not provided in approaches that use Taylor expansions involving higher order derivatives, which may not exist or may be very expensive or impossible to compute. Numerical examples are provided to validate the theoretical results. The convergence domains of the methods are also examined through the complex geometry revealed by drawing basins of attraction. The boundaries of the basins exhibit fractal-like shapes, about which the basins are symmetric.


Introduction
Let X, Y be Banach spaces and D ⊆ X a closed and convex set. In this study, we locate a solution x* of the nonlinear equation

G(x) = 0, (1)

where G : D ⊆ X → Y is a Fréchet-differentiable operator. Many problems in computational sciences can be written in the form (1); see, for example, [1][2][3]. The solutions of such equations are rarely attainable in closed form, which is why most methods for solving them are iterative. The best-known method for approximating a simple solution x* of Equation (1) is Newton's method, given by

x_{m+1} = x_m − G'(x_m)^{−1} G(x_m), for each m = 0, 1, 2, . . . , (2)

which has a quadratic order of convergence. To attain higher orders of convergence, a number of modified Newton or Newton-like methods have been proposed in the literature (see [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20] and the references cited therein). In particular, Sharma and Kumar [18] recently proposed a fifth order method for approximating the solution of G(x) = 0 using a Newton–Chebyshev composition, defined for each m = 0, 1, 2, . . . by z_m = y_m − Γ_m G(y_m), where Γ_m = G'(x_m)^{−1} and [z_m, y_m; G] is the first order divided difference of G. The method has been shown to be computationally more efficient than existing methods of a similar nature.

An important part of the development of an iterative method is the study of its convergence. This study is usually divided into two categories, namely semilocal and local convergence. Semilocal convergence is based on information around an initial point and gives criteria that ensure the convergence of the iteration procedure. Local convergence is based on information about a convergence domain around a solution and provides estimates of the radii of the convergence balls. Local results are important since they indicate the degree of difficulty in choosing initial points.
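Newton's iteration (2) is the building block of the compositions discussed above. The following minimal scalar sketch is illustrative only: the test function x² − 2 and all identifiers are our own choices, not anything from the paper.

```python
# Newton's method (2): x_{m+1} = x_m - G'(x_m)^{-1} G(x_m).
# Minimal scalar sketch; G, dG, and the stopping rule are illustrative.

def newton(G, dG, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - G(x)/G'(x) until |G(x)| < tol or max_iter is hit."""
    x = x0
    for _ in range(max_iter):
        fx = G(x)
        if abs(fx) < tol:
            break
        x = x - fx / dG(x)
    return x

# Example: solve x^2 - 2 = 0 starting from x0 = 1.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

The quadratic convergence of (2) roughly doubles the number of correct digits per step, which is why only a handful of iterations are needed here.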
There exist many studies which deal with the local and semilocal convergence analysis of iterative methods such as [3][4][5][7][8][9][10][11]13,16,19,[21][22][23]. The semilocal convergence of the method (3) in Banach spaces has been established in [18]. In the present work, we study the local convergence of this method and its multi-step version, including the computable radius of convergence, error bounds on the distances involved, and estimates on the uniqueness of the solution.
We summarize the contents of the paper. In Section 2, the local convergence of method (3), including the radius of convergence, error bounds, and uniqueness results, is studied. The generalized multi-step version is presented in Section 3. Numerical examples verifying the theoretical results are presented in Section 4. In Section 5, the basins of attraction are studied to visually check the convergence domains of the methods. Finally, some conclusions are reported in Section 6.
Let U(v, ρ) and Ū(v, ρ) denote the open and closed balls in X, respectively, with radius ρ > 0 and centre v ∈ X.
Using the above notation, we now describe the local convergence analysis of method (3).

Theorem 1. Suppose the stated conditions hold and Ū(x*, r) ⊆ D, where r is defined by (5). Then, for each m = 0, 1, . . ., the sequence {x_m} generated by method (3) for x_0 ∈ U(x*, r) − {x*} is well defined, remains in U(x*, r), and converges to x*. Furthermore, the corresponding error estimates hold, where the "g" functions are as defined previously. Moreover, if there exists T ∈ [r, 2/L_0) such that U(x*, T) ⊂ D, then x* is the only solution of G(x) = 0 in Ū(x*, T).

Generalized Method
The multistep version of (3), consisting of q + 1 (q ∈ N) steps, is expressed as scheme (30). Next, we show that the generalized scheme (30) possesses convergence order 2q + 3.

Order of Convergence
The definition of the divided difference is required to derive the convergence order of (30). We recall the result on Taylor's expansion of vector functions (see [24]):

Lemma 1. Let G : D ⊂ R^n → R^n be r-times Fréchet differentiable in a convex set D ⊂ R^n. Then, for any x, h ∈ R^n, the following expansion holds:

G(x + h) = G(x) + G'(x)h + (1/2!)G''(x)h² + · · · + (1/(r − 1)!)G^(r−1)(x)h^(r−1) + R_r,

where ||R_r|| ≤ (1/r!) sup_{0≤t≤1} ||G^(r)(x + th)|| ||h||^r and h^r = (h, h, . . . (r times) . . . , h).
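Lemma 1 can be checked numerically in the scalar case. The snippet below is an illustration of ours, with G(x) = eˣ and r = 3: it verifies that the Taylor remainder stays below the stated bound.

```python
# Numerical check of Lemma 1's remainder bound for a scalar function:
# with G(x) = exp(x) and r = 3, |R_3| <= (1/3!) * sup_{0<=t<=1} |G'''(x+th)| * |h|^3.

import math

x, h, r = 0.5, 0.2, 3
taylor_sum = math.exp(x) * (1 + h + h * h / 2)      # terms up to order r - 1 = 2
remainder = math.exp(x + h) - taylor_sum            # R_3 for G = exp
# exp is increasing, so for h > 0 the sup of G''' on the segment is exp(x + h)
bound = (1 / math.factorial(r)) * math.exp(x + h) * abs(h) ** r
```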
The divided difference operator [·, · ; G] : D × D ⊂ R^n × R^n → L(R^n) is defined by (see [24])

[x + h, x; G] = ∫₀¹ G'(x + th) dt.

Expanding G'(x + th) in a Taylor series at the point x and integrating, we obtain

[x + h, x; G] = G'(x) + (1/2)G''(x)h + (1/6)G'''(x)h² + O(h³).

Inverting G'(x_m) yields an expansion of Γ_m = G'(x_m)^{−1} in powers of the error e_m = x_m − x*. We are now in a position to investigate the convergence behaviour of scheme (30). The following theorem is established:

Theorem 2. Suppose that (i) G : D ⊂ R^n → R^n is a sufficiently many times differentiable mapping; (ii) there exists a solution x* ∈ D of the equation G(x) = 0 such that G'(x*) is nonsingular. Then, the sequence {x_m} generated by method (30) for x_0 ∈ D converges to x* with order 2q + 3, q ∈ N.
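The integral form of the divided difference lends itself to a quick numerical check. In the sketch below, an illustration of ours with the scalar choice G(x) = x³, a midpoint-rule quadrature of ∫₀¹ G'(x + th) dt reproduces both the secant property [x + h, x; G]h = G(x + h) − G(x) and the expansion G'(x) + (1/2)G''(x)h + (1/6)G'''(x)h².

```python
# Divided difference in integral form, [x + h, x; G] = int_0^1 G'(x + t h) dt,
# approximated by a composite midpoint rule for the scalar case G(x) = x^3.

def divided_difference(dG, x, h, n=1000):
    """Approximate the integral of dG(x + t*h) over t in [0, 1] (midpoint rule)."""
    return sum(dG(x + (i + 0.5) / n * h) for i in range(n)) / n

G = lambda x: x ** 3
dG = lambda x: 3 * x ** 2
x, h = 1.0, 0.3
dd = divided_difference(dG, x, h)   # should be 3x^2 + 3xh + h^2 for this G
```

For this cubic G the exact value is G'(x) + (1/2)G''(x)h + (1/6)G'''(x)h² = 3x² + 3xh + h², so the quadrature error is the only source of discrepancy.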
Proof. Employing (34) and (38) in the Newton step for y_m, we obtain the error equation for y_m. The Taylor series of G(y_m) about x* yields (40). Substituting (38)–(40) into the first step of (30), we arrive at the conclusion (42). In addition, we have (43). Using (42) and (43) in the second step of method (30), it follows that (44). Expanding G(z_m^(1)) about x*, we then obtain (46). Using (46) in (30), we obtain (47). Since the error of z_m^(1) is known from (44), applying (47) for q = 2, 3 gives the corresponding error estimates for z_m^(2) and z_m^(3).

Proceeding by induction, it follows that ||x_{m+1} − x*|| = O(||x_m − x*||^{2q+3}).
This completes the proof of Theorem 2.

Remark 2.
Note that method (3) uses three function evaluations, one derivative evaluation, and one inverse operator per full iteration and converges to the solution with fifth order. The generalized scheme (30), based on (3) (which corresponds to q = 1), generates methods of increasing convergence orders 5, 7, 9, . . . for q = 1, 2, 3, . . . at the additional cost of only one more function evaluation per iteration for each increment of q. This fulfils the main aim of developing higher order methods while keeping the computational cost under control.
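The order/cost trade-off in Remark 2 can be tabulated. The accounting below is an assumption of ours (q + 3 evaluations per iteration: three G-evaluations plus one derivative at q = 1, and one extra G-evaluation for each further step, ignoring the cost of the inverse operator), and the efficiency index p^(1/cost) is a common yardstick rather than a quantity from the paper.

```python
# Convergence order 2q + 3 of scheme (30) versus an assumed per-iteration
# evaluation count q + 3 (inverse-operator cost ignored).

rows = []
for q in range(1, 5):
    order = 2 * q + 3
    evals = q + 3                       # assumed accounting, see lead-in
    index = order ** (1.0 / evals)      # informational efficiency index
    rows.append((q, order, evals, index))
    print(f"q={q}: order={order}, evals={evals}, index={index:.4f}")
```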
Denote by r^(q) the smallest zero of the function h_µ on the interval (0, r_2), and define r* = min{r_1, r^(q)}.
Proposition 1. Suppose that the conditions of Theorem 2 hold. Then, the sequence {x_m} generated by method (30) for x_0 ∈ U(x*, r*) − {x*} is well defined, remains in U(x*, r*), and converges to x*. Moreover, the estimates (50) and (51) hold. Furthermore, x* is the only solution of G(x) = 0 in D_1 = D ∩ U(x*, r*).
Proof. Only the new estimates (50) and (51) will be shown; the first two estimates follow as in the proof of Theorem 1. We then obtain the corresponding bounds, and similarly for the remaining steps. That is, x_m, y_m, z̄_m, z_m^(i) ∈ U(x*, r*) for i = 1, 2, . . . , q, so lim_{m→∞} x_m = x* and x_{m+1} ∈ U(x*, r*). The uniqueness part is standard, as in Theorem 1.

Numerical Examples
Here, we demonstrate the theoretical results on local convergence proved in Sections 2 and 3. To do so, the methods of the family (30) of orders five, seven, and nine are chosen; we denote them by M_5, M_7, and M_9, respectively. The divided difference in the examples is computed by [x, y; F] = ∫₀¹ F'(y + θ(x − y)) dθ. We consider three numerical examples, presented as follows:

Example 1. Let us consider B = R^{m−1} for a natural number m ≥ 2, equipped with the max-norm and the induced matrix norm for A = (a_ij)_{1≤i,j≤m−1}. Consider the two-point boundary value problem on the interval [0, 1]. Let us denote ∆ = 1/m, u_i = i∆, and v_i = V(u_i) for each i = 0, 1, . . . , m. The discretization of v at the points u_i can be written in the form (54). Using the boundary conditions, we obtain v_0 = v_m = 0, and (54) is equivalent to the system of nonlinear equations F(v) = 0 with v = (v_1, v_2, . . . , v_{m−1}) in the form (55). Using (55), the Fréchet derivative of the operator F is obtained. Choosing m = 11, the corresponding solution is x* = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0)^T, and we have L_0 = L = L_1 = 3.942631477 and M = 2. The parameters for method (30) are given in Table 1. Thus, it follows that the above-considered methods of scheme (30) converge to x* and remain in Ū(x*, r*).

Example 2.
Scientists have determined that the velocity of blood in an artery is a function of the distance of the blood from the artery's central axis (Figure 1). According to Poiseuille's law, the velocity (in cm/s) of blood that is r cm from the central axis of an artery is given by a function of r, where R is the radius of the artery and C is a constant that depends on the viscosity of the blood and the pressure difference between the two ends of the vessel. Suppose that for a particular artery, C = 1.76 × 10^5 cm/s and R = 1.2 × 10^−2 cm. The zero of f_2 is x* = 0.012; then, we have L_0 = L = L_1 = 84.2803 and M = 5280. The parameters for method (30) are given in Table 2.
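As a sketch only: the extracted text does not reproduce f_2 explicitly, but if one assumes the standard Poiseuille form f_2(x) = C(R² − x²), its zero is x* = R = 0.012, matching the value quoted above. Newton's iteration then recovers it quickly.

```python
# Illustrative Example 2 under the ASSUMED Poiseuille form
# f_2(x) = C*(R**2 - x**2); the paper's exact expression is not
# reproduced in the extracted text.

C, R = 1.76e5, 1.2e-2
f2 = lambda x: C * (R * R - x * x)
df2 = lambda x: -2.0 * C * x

x = 0.01                      # initial guess inside the artery radius
for _ in range(50):
    x = x - f2(x) / df2(x)    # Newton step toward x* = R
```

For this particular f_2, the Newton step reduces to x ← (x + R²/x)/2, i.e. the Babylonian square-root iteration for √(R²) = R, so convergence from any positive guess is guaranteed.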
It follows that the above-considered methods of scheme (30) will converge to x* and remain in Ū(x*, r*) if r* is chosen as shown in Table 2.

Example 3. The relationship between the Mach number M and the flow area A, derived by Zucrow and Hoffman [25], is given by (57), where A* is the choking area (i.e., the area where M = 1) and γ is the specific heat ratio of the flowing gas (see Figure 4). For each value of ε, two values of M exist: one less than unity (subsonic flow) and one greater than unity (supersonic flow). For the values ε = 5.00 and γ = 1.4, Equation (57) becomes a nonlinear equation f_3(x) = 0, where x = M. The graph of the function f_3(x) is shown in Figure 5, and its zero is x* = 0.116689. Then, we have L = L_0 = L_1 = 8.137146 and M = 0.610065. The parameters for method (30) are given in Table 3:

        M_5 (q = 1)   M_7 (q = 2)   M_9 (q = 3)
r_1     0.0819303     0.0819303     0.0819303
r^(q)   0.050974      0.0355748     0.0254287
r*      0.050974      0.0355748     0.0254287

The computed values of r* show that the considered methods of scheme (30) will converge to x* and remain in Ū(x*, r*).
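Again as an illustration: Equation (57) is not reproduced in the extracted text, but the standard isentropic area-Mach relation for γ = 1.4, ε = (1/M)((1 + 0.2M²)/1.2)³, recovers the quoted subsonic root x* ≈ 0.116689 for ε = 5. A simple bisection on the subsonic branch confirms this.

```python
# ASSUMED standard area-Mach relation for gamma = 1.4:
# eps = (1/M) * ((1 + 0.2*M**2) / 1.2)**3; solve f3(M) = 0 for eps = 5.

def f3(M, eps=5.0):
    return (1.0 / M) * ((1.0 + 0.2 * M * M) / 1.2) ** 3 - eps

# Bisection on the subsonic branch, where f3 is strictly decreasing:
# f3(0.05) > 0 and f3(0.5) < 0 bracket the root.
lo, hi = 0.05, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f3(lo) * f3(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
```

Bisection is used here (rather than one of the paper's methods) purely for robustness of the sketch; any of M_5, M_7, M_9 would converge to the same root within its ball Ū(x*, r*).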

Study of Complex Dynamics of the Method
To view the geometry of the methods of family (30) of orders five, seven, and nine in the complex plane, we present the basins of attraction of the roots obtained by applying the methods to some functions (see Table 4). The basins for the corresponding functions are displayed in Figures 6-8. To draw the basins, we use a rectangle R ⊂ C of size [−2, 2] × [−2, 2] and assign different colours to the basins. The dark region is assigned to the points for which the method diverges.
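The basin-drawing procedure described above can be sketched as follows. Newton's iteration on p(z) = z³ − 1 is used here for brevity (the paper applies M_5, M_7, M_9 in the same way); the grid size, tolerances, and all identifiers are illustrative choices of ours.

```python
# Sketch of basin-of-attraction drawing: iterate on p(z) = z**3 - 1 over a
# grid covering [-2, 2] x [-2, 2] and record which cube root of unity each
# starting point converges to (-1 marks the "dark", non-convergent region).

import cmath

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def basin_index(z, max_iter=100, tol=1e-8):
    """Index of the root z converges to under Newton's iteration, or -1."""
    for _ in range(max_iter):
        if abs(z) < 1e-12:                      # derivative 3z^2 vanishes
            return -1
        z = z - (z ** 3 - 1) / (3 * z ** 2)     # Newton step for z^3 - 1
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k
    return -1

# Classify a coarse grid; a publication-quality figure uses a much finer one,
# colouring each pixel by its basin index.
n = 81
grid = [basin_index(complex(-2 + 4 * i / (n - 1), -2 + 4 * j / (n - 1)))
        for i in range(n) for j in range(n)]
```

Plotting `grid` as an n-by-n image (one colour per index, dark for −1) reproduces the familiar three-lobed fractal boundary; substituting a higher-order method for the Newton step changes the shape and size of the basins, which is exactly what Figures 6-8 compare.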