Extending the Local Convergence of a Seventh Convergence Order Method without Derivatives

Abstract: We present a new extended local convergence analysis of a seventh-order derivative-free iterative method for solving Banach-space-valued nonlinear models. Existing studies demonstrate its convergence under assumptions involving derivatives up to the eighth order, whereas our convergence theory uses only the first derivative. Thus, in contrast to previously derived results, we obtain computable error estimates, a convergence radius, and a uniqueness region for the solution, thereby broadening the utility of this efficient method. In addition, the convergence regions of this scheme for solving polynomial equations with complex coefficients are illustrated using the attraction-basin approach. The study concludes with the validation of our convergence result on application problems.


Introduction
Numerous highly complicated scientific and engineering phenomena may be treated using nonlinear equations of the kind G(y) = 0, where G : Ω ⊆ Y_1 → Y_2 is Fréchet-differentiable, Y_1, Y_2 are complete normed vector spaces, and Ω ⊆ Y_1 is nonempty, convex, and open. Confronting such nonlinearity has remained a major challenge in mathematics. Analytical solutions to these problems are extremely difficult to obtain, so scientists and researchers often utilize iterative procedures. Among iterative approaches, Newton's method is the one most frequently employed. The Steffensen method [1,2] is well known among derivative-free iterative schemes. Sharma and Arora [3] deduced the following algorithm, a variant of the Steffensen method: where B_k = G[w_k, y_k] is the divided difference of order one, w_k = y_k + G(y_k), and y_0 ∈ Ω is an initial point. This algorithm uses one matrix inversion per iteration. Furthermore, Wang and Zhang [4] extended method (3) to design the following seventh-convergence-order method: In addition, numerous novel higher-order iterative strategies [3,5–23] have been developed and implemented during the past few years. The majority of these papers establish convergence theorems for iterative schemes by imposing requirements on higher-order derivatives. Furthermore, these investigations offer no estimates of the convergence radius, the error distances, or the existence-uniqueness region for the solution.
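The classical Steffensen step mentioned above replaces the derivative in Newton's method with the first-order divided difference G[w_k, y_k], w_k = y_k + G(y_k). As a point of reference, here is a minimal scalar sketch of that classical iteration (not the Sharma-Arora variant (3), whose full formula is not reproduced in this excerpt):

```python
# Classical Steffensen iteration for a scalar equation G(y) = 0.
# Minimal illustrative sketch; tolerances and loop limits are arbitrary choices.
def steffensen(G, y0, tol=1e-12, max_iter=100):
    y = y0
    for _ in range(max_iter):
        gy = G(y)
        if abs(gy) < tol:
            break
        w = y + gy                    # auxiliary point w_k = y_k + G(y_k)
        B = (G(w) - gy) / (w - y)     # divided difference G[w_k, y_k]
        y = y - gy / B                # Steffensen step: no derivative needed
    return y
```

For example, steffensen(lambda y: y*y - 2, 1.0) approaches the root √2 without ever evaluating G'.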
In the study of iterative schemes, it is essential to determine the domain of convergence, which in most circumstances is rather narrow. Thus, the convergence domain must be extended without making any additional assumptions. Moreover, accurate error distances must be estimated in the convergence investigation of iterative methods. Focusing on these points, we consider the following derivative-free method: where G[·, ·] : Ω × Ω → L(Y_1, Y_2), v_k = y_k + G(y_k), and q_k = y_k − G(y_k). It is crucial to note, however, that the seventh-order convergence of (4) was established in [24] by means of conditions on the derivative of order eight, even though the scheme itself is derivative-free. Because the convergence of this scheme relies on derivatives of higher order, its usefulness is reduced. Taking the function G defined on Ω = [−1/2, 3/2] by: one can observe that the previous conclusion on the convergence of this method [24] fails to hold because of the unboundedness of a higher-order derivative of G. Aside from that, the analytical outcome in [24] is not sufficient for computing the error ||y_k − y*|| or the convergence radius, and no conclusion can be drawn in [24] concerning the location and uniqueness of y*. Local analysis results allow one to estimate the error ||y_k − y*||, the convergence radius, and a uniqueness zone for y*. Findings on local convergence are particularly useful because they address the critical problem of identifying viable starting points. For these reasons, we propose a new extended local analysis of the derivative-free method (4). Our work is beneficial in determining the convergence radius, the error ||y_k − y*||, and the uniqueness region of y*. This technique can also be used to extend the applicability of other methods and relevant topics along the same lines [25]. In addition, using the attraction-basin approach, we display the regions in which this method locates solutions of complex polynomial equations.
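Method (4) is built on the central-type divided difference G[v_k, q_k] with v_k = y_k + G(y_k) and q_k = y_k − G(y_k). Since the complete multi-step formula of (4) is not reproduced in this excerpt, the sketch below (scalar case; A_k is a hypothetical label for G[v_k, q_k], not notation from the paper) illustrates only a first, Steffensen-type substep built from this operator, not the full seventh-order scheme:

```python
# Scalar sketch of the derivative-free building block of method (4):
# the divided difference A_k = G[v_k, q_k] with v_k = y_k + G(y_k) and
# q_k = y_k - G(y_k), used in a single Newton-like substep.
def first_substep(G, y):
    gy = G(y)
    if gy == 0:
        return y                   # already a solution
    v = y + gy                     # v_k = y_k + G(y_k)
    q = y - gy                     # q_k = y_k - G(y_k)
    A = (G(v) - G(q)) / (v - q)    # divided difference G[v_k, q_k]
    return y - gy / A              # A_k plays the role of G'(y_k)
```

Repeating this substep on, say, G(y) = y² − 2 from y_0 = 1 drives the iterate toward √2, again without any derivative evaluations.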
The arrangement of this paper can be described by summarizing the remainder of the text into the following statements. Section 2 deduces the theoretical outcomes with respect to local convergence of method (4). In Section 3, attraction basins for this scheme are shown. The suggested local analysis is verified numerically in Section 4. Finally, several conclusions are offered.

Local Convergence
We introduce scalar parameters and functions to deal with the local convergence analysis of scheme (4). Let c ≥ 0, c_0 ≥ 0, d ≥ 0, and M = [0, ∞).
The parameter r* defined by is shown next to be a convergence radius for method (4). Let M_1 = [0, r*). It follows from the definition of the radius r* that, for all t ∈ M_1: and By U[y*, δ], we denote the closure of the open ball U(y*, δ) with center y* ∈ Ω and radius δ > 0.
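The defining equations for r* are not reproduced in this excerpt. In local analyses of this type, each candidate radius r_j is typically the least positive solution of Q_j(t) = 1, and r* is the minimum over j; assuming that structure, and with purely hypothetical placeholder functions Q_1, Q_2, the computation of r* can be sketched as:

```python
# Sketch: r* as the least positive root of Q_j(t) - 1 = 0, minimized over j.
# Assumes each Q_j is continuous, nondecreasing, and satisfies Q_j(0) < 1.
def least_positive_root(Q, hi=1.0, iters=200):
    """Bisection for the smallest t in (0, hi] with Q(t) = 1,
    assuming Q is nondecreasing with Q(0) < 1 <= Q(hi)."""
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if Q(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical majorant functions, for illustration only:
Q1 = lambda t: 2.0 * t
Q2 = lambda t: 3.0 * t
r_star = min(least_positive_root(Q1), least_positive_root(Q2))
```

With these placeholders, r* = min(1/2, 1/3) = 1/3; the paper's actual Q_j would be substituted to obtain its radius.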
The following conditions (A) are needed; here y* is a simple solution of the equation G(y) = 0, and the functions Q_j are as previously defined.
Suppose the following: for all y ∈ Ω and some a ≥ 0, b ≥ 0. Let for all y ∈ Ω_0 and some c ≥ 0, d ≥ 0, with s, z given by the first two substeps of method (4).
Next, we show the following local convergence result for method (4), using the preceding notation and the conditions (A).

Theorem 1. Suppose that the conditions (A) hold. Then, the iteration {y_k} generated by method (4) converges to y*, provided that the starter y_0 ∈ U(y*, r*) \ {y*}.
Proof. Let T = G[q, y*] for some q ∈ Ω_1 with G(q) = 0. Then, using (ii) and (iii), we have:

Remark 1. (a) Let us consider the choices G[y, s] = (1/2)(G'(y) + G'(s)), or G[y, s] = ∫_0^1 G'(y + θ(s − y)) dθ, or the standard definition of the divided difference when Y_1 = R^i [5,8,9,15,16,22]. Moreover, suppose:

(b) Hypotheses (A) can be condensed by using instead the classical but stronger and less precise condition for studying methods with divided differences [22]: for all u_1, u_2, u_3, u_4 ∈ Ω, where the function P_6 : M × M → M is continuous and nondecreasing. However, this condition does not give the largest convergence radius, and all the "P" functions are at least as small as P_6(t, t).
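The second choice in Remark 1(a) is the mean-value representation G[y, s] = ∫_0^1 G'(y + θ(s − y)) dθ, which in the scalar case coincides with the standard divided difference (G(s) − G(y))/(s − y). A quick numerical check of this identity, with G(y) = y³ chosen purely for illustration:

```python
# Verify numerically that the integral representation of the first-order
# divided difference agrees with (G(s) - G(y)) / (s - y) in the scalar case.
def divided_difference(G, y, s):
    return (G(s) - G(y)) / (s - y)

def integral_representation(dG, y, s, n=100000):
    # Composite midpoint rule for \int_0^1 dG(y + t(s - y)) dt.
    h = 1.0 / n
    return sum(dG(y + (k + 0.5) * h * (s - y)) for k in range(n)) * h

G  = lambda y: y**3
dG = lambda y: 3 * y**2
lhs = divided_difference(G, 1.0, 2.0)        # (8 - 1) / (2 - 1) = 7
rhs = integral_representation(dG, 1.0, 2.0)  # should agree up to quadrature error
```

The two quantities agree to quadrature accuracy, as the fundamental theorem of calculus predicts.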

Attraction Basins
Attraction basins are an extremely valuable geometrical tool for measuring the convergence zones of iteration schemes. Using this tool, we can see all of the starting points that converge to a given root under an iterative procedure, which allows us to identify visually which locations are good choices of starting point and which are not. We select starting points z_0 ∈ E = [−2, 2] × [−2, 2] ⊂ C, and algorithm (4) is applied to 10 polynomials with complex coefficients. The point z_0 belongs to the basin of a zero z* of a test polynomial if lim_{n→∞} z_n = z*, in which case z_0 is displayed in a specific color associated with z*; light-to-dark shading records the number of iterations required from each starting guess z_0. The point z_0 ∈ E is colored black if it does not belong to the attraction basin of any zero of the test polynomial. The iteration process is terminated when ||z_n − z*|| < 10^{-6} or after a maximum of 100 iterations. The fractal diagrams are constructed in MATLAB 2019a.
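The basin-classification loop described above can be sketched as follows, using the stopping rules of the text (|z_n − z*| < 10^{-6} or at most 100 iterations). Method (4) itself is not reproduced in this excerpt, so the classical derivative-free Steffensen iteration serves as a stand-in; substituting the actual scheme would reproduce the paper's diagrams:

```python
# Classify a starting point z0 by the basin it belongs to, following the
# text's stopping rules. The iteration used here is classical Steffensen,
# a stand-in for method (4), whose formula is not reproduced in this excerpt.
def basin_index(p, roots, z0, tol=1e-6, max_iter=100):
    """Return the index of the root whose basin contains z0, or -1 (black)."""
    z = z0
    for _ in range(max_iter):
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i
        pz = p(z)
        w = z + pz
        denom = p(w) - pz
        if denom == 0:
            return -1
        z = z - pz * pz / denom   # Steffensen: z - p(z)^2 / (p(z+p(z)) - p(z))
    return -1

# Example test polynomial p(z) = z^3 - 1 on the grid E = [-2, 2] x [-2, 2]:
p = lambda z: z**3 - 1
cube_roots = [1, -0.5 + 0.8660254037844386j, -0.5 - 0.8660254037844386j]
```

Sweeping basin_index over a grid of z_0 values in E and mapping the returned indices to colors (black for −1) yields the fractal picture described above.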

Numerical Examples
The numerical verification of the new convergence result is conducted in this section. We consider the following example.

Conclusions
By eliminating the Taylor series tool from the existing convergence theorem, an extended local convergence analysis of a seventh-order derivative-free method is developed. Unlike the preceding approach, our convergence result requires only the first derivative. In addition, error estimates, the convergence radius, and the uniqueness region for the solution are computed, which enhances the usefulness of this effective algorithm. The convergence zones of the algorithm for solving polynomial equations with complex coefficients are also displayed; this aids in selecting starting points that lead to a desired root. Our convergence result is validated by numerical testing.