Extended High Order Algorithms for Equations under the Same Set of Conditions

A variety of strategies are used to construct algorithms for solving equations. However, higher order derivatives are usually assumed to calculate the convergence order. More importantly, bounds on the error and uniqueness regions for the solution are also not derived. Therefore, the benefits of these algorithms are limited. We use only the first derivative to tackle all these issues and study the ball analysis for two sixth order algorithms under the same set of conditions. In addition, we present a calculable ball comparison between these algorithms. In this manner, we enhance the utility of these algorithms. Our idea is very general, so it can be used to extend other algorithms in the same way.


Introduction
We consider two Banach spaces Y 1 and Y 2 with an open and convex subset Z (≠ ∅) of Y 1 . Let us denote by L(Y 1 , Y 2 ) the set of bounded linear operators from Y 1 into Y 2 . Suppose X : Z ⊆ Y 1 → Y 2 is Fréchet differentiable. Equations of the kind

X(v) = 0, (1)

are often utilized in science and other applied areas to solve several highly challenging problems. We should not ignore the fact that solving these equations is a difficult process, since the solution can be discovered analytically only in rare instances. This is why iterative processes are generally used for solving these equations. However, it is an arduous task to develop an effective iterative approach for addressing (1). The classical Newton iterative strategy is most typically employed for this task. In addition, many studies on higher order modifications of conventional processes such as Newton's, Jarratt's, and Chebyshev's are presented in [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]. Wang et al. [16] presented a sixth order variant of Jarratt's algorithm, which adds the evaluation of the function at an additional point in the iteration procedure of Jarratt's method [12]. Grau-Sánchez and Gutiérrez [10] described two families of zero-finding iterative approaches by applying Obreshkov-like techniques. An efficient family of nonlinear system algorithms is suggested by Cordero et al. [8] using a reduced composition technique on Newton's and Jarratt's algorithms. Sharma et al. [17] composed two weighted-Newton steps to construct an efficient fourth order weighted-Newton method for solving nonlinear systems. Sharma and Arora [18] constructed iterative algorithms of fourth and sixth order of convergence for solving nonlinear systems. Two bi-parametric fourth order families of predictor-corrector iterative algorithms are discussed in [9]. Newton-like iterative approaches of fifth and eighth order of convergence are also designed by Sharma and Arora [19].
Additional studies on other algorithms with their convergence and dynamics are available in [20][21][22][23][24][25].
Notice that higher convergence order algorithms using Taylor expansions suffer from the following problems: (1') Higher order derivatives (not appearing on the algorithms) should exist, although convergence may be possible without these conditions. (2') We do not know in advance how many iterations should be performed to reach a certain error tolerance. (3') We have no information about how to choose the initial guess so that convergence is assured. (4') No uniqueness region for the solution is provided. (5') The results are often restricted to less general settings. Hence, there is a need to address these problems. The novelty of our article lies in the fact that we handle (1')-(5') as follows.
(1") We only use the derivative that actually appears on these algorithms. The convergence order is recovered, since we bypass Taylor series (which require the higher order derivatives) and use instead the computational order of convergence (COC) given by

COC = ln(||v_{n+1} − v*|| / ||v_n − v*||) / ln(||v_n − v*|| / ||v_{n−1} − v*||)

and the approximate computational order of convergence (ACOC) given by

ACOC = ln(||v_{n+1} − v_n|| / ||v_n − v_{n−1}||) / ln(||v_n − v_{n−1}|| / ||v_{n−1} − v_{n−2}||).

These formulae use only the algorithms themselves (which depend on the first derivative). In the case of the ACOC no knowledge of v * is needed.
(2") We use generalized Lipschitz-type conditions which allow us to provide upper bounds on ||v n − v * ||, which in turn can be used to determine the smallest number of iterations needed to reach the error tolerance. (3") Under our local convergence analysis a convergence ball is determined. Hence, we know from where to pick the starter v 0 so that convergence to the solution v * can be achieved. (4") A uniqueness ball is provided. (5") The results are presented in the more general setting of Banach space valued operators.
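As a minimal sketch of how (1") works in practice, both quantities can be computed from the iterates alone; the helper functions below are illustrative scalar versions (for vector iterates, replace abs by a norm), not part of SM1 or SM2:

```python
import math

def coc(errors):
    """Computational order of convergence from three consecutive
    values of ||v_n - v*|| (requires knowing the exact root v*)."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

def acoc(iterates):
    """Approximate computational order of convergence from four
    consecutive iterates; no knowledge of v* is needed."""
    v0, v1, v2, v3 = iterates[-4:]
    d1, d2, d3 = abs(v1 - v0), abs(v2 - v1), abs(v3 - v2)
    return math.log(d3 / d2) / math.log(d2 / d1)
```

For a second order scheme such as Newton's method both quantities should approach 2 as n grows; for SM1 and SM2 they should approach 6.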
In this article, to demonstrate our technique, we selected the following sixth convergence order algorithms to expand their utility. However, our technique is so general that it can be applied to other algorithms. We also compare their convergence balls and dynamical properties. Define the algorithms, for all n = 0, 1, 2, ..., by SM1: SM2: For α = 2/3, these algorithms are described in [26] (see also [11]), where the benefits over other algorithms are well explained. Conditions on derivatives of the seventh order and Taylor series expansions have been employed in [11,26] to determine their convergence rate. Because such results need derivatives of higher order, these algorithms are very difficult to implement, and their utility is reduced although they may converge. In order to justify this, consider the following function, where Y 1 = Y 2 = R and X is defined on Z = [−1/2, 3/2]. Then, it is crucial to highlight that the seventh derivative X (vii) is not bounded. Hence, the existing convergence results for methods SM1 and SM2 based on X (vii) do not work in this scenario, although these algorithms may still converge with convergence order six. Clearly this is the case, since the conditions in the aforementioned studies are only sufficient.
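The iteration formulas of SM1 and SM2 are given in [26] and are omitted here. As a hedged illustration of the kind of scheme they build on, the following sketch implements only the classical fourth order Jarratt step in scalar form (it is not SM1 or SM2; f, df, and the starting point are illustrative stand-ins):

```python
def jarratt_step(f, df, x):
    """One step of the classical fourth order Jarratt method,
    the base scheme that SM1 and SM2 extend (their exact sixth
    order formulas are given in [26])."""
    fx, dfx = f(x), df(x)
    y = x - (2.0 / 3.0) * fx / dfx  # first substep, alpha = 2/3
    dfy = df(y)
    # weighted-Newton correction using first derivatives only
    return x - 0.5 * (3 * dfy + dfx) / (3 * dfy - dfx) * fx / dfx

# illustrative use: f(x) = x**3 - 2 with simple root 2**(1/3)
f = lambda x: x ** 3 - 2
df = lambda x: 3 * x ** 2
x = 1.0
for _ in range(3):
    x = jarratt_step(f, df, x)
```

Note that only first derivative evaluations appear, which is exactly the feature our analysis exploits.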
The other parts of this work can be summarized as follows: In Section 2, the main convergence theorems on the ball convergence of SM1 and SM2 are discussed. Section 3 deals with a comparison of the attraction basins for these procedures. Numerical applications are placed in Section 4. This study is concluded with final comments in Section 5.

Ball Convergence
Our ball convergence analysis requires the development of some scalar parameters and functions. Set M = [0, ∞).
Suppose the function: (iii) λ 0 (U 1 (s)s) − 1 has a smallest root r 1 in M 0 \ {0}. Set r 2 = min{r 0 , r 1 } and define the scalar ρ, which shall be shown to be a convergence radius for SM1. Set T = [0, ρ). It is implied by (5) that the preceding estimates hold for each s in T.
The notation U(v * , r) stands for the closure of a ball of radius r > 0 and center v * ∈ Y 1 . We suppose from now on that v * is a simple root of X , scalar functions "λ" are as given previously and X : Z → Y 2 is differentiable. Further, conditions (C) hold: Next, the main convergence result for SM1 is developed utilizing conditions (C).
Theorem 1. Suppose that the conditions (C) hold for ρ̄ = ρ. Then, the iteration {v n } given by SM1 exists in U(v * , ρ), stays in U(v * , ρ) and converges to v * provided that the initial guess v 0 is in U(v * , ρ), where the radius ρ and the functions U k are as given previously. Furthermore, the only zero of X in the set Z 1 given in (C 4 ) is v * .
Then, under the conditions (C) for ρ̄ = ρ̃, the choice of the Ũ k functions is justified by the estimates below. Hence, we arrive at the ball convergence result for SM2.

Theorem 2.
Suppose that the conditions (C) hold with ρ̄ = ρ̃. Then, the conclusions of Theorem 1 hold for SM2 with ρ̃, Ũ k replacing ρ, U k , respectively.

Remark 1. The continuity assumption ||X'(v * ) −1 (X'(u) − X'(v))|| ≤ λ̄(||u − v||), for all u, v ∈ Z, on X' is employed in existing studies. But then, since Z 0 ⊆ Z, we have λ(s) ≤ λ̄(s) for all s ∈ [0, 2r 0 ). This is a significant achievement: all results obtained earlier can be presented in terms of λ, since u i ∈ Z 0 , which is a more specific location of the iterates v n . This improves the convergence radii, tightens the upper bounds on ||v n − v * || and produces better knowledge about v * . To demonstrate this, let us take the example X (v) = e v − 1 for Z = U(0, 1). Then, we have λ 0 (s) = (e − 1)s < λ(s) = e 1/(e−1) s < λ̄(s) = es, and using Rheinboldt or Traub [14,15] (for λ 0 = λ = λ̄) we get R TR = 0.081751, using previous studies by Argyros [5] (for λ = λ̄) we get R E = 0.108316, and with this study ρ 1 = ρ̃ 1 = 0.127564, so R TR < R E < ρ 1 .
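These Lipschitz functions can be verified numerically; a small sketch for X (v) = e v − 1 on Z = U(0, 1), where X'(v) = e v and X'(v * ) = 1 at v * = 0 (the grid resolution and the floating-point tolerance are arbitrary choices):

```python
import math

# Center condition: |X'(v) - X'(0)| <= lambda0(|v|), lambda0(s) = (e - 1)s.
# Full condition:   |X'(u) - X'(v)| <= lambda_bar(|u - v|), lambda_bar(s) = e*s.
grid = [i / 100 for i in range(-100, 101)]  # sample of U(0, 1)

center_ok = all(
    abs(math.exp(v) - 1) <= (math.e - 1) * abs(v) + 1e-12 for v in grid
)
full_ok = all(
    abs(math.exp(u) - math.exp(v)) <= math.e * abs(u - v) + 1e-12
    for u in grid for v in grid
)
```

Both checks pass, reflecting that the center-Lipschitz function λ 0 is genuinely smaller than the full Lipschitz function λ̄, which is what enlarges the convergence radius.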

Comparison of Attraction Basins
A comparison of the dynamical qualities of SM1 and SM2 is provided in this section by employing the tool of attraction basins. Suppose M (z) denotes a complex polynomial of degree at least two. Then, the set {z 0 ∈ C : z j → z * as j → ∞} represents the attraction basin corresponding to a zero z * of M , where {z j } ∞ j=0 is formed by an iterative algorithm with a starting choice z 0 ∈ C. Let us select a region E = [−4, 4] × [−4, 4] in C with a grid of 400 × 400 points. To prepare the attraction basins, we apply SM1 and SM2 to a variety of complex polynomials, selecting every point z 0 ∈ E as a starter. The point z 0 belongs to the basin of a zero z * of the considered polynomial if lim j→∞ z j = z * . Then, we display z 0 with a fixed color corresponding to z * . We shade each z 0 from light to dark according to the number of iterations. Black indicates non-convergence zones. The termination condition of the iteration is ||z j − z * || < 10 −6 , with a maximum limit of 300 iterations. We used MATLAB 2019a to design the fractal pictures.
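Since the iteration formulas of SM1 and SM2 are given in [26], the following sketch uses Newton's method on M (z) = z 3 − 1 merely to illustrate the basin-construction machinery described above (grid sweep, per-point iteration, stopping rule ||z j − z * || < 10 −6 , cap of 300 iterations); substituting the update step of SM1 or SM2 yields their basins:

```python
import cmath
import math

def newton_step(z):
    # Newton's update for M(z) = z**3 - 1; a stand-in for the
    # SM1/SM2 substeps, whose formulas are given in [26]
    return z - (z ** 3 - 1) / (3 * z ** 2)

ROOTS = [cmath.exp(2j * math.pi * k / 3) for k in range(3)]  # cube roots of 1

def basin_index(z0, tol=1e-6, max_iter=300):
    """Index of the root whose basin contains z0, or -1 for
    non-convergence (rendered black in the fractal pictures)."""
    z = z0
    for _ in range(max_iter):
        for k, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return k
        if z == 0:  # derivative vanishes; declare non-convergence
            return -1
        z = newton_step(z)
    return -1

def basin_grid(n=400, lo=-4.0, hi=4.0):
    """Index grid over E = [lo, hi] x [lo, hi] with an n x n mesh."""
    h = (hi - lo) / (n - 1)
    return [[basin_index(complex(lo + i * h, lo + j * h)) for i in range(n)]
            for j in range(n)]
```

A full sweep of E calls basin_grid() with n = 400; mapping each returned index to a fixed color (black for −1), shaded by iteration count, reproduces pictures of the kind discussed here.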

Numerical Examples
We apply the proposed techniques to estimate the convergence radii for SM1 and SM2 when α = 2/3.

(Table of convergence radii for SM1 and SM2.)
It is worth noticing that if we stop at the first substep in both algorithms (i.e., restrict ourselves to the first substep of Jarratt's algorithm), then the radius is largest (see ρ 1 and ρ̃ 1 ). Moreover, if we increase the convergence order to four (i.e., consider only the first two substeps of the algorithms), then the radii get smaller (see ρ 2 and ρ̃ 2 ). Furthermore, if we increase the order to six (i.e., use all the substeps of these algorithms), then we obtain the smallest radii (see ρ and ρ̃). This is expected as the order increases. Concerning the corresponding error estimates ||v n − v * ||, we see clearly that fewer iterates are needed to reach v * as the order increases. We solved Example 3 with v 0 = 0.9993 using these algorithms, and the results are presented in Tables 4 and 5. In addition, we executed four iterations of these schemes ten times in MATLAB 2019a. Then, we obtained the average elapsed time and average CPU time (in seconds) for SM1 and SM2; these values are presented in Table 6.
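The claim that fewer iterates suffice at higher order can be made concrete with the bounds from (2"). Assuming, purely for illustration, an error recurrence of the form e_{n+1} ≤ K e_n^p (the values of K, e_0, and the tolerance below are hypothetical stand-ins for the quantities produced by the λ functions), the smallest admissible iteration count is computable a priori:

```python
import math

def smallest_iteration_count(e0, K, tol, order=6, max_n=50):
    """Smallest n with e_n <= tol under the bound e_{n+1} <= K * e_n**order.
    e0 is an upper bound on ||v_0 - v*||; K is an illustrative constant
    standing in for the generalized Lipschitz data."""
    # work with logarithms to avoid floating-point overflow/underflow
    log_e, log_K, log_tol = math.log(e0), math.log(K), math.log(tol)
    for n in range(max_n + 1):
        if log_e <= log_tol:
            return n
        log_e = log_K + order * log_e
    return None  # the bound does not certify the tolerance within max_n steps
```

For instance, with e0 = 0.1, K = 1 and tolerance 10^{−12}, a sixth order bound certifies the tolerance after 2 iterations, while a second order bound (order = 2) needs 4.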

Conclusions
Major problems appear when studying high convergence order algorithms for solving equations. One of them is that the order is shown assuming the existence of higher order derivatives that do not appear on the algorithms. In particular, in the case of SM1 and SM2, derivatives up to order seven have been utilized. Hence (see also our example in the introduction), these derivatives restrict the utilization of these algorithms. We also do not know how many iterates are needed to arrive at a prearranged accuracy. Moreover, no uniqueness of the solution is known in a certain ball. This is not only the case for the algorithms we studied but for all high convergence order algorithms whose convergence order is shown using Taylor series. That is why we addressed all these concerns in the more general setting of a Banach space, under generalized continuity conditions and using only the derivative appearing on these algorithms. Our technique is so general that it can be applied to extend the utilization of other algorithms. We also presented the convergence ball and a dynamical comparison between these schemes.