Extended Convergence for Two Sixth Order Methods under the Same Weak Conditions

High convergence order iterative methods play a major role in scientific, computational and engineering mathematics, as they produce sequences that converge to solutions of nonlinear equations. The convergence order is usually established using Taylor series expansions, which require the existence and computation of high-order derivatives that do not appear in the method itself. Such results therefore cannot ensure convergence when these high-order derivatives do not exist, although the method may still converge. In this paper, we develop a process in which both the local and semi-local convergence analyses of two related sixth order methods are obtained exclusively from information on the operators appearing in the methods. Numerical applications supplement the theory.


Introduction
The most common problem in applied and computational mathematics, and in the fields of science and engineering generally, is that of finding a solution to a nonlinear equation
where F : Ω ⊆ X → Y is Fréchet differentiable, X and Y are complete normed linear spaces and Ω is a nonempty, open and convex set.
Researchers have grappled with this nonlinearity for a long time. In most cases, a closed-form solution is very hard to obtain, so iterative algorithms are widely used to approximate one. Newton's method is a well-known iterative method for handling nonlinear equations. Many new higher order iterative strategies for nonlinear equations have been developed and applied in recent years [1][2][3][4][5][6][7][8][9][10][11]. In the majority of these papers, however, the convergence theorems are deduced using high-order derivatives. In addition, the results do not discuss error bounds, convergence radii, or the region in which the solution is unique.
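Newton's method mentioned above can be sketched as follows. This is a minimal illustration; the test equation x^3 − 2 = 0, the tolerance and the iteration cap are assumptions for demonstration, not taken from the paper.

```python
# Newton's method for a scalar nonlinear equation F(x) = 0:
#   x_{k+1} = x_k - F(x_k) / F'(x_k).
# The test problem F(x) = x**3 - 2 (root 2**(1/3)) is illustrative only.

def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Iterate until |F(x_k)| <= tol or the iteration cap is reached."""
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) <= tol:
            break
        x -= fx / dF(x)
    return x

root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
```

Starting from x0 = 1.0 the iterates converge quadratically to the cube root of 2.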
Examining local (LCA) and semi-local analyses (SLA) of an iterative algorithm makes it possible to estimate convergence domains, error estimates, and the unique region of a solution. The local and semi-local convergence results of efficient iterative methods were derived and stated in [9][10][11][12][13]. Important results were presented in these works, which include convergence radii, error estimation measurement, and extended benefits of this iteration approach. The results of this kind of analysis are valuable because they illustrate the complexities of starting point selection. Additionally, the applicability of our analysis can be extended to engineering problems such as the shrinking projection methods used for solving variational inclusion problems as in [14][15][16].
In this article, convergence theorems are developed for two competing sixth order methods from [17], stated below: and The local convergence of the methods (2) and (3) is given in [17]. The order was established assuming that at least the seventh derivative of the operator F exists. As a result, the applicability of these schemes is limited. To see this, we define F on Ω = [−0.5, 1.5] by The third derivative is given by Hence, due to the unboundedness of F′′′, the conclusions on the convergence of (2) and (3) do not hold for this example. Nor do those results provide a formula for the approximation of the error, the region of convergence, or the uniqueness and exact location of the root x*. This motivates us to develop the ball convergence theory and thus compare the convergence regions of (2) and (3) using hypotheses based on the first derivative F′ only. This research provides important formulas for the assessment of errors and convergence radii. The study also discusses the precise position and uniqueness of x*.
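The paper's motivating function on Ω = [−0.5, 1.5] is not reproduced above, so the following is a hypothetical analogue of the same phenomenon: g(x) = x^3 ln(x^2) (with g(0) = 0) has third derivative g'''(x) = 12 ln|x| + 22, which is unbounded as x → 0, yet a derivative-based iteration such as Newton's method still converges to the simple root x* = 1. The function, starting point and iteration count here are assumptions for illustration.

```python
import math

# Hypothetical analogue of an operator with unbounded third derivative:
#   g(x) = x^3 * ln(x^2)  for x != 0,  g(0) = 0,
#   g'(x) = 6x^2 ln|x| + 2x^2,  g'''(x) = 12 ln|x| + 22  (unbounded near 0).
# Despite the unbounded g''', Newton's method converges to the root x* = 1.

def g(x):
    return x**3 * math.log(x**2) if x != 0 else 0.0

def dg(x):
    return 6 * x**2 * math.log(abs(x)) + 2 * x**2 if x != 0 else 0.0

x = 0.9  # starting point inside a ball around x* = 1
for _ in range(20):
    x -= g(x) / dg(x)
```

Convergence theorems based on Taylor expansions up to the seventh derivative say nothing about such a function, while conditions on the first derivative alone still apply.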
The rest of the paper is organized as follows: Section 2 deals with the LCA of the methods (2) and (3). Section 3 discusses the SLA of the methods under consideration. Numerical examples are given in Section 4. Concluding comments complete the paper.

LCA
Set M = [0, +∞). Certain functions defined on the interval M play a role in the LCA of these methods. Assume:
(i) ∃ a function ω0 : M → R, which is non-decreasing and continuous such that the function ω0(t) − 1 admits a smallest positive root ρ0. Set M0 = [0, ρ0).
(ii) ∃ a function ω : M0 → R, which is non-decreasing and continuous such that the function g1(t) − 1 admits a smallest positive root r1 ∈ M0, where g1 : M0 → R is
(iii) The function p(t) − 1 has a smallest positive root ρp ∈ M0, where the function p : M0 → R is given as
(iv) The functions g2(t) − 1, g3(t) − 1 have smallest positive roots r2, r3 ∈ M1, where g2 : M1 → R, g3 : M1 → R are given by
Note that in practice, we choose the smaller of the two functions in the formula for the function ω.
The parameter r is shown to be a radius of convergence (RC) for the method (2) (see Theorem 1).
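The radii above are defined as smallest positive roots of scalar equations and can be computed numerically. As an illustration only, we specialize to the classical Lipschitz case ω0(t) = ω(t) = Lt, for which the Newton-type function g1(t) = (Lt/2)/(1 − Lt) on [0, 1/L) is a standard form in this literature (an assumption here, since the paper's g1 is not reproduced); its root is r1 = 2/(3L).

```python
# Bisection for the smallest positive root of g1(t) - 1 = 0.
# Illustrative specialization: omega0(t) = omega(t) = L*t, giving
#   g1(t) = (L*t/2) / (1 - L*t)  on [0, 1/L),  with root r1 = 2/(3L).

L = 1.0

def g1(t):
    return (L * t / 2) / (1 - L * t)

def smallest_root(h, lo, hi, tol=1e-12):
    """Bisection for h(t) = 0, assuming h(lo) < 0 < h(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r1 = smallest_root(lambda t: g1(t) - 1, 0.0, 0.999 / L)
```

Since g1 is increasing and g1(t) − 1 changes sign on (0, 1/L), bisection reliably locates the smallest positive root.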
The notation U(x * , α) stands for the open ball with center x * and of radius α > 0, whereas U[x * , α] stands for the closure of the ball U(x * , α).
The scalar functions ω0 and ω relate x* to the operators appearing in the method (2) or the method (3) as follows. Suppose: The conditions (H1)-(H2) are utilized first to prove the convergence of the method (2). Let ln = ‖xn − x*‖.
Theorem 1. Assume the conditions (H1)-(H4) hold and the initial guess x0 ∈ U(x*, d) for d = r. Then, the following assertion holds: lim n→∞ xn = x*.
Proof. The iterates {xk}, {yk}, {zk} shall be shown to exist in the ball U(x*, r) by mathematical induction. Let u ∈ U(x*, r) be arbitrary. By utilizing item (6) and the hypotheses (H1), (H2), Then, it follows by the standard Banach lemma on invertible linear operators [12,18] that the inverses exist. If we choose u = x0, then the iterate y0 exists by the first sub-step of the method (2) for k = 0, since by hypothesis x0 ∈ U(x*, r). Moreover, we have which gives by (H3), (8) (for m = 1), (9) (for u = x0) and (5) that Thus, the iterate y0 ∈ U(x*, r). Then, by (5), (7), (H2) and (10), we obtain Hence, the iterate z0 exists by the second sub-step of the method (2) and It also follows by (12) that the iterate z0 ∈ U(x*, r). Furthermore, the iterate x1 exists by the third sub-step of the method (2) for k = 0. By the third sub-step, it follows in turn leading to Thus, the iterate x1 ∈ U(x*, r). Exchange x0, y0, z0, x1 by xk, yk, zk, xk+1 in the preceding calculations to see that the following estimates hold:
The following proposition determines the uniqueness of the solution x*.
Proposition 1. Assume: Set Then, the only solution of (1) in the region Ω2 is x*.
Proof. Assume ∃ x̃ ∈ Ω2 with F(x̃) = 0. It follows that for
The LCA of the method (3) is obtained analogously, but the functions g2 and g3 are given instead by This time the RC r̄ is provided again by the formula (5), but with the new functions g2 and g3. Then, similarly under the conditions (H1)-(H4) with d = r̄, it follows that Therefore, under the above-mentioned changes, the conclusions of Theorem 1 hold for the method (3). The results of Proposition 1 obviously also apply to the method (3). Therefore, we can provide the corresponding result for the method (3).
Theorem 2. Assume the conditions (H1)-(H4) hold for d = r̄ and the initial guess x0 ∈ U(x*, r̄). Then, the following assertion holds: lim n→+∞ xn = x*.
Proof. It follows from Theorem 1 under the preceding changes.

Remark 1.
Under the conditions (H1)-(H4), we can set ρ2 = r or ρ2 = r̄ in Proposition 1, depending on which method is used.
Then, the sequence {tn} associated with the method (2) is bounded from above by ξ, non-decreasing and convergent to some ξ* ∈ [0, ξ].
Proof. The result is implied immediately from the formula (14) and the condition (16).
Then, the sequence {t n } given by the formula (15) is bounded from above by ξ 1 and is convergent to some ξ * 1 ∈ [0, ξ 1 ].
Proof. The result is implied immediately by the formula (15) and the condition (17).

Remark 2.
A possible choice for the upper bounds ξ or ξ 1 is ρ 0 given in (i) of Section 2.
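The role of the scalar majorizing sequences can be sketched numerically. The recursion below is the classical Kantorovich-type majorant for a Newton step, used here as an illustrative assumption since the paper's sequences (14)-(15) involve the functions ω0 and ω, which are not reproduced: under Lη ≤ 1/2 the sequence is non-decreasing and bounded above by 1/L, hence convergent, which is the scalar counterpart of {xn} being Cauchy.

```python
# Kantorovich-type majorizing sequence (illustrative assumption):
#   t0 = 0, t1 = eta,
#   t_{n+1} = t_n + L*(t_n - t_{n-1})**2 / (2*(1 - L*t_n)).
# With L*eta <= 1/2 it is non-decreasing and bounded above by 1/L.

L, eta = 1.0, 0.4          # L*eta = 0.4 <= 0.5, so convergence is guaranteed
t_prev, t = 0.0, eta
seq = [t_prev, t]
for _ in range(50):
    t_next = t + L * (t - t_prev) ** 2 / (2 * (1 - L * t))
    t_prev, t = t, t_next
    seq.append(t)
```

Running this shows the sequence increasing monotonically toward its limit ξ* ≈ (1 − sqrt(1 − 2Lη))/L, well below the bound 1/L.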
Next, we develop the semi-local convergence theorem for the method (2).

Proof. As in Theorem 1, mathematical induction and the following calculations lead in turn to
Notice also that ‖y0 − x0‖ = λ = s0 − t0 ≤ ξ*, so y0 ∈ U[x0, ξ], initiating the induction. Thus, the sequence {xn} is Cauchy in the Banach space X (since {tn} is Cauchy, being convergent by the condition (C4)). By letting n → ∞ in (18) and using the continuity of the operator F, we conclude that F(x*) = 0.
There exists ρ 4 ≥ ρ 3 such that Set Then, the only solution of the equation F(x) = 0 in the region Ω 2 is y * .
Example 2. We define the function F(x) = sin x on Ω, where X = Y = Ω = R. We have F′(x) = cos x, and x* = 0 is a solution of F(x) = 0. The conditions (H1)-(H4) are validated for ω0(t) = ω(t) = t. The resulting RC values are given in Table 1.
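The choice ω0(t) = ω(t) = t in Example 2 can be checked numerically: since |F′(x) − F′(x*)| = |cos x − 1| = 2 sin²(x/2) ≤ |x| for |x| ≤ 2, the linear function t is a valid majorant near x* = 0. The grid below is an illustrative sampling of [−1, 1].

```python
import math

# Check the center-Lipschitz bound for F(x) = sin(x), x* = 0:
#   |F'(x) - F'(0)| = |cos x - 1| <= |x|   for |x| <= 2,
# which justifies omega0(t) = omega(t) = t in Example 2.

ok = all(
    abs(math.cos(x) - 1.0) <= abs(x) + 1e-15
    for x in [k / 100 - 1.0 for k in range(201)]  # grid on [-1, 1]
)
```

The bound holds with room to spare, since |cos x − 1| ≤ x²/2.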
The conditions of Lemmas 1 and 2 are verified in Tables 3 and 4, respectively.

Conclusions
The LCA and SLA for the methods (2) and (3) are established by applying a generalized Lipschitz condition to the first derivative only. A comparison is made between the two convergence balls, which are very similar in size. This study derives estimates of convergence balls, measurements of error distances, and existence-uniqueness regions of the solution. Finally, the proposed theoretical results are verified on applied problems. The technique of this article will be applied to other methods of high convergence order involving inverses of linear operators in our future research [1][2][3][4][5][6][7][8].

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:
L(X, Y)  Set of linear operators from X to Y
{tn}     Scalar sequence