
# Ball Convergence for Combined Three-Step Methods Under Generalized Conditions in Banach Space

by R. A. Alharbey 1, Ioannis K. Argyros 2 and Ramandeep Behl 1,*

1 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(8), 1002; https://doi.org/10.3390/sym11081002
Submission received: 21 June 2019 / Revised: 23 July 2019 / Accepted: 24 July 2019 / Published: 3 August 2019
(This article belongs to the Special Issue Symmetry with Operator Theory and Equations)

## Abstract

Problems from numerous disciplines, such as the applied sciences, scientific computing, applied mathematics and engineering, can be reduced to solving an equation. We therefore propose a higher-order iterative method for solving equations with Banach space valued operators. Chen, S.P. and Qian, Y.H. used hypotheses involving the seventh-order derivative, whereas here we use only hypotheses on the first-order derivative and Lipschitz constants. In this way, we not only enlarge the region of applicability of their results but also provide computable convergence radii. Finally, we present a collection of numerical examples demonstrating that our results apply in cases not covered before.
PACS:
65G99; 65H10; 47J25; 47J05; 65D10; 65D99

## 1. Introduction

One of the most important tasks in numerical analysis is finding a solution $κ$ of the equation
$Θ ( x ) = 0 ,$
where $Θ : D ⊂ X → Y$ is a Fréchet-differentiable operator, $X , Y$ are Banach spaces and $D$ is a convex subset of $X$. Here, $L ( X , Y )$ denotes the space of bounded linear operators from $X$ to $Y$.
Consider a three-step higher-order convergent method defined for each $l = 0 , 1 , 2 , …$ by
$y l = x l − Θ ′ ( x l ) − 1 Θ ( x l ) , z l = ϕ x l , Θ ( x l ) , Θ ′ ( x l ) , Θ ′ ( y l ) , x l + 1 = z l − β A l − 1 Θ ( z l ) ,$
where $α , β ∈ S$ ($S = R$ or $S = C$), $A l = ( β − α ) Θ ′ ( x l ) + α Θ ′ ( y l )$, and the second substep represents any iterative method whose order of convergence is at least $m = 1 , 2 , 3 , …$. For $X = Y = R$, this method was studied in [1]. The proof there uses Taylor series expansions and requires $Θ$ to be up to seven times differentiable. Such assumptions on $Θ$ hamper the applicability of (2). Consider, for instance, the function $μ$ on $X = Y = R$, $D = [ − 0.5 , 1.5 ]$ given by
$μ ( t ) = 0 , t = 0 t 3 ln t 2 + t 5 − t 4 , t ≠ 0 .$
Then, we have that
$μ ′ ( t ) = 3 t 2 ln t 2 + 5 t 4 − 4 t 3 + 2 t 2 ,$
$μ ″ ( t ) = 6 t ln t 2 + 20 t 3 − 12 t 2 + 10 t$
and
$μ ‴ ( t ) = 6 ln t 2 + 60 t 2 − 24 t + 22 .$
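Since $\ln t^2 \to -\infty$ as $t \to 0$, a quick numerical check (a minimal Python sketch; the helper name `mu_third` is ours) illustrates how $μ ‴$ blows up near the origin:

```python
import math

def mu_third(t):
    # third derivative of mu(t) = t^3*ln(t^2) + t^5 - t^4 (for t != 0)
    return 6 * math.log(t * t) + 60 * t * t - 24 * t + 22

# |mu'''(t)| grows without bound as t -> 0 because of the ln(t^2) term
for t in (1e-1, 1e-3, 1e-6):
    print(t, mu_third(t))
```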
Obviously, the third-order derivative $μ ‴ ( t )$ is not bounded on $D$. Method (2), studied in [1] for $X = Y = R$, suffers from the following defects:
(i)
Applicable only on the real line.
(ii)
The range of initial guesses guaranteeing convergence is not discussed.
(iii)
Derivatives of order higher than one and Taylor series expansions were used, limiting the applicability.
(iv)
No computable error bounds on $∥ Ω l ∥$ (where $Ω l = x l − κ$) were given.
(v)
Their convergence order claim is also not correct; see, e.g., the following method (43) from [1]
$y l = x l − Θ ( x l ) Θ ′ ( x l ) , z l = x l − 2 Θ ( x l ) Θ ′ ( y l ) + Θ ′ ( x l ) , x l + 1 = z l − β Θ ( z l ) α Θ ′ ( y l ) + ( β − α ) Θ ′ ( x l ) .$
It has fifth-order convergence for $α = β$, whereas $α ≠ β$, $α , β ∈ R$, gives only fourth-order convergence. The authors, however, claimed sixth-order convergence for every $α , β ∈ R$, which is not correct. The corrected proof is given in Section 2.
(vi)
They cannot obtain special cases such as methods (41), (47) and (49) (numbering from their paper [1]), because Chen and Qian [1] take $y l = x l − f ( x l ) f ′ ( x l )$ in the proof of their theorem; this is stated explicitly in expression (21) of [1].
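The incorrect order claim can be checked numerically. A minimal Python sketch (our own, using exact rational arithmetic and the hypothetical test function $f(x) = x^2 − 1$ with simple root $1$) estimates the computational order of convergence of method (43): with $α = β$ it comes out near five, and with $α ≠ β$ near four, never six:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

getcontext().prec = 60   # enough digits for logs of very small errors

def f(x):                # hypothetical test function, simple root at x = 1
    return x * x - 1

def fp(x):               # its derivative
    return 2 * x

def step(x, alpha, beta):
    # one pass of method (43): Newton substep, midpoint-type substep, correction
    y = x - f(x) / fp(x)
    z = x - 2 * f(x) / (fp(y) + fp(x))
    return z - beta * f(z) / (alpha * fp(y) + (beta - alpha) * fp(x))

def ln(q):
    # natural log of a positive Fraction with huge numerator/denominator
    return Decimal(q.numerator).ln() - Decimal(q.denominator).ln()

def coc(alpha, beta, x0=Fraction(3, 2), iters=4):
    errs, x = [abs(x0 - 1)], x0
    for _ in range(iters):
        x = step(x, alpha, beta)
        errs.append(abs(x - 1))
    # computational order of convergence from the last three errors
    return float(ln(errs[-1] / errs[-2]) / ln(errs[-2] / errs[-3]))

print(round(coc(1, 1), 2))   # alpha == beta : order about 5
print(round(coc(1, 2), 2))   # alpha != beta : order about 4, not 6
```

Exact rational iterates avoid floating-point underflow, since the errors shrink below machine precision after two steps.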
To address all these problems, we first extend method (2) to Banach space valued operators. The order of convergence is computed using the $C O C$ or $A C O C$ (see Remark 1(d)). Our technique uses only the first derivative in the analysis of method (2), so we can solve classes of equations that could not be handled in [1].
The rest of the paper is organized as follows: Section 2 presents the convergence analysis of scheme (2). The applicability of our technique is demonstrated in Section 3.

## 2. Convergence Analysis

We introduce some scalar functions and parameters for the convergence analysis. Assume that the functions $v , w 0 , w , g ¯ 2 : [ 0 , + ∞ ) → [ 0 , + ∞ )$ are continuous and nondecreasing with $w 0 ( 0 ) = w ( 0 ) = 0$, and let $α , β ∈ S$. Assume that the equation
$w 0 ( t ) = 1$
has a minimal positive solution $r 0$.
Define functions $g 1 , h 1 , p$ and $h p$ on $[ 0 , r 0 )$ as follows:
$g 1 ( t ) = ∫ 0 1 w ( 1 − η ) t d η 1 − w 0 ( t ) , h 1 ( t ) = g 1 ( t ) − 1 , p ( t ) = | β | − 1 | β − α | w 0 ( t ) + | α | w 0 g 1 ( t ) t , β ≠ 0 , and h p ( t ) = p ( t ) − 1 .$
Notice that $h 1 ( 0 ) = h p ( 0 ) = − 1 < 0$ and $h 1 ( t ) → + ∞$, $h p ( t ) → + ∞$ as $t → r 0 −$. Then, by the intermediate value theorem (IVT), the functions $h 1$ and $h p$ have roots in $( 0 , r 0 )$. Let $r 1$ and $r p$ stand, respectively, for the smallest such roots of $h 1$ and $h p$. Additionally, we consider two functions $g 2$ and $h 2$ on $( 0 , r 0 )$ given by
$g 2 ( t ) = g ¯ 2 ( t ) t m − 1 , and h 2 ( t ) = g 2 ( t ) − 1 .$
Suppose that
$g ¯ 2 ( 0 ) < 1 , if m = 1$
and
$g 2 ( t ) → a ( a number greater than one or + ∞ )$
as $t → r ¯ 0 −$ for some $r ¯ 0 ≤ r 0$. Then, again by the IVT, the function $h 2$ has roots in $( 0 , r ¯ 0 )$. Let $r 2$ be the smallest such root. Notice that if $m > 1$, condition (5) is not needed to show $h 2 ( 0 ) < 0$, since in this case $h 2 ( 0 ) = g 2 ( 0 ) − 1 = 0 − 1 = − 1 < 0$.
Finally, define functions $g 3$ and $h 3$ on $[ 0 , r ¯ p )$ by
$g_3(t) = \left( 1 + \frac{\int_0^1 v(η\, g_2(t)\, t)\, dη}{1 − p(t)} \right) g_2(t) , \quad \text{and} \quad h_3(t) = g_3(t) − 1 ,$
where $r ¯ p = min { r p , r 2 }$. Suppose that
$( 1 + v ( 0 ) ) g ¯ 2 ( 0 ) < 1 , if m = 1 ,$
we get by (7) that $h 3 ( 0 ) = ( 1 + v ( 0 ) ) g ¯ 2 ( 0 ) − 1 < 0$, and $h 3 ( t ) → + ∞$ or a positive number as $t → r ¯ p −$. Let $r 3$ stand for the smallest root of $h 3$ in $( 0 , r ¯ p )$. Define the radius of convergence r by
$r = min { r 1 , r 3 } .$
Then, it holds
$0 ≤ g i ( t ) < 1 , i = 1 , 2 , 3 for each t ∈ [ 0 , r ) .$
Throughout, $U ( z , ρ )$ and $U ¯ ( z , ρ )$ denote, respectively, the open and closed balls in the Banach space $X$ with center $z ∈ X$ and radius $ρ > 0$.
Theorem 1.
Let$Θ : D ⊆ X → Y$be a differentiable operator. Let$v , w 0 , w , g ¯ 2 : [ 0 , ∞ ) → [ 0 , ∞ )$be nondecreasing continuous functions with$w 0 ( 0 ) = w ( 0 ) = 0$. Additionally, we consider that$r 0 ∈ [ 0 , ∞ ) , α ∈ S , β ∈ S − { 0 }$and$m ≥ 1$. Assume that there exists$κ ∈ D$such that for every$λ 1 ∈ D$
$Θ ( κ ) = 0 , Θ ′ ( κ ) − 1 ∈ L ( Y , X ) ,$
$∥ Θ ′ ( κ ) − 1 Θ ′ ( λ 1 ) − Θ ′ ( κ ) ∥ ≤ w 0 ( ∥ λ 1 − κ ∥ ) .$
and Equation (4) has a minimal solution $r 0$ and (5) holds.
Moreover, assume that for each$λ 1 , λ 2 ∈ D 0 : = D ∩ U ( κ , r 0 )$
$∥ Θ ′ ( κ ) − 1 Θ ′ ( λ 1 ) − Θ ′ ( λ 2 ) ∥ ≤ w ( ∥ λ 1 − λ 2 ∥ ) ,$
$∥ Θ ′ ( κ ) − 1 Θ ′ ( λ 1 ) ∥ ≤ v ( ∥ λ 1 − κ ∥ ) ,$
$∥ ϕ ( λ 1 , Θ ( λ 1 ) , Θ ′ ( λ 1 ) , Θ ′ ( λ 2 ) ) ∥ ≤ g ¯ 2 ( ∥ λ 1 − κ ∥ ) ∥ λ 1 − κ ∥ m$
and
$U ¯ ( κ , r ) ⊆ D .$
Then, for$x 0 ∈ U ( κ , r ) − { κ }$, we have$lim l → ∞ x l = κ$, where${ x l } ⊂ U ( κ , r )$and the following assertions hold
$∥ y l − κ ∥ ≤ g 1 ( ∥ Ω l ∥ ) ∥ Ω l ∥ ≤ ∥ Ω l ∥ < r ,$
$∥ z l − κ ∥ ≤ g 2 ( ∥ Ω l ∥ ) ∥ Ω l ∥ ≤ ∥ Ω l ∥$
and
$∥ x l + 1 − κ ∥ ≤ g 3 ( ∥ Ω l ∥ ) ∥ Ω l ∥ ≤ ∥ Ω l ∥ ,$
where$x l − κ = Ω l$and functions$g i , i = 1 , 2 , 3$are given previously. Moreover, if$R ≥ r$
$∫ 0 1 w 0 ( η R ) d η < 1 ,$
then κ is unique in$D 1 : = D ∩ U ¯ ( κ , R )$.
Proof.
We demonstrate by mathematical induction that the sequence ${ x l }$ is well-defined in $U ( κ , r )$ and converges to $κ$. By the hypothesis $x 0 ∈ U ( κ , r ) − { κ }$, (4), (6) and (13), we obtain
$∥ Θ ′ ( κ ) − 1 ( Θ ′ ( x 0 ) − Θ ′ ( κ ) ) ∥ ≤ w 0 ( ∥ Ω 0 ∥ ) < w 0 ( r ) < 1 ,$
where $Ω 0 = x 0 − κ$. Hence, $Θ ′ ( x 0 ) − 1 ∈ L ( Y , X )$, $y 0$ exists by the first substep of method (2), and
$∥ Θ ′ ( x 0 ) − 1 Θ ′ ( κ ) ∥ ≤ 1 1 − w 0 ( ∥ Ω 0 ∥ ) .$
From (4), (8), (9) (for $i = 1$), (10), (12), (21) and the first substep of (2), we have
$\| y_0 − κ \| = \| Ω_0 − Θ'(x_0)^{−1} Θ(x_0) \| = \left\| Θ'(x_0)^{−1} Θ'(κ) \int_0^1 Θ'(κ)^{−1} \big( Θ'(x_0) − Θ'(κ + η Ω_0) \big) Ω_0 \, dη \right\| ≤ \frac{\int_0^1 w((1 − η)\|Ω_0\|)\, dη \, \|Ω_0\|}{1 − w_0(\|Ω_0\|)} = g_1(\|Ω_0\|)\|Ω_0\| ≤ \|Ω_0\| < r ,$
which implies (16) for $l = 0$ and $y 0 ∈ U ( κ , r )$.
By (8), (9) (for $i = 2$) and (14), we get
$∥ z 0 − κ ∥ = ∥ ϕ ( x 0 , Θ ( x 0 ) , Θ ′ ( x 0 ) , Θ ′ ( y 0 ) ) ∥ ≤ g ¯ 2 ( ∥ Ω 0 ∥ ) ∥ Ω 0 ∥ m = g 2 ( ∥ Ω 0 ∥ ) ∥ Ω 0 ∥ ≤ ∥ Ω 0 ∥ < r ,$
so (17) holds for $l = 0$ and $z 0 ∈ U ( κ , r )$.
Using expressions (4), (8) and (11), we obtain
$∥ ( β Θ ′ ( κ ) ) − 1 [ ( β − α ) ( Θ ′ ( x 0 ) − Θ ′ ( κ ) ) + α ( Θ ′ ( y 0 ) − Θ ′ ( κ ) ) ] ∥ ≤ | β | − 1 | β − α | w 0 ( ∥ Ω 0 ∥ ) + | α | w 0 ( ∥ y 0 − κ ∥ ) ≤ | β | − 1 | β − α | w 0 ( ∥ Ω 0 ∥ ) + | α | w 0 ( g 1 ( ∥ Ω 0 ∥ ) ∥ Ω 0 ∥ ) = p ( ∥ Ω 0 ∥ ) ≤ p ( r ) < 1 ,$
so
$∥ ( ( β − α ) Θ ′ ( x 0 ) + α Θ ′ ( y 0 ) ) − 1 Θ ′ ( κ ) ∥ ≤ 1 1 − p ( ∥ Ω 0 ∥ ) .$
and $x 1$ is well-defined.
In view of (4), (8), (9) (for $i = 3$), (13), (22), (23) and (24), we get in turn that
$\| x_1 − κ \| ≤ \| z_0 − κ \| + |β| \, \| A_0^{−1} Θ'(κ) \| \int_0^1 v(η \| z_0 − κ \|) \, dη \, \| z_0 − κ \| ≤ \left( 1 + \frac{\int_0^1 v(η\, g_2(\|Ω_0\|)\|Ω_0\|)\, dη}{1 − p(\|Ω_0\|)} \right) g_2(\|Ω_0\|)\|Ω_0\| = g_3(\|Ω_0\|)\|Ω_0\| ≤ \|Ω_0\| < r ,$
which demonstrates (18) and $x 1 ∈ U ( κ , r )$. Replacing $x 0$, $y 0$, $z 0$, $x 1$ by $x l$, $y l$, $z l$, $x l + 1$ in the preceding estimates, we arrive at (16)–(18) for all l. From the estimate
$∥ x l + 1 − κ ∥ ≤ c ∥ Ω l ∥ < r , c = g 3 ( ∥ Ω 0 ∥ ) ∈ [ 0 , 1 ) ,$
so $lim l → ∞ x l = κ$ and $x l + 1 ∈ U ( κ , r )$.
It remains to show uniqueness, so we assume that $κ ∗ ∈ D 1$ with $Θ ( κ ∗ ) = 0$ and set $Q = ∫ 0 1 Θ ′ ( κ + η ( κ ∗ − κ ) ) d η$. From (8) and (15), we obtain
$∥ Θ ′ ( κ ) − 1 ( Q − Θ ′ ( κ ) ) ∥ ≤ ∫ 0 1 w 0 ( η ∥ κ ∗ − κ ∥ ) d η ≤ ∫ 0 1 w 0 ( η R ) d η < 1 ,$
and by
$0 = Θ ( κ ) − Θ ( κ ∗ ) = Q ( κ − κ ∗ ) ,$
we derive $κ = κ ∗$. □
Remark 1.
(a)
By expression (13), hypothesis (15) can be omitted if we set
$v ( t ) = 1 + w 0 ( t ) or v ( t ) = 1 + w 0 ( r 0 ) ,$
since,
$∥ Θ ′ ( κ ) − 1 Θ ′ ( x ) ∥ = ∥ I + Θ ′ ( κ ) − 1 ( Θ ′ ( x ) − Θ ′ ( κ ) ) ∥ ≤ 1 + ∥ Θ ′ ( κ ) − 1 ( Θ ′ ( x ) − Θ ′ ( κ ) ) ∥ ≤ 1 + w 0 ( ∥ x − κ ∥ ) ≤ 1 + w 0 ( r 0 ) for ∥ x − κ ∥ ≤ r 0 .$
(b)
Consider$w 0$to be strictly increasing, so we have
$r 0 = w 0 − 1 ( 1 )$
for (4).
(c)
If$w 0$and w are constants, then
$r_1 = \frac{2}{2 w_0 + w}$
and
$r ≤ r 1 ,$
where$r 1$is the convergence radius for well-known Newton’s method
$x l + 1 = x l − Θ ′ ( x l ) − 1 Θ ( x l ) ,$
given in [2].
On the other hand, Rheinboldt [3] and Traub [4] suggested
$r_{TR} = \frac{2}{3 w_1} ,$
where as Argyros [2,5]
$r_A = \frac{2}{2 w_0 + w_1} ,$
where$w 1$is the Lipschitz constant for (9) on $D$. Then,
$w ≤ w 1 , w 0 ≤ w 1 ,$
so
$r T R ≤ r A ≤ r 1$
and
$\frac{r_{TR}}{r_A} → \frac{1}{3} \quad \text{as} \quad \frac{w_0}{w_1} → 0 .$
(d)
We compute the computational order of convergence ($C O C$) or the approximate computational order of convergence ($A C O C$) [6], defined, respectively, by
$ξ = \frac{\ln\big( \| x_{l+1} − κ \| / \| x_l − κ \| \big)}{\ln\big( \| x_l − κ \| / \| x_{l−1} − κ \| \big)} \quad \text{and} \quad ξ^∗ = \frac{\ln\big( \| x_{l+1} − x_l \| / \| x_l − x_{l−1} \| \big)}{\ln\big( \| x_l − x_{l−1} \| / \| x_{l−1} − x_{l−2} \| \big)} ,$
not requiring derivatives of order higher than one; moreover, $ξ^∗$ does not depend on κ.
(e)
Our results can be adopted for operators Θ that satisfy [2,5]
$Θ ′ ( x ) = P ( Θ ( x ) ) ,$
for a continuous operator P. The advantage of our approach is that the results can be used without prior knowledge of the solution κ, since $Θ ′ ( κ ) = P ( Θ ( κ ) ) = P ( 0 )$. For example, for $Θ ( x ) = e x − 1$ we can take $P ( x ) = x + 1$.
(f)
Let us show how to choose the functions $ϕ , g 2 ¯ , g 2$ and m. Define the function ϕ by
$ϕ ( x l , Θ ( x l ) , Θ ′ ( x l ) , Θ ′ ( y l ) ) = y l − Θ ′ ( y l ) − 1 Θ ( y l ) .$
Then, we can choose
$g_2(t) = \frac{\int_0^1 w((1 − η)\, g_1(t)\, t)\, dη}{1 − w_0(g_1(t)\, t)} \, g_1(t) .$
If $w 0 , w , v$ are given in particular by $w 0 ( t ) = L 0 t , w ( t ) = L t$ and $v ( t ) = M$ for some $L 0 , L > 0$ and $M ≥ 1$, then we have that
$\bar g_2(t) = \frac{L^3}{8 (1 − L_0 t)^2 \left( 1 − \frac{L_0 L t^2}{2 (1 − L_0 t)} \right)} , \quad g_2(t) = \bar g_2(t)\, t^3 \quad \text{and} \quad m = 4 .$
(g)
If$β = 0$, we can obtain the results for the two-step method
$y l = x l − Θ ′ ( x l ) − 1 Θ ( x l ) , x l + 1 = ϕ ( x l , Θ ( x l ) , Θ ′ ( x l ) , Θ ′ ( y l ) )$
by setting$z l = x l + 1$in Theorem 1.
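For the linear choices $w_0(t) = L_0 t$ and $w(t) = L t$ of Remark 1(f), the radius $r_1$ can be obtained either from the closed form of Remark 1(c) or by bisection on $h_1(t) = g_1(t) − 1$. A minimal Python sketch (with illustrative constants $L_0 = 1$, $L = 1.5$ of our own choosing) checks that the two agree:

```python
def r1_closed(L0, L):
    """Closed-form radius from Remark 1(c): r1 = 2/(2*L0 + L)."""
    return 2 / (2 * L0 + L)

def r1_bisect(L0, L, steps=200):
    """Smallest positive root of h1(t) = g1(t) - 1, where
    g1(t) = (L*t/2)/(1 - L0*t) for w0(t) = L0*t, w(t) = L*t."""
    h1 = lambda t: (L * t / 2) / (1 - L0 * t) - 1
    lo, hi = 0.0, (1 / L0) * (1 - 1e-12)   # search inside [0, r0), r0 = 1/L0
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h1(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

print(r1_closed(1.0, 1.5), r1_bisect(1.0, 1.5))  # both ~0.571428...
```

The bisection version carries over unchanged to the non-linear $w$ functions used in the examples of Section 3, where no closed form is available.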
### Convergence Order of Expression (3) from [1]
Theorem 2.
Let $Θ : R → R$ have a simple zero ξ and be sufficiently many times differentiable in an interval containing ξ. Further, assume that the initial guess $x 0$ is sufficiently close to ξ. Then, the iterative scheme (3) from [1] has at least fourth-order convergence and satisfies the following error equation
$e l + 1 = − c 2 2 c 2 2 + c 3 ( α − β ) β e l 4 + 1 2 β 2 [ 4 β c 4 c 2 ( β − α ) − 4 c 2 4 ( 2 α 2 − 8 α β + 5 β 2 ) − 2 c 3 c 2 2 ( 2 α 2 + α β − 4 β 2 ) + 3 β c 3 2 ( β − α ) ] e l 5 + O ( e l 6 ) ,$
where $α , β ∈ R$, $β ≠ 0$, $e l = x l − ξ$ and $c_j = \frac{Θ^{(j)}(ξ)}{j! \, Θ'(ξ)}$ for $j = 1 , 2 , … , 6$.
Proof.
The Taylor series expansions of $Θ ( x l )$ and its first-order derivative $Θ ′ ( x l )$ around $x = ξ$, with the assumption $Θ ′ ( ξ ) ≠ 0$, lead to:
$Θ ( x l ) = Θ ′ ( ξ ) ∑ j = 1 6 c j e l j + O ( e l 7 ) ,$
and
$Θ'(x_l) = Θ'(ξ) \sum_{j=1}^{6} j \, c_j \, e_l^{\,j−1} + O ( e_l^{6} ) ,$
respectively.
By using Equations (49) and (50), we get
$y l − ξ = c 2 e l 2 − 2 ( c 2 2 − c 3 ) e l 3 + ( 4 c 2 3 − 7 c 3 c 2 + 3 c 4 ) e l 4 + ( − 8 c 2 4 + 20 c 3 c 2 2 − 10 c 4 c 2 − 6 c 3 2 + 4 c 5 ) e l 5 + ( 16 c 2 5 − 52 c 3 c 2 3 + 28 c 4 c 2 2 + ( 33 c 3 2 − 13 c 5 ) c 2 − 17 c 3 c 4 + 5 c 6 ) e l 6 + O ( e l 7 ) .$
Expanding $Θ ′ ( y l )$ about $ξ$ gives
$Θ ′ ( y l ) = Θ ′ ( ξ ) [ 1 + 2 c 2 2 e l 2 + ( 4 c 2 c 3 − 4 c 2 3 ) e l 3 + c 2 ( 8 c 2 3 − 11 c 3 c 2 + 6 c 4 ) e l 4 − 4 c 2 ( 4 c 2 4 − 7 c 3 c 2 2 + 5 c 4 c 2 − 2 c 5 ) e l 5 + 2 16 c 2 6 − 34 c 3 c 2 4 + 30 c 4 c 2 3 − 13 c 5 c 2 2 + ( 5 c 6 − 8 c 3 c 4 ) c 2 + 6 c 3 3 e l 6 ] .$
From Equations (50)–(52) in the second substep of (3), we have
$z l − ξ = c 2 2 + c 3 2 e l 3 + − 3 c 2 3 + 3 c 3 c 2 2 + c 4 e l 4 + 6 c 2 4 − 9 c 3 c 2 2 + 2 c 4 c 2 − 3 4 ( c 3 2 − 2 c 5 ) e l 5 + 1 2 − 18 c 2 5 + 50 c 3 c 2 3 − 30 c 4 c 2 2 − 5 c 3 2 − c 5 c 2 − 5 c 3 c 4 + 4 c 6 e l 6 + O ( e l 7 ) .$
Similarly, we can expand $Θ ( z l )$ about $ξ$ with the help of a Taylor series expansion:
$Θ ( z l ) = Θ ′ ( ξ ) [ c 2 2 + c 3 2 e l 3 + − 3 c 2 3 + 3 c 3 c 2 2 + c 4 e l 4 + 6 c 2 4 − 9 c 3 c 2 2 + 2 c 4 c 2 − 3 4 ( c 3 2 − 2 c 5 ) e l 5 + c 2 c 2 2 + c 3 2 2 + 1 2 − 18 c 2 5 + 50 c 3 c 2 3 − 30 c 4 c 2 2 − 5 ( c 3 2 − c 5 ) c 2 − 5 c 3 c 4 + 4 c 6 e l 6 + O ( e l 7 ) ] .$
Adopting expressions (49)–(54) in the last substep of method (3), we have
$e l + 1 = − c 2 2 c 2 2 + c 3 ( α − β ) β e l 4 + 1 2 β 2 [ 4 β c 4 c 2 ( β − α ) − 4 c 2 4 ( 2 α 2 − 8 α β + 5 β 2 ) − 2 c 3 c 2 2 ( 2 α 2 + α β − 4 β 2 ) + 3 β c 3 2 ( β − α ) ] e l 5 + O ( e l 6 ) .$
Choosing $α = β$ in (55), we obtain
$e_{l+1} = \left( 2 c_2^4 + c_3 c_2^2 \right) e_l^5 + O ( e_l^6 ) .$
Expression (55) confirms that scheme (3) attains at most fifth-order convergence, reached for $α = β$ (as seen in (56)). This completes the proof and contradicts the claim of the authors of [1]. □
This type of proof and theme is close to work on generalizations of the fixed point theorem [2,5,7,8]. We recall a standard definition.
Definition 2.
Let ${ x l }$ be a sequence in $X$ which converges to κ. Then, the convergence is of order $λ ≥ 1$ if there exist $C > 0$ and $l 0 ∈ N$ such that
$\| x_{l+1} − κ \| ≤ C \, \| x_l − κ \|^{λ} \quad \text{for each } l ≥ l_0 .$

## 3. Examples with Applications

Here, we test the theoretical results on four numerical examples. Throughout this section, we take $ϕ ( x_l , Θ ( x_l ) , Θ ′ ( x_l ) , Θ ′ ( y_l ) ) = x_l − \frac{2 Θ ( x_l )}{Θ ′ ( y_l ) + Θ ′ ( x_l )}$, so that $m = 2$ from the computational point of view; the resulting method is denoted by $( M 1 )$.
Example 1.
Set$X = Y = C [ 0 , 1 ]$. Consider an integral equation [9], defined by
$x ( β ) = 1 + \int_0^1 T ( β , α ) \left( x ( α )^{3/2} + \frac{x ( α )^2}{2} \right) d α ,$
where
$T ( β , α ) = \begin{cases} ( 1 − β ) α , & α ≤ β , \\ β ( 1 − α ) , & β ≤ α . \end{cases}$
Consider the corresponding operator $Θ : C [ 0 , 1 ] → C [ 0 , 1 ]$ defined by
$Θ ( x ) ( β ) = x ( β ) − \int_0^1 T ( β , α ) \left( x ( α )^{3/2} + \frac{x ( α )^2}{2} \right) d α .$
Notice that
$\int_0^1 T ( β , α ) \, d α ≤ \frac{1}{8} ,$
and
$Θ ′ ( x ) y ( β ) = y ( β ) − \int_0^1 T ( β , α ) \left( \frac{3}{2} x ( α )^{1/2} + x ( α ) \right) y ( α ) \, d α .$
Using$κ ( s ) = 0$, we obtain
$\| Θ ′ ( κ )^{−1} ( Θ ′ ( x ) − Θ ′ ( y ) ) \| ≤ \frac{1}{8} \left( \frac{3}{2} \| x − y \|^{1/2} + \| x − y \| \right) ,$
So, we can set
$w_0 ( α ) = w ( α ) = \frac{1}{8} \left( \frac{3}{2} α^{1/2} + α \right) .$
Hence, by adopting Remark 1(a), we have
$v ( α ) = 1 + w_0 ( α ) \quad \text{or} \quad v ( α ) = M .$
The results in [1] are not applicable, since $Θ ′$ is not Lipschitz. However, our results can be used. The radii of convergence of method (2) for Example 1 are given in Table 1.
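Since $w_0 = w$ here, the radius $r_1$ reported in Table 1 can be reproduced by solving $h_1(t) = g_1(t) − 1 = 0$ numerically; using $\int_0^1 (1−η)^{1/2} dη = 2/3$ and $\int_0^1 (1−η)\, dη = 1/2$, we get $g_1(t) = \frac{(1/8)(\sqrt{t} + t/2)}{1 − w_0(t)}$. A bisection sketch in Python (our own check, not part of the original computation):

```python
import math

def w0(t):
    # w0 = w for Example 1: (1/8)*(1.5*sqrt(t) + t)
    return (1.5 * math.sqrt(t) + t) / 8

def h1(t):
    # g1(t) - 1, with the integrals of w evaluated in closed form
    return (math.sqrt(t) + t / 2) / 8 / (1 - w0(t)) - 1

lo, hi = 0.0, 4.7357        # w0 stays below 1 on this interval
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h1(mid) < 0 else (lo, mid)
print(round((lo + hi) / 2, 4))  # ~2.6303, the r1 column of Table 1
```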
Example 2.
Consider a system of differential equations
$θ 1 ′ ( x ) − θ 1 ( x ) − 1 = 0 θ 2 ′ ( y ) − ( e − 1 ) y − 1 = 0 θ 3 ′ ( z ) − 1 = 0$
that model the motion of an object, with $θ 1 ( 0 ) = θ 2 ( 0 ) = θ 3 ( 0 ) = 0$. Then, for $v = ( x , y , z ) T$, define $Θ : = ( θ 1 , θ 2 , θ 3 ) : D → R 3$ by
$Θ ( v ) = \left( e^x − 1 , \; \frac{e − 1}{2} y^2 + y , \; z \right)^T .$
We have
$Θ ′ ( v ) = \begin{pmatrix} e^x & 0 & 0 \\ 0 & ( e − 1 ) y + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} .$
Then, we get $w_0 ( t ) = L_0 t , w ( t ) = L t , w_1 ( t ) = L_1 t$ and $v ( t ) = M$, where $L_0 = e − 1 < L = e^{1 / L_0} = 1.789572397$, $L_1 = e$ and $M = e^{1 / L_0}$. The convergence radii of scheme (2) for Example 2 are given in Table 2.
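With these constants, the $r_1$ entry of Table 2 follows directly from the closed forms of Remark 1(c), and the ordering $r_{TR} ≤ r_A ≤ r_1$ can be observed; a quick Python check (our own sketch):

```python
import math

L0 = math.e - 1         # center-Lipschitz constant for Example 2
L = math.exp(1 / L0)    # restricted Lipschitz constant, ~1.789572
L1 = math.e             # classical Lipschitz constant on D

r_TR = 2 / (3 * L1)         # Rheinboldt/Traub radius
r_A = 2 / (2 * L0 + L1)     # Argyros radius
r1 = 2 / (2 * L0 + L)       # radius from the present analysis

print(r_TR, r_A, r1)  # r_TR <= r_A <= r1; r1 ~ 0.382692 as in Table 2
```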
In all the examples, we use the stopping criteria (i) $∥ F ( X_l ) ∥ < 10^{−100}$ and (ii) $∥ X_{l+1} − X_l ∥ < 10^{−100}$.
Example 3.
Set$X = Y = C [ 0 , 1 ]$and$D = U ¯ ( 0 , 1 )$. Consider Θ on$D$as
$Θ ( φ ) ( x ) = φ ( x ) − 5 \int_0^1 x \, η \, φ ( η )^3 \, d η .$
We have that
$Θ ′ ( φ ) ( ξ ) ( x ) = ξ ( x ) − 15 \int_0^1 x \, η \, φ ( η )^2 ξ ( η ) \, d η , \quad \text{for each } ξ ∈ D .$
Then, we get $κ = 0 , L_0 = 7.5 , L_1 = L = 15$ and $M = 2$, leading to $w_0 ( t ) = L_0 t , v ( t ) = 2 = M , w ( t ) = L t , w_1 ( t ) = L_1 t$. The radii of convergence of scheme (2) for Example 3 are given in Table 3.
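Again, the $r_1$ entry of Table 3 matches the closed form $r_1 = 2/(2L_0 + L)$ of Remark 1(c) (a one-line check of our own):

```python
L0, L = 7.5, 15.0        # constants from Example 3
r1 = 2 / (2 * L0 + L)    # closed form from Remark 1(c)
print(r1)                # 0.0666..., the r1 column of Table 3
```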
Example 4.
For the example given in the Introduction, we get $L = L_0 = 96.662907$ and $M = 2$. Then, we can set $w_0 ( t ) = L_0 t , v ( t ) = M = 2 , w ( t ) = L t , w_1 ( t ) = L t$. The convergence radii of the iterative method (2) for Example 4 are given in Table 4.

## 4. Conclusions

A major problem in the development of iterative methods is the imposition of convergence conditions. For especially high-order methods such as (2), the earlier study [1] requires the operator to be seven times differentiable, although such derivatives do not appear in the method itself, limiting its applicability. Moreover, no computable error bounds or uniqueness results for the solution were given. We address these problems based only on the first-order derivative, which actually appears in the method. The convergence order is determined using the $C O C$ or $A C O C$, which do not require derivatives of order higher than one. Our technique can be used to extend the applicability of other iterative methods [1,2,3,4,5,6,7,8,9,10,11,12,13] along the same lines.

## Author Contributions

All authors contributed equally to this paper.

## Funding

This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D-253-247-1440). The authors, therefore, acknowledge, with thanks, the DSR technical and financial support.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Chen, S.P.; Qian, Y.H. A family of combined iterative methods for solving nonlinear equations. Am. J. Appl. Math. Stat. 2017, 5, 22–32. [Google Scholar] [CrossRef]
2. Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: Berlin, Germany, 2008. [Google Scholar]
3. Rheinboldt, W.C. An Adaptive Continuation Process for Solving Systems of Nonlinear Equations; Polish Academy of Science, Banach Center Publications: Warsaw, Poland, 1978; Volume 3, pp. 129–142. [Google Scholar]
4. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
5. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis; World Scientific Publishing Company: New Jersey, NJ, USA, 2013. [Google Scholar]
6. Kou, J. A third-order modification of Newton method for systems of nonlinear equations. Appl. Math. Comput. 2007, 191, 117–121. [Google Scholar]
7. Petkovic, M.S.; Neta, B.; Petkovic, L.; Džunič, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
8. Sharma, J.R.; Ghua, R.K.; Sharma, R. An efficient fourth-order weighted-Newton method for system of nonlinear equations. Numer. Algor. 2013, 62, 307–323. [Google Scholar] [CrossRef]
9. Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342. [Google Scholar] [CrossRef]
10. Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174. [Google Scholar] [CrossRef]
11. Argyros, I.K.; Magreñán, Á.A. Ball convergence theorems and the convergence planes of an iterative methods for nonlinear equations. SeMA 2015, 71, 39–55. [Google Scholar]
12. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Increasing the order of convergence of iterative schemes for solving nonlinear system. J. Comput. Appl. Math. 2012, 252, 86–94. [Google Scholar] [CrossRef]
13. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes. In Research Notes in Mathematics; Pitman Advanced Publishing Program: Boston, MA, USA, 1984; Volume 103. [Google Scholar]
Table 1. Radii of convergence for problem (1).

| $α$ | $β$ | m | $r_1$ | $r_p$ | $r_2$ | $r_3$ | r | Method |
|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 2 | 2.6303 | 3.13475 | 2.6303 | 2.1546 | 2.1546 | M1 |
| 1 | 2 | 2 | 2.6303 | 3.35124 | 2.6303 | 2.0157 | 2.0157 | M1 |
Table 2. Radii of convergence for problem (2).

| $α$ | $β$ | $r_1$ | $r_p$ | $r_2$ | $r_3$ | r | Method | $x_0$ | n | $ρ$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 0.382692 | 0.422359 | 0.321733 | 0.218933 | 0.218933 | M1 | 0.15 | 3 | 4.9963 |
| 1 | 2 | 0.382692 | 0.441487 | 0.321733 | 0.218933 | 0.218933 | M1 | 0.11 | 4 | 4.0000 |
Table 3. Radii of convergence for problem (3).

| $α$ | $β$ | m | $r_1$ | $r_p$ | $r_2$ | $r_3$ | r | Method |
|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 2 | 0.0666667 | 0.0824045 | 0.0233123 | 0.00819825 | 0.00819825 | M1 |
| 1 | 2 | 2 | 0.0666667 | 0.0888889 | 0.0233123 | 0.00819825 | 0.00819825 | M1 |
Table 4. Radii of convergence for problem (4).

| $α$ | $β$ | m | $r_1$ | $r_p$ | $r_2$ | $r_3$ | r | Method | $x_0$ | n | $ρ$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 2 | 0.0102914 | 0.0102917 | 0.00995072 | 0.00958025 | 0.00958025 | M1 | 1.008 | 3 | 5.0000 |
| 1 | 2 | 2 | 0.0102914 | 0.010292 | 0.00995072 | 0.00958025 | 0.00958025 | M1 | 1.007 | 4 | 3.0000 |
