Article

Extending King’s Method for Finding Solutions of Equations

by Samundra Regmi 1, Ioannis K. Argyros 2,*, Santhosh George 3 and Christopher I. Argyros 4

1 Learning Commons, University of North Texas at Dallas, Dallas, TX 75201, USA
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangalore 575025, India
4 Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Foundations 2022, 2(2), 348-361; https://doi.org/10.3390/foundations2020024
Submission received: 4 March 2022 / Revised: 9 April 2022 / Accepted: 11 April 2022 / Published: 18 April 2022
(This article belongs to the Special Issue Iterative Methods with Applications in Mathematical Sciences)

Abstract:
King’s method is applied to solve scalar equations. Its local convergence analysis has previously been established under conditions involving the fifth derivative, even though the only derivative appearing in the method is the first. Consequently, earlier studies apply only to equations whose functions are at least five times differentiable, and they provide no information for equations involving less smooth functions, although King’s method may still converge for them. That is why the new analysis uses only the operators and the first derivatives that actually appear in King’s method. The article also contains a semi-local analysis, not presented before, for complex-valued functions. Numerical applications complement the theory.
MSC:
49M15; 65J15; 65G99

1. Introduction

In this article, the function F : Ω ⊂ T → T is differentiable, where T = ℝ or T = ℂ and Ω is an open, nonempty set.
The nonlinear equation
F(x) = 0    (1)
is studied in this article. An analytic form of a solution x* is preferred. However, this form is not always available. So, mostly iterative methods have been applied to approximate the solution x*.
In particular, King’s [1] fourth-order method (KM) has been used:
u₀ ∈ Ω,
v_n = u_n − F′(u_n)⁻¹F(u_n),
u_{n+1} = v_n − A_n⁻¹(F(u_n) + γF(v_n))F′(u_n)⁻¹F(v_n),    (2)
where γ ∈ T is a parameter and A_n = F(u_n) + (γ − 2)F(v_n).
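For readers who want to experiment, the scheme (2) can be sketched in a few lines of Python (an illustrative implementation of the scalar case; the helper name `king` and the test equation are ours):

```python
def king(F, dF, u0, gamma=2.0, tol=1e-12, max_iter=50):
    """King's fourth-order two-step method for a scalar equation F(x) = 0.

    v_n     = u_n - F(u_n)/F'(u_n)                         (Newton substep)
    u_{n+1} = v_n - (F(u_n) + gamma*F(v_n))/A_n * F(v_n)/F'(u_n),
    where A_n = F(u_n) + (gamma - 2)*F(v_n).
    """
    u = u0
    for _ in range(max_iter):
        fu, dfu = F(u), dF(u)
        if fu == 0.0:          # already at a root
            return u
        v = u - fu / dfu       # Newton substep
        fv = F(v)
        A = fu + (gamma - 2.0) * fv
        u = v - (fu + gamma * fv) / A * fv / dfu
        if abs(F(u)) < tol:
            break
    return u

# Example: the cube root of 2 as the solution of x^3 - 2 = 0.
root = king(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, u0=1.0)
```

With γ = 2, A_n reduces to F(u_n); this is the specialization used in the examples later in the paper.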
As motivation, consider the real function
μ(s) = 0 if s = 0, and μ(s) = s⁵ − s⁴ + s³ log s² if s ≠ 0.
This definition gives
μ‴(s) = 6 log s² + 60s² − 24s + 22.
Hence, the third derivative is unbounded in any neighborhood of s = 0. So, the convergence of KM is not assured by the previous analyses in [1,2,3,4,5,6,7,8].
This is the case, since Taylor series requiring derivatives of high order (not appearing in KM) are utilized in the convergence analysis. The same observation applies to other methods, such as Traub’s, Jarratt’s, and the Kung–Traub method, to mention a few [2,3,5,6,7,8,9,10]. On top of these concerns, other problems exist with the earlier studies: no computable bounds are provided for the distances |u_{n+1} − u_n| or |u_n − x*|, nor results on the uniqueness and location of the solution x*.
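This unboundedness is easy to observe numerically (a small check of the closed form of μ‴; the sampling points are ours):

```python
import math

def mu_ppp(s):
    # Third derivative of mu(s) = s**5 - s**4 + s**3 * log(s**2) for s != 0.
    return 6.0 * math.log(s ** 2) + 60.0 * s ** 2 - 24.0 * s + 22.0

# |mu'''(s)| grows without bound as s -> 0 because of the logarithmic term.
samples = [mu_ppp(s) for s in (1e-2, 1e-4, 1e-8, 1e-12)]
```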
All these concerns are addressed by utilizing conditions involving only the first derivative, which appears in method (2) [9,10,11,12,13,14,15,16].
The next four sections contain the semi-local analysis, the local analysis, the numerical examples, and the conclusions, respectively.

2. Semi-Local Analysis

Set L₀, L, L₁, L₂, δ and η to be positive parameters. Set L₃ = LL₂/2 and L₄ = δ|γ|L²/4. Let the sequence {t_n} be given by
t₀ = 0, s₀ = η,
t_{n+1} = s_n + [L₃ + L₄(s_n − t_n)](s_n − t_n)³ / ((1 − p_n)(1 − L₀t_n)),
s_{n+1} = t_{n+1} + [L(t_{n+1} − t_n)² + 2L₁(t_{n+1} − s_n)] / (2(1 − L₀t_{n+1})),    (3)
where p_n = L₂(t_n + |γ − 2|(s_n − η)). The sequence {t_n} shall be shown to be majorizing for KM.
Lemma 1.
Suppose
t_n < 1/L₀ and p_n < 1 for all n = 0, 1, 2, ….    (4)
Then, the following assertions hold:
t_n ≤ s_n ≤ t_{n+1}    (5)
and
lim_{n→∞} t_n = t* ≤ 1/L₀,    (6)
where t* is the unique least upper bound of the sequence {t_n}.
Proof.
Assertions (5) and (6) follow immediately from (3) and (4). □
Another result is given for the sequence {t_n}, using conditions that are stronger than (4) but easier to verify. First, we need to introduce some notation. Let
a = (L₃ + L₄η)η²,  b = [Lt₁² + 2L₁(t₁ − η)] / (2(1 − L₀t₁)η),
and
c = max{a, b}.
Define polynomials on the interval [0, 1) by
f_n^(1)(t) = 2(L₃ + L₄t^n η)t^{2n−1}η² + L₀(1 + t)(1 + t + ⋯ + t^{n−1})η − 1,
g_n^(1)(t) = 2(L₃ + L₄t^{n+1}η)t^{n+1}η − 2(L₃ + L₄t^n η)t^{n−1}η + L₀(1 + t),
g₁(t) = g₁^(1)(t),
and
f_n^(2)(t) = L[4(L₃ + L₄t^n η)(t^n η)² + 1]² t^{n−1}η + 8L₁(L₃ + L₄t^n η)t^{2n−1}η² + 2L₀(1 + t)(1 + t + ⋯ + t^n)η − 2.
Moreover, set
g₂(t) = g₂^(2)(t).
Notice that the polynomials g₁ and g₂ are independent of n. In particular,
g₁(t) = 2L₄tη²(t³ − 1) + 2L₃η(t² − 1) + L₀(1 + t).
Then, the condition g₁(t) ≥ 0 needed in the next lemma holds if
2L₄tη²(1 − t³) + 2L₃η(1 − t²) ≤ L₀(1 + t).
The left-hand side of this estimate is a positive multiple of η, whereas the right-hand side is positive and independent of η. So, this estimate certainly holds for sufficiently small η. The same observation is made for the polynomial g₂ and the condition g₂(t) ≥ 0.
An auxiliary result connects these polynomials.
Lemma 2.
The following items hold:
(i) f_{n+1}^(1)(t) − f_n^(1)(t) = g_n^(1)(t)t^n η;
(ii) g_{n+1}^(1)(t) ≥ g_n^(1)(t);
(iii) f_{n+1}^(1)(t) ≥ f_n^(1)(t) + g₁(t)t^n η, if g₁(t) ≥ 0;
and
(iv) f_{n+1}^(2)(t) ≥ f_n^(2)(t) + g₂(t)t^{n−1}η, if g₂(t) ≥ 0.
Proof.
By the definition of these polynomials, we get in turn:
(i)
f_{n+1}^(1)(t) = f_{n+1}^(1)(t) − f_n^(1)(t) + f_n^(1)(t)
= 2(L₃ + L₄t^{n+1}η)t^{2n+1}η² + L₀(1 + t)(1 + t + ⋯ + t^n)η − 1 − 2(L₃ + L₄t^n η)t^{2n−1}η² − L₀(1 + t)(1 + t + ⋯ + t^{n−1})η + f_n^(1)(t) + 1
= f_n^(1)(t) + g_n^(1)(t)t^n η;
(ii)
g_{n+1}^(1)(t) − g_n^(1)(t) = 2(L₃ + L₄t^{n+2}η)t^{n+2}η − 2(L₃ + L₄t^{n+1}η)t^n η − 2(L₃ + L₄t^{n+1}η)t^{n+1}η + 2(L₃ + L₄t^n η)t^{n−1}η
= 2[(L₃ + L₄t^{n+2}η)t³ − (L₃ + L₄t^{n+1}η)t − (L₃ + L₄t^{n+1}η)t² + (L₃ + L₄t^n η)]t^{n−1}η
= 2(t − 1)²(t + 1)(L₃ + L₄ηt^n(t² + t + 1))t^{n−1}η ≥ 0;
(iii)
This estimate follows immediately from the first two;
(iv)
It follows similarly from the definition of the polynomials g₂ and f_n^(2), since t ∈ [0, 1). □
Define the parameters
β₁ = (1 − L₀η)/(1 + L₀η),  β₂ = (1 − 2L₀η)/(1 + 2L₀η),  β₃ = (1 − 2L₂η)/(1 + 2L₂η(1 + 2|γ − 2|)),
β = min{β₁, β₂, β₃}
and
M = 2 max{L₀, L₂}.
Notice that β ∈ (0, 1).
Lemma 3.
Suppose:
L₀t₁ < 1,    (7)
Mη < 1,    (8)
c ≤ α ≤ β,    (9)
g₁(t) ≥ 0 at t = α    (10)
and
g₂(t) ≥ 0 at t = α    (11)
hold for some α ∈ (0, 1). Then, the sequence {t_n} is convergent to t*. Notice that the criteria (7)–(11) determine the “smallness” of η required to force the convergence of the method.
Proof.
Mathematical induction is used to show
0 ≤ [L₃ + L₄(s_m − t_m)](s_m − t_m)³ / ((1 − p_m)(1 − L₀t_m)) ≤ α(s_m − t_m),    (12)
0 ≤ [L(t_{m+1} − t_m)² + 2L₁(t_{m+1} − s_m)] / (2(1 − L₀t_{m+1})) ≤ α(s_m − t_m)    (13)
and
t_m ≤ s_m ≤ t_{m+1}.    (14)
These estimates are true for m = 0 by (7) or (8) and the definition of the sequence {t_m}. Then, it follows that 0 ≤ t₁ − s₀ ≤ α(s₀ − t₀) = αη and 0 ≤ s₁ − t₁ ≤ α(s₀ − t₀) = αη. Suppose
0 ≤ t_{m+1} − s_m ≤ α(s_m − t_m) ≤ α^{m+1}η
and
0 ≤ s_{m+1} − t_{m+1} ≤ α(s_m − t_m) ≤ α^{m+1}η.
Then,
t_{m+1} ≤ s_m + α^{m+1}η ≤ t_m + α^m η + α^{m+1}η ≤ s_{m−1} + 2α^m η + α^{m+1}η ≤ ⋯ ≤ s₁ + 2α²η + ⋯ + 2α^m η + α^{m+1}η ≤ t₁ + αη + 2α²η + ⋯ + 2α^m η + α^{m+1}η ≤ η + 2αη(1 + α + ⋯ + α^{m−1}) + α^{m+1}η = η(1 + α)(1 − α^{m+1})/(1 − α) < η(1 + α)/(1 − α) = t**.
Evidently, (12) holds if
2(L₃ + L₄α^m η)(α^m η)² + L₀α(1 + α)(1 − α^m)η/(1 − α) − α ≤ 0
or
f_m^(1)(t) ≤ 0 at t = α.
Define the function
f^(1)(t) = lim_{m→∞} f_m^(1)(t).    (20)
It can be shown instead, using Lemma 2, that
f^(1)(t) ≤ 0 at t = α.    (21)
However, by the definition of f_m^(1) and (20),
f^(1)(t) = L₀(1 + t)η/(1 − t) − 1.    (22)
Then, (21) holds by (10) and (22). Moreover, instead of (13), we can show
[L((t_{n+1} − s_n) + (s_n − t_n))² + 2L₁[L₃ + L₄(s_n − t_n)](s_n − t_n)³/((1 − p_n)(1 − L₀t_n))] / (2(1 − L₀t_{n+1})) ≤ α(s_n − t_n),    (23)
since
1/(1 − L₀t_m) ≤ 2,    (24)
1/(1 − p_m) ≤ 2    (25)
and
0 ≤ t_{m+1} − t_m ≤ (1 + α)(s_m − t_m)
hold. Indeed, (24) holds if
2L₀t_m ≤ 2L₀(1 + α)η/(1 − α) ≤ 1
or
α ≤ (1 − 2L₀η)/(1 + 2L₀η).
However, this holds because of the choice of β₂ and (9). Moreover, the estimate (25) holds if
2L₂[|γ − 2|((1 + α)η/(1 − α) − η) + (1 + α)η/(1 − α)] ≤ 1,
which is true by the choice of β₃ and (9). Then, (23) holds if
L[4(L₃ + L₄(s_n − t_n))(s_n − t_n)² + 1]²(s_n − t_n) + 8L₁(L₃ + L₄(s_n − t_n))(s_n − t_n)² ≤ 2α(1 − L₀t_{n+1})
or
L[4(L₃ + L₄α^n η)(α^n η)² + 1]²α^{n−1}η + 8L₁(L₃ + L₄α^n η)α^{2n−1}η² + 2L₀(1 + α)(1 + α + ⋯ + α^n)η − 2 ≤ 0
or
f_m^(2)(t) ≤ 0 at t = α
or, passing to the limit function f^(2)(t) = lim_{m→∞} f_m^(2)(t),
f^(2)(t) ≤ 0 at t = α.
However, this holds by (11). By the definition of the sequence {t_m}, (12) and (13), the estimate (14) also holds. Therefore, the induction for the estimates (12)–(14) is complete. Hence, the sequence {t_m} is non-decreasing and bounded from above by t**, so it converges to t*. □
The semi-local convergence analysis of KM uses the conditions (H). Suppose that there exist:
(H1)
u₀ ∈ Ω, η ≥ 0, δ ≥ 0 such that F′(u₀) ≠ 0, A₀ ≠ 0, |F′(u₀)⁻¹F(u₀)| ≤ η and |A₀⁻¹F′(u₀)| ≤ δ;
(H2)
L₀ > 0 such that |F′(u₀)⁻¹(F′(v) − F′(u₀))| ≤ L₀|v − u₀| for all v ∈ Ω. Set Ω₀ = U(u₀, 1/L₀) ∩ Ω;
(H3)
L > 0, L₁ > 0, L₂ > 0 such that |F′(u₀)⁻¹(F′(v) − F′(w))| ≤ L|v − w|,
|F′(u₀)⁻¹F′(v)| ≤ L₁
and
|A₀⁻¹F′(v)| ≤ L₂
for all v, w ∈ Ω₀;
(H4)
The conditions of Lemma 1 or of Lemma 3 are true;
(H5)
U[u₀, t*] ⊂ Ω.
Theorem 1.
Assume that the conditions (H) hold. Then, the sequence {u_n} generated by KM is well defined in U(u₀, t*), remains in U(u₀, t*) for all n = 0, 1, 2, …, and converges to a solution x* ∈ U[u₀, t*] of Equation (1), so that
|v_m − u_m| ≤ s_m − t_m    (29)
and
|u_{m+1} − v_m| ≤ t_{m+1} − s_m.    (30)
Proof.
We have, by the definition of {t_n} and (H1),
|v₀ − u₀| = |F′(u₀)⁻¹F(u₀)| ≤ η = s₀ − t₀ < t*.
So, (29) is true for m = 0 and v₀ ∈ U(u₀, t*). Pick u ∈ U(u₀, t*). By (H1), (H2) and the definition of t*, we get
|F′(u₀)⁻¹(F′(u) − F′(u₀))| ≤ L₀|u − u₀| ≤ L₀t* < 1.
That is, F′(u) ≠ 0, with
|F′(u)⁻¹F′(u₀)| ≤ 1/(1 − L₀|u − u₀|).    (31)
By the Banach lemma on invertible functions [11,12,13], the iterate u₁ is well defined. Suppose that u_k, v_k ∈ U(u₀, t*). Then, we can write
u_{k+1} − v_k = −A_k⁻¹(F(u_k) + γF(v_k))F′(u_k)⁻¹F(v_k).    (32)
By (H1) and (H3), we get
|A₀⁻¹(A_k − A₀)| ≤ |A₀⁻¹(F(u_k) − F(u₀))| + |γ − 2||A₀⁻¹(F(v_k) − F(v₀))|
≤ ∫₀¹|A₀⁻¹F′(u₀ + θ(u_k − u₀))|dθ|u_k − u₀| + |γ − 2|∫₀¹|A₀⁻¹F′(v₀ + θ(v_k − v₀))|dθ|v_k − v₀|
≤ L₂(|u_k − u₀| + |γ − 2||v_k − v₀|) = p̄_k ≤ p_k = L₂(t_k + |γ − 2|(s_k − η)) < 1,
so A_k ≠ 0 and
|A_k⁻¹A₀| ≤ 1/(1 − p_k).    (33)
Then, by (H3), (3), (31) (for u = u_k), (32) and (33), we obtain
|u_{k+1} − v_k| ≤ |A_k⁻¹A₀|[|A₀⁻¹F(u_k)| + |γ||A₀⁻¹F′(u₀)||F′(u₀)⁻¹F(v_k)|]|F′(u_k)⁻¹F′(u₀)||F′(u₀)⁻¹F(v_k)|
≤ [L₂(s_k − t_k) + δ|γ|(L/2)(s_k − t_k)²](L/2)(s_k − t_k)²/((1 − p_k)(1 − L₀t_k)) = t_{k+1} − s_k,    (34)
so (30) holds, where we also used that (29) and (30) hold for all indices smaller than k. We also get
|F′(u₀)⁻¹F(u_k)| = |F′(u₀)⁻¹F′(u_k)(v_k − u_k)| ≤ L₁|v_k − u_k| ≤ L₁(s_k − t_k),    (35)
F(v_k) = F(v_k) − F(u_k) + F(u_k) = ∫₀¹F′(u_k + θ(v_k − u_k))dθ(v_k − u_k) − F′(u_k)(v_k − u_k)    (36)
and, consequently,
|F′(u₀)⁻¹F(v_k)| ≤ (L/2)(s_k − t_k)².    (37)
We also have
|u_{k+1} − u₀| ≤ |u_{k+1} − v_k| + |v_k − u₀| ≤ (t_{k+1} − s_k) + (s_k − t₀) = t_{k+1} < t*,
so u_{k+1} ∈ U(u₀, t*). Then, we write
F(u_{k+1}) = F(u_{k+1}) − F(u_k) + F(u_k)
= F(u_{k+1}) − F(u_k) − F′(u_k)(v_k − u_k)
= F(u_{k+1}) − F(u_k) − F′(u_k)(u_{k+1} − u_k) + F′(u_k)(u_{k+1} − v_k)
= ∫₀¹[F′(u_k + θ(u_{k+1} − u_k)) − F′(u_k)]dθ(u_{k+1} − u_k) + F′(u_k)(u_{k+1} − v_k).
By (H3), we get
|F′(u₀)⁻¹F(u_{k+1})| ≤ (L/2)|u_{k+1} − u_k|² + L₁|u_{k+1} − v_k| ≤ (L/2)(t_{k+1} − t_k)² + L₁(t_{k+1} − s_k).    (38)
Then, by the first substep of KM and (31),
|v_{k+1} − u_{k+1}| ≤ |F′(u_{k+1})⁻¹F′(u₀)||F′(u₀)⁻¹F(u_{k+1})| ≤ [(L/2)(t_{k+1} − t_k)² + L₁(t_{k+1} − s_k)]/(1 − L₀t_{k+1}) = s_{k+1} − t_{k+1},
and
|v_{k+1} − u₀| ≤ |v_{k+1} − u_{k+1}| + |u_{k+1} − u₀| ≤ (s_{k+1} − t_{k+1}) + (t_{k+1} − t₀) = s_{k+1} < t*.
Therefore, (29) holds and v_{k+1} ∈ U[u₀, t*]. The induction is complete. So, {u_k} is a Cauchy sequence in T (since {t_k} converges) and, hence, there exists x* ∈ U[u₀, t*] such that lim_{k→∞} u_k = x*. By letting k → ∞ in (35), we conclude that F(x*) = 0. □
Notice that 1/L₀ (under the conditions of Lemma 1) or (1 + α)η/(1 − α) (under the conditions of Lemma 3), both available in closed form, may be used for t* in Theorem 1.
Proposition 1.
Suppose:
(1)
The point b ∈ U[u₀, r₀] ∩ Ω is a solution of Equation (1), F(b) = 0, and the condition (H2) holds;
(2)
There exists r ≥ r₀ such that
L₀(r + r₀) < 2.    (40)
Set Ω₁ = U[u₀, r] ∩ Ω. Then, b is the only solution of Equation (1) in Ω₁.
Proof.
Let ξ ∈ Ω₁ satisfy F(ξ) = 0. Set B = ∫₀¹F′(b + q(ξ − b))dq. Then, by (H2) and (40), we obtain in turn that
|F′(u₀)⁻¹(B − F′(u₀))| ≤ L₀∫₀¹((1 − q)|u₀ − b| + q|u₀ − ξ|)dq ≤ (L₀/2)(r₀ + r) < 1.
Therefore, ξ = b follows from B ≠ 0 and B(ξ − b) = F(ξ) − F(b) = 0 − 0 = 0. □

3. Local Convergence

Set K₀, K and K₁ to be positive parameters. Define the function g₁ : [0, 1/K₀) → ℝ by
g₁(t) = Kt/(2(1 − K₀t)).
Notice that
ρ₀ = 2/(2K₀ + K) < 1/K₀
is the radius of convergence for Newton’s method provided by us in [11,12,13]. The point ρ₀ also solves the equation
G₁(t) = g₁(t) − 1 = 0.
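A quick numerical sanity check (ours) confirms that ρ₀ solves G₁(t) = 0 for any admissible choice of the parameters:

```python
def g1(t, K0, K):
    # g1(t) = K*t / (2*(1 - K0*t)) on [0, 1/K0).
    return K * t / (2.0 * (1.0 - K0 * t))

def rho0(K0, K):
    # Newton convergence radius: the solution of g1(t) = 1.
    return 2.0 / (2.0 * K0 + K)

K0, K = 2.0, 3.0      # sample parameters (ours)
t = rho0(K0, K)       # 2/(2*2 + 3) = 2/7
```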
Define the functions q : [0, 1/K₀) → ℝ and Q : [0, 1/K₀) → ℝ by
q(t) = |γ − 2|K₁g₁(t) + (K/2)t
and
Q(t) = q(t) − 1.
Then, we have Q(0) = −1 and Q(ρ₀) = (K/2)ρ₀ + |γ − 2|K₁ − 1 > 0. The intermediate value theorem assures that Q has zeros in (0, ρ₀). Let ρ_Q stand for the smallest such zero in (0, ρ₀). Define the functions g₂ : [0, ρ_Q) → ℝ and G₂ : [0, ρ_Q) → ℝ by
g₂(t) = g₁(t)[1 + K₁²(1 + |γ|g₁(t))/((1 − q(t))(1 − K₀t))]
and
G₂(t) = g₂(t) − 1.
It follows that G₂(0) = −1 and G₂(t) → +∞ as t → ρ_Q⁻. Let ρ be the smallest zero of G₂ on (0, ρ_Q). Set I = [0, ρ). Then, the definition of ρ implies that, for all t ∈ I,
0 ≤ g₁(t) < 1,
0 ≤ q(t) < 1
and
0 ≤ g₂(t) < 1.
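The radius ρ can be computed numerically by locating the smallest zeros of Q and G₂ with bisection. The following sketch (ours) assumes the closed forms g₁(t) = Kt/(2(1 − K₀t)), q(t) = |γ − 2|K₁g₁(t) + (K/2)t and g₂(t) = g₁(t)(1 + K₁²(1 + |γ|g₁(t))/((1 − q(t))(1 − K₀t))) used in this section:

```python
def radius(K0, K, K1, gamma):
    """Smallest positive zero of G2(t) = g2(t) - 1 on (0, rho_Q), by bisection."""
    g1 = lambda t: K * t / (2.0 * (1.0 - K0 * t))
    q = lambda t: abs(gamma - 2.0) * K1 * g1(t) + 0.5 * K * t
    g2 = lambda t: g1(t) * (1.0 + K1 ** 2 * (1.0 + abs(gamma) * g1(t))
                            / ((1.0 - q(t)) * (1.0 - K0 * t)))

    rho_0 = 2.0 / (2.0 * K0 + K)

    def bisect(f, lo, hi):
        # f is increasing with f(lo) < 0 < f(hi); returns its unique root.
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    rho_Q = bisect(lambda t: q(t) - 1.0, 0.0, rho_0)   # q(rho_Q) = 1
    return bisect(lambda t: g2(t) - 1.0, 0.0, rho_Q)   # g2(rho) = 1

# Sample parameters (ours); gamma = 0 is the Ostrowski-type member of the family.
rho = radius(K0=1.0, K=1.0, K1=1.0, gamma=0.0)
```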
The local convergence analysis of KM uses the conditions (C). Suppose that there exist:
(C1)
a solution x* ∈ Ω of Equation (1) with F′(x*) ≠ 0;
(C2)
K₀ > 0 such that
|F′(x*)⁻¹(F′(x*) − F′(w))| ≤ K₀|x* − w|    (44)
for all w ∈ Ω. Define Ω₂ = U(x*, 1/K₀) ∩ Ω;
(C3)
K > 0, K₁ > 0 such that
|F′(x*)⁻¹(F′(w) − F′(v))| ≤ K|w − v|    (45)
and
|F′(x*)⁻¹F(v)| ≤ K₁|x* − v|    (46)
for all v, w ∈ Ω₂;
(C4)
U[x*, ρ] ⊂ Ω.
Theorem 2.
Choose u₀ ∈ U(x*, ρ) \ {x*}. Then, under the conditions (C), the sequence {u_n} generated by KM converges to x*, so that
|v_n − x*| ≤ g₁(d_n)d_n ≤ d_n < ρ    (42)
and
d_{n+1} ≤ g₂(d_n)d_n ≤ d_n,    (43)
where d_n = |u_n − x*| and the functions g₁, g₂ are as previously defined.
Proof.
Pick z ∈ U(x*, ρ) \ {x*}. Then, by (C1) and (C2),
|F′(x*)⁻¹(F′(z) − F′(x*))| ≤ K₀|z − x*| ≤ K₀ρ < 1.    (47)
So, we have F′(z) ≠ 0 and
|F′(z)⁻¹F′(x*)| ≤ 1/(1 − K₀|z − x*|).    (48)
For z = u₀, we see that the iterate v₀ is well defined by KM for n = 0. Moreover, we can write
v₀ − x* = u₀ − x* − F′(u₀)⁻¹F(u₀) = −F′(u₀)⁻¹[∫₀¹(F′(x* + θ(u₀ − x*)) − F′(u₀))dθ](u₀ − x*).    (49)
By (48) (for z = u₀), (C3) and (49), we have in turn that
|v₀ − x*| ≤ K|u₀ − x*|²/(2(1 − K₀|u₀ − x*|)) = g₁(|u₀ − x*|)|u₀ − x*| ≤ |u₀ − x*| < ρ.    (50)
Hence, the iterate v₀ ∈ U(x*, ρ) and (42) holds for n = 0. Next, we show that A₀ ≠ 0. For u₀ ≠ x*, we obtain, by (C1), (C2) and (46),
|(F′(x*)(u₀ − x*))⁻¹[A₀ − F′(x*)(u₀ − x*)]| ≤ (1/|u₀ − x*|)[|F′(x*)⁻¹(F(u₀) − F(x*) − F′(x*)(u₀ − x*))| + |γ − 2||F′(x*)⁻¹F(v₀)|]
≤ (1/|u₀ − x*|)[(K/2)|u₀ − x*|² + |γ − 2|K₁|v₀ − x*|]
≤ (K/2)|u₀ − x*| + |γ − 2|K₁g₁(|u₀ − x*|) = q(|u₀ − x*|) ≤ q(ρ) < 1.
It follows that A₀ ≠ 0 and
|A₀⁻¹F′(x*)| ≤ 1/(|u₀ − x*|(1 − q(|u₀ − x*|))).    (51)
Then, using (44), (C3), (48), (50) and (51),
|u₁ − x*| ≤ |v₀ − x*| + |A₀⁻¹F′(x*)|(|F′(x*)⁻¹F(u₀)| + |γ||F′(x*)⁻¹F(v₀)|)|F′(u₀)⁻¹F′(x*)||F′(x*)⁻¹F(v₀)|
≤ [1 + K₁²(|u₀ − x*| + |γ||v₀ − x*|)/(|u₀ − x*|(1 − q(|u₀ − x*|))(1 − K₀|u₀ − x*|))]|v₀ − x*|
≤ [1 + K₁²(1 + |γ|g₁(|u₀ − x*|))/((1 − q(|u₀ − x*|))(1 − K₀|u₀ − x*|))]g₁(|u₀ − x*|)|u₀ − x*|
= g₂(|u₀ − x*|)|u₀ − x*| ≤ |u₀ − x*| < ρ.
That is, the iterate u₁ ∈ U(x*, ρ) and (43) holds for n = 0. Simply replace u₀, v₀, u₁ by u_k, v_k, u_{k+1} in the preceding calculations to complete the induction for (42) and (43). Then, it follows from the estimate
|u_{k+1} − x*| ≤ λ|u_k − x*| < ρ,
where λ = g₂(|u₀ − x*|) ∈ [0, 1), that lim_{k→∞} u_k = x* and u_{k+1} ∈ U(x*, ρ). □
A uniqueness result for the solution follows next.
Proposition 2.
Suppose:
(1)
The element λ ∈ U(x*, ρ₀) ∩ Ω solves Equation (1), F(λ) = 0, and the condition (C2) holds;
(2)
There exists ρ* ≥ ρ₀ such that
K₀ρ* < 2.    (54)
Set Ω₃ = U[λ, ρ*] ∩ Ω. Then, λ is the only solution of Equation (1) in Ω₃.
Proof.
Let x̄ ∈ Ω₃ with F(x̄) = 0. Set E = ∫₀¹F′(λ + τ(x̄ − λ))dτ. Then, using (C2) and (54), we get in turn that
|F′(λ)⁻¹(E − F′(λ))| ≤ K₀∫₀¹(1 − τ)|λ − x̄|dτ ≤ (K₀/2)ρ* < 1.
Hence, x̄ = λ follows from E ≠ 0 and E(λ − x̄) = F(λ) − F(x̄) = 0 − 0 = 0. □
Next, the fourth order of convergence is shown using only the first derivative. Suppose that
|A_n⁻¹F′(z)| ≤ ω    (55)
and
|F′(x)⁻¹(F′(x) − F′(y))| ≤ ω₀|x − y|    (56)
hold for all x, y, z ∈ Ω and some constants ω > 0 and ω₀ > 0. Further, suppose that
θ = (ωω₀²/2)(3/2 + ω₀/4 + |γ|(ω₀/2)(1 + ω₀/4)) − 1 > 0.    (57)
Let ψ(t) = φ(t) − 1, where φ(t) = (ωω₀²/2)(3/2 + (ω₀/4)t + |γ|(ω₀/2)(t + (ω₀/4)t²))t³. Then, ψ(0) = −1 < 0 and ψ(1) = (ωω₀²/2)(3/2 + ω₀/4 + |γ|(ω₀/2)(1 + ω₀/4)) − 1 = θ > 0. Hence, by the intermediate value theorem, the equation ψ(t) = 0 has positive solutions. Let r_o be the smallest such solution.
Theorem 3.
Suppose that the conditions (55)–(57) hold. Then, the sequence {u_n} given by (2) is convergent to x* with order four, i.e.,
|u_{n+1} − x*| ≤ ϱ(r_o)d_n⁴,    (58)
where ϱ(r_o) = (ωω₀²/2)(3/2 + (ω₀/4)r_o + |γ|(ω₀/2)(r_o + (ω₀/4)r_o²)).
Proof.
The first substep of (2) and (56) give
|v_n − x*| = |F′(u_n)⁻¹∫₀¹[F′(u_n) − F′(x* + θ(u_n − x*))]dθ(u_n − x*)| ≤ (ω₀/2)d_n².
Note that
u_{n+1} − x* = v_n − x* − A_n⁻¹(F(u_n) + γF(v_n))F′(u_n)⁻¹F(v_n)
= A_n⁻¹[A_n − (F(u_n) + γF(v_n))F′(u_n)⁻¹∫₀¹F′(x* + θ(v_n − x*))dθ](v_n − x*)
= A_n⁻¹F(u_n)F′(u_n)⁻¹[F′(u_n) − ∫₀¹F′(x* + θ(v_n − x*))dθ](v_n − x*)
+ γA_n⁻¹F(v_n)F′(u_n)⁻¹[F′(u_n) − ∫₀¹F′(x* + θ(v_n − x*))dθ](v_n − x*)
− 2A_n⁻¹F(v_n)(v_n − x*),
so, since F(u_n) = ∫₀¹F′(x* + u(u_n − x*))du(u_n − x*) and F(v_n) = ∫₀¹F′(x* + u(v_n − x*))du(v_n − x*),
d_{n+1} ≤ |A_n⁻¹∫₀¹F′(x* + u(u_n − x*))du||F′(u_n)⁻¹∫₀¹[F′(u_n) − F′(x* + θ(v_n − x*))]dθ|d_n|v_n − x*|
+ |γ||A_n⁻¹∫₀¹F′(x* + u(v_n − x*))du||F′(u_n)⁻¹∫₀¹[F′(u_n) − F′(x* + θ(v_n − x*))]dθ||v_n − x*|²
+ 2|A_n⁻¹∫₀¹F′(x* + u(v_n − x*))du||v_n − x*|².
Therefore, (55) and (56) give
d_{n+1} ≤ ωω₀((d_n + |v_n − x*|)/2)d_n|v_n − x*| + |γ|ωω₀((d_n + |v_n − x*|)/2)|v_n − x*|² + 2ω|v_n − x*|²
≤ [(ωω₀²/2)(1 + (ω₀/4)d_n) + |γ|(ωω₀³/4)(d_n + (ω₀/4)d_n²) + ωω₀²/4]d_n⁴
= φ(d_n)d_n ≤ ϱ(r_o)d_n⁴. □
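The rapid error decay predicted by Theorem 3 can be observed on a simple smooth equation (an illustrative experiment; the test function x³ − 2 and the starting point are ours):

```python
def king_step(F, dF, u, gamma=2.0):
    # One step of scheme (2); with gamma = 2, A_n reduces to F(u_n).
    fu, dfu = F(u), dF(u)
    if fu == 0.0:
        return u
    v = u - fu / dfu
    fv = F(v)
    A = fu + (gamma - 2.0) * fv
    return v - (fu + gamma * fv) / A * fv / dfu

F = lambda x: x**3 - 2.0
dF = lambda x: 3.0 * x**2
star = 2.0 ** (1.0 / 3.0)   # exact solution

u, errors = 1.2, []
for _ in range(3):
    u = king_step(F, dF, u)
    errors.append(abs(u - star))   # errors shrink far faster than linearly
```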

4. Numerical Example

We verify convergence criteria using KM.
Example 1.
Consider the scalar function F defined on the set Ω = U[u₀, 1 − s] for s ∈ (0, 1) by
F(x) = x³ − s.
Choose γ = 2 and u₀ = 1. Then, we obtain the estimates η = (1 − s)/3,
|F′(u₀)⁻¹(F′(x) − F′(u₀))| = |x² − u₀²| ≤ |x + u₀||x − u₀| ≤ (|x − u₀| + 2|u₀|)|x − u₀| ≤ (1 − s + 2)|x − u₀| = (3 − s)|x − u₀|
for each x ∈ Ω, so L₀ = 3 − s and Ω₀ = U(u₀, 1/L₀) ∩ Ω = U(u₀, 1/L₀);
|F′(u₀)⁻¹(F′(y) − F′(x))| = |y² − x²| ≤ |y + x||y − x| ≤ (|y − u₀| + |x − u₀| + 2|u₀|)|y − x| ≤ (1/L₀ + 1/L₀ + 2)|y − x| = 2(1 + 1/L₀)|y − x|
for each x, y ∈ Ω₀, so L = 2(1 + 1/L₀);
|F′(u₀)⁻¹(F′(y) − F′(x))| ≤ (|y − u₀| + |x − u₀| + 2|u₀|)|y − x| ≤ (1 − s + 1 − s + 2)|y − x| = 2(2 − s)|y − x|
for each x, y ∈ Ω. Moreover, |F′(u₀)⁻¹F′(v)| = |v|² ≤ (2 − s)² for each v ∈ Ω, so L₁ = (2 − s)² and L₂ = 3(2 − s)²/|F(1) + (γ − 2)F(1 − F(1)/F′(1))|.
Then, for s = 0.95 and γ = 0.5, we have 1/L₀ = 0.4878.
According to the information in Table 1, the conditions of Lemma 1 hold. Consequently, the sequence converges and the interval of initial points is further extended.
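The arithmetic behind these constants is easy to reproduce (our own check):

```python
s = 0.95
eta = (1.0 - s) / 3.0        # |F'(u0)^{-1} F(u0)| with u0 = 1
L0 = 3.0 - s                 # center-Lipschitz constant
L = 2.0 * (1.0 + 1.0 / L0)   # Lipschitz constant on Omega_0
L1 = (2.0 - s) ** 2          # bound on |F'(u0)^{-1} F'(v)|
radius0 = 1.0 / L0           # reported as 0.4878 in the example
```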
Example 2.
Consider the function F : I = [−1, 1] → ℝ defined by
F(x) = eˣ − 1.
Notice that x* = 0 solves the equation F(x) = 0. Choose γ = 2. Then, the conditions of Theorem 3 hold for ω = ω₀ = e², and the radius is r_o = 0.1381.
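Starting from a point inside the ball of radius r_o, the specialization of KM with γ = 2 converges to x* = 0 very quickly (an illustrative check; the starting point u₀ = 0.1 is ours):

```python
import math

def king_step(F, dF, u, gamma=2.0):
    # One step of scheme (2); with gamma = 2, A_n reduces to F(u_n).
    fu, dfu = F(u), dF(u)
    if fu == 0.0:
        return u
    v = u - fu / dfu
    fv = F(v)
    A = fu + (gamma - 2.0) * fv
    return v - (fu + gamma * fv) / A * fv / dfu

F = lambda x: math.exp(x) - 1.0
dF = lambda x: math.exp(x)

u = 0.1           # |u0 - x*| = 0.1 < r_o = 0.1381
for _ in range(4):
    u = king_step(F, dF, u)
```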
Example 3.
The example given in the introduction yields ω = ω₀ = 96.6629073. Then, for γ = 2, the radius is
r_o = 0.0092.
Recall that it was shown in the introduction that the earlier articles cannot be used to solve this problem. The method used here is the specialization of KM for γ = 2.

5. Conclusions

In this article, an extension of KM is presented. The convergence of KM has previously been shown by assuming the existence of the fifth derivative, which does not appear in the method. The same observation holds for other high-convergence-order methods, such as Traub’s and Jarratt’s; further such methods can be found in [1,2,3,4,5,6,7,8] and the references therein. Therefore, those results cannot assure convergence for equations involving less smooth functions, even though the methods may converge for them. Other concerns involve the absence of computable error estimates or uniqueness results. This is our motivation for presenting a convergence analysis based only on the first derivative, which is the derivative used in KM. The generality of the technique allows its application to the other methods mentioned previously. This can be a fruitful direction of future research.

Author Contributions

Conceptualization, S.R., C.I.A., I.K.A. and S.G.; methodology, S.R., C.I.A., I.K.A. and S.G.; software, S.R., C.I.A., I.K.A. and S.G.; validation, S.R., C.I.A., I.K.A. and S.G.; formal analysis, S.R., C.I.A., I.K.A. and S.G.; investigation, S.R., C.I.A., I.K.A. and S.G.; resources, S.R., C.I.A., I.K.A. and S.G.; data curation, S.R., C.I.A., I.K.A. and S.G.; writing—original draft preparation, S.R., C.I.A., I.K.A. and S.G.; writing—review and editing, S.R., C.I.A., I.K.A. and S.G.; visualization, S.R., C.I.A., I.K.A. and S.G.; supervision, S.R., C.I.A., I.K.A. and S.G. project administration, S.R., C.I.A., I.K.A. and S.G.; funding acquisition, S.R., C.I.A., I.K.A. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. King, R.F. A family of fourth-order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879.
2. Behl, R.; Maroju, P.; Martinez, E.; Singh, S. A study of the local convergence of a fifth order iterative scheme. Indian J. Pure Appl. Math. 2020, 51, 439–455.
3. Chun, C.; Lee, M.Y.; Neta, B.; Dzunić, J. On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comput. 2012, 218, 6427–6438.
4. Gunerhan, H. Optical soliton solutions of nonlinear Davey-Stewartson equation using an efficient method. Rev. Mex. Física 2021, 67.
5. Gutiérrez, J.M.; Magreñán, Á.A.; Varona, J.L. The Gauss-Seidelization of iterative methods for solving nonlinear equations in the complex plane. Appl. Math. Comput. 2011, 218, 2467–2479.
6. Petković, M.S.; Neta, B.; Petković, L.D.; Dzunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2012.
7. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599.
8. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964.
9. Jhangeer, A.; Muddassar, M.; Awrejcewicz, J.; Naz, Z.; Riaz, M.B. Phase portrait, multi-stability, sensitivity and chaotic analysis of Gardner’s equation with their wave turbulence and solitons solutions. Results Phys. 2022, 32, 104981.
10. Nisar, K.S.; Inc, M.; Jhangeer, A.; Muddassar, M.; Infal, B. New soliton solutions of Heisenberg ferromagnetic spin chain model. Pramana-J. Phys. 2022, 96, 28.
11. Argyros, I.K.; Hilout, S. Inexact Newton-type methods. J. Complex. 2010, 26, 577–590.
12. Argyros, I.K. Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications. Mathematics 2021, 9, 1942.
13. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press, Taylor and Francis Group: Boca Raton, FL, USA, 2022.
14. Argyros, I.K.; George, S. On the complexity of extending the convergence region for Traub’s method. J. Complex. 2020, 56, 101423.
15. Argyros, I.K.; George, S. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Nova Publisher: Hauppauge, NY, USA, 2021; Volume IV.
16. Magréñan, Á.A.; Argyros, I.K.; Rainer, J.J.; Sicilia, J.A. Ball convergence of a sixth-order Newton-like method based on means under weak conditions. J. Math. Chem. 2018, 56, 2117–2131.
Table 1. Sequence (3) and condition (4).

n      1        2        3        4        5        6
p_n    0        0.1004   0.1033   0.1033   0.1033   0.1033
t_n    0.0167   0.0172   0.0172   0.0172   0.0172   0.0172
s_n    0.0167   0.0172   0.0172   0.0172   0.0172   0.0172

