A Comment on this article was published on 26 April 2016; see Algorithms 2016, 9(2), 30.

On the Kung-Traub Conjecture for Iterative Methods for Solving Quadratic Equations

by
Diyashvir Kreetee Rajiv Babajee
Independent Scholar, 65, Captain Pontre Street, Sainte Croix, Port Louis 11708, Mauritius
Algorithms 2016, 9(1), 1; https://doi.org/10.3390/a9010001
Submission received: 26 October 2015 / Revised: 13 December 2015 / Accepted: 16 December 2015 / Published: 24 December 2015
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

Abstract

Kung-Traub's conjecture states that an optimal iterative method based on d function evaluations for finding a simple zero of a nonlinear function can achieve a maximum convergence order of $2^{d-1}$. In recent years, many attempts have been made to prove this conjecture or to develop optimal methods which satisfy it. The conjecture implies that the maximum order reached by a method with three function evaluations is four, even for quadratic functions. In this paper, we show that the conjecture fails for quadratic functions: we can find a 2-point method with three function evaluations reaching fifth order convergence. We also develop 2-point 3rd to 8th order methods with one function and two first derivative evaluations using weight functions. Furthermore, we show that, with the same number of function evaluations, we can develop higher order 2-point methods of order $r + 2$, where $r \geq 1$ is a positive integer. We also show that we can develop a higher order method with the same number of function evaluations if we know the asymptotic error constant of the previous method. We prove the local convergence of these methods, which we term Babajee's Quadratic Iterative Methods, and we extend them to systems involving quadratic equations. We test our methods with some numerical experiments, including an application to Chandrasekhar's integral equation arising in radiative heat transfer theory.

1. Introduction

The problem of finding a simple zero of a nonlinear equation $f(x) = 0$ arises in many applications of science and technology. The most commonly used method is the Newton-Raphson method (often simply called Newton's method). Many higher order variants of Newton's method have been developed and rediscovered in the last 15 years. Recently, the order of convergence of many variants of Newton's method has been improved, using the same number of function evaluations, by means of weight functions (see [1,2,3,4,5,6] and the references therein). The aim of such research is to develop optimal methods which satisfy Kung-Traub's conjecture. In this paper, we develop 2-point methods with one function and two first derivative evaluations for solving quadratic equations and study Kung-Traub's conjecture for these methods. We extend these methods to systems of quadratic equations and conduct some numerical experiments to test the efficiencies of the methods.

2. Developments of the Methods

Let $x^{(k+1)} = \psi(x^{(k)})$ define an Iterative Function (I.F.).
Definition 1. 
[7] If the sequence $\{x^{(k)}\}$ tends to a limit $x^*$ in such a way that
$$\lim_{k \to \infty} \frac{x^{(k+1)} - x^*}{(x^{(k)} - x^*)^p} = C$$
for some $p \geq 1$, then the order of convergence of the sequence is said to be p, and C is known as the asymptotic error constant. If $p = 1$, $p = 2$ or $p = 3$, the convergence is said to be linear, quadratic or cubic, respectively.
Let $e^{(k)} = x^{(k)} - x^*$; then the relation
$$e^{(k+1)} = C (e^{(k)})^p + O\big((e^{(k)})^{p+1}\big) = O\big((e^{(k)})^p\big)$$
is called the error equation. The value of p is called the order of convergence of the method.
Definition 2. 
[8] The Efficiency Index is given by
$$EI = p^{1/d}$$
where d is the total number of new function evaluations (the values of f and its derivatives) per iteration.
Let $x^{(k+1)}$ be determined by new information at $x^{(k)}, \phi_1(x^{(k)}), \ldots, \phi_i(x^{(k)})$, $i \geq 1$. No old information is reused. Thus,
$$x^{(k+1)} = \psi\big(x^{(k)}, \phi_1(x^{(k)}), \ldots, \phi_i(x^{(k)})\big)$$
Then ψ is called a multipoint I.F. without memory.
Kung-Traub's Conjecture [9]. Let ψ be an I.F. without memory with d evaluations. Then
$$p(\psi) \leq p_{opt} = 2^{d-1}$$
where $p_{opt}$ is the maximum order.
The second order Newton I.F. (2ndNR) is given by
$$\psi_{2ndNR}(x) = x - u(x), \qquad u(x) = \frac{f(x)}{f'(x)}$$
The 2ndNR I.F. is a 1-point I.F. with two function evaluations and it satisfies the Kung-Traub conjecture with $d = 2$. Thus, $EI_{2ndNR} = 2^{1/2} \approx 1.414$. The 2-point fourth order Jarratt I.F. (4thJM) [10] is given by
$$\psi_{4thJM}(x) = x - u(x)\,\frac{3\tau + 1}{6\tau - 2}, \qquad \tau = \frac{f'\!\left(x - \frac{2}{3}u(x)\right)}{f'(x)}$$
The 4thJM I.F. with three function evaluations satisfies the Kung-Traub conjecture with $d = 3$.
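To make the two baseline I.F.s concrete, here is a minimal sketch in Python (not from the paper; function names are illustrative) of one 2ndNR step and one 4thJM step, applied to $f(x) = x^2 - 2$.

```python
# Minimal sketch (not from the paper): one Newton (2ndNR) step and one
# Jarratt (4thJM) step for a scalar equation f(x) = 0. Names are illustrative.

def newton_step(f, fprime, x):
    """2ndNR: x - u(x), with u(x) = f(x)/f'(x)."""
    return x - f(x) / fprime(x)

def jarratt_step(f, fprime, x):
    """4thJM: x - u(x)*(3*tau + 1)/(6*tau - 2), tau = f'(x - 2u/3)/f'(x)."""
    u = f(x) / fprime(x)
    tau = fprime(x - 2.0 * u / 3.0) / fprime(x)
    return x - u * (3.0 * tau + 1.0) / (6.0 * tau - 2.0)

f = lambda x: x**2 - 2.0
fp = lambda x: 2.0 * x
x = 1.0
for _ in range(4):
    x = jarratt_step(f, fp, x)
print(x)  # approaches sqrt(2) = 1.4142135623730951
```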
According to Kung-Traub’s conjecture, it is not possible to obtain an I.F. with three function evaluations reaching an order greater than four. We show that this conjecture fails for quadratic functions.
We consider the quadratic function $f(x) = \kappa_2 x^2 + \kappa_1 x + \kappa_0$, where $\kappa_2 \neq 0$, $\kappa_1$, $\kappa_0$ are constants. Consider the following I.F. for quadratic functions:
$$\psi_{(r+2)thBQIM}(x) = x - u(x)\,H(\tau, r)$$
where
$$H(\tau, r) = 1 + \sum_{i=1}^{r} a_i (\tau - 1)^i$$
and the $a_i$'s are constants.
The error equation of the I.F. defined by Equation (7) for $r = 6$ is given by
$$\begin{aligned}
\psi(x) - x^* ={}& \left(\tfrac{4}{3}a_1 + 1\right) c_2 (e^{(k)})^2 + \left(-\tfrac{16}{3}a_1 - \tfrac{16}{9}a_2 - 2\right) c_2^2 (e^{(k)})^3 \\
&+ \left(\tfrac{52}{3}a_1 + \tfrac{112}{9}a_2 + \tfrac{64}{27}a_3 + 4\right) c_2^3 (e^{(k)})^4 \\
&+ \left(-\tfrac{152}{3}a_1 - \tfrac{176}{3}a_2 - \tfrac{640}{27}a_3 - \tfrac{256}{81}a_4 - 8\right) c_2^4 (e^{(k)})^5 \\
&+ \left(\tfrac{416}{3}a_1 + \tfrac{688}{3}a_2 + \tfrac{3968}{27}a_3 + \tfrac{3328}{81}a_4 + \tfrac{1024}{243}a_5 + 16\right) c_2^5 (e^{(k)})^6 \\
&+ \left(-\tfrac{1088}{3}a_1 - 800a_2 - \tfrac{19456}{27}a_3 - \tfrac{25600}{81}a_4 - \tfrac{16384}{243}a_5 - \tfrac{4096}{729}a_6 - 32\right) c_2^6 (e^{(k)})^7 \\
&+ \left(\tfrac{2752}{3}a_1 + \tfrac{7744}{3}a_2 + \tfrac{82496}{27}a_3 + \tfrac{151040}{81}a_4 + \tfrac{50176}{81}a_5 + \tfrac{77824}{729}a_6 + 64\right) c_2^7 (e^{(k)})^8 + \ldots
\end{aligned}$$
where $c_2 = \dfrac{f''(x^*)}{2 f'(x^*)}$, $f'(x^*) \neq 0$.
Eliminating the terms in $(e^{(k)})^j$, $j = 2, 3, 4, 5, 6, 7$, we obtain a system of 6 linear equations in 6 unknowns:
$$A X = B$$
where
$$A = \begin{pmatrix}
\frac{4}{3} & 0 & 0 & 0 & 0 & 0 \\
\frac{16}{3} & \frac{16}{9} & 0 & 0 & 0 & 0 \\
\frac{52}{3} & \frac{112}{9} & \frac{64}{27} & 0 & 0 & 0 \\
\frac{152}{3} & \frac{176}{3} & \frac{640}{27} & \frac{256}{81} & 0 & 0 \\
\frac{416}{3} & \frac{688}{3} & \frac{3968}{27} & \frac{3328}{81} & \frac{1024}{243} & 0 \\
\frac{1088}{3} & 800 & \frac{19456}{27} & \frac{25600}{81} & \frac{16384}{243} & \frac{4096}{729}
\end{pmatrix},
\qquad
X = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \end{pmatrix},
\qquad
B = \begin{pmatrix} -1 \\ -2 \\ -4 \\ -8 \\ -16 \\ -32 \end{pmatrix}$$
whose solution is given by
$$X = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \end{pmatrix} = A^{-1} B = \begin{pmatrix} -\frac{3}{4} \\ \frac{9}{8} \\ -\frac{135}{64} \\ \frac{567}{128} \\ -\frac{5103}{512} \\ \frac{24057}{1024} \end{pmatrix}$$
We note that A is a lower triangular matrix, so the solutions are easily obtained by forward substitution once $a_1$ is found from the first equation.
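As an illustration of this forward substitution, the following sketch (not the paper's code; it only assumes the matrix A and right-hand side B written above, in exact rational arithmetic from Python's standard library) recovers the coefficients $a_1, \ldots, a_6$.

```python
# Minimal sketch (not the paper's code): recover a_1,...,a_6 by forward
# substitution from the lower triangular system A X = B above, in exact
# rational arithmetic.
from fractions import Fraction as F

A = [
    [F(4, 3)],
    [F(16, 3), F(16, 9)],
    [F(52, 3), F(112, 9), F(64, 27)],
    [F(152, 3), F(176, 3), F(640, 27), F(256, 81)],
    [F(416, 3), F(688, 3), F(3968, 27), F(3328, 81), F(1024, 243)],
    [F(1088, 3), F(800), F(19456, 27), F(25600, 81), F(16384, 243), F(4096, 729)],
]
B = [F(-1), F(-2), F(-4), F(-8), F(-16), F(-32)]

a = []
for i, row in enumerate(A):
    # a_i = (B_i - already known part of row i) / diagonal entry
    a.append((B[i] - sum(row[j] * a[j] for j in range(i))) / row[i])

print([str(v) for v in a])
# ['-3/4', '9/8', '-135/64', '567/128', '-5103/512', '24057/1024']
```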
In this way, we obtain a family of higher order I.F.s which we term higher order 2-point Babajee's Quadratic Iterative Methods for solving quadratic equations ((r+2)thBQIM).
The first six members of ( r + 2 ) thBQIM’s family in Equation (7) with their error equation are
  • $r = 1$: 2-point 3rdBQIM I.F.
    $H(\tau, 1) = 1 - \frac{3}{4}(\tau - 1)$
    $\psi_{3rdBQIM}(x) - x^* = 2 c_2^2 (e^{(k)})^3 + O\big((e^{(k)})^4\big)$
  • $r = 2$: 2-point 4thBQIM I.F.
    $H(\tau, 2) = 1 - \frac{3}{4}(\tau - 1) + \frac{9}{8}(\tau - 1)^2$
    $\psi_{4thBQIM}(x) - x^* = 5 c_2^3 (e^{(k)})^4 + O\big((e^{(k)})^5\big)$
  • $r = 3$: 2-point 5thBQIM I.F.
    $H(\tau, 3) = 1 - \frac{3}{4}(\tau - 1) + \frac{9}{8}(\tau - 1)^2 - \frac{135}{64}(\tau - 1)^3$
    $\psi_{5thBQIM}(x) - x^* = 14 c_2^4 (e^{(k)})^5 + O\big((e^{(k)})^6\big)$
  • $r = 4$: 2-point 6thBQIM I.F.
    $H(\tau, 4) = 1 - \frac{3}{4}(\tau - 1) + \frac{9}{8}(\tau - 1)^2 - \frac{135}{64}(\tau - 1)^3 + \frac{567}{128}(\tau - 1)^4$
    $\psi_{6thBQIM}(x) - x^* = 42 c_2^5 (e^{(k)})^6 + O\big((e^{(k)})^7\big)$
  • $r = 5$: 2-point 7thBQIM I.F.
    $H(\tau, 5) = 1 - \frac{3}{4}(\tau - 1) + \frac{9}{8}(\tau - 1)^2 - \frac{135}{64}(\tau - 1)^3 + \frac{567}{128}(\tau - 1)^4 - \frac{5103}{512}(\tau - 1)^5$
    $\psi_{7thBQIM}(x) - x^* = 132 c_2^6 (e^{(k)})^7 + O\big((e^{(k)})^8\big)$
  • $r = 6$: 2-point 8thBQIM I.F.
    $H(\tau, 6) = 1 - \frac{3}{4}(\tau - 1) + \frac{9}{8}(\tau - 1)^2 - \frac{135}{64}(\tau - 1)^3 + \frac{567}{128}(\tau - 1)^4 - \frac{5103}{512}(\tau - 1)^5 + \frac{24057}{1024}(\tau - 1)^6$
    $\psi_{8thBQIM}(x) - x^* = 429 c_2^7 (e^{(k)})^8 + O\big((e^{(k)})^9\big)$
We note that the maximum order reached by optimal methods with four function evaluations is eight. We have obtained an eighth order 2-point method with only three function evaluations for solving quadratic equations. This implies that the Kung-Traub conjecture fails for quadratic equations.
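The family is straightforward to implement. The sketch below (Python with the mpmath library for high precision; not the paper's code, and the helper names bqim_step and A_COEFFS are illustrative) applies the 8thBQIM step to $f(x) = x^2 - 2$ using the coefficients $a_1, \ldots, a_6$ listed above, so that the eighth order decay of the error can be observed.

```python
# Minimal sketch (not the paper's code) of the scalar (r+2)thBQIM step, using
# the coefficients a_1,...,a_6 derived earlier. mpmath supplies the high
# precision needed to see the order; names (bqim_step, A_COEFFS) are illustrative.
from mpmath import mp, mpf

mp.dps = 500  # working precision in decimal digits

A_COEFFS = [mpf(-3) / 4, mpf(9) / 8, mpf(-135) / 64,
            mpf(567) / 128, mpf(-5103) / 512, mpf(24057) / 1024]

def bqim_step(f, fprime, x, r):
    """One step of the (r+2)thBQIM I.F.: x - u(x) * H(tau, r)."""
    u = f(x) / fprime(x)
    tau = fprime(x - 2 * u / 3) / fprime(x)
    H = 1 + sum(A_COEFFS[i] * (tau - 1) ** (i + 1) for i in range(r))
    return x - u * H

f = lambda x: x * x - 2
fp = lambda x: 2 * x
x, root = mpf(1), mp.sqrt(2)
for k in range(4):
    x = bqim_step(f, fp, x, r=6)               # 8thBQIM
    print(k + 1, mp.nstr(abs(x - root), 5))    # error drops roughly as e -> e**8
```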

3. Convergence Analysis

Theorem 3. 
Let a sufficiently smooth function $f : D \subset \mathbb{R} \to \mathbb{R}$ have a simple root $x^* \in D$, where D is an open interval. Then the six members of the 2-point (r+2)thBQIM family in Equation (7) ($r = 1, 2, 3, 4, 5, 6$) have local 3rd to 8th order convergence, respectively.
Proof. 
We prove the 3rd order convergence of the 2-point 3rdBQIM I.F. and the 8th order convergence of the 2-point 8thBQIM I.F.
The proofs for the 2-point 4th to 7th order I.F.s follow along similar lines.
It is easy to see that, for a quadratic function,
$$f(x) = f'(x^*)\left(e^{(k)} + c_2 (e^{(k)})^2\right)$$
and
$$f'(x) = f'(x^*)\left(1 + 2 c_2 e^{(k)}\right)$$
By Taylor expansion, and using computer algebra software such as Maple,
$$u(x) = e^{(k)} - c_2 (e^{(k)})^2 + 2 c_2^2 (e^{(k)})^3 - 4 c_2^3 (e^{(k)})^4 + 8 c_2^4 (e^{(k)})^5 - 16 c_2^5 (e^{(k)})^6 + 32 c_2^6 (e^{(k)})^7 - 64 c_2^7 (e^{(k)})^8 + 128 c_2^8 (e^{(k)})^9 + \ldots$$
so that
$$\tau = 1 - \frac{4}{3} c_2 e^{(k)} + 4 c_2^2 (e^{(k)})^2 - \frac{32}{3} c_2^3 (e^{(k)})^3 + \frac{80}{3} c_2^4 (e^{(k)})^4 - 64 c_2^5 (e^{(k)})^5 + \frac{448}{3} c_2^6 (e^{(k)})^6 - \frac{1024}{3} c_2^7 (e^{(k)})^7 + 768 c_2^8 (e^{(k)})^8 + \ldots$$
Now,
$$H(\tau, 1) = 1 + c_2 e^{(k)} - 3 c_2^2 (e^{(k)})^2 + 8 c_2^3 (e^{(k)})^3 + \ldots$$
Using Equations (8) and (10), we have
$$u(x)\, H(\tau, 1) = e^{(k)} - 2 c_2^2 (e^{(k)})^3 + O\big((e^{(k)})^4\big)$$
which leads to the error equation for the 2-point 3rdBQIM I.F.
Similarly,
$$H(\tau, 6) = 1 + c_2 e^{(k)} - c_2^2 (e^{(k)})^2 + c_2^3 (e^{(k)})^3 - c_2^4 (e^{(k)})^4 + c_2^5 (e^{(k)})^5 - c_2^6 (e^{(k)})^6 - 428 c_2^7 (e^{(k)})^7 + \ldots$$
Using Equations (8) and (11), we have
$$u(x)\, H(\tau, 6) = e^{(k)} - 429 c_2^7 (e^{(k)})^8 + O\big((e^{(k)})^9\big)$$
which leads to the error equation for the 2-point 8thBQIM I.F.
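The same expansions can be checked with a computer algebra system. The following sketch (Python/sympy rather than the Maple worksheet used in the paper; symbol names are illustrative) expands $\psi(x) - x^*$ for a generic quadratic written as $f(x) = f'(x^*)(e + c_2 e^2)$ and prints the asymptotic error constants of the six members.

```python
# Minimal sketch (Python/sympy instead of the Maple worksheet used in the
# paper): expand psi(x) - x* for a generic quadratic f(x) = f'(x*)(e + c2*e**2)
# and print the asymptotic error constants of the six members.
import sympy as sp

e, c2 = sp.symbols('e c2')
a = [sp.Rational(-3, 4), sp.Rational(9, 8), sp.Rational(-135, 64),
     sp.Rational(567, 128), sp.Rational(-5103, 512), sp.Rational(24057, 1024)]

f = e + c2 * e**2                 # f(x) / f'(x*)
fp = 1 + 2 * c2 * e               # f'(x) / f'(x*)
u = f / fp
tau = (1 + 2 * c2 * (e - sp.Rational(2, 3) * u)) / fp    # f'(y)/f'(x), y = x - 2u/3

for r in range(1, 7):
    H = 1 + sum(a[i] * (tau - 1)**(i + 1) for i in range(r))
    err = sp.series(e - u * H, e, 0, r + 3).removeO()
    print(r + 2, sp.simplify(err / (c2**(r + 1) * e**(r + 2))))
# expected output: orders 3..8 with constants 2, 5, 14, 42, 132, 429
```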
We next prove the local convergence of the 2-point ( r + 2 ) thBQIM’s family for any r.
Theorem 4. 
Let a sufficiently smooth function $f : D \subset \mathbb{R} \to \mathbb{R}$ have a simple root $x^* \in D$, where D is an open interval. Then the members of the 2-point (r+2)thBQIM family in Equation (7) have local (r+2)th order convergence.
Proof. 
We prove this result by induction.
The case r = 1 corresponds to the 3rdBQIM I.F.
Assume the 2-point (r+2)thBQIM family has order of convergence (r+2). Then it satisfies the error equation
$$\psi_{(r+2)thBQIM}(x) - x^* = C_r\, c_2^{r+1} (e^{(k)})^{r+2} + O\big((e^{(k)})^{r+3}\big)$$
where $C_r$ is the asymptotic error constant.
Assume that Equation (12) holds for r = m .
Now from Equation (9), we have
$$\tau - 1 = -\frac{4}{3} c_2 e^{(k)} \left(1 - 3 c_2 e^{(k)} + 8 c_2^2 (e^{(k)})^2 + \ldots\right)$$
so that
$$(\tau - 1)^{m+1} = \left(-\frac{4}{3}\right)^{m+1} c_2^{m+1} (e^{(k)})^{m+1} \left(1 - 3 c_2 e^{(k)} + 8 c_2^2 (e^{(k)})^2 + \ldots\right)^{m+1} = \left(-\frac{4}{3}\right)^{m+1} c_2^{m+1} (e^{(k)})^{m+1} \left(1 + O(e^{(k)})\right)$$
For the case r = m + 1 ,
$$\begin{aligned}
\psi_{(m+3)thBQIM}(x) - x^* &= x - u(x)\, H(\tau, m+1) - x^* \\
&= x - u(x)\, H(\tau, m) - x^* - a_{m+1}\, u(x)\, (\tau - 1)^{m+1} \\
&= \psi_{(m+2)thBQIM}(x) - x^* - a_{m+1}\, u(x)\, (\tau - 1)^{m+1} \\
&= C_m c_2^{m+1} (e^{(k)})^{m+2} - a_{m+1} \left(-\frac{4}{3}\right)^{m+1} c_2^{m+1} (e^{(k)})^{m+2} + O\big((e^{(k)})^{m+3}\big) \quad \text{using Equations (8), (12) and (13)} \\
&= \left(C_m - a_{m+1}\left(-\frac{4}{3}\right)^{m+1}\right) c_2^{m+1} (e^{(k)})^{m+2} + O\big((e^{(k)})^{m+3}\big)
\end{aligned}$$
which shows that the 2-point (m+3)thBQIM family has (m+3)th order of convergence if we choose
$$a_{m+1} = C_m \left(-\frac{3}{4}\right)^{m+1}$$
From Equation (14), we can obtain a higher order I.F. if we know the asymptotic error constant of the previous I.F.
For example, for the 2-point 3rdBQIM I.F., $C_1 = 2$ and, from Equation (14),
$$a_2 = C_1 \left(-\frac{3}{4}\right)^2 = \frac{9}{8}$$
and we can obtain the 4thBQIM I.F.
Similarly, for the 2-point 8thBQIM I.F., $C_6 = 429$ and, from Equation (14),
$$a_7 = C_6 \left(-\frac{3}{4}\right)^7 = -\frac{938223}{16384}$$
and we can obtain the 2-point 9thBQIM I.F. with
$$H(\tau, 7) = 1 - \frac{3}{4}(\tau - 1) + \frac{9}{8}(\tau - 1)^2 - \frac{135}{64}(\tau - 1)^3 + \frac{567}{128}(\tau - 1)^4 - \frac{5103}{512}(\tau - 1)^5 + \frac{24057}{1024}(\tau - 1)^6 - \frac{938223}{16384}(\tau - 1)^7$$
From Theorem 4, we conclude that we can have a family of order r + 2 , r = 1 , 2 , . . . with only 3 function evaluations.
The Efficiency Index of the 2-point ( r + 2 ) thBQIM family is given by
$$EI = (r + 2)^{1/3}, \qquad r \geq 1$$
In the following section, we extend our methods to systems of equations.

4. Extension to Systems of Equations

Consider the system of nonlinear equations $f(x) = 0$, where $f(x) = (f_1(x), f_2(x), \ldots, f_n(x))^T$, $x = (x_1, x_2, \ldots, x_n)^T$, and $f_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, 2, \ldots, n$, is defined as
$$f_i(x) = b_i + \sum_{l=1}^{n} \sum_{m=1}^{n} b_{l,m}\, x_l x_m, \qquad b_i,\ b_{l,m},\ i, l, m = 1, \ldots, n, \text{ constants,}$$
and $f : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ is a smooth map on an open and convex set D. We assume that $x^* = (x_1^*, x_2^*, \ldots, x_n^*)^T$ is a zero of the system and $x^{(0)} = (x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)})^T$ is an initial guess sufficiently close to $x^*$.
We define the 2-point ( r + 2 ) thBQIM’s family for systems of quadratic equations as:
$$\psi_{(r+2)thBQIM}(x) = x - H(\tau(x), r)\, u(x)$$
where
$$u(x) = f'(x)^{-1} f(x), \qquad y(x) = x - \tfrac{2}{3} u(x), \qquad \tau(x) = f'(x)^{-1} f'(y(x)),$$
$$H(\tau(x), r) = I + \sum_{i=1}^{r} a_i \big(\tau(x) - I\big)^i, \qquad I \text{ the identity matrix.}$$
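A minimal sketch of one step of this family for systems is given below (Python/numpy, not the paper's code; the explicit inverses are replaced by linear solves, and the function names are illustrative).

```python
# Minimal sketch (not the paper's code): one (r+2)thBQIM step for a quadratic
# system, with the inverses replaced by linear solves. Names are illustrative.
import numpy as np

A_COEFFS = [-3/4, 9/8, -135/64, 567/128, -5103/512, 24057/1024]

def bqim_step_system(f, jac, x, r):
    """x - H(tau(x), r) u(x), with u = J(x)^(-1) f(x) and tau = J(x)^(-1) J(y)."""
    J = jac(x)
    u = np.linalg.solve(J, f(x))
    tau = np.linalg.solve(J, jac(x - 2.0 * u / 3.0))   # y = x - 2u/3
    I = np.eye(len(x))
    H, T = I.copy(), I.copy()
    for i in range(r):
        T = T @ (tau - I)          # (tau - I)^(i+1)
        H = H + A_COEFFS[i] * T
    return x - H @ u
```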
Let us define
$$c_2 = \frac{1}{2}\, [f'(x^*)]^{-1} f''(x^*), \qquad e^{(k)} = x^{(k)} - x^*$$
Using the notation in [11], it is noted that $c_2 e^{(k)} \in \mathcal{L}(\mathbb{R}^n)$.
The error at the (k+1)th iteration, $e^{(k+1)} = L\, (e^{(k)})^p + O\big((e^{(k)})^{p+1}\big)$, where L is a p-linear function, $L \in \mathcal{L}(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$, is called the error equation and p is the order of convergence.
Observe that $(e^{(k)})^p$ stands for $(e^{(k)}, e^{(k)}, \ldots, e^{(k)})$.
The first six members of ( r + 2 ) thBQIM’s family in Equation (16) with their error equation are
  • $r = 1$: 2-point 3rdBQIM I.F.
    $H(\tau(x), 1) = I - \frac{3}{4}(\tau(x) - I)$
    $\psi_{3rdBQIM}(x) - x^* = 2 c_2^2 (e^{(k)})^3 + O\big((e^{(k)})^4\big)$
  • $r = 2$: 2-point 4thBQIM I.F.
    $H(\tau(x), 2) = I - \frac{3}{4}(\tau(x) - I) + \frac{9}{8}(\tau(x) - I)^2$
    $\psi_{4thBQIM}(x) - x^* = 5 c_2^3 (e^{(k)})^4 + O\big((e^{(k)})^5\big)$
  • $r = 3$: 2-point 5thBQIM I.F.
    $H(\tau(x), 3) = I - \frac{3}{4}(\tau(x) - I) + \frac{9}{8}(\tau(x) - I)^2 - \frac{135}{64}(\tau(x) - I)^3$
    $\psi_{5thBQIM}(x) - x^* = 14 c_2^4 (e^{(k)})^5 + O\big((e^{(k)})^6\big)$
  • $r = 4$: 2-point 6thBQIM I.F.
    $H(\tau(x), 4) = I - \frac{3}{4}(\tau(x) - I) + \frac{9}{8}(\tau(x) - I)^2 - \frac{135}{64}(\tau(x) - I)^3 + \frac{567}{128}(\tau(x) - I)^4$
    $\psi_{6thBQIM}(x) - x^* = 42 c_2^5 (e^{(k)})^6 + O\big((e^{(k)})^7\big)$
  • $r = 5$: 2-point 7thBQIM I.F.
    $H(\tau(x), 5) = I - \frac{3}{4}(\tau(x) - I) + \frac{9}{8}(\tau(x) - I)^2 - \frac{135}{64}(\tau(x) - I)^3 + \frac{567}{128}(\tau(x) - I)^4 - \frac{5103}{512}(\tau(x) - I)^5$
    $\psi_{7thBQIM}(x) - x^* = 132 c_2^6 (e^{(k)})^7 + O\big((e^{(k)})^8\big)$
  • $r = 6$: 2-point 8thBQIM I.F.
    $H(\tau(x), 6) = I - \frac{3}{4}(\tau(x) - I) + \frac{9}{8}(\tau(x) - I)^2 - \frac{135}{64}(\tau(x) - I)^3 + \frac{567}{128}(\tau(x) - I)^4 - \frac{5103}{512}(\tau(x) - I)^5 + \frac{24057}{1024}(\tau(x) - I)^6$
    $\psi_{8thBQIM}(x) - x^* = 429 c_2^7 (e^{(k)})^8 + O\big((e^{(k)})^9\big)$

4.1. Convergence Analysis

Theorem 5. 
Let $f : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be twice Fréchet differentiable at each point of an open convex neighborhood D of $x^* \in \mathbb{R}^n$, a solution of the quadratic system $f(x) = 0$. Suppose that $f'(x)$ is continuous and nonsingular at $x^*$, and that $x^{(0)}$ is close enough to $x^*$. Then the sequence $\{x^{(k)}\}_{k \geq 0}$ obtained using the iterative expression in Equation (16), $r = 1, 2, \ldots, 6$, converges to $x^*$ with order 3 to 8, respectively.
Proof. 
We prove the case $r = 6$; the other cases follow along similar lines. Since f is a quadratic function of several variables, we have
$$f(x^{(k)}) = f'(x^*)\left(e^{(k)} + c_2 (e^{(k)})^2\right)$$
and
$$f'(x^{(k)}) = f'(x^*)\left(I + 2 c_2 e^{(k)}\right)$$
$$f'(x^{(k)})^{-1} = \left[I - 2 c_2 e^{(k)} + 4 c_2^2 (e^{(k)})^2 - 8 c_2^3 (e^{(k)})^3 + 16 c_2^4 (e^{(k)})^4 - 32 c_2^5 (e^{(k)})^5 + 64 c_2^6 (e^{(k)})^6 - 128 c_2^7 (e^{(k)})^7 + 256 c_2^8 (e^{(k)})^8 + \ldots\right] f'(x^*)^{-1}$$
Using Equations (17) and (19), we have
$$u(x^{(k)}) = e^{(k)} - c_2 (e^{(k)})^2 + 2 c_2^2 (e^{(k)})^3 - 4 c_2^3 (e^{(k)})^4 + 8 c_2^4 (e^{(k)})^5 - 16 c_2^5 (e^{(k)})^6 + 32 c_2^6 (e^{(k)})^7 - 64 c_2^7 (e^{(k)})^8 + \ldots$$
and the expression for y ( x ( k ) ) is given by
$$y(x^{(k)}) = x^* + \frac{1}{3} e^{(k)} + \frac{2}{3} c_2 (e^{(k)})^2 - \frac{4}{3} c_2^2 (e^{(k)})^3 + \frac{8}{3} c_2^3 (e^{(k)})^4 - \frac{16}{3} c_2^4 (e^{(k)})^5 + \frac{32}{3} c_2^5 (e^{(k)})^6 - \frac{64}{3} c_2^6 (e^{(k)})^7 + \frac{128}{3} c_2^7 (e^{(k)})^8 + \ldots$$
The Taylor expansion of Jacobian matrix f ( y ( x ( k ) ) ) is then given by
$$f'(y(x^{(k)})) = f'(x^*)\left(I + 2 c_2 \big(y(x^{(k)}) - x^*\big)\right) = f'(x^*)\left[I + \frac{2}{3} c_2 e^{(k)} + \frac{4}{3} c_2^2 (e^{(k)})^2 - \frac{8}{3} c_2^3 (e^{(k)})^3 + \frac{16}{3} c_2^4 (e^{(k)})^4 - \frac{32}{3} c_2^5 (e^{(k)})^5 + \frac{64}{3} c_2^6 (e^{(k)})^6 - \frac{128}{3} c_2^7 (e^{(k)})^7 + \frac{256}{3} c_2^8 (e^{(k)})^8 + \ldots\right]$$
Therefore, using Equation (19), we obtain
$$\tau(x^{(k)}) = f'(x^{(k)})^{-1} f'(y(x^{(k)})) = I - \frac{4}{3} c_2 e^{(k)} + 4 c_2^2 (e^{(k)})^2 - \frac{32}{3} c_2^3 (e^{(k)})^3 + \frac{80}{3} c_2^4 (e^{(k)})^4 - 64 c_2^5 (e^{(k)})^5 + \frac{448}{3} c_2^6 (e^{(k)})^6 - \frac{1024}{3} c_2^7 (e^{(k)})^7 + 768 c_2^8 (e^{(k)})^8 + \ldots$$
so that
$$H(\tau(x), 6) = I + c_2 e^{(k)} - c_2^2 (e^{(k)})^2 + c_2^3 (e^{(k)})^3 - c_2^4 (e^{(k)})^4 + c_2^5 (e^{(k)})^5 - c_2^6 (e^{(k)})^6 - 428 c_2^7 (e^{(k)})^7 + \ldots$$
Using Equations (20) and (21), we have, after simplifications,
$$H(\tau(x), 6)\, u(x^{(k)}) = e^{(k)} - 429 c_2^7 (e^{(k)})^8 + \ldots$$
and, thus,
$$x - H(\tau(x), 6)\, u(x^{(k)}) = x^* + e^{(k)} - \left(e^{(k)} - 429 c_2^7 (e^{(k)})^8 + \ldots\right) = x^* + 429 c_2^7 (e^{(k)})^8 + \ldots$$
Theorem 6. 
Let $f : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be twice Fréchet differentiable at each point of an open convex neighborhood D of $x^* \in \mathbb{R}^n$, a solution of the quadratic system $f(x) = 0$. Suppose that $f'(x)$ is continuous and nonsingular at $x^*$, and that $x^{(0)}$ is close enough to $x^*$. Then the sequence $\{x^{(k)}\}_{k \geq 0}$ obtained using the iterative expression in Equation (16), $r = 1, 2, \ldots$, converges to $x^*$ with order $r + 2$, with the error equation
$$\psi_{(r+2)thBQIM}(x) - x^* = C_r\, c_2^{r+1} (e^{(k)})^{r+2} + \ldots$$
The proof is by induction and follows along similar lines.
As in the case of scalar equations, we can obtain a higher order I.F. for systems if we know the asymptotic error constant of the previous I.F., using
$$a_{r+1} = C_r \left(-\frac{3}{4}\right)^{r+1}, \qquad r = 1, 2, \ldots$$

5. Numerical Experiments

5.1. Scalar Equation

We consider Test Problem 1 (TP1), finding the positive zero of the quadratic function $f(x) = x^2 - 2$, to compare the efficiency of the proposed methods. Numerical computations have been carried out in MATLAB, rounding to 1000 significant digits. Depending on the precision of the computer, we use the stopping criterion $|x^{(k+1)} - x^{(k)}| < \epsilon$ for the iterative process, where $\epsilon = 10^{-50}$. Let N be the number of iterations required for convergence. For simplicity, we denote $X\mathrm{e}Y = X \times 10^{Y}$.
The computational order of convergence is given by
$$\rho = \frac{\ln\left|\big(x^{(N)} - x^{(N-1)}\big) / \big(x^{(N-1)} - x^{(N-2)}\big)\right|}{\ln\left|\big(x^{(N-1)} - x^{(N-2)}\big) / \big(x^{(N-2)} - x^{(N-3)}\big)\right|}$$
We choose $x^{(0)} = 1$. The results in Table 1 show that, as the order of the (r+2)thBQIM I.F. ($r = 1, 2, 3, 4, 5, 6$) increases, the methods converge in fewer iterations. The computational order of convergence agrees with the theoretical order of convergence, confirming that Kung-Traub's conjecture fails for quadratic functions.
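For illustration, the sketch below (Python with mpmath; not the paper's MATLAB script, and the working precision is illustrative) runs TP1 with the 3rdBQIM I.F. under the stopping criterion above and evaluates ρ from the last four iterates.

```python
# Minimal sketch (not the paper's MATLAB script): TP1 with the 3rdBQIM I.F.
# under the stopping rule |x(k+1) - x(k)| < 1e-50, followed by the
# computational order rho from the last four iterates.
from mpmath import mp, mpf, log, fabs

mp.dps = 200

def bqim3_step(x):
    f, fp = x * x - 2, 2 * x
    u = f / fp
    tau = (x - 2 * u / 3) / x          # f'(y)/f'(x) for f(x) = x**2 - 2
    return x - u * (1 - mpf(3) / 4 * (tau - 1))

xs = [mpf(1)]
while len(xs) < 2 or fabs(xs[-1] - xs[-2]) >= mpf('1e-50'):
    xs.append(bqim3_step(xs[-1]))

d = [fabs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
rho = log(d[-1] / d[-2]) / log(d[-2] / d[-3])
print(len(xs) - 1, mp.nstr(rho, 4))    # N = 5 iterations, rho close to 3
```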
Table 1. Results of the quadratic function $f(x) = x^2 - 2$ for the 3rdBQIM, 4thBQIM, 5thBQIM, 6thBQIM, 7thBQIM and 8thBQIM I.F.s.

Error          3rdBQIM    4thBQIM     5thBQIM    6thBQIM     7thBQIM     8thBQIM
|x_1 - x_0|    5.6e-1     5.8e-1      5.8e-1     5.8e-1      5.9e-1      5.9e-1
|x_2 - x_1|    2.3e-2     7.7e-3      2.8e-3     1.1e-3      4.3e-4      1.8e-4
|x_3 - x_2|    3.0e-6     7.5e-10     3.6e-14    3.5e-19     6.9e-25     2.9e-31
|x_4 - x_3|    7.0e-18    6.9e-38     1.3e-68    4.0e-112    1.8e-170    1.3e-245
|x_5 - x_4|    8.7e-53    4.9e-150    -          -           -           -
|x_6 - x_5|    -          -           -          -           -           -
ρ              3          4           5          6           7           8

5.2. Dynamic Behaviour in the Complex Plane

Consider Test Problem 2 (TP2), based on the quadratic function $f(z) = z^2 - 1$, where z is a complex number. We let $z_1^* = 1$ and $z_2^* = -1$, the two roots of $f(z) = z^2 - 1$. We study the dynamic behaviour of the higher order (r+2)thBQIM I.F.s ($r = 1, 2, 3, 4, 5, 6$). We take a square region $[-2, 2] \times [-2, 2]$ of $256 \times 256$ points and apply our iterative methods starting at every $z^{(0)}$ in the square. If the sequence generated by the iterative method reaches a zero $z_j^*$ of the polynomial with tolerance $|f(z^{(k)})| < 10^{-4}$ within a maximum of 100 iterations, we decide that $z^{(0)}$ is in the basin of attraction of this zero.
If the iterative method starting at $z^{(0)}$ reaches a zero in N iterations ($N \leq 100$), then we mark the point $z^{(0)}$ in blue if $|z^{(N)} - z_1^*| < 10^{-4}$ or in green if $|z^{(N)} - z_2^*| < 10^{-4}$. If $N > 100$, we conclude that the starting point has diverged and we assign a dark blue colour. Let $N_D$ be the number of diverging points. We also count the number of starting points which converge in 1, 2, 3, 4, 5 or more than 5 iterations.
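A rough reproduction of this experiment is sketched below (Python, not the paper's code; it only counts iterations per starting point rather than producing the coloured polynomiographs).

```python
# Minimal sketch (not the paper's code): iterate the 8thBQIM step on
# f(z) = z**2 - 1 over a 256 x 256 grid of starting points in [-2,2] x [-2,2]
# and count how many iterations each start needs (tolerance |f(z)| < 1e-4,
# at most 100 iterations). The pure-Python loop is slow but simple.
import numpy as np

A_COEFFS = [-3/4, 9/8, -135/64, 567/128, -5103/512, 24057/1024]

def bqim_step(z, r):
    u = (z * z - 1) / (2 * z)
    tau = (z - 2 * u / 3) / z          # f'(y)/f'(z) for f(z) = z**2 - 1
    return z - u * (1 + sum(A_COEFFS[i] * (tau - 1)**(i + 1) for i in range(r)))

side = np.linspace(-2.0, 2.0, 256)
counts = {}
for x0 in side:
    for y0 in side:
        z, n = complex(x0, y0), 0
        while abs(z * z - 1) >= 1e-4 and n < 100:
            z = bqim_step(z, r=6)
            n += 1
        counts[n] = counts.get(n, 0) + 1

print(sorted(counts.items()))   # iteration counts comparable to Table 2
```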
Table 2 shows that all six methods are globally convergent and that, as the order of the method increases, the number of starting points converging to a root in 1 or 2 iterations increases. This is the advantage of higher order methods.
Table 2. Results of the quadratic function $f(z) = z^2 - 1$ for the 3rdBQIM, 4thBQIM, 5thBQIM, 6thBQIM, 7thBQIM and 8thBQIM I.F.s.

I.F.       N = 1   N = 2    N = 3    N = 4    N = 5   N > 5   N_D
3rdBQIM    56      6536     28,736   16,240   5428    8540    0
4thBQIM    232     16,908   27,532   7780     3700    9564    0
5thBQIM    528     23,348   23,196   5928     3340    9196    0
6thBQIM    928     27,880   19,680   5272     3072    8704    0
7thBQIM    1392    31,304   16,736   4856     2864    8394    0
8thBQIM    1892    33,924   14,220   4564     2788    8184    0
Figure 1. Polynomiographs of the 3rdBQIM, 4thBQIM and 5thBQIM I.F.s for $f(z) = z^2 - 1$. (a) 3rdBQIM; (b) 4thBQIM; (c) 5thBQIM.
Bahman Kalantari coined the term "polynomiography" for the art and science of visualization in the approximation of roots of polynomials using I.F.s [12]. Figure 1 and Figure 2 show the polynomiographs of the six methods. It can be observed that, as the order of the method increases, the methods behave more chaotically (the size of the "petals" becomes larger).
Figure 2. Polynomiographs of the 6thBQIM, 7thBQIM and 8thBQIM I.F.s for $f(z) = z^2 - 1$. (a) 6thBQIM; (b) 7thBQIM; (c) 8thBQIM.

5.3. Systems of Quadratic Equations

For our numerical experiments in this section, the approximate solutions are calculated correct to 1000 digits by using variable precision arithmetic in MATLAB. We use the following stopping criterion for the numerical scheme:
$$\|x^{(k+1)} - x^{(k)}\|_2 < 10^{-50}$$
For a system of equations, we used the approximated computational order of convergence p c given by (see [13])
$$p_c \approx \frac{\log\left(\|x^{(k+1)} - x^{(k)}\|_2 \,/\, \|x^{(k)} - x^{(k-1)}\|_2\right)}{\log\left(\|x^{(k)} - x^{(k-1)}\|_2 \,/\, \|x^{(k-1)} - x^{(k-2)}\|_2\right)}$$
We consider the Test Problem 3 (TP3) which is a system of 2 equations:
$$x_1^2 + x_2^2 - 7 = 0, \qquad x_1 - x_2 + 1 = 0$$
Using the substitution method, Equation (25) reduces to the quadratic equation $x_2^2 - x_2 - 3 = 0$, whose positive root is $x_2^* = \frac{1 + \sqrt{13}}{2} = 2.302775638\ldots$ Therefore $x_1^* = x_2^* - 1 = \frac{\sqrt{13} - 1}{2} = 1.302775638\ldots$
We use $x^{(0)} = (1, 2)^T$ as the starting vector and apply Equation (16), $r = 1, 2, \ldots, 6$, to calculate the approximate solutions of Equation (25).
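For illustration, the following sketch (Python/numpy in double precision, not the paper's variable-precision MATLAB code; it repeats the step construction from Section 4 so that the snippet is self-contained) applies the 8thBQIM I.F. to TP3 from this starting vector.

```python
# Minimal sketch (not the paper's variable-precision MATLAB code): TP3 solved
# with the 8thBQIM system step in double precision, so the error only shrinks
# down to machine accuracy. The step repeats the construction from Section 4.
import numpy as np

A_COEFFS = [-3/4, 9/8, -135/64, 567/128, -5103/512, 24057/1024]

def f(x):
    return np.array([x[0]**2 + x[1]**2 - 7.0, x[0] - x[1] + 1.0])

def jac(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

def bqim_step(x, r=6):
    J = jac(x)
    u = np.linalg.solve(J, f(x))
    tau = np.linalg.solve(J, jac(x - 2.0 * u / 3.0))
    I = np.eye(len(x))
    H, T = I.copy(), I.copy()
    for i in range(r):
        T = T @ (tau - I)
        H = H + A_COEFFS[i] * T
    return x - H @ u

x = np.array([1.0, 2.0])
for k in range(4):
    x = bqim_step(x)
    print(k + 1, x)   # tends to ((sqrt(13) - 1)/2, (1 + sqrt(13))/2)
```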
Table 3. Results of TP3 for the 3rdBQIM, 4thBQIM, 5thBQIM, 6thBQIM, 7thBQIM and 8thBQIM I.F.s.

Error                    3rdBQIM    4thBQIM     5thBQIM    6thBQIM     7thBQIM     8thBQIM
||x^(1) - x^(0)||_2      4.2e-1     4.3e-1      4.3e-1     4.3e-1      4.3e-1      4.3e-1
||x^(2) - x^(1)||_2      9.2e-3     2.5e-3      7.6e-4     2.5e-4      8.6e-5      3.1e-5
||x^(3) - x^(2)||_2      6.0e-8     1.4e-12     5.1e-18    2.9e-24     2.7e-31     4.0e-39
||x^(4) - x^(3)||_2      1.6e-23    1.5e-49     7.5e-89    7.4e-144    7.0e-217    3.0e-310
||x^(5) - x^(4)||_2      3.4e-70    1.9e-197    -          -           -           -
p_c                      3          4           5          6           7           8
Table 3 shows that, as the order of the methods increases, the methods converge in fewer iterations (4 iterations) and with a smaller error. As in the case of scalar equations, the computational order of convergence for this system of 2 equations agrees with the theoretical one.
We next consider the Test Problem 4 (TP4) [14]
$$x_1^2 + x_2^2 - 1 = 0, \qquad x_1^2 - x_2^2 + 0.5 = 0$$
Using the elimination method, Equation (26) reduces to the simple quadratic equation $2 x_2^2 - 1.5 = 0$, whose positive root is $x_2^* = \frac{\sqrt{3}}{2} = 0.866025403\ldots$, and therefore $x_1^* = \frac{1}{2}$.
Using $x^{(0)} = (2, 3)^T$ as a starting vector far from the root, we apply the methods in Equation (16), $r = 1, 2, \ldots, 6$, to find the numerical solutions of Equation (26).
Table 4. Results of TP4 for the 3rdBQIM, 4thBQIM, 5thBQIM, 6thBQIM, 7thBQIM and 8thBQIM I.F.s.

Error                    3rdBQIM     4thBQIM     5thBQIM    6thBQIM     7thBQIM     8thBQIM
||x^(1) - x^(0)||_2      2.0e0       2.2e0       2.3e0      2.4e0       2.4e0       2.5e0
||x^(2) - x^(1)||_2      5.3e-1      3.8e-1      2.8e-1     2.2e-1      1.7e-1      1.4e-1
||x^(3) - x^(2)||_2      3.5e-2      5.0e-3      6.3e-4     6.8e-5      6.2e-6      4.5e-7
||x^(4) - x^(3)||_2      3.1e-5      1.5e-9      9.2e-16    3.5e-24     4.1e-35     6.9e-49
||x^(5) - x^(4)||_2      5.2e-14     2.3e-35     9.0e-75    8.3e-140    2.5e-239    0
||x^(6) - x^(5)||_2      2.7e-40     1.3e-138    -          -           -           -
||x^(7) - x^(6)||_2      4.2e-119    -           -          -           -           -
p_c                      3.00        4.00        4.98       6.00        7.00        7.63
In Table 4, with the starting vector distant from the root, we observe that the methods take more iterations to converge. From the third iteration onward, the iterates of the methods are close to the root and they converge to it at their respective rates of convergence.
We next consider the Test Problem 5 (TP5) which is a system of 4 equations [15].
$$x_2 x_3 + x_4 (x_2 + x_3) = 0, \qquad x_1 x_3 + x_4 (x_1 + x_3) = 0, \qquad x_1 x_2 + x_4 (x_1 + x_2) = 0, \qquad x_1 x_2 + x_1 x_3 + x_2 x_3 = 1$$
Using the substitution method, Equation (27) reduces to the simple quadratic equation $3 x_1^2 - 1 = 0$, whose positive root is $x_1^* = \frac{1}{\sqrt{3}} = 0.577350269\ldots$ Therefore $x_2^* = x_3^* = x_1^* = \frac{1}{\sqrt{3}} = 0.577350269\ldots$ and $x_4^* = -\frac{x_1^*}{2} = -\frac{1}{2\sqrt{3}} = -0.288675134\ldots$
Using $x^{(0)} = (0.5, 0.5, 0.5, 0.25)^T$ as the starting vector, we apply Equation (16), $r = 1, 2, \ldots, 6$, to find the numerical solutions of Equation (27).
From Table 5, we deduce that similar observations on the computational order of convergence can be made for this system of four equations.
Table 5. Results of TP5 for the 3rdBQIM, 4thBQIM, 5thBQIM, 6thBQIM, 7thBQIM and 8thBQIM I.F.s.

Error                    3rdBQIM    4thBQIM    5thBQIM     6thBQIM     7thBQIM     8thBQIM
||x^(1) - x^(0)||_2      1.4e-1     1.4e-1     1.4e-1      1.4e-1      1.4e-1      1.4e-1
||x^(2) - x^(1)||_2      1.7e-3     3.5e-4     8.1e-5      2.0e-5      5.2e-6      1.4e-6
||x^(3) - x^(2)||_2      2.4e-9     8.6e-15    2.6e-21     7.1e-29     1.7e-37     3.9e-47
||x^(4) - x^(3)||_2      6.5e-27    3.1e-57    9.7e-104    1.4e-169    7.9e-258    0
||x^(5) - x^(4)||_2      1.3e-79    -          -           -           -           -
p_c                      3          4          5           6           7           8.1

5.4. Application

As an application, we consider the quadratic integral equation of the type:
$$x(s) = g(s) + \lambda\, x(s) \int_0^1 K(s, t)\, x(t)\, dt$$
Equation (28) appears in [16] and is known as Chandrasekhar's integral equation. It arises in the study of radiative transfer theory, the transport of neutrons and the kinetic theory of gases. It is studied in [17] and, under certain conditions on the kernel, in [18,19].
We define the kernel K(s, t) as a continuous function in $s, t \in [0, 1]$ such that $0 < K(s, t) < 1$ and $K(s, t) + K(t, s) = 1$. Moreover, we assume that $g(s) \in C[0, 1]$ is a given function and λ is a real constant. The solution of Equation (28) is equivalent to solving the equation $F(x) = 0$, where $F : C[0, 1] \to C[0, 1]$ and
$$F(x)(s) = x(s) - g(s) - \lambda\, x(s) \int_0^1 K(s, t)\, x(t)\, dt, \qquad x \in C[0, 1],\ s \in [0, 1]$$
We choose $g(s) = 1$ and $K(s, t) = \frac{s}{s + t}$, so that we are required to solve the following equation:
$$F(x)(s) = x(s) - 1 - \lambda\, x(s) \int_0^1 \frac{s}{s + t}\, x(t)\, dt, \qquad x \in C[0, 1],\ s \in [0, 1]$$
If we discretize the integral in Equation (29) using the mid-point integration rule with n grid points,
$$\int_0^1 \frac{s}{s + t}\, x(t)\, dt = \frac{1}{n} \sum_{j=1}^{n} \frac{t_j}{t_i + t_j}\, x_j, \qquad x_j = x(t_j),\quad t_j = (j - 0.5) h,\quad h = \frac{1}{n},\quad 1 \leq j \leq n,$$
we obtain the resulting system of non-linear equations:
$$f_i(x) = x_i - 1 - \frac{\lambda\, x_i}{n} \sum_{j=1}^{n} \frac{t_j}{t_i + t_j}\, x_j, \qquad 1 \leq i \leq n$$
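The assembly of this discretized system and of its Jacobian can be sketched as follows (Python/numpy, not the paper's code; the chosen λ is illustrative, and a few Newton steps are run only as a sanity check of the assembly, not as a substitute for the BQIM iterations reported below).

```python
# Minimal sketch (not the paper's code): assemble the discretized Chandrasekhar
# system and its analytic Jacobian. lam = 0.25 is an illustrative value; a few
# Newton steps are run only as a sanity check of the assembly.
import numpy as np

def chandrasekhar_system(lam, n):
    t = (np.arange(1, n + 1) - 0.5) / n           # t_j = (j - 0.5) * h, h = 1/n
    K = t[None, :] / (t[:, None] + t[None, :])    # K[i, j] = t_j / (t_i + t_j)

    def f(x):
        return x - 1.0 - (lam / n) * x * (K @ x)

    def jac(x):
        # d f_i / d x_k = delta_ik - (lam/n) * (delta_ik * (K x)_i + x_i * K[i, k])
        return np.eye(n) - (lam / n) * (np.diag(K @ x) + x[:, None] * K)

    return f, jac

lam, n = 0.25, 100
f, jac = chandrasekhar_system(lam, n)
x = np.ones(n)
for _ in range(5):
    x = x - np.linalg.solve(jac(x), f(x))
print(np.max(np.abs(f(x))))    # residual near machine precision
```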
The values of λ are equally spaced with $\Delta\lambda = 0.01$ in the interval $\lambda \in (0, 0.5)$. We choose $n = 100$ and $(1, 1, \ldots, 1)^T$ as the starting vector. In this case, for each λ, we let $M_\lambda$ be the minimum number of iterations for which the infinity norm of the difference between successive approximations satisfies $\|x^{(k+1)} - x^{(k)}\|_\infty < 10^{-13}$, where the approximation $x^{(k)}$ is calculated correct to 16 digits (double precision in MATLAB). Let $\overline{M_\lambda}$ be the mean iteration number over the 49 values of λ.
All methods converge for all 49 values of λ. The results are given in Table 6, which shows that all methods converge in at most five iterations. The 8thBQIM I.F. has the greatest number of λ values converging in two or three iterations and the smallest mean iteration number. We also observe that there is only a small difference in the mean iteration number between the 7thBQIM and 8thBQIM I.F.s, so developing 9th or higher order I.F.s would not be necessary for this application.
Table 6. Results of Chandrasekhar's integral equation for the 3rdBQIM, 4thBQIM, 5thBQIM, 6thBQIM, 7thBQIM and 8thBQIM I.F.s.

Method     M = 2   M = 3   M = 4   M = 5   M > 5   Mean M_λ
3rdBQIM    0       21      23      5       0       3.67
4thBQIM    1       34      13      1       0       3.29
5thBQIM    3       38      8       0       0       3.10
6thBQIM    3       40      6       0       0       3.06
7thBQIM    3       41      5       0       0       3.04
8thBQIM    3       42      4       0       0       3.02

6. Conclusions and Future Work

In this work, we have shown that Kung-Traub's conjecture fails for quadratic functions; that is, we can obtain iterative methods for solving quadratic equations with three function evaluations reaching an order of convergence greater than four. Furthermore, using weight functions, we showed that it is possible to develop methods with three function evaluations of any order. These methods were extended to systems involving quadratic equations. We developed 3rd to 8th order methods and applied them in some numerical experiments, including an application to Chandrasekhar's integral equation. The dynamic behaviour of the methods was also studied. This research opens the door to new avenues. For example, for solving quadratic equations numerically, we could improve the order of the fourth order method with two function and one first derivative evaluations (Ostrowski's method [8]), or of the fourth order derivative-free method with three function evaluations (higher order Steffensen's method, see [20]). The question we now pose is: is it possible to develop fifth order methods with three function evaluations for solving cubic or higher order polynomials? This is left for future consideration.

Acknowledgments

The author is thankful to the anonymous reviewers for their valuable comments to improve the readability of the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Babajee, D.K.R. Several Improvements of the 2-point third order midpoint iterative method using weight functions. Appl. Math. Comp. 2012, 218, 7958–7966. [Google Scholar] [CrossRef]
  2. Babajee, D.K.R. On a two-parameter Chebyshev-Halley-like family of optimal two-point fourth order methods free from second derivatives. Afr. Mat. 2015, 26, 689–697. [Google Scholar] [CrossRef]
  3. Babajee, D.K.R.; Jaunky, V.C. Applications of Higher-Order Optimal Newton Secant Iterative Methods in Ocean Acidification and Investigation of Long-Run Implications of CO2 Emissions on Alkalinity of Seawater. ISRN Appl. Math. 2013. [Google Scholar] [CrossRef]
  4. Babajee, D.K.R.; Thukral, R. On a 4-point sixteenth-order King family of iterative methods for solving nonlinear equations. Int. J. Math. Math. Sci. 2012. [Google Scholar] [CrossRef]
  5. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef]
  6. Soleymani, F.; Khratti, S.K.; Vanani, S.K. Two new classes of optimal Jarratt-type fourth-order methods. Appl. Math. Lett. 2011, 25, 847–853. [Google Scholar] [CrossRef]
  7. Wait, R. The Numerical Solution of Algebraic Equations; John Wiley & Sons: New York, NY, USA, 1979. [Google Scholar]
  8. Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  9. Kung, H.T.; Traub, J.F. Optimal Order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  10. Jarratt, P. A Review of Methods for Solving Nonlinear Algebraic Equations in One Variable; Gordon & Breach Science Publishers: New York, NY, USA, 1970. [Google Scholar]
  11. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  12. Kalantari, B. Polynomial Root-Finding and Polynomiography; World Scientific Publishing Co. Pte. Ltd.: Singapore, 2009. [Google Scholar]
  13. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comp. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  14. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Accelerated methods of order 2p for systems of nonlinear equations. J. Comp. Appl. Math. 2010, 53, 485–495. [Google Scholar]
  15. Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comp. 2007, 188, 1678–1685. [Google Scholar] [CrossRef]
  16. Chandrasekhar, S. Radiative Transfer; Dover Publications: New York, NY, USA, 1960. [Google Scholar]
  17. Argyros, I. Quadratic equations and applications to Chandrasekhar’s and related equations. Bull. Austral. Math. Soc. 1985, 32, 275–292. [Google Scholar] [CrossRef]
  18. Argyros, I. On a class of nonlinear integral equations arising in neutron transport. Aequ. Math. 1988, 35, 99–111. [Google Scholar] [CrossRef]
  19. Ezquerro, J.; Gutierrez, J.; Hernandez, M.; Salanova, M. Solving nonlinear integral equations arising in radiative transfer. Numer. Funct. Anal. Optim. 1999, 20, 661–673. [Google Scholar] [CrossRef]
  20. Soleymani, F.; Babajee, D.K.R.; Shateyi, S.; Motsa, S.S. Construction of Optimal Derivative-Free Techniques without Memory. J. Appl. Math. 2012. [Google Scholar] [CrossRef]
