Article

Convergence and Dynamics of a Higher-Order Method

by Alejandro Moysi 1, Ioannis K. Argyros 2, Samundra Regmi 2, Daniel González 3, Á. Alberto Magreñán 1,* and Juan Antonio Sicilia 4

1 Universidad de La Rioja, Av. de la Paz 93-103, 26006 Logroño, La Rioja, Spain
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Escuela de Ciencias Físicas y Matemáticas, Universidad de Las Américas, Avda. de los Granados y Colimes, Quito 170125, Ecuador
4 Universidad Internacional de La Rioja (UNIR), Av. de la Paz 137, 26006 Logroño, La Rioja, Spain
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(3), 420; https://doi.org/10.3390/sym12030420
Submission received: 31 December 2019 / Revised: 12 February 2020 / Accepted: 17 February 2020 / Published: 5 March 2020
(This article belongs to the Special Issue Iterative Numerical Functional Analysis with Applications)

Abstract

Solving problems in various disciplines such as biology, chemistry, economics, medicine, physics, and engineering, to mention a few, reduces to solving an equation, and finding that solution is one of the greatest challenges. It requires an iterative method that generates a sequence approximating the solution. That is why, in this work, we analyze the local convergence of a high-order iterative method for finding the solution of a nonlinear equation. We extend the applicability of previous results using conditions only on the first derivative, which is the only derivative that actually appears in the method. This is in contrast to earlier works that use derivatives of order higher than one, which do not appear in the method. Moreover, we consider the dynamics of some members of the family in order to see the existing differences between them.

1. Introduction

Mathematics is always changing, and the way we teach it changes as well, as can be seen in the literature. Moreover, in advanced mathematics we need to consider different alternatives, since we are all aware of the difficulties that students encounter. In this paper, we present a study of iterative methods that can be used for teaching at the postgraduate level.
In the present work, we focus on the problem of solving the equation
$$g(x) = 0, \tag{1}$$
that is, of approximating its solution $x^*$, where $g: \Omega \subseteq S \to S$ is differentiable and $S = \mathbb{R}$ or $S = \mathbb{C}$. There exist several studies related to this problem, since iterative methods are needed to find the solution. We refer the reader to the book by Petković et al. [1] for a collection of relevant methods. The method of interest in this case is:
$$y_n = x_n - \frac{g(x_n)}{g'(x_n)}, \qquad t_n = y_n - \frac{g(y_n)\,\bigl(g(x_n) + \rho\, g(y_n)\bigr)}{g'(x_n)\,\bigl(g(x_n) + (\rho - 2)\, g(y_n)\bigr)}, \qquad x_{n+1} = t_n - \delta\,\frac{K(t_n)}{g'(x_n)}, \tag{2}$$
where a starting point $x_0$ is chosen, $\rho, \delta \in S$ are parameters, and
$$K(t_n) = g(x_n) + \frac{g'(x_n)\,(t_n - y_n)^2 (t_n - x_n)}{(y_n - x_n)(x_n + 2 y_n - 3 t_n)} + \frac{g(t_n)\,(t_n - y_n)(x_n - t_n)}{x_n + 2 y_n - 3 t_n} + \frac{g[x_n; y_n]\,(t_n - x_n)^3}{(y_n - x_n)(x_n + 2 y_n - 3 t_n)}. \tag{3}$$
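For concreteness, here is a minimal numerical sketch of one pass of the method in Equation (2), assuming the reconstruction above; the divided difference $g[x_n; y_n]$ is taken as the usual first-order divided difference $(g(x_n) - g(y_n))/(x_n - y_n)$, and all names are ours rather than the authors'.

```python
def method_step(g, dg, x, rho, delta):
    """One pass x_n -> x_{n+1} of the three-step method in Equation (2).

    g  -- the function whose zero is sought
    dg -- its first derivative (the only derivative the method uses)
    """
    gx, dgx = g(x), dg(x)
    y = x - gx / dgx                                    # Newton predictor
    gy = g(y)
    # King-type corrector with parameter rho
    t = y - gy * (gx + rho * gy) / (dgx * (gx + (rho - 2.0) * gy))
    gt = g(t)
    dd = (gx - gy) / (x - y)                            # divided difference g[x_n; y_n]
    w = x + 2.0 * y - 3.0 * t
    K = (gx
         + dgx * (t - y) ** 2 * (t - x) / ((y - x) * w)
         + gt * (t - y) * (x - t) / w
         + dd * (t - x) ** 3 / ((y - x) * w))
    return t - delta * K / dgx
```

Iterating `method_step` from a starting guess until $|g(x)|$ is small reproduces the scheme; for $\delta = 0$ the third step disappears and the method reduces to its two-step King-type part.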
If we consider only the first two steps of the method in Equation (2), we obtain King's class of methods, which has order four [2]. However, Equation (2) has limited usage, since its convergence analysis assumes the existence of derivatives up to order five, which do not appear in the method. Moreover, no computable error bounds on $\|x_n - x^*\|$ or uniqueness results are given. Furthermore, the choice of the initial point $x_0$ is a shot in the dark. As an example, consider the function
$$g(x) = \begin{cases} x^3 \ln x^2 + x^5 - x^4, & x \neq 0, \\ 0, & x = 0. \end{cases}$$
Then, $g'''(x)$ is unbounded on $\Omega = \left[-\frac{1}{2}, \frac{3}{2}\right]$. Hence, there is no guarantee that the method in Equation (2) converges to $x^* = 1$ under the results in [2].
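A quick symbolic check (a sketch using sympy; not part of the original paper) makes the obstruction visible: the third derivative already carries a logarithmic term, so it blows up as $x \to 0$.

```python
import sympy as sp

x = sp.symbols('x', real=True, nonzero=True)
g = x**3 * sp.ln(x**2) + x**5 - x**4
g3 = sp.simplify(sp.diff(g, x, 3))
print(g3)   # 6*log(x**2) + 60*x**2 - 24*x + 22 -- unbounded as x -> 0
```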
Our technique can also be used to extend the applicability of other methods defined in [1,2,3]. The novelty of our work, compared to others such as [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18], is that we give weaker conditions, involving only the first derivative, to guarantee the convergence of the described method. These conditions are given in Section 2, and the dynamical study appears in Section 3.

2. Local Convergence Analysis

In this section we study the local convergence of the method in Equation (2). If $v \in \mathbb{R}$ and $\mu > 0$, we denote by $U(v, \mu)$ and $\bar{U}(v, \mu)$, respectively, the open and closed balls in $\mathbb{R}$ with center $v$ and radius $\mu$. Besides, we require the parameters $L_0 > 0$, $L > 0$, $M_0 > 0$, $M > 0$, $\gamma > 0$, $\beta > 1$, and $\rho, \delta \in \mathbb{R}$. We need to define some parameters and functions to analyze the local convergence. Consider the following functions on the interval $\left[0, \frac{1}{L_0}\right)$:
$$g_1(t) = \frac{L t}{2(1 - L_0 t)}, \tag{4}$$
$$\bar{g}_1(t) = \frac{L_0}{2}\, t + M |\rho - 2|\, g_1(t), \tag{5}$$
$$h_1(t) = \bar{g}_1(t) - 1.$$
Then, $h_1(0) = -1 < 0$ and $h_1(t) \to +\infty$ as $t \to \left(\frac{1}{L_0}\right)^-$. By the intermediate value theorem, the function $h_1$ has zeros in the interval $\left(0, \frac{1}{L_0}\right)$. Let $r_1$ be the smallest such zero. Define functions $g_2$ and $h_2$ on the interval $[0, r_1)$ by
$$g_2(t) = \left[1 + \frac{M^2\bigl(1 + |\rho|\, g_1(t)\bigr)}{(1 - L_0 t)\bigl(1 - \bar{g}_1(t)\bigr)}\right] g_1(t) \tag{6}$$
and
$$h_2(t) = g_2(t) - 1.$$
By these definitions, $h_2(0) = -1 < 0$ and $h_2(t) \to +\infty$ as $t \to r_1^-$. For this reason, the function $h_2$ has a smallest zero $r_2 \in (0, r_1)$. Moreover, define functions $g_3$ and $h_3$ on $[0, r_2)$ by
$$\begin{aligned} g_3(t) = g_2(t) + M|\delta| \Biggl[ & \frac{1}{1 - L_0 t} + \frac{\beta \gamma M^3 g_1^2(t)\,\bigl(1 + |\rho|\, g_1(t)\bigr)}{(1 - L_0 t)\left(1 - \frac{L_0}{2} t\right)\bigl(1 - \bar{g}_1(t)\bigr)^2} \\ & + \frac{\beta \gamma M^2 g_1(t)\,\bigl(1 + |\rho|\, g_1(t)\bigr)}{(1 - L_0 t)^2 \bigl(1 - \bar{g}_1(t)\bigr)} + \frac{\beta M_0 M^3 \bigl(1 + |\rho - 1|\, g_1(t) + |\rho|\, g_1^2(t)\bigr)^2}{(1 - L_0 t)^2 \left(1 - \frac{L_0}{2} t\right)\bigl(1 - \bar{g}_1(t)\bigr)^2} \Biggr], \tag{7} \end{aligned}$$
and
$$h_3(t) = g_3(t) - 1.$$
Suppose that
$$M |\delta| \left(1 + \beta M_0 M^3\right) < 1.$$
We can see that $h_3(0) = M|\delta|\left(1 + \beta M_0 M^3\right) - 1 < 0$ and $h_3(t) \to +\infty$ as $t \to r_1^-$. Denote by $r_3$ the smallest zero of $h_3(t) = 0$ in $(0, r_1)$. Set $r = \min\{r_1, r_2, r_3\}$. Then, we have that
$$0 \le g_1(t) < 1, \qquad 0 \le \bar{g}_1(t) < 1, \qquad 0 \le g_2(t) < 1, \qquad 0 \le g_3(t) < 1 \qquad \text{for each } t \in [0, r).$$
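The radii can be computed numerically. The following is a minimal sketch (ours, not the paper's) that encodes $g_1$, $\bar{g}_1$, $g_2$, $g_3$ exactly as reconstructed above and locates $r_1$, $r_2$, $r_3$ by bisection, in the spirit of the intermediate value theorem argument:

```python
def radius(L0, L, M, M0, gamma, rho, delta, beta, tol=1e-12):
    """Radii r1, r2, r3 (smallest zeros of h1, h2, h3) and r = min of the three.
    A sketch assuming the g-functions exactly as reconstructed above."""
    g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
    g1bar = lambda t: L0 * t / 2.0 + M * abs(rho - 2.0) * g1(t)
    g2 = lambda t: (1.0 + M ** 2 * (1.0 + abs(rho) * g1(t))
                    / ((1.0 - L0 * t) * (1.0 - g1bar(t)))) * g1(t)

    def g3(t):
        q = 1.0 + abs(rho - 1.0) * g1(t) + abs(rho) * g1(t) ** 2
        return g2(t) + M * abs(delta) * (
            1.0 / (1.0 - L0 * t)
            + beta * gamma * M ** 3 * g1(t) ** 2 * (1.0 + abs(rho) * g1(t))
            / ((1.0 - L0 * t) * (1.0 - L0 * t / 2.0) * (1.0 - g1bar(t)) ** 2)
            + beta * gamma * M ** 2 * g1(t) * (1.0 + abs(rho) * g1(t))
            / ((1.0 - L0 * t) ** 2 * (1.0 - g1bar(t)))
            + beta * M0 * M ** 3 * q ** 2
            / ((1.0 - L0 * t) ** 2 * (1.0 - L0 * t / 2.0) * (1.0 - g1bar(t)) ** 2))

    def smallest_zero(h, hi):
        lo = 0.0                          # h(0) < 0 by construction
        while hi - lo > tol:              # plain bisection (IVT argument)
            mid = 0.5 * (lo + hi)
            if h(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    eps = 1e-9
    cap = (1.0 - eps) / L0                # g1 blows up as t -> 1/L0
    r1 = smallest_zero(lambda t: g1bar(t) - 1.0,
                       2.0 / L0 if rho == 2.0 else cap)  # for rho = 2, h1 = L0 t/2 - 1
    r2 = smallest_zero(lambda t: g2(t) - 1.0, min(r1, cap))
    r3 = smallest_zero(lambda t: g3(t) - 1.0, r2)
    return r1, r2, r3, min(r1, r2, r3)
```

The bisection brackets rely on the sign pattern established above: each $h_i$ is negative at $0$ and positive near the right end of its interval.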
We can express the method in Equation (2) in a different way as
$$y_n = x_n - \frac{g(x_n)}{g'(x_n)}, \qquad t_n = y_n - \frac{g(y_n)}{g'(x_n)} \cdot \frac{g(x_n) + \rho\, g(y_n)}{g(x_n) + (\rho - 2)\, g(y_n)}, \qquad x_{n+1} = t_n - \delta\,(A_n + B_n + C_n + D_n),$$
where
$$A_n = \frac{g(x_n)}{g'(x_n)}, \qquad B_n = \frac{(t_n - y_n)^2 (t_n - x_n)}{(y_n - x_n)(x_n + 2 y_n - 3 t_n)}, \qquad C_n = \frac{g(t_n)\,(t_n - y_n)(x_n - t_n)}{g'(x_n)(x_n + 2 y_n - 3 t_n)} \tag{8}$$
and
$$D_n = \frac{g[x_n; y_n]\,(t_n - x_n)^3}{g'(x_n)(y_n - x_n)(x_n + 2 y_n - 3 t_n)}.$$
Moreover, by simple algebraic manipulations, and in view of the definitions of $x_n$, $y_n$, and $t_n$, we can rewrite $B_n$, $C_n$, and $D_n$ as
$$B_n = \frac{g^2(y_n)\,\bigl(g(x_n) + \rho\, g(y_n)\bigr)^2\,\bigl(g^2(x_n) + (\rho - 1)\, g(x_n) g(y_n) + \rho\, g^2(y_n)\bigr)}{g'(x_n)\, g(x_n)\,\bigl(g(x_n) + (\rho - 2)\, g(y_n)\bigr)^2\,\bigl(g^2(x_n) + (\rho + 1)\, g(x_n) g(y_n) + 3\rho\, g^2(y_n)\bigr)}, \tag{9}$$
$$C_n = -\frac{g(t_n)\, g(y_n)\,\bigl(g(x_n) + \rho\, g(y_n)\bigr)\,\bigl(g^2(x_n) + (\rho - 1)\, g(x_n) g(y_n) + \rho\, g^2(y_n)\bigr)}{g'^2(x_n)\,\bigl(g(x_n) + (\rho - 2)\, g(y_n)\bigr)\,\bigl(g^2(x_n) + (\rho + 1)\, g(x_n) g(y_n) + 3\rho\, g^2(y_n)\bigr)} \tag{10}$$
and
$$D_n = \frac{g[x_n; y_n]\,\bigl(g^2(x_n) + (\rho - 1)\, g(x_n) g(y_n) + \rho\, g^2(y_n)\bigr)^3}{g(x_n)\, g'^2(x_n)\,\bigl(g(x_n) + (\rho - 2)\, g(y_n)\bigr)^2\,\bigl(g^2(x_n) + (\rho + 1)\, g(x_n) g(y_n) + 3\rho\, g^2(y_n)\bigr)}. \tag{11}$$
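These identities can be machine-checked. The following sympy sketch (ours) substitutes the definitions of $y_n$ and $t_n$ into the expression for $B_n$ from Equation (8), treating the values $g(x_n)$, $g(y_n)$, $g'(x_n)$ as free symbols, and confirms that it agrees with the rewritten form in Equation (9); $C_n$ and $D_n$ can be verified the same way.

```python
import sympy as sp

gx, gy, dgx, rho, x = sp.symbols('g_x g_y dg_x rho x')
y = x - gx / dgx
t = y - gy * (gx + rho * gy) / (dgx * (gx + (rho - 2) * gy))
q1 = gx**2 + (rho - 1) * gx * gy + rho * gy**2       # quadratic from Equation (36)
q2 = gx**2 + (rho + 1) * gx * gy + 3 * rho * gy**2   # quadratic from Equation (35)
B_def = (t - y)**2 * (t - x) / ((y - x) * (x + 2 * y - 3 * t))
B_new = gy**2 * (gx + rho * gy)**2 * q1 / (dgx * gx * (gx + (rho - 2) * gy)**2 * q2)
print(sp.simplify(B_def - B_new))   # prints 0, confirming Equation (9)
```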
Next, we can give the local convergence result for the method in Equation (2) using the preceding notation.
Theorem 1. 
Let $D \subseteq \mathbb{R}$ be a convex subset and $g: D \to \mathbb{R}$ a differentiable function. Consider a divided difference of order one, $g[\cdot\,;\cdot]: D \times D \to \mathbb{R}$, a point $x^* \in D$, and constants $L_0 > 0$, $L > 0$, $M_0 > 0$, $M > 0$, $\gamma > 0$, $\rho, \delta \in \mathbb{R}$, and $\beta > 1$ such that, for each $x, y \in D$,
$$M|\delta|\left(1 + \beta M_0 M^3\right) < 1, \tag{12}$$
$$0 < \rho_0 \le \rho, \qquad \max\left\{\rho_0,\, 3 - 2\sqrt{2}\right\} \le \rho \le 3 + 2\sqrt{2}, \tag{13}$$
$$g(x^*) = 0, \qquad g'(x^*) \neq 0, \qquad \|g'(x^*)^{-1}\| \le \gamma, \tag{14}$$
$$\|g'(x^*)^{-1}\bigl(g'(x) - g'(x^*)\bigr)\| \le L_0 \|x - x^*\|, \tag{15}$$
$$\|g'(x^*)^{-1}\bigl(g'(x) - g'(y)\bigr)\| \le L \|x - y\|, \tag{16}$$
$$\|g'(x^*)^{-1} g'(x)\| \le M, \tag{17}$$
$$\|g'(x^*)^{-1} g[x; y]\| \le M_0, \tag{18}$$
and
$$\bar{U}(x^*, r) \subseteq D, \tag{19}$$
where the radius $r$ is given previously and
$$\rho_0 = \frac{(1 + \beta)^2}{(\beta - 1)\left(2\sqrt{2(\beta - 1)(3\beta - 1)} + (5\beta - 3)\right)}. \tag{20}$$
Then, for $x_0 \in U(x^*, r) \setminus \{x^*\}$, the method in Equation (2) generates a well defined sequence $\{x_n\}$, all of whose terms lie in $U(x^*, r)$ $(n = 0, 1, 2, \ldots)$, and the sequence converges to $x^*$. Furthermore, the following estimates hold:
$$\|y_n - x^*\| \le g_1(\|x_n - x^*\|)\,\|x_n - x^*\| < \|x_n - x^*\| < r, \tag{21}$$
$$\|t_n - x^*\| \le g_2(\|x_n - x^*\|)\,\|x_n - x^*\| < \|x_n - x^*\|, \tag{22}$$
and
$$\|x_{n+1} - x^*\| \le g_3(\|x_n - x^*\|)\,\|x_n - x^*\| < \|x_n - x^*\|, \tag{23}$$
where the functions $g_1$, $g_2$, and $g_3$ are defined before Theorem 1. Besides, $x^*$ is the only solution of $g(x) = 0$ in $\bar{U}(x^*, T)$ for $T \in \left[r, \frac{2}{L_0}\right)$ such that $\bar{U}(x^*, T) \subseteq D$.
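Before the proof, a quick numeric sanity check (ours) of the closed form in Equation (20): for a given $\beta > 1$, $\rho_0$ should be the smallest positive root of the quadratic $\psi$ that appears in the proof below.

```python
import math

def rho0(beta):
    """Closed form of Equation (20)."""
    return (1 + beta)**2 / ((beta - 1) * (2 * math.sqrt(2 * (beta - 1) * (3 * beta - 1))
                                          + (5 * beta - 3)))

def psi(t, beta):
    """psi from the proof: (beta-1)^2 t^2 - 2 (beta-1)(5 beta-3) t + (beta+1)^2."""
    return (beta - 1)**2 * t**2 - 2 * (beta - 1) * (5 * beta - 3) * t + (beta + 1)**2

beta = 2.0
print(rho0(beta), psi(rho0(beta), beta))  # psi(rho0) vanishes up to rounding
```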
Proof. 
We shall show the estimates in Equations (21)–(23) using mathematical induction. We get, through the hypothesis $x_0 \in U(x^*, r) \setminus \{x^*\}$, the definition of $r$, and Equation (15), that
$$\|g'(x^*)^{-1}\bigl(g'(x_0) - g'(x^*)\bigr)\| \le L_0 \|x_0 - x^*\| < L_0 r < 1. \tag{24}$$
From the Banach lemma on invertible operators and Equation (24), it follows that $g'(x_0)$ is invertible and
$$\|g'(x_0)^{-1} g'(x^*)\| \le \frac{1}{1 - L_0\|x_0 - x^*\|}. \tag{25}$$
Then, $y_0$ is well defined by the method in Equation (2) for $n = 0$, and we can write
$$y_0 - x^* = x_0 - x^* - g'(x_0)^{-1} g(x_0). \tag{26}$$
So, we get, using Equations (4), (16), (25), and (26), that
$$\|y_0 - x^*\| \le \|g'(x_0)^{-1} g'(x^*)\| \left\| \int_0^1 g'(x^*)^{-1}\bigl(g'(x^* + \theta(x_0 - x^*)) - g'(x_0)\bigr)(x_0 - x^*)\, d\theta \right\| \le \frac{L\|x_0 - x^*\|^2}{2(1 - L_0\|x_0 - x^*\|)} = g_1(\|x_0 - x^*\|)\,\|x_0 - x^*\| < \|x_0 - x^*\| < r, \tag{27}$$
which shows $y_0 \in U(x^*, r)$ and Equation (21) for $n = 0$.
Using Equation (14), we can express
$$g(x_0) = g(x_0) - g(x^*) = \int_0^1 g'\bigl(x^* + \theta(x_0 - x^*)\bigr)(x_0 - x^*)\, d\theta. \tag{28}$$
We get, in view of Equations (17) and (28), that
$$\|g'(x^*)^{-1} g(x_0)\| = \left\| \int_0^1 g'(x^*)^{-1} g'\bigl(x^* + \theta(x_0 - x^*)\bigr)(x_0 - x^*)\, d\theta \right\| \le M\|x_0 - x^*\|, \tag{29}$$
and similarly
$$\|g'(x^*)^{-1} g(y_0)\| \le M \|y_0 - x^*\|. \tag{30}$$
At this point, we have that, by Equation (18),
$$\|g'(x^*)^{-1} g[x_0; y_0]\| \le M_0. \tag{31}$$
Next, we show that $g(x_0) + (\rho - 2)\, g(y_0)$ is invertible. We get, using Equations (5), (14), (15), (27), and (30), that
$$\begin{aligned} &\bigl\|\bigl(g'(x^*)(x_0 - x^*)\bigr)^{-1}\bigl(g(x_0) - g(x^*) - g'(x^*)(x_0 - x^*) + (\rho - 2)\, g(y_0)\bigr)\bigr\| \\ &\quad\le \frac{1}{\|x_0 - x^*\|}\left[\left\| \int_0^1 g'(x^*)^{-1}\bigl(g'(x^* + \theta(x_0 - x^*)) - g'(x^*)\bigr)(x_0 - x^*)\, d\theta \right\| + |\rho - 2|\, \|g'(x^*)^{-1} g(y_0)\|\right] \\ &\quad\le \frac{1}{\|x_0 - x^*\|}\left[\frac{L_0}{2}\|x_0 - x^*\|^2 + |\rho - 2|\, M\, \|y_0 - x^*\|\right] \\ &\quad\le \frac{1}{\|x_0 - x^*\|}\left[\frac{L_0}{2}\|x_0 - x^*\|^2 + |\rho - 2|\, M\, g_1(\|x_0 - x^*\|)\,\|x_0 - x^*\|\right] = \bar{g}_1(\|x_0 - x^*\|) < 1. \tag{32} \end{aligned}$$
Then, by Equation (32) and the Banach lemma, $g(x_0) + (\rho - 2)\, g(y_0)$ is invertible and
$$\bigl\|\bigl(g(x_0) + (\rho - 2)\, g(y_0)\bigr)^{-1} g'(x^*)\bigr\| \le \frac{1}{\|x_0 - x^*\|\,\bigl(1 - \bar{g}_1(\|x_0 - x^*\|)\bigr)}. \tag{33}$$
It also follows that $t_0$ is well defined by the method in Equation (2) for $n = 0$. Then, using the method in Equation (2) for $n = 0$ and Equations (6), (25), (29), (30), and (33), we have that
$$\begin{aligned} \|t_0 - x^*\| &\le \|y_0 - x^*\| + \frac{M^2\, \|y_0 - x^*\|\,\bigl(\|x_0 - x^*\| + |\rho|\,\|y_0 - x^*\|\bigr)}{(1 - L_0\|x_0 - x^*\|)\,\|x_0 - x^*\|\,\bigl(1 - \bar{g}_1(\|x_0 - x^*\|)\bigr)} \\ &\le \left[1 + \frac{M^2\bigl(1 + |\rho|\, g_1(\|x_0 - x^*\|)\bigr)}{(1 - L_0\|x_0 - x^*\|)\,\bigl(1 - \bar{g}_1(\|x_0 - x^*\|)\bigr)}\right] g_1(\|x_0 - x^*\|)\,\|x_0 - x^*\| \\ &= g_2(\|x_0 - x^*\|)\,\|x_0 - x^*\| < \|x_0 - x^*\| < r, \tag{34} \end{aligned}$$
which shows Equation (22) and $t_0 \in U(x^*, r)$ for $n = 0$. Next, we derive estimates on $\|A_0\|$, $\|B_0\|$, $\|C_0\|$, and $\|D_0\|$. Assume that $g(x_0) \neq 0$. We regard the expressions $g^2(x_0) + (\rho + 1)\, g(x_0) g(y_0) + 3\rho\, g^2(y_0)$ and $g^2(x_0) + (\rho - 1)\, g(x_0) g(y_0) + \rho\, g^2(y_0)$ as quadratic polynomials in $g(y_0)$ (or $g(x_0)$). Their discriminants are, respectively, $(\rho^2 - 10\rho + 1)\, g^2(x_0)$ and $(\rho^2 - 6\rho + 1)\, g^2(x_0)$, which are negative by Equation (13). Consequently,
$$g^2(x_0) + (\rho + 1)\, g(x_0)\, g(y_0) + 3\rho\, g^2(y_0) > 0 \tag{35}$$
and
$$g^2(x_0) + (\rho - 1)\, g(x_0)\, g(y_0) + \rho\, g^2(y_0) > 0. \tag{36}$$
Then, x 1 is well defined. Besides, we have, by Equations (35) and (36), that
$$\left|\frac{g^2(x_0) + (\rho - 1)\, g(x_0) g(y_0) + \rho\, g^2(y_0)}{g^2(x_0) + (\rho + 1)\, g(x_0) g(y_0) + 3\rho\, g^2(y_0)}\right| = \frac{g^2(x_0) + (\rho - 1)\, g(x_0) g(y_0) + \rho\, g^2(y_0)}{g^2(x_0) + (\rho + 1)\, g(x_0) g(y_0) + 3\rho\, g^2(y_0)} \le \beta, \tag{37}$$
so Equation (37) reduces to showing that, for $\lambda = \frac{g(y_0)}{g(x_0)}$,
$$\varphi(\lambda) \le 0, \tag{38}$$
where
$$\varphi(t) = (1 - 3\beta)\,\rho\, t^2 + \bigl((\rho - 1) - \beta(\rho + 1)\bigr)\, t + (1 - \beta).$$
The inequality in Equation (38) is satisfied for all $t \in \mathbb{R}$ if $1 - 3\beta < 0$ (i.e., $\beta > \frac{1}{3}$) and the discriminant $\Delta$ of $\varphi$ satisfies
$$\Delta \le 0,$$
or, equivalently,
$$\psi(\rho) \le 0,$$
where,
$$\psi(t) = (\beta - 1)^2\, t^2 - 2(\beta - 1)(5\beta - 3)\, t + (\beta + 1)^2.$$
But the discriminant $\Delta_1$ of $\psi$ is given by
$$\Delta_1 = 32(\beta - 1)^3(3\beta - 1) > 0, \quad \text{if } \beta > 1.$$
Moreover, we have $(\beta - 1)(5\beta - 3) > 0$ for $\beta > 1$. Then, by the Descartes rule of signs, $\psi$ has two positive zeros. Denote by $\rho_0$ the smallest such zero, which can be given in closed form using the quadratic formula, arriving at the definition of $\rho_0$ given in Equation (20). Hence, Equation (38) holds for all $\lambda \in \mathbb{R}$ provided that Equation (13) is satisfied. Then, $x_1$ is well defined by the method in Equation (2) for $n = 0$, and we get, using Equations (8), (25), and (29), that
$$\|A_0\| \le \frac{M\|x_0 - x^*\|}{1 - L_0\|x_0 - x^*\|}. \tag{39}$$
Next, using inequalities from Equations (9), (14), (25), (27), (30), (33), and (37), we have that
$$\|B_0\| \le \frac{\gamma\beta M^4\, \|y_0 - x^*\|^2\,\bigl(\|x_0 - x^*\| + |\rho|\, \|y_0 - x^*\|\bigr)}{\|x_0 - x^*\|^2\left(1 - \frac{L_0}{2}\|x_0 - x^*\|\right)\bigl(1 - \bar{g}_1(\|x_0 - x^*\|)\bigr)^2\bigl(1 - L_0\|x_0 - x^*\|\bigr)} \le \frac{\gamma\beta M^4\, g_1^2(\|x_0 - x^*\|)\,\bigl(1 + |\rho|\, g_1(\|x_0 - x^*\|)\bigr)\,\|x_0 - x^*\|}{\left(1 - \frac{L_0}{2}\|x_0 - x^*\|\right)\bigl(1 - \bar{g}_1(\|x_0 - x^*\|)\bigr)^2\bigl(1 - L_0\|x_0 - x^*\|\bigr)}. \tag{40}$$
Through Equations (10), (14), (25), (27), (30), (33), and (37), we obtain that
$$\|C_0\| \le \frac{\gamma\beta M^3\, \|y_0 - x^*\|\,\bigl(\|x_0 - x^*\| + |\rho|\, \|y_0 - x^*\|\bigr)}{\bigl(1 - L_0\|x_0 - x^*\|\bigr)^2\,\|x_0 - x^*\|\,\bigl(1 - \bar{g}_1(\|x_0 - x^*\|)\bigr)} \le \frac{\gamma\beta M^3\, g_1(\|x_0 - x^*\|)\,\bigl(1 + |\rho|\, g_1(\|x_0 - x^*\|)\bigr)\,\|x_0 - x^*\|}{\bigl(1 - L_0\|x_0 - x^*\|\bigr)^2\bigl(1 - \bar{g}_1(\|x_0 - x^*\|)\bigr)}. \tag{41}$$
Then, we obtain, by Equations (11), (14), (18), (25), (27), (30), (33), and (37) that
$$\|D_0\| \le \frac{\beta M_0 M^4\bigl(\|x_0 - x^*\|^2 + |\rho - 1|\,\|x_0 - x^*\|\,\|y_0 - x^*\| + |\rho|\,\|y_0 - x^*\|^2\bigr)^2}{\bigl(1 - L_0\|x_0 - x^*\|\bigr)^2\,\|x_0 - x^*\|^3\left(1 - \frac{L_0}{2}\|x_0 - x^*\|\right)\bigl(1 - \bar{g}_1(\|x_0 - x^*\|)\bigr)^2} \le \frac{\beta M_0 M^4\bigl(1 + |\rho - 1|\, g_1(\|x_0 - x^*\|) + |\rho|\, g_1^2(\|x_0 - x^*\|)\bigr)^2\,\|x_0 - x^*\|}{\bigl(1 - L_0\|x_0 - x^*\|\bigr)^2\left(1 - \frac{L_0}{2}\|x_0 - x^*\|\right)\bigl(1 - \bar{g}_1(\|x_0 - x^*\|)\bigr)^2}. \tag{42}$$
Then, using Equations (7), (34) and (39)–(42), and the method in Equation (2) for n = 0 , we obtain that
$$\|x_1 - x^*\| \le \|t_0 - x^*\| + |\delta|\bigl(\|A_0\| + \|B_0\| + \|C_0\| + \|D_0\|\bigr) \le g_3(\|x_0 - x^*\|)\,\|x_0 - x^*\| < \|x_0 - x^*\| < r, \tag{43}$$
which shows the inequality in Equation (23) for $n = 0$. By simply replacing $x_0, y_0, t_0, x_1$ with $x_k, y_k, t_k, x_{k+1}$ in the previous estimates, we obtain that the estimates in Equations (21)–(23) hold for every $k$. Then, from $\|x_{k+1} - x^*\| \le a\,\|x_k - x^*\| < r$, where $a = g_3(\|x_0 - x^*\|) \in [0, 1)$, we get that $\lim_{k\to\infty} x_k = x^*$ and $x_{k+1} \in U(x^*, r)$. At last, to show the uniqueness part, let us suppose that there exists $y^* \in \bar{U}(x^*, T)$ with $g(y^*) = 0$. Define $Q = \int_0^1 g'\bigl(y^* + \theta(x^* - y^*)\bigr)\, d\theta$. We get, using Equation (15), that
$$\|g'(x^*)^{-1}(Q - g'(x^*))\| \le L_0 \int_0^1 \|y^* + \theta(x^* - y^*) - x^*\|\, d\theta \le L_0 \int_0^1 (1 - \theta)\,\|x^* - y^*\|\, d\theta = \frac{L_0}{2} T < 1. \tag{44}$$
It follows from Equation (44) that $Q$ is invertible. Then, we conclude from the estimate $0 = g(x^*) - g(y^*) = Q(x^* - y^*)$ that $x^* = y^*$. □
As an example, consider $g(x) = e^x - 1$, $\Omega = U(0, 1)$, with $x^* = 0$ and $g[x; y] = \int_0^1 g'(y + \theta(x - y))\, d\theta$. Then, we have $\gamma = 1$, $L_0 = e - 1$, $L = e$, $M = e$, and $M_0 = \frac{e}{2}$. Choosing $\rho = 2.0$, $\delta = 0.001$, and $\beta = 1.01$, the conditions in Equations (12) and (13) are satisfied. The radii obtained are $r_1 = 1.16395$, $r_2 = 0.0585945$, and $r_3 = 0.0530484$; as a consequence, we obtain $r = r_3 = 0.0530484$.
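These radii can be reproduced with the `radius` sketch from earlier in this section (our code, under the constants above, reading the divided-difference constant as $M_0 = e/2$); small deviations from the printed values would only reflect our reconstruction of the $g$-functions.

```python
from math import e

# Worked example g(x) = exp(x) - 1 on U(0, 1); `radius` is the sketch from Section 2.
r1, r2, r3, r = radius(L0=e - 1.0, L=e, M=e, M0=e / 2.0, gamma=1.0,
                       rho=2.0, delta=0.001, beta=1.01)
print(r1, r2, r3, r)   # roughly 1.16395, 0.059, 0.053 -> r = r3
```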

3. Dynamical Analysis

In this section, the method in Equation (2) is applied to three different families of functions, and its behavior is analyzed as the parameter δ changes, using techniques that appear in [19,20,21,22].

3.1. Exponential Family

The method has been applied to the function $g(x) = e^x - 1$ by considering the corresponding equation $g(x) = 0$. This equation has a solution at the point $x = 0$, which is the only attractive fixed point of the method. In Figure 1, we observe how the method changes with the parameter δ. Dynamical planes represent the behavior of the method in the complex domain.
In Figure 2, the symmetry with respect to the imaginary axis of the region of convergence to the solution $x = 0$ is observed. Small islands of convergence appear outside the main region. It is necessary to increase the maximum number of iterations to achieve convergence for high values of δ.
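A dynamical plane of this kind can be reproduced with a short script. The following sketch (ours; grid size, window, and tolerance are arbitrary choices) iterates the `method_step` sketch from the Introduction over a grid of complex starting points and colors each point by how fast it reaches a root:

```python
import cmath
import numpy as np
import matplotlib.pyplot as plt

def dynamical_plane(g, dg, roots, rho, delta, maxiter=10,
                    xlim=(-4.0, 4.0), ylim=(-4.0, 4.0), n=400, tol=1e-6):
    """Basin plot: iterate method_step from every point of a complex grid;
    the color encodes the number of iterations until convergence (0 = none)."""
    xs, ys = np.linspace(*xlim, n), np.linspace(*ylim, n)
    img = np.zeros((n, n))
    for i, b in enumerate(ys):
        for j, a in enumerate(xs):
            z = complex(a, b)
            for k in range(maxiter):
                try:
                    z = method_step(g, dg, z, rho, delta)
                except (ZeroDivisionError, OverflowError):
                    break
                if any(abs(z - r) < tol for r in roots):
                    img[i, j] = k + 1
                    break
    plt.imshow(img, extent=[*xlim, *ylim], origin='lower')
    plt.xlabel('Re z'); plt.ylabel('Im z')
    plt.show()

# Exponential family, parameters as in Figure 2a (rho = 0.01, delta = 0.01):
dynamical_plane(lambda z: cmath.exp(z) - 1.0, cmath.exp,
                roots=[0.0], rho=0.01, delta=0.01)
```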

3.2. Sine Family

The method can be applied to the function $g(x) = \sin(x)$ with the equation $g(x) = 0$. In this case, the equation has the periodic family of solutions $x = k\pi$, $k \in \mathbb{Z}$ (that is, $\ldots, -\pi, 0, \pi, \ldots$), coinciding with the fixed points of the method. Figure 3 shows how the method changes with the parameter δ. Dynamical planes represent the behavior of the method in the complex domain, as shown in Figure 4, where for high values of δ the region of convergence is reduced.

3.3. Polynomial Family

The method was applied to the function $g(x) = (x - 1)(x + 1)$ with the equation $g(x) = 0$. The attracting fixed points obtained in this case using the method are $x = -1$ and $x = 1$, the solutions of the previous equation. Figure 5 shows how the method changes with the parameter δ. Dynamical planes represent the behavior of the method in the complex domain, as shown in Figure 6, where for larger values of δ the region of convergence is more complex. The dynamical-plane sketch from Section 3.1 applies verbatim, as shown below.
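For instance, a sketch of the call for the polynomial family, with parameter values taken from the captions of Figure 6:

```python
# Polynomial family g(z) = (z - 1)(z + 1) = z**2 - 1, as in Figure 6b.
dynamical_plane(lambda z: z * z - 1.0, lambda z: 2.0 * z,
                roots=[-1.0, 1.0], rho=0.01, delta=0.1)
```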

4. Conclusions

The study of high-order iterative methods is very important, since problems from all disciplines require the solution of some equation. This solution is found as the limit of a sequence generated by such methods, since closed-form solutions can rarely be found. The convergence order is usually established in the literature using expensive Taylor expansions and high-order derivatives, and without computable error estimates on $\|x_n - x^*\|$ or uniqueness results. It is worth noticing that these high-order derivatives do not appear in the methods. Moreover, the choice of the initial point is a "shot in the dark". Hence, the applicability of these methods is very limited. To address all these problems, we have developed a technique using hypotheses only on the first derivative, which actually appears in the method, together with Lipschitz-type conditions. This allows us to extend the applicability of the method and to find a radius of convergence, computable error estimates, and uniqueness results based on Lipschitz constants. Although we demonstrated our technique on the method in Equation (2), it can clearly be used to extend the applicability of other methods along the same lines. In view of the parameters involved in the method, its dynamics have also been explored in several interesting cases.

Author Contributions

All authors have equally contributed to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

Research supported in part by Programa de Apoyo a la investigación de la fundación Séneca–Agencia de Ciencia y Tecnología de la Región de Murcia 20928/PI/18 and by Spanish MINECO project PGC2018-095896-B-C21.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations: A Survey; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
  2. Hueso, J.L.; Martinez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420. [Google Scholar] [CrossRef]
  3. Behl, R.; Kanwar, V.; Kim, Y.I. Higher-order families of multiple root finding methods suitable for non-convergent cases and their dynamics. Math. Model. Anal. 2019, 24, 422–444. [Google Scholar] [CrossRef] [Green Version]
  4. Amat, S.; Busquier, S.; Gutiérrez, J.M. Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 2003, 157, 197–205. [Google Scholar] [CrossRef] [Green Version]
  5. Argyros, I.K. Computational Theory of Iterative Methods. Series: Studies in Computational Mathematics, 15; Chui, C.K., Wuytack, L., Eds.; Elsevier Publ. Co.: New York, NY, USA, 2007. [Google Scholar]
  6. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: Boca Raton, FL, USA; Taylor & Francis Group: Abingdon, UK, 2017. [Google Scholar]
  7. Argyros, I.K.; Magreñán, Á.A. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Elsevier: Amsterdam, The Netherlands, 2017. [Google Scholar]
  8. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis. Efficient Algorithms, Fixed Point Theory and Applications; World Scientific: Singapore, 2013. [Google Scholar]
  9. Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publ. Comp.: Hackensack, NJ, USA, 2013. [Google Scholar]
  10. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
  11. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  12. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science. Banach Ctr. Publ. 1978, 3, 129–142. [Google Scholar] [CrossRef]
  13. Sharma, J.R. Improved Chebyshev–Halley methods with sixth and eighth order of convergence. Appl. Math. Comput. 2015, 256, 119–124. [Google Scholar] [CrossRef]
  14. Sharma, R. Some fifth and sixth order iterative methods for solving nonlinear equations. Int. J. Eng. Res. Appl. 2014, 4, 268–273. [Google Scholar]
  15. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice–Hall Series in Automatic Computation: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  16. Madhu, K.; Jayaraman, J. Higher Order Methods for Nonlinear Equations and Their Basins of Attraction. Mathematics 2016, 4, 22. [Google Scholar] [CrossRef] [Green Version]
  17. Sanz-Serna, J.M.; Zhu, B. Word series high-order averaging of highly oscillatory differential equations with delay. Appl. Math. Nonlinear Sci. 2019, 4, 445–454. [Google Scholar] [CrossRef] [Green Version]
  18. Pandey, P.K. A new computational algorithm for the solution of second order initial value problems in ordinary differential equations. Appl. Math. Nonlinear Sci. 2018, 3, 167–174. [Google Scholar] [CrossRef] [Green Version]
  19. Magreñán, Á.A. Different anomalies in a Jarratt family of iterative root–finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
  20. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224. [Google Scholar] [CrossRef] [Green Version]
  21. Magreñán, Á.A.; Argyros, I.K. On the local convergence and the dynamics of Chebyshev-Halley methods with six and eight order of convergence. J. Comput. Appl. Math. 2016, 298, 236–251. [Google Scholar] [CrossRef]
  22. Lotfi, T.; Magreñán, Á.A.; Mahdiani, K.; Rainer, J.J. A variant of Steffensen-King’s type family with accelerated sixth-order convergence and high efficiency index: Dynamic study and approach. Appl. Math. Comput. 2015, 252, 347–353. [Google Scholar] [CrossRef]
Figure 1. Method Representation. (a) ρ = 0.01, δ = 0. (b) ρ = 0.01, δ = 0.01. (c) ρ = 0.01, δ = 0.1. (d) ρ = 0.01, δ = 1.
Figure 2. Dynamical planes associated with the method. (a) ρ = 0.01, δ = 0.01, maxiter = 10. (b) ρ = 0.01, δ = 0.1, maxiter = 10. (c) ρ = 0.01, δ = 1, maxiter = 10. (d) ρ = 0.01, δ = 1, maxiter = 20.
Figure 3. Method Representation. (a) ρ = 0.01, δ = 0. (b) ρ = 0.01, δ = 0.01. (c) ρ = 0.01, δ = 0.1. (d) ρ = 0.01, δ = 1.
Figure 4. Dynamical planes associated with the method. (a) ρ = 0.01, δ = 0.01, maxiter = 10. (b) ρ = 0.01, δ = 0.1, maxiter = 10. (c) ρ = 0.01, δ = 1, maxiter = 10. (d) ρ = 0.01, δ = 10, maxiter = 10.
Figure 5. Method Representation. (a) ρ = 0.01, δ = 0. (b) ρ = 0.01, δ = 0.01. (c) ρ = 0.01, δ = 0.1. (d) ρ = 0.01, δ = 1.
Figure 6. Dynamical planes associated with the method. (a) ρ = 0.01, δ = 0.01, maxiter = 10. (b) ρ = 0.01, δ = 0.1, maxiter = 10. (c) ρ = 0.01, δ = 1, maxiter = 10. (d) ρ = 0.01, δ = 10, maxiter = 25.
