Article

Derivative-Free Iterative Methods with Some Kurchatov-Type Accelerating Parameters for Solving Nonlinear Systems

School of Mathematical Sciences, Bohai University, Jinzhou 121000, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(6), 943; https://doi.org/10.3390/sym13060943
Submission received: 31 March 2021 / Revised: 11 May 2021 / Accepted: 21 May 2021 / Published: 26 May 2021
(This article belongs to the Special Issue Recent Advances and Application of Iterative Methods)

Abstract

Some Kurchatov-type accelerating parameters are used to construct derivative-free iterative methods with memory for solving nonlinear systems. The new methods are developed from an initial scheme without memory of convergence order three and attain convergence orders $2+\sqrt{5}\approx 4.236$ and 5, respectively. In numerical experiments, the new methods are applied to standard nonlinear systems and to nonlinear ordinary differential equations (ODEs). The numerical results support the theoretical results.

1. Introduction

Many real-world problems arising in various scientific fields are modeled by nonlinear systems $F(x) = 0$, where $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$. Symmetries and conservation laws are powerful tools for studying explicit solutions of nonlinear systems, and finding the solution of such systems is an important problem in mathematics. Iterative methods are an efficient class of methods for solving nonlinear systems, and their optimization and acceleration can be achieved by exploiting symmetries. Newton's method [1] is the oldest method for solving nonlinear systems; it is quadratically convergent provided that the initial approximation is close enough to the root. Based on Newton's method, many high-order iterative methods have been proposed in the literature. For example, Torres-Hernandez et al. [2], Gdawiec et al. [3], Akgül et al. [4] and Cordero et al. [5] developed variants of Newton's method by using fractional derivatives. Behl et al. [6] and Geum et al. [7] proposed high-order iterative methods and investigated their dynamics. Schwandt [8] proposed a symmetric iterative method for solving nonlinear systems. Barco et al. [9] obtained local solutions of partial differential equations by a symmetry approach. Derivative-free methods are variants of Newton's method that can handle non-differentiable nonlinear systems. One of the celebrated derivative-free iterative methods is Traub's method [10], which is given by
$$
z^{(j+1)} = z^{(j)} - [s^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}), \tag{1}
$$
where $s^{(j)} = z^{(j)} + B\,F(z^{(j)})$, $B$ is an arbitrary nonzero parameter and $[s^{(j)}, z^{(j)}; F]^{-1}$ is the inverse of the first-order divided difference operator $[s^{(j)}, z^{(j)}; F]$. The first-order divided difference operator $[\cdot,\cdot\,;F]: D \times D \subset \mathbb{R}^n \times \mathbb{R}^n \to L(\mathbb{R}^n)$ is an $n \times n$ matrix, which is defined by [11]:
$$
[z + h, z; F] = \int_0^1 F'(z + th)\,dt, \qquad (z, h) \in \mathbb{R}^n \times \mathbb{R}^n, \tag{2}
$$
where $h = s - z$. Developing the Taylor expansion of $F'(z + th)$ about the point $z$, we obtain
$$
\int_0^1 F'(z + th)\,dt = F'(z) + \frac{1}{2} F''(z)\,h + \frac{1}{6} F'''(z)\,h^2 + O(h^3). \tag{3}
$$
In the process of computation, the first-order divided difference operator $[s^{(j)}, z^{(j)}; F]$ is calculated componentwise by [11]
$$
[s^{(j)}, z^{(j)}; F]_{ik} = \frac{F_i(s_1^{(j)}, \ldots, s_{k-1}^{(j)}, s_k^{(j)}, z_{k+1}^{(j)}, \ldots, z_m^{(j)}) - F_i(s_1^{(j)}, \ldots, s_{k-1}^{(j)}, z_k^{(j)}, z_{k+1}^{(j)}, \ldots, z_m^{(j)})}{s_k^{(j)} - z_k^{(j)}}, \tag{4}
$$
where $1 \le i, k \le m$.
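As a concrete illustration of formula (4), the divided difference matrix can be assembled column by column. The following sketch is ours, not code from the paper: it assumes Python with NumPy, a function F mapping an n-vector to an n-vector, and that $s_k \neq z_k$ for every component.

```python
import numpy as np

def divided_difference(F, s, z):
    """First-order divided difference operator [s, z; F] built
    componentwise from formula (4)."""
    s = np.asarray(s, dtype=float)
    z = np.asarray(z, dtype=float)
    m = z.size
    D = np.zeros((m, m))
    for k in range(m):
        upper = np.concatenate((s[:k + 1], z[k + 1:]))  # (s_1,...,s_k, z_{k+1},...,z_m)
        lower = np.concatenate((s[:k], z[k:]))          # (s_1,...,s_{k-1}, z_k,...,z_m)
        D[:, k] = (F(upper) - F(lower)) / (s[k] - z[k])
    return D

# Small check: for this F, [s, z; F] should approximate the Jacobian F'(z).
F = lambda z: np.array([z[0]**2 + z[1] - 2.0, z[0] + z[1]**2 - 2.0])
print(divided_difference(F, [1.1, 0.9], [1.0, 1.0]))
```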
Based on Traub’s method, many derivative-free methods have been studied in the literature [12,13,14,15,16,17,18,19]. Derivative-free methods can be divided into two groups: iterative methods with memory and iterative methods without memory. Methods with memory are generally superior to methods without memory in terms of computational efficiency and stability. To date, very few derivative-free methods with memory for solving nonlinear systems have been proposed in the literature. Recently, Petković and Sharma [12] designed the following derivative-free method with memory for solving nonlinear systems by using a variable parameter:
$$
\begin{aligned}
s^{(j)} &= z^{(j)} - B^{(j)} F(z^{(j)}),\\
t^{(j)} &= z^{(j)} - [s^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
z^{(j+1)} &= t^{(j)} - \bigl(a I + G^{(j)}\bigl((3 - 2a) I + (a - 2) G^{(j)}\bigr)\bigr) [s^{(j)}, z^{(j)}; F]^{-1} F(t^{(j)}),
\end{aligned} \tag{5}
$$
where $B^{(j)} = [s^{(j-1)}, z^{(j-1)}; F]^{-1}$ is called the variable parameter, $G^{(j)} = [s^{(j)}, z^{(j)}; F]^{-1} [w^{(j)}, t^{(j)}; F]$, $w^{(j)} = t^{(j)} + c F(t^{(j)})$ and $c \in \mathbb{R} \setminus \{0\}$. Method (5) has convergence order $2 + \sqrt{5} \approx 4.236$ when the parameter $a \neq 3$. Using the same variable parameter $B^{(j)}$ as method (5), Ahmad et al. [13] and Kansal et al. [14] proposed high-order iterative methods with memory for solving nonlinear systems. Using Kurchatov's divided difference operator [15], Chicharro et al. [16] designed two derivative-free methods with memory for solving nonlinear systems. First, they constructed the following third-order iterative method without memory:
$$
\begin{aligned}
s^{(j)} &= z^{(j)} + B\,F(z^{(j)}),\\
t^{(j)} &= z^{(j)} - [s^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
z^{(j+1)} &= t^{(j)} - [s^{(j)}, t^{(j)}; F]^{-1} F(t^{(j)}),
\end{aligned} \tag{6}
$$
which satisfies the following error equation
$$
\varepsilon^{(j+1)} = A_2^{2} \bigl(I + B F'(\zeta)\bigr)^{2} (\varepsilon^{(j)})^{3} + O\bigl((\varepsilon^{(j)})^{4}\bigr), \tag{7}
$$
where $\varepsilon^{(j)} = z^{(j)} - \zeta$, $\zeta$ is the zero of the nonlinear function $F$ and $A_j = \frac{1}{j!} F'(\zeta)^{-1} F^{(j)}(\zeta)$, $j = 1, 2, \ldots, n$. Replacing the constant parameter $B$ with $B^{(j)} = -[2 z^{(j)} - z^{(j-1)}, z^{(j-1)}; F]^{-1}$ in method (6), they obtained the following fourth-order method with memory:
$$
\begin{aligned}
s^{(j)} &= z^{(j)} - [2 z^{(j)} - z^{(j-1)}, z^{(j-1)}; F]^{-1} F(z^{(j)}),\\
t^{(j)} &= z^{(j)} - [s^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
z^{(j+1)} &= t^{(j)} - [s^{(j)}, t^{(j)}; F]^{-1} F(t^{(j)}),
\end{aligned} \tag{8}
$$
where the first-order divided difference operator $[2 z^{(j)} - z^{(j-1)}, z^{(j-1)}; F]$ is called Kurchatov's divided difference operator. Using Kurchatov's divided difference operator to design the variable parameter, Cordero et al. [17], Argyros et al. [18] and Candela et al. [19] proposed some efficient Kurchatov-type methods. Variable parameters can be designed by different schemes, which usually use iterative sequences from the current and previous steps.
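To make the baseline concrete, here is a minimal sketch (our own illustration, not code from [16]) of the third-order scheme (6) without memory, reusing the divided_difference helper above. The parameter value B = 0.01, the tolerance and the iteration cap are illustrative assumptions, and each inverse is applied by solving a linear system.

```python
def basic_scheme(F, z0, B=0.01, tol=1e-12, max_iter=100):
    """Sketch of the third-order scheme (6) without memory."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        s = z + B * F(z)                                              # s = z + B F(z)
        t = z - np.linalg.solve(divided_difference(F, s, z), F(z))    # second sub-step
        z_new = t - np.linalg.solve(divided_difference(F, s, t), F(t))
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```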
In this paper, we design some new variable parameters by using new Kurchatov-type divided difference operators. The paper is organized as follows. In Section 2, the new Kurchatov-type divided difference operators are used to construct iterative schemes with memory for the numerical solution of nonlinear systems. The main advantage of the new Kurchatov-type divided difference operators is that they have smaller errors than Kurchatov's first-order operator $[2 z^{(j)} - z^{(j-1)}, z^{(j-1)}; F]$. The order of the basic method (6) is increased from 3 to $2 + \sqrt{5} \approx 4.236$ and to 5, respectively. The new methods are applied to standard nonlinear systems and to nonlinear ordinary differential equations (ODEs) in the numerical experiments of Section 3. A short summary is given in Section 4.

2. Some New Iterative Schemes with Memory

If $I + B F'(\zeta) \neq 0$ in (7), the convergence order of method (6) is three. Letting $B = -F'(\zeta)^{-1}$, the order of convergence of method (6) can be improved. However, $F'(\zeta)$ is unknown in practice. In order to improve the order of convergence of method (6), we can choose a variable parameter $B^{(j)}$ to replace the constant parameter $B$. The variable parameter $B^{(j)}$ should satisfy $\lim_{j \to \infty} B^{(j)} = -F'(\zeta)^{-1}$. Using Kurchatov's divided difference operator $[2 z^{(j)} - z^{(j-1)}, z^{(j-1)}; F]$ to approximate $F'(\zeta)$, Chicharro et al. [16] designed the iterative method (8) with the variable parameter $B^{(j)} = -[2 z^{(j)} - z^{(j-1)}, z^{(j-1)}; F]^{-1}$. Kurchatov's divided difference operator satisfies
$$
\lim_{j \to \infty} [2 z^{(j)} - z^{(j-1)}, z^{(j-1)}; F] = F'(\zeta), \tag{9}
$$
where the iterative sequence $\{z^{(j)}\} \to \zeta$ as $j \to \infty$.
To obtain more effective iterative methods, we design new Kurchatov-type first-order divided difference operators to construct the accelerating parameter $B^{(j)}$. If $j \to \infty$, then the iterative sequences generated by iterative method (6) satisfy $\{t^{(j)}\} \to \zeta$, $\{z^{(j)}\} \to \zeta$ and $\{s^{(j)}\} \to \zeta$. Using $t^{(j)}$, $z^{(j)}$ and $s^{(j)}$, we can design first-order divided difference operators that approximate $F'(\zeta)$.
Scheme 1.
Using $t^{(j-1)}$ and $z^{(j)}$, we design the first-order divided difference operator $[2 t^{(j-1)} - z^{(j)}, z^{(j)}; F]$ and obtain the following variable parameter:
$$
B^{(j)} = -[2 t^{(j-1)} - z^{(j)}, z^{(j)}; F]^{-1}. \tag{10}
$$
Using (2) and (3), we have
$$
[2 t^{(j-1)} - z^{(j)}, z^{(j)}; F] = \int_0^1 F'(z^{(j)} + x h)\,dx = F'(z^{(j)}) + F''(z^{(j)})\,h + \frac{2}{3} F'''(z^{(j)})\,h^2 + O(h^3), \tag{11}
$$
where $h = 2\bigl(t^{(j-1)} - z^{(j)}\bigr)$.
Using Taylor expansions around $\zeta$ and taking into account $F(\zeta) = 0$, we have
$$
F(z^{(j)}) = F'(\zeta)\bigl[\varepsilon^{(j)} + A_2 (\varepsilon^{(j)})^2 + A_3 (\varepsilon^{(j)})^3\bigr] + O\bigl((\varepsilon^{(j)})^4\bigr), \tag{12}
$$
$$
F'(z^{(j)}) = F'(\zeta)\bigl[I + 2 A_2 \varepsilon^{(j)} + 3 A_3 (\varepsilon^{(j)})^2\bigr] + O\bigl((\varepsilon^{(j)})^3\bigr), \tag{13}
$$
$$
F''(z^{(j)}) = F'(\zeta)\bigl[2 A_2 + 6 A_3 \varepsilon^{(j)}\bigr] + O\bigl((\varepsilon^{(j)})^2\bigr), \tag{14}
$$
and
$$
F'''(z^{(j)}) = F'(\zeta)\bigl[6 A_3\bigr] + O\bigl(\varepsilon^{(j)}\bigr), \tag{15}
$$
where $\varepsilon^{(j)} = z^{(j)} - \zeta$.
From (12)–(15), we have
$$
\begin{aligned}
[2 t^{(j-1)} - z^{(j)}, z^{(j)}; F] = F'(\zeta)\bigl[I &+ 2 A_2 \varepsilon_t^{(j-1)} + A_3 (\varepsilon^{(j)})^2 \\
&+ 4 A_3 (\varepsilon_t^{(j-1)})^2 - 2 A_3 \varepsilon_t^{(j-1)} \varepsilon^{(j)}\bigr] + O_3(\varepsilon_t^{(j-1)}, \varepsilon^{(j)}),
\end{aligned} \tag{16}
$$
where $\varepsilon_t^{(j-1)} = t^{(j-1)} - \zeta$, and $\varepsilon_t^{(j-1)} \to 0$ and $\varepsilon^{(j)} \to 0$ as $j \to \infty$.
From (16), we obtain
$$
\lim_{j \to \infty} [2 t^{(j-1)} - z^{(j)}, z^{(j)}; F] = F'(\zeta). \tag{17}
$$
This means that the first-order divided difference operator $[2 t^{(j-1)} - z^{(j)}, z^{(j)}; F]$ can be used to construct the variable parameter $B^{(j)}$.
Using $X X^{-1} = I$ and (17), we obtain
$$
\begin{aligned}
B^{(j)} = -[2 t^{(j-1)} - z^{(j)}, z^{(j)}; F]^{-1} = -\bigl[I &- 2 A_2 \varepsilon_t^{(j-1)} - A_3 (\varepsilon^{(j)})^2 + (2 A_2^2 - 4 A_3)(\varepsilon_t^{(j-1)})^2 \\
&+ 2 A_3 \varepsilon_t^{(j-1)} \varepsilon^{(j)}\bigr] F'(\zeta)^{-1} + O_3(\varepsilon_t^{(j-1)}, \varepsilon^{(j)}) 
\end{aligned} \tag{18}
$$
and
$$
\begin{aligned}
I + B^{(j)} F'(\zeta) &\sim 2 A_2 \varepsilon_t^{(j-1)} + A_3 (\varepsilon^{(j)})^2 - (2 A_2^2 - 4 A_3)(\varepsilon_t^{(j-1)})^2 - 2 A_3 \varepsilon_t^{(j-1)} \varepsilon^{(j)} \sim 2 A_2 \varepsilon_t^{(j-1)} \\
&\sim 2 A_2^2 \bigl(I + B^{(j-1)} F'(\zeta)\bigr)(\varepsilon^{(j-1)})^2. 
\end{aligned} \tag{19}
$$
In this manuscript, the symbols $\sim$ and $O$ are used in the following way: if $\lim_{n \to \infty} (x_n / y_n) = C$ and $C \neq 0$, then we write $x_n = O(y_n)$ or $x_n \sim y_n$.
Scheme 2.
Using $t^{(j-1)}$ and $z^{(j)}$, we design another first-order divided difference operator $[2 t^{(j-1)} - z^{(j)}, t^{(j-1)}; F]$ and obtain the following variable parameter:
$$
B^{(j)} = -[2 t^{(j-1)} - z^{(j)}, t^{(j-1)}; F]^{-1}. \tag{20}
$$
Using (2) and (3), we get
$$
\begin{aligned}
[2 t^{(j-1)} - z^{(j)}, t^{(j-1)}; F] &= F'(t^{(j-1)}) + \frac{1}{2} F''(t^{(j-1)})\,h + \frac{1}{6} F'''(t^{(j-1)})\,h^2 + O(h^3)\\
&= F'(\zeta)\bigl[I + 3 A_2 \varepsilon_t^{(j-1)} + 7 A_3 (\varepsilon_t^{(j-1)})^2 - 3 A_3 \varepsilon^{(j)} - 5 A_3 \varepsilon_t^{(j-1)} \varepsilon^{(j)}\bigr] + O_3(\varepsilon_t^{(j-1)}, \varepsilon^{(j)}),
\end{aligned} \tag{21}
$$
where $h = \varepsilon_t^{(j-1)} - \varepsilon^{(j)}$.
From (21), we get
$$
\begin{aligned}
B^{(j)} = -\bigl[I &- 3 A_2 \varepsilon_t^{(j-1)} + (9 A_2^2 - 7 A_3)(\varepsilon_t^{(j-1)})^2 + 3 A_3 \varepsilon^{(j)}\\
&- (5 A_3 + 9 A_2 A_3)\,\varepsilon_t^{(j-1)} \varepsilon^{(j)}\bigr] F'(\zeta)^{-1} + O_3(\varepsilon_t^{(j-1)}, \varepsilon^{(j)}) 
\end{aligned} \tag{22}
$$
and
$$
I + B^{(j)} F'(\zeta) \sim 3 A_2 \varepsilon_t^{(j-1)} \sim 3 A_2^2 \bigl(I + B^{(j-1)} F'(\zeta)\bigr)(\varepsilon^{(j-1)})^2. \tag{23}
$$
Scheme 3.
Using $t^{(j-1)}$ and $z^{(j)}$, we design another first-order divided difference operator $[2 z^{(j)} - t^{(j-1)}, z^{(j)}; F]$ and obtain the following variable parameter:
$$
B^{(j)} = -[2 z^{(j)} - t^{(j-1)}, z^{(j)}; F]^{-1}. \tag{24}
$$
Using (2) and (3), we get
$$
\begin{aligned}
[2 z^{(j)} - t^{(j-1)}, z^{(j)}; F] &= F'(z^{(j)}) + \frac{1}{2} F''(z^{(j)})\,h + \frac{1}{6} F'''(z^{(j)})\,h^2 + O(h^3)\\
&= F'(\zeta)\bigl[I + 3 A_2 \varepsilon^{(j)} - A_2 \varepsilon_t^{(j-1)} - 5 A_3 \varepsilon^{(j)} \varepsilon_t^{(j-1)} + A_3 (\varepsilon_t^{(j-1)})^2 + 4 A_3 (\varepsilon^{(j)})^2\bigr] + O_3(\varepsilon_t^{(j-1)}, \varepsilon^{(j)}),
\end{aligned} \tag{25}
$$
where $h = \varepsilon^{(j)} - \varepsilon_t^{(j-1)}$.
From (25), we obtain
$$
\begin{aligned}
B^{(j)} = -\bigl[I &+ A_2 \varepsilon_t^{(j-1)} - 3 A_2 \varepsilon^{(j)} + (9 A_2^2 - 5 A_3)\,\varepsilon_t^{(j-1)} \varepsilon^{(j)} + (9 A_2^2 - 4 A_3)(\varepsilon^{(j)})^2\\
&- (3 A_2^2 + A_3)(\varepsilon_t^{(j-1)})^2\bigr] F'(\zeta)^{-1} + O_3(\varepsilon_t^{(j-1)}, \varepsilon^{(j)}),
\end{aligned} \tag{26}
$$
and
$$
I + B^{(j)} F'(\zeta) \sim -A_2 \varepsilon_t^{(j-1)} \sim -A_2^2 \bigl(I + B^{(j-1)} F'(\zeta)\bigr)(\varepsilon^{(j-1)})^2. \tag{27}
$$
Scheme 4.
The first-order divided difference operator $[3 z^{(j)} - 2 t^{(j-1)}, z^{(j)}; F]$ can be constructed by using $t^{(j-1)}$ and $z^{(j)}$; then we obtain the following variable parameter:
$$
B^{(j)} = -[3 z^{(j)} - 2 t^{(j-1)}, z^{(j)}; F]^{-1}. \tag{28}
$$
Using (2) and (3), we have
$$
\begin{aligned}
[3 z^{(j)} - 2 t^{(j-1)}, z^{(j)}; F] &= F'(z^{(j)}) + F''(z^{(j)})\,h + \frac{2}{3} F'''(z^{(j)})\,h^2 + O(h^3)\\
&= F'(\zeta)\bigl[I + 4 A_2 \varepsilon^{(j)} - 2 A_2 \varepsilon_t^{(j-1)} - 14 A_3 \varepsilon^{(j)} \varepsilon_t^{(j-1)}\\
&\qquad\; + 4 A_3 (\varepsilon_t^{(j-1)})^2 + 13 A_3 (\varepsilon^{(j)})^2\bigr] + O_3(\varepsilon_t^{(j-1)}, \varepsilon^{(j)}),
\end{aligned} \tag{29}
$$
where $h = 2\bigl(\varepsilon^{(j)} - \varepsilon_t^{(j-1)}\bigr)$.
From (29), we get
$$
\begin{aligned}
B^{(j)} = -\bigl[I &+ 2 A_2 \varepsilon_t^{(j-1)} - 4 A_2 \varepsilon^{(j)} + (16 A_2^2 - 14 A_3)\,\varepsilon_t^{(j-1)} \varepsilon^{(j)}\\
&+ (16 A_2^2 - 13 A_3)(\varepsilon^{(j)})^2 + (4 A_2^2 - 4 A_3)(\varepsilon_t^{(j-1)})^2\bigr] F'(\zeta)^{-1} + O_3(\varepsilon_t^{(j-1)}, \varepsilon^{(j)}),
\end{aligned} \tag{30}
$$
and
$$
I + B^{(j)} F'(\zeta) \sim -2 A_2 \varepsilon_t^{(j-1)} \sim -2 A_2^2 \bigl(I + B^{(j-1)} F'(\zeta)\bigr)(\varepsilon^{(j-1)})^2. \tag{31}
$$
Scheme 5.
Using (11) and (29), we obtain
$$
\begin{aligned}
\frac{[3 z^{(j)} - 2 t^{(j-1)}, z^{(j)}; F] + [2 t^{(j-1)} - z^{(j)}, z^{(j)}; F]}{2} = F'(\zeta)\bigl[I &+ 2 A_2 \varepsilon^{(j)} + 7 A_3 (\varepsilon^{(j)})^2\\
&+ 4 A_3 (\varepsilon_t^{(j-1)})^2 - 8 A_3 \varepsilon_t^{(j-1)} \varepsilon^{(j)}\bigr] + O_3(\varepsilon_t^{(j-1)}, \varepsilon^{(j)}). 
\end{aligned} \tag{32}
$$
Using (32), we design the following variable parameter
$$
\begin{aligned}
B^{(j)} &= -\left(\frac{[3 z^{(j)} - 2 t^{(j-1)}, z^{(j)}; F] + [2 t^{(j-1)} - z^{(j)}, z^{(j)}; F]}{2}\right)^{-1}\\
&= -\bigl[I - 2 A_2 \varepsilon^{(j)} + (4 A_2^2 - 7 A_3)(\varepsilon^{(j)})^2 - 4 A_3 (\varepsilon_t^{(j-1)})^2 + 8 A_3 \varepsilon_t^{(j-1)} \varepsilon^{(j)}\bigr] F'(\zeta)^{-1} + O_3(\varepsilon_t^{(j-1)}, \varepsilon^{(j)}),
\end{aligned} \tag{33}
$$
and
$$
I + B^{(j)} F'(\zeta) \sim 2 A_2 \varepsilon^{(j)}. \tag{34}
$$
Scheme 6.
Using (11) and (25), we get
$$
B^{(j)} = -\left(\frac{[2 t^{(j-1)} - z^{(j)}, z^{(j)}; F] + 2\,[2 z^{(j)} - t^{(j-1)}, z^{(j)}; F]}{3}\right)^{-1} \tag{35}
$$
and
$$
I + B^{(j)} F'(\zeta) \sim 2 A_2 \varepsilon^{(j)}. \tag{36}
$$
Scheme 7.
Using (20) and (25), we design
$$
B^{(j)} = -\left(\frac{[2 t^{(j-1)} - z^{(j)}, t^{(j-1)}; F] + 3\,[2 z^{(j)} - t^{(j-1)}, z^{(j)}; F]}{4}\right)^{-1} \tag{37}
$$
and
$$
I + B^{(j)} F'(\zeta) \sim 2 A_2 \varepsilon^{(j)}. \tag{38}
$$
The first-order divided difference operators (11), (21), (25) and (29) are called Kurchatov-type divided difference operators. Replacing the parameter $B$ of method (6) with Schemes 1–7, respectively, we get the following seven new iterative methods with memory:
$$
\begin{aligned}
s^{(j)} &= z^{(j)} - [2 t^{(j-1)} - z^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
t^{(j)} &= z^{(j)} - [s^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
z^{(j+1)} &= t^{(j)} - [s^{(j)}, t^{(j)}; F]^{-1} F(t^{(j)}).
\end{aligned} \tag{39}
$$
$$
\begin{aligned}
s^{(j)} &= z^{(j)} - [2 t^{(j-1)} - z^{(j)}, t^{(j-1)}; F]^{-1} F(z^{(j)}),\\
t^{(j)} &= z^{(j)} - [s^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
z^{(j+1)} &= t^{(j)} - [s^{(j)}, t^{(j)}; F]^{-1} F(t^{(j)}).
\end{aligned} \tag{40}
$$
$$
\begin{aligned}
s^{(j)} &= z^{(j)} - [2 z^{(j)} - t^{(j-1)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
t^{(j)} &= z^{(j)} - [s^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
z^{(j+1)} &= t^{(j)} - [s^{(j)}, t^{(j)}; F]^{-1} F(t^{(j)}).
\end{aligned} \tag{41}
$$
$$
\begin{aligned}
s^{(j)} &= z^{(j)} - [3 z^{(j)} - 2 t^{(j-1)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
t^{(j)} &= z^{(j)} - [s^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
z^{(j+1)} &= t^{(j)} - [s^{(j)}, t^{(j)}; F]^{-1} F(t^{(j)}).
\end{aligned} \tag{42}
$$
$$
\begin{aligned}
s^{(j)} &= z^{(j)} - \left(\frac{[3 z^{(j)} - 2 t^{(j-1)}, z^{(j)}; F] + [2 t^{(j-1)} - z^{(j)}, z^{(j)}; F]}{2}\right)^{-1} F(z^{(j)}),\\
t^{(j)} &= z^{(j)} - [s^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
z^{(j+1)} &= t^{(j)} - [s^{(j)}, t^{(j)}; F]^{-1} F(t^{(j)}).
\end{aligned} \tag{43}
$$
$$
\begin{aligned}
s^{(j)} &= z^{(j)} - \left(\frac{[2 t^{(j-1)} - z^{(j)}, z^{(j)}; F] + 2\,[2 z^{(j)} - t^{(j-1)}, z^{(j)}; F]}{3}\right)^{-1} F(z^{(j)}),\\
t^{(j)} &= z^{(j)} - [s^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
z^{(j+1)} &= t^{(j)} - [s^{(j)}, t^{(j)}; F]^{-1} F(t^{(j)}).
\end{aligned} \tag{44}
$$
$$
\begin{aligned}
s^{(j)} &= z^{(j)} - \left(\frac{[2 t^{(j-1)} - z^{(j)}, t^{(j-1)}; F] + 3\,[2 z^{(j)} - t^{(j-1)}, z^{(j)}; F]}{4}\right)^{-1} F(z^{(j)}),\\
t^{(j)} &= z^{(j)} - [s^{(j)}, z^{(j)}; F]^{-1} F(z^{(j)}),\\
z^{(j+1)} &= t^{(j)} - [s^{(j)}, t^{(j)}; F]^{-1} F(t^{(j)}).
\end{aligned} \tag{45}
$$
The iterative process of the new methods (39)–(45) can be converted to solving linear systems. For example, method (39) can be written as
$$
\begin{aligned}
[2 t^{(j-1)} - z^{(j)}, z^{(j)}; F]\,\gamma_1 &= -F(z^{(j)}), & s^{(j)} &= z^{(j)} + \gamma_1,\\
[s^{(j)}, z^{(j)}; F]\,\gamma_2 &= -F(z^{(j)}), & t^{(j)} &= z^{(j)} + \gamma_2,\\
[s^{(j)}, t^{(j)}; F]\,\gamma_3 &= -F(t^{(j)}), & z^{(j+1)} &= t^{(j)} + \gamma_3.
\end{aligned} \tag{46}
$$
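A minimal sketch of method (39) in the linear-system form (46) could look as follows. This is our illustration, not the authors' code: it reuses the helpers defined earlier, uses $B^{(0)} = I$ for the first iteration (as in Section 3), and replaces the paper's 2048-digit stopping criterion by a double-precision tolerance.

```python
def method_39(F, z0, tol=1e-12, max_iter=50):
    """Sketch of method (39) with memory, written as the linear systems (46)."""
    z = np.asarray(z0, dtype=float)
    t_prev = None                                        # no previous t at the first step
    for _ in range(max_iter):
        if t_prev is None:
            s = z + F(z)                                 # s = z + B^(0) F(z) with B^(0) = I
        else:
            K = divided_difference(F, 2.0 * t_prev - z, z)   # Kurchatov-type operator
            s = z + np.linalg.solve(K, -F(z))            # K gamma_1 = -F(z), s = z + gamma_1
        t = z + np.linalg.solve(divided_difference(F, s, z), -F(z))
        z_new = t + np.linalg.solve(divided_difference(F, s, t), -F(t))
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        t_prev, z = t, z_new
    return z
```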
Remark 1.
Schemes 1–4 have the same error relations, so methods (39)–(42) have the same convergence order. Schemes 5–7 are different schemes with the same error relation, so methods (43)–(45) have the same convergence order.
The convergence orders of the new methods (39)–(45) are analyzed in the following result.
Theorem 1.
Let $\zeta \in \mathbb{R}^n$ be a zero of a sufficiently differentiable function $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ in an open neighborhood $D$ of $\zeta$. Suppose that the initial guess $z^{(0)}$ is close enough to $\zeta$. Then, the iterative methods (39)–(42) have convergence order $2 + \sqrt{5} \approx 4.236$, and the order of convergence of methods (43)–(45) is 5.
Proof. 
Let $h^{(j)} = I + B^{(j)} F'(\zeta)$ in (19); then
$$
\begin{aligned}
h^{(j)} &\sim 2 A_2^2 \bigl(I + B^{(j-1)} F'(\zeta)\bigr)(\varepsilon^{(j-1)})^2 \sim 2 A_2^2\, h^{(j-1)} (\varepsilon^{(j-1)})^2\\
&\sim 4 A_2^4\, h^{(j-2)} (\varepsilon^{(j-2)})^2 (\varepsilon^{(j-1)})^2\\
&\;\;\vdots\\
&\sim 2^j A_2^{2j}\, h^{(0)} (\varepsilon^{(0)})^2 (\varepsilon^{(1)})^2 (\varepsilon^{(2)})^2 \cdots (\varepsilon^{(j-2)})^2 (\varepsilon^{(j-1)})^2.
\end{aligned} \tag{47}
$$
Suppose that the iterative sequence $\{z^{(j)}\}$ satisfies the following error relation:
$$
\varepsilon^{(j+1)} \sim D_{j+1} (\varepsilon^{(0)})^{r_{j+1}}, \qquad 0 \le j \le n, \tag{48}
$$
where $\varepsilon^{(0)} = z^{(0)} - \zeta$, $\varepsilon^{(k+1)} = z^{(k+1)} - \zeta$ and $D_{j+1}$ is an asymptotic error constant.
From (7), (47) and (48), we get
$$
\begin{aligned}
\varepsilon^{(j+1)} &\sim A_2^2 \bigl(I + B^{(j)} F'(\zeta)\bigr)^2 (\varepsilon^{(j)})^3\\
&\sim A_2^2 (h^{(j)})^2 (\varepsilon^{(j)})^3\\
&\sim 2^{2j} A_2^{4j+2} (h^{(0)})^2 (\varepsilon^{(0)})^4 \bigl(D_1 (\varepsilon^{(0)})^{r_1}\bigr)^4 \bigl(D_2 (\varepsilon^{(0)})^{r_2}\bigr)^4 \cdots \bigl(D_{j-2} (\varepsilon^{(0)})^{r_{j-2}}\bigr)^4 \bigl(D_{j-1} (\varepsilon^{(0)})^{r_{j-1}}\bigr)^4 \bigl(D_j (\varepsilon^{(0)})^{r_j}\bigr)^3.
\end{aligned} \tag{49}
$$
Comparing the exponents of $\varepsilon^{(0)}$ in (48) and (49), we get
$$
r_{j+1} = 4 + 4 r_1 + 4 r_2 + \cdots + 4 r_{j-2} + 4 r_{j-1} + 3 r_j. \tag{50}
$$
From (50), we have
$$
r_{j+1} = 4 r_j + r_{j-1}. \tag{51}
$$
Letting $\lim_{j \to \infty} (r_{j+1}/r_j) = \lim_{j \to \infty} (r_j/r_{j-1}) = R$ and dividing (51) by $r_j$, we get
$$
R = 4 + \frac{1}{R}. \tag{52}
$$
Equation (52) is equivalent to $R^2 - 4R - 1 = 0$, whose positive root is $R = 2 + \sqrt{5}$. Therefore, method (39) with memory has order $R = 2 + \sqrt{5} \approx 4.236$. The variable parameters (10), (20), (24) and (28) have the same error relation, so methods (39)–(42) have the same convergence order.
From (7) and (34), we get
$$
\varepsilon^{(j+1)} \sim A_2^2 \bigl(I + B^{(j)} F'(\zeta)\bigr)^2 (\varepsilon^{(j)})^3 \sim 4 A_2^4 (\varepsilon^{(j)})^5. \tag{53}
$$
Therefore, method (43) with memory has convergence order five. The variable parameters of Schemes 5–7 have the same error relation, so methods (43)–(45) have the same convergence order. □
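As a quick numerical illustration of the recurrence (51) (our own check, separate from the proof), iterating it shows the ratio $r_{j+1}/r_j$ approaching $2 + \sqrt{5} \approx 4.236$ for positive starting values:

```python
# Recurrence (51): r_{j+1} = 4 r_j + r_{j-1}; the ratio tends to 2 + sqrt(5).
r_prev, r = 1.0, 3.0      # illustrative seeds (the basic scheme (6) has order 3)
for _ in range(10):
    r_prev, r = r, 4.0 * r + r_prev
print(r / r_prev)         # 4.2360679..., i.e. 2 + sqrt(5)
```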

3. Numerical Results

Our methods (39)–(45) are compared with Petković's method (5) and Chicharro's method (8) for solving nonlinear systems and ODEs. The numerical experiments are performed in Maple 14 with 2048-digit arithmetic. The stopping criterion $\|z^{(j)} - z^{(j-1)}\| < 10^{-100}$ is used in the numerical algorithms. The initial parameter $B^{(0)}$ is the identity matrix.
Table 1, Table 2, Table 3 and Table 4 give the numerical results with the following information: NI is the number of iterations, EF is the function value at the last step, EV is the error value $\|z^{(j)} - z^{(j-1)}\|$, e-Time is the CPU time (in seconds) and ACOC [20] is the approximated computational order of convergence. Figure 1, Figure 2, Figure 3 and Figure 4 show the iterative processes of the different methods for solving the nonlinear systems.
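For reference, ACOC can be estimated from the last few iterates using consecutive differences; the sketch below is our own rendering of that standard estimate, not code from the paper, and it is reliable only when the arithmetic precision is high enough (the paper works with 2048 digits).

```python
def acoc(iterates):
    """Approximated computational order of convergence from the last
    four iterates z^(j-3), ..., z^(j), using consecutive differences."""
    z3, z2, z1, z0 = (np.asarray(v, dtype=float) for v in iterates[-4:])
    e0 = np.linalg.norm(z0 - z1)   # ||z^(j)   - z^(j-1)||
    e1 = np.linalg.norm(z1 - z2)   # ||z^(j-1) - z^(j-2)||
    e2 = np.linalg.norm(z2 - z3)   # ||z^(j-2) - z^(j-3)||
    return np.log(e0 / e1) / np.log(e1 / e2)
```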
Example 1.
$$
\frac{1}{2}\sum_{\substack{j=1 \\ j \neq i}}^{15} z_j^{2} + \arctan z_i = 0, \qquad i = 1, 2, \ldots, 15.
$$
The solution $\zeta \approx \{0.2074, \ldots, 0.2074\}^T$ is obtained with the initial guess $z^{(0)} = \{0.038, \ldots, 0.038\}^T$.
The iterative processes of the different methods for solving Example 1 are shown in Figure 1. Figure 1 shows that our method (44) has higher computational accuracy than the other methods.
Example 2.
$$
z_i - \cos\Bigl(2 z_i - \sum_{j=1}^{15} z_j\Bigr) = 0, \qquad i = 1, 2, \ldots, 15,
$$
The solution $\zeta \approx \{0.939822, 0.939822, \ldots, 0.939822\}^T$ is obtained with the initial guess $z^{(0)} = \{0.5, 0.5, \ldots, 0.5\}^T$.
The iterative processes of the different methods for solving Example 2 are shown in Figure 2. Figure 2 shows that our method (44) has higher computational accuracy than the other methods. Methods (41) and (42) have similar convergence behavior.
Example 3.
Boundary-value problem [21]:
$$
u''(z) + e^{u(z)} = 0, \quad z \in [0, 1], \qquad u(0) = 0, \quad u(1) = 1.
$$
Using the finite difference method, the second derivative in this problem is discretized as
$$
u''_j = \frac{u_{j+1} - 2 u_j + u_{j-1}}{h^2}, \qquad j = 1, 2, 3, \ldots, n-1,
$$
where the interval $[0, 1]$ is divided into $n$ subintervals with endpoints $0 = z_0 < z_1 < \cdots < z_{n-1} < z_n = 1$. The partition is uniform, that is, $\Delta z_j = h = 1/n$ for all $j$. We obtain the following nonlinear system:
$$
u_{j-1} - 2 u_j + u_{j+1} + h^2 e^{u_j} = 0, \qquad j = 1, 2, 3, \ldots, n-1.
$$
For $n = 6$, the solution $\{0.07748, 0.12494, 0.14093, \ldots, 0.07748\}^T$ is found with the initial value $z^{(0)} = (0.3, \ldots, 0.3)^T$. The numerical results are displayed in Table 3.
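For illustration, the discretized system for Example 3 can be assembled as a residual function and passed to any of the methods sketched above. This code is our own assumption-laden sketch: the boundary values are taken from the stated problem and n = 6 matches the experiment.

```python
def make_bvp_residual(n=6, ua=0.0, ub=1.0):
    """Residual F(u) of the discretized BVP of Example 3 for the n-1
    interior unknowns u_1, ..., u_{n-1}."""
    h = 1.0 / n
    def F(u):
        full = np.concatenate(([ua], u, [ub]))     # attach the boundary values
        return full[:-2] - 2.0 * full[1:-1] + full[2:] + h**2 * np.exp(full[1:-1])
    return F

F3 = make_bvp_residual(n=6)
u0 = 0.3 * np.ones(5)            # initial guess (0.3, ..., 0.3)^T as stated above
print(method_39(F3, u0))         # discrete solution at the interior nodes
```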
The iterative processes of the different methods for solving Example 3 are shown in Figure 3. Figure 3 shows that our method (44) has higher computational accuracy than the other methods. Methods (39), (40) and (41) have similar convergence behavior for Example 3.
Example 4.
Boundary-value problem [22]:
$$
u''(z) - u(z)^3 - \sin\bigl(u'(z)^2\bigr) = 0, \quad z \in [0, 1], \qquad u(0) = 0, \quad u(1) = 1.
$$
The first derivative is discretized by
$$
u'_j = \frac{u_{j+1} - u_{j-1}}{2h}, \qquad j = 1, 2, 3, \ldots, n-1.
$$
Using the same discretization as in Example 3, we get the following nonlinear system:
$$
u_{j-1} - 2 u_j + u_{j+1} - h^2 u_j^3 - h^2 \sin\!\left(\Bigl(\frac{u_{j-1} - u_{j+1}}{2h}\Bigr)^{2}\right) = 0, \qquad j = 1, 2, 3, \ldots, n-1.
$$
For $n = 8$, the solution $\{0.0846, 0.1767, 0.2776, \ldots, 0.8159\}^T$ is found with the initial guess $z^{(0)} = (0.97, \ldots, 0.97)^T$. Table 4 shows the numerical results.
Table 1, Table 2, Table 3 and Table 4 show that our iterative methods (43)–(45) with memory are superior to Petković's method (5) and Chicharro's method (8) with memory in terms of convergence order, and that iterative methods (40) and (41) cost less computing time than the other methods. Methods (43)–(45) have similar computational accuracy, so methods (43) and (45) are omitted from Figure 1, Figure 2, Figure 3 and Figure 4. Figure 1, Figure 2, Figure 3 and Figure 4 show that our method (44) has higher computational accuracy than the other methods.
The iterative processes of the different methods for solving Example 4 are shown in Figure 4. Figure 4 shows that our method (44) has higher computational accuracy than the other methods.

4. Conclusions

In this paper, we proposed four new Kurchatov-type first-order divided difference operators. Using these operators, we designed new accelerating parameters and constructed seven derivative-free iterative methods with memory for solving nonlinear systems. The local convergence order of Chicharro's method without memory (6) was improved from 3 to $2 + \sqrt{5} \approx 4.236$ and to 5, respectively. Numerical results support the theoretical results. We note that the main objective of this paper was to develop high-order methods and prove the local convergence order of the new methods. The initial approximation must be close enough to the zero of the nonlinear function. If the initial approximation is far from the zero, then the iterative sequence generated by the method converges slowly or diverges. Therefore, the choice of good initial approximations is very important for iterative methods. Some strategies for finding sufficiently good initial approximations have been proposed [23,24,25]. Finding good initial approximations for multipoint iterative methods needs further research.

Author Contributions

Methodology, X.W.; writing—original draft preparation, X.W., Y.Z.; writing—review and editing, Y.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No. 61976027), the Open Project of Key Laboratory of Mathematical College of Chongqing Normal University (No. CSSXKFKTM202005), Educational Commission Foundation of Liaoning Province of China (Nos. LJ2019010, LJ2019011), National Natural Science Foundation of Liaoning Province (No. 2019-ZD-0502), University-Industry Collaborative Education Program (Nos. 201901077017, 201902014012, 201902184038), LiaoNing Revitalization Talents Program (No. XLYC2008002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
2. Torres-Hernandez, A.; Brambila-Paz, F.; Iturrarán-Viveros, U.; Caballero-Cruz, R. Fractional Newton–Raphson method accelerated with Aitken's method. Axioms 2021, 10, 47.
3. Gdawiec, K.; Kotarski, W.; Lisowska, A. Newton's method with fractional derivatives and various iteration processes via visual analysis. Numer. Algorithms 2021, 86, 953–1010.
4. Akgül, A.; Cordero, A.; Torregrosa, J.R. A fractional Newton method with 2αth-order of convergence and its stability. Appl. Math. Lett. 2019, 98, 344–351.
5. Cordero, A.; Girona, I.; Torregrosa, J.R. A variant of Chebyshev's method with 3αth-order of convergence by using fractional derivatives. Symmetry 2019, 11, 1017.
6. Behl, R.; Bhalla, S.; Magreñán, Á.A.; Kumar, S. An efficient high order iterative scheme for large nonlinear systems with dynamics. J. Comput. Appl. Math. 2020, 113249.
7. Geum, Y.H.; Kim, Y.I.; Magreñán, Á.A. A biparametric extension of King's fourth-order methods and their dynamics. Appl. Math. Comput. 2016, 282, 254–275.
8. Schwandt, H. A symmetric iterative interval method for systems of nonlinear equations. Computing 1984, 33, 153–164.
9. Barco, M.A.; Prince, G.E. New symmetry solution techniques for first-order non-linear PDEs. Appl. Math. Comput. 2001, 124, 169–196.
10. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Hoboken, NJ, USA, 1964.
11. Grau-Sánchez, M.; Grau, À.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385.
12. Petković, M.S.; Sharma, J.R. On some efficient derivative-free iterative methods with memory for solving systems of nonlinear equations. Numer. Algorithms 2016, 71, 457–474.
13. Ahmad, F.; Soleymani, F.; Haghani, F.K.; Serra-Capizzano, S. Higher order derivative-free iterative methods with and without memory for systems of nonlinear equations. Appl. Math. Comput. 2017, 314, 199–211.
14. Kansal, M.; Cordero, A.; Bhalla, S.; Torregrosa, J.R. Memory in a new variant of King's family for solving nonlinear systems. Mathematics 2020, 8, 1251.
15. Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Akad. Nauk SSSR 1971, 198, 524–526.
16. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the improvement of the order of convergence of iterative methods for solving nonlinear systems by means of memory. Appl. Math. Lett. 2020, 104, 106277.
17. Cordero, A.; Soleymani, F.; Torregrosa, J.R.; Khaksar Haghani, F. A family of Kurchatov-type methods and its stability. Appl. Math. Comput. 2017, 294, 264–279.
18. Argyros, I.K.; Ren, H. On the Kurchatov method for solving equations under weak conditions. Appl. Math. Comput. 2016, 273, 98–113.
19. Candela, V.; Peris, R. A class of third order iterative Kurchatov–Steffensen (derivative-free) methods for solving nonlinear equations. Appl. Math. Comput. 2019, 350, 93–104.
20. Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
21. Ahmad, F.; Rehman, S.U.; Ullah, M.Z.; Aljahdali, H.M.; Ahmad, S.; Alshomrani, A.S.; Carrasco, J.A.; Ahmad, S.; Sivasankaran, S. Frozen Jacobian multistep iterative method for solving nonlinear IVPs and BVPs. Complexity 2017, 2017, 9407656.
22. Narang, M.; Bhatia, S.; Kanwar, V. New efficient derivative free family of seventh-order methods for solving systems of nonlinear equations. Numer. Algorithms 2017, 76, 283–307.
23. Petković, M.S.; Yun, B.I. Sigmoid-like functions and root finding methods. Appl. Math. Comput. 2008, 204, 784–793.
24. Yun, B.I. A non-iterative method for solving nonlinear equations. Appl. Math. Comput. 2008, 198, 691–699.
25. Yun, B.I. Iterative methods for solving nonlinear equations with finitely many roots in an interval. J. Comput. Appl. Math. 2012, 236, 3308–3318.
Figure 1. Iterative processes of different methods for Example 1.
Figure 2. Iterative processes of different methods for Example 2.
Figure 3. Iterative processes of different methods for Example 3.
Figure 4. Iterative processes of different methods for Example 4.
Table 1. Convergence behavior of iterative methods for Example 1.

Methods   NI   EV                EF                 ACOC      e-Time
(5)       7    4.577 × 10^-103   1.648 × 10^-461    4.22419   15.537
(8)       6    6.536 × 10^-182   5.328 × 10^-726    3.97864   15.428
(39)      6    5.041 × 10^-102   6.013 × 10^-427    4.23649   15.943
(40)      7    1.645 × 10^-313   1.562 × 10^-1322   4.23601   18.111
(41)      6    9.817 × 10^-191   3.228 × 10^-803    4.23669   14.180
(42)      6    1.114 × 10^-214   3.614 × 10^-904    4.23381   15.319
(43)      6    5.202 × 10^-262   1.988 × 10^-1304   5.00000   20.280
(44)      6    6.070 × 10^-262   4.302 × 10^-1304   5.00000   20.623
(45)      6    5.833 × 10^-262   3.526 × 10^-1304   5.00000   19.000
Table 2. Convergence behavior of iterative methods for Example 2.

Methods   NI   EV                EF                 ACOC      e-Time
(5)       12   5.059 × 10^-417   3.088 × 10^-1759   4.23598   39.998
(8)       7    3.408 × 10^-299   3.497 × 10^-1191   3.99832   12.776
(39)      7    2.323 × 10^-288   9.941 × 10^-1214   4.23562   14.242
(40)      7    2.484 × 10^-298   1.108 × 10^-1255   4.23561   14.851
(41)      6    8.864 × 10^-134   2.159 × 10^-559    4.24093   10.966
(42)      6    1.014 × 10^-130   5.896 × 10^-546    4.23542   10.764
(43)      7    7.681 × 10^-176   2.609 × 10^-870    4.99965   25.256
(44)      6    3.898 × 10^-472   1.470 × 10^-2046   5.00000   14.492
(45)      6    1.791 × 10^-386   1.796 × 10^-1923   5.00000   13.135
Table 3. Convergence behavior of iterative methods for Example 3.

Methods   NI   EV                EF                 ACOC      e-Time
(5)       4    9.478 × 10^-125   1.079 × 10^-530    4.23909   1.154
(8)       5    6.828 × 10^-371   1.072 × 10^-1483   4.03695   1.669
(39)      4    1.172 × 10^-102   1.181 × 10^-436    4.20358   1.294
(40)      4    6.850 × 10^-101   7.002 × 10^-429    4.20758   1.372
(41)      4    1.353 × 10^-105   1.369 × 10^-449    4.19837   1.372
(42)      4    1.329 × 10^-102   2.014 × 10^-436    4.20446   1.357
(43)      4    3.818 × 10^-143   2.243 × 10^-718    5.00784   1.700
(44)      4    1.427 × 10^-143   1.637 × 10^-720    5.00409   1.794
(45)      4    1.841 × 10^-143   5.854 × 10^-720    5.00505   1.762
Table 4. Convergence behavior of iterative methods for Example 4.

Methods   NI   EV                EF                 ACOC      e-Time
(5)       8    1.932 × 10^-132   3.291 × 10^-559    4.26779   9.750
(8)       6    6.690 × 10^-282   1.315 × 10^-1002   3.54607   9.094
(39)      5    4.488 × 10^-111   4.446 × 10^-468    4.32150   7.410
(40)      5    4.697 × 10^-102   1.137 × 10^-429    4.23216   7.488
(41)      5    5.905 × 10^-128   4.505 × 10^-540    4.24952   7.534
(42)      5    5.803 × 10^-102   2.515 × 10^-429    4.27379   7.566
(43)      5    1.956 × 10^-146   6.081 × 10^-730    5.04097   9.687
(44)      5    3.845 × 10^-167   1.974 × 10^-833    5.05028   9.672
(45)      5    1.104 × 10^-121   7.583 × 10^-517    4.29476   9.703
