Article

On a Bi-Parametric Family of Fourth Order Composite Newton–Jarratt Methods for Nonlinear Systems

by
Janak Raj Sharma
1,*,
Deepak Kumar
1,
Ioannis K. Argyros
2 and
Ángel Alberto Magreñán
3
1
Department of Mathematics, Sant Longowal Institute of Engineering and Technology Longowal, Sangrur 148106, India
2
Department of Mathematics Sciences, Cameron University, Lawton, OK 73505, USA
3
Departamento de Matemáticas y Computación, Universidad de La Rioja, 26004 Logroño, La Rioja, Spain
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(6), 492; https://doi.org/10.3390/math7060492
Submission received: 8 May 2019 / Revised: 23 May 2019 / Accepted: 24 May 2019 / Published: 29 May 2019
(This article belongs to the Special Issue Computational Methods in Analysis and Applications)

Abstract:
We present a new two-parameter family of fourth-order iterative methods for solving systems of nonlinear equations. The scheme is composed of two Newton–Jarratt steps and requires the evaluation of one function and two first derivatives in each iteration. Convergence analysis, including the order of convergence, the radius of convergence, and error bounds, is presented. Theoretical results are verified through numerical experimentation. Stability of the proposed class is analyzed by means of a new dynamics tool, namely, the convergence plane. Performance is exhibited by implementing the methods on nonlinear systems of equations, including one resulting from the discretization of a boundary value problem. In addition, numerical comparisons are made with existing techniques of the same order. The results show the better performance of the proposed techniques over the existing ones.
MSC:
65H10; 65J10; 65Y20; 41A25

1. Introduction

The construction of fixed-point methods for nonlinear equations is an interesting and important task in numerical analysis and many applied scientific disciplines. This has led to the development of many numerical methods, frequently of an iterative nature. In the past few decades, iterative techniques have been applied in many diverse fields such as economics, engineering, physics, and dynamical models.
In this paper, we consider the problem of finding a solution of the nonlinear equation $F(x) = 0$, where $F: \Omega \subseteq B \to B$, $B$ is a Banach space and $\Omega$ a non-empty convex open set, by iterative methods of convergence order four. The solution vector (say, $x^*$) can be computed as a fixed point of some function $M: \Omega \subseteq B \to B$ by using the fixed-point iteration
$$x_0 \in \Omega, \qquad x_{n+1} = M(x_n), \quad n \geq 0.$$
Newton’s method [1,2,3] is the most widely used technique for solving nonlinear equations. This method has quadratic convergence under the conditions that the function F is continuously differentiable and a close initial estimate x 0 is given. We define this method as
x n + 1 = x n F ( x n ) 1 F ( x n ) ,
where F ( x ) 1 is the inverse of the Frèchet-derivative F ( x ) .
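For readers who want to experiment, the iteration is a handful of lines of dense linear algebra. The sketch below is our own minimal illustration (the helper name and the 2×2 test system are not from the paper); as is standard practice, it solves a linear system at each step rather than forming the inverse explicitly:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0, where J(x) is the Jacobian of F.
    Solves J(x) dx = F(x) at each step instead of inverting J(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), F(x))
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Illustrative system: x^2 + y^2 = 4, xy = 1, with a close initial estimate
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [v[1], v[0]]])
root = newton_system(F, J, [2.0, 0.5])
```

Starting from $(2, 0.5)$, the iterates converge quadratically to the nearby solution of the system.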
Based on Newton’s or Newton-like iterations, many higher order methods have been proposed in the literature. For example, Cordero-Torregrosa [4], Frontini-Sormani [5], Grau et al. [6], Homeier [7] and Noor-Waseem [8] have proposed third-order methods where each requires three evaluations per iteration namely, one F and two F . Cordero-Torregrosa also introduced two third-order methods in [4], one method requires the evaluations of one F and three F , while the other requires one F and four F . Darvishi-Barati [9] have derived a third-order method that utilizes two F and one F . Babajee et al. [10] have presented a fourth order method requiring one F and two F . In [11], Cordero et al. implemented fourth order Jarratt’s method [12] for scalar equations to systems of equations, that uses one F and two F . Cordero et al. also presented a fourth order method requiring two F and two F in [13]. Darvishi-Barati developed a fourth order method in [14] which requires two F and three F . Grau et al. developed a fourth order method in [6] that uses three F and one F per iteration. Grau et al. [15] also generalized the fourth order Ostrowski-like [16] and Ostrowski’s [3] methods to systems of nonlinear equations each of which utilizes two F, one F and one divided difference per iteration. In [17], Hueso et al. presented a fourth order method which requires one F and two F . Noor and Noor in [18] developed a fourth order method that uses three F and one F . Sharma and Arora proposed a fourth order method in [19] requiring one F and two F in each iteration. Apart from these third and fourth-order methods, researchers have also proposed some higher order methods, see, for example [11,13,15,17,20,21,22,23,24] and references therein.
One of the important problems related to iterative methods is the study of their complex dynamics. This study allows us to find the best members of a family of iterative methods in terms of stability, and once those members have been found, we can identify the points that converge to any of the roots. That is why the study of the dynamical behavior of a family of iterative methods is so important. The study of the complex dynamics of iterative methods has historic importance, since Schröder in 1870 and later Cayley in 1879 proposed studying Newton's method in order to find the solutions of an equation in the complex plane. One of the most famous problems in this area is known as Cayley's problem, in which Cayley tried to characterize the basins of attraction of each of the roots of a cubic polynomial under Newton's method. He first characterized the basins of attraction for a quadratic polynomial, which was easy since the Julia set is the perpendicular bisector of the segment joining the two roots. In the cubic case, he was not able to find such a characterization. Nowadays, with the use of computers and symbolic and numerical software, we can draw those basins, and we can appreciate how difficult it would be to draw them without computers. In recent times, many authors have analyzed the complex dynamics of iteration functions in their work; see [17,25,26,27,28] and references therein.
In this paper, we present a two-parameter family of iterative methods with fourth-order convergence. The scheme is composed of two steps and requires the evaluation of one function and two first-order derivatives in each iteration. For a particular set of parameter values, Jarratt's method [11] is a special case of the family. In order to make the paper self-contained, we provide a local convergence analysis including the convergence radius, error bounds, and estimates on the uniqueness of the solution. Moreover, the stability of the methods is analyzed and presented by means of dynamical tools, namely, the convergence plane and basins of attraction. The efficiency is demonstrated by implementing the methods on several systems of nonlinear equations of different nature.
We summarize the contents of the paper. Section 2 contains the development of the family of methods along with fourth order of convergence. The local convergence of methods is presented in Section 3. The dynamical behavior of the family is analyzed in Section 4. In Section 5, numerical examples are considered to verify the theoretical results. In Section 6 the methods are applied to solve systems of nonlinear equations of different nature. Concluding remarks are given in Section 7.

2. Development of Method

Let us consider the scheme
$$z_n = x_n - \frac{2}{3}F'(x_n)^{-1}F(x_n),$$
$$x_{n+1} = x_n - F'(x_n)^{-1}F(x_n) + F'(x_n)^{-1}D_n C_n^{-1}F(x_n),$$
where $C_n = rF'(x_n) + tF'(z_n)$, $D_n = C_n - \frac{1}{2}F'(x_n)A_n^{-1}B_n$, $A_n = 3(3r+t)F'(z_n) - (3r+5t)F'(x_n)$, $B_n = (9r^2 + 30rt + 5t^2)F'(z_n) + (3r^2 - 22rt - 9t^2)F'(x_n)$, and $r$ and $t$ are arbitrary constants. This scheme uses a Jarratt step as the first step and a Newton–Jarratt composition in the second step (see [11]). For this reason, we shall call it the composite Newton–Jarratt scheme.
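Written out for the dense case, one iteration of the scheme amounts to a few linear solves. The following is our own illustrative sketch (the function name and the 2×2 test system are hypothetical, not from the paper), transcribing the formulas above directly:

```python
import numpy as np

def composite_newton_jarratt(F, J, x0, r, t, n_iter=10):
    """Bi-parametric composite Newton-Jarratt scheme; requires
    3r - t != 0 and r + t != 0.  F: R^m -> R^m, J(x) its Jacobian."""
    assert 3*r - t != 0 and r + t != 0
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Fx, Jx = F(x), J(x)
        newton = np.linalg.solve(Jx, Fx)           # F'(x_n)^{-1} F(x_n)
        z = x - (2.0/3.0) * newton                 # Jarratt step
        Jz = J(z)
        A = 3.0*(3*r + t)*Jz - (3*r + 5*t)*Jx
        B = (9*r*r + 30*r*t + 5*t*t)*Jz + (3*r*r - 22*r*t - 9*t*t)*Jx
        C = r*Jx + t*Jz
        D = C - 0.5 * Jx @ np.linalg.solve(A, B)   # C_n - (1/2) F'(x_n) A_n^{-1} B_n
        x = x - newton + np.linalg.solve(Jx, D @ np.linalg.solve(C, Fx))
    return x

# Illustrative 2x2 system: x^2 + y^2 = 4, xy = 1, with the member r = 1, t = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [v[1], v[0]]])
x_star = composite_newton_jarratt(F, J, [2.0, 0.5], r=1.0, t=1.0)
```

Note that $D_n$ vanishes for linear $F$, so the second step then reduces to a pure Newton step; the $D_n C_n^{-1}$ term is a higher-order correction.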
We introduce some known notations and results [11], which are needed to obtain the convergence order of the new method. Let $F: \Omega \subseteq \mathbb{R}^m \to \mathbb{R}^m$ be sufficiently differentiable in $\Omega$. The $q$th derivative of $F$ at $u \in \mathbb{R}^m$, $q \geq 1$, is the $q$-linear function $F^{(q)}(u): \mathbb{R}^m \times \cdots \times \mathbb{R}^m \to \mathbb{R}^m$ such that $F^{(q)}(u)(v_1, \ldots, v_q) \in \mathbb{R}^m$. It can easily be seen that
(i)
$F^{(q)}(u)(v_1, \ldots, v_{q-1}, \cdot) \in \mathcal{L}(\mathbb{R}^m)$,
(ii)
$F^{(q)}(u)(v_{\sigma(1)}, \ldots, v_{\sigma(q)}) = F^{(q)}(u)(v_1, \ldots, v_q)$ for every permutation $\sigma$ of $\{1, 2, \ldots, q\}$.
From the above expressions, we can use the following notation:
(a)
$F^{(q)}(u)(v_1, \ldots, v_q) = F^{(q)}(u)\,v_1 \cdots v_q$,
(b)
$F^{(q)}(u)v^{q-1}\,F^{(p)}(u)v^p = F^{(q)}(u)F^{(p)}(u)\,v^{q+p-1}$.
On the other hand, assuming that the Jacobian matrix $F'(x^*)$ is nonsingular, we can apply Taylor's expansion for $x^* + h \in \mathbb{R}^m$ lying in a neighborhood of a solution $x^*$ of $F(x) = 0$, that is,
$$F(x^* + h) = F'(x^*)\Big[h + \sum_{q=2}^{p-1} K_q h^q\Big] + O(h^p),$$
where $K_q = \frac{1}{q!}F'(x^*)^{-1}F^{(q)}(x^*)$, $q \geq 2$. Notice that $K_q h^q \in \mathbb{R}^m$, since $F^{(q)}(x^*) \in \mathcal{L}(\mathbb{R}^m \times \cdots \times \mathbb{R}^m, \mathbb{R}^m)$ and $F'(x^*)^{-1} \in \mathcal{L}(\mathbb{R}^m)$. Also, we have that
$$F'(x^* + h) = F'(x^*)\Big[I + \sum_{q=2}^{p-1} qK_q h^{q-1}\Big] + O(h^{p-1}),$$
where $I$ is the identity matrix. Therefore, $qK_q h^{q-1} \in \mathcal{L}(\mathbb{R}^m)$. From (5), we obtain
$$F'(x^* + h)^{-1} = \big[I + X_2 h + X_3 h^2 + X_4 h^3 + \cdots\big]F'(x^*)^{-1} + O(h^p),$$
where
$$X_2 = -2K_2, \quad X_3 = 4K_2^2 - 3K_3, \quad X_4 = -8K_2^3 + 6K_2K_3 + 6K_3K_2 - 4K_4, \ \ldots$$
Let $e_n = x_n - x^*$ be the error in the $n$th iteration; then the equation
$$e_{n+1} = M(e_n)^p + O\big((e_n)^{p+1}\big),$$
where $M$ is a $p$-linear function, $M \in \mathcal{L}(\mathbb{R}^m \times \cdots \times \mathbb{R}^m, \mathbb{R}^m)$, is called the error equation and $p$ is the order of convergence. Note that $(e_n)^p$ stands for $(e_n, e_n, \ldots, e_n)$ ($p$ times).
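In practice the order $p$ is typically estimated from three consecutive error norms via the computational order of convergence, a standard device in this literature (the sketch below is our own illustration, with synthetic error data rather than data from the paper):

```python
import numpy as np

def coc(errors):
    """Computational order of convergence from successive error norms
    e_n = ||x_n - x*||:  rho_n = ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

# Synthetic errors obeying e_{n+1} = e_n^4 yield an estimated order of 4
e = [1e-1]
for _ in range(3):
    e.append(e[-1] ** 4)
orders = coc(e)
```

For a genuinely fourth-order method, the estimates approach 4 as the iterates enter the region of fast convergence.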
We show that scheme (3) has fourth order of convergence with the help of the following theorem:
Theorem 1.
Let $F: \Omega \subseteq \mathbb{R}^m \to \mathbb{R}^m$ be a sufficiently many times differentiable mapping. Moreover, suppose that there exists a solution $x^*$ of $F(x) = 0$ such that $F'(x^*)$ is invertible. Then, the sequence $\{x_n\}$ generated by method (3) for $x_0 \in \Omega$ converges to $x^*$ with convergence order four, provided that $3r - t \neq 0$ and $r + t \neq 0$.
Proof. 
Let $e_n = x_n - x^*$ for $n = 0, 1, 2, \ldots$ and $\Gamma = F'(x^*)^{-1}$. Then the Taylor expansion of $F(x_n)$ in a neighborhood of $x^*$ yields
$$F(x_n) = F'(x^*)\big(e_n + K_2(e_n)^2 + K_3(e_n)^3 + K_4(e_n)^4 + O((e_n)^5)\big).$$
Also,
$$F'(x_n) = F'(x^*)\big(I + 2K_2 e_n + 3K_3(e_n)^2 + 4K_4(e_n)^3 + O((e_n)^4)\big).$$
Inversion of $F'(x_n)$ yields
$$F'(x_n)^{-1} = \big(I - 2K_2 e_n + (4K_2^2 - 3K_3)(e_n)^2 - (4K_4 - 6K_2K_3 - 6K_3K_2 + 8K_2^3)(e_n)^3 + O((e_n)^4)\big)\,\Gamma.$$
Let $\bar e_n = z_n - x^*$ be the local error corresponding to the first step of method (3); then
$$\bar e_n = \frac{1}{3}e_n + \frac{2K_2}{3}(e_n)^2 - \frac{4}{3}(K_2^2 - K_3)(e_n)^3 + \frac{2}{3}(4K_2^3 - 4K_2K_3 - 3K_3K_2 + 3K_4)(e_n)^4 + O((e_n)^5).$$
Expanding $F'(z_n)$ about $x^*$ by Taylor expansion, we obtain
$$F'(z_n) = F'(x^*)\big(I + 2K_2\bar e_n + 3K_3(\bar e_n)^2 + 4K_4(\bar e_n)^3 + O((\bar e_n)^4)\big).$$
Using (10) and (13), we obtain
A n = 3 ( 3 r + t ) F ( z n ) ( 3 r + 5 t ) F ( x n ) = F ( x * ) ( 2 ( 3 r t ) I 8 t K 2 e n + 3 ( 3 r + 5 t ) K 3 + ( 3 r + t ) ( 4 K 2 2 + K 3 ) ( e n ) 2 + 4 ( 3 r + 5 t ) K 4 + ( 3 r + 1 ) ( 8 K 2 3 + 9 K 2 K 3 + 3 K 3 K 2 + 4 9 K 4 ) ( e n ) 3 + O ( ( e n ) 4 ) ) .
Inversion of Equation (14) yields
A n 1 = ( 1 2 ( 3 r t ) I + 2 t K 2 ( 3 r + t ) 2 e n 1 2 ( 3 r t ) 3 ( 18 r 2 18 t 2 ) K 2 2 ( 9 r 2 + 18 r t 7 t 2 ) K 3 ( e n ) 2 + 1 9 ( 3 r + t ) 4 ( 54 ( 9 r 3 15 r 2 t r t 2 + 7 t 3 ) K 2 3 9 ( 81 r 3 63 r 2 t 81 r t 2 + 31 t 3 ) K 3 K 2 + 4 ( 3 r + t ) 2 ( 6 r + 11 t ) K 4 ) ( e n ) 3 + O ( ( e n ) 4 ) ) Γ .
It follows from (10) and (13) that
B n = ( 9 r 2 + 30 r t + 5 t 2 ) F ( z n ) + ( 3 r 2 22 r t 9 t 2 ) F ( x n ) = F ( x * ) ( 4 ( 3 r 2 + 2 r t t 2 ) I + 4 3 ( 9 r 2 18 r t 11 t 2 ) K 2 e n + ( 3 ( 3 r 2 22 r t 9 t 2 ) K 3 + 1 3 ( 9 r 2 + 30 r t + 5 t 2 ) ( 4 K 2 2 + K 3 ) ) ( e n ) 2 + ( 4 ( 3 r 2 22 r t 9 t 2 ) K 4 + ( 9 r 2 + 30 r t + 5 t 2 ) 8 3 K 2 3 + 3 K 2 K 3 + K 3 K 2 + 4 27 K 4 ( e n ) 3 + O ( ( e n ) 4 ) )
and
C n = r F ( x n ) + t F ( z n ) = F ( x * ) ( ( r + t ) I + ( 2 r + 2 3 t ) K 2 e n + ( ( 3 r K 3 + 1 3 t ( 4 K 2 2 + K 3 ) ( e n ) 2 + 4 2 3 t K 2 3 + t K 2 K 3 + r K 4 + t K 4 ( e n ) 3 + O ( ( e n ) 4 ) ) .
Then
C n 1 = ( 1 r + t I 2 3 ( 3 r + t ) K 2 ( r + t ) 2 e n + 1 9 ( r + t ) 3 ( 36 r 2 + 12 r t 8 t 2 ) K 2 2 ( 27 r 2 + 30 r t + 3 t 2 ) K 3 ( e n ) 2 + 4 27 ( r + t ) 4 ( ( 54 r 3 66 r t 2 28 t 3 ) K 2 3 3 ( 27 r 3 + 30 r 2 t 5 r t 2 8 t 3 ) K 3 K 2 + ( r + t ) 2 ( 27 r + t ) K 4 ) ( e n ) 3 + O ( ( e n ) 4 ) ) Γ .
By using (10), (15), (16) and (17) in the second step of method (3), we obtain the error equation
$$e_{n+1} = \frac{1}{9(3r-t)(r+t)}\Big[3(9r^2 - 18rt - 11t^2)K_2^3 - 9(3r^2 + 2rt - t^2)K_3K_2 + (3r^2 + 2rt - t^2)K_4\Big](e_n)^4 + O((e_n)^5).$$
This shows that the proposed scheme has order of convergence four, provided that $3r - t \neq 0$ and $r + t \neq 0$. □
Below are some concrete methods of the proposed family (3):
(i)
The set of values
$$r = -\frac{t}{3},\ t \neq 0, \qquad \text{or} \qquad r = C,\ t = 0,\ C \in \mathbb{R}\setminus\{0\},$$
yields the fourth-order generalized Jarratt method
$$z_n = x_n - \frac{2}{3}F'(x_n)^{-1}F(x_n),$$
$$x_{n+1} = \begin{cases} x_n - \frac{1}{2}F'(x_n)^{-1}\big[3F'(z_n) + F'(x_n)\big]\big[3F'(z_n) - F'(x_n)\big]^{-1}F(x_n), & \text{for } r = -\frac{t}{3},\ t \neq 0, \\[4pt] x_n - \frac{1}{2}\big[3F'(z_n) - F'(x_n)\big]^{-1}\big[3F'(z_n) + F'(x_n)\big]F'(x_n)^{-1}F(x_n), & \text{for } r = C,\ t = 0,\ C \in \mathbb{R}\setminus\{0\}, \end{cases}$$
(see [11]) which, from now on, is denoted by JM.
(ii)
When $r = 5$ and $t = -1$, we obtain the following method:
$$z_n = x_n - \frac{2}{3}F'(x_n)^{-1}F(x_n),$$
$$x_{n+1} = x_n - 4\big[5F'(x_n) - 21F'(z_n)\big]^{-1}\big[5F'(z_n) + 11F'(x_n)\big]\big[F'(z_n) - 5F'(x_n)\big]^{-1}F(x_n),$$
which is denoted as MI.
(iii)
For $r = 1$ and $t = -5$, we obtain the method
$$z_n = x_n - \frac{2}{3}F'(x_n)^{-1}F(x_n),$$
$$x_{n+1} = x_n + 4\big[11F'(x_n) - 3F'(z_n)\big]^{-1}\big[F'(z_n) + 7F'(x_n)\big]\big[F'(x_n) - 5F'(z_n)\big]^{-1}F(x_n),$$
which, from now on, is denoted by MII.
(iv)
For $r = 1$ and $t = 1$, the method is given by
$$z_n = x_n - \frac{2}{3}F'(x_n)^{-1}F(x_n),$$
$$x_{n+1} = x_n - \frac{1}{2}\big[3F'(z_n) - 2F'(x_n)\big]^{-1}\big[11F'(z_n) - 7F'(x_n)\big]\big[F'(x_n) + F'(z_n)\big]^{-1}F(x_n),$$
which is denoted as MIII.
(v)
For $r = 5$ and $t = 3$, the following method is obtained:
$$z_n = x_n - \frac{2}{3}F'(x_n)^{-1}F(x_n),$$
$$x_{n+1} = x_n - 4\big[9F'(z_n) - 5F'(x_n)\big]^{-1}\big[15F'(z_n) - 7F'(x_n)\big]\big[3F'(z_n) + 5F'(x_n)\big]^{-1}F(x_n),$$
which is denoted as MIV.
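As a quick check that these members behave as fourth-order methods, one can apply, say, MIII to a scalar equation and watch the error collapse in a handful of steps (our own illustration; the function names and the test equation are not from the paper):

```python
import numpy as np

def miii_step(F, J, x):
    """One step of MIII (r = t = 1), using the explicit formula above."""
    Fx, Jx = F(x), J(x)
    z = x - (2.0/3.0) * np.linalg.solve(Jx, Fx)
    Jz = J(z)
    return x - 0.5 * np.linalg.solve(
        3.0*Jz - 2.0*Jx, (11.0*Jz - 7.0*Jx) @ np.linalg.solve(Jx + Jz, Fx))

# Scalar test problem f(x) = x^3 - 2 with root 2^(1/3)
f = lambda x: np.array([x[0]**3 - 2.0])
df = lambda x: np.array([[3.0 * x[0]**2]])
x = np.array([1.0])
for _ in range(4):
    x = miii_step(f, df, x)
```

Starting from $x_0 = 1$, the error drops roughly as $e_{n+1} \sim e_n^4$, reaching machine precision within four iterations.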
In addition, to compare the above five methods, we use the computational cost, measured by $C = d + op$ (see [20]), where $d$ is the number of function evaluations per iteration and $op$ denotes the number of operations (e.g., products and quotients) needed per iteration. The evaluations and operations that contribute to the total computational cost for a system of $m$ nonlinear equations in $m$ unknowns are as follows. When calculating $F$ in any iterative method, we evaluate $m$ scalar functions $f_i$ ($1 \leq i \leq m$). In order to compute an inverse linear operator, a linear system is solved, which requires $m(m-1)(2m-1)/6$ products and $m(m-1)/2$ quotients in the LU decomposition process, whereas $m(m-1)$ products and $m$ quotients are needed in the resolution of the two triangular linear systems. We also add $m^2$ products for the multiplication of a matrix with a vector.
Let us denote the computational costs of the methods by $C_{JM}$, $C_{MI}$, $C_{MII}$, $C_{MIII}$ and $C_{MIV}$. Taking the above considerations into account, the computational costs of the methods are expressed as:
$$C_{JM} = \frac{2}{3}m^3 + 6m^2 + \frac{1}{3}m; \qquad C_{MI} = C_{MII} = C_{MIII} = C_{MIV} = m^3 + 6m^2.$$
We observe that the cost of JM is the lowest among the methods, and the relation between the computational costs for $m > 1$ is
$$C_{JM} < C_{MI} = C_{MII} = C_{MIII} = C_{MIV}.$$
Notice also that for $m = 1$ the computational cost of all the methods is the same, that is,
$$C_{JM} = C_{MI} = C_{MII} = C_{MIII} = C_{MIV}.$$
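These cost counts are easy to tabulate exactly; a small sketch using rational arithmetic (the function names are ours):

```python
from fractions import Fraction

def cost_jm(m):
    """C_JM = (2/3) m^3 + 6 m^2 + (1/3) m."""
    return Fraction(2, 3)*m**3 + 6*m**2 + Fraction(1, 3)*m

def cost_others(m):
    """C_MI = C_MII = C_MIII = C_MIV = m^3 + 6 m^2."""
    return m**3 + 6*m**2

# Equal cost for scalar equations (m = 1), JM strictly cheaper for m > 1
equal_at_one = (cost_jm(1) == cost_others(1) == 7)
jm_cheaper = all(cost_jm(m) < cost_others(m) for m in range(2, 100))
```

The gap comes from the cubic terms: $(2/3)m^3 + (1/3)m < m^3$ whenever $m(m^2 - 1) > 0$, i.e., for every $m > 1$.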
So far we have studied the local convergence of method (3) for solving systems of $m$ equations in $m$ unknowns. This is called the well-constrained case. The method cannot be used when the number of unknowns is not equal to the number of equations, since the formulas used to establish the order of convergence do not hold in this setting. Moreover, according to (18), the fourth order of convergence cannot be attained if $3r = t$ or $r = -t$, and it deteriorates as $t$ approaches $3r$ or $r$ approaches $-t$. It is worth mentioning that another way of looking for "all" solutions within a domain is to construct a box in $\mathbb{R}^m$ and use subdivision solvers. We refer the interested reader to the excellent expositions by Sosin and Elber [29], Aizenshtein et al. [30] and Bartoň [31]. Two more problems exist with method (3). First, the convergence is shown using derivatives up to the fifth order, which do not appear in the method, limiting its applicability. Second, the method is restricted to $\mathbb{R}^m$. Next, we study the local convergence using only the first derivative and in the more general setting of Banach space valued operators.

3. Local Convergence in Banach Space

In this section $F: \Omega \subseteq B \to B_1$, where $B$ and $B_1$ are Banach spaces and $\Omega$ is a non-empty convex open set. We shall utilize some parameters and real functions in the local convergence analysis of method (3) that follows. Let $\phi_0: [0, +\infty) \to [0, +\infty)$ be a continuous and nondecreasing function satisfying $\phi_0(0) = 0$. Define the parameter $\bar\varrho$ by
$$\bar\varrho = \sup\{s \in [0, \infty): \phi_0(s) < 1\}.$$
Let also $\phi: [0, \bar\varrho) \to [0, +\infty)$ and $\psi: [0, \bar\varrho) \to [0, +\infty)$ be continuous and nondecreasing functions satisfying $\phi(0) = 0$. Define functions $\lambda_1$ and $\mu_1$ on the interval $[0, \bar\varrho)$ by
$$\lambda_1(s) = \frac{1}{1 - \phi_0(s)}\left[\int_0^1 \phi((1-\theta)s)\,d\theta + \frac{1}{3}\int_0^1 \psi(\theta s)\,d\theta\right]$$
and
$$\mu_1(s) = \lambda_1(s) - 1.$$
Suppose that
$$\frac{\psi(0)}{3} < 1$$
and
$$\mu_1(s) \to \text{a positive number or } +\infty \text{ as } s \to \bar\varrho^-.$$
We have
$$\mu_1(0) = \frac{\psi(0)}{3} - 1 < 0.$$
Then, by (25)–(27) and the intermediate value theorem, the equation $\mu_1(s) = 0$ has solutions in $(0, \bar\varrho)$. Denote by $s_1$ the smallest such solution. Define functions $P_1$ and $q_1$ on the interval $[0, \bar\varrho)$ by
$$P_1(s) = \frac{1}{2|3r - t|}\big[3|3r + t|\,\phi_0(\lambda_1(s)s) + |3r + 5t|\,\phi_0(s)\big] \quad \text{for } 3r - t \neq 0$$
and
$$q_1(s) = P_1(s) - 1.$$
Notice that $q_1(0) = -1$. Suppose that
$$q_1(s) \to \text{a positive number or } +\infty \text{ as } s \to \bar\varrho^-.$$
Denote by $s_{q_1}$ the smallest solution of the equation $q_1(s) = 0$ in $(0, \bar\varrho)$. Define functions $P_2$ and $q_2$ on the interval $[0, \bar\varrho)$ by
$$P_2(s) = \frac{1}{|r + t|}\big[|r|\,\phi_0(s) + |t|\,\phi_0(\lambda_1(s)s)\big] \quad \text{for } r + t \neq 0$$
and
$$q_2(s) = P_2(s) - 1.$$
Notice that $q_2(0) = -1$. Suppose that
$$q_2(s) \to \text{a positive number or } +\infty \text{ as } s \to \bar\varrho^-.$$
Denote by $s_{q_2}$ the smallest solution of the equation $q_2(s) = 0$ in $(0, \bar\varrho)$. Moreover, define the function $\phi_1$ on the interval $[0, s_{q_1})$ by
$$\phi_1(s) = |r|\psi(s) + |t|\psi(\lambda_1(s)s) + \frac{\psi(s)}{4|3r - t|(1 - P_1(s))}\big[|9r^2 + 30rt + 5t^2|\,\psi(\lambda_1(s)s) + |3r^2 - 22rt - 9t^2|\,\psi(s)\big].$$
Notice that $\phi_1(s) \geq 0$ for each $s \in [0, s_{q_1})$. Furthermore, define functions $\lambda_2$ and $\mu_2$ on the interval $[0, \varrho)$ by
$$\lambda_2(s) = \lambda_0(s) + \frac{\phi_1(s)\int_0^1 \psi(\theta s)\,d\theta}{(1 - \phi_0(s))\,|r + t|\,(1 - P_2(s))}$$
and
$$\mu_2(s) = \lambda_2(s) - 1,$$
where $\varrho = \min\{s_{q_1}, s_{q_2}\}$ and $\lambda_0(s) = \dfrac{\int_0^1 \phi((1-\theta)s)\,d\theta}{1 - \phi_0(s)}$.
Set
$$\alpha = \frac{1}{|r + t|}\big[(|r| + |t|)\psi(0) + \beta\big]\psi(0),$$
where
$$\beta = \frac{\psi(0)^2}{4|3r - t|}\big[|9r^2 + 30rt + 5t^2| + |3r^2 - 22rt - 9t^2|\big].$$
Suppose that
$$\alpha < 1$$
and
$$\mu_2(s) \to \text{a positive number or } +\infty \text{ as } s \to \varrho^-.$$
We have by (30) that $\mu_2(0) < 0$. Denote by $s_2$ the smallest solution of the equation $\mu_2(s) = 0$ in $(0, \varrho)$. Finally, define the radius of convergence $s^*$ by
$$s^* = \min\{s_i\}, \quad i = 1, 2.$$
Then, we have for each $s \in [0, s^*)$ and $i = 1, 2$ that
$$0 \leq \lambda_i(s) < 1,$$
$$0 \leq P_i(s) < 1$$
and
$$0 \leq \phi_1(s).$$
Some alternatives to the aforementioned conditions are the following: the equation
$$\phi_0(s) = 1$$
has positive solutions. Denote by $\bar\varrho$ the smallest such solution. The functions $\phi: [0, \bar\varrho) \to [0, +\infty)$ and $\psi: [0, \bar\varrho) \to [0, +\infty)$ are continuous and increasing with $\phi(0) = 0$. Then, we have for each $s \in [0, \bar\varrho)$ that
$$0 \leq \phi_0(s) < 1,$$
and (26), (28), (29) and (31) hold. Let $U(y, a) = \{x \in B: \|x - y\| < a\}$ stand for the open ball in $B$ of center $y \in B$ and radius $a > 0$. Denote by $\bar U(y, a)$ the closure of $U(y, a)$.
The local convergence analysis of method (3) is based on the conditions (A) as follows:
(a1)
$F: \Omega \subseteq B \to B_1$ is continuously Fréchet-differentiable, and there exists $x^* \in \Omega$ such that $F(x^*) = 0$ and $F'(x^*)^{-1} \in \mathcal{L}(B_1, B)$.
(a2)
There exists a function $\phi_0: [0, +\infty) \to [0, +\infty)$, continuous and nondecreasing with $\phi_0(0) = 0$, such that for each $x \in \Omega$
$$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \leq \phi_0(\|x - x^*\|).$$
Set
$$\Omega_0 = \Omega \cap U(x^*, \bar\varrho),$$
where $\bar\varrho$ is given in (24).
(a3)
There exist functions $\phi: [0, \bar\varrho) \to [0, +\infty)$ and $\psi: [0, \bar\varrho) \to [0, +\infty)$, continuous and nondecreasing with $\phi(0) = 0$, such that for each $x, y \in \Omega_0$
$$\|F'(x^*)^{-1}(F'(x) - F'(y))\| \leq \phi(\|x - y\|)$$
and
$$\|F'(x^*)^{-1}F'(x)\| \leq \psi(\|x - x^*\|).$$
(a4)
Conditions (25), (26), (28)–(31) hold, and $3r - t \neq 0$, $r + t \neq 0$.
(a5)
There exists $s^{**} \geq s^*$ such that $\int_0^1 \phi_0(\theta s^{**})\,d\theta < 1$. Set $\Omega_1 = \Omega \cap \bar U(x^*, s^{**})$.
The conditions (A), together with the preceding notation, are used to show the local convergence of method (3).
Theorem 2.
Suppose that the (A) conditions hold. Then, the sequence $\{x_n\}$ starting from $x_0 \in U(x^*, s^*) \setminus \{x^*\}$ is well defined, remains in $U(x^*, s^*)$ for each $n = 0, 1, 2, \ldots$, and converges to $x^*$, which is the unique solution of the equation $F(x) = 0$ in $\Omega_1$. Moreover, the following error bounds hold:
$$\|z_n - x^*\| \leq \lambda_1(\|x_n - x^*\|)\,\|x_n - x^*\| \leq \|x_n - x^*\| < s^*$$
and
$$\|x_{n+1} - x^*\| \leq \lambda_2(\|x_n - x^*\|)\,\|x_n - x^*\| \leq \|x_n - x^*\|,$$
where the functions $\lambda_i$, $i = 1, 2$, are defined previously.
Proof. 
Estimates (36) and (37) will be shown using mathematical induction. Let $x \in U(x^*, s^*) \setminus \{x^*\}$. Using (24), (32), (a1) and (a2), we have in turn that
$$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \leq \phi_0(\|x - x^*\|) \leq \phi_0(s^*) < 1.$$
The Banach lemma on invertible operators [1] and (38) guarantee the existence of $F'(x)^{-1} \in \mathcal{L}(B_1, B)$ with
$$\|F'(x_0)^{-1}F'(x^*)\| \leq \frac{1}{1 - \phi_0(\|x_0 - x^*\|)}.$$
We also have that $z_0$ is well defined by the first substep of method (3) for $n = 0$, and (39) holds for $x = x_0$ since $x_0 \in U(x^*, s^*)$. We can write by (a1)
$$F(x) = F(x) - F(x^*) = \int_0^1 F'(x^* + \theta(x - x^*))\,d\theta\,(x - x^*).$$
Notice that $\|x^* + \theta(x - x^*) - x^*\| = \theta\|x - x^*\| < s^*$, so $x^* + \theta(x - x^*) \in U(x^*, s^*)$ for each $\theta \in [0, 1]$. Then, by the second condition in (a3) and (40), we get in turn that
$$\|F'(x^*)^{-1}F(x)\| = \left\|\int_0^1 F'(x^*)^{-1}F'(x^* + \theta(x - x^*))\,d\theta\,(x - x^*)\right\| \leq \int_0^1 \psi(\theta\|x - x^*\|)\,d\theta\,\|x - x^*\|.$$
Clearly, (41) holds for x = x 0 . Then, by the first substep of method (3) (for n = 0 ), (32), (33) (for i = 1 ), the first condition in ( a 3 ) , (39) and (41), we obtain in turn that
z 0 x * = x 0 x * F ( x 0 ) 1 F ( x 0 ) + 1 3 F ( x 0 ) 1 F ( x 0 ) F ( x 0 ) 1 F ( x * ) 0 1 F ( x * ) 1 F ( x * + θ ( x 0 x * ) ) F ( x 0 ) ( x 0 x * ) d θ + 1 3 F ( x 0 ) 1 F ( x * ) F ( x * ) 1 F ( x 0 ) 1 1 ϕ 0 ( x 0 x * ) 0 1 ϕ ( 1 θ ) x 0 x * d θ + 1 3 0 1 ψ θ x 0 x * d θ x 0 x * = λ 1 ( x 0 x * ) x 0 x * x 0 x * < s * ,
which shows (36) for n = 0 and z 0 U ( x * , s * ) .
To establish the existence of x 1 , we need to show the invertibility of linear operators A 0 and C 0 . Indeed, using (34) (for i = 1 ), (42) and ( a 2 ) , we have in turn that
( 2 ( 3 r t ) F ( x * ) ) 1 A 0 ( 3 ( 3 r + t ) ( 3 r + 5 t ) ) F ( x * ) 1 2 | 3 r t | 3 | 3 r + t | F ( x * ) 1 F ( z 0 ) F ( x * ) + | 3 r + 5 t | F ( x * ) 1 F ( x 0 ) F ( x * ) 1 2 | 3 r t | 3 | 3 r + t | ϕ 0 ( z 0 x * ) + | 3 r + 5 t | ϕ 0 ( x 0 x * 1 2 | 3 r t | 3 | 3 r + t | ϕ 0 ( λ 1 ( x 0 x * ) x 0 x * ) + | 3 r + 5 t | ϕ 0 ( x 0 x * = P 1 ( x 0 x * ) P 1 ( s * ) < 1 ,
so
$$\|A_0^{-1}F'(x^*)\| \leq \frac{1}{2|3r - t|\,(1 - P_1(\|x_0 - x^*\|))}.$$
Similarly, we obtain in turn that
$$\|((r+t)F'(x^*))^{-1}\big(C_0 - (r+t)F'(x^*)\big)\| \leq \frac{1}{|r+t|}\big(|r|\,\|F'(x^*)^{-1}(F'(x_0) - F'(x^*))\| + |t|\,\|F'(x^*)^{-1}(F'(z_0) - F'(x^*))\|\big) \leq \frac{1}{|r+t|}\big[|r|\,\phi_0(\|x_0 - x^*\|) + |t|\,\phi_0(\lambda_1(\|x_0 - x^*\|)\|x_0 - x^*\|)\big] = P_2(\|x_0 - x^*\|) \leq P_2(s^*) < 1,$$
so
$$\|C_0^{-1}F'(x^*)\| \leq \frac{1}{|r+t|\,(1 - P_2(\|x_0 - x^*\|))}.$$
Hence, x 1 is well defined by the second substep of method (3) when n = 0 . We also need an estimate on F ( x ) 1 D 0 :
F ( x * ) 1 D 0 F ( x * ) 1 C 0 + 1 2 F ( x * ) 1 F ( x 0 ) A 0 1 F ( x * ) F ( x * ) 1 B 0 | r | ψ ( x 0 x * ) + | t | ψ ( z 0 x * ) + ψ ( x 0 x * ) 4 | 3 r t | ( 1 P 1 ( x 0 x * ) ) ( | 9 r 2 + 30 r t + 5 t 2 | ψ ( z 0 x * ) + | 3 r 2 22 r t 9 t 2 | ψ ( x 0 x * ) ) ϕ 1 ( x 0 x * ) .
Next, by (32), (33) (for i = 2 ), (39) (for x = x 0 ), (41) (for x = x 0 , x = z 0 ), the second substep of method (3), (42), (44), (46), and (47), we obtain in turn that
x 1 x * λ 0 ( x 0 x * ) x 0 x * + F ( x 0 ) 1 F ( x * ) F ( x * ) 1 D 0 C 0 1 F ( x * ) F ( x * ) 1 F ( x 0 ) λ 0 ( x 0 x * ) + ϕ 1 ( x 0 x * ) 0 1 ψ ( θ x 0 x * ) d θ 1 ϕ 0 ( x 0 x * ) ) | r + t | ( 1 P 2 ( x 0 x * ) ) x 0 x * = λ 2 ( x 0 x * ) x 0 x * x 0 x * < s * ,
which shows (37) for $n = 0$ and $x_1 \in U(x^*, s^*)$. The induction for (36) and (37) is completed by using $x_m, z_m, x_{m+1}$ in place of $x_0, z_0, x_1$ in the preceding estimates. It then follows from the estimate
$$\|x_{m+1} - x^*\| \leq c\,\|x_m - x^*\| < s^*,$$
where $c = \lambda_2(\|x_0 - x^*\|) \in [0, 1)$, that $\lim_{m \to \infty} x_m = x^*$ and $x_{m+1} \in U(x^*, s^*)$.
Finally, let $T = \int_0^1 F'(x^* + \theta(y^* - x^*))\,d\theta$ for some $y^* \in \Omega_1$ with $F(y^*) = 0$. Then, we can show the uniqueness of the solution $x^*$ in $\Omega_1$. By (a2) and (a5) we have in turn that
$$\|F'(x^*)^{-1}(T - F'(x^*))\| \leq \int_0^1 \phi_0(\theta\|x^* - y^*\|)\,d\theta \leq \int_0^1 \phi_0(\theta s^{**})\,d\theta < 1,$$
so $T^{-1} \in \mathcal{L}(B_1, B)$. In view of the identity
$$0 = F(y^*) - F(x^*) = T(y^* - x^*),$$
we deduce that $x^* = y^*$. □

4. Dynamical Study

Here, we study the dynamical behavior of the operator associated with the bi-parametric scheme (3). On the one hand, the basins of attraction show us all the starting points that converge to each root when we apply an iterative method, so we can see in a visual way which points are good choices as starting points and which are not. On the other hand, when we have a family of iterative methods, we cannot compute the basins of every member of the family, since there exist infinitely many members; instead, we use another tool, the parameter plane. With this tool, and using the definition of a critical point, we can characterize those members of the family that have good behavior in terms of stability and those that show complex behavior, such as convergence to extraneous fixed points, cycles, or even chaotic behavior. Once the members of the family with "good" and "bad" behavior have been found, we can draw the basins in order to measure the size of the "bad" zones. That is why the study of the dynamical behavior of a family of iterative methods is so important.
The basin of attraction of an attractor $x^*$ is defined as
$$\mathcal{A}(x^*) = \{z_0 \in \hat{\mathbb{C}}: R^n(z_0) \to x^*,\ n \to \infty\}.$$
The Fatou set of a rational function $R$, $\mathcal{F}(R)$, is defined as the set of points $z \in \hat{\mathbb{C}}$ whose orbits converge to an attractor. Its complement in $\hat{\mathbb{C}}$ is the Julia set, $\mathcal{J}(R)$. This means that the attraction basin of any fixed point belongs to the Fatou set, whereas the boundaries of these basins belong to the Julia set.
The complex dynamics of the family (3) is studied by applying the rational operator associated with the family to a generic polynomial $p(z) = (z - a)(z - b)$ and by using the Möbius map $h(z) = \frac{z - a}{z - b}$, whose properties are
$$i)\ h(\infty) = 1, \qquad ii)\ h(a) = 0, \qquad iii)\ h(b) = \infty.$$
The rational operator associated with the proposed family of iterative methods (3) applied to $p(z)$ is, after conjugation,
$$G(z, r, t) = (h \circ F \circ h^{-1})(z) = z^4\,\frac{(9r^2 + 6rt - 3t^2)z + 9r^2 - 18rt - 11t^2}{(9r^2 - 18rt - 11t^2)z + 9r^2 + 6rt - 3t^2}.$$
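A quick sanity check of this operator (using the coefficients as reconstructed above, and exact rational arithmetic so no rounding intervenes) confirms that $z = 0$, the image of the root $a$, and the strange fixed point $z = 1$ are indeed fixed:

```python
from fractions import Fraction

def G(z, r, t):
    """Rational operator of family (3) on p(z) = (z - a)(z - b),
    after conjugation by the Moebius map h(z) = (z - a)/(z - b)."""
    num = (9*r*r + 6*r*t - 3*t*t)*z + (9*r*r - 18*r*t - 11*t*t)
    den = (9*r*r - 18*r*t - 11*t*t)*z + (9*r*r + 6*r*t - 3*t*t)
    return z**4 * num / den

# e.g. for the member r = 1, t = 1:
fixed_origin = (G(Fraction(0), 1, 1) == 0)   # image of the root a
fixed_one = (G(Fraction(1), 1, 1) == 1)      # fixed point associated with infinity
```

Note also that swapping $z \mapsto 1/z$ exchanges the numerator and denominator coefficients, reflecting the symmetry $G(1/z) = 1/G(z)$ of the conjugated operator.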

4.1. Fixed Points and Stability

Note that $z = 0$ and $z = \infty$ are fixed points of $G(z, r, t)$ that are related to the roots of the polynomial $p(z)$. We focus our attention on the extraneous fixed points, i.e., points that are fixed points of $G$ but are not solutions of the equation $F(z) = 0$. Notice that $z = 1$ is an extraneous fixed point, associated with infinity. Moreover, we find the following extraneous fixed points:
$$ex_1(r, t) = -1, \qquad ex_{2,3}(r, t) = \frac{-9r^2 + 18rt + 11t^2 \mp \sqrt{-243r^4 - 756r^3t + 198r^2t^2 + 540rt^3 + 85t^4}}{6(3r^2 + 2rt - t^2)}.$$
Regarding the stability of these extraneous fixed points, we need $G'$, which is given as
G ( z , r , t ) = 12 z 3 27 r 4 ( z + 1 ) 2 36 r 3 t z 2 + z + 1 6 r 2 t 2 ( z ( 13 z 4 ) + 13 ) 4 r t 3 ( ( z 21 ) z + 1 ) + t 4 ( z ( 11 z + 34 ) + 11 ) 9 r 2 ( z + 1 ) + 6 r t ( 1 3 z ) t 2 ( 11 z + 3 ) 2 .
From the form of the derivative, it is clear that the origin and $\infty$ are superattracting fixed points for every value of $r$ and $t$.
Regarding the stability of the other fixed points, we obtain the multipliers associated with each of them:
G ( e x 1 ( r , t ) , r , t ) = 3 4 r 4 3 r + t + 1 t + 3 , G ( e x 2 ( r , t ) , r , t ) = 4 81 r 4 + 216 r 3 t 54 r 2 t 2 144 r t 3 19 t 4 9 ( t 3 r ) 2 ( r + t ) 2 , G ( e x 3 ( r , t ) , r , t ) = 4 81 r 4 + 216 r 3 t 54 r 2 t 2 144 r t 3 19 t 4 9 ( t 3 r ) 2 ( r + t ) 2 .
Due to the complexity of the stability functions of the extraneous fixed points, we use the graphical tools of the software Mathematica to obtain their regions of stability. In Figure 1 and Figure 2, the stability regions of $z = 1$ and $ex_2(r, t)$ (and also of $ex_3(r, t)$, since its behavior is similar to that of $ex_2(r, t)$, as both points are conjugated) are shown. Every value of the parameters shown in grey means that the associated fixed point is repulsive.

4.2. Critical Points and Parameter Spaces

Here, the critical points will be calculated and the convergence planes, which correspond to the parameter planes, associated to the free critical points will be shown. We obtain that z = 0 and z = are critical points. On the other hand, the free critical points are the following:
c r 0 = 1 ,
c r 1 ( r , t ) = 27 r 4 18 r 3 t + 12 r 2 t 2 + 2 t ( r t ) ( 3 r + t ) ( 3 r + 2 t ) 9 r 2 6 r t 7 t 2 3 r 2 + 14 r t + 3 t 2 + 42 r t 3 + 17 t 4 ( 3 r t ) ( r + t ) 9 r 2 18 r t 11 t 2
and
c r 2 ( r , t ) = 27 r 4 + 18 r 3 t 12 r 2 t 2 + 2 t ( r t ) ( 3 r + t ) ( 3 r + 2 t ) 9 r 2 6 r t 7 t 2 3 r 2 + 14 r t + 3 t 2 42 r t 3 17 t 4 ( 3 r t ) ( r + t ) 9 r 2 18 r t 11 t 2 .
We present the following results involving the free critical points.
Lemma 1.
(a)
If r = 1 3 ( 2 t ) or r = 1 3 2 2 t + t or r = 1 3 t 2 2 t , then c r 1 = c r 2 = 1 .
(b)
If r = 1 3 2 10 t 7 t or r = 1 3 2 10 t 7 t or t = 0 , then c r 1 = c r 2 = 1 .
(c)
For other values of r and t, the family has 2 free critical points.
Moreover, it is clear that for every value of $r$ and $t$, $cr_1(r, t) = \dfrac{1}{cr_2(r, t)}$.
We will only study the parameter planes associated with the other two free critical points, because the behavior of $cr_0(r, t) = -1$ is simple, due to the fact that it is a pre-image of the extraneous fixed point $z = 1$ related to the original divergence to $\infty$. In order to study the behavior of these free critical points, we draw the parameter planes using a variant of the algorithms proposed in [27], in which we take the horizontal axis as the $r$-axis and the vertical axis as the $t$-axis, and iterate the associated free critical point. In the parameter plane, convergence to 0 and to $\infty$ appears in cyan, convergence to 1 in yellow, and convergence to any point or cycle distinct from the roots, or even divergence or chaos, in black. Consequently, just by looking at the parameter plane, we can see which pairs $(r, t)$ will have desirable behavior in terms of stability: every point in cyan is a good choice.
In Figure 3, the parameter plane associated with $cr_2(r,t)$ is shown; some zones with anomalies are detected.
Now, we will draw some of the dynamical planes. These are drawn as follows: each point represents a point of the complex plane of the form z = x + iy; we iterate that point a maximum of 1000 times and draw convergence to 0 in magenta, convergence to ∞ in cyan, and non-convergence to the roots in black. First, Figure 4 and Figure 5 show two dynamical planes associated with non-convergent zones. On the other hand, Figure 6 shows a dynamical plane with regions of convergence to z = 1.
Finally, we show planes without any chaos. Figure 7 shows a dynamical plane free of non-convergence problems, whereas Figure 8 shows the plane associated with a special case of the family, Jarratt's method.

5. Numerical Examples

Here, we demonstrate the theoretical results of local convergence proved in Section 3. Results are also compared with those of Newton's method, whose radius of convergence $s_N$ is given by

$$s_N = \frac{2}{2L_0 + L},$$

where $L_0$ and $L$ are the given Lipschitz-type parameters (see [1,25]).
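As a quick sanity check, the formula above reproduces the $s_N$ column of Tables 1–3 from the constants computed in the three examples below:

```python
# Radius of convergence s_N for Newton's method from the Lipschitz-type
# parameters L0 and L, following the formula s_N = 2 / (2*L0 + L).

def newton_radius(L0, L):
    return 2.0 / (2.0 * L0 + L)

print(newton_radius(3.942631477, 3.942631477))  # Example 1: ~0.169092
print(newton_radius(7.5, 15.0))                 # Example 2: ~0.0666667
print(newton_radius(5.0 / 16.0, 5.0 / 16.0))    # Example 3: ~2.133333
```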
We consider three numerical examples.
Example 1.
Let us consider $B = \mathbb{R}^{m-1}$ for a natural number $m \ge 2$. B is equipped with the max-norm $\|x\| = \max_{1 \le i \le m-1} |x_i|$; the corresponding matrix norm is $\|A\| = \max_{1 \le i \le m-1} \sum_{j=1}^{m-1} |a_{ij}|$ for $A = (a_{ij})_{1 \le i,j \le m-1}$. Consider the following two-point boundary value problem on the interval [0, 1]:

$$v'' + v^{3/2} = 0, \quad v(0) = v(1) = 0.$$

Let us denote h = 1/m, $u_i = ih$ and $v_i = v(u_i)$ for each i = 0, 1, …, m. We discretize $v''$ at the points $u_i$ in the following form:

$$v_i'' \approx \frac{v_{i-1} - 2v_i + v_{i+1}}{h^2} \quad \text{for each } i = 1, 2, \ldots, m-1.$$

Using the boundary conditions in (54), we get $v_0 = v_m = 0$, and (54) is equivalent to the system of nonlinear equations F(v) = 0, where $v = (v_1, v_2, \ldots, v_{m-1})^T$ and

$$F(v) = \left(h^2 v_1^{3/2} - 2v_1 + v_2,\;\; v_{i-1} + h^2 v_i^{3/2} - 2v_i + v_{i+1}\right)^T \quad \text{for each } i = 2, 3, \ldots, m-1.$$

The Fréchet derivative of the operator F is the tridiagonal matrix

$$F'(v) = \begin{pmatrix} \frac{3}{2}h^2 v_1^{1/2} - 2 & 1 & 0 & \cdots & 0\\ 1 & \frac{3}{2}h^2 v_2^{1/2} - 2 & 1 & \cdots & 0\\ \vdots & \ddots & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & 1 & \frac{3}{2}h^2 v_{m-1}^{1/2} - 2 \end{pmatrix}.$$

We choose m = 11; the corresponding solution is $x^* = (0, 0, \ldots, 0)^T$. Then, we have that $\phi_0(t) = \phi(t) = L_0 t$ and $\psi(t) = Lt$, where $L_0 = L = 3.942631477$. The values of the parameters $s_N$, $s_1$, $s_2$ and $s_*$, computed from their definitions, are given in Table 1.
Thus the convergence of the method (3) is guaranteed, provided that x 0 U ( x * , s * ) .
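The discretized operator of Example 1 and its tridiagonal Fréchet derivative can be sketched directly; the snippet below assumes v ≥ 0 componentwise so that $v^{3/2}$ is real, and checks that the zero vector is indeed a root:

```python
import numpy as np

# Discretized operator F and its tridiagonal Frechet derivative for
# Example 1 (v'' + v^{3/2} = 0, v(0) = v(1) = 0) with m subintervals.

def F(v, m):
    h = 1.0 / m
    vp = np.concatenate(([0.0], v, [0.0]))      # boundary values v_0 = v_m = 0
    return vp[:-2] - 2.0 * vp[1:-1] + vp[2:] + h * h * vp[1:-1] ** 1.5

def Fprime(v, m):
    h = 1.0 / m
    n = len(v)
    A = np.zeros((n, n))
    i = np.arange(n)
    A[i, i] = 1.5 * h * h * np.sqrt(v) - 2.0    # diagonal: (3/2)h^2 v_i^{1/2} - 2
    A[i[:-1], i[:-1] + 1] = 1.0                 # superdiagonal
    A[i[1:], i[1:] - 1] = 1.0                   # subdiagonal
    return A

m = 11
v = np.zeros(m - 1)
print(F(v, m))              # zero residual: x* = (0, ..., 0)^T is the solution
print(Fprime(v, m)[0, :3])
```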
Example 2.
Let B = C[0, 1] be the space of continuous functions defined on [0, 1], equipped with the max-norm, and let $\Omega = \bar{U}(0, 1)$. Define F on Ω by

$$F(\varphi)(x) = \varphi(x) - 5\int_0^1 x\theta\, \varphi(\theta)^3\, d\theta.$$

We obtain that

$$F'(\varphi)(\xi)(x) = \xi(x) - 15\int_0^1 x\theta\, \varphi(\theta)^2 \xi(\theta)\, d\theta \quad \text{for each } \xi \in \Omega.$$

Then, we have that $\phi_0(t) = \phi(t) = L_0 t$ and $\psi(t) = Lt$, where $L_0 = 7.5$ and $L = 15$. Using the definitions of $s_N$, $s_1$, $s_2$ and $s_*$, the parameter values shown in Table 2 are obtained.
Therefore, the convergence of the method (3) to x * = 0 is guaranteed, provided that x 0 U ( x * , s * ) .
Example 3.
Consider the nonlinear integral equation of mixed Hammerstein type [32] defined by

$$x(s) = \int_0^1 G(s,t)\left(x(t)^{3/2} + \frac{x(t)^2}{2}\right) dt,$$

where G is the Green's function defined on the interval [0, 1] × [0, 1] by

$$G(s,t) = \begin{cases} (1-s)t, & t \le s,\\ s(1-t), & s \le t. \end{cases}$$

The solution of Equation (55) is $x^*(s) = 0$, where $F: \Omega \subseteq C[0,1] \to C[0,1]$ is defined by

$$F(x)(s) = x(s) - \int_0^1 G(s,t)\left(x(t)^{3/2} + \frac{x(t)^2}{2}\right) dt.$$

Observe that

$$\left\|\int_0^1 G(s,t)\, dt\right\| \le \frac{1}{8}.$$

Then, we have that

$$F'(x)y(s) = y(s) - \int_0^1 G(s,t)\left(\frac{3}{2}x(t)^{1/2} + x(t)\right) y(t)\, dt,$$

so, since $F'(x^*(s)) = I$,

$$\|F'(x^*)^{-1}(F'(x) - F'(y))\| \le \frac{5}{16}\|x - y\|.$$

Replacing y by $x_0$ in (56), we get that

$$\|F'(x^*)^{-1}(F'(x) - F'(x_0))\| \le \frac{5}{16}\|x - x_0\|.$$

Therefore, we have $\phi_0(t) = \phi(t) = L_0 t$ and $\psi(t) = Lt$, where $L_0 = L = \frac{5}{16}$.
Computed values of the parameters s N , s 1 , s 2 , and s * are displayed in Table 3.
Thus the convergence of the method (3) to x * ( s ) = 0 is guaranteed, provided that x 0 U ( x * , s * ) .
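The bound $\left\|\int_0^1 G(s,t)\,dt\right\| \le 1/8$ used in Example 3 is easy to check numerically: for fixed s the integral equals $s(1-s)/2$, whose maximum over [0, 1] is 1/8 at s = 1/2.

```python
# Numerical check of the bound on the integral of the Green's function.

def G(s, t):
    return (1.0 - s) * t if t <= s else s * (1.0 - t)

def integral_G(s, n=20000):
    h = 1.0 / n
    return sum(G(s, (k + 0.5) * h) for k in range(n)) * h   # midpoint rule

print(integral_G(0.5))    # ~0.125  = 1/8, the maximum
print(integral_G(0.25))   # ~0.09375 = s(1-s)/2 at s = 1/4
```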
We conclude that $s_N$ is larger than the new radii, as expected, since in general the radius of convergence decreases as the order of convergence increases.

6. Applications

The methods JM, MI, MII, MIII, and MIV of the family (3) are applied to solve different systems of nonlinear equations in $\mathbb{R}^m$, and are compared with some existing fourth-order methods. In particular, we choose the methods proposed by Babajee et al. [10], Cordero et al. [13], Darvishi and Barati [14], Hueso et al. [17], Noor and Noor [18], and Sharma and Arora [19], which are given as follows:
Method by Babajee et al. (BM):
$$\begin{aligned} y_n &= x_n - \frac{2}{3}F'(x_n)^{-1}F(x_n),\\ x_{n+1} &= x_n - 2\left[I - \frac{1}{4}\left(F'(x_n)^{-1}F'(y_n) - I\right) + \frac{3}{4}\left(F'(x_n)^{-1}F'(y_n) - I\right)^2\right]\left(F'(x_n) + F'(y_n)\right)^{-1}F(x_n). \end{aligned}$$
Method by Cordero et al. (CM):
$$\begin{aligned} y_n &= x_n - F'(x_n)^{-1}F(x_n),\\ x_{n+1} &= y_n - \left(2F'(x_n)^{-1} - F'(x_n)^{-1}F'(y_n)F'(x_n)^{-1}\right)F(y_n). \end{aligned}$$
Darvishi-Barati Method (DBM):
$$\begin{aligned} y_n &= x_n - F'(x_n)^{-1}F(x_n),\\ z_n &= x_n - F'(x_n)^{-1}\left(F(x_n) + F(y_n)\right),\\ x_{n+1} &= x_n - \left[\frac{1}{6}F'(x_n) + \frac{2}{3}F'\!\left(\frac{x_n + z_n}{2}\right) + \frac{1}{6}F'(z_n)\right]^{-1}F(x_n). \end{aligned}$$
Method by Hueso et al. (HM):
$$\begin{aligned} y_n &= x_n - \theta\, \Gamma_{x_n} F(x_n),\\ H(x_n, y_n) &= \Gamma_{x_n} F'(y_n),\\ G_s(x_n, y_n) &= s_1 I + s_2 H(y_n, x_n) + s_3 H(x_n, y_n) + s_4 H(y_n, x_n)^2,\\ z_n &= x_n - G_s(x_n, y_n)\, \Gamma_{x_n} F(x_n),\\ x_{n+1} &= z_n, \end{aligned}$$

where $\Gamma_{x_n} = F'(x_n)^{-1}$, $\theta = \frac{2}{3}$, $s_1 = \frac{5 - 8s_2}{8}$, $s_3 = \frac{s_2}{3}$, $s_4 = \frac{9 - 8s_2}{24}$ and $s_2 \in \mathbb{R}$. In the numerical problems, we choose $s_2 = 1$.
Noor-Noor Method(NNM):
$$\begin{aligned} y_n &= x_n - F'(x_n)^{-1}F(x_n),\\ z_n &= -F'(x_n)^{-1}F(y_n),\\ x_{n+1} &= x_n - F'(x_n)^{-1}F(x_n) - F'(x_n)^{-1}F(y_n) - F'(x_n)^{-1}F(y_n + z_n). \end{aligned}$$
Sharma-Arora Method (SAM):
$$\begin{aligned} y_n &= x_n - \frac{2}{3}F'(x_n)^{-1}F(x_n),\\ x_{n+1} &= x_n - \left[\frac{23}{8}I - F'(x_n)^{-1}F'(y_n)\left(3I - \frac{9}{8}F'(x_n)^{-1}F'(y_n)\right)\right]F'(x_n)^{-1}F(x_n). \end{aligned}$$
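As an illustration of how these two-step schemes are implemented, the CM iteration above can be sketched on a small 2-D test system. The system $F(x, y) = (x^2 + y^2 - 1,\; x - y)$, its Jacobian and the starting point are our own illustrative choices, not taken from the paper; the positive root is $(1/\sqrt{2}, 1/\sqrt{2})$.

```python
import numpy as np

def F(v):
    x, y = v
    return np.array([x * x + y * y - 1.0, x - y])

def J(v):
    x, y = v
    return np.array([[2.0 * x, 2.0 * y], [1.0, -1.0]])

def cm_step(x):
    A = np.linalg.inv(J(x))                       # F'(x_n)^{-1}
    y = x - A @ F(x)                              # Newton predictor y_n
    return y - (2.0 * A - A @ J(y) @ A) @ F(y)    # fourth-order corrector

x = np.array([1.0, 0.5])
for _ in range(6):
    x = cm_step(x)
print(x)                        # -> approx (0.70710678, 0.70710678)
print(np.linalg.norm(F(x)))
```

In practice one would solve linear systems instead of forming the inverse explicitly; the inverse is kept here only to mirror the written formula.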
The computations are performed in the programming package Mathematica [33] using multiple-precision arithmetic with 4096 digits. The programs were run on an AMD A8-7410 APU with AMD Radeon R5 Graphics @ 2.20 GHz (64-bit operating system, Microsoft Windows 10 Ultimate 2016). For each method, we record the number of iterations (n) required to converge to the solution, employing the stopping criterion
$$\|x_{n+1} - x_n\| + \|F(x_n)\| < 10^{-200}.$$
The theoretical order of convergence is verified by calculating the approximate computational order of convergence (ACOC) using the formula (see [4])
$$\text{ACOC} = \frac{\ln\left(\|x_{n+1} - x_n\| / \|x_n - x_{n-1}\|\right)}{\ln\left(\|x_n - x_{n-1}\| / \|x_{n-1} - x_{n-2}\|\right)}.$$
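The ACOC estimate above uses only the last four iterates. A minimal sketch (for scalar iterates; for systems one replaces the absolute value by a norm):

```python
import math

# ACOC from the last four iterates, per the formula above.

def acoc(xs):
    d = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# A synthetic quadratically convergent sequence e_{k+1} = e_k^2 toward 0
# should yield an estimate close to 2:
xs = [1e-1, 1e-2, 1e-4, 1e-8, 1e-16]
print(acoc(xs))   # ~2 for a second-order sequence
```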
For numerical experiments we consider the following problems:
Problem 1.
Consider the nonlinear problem BROYDN3D [34]. Let x = ( x 1 , x 2 , , x m ) T . The nonlinear system F ( x ) = 0 is given by F = ( F 1 , F 2 , , F m ) T with
$$F_i(x) = (3 - 2x_i)x_i - x_{i-1} - 2x_{i+1} + 1, \quad 1 \le i \le m,$$

where $x_0 = x_{m+1} = 0$ by convention. The initial guess is $x^{(0)} = (-1, -1, \ldots, -1)^T$ (m components).
In the tests, we choose systems of dimension m = 10, 50, 100, 200, 500. The corresponding solution for m = 10 is

$$\{-0.7596, -0.7166, -0.7085, -0.7066, -0.7051, -0.7015, -0.6918, -0.6657, -0.5960, -0.4164\}^T,$$

and for m = 50, 100, 200 and 500 the solutions are

$$\{-0.5707, -0.6819, -0.7024, -0.7062, -0.7069, -0.7070, -0.7071, \ldots, -0.7071, -0.7070, -0.7070, -0.7070, -0.7068, -0.7063, -0.7050, -0.7015, -0.6918, -0.6657, -0.5960, -0.4164\}^T,$$

where the number of repeated middle entries −0.7071 grows with m.
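The BROYDN3D residual and Jacobian are easy to assemble. The sketch below solves the m = 10 case with plain Newton iteration (standing in, for brevity, for the fourth-order methods) from the initial guess (−1, …, −1); the interior components of the solution approach $-1/\sqrt{2} \approx -0.7071$.

```python
import numpy as np

def F(x):
    xp = np.concatenate(([0.0], x, [0.0]))   # convention x_0 = x_{m+1} = 0
    xi = xp[1:-1]
    return (3.0 - 2.0 * xi) * xi - xp[:-2] - 2.0 * xp[2:] + 1.0

def J(x):
    m = len(x)
    A = np.zeros((m, m))
    i = np.arange(m)
    A[i, i] = 3.0 - 4.0 * x                  # d/dx_i of (3 - 2x_i)x_i
    A[i[1:], i[1:] - 1] = -1.0               # d/dx_{i-1}
    A[i[:-1], i[:-1] + 1] = -2.0             # d/dx_{i+1}
    return A

x = -np.ones(10)                             # m = 10
for _ in range(12):
    x = x - np.linalg.solve(J(x), F(x))
print(x)                                     # all-negative solution
print(np.linalg.norm(F(x)))
```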
Problem 2.
Consider the system of nonlinear equations [35]:

$$F_i(x) = \begin{cases} x_i^2 x_{i+1} - 1, & 1 \le i \le m-1,\\ x_i^2 x_1 - 1, & i = m, \end{cases}$$

with initial value $x^{(0)} = \left(\frac{3}{2}, \frac{3}{2}, \ldots, \frac{3}{2}\right)^T$ (m components), for m = 10, 50, 100, 200, 500. In each case, the required solution is $x^* = (1, 1, \ldots, 1)^T$.
Problem 3.
Next, the boundary value problem (see [19]):
$$u'' + u^3 = 0, \quad u(0) = 0, \quad u(1) = 1,$$

is studied. Assume the following partitioning of the interval [0, 1]:

$$t_0 = 0 < t_1 < t_2 < \cdots < t_{l-1} < t_l = 1, \quad t_{j+1} = t_j + h, \quad h = 1/l.$$

Let us define $u_0 = u(t_0) = 0$, $u_1 = u(t_1)$, …, $u_{l-1} = u(t_{l-1})$, $u_l = u(t_l) = 1$. We discretize the problem using the central-difference formula for the second derivative,

$$u_k'' \approx \frac{u_{k-1} - 2u_k + u_{k+1}}{h^2}, \quad k = 1, 2, \ldots, l-1;$$

then the following system of l − 1 nonlinear equations in l − 1 variables is obtained:

$$u_{k-1} - 2u_k + u_{k+1} + h^2 u_k^3 = 0, \quad k = 1, 2, \ldots, l-1.$$

In particular, we solve this problem for l = 11, 51, 101, 201, 501, so that m = 10, 50, 100, 200, 500, selecting $u^{(0)} = (1, 1, \ldots, 1)^T$ (m components) as the initial value. The corresponding solutions are given as follows:
{ 0.0959, 0.1919, 0.2878, 0.3835, 0.4787, 0.5730, 0.6658, 0.7561, 0.8429, 0.9247 }^T,
{ 0.0207 , 0.0414 , 0.0621 , 0.0828 , 0.1035 , 0.1242 , 0.1449 , 0.1656 , 0.1863 , 0.2070 , 0.2277 , 0.2484 , 0.2691 , 0.2898 , 0.3105 , 0.3312 , 0.3518 , 0.3724 , 0.3930 , 0.4136 , 0.4342 , 0.4547 , 0.4752 , 0.4957 , 0.5161 , 0.5364 , 0.5567 , 0.5769 , 0.5971 , 0.6172 , 0.6372 , 0.6570 , 0.6768 , 0.6965 , 0.7160 , 0.7354 , 0.7546 , 0.7737 , 0.7925 , 0.8112 , 0 . 8297 , 0 . 8480 , 0 . 8660 , 0 . 8838 , 0 . 9013 , 0 . 9186 , 0 . 9355 , 0 . 9521 , 0 . 9684 , 0 . 9844 } T ,
{ 0.0104 , 0.0209 , 0.0313 , 0.0418 , 0.0522 , 0.0627 , 0.0732 , 0.0836 , 0.0941 , 0.1045 , 0.1150 , 0.1255 , 0.1359 , 0.1464 , 0.1568 , 0.1673 , 0.1777 , 0.1882 , 0.1986 , 0.2091 , 0.2196 , 0.2300 , 0.2405 , 0.2509 , 0.2614 , 0.2718 , 0.2822 , 0.2927 , 0.3031 , 0.3136 , 0.3240 , 0.3344 , 0.3449 , 0.3553 , 0.3657 , 0.3761 , 0.3865 , 0.3969 , 0.4073 , 0.4177 , 0.4281 , 0.4385 , 0.4488 , 0.4592 , 0.4695 , 0.4799 , 0.4902 , 0.5005 , 0.5108 , 0.5211 , 0.5314 , 0.5417 , 0.5519 , 0.5621 , 0.5724 , 0.5826 , 0.5927 , 0.6029 , 0.6130 , 0.6231 , 0.6332 , 0.6433 , 0.6533 , 0.6633 , 0.6733 , 0.6832 , 0.6932 , 0.7031 , 0.7129 , 0.7227 , 0.7325 , 0.7422 , 0.7519 , 0.7616 , 0.7712 , 0.7808 , 0.7903 , 0.7998 , 0.8092 , 0.8186 , 0.8279 , 0.8372 , 0.8464 , 0.8555 , 0.8646 , 0.8736 , 0.8826 , 0.8915 , 0.9003 , 0.9091 , 0 . 9177 , 0 . 9263 , 0 . 9348 , 0 . 9433 , 0 . 9516 , 0 . 9599 , 0 . 9681 , 0 . 9762 , 0 . 9842 , 0 . 9921 } T ,
and, for m = 200 and m = 500, the analogous monotonically increasing vectors, whose entries (printed to two decimal places in the original output) rise from ≈ 0.005 and ≈ 0.002, respectively, up to ≈ 0.99.
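The discrete system of Problem 3 can be solved directly; the sketch below uses plain Newton iteration (standing in for the compared fourth-order methods) for l = 11, i.e., m = 10 unknowns, from the all-ones initial vector, and reproduces the m = 10 solution listed above.

```python
import numpy as np

l = 11
h = 1.0 / l

def F(u):
    up = np.concatenate(([0.0], u, [1.0]))   # boundary values u(0) = 0, u(1) = 1
    return up[:-2] - 2.0 * up[1:-1] + up[2:] + h * h * up[1:-1] ** 3

def J(u):
    m = len(u)
    A = np.zeros((m, m))
    i = np.arange(m)
    A[i, i] = -2.0 + 3.0 * h * h * u ** 2    # d/du_k of -2u_k + h^2 u_k^3
    A[i[1:], i[1:] - 1] = 1.0
    A[i[:-1], i[:-1] + 1] = 1.0
    return A

u = np.ones(l - 1)
for _ in range(20):
    u = u - np.linalg.solve(J(u), F(u))
print(u)   # increases from ~0.0959 to ~0.9247
```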
Numerical results are displayed in Table 4, Table 5 and Table 6, which contain:
the dimension (m) of the considered system of equations;
the required number of iterations (n);
the error $\|x_{n+1} - x_n\|$ of the approximation to the corresponding solution, wherein X(−h) means $X \times 10^{-h}$;
the approximate computational order of convergence (ACOC) calculated by Formula (57).
From the numerical results displayed in Table 4, Table 5 and Table 6, we observe the stable convergence behavior of the methods. The computational order of convergence also verifies the theoretical fourth order. Observe also that, at the same iteration, the computed errors of the proposed methods are in general smaller. Similar numerical experiments have been performed on other problems, with results on a par with those presented here.

7. Conclusions

In the foregoing study, we have presented a two-parameter family of iterative methods with fourth-order convergence. The algorithm is composed of two Newton–Jarratt steps and requires the evaluation of one function and two first-order derivatives in each iteration. For a particular set of parameter values, the well-known fourth-order Jarratt method is a special case of the family. Local convergence analysis, including the convergence radius, error bounds and estimates on the uniqueness of the solution, has been provided. Moreover, the stability of the methods has been analyzed by means of dynamical tools, namely the convergence plane and basins of attraction. Theoretical results regarding the radius and order of convergence are verified in the considered numerical examples. The efficiency is demonstrated by applying the methods to a variety of systems of nonlinear equations.

Author Contributions

The contribution of all the authors has been equal; all of them worked together to prepare the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
  2. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  3. Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: New York, NY, USA, 1966. [Google Scholar]
  4. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
  5. Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 149, 771–782. [Google Scholar] [CrossRef]
  6. Grau-Sánchez, M.; Grau, Á.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266. [Google Scholar] [CrossRef] [Green Version]
  7. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef]
  8. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
  9. Darvishi, M.T.; Barati, A. A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 187, 630–635. [Google Scholar] [CrossRef]
  10. Babajee, D.K.R.; Cordero, A.; Soleymani, F.; Torregrosa, J.R. On a novel fourth-order algorithm for solving systems of nonlinear equations. J. Appl. Math. 2012, 2012, 165452. [Google Scholar] [CrossRef]
  11. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton–Jarratt’s composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  12. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  13. Cordero, A.; Martínez, E.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2009, 231, 541–551. [Google Scholar] [CrossRef] [Green Version]
  14. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261. [Google Scholar] [CrossRef]
  15. Grau-Sánchez, M.; Grau, Á.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
  16. Grau, M.; Díaz-Barrero, J.L. A technique to composite a modified Newton’s method for solving nonlinear equations. arXiv 2011, arXiv:1106.0996. [Google Scholar]
  17. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420. [Google Scholar] [CrossRef]
  18. Noor, K.I.; Noor, M.A. Iterative methods with fourth-order convergence for nonlinear equations. Appl. Math. Comput. 2007, 189, 221–227. [Google Scholar] [CrossRef]
  19. Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210. [Google Scholar] [CrossRef]
  20. Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Design and multidimensional extension of iterative methods for solving nonlinear problems. Appl. Math. Comput. 2017, 293, 194–203. [Google Scholar] [CrossRef]
  21. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef] [Green Version]
  22. Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S.S. On a new method for computing the numerical solution of systems of nonlinear equations. J. Appl. Math. 2012, 2012, 751975. [Google Scholar] [CrossRef]
  23. Sharma, J.R.; Sharma, R.; Kalra, N. A novel family of composite Newton–Traub methods for solving systems of nonlinear equations. Appl. Math. Comput. 2015, 269, 520–535. [Google Scholar] [CrossRef]
  24. Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.R.; Lotfi, T. New efficient methods for solving nonlinear systems of equations with arbitrary even order. Appl. Math. Comput. 2016, 287–288, 94–103. [Google Scholar] [CrossRef]
  25. Argyros, I.K.; Magreñán, Á.A. On the convergence of an optimal fourth-order family of methods and its dynamics. Appl. Math. Comput. 2015, 252, 336–346. [Google Scholar] [CrossRef]
  26. Argyros, I.K.; Magreñán, Á.A. A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algor. 2016, 71, 1–23. [Google Scholar] [CrossRef]
  27. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224. [Google Scholar] [CrossRef] [Green Version]
  28. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46. [Google Scholar] [CrossRef]
  29. van Sosin, B.; Elber, G. Solving piecewise polynomial constraint systems with decomposition and a subdivision-based solver. Comput. Aided Design 2017, 90, 37–47. [Google Scholar] [CrossRef]
  30. Aizenshtein, M.; Bartoň, M.; Elber, G. Global solutions of well-constrained transcendental systems using expression trees and a single solution test. Comput. Aided Geom. Design 2012, 29, 265–279. [Google Scholar] [CrossRef]
  31. Bartoň, M. Solving polynomial systems using no-root elimination blending schemes. Comput. Aided Design 2011, 43, 1870–1878. [Google Scholar] [CrossRef]
  32. Argyros, I.K.; Sharma, J.R.; Kumar, D. Ball convergence of the Newton–Gauss method in Banach space. SeMA 2017, 74, 429–439. [Google Scholar] [CrossRef]
  33. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003. [Google Scholar]
  34. Bargiacchi-Soula, S.; Fehrenbach, J.; Masmoudi, M. From linear to nonlinear large scale systems. SIAM J. Matrix Anal. Appl. 2010, 31, 1552–1569. [Google Scholar] [CrossRef]
  35. Sharma, J.R.; Arora, H. A novel derivative free algorithm with seventh order convergence for solving systems of nonlinear equations. Numer. Algor. 2014, 67, 917–933. [Google Scholar] [CrossRef]
Figure 1. Stability region of z = 1.
Figure 2. Stability region of z = ex₂(r, t).
Figure 3. Parameter plane associated with the free critical point cr₂(r, t).
Figure 4. Basins of attraction associated with the member r = 5 and t = 1.
Figure 5. Basins of attraction associated with the member r = 1 and t = 5.
Figure 6. Basins of attraction associated with the member r = 1 and t = 1.
Figure 7. Basins of attraction associated with the member r = 5 and t = 3.
Figure 8. Basins of attraction associated with Jarratt's method (JM).
Table 1. Numerical results for Example 1.

Parameter   NM         JM         MI         MII        MIII       MIV
s_N         0.169092   -          -          -          -          -
s_1         -          0.152183   0.152183   0.152183   0.152183   0.152183
s_2         -          0.114471   0.102803   0.0405032  0.102462   0.107633
s_*         -          0.114471   0.102803   0.0405032  0.102462   0.107633
Table 2. Numerical results for Example 2.

Parameter   NM          JM          MI          MII         MIII        MIV
s_N         0.0666667   -           -           -           -           -
s_1         -           0.0627273   0.0627273   0.0627273   0.0627273   0.0627273
s_2         -           0.0438191   0.0382687   0.0207147   0.0372223   0.0400211
s_*         -           0.0438191   0.0382687   0.0207147   0.0372223   0.0400211
Table 3. Numerical results for Example 3.

Parameter   NM         JM         MI         MII        MIII       MIV
s_N         2.133333   -          -          -          -          -
s_1         -          1.920000   1.920000   1.920000   1.920000   1.920000
s_2         -          1.444421   1.297012   0.511010   1.292725   1.357911
s_*         -          1.444421   1.297012   0.511010   1.292725   1.357911
Table 4. Comparison of performance of methods for Problem 1.

Methods            BM          CM          DBM         HM          NNM         SAM         JM          MI          MII         MIII        MIV
m = 10
n                  5           5           5           5           5           5           5           5           5           5           5
||x_{n+1} - x_n||  4.09(-124)  1.49(-116)  1.21(-140)  6.83(-137)  1.01(-121)  1.49(-116)  4.28(-158)  2.54(-146)  1.22(-142)  2.96(-130)  4.16(-200)
ACOC               4.001       4.001       4.001       4.001       4.001       4.001       4.001       4.001       4.001       4.000       4.000
m = 50, 100, 200, 500
The recorded values of n, the error and the ACOC coincide with those for m = 10.
Table 5. Comparison of performance of methods for Problem 2.

Methods            BM          CM          DBM         HM          NNM         SAM         JM          MI          MII         MIII        MIV
m = 10
n                  5           5           5           5           5           5           5           5           5           6           5
||x_{n+1} - x_n||  1.27(-84)   3.71(-76)   2.29(-96)   2.14(-97)   5.86(-80)   1.12(-77)   1.31(-122)  5.76(-108)  7.63(-104)  7.77(-197)  1.07(-116)
ACOC               4.000 for all methods
m = 50
n                  5           5           5           5           5           5           5           5           5           6           5
||x_{n+1} - x_n||  2.87(-84)   8.37(-76)   5.17(-96)   4.84(-97)   1.32(-79)   2.53(-77)   2.97(-122)  1.30(-107)  1.72(-103)  1.76(-196)  2.42(-116)
ACOC               4.000 for all methods
m = 100
n                  5           5           5           5           5           5           5           5           5           6           5
||x_{n+1} - x_n||  4.01(-84)   1.17(-76)   7.24(-96)   6.77(-97)   1.85(-79)   3.55(-77)   4.16(-122)  1.82(-107)  2.41(-103)  2.46(-196)  3.38(-116)
ACOC               4.000 for all methods
m = 200
n                  5           5           5           5           5           5           5           5           5           6           5
||x_{n+1} - x_n||  5.68(-84)   1.66(-76)   1.02(-95)   9.58(-97)   2.62(-79)   5.01(-77)   5.88(-122)  2.58(-107)  3.41(-103)  3.48(-196)  4.78(-116)
ACOC               4.000 for all methods
m = 500
n                  5           5           5           5           5           5           5           5           5           6           5
||x_{n+1} - x_n||  8.98(-84)   2.62(-76)   1.62(-95)   1.51(-96)   4.14(-79)   7.93(-77)   9.29(-122)  4.07(-107)  5.40(-103)  5.50(-196)  7.56(-116)
ACOC               4.000 for all methods
Table 6. Comparison of performance of methods for Problem 3.

Methods            BM          CM          DBM         HM          NNM         SAM         JM          MI          MII         MIII        MIV
m = 10
n                  5           5           5           5           5           5           5           5           5           4           4
||x_{n+1} - x_n||  1.04(-145)  2.70(-151)  1.29(-193)  1.68(-160)  1.46(-161)  3.72(-134)  2.55(-177)  6.04(-168)  1.57(-164)  1.27(-63)   1.49(-51)
ACOC               4.000       4.000       4.000       4.000       4.000       4.000       4.000       4.000       4.000       3.993       3.999
m = 50
n                  5           5           5           5           5           5           5           5           5           4           4
||x_{n+1} - x_n||  6.43(-146)  1.77(-151)  5.39(-193)  1.27(-160)  5.79(-161)  1.99(-134)  2.58(-177)  2.60(-168)  1.23(-164)  6.31(-63)   3.28(-51)
ACOC               4.000       4.000       4.000       4.000       4.000       4.000       4.000       4.000       4.000       3.993       3.999
m = 100
n                  5           5           5           5           5           5           5           5           5           4           4
||x_{n+1} - x_n||  8.65(-146)  1.56(-151)  7.15(-193)  1.71(-160)  7.67(-161)  2.67(-134)  3.54(-177)  3.54(-168)  1.67(-164)  9.14(-63)   4.62(-51)
ACOC               4.000       4.000       4.000       4.000       4.000       4.000       4.000       4.000       4.000       3.993       3.999
m = 200
n                  5           5           5           5           5           5           5           5           5           4           4
||x_{n+1} - x_n||  1.21(-145)  2.17(-151)  9.93(-193)  2.40(-160)  1.07(-161)  3.71(-134)  4.96(-177)  4.95(-168)  2.33(-164)  1.30(-62)   6.52(-51)
ACOC               4.000       4.000       4.000       4.000       4.000       4.000       4.000       4.000       4.000       3.993       3.999
m = 500
n                  5           5           5           5           5           5           5           5           5           4           4
||x_{n+1} - x_n||  1.90(-145)  3.40(-151)  1.56(-193)  3.77(-160)  1.67(-161)  5.84(-134)  7.81(-177)  7.80(-168)  3.67(-164)  2.06(-62)   1.03(-50)
ACOC               4.000       4.000       4.000       4.000       4.000       4.000       4.000       4.000       4.000       3.993       3.999
