
# Modified Potra–Pták Multi-step Schemes with Accelerated Order of Convergence for Solving Systems of Nonlinear Equations

1 Department of Mathematics, D.A.V. University, Sarmastpur, 144012 Jalandhar, India
2 Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2019, 24(1), 3; https://doi.org/10.3390/mca24010003
Received: 29 November 2018 / Revised: 22 December 2018 / Accepted: 23 December 2018 / Published: 27 December 2018
(This article belongs to the Special Issue Mathematical Modelling in Engineering & Human Behaviour 2018)

## Abstract

In this study, an iterative scheme with sixth order of convergence for solving systems of nonlinear equations is presented. The scheme is composed of three steps: the first two steps are those of the third-order Potra–Pták method, and the last is a weighted Newton step. Furthermore, we generalize our work to derive a family of multi-step iterative methods with order of convergence $3r+6$, $r = 0, 1, 2, \ldots$ The sixth-order method is the special case of this multi-step scheme for $r = 0$, and the family gives a four-step ninth-order method for $r = 1$. Since much higher-order methods are rarely used in practice, we study the sixth- and ninth-order methods in detail. Numerical examples are included to confirm the theoretical results and to compare the methods with some existing ones. Different numerical tests, involving academic functions and systems resulting from the discretization of boundary problems, are presented to show the efficiency and reliability of the proposed methods.

## 1. Introduction

Many applied problems in science and engineering [1,2,3] reduce to solving a nonlinear system $F(x) = 0$ numerically; that is, for a given nonlinear function $F : D \subset R^m \to R^m$, where $F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T$ and $x = (x_1, x_2, \ldots, x_m)^T$, to find a vector $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_m)^T$ such that $F(\alpha) = 0$. The most widely used method for this purpose is the classical Newton's method [3,4], which converges quadratically under the conditions that the function F is continuously differentiable and a good initial approximation $x^{(0)}$ is given. It is defined by
$x^{(k+1)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}), \quad k = 0, 1, \ldots,$
where $F'(x^{(k)})^{-1}$ is the inverse of the Fréchet derivative $F'(x^{(k)})$ of the function $F(x)$ at $x^{(k)}$. In order to improve the order of convergence of Newton's method, several methods have been proposed in the literature; see, for example, [5,6,7,8,9,10,11] and the references therein. In particular, the third-order method by Potra–Pták for systems of nonlinear equations is given by
$y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}),$
$x^{(k+1)} = y^{(k)} - F'(x^{(k)})^{-1} F(y^{(k)}), \quad k = 0, 1, \ldots$
This scheme requires two function evaluations, one Jacobian evaluation, and one matrix inversion per iteration; in practice, the inversion is avoided by solving a linear system instead. The algorithm is well known not only for its simplicity but also for its efficiency.
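As an illustration, the Potra–Pták iteration (2) can be sketched in a few lines. This is a minimal Python version (not from the paper): the inversion is replaced by two linear solves sharing the same Jacobian, and the toy test system and tolerances are our own choices.

```python
import numpy as np

def potra_ptak(F, J, x0, tol=1e-12, max_iter=50):
    """Third-order Potra-Ptak iteration (2) for F(x) = 0.

    Both substeps share the Jacobian J(x), and the inversion is
    replaced by two linear solves with the same matrix.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Jx = J(x)
        y = x - np.linalg.solve(Jx, F(x))      # Newton predictor
        x_new = y - np.linalg.solve(Jx, F(y))  # Potra-Ptak corrector
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# toy system: x1^2 + x2^2 = 1, x1 = x2, with a root at (1, 1)/sqrt(2)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = potra_ptak(F, J, [1.0, 0.5])
```

In an actual implementation one would factor the Jacobian once (e.g. an LU decomposition) and reuse the factors for both solves; `np.linalg.solve` is used twice here only for brevity.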
In this paper, based on the Potra–Pták method (2), we develop a three-step scheme with increased order of convergence that still maintains this efficient character: the first two steps are those of the Potra–Pták method, whereas the third is a weighted Newton step. Then, based on this scheme, a multi-step family with increasing order of convergence $3r+6$, $r = 0, 1, 2, \ldots$, is developed. The sixth-order method is the special case of this multi-step scheme for $r = 0$, and the family gives a four-step ninth-order scheme for $r = 1$. Since much higher-order methods are rarely used in practice, we study the sixth- and ninth-order methods in particular.
The rest of the paper is organized as follows. In Section 2, we present the new three-step scheme for solving nonlinear systems and analyze its order of convergence. In Section 2.1, the order of this scheme is improved by three units by adding another step, which needs one new functional evaluation and the solution of a linear system with the same coefficient matrix as before. This idea can be generalized to design iterative methods of arbitrary order of convergence. The computational efficiency of the proposed schemes is studied in Section 3, with a comparative analysis against the efficiency of other known methods. Section 4 is devoted to numerical experiments with academic multivariate functions and nonlinear systems resulting from the discretization of boundary problems. For some cases, the basins of attraction of the methods are also shown. The paper finishes with some conclusions and the references.

## 2. The Method and Analysis of Convergence

We consider the following three-step iterative scheme, where a, b, and c are real parameters to be determined:
$y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}),$
$z^{(k)} = y^{(k)} - F'(x^{(k)})^{-1} F(y^{(k)}),$
$x^{(k+1)} = z^{(k)} - \left( a I + F'(x^{(k)})^{-1} [z^{(k)}, y^{(k)}; F] \left( b I + c\, F'(x^{(k)})^{-1} [z^{(k)}, y^{(k)}; F] \right) \right) F'(x^{(k)})^{-1} F(z^{(k)}),$
where the first two steps are those of the Potra–Pták scheme for nonlinear systems and $[ z ( k ) , y ( k ) ; F ]$ is a first-order divided difference of F.
In order to discuss the behavior of scheme (3), we consider the following expression of divided difference operator $[ · , · ; F ] : D × D ⊂ R m × R m ⟶ L ( R m )$, see for example, [2,10],
$[ x + h , x ; F ] = ∫ 0 1 F ′ ( x + t h ) d t , ∀ x , h ∈ R m .$
By expanding $F ′ ( x + t h )$ in Taylor series at the point x and integrating, we have
$[x+h, x; F] = \int_0^1 F'(x+th)\, dt = F'(x) + \frac{1}{2} F''(x) h + \frac{1}{6} F'''(x) h^2 + O(h^3),$
where $h^i = (h, h, \overset{i}{\ldots}, h)$, $h \in R^m$. Let $e^{(k)} = x^{(k)} - \alpha$. Developing $F(x^{(k)})$ in a neighborhood of $\alpha$ and assuming that $\Gamma = [F'(\alpha)]^{-1}$ exists, we have
$F ( x ( k ) ) = F ′ ( α ) ( e ( k ) + A 2 ( e ( k ) ) 2 + A 3 ( e ( k ) ) 3 + A 4 ( e ( k ) ) 4 + O ( ( e ( k ) ) 5 ) ) ,$
where $A i = 1 i ! Γ F ( i ) ( α ) ∈ L i ( R m , R m )$ and $( e ( k ) ) i = ( e ( k ) , e ( k ) , … i , e ( k ) ) , e ( k ) ∈ R m$. Also,
$F ′ ( x ( k ) ) = F ′ ( α ) ( I + 2 A 2 e ( k ) + 3 A 3 ( e ( k ) ) 2 + 4 A 4 ( e ( k ) ) 3 + O ( ( e ( k ) ) 4 ) ) ,$
$F ″ ( x ( k ) ) = F ′ ( α ) ( 2 A 2 + 6 A 3 e ( k ) + 12 A 4 ( e ( k ) ) 2 + O ( ( e ( k ) ) 3 ) ) ,$
$F ‴ ( x ( k ) ) = F ′ ( α ) ( 6 A 3 + 24 A 4 e ( k ) + O ( ( e ( k ) ) 2 ) ) .$
Inversion of $F ′ ( x ( k ) )$ yields,
$F ′ ( x ( k ) ) − 1 = I − 2 A 2 e ( k ) + ( 4 A 2 2 − 3 A 3 ) ( e ( k ) ) 2 − ( 8 A 2 3 − 6 A 2 A 3 − 6 A 3 A 2 + 4 A 4 ) ( e ( k ) ) 3 + O ( ( e ( k ) ) 4 ) Γ .$
We are in a position to analyze the behavior of scheme (3). Thus, the following result is proven.
Theorem 1.
Let $F : D ⊂ R m → R m$ be a sufficiently differentiable function in an open neighborhood D of its zero α. Let us suppose that the Jacobian matrix $F ′ ( x )$ is continuous and nonsingular at α. If the initial approximation $x ( 0 )$ is sufficiently close to α, then the local order of convergence of method (3) is at least $6 ,$ provided $a = 13 / 4 , b = − 7 / 2$, and $c = 5 / 4 .$
Proof.
Let $e y ( k ) = y ( k ) − α$ be the local error of Newton’s method, given by
$e y ( k ) = A 2 ( e ( k ) ) 2 − 2 ( A 2 2 − A 3 ) ( e ( k ) ) 3 + O ( ( e ( k ) ) 4 ) .$
Employing Equations (10) and (11) in the second step of (3), we get
$e z ( k ) = z ( k ) − α = 2 A 2 e ( k ) e y ( k ) + O ( ( e ( k ) ) 4 ) .$
Using Equations (7)–(9) in (5) for $x + h = z ( k )$, $x = y ( k )$ and $h = e z ( k ) − e y ( k )$, it follows that
$[ z ( k ) , y ( k ) ; F ] = F ′ ( α ) ( I + A 2 ( e y ( k ) + e z ( k ) ) + O ( ( e ( k ) ) 4 ) ) .$
With the help of Equations (10) and (13), we can write
$a I + F ′ ( x ( k ) ) − 1 [ z ( k ) , y ( k ) ; F ] ( b I + c F ′ ( x ( k ) ) − 1 [ z ( k ) , y ( k ) ; F ] ) = ( a + b + c ) I − 2 ( b + 2 c ) A 2 e ( k ) + ( ( 5 b + 14 c ) A 2 2 − 3 ( b + 2 c ) A 3 ) ( e ( k ) ) 2 − ( 10 ( b + 4 c ) A 2 3 − ( 8 b + 22 c ) A 2 A 3 − ( 6 b + 18 c ) A 3 A 2 + 4 ( b + 2 c ) A 4 ) ( e ( k ) ) 3 + O ( ( e ( k ) ) 4 ) Γ .$
Expanding $F ( z ( k ) )$ about $α$, we obtain
$F ( z ( k ) ) = F ′ ( α ) e z ( k ) + A 2 ( e z ( k ) ) 2 + O ( ( e z ( k ) ) 3 ) .$
Using (10), (14) and (15) in the last step of (3), we get
$e ( k + 1 ) = x ( k + 1 ) − α = 2 1 − a − b − c A 2 2 ( e ( k ) ) 3 + 4 ( a + 2 b + 3 c ) A 2 3 ( e ( k ) ) 4 − 2 ( ( 4 a + 13 ( b + 2 c ) ) A 2 2 − 3 A 3 ( a + 2 b + 3 c ) ) A 2 2 ( e ( k ) ) 5 + 4 ( ( 3 a + 17 b + 45 c ) A 2 5 − ( 3 a + 10 ( b + 2 c ) ) A 2 A 3 A 2 2 + ( 3 a + 6 ( b + 3 c ) ) A 3 A 2 3 + 2 ( a + 2 b + 3 c ) A 4 A 2 2 ) ( e ( k ) ) 6 + O ( ( e ( k ) ) 7 ) .$
In order to achieve the sixth order of convergence, it is clear that the terms $1 − a − b − c ,$ $a + 2 b + 3 c$, and $4 a + 13 ( b + 2 c )$ must vanish for some values of $a ,$ b, and c. This happens when $a = 13 4 ,$ $b = − 7 2$, and $c = 5 4$. Thus, the error Equation (16) becomes
$e ( k + 1 ) = x ( k + 1 ) − α = ( 26 A 2 3 + A 2 A 3 − 3 A 3 A 2 ) A 2 2 ( e ( k ) ) 6 + O ( ( e ( k ) ) 7 ) .$
This proves the sixth order of convergence. □
Thus, the proposed scheme (3) is given by
$y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}),$
$z^{(k)} = y^{(k)} - F'(x^{(k)})^{-1} F(y^{(k)}),$
$x^{(k+1)} = z^{(k)} - \left( \frac{13}{4} I - F'(x^{(k)})^{-1} [z^{(k)}, y^{(k)}; F] \left( \frac{7}{2} I - \frac{5}{4} F'(x^{(k)})^{-1} [z^{(k)}, y^{(k)}; F] \right) \right) F'(x^{(k)})^{-1} F(z^{(k)}).$
Clearly this formula uses three functional evaluations, one evaluation of the Jacobian matrix, one of a divided difference, and one matrix inversion per iteration. We denote this scheme by $H 6 , 1 .$
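A minimal Python sketch of one $H_{6,1}$ iteration may make this structure clearer. This is our own illustration, not the paper's code: the divided difference used here is a simple column-wise first-order operator (the paper's cost analysis uses a symmetrized variant), and the test system, taken from Example 5 below, and all tolerances are our own choices.

```python
import numpy as np

def divided_difference(F, u, v):
    """Column-wise first-order divided difference [u, v; F],
    satisfying [u, v; F](u - v) = F(u) - F(v)."""
    m = len(u)
    D = np.zeros((m, m))
    w = np.array(v, dtype=float)
    f_prev = F(w)
    for j in range(m):
        w[j] = u[j]
        f_new = F(w)
        if u[j] != v[j]:
            D[:, j] = (f_new - f_prev) / (u[j] - v[j])
        # if u[j] == v[j] (machine-precision convergence) a zero
        # column keeps everything finite; the secant property holds
        f_prev = f_new
    return D

def H61_step(F, J, x):
    """One iteration of the sixth-order scheme (18); every substep
    reuses the single Jacobian J(x)."""
    Jx = J(x)
    y = x - np.linalg.solve(Jx, F(x))
    z = y - np.linalg.solve(Jx, F(y))
    M = np.linalg.solve(Jx, divided_difference(F, z, y))
    I = np.eye(len(x))
    v = np.linalg.solve(Jx, F(z))
    return z - (13/4*I - M @ (7/2*I - 5/4*M)) @ v

# Example 5 of the paper: g1 = x1^2 + x2^2 - 1, g2 = x1^2 - x2^2 + 1/2,
# with a root at (1/2, sqrt(3)/2)
G = lambda x: np.array([x[0]**2 + x[1]**2 - 1, x[0]**2 - x[1]**2 + 0.5])
JG = lambda x: np.array([[2*x[0], 2*x[1]], [2*x[0], -2*x[1]]])

x = np.array([1.0, 1.0])
for _ in range(10):
    if np.linalg.norm(G(x)) < 1e-12:
        break
    x = H61_step(G, JG, x)
```

Note that the weight matrix is never applied as a full matrix product to another matrix: only matrix-vector operations follow the single factorizable solve, which is what keeps the per-iteration cost low.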

#### 2.1. Multi-step Method with Order $3 r + 6$

In this section, we improve $H_{6,1}$ by adding one functional evaluation per new step, obtaining the multi-step version called the $H_{3r+6,1}$ method. The method is defined as
$x^{(k+1)} = H_{3r+6,1}(x^{(k)}) = \nu_r(x^{(k)}),$
$\nu_j(x^{(k)}) = \nu_{j-1}(x^{(k)}) - \theta(x^{(k)}) F'(x^{(k)})^{-1} F(\nu_{j-1}(x^{(k)})),$
$\theta(x^{(k)}) = \frac{13}{4} I - F'(x^{(k)})^{-1} [z^{(k)}, y^{(k)}; F] \left( \frac{7}{2} I - \frac{5}{4} F'(x^{(k)})^{-1} [z^{(k)}, y^{(k)}; F] \right),$
$\nu_0 = H_{6,1}(x^{(k)}), \quad j = 1, 2, \ldots, r; \; r \geqslant 1.$
Let us note that the case $r = 0$ is the $H 6 , 1$ method given by (18). This multi-step version has order of convergence $3 r + 6 , r ⩾ 1 ,$ which we prove in the following result.
Theorem 2.
Let us assume that $F : D ⊂ R m ⟶ R m$ is a sufficiently Fréchet differentiable function in an open convex set D containing the zero α of $F ( x )$, and that $F ′ ( x )$ is continuous and nonsingular at α. Then, for $x ( 0 )$ sufficiently close to α, the sequence ${ x ( k ) } k ⩾ 0$ obtained by using method (19) converges to α with order of convergence $3 r + 6 .$
Proof.
Let $e ν j ( k ) = ν j ( k ) − α ,$ for all $j = 1 , 2 , … , r .$ Taylor’s expansion of $F ( ν j − 1 ( k ) ( x ( k ) ) )$ about $α$ yields
$F ( ν j − 1 ( x ( k ) ) ) = F ′ ( α ) e ν j − 1 ( k ) + A 2 ( e ν j − 1 ( k ) ) 2 + O ( ( e ν j − 1 ( k ) ) 3 ) .$
Using (10) and (14) for $a = 13 / 4 , b = − 7 / 2$ and $c = 5 / 4 ,$ we have
$θ ( x ( k ) ) F ′ ( x ( k ) ) − 1 = 13 4 I − F ′ ( x ( k ) ) − 1 [ z ( k ) , y ( k ) ; F ] 7 2 I − 5 4 F ′ ( x ( k ) ) − 1 [ z ( k ) , y ( k ) ; F ] F ′ ( x ( k ) ) − 1 = I − S 3 ( e ( k ) ) 3 + O ( ( e ( k ) ) 4 ) Γ ,$
where $S 3 = 15 A 2 3 + 1 2 A 2 A 3 − 3 2 A 3 A 2 .$
With the help of (20) and (21), we can write
$θ ( x ( k ) ) F ′ ( x ( k ) ) − 1 F ( ν j − 1 ( x ( k ) ) ) = ( I − S 3 ( e ( k ) ) 3 + ⋯ ) ( e ν j − 1 ( k ) + A 2 ( e ν j − 1 ( k ) ) 2 + ⋯ ) = e ν j − 1 ( k ) − S 3 ( e ( k ) ) 3 e ν j − 1 ( k ) + ⋯ .$
By substituting (22) in (19), we get
$e ν j ( k ) = ν j ( k ) − α = e ν j − 1 ( k ) − ( e ν j − 1 ( k ) − S 3 ( e ( k ) ) 3 e ν j − 1 ( k ) + ⋯ ) = S 3 ( e ( k ) ) 3 e ν j − 1 ( k ) + ⋯ .$
As we know that $e ν 0 ( k ) = ( 26 A 2 3 + A 2 A 3 − 3 A 3 A 2 ) A 2 2 ( e ( k ) ) 6 ,$ so from (23), for $j = 1 , 2 , … ,$ we have that
$e ν 1 ( k ) = S 3 ( e ( k ) ) 3 e ν 0 ( k ) = S 3 ( e ( k ) ) 3 ( 26 A 2 3 + A 2 A 3 − 3 A 3 A 2 ) A 2 2 ( e ( k ) ) 6 + ⋯ , e ν 2 ( k ) = S 3 ( e ( k ) ) 3 e ν 1 ( k ) = S 3 2 ( e ( k ) ) 6 ( 26 A 2 3 + A 2 A 3 − 3 A 3 A 2 ) A 2 2 ( e ( k ) ) 6 + ⋯ .$
Proceeding by induction, we have
$e ν r ( k ) = S 3 r ( 26 A 2 3 + A 2 A 3 − 3 A 3 A 2 ) A 2 2 ( e ( k ) ) 3 r + 6 + O ( ( e ( k ) ) 3 r + 7 ) , r ⩾ 0 .$
Hence, the result of the theorem is proven. □
As a special case, for $r = 1$ the family gives a ninth-order method:
$y ( k ) = x ( k ) − F ′ ( x ( k ) ) − 1 F ( x ( k ) ) , z ( k ) = y ( k ) − F ′ ( x ( k ) ) − 1 F ( y ( k ) ) , ν 0 ( k ) = H 6 , 1 ( x ( k ) ) , x ( k + 1 ) = ν 0 ( k ) − θ ( x ( k ) ) F ′ ( x ( k ) ) − 1 F ( ν 0 ( k ) ) , k = 0 , 1 , 2 , …$
It is clear that this scheme requires four functional evaluations, one Jacobian matrix, one divided difference, and one matrix inversion per full iteration. We denote this scheme by $H 9 , 1 .$
Remark 1.
Multi-step version $H 3 r + 6$$( r ⩾ 0 ) ,$ utilizes $r + 3$ functional evaluations of $F ,$ one evaluation of $F ′$, and one divided difference. Also, the scheme (19) requires only one matrix inversion per iteration.
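The recursion (19) reuses both $\theta(x^{(k)})$ and the Jacobian of the current iterate, so each extra step costs only one new evaluation of F and one additional solve. A self-contained Python sketch (our own illustration, with a simple column-wise divided difference in place of the paper's symmetrized one, and with Example 5's system as a test case):

```python
import numpy as np

def dd(F, u, v):
    # column-wise first-order divided difference [u, v; F]
    m = len(u)
    D = np.zeros((m, m))
    w = np.array(v, dtype=float)
    f_prev = F(w)
    for j in range(m):
        w[j] = u[j]
        f_new = F(w)
        if u[j] != v[j]:
            D[:, j] = (f_new - f_prev) / (u[j] - v[j])
        f_prev = f_new
    return D

def H_multistep(F, J, x, r):
    """One iteration of H_{3r+6,1}: the H_{6,1} step followed by r
    accelerated steps reusing theta and the same Jacobian."""
    Jx = J(x)
    I = np.eye(len(x))
    y = x - np.linalg.solve(Jx, F(x))
    z = y - np.linalg.solve(Jx, F(y))
    M = np.linalg.solve(Jx, dd(F, z, y))
    theta = 13/4*I - M @ (7/2*I - 5/4*M)
    nu = z - theta @ np.linalg.solve(Jx, F(nu_arg := z))  # nu_0 = H_{6,1}(x)
    for _ in range(r):
        # each extra step: one new F-evaluation, order gain of three
        nu = nu - theta @ np.linalg.solve(Jx, F(nu))
    return nu

# ninth-order variant (r = 1) on Example 5's system
G = lambda x: np.array([x[0]**2 + x[1]**2 - 1, x[0]**2 - x[1]**2 + 0.5])
JG = lambda x: np.array([[2*x[0], 2*x[1]], [2*x[0], -2*x[1]]])
x = np.array([1.0, 1.0])
for _ in range(10):
    if np.linalg.norm(G(x)) < 1e-12:
        break
    x = H_multistep(G, JG, x, r=1)
```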

## 3. Computational Efficiency

To obtain an estimation of the efficiency of the proposed methods, we use the efficiency index, according to which the efficiency of an iterative method is given by $E = ρ^{1/C}$, where ρ is the order of convergence and C is the computational cost per iteration. For a system of m nonlinear equations in m unknowns, the computational cost per iteration is given by (see [9,10])
$C ( μ 0 , μ 1 , m ) = P 0 ( m ) μ 0 + P 1 ( m ) μ 1 + P ( m ) ,$
where $P 0 ( m )$ represents the number of evaluations of scalar functions $( f 1 , f 2 , … , f m )$ used in the evaluations of F and $[ y , x ; F ]$. The divided difference $[ y , x ; F ]$ of F is an $m × m$ matrix with elements (see [11,12])
$[ y , x ; F ] i j = ( f i ( y 1 , . . . . . , y j − 1 , y j , x j + 1 , . . . . . , x m ) − f i ( y 1 , . . . . . , y j − 1 , x j , x j + 1 , . . . . . , x m ) + f i ( x 1 , . . . . . , x j − 1 , y j , y j + 1 , . . . . . , y m ) − f i ( x 1 , . . . . . , x j − 1 , x j , y j + 1 , . . . . . , y m ) ) / ( 2 ( y j − x j ) ) , 1 ⩽ i , j ⩽ m .$
The number of evaluations of the scalar functions of $F ′$, i.e., $∂ f i / ∂ x j$, $1 ⩽ i , j ⩽ m$, is $P 1 ( m )$. $P ( m )$ represents the number of products or quotients needed per iteration, and $μ 0$ and $μ 1$ are the ratios between products and evaluations that are required to express the value of $C ( μ 0 , μ 1 , m )$ in terms of products.
To compute F in any iterative method, we calculate m scalar functions; if we also compute the divided difference $[ y , x ; F ]$, then we evaluate $2m(m-1)$ additional scalar functions, where $F ( x )$ and $F ( y )$ are computed separately, plus $m^2$ quotients. Any new derivative $F ′$ requires $m^2$ scalar evaluations. To compute an inverse linear operator, we instead solve a linear system, which requires $m(m-1)(2m-1)/6$ products and $m(m-1)/2$ quotients in the LU decomposition, plus $m(m-1)$ products and m quotients in the resolution of the two triangular linear systems. We suppose that a quotient is equivalent to l products. Moreover, we count $m^2$ products for the multiplication of a matrix by a vector or by a scalar, and m products for the multiplication of a vector by a scalar.
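The divided difference defined above can be implemented directly. The sketch below (our own, with an arbitrary test function) also checks the secant property $[y,x;F](y-x) = F(y)-F(x)$, which follows because both the forward and the backward family of terms in the definition telescope to $F(y)-F(x)$.

```python
import numpy as np

def dd_sym(F, y, x):
    """Symmetrized first-order divided difference [y, x; F] with the
    entries given in the text. Each column costs four F-evaluations,
    two of which are shared with the endpoint values F(x), F(y)."""
    m = len(x)
    D = np.empty((m, m))
    for j in range(m):
        D[:, j] = (F(np.concatenate([y[:j+1], x[j+1:]]))
                   - F(np.concatenate([y[:j], x[j:]]))
                   + F(np.concatenate([x[:j], y[j:]]))
                   - F(np.concatenate([x[:j+1], y[j+1:]]))) / (2*(y[j] - x[j]))
    return D

# arbitrary smooth test function and points (all components distinct)
F = lambda v: np.array([v[0]**2 + v[1]*v[2],
                        np.sin(v[1]) + v[0],
                        v[2]**3 - v[0]*v[1]])
x = np.array([0.3, -0.7, 1.2])
y = np.array([0.9, 0.1, 0.4])
D = dd_sym(F, y, x)
```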
Denoting the efficiency indices of $H ρ , i$ ($ρ = 6 , 9$ and $i = 1 , 2 , 3 , 4$) by $E ρ , i$ and computational cost by $C ρ , i$, then taking into account the above considerations, we obtain
$C 6 , 1 = ( 2 m 2 + m ) μ 0 + m 2 μ 1 + m 6 ( 2 m 2 + 39 m − 11 + 9 l ( m + 3 ) ) a n d E 6 , 1 = 6 1 / C 6 , 1 .$
$C 9 , 1 = ( 2 m 2 + 2 m ) μ 0 + m 2 μ 1 + m 6 ( 2 m 2 + 69 m − 11 + 9 l ( m + 5 ) ) a n d E 9 , 1 = 9 1 / C 9 , 1 .$
To check the performance of the new sixth-order method, we compare it with some existing sixth-order methods belonging to the same class, namely those presented in [10,13]. The first two schemes are given by
$y ( k ) = x ( k ) − F ′ ( x ( k ) ) − 1 F ( x ( k ) ) , k = 0 , 1 , 2 , … z ( k ) = y ( k ) − ( 2 [ y ( k ) , x ( k ) ; F ] − F ′ ( x ( k ) ) ) − 1 F ( y ( k ) ) , x ( k + 1 ) = H 6 , 2 ( x ( k ) , y ( k ) , z ( k ) ) = z ( k ) − ( 2 [ y ( k ) , x ( k ) ; F ] − F ′ ( x ( k ) ) ) − 1 F ( z ( k ) )$
and
$y ( k ) = x ( k ) − F ′ ( x ( k ) ) − 1 F ( x ( k ) ) , k = 0 , 1 , 2 , … z ( k ) = y ( k ) − ( 2 [ y ( k ) , x ( k ) ; F ] − 1 − F ′ ( x ( k ) ) − 1 ) F ( y ( k ) ) , x ( k + 1 ) = H 6 , 3 ( x ( k ) , y ( k ) , z ( k ) ) = z ( k ) − ( 2 [ y ( k ) , x ( k ) ; F ] − 1 − F ′ ( x ( k ) ) − 1 ) F ( z ( k ) ) .$
The third scheme is given by
$y ( k ) = x ( k ) − F ′ ( x ( k ) ) − 1 F ( x ( k ) ) , k = 0 , 1 , 2 , … z ( k ) = y ( k ) − ( 3 I − 2 F ′ ( x ( k ) ) − 1 [ y ( k ) , x ( k ) ; F ] ) F ′ ( x ( k ) ) − 1 F ( y ( k ) ) , x ( k + 1 ) = H 6 , 4 ( x ( k ) , y ( k ) , z ( k ) ) = z ( k ) − ( 3 I − 2 F ′ ( x ( k ) ) − 1 [ y ( k ) , x ( k ) ; F ] ) F ′ ( x ( k ) ) − 1 F ( z ( k ) ) .$
Per iteration, these methods use the same number of functional evaluations as $H 6 , 1 .$ The computational costs and efficiencies of the methods $H 6 , 2 ,$ $H 6 , 3$, and $H 6 , 4$ are given below:
$C 6 , 2 = ( 2 m 2 + m ) μ 0 + m 2 μ 1 + m 3 ( 2 m 2 + 9 m − 8 + 6 l ( m + 1 ) ) and E 6 , 2 = 6 1 / C 6 , 2 ,$
$C 6 , 3 = ( 2 m 2 + m ) μ 0 + m 2 μ 1 + m 3 ( 2 m 2 + 12 m − 8 + 6 l ( m + 2 ) ) and E 6 , 3 = 6 1 / C 6 , 3 ,$
$C 6 , 4 = ( 2 m 2 + m ) μ 0 + m 2 μ 1 + m 3 ( 2 m 2 + 39 m − 5 + 9 l ( m + 3 ) ) and E 6 , 4 = 6 1 / C 6 , 4 .$

#### 3.1. Comparison among the Efficiencies

To compare the iterative methods $H ρ , i$, we consider the ratio
$R p , i ; q , j = log E p , i log E q , j = log ( p ) C q , j log ( q ) C p , i .$
It is clear that if $R p , i ; q , j > 1$, the iterative method $H p , i$ is more efficient than $H q , j$. Moreover, if we need to compare methods of the same order, then from (36) the iterative method $H p , i$ is more efficient than $H q , j$ if $C q , j > C p , i .$ This means that methods of the same order can be compared simply through their computational costs.
$H 6 , 1$versus$H 6 , 2$case:
In this case, a comparison of the corresponding values of $C 6 , 1$ and $C 6 , 2$ in (28) and (33) gives $E 6 , 1 > E 6 , 2$ for all $m ⩾ 4$ and $l ⩾ 1$.
$H 6 , 1$versus$H 6 , 3$case:
Comparing the corresponding values of $C 6 , 1$ and $C 6 , 3$ in (28) and (34) we obtain $E 6 , 1 > E 6 , 3$ for all $m ⩾ 2$ and $l ⩾ 1$.
$H 6 , 1$versus$H 6 , 4$case:
Comparison of the computational costs $C 6 , 1$ and $C 6 , 4$ in (28) and (35) gives $E 6 , 1 > E 6 , 4$ for all $m ⩾ 2$ and $l ⩾ 1$.
In addition to the above comparisons we compare the proposed methods $H 6 , 1$ and $H 9 , 1$ with each other.
$H 9 , 1$versus$H 6 , 1$case:
The boundary $R 9 , 1 ; 6 , 1 = 1$, expressing $μ 0$ as a function of $μ 1$ and m, is
$\mu_0 = \frac{1}{12} \cdot \frac{2(r-s)m^2 + 3m\left(2\mu_1(r-s) + 3l(r-s) + 3r - 23s\right) + 3l(r-5s) - 22s}{m(s-r) + s},$
where $r = \ln(3)$ and $s = \ln(2)$. This function has a vertical asymptote at $m = s/(r-s) \approx 1.7095 .$
Note that for $m ⩾ 44$, the numerator of (37) is positive and the denominator is negative, so that $μ 0$ is always negative for $m ⩾ 44$. That is, the boundary lies outside the admissible region for $m ⩾ 44$, and we have $E 9 , 1 > E 6 , 1$ for all $( μ 1 , μ 0 ) ∈ ( 0 , + ∞ ) × ( 0 , + ∞ )$ and $l ⩾ 1$.
We summarize the above results in the following theorem:
Theorem 3.
For all $μ 0 > 0$, $μ 1 > 0$ and $l ⩾ 1$ we have:
(a) $E 6 , 1 > E 6 , 2$$∀ m ⩾ 4 .$
(b) $E 6 , 1 > E 6 , 3$ and $E 6 , 1 > E 6 , 4$$∀ m ⩾ 2 .$
(c) $E 9 , 1 > E 6 , 1$$∀ m ⩾ 44 .$
Otherwise, the efficiency comparison depends on m, $μ 0$, $μ 1$ and l.
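As a quick numerical sanity check of part (c), one can evaluate the printed cost expressions directly. Since $E = \rho^{1/C}$, comparisons can be made through $\log E = \log \rho / C$, which avoids raising numbers to tiny exponents. The sample values of $\mu_0$, $\mu_1$, and l below are our own choices; $m = 44$ is the threshold from the theorem.

```python
import math

# computational costs C_{6,1}, C_{9,1}, C_{6,3} as printed in the text;
# a quotient counts as l products
def cost_H61(m, mu0, mu1, l):
    return (2*m**2 + m)*mu0 + m**2*mu1 + m/6*(2*m**2 + 39*m - 11 + 9*l*(m + 3))

def cost_H91(m, mu0, mu1, l):
    return (2*m**2 + 2*m)*mu0 + m**2*mu1 + m/6*(2*m**2 + 69*m - 11 + 9*l*(m + 5))

def cost_H63(m, mu0, mu1, l):
    return (2*m**2 + m)*mu0 + m**2*mu1 + m/3*(2*m**2 + 12*m - 8 + 6*l*(m + 2))

def log_efficiency(rho, C):
    # log of E = rho**(1/C); monotone in E, so valid for comparisons
    return math.log(rho) / C

m, mu0, mu1, l = 44, 1.0, 1.0, 1
E61 = log_efficiency(6, cost_H61(m, mu0, mu1, l))
E91 = log_efficiency(9, cost_H91(m, mu0, mu1, l))
E63 = log_efficiency(6, cost_H63(m, mu0, mu1, l))
```

For these values one finds $E_{9,1} > E_{6,1} > E_{6,3}$, in agreement with parts (b) and (c).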

## 4. Numerical Results

This section is devoted to checking the effectiveness and efficiency of the proposed methods on different types of applications. In all cases, we apply the proposed schemes H$6 , 1$ and H$9 , 1$ and compare the results with those obtained by the known methods H$6 , 2$, H$6 , 3$, and H$6 , 4$. We consider academic examples, a special case of a nonlinear conservative problem that is transformed into a nonlinear system by approximating the derivatives by divided differences and, also, the approximation of the solution of an elliptic partial differential equation that models a chemical problem.
All the experiments have been carried out in Matlab 2017 with variable precision arithmetic with 1000 digits of mantissa, on an Intel Core i7-4700HQ processor at 2.40 GHz with 16.0 GB of RAM. In the tables we include the number of iterations (iter), the residual error of the corresponding function ($∥ F ( x ( k + 1 ) ) ∥$) at the last iterate, and the difference between the two last iterates ($∥ x ( k + 1 ) − x ( k ) ∥$). We also present the approximated computational order of convergence (ACOC) defined in  with the expression
$A C O C ≈ log ∥ x ( k + 1 ) − x ( k ) ∥ / ∥ x ( k ) − x ( k − 1 ) ∥ log ∥ x ( k ) − x ( k − 1 ) ∥ / ∥ x ( k − 1 ) − x ( k − 2 ) ∥ .$
When the values of this estimation are stable, it is an approximation of the theoretical order of convergence. Otherwise, it does not give us any information, and we denote it by −. The execution time (in seconds) has also been calculated (by means of the “cputime” Matlab command) as the mean value of 100 consecutive executions, for the large systems corresponding to Examples 1 to 3.
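The ACOC is easy to compute from the iterate history. Here is a scalar sketch (our own, not the paper's Matlab code), checked on Newton's method for $x^2 = 2$, which converges quadratically, so the ACOC should approach 2.

```python
import math

def acoc(xs):
    """Approximated computational order of convergence from the last
    four iterates, following the formula above."""
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# quadratically convergent sequence: Newton on f(x) = x^2 - 2
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x*x - 2) / (2*x))
order = acoc(xs)  # close to 2
```

For vector iterates, the absolute values are simply replaced by norms of the differences between consecutive iterates.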

#### 4.1. Example 1

We consider the case of a nonlinear conservative problem described by the differential equation
$y ″ ( t ) + Φ ( y ( t ) ) = 0 , t ∈ [ 0 , 1 ] ,$
with the boundary conditions
$y ( 0 ) = y ( 1 ) = 0 .$
We transform this boundary problem into a system of nonlinear equations by approximating the second derivative by a divided difference of second order. We introduce points $t i = 0 + i h$, $i = 0 , 1 , … , m + 1$, where $h = 1 m + 1$ and m is an appropriate positive integer. A scheme is then designed for the determination of numbers $y i$ as approximations of the solution $y ( t )$ at point $t i$. By using divided differences of second order
$y ″ ( t i ) ≈ y i + 1 − 2 y i + y i − 1 h 2 ,$
we transform the boundary problem into the nonlinear system
$y i + 1 − 2 y i + y i − 1 + h 2 Φ ( y i ) = 0 , i = 1 , 2 , … , m .$
Introducing vectors
$y = ( y 1 , y 2 , … , y m ) T , Φ y = ( Φ ( y 1 ) , Φ ( y 2 ) , … , Φ ( y m ) ) T$
and the matrix of size $m × m$
$A = \begin{pmatrix} -2 & 1 & 0 & \cdots & 0 \\ 1 & -2 & 1 & \cdots & 0 \\ 0 & 1 & -2 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & -2 \end{pmatrix} ,$
system (38) can be written in the form
$F ( y ) ≡ A y + h 2 Φ y = 0 .$
In this case, we choose the law $Φ ( y ( t ) ) = 1 + y ( t ) 3$ for the heat generation in the boundary problem, and we solve system (38) by using the iterative methods H$6 , 1$, H$6 , 2$, H$6 , 3$, H$6 , 4$, and H$9 , 1$. In all cases, we use the initial estimation $x ( 0 ) = ( 0.5 , 0.5 , … , 0.5 ) T$ and the stopping criterion $∥ x ( k + 1 ) − x ( k ) ∥ < 10 − 100$ or $∥ F ( x ( k + 1 ) ) ∥ < 10 − 100$. Table 1 shows the results obtained for $m = 20$ and $m = 50$.
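For reference, the discretized system (38) with $Φ(y) = 1 + y^3$ can be assembled and solved in a few lines. In this sketch (our own) we use plain Newton in double precision as a stand-in for the H schemes, since only the setup of F and its Jacobian is being illustrated; the grid size matches the text and the tolerance is our own.

```python
import numpy as np

m = 20
h = 1.0 / (m + 1)
# tridiagonal matrix A of the discretized boundary problem
A = (np.diag(-2.0*np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1))

def F(y):
    # F(y) = A y + h^2 Phi(y), with Phi(y) = 1 + y^3 componentwise
    return A @ y + h**2 * (1.0 + y**3)

def JF(y):
    # Jacobian: A plus the diagonal of Phi'(y) = 3 y^2 scaled by h^2
    return A + h**2 * np.diag(3.0*y**2)

y = 0.5*np.ones(m)   # same initial estimation as in the text
for _ in range(20):  # plain Newton, stand-in for the H schemes
    y = y - np.linalg.solve(JF(y), F(y))
    if np.linalg.norm(F(y)) < 1e-13:
        break
```

The computed profile is close to the parabola $t(1-t)/2$ that solves the linearized problem $y'' = -1$, with a small correction from the cubic term.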
There are no significant differences among the results obtained for different values of the step h, i.e., for different sizes of the nonlinear system resulting from the discretization. Let us remark that, for the same (lowest) number of iterations, the best results in terms of the smallest residuals are obtained by methods H$9 , 1$ and H$6 , 1$.

#### 4.2. Example 2

Let us consider the system of nonlinear equations
$∑ j = 1 , j ≠ i m x j − e − x i = 0 , i = 1 , 2 , … , m ,$
for an arbitrary positive integer m. We solve this system with the same schemes as before, using the initial guess $x ( 0 ) = ( 1 , 1 , … , 1 ) T$ and two values of the size, $m = 20$ and $m = 50$, whose solutions are approximately $α = ( 0.05 , 0.05 , … , 0.05 ) T$ and $α = ( 0.02 , 0.02 , … , 0.02 ) T$, respectively. The stopping criterion is again $∥ x ( k + 1 ) − x ( k ) ∥ < 10 − 100$ or $∥ F ( x ( k + 1 ) ) ∥ < 10 − 100$; that is, the process finishes when one of them is satisfied. The obtained results are shown in Table 2.
In this case, all the methods use the same number of iterations to achieve the solution, but the lowest residuals are those of the proposed schemes H$9 , 1$ and H$6 , 1$.
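This system has a convenient closed-form Jacobian: $∂f_i/∂x_i = e^{-x_i}$ and $∂f_i/∂x_j = 1$ for $j \neq i$. A sketch of the setup (our own), again with plain Newton in double precision as a stand-in for the H schemes; by symmetry the iterates keep all components equal, so the solution satisfies $(m-1)t = e^{-t}$.

```python
import numpy as np

m = 20

def F(x):
    # f_i(x) = sum_{j != i} x_j - exp(-x_i)
    return x.sum() - x - np.exp(-x)

def JF(x):
    # off-diagonal entries are 1; diagonal entries are exp(-x_i)
    return np.ones((m, m)) - np.eye(m) + np.diag(np.exp(-x))

x = np.ones(m)       # same initial guess as in the text
for _ in range(30):  # plain Newton, stand-in for the H schemes
    x = x - np.linalg.solve(JF(x), F(x))
    if np.linalg.norm(F(x)) < 1e-12:
        break
```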

#### 4.3. Example 3

Gas dynamics can be modeled by a boundary problem described by the following elliptic partial differential equation and the boundary conditions:
$u x x + u y y = u 3 , ( x , y ) ∈ [ 0 , 1 ] × [ 0 , 1 ] , u ( x , 0 ) = 2 x 2 − x + 1 , u ( x , 1 ) = 2 , u ( 0 , y ) = 2 y 2 − y + 1 , u ( 1 , y ) = 2 .$
By using central divided differences and the step $h = 1 / 5$ in both variables, this problem is discretized into the nonlinear system
$F ( x ) = A x + h 2 ϕ ( x ) − b = 0 ,$
where
$A = \begin{pmatrix} B & -I & 0 & 0 \\ -I & B & -I & 0 \\ 0 & -I & B & -I \\ 0 & 0 & -I & B \end{pmatrix} , \qquad B = \begin{pmatrix} 4 & -1 & 0 & 0 \\ -1 & 4 & -1 & 0 \\ 0 & -1 & 4 & -1 \\ 0 & 0 & -1 & 4 \end{pmatrix} ,$
where I is the identity matrix of size $4 × 4$, $ϕ ( x ) = ( x 1 3 , x 2 3 , … , x 16 3 ) T$, and
$b = 44 25 , 23 25 , 28 25 , 87 25 , 23 25 , 0 , 0 , 2 , 28 25 , 0 , 0 , 2 , 87 25 , 2 , 2 , 4 T .$
To solve problem (39), we have used the initial guess $x ( 0 ) = ( 1 , 1 , … , 1 ) T$. The stopping criteria $∥ F ( x ( k + 1 ) ) ∥ < 10 − 100$ or $∥ x ( k + 1 ) − x ( k ) ∥ < 10 − 100$ have been used again, and the process finishes when one of them is satisfied or when the number of iterations reaches 50.
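The 16-equation system (39) is easy to assemble with Kronecker products: the block structure of A printed above is exactly `kron(I, B) - kron(S, I)` with S the shift pattern of the blocks. The sketch below (our own) uses the vector b printed above and plain Newton in double precision as a stand-in for the H schemes.

```python
import numpy as np

# 4x4 interior grid on [0,1]x[0,1] with step h = 1/5 (16 unknowns)
h = 1/5
B = 4*np.eye(4) - np.diag(np.ones(3), 1) - np.diag(np.ones(3), -1)
S = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
A = np.kron(np.eye(4), B) - np.kron(S, np.eye(4))
# boundary contributions, as printed in the text (in units of 1/25)
b = np.array([44, 23, 28, 87, 23, 0, 0, 50, 28, 0, 0, 50,
              87, 50, 50, 100]) / 25

def F(x):
    # F(x) = A x + h^2 phi(x) - b, with phi(x) = x^3 componentwise
    return A @ x + h**2 * x**3 - b

def JF(x):
    return A + h**2 * np.diag(3*x**2)

x = np.ones(16)      # same initial guess as in the text
for _ in range(20):  # plain Newton, stand-in for the H schemes
    x = x - np.linalg.solve(JF(x), F(x))
    if np.linalg.norm(F(x)) < 1e-12:
        break
```

Since $u^3 \geqslant 0$ for the positive boundary data, the discrete solution stays below the boundary maximum $u = 2$.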
The results are shown in Table 3. The first column shows the numerical aspects (approximated computational order of convergence ACOC, last difference between consecutive iterates $∥ x ( k + 1 ) − x ( k ) ∥$ and residual $∥ F ( x ( k + 1 ) ) ∥$) analyzed for the schemes used to solve the problem. In the rest of the columns we show the numerical results obtained by the methods $H 6 , 1$, $H 6 , 2$, $H 6 , 3$, $H 6 , 4$, and $H 9 , 1$.
The low number of iterations needed justifies why the ACOC does not properly approximate the theoretical order of convergence. The methods giving the lowest exact error at the last iteration are again H$9 , 1$ and H$6 , 1$.

#### 4.4. Example 4

Let us consider the nonlinear two-dimensional system $F ( x 1 , x 2 ) = 0$ with coordinate functions
$f 1 ( x 1 , x 2 ) = log ( x 1 2 ) − 2 log ( cos ( x 2 ) ) , f 2 ( x 1 , x 2 ) = x 1 tan x 1 2 + x 2 − 2 .$
This system has two real solutions, approximately at $( 0.954811 , 0.301815 ) T$ and $( − 0.954811 , − 0.301815 ) T$. By using $x ( 0 ) = ( 1 , 0.5 ) T$ as initial estimation, we have obtained the results appearing in Table 4, which shows $∥ x ( k + 1 ) − x ( k ) ∥$ and the residuals $∥ F ( x ( k + 1 ) ) ∥$ for the first three iterations.
It can be observed that, although the difference between two consecutive iterations is not very small, the precision in the estimation of the root is very high, the best being that of the proposed schemes H$6 , 1$ and H$9 , 1$.
Moreover, dynamical planes help us to get global information about the convergence process. In Figure 1 we can see the dynamical planes of the proposed methods H$6 , 1$ and H$9 , 1$ and of the known schemes H$6 , 2$, H$6 , 3$, and H$6 , 4$ when they are applied to the system $F ( x 1 , x 2 ) = 0$. These figures are obtained by using the routines described in . To draw these images, a mesh of $400 × 400$ initial points has been used, with a maximum of 80 iterations and a tolerance of $10 − 3$ as the stopping criterion. We use a white star to mark the roots of the nonlinear system. A color is assigned to each initial estimation (each point of the mesh) depending on the root it converges to: blue and orange correspond to the basins of the roots of system (40) (brighter the fewer iterations are needed to converge), and black is used when the maximum number of iterations is reached or the process diverges.
In Figure 1, the shape and wideness of the basins of attraction show that H$6 , 1$, H$6 , 2$, H$6 , 3$, and H$9 , 1$ can find either of the two roots from a large set of initial estimations, some of them far from the roots. However, H$6 , 4$ is hardly able to find the roots, and its basins are very small.

#### 4.5. Example 5

Let us consider the nonlinear two-dimensional system $G ( x 1 , x 2 ) = 0$ with coordinate functions
$g 1 ( x 1 , x 2 ) = x 1 2 + x 2 2 − 1 ,$
$g 2 ( x 1 , x 2 ) = x 1 2 − x 2 2 + 1 2 .$
This system has four real solutions, at the points $\left( \tfrac{1}{2}, \tfrac{\sqrt{3}}{2} \right)^T$, $\left( \tfrac{1}{2}, -\tfrac{\sqrt{3}}{2} \right)^T$, $\left( -\tfrac{1}{2}, \tfrac{\sqrt{3}}{2} \right)^T$, and $\left( -\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2} \right)^T$. Let us remark that they are symmetric, as the system represents the intersection of two conics, a circle and a hyperbola. By using $x ( 0 ) = ( 1 , 1 ) T$ as initial estimation, we have obtained the results appearing in Table 5, which also shows $∥ x ( k + 1 ) − x ( k ) ∥$ and the residuals $∥ F ( x ( k + 1 ) ) ∥$ for the first three iterations.
The numerical results appearing in Table 5 show that the estimation of the root has similar errors for all the sixth-order methods, although H$6 , 2$ and H$6 , 3$ are slightly better than H$6 , 1$. Indeed, the ninth-order scheme has the best precision in the approximation of the roots of $G ( x 1 , x 2 )$, as corresponds to its higher order of convergence. In terms of computational time, the mean of one hundred consecutive executions has been used in order to get good estimations of the real time; the proposed methods use times similar to those of the existing schemes while getting better accuracy.
In Figure 2, the colors blue, orange, green, and purple correspond to the basins of the roots of system (41). We can see the dynamical planes of the proposed methods H$6 , 1$ and H$9 , 1$ and of the known schemes H$6 , 2$, H$6 , 3$, and H$6 , 4$ when they are applied to the system $G ( x 1 , x 2 ) = 0$.
Regarding the stability of the proposed and known iterative processes on $G ( x 1 , x 2 )$, it can be observed in Figure 2 that, in all cases except H$6 , 4$, the connected component of the basin of attraction that holds the root (usually known as the immediate basin of attraction) is very wide. For all the iterative methods, black areas of slow convergence or divergence appear, more diffuse in the case of H$6 , 2$ and wider for H$6 , 4$. For methods H$6 , 1$, H$6 , 3$, and H$9 , 1$, the black areas are mainly regions of slower convergence that would be colored if a higher maximum number of iterations were fixed; meanwhile, the biggest black area of the dynamical plane associated with method H$6 , 4$ corresponds to divergence.
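The basin computation behind these figures can be sketched as follows. For brevity we classify a coarse mesh (not the $400 \times 400$ one of the figures) with Newton's method as a stand-in for the H schemes; for this particular system the Newton iteration decouples into two Babylonian square-root iterations, so every initial point with nonzero components converges, and the black set reduces to the coordinate axes.

```python
import numpy as np

# the four roots of system (41), in a fixed order
roots = np.array([[0.5, np.sqrt(3)/2], [0.5, -np.sqrt(3)/2],
                  [-0.5, np.sqrt(3)/2], [-0.5, -np.sqrt(3)/2]])

def G(x):
    return np.array([x[0]**2 + x[1]**2 - 1, x[0]**2 - x[1]**2 + 0.5])

def JG(x):
    return np.array([[2*x[0], 2*x[1]], [2*x[0], -2*x[1]]])

def basin_index(x0, max_iter=80, tol=1e-3):
    """Index of the root x0 converges to under Newton's method,
    or -1 for no convergence (painted black in the figures)."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        J = JG(x)
        if abs(np.linalg.det(J)) < 1e-14:   # singular Jacobian
            return -1
        x = x - np.linalg.solve(J, G(x))
        d = np.linalg.norm(roots - x, axis=1)
        if d.min() < tol:
            return int(d.argmin())
    return -1

# classify a coarse mesh over [-2, 2] x [-2, 2]
grid = [[basin_index((a, b)) for a in np.linspace(-2, 2, 41)]
        for b in np.linspace(-2, 2, 41)]
```

Replacing `basin_index`'s inner step with one of the H schemes, refining the mesh, and mapping the indices to colors reproduces a plot of the kind shown in Figure 2.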

## 5. Concluding Remarks

Starting from the Potra–Pták third-order scheme, an efficient sixth-order method for solving nonlinear systems has been proposed. Moreover, it has been extended by adding subsequent steps with the same structure, each of which requires one new functional evaluation and increases the order of the resulting procedure by three units. The proposed methods have been shown to be more efficient than several known methods of the same order. Numerical tests with academic and real-life problems have been performed to check the efficiency and applicability of the designed procedures, and the numerical performance has been contrasted with the stability shown by the dynamical planes of the new and known methods on two-dimensional systems.

## Author Contributions

Methodology, H.A.; Writing—original draft, J.R.T.; Writing—review & editing, A.C.

## Funding

This research was partially supported by Ministerio de Economía y Competitividad under grants MTM2014-52016-C2-2-P and Generalitat Valenciana PROMETEO/2016/089.

## Acknowledgments

The authors are grateful to the anonymous referees for their valuable comments and suggestions to improve the paper.

## Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Basins of attraction of known and proposed methods on $F(x_1, x_2) = 0$. (a) H$_{6,1}$; (b) H$_{6,2}$; (c) H$_{6,3}$; (d) H$_{6,4}$; (e) H$_{9,1}$.
Figure 2. Basins of attraction of known and proposed methods on $G(x_1, x_2) = 0$. (a) H$_{6,1}$; (b) H$_{6,2}$; (c) H$_{6,3}$; (d) H$_{6,4}$; (e) H$_{9,1}$.
Table 1. Numerical results for conservative boundary problem.

| | H$_{6,1}$ | H$_{6,2}$ | H$_{6,3}$ | H$_{6,4}$ | H$_{9,1}$ |
|---|---|---|---|---|---|
| **$m = 20$** | | | | | |
| ACOC | 5.5833 | 6.0210 | – | – | 6.2081 |
| iter | 3 | 3 | 4 | 4 | 3 |
| $\|x^{(k+1)} - x^{(k)}\|$ | $2.78 \times 10^{-35}$ | $1.37 \times 10^{-34}$ | $2.69 \times 10^{-98}$ | $3.59 \times 10^{-96}$ | $8.63 \times 10^{-59}$ |
| $\|F(x^{(k+1)})\|$ | $6.10 \times 10^{-125}$ | $1.17 \times 10^{-101}$ | $4.55 \times 10^{-229}$ | $8.10 \times 10^{-225}$ | $1.87 \times 10^{-210}$ |
| CPU time (seconds) | 0.025 | 0.024 | 0.028 | 0.026 | 0.022 |
| **$m = 50$** | | | | | |
| ACOC | 3.1024 | 2.1869 | – | – | 6.0462 |
| iter | 3 | 3 | 4 | 4 | 3 |
| $\|x^{(k+1)} - x^{(k)}\|$ | $3.81 \times 10^{-34}$ | $2.08 \times 10^{-34}$ | $6.93 \times 10^{-97}$ | $9.18 \times 10^{-95}$ | $2.24 \times 10^{-57}$ |
| $\|F(x^{(k+1)})\|$ | $2.76 \times 10^{-121}$ | $8.62 \times 10^{-101}$ | $1.03 \times 10^{-225}$ | $1.80 \times 10^{-221}$ | $7.16 \times 10^{-206}$ |
| CPU time (seconds) | 0.042 | 0.042 | 0.043 | 0.042 | 0.044 |
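The ACOC rows in these tables report the usual approximated computational order of convergence, $\rho_k = \ln\big(\|x^{(k+1)}-x^{(k)}\| / \|x^{(k)}-x^{(k-1)}\|\big) / \ln\big(\|x^{(k)}-x^{(k-1)}\| / \|x^{(k-1)}-x^{(k-2)}\|\big)$, which needs at least four consecutive iterates. A minimal sketch of this estimate follows; the synthetic second-order sequence is only an illustrative assumption.

```python
import numpy as np

def acoc(iterates):
    """Approximated computational order of convergence (ACOC) from a
    sequence of iterates x^(0), ..., x^(k+1); returns the rho_k estimates."""
    # e_k = ||x^(k+1) - x^(k)||
    e = [np.linalg.norm(np.asarray(b) - np.asarray(a))
         for a, b in zip(iterates, iterates[1:])]
    # rho_k = ln(e_k / e_{k-1}) / ln(e_{k-1} / e_{k-2})
    return [np.log(e[k] / e[k - 1]) / np.log(e[k - 1] / e[k - 2])
            for k in range(2, len(e))]

# Synthetic sequence whose error is squared at every step (order 2)
xs = [1.0 + 1e-1, 1.0 + 1e-2, 1.0 + 1e-4, 1.0 + 1e-8, 1.0 + 1e-16]
rhos = acoc(xs)  # the estimates approach 2
```

As the dashes in the table suggest, the estimate is only meaningful when enough significant iterates are available; for very fast methods the error can hit working precision before $\rho_k$ stabilizes.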
Table 2. Numerical results for Example 2.

| | H$_{6,1}$ | H$_{6,2}$ | H$_{6,3}$ | H$_{6,4}$ | H$_{9,1}$ |
|---|---|---|---|---|---|
| **$m = 20$** | | | | | |
| ACOC | 5.9898 | 5.9078 | 5.9248 | 5.9442 | 8.4359 |
| iter | 3 | 3 | 3 | 3 | 3 |
| $\|x^{(k+1)} - x^{(k)}\|$ | $3.10 \times 10^{-45}$ | $6.62 \times 10^{-46}$ | $1.67 \times 10^{-46}$ | $3.55 \times 10^{-47}$ | $8.19 \times 10^{-78}$ |
| $\|F(x^{(k+1)})\|$ | $3.45 \times 10^{-155}$ | $1.94 \times 10^{-127}$ | $1.24 \times 10^{-128}$ | $5.59 \times 10^{-130}$ | $6.49 \times 10^{-271}$ |
| CPU time (seconds) | 0.031 | 0.028 | 0.030 | 0.028 | 0.029 |
| **$m = 50$** | | | | | |
| ACOC | 4.3931 | 2.0177 | 5.8962 | 5.9425 | 7.0463 |
| iter | 3 | 3 | 3 | 3 | 3 |
| $\|x^{(k+1)} - x^{(k)}\|$ | $1.04 \times 10^{-49}$ | $1.10 \times 10^{-52}$ | $2.90 \times 10^{-53}$ | $8.37 \times 10^{-54}$ | $2.50 \times 10^{-83}$ |
| $\|F(x^{(k+1)})\|$ | $9.16 \times 10^{-170}$ | $6.01 \times 10^{-142}$ | $4.15 \times 10^{-143}$ | $3.46 \times 10^{-144}$ | $5.37 \times 10^{-289}$ |
| CPU time (seconds) | 0.060 | 0.058 | 0.058 | 0.061 | 0.059 |
Table 3. Numerical results for elliptic partial differential equation.

| | H$_{6,1}$ | H$_{6,2}$ | H$_{6,3}$ | H$_{6,4}$ | H$_{9,1}$ |
|---|---|---|---|---|---|
| ACOC | 3.0100 | 2.0107 | 5.6132 | 5.8724 | 5.2651 |
| iter | 3 | 3 | 3 | 3 | 3 |
| $\|x^{(k+1)} - x^{(k)}\|$ | $4.51 \times 10^{-40}$ | $1.82 \times 10^{-48}$ | $3.74 \times 10^{-47}$ | $3.56 \times 10^{-46}$ | $6.95 \times 10^{-67}$ |
| $\|F(x^{(k+1)})\|$ | $6.27 \times 10^{-138}$ | $4.19 \times 10^{-129}$ | $1.67 \times 10^{-126}$ | $1.52 \times 10^{-124}$ | $2.45 \times 10^{-234}$ |
| CPU time (seconds) | 0.035 | 0.036 | 0.035 | 0.035 | 0.035 |
Table 4. Numerical results for Example 4.

| | H$_{6,1}$ | H$_{6,2}$ | H$_{6,3}$ | H$_{6,4}$ | H$_{9,1}$ |
|---|---|---|---|---|---|
| $\|x^{(1)} - x^{(0)}\|$ | $1.90 \times 10^{-1}$ | $1.55 \times 10^{-1}$ | $1.45 \times 10^{-1}$ | $1.27 \times 10^{-1}$ | $2.01 \times 10^{-1}$ |
| $\|x^{(2)} - x^{(1)}\|$ | $1.44 \times 10^{-2}$ | $5.69 \times 10^{-2}$ | $7.36 \times 10^{-2}$ | $1.18 \times 10^{-1}$ | $2.62 \times 10^{-3}$ |
| $\|x^{(3)} - x^{(2)}\|$ | $1.07 \times 10^{-9}$ | $5.15 \times 10^{-7}$ | $1.25 \times 10^{-5}$ | $1.74 \times 10^{-4}$ | $1.69 \times 10^{-18}$ |
| $\|F(x^{(1)})\|$ | $4.12 \times 10^{-2}$ | $1.21 \times 10^{-1}$ | $1.41 \times 10^{-1}$ | $1.85 \times 10^{-1}$ | $7.56 \times 10^{-3}$ |
| $\|F(x^{(2)})\|$ | $2.41 \times 10^{-9}$ | $2.65 \times 10^{-6}$ | $3.05 \times 10^{-5}$ | $2.99 \times 10^{-4}$ | $6.77 \times 10^{-18}$ |
| $\|F(x^{(3)})\|$ | $3.21 \times 10^{-44}$ | $1.54 \times 10^{-23}$ | $6.19 \times 10^{-20}$ | $6.93 \times 10^{-16}$ | $5.39 \times 10^{-86}$ |
Table 5. Numerical results for Example 5.

| | H$_{6,1}$ | H$_{6,2}$ | H$_{6,3}$ | H$_{6,4}$ | H$_{9,1}$ |
|---|---|---|---|---|---|
| $\|x^{(1)} - x^{(0)}\|$ | $5.10 \times 10^{-1}$ | $5.15 \times 10^{-1}$ | $5.125 \times 10^{-1}$ | $5.10 \times 10^{-1}$ | $5.16 \times 10^{-1}$ |
| $\|x^{(2)} - x^{(1)}\|$ | $7.96 \times 10^{-3}$ | $2.38 \times 10^{-3}$ | $5.63 \times 10^{-3}$ | $8.30 \times 10^{-3}$ | $1.46 \times 10^{-3}$ |
| $\|x^{(3)} - x^{(2)}\|$ | $6.03 \times 10^{-12}$ | $3.54 \times 10^{-16}$ | $3.60 \times 10^{-13}$ | $8.89 \times 10^{-12}$ | $1.14 \times 10^{-23}$ |
| $\|F(x^{(1)})\|$ | $1.13 \times 10^{-2}$ | $3.37 \times 10^{-3}$ | $8.00 \times 10^{-3}$ | $1.18 \times 10^{-2}$ | $2.07 \times 10^{-3}$ |
| $\|F(x^{(2)})\|$ | $8.53 \times 10^{-12}$ | $5.00 \times 10^{-16}$ | $5.10 \times 10^{-13}$ | $1.26 \times 10^{-11}$ | $1.61 \times 10^{-23}$ |
| $\|F(x^{(3)})\|$ | $2.56 \times 10^{-56}$ | $8.87 \times 10^{-62}$ | $8.99 \times 10^{-57}$ | $5.02 \times 10^{-54}$ | $6.87 \times 10^{-161}$ |
