
Algorithms 2017, 10(2), 45; https://doi.org/10.3390/a10020045

Article
An Efficient Sixth-Order Newton-Type Method for Solving Nonlinear Systems
1 School of Mathematics and Physics, Bohai University, Jinzhou 121013, China
2 Department of Mathematics and Information Engineering, Puyang Vocational and Technical College, Puyang 457000, China
* Author to whom correspondence should be addressed.
Academic Editors: Alicia Cordero, Juan R. Torregrosa and Francisco I. Chicharro
Received: 26 January 2017 / Accepted: 20 April 2017 / Published: 25 April 2017

Abstract: In this paper, we present a new sixth-order iterative method for solving nonlinear systems and prove a local convergence result. The new method requires solving five linear systems per iteration. An important feature of the new method is that the LU (lower-upper) decomposition of the Jacobian matrix is computed only once in each iteration. The computational efficiency index of the new method is compared with that of some known methods. Numerical results show that the convergence behavior of the new method is similar to that of the existing methods. The new method can be applied to small- and medium-sized nonlinear systems.
Keywords:
nonlinear systems; iterative method; Newton’s method; computational efficiency
MSC:
65H10

1. Introduction

We consider the problem of finding a zero of a nonlinear function $F : D \subset \mathbb{R}^m \to \mathbb{R}^m$, that is, a solution $\alpha$ of the nonlinear system $F(x) = 0$ of $m$ equations in $m$ unknowns. Newton's method [1,2] is the best-known method for solving nonlinear systems and can be written as:
$x^{(k+1)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}), \qquad (1)$
where $F'(x^{(k)})$ is the Jacobian matrix of the function $F$ evaluated at the $k$th iterate and $F'(x^{(k)})^{-1}$ is the inverse of $F'(x^{(k)})$. Newton's method is denoted by NM. It converges quadratically if $F'(\alpha)$ is nonsingular and $F'(x)$ is Lipschitz continuous on $D$. Method (1) can be written as:
$\begin{cases} F'(x^{(k)})\, \gamma^{(k)} = F(x^{(k)}), \\ x^{(k+1)} = x^{(k)} - \gamma^{(k)}, \end{cases} \qquad (2)$
which requires $(m^3 - m)/3$ multiplications and divisions for the LU decomposition (lower-upper factorization) and $m^2$ multiplications and divisions for solving the two triangular linear systems. So, the computational cost (multiplications and divisions) of method (1) is $(m^3 - m)/3 + m^2$ per iteration.
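The two-step form of Newton's method above (solve $F'(x^{(k)})\,\gamma^{(k)} = F(x^{(k)})$, then update) maps directly to code: each iteration performs one matrix factorization and two triangular solves, which `numpy.linalg.solve` carries out internally. A minimal sketch in Python, applied to a hypothetical 2×2 test system of our own choosing (not one of the paper's test problems):

```python
import numpy as np

def newton(F, J, x, tol=1e-12, kmax=50):
    """Newton's method: solve F'(x) gamma = F(x), then x <- x - gamma."""
    for _ in range(kmax):
        # np.linalg.solve performs one LU factorization plus two triangular solves
        gamma = np.linalg.solve(J(x), F(x))
        x = x - gamma
        if np.linalg.norm(gamma) < tol:
            break
    return x

# Hypothetical test system: x0^2 + x1^2 = 4, x0*x1 = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
root = newton(F, J, np.array([2.0, 0.5]))
```

The quadratic convergence means only a handful of iterations are needed from a reasonable starting point.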
In order to accelerate convergence or to reduce the computational cost and the number of function evaluations per step, many efficient methods have been proposed for solving nonlinear systems; see [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21] and the references therein. Cordero et al. [3,4] developed some variants of Newton's method. One of them is the following fourth-order method [3]:
$\begin{cases} y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}), \\ x^{(k+1)} = y^{(k)} - [\,2I - F'(x^{(k)})^{-1} F'(y^{(k)})\,]\, F'(x^{(k)})^{-1} F(y^{(k)}), \end{cases} \qquad (3)$
where $I$ is the identity matrix. Method (3) is denoted by CM4 and requires the LU decomposition of the Jacobian matrix only once per full iteration. Based on method (3), Cordero et al. [4] presented the following sixth-order method:
$\begin{cases} y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}), \\ z^{(k)} = y^{(k)} - [\,2I - F'(x^{(k)})^{-1} F'(y^{(k)})\,]\, F'(x^{(k)})^{-1} F(y^{(k)}), \\ x^{(k+1)} = z^{(k)} - F'(y^{(k)})^{-1} F(z^{(k)}), \end{cases} \qquad (4)$
Method (4) is denoted by CHM and requires two LU decompositions per iteration, one for $F'(x^{(k)})$ and one for $F'(y^{(k)})$. Grau-Sánchez et al. [5] obtained the following sixth-order method:
$\begin{cases} y^{(k)} = x^{(k)} - F[\,x^{(k)} + F(x^{(k)}),\, x^{(k)} - F(x^{(k)})\,]^{-1} F(x^{(k)}), \\ z^{(k)} = y^{(k)} - \{\,2F[\,x^{(k)}, y^{(k)}\,] - F[\,x^{(k)} + F(x^{(k)}),\, x^{(k)} - F(x^{(k)})\,]\,\}^{-1} F(y^{(k)}), \\ x^{(k+1)} = z^{(k)} - \{\,2F[\,x^{(k)}, y^{(k)}\,] - F[\,x^{(k)} + F(x^{(k)}),\, x^{(k)} - F(x^{(k)})\,]\,\}^{-1} F(z^{(k)}), \end{cases} \qquad (5)$
where $F[\cdot\,,\cdot]$ denotes the first-order divided difference of $F$ on $D$. Method (5) is denoted by SNAM and requires two LU decompositions for solving the linear systems involved.
It is well known that the computational cost of an iterative method greatly influences its efficiency, and the number of LU decompositions used per iteration plays an important role in that cost. We can therefore reduce the computational cost of an iterative method by reducing the number of LU decompositions in each iteration.
The purpose of this paper is to construct a new sixth-order iterative method for solving small- and medium-sized systems. The theoretical advantages of the new method rest on the assumption that the Jacobian matrix is dense and that LU factorization is used to solve the linear systems with the Jacobian; this assumption does not hold for sparse Jacobian matrices. An important feature of the new method is that the LU decomposition is computed only once per full iteration. This paper is organized as follows. In Section 2, based on the well-known fourth-order method (3), we present a sixth-order iterative method for solving nonlinear systems. Compared with method (3), the new method requires only one additional function evaluation. For a system of $m$ equations, each iteration uses $3m + 2m^2$ evaluations of scalar functions. The computational efficiency is compared with that of some well-known methods in Section 3. In Section 4, numerical examples illustrate the convergence behavior of our method. Section 5 offers a short conclusion.

2. The New Method and Analysis of Convergence

Based on method (3), we construct the following iterative scheme:
$\begin{cases} y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}), \\ z^{(k)} = y^{(k)} - (\,2I - F'(x^{(k)})^{-1} F'(y^{(k)})\,)\, F'(x^{(k)})^{-1} F(y^{(k)}), \\ x^{(k+1)} = z^{(k)} - (\,2I - F'(x^{(k)})^{-1} F'(y^{(k)})\,)\, F'(x^{(k)})^{-1} F(z^{(k)}), \end{cases} \qquad (6)$
where $I$ is the identity matrix. We note that the first two steps of method (6) coincide with those of method (3), and that the third step adds only one function evaluation, $F(z^{(k)})$. Method (6) will be denoted by M6. For method (6), we have the following convergence analysis.
Theorem 1.
Let $\alpha \in \mathbb{R}^m$ be a solution of the system $F(x) = 0$ and let $F : D \subset \mathbb{R}^m \to \mathbb{R}^m$ be sufficiently differentiable in an open neighborhood $D$ of $\alpha$. Suppose that $F'(x)$ is nonsingular in $D$. Then, for an initial approximation sufficiently close to $\alpha$, the sequence generated by method (6) converges to $\alpha$ with order six.
Proof.
Consider the Taylor expansion of $F(x^{(k)})$ around $\alpha$:
$F(x^{(k)}) = F'(\alpha)\,[\,e + A_2 e^2 + A_3 e^3 + O(e^4)\,], \qquad (7)$
where $A_i = \frac{1}{i!} F'(\alpha)^{-1} F^{(i)}(\alpha) \in L_i(\mathbb{R}^m, \mathbb{R}^m)$, $e = x^{(k)} - \alpha$, $A_i e^i \in \mathbb{R}^m$, $F^{(i)}(\alpha) \in L(\mathbb{R}^m \times \cdots \times \mathbb{R}^m, \mathbb{R}^m)$, $F'(\alpha)^{-1} \in L(\mathbb{R}^m)$, and $e^i$ denotes the $i$-tuple $(e, \ldots, e)$. From (7), the derivative of $F$ can be written as:
$F'(x^{(k)}) = F'(\alpha)\,[\,I + 2A_2 e + 3A_3 e^2 + O(e^3)\,] = F'(\alpha)\, D(e) + O(e^3), \qquad (8)$
where $D(e) = I + 2A_2 e + 3A_3 e^2$. The inverse of (8) can be written as:
$F'(x^{(k)})^{-1} = D(e)^{-1} F'(\alpha)^{-1} + O(e^3). \qquad (9)$
We write the inverse of $D(e)$ in the form:
$D(e)^{-1} = I + X_2 e + X_3 e^2 + O(e^3), \qquad (10)$
where $X_2$ and $X_3$ are determined by:
$D(e)\, D(e)^{-1} = D(e)^{-1} D(e) = I. \qquad (11)$
Solving system (11), we get:
$X_2 = -2A_2, \qquad (12)$
$X_3 = 4A_2^2 - 3A_3, \qquad (13)$
and then,
$F'(x^{(k)})^{-1} = [\,I - 2A_2 e + (4A_2^2 - 3A_3)\, e^2\,]\, F'(\alpha)^{-1} + O(e^3). \qquad (14)$
Let us denote $E = y^{(k)} - \alpha$. From (6), (7) and (14), we arrive at:
$E = e - F'(x^{(k)})^{-1} F(x^{(k)}) = A_2 e^2 + O(e^3). \qquad (15)$
By a similar argument to that of (7), we obtain:
$F(y^{(k)}) = F'(\alpha)\,[\,E + A_2 E^2 + O(E^3)\,] = F'(\alpha)\,[\,A_2 e^2 + A_2^3 e^4 + O(e^5)\,], \qquad (16)$
$F'(y^{(k)}) = F'(\alpha)\,[\,I + 2A_2 E + O(E^2)\,] = F'(\alpha)\,[\,I + 2A_2^2 e^2 + O(e^3)\,]. \qquad (17)$
From (14) and (17), we have:
$(\,2I - F'(x^{(k)})^{-1} F'(y^{(k)})\,)\, F'(x^{(k)})^{-1} = [\,I - 2A_2 E - 4A_2^2 e^2\,]\, F'(\alpha)^{-1} + O(e^3) = [\,I - 6A_2^2 e^2\,]\, F'(\alpha)^{-1} + O(e^3). \qquad (18)$
Let us denote $\varepsilon = z^{(k)} - \alpha$. From (14), (15) and (18), we get:
$\varepsilon = z^{(k)} - \alpha = E - [\,I - 2A_2 E - 4A_2^2 e^2\,]\,[\,E + A_2 E^2 + O(E^3)\,] = A_2 E^2 + 4A_2^2 e^2 E + O(e^5) = 5A_2^3 e^4 + O(e^5), \qquad (19)$
$F(z^{(k)}) = F'(\alpha)\,[\,\varepsilon + A_2 \varepsilon^2 + O(\varepsilon^3)\,] = F'(\alpha)\,[\,5A_2^3 e^4 + O(e^5)\,]. \qquad (20)$
Therefore, from (15) and (18)–(20), we obtain the error equation:
$e^{(k+1)} = x^{(k+1)} - \alpha = \varepsilon - [\,I - 2A_2 E - 4A_2^2 e^2\,]\,[\,\varepsilon + O(\varepsilon^2)\,] = 2A_2 E \varepsilon + 4A_2^2 e^2 \varepsilon + O(e^7) = 30A_2^5 e^6 + O(e^7). \qquad (21)$
This implies that method (6) is of sixth-order convergence. This completes the proof.
In order to simplify the calculation, the new method (6) can be written as:
$\begin{cases} F'(x^{(k)})\, a^{(k)} = F(x^{(k)}), & y^{(k)} = x^{(k)} - a^{(k)}, \\ F'(x^{(k)})\, b^{(k)} = F(y^{(k)}), & \\ F'(x^{(k)})\, c^{(k)} = F'(y^{(k)})\, b^{(k)}, & z^{(k)} = y^{(k)} - 2b^{(k)} + c^{(k)}, \\ F'(x^{(k)})\, d^{(k)} = F(z^{(k)}), & \\ F'(x^{(k)})\, g^{(k)} = F'(y^{(k)})\, d^{(k)}, & x^{(k+1)} = z^{(k)} - 2d^{(k)} + g^{(k)}, \end{cases} \qquad (22)$
so that the five linear systems share the coefficient matrix $F'(x^{(k)})$. From (22), we can see that the LU decomposition of the Jacobian matrix $F'(x^{(k)})$ needs to be computed only once per iteration.
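The single-factorization structure of method (6) can be made explicit in code: the Jacobian $F'(x^{(k)})$ is factorized once and the factors are reused for all five solves. The following Python sketch uses a hand-rolled Doolittle LU without pivoting and a hypothetical 2×2 test system; both are our own illustrative choices, not from the paper:

```python
import numpy as np

def lu(A):
    """Doolittle LU factorization without pivoting (adequate for this demo)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, j:] -= L[i, j] * U[j, j:]
    return L, U

def solve_lu(L, U, b):
    """Two triangular solves reusing a precomputed factorization."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # forward substitution L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    s = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back substitution U s = y
        s[i] = (y[i] - U[i, i + 1:] @ s[i + 1:]) / U[i, i]
    return s

def m6_step(F, J, x):
    """One iteration of method (6): five solves, one LU of F'(x)."""
    L, U = lu(J(x))                         # the only factorization this iteration
    a = solve_lu(L, U, F(x)); y = x - a
    Jy = J(y)
    b = solve_lu(L, U, F(y))
    c = solve_lu(L, U, Jy @ b); z = y - 2*b + c
    d = solve_lu(L, U, F(z))
    g = solve_lu(L, U, Jy @ d)
    return z - 2*d + g

# Hypothetical test system: x0^2 + x1^2 = 4, x0*x1 = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
x = np.array([2.0, 0.5])
for _ in range(3):
    x = m6_step(F, J, x)
```

In double precision the iterates reach machine accuracy within two or three iterations; the 2048-digit experiments of Section 4 are needed to observe the sixth order directly.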

3. Computational Efficiency

Here, we compare the computational efficiency index of our sixth-order method (M6, (6)) with Newton's method (NM, (1)), Cordero's fourth-order method (CM4, (3)), Grau-Sánchez's sixth-order method (SNAM, (5)) and Cordero's sixth-order methods (CHM, (4)) and (CTVM, (23)). The method CTVM [7] is as follows:
$\begin{cases} y^{(k)} = x^{(k)} - \frac{1}{2} F'(x^{(k)})^{-1} F(x^{(k)}), \\ z^{(k)} = x^{(k)} + [\,F'(x^{(k)}) - 2F'(y^{(k)})\,]^{-1}\,[\,3F(x^{(k)}) - 4F(y^{(k)})\,], \\ x^{(k+1)} = z^{(k)} + [\,F'(x^{(k)}) - 2F'(y^{(k)})\,]^{-1} F(z^{(k)}). \end{cases} \qquad (23)$
We define, respectively, the computational efficiency index ($CEI$) of the methods NM, CM4, SNAM, CHM, CTVM and M6 as:
$CEI_i = \rho_i^{\,1/C_i(\mu, m)}, \quad i = 1, 2, \ldots, 6, \qquad (24)$
where $\rho_i$ is the convergence order of the method and $C_i(\mu, m)$ is the computational cost of the method, given by:
$C_i(\mu, m) = a_i(m)\, \mu + p_i(m), \qquad (25)$
where $a_i(m)$ represents the number of evaluations of scalar functions used in the evaluations of $F$, $F'$ and $[y, x; F]$, and $p_i(m)$ represents the number of products (multiplications and divisions) per iteration. The parameter $\mu > 0$ is the ratio between the cost of a product and the cost of a scalar function evaluation, needed to express $C_i(\mu, m)$ in terms of products. The divided difference $[y, x; F]$ is defined componentwise by:
$[y, x; F]_{ij} = \frac{f_i(y_1, \ldots, y_j, x_{j+1}, \ldots, x_m) - f_i(y_1, \ldots, y_{j-1}, x_j, \ldots, x_m)}{y_j - x_j}, \quad 1 \le i, j \le m,$
where $F(x)$ and $F(y)$ are computed separately. Computing a divided difference requires $m^2$ quotients and $m(m-1)$ scalar function evaluations. We must add $m$ multiplications for a scalar product, $m^2$ multiplications for a matrix-vector multiplication, and $m^2$ evaluations of scalar functions for any new derivative $F'$. Factorizing a matrix requires $(m^3 - m)/3$ multiplications and divisions for the LU decomposition, and solving the two resulting triangular linear systems requires $m^2$ multiplications and divisions.
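Assuming the standard componentwise definition of the first-order divided difference (the displayed formula is not fully preserved in this copy, so this is our reading of it), the operator is easy to compute; the exactness identity $[y, x; F](y - x) = F(y) - F(x)$, which follows by telescoping over the columns, gives a quick correctness check:

```python
import numpy as np

def divided_difference(F, y, x):
    """Componentwise first-order divided difference [y, x; F]: column j mixes the
    first j components of y with the trailing components of x.  Costs m^2 quotients
    and m(m-1) extra scalar function evaluations beyond F(x) and F(y)."""
    m = len(x)
    M = np.zeros((m, m))
    for j in range(m):
        upper = np.concatenate([y[:j + 1], x[j + 1:]])  # (y_1..y_j, x_{j+1}..x_m)
        lower = np.concatenate([y[:j], x[j:]])          # (y_1..y_{j-1}, x_j..x_m)
        M[:, j] = (F(upper) - F(lower)) / (y[j] - x[j])
    return M

# Hypothetical test function (not from the paper)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
x = np.array([1.0, 2.0])
y = np.array([1.5, 0.5])
M = divided_difference(F, y, x)
```

The sum over columns of $M_{ij}(y_j - x_j)$ telescopes exactly to $f_i(y) - f_i(x)$, so the secant equation holds to rounding error.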
Taking into account the previous considerations, we give the computational cost of each iterative method in Table 1.
From Table 1, we can see that these sixth-order iterative methods have the same number of function evaluations, while our method (M6) needs fewer LU decompositions than the other methods of the same order. Therefore, the computational cost of our method is lower.
We use the following ratio to compare the computational efficiency indices of the iterative methods:
$R_{i,j} = \frac{\log CEI_i}{\log CEI_j} = \frac{\log(\rho_i)\, C_j(\mu, m)}{\log(\rho_j)\, C_i(\mu, m)}, \quad i, j = 1, 2, 3, 4, 5, 6. \qquad (26)$
For $R_{i,j} > 1$, the iterative method $M_i$ is more efficient than $M_j$. We have the following theorem:
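The cost expressions of Table 1 can be evaluated directly; the following Python sketch computes $CEI_i = \rho_i^{1/C_i(\mu,m)}$ for all six methods and reproduces, for example, the $\mu = 2$, $m = 5$ entries of Table 2 for NM and M6:

```python
# Computational costs C_i(mu, m) taken from Table 1 of the paper
def costs(mu, m):
    f = (m**3 - m) / 3                      # cost of one LU factorization
    return {
        "NM":   (2, m*(m+1)*mu + f + m**2),
        "CM4":  (4, 2*m*(m+1)*mu + f + 4*m**2 + m),
        "SNAM": (6, m*(2*m+3)*mu + 2*f + 6*m**2),
        "CHM":  (6, m*(2*m+3)*mu + 2*f + 5*m**2 + m),
        "CTVM": (6, m*(2*m+3)*mu + 2*f + 4*m**2 + 3*m),
        "M6":   (6, m*(2*m+3)*mu + f + 7*m**2 + 2*m),
    }

def cei(rho, C):
    """Computational efficiency index CEI = rho**(1/C)."""
    return rho ** (1.0 / C)

for name, (rho, C) in costs(2, 5).items():
    print(f"{name:5s} CEI = {cei(rho, C):.7f}")
```

For $\mu = 2$ and $m = 5$ this gives $CEI_1 \approx 1.0055606$ and $CEI_6 \approx 1.0050600$, matching Table 2.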
Theorem 2.
For all $\mu > 0$ we have:
1. $CEI_6 > CEI_3$ for all $m \ge 5$;
2. $CEI_6 > CEI_4$ for all $m \ge 7$;
3. $CEI_6 > CEI_5$ for all $m \ge 9$.
Proof.
We note that the methods SNAM, CHM, CTVM and M6 have the same order, $\rho_3 = \rho_4 = \rho_5 = \rho_6 = 6$, and the same number of function evaluations, $a_3(m) = a_4(m) = a_5(m) = a_6(m) = m(2m + 3)$.
• Based on expression (26), the relation between SNAM and M6 is given by:
$R_{6,3} = \frac{\log(\rho_6)\, C_3(\mu, m)}{\log(\rho_3)\, C_6(\mu, m)} = \frac{m(2m+3)\mu + 2(m^3 - m)/3 + 6m^2}{m(2m+3)\mu + (m^3 - m)/3 + 7m^2 + 2m}. \qquad (27)$
Subtracting the denominator from the numerator of (27), we have:
$\tfrac{1}{3}\, m\, (m^2 - 3m - 7). \qquad (28)$
Expression (28) is positive for $m > 4.541$. Thus, we get $CEI_6 > CEI_3$ for all $m \ge 5$ and $\mu > 0$.
• The relation between CHM and M6 is given by:
$R_{6,4} = \frac{\log(\rho_6)\, C_4(\mu, m)}{\log(\rho_4)\, C_6(\mu, m)} = \frac{m(2m+3)\mu + 2(m^3 - m)/3 + 5m^2 + m}{m(2m+3)\mu + (m^3 - m)/3 + 7m^2 + 2m}. \qquad (29)$
Subtracting the denominator from the numerator of (29), we have:
$\tfrac{1}{3}\, m\, (m^2 - 6m - 4). \qquad (30)$
Expression (30) is positive for $m > 6.606$. Thus, we get $CEI_6 > CEI_4$ for all $m \ge 7$ and $\mu > 0$.
• The relation between CTVM and M6 is given by:
$R_{6,5} = \frac{\log(\rho_6)\, C_5(\mu, m)}{\log(\rho_5)\, C_6(\mu, m)} = \frac{m(2m+3)\mu + 2(m^3 - m)/3 + 4m^2 + 3m}{m(2m+3)\mu + (m^3 - m)/3 + 7m^2 + 2m}. \qquad (31)$
Subtracting the denominator from the numerator of (31), we have:
$\tfrac{1}{3}\, m\, (m^2 - 9m + 2). \qquad (32)$
Expression (32) is positive for $m > 8.772$. Thus, we obtain $CEI_6 > CEI_5$ for all $m \ge 9$ and $\mu > 0$. This completes the proof.
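The three thresholds above are the positive roots of the quadratic factors in expressions (28), (30) and (32); a quick numerical check in Python:

```python
import math

# Each expression has the form (1/3) m (m^2 + b m + c); the sign changes at the
# larger root of m^2 + b m + c = 0, which is the threshold quoted in the proof.
def positive_root(b, c):
    return (-b + math.sqrt(b * b - 4 * c)) / 2

t1 = positive_root(-3, -7)   # expression (28): m^2 - 3m - 7
t2 = positive_root(-6, -4)   # expression (30): m^2 - 6m - 4
t3 = positive_root(-9, 2)    # expression (32): m^2 - 9m + 2
# Smallest integers beyond the thresholds: m >= 5, 7, 9 respectively
```

This confirms the integer bounds $m \ge 5$, $m \ge 7$ and $m \ge 9$ stated in Theorem 2.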
Theorem 3.
For all $m \ge 2$ we have:
1. $CEI_6 > CEI_1$ for all $\mu > \frac{m^2 \log(3) + 3(\log(3) - 6\log(2))\, m - (6\log(2) + \log(3))}{3\,(m \log(2/3) + \log(4/3))}$;
2. $CEI_6 > CEI_2$ for all $\mu > \frac{m^2 \log(3/2) + 6(2\log(3) - 5\log(2))\, m + 2(\log(3) - 4\log(2))}{6\,(m \log(2/3) + \log(4/3))}$.
Proof.
• From expression (26) and Table 1, we get the following relation between NM and M6:
$R_{6,1} = \frac{\log(\rho_6)\, C_1(\mu, m)}{\log(\rho_1)\, C_6(\mu, m)} = \frac{\log(6)}{\log(2)} \cdot \frac{m(m+1)\mu + (m^3 - m)/3 + m^2}{m(2m+3)\mu + (m^3 - m)/3 + 7m^2 + 2m}. \qquad (33)$
We consider the boundary $R_{6,1} = 1$, which can be written as:
$\mu = H_{6,1}(m) = \frac{m^2 \log(3) + 3(\log(3) - 6\log(2))\, m - (6\log(2) + \log(3))}{3\,(m \log(2/3) + \log(4/3))}, \qquad (34)$
with $CEI_6 > CEI_1$ above it; boundary (34) intersects the coordinate axes as shown in Figure 1. Thus, we get $CEI_6 > CEI_1$, since $R_{6,1} > 1$, for all $m \ge 2$ and $\mu > H_{6,1}(m)$.
• The relation between CM4 and M6 is given by:
$R_{6,2} = \frac{\log(\rho_6)\, C_2(\mu, m)}{\log(\rho_2)\, C_6(\mu, m)} = \frac{\log(6)}{\log(4)} \cdot \frac{2m(m+1)\mu + (m^3 - m)/3 + 4m^2 + m}{m(2m+3)\mu + (m^3 - m)/3 + 7m^2 + 2m}. \qquad (35)$
We consider the boundary $R_{6,2} = 1$, which can be written as:
$\mu = H_{6,2}(m) = \frac{m^2 \log(3/2) + 6(2\log(3) - 5\log(2))\, m + 2(\log(3) - 4\log(2))}{6\,(m \log(2/3) + \log(4/3))}, \qquad (36)$
with $CEI_6 > CEI_2$ above it; boundary (36) intersects the coordinate axes as shown in Figure 1. Thus, we get $CEI_6 > CEI_2$, since $R_{6,2} > 1$, for all $m \ge 2$ and $\mu > H_{6,2}(m)$.
This completes the proof.
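The boundary $H_{6,1}$ can also be checked numerically: just above it M6 has the larger index, just below it NM does. A Python sketch for $m = 5$ (the helper names are ours):

```python
import math

log = math.log

def H61(m):
    """Boundary mu = H_{6,1}(m) from Theorem 3: above it, CEI_6 > CEI_1."""
    num = m**2 * log(3) + 3 * (log(3) - 6 * log(2)) * m - (6 * log(2) + log(3))
    den = 3 * (m * log(2 / 3) + log(4 / 3))
    return num / den

def cei_nm(mu, m):   # CEI of Newton's method, costs from Table 1
    return 2 ** (1 / (m * (m + 1) * mu + (m**3 - m) / 3 + m**2))

def cei_m6(mu, m):   # CEI of M6, costs from Table 1
    return 6 ** (1 / (m * (2 * m + 3) * mu + (m**3 - m) / 3 + 7 * m**2 + 2 * m))

m = 5
mu_star = H61(m)                 # roughly 4.54 for m = 5
above = cei_m6(mu_star + 0.1, m) > cei_nm(mu_star + 0.1, m)
below = cei_m6(mu_star - 0.1, m) < cei_nm(mu_star - 0.1, m)
```

This is consistent with Tables 2 and 3: at $m = 5$, NM has the larger index for $\mu = 2$ and M6 has the larger index for $\mu = 6$.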
In Table 2 and Table 3, we show the computational efficiency indices of NM, CM4, SNAM, CHM, CTVM and M6 for different sizes of the nonlinear system.
The results shown in Table 2 and Table 3 are in concordance with Theorems 2 and 3. We can see that, for small- and medium-sized systems, the computational efficiency index of our method (M6) is comparable to those of the other methods in this paper.

4. Numerical Examples

In this section, we compare the related methods by numerical experiments. The experiments are performed using the MAPLE computer algebra system with 2048-digit arithmetic. The method M6 is compared with NM, CM4, SNAM, CHM and CTVM by solving some nonlinear systems. The stopping criterion used is $\|x^{(k+1)} - x^{(k)}\| < 10^{-200}$ or $\|F(x^{(k)})\| < 10^{-200}$.
The following nonlinear systems are used.
$F_1(x)$: the initial value is and the solution is .
$F_2(x)$: the initial value is $x^{(0)} = (0.2, 1.5, 1.5)^T$ and the solution is .
$F_3(x) = (f_1(x), f_2(x), \ldots, f_n(x))$, where $x = (x_1, x_2, \ldots, x_n)^T$ and
$f_i(x) = x_i x_{i+1} - 1, \quad i = 1, \ldots, n - 1, \qquad f_n(x) = x_n x_1 - 1.$
When $n$ is odd, the exact zeros of $F_3(x)$ are $\alpha_1 = (1, 1, \ldots, 1)^T$ and $\alpha_2 = (-1, -1, \ldots, -1)^T$. The initial value is .
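Only the last component $f_n(x) = x_n x_1 - 1$ and the zeros $\pm(1, \ldots, 1)$ for odd $n$ survive in this copy; assuming the remaining components follow the cyclic pattern $f_i(x) = x_i x_{i+1} - 1$ (a standard cyclic test system consistent with those facts), the system and its zeros can be checked in a few lines of Python:

```python
import numpy as np

def F3(x):
    """Assumed cyclic system: component i is x_i * x_{i+1} - 1, with x_{n+1} = x_1."""
    return x * np.roll(x, -1) - 1.0

n = 5                                   # an odd size, as in the statement above
r_plus = F3(np.ones(n))                 # candidate zero alpha_1 = (1, ..., 1)
r_minus = F3(-np.ones(n))               # candidate zero alpha_2 = (-1, ..., -1)
```

Both candidate points give the zero vector; for even $n$ the system has additional solution families, which is why the statement is restricted to odd $n$.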
Table 4 presents the results, showing the following information: the number of iterations $k$ needed to converge to the solution, the values of the stopping criteria at the last step, and the computational order of convergence $\rho$, defined by:
$\rho \approx \frac{\ln\left(\|x^{(k+1)} - x^{(k)}\| \,/\, \|x^{(k)} - x^{(k-1)}\|\right)}{\ln\left(\|x^{(k)} - x^{(k-1)}\| \,/\, \|x^{(k-1)} - x^{(k-2)}\|\right)}.$
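The computational order of convergence is estimated from three consecutive step sizes; a Python sketch that applies the formula to Newton iterates on a hypothetical 2×2 system of our own choosing, where the estimate should land close to 2:

```python
import numpy as np

def coc(diffs):
    """Computational order of convergence from the last three step sizes."""
    d1, d2, d3 = diffs[-3:]
    return np.log(d3 / d2) / np.log(d2 / d1)

# Generate iterates with Newton's method on a hypothetical test system
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
x = np.array([2.0, 0.5])
diffs = []
for _ in range(4):
    step = np.linalg.solve(J(x), F(x))
    x = x - step
    diffs.append(np.linalg.norm(step))   # ||x^(k+1) - x^(k)||
rho = coc(diffs)
```

In double precision only a few steps are usable before rounding dominates, which is why the paper's experiments use 2048-digit arithmetic.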
The numerical results shown in Table 4 are in concordance with the theory developed in this paper. The computational order of convergence of our method (M6) is six, which is higher than that of the methods NM and CM4. The iterative method SNAM is not convergent (nc) for $F_3$ with the corresponding initial value. The convergence behavior of our method is similar to that of the existing methods in this paper.

5. Conclusions

In this paper, we have proposed a new sixth-order iterative method for solving nonlinear systems. Although five linear systems must be solved in each iteration, the LU decomposition of their common coefficient matrix is computed only once per full iteration. Numerical results show that our method has convergence behavior similar to that of the existing methods in this paper. The new method is suitable for solving small- and medium-sized systems. To obtain an efficient iterative method for solving nonlinear systems, one should achieve as high a convergence order as possible at as low a computational cost as possible. We note again that the theoretical advantages of the new method rest on the assumption that the Jacobian matrix is dense and that LU factorization is used to solve the linear systems with the Jacobian; this assumption does not hold for sparse Jacobian matrices.

Acknowledgments

This project is supported by the National Natural Science Foundation of China (Nos. 11547005 and 61572082), the PhD Start-up Fund of Liaoning Province of China (No. 201501196) and the Educational Commission Foundation of Liaoning Province of China (No. L2015012).

Author Contributions

Xiaofeng Wang conceived and designed the experiments; Yang Li performed the experiments; Xiaofeng Wang wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kelley, C.T. Solving Nonlinear Equations with Newton’s Method; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
2. Ortega, J.M.; Rheinbolt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
3. Cordero, A.; Martínez, E.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2009, 231, 541–551. [Google Scholar] [CrossRef]
4. Cordero, A.; Hueso, J.L.; Matínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef]
5. Sánchez, M.G.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 263–272. [Google Scholar]
6. Cordero, A.; Hueso, J.L.; Matínez, E.; Torregrosa, J.R. Efficient high-order methods based on golden ratio for nonlinear systems. Appl. Math. Comput. 2011, 217, 4548–4556. [Google Scholar] [CrossRef]
7. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Pseudocomposition: A technique to design predictor-corrector methods for systems of nonlinear equations. Appl. Math. Comput. 2012, 218, 11496–11504. [Google Scholar] [CrossRef]
8. Ezquerro, J.A.; Grau, À.; Sánchez, M.G.; Hernández, M.A. On the efficiency of two variants of Kurchatov’s method for solving nonlinear systems. Numer. Algorithms 2013, 64, 685–698. [Google Scholar] [CrossRef]
9. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: Boston, MA, USA, 1984. [Google Scholar]
10. Ezquerro, J.A.; Grau, À.; Sánchez, M.G.; Hernández, M.A.; Noguera, M. Analysing the efficiency of some modifications of the secant method. Comput. Math. Appl. 2012, 64, 2066–2073. [Google Scholar] [CrossRef]
11. Sánchez, M.G.; Grau, À.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266. [Google Scholar] [CrossRef]
12. Sánchez, M.G.; Grau, À.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743. [Google Scholar]
13. Ezquerro, J.A.; Hernández, M.A.; Romero, N. Solving nonlinear integral equations of Fredholm type with high order iterative methods. J. Comput. Appl. Math. 2011, 236, 1449–1463. [Google Scholar] [CrossRef]
14. Argyros, I.K.; Hilout, S. On the local convergence of fast two-step Newton-like methods for solving nonlinear equations. J. Comput. Appl. Math. 2013, 245, 1–9. [Google Scholar] [CrossRef]
15. Amat, S.; Busquier, S.; Grau, À.; Sánchez, M.G. Maximum efficiency for a family of Newton-like methods with frozen derivatives and some applications. Appl. Math. Comput. 2013, 219, 7954–7963. [Google Scholar] [CrossRef]
16. Wang, X.; Zhang, T. A family of Steffensen type methods with seventh-order convergence. Numer. Algorithms 2013, 62, 429–444. [Google Scholar] [CrossRef]
17. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
18. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
19. Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algorithms 2015, 70, 545–558. [Google Scholar] [CrossRef]
20. Păvăloiu, I. On approximating the inverse of a matrix. Creative Math. 2003, 12, 15–20. [Google Scholar]
21. Diaconu, A.; Păvăloiu, I. Asupra unor metode iterative pentru rezolvarea ecuaţiilor operaţionale neliniare [On some iterative methods for solving nonlinear operator equations]. Rev. Anal. Numer. Teor. Aproximaţiei 1973, 2, 61–69. [Google Scholar]
Figure 1. The boundary functions $H_{6,1}$ and $H_{6,2}$ in the $(\mu, m)$-plane.
Table 1. Computational cost of the iterative methods.

| Methods | $\rho$ | $a(m)$ | $p(m)$ | $C(\mu, m)$ |
|---|---|---|---|---|
| NM | 2 | $m(m+1)$ | $(m^3-m)/3 + m^2$ | $C_1 = m(m+1)\mu + (m^3-m)/3 + m^2$ |
| CM4 | 4 | $2m(m+1)$ | $(m^3-m)/3 + 4m^2 + m$ | $C_2 = 2m(m+1)\mu + (m^3-m)/3 + 4m^2 + m$ |
| SNAM | 6 | $m(2m+3)$ | $2(m^3-m)/3 + 6m^2$ | $C_3 = m(2m+3)\mu + 2(m^3-m)/3 + 6m^2$ |
| CHM | 6 | $m(2m+3)$ | $2(m^3-m)/3 + 5m^2 + m$ | $C_4 = m(2m+3)\mu + 2(m^3-m)/3 + 5m^2 + m$ |
| CTVM | 6 | $m(2m+3)$ | $2(m^3-m)/3 + 4m^2 + 3m$ | $C_5 = m(2m+3)\mu + 2(m^3-m)/3 + 4m^2 + 3m$ |
| M6 | 6 | $m(2m+3)$ | $(m^3-m)/3 + 7m^2 + 2m$ | $C_6 = m(2m+3)\mu + (m^3-m)/3 + 7m^2 + 2m$ |
Table 2. Computational efficiency indices of the methods $(\mu = 2)$.

| $m$ | $CEI_1$ | $CEI_2$ | $CEI_3$ | $CEI_4$ | $CEI_5$ | $CEI_6$ |
|---|---|---|---|---|---|---|
| 5 | 1.0055606 | 1.0052450 | 1.0049895 | 1.0052838 | 1.0055283 | 1.0050600 |
| 7 | 1.0025422 | 1.0025753 | 1.0023729 | 1.0025126 | 1.0026423 | 1.0025375 |
| 9 | 1.0013845 | 1.0014870 | 1.0013340 | 1.0014096 | 1.0014831 | 1.0014905 |
| 11 | 1.0008405 | 1.0009480 | 1.0008314 | 1.0008761 | 1.0009207 | 1.0009643 |
| 20 | 1.0001777 | 1.0002326 | 1.0001898 | 1.0001978 | 1.0002060 | 1.0002482 |
| 50 | 1.0000141 | 1.0000224 | 1.0000165 | 1.0000169 | 1.0000173 | 1.0000258 |
| 100 | 1.0000019 | 1.0000034 | 1.0000023 | 1.0000024 | 1.0000024 | 1.0000040 |
| 200 | 1.0000002 | 1.0000005 | 1.0000003 | 1.0000003 | 1.0000003 | 1.0000006 |
Table 3. Computational efficiency indices of the methods ($\mu = 6$).

| $m$ | $CEI_1$ | $CEI_2$ | $CEI_3$ | $CEI_4$ | $CEI_5$ | $CEI_6$ |
|---|---|---|---|---|---|---|
| 5 | 1.0028332 | 1.0027489 | 1.0028941 | 1.0029907 | 1.0030675 | 1.0029177 |
| 7 | 1.0013956 | 1.0014055 | 1.0014554 | 1.0015068 | 1.0015525 | 1.0015157 |
| 9 | 1.0008054 | 1.0008390 | 1.0008536 | 1.0008839 | 1.0009123 | 1.0009150 |
| 11 | 1.0005124 | 1.0005505 | 1.0005504 | 1.0005697 | 1.0005882 | 1.0006057 |
| 20 | 1.0001242 | 1.0001488 | 1.0001391 | 1.0001434 | 1.0001476 | 1.0001681 |
| 50 | 1.0000117 | 1.0000168 | 1.0000139 | 1.0000141 | 1.0000144 | 1.0000199 |
| 100 | 1.0000017 | 1.0000028 | 1.0000021 | 1.0000021 | 1.0000022 | 1.0000034 |
| 200 | 1.0000002 | 1.0000004 | 1.0000003 | 1.0000003 | 1.0000003 | 1.0000005 |
Table 4. Numerical results for $F_i$ $(i = 1, 2, 3)$ by the methods.

| Function | Method | $k$ | $\|x^{(k)} - x^{(k-1)}\|$ | $\|F(x^{(k)})\|$ | $\rho$ |
|---|---|---|---|---|---|
| $F_1$ | NM | 8 | 2.42128 × 10^−192 | 1.06480 × 10^−383 | 1.99667 |
| | CM4 | 5 | 5.59843 × 10^−147 | 2.69120 × 10^−586 | 4.00129 |
| | SNAM | 4 | 3.76810 × 10^−39 | 3.25655 × 10^−227 | 6.09363 |
| | CHM | 4 | 4.18959 × 10^−123 | 4.03125 × 10^−736 | 5.99962 |
| | CTVM | 4 | 2.07203 × 10^−100 | 2.63883 × 10^−597 | 6.00033 |
| | M6 | 4 | 7.65662 × 10^−119 | 1.55028 × 10^−710 | 6.00589 |
| $F_2$ | NM | 9 | 3.41596 × 10^−116 | 2.48971 × 10^−232 | 1.97549 |
| | CM4 | 5 | 3.73825 × 10^−90 | 1.20501 × 10^−359 | 4.02761 |
| | SNAM | 4 | 9.18821 × 10^−35 | 6.76819 × 10^−207 | 5.98999 |
| | CHM | 4 | 8.31995 × 10^−52 | 8.11818 × 10^−310 | 5.72008 |
| | CTVM | 4 | 3.82928 × 10^−42 | 4.59455 × 10^−251 | 5.85429 |
| | M6 | 4 | 8.13364 × 10^−65 | 6.14607 × 10^−387 | 5.99644 |
| $F_3$ | NM | 22 | 2.71070 × 10^−196 | 2.20459 × 10^−392 | 1.99900 |
| | CM4 | 6 | 2.26562 × 10^−115 | 1.03777 × 10^−460 | 4.00061 |
| | SNAM | nc | | | |
| | CHM | 5 | 2.79450 × 10^−99 | 4.68047 × 10^−594 | 5.92903 |
| | CTVM | 5 | 5.12075 × 10^−193 | 1.30600 × 10^−1157 | 5.97091 |
| | M6 | 5 | 1.99499 × 10^−161 | 3.41913 × 10^−967 | 6.08153 |