
Algorithms 2015, 8(4), 1111-1120; https://doi.org/10.3390/a8041111

Article
An Optimal Biparametric Multipoint Family and Its Self-Acceleration with Memory for Solving Nonlinear Equations
College of Sciences, North China University of Technology, Beijing 100144, China
Author to whom correspondence should be addressed.
Academic Editor: Alicia Cordero
Received: 8 October 2015 / Accepted: 24 November 2015 / Published: 1 December 2015

## Abstract

In this paper, a family of Steffensen-type methods of optimal order of convergence with two parameters is constructed by direct Newtonian interpolation. It satisfies the conjecture proposed by Kung and Traub (J. Assoc. Comput. Mach. 1974, 21, 634–651) that an iterative method based on m evaluations per iteration without memory would arrive at the optimal convergence order $2^{m-1}$. Furthermore, the family of Steffensen-type methods of super convergence is suggested by using arithmetic expressions for the parameters with memory but no additional new evaluation of the function. Their error equations, asymptotic convergence constants and convergence orders are obtained. Finally, they are compared with related root-finding methods in numerical examples.
Keywords:
nonlinear equation; Newton’s method; Steffensen’s method; derivative free; optimal convergence; super convergence

## 1. Introduction

Solving the nonlinear equation $f ( x ) = 0$ is a fundamental problem in scientific computation. Besides Newton’s method (NM), Steffensen’s method (SM):
$x_{n+1} = x_n - \frac{f^2(x_n)}{f(x_n + f(x_n)) - f(x_n)}, \quad n = 0, 1, 2, \ldots \quad (1)$
is also a famous method for dealing with such a problem, because it is derivative free and maintains quadratic convergence (see ). Since Kung and Traub conjectured in 1974 that a multipoint iteration based on m evaluations without memory has optimal order $2^{m-1}$ of convergence (see ), NM and SM are methods of optimal order. Their efficiency index is $\sqrt{2} \approx 1.4142$.
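As a minimal illustration (not part of the original paper), SM can be sketched in Python; the test function $x^2 - 2$, the tolerance and the iteration cap are illustrative choices of mine:

```python
import math

def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's method (SM): derivative-free, quadratic convergence.

    The divided difference (f(x + f(x)) - f(x)) / f(x) stands in for f'(x)
    in Newton's step, so only evaluations of f itself are needed.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:            # residual small enough: accept x
            break
        x = x - fx * fx / (f(x + fx) - fx)
    return x

root = steffensen(lambda x: x**2 - 2.0, 1.0)   # converges to sqrt(2)
```

Two evaluations of f per iteration give order 2, matching the efficiency index $\sqrt{2}$ quoted above.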
In order to achieve higher order of convergence, the self-acceleration of SM (SASM) was introduced in Traub’s book as follows (see ):
$x_{n+1} = x_n - \frac{\gamma_n f^2(x_n)}{f(x_n + \gamma_n f(x_n)) - f(x_n)} \quad (2)$
where $\gamma_n = -\frac{\gamma_{n-1} f(x_{n-1})}{f(x_{n-1} + \gamma_{n-1} f(x_{n-1})) - f(x_{n-1})}$, which is obtained recursively by using memory. SASM achieves super convergence of order $1 + \sqrt{2} \approx 2.4142$. Its efficiency index is $\sqrt{1 + \sqrt{2}} \approx 1.5538$. Two other choices were also introduced for Steffensen-type methods by Zheng et al. (see [4,5]): $\gamma_n = -\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}$ and $\gamma_n = \frac{x_n - x_{n-1}}{f(x_{n-1})}$. The latter is the same as the above expression of $\gamma_n$ for SASM, but different from the above for the multi-step methods. These expressions of $\gamma_n$ enable the methods to achieve super convergence using the same number of evaluations of f as before. Local and semilocal convergence of Steffensen-type methods and their applications to the solution of nonlinear systems and nonlinear differential equations have been discussed in the literature (see [1,5,6]).
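The memory update above can be sketched as follows (a hedged illustration: the test function and tolerances are my own choices, not from the paper):

```python
def sasm(f, x0, gamma0=1.0, tol=1e-12, max_iter=50):
    """Traub's self-accelerating Steffensen method (SASM).

    gamma is recomputed from the previous step's stored values (memory),
    which raises the order from 2 to 1 + sqrt(2) at no extra f-evaluation.
    """
    x, g = x0, gamma0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        fz = f(x + g * fx)           # the single extra evaluation, as in SM
        x = x - g * fx * fx / (fz - fx)
        g = -g * fx / (fz - fx)      # memory update: gamma for the next step
    return x
```

Note that `g` is updated from the already-computed values `fx` and `fz`, so each iteration still costs exactly two evaluations of f.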
Moreover, Džunić and Petković introduced generalized biparametric multipoint methods as follows (DPM, see ):
$y_{k,0} = x_k, \quad y_{k,1} = y_{k,0} + \gamma_k f(y_{k,0}),$
$y_{k,2} = y_{k,0} - \frac{f(y_{k,0})}{f[y_{k,0}, y_{k,1}] + p_k f(y_{k,1})},$
$y_{k,j} = y_{k,j-1} - \frac{f(y_{k,j-1})}{N_{j-1}'(y_{k,j-1}; y_{k,0}, y_{k,1}, \ldots, y_{k,j-1})}, \quad j = 3, \ldots, n,$
$x_{k+1} = y_{k,n} - \frac{f(y_{k,n})}{N_n'(y_{k,n}; y_{k,0}, y_{k,1}, \ldots, y_{k,n})}, \quad k = 0, 1, \ldots \quad (3)$
where $\gamma_k = -\frac{1}{N_m'(y_{k,0})}$, $p_k = -\frac{N_{m+1}''(y_{k,1})}{2 N_{m+1}'(y_{k,1})}$ $(m = 1, \ldots, n+1)$, and $N_j(x; y_{k,0}, y_{k,1}, \ldots, y_{k,j})$ $(j = 2, \ldots, n)$ is Newton's interpolating polynomial of degree j.
This paper is organized as follows. In Section 2, by using Newton's method for the direct Newtonian interpolation of the function, we construct an optimal second-order Steffensen-type method which has one more parameter than SASM, establish an optimal fourth-order Steffensen-type method which generalizes Ren-Wu-Bi's method (RWBM, see ), deduce their error equations and asymptotic convergence constants, and extend them to a general optimal Steffensen-type family of $2^{m-1}$th-order without memory. Furthermore, in Section 3, we obtain the family of Steffensen-type methods accelerated with memory, as well as Steffensen-type methods of super second-order and super fourth-order convergence obtained by doubly accelerating with memory. In Section 4, we compare the proposed families with NM, SM, SASM, RWBM and DPM by solving nonlinear equations in numerical examples. Finally, we draw conclusions in Section 5.

## 2. A Steffensen-Type Family of Optimal Order without Memory

Let $x_n$ be an approximation of the simple root of a nonlinear equation $f(x) = 0$ and $z_n = x_n + \gamma_n f(x_n)$. By the direct Newtonian interpolatory polynomial of degree one, such that $N_1(x_n) = f(x_n)$ and $N_1(z_n) = f(z_n)$, we have
$N_1(x) = f(x_n) + f[x_n, z_n](x - x_n)$
and $f(x) \approx N_1(x)$, where $R_1(x) = f(x) - N_1(x) = f[x_n, z_n, x](x - x_n)(x - z_n)$.
So, for some $\mu_n \approx f[x_n, z_n, x]$, we have
$\tilde{N}_2(x) = f(x_n) + f[x_n, z_n](x - x_n) + \mu_n (x - x_n)(x - z_n)$
and $f(x) \approx \tilde{N}_2(x)$, which is a polynomial of degree two still based only on $f(x_n)$ and $f(z_n)$; the approximation $f(x) \approx \tilde{N}_2(x)$ can be better than $f(x) \approx N_1(x)$ because of the added higher-order term. We suggest that the next approximation $x_{n+1}$ of the root of $f(x)$ be obtained from Newton's iteration for $\tilde{N}_2(x)$ as $x_{n+1} = x_n - \frac{\tilde{N}_2(x_n)}{\tilde{N}_2'(x_n)} = x_n - \frac{f(x_n)}{f[x_n, z_n] + \mu_n (x_n - z_n)}$. Then, we have an optimal second-order Steffensen-type method:
$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, z_n] + \mu_n (x_n - z_n)}, \quad n = 0, 1, 2, \ldots \quad (4)$
where $z_n = x_n + \gamma_n f(x_n)$, and $\{\gamma_n\}$ and $\{\mu_n\}$ are bounded constant sequences. This method gives SM when $\gamma_n \equiv 1$ and $\mu_n \equiv 0$.
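A hedged sketch of the family in Equation (4); the test function, the constant parameter choices and the stopping rule are illustrative assumptions of mine:

```python
def opt2(f, x0, gamma=1.0, mu=1.0, tol=1e-12, max_iter=30):
    """One-step family, Equation (4): a Newton step applied to the quadratic
    interpolant N~2. With gamma = 1 and mu = 0 this reduces to plain SM."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        z = x + gamma * fx
        f_xz = (f(z) - fx) / (z - x)         # divided difference f[x_n, z_n]
        x = x - fx / (f_xz + mu * (x - z))   # Equation (4)
    return x
```

The extra term $\mu_n(x_n - z_n)$ costs no additional evaluation of f, which is what leaves room for the acceleration with memory in Section 3.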
Similarly, an optimal fourth-order Steffensen-type method is obtained as follows:
$y_n = x_n - \frac{f(x_n)}{f[x_n, z_n]},$
$x_{n+1} = y_n - \frac{f(y_n)}{f[y_n, x_n] + f[y_n, x_n, z_n](y_n - x_n) + \alpha_n (y_n - x_n)(y_n - z_n)}, \quad n = 0, 1, 2, \ldots \quad (5)$
where $z_n = x_n + \gamma_n f(x_n)$, and $\{\gamma_n\}$ and $\{\alpha_n\}$ are bounded constant sequences. This method gives RWBM when $\gamma_n \equiv 1$ and $\alpha_n \equiv \alpha$.
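The two-step scheme of Equation (5) can be sketched likewise (again a hedged illustration; function and tolerances are my own choices):

```python
def opt4(f, x0, gamma=1.0, alpha=0.0, tol=1e-12, max_iter=20):
    """Two-step family, Equation (5); gamma = 1 with a constant alpha
    recovers RWBM. Three f-evaluations per iteration, fourth order."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        z = x + gamma * fx
        fz = f(z)
        f_xz = (fz - fx) / (z - x)
        y = x - fx / f_xz                    # Steffensen predictor step
        fy = f(y)
        f_yx = (fy - fx) / (y - x)
        f_yz = (fy - fz) / (y - z)
        f_yxz = (f_yx - f_yz) / (x - z)      # f[y_n, x_n, z_n]
        x = y - fy / (f_yx + f_yxz * (y - x)
                      + alpha * (y - x) * (y - z))
    return x
```

The denominator is exactly the derivative at $y_n$ of the cubic correction to Newton's interpolant, mirroring the construction of Equation (4).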
Theorem 1.
Let $f: D \to \Re$ be a sufficiently differentiable function with a simple root $a \in D$, where $D \subset \Re$ is an open set, and let $x_0$ be close enough to a; then the methods of Equations (4) and (5) are at least of second-order and fourth-order, respectively, and satisfy the error equations:
$e_{n+1} = \big[(1 + \gamma_n f'(a)) c_2 - \mu_n \gamma_n\big] e_n^2 + O(e_n^3) \quad (6)$
$e_{n+1} = (1 + \gamma_n f'(a))^2 c_2 \left[c_2^2 - c_3 + \frac{\alpha_n}{f'(a)}\right] e_n^4 + O(e_n^5) \quad (7)$
respectively, where $c_k = \frac{f^{(k)}(a)}{k!\, f'(a)}$ and $e_n = x_n - a$, $n = 0, 1, 2, \cdots$.
Proof.
The theorem can be proved by the definition of divided difference and Taylor formula, see  or the proof of Theorem 2.
By successive Newtonian interpolatory polynomials on up to $m + 1$ points, we can derive the optimal $2^m$th-order Steffensen-type family; moreover, we are able to write it in a preferable explicit form as follows: for any $m > 0$, $x_{n+1} = y_m$ is obtained for $n = 0, 1, \cdots$ by
$y_1 = y_0 - \frac{f(y_0)}{f[y_0, y_{-1}]},$
$y_2 = y_1 - \frac{f(y_1)}{f[y_1, y_0] + f[y_1, y_0, y_{-1}](y_1 - y_0)},$
$\cdots$
$y_m = y_{m-1} - \frac{f(y_{m-1})}{f[y_{m-1}, y_{m-2}] + \cdots + f[y_{m-1}, \cdots, y_{-1}](y_{m-1} - y_{m-2}) \cdots (y_{m-1} - y_0) + \nu_n (y_{m-1} - y_{m-2}) \cdots (y_{m-1} - y_{-1})} \quad (8)$
where $y_{-1} = x_n + \gamma_n f(x_n)$, $y_0 = x_n$, and $\{\gamma_n\}$ and $\{\nu_n\}$ are bounded constant sequences. When $\nu_n \equiv 0$, it gives the general optimal Steffensen-type family in .
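To make the recursion concrete, here is a hedged sketch of the family in Equation (8) with constant parameters ($\nu_n \equiv \nu$, $\gamma_n \equiv \gamma$); the helper name, test function and tolerances are my own illustrative choices:

```python
def divided_diffs(xs, fs):
    """Leading Newton divided differences f[x0], f[x0,x1], ..., f[x0,...,xk]."""
    table, out = list(fs), [fs[0]]
    for level in range(1, len(xs)):
        table = [(table[i] - table[i + 1]) / (xs[i] - xs[i + level])
                 for i in range(len(table) - 1)]
        out.append(table[0])
    return out

def family8(f, x0, m=2, gamma=1.0, nu=0.0, tol=1e-12, max_iter=10):
    """m-step family, Equation (8): each substep divides f(y_{k-1}) by the
    derivative at y_{k-1} of the Newton interpolant through all nodes computed
    so far; the nu-term enters only in the final substep."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        nodes = [x, x + gamma * fx]      # y_0, y_{-1}, newest node first
        fvals = [fx, f(nodes[1])]
        y, fy = x, fx
        for k in range(1, m + 1):
            dds = divided_diffs(nodes, fvals)
            denom, prod = dds[1], 1.0
            for j in range(2, len(dds)):
                prod *= y - nodes[j - 1]
                denom += dds[j] * prod
            if k == m:                   # nu-correction in the last substep
                denom += nu * prod * (y - nodes[-1])
            y = y - fy / denom
            if k < m:
                fy = f(y)
                nodes.insert(0, y)
                fvals.insert(0, fy)
        x = y                            # f(y_m) is evaluated at the next sweep
    return x
```

With $m = 1$ this collapses to Equation (4) (with $\nu = \mu$), and with $m = 2$ to Equation (5) (with $\nu = \alpha$), which is a useful sanity check on the indexing.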
Theorem 2.
Let $f: D \to \Re$ be a sufficiently differentiable function with a simple root $a \in D$, where $D \subset \Re$ is an open set, and let $x_0$ be close enough to a; then the family of Equation (8) converges with at least $2^m$th-order, and moreover satisfies the error equation:
$e_{n+1} = D_m e_n^{2^m} + O(e_n^{2^m + 1}) \quad (9)$
where
$D_m = D_{m-1}\Big[c_2 D_{m-1} + (-1)^{m-1}\Big(c_{m+1} - \frac{\nu_n}{f'(a)}\Big) D_{m-2} \cdots D_{-1}\Big]$
and here $D_{-1} = 1 + \gamma_n f'(a)$, $D_0 = 1$, $D_1 = (1 + \gamma_n f'(a)) c_2$, ⋯, $D_{m-1} = D_{m-2}\big[c_2 D_{m-2} + (-1)^{m-2} c_m D_{m-3} \cdots D_{-1}\big]$, $c_m = \frac{f^{(m)}(a)}{m!\, f'(a)}$, and $e_n = x_n - a$ for $n = 0, 1, \cdots$
Proof.
We prove the theorem by induction. For $m = 1$, the theorem is valid by Theorem 1. For $m > 1$, let $d_k = y_k - a$, $k = -1, 0, \cdots, m$; then $d_{-1} = D_{-1} e_n + O(e_n^2)$, $d_0 = D_0 e_n$, $d_1 = D_1 e_n^2 + O(e_n^3)$, ⋯,
$d_{m-1} = D_{m-1} e_n^{2^{m-1}} + O(e_n^{2^{m-1} + 1})$
and, noting that $d_{m-2} \cdots d_{-1} = O(e_n^{1 + 1 + 2 + \cdots + 2^{m-2}}) = O(e_n^{2^{m-1}})$, we have
$d_m = d_{m-1}\, \frac{f[y_{m-1}, y_{m-2}, a]\, d_{m-2} + f[y_{m-1}, y_{m-2}, y_{m-3}](d_{m-1} - d_{m-2}) + \cdots + \nu_n (d_{m-1} - d_{m-2}) \cdots (d_{m-1} - d_{-1})}{f[y_{m-1}, y_{m-2}] + \cdots + f[y_{m-1}, \cdots, y_{-1}](d_{m-1} - d_{m-2}) \cdots (d_{m-1} - d_0) + \nu_n (d_{m-1} - d_{m-2}) \cdots (d_{m-1} - d_{-1})}$
$= d_{m-1}\, \frac{f[y_{m-1}, y_{m-2}, y_{m-3}]\, d_{m-1} - f[y_{m-1}, y_{m-2}, y_{m-3}, a]\, d_{m-2} d_{m-3} + \cdots + (-1)^m \nu_n d_{m-2} \cdots d_{-1} + O(e_n^{2^{m-1} + 1})}{f'(a) + O(e_n)}$
$= d_{m-1}\, \frac{f[y_{m-1}, y_{m-2}, y_{m-3}]\, d_{m-1} + (-1)^{m-1}\big(f[y_{m-1}, \cdots, y_{-1}, a] - \nu_n\big)\, d_{m-2} \cdots d_{-1} + O(e_n^{2^{m-1} + 1})}{f'(a) + O(e_n)}$
$= D_{m-1}\Big[c_2 D_{m-1} + (-1)^{m-1}\Big(c_{m+1} - \frac{\nu_n}{f'(a)}\Big) D_{m-2} \cdots D_{-1}\Big] e_n^{2^m} + O(e_n^{2^m + 1})$

## 3. A Steffensen-Type Family of Super Convergence with Memory

So far, the higher-order terms added to the denominators of Equations (4) and (5) have at worst no negative effect. Furthermore, by adjusting the coefficients of these higher-order terms, i.e., by expressing the parameters through a few arithmetic operations on old evaluations of f, the asymptotic convergence constants of the optimal second-order and fourth-order methods can be made to tend to zero, and the resulting methods of super-convergence then exceed SASM and RWBM, respectively. For example:
The super second-order method: iterate Equation (4) with
$\mu_n = \frac{1 + \gamma_n f[x_n, z_n]}{\gamma_n f[x_n, z_n]}\, f[z_{n-1}, x_n, z_n] \quad (10)$
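A hedged sketch of Equation (4) driven by the memory parameter of Equation (10); the starting choice $\mu_0 = 0$ and the test setup are illustrative assumptions:

```python
def super2(f, x0, gamma=1.0, tol=1e-12, max_iter=30):
    """Equation (4) with mu_n from Equation (10). The remembered node z_{n-1}
    and its stored value f(z_{n-1}) supply f[z_{n-1}, x_n, z_n] at no extra
    f-evaluation; the first step uses mu_0 = 0 (plain SM)."""
    x = x0
    z_prev = fz_prev = None
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        z = x + gamma * fx
        fz = f(z)
        f_xz = (fz - fx) / (z - x)
        if z_prev is None:
            mu = 0.0
        else:
            f_zx = (fx - fz_prev) / (x - z_prev)     # f[z_{n-1}, x_n]
            f_zxz = (f_xz - f_zx) / (z - z_prev)     # f[z_{n-1}, x_n, z_n]
            mu = (1 + gamma * f_xz) / (gamma * f_xz) * f_zxz
        x = x - fx / (f_xz + mu * (x - z))
        z_prev, fz_prev = z, fz
    return x
```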
The super fourth-order method: iterate Equation (5) with
$\alpha_n = f[x_{n-1}, x_n, z_n, y_n] - \frac{f[x_n, z_n, y_n]^2}{f[x_n, y_n]} \quad (11)$
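Similarly, a hedged sketch of Equation (5) with the memory parameter of Equation (11), reusing the remembered point $x_{n-1}$ and its stored value $f(x_{n-1})$ (the choice $\alpha_0 = 0$ and the test setup are mine):

```python
def super4(f, x0, gamma=1.0, tol=1e-12, max_iter=20):
    """Equation (5) with alpha_n from Equation (11); alpha_0 = 0.
    Still three f-evaluations per iteration, order raised to 2 + sqrt(5)."""
    x = x0
    x_prev = fx_prev = None
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        z = x + gamma * fx
        fz = f(z)
        f_xz = (fz - fx) / (z - x)
        y = x - fx / f_xz
        fy = f(y)
        f_xy = (fy - fx) / (y - x)
        f_xzy = ((fy - fz) / (y - z) - f_xz) / (y - x)   # f[x_n, z_n, y_n]
        if x_prev is None:
            alpha = 0.0
        else:
            f_px = (fx - fx_prev) / (x - x_prev)
            f_pxz = (f_xz - f_px) / (z - x_prev)
            f_pxzy = (f_xzy - f_pxz) / (y - x_prev)      # f[x_{n-1},x_n,z_n,y_n]
            alpha = f_pxzy - f_xzy ** 2 / f_xy
        x_prev, fx_prev = x, fx
        x = y - fy / (f_xy + f_xzy * (y - x)
                      + alpha * (y - x) * (y - z))
    return x
```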
Theorem 3.
Let $f: D \to \Re$ be a sufficiently differentiable function with a simple root $a \in D$, where $D \subset \Re$ is an open set, and let $x_0$ be close enough to a; then the methods of Equations (10) and (11) satisfy the following error equations:
$e_{n+1} = -(1 + \gamma_n f'(a))\big[c_3 e_{n-1} e_n^2 + O(e_{n-1}^2 e_n^2)\big] \quad (12)$
$e_{n+1} = (1 + \gamma_n f'(a))^2 \big[c_2 c_4 e_{n-1} e_n^4 + O(e_{n-1}^2 e_n^4)\big] \quad (13)$
where $c_k = \frac{f^{(k)}(a)}{k!\, f'(a)}$, $e_n = x_n - a$, $n = 0, 1, 2, \ldots$, and achieve convergence of order at least $1 + \sqrt{2}$ and $2 + \sqrt{5}$, respectively.
Proof.
By the definition of divided difference and the Taylor formula, we also have
$f[z_{n-1}, x_n, z_n] = \frac{f''(a)}{2} + \frac{f'''(a)}{6} e_{n-1} + O(e_{n-1}^2), \quad f[x_{n-1}, x_n, z_n, y_n] = \frac{f'''(a)}{6} + \frac{f^{(4)}(a)}{4!} e_{n-1} + O(e_{n-1}^2)$
Equation (12) follows from Equation (6) by
$\mu_n = \frac{1 + \gamma_n f'(a)}{\gamma_n f'(a)} \left(\frac{f''(a)}{2!} + \frac{f'''(a)}{3!} e_{n-1} + O(e_{n-1}^2)\right)$
The order $1 + \sqrt{2} \approx 2.4142$ is obtained as the positive root of $s^2 - 2s - 1 = 0$.
Equation (13) follows from Equation (7) by
$\alpha_n = f'(a)\big[c_3 - c_2^2 + c_4 e_{n-1} + O(e_{n-1}^2)\big]$
The order $2 + \sqrt{5} \approx 4.2361$ is obtained as the positive root of $s^2 - 4s - 1 = 0$.
Generally, we have the super $2^m$th-order Steffensen-type family: iterate Equation (8) with
$\nu_n = f[y_{m-2}, y_{m-1}]\left(\tilde{c}_{m+1} + (-1)^{m-1} \frac{\bar{c}_2 \bar{D}_{m-1}}{\bar{D}_{-1} \cdots \bar{D}_{m-2}}\right) \quad (14)$
where $\bar{D}_{-1} = 1 + \gamma_n f[y_{m-2}, y_{m-1}]$, $\bar{D}_0 = 1$, $\bar{D}_1 = (1 + \gamma_n f[y_{m-2}, y_{m-1}]) \bar{c}_2$, ⋯, $\bar{D}_{m-1} = \bar{D}_{m-2}\big[\bar{c}_2 \bar{D}_{m-2} + (-1)^{m-2} \bar{c}_m \bar{D}_{m-3} \cdots \bar{D}_{-1}\big]$, $\bar{c}_m = \frac{f[y_{-1}, \cdots, y_{m-1}]}{f[y_{m-2}, y_{m-1}]}$ and $\tilde{c}_{m+1} = \frac{f[x_{n-1}, y_{-1}, \cdots, y_{m-1}]}{f[y_{m-2}, y_{m-1}]}$.
Theorem 4.
Let $f: D \to \Re$ be a sufficiently differentiable function with a simple root $a \in D$, where $D \subset \Re$ is an open set, and let $x_0$ be close enough to a; then the family of Equation (14) is super $2^m$th-order convergent and satisfies the following error equation:
$e_{n+1} = (-1)^m D_{m-1} \cdots D_{-1}\, c_{m+2}\, e_{n-1} e_n^{2^m} + O(e_{n-1}^2 e_n^{2^m}), \quad m > 1 \quad (15)$
where $D_{-1} = 1 + \gamma_n f'(a)$, $D_0 = 1$, $D_1 = (1 + \gamma_n f'(a)) c_2$, ⋯, $D_{m-1} = D_{m-2}\big[c_2 D_{m-2} + (-1)^{m-2} c_m D_{m-3} \cdots D_{-1}\big]$, $c_m = \frac{f^{(m)}(a)}{m!\, f'(a)}$, and $e_n = x_n - a$ for $n = 0, 1, \cdots$.
Proof.
Since
$\nu_n = f[y_{m-2}, y_{m-1}]\left(\tilde{c}_{m+1} + (-1)^{m-1} \frac{\bar{c}_2 \bar{D}_{m-1}}{\bar{D}_{-1} \cdots \bar{D}_{m-2}}\right) = f'(a)\left(c_{m+1} + (-1)^{m-1} \frac{c_2 D_{m-1}}{D_{-1} \cdots D_{m-2}} + c_{m+2}\, e_{n-1}\right) + O(e_{n-1}^2)$
we obtain Equation (15) by Theorem 2 from
$D_m = D_{m-1}\Big[c_2 D_{m-1} + (-1)^{m-1}\Big(c_{m+1} - \frac{\nu_n}{f'(a)}\Big) D_{m-2} \cdots D_{-1}\Big] = (-1)^m D_{m-1} \cdots D_{-1}\, c_{m+2}\, e_{n-1} + O(e_{n-1}^2)$
Furthermore, by also updating $\gamma_n$ with memory, we propose two doubly-accelerated Steffensen-type methods:
The doubly-accelerated second-order method: iterate Equation (4) with $\gamma_n = -\frac{1}{f[x_n, z_{n-1}]}$ and $\mu_n$ given by Equation (10). $\quad (16)$
The doubly-accelerated fourth-order method: iterate Equation (5) with $\gamma_n = -\frac{1}{f[x_n, z_{n-1}]}$ and $\alpha_n$ given by Equation (11). $\quad (17)$
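A sketch of the doubly-accelerated second-order method, Equation (16), taking the update $\gamma_n = -1/f[x_n, z_{n-1}]$ that appears in the proof of Theorem 5; the test function, the start values $\gamma_0 = 1$, $\mu_0 = 0$ and the tolerances are illustrative assumptions:

```python
def doubly2(f, x0, gamma0=1.0, tol=1e-12, max_iter=30):
    """Equation (16): Equation (4) with BOTH parameters updated with memory,
    gamma_n = -1/f[x_n, z_{n-1}] and mu_n from Equation (10). Third-order
    convergence with still only two f-evaluations per iteration."""
    x, g = x0, gamma0
    z_prev = fz_prev = None
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        if z_prev is not None:
            g = -(x - z_prev) / (fx - fz_prev)   # -1 / f[x_n, z_{n-1}]
        z = x + g * fx
        fz = f(z)
        f_xz = (fz - fx) / (z - x)
        if z_prev is None:
            mu = 0.0
        else:
            f_zx = (fx - fz_prev) / (x - z_prev)
            mu = (1 + g * f_xz) / (g * f_xz) * (f_xz - f_zx) / (z - z_prev)
        x = x - fx / (f_xz + mu * (x - z))
        z_prev, fz_prev = z, fz
    return x
```

Replacing the one-step body with the two-step body of Equation (5) and the $\alpha_n$ of Equation (11) gives the analogous sketch of Equation (17).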
Theorem 5.
Let $f: D \to \Re$ be a sufficiently differentiable function with a simple root $a \in D$, where $D \subset \Re$ is an open set, and let $x_0$ be close enough to a; then the doubly-accelerated Steffensen-type methods of Equations (16) and (17) achieve convergence of order 3 and of order 4.74483, respectively.
Proof.
Denoting $e_n^z := z_n - a$ and $e_n := x_n - a$, if $z_n$ converges to a with order $p > 1$ and satisfies the error equation
$e_n^z = C_n e_n^p + o(e_n^p)$
where $C_n$ tends to the asymptotic convergence constant C, and if $x_n$ converges to a with order $r > 2$ and satisfies the error equation
$e_{n+1} = D_n e_n^r + o(e_n^r)$
where $D_n$ tends to the asymptotic convergence constant D, then
$e_n^z = C_n (D_{n-1} e_{n-1}^r)^p + o(e_{n-1}^{rp}) = C_n D_{n-1}^p e_{n-1}^{rp} + o(e_{n-1}^{rp}),$
$e_{n+1} = D_n (D_{n-1} e_{n-1}^r)^r + o(e_{n-1}^{r^2}) = D_n D_{n-1}^r e_{n-1}^{r^2} + o(e_{n-1}^{r^2})$
By the Taylor formula, for Equations (10) and (16), we also have
$e_n^z = e_n - \frac{f[x_n, a]\, e_n}{f[x_n, z_{n-1}]} = \frac{f[x_n, z_{n-1}, a]}{f[x_n, z_{n-1}]}\, e_{n-1}^z e_n = c_2 C_{n-1} D_{n-1} e_{n-1}^{p+r} + o(e_{n-1}^{p+r})$
and
$e_{n+1} = e_n - \dfrac{f[x_n, a]\, e_n}{f[x_n, z_n] + \left(1 - \frac{f[x_n, z_{n-1}]}{f[x_n, z_n]}\right) f[z_{n-1}, x_n, z_n]\, \dfrac{f[x_n, a]\, e_n}{f[x_n, z_{n-1}]}}$
$= e_n\, \dfrac{f[x_n, z_n, a](z_n - a) + \left(\frac{f[x_n, a]}{f[x_n, z_{n-1}]} - \frac{f[x_n, a]}{f[x_n, z_n]}\right) f[z_{n-1}, x_n, z_n]\, e_n}{f[x_n, z_n] + \left(\frac{f[x_n, a]}{f[x_n, z_{n-1}]} - \frac{f[x_n, a]}{f[x_n, z_n]}\right) f[z_{n-1}, x_n, z_n]\, e_n}$
$= e_n^2\, \dfrac{f[x_n, z_n, a]\left(1 - \frac{f[x_n, a]}{f[x_n, z_{n-1}]}\right) + \left(\frac{f[x_n, a]}{f[x_n, z_{n-1}]} - \frac{f[x_n, a]}{f[x_n, z_n]}\right) f[z_{n-1}, x_n, z_n]}{f[x_n, z_n] + \left(\frac{f[x_n, a]}{f[x_n, z_{n-1}]} - \frac{f[x_n, a]}{f[x_n, z_n]}\right) f[z_{n-1}, x_n, z_n]\, e_n}$
$= e_n^2\, \dfrac{f[x_n, z_n, a] f[x_n, z_n]\big(f[x_n, z_{n-1}] - f[x_n, a]\big) + f[z_{n-1}, x_n, z_n] f[x_n, a]\big(f[x_n, z_n] - f[x_n, z_{n-1}]\big)}{f[x_n, z_n]^2 f[x_n, z_{n-1}] + \big(f[x_n, z_n] - f[x_n, z_{n-1}]\big) f[x_n, a] f[z_{n-1}, x_n, z_n]\, e_n}$
$= e_n^2\, \dfrac{\big(f[x_n, z_n, a] f[x_n, z_n] - f[z_{n-1}, x_n, z_n] f[x_n, a]\big) f[x_n, z_{n-1}] + \big(f[z_{n-1}, x_n, z_n] - f[x_n, z_n, a]\big) f[x_n, z_n] f[x_n, a]}{f[x_n, z_n]^2 f[x_n, z_{n-1}] + \big(f[x_n, z_n] - f[x_n, z_{n-1}]\big) f[x_n, a] f[z_{n-1}, x_n, z_n]\, e_n}$
$= e_n^2\, \dfrac{\big(f[z_{n-1}, x_n, z_n] f[x_n, z_n, a]\, e_n^z - f[z_{n-1}, x_n, z_n, a] f[x_n, z_n]\, e_{n-1}^z\big) f[x_n, z_{n-1}] + f[z_{n-1}, x_n, z_n, a] f[x_n, z_n] f[x_n, a]\, e_{n-1}^z}{f[x_n, z_n]^2 f[x_n, z_{n-1}] + \big(f[x_n, z_n] - f[x_n, z_{n-1}]\big) f[x_n, a] f[z_{n-1}, x_n, z_n]\, e_n}$
$= e_n^2\, \dfrac{f[z_{n-1}, x_n, z_n] f[x_n, z_n, a] f[z_{n-1}, x_n]\, e_n^z - f[z_{n-1}, x_n, a] f[z_{n-1}, x_n, z_n, a] f[x_n, z_n]\, (e_{n-1}^z)^2}{f[x_n, z_n]^2 f[z_{n-1}, x_n] + \big(f[x_n, z_n] - f[z_{n-1}, x_n]\big) f[x_n, a] f[z_{n-1}, x_n, z_n]\, e_n}$
$= -c_2 c_3 C_{n-1}^2 D_{n-1}^2\, e_{n-1}^{2r + 2p} + o\big(e_{n-1}^{2r + 2p}\big)$
Comparing the exponents of $e_{n-1}$ in the two expressions of $e_n^z$ and in the two expressions of $e_{n+1}$, respectively, we obtain the system:
$rp = p + r, \quad r^2 = 2r + 2p$
From its non-trivial solution $p = \frac{3}{2}$ and $r = 3$, we prove that Equation (16) achieves third-order convergence.
For Equations (11) and (17), we have
$e_n^z = e_n - \frac{f[x_n, a]\, e_n}{f[x_n, z_{n-1}]} = \frac{f[x_n, z_{n-1}, a]}{f[x_n, z_{n-1}]}\, e_{n-1}^z e_n = c_2 C_{n-1} D_{n-1} e_{n-1}^{p+r} + o(e_{n-1}^{p+r}),$
$e_n^y = e_n - \frac{f[x_n, a]\, e_n}{f[x_n, z_n]} = \frac{f[x_n, z_n, a]}{f[x_n, z_n]}\, e_n^z e_n = c_2^2 C_{n-1} D_{n-1}^2 e_{n-1}^{p+2r} + o(e_{n-1}^{p+2r})$
and
$e_{n+1} = e_n^y - \dfrac{f[y_n, a]\, e_n^y}{f[y_n, x_n] + f[y_n, x_n, z_n](y_n - x_n) + \left(f[x_{n-1}, x_n, z_n, y_n] - \frac{f[x_n, z_n, y_n]^2}{f[x_n, y_n]}\right)(y_n - x_n)(y_n - z_n)}$
$= e_n^y\, \dfrac{f[x_n, y_n, a]\, e_n + f[y_n, x_n, z_n](y_n - x_n) + \left(f[x_{n-1}, x_n, z_n, y_n] - \frac{f[x_n, z_n, y_n]^2}{f[x_n, y_n]}\right)(y_n - x_n)(y_n - z_n)}{f[y_n, x_n] + f[y_n, x_n, z_n](y_n - x_n) + \left(f[x_{n-1}, x_n, z_n, y_n] - \frac{f[x_n, z_n, y_n]^2}{f[x_n, y_n]}\right)(y_n - x_n)(y_n - z_n)}$
$= e_n^y\, \dfrac{f[y_n, x_n, z_n]\, e_n^y - f[x_n, y_n, z_n, a]\, e_n^z e_n + \left(f[x_{n-1}, x_n, z_n, y_n] - \frac{f[x_n, z_n, y_n]^2}{f[x_n, y_n]}\right)(e_n^y - e_n)(e_n^y - e_n^z)}{f[y_n, x_n] + f[y_n, x_n, z_n](y_n - x_n) + \left(f[x_{n-1}, x_n, z_n, y_n] - \frac{f[x_n, z_n, y_n]^2}{f[x_n, y_n]}\right)(y_n - x_n)(y_n - z_n)}$
$= e_n^y\, \dfrac{\left(f[y_n, x_n, z_n] \frac{f[x_n, z_n, a]}{f[x_n, z_n]} - \frac{f[x_n, z_n, y_n]^2}{f[x_n, y_n]}\right) e_n^z e_n + \big(f[x_{n-1}, x_n, z_n, y_n] - f[x_n, y_n, z_n, a]\big)\, e_n^z e_n + o(e_n^z e_n e_{n-1})}{f[y_n, x_n] + f[y_n, x_n, z_n](y_n - x_n) + \left(f[x_{n-1}, x_n, z_n, y_n] - \frac{f[x_n, z_n, y_n]^2}{f[x_n, y_n]}\right)(y_n - x_n)(y_n - z_n)}$
$= e_n^y\, \dfrac{f[y_n, x_n, z_n] \dfrac{f[x_n, z_n, a] f[x_n, y_n] - f[x_n, z_n, y_n] f[x_n, z_n]}{f[x_n, z_n] f[x_n, y_n]}\, e_n^z e_n + f[x_{n-1}, x_n, z_n, y_n, a]\, e_n^z e_n e_{n-1} + o(e_n^z e_n e_{n-1})}{f[y_n, x_n] + f[y_n, x_n, z_n](y_n - x_n) + \left(f[x_{n-1}, x_n, z_n, y_n] - \frac{f[x_n, z_n, y_n]^2}{f[x_n, y_n]}\right)(y_n - x_n)(y_n - z_n)}$
$= e_n^y\, \dfrac{f[y_n, x_n, z_n] \dfrac{f[x_n, z_n, y_n] f[x_n, y_n, z_n](e_n^y - e_n^z) - f[x_n, z_n, y_n, a] f[x_n, y_n]\, e_n^y}{f[x_n, z_n] f[x_n, y_n]}\, e_n^z e_n + f[x_{n-1}, x_n, z_n, y_n, a]\, e_n^z e_n e_{n-1} + o(e_n^z e_n e_{n-1})}{f[y_n, x_n] + f[y_n, x_n, z_n](y_n - x_n) + \left(f[x_{n-1}, x_n, z_n, y_n] - \frac{f[x_n, z_n, y_n]^2}{f[x_n, y_n]}\right)(y_n - x_n)(y_n - z_n)}$
$= c_2^2 C_{n-1} D_{n-1}^2 e_{n-1}^{p+2r} \cdot c_4 \cdot c_2 C_{n-1} D_{n-1} e_{n-1}^{p+r} \cdot D_{n-1} e_{n-1}^{r} \cdot e_{n-1} + o\big(e_{n-1}^{2p + 4r + 1}\big) = c_2^3 c_4\, C_{n-1}^2 D_{n-1}^4\, e_{n-1}^{2p + 4r + 1} + o\big(e_{n-1}^{2p + 4r + 1}\big)$
Comparing the exponents of $e_{n-1}$ in the two expressions of $e_n^z$ and in the two expressions of $e_{n+1}$, respectively, we obtain the system:
$rp = p + r, \quad r^2 = 2p + 4r + 1$
From its non-trivial solution $r = 4.74483$ and $p = 1.26704$, we prove that Equation (17) achieves convergence of order 4.74483.

## 4. Numerical Examples

The proposed families are compared with NM, SM, SASM, RWBM and DPMs by solving some nonlinear equations in the following examples. We compute Equation (4) with $\gamma_n \equiv 1$ and $\mu_n \equiv 1$; Equation (5) with $\gamma_n \equiv 1$ and $\alpha_n \equiv 0$, or with $\gamma_n \equiv 1$ and $\alpha_n \equiv 1$; Equation (16) with $\gamma_n \equiv 1$ and $\mu_0 = 0$; Equation (17) with $\gamma_n \equiv 1$ and $\alpha_0 = 0$; Equation (16) with $\gamma_0 = 1$ and $\mu_0 = 0$; and Equation (17) with $\gamma_0 = 1$ and $\alpha_0 = 0$. DPM1(1) denotes the one-step DPM without memory, where $\gamma_n \equiv 1$ and $p_n \equiv 1$; DPM1(2) denotes the one-step DPM with memory, where $\gamma_n \equiv 1$ and $p_0 = 1$; and DPM1(3) denotes the one-step DPM with memory, where $\gamma_0 = 1$ and $p_0 = 1$. DPM2(1), DPM2(2) and DPM2(3) are denoted similarly for the two-step DPM. The computational order of convergence is defined as:
$\mathrm{COC} = \frac{\log(|e_n| / |e_{n-1}|)}{\log(|e_{n-1}| / |e_{n-2}|)}$
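The COC formula above is a one-liner; the synthetic error sequence in the usage line is an illustrative choice of mine:

```python
import math

def coc(errors):
    """Computational order of convergence from the last three errors:
    log(|e_n| / |e_{n-1}|) / log(|e_{n-1}| / |e_{n-2}|)."""
    e_nm2, e_nm1, e_n = errors[-3:]
    return math.log(abs(e_n) / abs(e_nm1)) / math.log(abs(e_nm1) / abs(e_nm2))

# A quadratically convergent error sequence (e_{k+1} ~ e_k^2) gives COC ~ 2:
estimate = coc([1e-1, 1e-2, 1e-4, 1e-8])
```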
Example 1. The numerical results in Table 1 agree with the theoretical error equations and asymptotic convergence constants in the theorems.
Table 1. $f(x) = x^2 - e^{-x} - 3x + 1$, $a = 0$, $x_0 = 0.2$.
| Method | | $n = 1$ | $n = 2$ | $n = 3$ | $n = 4$ | $n = 5$ |
| --- | --- | --- | --- | --- | --- | --- |
| NM | $\vert e_n\vert$ | 0.12618e-1 | 0.39224e-4 | 0.38462e-9 | 0.36982e-19 | 0.34192e-39 |
| | COC | 1.71685 | 2.08950 | 1.99746 | 2.00000 | 2.00000 |
| SM | $\vert e_n\vert$ | 0.90483e-2 | 0.20376e-4 | 0.10379e-9 | 0.26931e-20 | 0.18132e-41 |
| | COC | 1.92349 | 1.96916 | 1.99926 | 2.00000 | 2.00000 |
| SASM | $\vert e_n\vert$ | 0.10005e-1 | 0.27820e-5 | 0.42758e-14 | 0.31858e-35 | 0.27123e-86 |
| | COC | 1.86107 | 2.73351 | 2.47855 | 2.39725 | 2.41719 |
| DPM1(1) | $\vert e_n\vert$ | 0.35098e-1 | 0.89701e-3 | 0.60310e-6 | 0.27280e-12 | 0.55816e-25 |
| | COC | 1.08123 | 2.10714 | 1.99216 | 1.9999 | 2.00000 |
| DPM1(2) | $\vert e_n\vert$ | 0.35098e-1 | 0.17051e-4 | 0.85716e-12 | 0.1044e-29 | 0.77854e-73 |
| | COC | 1.08123 | 4.38445 | 2.2027 | 2.45446 | 2.40742 |
| DPM1(3) | $\vert e_n\vert$ | 0.11379 | 0.58486e-4 | 0.45402e-15 | 0.14730e-44 | 0.49933e-136 |
| | COC | 0.350412 | 13.4287 | 3.07383 | 3.01572 | 3.0001 |
| Equation (4) | $\vert e_n\vert$ | 0.90483e-2 | 0.83467e-4 | 0.69659e-8 | 0.48524e-16 | 0.23546e-32 |
| | COC | 1.92349 | 1.51366 | 2.00414 | 1.99999 | 2.00000 |
| Equation (16) ($\gamma_n \equiv 1$) | $\vert e_n\vert$ | 0.90483e-2 | 0.12295e-5 | 0.11371e-14 | 0.13249e-36 | 0.16634e-89 |
| | COC | 1.92349 | 2.87612 | 2.33626 | 2.42792 | 2.41188 |
| Equation (16) ($\gamma_0 = 1$) | $\vert e_n\vert$ | 0.90483e-2 | 0.49807e-7 | 0.69167e-23 | 0.2069e-70 | 0.55353e-213 |
| | COC | 1.92349 | 3.9118 | 3.01513 | 2.99697 | 3.0000 |
| DPM2(1) | $\vert e_n\vert$ | 0.19766e-3 | 0.1919e-15 | 0.15768e-64 | 0.48294e-260 | 0.42497e-1042 |
| | COC | 4.29934 | 4.0663 | 3.99998 | 4.0000 | 4.00000 |
| DPM2(2) | $\vert e_n\vert$ | 0.19766e-3 | 0.37718e-18 | 0.16139e-85 | 0.64139e-393 | 0.34726e-1795 |
| | COC | 4.29934 | 4.89812 | 4.57687 | 4.56296 | 4.56169 |
| DPM2(3) | $\vert e_n\vert$ | 0.19766e-3 | 0.61235e-21 | 0.17871e-109 | 0.18862e-556 | 0.37821e-2814 |
| | COC | 4.29934 | 5.82639 | 5.05656 | 5.04859 | 5.04881 |
| RWBM (Equation (5), $\gamma_n = 1$, $\alpha_n \equiv 0$) | $\vert e_n\vert$ | 0.47770e-4 | 0.18986e-18 | 0.47372e-76 | 0.18361e-306 | 0.41433e-1228 |
| | COC | 5.18173 | 3.97604 | 4.00000 | 4.00000 | 4.00000 |
| RWBM (Equation (5), $\gamma_n = 1$, $\alpha_n \equiv 1$) | $\vert e_n\vert$ | 0.11363e-3 | 0.14757e-16 | 0.41995e-68 | 0.27538e-274 | 0.50918e-1099 |
| | COC | 4.64333 | 3.97050 | 4.00000 | 4.00000 | 4.00000 |
| Equation (17) ($\gamma_n \equiv 1$) | $\vert e_n\vert$ | 0.47770e-4 | 0.52156e-20 | 0.1841e-87 | 0.31207e-373 | 0.90942e-1584 |
| | COC | 5.18173 | 4.40707 | 4.22584 | 4.23664 | 4.23604 |
| Equation (17) ($\gamma_0 = 1$) | $\vert e_n\vert$ | 0.47770e-4 | 0.8438e-23 | 0.29043e-111 | 0.32054e-531 | 0.86331e-2524 |
| | COC | 5.18172 | 5.17772 | 4.71725 | 4.74726 | 4.7447 |
Example 2. The numerical results of the self-acceleration of Steffensen's method (SASM), Equations (16) and (17) (each with $\gamma_n \equiv 1$ and with $\gamma_0 = 1$), DPM1(3) and DPM2(3) are given in Table 2 for the following nonlinear functions:
$f_1(x) = \frac{1}{2}(e^{x-2} - 1), \quad a = 2, \quad x_0 = 2.5; \qquad f_2(x) = e^{x^2} + \sin x - 1, \quad a = 0, \quad x_0 = 0.25;$
$f_3(x) = e^{-x^2 + x + 2} - 1, \quad a = -1, \quad x_0 = -0.85; \qquad f_4(x) = e^{-x} - \arctan x - 1, \quad a = 0, \quad x_0 = 0.2$
Table 2. Numerical results for $f_i(x)$, $i = 1, 2, 3, 4$.
| | SASM | Equation (16) ($\gamma_n \equiv 1$) | Equation (16) ($\gamma_0 = 1$) | DPM1(3) | Equation (17) ($\gamma_n \equiv 1$) | Equation (17) ($\gamma_0 = 1$) | DPM2(3) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $f_1$: $\vert e_4\vert$ | 0.245e-40 | 0.784e-14 | 0.107e-28 | 0.164e-23 | 0.101e-195 | 0.727e-273 | 0.426e-231 |
| COC | 2.41353 | 2.45350 | 3.00734 | 2.98211 | 4.23599 | 4.74517 | 5.04588 |
| $f_2$: $\vert e_4\vert$ | 0.396e-44 | 0.194e-17 | 0.177e-35 | 0.304e-29 | 0.524e-176 | 0.148e-254 | 0.188e-283 |
| COC | 2.41316 | 2.32334 | 3.01791 | 2.94762 | 4.23567 | 4.74606 | 5.04155 |
| $f_3$: $\vert e_4\vert$ | 0.380e-49 | 0.346e-14 | 0.300e-38 | 0.172e-32 | 0.168e-168 | 0.689e-258 | 0.618e-265 |
| COC | 2.41295 | 2.51251 | 3.16594 | 3.12621 | 4.23622 | 4.74895 | 5.04542 |
| $f_4$: $\vert e_4\vert$ | 0.344e-86 | 0.696e-37 | 0.112e-70 | 0.326e-60 | 0.111e-399 | 0.115e-560 | 0.437e-555 |
| COC | 2.41721 | 2.43146 | 3.00078 | 2.99954 | 4.24283 | 4.7598 | 5.04856 |

## 5. Conclusions

In this paper, the general optimal $2^{m-1}$th-order Steffensen-type family with two parameters is constructed by using Newton's iteration for the direct Newtonian interpolatory polynomial of the function, and its corresponding accelerated Steffensen-type family is derived by expressing one of the parameters with memory but with no additional new evaluation of the function. In the theoretical analysis and the numerical examples, the proposed families without and with memory use only m evaluations of f per iteration to achieve optimal $2^{m-1}$th-order convergence and super $2^{m-1}$th-order convergence, respectively, for solving a simple root of a nonlinear function. Their asymptotic convergence constants and orders of convergence are verified in comparison with NM, SM, SASM, RWBM and DPM. The advantage of the proposed methods is that they can efficiently offer high-precision roots in scientific and engineering computation.
The biparametric Steffensen-type family of Equation (8) is not only an alternative to the biparametric multipoint root-finding family of Equation (3) from , but also brings about the methods of Equations (16) and (17), which doubly accelerate SM and RWBM, respectively. Moreover, when the second parameter $\nu_n \equiv 0$, the family of Equation (8) gives the single-parametric Steffensen-type family in . Furthermore, this single-parametric Steffensen-type family was improved to the self-accelerating method in  by self-correcting the parameter $\gamma_n$ with memory. Additionally, one-step Steffensen methods with memory were derived from Equation (4) in [11,12], and a general multi-step Steffensen method with memory different from Equations (3) and (8) was proposed in .

## Acknowledgments

Supported in part by the National Natural Science Foundation of China (No. 11471019).

## Author Contributions

All of the authors have worked together to develop the present manuscript and the corresponding author has played a main role in theoretical analyses and numerical examples.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Ortega, J.M.; Rheinboldt, W.G. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
2. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 634–651. [Google Scholar] [CrossRef]
3. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964; pp. 142–149. [Google Scholar]
4. Zheng, Q.; Wang, J.; Zhao, P.; Zhang, L. A Steffensen-like method and its higher-order variants. Appl. Math. Comput. 2009, 214, 10–16. [Google Scholar] [CrossRef]
5. Zheng, Q.; Zhao, P.; Zhang, L.; Ma, W. Variants of Steffensen-secant method and applications. Appl. Math. Comput. 2010, 216, 3486–3496. [Google Scholar] [CrossRef]
6. Alarcón, V.; Amat, S.; Busquier, S.; López, D.J. A Steffensen’s type method in Banach spaces with applications on boundary-value problems. J. Comput. Appl. Math. 2008, 216, 142–149. [Google Scholar] [CrossRef]
7. Džunić, J.; Petković, M.S. On generalized biparametric multipoint root finding methods with memory. J. Comput. Appl. Math. 2014, 255, 362–375. [Google Scholar] [CrossRef]
8. Ren, H.; Wu, Q.; Bi, W. A class of two-step Steffensen type methods with fourth-order convergence. Appl. Math. Comput. 2009, 209, 206–210. [Google Scholar] [CrossRef]
9. Zheng, Q.; Li, J.; Huang, F. An optimal Steffensen-type family for solving nonlinear equations. Appl. Math. Comput. 2011, 217, 9592–9597. [Google Scholar] [CrossRef]
10. Džunić, J.; Petković, M.S. On generalized multipoint root-solvers with memory. J. Comput. Appl. Math. 2012, 236, 2909–2920. [Google Scholar] [CrossRef]
11. Zheng, Q.; Huang, F.; Guo, X.; Feng, X. Doubly-accelerated Steffensen’s methods with memory and their applications on solving nonlinear ODEs. J. Comput. Anal. Appl. 2013, 15, 886–891. [Google Scholar]
12. Liu, Z.; Zheng, Q. A one-step Steffensen-type method with super-cubic convergence for solving nonlinear equations. Procedia Comput. Sci. 2014, 29, 1870–1875. [Google Scholar] [CrossRef]
13. Wang, X.; Zhang, T. Efficient n-point iterative methods with memory for solving nonlinear equations. Numer. Algorithms 2015, 70, 357–375. [Google Scholar] [CrossRef]