
# A New Three-Step Class of Iterative Methods for Solving Nonlinear Systems

Institute for Multidisciplinary Mathematics, Universitat Politècnica de València, 46022 València, Spain
Dpto. de Educación en Línea, Universidad San Francisco de Quito, Quito 170901, Ecuador
Mathematics 2019, 7(12), 1221; https://doi.org/10.3390/math7121221
Received: 22 November 2019 / Revised: 5 December 2019 / Accepted: 6 December 2019 / Published: 11 December 2019

## Abstract

In this work, a new class of iterative methods for solving nonlinear equations is presented, together with its extension to nonlinear systems of equations. The family is developed by means of a weight function procedure, scalar in the first case and matrix in the second, reaching sixth order of convergence in both cases. Several numerical examples are given to illustrate the efficiency and performance of the proposed methods.
MSC:
65H05; 37D99

## 1. Introduction

In this paper, we consider the problem of finding a solution of $F(x) = 0$, where $F : D \subset \mathbb{R}^n \rightarrow \mathbb{R}^n$, $n > 1$, is a sufficiently differentiable multivariate function defined on a convex set D. Problems of this kind must usually be solved numerically, as analytical solutions are rarely available. In this sense, the role of iterative procedures capable of estimating their solutions is critical.
Newton’s scheme is the most widely employed iterative procedure for solving nonlinear problems (see [1]); its iterative expression is
$x^{(k+1)} = x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}), \quad k = 0, 1, \ldots,$
where $F'(x^{(k)})$ denotes the Jacobian matrix of the nonlinear function F evaluated at the iterate $x^{(k)}$. Although not in the same numbers as for scalar equations, in recent years many researchers have focused their attention on this kind of problem. One initial approach is to modify the classical methods in order to accelerate the convergence and also to reduce the number of functional evaluations and operations per iteration. Good reviews can be found in [2,3].
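As a concrete illustration, Newton's scheme can be sketched in a few lines. The test system and its Jacobian below are illustrative choices, not taken from the paper; note that each step solves a linear system instead of inverting the Jacobian explicitly.

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=100):
    """Newton's method x_{k+1} = x_k - J(x_k)^{-1} F(x_k).

    The Jacobian is never inverted; each iteration solves
    the linear system J(x_k) s = F(x_k) for the step s.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = np.linalg.solve(J(x), F(x))
        x = x - s
        if np.linalg.norm(s) < tol:
            break
    return x

# Illustrative system (not from the paper): the circle x1^2 + x2^2 = 1
# intersected with the line x1 = x2; a root is (sqrt(2)/2, sqrt(2)/2).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = newton(F, J, [1.0, 0.5])
```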
There have been different ways to approach this problem: In [4], a general procedure, called pseudo-composition, was designed. It involved predictor–corrector methods with a high order of convergence, with the corrector step coming from a Gaussian quadrature. Moreover, other techniques have been used: Adomian decomposition [5,6,7], multipoint methods free from second derivative [8,9], multidimensional Steffensen-type schemes [10,11,12,13], and even derivative-free methods with memory [14,15,16].
Recently, the weight function technique has also been developed for designing iterative methods for solving nonlinear systems (see, for example [17]). This procedure allows the order of convergence of a method to be increased many times without increasing the number of functional evaluations. Among others, Sharma et al. in [18] designed a scheme with fourth-order of convergence by using this procedure and, more recently, Artidiello et al. constructed in [19,20] several classes of high-order schemes by means of matrix weight functions.
On the other hand, as most of the iterative methods for scalar equations are not directly extendable to systems, it was necessary to find a new technique that makes it feasible. In [21,22], the authors presented a general process able to transform any scalar iterative method to the multidimensional case.
In what follows, a few methods of sixth order of convergence are revisited and used for comparison with our proposed scheme. Different efficiency aspects are treated, as well as the numerical performance on several nonlinear problems.
The first scheme ($C6_1$) was introduced in [7] by Cordero et al. and modified in [23] by the same authors. Its iterative expression is
$y^{(k)} = x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}),$
$z^{(k)} = y^{(k)} - [F'(x^{(k)})]^{-1} \left[ 2I - F'(y^{(k)})[F'(x^{(k)})]^{-1} \right] F(y^{(k)}),$
$x^{(k+1)} = z^{(k)} - [F'(y^{(k)})]^{-1} F(z^{(k)}), \quad k \geq 0.$
Let us notice that this scheme reaches sixth order of convergence using functional evaluations of the nonlinear function F at three different points and of its associated Jacobian matrix $F'$ at two different points per iteration.
The second method ($C 6 2$), a modified Newton–Jarratt composition, was presented by A. Cordero et al. in [24], and is expressed as
$z^{(k)} = x^{(k)} - \frac{2}{3}[F'(x^{(k)})]^{-1} F(x^{(k)}),$
$y^{(k)} = x^{(k)} - \frac{1}{2}\left[ 3F'(z^{(k)}) - F'(x^{(k)}) \right]^{-1}\left[ 3F'(z^{(k)}) + F'(x^{(k)}) \right][F'(x^{(k)})]^{-1} F(x^{(k)}),$
$x^{(k+1)} = y^{(k)} - \left[ -\frac{1}{2}F'(x^{(k)}) + \frac{3}{2}F'(z^{(k)}) \right]^{-1} F(y^{(k)}), \quad k \geq 0.$
This structure reaches sixth order of convergence by means of two evaluations of the nonlinear function F, and also two of $F'$, per iteration.
We also recall, as (XH6), the scheme introduced by X.Y. Xiao and H.W. Yin in [25], based on the method presented by J.R. Sharma et al. in [18]:
$y^{(k)} = x^{(k)} - \frac{2}{3}[F'(x^{(k)})]^{-1} F(x^{(k)}),$
$z^{(k)} = y^{(k)} - \frac{1}{2}\left[ -I + \frac{9}{4}[F'(y^{(k)})]^{-1}F'(x^{(k)}) + \frac{3}{4}[F'(x^{(k)})]^{-1}F'(y^{(k)}) \right][F'(x^{(k)})]^{-1} F(x^{(k)}),$
$x^{(k+1)} = z^{(k)} - \frac{1}{2}\left[ 3[F'(y^{(k)})]^{-1} - [F'(x^{(k)})]^{-1} \right] F(z^{(k)}), \quad k \geq 0.$
In this case, two functional evaluations of F and two of $F'$ are made per iteration, at the points $x^{(k)}$ and $y^{(k)}$.
The fourth class of iterative methods is of Jarratt type (B6) and was introduced by R. Behl et al. in [26] as
$y^{(k)} = x^{(k)} - \frac{2}{3}[F'(x^{(k)})]^{-1} F(x^{(k)}),$
$z^{(k)} = x^{(k)} - \left[ -a_1 I + a_2 \left( [F'(y^{(k)})]^{-1}F'(x^{(k)}) \right)^2 \right][F'(x^{(k)})]^{-1} F(x^{(k)}),$
$x^{(k+1)} = z^{(k)} - \left[ b_2 F'(x^{(k)}) + b_3 F'(y^{(k)}) \right]^{-1}\left[ F'(x^{(k)}) + b_1 F'(y^{(k)}) \right][F'(x^{(k)})]^{-1} F(z^{(k)}), \quad k \geq 0,$
where $a_2 = \frac{3}{8}$, $a_1 = 1 - a_2 = \frac{5}{8}$, $b_2 = b_1 - b_3 + 1 = -\frac{1}{2}(3b_1 + 1)$, $b_3 = \frac{1}{2}(5b_1 + 3)$ and $b_1$ is a free parameter. This is a parametric family of iterative schemes that reaches order of convergence six with two functional evaluations of F and two of $F'$ per iteration.
Let us now introduce some concepts that will be used throughout the manuscript. They are related to important aspects of iterative methods, such as convergence, order and efficiency, and to the technique used in the proof of the main result.
Let $\{ x^{(k)} \}_{k \geq 0}$ be a sequence in $\mathbb{R}^n$ converging to $\xi$. The convergence is said to be of order p, with $p \geq 1$, if there exist $M > 0$ ($0 < M < 1$ if $p = 1$) and $k_0$ such that
$\| x^{(k+1)} - \xi \| \leq M \| x^{(k)} - \xi \|^p, \quad \forall k \geq k_0,$
or, equivalently,
$\| e^{(k+1)} \| \leq M \| e^{(k)} \|^p, \quad \forall k \geq k_0, \quad \text{where } e^{(k)} = x^{(k)} - \xi.$
Moreover, with $\xi \in \mathbb{R}^n$ such that $F(\xi) = 0$ and supposing that $x^{(k-2)}, x^{(k-1)}, x^{(k)}, x^{(k+1)}$ are four consecutive iterations close to $\xi$, the order of convergence can be estimated in practice by the approximated computational order of convergence $\rho$, calculated by using the expression
$p \approx \rho = \frac{\ln\left( \| x^{(k+1)} - x^{(k)} \| / \| x^{(k)} - x^{(k-1)} \| \right)}{\ln\left( \| x^{(k)} - x^{(k-1)} \| / \| x^{(k-1)} - x^{(k-2)} \| \right)}.$
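This estimate needs four consecutive iterates. A minimal sketch of the computation, tested here on an artificial error sequence rather than on data from the paper:

```python
import numpy as np

def acoc(xs):
    """Approximated computational order of convergence from the
    last four iterates xs[-4], ..., xs[-1] of a convergent sequence."""
    d1 = np.linalg.norm(np.subtract(xs[-1], xs[-2]))
    d2 = np.linalg.norm(np.subtract(xs[-2], xs[-3]))
    d3 = np.linalg.norm(np.subtract(xs[-3], xs[-4]))
    return np.log(d1 / d2) / np.log(d2 / d3)

# Artificial scalar sequence with quadratic error decay e_{k+1} = e_k^2,
# so the estimate should be close to p = 2:
xi = 1.0
errors = [1e-2, 1e-4, 1e-8, 1e-16]
iterates = [xi + e for e in errors]
rho = acoc(iterates)
```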
Let $F : D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ be a sufficiently Fréchet differentiable function in D and let $\xi + h \in \mathbb{R}^n$ lie in a neighborhood of $\xi$, the solution of the system $F(x) = 0$. By applying a Taylor expansion around $\xi$ and assuming that $F'(\xi)$ is non-singular (see [24] for further information), we have
$F(\xi + h) = F'(\xi)\left[ h + \sum_{q=2}^{p-1} C_q h^q \right] + O(h^p),$
where $C_q = (1/q!)[F'(\xi)]^{-1} F^{(q)}(\xi)$, $q = 2, 3, \ldots$ Let us remark that $C_q h^q \in \mathbb{R}^n$, as $F^{(q)}(\xi) \in \mathcal{L}(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$ and $[F'(\xi)]^{-1} \in \mathcal{L}(\mathbb{R}^n)$. Therefore,
$F'(\xi + h) = F'(\xi)\left[ I + \sum_{q=2}^{p-1} q C_q h^{q-1} \right] + O(h^{p-1}),$
where I is the identity matrix and $q C_q h^{q-1} \in \mathcal{L}(\mathbb{R}^n)$.
The proposed class of iterative methods and its convergence analysis are presented in Section 2. Moreover, two particular subclasses of this family, both depending on a real parameter, are shown. In Section 3, their efficiency is calculated and compared with those of some existing classes or schemes of the same order of convergence. Finally, their numerical performance is checked in Section 4 on several multidimensional problems, and some conclusions are stated in Section 5.

## 2. Design and Convergence Analysis of the Proposed Class

Let $F : D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ be a sufficiently Fréchet differentiable function and $H : \mathbb{R}^{n \times n} \rightarrow \mathbb{R}^{n \times n}$ a matrix weight function whose variable is $t^{(k)} = I - [F'(x^{(k)})]^{-1}[x^{(k)}, y^{(k)}; F]$. Let us notice that the divided difference operator of F on $\mathbb{R}^n$, $[\cdot, \cdot\,; F] : \Omega \times \Omega \subset \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathcal{L}(\mathbb{R}^n)$, is defined in [27] by
$[x, y; F](x - y) = F(x) - F(y), \quad \text{for any } x, y \in \Omega.$
Then, we propose the three-step iterative method
$y^{(k)} = x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}),$
$z^{(k)} = y^{(k)} - H(t^{(k)})[F'(x^{(k)})]^{-1} F(y^{(k)}),$
$x^{(k+1)} = z^{(k)} - H(t^{(k)})[F'(x^{(k)})]^{-1} F(z^{(k)}), \quad k \geq 0.$
In order to properly describe the Taylor expansion of the matrix weight function, we recall the notation defined by Artidiello et al. in [19]: let $X = \mathbb{R}^{n \times n}$ denote the Banach space of real square matrices of size $n \times n$; then, the function $H : X \rightarrow X$ can be defined such that its Fréchet derivatives satisfy
(a)
$H'(u)(v) = H_1 uv$, where $H' : X \rightarrow \mathcal{L}(X)$ and $H_1 \in \mathbb{R}$;
(b)
$H''(u, v)(w) = H_2 uvw$, where $H'' : X \times X \rightarrow \mathcal{L}(X)$ and $H_2 \in \mathbb{R}$.
Let us also remark that, when k tends to infinity, the variable $t^{(k)}$ tends to the null matrix, since $[x^{(k)}, y^{(k)}; F]$ tends to $F'(\xi)$. So, there exist real $H_1$ and $H_2$ such that H can be expanded around 0 as
$H(t^{(k)}) = H(0) + H_1 t^{(k)} + \frac{1}{2} H_2 (t^{(k)})^2 + O((t^{(k)})^3).$
Therefore, the following result states the conditions that assure the sixth order of convergence of Class (8) and presents its error equation.
Theorem 1.
Let $F : D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ be a sufficiently Fréchet differentiable function in an open neighborhood D of $\xi \in \mathbb{R}^n$ satisfying $F(\xi) = 0$, and let $H : \mathbb{R}^{n \times n} \rightarrow \mathbb{R}^{n \times n}$ be a sufficiently Fréchet differentiable matrix function. Let us also assume that $F'(x)$ is non-singular at ξ and that $x^{(0)}$ is an initial value close enough to ξ. Then, the sequence $\{ x^{(k)} \}_{k \geq 0}$ obtained from Class (8) converges to ξ with order six if $H_0 = I$, $H_1 = 2$ and $\| H_2 \| < \infty$, where $H_0 = H(0)$ and I is the identity matrix, its error equation being
$e^{(k+1)} = \frac{1}{4}\left[ (H_2^2 - 22H_2 + 120)C_2^5 + (-24 + 2H_2)C_2^2 C_3 C_2 + (-20 + 2H_2)C_3 C_2^3 + 4C_3^2 C_2 \right] (e^{(k)})^6 + O((e^{(k)})^7),$
where $C_q = \frac{1}{q!}[F'(\xi)]^{-1} F^{(q)}(\xi)$, $q = 2, 3, \ldots$, and $e^{(k)} = x^{(k)} - \xi$.
Proof.
By using the Taylor series expansion of the nonlinear function and its corresponding Jacobian matrix around $\xi$, we get
$F(x^{(k)}) = F'(\xi)\left[ e^{(k)} + \sum_{q=2}^{6} C_q (e^{(k)})^q \right] + O((e^{(k)})^7), \quad F'(x^{(k)}) = F'(\xi)\left[ I + \sum_{q=2}^{6} q C_q (e^{(k)})^{q-1} \right] + O((e^{(k)})^6).$
Moreover, the expansion of the inverse of the Jacobian matrix can be expressed as
$[F'(x^{(k)})]^{-1} = \left[ I + \sum_{q=2}^{6} X_q (e^{(k)})^{q-1} \right][F'(\xi)]^{-1} + O((e^{(k)})^6),$
where
$X_2 = -2C_2,$
$X_3 = -3C_3 + 4C_2^2,$
$X_4 = -4C_4 + 6C_2C_3 + 6C_3C_2 - 8C_2^3,$
$X_5 = -5C_5 + 8C_2C_4 - 12C_2^2C_3 + 9C_3^2 + 8C_4C_2 - 12C_2C_3C_2 + 16C_2^4 - 12C_3C_2^2,$
$X_6 = -32C_2^5 + 24C_2^3C_3 - 18C_2C_3^2 - 16C_2^2C_4 + 12C_3C_4 + 10C_2C_5 - 6C_6 + 24C_2^2C_3C_2 + 18C_3^2C_2 - 16C_2C_4C_2 + 10C_5C_2 + 24C_2C_3C_2^2 - 16C_4C_2^2 + 24C_3C_2^3 + 12C_4C_3 - 18C_3C_2C_3.$
Then,
$[F'(x^{(k)})]^{-1} F(x^{(k)}) = e^{(k)} - C_2 (e^{(k)})^2 + 2(C_2^2 - C_3)(e^{(k)})^3 + (4C_2C_3 + 3C_3C_2 - 4C_2^3 - 3C_4)(e^{(k)})^4 + (-4C_5 + 6C_2C_4 - 8C_2^2C_3 + 6C_3^2 + 4C_4C_2 - 6C_2C_3C_2 + 8C_2^4 - 6C_3C_2^2)(e^{(k)})^5 + (-16C_2^5 + 16C_2^3C_3 - 12C_2C_3^2 - 12C_2^2C_4 + 9C_3C_4 + 8C_2C_5 - 5C_6 + 12C_2^2C_3C_2 - 9C_3^2C_2 - 8C_2C_4C_2 + 5C_5C_2 + 12C_2C_3C_2^2 - 8C_4C_2^2 + 12C_3C_2^3 + 8C_4C_3 - 12C_3C_2C_3)(e^{(k)})^6 + O((e^{(k)})^7).$
So,
$y^{(k)} - \xi = C_2 (e^{(k)})^2 - 2(C_2^2 - C_3)(e^{(k)})^3 - (4C_2C_3 + 3C_3C_2 - 4C_2^3 - 3C_4)(e^{(k)})^4 - (-4C_5 + 6C_2C_4 - 8C_2^2C_3 + 6C_3^2 + 4C_4C_2 - 6C_2C_3C_2 + 8C_2^4 - 6C_3C_2^2)(e^{(k)})^5 - (-16C_2^5 + 16C_2^3C_3 - 12C_2C_3^2 - 12C_2^2C_4 + 9C_3C_4 + 8C_2C_5 - 5C_6 + 12C_2^2C_3C_2 - 9C_3^2C_2 - 8C_2C_4C_2 + 5C_5C_2 + 12C_2C_3C_2^2 - 8C_4C_2^2 + 12C_3C_2^3 + 8C_4C_3 - 12C_3C_2C_3)(e^{(k)})^6 + O((e^{(k)})^7),$
$(y^{(k)} - \xi)^2 = C_2^2 (e^{(k)})^4 + (-4C_2^3 + 2C_2C_3 + 2C_3C_2)(e^{(k)})^5 + (12C_2^4 - 11C_2^2C_3 + 4C_3^2 + 3C_2C_4 - 4C_2C_3C_2 + 3C_4C_2 - 7C_3C_2^2)(e^{(k)})^6 + O((e^{(k)})^7),$
$(y^{(k)} - \xi)^3 = C_2^3 (e^{(k)})^6 + O((e^{(k)})^7),$
and therefore,
$F(y^{(k)}) = F'(\xi)\left[ (y^{(k)} - \xi) + C_2 (y^{(k)} - \xi)^2 \right] + O((y^{(k)} - \xi)^3) = F'(\xi)\left[ C_2 (e^{(k)})^2 + 2(C_3 - C_2^2)(e^{(k)})^3 + (3C_4 + 5C_2^3 - 3C_3C_2 - 4C_2C_3)(e^{(k)})^4 + (4C_5 - 6C_2C_4 + 10C_2^2C_3 - 6C_3^2 - 4C_4C_2 + 8C_2C_3C_2 - 12C_2^4 + 6C_3C_2^2)(e^{(k)})^5 + (28C_2^5 - 27C_2^3C_3 + 16C_2C_3^2 + 15C_2^2C_4 - 9C_3C_4 - 8C_2C_5 + 5C_6 - 16C_2^2C_3C_2 + 9C_3^2C_2 + 11C_2C_4C_2 - 5C_5C_2 - 18C_2C_3C_2^2 + 8C_4C_2^2 - 12C_3C_2^3 - 8C_4C_3 + 12C_3C_2C_3)(e^{(k)})^6 \right] + O((e^{(k)})^7).$
From the definition of the variable of the weight function H, it can be expanded as
$t^{(k)} = I - [F'(x^{(k)})]^{-1}[x^{(k)}, y^{(k)}; F] = C_2 e^{(k)} + (-3C_2^2 + 2C_3)(e^{(k)})^2 + (8C_2^3 - 6C_2C_3 + 2C_4 - 4C_3C_2)(e^{(k)})^3 + (-20C_2^4 + 16C_2^2C_3 - 8C_3^2 - 4C_2C_4 + 11C_2C_3C_2 - 2C_4C_2 + 10C_3C_2^2 - 3C_2C_4)(e^{(k)})^4 + O((e^{(k)})^5).$
Then, using the Taylor expansion of H,
$H(t^{(k)}) = H_0 + H_1 t^{(k)} + \frac{1}{2} H_2 (t^{(k)})^2 + O((t^{(k)})^3),$
fixing $H_0 = I$ and $H_1 = 2$, and with the help of Equations (9) and (10), we obtain
$H(t^{(k)})[F'(x^{(k)})]^{-1} F(y^{(k)}) = C_2 (e^{(k)})^2 + (-2C_2^2 + 2C_3)(e^{(k)})^3 + \left[ \left( -1 + \frac{H_2}{2} \right)C_2^3 - 4C_2C_3 + 3C_4 - 2C_3C_2 \right](e^{(k)})^4 + \left[ (28 - 5H_2)C_2^4 + (-2 + H_2)C_2^2C_3 + (-4 + H_2)C_2C_3C_2 - 4C_3^2 - 6C_2C_4 + 4C_5 - 4C_4C_2 + (-6 + H_2)C_3C_2^2 \right](e^{(k)})^5 + \left[ -6C_3C_4 - 8C_2C_5 + 5C_6 + 52C_2^2C_3C_2 - 13C_3^2C_2 - 10C_5C_2 + 52C_2C_3C_2^2 + 4C_4C_2^2 + C_3C_2^3 - 8C_4C_3 + (105 - 10H_2)C_2^3C_3 + (-3 + H_2)C_2C_4C_2 - 9H_2 C_2^2C_3C_2 + 2H_2 C_3^2C_2 - 9C_2C_3C_2^2 + H_2 C_4C_2^2 - 9H_2 C_3C_2^2 + (20 + H_2)C_3C_2C_3 + \left( -3 + \frac{3H_2}{2} \right)C_2^2C_4 + (-40 + 2H_2)C_2C_3^2 + (-154 + 31H_2)C_2^5 \right](e^{(k)})^6 + O((e^{(k)})^7).$
Then, the error at the second step is
$z^{(k)} - \xi = \left[ \left( 5 - \frac{H_2}{2} \right)C_2^3 - C_3C_2 \right](e^{(k)})^4 + \left[ (-36 + 5H_2)C_2^4 + (10 - H_2)(C_2^2C_3 + C_2C_3C_2) - 2C_3^2 + (12 - H_2)C_3C_2^2 \right](e^{(k)})^5 + \left[ (20 - 2H_2)C_2C_3^2 - 3C_3C_4 + 5C_5C_2 + (170 - 31H_2)C_2^5 + (22 - 2H_2)C_3^2C_2 + (24 - 2H_2)C_3C_2C_3 + \left( 15 - \frac{3H_2}{2} \right)C_2^2C_4 + (4 - H_2)C_4C_2^2 + (-65 + 9H_2)C_3C_2^3 + (11 - H_2)C_2C_4C_2 + (-64 + 9H_2)C_2C_3C_2^2 + (-64 + 9H_2)C_2^2C_3C_2 + (-69 + 10H_2)C_2^3C_3 \right](e^{(k)})^6 + O((e^{(k)})^7),$
and therefore
$F(z^{(k)}) = F'(\xi)(z^{(k)} - \xi) + O((z^{(k)} - \xi)^2) = F'(\xi)\left\{ \left[ \left( 5 - \frac{H_2}{2} \right)C_2^3 - C_3C_2 \right](e^{(k)})^4 + \left[ (-36 + 5H_2)C_2^4 + (10 - H_2)(C_2^2C_3 + C_2C_3C_2) - 2C_3^2 + (12 - H_2)C_3C_2^2 \right](e^{(k)})^5 + \left[ (20 - 2H_2)C_2C_3^2 - 3C_3C_4 + 5C_5C_2 + (170 - 31H_2)C_2^5 + (22 - 2H_2)C_3^2C_2 + (24 - 2H_2)C_3C_2C_3 + \left( 15 - \frac{3H_2}{2} \right)C_2^2C_4 + (4 - H_2)C_4C_2^2 + (-65 + 9H_2)C_3C_2^3 + (11 - H_2)C_2C_4C_2 + (-64 + 9H_2)C_2C_3C_2^2 + (-64 + 9H_2)C_2^2C_3C_2 + (-69 + 10H_2)C_2^3C_3 \right](e^{(k)})^6 \right\} + O((e^{(k)})^7).$
Finally, with Equations (9), (11) and (12) we get
$H(t^{(k)})[F'(x^{(k)})]^{-1} F(z^{(k)}) = \left[ \left( 5 - \frac{H_2}{2} \right)C_2^3 - C_3C_2 \right](e^{(k)})^4 + \left[ (-36 + 5H_2)C_2^4 + (10 - H_2)(C_2^2C_3 + C_2C_3C_2) - 2C_3^2 + (12 - H_2)C_3C_2^2 \right](e^{(k)})^5 + \frac{1}{4}\left[ -12C_3C_4 + 20C_5C_2 + (80 - 8H_2)C_2C_3^2 + (84 - 8H_2)C_3^2C_2 + (44 - 4H_2)C_2C_4C_2 + (96 - 8H_2)C_3C_2C_3 + (16 - 4H_2)C_4C_2^2 + (60 - 6H_2)C_2^2C_4 + (-256 + 36H_2)C_2C_3C_2^2 + (-232 + 34H_2)C_2^2C_3C_2 + (-240 + 34H_2)C_3C_2^3 + (-276 + 40H_2)C_2^3C_3 + (-H_2^2 - 102H_2 + 560)C_2^5 \right](e^{(k)})^6 + O((e^{(k)})^7),$
and the resulting error equation is
$e^{(k+1)} = \frac{1}{4}\left[ (H_2^2 - 22H_2 + 120)C_2^5 + (-24 + 2H_2)C_2^2 C_3 C_2 + (-20 + 2H_2)C_3 C_2^3 + 4C_3^2 C_2 \right] (e^{(k)})^6 + O((e^{(k)})^7),$
and the proof is complete. □
Theorem 1 provides the convergence conditions for the proposed Class (8) of iterative methods. However, there are several ways to define a matrix weight function H satisfying those conditions, and each choice generates a different iterative scheme or class.
Family 1
The weight function defined by
$H_1(t) = I + 2t + \frac{1}{2}\alpha t^2,$
where $\alpha \in \mathbb{R}$, satisfies the convergence conditions of Theorem 1. A new parametric family of sixth-order methods is then obtained as
$y^{(k)} = x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}),$
$z^{(k)} = y^{(k)} - \left[ I + 2t^{(k)} + \frac{1}{2}\alpha (t^{(k)})^2 \right][F'(x^{(k)})]^{-1} F(y^{(k)}),$
$x^{(k+1)} = z^{(k)} - \left[ I + 2t^{(k)} + \frac{1}{2}\alpha (t^{(k)})^2 \right][F'(x^{(k)})]^{-1} F(z^{(k)}), \quad k \geq 0,$
where $t^{(k)} = I - [F'(x^{(k)})]^{-1}[x^{(k)}, y^{(k)}; F]$. This family is denoted by PSH6$_1$.
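A minimal floating-point sketch of this family may clarify its structure; it is not the authors' variable-precision Matlab code. The divided difference is built columnwise from the first-order estimation recalled in Section 4, one LU-type solve per right-hand side reuses the same Jacobian, and the small test system at the end is an illustrative choice, not one of the paper's examples.

```python
import numpy as np

def divided_difference(F, a, b):
    """Columnwise first-order divided difference operator [a, b; F]."""
    n = a.size
    M = np.empty((n, n))
    for j in range(n):
        u = np.concatenate((a[:j + 1], b[j + 1:]))  # (a_1,...,a_j, b_{j+1},...,b_n)
        v = np.concatenate((a[:j], b[j:]))          # (a_1,...,a_{j-1}, b_j,...,b_n)
        M[:, j] = (F(u) - F(v)) / (a[j] - b[j])
    return M

def psh6_1(F, J, x0, alpha=0.0, tol=1e-12, max_iter=50):
    """Sketch of the PSH6_1 family with H(t) = I + 2t + (alpha/2) t^2.

    All linear systems in one iteration share the Jacobian F'(x^(k))."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    I = np.eye(n)
    for _ in range(max_iter):
        Jx = J(x)
        y = x - np.linalg.solve(Jx, F(x))
        if np.linalg.norm(y - x) < tol:
            return y  # already converged; avoids a_j = b_j in the divided difference
        t = I - np.linalg.solve(Jx, divided_difference(F, x, y))
        H = I + 2.0 * t + 0.5 * alpha * (t @ t)
        z = y - H @ np.linalg.solve(Jx, F(y))
        x_new = z - H @ np.linalg.solve(Jx, F(z))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative system (not from the paper): circle intersected with a line.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = psh6_1(F, J, [0.8, 0.6], alpha=0.0)
```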
Family 2
The weight function defined by
$H_2(t) = I + 2(I + \alpha t)^{-1} t$
also satisfies the convergence conditions of Theorem 1. Then, a new class of sixth-order methods depending on a free parameter $\alpha$ is obtained:
$y^{(k)} = x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}),$
$z^{(k)} = y^{(k)} - \left[ I + 2(I + \alpha t^{(k)})^{-1} t^{(k)} \right][F'(x^{(k)})]^{-1} F(y^{(k)}),$
$x^{(k+1)} = z^{(k)} - \left[ I + 2(I + \alpha t^{(k)})^{-1} t^{(k)} \right][F'(x^{(k)})]^{-1} F(z^{(k)}), \quad k \geq 0,$
where again $t^{(k)} = I - [F'(x^{(k)})]^{-1}[x^{(k)}, y^{(k)}; F]$. In what follows, we denote this class by PSH6$_2$.
Let us also remark that both subclasses use three functional evaluations of F, one evaluation of the Jacobian matrix $F'$ and one evaluation of the divided difference $[\cdot, \cdot\,; F]$ per iteration in order to reach sixth order of convergence.

## 3. Computational Efficiency

In order to analyze the efficiency of an iterative method, there are two key aspects: the number of functional evaluations and the number of operations (products and quotients), both per iteration. So, our aim is to compare the performance of the proposed methods (PSH6$_1$ and PSH6$_2$) and the known ones (C6$_1$, C6$_2$, XH6 and B6, described in the Introduction). To this end, we use the multidimensional extension of the efficiency index $I = p^{1/d}$ defined by Ostrowski in [28] and the computational efficiency index $CI = p^{1/(d + op)}$ defined in [29], where p is the convergence order, d is the number of functional evaluations per iteration and $op$ is the number of products–quotients per iteration.
In order to calculate the efficiency index I, we recall that the number of functional evaluations of F, $F'$ and the first-order divided difference $[\cdot, \cdot\,; F]$ at each iteration is n, $n^2$ and $n(n-1)$, respectively. The comparison of efficiency indices for the different methods is shown in Table 1. Let us remark that, although some of them use more than one Jacobian matrix per iteration or a divided difference operator, the efficiency index I, which takes into account only the number of functional evaluations, is the same in all cases. So, it is necessary to calculate their corresponding computational efficiency index $CI$, which takes the computational effort per iteration into account in order to decide on the efficiency of the different iterative schemes.
For the calculation of the computational efficiency index $CI$, we take into account that the number of products–quotients needed to solve a linear system by Gaussian elimination is $\frac{1}{3}n^3 + n^2 - \frac{1}{3}n$, where n is the system size. If the $LU$ decomposition is reused to solve m linear systems with the same coefficient matrix, then $\frac{1}{3}n^3 + mn^2 - \frac{1}{3}n$ products–quotients are necessary. Moreover, $n^2$ products are needed for a matrix–vector multiplication and $n^2$ quotients for the calculation of a first-order divided difference operator. The notation LS($F'(x)$) and LS(Others) denotes the number of linear systems to be solved with $F'(x)$ as coefficient matrix and with other coefficient matrices, respectively. The comparison of computational efficiency indices of the examined methods is shown in Table 2.
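These counting rules can be encoded directly. The sketch below follows the cost model just described; the specific values of d and op for each scheme come from Table 2 and are not reproduced here.

```python
from fractions import Fraction

def gauss_ops(n, m=1):
    """Products-quotients to solve m linear systems sharing one coefficient
    matrix via a single LU factorization: n^3/3 + m*n^2 - n/3."""
    n = Fraction(n)
    return n**3 / 3 + m * n**2 - n / 3

def ci(p, d, op):
    """Computational efficiency index CI = p^(1/(d + op))."""
    return p ** (1.0 / (d + float(op)))

# e.g. one factorization reused for three right-hand sides in a 20 x 20 system:
cost = gauss_ops(20, m=3)  # = 8000/3 + 1200 - 20/3 = 3860
```

Since CI is a decreasing function of the operation count, the dominating $\frac{1}{3}n^3$ term drives the comparison for large n.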
These indices obviously depend on the size of the nonlinear system to be solved, but some preliminary conclusions can be stated. In particular, the leading third-degree coefficient of the sum of operations and functional evaluations makes a big difference: some methods (including special cases of our proposed ones) have $\frac{1}{3}$ as leading coefficient, while others have $\frac{2}{3}$ or even one, making the computational cost for large systems much higher.
Figure 1 and Figure 2 show the computational efficiency index of the examined methods for systems of size 2 to 20, with weight functions $H_1$ and $H_2$, respectively, in the cases of the proposed schemes. In Figure 1a,b, the parameter $\alpha$ is not null, and in Figure 1c,d it is equal to zero. Let us notice that the behavior of the $CI$ for the weight functions $H_1$ and $H_2$ is the same when $\alpha = 0$, and it is better than those of the comparison methods. This performance is explained by the dominating term $\frac{1}{3}n^3$ in the computational cost: only one type of linear system, with coefficient matrix $F'(x)$, has to be solved per iteration.
Let us also remark that, even when $\alpha \neq 0$, our methods are competitive with the existing ones, especially for systems of size larger than 10, where the differences among the indices of all the methods are not significant (see Figure 1b and Figure 2b).

## 4. Numerical Results

In this section, we compare the numerical performance of the proposed methods PSH6$_1$, described in Expression (14), with $\alpha = 0$, $\alpha = 5.5$ and $\alpha = 10$; PSH6$_2$ (see Equation (16)) for the same values of the parameter $\alpha$; and the existing schemes C6$_1$, described in Equation (1); C6$_2$, expressed in Equation (2); XH6, which appears in Equation (3); and B6, expressed in Equation (4) with $b_1 = 3$.
To this end, we use the Matlab computer algebra system, with 2000 digits of mantissa in variable precision arithmetic, for the comparative numerical experiments. Moreover, the stopping criterion used is $\| x^{(k+1)} - x^{(k)} \| < 10^{-200}$ or $\| F(x^{(k+1)}) \| < 10^{-200}$. The initial values employed and the sought solutions are denoted by $x^{(0)}$ and $\xi$, respectively. When the iterative expression of the method involves the evaluation of a divided difference operator, it is calculated by using the first-order estimation of the Jacobian matrix whose elements are (see [27])
$[a, b; F]_{ij} = \frac{F_i(a_1, \ldots, a_j, b_{j+1}, \ldots, b_n) - F_i(a_1, \ldots, a_{j-1}, b_j, \ldots, b_n)}{a_j - b_j}, \quad 1 \leq i, j \leq n,$
where $F_i$, $i = 1, 2, \ldots, n$, are the coordinate functions of F.
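Since consecutive columns of this estimation telescope, the secant equation $[a, b; F](a - b) = F(a) - F(b)$ holds exactly up to rounding for any F. A short sketch, with an arbitrary illustrative F (not one of the test systems below), checks this property:

```python
import numpy as np

def divided_difference(F, a, b):
    """First-order divided difference [a, b; F], built column by column."""
    n = a.size
    M = np.empty((n, n))
    for j in range(n):
        u = np.concatenate((a[:j + 1], b[j + 1:]))  # (a_1,...,a_j, b_{j+1},...,b_n)
        v = np.concatenate((a[:j], b[j:]))          # (a_1,...,a_{j-1}, b_j,...,b_n)
        M[:, j] = (F(u) - F(v)) / (a[j] - b[j])
    return M

# Arbitrary smooth F for the check; summing the columns against (a - b)
# telescopes to F(a) - F(b):
F = lambda x: np.array([x[0]**2 + x[1] * x[2], np.sin(x[1]), x[0] * x[2] - 1.0])
a = np.array([1.2, 0.4, -0.7])
b = np.array([0.9, 0.1, -0.3])
M = divided_difference(F, a, b)
```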
For each nonlinear function, a table displays the results of the numerical experiments, organized as follows: k is the number of iterations needed to converge to the solution ('nc' appears in the table if the method does not converge); the value of the stopping residuals $\| x^{(k+1)} - x^{(k)} \|$ or $\| F(x^{(k+1)}) \|$ at the last step ('-' if there is no convergence); and the approximated computational order of convergence $\rho$ (if the value of $\rho$ for the last iterations is not stable, '-' appears in the table). In this way, it can be checked whether the convergence has reached the root ($\| F(x^{(k+1)}) \| < 10^{-200}$ is achieved), whether it is only a very slow convergence with no significant difference between the last two iterations ($\| x^{(k+1)} - x^{(k)} \| < 10^{-200}$ but $\| F(x^{(k+1)}) \| > 10^{-200}$), or whether both criteria are satisfied.
The test systems used are defined by the following nonlinear functions:
Example 1.
The first nonlinear system, whose solution is $ξ = ( 0.0 , 0.0 ) T$, is described by function
$F_1(x_1, x_2) = (\sin(x_1) + x_2 \sin(x_1),\; x_1 - x_2).$
Our test uses the initial estimation $x^{(0)} = (0.8, 0.8)^T$, and the results appear in Table 3.
In Table 3, it can be observed that, except for method C6$_2$, all the compared schemes converge to the solution in four iterations, with null (for the fixed precision of 2000 digits) or almost null residual $\| F(x^{(k+1)}) \|$. Moreover, the approximated computational order of convergence agrees in all cases with the theoretical one.
Example 2.
The following nonlinear function describes a system with solution $ξ ≈ ( 2.4914 , 0.2427 , 1.6535 ) T$,
$F_3(x_1, x_2, x_3) = (x_1^2 + x_2^2 + x_3^2 - 9,\; x_1 x_2 x_3 - 1,\; x_1 + x_2 - x_3^2).$
We test all the new and existing methods on this system with the initial estimation $x^{(0)} = (2.0, 0.5, 1.0)^T$; the results are provided in Table 4.
In this example, the proposed methods take at least one more iteration to converge to the solution (see Table 4). However, the precision of the results is the same or even better than that of the known methods, as $\| F(x^{(k+1)}) \|$ is null in all the new cases.
Example 3.
Now, we test the methods with the nonlinear system described by
$F_4(x_1, x_2, x_3, x_4) = (x_1 x_2 + x_4(x_1 + x_2),\; x_1 x_3 + x_4(x_1 + x_3),\; x_2 x_3 + x_4(x_2 + x_3),\; x_1 x_2 + x_1 x_3 + x_2 x_3 - 1),$
using as the initial guess $x ( 0 ) = ( 2.5 , 2.5 , 2.5 , 2.5 ) T$. The searched root is $ξ ≈ ( 0.5774 , 0.5774 , 0.5774 , − 0.2887 ) T$ and we can find the residuals, the number of iterations needed to converge and the estimated order of convergence in Table 5.
A similar performance can be observed in Table 5, where the effective stopping criterion is the one involving the evaluation of the nonlinear function, and the residual is null in most cases. However, the number of iterations needed does not improve on that of most of the known methods.
Example 4.
Finally, we test the proposed and existing methods on a nonlinear system of variable size, described as
$x_i - \cos\left( 2x_i - \sum_{j=1}^{4} x_j \right) = 0, \quad i = 1, 2, \ldots, n,$
with $n = 20$, starting with the estimation $x^{(0)} = (0.75, \ldots, 0.75)^T$. In this case, the solution is $\xi \approx (0.5149, \ldots, 0.5149)^T$ and the obtained results can be found in Table 6.
When the systems are large, as in the case of $F_5(x_1, x_2, \ldots, x_n)$ with $n = 20$, our schemes provide excellent results, equaling or improving the performance of the existing procedures (see Table 6). The number of iterations needed to satisfy one of the stopping criteria and the residuals obtained show a very competitive performance. Moreover, the theoretical order of convergence is estimated with full precision.

## 5. Conclusions

In this work, we have proposed an efficient class of iterative schemes, with two specific subfamilies showing very good performance. We have also compared them with other existing methods of the same order, with good results. The choice of parameters for the different proposed subfamilies does not pursue a specific objective; the dependence of the convergence on the selection of the parameter $\alpha$ is left for further studies. The numerical experiments show similar, though slightly better, behavior for PSH6$_1$ than for PSH6$_2$ in comparison with the existing procedures.

## Author Contributions

Formal analysis, A.C.; Investigation, R.R.C. and J.R.T.; Software, R.R.C.; Writing—original draft, R.R.C.; Writing—review and editing, A.C. and J.R.T. These authors contributed equally to this work.

## Funding

This research has been partially supported by both Generalitat Valenciana and Ministerio de Ciencia, Investigación y Universidades, under grants PROMETEO/2016/089 and PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE), respectively.

## Acknowledgments

The authors would like to thank the anonymous reviewers for their comments and suggestions that have improved the final version of this manuscript.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: New York, NY, USA, 1964.
2. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Design, Analysis, and Applications of Iterative Methods for Solving Nonlinear Systems. In Nonlinear Systems—Design, Analysis, Estimation and Control; InTech: London, UK, 2016.
3. Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; SEMA SIMAI Springer Series: New York, NY, USA, 2016.
4. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Pseudo-composition: A technique to design predictor–corrector methods for systems of nonlinear equations. Appl. Math. Comput. 2012, 218, 11496–11508.
5. Babolian, E.; Biazar, J.; Vahidi, A.R. Solution of a system of nonlinear equations by Adomian decomposition method. Appl. Math. Comput. 2004, 150, 847–854.
6. Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685.
7. Cordero, A.; Martínez, E.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2009, 231, 541–551.
8. Hernández, M.A. Second-derivative-free variant of the Chebyshev method for nonlinear equations. J. Opt. Theory Appl. 2000, 104, 501–515.
9. Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Third and fourth order iterative methods free from second derivative for nonlinear systems. Appl. Math. Comput. 2009, 211, 190–197.
10. Soleymani, F.; Sharifi, M.; Shateyi, S.; Haghani, F.K. A class of Steffensen-type iterative methods for nonlinear systems. J. Appl. Math. 2014, 2014, 705375.
11. Singh, A. An efficient fifth-order Steffensen-type method for solving systems of nonlinear equations. Int. J. Comput. Sci. Math. 2018, 9, 501–514.
12. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284.
13. Cordero, A.; Jordán, C.; Sanabria, E.; Torregrosa, J.R. A New Class of Iterative Processes for Solving Nonlinear Systems by Using One Divided Differences Operator. Mathematics 2019, 7, 776.
14. Narang, M.; Bhatia, S.; Alshomrani, A.S.; Kanwar, V. General efficient class of Steffensen type methods with memory for solving systems of nonlinear equations. J. Comput. Appl. Math. 2019, 352, 23–39.
15. Candela, V.; Peris, R. A class of third order iterative Kurchatov–Steffensen (derivative free) methods for solving nonlinear equations. Appl. Math. Comput. 2019, 350, 93–104.
16. Cordero, A.; Maimó, J.G.; Torregrosa, J.R.; Vassileva, M.P. Iterative Methods with Memory for Solving Systems of Nonlinear Equations Using a Second Order Approximation. Mathematics 2019, 7, 1069.
17. Li, J.; Huang, P.; Su, J.; Chen, Z. A linear, stabilized, nonspatial iterative, partitioned time stepping method for the nonlinear Navier–Stokes/Navier–Stokes interaction model. Bound. Value Probl. 2019, 2019, 115.
18. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth-order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323.
19. Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional generalization of iterative methods for solving nonlinear problems by means of weight-function procedure. Appl. Math. Comput. 2015, 268, 1064–1071.
20. Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Design and multidimensional extension of iterative methods for solving nonlinear problems. Appl. Math. Comput. 2017, 293, 194–203.
21. Abad, M.; Cordero, A.; Torregrosa, J.R. A family of seventh-order schemes for solving nonlinear systems. Bulletin Mathématique (Société des Sciences Mathématiques de Roumanie) 2014, 57, 133–145.
22. Cordero, A.; García-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P. Solving nonlinear problems by Ostrowski–Chun type parametric families. J. Math. Chem. 2015, 53, 430–449.
23. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374.
24. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarrat composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
25. Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300. [Google Scholar] [CrossRef]
26. Behl, R.; Sarría, Í.; González, R.; Magreñán, Á.A. Highly efficient family of iterative methods for solving nonlinear models. J. Comput. Appl. Math. 2019, 346, 110–132. [Google Scholar] [CrossRef]
27. Ortega, J.M.; Rheinbolt, W.C. Iterative Solutions of Nonlinears Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
28. Ostrowski, A.M. Solution of Equations and System of Equations; Prentice Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
29. Cordero, A.; Torregrosa, J.R. On interpolation variants of Newton’s method for functions of several variables. J. Comput. Appl. Math. 2010, 234, 34–43. [Google Scholar] [CrossRef][Green Version]
Figure 1. Computational efficiency ($CI$) indices for PSH6$_1$ and the comparison methods.
Figure 2. $CI$ indices for PSH6$_2$ and the comparison methods.
Table 1. Efficiency index of the examined methods.

| Method | n. $F$ | n. $F'$ | n. $[\cdot,\cdot;F]$ | FE | EI |
|---|---|---|---|---|---|
| PSH6$_1$ | 3 | 1 | 1 | $2n^2+2n$ | $6^{1/(2n^2+2n)}$ |
| PSH6$_2$ | 3 | 1 | 1 | $2n^2+2n$ | $6^{1/(2n^2+2n)}$ |
| $C6_1$ | 3 | 2 | 0 | $2n^2+3n$ | $6^{1/(2n^2+3n)}$ |
| $C6_2$ | 2 | 2 | 0 | $2n^2+2n$ | $6^{1/(2n^2+2n)}$ |
| $XH6$ | 2 | 2 | 0 | $2n^2+2n$ | $6^{1/(2n^2+2n)}$ |
| $B6$ | 2 | 2 | 0 | $2n^2+2n$ | $6^{1/(2n^2+2n)}$ |
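The last column of Table 1 is Ostrowski’s efficiency index $EI = p^{1/d}$, where $p$ is the order of convergence (six for every method compared) and $d$ the number of scalar functional evaluations per iteration. A minimal Python sketch, using the FE counts from Table 1, shows how the indices compare for a given system size $n$:

```python
# Efficiency index EI = p**(1/d): p is the order of convergence (6 for all
# methods compared here), d the number of scalar functional evaluations per
# iteration, taken from the FE column of Table 1.
def efficiency_index(p, d):
    return p ** (1.0 / d)

# FE counts as functions of the system size n (from Table 1).
fe_counts = {
    "PSH6_1": lambda n: 2 * n**2 + 2 * n,  # 3 F, 1 F', 1 divided difference
    "C6_1":   lambda n: 2 * n**2 + 3 * n,  # 3 F, 2 F'
}

n = 10
for name, fe in fe_counts.items():
    print(name, efficiency_index(6, fe(n)))
```

Since all methods share order six, the scheme with fewer functional evaluations per iteration always attains the larger index.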
Table 2. Computational efficiency index of the examined methods.

| Method | FE | LS ($F'(x)$) | LS (others) | $M\times V$ | $CI$ |
|---|---|---|---|---|---|
| PSH6$_1$ ($\alpha \neq 0$) | $2n^2+2n$ | 7 | 0 | 4 | $6^{1/(\frac{1}{3}n^3+13n^2+\frac{5}{3}n)}$ |
| PSH6$_1$ ($\alpha = 0$) | $2n^2+2n$ | 5 | 0 | 2 | $6^{1/(\frac{1}{3}n^3+5n^2+\frac{2}{3}n)}$ |
| PSH6$_2$ ($\alpha \neq 0$) | $2n^2+2n$ | 5 | 4 | 2 | $6^{1/(\frac{2}{3}n^3+11n^2-\frac{2}{3}n)}$ |
| PSH6$_2$ ($\alpha = 0$) | $2n^2+2n$ | 5 | 0 | 2 | $6^{1/(\frac{1}{3}n^3+5n^2+\frac{2}{3}n)}$ |
| $C6_1$ | $2n^2+3n$ | 3 | 1 | 1 | $6^{1/(\frac{2}{3}n^3+7n^2+\frac{7}{3}n)}$ |
| $C6_2$ | $2n^2+2n$ | 1 | 3 | 2 | $6^{1/(\frac{2}{3}n^3+8n^2+\frac{4}{3}n)}$ |
| $XH6$ | $2n^2+2n$ | 3 | 2 | 2 | $6^{1/(\frac{2}{3}n^3+9n^2+\frac{4}{3}n)}$ |
| $B6$ | $2n^2+2n$ | 2 | 4 | 3 | $6^{1/(n^3+11n^2+n)}$ |
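Figures 1 and 2 plot the computational efficiency index $CI = p^{1/C}$, where the cost $C$ adds the products and quotients of the per-iteration linear algebra to the functional evaluations. A hedged sketch of the comparison, taking the closed-form cost polynomials directly from the exponent denominators in Table 2 (no independent operation count is derived here):

```python
# Computational efficiency index CI = p**(1/C); C combines functional
# evaluations and linear-algebra operation counts. The cost polynomials
# below are the CI exponent denominators reported in Table 2 for PSH6_1.
def ci(p, cost):
    return p ** (1.0 / cost)

def cost_psh6_1(n, alpha_zero=False):
    if alpha_zero:  # PSH6_1 with alpha = 0: fewer linear-system solves
        return n**3 / 3 + 5 * n**2 + 2 * n / 3
    return n**3 / 3 + 13 * n**2 + 5 * n / 3

n = 15
print(ci(6, cost_psh6_1(n, alpha_zero=True)),
      ci(6, cost_psh6_1(n, alpha_zero=False)))
```

The $\alpha = 0$ variant is cheaper per iteration, so its $CI$ curve dominates for every $n$.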
Table 3. Numerical results of the examined methods for $F_1(x_1,x_2)$ and $x^{(0)} = (0.8, 0.8)^T$.

| Method | k | $\|x^{(k+1)}-x^{(k)}\|$ | $\|F(x^{(k+1)})\|$ | $\rho$ |
|---|---|---|---|---|
| PSH6$_1$ ($\alpha = 0.0$) | 4 | $5.7517 \times 10^{-60}$ | 0.0 | 5.9906 |
| PSH6$_1$ ($\alpha = 5.5$) | 4 | $2.0238 \times 10^{-64}$ | 0.0 | 5.9962 |
| PSH6$_1$ ($\alpha = 10$) | 4 | $2.9651 \times 10^{-78}$ | 0.0 | 6.0264 |
| PSH6$_2$ ($\alpha = 0.0$) | 4 | $5.7517 \times 10^{-60}$ | 0.0 | 5.9906 |
| PSH6$_2$ ($\alpha = 5.5$) | 4 | $1.0081 \times 10^{-46}$ | $3.6422 \times 10^{-275}$ | 5.9701 |
| PSH6$_2$ ($\alpha = 10$) | 4 | $6.6149 \times 10^{-43}$ | $6.8963 \times 10^{-252}$ | 5.9523 |
| $C6_1$ | 4 | $1.5912 \times 10^{-73}$ | 0.0 | 5.9973 |
| $C6_2$ | 10 | $6.3065 \times 10^{-72}$ | 0.0 | 5.9975 |
| $XH6$ | 4 | $8.6943 \times 10^{-66}$ | 0.0 | 5.9953 |
| $B6$ | 4 | $5.0674 \times 10^{-80}$ | 0.0 | 6.0030 |
Table 4. Numerical results of the examined methods for $F_3(x_1,x_2,x_3)$ and $x^{(0)} = (2.0, 0.5, 1.0)^T$.

| Method | k | $\|x^{(k+1)}-x^{(k)}\|$ | $\|F(x^{(k+1)})\|$ | $\rho$ |
|---|---|---|---|---|
| PSH6$_1$ ($\alpha = 0.0$) | 5 | $1.1553 \times 10^{-91}$ | 0.0 | - |
| PSH6$_1$ ($\alpha = 5.5$) | 5 | $1.3862 \times 10^{-138}$ | 0.0 | - |
| PSH6$_1$ ($\alpha = 10$) | 5 | $3.1738 \times 10^{-101}$ | 0.0 | - |
| PSH6$_2$ ($\alpha = 0.0$) | 5 | $1.1553 \times 10^{-91}$ | 0.0 | - |
| PSH6$_2$ ($\alpha = 5.5$) | 6 | $6.4700 \times 10^{-85}$ | 0.0 | - |
| PSH6$_2$ ($\alpha = 10$) | 6 | $2.7383 \times 10^{-132}$ | 0.0 | - |
| $C6_1$ | 4 | $5.5171 \times 10^{-38}$ | $7.1730 \times 10^{-225}$ | 6.0424 |
| $C6_2$ | 4 | $2.1522 \times 10^{-93}$ | 0.0 | 6.0006 |
| $XH6$ | 4 | $6.1878 \times 10^{-50}$ | $5.5325 \times 10^{-297}$ | 5.9482 |
| $B6$ | 4 | $5.1979 \times 10^{-168}$ | 0.0 | 6.0365 |
Table 5. Numerical results of the examined methods for $F_4(x_1,x_2,x_3,x_4)$ and $x^{(0)} = (2.5, 2.5, 2.5, 2.5)^T$.

| Method | k | $\|x^{(k+1)}-x^{(k)}\|$ | $\|F(x^{(k+1)})\|$ | $\rho$ |
|---|---|---|---|---|
| PSH6$_1$ ($\alpha = 0.0$) | 5 | $1.7213 \times 10^{-82}$ | 0.0 | 5.8841 |
| PSH6$_1$ ($\alpha = 5.5$) | 5 | $6.2032 \times 10^{-101}$ | 0.0 | 6.0319 |
| PSH6$_1$ ($\alpha = 10$) | 5 | $5.9604 \times 10^{-139}$ | 0.0 | 7.0104 |
| PSH6$_2$ ($\alpha = 0.0$) | 5 | $1.7213 \times 10^{-82}$ | 0.0 | 5.8841 |
| PSH6$_2$ ($\alpha = 5.5$) | 5 | $2.4280 \times 10^{-56}$ | 0.0 | 5.4681 |
| PSH6$_2$ ($\alpha = 10$) | 5 | $2.2166 \times 10^{-50}$ | 0.0 | 5.2317 |
| $C6_1$ | 4 | $2.8009 \times 10^{-167}$ | 0.0 | 6.1732 |
| $C6_2$ | 4 | $6.0097 \times 10^{-36}$ | $9.3590 \times 10^{-222}$ | 6.7740 |
| $XH6$ | 5 | $1.0184 \times 10^{-173}$ | 0.0 | 6.1665 |
| $B6$ | 4 | $9.0970 \times 10^{-198}$ | 0.0 | 5.6982 |
Table 6. Numerical results of the proposed methods for $F_5(x_1, x_2, \ldots, x_n)$, $n = 20$ and $x^{(0)} = (0.75, \ldots, 0.75)^T$.

| Method | k | $\|x^{(k+1)}-x^{(k)}\|$ | $\|F(x^{(k+1)})\|$ | $\rho$ |
|---|---|---|---|---|
| PSH6$_1$ ($\alpha = 0.0$) | 4 | $1.8871 \times 10^{-184}$ | 0.0 | 6.0 |
| PSH6$_1$ ($\alpha = 5.5$) | 4 | $1.1531 \times 10^{-189}$ | 0.0 | 6.0 |
| PSH6$_1$ ($\alpha = 10$) | 4 | $2.8662 \times 10^{-195}$ | 0.0 | 6.0 |
| PSH6$_2$ ($\alpha = 0.0$) | 4 | $1.8871 \times 10^{-184}$ | 0.0 | 6.0 |
| PSH6$_2$ ($\alpha = 5.5$) | 4 | $2.0650 \times 10^{-171}$ | 0.0 | 6.0 |
| PSH6$_2$ ($\alpha = 10$) | 4 | $4.6908 \times 10^{-165}$ | 0.0 | 6.0 |
| $C6_1$ | 3 | $9.2604 \times 10^{-39}$ | $7.5226 \times 10^{-233}$ | 5.7540 |
| $C6_2$ | 4 | $9.7326 \times 10^{-195}$ | 0.0 | 6.0 |
| $XH6$ | 4 | $2.4997 \times 10^{-191}$ | 0.0 | 6.0 |
| $B6$ | 6 | $5.7210 \times 10^{-197}$ | 0.0 | 6.0 |
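The last column $\rho$ of Tables 3–6 is the approximated computational order of convergence (ACOC), estimated from the norms of consecutive increments $\|x^{(k+1)} - x^{(k)}\|$. A minimal sketch of the standard formula, applied here to synthetic increments (not data from the tables):

```python
import math

def acoc(increments):
    """Approximated computational order of convergence from consecutive
    increment norms e_k = ||x^(k+1) - x^(k)||:
        rho_k = ln(e_{k+1} / e_k) / ln(e_k / e_{k-1})."""
    e = increments
    return [math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
            for k in range(1, len(e) - 1)]

# Synthetic increments of a sequence converging with order about 6.
print(acoc([1e-1, 1e-6, 1e-36]))  # close to [6.0]
```

Values such as $\rho \approx 6.0$ in the tables are consistent with the theoretical sixth order; a dash indicates that too few iterations were available for a stable estimate.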

## Share and Cite

Capdevila, R.R.; Cordero, A.; Torregrosa, J.R. A New Three-Step Class of Iterative Methods for Solving Nonlinear Systems. Mathematics 2019, 7, 1221. https://doi.org/10.3390/math7121221