Abstract
In this manuscript, a new family of Jacobian-free iterative methods for solving nonlinear systems is presented. Fourth-order convergence is established for all the elements of the class, proving, in addition, that one element of this family has order five. The proposed methods have four steps and, in all of them, the same divided difference operator appears. Numerical problems, including systems of academic interest and the system resulting from the discretization of the boundary value problem described by Fisher's equation, are shown in order to compare the performance of the proposed schemes with other known ones. The numerical tests are in agreement with the theoretical results.
1. Introduction
The design of iterative processes for solving scalar equations, f(x) = 0, or nonlinear systems, F(x) = 0, with n unknowns and n equations, is an interesting challenge in numerical analysis. Many problems in Science and Engineering require the solution of a nonlinear equation or system at some step of the process. However, in general, neither equations nor nonlinear systems have an analytical solution, so we must resort to approximating the solution by means of iterative techniques. Different tools, such as quadrature formulas, Adomian polynomials, divided difference operators and weight function procedures, have been used by many researchers to design iterative schemes for solving nonlinear problems. For a good overview of these procedures and techniques, as well as of the different schemes developed in the last half century, we refer the reader to some standard texts [1,2,3,4,5].
In this paper, we want to design Jacobian-free iterative schemes for approximating the solution of a nonlinear system F(x) = 0, where F: D ⊆ R^n → R^n is a nonlinear multivariate function defined on a convex set D. The best known method for finding a solution is Newton's procedure,
x^(k+1) = x^(k) − [F'(x^(k))]^(−1) F(x^(k)), k = 0, 1, …,
F'(x^(k)) being the Jacobian of F evaluated at the kth iterate.
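To fix ideas, a minimal sketch of this procedure in floating-point arithmetic follows (our own illustration; the 2 × 2 test system, starting point and tolerance are hypothetical, not taken from the paper):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0: solve J(x_k) d = F(x_k), set x_{k+1} = x_k - d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.solve(J(x), F(x))  # never form the inverse explicitly
        x = x - d
        if np.linalg.norm(d) < tol:
            break
    return x

# Hypothetical test system: x^2 + y^2 - 4 = 0, x*y - 1 = 0
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
root = newton(F, J, [2.0, 0.5])
```

Note that the inverse Jacobian is never formed: each step solves a linear system, which is also how the divided-difference methods recalled below are implemented in practice.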
Based on Newton-type schemes and by using different techniques, several methods for approximating a solution of F(x) = 0 have been published recently. The main objective of all these processes is to speed up convergence or to increase computational efficiency. We now recall some of them, which will be used in the last section for comparison purposes.
From a variant of Steffensen's method for systems, introduced by Samanskii in [6], which replaces the Jacobian matrix by the divided difference operator defined as
with w^(k) = x^(k) + F(x^(k)), Wang and Fang in [7] designed a fourth-order scheme, denoted by WF4, whose iterative expression is
where I is the n × n identity matrix. Let us observe that this method uses two functional evaluations and two divided difference operators per iteration. Let us also remark that Samanskii in [6] defined a third-order method with the same divided difference operator at both steps.
Sharma and Arora in [8] added a new step to the previous method, obtaining a sixth-order scheme, denoted by SA6, whose expression is
where the operators are defined as before. Compared with WF4, one additional functional evaluation per iteration is needed.
By replacing the third step of equation (2), Narang et al. in [9] proposed the following seventh-order scheme, denoted by NM7, which uses two divided difference operators and three functional evaluations per iteration:
where the divided difference operators are defined as in the previous schemes.
In a similar way, Wang et al. (see [10]) designed a scheme of order seven, which we denote by S7, by modifying only the third step of expression (3). Its iterative expression is
where the notation is the same as in the previous schemes.
Different indices can be used to compare the efficiency of iterative processes. For example, in [11], Ostrowski introduced the efficiency index I = p^(1/d), where p is the order of convergence and d is the number of functional evaluations per iteration. Moreover, the matrix inversions appearing in the iterative expressions are, in practice, calculated by solving linear systems. Therefore, the number of quotients/products, denoted by op, employed at each iteration also plays an important role. This is the reason why we presented in [12] the computational efficiency index, combining d and the number of operations per iteration, defined as C = p^(1/(d+op)).
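Both indices are straightforward to evaluate; in the sketch below (ours; the counts p = 4, d = 10, op = 30 are hypothetical, chosen only for illustration), the index C penalizes a scheme with heavy linear algebra more than I does:

```python
def efficiency_index(p, d):
    """Ostrowski's efficiency index I = p**(1/d)."""
    return p ** (1.0 / d)

def computational_efficiency_index(p, d, op):
    """Computational efficiency index C = p**(1/(d + op)),
    which also counts the op quotients/products per iteration."""
    return p ** (1.0 / (d + op))

# Hypothetical counts: order 4, 10 functional evaluations, 30 products/quotients
I = efficiency_index(4, 10)
C = computational_efficiency_index(4, 10, 30)
```

Since op ≥ 0, one always has C ≤ I, with equality only for schemes free of products and quotients.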
Our goal in this manuscript is to construct high-order Jacobian-free iterative schemes for solving nonlinear systems with low computational cost for large systems.
We recall, in Section 2, some basic concepts that we use in the rest of the manuscript. Section 3 is devoted to describing our proposed iterative methods for solving nonlinear systems and to analyzing their convergence. The efficiency indices of our methods are studied in Section 4, together with a comparative analysis with the schemes presented in the Introduction. Several numerical tests are shown in Section 5 to illustrate the performance of the new schemes. To this end, we use the nonlinear systems obtained by discretizing Fisher's equation by means of finite-difference approximations of the derivatives, as well as some systems of academic interest. We finish the manuscript with some conclusions.
2. Basic Concepts
If a sequence {x^(k)} in R^n converges to α, it is said to be of order of convergence p, being p ≥ 1, if there exist M > 0 (0 < M < 1 for p = 1) and k0 satisfying
||x^(k+1) − α|| ≤ M ||x^(k) − α||^p, for all k ≥ k0,
or
||e^(k+1)|| ≤ M ||e^(k)||^p, for all k ≥ k0,
being e^(k) = x^(k) − α.
Although this notation was presented by the authors in [12], we show it for the sake of completeness. Let F: D ⊆ R^n → R^n be sufficiently Fréchet differentiable in D. The qth derivative of F at u ∈ R^n, q ≥ 1, is the q-linear function F^((q))(u): R^n × ⋯ × R^n → R^n such that F^((q))(u)(v_1, …, v_q) ∈ R^n. Let us observe that
- 1.
- F^((q))(u)(v_1, …, v_{q−1}, ·) ∈ L(R^n), where L(R^n) denotes the set of linear mappings defined from R^n into R^n.
- 2.
- F^((q))(u)(v_{σ(1)}, …, v_{σ(q)}) = F^((q))(u)(v_1, …, v_q), for all permutations σ of {1, 2, …, q}.
From the above properties, we can use the following notation (let us observe that v^p denotes (v, v, …, v), p times):
- (a) F^((q))(u)(v_1, …, v_q) = F^((q))(u) v_1 ⋯ v_q;
- (b) F^((q))(u) v^(q−1) F^((p))(u) v^p = F^((q))(u) F^((p))(u) v^(q+p−1).
Let us consider x^(k) in a neighborhood of the solution α. By applying Taylor series and considering that F'(α) is nonsingular,
F(x^(k)) = F'(α)[e^(k) + C_2 (e^(k))^2 + ⋯ + C_p (e^(k))^p] + O((e^(k))^(p+1)),
being C_q = (1/q!)[F'(α)]^(−1) F^((q))(α), q = 2, 3, …, and e^(k) = x^(k) − α. Let us notice that F'(α)C_q ∈ L(R^n × ⋯ × R^n, R^n), as F^((q))(α) ∈ L(R^n × ⋯ × R^n, R^n) and [F'(α)]^(−1) ∈ L(R^n).
Moreover, we express F'(x^(k)) as
F'(x^(k)) = F'(α)[I + 2 C_2 e^(k) + 3 C_3 (e^(k))^2 + ⋯],
the identity matrix being denoted by I. Then, [F'(x^(k))]^(−1) exists in a neighborhood of α. From expression (6), we get
[F'(x^(k))]^(−1) = [I + X_2 e^(k) + X_3 (e^(k))^2 + ⋯][F'(α)]^(−1),
where X_2 = −2C_2, X_3 = 4C_2^2 − 3C_3, and so on.
The equation
e^(k+1) = K (e^(k))^p + O((e^(k))^(p+1)),
where K is a p-linear operator, K ∈ L(R^n × ⋯ × R^n, R^n), is known as the error equation, and p is the order of convergence. In addition, e^(k) = x^(k) − α denotes the error at the kth iterate.
The divided difference operator of function F (see, for example, [2]) is defined as a mapping [·, ·; F]: D × D ⊆ R^n × R^n → L(R^n) satisfying
[x, y; F](x − y) = F(x) − F(y), for all x, y ∈ D.
In addition, by using the Genocchi–Hermite formula [13] and Taylor series expansions around x, the divided difference operator is expressed, for all x, as
[x + h, x; F] = ∫_0^1 F'(x + th) dt = F'(x) + (1/2) F''(x) h + (1/6) F'''(x) h^2 + O(h^3).
Being x and y two points in a neighborhood of the solution α, the divided difference operator for points x and y admits an expansion in terms of the corresponding errors, whose coefficients are obtained by replacing the Taylor expansions of the different terms that appear in development (9) and doing algebraic manipulations.
For computational purposes, the following componentwise expression is used:
[y, x; F]_{ij} = ( f_i(y_1, …, y_j, x_{j+1}, …, x_n) − f_i(y_1, …, y_{j−1}, x_j, …, x_n) ) / (y_j − x_j),
where 1 ≤ i, j ≤ n, x = (x_1, …, x_n), y = (y_1, …, y_n), and f_i denotes the ith coordinate function of F.
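This componentwise formula can be implemented directly; the following sketch (our own illustration) builds [y, x; F] column by column and can be checked against the defining identity [y, x; F](y − x) = F(y) − F(x):

```python
import numpy as np

def divided_difference(F, x, y):
    """First-order divided difference [y, x; F], built column by column:
    column j mixes the first j components of y with the last n - j of x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = x.size
    M = np.empty((n, n))
    for j in range(n):
        hi = np.concatenate((y[:j + 1], x[j + 1:]))  # y_1..y_j, x_{j+1}..x_n
        lo = np.concatenate((y[:j], x[j:]))          # y_1..y_{j-1}, x_j..x_n
        M[:, j] = (F(hi) - F(lo)) / (y[j] - x[j])
    return M
```

For a linear map F(v) = Av this operator reproduces A exactly, and in general the secant-type identity above holds by telescoping the column sums.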
3. Proposed Methods and Their Convergence
From a Samanskii-type method and by using the composition procedure, "freezing" the divided difference operator (that is, we hold the same divided difference operator in all the steps of the method), we propose the following four-step iterative class with the aim of reaching order five:
where the coefficients are real parameters and w^(k) = x^(k) + F(x^(k)).
It is possible to prove that, under some conditions on the parameters, order five can be reached. We have tried different combinations of these steps, aiming to preserve order five while reducing the computational cost. The best result we have been able to achieve is the following:
where the coefficients are real parameters and, again, w^(k) = x^(k) + F(x^(k)). The convergence of class (11) is presented in the following result.
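Since the displayed expressions of (10) and (11) are not reproduced here, the following sketch is only a structural illustration of the "frozen operator" idea, not the authors' class: one divided difference A = [w^(k), x^(k); F], with w^(k) = x^(k) + F(x^(k)), is computed (and, in practice, factorized) once per iteration and reused in every substep. The test system used below is hypothetical.

```python
import numpy as np

def frozen_multistep_iteration(F, x, substeps=3):
    """One outer iteration: build [w, x; F] once, reuse it in all substeps.
    Structural sketch only; the authors' class (11) adds parametrized weights."""
    x = np.asarray(x, dtype=float)
    w = x + F(x)
    n = x.size
    A = np.empty((n, n))
    for j in range(n):
        hi = np.concatenate((w[:j + 1], x[j + 1:]))
        lo = np.concatenate((w[:j], x[j:]))
        A[:, j] = (F(hi) - F(lo)) / (w[j] - x[j])
    Ainv = np.linalg.inv(A)   # in practice, one LU factorization reused per solve
    z = x
    for _ in range(substeps): # every substep reuses the same ("frozen") operator
        z = z - Ainv @ F(z)
    return z
```

Freezing avoids recomputing and refactorizing the operator at each substep, which is precisely what lowers the op count of the class with respect to schemes that use several divided differences per iteration.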
Theorem 1.
Let F: D ⊆ R^n → R^n be a sufficiently differentiable operator at each point of an open neighborhood D of the solution α of the system F(x) = 0. Let us suppose that F'(x) is continuous and nonsingular at α and that the initial estimation x^(0) is near enough to α. Then, the sequence {x^(k)} calculated from expression (11) converges to α with order four provided that the parameters satisfy suitable conditions, the error equation being
In addition, if one further condition on the parameters holds, the order of convergence is five and the error equation is
where the constants involved depend on the parameters of the family.
Proof.
By using the Taylor expansion of F(x^(k)) and its derivatives around the solution α:
From the above expansions, by replacing the corresponding values in the first-order divided difference operator, we obtain:
From the above expression, we have
where
Then,
Thus,
and
From the values of the iterates defined in expression (11), we have
Then,
Similarly, we obtain
and
Thus,
Therefore, we obtain
Thus, by requiring that the coefficients of the lower-order terms are null, we get the conditions on the parameters under which the order is four,
By adding to the above system the vanishing of the coefficient of the next term, we get order of convergence five, with error equation in this case:
□
4. Efficiency Indices
As we have mentioned in the Introduction, we use the indices I and C to compare the different iterative methods.
To evaluate the function F, n scalar functions must be calculated, and further scalar evaluations are needed for the first-order divided difference. In addition, to apply an inverse linear operator, an n × n linear system must be solved; then, we have to perform (n^3 − n)/3 quotients/products to get the LU decomposition and n^2 more to solve the corresponding triangular linear systems. Moreover, for solving m linear systems with the same matrix of coefficients, we need (n^3 − n)/3 + m n^2 products/quotients. In addition, we need n^2 products for each matrix–vector multiplication, as well as the quotients required for evaluating a divided difference operator.
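These standard counts can be collected in small helper functions (ours; they encode the usual (n^3 − n)/3 count for the LU decomposition and n^2 per triangular pair or matrix–vector product):

```python
def lu_cost(n):
    """Products/quotients of an LU decomposition of an n x n matrix."""
    return (n**3 - n) // 3

def solve_cost(n, m=1):
    """One LU decomposition plus m forward/backward substitution pairs."""
    return lu_cost(n) + m * n**2

def matvec_cost(n):
    """Products of one matrix-vector multiplication."""
    return n**2
```

The dominant term is the cubic factorization cost, which is why reusing ("freezing") a single factorized operator over several steps pays off on large systems.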
According to the last considerations, we calculate the efficiency indices of methods CJST5, NM7, S7, SA6 and WF4. In the case of CJST5, for each iteration, we evaluate F three times and one first-order divided difference, from which the number of functional evaluations, and therefore the index I, is obtained. The indices for the mentioned methods are also calculated and shown in Table 1.

Table 1.
Efficiency indices for different methods.
In Table 2, we present the indices of schemes NM7, SA6, S7, WF4 and CJST5. In it, the number of functional evaluations is denoted by d, the number of linear systems sharing the same matrix of coefficients is denoted by m, and the number of matrix–vector products is also given. Then, in the case of CJST5, the functional evaluations needed at each iteration come from evaluating the function F three times and one first-order divided difference. In addition, we must solve three linear systems with the same coefficient matrix (that is, m = 3). Thus, the value of C for CJST5 is

Table 2.
Computational cost of the procedures.
Analogously, we obtain the indices of the other methods. In Figure 1, we show the computational efficiency index of the different methods for system sizes from 5 to 80. The best index corresponds to our proposed scheme.

Figure 1.
Computational efficiency index C for several sizes of the system.
5. Numerical Examples
We begin this section by checking the performance of the new method on the system resulting from the discretization of Fisher's partial differential equation. Thereafter, we compare its behavior with that of other known methods on some academic problems. For the computations, we have used MATLAB R2015a (MathWorks, Natick, MA, USA) with variable precision arithmetic with 1000 digits of mantissa, on a computer with an Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz (four cores) and 16 GB of RAM.
We use two estimations of the theoretical order of convergence p: the computational order of convergence (COC), introduced by Jay [14], with the expression
p ≈ ln(||x^(k+1) − α|| / ||x^(k) − α||) / ln(||x^(k) − α|| / ||x^(k−1) − α||),
and the approximated computational order of convergence (ACOC), defined by Cordero and Torregrosa in [15],
p ≈ ln(||x^(k+1) − x^(k)|| / ||x^(k) − x^(k−1)||) / ln(||x^(k) − x^(k−1)|| / ||x^(k−1) − x^(k−2)||).
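Both estimations can be computed from the stored iterates; the following sketch (ours) implements the two formulas, with the exact solution needed only for the COC:

```python
import numpy as np

def coc(iterates, alpha):
    """Computational order of convergence (Jay): needs the exact solution alpha."""
    e = [np.linalg.norm(np.subtract(x, alpha)) for x in iterates[-3:]]
    return np.log(e[2] / e[1]) / np.log(e[1] / e[0])

def acoc(iterates):
    """Approximated COC (Cordero-Torregrosa): uses only consecutive differences."""
    d = [np.linalg.norm(np.subtract(iterates[k + 1], iterates[k]))
         for k in (-4, -3, -2)]
    return np.log(d[2] / d[1]) / np.log(d[1] / d[0])
```

For a quadratically convergent sequence, both estimations approach 2 as the iterates improve.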
Example 1.
Fisher’s equation,
was proposed by Fisher in [16] to model the diffusion process in population dynamics. In it, D is the diffusion constant, r is the growth rate of the species and p is the carrying capacity. Lately, this formulation has proven to be fruitful for many other problems, such as genetics, economy or wave propagation.
Now, we study a particular case of this equation, with fixed values of the parameters, a bounded spatial interval and null boundary conditions.
We transform Example 1 into a set of nonlinear systems by applying an implicit finite-difference method, providing the estimated solution at each instant from the estimated one at the previous instant. The spatial step and the temporal step are selected according to the number of subintervals in x and t, respectively, and the final instant. Therefore, a grid of the domain with the resulting points is selected:
Our purpose is to estimate the solution of problem (12) at these points by solving as many nonlinear systems as temporal nodes there are. For this, we use the following finite-difference approximations of the derivatives:
By denoting in this way the approximation of the solution at each grid point and replacing it in Example 1, we get the system
whose unknowns are the approximations of the solution at each spatial node for a fixed temporal instant. Let us remark that, for solving this system, the knowledge of the solution at the previous instant is required.
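As an illustration of this construction, the sketch below assumes our own setting (unit coefficients D = r = p = 1, unit spatial interval, sine initial profile, all hypothetical); it builds the residual of one implicit time step and solves the resulting nonlinear system with a plain Newton iteration whose Jacobian is approximated by finite differences (the paper instead applies CJST5 to these systems):

```python
import numpy as np

def fisher_residual(u, u_prev, h, k, D=1.0, r=1.0, p=1.0):
    """Residual F(u) = 0 of one backward-Euler step for Fisher's equation
    u_t = D u_xx + r u (1 - u/p), central differences in space,
    null boundary conditions (assumed illustrative setting)."""
    up = np.concatenate(([0.0], u, [0.0]))            # null boundary values
    uxx = (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2  # second order in space
    return (u - u_prev) / k - D * uxx - r * u * (1.0 - u / p)

def advance(u_prev, h, k, tol=1e-12, max_iter=50):
    """Solve the nonlinear system of one time step with a plain Newton
    iteration whose Jacobian is approximated by one-sided differences."""
    u = u_prev.copy()
    n = u.size
    for _ in range(max_iter):
        Fu = fisher_residual(u, u_prev, h, k)
        J = np.empty((n, n))
        eps = 1e-8
        for j in range(n):
            e = np.zeros(n)
            e[j] = eps
            J[:, j] = (fisher_residual(u + e, u_prev, h, k) - Fu) / eps
        du = np.linalg.solve(J, Fu)
        u = u - du
        if np.linalg.norm(du) < tol:
            break
    return u
```

Advancing in time amounts to calling this solver once per temporal node, feeding each solution as the previous-instant data of the next system.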
Let us observe (Table 3) that the results improve when the temporal step is smaller. In this case, the COC is not a good estimation of the theoretical order of convergence. In Figure 2, we show the approximated solution of the problem obtained by using method CJST5.

Table 3.
Results for Fisher's equation obtained by CJST5 with different temporal steps.

Figure 2.
Approximated solution of Example 1.
In the rest of the examples, we compare the performance of the proposed method with the schemes presented in the Introduction, as well as with the Newton-type method that replaces the Jacobian matrix by a divided difference operator, that is, Samanskii's scheme (see [6]).
Example 2.
Let us define the nonlinear system
We use, in this example, a starting estimation close to the solution. Table 4 shows the residuals for the first iterations, as well as the ACOC and COC. We observe that the COC index is better than the corresponding ACOC of the other methods. In addition, the residual values are better than or similar to those of the S7 and NM7 methods, both of them of order seven.
Example 3.
We consider now

Table 4.
Numerical results for Example 2.
The numerical results, for the chosen initial estimation, size of the system and solution, are displayed in Table 5. We show the same information as in the previous example.
Example 4.
The third example is given by the system:

Table 5.
Numerical results for Example 3.
By using a starting guess close to its solution, we obtain the results appearing in Table 6.

Table 6.
Numerical results for Example 4.
The different methods give us the expected results, according to their order of convergence.
Example 5.
Finally, the last example that we consider is:
By using an initial estimation close to its solution, we obtain the numerical results displayed in Table 7.

Table 7.
Numerical results for Example 5.
It can be observed in Table 5, Table 6 and Table 7 that, for the proposed academic problems, the introduced method (CJST5) shows good performance, comparable with that of higher-order methods. Of course, the worst results are those obtained by Samanskii's method, but it has been included because it is the Jacobian-free version of Newton's scheme and also the first step of our proposed method. Let us also remark that, when only three iterations are calculated, the COC index gives more reliable information than the ACOC in all of the examples.
6. Conclusions
In this paper, we design a family of Jacobian-free iterative methods with fourth-order convergence for solving nonlinear systems, one of whose elements has order five. The comparison between the proposed method and other known ones in terms of the efficiency index and the computational efficiency index shows that our method is more efficient than the other ones. In addition, in some cases its error bounds are smaller for the same number of iterations. Thus, our proposal is competitive, especially for large systems.
Author Contributions
The individual contributions of the authors are as follows: conceptualization, J.R.T.; writing—original draft preparation, C.J. and E.S.; validation, A.C. and J.R.T.; formal analysis, A.C.; numerical experiments, C.J. and E.S.
Funding
This research was partially supported by the Spanish Ministerio de Ciencia, Innovación y Universidades, under grants PGC2018-095896-B-C22, PGC2018-094889-B-I00 and TEC2016-79884-C2-2-R, and by the Generalitat Valenciana under grant PROMETEO/2016/089.
Acknowledgments
The authors would like to thank the anonymous reviewers for their useful comments and suggestions that have improved the final version of this manuscript.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
References
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Kelley, C.T. Iterative Methods for Linear and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1995. [Google Scholar]
- Petković, M.S.; Neta, B.; Petković, L.D.; Dz̆unić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: New York, NY, USA, 2013. [Google Scholar]
- Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; SEMA SIMAI Springer Series; Springer International Publishing: Cham, Switzerland, 2016; Volume 10. [Google Scholar]
- Samanskii, V. On a modification of the Newton method. Ukrain. Mat. 1967, 19, 133–138. [Google Scholar]
- Wang, X.; Fang, X. Two Efficient Derivative-Free Iterative Methods for Solving Nonlinear Systems. Algorithms 2016, 9, 14. [Google Scholar] [CrossRef]
- Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comp. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
- Narang, M.; Bhatia, S.; Kanwar, V. New efficient derivative free family of seventh-order methods for solving systems of nonlinear equations. Numer. Algorithms 2017, 76, 283–307. [Google Scholar] [CrossRef]
- Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algorithms 2015, 70, 545–558. [Google Scholar] [CrossRef]
- Ostrowski, A.M. Solution of Equations and Systems of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
- Hermite, C. Sur la formule d’interpolation de Lagrange. Reine Angew. Math. 1878, 84, 70–79. [Google Scholar] [CrossRef]
- Jay, L.O. A note on Q-order of convergence. BIT Numer. Math. 2001, 41, 422–429. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Fisher, R.A. The wave of advance of advantageous genes. Ann. Eugen. 1937, 7, 353–369. [Google Scholar] [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).