Abstract
This article introduces a novel two-step fifth-order Jacobian-free iterative method aimed at efficiently solving systems of nonlinear equations. The method leverages the benefits of Jacobian-free approaches, utilizing divided differences to circumvent the computationally intensive calculation of Jacobian matrices. This adaptation significantly reduces computational overhead and simplifies the implementation process while maintaining high convergence rates. We demonstrate that this method achieves fifth-order convergence under specific parameter settings, with broad applicability across various types of nonlinear systems. The effectiveness of the proposed method is validated through a series of numerical experiments that confirm its superior performance in terms of accuracy and computational efficiency compared to existing methods.
MSC:
65H10
1. Introduction
Let F(x) = 0 be a nonlinear system of equations, with F: D ⊆ ℝⁿ → ℝⁿ, and let the functions fᵢ, for i = 1, …, n, be the coordinate components of F, expressed as F(x) = (f₁(x), …, fₙ(x))ᵀ. Solving nonlinear systems is generally challenging, and solutions are typically found by linearizing the problem or employing a fixed-point iteration function, leading to an iterative fixed-point method. Among the various root-finding techniques for nonlinear systems, Newton’s method is the best known, and follows the second-order iterative procedure
x⁽ᵏ⁺¹⁾ = x⁽ᵏ⁾ − [F′(x⁽ᵏ⁾)]⁻¹ F(x⁽ᵏ⁾), k = 0, 1, 2, …,
F′(x⁽ᵏ⁾) being the Jacobian matrix of F at the k-th iterate.
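As a point of reference, the iteration above can be sketched in a few lines. This is a minimal double-precision illustration (not the arbitrary-precision setting used in the numerical section); the test system and starting point are chosen for illustration only:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method: x(k+1) = x(k) - [F'(x(k))]^(-1) F(x(k))."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        # Solve the linear system F'(x) d = F(x) instead of inverting the Jacobian
        d = np.linalg.solve(J(x), F(x))
        x = x - d
        if np.linalg.norm(d) < tol:
            return x, k
    return x, max_iter

# Illustrative system: x^2 + y^2 = 1, x = y, with root (1/sqrt(2), 1/sqrt(2))
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root, iters = newton_system(F, J, [1.0, 0.5])
```

Note that each step solves a linear system with the Jacobian as coefficient matrix; this is precisely the computation that the Jacobian-free schemes discussed below avoid.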
Recently, many researchers have focused on developing iterative methods that outperform Newton’s method in terms of both efficiency and order of convergence. Numerous approaches need the computation of the Jacobian at different points along each iteration. Nevertheless, calculating the Jacobian poses significant challenges, particularly in high-dimensional problems, where its computation can be costly or even impractical (see, for example, [1,2,3,4,5] and the references therein). In some instances, the Jacobian may not exist at all. To address this issue, alternative approaches have been proposed in the literature, such as replacing the Jacobian matrix with a divided difference operator. One of the simplest alternatives is the multidimensional version of Steffensen’s method, attributed to Samanskii [6,7], which substitutes the Jacobian in Newton’s procedure with a first-order divided difference operator:
x⁽ᵏ⁺¹⁾ = x⁽ᵏ⁾ − [w⁽ᵏ⁾, x⁽ᵏ⁾; F]⁻¹ F(x⁽ᵏ⁾), k = 0, 1, 2, …,
where w⁽ᵏ⁾ = x⁽ᵏ⁾ + F(x⁽ᵏ⁾), and [·, ·; F] is the operator of divided differences related to F [8], satisfying [x, y; F](x − y) = F(x) − F(y).
The implementation of this divided difference is carried out, for all 1 ≤ i, j ≤ n, as
[x, y; F]ᵢⱼ = ( fᵢ(x₁, …, xⱼ, yⱼ₊₁, …, yₙ) − fᵢ(x₁, …, xⱼ₋₁, yⱼ, …, yₙ) ) / (xⱼ − yⱼ),
where x, y ∈ ℝⁿ, as defined by Ortega et al. in [8]. This substitution retains the second order of convergence while bypassing the calculation of F′(x⁽ᵏ⁾). However, this replacement dramatically reduces the set of initial estimations that converge, compared with Newton’s scheme. This implies that the initial estimation used in the iterative procedure must be closer to the (unknown) solution in order to converge. For this reason, this Jacobian-free scheme is known to be an unstable variant of Newton’s procedure.
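The componentwise divided difference and the resulting multidimensional Steffensen (Samanskii) step can be sketched as follows; this is an illustrative double-precision implementation, and the test system below is an assumption chosen only to exercise the code:

```python
import numpy as np

def divided_difference(F, x, y):
    """First-order divided difference operator [x, y; F]: column j is built
    from mixed points so that [x, y; F](x - y) = F(x) - F(y) holds exactly."""
    n = len(x)
    D = np.empty((n, n))
    for j in range(n):
        upper = np.concatenate((x[:j + 1], y[j + 1:]))  # x_1..x_j, y_{j+1}..y_n
        lower = np.concatenate((x[:j], y[j:]))          # x_1..x_{j-1}, y_j..y_n
        D[:, j] = (F(upper) - F(lower)) / (x[j] - y[j])
    return D

def samanskii(F, x0, tol=1e-12, max_iter=50):
    """Multidimensional Steffensen method: Newton's step with the Jacobian
    replaced by the divided difference [x + F(x), x; F]."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        w = x + F(x)
        d = np.linalg.solve(divided_difference(F, w, x), F(x))
        x = x - d
        if np.linalg.norm(d) < tol:
            return x, k
    return x, max_iter
```

Note that if any component of F(x) vanishes before convergence, a denominator in `divided_difference` becomes zero; this breakdown is one source of the divergence cases reported later in the numerical section.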
Although both the Samanskii and Newton methods exhibit quadratic convergence, it has been shown that the Jacobian-free scheme is less stable than Newton’s method, with stability depending more strongly on the initial guess. This has been thoroughly analyzed in [9,10,11,12,13], where it was found that, in the scalar case, derivative-free iterative methods become more stable for small real values of the parameter defining the divided difference node.
Our aim in this manuscript is to propose a very efficient and stable class of Jacobian-free iterative methods. Its convergence order is at least four, and the class contains a fifth-order member. The main achievement of this construction is the combination of scalar accelerators, which first appeared in [14], with the vectorial point-wise power weight function first introduced in [15]. The first technique allows faster convergence with low computational cost, while the second improves the stability of Jacobian-free methods, allowing convergence from initial estimations that are not as close to the solution as Samanskii’s scheme requires. These techniques are presented in the following state of the art, which reviews some of the advances in this specific area of research over the last few years.
However, substituting the Jacobian with divided differences can lower the convergence order of some iterative methods; for example, the multidimensional version of Ostrowski’s fourth-order method (see [16]),
achieves only cubic convergence if the Jacobian matrix is replaced by a divided difference operator, as follows:
Despite the reduction in performance observed in some Jacobian-free methods, it is important to highlight that there are iterative methods whose iterative expressions can be successfully modified to preserve the order of convergence, even after fully transitioning to Jacobian-free formulations. This is the case for the combination of the Traub–Steffensen family of methods with a second step based on divided difference operators, proposed by Behl et al. in [17]:
where . The iterative schemes have a fourth order of convergence for every , ; for our purposes, we choose , and the resulting method is called Traub-Ste. Let us remark that different approximations of the Jacobian matrix have been necessary to retain the original order of convergence, increasing the computational cost and, therefore, diminishing the method’s efficiency.
Other fourth-order methods also lose their convergence order when the Jacobian is replaced with divided differences, such as Jarratt’s scheme [18], Sharma’s method [19], Montazeri’s method [20], Ostrowski’s vectorial extension ([16]), and Sharma–Arora’s fifth-order scheme [21]. In all these cases, the Jacobian-free versions of the methods drop to lower orders of convergence. Nevertheless, Amiri et al. [15] demonstrated that using a specialized divided difference operator of the form , where , , as an approximation of the Jacobian matrix may preserve the convergence order. By selecting an appropriate parameter m, the original fourth-order convergence of these methods can be maintained. Moreover, the appropriate value of m for each order of convergence was conjectured.
For the sake of comparison with our proposed methods, we now consider several efficient vectorial iterative schemes from the literature, transformed into their Jacobian-free versions following the idea of Amiri et al. [15]. This technique allows us to design Jacobian-free partners of vectorial schemes with minimal computational cost. In the efficiency and numerical sections, we compare these Jacobian-free schemes with our proposed procedures. The first is the vectorial extension of Ostrowski’s scheme (see [16,22], for instance),
whose Jacobian-free version, obtained by substituting the Jacobian matrix with the divided difference operator (following the approach of Amiri et al. [15], ), is
where , and (with ), . We denote this method as .
Another fourth-order method proposed by Sharma in [19] using Jacobian matrices is
to which we apply the same procedure of Amiri et al. used for (2), obtaining its Jacobian-free partner
which we denote by , and , , first appeared in [15].
We finish with a sixth-order scheme [19], which is obtained by adding a step to the previous method (4),
Similarly, its Jacobian-free version was constructed in [15] and denoted by ,
where, again, and , . It should be noted that in schemes (3), (5) and (7), a quadratic element-by-element power of was employed in the divided differences. This adjustment was essential for preserving the convergence order of the original methods (see [15]). In our proposal, however, the order of convergence of the original scheme is preserved while avoiding the computational cost of this element-by-element power.
Therefore, we aim to avoid the calculation of Jacobian matrices, which can be a bottleneck in terms of computational efficiency, especially for large systems. To achieve this, we present a two-step fifth-order efficient Jacobian-free iterative method that addresses these challenges by eliminating the need for direct Jacobian computation. Our approach is grounded in the use of divided differences and scalar accelerators recently developed in some very efficient schemes (using Jacobian matrices) [14,23]. This not only reduces the computational costs, but also accelerates the convergence with simpler iterative expressions. The proposed method’s design and theoretical underpinnings are discussed, emphasizing its ability to achieve high-order convergence without the Jacobian calculations typically required.
Our proposal is a variant of the scheme proposed by Singh, Sharma and Kumar in [14],
where is a scalar accelerator that can be interpreted as , and , respectively. The real parameters and make the order of convergence of the method five if = , the order four if and is arbitrary, and the order two if and is arbitrary. It is known that in many practical applications, computing the Jacobian matrix can be very resource-intensive and time-consuming; therefore, Jacobian-free methods are often preferred.
Modifying scheme (8) by replacing the Jacobian matrices with specific divided differences, we obtain the following family of novel Jacobian-free methods:
where , , and . From now on, we refer to our modified scheme as . This class of iterative schemes uses scalar accelerators to achieve high-order convergence with low computational effort, and it also attains good stability properties by using vectorial point-wise weight functions in the first-order divided differences. As a direct consequence, these properties yield higher efficiency than the previously mentioned Jacobian-free methods, as well as good numerical results on nonlinear systems of different sizes.
In Section 2, we demonstrate the theoretical order of convergence of the new parametric class of Jacobian-free iterative methods using scalar accelerators, depending on the values of the parameters involved. Subsequently, in Section 3 we carry out an efficiency analysis in which we compare our proposed method with the Jacobian-free versions of others previously cited in the literature, with excellent results. Finally, Section 4 presents practical results of these iterative methods applied to different nonlinear systems of equations, confirming the theoretical results for both small and large academic and applied systems.
2. Convergence of New Jacobian-Free Iterative Method
The following result shows the error equations arising from method (9) for the possible parameter values, thereby demonstrating that the convergence results of the family (8) hold.
Theorem 1.
Let F be a sufficiently differentiable function defined in an open convex neighborhood Ω of ξ, a solution of F(x) = 0. Let us also consider an initial seed x⁽⁰⁾ near enough to ξ, and let F′ be continuous and invertible at ξ. Then, the parametric class of iterative schemes presented in (9) locally converges for all values of the parameters, with the order of convergence given by the following:
- (a)
- Fifth-order convergence if , the corresponding error equation being
- (b)
- Fourth-order convergence if , the corresponding error equation being
- (c)
- Second-order convergence if , the corresponding error equation being
where , , and , are combinations of , and e⁽ᵏ⁾ = x⁽ᵏ⁾ − ξ denotes the error at iteration k.
Proof.
Let e⁽ᵏ⁾ = x⁽ᵏ⁾ − ξ be the error at the k-th iteration, where ξ is a solution of F(x) = 0. Then, expanding F(x⁽ᵏ⁾) in a neighborhood of ξ, we have
Then, based on the Genocchi–Hermite formula (see [8]), we have
Taking into account that , and performing a series expansion up to the fifth order, we obtain
By combining Formulas (13) and (15) in the Taylor series expansion (14), we obtain the following:
where
Next, we expand the inverse of the divided difference operator , forcing it to satisfy ,
where
By using the Taylor expansions of defined in (13) and obtained in (17), we obtain the error equation for the first step:
where
Now, we find the error equation for the second step,
from which arises
Then, following the process seen in (14), the expansion of the second difference operator is given by
which can be expressed as
where
Again, we obtain the following in a similar way to how it is obtained in (17):
Now, we proceed to calculate the expansion of , obtaining
According to Theorem 1 in [14], proven by Singh, Sharma and Kumar, we have
where
and
Using (19), we obtain
After substituting (29) in (28) when performing the quotient, we obtain the following:
Finally, fitting (19), (27) and the last result obtained in (30), we have
In this last result, it is easy to observe that when is different from one (), the iterative method has an order of convergence equal to two, since the term would not cancel out. However, when , both terms with and cancel out while the term is as follows
Let us remember that , so that
but since , then , so . Also, . Given that , replacing in Equation (32), we obtain
resulting in the error equation
From the above, it is clear that if but , only order four is reached. □
3. Efficiency Analysis
We have demonstrated the order of convergence of the proposed class of iterative methods for the different values of the parameters . In this section, we perform a computational cost study, considering the effort of solving the linear systems involved at each iteration together with the other computational costs (functional evaluations, number of products/quotients, etc.), not only for the proposed class but also for some of the Jacobian-free schemes presented in the introductory section.
To this end, recall that the number of operations (products/quotients) needed for solving an n × n linear system by Gaussian elimination is (1/3)n³ + n² − (1/3)n.
However, if other linear systems with the same coefficient matrix are solved, then the cost increases by only n² operations each. For each divided difference, we calculate n² quotients; each functional evaluation of F at a given point costs n scalar evaluations; for each evaluation of a divided difference, there are scalar evaluations. Moreover, a matrix–vector product needs n² products. Based on the above, the computational cost of each method appears in Table 1. From family (9), which we call , we consider the fifth-order member and its fourth-order partner . They have the same computational cost, which is reflected, along with that of the other schemes, in Table 1.
Table 1.
Computational effort of new and comparison schemes.
The results presented in Table 1 show that the method with the highest computational cost is Traub-Ste, while those with intermediate costs are and , with the latter being slightly better. The ones that offer the lowest cost are and , although the latter has sixth-order convergence, which is a factor to consider when obtaining the efficiency index.
In order to show more clearly how the computational cost C influences the efficiency index I = p^{1/C}, where p is the convergence order of the corresponding scheme (see [24]), we present Figure 1 and Figure 2 for different sizes of the nonlinear system to be solved.
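The kind of comparison behind Figure 1 and Figure 2 can be reproduced with a short script. The linear-solve cost below is the classical count of products/quotients for one n × n system; the per-method costs `I5`/`I4` are illustrative assumptions (a fifth-order method reusing one LU factorization for a second right-hand side vs. a fourth-order method solving one system), not the exact entries of Table 1:

```python
def lu_solve_cost(n):
    """Products/quotients for solving one n x n linear system (LU + substitutions)."""
    return n**3 / 3 + n**2 - n / 3

def extra_rhs_cost(n):
    """Each additional right-hand side sharing the LU factors costs n^2 operations."""
    return n**2

def efficiency_index(p, cost):
    """Classical efficiency index I = p**(1/C) for order p and cost C per iteration."""
    return p ** (1.0 / cost)

# Hypothetical cost models for two methods, for plotting I against n:
def I5(n):
    return efficiency_index(5, lu_solve_cost(n) + extra_rhs_cost(n))

def I4(n):
    return efficiency_index(4, lu_solve_cost(n))
```

Plotting `I5(n)` and `I4(n)` over a range of n reproduces the typical crossover behavior discussed above: which method wins depends on the system size.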
Figure 1.
I indices for and comparison methods.
Figure 2.
I indices for and comparison schemes.
In Figure 1, we observe that for systems with dimensions , the proposed class of vectorial iterative methods (9) shows better computational efficiency than the other schemes, for both members of the family. On the other hand, as the system grows to sizes , it becomes apparent that methods and yield better results. Let us notice that for sizes of , our method has better efficiency than , and for sizes , it has better efficiency compared to (see Figure 2).
In the next section, we check the theoretical convergence order of our proposed method and assess the efficiency of different schemes in nonlinear systems of various sizes.
4. Numerical Results
In this section, we test numerically whether or not the theoretical order holds for practical purposes in the proposed Jacobian-free class (9). Moreover, we compare it with the methods that appeared in the efficiency section to show their accuracy and computational performance.
Numerical results were obtained with MATLAB R2022b, using 8000 digits in variable precision arithmetic, on an AMD A12-9720P RADEON R7 processor (12 compute cores, 4C+8G) at 2.70 GHz with 8 GB of RAM. The results are shown in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13, including the following information, where all norms are Euclidean:
- k: number of iterations needed (“-” appears when the scheme does not converge or needs more iterations than the maximum allowed).
- : obtained solution.
- Cpu-time: average time in seconds required by the iterative method to reach the solution of the problem when executed ten times.
- ACOC: approximated computational order of convergence, which first appeared in [25] (if it is not stable, then “-” appears in the table).
- .
- . If or is very far from zero, or we reach Inf or NaN, then “-” appears in the table.
Regarding the stopping criterion, the iterative process ends when one of the following conditions is fulfilled:
- (i)
- ;
- (ii)
- ;
- (iii)
- Fifty iterations are reached.
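The stopping rules above, together with the ACOC estimate from [25], can be sketched as follows. This is a double-precision sketch (the experiments themselves use 8000-digit arithmetic), and the Newton step at the end is only an illustrative plug-in, not one of the compared methods:

```python
import numpy as np

def acoc(step_norms):
    """Approximated computational order of convergence (ACOC):
    rho_k = ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}), with e_k = ||x^(k+1) - x^(k)||."""
    e = np.asarray(step_norms, dtype=float)
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

def iterate(step, F, x0, tol=1e-10, max_iter=50):
    """Run one iterative method until (i) ||x^(k+1) - x^(k)|| < tol,
    (ii) ||F(x^(k+1))|| < tol, or (iii) max_iter iterations are reached."""
    x = np.asarray(x0, dtype=float)
    step_norms = []
    for k in range(max_iter):
        x_new = step(F, x)
        step_norms.append(np.linalg.norm(x_new - x))
        x = x_new
        if step_norms[-1] < tol or np.linalg.norm(F(x)) < tol:
            break
    return x, len(step_norms), step_norms

# Illustrative plug-in step: Newton's method for F(x, y) = (x^2 - 2, y^2 - 3)
def newton_step(F, x):
    J = np.array([[2.0 * x[0], 0.0], [0.0, 2.0 * x[1]]])
    return x - np.linalg.solve(J, F(x))
```

For a sequence with quadratic error decay, such as e_k = 10^(−2^k), the ACOC formula returns values equal to 2, matching the theoretical order.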
4.1. Academic Problems
We include a wide variety of nonlinear systems of different dimensions, ranging from 2 to 25. The diversity of expressions present in these systems demonstrates the versatility and efficiency of the methods applied.
Example 1.
The first case we present is a system , where
the solution for which is .
Table 2.
Results for function , using as a seed .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 7 | 4.00 | 1.108 × | 4.624 × | 191.7242 | ||
| 5 | 4.97 | 4.904 × | 3.995 × | 143.1027 | ||
| Traub-Ste | 5 | 4.00 | 1.241 × | 1.517 × | 227.8747 | |
| - | - | - | - | - | - | |
| 6 | 4.00 | 7.338 × | 3.015 × | 163.9808 | ||
| 5 | 5.96 | 1.336 × | 7.876 × | 142.6446 |
In Table 2, the method reached the maximum number of iterations without converging to the solution, while the most notable scheme is in almost all aspects, such as errors, iterations and computational time. On the other hand, although exhibits a lower computational time than , the latter needs one additional iteration, takes a relatively similar time, and achieves better errors, suggesting that it may even be superior in terms of efficiency.
Example 2.
The second case we present is a system , where
being its solution.
Table 3.
Numerical results for , .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 8 | 3.99 | 2.794 × | 4.092 × | 73.3126 | ||
| 8 | 5.00 | 7.534 × | 2.162 × | 72.6143 | ||
| Traub-Ste | 16 | 4.00 | 9.389 × | 3.459 × | 317.9304 | |
| 15 | 4.00 | 6.341 × | 1.641 × | 139.3808 | ||
| 7 | 4.00 | 8.510 × | 3.949 × | 76.9114 | ||
| 6 | 6.02 | 4.223 × | 8.063 × | 79.7898 |
In Table 3, we observe that our proposed schemes yield the best computational times, with the method standing out for having the smallest error norm among all schemes. The method, while showing good overall performance in terms of errors, is more computationally expensive as it takes one iteration longer to reach the solution to the system and slightly more time compared to methods, indicating that the program takes more time to generate each iteration.
Example 3.
The third case we present is a system , where
where the solution is .
The corresponding numerical results appear in Table 4, where the method shows the best errors, while the shortest computational time is achieved by the method.
Table 4.
Numerical results for , .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 4 | 4.00 | 1.305 × | 2.980 × | 12.8166 | ||
| 4 | 5.00 | 4.997 × | 1.143 × | 12.6105 | ||
| Traub-Ste | 4 | 4.00 | 9.461 × | 1.372 × | 18.9457 | |
| 4 | 4.00 | 1.420 × | 2.882 × | 12.4768 | ||
| 4 | 4.00 | 3.440 × | 1.007 × | 12.8707 | ||
| 3 | 6.07 | 1.482 × | 3.012 × | 10.5536 |
Example 4.
The fourth case we present is a system , where
the solution for which .
Table 5.
Numerical results for , .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 4 | 3.99 | 7.838 × | 4.801 × | 14.6365 | ||
| 4 | 5.00 | 2.864 × | 2.514 × | 14.6532 | ||
| Traub-Ste | 4 | 4.00 | 8.656 × | 3.124 × | 21.7138 | |
| 4 | 4.00 | 2.268 × | 6.443 × | 14.0140 | ||
| 4 | 4.00 | 2.594 × | 9.227 × | 14.8093 | ||
| 3 | 5.32 | 6.593 × | 7.254 × | 12.0667 |
The results in Table 5 are similar to those in Table 4. They are very balanced, with the method presenting the best errors, while the shortest computational time is achieved by the method. However, this last one requires more effort per iteration: despite performing one fewer iteration than the other iterative methods, its execution time is very similar.
Example 5.
The fifth case we present is a system , where
its solution being .
Table 6.
Numerical results for , .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 3 | 4.03 | 1.343 × | 1.076 × | 58.6649 | ||
| 3 | 5.03 | 5.715 × | 1.098 × | 58.6010 | ||
| Traub-Ste | 4 | 4.00 | 2.252 × | 1.397 × | 138.3238 | |
| 4 | 4.00 | 1.317 × | 1.705 × | 76.6871 | ||
| 3 | 4.00 | 1.490 × | 2.175 × | 64.3101 | ||
| 3 | 6.01 | 1.565 × | 4.782 × | 69.4223 |
For Table 6, we note that the best computational times are obtained by the methods . On the other hand, the best errors are obtained by the scheme, which has a competitive computational time considering it requires four iterations, similar to Traub-Ste. The latter has been shown to be the one that requires the most time to converge.
Example 6.
The sixth case we present is a system , where
and there exist two solutions, and .
Table 7.
Numerical results for .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 6 | 4.00 | 5.425 × | 5.270 × | 20.2517 | ||
| 5 | 4.97 | 5.922 × | 1.965 × | 16.9191 | ||
| Traub-Ste | 20 | 4.00 | 9.536 × | 4.618 × | 154.5691 | |
| 5 | 4.00 | 1.050 × | 2.164 × | 16.5869 | ||
| 10 | 4.01 | 3.127 × | 1.500 × | 51.2143 | ||
| 17 | 5.90 | 6.480 × | 2.076 × | 105.4212 |
In Table 7, the best errors were obtained by and Traub-Ste, which converged to different solutions, while in terms of performance, and appeared to be better.
Example 7.
The seventh case we present is a system , where
and .
Table 8.
Numerical results for , .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 5 | 4.00 | 5.203 × | 6.322 × | 3.6588 | ||
| 4 | 5.00 | 5.282 × | 6.419 × | 3.0030 | ||
| Traub-Ste | 6 | 4.00 | 6.615 × | 7.658 × | 4.6891 | |
| - | - | - | - | - | - | |
| 6 | 4.00 | 5.613 × | 3.787 × | 3.7011 | ||
| 7 | 5.92 | 2.032 × | 5.723 × | 4.4699 |
In Table 8, the method could not converge to the solution because it reached the maximum number of iterations, whereas the and methods showed better overall behavior.
4.2. Special Problems
In the following cases, we address problems of interest where we demonstrate the advantages of our method applied to three types of systems: a non-differentiable system, a high-dimensional system, and a system with real-world applications.
We present a non-differentiable system of size with , which includes both transcendental and algebraic functions, such as the absolute value. These characteristics justify the use of derivative-free numerical schemes, as proposed in this work, to obtain the desired results. To demonstrate the effectiveness and robustness of our method, we solve the problem using five different initial estimates, thereby showcasing its generally good performance when addressing such problems.
Example 8.
The eighth case we present is a system , where
with solutions and .
Table 9.
Numerical results for , .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 4 | 4.00 | 1.139 × | 9.420 × | 2.9146 | ||
| 4 | 4.20 | 1.415 × | 2.272 × | 2.8907 | ||
| Traub-Ste | 4 | 4.11 | 4.417 × | 1.391 × | 3.1954 | |
| 6 | 4.00 | 2.520 × | 1.241 × | 3.4855 | ||
| 7 | 4.02 | 1.401 × | 3.093 × | 4.0501 | ||
| 6 | 6.27 | 4.634 × | 7.902 × | 3.6475 |
In Table 9, the method stands out with the lowest computational time, being more efficient than the others, despite requiring a similar number of iterations and showing a convergence order of , which is close to that of the other methods. The Traub-Ste method is less efficient, with a computational time of but with larger errors. is competitive, only falling slightly behind in computational time. Methods and are the most expensive in terms of computational time, requiring seven and six iterations, respectively, with the highest time recorded for . Despite their higher orders of convergence, both methods show lower overall efficiency compared to the and methods.
Table 10.
Numerical results for , .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 6 | 4.00 | 1.019 × | 6.029 × | 4.1403 | ||
| 5 | 4.05 | 2.474 × | 4.929 × | 3.3999 | ||
| Traub-Ste | 6 | 4.00 | 8.946 × | 2.341 × | 4.4433 | |
| 8 | 4.00 | 2.050 × | 5.429 × | 4.4246 | ||
| - | - | - | - | - | - | |
| - | - | - | - | - | - |
In Table 10, method stands out as the one with the lowest computational time, requiring fewer iterations compared to the other methods and showing a convergence order of 4.05. The Traub-Ste method, despite having the same convergence order as , shows a slightly higher computational time and larger errors in the difference between iterates. , which requires more iterations, falls only slightly behind in computational time but remains competitive overall. Methods and do not converge to the solution, as division by zero occurred when calculating the divided differences.
Table 11.
Numerical results for , .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 10 | 4.00 | 6.679 × | 1.113 × | 6.9568 | ||
| 8 | 4.28 | 4.387 × | 3.113 × | 5.4126 | ||
| Traub-Ste | 15 | 4.00 | 4.978 × | 2.442 × | 10.6624 | |
| - | - | - | - | - | - | |
| - | - | - | - | - | - | |
| - | - | - | - | - | - |
Method stands out as the one with the lowest computational time, requiring fewer iterations compared to the other methods and showing a convergence order of . The Traub-Ste scheme, despite having the same convergence order as , shows a significantly higher computational time but better results in terms of errors. , while requiring more iterations than , still maintains a competitive time compared to Traub-Ste. Division by zero is the reason why the other methods could not reach the solution.
Table 12.
Numerical results for , .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 16 | 4.00 | 2.821 × | 3.543 × | 10.5084 | ||
| 6 | 4.19 | 8.959 × | 1.550 × | 4.2948 | ||
| Traub-Ste | - | - | - | - | - | - |
| - | - | - | - | - | ||
| - | - | - | - | - | - | |
| - | - | - | - | - | - |
In Table 12, it is observed that only the members of our proposed class converged, and they did so to different solutions. stood out as the most efficient in terms of time and iterations, while yielded the best errors. Method did not converge because it exceeded the maximum number of iterations, while the other methods could not reach the solution due to division by zero during the computation of divided differences.
Table 13.
Numerical results for , .
| Iterative Method | k | ACOC | ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ | ‖F(x⁽ᵏ⁺¹⁾)‖ | Cpu-Time |
|---|---|---|---|---|---|
| 10 | 4.00 | 2.063 × | 6.071 × | 6.8361 | ||
| 7 | 4.17 | 3.281 × | 6.327 × | 4.7535 | ||
| Traub-Ste | - | - | - | - | - | - |
| - | - | - | - | - | - | |
| - | - | - | - | - | - | |
| - | - | - | - | - | - |
The observations for Table 13 are the same as those in the previous one, but now method does not converge to the solution due to a division by zero, just like the other methods that do not converge.
Example 9.
Now, we work with a high-dimensional problem , where we use anonymous functions to obtain the results in a reasonable amount of time. We take two initial estimates for this problem. The ninth case we present is a system , where
whose solution is .
The results presented in Table 14 and Table 15 show that for both initial estimates, the best performances were obtained by the Traub-Ste method, followed closely by our proposed methods, especially . On the other hand, for both cases, the method failed to converge to a solution, reaching the maximum number of iterations, while and displayed the same behavior, as shown in Table 14.
Table 14.
Results for function , using as a seed .
Table 15.
Results for function , using as a seed .
Example 10.
In this problem, we focus on the comprehensive analysis of a nonlinear elliptic initial and boundary value problem, previously addressed in [26]. The same idea from [27] is adopted, where introducing a new term makes the problem non-differentiable. The partial differential equation we study next can represent a wide range of nonlinear phenomena present in physical, chemical, and biological systems. The applied problem revolves around the diffusion of nutrients in a biological substrate. In this context, and within the fields of agriculture and biotechnology, a two-dimensional substrate that simulates a cultivation area or a growth medium for microorganisms is considered. The tenth case is given by the partial differential equation
In this scenario, represents the concentration of nutrients at each point within the substrate. The equation reflects how nutrients diffuse within the substrate, while the term incorporates the interaction between nutrient concentration and the biochemical processes present in the substrate. Discretizing it with central differences, we obtain the expression
Then, following the process seen in [27] results in the nonlinear system
To solve the nonlinear system of Equations (36), we employ the iterative method . We generate an initial approximation of the exact solution . The technical specifications of the computer used to solve this case are identical to those employed for the academic problems. In this instance, we select .
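Since the exact discretized system is not reproduced above, the following sketch only illustrates the construction: it assumes a model equation of the form Δu = u² (a hypothetical stand-in for the nutrient-interaction term) with a constant Dirichlet boundary value, discretized by the 5-point central-difference stencil, and solves the resulting nonlinear system with a plain Newton iteration on a forward-difference Jacobian rather than with the paper's method:

```python
import numpy as np

def pde_residual(u, m, h, f, b=1.0):
    """Residual of the central-difference discretization of Delta u = f(u)
    on an m x m interior grid with constant boundary value b:
    (u[i+1,j] + u[i-1,j] + u[i,j+1] + u[i,j-1] - 4 u[i,j]) / h^2 - f(u[i,j])."""
    U = np.full((m + 2, m + 2), b)
    U[1:-1, 1:-1] = u.reshape(m, m)
    lap = (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2]
           - 4.0 * U[1:-1, 1:-1]) / h**2
    return (lap - f(U[1:-1, 1:-1])).ravel()

def solve_pde(m, h, f, b=1.0, tol=1e-10, max_iter=30):
    """Newton iteration with a forward-difference Jacobian (illustration only)."""
    n = m * m
    u = np.full(n, b)
    for _ in range(max_iter):
        r = pde_residual(u, m, h, f, b)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((n, n))
        eps = 1e-7
        for j in range(n):
            up = u.copy()
            up[j] += eps
            J[:, j] = (pde_residual(up, m, h, f, b) - r) / eps
        u = u - np.linalg.solve(J, r)
    return u
```

The approximate interior solution returned by `solve_pde` would then be reshaped to an m × m matrix and embedded in the grid bounded by the boundary conditions, as done for the actual problem below.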
By solving the system associated with this PDE, we present the solution of the system in the following matrix:
After five iterations, the obtained solution satisfies
and the norm of the nonlinear operator evaluated at the last iteration is
The approximate solution in , resized to a matrix for , and then embedded in the solution matrix within the grid bounded by the boundary conditions, is presented in the matrix above.
We can assert that the values obtained fall within the range . In the context of the posed problem, which involves the diffusion of nutrients in a two-dimensional biological substrate, these observations take on key importance. The concentration of nutrients follows a pattern where values are constrained between 0 and 1. This limitation in concentration suggests a biological equilibrium in the system, where absorption, diffusion, and biochemical reactions are in harmony. This implies that the system is stable and well regulated in terms of nutrient availability.
5. Conclusions
The fifth-order Jacobian-free iterative method with scalar accelerators developed in this study (along with its fourth-order partners) has proven to be an efficient tool for solving nonlinear systems of equations. It preserves the convergence order of the original scheme, despite eliminating Jacobian matrix computations, at low computational cost. The substitution of these matrices with divided differences not only reduces computational complexity but also facilitates implementation, maintaining high precision and efficiency. The numerical results highlight its good performance in non-differentiable problems, high-dimensional problems, and practical application problems. In future work, higher-order Jacobian-free schemes might be developed, focusing on their stability and efficiency.
Author Contributions
Conceptualization, J.R.T. and A.C.; software, A.R.-C.; methodology, J.G.M.; validation, J.G.M., A.C. and J.R.T.; investigation, A.C.; formal analysis, J.R.T.; resources, A.R.-C.; writing—original draft preparation, A.R.-C.; visualization, J.G.M.; writing—review and editing, J.R.T. and A.C.; supervision, J.R.T. and A.C. All authors have read and agreed to the published version of the manuscript.
Funding
Funded with Ayuda a Primeros Proyectos de Investigación (PAID-06-23), Vicerrectorado de Investigación de la Universitat Politècnica de València (UPV).
Data Availability Statement
All the necessary data are contained within this paper.
Acknowledgments
The authors thank the reviewers for their corrections and comments, which have helped to improve this document.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Jiang, Y.; Chai, T.; Chen, G. Output Feedback Based Adaptive Optimal Output Regulation for Continuous-Time Strict-Feedback Nonlinear Systems. IEEE Trans. Autom. Control 2024.
- Xu, W.; Zheng, N.; Hayami, K. Jacobian-free implicit inner-iteration preconditioner for nonlinear least squares problems. J. Sci. Comput. 2016, 68, 1055–1081.
- Fung, S.W.; Heaton, H.; Li, Q.; McKenzie, D.; Osher, S.; Yin, W. Jacobian-free backpropagation for implicit networks. AAAI Conf. Artif. Intell. 2022, 36, 6648–6656.
- Ashrafizadeh, A.; Devaud, C.B.; Aydemir, N.U. A Jacobian-free Newton–Krylov method for thermalhydraulics simulations. Int. J. Numer. Methods Fluids 2015, 77, 590–615.
- Eguillon, Y.; Lacabanne, B.; Tromeur-Dervout, D. IFOSMONDI co-simulation algorithm with Jacobian-free methods in PETSc. Eng. Comput. 2022, 38, 4423–4449.
- Samanskii, V. On a modification of the Newton method. Ukr. Math. J. 1967, 19, 133–138.
- Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr. 1933, 1, 64–72.
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970.
- Chicharro, F.I.; Cordero, A.; Gutiérrez, J.M.; Torregrosa, J.R. Complex dynamics of derivative-free methods for nonlinear equations. Appl. Math. Comput. 2013, 219, 7023–7035.
- Wang, X.; Zhang, T.; Qin, Y. Efficient two-step derivative-free iterative methods with memory and their dynamics. Int. J. Comput. Math. 2016, 93, 1423–1446.
- Chu, Y.; Rafiq, N.; Shams, M.; Akram, S.; Mir, N.A.; Kalsoom, H. Computer methodologies for the comparison of some efficient derivative free simultaneous iterative methods for finding roots of non-linear equations. Comput. Mater. Contin. 2020, 66, 275–290.
- Bahgat, M.S. Three-point iterative algorithm in the absence of the derivative for solving nonlinear equations and their basins of attraction. J. Egypt. Math. Soc. 2021, 29, 23.
- Zhanlav, T.; Otgondorj, K. Comparison of some optimal derivative-free three-point iterations. J. Numer. Anal. Approx. Theory 2020, 49, 76–90.
- Singh, H.; Sharma, J.R.; Kumar, S. A simple yet efficient two-step fifth-order weighted-Newton method for nonlinear models. Numer. Algorithms 2023, 93, 203–225.
- Amiri, A.R.; Cordero, A.; Darvishi, M.T.; Torregrosa, J.R. Preserving the order of convergence: Low-complexity Jacobian-free iterative schemes for solving nonlinear systems. J. Comput. Appl. Math. 2018, 337, 87–97.
- Ostrowski, A.M. Solutions of Equations and Systems of Equations; Academic Press: New York, NY, USA; London, UK, 1966.
- Behl, R.; Cordero, A.; Torregrosa, J.R.; Bhalla, S. A New High-Order Jacobian-Free Iterative Method with Memory for Solving Nonlinear Systems. Mathematics 2021, 9, 2122.
- Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437.
- Sharma, J.R.; Arora, H. On efficient weighted-Newton methods for solving systems of nonlinear equations. Appl. Math. Comput. 2013, 222, 497–506.
- Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S.S. On a new method for computing the numerical solution of systems of nonlinear equations. J. Appl. Math. 2012, 2012, 751975.
- Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284.
- Abad, M.F.; Cordero, A.; Torregrosa, J.R. A family of seventh-order schemes for solving nonlinear systems. Bull. Math. Soc. Sci. Math. Roum. 2014, 57, 133–145.
- Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. A highly efficient class of optimal fourth-order methods for solving nonlinear systems. Numer. Algorithms 2024, 95, 1879–1904.
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
- Villalba, E.G.; Hernandez, M.; Hueso, J.L.; Martínez, E. Using decomposition of the nonlinear operator for solving non-differentiable problems. Math. Methods Appl. Sci. 2022.
- Cordero, A.; Leonardo-Sepúlveda, M.; Torregrosa, J.R.; Vassileva, M.P. Enhancing the convergence order from p to p+3 in iterative methods for solving nonlinear systems of equations without the use of Jacobian matrices. Mathematics 2023, 11, 4238.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).