Abstract
This article presents the development of three-step derivative-free techniques with memory, which achieve higher convergence orders for solving systems of nonlinear equations. The suggested approaches enhance an existing seventh-order method (without memory) by incorporating various adjustable, self-correcting parameters in the first iterative step. This modification leads to a significant increase in the convergence order, with the new methods achieving convergence orders greater than nine. Additionally, the computational efficiency of these new approaches is evaluated against other comparable methods. Numerical tests show that the suggested approaches are consistently more efficient.
Keywords:
system of nonlinear equations; derivative-free methods; with memory; order of convergence; computational efficiency
MSC:
65H10; 65Y20; 65B99
1. Introduction
In many real-world engineering and scientific problems, it is common to encounter equations of the form $G(x) = 0$, where $G: D \subseteq \mathbb{R}^m \rightarrow \mathbb{R}^m$ and D represents a neighborhood around a solution $\alpha$ of $G(x) = 0$. These equations are often difficult to solve exactly, so the goal is to find approximate solutions. In these situations, iterative approaches are helpful because they produce a sequence of approximations that, under specific conditions, converges to the system's actual solution.
Among various iterative methods, Newton's procedure is particularly notable. It can be expressed as follows:
$$x^{(k+1)} = x^{(k)} - [G'(x^{(k)})]^{-1} G(x^{(k)}), \quad k = 0, 1, 2, \ldots,$$
where $G'(x^{(k)})$ represents the Jacobian matrix associated with G. Newton's approach is well known for its quadratic convergence rate, straightforwardness, and effectiveness. An alternative method, Steffensen's method, arises when the derivative in the Newtonian scheme is substituted with the divided difference $[x^{(k)} + G(x^{(k)}), x^{(k)}; G]$. As discussed in reference [1], Steffensen's method is a derivative-free iterative approach that also demonstrates quadratic convergence. In contrast, methods like Newton's require the computation of the derivative in the iterative formula. This requirement poses challenges when dealing with non-differentiable functions, when the computation of derivatives is costly, or when the Jacobian matrix is singular.
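To make this contrast concrete, the following Python sketch implements one Newton step (which needs the Jacobian) alongside one Steffensen-type step in which the Jacobian is replaced by a first-order divided difference. The toy system, function names, and starting point are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def G(x):
    # Illustrative 2x2 system (not from the paper); root near (0.5671, 0.5671).
    return np.array([x[0] - np.exp(-x[1]), x[1] - np.exp(-x[0])])

def J(x):
    # Jacobian of G, needed only by the Newton step below.
    return np.array([[1.0, np.exp(-x[1])], [np.exp(-x[0]), 1.0]])

def divided_difference(G, y, x):
    """First-order divided difference [y, x; G], built column by column so
    that the secant condition M (y - x) = G(y) - G(x) holds exactly."""
    m = len(x)
    M = np.zeros((m, m))
    for j in range(m):
        zj  = np.concatenate([y[:j + 1], x[j + 1:]])  # y up to coord j, x after
        zj1 = np.concatenate([y[:j], x[j:]])          # y up to coord j-1, x after
        M[:, j] = (G(zj) - G(zj1)) / (y[j] - x[j])
    return M

def newton_step(x):
    # Newton: requires the Jacobian.
    return x - np.linalg.solve(J(x), G(x))

def steffensen_step(x):
    # Derivative-free: the Jacobian is replaced by [x + G(x), x; G].
    gx = G(x)
    return x - np.linalg.solve(divided_difference(G, x + gx, x), gx)

x = np.array([0.4, 0.8])
for _ in range(6):
    x = steffensen_step(x)
print(x, G(x))  # quadratic convergence to the root, with no derivatives
```

Both steps solve one m-by-m linear system per iteration; the difference is only in how the linear operator is obtained.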
Various methods similar to Newton's have been devised, employing distinct techniques such as weight functions, direct composition, and approximations of Jacobian matrices through divided difference operators. Consequently, several iterative approaches for approximating solutions to $G(x) = 0$ have been studied, each exhibiting a distinct order of convergence. These proposals aim to enhance convergence speed or refine computational efficiency.
Recent studies [2,3,4] have introduced novel parametric classes of iterative techniques, which provide a rapid approach for addressing such problems. Moreover, previous works [5,6] have proposed several iterative methods that eliminate the necessity for the Jacobian matrix while showing advantageous rates of convergence. These methods replace the Jacobian matrix with a first-order divided difference operator. In addition, the comprehensive work by Argyros [7] provides a detailed theoretical foundation for iterative methods, discussing their convergence properties, computational aspects, and broad range of applications. This resource serves as a cornerstone for advancing iterative techniques in solving nonlinear equations and systems.
Motivation: Computing the Jacobian matrix for derivative-based iterative methods when solving nonlinear systems, especially in higher dimensions, is a challenging task. This is why derivative-free iterative methods are often preferred, as they avoid the need for computing derivatives. Additionally, the incorporation of memory techniques in iterative methods can significantly improve the convergence rate without requiring extra function evaluations. This is why we are particularly interested in methods with memory, as they offer enhanced performance compared to those without memory. While there are a few studies on derivative-free iterative techniques with memory, most of the existing literature focuses on methods without memory. This gap in the literature motivates our research to explore and develop derivative-free iterative techniques with memory, which can offer superior convergence properties.
Novelty of the paper: The novelty of this research lies in the development of new derivative-free iterative schemes with memory, which achieve more than ninth-order convergence. They improve an existing seventh-order iterative method while minimizing computational costs. These methods are specifically designed to reduce the number of function evaluations and eliminate the need for the costly inversion of Jacobian matrices. By striking a balance between computational efficiency and high convergence rates, the proposed techniques offer significant advancements over current methods.
The structure of our presentation is as follows: The new techniques are presented, and a convergence study is conducted in Section 2. The computational efficiency of these approaches is evaluated in Section 3, which also provides a broad comparison with a number of well-known algorithms. Numerous numerical examples are provided in Section 4 in order to verify the theoretical conclusions and compare the convergence characteristics of the suggested approaches with those of other comparable, well-established approaches. Lastly, Section 5 concludes with final remarks.
2. With Memory Method and Its Convergence Analysis
In this section, we concentrate on optimizing the free parameter in the iterative technique suggested by Sharma and Arora [8] in order to improve its convergence rate. We start by examining the seventh-order method without memory, as described in the cited work [8].
Here, and it is denoted by . The error expressions for the sub-steps of the above-mentioned method are as follows:
where $A_i = \frac{1}{i!}[G'(\alpha)]^{-1} G^{(i)}(\alpha)$, $i = 2, 3, \ldots$, and $\alpha$ is a root of the nonlinear system $G(x) = 0$. From the error expression (5), it is apparent that the method achieves a convergence order of 7 for a fixed choice of the parameter. However, with a suitable choice of the parameter, the convergence order can be improved to exceed nine. Since the precise value of this optimal parameter is not directly accessible, we rely on an approximation derived from the available data, which helps to further accelerate the convergence rate.
The fundamental idea behind developing memory-based methods is to iteratively compute the parameter matrix (for ) by using a sufficiently accurate approximation of , which is derived from the available information. We propose the following forms for this variable matrix parameter :
- Scheme 1.
- Scheme 2. This divided difference operator is also known as Kurchatov's divided difference.
- Scheme 3.
- Scheme 4.
- Scheme 5.
- Scheme 6.
By substituting the parameter in method (2) with Schemes 1 through 6, we obtain six new three-step iterative methods with memory, which are described in detail as follows:
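Whatever concrete form the scheme takes, each update uses only the current and previous iterates. As a rough illustration of how such a self-correcting matrix parameter can be carried between iterations, the sketch below builds one from the Kurchatov-type divided difference named in Scheme 2; the update shown is a hypothetical stand-in, not one of the paper's Schemes 1 through 6, whose exact formulas are those displayed above.

```python
import numpy as np

def divided_difference(G, y, x):
    # Same entrywise first-order divided difference as in the earlier sketch.
    m = len(x)
    M = np.zeros((m, m))
    for j in range(m):
        zj  = np.concatenate([y[:j + 1], x[j + 1:]])
        zj1 = np.concatenate([y[:j], x[j:]])
        M[:, j] = (G(zj) - G(zj1)) / (y[j] - x[j])
    return M

def kurchatov_parameter(G, x_k, x_km1):
    """Hypothetical Scheme-2-style update of the self-accelerating matrix.

    Kurchatov's divided difference [2 x_k - x_{k-1}, x_{k-1}; G] is built
    from the current and previous iterates and approximates G'(x_k) to
    second order in the step between them.
    """
    K = divided_difference(G, 2.0 * x_k - x_km1, x_km1)
    # Taken here as an approximation of -G'(alpha)^{-1}; this choice of
    # target is our assumption for the illustration.
    return -np.linalg.inv(K)
```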
To evaluate the convergence properties of schemes (12)–(17), we need the following results related to Taylor expansions of vector functions (see Refs. [9,10,11,12,13]).
Lemma 1.
Let $D \subseteq \mathbb{R}^m$ be a convex set, and let $G: D \rightarrow \mathbb{R}^m$ be a function that is p-times Fréchet differentiable. Then, for any $x, h \in \mathbb{R}^m$ with $x + h \in D$, the function can be expressed as follows:
$$G(x + h) = G(x) + G'(x)h + \frac{1}{2!}G''(x)h^2 + \cdots + \frac{1}{(p-1)!}G^{(p-1)}(x)h^{p-1} + R_p,$$
where the remainder term satisfies the following inequality:
$$\|R_p\| \leq \frac{1}{p!} \sup_{0 \leq t \leq 1} \|G^{(p)}(x + th)\| \, \|h\|^p,$$
and $h^p$ denotes the vector $(h, h, \ldots, h)$, with h repeated p times.
In our approach, we have used the divided difference operator for the multi-variable function G (see [9,14,15]). This operator, represented as $[\cdot, \cdot; G]: D \times D \subset \mathbb{R}^m \times \mathbb{R}^m \rightarrow L(\mathbb{R}^m)$, satisfies
$$[x, y; G](x - y) = G(x) - G(y) \quad \text{for all } x, y \in D, \ x \neq y.$$
Using a Taylor series to expand $G'(x + t(y - x))$ around the point x and then integrating, we obtain
$$[y, x; G] = \int_0^1 G'(x + t(y - x))\, dt = G'(x) + \frac{1}{2}G''(x)(y - x) + O(\|y - x\|^2).$$
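As a quick numerical check of this expansion (on an illustrative system of our own choosing, not one from the paper), one can compare the entrywise divided difference against the integral representation evaluated by quadrature:

```python
import numpy as np

def G(x):
    # Illustrative system with a closed-form Jacobian (not from the paper).
    return np.array([x[0]**2 + x[1], np.sin(x[0]) + x[1]**3])

def J(x):
    return np.array([[2.0 * x[0], 1.0], [np.cos(x[0]), 3.0 * x[1]**2]])

def divided_difference(G, y, x):
    m = len(x)
    M = np.zeros((m, m))
    for j in range(m):
        zj  = np.concatenate([y[:j + 1], x[j + 1:]])
        zj1 = np.concatenate([y[:j], x[j:]])
        M[:, j] = (G(zj) - G(zj1)) / (y[j] - x[j])
    return M

x = np.array([0.3, 0.7])
y = x + 1e-3                                   # a small step
dd = divided_difference(G, y, x)
ts = (np.arange(1000) + 0.5) / 1000.0          # midpoint quadrature nodes
integral = sum(J(x + t * (y - x)) for t in ts) / 1000.0

# Both operators satisfy the secant condition M (y - x) = G(y) - G(x) ...
print(np.abs(dd @ (y - x) - (G(y) - G(x))).max())        # ~ machine epsilon
print(np.abs(integral @ (y - x) - (G(y) - G(x))).max())  # ~ quadrature error
# ... and the integral form deviates from G'(x) only at first order in the
# step, consistent with [y, x; G] = G'(x) + (1/2) G''(x)(y - x) + O(||y - x||^2).
print(np.abs(integral - J(x)).max())                     # O(||y - x||)
```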
Let $e_k = x^{(k)} - \alpha$ represent the error of the approximation to the solution in the k-th iteration. Assuming the existence of $[G'(\alpha)]^{-1}$, we derive the following by expanding $G(x^{(k)})$ and its first two derivatives around $\alpha$:
$$G(x^{(k)}) = G'(\alpha)\left[e_k + A_2 e_k^2 + A_3 e_k^3 + O(e_k^4)\right].$$
Hence,
$$G'(x^{(k)}) = G'(\alpha)\left[I + 2A_2 e_k + 3A_3 e_k^2 + O(e_k^3)\right]$$
and
$$G''(x^{(k)}) = G'(\alpha)\left[2A_2 + 6A_3 e_k + O(e_k^2)\right].$$
For the iterative techniques (12)–(17), the convergence order is given by the following theorems:
Theorem 1.
Let $G: D \subseteq \mathbb{R}^m \rightarrow \mathbb{R}^m$ be a differentiable function, where D represents a neighborhood around the root α of G, and suppose $G'$ is continuous and $G'(\alpha)$ is invertible. Suppose the initial approximation $x^{(0)}$ is adequately close to α. Then, the sequence $\{x^{(k)}\}$ generated by the iterative procedure described in (12) converges to α with a rate of convergence of . Likewise, the method defined in (13) yields convergence to α with a convergence rate of .
Proof.
Let $\{x^{(k)}\}$ be a sequence of approximations generated by an iterative method having an R-order of at least r that converges to the root α of G. Then,
Let
Substituting into (18) and then employing (20) and (21), we attain
Now,
so
and hence
From (23), written for the index k − 1, we have
Substituting Equation (28) into Equation (27) gives the following:
or
Applying relation (29) in (5), we obtain
where Thus, as and , we obtain
By analyzing the exponents of on the right-hand side of Equations (22) and (31), we obtain the indicial equation.
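To make the passage from error recurrence to indicial equation explicit, here is the generic template of this argument in LaTeX, with hypothetical placeholder exponents a and b standing in for the specific ones obtained above:

```latex
% Generic template: from an error recurrence to the R-order.
% Assume (a and b are placeholders, not the paper's exponents):
%   e_{k+1} \sim e_k^{b} e_{k-1}^{a},
% and let r denote the R-order, so that e_{k+1} \sim e_k^{r} and e_k \sim e_{k-1}^{r}.
\begin{align*}
  e_{k+1} &\sim e_k^{b}\, e_{k-1}^{a}
           \sim \bigl(e_{k-1}^{r}\bigr)^{b} e_{k-1}^{a}
           =    e_{k-1}^{\,rb + a}, &
  e_{k+1} &\sim e_k^{r} \sim e_{k-1}^{\,r^2}.
\end{align*}
% Matching the exponents of e_{k-1} gives the indicial equation
%   r^2 - b r - a = 0,
% whose unique positive root r = (b + \sqrt{b^2 + 4a})/2 is the convergence order.
```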
The convergence order of the iterative scheme (12) is found to be . Alternatively, by applying (18), we can express
Then,
Therefore,
As a result, we observe that the expressions , , , and may appear in . It is clear that the terms and tend to converge to zero more rapidly than . Therefore, we need to assess whether or converges faster. Assuming that the R-order of the method is at least z, it follows that
where approaches , the asymptotic error constant, as . Consequently, we obtain
Thus, if , we find that converges to 0 as . Therefore, when , we have . From the error Equation (5) and this relation, we derive
By equating the exponents of in Equations (22) and (32), we attain the following:
The only positive solution to this equation yields the convergence order of technique (13), which is . □
The two preceding methods, both of which incorporate memory, were developed using the variable . We now investigate the consequences of using the approximations and instead; specifically, we examine the methods presented in Equations (14)–(17).
As part of our ongoing analysis, we aim to determine the convergence order of the memory-based schemes (14)–(17), following a similar approach to the one previously outlined.
Theorem 2.
Let $G: D \subseteq \mathbb{R}^m \rightarrow \mathbb{R}^m$ be a differentiable function defined on a neighborhood D surrounding the root α of G. Suppose that $G'$ is continuous and that the inverse $[G'(\alpha)]^{-1}$ exists. Given an initial approximation $x^{(0)}$ adequately close to α, the following convergence properties hold for the iterating sequences generated by the respective methods:
- The sequence , produced using the method in (14), converges to α with a convergence rate of .
- The sequence , generated by the procedure described in (15), converges to α with a convergence rate of .
- The sequence , formed through the technique outlined in (16), converges to α with a convergence order of .
- The sequence , produced by the approach specified in (17), converges to α with a convergence order of .
3. Computational Efficiency
In order to gauge the effectiveness of the suggested methods, we will employ the efficiency index
$$E = p^{1/C},$$
where the convergence order is indicated by p, the computational cost per iteration by C, and the number of significant decimal digits in the approximation by d. According to the results in [16], the computational cost per iteration for a system of m nonlinear equations in m unknowns can be expressed as follows:
In this context, the first quantity denotes the number of scalar function evaluations required for computing G and its divided differences, whereas the second denotes the total number of products required per iteration. According to [17], the divided difference of G is represented as a matrix with the entries
$$[y, x; G]_{ij} = \frac{G_i(y_1, \ldots, y_j, x_{j+1}, \ldots, x_m) - G_i(y_1, \ldots, y_{j-1}, x_j, \ldots, x_m)}{y_j - x_j}, \quad 1 \leq i, j \leq m. \tag{34}$$
It should be noted that a more sophisticated form of the divided difference was proposed by Grau-Sánchez et al. [18]. Nonetheless, the formulation shown in (34) remains the most commonly used in practice, including in our study, where we follow the same methodology.
We examine the ratio between products and scalar function evaluations, as well as the ratio between products and quotients, in order to express the value of C in terms of products. We consider the following factors when calculating the computational cost of each iteration (a short sketch after this list turns the bookkeeping into code):
- The evaluation of G requires computing m scalar functions.
- Calculating the divided difference [y, x; G] involves evaluating G(x) and G(y) independently and requires m(m − 1) additional scalar function evaluations.
- Each divided difference calculation includes m² quotients (one per matrix entry), m products for vector–scalar multiplication, and m² products for matrix–scalar multiplication.
- Computing the inverse of a linear operator involves solving a linear system, which entails m(m − 1)(2m − 1)/6 products and m(m − 1)/2 quotients during the LU decomposition, as well as m(m − 1) products and m quotients for solving the two triangular systems.
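The sketch below tallies these items into a per-iteration cost and evaluates the efficiency index, here assumed to take the standard form $E = p^{1/C}$ from [16]. The method profile passed in (numbers of G-evaluations, divided differences, and linear solves per iteration) is a made-up example, not one of the methods compared in the text.

```python
from math import log

def lu_solve_cost(m):
    """Products and quotients for one linear solve, per the counts above:
    LU decomposition plus the two triangular substitutions."""
    products  = m * (m - 1) * (2 * m - 1) // 6 + m * (m - 1)
    quotients = m * (m - 1) // 2 + m
    return products + quotients

def iteration_cost(m, n_G, n_dd, n_solves, mu=1.0):
    """Cost of one iteration in units of one product.

    n_G, n_dd, n_solves give a method's per-iteration counts of
    G-evaluations, divided differences, and linear solves; mu is the
    assumed cost ratio of a scalar function evaluation to a product.
    """
    scalar_evals = n_G * m + n_dd * m * (m - 1)
    dd_quotients = n_dd * m * m          # one quotient per matrix entry
    return mu * scalar_evals + dd_quotients + n_solves * lu_solve_cost(m)

def log_efficiency(p, C):
    # Assumed index E = p^(1/C); comparing log E avoids numbers like 1 + 1e-6.
    return log(p) / C

# Hypothetical method profile, for illustration only.
m = 50
C = iteration_cost(m, n_G=4, n_dd=2, n_solves=2)
print(f"C(m={m}) = {C} products,  log E = {log_efficiency(7.0, C):.3e}")
```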
Our investigation systematically compares the efficiency of the six proposed methods (12)–(17) with existing with-memory methods of a similar nature, denoted as , , , and in ref. [2]:
is articulated as follows:
where, , , .
is given as follows:
where
is expressed as follows:
where .
is given as follows:
where,
is given as follows:
where
Given the numbers of evaluations listed above, we can express the efficiency indices of the methods as and the computational costs as . Taking these into account, we now proceed to the comparison.
Efficiency Comparison
In order to compare the computational efficiency indices of two iterative methods, say the i-th and the j-th, we consider the following ratio:
$$R_{i,j} = \frac{\log E_i}{\log E_j} = \frac{C_j \log p_i}{C_i \log p_j}.$$
When $R_{i,j} > 1$, it is clear that the i-th iterative scheme outperforms the j-th in terms of efficiency. This is particularly applicable in cases where and . For the purpose of our analysis, we focus on the specific values and .
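As a concrete reading of this criterion (with made-up orders and costs, since the actual values are those tabulated above), a two-line check suffices:

```python
from math import log

def ratio(p_i, C_i, p_j, C_j):
    # R_{i,j} = log E_i / log E_j = (C_j log p_i) / (C_i log p_j); R > 1 favors method i.
    return (C_j * log(p_i)) / (C_i * log(p_j))

# Hypothetical orders and per-iteration costs, for illustration only.
print(ratio(9.0, 2600.0, 7.0, 2400.0) > 1)   # True: the higher order wins here
```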
- versus :Here, the inequality is valid for . This indicates that for .
- versus :The inequality holds true when , indicating that for .
- versus :When , the inequality is valid, which implies that for .
- versus :For , the inequality holds, indicating that the efficiency index exceeds .
- versus :In this case, we have for , which means for .
- versus :In this case, we have for , which means for .
- versus :The inequality holds true when , indicating that for .
- versus :When , the inequality is valid, which implies that for .
- versus :For , the inequality holds, indicating that the efficiency index exceeds .
- versus :In this case, we have for , which means for .
- versus :Here, the inequality is valid for , which indicates that for .
- versus :The inequality holds true when , indicating that for .
- versus :When , the inequality is valid, which implies that for .
- versus :For , the inequality holds, indicating that the efficiency index exceeds .
- versus :For , the inequality holds, indicating that the efficiency index exceeds .
- versus :The inequality is valid for . This demonstrates that when .
- versus :The inequality holds true when , indicating that for .
- versus :When , the inequality is valid, implying that for .
- versus :For , the inequality holds, indicating that the efficiency index exceeds .
- versus :For , the inequality holds, indicating that the efficiency index exceeds .
- versus :The inequality holds true for . This implies that when .
- versus :The inequality holds true when , indicating that for .
- versus :When , the inequality is valid, which implies that for .
- versus :For , the inequality holds, indicating that the efficiency index exceeds .
- versus :For , the inequality holds, indicating that the efficiency index exceeds .
- versus :In this case, the inequality is valid when . This indicates that for , .
- versus :The inequality holds true when , indicating that for .
- versus :When , the inequality is valid, which implies that for .
- versus :For , the inequality holds, indicating that the efficiency index exceeds .
- versus :For , the inequality is satisfied, which implies that the efficiency index is greater than .
The aforementioned outcomes are visually depicted in Figure 1.
Figure 1.
Plots for efficiency index values for the following: (a) versus , , , , ; (b) versus , , , , ; (c) versus , , , , ; (d) versus , , , , ; (e) versus , , , , ; (f) versus , , , , .
Figure 1 illustrates the comparative performance of the proposed methods, namely , , , , , and . The results clearly demonstrate that these methods consistently achieve superior efficiency compared to the well-established existing methods, including , , , , and . This enhanced efficiency is evident across all scenarios considered in the analysis.
4. Numerical Results and Discussion
In this section, we present several numerical problems that demonstrate the convergence characteristics of the suggested approaches. The performance of our approach is compared with existing methods, namely , , , , and , as detailed in reference [2]. All computations were performed using Mathematica 8.0 [19] with multiple-precision arithmetic set to 2048 digits to ensure a high level of accuracy. The stopping criterion for the experiments is the following:
where T is the tolerance specific to each method. For each example, the error tolerance is set to .
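To show how the stopping rule and the memory update interact in practice, here is a schematic driver loop in Python. The three-step update body is deliberately left as a stub, since the concrete formulas are those of schemes (12)–(17) above; the stopping test shown (distance between consecutive iterates below T) is one common choice and is our assumption here.

```python
import numpy as np

def solve_with_memory(G, step, x0, B0, tol=1e-12, max_iter=100):
    """Schematic driver for a three-step method with memory.

    `step(G, x, x_prev, B)` must perform one full pass of the scheme and
    return (x_next, B_next); its body is method-specific (schemes (12)-(17))
    and is not reproduced here.  On the first pass x_prev is None, so the
    scheme runs with the user-supplied initial parameter B0 (the examples
    below initialize it with the identity matrix).
    """
    x_prev, x, B = None, np.asarray(x0, dtype=float), B0
    for k in range(max_iter):
        x_next, B = step(G, x, x_prev, B)
        if np.linalg.norm(x_next - x) < tol:   # assumed form of the stopping test
            return x_next, k + 1
        x_prev, x = x, x_next
    return x, max_iter
```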
Evaluation Metrics
Table 1, Table 2, Table 3 and Table 4 summarize the numerical results of the compared methods across the test examples, using the following metrics:
- The computed approximation $x^{(k)}$.
- The function value $\|G(x^{(k)})\|$.
- The distance between consecutive iterations, $\|x^{(k+1)} - x^{(k)}\|$.
- The iteration count required to satisfy the stopping criterion.
- The approximated computational order of convergence (ACOC):
$$\rho \approx \frac{\ln\left(\|x^{(k+1)} - x^{(k)}\| / \|x^{(k)} - x^{(k-1)}\|\right)}{\ln\left(\|x^{(k)} - x^{(k-1)}\| / \|x^{(k-1)} - x^{(k-2)}\|\right)}.$$
- The total CPU time taken for the entire computation process.
Table 1. Numerical results for Example 1.
Table 2. Numerical results for Example 2.
Table 3. Numerical results for Example 3.
Table 4. Numerical results for Example 4.
These metrics provide a comprehensive comparison of the efficiency, accuracy, and convergence speed of the proposed techniques relative to existing schemes.
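For completeness, the ACOC in the formula above can be computed directly from the stored iterates; here is a minimal helper whose name and interface are ours, not from the paper:

```python
import numpy as np

def acoc(iterates):
    """ACOC from the last four stored iterates x_{k-2}, ..., x_{k+1}:
    rho ~ ln(d_k / d_{k-1}) / ln(d_{k-1} / d_{k-2}), with d_i = ||x_{i+1} - x_i||."""
    d = [np.linalg.norm(b - a) for a, b in zip(iterates, iterates[1:])]
    return np.log(d[-1] / d[-2]) / np.log(d[-2] / d[-3])

# Artificial order-2 sequence converging to 0: errors 1e-1, 1e-2, 1e-4, 1e-8, 1e-16.
xs = [np.array([1e-1]), np.array([1e-2]), np.array([1e-4]),
      np.array([1e-8]), np.array([1e-16])]
print(acoc(xs))   # ~2, as expected for quadratic convergence
```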
Example 1.
We consider the boundary value problem specified below (see [9]):
Consider dividing the interval as
Define the discrete variables corresponding to the function values at the partition points:
Using numerical approximations for the first and second derivatives, we obtain the following:
By substituting these approximations into the governing equation, we obtain a set of nonlinear equations for variables:
- Specific Case:
For , the step size and partition points are as follows:
The initial values are set as follows:
where the identity matrix is indicated by I.
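The discretization pattern just described is the standard one: central differences turn the boundary value problem into a nonlinear system $G(y) = 0$ in the interior unknowns. The sketch below applies it to a stand-in problem $y'' = f(x, y)$ with Dirichlet conditions, where both f and the boundary data are hypothetical placeholders, not the problem from [9]:

```python
import numpy as np

def bvp_residual(y_inner, n, f, a=0.0, b=1.0, ya=0.0, yb=0.0):
    """Nonlinear system G(y) = 0 from central differences on y'' = f(x, y).

    y_inner holds y_1, ..., y_{n-1}; y_0 = ya and y_n = yb are the
    (assumed) Dirichlet boundary values.
    """
    h = (b - a) / n
    x = a + h * np.arange(1, n)                  # interior partition points
    y = np.concatenate([[ya], y_inner, [yb]])
    # Central difference: y''(x_i) ~ (y_{i-1} - 2 y_i + y_{i+1}) / h^2
    ypp = (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2
    return ypp - f(x, y[1:-1])

# Hypothetical right-hand side, for illustration only.
f = lambda x, y: np.exp(y)                       # e.g. a Bratu-type term
G = lambda y: bvp_residual(y, n=10, f=f)
print(G(np.zeros(9)))                            # residual at the zero initial guess
```

Any of the derivative-free schemes above can then be applied to this G without ever forming its Jacobian.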
Example 2.
Now, consider the system of six equations as described in ref. [20]:
With an initial guess and initializing with , where the identity matrix is denoted by I, the error tolerance is set to .
Example 3.
We examine the fifty-equation system: (taken from [2]):
The tolerance for this example is , the starting guess is , and the initial estimates for the vectors , , and are .
Example 4.
We examine the three-equation system: (adapted from [6]):
The tolerance for this example is , and the starting guess is . The parameter is initialized as , where I represents the identity matrix.
Figure 2 provides a graphical representation of the errors at each consecutive iteration for the different iterative processes employed to solve Examples 1–4. The figure highlights the superior performance of our proposed methods (15)–(17) compared to other well-known methods. Specifically, it shows that these methods converge more rapidly and achieve significantly higher computational accuracy across all examples.
Figure 2.
Graphical comparison of errors in consecutive iterations for Examples 1–4.
Furthermore, Figure 3 depicts the function values at each iteration for Examples 1–4. These results reaffirm the effectiveness of our approach in achieving consistent and accurate solutions while maintaining stability in the iterative process. The function values obtained using our method exhibit a faster decrease to zero (or the desired tolerance), which is a direct indicator of superior convergence characteristics.
Figure 3.
The graphical comparison illustrates the values of the functions at each iteration for Examples 1–4.
In addition to accuracy, computational efficiency is a critical aspect when comparing iterative methods. For the four examples considered, our method demonstrates a notable reduction in CPU time compared to other existing methods. This computational advantage arises from the robust design of our iterative scheme, which effectively reduces the number of iterations required to achieve the desired tolerance. Consequently, our approach is not only more accurate but also computationally efficient, making it a preferable choice for solving nonlinear systems.
These results collectively highlight the advantages of our proposed method in terms of both computational accuracy and efficiency, particularly when compared to other well-established methods from the literature.
5. Conclusions
We have developed a new family of three-step derivative-free iterative algorithms with memory for solving systems of nonlinear equations. Using the first-order divided difference operator for multivariable functions and applying Taylor expansions, we have determined the minimal order of convergence of these techniques and found that the proposed schemes attain convergence orders exceeding nine. To enhance the convergence speed, we incorporated a self-accelerating parameter that evolves as the iterations progress and is implemented through a self-correcting matrix. This allows for faster convergence in solving nonlinear systems of equations. Furthermore, we have evaluated the computational efficiency of our schemes against existing approaches, with numerical experiments demonstrating that our methods achieve significantly better performance. These findings validate the effectiveness and reliability of our proposed approach. In future research, this technique will be extended to similar methods [4,5,6,8,20].
Author Contributions
Conceptualization, N.K. and J.P.J.; methodology, N.K. and J.P.J.; software, N.K. and J.P.J.; validation, N.K., J.P.J. and I.K.A.; formal analysis, N.K., J.P.J. and I.K.A.; resources, N.K.; writing—original draft preparation, N.K. and J.P.J.; writing—review and editing, N.K., J.P.J. and I.K.A.; supervision, J.P.J. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding authors.
Acknowledgments
The authors would like to express their gratitude to all the reviewers for their significant comments and suggestions, which improved the quality of the current work. The first two authors are thankful to the Department of Science and Technology, New Delhi, India, for sanctioning the proposal under the FIST program (Ref. No. SR/FST/MS/2022 dated 19 December 2022).
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Steffensen, J.F. Remarks on iteration. Scand. Actuar. J. 1933, 16, 64–72.
- Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero, P.N. Design of iterative methods with memory for solving nonlinear systems. Math. Methods Appl. Sci. 2023, 46, 12361–12377.
- Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the improvement of the order of convergence of iterative methods for solving nonlinear systems by means of memory. Appl. Math. Lett. 2020, 104, 106277.
- Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero, P.N. Improving the order of a fifth-order family of vectorial fixed point schemes by introducing memory. Fixed Point Theory 2023, 24, 155–172.
- Behl, R.; Cordero, A.; Torregrosa, J.R.; Bhalla, S. A new high-order Jacobian-free iterative method with memory for solving nonlinear systems. Mathematics 2021, 9, 2122.
- Petković, M.S.; Sharma, J.R. On some efficient derivative-free iterative methods with memory for solving systems of nonlinear equations. Numer. Algorithms 2016, 71, 457–474.
- Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press/Taylor and Francis: Boca Raton, FL, USA, 2022.
- Sharma, J.R.; Arora, H. A novel derivative free algorithm with seventh order convergence for solving systems of nonlinear equations. Numer. Algorithms 2014, 67, 917–933.
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; SIAM: Philadelphia, PA, USA, 2000.
- Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Akad. Nauk SSSR 1971, 198, 524–526.
- Shakhno, S.M. On a Kurchatov's method of linear interpolation for solving nonlinear equations. PAMM Proc. Appl. Math. Mech. 2004, 4, 650–651.
- Shakhno, S.M. On the difference method with quadratic convergence for solving nonlinear operator equations. Mat. Stud. 2006, 26, 105–110.
- Ezquerro, J.A.; Grau, A.; Grau-Sánchez, M.; Hernández, M. On the efficiency of two variants of Kurchatov's method for solving nonlinear systems. Numer. Algorithms 2013, 64, 685–698.
- Argyros, I.K. Advances in the Efficiency of Computational Methods and Applications; World Scientific Publishing Company: Singapore, 2000.
- Grau-Sánchez, M.; Grau, Á.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385.
- Grau-Sánchez, M.; Noguera, M. A technique to choose the most efficient method between secant method and some variants. Appl. Math. Comput. 2012, 218, 6415–6426.
- Grau-Sánchez, M.; Grau, Á.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743.
- Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372.
- Wolfram, S. The Mathematica Book; Wolfram Research, Inc.: Champaign, IL, USA, 2003.
- Sharma, J.R.; Arora, H. An efficient derivative free iterative method for solving systems of nonlinear equations. Appl. Anal. Discret. Math. 2013, 7, 390–403.