Article

Novel Techniques with Memory Extension of Three-Step Derivative-Free Iterative Scheme for Nonlinear Systems

1 Department of Mathematics, Guru Ghasidas Vishwavidyalaya (A Central University), Bilaspur 495009, C.G., India
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Authors to whom correspondence should be addressed.
Computation 2025, 13(2), 55; https://doi.org/10.3390/computation13020055
Submission received: 26 December 2024 / Revised: 24 January 2025 / Accepted: 12 February 2025 / Published: 17 February 2025

Abstract

This article presents the development of three-step derivative-free techniques with memory, which achieve higher convergence orders for solving systems of nonlinear equations. The suggested approaches enhance an existing seventh-order method (without memory) by incorporating various adjustable, self-correcting parameters in the first iterative step. This modification leads to a significant increase in the convergence order, with the new methods reaching orders of approximately 7.2749, 7.5311, 7.6056, 8.1231, 8.2749, and 9.2169. Additionally, the computational efficiency of these new approaches is evaluated against other comparable methods. Numerical tests show that the suggested approaches are consistently more efficient.

1. Introduction

In many real-world engineering and scientific problems, it is common to encounter equations of the form $G(r) = 0$, where $G : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ and $D$ is a neighborhood of a solution of $G(r) = 0$. These equations are often difficult to solve exactly, so the goal is to find approximate solutions. In such situations, iterative approaches are helpful because they produce a sequence of approximations $\{r^{(k)}\}$ that, under suitable conditions, converges to the system's actual solution.
Among various iterative methods, Newton’s procedure is particularly notable. It can be expressed as follows:
$$r^{(k+1)} = r^{(k)} - [G'(r^{(k)})]^{-1} G(r^{(k)}),$$
where $G'(r^{(k)})$ denotes the Jacobian matrix of $G$ at $r^{(k)}$. Newton's approach is well known for its quadratic convergence rate, straightforwardness, and effectiveness. An alternative, Steffensen's method, arises when the derivative in the Newtonian scheme is replaced by the divided difference $[r + G(r), r; G]$. As discussed in [1], Steffensen's method is a derivative-free iterative approach that also demonstrates quadratic convergence. In contrast, methods like Newton's require the computation of the derivative $G'$ in the iterative formula. This requirement poses challenges when dealing with non-differentiable functions, when the computation of derivatives is costly, or when the Jacobian matrix is singular.
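For systems, the divided difference $[r + G(r), r; G]$ can be formed componentwise with the standard first-order formula recalled later in Section 3. A minimal sketch of Steffensen's method in this setting, where the test system and starting point are our own illustrative choices:

```python
import numpy as np

def divided_difference(F, x, y):
    """First-order divided difference [x, y; F] as an m x m matrix:
    column j holds a one-sided difference quotient in the j-th variable."""
    m = len(x)
    M = np.empty((m, m))
    for j in range(m):
        z1 = np.concatenate([x[:j + 1], y[j + 1:]])  # (x_1,...,x_j, y_{j+1},...,y_m)
        z0 = np.concatenate([x[:j], y[j:]])          # (x_1,...,x_{j-1}, y_j,...,y_m)
        M[:, j] = (F(z1) - F(z0)) / (x[j] - y[j])
    return M

def steffensen(F, r, tol=1e-12, max_iter=20):
    """Steffensen's method: Newton's scheme with the Jacobian replaced
    by the divided difference [r + F(r), r; F]."""
    for _ in range(max_iter):
        Fr = F(r)
        if np.linalg.norm(Fr) < tol:
            break
        A = divided_difference(F, r + Fr, r)
        r = r - np.linalg.solve(A, Fr)
    return r

# Hypothetical test system with root (1, 1)
G = lambda v: np.array([v[0]**3 + v[1] - 2.0, v[1]**3 + v[0] - 2.0])
root = steffensen(G, np.array([1.2, 0.8]))
print(root)
```

No derivative of $G$ appears anywhere; only extra evaluations of $G$ are needed to build the divided-difference matrix.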
Various methods similar to Newton's have been devised, employing distinct techniques such as weight functions, direct composition, and estimation of Jacobian matrices through divided difference operators. Consequently, several iterative approaches for approximating solutions to $G(r) = 0$ have been scrutinized, each exhibiting a distinct order of convergence. These proposals aim to enhance convergence speed or refine computational efficiency.
Recent studies [2,3,4] have introduced novel parametric classes of iterative techniques, which provide a rapid approach for addressing such problems. Moreover, previous works [5,6] have proposed several iterative methods that eliminate the need for the Jacobian matrix while showing advantageous rates of convergence. These methods replace the Jacobian matrix $G'(\cdot)$ with the divided difference operator $[\cdot, \cdot\,; G]$. In addition, the comprehensive work by Argyros [7] provides a detailed theoretical foundation for iterative methods, discussing their convergence properties, computational aspects, and broad range of applications. This resource serves as a cornerstone for advancing iterative techniques in solving nonlinear equations and systems.
Motivation: Computing the Jacobian matrix for derivative-based iterative methods, especially in higher dimensions, is a challenging task; derivative-free iterative methods are therefore often preferred. Additionally, incorporating memory into an iterative method can significantly improve its convergence rate without requiring extra function evaluations, which is why we are particularly interested in methods with memory. While there are a few studies on derivative-free iterative techniques with memory, most of the existing literature focuses on methods without memory. This gap motivates our research into derivative-free iterative techniques with memory, which can offer superior convergence properties.
Novelty of the paper: The novelty of this research lies in the development of new derivative-free iterative schemes with memory, which achieve more than ninth-order convergence. They improve an existing seventh-order iterative method while minimizing computational cost. These methods are specifically designed to reduce the number of function evaluations and eliminate the need for the costly inversion of Jacobian matrices. By striking a balance between computational efficiency and high convergence rates, the proposed techniques offer significant advancements over current methods.
The structure of our presentation is as follows: The new techniques are presented, and a convergence study is conducted in Section 2. The computational efficiency of these approaches is evaluated in Section 3, which also provides a broad comparison with a number of well-known algorithms. Numerous numerical examples are provided in Section 4 in order to verify the theoretical conclusions and compare the convergence characteristics of the suggested approaches with those of other comparable, well-established approaches. Lastly, Section 5 concludes with final remarks.

2. With-Memory Methods and Their Convergence Analysis

In this section, we concentrate on choosing the parameter $\beta$ in the iterative technique suggested by Sharma and Arora [8] so as to improve its convergence rate. We start by examining the seventh-order method without memory, as described in the cited work [8]:
$$\begin{aligned} p^{(k)} &= r^{(k)} - [t^{(k)}, r^{(k)}; G]^{-1} G(r^{(k)}),\\ q^{(k)} &= p^{(k)} - \Big(3I - [t^{(k)}, r^{(k)}; G]^{-1}\big([p^{(k)}, r^{(k)}; G] + [p^{(k)}, t^{(k)}; G]\big)\Big)[t^{(k)}, r^{(k)}; G]^{-1} G(p^{(k)}),\\ r^{(k+1)} &= q^{(k)} - [q^{(k)}, p^{(k)}; G]^{-1}\big([t^{(k)}, r^{(k)}; G] + [p^{(k)}, r^{(k)}; G] - [q^{(k)}, r^{(k)}; G]\big)[t^{(k)}, r^{(k)}; G]^{-1} G(q^{(k)}). \end{aligned} \tag{2}$$
Here $t^{(k)} = r^{(k)} + \beta G(r^{(k)})$, and the scheme is denoted by $M_7$. The error expressions for the sub-steps of the above-mentioned method are as follows:
$$e^{(p,k)} = (I + \beta G'(\alpha)) A_2 (e^{(k)})^2,$$
$$e^{(q,k)} = (I + \beta G'(\alpha)) A_2 (e^{(k)})^4,$$
$$e^{(k+1)} = (I + \beta G'(\alpha))^2 A_2^2\, Q\, (e^{(k)})^7, \tag{5}$$
where $Q = \big((2I + \beta G'(\alpha))A_2^2 - A_3\big)\big(\big(5I + \beta G'(\alpha)(5I + \beta G'(\alpha))\big)A_2^2 - (I + \beta G'(\alpha))A_3\big)$, $e^{(k+1)} = r^{(k+1)} - \alpha$, and $A_i = \frac{1}{i!}[G'(\alpha)]^{-1} G^{(i)}(\alpha)$. Let $\alpha$ be a root of the nonlinear system $G(r) = 0$. From the error expression (5), it is apparent that the method achieves a convergence order of 7 when $\beta \neq -[G'(\alpha)]^{-1}$. However, if we choose $\beta = -[G'(\alpha)]^{-1}$, the convergence order can be improved to exceed nine. Since the precise value of $G'(\alpha)$ is not directly accessible, we rely on an approximation of $G'(\alpha)$ derived from the available data, which helps to further accelerate the convergence rate.
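To make scheme (2) concrete, here is a compact numerical sketch of $M_7$ with a scalar $\beta$; the test system, starting point, and the value $\beta = 0.01$ are our own illustrative choices:

```python
import numpy as np

def divided_difference(F, x, y):
    """First-order divided difference [x, y; F] (formula (34) in Section 3)."""
    m = len(x)
    M = np.empty((m, m))
    for j in range(m):
        z1 = np.concatenate([x[:j + 1], y[j + 1:]])
        z0 = np.concatenate([x[:j], y[j:]])
        M[:, j] = (F(z1) - F(z0)) / (x[j] - y[j])
    return M

def m7_step(F, r, beta):
    """One step of the seventh-order scheme M7 with t = r + beta * F(r)."""
    I = np.eye(len(r))
    t = r + beta * F(r)
    A = divided_difference(F, t, r)          # [t, r; G]
    A_inv = np.linalg.inv(A)
    p = r - A_inv @ F(r)
    B_pr = divided_difference(F, p, r)       # [p, r; G]
    B_pt = divided_difference(F, p, t)       # [p, t; G]
    q = p - (3 * I - A_inv @ (B_pr + B_pt)) @ (A_inv @ F(p))
    C_qp = divided_difference(F, q, p)       # [q, p; G]
    C_qr = divided_difference(F, q, r)       # [q, r; G]
    return q - np.linalg.solve(C_qp, (A + B_pr - C_qr) @ (A_inv @ F(q)))

# Hypothetical test system with root (1, 1)
G = lambda v: np.array([v[0]**3 + v[1] - 2.0, v[1]**3 + v[0] - 2.0])
r = np.array([1.2, 0.8])
for _ in range(3):
    if np.linalg.norm(G(r)) < 1e-13:
        break
    r = m7_step(G, r, beta=0.01)
print(r, np.linalg.norm(G(r)))
```

Note that each iteration uses only evaluations of $G$ and linear solves; no Jacobian is ever formed, and only the single operator $[t^{(k)}, r^{(k)}; G]$ needs to be factored.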
The fundamental idea behind developing memory-based methods is to iteratively compute the parameter matrix $\beta = B_i^{(k)}$ ($i = 1, 2, \ldots, 6$; $k = 1, 2, \ldots$) as a sufficiently accurate approximation of $-[G'(\alpha)]^{-1}$, derived from the information already available. We propose the following forms for this variable matrix parameter $B_i^{(k)}$:
  • Scheme 1. $B_1^{(k)} = -[t^{(k-1)}, r^{(k-1)}; G]^{-1}$.
  • Scheme 2. $B_2^{(k)} = -[2r^{(k)} - r^{(k-1)}, r^{(k-1)}; G]^{-1}$. This divided difference operator is also known as Kurchatov's divided difference.
  • Scheme 3. $B_3^{(k)} = -[r^{(k)}, p^{(k-1)}; G]^{-1}$.
  • Scheme 4. $B_4^{(k)} = -[r^{(k)}, q^{(k-1)}; G]^{-1}$.
  • Scheme 5. $B_5^{(k)} = -[2r^{(k)} - p^{(k-1)}, p^{(k-1)}; G]^{-1}$.
  • Scheme 6. $B_6^{(k)} = -[2r^{(k)} - q^{(k-1)}, q^{(k-1)}; G]^{-1}$.
By substituting the parameter $\beta$ in method (2) with Schemes 1 through 6, we obtain six new three-step iterative methods with memory, described as follows:
$$\begin{aligned} t^{(k)} &= r^{(k)} + B_1^{(k)} G(r^{(k)}),\\ p^{(k)} &= r^{(k)} - [t^{(k)}, r^{(k)}; G]^{-1} G(r^{(k)}),\\ q^{(k)} &= p^{(k)} - \Big(3I - [t^{(k)}, r^{(k)}; G]^{-1}\big([p^{(k)}, r^{(k)}; G] + [p^{(k)}, t^{(k)}; G]\big)\Big)[t^{(k)}, r^{(k)}; G]^{-1} G(p^{(k)}),\\ r^{(k+1)} &= q^{(k)} - [q^{(k)}, p^{(k)}; G]^{-1}\big([t^{(k)}, r^{(k)}; G] + [p^{(k)}, r^{(k)}; G] - [q^{(k)}, r^{(k)}; G]\big)[t^{(k)}, r^{(k)}; G]^{-1} G(q^{(k)}). \end{aligned} \tag{12}$$
The methods (13)–(17) are identical to (12) except that the first step uses $t^{(k)} = r^{(k)} + B_i^{(k)} G(r^{(k)})$ with $B_2^{(k)}$, $B_3^{(k)}$, $B_4^{(k)}$, $B_5^{(k)}$, and $B_6^{(k)}$, respectively.
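A sketch of the with-memory variant (13), i.e., Scheme 2 with Kurchatov's divided difference; the starting matrix $B^{(0)} = -0.01\,I$ and the test system are our own illustrative choices:

```python
import numpy as np

def divided_difference(F, x, y):
    """First-order divided difference [x, y; F] (formula (34) in Section 3)."""
    m = len(x)
    M = np.empty((m, m))
    for j in range(m):
        z1 = np.concatenate([x[:j + 1], y[j + 1:]])
        z0 = np.concatenate([x[:j], y[j:]])
        M[:, j] = (F(z1) - F(z0)) / (x[j] - y[j])
    return M

def with_memory_step(F, r, B):
    """One step of method (13): t = r + B F(r), then the three M7 sub-steps."""
    I = np.eye(len(r))
    t = r + B @ F(r)
    A = divided_difference(F, t, r)
    A_inv = np.linalg.inv(A)
    p = r - A_inv @ F(r)
    B_pr = divided_difference(F, p, r)
    B_pt = divided_difference(F, p, t)
    q = p - (3 * I - A_inv @ (B_pr + B_pt)) @ (A_inv @ F(p))
    C_qp = divided_difference(F, q, p)
    C_qr = divided_difference(F, q, r)
    return q - np.linalg.solve(C_qp, (A + B_pr - C_qr) @ (A_inv @ F(q)))

G = lambda v: np.array([v[0]**3 + v[1] - 2.0, v[1]**3 + v[0] - 2.0])
r = np.array([1.2, 0.8])
B = -0.01 * np.eye(2)        # arbitrary starting parameter B^(0)
r_prev = None
for _ in range(3):
    if np.linalg.norm(G(r)) < 1e-13:
        break
    if r_prev is not None:   # Scheme 2: B = -[2r - r_prev, r_prev; G]^{-1}
        B = -np.linalg.inv(divided_difference(G, 2 * r - r_prev, r_prev))
    r_prev = r.copy()
    r = with_memory_step(G, r, B)
print(np.linalg.norm(G(r)))
```

The acceleration is free in the sense of the Motivation above: the Kurchatov update reuses $r^{(k-1)}$ and costs no additional evaluations of $G$ beyond those of the underlying step.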
To evaluate the convergence properties of schemes (12)–(17), we need the following results on Taylor expansions of vector functions (see [9,10,11,12,13]).
Lemma 1.
Let $D \subseteq \mathbb{R}^m$ be a convex set, and let $G : D \to \mathbb{R}^m$ be a function that is $p$-times Fréchet differentiable. Then, for any $r, h \in \mathbb{R}^m$ with $r + h \in D$, the function $G(r+h)$ can be expressed as follows:
$$G(r+h) = G(r) + G'(r)h + \frac{1}{2!}G''(r)h^2 + \frac{1}{3!}G'''(r)h^3 + \cdots + \frac{1}{(p-1)!}G^{(p-1)}(r)h^{p-1} + R_p,$$
where the remainder term $R_p$ satisfies the inequality
$$\|R_p\| \le \frac{1}{p!}\sup_{0 \le t \le 1}\|G^{(p)}(r+th)\|\,\|h\|^p,$$
and $h^p$ denotes the $p$-tuple $(h, h, \ldots, h)$.
In our approach, we use the divided difference operator for the multi-variable function $G$ (see [9,14,15]). This operator, represented as $[\cdot, \cdot\,; G]$, is a mapping $D \times D \subseteq \mathbb{R}^m \times \mathbb{R}^m \to \mathcal{L}(\mathbb{R}^m)$ defined as follows:
$$[r+h, r; G] = \int_0^1 G'(r+th)\,dt; \qquad r, h \in \mathbb{R}^m. \tag{18}$$
Expanding $G'(r+th)$ in a Taylor series around the point $r$ and then integrating, we obtain
$$[r+h, r; G] = \int_0^1 G'(r+th)\,dt = G'(r) + \frac{1}{2}G''(r)h + O(h^2).$$
Let $e^{(k)} = r^{(k)} - \alpha$ denote the error of the approximation $r^{(k)}$ of the solution $\alpha$ at the $k$-th iteration. Assuming the existence of $[G'(\alpha)]^{-1}$, we derive the following by expanding $G(r^{(k)})$ and its first two derivatives around $\alpha$:
$$G(r^{(k)}) = G'(\alpha)\big(e^{(k)} + A_2 (e^{(k)})^2 + A_3 (e^{(k)})^3 + O((e^{(k)})^4)\big). \tag{20}$$
Hence,
$$G'(r^{(k)}) = G'(\alpha)\big(I + 2A_2 e^{(k)} + 3A_3 (e^{(k)})^2 + O((e^{(k)})^3)\big) \tag{21}$$
and
$$G''(r^{(k)}) = G'(\alpha)\big(2A_2 + 6A_3 e^{(k)} + O((e^{(k)})^2)\big).$$
For the iterative techniques (12)–(17), the convergence order is given by the following theorems:
Theorem 1.
Let $G : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be a differentiable function, where $D$ is a neighborhood of the root $\alpha$ of $G$, and suppose $G'(\alpha)$ is continuous and invertible. If the initial approximation $r^{(0)}$ is sufficiently close to $\alpha$, then the sequence $\{r^{(k)}\}$ generated by the iterative procedure (12) converges to $\alpha$ with a rate of convergence of $7.2749$. Likewise, the method defined in (13) converges to $\alpha$ with a convergence rate of $7.5311$.
Proof. 
Let $\{r^{(k)}\}$ be a sequence of approximations generated by an iterative method with R-order at least $r$, converging to the root $\alpha$ of $G$. Then
$$e^{(k+1)} \sim D_{k,r}\,(e^{(k)})^r,$$
and consequently
$$e^{(k+1)} \sim D_{k,r}\big(D_{k-1,r}(e^{(k-1)})^r\big)^r \sim D_{k,r}(D_{k-1,r})^r (e^{(k-1)})^{r^2}. \tag{22}$$
Let
$$\begin{aligned} e_t^{(k)} = t^{(k)} - \alpha &= r^{(k)} + B^{(k)} G(r^{(k)}) - \alpha\\ &= e^{(k)} + B^{(k)} G(r^{(k)})\\ &= e^{(k)} + B^{(k)} G'(\alpha)\big[e^{(k)} + O((e^{(k)})^2)\big]\\ &= \big(I + B^{(k)} G'(\alpha)\big)e^{(k)} + O((e^{(k)})^2). \end{aligned} \tag{23}$$
Substituting $r + h = t^{(k-1)}$, $r = r^{(k-1)}$ into (18) and then employing (20) and (21), we attain
$$[t^{(k-1)}, r^{(k-1)}; G] = G'(\alpha)\big(I + A_2(e_t^{(k-1)} + e^{(k-1)}) + O((e^{(k-1)})^2)\big).$$
Now,
$$[t^{(k-1)}, r^{(k-1)}; G]^{-1} = \big(I - A_2(e_t^{(k-1)} + e^{(k-1)}) + O((e^{(k-1)})^2)\big)[G'(\alpha)]^{-1},$$
so
$$B^{(k)} = -\big(I - A_2(e_t^{(k-1)} + e^{(k-1)}) + O((e^{(k-1)})^2)\big)[G'(\alpha)]^{-1},$$
and hence
$$I + B^{(k)} G'(\alpha) = A_2(e_t^{(k-1)} + e^{(k-1)}) + O((e^{(k-1)})^2). \tag{27}$$
From (23) with index $k-1$, we have
$$e_t^{(k-1)} = \big(I + B^{(k-1)} G'(\alpha)\big)e^{(k-1)} + O((e^{(k-1)})^2). \tag{28}$$
Substituting Equation (28) into Equation (27) gives the following:
$$I + B^{(k)} G'(\alpha) = A_2\big((I + B^{(k-1)} G'(\alpha))e^{(k-1)} + e^{(k-1)}\big) + O((e^{(k-1)})^2),$$
or
$$I + B^{(k)} G'(\alpha) = A_2\big(2I + B^{(k-1)} G'(\alpha)\big)e^{(k-1)} + O((e^{(k-1)})^2). \tag{29}$$
Applying relation (29) in (5), we obtain
$$e^{(k+1)} = \Big(A_2\big(2I + B^{(k-1)} G'(\alpha)\big)e^{(k-1)}\Big)^2 A_2^2\, Q\,(e^{(k)})^7 \sim L^{(k)} (e^{(k-1)})^2 (e^{(k)})^7 + O((e^{(k)})^8),$$
where $L^{(k)} = \Big(A_2\big(2I + B^{(k-1)} G'(\alpha)\big)\Big)^2 A_2^2\, Q$. Thus, as $r^{(k)} \to \alpha$ and $k \to \infty$, we obtain
$$e^{(k+1)} \sim (e^{(k)})^7 (e^{(k-1)})^2 \sim \big(D_{k-1,r}(e^{(k-1)})^r\big)^7 (e^{(k-1)})^2 \sim (D_{k-1,r})^7 (e^{(k-1)})^{7r+2}. \tag{31}$$
By matching the exponents of $e^{(k-1)}$ on the right-hand sides of Equations (22) and (31), we obtain the indicial equation
$$r^2 = 7r + 2 \;\Longrightarrow\; r^2 - 7r - 2 = 0 \;\Longrightarrow\; r = \frac{7 + \sqrt{57}}{2} \approx 7.2749.$$
The convergence order of the iterative scheme (12) is therefore $7.2749$. Alternatively, by applying (18), we can express
$$[2r^{(k)} - r^{(k-1)}, r^{(k-1)}; G] = G'(\alpha)\big(I + 2A_2 e^{(k)} + A_3(e^{(k-1)})^2 - 2A_3 e^{(k)} e^{(k-1)} + 4A_3 (e^{(k)})^2 + O_3(e^{(k)}, e^{(k-1)})\big).$$
Then,
$$[2r^{(k)} - r^{(k-1)}, r^{(k-1)}; G]^{-1} = \big(I - 2A_2 e^{(k)} - A_3(e^{(k-1)})^2 + 2A_3 e^{(k)} e^{(k-1)} + 4(A_2^2 - A_3)(e^{(k)})^2\big)[G'(\alpha)]^{-1} + O_3(e^{(k)}, e^{(k-1)}).$$
Therefore,
$$I + B^{(k)} G'(\alpha) = 2A_2 e^{(k)} + A_3(e^{(k-1)})^2 - 2A_3 e^{(k)} e^{(k-1)} - 4(A_2^2 - A_3)(e^{(k)})^2.$$
As a result, the expressions $e^{(k)}$, $e^{(k)}e^{(k-1)}$, $(e^{(k)})^2$, and $(e^{(k-1)})^2$ may appear in $I + B^{(k)}G'(\alpha)$. It is clear that the terms $e^{(k)}e^{(k-1)}$ and $(e^{(k)})^2$ tend to zero more rapidly than $e^{(k)}$. Therefore, we need to assess whether $e^{(k)}$ or $(e^{(k-1)})^2$ converges faster. Assuming that the R-order of the method is at least $z$, it follows that
$$e^{(k+1)} \sim D_{k,z}(e^{(k)})^z,$$
where $D_{k,z}$ approaches $D_z$, the asymptotic error constant, as $k \to \infty$. Consequently, we obtain
$$\frac{e^{(k)}}{(e^{(k-1)})^2} \sim \frac{D_{k-1,z}(e^{(k-1)})^z}{(e^{(k-1)})^2}.$$
Thus, if $z > 2$, the quotient $D_{k-1,z}(e^{(k-1)})^{z-2}$ converges to 0 as $k \to \infty$. Therefore, when $z > 2$, we have $I + B^{(k)}G'(\alpha) \sim (e^{(k-1)})^2$. From the error Equation (5) and this relation, we derive
$$e^{(k+1)} \sim (D_{k-1,r})^7 (e^{(k-1)})^{7r+4}. \tag{32}$$
By equating the exponents of $e^{(k-1)}$ in Equations (22) and (32), we attain
$$r^2 = 7r + 4.$$
The only positive solution of this equation, $r = \frac{7+\sqrt{65}}{2} \approx 7.5311$, yields the convergence order of technique (13). □
The two preceding methods, both of which incorporate memory, were developed using the iterate $r^{(k-1)}$. Next, we investigate the consequences of using the approximations $p^{(k-1)}$ and $q^{(k-1)}$ instead; specifically, we examine the methods presented in Equations (14)–(17).
As part of our ongoing analysis, we aim to determine the convergence order of the memory-based schemes (14)–(17), following a similar approach to the one previously outlined.
Theorem 2.
Let $G : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be a differentiable function defined on a neighborhood $D$ of the root $\alpha$ of $G$. Suppose that $G'(\alpha)$ is continuous and that the inverse $[G'(\alpha)]^{-1}$ exists. Given an initial approximation $r^{(0)}$ sufficiently close to $\alpha$, the following convergence properties hold for the sequences $\{r^{(k)}\}$ generated by the respective methods:
  • The sequence { r ( k ) } , produced using the method in (14), converges to α with a convergence rate of p = 7.6056 .
  • The sequence { r ( k ) } , generated by the procedure described in (15), converges to α with a convergence rate of p = 8.1231 .
  • The sequence { r ( k ) } , formed through the technique outlined in (16), converges to α with a convergence order of p = 8.2749 .
  • The sequence { r ( k ) } , produced by the approach specified in (17), converges to α with a convergence order of p = 9.2169 .
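All six stated orders are positive roots of quadratic indicial equations obtained as in the proof of Theorem 1. The first two quadratics ($r^2 = 7r + 2$ and $r^2 = 7r + 4$) appear in that proof; the remaining four are inferred here from the closed forms $4+\sqrt{13}$, $4+\sqrt{17}$, $(9+\sqrt{57})/2$, and $(9+\sqrt{89})/2$, and this check is a plausibility sketch on our part, not a substitute for the proofs:

```python
import math

# (a, b, c) for the indicial equation a*r^2 + b*r + c = 0, with the
# expected positive root; the first two equations are from the text,
# the remaining four are inferred from the stated orders.
cases = [
    (1, -7, -2, 7.2749),   # scheme (12): (7 + sqrt(57)) / 2
    (1, -7, -4, 7.5311),   # scheme (13): (7 + sqrt(65)) / 2
    (1, -8,  3, 7.6056),   # scheme (14): 4 + sqrt(13)
    (1, -8, -1, 8.1231),   # scheme (15): 4 + sqrt(17)
    (1, -9,  6, 8.2749),   # scheme (16): (9 + sqrt(57)) / 2
    (1, -9, -2, 9.2169),   # scheme (17): (9 + sqrt(89)) / 2
]
for a, b, c, expected in cases:
    root = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # larger root
    print(f"r = {root:.4f} (expected {expected})")
    assert abs(root - expected) < 5e-4
```

Solving each quadratic by the standard formula reproduces the six orders to four decimal places.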

3. Computational Efficiency

In order to gauge the effectiveness of the suggested methods, we employ the efficiency index
$$E = p^{1/C},$$
so that $\log E = \log p / C$, where $p$ denotes the convergence order, $C$ the computational cost per iteration, and $d$ the number of significant decimal digits in the approximation $r^{(k)}$ (which determines how many iterations are required for a prescribed accuracy). According to the results in [16], the computational cost per iteration for a system of $m$ nonlinear equations in $m$ unknowns can be expressed as
$$C(\mu, m, l) = A(m)\mu + P(m, l).$$
In this context, $A(m)$ denotes the number of scalar function evaluations required for computing $G$ and $[x, y; G]$, whereas $P(m, l)$ denotes the total number of products required per iteration. According to [17], the divided difference $[x, y; G]$ of $G$ is represented as an $m \times m$ matrix with entries
$$[x, y; G]_{ij} = \frac{g_i(x_1, \ldots, x_j, y_{j+1}, \ldots, y_m) - g_i(x_1, \ldots, x_{j-1}, y_j, \ldots, y_m)}{x_j - y_j}, \qquad 1 \le i, j \le m. \tag{34}$$
It should be noted that a more sophisticated form of the divided difference was proposed by Grau-Sánchez et al. [18]. Nonetheless, the formulation (34) continues to be the most commonly utilized in real-world applications, including our study, where we follow the same methodology.
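A direct implementation of (34) also provides a useful sanity check: the resulting matrix satisfies the secant relation $[x, y; G](x - y) = G(x) - G(y)$ exactly, by a telescoping sum over the columns. The test function below is our own choice:

```python
import numpy as np

def divided_difference(F, x, y):
    """m x m first-order divided-difference matrix from formula (34)."""
    m = len(x)
    M = np.empty((m, m))
    for j in range(m):
        z1 = np.concatenate([x[:j + 1], y[j + 1:]])  # (x_1,...,x_j, y_{j+1},...,y_m)
        z0 = np.concatenate([x[:j], y[j:]])          # (x_1,...,x_{j-1}, y_j,...,y_m)
        M[:, j] = (F(z1) - F(z0)) / (x[j] - y[j])
    return M

G = lambda v: np.array([v[0]**2 + np.sin(v[1]), v[0] * v[1] - 1.0])
x = np.array([1.3, 0.4])
y = np.array([0.9, 0.7])
M = divided_difference(G, x, y)
# Secant relation: M @ (x - y) == G(x) - G(y) (telescoping over columns)
print(np.allclose(M @ (x - y), G(x) - G(y)))  # True
```

Building this matrix costs one extra scalar evaluation per entry beyond $G(x)$ and $G(y)$, which is exactly the $m(m-1)$ count used in the tally below.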
We use the ratio $\mu > 0$ between products and scalar function evaluations, as well as the ratio $l \ge 1$ between products and quotients, in order to express the value of $C(\mu, m, l)$ in terms of products. We consider the following factors when calculating the computational cost per iteration:
  • The evaluation of $G$ requires computing $m$ scalar functions $g_1, g_2, \ldots, g_m$.
  • Calculating the divided difference $[x, y; G]$ involves evaluating $G(x)$ and $G(y)$ independently and requires $m(m-1)$ additional scalar function evaluations.
  • Each divided-difference calculation includes $m^2$ quotients, $m$ products for vector–scalar multiplication, and $m^2$ products for matrix–scalar multiplication.
  • Computing the inverse of a linear operator involves solving a linear system, which entails $\frac{m(m-1)(2m-1)}{6}$ products and $\frac{m(m-1)}{2}$ quotients during the LU decomposition, as well as $m(m-1)$ products and $m$ quotients for solving the two triangular systems.
Our investigation systematically compares the efficiency of the six proposed methods $M^1_{7.2749}$ (12), $M^1_{7.5311}$ (13), $M^1_{7.6056}$ (14), $M^1_{8.1231}$ (15), $M^1_{8.2749}$ (16), and $M^1_{9.2169}$ (17) with existing with-memory methods of a similar nature, denoted $M^1_{7.53113}$, $M^1_{8}$, $M^1_{8.12310}$, $M^1_{9.21699}$, and $M^2_{9.21699}$ in [2]. All five comparison methods share the three-step structure
$$\begin{aligned} p^{(k)} &= r^{(k)} - [t^{(k)}, r^{(k)}; G]^{-1} G(r^{(k)}),\\ q^{(k)} &= p^{(k)} - H_1(\mu^{(k)})[p^{(k)}, r^{(k)}; G]^{-1} G(p^{(k)}),\\ r^{(k+1)} &= q^{(k)} - H_2(\mu^{(k)}, v^{(k)})[q^{(k)}, p^{(k)}; G]^{-1} G(q^{(k)}), \end{aligned}$$
with
$$\mu^{(k)} = I - [t^{(k)}, r^{(k)}; G]^{-1}[p^{(k)}, t^{(k)}; G], \qquad v^{(k)} = I - [t^{(k)}, r^{(k)}; G]^{-1}[q^{(k)}, p^{(k)}; G]H_1(\mu^{(k)}),$$
$$H_1(\mu) = \mu^2 + \mu + I, \qquad H_2(\mu, v) = I + \mu v + \tfrac{13}{6}\mu v^2,$$
and differ only in the first-step approximation:
  • $M^1_{7.53113}$: $t^{(k)} = r^{(k)} - [r^{(k)}, r^{(k-1)}; G]^{-1} G(r^{(k)})$;
  • $M^1_{8}$: $t^{(k)} = r^{(k)} - [2r^{(k)} - r^{(k-1)}, r^{(k-1)}; G]^{-1} G(r^{(k)})$;
  • $M^1_{8.12310}$: $t^{(k)} = r^{(k)} - [r^{(k)}, p^{(k-1)}; G]^{-1} G(r^{(k)})$;
  • $M^1_{9.21699}$: $t^{(k)} = r^{(k)} - [2r^{(k)} - p^{(k-1)}, p^{(k-1)}; G]^{-1} G(r^{(k)})$;
  • $M^2_{9.21699}$: $t^{(k)} = r^{(k)} - [r^{(k)}, q^{(k-1)}; G]^{-1} G(r^{(k)})$.
Given the evaluation counts above, we denote the efficiency index of $M^i_a$ by $E^i_a$ and its computational cost by $C^i_a$. We then have:
$$C^1_{7.2749} = \tfrac{m}{3}\big(14 + 2m^2 + 3l(5 + 6m) - 3\mu\big) + 15m(1 + \mu), \qquad E^1_{7.2749} = 7.2749^{1/C^1_{7.2749}};$$
$$C^1_{7.5311} = \tfrac{m}{2}\big(7 + 3l(3 + 5m) - 2\mu\big) + m(9 + 2m + 12\mu), \qquad E^1_{7.5311} = 7.5311^{1/C^1_{7.5311}};$$
$$C^1_{7.6056} = \tfrac{m}{2}\big(9 + 3l(3 + 5m) - 4\mu\big) + m(9 + 2m + 12\mu), \qquad E^1_{7.6056} = 7.6056^{1/C^1_{7.6056}};$$
$$C^1_{8.1231} = \tfrac{m}{2}\big(9 + 3l(3 + 5m) - 4\mu\big) + m(9 + 2m + 12\mu), \qquad E^1_{8.1231} = 8.1231^{1/C^1_{8.1231}};$$
$$C^1_{8.2749} = \tfrac{m}{2}\big(7 + 3l(3 + 5m) - 2\mu\big) + m(9 + 2m + 12\mu), \qquad E^1_{8.2749} = 8.2749^{1/C^1_{8.2749}};$$
$$C^1_{9.2169} = \tfrac{m}{2}\big(7 + 3l(3 + 5m) - 2\mu\big) + m(9 + 2m + 12\mu), \qquad E^1_{9.2169} = 9.2169^{1/C^1_{9.2169}};$$
$$C^1_{7.53113} = \tfrac{m}{3}\big(16 + 4m^2 + 3l(4 + 7m) - 3\mu\big) + 15m(1 + \mu), \qquad E^1_{7.53113} = 7.53113^{1/C^1_{7.53113}};$$
$$C^1_{8} = \tfrac{m}{3}\big(13 + 4m^2 + 3l(4 + 7m)\big) + 15m(1 + \mu), \qquad E^1_{8} = 8^{1/C^1_{8}};$$
$$C^1_{8.12310} = \tfrac{m}{3}\big(16 + 4m^2 + 3l(4 + 7m) - 3\mu\big) + 15m(1 + \mu), \qquad E^1_{8.12310} = 8.12310^{1/C^1_{8.12310}};$$
$$C^1_{9.21699} = \tfrac{m}{3}\big(13 + 4m^2 + 3l(4 + 7m)\big) + 15m(1 + \mu), \qquad E^1_{9.21699} = 9.21699^{1/C^1_{9.21699}};$$
$$C^2_{9.21699} = \tfrac{m}{3}\big(16 + 4m^2 + 3l(4 + 7m) - 3\mu\big) + 15m(1 + \mu), \qquad E^2_{9.21699} = 9.21699^{1/C^2_{9.21699}}.$$

Efficiency Comparison

In order to compare the computational efficiency indices of iterative methods $M^i_a$ and $M^j_b$, we consider the ratio
$$R^{i,j}_{a,b} = \frac{\log E^i_a}{\log E^j_b} = \frac{C^j_b \log a}{C^i_a \log b}.$$
When $R^{i,j}_{a,b} > 1$, the iterative scheme $M^i_a$ outperforms $M^j_b$ in terms of efficiency. This is particularly applicable in cases where $\mu > 0$ and $l \ge 1$. For the purpose of our analysis, we focus on the specific values $\mu = 1$ and $l = 1$.
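A small helper makes these pairwise checks mechanical. The two cost expressions below follow our reading of the formulas above (the fraction and sign placement is reconstructed from the flattened source, so treat this as a sketch), with $\mu = l = 1$:

```python
import math

mu, l = 1.0, 1.0

def C_proposed_72749(m):
    # Cost of M^1_{7.2749}, as reconstructed above (an assumption on our part)
    return m / 3 * (14 + 2 * m**2 + 3 * l * (5 + 6 * m) - 3 * mu) + 15 * m * (1 + mu)

def C_existing_753113(m):
    # Cost of M^1_{7.53113} from [2], as reconstructed above
    return m / 3 * (16 + 4 * m**2 + 3 * l * (4 + 7 * m) - 3 * mu) + 15 * m * (1 + mu)

def ratio(a, C_a, b, C_b, m):
    """R = (C_b log a) / (C_a log b); R > 1 means M_a is the more efficient method."""
    return C_b(m) * math.log(a) / (C_a(m) * math.log(b))

for m in (2, 5, 10, 50):
    R = ratio(7.2749, C_proposed_72749, 7.53113, C_existing_753113, m)
    print(m, round(R, 4), R > 1)
```

Under these reconstructed costs, the ratio exceeds 1 for every tested $m \ge 2$, in line with the first comparison below; as $m$ grows, the $4m^2$ versus $2m^2$ terms dominate and the advantage widens.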
  • M 7.2749 1 versus M 7.53113 1 :
    R 7.2749 , 7.53113 1 , 1 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 7.2749 ) m 3 14 + 2 m 2 + 3 l ( 5 + 6 m ) 3 μ + 15 m ( 1 + μ ) log ( 7.53113 ) .
    Here, the inequality R 7.2749 , 7.53113 1 , 1 > 1 is valid for m 2 . This indicates that E 7.2749 1 > E 7.53113 1 for m 2 .
  • M 7.2749 1 versus M 8 1 :
    R 7.2749 , 8 1 , 1 = m 3 13 + 4 m 2 + 3 l ( 4 + 7 m ) + 15 m ( 1 + μ ) log ( 7.2749 ) m 3 14 + 2 m 2 + 3 l ( 5 + 6 m ) 3 μ + 15 m ( 1 + μ ) log ( 8 ) .
    The inequality R 7.2749 , 8 1 , 1 > 1 holds true when m 1 , indicating that E 7.2749 1 > E 8 1 for m 1 .
  • M 7.2749 1 versus M 8.12310 1 :
    R 7.2749 , 8.12310 1 , 1 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 7.2749 ) m 3 14 + 2 m 2 + 3 l ( 5 + 6 m ) 3 μ + 15 m ( 1 + μ ) log ( 8.12310 ) .
    When m 2 , the inequality R 7.2749 , 8.12310 1 , 1 > 1 is valid, which implies that E 7.2749 1 > E 8.12310 1 for m 2 .
  • M 7.2749 1 versus M 9.21699 1 :
    R 7.2749 , 9.21699 1 , 1 = m 3 13 + 4 m 2 + 3 l ( 4 + 7 m ) + 15 m ( 1 + μ ) log ( 7.2749 ) m 3 14 + 2 m 2 + 3 l ( 5 + 6 m ) 3 μ + 15 m ( 1 + μ ) log ( 9.21699 ) .
    For m 1 , the inequality R 7.2749 , 9.21699 1 , 1 > 1 holds, indicating that the efficiency index E 7.2749 1 exceeds E 9.21699 1 .
  • M 7.2749 1 versus M 9.21699 2 :
    R 7.2749 , 9.21699 1 , 2 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 7.2749 ) m 3 14 + 2 m 2 + 3 l ( 5 + 6 m ) 3 μ + 15 m ( 1 + μ ) log ( 9.21699 ) .
    In this case, we have R 7.2749 , 9.21699 1 , 2 > 1 for m 3 , which means E 7.2749 1 > E 9.21699 2 for m 3 .
  • M 7.5311 1 versus M 7.53113 1 :
    R 7.5311 , 7.53113 1 , 1 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 7.5311 ) m 2 7 + 3 l ( 3 + 5 m ) 2 μ + m ( 9 + 2 m + 12 μ ) log ( 7.53113 ) .
    In this case, we have R 7.5311 , 7.53113 1 , 1 > 1 for m 5 , which means E 7.5311 1 > E 7.53113 1 for m 5 .
  • M 7.5311 1 versus M 8 1 :
    R 7.5311 , 8 1 , 1 = m 3 13 + 4 m 2 + 3 l ( 4 + 7 m ) + 15 m ( 1 + μ ) log ( 7.5311 ) m 2 7 + 3 l ( 3 + 5 m ) 2 μ + m ( 9 + 2 m + 12 μ ) log ( 8 ) .
    The inequality R 7.5311 , 8 1 , 1 > 1 holds true when m 6 , indicating that E 7.5311 1 > E 8 1 for m 6 .
  • M 7.5311 1 versus M 8.12310 1 :
    R 7.2749 , 8.12310 1 , 1 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 7.5311 ) m 2 7 + 3 l ( 3 + 5 m ) 2 μ + m ( 9 + 2 m + 12 μ ) log ( 8.12310 ) .
    When m 7 , the inequality R 7.5311 , 8.12310 1 , 1 > 1 is valid, which implies that E 7.5311 1 > E 8.12310 1 for m 7 .
  • M 7.5311 1 versus M 9.21699 1 :
    R 7.75311 , 9.21699 1 , 1 = m 3 13 + 4 m 2 + 3 l ( 4 + 7 m ) + 15 m ( 1 + μ ) log ( 7.5311 ) m 2 7 + 3 l ( 3 + 5 m ) 2 μ + m ( 9 + 2 m + 12 μ ) log ( 9.21699 ) .
    For m 13 , the inequality R 7.5311 , 9.21699 1 , 1 > 1 holds, indicating that the efficiency index E 7.5311 1 exceeds E 9.21699 1 .
  • M 7.5311 1 versus M 9.21699 2 :
    R 7.75311 , 9.21699 1 , 2 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 7.5311 ) m 2 7 + 3 l ( 3 + 5 m ) 2 μ + m ( 9 + 2 m + 12 μ ) log ( 9.21699 ) .
    In this case, we have R 7.5311 , 9.21699 1 , 2 > 1 for m 13 , which means E 7.5311 1 > E 9.21699 2 for m 13 .
  • M 7.6056 1 versus M 7.53113 1 :
    R 7.6056 , 7.53113 1 , 1 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 7.6056 ) m 2 9 + 3 l ( 3 + 5 m ) 4 μ + m ( 9 + 2 m + 12 μ ) log ( 7.53113 ) .
    Here, the inequality R 7.6056 , 7.53113 1 , 1 > 1 is valid for m 3 , which indicates that E 7.6056 1 > E 7.53113 1 for m 3 .
  • M 7.6056 1 versus M 8 1 :
    R 7.6056 , 8 1 , 1 = m 3 13 + 4 m 2 + 3 l ( 4 + 7 m ) + 15 m ( 1 + μ ) log ( 7.6056 ) m 2 9 + 3 l ( 3 + 5 m ) 4 μ + m ( 9 + 2 m + 12 μ ) log ( 8 ) .
    The inequality R 7.6056 , 8 1 , 1 > 1 holds true when m 1 , indicating that E 7.6056 1 > E 8 1 for m 1 .
  • M 7.6056 1 versus M 8.12310 1 :
    R 7.6056 , 8.12310 1 , 1 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 7.6056 ) m 2 9 + 3 l ( 3 + 5 m ) 4 μ + m ( 9 + 2 m + 12 μ ) log ( 8.12310 ) .
    When m 6 , the inequality R 7.5311 , 8.12310 1 , 1 > 1 is valid, which implies that E 7.5311 1 > E 8.12310 1 for m 6 .
  • M 7.6056 1 versus M 9.21699 1 :
    R 7.6056 , 9.21699 1 , 1 = m 3 13 + 4 m 2 + 3 l ( 4 + 7 m ) + 15 m ( 1 + μ ) log ( 7.6056 ) m 2 9 + 3 l ( 3 + 5 m ) 4 μ + m ( 9 + 2 m + 12 μ ) log ( 9.21699 ) .
    For m 11 , the inequality R 7.6056 , 9.21699 1 , 1 > 1 holds, indicating that the efficiency index E 7.6056 1 exceeds E 9.21699 1 .
  • M 7.6056 1 versus M 9.21699 2 :
    R 7.6056 , 9.21699 1 , 2 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 7.6056 ) m 2 9 + 3 l ( 3 + 5 m ) 4 μ + m ( 9 + 2 m + 12 μ ) log ( 9.21699 ) .
    For m 12 , the inequality R 7.6056 , 9.21699 1 , 2 > 1 holds, indicating that the efficiency index E 7.6056 1 exceeds E 9.21699 2 .
  • M 8.1231 1 versus M 7.53113 1 :
    R 8.1231 , 7.53113 1 , 1 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 8.1231 ) m 2 9 + 3 l ( 3 + 5 m ) 4 μ + m ( 9 + 2 m + 12 μ ) log ( 7.53113 ) .
    The inequality R 8.1231 , 7.53113 1 , 1 > 1 is valid for m 2 . This demonstrates that E 8.1231 1 > E 7.53113 1 when m 2 .
  • M 8.1231 1 versus M 8 1 :
    R 8.1231 , 8 1 , 1 = m 3 13 + 4 m 2 + 3 l ( 4 + 7 m ) + 15 m ( 1 + μ ) log ( 8.1231 ) m 2 9 + 3 l ( 3 + 5 m ) 4 μ + m ( 9 + 2 m + 12 μ ) log ( 8 ) .
    The inequality R 8.1231 , 8 1 , 1 > 1 holds true when m 1 , indicating that E 8.1231 1 > E 8 1 for m 1 .
  • M 8.1231 1 versus M 8.12310 1 :
    R 8.1231 , 8.12310 1 , 1 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 8.1231 ) m 2 9 + 3 l ( 3 + 5 m ) 4 μ + m ( 9 + 2 m + 12 μ ) log ( 8.12310 ) .
    When m 4 , the inequality R 8.1231 , 8.12310 1 , 1 > 1 is valid, implying that E 8.1231 1 > E 8.12310 1 for m 4 .
  • M 8.1231 1 versus M 9.21699 1 :
    R 8.1231 , 9.21699 1 , 1 = m 3 13 + 4 m 2 + 3 l ( 4 + 7 m ) + 15 m ( 1 + μ ) log ( 8.1231 ) m 2 9 + 3 l ( 3 + 5 m ) 4 μ + m ( 9 + 2 m + 12 μ ) log ( 9.21699 ) .
    For m 7 , the inequality R 8.1231 , 9.21699 1 , 1 > 1 holds, indicating that the efficiency index E 8.1231 1 exceeds E 9.21699 1 .
  • M 8.1231 1 versus M 9.21699 2 :
    R 8.1231 , 9.21699 1 , 2 = m 3 16 + 4 m 2 + 3 l ( 4 + 7 m ) 3 μ + 15 m ( 1 + μ ) log ( 8.1231 ) m 2 9 + 3 l ( 3 + 5 m ) 4 μ + m ( 9 + 2 m + 12 μ ) log ( 9.21699 ) .
    For m 8 , the inequality R 8.1231 , 9.21699 1 , 2 > 1 holds, indicating that the efficiency index E 8.1231 1 exceeds E 9.21699 2 .
  • M 8.2749 1 versus M 7.53113 1 :
    $R^{1,1}_{8.2749,\,7.53113} = \dfrac{\frac{m}{3}\left(16 + 4m^{2} + 3l(4+7m) - 3\mu + 15m(1+\mu)\right)\log(8.2749)}{\frac{m}{2}\left(7 + 3l(3+5m) - 2\mu + m(9+2m+12\mu)\right)\log(7.53113)}.$
    The inequality $R^{1,1}_{8.2749,\,7.53113} > 1$ holds for $m \geq 3$. This implies that $E^{1}_{8.2749} > E^{1}_{7.53113}$ when $m \geq 3$.
  • M 8.2749 1 versus M 8 1 :
    $R^{1,1}_{8.2749,\,8} = \dfrac{\frac{m}{3}\left(13 + 4m^{2} + 3l(4+7m) + 15m(1+\mu)\right)\log(8.2749)}{\frac{m}{2}\left(7 + 3l(3+5m) - 2\mu + m(9+2m+12\mu)\right)\log(8)}.$
    The inequality $R^{1,1}_{8.2749,\,8} > 1$ holds when $m \geq 3$, indicating that $E^{1}_{8.2749} > E^{1}_{8}$ for $m \geq 3$.
  • M 8.2749 1 versus M 8.12310 1 :
    $R^{1,1}_{8.2749,\,8.12310} = \dfrac{\frac{m}{3}\left(16 + 4m^{2} + 3l(4+7m) - 3\mu + 15m(1+\mu)\right)\log(8.2749)}{\frac{m}{2}\left(7 + 3l(3+5m) - 2\mu + m(9+2m+12\mu)\right)\log(8.12310)}.$
    When $m \geq 5$, the inequality $R^{1,1}_{8.2749,\,8.12310} > 1$ is valid, which implies that $E^{1}_{8.2749} > E^{1}_{8.12310}$ for $m \geq 5$.
  • M 8.2749 1 versus M 9.21699 1 :
    $R^{1,1}_{8.2749,\,9.21699} = \dfrac{\frac{m}{3}\left(13 + 4m^{2} + 3l(4+7m) + 15m(1+\mu)\right)\log(8.2749)}{\frac{m}{2}\left(7 + 3l(3+5m) - 2\mu + m(9+2m+12\mu)\right)\log(9.21699)}.$
    For $m \geq 7$, the inequality $R^{1,1}_{8.2749,\,9.21699} > 1$ holds, indicating that the efficiency index $E^{1}_{8.2749}$ exceeds $E^{1}_{9.21699}$.
  • M 8.2749 1 versus M 9.21699 2 :
    $R^{1,2}_{8.2749,\,9.21699} = \dfrac{\frac{m}{3}\left(16 + 4m^{2} + 3l(4+7m) - 3\mu + 15m(1+\mu)\right)\log(8.2749)}{\frac{m}{2}\left(7 + 3l(3+5m) - 2\mu + m(9+2m+12\mu)\right)\log(9.21699)}.$
    For $m \geq 8$, the inequality $R^{1,2}_{8.2749,\,9.21699} > 1$ holds, indicating that the efficiency index $E^{1}_{8.2749}$ exceeds $E^{2}_{9.21699}$.
  • M 9.2169 1 versus M 7.53113 1 :
    $R^{1,1}_{9.2169,\,7.53113} = \dfrac{\frac{m}{3}\left(16 + 4m^{2} + 3l(4+7m) - 3\mu + 15m(1+\mu)\right)\log(9.2169)}{\frac{m}{2}\left(7 + 3l(3+5m) - 2\mu + m(9+2m+12\mu)\right)\log(7.53113)}.$
    In this case, the inequality $R^{1,1}_{9.2169,\,7.53113} > 1$ is valid when $m \geq 2$. This indicates that for $m \geq 2$, $E^{1}_{9.2169} > E^{1}_{7.53113}$.
  • M 9.2169 1 versus M 8 1 :
    $R^{1,1}_{9.2169,\,8} = \dfrac{\frac{m}{3}\left(13 + 4m^{2} + 3l(4+7m) + 15m(1+\mu)\right)\log(9.2169)}{\frac{m}{2}\left(7 + 3l(3+5m) - 2\mu + m(9+2m+12\mu)\right)\log(8)}.$
    The inequality $R^{1,1}_{9.2169,\,8} > 1$ holds when $m \geq 1$, indicating that $E^{1}_{9.2169} > E^{1}_{8}$ for $m \geq 1$.
  • M 9.2169 1 versus M 8.12310 1 :
    $R^{1,1}_{9.2169,\,8.12310} = \dfrac{\frac{m}{3}\left(16 + 4m^{2} + 3l(4+7m) - 3\mu + 15m(1+\mu)\right)\log(9.2169)}{\frac{m}{2}\left(7 + 3l(3+5m) - 2\mu + m(9+2m+12\mu)\right)\log(8.12310)}.$
    When $m \geq 3$, the inequality $R^{1,1}_{9.2169,\,8.12310} > 1$ is valid, which implies that $E^{1}_{9.2169} > E^{1}_{8.12310}$ for $m \geq 3$.
  • M 9.2169 1 versus M 9.21699 1 :
    $R^{1,1}_{9.2169,\,9.21699} = \dfrac{\frac{m}{3}\left(13 + 4m^{2} + 3l(4+7m) + 15m(1+\mu)\right)\log(9.2169)}{\frac{m}{2}\left(7 + 3l(3+5m) - 2\mu + m(9+2m+12\mu)\right)\log(9.21699)}.$
    For $m \geq 4$, the inequality $R^{1,1}_{9.2169,\,9.21699} > 1$ holds, indicating that the efficiency index $E^{1}_{9.2169}$ exceeds $E^{1}_{9.21699}$.
  • M 9.2169 1 versus M 9.21699 2 :
    $R^{1,2}_{9.2169,\,9.21699} = \dfrac{\frac{m}{3}\left(16 + 4m^{2} + 3l(4+7m) - 3\mu + 15m(1+\mu)\right)\log(9.2169)}{\frac{m}{2}\left(7 + 3l(3+5m) - 2\mu + m(9+2m+12\mu)\right)\log(9.21699)}.$
    For $m \geq 5$, the inequality $R^{1,2}_{9.2169,\,9.21699} > 1$ is satisfied, which implies that the efficiency index $E^{1}_{9.2169}$ is greater than $E^{2}_{9.21699}$.
The aforementioned outcomes are visually depicted in Figure 1.
Figure 1 illustrates the comparative performance of the proposed methods, namely M 7.2749 1 , M 7.5311 1 , M 7.6056 1 , M 8.1231 1 , M 8.2749 1 , and M 9.2169 1 . The results clearly demonstrate that these methods consistently achieve superior efficiency compared to the well-established existing methods, including M 7.53113 1 , M 8 1 , M 8.12310 1 , M 9.21699 1 , and M 9.21699 2 . This enhanced efficiency is evident across all scenarios considered in the analysis.
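The machinery behind these comparisons is easy to reproduce numerically. The sketch below is a minimal Python illustration of the general relations $E_p = p^{1/C}$ and $R_{p_1,p_2} = \log E_{p_1} / \log E_{p_2} = C_2 \log p_1 / (C_1 \log p_2)$; the cost functions `cost_method_a` and `cost_method_b` are hypothetical placeholders standing in for the methods' actual operation counts, not the exact polynomials used in this paper.

```python
import math

def efficiency_index(p: float, cost: float) -> float:
    """Efficiency index E = p**(1/C) for convergence order p and cost C."""
    return p ** (1.0 / cost)

def efficiency_ratio(p1: float, c1: float, p2: float, c2: float) -> float:
    """R = log(E1)/log(E2) = (C2 * log p1) / (C1 * log p2); R > 1 iff E1 > E2."""
    return (c2 * math.log(p1)) / (c1 * math.log(p2))

# Hypothetical cost models in the system size m (illustrative only):
# one LU-type factorization plus some matrix-vector products per iteration.
def cost_method_a(m: int) -> float:
    return m**3 / 3 + 2 * m**2

def cost_method_b(m: int) -> float:
    return m**3 / 2 + 3 * m**2

for m in (5, 20, 50):
    r = efficiency_ratio(8.1231, cost_method_a(m), 7.53113, cost_method_b(m))
    print(m, r > 1)
```

Since $R > 1$ exactly when $E_{p_1} > E_{p_2}$, scanning over $m$ in this way reproduces the kind of thresholds (e.g., $m \geq 2$) reported in the comparisons above.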

4. Numerical Results and Discussion

In this section, we present several numerical problems that demonstrate the convergence characteristics of the suggested approaches. Their performance is compared with that of the existing methods $M^{1}_{7.53113}$, $M^{1}_{8}$, $M^{1}_{8.12310}$, $M^{1}_{9.21699}$, and $M^{2}_{9.21699}$, as detailed in reference [2]. All computations were performed in Mathematica 8.0 [19] with multiple-precision arithmetic set to 2048 digits to ensure a high level of accuracy. The stopping criterion for the experiments is the following:
$\left\| r^{(k+1)} - r^{(k)} \right\| \leq T,$
where $T$ is the tolerance specific to each method. For each example, the error tolerance is set to $T = 10^{-50}$.

Evaluation Metrics

Table 1, Table 2, Table 3 and Table 4 summarize the numerical results of the compared methods across test examples, using the following metrics:
  • The computed approximation $r^{(k+1)}$.
  • The residual norm $\| G(r^{(k+1)}) \|$.
  • The distance between consecutive iterates, $\| r^{(k+1)} - r^{(k)} \|$.
  • The iteration count required to satisfy the stopping criterion.
  • The approximated computational order of convergence (ACOC):
    $\mathrm{ACOC} \approx \dfrac{\log\left( \| r^{(k+1)} - r^{(k)} \| \,/\, \| r^{(k)} - r^{(k-1)} \| \right)}{\log\left( \| r^{(k)} - r^{(k-1)} \| \,/\, \| r^{(k-1)} - r^{(k-2)} \| \right)}.$
  • The total CPU time taken for the entire computation process.
These metrics provide a comprehensive comparison of the efficiency, accuracy, and convergence speed of the proposed techniques relative to existing schemes.
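The ACOC is computed directly from the norms of consecutive iterate differences. The minimal Python sketch below illustrates this, using scalar Newton iterates for $x^2 = 2$ purely as illustrative data (not one of the compared schemes):

```python
import math

def acoc(iterates):
    """Approximated computational order of convergence from the last four iterates."""
    x0, x1, x2, x3 = iterates[-4:]
    d1 = abs(x1 - x0)   # ||r(k-1) - r(k-2)||
    d2 = abs(x2 - x1)   # ||r(k)   - r(k-1)||
    d3 = abs(x3 - x2)   # ||r(k+1) - r(k)||
    return math.log(d3 / d2) / math.log(d2 / d1)

# Sample data: Newton's method for f(x) = x^2 - 2, which converges quadratically.
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

print(round(acoc(xs), 3))  # close to the theoretical order 2
```

For a vector iteration, the absolute values are simply replaced by the norms appearing in the definition above.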
Example 1.
We consider the boundary value problem specified below (see [9]):
$y'' = \frac{1}{2} y^{3} + 3 y' - \frac{3}{2} x + \frac{1}{2}; \qquad y(0) = 0, \quad y(1) = 1.$
Consider dividing the interval [ 0 , 1 ] as
$x_0 = 0 < x_1 < x_2 < \cdots < x_{n-1} < x_n = 1, \qquad x_{i+1} = x_i + h, \quad h = \frac{1}{n}.$
Define the discrete variables corresponding to the function values at the partition points:
$y_0 = y(x_0), \quad y_1 = y(x_1), \quad \ldots, \quad y_{n-1} = y(x_{n-1}), \quad y_n = y(x_n).$
Using numerical approximations for the first and second derivatives, we obtain the following:
$y'_k \approx \frac{y_{k+1} - y_{k-1}}{2h}, \qquad y''_k \approx \frac{y_{k-1} - 2y_k + y_{k+1}}{h^{2}}, \qquad k = 1, 2, \ldots, n-1.$
By substituting these approximations into the governing equation, we obtain a set of ( n 1 ) nonlinear equations for ( n 1 ) variables:
$y_{k+1} - 2y_k + y_{k-1} - \frac{h^{2}}{2} y_k^{3} - \frac{3h}{2}\left( y_{k+1} - y_{k-1} \right) + \frac{3}{2} x_k h^{2} - \frac{1}{2} h^{2} = 0, \qquad k = 1, 2, \ldots, n-1.$
  • Specific Case:  n = 5
For n = 5 , the step size and partition points are as follows:
$h = \frac{1}{5}, \qquad x_k = kh, \quad k = 0, 1, 2, 3, 4, 5.$
The initial values are set as follows:
$y^{(0)} = (1, 1, 1, 1)^{T}, \qquad t^{(0)} = 0.01\,I,$
where $I$ denotes the identity matrix.
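As an illustrative check of the discretization (not an implementation of the proposed memory schemes), the sketch below assembles the four-dimensional residual $G(y)$ for $n = 5$ and drives it to zero with a plain Newton iteration using a forward-difference Jacobian and Gaussian elimination; the helper names and the finite-difference step size are our own choices.

```python
N = 5
H = 1.0 / N

def residual(y):
    """G(y) for the finite-difference discretization of the BVP, n = 5."""
    full = [0.0] + list(y) + [1.0]          # boundary values y(0)=0, y(1)=1
    g = []
    for k in range(1, N):
        xk = k * H
        g.append(full[k + 1] - 2.0 * full[k] + full[k - 1]
                 - 0.5 * H * H * full[k] ** 3
                 - 1.5 * H * (full[k + 1] - full[k - 1])
                 + 1.5 * xk * H * H - 0.5 * H * H)
    return g

def solve_linear(a, b):
    """Gaussian elimination with partial pivoting; a and b are modified in place."""
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(a[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / a[r][r]
    return x

def newton(y, tol=1e-12, max_iter=50):
    """Newton iteration with a forward-difference Jacobian (step 1e-7)."""
    for _ in range(max_iter):
        g = residual(y)
        if max(abs(v) for v in g) < tol:
            break
        eps = 1e-7
        cols = []
        for i in range(len(y)):              # Jacobian, one column at a time
            yp = list(y)
            yp[i] += eps
            gp = residual(yp)
            cols.append([(gp[r] - g[r]) / eps for r in range(len(g))])
        a = [[cols[c][r] for c in range(len(y))] for r in range(len(g))]
        step = solve_linear(a, [-v for v in g])
        y = [y[i] + step[i] for i in range(len(y))]
    return y

y = newton([1.0, 1.0, 1.0, 1.0])
print([round(v, 6) for v in y], max(abs(v) for v in residual(y)))
```

Because the cubic term carries the small factor $h^2/2$, the system is only mildly nonlinear and the iteration converges in a handful of steps from the vector of ones.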
Example 2.
Now, consider the system of six equations as described in ref. [20]:
$\sum_{j=1,\, j \neq i}^{6} x_j - e^{-x_i} = 0, \qquad 1 \leq i \leq 6,$
with the initial guess $r^{(0)} = (0.25, 0.25, 0.25, 0.25, 0.25, 0.25)^{T}$ and $t^{(k)}$ initialized as $t^{(0)} = 0.01\,I$, where $I$ denotes the identity matrix. The error tolerance is set to $10^{-50}$.
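Because every equation of this system has the same form, a solution with all components equal must satisfy the scalar equation $5x - e^{-x} = 0$ (assuming the reconstructed signs above). The sketch below locates that value with Steffensen's classical derivative-free iteration [1], the scalar ancestor of the vector schemes studied here, rather than with the paper's methods:

```python
import math

def f(x):
    # Scalar reduction of the symmetric six-equation system:
    # setting all x_i = x gives 5x - exp(-x) = 0.
    return 5.0 * x - math.exp(-x)

def steffensen(x, tol=1e-12, max_iter=50):
    """Steffensen's derivative-free iteration: quadratic convergence without f'."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx        # divided-difference approximation of f'(x)
        x = x - fx * fx / denom
    return x

root = steffensen(0.25)
print(round(root, 8))
```

Each component of the corresponding constant vector then satisfies the original system, which makes this a convenient consistency check on the reconstructed equations.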
Example 3.
We examine the following system of fifty equations (taken from [2]):
$x_i^{2}\, x_{i+1} - 1 = 0, \quad 1 \leq i \leq 49, \qquad x_{50}^{2}\, x_1 - 1 = 0.$
The tolerance for this example is $10^{-50}$, the starting guess is $r^{(0)} = (0.9, 0.9, \ldots, 0.9)^{T}$, and the initial estimates for the vectors $r^{(1)}$, $p^{(1)}$, and $q^{(1)}$ are $(0.7, 0.7, \ldots, 0.7)^{T}$.
Example 4.
We examine the following system of three equations (adapted from [6]):
$10x + \sin(x + y) - 1 = 0, \qquad 8y - \cos^{2}(z - y) - 1 = 0, \qquad 12z + \sin z - 1 = 0.$
The tolerance for this example is $10^{-50}$, and the starting guess is $r^{(0)} = (0.5, 0.1, 0.1)^{T}$. The parameter $t^{(k)}$ is initialized as $t^{(0)} = 0.01\,I$, where $I$ denotes the identity matrix.
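In this system the diagonal terms $10x$, $8y$, and $12z$ dominate the unit-Lipschitz trigonometric terms, so even a naive fixed-point iteration contracts. The sketch below uses that observation as a sanity check on the reconstructed equations; it is an illustration, not one of the compared schemes.

```python
import math

def residual(x, y, z):
    """Residuals of the three reconstructed equations."""
    return (10.0 * x + math.sin(x + y) - 1.0,
            8.0 * y - math.cos(z - y) ** 2 - 1.0,
            12.0 * z + math.sin(z) - 1.0)

# Fixed-point form: isolate the dominant linear term of each equation.
# The update maps have derivatives bounded by ~1/8, so the sweep contracts.
x, y, z = 0.5, 0.1, 0.1
for _ in range(100):
    x = (1.0 - math.sin(x + y)) / 10.0
    y = (math.cos(z - y) ** 2 + 1.0) / 8.0
    z = (1.0 - math.sin(z)) / 12.0

print(tuple(round(v, 8) for v in residual(x, y, z)))
```

The residuals shrink to roundoff level within a few dozen sweeps, confirming that the system as stated has a well-isolated solution near the chosen starting guess.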
Figure 2 provides a graphical representation of the errors at each successive iteration for the different iterative processes employed to solve Examples 1–4. The figure highlights the superior performance of our proposed methods (15)–(17) compared to other well-known methods. Specifically, it shows that our methods converge more rapidly and achieve significantly higher computational accuracy across all examples.
Furthermore, Figure 3 depicts the function values at each iteration for Examples 1–4. These results reaffirm the effectiveness of our approach in achieving consistent and accurate solutions while maintaining stability in the iterative process. The function values obtained using our method exhibit a faster decrease to zero (or the desired tolerance), which is a direct indicator of superior convergence characteristics.
In addition to accuracy, computational efficiency is a critical aspect when comparing iterative methods. For the four examples considered, our method demonstrates a notable reduction in CPU time compared to other existing methods. This computational advantage arises from the robust design of our iterative scheme, which effectively reduces the number of iterations required to achieve the desired tolerance. Consequently, our approach is not only more accurate but also computationally efficient, making it a preferable choice for solving nonlinear systems.
These results collectively highlight the advantages of our proposed method in terms of both computational accuracy and efficiency, particularly when compared to other well-established methods from the literature.

5. Conclusions

We have developed a new family of three-step derivative-free iterative algorithms with memory for solving systems of nonlinear equations. Using the first-order divided difference operator for multivariable functions and applying Taylor expansion, we determined the convergence orders of these techniques to be 7.2749, 7.5311, 7.6056, 8.1231, 8.2749, and 9.2169. To enhance the convergence speed, we incorporated a self-accelerating parameter that evolves as the iterations progress and is implemented through a self-correcting matrix, which allows faster convergence when solving nonlinear systems. Furthermore, we evaluated the computational efficiency of our schemes against existing approaches, with numerical experiments demonstrating that our methods achieve significantly better performance. These findings validate the effectiveness and reliability of the proposed approach. In future research, this technique will be extended to similar methods [4,5,6,8,20].

Author Contributions

Conceptualization, N.K. and J.P.J.; methodology, N.K. and J.P.J.; software, N.K. and J.P.J.; validation, N.K., J.P.J. and I.K.A.; formal analysis, N.K., J.P.J. and I.K.A.; resources, N.K.; writing—original draft preparation, N.K. and J.P.J.; writing—review and editing, N.K., J.P.J. and I.K.A.; supervision, J.P.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding authors.

Acknowledgments

The authors express their gratitude to the reviewers for their insightful comments and suggestions, which improved the quality of this work. The first two authors are thankful to the Department of Science and Technology, New Delhi, India, for sanctioning the proposal under the FIST program (Ref. No. SR/FST/MS/2022, dated 19 December 2022).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Steffensen, J.F. Remarks on iteration. Scand. Actuar. J. 1933, 16, 64–72. [Google Scholar] [CrossRef]
  2. Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero, P.N. Design of iterative methods with memory for solving nonlinear systems. Math. Methods Appl. Sci. 2023, 46, 12361–12377. [Google Scholar] [CrossRef]
  3. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the improvement of the order of convergence of iterative methods for solving nonlinear systems by means of memory. Appl. Math. Lett. 2020, 104, 106277. [Google Scholar] [CrossRef]
  4. Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero, P.N. Improving the order of a fifth-order family of vectorial fixed point schemes by introducing memory. Fixed Point Theory J. 2023, 24, 155–172. [Google Scholar] [CrossRef]
  5. Behel, R.; Cordero, A.; Torregrosa, J.R.; Bhalla, S. A new high-order jacobian-free iterative method with memory for solving nonlinear system. Mathematics 2021, 9, 2122. [Google Scholar] [CrossRef]
  6. Petkovic, M.S.; Sharma, J.R. On some efficient derivative-free iterative methods with memory for solving systems of nonlinear equations. Numer. Algorithms 2016, 71, 457–474. [Google Scholar] [CrossRef]
  7. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press—Taylor and Francis Corp.: Boca Raton, FL, USA, 2022. [Google Scholar]
  8. Sharma, J.R.; Arora, H. A novel derivative free algorithm with seventh order convergence for solving systems of nonlinear equations. Numer. Algorithms 2014, 67, 917–933. [Google Scholar] [CrossRef]
  9. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
  10. Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Akad. Nauk. SSSR 1971, 198, 524–526. [Google Scholar]
  11. Shakhno, S.M. On a Kurchatov’s method of linear interpolation for solving nonlinear equations. PAAM-Proc. Appl. Math. Mech. 2004, 4, 650–651. [Google Scholar] [CrossRef]
  12. Shakhno, S.M. On the difference method with quadratic convergence for solving nonlinear operator equations. Mat. Stud. 2006, 26, 105–110. [Google Scholar]
  13. Ezquerro, J.A.; Grau, A.; Grau-Sanchez, M.; Hernandez, M. On the efficiency of two variants of Kurchatov’s method for solving nonlinear systems. Numer. Algorithms 2013, 64, 685–698. [Google Scholar] [CrossRef]
  14. Argyros, I.K. Advances in the Efficiency of Computational Methods and Applications; World Scientific Publishing Company: Singapore, 2000. [Google Scholar]
  15. Grau-Sánchez, M.; Grau, Á.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
  16. Grau-Sánchez, M.; Noguera, M. A technique to choose the most efficient method between secant method and some variants. Appl. Math. Comput. 2012, 218, 6415–6426. [Google Scholar] [CrossRef]
  17. Grau-Sánchez, M.; Grau, Á.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743. [Google Scholar] [CrossRef]
  18. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [Google Scholar] [CrossRef]
  19. Wolfram, S. The Mathematica Book; Wolfram Research, Inc.: Champaign, IL, USA, 2003. [Google Scholar]
  20. Sharma, J.R.; Arora, H. An efficient derivative free iterative method for solving systems of nonlinear equations. Appl. Anal. Discret. Math. 2013, 141, 390–403. [Google Scholar] [CrossRef]
Figure 1. Plots for efficiency index values for the following: (a) M 7.2749 1 versus M 7.53113 1 , M 8 1 , M 8.12310 1 , M 9.21699 1 , M 9.21699 2 ; (b) M 7.5311 1 versus M 7.53113 1 , M 8 1 , M 8.12310 1 , M 9.21699 1 , M 9.21699 2 ; (c) M 7.6056 1 versus M 7.53113 1 , M 8 1 , M 8.12310 1 , M 9.21699 1 , M 9.21699 2 ; (d) M 8.1231 1 versus M 7.53113 1 , M 8 1 , M 8.12310 1 , M 9.21699 1 , M 9.21699 2 ; (e) M 8.2749 1 versus M 7.53113 1 , M 8 1 , M 8.12310 1 , M 9.21699 1 , M 9.21699 2 ; (f) M 9.2169 1 versus M 7.53113 1 , M 8 1 , M 8.12310 1 , M 9.21699 1 , M 9.21699 2 .
Figure 2. Graphical comparison of errors in consecutive iteration for Examples 1–4.
Figure 3. The graphical comparison illustrates the values of the functions at each iteration for Examples 1–4.
Table 1. Numerical results for Example 1.
| Method | $\|r^{(k+1)} - r^{(k)}\|$ | $\|G(r^{(k+1)})\|$ | Iterations | ACOC | CPU time (s) |
|---|---|---|---|---|---|
| $M^{1}_{7.53113}$ | $1.69 \times 10^{-318}$ | $3.49 \times 10^{-2065}$ | 4 | 6.4713 | 0.12497 |
| $M^{1}_{8}$ | $2.36 \times 10^{-343}$ | $5.41 \times 10^{-2362}$ | 4 | 6.9013 | 0.12497 |
| $M^{1}_{8.12310}$ | $1.35 \times 10^{-52}$ | $1.57 \times 10^{-373}$ | 3 | 7.4977 | 0.10935 |
| $M^{1}_{9.21699}$ | $2.05 \times 10^{-56}$ | $2.85 \times 10^{-454}$ | 3 | 8.1214 | 0.09372 |
| $M^{2}_{9.21699}$ | $2.70 \times 10^{-58}$ | $3.91 \times 10^{-469}$ | 3 | 8.4288 | 0.09376 |
| $M^{1}_{7.2749}$ | $6.46 \times 10^{-56}$ | $2.44 \times 10^{-411}$ | 3 | 7.2829 | 0.07820 |
| $M^{1}_{7.5311}$ | $1.51 \times 10^{-57}$ | $3.02 \times 10^{-436}$ | 3 | 7.5839 | 0.09369 |
| $M^{1}_{7.6056}$ | $2.52 \times 10^{-59}$ | $1.84 \times 10^{-456}$ | 3 | 7.6127 | 0.09372 |
| $M^{1}_{8.1231}$ | $2.07 \times 10^{-64}$ | $2.87 \times 10^{-529}$ | 3 | 8.1148 | 0.09372 |
| $M^{1}_{8.2749}$ | $6.18 \times 10^{-62}$ | $7.70 \times 10^{-515}$ | 3 | 8.2785 | 0.09371 |
| $M^{1}_{9.2169}$ | $6.32 \times 10^{-70}$ | $6.68 \times 10^{-635}$ | 3 | 9.0000 | 0.07810 |
Table 2. Numerical results for Example 2.
| Method | $\|r^{(k+1)} - r^{(k)}\|$ | $\|G(r^{(k+1)})\|$ | Iterations | ACOC | CPU time (s) |
|---|---|---|---|---|---|
| $M^{1}_{7.53113}$ | $1.02 \times 10^{-109}$ | $1.08 \times 10^{-829}$ | 3 | 7.7019 | 1.62081 |
| $M^{1}_{8}$ | $6.57 \times 10^{-115}$ | $4.47 \times 10^{-922}$ | 3 | 8.1189 | 1.44667 |
| $M^{1}_{8.12310}$ | $7.19 \times 10^{-119}$ | $1.71 \times 10^{-970}$ | 3 | 8.4372 | 1.70871 |
| $M^{1}_{9.21699}$ | $6.44 \times 10^{-133}$ | $2.79 \times 10^{-1217}$ | 3 | 9.5661 | 1.55475 |
| $M^{2}_{9.21699}$ | $2.31 \times 10^{-133}$ | $2.95 \times 10^{-1222}$ | 3 | 9.6019 | 1.58814 |
| $M^{1}_{7.2749}$ | $1.40 \times 10^{-114}$ | $1.42 \times 10^{-837}$ | 3 | 7.2153 | 1.35980 |
| $M^{1}_{7.5311}$ | $4.64 \times 10^{-115}$ | $1.30 \times 10^{-871}$ | 3 | 7.2502 | 1.23408 |
| $M^{1}_{7.6056}$ | $1.08 \times 10^{-119}$ | $2.20 \times 10^{-914}$ | 3 | 7.5866 | 1.32781 |
| $M^{1}_{8.1231}$ | $4.60 \times 10^{-130}$ | $2.27 \times 10^{-1061}$ | 3 | 8.3395 | 1.32781 |
| $M^{1}_{8.2749}$ | $2.21 \times 10^{-127}$ | $2.72 \times 10^{-1055}$ | 3 | 8.1448 | 1.32781 |
| $M^{1}_{9.2169}$ | $1.10 \times 10^{-142}$ | $2.86 \times 10^{-1289}$ | 3 | 9.2559 | 1.32781 |
Table 3. Numerical results for Example 3.
| Method | $\|r^{(k+1)} - r^{(k)}\|$ | $\|G(r^{(k+1)})\|$ | Iterations | ACOC | CPU time (s) |
|---|---|---|---|---|---|
| $M^{1}_{7.53113}$ | $9.22 \times 10^{-55}$ | $2.15 \times 10^{-298}$ | 3 | 5.9409 | 96.67874 |
| $M^{1}_{8}$ | $6.68 \times 10^{-76}$ | $5.04 \times 10^{-609}$ | 3 | 7.8965 | 95.12069 |
| $M^{1}_{8.12310}$ | $2.37 \times 10^{-58}$ | $8.82 \times 10^{-350}$ | 3 | 6.4033 | 105.39603 |
| $M^{1}_{9.21699}$ | $5.09 \times 10^{-88}$ | $1.55 \times 10^{-806}$ | 3 | 9.3335 | 102.83968 |
| $M^{2}_{9.21699}$ | $1.26 \times 10^{-64}$ | $1.03 \times 10^{-452}$ | 3 | 7.2114 | 83.71736 |
| $M^{1}_{7.5311}$ | $3.00 \times 10^{-61}$ | $3.02 \times 10^{-332}$ | 3 | 6.2215 | 80.48735 |
| $M^{1}_{7.6056}$ | $3.22 \times 10^{-66}$ | $6.50 \times 10^{-401}$ | 3 | 5.7464 | 96.65305 |
| $M^{1}_{8.1231}$ | $4.37 \times 10^{-81}$ | $2.24 \times 10^{-551}$ | 3 | 7.2814 | 87.39948 |
| $M^{1}_{8.2749}$ | $1.79 \times 10^{-78}$ | $2.19 \times 10^{-651}$ | 3 | 8.2819 | 93.83893 |
| $M^{1}_{9.2169}$ | $2.43 \times 10^{-84}$ | $9.32 \times 10^{-760}$ | 3 | 8.9837 | 93.94005 |
Table 4. Numerical results for Example 4.
| Method | $\|r^{(k+1)} - r^{(k)}\|$ | $\|G(r^{(k+1)})\|$ | Iterations | ACOC | CPU time (s) |
|---|---|---|---|---|---|
| $M^{1}_{7.53113}$ | $1.47 \times 10^{-61}$ | $3.24 \times 10^{-359}$ | 3 | 5.3853 | 0.07810 |
| $M^{1}_{8}$ | $1.31 \times 10^{-61}$ | $1.08 \times 10^{-380}$ | 3 | 5.3905 | 0.07810 |
| $M^{1}_{8.12310}$ | $3.34 \times 10^{-66}$ | $8.01 \times 10^{-453}$ | 3 | 5.8755 | 0.07806 |
| $M^{1}_{9.21699}$ | $5.31 \times 10^{-73}$ | $2.40 \times 10^{-563}$ | 3 | 6.5931 | 0.10935 |
| $M^{2}_{9.21699}$ | $1.09 \times 10^{-73}$ | $2.17 \times 10^{-573}$ | 3 | 6.6658 | 0.10934 |
| $M^{1}_{7.2749}$ | $5.86 \times 10^{-58}$ | $1.96 \times 10^{-377}$ | 3 | 5.9520 | 0.07807 |
| $M^{1}_{7.5311}$ | $3.46 \times 10^{-61}$ | $1.29 \times 10^{-399}$ | 3 | 6.3465 | 0.07814 |
| $M^{1}_{7.6056}$ | $1.14 \times 10^{-64}$ | $5.04 \times 10^{-495}$ | 3 | 6.7772 | 0.07814 |
| $M^{1}_{8.1231}$ | $9.41 \times 10^{-72}$ | $1.56 \times 10^{-502}$ | 3 | 7.6376 | 0.07807 |
| $M^{1}_{8.2749}$ | $1.81 \times 10^{-71}$ | $6.71 \times 10^{-491}$ | 3 | 7.6028 | 0.06245 |
| $M^{1}_{9.2169}$ | $3.47 \times 10^{-80}$ | $3.56 \times 10^{-660}$ | 3 | 8.6682 | 0.10934 |

Share and Cite

MDPI and ACS Style

Kumar, N.; Jaiswal, J.P.; Argyros, I.K. Novel Techniques with Memory Extension of Three-Step Derivative-Free Iterative Scheme for Nonlinear Systems. Computation 2025, 13, 55. https://doi.org/10.3390/computation13020055
