Article

Using Matrix Eigenvalues to Construct an Iterative Method with the Highest Possible Efficiency Index Two

1 Mathematical Modeling and Applied Computation (MMAC) Research Group, Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Member of Young Researchers and Elite Club, Shahr-e-Qods Branch, Islamic Azad University, Tehran 37515-374, Iran
3 Department of Mathematics and Applied Mathematics, School of Mathematical and Natural Sciences, University of Venda, P. Bag X5050, Thohoyandou 0950, South Africa
4 Institute of Mathematical Sciences, Faculty of Science, University of Malaya, Kuala Lumpur 50603, Malaysia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(9), 1370; https://doi.org/10.3390/math10091370
Submission received: 9 March 2022 / Revised: 6 April 2022 / Accepted: 11 April 2022 / Published: 20 April 2022
(This article belongs to the Special Issue New Trends and Developments in Numerical Analysis)

Abstract
In this paper, we first derive a family of optimal fourth-order iterative schemes in which a weight function is used to preserve optimality. We then transform this family into methods with several self-accelerating parameters in order to reach the highest possible convergence rate of eight. To this end, we employ the eigenvalues of suitable matrices together with the with-memory technique. Numerical tests on several nonlinear equations show that the proposed variants attain a computational efficiency index of two (the maximum possible) in practice.

1. Introduction

This paper is concerned with the numerical solution of nonlinear problems having the structure $g(x) = 0$; in particular, we look at iterative approaches for solving such problems. It is well known that the celebrated Newton scheme, defined by $x_{k+1} = x_k - \frac{g(x_k)}{g'(x_k)}$, converges locally with quadratic rate to a simple root, while each iteration requires one evaluation of the function and one of its first derivative. For some applications, one may refer to [1,2].
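For reference, a minimal Python sketch of Newton's scheme follows; the test function, starting point, and stopping tolerance are illustrative choices, not taken from the paper.

```python
def newton(g, dg, x0, tol=1e-12, max_iter=50):
    """Newton's scheme x_{k+1} = x_k - g(x_k)/g'(x_k); quadratic for simple roots."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:          # stop once the update is negligible
            break
    return x

# Example: g(x) = x^3 + 4x^2 - 10 (used later as a test equation), x0 = 1.
root = newton(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x, 1.0)
print(root)   # ~1.36523
```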
A wide range of problems which at first sight are not related to nonlinear equations can be expressed as finding the solution of nonlinear equations in special spaces (e.g., in operator form). For example, finding approximate-analytic solutions of nonlinear stochastic differential equations [3] is possible via Chaplygin-type solvers, which are in fact Newton-type iterations in an appropriate operator setting [4].
Let us recall that the efficiency index of an iterative scheme [5] is calculated via $p^{1/d}$, wherein $d$ is the number of functional evaluations per cycle and $p$ is the convergence rate. Besides, for multi-point without-memory iterative schemes, the optimal convergence order is $2^{n-1}$ when $n$ functional evaluations are used per cycle [6].
We now recall some known methods and definitions that will be used later in this work.
A famous fourth-order two-point method without memory is King's method, which is given by [7]:
$$y_k = x_k - \frac{g(x_k)}{g'(x_k)}, \qquad x_{k+1} = y_k - \frac{g(y_k)}{g'(x_k)}\,\frac{g(x_k) + \gamma g(y_k)}{g(x_k) + (\gamma - 2)\,g(y_k)}, \qquad \gamma \in \mathbb{R}. \tag{1}$$
The authors of [8] constructed the following three-step scheme based on Ostrowski's method and the Chun-type weight function technique [9]:
$$y_k = x_k - \frac{g(x_k)}{g'(x_k)}, \qquad z_k = y_k - \frac{g(y_k)}{g'(x_k)}\,\frac{g(x_k) + \gamma g(y_k)}{g(x_k) + (\gamma - 2)\,g(y_k)}, \qquad \gamma \in \mathbb{R},$$
$$x_{k+1} = z_k - u(t_k)\,\frac{g[x_k, y_k]\; g(z_k)}{g[x_k, z_k]\; g[z_k, y_k]}, \qquad t_k = \frac{g(z_k)}{g(x_k)}, \qquad u(0) = u'(0) = 1. \tag{2}$$
Ostrowski's method, proposed in [5], has fourth order of convergence and reads as follows:
$$y_k = x_k - \frac{g(x_k)}{g'(x_k)}, \qquad x_{k+1} = y_k - \frac{g(y_k)\,(y_k - x_k)}{2g(y_k) - g(x_k)}, \qquad k \geq 0. \tag{3}$$
This method can be rewritten as follows:
$$y_k = x_k - \frac{g(x_k)}{g'(x_k)}, \qquad x_{k+1} = y_k - \frac{g(y_k)}{g'(x_k)}\,\frac{g(x_k)}{g(x_k) - 2g(y_k)}, \qquad k \geq 0. \tag{4}$$
This method supports the Kung–Traub optimality conjecture on the highest possible convergence order of methods without memory. Accordingly, the efficiency indices of Newton's and Ostrowski's methods are $\sqrt{2} \approx 1.414$ and $4^{1/3} \approx 1.587$, respectively.
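A minimal Python sketch of the two-step Ostrowski scheme in the form (4) is given below; the stopping criterion and default tolerances are illustrative assumptions.

```python
def ostrowski(g, dg, x0, tol=1e-12, max_iter=25):
    """Ostrowski's method (4): two function values and one derivative per step,
    giving fourth order and efficiency index 4**(1/3) ~ 1.587."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        y = x - gx / dg(x)                               # Newton sub-step
        gy = g(y)
        x_new = y - (gy / dg(x)) * gx / (gx - 2.0 * gy)  # Ostrowski correction
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```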
In this work, we turn the famous Ostrowski method into a family of Steffensen-like methods [10]. This technique removes the disadvantage of having to compute the derivative of the function. We obtain a family of optimal two-step methods with three self-accelerating parameters, in which a weight function technique is used to keep the optimality of the without-memory members. In addition, the matrix eigenvalue technique is employed to establish the convergence order of the proposed methods.
To explain the motivation of the current manuscript clearly, we should address why such high-precision results are needed and for which applications. The answer is that, in most applications, very high precision is not needed; the present study is mainly useful from a theoretical point of view, since it proposes a general family of methods with memory that achieves a 100% improvement of the convergence order without any additional functional evaluations. From the application point of view, multiple-precision arithmetic is employed in the numerical simulations only to verify the order of convergence of the tests. In practice, a method with a higher order of convergence clearly enters its convergence regime faster and delivers the final solution in reasonable time.
We describe the structure of the modified two-step Ostrowski methods without memory in Section 2. The improvement of the convergence rate of this family is attained by employing several self-accelerating parameters. Such parameters are computed per loop from the information of the current and the previous iterations, which accelerates the convergence without requiring further functional evaluations. The efficiency index of the new method is two (the highest possible). The theoretical proof is presented in Section 3. Computational evidence is brought forward in Section 4 and upholds the analytical results. Finally, we provide concluding remarks in Section 5.

2. Derivation of Methods and Convergence Analysis

By looking at relation (4), it can be seen that this method uses the derivative of the function in both steps, and that the two-point family of schemes (4) achieves the fourth convergence rate employing only three evaluations of functions (viz., $g(x_k)$, $g(y_k)$, and $g'(x_k)$) per full iteration. To derive new methods, we approximate $g'(x_k)$ in the first step of (4) as follows:
$$g'(x_k) \approx g[w_k, x_k] = \frac{g(w_k) - g(x_k)}{w_k - x_k}, \qquad w_k = x_k + \beta g(x_k). \tag{5}$$
In what follows, the derivative $g'(x_k)$ in the second step will be estimated via $g[y_k, w_k]\,H(t_k)$, where $H(t_k)$ is a differentiable weight function that relies on the real variable
$$t_k = \frac{g(y_k)}{g(x_k)}.$$
Thus, starting from the scheme (4) and the approximation (5), we arrive at the following two-point method:
$$w_k = x_k + \beta g(x_k), \qquad y_k = x_k - \frac{g(x_k)}{g[w_k, x_k]}, \qquad k \geq 0,$$
$$x_{k+1} = y_k - H(t_k)\,\frac{g(x_k)}{g(x_k) - 2g(y_k)}\,\frac{g(y_k)}{g[y_k, w_k]}. \tag{6}$$
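A minimal Python sketch of the derivative-free family (6), for a fixed parameter β and a user-supplied weight function H, may look as follows (function names and tolerances are illustrative):

```python
def family_6(g, x0, H, beta=0.01, tol=1e-12, max_iter=25):
    """Two-step derivative-free family (6): w_k = x_k + beta*g(x_k), a Steffensen-type
    first step with g[w_k, x_k], and an Ostrowski-type second step weighted by H(t_k)."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        w = x + beta * gx
        gw = g(w)
        y = x - gx / ((gw - gx) / (w - x))        # divided difference g[w_k, x_k]
        gy = g(y)
        t = gy / gx
        dd_yw = (gy - gw) / (y - w)               # divided difference g[y_k, w_k]
        x_new = y - H(t) * (gx / (gx - 2.0 * gy)) * gy / dd_yw
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# e.g., with the weight H_2(t) = 1/(1 + t):
print(family_6(lambda x: x**3 + 4*x**2 - 10, 1.0, lambda t: 1.0 / (1.0 + t)))
```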
The next theorem specifies the conditions on the weight function under which the convergence rate of (6) reaches the optimal order four.
Theorem 1.
Let the function $g : D \subset \mathbb{R} \to \mathbb{R}$ have a simple root $x^* \in D$ in the open interval $D$. If the starting point $x_0$ is close enough to the exact root, then the sequence $\{x_k\}$ obtained via (6) tends to $x^*$. If $H$ is a real function satisfying $H(0) = 1$, $H'(0) = -1$, $|H''(0)| < \infty$, and $\beta \neq 0$, then the fourth order of convergence is obtained for (6).
Proof. 
The proof of this theorem is similar to the proofs of the convergence order of related schemes in the literature; see, e.g., [11]. It is hence omitted, and we only state the final error equation, which can be written as follows:
$$e_{k+1} = \frac{1}{2}\,(1 + \beta g'(x^*))^2\, c_2\,\big((2 + h_2 + g'(x^*)\beta(2 + h_2))\,c_2^2 + 2 c_3\big)\, e_k^4 + O(e_k^5), \tag{7}$$
where $c_i = \frac{1}{i!}\,\frac{g^{(i)}(x^*)}{g'(x^*)}$ and $h_0 = H(0)$, $h_1 = H'(0)$, $h_2 = H''(0)$. Hence, the fourth-order convergence is established. The proof is ended. □
Some of the functions that satisfy Theorem 1 are as follows:
$$H_1(t) = 1 - t, \quad H_2(t) = \frac{1}{1+t}, \quad H_3(t) = \left(1 - \frac{t}{2}\right)^2, \quad H_4(t) = e^{-t}, \quad H_5(t) = \frac{1+2t}{1+3t},$$
$$H_6(t) = \cos(t) - \sin(t), \quad H_7(t) = \arccos(t), \quad H_8(t) = \frac{t^2+1}{1+t}, \quad H_9(t) = e^{t} - 2t. \tag{8}$$
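The conditions of Theorem 1 can be checked numerically for any candidate weight. A small Python sketch (a central finite difference approximates H'(0); the tolerances are illustrative):

```python
import math

def satisfies_theorem_1(H, h=1e-6):
    """Check H(0) = 1 and H'(0) = -1 via a central-difference estimate of H'(0)."""
    H0 = H(0.0)
    dH0 = (H(h) - H(-h)) / (2.0 * h)
    return abs(H0 - 1.0) < 1e-8 and abs(dH0 + 1.0) < 1e-5

print(satisfies_theorem_1(lambda t: 1.0 - t))                    # H_1
print(satisfies_theorem_1(lambda t: 1.0 / (1.0 + t)))            # H_2
print(satisfies_theorem_1(lambda t: math.cos(t) - math.sin(t)))  # H_6
```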
By introducing a new accelerator parameter $\gamma$, the following two-step method can be obtained:
$$w_k = x_k + \beta g(x_k), \qquad y_k = x_k - \frac{g(x_k)}{g[w_k, x_k] + \gamma g(w_k)}, \qquad k \geq 0,$$
$$x_{k+1} = y_k - H(t_k)\,\frac{g(x_k)}{g(x_k) - 2g(y_k)}\,\frac{g(y_k)}{g[y_k, w_k] + \gamma g(w_k)}. \tag{9}$$
The method (6) can also be converted into a three-parameter without-memory method by adding two self-accelerating parameters, as follows:
$$w_k = x_k + \beta g(x_k), \qquad y_k = x_k - \frac{g(x_k)}{g[w_k, x_k] + \gamma g(w_k)}, \qquad k \geq 0,$$
$$x_{k+1} = y_k - H(t_k)\,\frac{g(x_k)}{g(x_k) - 2g(y_k)}\,\frac{g(y_k)}{g[y_k, w_k] + \gamma g(w_k) + \lambda (y_k - x_k)(y_k - w_k)}. \tag{10}$$
Theorem 2.
Under conditions similar to those of Theorem 1, the iterative methods (9) and (10) have fourth order of convergence and satisfy the following error equations, respectively:
$$e_{k+1} = (1 + \beta g'(x^*))^2 (\gamma + c_2)\big((\gamma + c_2)\,\gamma\,(1 + \beta g'(x^*)) + (1 + \beta g'(x^*))\,c_2 + c_3\big)\, e_k^4 + O(e_k^5), \tag{11}$$
and
$$e_{k+1} = (1 + \beta g'(x^*))^2 (\gamma + c_2)\Big(g'(x^*)(1 + \beta g'(x^*))\,\gamma^2 - \lambda + g'(x^*)\big(2 c_2\, g'(x^*)\beta\gamma + (1 + g'(x^*)\beta)\,c_2^2 + c_3\big)\Big)\,\big(g'(x^*)\big)^{-1} e_k^4 + O(e_k^5). \tag{12}$$
Proof. 
This is proved in the same way as Theorem 1; hence, it is omitted. □
We also note here that (12) can be rewritten as follows:
$$e_{k+1} = (1 + \beta g'(x^*))^2 (\gamma + c_2)\Big(g'(x^*)(1 + \beta g'(x^*))\,\gamma^2 - \lambda - g'(x^*)\,c_2^2\,(g'(x^*)\beta + 1) - c_2^2\,(1 + g'(x^*)\beta) + c_3\, g'(x^*)\Big)\,\big(g'(x^*)\big)^{-1} e_k^4 + O(e_k^5). \tag{13}$$

3. Further Improvements via the Concept of Methods with Memory

3.1. One-Parametric Method

It is observed from (7) that the convergence order of the presented methods (6) is four when $\beta \neq -1/g'(x^*)$. Since $x^*$ is not available, we approximate the parameter $\beta = -1/g'(x^*)$ by a recursively computed $\beta_k$:
$$\beta_k = \frac{-1}{g'(x^*)} \approx \frac{-1}{N_3'(x_k)}, \tag{14}$$
where $N_3(t)$ is Newton's interpolating polynomial of degree three through the available approximations, namely
$$N_3(t) = N_3(t;\, x_k, x_{k-1}, w_{k-1}, y_{k-1}). \tag{15}$$
Combining (6) with (14), we can propose a family of two-point Ostrowski–Steffensen-type methods with memory as follows:
$$\beta_k = \frac{-1}{N_3'(x_k)}, \quad k = 1, 2, 3, \ldots, \qquad w_k = x_k + \beta_k g(x_k), \qquad y_k = x_k - \frac{g(x_k)}{g[x_k, w_k]}, \quad k = 0, 1, 2, \ldots,$$
$$t_k = \frac{g(y_k)}{g(x_k)}, \qquad H(0) = 1, \quad H'(0) = -1, \qquad x_{k+1} = y_k - H(t_k)\,\frac{g(y_k)}{g[y_k, w_k]}\,\frac{g(x_k)}{g(x_k) - 2g(y_k)}. \tag{16}$$
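The following Python sketch illustrates the with-memory scheme (16) in double precision. Instead of the Newton form of N_3, it builds the same cubic interpolant with numpy.polyfit; β_0, the test function, and the iteration count are illustrative (the paper itself works in multiple-precision arithmetic).

```python
import numpy as np

def beta_update(nodes, g, x):
    """beta_k = -1 / N_3'(x_k): N_3 is the cubic interpolating g at the four nodes
    (built here with polyfit, which yields the same polynomial as the Newton form)."""
    p = np.polyfit(nodes, [g(s) for s in nodes], len(nodes) - 1)
    return -1.0 / np.polyval(np.polyder(p), x)

def with_memory_16(g, x0, H, beta0=0.01, n_iter=6, tol=1e-13):
    x, beta, prev = x0, beta0, None          # prev = (x_{k-1}, w_{k-1}, y_{k-1})
    for _ in range(n_iter):
        gx = g(x)
        if abs(gx) < tol:                    # double precision is exhausted here
            break
        if prev is not None:
            beta = beta_update([x, *prev], g, x)
        w = x + beta * gx
        gw = g(w)
        y = x - gx / ((gw - gx) / (w - x))   # Steffensen-type first step
        gy = g(y)
        x_new = y - H(gy / gx) * (gx / (gx - 2.0 * gy)) * gy / ((gy - gw) / (y - w))
        prev, x = (x, w, y), x_new
    return x

print(with_memory_16(lambda x: x**3 + 4*x**2 - 10, 1.0, lambda t: 1.0 - t))
```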
Theorem 3.
Under the same conditions as in Theorem 1, if $\beta_k$ in (16) is computed recursively by (14), then the R-order of convergence is six.
Proof. 
The matrix approach discussed initially in [12] is now used to obtain the convergence rate of such an accelerated method. Recall that a lower bound for the R-order of a single-step s-point procedure $x_k = \varphi(x_{k-1}, x_{k-2}, \ldots, x_{k-s})$ is the spectral radius of the matrix $M^{(s)} = (m_{ij})$ associated with the method, where
$$m_{i,i-1} = 1, \quad i = 2, 3, \ldots, s; \qquad m_{1,j} = \text{amount of information required at the point } x_{k-j}, \quad j = 1, 2, \ldots, s; \qquad m_{i,j} = 0 \ \text{otherwise}. \tag{17}$$
Then the spectral radius of the product $M = M_1 \cdot M_2 \cdots M_s$ is a lower bound for the R-order of the s-step method $\varphi = \varphi_1 \cdot \varphi_2 \cdots \varphi_s$. We can express each of the approximations $x_{k+1}$, $y_k$, and $w_k$ as a function of the available information $g(y_k)$, $g(w_k)$, $g(x_k)$ from the k-th iterate and $g(y_{k-1})$, $g(w_{k-1})$, $g(x_{k-1})$ from the previous iterate. From the relations (16) and (17), we create the corresponding matrices as follows:
$$x_{k+1} = \varphi_1(y_k, w_k, x_k, y_{k-1}); \qquad M_1 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix},$$
$$y_k = \varphi_2(w_k, x_k, y_{k-1}, w_{k-1}); \qquad M_2 = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix},$$
$$w_k = \varphi_3(x_k, y_{k-1}, w_{k-1}, x_{k-1}); \qquad M_3 = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$
Hence, we obtain
$$M = M_1 M_2 M_3 = \begin{pmatrix} 4 & 4 & 0 & 0 \\ 2 & 2 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix},$$
whose eigenvalues are $(6, 0, 0, 0)$. It follows that the convergence rate of the method with memory (16) is six. The proof is now complete. □
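The spectral-radius computation in the proof of Theorem 3 is easy to verify numerically; a small numpy sketch:

```python
import numpy as np

# Information matrices from the proof of Theorem 3 (nodes ordered y_k, w_k, x_k, y_{k-1});
# the R-order of the with-memory scheme (16) is the spectral radius of M1*M2*M3.
M1 = np.array([[1, 1, 1, 1], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
M2 = np.array([[1, 1, 1, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
M3 = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])

M = M1 @ M2 @ M3
print(M)                                  # [[4 4 0 0] [2 2 0 0] [1 1 0 0] [1 0 0 0]]
print(max(abs(np.linalg.eigvals(M))))     # 6.0, i.e., R-order six
```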

3.2. Two-Parametric Method

Now, similarly to the previous case, we build the following derivative-free method with memory from (9):
$$\beta_k = \frac{-1}{N_3'(x_k)}, \qquad \gamma_k = \frac{-N_4''(w_k)}{2 N_4'(w_k)}, \qquad k = 1, 2, 3, \ldots,$$
$$w_k = x_k + \beta_k g(x_k), \qquad y_k = x_k - \frac{g(x_k)}{g[x_k, w_k] + \gamma_k g(w_k)}, \qquad k = 0, 1, 2, \ldots,$$
$$t_k = \frac{g(y_k)}{g(x_k)}, \qquad H(0) = 1, \quad H'(0) = -1, \qquad x_{k+1} = y_k - H(t_k)\,\frac{g(y_k)}{g[y_k, w_k] + \gamma_k g(w_k)}\,\frac{g(x_k)}{g(x_k) - 2g(y_k)}. \tag{18}$$
Theorem 4.
Let $x_0$ be an initial approximation close enough to a simple root $x^*$ of $g(x) = 0$. If the parameters $\beta_k$ and $\gamma_k$ are computed recursively, then the R-order of convergence of (18) is at least seven.
Proof. 
Using the appropriate matrices, as in the proof of Theorem 3, and substituting them into the product matrix, we obtain that (18) has seventh order of convergence. Since the proof is analogous to that of Theorem 3, it is omitted. □

3.3. Tri-Parametric Method

The with-memory version of the method (10) can be expressed as follows:
$$\beta_k = \frac{-1}{N_3'(x_k)}, \qquad \gamma_k = \frac{-N_4''(w_k)}{2 N_4'(w_k)}, \qquad \lambda_k = \frac{N_5'''(y_k)}{6}, \qquad k = 1, 2, 3, \ldots,$$
$$w_k = x_k + \beta_k g(x_k), \qquad y_k = x_k - \frac{g(x_k)}{g[x_k, w_k] + \gamma_k g(w_k)}, \qquad k = 0, 1, 2, \ldots,$$
$$t_k = \frac{g(y_k)}{g(x_k)}, \qquad H(0) = 1, \quad H'(0) = -1,$$
$$x_{k+1} = y_k - H(t_k)\,\frac{g(y_k)}{g[y_k, w_k] + \gamma_k g(w_k) + \lambda_k (y_k - x_k)(y_k - w_k)}\,\frac{g(x_k)}{g(x_k) - 2g(y_k)}. \tag{19}$$
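As an illustration of how the three accelerators in (19) could be evaluated in code, the sketch below builds the interpolating polynomials with numpy.polyfit (mathematically equivalent to the Newton form used in the paper) and takes the required derivatives; the function name and node-list arguments are illustrative.

```python
import numpy as np

def accelerators_19(g, x_k, w_k, y_k, nodes_beta, nodes_gamma, nodes_lambda):
    """Self-accelerating parameters of (19):
    beta_k   = -1 / N_3'(x_k),
    gamma_k  = -N_4''(w_k) / (2 N_4'(w_k)),
    lambda_k =  N_5'''(y_k) / 6,
    where each N_m interpolates g at the supplied nodes (degree = #nodes - 1)."""
    def interp(nodes):
        return np.polyfit(nodes, [g(s) for s in nodes], len(nodes) - 1)

    p3, p4, p5 = interp(nodes_beta), interp(nodes_gamma), interp(nodes_lambda)
    beta = -1.0 / np.polyval(np.polyder(p3), x_k)
    gamma = -np.polyval(np.polyder(p4, 2), w_k) / (2.0 * np.polyval(np.polyder(p4), w_k))
    lam = np.polyval(np.polyder(p5, 3), y_k) / 6.0
    return beta, gamma, lam
```

In an actual run, beta_k is computed first, then w_k, then gamma_k, then y_k, and finally lambda_k, since each accelerator uses the nodes produced up to that point of the iteration.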
Now, we establish a theorem for determining the rate of (19).
Theorem 5.
Under the hypotheses of Theorem 3, if the three parameters $\beta_k$, $\gamma_k$ and $\lambda_k$ are calculated recursively in (19), then the convergence order of the with-memory method suggested in (19) is 7.53.
Proof. 
From the relation (19), and similarly to the construction used in Theorem 3, we obtain the corresponding matrix
$$M = M_1 M_2 M_3 = \begin{pmatrix} 4 & 4 & 4 & 4 & 0 & 0 \\ 2 & 2 & 2 & 2 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix},$$
whose eigenvalues are $\big(\tfrac{1}{2}(7 + \sqrt{65}),\ \tfrac{1}{2}(7 - \sqrt{65}),\ 0, 0, 0, 0\big)$. Thus, $\tfrac{1}{2}(7 + \sqrt{65}) \approx 7.53$ is the rate of convergence. □
We now continue this process in the following four stages, (I)–(IV), using interpolating polynomials of increasingly higher degree, until the improvement of the convergence order approaches the maximum possible value (i.e., 100%):
(I)
We first consider the following three-parameter iterative method with memory, whose accelerators use information from the two previous iterations:
$$\beta_k = \frac{-1}{N_6'(x_k)}, \qquad \gamma_k = \frac{-N_7''(w_k)}{2 N_7'(w_k)}, \qquad \lambda_k = \frac{N_8'''(y_k)}{6}, \qquad k = 2, 3, 4, \ldots,$$
$$w_k = x_k + \beta_k g(x_k), \qquad y_k = x_k - \frac{g(x_k)}{g[x_k, w_k] + \gamma_k g(w_k)}, \qquad k = 0, 1, 2, \ldots,$$
$$t_k = \frac{g(y_k)}{g(x_k)}, \qquad H(0) = 1, \quad H'(0) = -1,$$
$$x_{k+1} = y_k - H(t_k)\,\frac{g(y_k)}{g[y_k, w_k] + \gamma_k g(w_k) + \lambda_k (y_k - x_k)(y_k - w_k)}\,\frac{g(x_k)}{g(x_k) - 2g(y_k)}. \tag{20}$$
Theorem 6.
Under the same conditions as in Theorems 1 and 5, the scheme (20) converges to $x^*$ with convergence order 7.77.
Proof. 
In a similar fashion, one obtains
$$M = \begin{pmatrix} 4 & 4 & 4 & 4 & 4 & 0 & 0 \\ 2 & 2 & 2 & 2 & 2 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix},$$
whose eigenvalues are $\big(\tfrac{1}{2}(7 + \sqrt{73}),\ \tfrac{1}{2}(7 - \sqrt{73}),\ 0, 0, 0, 0, 0\big)$, which states that the order of the with-memory method (20) is $\tfrac{1}{2}(7 + \sqrt{73}) \approx 7.77$. □
(II)
Next, we study the following tri-parametric iterative method with memory, whose accelerators use information from the three previous iterations:
$$\beta_k = \frac{-1}{N_9'(x_k)}, \qquad \gamma_k = \frac{-N_{10}''(w_k)}{2 N_{10}'(w_k)}, \qquad \lambda_k = \frac{N_{11}'''(y_k)}{6}, \qquad k = 3, 4, 5, \ldots,$$
$$w_k = x_k + \beta_k g(x_k), \qquad y_k = x_k - \frac{g(x_k)}{g[x_k, w_k] + \gamma_k g(w_k)}, \qquad k = 0, 1, 2, \ldots,$$
$$t_k = \frac{g(y_k)}{g(x_k)}, \qquad H(0) = 1, \quad H'(0) = -1,$$
$$x_{k+1} = y_k - H(t_k)\,\frac{g(y_k)}{g[y_k, w_k] + \gamma_k g(w_k) + \lambda_k (y_k - x_k)(y_k - w_k)}\,\frac{g(x_k)}{g(x_k) - 2g(y_k)}. \tag{21}$$
Theorem 7.
Under the hypotheses of Theorem 3, if the three parameters $\beta_k$, $\gamma_k$ and $\lambda_k$ are calculated recursively in (21), then (21) converges to $x^*$ with convergence order 7.89.
Proof. 
The proof of this theorem is similar to that of Theorem 3. □
(III)
We then consider the following three-parameter iterative method with memory, whose accelerators use information from the four previous iterations:
$$\beta_k = \frac{-1}{N_{12}'(x_k)}, \qquad \gamma_k = \frac{-N_{13}''(w_k)}{2 N_{13}'(w_k)}, \qquad \lambda_k = \frac{N_{14}'''(y_k)}{6}, \qquad k = 4, 5, 6, \ldots,$$
$$w_k = x_k + \beta_k g(x_k), \qquad y_k = x_k - \frac{g(x_k)}{g[x_k, w_k] + \gamma_k g(w_k)}, \qquad k = 0, 1, 2, \ldots,$$
$$t_k = \frac{g(y_k)}{g(x_k)}, \qquad H(0) = 1, \quad H'(0) = -1,$$
$$x_{k+1} = y_k - H(t_k)\,\frac{g(y_k)}{g[y_k, w_k] + \gamma_k g(w_k) + \lambda_k (y_k - x_k)(y_k - w_k)}\,\frac{g(x_k)}{g(x_k) - 2g(y_k)}, \tag{22}$$
where $N_{12}(x_k)$, $N_{13}(w_k)$ and $N_{14}(y_k)$ are defined as follows:
$$N_{12}(x_k) = N_{12}(t;\, x_k, w_{k-1}, y_{k-1}, x_{k-1}, y_{k-2}, w_{k-2}, \ldots, w_{k-4}, y_{k-4}, x_{k-4}),$$
$$N_{13}(w_k) = N_{13}(t;\, w_k, x_k, w_{k-1}, y_{k-1}, x_{k-1}, y_{k-2}, w_{k-2}, \ldots, w_{k-4}, y_{k-4}, x_{k-4}),$$
$$N_{14}(y_k) = N_{14}(t;\, y_k, w_k, x_k, w_{k-1}, y_{k-1}, x_{k-1}, y_{k-2}, w_{k-2}, \ldots, w_{k-4}, y_{k-4}, x_{k-4}). \tag{23}$$
Theorem 8.
Under the same assumptions as in Theorem 1, the proposed family of methods with memory defined by (22) has R-order 7.94.
Proof. 
From the relation (22), and similarly to the previous section, we derive the associated matrices as follows:
$$x_{k+1} = \varphi_1(y_k, w_k, x_k, y_{k-1}, w_{k-1}, x_{k-1}, y_{k-2}, w_{k-2}, x_{k-2});$$
$$M_1 = \begin{pmatrix} 1&1&1&1&1&1&1&1&1 \\ 1&0&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0&0 \\ 0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&1&0&0&0 \\ 0&0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&0&1&0 \end{pmatrix},$$
$$y_k = \varphi_2(w_k, x_k, y_{k-1}, w_{k-1}, x_{k-1}, y_{k-2}, w_{k-2}, x_{k-2}, y_{k-3});$$
$$M_2 = \begin{pmatrix} 1&1&1&1&1&1&1&1&0 \\ 1&0&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0&0 \\ 0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&1&0&0&0 \\ 0&0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&0&1&0 \end{pmatrix},$$
$$w_k = \varphi_3(x_k, y_{k-1}, w_{k-1}, x_{k-1}, y_{k-2}, w_{k-2}, x_{k-2}, y_{k-3}, w_{k-3});$$
$$M_3 = \begin{pmatrix} 1&1&1&1&1&1&1&0&0 \\ 1&0&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0&0 \\ 0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&1&0&0&0 \\ 0&0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&0&1&0 \end{pmatrix}.$$
So, we obtain
$$M = M_1 M_2 M_3 = \begin{pmatrix} 4&4&4&4&4&4&4&0&0 \\ 2&2&2&2&2&2&2&0&0 \\ 1&1&1&1&1&1&1&0&0 \\ 1&0&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0&0 \\ 0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&1&0&0&0 \end{pmatrix}.$$
The only real positive eigenvalue of the matrix M is approximately 7.94. It follows that the convergence rate of (22) is 7.94. □
(IV)
Finally, we present the most important scheme of this paper, which possesses the highest order of convergence among the proposed Ostrowski-like two-point methods, namely 7.97:
$$\beta_k = \frac{-1}{N_{15}'(x_k)}, \qquad \gamma_k = \frac{-N_{16}''(w_k)}{2 N_{16}'(w_k)}, \qquad \lambda_k = \frac{N_{17}'''(y_k)}{6}, \qquad k = 5, 6, 7, \ldots,$$
$$w_k = x_k + \beta_k g(x_k), \qquad y_k = x_k - \frac{g(x_k)}{g[x_k, w_k] + \gamma_k g(w_k)}, \qquad k = 0, 1, 2, \ldots,$$
$$t_k = \frac{g(y_k)}{g(x_k)}, \qquad H(0) = 1, \quad H'(0) = -1,$$
$$x_{k+1} = y_k - H(t_k)\,\frac{g(y_k)}{g[y_k, w_k] + \gamma_k g(w_k) + \lambda_k (y_k - x_k)(y_k - w_k)}\,\frac{g(x_k)}{g(x_k) - 2g(y_k)}. \tag{24}$$
Theorem 9.
Under the hypotheses of Theorem 3, if the three parameters $\beta_k$, $\gamma_k$ and $\lambda_k$ are calculated recursively, then (24) has R-order $7.97 \approx 8$ and its efficiency index is $7.97^{1/3} \approx 2$.
Proof. 
From the relation (24) and similar to that used in the previous section, we construct the corresponding matrices as follows:
$$x_{k+1} = \varphi_1(y_k, w_k, x_k, y_{k-1}, w_{k-1}, x_{k-1}, y_{k-2}, w_{k-2}, x_{k-2}, y_{k-3});$$
$$M_1 = \begin{pmatrix} 1&1&1&1&1&1&1&1&1&1 \\ 1&0&0&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0&0&0 \\ 0&0&0&0&1&0&0&0&0&0 \\ 0&0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&0&1&0&0&0 \\ 0&0&0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&0&0&1&0 \end{pmatrix},$$
$$y_k = \varphi_2(w_k, x_k, y_{k-1}, w_{k-1}, x_{k-1}, y_{k-2}, w_{k-2}, x_{k-2}, y_{k-3}, w_{k-3});$$
$$M_2 = \begin{pmatrix} 1&1&1&1&1&1&1&1&1&0 \\ 1&0&0&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0&0&0 \\ 0&0&0&0&1&0&0&0&0&0 \\ 0&0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&0&1&0&0&0 \\ 0&0&0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&0&0&1&0 \end{pmatrix},$$
$$w_k = \varphi_3(x_k, y_{k-1}, w_{k-1}, x_{k-1}, y_{k-2}, w_{k-2}, x_{k-2}, y_{k-3}, w_{k-3}, x_{k-3});$$
$$M_3 = \begin{pmatrix} 1&1&1&1&1&1&1&1&0&0 \\ 1&0&0&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0&0&0 \\ 0&0&0&0&1&0&0&0&0&0 \\ 0&0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&0&1&0&0&0 \\ 0&0&0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&0&0&1&0 \end{pmatrix}.$$
Thus, we get
$$M = M_1 M_2 M_3 = \begin{pmatrix} 4&4&4&4&4&4&4&4&0&0 \\ 2&2&2&2&2&2&2&2&0&0 \\ 1&1&1&1&1&1&1&1&0&0 \\ 1&0&0&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0&0&0 \\ 0&0&0&0&1&0&0&0&0&0 \\ 0&0&0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&0&1&0&0&0 \end{pmatrix},$$
and its eigenvalues are $(7.97243,\ -0.48621 + 0.71846i,\ -0.48621 - 0.71846i,\ 0, 0, 0, 0, 0, 0, 0)$. This states that $7.97 \approx 8$ is the analytical order and the efficiency index is $7.97^{1/3} \approx 2$. □
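The headline eigenvalue in Theorem 9 can be double-checked numerically; a small numpy sketch that rebuilds the three information matrices of stage (IV):

```python
import numpy as np

def info_matrix(top_row):
    """First row = information used by the sub-step; ones on the sub-diagonal
    encode the shift to older iterates, as in (17)."""
    n = len(top_row)
    m = np.zeros((n, n), dtype=int)
    m[0, :] = top_row
    for i in range(1, n):
        m[i, i - 1] = 1
    return m

M1 = info_matrix([1] * 10)              # x_{k+1} uses all ten stored values
M2 = info_matrix([1] * 9 + [0])         # y_k uses nine of them
M3 = info_matrix([1] * 8 + [0, 0])      # w_k uses eight of them
M = M1 @ M2 @ M3
print(max(abs(np.linalg.eigvals(M))))   # ~7.97243, so the efficiency index ~ 7.97**(1/3) ~ 2
```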

4. Numerical Results

The principal purpose of the numerical examples is to verify the validity of the theoretical developments on a variety of test problems using high-accuracy computations. All computations were performed with Mathematica 11 [13].
In the tables, the abbreviations Div, TNE and Iter are used as follows:
  • TNE: the total number of function evaluations required for a method to carry out the specified iterations;
  • Iter: the number of iterations;
  • the errors |x_k − α| of the approximations to the simple zeros of g_i(x), i = 1, 2, 3;
  • the computational order of convergence r_c [14], which can be calculated via (a small computational sketch follows this list):
$$r_c = \frac{\log\left|g(x_k)/g(x_{k-1})\right|}{\log\left|g(x_{k-1})/g(x_{k-2})\right|}, \qquad \mathrm{COC} = \frac{\log\left|(x_k - x^*)/(x_{k-1} - x^*)\right|}{\log\left|(x_{k-1} - x^*)/(x_{k-2} - x^*)\right|}. \tag{25}$$
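A minimal Python sketch of how r_c (or the COC) in (25) can be evaluated from the last three values of a run; the sample error values are illustrative.

```python
import math

def computational_order(errs):
    """Computational order of convergence (25), from the last three values of
    |g(x_k)| (for r_c) or |x_k - x*| (for the COC)."""
    e_k, e_km1, e_km2 = errs[-1], errs[-2], errs[-3]
    return math.log(e_k / e_km1) / math.log(e_km1 / e_km2)

print(computational_order([1e-2, 1e-17, 1e-137]))   # ~8.0 for an eighth-order decay
```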
We shall check the effectiveness of the new without- and with-memory methods. We employ the presented methods (6), (16), (18), (19) and (24), denoted by TM4, TM6, TM7, TM7.5 and TM8, respectively (with β_0 = γ_0 = λ_0 = 0.01), to solve some nonlinear equations. We compare our methods with some known methods as follows: Campos et al. (CCTVM) [15], Choubey–Jaiswal (CJM) [16], Chun's method (CM) [17], Cordero et al. (CLKTM) [18], Cordero et al. (CLTAM) [19], Jaiswal's method (JM) [20], Jarratt's method (JM) [21], Kung–Traub's method (KTM) [6], Maheshwari's method (MM) [22], Kansal et al.'s method (KKBM) [23], Lalehchini et al.'s method (LLMM) [24], Mohammadi et al.'s method (MLAM) [25], Ostrowski's method (OM) [5], Soleymani et al.'s method (SLTKM) [11], Torkashvand–Kazemi (TKM) [26], Traub's method (TM) [27], Wang's method (WM) [28] and Zafar et al.'s method (ZYKZM) [29].
In Tables 1–7 we show the numerical results obtained by applying the different methods for approximating the solutions of $g_i(x) = 0$, $i = 1, 2, 3$, given as follows:
$$g_1(x) = x^5 + x^4 + 4x^2 - 15, \quad x^* \approx 1.34, \quad x_0 = 1.1,$$
$$g_2(x) = x^3 + 4x^2 - 10, \quad x^* \approx 1.36, \quad x_0 = 1,$$
$$g_3(x) = 10\,x\,e^{-x^2} - 1, \quad x^* \approx 1.67, \quad x_0 = 1.$$
Here, ≈ stands for an approximation of the solution, written only to give an overview of its location. All calculations are performed using 2000-digit floating-point arithmetic in Wolfram Mathematica. This means that we care about numbers of very small magnitude and do not allow the programming package to treat them as zero automatically. Indeed, the higher orders can only be observed in the convergence phase, and most clearly in high-precision computing. The numerical results shown in Tables 1–7 confirm the theoretical discussion and the efficiency of the proposed scheme under different choices of the weight function.
The question may now arise of whether we really need such small numbers (e.g., 2.83E-1148 in Table 7, which stands for $2.83 \times 10^{-1148}$). The answer is 'no'; in applications, results up to at most 100 digits are usually sufficient. However, here we used such high-precision floating-point arithmetic on purpose to check the computational order of convergence (25). In fact, for higher-order methods, the higher speed can only be seen in the number of meaningful decimal places once the method takes several iterations.
In addition, Table 8 gives a comparison among various schemes, which again shows that the proposed methods with memory possess a higher computational efficiency index and can be employed for solving nonlinear equations.
We end this section by pointing out that the extension of our methods to systems of nonlinear equations (see some applications of nonlinear systems in [31,32,33]) requires the computation of a divided difference operator (DDO), which would be a dense matrix. This dense structure of the DDO matrix restricts the usefulness of such an extension. For this reason, we consider our proposed methods only for the scalar case.

5. Conclusions

In this paper, we have used the idea of the weight function to turn Ostrowski's method into an optimal-order derivative-free method. We have then constructed with-memory methods that use the same number of evaluations and do not require the computation of any derivative of the function. By introducing accelerating parameters and using the eigenvalues of suitable matrices, we created with-memory methods of higher orders. With interpolatory accelerating parameters, the methods with memory reached up to a 100% improvement of the convergence order. Numerical tests verified the better performance of the proposed methods over the others. Employing such an efficient numerical scheme for practical problems, e.g., in solving stochastic differential equations [34], is worth investigating in future work.

Author Contributions

Conceptualization, M.Z.U.; Data curation, V.T.; Formal analysis, M.Z.U., S.S. and M.A.; Funding acquisition, S.S.; Investigation, M.Z.U., V.T. and M.A.; Methodology, M.Z.U. and V.T.; Project administration, V.T. and M.A.; Supervision, V.T. and S.S.; Visualization, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project, under grant no. (KEP-48-130-42).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article, as no new data were created or analyzed in this study. All data used are clearly referenced in the text.

Acknowledgments

The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project, under grant no. (KEP-48-130-42).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, L.; Cho, S.Y.; Yao, J.C. Convergence analysis of an inertial Tseng's extragradient algorithm for solving pseudomonotone variational inequalities and applications. J. Nonlinear Var. Anal. 2021, 5, 627–644.
  2. Alsaedi, A.; Broom, A.; Ntouyas, S.K.; Ahmad, B. Existence results and the dimension of the solution set for a nonlocal inclusions problem with mixed fractional derivatives and integrals. J. Nonlinear Funct. Anal. 2020, 2020, 28.
  3. Itkin, A.; Soleymani, F. Four-factor model of quanto CDS with jumps-at-default and stochastic recovery. J. Comput. Sci. 2021, 54, 101434.
  4. Soheili, A.R.; Amini, M.; Soleymani, F. A family of Chaplygin-type solvers for Itô stochastic differential equations. Appl. Math. Comput. 2019, 340, 296–304.
  5. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960.
  6. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1974, 21, 643–651.
  7. King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879.
  8. Sharma, J.R.; Sharma, R. A new family of modified Ostrowski's methods with accelerated eighth order convergence. Numer. Algorithms 2010, 54, 445–458.
  9. Chun, C.; Lee, M.Y. A new optimal eighth-order family of iterative methods for the solution of nonlinear equations. Appl. Math. Comput. 2013, 223, 509–519.
  10. Torkashvand, V.; Lotfi, T.; Araghi, M.A.F. A new family of adaptive methods with memory for solving nonlinear equations. Math. Sci. 2019, 13, 1–20.
  11. Soleymani, F.; Lotfi, T.; Tavakoli, E.; Haghani, F.K. Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 2015, 254, 452–458.
  12. Herzberger, J. Über Matrixdarstellungen für Iterationsverfahren bei nichtlinearen Gleichungen. Computing 1974, 12, 215–222.
  13. Don, E. Schaum's Outline of Mathematica; McGraw-Hill Professional: New York, NY, USA, 2000.
  14. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013.
  15. Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. Stability of King's family of iterative methods with memory. J. Comput. Appl. Math. 2017, 318, 504–514.
  16. Choubey, N.; Jaiswal, J.P. Two- and three-point with memory methods for solving nonlinear equations. Numer. Anal. Appl. 2017, 10, 74–89.
  17. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459.
  18. Cordero, A.; Lotfi, T.; Khoshandi, A.; Torregrosa, J.R. An efficient Steffensen-like iterative method with memory. Bull. Math. Soc. Sci. Math. Roum. 2015, 58, 49–58.
  19. Cordero, A.; Lotfi, T.; Torregrosa, J.R.; Assari, P.; Mahdiani, K. Some new bi-accelarator two-point methods for solving nonlinear equations. Comput. Appl. Math. 2016, 35, 251–267.
  20. Jaiswal, J.P. Two efficient bi-parametric derivative free with memory methods for finding simple roots nonlinear equations. J. Adv. Appl. Math. 2016, 1, 203–210.
  21. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437.
  22. Maheshwari, A.K. A fourth-order iterative method for solving nonlinear equations. Appl. Math. Comput. 2009, 211, 383–391.
  23. Kansal, M.; Kanwar, V.; Bhatia, S. Efficient derivative-free variants of Hansen–Patrick's family with memory for solving nonlinear equations. Numer. Algorithms 2016, 73, 1017–1036.
  24. Lalehchini, M.J.; Lotfi, T.; Mahdiani, K. On developing an adaptive free-derivative Kung and Traub's method with memory. J. Math. Ext. 2020, 14, 221–241.
  25. Zadeh, M.M.; Lotfi, T.; Amirfakhrian, M. Developing two efficient adaptive Newton-type methods with memory. Math. Methods Appl. Sci. 2019, 42, 5687–5695.
  26. Torkashvand, V.; Kazemi, M. On an efficient family with memory with high order of convergence for solving nonlinear equations. Int. J. Ind. Math. 2020, 12, 209–224.
  27. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: New York, NY, USA, 1964.
  28. Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameter. J. Comput. Appl. Math. 2018, 330, 710–720.
  29. Zafar, F.; Yasmin, N.; Kutbi, M.A.; Zeshan, M. Construction of tri-parametric derivative free fourth order with and without memory iterative method. J. Nonlinear Sci. Appl. 2016, 9, 1410–1423.
  30. Wang, X.; Zhu, M. Two iterative methods with memory constructed by the method of inverse interpolation and their dynamics. Mathematics 2020, 8, 1080.
  31. Zhao, Y.-L.; Zhu, P.-Y.; Gu, X.-M.; Zhao, X.-L.; Jian, H.-Y. A preconditioning technique for all-at-once system from the nonlinear tempered fractional diffusion equation. J. Sci. Comput. 2020, 83, 10.
  32. Zhao, Y.-L.; Gu, X.-M.; Ostermann, A. A preconditioning technique for an all-at-once system from Volterra subdiffusion equations with graded time steps. J. Sci. Comput. 2021, 88, 11.
  33. Gu, X.-M.; Wu, S.-L. A parallel-in-time iterative algorithm for Volterra partial integro-differential problems with weakly singular kernel. J. Comput. Phys. 2020, 417, 109576.
  34. Ernst, P.A.; Soleymani, F. A Legendre-based computational method for solving a class of Itô stochastic delay differential equations. Numer. Algorithms 2019, 80, 1267–1282.
Table 1. Comparison of various iterative schemes (first part).

Method:             OM [5]       JM [21]      KTM [6]      MM [22]      CM [17]
g_1, x_0 = 1.1
  |x_{k+1} - x_k|   1.47E-43     3.75E-43     5.39E-31     1.08E-18     1.47E-43
  |g(x_{k+1})|      9.19E-171    4.04E-169    5.40E-120    2.13E-703    9.19E-171
  Iter              4            4            4            4            4
  r_c               4.00         4.00         4.00         3.99         4.00
g_2, x_0 = 1
  |x_{k+1} - x_k|   3.60E-47     3.60E-47     3.36E-38     3.85E-28     3.60E-47
  |g(x_{k+1})|      2.45E-186    2.45E-186    4.37E-150    1.60E-109    2.45E-186
  Iter              4            4            4            4            4
  r_c               4.00         4.00         4.00         4.00         4.00
g_3, x_0 = 1
  |x_{k+1} - x_k|   1.56E-29     4.40E-24     7.32E-28     3.43E-26     1.56E-29
  |g(x_{k+1})|      1.35E-115    7.72E-94     1.33E-108    1.31E-101    1.35E-115
  Iter              4            4            4            4            4
  r_c               4.00         4.00         4.00         4.00         4.00
Table 2. Comparison of various iterative schemes (second part).

Method:             TM4 (6),     TM4 (6),     TM4 (6),     TM4 (6),     TM4 (6),
                    H_1(t)       H_2(t)       H_3(t)       H_4(t)       H_5(t)
g_1, x_0 = 1.1
  |x_{k+1} - x_k|   4.24E-12     0E-0         2.89E-10     7.74E-8      -
  |g(x_{k+1})|      1.80E-45     6.39E-14     1.62E-37     1.86E-27     4.98E-12
  Iter              3            3            3            3            3
  r_c               3.99         4.11         4.00         4.00         3.83
g_2, x_0 = 1
  |x_{k+1} - x_k|   7.94E-12     2.17E-9      6.57E-15     8.14E-15     -
  |g(x_{k+1})|      6.12E-45     3.42E-35     1.47E-57     3.57E-59     7.01E-11
  Iter              3            3            3            3            3
  r_c               3.99         4.00         3.99         3.97         3.63
g_3, x_0 = 1
  |x_{k+1} - x_k|   8.11E-9      6.13E-10     5.94E-9      3.76E-9      1.81E-8
  |g(x_{k+1})|      9.42E-33     7.08E-39     2.01E-33     2.12E-34     4.87E-31
  Iter              3            3            3            3            3
  r_c               4.00         4.07         4.00         4.00         4.01
Table 3. Comparison of various iterative schemes (third part).

Method:             TM6 (16),    TM6 (16),    TM6 (16),    TM6 (16),    TM6 (16),
                    H_1(t)       H_2(t)       H_3(t)       H_4(t)       H_5(t)
g_1, x_0 = 1.1
  |x_{k+1} - x_k|   1.02E-90     9.59E-38     7.76E-85     1.60E-63     1.64E-36
  |g(x_{k+1})|      1.05E-538    7.07E-221    1.98E-503    1.55E-381    1.82E-219
  Iter              4            4            4            4            4
  r_c               6.00         6.00         6.00         6.00         6.00
g_2, x_0 = 1
  |x_{k+1} - x_k|   1.70E-100    1.09E-78     1.58E-125    8.85E-104    3.53E-28
  |g(x_{k+1})|      2.03E-599    1.41E-468    1.28E-749    3.93E-619    1.58E-170
  Iter              4            4            4            4            4
  r_c               6.00         6.00         6.00         6.00         6.00
g_3, x_0 = 1
  |x_{k+1} - x_k|   2.96E-84     1.97E-84     2.69E-84     2.43E-84     7.65E-85
  |g(x_{k+1})|      1.246E-501   1.06E-502    6.92E-502    3.77E-502    3.67E-505
  Iter              4            4            4            4            4
  r_c               6.00         6.00         6.00         6.00         6.00
Table 4. Comparison of various iterative schemes (fourth part).

Method:             TM7 (18),    TM7 (18),    TM7 (18),    TM7 (18),    TM7 (18),
                    H_1(t)       H_2(t)       H_3(t)       H_4(t)       H_5(t)
g_1, x_0 = 1.1
  |x_{k+1} - x_k|   8.86E-130    1.66E-55     2.63E-119    2.95E-92     3.80E-56
  |g(x_{k+1})|      3.69E-903    3.02E-383    7.51E-830    1.68E-640    9.89E-388
  Iter              4            4            4            4            4
  r_c               7.00         7.00         7.00         7.00         7.00
g_2, x_0 = 1
  |x_{k+1} - x_k|   5.63E-147    6.47E-117    7.66E-186    1.89E-150    3.78E-54
  |g(x_{k+1})|      3.62E-1033   6.95E-816    2.26E-1298   1.25E-1050   1.61E-376
  Iter              4            4            4            4            4
  r_c               7.00         7.00         7.00         7.00         7.00
g_3, x_0 = 1
  |x_{k+1} - x_k|   3.32E-119    1.70E-119    2.83E-119    2.40E-119    3.43E-120
  |g(x_{k+1})|      4.38E-827    3.99E-830    1.43E-828    4.52E-829    5.48E-835
  Iter              4            4            4            4            4
  r_c               7.00         6.99         7.00         7.00         7.00
Table 5. Comparison of various iterative schemes (fifth part).

Method:             TM7.5 (19),  TM7.5 (19),  TM7.5 (19),  TM7.5 (19),  TM7.5 (19),
                    H_1(t)       H_2(t)       H_3(t)       H_4(t)       H_5(t)
g_1, x_0 = 1.1
  |x_{k+1} - x_k|   1.08E-160    2.53E-69     5.73E-147    5.42E-114    2.62E-69
  |g(x_{k+1})|      1.97E-1205   1.11E-518    7.52E-1102   7.82E-584    2.70E-516
  Iter              4            4            4            4            4
  r_c               7.51         7.51         7.50         7.50         7.47
g_2, x_0 = 1
  |x_{k+1} - x_k|   1.58E-188    2.99E-148    3.39E-237    3.84E-192    1.21E-64
  |g(x_{k+1})|      5.80E-1505   9.32E-1183   2.50E-1894   6.85E-1534   6.55E-514
  Iter              4            4            4            4            4
  r_c               8.00         8.00         8.00         8.00         8.00
g_3, x_0 = 1
  |x_{k+1} - x_k|   4.23E-137    1.89E-137    3.49E-137    2.87E-137    2.78E-138
  |g(x_{k+1})|      1.97E-1027   4.76E-1030   4.70E-1028   1.07E-1028   2.63E-1036
  Iter              4            4            4            4            4
  r_c               7.51         7.51         7.51         7.51         7.50
Table 6. Comparison of various iterative schemes (sixth part).

Method:             TM8 (24),    TM8 (24),    TM8 (24),    TM8 (24),    TM8 (24),
                    H_1(t)       H_2(t)       H_3(t)       H_4(t)       H_5(t)
g_1, x_0 = 1.1
  |x_{k+1} - x_k|   3.27E-167    2.00E-69     7.97E-152    5.40E-117    1.11E-69
  |g(x_{k+1})|      4.02E-1331   8.04E-549    4.95E-1208   2.19E-929    7.19E-551
  Iter              4            4            4            4            4
  r_c               8.00         8.00         8.003        8.00         8.00
g_2, x_0 = 1
  |x_{k+1} - x_k|   1.58E-188    2.99E-148    3.39E-237    3.84E-192    1.21E-64
  |g(x_{k+1})|      5.80E-1505   9.32E-1183   2.50E-1894   6.85E-1534   6.55E-514
  Iter              4            4            4            4            4
  r_c               8.00         8.00         8.003        8.00         8.00
g_3, x_0 = 1
  |x_{k+1} - x_k|   3.19E-144    1.42E-144    2.63E-144    2.16E-144    2.06E-145
  |g(x_{k+1})|      1.31E-1149   2.04E-1152   2.83E-1150   5.79E-1151   3.95E-1159
  Iter              4            4            4            4            4
  r_c               8.00         8.00         8.003        8.00         8.00
Table 7. Comparison of various iterative schemes (seventh part).

Method:             CLKTM        CLTAMM       KKBM         ZYKZM,       TM8 (24),
                    (b = 1) [18] (A = 1) [19] (a = 1)      Method F1    H_6(t)
                                              Case 1 [23]  [29]
g_1, x_0 = 1.1
  |x_{k+1} - x_k|   3.15E-84     2.32E-115    1.21E-106    2.46E-108    5.90E-111
  |g(x_{k+1})|      4.20E-506    2.90E-802    3.44E-741    4.11E-810    4.50E-881
  Iter              4            4            4            4            4
  r_c               6.00         7.00         7.00         7.49         8.00
g_2, x_0 = 1
  |x_{k+1} - x_k|   1.85E-100    2.26E-177    7.02E-157    1.96E-154    1.47E-159
  |g(x_{k+1})|      1.43E-599    4.91E-1416   1.23E-1095   3.15E-1232   3.19E-1273
  Iter              4            4            4            4            4
  r_c               6.00         8.00         7.00         8.00         8.00
g_3, x_0 = 1
  |x_{k+1} - x_k|   3.11E-87     8.95E-112    1.85E-119    8.71E-136    4.69E-144
  |g(x_{k+1})|      1.50E-520    2.64E-777    7.30E-830    4.42E-1025   2.83E-1148
  Iter              4            4            4            4            4
  r_c               6.00         6.99         6.99         7.51         8.00
Table 8. Comparison of the improvement of the convergence order of the proposed methods with other schemes.

With-memory method   Number of sub-steps   Optimal order   COC     Percentage increase
CCTVM [15]           2                     4.00            4.24    5.9%
CJM [16]             2                     4.00            4.56    14.03%
CJM [16]             2                     4.00            4.79    19.78%
CJM [16]             2                     4.00            5.00    20%
CJM [16]             3                     8.00            9.00    12.5%
CJM [16]             3                     8.00            9.58    19.79%
CJM [16]             3                     8.00            9.80    22.44%
CJM [16]             3                     8.00            10.00   25%
CLKTM [18]           2                     4.00            6.00    50%
CLTAMM [19]          2                     4.00            7.00    75%
JM [20]              2                     4.00            7.00    75%
JM [20]              3                     8.00            14.00   75%
KKBM [23]            2                     4.00            7.00    75%
LLMM [24]            2                     4.00            6.32    57.93%
MLAM [25]            2                     4.00            5.95    48.75%
SLTKM [11]           2                     4.00            7.22    80.58%
SLTKM [11]           2                     4.00            12.00   50%
TKM [26]             3                     8.00            14.00   75%
TKM [26]             4                     16.00           28.00   75%
TM [27]              1                     2.00            2.41    20.5%
WM [28]              2                     4.00            4.24    5.75%
WM [28]              2                     4.00            4.45    11.23%
WZM [30]             2                     4.00            4.56    14.03%
WZM [30]             3                     8.00            10.13   26.64%
ZYKZM [29]           2                     4.00            7.5     88.28%
(16)                 2                     4.00            6.00    50%
(18)                 2                     4.00            7.00    75%
(19)                 2                     4.00            7.53    88.28%
(20)                 2                     4.00            7.77    94.25%
(21)                 2                     4.00            7.89    97.25%
(22)                 2                     4.00            7.94    98.5%
(24)                 2                     4.00            7.97    99.25%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Ullah, M.Z.; Torkashvand, V.; Shateyi, S.; Asma, M. Using Matrix Eigenvalues to Construct an Iterative Method with the Highest Possible Efficiency Index Two. Mathematics 2022, 10, 1370. https://doi.org/10.3390/math10091370
