Abstract
We propose a new and cost-effective iterative scheme for nonlinear equations. The main features of the presented scheme are that it does not require any derivative evaluations, achieves optimal fourth-order convergence, offers flexibility for generating new members, and is two-point, cost-effective, more stable and yields better numerical results. The derivation of our scheme is based on the weight function technique. The convergence order is studied in three main theorems. We demonstrate the applicability of our methods on four numerical problems: two are real-life cases, the third is a root clustering problem and the fourth is an academic problem. The obtained numerical results illustrate preferable outcomes compared with existing methods in terms of absolute residual errors, CPU time, approximated zeros and the absolute error difference between two consecutive iterations.
Keywords: Kung–Traub conjecture; nonlinear equations; Newton’s method; efficiency index; multiple roots
MSC: 65G99; 65H10
1. Introduction
Most applied science problems are nonlinear in nature, and their solutions are more complicated than those of linear problems. Therefore, we consider a nonlinear problem of the following form:

$$ f(x) = 0, \tag{1} $$

where $f$ is an analytic function in a region enclosing the required zero. Such equations originate from applied and computer science, engineering, statistics, economics, chemistry, biology, physics, etc. (see details in [1,2,3]). Iterative methods are also applied to compute approximate solutions of stationary and evolutionary problems associated with differential and partial differential equations (more details in [4,5]). Exact solutions of such problems are almost non-existent; thus, we have to depend on approximate solutions, which can be obtained with the help of iterative methods. One of the most famous schemes is Newton’s method, which is given by

$$ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots \tag{2} $$
Undoubtedly, this scheme has second-order convergence and is widely used for nonlinear equations. However, it suffers from several drawbacks. Some of the major ones are: it is a one-point method (with the attendant convergence and efficiency limitations; details are given in [1,2,3]), it converges only linearly to multiple zeros, and it requires the calculation of the first-order derivative at each substep. Finding the derivative is quite a rigorous problem because, sometimes, computing the derivative of a function consumes a large amount of time, or the derivative does not exist.
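To make the two drawbacks above concrete (the derivative evaluation and the linear convergence at multiple zeros), the following minimal Python sketch contrasts classical Newton iteration with the well-known multiplicity-aware modification $x_{n+1} = x_n - m\,f(x_n)/f'(x_n)$; the test function, starting point and tolerance are illustrative choices, not taken from this paper.

```python
# Minimal sketch: classical vs. multiplicity-aware Newton iteration.
# The test function f(x) = (x - 1)^2 (x + 2) (double zero at x = 1),
# the starting point and the tolerance are illustrative choices.

def newton(f, df, x0, m=1, tol=1e-12, max_iter=100):
    """Newton iteration; m > 1 gives the modified method for multiple zeros."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        x = x - m * fx / df(x)   # classical Newton when m = 1
    return x, max_iter

f  = lambda x: (x - 1.0)**2 * (x + 2.0)
df = lambda x: 2.0*(x - 1.0)*(x + 2.0) + (x - 1.0)**2

print(newton(f, df, 2.0))        # m = 1: linear convergence, ~20+ iterations
print(newton(f, df, 2.0, m=2))   # m = 2: quadratic convergence restored
```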
Therefore, higher-order optimal derivative-free methods came into demand, and several scholars have suggested such methods with optimal fourth-order convergence. Some of the most important members are given below.
In 2015, Hueso et al. [6] suggested a fourth-order derivative-free scheme (3), which is denoted by HM.
In 2019, Sharma et al. [7] proposed the scheme (4), denoted by SM1, which is one of the best methods among those proposed in [7].
In 2020, Sharma et al. [8] gave the scheme (5), which is one of the best among the methods presented in [8]. We call it SM2.
In 2020, Kumar et al. [9] presented the scheme (6), which is called KM and is one of the best schemes among those given in [9].
In 2020, Behl et al. [10] suggested the scheme (7), which is called BM. Some other higher-order derivative-free techniques can be found in [11,12,13,14,15].
We aspire to suggest a new two-step, more general and cost-effective family of iterative methods. The new scheme is derivative-free and has optimal fourth-order convergence. The derivation of this two-step scheme is based on the weight function technique. Further, we present three main theorems, Theorems 1–3, which demonstrate the fourth-order convergence for multiplicity m = 2, m = 3 and general m, when the value of m is known in advance. The applicability of our methods is illustrated on four numerical problems. Two of them are real-life cases, the third is a root clustering problem (which originates from applied mathematics) and the last is an academic problem. The numerical outcomes demonstrate preferable results in terms of absolute residual errors, CPU time, approximated zeros and the absolute error difference between two consecutive iterations, in contrast to previous studies.
The rest of the paper is organized as follows. Section 2 includes the construction as well as the convergence analysis of our scheme, which is studied thoroughly in Theorems 1–3. Section 3 is devoted to the numerical experiments, where we illustrate the efficiency and convergence of our scheme. In addition, we propose three weight functions that satisfy the hypotheses of Theorems 1–3. Further, four numerical problems are chosen to confirm the theoretical results. Finally, the concluding remarks are presented in Section 4.
2. Construction of Higher-Order Scheme
We suggest a new form of iterative scheme with optimal fourth-order convergence for multiple zeros, which is given by

where β ∈ ℝ is a free parameter and m is the known multiplicity of the needed zero. Further, the two maps involved are weight functions, analytic in a neighborhood of the origin. Moreover, their arguments are multi-valued maps, for which we adopt the principal root (see [16]). This choice of principal branch agrees with the computed values depicted in the numerical section.
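Since the displayed formulas of scheme (8) are not reproduced above, the following Python sketch shows only the general shape of this class of methods: a Traub–Steffensen-type first substep (a divided difference replacing the derivative, a free parameter β and a known multiplicity m) followed by a weighted second substep, using three evaluations of f per iteration. The weight function H and all numerical choices below are illustrative placeholders, not the paper’s actual scheme (8).

```python
# Hypothetical template of a two-step, derivative-free iteration for a zero
# of known multiplicity m. NOT the paper's exact scheme (8): the weight
# function H and the form of the second substep are illustrative only.

def two_step_derivative_free(f, x0, m, beta=0.01,
                             H=lambda u: 1.0 + 2.0 * u,   # placeholder weight
                             tol=1e-12, max_iter=50):
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        w = x + beta * fx                 # Traub-Steffensen auxiliary point
        dd = (f(w) - fx) / (w - x)        # divided difference replaces f'(x)
        y = x - m * fx / dd               # first (Newton-like) substep
        u = (f(y) / fx) ** (1.0 / m)      # principal m-th root, cf. [16]
        x = y - m * (f(y) / dd) * H(u)    # weighted second substep
    return x, max_iter
```

Per iteration, this template evaluates f only at x, w and y, which is the evaluation count the optimality discussion refers to.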
In Theorems 1–3, we demonstrate the convergence analysis of our scheme (8) without using any extra evaluation of f at other points.
Theorem 1.
We assume that ξ is a multiple zero of multiplicity two (m = 2) of the function f. Consider the map involved in (8), which is analytic in a region enclosing the needed zero ξ. Then, our scheme (8) attains fourth-order convergence if
where . The scheme (8) satisfies the following error equation:
where .
Proof.
We assume that $e_n = x_n - \xi$ is the error at the $n$th iteration and that the $c_k$ denote the corresponding asymptotic error constants. We expand f by Taylor series at the two points used by the scheme in a neighborhood of ξ, under the hypotheses $f(\xi) = f'(\xi) = 0$ and $f''(\xi) \neq 0$. Then, we obtain
and
respectively, with .
It is clear from expression (12) that we have . Thus, we can easily expand in the neighborhood of origin in the following way:
From (14), we observe that the scheme will attain at least second-order convergence when
With the help of Taylor’s series expansions, we obtain
From expression (18), we have . Thus, we expand in the neighborhood of origin , which is defined as
The coefficients of , and should be simultaneously zero, in order to deduce the fourth-order convergence. This can be easily obtained by the following values:
We have the following error equation by adopting (22) in (21):
We deduce from expression (23) that our scheme (8) attains the fourth order of convergence for m = 2. In addition, we have attained this convergence order without using any extra evaluation of f at other points. Hence, (8) is an optimal scheme. □
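The proof above proceeds by pushing Taylor expansions through the substeps and reading off the error equation. A minimal Python/SymPy sketch of that technique, applied for brevity only to the first, Newton-like substep with m = 2 and with illustrative normalized coefficient names c3, c4:

```python
# Sketch of the proof technique: expand f about the zero (e = x - xi, m = 2),
# push the expansion through a modified-Newton substep, read off the error.
import sympy as sp

e, c3, c4 = sp.symbols('e c3 c4')       # e = x - xi; illustrative coefficients
m = 2
f  = e**m * (1 + c3*e + c4*e**2)        # normalized Taylor expansion of f
fp = sp.diff(f, e)

e_y = sp.series(e - m*f/fp, e, 0, 4).removeO()   # error after y = x - m f/f'
print(sp.expand(e_y))    # leading term c3*e**2/2: quadratic convergence
```

The full proofs of Theorems 1–3 apply the same manipulation to both substeps of scheme (8), which is what produces conditions such as (22) on the weight functions.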
Theorem 2.
Suppose that ξ is a multiple zero of multiplicity three (m = 3) of the function f. Consider the map involved in (8), which is analytic in a region surrounding the needed zero ξ. Then, our scheme (8) attains fourth-order convergence if
where and . Scheme (8) satisfies the following error equation:
where .
Proof.
We assume that $e_n = x_n - \xi$ is the error at the $n$th iteration and that the $c_k$ denote the corresponding asymptotic error constants. We expand f by Taylor series at the two points used by the scheme in a neighborhood of ξ, under the hypotheses $f(\xi) = f'(\xi) = f''(\xi) = 0$ and $f'''(\xi) \neq 0$. Then, we have
and
respectively, with .
Next, from expression (27), we have . Thus, we expand the weight function in the neighborhood of origin in the following way:
From (29), we observe that the scheme will attain at least second-order convergence when
Again with the help of Taylor’s series expansions, we obtain
From expression (33), we have . Thus, we expand in the neighborhood of origin as:
By using expressions (25)–(35) in scheme (8), we have
where . For example, the first coefficient is explicitly written as , etc.
The coefficients of , and should be simultaneously zero, in order to deduce the fourth-order convergence. This can be easily attained by the following values:
General Error Form of the Proposed Scheme
Theorem 3.
Proof.
We consider that $e_n = x_n - \xi$ is the error at the $n$th iteration and that the $c_k$ denote the corresponding asymptotic error constants. We expand f by Taylor series at the two points used by the scheme in a neighborhood of ξ, under the hypotheses $f(\xi) = f'(\xi) = \cdots = f^{(m-1)}(\xi) = 0$ and $f^{(m)}(\xi) \neq 0$. Then, we obtain
and
where , . For example, and , etc.
Next, from expression (41), we have . Thus, we expand the weight function in the neighborhood of origin , which is defined as
Again, with the help of Taylor’s series expansions, we have
From expression (47), we have . Thus, we expand in the neighborhood of origin as
The coefficients of , and should be simultaneously zero, in order to deduce the fourth-order convergence. This can be easily attained by the following values:
By adopting (51) in (50), we obtain the following error equation:
We deduce from expression (52) that our scheme (8) attains the fourth order of convergence for general m. In addition, we have attained this convergence order without using any extra evaluation of f at other points. Hence, (8) is an optimal scheme. □
Remark 1.
It seems from (52) (for general m) that α and b are not involved in this expression. However, they actually appear in the coefficient of the next, fifth-order error term. Here, we do not need to calculate that coefficient because the optimal fourth order of convergence is already established; moreover, its calculation is quite rigorous and consumes a large amount of time. Nonetheless, the role of α and b can be observed in (23) and (38).
3. Numerical Experimentation
We demonstrate the efficiency and convergence of some members from our scheme (8). Therefore, we choose the following three weight functions:
- First weight function:
- Second weight function:
- Third weight function:
Clearly, all three weight functions above satisfy the conditions provided in Theorems 1–3. Now, we use these weight functions in our scheme (8) and call the resulting methods PM1, PM2 and PM3, respectively. We consider two applied science problems, one root clustering problem and an academic problem for the numerical tests.
There are no fixed criteria for the comparison of two different iterative methods. However, we assume the following six different aspects for the comparison:
- The values of the iterate $x_n$;
- The absolute residual error;
- The differences between two consecutive iterations;
- CPU timing;
- The number of iterations required to attain the prescribed accuracy;
- Computational order of convergence (COC) based on the accuracy.
The values of the above-mentioned parameters are depicted in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, along with the initial guesses. The values of $x_n$, $|f(x_n)|$, COC and $|x_{n+1} - x_n|$ were calculated in Mathematica 9 with a minimum of 3000 significant digits, which minimizes rounding-off errors. However, we depict these values up to 15 (with exponent), 2 (with exponent), 6 and 2 (with exponent) significant digits, respectively.
We adopted the following rules

$$ \mathrm{COC} = \frac{\ln \left| (x_{n+1} - \xi)/(x_n - \xi) \right|}{\ln \left| (x_n - \xi)/(x_{n-1} - \xi) \right|} $$

and

$$ \mathrm{ACOC} = \frac{\ln \left| (x_{n+1} - x_n)/(x_n - x_{n-1}) \right|}{\ln \left| (x_n - x_{n-1})/(x_{n-1} - x_{n-2}) \right|} $$

in order to calculate the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [17], respectively. Further, the CPU time is obtained with the command “AbsoluteTiming[]” in Mathematica. We execute the same program five times, and the average time is depicted in Table 7.
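The COC and ACOC rules above translate directly into code. The following sketch (in Python rather than the paper’s Mathematica, and demonstrated on an illustrative Newton run rather than one of the paper’s test problems) computes both quantities from a stored iterate sequence:

```python
# COC and ACOC from a stored iterate sequence, per the standard definitions.
import math

def coc(xs, xi):
    """Computational order of convergence; needs the exact zero xi."""
    e0, e1, e2 = (abs(x - xi) for x in xs[-3:])   # e_{n-1}, e_n, e_{n+1}
    return math.log(e2 / e1) / math.log(e1 / e0)

def acoc(xs):
    """Approximate COC; uses only differences of consecutive iterates."""
    d0 = abs(xs[-3] - xs[-4])
    d1 = abs(xs[-2] - xs[-3])
    d2 = abs(xs[-1] - xs[-2])
    return math.log(d2 / d1) / math.log(d1 / d0)

# Demo on an illustrative Newton run for f(x) = x^2 - 2 (simple zero):
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x*x - 2.0) / (2.0*x))
print(coc(xs, math.sqrt(2.0)), acoc(xs))   # both close to 2
```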
The configuration of the adopted laptop is as follows:
- Processor: Intel(R) Core(TM)2 Duo CPU T6400 @ 2.00 GHz;
- Manufacturer: HP;
- Installed memory (RAM): 4.00 GB;
- Windows edition: Windows 7 Professional;
- System type: 64-bit operating system.
In order to maintain uniformity in the comparison of the iterative methods, we choose the same value of the free parameter β in the existing methods as well as in ours. We consider five existing methods for comparison, namely (3)–(7); the details of these methods are given in the Introduction.
Remark 2.
For the following specific values of the weight functions,
we can obtain the scheme of Behl et al. [18] as a special case of our method.
Example 1
(Eigenvalue problem). The eigenvalue problem is one of the most basic and challenging problems of linear algebra, and many properties of a physical system can be determined from it. Analytical computation is not always feasible, so we rely on numerical techniques that provide approximate zeros. Therefore, we choose the following square matrix, whose characteristic polynomial has multiple zeros:
whose characteristic equation is given below:
The characteristic function has a multiple zero of known multiplicity. The computational results, along with the starting guesses, are depicted in Table 1 and Table 2.
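Because the matrix and its characteristic equation are not reproduced above, the following sketch illustrates the workflow of this example on a hypothetical stand-in matrix with a repeated eigenvalue: form the characteristic polynomial numerically, then refine the multiple eigenvalue (modified Newton is used here purely for illustration, in place of the paper’s methods):

```python
# Hypothetical stand-in for Example 1: characteristic polynomial of a matrix
# with a repeated eigenvalue, refined by a multiplicity-aware iteration.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],     # eigenvalue 2 with multiplicity m = 2
              [0.0, 0.0, 5.0]])
p = np.poly(A)                     # coefficients of the characteristic polynomial
f  = lambda x: np.polyval(p, x)
df = lambda x: np.polyval(np.polyder(p), x)

x = 1.7                            # starting guess near the double eigenvalue
for _ in range(6):
    x = x - 2 * f(x) / df(x)       # modified Newton with m = 2
print(x)                           # -> 2.0
```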
From Table 1, we can conclude that our methods display the most outstanding behavior among the mentioned methods in terms of the accuracy of the iterate $x_n$, the difference between two consecutive iterations and the absolute residual errors. Further, the other methods have almost two times larger residual errors than ours.
We can observe from Table 2 that our suggested methods approach the desired root more closely from the second iteration onward than the mentioned ones. In addition, the other existing methods have almost three times larger residual errors, which demonstrates the better performance of our methods.
Table 1.
Behavior of iterative methods on eigenvalue problem with .
| Methods | n | x_n | \|f(x_n)\| | \|x_{n+1} − x_n\| |
|---|---|---|---|---|
| HM | 1 | 5.3(−3) | ||
| 2 | ||||
| 3 | ||||
| SM1 | 1 | |||
| 2 | ||||
| 3 | ||||
| SM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| KM | 1 | |||
| 2 | ||||
| 3 | ||||
| BM | 1 | |||
| 2 | ||||
| 3 | ||||
| 1 | ||||
| 2 | ||||
| 3 | ||||
| PM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| PM3 | 1 | |||
| 2 | ||||
| 3 |
Table 2.
Behavior of iterative methods on eigenvalue problem with .
| Methods | n | x_n | \|f(x_n)\| | \|x_{n+1} − x_n\| |
|---|---|---|---|---|
| HM | 1 | |||
| 2 | ||||
| 3 | ||||
| SM1 | 1 | |||
| 2 | ||||
| 3 | ||||
| SM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| KM | 1 | |||
| 2 | ||||
| 3 | ||||
| BM | 1 | |||
| 2 | ||||
| 3 | ||||
| PM1 | 1 | |||
| 2 | ||||
| 3 | ||||
| PM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| PM3 | 1 | |||
| 2 | ||||
| 3 |
Example 2
(Continuous stirred tank reactor (CSTR)). Here, we consider another applied science problem, namely an isothermal continuous stirred tank reactor (CSTR). Two components are fed to the reactor at given rates, which yields the following reaction scheme (for details, see [19]):
Douglas [20] studied this model (55) while designing a simple feedback control system, and he transformed expression (55) into the following mathematical form:
where $K_C$ is the gain of the proportional controller. For a particular value of $K_C$, we obtain
The solutions of $f_2(x) = 0$ are called the poles of the open-loop transfer function. The zeros of $f_2$ are $-1.45$, $-2.85$, $-2.85$ and $-4.35$; among them, $\xi = -2.85$ is a multiple zero with $m = 2$. The starting points and numerical results for $f_2$ are illustrated in Table 3 and Table 4.
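As a runnable sketch of this example, we assume the quartic that is commonly associated with this CSTR control problem in the literature (a labeled assumption, since the expression for $f_2$ is not reproduced above); the refinement again uses modified Newton purely for illustration:

```python
# Assumed CSTR quartic (a literature-standard form, not quoted from above):
#   f2(x) = x^4 + 11.50 x^3 + 47.49 x^2 + 83.06325 x + 51.23266875,
# with zeros -1.45, -2.85 (double) and -4.35.
f2  = lambda x: x**4 + 11.50*x**3 + 47.49*x**2 + 83.06325*x + 51.23266875
df2 = lambda x: 4*x**3 + 34.50*x**2 + 94.98*x + 83.06325

x = -2.6                           # starting guess near the double zero
for _ in range(8):
    x = x - 2 * f2(x) / df2(x)     # modified Newton with m = 2
print(x)                           # -> -2.85
```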
From Table 3, we find that our methods attain noticeably lower residual errors than the best of the existing methods; the existing methods have almost two times larger residual errors. This also indicates the faster convergence of our methods as compared to the others. Our techniques also perform much better in terms of the difference between two consecutive iterations.
We can observe from Table 4 that our method attains the lowest residual error, below the best among the existing ones. This clearly indicates the fastest convergence and the smallest residual error. Our methods also have an almost two times lower error difference as compared to the existing ones.
Table 3.
Behavior of iterative methods on CSTR problem with .
| Methods | n | x_n | \|f(x_n)\| | \|x_{n+1} − x_n\| |
|---|---|---|---|---|
| HM | 1 | |||
| 2 | ||||
| 3 | ||||
| SM1 | 1 | |||
| 2 | ||||
| 3 | ||||
| SM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| KM | 1 | |||
| 2 | ||||
| 3 | ||||
| BM | 1 | |||
| 2 | ||||
| 3 | ||||
| PM1 | 1 | |||
| 2 | ||||
| 3 | ||||
| PM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| PM3 | 1 | |||
| 2 | ||||
| 3 |
Table 4.
Behavior of iterative methods on CSTR problem with .
| Methods | n | x_n | \|f(x_n)\| | \|x_{n+1} − x_n\| |
|---|---|---|---|---|
| HM | 1 | |||
| 2 | ||||
| 3 | ||||
| SM1 | 1 | |||
| 2 | −2.85000000000000 | |||
| 3 | ||||
| SM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| KM | 1 | |||
| 2 | ||||
| 3 | ||||
| BM | 1 | |||
| 2 | ||||
| 3 | ||||
| PM1 | 1 | |||
| 2 | ||||
| 3 | ||||
| PM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| PM3 | 1 | |||
| 2 | ||||
| 3 |
Example 3
(Root clustering problem). We chose a root clustering problem similar to that of Zeng [21]:
The zeros of $f_3$ are quite close to each other; therefore, this is known as a root clustering problem. We chose the zero of multiplicity 191 for the numerical experiment. The computational results are depicted in Table 5, along with the initial approximation.
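Zeng’s test polynomial is not reproduced above, so the following sketch uses a small hypothetical stand-in only to show why clustered multiple zeros are numerically delicate: the function is nearly flat across the whole cluster, so residuals alone barely distinguish the zeros.

```python
# Hypothetical stand-in with two nearby multiple zeros (x = 1.0 and x = 1.01).
f = lambda x: (x - 1.0)**5 * (x - 1.01)**4

for x in (0.99, 1.0, 1.005, 1.01, 1.02):
    print(f"f({x}) = {f(x):.3e}")   # tiny values across the whole cluster
```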
Undoubtedly, one of the mentioned methods demonstrates slightly better behavior than the others, including ours, in Table 5. However, the difference is small, unlike the clear advantage our methods show in Table 1, Table 2, Table 3 and Table 4; our results are within merely four significant digits of the best ones in terms of residual error.
Table 5.
Behavior of iterative methods on root clustering problem with .
| Methods | n | x_n | \|f(x_n)\| | \|x_{n+1} − x_n\| |
|---|---|---|---|---|
| HM | 1 | |||
| 2 | ||||
| 3 | ||||
| SM1 | 1 | |||
| 2 | ||||
| 3 | ||||
| SM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| KM | 1 | |||
| 2 | ||||
| 3 | ||||
| BM | 1 | |||
| 2 | ||||
| 3 | ||||
| PM1 | 1 | |||
| 2 | ||||
| 3 | ||||
| PM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| PM3 | 1 | |||
| 2 | ||||
| 3 |
Example 4
(Academic problem). We chose another academic problem, which is given by
The function has a multiple zero of known multiplicity. Computational results are depicted in Table 6, along with the initial approximation.
Undoubtedly, one of the mentioned methods demonstrates slightly better behavior than the others, including ours, in Table 6. However, the difference is small, unlike the clear advantage our methods show in Table 1, Table 2, Table 3 and Table 4; our results are within merely four significant digits of the best ones.
Table 6.
Behavior of iterative methods on with .
| Methods | n | x_n | \|f(x_n)\| | \|x_{n+1} − x_n\| |
|---|---|---|---|---|
| HM | 1 | |||
| 2 | ||||
| 3 | ||||
| SM1 | 1 | |||
| 2 | ||||
| 3 | ||||
| SM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| KM | 1 | |||
| 2 | ||||
| 3 | ||||
| BM | 1 | |||
| 2 | ||||
| 3 | ||||
| PM1 | 1 | |||
| 2 | ||||
| 3 | ||||
| PM2 | 1 | |||
| 2 | ||||
| 3 | ||||
| PM3 | 1 | |||
| 2 | ||||
| 3 |
Remark 3.
From Table 7, we find that PM1 has the lowest average execution time for attaining the desired accuracy. The average execution times of methods HM and SM1 are, respectively, more than two and three times those of PM1, PM2 and PM3. Further, our methods also consume less CPU time (on average) than SM2, KM and BM.
Table 7.
CPU timing on the basis of number of iterations.
| Methods | Ex. (1) | Ex. (1) | Ex. (2) | Ex. (2) | Ex. (3) | Ex. (4) | T.T. | A.T. |
|---|---|---|---|---|---|---|---|---|
| HM | 0.060000 | 0.350000 | 0.015001 | 0.060000 | 17.610078 | 0.0023191 | 18.0973921 | 3.01623202 |
| SM1 | 0.062001 | 0.340000 | 0.010000 | 0.045003 | 25.445229 | 0.0015465 | 25.9037795 | 4.31729658 |
| SM2 | 0.050000 | 0.340000 | 0.010000 | 0.046001 | 10.761069 | 0.001541 | 11.208611 | 1.86810183 |
| KM | 0.060000 | 0.332004 | 0.010000 | 0.048003 | 10.118055 | 0.0077654 | 10.5758274 | 1.7626379 |
| BM | 0.050000 | 0.331000 | 0.010000 | 0.040000 | 10.063077 | 0.0014246 | 10.4955016 | 1.74925027 |
| PM1 | 0.050000 | 0.320000 | 0.002000 | 0.031000 | 7.608033 | 0.0014013 | 8.0124343 | 1.33540572 |
| PM2 | 0.050000 | 0.316002 | 0.003000 | 0.040000 | 7.762041 | 0.0014151 | 8.1724581 | 1.36207635 |
| PM3 | 0.240000 | 0.320000 | 0.004000 | 0.040000 | 7.605034 | 0.0018232 | 8.2108572 | 1.3684762 |
The abbreviations T.T. and A.T. stand for total timing and average timing, respectively.
Remark 4.
On the basis of the number of iterations reported in Table 8, we conclude that PM2 requires the fewest average iterations (3.83) to attain the desired accuracy. In addition, the average number of iterations of our methods PM1 and PM3 (4) is also lower than 4.3, the lowest average among the existing methods. Thus, we deduce that our method PM2 is the fastest among the mentioned methods.
Table 8.
Number of iterations required in order to attain the desired accuracy.
| Methods | Ex. (1) | Ex. (1) | Ex. (2) | Ex. (2) | Ex. (3) | Ex. (4) | T.Iter. | A.Iter. |
|---|---|---|---|---|---|---|---|---|
| HM | 6 | 6 | 7 | 7 | 4 | 5 | 35 | 5.83 |
| SM1 | 5 | 4 | 5 | 4 | 4 | 3 | 26 | 4.3 |
| SM2 | 5 | 4 | 5 | 5 | 4 | 3 | 26 | 4.3 |
| KM | 5 | 4 | 5 | 5 | 4 | 3 | 26 | 4.3 |
| BM | 5 | 4 | 5 | 5 | 4 | 3 | 26 | 4.3 |
| PM1 | 5 | 4 | 4 | 4 | 4 | 3 | 24 | 4 |
| PM2 | 4 | 4 | 4 | 4 | 4 | 3 | 23 | 3.83 |
| PM3 | 4 | 4 | 4 | 4 | 4 | 4 | 24 | 4 |
The abbreviations T.Iter. and A.Iter. stand for total iterations and average iterations, respectively.
Remark 5.
Table 9.
COC based on the number of iterations required in order to attain the desired accuracy.
| Methods | Ex. (1) | Ex. (1) | Ex. (2) | Ex. (2) | Ex. (3) | Ex. (4) |
|---|---|---|---|---|---|---|
| HM | 4.000 | 4.000 | 2.000 | 2.000 | 4.000 | 3.000 |
| SM1 | 4.000 | 4.000 | 1.325 | 1.321 | 5.883 | 5.000 |
| SM2 | 4.000 | 4.000 | 1.330 | 6.012 | 4.000 | 5.000 |
| KM | 4.000 | 4.000 | 1.330 | 6.014 | 4.000 | 5.000 |
| BM | 4.000 | 4.000 | 1.329 | 6.017 | 4.000 | 5.000 |
| PM1 | 4.000 | 4.000 | 4.000 | 4.000 | 4.000 | 5.000 |
| PM2 | 4.000 | 4.000 | 4.000 | 4.000 | 4.000 | 5.000 |
| PM3 | 4.000 | 4.000 | 4.000 | 4.000 | 4.000 | 5.000 |
4. Concluding Remarks
- We have suggested a new two-step, derivative-free and cost-effective iterative scheme for multiple zeros with known multiplicity m.
- Our scheme is based on the weight function technique. By using weight functions at both substeps, we provide more flexibility for generating more general new schemes. Several new and existing special cases are depicted in Section 3 and Remark 2, respectively.
- Our scheme (8) uses only three evaluations of f at different points per iteration. Thus, the optimality of our scheme is confirmed by the Kung–Traub conjecture (see the efficiency-index note after this list).
- In Example 3, one of the mentioned methods demonstrates slightly better behavior than our methods, but our results are within merely four significant digits of the best ones in terms of residual error.
- PM1 requires the lowest execution time to obtain the numerical results. The execution times of methods HM and SM1 are, respectively, more than two and three times those of our methods PM1, PM2 and PM3. Thus, we deduce that our schemes are cost-effective.
- The average numbers of iterations of our methods PM1, PM2 and PM3 (4, 3.83 and 4, respectively) are lower than 4.3, the lowest average among the existing methods.
- Our scheme cannot be used for solving systems of nonlinear equations. In the future, work can proceed in two directions: extension to eighth-order convergence for multiple roots, or extension to nonlinear systems.
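As a quantitative footnote to the optimality point above, using the standard definition of the efficiency index (not restated in the paper): a method of order $p$ that uses $d$ function evaluations per iteration has index $E = p^{1/d}$, so here

```latex
E = 4^{1/3} \approx 1.587 \; > \; 2^{1/2} \approx 1.414,
```

where $2^{1/2}$ is the corresponding index of Newton’s method (order two, two evaluations per iteration).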
Funding
This research was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. D-013-130-1441-1442.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. D-013-130-1441-1442. The author, therefore, acknowledges with thanks DSR for the technical and financial support.
Conflicts of Interest
The author declares no conflict of interest.
References
- Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1964.
- Petkovic, M.; Neta, B.; Petkovic, L.; Dzunic, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2012.
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
- Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press/Taylor & Francis: Boca Raton, FL, USA, 2017.
- Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education: New Delhi, India, 2006.
- Hueso, J.L.; Martínez, E.; Teruel, C. Determination of multiple roots of nonlinear equations and applications. J. Math. Chem. 2015, 53, 880–892.
- Sharma, J.R.; Kumar, S.; Jäntschi, L. On a class of optimal fourth order multiple root solvers without using derivatives. Symmetry 2019, 11, 1452.
- Sharma, J.R.; Kumar, S.; Jäntschi, L. On derivative free multiple-root finders with optimal fourth order convergence. Mathematics 2020, 8, 1091.
- Kumar, S.; Kumar, D.; Sharma, J.R.; Cesarano, C.; Aggarwal, P.; Chu, Y.M. An optimal fourth order derivative-free numerical algorithm for multiple roots. Symmetry 2020, 12, 1038.
- Behl, R.; Alharbi, S.K.; Mallawi, F.O.; Salimi, M. An optimal derivative-free Ostrowski’s scheme for multiple roots of nonlinear equations. Mathematics 2020, 8, 1809.
- Le, D. An efficient derivative-free method for solving nonlinear equations. ACM Trans. Math. Softw. 1985, 11, 250–262.
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A new technique to obtain derivative-free optimal iterative methods for solving nonlinear equations. J. Comput. Appl. Math. 2013, 252, 95–102.
- Zhanlav, T.; Otgondorj, K. Comparison of some optimal derivative-free three-point iterations. J. Numer. Anal. Approx. Theory 2020, 49, 76–90.
- Cordero, A.; Torregrosa, J.R. Low-complexity root-finding iteration functions with no derivatives of any order of convergence. J. Comput. Appl. Math. 2015, 275, 502–515.
- Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
- Ahlfors, L.V. Complex Analysis; McGraw-Hill: New York, NY, USA, 1979.
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
- Behl, R.; Cordero, A.; Torregrosa, J.R. A new higher-order optimal derivative free scheme for multiple roots. J. Comput. Appl. Math. 2022, 404, 113773.
- Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications; Prentice Hall PTR: Hoboken, NJ, USA, 1999.
- Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972; Volume 2.
- Zeng, Z. Computing multiple roots of inexact polynomials. Math. Comput. 2004, 74, 869–903.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).