Abstract
In this paper, we introduce some new quantum (q-)numerical techniques of midpoint and trapezoidal type, constructed essentially by using the decomposition technique. We also establish the order of convergence of the suggested iterative methods. Numerical examples demonstrate that the new q-iterative methods generate solutions to nonlinear equations with acceptable accuracy.
Keywords:
Taylor’s series in q-calculus; iterative methods; convergence analysis; Daftardar-Gejji-Jafari decomposition technique
MSC:
49J40; 90C33
1. Introduction
In most scientific and engineering applications, finding the solutions of nonlinear equations is an active area of research. Many researchers have explored iterative methods of various orders for solving nonlinear equations using techniques such as variational iteration methods and decomposition techniques; for details, see [1,2,3,4,5,6,7,8,9,10,11,12]. Symmetry analysis is a significant tool in various areas of mathematics and physics. Several problems in engineering and the mathematical sciences possess symmetry and can be transformed into nonlinear systems; for details, see [13].
In the Adomian decomposition method, the solution is expressed as an infinite series that converges towards the exact solution. Chun [2] and Abbasbandy [14] constructed and investigated different higher-order iterative methods by applying the decomposition technique of Adomian [15]. Darvishi and Barati [16] also applied the Adomian decomposition technique to develop Newton-type methods that are cubically convergent for systems of nonlinear equations. Daftardar-Gejji and Jafari [17] studied different modifications of the Adomian decomposition method [15] and suggested a simple technique that does not need the evaluation of derivatives of the Adomian polynomials, which is the major advantage of this technique over the Adomian decomposition method.
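To make the contrast concrete, the following sketch applies the Daftardar-Gejji and Jafari decomposition to a scalar fixed-point equation x = c + N(x). The function names and the test equation are our own illustration under stated assumptions, not taken from [17]; the point is that no derivatives of N (and hence no Adomian polynomials) are needed.

```python
import math

def dgj_partial_sums(N, c, m):
    """Daftardar-Gejji-Jafari decomposition for x = c + N(x).

    The solution is written as x = x_0 + x_1 + ... with x_0 = c,
    x_1 = N(s_0) and x_{k+1} = N(s_k) - N(s_{k-1}), where s_k denotes
    the k-th partial sum.  Only evaluations of N itself are required.
    """
    sums = [c]
    prev = 0.0                       # N at the "previous" partial sum; starts at 0
    for _ in range(m):
        cur = N(sums[-1])
        sums.append(sums[-1] + cur - prev)
        prev = cur
    return sums

# Illustrative contraction (our own example): x = 1 + 0.1*sin(x)
s = dgj_partial_sums(lambda x: 0.1 * math.sin(x), 1.0, 20)
```

For a contraction N, the partial sums converge to the fixed point; here the final value satisfies x = 1 + 0.1 sin(x) essentially to machine precision.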
Jackson [18] introduced the q-Taylor’s formula. Then, Jing and Fan [19] derived the q-Taylor’s formula with its q-remainder by using the q-differentiation approach and established results on the q-remainder in the q-Taylor’s formula. Ernst presented four different q-Taylor’s formulas along with integral remainders; see [20,21,22].
In this paper, we consider the well-known fixed-point iterative method, in which the nonlinear equation f(x) = 0 is rewritten in an equivalent fixed-point form. We determine the convergence of our proposed methods. In order to illustrate the efficiency of these new methods, we present several numerical examples. We hope that the ideas and techniques of this paper will inspire interested readers working in this field.
2. Construction of q-Iterative Methods
In this section, some new multi-step q-iterative methods of different orders are constructed by considering the midpoint and trapezoidal rules in the setting of q-calculus and using the decomposition technique of [17].
2.1. Midpoint Rule
Consider the nonlinear equation
which is equivalent to
Assume that β is a simple root of the nonlinear Equation (1) and that the initial guess x_0 is sufficiently close to the root. Using the fundamental theorem of calculus and the midpoint quadrature formula in the q-calculus, we have
Now, using the technique of He [23], the nonlinear Equation (1) can be written as an equivalent coupled system of equations
where
from which it follows that
where
and
It is clear that this operator is nonlinear. Now, we establish a sequence of higher-order iterative methods by implementing the decomposition technique presented by Daftardar-Gejji and Jafari [17]. In this technique, the solution of (1) is represented in terms of an infinite series.
Here, the operator can be decomposed as:
Consequently, it follows that
and
It is noted that is approximated by
and
For
From (4), it can easily be computed
Using (7), we obtain
For
Using (12), we have
This fixed-point formulation is used to suggest the following Algorithms 1–4.
| Algorithm 1 A new q-iterative scheme of second order convergence |
For a given initial guess x_0, an approximate solution is computed by the following iterative scheme
|
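The displayed formulas of Algorithm 1 did not survive typesetting here, so as a hedged illustration we sketch the simplest second-order scheme of this family: the q-Newton iteration x_{n+1} = x_n − f(x_n)/D_q f(x_n), built on the Jackson q-derivative. This is an assumption about the general shape of such a scheme, not a transcript of Algorithm 1.

```python
def q_derivative(f, x, q):
    """Jackson q-derivative: D_q f(x) = (f(q*x) - f(x)) / ((q - 1) * x), x != 0.
    As q -> 1 this tends to the ordinary derivative f'(x)."""
    if x == 0.0:
        raise ValueError("the Jackson q-derivative is undefined at x = 0")
    return (f(q * x) - f(x)) / ((q - 1.0) * x)

def q_newton(f, x0, q=0.99, tol=1e-12, max_iter=100):
    """q-analogue of Newton's method: x <- x - f(x) / D_q f(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / q_derivative(f, x, q)
    return x
```

For f(x) = x² − 2 with x_0 = 1.5 the iteration settles on √2; for q near 1 it behaves like classical Newton, since D_q f approaches f′. One can also check the textbook identity D_q x³ = (1 + q + q²) x².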
Thus,
For
Take,
This relation yields the following two-step method for solving nonlinear Equation (1).
| Algorithm 2 A new q-iterative scheme of third order convergence |
For a given initial guess x_0, the approximate solution can be computed by the following iterative scheme.
|
It is noted that
For
Using (17), we have
Using this relation, we have the following three-step method for solving nonlinear Equation (1).
| Algorithm 3 A new q-iterative scheme of fourth order convergence |
For a given initial guess x_0, compute the approximate solution by the following iterative scheme
|
2.2. Trapezoidal Rule
Again using the technique of He [23] and the fundamental theorem of calculus along with the trapezoidal rule in the q-calculus, we can obtain
and
from which it follows that
where
and
Now, applying the decomposition technique of Daftardar-Gejji and Jafari [17], we have
For
From (20), it can easily be computed as
Using (23), we obtain
For
Using (25), we have
This formulation determines Algorithm 1.
For
Take
This relation yields the following two-step method for solving nonlinear Equation (1).
| Algorithm 4 A new q-iterative scheme of third order convergence |
For a given initial guess x_0, the approximate solution can be computed by the following iterative scheme
|
It is noted that
For
Using (28), we have
This formulation yields the following three-step method for solving nonlinear Equation (1).
| Algorithm 5 A new q-iterative scheme of fourth order convergence |
For a given initial guess x_0, the approximate solution is computed by the following iterative scheme.
|
3. Order of Convergence
This section comprises the convergence analysis of the q-iterative methods determined by Algorithms 1–5 in the previous section.
In the following theorem, we find the order of convergence of Algorithm 1, which is quadratic.
Theorem 1.
Let I be an open interval and f : I → ℝ be a differentiable function. If β ∈ I is a simple root of f(x) = 0 and x_0 is sufficiently close to β, then the multi-step method defined by Algorithm 1 has quadratic convergence.
Proof.
Let β be the root of the nonlinear equation f(x) = 0. Let e_n and e_{n+1} be the errors at the nth and (n+1)th iterations, respectively.
Now, expanding in terms of the q-Taylor’s series about β, we obtain
and
where
and
Using (14), we have
Hence
□
In the following theorem, we find the order of convergence of Algorithm 2, which is cubic.
Theorem 2.
Let I be an open interval and f : I → ℝ be a differentiable function. If β ∈ I is a simple root of f(x) = 0 and x_0 is sufficiently close to β, then the multi-step method defined by Algorithm 2 has third-order convergence.
Proof.
From (37), we obtain
Using (15), we have
Expanding in terms of the q-Taylor’s series about β, we obtain
Expanding in terms of the q-Taylor’s series about β, we obtain
and
Now,
From (16), we obtain
Therefore,
□
In the following theorem, we find the order of convergence of Algorithm 3, which is fourth order.
Theorem 3.
Let I be an open interval and f : I → ℝ be a differentiable function. If β ∈ I is a simple root of f(x) = 0 and x_0 is sufficiently close to β, then the multi-step method defined by Algorithm 3 has fourth-order convergence.
Proof.
From (38), we obtain
From (45), we obtain
Expanding in terms of the q-Taylor’s series about β, we have
Expanding in terms of the q-Taylor’s series about β, we have
Now,
Now,
Using (19), we obtain
Therefore,
□
In the following theorem, we find the order of convergence of Algorithm 4, which is third order.
Theorem 4.
Let I be an open interval and f : I → ℝ be a differentiable function. If β ∈ I is a simple root of f(x) = 0 and x_0 is sufficiently close to β, then the multi-step method defined by Algorithm 4 has third-order convergence.
Proof.
From (38), we obtain
Expanding in terms of the q-Taylor’s series, we obtain
Now,
Now,
Using (28), we have
Therefore,
□
In the following theorem, we find the order of convergence of Algorithm 5, which is fourth order.
Theorem 5.
Let I be an open interval and f : I → ℝ be a differentiable function. If β ∈ I is a simple root of f(x) = 0 and x_0 is sufficiently close to β, then the multi-step method defined by Algorithm 5 has fourth-order convergence.
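The orders established in Theorems 1–5 can also be confirmed numerically. A standard, method-independent estimate is the computational order of convergence p ≈ ln(e_{n+1}/e_n) / ln(e_n/e_{n−1}), computed from three successive errors e_k = |x_k − β|; the helper below is a generic sketch of this check, not part of the paper's derivations.

```python
import math

def computational_order(errors):
    """Estimate the order p from the last three successive absolute errors via
    p ≈ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Errors shrinking quadratically (e_{n+1} ≈ e_n**2) report p ≈ 2:
print(computational_order([1e-1, 1e-2, 1e-4]))
```

Feeding this helper the error columns of the tables in Section 4 gives an empirical check of the quadratic, cubic, and quartic orders claimed above.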
4. Numerical Examples and Comparison Results
This section elaborates on the efficacy of the algorithms introduced in this paper with the support of examples. All the numerical experiments are performed on an Intel(R) Core(TM) processor with 12 GB of RAM, and all the codes are written in Maple. We use a fixed stopping tolerance and obtain an approximate simple root rather than the exact one, limited by the precision of the computer.
The abbreviation CAG is used for the classical iterative method and QAG for the q-analogue of the classical iterative method.
Recall the classical Algorithm 2A in [8], defined by
and the classical Algorithm 2B in [8], defined by
and the classical Algorithm 2C in [8], defined by
and the classical Algorithm 2D in [8], defined by
and the classical Algorithm 2E in [8], defined by
For simplicity, we denote Algorithms 1–5 by QAG1, QAG2, QAG3, QAG4 and QAG5, respectively. The computational results are presented in tables to elaborate on the performance and efficacy of our q-iterative methods, which is the main motivation for transforming the classical methods toward the q-iterative methods.
The first two examples compare the performance of the q-iterative methods with the classical methods for different values of q, up to three iterations. Similarly, the performance of the q-iterative methods for different values of q can be checked for further iterations until the desired accuracy is achieved.
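Experiments of the kind reported in the following tables can be reproduced in outline as below. Since the paper's test functions did not survive extraction here, we use f(x) = x³ + 4x² − 10 with x_0 = 1.5 purely as a stand-in, and compare three iterations of a classical Newton step with its q-analogue for several values of q; all names are our own.

```python
def q_derivative(f, x, q):
    """Jackson q-derivative D_q f(x) = (f(q*x) - f(x)) / ((q - 1) * x)."""
    return (f(q * x) - f(x)) / ((q - 1.0) * x)

def residuals(f, x0, q=None, iters=3):
    """|f(x_n)| after each of `iters` Newton-type steps.
    q=None uses a central finite difference in place of the classical derivative."""
    out, x, h = [], x0, 1e-7
    for _ in range(iters):
        d = q_derivative(f, x, q) if q is not None else (f(x + h) - f(x - h)) / (2 * h)
        x -= f(x) / d
        out.append(abs(f(x)))
    return out

f = lambda x: x**3 + 4 * x**2 - 10
for q in (None, 0.9, 0.99):
    print("q =", q, residuals(f, 1.5, q))
```

In each such run the residuals shrink toward zero, and for q near 1 the q-iterates track the classical ones, which is the qualitative behaviour the tables document.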
Example 1.
For and
Table 1 shows the computation of the iterates and residuals for different values of q by using QAG1.
Table 1.
Calculation of the iterates and residuals for different values of q by using QAG1.
We can observe from Table 1 that more accurate values of the iterates are obtained for certain values of q, for which the residuals tend towards zero. The residuals calculated by QAG1 at these values of q are closer to zero as compared to those calculated by CAG1.
Table 2 shows the computation of the iterates and residuals for different values of q by using QAG2.
Table 2.
Calculation of the iterates and residuals for different values of q by using QAG2.
We can observe from Table 2 that more accurate values of the iterates are obtained for certain values of q, for which the residuals tend towards zero. The residuals calculated by QAG2 at these values of q are closer to zero as compared to those calculated by CAG2.
Table 3 shows the computation of the iterates and residuals for different values of q by using QAG3.
Table 3.
Calculation of the iterates and residuals for different values of q by using QAG3.
We can observe from Table 3 that more accurate values of the iterates are obtained for certain values of q, for which the residuals tend towards zero. The residuals calculated by QAG3 at these values of q are closer to zero as compared to those calculated by CAG3.
Table 4 shows the computation of the iterates and residuals for different values of q by using QAG4.
Table 4.
Calculation of the iterates and residuals for different values of q by using QAG4.
We can observe from Table 4 that more accurate values of the iterates are obtained for certain values of q, for which the residuals tend towards zero. The residuals calculated by QAG4 at these values of q are closer to zero as compared to those calculated by CAG4.
Table 5 shows the computation of the iterates and residuals for different values of q by using QAG5.
Table 5.
Calculation of the iterates and residuals for different values of q by using QAG5.
We can observe from Table 5 that more accurate values of the iterates are obtained for certain values of q, for which the residuals tend towards zero. The residuals calculated by QAG5 at these values of q are closer to zero as compared to those calculated by CAG5.
Example 2.
For and
Table 6 shows the computation of the iterates and residuals for different values of q by using QAG1.
Table 6.
Calculation of the iterates and residuals for different values of q by using QAG1.
We can observe from Table 6 that more accurate values of the iterates are obtained for certain values of q, for which the residuals tend towards zero. The residuals calculated by QAG1 at these values of q are closer to zero as compared to those calculated by CAG1.
Table 7 shows the computation of the iterates and residuals for different values of q by using QAG2.
Table 7.
Calculation of the iterates and residuals for different values of q by using QAG2.
We can observe from Table 7 that more accurate values of the iterates are obtained for certain values of q, for which the residuals tend towards zero. The residuals calculated by QAG2 at these values of q are closer to zero as compared to those calculated by CAG2.
Table 8 shows the computation of the iterates and residuals for different values of q by using QAG3.
Table 8.
Calculation of the iterates and residuals for different values of q by using QAG3.
We can observe from Table 8 that more accurate values of the iterates are obtained for certain values of q, for which the residuals tend towards zero. The residuals calculated by QAG3 at these values of q are closer to zero as compared to those calculated by CAG3.
Table 9 shows the computation of the iterates and residuals for different values of q by using QAG4.
Table 9.
Calculation of the iterates and residuals for different values of q by using QAG4.
We can observe from Table 9 that more accurate values of the iterates are obtained for certain values of q, for which the residuals tend towards zero. The residuals calculated by QAG4 at these values of q are closer to zero as compared to those calculated by CAG4.
Table 10 shows the computation of the iterates and residuals for different values of q by using QAG5.
Table 10.
Calculation of the iterates and residuals for different values of q by using QAG5.
We can observe from Table 10 that more accurate values of the iterates are obtained for certain values of q, for which the residuals tend towards zero. The residuals calculated by QAG5 at these values of q are closer to zero as compared to those calculated by CAG5.
Comparison of the Classical and q-Analogue Iterative Methods
Here, we check the efficiency of our new iterative methods on some nonlinear equations. Furthermore, we compare the standard Newton’s method (NM), the Halley method (HM), Algorithm 2B [8] and Algorithm 2C [8] with our new iterative Algorithms 1–5. In the tables, we display the number of iterations (IT), the approximate root, the corresponding function value, the distance between two successive estimates, and the CPU time. It is important to mention that, in order to obtain better computational results of the q-iterative methods, we take a suitable value of q.
Example 3.
For and
| Methods | IT | CPU-Time |||
| NM [8] | 5 | 1.203 | |||
| HM [8] | 5 | 1.171 | |||
| Algorithm 2B [8] | 3 | 1.156 | |||
| Algorithm 2C [8] | 3 | 0.828 | |||
| QAG1 | 4 | 0.140 | |||
| QAG2 | 3 | 0.156 | |||
| QAG3 | 3 | 0.140 | |||
| QAG4 | 4 | 0.109 | |||
| QAG5 | 5 | 0.171 |
Example 4.
For and
| Methods | IT | CPU-Time |||
| HM [8] | 4 | 1.062 | |||
| Algorithm 2B [8] | 3 | 1.406 | |||
| Algorithm 2C [8] | 3 | 1.109 | |||
| QAG1 | 4 | 0.390 | |||
| QAG2 | 3 | 0.375 | |||
| QAG3 | 3 | 0.390 | |||
| QAG4 | 3 | 0.390 | |||
| QAG5 | 3 | 0.390 |
Example 5.
For and
| Methods | IT | CPU-Time |||
| NM [8] | 5 | 1.421 | |||
| HM [8] | 5 | 1.406 | |||
| Algorithm 2B [8] | 3 | 1.640 | |||
| Algorithm 2C [8] | 3 | 1.328 | |||
| QAG1 | 4 | 0.609 | |||
| QAG2 | 3 | 0.625 | |||
| QAG3 | 3 | 0.593 | |||
| QAG4 | 4 | 0.203 | |||
| QAG5 | 4 | 0.250 |
Example 6.
For and
| Methods | IT | CPU-Time |||
| NM [8] | 4 | 0.390 | |||
| HM [8] | 4 | 0.281 | |||
| Algorithm 2B [8] | 3 | 0.203 | |||
| Algorithm 2C [8] | 2 | 0.218 | |||
| QAG1 | 3 | 0.187 | |||
| QAG2 | 3 | 0.171 | |||
| QAG3 | 2 | 0.156 | |||
| QAG4 | 3 | 0.187 | |||
| QAG5 | 3 | 0.171 |
The above tables compare the solutions obtained by using the classical methods and our q-iterative methods. The results show that our q-analogue iterative methods QAG1, QAG2, QAG3, QAG4 and QAG5 give the same results as the classical methods NM, HM, Algorithm 2B [8] and Algorithm 2C [8].
Remark 1.
The efficiency index is defined as p^(1/m), where p represents the order of the method and m is the total number of function evaluations per iteration required by the method.
| Methods | Efficiency Index | Methods | Efficiency Index |
| Algorithm 2A [8] | Algorithm 1 ||
| Algorithm 2B [8] | Algorithm 2 ||
| Algorithm 2C [8] | Algorithm 3 ||
| Algorithm 2D [8] | Algorithm 4 ||
| Algorithm 2E [8] | Algorithm 5 ||
We conclude that the efficiency indexes of Algorithms 1–5 are the same as those of Algorithms 2A [8], 2B [8], 2C [8], 2D [8] and 2E [8], respectively.
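The indexes in the table above can be recomputed directly from this definition. The helper below is a trivial sketch; the per-method values of p and m are as reported in [8] and Section 3, and are not hard-coded here.

```python
def efficiency_index(p, m):
    """Ostrowski efficiency index p**(1/m): order p achieved with m function
    evaluations per iteration."""
    return p ** (1.0 / m)

# e.g. a second-order method with two evaluations per iteration:
print(round(efficiency_index(2, 2), 4))   # prints 1.4142
```

Higher order alone does not guarantee a better index; an extra function evaluation per iteration can offset the gain, which is why p and m enter the definition together.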
5. Conclusions
In this paper, we have introduced some new multi-step algorithms using the Daftardar-Gejji and Jafari decomposition technique. The comparison of these newly established algorithms with the classical methods reflects that the proposed q-iterative methods are reliable alternatives to the already known algorithms. The computational results show that the q-analogues of the iterative methods for solving nonlinear equations generate the same results as the classical methods, but with a higher convergence rate towards the root than that of the classical methods. It is worth mentioning that the results obtained in this paper can be extended by using post-quantum calculus techniques. This will be an interesting problem for future research.
Author Contributions
Conceptualization, K.N., A.G.K., F.A., M.U.A. and C.C.; Data curation, K.N., A.G.K., F.A., M.U.A. and C.C.; Formal analysis, K.N., A.G.K., F.A., M.U.A. and C.C.; Investigation, K.N., A.G.K., F.A., M.U.A. and C.C.; Methodology, K.N., A.G.K., F.A., M.U.A. and C.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by the Department of Mathematics, Faculty of Science, Khon Kaen University, Fiscal Year 2022.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The authors are grateful to the editor and the anonymous reviewers for their valuable comments and suggestions.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Alqudah, M.A.; Mohammed, P.O.; Abdeljawad, T. Solution of singular integral equations via Riemann–Liouville fractional integrals. Math. Probl. Eng. 2020, 2020, 1250970.
- Chun, C. Iterative methods improving Newton’s method by the decomposition method. Comput. Math. Appl. 2005, 50, 1559–1568.
- Hamasalh, F.K.; Muhammad, P.O. Generalized quartic fractional spline interpolation with applications. Int. J. Open Probl. Comput. Sci. Math. 2015, 8, 67–80.
- Kurt, A.; Şenol, M.; Tasbozan, O.; Chand, M. Two reliable methods for the solution of fractional coupled Burgers’ equation arising as a model of polydispersive sedimentation. Appl. Math. Nonlinear Sci. 2019, 4, 523–534.
- Mohammed, P.O.; Alqudah, M.A.; Hamed, Y.S.; Kashuri, A.; Abualnaja, K.M. Solving the modified regularized long wave equations via higher degree B-spline algorithm. J. Funct. Spaces 2021, 2021, 5580687.
- Mohammed, P.O.; Machado, J.A.T.; Guirao, J.L.; Agarwal, R.P. Adomian decomposition and fractional power series solution of a class of nonlinear fractional differential equations. Mathematics 2021, 9, 1070.
- Noor, M.A.; Waseem, M.; Noor, K.I.; Ali, M.A. New iterative technique for solving nonlinear equations. Appl. Math. Comput. 2015, 265, 1115–1125.
- Sana, G.; Noor, M.A.; Noor, K.I. Some multistep iterative methods for nonlinear equation using quadrature rule. Int. J. Anal. Appl. 2020, 18, 920–938.
- Solaiman, O.S.; Hashim, I. Two new efficient sixth order iterative methods for solving nonlinear equations. J. King Saud Univ. Sci. 2019, 31, 701–705.
- Touchent, K.A.; Hammouch, Z.; Mekkaoui, T. A modified invariant subspace method for solving partial differential equations with non-singular kernel fractional derivatives. Appl. Math. Nonlinear Sci. 2020, 5, 35–48.
- Ullah, M.Z.; Ahmad, F.; Jabbar, M.A. A correction note on three-step iterative methods for nonlinear equations and generalization of method. J. Mod. Methods Numer. 2014, 5, 10–17.
- Wu, J.; Yuan, J.; Gao, W. Analysis of fractional factor system for data transmission in SDN. Appl. Math. Nonlinear Sci. 2019, 4, 191–196.
- Wang, X.-F. Fixed-point iterative method with eighth-order constructed by undetermined parameter technique for solving nonlinear systems. Symmetry 2021, 13, 863.
- Abbasbandy, S. Improving Newton–Raphson method for nonlinear equations by modified Adomian decomposition method. Appl. Math. Comput. 2003, 145, 887–893.
- Adomian, G. Nonlinear Stochastic Systems Theory and Applications to Physics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1988; Volume 46.
- Darvishi, M.T.; Barati, A. A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 187, 630–635.
- Daftardar-Gejji, V.; Jafari, H. An iterative method for solving nonlinear functional equations. J. Math. Anal. Appl. 2006, 316, 753–763.
- Jackson, F.H. A q-form of Taylor’s formula. Messenger Math. 1909, 38, 62–64.
- Jing, S.C.; Fan, H.Y. q-Taylor’s formula with its q-remainder. Commun. Theor. Phys. 1995, 23, 117.
- Ernst, T. A method for q-calculus. J. Nonlinear Math. Phys. 2003, 10, 487–525.
- Ernst, T. A New Notation for q-Calculus and a New q-Taylor’s Formula; Department of Mathematics, Uppsala University: Uppsala, Sweden, 1999; pp. 1–28.
- Erzan, A. Finite q-differences and the discrete renormalization group. Phys. Lett. A 1997, 225, 235–238.
- He, J.H. A new iteration method for solving algebraic equations. Appl. Math. Comput. 2003, 135, 81–84.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).