Abstract
Many real-life problems can be reduced to scalar and vectorial nonlinear equations by means of mathematical modeling. In this paper, we introduce a new sixth-order iterative family for systems of nonlinear equations. In addition, we present their convergence analyses, together with computable radii that guarantee convergence for Banach space valued operators and error bounds based on Lipschitz constants. Moreover, we show their applicability to several real-life problems, such as kinematic synthesis, Bratu's, Fisher's, boundary value, and Hammerstein integral problems. Finally, numerical experiments show that the proposed schemes perform better than other competing schemes.
MSC:
65G99; 65H10
1. Introduction
Establishing efficient higher-order iterative schemes for finding solutions of the nonlinear equation
F(x) = 0, (1)
where F is a differentiable mapping with an open domain, is one of the foremost tasks in numerical analysis and computational methods because of its wide applicability to real-life situations. Many real-life problems can be phrased as the nonlinear system (1) with the same fundamental properties. For example, transport theory, combustion, reactor, kinematic synthesis, steering, chemical equilibrium, neurophysiology, and economic modeling problems have been solved by formulating them as (1); details can be found in the research articles [1,2,3,4,5].
Analytical methods for such problems are rare; therefore, many authors have developed iterative schemes. These methods depend on several factors, such as the initial guess(es), the problem under consideration, the structure of the proposed method, its efficiency, and so forth (for more details, see [6,7,8,9,10]). Some authors [11,12,13,14,15,16] have paid special attention to the development of higher-order multi-point iterative methods. Faster convergence toward the required root, better efficiency, lower CPU time, and higher accuracy are some of the main reasons behind the importance of multi-point methods.
The motivation for this work is to propose a new sixth-order iterative technique based on the weight-function approach with a lower computational cost for large nonlinear systems. The beauty of this approach is that it gives us the flexibility to produce new methods as well as to recover special cases of earlier ones. A good variety of applied science problems is considered in order to validate the presented methods. Finally, numerical experiments demonstrate the superiority of our schemes over others with regard to computational cost, residual error, and CPU time. Moreover, the new schemes also show a stable computational order of convergence and smaller asymptotic error constants in comparison with existing iterative methods.
2. Multi-Dimensional Case
Consider the following new scheme:
where is sufficiently differentiable in with , where . We demonstrate the sixth-order convergence in Theorem 1 by adopting the same procedure suggested in [16].
Let F be sufficiently differentiable in . The kth derivative of F at , , is the k-linear function  with , and we have
- , for each permutation of ,
that further yields
- (a)
- (b)
- .
This being contained in the neighborhood of the required root of , we have
where , , provided is invertible. We recognize that , since , and .
We can also write
I being the identity and .
Here, e^(k) = x^(k) − ξ denotes the error in the kth step. Then,
where M is a p-linear function , which is known as the error equation, and where p is the convergence order. Observe that is .
Theorem 1.
Suppose F is a sufficiently differentiable mapping with an open domain that contains the required zero ξ. Further, we assume that the Jacobian of F is invertible and continuous around ξ. Moreover, we assume that the starting guess is close enough to ξ to ensure convergence. Then, scheme (2) attains the maximum sixth-order convergence, provided that
where I is the identity matrix.
Proof.
We can write and as follows:
and
where I is the identity matrix of size and , .
Moreover, we have
After some simple algebraic calculations, we have
Finally, we have
where is a function of only . □
Specializations
Some of the fruitful cases are mentioned below:
(1) We assume
for , which generates the following new sixth-order, Jarratt-type scheme:
(2) Consider the following weight function for :
leading to
(3) Now, we assume another weight function (for ),
that yields
which is another new sixth-order scheme.
In like manner, we can obtain many familiar and advanced sixth-order, Jarratt-type schemes by adopting different weight functions.
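To make the structure of such compositions concrete, the following Python sketch implements one standard sixth-order Jarratt-type composition: a Jarratt fourth-order step followed by a frozen "averaged-Jacobian" correction, in the spirit of [16,23]. It is an illustrative member of this family, not necessarily identical to scheme (2) or to the special cases above; the test system, tolerance, and starting vector are assumptions chosen only for demonstration.

```python
import numpy as np

def jarratt_type_6(F, J, x0, tol=1e-12, max_iter=50):
    """Generic sixth-order Jarratt-type composition (illustrative sketch):
       y  = x - (2/3) J(x)^{-1} F(x)
       z  = x - (1/2) [3 J(y) - J(x)]^{-1} [3 J(y) + J(x)] J(x)^{-1} F(x)   (Jarratt, 4th order)
       x+ = z - [(3 J(y) - J(x)) / 2]^{-1} F(z)                             (extra step -> 6th order)
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Fx, Jx = F(x), J(x)
        a = np.linalg.solve(Jx, Fx)               # Newton correction J(x)^{-1} F(x)
        y = x - (2.0 / 3.0) * a
        Jy = J(y)
        A = 3.0 * Jy - Jx                         # "averaged" Jacobian, O(e^2)-accurate at the root
        z = x - 0.5 * np.linalg.solve(A, (3.0 * Jy + Jx) @ a)
        x_new = z - np.linalg.solve(0.5 * A, F(z))
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Illustrative test system (not from the paper): F(x) = (x1^2 + x2 - 2, x1 + x2^2 - 2), root (1, 1).
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
root, iters = jarratt_type_6(F, J, [0.5, 1.5])
```

Because (3J(y) − J(x))/2 approximates the Jacobian at the root up to second-order terms, the final correction raises the fourth-order Jarratt error to order six.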
3. Local Convergence Analysis
It is well known that iterative methods defined on the real line or on finite-dimensional Euclidean space constitute the motivation for extending these methods to more abstract settings, such as Hilbert, Banach, or other spaces. We carry out the local convergence analysis of method (2) after restating it for Banach space valued operators, for each n = 0, 1, 2, …, as
where ,  are Banach spaces,  is a nonempty, convex, and open subset of , , , , and  is such that . Then, under certain hypotheses given later, method (25) converges to a solution of the equation
F(x) = 0,
where F is a continuously differentiable operator in the sense of Fréchet. For an acceptable convergence analysis, we first need to define some parameters and scalar functions. Let  be a continuous and increasing function with , where . The equation
has at least one positive zero. Denote the smallest such solution by , and set . Let ,  be continuous and increasing maps with . Consider the maps  and  on , defined as
and
Suppose that
By the definition of the function  and (26), we have  and , as . Then, by the mean value theorem, there exists at least one solution of  in the interval . Denote the smallest such solution by .
Suppose
where
has at least one positive zero. Denote the smallest such solution by , and set
Further, we consider functions and on by
and
where  and  are continuous and increasing functions. We also get  and , as . Denote by  the smallest solution of the equation  in . Suppose that the equations
have at least one positive solution, where . Denote by  the smallest such solutions, and set . Define the functions  and  on the interval  by
and
We also get and , as . Denote by the smallest solution of equation in . Define a radius of convergence R by
It follows from each
and
Define . Let  stand for the closure of . Denote by  the space of bounded linear operators from  into .
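As a minimal illustration of how such radii can be computed in practice, the Python sketch below locates the smallest positive solution of an equation of the form g(t) = 1 numerically. The majorant functions w0 and g1 used here are purely illustrative stand-ins (w0(t) = L0·t and a Newton-type majorant), not the functions defined above; only the root-finding pattern is the point.

```python
from scipy.optimize import brentq

# Illustrative stand-in majorant functions (NOT the paper's w0, w, v, g1, g2, g3):
L0, L = 2.0, 2.5
w0 = lambda t: L0 * t                              # assumed Lipschitz-type majorant
g1 = lambda t: L * t / (2.0 * (1.0 - w0(t)))       # assumed Newton-type majorant

def smallest_positive_root(phi, upper, n=10_000):
    """Smallest positive solution of phi(t) = 1 on (0, upper): scan for the first
       sign change of phi(t) - 1 and refine it with Brent's method."""
    prev = 0.0
    for i in range(1, n + 1):
        t = upper * i / n
        if phi(t) - 1.0 > 0.0:
            return brentq(lambda s: phi(s) - 1.0, prev, t)
        prev = t
    return None

rho0 = 1.0 / L0                                    # smallest positive zero of w0(t) - 1
r1 = smallest_positive_root(g1, 0.999 * rho0)      # candidate radius from g1
# The radius of convergence R is then the minimum of the candidate radii r_i.
```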
The following conditions are the base for the study of local convergence analysis:
with such that and are invertible.
There exists a continuous and increasing function  with  such that, for each ,
Set where is given in (25).
There exist continuous and increasing functions , , and  with  such that, for each ,
and
There exists such that
Set .
Next, we provide the local convergence analysis of method (25) using the hypotheses and the previously developed notations.
Theorem 2.
Assume the hypotheses hold and . Then, the sequence , and
and
Moreover, ξ is the only solution of the equation F(x) = 0 in the set  given in .
Proof.
Estimates (40)–(42) are shown utilizing the hypotheses and mathematical induction. By , , (31) and (32), we have that
for each , so and
by the Banach perturbation lemma on invertible operators [6,7]. Then, is well-defined by method (25) for . By convexity, we have that for each , by adopting , we get
From the hypotheses () and , we get
Using the first substep of method (25) for , (31), (35), , (44) (for ), and (46) (for ), we have in turn that
so (40) holds for and . We must show that .
We need an estimate on the expression inside the bracket in (50):
so by , (44) and (49), we have in turn that (51), in norm, is bounded above by
Then, using (31), (37), (44) (for ), (46) (for ), (47), (49), and (52), we obtain
so (41) holds for and . Next, we must show that . Using (31), (38), and (47), we get that
Thus, and
Hence, is well-defined by the last substep of method (25). By adopting (31), (39), (53) and (55), we have
from which we get that
so (42) holds for and .
Then, we replace , by , , in the preceding estimations to finish the induction for (40)–(42). In view of the estimate
we conclude that  and . Let  be such that . Using , , and , we have
so . Therefore, by the identity
we deduce that . □
Hence, function A can be defined by
Similarly,
so we can define B by
Remark 1.
The results in this section were obtained using hypotheses only on the first derivative, in contrast to the results in Theorem 1, where hypotheses on derivatives of F up to the seventh order were used to show convergence order six. Hence, we have extended the applicability of method (25) to Banach space valued operators. Notice also that there are even simple functions defined on the real line for which the hypotheses of Theorem 1 do not hold; hence, method (2) may or may not converge. As a motivational and academic example, see Example 6 in the next section, where the third derivative of F does not exist. Using the approach of Theorem 2, we bypass the computation of derivatives of order higher than one; we assume hypotheses only on the first-order derivative of the operator F. For obtaining the order of convergence, we adopted
ρ ≈ ln(||x^(k+1) − ξ|| / ||x^(k) − ξ||) / ln(||x^(k) − ξ|| / ||x^(k−1) − ξ||), k = 1, 2, …,
or
ρ* ≈ ln(||x^(k+1) − x^(k)|| / ||x^(k) − x^(k−1)||) / ln(||x^(k) − x^(k−1)|| / ||x^(k−1) − x^(k−2)||), k = 2, 3, …,
the computational order of convergence (COC, ρ) and the approximate computational order of convergence (ACOC, ρ*) [17,18], respectively. These definitions can also be found in [19]. They do not require derivatives of order higher than one. Indeed, notice that to generate the iterates and therefore compute ρ and ρ*, we only need formula (2), which uses first derivatives alone. It is vital to note that ρ* does not require prior knowledge of the exact root ξ.
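For illustration, ρ and ρ* can be estimated from stored iterates as in the following Python sketch; the list of iterates, the Euclidean norm, and the function names are assumptions made for demonstration.

```python
import numpy as np

def coc(iterates, root):
    """Computational order of convergence rho (requires the exact root xi);
       needs at least three stored iterates."""
    e = [np.linalg.norm(np.asarray(x) - np.asarray(root)) for x in iterates]
    return np.log(e[-1] / e[-2]) / np.log(e[-2] / e[-3])

def acoc(iterates):
    """Approximate computational order of convergence rho* (no exact root needed);
       needs at least four stored iterates."""
    d = [np.linalg.norm(np.asarray(iterates[i + 1]) - np.asarray(iterates[i]))
         for i in range(len(iterates) - 1)]
    return np.log(d[-1] / d[-2]) / np.log(d[-2] / d[-3])
```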
4. Numerical Experimentation
Here, we demonstrate the suitability of our iterative methods for real-life problems and validate the theoretical results presented in the earlier sections. Therefore, we consider four real-life problems (namely, the one-dimensional Bratu, Fisher, kinematic synthesis, and Hammerstein integral problems); the fifth example is a standard academic problem, and the sixth is a motivational problem. The corresponding starting approximations and zeros are given in Examples 1–6.
Next, we consider our schemes (22), (23), and (24), denoted by (), (), and (), respectively, and investigate their computational behavior against existing techniques. We contrast them with the sixth-order schemes given by Hueso et al. [20] and Lotfi et al. [21], from which we consider expressions (14–15) and (5), denoted by  and , respectively. In addition, we compare them with an Ostrowski-type method proposed by Grau-Sánchez et al. [22], from which we choose the iterative scheme (7), denoted by . Further, we contrast them with the sixth-order iterative schemes presented by Sharma and Arora [23] (expression 18) and Abbasbandy et al. [24] (expression 8), denoted by  and , respectively. Furthermore, we contrast them with the sixth-order solution techniques designed by Soleymani et al. [25] (method 5) and Wang and Li [26] (method 6), denoted by  and , respectively.
In Tables 1, 2, 4, 6, and 7, we report the iteration indices , , , , and , computed using Mathematica (Version 9) with multiple-precision arithmetic and at least 300 digits of mantissa to minimize round-off errors. Further, the variable  is the last obtained value of . Furthermore, the radii of convergence and the central processing unit (CPU) time consumed by the distinct schemes are given in Tables 8 and 9, respectively. The notation  indicates .
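As an illustrative analogue of this setup (the experiments themselves were run in Mathematica), the following Python/mpmath sketch shows how the working precision is raised to 300 digits and how the residual and step norms reported in the tables can be computed; the Newton-type correction, the test system, and the function names are assumptions, not the authors' code.

```python
from mpmath import mp, matrix, norm, lu_solve

mp.dps = 300  # work with at least 300 significant digits, as in the reported tables

def newton_type_step(F, J, x):
    """One high-precision correction step; F(x) returns an mpmath column vector and
       J(x) the Jacobian matrix. The returned norms correspond to the residual
       ||F(x^(k))|| and the step ||x^(k+1) - x^(k)|| typically reported."""
    Fx = F(x)
    dx = lu_solve(J(x), Fx)          # solve J(x) * dx = F(x) in full precision
    x_new = x - dx
    return x_new, norm(Fx), norm(dx)

# Illustrative 2x2 system (assumed, for demonstration only):
F = lambda x: matrix([x[0]**2 + x[1] - 2, x[0] + x[1]**2 - 2])
J = lambda x: matrix([[2*x[0], 1], [1, 2*x[1]]])
x = matrix([mp.mpf('0.5'), mp.mpf('1.5')])
x, res_norm, step_norm = newton_type_step(F, J, x)
```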
Table 1.
Performance of different techniques on the Bratu problem (Example 1).
Table 2.
Performance of different techniques on Fisher's equation (Example 2).
Example 1. Bratu Problem:
The Bratu problem [27] has wide applicability in the areas of thermal reaction, the Chandrasekhar model of the expansion of the universe, chemical reactor theory, radiative heat transfer, the fuel ignition model of thermal combustion, and nanotechnology [28,29,30,31]. The mathematical formulation of this problem is given below:
By adopting the following central difference
it yields the following nonlinear system of size from BVP (67) with step size
We consider  and the initial value  for this problem; the computational outcomes are depicted in Table 1.
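A minimal Python sketch of this discretization is given below, assuming the classical one-dimensional Bratu form u'' + λ e^u = 0 with u(0) = u(1) = 0; the value of λ, the system size, and the zero starting vector are illustrative assumptions rather than the exact settings used for Table 1.

```python
import numpy as np

def bratu_residual(u, lam=3.0, a=0.0, b=0.0):
    """Central-difference residual of u'' + lam*exp(u) = 0 on (0, 1),
       u(0) = a, u(1) = b (classical 1D Bratu BVP, assumed form)."""
    n = u.size
    h = 1.0 / (n + 1)
    uu = np.concatenate(([a], u, [b]))            # attach boundary values
    return (uu[:-2] - 2.0 * uu[1:-1] + uu[2:]) / h**2 + lam * np.exp(u)

def bratu_jacobian(u, lam=3.0):
    """Tridiagonal Jacobian of the discretized Bratu system."""
    n = u.size
    h = 1.0 / (n + 1)
    J = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    return J + np.diag(lam * np.exp(u))

# e.g. u0 = np.zeros(20) as a starting vector for any of the iterative schemes above
```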
Example 2.
Here, we choose another well-known Fisher’s equation [32]
where D is the diffusion parameter. We adopted the finite-difference discretization technique in order to convert the differential Equation (69) into a system of nonlinear equations. Thus, we chose  as the required solution at the grid points of the mesh. In addition, M and N are the numbers of steps in the x and t directions, respectively, and h and k are the corresponding step sizes. Adopting central, backward, and forward differences results in:
leading to
where . For particular values of , , and , we obtain a nonlinear system of size ; with the starting point , the methods converge to the following solution:
The numerical outcomes are depicted in Table 2.
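The following Python sketch illustrates one such discretization of Fisher's equation u_t = D u_xx + u(1 − u), using a backward difference in time and a central difference in space for a single implicit time level; the boundary values, the previous time level, and D = 1 are assumptions for illustration and need not coincide with the exact grid used for Table 2.

```python
import numpy as np

def fisher_residual(u, u_prev, h, k, D=1.0, left=0.0, right=0.0):
    """Residual of one implicit (backward-in-time, central-in-space) step of
       u_t = D*u_xx + u*(1 - u); 'u' holds the unknown interior values at the new
       time level, 'u_prev' the values at the previous level (assumed discretization)."""
    uu = np.concatenate(([left], u, [right]))     # attach assumed boundary values
    u_xx = (uu[:-2] - 2.0 * uu[1:-1] + uu[2:]) / h**2
    return (u - u_prev) / k - D * u_xx - u * (1.0 - u)
```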
Example 3.
Here, we choose a well-known kinematic synthesis problem related to steering, as discussed in [4,5], which is defined as follows:
where
and
The values of  and  (in radians) are given in Table 3, and the behavior of the methods in Table 4. We chose the starting approximation  that converges to
Table 3.
The parameters and (in radians) used in Example 3.
Table 4.
Performance of different techniques on the kinematic synthesis problem (Example 3).
Example 4.
Here, we choose a well-known problem of applied science, the Hammerstein integral equation (see [10], pp. 19–20), which is given as follows:
where and the kernel F is
To convert expression (72) into a finite-dimensional problem, we adopt the following Gauss-Legendre quadrature formula:
where and are the abscissas and the weights, respectively. We recall by , where we have
and
The parameters  and  are listed in Table 5.
Table 5.
(Abscissas and weights for ).
The desired root is . The numerical outcomes, obtained with the starting approximation , are depicted in Table 6.
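A Python sketch of this discretization is given below. It assumes the classical Hammerstein form x(s) = 1 + c ∫₀¹ G(s, t) x(t)³ dt with the Green's-function kernel and c = 1/5, as in the standard example of [10]; the kernel, the cubic nonlinearity, the quadrature size, and the starting vector are stated assumptions, since the exact expression (72) is the one given above.

```python
import numpy as np

def hammerstein_system(m=8, c=0.2):
    """Discretize x(s) = 1 + c * int_0^1 G(s,t) x(t)^3 dt (classical Hammerstein
       example, assumed form) with m-point Gauss-Legendre quadrature on [0, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(m)
    t = 0.5 * (nodes + 1.0)                        # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * weights
    G = np.where(t[:, None] <= t[None, :],
                 t[:, None] * (1.0 - t[None, :]),
                 t[None, :] * (1.0 - t[:, None]))  # Green's-function kernel G(s, t)
    A = c * G * w[None, :]                         # a_ij = c * w_j * G(t_i, t_j)

    F = lambda x: x - 1.0 - A @ x**3
    J = lambda x: np.eye(m) - 3.0 * A * (x**2)[None, :]
    return F, J, np.ones(m)                        # residual, Jacobian, starting vector
```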
Table 6.
Performance of different techniques on the Hammerstein integral problem (Example 4).
Example 5.
Finally, we choose
Here, we picked  in order to obtain a large system of size . In addition, we selected the starting guess , which converges to ; the results are depicted in Table 7.
Table 7.
Performance of different techniques on Example 5.
Example 6.
As a counterexample, we pick a function F on , defined by
that leads to
and
Clearly,  is not bounded on Ω in a neighborhood of the point . This means that the analysis of Section 2 is not applicable. In particular, hypotheses on the seventh-order (or even higher) derivatives of F were used there to demonstrate the convergence of the proposed scheme. Owing to the analysis of Section 3, we now only demand hypotheses on the first-order derivative.
Further, we have
and the functions A and B, as given in Application 3.2. The desired solution of (6) is . The distinct radii, , COC (ρ), and  are stated in Tables 8 and 9.
Table 8.
Different radii of convergence.
Table 9.
Consumption of CPU time by distinct schemes.
5. Concluding Remarks
In this paper, a new family of sixth-order schemes was introduced to produce sequences converging to a solution of a nonlinear equation. In addition, we presented their convergence analyses, together with computable radii that guarantee convergence for Banach space valued operators and error bounds based on Lipschitz constants. These schemes turn out to be superior to existing ones that utilize similar information. Numerical experiments verify the convergence criteria and also numerically demonstrate the superiority of the new schemes.
Author Contributions
R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing–Original Draft Preparation; Writing–Review & Editing. All authors have read and agreed to the published version of the manuscript.
Funding
Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. D-534-130-1441.
Acknowledgments
This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D-534-130-1441). The authors, therefore, gratefully acknowledge the DSR technical and financial support.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Grosan, C.; Abraham, A. A new approach for solving nonlinear equations systems. IEEE Trans. Syst. Man Cybernet Part A Syst. Hum. 2008, 38, 698–714. [Google Scholar] [CrossRef]
- Lin, Y.; Bao, L.; Jia, X. Convergence analysis of a variant of the Newton method for solving nonlinear equations. Comput. Math. Appl. 2010, 59, 2121–2127. [Google Scholar] [CrossRef]
- Moré, J.J. A Collection of Nonlinear Model Problems; Allgower, E.L., Georg, K., Eds.; Computational Solution of Nonlinear Systems of Equations Lectures in Applied Mathematics; American Mathematical Society: Providence, RI, USA, 1990; Volume 26, pp. 723–762. [Google Scholar]
- Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algor. 2010, 54, 395–409. [Google Scholar] [CrossRef]
- Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471. [Google Scholar] [CrossRef]
- Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
- Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publ. Comp.: Hackensack, NJ, USA, 2013. [Google Scholar]
- Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multi-Point Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice- Hall Series in Automatic Computation: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Abad, M.F.; Cordero, A.; Torregrosa, J.R. A family of seventh-order schemes for solving nonlinear systems. Bull. Math. Soc. Sci. Math. Roum. 2014, 57, 133–145. [Google Scholar]
- Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional generalization of iterative methods for solving nonlinear problems by means of weight-function procedure. Appl. Math. Comput. 2015, 268, 1064–1071. [Google Scholar] [CrossRef]
- Cordero, A.; Maimó, J.G.; Torregrosa, J.R.; Vassileva, M.P. Solving nonlinear problems by Ostrowski-Chun type parametric families. J. Math. Chem. 2014, 52, 430–449. [Google Scholar]
- Sharma, J.R.; Gua, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 2, 307–323. [Google Scholar] [CrossRef]
- Wang, X.; Zhang, T. A family of Steffensen type methods with seventh-order convergence. Numer. Algor. 2013, 62, 429–444. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
- Beyer, W.A.; Ebanks, B.R.; Qualls, C.R. Convergence rates and convergence-order profiles for sequences. Acta Appl. Math. 1990, 20, 267–284. [Google Scholar] [CrossRef]
- Potra, F.A. On Q-order and R-order of convergence. J. Optim. Theory Appl. 1989, 63, 415–431. [Google Scholar] [CrossRef]
- Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
- Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth-order families of iterative methods for nonlinear system. J. Comput. Appl. Math. 2015, 275, 412–420. [Google Scholar] [CrossRef]
- Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934. [Google Scholar] [CrossRef]
- Grau-Sánchez, M.; Grau, Á.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
- Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210. [Google Scholar] [CrossRef]
- Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.R.; Lotfi, T. New efficient methods for solving nonlinear systems of equations with arbitrary even order. Appl. Math. Comput. 2016, 287–288, 94–103. [Google Scholar] [CrossRef]
- Soleymani, F.; Lotfi, T.; Bakhtiari, P. A multi-step class of iterative methods for nonlinear systems. Optim. Lett. 2014, 8, 1001–1015. [Google Scholar] [CrossRef]
- Wang, X.; Li, Y. An efficient sixth-order Newton-type method for solving nonlinear systems. Algorithms 2017, 10, 45. [Google Scholar] [CrossRef]
- Alaidarous, E.S.; Ullah, M.Z.; Ahmad, F.; Al-Fhaid, A.S. An Efficient Higher-Order Quasilinearization Method for Solving Nonlinear BVPs. J. Appl. Math. 2013, 2013, 1–11. [Google Scholar] [CrossRef]
- Gelfand, I.M. Some problems in the theory of quasi-linear equations. Trans. Am. Math. Soc. Ser. 1963, 2, 295–381. [Google Scholar]
- Wan, Y.Q.; Guo, Q.; Pan, N. Thermo-electro-hydrodynamic model for electrospinning process. Int. J. Nonlinear Sci. Numer. Simul. 2004, 5, 5–8. [Google Scholar] [CrossRef]
- Jacobsen, J.; Schmitt, K. The Liouville Bratu Gelfand problem for radial operators. J. Diff. Equat. 2002, 184, 283–298. [Google Scholar] [CrossRef]
- Jalilian, R. Non-polynomial spline method for solving Bratu’s problem. Comput. Phys. Commun. 2010, 181, 1868–1872. [Google Scholar] [CrossRef]
- Sauer, T. Numerical Analysis, 2nd ed.; Pearson: Harlow, UK, 2012. [Google Scholar]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).