Abstract
We develop a sixth-order Steffensen-type method with one parameter for solving systems of equations. The novelty of our study lies in the fact that two types of local convergence are established under weak conditions, including computable error bounds and uniqueness results. The performance of our method is discussed and compared to that of other schemes using similar information. Finally, very large systems of equations ( and ) are solved in order to test the theoretical results and compare our method favorably with earlier works.
MSC:
65H05; 65G99
1. Introduction
Many problems from Biology, Chemistry, Economics, Engineering, Mathematics, and Physics are converted to a mathematical expression of the following form
Here, , is differentiable, is a Banach space, and is nonempty and open. Closed-form solutions are rarely found, so iterative methods [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16] converging to the solution are used.
In particular, we propose the following new scheme
Here, is an initial point and is a free parameter. In addition, is a divided difference of order one.
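The paper's exact divided difference operator is not reproduced here, but a standard component-wise construction of a first-order divided difference for a system F: Rⁿ → Rⁿ can be sketched as follows. It satisfies the secant identity [x, y; F](x − y) = F(x) − F(y), which is the defining property of such operators; the function F below is only a hypothetical example.

```python
import numpy as np

def divided_difference(F, x, y):
    """First-order divided difference [x, y; F] for a system F: R^n -> R^n.

    Column j is built by swapping the first j components of x for those of y,
    so the columns telescope and the secant identity
    [x, y; F](x - y) = F(x) - F(y) holds exactly.
    (Illustrative sketch; the paper's operator may differ.)"""
    n = x.size
    A = np.empty((n, n))
    z_prev = x.copy()              # z^(0) = x
    for j in range(n):
        z = z_prev.copy()
        z[j] = y[j]                # z^(j): first j+1 components from y
        A[:, j] = (F(z_prev) - F(z)) / (x[j] - y[j])
        z_prev = z
    return A

# Hypothetical small nonlinear system for demonstration:
F = lambda v: np.array([v[0]**2 + v[1], np.sin(v[1]) + v[0]])
x = np.array([1.0, 2.0])
y = np.array([0.5, 1.5])
A = divided_difference(F, x, y)    # secant identity: A @ (x - y) = F(x) - F(y)
```

The secant identity is what lets derivative-free Steffensen-type schemes replace the Jacobian with divided differences.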
We shall present two convergence analyses. Later, we present the advantages over other methods using similar information.
2. Local Convergence Analysis I
We assume that . We use method (2) with standard Taylor expansions [9] for studying local convergence.
Theorem 1.
Suppose that the mapping F is sufficiently differentiable on Ω, with , a simple zero of F. We also assume that the inverse of F, , exists. Then, provided that is close enough to . Moreover, the convergence order is six.
Proof.
Set and , where , . We shall use some Taylor series expansions, first for and :
and
respectively.
Secondly, we expand
Thirdly, we need the expansions for and
According to Theorem 1, the applicability of method (2) is limited to mappings F with derivatives up to the seventh order.
Now, we choose , and define a function f, as follows:
We have the following derivatives of function f
However, is not bounded on , so the results of Section 2 cannot be used. In this case, a more general alternative is given in the upcoming section.
3. Local Convergence Analysis II
Consider and . Let be a continuous and increasing map with .
Suppose equation
has as the smallest positive zero. In addition, we assume that is a continuous and increasing map with .
Consider the functions and defined on the half-open interval as follows:
By these definitions, we have and as . Subsequently, the intermediate value theorem ensures that the function has at least one solution in . Let be the minimal such zero.
The expression
has the smallest positive zero . Set .
We define the functions and on the interval in the following way
We obtain and , since . Let stand for the minimal such zero of the function on .
The equation
has as the smallest positive solution. Set . Define functions and on as
We obtain and as . Let denote the minimal zero of on . Moreover, define
Accordingly, we have
and
for all
Here, denotes the open ball centered at and of radius . By , we denote the closure of .
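In practice, the convergence radii above are the smallest positive zeros of scalar equations built from the majorant functions. A minimal numerical sketch of that computation, assuming a hypothetical linear majorant w0(t) = L·t (the paper's actual functions are not reproduced), is:

```python
import numpy as np

def smallest_positive_root(g, t_max, tol=1e-12):
    """Smallest positive root of a scalar function g on (0, t_max]:
    coarse sign scan followed by bisection.  Returns None if no root
    is bracketed.  (Illustrative helper for radius equations.)"""
    ts = np.linspace(0.0, t_max, 10001)
    vals = np.array([g(t) for t in ts])
    for i in range(len(ts) - 1):
        if vals[i] == 0.0 and ts[i] > 0.0:
            return ts[i]
        if vals[i] * vals[i + 1] < 0.0:        # sign change brackets a root
            a, b = ts[i], ts[i + 1]
            while b - a > tol:                  # bisection refinement
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0.0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
    return None

L = 2.0                          # assumed Lipschitz-type constant (hypothetical)
w0 = lambda t: L * t             # assumed majorant: increasing, w0(0) = 0
# radius rho0 solves w0(t) = 1, i.e. rho0 = 1/L here
rho0 = smallest_positive_root(lambda t: w0(t) - 1.0, 5.0)
```

This mirrors how each radius in Section 3 is obtained: each defining function is increasing from a negative value at 0 toward positive values, so the scan-plus-bisection always brackets the minimal zero.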
We use the following conditions in order to study the local convergence:
- (a1)
- F is a Fréchet-differentiable operator, and is a divided difference of order one. In addition, we assume that is a simple zero of F. Finally, the inverse of the operator F, , exists.
- (a2)
- Let be a continuous and increasing function with , and parameters and , such that for each . Set , where exists and is given by (12).
- (a3)
- We assume that is a continuous and increasing
- (a4)
- (a5)
- There exists , such that . Set .
Theorem 2.
Under the hypotheses , further assume that . Then, the following assertions hold
and
In addition, is the unique solution of in the set mentioned in hypothesis .
Proof.
We first show items (20)–(24) by mathematical induction. Since holds, by condition , we have
and
so and belong to . Next, for , and
so the Banach lemma on invertible operators [3,4,5,12] gives , and
It also follows that is defined.
Similarly, is well defined by the second substep of method (2) for . In particular, we have
Likewise, is well defined by (30) and the last substep of method (2) for . Then, as in (25) and (26) (for ) and (30), we obtain in turn
so (for ) and (24) holds for . Subsequently, substituting , , with , , , respectively, the induction for (30) and (22)–(24) is complete. Using the estimate
where , we deduce that and .
Finally, we show that the required solution is unique. To this end, let for , so that . Then, by and , we get
so . Finally, is deduced from . ☐
Remark 1.
Another way of defining functions and radii is as follows:
Suppose that equation
has a smallest positive solution . Let be a continuous and increasing function with .
Let functions and be defined in the interval by
Let stand for the smallest positive root of in . Moreover, define the functions and on the closed interval , as follows:
Let and denote the minimal positive roots of and on the closed interval , respectively. Subsequently, Theorem 2 can be restated by using the "bar" conditions and functions, with .
4. Numerical Examples
Here, we verify the convergence conditions on three problems (1)–(3). We choose in the examples. One can confirm that the hypotheses of Theorem 2 are verified for the given choices of the functions and the parameters a and b.
Example 1.
Here, we investigate the application of our results on Hammerstein integral equations (see [9], pp. 19–20) for as follows:
where
The values of and when , are illustrated in Table 1. Subsequently, we have
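Table 1 lists the abscissas and weights for k = 8. Assuming (as is standard for discretizing Hammerstein integral equations into nonlinear algebraic systems) that these are Gauss-Legendre nodes shifted to [0, 1], they can be generated as follows; the quadrature-type assumption is ours, since the paper does not state it on this line.

```python
import numpy as np

# Gauss-Legendre abscissas and weights for k = 8 nodes, shifted from the
# reference interval [-1, 1] to [0, 1] by the affine map t = (s + 1)/2.
# (Assumed quadrature rule; intended to mirror Table 1 of the paper.)
k = 8
nodes, weights = np.polynomial.legendre.leggauss(k)  # rule on [-1, 1]
t = 0.5 * (nodes + 1.0)                              # abscissas on [0, 1]
w = 0.5 * weights                                    # rescaled weights
```

With these, the integral operator is replaced by the weighted sum over the nodes, turning the integral equation into a k-dimensional nonlinear system.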
Table 1.
Abscissas and weights for k = 8.
Table 2.
Convergence radii for Example 1.
Table 3.
Convergence radii for Example 1 with bar functions.
Example 2.
Here, we choose the integral equation [17,18], for , as
where
Because , is given as
We get
Moreover,
so , since ,
Hence, we have
Therefore, our results can be utilized even though is not bounded on Ω. The radii for Example 2 are given in Table 4.
Table 4.
Convergence radii for Example 2 with bar functions.
Example 3.
We assume the following differential equations
which characterizes the motion of a molecule in 3D with for . The required solution corresponds to , given as
Table 5.
Convergence radii for Example 3.
Table 6.
Convergence radii for Example 3 with bar functions.
5. Applications with Large Systems
We choose and in our scheme (2), denoted by and , respectively. We compare our schemes with the sixth-order iterative methods suggested by Abbasbandy et al. [19] and Hueso et al. [20]; among them, we picked methods (8) and (14)-(15), respectively, denoted and . Moreover, a comparison has been made with the sixth-order iterative methods given by Wang and Li [21]; among their methods, we chose expression (6), denoted by and . Finally, we contrast (2) with the sixth-order scheme given by Sharma and Arora [22], picking expression (13), denoted . The details of all the iterative expressions are as follows:
method :
scheme :
where and are real numbers.
iterative method :
scheme :
where and .
The , , , and stand for the iteration index, the absolute residual error in the function F, the error between two successive iterations, and the computational convergence order, respectively. Their values are listed in Table 9, Table 10 and Table 11. Moreover, the quantity is the final obtained value of .
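The computational convergence order reported in the tables is typically estimated from three consecutive step norms. A minimal sketch of this standard estimate (the paper's exact formula is in the elided text, so this is our assumption of the usual definition):

```python
import math

def coc(errors):
    """Computational order of convergence from successive step norms
    e_k = ||x_{k+1} - x_k||, using the standard estimate
    rho_k = ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}).
    (Assumed definition; illustrative helper.)"""
    return [math.log(errors[k + 1] / errors[k]) /
            math.log(errors[k] / errors[k - 1])
            for k in range(1, len(errors) - 1)]

# A step sequence behaving like e_{k+1} ~ e_k^6, i.e. sixth order:
orders = coc([1e-1, 1e-6, 1e-36])
```

For a sixth-order method, the estimate stabilizes near 6 as the iterates approach the solution, which is what "stable computational order of convergence" refers to in the tables.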
Table 9.
Comparisons of different methods on a Boundary value problem in Example 5.
Table 10.
Comparisons of different methods on two-dimensional (2D) Bratu problem in Example 6.
Table 11.
Comparisons of different methods on Example 7.
The estimates of all the above parameters were calculated with Mathematica 9. To minimize round-off errors, we chose multiple-precision arithmetic with 1000 digits of mantissa. The term symbolizes the in all the mentioned tables. We adopted the command "AbsoluteTiming[]" in order to calculate the CPU time. We ran our programs three times and report the average CPU time in Table 12, where one can also observe the time used by each iterative method; we point out that, for large problems, the method uses the minimum time, so it is very competitive. The configuration of the computer used is given below:
Table 12.
CPU time of different methods on Examples 5–7.
Processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz
Made: HP
RAM: 8.00 GB
System type: 64-bit-Operating System, x64-based processor.
Example 5.
Here, we deal with a boundary value problem from Ortega and Rheinboldt [9], given by
We assume
partition of the interval and .
Now, we discretize expression (46) by adopting the following numerical formulas for the derivatives
which leads to
system of nonlinear equations.
For a specific value of , we have a system, and the required solution is
The computational estimations are listed in Table 9 on the basis of initial approximation .
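The discretization machinery above can be sketched in code. Since the specific boundary value problem (46) is not reproduced here, the right-hand side f below is a hypothetical stand-in; the sketch only illustrates how central differences at the interior nodes turn a two-point BVP into a nonlinear system whose residual a Newton-type or Steffensen-type scheme drives to zero.

```python
import numpy as np

def bvp_residual(u_inner, f, a, b, ua, ub):
    """Residual of the central-difference discretization of u'' = f(u)
    on [a, b] with boundary values u(a) = ua, u(b) = ub.

    u_inner holds the unknowns at the k interior nodes; the model
    right-hand side f is an assumption, since the paper's specific
    equation (46) is not reproduced here."""
    k = u_inner.size
    h = (b - a) / (k + 1)
    u = np.concatenate(([ua], u_inner, [ub]))   # attach boundary values
    # central difference: u''(t_i) ~ (u_{i-1} - 2 u_i + u_{i+1}) / h^2
    upp = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    return upp - f(u_inner)

# Sanity check with u(t) = t^2 and f(u) = 2, so that u'' = 2 exactly
# and the discrete residual vanishes (central differences are exact
# for quadratics):
k = 9
t = np.linspace(0.0, 1.0, k + 2)[1:-1]          # interior nodes
r = bvp_residual(t**2, lambda u: 2.0 * np.ones_like(u), 0.0, 1.0, 0.0, 1.0)
```

Each interior node contributes one equation, so a mesh with k interior points yields a k-dimensional nonlinear system, which is how the large systems of Table 9 arise.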
Example 6.
The classical 2D Bratu problem [23,24] is given by
By adopting a finite-difference discretization, we can reduce the above PDE (48) to a nonlinear system. For this purpose, we denote by the numerical solution at the grid points of the mesh. In addition, and stand for the numbers of steps in the directions of μ and θ, respectively, while h and k are the corresponding step sizes. Adopting the following central difference formulas for and
leads us to
For obtaining a large system of , we choose and . The numerical results are listed in Table 10 based on the initial guess .
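The classical 2D Bratu problem is Δu + λ e^u = 0 on the unit square with zero Dirichlet boundary values [23,24]. A minimal sketch of its five-point central-difference residual, assuming equal step sizes in both directions (the paper's grid parameters may differ), is:

```python
import numpy as np

def bratu_residual(U, lam):
    """Five-point central-difference residual of the 2D Bratu problem
    u_xx + u_yy + lam * exp(u) = 0 on the unit square, with zero
    Dirichlet boundary values and equal steps h = 1/(m+1).

    U is the (m x m) array of interior unknowns.  Each entry of the
    returned array is the h^2-scaled equation at one grid point."""
    m = U.shape[0]
    h = 1.0 / (m + 1)
    P = np.pad(U, 1)                      # embed the zero boundary values
    lap = (P[2:, 1:-1] + P[:-2, 1:-1]     # neighbors above / below
           + P[1:-1, 2:] + P[1:-1, :-2]   # neighbors right / left
           - 4.0 * P[1:-1, 1:-1])         # center of the stencil
    return lap + lam * h**2 * np.exp(U)

# At U = 0 every residual entry equals lam * h^2 (since exp(0) = 1):
m, lam = 10, 0.1
R = bratu_residual(np.zeros((m, m)), lam)
```

Flattening the m x m array of equations gives the large nonlinear system of dimension m² on which the methods are compared in Table 10.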
Example 7.
Let us consider the following nonlinear system
For the specific value , we have a system, and we chose the following starting point
The is the required solution of the system in Example 7. Table 11 provides the numerical results.
Remark 3. On the basis of Table 9, Table 10 and Table 11, we conclude that our methods, namely and , perform better than the existing schemes in terms of residual errors, errors between two consecutive iterations, and the asymptotic error constant. In addition, our methods demonstrate a stable computational order of convergence. Finally, we conclude that our methods not only outperform the existing methods in the numerical results, but also take about half the CPU time of the other existing methods (see Table 12).
6. Conclusions
We presented a new family of Steffensen-type methods with one parameter. The local convergence is studied in Section 2 using Taylor expansions and derivatives up to order seven, when . To extend the applicability of these iterative methods, in Section 3 we use only hypotheses on the first derivative and Banach space valued operators. In this way, we also find computable error bounds on as well as uniqueness results based on generalized Lipschitz-type real functions. Numerical examples and favorable comparisons with other methods can be found in Section 4.
Author Contributions
M.Z.U.: Validation; Review & Editing, R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing—Original Draft Preparation; Writing—Review & Editing. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Research and Development Office (RDO) at the Ministry of Education, Kingdom of Saudi Arabia, grant no. HIQI-22-2019.
Acknowledgments
This project was funded by the Research and Development Office (RDO) at the Ministry of Education, Kingdom of Saudi Arabia, Grant No. HIQI-22-2019. The authors also gratefully acknowledge the Research and Development Office (RDO-KAU) at King Abdulaziz University for technical support.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Amat, S.; Bermudez, C.; Hernández-Verón, M.A.; Martínez, E. On an efficient k-step iterative method for nonlinear equations. J. Comput. Appl. Math. 2016, 302, 258–271. [Google Scholar] [CrossRef]
- Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
- Argyros, I.K.; George, S. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Nova Publishers: New York, NY, USA, 2019; Volume III. [Google Scholar]
- Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton’s method. J. Complex. 2012, 28, 364–387. [Google Scholar] [CrossRef]
- Argyros, I.K.; Magrenan, A.A. A Contemporary Study of Iterative Methods; Academy Press: Cambridge, MA, USA; Elsevier: Amsterdam, The Netherlands, 2018. [Google Scholar]
- Cordero, A.; Torregrosa, J.R. Low-complexity root finding iteration functions with no derivatives of any order of convergence. J. Comput. Appl. Math. 2015, 275, 502–515. [Google Scholar] [CrossRef]
- Ezquerro, J.A.; Hernández, M.A. How to improve the domain of starting points for Steffensen’s method. Stud. Appl. Math. 2014, 132, 354–380. [Google Scholar] [CrossRef]
- Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Advanced Publishing Program: Boston, MA, USA, 1984; Volume 103. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Rheinboldt, W.C. An adaptive continuation process for solving systems of equations. Pol. Acad. Sci. Banach Cent. Publ. 1978, 3, 129–142. [Google Scholar] [CrossRef]
- Sharma, J.R.; Ghua, R.K.; Sharma, R. An efficient fourth-order weighted Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–325. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solutions of Equations; American Mathematical Society: Providence, RI, USA, 1982. [Google Scholar]
- Džunić, J.; Petković, M.S. A cubically convergent Steffensen-like method for solving nonlinear equations. Appl. Math. Lett. 2012, 25, 1881–1886. [Google Scholar]
- Alarcón, V.; Amat, S.; Busquier, S.; López, D.J. A Steffensen’s type method in Banach spaces with applications on boundary-value problems. J. Comput. Appl. Math. 2008, 216, 243–250. [Google Scholar] [CrossRef]
- Behl, R.; Argyros, I.K.; Machado, J.A.T. Ball comparison between three sixth order methods for Banach space valued operators. Mathematics 2020, 8, 667. [Google Scholar] [CrossRef]
- Iliev, A.; Kyurkchiev, N. Nontrivial Methods in Numerical Analysis: Selected Topics in Numerical Analysis; LAP LAMBERT Academic Publishing: Saarbrucken, Germany, 2010; ISBN 978-3-8433-6793-6. [Google Scholar]
- Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342. [Google Scholar] [CrossRef]
- Hernández, M.A.; Martinez, E. On the semilocal convergence of a three steps Newton-type process under mild convergence conditions. Numer. Algorithms 2015, 70, 377–392. [Google Scholar] [CrossRef]
- Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.R.; Lotfi, T. New efficient methods for solving nonlinear systems of equations with arbitrary even order. Appl. Math. Comput. 2016, 287, 287–288. [Google Scholar] [CrossRef]
- Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420. [Google Scholar] [CrossRef]
- Wang, X.; Li, Y. An Efficient Sixth Order Newton Type Method for Solving Nonlinear Systems. Algorithms 2017, 10, 45. [Google Scholar] [CrossRef]
- Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210. [Google Scholar] [CrossRef]
- Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu’s equation. Comput. Mech. 1990, 6, 55–63. [Google Scholar] [CrossRef]
- Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451. [Google Scholar] [CrossRef]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).