Abstract
Local convergence analysis is mostly carried out using the Taylor series expansion approach, which requires the use of high-order derivatives that do not appear in the iterative methods. There are other limitations to this approach: the analysis is restricted to finite-dimensional Euclidean spaces, and no a priori computable error bounds on the distances involved or uniqueness-of-the-solution results are provided. The local convergence analysis in this paper positively addresses these concerns in the more general setting of a Banach space. The convergence conditions involve only the operators appearing in the methods. The more important semi-local convergence analysis, not studied before, is developed by using majorizing sequences. Both types of convergence analysis are based on the concept of generalized continuity. Although we study a certain class of methods, the same approach applies to extend the applicability of other schemes along the same lines.
MSC:
65G99; 47H17; 49M15
1. Introduction
Iterative methods are a powerful tool in numerical analysis used to solve nonlinear equations. Nonlinear equations often arise in a variety of fields, including physics, engineering, economics, and finance, and are notoriously difficult to solve analytically. Such problems can be transformed into a form like
where and is a Banach space.
Iterative methods are numerical techniques that start with an initial guess and iteratively refine the solution until a desired level of accuracy is achieved. This makes them particularly useful for solving complex nonlinear equations that cannot be easily solved using traditional analytical methods. The usage of iterative methods for solving nonlinear equations has revolutionized many areas of science and engineering, and continues to be an important research topic in the field of numerical analysis.
The Newton–Raphson method, one of the most significant iterative techniques, is defined for n = 0, 1, 2, … by x_{n+1} = x_n − [G′(x_n)]^{-1} G(x_n).
Newton’s method is a classic and widely used iterative algorithm for finding solutions of nonlinear systems. However, it has some limitations. For example, it can fail to converge if the initial guess is too far from the required solution, and if the Fréchet derivative of the operator G is singular or nearly so, then its inverse does not exist or is difficult to compute. To address these issues, various extensions of Newton’s method have been developed over time, such as Steffensen’s method and higher-order versions of Steffensen’s method. These extensions improve the convergence rate and robustness of Newton’s method, making it effective for a wider range of problems. In this context, researchers continue to explore new ways to extend Newton’s method and other numerical methods, further expanding the range of applications where they can be used effectively.
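To make the contrast concrete, the sketch below shows a Newton step, which needs the Fréchet derivative, next to a scalar Steffensen step, which replaces the derivative by a divided difference and is therefore derivative-free. This is an illustrative Python/NumPy sketch with hypothetical test problems; it does not reproduce method (2) or the paper's Mathematica computations.

```python
import numpy as np

def newton_system(G, dG, x0, tol=1e-12, max_iter=50):
    """Newton's method for a system: x_{k+1} = x_k - G'(x_k)^{-1} G(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(dG(x), G(x))   # solve the linear system G'(x) s = G(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

def steffensen_scalar(g, x0, tol=1e-12, max_iter=50):
    """Scalar Steffensen iteration: the derivative is replaced by the divided
    difference (g(x + g(x)) - g(x)) / g(x), so no derivative is required."""
    x = float(x0)
    for _ in range(max_iter):
        gx = g(x)
        denom = g(x + gx) - gx
        if denom == 0.0:
            break
        step = gx * gx / denom
        x = x - step
        if abs(step) < tol:
            break
    return x

# Hypothetical test problems (not taken from the paper).
G = lambda x: np.array([x[0] ** 2 - 2.0, x[0] * x[1] - 1.0])
dG = lambda x: np.array([[2.0 * x[0], 0.0], [x[1], x[0]]])
print(newton_system(G, dG, [2.0, 2.0]))                # -> approximately (1.4142, 0.7071)
print(steffensen_scalar(lambda t: t ** 2 - 2.0, 1.5))  # -> approximately 1.4142
```

Both iterations stop once the step size falls below the tolerance; the derivative-free variant only needs evaluations of the function itself.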
One of these fourth-order convergent methods, presented by Singh, A. [1], is defined by
where is the space of bounded linear operators from into and T is any iteration function of convergence order four.
Some of the special cases of scheme (2) are given below:
Special case 1:
Method (2) becomes
Method (3), also studied by Wang et al. [2], is the multidimensional extension of the scalar method proposed by Ren et al. [3].
Special case 3:
It follows by this choice that method (2) specializes to
Scheme (5) was proposed by Cordero et al. [5]. Some other important cases are mentioned by Singh, A. [1].
Earlier works based on the Taylor series expansion approach suffer from certain limitations, listed below.
- (L1)
- The local convergence analysis is carried out only in the k-dimensional Euclidean space , where k is a natural number.
- (L2)
- There are no computable error bounds on the distances . Therefore, we do not know a priori how many iterations must be carried out to reach a predetermined error tolerance.
- (L3)
- There are no uniqueness-of-the-solution results.
- (L4)
- The existence of derivatives that are not present in the method is assumed. As an example for method (2), consider and the function defined by
- (L5)
- The choice of the initial point is a “shot in the dark”, since no computable radius of convergence is provided.
- (L6)
In this paper, we address these limitations as follows:
- (L1)′
- The convergence analysis is carried out in the setting of a Banach space.
- (L2)′
- A priori computable upper error bounds on the distances are provided. Hence, we know in advance the number of iterations to be carried out in order to achieve a desired error tolerance.
- (L3)′
- A neighborhood is specified that contains only one solution.
- (L4)′
- The convergence is established using only the operators in method (2).
- (L5)′
- The radius of convergence is determined. Hence, if we choose an initial point from the ball with this radius, the convergence is assured.
- (L6)′
- The semi-local convergence is developed by utilizing majorizing sequences.
It is worth noting that the convergence conditions are based on the concept of generalized continuity (see e.g., conditions –). Our approach can be used to extend the applicability of other methods along the same lines.
2. Convergence Analysis I: Local
Let .
Assume:
- (H1)
- There exist functions , which are increasing as well as continuous (IC), such that the equation admits a smallest positive solution (sps), denoted by . Let .
- (H2)
- There exists an IC function such that for the equation has a sps , where
- (H3)
- There exists an IC function such that the equation admits a sps . The set and the function are developed later. Define the parameters by
The functions and are associated with the data in method (2) as follows:
- (H4)
- There exists an operator P such that and for each
Let .
- (H5)
- (H6)
- for each
- (H7)
- , where .
- (H8)
- There exists a function IC such that for each
- (H9)
- There exists such that .
Let . Let . It follows by the definitions that for each
and
Next, the local convergence analysis of method (2) uses the conditions – in combination with the preceding notation.
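Since conditions – define the convergence radii as smallest positive solutions of scalar equations built from the IC functions, the sketch below shows one way such a root can be located numerically. The scan-plus-bisection routine and the Lipschitz-type choice w0(t) = L0·t are hypothetical illustrations, not the paper's actual majorant functions.

```python
import numpy as np

def smallest_positive_root(f, t_max=10.0, samples=100000, tol=1e-12):
    """Locate the smallest positive root of a scalar function f on (0, t_max]
    by a coarse scan followed by bisection.  f must accept NumPy arrays.
    Returns None if no sign change is detected."""
    ts = np.linspace(tol, t_max, samples)
    vals = f(ts)
    changes = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if changes.size == 0:
        return None
    lo, hi = ts[changes[0]], ts[changes[0] + 1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.sign(f(mid)) == np.sign(f(lo)):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical Lipschitz-type majorant function w0(t) = L0 * t:
# the equation w0(t) - 1 = 0 has the smallest positive solution 1 / L0.
L0 = 2.0
print(smallest_positive_root(lambda t: L0 * t - 1.0))   # -> approximately 0.5
```

For the linear choice the equation can of course be solved in closed form; the numerical routine is only needed for more involved IC functions.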
Theorem 1.
Assume the conditions – are satisfied. Then, the following assertions hold provided
and
Moreover, the point is the only solution of the equation in the set .
Proof.
The assertions (10)–(12) are validated using induction. By conditions (6) and (7) and the choice of the starting point , we have
which implies, by the standard perturbation result for linear operators due to Banach [6,7,8], that and
Thus, the iterate is well defined by the first substep of method (2) if . Moreover, we can write in turn that
employing (6), (8), , (15), and (16), we obtain
Hence, the iterate and assertion (11) holds if . Notice that the iterate is well defined by the second substep of method (2). Then, the application of (6), (9), and implies
So, the iterate , validating (10) and also (12), if . The induction for assertions (10)–(13) is completed if replace , respectively, in the preceding calculations. Furthermore, the estimate
for leads to and the iterate . Therefore, assertions (10) (for ) and (13) are satisfied. It remains to show the uniqueness of the solution in the set . Let be a solution of the equation . Then, conditions and give in turn that
Thus, the linear operator and
since . Hence, we conclude by identity (20) that . □
Remark 1. The second condition in is left as uncluttered as possible. A possible choice for the function G is motivated by the following calculations:
thus,
Hence, we can set
Moreover, if is not satisfied, then the condition can be replaced by
- (H7)′
- , where
The function can be determined further provided that the operator T is specialized (see Section 4).
3. Convergence II: Semi-Local
Majorizing sequences [6,8] are employed for this type of convergence. Let and . Define the sequence by
where is a sequence of non-negative parameters to be determined later and are given IC functions. A general convergence result is needed for the sequence .
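Since the concrete recursion (21) is given in the displayed formulas and is not repeated here, the sketch below only illustrates the mechanics: a generic driver generates a scalar sequence from a user-supplied update map and checks the monotonicity and boundedness needed in Lemma 1. The toy update is the classical Kantorovich majorant for Newton's method, used purely as a stand-in; it is not recursion (21).

```python
def majorizing_sequence(step, t0=0.0, s0=0.0, n_terms=20):
    """Generate a scalar sequence {t_k} from a user-supplied update map `step`,
    which returns (t_{k+1}, s_{k+1}) from the current pair (t_k, s_k).
    The concrete update implied by (21) depends on the IC functions."""
    t, s = t0, s0
    seq = [t]
    for _ in range(n_terms):
        t, s = step(t, s)
        seq.append(t)
    return seq

def nondecreasing_and_bounded(seq, bound, tol=1e-12):
    """Check Lemma 1 type hypotheses (up to floating-point tolerance)."""
    return all(a <= b + tol for a, b in zip(seq, seq[1:])) and all(t <= bound + tol for t in seq)

# Toy stand-in (NOT recursion (21)): the Kantorovich majorant p(t) = (L/2)t^2 - t + eta
# iterated with scalar Newton steps; for 2*L*eta <= 1 the sequence increases to the
# smallest root of p.
L, eta = 1.0, 0.4
p = lambda t: 0.5 * L * t ** 2 - t + eta
dp = lambda t: L * t - 1.0
newton_on_p = lambda t, s: (t - p(t) / dp(t),) * 2
seq = majorizing_sequence(newton_on_p)
t_star = (1.0 - (1.0 - 2.0 * L * eta) ** 0.5) / L        # smallest root of p
print(nondecreasing_and_bounded(seq, t_star), round(seq[-1], 12), round(t_star, 12))
```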
Lemma 1.
Assume:
- (C1)
- and for some .
Then, the following assertions hold
and there exists such that
Proof.
Formula (21) and condition imply assertion (22), by which (23) is satisfied. The limit point a is the least upper bound of the sequence and is unique. The sequence is shown to be majorizing for in Theorem 2. But, first, let us associate with the operators in method (2) as follows:
- (C2)
- There exists a linear operator P such that and, for each ,
It follows by the first condition that there exists such that
thus, . Set and .
- (C3)
- for each .
- (C4)
- There exists an IC function such that
- (C5)
- (C6)
- The equation has a smallest positive solution and there exists such that . Let .
- (C7)
- , where .
Notice that can be the smallest positive solution of the equation
(if it exists). □
The semi-local convergence analysis relies on conditions – under the developed terminology.
Theorem 2.
Assume that conditions – are satisfied. Then, the following assertions hold
and there exists a solution of equation such that
and
Proof.
As in the local case, induction is used to show assertions (24)–(26). Clearly, assertion (24) holds for . The conditions in and Formula (21) imply the existence of iterate ,
Thus, iterate and assertion (25) holds if .
Condition gives
and
Hence, iterate and assertion (26) holds for . Then, by the first substep of method (2), we can write:
leading by , and the induction hypothesis to
Thus, we obtain by the first substep of method (2) for replacing and condition , (21), and (30)
and
The induction is completed for relations (24)–(26). By condition , the sequence is convergent to . Hence, it follows that the sequence is also fundamental. Moreover, by (29) and (31), the sequence is majorizing for . Therefore, the sequence is also fundamental in the Banach space E and, as such, it converges to some (since is a closed set). Furthermore, by letting in (30), we obtain , where we also used the continuity of the operator G. Finally, from the estimate
and by letting , assertion (28) follows. Hence, assertions (27) and (28) are also satisfied. □
The isolation of a solution of the equation is discussed in the next result.
Proposition 1.
Assume that there exists with and some ; condition holds in the ball and there exists such that
Set . Then, is the only solution of the equation in the set .
Proof.
Let be such that with . Then, the divided difference is well defined. It follows by condition and (33) that
Thus, . Then, from the identity
we conclude that . □
Remark 2.
- (i)
- The limit point can be replaced by in condition .
- (ii)
- It is clear that, under all conditions –, one can choose and in Proposition 1.
- (iii)
- Notice also that, as in the local case,
provided that
for some IC function .
4. Special Cases and Applications
The functions and can be specialized:
Local Case 1: Assume:
There exists an IC function such that, for each ,
We need the estimates:
and from
Define the functions by
Assume that the equation admits a smallest positive solution . Then, and condition can be replaced by provided that
The motivation for the definition of the function follows from the estimates
and
Local Case 2:
Assume:
Semi-local Case 1:
We have
with where the following estimates are used by the definition of method (2)
and
Thus, condition is dropped provided that it is replaced by
and for each
Semi-local Case 2:
Assume:
for each and some IC function .
We can write as before
so
Thus, again, condition can be replaced by in Theorem 2.
Remark 3.
Concerning the choice of the linear operators we can suggest two interesting cases:
- (1)
- , if the operator G is differentiable in the local convergence case.
- (2)
- , if the operator is not necessarily differentiable or if the operator is differentiable in the semi-local convergence case.
Other possible choices are given in [6,7,9].
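A sketch of choice (2), the standard component-wise first-order divided difference [x, y; G] used in the derivative-free literature cited above, is given below; for a differentiable operator it approaches the Jacobian G′(x) as y approaches x, which connects the two choices in the Remark. The test operator is hypothetical, and the paper's own choice of P is specified in its displayed formulas.

```python
import numpy as np

def divided_difference(G, x, y):
    """First-order divided difference [x, y; G] built component-wise:
    column j uses the points (y_1,...,y_j, x_{j+1},...,x_n) and
    (y_1,...,y_{j-1}, x_j,...,x_n), divided by y_j - x_j."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    M = np.empty((n, n))
    for j in range(n):
        u = np.concatenate((y[:j + 1], x[j + 1:]))
        v = np.concatenate((y[:j], x[j:]))
        M[:, j] = (G(u) - G(v)) / (y[j] - x[j])
    return M

# Choice (1): P = G'(x) for a differentiable operator (analytic Jacobian).
# Choice (2): P = [x, y; G], which needs no derivative and tends to G'(x) as y -> x.
G = lambda x: np.array([x[0] ** 2 - x[1], np.sin(x[0]) + x[1] ** 3])
dG = lambda x: np.array([[2.0 * x[0], -1.0], [np.cos(x[0]), 3.0 * x[1] ** 2]])

x = np.array([1.0, 0.5])
y = x + 1e-6                          # a nearby point
print(np.max(np.abs(divided_difference(G, x, y) - dG(x))))   # small, of order 1e-6
```

In particular, the divided difference remains available when G has a nondifferentiable part, as in Example 5 below.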
5. Numerical Applications
To evaluate the effectiveness of our methods, we test them on a variety of problems, including systems of differential equations, Hammerstein integral equations of the first kind [7,8,10], steering motion problems, and boundary value problems. In addition, we also chose a nonlinear, nondifferentiable function; such functions arise in a wide range of applications and are particularly challenging to solve. By solving these problems with our methods, we can assess their accuracy, efficiency, and suitability for various applications.
First, we obtain the radii of convergence for iterative solver (2), which indicate how close the initial approximation must be to the exact solution for convergence. After choosing a suitable initial approximation, we perform the iterations and compute the computational order of convergence, which shows how quickly the iterative solver approaches the exact solution. For the computational order of convergence , the following formulas are used:
or the approximate computational order of convergence [5,10] given by:
To assess efficiency, we additionally recorded the CPU time for the computation. Finally, we report the number of iterations necessary to achieve the specified accuracy as well as the residual error.
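For reference, the sketch below evaluates the usual computational order of convergence (which requires the exact solution) and the approximate computational order of convergence of [5,10] (built from consecutive differences); the Newton iterates used in the check are only a hypothetical illustration.

```python
import numpy as np

def coc(iterates, x_star):
    """Computational order of convergence from the last three iterates and the
    known solution x_star: log(e_{k+1}/e_k) / log(e_k/e_{k-1})."""
    e = [np.linalg.norm(np.asarray(x, float) - np.asarray(x_star, float)) for x in iterates[-3:]]
    return np.log(e[2] / e[1]) / np.log(e[1] / e[0])

def acoc(iterates):
    """Approximate computational order of convergence: same ratio, but built from
    norms of consecutive differences, so the exact solution is not needed."""
    x = [np.asarray(v, float) for v in iterates[-4:]]
    d = [np.linalg.norm(x[k + 1] - x[k]) for k in range(3)]
    return np.log(d[2] / d[1]) / np.log(d[1] / d[0])

# Hypothetical check with scalar Newton iterates for t^2 = 2 (order 2 expected).
its = [1.5]
for _ in range(3):
    t = its[-1]
    its.append(t - (t * t - 2.0) / (2.0 * t))
print(coc(its, 2.0 ** 0.5), acoc(its))      # both close to 2
```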
Iterative solvers use stopping criteria to decide when to stop iterating and accept the current approximation as the solution. Depending on the type of problem being solved and the method being used, a variety of stopping criteria can be used. We opt for the following standard criteria:
- (i)
- and
- (ii)
- ,
where is the error tolerance. The computations are performed in multiple precision arithmetic with the help of Mathematica 11.
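The sketch below wires both stopping criteria into a generic iteration driver; the Newton step and the toy system are hypothetical, and the tolerance shown is only meaningful in double precision, whereas the paper's runs use multiple-precision arithmetic in Mathematica 11.

```python
import numpy as np

def iterate_until(step, G, x0, eps=1e-12, max_iter=100):
    """Apply `step` until criterion (i) ||x_{k+1} - x_k|| < eps or
    criterion (ii) ||G(x_{k+1})|| < eps is met; return the iterate and count."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = step(x)
        if np.linalg.norm(x_new - x) < eps or np.linalg.norm(G(x_new)) < eps:
            return x_new, k
        x = x_new
    return x, max_iter

# Hypothetical usage with a plain Newton step on a toy system.
G = lambda x: np.array([x[0] ** 3 - x[1], x[0] + x[1] - 2.0])
dG = lambda x: np.array([[3.0 * x[0] ** 2, -1.0], [1.0, 1.0]])
newton_step = lambda x: x - np.linalg.solve(dG(x), G(x))
print(iterate_until(newton_step, G, [2.0, 2.0]))    # -> iterate close to (1, 1) and the count
```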
Example 1.
Consider the following system of differential equations
subject to . We consider and . The point is a solution. Let us assume that the function G is defined on Λ with by
This definition gives
Thus, by the definition of G, it follows that . Let . Then, hypotheses – are verified for
The computational results are shown in Table 1.
Table 1.
Numerical results of solver (2) for Example 1.
Example 2.
Let and . Consider the nonlinear Hammerstein integral equation of the first kind with the operator H defined by
The calculation for the derivative gives
for . By this value of the operator , conditions – are verified; we choose
with , where I is the identity matrix. By adopting the above functions, we obtain the radii of solver (2) for Example 2 in Table 2.
Table 2.
Radii of solver (2) of Example 2.
Example 3.
The system of nonlinear equations is a powerful tool for solving boundary value problems in many fields, such as physics, engineering, and finance. It allows for the modeling of complex systems with multiple variables and nonlinear relationships. By solving the system of equations, we can obtain solutions that satisfy the boundary conditions and accurately represent the behavior of the system. Therefore, we consider a boundary value problem (see [8]), which is given by
with . The interval is divided into 1006 sections to yield
Then, we can choose . We have
acquired by using the discretization approach. The following nonlinear system of equations is obtained as .
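Since the boundary value problem and its discretized system appear in the displayed formulas, the sketch below only illustrates the general pattern behind (38): a second-order two-point problem u″ = f(t, u) with Dirichlet data is reduced, via central differences, to a nonlinear system in the interior unknowns. The right-hand side, grid size, and boundary values here are hypothetical and do not reproduce the paper's 1006-section discretization.

```python
import numpy as np

def bvp_residual(u, f, a, b, alpha, beta):
    """Central-difference residual of u'' = f(t, u) on [a, b] with u(a) = alpha,
    u(b) = beta; `u` holds the interior unknowns u_1, ..., u_N."""
    N = u.size
    h = (b - a) / (N + 1)
    t = a + h * np.arange(1, N + 1)
    U = np.concatenate(([alpha], u, [beta]))             # attach boundary values
    return (U[2:] - 2.0 * U[1:-1] + U[:-2]) / h ** 2 - f(t, U[1:-1])

# Hypothetical data; the paper's problem is stated in its own display.
f = lambda t, u: u ** 3 + np.sin(t)
u0 = np.zeros(50)
print(np.linalg.norm(bvp_residual(u0, f, 0.0, 1.0, 0.0, 0.0)))
```

Setting this residual to zero at every interior node gives the nonlinear system to which the iterative solvers are applied.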
We present the iterations and the for Example 3 in Table 3. Expression (38) converges to the following resulting column vector (not a matrix):
Table 3.
Numerical results for Example 3.
Example 4.
We examine one of the most well-known applied science problems, the Hammerstein integral equation (see pp. 19–20 in [8]), in order to compare the effectiveness and applicability of our suggested methods with those of others. It is given below:
where and kernel G is
To convert the aforementioned equation into a finite-dimensional problem, we use the Gauss–Legendre quadrature formula , where the abscissas and the weights are determined for . Denoting the approximations of by , one obtains the system of nonlinear equations , where
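A sketch of this reduction is given below for a commonly used instance of the Hammerstein equation with a Green's-function kernel; the number of nodes m, the kernel, and the constant in the nonlinearity are assumptions for illustration and may differ from the paper's display. The quadrature nodes and weights correspond to Table 4 only up to the affine map from [−1, 1] to [0, 1].

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

m = 8                                        # number of quadrature nodes (assumed)
xi, w = leggauss(m)                          # nodes and weights on [-1, 1]
t = 0.5 * (xi + 1.0)                         # map nodes to [0, 1]
w = 0.5 * w                                  # rescale weights accordingly

def kernel(s, tt):
    """Green's-function-type kernel: (1 - s) t for t <= s and s (1 - t) otherwise."""
    return np.where(tt <= s, (1.0 - s) * tt, s * (1.0 - tt))

A = np.array([[w[j] * kernel(t[i], t[j]) for j in range(m)] for i in range(m)])

def F(x):
    """Discretized Hammerstein system F(x) = x - 1 - (1/5) A x^3 (illustrative constants)."""
    return x - 1.0 - 0.2 * (A @ x ** 3)

def dF(x):
    return np.eye(m) - 0.2 * A @ np.diag(3.0 * x ** 2)

# Plain Newton run on the discretized system (illustration only).
x = np.ones(m)
for _ in range(10):
    x = x - np.linalg.solve(dF(x), F(x))
print(x)                                     # all components stay close to 1
```

A plain Newton iteration is used here only to show that the discretized problem is a standard finite-dimensional root-finding problem; solver (2) itself is derivative-free.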
For , the abscissas and weights are known and shown in Table 4.
Table 4.
The abscissas and weights by Gauss–Legendre quadrature formula.
The convergence towards the root is tested in Table 5.
Table 5.
Numerical results for Example 4.
Example 5.
Here, we solve the nonlinear, nondifferentiable system given as
Then, we set , and where
Notice that and constitute the nondifferentiable part of the equation. The convergence towards the root is tested in Table 6.
Table 6.
Numerical results for Example 5.
Example 6
(Bratu 2D Problem). The widely recognized two-dimensional Bratu problem, as described in [11,12], is defined as follows:
The approximate solution for a nonlinear partial differential equation can be determined by employing finite difference discretization. This approach simplifies the problem into solving a system of nonlinear equations. Let us denote the approximate solution at the grid points of the mesh as , where represents the solution at position and time . Additionally, we define M and N as the number of steps in the x and t directions, and h and k as the corresponding step sizes. To tackle the provided partial differential equation, we will apply the central difference method to and , i.e.,
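The following sketch evaluates the central-difference residual of a Bratu-type equation u_xx + u_tt + λ e^u = 0 on the interior of the mesh, with homogeneous Dirichlet boundary values assumed for illustration; the value of λ, the mesh sizes, and the boundary data used in the paper may differ.

```python
import numpy as np

def bratu_residual(U, lam, hx, ht):
    """Central-difference residual of u_xx + u_tt + lam * exp(u) = 0 on the
    interior of a rectangular mesh.  `U` contains the interior values and is
    padded with the (assumed homogeneous) boundary values."""
    P = np.pad(U, 1)                                     # zero Dirichlet boundary
    u_xx = (P[2:, 1:-1] - 2.0 * P[1:-1, 1:-1] + P[:-2, 1:-1]) / hx ** 2
    u_tt = (P[1:-1, 2:] - 2.0 * P[1:-1, 1:-1] + P[1:-1, :-2]) / ht ** 2
    return u_xx + u_tt + lam * np.exp(U)

# Illustrative mesh; the paper's lambda, mesh sizes, and boundary data may differ.
M = N = 20
hx = ht = 1.0 / (M + 1)
U0 = np.zeros((M, N))
print(np.linalg.norm(bratu_residual(U0, 0.1, hx, ht)))   # residual of the zero guess
```

Setting this residual to zero at every interior node yields the nonlinear system of dimension M·N that is then solved iteratively.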
We seek the solution to a system with dimension by choosing and . It converges to the following resulting column vector (not a matrix):
The computational results are given in Table 7.
Table 7.
Numerical results for Example 6.
6. Conclusions
A process is introduced that establishes convergence for a family of Steffensen-like methods. The advantage of this process over earlier ones is that the condition on the existence of is not required. Consequently, the methods can be applied to solve nondifferentiable equations with a convergence theory to back them up. Moreover, the process is applicable to other methods involving the inverses of linear operators, such as those in [1,2,3,4,5,6,7,8,9,10,13,14,15,16,17,18]. It is worth noting that in earlier work the local convergence analysis was carried out using Taylor series expansions and assumptions on high-order derivatives not present in the method. Furthermore, no computable error estimates or uniqueness-of-the-solution results were provided. In addition, the more interesting semi-local convergence analysis, not studied in [1,19,20,21,22,23,24,25,26,27,28,29], is also developed in this paper. All these concerns are positively addressed here in the more general setting of a Banach space. Therefore, the applicability of such methods is extended to both the local and the semi-local convergence cases using only conditions on the operators appearing in the method. Finally, the numerical applications further complement the theoretical findings.
Author Contributions
R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing—Original Draft Preparation; Writing—Review. M.A.: Writing—Review and Editing. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia under grant no. (IFPIP:1305-247-1443).
Data Availability Statement
Not applicable.
Acknowledgments
The authors gratefully acknowledge technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Conflicts of Interest
The authors declare no conflict of interest.
References
1. Singh, A. An efficient fifth-order Steffensen-type method for solving systems of nonlinear equations. Int. J. Comput. Sci. Math. 2018, 5, 501–514.
2. Wang, X.; Zhang, T. A family of Steffensen type methods with seventh-order convergence. Numer. Algorithms 2013, 62, 429–444.
3. Ren, H.; Wu, Q.; Bi, W. A class of two-step Steffensen type methods with fourth-order convergence. Appl. Math. Comput. 2009, 209, 206–210.
4. Sharma, J.R.; Arora, H. An efficient derivative free method for solving systems of nonlinear equations. Appl. Anal. Discret. Math. 2013, 7, 390–403.
5. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A new technique to obtain derivative-free optimal iterative methods for solving nonlinear equations. J. Comput. Appl. Math. 2013, 252, 95–102.
6. Argyros, I.K. The Theory and Application of Iteration Methods, 2nd ed.; Engineering Series; Routledge: Boca Raton, FL, USA, 2022.
7. Magreñán, A.A.; Argyros, I.K. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Academic Press: Cambridge, MA, USA; Elsevier: Amsterdam, The Netherlands, 2018.
8. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
9. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001.
10. Grau-Sánchez, M.; Grau, A.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743.
11. Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu’s equation. Comput. Mech. 1990, 6, 55–63.
12. Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451.
13. Abad, M.; Cordero, A.; Torregrosa, J.R. A family of seventh-order schemes. Bulletin Math. 2014, 105, 133–145.
14. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372.
15. Liu, Z.; Zheng, Q.; Zhao, P. A variant of Steffensen’s method of fourth-order convergence and its applications. Appl. Math. Comput. 2010, 216, 1978–1983.
16. Ostrowski, A.M. Solution of Equations and Systems of Equations; Pure and Applied Mathematics; Academic Press: New York, NY, USA; London, UK, 1960; Volume IX.
17. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284.
18. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323.
19. Alefeld, G.E.; Potra, F.A. Some efficient methods for enclosing simple zeros of nonlinear equations. BIT 1992, 32, 334–344.
20. Costabile, F.; Gualtieri, M.I.; Serra-Capizzano, S. An iterative method for the computation of the solutions of nonlinear equations. Calcolo 1999, 36, 17–34.
21. Ezquerro, J.A.; Grau-Sánchez, M.; Grau, A.; Hernández, M.A. Construction of derivative-free iterative methods from Chebyshev’s method. Anal. Appl. 2013, 11, 1350009.
22. Ezquerro, J.A.; Grau-Sánchez, M.; Grau, A.; Hernández, M.A.; Noguera, M.; Romero, N. On iterative methods with accelerated convergence for solving systems of nonlinear equations. J. Optim. Theory Appl. 2011, 151, 163–174.
23. Galántai, A.; Abaffy, J. Always convergent iteration methods for nonlinear equations of Lipschitz functions. Numer. Algorithms 2015, 69, 443–453.
24. Grau-Sánchez, M.; Noguera, M. A technique to choose the most efficient method between secant method and some variants. Appl. Math. Comput. 2012, 218, 6415–6426.
25. Hernández, M.A.; Rubio, M.J. Semilocal convergence of the secant method under mild convergence conditions of differentiability. Comput. Math. Appl. 2002, 44, 277–285.
26. Potra, F.A.; Pták, V. A generalization of Regula Falsi. Numer. Math. 1981, 36, 333–346.
27. Potschka, A. Backward step control for global Newton-type methods. SIAM J. Numer. Anal. 2016, 54, 361–387.
28. Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J.M. Frozen iterative methods using divided differences “à la Schmidt–Schwetlick”. J. Optim. Theory Appl. 2014, 10, 931–948.
29. Schmidt, J.W.; Schwetlick, H. Ableitungsfreie Verfahren mit höherer Konvergenzgeschwindigkeit. Computing 1968, 3, 215–226.