Abstract
The convergence order of an iterative method for solving equations is usually determined using Taylor series expansions, which in turn require high-order derivatives that do not necessarily appear in the method. Consequently, such a convergence analysis cannot guarantee the theoretical convergence of the method to a solution if these derivatives do not exist, although the method may nevertheless converge. This indicates that the sufficient convergence conditions required by the Taylor approach can be replaced by weaker ones. Other drawbacks also exist, such as the absence of information on the isolation of simple solutions or on the number of iterations that must be performed to achieve a desired error tolerance. This paper positively addresses all these issues by considering a technique that uses only the operators appearing in the method, together with Ω-generalized continuity to control the derivative. Moreover, both local and semi-local convergence analyses are presented for Banach space-valued operators. The technique can be used to extend the applicability of other methods along the same lines. A large number of concrete examples are shown in which the convergence conditions are fulfilled.
MSC:
65H10; 65Y20; 65G99; 41A58
1. Introduction
In this study, our goal is to obtain a solution of the non-linear equation
where is assumed to be a differentiable operator in the Fréchet sense with and Banach spaces, and is an open and convex set. Only in certain cases is it possible to obtain exactly the solution . As a result, given certain assumptions based on the initial estimate, researchers rely on the construction of iterative methods that produce a sequence converging to . The fixed point method, successive substitutions method or Picard method is defined by
where is a continuous operator. This method is of convergence order one [1,2,3,4].
Newton’s method [1,2,3,4,5,6] is a well-known one-step iterative procedure, which is defined for and each … by
This method has convergence of second order if is chosen close enough to the solution denoted by . By adding substeps to a one-step method, higher-order convergence methods (of order greater than two) have been obtained, as found in the literature [7,8,9,10,11,12,13,14]. As an example, the Traub two-step method
is of order three, whereas the two-step method
is of order four. Furthermore, numerous one- and multi-step methods have been proposed to improve the convergence order and computational cost of Newton’s method [15,16,17,18,19,20,21,22,23].
For a given , let us consider the following multi-step iterative approach to solving (1), given in [24]:
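The substeps of method (2) are not reproduced in this extract; the following sketch illustrates only its distinguishing structural feature, namely a multi-step scheme that reuses a single operator inversion (one Jacobian factorization) per outer iteration. The chord-type substeps and the test system are assumptions made purely for illustration:

```python
import numpy as np

# Illustrative sketch (NOT the exact substeps of method (2), which are not
# reproduced here): a multi-step iteration that computes one inverse per
# outer iteration and reuses it in every substep, as frozen-Jacobian
# schemes do.

def frozen_jacobian_step(F, J, x, substeps=3):
    """One outer iteration: invert J(x) once, reuse it in all substeps."""
    Jinv = np.linalg.inv(J(x))   # the single "operator inversion"
    y = x
    for _ in range(substeps):
        y = y - Jinv @ F(y)      # each substep reuses the same inverse
    return y

# Assumed test system: F(x, y) = (x^2 + y^2 - 1, x - y),
# with solution (1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])

x = np.array([1.0, 0.5])
for _ in range(20):
    x = frozen_jacobian_step(F, J, x)
```

In practice one stores an LU factorization rather than the explicit inverse; either way the cost per outer iteration stays at one factorization plus cheap back-substitutions, which is what makes such multi-step schemes efficient.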
It was shown that this method presents eighth-order convergence and that it uses only one operator inversion per iteration. Its high efficiency was demonstrated by comparing it with other methods in the literature. However, certain problems limit the applicability of method (2), particularly when Taylor series are used to establish convergence. These problems are as follows.
- (P1)
- The convergence order eight was determined in [24] provided that , assuming the existence and boundedness of higher-order derivatives that do not appear in the formulation of the method. Let us see an example with . Define the function by , if , and if , where and . It is clear that solves the equation . However, for , the derivatives are not continuous at . Hence, the results in [24] cannot ensure the convergence of the method to . However, this method converges when taking, for example, . Consequently, the convergence conditions in [24] can be weakened.
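The specific function of the example in (P1) is not rendered in this extract; a classic function exhibiting the same phenomenon (assumed here purely for illustration) is f(t) = t³ ln t² + t⁵ − t⁴ with f(0) = 0: its third derivative is unbounded near 0, so Taylor-based high-order analysis does not apply, and yet Newton's method converges to the solution t* = 1:

```python
import math

# Assumed illustrative function (not necessarily the one in the paper):
# f(t) = t^3 ln(t^2) + t^5 - t^4 for t != 0, f(0) = 0.
# f''' involves ln(t^2) and is unbounded near 0, yet Newton's method
# still converges to the simple solution t* = 1.

def f(t):
    return t ** 3 * math.log(t ** 2) + t ** 5 - t ** 4 if t != 0 else 0.0

def df(t):
    # f'(t) = 3 t^2 ln(t^2) + 2 t^2 + 5 t^4 - 4 t^3
    return 3 * t ** 2 * math.log(t ** 2) + 2 * t ** 2 + 5 * t ** 4 - 4 * t ** 3

t = 1.1  # initial guess near the solution
for _ in range(30):
    t = t - f(t) / df(t)
```

Since f'(1) = 3 ≠ 0, the root is simple and the iteration converges quadratically despite the missing higher derivative.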
In addition, there are other factors that restrict the applicability of this method.
- (P2)
- No a priori estimates are provided, and the number of iterations that must be performed to achieve a predetermined error tolerance is unknown.
- (P3)
- Since the radius of convergence is not given in [24], selecting initial estimates that guarantee convergence to is difficult in general.
- (P4)
- The uniqueness of in a neighborhood around it is not determined.
- (P5)
- The study in [24] is restricted to .
- (P6)
- The semi-local analysis of convergence, which is in fact the most interesting, has not been developed in [24].
Therefore, issues (P1)–(P6) must be addressed in our study. It is also worth noting that Taylor series expansions constitute the dominant technique for studying multi-step iterative methods, especially when it comes to establishing the convergence order.
As a novelty, our study provides the following new insights.
- The new local conditions are based on controlling the first derivative that is present in the method.
- An a priori estimate of is developed. Thus, the number of iterations to be performed can be known in advance.
- A radius of convergence is provided.
- A set is determined that contains only one solution.
- The studies are provided for Banach space-valued operators.
- A semi-local analysis based on majorizing sequences [2,4] is carried out.
Both types of analyses rely on generalized continuity conditions on , which are employed to control it [1,2,3]. The same set of conditions is used in both analyses. It is also worth noting that the methodology of this study can be applied to other methods using Taylor series and inverses of linear operators, such as those in [6,15,16,17,18,19,20,21,22,23,25,26].
A detailed historical overview of convergence analysis methods can also be found in [2,4,5,21,26]. The flow chart of the proposed convergence technique is divided into two parts.
(1) Local convergence analysis
Sufficient local conditions are provided to establish the convergence of the method. The iterates are shown to exist in a ball centered at the solution and of a certain well-defined radius. Convergence is assured provided that the initial point is chosen from this ball. It is also shown that the sequence of errors converges to zero. Furthermore, a certain ball is specified that contains only one solution of Equation (1).
(2) Semi-local convergence analysis
Scalar sequences majorize (control) the iterates, which are shown to exist in a ball centered at and of a certain computable radius. Convergence is established as long as is small enough. A priori estimates of and determine the number of iterations that must be performed to reach a predetermined error tolerance. The uniqueness of the solution is established in a ball of finite radius centered at , completing this type of analysis.
2. Analysis of the Local Convergence
Let . The following abbreviations will be used in the analysis.
FCND: a function that is continuous as well as nondecreasing on an interval.
SPS: the smallest positive solution.
The local analysis relies on the following conditions.
- (H1)
- There exists an FCND such that the equation has the SPS denoted by . Let us assume that .
- (H2)
- There exists an FCND such that, for defined by , the equation has the SPS in . Let stand for this SPS.
- (H3)
- The equation has solutions in . Let us denote by the SPS on . Set . Define the functions , by and . The choice of depends on the functions and . We will choose the one that provides the largest radius of convergence.
- (H4)
- The equation has the SPS on . Let us denote by the SPS of this equation in .
- (H5)
- The equation has the SPS in . Let us denote by the SPS in . Set and define the function by . Define the function by . The choice of depends on the functions and . We will choose the one that provides the largest radius of convergence.
- (H6)
- The equation has a solution on . Let us denote by the SPS in . Let . These definitions imply that, for each , it holds that and . From now on, by , we mean the open ball with center and radius . Moreover, the closure of is denoted by .
Furthermore, we assume the existence of a linear operator related to the functions and ℵ, as defined below:
- (H7)
- There exists that solves the equation and there also exists an invertible linear operator M such that, for each , . Set .
- (H8)
- For any , we have that
- (H9)
- .
Remark 1.
The choice is popular, but not necessarily the most flexible one. However, in this case, is simple. We do not adopt such an assumption in conditions –. Consequently, our approach can be used to obtain solutions of multiplicity greater than one provided that . Other choices can be (the identity operator) or , where is some auxiliary point.
The main local convergence analysis for method (2) is presented below.
Theorem 1.
Suppose that conditions (H1)–(H9) hold. If , then the sequence generated by method (2) is well defined on the ball . Moreover, for each , it holds that
where the radius of convergence r is given in (1), and the functions are those given previously.
Proof.
By the hypothesis . So, the expression (4) clearly holds if . Let . The application of Condition and the Formula (3) give in return
The estimate (12), in combination with the perturbation lemma on linear and invertible operators by Banach [2], implies the existence of , which verifies
In particular, for , the linear operator exists. Thus, the iterate exists. Then, we can write
Using condition , (13) (for ), (7) (for i = 1), and Formulas (3) and (14), we arrive at
Thus, the iterate , and (9) holds if . Note that and are given in the second and third steps of (2), from which we can write
leading to
where , and we also use the estimates
or
and
leading to
Furthermore, from the last step of (2), we can write
Thus, from (7) (for ), (13) , (15)–(19) for , and (20), we arrive at
where .
Hence, the iterate , and (8) holds for and , respectively. Proceeding inductively, one can prove (8)–(11) provided that the iterates replace , respectively, in the preceding calculations. Finally, from the estimation
where , we conclude that the iterate and . The uniqueness of the solution in a certain neighborhood containing it is given in the next result. □
Proposition 1.
Let us suppose that there exists such that condition is satisfied in the ball , and there exists such that, for the function defined in , it holds that
Set . Then, the equation has a unique solution in .
Proof.
We proceed by contradiction. Suppose that there exists such that . Define the linear operator by . It follows from (23) and that
Thus, the operator is invertible. Consequently, from the identity
we conclude that . □
Remark 2.
Under all conditions (H1)–(H9), we will obtain using Proposition 1.
3. Semi-Local Analysis for Method (2)
The conditions for this case, as well as the calculations, are the same as in the local case. In this case, the point and the functions and ℵ are replaced by and the functions and v, respectively.
Now, we assume the following.
- (C1)
- There exists a such that has the SPS in . Denote by the SPS in . Set .
- (C2)
- There exists a such that has the SPS in . Define the sequences , , and for , some , and each by and . Note that the sequences , , and are majorizing for , , and , respectively (see Theorem 2).
- (C3)
- There exists such that, for each , . By adopting the above conditions in (26), we obtain , and there exists such that .
- (C4)
- There exist a point and an invertible operator K such that, for each , . Set . Notice that, if , we have . Thus, is invertible. Hence, we can take .
- (C5)
- For each , we have
- (C6)
- .
Remark 3.
(i) We can take , although this is not the most flexible choice. Other choices can be or where is an auxiliary point.
The main semi-local analysis of convergence follows for method (2).
Theorem 2.
Suppose that conditions (C1)–(C6) hold. Then, the sequence is well defined and it holds that
and
Furthermore, there exists solving Equation (1) such that
Proof.
As in the local analysis, mathematical induction is employed to prove items (27)–(30). From the definition of , (26), and the first substep of method (2), we obtain
Thus, we have that , and (28) holds if . Let . From the definition of , (26) and the condition , we obtain
Therefore, the linear operator is invertible, and
If in (32), then is invertible. Consequently, the iterates , , and are well defined. We have
Hence, we obtain
or
Then, from the second substep of (2), (26), (32) for , and (33)–(35), we have that
and
Thus, the iterate , and (29) holds if . We can write
Therefore, we obtain
where we have used
Then, we obtain
Next, from the third substep of method (2), (26), and (37), we obtain in turn
and
Thus, the iterates , and (30) holds. We can write
leading to
and
Taking in (39) and using the continuity of P, we deduce that . Finally, the estimate (31) follows from
assuming that . □
As in the local analysis, the uniqueness of the solution of equation can be obtained as follows.
Proposition 2.
Let us assume that there exists a solution of equation for some , the condition holds on the ball , and there exists such that
Set .
Then, the equation has only one solution in .
Proof.
Suppose that there also exists such that . Define the linear operator . It follows from and (41) that
Therefore, the linear operator is invertible, and . □
Remark 4.
(i) The limit point can be replaced by in condition .
- (ii)
- If all conditions (C1)–(C6) hold, then set and in Proposition 2.
4. Numerical Examples
In this section, we present computational results for six numerical examples. Two of them are academic in nature, while the remaining four involve applied science problems, including a Hammerstein integral equation of the first kind, Fisher's equation, and boundary value problems (BVPs). The corresponding results are collected in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7. In addition, we compute the computational order of convergence (COC) based on the formula
or the approximate computational order of convergence (ACOC) [25,26] given by
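Since the COC and ACOC formulas are not rendered in this extract, the standard definitions from [25,26] are assumed in the sketch below, which evaluates both on Newton iterates for x² = 2, where the order is known to be two:

```python
import math

# Standard definitions assumed here (the formulas are not rendered above):
#   COC  uses the known solution x*:   ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}),
#   ACOC uses successive differences:  ln(d_{n+1}/d_n) / ln(d_n/d_{n-1}).

def coc(xs, x_star):
    """Computational order from the last three errors e_n = |x_n - x*|."""
    ea, eb, ec = (abs(x - x_star) for x in xs[-3:])
    return math.log(ec / eb) / math.log(eb / ea)

def acoc(xs):
    """Approximate computational order from the last four iterates."""
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# Newton iterates for x^2 = 2; quadratic convergence, so both orders ≈ 2.
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
```

Both estimators should stay within rounding of 2 here; ACOC is the one used when the exact solution is unavailable.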
The stopping criteria and error tolerance are established on the following basis:
Table 1.
Example 1: Radii of convergence.
Table 2.
Example 2: Radii of convergence.
Table 3.
Computational results of Example 3.
Table 4.
Computational results of Example 4.
Table 5.
The values of abscissas and weights .
Table 6.
Computational results of Example 5.
Table 7.
Computational results of Example 6.
, where .
with multiple precision arithmetic is used for the computational results.
The first two examples are used to validate local convergence through conditions (H1)–(H9), provided that .
Example 1.
Let us assume that and . In addition, we choose P on with as
It follows from (44) that
It is straightforward to see that . Then, conditions (H1)–(H9) hold, taking
In Table 1, we collect the obtained radii of convergence for Example 1.
Example 2.
We consider and . We consider the well-known Hammerstein first-kind non-linear integral operator P:
Then, we obtain
for each .
By substituting these values in conditions (H1)–(H9), we obtain
The rest of the examples test the performance of method (2).
Example 3.
Let us consider the following boundary value problem [2]:
with and . The interval is uniformly divided into l subintervals, which yields
Then, we can choose . Using the classical second-order approximations for the derivatives
we discretize (45) and obtain the following non-linear system of equations:
In Table 3, we present the computational results after solving the system (46) by taking and with method (2). The approximate root after five iterations is
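The data of problem (45) are not rendered in this extract; the sketch below therefore assumes the Bratu-type problem u'' + e^u = 0, u(0) = u(1) = 0, purely to illustrate the discretize-then-solve procedure described above: second-order central differences yield a non-linear system with a tridiagonal Jacobian, solved here by Newton's method (method (2) would be applied in the same way):

```python
import numpy as np

# Assumed illustrative BVP (the data of (45) are not rendered above):
#   u'' + exp(u) = 0,  u(0) = u(1) = 0,
# discretized on l subintervals with central differences.

l = 50
h = 1.0 / l
u = np.zeros(l - 1)          # interior unknowns u_1, ..., u_{l-1}

def F(u):
    up = np.concatenate(([0.0], u, [0.0]))   # impose u(0) = u(1) = 0
    return (up[:-2] - 2 * up[1:-1] + up[2:]) / h ** 2 + np.exp(u)

def J(u):
    # Tridiagonal Jacobian of the discretized system.
    A = np.diag(np.full(l - 1, -2.0 / h ** 2) + np.exp(u))
    A += np.diag(np.full(l - 2, 1.0 / h ** 2), 1)
    A += np.diag(np.full(l - 2, 1.0 / h ** 2), -1)
    return A

for _ in range(10):
    u = u - np.linalg.solve(J(u), F(u))
```

Starting from the zero vector, the residual of the discrete system drops to machine precision within a few iterations.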
Example 4.
Let us consider a well-known Fisher’s problem [6], which is given by
with homogeneous Neumann’s boundary conditions
where η is a diffusion parameter. We apply the finite difference discretization approach to (47) and obtain a non-linear system, considering a mesh with points in the spatial direction and points in the temporal direction. At the grid points of a mesh, denotes the approximate values of the solution, i.e., . The corresponding step sizes are and , respectively. By using the following centered, backward, and forward approximations of the derivatives
we obtain a discretization of problem (47) given by the system
Choosing , we obtain a system of 121 equations with 121 unknowns. For the numerical computations, we take . The approximate solution of this non-linear system after five iterations of method (2) is given below:
We illustrate the numerical results in Table 4.
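Problem (47) is only partially rendered in this extract; the sketch below assumes the standard form u_t = η u_xx + u(1 − u) with homogeneous Neumann conditions and illustrates one backward-Euler time step, in which the new spatial values solve a non-linear system handled by Newton's method:

```python
import numpy as np

# Assumed form of Fisher's equation (problem (47) is not fully rendered
# above): u_t = eta * u_xx + u (1 - u), with homogeneous Neumann boundary
# conditions enforced via ghost points. One implicit (backward-Euler) time
# step yields a non-linear system solved by Newton's method.

eta, dx, dt, m = 1.0, 0.1, 0.01, 11   # assumed parameters for illustration

def step(u):
    """Advance u one time level with backward Euler + Newton."""
    v = u.copy()
    for _ in range(20):
        vp = np.concatenate(([v[1]], v, [v[-2]]))        # Neumann ghosts
        lap = (vp[:-2] - 2 * v + vp[2:]) / dx ** 2
        F = (v - u) / dt - eta * lap - v * (1 - v)
        # Tridiagonal Jacobian of F with respect to v.
        Jm = np.diag(1 / dt + 2 * eta / dx ** 2 - (1 - 2 * v))
        Jm += np.diag(np.full(m - 1, -eta / dx ** 2), 1)
        Jm += np.diag(np.full(m - 1, -eta / dx ** 2), -1)
        # Ghost points fold the off-diagonal back into the boundary rows.
        Jm[0, 1] -= eta / dx ** 2
        Jm[-1, -2] -= eta / dx ** 2
        v = v - np.linalg.solve(Jm, F)
    return v

u = 0.5 + 0.1 * np.cos(np.pi * np.linspace(0.0, 1.0, m))
u1 = step(u)
```

Stacking such steps over all time levels produces the larger space-time system described in the text; the sketch keeps a single step for brevity.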
Example 5.
We study the Hammerstein non-linear integral equation (details can be found in [2] (pp. 19–20)). The following Hammerstein integral equation is a standard applied science example of computational analysis:
where and the kernel G is
The Gauss–Legendre quadrature formula can be used to transform the above equation into a finite-dimensional problem. We approximate the integral by considering appropriate weights and abscissas . For , the and are depicted in Table 5. The are adopted to represent the approximations of . Thus, we obtain the following system of non-linear equations:
where
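The kernel and right-hand side of the discretized system are not rendered in this extract; the sketch below therefore assumes the classic Hammerstein equation x(s) = 1 + (1/2)∫₀¹ G(s,t) x(t)³ dt with the Green kernel G(s,t) = (1 − s)t for t ≤ s and s(1 − t) for s ≤ t, and solves its Gauss–Legendre discretization by Newton's method:

```python
import numpy as np

# Assumed Hammerstein equation (kernel and data are not rendered above):
#   x(s) = 1 + (1/2) * int_0^1 G(s, t) x(t)^3 dt,
#   G(s, t) = (1 - s) t  for t <= s,   s (1 - t)  for s <= t.
# Gauss-Legendre nodes/weights reduce it to a k-dimensional system.

k = 8
nodes, weights = np.polynomial.legendre.leggauss(k)
t = 0.5 * (nodes + 1.0)        # map abscissas from [-1, 1] to [0, 1]
w = 0.5 * weights              # rescale the weights accordingly

G = np.where(t[None, :] <= t[:, None],
             (1 - t[:, None]) * t[None, :],
             t[:, None] * (1 - t[None, :]))

def F(x):
    return x - 1.0 - 0.5 * (G * w) @ x ** 3

def J(x):
    # dF_i/dx_j = delta_ij - (3/2) G_ij w_j x_j^2
    return np.eye(k) - 1.5 * (G * w) * (x ** 2)[None, :]

x = np.ones(k)
for _ in range(10):
    x = x - np.linalg.solve(J(x), F(x))
```

The x_j approximate x(t_j) at the abscissas, mirroring the construction of the system described above.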
Example 6.
Finally, we examine a more complex system of non-linear equations, consisting of 300 equations with 300 unknowns. This analysis highlights the method’s ability to handle significant computational challenges and demonstrates its applicability for a wide range of practical applications involving large-scale non-linear systems. We consider the following system:
The desired zero of this problem is . The number of iterations, the CPU time, the absolute value of the function at the corresponding point, the absolute residual error, and the COC of Example 6 are shown in Table 7.
5. Concluding Remarks
In this study, certain drawbacks are identified that limit the applicability of iterative methods when the usual Taylor series approach is used to demonstrate convergence. Motivated by these issues, and in order to extend the applicability of iterative methods, a different technique is developed that does not use Taylor series. In this way, both local and semi-local convergence analyses are based solely on the operators that define the methods, which extends their applicability to more abstract settings, such as Banach spaces. Although the technique has been demonstrated with method (2), it can also be used to analyze other methods, such as those in [6,15,16,17,18,19,20,21,22,23,25,26]. Numerous concrete examples have been included to demonstrate the presented approach. Extending the technique to such methods will be the direction of our future studies. In addition, we will try to further weaken the sufficient convergence conditions and even consider necessary ones.
Author Contributions
Conceptualization, R.B., I.K.A. and H.R.; methodology, R.B. and I.K.A.; software, R.B. and I.K.A.; validation, R.B., I.K.A. and H.R.; formal analysis, R.B., I.K.A., H.R. and H.A.; investigation, R.B. and I.K.A.; resources, R.B. and I.K.A.; data curation, R.B., I.K.A. and H.R.; writing—original draft preparation, R.B. and I.K.A.; writing—review and editing, R.B., I.K.A., H.R. and H.A.; visualization, R.B., I.K.A., H.R. and H.A.; supervision, R.B., I.K.A. and H.R. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Argyros, G.I.; Regmi, S.; Argyros, I.K.; George, S. Contemporary Algorithms, 4th ed.; Nova Publisher: New York, NY, USA, 2024. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Argyros, I.K. Unified Convergence Criteria for iterative Banach space valued methods with applications. Mathematics 2021, 9, 1942. [Google Scholar] [CrossRef]
- Argyros, I.K. Theory and Applications of Iterative Methods, 2nd ed.; Engineering Series; CRC Press/Taylor and Francis Group: Boca Raton, FL, USA, 2022. [Google Scholar]
- Ostrowski, A.M. Solutions of Equations and Systems of Equations; Academic Press: New York, NY, USA; London, UK, 1966. [Google Scholar]
- Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601. [Google Scholar] [CrossRef]
- Ogbereyivwe, O.; Ojo-Orobosa, V. Family of optimal two-step fourth order iterative method and its extension for solving nonlinear equations. J. Interdiscip. Math. 2021, 24, 1347–1365. [Google Scholar] [CrossRef]
- Akram, S.; Khalid, M.; Junjua, M.U.D.; Altaf, S.; Kumar, S. Extension of King’s iterative scheme by means of memory for nonlinear equations. Symmetry 2023, 15, 1116. [Google Scholar] [CrossRef]
- Panday, S.; Mittal, S.K.; Stoenoiu, C.E.; Jäntschi, L. A New Adaptive Eleventh-Order Memory Algorithm for Solving Nonlinear Equations. Mathematics 2024, 12, 1809. [Google Scholar] [CrossRef]
- Sharma, H.; Kansal, M. A modified Chebyshev-Halley-type iterative family with memory for solving nonlinear equations and its stability analysis. Math. Methods Appl. Sci. 2023, 46, 12549–12569. [Google Scholar] [CrossRef]
- Wang, X.; Tao, Y. A new Newton method with memory for solving nonlinear equations. Mathematics 2020, 8, 108. [Google Scholar] [CrossRef]
- Torkashvand, V. A two-step method adaptive with memory with eighth-order for solving nonlinear equations and its dynamic. Comput. Methods Differ. Equat. 2022, 10, 1007–1026. [Google Scholar] [CrossRef]
- Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel parametric families of with and without memory iterative methods for multiple roots of nonlinear equations. Mathematics 2023, 11, 2036. [Google Scholar] [CrossRef]
- Zheng, Q.; Zhao, X.; Liu, Y. An optimal biparametric multipoint family and its self-acceleration with memory for solving nonlinear equations. Algorithms 2015, 8, 1111–1120. [Google Scholar] [CrossRef]
- Li, X.; Mu, C.; Ma, J.; Wang, C. Sixteenth-order method for nonlinear equations. Appl. Math. Comput. 2010, 215, 3754–3758. [Google Scholar] [CrossRef]
- Sharma, J.R.; Sharma, R. A family of modified Ostrowski’s methods with accelerated eighth order convergence. Numer. Algorithms 2010, 54, 445–458. [Google Scholar] [CrossRef]
- Thukral, R.; Petkovic, M.S. A family of three-point methods of optimal order for solving nonlinear equations. J. Comput. Appl. Math. 2010, 233, 2278–2284. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. New modifications of Potra-Ptak’s method with optimal fourth and eighth order of convergence. J. Comput. Appl. Math. 2010, 234, 2969–2976. [Google Scholar] [CrossRef]
- Liu, L.; Wang, X. Eighth-order methods with high efficiency index for solving nonlinear equations. Appl. Math. Comput. 2010, 215, 3449–3454. [Google Scholar] [CrossRef]
- Liu, L.; Wang, X. New eighth-order methods for solving nonlinear equations. J. Comput. Appl. Math. 2010, 234, 1611–1620. [Google Scholar]
- Nedzhibov, G.H. A family of multi-point iterative methods for nonlinear equations. J. Comput. Appl. Math. 2008, 222, 244–250. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
- Kou, J.; Li, Y.; Wang, X. Some modification of Newton’s method with fifth-order convergence. J. Comput. Appl. Math. 2007, 209, 146–152. [Google Scholar] [CrossRef]
- Wang, X. Fixed-point iterative method with eighth-order constructed by undetermined parameter technique for solving nonlinear systems. Symmetry 2021, 13, 863. [Google Scholar] [CrossRef]
- Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J.M. On some computational orders of convergence. Appl. Math. Lett. 2010, 23, 472–478. [Google Scholar] [CrossRef]
- Zhanlav, T.; Otgondorj, K.H. Higher order Jarratt-like iterations for solving systems of nonlinear equations. Appl. Math. Comput. 2021, 395, 125849. [Google Scholar] [CrossRef]