Abstract
The conventional approach to the local convergence analysis of an iterative method on , with m a natural number, depends on Taylor series expansion. This technique often requires the computation of high-order derivatives. However, these derivatives may not appear in the proposed method(s). Consequently, such analyses face several limitations, in particular the use of higher-order derivatives and the lack of a priori computable error bounds on the distance to the solution or of uniqueness results. In this paper, we address these drawbacks by conducting the local convergence analysis in the more general setting of a Banach space. We have selected an important family of high-convergence-order methods to demonstrate our technique as an example. However, because of its generality, our technique can be applied along the same lines to any other iterative method that uses inverses of linear operators. Our analysis not only extends to the Banach space setting but also provides convergence conditions based solely on the operators appearing in the method, which widens the applicability of the method. Additionally, we introduce a novel semilocal convergence analysis not presented before in such studies. Both forms of convergence analysis rely on the concept of generalized continuity and provide a deeper understanding of the convergence properties. Our methodology not only enhances the applicability of the suggested method(s) but also makes them suitable for problems in the applied sciences. The computational results support the theoretical findings.
MSC:
65G99; 47H17; 49M15
1. Introduction
Iterative methods stand as a foundation of numerical analysis that can handle the tough challenge of solving nonlinear equations. These equations arise in diverse domains ranging from physics and engineering to economics and finance [1,2,3,4,5,6,7]. In general, analytical solutions to such problems rarely exist. The problems can be written in the form:
where with and being Banach spaces.
Iterative methods are numerical strategies that start from one or more initial approximations and improve them until a predefined level of precision is reached. One such iterative method is the Newton–Raphson method, one of the most significant iterative methods, which is given by
where denotes a natural number, including zero. Newton’s method is a well-known and frequently applied iterative procedure for solving nonlinear problems. However, it encounters various challenges when applied to nonlinear equations, such as slow convergence, divergence when the Jacobian matrix is nearly singular, and breakdown when the Jacobian matrix is not invertible. Consequently, researchers have proposed improvements or modifications to this method over time. We have also opted for such an iterative approach, namely the following method:
where , and and .
The convergence order is established in [8] using a local Taylor series expansion, assuming the existence and boundedness of for , where m is a natural number.
The same concerns arise with the Taylor series expansion technique that is mostly used to study the convergence of iterative methods.
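For readers who wish to experiment, the following is a minimal Python sketch of the classical Newton iteration for a system F(x) = 0, namely x_{n+1} = x_n − F′(x_n)^{−1} F(x_n). The test system, its Jacobian, and the starting point are illustrative assumptions and are not taken from the present paper.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration x_{n+1} = x_n - J(x_n)^{-1} F(x_n) for F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x, n
        # Solve J(x) dx = -F(x) instead of forming the inverse explicitly.
        dx = np.linalg.solve(J(x), -Fx)
        x = x + dx
    return x, max_iter

# Illustrative 2x2 test system (an assumption, not from the paper):
# F(x, y) = (x^2 + y^2 - 1, x - y), with Jacobian J.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root, iters = newton_system(F, J, x0=[1.0, 0.5])   # converges to (sqrt(2)/2, sqrt(2)/2)
```

Multi-step families such as method (2) combine substeps of this type with additional linear solves per iteration, which is why their analysis involves inverses of linear operators.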
1.1. Motivation
- (L1)
- The convergence is shown by assuming the existence of derivatives that do not appear in this method, such as , , and . Let . Define the function by for and for , where and . It follows from this definition that the functions and are not continuous at . Thus, the results in [8] cannot guarantee the convergence of the method to the solution . However, the method does converge to , say, for and . This academic example shows that the convergence conditions can be weakened. Clearly, it is preferable to use conditions only on the operators that appear in the method.
- (L2)
- The selection of the initial point constitutes a “shot in the dark”. This is because there is no computable radius of convergence to assist us in choosing possible points .
- (L3)
- A priori and computable estimates on are not available. Hence, we do not know in advance how many iterations should be carried out to arrive at a desirable error tolerance.
- (L4)
- There is no computable neighborhood of that contains no other solution of the equation .
- (L5)
- There is no information in [8] about the semilocal analysis of convergence for the method.
- (L6)
- This study is restricted to .
The concerns (L1)–(L6) constitute our motivation for this article.
It is worth noticing that these concerns are present in any other studies using the Taylor series on [4,5,6,7,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]. Therefore, handling these concerns will also extend the applicability of other methods that use inverses of linear operators along the same lines. Our technique is demonstrated on method (2) as an example; however, since it is general, it can be applied to any other method using inverses of linear operators along the same lines. In particular, we deal with these concerns as follows:
1.2. Novelty
- (L1)′
- The new sufficient convergence conditions involve only the operators in the method. (See the conditions or ).
- (L2)′
- A computable radius of convergence is determined. Thus, the selection of initial points that guarantee the convergence of the method becomes possible. (See ).
- (L3)′
- The number of iterations required to achieve an error tolerance is known in advance, because we provide computable upper error bounds on . (See Theorem 1).
- (L4)′
- A neighborhood of is found that contains no other solution of the equation . (See the Propositions).
- (L5)′
- A semilocal analysis of convergence is developed by relying on majorizing sequences [1,2,3,4,5]. (See Section 3).
- (L6)′
- The convergence is established for Banach space-valued operators.
Notice also that the convergence in both cases is based on the generalized continuity [1,2,3] assumption on the operator .
2. Local Analysis
Let . The convergence is based on some conditions.
Suppose the following:
- (H1)
- There exists a function , which is continuous as well as nondecreasing, such that admits a smallest solution, which is positive. Denote this solution by . Take .
- (H2)
- There exists a function that is continuous and nondecreasing such that, for , the equation admits a smallest positive solution denoted by , where
- (H3)
- If , define . The equation admits a smallest solution, which is positive, denoted by . Set and .
- (H4)
- The equations and admit smallest positive solutions denoted by , respectively, where and, for , . Take and set . These definitions imply that, for each , . The developed scalar functions and ℵ are associated with the operators appearing in the method.
- (H5)
- There exists an invertible operator such that, for each , . Take .
- (H6)
- for each and
- (H7)
- .
Remark 1.
- (i)
- A possible and popular choice is or . In the latter case, is a simple solution of the equation according to the condition . In our analysis, we do not necessarily assume that is simple. Thus, the method can also be used to find solutions of of multiplicity greater than one. Another selection is . Other choices can be considered as long as and hold.
- (ii)
- We shall choose the smallest versions of the functions in the examples to obtain tighter error bounds on as well as a larger r. A short computational sketch of how such a radius is located in practice follows this remark.
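Each radius candidate is the smallest positive root of a scalar equation, which can be located numerically. The following minimal Python sketch does this by a coarse scan followed by bisection; the Lipschitz-type choice w0(t) = Lt from Remark 1 and the stand-in majorant h are assumptions for illustration only and are not the exact scalar functions derived for method (2).

```python
def smallest_positive_root(g, t_max, samples=10_000, tol=1e-12):
    """Smallest positive root of g on (0, t_max], found by a scan followed by bisection."""
    step = t_max / samples
    a = 1e-14                              # start just to the right of zero
    for k in range(1, samples + 1):
        b = k * step
        if g(a) * g(b) <= 0.0:             # sign change: a root lies in [a, b]
            while b - a > tol:
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0.0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
        a = b
    return None                            # no sign change detected on (0, t_max]

# Popular Lipschitz-type choice from Remark 1 (i): w0(t) = L*t, with L an assumed constant.
L = 2.0
w0 = lambda t: L * t
# Stand-in for a scalar majorant function whose equation h(t) = 1 defines a radius;
# this is NOT the exact function of method (2), only an illustration of the mechanism.
h = lambda t: w0(t) / (1.0 - w0(t)) if w0(t) < 1.0 else float("inf")

rho0 = smallest_positive_root(lambda t: w0(t) - 1.0, t_max=10.0)   # bound of the type in (H1)
r = smallest_positive_root(lambda t: h(t) - 1.0, t_max=rho0)       # candidate convergence radius
```

With L = 2, the sketch returns rho0 = 0.5 and a candidate radius r = 0.25; in the paper, the analogous roots are computed from the actual functions associated with method (2).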
Next, the local analysis follows for the method (2).
Theorem 1.
Suppose that the conditions H are satisfied. Then, if , the sequence is convergent to .
Proof.
The following items are established by induction:
where the radius r is determined by Formula (3), and all the scalar functions are as given previously. The condition , (3), and the definition of and r give in turn
So, the linear operator is invertible by the standard Banach perturbation lemma on invertible operators [1,2,3,4,5]. We also have
Consequently, the iterate exists by the first substep of the method (2), and
By applying the condition and using (7), (12), and (13), we obtain in turn that
Thus, the iterate , and the item (8) is satisfied for . We need the estimate
(by (5)), where the first norm in (15) can be calculated in two different ways:
or
So, the linear operator is invertible and
It follows that the iterate exists by the second substep of the method (2), and
Hence, we obtain by (3), (7) (for ), (16), and (17) that
It follows by (18) that the iterate , and the item (9) holds if . Moreover, the iterate is well defined by the third substep of the method (2), and
leading to
Thus, the iterate , and the item (10) is satisfied if . However, the computations leading to (20) can be repeated if we replace , respectively, in the preceding estimates, which terminates the induction for (10). Hence, the iterate and
where . Therefore, we conclude that all the iterates stay in and . □
Next, a neighborhood of is determined containing no other solution.
Proposition 1.
Suppose the following:
The solution for some .
The condition is satisfied in the ball , and there exists such that
Define .
Then, the only solution of the equation in the set is .
Proof.
Let be such that .
Remark 2.
Clearly, we can take in Proposition 1.
In the next section, similar estimates are used to show the semilocal analysis of convergence for the method (2). However, the role of the functions and the solution is now played by the functions and the initial point , respectively.
3. Semilocal Analysis
Suppose the following:
- (E1)
- There exists a function that is continuous and nondecreasing such that the equation has a smallest solution, which is positive, denoted by . Set .
- (E2)
- There exists a function that is continuous and nondecreasing. Define the scalar sequence , for , some , and each , by . This sequence is shown to be majorizing for the method (2) in Theorem 2. However, let us first give a convergence condition for it.
- (E3)
- There exists such that, for each , . It follows from formula (24), , , and this condition that , and this sequence is convergent to some . As in the local analysis, the functions and v relate to as follows:
- (E4)
- There exist and an invertible operator such that, for each , . Notice that . Consequently, the inverse of the linear operator exists, and we can choose . Set .
- (E5)
- for each .
- (E6)
- .
The main semilocal analysis of convergence follows under these conditions in the next result.
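Before stating it, the following sketch illustrates the mechanism behind (E2) and (E3): a scalar majorizing sequence generated from data at the initial point that is nondecreasing and bounded above, hence convergent. The recurrence used below is the classical Kantorovich one for Newton's method, with assumed constants L and eta; it is not the recurrence (24) of method (2) and serves only to show how such sequences are generated and tested.

```python
def kantorovich_majorizing(L, eta, n_steps=25):
    """Classical Kantorovich majorizing sequence for Newton's method:
    t_0 = 0,  t_{n+1} = t_n - p(t_n)/p'(t_n)  with  p(t) = (L/2) t^2 - t + eta.
    It is nondecreasing and converges to the smallest root of p when 2*L*eta <= 1."""
    p = lambda t: 0.5 * L * t * t - t + eta
    dp = lambda t: L * t - 1.0
    t, seq = 0.0, [0.0]
    for _ in range(n_steps):
        t = t - p(t) / dp(t)
        seq.append(t)
    return seq

# Assumed data (for illustration only): L = 1.0, eta = 0.4, so 2*L*eta = 0.8 <= 1 holds.
seq = kantorovich_majorizing(L=1.0, eta=0.4)
t_star = (1.0 - (1.0 - 2.0 * 1.0 * 0.4) ** 0.5) / 1.0   # smallest root of p; seq[-1] approaches it
```

The scalar test 2*L*eta <= 1 plays, for this classical sequence, the role that condition (E3) plays for the sequence (24).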
Theorem 2.
Suppose that the conditions E are satisfied. Then, there exists a solution of the equation such that for each ,
Proof.
As in the local case, but using the conditions E instead of H, we have the following series of calculations:
So, from the first two substeps, we have
Thus, we obtain from (26) and (27),
and
hence, the iterate
We also need the estimate for the definition of that
So, we have
and
Thus, (29) and (30) lead to
and
Hence, the iterate . Similarly, we have from
that
and
so the iterate .
Then, from the identity
we obtain
leading to
and
so the iterate . This completes the induction. Hence, the sequence is majorized by the convergent scalar sequence of (E2) and (E3); it is therefore Cauchy and converges to some satisfying the assertions of the theorem. □
Next, as in the local case, the uniqueness of the solution is established for the equation in a neighborhood of the initial point .
Proposition 2.
Suppose the following:
There exists a solution for some
The condition () is satisfied in the ball , and there exists such that
Take .
Then, the only solution of the equation in the set is .
Proof.
Let be such that . Define the linear operator . It follows from the condition and (35) that
so the operator is invertible. Then, from the identity
we conclude that . □
Remark 3.
- (i)
- A popular choice is or . Other selections are possible as long as the conditions and hold.
- (ii)
- The limit point can be replaced by in the condition .
- (iii)
- Under all the conditions in Proposition 2, one can take and .
4. Numerical Illustrations
We now illustrate numerically the theoretical results developed in the previous sections. The applicability is demonstrated on real-life and standard nonlinear problems, together with the chosen initial guesses, in Examples 1–5. These examples involve a mixture of scalar equations and nonlinear systems. In Table 1, Table 2, Table 3, Table 4 and Table 5, we display not only the radii of convergence but also the least number of iterations needed to reach the solutions of , the absolute residual error at the corresponding iteration, and . Additionally, we obtain the approximated order of convergence by means of
or [7,15,16,17] by:
We adopt as the error tolerance. The following stopping criteria are adopted in the computer programs for solving the nonlinear systems: and .
The computational work is implemented with the software using higher-precision arithmetic. In addition, we have chosen and in the next two examples.
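Since the exact expressions used above for the approximated order are not reproduced here, the sketch below implements one standard variant, the approximate computational order of convergence (ACOC), computed from the last four iterates and requiring no knowledge of the exact solution. The function name and the test data are illustrative assumptions.

```python
import numpy as np

def acoc(iterates):
    """Approximate computational order of convergence (one standard variant):
    rho = ln(||x_{k+1}-x_k|| / ||x_k-x_{k-1}||) / ln(||x_k-x_{k-1}|| / ||x_{k-1}-x_{k-2}||),
    computed from the last four iterates."""
    x = [np.asarray(v, dtype=float) for v in iterates[-4:]]
    e = [np.linalg.norm(x[i + 1] - x[i]) for i in range(3)]    # successive step norms
    return np.log(e[2] / e[1]) / np.log(e[1] / e[0])

# Illustrative scalar iterates with roughly quadratically shrinking errors.
fake_iterates = [np.array([1e-1]), np.array([1e-2]), np.array([1e-4]), np.array([1e-8])]
print(acoc(fake_iterates))   # approximately 2, as expected for quadratic convergence
```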
Example 1.
We selected a well-known problem involving a nonlinear integral equation of the first kind with a Hammerstein operator, which is used in many fields, including physics and engineering. Both integral and nonlinear components are present in this problem, and such problems either have no analytical solutions at all or very complicated ones. To provide an example, let us take and . The Hammerstein operator Δ [1,2,3,4,6] involved in this nonlinear integral equation of the first kind is defined by
The derivative of operator Δ is given below:
for . The values of the operator satisfy the hypotheses of Theorem 1 if we choose for
Hence, we present the convergence radii of illustration (Example 1) in Table 1.
Table 1.
Radii of illustration (Example 1) of method (2).
| r | |||||||
|---|---|---|---|---|---|---|---|
| 1 | 0.083333 | 0.036537 | 0.036537 | 0.04167 | 0.018854 | 0.013528 | 0.013528 |
| | 0.083333 | 0.017014 | 0.017014 | 0.018518 | 0.0091788 | 0.0072646 | 0.0072646 |
| | 0.083333 | 0.0083754 | 0.0083754 | 0.0087719 | 0.0045910 | 0.0039538 | 0.0039538 |
| | 0.083333 | 0.023074 | 0.023074 | 0.025641 | 0.012293 | 0.0093465 | 0.0093465 |
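Operators of the type in Example 1 are typically discretized by replacing the integral with a quadrature rule before an iterative method is applied. The sketch below shows this step with Gauss–Legendre nodes for an illustrative instance Δ(x)(s) = x(s) − λ ∫₀¹ s t x(t)³ dt, a common textbook choice; the kernel, the nonlinearity, and λ are assumptions and are not necessarily those used in Example 1.

```python
import numpy as np

def gl_nodes_weights(m):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, 1]."""
    t, w = np.polynomial.legendre.leggauss(m)
    return 0.5 * (t + 1.0), 0.5 * w

def hammerstein_residual(x, lam=5.0, m=8):
    """Discretized operator Delta(x)(s_i) = x(s_i) - lam * s_i * sum_j w_j * t_j * x_j^3
    (illustrative instance, not necessarily the operator of Example 1)."""
    s, w = gl_nodes_weights(m)
    return x - lam * s * np.sum(w * s * x**3)

def hammerstein_jacobian(x, lam=5.0, m=8):
    """Frechet derivative of the discretized operator: I - 3*lam * s_i * w_j * t_j * x_j^2."""
    s, w = gl_nodes_weights(m)
    return np.eye(m) - 3.0 * lam * np.outer(s, w * s * x**2)
```

Any of the iterative schemes discussed above can then be applied to the resulting m-dimensional system.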
Example 2.
Three-by-three nonlinear systems are complicated mathematical problems found throughout science and engineering, and they become even more challenging when polynomial and exponential terms are mixed. That is why we picked such a system to demonstrate the applicability of our methods. Let and , where . Consider P on ϕ by means of
where . It has the following Fréchet derivative:
Then, ; so, we have
Hence, we present the convergence radii of illustration (Example 2) in Table 2.
Table 2.
Radii of illustration (Example 2) of method (2).
| r | |||||||
|---|---|---|---|---|---|---|---|
| 1 | 0.581977 | 0.380074 | 0.380074 | 0.382692 | 0.20423 | 0.15438 | 0.15438 |
| | 0.581977 | 0.18349 | 0.18349 | 0.16433 | 0.103706 | 0.084580 | 0.084580 |
| | 0.581977 | 0.092482 | 0.092482 | 0.076748 | 0.053632 | 0.046965 | 0.046965 |
| | 0.581977 | 0.24531 | 0.24531 | 0.22993 | 0.13639 | 0.10772 | 0.10772 |
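The operator P of Example 2 is not reproduced above. As a stand-in of the same flavour (exponential and polynomial terms in three unknowns), the sketch below defines a 3 × 3 system and its Fréchet derivative, which is the input that an implementation of an iterative scheme requires; the specific system shown is an assumption for illustration only.

```python
import numpy as np

# Stand-in 3x3 system mixing exponential and polynomial terms (illustrative only,
# not the operator P of Example 2); its solution is (0, 0, 0).
def P(v):
    x, y, z = v
    return np.array([np.exp(x) - 1.0,
                     0.5 * (np.e - 1.0) * y**2 + y,
                     z])

def P_prime(v):
    """Frechet derivative (Jacobian matrix) of the stand-in system."""
    x, y, z = v
    return np.array([[np.exp(x), 0.0,                    0.0],
                     [0.0,       (np.e - 1.0) * y + 1.0, 0.0],
                     [0.0,       0.0,                    1.0]])
```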
Examples for the Semilocal Analysis of Convergence (SLAC)
We illustrate the theoretical results of the semilocal convergence on three different problems, namely, Examples 3–5. In addition, we select four cases from expression (2): the fifth-order method with and , denoted as Case-1 and Case-2, respectively, and the seventh-order method with and , denoted as Case-3 and Case-4, respectively. These three examples are the renowned 2D Bratu, BVP, and Fisher problems, which are applied-science problems leading to nonlinear systems of order , , and , respectively.
Example 3.
Bratu 2D Problem:
The well-known 2D Bratu problem, defined in [18], is given below:
To obtain a nonlinear system, we apply a finite-difference discretization to expression (40). The required solution at the grid points of the mesh is . In addition, M and N denote the numbers of steps in the x and t directions, respectively. Moreover, h and k are the step sizes in the corresponding directions. Applying central differences to the PDE (40) (with and ), we obtain
As an example of a nonlinear system of size , we picked and . The method converges to the following solution, displayed as a column vector rather than a matrix:
In Table 3, we mention the computational results based on Example 3.
Table 3.
Computational results of illustration (Example 3).
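For completeness, the sketch below assembles the nonlinear residual produced by the five-point central-difference discretization of the standard 2D Bratu equation u_xx + u_tt + λe^u = 0 with zero boundary values and equal step sizes; the grid size and λ are assumptions, since the specific values used in Example 3 are not reproduced here.

```python
import numpy as np

def bratu_2d_residual(U, lam, h):
    """Residual of the five-point central-difference discretization of
    u_xx + u_tt + lam * exp(u) = 0 with zero Dirichlet boundary values.
    U holds the interior unknowns as an (N, N) array; a flattened vector is returned."""
    Up = np.pad(U, 1)                       # embed the zero boundary values
    lap = (Up[2:, 1:-1] + Up[:-2, 1:-1] + Up[1:-1, 2:] + Up[1:-1, :-2] - 4.0 * U) / h**2
    return (lap + lam * np.exp(U)).ravel()

# Illustrative setup (the grid size and lambda are assumptions, not the paper's choices).
N, lam = 10, 0.5
h = 1.0 / (N + 1)
F = lambda u: bratu_2d_residual(u.reshape(N, N), lam, h)   # F(u) = 0 is the nonlinear system
```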
Example 4.
Boundary value problems (BVPs) are important in mathematics and physics. They involve differential equations subject to several conditions, frequently imposed at the boundary. The modeling of practical phenomena such as heat transfer, fluid flow, and quantum mechanics relies heavily on BVPs, which provide valuable insights into physical systems. Therefore, we chose the following BVP (see [14]):
with . The partition of the interval into ℓ pieces provides us with the following:
Here, stand for , respectively. In addition, and . Substituting them into expression (41), we have
with the help of the discretization technique. In this way, we obtain the following nonlinear system of equations of order
We picked and in order to obtain a larger nonlinear system of order . In Table 4, we depict the number of iterations, the residual errors, , the CPU time, and the error differences between two consecutive iterations for Example 4. The above system (42) converges to the following estimated zero:
Table 4.
Computational results of illustration (Example 4).
| Method (4) | CPU | |||||
|---|---|---|---|---|---|---|
| Case-1 | 4 | |||||
| Case-2 | 4 | |||||
| Case-3 | 3 | |||||
| Case-4 | 3 |
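The reduction used in Example 4 follows the standard finite-difference pattern: y″ and y′ are replaced by central differences at the interior grid points of the partition. Since the particular BVP (41) is not reproduced above, the sketch below applies this pattern to a generic problem y″ = f(t, y, y′) with Dirichlet data; the right-hand side f and the boundary values are placeholders.

```python
import numpy as np

def bvp_residual(y_inner, f, a, b, ya, yb):
    """Central-difference residual for  y'' = f(t, y, y')  on [a, b], y(a)=ya, y(b)=yb.
    y_inner holds the unknown interior values y_1, ..., y_{l-1}."""
    l = len(y_inner) + 1
    h = (b - a) / l
    t = a + h * np.arange(1, l)                        # interior grid points
    y = np.concatenate(([ya], y_inner, [yb]))          # attach the boundary values
    d2 = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2       # central difference for y''
    d1 = (y[2:] - y[:-2]) / (2.0 * h)                  # central difference for y'
    return d2 - f(t, y[1:-1], d1)

# Placeholder BVP (not the one in (41)):  y'' = -y,  y(0) = 0, y(pi/2) = 1  (exact solution sin t).
f = lambda t, y, dy: -y
res = lambda y_inner: bvp_residual(y_inner, f, 0.0, np.pi / 2, 0.0, 1.0)
```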
Example 5.
We adopt the renowned Fisher's equation [22]:
Here, D denotes a diffusion parameter. To obtain a nonlinear system, we apply a finite-difference discretization to expression (43). The required solution at the grid points of the mesh is . In addition, M and N denote the numbers of steps in the x and t directions, respectively. Moreover, h and l are the step sizes in the corresponding directions. Utilizing the central, backward, and forward differences that follow, we obtain
leading to
Here, are used. The system of nonlinear equations of dimension is obtained by choosing , , and . The above nonlinear system converges to the following column vector solution (not a matrix):
We illustrate the numerical results in Table 5.
Table 5.
Computational results of illustration (Example 5).
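Finally, the discretization described for (43) can be sketched as follows for one common variant: a central difference in space combined with a backward (implicit) difference in time, so that each time level yields a nonlinear system. The diffusion parameter D, the grid data, the boundary values, and the initial profile below are assumptions; the exact stencil and the choices of Example 5 are not reproduced here.

```python
import numpy as np

def fisher_residual(U, U_prev, D, h, k, left, right):
    """One implicit time step for Fisher's equation  u_t = D*u_xx + u*(1 - u):
    backward difference in time, central difference in space (one common variant).
    U are the unknown interior values at the new time level, U_prev at the previous one."""
    Ub = np.concatenate(([left], U, [right]))                   # attach boundary values
    uxx = (Ub[2:] - 2.0 * Ub[1:-1] + Ub[:-2]) / h**2
    return (U - U_prev) / k - D * uxx - U * (1.0 - U)

# Illustrative data (D, the grid, and the initial profile are assumptions, not the paper's choices).
M, D, h, k = 20, 1.0, 1.0 / 21, 0.01
U_prev = np.linspace(1.0, 0.0, M)                               # placeholder initial profile
res = lambda U: fisher_residual(U, U_prev, D, h, k, left=1.0, right=0.0)
```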
5. Conclusions
This paper has highlighted the nature of convergence analysis for iterative methods, especially when no explicit assurances about convergence are available. The conventional investigation of convergence behavior relies on Taylor series expansions and therefore on assumptions involving derivatives such as , , and , which do not appear in the method. In the previous work, there was no computable radius of convergence and no information guiding the choice of the initial point. We made a significant contribution by offering computable upper bounds on , which allow one to determine in advance the number of iterations needed to achieve a prescribed error tolerance. Furthermore, the computation of a convergence radius makes the choice of an initial point better informed. The emphasis on Banach space-valued operators highlights the relevance of this study to a wide range of problems. In addition, we introduce a semilocal analysis together with careful consideration of generalized continuity assumptions, which significantly advance our understanding of the convergence behavior of iterative methods. We have also verified the semilocal convergence on practical examples drawn from real-world applications. In particular, the drawbacks of the Taylor series expansion approach listed in (L1)–(L6) of the introduction have all been positively addressed in (L1)′–(L6)′. The technique developed in this paper can also be used, with the same benefits, on other iterative methods, whether or not they require inverses of linear operators, in an analogous fashion [5,6,7,8,9,10,11,12,13,14,23]. This will be the direction of our future research, including the further weakening of the sufficient convergence conditions presented in this article.
Author Contributions
Conceptualization, R.B. and I.K.A.; methodology, R.B. and I.K.A.; software, R.B. and I.K.A.; validation, R.B. and I.K.A.; formal analysis, R.B. and I.K.A.; investigation, R.B. and I.K.A.; resources, R.B. and I.K.A.; data curation, R.B. and I.K.A.; writing—original draft preparation, R.B. and I.K.A.; writing—review and editing, R.B., I.K.A. and S.A.; visualization, R.B., I.K.A. and S.A.; supervision, R.B. and I.K.A. All authors have read and agreed to the published version of this manuscript.
Funding
This study is supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2024/R/1445).
Institutional Review Board Statement
Not applicable.
Data Availability Statement
Data are contained within the article.
Acknowledgments
The author Sattam Alharbi wishes to thank Prince Sattam bin Abdulaziz University project number (PSAU/2024/R/1445) for funding support.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Argyros, I.K.; Magreñan, A.A. A Contemporary Study of Iterative Methods; Academic Press: Cambridge, MA, USA ; Elsevier: Amsterdam, The Netherlands, 2018. [Google Scholar]
- Argyros, I.K.; George, S. Mathematical Modeling for the Solutions with Application; Nova Publisher: New York, NY, USA, 2019; Volume III. [Google Scholar]
- Argyros, I.K. Unified Convergence Criteria for iterative Banach space valued methods with applications. Mathematics 2021, 9, 1942. [Google Scholar] [CrossRef]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966. [Google Scholar]
- Gutiérrez, J.M.; Hernández, M.A. A family of Chebyshev-Halley type methods in Banach spaces. Bull. Aust. Math. Soc. 1997, 55, 113–130. [Google Scholar] [CrossRef]
- Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451. [Google Scholar] [CrossRef]
- Xiao, X.; Yin, H. A new class of methods with higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2015, 264, 300–309. [Google Scholar]
- Behl, R.; Sarria, F.; González, R.; Magreñán, Á.A. Highly efficient family of iterative methods for solving nonlinear models. J. Comput. Appl. Math. 2016, 346, 110–132. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef]
- Ezquerro, J.A.; Grau-Sánchez, M.; Grau, Á.; Hernández, M.A.; Noguera, M.; Romero, N. On iterative methods with accelerated convergence for solving systems of nonlinear equations. J. Optim. Theory Appl. 2011, 151, 163–174. [Google Scholar] [CrossRef]
- Frontini, M.; Sormani, E. Some variant of Newton’s method with third-order convergence. Appl. Math. Comput. 2003, 140, 419–426. [Google Scholar] [CrossRef]
- Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 149, 771–782. [Google Scholar] [CrossRef]
- Grau-Sánchez, M.; Grau, Á.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
- Grau-Sánchez, M.; Grau, Á.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266. [Google Scholar] [CrossRef]
- Grau-Sánchez, M.; Grau, Á.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743. [Google Scholar] [CrossRef]
- Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef]
- Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu’s equation. Comput. Mech. 1990, 6, 55–63. [Google Scholar] [CrossRef]
- Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
- Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 62, 307–323. [Google Scholar] [CrossRef]
- Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601. [Google Scholar] [CrossRef]
- Shakhno, S.M. On a Kurchatov’s method of linear interpolation for solving nonlinear equations. PAMM Proc. Appl. Math. Mech. 2004, 4, 650–651. [Google Scholar] [CrossRef]
- Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
- Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1996. [Google Scholar]
- Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Nauka: Moscow, Russia, 1984. (In Russian) [Google Scholar]
- Potra, F.A. On an iterative algorithm of order 1.839… for solving nonlinear operator equations. Numer. Funct. Anal. Optim. 1985, 7, 75–106. [Google Scholar] [CrossRef]
- Fernando, T.G.I.; Weerakoon, S. Improved Newton’s method for finding roots of a nonlinear equation. In Proceedings of the 53rd Annual Sessions of Sri Lanka Association for the Advancement of Science (SLAAS), Matara, Sri Lanka, 8–12 December 1997; pp. E1–E22. [Google Scholar]
- Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261. [Google Scholar] [CrossRef]