Abstract
In this study, we extend the applicability of a derivative-free algorithm to equations whose operators may be either differentiable or non-differentiable. Conditions weaker than those in earlier studies are employed for the convergence analysis. The earlier results assumed the existence of derivatives of the main operator up to the ninth order, even though no derivatives appear in the algorithm, and their reliance on Taylor series on a finite-dimensional Euclidean space restricts the applicability of the algorithm. Moreover, the previous results could not be used for non-differentiable equations, although the algorithm may still converge. The new local result uses only conditions on the divided differences appearing in the algorithm to show convergence. Moreover, the more challenging semi-local convergence, which had not previously been studied, is established using majorizing sequences. The paper includes results on upper bounds of the error estimates and on domains in which the equation has only one solution. The methodology of this paper is applicable to other algorithms using inverses and in the setting of a Banach space. Numerical examples further validate our approach.
Keywords:
three-step eighth-order algorithm; convergence; divided differences; differentiable and non-differentiable equations
MSC:
49M15; 65H10; 47H17; 65G99
1. Introduction
Let F denote a mapping of a subset of E into E, where E is a Banach space. In a plethora of applications, researchers have reduced the problem to finding a solution of
An analytical form of the solution is, in general, difficult to determine. Therefore, iterative algorithms have been developed that generate sequences converging to under certain initial hypotheses [,,,].
Newton's scheme [,,], defined by
is a popular quadratic-order algorithm (a minimal sketch of such an iteration, for the finite-dimensional case, is given after the list below). Recently, there has been a surge in the need to develop algorithms of order higher than two [,,,,]. The Taylor series expansion provides the local order of convergence. However, this approach has the following limitations:
- (C1)
- The convergence analysis is usually only local and restricted to the space , where j is a natural number.
- (C2)
- The sufficient convergence hypotheses involve , where is the order of convergence.
- (C3)
- No a priori or computable estimates of the error distances are available.
- (C4)
- The isolation of the solution is not discussed.
- (C5)
- The semi-local convergence, which is considered more interesting and challenging than the local convergence, is not discussed.
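For reference, the following is a minimal sketch of the Newton iteration mentioned above, written for the finite-dimensional case. The function F, its Jacobian dF, and the tolerance used here are illustrative placeholders and are not taken from this paper; the sketch only serves to contrast Newton's scheme, which requires the derivative, with the derivative-free algorithm studied below.
```python
import numpy as np

def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration x_{n+1} = x_n - dF(x_n)^{-1} F(x_n).

    F  : callable returning an n-vector
    dF : callable returning the n x n Jacobian (the derivative the
         derivative-free algorithm studied in this paper avoids)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(dF(x), F(x))   # solve dF(x) s = F(x)
        x = x - step
        if np.linalg.norm(step) < tol:        # stop when the correction is small
            break
    return x

# Illustrative use on F(x, y) = (x^2 + y^2 - 1, x - y); not an example from the paper.
F  = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
dF = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton(F, dF, [1.0, 0.5]))   # converges to (sqrt(2)/2, sqrt(2)/2)
```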
Our idea addresses concerns (C1)–(C5) as follows:
- (C1)’
- The analysis is developed in the Banach space setting.
- (C2)’
- The sufficient convergence hypotheses involve only the operators appearing in the algorithm (see Algorithm 1), i.e., the divided differences. This is in contrast with the motivational work in [], which uses hypotheses on high-order derivatives that do not appear in the algorithm to show its convergence.
- (C3)’
- Error estimates become available based on the concept of continuity [,,,] in the local case and on majorizing sequences [,,,] in the semi-local case.
- (C4)’
- The isolation of the solution is specified.
- (C5)’
- The semi-local convergence analysis of the algorithm is studied.
An algorithm from [] is used to demonstrate this idea. However, the same idea is similarly applicable to other algorithms containing inverses of linear operators [,,,,,,,,,,,].
Let us restate the algorithm, formulated in the Banach space setting, as follows.
| Algorithm 1 |
| Step 1: Given solve for |
| Step 2: Set |
| Step 3: Solve for |
| Step 4: Solve for |
| Step 5: Solve for |
| Step 6: Set |
| Step 7: Solve for |
| Step 8: Solve for |
| Step 9: Solve for |
| Step 10: Solve for |
| Step 11: Solve for |
| Step 12: Set |
| Step 13: If STOP. Otherwise, repeat the process with |
Here, and are free real parameters, and are fixed real numbers, and
Here, is a divided difference of order one for the operator F [,,,], and the notation is used for the set of continuous linear operators mapping E into itself. An interesting feature of the algorithm is that, since the same coefficient operator is used, only one LU decomposition per iteration suffices for solving, e.g., the resulting linear systems with multiple right-hand sides. Thus, the algorithm, which is without memory, includes three steps (see Step 2, Step 6, and Step 12, where the iterates are computed, respectively) and five free non-zero operator parameters. In Section 4, the parameters are further specialized. The convergence order is shown to be eight in Theorem 1 in []. However, the existence of the ninth derivative is required for the local convergence analysis []. Thus, if F is not differentiable to at least that order, the conclusions in [] cannot assure the convergence of the algorithm to . Nevertheless, the algorithm may still converge. Other limitations are listed in the aforementioned concerns (C1)–(C5). As a further motivation, consider the following example: if stands for any neighborhood containing the numbers , define if . Clearly, solves the equation . However, the conclusions in [] cannot assure that , although the algorithm converges to , since the function is not continuous at . The more challenging semi-local analysis of the algorithm, which has not yet been presented, is also developed in this paper [,,,,,].
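For completeness, we recall the standard defining property of a first-order divided difference; this is a well-known definition from the divided-difference literature cited above, restated here because the displayed formula is not reproduced in the text.
```latex
% [x, y; F] denotes a bounded linear operator on E satisfying the secant equation
\[
  [x, y; F](x - y) = F(x) - F(y), \qquad x \neq y,
\]
% and, when F is Fr\'echet differentiable, one may take [x, x; F] = F'(x).
```
Regarding the reuse of a single LU decomposition, the following minimal sketch illustrates the point in the finite-dimensional case; the coefficient matrix A and the right-hand sides b1, b2, b3 are illustrative stand-ins for the common coefficient operator and the different substeps of the algorithm.
```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# One LU factorization of the common coefficient operator ...
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                 # illustrative matrix, not from the paper
lu, piv = lu_factor(A)

# ... reused to solve every linear system arising in the substeps
# (multiple right-hand sides, one factorization).
b1 = np.array([1.0, 2.0])
b2 = np.array([0.5, -1.0])
b3 = np.array([2.0, 0.0])
y1 = lu_solve((lu, piv), b1)
y2 = lu_solve((lu, piv), b2)
y3 = lu_solve((lu, piv), b3)
print(y1, y2, y3)
```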
2. Local Analysis
We use the symbols and to denote the open and closed balls in , respectively, with center and radius . Let M denote the nonnegative real axis, and let NFC stand for a function that is nondecreasing and continuous on M or some subset of it. The following hypotheses are required in the local analysis.
Assume:
- (H1)
- Nondecreasing and continuous functions (NFC) exist, such that the equation admits a minimal positive solution (MPS), denoted by . Let . It follows that for each and, consequently, the function provided by is positive.
- (H2)
- NFC functions and exist, such that the equations admit MPS denoted by , respectively, where are provided as and for . Define the parameter and the set . These definitions imply that if .
- (H3)
- L is an invertible operator on E, such that for each . Define the region with .
- (H4)
- for and , where and are provided by the last two substeps of the algorithm. It is shown that exist (see the proof of Theorem 1), and
- (H5)
- where is the closure of
A local analysis of the algorithm follows.
Theorem 1.
Under the conditions (H1)–(H5), the sequence is convergent to , provided that . Moreover, the following assertions hold:
Proof.
From the hypothesis If and then It follows that the divided difference is well defined. Then, by (H1), (4) and (H3), we obtain
Thus, by the Banach perturbation lemma on linear operators with inverses [,,,], exists,
and the iterate exists in the first substep of the algorithm.
The uniqueness region can be determined.
Proposition 1.
Suppose:
A solution of the equation exists for some ; the condition (H3) holds on the ball ; and exists, such that
Define the region . Then, the equation is uniquely solvable by in the region .
Proof.
Let us consider the divided difference provided that Then, it follows by (H3) and (15) that
Thus, the linear operator M is invertible. It follows from that □
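Since the displayed estimates are not reproduced above, we note how such a uniqueness argument is usually completed; the following is a sketch of the standard divided-difference argument, with p and q standing for the two candidate solutions in the stated region, not the paper's exact display.
```latex
% Standard divided-difference uniqueness argument:
% if p and q both solve F(x) = 0 and M = [p, q; F] is invertible, then
\[
  M(p - q) = [p, q; F](p - q) = F(p) - F(q) = 0 ,
\]
% so the invertibility of M forces p = q.
```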
Remark 1.
- (i)
- We can certainly choose in Proposition 1.
- (ii)
- Possible choices for the aforementioned functions can be obtained as follows: Thus, we can define . Similarly, we set and .
- (iii)
- A possible choice for L in local convergence studies may be , or , or any other linear operator satisfying the conditions (H1)–(H5) (see also Example 1 in Section 4).
3. Semi-Local Analysis
The role of is exchanged with and as follows:
Assume:
- (E1)
- An NFC function exists, such that the equation has MPS . Let . Define the sequence for and each by and
- (E2)
- exists, such that for each . It follows that and exists, such that
- (E3)
- An invertible linear operator L and exist, such that for each and . Notice that, by condition (E1), . Thus, the linear operator is invertible, and we can take
- (E4)
- Let for each and
- (E5)
Then, using induction, as in the local case, we obtain the estimates
Thus, and the sequence is Cauchy in the Banach space E. Hence, exists, such that
By letting in (16), we obtain . Notice that, from the estimate,
If in (17), we obtain
Therefore, we arrive at the following semi-local convergence result for Algorithm 1:
Theorem 2.
Suppose that conditions (E1)–(E5) hold. Then, the following assertions hold
and exists, solving the equation
The uniqueness of the solution of the equation is specified next.
Proposition 2.
Suppose: a solution of the equation exists for some ; the condition (E3) holds on the ball ; and exists, such that
Define the region . Then, the equation is uniquely solvable by in the region .
Proof.
Let with and Then, the divided difference is well defined. It then follows from (18) that
Therefore, we deduce □
Remark 2.
- (i)
- The limit point can be replaced by in (E5) (provided in the condition (E1)).
- (ii)
- Suppose that all the conditions (E1)–(E5) hold in Proposition 2. Then, set and
- (iii)
- Functions can be specified as in the local case by the following estimates: Hence, we can define . Similarly, we choose and .
- (iv)
- A possible choice for L may be , provided that the operator is invertible, or . Other choices are possible, as long as the conditions (E1)–(E4) are validated.
4. Numerical Examples
In this section, we chose , and for all of the examples to obtain the specialization of the algorithm defined by Algorithm 2.
| Algorithm 2 |
| Step 1: Given solve for |
| Step 2: Set |
| Step 3: Solve for |
| Step 4: Solve for |
| Step 5: Solve for |
| Step 6: Set |
| Step 7: Solve for |
| Step 8: Solve for |
| Step 9: Solve for |
| Step 10: Solve for |
| Step 11: Set |
| Step 12: If STOP. Otherwise, repeat the process with |
Here , .
Also, we considered the choice of the divided difference and
In Example 1, we provided the choice of the operator L as well as the functions used to validate the local convergence conditions (H1)–(H5). Notice that the functions and were chosen as in Remark 1 (ii). There was no need to choose the operator L in the rest of the examples, as the convergence of the aforementioned algorithm was established directly (semi-local convergence). The stopping criterion is , where is the desired error tolerance.
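Since the displayed formulas for the divided difference and the stopping criterion are not reproduced above, the following sketch only illustrates one common finite-dimensional choice of a first-order divided-difference matrix together with a stopping rule of the form ||x_{k+1} - x_k|| < eps; the test function F, the points x, y, and the tolerance eps are illustrative and are not taken from the examples below.
```python
import numpy as np

def divided_difference(F, x, y):
    """One common first-order divided-difference matrix [x, y; F]:
    column j differences F evaluated with the first j+1 components taken
    from y against F with only the first j components taken from y (the
    remaining components come from x).  By construction, the secant
    property [x, y; F](y - x) = F(y) - F(x) holds exactly."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    m = x.size
    D = np.zeros((m, m))
    for j in range(m):
        upper = np.concatenate((y[:j + 1], x[j + 1:]))
        lower = np.concatenate((y[:j], x[j:]))
        D[:, j] = (F(upper) - F(lower)) / (y[j] - x[j])
    return D

def stop(x_new, x_old, eps=1e-10):
    """Stopping rule of the form ||x_{k+1} - x_k|| < eps (an assumed form)."""
    return np.linalg.norm(x_new - x_old) < eps

# Illustrative check on F(x, y) = (x^2 - y, x + y^2); not one of the paper's examples.
F = lambda v: np.array([v[0]**2 - v[1], v[0] + v[1]**2])
x = np.array([1.0, 1.0])
y = np.array([1.2, 0.9])
D = divided_difference(F, x, y)
print(np.allclose(D @ (y - x), F(y) - F(x)))   # True: secant property verified
```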
Example 1.
Let and The mapping F is defined on Ω for as
Then, is calculated to be
Then, solves equation . Moreover, the definition of provides . Take
Then, conditions (H3)–(H5) are valid if we define for as provided in Remark 1.
These choices of scalar functions validate the conditions of Theorem 1. This assures the convergence of the sequence to the solution .
Example 2.
A solution is sought for the nonlinear system
Let for , where
Then, the system becomes
The divided difference belongs to the space and is the standard matrix given in [,]. Let us choose . It turns out that the algorithm converges to the solution of , since the initial guess is close enough to it. Hence, there is no need to validate the conditions of Theorem 2, which are sufficient but not necessary. The application of the algorithm then provides the solution after three iterations. The solution is , where
and
Example 3.
Consider the system of 100 equations defined by
where
The results obtained for the initial point are provided in Table 1.
Table 1.
Iterated solutions of Example 3.
Example 4.
In this example, we consider a system of five equations defined by
where
The results obtained for the initial points are provided in Table 2.
Table 2.
Iterated solutions of Example 4.
All computations were carried out in MATLAB R2023b on a 4-core, 64-bit Windows machine with an 11th Gen Intel(R) Core(TM) i5-1135G7 CPU @ 2.40 GHz.
5. Conclusions
The three-step eighth-order algorithm, which does not use derivatives of the operator, was studied in this paper using assumptions only on the first-order divided difference of the operator. Earlier studies based on the Taylor series expansion made assumptions involving derivatives up to the ninth order, which do not appear in the algorithm [].
We provided sufficient convergence conditions involving only the operators appearing in the algorithm, computable upper bounds on the error distances, and uniqueness-of-solution results. It is worth noticing that the methodology of this study does not depend on the convergence order of the iterative algorithm, as the convergence conditions do not make use of it. Moreover, the assumption that the solution is simple was neither made nor implied by the convergence conditions. Thus, when the convergence conditions are satisfied, the method also finds solutions of multiplicity greater than one. The approach in this paper can be applied to other algorithms with inverses to obtain the same benefits [,,,]. This will be the focus of our future research.
Author Contributions
Conceptualization, S.R., I.K.A. and S.G.; algorithm study, S.R., I.K.A. and S.G.; software, S.R., I.K.A. and S.G.; validation, S.R., I.K.A. and S.G.; formal analysis, S.R., I.K.A. and S.G.; investigation, S.R., I.K.A. and S.G.; resources, S.R., I.K.A. and S.G.; data curation, S.R., I.K.A. and S.G.; writing—original draft preparation, S.R., I.K.A. and S.G.; writing—review and editing, S.R., I.K.A. and S.G.; visualization, S.R., I.K.A. and S.G.; supervision, S.R., I.K.A. and S.G.; project administration, S.R., I.K.A. and S.G. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Data are contained within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
References
- Argyros, I.K.; George, S. Ball comparison between two optimal eight-order methods under weak conditions. SeMA J. 2015, 72, 1–11.
- Argyros, I.K.; George, S. Local convergence of two competing third order methods in Banach space. Appl. Math. 2014, 4, 341–350.
- Argyros, C.I.; Regmi, S.; Argyros, I.K.; George, S. Contemporary Algorithms: Theory and Applications; NOVA Publishers: Hauppauge, NY, USA, 2022; Volume II.
- Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algorithms 2015, 70, 545–558.
- Ahmad, F.; Tohidi, E.; Ullah, M.Z.; Carrasco, J.A. Higher order multi-step Jarratt-like algorithm for solving systems of nonlinear equations: Application to PDEs and ODEs. Comput. Math. Appl. 2015, 70, 624–636.
- Alharbey, R.A.; Kansal, M.; Behl, R.; Machado, J.A.T. Efficient Three-Step Class of Eighth-Order Multiple Root Solvers and Their Dynamics. Symmetry 2019, 11, 837.
- Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. On developing fourth-order optimal families of methods for multiple roots and their dynamics. Appl. Math. Comput. 2015, 265, 520–532.
- Budzko, D.A.; Cordero, A.; Torregrosa, J.R. New family of iterative methods based on the Ermakov–Kalitkin scheme for solving nonlinear systems of equations. Comput. Math. Math. Phys. 2015, 55, 1947–1959.
- Cordero, A.; Soleymani, F.; Torregrosa, J.R. Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension? Appl. Math. Comput. 2014, 244, 398–412.
- Ahmad, F.; Soleymani, F.; Haghani, F.K.; Serra-Capizzano, S. Higher order derivative-free iterative methods with and without memory for systems of nonlinear equations. Appl. Math. Comput. 2017, 314, 199–211.
- Potra, F.-A. A characterisation of the divided differences of an operator which can be represented by Riemann integrals. Rev. Anal. Numér. Théor. Approx. 1980, 2, 251–253.
- Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence analysis of a two-step method for the nonlinear least squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 128, 82–95.
- Shakhno, S.M.; Gnatyshyn, O.P. On an iterative algorithm of order 1.839… for solving nonlinear operator equations. Appl. Math. Comput. 2005, 161, 253–264.
- Grau-Sánchez, M.; Noguera, M.; Diaz-Barrero, J.L. Note on the efficiency of some iterative methods for solving nonlinear equations. SeMA J. 2015, 71, 15–22.
- Magreñán, A.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38.
- Magreñán, A.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 29–38.
- Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S. On a new method for computing the numerical solution of systems of nonlinear equations. J. Appl. Math. 2012, 15, 751975.
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Upper Saddle River, NJ, USA, 1964.
- Sharma, J.R.; Arora, H.; Petković, M.S. An efficient derivative free family of fourth order methods for solving systems of nonlinear equations. Appl. Math. Comput. 2014, 235, 383–393.
- Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284.
- Singh, R.; Panday, S. Efficient optimal eighth order algorithm for solving nonlinear equations. AIP Conf. Proc. 2023, 2728, 030013.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).