Article

Extending the Applicability of Cordero Type Iterative Method

1 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangalore 575 025, India
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(12), 2495; https://doi.org/10.3390/sym14122495
Received: 19 October 2022 / Revised: 10 November 2022 / Accepted: 16 November 2022 / Published: 24 November 2022
(This article belongs to the Special Issue Iterative Numerical Functional Analysis with Applications II)

Abstract

Symmetries play a vital role in the study of physical systems. For example, microworld and quantum physics problems are modeled on the principles of symmetry. These problems are then formulated as equations defined on suitable abstract spaces, and most such studies reduce to solving nonlinear equations in those spaces iteratively. In particular, the convergence of a sixth-order Cordero-type iterative method for solving nonlinear equations was previously studied using Taylor expansion and assumptions on the derivatives of order up to six. In this study, we obtain order of convergence six for the Cordero-type method using assumptions only on the first derivative. Moreover, we modify Cordero's method and obtain an eighth-order iterative scheme. Further, we consider analogous iterative methods to solve an ill-posed problem in a Hilbert space setting.

1. Introduction

As already mentioned in the abstract, the main goal is to obtain the convergence order of the method studied in [1] without using assumptions on the higher-order derivatives. Throughout this paper, $U, V$ denote Banach spaces and $\Omega \subseteq U$ is a convex set. We are interested in approximating a solution $u^*$ of the equation
$$J(u) = 0, \qquad (1)$$
where $J : \Omega \subseteq U \to V$ is a nonlinear operator that is Fréchet differentiable. A considerable number of nonlinear problems of the form (1) arising in physics, chemistry, biology, finance, and mathematics are modeled on principles of symmetry. In general, the classical second-order Newton method, defined for each $k = 0, 1, 2, \ldots$ by
$$u_{k+1} = u_k - J_{u_k}^{-1} J(u_k), \qquad (2)$$
where $J_{u_k} = J'(u_k)$, is considered to be the most efficient iterative method for solving Equation (1). Cordero et al. [2] modified the classical Newton method by employing Adomian polynomial decomposition and obtained a fourth-order iterative scheme. The scheme in [2] is defined for each $k = 0, 1, 2, \ldots$ by
$$v_k = u_k - J_{u_k}^{-1} J(u_k), \qquad u_{k+1} = v_k - \big(2 J_{u_k}^{-1} - J_{u_k}^{-1} J_{v_k} J_{u_k}^{-1}\big) J(v_k), \qquad (3)$$
where $J_{v_k} = J'(v_k)$. This fourth-order Cordero method has better stability than the classical Newton method while attaining higher-order convergence.
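As an illustration, here is a minimal scalar sketch (our own, not from [1,2]) of the Newton iteration (2) and the fourth-order scheme (3); in one dimension the operator inverse $J_{u_k}^{-1}$ is simply division by $J'(u_k)$. The test equation $u^3 = 0.85$ is the one used again in Section 3; the function names and step counts are our choices.

```python
def newton(J, dJ, u, steps=8):
    """Classical Newton iteration (2): u_{k+1} = u_k - J'(u_k)^{-1} J(u_k)."""
    for _ in range(steps):
        u = u - J(u) / dJ(u)
    return u

def cordero4(J, dJ, u, steps=4):
    """Fourth-order Cordero scheme (3): a Newton predictor v_k followed by
    the corrector (2a - a J'(v_k) a) J(v_k), where a = J'(u_k)^{-1}."""
    for _ in range(steps):
        a = 1.0 / dJ(u)                         # J'(u_k)^{-1}
        v = u - a * J(u)                        # Newton predictor
        u = v - (2 * a - a * dJ(v) * a) * J(v)  # fourth-order corrector
    return u

J = lambda u: u**3 - 0.85                       # scalar test equation
dJ = lambda u: 3 * u**2
root = 0.85 ** (1 / 3)
print(abs(newton(J, dJ, 1.6) - root), abs(cordero4(J, dJ, 1.6) - root))
```

Both runs reach the root to machine accuracy; the fourth-order scheme needs roughly half as many iterations as Newton for the same starting point.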
A new technique was introduced by Cordero et al. in [1] to increase the convergence order of an iterative method from $q$ to $q + 2$ by combining it with the classical Newton method. Using this technique, the authors extended the fourth-order method (3) to a sixth-order scheme, defined for each $k = 0, 1, 2, \ldots$ by
$$v_k = u_k - J_{u_k}^{-1} J(u_k), \qquad w_k = v_k - J_{u_k}^{-1}\big(2I - J_{v_k} J_{u_k}^{-1}\big) J(v_k), \qquad u_{k+1} = w_k - J_{v_k}^{-1} J(w_k). \qquad (4)$$
However, the disadvantage of the convergence analysis conducted by Cordero et al. [1] is that it uses a Taylor expansion involving Fréchet derivatives of the function up to order six. Convergence analyses of iterative methods in Banach spaces are typically conducted using Taylor expansions, which require assumptions on the higher-order derivatives of the operator involved [1,3,4,5,6,7,8]. If these higher-order derivatives are unbounded, such schemes have limited applicability. For example, consider the equation $G(t) = 0$, where $G : [-\frac{1}{2}, \frac{5}{2}] \to \mathbb{R}$ is defined by
$$G(t) = \begin{cases} t^3 \log(t^2) + t^5 - t^4, & t \neq 0, \\ 0, & t = 0. \end{cases}$$
Since the third derivative of $G$ is unbounded on its domain, a convergence analysis that depends on Taylor expansion is not applicable to this example.
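To see the unboundedness concretely: for $t > 0$ we have $G(t) = 2t^3 \ln t + t^5 - t^4$, and differentiating three times by hand gives $G'''(t) = 12 \ln t + 22 + 60t^2 - 24t$, whose $12 \ln t$ term diverges as $t \to 0^+$. A quick numerical check (our own sketch):

```python
import math

def G3(t):
    """Third derivative of G for t > 0, computed by hand from
    G(t) = 2 t^3 ln t + t^5 - t^4:  G'''(t) = 12 ln t + 22 + 60 t^2 - 24 t."""
    return 12 * math.log(t) + 22 + 60 * t**2 - 24 * t

for t in (1e-1, 1e-3, 1e-6):
    print(t, G3(t))   # magnitude grows without bound as t -> 0+
```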
In this study, we obtain sixth-order convergence for the method (4) without using Taylor expansion, employing assumptions only on the first Fréchet derivative. The novelty of our approach is that it requires neither higher-order Fréchet derivatives of the operator nor a Taylor expansion in the convergence analysis; thus, we enhance the method's utility. We also modify the last step of the method (4) and obtain a new eighth-order iterative scheme, defined for each $k = 0, 1, 2, \ldots$ by
$$v_k = u_k - J_{u_k}^{-1} J(u_k), \qquad w_k = v_k - J_{u_k}^{-1}\big(2I - J_{v_k} J_{u_k}^{-1}\big) J(v_k), \qquad u_{k+1} = w_k - J_{w_k}^{-1} J(w_k), \qquad (5)$$
where $J_{w_k} = J'(w_k)$.
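A minimal scalar sketch of the two schemes (our own illustration; in one dimension the inverses reduce to divisions). Note that the only difference between (4) and (5) is the linearization point used in the last step:

```python
def cordero6(J, dJ, u, steps=3):
    """Sixth-order scheme (4): the final step uses J'(v_k)^{-1}."""
    for _ in range(steps):
        a = 1.0 / dJ(u)                    # J'(u_k)^{-1}
        v = u - a * J(u)
        w = v - a * (2 - dJ(v) * a) * J(v)
        u = w - J(w) / dJ(v)               # last step: divide by J'(v_k)
    return u

def cordero8(J, dJ, u, steps=3):
    """Eighth-order scheme (5): identical to (4) except the final step
    uses the fresher inverse J'(w_k)^{-1}."""
    for _ in range(steps):
        a = 1.0 / dJ(u)
        v = u - a * J(u)
        w = v - a * (2 - dJ(v) * a) * J(v)
        u = w - J(w) / dJ(w)               # last step: divide by J'(w_k)
    return u

J = lambda u: u**3 - 0.85
dJ = lambda u: 3 * u**2
root = 0.85 ** (1 / 3)
print(abs(cordero6(J, dJ, 1.6) - root), abs(cordero8(J, dJ, 1.6) - root))
```

Both variants reach machine accuracy in very few iterations; the extra derivative evaluation in (5) buys the higher order at the cost of one more linear solve per step in the operator setting.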
In [9], Parhi and Sharma proved the convergence of the method (4) without using Taylor expansion. However, they could not obtain the sixth-order convergence theoretically for method (4).
In this study, we also estimate the radii of convergence of the methods (4) and (5) under assumptions on the first-order Fréchet derivative, and we compute the efficiency indices. We numerically demonstrate that the radius of convergence obtained in our study is superior to the estimates of Parhi and Sharma. We also consider analogous iterative methods for these two schemes to solve an ill-posed problem in a Hilbert space.
The convergence analysis of methods (4) and (5) is provided in Section 2. The radius of convergence and the Approximate Computational Order of Convergence (ACOC) are computed numerically in Section 3. A numerical example of an ill-posed problem is given in Section 4, and the paper concludes in Section 5.

2. Convergence Analysis of (4) and (5)

We use the notations $B(t_0, \rho) = \{t \in U : \|t - t_0\| < \rho\}$ and $\overline{B(t_0, \rho)} = \{t \in U : \|t - t_0\| \le \rho\}$ for some $\rho > 0$. The following definition and assumptions are used to prove our results.
Definition 1. 
A sequence $\{u_n\}$ is said to converge to a solution $u^*$ with order $q$ if there exists $K > 0$ such that
$$\|u_{n+1} - u^*\| \le K \|u_n - u^*\|^q.$$
Assumption A1. 
There exists $\zeta_1 > 0$ such that, for all $u, v \in D(J)$,
$$\|J'(u)^{-1}(J'(v) - J'(u))\| \le \zeta_1 \|v - u\|.$$
Assumption A2. 
There exist $\zeta_2 > 0$ and $\rho > 0$ such that, for all $u, v \in B(u^*, \rho)$,
$$\|J'(u)^{-1} J'(v)\| \le \zeta_2.$$
The local convergence is based on functions $\phi_i, \psi_i$, $i = 1, 2$, which are defined as follows. Let $\phi_1 : [0, \infty) \to [0, \infty)$ be defined by
$$\phi_1(t) = \frac{\zeta_1^3}{32}\left[16\zeta_1 + 4\zeta_2^2 + 8\zeta_1^2 t + \zeta_1^3 t^2\right] t^3$$
and
$$\psi_1(t) = \phi_1(t) - 1.$$
We observe that $\psi_1(0) = -1$ and $\psi_1(t) \to \infty$ as $t \to \infty$. So, by the intermediate value theorem, $\psi_1(t) = 0$ has a minimal zero $\rho_1 > 0$. Similarly, define $\phi_2 : [0, \infty) \to [0, \infty)$ by
$$\phi_2(t) = \frac{\zeta_1^2}{2}\left(1 + \frac{\phi_1(t)}{\zeta_1 t}\right)\phi_1(t)\, t^2$$
and
$$\psi_2(t) = \phi_2(t) - 1.$$
Furthermore, let $\rho_2 > 0$ be the minimal zero of $\psi_2(t) = 0$. Let
$$\rho = \min\left\{\frac{2}{\zeta_1}, \rho_1, \rho_2\right\}. \qquad (6)$$
Then, $0 < \phi_1(t), \phi_2(t) < 1$ for all $t \in (0, \rho)$. Let $e_n^u = \|u_n - u^*\|$, $e_n^v = \|v_n - u^*\|$, and $e_n^w = \|w_n - u^*\|$, $n = 0, 1, 2, \ldots$
Theorem 1. 
(Existence and Uniqueness). Let $\rho$ be as in (6). Then the sequence $\{u_k\}$ defined by (4) with $u_0 \in B(u^*, \rho) \setminus \{u^*\}$ converges to $u^*$ with order of convergence six; i.e.,
$$e_{k+1}^u \le C (e_k^u)^6,$$
where $C = \frac{\zeta_1^5}{64}\left(1 + \frac{\phi_1(\rho)}{\zeta_1 \rho}\right)\left[16\zeta_1 + 4\zeta_2^2 + 8\zeta_1^2 \rho + \zeta_1^3 \rho^2\right]$. Suppose that (1) has a simple solution in the set $S = \Omega \cap \overline{B(u^*, \rho)}$. Then $u^*$ is the unique solution of the equation $J(u) = 0$ in the set $S$, provided that $\zeta_1 \rho < 2$.
Proof. 
(Existence Part) By induction, we shall prove the following inequalities:
$$v_n \in B(u^*, \rho), \quad e_n^v \le \frac{\zeta_1}{2}(e_n^u)^2; \qquad w_n \in B(u^*, \rho), \quad e_n^w \le \phi_1(e_n^u)\, e_n^u; \qquad u_{n+1} \in B(u^*, \rho), \quad e_{n+1}^u \le C (e_n^u)^6.$$
For $u_0 \in B(u^*, \rho)$, by (4) we have
$$v_0 - u^* = u_0 - u^* - J_{u_0}^{-1}\big(J(u_0) - J(u^*)\big) = -J_{u_0}^{-1}\int_0^1 \big(J'(u^* + t(u_0 - u^*)) - J_{u_0}\big)(u_0 - u^*)\, dt.$$
So, by Assumption A1, we obtain
$$e_0^v \le \frac{\zeta_1}{2}(e_0^u)^2. \qquad (7)$$
By (6), $\frac{\zeta_1}{2}(e_0^u)^2 \le \frac{\zeta_1}{2}\rho^2 < \rho$, so we have $v_0 \in B(u^*, \rho)$. Again, from the second step of (4),
$$w_0 - u^* = v_0 - u^* - J_{u_0}^{-1}\big(2I - J_{v_0} J_{u_0}^{-1}\big)\big(J(v_0) - J(u^*)\big).$$
Writing $\Gamma = \int_0^1 J'(u^* + t(v_0 - u^*))\, dt$, so that $J(v_0) - J(u^*) = \Gamma(v_0 - u^*)$, and using $2I - J_{v_0} J_{u_0}^{-1} = I + (J_{u_0} - J_{v_0}) J_{u_0}^{-1}$, we get
$$w_0 - u^* = J_{u_0}^{-1}(J_{u_0} - \Gamma)(v_0 - u^*) - J_{u_0}^{-1}(J_{u_0} - J_{v_0})\, J_{u_0}^{-1} \Gamma\, (v_0 - u^*).$$
Therefore, taking norms and using (7) together with Assumptions A1 and A2, we obtain
$$e_0^w \le \frac{\zeta_1^4}{32}\left[16 + 8\zeta_1 e_0^u + \zeta_1^2 (e_0^u)^2\right](e_0^u)^4 + \frac{\zeta_1^3 \zeta_2^2}{8}(e_0^u)^4 = \frac{\zeta_1^3}{32}\left[16\zeta_1 + 4\zeta_2^2 + 8\zeta_1^2 e_0^u + \zeta_1^3 (e_0^u)^2\right](e_0^u)^4 = \phi_1(e_0^u)\, e_0^u < e_0^u. \qquad (8)$$
Thus, $w_0 \in B(u^*, \rho)$. By the third step of (4), we have
$$u_1 - u^* = w_0 - u^* - J_{v_0}^{-1}\big(J(w_0) - J(u^*)\big) = -J_{v_0}^{-1}\int_0^1 \big(J'(u^* + t(w_0 - u^*)) - J_{v_0}\big)(w_0 - u^*)\, dt.$$
Again, using Assumption A1 together with (7) and (8), we get
$$e_1^u \le \zeta_1\left(e_0^v + \frac{e_0^w}{2}\right) e_0^w \le \frac{\zeta_1^2}{2}\left(1 + \frac{\phi_1(e_0^u)}{\zeta_1 e_0^u}\right)\phi_1(e_0^u)\,(e_0^u)^3 = \phi_2(e_0^u)\, e_0^u.$$
Note that
$$\phi_2(e_0^u)\, e_0^u = \frac{\zeta_1^5}{64}\left(1 + \frac{\phi_1(e_0^u)}{\zeta_1 e_0^u}\right)\left[16\zeta_1 + 4\zeta_2^2 + 8\zeta_1^2 e_0^u + \zeta_1^3 (e_0^u)^2\right](e_0^u)^6, \qquad (9)$$
so by (9) we get
$$e_1^u \le \frac{\zeta_1^5}{64}\left(1 + \frac{\phi_1(e_0^u)}{\zeta_1 e_0^u}\right)\left[16\zeta_1 + 4\zeta_2^2 + 8\zeta_1^2 e_0^u + \zeta_1^3 (e_0^u)^2\right](e_0^u)^6 \le C (e_0^u)^6.$$
Further, since $\phi_2(e_0^u) < 1$, we have $u_1 \in B(u^*, \rho)$. The induction is completed by replacing $u_0, v_0, w_0, u_1$ with $u_n, v_n, w_n, u_{n+1}$, respectively, in the preceding arguments.
(Uniqueness Part) Let $\bar{u}$ be another solution of Equation (1) in the set $S$. Let $T = \int_0^1 J'(u^* + t(\bar{u} - u^*))\, dt$. By Assumption A1, we have
$$\|J'(u^*)^{-1}(T - J'(u^*))\| \le \zeta_1 \int_0^1 \|u^* + t(\bar{u} - u^*) - u^*\|\, dt = \zeta_1 \int_0^1 t\, \|\bar{u} - u^*\|\, dt \le \frac{\zeta_1}{2}\rho < 1.$$
Therefore, by the Banach lemma [10], one can conclude that $T$ is invertible. Hence $\bar{u} = u^*$ follows from $0 = J(\bar{u}) - J(u^*) = T(\bar{u} - u^*)$. □
Next, we prove the convergence of method (5). Let $\tilde{\phi}_2 : [0, \infty) \to [0, \infty)$ be defined by
$$\tilde{\phi}_2(t) = \frac{\zeta_1}{2}\,\phi_1(t)\, t^4.$$
Again, by the intermediate value theorem, $\tilde{\psi}_2(t) = \tilde{\phi}_2(t) - 1 = 0$ has a minimal zero $\tilde{\rho}_2 > 0$. Let us define
$$\tilde{\rho} = \min\left\{\frac{2}{\zeta_1}, \rho_1, \tilde{\rho}_2\right\}. \qquad (10)$$
Theorem 2. 
Let $\tilde{\rho}$ be as in (10). Then the sequence $\{u_k\}$ defined by (5) with $u_0 \in B(u^*, \tilde{\rho}) \setminus \{u^*\}$ converges to $u^*$ with order of convergence eight; i.e.,
$$e_{k+1}^u \le \tilde{C}\,(e_k^u)^8,$$
where $\tilde{C} = \frac{\zeta_1 \phi_1(\tilde{\rho})}{2\,\tilde{\rho}^3}$. Furthermore, $u^*$ is the unique solution of Equation (1) in the set $S = \Omega \cap \overline{B(u^*, \tilde{\rho})}$, provided that $\zeta_1 \tilde{\rho} < 2$.
Proof. 
By the third sub-step of (5), we have
$$u_1 - u^* = w_0 - u^* - J_{w_0}^{-1}\big(J(w_0) - J(u^*)\big),$$
so, arguing as for (7) and using (8), we get
$$e_1^u \le \frac{\zeta_1}{2}(e_0^w)^2 \le \frac{\zeta_1}{2}\,\phi_1(e_0^u)\,(e_0^u)^5 = \tilde{\phi}_2(e_0^u)\, e_0^u < e_0^u. \qquad (11)$$
From (11), we get
$$e_1^u \le \frac{\zeta_1 \phi_1(\tilde{\rho})}{2\,\tilde{\rho}^3}\,(e_0^u)^8 = \tilde{C}\,(e_0^u)^8.$$
The rest of the proof proceeds in the same manner as in Theorem 1. □
Remark 1. 
Note that by (8), we obtain the convergence order four for the Cordero method (3).

3. Estimation of Radius of Convergence and Computational Order

We estimate the radii of convergence $\rho$ and $\tilde{\rho}$ to validate the theoretical results.
Example 1. 
Let $U = V = \mathbb{R}$, $u_0 = 1$, $\Omega = [u_0 - (1 - k), u_0 + (1 - k)]$, $k \in (2 - \sqrt{2}, 1)$, and let $J : \Omega \to \mathbb{R}$ be defined by
$$J(u) = u^3 - k.$$
We have $J_{u_0} = J'(u_0) = 3$, so $J_{u_0}^{-1} = \frac{1}{3}$ and, for $u \in \Omega$,
$$\|J_{u_0}^{-1}(J'(u) - J_{u_0})\| = \frac{1}{3}\,|3u^2 - 3| = |u + 1|\,|u - 1| \le (3 - k)(1 - k).$$
By using the Banach lemma,
$$\|J'(u)^{-1} J_{u_0}\| \le \frac{1}{1 - \|J_{u_0}^{-1} J'(u) - I\|}, \quad \text{so} \quad \|J'(u)^{-1}\| \le \frac{1}{3\big(1 - (3 - k)(1 - k)\big)}.$$
So,
$$\|J'(u)^{-1}(J'(v) - J'(u))\| \le \frac{3\,|v^2 - u^2|}{3\big(1 - (3 - k)(1 - k)\big)} = \frac{|v + u|\,|v - u|}{1 - (3 - k)(1 - k)} \le \frac{2(2 - k)}{1 - (3 - k)(1 - k)}\,|v - u|.$$
Therefore, $\zeta_1 = \frac{2(2 - k)}{1 - (3 - k)(1 - k)}$. Similarly, using $|v| \le 2 - k \le 2$,
$$\|J'(u)^{-1} J'(v)\| \le \frac{3v^2}{3\big(1 - (3 - k)(1 - k)\big)} \le \frac{2(2 - k)}{1 - (3 - k)(1 - k)} = \zeta_2.$$
Setting $k = 0.85$, we get $\zeta_1 = \zeta_2 \approx 3.3948$, $\rho_1 \approx 0.1899$, $\rho_2 \approx 0.2092$, $\frac{2}{\zeta_1} \approx 0.5892$, and
$\rho = \min\{\frac{2}{\zeta_1}, \rho_1, \rho_2\} \approx 0.1899$. Furthermore, we have $\tilde{\rho}_2 \approx 0.4409$ and $\tilde{\rho} = \min\{\frac{2}{\zeta_1}, \rho_1, \tilde{\rho}_2\} \approx 0.1899$. Using the convergence analysis in [9], one obtains the radius $R = 0.1123$.
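The reported values can be reproduced with a short script (a sketch under our reconstruction of $\phi_1$; the bisection routine and the bracket $(0, 1)$ are our choices; since $\phi_1$ is increasing there, plain bisection finds the minimal zero):

```python
k = 0.85
zeta1 = 2 * (2 - k) / (1 - (3 - k) * (1 - k))   # ~ 3.3948
zeta2 = zeta1

def phi1(t):
    """phi1(t) = (zeta1^3/32) [16 zeta1 + 4 zeta2^2 + 8 zeta1^2 t + zeta1^3 t^2] t^3."""
    return (zeta1**3 / 32) * (16 * zeta1 + 4 * zeta2**2
                              + 8 * zeta1**2 * t + zeta1**3 * t**2) * t**3

def min_zero(f, lo=0.0, hi=1.0, n=80):
    """Bisection for the zero of an increasing f with f(lo) < 0 < f(hi)."""
    for _ in range(n):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

rho1 = min_zero(lambda t: phi1(t) - 1)          # ~ 0.1899
print(round(zeta1, 4), round(rho1, 4))
```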
Example 2. 
Let $U = V = \mathbb{R}^3$, $\Omega = \overline{B(0, 1)}$, $u_0 = (0, 0, 0)^T$. Define the function $J$ on $\Omega$ for $x = (u, v, w)^T$ by
$$J(x) = \left(e^u - 1,\; \frac{e - 1}{2}\,v^2 + v,\; w\right)^T.$$
Then,
$$J'(x) = \begin{pmatrix} e^u & 0 & 0 \\ 0 & (e - 1)v + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Thus, $\zeta_1 = e - 1$ and $\zeta_2 = e$. Furthermore, we get $\frac{2}{\zeta_1} \approx 1.1639$, $\rho_1 \approx 0.4510$, $\rho_2 \approx 0.4779$, and the radius of convergence $\rho = \min\{\frac{2}{\zeta_1}, \rho_1, \rho_2\} \approx 0.4510$. Furthermore, we have $\tilde{\rho}_2 \approx 0.7154$ and $\tilde{\rho} = \min\{\frac{2}{\zeta_1}, \rho_1, \tilde{\rho}_2\} \approx 0.4510$. Parhi and Sharma [9] considered this example and obtained the radius $R = 0.133649$.
Remark 2. 
We observe that $\tilde{\rho} = \rho$ in the above examples. Furthermore, note that we obtain a larger radius of convergence than that of Parhi and Sharma's convergence analysis in [9].
To verify that the methods (4) and (5) attain their orders of convergence computationally, we calculated the Approximate Computational Order of Convergence (ACOC) (Table 1), defined as [1]
$$\Sigma = \frac{\ln\big(\|u_{k+1} - u_k\| / \|u_k - u_{k-1}\|\big)}{\ln\big(\|u_k - u_{k-1}\| / \|u_{k-1} - u_{k-2}\|\big)}.$$
We considered the following functions and used the stopping criterion $\|u_{k+1} - u_k\| + \|J(u_{k+1})\| \le 10^{-10}$:
$$J(t_1, t_2, t_3) = \left(e^{t_1} - 1,\; \frac{e - 1}{2}\,t_2^2 + t_2,\; t_3\right), \qquad (12)$$
$$J(t) = t^3 - 0.85, \qquad (13)$$
$$J(t_1, t_2) = \left(t_1^2 - 4t_1 + t_2^2,\; 2t_1 + t_2^2 - 2\right), \qquad (14)$$
$$J(t_1, t_2) = \left(t_1^2 + t_2^2 - 1,\; t_1^2 - t_2^2 + 0.5\right), \qquad (15)$$
$$J(t_1, t_2) = \left(t_1^3 - t_2,\; t_2^3 - t_2\right), \qquad (16)$$
$$J(t_1, t_2) = \left(3t_1^2 t_2 - t_2^3,\; t_1^3 - 3t_1 t_2^2 + 1\right). \qquad (17)$$
Note that the oscillatory nature of the approximations and slow convergence in the initial stage are the main difficulties in computing the ACOC for higher-order iterative methods. In Table 1, we observe that the choice of a suitable initial approximation plays a vital role in achieving the maximal order of convergence (see Equations (12), (13) and (14)). Furthermore, at least four iterations are required to compute the ACOC (see Equation (15)). Specifically, Table 1 provides the ACOC for nonlinear equations using the Newton method (NM) (2), Cordero's fourth-order method (CM) (3), the first extension (CM1) (4), and the second extension (CM2) (5). Here, $N$, $u^*$, and $u_0$ denote the number of iterations, the root, and the initial value, respectively.
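The ACOC is straightforward to evaluate from stored iterates; a sketch (our own) using the Newton iterates for $J(t) = t^3 - 0.85$ started from $t_0 = 1.6$:

```python
import math

def acoc(u):
    """ACOC estimate from the last four iterates of the sequence u."""
    d1 = abs(u[-1] - u[-2])
    d2 = abs(u[-2] - u[-3])
    d3 = abs(u[-3] - u[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

u = [1.6]
for _ in range(5):                 # Newton iterates for t^3 - 0.85 = 0
    t = u[-1]
    u.append(t - (t**3 - 0.85) / (3 * t**2))

print(round(acoc(u), 2))           # close to the theoretical order 2
```

As the text notes, too few iterations (fewer than four) or differences that hit machine precision make the quotient undefined, which is why some entries of Table 1 are missing.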
Remark 3. 
The efficiency index $c_f$ is defined as $c_f = q^{1/m}$, where $q$ is the order of convergence and $m$ is the number of function (and derivative) evaluations [11]. The informational efficiency $I$ is defined as $I = q/m$ [12]. The efficiency index and informational efficiency of the fourth-order Cordero method (3) are $c_f = 4^{1/4} = 1.41$ and $I = 4/4 = 1$, respectively, which coincide with those of the Newton method. In contrast, $c_f = 6^{1/5} = 1.43$ and $I = 6/5 = 1.2$ for the sixth-order method (4), and $c_f = 8^{1/6} = 1.41$ and $I = 8/6 = 1.33$ for the eighth-order method (5).
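The arithmetic in Remark 3 can be checked directly (a trivial sketch; the $(q, m)$ pairs are those stated above):

```python
# (method, order q, number of function/derivative evaluations m)
methods = [("Newton (2)", 2, 2), ("Cordero (3)", 4, 4),
           ("sixth-order (4)", 6, 5), ("eighth-order (5)", 8, 6)]

for name, q, m in methods:
    cf = q ** (1 / m)      # efficiency index c_f = q^(1/m)
    inf = q / m            # informational efficiency I = q/m
    print(f"{name}: cf = {cf:.2f}, I = {inf:.2f}")
```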

4. Application to Ill-Posed Problem

We implemented the analogous iterative methods (2), (3), (4) and (5) to solve a nonlinear ill-posed problem (see [13,14] for details).
Example 3. 
Let $c > 0$ be a constant. Consider the inverse problem of identifying the distributed growth law $u(t)$, $t \in (0, 1)$, in the initial value problem
$$\frac{dy}{dt} = u(t)\, y(t), \qquad y(0) = c,$$
from the noisy data $y^\delta(t) \in L^2(0, 1)$. One can reformulate the above problem as an ill-posed operator equation
$$J(u) = y, \qquad (18)$$
with
$$[J(u)](t) = c\, e^{\int_0^t u(\theta)\, d\theta}, \qquad u \in L^2(0, 1),\; t \in (0, 1).$$
The Fréchet derivative of $J$ is given by
$$[J'(u)h](t) = [J(u)](t) \int_0^t h(\theta)\, d\theta.$$
It is proved in [15] that $J'$ is of positive type and that the spectrum of $J'_u$ is the singleton set $\{0\}$. We use the Lavrentiev regularization method with $\alpha > 0$ (see [14] for details), i.e.,
$$J(u) + \alpha(u - u_0) = y, \qquad (19)$$
to approximate the exact solution $\hat{u}$ of (18). To solve (19), we consider the analogous iterative methods of (2), (3), (4) and (5), defined for each $k = 0, 1, \ldots$ by
$$u_{k+1} = u_k - (J'_{u_k} + \alpha I)^{-1}\big(J(u_k) + \alpha(u_k - u_0) - y^\delta\big),$$
$$v_k = u_k - (J'_{u_k} + \alpha I)^{-1}\big(J(u_k) + \alpha(u_k - u_0) - y^\delta\big), \qquad u_{k+1} = v_k - \Big(2(J'_{u_k} + \alpha I)^{-1} - (J'_{u_k} + \alpha I)^{-1}(J'_{v_k} + \alpha I)(J'_{u_k} + \alpha I)^{-1}\Big)\big(J(v_k) + \alpha(v_k - u_0) - y^\delta\big),$$
$$v_k = u_k - (J'_{u_k} + \alpha I)^{-1}\big(J(u_k) + \alpha(u_k - u_0) - y^\delta\big), \qquad w_k = v_k - (J'_{u_k} + \alpha I)^{-1}\big(2I - (J'_{v_k} + \alpha I)(J'_{u_k} + \alpha I)^{-1}\big)\big(J(v_k) + \alpha(v_k - u_0) - y^\delta\big), \qquad u_{k+1} = w_k - (J'_{v_k} + \alpha I)^{-1}\big(J(w_k) + \alpha(w_k - u_0) - y^\delta\big),$$
and
$$v_k = u_k - (J'_{u_k} + \alpha I)^{-1}\big(J(u_k) + \alpha(u_k - u_0) - y^\delta\big), \qquad w_k = v_k - (J'_{u_k} + \alpha I)^{-1}\big(2I - (J'_{v_k} + \alpha I)(J'_{u_k} + \alpha I)^{-1}\big)\big(J(v_k) + \alpha(v_k - u_0) - y^\delta\big), \qquad u_{k+1} = w_k - (J'_{w_k} + \alpha I)^{-1}\big(J(w_k) + \alpha(w_k - u_0) - y^\delta\big),$$
respectively.
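For intuition, here is a self-contained discretization of the regularized Newton analogue (the method corresponding to (2)) with exact data ($\delta = 0$): the integral is approximated by the cumulative trapezoidal rule on a uniform grid, which makes $J'(u) + \alpha I$ lower triangular, so each step reduces to a forward substitution. The grid size, the value of $\alpha$, and the iteration count are our choices, not the paper's.

```python
import math

n, h, alpha, c = 50, 1.0 / 50, 1e-3, 1.0
t = [i * h for i in range(n + 1)]
y = [math.exp(s * s / 2) for s in t]          # exact data for u_hat(t) = t

def J(u):
    """[J(u)](t_i) = c * exp(integral_0^{t_i} u), trapezoidal rule."""
    out, s = [c], 0.0
    for i in range(1, n + 1):
        s += h * (u[i - 1] + u[i]) / 2
        out.append(c * math.exp(s))
    return out

u0 = [0.0] * (n + 1)
u = u0[:]
for _ in range(10):                           # regularized Newton steps
    Ju = J(u)
    b = [-(Ju[i] + alpha * (u[i] - u0[i]) - y[i]) for i in range(n + 1)]
    # Row i of J'(u) + alpha*I holds Ju[i] times the trapezoid weights on
    # nodes 0..i, plus alpha on the diagonal -> forward substitution.
    x = [b[0] / alpha]
    for i in range(1, n + 1):
        acc = Ju[i] * (h / 2) * x[0] + sum(Ju[i] * h * x[j] for j in range(1, i))
        x.append((b[i] - acc) / (Ju[i] * h / 2 + alpha))
    u = [u[i] + x[i] for i in range(n + 1)]

print(max(abs(u[i] - t[i]) for i in range(n + 1)))   # small reconstruction error
```

With noisy data one would replace $y$ by $y^\delta$ and choose $\alpha$ by the a priori rule (20); here $\alpha$ is simply fixed for brevity.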
Remark 4. 
We choose a priori $\alpha$ satisfying the condition
$$\Psi(\alpha, y^\delta) := \alpha^2 \big\|(J'_{u_0} + \alpha I)^{-2} J'_{u_0}\, y^\delta\big\| = d\,\delta \qquad (20)$$
for some $d > 1$ with $d\,\delta \le \|J(u_0) - y^\delta\|$ (see [13,14] for details).
For the computation, we take $\hat{u}(t) = t$, $u_0(t) = 0$ and $y(t) = e^{t^2/2}$. Table 2 provides the relative error $E_\alpha = \frac{\|CS - \hat{u}\|}{\|\hat{u}\|}$ of each iterative method, where $CS$ is the computed solution. We choose $\alpha$ according to (20). The accuracy of the reconstruction increases as the relative error decreases.
For δ = 0.001 , 0.0001 , the exact and noisy data are shown in subfigure (a) and the computed solution is in subfigure (b), respectively, in both Figure 1 and Figure 2.

5. Conclusions

We studied the convergence of a three-step Cordero-type method of order six and modified it into a new eighth-order iterative method. The convergence analysis of these methods was carried out without using Taylor expansion, relying on assumptions involving only the first-order Fréchet derivative. We computed the radii of convergence and the computational efficiencies of these methods. Furthermore, we considered analogous iterative methods to solve an ill-posed problem in a Hilbert space. The developed technique can also be applied, with the same benefits, to other methods that use inverses of linear operators; this is a topic of our future study.

Author Contributions

Conceptualization, K.R., I.K.A., M.S.K., S.G. and J.P.; methodology, K.R., I.K.A., M.S.K., S.G. and J.P.; software, K.R., I.K.A., M.S.K., S.G. and J.P.; validation, K.R., I.K.A., M.S.K., S.G. and J.P.; formal analysis, K.R., I.K.A., M.S.K., S.G. and J.P.; investigation, K.R., I.K.A., M.S.K., S.G. and J.P.; resources, K.R., I.K.A., M.S.K., S.G. and J.P.; data curation, K.R., I.K.A., M.S.K., S.G. and J.P.; writing—original draft preparation, K.R., I.K.A., M.S.K., S.G. and J.P.; writing—review and editing, K.R., I.K.A., M.S.K., S.G. and J.P.; visualization, K.R., I.K.A., M.S.K., S.G. and J.P.; supervision, K.R., I.K.A., M.S.K., S.G. and J.P.; project administration, K.R., I.K.A., M.S.K., S.G. and J.P.; funding acquisition, K.R., I.K.A., M.S.K., S.G. and J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar]
  2. Cordero, A.; Martínez, E.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2012, 231, 541–551. [Google Scholar] [CrossRef]
  3. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton Jarratt’s composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  4. Cordero, A.; Ezquerro, J.A.; Hernández-Verón, M.A.; Torregrosa, J.R. On the local convergence of a fifth-order iterative method in Banach spaces. Appl. Math. Comput. 2012, 251, 396–403. [Google Scholar] [CrossRef]
  5. Fang, L.; Sun, L.; He, G. An efficient Newton-type method with fifth-order convergence for solving nonlinear equations. Comput. Appl. Math. 2008, 227, 269–274. [Google Scholar]
  6. Grau-Sánchez, M.; Grau, A.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2012, 236, 1259–1266. [Google Scholar] [CrossRef]
  7. Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601. [Google Scholar] [CrossRef]
  8. Sharma, J.R.; Sharma, R.; Kalra, N. A novel family of composite Newton–Traub methods for solving systems of nonlinear equations. Appl. Math. Comput. 2015, 269, 520–535. [Google Scholar] [CrossRef]
  9. Parhi, S.K.; Sharma, D. On the Local Convergence of a Sixth-Order Iterative Scheme in Banach Spaces. In New Trends in Applied Analysis and Computational Mathematics; Springer: Singapore, 2021; pp. 79–88. [Google Scholar]
  10. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press, Taylor and Francis Group: Boca Raton, FL, USA, 2022. [Google Scholar]
  11. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Elsevier: Amsterdam, The Netherlands, 1973. [Google Scholar] [CrossRef]
  12. Traub, J.F. Iterative Methods for Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  13. George, S.; Saeed, M.; Argyros, I.K.; Jidesh, P. An apriori parameter choice strategy and a fifth order iterative scheme for Lavrentiev regularization method. J. Appl. Math. Comput. 2022, 1–21. [Google Scholar] [CrossRef]
  14. George, S.; Jidesh, P.; Krishnendu, R.; Argyros, I.K. A new parameter choice strategy for Lavrentiev regularization method for nonlinear ill-posed equations. Mathematics 2022, 10, 3365. [Google Scholar] [CrossRef]
  15. Nair, M.T.; Ravishankar, P. Regularized versions of continuous Newton’s method and continuous modified Newton’s method under general source conditions. Numer. Funct. Anal. Optim. 2008, 29, 1140–1165. [Google Scholar] [CrossRef]
Figure 1. Data (a) and Solution (b) with δ = 0.001.
Figure 2. Data (a) and Solution (b) with δ = 0.0001.
Table 1. ACOC for methods (2), (3), (4) and (5). N = number of iterations; entries are listed in the order NM (2) / CM (3) / CM1 (4) / CM2 (5).

Eq. No. | u* | u0 | N (NM/CM/CM1/CM2) | ACOC (NM/CM/CM1/CM2)
(12) | (0, 0, 0) | (0.5, 0, 0) | 6/4/4/4 | 2 / 4.3 / 6.2 / 7.6
(12) | (0, 0, 0) | (1.1, 1.1, 1.1) | 8/5/5/4 | 1.98 / 3.7 / 5.9 / 6.4
(13) | 0.9472 | 1.6 | 7/5/4/4 | 2 / 3.9 / 5 / 6.8
(13) | 0.9472 | 0.5 | 8/10/4/17 | 2 / 3.9 / 5.6 / 6.5
(14) | (0.3542, 1.1364) | (0.6, 0.7) | 7/5/4/4 | 2 / 3.5 / 5.8 / 8
(15) | (1/2, √3/2) | (0.35, 0.5) | 7/5/4/4 | 1.5 / 3.9 / 6.3 / 8.4
(15) | (1/2, √3/2) | (0.9, 1) | 6/4/4/3 | 2 / 3.4 / 5.4 / not defined
(16) | (1, 1) | (1.1, 0.75) | 7/5/4/4 | 2 / 3.7 / 5.8 / 9.5
(17) | (1/2, √3/2) | (0.4, 1/2) | 8/7/8/5 | 1.2 / 4.2 / 5.2 / 7.9
Table 2. Relative errors for Example 3.

Method | δ = 0.01 | δ = 0.001 | δ = 0.0001 | δ = 0.00001
α | 3.719646 × 10⁻² | 1.147848 × 10⁻² | 3.601858 × 10⁻³ | 1.136730 × 10⁻³
(5) E_α | 1.323726 × 10⁻¹ | 3.780750 × 10⁻² | 1.912899 × 10⁻² | 1.532976 × 10⁻²
(5) stopping index | 11 | 11 | 11 | 11
(4) E_α | 1.323724 × 10⁻¹ | 3.780538 × 10⁻² | 1.912626 × 10⁻² | 1.532735 × 10⁻²
(4) stopping index | 15 | 15 | 15 | 15
(3) E_α | 1.314847 × 10⁻¹ | 3.852680 × 10⁻² | 2.047544 × 10⁻² | 1.680669 × 10⁻²
(3) stopping index | 11 | 11 | 11 | 11
(2) E_α | 1.305499 × 10⁻¹ | 3.947806 × 10⁻² | 2.210599 × 10⁻² | 1.856438 × 10⁻²
(2) stopping index | 27 | 27 | 27 | 27
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Remesh, K.; Argyros, I.K.; Saeed K, M.; George, S.; Padikkal, J. Extending the Applicability of Cordero Type Iterative Method. Symmetry 2022, 14, 2495. https://doi.org/10.3390/sym14122495