Article

Numerical Processes for Approximating Solutions of Nonlinear Equations

by
Samundra Regmi
1,*,
Ioannis K. Argyros
2,
Santhosh George
3 and
Christopher I. Argyros
4
1
Department of Mathematics, University of Houston, Houston, TX 77204, USA
2
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3
Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangaluru 575 025, India
4
Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
*
Author to whom correspondence should be addressed.
Axioms 2022, 11(7), 307; https://doi.org/10.3390/axioms11070307
Submission received: 24 May 2022 / Revised: 17 June 2022 / Accepted: 23 June 2022 / Published: 24 June 2022
(This article belongs to the Special Issue 10th Anniversary of Axioms: Mathematical Analysis)

Abstract: In this article, we present generalized conditions for three-step iterative schemes used to solve nonlinear equations. The convergence order of such schemes is usually shown using Taylor series, which assumes the existence of high-order derivatives, although only the first derivative appears in the schemes. Hypotheses of this kind limit the applicability of the schemes to operators that are at least nine times differentiable, even though the schemes may converge without such smoothness. Moreover, to the best of our knowledge, no semi-local convergence analysis has been given in the setting of a Banach space. Our goal is to extend the applicability of these schemes in both the local and the semi-local convergence case. To this end, we use our idea of recurrent functions together with conditions only on the derivatives or divided differences of order one that actually appear in the schemes. This idea can be applied to extend other high-convergence-order multipoint and multistep schemes. Numerical applications in which the convergence criteria are tested complement this article.
MSC:
49M15; 47H17; 65J15; 65G99; 41A25

1. Introduction

Let $M$ and $M_1$ denote Banach spaces, let $D \subset M$ stand for an open set and let $F : D \subset M \to M_1$ be a continuous operator. We denote by $x^*$ a solution of the nonlinear equation
$$F(x) = 0. \qquad (1)$$
Iterative schemes are utilized for solving the nonlinear Equation (1). A plethora of iterative schemes have been employed for approximating $x^*$ [1,2].
In this article, we study the generalized three-step iterative schemes defined for $n = 0, 1, 2, \ldots$ by
$$y_n = x_n - M_{1,n}^{-1}F(x_n), \quad z_n = y_n - M_{2,n}^{-1}F(y_n), \quad x_{n+1} = z_n - M_{3,n}^{-1}F(z_n), \qquad (2)$$
where $M_{1,n} = M_1(x_n)$ with $M_1 : D \to L(M, M_1)$, $M_{2,n} = M_2(x_n, y_n)$ with $M_2 : D \times D \to L(M, M_1)$, and $M_{3,n} = M_3(x_n, y_n, z_n)$ with $M_3 : D \times D \times D \to L(M, M_1)$.
This scheme generalizes numerous others already in the literature [3,4,5]. If, e.g.,
$$M_{1,n} = M_{2,n} = F'(x_n), \quad M_{3,n} = O, \qquad (3)$$
or
$$M_{1,n} = M_{2,n} = M_{3,n} = F'(x_n), \qquad (4)$$
or
$$M_{1,n} = F'(x_n), \quad M_{2,n} = F'(y_n) \quad \text{and} \quad M_{3,n} = F'(z_n), \qquad (5)$$
then Newton–Traub-type methods are obtained.
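For orientation, the following is a minimal numerical sketch of scheme (2) under the third choice (5) (the function names, the numpy-based setup and the stopping rule are illustrative assumptions, not part of the original analysis):

```python
import numpy as np

def three_step(F, dF, x0, tol=1e-12, max_iter=50):
    """Scheme (2) with M_{1,n} = F'(x_n), M_{2,n} = F'(y_n), M_{3,n} = F'(z_n),
    i.e., the Newton-Traub-type choice (5)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = x - np.linalg.solve(dF(x), F(x))       # first substep
        z = y - np.linalg.solve(dF(y), F(y))       # second substep
        x_next = z - np.linalg.solve(dF(z), F(z))  # third substep
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```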
The convergence order of the specialized schemes was shown to be three, five, and eight, respectively, using Taylor expansions. In the case of order three, the fourth derivative is used; for the eighth-order scheme, derivatives up to order nine are required. Hence, the assumptions on the ninth derivative reduce the applicability of these schemes [2,4,5,6]. In particular, even a simple scalar equation cannot be handled with the existing results.
For example, let $M = M_1 = \mathbb{R}$ and $D = [-0.5, 1.5]$. Define the scalar function $\lambda$ on $D$ by
$$\lambda(t) = \begin{cases} t^3 \log t^2 + t^5 - t^4, & t \neq 0,\\ 0, & t = 0. \end{cases}$$
Notice that $t^* = 1$ solves the equation $\lambda(t) = 0$ and that the third derivative is given by
$$\lambda'''(t) = 6 \log t^2 + 60t^2 - 24t + 22.$$
Obviously, $\lambda'''(t)$ is not bounded on $D$. Therefore, the convergence of scheme (2) is not guaranteed by the previous analyses in [2,4,5,6,7,8]. A plethora of other choices can be found in [4,5,6,7,8]. Therefore, it is important to study the local as well as the semi-local convergence under unifying and weaker-than-before criteria.
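A quick numerical check (a minimal sketch; lam3 is just the third derivative written out above) confirms the unboundedness near $t = 0$:

```python
from math import log

def lam3(t):
    # third derivative of lambda(t) = t^3 log t^2 + t^5 - t^4 (for t != 0)
    return 6*log(t**2) + 60*t**2 - 24*t + 22

for t in (1e-1, 1e-3, 1e-6):
    print(t, lam3(t))   # approx -7.4, -60.9, -143.8: |lambda'''| grows as t -> 0
```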
There are two important types of convergence: semi-local and local. The semi-local type uses information about an initial guess to provide criteria guaranteeing the convergence of the scheme, while the local type uses information around a solution to find estimates of the radii of the convergence balls.
Local convergence results are important, although the solution is generally unknown, since the convergence order of the scheme can be determined. This type of result also demonstrates the degree of difficulty in choosing initial guesses. There are cases when the radius of convergence of the scheme can be found without knowing the solution.
As an example, let $M = M_1 = \mathbb{R}$. Suppose that the function $F$ satisfies an autonomous differential equation [4,6] of the form
$$F'(t) = S(F(t)),$$
where $S$ is a continuous function. Notice that $S(F(t^*)) = F'(t^*)$, i.e., $F'(t^*) = S(0)$. In the case of $F(t) = e^t - 1$, we can choose $S(t) = t + 1$ (see also the numerical section).
Moreover, the local results can be applied to projection schemes such as Arnoldi's, the generalized minimum residual scheme (GMRES), and the generalized conjugate scheme (GCS) for combined Newton/finite projection schemes, as well as in connection with the mesh independence principle, to develop the cheapest and most efficient mesh refinement techniques [5,7,9].
In this article, we introduce a majorizing sequence and also use our idea of recurrent functions to extend the applicability of scheme (2). Our analysis includes error bounds and results on the uniqueness of $x^*$ based on computable Lipschitz constants not given in [2,4,5,6,7,8] or in other similar studies using the Taylor series. Our idea is very general. Therefore, it applies to other schemes too [9,10,11,12,13,14].
The rest of the article is organized as follows: In Section 2, we present the results of the local analysis. Section 3 contains the semi-local analysis, whereas in Section 4, special cases are discussed. Numerical experiments are presented in Section 5. Concluding remarks are given in the final Section 6.

2. Local Analysis

Let $\ell_0$, $\ell_1$, $\ell_2$ and $\ell_3$ be given positive constants. Define the function $\varphi_1$ on the interval $[0, \frac{1}{\ell_1})$ by
$$\varphi_1(t) = \frac{(\ell_0 + 2\ell_1)t}{2(1 - \ell_1 t)}.$$
Notice that $r_1 = \frac{2}{\ell_0 + 4\ell_1}$ solves the equation $\varphi_1(t) - 1 = 0$. Set $\rho_1 = \min\{\frac{1}{\ell_1}, \frac{1}{\ell_2}\}$. Moreover, define the function $\varphi_2$ on the interval $[0, \rho_1)$ by
$$\varphi_2(t) = \frac{\big(\ell_2 + \frac{\ell_0}{2}\varphi_1(t)\big)t}{1 - \ell_2 t}.$$
Then, $\varphi_2(0) - 1 = -1$ and $\varphi_2(t) - 1 \to +\infty$ as $t \to \rho_1^-$. Denote by $r_2$ the minimal root of the function $\varphi_2(t) - 1$ in the interval $(0, \rho_1)$, guaranteed to exist by the intermediate value theorem. Furthermore, define the function $\varphi_3$ on the interval $[0, \rho_2)$ by
$$\varphi_3(t) = \frac{\big(\ell_3 + \frac{\ell_0}{2}\varphi_2(t)\big)t}{1 - \ell_3 t},$$
for $\rho_2 = \min\{\frac{1}{\ell_3}, \rho_1\}$. It follows that $\varphi_3(0) - 1 = -1$ and $\varphi_3(t) - 1 \to +\infty$ as $t \to \rho_2^-$. Denote by $r_3$ the minimal root of the function $\varphi_3(t) - 1$ in the interval $(0, \rho_2)$.
We shall show that $r$ defined by
$$r = \min\{r_1, r_2, r_3\} \qquad (6)$$
is a radius of convergence for scheme (2). Set $T = [0, r)$. It then follows that for all $t \in T$,
$$0 \le \ell_1 t < 1, \quad 0 \le \ell_2 t < 1, \quad 0 \le \ell_3 t < 1, \qquad (7)$$
$$0 \le \varphi_1(t) < 1, \qquad (8)$$
$$0 \le \varphi_2(t) < 1, \qquad (9)$$
and
$$0 \le \varphi_3(t) < 1 \qquad (10)$$
hold.
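Given the four constants, the radius (6) can be evaluated numerically. The following is a minimal sketch (the helper names are illustrative; it assumes the $\varphi_j$ defined above, each of which increases past 1 exactly once on its interval):

```python
def bisect_root(g, lo, hi, iters=200):
    # plain bisection, assuming g(lo) < 0 < g(hi)
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5*(lo + hi)

def convergence_radius(l0, l1, l2, l3):
    """Radius r = min{r1, r2, r3} from (6) for given ell_0, ..., ell_3."""
    phi1 = lambda t: (l0 + 2*l1)*t/(2*(1 - l1*t))
    rho1 = min(1/l1, 1/l2)
    phi2 = lambda t: (l2 + 0.5*l0*phi1(t))*t/(1 - l2*t)
    rho2 = min(1/l3, rho1)
    phi3 = lambda t: (l3 + 0.5*l0*phi2(t))*t/(1 - l3*t)
    r1 = 2/(l0 + 4*l1)
    r2 = bisect_root(lambda t: phi2(t) - 1, 1e-12, rho1*(1 - 1e-9))
    r3 = bisect_root(lambda t: phi3(t) - 1, 1e-12, rho2*(1 - 1e-9))
    return min(r1, r2, r3)

# e.g., with all constants equal to 7.5 (the data of Example 4 below),
# this returns r = r1 = 0.0533...
print(convergence_radius(7.5, 7.5, 7.5, 7.5))
```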
Denote by $U(x, \rho)$ the open ball with center $x \in M$ and radius $\rho > 0$, and by $U[x, \rho]$ the closure of $U(x, \rho)$. Furthermore, $F'$ denotes the Fréchet derivative of the operator $F$.
The following conditions are needed to show the local convergence of scheme (2). Suppose:
(A1)
There exists a simple solution $x^* \in D$ of the equation $F(x) = 0$.
(A2)
$\|F'(x^*)^{-1}(M_1(x) - F'(x^*))\| \le \ell_1\|x - x^*\|$ for all $x \in D$ and some $\ell_1 > 0$. Set $D_1 = U(x^*, \frac{1}{\ell_1}) \cap D$.
(A3)
$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \le \ell_0\|x - x^*\|$ for all $x \in D_1$ and some $\ell_0 > 0$.
(A4)
$\|F'(x^*)^{-1}(M_2(x, y) - F'(x^*))\| \le \ell_2\|x - x^*\|$ for all $x \in D_1$, $y = x - F'(x)^{-1}F(x)$, and some constant $\ell_2 > 0$.
(A5)
$\|F'(x^*)^{-1}(M_3(x, y, z) - F'(x^*))\| \le \ell_3\|x - x^*\|$ for all $x \in D_1$, $z = y - M_2(x, y)^{-1}F(y)$, and some constant $\ell_3 > 0$.
(A6)
$U[x^*, r] \subset D$, with $r$ given by (6).
The main local convergence result follows for scheme (2).
Theorem 1.
Suppose conditions (A1)–(A6) hold. Then, the sequence $\{x_n\}$ produced by scheme (2) for $x_0 \in U(x^*, r) \setminus \{x^*\}$ exists in $U(x^*, r)$, remains in $U(x^*, r)$ for all $n = 0, 1, 2, \ldots$ and converges to $x^*$. Moreover, the following estimates hold:
$$\|y_n - x^*\| \le \varphi_1(\|x_n - x^*\|)\|x_n - x^*\| \le \|x_n - x^*\| < r, \qquad (11)$$
$$\|z_n - x^*\| \le \varphi_2(\|x_n - x^*\|)\|x_n - x^*\| \le \|x_n - x^*\|, \qquad (12)$$
and
$$\|x_{n+1} - x^*\| \le \varphi_3(\|x_n - x^*\|)\|x_n - x^*\| \le \|x_n - x^*\|, \qquad (13)$$
where the functions $\varphi_j$, $j = 1, 2, 3$, are as previously defined and the radius $r$ is given by (6).
Proof. 
Mathematical induction is employed to show assertions (11)–(13). Let $v \in U(x^*, r) \setminus \{x^*\}$. Using (A1) and (A2), we obtain
$$\|F'(x^*)^{-1}(M_1(v) - F'(x^*))\| \le \ell_1 \|v - x^*\| \le \ell_1 r < 1. \qquad (14)$$
It follows by (7), (14) and the Banach lemma on invertible operators [2] that $M_1(v)^{-1} \in L(M_1, M)$ and
$$\|M_1(v)^{-1}F'(x^*)\| \le \frac{1}{1 - \ell_1 \|v - x^*\|}. \qquad (15)$$
In particular, the iterate $y_0$ is well defined by the first substep of method (2) and (14) for $v = x_0$. Then, we can write by this substep
$$y_0 - x^* = x_0 - x^* - M_{1,0}^{-1}F(x_0) = M_{1,0}^{-1}\Big[(M_{1,0} - F'(x^*)) + \Big(F'(x^*) - \int_0^1 F'(x^* + \theta(x_0 - x^*))\,d\theta\Big)\Big](x_0 - x^*).$$
Then, in view of estimate (15) (for $v = x_0$), conditions (A1), (A2), (A3), and the identity above, we get
$$\|y_0 - x^*\| \le \frac{\big(\ell_1\|x_0 - x^*\| + \frac{\ell_0}{2}\|x_0 - x^*\|\big)\|x_0 - x^*\|}{1 - \ell_1\|x_0 - x^*\|} = \frac{(\ell_0 + 2\ell_1)\|x_0 - x^*\|^2}{2(1 - \ell_1\|x_0 - x^*\|)} = \varphi_1(\|x_0 - x^*\|)\|x_0 - x^*\| \le \|x_0 - x^*\| < r, \qquad (16)$$
where we also used the identity
$$F'(x^*)(x_0 - x^*) - [F(x_0) - F(x^*)] = \Big[F'(x^*) - \int_0^1 F'(x^* + \theta(x_0 - x^*))\,d\theta\Big](x_0 - x^*)$$
(since $F(x^*) = 0$), the estimates
$$\|F'(x^*)^{-1}(M_{1,0} - F'(x^*))\| \le \ell_1\|x_0 - x^*\|$$
and
$$\Big\|F'(x^*)^{-1}\Big(F'(x^*) - \int_0^1 F'(x^* + \theta(x_0 - x^*))\,d\theta\Big)\Big\| \le \frac{\ell_0}{2}\|x_0 - x^*\|,$$
and the triangle inequality. It follows from (16) that the iterate $y_0 \in U(x^*, r)$ and that (11) holds for $n = 0$. Then, using condition (A4),
$$\|F'(x^*)^{-1}(M_{2,0} - F'(x^*))\| \le \ell_2\|x_0 - x^*\| \le \ell_2 r < 1.$$
That is, $M_{2,0}^{-1} \in L(M_1, M)$,
$$\|M_{2,0}^{-1}F'(x^*)\| \le \frac{1}{1 - \ell_2\|x_0 - x^*\|}, \qquad (17)$$
and the iterate $z_0$ exists by the second substep of method (2) for $n = 0$. Then, similarly to the derivation of the identity for $y_0 - x^*$, we can also write by this substep
$$z_0 - x^* = y_0 - x^* - M_{2,0}^{-1}F(y_0) = M_{2,0}^{-1}\Big[(M_{2,0} - F'(x^*)) + \Big(F'(x^*) - \int_0^1 F'(x^* + \theta(y_0 - x^*))\,d\theta\Big)\Big](y_0 - x^*).$$
Then, as in the derivation of estimate (16) but using (17), (A3) and (A4), we obtain
$$\|z_0 - x^*\| \le \frac{\big(\ell_2\|x_0 - x^*\| + \frac{\ell_0}{2}\|y_0 - x^*\|\big)\|x_0 - x^*\|}{1 - \ell_2\|x_0 - x^*\|} \le \varphi_2(\|x_0 - x^*\|)\|x_0 - x^*\| \le \|x_0 - x^*\|. \qquad (18)$$
Hence, the iterate $z_0 \in U(x^*, r)$ and (12) holds for $n = 0$. Then, by using (A5), we obtain
$$\|F'(x^*)^{-1}(M_{3,0} - F'(x^*))\| \le \ell_3\|x_0 - x^*\| \le \ell_3 r < 1. \qquad (20)$$
Therefore, it follows that $M_{3,0}^{-1} \in L(M_1, M)$,
$$\|M_{3,0}^{-1}F'(x^*)\| \le \frac{1}{1 - \ell_3\|x_0 - x^*\|}, \qquad (21)$$
and the iterate $x_1$ is well defined by the third substep of scheme (2) for $n = 0$. Furthermore, by this substep, as before, we obtain the identity
$$x_1 - x^* = M_{3,0}^{-1}\Big[(M_{3,0} - F'(x^*)) + \Big(F'(x^*) - \int_0^1 F'(x^* + \theta(z_0 - x^*))\,d\theta\Big)\Big](z_0 - x^*).$$
Then, using (20), (21), (A3) and (A5) as in (16), we have
$$\|x_1 - x^*\| \le \frac{\big(\ell_3\|x_0 - x^*\| + \frac{\ell_0}{2}\|z_0 - x^*\|\big)\|x_0 - x^*\|}{1 - \ell_3\|x_0 - x^*\|} \le \varphi_3(\|x_0 - x^*\|)\|x_0 - x^*\| \le \|x_0 - x^*\|. \qquad (22)$$
It then follows from estimate (22) that the iterate $x_1 \in U(x^*, r)$ and that (13) holds for $n = 0$. The induction for assertions (11)–(13) is completed by simply replacing $x_0, y_0, z_0, x_1$ with $x_i, y_i, z_i, x_{i+1}$, respectively, in the previous calculations. Finally, from the estimate
$$\|x_{i+1} - x^*\| \le \lambda\|x_i - x^*\| < r,$$
where $\lambda = \varphi_3(\|x_0 - x^*\|) \in [0, 1)$, we conclude that $\lim_{i \to \infty} x_i = x^*$ and that $x_{i+1} \in U(x^*, r)$. □
A result on the uniqueness of the solution follows.
Proposition 1.
Suppose that there exists a simple solution $x^* \in D$ of the equation $F(x) = 0$ and that (A3) holds. Set $D_2 = U(x^*, \frac{2}{\ell_0}) \cap D$. Then, the element $x^*$ is the only solution of the equation $F(x) = 0$ in the region $D_2$.
Proof. 
Consider $\tilde{x} \in D_2$ with $F(\tilde{x}) = 0$. Define the linear operator $Q = \int_0^1 F'(x^* + \theta(\tilde{x} - x^*))\,d\theta$. Then, by applying condition (A3),
$$\|F'(x^*)^{-1}(Q - F'(x^*))\| \le \ell_0 \int_0^1 \|x^* + \theta(\tilde{x} - x^*) - x^*\|\,d\theta \le \ell_0 \int_0^1 \theta\|\tilde{x} - x^*\|\,d\theta < \frac{\ell_0}{2}\cdot\frac{2}{\ell_0} = 1.$$
It follows that the linear operator $Q$ is invertible. Then, the identity $Q(\tilde{x} - x^*) = F(\tilde{x}) - F(x^*) = 0 - 0 = 0$ gives $\tilde{x} - x^* = Q^{-1}(0) = 0$. Hence, we conclude that $\tilde{x} = x^*$. □
Remark 1.
A similar result was given in ([15], Theorem 1) in the special case when $M = M_1 = \mathbb{R}$ and $M_{1,n} = F'(x_n)$. However, this non-affine invariant form of the result is not correct, since it corresponds to the version of estimate (11) given by
$$\|y_n - x^*\| \le \frac{\ell_1\|x_n - x^*\|^2}{2(1 - \ell_1\|x_n - x^*\|)},$$
which is not implied by (A2). Hence, the proof of Theorem 1 in [15] breaks down at this point. Notice also that in [15], the choices $\bar{M}_{1,n} = F'(x_n)^{-1}$, $\bar{M}_{2,n} = M_{2,n}^{-1}$ and $\bar{M}_{3,n} = M_{3,n}^{-1}$ were used.

3. Semi-Local Analysis

The semi-local analysis of the iterative scheme (2) is based on Lipschitz-type conditions relating the operators $F$, $F'$ and the linear operators $M_{j,n}$ to some parameters. Moreover, the sequence $\{x_n\}$ is majorized by a scalar sequence depending on these parameters. Suppose:
(H1)
There exist $x_0 \in D$ and $\eta \ge 0$ such that $F'(x_0)^{-1}, M_{1,0}^{-1} \in L(M_1, M)$ and $\|M_{1,0}^{-1}F(x_0)\| \le \eta$.
(H2)
$\|F'(x_0)^{-1}(M_1(x) - F'(x_0))\| \le a_1\|x - x_0\|$ for all $x \in D$ and some $a_1 > 0$. Set $D_3 = U(x_0, \frac{1}{a_1}) \cap D$.
(H3)
$$\Big\|\int_0^1 F'(x_0)^{-1}\big(F'(z + \theta(x - z)) - M_3(x, y, z)\big)\,d\theta\Big\| \le b_1,$$
$$\Big\|\int_0^1 F'(x_0)^{-1}\big(F'(x + \theta(y - x)) - M_1(x)\big)\,d\theta\Big\| \le b_2,$$
$$\Big\|\int_0^1 F'(x_0)^{-1}\big(F'(y + \theta(z - y)) - M_2(x, y)\big)\,d\theta\Big\| \le b_3,$$
$$\|F'(x_0)^{-1}(M_2(x, y) - F'(x_0))\| \le a_2\|y - x_0\|,$$
$$\|F'(x_0)^{-1}(M_3(x, y, z) - F'(x_0))\| \le a_3\|z - x_0\|,$$
where these conditions hold for all $\theta \in [0, 1]$ and $x \in D_3$, with $y, z$ taken from method (2) (or for all $y, z \in D_3$), and $b_1, b_2, b_3, a_2$ and $a_3$ are positive constants depending on the operators $F$, $F'$ and $M_{j,n}$.
(H4)
$U[x_0, \rho] \subset D$ for some $\rho > 0$ to be specified later.
As can be seen in the proof of Theorem 2 below, the iterates $\{x_n\}$ lie in the set $D_3$, which is a more accurate domain than $D$, since $D_3 \subset D$. This way, at least as tight constants are obtained as when conditions (H3) and (H4) are required to hold on all of $D$ (see also the numerical section).
We chose the last two conditions in (H3) this way; however, other choices are also possible [1,2,3,4]. Notice that if $a_1\eta < 1$ and $\tilde{D} = U(y_0, \frac{1}{a_1} - \eta) \cap D$, then $\tilde{D} \subset D_3$, and even smaller constants "a" are obtained if $\tilde{D}$ replaces $D_3$.
Moreover, we define the scalar sequence $\{t_n\}$ by
$$t_0 = 0, \quad s_0 = \eta, \qquad (26)$$
$$u_n = s_n + \frac{b_2(s_n - t_n)}{1 - a_2 s_n}, \quad t_{n+1} = u_n + \frac{b_3(u_n - s_n)}{1 - a_3 u_n}, \quad s_{n+1} = t_{n+1} + \frac{b_1(t_{n+1} - u_n)}{1 - a_1 t_{n+1}}. \qquad (27)$$
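The sequence (26) and (27) is straightforward to evaluate once $\eta$ and the constants $a_j, b_j$ from (H1)–(H3) are available; a sketch follows (the guards mirror the conditions of Lemma 1 below):

```python
def majorizing_sequence(eta, a1, a2, a3, b1, b2, b3, n_steps=25):
    """Scalar majorizing sequence (26)-(27); returns the iterates t_n."""
    t, s, ts = 0.0, eta, [0.0]
    for _ in range(n_steps):
        if a2*s >= 1:
            raise ValueError("a2*s_n < 1 violated")
        u = s + b2*(s - t)/(1 - a2*s)
        if a3*u >= 1:
            raise ValueError("a3*u_n < 1 violated")
        t_next = u + b3*(u - s)/(1 - a3*u)
        if a1*t_next >= 1:
            raise ValueError("a1*t_{n+1} < 1 violated")
        s = t_next + b1*(t_next - u)/(1 - a1*t_next)
        t = t_next
        ts.append(t)
    return ts
```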
This sequence shall be shown to be majorizing for the sequence $\{x_n\}$ generated by scheme (2) in Theorem 2. However, first, a convergence result for $\{t_n\}$ itself is needed.
Lemma 1.
Suppose that
$$a_2 s_n < 1, \quad a_3 u_n < 1 \quad \text{and} \quad a_1 t_{n+1} < 1 \qquad (28)$$
hold for all $n = 0, 1, 2, \ldots$. Then, the sequence $\{t_n\}$ is such that $t_n \le s_n \le u_n \le t_{n+1} < \frac{1}{a_1}$ and $\lim_{n \to \infty} t_n = t^* \le \frac{1}{a_1}$.
Proof. 
It follows from (26), (27) and (28) that the sequence $\{t_n\}$ is nondecreasing and bounded from above by $\frac{1}{a_1}$, and as such it converges to its unique least upper bound $t^* \in [0, \frac{1}{a_1}]$. □
The semi-local convergence of method (2) follows next.
Theorem 2.
Under conditions (H1)–(H4), suppose further that the conditions of Lemma 1 hold and that $\rho = t^*$ in (H4). Then, the sequence $\{x_n\}$ generated by method (2) exists in $U(x_0, t^*)$, stays in $U(x_0, t^*)$ and converges to a solution $x^* \in U[x_0, t^*]$ of the equation $F(x) = 0$. Moreover, the following estimates hold:
$$\|y_n - x_n\| \le s_n - t_n, \qquad (29)$$
$$\|z_n - y_n\| \le u_n - s_n, \qquad (30)$$
and
$$\|x_{n+1} - z_n\| \le t_{n+1} - u_n. \qquad (31)$$
Proof. 
Mathematical induction is used to show (29)–(31). Using (H1) and (26),
$$\|y_0 - x_0\| = \|M_{1,0}^{-1}F(x_0)\| \le \eta = s_0 - t_0,$$
so the iterate $y_0 \in U(x_0, t^*)$ and (29) holds for $n = 0$. It then follows from (H3) that
$$\|F'(x_0)^{-1}(M_2(x_0, y_0) - F'(x_0))\| \le a_2\|y_0 - x_0\| < a_2 t^* < 1.$$
That is, $M_2(x_0, y_0)^{-1} \in L(M_1, M)$,
$$\|M_2(x_0, y_0)^{-1}F'(x_0)\| \le \frac{1}{1 - a_2\|y_0 - x_0\|},$$
and the iterate $z_0$ is well defined by the second substep of method (2) for $n = 0$. By the first substep of method (2),
$$F(y_0) = F(y_0) - F(x_0) - M_{1,0}(y_0 - x_0),$$
so
$$F'(x_0)^{-1}F(y_0) = \int_0^1 F'(x_0)^{-1}\big(F'(x_0 + \theta(y_0 - x_0)) - M_1(x_0)\big)\,d\theta\,(y_0 - x_0),$$
$$\|F'(x_0)^{-1}F(y_0)\| \le b_2\|y_0 - x_0\| \le b_2(s_0 - t_0)$$
and
$$\|z_0 - y_0\| \le \|M_2(x_0, y_0)^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(y_0)\| \le \frac{b_2(s_0 - t_0)}{1 - a_2 s_0} = u_0 - s_0.$$
Hence, (30) holds for $n = 0$ and
$$\|z_0 - x_0\| \le \|z_0 - y_0\| + \|y_0 - x_0\| \le u_0 - s_0 + s_0 - t_0 = u_0 < t^*.$$
Therefore, the iterate $z_0 \in U(x_0, t^*)$. As before, we obtain
$$\|M_3(x_0, y_0, z_0)^{-1}F'(x_0)\| \le \frac{1}{1 - a_3\|z_0 - x_0\|}.$$
By the second substep of method (2), we can write
$$F(z_0) = F(z_0) - F(y_0) - M_{2,0}(z_0 - y_0) = \int_0^1\big(F'(y_0 + \theta(z_0 - y_0)) - M_{2,0}\big)\,d\theta\,(z_0 - y_0).$$
Consequently,
$$\|F'(x_0)^{-1}F(z_0)\| \le b_3\|z_0 - y_0\| \le b_3(u_0 - s_0).$$
Then, we obtain
$$\|x_1 - z_0\| \le \|M_3(x_0, y_0, z_0)^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(z_0)\| \le \frac{b_3(u_0 - s_0)}{1 - a_3 u_0} = t_1 - u_0$$
and
$$\|x_1 - x_0\| \le \|x_1 - z_0\| + \|z_0 - y_0\| + \|y_0 - x_0\| \le t_1 - u_0 + u_0 - s_0 + s_0 - t_0 = t_1 < t^*.$$
That is, the iterate $x_1 \in U(x_0, t^*)$ and (31) holds for $n = 0$. Moreover, we can write
$$F(x_1) = F(x_1) - F(z_0) - M_{3,0}(x_1 - z_0) = \int_0^1\big(F'(z_0 + \theta(x_1 - z_0)) - M_{3,0}\big)\,d\theta\,(x_1 - z_0),$$
so
$$\|F'(x_0)^{-1}F(x_1)\| \le b_1\|x_1 - z_0\| \le b_1(t_1 - u_0),$$
$$\|y_1 - x_1\| \le \|M_1(x_1)^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(x_1)\| \le \frac{b_1(t_1 - u_0)}{1 - a_1 t_1} = s_1 - t_1$$
and
$$\|y_1 - x_0\| \le \|y_1 - x_1\| + \|x_1 - x_0\| \le s_1 - t_1 + t_1 - t_0 = s_1 < t^*,$$
so $y_1 \in U(x_0, t^*)$ and (29) holds for $n = 1$. Simply revisit the preceding estimates with $x_k, y_k, z_k, x_{k+1}$ replacing $x_0, y_0, z_0, x_1$, respectively, to complete the induction for items (29)–(31). The sequence $\{t_k\}$, being convergent, is a Cauchy sequence. In view of (29)–(31), the sequence $\{x_n\}$ is also Cauchy, and as such it converges to some $x^* \in U[x_0, t^*]$. By letting $k \to \infty$ in the estimate
$$\|F'(x_0)^{-1}F(x_k)\| \le b_1(t_k - u_{k-1})$$
and using the continuity of $F$, we conclude that $F(x^*) = 0$. □
A uniqueness result follows.
Proposition 2.
Under the conditions of Theorem 2, suppose further that $\|F'(x_0)^{-1}(F'(x) - F'(x_0))\| \le \ell_0\|x - x_0\|$ for all $x \in D$ and some $\ell_0 > 0$, and that there exists $R \ge t^*$ such that
$$\frac{\ell_0}{2}(R + t^*) < 1.$$
Set $D_4 = U[x_0, R] \cap D$. Then, the element $x^*$ is the only solution of the equation $F(x) = 0$ in the region $D_4$.
Proof. 
Let $\tilde{x} \in D_4$ be such that $F(\tilde{x}) = 0$. Then, with $Q = \int_0^1 F'(x^* + \theta(\tilde{x} - x^*))\,d\theta$ as in Proposition 1, we obtain
$$\|F'(x_0)^{-1}(Q - F'(x_0))\| \le \ell_0\int_0^1\big((1 - \theta)\|x^* - x_0\| + \theta\|\tilde{x} - x_0\|\big)\,d\theta \le \frac{\ell_0}{2}(t^* + R) < 1.$$
Hence, $Q$ is invertible, and we deduce as before that $\tilde{x} = x^*$. □

4. Special Cases

Let $M_{1,n} = F'(x_n)$, $M_{2,n} = F'(y_n)$ and $M_{3,n} = F'(z_n)$. Then, method (2) reduces to
$$y_n = x_n - F'(x_n)^{-1}F(x_n), \quad z_n = y_n - F'(y_n)^{-1}F(y_n), \quad x_{n+1} = z_n - F'(z_n)^{-1}F(z_n). \qquad (32)$$
This is the three-step Newton method, also called Traub's extended three-step method by some authors. It seems to be the most interesting special case of method (2) to consider as an application. Moreover, its semi-local convergence analysis uses our new idea of recurrent functions, and the resulting convergence criteria are weaker than those in earlier works for method (32) based on the Kantorovich condition $2L_1\eta \le 1$ [2,4,7,8] (as can also be seen in Example 2). Moreover, the error bounds are tighter and the information on the location of the solution is more precise than in the aforementioned works. Finally, in Lemma 2, we give even weaker convergence criteria for method (32). Hence, this is clearly a most revealing special case to consider, since it can also be connected to earlier works and improves them too.
The following conditions are used.
Suppose:
(H1)
There exist $x_0 \in D$ and $\eta \ge 0$ such that $F'(x_0)^{-1} \in L(M_1, M)$ and
$$\|F'(x_0)^{-1}F(x_0)\| \le \eta.$$
(H2)
$$\|F'(x_0)^{-1}(F'(x) - F'(x_0))\| \le L_0\|x - x_0\|$$
for all $x \in D$ and some $L_0 > 0$. Set $D_5 = U(x_0, \frac{1}{L_0}) \cap D$.
(H3)
$$\|F'(x_0)^{-1}(F'(u) - F'(v))\| \le L\|u - v\|$$
for all $u \in D_5$ and $v = u - F'(u)^{-1}F(u)$ (or for all $u, v \in D_5$) and some $L > 0$.
(H4)
$U[x_0, t^*] \subset D$ for some $t^*$ to be given later.
Notice that condition (H3) was used for all $u, v \in D$ with constant $L_1$ [2,4,7,8], as well as for all $u, v \in D_5$ with constant $K$ [1,3]. That is:
(M1)
For each $u, v \in D$,
$$\|F'(x_0)^{-1}(F'(u) - F'(v))\| \le L_1\|u - v\|.$$
(M2)
For each $u, v \in D_5$,
$$\|F'(x_0)^{-1}(F'(u) - F'(v))\| \le K\|u - v\|.$$
It follows from these definitions that
$$L \le K \le L_1 \quad \text{and} \quad L_0 \le L_1.$$
Hence, any analysis using $L$ improves earlier ones using $L_1$ or $K$ (see also the numerical section). The sequence $\{t_n\}$ defined by
$$t_0 = 0, \quad s_0 = \eta, \quad u_n = s_n + \frac{L(s_n - t_n)^2}{2(1 - L_0 s_n)}, \quad t_{n+1} = u_n + \frac{L(u_n - s_n)^2}{2(1 - L_0 u_n)}, \quad s_{n+1} = t_{n+1} + \frac{L(t_{n+1} - u_n)^2}{2(1 - L_0 t_{n+1})}, \qquad (34)$$
shall be shown to be majorizing for method (32). However, first, we need some convergence results for it.
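The sequence (34) is equally cheap to iterate; the sketch below also records the quantities $L_0 s_n$, $L_0 u_n$, $L_0 t_{n+1}$ needed to check the conditions of Lemma 2 below:

```python
def newton_three_step_majorizing(eta, L0, L, n_steps=10):
    """Majorizing sequence (34) for the three-step Newton method (32).
    Returns rows (u_n, s_n, t_{n+1}, L0*s_n, L0*u_n, L0*t_{n+1})."""
    t, s, rows = 0.0, eta, []
    for _ in range(n_steps):
        u = s + L*(s - t)**2/(2*(1 - L0*s))
        t1 = u + L*(u - s)**2/(2*(1 - L0*u))
        s1 = t1 + L*(t1 - u)**2/(2*(1 - L0*t1))
        rows.append((u, s, t1, L0*s, L0*u, L0*t1))
        t, s = t1, s1
    return rows
```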
Notice that the corresponding earlier sequences are
$$\bar{t}_0 = 0, \quad \bar{s}_0 = \eta, \quad \bar{u}_n = \bar{s}_n + \frac{K(\bar{s}_n - \bar{t}_n)^2}{2(1 - L_0\bar{s}_n)}, \quad \bar{t}_{n+1} = \bar{u}_n + \frac{K(\bar{u}_n - \bar{s}_n)^2}{2(1 - L_0\bar{u}_n)}, \quad \bar{s}_{n+1} = \bar{t}_{n+1} + \frac{K(\bar{t}_{n+1} - \bar{u}_n)^2}{2(1 - L_0\bar{t}_{n+1})} \qquad (35)$$
and
$$\bar{\bar{t}}_0 = 0, \quad \bar{\bar{s}}_0 = \eta, \quad \bar{\bar{u}}_n = \bar{\bar{s}}_n + \frac{L_1(\bar{\bar{s}}_n - \bar{\bar{t}}_n)^2}{2(1 - L_1\bar{\bar{s}}_n)}, \quad \bar{\bar{t}}_{n+1} = \bar{\bar{u}}_n + \frac{L_1(\bar{\bar{u}}_n - \bar{\bar{s}}_n)^2}{2(1 - L_1\bar{\bar{u}}_n)}, \quad \bar{\bar{s}}_{n+1} = \bar{\bar{t}}_{n+1} + \frac{L_1(\bar{\bar{t}}_{n+1} - \bar{\bar{u}}_n)^2}{2(1 - L_1\bar{\bar{t}}_{n+1})}. \qquad (36)$$
We assume that $L_0 \le K$; otherwise, replace $K$ by $L_0$ in sequence (35). It follows from (34) and these definitions that
$$t_n \le \bar{t}_n \le \bar{\bar{t}}_n, \quad s_n \le \bar{s}_n \le \bar{\bar{s}}_n, \quad u_n \le \bar{u}_n \le \bar{\bar{u}}_n \qquad (37)$$
and that the limits $t^* = \lim_{n\to\infty} t_n$, $s^* = \lim_{n\to\infty} s_n$ and $u^* = \lim_{n\to\infty} u_n$ satisfy the corresponding inequalities (if these limits exist). Hence, the new majorizing sequence is more precise. The convergence criteria for sequences (35) [1,3] and (36) [2,4,7,8] are
$$2K\eta \le 1 \qquad (38)$$
and
$$2L_1\eta \le 1, \qquad (39)$$
respectively. However, the convergence criterion for sequence (34) is
$$2L\eta \le 1. \qquad (40)$$
Notice that
$$2L_1\eta \le 1 \Rightarrow 2K\eta \le 1 \Rightarrow 2L\eta \le 1. \qquad (41)$$
Condition (40) is weakened further in Lemma 3. It is worth noticing that these benefits are obtained at the same computational cost, since in practice the computation of the Lipschitz constant $L_1$ requires that of $L_0$, $K$ and $L$ as special cases. Notice that criterion (39) is due to Kantorovich [2].
Then, two convergence results for sequence (34) are presented.
Lemma 2.
Suppose that
$$L_0 s_n < 1, \quad L_0 u_n < 1 \quad \text{and} \quad L_0 t_{n+1} < 1 \qquad (42)$$
hold for all $n = 0, 1, 2, \ldots$. Then, the sequence $\{t_n\}$ generated by (34) is such that $0 \le t_n \le s_n \le u_n \le t_{n+1}$ and $\lim_{n\to\infty} t_n = t^* \le \frac{1}{L_0}$.
Proof. 
See Lemma 1. □
Next, some conditions stronger than (42) are given, which are, however, easier to verify. First, define polynomials on the interval $[0, 1)$ by
$$f_n^{(1)}(t) = \frac{L}{2}t^{2n-1}\eta + L_0(1 + t + \cdots + t^{2n})\eta - 1,$$
$$f_n^{(2)}(t) = \frac{L}{2}t^{2n}\eta + L_0(1 + t + \cdots + t^{2n+1})\eta - 1,$$
$$f_n^{(3)}(t) = \frac{L}{2}t^{2n+1}\eta + L_0(1 + t + \cdots + t^{2n+2})\eta - 1,$$
$$p(t) = L_0 t^3 + \Big(L_0 + \frac{L}{2}\Big)t^2 - \frac{L}{2},$$
and the parameter $\gamma$ by
$$\gamma = \frac{2L}{L + \sqrt{L^2 + 8L_0 L}}.$$
Notice that $\gamma \in (0, 1)$ and $p(\gamma) = 0$, whereas $p$ has no other positive roots by Descartes' rule of signs. Define the parameters
$$a = \frac{L\eta}{2(1 - L_0\eta)}, \quad b = \frac{L(u_0 - s_0)}{2(1 - L_0 u_0)}, \quad c = \frac{L(t_1 - u_0)}{2(1 - L_0 t_1)} \quad \text{and} \quad d = \max\{a, b, c\}.$$
Then, we show:
Lemma 3.
Suppose that
$$0 \le d \le \gamma < 1 - L_0\eta. \qquad (43)$$
Then, the sequence $\{t_n\}$ generated by (34) is nondecreasing, bounded from above by $t^{**} = \frac{\eta}{1 - \gamma}$, and converges to its unique least upper bound $t^* \in [0, t^{**}]$. Moreover, the following items hold:
$$0 \le s_n - t_n \le \gamma(t_n - s_{n-1}) \le \gamma^{2n}(s_0 - t_0), \qquad (44)$$
$$0 \le u_n - s_n \le \gamma(s_n - t_n) \le \gamma^{2n+1}(s_0 - t_0), \qquad (45)$$
and
$$0 \le t_{n+1} - u_n \le \gamma(u_n - s_n) \le \gamma^{2n+2}(s_0 - t_0). \qquad (46)$$
Proof. 
Induction is utilized for the items
$$0 \le \frac{L(s_k - t_k)}{2(1 - L_0 s_k)} \le \gamma, \qquad (47)$$
$$0 \le \frac{L(u_k - s_k)}{2(1 - L_0 u_k)} \le \gamma, \qquad (48)$$
$$0 \le \frac{L(t_{k+1} - u_k)}{2(1 - L_0 t_{k+1})} \le \gamma, \qquad (49)$$
and
$$t_k \le s_k \le u_k \le t_{k+1}. \qquad (50)$$
These estimates hold for $k = 0$ by (34) and (43). Suppose they hold for all integers smaller than or equal to $k$. Then, we obtain
$$t_{k+1} \le u_k + \gamma^{2k+2}\eta \le s_k + \gamma^{2k+1}\eta + \gamma^{2k+2}\eta \le t_k + \gamma^{2k}\eta + \gamma^{2k+1}\eta + \gamma^{2k+2}\eta \le \cdots \le t_0 + (1 + \gamma + \cdots + \gamma^{2k+2})\eta = \frac{1 - \gamma^{2k+3}}{1 - \gamma}\eta < \frac{\eta}{1 - \gamma} = t^{**},$$
and, similarly,
$$s_k \le \frac{1 - \gamma^{2k+1}}{1 - \gamma}\eta \quad \text{and} \quad u_k \le \frac{1 - \gamma^{2k+2}}{1 - \gamma}\eta.$$
Then, evidently, (47) holds if
$$\frac{L}{2}\gamma^{2k}\eta + L_0\gamma\,\frac{1 - \gamma^{2k+1}}{1 - \gamma}\eta - \gamma \le 0,$$
or, equivalently,
$$f_k^{(1)}(t) \le 0 \quad \text{at } t = \gamma. \qquad (51)$$
By the definition of $f_k^{(1)}$, we can find a relationship between two consecutive polynomials:
$$f_{k+1}^{(1)}(t) = f_{k+1}^{(1)}(t) - f_k^{(1)}(t) + f_k^{(1)}(t) = f_k^{(1)}(t) + \frac{L}{2}t^{2k+1}\eta + L_0(1 + t + \cdots + t^{2k+2})\eta - 1 - \frac{L}{2}t^{2k-1}\eta - L_0(1 + t + \cdots + t^{2k})\eta + 1 = f_k^{(1)}(t) + p(t)t^{2k-1}\eta.$$
In particular, by the definition of $p$, we obtain
$$f_{k+1}^{(1)}(t) = f_k^{(1)}(t) \quad \text{at } t = \gamma.$$
Define the function
$$f^{(1)}(t) = \lim_{k\to\infty} f_k^{(1)}(t).$$
It follows from the definition of $f_k^{(1)}$ that, for $t \in [0, 1)$,
$$f^{(1)}(t) = \frac{L_0\eta}{1 - t} - 1.$$
Consequently, assertion (51) holds if
$$f^{(1)}(t) \le 0 \quad \text{at } t = \gamma,$$
which is true by the right-hand side of inequality (43). Similarly, showing (48) reduces to
$$\frac{L}{2}\gamma^{2k+1}\eta + L_0\gamma\,\frac{1 - \gamma^{2k+2}}{1 - \gamma}\eta - \gamma \le 0,$$
or
$$f_k^{(2)}(t) \le 0 \quad \text{at } t = \gamma.$$
This time, we also have
$$f_{k+1}^{(2)}(t) = f_k^{(2)}(t) + p(t)t^{2k}\eta,$$
and, for
$$f^{(2)}(t) = \lim_{k\to\infty} f_k^{(2)}(t) = \frac{L_0\eta}{1 - t} - 1 \le 0$$
at $t = \gamma$. Moreover, (49) holds if
$$\frac{L}{2}\gamma^{2k+2}\eta + L_0\gamma\,\frac{1 - \gamma^{2k+3}}{1 - \gamma}\eta - \gamma \le 0,$$
or
$$f_k^{(3)}(t) \le 0 \quad \text{at } t = \gamma. \qquad (55)$$
However, we have
$$f_{k+1}^{(3)}(t) = f_k^{(3)}(t) + p(t)t^{2k+1}\eta,$$
so
$$f_{k+1}^{(3)}(t) = f_k^{(3)}(t) \quad \text{at } t = \gamma.$$
That is, (55) holds if $f^{(3)}(t) = \lim_{k\to\infty} f_k^{(3)}(t) \le 0$ at $t = \gamma$. However, again,
$$f^{(3)}(t) = \frac{L_0\eta}{1 - t} - 1.$$
Therefore, assertion (55) holds, again by (43). Furthermore, (50) holds by (34) and (47)–(49). The induction for items (47)–(50) is complete. Hence, we deduce that $t_k \le s_k \le u_k \le t_{k+1}$ and $\lim_{k\to\infty} t_k = t^*$. □

5. Numerical Examples

We verify the convergence criteria using method (32). Moreover, we compare the Lipschitz constants $L_0$, $L$, $L_1$ and $K$. In particular, the first example is used to show that the ratio $L_0/L_1$ can be arbitrarily small.
Example 1.
Let $M = M_1 = \mathbb{R}$. Define the function
$$\psi(t) = \delta_0 t + \delta_1 + \delta_2 \sin e^{\delta_3 t}, \quad t_0 = 0,$$
where $\delta_j$, $j = 0, 1, 2, 3$, are fixed parameters. Then, clearly, for $\delta_3$ large and $\delta_2$ small, $L_0/L_1$ can be (arbitrarily) small, i.e., $L_0/L_1 \to 0$.
The parameters $L_0$, $L$, $K$ and $L_1$ are computed in the next example. Moreover, the convergence criteria (38)–(40) and those of Lemma 3 are compared.
Example 2.
Let $M = M_1 = \mathbb{R}$. Consider a scalar function $F$ defined on the set $D = U[x_0, 1 - q]$ for $q \in (0, \frac{1}{2})$ by
$$F(x) = x^3 - q.$$
Choose $x_0 = 1$. Then, we obtain the estimates $\eta = \frac{1 - q}{3}$,
$$|F'(x_0)^{-1}(F'(x) - F'(x_0))| = |x^2 - x_0^2| = |x + x_0||x - x_0| \le (|x - x_0| + 2|x_0|)|x - x_0| = (1 - q + 2)|x - x_0| = (3 - q)|x - x_0|$$
for all $x \in D$, so $L_0 = 3 - q$ and $D_0 = U(x_0, \frac{1}{L_0}) \cap D = U(x_0, \frac{1}{L_0})$. Next,
$$|F'(x_0)^{-1}(F'(y) - F'(x))| = |y^2 - x^2| \le (|y - x_0| + |x - x_0| + 2|x_0|)|y - x| \le \Big(\frac{1}{L_0} + \frac{1}{L_0} + 2\Big)|y - x| = 2\Big(1 + \frac{1}{L_0}\Big)|y - x|$$
for all $x, y \in D_0$, so $K = 2(1 + \frac{1}{L_0})$, and
$$|F'(x_0)^{-1}(F'(y) - F'(x))| \le (|y - x_0| + |x - x_0| + 2|x_0|)|y - x| \le (1 - q + 1 - q + 2)|y - x| = 2(2 - q)|y - x|$$
for all $x, y \in D$, so $L_1 = 2(2 - q)$.
Notice that, for all $q \in (0, \frac{1}{2})$,
$$L_0 < K < L_1.$$
Next, set $y = x - F'(x)^{-1}F(x)$ for $x \in D_0$. Then, we have
$$y + x = x - F'(x)^{-1}F(x) + x = \frac{5x^3 + q}{3x^2}.$$
Define the function $\bar{F}$ on the interval $D = [q, 2 - q]$ by
$$\bar{F}(x) = \frac{5x^3 + q}{3x^2}.$$
Then, we obtain by this definition that
$$\bar{F}'(x) = \frac{15x^4 - 6xq}{9x^4} = \frac{5(x - p)(x^2 + xp + p^2)}{3x^3},$$
where $p = \sqrt[3]{\frac{2q}{5}}$ is the critical point of the function $\bar{F}$. Notice that $q < p < 2 - q$. It follows that this function is decreasing on the interval $(q, p)$ and increasing on the interval $(p, 2 - q)$, since $x^2 + xp + p^2 > 0$ and $x^3 > 0$. So, we can set
$$K_2 = \frac{5(2 - q)^3 + q}{9(2 - q)^2},$$
and note that
$$K_2 < L_0.$$
However, if $x \in D_0 = [1 - \frac{1}{L_0}, 1 + \frac{1}{L_0}]$, then
$$L = \frac{5\varrho^3 + q}{9\varrho^2},$$
where $\varrho = \frac{4 - q}{3 - q}$, and $L < K_2 < K$ for all $q \in (0, \frac{1}{2})$. The criterion (39) is not satisfied for any $q \in (0, \frac{1}{2})$; hence, it provides no guarantee that method (32) converges to $x^* = \sqrt[3]{q}$. Moreover, the earlier criterion (38) holds for $q \in (0.4620, 1]$. Furthermore, the new criterion, obtained by solving the inequality $a \le \gamma$ of Lemma 3 for $\eta$, becomes
$$2\bar{L}\eta \le 1,$$
where $\bar{L} = \frac{1}{8}\big(4L_0 + L + \sqrt{L^2 + 8L_0 L}\big)$. This condition holds for $q \in (0.4047, 1)$. Clearly, the new results extend the range of values $q$ for which the method converges.
This range can be extended even further if we apply Lemma 2. Indeed, choose $q = 0.4$; then Table 1 shows that the conditions of Lemma 2 are satisfied.
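The entries of Table 1 can be reproduced with the sequence sketch from above, using the constants of this example:

```python
q = 0.4
eta, L0 = (1 - q)/3, 3 - q
rho = (4 - q)/(3 - q)
L = (5*rho**3 + q)/(9*rho**2)        # approximately 0.792
for u, s, t1, c1, c2, c3 in newton_three_step_majorizing(eta, L0, L, 6):
    print(f"{u:.4f} {s:.4f} {t1:.4f} | {c1:.4f} {c2:.4f} {c3:.4f}")
# first row: 0.2330 0.2000 0.2341 | 0.5200 0.6058 0.6087, as in Table 1
```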
Example 3.
Consider $M = M_1 = C[0, 1]$ and $D = U[0, 1]$. The boundary value problem (BVP) [4]
$$\varsigma(0) = 0, \quad \varsigma(1) = 1,$$
$$\varsigma'' = -\varsigma^3 - \sigma\varsigma^2$$
can also be written as the integral equation
$$\varsigma(s) = s + \int_0^1 G(s, t)\big(\varsigma^3(t) + \sigma\varsigma^2(t)\big)\,dt,$$
where $\sigma$ is a constant and $G(s, t)$ is the Green's function
$$G(s, t) = \begin{cases} t(1 - s), & t \le s,\\ s(1 - t), & s < t. \end{cases}$$
Consider $F : D \to M_1$ defined by
$$[F(x)](s) = x(s) - s - \int_0^1 G(s, t)\big(x^3(t) + \sigma x^2(t)\big)\,dt.$$
Set $\varsigma_0(s) = s$ and $D = U(\varsigma_0, \rho_0)$. Then, clearly, $U(\varsigma_0, \rho_0) \subset U(0, \rho_0 + 1)$, since $\|\varsigma_0\| = 1$. If $2\sigma < 5$, then conditions (H1)–(H4) are satisfied for
$$L_0 = \frac{2\sigma + 3\rho_0 + 6}{8}, \quad L = \frac{\sigma + 6\rho_0 + 3}{4}.$$
Hence, $L_0 < L$.
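For a concrete impression, one may discretize the integral equation and apply the three-step sketch from the introduction (a rough sketch only: uniform grid, trapezoidal quadrature; the values of m and sigma are illustrative choices):

```python
import numpy as np

m, sigma = 50, 1.0
s = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0/(m - 1)); w[0] = w[-1] = 0.5/(m - 1)   # trapezoid weights
G = np.where(s[None, :] <= s[:, None],
             s[None, :]*(1.0 - s[:, None]),   # t <= s: t(1 - s)
             s[:, None]*(1.0 - s[None, :]))   # s < t:  s(1 - t)

def F(x):
    # discretized [F(x)](s) = x(s) - s - int_0^1 G(s,t)(x^3 + sigma x^2) dt
    return x - s - G @ (w*(x**3 + sigma*x**2))

def dF(x):
    # Frechet derivative: I - G diag(w (3x^2 + 2 sigma x))
    return np.eye(m) - G*(w*(3*x**2 + 2*sigma*x))[None, :]

x = three_step(F, dF, s.copy())   # reuses the sketch following (5)
print(np.max(np.abs(F(x))))       # residual of the computed solution
```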
The next two examples concern the local convergence of method (32); the radii $r_j$ and $r$ are computed using formula (6) and the functions $\varphi_j$.
Example 4.
Let $M = M_1 = C[0, 1]$ be equipped with the max-norm and let $D = U[0, 1]$. Consider $Q : D \to M_1$ given by
$$Q(\lambda)(x) = \lambda(x) - 5\int_0^1 x\tau\,\lambda(\tau)^3\,d\tau.$$
We obtain
$$Q'(\lambda)(\xi)(x) = \xi(x) - 15\int_0^1 x\tau\,\lambda(\tau)^2\xi(\tau)\,d\tau \quad \text{for each } \xi \in D.$$
Then, since $x^* = 0$, conditions (A1)–(A5) hold provided that $\ell_0 = \ell_1 = \ell_2 = \ell_3 = 7.5$. The radii are
$$r_1 = 0.0533 = r, \quad r_2 = 0.1499 \quad \text{and} \quad r_3 = 0.1660.$$
Example 5.
Consider the motion system with
$$H_1'(w_1) = e^{w_1}, \quad H_2'(w_2) = (e - 1)w_2 + 1, \quad H_3'(w_3) = 1$$
and $H_1(0) = H_2(0) = H_3(0) = 0$. Let $H = (H_1, H_2, H_3)$, $M = M_1 = \mathbb{R}^3$, $D = U[0, 1]$ and $x^* = (0, 0, 0)^{tr}$. The function $H$ on $D$ for $w = (w_1, w_2, w_3)^{tr}$ is then given by
$$H(w) = \Big(e^{w_1} - 1,\ \frac{e - 1}{2}w_2^2 + w_2,\ w_3\Big)^{tr}.$$
The Fréchet derivative is given by
$$H'(w) = \begin{pmatrix} e^{w_1} & 0 & 0\\ 0 & (e - 1)w_2 + 1 & 0\\ 0 & 0 & 1 \end{pmatrix}.$$
Notice that $H'(x^*) = I$. Let $w \in \mathbb{R}^3$ with $w = (w_1, w_2, w_3)^{tr}$. Moreover, the norm for a matrix $M \in \mathbb{R}^{3\times3}$ is
$$\|M\| = \max_{1\le k\le 3}\sum_{i=1}^3 |m_{k,i}|.$$
We need to verify conditions (A1)–(A5). To achieve this, we study the scalar function $G(t) = e^t - 1$ on $D = [-1, 1]$. We have $t^* = 0$, hence $G'(t^*) = 1$, and
$$|G'(t) - G'(t^*)| = \Big|t + \frac{t^2}{2!} + \cdots + \frac{t^n}{n!} + \cdots\Big| = \Big|1 + \frac{t - 0}{2!} + \cdots + \frac{(t - 0)^{n-1}}{n!} + \cdots\Big|\,|t - 0| \le (e - 1)|t - 0|,$$
so $\ell_1 = e - 1$. Then, $D_1 = U(t^*, \frac{1}{e - 1}) \cap D = U(t^*, \frac{1}{e - 1})$. This time, we obtain
$$|G'(t) - G'(t^*)| \le \ell_0|t - 0|,$$
where
$$\ell_0 = 1 + \frac{1}{(e - 1)2!} + \cdots + \frac{1}{(e - 1)^{n-1}n!} + \cdots \approx 1.43 < e - 1.$$
Then, we have for $t \in D_1$ and $s = t - G'(t)^{-1}G(t)$,
$$|s| = |t - 1 + e^{-t}| = \Big|\frac{(-t)^2}{2!} + \cdots + \frac{(-t)^n}{n!} + \cdots\Big| \le |t|\Big(\frac{|t|}{2!} + \cdots + \frac{|t|^{n-1}}{n!} + \cdots\Big) \le (\ell_0 - 1)|t| \le \frac{\ell_0 - 1}{e - 1}.$$
Moreover,
$$|G'(s) - G'(t^*)| = |e^s - 1| \le |s|\Big(1 + \frac{|s|}{2!} + \cdots + \frac{|s|^{n-1}}{n!} + \cdots\Big) \le (\ell_0 - 1)\Big(1 + \frac{\ell_0 - 1}{(e - 1)2!} + \cdots\Big)|t - 0| = \ell_2|t - 0|,$$
where $\ell_2 \approx 0.49 < 1$. We can set $\ell_3 = \ell_2$.
The radii are
$$r_1 = 0.2409 = r, \quad r_2 = 0.3101 \quad \text{and} \quad r_3 = 0.3588.$$
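As a cross-check, the radius sketch following (10) reproduces $r$ with these constants:

```python
from math import e
print(convergence_radius(1.43, e - 1, 0.49, 0.49))   # ~0.2409 = r1 = r
```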
In the last example, we revisit the motivational example given in the introduction, where we apply scheme (32).
Example 6.
The iterates for the motivational example with $x_0 = 0.85$ are given in Table 2.
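The tabulated values can be reproduced in a few lines (a sketch; note that they correspond to the frozen-derivative choice (4), i.e., $M_{1,n} = M_{2,n} = M_{3,n} = F'(x_n)$, applied to $\lambda$):

```python
from math import log

def lam(t):
    return t**3*log(t**2) + t**5 - t**4 if t != 0.0 else 0.0

def dlam(t):
    return 3*t**2*log(t**2) + 2*t**2 + 5*t**4 - 4*t**3

x = 0.85
for n in range(6):
    d = dlam(x)        # derivative frozen at x_n
    y = x - lam(x)/d
    z = y - lam(y)/d
    print(n, f"{x:.4f} {y:.4f} {z:.4f}")   # x_n, y_n, z_n as in Table 2
    x = z - lam(z)/d
```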

6. Conclusions

Conditions for the convergence of generalized three-step schemes are presented for both the local and the semi-local case. The sequences generated by these schemes approximate locally unique solutions of the equation $F(x) = 0$. The convergence conditions depend only on the divided differences of order one or the derivative that actually appear in the schemes. This is not the case in earlier articles, which utilize high-order derivatives that do not appear in the schemes. Moreover, the error analysis is tighter because we show that the iterates remain in a stricter domain than in earlier articles. Hence, the applicability of these schemes is extended under the same or even weaker conditions. Our process does not depend on these particular schemes. Therefore, it can be employed analogously to extend the usage of other schemes [9,10,15,16,17,18].

Author Contributions

Conceptualization, S.R., I.K.A., S.G. and C.I.A.; methodology, S.R., I.K.A., S.G. and C.I.A.; software, S.R., I.K.A., S.G. and C.I.A.; validation, S.R., I.K.A., S.G. and C.I.A.; formal analysis, S.R., I.K.A., S.G. and C.I.A.; investigation, S.R., I.K.A., S.G. and C.I.A.; resources, S.R., I.K.A., S.G. and C.I.A.; data curation, S.R., I.K.A., S.G. and C.I.A.; writing—original draft preparation, S.R., I.K.A., S.G. and C.I.A.; writing—review and editing, S.R., I.K.A., S.G. and C.I.A.; visualization, S.R., I.K.A., S.G. and C.I.A.; supervision, S.R., I.K.A., S.G. and C.I.A.; project administration, S.R., I.K.A., S.G. and C.I.A.; funding acquisition, S.R., I.K.A., S.G. and C.I.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to express our gratitude to the reviewers for the constructive criticism of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton's method. J. Complex. 2012, 28, 364–387.
  2. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
  3. Argyros, I.K. Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications. Mathematics 2021, 9, 1942.
  4. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press, Taylor and Francis Group: Boca Raton, FL, USA, 2022.
  5. Kou, J.; Wang, X.; Li, Y. Some eighth-order root-finding three-step methods. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 536–544.
  6. Argyros, I.K.; Magrenan, A.A. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018.
  7. Grau-Sanchez, M.; Grau, A.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385.
  8. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169.
  9. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
  10. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton–Kantorovich type theorems. J. Complex. 2010, 26, 3–42.
  11. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comp. Appl. Math. 2016, 35, 269–284.
  12. Xiao, X.; Yin, H. Achieving higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2017, 311, 251–261.
  13. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106.
  14. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Englewood Cliffs, NJ, USA, 1964.
  15. Ezquerro, J.A.; Hernandez, M.A. Newton's Method: An Updated Approach of Kantorovich's Theory; Birkhäuser: Cham, Switzerland, 2018.
  16. Nashed, M.Z.; Chen, X. Convergence of Newton-like methods for singular operator equations using outer inverses. Numer. Math. 1993, 66, 235–257.
  17. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative method of order 1.839... for solving nonlinear least squares problems. Appl. Math. Comput. 2005, 161, 253–264.
  18. Verma, R. New Trends in Fractional Programming; Nova Science Publisher: New York, NY, USA, 2019.
Table 1. Majorizing sequence (34) for Example 2 with q = 0.4.

| n           | 0      | 1      | 2      | 3      | 4      | 5      |
|-------------|--------|--------|--------|--------|--------|--------|
| u_n         | 0.2330 | 0.2945 | 0.3008 | 0.3009 | 0.3009 | 0.3009 |
| s_n         | 0.2000 | 0.2896 | 0.3008 | 0.3009 | 0.3009 | 0.3009 |
| t_{n+1}     | 0.2341 | 0.2946 | 0.3008 | 0.3009 | 0.3009 | 0.3009 |
| L_0 s_n     | 0.5200 | 0.7530 | 0.7820 | 0.7824 | 0.7824 | 0.7824 |
| L_0 u_n     | 0.6058 | 0.7658 | 0.7822 | 0.7824 | 0.7824 | 0.7824 |
| L_0 t_{n+1} | 0.6087 | 0.7659 | 0.7822 | 0.7824 | 0.7824 | 0.7824 |
Table 2. Iterates for the motivational example with x_0 = 0.85.

| n   | 0      | 1      | 2      | 3      | 4      | 5      |
|-----|--------|--------|--------|--------|--------|--------|
| y_n | 1.1609 | 0.2067 | 0.0846 | 0.0377 | 0.0174 | 0.0081 |
| z_n | 0.3121 | 0.1640 | 0.0695 | 0.0313 | 0.0145 | 0.0068 |
| x_n | 0.8500 | 0.3985 | 0.1399 | 0.0605 | 0.0274 | 0.0127 |