
An Extension on the Local Convergence for the Multi-Step Seventh Order Method with ψ-Continuity Condition in the Banach Spaces

by Mohammad Taghi Darvishi 1,*, R. H. Al-Obaidi 1,2, Akanksha Saxena 3, Jai Prakash Jaiswal 4 and Kamal Raj Pardasani 3

1 Department of Mathematics, Faculty of Science, Razi University, Kermanshah 67149, Iran
2 Medical Physics Department, Al-Mustaqbal University College, Hillah 51001, Iraq
3 Department of Mathematics, Maulana Azad National Institute of Technology, Bhopal 462003, India
4 Department of Mathematics, Guru Ghasidas Vishwavidyalaya (A Central University), Bilaspur 495009, India
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(12), 713; https://doi.org/10.3390/fractalfract6120713
Submission received: 1 November 2022 / Revised: 20 November 2022 / Accepted: 23 November 2022 / Published: 30 November 2022
(This article belongs to the Topic Advances in Nonlinear Dynamics: Methods and Applications)

Abstract:
The local convergence analysis of the multi-step seventh order method for solving nonlinear equations is presented in this paper. The novelty of our study is that it requires only a weak hypothesis in which the Fréchet derivative of the nonlinear operator satisfies the ψ-continuity condition, which thereby extends the applicability of the method when both the Lipschitz and Hölder conditions fail. The convergence in this study is considered under hypotheses on the first-order derivative, without involving derivatives of higher order. To find a subset of the original convergence domain, a strategy is devised here. As a result, the new Lipschitz constants are at least as tight as the old ones, allowing for a more precise convergence analysis in the local convergence case. Some concrete numerical examples showing the performance of the method over some existing schemes are presented in this article.

1. Introduction

One of the wide-ranging subjects with close connections to mathematics, computer science, engineering, and the applied sciences is numerical analysis. Many of its most fundamental problems, when mathematically modeled, lead to integral equations, boundary value problems, and differential problems, which reduce to nonlinear equations of the following form:
T(x) = 0,   (1)
where T is defined on a convex open subset D of a Banach space X with values in a Banach space Y. Many problems in different fields of computational science and engineering, such as radiative transfer theory and optimization, can be cast in the form (1) using mathematical modeling. Analytical methods for solving such problems are very scarce or almost nonexistent. Therefore, many researchers rely on iterative methods, and a plethora of iterative methods has been proposed. In various articles, many authors have studied the local convergence analysis using Taylor's series, but they did not obtain the radii of the convergence ball for the solution, as can be seen in refs. [1,2]. This approach has been extended to iterative methods in Banach spaces to obtain better theoretical results without following the Taylor's series approach. In this way, there is no need to use higher-order derivatives to show the convergence of the scheme. Such techniques are discussed by many authors; for a better understanding, one can go through refs. [3,4,5,6]. The practice of numerical functional analysis for finding such solutions is widely and substantially connected to Newton-like methods, which are defined as follows:
x_{n+1} = x_n - [T'(x_n)]^{-1} T(x_n), \quad n \geq 0.   (2)
This method is frequently used by various researchers, as it has quadratic convergence (as can be seen in reference [6]) and only one evaluation of the Jacobian of T is needed at each step. The other properties of Newton's method are established in the reference book [7]. Moreover, in some applications involving stiff systems, high-order methods are useful. Therefore, it is important to study high-order methods.
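For concreteness, the Newton iteration (2) can be sketched in a few lines of code. The implementation below is an illustrative sketch only (the function names and the test problem are ours, not from the paper), treating T and its Jacobian T' as callables:

```python
import numpy as np

def newton(T, dT, x0, tol=1e-12, max_iter=50):
    """Newton's method for T(x) = 0: one Jacobian solve per iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(dT(x), T(x))  # solve T'(x) s = T(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative 1-D problem written as a 1x1 system: x^2 - 2 = 0
root = newton(lambda x: np.array([x[0]**2 - 2.0]),
              lambda x: np.array([[2.0 * x[0]]]),
              [1.0])
```

Each iteration costs one Jacobian evaluation and one linear solve, which is the operation count the quadratic convergence is weighed against.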
Obtaining the radius of the convergence ball, as well as devising a theory to extend the convergence region, are both important issues. The convergence domain is critical for the steady behaviour of an iterative method from a numerical standpoint. The convergence analysis of iterative procedures, particularly the local analysis, is based on the information around a solution to find estimates of the radii of the convergence balls. Plenty of studies have been conducted on the local and semilocal convergence analysis of Newton-like techniques. Many iterative methods of increasing order of convergence, such as third order [8,9], fourth order [10], and fifth order [11,12], have been developed in recent decades and have demonstrated their efficiency in numerical terms. In particular, Sharma and Gupta [13] constructed the following three-step method of order five:
y_n = x_n - \tfrac{1}{2}\Gamma_n T(x_n),
z_n = x_n - [T'(y_n)]^{-1} T(x_n),
x_{n+1} = z_n - [2[T'(y_n)]^{-1} - \Gamma_n] T(z_n),   (3)
where \Gamma_n = [T'(x_n)]^{-1}. The local convergence of the above multi-step Homeier-like method has been studied by Panday and Jaiswal [14] with the help of the Lipschitz and Hölder continuity conditions. Let us consider the scheme given by Xiao and Yin [15]:
y_n = x_n - \tfrac{1}{2}\Gamma_n T(x_n),
z_n^{(1)} = x_n - [T'(y_n)]^{-1} T(x_n),
z_n^{(2)} = z_n^{(1)} - [2[T'(y_n)]^{-1} - \Gamma_n] T(z_n^{(1)}),
x_{n+1} = z_n^{(2)} - [2[T'(y_n)]^{-1} - \Gamma_n] T(z_n^{(2)}).   (4)
This method requires three function evaluations, two first-order derivative evaluations, and two matrix inversions per iteration. In this article, we weaken the continuity condition and then analyze its local convergence. The motivation for writing this paper is to extend the applicability of method (4); the novelty includes the extension of the convergence domain, achieved by considering a subset of D that contains the iterates. Because the Lipschitz-like parameters (or functions) on this subset are at least as tight as the original ones, the resulting convergence analysis is finer.
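To make the structure of scheme (4) concrete, one iteration can be written out as code. The sketch below is an illustrative implementation of ours (not the authors' software; the helper names are hypothetical), using the function from Example 2 later in the paper as a test problem. Note the operation count: two Jacobian evaluations (at x_n and y_n) and three residual evaluations per iteration.

```python
import numpy as np

def seventh_order_step(T, dT, x):
    """One iteration of the multi-step scheme (4): x -> y -> z1 -> z2 -> x_next."""
    Ax = dT(x)                              # T'(x_n)
    Tx = T(x)
    y = x - 0.5 * np.linalg.solve(Ax, Tx)   # half Newton step
    Ay = dT(y)                              # T'(y_n)
    z1 = x - np.linalg.solve(Ay, Tx)

    def M(v):
        # the composite operator [2[T'(y)]^{-1} - Gamma_n] applied to v
        return 2.0 * np.linalg.solve(Ay, v) - np.linalg.solve(Ax, v)

    z2 = z1 - M(T(z1))
    return z2 - M(T(z2))

# Test problem: F(v) = (e^x - 1, ((e-1)/2) y^2 + y, z), root at the origin
F  = lambda v: np.array([np.exp(v[0]) - 1.0,
                         0.5 * (np.e - 1.0) * v[1]**2 + v[1],
                         v[2]])
dF = lambda v: np.diag([np.exp(v[0]), (np.e - 1.0) * v[1] + 1.0, 1.0])

v = np.array([0.1, 0.1, 0.1])
for _ in range(3):
    v = seventh_order_step(F, dF, v)
```

Starting inside the convergence ball, a few iterations drive the error to machine precision.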
The local convergence of the scheme (4) has recently been analyzed under both the Lipschitz and Hölder conditions in the article [16], but numerous problems exist for which the Lipschitz as well as the Hölder condition fail. As a motivational illustration, consider the nonlinear integral equation of the mixed Hammerstein type given by [12]:
F[x(s)] = x(s) - 5 \int_0^1 s t \, x(t)^3 \, dt,
with x(s) \in C[0, 1]. The first derivative of F is
F'[x(s)] v(s) = v(s) - 15 \int_0^1 s t \, x(t)^2 v(t) \, dt.
It is clear that neither the Lipschitz nor the Hölder condition holds for this problem. Thereby, we expand the applicability of method (4) by using hypotheses only on the first-order derivative of the function T and generalized Lipschitz continuity conditions. In this manuscript, we address many concerns by providing the radius of the convergence ball, computable error bounds, and the uniqueness of the solution, all under the weaker continuity condition.
The outline of this paper is as follows: Section 2 deals with the local convergence results for method (4), obtaining a ball of convergence followed by a uniqueness result. Section 3 presents the numerical examples, in which we discuss the obtained results and validate them, followed by the concluding section.

2. Local Convergence Analysis

In this section, the local convergence of the multi-step method (4) is investigated. This local convergence analysis is centered on some parameters and scalar functions. Let ψ 0 be a non-decreasing continuous function defined on the interval [ 0 , + ) with values in [ 0 , + ) satisfying ψ 0 ( 0 ) = 0 . Define parameter ρ 0 by
\rho_0 = \sup\{t \geq 0 : \psi_0(t) < 1\}.   (5)
Let also \psi : [0, \rho_0) \to [0, +\infty) be a continuous and non-decreasing function such that \psi(0) = 0. Define the functions \eta_1, \eta_2, \eta_3, \eta_4, p, H_1, H_2, H_3, and H_4 on the interval [0, \rho_0) by
\eta_4(a) = \left[ \frac{\int_0^1 \psi((1-t)\eta_3(a)a)\,dt}{1 - \psi_0(\eta_3(a)a)} + \frac{\psi(\eta_1(a)a) + \psi(\eta_3(a)a)}{1 - \psi_0(\eta_3(a)a)} \cdot \frac{\int_0^1 (\psi_0(t\eta_3(a)a) + 1)\,dt}{1 - p(a)} + \frac{\psi(\eta_1(a)a) + \psi(a)}{1 - \psi_0(a)} \cdot \frac{\int_0^1 (\psi_0(t\eta_3(a)a) + 1)\,dt}{1 - p(a)} \right] \eta_3(a),   (6)
where
\eta_1(a) = \frac{1}{1 - \psi_0(a)} \left[ \frac{1}{2} \int_0^1 (\psi_0(ta) + 1)\,dt + \int_0^1 \psi((1-t)a)\,dt \right],   (7)
\eta_2(a) = \frac{1}{1 - \psi_0(a)} \left[ \int_0^1 \psi((1-t)a)\,dt + \frac{[\psi(\eta_1(a)a) + \psi(a)] \int_0^1 (\psi_0(ta) + 1)\,dt}{1 - p(a)} \right],   (8)
\eta_3(a) = \left[ \frac{\int_0^1 \psi((1-t)\eta_2(a)a)\,dt}{1 - \psi_0(\eta_2(a)a)} + \frac{\psi(\eta_1(a)a) + \psi(\eta_2(a)a)}{1 - \psi_0(\eta_2(a)a)} \cdot \frac{\int_0^1 (\psi_0(t\eta_2(a)a) + 1)\,dt}{1 - p(a)} + \frac{\psi(\eta_1(a)a) + \psi(a)}{1 - \psi_0(a)} \cdot \frac{\int_0^1 (\psi_0(t\eta_2(a)a) + 1)\,dt}{1 - p(a)} \right] \eta_2(a),   (9)
and
p(a) = \psi_0(\eta_1(a)\,a).   (10)
Let
H_1(a) = \eta_1(a) - 1, \quad H_2(a) = \eta_2(a) - 1,   (11)
H_3(a) = \eta_3(a) - 1, \quad H_4(a) = \eta_4(a) - 1.   (12)
We have that H_i(0) < 0 for i = 1, 2, 3, 4.
Suppose that H_1(a) \to +\infty or a positive constant and H_2(a) \to +\infty or a positive constant as a \to \rho_0^-. Similarly, suppose that H_3(a) \to +\infty or a positive constant and H_4(a) \to +\infty or a positive constant as a \to \bar{\rho}_0^-, where
\bar{\rho}_0 = \max\{a \in [0, \rho_0] : \psi_0(\eta_1(a)\,a) < 1\}.   (13)
It then follows from the intermediate value theorem that the functions H_i, i = 1, 2, 3, 4, have zeros in the interval (0, \rho_0). Define the radius of convergence \rho by
\rho = \min\{\rho_i\}, \quad i = 1, 2, 3, 4,   (14)
where \rho_i denotes the smallest zero of H_i. Then, we have that for each a \in [0, \rho):
0 \leq \eta_i(a) < 1,   (15)
0 \leq \psi_0(a) < 1,   (16)
0 \leq \psi_0(\eta_1(a)\,a) < 1.   (17)
Let B(x^*, \rho) and \bar{B}(x^*, \rho) stand, respectively, for the open and closed balls in X with center x^* \in X and radius \rho > 0. Next, we present the local convergence analysis of method (4) using the preceding notations and generalized Lipschitz-Hölder type conditions.
Theorem 1.
Let T : D \subseteq X \to Y be a continuously Fréchet differentiable operator. Suppose that x^* \in D and that there exists a continuous and non-decreasing function \psi_0 : [0, +\infty) \to [0, +\infty) with \psi_0(0) = 0 such that for each x \in D:
T(x^*) = 0, \quad [T'(x^*)]^{-1} \in L(Y, X),   (18)
where L(Y, X) is the set of bounded linear operators from Y to X, and
\|[T'(x^*)]^{-1}(T'(x) - T'(x^*))\| \leq \psi_0(\|x - x^*\|).   (19)
Moreover, suppose that there exists a continuous and non-decreasing function \psi : [0, +\infty) \to [0, +\infty) with \psi(0) = 0 such that for each x, y \in D_0 = D \cap B(x^*, \rho_0):
\|[T'(x^*)]^{-1}(T'(x) - T'(y))\| \leq \psi(\|x - y\|),   (20)
B(x^*, \rho) \subseteq D,   (21)
where \rho_0 and \rho are defined by Equations (5) and (14), respectively. Then, the sequence \{x_n\} generated by method (4) for x_0 \in B(x^*, \rho) \setminus \{x^*\} is well defined, remains in B(x^*, \rho) for each n = 0, 1, 2, \ldots, and converges to x^*. Moreover, the following estimates hold:
\|y_n - x^*\| \leq \eta_1(\|x_n - x^*\|)\,\|x_n - x^*\| \leq \|x_n - x^*\| < \rho,   (22)
\|z_n^{(1)} - x^*\| \leq \eta_2(\|x_n - x^*\|)\,\|x_n - x^*\| \leq \|x_n - x^*\| < \rho,   (23)
\|z_n^{(2)} - x^*\| \leq \eta_3(\|x_n - x^*\|)\,\|x_n - x^*\| \leq \|x_n - x^*\| < \rho,   (24)
and
\|x_{n+1} - x^*\| \leq \eta_4(\|x_n - x^*\|)\,\|x_n - x^*\| \leq \|x_n - x^*\| < \rho,   (25)
where the functions \eta_i, i = 1, 2, 3, 4, are defined by the expressions (6)–(9). Furthermore, if there exists \varrho \geq \rho such that
\int_0^1 \psi_0(\theta\varrho)\,d\theta < 1,   (26)
then the point x^* is the only solution of the equation T(x) = 0 in D_1 = D \cap \bar{B}(x^*, \varrho).
Proof. 
We shall show by mathematical induction that the sequence \{x_n\} is well defined and converges to x^*. Using the hypotheses, x_0 \in B(x^*, \rho) \setminus \{x^*\}, Equation (5), and inequality (19), we have that
\|[T'(x^*)]^{-1}(T'(x_0) - T'(x^*))\| \leq \psi_0(\|x_0 - x^*\|) \leq \psi_0(\rho) < 1.   (27)
It follows from the above and the Banach lemma on invertible operators [17] that [T'(x_0)]^{-1} \in L(Y, X), i.e., T'(x_0) is invertible, and
\|[T'(x_0)]^{-1} T'(x^*)\| \leq \frac{1}{1 - \psi_0(\|x_0 - x^*\|)}.   (28)
Now, y_0 is well defined by the first sub-step of the scheme (4) for n = 0, and
y_0 - x^* = x_0 - x^* - \tfrac{1}{2}[T'(x_0)]^{-1} T(x_0) = \tfrac{1}{2}[T'(x_0)]^{-1} T(x_0) + [T'(x_0)]^{-1}\left[ T'(x_0)(x_0 - x^*) - T(x_0) + T(x^*) \right].   (29)
Expanding T(x_0) about x^* and taking the norm of Equation (29), we obtain
\|y_0 - x^*\| \leq \frac{1}{2}\,\|[T'(x_0)]^{-1} T'(x^*)\| \int_0^1 \|[T'(x^*)]^{-1} T'(x^* + t(x_0 - x^*))\|\,dt\,\|x_0 - x^*\| + \|[T'(x_0)]^{-1} T'(x^*)\| \int_0^1 \|[T'(x^*)]^{-1}[T'(x_0) - T'(x^* + t(x_0 - x^*))]\|\,dt\,\|x_0 - x^*\|
\leq \frac{1}{1 - \psi_0(\|x_0 - x^*\|)} \left[ \frac{1}{2}\int_0^1 (\psi_0(t\|x_0 - x^*\|) + 1)\,dt + \int_0^1 \psi((1-t)\|x_0 - x^*\|)\,dt \right] \|x_0 - x^*\| = \eta_1(\|x_0 - x^*\|)\,\|x_0 - x^*\| < \rho.   (30)
From inequalities (19) and (30), we have
\|[T'(x^*)]^{-1}[T'(y_0) - T'(x^*)]\| \leq \psi_0(\|y_0 - x^*\|) \leq \psi_0(\eta_1(\|x_0 - x^*\|)\,\|x_0 - x^*\|) = p(\|x_0 - x^*\|) < 1.   (31)
Thus, by Banach lemma,
\|[T'(y_0)]^{-1} T'(x^*)\| \leq \frac{1}{1 - p(\|x_0 - x^*\|)}.   (32)
From the second sub-step of the method (4), we have
z_0^{(1)} - x^* = x_0 - x^* - [T'(y_0)]^{-1} T(x_0) = \left( x_0 - x^* - [T'(x_0)]^{-1} T(x_0) \right) + [T'(x_0)]^{-1}[T'(y_0) - T'(x_0)]\,[T'(y_0)]^{-1} T(x_0).   (33)
On taking the norm of Equation (33), we obtain
\|z_0^{(1)} - x^*\| \leq \|x_0 - x^* - [T'(x_0)]^{-1} T(x_0)\| + \|[T'(x_0)]^{-1} T'(x^*)\| \cdot \|[T'(x^*)]^{-1}[T'(y_0) - T'(x_0)]\| \cdot \|[T'(y_0)]^{-1} T'(x^*)\| \cdot \|[T'(x^*)]^{-1} T(x_0)\|
\leq \frac{1}{1 - \psi_0(\|x_0 - x^*\|)} \left[ \int_0^1 \psi((1-t)\|x_0 - x^*\|)\,dt + \frac{[\psi(\|y_0 - x^*\|) + \psi(\|x_0 - x^*\|)] \int_0^1 (\psi_0(t\|x_0 - x^*\|) + 1)\,dt}{1 - p(\|x_0 - x^*\|)} \right] \|x_0 - x^*\|.   (34)
Thus, we obtain
\|z_0^{(1)} - x^*\| \leq \eta_2(\|x_0 - x^*\|)\,\|x_0 - x^*\| \leq \|x_0 - x^*\| < \rho.   (35)
From the next sub-step of the method (4), we have
z_0^{(2)} - x^* = z_0^{(1)} - x^* - (2[T'(y_0)]^{-1} - [T'(x_0)]^{-1}) T(z_0^{(1)}) = \left( z_0^{(1)} - x^* - [T'(z_0^{(1)})]^{-1} T(z_0^{(1)}) \right) + [T'(z_0^{(1)})]^{-1} T'(x^*) \cdot [T'(x^*)]^{-1}[T'(y_0) - T'(z_0^{(1)})] \cdot [T'(y_0)]^{-1} T'(x^*) \cdot [T'(x^*)]^{-1} T(z_0^{(1)}) + [T'(x_0)]^{-1} T'(x^*) \cdot [T'(x^*)]^{-1}[T'(y_0) - T'(x_0)] \cdot [T'(y_0)]^{-1} T'(x^*) \cdot [T'(x^*)]^{-1} T(z_0^{(1)}).   (36)
On expanding T(z_0^{(1)}) about x^* and taking the norm of Equation (36), we obtain
\|z_0^{(2)} - x^*\| \leq \frac{\int_0^1 \psi((1-t)\|z_0^{(1)} - x^*\|)\,dt}{1 - \psi_0(\|z_0^{(1)} - x^*\|)}\,\|z_0^{(1)} - x^*\| + \frac{\psi(\|y_0 - x^*\|) + \psi(\|z_0^{(1)} - x^*\|)}{1 - \psi_0(\|z_0^{(1)} - x^*\|)} \cdot \frac{\int_0^1 (\psi_0(t\|z_0^{(1)} - x^*\|) + 1)\,dt}{1 - p(\|x_0 - x^*\|)}\,\|z_0^{(1)} - x^*\| + \frac{\psi(\|y_0 - x^*\|) + \psi(\|x_0 - x^*\|)}{1 - \psi_0(\|x_0 - x^*\|)} \cdot \frac{\int_0^1 (\psi_0(t\|z_0^{(1)} - x^*\|) + 1)\,dt}{1 - p(\|x_0 - x^*\|)}\,\|z_0^{(1)} - x^*\|.   (37)
Thus, we have
\|z_0^{(2)} - x^*\| \leq \eta_3(\|x_0 - x^*\|)\,\|x_0 - x^*\| \leq \|x_0 - x^*\| < \rho.   (38)
Now, from the last sub-step of the method (4), we have
x_1 - x^* = z_0^{(2)} - x^* - (2[T'(y_0)]^{-1} - [T'(x_0)]^{-1}) T(z_0^{(2)}) = \left( z_0^{(2)} - x^* - [T'(z_0^{(2)})]^{-1} T(z_0^{(2)}) \right) + [T'(z_0^{(2)})]^{-1} T'(x^*) \cdot [T'(x^*)]^{-1}[T'(y_0) - T'(z_0^{(2)})] \cdot [T'(y_0)]^{-1} T'(x^*) \cdot [T'(x^*)]^{-1} T(z_0^{(2)}) + [T'(x_0)]^{-1} T'(x^*) \cdot [T'(x^*)]^{-1}[T'(y_0) - T'(x_0)] \cdot [T'(y_0)]^{-1} T'(x^*) \cdot [T'(x^*)]^{-1} T(z_0^{(2)}).   (39)
On expanding T(z_0^{(2)}) about x^* and taking the norm of Equation (39), we obtain
\|x_1 - x^*\| \leq \frac{\int_0^1 \psi((1-t)\|z_0^{(2)} - x^*\|)\,dt}{1 - \psi_0(\|z_0^{(2)} - x^*\|)}\,\|z_0^{(2)} - x^*\| + \frac{\psi(\|y_0 - x^*\|) + \psi(\|z_0^{(2)} - x^*\|)}{1 - \psi_0(\|z_0^{(2)} - x^*\|)} \cdot \frac{\int_0^1 (\psi_0(t\|z_0^{(2)} - x^*\|) + 1)\,dt}{1 - p(\|x_0 - x^*\|)}\,\|z_0^{(2)} - x^*\| + \frac{\psi(\|y_0 - x^*\|) + \psi(\|x_0 - x^*\|)}{1 - \psi_0(\|x_0 - x^*\|)} \cdot \frac{\int_0^1 (\psi_0(t\|z_0^{(2)} - x^*\|) + 1)\,dt}{1 - p(\|x_0 - x^*\|)}\,\|z_0^{(2)} - x^*\|.   (40)
Thus, we have
\|x_1 - x^*\| \leq \eta_4(\|x_0 - x^*\|)\,\|x_0 - x^*\| \leq \|x_0 - x^*\| < \rho,   (41)
which shows that x_1 \in B(x^*, \rho) for n = 0. By simply replacing x_0, y_0, z_0^{(1)}, z_0^{(2)}, x_1 with x_n, y_n, z_n^{(1)}, z_n^{(2)}, x_{n+1} in the preceding estimates, we arrive at inequalities (22)–(25). By the estimate
\|x_{n+1} - x^*\| \leq \eta_4(\|x_0 - x^*\|)\,\|x_n - x^*\| < \rho,   (42)
we conclude that \lim_{n \to \infty} x_n = x^* and x_{n+1} \in B(x^*, \rho). Finally, to prove the uniqueness, let y^* \in D_1 with y^* \neq x^* and T(y^*) = 0. Define F = \int_0^1 T'(x^* + t(y^* - x^*))\,dt. On expanding T(y^*) about x^* and using inequalities (19) and (26), we obtain
\left\| [T'(x^*)]^{-1} \int_0^1 [T'(x^* + t(y^* - x^*)) - T'(x^*)]\,dt \right\| \leq \int_0^1 \psi_0(t\|y^* - x^*\|)\,dt \leq \int_0^1 \psi_0(t\varrho)\,dt < 1.   (43)
So, by the Banach lemma, \int_0^1 [T'(x^*)]^{-1} T'(x^* + t(y^* - x^*))\,dt exists and is invertible, leading to the conclusion x^* = y^*, which completes the uniqueness part of the proof. □

3. Numerical Examples

To show the performance of the method presented in this paper, some numerical examples are considered in this section.
Example 1
([18]). We return to the motivational example given in the introduction of this study and consider a nonlinear integral equation of Hammerstein type. These equations have a strong physical background and arise in electro-magnetic fluid dynamics [19]. The equation has the following form:
x(s) = u(s) + \int_a^b G(s, t) H(x(t))\,dt, \quad a \leq s \leq b,   (44)
for x(s), u(s) \in C[a, b] with -\infty < a < b < +\infty; G is the Green function and H is a polynomial function. The standard procedure for solving these types of equations consists in rewriting them as a nonlinear operator equation in a Banach space, i.e., F(x) = 0, where F : D \subseteq C[a, b] \to C[a, b], D is a non-empty open convex subset, and
F[x(s)] = x(s) - u(s) - \int_a^b G(s, t) H(x(t))\,dt,   (45)
considering the uniform norm \|v\| = \max_{s \in [a,b]} |v(s)|. It was observed that, in some cases, boundedness conditions may not be satisfied, since F'(x) or F''(x) can be unbounded on a general domain; an alternative is then to work on a domain that contains the solution. However, it is more convenient to apply the local convergence results obtained in our study in order to give the radius of the convergence ball. Let x^* = 0; then, on using assumptions (18)–(20), we have that \psi_0(t) = 7.5t < \psi(t) = 15t. Additionally, with the help of the Mathematica 9 software for numerical computation, it is straightforward to see, on the basis of Table 1 and Figure 1, that method (4) has a larger domain of convergence in contrast to the method (MMB) whose local convergence was analyzed by Behl et al. [18].
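The Method (4) radii in Table 1 can be checked independently of Mathematica. For the linear functions ψ_0(t) = 7.5t and ψ(t) = 15t of this example, the integrals in (6)–(9) have closed forms (∫_0^1 ψ((1−t)c)dt = 15c/2 and ∫_0^1 (ψ_0(tc)+1)dt = 7.5c/2 + 1), so each ρ_i can be found by bisection on η_i(a) = 1. The following is a sketch of ours (not the authors' computation; the helper names are hypothetical):

```python
L0, L = 7.5, 15.0   # slopes of psi_0 and psi in Example 1

def eta1(a):
    # (7) with closed-form integrals for linear psi's
    return (0.5 * (L0 * a / 2 + 1) + L * a / 2) / (1 - L0 * a)

def p(a):
    return L0 * eta1(a) * a          # (10)

def eta2(a):
    # (8)
    return (L * a / 2
            + (L * eta1(a) * a + L * a) * (L0 * a / 2 + 1) / (1 - p(a))) / (1 - L0 * a)

def eta_high(a, e):
    # shared form of (9) and (6): e = eta2 gives eta3, e = eta3 gives eta4
    c = e(a) * a                     # c = eta_k(a) * a
    I = L0 * c / 2 + 1               # int_0^1 (psi_0(t c) + 1) dt
    return ((L * c / 2) / (1 - L0 * c)
            + (L * eta1(a) * a + L * c) / (1 - L0 * c) * I / (1 - p(a))
            + (L * eta1(a) * a + L * a) / (1 - L0 * a) * I / (1 - p(a))) * e(a)

def eta3(a): return eta_high(a, eta2)
def eta4(a): return eta_high(a, eta3)

def bisect(h, lo, hi, steps=100):
    """Smallest a with h(a) = 1, assuming h increases on (lo, hi)."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if h(mid) >= 1.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rho1 = bisect(eta1, 0.0, 1.0 / L0)   # rho_0 = sup{t : psi_0(t) < 1} = 1/L0
rho2 = bisect(eta2, 0.0, rho1)       # each eta_{i+1} crosses 1 before eta_i
rho3 = bisect(eta3, 0.0, rho2)
rho4 = bisect(eta4, 0.0, rho3)
rho  = min(rho1, rho2, rho3, rho4)
```

Bisecting each η_i on (0, previous radius) keeps all denominators positive, since there η_{i−1} ≤ 1 and p(a) < 1.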
Example 2
([20]). Suppose that the motion of an object in three dimensions is governed by the following system of differential equations:
f_1'(x) - f_1(x) - 1 = 0, \quad f_2'(y) - (e - 1)y - 1 = 0, \quad f_3'(z) - 1 = 0,   (46)
with x, y, z \in D = \bar{U}(0, 1) and f_1(0) = f_2(0) = f_3(0) = 0. Then, the solution of the system is given, for v = (x, y, z)^T, by the function F := (f_1, f_2, f_3) : D \to \mathbb{R}^3 defined by
F(v) = \left( e^x - 1, \; \frac{e - 1}{2} y^2 + y, \; z \right)^T.   (47)
Then, the first Fréchet derivative is given by
F'(v) = \begin{pmatrix} e^x & 0 & 0 \\ 0 & (e - 1)y + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.   (48)
Notice that x^* = (0, 0, 0)^T and F'(x^*) = [F'(x^*)]^{-1} = \mathrm{diag}\{1, 1, 1\}. Then, on using assumptions (18)–(20) and on assuming \psi_0(t) = \psi_0 t, \psi(t) = \psi t, we have that \psi_0 = e - 1 < \psi = e^{1/(e-1)}. Additionally, with the help of the Mathematica 9 software for numerical computation, the radius \rho of convergence is computed in Table 2 and Figure 2. Since method (4) has a larger radius of convergence than method (MMB), method (4) offers a wider domain for the choice of the starting points. As a result, the strategy under consideration is more efficient.
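As a quick cross-check of the first entry of Table 2: for linear ψ_0(t) = a_0 t and ψ(t) = a_1 t, the equation η_1(a) = 1 from (7) reduces to a_0 a/4 + 1/2 + a_1 a/2 = 1 − a_0 a, which solves in closed form as ρ_1 = 1/(2.5 a_0 + a_1). A small sketch of ours, with a_0 = e − 1 and a_1 = e^{1/(e−1)} as in this example:

```python
import math

a0 = math.e - 1.0                     # psi_0 slope in Example 2
a1 = math.exp(1.0 / (math.e - 1.0))   # psi slope in Example 2
rho1 = 1.0 / (2.5 * a0 + a1)          # closed-form solution of eta_1(a) = 1
```

This agrees with the ρ_1 value reported for Method (4) in Table 2.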
Example 3
([4]). Let X = Y = R and D = R. Define F(x) = \sin x; using our assumptions, we obtain F'(x) = \cos x. Moreover, for x^* = 0, on using assumptions (18)–(20) and on assuming \psi_0(t) = \psi_0 t, \psi(t) = \psi t, it is derived that \psi_0 = \psi = 1. Additionally, with the help of the Mathematica 9 software for numerical computation, it is clear, on the basis of Table 3 and Figure 3, that method (4) has a larger radius of convergence than the method (MMR) mentioned in reference [4]. So, we can conclude that the presented method enlarges the radius of the convergence ball.
Example 4
([4]). Consider the function f defined on D = [-1/2, 5/2] by
f(x) = \begin{cases} x^3 \log(x^2) + x^5 - x^4, & \text{if } x \neq 0, \\ 0, & \text{if } x = 0. \end{cases}   (49)
The consecutive derivatives of f are
f'(x) = 3x^2 \log(x^2) + 5x^4 - 4x^3 + 2x^2, \quad f''(x) = 6x \log(x^2) + 20x^3 - 12x^2 + 10x, \quad f'''(x) = 6\log(x^2) + 60x^2 - 24x + 22.   (50)
It can easily be seen that f''' is unbounded on D. Nevertheless, all the assumptions of Theorem 1 for the iterative method (4) are satisfied; hence, applying the convergence results with x^* = 1 and \psi_0(t) = \psi_0 t, \psi(t) = \psi t, it can be calculated that \psi_0 = \psi = 96.6628. Additionally, with the help of the Mathematica 9 software for numerical computation, Table 4 and Figure 4 display the radius \rho of convergence for the discussed method (4) along with the existing multi-step scheme (MMB). When compared, the method discussed here enlarges the radius of the convergence ball relative to the existing one.
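For linear ψ_0(t) = a_0 t and ψ(t) = a_1 t, solving η_1(a) = 1 from (7) in closed form gives ρ_1 = 1/(2.5 a_0 + a_1). As a sanity check (a sketch of ours; the helper name is hypothetical), this reproduces the first Method (4) entries of Tables 3 and 4, where a_0 = a_1:

```python
def rho1_linear(a0, a1):
    # closed-form smallest solution of eta_1(a) = 1 for linear psi's
    return 1.0 / (2.5 * a0 + a1)

rho1_ex3 = rho1_linear(1.0, 1.0)          # Example 3: psi_0 = psi = 1
rho1_ex4 = rho1_linear(96.6628, 96.6628)  # Example 4: psi_0 = psi = 96.6628
```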

4. Conclusions

The major issues in the study of the convergence of iterative methods are the radius of convergence, the selection of the initial point, and the uniqueness of the solution. In this paper, we have addressed these issues for the efficient seventh order method by considering sufficient convergence conditions that are weaker than the Lipschitz and Hölder ones. This means that our analysis is applicable to nonlinear problems where both the Lipschitz and Hölder continuity conditions fail, without applying higher-order derivatives. Further, a convergence theorem for the existence and uniqueness of the solution has been established, followed by its error bounds. Moreover, the comparison of the domains of convergence has also been carried out with the help of numerical examples, where method (4) is found to have a larger convergence domain than the existing methods. As a result, solver (4) outperforms the existing methods in terms of practical application. As a matter of fact, we have extended the applicability of the method by solving some nonlinear equations employing our analytical results.

Author Contributions

M.T.D. supervision, validation, and writing—review and editing of the final version; R.H.A.-O., investigation, reviewing, and validation; A.S. and J.P.J., methodology, software, and writing—original draft; and K.R.P., reviewing and validation of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Behl, R.; Motsa, S.S. Geometric construction of eighth-order optimal families of Ostrowski's method. Sci. World J. 2015, 2015, 614612.
2. Sharma, J.R.; Arora, H. A new family of optimal eighth order methods with dynamics for nonlinear equations. Appl. Math. Comput. 2016, 273, 924–933.
3. Rall, L.B. Computational Solution of Nonlinear Operator Equations; Robert E. Krieger Publishing Company: New York, NY, USA, 1979.
4. Regmi, S.; Argyros, C.I.; Argyros, I.K.; George, S. Extended convergence of a sixth order scheme for solving equations under ω-continuity conditions. Moroc. J. Pure Appl. Anal. 2022, 8, 92–101.
5. Sharma, J.R.; Argyros, I.K. Local convergence of a Newton–Traub composition in Banach spaces. SeMA 2018, 75, 57–68.
6. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1977.
7. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
8. Argyros, I.K.; George, S. Local convergence of two competing third order methods in Banach spaces. Appl. Math. 2016, 41, 341–350.
9. Argyros, I.K.; Khattri, S.K. Local convergence for a family of third order methods in Banach spaces. Punjab Univ. J. Math. 2016, 46, 52–63.
10. Argyros, I.K.; Gonzalez, D.; Khattri, S.K. Local convergence of a one parameter fourth-order Jarratt-type method in Banach spaces. Comment. Math. Univ. Carol. 2016, 57, 289–300.
11. Cordero, A.; Ezquerro, J.A.; Hernández, M.A.; Torregrosa, J.R. On the local convergence of a fifth-order iterative method in Banach spaces. Appl. Math. Comput. 2015, 251, 396–403.
12. Martínez, E.; Singh, S.; Hueso, J.L.; Gupta, D.K. Enlarging the convergence domain in local convergence studies for iterative methods in Banach spaces. Appl. Math. Comput. 2016, 281, 252–265.
13. Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601.
14. Panday, B.; Jaiswal, J.P. On the local convergence of modified Homeier-like method in Banach spaces. Numer. Anal. Appl. 2018, 11, 332–345.
15. Xiao, X.; Yin, H. A new class of methods with higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2015, 264, 300–309.
16. Saxena, A.; Jaiswal, J.P.; Pardasani, K.R. Broadening the convergence domain of seventh-order method satisfying Lipschitz and Hölder conditions. Results Nonlinear Anal. 2022, 5, 473–486.
17. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis; World Scientific Publishing Company: Singapore, 2013.
18. Behl, R.; Argyros, I.K.; Mallawi, F.O. Some high-order convergent iterative procedures for nonlinear systems with local convergence. Mathematics 2021, 9, 1375.
19. Polyanin, A.D.; Manzhirov, A.V. Handbook of Integral Equations; Chapman and Hall/CRC: Boca Raton, FL, USA, 1998.
20. Argyros, I.K.; George, S. Increasing the order of convergence for iterative methods in Banach space under weak conditions. Malaya J. Mat. 2018, 6, 396–401.
Figure 1. Graphical representation of convergence radius.
Figure 2. Graphical representation of convergence radius.
Figure 3. Graphical representation of convergence radius.
Figure 4. Graphical representation of convergence radius.
Table 1. Comparison of convergence radius (Example 1).

Radius      | ρ1         | ρ2         | ρ3         | ρ4          | ρ
Method (4)  | 0.0296296  | 0.0205601  | 0.0175449  | 0.0166341   | 0.0166341
MMB         | 0.0666667  | 0.0292298  | 0.0118907  | 0.00440901  | 0.00440901

Table 2. Comparison of convergence radius (Example 2).

Radius      | ρ1        | ρ2        | ρ3         | ρ4        | ρ
Method (4)  | 0.164331  | 0.135757  | 0.119283   | 0.114151  | 0.114151
MMB         | 0.382692  | 0.198328  | 0.0949498  | 0.040525  | 0.040525

Table 3. Comparison of convergence radius (Example 3).

Radius      | ρ1        | ρ2        | ρ3        | ρ4        | ρ
Method (4)  | 0.285714  | 0.238655  | 0.210099  | 0.201186  | 0.201186
MMR         | 0.44444   | 0.277466  | 0.15771   | -         | 0.15771

Table 4. Comparison of convergence radius (Example 4).

Radius      | ρ1          | ρ2          | ρ3          | ρ4           | ρ
Method (4)  | 0.00295578  | 0.00246894  | 0.00217353  | 0.00208131   | 0.00208131
MMB         | 0.00689682  | 0.00344841  | 0.0015606   | 0.000621105  | 0.000621105