On Extending the Applicability of Iterative Methods for Solving Systems of Nonlinear Equations

1 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Surathkal, Mangalore 575025, India
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematics, University of Houston, Houston, TX 77004, USA
* Author to whom correspondence should be addressed.
Axioms 2024, 13(9), 601; https://doi.org/10.3390/axioms13090601
Submission received: 22 July 2024 / Revised: 30 August 2024 / Accepted: 1 September 2024 / Published: 4 September 2024
(This article belongs to the Special Issue Differential Equations and Related Topics, 2nd Edition)

Abstract: In this paper, we present a technique that improves the applicability of the result obtained by Cordero et al. in 2024 for solving nonlinear equations. Cordero et al. assumed the involved operator to be differentiable at least five times in order to extend a two-step method of order p to order p + 3. We obtain the convergence order of Cordero et al.'s method by assuming only derivatives of the operator up to the third order. Our analysis is carried out in a more general commutative Banach algebra setting and provides a radius of the convergence ball. Finally, we validate our theoretical findings with several numerical examples. The concept of the basin of attraction is also discussed with examples.

1. Introduction

In science and engineering, we commonly encounter problems that require solving systems of nonlinear equations of the form
$$\Psi(x) = 0, \qquad (1)$$
where $\Psi : \Omega \subseteq X \to Y$ is a nonlinear operator. Here, X is a commutative Banach algebra, Y is a Banach space, and Ω is a non-empty open convex subset of X. Many real-life problems, including optimal control, transport theory, neurophysiology, reactors and steering, kinematics, synthesis problems, etc., fit into Equation (1) (for details, see [1,2,3,4,5,6]). However, it is not always possible to obtain an analytical or closed-form solution of Equation (1). Thus, iterative methods are useful for approximating such solutions. Under suitable conditions, the solution $\xi \in \Omega$ of (1) may be obtained as a fixed point of some appropriate iterative function $F : \Omega \to \Omega$ satisfying $x_{n+1} = F(x_n)$, $n = 0, 1, 2, \ldots$, where $x_0$ is an initial guess of $\xi$. The Newton method [7], defined by
$$x_{n+1} = x_n - \Psi'(x_n)^{-1}\Psi(x_n), \qquad n = 0, 1, 2, \ldots, \qquad (2)$$
is probably the most popular and widely used iterative method for finding an approximate solution of (1). Under suitable conditions, (2) has order of convergence two (see [7]). Naturally, we seek algorithms that solve such problems more efficiently. In recent years, numerous authors have developed modified versions of (2) that improve the method's order of convergence (see Definition 2). For example, Behl et al. [8] proposed a method of order seven and Lotfi et al. [9] proposed a method of order six. In both cases, they used Taylor series expansions, which require at least seven-times Fréchet differentiability of the involved operator on $\mathbb{R}^m$. Cordero et al. [10] proved that the method
$$y_n = x_n - \Psi'(x_n)^{-1}\Psi(x_n), \qquad z_n = \vartheta(x_n, y_n),$$
$$x_{n+1} = z_n - \left[\tfrac{7}{2}I - \Psi'(x_n)^{-1}\Psi'(y_n)\left(4I - \tfrac{3}{2}\Psi'(x_n)^{-1}\Psi'(y_n)\right)\right]\Psi'(x_n)^{-1}\Psi(z_n), \qquad (3)$$
has order of convergence $p + 3$, provided $\vartheta$ is an iterative map of convergence order p. Here, I is the identity operator. Behl et al.'s and Lotfi et al.'s works are particular cases of Cordero et al.'s work in [10]. However, the technique used in [10] requires higher-order Fréchet differentiability of the involved operator, and the analysis was conducted in Euclidean spaces. For example, consider the function $\gamma : [-2, 2] \to \mathbb{R}$ defined by
$$\gamma(s) = \begin{cases} 0 & \text{if } s = 0,\\[2pt] \frac{1}{24} s^4 \log s^2 + s^7 - s^5 & \text{otherwise}. \end{cases}$$
We can observe that $s = 1$ is a simple solution of $\gamma(s) = 0$ and that $\gamma^{(4)}(s) = \log s^2 + 840 s^3 - 120 s + \frac{25}{6}$ is unbounded on $[-2, 2]$. Hence, the function γ is not differentiable more than three times on $[-2, 2]$. Thus, the analysis given in [8,9,10] does not guarantee the convergence of method (3) to the solution $s = 1$ of $\gamma(s) = 0$. To reduce the requirement of higher-order derivatives in the convergence analysis, several authors have studied similar results [11,12,13]. The significance of our work is as follows:
(i)
We obtain the convergence order p + 3 of method (3) without using the Taylor series expansion.
(ii)
We use conditions on the operator and its Fréchet derivatives up to the third order only.
(iii)
We provide the radius of the convergence ball, which was not given in earlier studies.
(iv)
The analysis in [10] was conducted in the Euclidean spaces, whereas our study is in a more general commutative Banach algebra setting.
We organised this paper as follows. In Section 2, we discuss some useful concepts used in our study. The convergence analysis of method (3) is presented in Section 3. Several numerical examples are given in Section 4 to validate our results, and the method’s basin of attraction is presented in Section 5. Finally, we provide a conclusion based on the theoretical and numerical experiments in Section 6.

2. Preliminary Concepts

Let $\mathcal{O}(\eta, d) = \{x \in X : \|x - \eta\| < d\}$ and $\mathcal{O}[\eta, d] = \{x \in X : \|x - \eta\| \le d\}$, where $d > 0$ and $\eta \in X$.
Lemma 1
(cf. [7], Theorem 1.1.12). Let T be a bounded linear operator defined on X with $\|T\| \le \rho < 1$. Then, the operator $I - T$ is invertible, and $\|(I - T)^{-1}\| \le \frac{1}{1 - \rho}$.
Definition 1
(See [14]). The efficiency index used for comparing iterative methods is defined by $E_f = p^{1/\theta}$, where p is the order of convergence and θ is the total number of function and derivative evaluations per iteration.
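As a quick illustration (ours, not the authors'), the efficiency indices reported later in Table 3 can be reproduced from this definition; the evaluation counts θ used below are our assumptions, inferred only from the tabulated values.

```python
# Illustrative sketch: efficiency index E_f = p**(1/theta).
# The theta values are assumptions chosen to be consistent with Table 3.
methods = {
    "z_n as in (18)": (6, 4),   # order p = 6, assumed theta = 4
    "z_n as in (19)": (7, 5),   # order p = 7, assumed theta = 5
    "z_n as in (20)": (8, 5),   # order p = 8, assumed theta = 5
}
for name, (p, theta) in methods.items():
    print(f"{name}: E_f = {p ** (1.0 / theta):.6f}")
```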
Definition 2
(See [15,16]). Let $\xi \in X$ and $\alpha_n \in X$ for $n = 0, 1, 2, 3, \ldots$. Then, the sequence $\{\alpha_n\}$ is said to converge to ξ with an R-order of at least p if the following conditions hold:
(i) 
$\lim_{n \to \infty} \|\alpha_n - \xi\| = 0$,
(ii) 
There exist $\tau > 0$, $n_0 \in \mathbb{N}$, and $p \in [0, \infty)$ such that $\|\alpha_{n+1} - \xi\| \le \tau \|\alpha_n - \xi\|^p$ for all $n \ge n_0$.
Definition 3
(See [16]). Let ξ be a solution of Equation (1) and suppose that $x_{n-2}$, $x_{n-1}$, $x_n$ and $x_{n+1}$ are four consecutive iterates near ξ. Then, the approximated computational order of convergence (ACOC) is defined by
$$\mathrm{ACOC} = \frac{\ln\big(\|x_{n+1} - x_n\| / \|x_n - x_{n-1}\|\big)}{\ln\big(\|x_n - x_{n-1}\| / \|x_{n-1} - x_{n-2}\|\big)}.$$
Remark 1.
Note that the ACOC is not a good measure if there is an oscillating behaviour of the approximations or slow convergence in the initial stage [17].
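The ACOC is straightforward to estimate from stored iterates. The following is a minimal Python sketch (our illustration; the function name and the use of the Euclidean norm are our own choices):

```python
import numpy as np

def acoc(iterates):
    """Approximated computational order of convergence (Definition 3),
    computed from the last four consecutive iterates in `iterates`."""
    x_nm2, x_nm1, x_n, x_np1 = (np.asarray(v, dtype=float) for v in iterates[-4:])
    num = np.log(np.linalg.norm(x_np1 - x_n) / np.linalg.norm(x_n - x_nm1))
    den = np.log(np.linalg.norm(x_n - x_nm1) / np.linalg.norm(x_nm1 - x_nm2))
    return num / den
```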
We use the following conditions on the operator Ψ in our study:
(C1)
$\Psi'(\xi)^{-1}$ is a bounded linear operator from Y into X, where ξ is a simple solution of (1);
(C2)
$\|\Psi'(\xi)^{-1}\big(\Psi'(x) - \Psi'(\xi)\big)\| \le L\|x - \xi\|$, for some $L > 0$ and all $x \in \Omega$;
(C3)
$\|\Psi'(\xi)^{-1}\big(\Psi''(x) - \Psi''(\xi)\big)\| \le M\|x - \xi\|$, for some $M > 0$ and all $x \in \Omega$;
(C4)
$\|\Psi'(\xi)^{-1}\big(\Psi'''(x) - \Psi'''(\xi)\big)\| \le K\|x - \xi\|$, for some $K > 0$ and all $x \in \Omega$;
(C5)
$\|\Psi'(\xi)^{-1}\Psi''(x)\| \le P$, for some $P > 0$ and all $x \in \Omega$;
(C6)
$\|\Psi'(\xi)^{-1}\Psi'''(x)\| \le Q$, for some $Q > 0$ and all $x \in \Omega$.
By condition (C2), we have
$$\|\Psi'(\xi)^{-1}\Psi'(x)\| \le 1 + L\|x - \xi\|, \quad \text{for all } x \in \Omega. \qquad (4)$$
The next tool, known as the Mean Value Theorem (MVT) in integral form [18], is used very frequently in our study:
$$\Psi(x) - \Psi(y) = \int_0^1 \Psi'\big(y + s(x - y)\big)\,ds\,(x - y), \quad x, y \in \Omega, \qquad (5)$$
$$\Psi'(x) - \Psi'(y) = \int_0^1 \Psi''\big(y + t(x - y)\big)\,dt\,(x - y), \quad x, y \in \Omega, \qquad (6)$$
$$\Psi''(x) - \Psi''(y) = \int_0^1 \Psi'''\big(y + w(x - y)\big)\,dw\,(x - y), \quad x, y \in \Omega. \qquad (7)$$

3. Main Results

First, we introduce some functions and parameters used in this analysis. Define the functions $\zeta, g : \left[0, \tfrac{1}{L}\right) \to \mathbb{R}$ by
ζ ( s ) = 1 1 L s P τ 2 2 s p 3 + K τ 64 ( 4 + τ s p 1 ) + Q τ 2 4 s p 2 + M P τ 3 ( 1 L s ) 2 + 1 ( 1 L s ) 3 ( 2 + L s ) M P τ 8 ( 2 + τ s p 1 ) + P 2 τ 2 4 s p 2 + 3 Q L τ 8 ( 4 L s ) + 1 ( 1 L s ) 4 [ 3 K τ 32 ( 4 L 2 s 2 ) ( 2 + L τ s p ) + 3 L P 2 τ 8 ( 8 + L s ) + Q L τ 16 ( 2 + L s ) 2 × ( 2 + τ s p 1 ) + 3 Q L τ 8 ( 2 + L s ) ( 2 + L τ s p ) ] + ( 2 + L s ) 2 ( 1 L s ) 5 [ K τ 96 ( 5 2 L s ) ( 2 + L τ s p ) + 3 L P 2 τ 16 ( 2 + τ s p 1 ) ] + 9 M P τ 32 ( 1 L s ) 6 ( 4 L 2 s 2 ) ( 2 + L s ) ( 2 + L τ s p ) ,
and $g(s) = \zeta(s)\,s^{p+2} - 1$. Clearly, ζ and g are continuous on $\left[0, \tfrac{1}{L}\right)$. We observe that each of the terms in $\zeta(s)$ is non-negative and non-decreasing on $\left[0, \tfrac{1}{L}\right)$; hence, ζ is non-decreasing on $\left[0, \tfrac{1}{L}\right)$. Therefore, g is also non-decreasing ($g'(s) = (p+2)s^{p+1}\zeta(s) + \zeta'(s)s^{p+2} \ge 0$ for all s), with $g(0) = -1$ and $\lim_{s \to \frac{1}{L}^-} g(s) = +\infty$. By the Intermediate Value Theorem, there exists a smallest $r \in \left(0, \tfrac{1}{L}\right)$ such that $g(r) = 0$. Observe that $0 \le \zeta(s)\,s^{p+2} < 1$ for all $s \in (0, r)$.
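Once the constants L, M, K, P, Q, the order p, and τ are available, the radius r can be located numerically. The sketch below (our illustration, with ζ supplied as a user-defined callable implementing the function defined above) finds the smallest root of g by bisection, using the facts that g(0) = −1 and g(s) → +∞ as s → 1/L:

```python
def radius_of_convergence(zeta, L, p, tol=1e-12, iters=200):
    """Sketch: smallest r in (0, 1/L) with g(r) = zeta(r)*r**(p+2) - 1 = 0.
    `zeta` is a user-supplied callable implementing the function above.
    Since g is non-decreasing, bisection on a bracket locates the smallest root."""
    g = lambda s: zeta(s) * s ** (p + 2) - 1.0
    a = 0.0                           # g(0) = -1 < 0 (limit value from the text)
    b = (1.0 / L) * (1.0 - 1e-12)     # g(s) -> +infinity as s -> 1/L
    for _ in range(iters):
        m = 0.5 * (a + b)
        if g(m) < 0.0:
            a = m
        else:
            b = m
        if b - a < tol:
            break
    return 0.5 * (a + b)
```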
Theorem 1.
Assume that the conditions (C1)–(C6) hold. Then, the sequence $(x_n)_{n \in \mathbb{N}}$ generated by (3) with an initial value $x_0 \in \mathcal{O}(\xi, r) \setminus \{\xi\}$ is well defined, and $x_n \in \mathcal{O}[\xi, r]$ for all $n \in \mathbb{N} \cup \{0\}$. Moreover, the following holds:
(i) $\|x_{n+1} - \xi\| \le \zeta(r)\,\|x_n - \xi\|^{p+3}$, for $n = 0, 1, 2, \ldots$; (9)
(ii) $\lim_{n \to \infty} x_n = \xi$.
Proof. 
Using the principle of mathematical induction, we prove that $x_n \in \mathcal{O}[\xi, r]$ for all $n \in \mathbb{N}$. First, we verify the result for $n = 1$. Using (3) and (5) with $x = z_0$ and $y = \xi$, we write $x_1 - \xi$ as follows:
x 1 ξ = z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) + Ψ ( x 0 ) 1 Ψ ( y 0 ) I Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 Ψ ( y 0 ) I 2 Ψ ( x 0 ) 1 Ψ ( z 0 ) = Ψ ( x 0 ) 1 0 1 Ψ ( x 0 ) Ψ ξ + t ( z 0 ξ ) d t ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( y 0 ) Ψ ( x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 Ψ ( y 0 ) Ψ ( x 0 ) 2 Ψ ( x 0 ) 1 Ψ ( z 0 ) .
Using (6) with x = x 0 , y = ξ + t ( z 0 ξ ) in the first term, and x = y 0 , y = x 0 in the second and third terms above, we obtain
x 1 ξ = Ψ ( x 0 ) 1 0 1 0 1 Ψ ξ + s ( x 0 ξ ) + ( 1 s ) t ( z 0 ξ ) × x 0 ξ t ( z 0 ξ ) d s d t ( z 0 ξ ) + Ψ ( x 0 ) 1 0 1 Ψ x 0 + s ( y 0 x 0 ) d s ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 0 1 Ψ x 0 + s ( y 0 x 0 ) d s ( y 0 x 0 ) 2 Ψ ( x 0 ) 1 Ψ ( z 0 ) = Ψ ( x 0 ) 1 0 1 0 1 Ψ ξ + s ( x 0 ξ ) + ( 1 s ) t ( z 0 ξ ) d s d t ( x 0 ξ ) ( z 0 ξ ) Ψ ( x 0 ) 1 0 1 0 1 Ψ ξ + s ( x 0 ξ ) + ( 1 s ) t ( z 0 ξ ) t d s d t ( z 0 ξ ) 2 + Ψ ( x 0 ) 1 0 1 Ψ x 0 + s ( y 0 x 0 ) d s ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 0 1 Ψ x 0 + s ( y 0 x 0 ) d s ( y 0 x 0 ) 2 Ψ ( x 0 ) 1 Ψ ( z 0 ) .
Adding and subtracting Ψ ( ξ ) in the integrands above (first, third, and fourth terms), we obtain
x 1 ξ = Ψ ( x 0 ) 1 0 1 0 1 Ψ ξ + s ( x 0 ξ ) + ( 1 s ) t ( z 0 ξ ) Ψ ( ξ ) d s d t × ( x 0 ξ ) ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) ( z 0 ξ ) Ψ ( x 0 ) 1 0 1 0 1 Ψ ξ + s ( x 0 ξ ) + ( 1 s ) t ( z 0 ξ ) t d s d t ( z 0 ξ ) 2 + Ψ ( x 0 ) 1 0 1 Ψ x 0 + s ( y 0 x 0 ) Ψ ( ξ ) d s ( y 0 x 0 ) × Ψ ( x 0 ) 1 Ψ ( z 0 ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 0 1 Ψ x 0 + s ( y 0 x 0 ) d s ( y 0 x 0 ) Ψ ( x 0 ) 1 × 0 1 Ψ x 0 + s ( y 0 x 0 ) Ψ ( ξ ) d s ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 0 1 Ψ x 0 + s ( y 0 x 0 ) d s ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) × ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) .
Using (7) (by taking suitable x and y) in the first and fourth terms in the above equation, we obtain
x 1 ξ = I 1 + I 2 + Ψ ( x 0 ) 1 0 1 0 1 0 1 Ψ ξ + w s ( x 0 ξ ) + ( 1 s ) t ( z 0 ξ ) × s ( x 0 ξ ) + ( 1 s ) t ( z 0 ξ ) d w d s d t ( x 0 ξ ) ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) ( z 0 ξ ) + Ψ ( x 0 ) 1 0 1 0 1 Ψ ξ + w x 0 ξ + s ( y 0 x 0 ) [ x 0 ξ + s ( y 0 x 0 ) ] × d w d s ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 0 1 Ψ x 0 + s ( y 0 x 0 ) d s ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) × ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) ,
where
I 1 = Ψ ( x 0 ) 1 0 1 0 1 Ψ ξ + s ( x 0 ξ ) + ( 1 s ) t ( z 0 ξ ) t d s d t ( z 0 ξ ) 2
and
I 2 = 3 2 Ψ ( x 0 ) 1 0 1 Ψ x 0 + s ( y 0 x 0 ) d s ( y 0 x 0 ) Ψ ( x 0 ) 1 × 0 1 Ψ x 0 + s ( y 0 x 0 ) Ψ ( ξ ) d s ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) .
For convenience, we denote a ( t , s , w ) = ξ + w s ( x 0 ξ ) + ( 1 s ) t ( z 0 ξ ) , and b ( s , w ) = ξ + w x 0 ξ + s ( y 0 x 0 ) .
By adding and subtracting Ψ ( ξ ) in the integrand of the last term of Equation (10), we have
x 1 ξ = I 1 + I 2 + Ψ ( x 0 ) 1 0 1 0 1 0 1 Ψ a ( t , s , w ) s d w d s d t ( x 0 ξ ) 2 ( z 0 ξ ) + Ψ ( x 0 ) 1 0 1 0 1 0 1 Ψ a ( t , s , w ) ( 1 s ) t d w d s d t ( z 0 ξ ) ( x 0 ξ ) ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) ( z 0 ξ ) + Ψ ( x 0 ) 1 0 1 0 1 Ψ b ( s , w ) d w d s ( x 0 ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) + Ψ ( x 0 ) 1 0 1 0 1 Ψ b ( s , w ) s d w d s ( y 0 x 0 ) 2 Ψ ( x 0 ) 1 Ψ ( z 0 ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 0 1 Ψ x 0 + s ( y 0 x 0 ) Ψ ( ξ ) d s ( y 0 x 0 ) Ψ ( x 0 ) 1 × Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) .
Again, by adding and subtracting Ψ ( ξ ) in the integrand in the third, sixth, and seventh terms above, we obtain
x 1 ξ = j = 1 7 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) 2 ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 Ψ ( x 0 ) 1 Ψ ( z 0 ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) ,
where
I 3 = Ψ ( x 0 ) 1 0 1 0 1 0 1 Ψ a ( t , s , w ) Ψ ( ξ ) s d w d s d t ( x 0 ξ ) 2 ( z 0 ξ ) I 4 = Ψ ( x 0 ) 1 0 1 0 1 0 1 Ψ a ( t , s , w ) ( 1 s ) t d w d s d t ( z 0 ξ ) ( x 0 ξ ) ( z 0 ξ ) , I 5 = Ψ ( x 0 ) 1 0 1 0 1 Ψ b ( s , w ) Ψ ( ξ ) d w d s ( x 0 ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) , I 6 = Ψ ( x 0 ) 1 0 1 0 1 Ψ b ( s , w ) Ψ ( ξ ) s d w d s ( y 0 x 0 ) 2 Ψ ( x 0 ) 1 Ψ ( z 0 ) ,
and
I 7 = 3 2 Ψ ( x 0 ) 1 0 1 Ψ x 0 + s ( y 0 x 0 ) Ψ ( ξ ) d s ( y 0 x 0 ) Ψ ( x 0 ) 1 × Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) .
By adding Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) ( z 0 ξ ) in the third and subtracting the same in the sixth term, and combing the fourth and fifth terms in (11), we obtain
x 1 ξ = j = 1 7 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) 2 ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) [ x 0 ξ + y 0 x 0 ] ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) x 0 ξ + y 0 x 0 1 2 ( y 0 x 0 ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) [ z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) ] 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) = j = 1 7 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) 2 ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 Ψ ( x 0 ) 1 Ψ ( z 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) .
By using the relation y 0 = x 0 Ψ ( x 0 ) 1 Ψ ( x 0 ) in the third term, and by adding and subtracting the term 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 ( z 0 ξ ) in the above equation, we obtain
x 1 ξ = j = 1 7 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) 2 ( y 0 x 0 ) 2 ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) x 0 ξ Ψ ( x 0 ) 1 Ψ ( x 0 ) ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) .
By applying (5) (with suitable x and y) in the third and in the sixth term, and with the relation α 2 β 2 = ( α β ) ( α + β ) , α , β X , we obtain
x 1 ξ = j = 1 7 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) [ x 0 ξ ( y 0 x 0 ) ] ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 0 1 [ Ψ ( x 0 ) Ψ ( ξ + t ( x 0 ξ ) ) ] d t ( x 0 ξ ) ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 0 1 [ Ψ ( x 0 ) Ψ ( ξ + t ( z 0 ξ ) ) ] d t ( z 0 ξ ) 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) .
By using (6) (with suitable x and y) in third and sixth term above, we obtain
x 1 ξ = j = 1 7 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) [ x 0 ξ ( y 0 x 0 ) ] ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 0 1 0 1 Ψ c ( t , s ) ( 1 t ) d s d t ( x 0 ξ ) 2 ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 0 1 0 1 Ψ d ( t , s ) × x 0 ξ t ( z 0 ξ ) d s d t ( z 0 ξ ) 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) ,
where c ( t , s ) = ξ + t + ( 1 t ) s ( x 0 ξ ) and d ( t , s ) = ξ + s ( x 0 ξ ) + ( 1 s ) t ( z 0 ξ ) . By adding and subtracting Ψ ( ξ ) in the integrand of the third and sixth term above, we obtain
x 1 ξ = j = 1 7 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) [ x 0 ξ ( y 0 x 0 ) ] ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 0 1 0 1 Ψ c ( t , s ) Ψ ( ξ ) ( 1 t ) d s d t ( x 0 ξ ) 2 × ( z 0 ξ ) + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) 2 ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 0 1 0 1 Ψ d ( t , s ) Ψ ( ξ ) d s d t × ( x 0 ξ ) ( z 0 ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 0 1 0 1 Ψ d ( t , s ) t d s d t ( z 0 ξ ) 2 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) = j = 1 13 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) 2 ( z 0 ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) ( z 0 ξ ) 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) ,
where
I 8 = 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) [ x 0 ξ ( y 0 x 0 ) ] ( z 0 ξ ) I 9 = Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 0 1 0 1 Ψ c ( t , s ) Ψ ( ξ ) ( 1 t ) d s d t × ( x 0 ξ ) 2 ( z 0 ξ ) ,
I 10 = Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) , I 11 = 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) I 12 = Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 0 1 0 1 Ψ d ( t , s ) Ψ ( ξ ) d s d t × ( x 0 ξ ) ( z 0 ξ ) ,
and
I 13 = Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 0 1 0 1 Ψ d ( t , s ) t d s d t ( z 0 ξ ) 2 .
By adding and subtracting Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) ( z 0 ξ ) , we obtain
x 1 ξ = j = 1 13 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) 2 ( z 0 ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) [ x 0 ξ + y 0 x 0 ] ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) ( z 0 ξ ) 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) = j = 1 14 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) 2 ( z 0 ξ ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) ( z 0 ξ ) 1 + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) = j = 1 14 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) 2 ( z 0 ξ ) 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( z 0 ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) [ z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) ] ,
where I 14 = Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) ( z 0 ξ ) . By using the commutative properties in X, we have
x 1 ξ = j = 1 14 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) × ( x 0 ξ ) 2 ( z 0 ξ ) ( y 0 x 0 ) 2 Ψ ( x 0 ) 1 Ψ ( z 0 ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) [ z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) ] .
Again, by adding and subtracting 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 ( z 0 ξ ) , we obtain
x 1 ξ = j = 1 14 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( x 0 ξ ) 2 ( y 0 x 0 ) 2 ( z 0 ξ ) + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) [ z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) ] .
By using the identity α 2 β 2 = ( α β ) ( α + β ) , we obtain
x 1 ξ = j = 1 14 I j + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) [ x 0 ξ ( y 0 x 0 ) ] ( z 0 ξ ) + 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) + Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) [ z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) ] = j = 1 17 I j ,
where
I 15 = 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 ξ ) x 0 ξ ( y 0 x 0 ) ( z 0 ξ ) , I 16 = 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) 2 z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) ,
and
I 17 = Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) Ψ ( x 0 ) 1 Ψ ( ξ ) ( y 0 x 0 ) z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) .
Before moving further, we find some useful estimates. By ( C 2 ) , we have
$$\|\Psi'(\xi)^{-1}\big(\Psi'(x) - \Psi'(\xi)\big)\| \le L\|x - \xi\| < 1, \quad \text{for all } x \in \mathcal{O}\left(\xi, \tfrac{1}{L}\right).$$
By Lemma 1 (with $T = I - \Psi'(\xi)^{-1}\Psi'(x)$ and $\rho = L\|x - \xi\|$), we obtain
$$\|\Psi'(x)^{-1}\Psi'(\xi)\| \le \frac{1}{1 - L\|x - \xi\|}, \quad x \in \mathcal{O}\left(\xi, \tfrac{1}{L}\right). \qquad (12)$$
Since $x_0 \in \mathcal{O}\left(\xi, \tfrac{1}{L}\right)$, and by using (5), we have
$$\Psi'(x_0)^{-1}\Psi(z_0) = \Psi'(x_0)^{-1}\int_0^1 \Psi'\big(\xi + t(z_0 - \xi)\big)\,dt\,(z_0 - \xi) = \Psi'(x_0)^{-1}\Psi'(\xi)\int_0^1 \Psi'(\xi)^{-1}\Psi'\big(\xi + t(z_0 - \xi)\big)\,dt\,(z_0 - \xi),$$
$$y_0 - \xi = x_0 - \xi - \Psi'(x_0)^{-1}\Psi(x_0) = \Psi'(x_0)^{-1}\int_0^1 \big[\Psi'(x_0) - \Psi'\big(\xi + t(x_0 - \xi)\big)\big]\,dt\,(x_0 - \xi) = \Psi'(x_0)^{-1}\Psi'(\xi)\int_0^1 \Psi'(\xi)^{-1}\big[\Psi'(x_0) - \Psi'\big(\xi + t(x_0 - \xi)\big)\big]\,dt\,(x_0 - \xi),$$
and
$$z_0 - \xi - \Psi'(x_0)^{-1}\Psi(z_0) = \Psi'(x_0)^{-1}\int_0^1 \big[\Psi'(x_0) - \Psi'\big(\xi + t(z_0 - \xi)\big)\big]\,dt\,(z_0 - \xi) = \Psi'(x_0)^{-1}\Psi'(\xi)\int_0^1 \Psi'(\xi)^{-1}\big[\Psi'(x_0) - \Psi'\big(\xi + t(z_0 - \xi)\big)\big]\,dt\,(z_0 - \xi).$$
By using ( C 2 ) , (4) and (12), we obtain
$$\|\Psi'(x_0)^{-1}\Psi(z_0)\| \le \frac{2 + L\|z_0 - \xi\|}{2(1 - L\|x_0 - \xi\|)}\,\|z_0 - \xi\|, \qquad (13)$$
$$\|y_0 - \xi\| \le \frac{3L\|x_0 - \xi\|^2}{2(1 - L\|x_0 - \xi\|)}, \qquad (14)$$
and
$$\|z_0 - \xi - \Psi'(x_0)^{-1}\Psi(z_0)\| \le \frac{L\big(2\|x_0 - \xi\| + \|z_0 - \xi\|\big)}{2(1 - L\|x_0 - \xi\|)}\,\|z_0 - \xi\|. \qquad (15)$$
By (14), we obtain
$$\|y_0 - x_0\| \le \frac{2 + L\|x_0 - \xi\|}{2(1 - L\|x_0 - \xi\|)}\,\|x_0 - \xi\|. \qquad (16)$$
Since ϑ in (3) is of order p (without loss of generality, we assume that $n_0 = 0$ and $\alpha_{n+1} = x_{n+1} = z_n$ in Definition 2), we have
$$\|z_0 - \xi\| \le \tau\,\|x_0 - \xi\|^p. \qquad (17)$$
We shall find an upper bound of $I_j$ for each $j = 1, \ldots, 17$. Using (C2) and (12), we obtain
I 1 Ψ ( x 0 ) 1 Ψ ( ξ ) 0 1 0 1 Ψ ( ξ ) 1 Ψ ξ + s ( x 0 ξ ) + ( 1 s ) t ( z 0 ξ ) × t d s d t z 0 ξ 2 P z 0 ξ 2 2 ( 1 L x 0 ξ ) .
By using ( C 3 ) , ( C 5 ) , (12), (13) and (16), we find estimates for I 2 and I 7 :
I 2 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) 0 1 Ψ ( ξ ) 1 Ψ x 0 + s ( y 0 x 0 ) d s y 0 x 0 × Ψ ( x 0 ) 1 Ψ ( ξ ) 0 1 Ψ ( ξ ) 1 Ψ x 0 + s ( y 0 x 0 ) Ψ ( ξ ) d s × y 0 x 0 Ψ ( x 0 ) 1 Ψ ( z 0 ) 9 M P ( 4 L 2 x 0 ξ 2 ) ( 2 + L x 0 ξ ) ( 2 + L z 0 ξ ) 64 ( 1 L x 0 ξ ) 6 z 0 ξ x 0 ξ 3
and
I 7 3 2 Ψ ( x 0 ) 1 Ψ ( ξ ) 0 1 Ψ ( ξ ) 1 Ψ x 0 + s ( y 0 x 0 ) Ψ ( ξ ) d s × y 0 x 0 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) y 0 x 0 Ψ ( x 0 ) 1 Ψ ( z 0 ) 9 M P ( 4 L 2 x 0 ξ 2 ) ( 2 + L x 0 ξ ) ( 2 + L z 0 ξ ) 64 ( 1 L x 0 ξ ) 6 z 0 ξ x 0 ξ 3 .
By utilising ( C 4 ) and (12), we obtain
I 3 Ψ ( x 0 ) 1 Ψ ( ξ ) 0 1 0 1 0 1 Ψ ( ξ ) 1 Ψ a ( t , s , w ) Ψ ( ξ ) s d w d s d t × x 0 ξ 2 z 0 ξ K ( 4 x 0 ξ + z 0 ξ ) 64 ( 1 L x 0 ξ ) z 0 ξ x 0 ξ 2 .
By using ( C 6 ) and (12), we have
I 4 Ψ ( x 0 ) 1 Ψ ( ξ ) 0 1 0 1 0 1 Ψ ( ξ ) 1 Ψ a ( t , s , w ) ( 1 s ) t d w d s d t × z 0 ξ x 0 ξ z 0 ξ Q x 0 ξ 4 ( 1 L x 0 ξ ) z 0 ξ 2 .
Similarly, by using ( C 4 ) , (12), (13) and (16), we obtain
I 5 Ψ ( x 0 ) 1 Ψ ( ξ ) 0 1 0 1 Ψ ( ξ ) 1 Ψ b ( s , w ) Ψ ( ξ ) d w d s × x 0 ξ y 0 x 0 Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 K ( 4 L 2 x 0 ξ 2 ) ( 2 + L z 0 ξ ) 32 ( 1 L x 0 ξ ) 4 z 0 ξ x 0 ξ 3
and
I 6 Ψ ( x 0 ) 1 Ψ ( ξ ) 0 1 0 1 Ψ ( ξ ) 1 Ψ b ( s , w ) Ψ ( ξ ) s d w d s × y 0 x 0 2 Ψ ( x 0 ) 1 Ψ ( z 0 ) K ( 2 + L x 0 ξ ) 2 ( 5 2 L x 0 ξ ) ( 2 + L z 0 ξ ) 96 ( 1 L x 0 ξ ) 5 z 0 ξ x 0 ξ 3 .
By ( C 6 ) , (12), (14) and (16), we obtain
I 8 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) y 0 ξ x 0 ξ ( y 0 x 0 ) z 0 ξ 3 Q L ( 4 L x 0 ξ ) 8 ( 1 L x 0 ξ ) 3 z 0 ξ x 0 ξ 3 .
From ( C 3 ) , ( C 5 ) and (12), we obtain
I 9 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) x 0 ξ 2 z 0 ξ × 0 1 0 1 Ψ ( ξ ) 1 Ψ c ( t , s ) Ψ ( ξ ) ( 1 t ) d s d t M P 3 ( 1 L x 0 ξ ) 2 z 0 ξ x 0 ξ 3 .
By using ( C 6 ) , (12), (13), (14), and (16), we have
I 10 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) y 0 ξ y 0 x 0 Ψ ( x 0 ) 1 Ψ ( z 0 ) 3 L Q ( 2 + L x 0 ξ ) ( 2 + L z 0 ξ ) 8 ( 1 L x 0 ξ ) 4 z 0 ξ x 0 ξ 3 .
Similarly, by ( C 2 ) , ( C 6 ) , (12), and (16),
I 11 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) y 0 x 0 2 Ψ ( x 0 ) 1 Ψ ( ξ ) × 0 1 Ψ ( ξ ) 1 Ψ ( x 0 ) Ψ ξ + t ( z 0 ξ ) d t z 0 ξ Q L ( 2 + L x 0 ξ ) 2 ( 2 x 0 ξ + z 0 ξ ) 16 ( 1 L x 0 ξ ) 4 z 0 ξ x 0 ξ 2 .
Using ( C 3 ) , ( C 5 ) , (12), and (16), we obtain
I 12 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) y 0 x 0 Ψ ( x 0 ) 1 Ψ ( ξ ) x 0 ξ × z 0 ξ 0 1 0 1 Ψ ( ξ ) 1 Ψ d ( t , s ) Ψ ( ξ ) d s d t M P ( 2 + L x 0 ξ ) ( 2 x 0 ξ + z 0 ξ ) 8 ( 1 L x 0 ξ ) 3 z 0 ξ x 0 ξ 2 .
From ( C 5 ) , (12), and (16), we get
I 13 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) y 0 x 0 Ψ ( x 0 ) 1 Ψ ( ξ ) × 0 1 0 1 Ψ ( ξ ) 1 Ψ d ( t , s ) t d s d t z 0 ξ 2 P 2 ( 2 + L x 0 ξ ) 4 ( 1 L x 0 ξ ) 3 z 0 ξ 2 x 0 ξ .
Applying ( C 5 ) , (12), (14), and (16), we obtain
I 14 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) y 0 x 0 Ψ ( x 0 ) 1 Ψ ( ξ ) × Ψ ( ξ ) 1 Ψ ( ξ ) y 0 ξ z 0 ξ 3 L P 2 ( 2 + L x 0 ξ ) 4 ( 1 L x 0 ξ ) 4 z 0 ξ x 0 ξ 3 ,
and
I 15 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) × y 0 ξ x 0 ξ ( y 0 x 0 ) z 0 ξ 3 L P 2 ( 4 L x 0 ξ ) 8 ( 1 L x 0 ξ ) 4 z 0 ξ x 0 ξ 3 .
Similarly, using ( C 5 ) , (12), (15), and (16), we obtain
I 16 1 2 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) × y 0 x 0 2 z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) L P 2 ( 2 + L x 0 ξ ) 2 ( 2 x 0 ξ + z 0 ξ ) 16 ( 1 L x 0 ξ ) 5 z 0 ξ x 0 ξ 2
and
I 17 Ψ ( x 0 ) 1 Ψ ( ξ ) Ψ ( ξ ) 1 Ψ ( ξ ) y 0 x 0 Ψ ( x 0 ) 1 Ψ ( ξ ) × Ψ ( ξ ) 1 Ψ ( ξ ) y 0 x 0 z 0 ξ Ψ ( x 0 ) 1 Ψ ( z 0 ) L P 2 ( 2 + L x 0 ξ ) 2 ( 2 x 0 ξ + z 0 ξ ) 8 ( 1 L x 0 ξ ) 5 z 0 ξ x 0 ξ 2 .
By using (17) in all the above upper bounds for $I_j$, $j = 1, 2, \ldots, 17$, and by the triangle inequality, we obtain
x 1 ξ j = 1 17 I j x 0 ξ p + 3 1 L x 0 ξ [ P τ 2 2 x 0 ξ p 3 + K τ 64 ( 4 + τ x 0 ξ p 1 ) + Q τ 2 4 x 0 ξ p 2 ] + M P τ 3 ( 1 L x 0 ξ ) 2 x 0 ξ p + 3 + x 0 ξ p + 3 ( 1 L x 0 ξ ) 3 × [ ( 2 + L x 0 ξ ) M P τ 8 ( 2 + τ x 0 ξ p 1 ) + P 2 τ 2 4 x 0 ξ p 2 + 3 Q L τ 8 ( 4 L x 0 ξ ) ] + x 0 ξ p + 3 ( 1 L x 0 ξ ) 4 [ 3 K τ 32 ( 4 L 2 x 0 ξ 2 ) ( 2 + L τ x 0 ξ p ) + 3 L P 2 τ 8 ( 8 + L x 0 ξ ) + Q L τ 16 ( 2 + L x 0 ξ ) 2 ( 2 + τ x 0 ξ p 1 ) + 3 Q L τ 8 ( 2 + L x 0 ξ ) ( 2 + L τ x 0 ξ p ) ] + x 0 ξ p + 3 ( 1 L x 0 ξ ) 5 ( 2 + L x 0 ξ ) 2 [ K τ 96 ( 5 2 L x 0 ξ ) ( 2 + L τ x 0 ξ p ) + 3 L P 2 τ 16 ( 2 + τ x 0 ξ p 1 ) ] + 9 M P τ x 0 ξ p + 3 32 ( 1 L x 0 ξ ) 6 ( 4 L 2 x 0 ξ 2 ) × ( 2 + L x 0 ξ ) ( 2 + L τ x 0 ξ p ) = ζ ( x 0 ξ ) x 0 ξ p + 3 .
Since $\|x_0 - \xi\| < r$, we have $\zeta(\|x_0 - \xi\|)\,\|x_0 - \xi\|^{p+2} < 1$, and hence $\|x_1 - \xi\| < r$. Thus, the iterate $x_1 \in \mathcal{O}[\xi, r]$. The function ζ is non-decreasing on $\left(0, \tfrac{1}{L}\right)$ and $\|x_0 - \xi\| < r$, so we obtain
$$\|x_1 - \xi\| \le \zeta(r)\,\|x_0 - \xi\|^{p+3}.$$
We assume that the result is true for some $n = k \in \mathbb{N}$, i.e., $x_k \in \mathcal{O}[\xi, r]$. If we replace $x_0$, $y_0$, and $x_1$ by $x_k$, $y_k$, and $x_{k+1}$, respectively, in the above computations, we obtain $x_{k+1} \in \mathcal{O}[\xi, r]$ and
$$\|x_{k+1} - \xi\| \le \zeta(r)\,\|x_k - \xi\|^{p+3}.$$
Therefore, by the principle of induction, we have $x_n \in \mathcal{O}[\xi, r]$ for all $n \in \mathbb{N}$, and
$$\|x_{n+1} - \xi\| \le \zeta(r)\,\|x_n - \xi\|^{p+3}, \quad n \in \mathbb{N} \cup \{0\}.$$
Furthermore, we have obtained (9). Moreover, since $\|x_0 - \xi\| < r$ gives $c := \zeta(\|x_0 - \xi\|)\,\|x_0 - \xi\|^{p+2} < 1$ and ζ is non-decreasing, the above estimates yield $\|x_{n+1} - \xi\| \le c\,\|x_n - \xi\|$ for all n, so $\lim_{n \to \infty} x_n = \xi$, which proves (ii). By Definition 2, we have proved that the order of convergence of method (3) is $p + 3$. □
Remark 2.
Some special cases of z n in (3) are given below.
(i) 
The method (3) becomes the sixth-order method studied in [9], when
$$z_n = x_n - 2\big[\Psi'(x_n) + \Psi'(y_n)\big]^{-1}\Psi(x_n). \qquad (18)$$
(ii) 
Next, the method (3) becomes the seventh-order method studied in [8], when
$$z_n = y_n - \big[2I - \Psi'(x_n)^{-1}\Psi'(y_n)\big]\Psi'(x_n)^{-1}\Psi(y_n). \qquad (19)$$
(iii) 
Similarly, the method (3) becomes the eighth-order method studied in [19], if
$$z_n = y_n - \left[\tfrac{1}{4}I + \tfrac{1}{2}\Psi'(y_n)^{-1}\Psi'(x_n)\left(I + \tfrac{1}{2}\Psi'(y_n)^{-1}\Psi'(x_n)\right)\right]\Psi'(x_n)^{-1}\Psi(y_n). \qquad (20)$$
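To indicate how method (3) can be organised in practice, the following Python sketch (our own illustration, not code from the paper or from [8,9,10,19]) implements the corrector step as reconstructed in (3), with the choice (18) supplied as the inner step; the function names, the dense linear solves, and the stopping rule are our assumptions:

```python
import numpy as np

def method3(Psi, dPsi, x0, z_step, tol=1e-14, max_iter=50):
    """Sketch of method (3): Newton predictor y_n, inner step z_n = z_step(...),
    and corrector x_{n+1} = z_n - [7/2 I - B(4I - 3/2 B)] Psi'(x_n)^{-1} Psi(z_n),
    where B = Psi'(x_n)^{-1} Psi'(y_n)."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(x.size)
    for _ in range(max_iter):
        Jx = dPsi(x)
        y = x - np.linalg.solve(Jx, Psi(x))
        z = z_step(Psi, dPsi, x, y)
        B = np.linalg.solve(Jx, dPsi(y))
        H = 3.5 * I - B @ (4.0 * I - 1.5 * B)
        x_new = z - H @ np.linalg.solve(Jx, Psi(z))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def z18(Psi, dPsi, x, y):
    """Choice (18): z = x - 2 [Psi'(x) + Psi'(y)]^{-1} Psi(x)."""
    return x - 2.0 * np.linalg.solve(dPsi(x) + dPsi(y), Psi(x))
```

Any other inner map ϑ of order p, for instance (19) or (20), could be passed as z_step in the same fashion.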
The next result concerns the uniqueness of the solution of (1).
Proposition 1.
Assume that (C1) and (C2) hold. Suppose that there exists $\mu \ge r$ such that
$$L\mu < 2.$$
Then, Equation (1) has a unique solution ξ in $\mathcal{O}[\xi, \mu] \cap \Omega$.
Proof. 
Similar to the proof of Proposition 2.2 in [11]. □

4. Numerical Examples

This section considers five examples. In Example 1, we discuss a Hammerstein-type nonlinear integral equation, which has many applications in science and technology (for details, see [1,2,20,21]). In Example 2, we discuss the solution of a system of nonlinear equations on $\mathbb{R}^3$; in Example 3, we consider a scalar equation whose fourth derivative does not exist on the domain; and in Example 4, we consider a 30 × 30 system of nonlinear equations. Finally, in Example 5, we solve the Van der Pol differential equation. This equation has a wide range of applications in seismology, the physical sciences, the biological sciences, etc., for example, in modelling the geological fault between two plates, the electric current in a vacuum tube, and the neuron action potential (see [22,23,24]).
Example 1.
Let $X = Y = C[0, 1]$ and $\Omega = \{x \in X : \|x\| < 1\}$, where $\|x\| = \max_{s \in [0, 1]} |x(s)|$.
Consider the integral equation
$$x(s) = \frac{1}{35}\int_0^1 e^{-(5s^2 + t^2)}\,x(t)^4\,dt, \quad s \in [0, 1].$$
We are interested in finding a function x ( s ) which satisfies Ψ ( x ) = 0 , where
$$\Psi(x)(s) = x(s) - \frac{1}{35}\int_0^1 e^{-(5s^2 + t^2)}\,x(t)^4\,dt.$$
The Fréchet derivatives of Ψ up to the third order are given by
$$\Psi'(x)y(s) = y(s) - \frac{4}{35}\int_0^1 e^{-(5s^2 + t^2)}\,x(t)^3 y(t)\,dt, \qquad \Psi''(x)(yz)(s) = -\frac{12}{35}\int_0^1 e^{-(5s^2 + t^2)}\,x(t)^2 z(t) y(t)\,dt,$$
and
$$\Psi'''(x)(yzw)(s) = -\frac{24}{35}\int_0^1 e^{-(5s^2 + t^2)}\,x(t) w(t) z(t) y(t)\,dt, \quad \text{for } x, y, z, w \in \Omega.$$
To verify the conditions given in Section 2, we consider $\xi = 0$. Observe that $\Psi'(0)^{-1} = I$, the identity map, so we have
$$\|\Psi'(0)^{-1}\big(\Psi'(x) - \Psi'(0)\big)\| \le \tfrac{4}{35}\|x - 0\|, \qquad \|\Psi'(0)^{-1}\big(\Psi''(x) - \Psi''(0)\big)\| \le \tfrac{12}{35}\|x - 0\|,$$
and
$$\|\Psi'(0)^{-1}\big(\Psi'''(x) - \Psi'''(0)\big)\| \le \tfrac{24}{35}\|x - 0\|, \quad \text{for all } x \in \Omega.$$
All conditions are verified, with $L = \tfrac{4}{35}$, $M = P = \tfrac{12}{35}$, and $K = Q = \tfrac{24}{35}$. Thus, the results in Theorem 1 apply to this particular nonlinear integral equation.
Next, we converted the problem of solving the integral equation into a system of nonlinear equations using the Gauss–Legendre quadrature formula given in [25]. We solved the obtained system using method (3) with $z_n$ as in (18)–(20), with the initial point $x_0 = (0.8, \ldots, 0.8) \in \mathbb{R}^8$. The results obtained in each iteration are shown in Table 1, from which it is visible that method (3) with $z_n$ as in (18)–(20) converges to ξ within two iterations. Hence, the results in Theorem 1 are validated for this example.
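For illustration only (a sketch under our own assumptions; the exact quadrature setup of [25] used by the authors may differ), an 8-point Gauss–Legendre rule on [0, 1] reduces the integral equation to a nonlinear system in $\mathbb{R}^8$:

```python
import numpy as np

# 8-point Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1].
t, w = np.polynomial.legendre.leggauss(8)
t = 0.5 * (t + 1.0)
w = 0.5 * w

def F(x):
    """Discretized Hammerstein system: F_i(x) = x_i - (1/35) * sum_j w_j e^{-(5 t_i^2 + t_j^2)} x_j^4."""
    return x - (1.0 / 35.0) * np.exp(-5.0 * t[:, None] ** 2 - t[None, :] ** 2) @ (w * x ** 4)

def dF(x):
    """Jacobian of F with respect to the nodal values x_j."""
    return np.eye(8) - (4.0 / 35.0) * np.exp(-5.0 * t[:, None] ** 2 - t[None, :] ** 2) * (w * x ** 3)[None, :]

# e.g. root = method3(F, dF, x0=np.full(8, 0.8), z_step=z18)   # reusing the earlier sketch
```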
Example 2.
Let $X = Y = \mathbb{R}^3$ with the max norm and $\Psi : X \to Y$ be defined by
$$\Psi(\alpha) = \left(\frac{1}{3}\sin\alpha_1,\ \frac{\alpha_2^2}{15} + \frac{\alpha_2}{3},\ \frac{1}{3}\alpha_3\right), \quad \text{for } \alpha = (\alpha_1, \alpha_2, \alpha_3).$$
The Fréchet derivatives up to the third order are given as follows:
$$\Psi'(\alpha) = \frac{1}{3}\begin{pmatrix} \cos\alpha_1 & 0 & 0\\ 0 & \frac{2\alpha_2}{5} + 1 & 0\\ 0 & 0 & 1 \end{pmatrix},$$
while $\Psi''(\alpha)$ and $\Psi'''(\alpha)$ are the bilinear and trilinear operators whose only non-zero entries are
$$\Psi''(\alpha)(e_1, e_1) = -\tfrac{1}{3}\sin\alpha_1\,e_1, \qquad \Psi''(\alpha)(e_2, e_2) = \tfrac{2}{15}\,e_2, \qquad \Psi'''(\alpha)(e_1, e_1, e_1) = -\tfrac{1}{3}\cos\alpha_1\,e_1,$$
where $e_1, e_2, e_3$ denote the standard basis vectors of $\mathbb{R}^3$.
All the conditions given in Section 2 are verified with $\xi = (0, 0, 0)$, and the constants are $L = M = K = \tfrac{1}{3}$, $P = \tfrac{1}{3}$, and $Q = \tfrac{1}{3}$. Hence, the results in Theorem 1 are applicable to solving $\Psi(\alpha) = 0$. Taking the initial value $w_0 = (0.5, 0.5, 0.5)$, we find that method (3) with $z_n$ as in (18)–(20) converges to $\xi = (0, 0, 0)$ within three iterations. The approximated solutions are shown in Table 2.
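As an illustrative driver (reusing the hypothetical method3 and z18 names from the sketch after Remark 2), Example 2 can be run as follows:

```python
import numpy as np

# Psi and its Jacobian for Example 2 (our transcription of the formulas above).
Psi  = lambda a: np.array([np.sin(a[0]) / 3.0,
                           a[1] ** 2 / 15.0 + a[1] / 3.0,
                           a[2] / 3.0])
dPsi = lambda a: np.diag([np.cos(a[0]) / 3.0,
                          2.0 * a[1] / 15.0 + 1.0 / 3.0,
                          1.0 / 3.0])

root = method3(Psi, dPsi, x0=[0.5, 0.5, 0.5], z_step=z18)
print(root)   # expected to approach (0, 0, 0)
```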
Example 3.
Consider the scalar function $q : [-1, 1] \to \mathbb{R}$ defined by
$$q(s) = s - \frac{8}{105}|s|^{7/2}.$$
Note that $q(0) = 0$ and the fourth derivative $q^{(4)}(s)$ does not exist on $[-1, 1]$. Therefore, the analysis in [8,9,10] cannot guarantee the convergence of the iterative method to the simple solution $s^* = 0$. Here, the function q satisfies the assumptions (C1)–(C6) with $L = \tfrac{4}{15}$, $M = P = \tfrac{2}{3}$, and $K = Q = 1$. From Table 3, one can compare the convergence order, efficiency index, and the number of iterations required to converge to the solution $s^* = 0$ from the initial point $s_0 = 0.85$ for different choices of $z_n$ in (3) and the methods mentioned in the table.
Example 4.
Consider the following system of equations studied in [28]:
$$x^{(k)} - \cos\left(5x^{(k)} - \sum_{i=1}^{30} x^{(i)}\right) = 0, \quad 1 \le k \le 30, \qquad x = \big(x^{(1)}, \ldots, x^{(30)}\big) \in \mathbb{R}^{30}.$$
We take the initial value $x_0 = (0.1, \ldots, 0.1) \in \mathbb{R}^{30}$. In this case, all coordinates of the approximated solutions are the same, i.e., $x_n = \{x_n^{(k)}\}_{1 \le k \le 30}$ with $x_n^{(1)} = \cdots = x_n^{(30)}$. The approximated solutions of the above system of nonlinear equations, obtained by method (3) with $z_n$ as in (18)–(20) to an accuracy of $10^{-14}$, are given in Table 4.
One can compare the number of iterations required for convergence and the corresponding errors along with the ACOC from Table 4.
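For completeness, here is a short sketch (our own, based on the reconstruction of the system above; the function names are assumptions) of the residual map and its Jacobian:

```python
import numpy as np

n = 30

def F(x):
    """System of Example 4: F_k(x) = x_k - cos(5 x_k - sum_i x_i), k = 1..30."""
    return x - np.cos(5.0 * x - x.sum())

def dF(x):
    """Jacobian: dF_k/dx_j = delta_kj + sin(5 x_k - sum_i x_i) * (5 delta_kj - 1)."""
    s = np.sin(5.0 * x - x.sum())
    return np.eye(n) + s[:, None] * (5.0 * np.eye(n) - np.ones((n, n)))

# e.g. root = method3(F, dF, x0=np.full(n, 0.1), z_step=z18)   # reusing the earlier sketch
```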
Example 5.
Consider the Van der Pol equation [29], which is described as follows:
$$y'' - \kappa\,(y^2 - 1)\,y' + y = 0, \quad \kappa > 0, \qquad y(0) = 0, \quad y(2) = 1. \qquad (22)$$
We consider the nodes $t_j$, $0 \le j \le m$, satisfying $t_0 = 0 < t_1 < t_2 < \cdots < t_m = 2$, where $t_j - t_{j-1} = \tfrac{2}{m}$ for $j \ge 1$, and write $y_0 = y(t_0) = 0$, $y_1 = y(t_1), \ldots, y_{m-1} = y(t_{m-1})$, $y_m = y(t_m) = 1$. When we use the divided difference technique to discretize (22), we end up with an $(m-1) \times (m-1)$ system of nonlinear equations as follows:
$$4y_j - \kappa m\,(y_j^2 - 1)(y_{j+1} - y_{j-1}) + m^2(y_{j-1} - 2y_j + y_{j+1}) = 0, \quad j = 1, 2, \ldots, m - 1.$$
Let us take the value of m = 60 , initial approximation y j 0 = log 1 j 2 , j = 1 , 2 , , 59 , and κ = 1 3 . For this case, we obtain a 59 × 59 system of nonlinear equations. The graph of the approximated solution of (22) corresponding to the choices of z n as in (18)–(20) is plotted in Figure 1.
From Figure 1, it is observed that the solution curve of the Van der Pol Equation (22) is concave on [0, 2].
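The discretized boundary value problem can be assembled as below (our sketch; the forward-difference Jacobian is a convenience assumption, not the authors' implementation):

```python
import numpy as np

m, kappa = 60, 1.0 / 3.0   # values used in Example 5

def F(y_inner):
    """Discretized Van der Pol BVP for the interior unknowns y_1..y_{m-1};
    boundary values y_0 = 0 and y_m = 1 are appended before differencing."""
    y = np.concatenate(([0.0], y_inner, [1.0]))
    j = np.arange(1, m)
    return (4.0 * y[j]
            - kappa * m * (y[j] ** 2 - 1.0) * (y[j + 1] - y[j - 1])
            + m ** 2 * (y[j - 1] - 2.0 * y[j] + y[j + 1]))

def dF(y_inner, h=1e-7):
    """Forward-difference Jacobian (an analytic Jacobian would normally be preferred)."""
    y_inner = np.asarray(y_inner, dtype=float)
    f0 = F(y_inner)
    J = np.empty((f0.size, y_inner.size))
    for k in range(y_inner.size):
        e = np.zeros_like(y_inner)
        e[k] = h
        J[:, k] = (F(y_inner + e) - f0) / h
    return J
```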

5. Basin of Attractions

This section discusses the basin of attraction of a solution of the nonlinear system of Equation (1), considering $X = Y = \mathbb{C}$ with the usual norm. Let $F : \hat{\mathbb{C}} = \mathbb{C} \cup \{\infty\} \to \hat{\mathbb{C}}$ be an iterative function for (1). A point $\xi \in \mathbb{C}$ is called an attracting fixed point of F if it satisfies $F(\xi) = \xi$ and $|F'(\xi)| < 1$. We denote by $F^n$ the n-fold composition of F. For an iterative method, suppose that the sequence $\{F^n(x_0)\}_{n \ge 0}$ converges to the solution ξ; then, any such point $x_0$ is often referred to as an initial point or initial guess. The difficulty lies in making the correct initial guess. The set
$$A(\xi) = \left\{x \in \hat{\mathbb{C}} : \{F^n(x)\}_{n \ge 0} \text{ converges to a solution } \xi \text{ of } (1)\right\}$$
is known as the basin of attraction of ξ . Such a set is always open but not connected in general. The connected component of A ( ξ ) containing ξ is called an immediate basin of ξ . The set of correct guesses is precisely the union of all the basins corresponding to solutions of (1). The Fatou set is denoted by
$$\mathcal{F}(F) = \left\{x \in \hat{\mathbb{C}} : \{F^n(x)\}_{n \ge 0} \text{ is equicontinuous}\right\},$$
and the complement of $\mathcal{F}(F)$, denoted by $\mathcal{J}(F)$, is known as the Julia set. Here, $\mathcal{F}(F)$ is an open set, and $\mathcal{J}(F)$ is a closed set. For any attracting fixed point ξ of F, we have $A(\xi) \subseteq \mathcal{F}(F)$ and $\partial A(\xi) \subseteq \mathcal{J}(F)$, where $\partial A(\xi)$ is the boundary of $A(\xi)$. For more details, see [30,31,32,33,34,35].
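To indicate how such basin pictures are typically generated (our own sketch; for brevity it iterates plain Newton's method rather than method (3), and the colour handling is an assumption):

```python
import numpy as np
import matplotlib.pyplot as plt

P  = lambda z: z * (z ** 3 - 1.0)          # polynomial (23)
dP = lambda z: 4.0 * z ** 3 - 1.0          # its derivative
roots = np.array([0.0, 1.0, -0.5 + 0.5j * np.sqrt(3), -0.5 - 0.5j * np.sqrt(3)])

x = np.linspace(-2.0, 2.0, 401)
Z = x[None, :] + 1j * x[:, None]           # 401 x 401 grid on the region R
with np.errstate(divide="ignore", invalid="ignore"):
    for _ in range(50):                    # iterate every grid point simultaneously
        Z = Z - P(Z) / dP(Z)

# colour index: nearest root if within 1e-8, otherwise -1 (treated as divergent/black)
dist = np.abs(Z[..., None] - roots[None, None, :])
idx = np.where(dist.min(-1) < 1e-8, dist.argmin(-1), -1)
plt.imshow(idx, extent=[-2, 2, -2, 2], origin="lower")
plt.show()
```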
Example 6.
Consider the following two complex-variable polynomials
$$P(z) = z(z^3 - 1), \qquad (23)$$
$$Q(z) = z(z - 0.5)\left[(z - 1)^2 + 1\right]. \qquad (24)$$
Note that the zeros of P are $z_0 = 0$, $z_1 = 1$, $z_2 = -\tfrac{1}{2} + \tfrac{\sqrt{3}}{2}i$, and $z_3 = -\tfrac{1}{2} - \tfrac{\sqrt{3}}{2}i$, and those of Q are $z_0 = 0$, $z_1 = 0.5$, $z_2 = 1 - i$, and $z_3 = 1 + i$. We divide the region $R = \{z = x + iy : -2 \le \mathrm{Re}(z) \le 2,\ -2 \le \mathrm{Im}(z) \le 2\}$, which contains all the roots of (23) and (24), into 401 × 401 equally spaced grid points. Next, we apply method (3) with $z_n$ as in (18)–(20), using each grid point as the initial point, and obtain the basins of attraction of the roots $z_0$, $z_1$, $z_2$ and $z_3$, as shown in Figure 2 for (23) and Figure 3 for (24).
Note that we assigned the colours blue, green, magenta, and red to the grid points for which the sequence $\{F^n\}_{n \ge 0}$ converges, with an accuracy of $10^{-8}$, to the roots $z_0$, $z_1$, $z_2$, and $z_3$, respectively. We assigned black to the divergent grid points in Figure 2 and Figure 3. It is observed from Figure 2 and Figure 3 that the convergence region for choice (19) is smaller than the convergence regions of choices (18) and (20) of method (3) in R.

6. Conclusions

In this work, we extended the convergence order of a two-step iterative method of order p to p + 3 under weaker conditions than those used in earlier studies. Numerical examples were discussed, and the performance of the considered methods was compared with some existing methods. From Equation (9), one can obtain the bound ζ(r) for the asymptotic error constant. Our work is limited to operators satisfying conditions (C1)–(C6). Therefore, there is scope to weaken these conditions further and thereby enhance the applicability of the considered methods.

Author Contributions

Conceptualization, validation, formal analysis, investigation, and visualization by I.B., M.M., S.G., K.S., I.K.A. and S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

Santhosh George thanks Science and Engineering Research Board, Govt. of India for support under Project Grant No. CRG/2021/004776. Indra Bate and Muniyasamy M would like to thank the National Institute of Technology Karnataka, India, for their support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Sakawa, Y. Optimal control of a certain type of linear distributed-parameter systems. IEEE Trans. Automat. Control 1966, 11, 35–41.
2. Hu, S.; Khavanin, M.; Zhuang, W. Integral equations arising in the kinetic theory of gases. Appl. Anal. 1989, 34, 261–266.
3. Cercignani, C. Nonlinear problems in the kinetic theory of gases. In Trends in Applications of Mathematics to Mechanics (Wassenaar, 1987); Springer: Berlin/Heidelberg, Germany, 1988; pp. 351–360.
4. Lin, Y.; Bao, L.; Jia, X. Convergence analysis of a variant of the Newton method for solving nonlinear equations. Comput. Math. Appl. 2010, 59, 2121–2127.
5. Grosan, C.; Abraham, A. A New Approach for Solving Nonlinear Equations Systems. IEEE Trans. Syst. Man Cybern.-Part A Syst. Humans 2008, 38, 698–714.
6. Moré, J.J. A collection of nonlinear model problems. In Computational Solution of Nonlinear Systems of Equations. Lectures in Applied Mathematics; American Mathematical Society: Providence, RI, USA, 1990; Volume 26, pp. 723–762.
7. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008.
8. Behl, R.; Arora, H. CMMSE: A novel scheme having seventh-order convergence for nonlinear systems. J. Comput. Appl. Math. 2022, 404, 113301.
9. Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934.
10. Cordero, A.; Leonardo-Sepúlveda, M.A.; Torregrosa, J.R.; Vassileva, M.P. Increasing in three units the order of convergence of iterative methods for solving nonlinear systems. Math. Comput. Simul. 2024, 223, 509–522.
11. George, S.; Bate, I.; Muniyasamy, M.; Chandhini, G.; Senapati, K. Enhancing the applicability of Chebyshev-like method. J. Complex. 2024, 83, 101854.
12. George, S.; Kunnarath, A.; Sadananda, R.; Jidesh, P.; Argyros, I.K. On obtaining order of convergence of Jarratt-like method without using Taylor series expansion. Comput. Appl. Math. 2024, 43, 243.
13. Muniyasamy, M.; Chandhini, G.; George, S.; Bate, I.; Senapati, K. On obtaining convergence order of a fourth and sixth order method of Hueso et al. without using Taylor series expansion. J. Comput. Appl. Math. 2024, 452, 116136.
14. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960.
15. Weerakoon, S.; Fernando, T. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
16. Cordero, A.; Martínez, E.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2009, 231, 541–551.
17. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660.
18. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2022.
19. Cordero, A.; Gómez, E.; Torregrosa, J.R. Efficient high-order iterative methods for solving nonlinear systems and their application on heat conduction problems. Complexity 2017, 1, 6457532.
20. Kelley, C.T. Approximation of solutions of some quadratic integral equations in transport theory. J. Integral Equ. 1982, 4, 221–237.
21. Hernández-Verón, M.A.; Martínez, E.; Singh, S. On the Chandrasekhar integral equation. Comput. Math. Methods 2021, 3, e1150.
22. Burridge, R.; Knopoff, L. Model and theoretical seismicity. Bull. Seismol. Soc. Amer. 1967, 57, 341–371.
23. van der Pol, B. The nonlinear theory of electric oscillations. Proc. IRE 1934, 22, 1051–1086.
24. FitzHugh, R. Impulses and Physiological States in Theoretical Models of Nerve Membrane. Biophys. J. 1961, 1, 445–466.
25. Hernández, M. Chebyshev’s approximation algorithms and applications. Comput. Math. Appl. 2001, 41, 433–445.
26. George, S.; Sadananda, R.; Padikkal, J.; Argyros, I.K. On the Order of Convergence of the Noor–Waseem Method. Mathematics 2022, 10, 4544.
27. Ham, Y.; Chun, C. A fifth-order iterative method for solving nonlinear equations. Appl. Math. Comput. 2007, 194, 287–290.
28. Grau-Sánchez, M.; Grau, À.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266.
29. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Stable high-order iterative methods for solving nonlinear models. Appl. Math. Comput. 2017, 303, 70–88.
30. Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root-finding methods from a dynamical point of view. Sci. Ser. A Math. Sci. 2004, 10, 3–35.
31. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46.
32. Campos, B.; Canela, J.; Vindel, P. Dynamics of Newton-like root finding methods. Numer. Algorithms 2023, 93, 1453–1480.
33. Husain, A.; Nanda, M.N.; Chowdary, M.S.; Sajid, M. Fractals: An Eclectic Survey, Part-I. Fractal Fract. 2022, 6, 89.
34. Husain, A.; Nanda, M.N.; Chowdary, M.S.; Sajid, M. Fractals: An Eclectic Survey, Part-II. Fractal Fract. 2022, 6, 379.
35. Laplante, P.A.; Laplante, C. Introduction to Chaos, Fractals and Dynamical Systems; World Scientific: London, UK, 2023.
Figure 1. Graph of the solutions of (22).
Figure 2. Basin of attractions of method (3) with z_n as in (18)–(20), respectively, for (23).
Figure 3. Basin of attractions of method (3) with z_n as in (18)–(20), respectively, for (24).
Table 1. Approximated solutions using method (3) with different choices of z_n for Example 1.

| Grids | x_1(s), z_n as in (18) | x_2(s), (18) | x_1(s), z_n as in (19) | x_2(s), (19) | x_1(s), z_n as in (20) | x_2(s), (20) |
|---|---|---|---|---|---|---|
| s_1 | 2.1896 × 10^{-6} | 0 | 3.1050 × 10^{-9} | 0 | 7.5719 × 10^{-10} | 0 |
| s_2 | 2.0835 × 10^{-6} | 0 | 2.9544 × 10^{-9} | 0 | 7.2047 × 10^{-10} | 0 |
| s_3 | 1.6559 × 10^{-6} | 0 | 2.3481 × 10^{-9} | 0 | 5.7260 × 10^{-10} | 0 |
| s_4 | 9.5337 × 10^{-7} | 0 | 1.3519 × 10^{-9} | 0 | 3.2968 × 10^{-10} | 0 |
| s_5 | 3.8101 × 10^{-7} | 0 | 5.4028 × 10^{-10} | 0 | 1.3175 × 10^{-10} | 0 |
| s_6 | 1.19632 × 10^{-7} | 0 | 1.6964 × 10^{-10} | 0 | 4.1369 × 10^{-11} | 0 |
| s_7 | 3.8802 × 10^{-8} | 0 | 5.5022 × 10^{-11} | 0 | 1.3418 × 10^{-11} | 0 |
| s_8 | 1.7994 × 10^{-8} | 0 | 2.5516 × 10^{-11} | 0 | 6.2225 × 10^{-12} | 0 |
Table 2. Approximated solutions using method (3) with different choices of z_n for Example 2.

| Iteration | z_n as in (18) | z_n as in (19) | z_n as in (20) |
|---|---|---|---|
| 1 | (1.6015 × 10^{-4}, 3.5456 × 10^{-5}, 0) | (1.2461 × 10^{-5}, 1.2952 × 10^{-5}, 0) | (3.3271 × 10^{-6}, 5.7837 × 10^{-7}, 0) |
| 2 | (5.0487 × 10^{-29}, 1.1438 × 10^{-29}, 0) | (0, 3.7616 × 10^{-37}, 0) | (0, 0, 0) |
| 3 | (0, 0, 0) | (0, 0, 0) | (0, 0, 0) |
Table 3. Comparison table for Example 3.

| Choice of z_n in (3) | Convergence Order of (3) | Efficiency Index of (3) | Number of Iterations Required for Convergence to s* = 0 |
|---|---|---|---|
| (18) | 6 | 1.565085 | 3 |
| (19) | 7 | 1.475773 | 3 |
| (20) | 8 | 1.515717 | 2 |
| From (3) in [26] | 6 | 1.430969 | 3 |
| From (11) in [27] | 8 | 1.515717 | 2 |
Table 4. Comparison table for Example 4.

| Choice of z_n in (3) | Iteration n | Approximated Solution x_n | Error ‖F(x_n)‖ | ACOC |
|---|---|---|---|---|
| (18) | 1 | (0.062855849895130, …, 0.062855849895130) | 0.063455 | |
| | 2 | (0.060413827547897, …, 0.060413827547897) | 1.9948 × 10^{-11} | |
| | 3 | (0.060413827548666, …, 0.060413827548666) | 1.0533 × 10^{-14} | 6.3413 |
| (19) | 1 | (0.061338012367114, …, 0.061338012367114) | 0.024000 | |
| | 2 | (0.060413827548666, …, 0.060413827548666) | 1.0533 × 10^{-14} | |
| | 3 | (0.060413827548666, …, 0.060413827548666) | 1.0533 × 10^{-14} | 7.4905 |
| (20) | 1 | (0.059667133238013, …, 0.059667133238013) | 0.019368 | |
| | 2 | (0.060413827548666, …, 0.060413827548666) | 1.0533 × 10^{-14} | |
| | 3 | (0.060413827548666, …, 0.060413827548666) | 1.0533 × 10^{-14} | 7.8901 |
| From (3) in [26] | 1 | (0.060027920435136, …, 0.060027920435136) | 0.010013 | |
| | 2 | (0.060413827548666, …, 0.060413827548666) | 1.0533 × 10^{-14} | |
| | 3 | (0.060413827548666, …, 0.060413827548666) | 1.0533 × 10^{-14} | 5.9382 |
| From (11) in [27] | 1 | (0.059684988681059, …, 0.059684988681059) | 0.018905 | |
| | 2 | (0.060413827548666, …, 0.060413827548666) | 1.0533 × 10^{-14} | |
| | 3 | (0.060413827548666, …, 0.060413827548666) | 1.0533 × 10^{-14} | 8.2013 |