Article

Extended Efficient Multistep Solvers for Solving Equations in Banach Spaces

by Ramandeep Behl 1,*, Ioannis K. Argyros 2 and Sattam Alharbi 3
1 Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(12), 1919; https://doi.org/10.3390/math12121919
Submission received: 15 May 2024 / Revised: 10 June 2024 / Accepted: 17 June 2024 / Published: 20 June 2024
(This article belongs to the Section E: Applied Mathematics)

Abstract

In this paper, we investigate the local and semilocal convergence of a class of multistep iterative methods for solving nonlinear systems of equations. We first establish the conditions under which these methods converge locally to the solution. Then, we extend our analysis to examine their semilocal convergence, considering their behavior when starting from initial guesses that are not necessarily close to the solution. Any study of iterative approaches for solving nonlinear systems of equations must take into account the radius of convergence, computable upper error bounds, and the uniqueness of solutions; these points were not addressed in earlier studies. Moreover, we provide numerical examples to demonstrate the theoretical findings and compare the performance of these methods under different circumstances. Finally, we conclude that our examination offers significant insight into the convergence characteristics of earlier iterative techniques for solving nonlinear systems of equations.

1. Introduction

This study deals with the task of finding a locally unique solution $x^*$ of the equation
$$F(x) = 0, \qquad (1)$$
where $Z_1, Z_2$ are Banach spaces [1], $\Omega \subset Z_1$ is a convex set, and $F : \Omega \to Z_2$ is a differentiable operator in the Fréchet sense. Solving a system of nonlinear equations of the form (1) is one of the most challenging and complicated tasks, since no general method works for all problems. Thus, iterative solvers [2] are the only option for obtaining a solution of (1). Numerous problems in scientific and engineering domains, such as steering problems, Fisher's problem, and BVPs, can be converted into higher-order systems of nonlinear equations, and such problems can then be solved by iterative approaches without loss of generality. Therefore, these methods provide powerful tools for solving complex problems and are essential for many practical applications [3,4].
The Newton iterative solver [2,5,6] is particularly useful for solving systems of nonlinear equations and is defined below:
$$x^{(m+1)} = x^{(m)} - F'(x^{(m)})^{-1}F(x^{(m)}), \qquad m = 0, 1, 2, 3, \ldots$$
It involves updating a vector of estimates of the solution by iteratively solving a linear system of equations derived from the Jacobian matrix $F'(x^{(m)})$ of the system. The importance of the Newton iterative solver for systems of nonlinear equations lies in its ability to provide accurate solutions and to handle complex systems with multiple variables and equations.
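The linear-algebra form of this update is easy to state in code. The following is a minimal NumPy sketch applied to a toy 2 × 2 system of our own choosing (not one of the paper's examples):

```python
import numpy as np

def newton(F, Fprime, x, tol=1e-12, itmax=50):
    # x^(m+1) = x^(m) - F'(x^(m))^{-1} F(x^(m)), realized as a linear solve
    for _ in range(itmax):
        step = np.linalg.solve(Fprime(x), F(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# toy system: x^2 + y^2 = 1, x = y  ->  solution (1/sqrt(2), 1/sqrt(2))
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
Fp = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton(F, Fp, np.array([1.0, 0.5])))
```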
In particular, we present the local convergence analysis of a multistep solver, described as follows:
$$\begin{aligned}
y_1^{(m)} &= x^{(m)} - F'(x^{(m)})^{-1}F(x^{(m)}),\\
y_2^{(m)} &= x^{(m)} - 2\bigl(F'(x^{(m)}) + F'(y_1^{(m)})\bigr)^{-1}F(x^{(m)}),\\
y_3^{(m)} &= y_2^{(m)} - A_m F'(x^{(m)})^{-1}F(y_2^{(m)}),\\
&\ \ \vdots\\
x^{(m+1)} &= y_n^{(m)} = y_{n-1}^{(m)} - A_m F'(x^{(m)})^{-1}F(y_{n-1}^{(m)}),
\end{aligned} \qquad (2)$$
where $n$ is a given natural number, the initial guess $x_0 \in \Omega$, and
$$A_m = \frac{7}{2}I - 4F'(x^{(m)})^{-1}F'(y_1^{(m)}) + \frac{3}{2}\bigl(F'(x^{(m)})^{-1}F'(y_1^{(m)})\bigr)^{2}, \qquad m = 0, 1, 2, 3, \ldots$$
The convergence order $3(n-1)$ $(n \geq 3)$ was shown in ref. [7] for $Z_1 = Z_2 = \mathbb{R}^j$. The solver (2) uses $(n-1)$ evaluations of the operator $F$ and two frozen derivatives per iteration. The order of convergence was derived in the earlier study [7] by using derivatives of the involved operator of order at least eight (which do not appear in solver (2)) and Taylor expansions.
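Under the reconstruction of solver (2) and the weight operator $A_m$ given above, one iteration cycle can be sketched as follows (a NumPy illustration with our own naming and stopping choices, not the authors' implementation):

```python
import numpy as np

def multistep_solve(F, Fprime, x0, n=3, tol=1e-12, itmax=100):
    """Sketch of solver (2): two frozen Jacobians per cycle,
    F evaluated at x^(m) and at y_2, ..., y_(n-1)."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(x.size)
    for _ in range(itmax):
        Jx = Fprime(x)                                    # frozen F'(x^(m))
        y1 = x - np.linalg.solve(Jx, F(x))                # y_1: Newton substep
        Jy = Fprime(y1)                                   # frozen F'(y_1^(m))
        z = x - 2.0 * np.linalg.solve(Jx + Jy, F(x))      # y_2
        B = np.linalg.solve(Jx, Jy)                       # B = F'(x)^{-1} F'(y_1)
        A = 3.5 * I - 4.0 * B + 1.5 * (B @ B)             # A_m as reconstructed above
        for _ in range(n - 2):                            # y_3, ..., y_n = x^(m+1)
            z = z - A @ np.linalg.solve(Jx, F(z))
        if np.linalg.norm(z - x) < tol:
            return z
        x = z
    return x
```

For n = 3, 4, and 5 this corresponds to the three cases (Case–1)–(Case–3) tested in Section 4.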
As a motivating experiment, let us take $\Omega$ to be any interval containing $[0, 1]$. Then, define the following function $F$ on $\Omega$ by
$$F(t) = \begin{cases} t^4\log t + t^7 - t^6, & t \neq 0,\\ 0, & t = 0. \end{cases}$$
Notice that for the required solution $x^* = 1$ we have $F(x^*) = 0$, and the solver converges to $x^* = 1$, even though $F^{(5)}(t)$ does not exist on $\Omega$. Consequently, the result in [7], which requires the existence of at least the eighth derivative, cannot be applied; the requirements of [7] are sufficient but not necessary. In addition, the radius of convergence is a critical aspect of iterative solvers. It determines the convergence properties of the solver and indicates the maximum range of initial values that leads to a convergent solution. A small radius of convergence implies a slow convergence rate, while a large radius of convergence ensures faster and more robust convergence. The accurate computation of the radius of convergence is therefore essential for optimizing the performance and reliability of iterative solvers in practical applications. Further, computable upper error bounds on $\|y_n^{(j)} - x^*\|$ and results on the uniqueness of $x^*$ were not considered in the previous study [7] or in similar methods that use Taylor series to determine the order of convergence [2,8,9,10,11,12,13,14].
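To verify the claim about the fifth derivative under the reconstruction of $F$ given above, differentiate term by term for $t \neq 0$:
$$F^{(4)}(t) = 24\log t + 50 + 840\,t^{3} - 360\,t^{2}, \qquad F^{(5)}(t) = \frac{24}{t} + 2520\,t^{2} - 720\,t,$$
so $F^{(5)}$ is unbounded as $t \to 0^{+}$ and therefore does not exist at $0 \in \Omega$.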
In this study, we address these factors, which are important for determining the effectiveness and reliability of any iterative solver. By considering the local and semilocal analysis, the radius of convergence, computable error bounds, and the uniqueness of the root, we can assert the guaranteed convergence of the iterative method for a particular problem. We demonstrate the application to steering problems, Fisher's problem, BVPs, and higher-order systems of nonlinear equations, and we also provide the radii of convergence.

2. Analysis: Local

Let $S = (-\infty, +\infty)$. Consider a function $\aleph_0 : S \to S$ that is continuous and nondecreasing (CN).
Assume:
(i)
The equation
$$\aleph_0(t) - 1 = 0$$
has a minimal positive solution (mps) $\kappa_0$. Set $S_0 = [0, \kappa_0)$.
Consider a function $\aleph : S_0 \to S$ that is CN. Define the functions $g_1$ and $h_1$ on the interval $S_0$ by
$$g_1(t) = \frac{\int_0^1 \aleph\bigl((1-\theta)t\bigr)\,d\theta}{1 - \aleph_0(t)} \quad \text{and} \quad h_1(t) = g_1(t) - 1.$$
(ii)
The equation $h_1(t) = 0$ has an mps in $(0, \kappa_0)$, denoted by $r_1$.
(iii)
The equation
$$p(t) - 1 = 0$$
has an mps, denoted by $\kappa_1$, where
$$p(t) = \frac{1}{2}\Bigl(\aleph_0(t) + \aleph_0\bigl(g_1(t)t\bigr)\Bigr).$$
Set $\kappa = \min\{\kappa_0, \kappa_1\}$ and $S_1 = [0, \kappa)$.
Consider a function $\aleph_1 : S_1 \to S$ that is CN. Define the functions $g_2$ and $h_2$ on the interval $S_1$ by
$$g_2(t) = \frac{\int_0^1 \aleph\bigl((1-\theta)t\bigr)\,d\theta}{1 - \aleph_0(t)} + \frac{\Bigl(\aleph_0(t) + \aleph_0\bigl(g_1(t)t\bigr)\Bigr)\int_0^1 \aleph_1(\theta t)\,d\theta}{2\bigl(1 - \aleph_0(t)\bigr)\bigl(1 - p(t)\bigr)}$$
and $h_2(t) = g_2(t) - 1$.
(iv)
The equation $h_2(t) = 0$ has an mps in $S_1$, denoted by $r_2$. Define the functions $g_n, h_n$ $(n \geq 3)$ on the interval $S_1$ by
$$\begin{aligned}
g_n(t) = \Biggl[&\frac{\int_0^1 \aleph\bigl((1-\theta)g_{n-1}(t)t\bigr)\,d\theta}{1 - \aleph_0\bigl(g_{n-1}(t)t\bigr)} + \frac{\Bigl(\aleph_0(t) + \aleph_0\bigl(g_{n-1}(t)t\bigr)\Bigr)\int_0^1 \aleph_1\bigl(\theta g_{n-1}(t)t\bigr)\,d\theta}{\Bigl(1 - \aleph_0\bigl(g_{n-1}(t)t\bigr)\Bigr)\bigl(1 - \aleph_0(t)\bigr)}\\
&+ \Biggl(\frac{1}{2}\biggl(\frac{\aleph_0(t) + \aleph_0\bigl(g_{n-2}(t)t\bigr)}{1 - \aleph_0(t)}\biggr)^{2} + 4\,\frac{\aleph_0(t) + \aleph_0\bigl(g_{n-2}(t)t\bigr)}{1 - \aleph_0(t)}\Biggr)\frac{\int_0^1 \aleph_1\bigl(\theta g_{n-1}(t)t\bigr)\,d\theta}{1 - \aleph_0(t)}\Biggr]\,g_{n-1}(t)
\end{aligned}$$
and
$$h_n(t) = g_n(t) - 1.$$
(v)
The equations $h_n(t) = 0$ have an mps in $S_1$, denoted by $r_n$.
Define the radius of convergence $r$ by
$$r = \min\{r_i\}, \quad i = 1, 2, 3, \ldots, n. \qquad (5)$$
These definitions imply that
$$0 \leq \aleph_0(t) < 1, \qquad (6)$$
$$0 \leq p(t) < 1 \qquad (7)$$
and
$$0 \leq g_i(t) < 1, \qquad (8)$$
for all $t \in [0, r)$.
We denote by $E(z, \gamma)$ and $E[z, \gamma]$ the open and closed balls in $Z_1$, respectively, with center $z \in Z_1$ and radius $\gamma > 0$.
The convergence conditions (A) will be used.
Assume:
• (A1) $F : \Omega \to Z_2$ is differentiable; there exist $x^* \in \Omega$ and $\Delta \in \mathcal{L}(Z_1, Z_2)$ such that $F(x^*) = 0$ and $\Delta^{-1} \in \mathcal{L}(Z_2, Z_1)$.
• (A2) For each $x \in \Omega$,
$$\|\Delta^{-1}(F'(x) - \Delta)\| \leq \aleph_0(\|x - x^*\|).$$
• Set $\Omega_0 = \Omega \cap E(x^*, \kappa_0)$.
• (A3) For each $x, y \in \Omega_0$,
$$\|\Delta^{-1}(F'(x) - F'(y))\| \leq \aleph(\|x - y\|).$$
• Set $\Omega_1 = \Omega \cap E(x^*, \kappa)$.
• (A4) For each $x \in \Omega_1$,
$$\|\Delta^{-1}F'(x)\| \leq \aleph_1(\|x - x^*\|).$$
• (A5) $E(x^*, r) \subset \Omega$
• and
• (A6) there exists $r^* \geq r$ such that
$$\int_0^1 \aleph_0(\theta r^*)\,d\theta < 1.$$
• Set $\Omega_2 = \Omega \cap E[x^*, r^*]$.
Based on the preceding notation and the conditions (A), we arrive at the main local convergence result for solver (2).
Theorem 1.
Assume conditions (A) hold. Then, starting from $x_0 \in E(x^*, r) \setminus \{x^*\}$, the sequence $\{y_n^{(m)}\}$ generated by solver (2) is well defined, remains in $E(x^*, r)$, and converges to $x^*$. Moreover, the solution $x^*$ is unique in the set $\Omega_2$ given in (A6).
Proof. 
We use mathematical induction. By condition (A2), we have for each $x \in E(x^*, r) \setminus \{x^*\}$ that
$$\|\Delta^{-1}(F'(x) - \Delta)\| \leq \aleph_0(\|x - x^*\|) \leq \aleph_0(r) < 1. \qquad (9)$$
It follows from (9) and the celebrated lemma due to Banach on invertible operators [1,6,15] that $F'(x)^{-1} \in \mathcal{L}(Z_2, Z_1)$, where $\mathcal{L}(Z_2, Z_1)$ stands for the space of bounded linear operators from $Z_2$ to $Z_1$, and
$$\|F'(x)^{-1}\Delta\| \leq \frac{1}{1 - \aleph_0(\|x - x^*\|)}. \qquad (10)$$
Thus, for $x = x^{(m)}$, we have from (5), (8) (for $i = 1$), (A3) and (10) that
$$\begin{aligned}
\|y_1^{(m)} - x^*\| &= \bigl\|x^{(m)} - x^* - F'(x^{(m)})^{-1}F(x^{(m)})\bigr\|\\
&\leq \|F'(x^{(m)})^{-1}\Delta\|\,\Bigl\|\int_0^1 \Delta^{-1}\Bigl(F'\bigl(x^* + \theta(x^{(m)} - x^*)\bigr) - F'(x^{(m)})\Bigr)\,d\theta\,(x^{(m)} - x^*)\Bigr\|\\
&\leq \frac{\int_0^1 \aleph\bigl((1-\theta)\rho^{(m)}\bigr)\,d\theta}{1 - \aleph_0(\rho^{(m)})}\,\rho^{(m)} = g_1(\rho^{(m)})\,\rho^{(m)} \leq \rho^{(m)} < r,
\end{aligned}$$
where $\rho^{(m)} = \|x^{(m)} - x^*\|$. Hence, the iterate $y_1^{(m)} \in E(x^*, r)$. We also need the estimate
$$\begin{aligned}
\bigl\|(2\Delta)^{-1}\bigl(F'(x^{(m)}) + F'(y_1^{(m)}) - 2\Delta\bigr)\bigr\| &\leq \frac{1}{2}\Bigl(\|\Delta^{-1}(F'(x^{(m)}) - \Delta)\| + \|\Delta^{-1}(F'(y_1^{(m)}) - \Delta)\|\Bigr)\\
&\leq \frac{1}{2}\Bigl(\aleph_0(\rho^{(m)}) + \aleph_0\bigl(\|y_1^{(m)} - x^*\|\bigr)\Bigr)\\
&\leq \frac{1}{2}\Bigl(\aleph_0(\rho^{(m)}) + \aleph_0\bigl(g_1(\rho^{(m)})\rho^{(m)}\bigr)\Bigr) = p(\rho^{(m)}) \leq p(r) < 1.
\end{aligned}$$
Thus, we have
$$\bigl\|\bigl(F'(x^{(m)}) + F'(y_1^{(m)})\bigr)^{-1}\Delta\bigr\| \leq \frac{1}{2\bigl(1 - p(\rho^{(m)})\bigr)}.$$
Hence, the iterate $y_2^{(m)}$ is also well defined. Then, from the second substep of solver (2), we have
$$\begin{aligned}
\|y_2^{(m)} - x^*\| &= \Bigl\|x^{(m)} - x^* - F'(x^{(m)})^{-1}F(x^{(m)}) + \Bigl(F'(x^{(m)})^{-1} - 2\bigl(F'(x^{(m)}) + F'(y_1^{(m)})\bigr)^{-1}\Bigr)F(x^{(m)})\Bigr\|\\
&= \Bigl\|x^{(m)} - x^* - F'(x^{(m)})^{-1}F(x^{(m)})\\
&\qquad + F'(x^{(m)})^{-1}\bigl(F'(y_1^{(m)}) - F'(x^{(m)})\bigr)\bigl(F'(x^{(m)}) + F'(y_1^{(m)})\bigr)^{-1}F(x^{(m)})\Bigr\|\\
&\leq \Biggl[\frac{\int_0^1 \aleph\bigl((1-\theta)\rho^{(m)}\bigr)\,d\theta}{1 - \aleph_0(\rho^{(m)})} + \frac{\Bigl(\aleph_0(\rho^{(m)}) + \aleph_0\bigl(\|y_1^{(m)} - x^*\|\bigr)\Bigr)\int_0^1 \aleph_1(\theta\rho^{(m)})\,d\theta}{2\bigl(1 - \aleph_0(\rho^{(m)})\bigr)\bigl(1 - p(\rho^{(m)})\bigr)}\Biggr]\rho^{(m)}\\
&\leq g_2(\rho^{(m)})\,\rho^{(m)} \leq \rho^{(m)}.
\end{aligned}$$
Therefore, the iterate y 2 ( m ) E ( x * , r ) .
Next, for $i \geq 3$, we similarly obtain
$$\begin{aligned}
y_i^{(m)} - x^* &= y_{i-1}^{(m)} - x^* - F'(y_{i-1}^{(m)})^{-1}F(y_{i-1}^{(m)}) + \Bigl(F'(y_{i-1}^{(m)})^{-1} - F'(x^{(m)})^{-1}\Bigr)F(y_{i-1}^{(m)})\\
&\quad - \Bigl(\tfrac{5}{2}I - 4F'(x^{(m)})^{-1}F'(y_{i-2}^{(m)}) + \tfrac{3}{2}\bigl(F'(x^{(m)})^{-1}F'(y_{i-2}^{(m)})\bigr)^{2}\Bigr)F'(x^{(m)})^{-1}F(y_{i-1}^{(m)}),
\end{aligned}$$
so that
$$\begin{aligned}
\|y_i^{(m)} - x^*\| &\leq \Biggl[\frac{\int_0^1 \aleph\bigl((1-\theta)\|y_{i-1}^{(m)} - x^*\|\bigr)\,d\theta}{1 - \aleph_0\bigl(\|y_{i-1}^{(m)} - x^*\|\bigr)} + \frac{\Bigl(\aleph_0\bigl(\|y_{i-1}^{(m)} - x^*\|\bigr) + \aleph_0(\rho^{(m)})\Bigr)\int_0^1 \aleph_1\bigl(\theta\|y_{i-1}^{(m)} - x^*\|\bigr)\,d\theta}{\Bigl(1 - \aleph_0\bigl(\|y_{i-1}^{(m)} - x^*\|\bigr)\Bigr)\bigl(1 - \aleph_0(\rho^{(m)})\bigr)}\\
&\qquad + \Biggl(\frac{1}{2}\biggl(\frac{\aleph_0(\rho^{(m)}) + \aleph_0\bigl(\|y_{i-2}^{(m)} - x^*\|\bigr)}{1 - \aleph_0(\rho^{(m)})}\biggr)^{2} + 4\,\frac{\aleph_0(\rho^{(m)}) + \aleph_0\bigl(\|y_{i-2}^{(m)} - x^*\|\bigr)}{1 - \aleph_0(\rho^{(m)})}\Biggr)\frac{\int_0^1 \aleph_1\bigl(\theta\|y_{i-1}^{(m)} - x^*\|\bigr)\,d\theta}{1 - \aleph_0(\rho^{(m)})}\Biggr]\|y_{i-1}^{(m)} - x^*\|\\
&\leq g_i\bigl(\|y_{i-1}^{(m)} - x^*\|\bigr)\,\|y_{i-1}^{(m)} - x^*\| \leq \|y_{i-1}^{(m)} - x^*\| < r.
\end{aligned}$$
Thus, the iterate $y_i^{(m)} \in E(x^*, r)$. Moreover, using the estimate $\|y_n^{(m+1)} - x^*\| \leq c\,\|y_n^{(m)} - x^*\| < r$, where $c = g_n(\|x_0 - x^*\|) \in [0, 1)$, we deduce that $y_n^{(m+1)} \in E(x^*, r)$ and $\lim_{m\to\infty} y_n^{(m)} = x^*$. Finally, the uniqueness of the solution follows if we consider $y^* \in \Omega_2$ with $F(y^*) = 0$. Then, using (A2) and (A6), for $T = \int_0^1 F'\bigl(x^* + \theta(y^* - x^*)\bigr)\,d\theta$ we get
$$\|\Delta^{-1}(T - \Delta)\| \leq \int_0^1 \aleph_0\bigl(\theta\|y^* - x^*\|\bigr)\,d\theta \leq \int_0^1 \aleph_0(\theta r^*)\,d\theta < 1.$$
Hence, $T^{-1} \in \mathcal{L}(Z_2, Z_1)$. Furthermore, using the identity $0 = F(y^*) - F(x^*) = T(y^* - x^*)$, we conclude that $x^* = y^*$. □
Remark 1.
A usual pick is $\Delta = F'(x^*)$, which makes $x^*$ a simple solution. Note that this hypothesis is not made in Theorem 1. Consequently, method (2) is applicable for computing solutions of multiplicity greater than one, provided $\Delta \neq F'(x^*)$. Moreover, the pick $\Delta = F'(x^*)$ is not necessarily the most flexible.

3. Semi-Local Convergence

The proof of the semi-local convergence requires the utilization of majorizing sequences [1,3,4,6].
Assume that a CN function $w_0 : [0, +\infty) \to \mathbb{R}$ exists such that the equation $w_0(t) - 1 = 0$ has an mps, denoted by $\alpha$. Let $w : [0, \alpha) \to \mathbb{R}$ be a CN function.
Define the scalar sequence $\{a_m\}$ for $a_0 = a_0^{(0)} = 0$, $a_1^{(0)} \geq 0$, a fixed natural number $n$, and each $a_{m+1} = a_n^{(m)}$, $m = 0, 1, 2, \ldots$, by
$$\begin{aligned}
\bar w_m &= w\bigl(a_1^{(m)} - a_0^{(m)}\bigr) \ \text{ or } \ w_0\bigl(a_0^{(m)}\bigr) + w_0\bigl(a_1^{(m)}\bigr),\\
a_2^{(m)} &= a_1^{(m)} + \frac{\bar w_m\bigl(a_1^{(m)} - a_0^{(m)}\bigr)}{2\Bigl(1 - \tfrac{1}{2}\bigl(w_0(a_0^{(m)}) + w_0(a_1^{(m)})\bigr)\Bigr)},\\
b_2^{(m)} &= \Bigl(1 + \int_0^1 w_0\bigl(a_0^{(m)} + \theta(a_2^{(m)} - a_0^{(m)})\bigr)\,d\theta\Bigr)\bigl(a_2^{(m)} - a_0^{(m)}\bigr) + \bigl(1 + w_0(a_0^{(m)})\bigr)\bigl(a_1^{(m)} - a_0^{(m)}\bigr),\\
p_m &= 6\Bigl(\frac{\bar w_m}{1 - w_0(a_0^{(m)})}\Bigr)^{2} + 4\,\frac{\bar w_m}{1 - w_0(a_0^{(m)})} + 5,\\
a_3^{(m)} &= a_2^{(m)} + \frac{p_m\,b_2^{(m)}}{1 - w_0(a_0^{(m)})},\\
b_{k-1}^{(m)} &= \Bigl(1 + \int_0^1 w_0\bigl(a_0^{(m)} + \theta(a_{k-1}^{(m)} - a_0^{(m)})\bigr)\,d\theta\Bigr)\bigl(a_{k-1}^{(m)} - a_0^{(m)}\bigr) + \bigl(1 + w_0(a_0^{(m)})\bigr)\bigl(a_1^{(m)} - a_0^{(m)}\bigr), \quad k = 2, \ldots, n-1,\\
a_k^{(m)} &= a_{k-1}^{(m)} + \frac{p_m\,b_{k-1}^{(m)}}{1 - w_0(a_0^{(m)})},\\
a_{m+1} &= a_n^{(m)} = a_{n-1}^{(m)} + \frac{p_m\,b_{n-1}^{(m)}}{1 - w_0(a_0^{(m)})},\\
\lambda_{m+1} &= \int_0^1 w\bigl((1-\theta)(a_{m+1} - a_m)\bigr)\,d\theta\,(a_{m+1} - a_m) + \bigl(1 + w_0(a_m)\bigr)\bigl(a_{m+1} - a_1^{(m)}\bigr),
\end{aligned} \qquad (14)$$
and
$$a_1^{(m+1)} = a_{m+1} + \frac{\lambda_{m+1}}{1 - w_0(a_{m+1})}. \qquad (15)$$
General convergence conditions follow for the sequence { a m } .
Lemma 1.
Assume
$$w_0\bigl(a_0^{(m)}\bigr) + w_0\bigl(a_1^{(m)}\bigr) < 2, \qquad w_0\bigl(a_0^{(m)}\bigr) < 1, \qquad \text{and} \qquad a_m \leq \alpha.$$
Then, the following assertions hold:
$$0 \leq a_m \leq a_{m+1} \leq \alpha, \qquad (16)$$
and there exists $\alpha^* \in [0, \alpha]$ such that $\lim_{m\to\infty} a_m = \alpha^*$.
Proof. 
These conditions and the definition of the sequence $\{a_m\}$ given by Formula (14) imply assertion (16). Hence, the limit point $\alpha^*$ exists. □
Note that $\alpha^*$ is the unique least upper bound of the sequence $\{a_m\}$. Next, the functions $w_0, w$ and the limit point $\alpha^*$ are connected to the operators appearing in solver (2).
Assume:
• (H1) There exist $x_0 \in \Omega$, $\Lambda \in \mathcal{L}(Z_1, Z_2)$, and a parameter $a_1^{(0)} \geq 0$ such that the linear operator $\Lambda$ is invertible and $\|\Lambda^{-1}F(x_0)\| \leq a_1^{(0)}$.
• (H2) $\|\Lambda^{-1}(F'(v) - \Lambda)\| \leq w_0(\|v - x_0\|)$ for each $v \in \Omega$.
• Set $\Omega_2 = \Omega \cap E(x_0, \alpha)$. The choice of $\alpha$ and condition (H2) for $v_1 = x_0$ imply
$$\|\Lambda^{-1}(F'(v_1) - \Lambda)\| \leq w_0(\|v_1 - x_0\|) = w_0(0) < 1.$$
• Thus, $F'(x_0)^{-1} \in \mathcal{L}(Z_2, Z_1)$, and consequently we can pick $a_1^{(0)}$ with $\|F'(x_0)^{-1}F(x_0)\| \leq a_1^{(0)}$.
• (H3) $\|\Lambda^{-1}(F'(v_2) - F'(v_1))\| \leq w(\|v_2 - v_1\|)$ for each $v_1, v_2 \in \Omega_2$.
• (H4) The conditions of Lemma 1 hold
• and
• (H5) $E[x_0, \alpha^*] \subset \Omega$.
The preceding notation and the conditions ( H ) allow us to present the semi-local convergence result for the solver (2).
Theorem 2.
Assume that the conditions (H) hold. Then, the sequence $\{x^{(m)}\}$ generated by solver (2) exists in $E(x_0, \alpha^*)$ for each $m = 0, 1, 2, 3, \ldots$ and converges to some $x^* \in E[x_0, \alpha^*]$. Moreover, the following assertions hold:
$$\begin{aligned}
\|y_1^{(m)} - x^{(m)}\| &\leq a_1^{(m)} - a_0^{(m)},\\
\|y_2^{(m)} - y_1^{(m)}\| &\leq a_2^{(m)} - a_1^{(m)},\\
\|y_k^{(m)} - y_{k-1}^{(m)}\| &\leq a_k^{(m)} - a_{k-1}^{(m)}, \quad k = 2, \ldots, n-1, \ \text{and}\\
\|x^{(m+1)} - y_{n-1}^{(m)}\| &= \|y_n^{(m)} - y_{n-1}^{(m)}\| \leq a_{m+1} - a_{n-1}^{(m)}.
\end{aligned} \qquad (18)$$
Proof. 
Mathematical induction is employed to prove these assertions. The first one holds for $m = 0$, since by (H1) and (14),
$$\|y_1^{(0)} - x_0\| = \|\Lambda^{-1}F(x_0)\| \leq a_1^{(0)} = a_1^{(0)} - a_0^{(0)} < \alpha^*,$$
and the iterate $y_1^{(0)} \in E(x_0, \alpha^*)$.
We need the following estimate
$$\begin{aligned}
\bigl\|(2\Lambda)^{-1}\bigl(F'(x^{(m)}) + F'(y_1^{(m)}) - 2\Lambda\bigr)\bigr\| &\leq \frac{1}{2}\Bigl(\|\Lambda^{-1}(F'(x^{(m)}) - \Lambda)\| + \|\Lambda^{-1}(F'(y_1^{(m)}) - \Lambda)\|\Bigr)\\
&\leq \frac{1}{2}\Bigl(w_0(\|x^{(m)} - x_0\|) + w_0(\|y_1^{(m)} - x_0\|)\Bigr)\\
&\leq \frac{1}{2}\Bigl(w_0(a_0^{(m)}) + w_0(a_1^{(m)})\Bigr) < 1,
\end{aligned}$$
so,
$$\bigl\|\bigl(F'(x^{(m)}) + F'(y_1^{(m)})\bigr)^{-1}\Lambda\bigr\| \leq \frac{1}{2\Bigl(1 - \tfrac{1}{2}\bigl(w_0(a_0^{(m)}) + w_0(a_1^{(m)})\bigr)\Bigr)}.$$
Thus, we can rewrite the first two substeps of solver (2) in the following way:
$$\begin{aligned}
y_2^{(m)} - y_1^{(m)} &= \Bigl(F'(x^{(m)})^{-1} - 2\bigl(F'(x^{(m)}) + F'(y_1^{(m)})\bigr)^{-1}\Bigr)F(x^{(m)})\\
&= \bigl(F'(x^{(m)}) + F'(y_1^{(m)})\bigr)^{-1}\Bigl(F'(x^{(m)}) + F'(y_1^{(m)}) - 2F'(x^{(m)})\Bigr)F'(x^{(m)})^{-1}F(x^{(m)})\\
&= \bigl(F'(x^{(m)}) + F'(y_1^{(m)})\bigr)^{-1}\bigl(F'(x^{(m)}) - F'(y_1^{(m)})\bigr)\bigl(y_1^{(m)} - x^{(m)}\bigr).
\end{aligned}$$
This further yields
$$\|y_2^{(m)} - y_1^{(m)}\| \leq \frac{\bar w_m\,\|y_1^{(m)} - x^{(m)}\|}{2\Bigl(1 - \tfrac{1}{2}\bigl(w_0(a_0^{(m)}) + w_0(a_1^{(m)})\bigr)\Bigr)} \leq a_2^{(m)} - a_1^{(m)}$$
and
$$\|y_2^{(m)} - x_0\| \leq \|y_2^{(m)} - y_1^{(m)}\| + \|y_1^{(m)} - x_0\| \leq a_2^{(m)} - a_1^{(m)} + a_1^{(m)} - a_0^{(0)} = a_2^{(m)} < \alpha^*,$$
where we also use the following estimates
$$\|\Lambda^{-1}\bigl(F'(x^{(m)}) - F'(y_1^{(m)})\bigr)\| \leq w(\|x^{(m)} - y_1^{(m)}\|) \leq w\bigl(a_1^{(m)} - a_0^{(m)}\bigr) \leq \bar w_m$$
and
$$\begin{aligned}
\|\Lambda^{-1}\bigl(F'(x^{(m)}) - F'(y_1^{(m)})\bigr)\| &\leq \|\Lambda^{-1}(F'(x^{(m)}) - \Lambda)\| + \|\Lambda^{-1}(F'(y_1^{(m)}) - \Lambda)\|\\
&\leq w_0(\|x^{(m)} - x_0\|) + w_0(\|y_1^{(m)} - x_0\|) \leq w_0(a_0^{(m)}) + w_0(a_1^{(m)}) \leq \bar w_m.
\end{aligned}$$
Hence, the iterate $y_2^{(m)} \in E(x_0, \alpha^*)$ and the second assertion holds. An upper bound on $\|A_m\|$ is needed. As in the local case,
$$A_m = 6\bigl(I - F'(x^{(m)})^{-1}F'(y_1^{(m)})\bigr)^{2} - 4\bigl(I - F'(x^{(m)})^{-1}F'(y_1^{(m)})\bigr) + 5I,$$
thus,
$$\|A_m\| \leq 6\Bigl(\frac{\bar w_m}{1 - w_0(a_0^{(m)})}\Bigr)^{2} + 4\,\frac{\bar w_m}{1 - w_0(a_0^{(m)})} + 5 = p_m,$$
where
$$\|I - F'(x^{(m)})^{-1}F'(y_1^{(m)})\| \leq \|F'(x^{(m)})^{-1}\Lambda\|\,\|\Lambda^{-1}\bigl(F'(x^{(m)}) - F'(y_1^{(m)})\bigr)\| \leq \frac{\bar w_m}{1 - w_0(a_0^{(m)})}.$$
Moreover, we can write
$$\begin{aligned}
F(y_2^{(m)}) &= F(y_2^{(m)}) - F(x^{(m)}) + F(x^{(m)}) = F(y_2^{(m)}) - F(x^{(m)}) - F'(x^{(m)})\bigl(y_1^{(m)} - x^{(m)}\bigr)\\
&= \int_0^1 F'\bigl(x^{(m)} + \theta(y_2^{(m)} - x^{(m)})\bigr)\,d\theta\,\bigl(y_2^{(m)} - x^{(m)}\bigr) - F'(x^{(m)})\bigl(y_1^{(m)} - x^{(m)}\bigr).
\end{aligned}$$
Hence, we have
$$\begin{aligned}
\|\Lambda^{-1}F(y_2^{(m)})\| &\leq \Bigl(1 + \int_0^1 w_0\bigl(\|x^{(m)} - x_0\| + \theta\|y_2^{(m)} - x^{(m)}\|\bigr)\,d\theta\Bigr)\|y_2^{(m)} - x^{(m)}\|\\
&\quad + \bigl(1 + w_0(\|x^{(m)} - x_0\|)\bigr)\|y_1^{(m)} - x^{(m)}\|\\
&\leq \Bigl(1 + \int_0^1 w_0\bigl(a_0^{(m)} + \theta(a_2^{(m)} - a_0^{(m)})\bigr)\,d\theta\Bigr)\bigl(a_2^{(m)} - a_0^{(m)}\bigr) + \bigl(1 + w_0(a_0^{(m)})\bigr)\bigl(a_1^{(m)} - a_0^{(m)}\bigr) = b_2^{(m)}.
\end{aligned}$$
Consequently, by the third substep of the solver (2), we get
$$\|y_3^{(m)} - y_2^{(m)}\| \leq \|A_m\|\,\|F'(x^{(m)})^{-1}\Lambda\|\,\|\Lambda^{-1}F(y_2^{(m)})\| \leq \frac{p_m\,b_2^{(m)}}{1 - w_0(a_0^{(m)})} = a_3^{(m)} - a_2^{(m)}$$
and
$$\|y_3^{(m)} - x_0\| \leq \|y_3^{(m)} - y_2^{(m)}\| + \|y_2^{(m)} - x_0\| \leq a_3^{(m)} - a_2^{(m)} + a_2^{(m)} - a_0 = a_3^{(m)} < \alpha^*.$$
Hence, the iterate $y_3^{(m)} \in E(x_0, \alpha^*)$ and the third assertion holds. The remaining assertions are validated simply by replacing 2 and 3 with $(k-1)$ and $k$, respectively, in the preceding calculations. Thus, by the first substep of the solver, we can write
$$\begin{aligned}
F(x^{(m+1)}) &= F(x^{(m+1)}) - F(x^{(m)}) + F(x^{(m)})\\
&= F(x^{(m+1)}) - F(x^{(m)}) - F'(x^{(m)})\bigl(x^{(m+1)} - x^{(m)}\bigr) + F'(x^{(m)})\bigl(x^{(m+1)} - x^{(m)}\bigr) - F'(x^{(m)})\bigl(y_1^{(m)} - x^{(m)}\bigr).
\end{aligned}$$
Thus, we obtain
$$\begin{aligned}
\|\Lambda^{-1}F(x^{(m+1)})\| &\leq \int_0^1 w\bigl((1-\theta)\|x^{(m+1)} - x^{(m)}\|\bigr)\,d\theta\,\|x^{(m+1)} - x^{(m)}\| + \bigl(1 + w_0(\|x^{(m)} - x_0\|)\bigr)\|x^{(m+1)} - y_1^{(m)}\|\\
&\leq \int_0^1 w\bigl((1-\theta)(a_{m+1} - a_m)\bigr)\,d\theta\,(a_{m+1} - a_m) + \bigl(1 + w_0(a_m)\bigr)\bigl(a_{m+1} - a_1^{(m)}\bigr) = \lambda_{m+1}.
\end{aligned}$$
Therefore, we have
$$\|y_1^{(m+1)} - x^{(m+1)}\| \leq \|F'(x^{(m+1)})^{-1}\Lambda\|\,\|\Lambda^{-1}F(x^{(m+1)})\| \leq \frac{\lambda_{m+1}}{1 - w_0(a_0^{(m+1)})} = a_1^{(m+1)} - a_0^{(m+1)},$$
and
$$\|y_1^{(m+1)} - x_0\| \leq \|y_1^{(m+1)} - x^{(m+1)}\| + \|x^{(m+1)} - x_0\| \leq a_1^{(m+1)} - a_0^{(m+1)} + a_0^{(m+1)} - a_0^{(0)} = a_1^{(m+1)} < \alpha^*.$$
Thus, the iterate $y_1^{(m+1)} \in E(x_0, \alpha^*)$, and the induction for the assertions is complete. The sequence $\{a_m\}$ is therefore shown to be majorizing for $\{x^{(m)}\}$; moreover, the latter sequence is also complete in the Banach space $Z_1$. Hence, there exists $x^* \in E[x_0, \alpha^*]$ such that $\lim_{m\to\infty} x^{(m)} = x^*$. Notice that the continuity of the operator $F$ together with estimate (18) leads to $F(x^*) = 0$ by letting $m \to \infty$.
A uniqueness domain for solutions of the equation $F(x) = 0$ can be specified. □
Proposition 1.
Suppose there exists a solution $y^* \in E(x_0, d_0)$ of the equation $F(x) = 0$ for some $d_0 > 0$; the condition (H2) holds in the ball $E(x_0, d_0)$; and there exists $d \geq d_0$ such that
$$\int_0^1 w_0\bigl((1-\theta)d_0 + \theta d\bigr)\,d\theta < 1. \qquad (19)$$
Set $\Omega_3 = \Omega \cap E[x_0, d]$. Then, $y^*$ is the only solution of the equation $F(x) = 0$ in the domain $\Omega_3$.
Proof. 
Define the linear operator
$$M = \int_0^1 F'\bigl(y^* + \theta(z^* - y^*)\bigr)\,d\theta$$
for some $z^* \in \Omega_3$ with $F(z^*) = 0$. Then, it follows from (H2) and (19) that
$$\|\Lambda^{-1}(M - \Lambda)\| \leq \int_0^1 w_0\bigl((1-\theta)\|y^* - x_0\| + \theta\|z^* - x_0\|\bigr)\,d\theta \leq \int_0^1 w_0\bigl((1-\theta)d_0 + \theta d\bigr)\,d\theta < 1.$$
Therefore, we deduce that z * is equal to y * . □
Remark 2.
(1) A usual pick (but not necessarily the most appropriate) is $\Lambda = F'(x_0)$.
(2) The limit point $\alpha^*$ can be exchanged with $\alpha$ (see Lemma 1) in the condition (H5).
(3) If all the conditions (H1)–(H5) hold in Proposition 1, then we can set $y^* = x^*$ and $d_0 = \alpha^*$.
($H_3'$) Assume that
$$\|\Lambda^{-1}F'(x)\| \leq w_1(\|x - x_0\|),$$
for each $x \in \Omega_2$, where $w_1 : [0, \alpha] \to \mathbb{R}$ is a CN function.
Notice that
$$\|\Lambda^{-1}F'(x)\| = \|\Lambda^{-1}(F'(x) - \Lambda) + I\| \leq 1 + \|\Lambda^{-1}(F'(x) - \Lambda)\| \leq 1 + w_0(\|x - x_0\|).$$
Hence, the estimate $1 + w_0(t)$ can be replaced by $w_1(t)$ in the semilocal case; the same modification applies in the local case. In this case, the condition ($H_3'$) must be added to the conditions (H1)–(H5). This is worth doing, as it is possible that $w_1(t) < 1 + w_0(t)$. As an example, let us assume that $F(x) = \sin x$. Then, $x^* = 0$, $w_1(t) = 1$, $w_0(t) = t$, and $w_1(t) < 1 + w_0(t)$. Clearly, if this inequality does not hold, we do not add the condition ($H_3'$).

4. Numerical Applications

In this numerical section, we emphasize the significance of verifying and validating both the local and the semilocal convergence across six numerical problems. For this purpose, we choose three cases of the iterative solver (2), namely $n = 3$, $n = 4$, and $n = 5$, denoted by (Case–1), (Case–2), and (Case–3), respectively. The first two problems, referred to as Examples 1 and 2, are analyzed in order to assess the local convergence; afterwards, we focus on the semilocal convergence. For the computational findings, we select the following problems: the steering problem (three-dimensional), the Fisher problem (a $121 \times 121$ system), and a boundary value problem (BVP) leading to a $60 \times 60$ system. In the sixth example, we study a large $600 \times 600$ system of nonlinear equations.
We initially determine the radii of convergence for the iterative solver (2) applied to the given problem. We then execute the iterative procedure after choosing a suitable initial approximation and determine the computational order of convergence. This shows how fast the iterative solver approaches the required solution. The computational order of convergence (COC) [9] is calculated using the formula
$$\kappa = \frac{\ln\bigl(\|x^{(\rho+1)} - x^*\| / \|x^{(\rho)} - x^*\|\bigr)}{\ln\bigl(\|x^{(\rho)} - x^*\| / \|x^{(\rho-1)} - x^*\|\bigr)}, \quad \text{for } \rho = 1, 2, \ldots,$$
or the approximated computational order of convergence (ACOC) [11,16] by
$$\kappa^* = \frac{\ln\bigl(\|x^{(\rho+1)} - x^{(\rho)}\| / \|x^{(\rho)} - x^{(\rho-1)}\|\bigr)}{\ln\bigl(\|x^{(\rho)} - x^{(\rho-1)}\| / \|x^{(\rho-1)} - x^{(\rho-2)}\|\bigr)}, \quad \text{for } \rho = 2, 3, \ldots$$
In order to assess the efficiency of the iterative solver, we additionally recorded the CPU time during the computation, together with the residual error and the number of iterations required to reach the desired precision.
Stopping criteria are important in iterative solvers to determine when to stop the iteration process and consider the current approximation as the solution. Depending on the type of problem being solved and the algorithm being used, several stopping conditions might be applied. We choose the following stopping criterion:
(i)
$\|x^{(k+1)} - x^{(k)}\| < \epsilon$, and
(ii)
$\|F(x^{(k)})\| < \epsilon$,
where $\epsilon = 10^{-300}$ is the error tolerance. The multiple-precision arithmetic operations are performed using Mathematica v11.
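As an illustration of how $\kappa^*$ is evaluated in practice, the following minimal Python sketch computes the ACOC from a stored list of iterates (our own helper, not the authors' Mathematica code; in double precision only the first few entries are meaningful, whereas the paper uses multiple-precision arithmetic):

```python
import numpy as np

def acoc(iterates):
    """ACOC estimates from successive vector iterates x^(0), x^(1), ..."""
    # d[i] = ||x^(i+1) - x^(i)||
    d = [np.linalg.norm(iterates[i + 1] - iterates[i])
         for i in range(len(iterates) - 1)]
    # kappa*_rho = ln(d_rho / d_(rho-1)) / ln(d_(rho-1) / d_(rho-2))
    return [np.log(d[i + 1] / d[i]) / np.log(d[i] / d[i - 1])
            for i in range(1, len(d) - 1)]
```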
Example 1.
Let $Z_1 = Z_2 = \mathbb{R}^3$ and $\Omega = E[0, 1]$. Consider the mapping $F$ for $\vartheta = (\vartheta_1, \vartheta_2, \vartheta_3)^{tr}$ on the ball $\Omega$ given by
$$F(\vartheta) = \Bigl(\frac{e-1}{2}\,\vartheta_1^2 + \vartheta_1,\ e^{\vartheta_2} - 1,\ \vartheta_3\Bigr)^{tr}.$$
Then, the derivative $F'$ is found to be
$$F'(\vartheta) = \begin{pmatrix} (e-1)\vartheta_1 + 1 & 0 & 0\\ 0 & e^{\vartheta_2} & 0\\ 0 & 0 & 1 \end{pmatrix}.$$
Set $\Delta = F'(\vartheta^*)$. We obtain $F(\vartheta^*) = 0$ for $\vartheta^* = (0, 0, 0)^{tr}$. Then, $\Delta = I$, and the conditions (A1)–(A4) are verified for
$$\aleph_0(t) = (e-1)t, \qquad \aleph(t) = e^{\frac{1}{e-1}}\,t, \qquad \aleph_1(t) = e^{\frac{1}{e-1}}.$$
The radii of convergence and computational results are depicted in Table 1 and Table 2, respectively.
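These radii can be cross-checked numerically. The following sketch (our own; the function names are illustrative) solves $h_1 = 0$, $p - 1 = 0$, and $h_2 = 0$ by standard SciPy root bracketing and reproduces $\kappa_0 \approx 0.58198$, $r_1 \approx 0.3827$, $\kappa_1 \approx 0.4415$, and $r_2 \approx 0.1966$ from Table 1:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

e = np.e
al0 = lambda t: (e - 1.0) * t                 # aleph_0(t)
al  = lambda t: np.exp(1.0 / (e - 1.0)) * t   # aleph(t)
al1 = lambda t: np.exp(1.0 / (e - 1.0))       # aleph_1(t), constant

kappa0 = brentq(lambda t: al0(t) - 1.0, 1e-12, 1.0)           # ~0.58198

def g1(t):
    return quad(lambda th: al((1.0 - th) * t), 0.0, 1.0)[0] / (1.0 - al0(t))

r1 = brentq(lambda t: g1(t) - 1.0, 1e-12, 0.99 * kappa0)      # ~0.3827

def p(t):
    return 0.5 * (al0(t) + al0(g1(t) * t))

kappa1 = brentq(lambda t: p(t) - 1.0, 1e-12, 0.99 * kappa0)   # ~0.4415
kappa = min(kappa0, kappa1)

def g2(t):
    i1 = quad(lambda th: al((1.0 - th) * t), 0.0, 1.0)[0]
    i2 = quad(lambda th: al1(th * t), 0.0, 1.0)[0]
    return i1 / (1.0 - al0(t)) \
           + (al0(t) + al0(g1(t) * t)) * i2 / (2.0 * (1.0 - al0(t)) * (1.0 - p(t)))

r2 = brentq(lambda t: g2(t) - 1.0, 1e-12, 0.99 * kappa)       # ~0.1966
print(kappa0, r1, kappa1, r2)
```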
Example 2.
Let $\Omega = E[0, 1]$ and $Z_1 = Z_2 = C[0, 1]$. Consider the nonlinear integral equation of the first kind of Hammerstein type, where the operator $F$ is defined by
$$F(\Upsilon)(x) = \Upsilon(x) - 3\int_0^1 x\,\zeta\,\Upsilon(\zeta)^3\,d\zeta.$$
The calculation of the derivative gives
$$F'(\Upsilon)(q)(x) = q(x) - 9\int_0^1 x\,\zeta\,\Upsilon(\zeta)^2 q(\zeta)\,d\zeta,$$
for $q \in C[0, 1]$. For this operator $F$, the conditions (A1)–(A4), for $\Delta = F'(x^*) = I$ and $x^* = 0$, are verified, so we choose
$$\aleph_0(t) = 4.5t, \qquad \aleph(t) = 9t, \qquad \aleph_1(t) = 1 + 4.5t.$$
By adopting these functions, we calculate the radii for the cases of solver (2) in Example 2; see Table 3.
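A discretized illustration of this operator (our own sketch; the quadrature order and starting function are assumptions): approximating the integral by Gauss–Legendre quadrature turns $F$ into a finite-dimensional system, to which Newton's method (the first substep of (2)) can be applied. Starting within the computed radius $r_1 \approx 0.1111$ from Table 3 keeps the iteration inside the guaranteed convergence ball around $x^* = 0$:

```python
import numpy as np

# Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
nodes, weights = np.polynomial.legendre.leggauss(20)
z = 0.5 * (nodes + 1.0)
w = 0.5 * weights

def F(u):
    # u[i] ~ Upsilon(z[i]); F(Upsilon)(x) = Upsilon(x) - 3 x * int zeta Upsilon(zeta)^3 d zeta
    return u - 3.0 * z * np.sum(w * z * u**3)

def Fprime(u):
    # Jacobian: J[i, j] = delta_ij - 9 z_i (w_j z_j u_j^2)
    return np.eye(z.size) - 9.0 * np.outer(z, w * z * u**2)

u = 0.1 * np.ones_like(z)     # start within r1 ~ 0.1111
for _ in range(10):
    u -= np.linalg.solve(Fprime(u), F(u))
print(np.abs(u).max())        # -> ~0, i.e., the zero function x* = 0
```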
Example 3.
We use the well-known partial differential equation (PDE) proposed by Fisher [17]. This PDE describes the spatiotemporal dynamics of a population in a homogeneous environment and is also known as Fisher's equation. It is defined by the following:
$$\theta_t = D\,\theta_{xx} + \theta(1 - \theta), \qquad (23)$$
with homogeneous Neumann boundary conditions
$$\theta(x, 0) = 1.5 + 0.5\cos(\pi x), \quad 0 \leq x \leq 1, \qquad \theta_x(0, t) = 0, \quad \theta_x(1, t) = 0, \quad t \geq 0.$$
The differential Equation (23) is discretized using a finite difference method to create a system of nonlinear equations. To do this, we write $w_{i,j}$ for $\theta(x_i, t_j)$, the desired solution at the grid points of the mesh. Here, $M$ and $N$ denote the numbers of steps in the $x$ and $t$ directions, respectively, and $h$ and $k$ are the corresponding step sizes. By adopting forward, backward, and central differences, we obtain, in turn,
$$\theta_{xx}(x_i, t_j) \approx \frac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h^2}, \qquad \theta_t(x_i, t_j) \approx \frac{w_{i,j} - w_{i,j-1}}{k}, \qquad \theta_x(x_i, t_j) \approx \frac{w_{i+1,j} - w_{i,j}}{h}, \qquad t \in [0, 1],$$
leading to
$$\frac{w_{i,j} - w_{i,j-1}}{k} - w_{i,j}\bigl(1 - w_{i,j}\bigr) - D\,\frac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h^2} = 0,$$
where $i = 1, 2, 3, \ldots, M$, $j = 1, 2, 3, \ldots, N$, $h = \frac{1}{M}$, and $k = \frac{1}{N}$. For the specific values $M = 11$, $N = 11$, $h = k = \frac{1}{11}$, and $D = 1$, this results in a large nonlinear system of size $121 \times 121$. Convergence leads to the following solution $u(x_i, t_j) = x^*$ (a column vector, not a matrix), which is given below:
x * = 1.645 , 1.473 , 1.375 , 1.312 , 1.269 , 1.236 , 1.210 , 1.188 , 1.169 , 1.153 , 1.138 , 1.623 , 1.464 , 1.370 , 1.310 , 1.268 , 1.236 , 1.210 , 1.188 , 1.169 , 1.153 , 1.138 , 1.583 , 1.445 , 1.361 , 1.306 , 1.266 , 1.235 , 1.209 , 1.188 , 1.169 , 1.153 , 1.138 , 1.528 , 1.419 , 1.349 , 1.300 , 1.263 , 1.233 , 1.208 , 1.187 , 1.169 , 1.153 , 1.138 , 1.463 , 1.388 , 1.334 , 1.292 , 1.259 , 1.231 , 1.208 , 1.187 , 1.169 , 1.153 , 1.138 , 1.395 , 1.355 , 1.317 , 1.284 , 1.255 , 1.229 , 1.207 , 1.186 , 1.168 , 1.152 , 1.138 , 1.328 , 1.322 , 1.301 , 1.276 , 1.251 , 1.227 , 1.206 , 1.186 , 1.168 , 1.152 , 1.138 , 1.268 , 1.292 , 1.286 , 1.269 , 1.248 , 1.226 , 1.205 , 1.186 , 1.168 , 1.152 , 1.138 , 1.219 , 1.267 , 1.274 , 1.263 , 1.245 , 1.224 , 1.204 , 1.185 , 1.168 , 1.152 , 1.138 , 1.185 , 1.250 , 1.265 , 1.258 , 1.242 , 1.223 , 1.204 , 1.185 , 1.168 , 1.152 , 1.138 , 1.168 , 1.240 , 1.261 , 1.256 , 1.241 , 1.223 , 1.203 , 1.185 , 1.168 , 1.152 , 1.138 t r .
We depicted the numerical outcome in Table 4 based on the initial guess
x ( 0 ) = 1.0 , 1.0 , 1.0 , 1.0 , 1.0 , 1.0 , 1.1 , 1.1 , 1.1 , 1.1 , 1.1 , 1.1 , 1.1 , 1.1 , 1.1 , 1.1 , 1.1 , 1.1 , 1.2 , 1.2 , 1.2 , 1.2 , 1.2 , 1.2 , 1.2 , 1.2 , 1.2 , 1.2 , 1.2 , 1.2 , 1.3 , 1.3 , 1.3 , 1.3 , 1.3 , 1.3 1.3 , 1.3 , 1.3 , 1.3 , 1.3 , 1.3 , 1.4 , 1.4 , 1.4 , 1.4 , 1.4 , 1.4 , 1.4 , 1.4 , 1.4 , 1.4 , 1.4 , 1.4 , 1.5 , 1.5 , 1.5 , 1.5 , 1.5 , 1.5 , 1.5 , 1.5 , 1.5 , 1.5 , 1.5 , 1.5 , 1.6 , 1.6 , 1.6 , 1.6 , 1.6 , 1.6 1.6 , 1.6 , 1.6 , 1.6 , 1.6 , 1.6 , 1.7 , 1.7 , 1.7 , 1.7 , 1.7 , 1.7 , 1.7 , 1.7 , 1.7 , 1.7 , 1.7 , 1.7 , 1.8 , 1.8 , 1.8 , 1.8 , 1.8 , 1.8 , 1.8 , 1.8 , 1.8 , 1.8 , 1.8 , 1.8 , 1.9 , 1.9 , 1.9 , 1.9 , 1.9 , 1.9 , 1.9 , 1.9 , 1.9 , 1.9 , 1.9 , 1.9 , 2.0 , 2.0 , 2.0 , 2.0 , 2.0 , 2.0 , 2.0 . t r .
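For concreteness, the $121 \times 121$ residual map can be assembled as in the following sketch (the grid layout and the mirror-ghost handling of the Neumann ends are our assumptions; the paper does not spell them out). The resulting residual vector can be handed to any nonlinear solver, e.g., solver (2):

```python
import numpy as np

M = N = 11
h = k = 1.0 / 11.0
D = 1.0

def fisher_residual(w):
    # w is a 121-vector; W[i, j] approximates theta(x_i, t_j)
    W = w.reshape(M, N)
    x = (np.arange(M) + 1) * h                   # spatial grid (an assumption)
    theta0 = 1.5 + 0.5 * np.cos(np.pi * x)       # initial condition theta(x, 0)
    F = np.zeros_like(W)
    for j in range(N):                           # time levels
        for i in range(M):                       # space points
            prev = W[i, j - 1] if j > 0 else theta0[i]
            # homogeneous Neumann ends: mirror ghost values (an assumption)
            left = W[i - 1, j] if i > 0 else W[i + 1, j]
            right = W[i + 1, j] if i < M - 1 else W[i - 1, j]
            F[i, j] = (W[i, j] - prev) / k - W[i, j] * (1.0 - W[i, j]) \
                      - D * (right - 2.0 * W[i, j] + left) / h**2
    return F.ravel()
```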
Example 4.
The motion problem for steering involves determining the optimal course for a vehicle while taking into consideration its dynamic constraints, such as the maximum turning radius and acceleration. Solving this problem is essential for establishing effective and safe navigation in practical settings, especially for driverless cars. The steering problem's mathematical equation of motion is defined by Tsoulos and Stavrakoudis [18] and Awawdeh [19], as provided below:
$$\begin{aligned}
&\Bigl[\beta_1\beta_2\sin\tau_\theta - \beta_3\bigl(\beta_2\cos\varphi_\theta + 1\bigr)\Bigr]\Bigl[\beta_1\beta_2\cos\tau_\theta - \beta_3\bigl(\beta_2\sin\varphi_\theta - \beta_3\bigr)\Bigr]\\
&\quad - \Bigl[G_\theta\bigl(\beta_2\cos\varphi_\theta + 1\bigr) - G_\theta\bigl(\beta_2\cos\tau_\theta - 1\bigr)\Bigr]^{2} - \Bigl[E_\theta\bigl(\beta_2\sin\tau_\theta - \beta_3\bigr) - G_\theta\bigl(\beta_2\sin\varphi_\theta - \beta_3\bigr)\Bigr]^{2} = 0, \quad \theta = 1, 2, 3,
\end{aligned}$$
where
$$G_\theta = \beta_2(\beta_3 - \beta_1)\sin(\tau_0) + \beta_2\cos(\tau_0)\bigl(\beta_3 - \beta_2\sin(\tau_\theta)\bigr) - \beta_2\cos(\tau_\theta) + \beta_1\beta_3,$$
$$E_\theta = \beta_1\beta_2\sin(\varphi_\theta) + \bigl(\beta_3 + \beta_2\cos(\varphi_\theta)\bigr)\cos(\varphi_0) - \bigl(\beta_3 - \beta_2\sin(\varphi_\theta)\bigr)\sin(\varphi_0).$$
The values of τ i , φ i (in radians) are provided below:
τ 0 = 1.3954 , τ 1 = 1.7444 , τ 2 = 2.0656 , τ 3 = 2.4601 , φ 0 = 1.7461 , φ 1 = 2.0364 , φ 2 = 2.2391 , φ 3 = 2.4601 .
The number of iterations, residual error, error difference between two consecutive iterations, initial guess, convergence order, and CPU timing of Example 4 are shown in Table 5. These values correspond to the initial approximation ( 0.88 , 0.68 , 0.64 ) t r .
Example 5.
A mathematics problem known as a boundary value problem (BVP) aims to solve a differential equation under certain boundary conditions. The values of the solution at the boundary of the domain where the problem is being solved are specified by the boundary conditions. One of the main areas of inquiry in Physics and Mathematics is solving BVPs. Therefore, we select the following boundary value problem (see [6]).
$$u'' + \mu^2 (u')^2 + 1 = 0$$
with $u(0) = 0$ and $u(1) = 1$. The interval $[0, 1]$ is divided into $k$ parts, which yields
$$\gamma_0 = 0 < \gamma_1 < \gamma_2 < \cdots < \gamma_{k-1} < \gamma_k = 1, \qquad \gamma_{\theta+1} = \gamma_\theta + h, \qquad h = \frac{1}{k}.$$
Then, we can choose $u_0 = u(\gamma_0) = 0$, $u_1 = u(\gamma_1), \ldots, u_{k-1} = u(\gamma_{k-1})$, and $u_k = u(\gamma_k) = 1$. We have
$$u_\theta' = \frac{u_{\theta+1} - u_{\theta-1}}{2h}, \qquad u_\theta'' = \frac{u_{\theta-1} - 2u_\theta + u_{\theta+1}}{h^2}, \qquad \theta = 1, 2, 3, \ldots, k-1.$$
This is achieved using a discretization technique. The following $(k-1) \times (k-1)$ nonlinear system of equations is thus obtained:
$$u_{\theta-1} - 2u_\theta + u_{\theta+1} + \frac{\mu^2}{4}\bigl(u_{\theta+1} - u_{\theta-1}\bigr)^2 + h^2 = 0.$$
For $k = 61$ and $\mu = \frac{1}{2}$, we have a nonlinear $60 \times 60$ system of equations. In Table 6, we present the iterations and the COC of Example 5. The corresponding discrete solution is
x * = 0.02756 , 0.05467 , 0.08133 , 0.1075 , 0.1333 , 0.1586 , 0.1836 , 0.2081 , 0.2321 , 0.2558 , 0.2791 , 0.3019 , 0.3244 , 0.3465 , 0.3682 , 0.3895 , 0.4104 , 0.4309769 , 0.4511 , 0.4709 , 0.4903 , 0.5094 , 0.5281 , 0.5465 , 0.5645 , 0.5822 , 0.5995 , 0.6165 , 0.6331 , 0.6494 , 0.6654 , 0.6810 , 0.6963 , 0.7113 , 0.7260 , 0.7403 , 0.7543 , 0.7680 , 0.7814 , 0.7945 , 0.8072 , 0.8197 , 0.8319 , 0.8437 , 0.8552 , 0.8665 , 0.8774 , 0.8881 , 0.8984 , 0.9085 , 0.9182 , 0.9277 , 0.9369 , 0.9457 , 0.9543 , 0.9626 , 0.9707 , 0.9784 , 0.9859 , 0.9931 t r .
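The discretized system is easy to assemble and cross-check against the solution vector above. A sketch (our own, using SciPy's general-purpose solver rather than method (2)):

```python
import numpy as np
from scipy.optimize import fsolve

k = 61
h = 1.0 / k
mu = 0.5

def bvp_residual(u):
    # interior unknowns u_1, ..., u_60; boundary values u_0 = 0, u_61 = 1
    full = np.concatenate(([0.0], u, [1.0]))
    um, uc, up = full[:-2], full[1:-1], full[2:]
    return um - 2.0 * uc + up + (mu**2 / 4.0) * (up - um)**2 + h**2

u0 = np.linspace(0.0, 1.0, k + 1)[1:-1]   # straight-line initial guess
sol = fsolve(bvp_residual, u0)
# should approximately reproduce (0.0276, 0.0547, 0.0813) and (0.9784, 0.9859, 0.9931)
print(sol[:3], sol[-3:])
```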
Example 6.
Solving large systems of nonlinear equations is a sophisticated problem that arises in many scientific and engineering applications. To illustrate, we choose to tackle the complexity presented by the ensuing system of nonlinear equations, characterized by an order of 600 × 600 :
$$F_j(x) = x_j^2\,x_{j+1} - 1 = 0, \quad 1 \leq j \leq 599, \qquad F_{600}(x) = x_{600}^2\,x_1 - 1 = 0. \qquad (26)$$
In the aforementioned system (26), $x^* = (1, 1, 1, \ldots, 1)^{tr}$ (600 ones) is the convergence point. Table 7 presents the numerical results based on the initial approximation $(1.1, 1.1, \ldots, 1.1)^{tr}$ (600 entries).
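The cyclic structure makes the Jacobian two-banded, so the system is cheap to assemble. A verification sketch of our own (plain Newton, for illustration only):

```python
import numpy as np

n = 600

def F(x):
    # F_j(x) = x_j^2 * x_{j+1} - 1, indices cyclic (x_{601} = x_1)
    return x**2 * np.roll(x, -1) - 1.0

def J(x):
    # dF_j/dx_j = 2 x_j x_{j+1}; dF_j/dx_{j+1} = x_j^2
    xs = np.roll(x, -1)
    Jm = np.diag(2.0 * x * xs)
    Jm[np.arange(n), (np.arange(n) + 1) % n] += x**2
    return Jm

x = np.full(n, 1.1)
for _ in range(8):
    x -= np.linalg.solve(J(x), F(x))
print(np.abs(x - 1.0).max())   # -> ~0; x* is the vector of 600 ones
```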

5. Conclusions

In this research article, we established the local and semilocal convergence of method (2), along with the corresponding radius of convergence. In addition, we derived computable upper error bounds that allow us to estimate the accuracy of the approximate solutions obtained with the method. Furthermore, we demonstrated the uniqueness of solutions produced by method (2) under certain conditions. We conclude that our research contributes to the development of more efficient and robust numerical methods for solving nonlinear equations in a variety of scientific and engineering applications. A similar methodology is applicable to other methods [2,8,9,10,11,12,13,14]; this can be the chosen course of action for upcoming projects.

Author Contributions

Conceptualization, R.B. and I.K.A.; methodology, R.B. and I.K.A.; software, R.B. and I.K.A.; validation, R.B. and I.K.A.; formal analysis, R.B. and I.K.A.; investigation, R.B. and I.K.A.; resources, R.B. and I.K.A.; data curation, R.B. and I.K.A.; writing—original draft preparation, R.B. and I.K.A.; writing—review and editing, R.B., I.K.A. and S.A.; visualization, R.B., I.K.A. and S.A.; supervision, R.B. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This study is supported via funding from Prince Sattam bin Abdulaziz University, project number (PSAU/2024/R/1445).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The author Sattam Alharbi wishes to thank Prince Sattam bin Abdulaziz University (project number PSAU/2024/R/1445) for its funding support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamom Press: Oxford, UK, 1982. [Google Scholar]
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  3. Argyros, I.K. Theory and Applications of Iterative Methods, 2nd ed.; Engineering Series; CRC Press/Taylor and Francis Group: Boca Raton, FL, USA, 2022. [Google Scholar]
  4. Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
  5. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001. [Google Scholar]
  6. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  7. Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2014, 92, 1921–1934. [Google Scholar]
  8. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Generating optimal derivative free iterative methods for nonlinear equations by using polynomial interpolation. Math. Comput. Mod. 2013, 57, 1950–1956. [Google Scholar] [CrossRef]
  9. Ezquerro, J.A.; Grau-Sánchez, M.; Hernández, M.A.; Noguera, M.; Romero, N. On iterative methods with accelerated convergence for solving systems of nonlinear equations. J. Optim. Theory Appl. 2011, 151, 163–174. [Google Scholar] [CrossRef]
  10. George, S.; Kanagaraj, K. Derivative free regularization method for nonlinear ill-posed equations in Hilbert scales. Comput. Methods Appl. Math. 2019, 19, 765–778. [Google Scholar] [CrossRef]
  11. Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J.M. On some computational orders of convergence. Appl. Math. Lett. 2010, 23, 472–478. [Google Scholar] [CrossRef]
  12. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224. [Google Scholar] [CrossRef]
  13. Shakhno, S.M. Convergence of the two-step combined method and uniqueness of the solution of nonlinear operator equations. J. Comput. Appl. Math. 2014, 261, 378–386. [Google Scholar] [CrossRef]
  14. Sharma, J.R.; Arora, H. A novel derivative free algorithm with seventh order convergence for solving systems of nonlinear equations. Numer. Algor. 2017, 67, 917–933. [Google Scholar] [CrossRef]
  15. Rheinboldt, W.C. An Adaptive Continuation Process for Solving Systems of Nonlinear Equations. Banach Cent. Publ. 1978, 3, 129–142. [Google Scholar] [CrossRef]
  16. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  17. Sauer, T. Numerical Analysis, 2nd ed.; Pearson: Upper Saddle River, NJ, USA, 2012. [Google Scholar]
  18. Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471. [Google Scholar] [CrossRef]
  19. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algor. 2010, 54, 395–409. [Google Scholar] [CrossRef]
Table 1. Radii of convergence for Example 1.

Cases of (2) | κ0 | r1 | κ1 | r2 | r3 | r4 | r5 | r
Case–1 | 0.58198 | 0.3827 | 0.4415 | 0.1966 | 0.1215 | – | – | 0.1215
Case–2 | 0.58198 | 0.3827 | 0.4415 | 0.1966 | 0.1215 | 0.1118 | – | 0.1118
Case–3 | 0.58198 | 0.3827 | 0.4415 | 0.1966 | 0.1215 | 0.1118 | 0.0923 | 0.0923
Table 2. Numerical results of solver (2) for Example 1.

Cases of (2) | x_0 | ‖F(x^(3))‖ | ‖x^(4) − x^(3)‖ | m | ρ | CPU timing
Case–1 | (0.11, 0.11, 0.11)^T | 1.2 × 10^(−174) | 1.2 × 10^(−174) | 4 | 5.0054 | 0.258927
Case–2 | (0.1, 0.1, 0.1)^T | 2.6 × 10^(−568) | 2.6 × 10^(−568) | 3 | 7.0023 | 0.157492
Case–3 | (0.08, 0.08, 0.08)^T | 8.6 × 10^(−2058) | 2.9 × 10^(−2058) | 3 | 9.0025 | 0.463888
Table 3. Radii of solver (2) for Example 2.

Cases | κ0 | r1 | κ1 | r2 | r3 | r4 | r5 | r
Case–1 | 0.2222 | 0.1111 | 0.1481 | 0.07127 | 0.05056 | – | – | 0.05056
Case–2 | 0.2222 | 0.1111 | 0.1481 | 0.07127 | 0.05056 | 0.04729 | – | 0.04729
Case–3 | 0.2222 | 0.1111 | 0.1481 | 0.07127 | 0.05056 | 0.04729 | 0.04522 | 0.04522
Table 4. Numerical results of solver (2) for Example 3.

Cases of (2) | ‖F(x^(3))‖ | ‖x^(4) − x^(3)‖ | m | ρ | CPU timing
Case–1 | 7.0 × 10^(−113) | 5.5 × 10^(−114) | 4 | 5.0066 | 93.2924
Case–2 | 5.7 × 10^(−306) | 2.6 × 10^(−307) | 3 | 7.0602 | 30.7952
Case–3 | 2.0 × 10^(−646) | 7.1 × 10^(−648) | 3 | 9.0338 | 62.7814
Table 5. Numerical results of solver (2) for Example 4.

Cases of (2) | ‖F(x^(3))‖ | ‖x^(4) − x^(3)‖ | m | ρ | CPU timing
Case–1 | 2.9 × 10^(−75) | 2.3 × 10^(−73) | 4 | 4.9945 | 0.401338
Case–2 | 7.9 × 10^(−173) | 7.8 × 10^(−171) | 4 | 6.9966 | 0.924285
Case–3 | 7.6 × 10^(−331) | 8.6 × 10^(−329) | 2 | 9.0008 | 0.358374

The solver (2) converges to x* = (0.9051, 0.6977, 0.6508)^tr.
Table 6. Numerical results of solver (2) for Example 5.

Cases of (2) | ‖F(x^(3))‖ | ‖x^(4) − x^(3)‖ | m | ρ | CPU timing
Case–1 | 1.3 × 10^(−139) | 1.5 × 10^(−137) | 4 | 5.0680 | 969.673
Case–2 | 5.4 × 10^(−384) | 1.2 × 10^(−381) | 3 | 7.0599 | 941.422
Case–3 | 3.8 × 10^(−782) | 7.8 × 10^(−781) | 3 | 9.1715 | 1197.97

The solver (2) converges to the approximated root.
Table 7. Numerical results of solver (2) for Example 6.

Cases of (2) | ‖F(x^(m))‖ | ‖x^(m+1) − x^(m)‖ | m | ρ | CPU timing
Case–1 | 9.9 × 10^(−275) | 3.3 × 10^(−275) | 3 | 6.1883 | 5065.5
Case–2 | 1.4 × 10^(−886) | 4.8 × 10^(−887) | 3 | 9.1288 | 6889.65
Case–3 | 9.5 × 10^(−2057) | 3.2 × 10^(−2057) | 3 | 12.098 | 8694.25
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
