Article

Hybrid Newton-like Inverse Free Algorithms for Solving Nonlinear Equations

by Ioannis K. Argyros 1,*, Santhosh George 2, Samundra Regmi 3 and Christopher I. Argyros 4

1 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematical & Computational Science, National Institute of Technology Karnataka, Surathkal 575025, India
3 Department of Mathematics, University of Houston, Houston, TX 77205, USA
4 School of Computational Science and Engineering, Georgia Institute of Technology, North Avenue, Atlanta, GA 30332, USA
* Author to whom correspondence should be addressed.
Algorithms 2024, 17(4), 154; https://doi.org/10.3390/a17040154
Submission received: 7 March 2024 / Revised: 30 March 2024 / Accepted: 8 April 2024 / Published: 10 April 2024
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)

Abstract

Iterative algorithms that require the inversion of linear operators, which is in general computationally expensive, are difficult to implement. This is the reason why hybrid Newton-like algorithms without inverses are developed in this paper to solve Banach space-valued nonlinear equations. The inverse of the linear operator is replaced by a finite sum of fixed linear operators. Two types of convergence analysis are presented for these algorithms: semi-local and local. The Fréchet derivative of the operator in the equation is controlled by a majorant function. The semi-local analysis also relies on majorizing sequences. The celebrated contraction mapping principle is utilized to study the convergence of the Krasnoselskij-like algorithm. The numerical experimentation demonstrates that the new algorithms are essentially as effective as the Newton-like algorithms but less expensive to implement. Although the new approach is demonstrated for Newton-like algorithms, it can be applied along the same lines to other single-step, multistep, or multipoint algorithms that use inverses of linear operators.

1. Introduction

Let $T_1, T_2$ stand for Banach spaces, $\Omega \subseteq T_1$ be an open and convex set, and $G : \Omega \to T_2$ denote an operator that is differentiable in the sense of Fréchet [1,2,3,4].
A plethora of applications from diverse areas of research, such as Optimization and Computational Sciences, are reduced using mathematical modeling [5,6,7,8,9,10,11,12,13,14] to the problem of locating a solution $x^*$ of a nonlinear equation like
$G(x) = 0. \qquad (1)$
A closed-form expression for a solution $x^* \in \Omega$ is possible only in special cases. Consequently, most solution approaches utilized by researchers and practitioners are iterative, generating a sequence that approximates the solution $x^*$.
The algorithm of successive substitutions, also called the algorithm of iteration or the Picard algorithm, is a simple and important algorithm for solving linear as well as nonlinear equations. This algorithm originated in antiquity, appearing in the writings of Heron of Alexandria (first century A.D.) in relation to root extraction. Later, Cauchy, as well as Picard, employed this algorithm to assure the existence of solutions of differential equations. It is defined for $P : T_1 \to T_1$ and each $n = 0, 1, 2, \ldots$ by
$z_0 \in \Omega, \quad z_{n+1} = P(z_n).$
Banach inaugurated the abstract formulation of this algorithm, followed by Caccioppoli and Weissinger. We refer the readers to Berinde [15], Krasnoselskij [16], and Kantorovich et al. [3] for further information on the convergence conditions (see also [17,18,19,20,21,22,23]). Concerning the convergence order of this algorithm, it is only linear. Thus, there is a need for introducing algorithms of convergence order higher than one.
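To fix ideas, the following minimal sketch (ours, not taken from the cited references) runs the Picard algorithm on the classical scalar fixed-point problem $z = \cos z$; the map, tolerance, and iteration cap are illustrative choices only.

import math

def picard(P, z0, tol=1e-12, max_iter=200):
    # Successive substitutions: z_{n+1} = P(z_n)
    z = z0
    for n in range(1, max_iter + 1):
        z_new = P(z)
        if abs(z_new - z) < tol:
            return z_new, n
        z = z_new
    return z, max_iter

z_star, iterations = picard(math.cos, 1.0)
print(z_star, iterations)   # reaches the fixed point of cos, but only linearly

The slow decrease of the error per step in such a run illustrates the linear order of convergence mentioned above.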
Newton's algorithm is without a doubt the most well-known algorithm of convergence order two for solving transcendental as well as scalar equations. It is written for each $n = 0, 1, 2, \ldots$ as
$x_0 \in \Omega, \quad x_{n+1} = x_n - G'(x_n)^{-1} G(x_n),$
where $G'$ denotes the derivative of the operator $G$ in the sense of Fréchet [8,24,25]. The construction of Newton's algorithm is based on linearization. Let $x_0 \in \Omega$ be an initial point. If the operator $G$ is Fréchet-differentiable, one can write the Ostrowski [30] representation
$G(x) = G(x_0) + G'(x_0)(x - x_0) + d(x, x_0),$
where $d(x, x_0) = o(\|x - x_0\|)$ as $\|x - x_0\| \to 0$. If $x^* \in \Omega$ is a solution of Equation (1), it follows from the preceding representation that
$G(x_0) + G'(x_0)(x^* - x_0) = -d(x^*, x_0).$
If $x^*$ is near to $x_0$, then one can neglect the supposedly small quantity $d(x^*, x_0)$, leading to the linear equation
$G(x_0) + G'(x_0)(x - x_0) = 0.$
It is said that this equation is obtained from Equation (1) by the technique of linearization, or the tangent algorithm, since it is the equation of the tangent line to the curve $y = G(x)$ at the point $(x_0, G(x_0))$ provided that $G$ is a real function. It follows from this linearization that the linear equation has a unique solution $x_1$ if $G'(x_0)^{-1}$ exists. In this case, we can write
$x_1 = x_0 - G'(x_0)^{-1} G(x_0).$
If $x_0$ is close to $x^*$, then perhaps $x_1$ is even closer. Thus, this process can be repeated with $x_1$ replacing $x_0$, leading to Newton's algorithm. A main drawback in the implementation of Newton's algorithm is the inversion of the linear operator $G'(x_n)$ at each step of the iteration. We address this issue in this paper. Our methodology applies along the same lines to all single-step, multistep, and quasi-Newton methods [5,10,20,26,27] that use inverses. We shall demonstrate our methodology on a large class of algorithms involving inverses, including Newton's algorithm as a special case.
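For reference, here is a minimal sketch (an illustration of the classical iteration, not the authors' code) of the scalar Newton algorithm; the test equation $x^2 - 2 = 0$ is an arbitrary choice.

def newton(G, dG, x0, tol=1e-12, max_iter=50):
    # x_{n+1} = x_n - G'(x_n)^{-1} G(x_n); in the scalar case the "inversion"
    # of G'(x_n) is a division, but for operators it is a linear solve per step.
    x = x0
    for n in range(1, max_iter + 1):
        step = G(x) / dG(x)
        x = x - step
        if abs(step) < tol:
            return x, n
    return x, max_iter

root, iterations = newton(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0)
print(root, iterations)   # converges quadratically to sqrt(2)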
Let us consider the popular Newton-like algorithm defined for $y_0 \in \Omega$ and each $n = 0, 1, 2, \ldots$ by
$y_0 \in \Omega, \quad y_{n+1} = y_n - L_n^{-1} G(y_n). \qquad (2)$
Here, $L : \Omega \to L(T_1, T_2)$, the space of bounded linear operators mapping the space $T_1$ into $T_2$, and by $L_n$ in (2) we mean $L_n = L(y_n)$. Notice that for $L_n = G'(y_0)$ or $L_n = G'(y_n)$, we obtain the modified Newton and Newton's algorithm, respectively. There are two types of convergence usually studied for iterative algorithms: semi-local and local analysis. The former uses conditions on the initial guess $y_0$ and the operator $G$, and the solution $y^*$ is found in a neighborhood of $y_0$. The latter differs from the former since the convergence conditions depend on $y^*$ and demonstrate how difficult it is to choose the initial guess $y_0$. The main challenge of local analysis is that $y^*$ is usually unknown. Numerous papers have been presented dealing with the semi-local as well as the local analysis of convergence for the Newton-like algorithm (2) [2,24,25,26,28]. The convergence conditions involve Lipschitz–Hölder or generalized continuity conditions utilized to control the Fréchet derivative $G'$ of the operator $G$. The inversion of the linear operator at each step is computationally expensive or impossible in general. To essentially retain the algorithm but avoid the inverse, we replace it with a finite sum of linear operators related to $L$ as follows:
Suppose:
There exists $\Delta \in L(T_1, T_2)$ such that $\Delta^{-1} \in L(T_2, T_1)$ and $(I - \Delta^{-1}(\Delta - L(y)))^{-1} \in L(T_1, T_1)$. Then, the Newton-like algorithm (2) can be rewritten as
$y_0 \in \Omega, \quad y_{n+1} = y_n - [I - \Delta^{-1}(\Delta - L_n)]^{-1} \Delta^{-1} G(y_n), \qquad (3)$
where $I$ denotes the identity operator. The iterates of algorithms (2) and (3) coincide, since
$[I - \Delta^{-1}(\Delta - L_n)]^{-1} \Delta^{-1} = [\Delta (I - \Delta^{-1}(\Delta - L_n))]^{-1} = L_n^{-1}.$
Even if we replace algorithm (2) with algorithm (3), we still need to invert the linear operator $I - \Delta^{-1}(\Delta - L_n)$ at each step of the iteration. But we can avoid this inversion if we introduce, for $k$ a fixed natural number, the operators $\Gamma = \Gamma(x) = \Delta^{-1}(\Delta - L(x))$ and
$A_k(x) = A = I + \Gamma + \Gamma^2 + \ldots + \Gamma^k.$
Then, consider the replacement of algorithm (2) defined for $x_0 = y_0 \in \Omega$ by
$x_0 \in \Omega, \quad x_{n+1} = x_n - A \Delta^{-1} G(x_n). \qquad (4)$
Algorithm (4) requires the inversion of only the frozen linear operator $\Delta$, which can be computed once and reused at each step. Notice also that $A$ is a linear operator. By letting $k \to +\infty$, if $\lim_{k \to +\infty} A_k = \bar{L}$ exists, then
$L_n^{-1} = \bar{L} \, \Delta^{-1}.$
The condition
$\|\Delta^{-1}(\Delta - L(x))\| < 1 \quad \text{for each } x \in \Omega$
guarantees the existence of this limit [5,8,16]. A possible choice is $\Delta = I$. If $T_1 = T_2 = \mathbb{R}^i$, $i$ a natural number, and $H$ denotes the Hessian of the operator $G$, then we can choose $\Delta = H(x_0)$ (semi-local case) or $\Delta = H(x^*)$ (local case). The choice $\Delta = H(\bar{x})$ has been considered in [29], where $\bar{x} \in \Omega$ is an auxiliary point. In the more general setting of Banach spaces, $\Delta \in L(T_1, T_2)$, the space of bounded linear operators from $T_1$ into $T_2$. As a further example, if $L_n = G'(x_n)$, then $\Delta = G'(x_0)$ (semi-local case), $\Delta = G'(x^*)$ (local case), or $\Delta = G'(\tilde{x})$ for an auxiliary point $\tilde{x}$. Other choices for $\Delta$ are possible as long as they satisfy the convergence conditions ($H_4$) and ($H_5$) (semi-local case) and ($C_1$) and ($C_2$) (local case) (see Section 2 and Section 3, respectively).
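As an illustration of the construction above, the following sketch (assuming NumPy and user-supplied G and its derivative dG; it is not the authors' implementation) performs one step of algorithm (4): the frozen operator $\Delta$ is the only operator that would ever need to be factored, and $A_k \Delta^{-1} G(x)$ is accumulated from powers of $\Gamma$.

import numpy as np

def hybrid_step(G, dG, x, Delta, k):
    # Gamma = Delta^{-1}(Delta - G'(x)); np.linalg.solve is used for brevity,
    # but in practice Delta would be LU-factored once and reused at every step.
    Gamma = np.linalg.solve(Delta, Delta - dG(x))
    v = np.linalg.solve(Delta, G(x))          # v = Delta^{-1} G(x)
    correction = v.copy()                     # Gamma^0 term
    power = v.copy()
    for _ in range(k):
        power = Gamma @ power                 # Gamma^i Delta^{-1} G(x)
        correction += power
    return x - correction                     # x - A_k Delta^{-1} G(x)

With the choice $\Delta = I$, the two solves disappear entirely and only matrix-vector products remain.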
We also study the Krasnoselskij-like, or Picard-like, algorithm [15,16]
$x_0 \in \Omega, \quad x_{n+1} = P_k(x_n), \qquad (5)$
where
$P_k(x) = P(x) = (1 - \lambda) x - \lambda (A \Delta^{-1} G(x) - x), \quad \lambda \in (0, 1],$
for locating fixed points. If $\lambda = 1$, then algorithm (5) reduces to algorithm (4). The semi-local analysis of convergence for algorithm (4) relies on majorizing sequences [3,24,25], whereas the analysis for algorithm (5) depends on the celebrated contraction mapping principle [3,17,21,22].
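A relaxed variant is obtained by averaging the current iterate with the inverse-free step; the sketch below (reusing hybrid_step from the previous sketch; the default relaxation parameter 0.5 is an arbitrary choice) implements algorithm (5).

import numpy as np

def krasnoselskij_like(G, dG, x0, Delta, k, lam=0.5, tol=1e-10, max_iter=200):
    # P(x) = (1 - lambda) x + lambda (x - A_k Delta^{-1} G(x)); lambda = 1 recovers algorithm (4)
    x = np.asarray(x0, dtype=float)
    for n in range(1, max_iter + 1):
        t = hybrid_step(G, dG, x, Delta, k)
        x_new = (1.0 - lam) * x + lam * t
        if np.linalg.norm(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter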
The preceding reasoning justifies the study of the semi-local and local analysis of convergence appearing in this paper. The rest of the paper is organized as follows: In Section 2 and Section 3, we develop the semi-local and local analysis of convergence for algorithm (4), respectively. The convergence of the Krasnoselskij-like algorithm is presented in Section 4. The error analysis relating the new iterates to the Newton-like ones is given in Section 5. The numerical experimentations demonstrating the efficiency of the new hybrid algorithms are provided in Section 6. Concluding remarks and directions of future research complete this paper in Section 7.

2. Semi-Local Analysis

Throughout this paper, we use the symbol $U(x, \alpha)$ to denote the open ball centered at $x \in T_1$ with radius $\alpha > 0$, and $U[x, \alpha]$ is the closure of $U(x, \alpha)$. The following Banach lemma is used to prove our results.
Theorem 1 
(Banach Lemma on Invertible Operators [3,15]). Let $M$ be a bounded linear operator in $T_1$. Then, $M^{-1}$ exists if and only if there is a bounded linear operator $M_1$ in $T_1$ such that $M_1^{-1}$ exists and
$\|I - M_1 M\| < 1.$
If $M^{-1}$ exists, then
$M^{-1} = \sum_{n=0}^{\infty} (I - M_1 M)^n M_1$
and
$\|M^{-1}\| \le \dfrac{\|M_1\|}{1 - \|I - M_1 M\|}.$
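A small numerical illustration of Theorem 1 (with arbitrary test matrices, not data from the paper): when $\|I - M_1 M\| < 1$, the partial sums of the series above converge to $M^{-1}$, which is precisely the mechanism exploited by the operators $A_k$ later on.

import numpy as np

M = np.array([[4.0, 1.0], [1.0, 3.0]])
M1 = np.diag(1.0 / np.diag(M))               # a crude approximate inverse of M
E = np.eye(2) - M1 @ M
print(np.linalg.norm(E, 2))                  # about 0.33, so the lemma applies

approx_inv = np.zeros((2, 2))
term = np.eye(2)
for n in range(40):
    approx_inv += term @ M1                  # accumulate (I - M1 M)^n M1
    term = term @ E
print(np.linalg.norm(approx_inv - np.linalg.inv(M)))   # essentially zero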
Further, we use majorizing sequences to prove the semi-local convergence. Recall the definition of a majorizing sequence.
Definition 1 
([3,5]). Let $\{x_n\}$ be a sequence in a normed space $X$. Then, a nonnegative scalar sequence $\{v_n\}$ for which
$\|x_{n+1} - x_n\| \le v_{n+1} - v_n \quad \text{for each } n \ge 0$
holds is a majorizing sequence for $\{x_n\}$. Note that any majorizing sequence is necessarily nondecreasing. Moreover, if the sequence $\{v_n\}$ converges, then $\{x_n\}$ converges too, and for $v^* = \lim_{n \to \infty} v_n$,
$\|x^* - x_n\| \le v^* - v_n.$
Hence, the study of the convergence of the sequence $\{x_n\}$ reduces to that of $\{v_n\}$.
Let $S = [0, +\infty)$.
Suppose:
(H1)
There exist parameters $\delta \ge 0$, $\gamma \in [0, \tfrac{1}{2})$, an element $x_0 \in \Omega$, and an invertible operator $\Delta$ such that
$\|A \Delta^{-1} G(x_0)\| \le \delta.$
(H2)
There exists a function $w_0 : S \to S$, which is nondecreasing as well as continuous (FNDC), such that the equation $w_0(t) - 1 = 0$ has a smallest positive solution (SPS). Denote such a solution by $\rho$ and set $S_0 = [0, \rho)$.
(H3)
There exist (FNDC) $w : S_0 \to S$ and $w_1 : S_0 \to S$. Define the scalar sequence $\{\alpha_n\}$ for $\alpha_0 = 0$, $\alpha_1 = \delta$, some $\gamma \ge 0$, $\bar{b} \ge 0$, and each $n = 0, 1, 2, \ldots$ by
$q_{n+1} = (1 + \gamma + \ldots + \gamma^k) \left( \int_0^1 w((1 - \tau)(\alpha_{n+1} - \alpha_n)) \, d\tau + w_1(\alpha_n) + \bar{b} \gamma^{k+1} \right),$
and
$\alpha_{n+2} = \alpha_{n+1} + q_{n+1} (\alpha_{n+1} - \alpha_n). \qquad (7)$
The sequence $\{\alpha_n\}$ is shown to be majorizing for algorithm (4) in Theorem 2. But first, the convergence conditions are given for the sequence $\{\alpha_n\}$.
(H4)
There exists $\rho_0 \in [0, \rho)$ such that for each $n = 0, 1, 2, \ldots$
$q_{n+1} < 1 \quad \text{and} \quad \alpha_n \le \rho_0.$
This condition and the formula (7) imply that for each $n = 0, 1, 2, \ldots$
$0 \le \alpha_n \le \alpha_{n+1} \le \rho_0$
and there exists $\alpha \in [0, \rho_0]$ such that $\lim_{n \to +\infty} \alpha_n = \alpha$. The parameter $\alpha$ is the unique least upper bound of the sequence $\{\alpha_n\}$.
It is worth noting that this sequence can be computed a priori and relates to the initial approximation. Such conditions are weaker than the usual convergence conditions given as functions of the starting point [3,4,5].
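For instance, under illustrative linear choices of the majorant functions (an assumption for demonstration only, not data from the paper), the sequence $\{\alpha_n\}$ of condition ($H_3$) can be generated a priori as in the following sketch.

def majorizing_sequence(w, w1, delta, gamma, k, n_terms=25):
    # alpha_0 = 0, alpha_1 = delta; q_{n+1} uses a simple average of w((1 - tau) h)
    # over tau in [0, 1] as a quadrature of the integral appearing in (H3).
    b = gamma * (1.0 - gamma**k) / (1.0 - gamma)
    bbar = 1.0 / (1.0 - b)
    geo = sum(gamma**i for i in range(k + 1))          # 1 + gamma + ... + gamma^k
    alphas = [0.0, delta]
    for _ in range(n_terms):
        h = alphas[-1] - alphas[-2]
        integral = sum(w((1.0 - j / 100.0) * h) for j in range(101)) / 101.0
        q = geo * (integral + w1(alphas[-2]) + bbar * gamma**(k + 1))
        alphas.append(alphas[-1] + q * h)
    return alphas

alphas = majorizing_sequence(w=lambda t: 2.0 * t, w1=lambda t: 0.1 * t,
                             delta=0.05, gamma=0.2, k=2)
print(alphas[-1])   # approximates the limit alpha used in Theorem 2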
Next, we relate the scalar sequences and functions w 0 , w to the operators on algorithm (4).
(H5)
$\|\Delta^{-1}(G'(v) - \Delta)\| \le w_0(\|v - x_0\|)$ for each $v \in \Omega$.
Set $\Omega_0 = U(x_0, \rho) \cap \Omega$.
(H6)
$\|\Delta^{-1}(G'(v_2) - G'(v_1))\| \le w(\|v_2 - v_1\|), \quad \|\Delta^{-1}(\Delta - L(v))\| \le \gamma,$
and
$\|\Delta^{-1}(G'(v) - L(v))\| \le w_1(\|v - x_0\|)$
for each $v, v_1, v_2 \in \Omega_0$.
Set $b = \gamma \dfrac{1 - \gamma^k}{1 - \gamma}$ and $\bar{b} = \dfrac{1}{1 - b}$.
and
(H7)
$U[x_0, \alpha] \subset \Omega$.
The conditions ($H_1$)–($H_5$) and the developed notations are utilized to show the main semi-local analysis of convergence for algorithm (4).
Theorem 2. 
Suppose that the conditions ($H_1$)–($H_5$) hold. Then, if the initial guess $x_0 \in \Omega$, the following assertions hold for the sequence $\{x_n\}$ generated by algorithm (4):
$\{x_n\} \subset U(x_0, \alpha)$
and there exists a solution $x^* \in U[x_0, \alpha]$ of the equation $G(x) = 0$ such that
$\|x^* - x_n\| \le \alpha - \alpha_n,$
where the sequence $\{\alpha_n\}$ is given by the formula (7) and $\alpha$ is its limit.
Proof. 
Notice that all the iterates $\{x_m\}$ ($m$ a natural number) of algorithm (4) are well defined. We present a proof based on mathematical induction. In particular, we show that for each $n = 0, 1, 2, \ldots$
$\|x_{n+1} - x_n\| \le \alpha_{n+1} - \alpha_n.$
Assertion (9) holds if $n = 0$, by (4), (7), and the condition ($H_1$), since
$\|x_1 - x_0\| = \|A \Delta^{-1} G(x_0)\| \le \alpha_1 - \alpha_0 < \alpha.$
It also follows that the iterate $x_1 \in U(x_0, \alpha)$. Next, we show that the linear operator $A = A(x_m)$ is invertible by using the restrictions on $\gamma, b, \bar{b}$ given in the condition ($H_5$):
$\|I - A(x_m)\| = \|\Gamma + \Gamma^2 + \ldots + \Gamma^k\| \le \|\Gamma\| + \|\Gamma\|^2 + \ldots + \|\Gamma\|^k = \|\Gamma\| \dfrac{1 - \|\Gamma\|^k}{1 - \|\Gamma\|} \le \gamma \dfrac{1 - \gamma^k}{1 - \gamma} = b < 1,$
since $\gamma \in [0, \tfrac{1}{2})$ by (11) and the condition ($H_1$). Inequality (11) and Theorem 1 assure the invertibility of the operator $A$, and
$\|A^{-1}\| \le \dfrac{1}{1 - b} = \bar{b}.$
Then, we can write by algorithm (4), in turn, that
$G(x_{m+1}) = G(x_{m+1}) - G(x_m) - \Delta A^{-1}(x_{m+1} - x_m)$
$= G(x_{m+1}) - G(x_m) - G'(x_m)(x_{m+1} - x_m) + (G'(x_m) - L_m)(x_{m+1} - x_m) + (L_m - \Delta A^{-1})(x_{m+1} - x_m)$
$= G(x_{m+1}) - G(x_m) - G'(x_m)(x_{m+1} - x_m) + (G'(x_m) - L_m)(x_{m+1} - x_m) + (L_m A - \Delta) A^{-1}(x_{m+1} - x_m).$
But we have
$L_m A - \Delta = L_m (I + \Gamma + \Gamma^2 + \ldots + \Gamma^k) - \Delta$
$= L_m - \Delta + (\Delta - \Delta + L_m)(\Gamma + \Gamma^2 + \ldots + \Gamma^k)$
$= L_m - \Delta + \Delta(\Gamma + \Gamma^2 + \ldots + \Gamma^k) - (\Delta - L_m)(\Gamma + \Gamma^2 + \ldots + \Gamma^k)$
$= L_m - \Delta + \Delta \Gamma + \Delta(\Gamma^2 + \ldots + \Gamma^k) - (\Delta - L_m)(\Gamma + \Gamma^2 + \ldots + \Gamma^k)$
$= \Delta(\Gamma^2 + \ldots + \Gamma^k) - (\Delta - L_m)(\Gamma + \Gamma^2 + \ldots + \Gamma^k),$
since $L_m - \Delta + \Delta \Gamma = 0$ by the definition of $\Gamma$. Thus, we obtain from (14)
$\Delta^{-1}(L_m A - \Delta) = \Gamma^2 + \ldots + \Gamma^k - \Gamma(\Gamma + \Gamma^2 + \ldots + \Gamma^k) = -\Gamma^{k+1}.$
Then, it follows by (13), (15), the conditions ($H_4$), ($H_5$), the induction hypotheses, and the triangle inequality, in turn, that
$\|\Delta^{-1} G(x_{m+1})\| \le \int_0^1 w((1 - \tau)\|x_{m+1} - x_m\|) \, d\tau \, \|x_{m+1} - x_m\| + w_1(\|x_m - x_0\|) \|x_{m+1} - x_m\| + \bar{b} \gamma^{k+1} \|x_{m+1} - x_m\|$
$\le \left( \int_0^1 w((1 - \tau)(\alpha_{m+1} - \alpha_m)) \, d\tau + w_1(\alpha_m) + \bar{b} \gamma^{k+1} \right) (\alpha_{m+1} - \alpha_m).$
Consequently, by algorithms (4), (7), and (16), we obtain in turn that
$\|x_{m+2} - x_{m+1}\| = \|A(x_{m+1}) \Delta^{-1} G(x_{m+1})\| \le \|A(x_{m+1})\| \, \|\Delta^{-1} G(x_{m+1})\|$
$\le (1 + \gamma + \ldots + \gamma^k) \left[ \int_0^1 w((1 - \tau)(\alpha_{m+1} - \alpha_m)) \, d\tau + w_1(\alpha_m) + \bar{b} \gamma^{k+1} \right] (\alpha_{m+1} - \alpha_m) = \alpha_{m+2} - \alpha_{m+1},$
and
$\|x_{m+2} - x_0\| \le \|x_{m+2} - x_{m+1}\| + \|x_{m+1} - x_0\| \le \alpha_{m+2} - \alpha_{m+1} + \alpha_{m+1} - \alpha_0 = \alpha_{m+2} < \alpha.$
Hence, assertion (10) holds and the iterate $x_{m+2} \in U(x_0, \alpha)$. By the condition ($H_4$), the sequence $\{\alpha_m\}$ is Cauchy as convergent to $\alpha$. Therefore, by (10) the sequence $\{x_m\}$ is also Cauchy in the Banach space $T_1$, and as such it converges to some $x^* \in U[x_0, \alpha]$ (since $U[x_0, \alpha]$ is a closed set). By sending $m \to +\infty$ in (16), and using the continuity of the operator $G$, we deduce that $G(x^*) = 0$. Finally, the estimate, for $i$ a natural number,
$\|x_{m+i} - x_m\| \le \alpha_{m+i} - \alpha_m,$
shows (9) as $i \to +\infty$. □
Next, a set is specified that contains only one solution of the equation G ( x ) = 0 .
Proposition 1. 
Suppose: there exists a solution $y \in U(x_0, \rho_1)$ of the equation $G(x) = 0$ for some $\rho_1 > 0$; the condition ($H_4$) holds in the ball $U(x_0, \rho_1)$; and there exists $\rho_2 \ge \rho_1$ such that
$\int_0^1 w_0(\tau \rho_1 + (1 - \tau) \rho_2) \, d\tau < 1.$
Set $\Omega_2 = U[x_0, \rho_2] \cap \Omega$. Then, the element $y$ is the only solution of the equation $G(x) = 0$ in the set $\Omega_2$.
Proof. 
Suppose that there exists a solution $z \in \Omega_2$ of the equation $G(x) = 0$ with $z \ne y$. Define the linear operator $E = \int_0^1 G'(y + \tau(z - y)) \, d\tau$. By using the condition ($H_4$) and (17), we obtain in turn
$\|\Delta^{-1}(E - \Delta)\| \le \int_0^1 w_0(\tau \|y - x_0\| + (1 - \tau) \|z - x_0\|) \, d\tau \le \int_0^1 w_0(\tau \rho_1 + (1 - \tau) \rho_2) \, d\tau < 1.$
Thus, the linear operator $E^{-1} \in L(T_2, T_1)$, and from the identity we obtain
$z - y = E^{-1}(G(z) - G(y)) = E^{-1}(0) = 0.$
Therefore, we conclude that $z = y$. □
Remark 1. 
(1) 
The limit point $\alpha$ can be replaced by $\rho$ in the condition ($H_6$).
(2) 
If all the conditions ($H_1$)–($H_6$) hold in Proposition 1, take $y = x^*$ and $\rho_2 = \alpha$.
(3) 
The second hypothesis in the condition ($H_5$) can be replaced as follows: suppose that there exists (FNDC) $w_2 : S \to S$ such that the equation $w_2(t) - 1 = 0$ has an SPS. Denote such a solution by $\rho_3$, and set $\gamma \in [0, w_2(\rho_3))$. Then, $\gamma \in [0, 1)$. In this case, set $\bar{\alpha} = \min\{\alpha, \rho_3\}$, and replace $\alpha$ by $\bar{\alpha}$ in the condition ($H_6$).
(4) 
The results for algorithm (3) or (2) can be obtained if we let $k \to +\infty$ in Theorem 2. A possible choice is $k = n$, although a smaller value $k = 1$ or $k = 2$ is preferred to reduce the computational cost (see also the numerical Section 6).

3. Local Analysis

In this section, we exchange the roles of $x_0$ and $x^*$ and replace the "$w$" functions with "$\psi$" functions, respectively. The computations are otherwise similar.
Suppose:
(C1)
There exist a solution $x^* \in \Omega$ of the equation $G(x) = 0$ and an invertible operator $\Delta \in L(T_1, T_2)$ such that for each $x \in \Omega$
$\int_0^1 \|\Delta^{-1}(G'(x^* + \tau(x - x^*)) - \Delta)\| \, d\tau \le \int_0^1 \psi(\tau \|x - x^*\|) \, d\tau,$
for some (FNDC) $\psi : S \to S$.
(C2)
There exists $\gamma \in [0, \tfrac{1}{2})$ such that
$\|\Delta^{-1}(\Delta - L(x))\| \le \gamma.$
Define the function $g : S \to S$ by
$g(t) = \dfrac{1 - \gamma^{k+1}}{1 - \gamma} \int_0^1 \psi(\tau t) \, d\tau + \gamma \dfrac{1 - \gamma^k}{1 - \gamma}.$
(C3)
The equation $g(t) - 1 = 0$ has an SPS. Denote such a solution by $\rho_4$.
and
(C4)
$U[x^*, \rho_4] \subset \Omega$.
Theorem 3. 
Suppose that the conditions ($C_1$)–($C_4$) hold. Then, the sequence $\{x_n\}$ with initial guess $x_0 \in U(x^*, \rho_4) \setminus \{x^*\}$ exists in $U(x^*, \rho_4)$, stays in $U(x^*, \rho_4)$, and converges to $x^*$ such that
$\|x_{n+1} - x^*\| \le g(\|x_n - x^*\|) \|x_n - x^*\| \le \|x_n - x^*\| < \rho_4.$
Proof. 
Assertion (18) is shown by mathematical induction. Notice that all the iterates $x_m$ exist by algorithm (4). We can also write, in turn, that
$x_{m+1} - x^* = x_m - x^* - A \Delta^{-1} G(x_m)$
$= x_m - x^* - A \Delta^{-1} \int_0^1 G'(x^* + \tau(x_m - x^*)) \, d\tau \, (x_m - x^*)$
$= \left[ I - A \Delta^{-1} \int_0^1 \left( G'(x^* + \tau(x_m - x^*)) - \Delta + \Delta \right) d\tau \right] (x_m - x^*)$
$= \left[ (I - A) - A \Delta^{-1} \int_0^1 \left( G'(x^* + \tau(x_m - x^*)) - \Delta \right) d\tau \right] (x_m - x^*).$
But we have, as in the semi-local case,
$\|A\| \le \dfrac{1 - \gamma^{k+1}}{1 - \gamma},$
and
$\|I - A\| \le \gamma \dfrac{1 - \gamma^k}{1 - \gamma}.$
Hence, (19) turns into
$x_{m+1} - x^* = \left[ (I - A) - A \int_0^1 \Delta^{-1} \left( G'(x^* + \tau(x_m - x^*)) - \Delta \right) d\tau \right] (x_m - x^*).$
Using the conditions ($C_1$)–($C_3$), we obtain
$\|x_{m+1} - x^*\| \le \left[ \gamma \dfrac{1 - \gamma^k}{1 - \gamma} + \dfrac{1 - \gamma^{k+1}}{1 - \gamma} \int_0^1 \psi(\tau \|x_m - x^*\|) \, d\tau \right] \|x_m - x^*\|$
$= g(\|x_m - x^*\|) \|x_m - x^*\| \le \|x_m - x^*\| < \rho_4.$
Thus, assertion (18) holds and the iterate $x_{m+1} \in U(x^*, \rho_4)$. Then, by (22), we obtain
$\|x_{m+1} - x^*\| \le d \|x_m - x^*\| \le d^{m+1} \|x_0 - x^*\|,$
where $d = g(\|x_0 - x^*\|) \in [0, 1)$. Therefore, we deduce from (23) that $\lim_{m \to +\infty} x_m = x^*$, and the iterate $x_{m+1} \in U(x^*, \rho_4)$. □
Next, a set in which $x^*$ is the only solution is determined.
Proposition 2. 
Suppose: there exists $\rho_5 > 0$ such that the condition ($C_1$) holds in the ball $U(x^*, \rho_5)$;
$\|\Delta^{-1}(G'(u) - \Delta)\| \le \psi_0(\|u - x^*\|)$
for each $u \in \Omega$ and some (FNDC) $\psi_0 : S \to S$; and there exists $\rho_6 \ge \rho_5$ such that
$\int_0^1 \psi_0(\tau \rho_6) \, d\tau < 1.$
Define the set $\Omega_3 = U[x^*, \rho_6] \cap \Omega$. Then, the only solution of the equation $G(x) = 0$ in the set $\Omega_3$ is $x^*$.
Proof. 
Suppose that there exists $y_0 \in \Omega_3$ solving the equation $G(x) = 0$ with $y_0 \ne x^*$. Define the linear operator $E_1 = \int_0^1 G'(x^* + \tau(y_0 - x^*)) \, d\tau$. By the condition ($C_1$), (24), and (25), we have in turn that
$\|\Delta^{-1}(E_1 - \Delta)\| \le \int_0^1 \psi_0(\tau \|y_0 - x^*\|) \, d\tau \le \int_0^1 \psi_0(\tau \rho_6) \, d\tau < 1.$
Hence, $E_1$ is invertible. Finally, from the identity $y_0 - x^* = E_1^{-1}(G(y_0) - G(x^*)) = E_1^{-1}(0) = 0$, we conclude that $y_0 = x^*$. □
Remark 2. 
(1) 
We can set $y_0 = x^*$ and $\rho_5 = \rho_4$ in Proposition 2.
(2) 
The parameter $\rho_4$ defined in the condition ($C_3$) is the radius of convergence for algorithm (4).
(3) 
As in the semi-local analysis, if $k \to +\infty$, we obtain the results for algorithm (3). Another possible choice is $k = n$. However, we shall choose a small value of $k$ to save computational cost.

4. Convergence of the Krasnoselskij-like Algorithm

The contraction mapping principle has been used extensively to find fixed points using iterative algorithms.
Theorem 4 
([15]). Let $Q : T_1 \to T_1$ be a contraction operator with Lipschitz parameter $\zeta \in [0, 1)$. Then, the operator $Q$ has a fixed point $x^* \in T_1$, i.e., $Q(x^*) = x^*$. Moreover, for each initial guess $x_0 \in T_1$, the Picard algorithm (the algorithm of successive substitutions) $x_{n+1} = Q(x_n)$ converges to $x^* \in T_1$.
Theorem 4 cannot be used if the operator $Q$ has more than one fixed point. The fixed points must be separated in this case. Let us consider $Q_0$ to be a closed subset of $T_1$ with $Q_0 \ne \emptyset$. Then, the following result is available.
Theorem 5 
([15]). Suppose that the operator $Q : Q_0 \to Q_0$ is a contraction with constant $\zeta \in [0, 1)$. Then, the operator $Q$ has a unique fixed point $x^* \in Q_0$. Moreover, for each $x_0 \in Q_0$, the Picard algorithm converges to $x^*$.
The convergence of algorithm (5) is established in Theorem 6.
Theorem 6. 
Let $k$ be a fixed natural number. Suppose that the following conditions hold for each $x, y \in \Omega$:
$\|\Delta^{-1}(L(x) - \Delta)\| < 1, \quad \|P(x) - P(x_0)\| \le p_0 \|x - x_0\|, \quad \|P(y) - P(x)\| \le p \|y - x\|,$
and
$\|P(x_0) - x_0\| \le r (1 - p_0),$
for some invertible operator $\Delta$ and $p_0, p \in [0, 1)$, and $U(x_0, r) \subset \Omega$ for some $r > 0$. Then, the operator $P : \Omega \to \Omega$ has a unique fixed point $x^* \in \Omega$, and the Krasnoselskij-like algorithm (5) converges to $x^*$.
Proof. 
Notice that, for each $x \in U[x_0, r]$,
$\|P(x) - x_0\| \le \|P(x) - P(x_0)\| + \|P(x_0) - x_0\| \le p_0 r + \|P(x_0) - x_0\| \le r.$
Thus, $P : U[x_0, r] \to U[x_0, r]$ is a contraction operator with constant $p \in [0, 1)$. Set $Q_0 = U[x_0, r]$. Then, the result follows from Theorem 5. □

5. Error Analysis

The sequences $\{y_n\}$ and $\{x_n\}$ are generated by formulas (2) and (4), respectively. We recall a portion of the standard semi-local convergence result for the Newton-like algorithm (2) [28].
Theorem 7. 
Let $G : \Omega \to T_2$ be Fréchet-differentiable and let $L(x) \in L(T_1, T_2)$ be an approximation of the linear operator $G'(x)$. Suppose that there exist an open convex subset $\tilde{\Omega}$ of $\Omega$, $x_0 \in \tilde{\Omega}$, a bounded linear invertible operator $L_0$ $(= L(x_0))$, and constants $\eta, K > 0$ and $K_0, K_1, \mu, l \ge 0$ such that for all $x, y \in \tilde{\Omega}$ the following conditions hold:
$\|L_0^{-1} G(x_0)\| \le \eta,$
$\|L_0^{-1}(G'(y) - G'(x))\| \le K \|y - x\|,$
$\|L_0^{-1}(G'(x) - L(x))\| \le K_0 \|x - x_0\| + \mu,$
$\|L_0^{-1}(L(x) - L_0)\| \le K_1 \|x - x_0\| + l,$
$l_1 = \mu + l < 1.$
In addition, suppose that
$h := \sigma \eta \le \tfrac{1}{2}(1 - l_1)^2,$
where $\sigma = \max\{K, K_0 + K_1\}$, and
$\bar{U} = U[x_0, t^*] \subset \tilde{\Omega},$
where
$t^* = \left( 1 - l_1 - \sqrt{(1 - l_1)^2 - 2h} \right) / \sigma \ge 0.$
Then, the following assertions hold: the sequence $\{y_n\}$ generated by algorithm (2) remains in the ball $U(x_0, t^*)$ and converges to a solution $x^* \in \bar{U}$ of the equation $G(x) = 0$, and
$\|L_n^{-1} L_0\| \le s_1 := \dfrac{1}{1 - (K_1 t^* + l)}.$
Next, the sequences { y n } and { x n } are related to each other.
Lemma 1. 
Suppose that the conditions ($H_1$)–($H_5$) (with $\Delta = L_0$) and those of Theorem 7 hold. Then, the following error estimate holds for each $n = 0, 1, 2, \ldots$:
$\|x_{n+1} - y_{n+1}\| \le e_n,$
where
$e_n = s_1 \left[ \left( \int_0^1 w((1 - \tau)\|x_n - y_n\|) \, d\tau + w(\|x_n - y_n\|) + w_1(\|x_n - x_0\|) \right) \|x_n - y_n\| + \bar{b} \gamma^{k+1} \|x_{n+1} - x_n\| \right].$
Proof. 
Under the conditions of Theorems 2 and 7, the iterates $\{y_n\}$ and $\{x_n\}$ are well defined by formulas (2) and (4), respectively. By subtracting (2) from (4) and pulling out $L_n^{-1}$, we can write, in turn, that
$x_{n+1} - y_{n+1} = x_n - y_n + L_n^{-1} G(y_n) - A \Delta^{-1} G(x_n)$
$= L_n^{-1} \left[ L_n(x_n - y_n) - G(x_n) + G(y_n) \right] + (L_n^{-1} - A \Delta^{-1}) G(x_n)$
$= -L_n^{-1} \left( G(x_n) - G(y_n) - G'(y_n)(x_n - y_n) \right) - L_n^{-1} \left( G'(y_n) - G'(x_n) \right)(x_n - y_n) - L_n^{-1} \left( G'(x_n) - L_n \right)(x_n - y_n) + L_n^{-1} (L_n A - \Delta) A^{-1}(x_{n+1} - x_n).$
We need, in turn, the following estimates obtained by the conditions of Theorems 2 and 7:
$\|L_0^{-1}(G(x_n) - G(y_n) - G'(y_n)(x_n - y_n))\| \le \int_0^1 \|L_0^{-1}(G'(y_n + \tau(x_n - y_n)) - G'(y_n))\| \, d\tau \, \|x_n - y_n\| \le \int_0^1 w((1 - \tau)\|x_n - y_n\|) \, d\tau \, \|x_n - y_n\|,$
$\|L_0^{-1}(G'(y_n) - G'(x_n))(x_n - y_n)\| \le w(\|x_n - y_n\|) \|x_n - y_n\|,$
$\|L_0^{-1}(G'(x_n) - L_n)(x_n - y_n)\| \le w_1(\|x_n - x_0\|) \|x_n - y_n\| \le w_1(\alpha) \|x_n - y_n\|,$
$\|\Delta^{-1}(L_n A - \Delta)\| \le \|\Gamma\|^{k+1} \le \gamma^{k+1},$
and
$\|A^{-1}(x_{n+1} - x_n)\| \le \bar{b} \|x_{n+1} - x_n\|.$
By summing up (30)–(34) and using the triangle inequality, we obtain (27). □
It is convenient for the next result to define the function $\varphi : [0, +\infty) \to \mathbb{R}$ by
$\varphi(t) = s_1 \left[ \int_0^1 w((1 - \tau) t) \, d\tau + w(t) + w_1(\alpha) \right] t + s_2 - t, \quad \text{where } s_2 = s_1 \bar{b} \gamma^{k+1} \eta.$
Proposition 3. 
Let all the conditions of Lemma 1 hold. Suppose, in addition, that the equation $\varphi(t) = 0$ has a smallest solution $s^* > s_2$. Then, the following assertion holds for each $n = 0, 1, 2, \ldots$:
$\|x_n - y_n\| \le s^*.$
Proof. 
The estimate (27) for $n = 0$ implies $\|x_1 - y_1\| \le s_2 \le s^*$, which is true by the choice of $s^*$. Suppose that (35) holds for all integers smaller than or equal to $m$. Then, we have by the induction hypothesis and (27) that
$\|x_{m+1} - y_{m+1}\| \le e_m \le s_1 \left( \int_0^1 w((1 - \tau) s^*) \, d\tau + w(s^*) + w_1(\alpha) \right) s^* + s_2 = s^*,$
by the definition of $s^*$. □

6. Numerical Examples

The examples use $L_n = G'(x_n)$ and the choices $\Delta = I$ and $\Delta = G'(x_0)$; the first choice is independent of $x_0$ and $x^*$.
Example 1. 
The solution of the nonlinear system
$g_1(u, v) = u - 0.1 \sin u - 0.3 \cos v + 0.4 = 0,$
$g_2(u, v) = v - 0.2 \cos u + 0.1 \sin v + 0.3 = 0$
is sought. Let $G = (g_1, g_2)$. Then, the system becomes
$G(s) = 0 \quad \text{for } s = (u, v)^T.$
Then,
$G'((u, v)) = \begin{pmatrix} 1 - 0.1 \cos u & 0.3 \sin v \\ 0.2 \sin u & 1 + 0.1 \cos v \end{pmatrix}.$
Algorithm (2)
$x_{n+1} = x_n - G'(x_n)^{-1} G(x_n).$
Algorithm (4),  k = 1 , Δ = I
$A_1(x) = I + (I - G'(x)), \quad p_1(x) = x - \left( I + (I - G'(x)) \right) G(x), \quad x_{n+1} = p_1(x_n). \qquad (37)$
Algorithm (4),  k = 2 , Δ = I
$A_2(x) = I + (I - G'(x)) + (I - G'(x))^2, \quad p_2(x) = x - A_2(x) G(x), \quad x_{n+1} = p_2(x_n). \qquad (38)$
Algorithm (4),  k = 3 , Δ = I
$A_3(x) = I + (I - G'(x)) + (I - G'(x))^2 + (I - G'(x))^3, \quad p_3(x) = x - A_3(x) G(x), \quad x_{n+1} = p_3(x_n). \qquad (39)$
Algorithm (4),  k = 4 , Δ = I
$A_4(x) = I + (I - G'(x)) + (I - G'(x))^2 + (I - G'(x))^3 + (I - G'(x))^4, \quad p_4(x) = x - A_4(x) G(x), \quad x_{n+1} = p_4(x_n). \qquad (40)$
Algorithm (4),  k = 5 , Δ = I
$A_5(x) = I + (I - G'(x)) + (I - G'(x))^2 + (I - G'(x))^3 + (I - G'(x))^4 + (I - G'(x))^5, \quad p_5(x) = x - A_5(x) G(x), \quad x_{n+1} = p_5(x_n). \qquad (41)$
Algorithm (4),  $k = 1, \ldots, 5$, $\Delta = G'(x_0)$
$x_{n+1} = x_n - A \Delta^{-1} G(x_n), \quad \Gamma = \Delta^{-1}(\Delta - G'(x)), \quad A = I + \sum_{i=1}^{k} \Gamma^i. \qquad (42)$
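The following runnable sketch (our reproduction attempt, assuming NumPy; the residual-based stopping rule is our choice, since the exact criterion behind Tables 1–4 is not stated) compares Newton's algorithm (2) with the inverse-free methods (37)–(41), that is, algorithm (4) with $\Delta = I$, on the system of Example 1.

import numpy as np

def G(s):
    u, v = s
    return np.array([u - 0.1 * np.sin(u) - 0.3 * np.cos(v) + 0.4,
                     v - 0.2 * np.cos(u) + 0.1 * np.sin(v) + 0.3])

def dG(s):
    u, v = s
    return np.array([[1.0 - 0.1 * np.cos(u), 0.3 * np.sin(v)],
                     [0.2 * np.sin(u), 1.0 + 0.1 * np.cos(v)]])

def newton(x0, tol=1e-9, max_iter=100):
    x = np.array(x0, dtype=float)
    for n in range(1, max_iter + 1):
        x = x - np.linalg.solve(dG(x), G(x))
        if np.linalg.norm(G(x)) < tol:
            return x, n
    return x, max_iter

def inverse_free(x0, k, tol=1e-9, max_iter=100):
    # Algorithm (4) with Delta = I: x <- x - (I + Gamma + ... + Gamma^k) G(x)
    x = np.array(x0, dtype=float)
    I = np.eye(2)
    for n in range(1, max_iter + 1):
        Gamma = I - dG(x)
        A = I + sum(np.linalg.matrix_power(Gamma, i) for i in range(1, k + 1))
        x = x - A @ G(x)
        if np.linalg.norm(G(x)) < tol:
            return x, n
    return x, max_iter

x0 = (1.0, 1.0)
print("Newton:", newton(x0)[1], "iterations")
for k in range(1, 6):
    print("k =", k, ":", inverse_free(x0, k)[1], "iterations")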
Thus, the comparison shows that the behavior of method (4) is essentially the same as Newton’s method (2). However, the iterates of method (4) are cheaper to obtain than Newton’s. As observed in Table 1, Table 2, Table 3 and Table 4, the number of iterations required for the proposed methods with k ranging from 3 to 5 closely aligns with those of Newton’s method.
Table 5 shows the results of calculations to determine the Computational Order of Convergence (COC) and the Approximated Computational Order of Convergence (ACOC), aiming to compare the convergence order of method (4) with the convergence order of Newton's method (2).
Definition 2. 
The computational order of convergence of a sequence $\{x_n\}_{n \ge 0}$ is defined by
$\bar{\rho}_n = \dfrac{\ln |e_{n+1}/e_n|}{\ln |e_n/e_{n-1}|},$
where $x_{n-1}, x_n, x_{n+1}$ are three consecutive iterations near the root $\alpha$ and $e_n = x_n - \alpha$ [6].
Definition 3. 
The approximated computational order of convergence of a sequence $\{x_n\}_{n \ge 0}$ is defined by
$\hat{\rho}_n = \dfrac{\ln |\hat{e}_{n+1}/\hat{e}_n|}{\ln |\hat{e}_n/\hat{e}_{n-1}|},$
where $\hat{e}_n = x_n - x_{n-1}$ and $x_n, x_{n-1}, x_{n-2}$ are three consecutive iterates [6].
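The two definitions translate directly into the following helper functions (a sketch, not the authors' code), which take a stored list of iterates and, for the COC, the limit point.

import numpy as np

def coc(iterates, root):
    # Definition 2: requires the root (or a highly accurate approximation of it)
    e = [np.linalg.norm(np.asarray(x) - np.asarray(root)) for x in iterates]
    return [np.log(e[n + 1] / e[n]) / np.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)]

def acoc(iterates):
    # Definition 3: uses only consecutive differences of the iterates
    eh = [np.linalg.norm(np.asarray(iterates[n]) - np.asarray(iterates[n - 1]))
          for n in range(1, len(iterates))]
    return [np.log(eh[n + 1] / eh[n]) / np.log(eh[n] / eh[n - 1])
            for n in range(1, len(eh) - 1)]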
Table 5 demonstrates that the convergence of the proposed methods closely corresponds with the convergence of Newton's method, particularly for values of $k$ ranging from 4 to 5, with the convergence order closely approximating 2.
Example 2. 
Let $T_1 = T_2 = \mathbb{R}^3$ and $\Omega = U[x^*, 1]$. The mapping $G$ is defined on $\Omega$ for $\Theta = (\Theta_1, \Theta_2, \Theta_3)^{tr} \in \mathbb{R}^3$ as
$G(\Theta) = \left( \Theta_1, \; e^{\Theta_2} - 1, \; \dfrac{e - 1}{2} \Theta_3^2 + \Theta_3 \right)^{tr}.$
Then, the definition of the derivative according to Fréchet [2,3,8,13,30] gives for the mapping $G$ that
$G'(\Theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{\Theta_2} & 0 \\ 0 & 0 & (e - 1)\Theta_3 + 1 \end{pmatrix}.$
The point $x^* = (0, 0, 0)^{tr}$ solves the equation $G(x) = 0$. Moreover, $G'(x^*) = I$. The conditions of Theorem 3 hold, provided that $\gamma = \gamma(t) = (e - 1)t$, $\psi(t) = (e - 1)t$, and $k = 1$. Then, we can take $\rho_4 \in (0, 0.2909883534)$.
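The stated radius can be checked directly: with $k = 1$, $\psi(t) = (e - 1)t$, and $\gamma(t) = (e - 1)t$, the requirement $\gamma(t) < \tfrac{1}{2}$ is the binding constraint, and $g$ stays below 1 on the whole interval. A short sketch of this check (our verification, under the stated choices) follows.

import math

a = math.e - 1.0

def g(t):
    gamma = a * t                                  # gamma(t) = (e - 1) t
    return (1.0 + gamma) * (a * t / 2.0) + gamma   # g(t) for k = 1

rho = 0.5 / a                                      # = 0.2909883534..., where gamma(t) reaches 1/2
print(rho, g(rho))                                 # g(rho) = 0.875 < 1, so any rho_4 in (0, rho) works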
Example 3. 
Let $C[0, 1]$ stand for the space of continuous functions mapping the interval $[0, 1]$ into the real line. Let $T_1 = T_2 = C[0, 1]$ and $\Omega = U[x^*, 1]$ with $x^*(\kappa) = 0$. The operator $G$ is defined on $C[0, 1]$ as
$G(z)(\kappa) = z(\kappa) - 4 \int_0^1 \kappa \tau z(\tau)^3 \, d\tau.$
Then, the definition of the derivative according to Fréchet [2,3,8,13,30] gives for the operator $G$
$G'(z)(w)(\kappa) = w(\kappa) - 12 \int_0^1 \kappa \tau z(\tau)^2 w(\tau) \, d\tau$
for each $w \in C[0, 1]$. Therefore, the conditions of Theorem 3 hold for $x^* = 0$, $k = 1$, $G'(x^*) = I$, if we choose $\gamma = \gamma(t) = 6t$ and $\psi(t) = 6t$. Then, we obtain $\rho_4 \in (0, 0.08\overline{3})$.

7. Concluding Remarks

The difficulty of implementing Newton-like algorithms is addressed in this paper. In particular, the computation of $L_n^{-1}$, required at each step of the Newton-like algorithms, is avoided with the introduction of algorithm (4) (or algorithm (5)), which requires the inversion of a fixed linear operator only once. The inverse of the linear operator is exchanged with a finite sum of linear operators related to $G'$. Both the local and the semi-local convergence behavior of these algorithms are comparable to Newton's in the sense that the number of iteration steps needed to reach a predetermined error tolerance is essentially the same. The numerical examples demonstrate that algorithm (4) or algorithm (5) is a reliable replacement for the Newton-like algorithms for all practical purposes. We plan to study extensions of the presented algorithms like
$x_{n+1} = x_n - \tilde{G}(x_n) G(x_n),$
where $\tilde{G}$ is a suitably chosen approximation to the inverse of a linear operator (like $L_n$), which may be a divided difference or some other operator [18,30,31,32,33,34,35].

Author Contributions

Conceptualization, I.K.A., S.G., S.R. and C.I.A.; Algorithm, I.K.A., S.G., S.R. and C.I.A.; methodology, I.K.A., S.G., S.R. and C.I.A.; software, I.K.A., S.G., S.R. and C.I.A.; validation, I.K.A., S.G., S.R. and C.I.A.; formal analysis, I.K.A., S.G., S.R. and C.I.A.; investigation, I.K.A., S.G., S.R. and C.I.A.; resources, I.K.A., S.G., S.R. and C.I.A.; data curation, I.K.A., S.G., S.R. and C.I.A.; writing—original draft preparation, I.K.A., S.G., S.R. and C.I.A.; writing—review and editing, I.K.A., S.G., S.R. and C.I.A.; visualization, I.K.A., S.G., S.R. and C.I.A.; supervision, I.K.A., S.G., S.R. and C.I.A.; project administration, I.K.A., S.G., S.R. and C.I.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to thank graduate Mykhailo Havdiak from the Department of Optimal Processes, Ivan Franko National University of Lviv, Lviv, Ukraine, for providing Example 1 of this paper.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

  1. Driscoll, T.A.; Braun, R.J. Fundamentals of Numerical Computation: Julia Edition; SIAM: Philadelphia, PA, USA, 2022. [Google Scholar]
  2. Ezquerro, J.A.; Gutierrez, J.M.; Hernández, M.A.; Romero, N.; Rubio, M.J. The Newton algorithm: From Newton to Kantorovich. Gac. R. Soc. Mat. Esp. 2010, 13, 53–76. (In Spanish) [Google Scholar]
  3. Kantorovich, L.V.; Akilov, G. Functional Analysis in Normed Spaces; Fizmatgiz: Moscow, Russia, 1959; German translation, Akademie-Verlag: Berlin, Germany, 1964; English translation (2nd edition), Pergamon Press: London, UK, 1981, 1964. [Google Scholar]
4. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton–Kantorovich type theorems. J. Complex. 2010, 25, 3–42. [Google Scholar] [CrossRef]
  5. Argyros, I.K. The Theory and Applications of Iteration Methods with Applications, 2nd ed.; Engineering Series; CRC Press: Boca Raton, FL, USA; Taylor and Francis Publ.: Abingdon, UK, 2022. [Google Scholar]
  6. Candelario, G.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Generalized conformable fractional Newton-type method for solving nonlinear systems. Numer. Algor. 2023, 93, 1171–1208. [Google Scholar] [CrossRef]
  7. Darve, E.; Wootters, M. Numerical Linear Algebra with Julia; SIAM: Philadelphia, PA, USA, 2021. [Google Scholar]
  8. Deuflhard, P. Newton Algorithms for Nonlinear Problems. Affine Invariance and Adaptive Algorithms; Springer Series in Computational Mathematics; Springer: Berlin/Heidelberg, Germany, 2004; Volume 35. [Google Scholar]
  9. Kelley, C.T. Solving Nonlinear Equations with Iterative Methods; Solvers and Examples in Julia, Fundamentals of Algorithms; SIAM: Philadelphia, PA, USA, 2023. [Google Scholar]
  10. Pollock, S.; Rebholz, L.G.; Xiao, M. Anderson-accelerated convergence of Picard iterations for incompressible Navier–Stokes equations. SIAM J. Numer. Anal. 2019, 57, 615–637. [Google Scholar] [CrossRef]
  11. Rackauckas, C.; Mishra, P.; Gowda, S.; Huang, L. Sparse Diff Tools.jl, Julia Package, 2020. Available online: https://github.com/JuliaDiff/SparseDiffTools.jl (accessed on 20 February 2024).
12. Rackauckas, C.; Nie, Q. DifferentialEquations.jl: A performant and feature-rich ecosystem for solving differential equations in Julia. J. Open Res. Softw. 2017, 5, 15. [Google Scholar] [CrossRef]
  13. Revels, J. Reverse Diff.jl, Julia Package. 2020. Available online: https://github.com/JuliaDiff/ReverseDiff.jl (accessed on 20 February 2024).
  14. Uecker, H. Numerical Continuation and Bifurcation in Nonlinear PDEs; SIAM: Philadelphia, PA, USA, 2021. [Google Scholar]
  15. Berinde, V. Iterative Approximation of Fixed Points; Springer: New York, NY, USA, 2007. [Google Scholar]
16. Krasnoselskij, M.A. Two remarks on the algorithm of successive approximations. Uspehi Mat. Nauk 1955, 10, 123–127. (In Russian) [Google Scholar]
  17. Bian, W.; Chen, X.; Kelley, C.T. Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM J. Sci. Comp. 2021, 43, S1–S20. [Google Scholar] [CrossRef]
18. De Sterck, H.; He, Y. Linear asymptotic convergence of Anderson acceleration: Fixed point analysis. arXiv 2021, arXiv:14716v1. [Google Scholar]
  19. Evans, C.; Pollock, S.; Rebholz, L.G.; Xiao, M. A proof that Anderson acceleration improves the convergence rate in linearly converging fixed-point methods (but not in those converging quadratically). SIAM J. Numer. Anal. 2020, 58, 788–810. [Google Scholar] [CrossRef]
  20. Fang, H.R.; Saad, Y. Two classes of multisecant methods for nonlinear acceleration. Numer. Linear Algebra Appl. 2009, 16, 197–221. [Google Scholar] [CrossRef]
21. Pollock, S.; Rebholz, L.G. Anderson acceleration for contractive and noncontractive operators. IMA J. Numer. Anal. 2021, 41, 2841–2872. [Google Scholar] [CrossRef]
  22. Padcharoen, A.; Kumam, P.; Chaipunya, P.; Shehu, Y. Convergence of inertial modified Krasnoselskii-Mann iteration with application to image recovery. Thai J. Math. 2020, 18, 126–142. [Google Scholar]
  23. Zhang, J.; O’Donoghue, B.; Boyd, S. Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM J. Optim. 2020, 30, 3170–3197. [Google Scholar] [CrossRef]
  24. Argyros, I.K.; George, S. On the complexity of extending the convergence region for Traub’s algorithm. J. Complex. 2020, 56, 101423. [Google Scholar] [CrossRef]
  25. Argyros, I.K.; George, S. On a unified convergence analysis for Newton-type algorithms solving generalized equations with the Aubin property. J. Complex. 2024, 81, 101817. [Google Scholar] [CrossRef]
  26. Catinas, E. The inexact, inexact perturbed, and quasi-Newton algorithms are equivalent models. Math. Comp. 2005, 74, 291–301. [Google Scholar] [CrossRef]
27. Erfanifar, R.; Hajarian, M. A new multi-step method for solving nonlinear systems with high efficiency indices. Numer. Algorithms 2024. [Google Scholar] [CrossRef]
  28. Yamamoto, T. A convergence theorem for Newton-like algorithms in Banach spaces. Numer. Math. 1987, 51, 545–557. [Google Scholar] [CrossRef]
29. Ezquerro, J.A.; Hernández, M.A. Domains of global convergence for Newton's algorithm from auxiliary points. Appl. Math. Lett. 2018, 85, 48–56. [Google Scholar] [CrossRef]
  30. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press: New York, NY, USA, 1973. [Google Scholar]
31. Cancès, E.; Kemlin, G.; Levitt, A. Convergence analysis of direct minimization and self-consistent iterations. SIAM J. Sci. Comp. 2021, 42, 243–274. [Google Scholar]
32. De Sterck, H.; He, Y. Anderson acceleration as a Krylov method with application to asymptotic convergence analysis. arXiv 2021, arXiv:2109.14181v1. [Google Scholar]
  33. Rheinboldt, W.C. A unified convergence theory for a class of iterative process. SIAM J. Numer. Anal. 1968, 5, 42–63. [Google Scholar] [CrossRef]
  34. Singh, S. A third order iterative algorithm for inversion of cumulative beta distribution. Numer. Algor. 2023, 1–23. [Google Scholar]
35. Traub, J.F.; Woźniakowski, H. Convergence and complexity of Newton iteration for operator equations. J. Assoc. Comput. Mach. 1979, 26, 250–258. [Google Scholar] [CrossRef]
Table 1. Number of iterations to achieve tolerance $\varepsilon = 10^{-9}$ with initial guess $x_0 = (1, 1)$ and $\|I - G'(x_0)\| = 0.3129 < 1$.

Algorithm | Iterations | Algorithm | Iterations
(2) Newton | 4 | (2) Newton | 4
(37), k = 1 | 6 | (42), k = 1 | 8
(38), k = 2 | 5 | (42), k = 2 | 6
(39), k = 3 | 4 | (42), k = 3 | 5
(40), k = 4 | 4 | (42), k = 4 | 5
(41), k = 5 | 4 | (42), k = 5 | 4
Table 2. Number of iterations to achieve tolerance $\varepsilon = 10^{-9}$ with initial guess $x_0 = (0, 0)$ and $\|I - G'(x_0)\| = 0.1414 < 1$.

Algorithm | Iterations | Algorithm | Iterations
(2) Newton | 3 | (2) Newton | 3
(37), k = 1 | 5 | (42), k = 1 | 3
(38), k = 2 | 4 | (42), k = 2 | 3
(39), k = 3 | 3 | (42), k = 3 | 3
(40), k = 4 | 3 | (42), k = 4 | 3
(41), k = 5 | 3 | (42), k = 5 | 3
Table 3. Number of iterations to achieve tolerance $\varepsilon = 10^{-9}$, where $x_0 = (15, 15)$ and $\|I - G'(x_0)\| = 0.257 < 1$.

Algorithm | Iterations | Algorithm | Iterations
(2) Newton | 5 | (2) Newton | 5
(37), k = 1 | 7 | (42), k = 1 | 9
(38), k = 2 | 5 | (42), k = 2 | 7
(39), k = 3 | 5 | (42), k = 3 | 6
(40), k = 4 | 5 | (42), k = 4 | 6
(41), k = 5 | 5 | (42), k = 5 | 5
Table 4. Number of iterations to achieve tolerance $\varepsilon = 10^{-12}$, where $x_0 = (15, 15)$ and $\|I - G'(x_0)\| = 0.257 < 1$.

Algorithm | Iterations | Algorithm | Iterations
(2) Newton | 7 | (2) Newton | 7
(37), k = 1 | 8 | (42), k = 1 | 12
(38), k = 2 | 7 | (42), k = 2 | 8
(39), k = 3 | 7 | (42), k = 3 | 8
(40), k = 4 | 7 | (42), k = 4 | 7
(41), k = 5 | 7 | (42), k = 5 | 7
Table 5. Computational Order of Convergence (COC) and Approximated Computational Order of Convergence (ACOC), where $x_0 = (15, 15)$, $\varepsilon = 10^{-12}$.

Algorithm | COC | ACOC
(2) Newton | 1.8624 | 1.9697
(37), k = 1 | 0.8631 |
(38), k = 2 | 0.2695 | 1.0438
(39), k = 3 | 1.9714 | 2.3569
(40), k = 4 | 1.8354 | 1.9453
(41), k = 5 | 1.8642 | 1.9661
(42), k = 1 | 0.9065 | 1.0118
(42), k = 2 | 0.5912 | 0.999
(42), k = 3 | 0.7321 | 0.9926
(42), k = 4 | 1.933 | 2.0151
(42), k = 5 | 1.8679 | 1.9578
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
