Article

Local Convergence Study for an Iterative Scheme with a High Order of Convergence

by Eulalia Martínez 1,*,† and Arleen Ledesma 2,*,†
1
Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera, s/n, 46022 Valencia, Spain
2
Escuela de Matemática, Universidad Autónoma de Santo Domingo (UASD), Alma Máter, Santo Domingo 10105, Dominican Republic
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Algorithms 2024, 17(11), 481; https://doi.org/10.3390/a17110481
Submission received: 27 September 2024 / Revised: 22 October 2024 / Accepted: 24 October 2024 / Published: 25 October 2024

Abstract

In this paper, we address a key issue in Numerical Functional Analysis: the local convergence analysis of an iterative method with fourth order of convergence in Banach spaces, examining conditions on the operator and its derivatives near the solution in order to ensure convergence. Moreover, this approach provides a local convergence ball, within which any initial estimate leads to guaranteed convergence, together with the radius of the convergence domain and estimates of the error bounds. Next, we perform a comparative study of the Computational Efficiency Index (CEI) between the analyzed scheme and some known iterative methods of fourth order of convergence. Our ultimate goal is to use these theoretical findings to address practical problems in engineering and technology.

1. Introduction

A significant task in Numerical Functional Analysis is solving problems in science and engineering by means of mathematical modeling. Under certain conditions, a specific problem can be modeled as a nonlinear equation in a Banach space.
Let $X, Y$ be Banach spaces, $D$ an open convex subset of $X$, and $T : D \subseteq X \to Y$ a nonlinear operator. The primary focus in studying an iterative method is to find the closest approximation to the solution of the equation $T(x) = 0$. However, it is crucial to ensure that the selected iterative method indeed converges to this approximation. To achieve both convergence and accurate results for the scheme, we can perform studies aimed at the necessary conditions that must be verified by the solution $x^*$, the initial value $x_0$, or the operator within the framework of the iterative method.
In Banach spaces, a convergence study can be approached from two perspectives: considering the neighborhood of the solution (referred to as local convergence analysis) or focusing on the initial estimates and the domain (known as semilocal convergence analysis). When we analyze local convergence (see [1,2]), conditions are imposed on the operators and their respective derivatives at the solution $x^*$. As a consequence, we obtain the local convergence ball $B(x^*, r)$, centered at the solution and with radius $r$. The elements contained within this ball are initial estimates for which convergence is guaranteed, meaning that they are suitable starting points for the scheme, i.e., points for which the sequence of iterates converges to the solution of the problem. Additionally, an error bound is established.
Local convergence analysis plays a significant role in understanding the behavior, efficiency, and reliability of an iterative method. This analysis provides detailed information about how fast an iterative method converges to a solution given an initial estimate. Moreover, through local convergence analysis we are able to identify the conditions under which an iterative method will converge to a solution, concerning initial estimate proximity and smoothness conditions. This analysis often provides error bounds, showing how the error decreases in each iteration, which helps in determining how accurate the current approximation is. In summary, local convergence analysis helps in choosing the right method, refining existing methods, and understanding their behavior under realistic conditions.
Local convergence analysis is essential in science and engineering for guaranteeing the accuracy, efficiency, stability, and adaptability of iterative methods used to solve complex real-world problems. Whether optimizing designs, conducting scientific simulations, solving nonlinear equations, or managing large-scale data, understanding the local convergence properties of an iterative scheme is necessary to achieve reliable and efficient solutions. For example, in Computational Fluid Dynamics (CFD), where simulations of airflow around objects may engage a large number of variables, fast local convergence can save considerable computational cost and make the process feasible. In molecular dynamics, iterative methods are used to study complex systems and the efficiency provided by a local convergence analysis can allow more detailed and expansive simulations. In mechanical engineering, solving the analysis of material deformations often requires iterative methods and local convergence analysis ensures these methods can handle the complexities of the system. In physics and chemistry, where iterative methods are used to solve quantum mechanical equations, local convergence analysis helps ensure accurate energy levels and wave functions are obtained. Computer-Assisted Diagnostic (CAD) systems rely heavily on various computational models and algorithms. These models often use iterative methods to solve complex mathematical problems (such as nonlinear equations, optimization tasks, and machine learning model training) and local convergence analysis is essential in ensuring the accuracy, efficiency, robustness, and adaptability of iterative methods used in CAD systems.
Several studies [2,3] have explored local convergence for various iterative methods under different continuity conditions. For instance, Argyros in [4] provides an estimate of the convergence radius for the modified Newton's method for multiple zeros, assuming the function satisfies Hölder and center-Hölder continuity conditions. This result is further refined by Zhou et al. in [5]. Bi et al. in [6] studied third-order methods and established local convergence using divided differences and their properties. Behl et al. (see [7]) used Taylor expansions to establish a new set of bounds that ensure convergence. Zhou and Song (see [8]) also employed Taylor expansions to analyze the local convergence of Osada's method, assuming that the derivative satisfies the center-Hölder continuity condition. In addition, Singh et al. [9] examined both the local and semilocal convergence of a fifth-order iterative scheme under Hölder continuity conditions.
In the semilocal convergence analysis, certain conditions are placed both on the operators and their respective derivatives at the initial estimate $x_0$, and iterations are then carried out. This leads to results such as the existence of the solution, its uniqueness, an a priori error bound, the R-order of convergence of the scheme, and the determination of the convergence region where the operator $T$ is defined. In this work, we will focus on local convergence analysis. Some elements of this study are shown in [10].
This paper presents an iterative method along with its local convergence analysis for solving the nonlinear equation $T(x) = 0$, which has a solution $x^*$. Consider the fourth-order family of iterative schemes studied in [11], defined for $k = 0, 1, 2, \ldots$ by:
$$\begin{aligned} y_k &= x_k - \frac{2}{3}\,T'(x_k)^{-1}T(x_k),\\ x_{k+1} &= x_k - \left[\left(\frac{5}{8} - \alpha\right) I + \alpha\, T'(y_k)^{-1}T'(x_k) + \frac{\alpha}{3}\, T'(x_k)^{-1}T'(y_k) + \left(\frac{3}{8} - \frac{\alpha}{3}\right)\left(T'(y_k)^{-1}T'(x_k)\right)^2\right] T'(x_k)^{-1}T(x_k), \end{aligned} \tag{1}$$
where $\alpha \in \mathbb{C}$ is a parameter and $x_0$ is the initial estimate.
In order to approximate the solution of the nonlinear equation $T(x) = 0$, it is assumed that the derivative $T'(x)$ exists in a neighborhood of the solution $x^*$ and that $T'(x^*)^{-1} \in \mathcal{L}(Y, X)$, where $\mathcal{L}(Y, X)$ is the set of bounded linear operators from $Y$ into $X$.
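To make the scheme concrete, here is a minimal Python/NumPy sketch of one iteration of family (1). This is our own illustration, not code from the paper: `F` and `J` are hypothetical user-supplied callables for $T$ and $T'$, and the operator products are formed with linear solves rather than explicit inverses.

```python
import numpy as np

def scheme_step(F, J, x, alpha=0.0):
    """One iteration of the fourth-order family (1).

    F : callable returning T(x) as a 1-D array.
    J : callable returning the Jacobian T'(x) as a 2-D array.
    alpha : family parameter; the local convergence theory guarantees
            convergence for alpha in (-3/8, 1/8).
    """
    n = x.size
    I = np.eye(n)
    Jx = J(x)
    newton = np.linalg.solve(Jx, F(x))   # T'(x_k)^{-1} T(x_k)
    y = x - (2.0 / 3.0) * newton
    Jy = J(y)
    B = np.linalg.solve(Jy, Jx)          # T'(y_k)^{-1} T'(x_k)
    C = np.linalg.solve(Jx, Jy)          # T'(x_k)^{-1} T'(y_k)
    H = ((5.0 / 8.0 - alpha) * I + alpha * B
         + (alpha / 3.0) * C + (3.0 / 8.0 - alpha / 3.0) * (B @ B))
    return x - H @ newton

# Tiny demonstration: solve x^2 - 2 = 0 as a 1x1 "system" with alpha = 0.
F = lambda x: np.array([x[0] ** 2 - 2.0])
J = lambda x: np.array([[2.0 * x[0]]])
x = np.array([1.5])
for _ in range(5):
    x = scheme_step(F, J, x)
print(abs(x[0] - np.sqrt(2.0)))  # essentially zero
```

Note that the bracketed matrix reduces to the identity at the solution (the coefficients sum to 1), so the correction approaches the Newton step there.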
In the following sections, this paper addresses these topics: a local convergence analysis is presented in Section 2. In Section 3, we undertake a computational efficiency study of the new method and compare it with other existing schemes. Section 4 includes several numerical examples in order to validate the theoretical results. Concluding observations are summarized in Section 5.

2. Local Convergence Analysis

In this section, we provide the local convergence analysis for scheme (1). Let $\bar{B}(x^*, r)$ and $B(x^*, r)$ represent the closed and open balls, respectively, in the Banach space $X$, centered at the solution $x^*$ with radius $r > 0$. In order to develop a local convergence study, certain conditions are imposed on the operators $T$, $T'$ at $x^*$, which is presumed to exist. Hence, we consider, for real numbers $L_0 > 0$, $L > 0$ and for all $x, y \in D$, the conditions listed below:
(C1)
$T(x^*) = 0, \quad T'(x^*)^{-1} \in \mathcal{L}(Y, X)$;
(C2)
$\|T'(x^*)^{-1}\left(T'(x) - T'(x^*)\right)\| \le L_0 \|x - x^*\|$;
(C3)
$\|T'(x^*)^{-1}\left(T'(x) - T'(y)\right)\| \le L \|x - y\|$;
(C4)
$\|T'(x^*)^{-1}T'(x)\| \le M$ for all $x \in D$, for $M > 1$ real.
Cordero et al., in [11], examined the scheme (1), applying complex dynamics techniques. However, they did not provide a theoretical analysis of local convergence, which we consider essential for the reasons previously outlined. In Section 2.1, we perform this study.

2.1. Local Convergence Analysis Using Conditions (C1), (C2) and (C3)

In order to perform the local convergence analysis of the scheme, it is appropriate to define specific parameters and functions. Define the function $g_1$ on the interval $\left[0, \frac{1}{L_0}\right)$ by:
$$g_1(t) = \frac{1}{1 - L_0 t}\left(\frac{L}{2}\,t + \frac{1}{3}\left(1 + L_0 t\right)\right).$$
Now, consider the function $h_1(t) = g_1(t) - 1$; then $h_1(0) = \frac{1}{3} - 1 = -\frac{2}{3} < 0$ and $h_1(t) \to +\infty$ as $t \to (1/L_0)^-$. Consequently, $h_1$ has at least one zero in $\left(0, 1/L_0\right)$. Let $r_1$ be the smallest zero in $\left(0, 1/L_0\right)$; it follows that:
$$0 < r_1 < \frac{1}{L_0} \quad\text{and}\quad 0 \le g_1(t) < 1, \ \ \forall t \in [0, r_1).$$
Moreover, define the function $g_2$ on the interval $\left[0, r_1\right)$ by:
$$g_2(t) = \frac{L t}{2(1 - L_0 t)} + \left|\frac{3}{8} + \alpha\right| \frac{1 + L_0 t}{1 - L_0 t} + |\alpha|\, \frac{1 + L_0 t}{1 - L_0\, t\, g_1(t)} + \frac{|\alpha|}{3}\, \frac{\left(1 + L_0\, t\, g_1(t)\right)\left(1 + L_0 t\right)}{1 - L_0 t} + \left|\frac{3}{8} - \frac{\alpha}{3}\right| \left(\frac{1 + L_0 t}{1 - L_0\, t\, g_1(t)}\right)^2.$$
Consider the function $h_2(t) = g_2(t) - 1$. Since
$$g_1(0) = \frac{1}{3}, \qquad g_2(0) = \left|\frac{3}{8} + \alpha\right| + |\alpha| + \frac{|\alpha|}{3} + \left|\frac{3}{8} - \frac{\alpha}{3}\right|,$$
we have $h_2(0) = g_2(0) - 1$, and $h_2(0) < 0$ if $\alpha \in \left(-\frac{3}{8}, \frac{1}{8}\right)$. Also,
$$h_2(r_1) = \frac{L r_1}{2(1 - L_0 r_1)} + \left|\frac{3}{8} + \alpha\right| \frac{1 + L_0 r_1}{1 - L_0 r_1} + |\alpha|\, \frac{1 + L_0 r_1}{1 - L_0\, r_1\, g_1(r_1)} + \frac{|\alpha|}{3}\, \frac{\left(1 + L_0\, r_1\, g_1(r_1)\right)\left(1 + L_0 r_1\right)}{1 - L_0 r_1} + \left|\frac{3}{8} - \frac{\alpha}{3}\right| \left(\frac{1 + L_0 r_1}{1 - L_0\, r_1\, g_1(r_1)}\right)^2 - 1.$$
From (3), we have that $1 - L_0 r_1 > 0$ and, since $g_1(r_1) = 1$, also $1 - L_0\, r_1\, g_1(r_1) > 0$; we conclude that $h_2(r_1) > 0$.
Consequently, $h_2$ has at least one zero in $\left(0, r_1\right)$. Let $r$ be the smallest zero in $\left(0, r_1\right)$; we have:
$$0 < r < r_1 < \frac{1}{L_0} \quad\text{and}\quad 0 \le g_2(t) < 1, \ \ \forall t \in [0, r).$$
Thus, for $\alpha \in \left(-\frac{3}{8}, \frac{1}{8}\right)$, $0 < r < r_1 < 1/L_0$.
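In practice, the radii can be computed numerically: scan for the first sign change of $h_1$ and $h_2$ and refine it by bisection. The sketch below is our own illustration, using the Lipschitz constants of Example 2 later in the paper; the resulting $r_1$ can be checked against the closed form $4/(3L + 8L_0)$ obtained by solving $g_1(t) = 1$ directly. (The values follow the $g$-functions as written above; tabulated radii elsewhere may differ slightly due to rounding of $L_0$ and $L$.)

```python
# Auxiliary functions for conditions (C1)-(C3); alpha in (-3/8, 1/8).
def g1(t, L0, L):
    return (L * t / 2.0 + (1.0 + L0 * t) / 3.0) / (1.0 - L0 * t)

def g2(t, L0, L, alpha):
    q = 1.0 - L0 * t                       # 1 - L0 t
    qy = 1.0 - L0 * t * g1(t, L0, L)       # 1 - L0 t g1(t)
    return (L * t / (2.0 * q)
            + abs(3.0 / 8.0 + alpha) * (1.0 + L0 * t) / q
            + abs(alpha) * (1.0 + L0 * t) / qy
            + (abs(alpha) / 3.0) * (1.0 + L0 * t * g1(t, L0, L)) * (1.0 + L0 * t) / q
            + abs(3.0 / 8.0 - alpha / 3.0) * ((1.0 + L0 * t) / qy) ** 2)

def smallest_root(h, lo, hi, steps=4000):
    """Locate the first sign change of h on (lo, hi), then bisect."""
    prev = lo
    for k in range(1, steps + 1):
        t = lo + (hi - lo) * k / steps
        if h(prev) < 0.0 <= h(t):
            a, b = prev, t
            for _ in range(80):
                m = 0.5 * (a + b)
                if h(m) < 0.0:
                    a = m
                else:
                    b = m
            return 0.5 * (a + b)
        prev = t
    return None

# Constants of Example 2 (beam design): L0 = 1.87, L = 1.9723, alpha = 0.
L0, L, alpha = 1.87, 1.9723, 0.0
r1 = smallest_root(lambda t: g1(t, L0, L) - 1.0, 0.0, 0.999 / L0)
r = smallest_root(lambda t: g2(t, L0, L, alpha) - 1.0, 0.0, r1)
print(r1, r)   # r1 agrees with the closed form 4 / (3L + 8 L0)
```

Since $g_1(t) = 1$ is linear in $t$, the bisection result for $r_1$ must match $4/(3L + 8L_0)$ to machine precision, which is a convenient sanity check.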
Let us consider the following lemma.
Lemma 1. 
If the operator $T$ satisfies (C2) and (C3), then the inequalities listed below hold for all $x \in D$ and $t \in [0, 1]$:
$$\|T'(x^*)^{-1} T'(x)\| \le 1 + L_0 \|x - x^*\|,$$
$$\|T'(x^*)^{-1} T'\left(x^* + t(x - x^*)\right)\| \le 1 + L_0 \|x - x^*\|,$$
$$\|T'(x^*)^{-1} T(x)\| \le \left(1 + L_0 \|x - x^*\|\right)\|x - x^*\|.$$
The proof follows from the remarks below.
Proof of Lemma 1.
From (C2) we have:
$$\|T'(x^*)^{-1} T'(x)\| = \|T'(x^*)^{-1}\left(T'(x) - T'(x^*)\right) + I\| \le 1 + \|T'(x^*)^{-1}\left(T'(x) - T'(x^*)\right)\| \le 1 + L_0 \|x - x^*\|.$$
Then, consequently:
$$\|T'(x^*)^{-1} T'\left(x^* + t(x - x^*)\right)\| = \|I + T'(x^*)^{-1}\left(T'\left(x^* + t(x - x^*)\right) - T'(x^*)\right)\| \le 1 + \|T'(x^*)^{-1}\left(T'\left(x^* + t(x - x^*)\right) - T'(x^*)\right)\|,$$
and using (C2):
$$\|T'(x^*)^{-1} T'\left(x^* + t(x - x^*)\right)\| \le 1 + t\, L_0 \|x - x^*\|.$$
Since $t \in [0, 1]$:
$$\|T'(x^*)^{-1} T'\left(x^* + t(x - x^*)\right)\| \le 1 + L_0 \|x - x^*\|.$$
In addition, $T(x) = T(x) - T(x^*)$ and, by using an adequate integral estimate, we have:
$$\|T'(x^*)^{-1} T(x)\| = \left\|T'(x^*)^{-1}\int_0^1 T'\left(x^* + t(x - x^*)\right)(x - x^*)\,dt\right\| \le \left(1 + L_0 \|x - x^*\|\right)\|x - x^*\|. \qquad\square$$
In Theorem 1, we present the local convergence result for scheme (1) under conditions (C1)–(C3). Condition (C4) is omitted, and the radius of the convergence ball is determined without using the constant $M$.
Theorem 1. 
Let $T : D \subseteq X \to Y$ be a Fréchet differentiable operator. Suppose that there exist $x^* \in D$ and $\alpha \in \left(-\frac{3}{8}, \frac{1}{8}\right)$ such that (C1)–(C3) are satisfied and $\bar{B}(x^*, r) \subseteq D$, where $r$ is the value obtained previously by using the auxiliary functions $g_1$ and $g_2$. Then the sequence $\{x_k\}$ generated by (1) for $x_0 \in B(x^*, r)\setminus\{x^*\}$ is well defined for $k = 0, 1, 2, \ldots$, remains in $B(x^*, r)$ and converges to $x^*$. In addition, the following estimates hold for $k = 0, 1, 2, \ldots$:
$$\|y_k - x^*\| \le g_1\left(\|x_k - x^*\|\right)\|x_k - x^*\| \le \|x_k - x^*\| < r,$$
$$\|x_{k+1} - x^*\| \le g_2\left(\|x_k - x^*\|\right)\|x_k - x^*\| \le \|x_k - x^*\| < r.$$
Furthermore, if $R \in \left[r, \frac{2}{L_0}\right)$ exists such that $\bar{B}(x^*, R) \subseteq D$, then the limit point $x^*$ is the unique solution of the equation $T(x) = 0$ in $\bar{B}(x^*, R)$.
Proof of Theorem 1.
Since $x_0 \in D$, taking (C2) for $x = x_0$ we have:
$$\|T'(x^*)^{-1}\left(T'(x_0) - T'(x^*)\right)\| \le L_0 \|x_0 - x^*\|.$$
Assuming that $\|x_0 - x^*\| < 1/L_0$, we obtain:
$$\|T'(x^*)^{-1}\left(T'(x_0) - T'(x^*)\right)\| < 1.$$
Hence, by the Banach lemma on invertible operators, $T'(x_0)^{-1}$ exists and we have:
$$\|T'(x_0)^{-1} T'(x^*)\| \le \frac{1}{1 - L_0 \|x_0 - x^*\|}.$$
Therefore, $y_0$ is well defined. From (1), for $k = 0$, we have $y_0 = x_0 - \frac{2}{3} T'(x_0)^{-1} T(x_0)$, so:
$$y_0 - x^* = x_0 - x^* - T'(x_0)^{-1} T(x_0) + \frac{1}{3}\, T'(x_0)^{-1} T(x_0),$$
that is,
$$y_0 - x^* = T'(x_0)^{-1} T'(x^*)\int_0^1 T'(x^*)^{-1}\left[T'(x_0) - T'\left(x^* + t(x_0 - x^*)\right)\right](x_0 - x^*)\,dt + \frac{1}{3}\, T'(x_0)^{-1} T'(x^*)\int_0^1 T'(x^*)^{-1} T'\left(x^* + t(x_0 - x^*)\right)(x_0 - x^*)\,dt.$$
Taking norms on both sides of the last expression and using (C3) and (6), we obtain:
$$\|y_0 - x^*\| \le \frac{1}{1 - L_0\|x_0 - x^*\|}\left(\frac{L}{2}\|x_0 - x^*\| + \frac{1}{3}\left(1 + L_0\|x_0 - x^*\|\right)\right)\|x_0 - x^*\| = g_1\left(\|x_0 - x^*\|\right)\|x_0 - x^*\|,$$
where in the last inequality we have used Lemma 1 and the previously defined auxiliary function $g_1$. Therefore, taking (2) and (3):
$$\|y_0 - x^*\| \le g_1\left(\|x_0 - x^*\|\right)\|x_0 - x^*\| < \|x_0 - x^*\|.$$
Since $y_0 \in D$, using (C2) we get:
$$\|T'(x^*)^{-1}\left(T'(y_0) - T'(x^*)\right)\| \le L_0\|y_0 - x^*\| \le L_0\|x_0 - x^*\| < 1;$$
thus, by the Banach lemma (see [12]) on invertible operators, $T'(y_0)^{-1}$ exists and:
$$\|T'(y_0)^{-1} T'(x^*)\| \le \frac{1}{1 - L_0\|y_0 - x^*\|}.$$
Hence, $x_1$ is well defined and we have:
$$x_1 = x_0 - \left[\left(\frac{5}{8} - \alpha\right) I + \alpha\, T'(y_0)^{-1}T'(x_0) + \frac{\alpha}{3}\, T'(x_0)^{-1}T'(y_0) + \left(\frac{3}{8} - \frac{\alpha}{3}\right)\left(T'(y_0)^{-1}T'(x_0)\right)^2\right] T'(x_0)^{-1}T(x_0).$$
Subtracting $x^*$ from both sides of the equation:
$$\begin{aligned} x_1 - x^* ={}& T'(x_0)^{-1} T'(x^*)\int_0^1 T'(x^*)^{-1}\left[T'(x_0) - T'\left(x^* + t(x_0 - x^*)\right)\right](x_0 - x^*)\,dt \\ &+ \left(\frac{3}{8} + \alpha\right) T'(x_0)^{-1} T'(x^*)\, T'(x^*)^{-1} T(x_0) - \alpha\, T'(y_0)^{-1} T'(x^*)\, T'(x^*)^{-1} T(x_0) \\ &- \frac{\alpha}{3}\, T'(x_0)^{-1} T'(x^*)\, T'(x^*)^{-1} T'(y_0)\, T'(x_0)^{-1} T(x_0) \\ &- \left(\frac{3}{8} - \frac{\alpha}{3}\right) T'(y_0)^{-1} T'(x^*)\, T'(x^*)^{-1} T'(x_0)\, T'(y_0)^{-1} T'(x^*)\, T'(x^*)^{-1} T(x_0). \end{aligned}$$
Taking norms, from (5), (6), (8), and (9) we obtain:
$$\|x_1 - x^*\| \le \left[\frac{L \|x_0 - x^*\|}{2\left(1 - L_0\|x_0 - x^*\|\right)} + \left|\frac{3}{8} + \alpha\right| \frac{1 + L_0\|x_0 - x^*\|}{1 - L_0\|x_0 - x^*\|} + |\alpha|\, \frac{1 + L_0\|x_0 - x^*\|}{1 - L_0\|y_0 - x^*\|} + \frac{|\alpha|}{3}\, \frac{\left(1 + L_0\|y_0 - x^*\|\right)\left(1 + L_0\|x_0 - x^*\|\right)}{1 - L_0\|x_0 - x^*\|} + \left|\frac{3}{8} - \frac{\alpha}{3}\right| \left(\frac{1 + L_0\|x_0 - x^*\|}{1 - L_0\|y_0 - x^*\|}\right)^2\right]\|x_0 - x^*\|.$$
Taking (7):
$$\|x_1 - x^*\| \le \left[\frac{L \|x_0 - x^*\|}{2\left(1 - L_0\|x_0 - x^*\|\right)} + \left|\frac{3}{8} + \alpha\right| \frac{1 + L_0\|x_0 - x^*\|}{1 - L_0\|x_0 - x^*\|} + |\alpha|\, \frac{1 + L_0\|x_0 - x^*\|}{1 - L_0\, g_1\left(\|x_0 - x^*\|\right)\|x_0 - x^*\|} + \frac{|\alpha|}{3}\, \frac{\left(1 + L_0\, g_1\left(\|x_0 - x^*\|\right)\|x_0 - x^*\|\right)\left(1 + L_0\|x_0 - x^*\|\right)}{1 - L_0\|x_0 - x^*\|} + \left|\frac{3}{8} - \frac{\alpha}{3}\right| \left(\frac{1 + L_0\|x_0 - x^*\|}{1 - L_0\, g_1\left(\|x_0 - x^*\|\right)\|x_0 - x^*\|}\right)^2\right]\|x_0 - x^*\|,$$
and, using the auxiliary function $g_2$:
$$\|x_1 - x^*\| \le g_2\left(\|x_0 - x^*\|\right)\|x_0 - x^*\|.$$
Then, using (10) and (4):
$$\|x_1 - x^*\| \le g_2\left(\|x_0 - x^*\|\right)\|x_0 - x^*\| < \|x_0 - x^*\| < r.$$
It follows that the theorem holds for $k = 0$. By mathematical induction, replacing $x_0, y_0, x_1$ with $x_k, y_k, x_{k+1}$, we have:
$$\|y_k - x^*\| \le g_1\left(\|x_k - x^*\|\right)\|x_k - x^*\| < \|x_k - x^*\| < r,$$
$$\|x_{k+1} - x^*\| \le g_2\left(\|x_k - x^*\|\right)\|x_k - x^*\| < \|x_k - x^*\| < r,$$
where the functions $g_i$ are those previously obtained. From $\|x_{k+1} - x^*\| \le \|x_k - x^*\| < r$, it is concluded that $x_{k+1} \in B(x^*, r)$. The function $g_2$ is increasing over its domain, and therefore:
$$\|x_{k+1} - x^*\| \le g_2\left(\|x_0 - x^*\|\right)\|x_k - x^*\| \le g_2\left(\|x_0 - x^*\|\right) g_2\left(\|x_{k-1} - x^*\|\right)\|x_{k-1} - x^*\| \le g_2\left(\|x_0 - x^*\|\right)^2 g_2\left(\|x_{k-2} - x^*\|\right)\|x_{k-2} - x^*\| \le \cdots,$$
that is:
$$\|x_{k+1} - x^*\| \le g_2\left(\|x_0 - x^*\|\right)^{k+1}\|x_0 - x^*\|.$$
Letting $k \to \infty$ in (11), we get:
$$\lim_{k \to \infty}\|x_{k+1} - x^*\| \le \lim_{k \to \infty} g_2\left(\|x_0 - x^*\|\right)^{k+1}\|x_0 - x^*\|,$$
where $\lim_{k \to \infty} g_2\left(\|x_0 - x^*\|\right)^{k+1} = 0$ since $g_2\left(\|x_0 - x^*\|\right) < 1$; therefore, $\lim_{k \to \infty} x_k = x^*$. Consequently, the method converges to the solution.
In order to demonstrate uniqueness, suppose that $y^* \in B(x^*, R)$ satisfies $T(y^*) = 0$. Consider the operator $P = \int_0^1 T'\left(y^* + t(x^* - y^*)\right)dt$. Then, using (C2):
$$\|T'(x^*)^{-1}\left(P - T'(x^*)\right)\| \le \int_0^1 \|T'(x^*)^{-1}\left(T'\left(y^* + t(x^* - y^*)\right) - T'(x^*)\right)\|\,dt \le \int_0^1 L_0 (1 - t)\|x^* - y^*\|\,dt = \frac{L_0}{2}\|x^* - y^*\| \le \frac{L_0}{2} R < 1,$$
since:
$$R < \frac{2}{L_0}.$$
For the aforementioned reasons and by the Banach lemma, $P^{-1}$ exists, and:
$$0 = T(x^*) - T(y^*) = P\left(x^* - y^*\right).$$
Applying the inverse operator $P^{-1}$:
$$P^{-1} P\left(x^* - y^*\right) = 0;$$
hence, $x^* - y^* = 0$, and therefore $x^* = y^*$. It is concluded that the root $x^*$ is unique. □

2.2. Local Convergence Analysis Using Condition (C4)

In the preceding section, we refrained from using condition (C4), which can sometimes be a drawback when dealing with practical problems. In this section, condition (C4) is used, enabling us to compare the radii of local convergence obtained using $M$ with those obtained excluding this bound. The local convergence analysis is thereby completed.
Proceeding with the notation already established in (C1)–(C4), we now define new auxiliary functions $\bar{g}_i$ that depend on $M$. For the study of local convergence of (1), the solution $x^*$ is assumed to exist and conditions are imposed on the operators $T$, $T'$ at that solution. Therefore, we assume conditions (C1)–(C4) for all $x, y \in D$ and for real numbers $L_0 > 0$, $L > 0$.
Let us define the following function on the interval $\left[0, \frac{1}{L_0}\right)$:
$$\bar{g}_1(t) = \frac{1}{1 - L_0 t}\left(\frac{L}{2}\,t + \frac{M}{3}\right).$$
Now, consider the function $\bar{h}_1(t) = \bar{g}_1(t) - 1$; then $\bar{h}_1(0) = \frac{M}{3} - 1 = \frac{M - 3}{3} < 0$ for $M < 3$, and $\bar{h}_1(t) \to +\infty$ as $t \to (1/L_0)^-$. Consequently, $\bar{h}_1$ has at least one zero in $\left(0, 1/L_0\right)$. Let $\bar{r}_1$ be the smallest zero in $\left(0, 1/L_0\right)$, which implies that:
$$0 < \bar{r}_1 < \frac{1}{L_0} \quad\text{and}\quad 0 \le \bar{g}_1(t) < 1, \ \ \forall t \in [0, \bar{r}_1),$$
and, solving $\bar{g}_1(t) = 1$ explicitly:
$$\bar{r}_1 = \frac{2(3 - M)}{3(L + 2L_0)}.$$
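The closed form for $\bar{r}_1$ can be checked directly: substituting it into $\bar{g}_1$ must give exactly 1. A short sketch (ours), using the constants of Example 2 ($L_0 = 1.87$, $L = 1.9723$, $M = 1.1634$):

```python
# Verify that g1bar(r1bar) = 1 for the closed-form radius r1bar.
def g1bar(t, L0, L, M):
    return (L * t / 2.0 + M / 3.0) / (1.0 - L0 * t)

L0, L, M = 1.87, 1.9723, 1.1634          # Example 2 values
r1bar = 2.0 * (3.0 - M) / (3.0 * (L + 2.0 * L0))
print(r1bar, g1bar(r1bar, L0, L, M))     # r1bar ~ 0.2143, g1bar(r1bar) = 1
```

The computed $\bar{r}_1 \approx 0.2143$ agrees with the value reported in Example 2.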
Moreover, define the function $\bar{g}_2$ on the interval $\left[0, \bar{r}_1\right)$ by:
$$\bar{g}_2(t) = \frac{L t}{2(1 - L_0 t)} + \left|\frac{3}{8} + \alpha\right| \frac{M}{1 - L_0 t} + |\alpha|\, \frac{M}{1 - L_0\, t\, \bar{g}_1(t)} + \frac{|\alpha|}{3}\, \frac{M^2}{1 - L_0 t} + \left|\frac{3}{8} - \frac{\alpha}{3}\right| \left(\frac{M}{1 - L_0\, t\, \bar{g}_1(t)}\right)^2.$$
Consider the function $\bar{h}_2(t) = \bar{g}_2(t) - 1$. Since
$$\bar{g}_1(0) = \frac{M}{3}, \qquad \bar{g}_2(0) = \left|\frac{3}{8} + \alpha\right| M + |\alpha| M + \frac{|\alpha|}{3} M^2 + \left|\frac{3}{8} - \frac{\alpha}{3}\right| M^2,$$
we have $\bar{h}_2(0) = \bar{g}_2(0) - 1$, and $\bar{h}_2(0) < 0$ if $\left|\frac{3}{8} + \alpha\right| M + |\alpha| M + \frac{|\alpha|}{3} M^2 + \left|\frac{3}{8} - \frac{\alpha}{3}\right| M^2 < 1$. Also,
$$\bar{h}_2(\bar{r}_1) = \frac{L \bar{r}_1}{2(1 - L_0 \bar{r}_1)} + \left|\frac{3}{8} + \alpha\right| \frac{M}{1 - L_0 \bar{r}_1} + |\alpha|\, \frac{M}{1 - L_0\, \bar{r}_1\, \bar{g}_1(\bar{r}_1)} + \frac{|\alpha|}{3}\, \frac{M^2}{1 - L_0 \bar{r}_1} + \left|\frac{3}{8} - \frac{\alpha}{3}\right| \left(\frac{M}{1 - L_0\, \bar{r}_1\, \bar{g}_1(\bar{r}_1)}\right)^2 - 1.$$
From (13), we have that $1 - L_0 \bar{r}_1 > 0$, $\bar{g}_1(\bar{r}_1) = 1$ and $1 - L_0\, \bar{r}_1\, \bar{g}_1(\bar{r}_1) > 0$, and conclude that $\bar{h}_2(\bar{r}_1) > 0$.
Consequently, $\bar{h}_2$ has at least one zero in $\left(0, \bar{r}_1\right)$. Let $\bar{r}$ be the smallest zero in $\left(0, \bar{r}_1\right)$; we have:
$$0 < \bar{r} < \bar{r}_1 < \frac{1}{L_0} \quad\text{and}\quad 0 \le \bar{g}_2(t) < 1, \ \ \forall t \in [0, \bar{r}).$$
We will now consider the following lemma.
Lemma 2. 
Provided that the operator $T$ satisfies (C1) and (C4), the inequalities listed below hold for all $x_0 \in B(x^*, \bar{r})$ and $t \in [0, 1]$:
$$\|T'(x^*)^{-1} T'\left(x^* + t(x_0 - x^*)\right)\| \le M, \qquad \|T'(x^*)^{-1} T(x_0)\| \le M \|x_0 - x^*\|.$$
The proof of this lemma can be accomplished by using the definition of $\bar{r}$ and (C4).
Proof of Lemma 2.
We have that:
$$T(x_0) = T(x_0) - T(x^*) = \int_{x^*}^{x_0} T'(z)\,dz = \int_0^1 T'\left(x^* + t(x_0 - x^*)\right)(x_0 - x^*)\,dt.$$
Notice that $\|x^* + t(x_0 - x^*) - x^*\| = t\|x_0 - x^*\| \le \|x_0 - x^*\| \le \bar{r}$. Then, $x^* + t(x_0 - x^*) \in \bar{B}(x^*, \bar{r})$. Using (16) and (C4), we obtain:
$$\|T'(x^*)^{-1} T'\left(x^* + t(x_0 - x^*)\right)\| \le M.$$
Consequently:
$$\|T'(x^*)^{-1} T(x_0)\| \le \int_0^1 \|T'(x^*)^{-1} T'\left(x^* + t(x_0 - x^*)\right)\|\, \|x_0 - x^*\|\,dt \le \int_0^1 M \|x_0 - x^*\|\,dt = M \|x_0 - x^*\|. \qquad\square$$
We will now present the local convergence result for scheme (1), given the (C1)–(C4) conditions.
Theorem 2. 
Let $T : D \subseteq X \to Y$ be a Fréchet differentiable operator, $L_0 > 0$, $L > 0$ and $M > 1$. Let $x^* \in D$ be such that conditions (C1)–(C4) are satisfied for all $x, y \in D$ and, moreover, $M < 3$ and $\left|\frac{3}{8} + \alpha\right| M + |\alpha| M + \frac{|\alpha|}{3} M^2 + \left|\frac{3}{8} - \frac{\alpha}{3}\right| M^2 < 1$. Then, the sequence $\{x_k\}$ generated by (1) for $x_0 \in B(x^*, \bar{r})\setminus\{x^*\}$ is well defined for $k = 0, 1, 2, \ldots$, remains in $B(x^*, \bar{r})$ and converges to $x^*$. In addition, the following estimates hold for $k = 0, 1, 2, \ldots$:
$$\|y_k - x^*\| \le \bar{g}_1\left(\|x_k - x^*\|\right)\|x_k - x^*\| \le \|x_k - x^*\| < \bar{r},$$
$$\|x_{k+1} - x^*\| \le \bar{g}_2\left(\|x_k - x^*\|\right)\|x_k - x^*\| \le \|x_k - x^*\| < \bar{r},$$
where the $\bar{g}$ functions are defined before the statement of Theorem 2. Furthermore, if $\bar{R} \in \left[\bar{r}, \frac{2}{L_0}\right)$ exists such that $\bar{B}(x^*, \bar{R}) \subseteq D$, then the limit point $x^*$ stands as the unique solution to the equation $T(x) = 0$ in $\bar{B}(x^*, \bar{R})$.
Proof of Theorem 2.
We omit the proof of the preceding theorem, as it is nearly identical to that of Theorem 1; the uniqueness argument is the one shown in Theorem 1. □
Next, we perform a comparative study of the Efficiency Index ($EI$) and the Computational Efficiency Index ($CEI$) between scheme (1) and the methods of Jaiswal [13], Choubey and Jaiswal [14], and Sharma et al. [15].

3. Efficiency Index (EI) and Computational Efficiency Index (CEI)

Sometimes, evaluating the nonlinear operator $T$ or its derivatives involves a high computational cost, rendering the other operations carried out in the iterative process insignificant. An important measure of the robustness of an algorithm is the Efficiency Index ($EI$), introduced by Ostrowski in [16]. It is defined by:
$$EI = p^{\frac{1}{d}},$$
where $p$ stands for the order of convergence of the method and $d$ denotes the number of functional evaluations per iteration. It is important to remember that the number of functional evaluations of $T$ and $T'$ at each iteration is $N$ and $N^2$, respectively. Thus, for scheme (1), the number of functional evaluations per iteration is $d = 2N^2 + N$. Given that the order of convergence of the method is $p = 4$, the efficiency index is:
$$EI = 4^{\frac{1}{2N^2 + N}}.$$
However, the previous concept does not consider the cost of all operations involved in each iteration. The computational efficiency, which takes these costs into consideration, was introduced by Traub [17].
In the $N$-dimensional case, several linear systems have to be solved at each iteration, and therefore it is important to consider the number of operations performed. Let us recall that the number of products/quotients (per iteration) in the direct solution of a linear system is:
$$\frac{N^3}{3} + N^2 - \frac{N}{3},$$
and the number of products/quotients (per iteration) in the direct solution of $M$ linear systems with the same coefficient matrix, using LU factorization, is:
$$\frac{N^3}{3} + M N^2 - \frac{N}{3},$$
where $N$ is the size of the linear systems. The cost increases by only $N^2$ operations for each additional linear system solved with the same coefficient matrix. Moreover, each matrix–vector product costs $N^2$ operations.
Hence, Cordero et al. in [18] defined the Computational Efficiency Index ($CEI$) as:
$$CEI = p^{\frac{1}{d + op}},$$
where $p$ denotes the order of convergence of the method, $d$ represents the number of functional evaluations per iteration, and $op$ stands for the number of products/quotients per iteration. However, it is not meaningful to add $d$ functional evaluations directly to arithmetic operations and, for that reason, we apply to $d$ a correction factor $\mu$, which transforms the number of evaluations into an equivalent number of operations, such that:
$$CEI = p^{\frac{1}{\mu d + op}}.$$
Therefore, for method (1), the number of products/quotients per iteration is:
$$op = \frac{2N^3}{3} + 7N^2 - \frac{2N}{3},$$
and:
$$\mu d + op = \mu\left(2N^2 + N\right) + \frac{2N^3}{3} + 7N^2 - \frac{2N}{3} = \frac{2N^3}{3} + (2\mu + 7)N^2 + \left(\mu - \frac{2}{3}\right)N = \frac{2N^3 + 3(2\mu + 7)N^2 + (3\mu - 2)N}{3}.$$
Consequently, the Computational Efficiency Index of the method is:
$$CEI = 4^{\frac{3}{2N^3 + 3(2\mu + 7)N^2 + (3\mu - 2)N}}.$$
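Both indexes are straightforward to evaluate. The sketch below (our own illustration) confirms two structural facts: $CEI < EI$ for any system size $N$, since the exponent's denominator only grows when operation counts are included, and $CEI$ decreases as $\mu$ increases.

```python
# EI and CEI of method (1) as functions of the system size N and the
# evaluation-to-operation correction factor mu.
def ei(N):
    return 4.0 ** (1.0 / (2 * N * N + N))

def cei(N, mu):
    return 4.0 ** (3.0 / (2 * N**3 + 3 * (2 * mu + 7) * N**2 + (3 * mu - 2) * N))

for N in (5, 10, 20):
    print(N, ei(N), cei(N, 1.0))
```

Both indexes tend to 1 as $N$ grows, which is why comparisons between methods are usually plotted over a range of moderate $N$.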
As previously mentioned, the efficiency indexes are analyzed for other methods and then compared with the respective indexes of method (1) (hereinafter referred to as M1). The methods considered are that of Jaiswal (see [13]):
$$\begin{aligned} y_k &= x_k - \frac{2}{3}\,T'(x_k)^{-1}T(x_k),\\ x_{k+1} &= x_k - \left[\frac{3}{2} I - \frac{3}{4}\, T'(x_k)^{-1}T'(y_k) + \frac{1}{4}\left(T'(x_k)^{-1}T'(y_k)\right)^2\right] \cdot 2\left[3T'(y_k) - T'(x_k)\right]^{-1} T'(y_k)\, T'(x_k)^{-1}\, T(x_k), \end{aligned}$$
as well as the method proposed by Choubey and Jaiswal (see [14]), denoted as CHJ:
$$\begin{aligned} y_k &= x_k - \frac{2}{3}\,T'(x_k)^{-1}T(x_k),\\ x_{k+1} &= x_k - \left[\frac{25}{16} I - \frac{9}{8}\, T'(x_k)^{-1}T'(y_k) + \frac{9}{16}\left(T'(x_k)^{-1}T'(y_k)\right)^2\right] \cdot 4\left[T'(x_k) + 3T'(y_k)\right]^{-1} T(x_k), \end{aligned}$$
and that of Sharma et al. (see [15]):
$$\begin{aligned} y_k &= x_k - \frac{2}{3}\,T'(x_k)^{-1}T(x_k),\\ x_{k+1} &= x_k - \frac{1}{2}\left[-I + \frac{9}{4}\, T'(y_k)^{-1}T'(x_k) + \frac{3}{4}\, T'(x_k)^{-1}T'(y_k)\right] T'(x_k)^{-1}T(x_k). \end{aligned}$$
The comparisons of the Computational Efficiency Indexes for the aforementioned methods are shown in Table 1. The notation $LS(T'(x))$ and $LS(Other)$ stands for the number of linear systems to be solved with the same coefficient matrix $T'(x)$ and with other coefficient matrices, respectively. Moreover, each matrix–vector multiplication (M × V) requires $N^2$ products.
Since method (1) depends on the parameter $\alpha$, we consider $\alpha = 0$ and $\alpha = \frac{9}{8}$, values which eliminate terms of the iterative expression, reducing the cost of solving the linear systems and the added computational effort. Also, let us consider $\mu = \frac{1}{2}, \frac{3}{4}, 1, \frac{3}{2}$. All schemes have the same number of functional evaluations of $T$ and $T'$ per iteration, and hence $d = 2N^2 + N$.
With the results obtained from Table 1, we represent in Figure 1 the performance of the efficiency index for the examined schemes.
Figure 1 shows that M1 for $\alpha = 0$ has better behavior than all the other methods analyzed. Furthermore, Sharma and M1 for $\alpha = \frac{9}{8}$ are second in efficiency. It is also shown that, as $\mu$ increases, the Computational Efficiency Index decreases for all schemes.
In the next section, some examples are worked out in order to perform numerical tests.

4. Numerical Results

In this section, several numerical examples are provided to validate the efficiency of our analysis for local convergence (Examples 1–3).
Example 1. 
The van der Waals equation of state (for a vapor) is (see [19,20]):
$$\left(P + \frac{a}{V^2}\right)\left(V - b\right) = RT.$$
This equation can be rearranged to the form of a cubic in $V$:
$$P V^3 - (Pb + RT) V^2 + a V - ab = 0,$$
where every constant possesses physical significance, with values that can be found in [19]. Let $X = \mathbb{R}$ and $D = (0, 100)$. Choose $P = 10{,}000$ kPa and $T = 800$ K. Then, the solution $V$ of the resulting equation is $36.9167$. Applying the conditions given in Theorems 1 and 2, we obtain $L_0 = 0.1151$, $L = 0.1215$ and $M = 1.12$. The radii are:
$$\text{(C1)–(C3)}: \ \alpha = 0.10, \ r = 0.1575 < r_1 = 3.0365; \qquad \text{(C4)}: \ \alpha = 0, \ \bar{r} = 0.6397 < \bar{r}_1 = 3.5000.$$
The uniqueness radius satisfies:
$$R < \frac{2}{L_0} = \frac{2}{0.1151} = 17.38.$$
Example 2 
(Beam Design Problem). Here, we examine a beam positioning problem (see [21]), where a beam of length $l$ units is positioned against the edge of a cubical box, with one end placed on the floor and the other end resting against the wall. The box, with sides of 1 unit each, is placed on the floor braced against the wall, as shown in Figure 2.
The triangles formed by the floor, box and beam (lower) and by the box, wall and beam (upper) are similar.
In this case, the measurement from the base of the wall to where the beam touches the floor is $l = 4$. Suppose $y$ represents the length measured from the floor to the edge of the box along the length of the beam, and let $x$ be the distance measured from the bottom of the box to the lower edge of the beam. Let $X = \mathbb{R}$ and $D = (0, 3)$. Then, we have:
$$x^4 + 2x^3 - 14x^2 + 2x + 1 = 0.$$
One positive solution of the equation is the zero $x^* = 2.7609$, and the first derivative is $4x^3 + 6x^2 - 28x + 2$. Applying the conditions given in Theorems 1 and 2, we obtain $L_0 = 1.87$, $L = 1.9723$ and $M = 1.1634$. The radii are:
$$\text{(C1)–(C3)}: \ \alpha = 0, \ r = 0.0100 < r_1 = 0.1916; \qquad \text{(C4)}: \ \alpha = 0, \ \bar{r} = 0.0208 < \bar{r}_1 = 0.2143.$$
The uniqueness radius satisfies:
$$R < \frac{2}{L_0} = \frac{2}{1.87} = 1.0695.$$
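As an illustrative check (our own sketch, with $\alpha = 0$), the scalar version of family (1) applied to the beam equation recovers the root from a starting point inside the convergence ball:

```python
# Fourth-order family (1), scalar case, on the beam design equation.
f = lambda x: x**4 + 2 * x**3 - 14 * x**2 + 2 * x + 1
df = lambda x: 4 * x**3 + 6 * x**2 - 28 * x + 2

def step(x, alpha=0.0):
    newton = f(x) / df(x)
    y = x - (2.0 / 3.0) * newton
    B = df(x) / df(y)            # scalar analogue of T'(y)^{-1} T'(x)
    C = df(y) / df(x)            # scalar analogue of T'(x)^{-1} T'(y)
    H = (5.0 / 8.0 - alpha) + alpha * B + (alpha / 3.0) * C \
        + (3.0 / 8.0 - alpha / 3.0) * B * B
    return x - H * newton

x = 2.77                          # within 0.01 of the root x* = 2.7609
for _ in range(6):
    x = step(x)
print(x)                          # ~ 2.7609
```

The convergence ball is only a sufficient condition; in practice the basin of attraction of this root is considerably larger than $r = 0.0100$.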
In Examples 3 and 4, we choose two members of the described iterative method and perform numerical tests to compare them with iterative schemes [13,14,15].
Numerical calculations were executed on a computer with 16 GB of RAM and an Intel Core i7 processor using MATLAB R2024a, with variable precision arithmetic set to 2000 digits of mantissa. Since we are comparing schemes of high order of convergence, the error tolerance is set at $\epsilon = 10^{-100}$ to reduce the risk of significant error accumulation and, therefore, ensure more accurate results.
For each method, the error estimate between two consecutive iterations is $\|x_{k+1} - x_k\|$ and the residual error is $\|T(x_{k+1})\|$. The stopping criterion is $\|x_{k+1} - x_k\| + \|T(x_{k+1})\| < \epsilon$. The number of iterations (iter) needed to converge to the solution is provided.
To validate the theoretical order of convergence $p$, the Approximated Computational Order of Convergence (ACOC) is computed. The execution time required to achieve convergence, denoted as e-time, is displayed in seconds, based on the average of 10 consecutive runs for each scheme.
Example 3. 
Let us examine a system of differential equations that describes the motion of an object, given by (see [22]):
$$T_1'(x) = e^x, \qquad T_2'(y) = (e - 1)y + 1, \qquad T_3'(z) = 1,$$
with initial conditions $T_1(0) = T_2(0) = T_3(0) = 0$. Let us assume $T = (T_1, T_2, T_3)$. Let $X = Y = \mathbb{R}^3$, $D = \bar{U}(0, 1)$ and $x^* = (0, 0, 0)^T$. Define the function $T$ on $D$, for $w = (x, y, z)^T$, by:
$$T(w) = \left(e^x - 1, \ \frac{e - 1}{2}y^2 + y, \ z\right)^T.$$
The Fréchet derivative is given by:
$$T'(w) = \begin{pmatrix} e^x & 0 & 0 \\ 0 & (e - 1)y + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Applying the conditions given in Theorems 1 and 2, we obtain $L_0 = e - 1$ and $L = M = e^{\frac{1}{e - 1}}$. The radii are:
$$\text{(C1)–(C3)}: \ \alpha = 0.10, \ r = 0.0127 < r_1 = 0.2607; \qquad \text{(C4)}: \ \alpha = 0, \ \bar{r} = 0.3449 < \bar{r}_1 = 0.4147.$$
The uniqueness radius satisfies:
$$R < \frac{2}{L_0} = \frac{2}{e - 1} = 1.1640.$$
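A NumPy sketch (ours) of scheme (1) with $\alpha = 0$ on this system, started inside the ball $B(x^*, 0.3449)$:

```python
import numpy as np

E = np.e - 1.0

def T(w):
    x, y, z = w
    return np.array([np.exp(x) - 1.0, 0.5 * E * y * y + y, z])

def Tp(w):
    x, y, z = w
    return np.diag([np.exp(x), E * y + 1.0, 1.0])   # Frechet derivative

def step(w, alpha=0.0):
    n = w.size
    newton = np.linalg.solve(Tp(w), T(w))
    v = w - (2.0 / 3.0) * newton
    B = np.linalg.solve(Tp(v), Tp(w))
    C = np.linalg.solve(Tp(w), Tp(v))
    H = ((5.0 / 8.0 - alpha) * np.eye(n) + alpha * B
         + (alpha / 3.0) * C + (3.0 / 8.0 - alpha / 3.0) * (B @ B))
    return w - H @ newton

w = np.array([0.3, 0.3, 0.3])     # inside B(x*, 0.3449)
for _ in range(6):
    w = step(w)
print(np.linalg.norm(w))          # ~ 0 (the solution is the origin)
```

Since the system is diagonal, the three components iterate independently; the linear third component is solved exactly in a single step.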
The results from Table 2 show that scheme M1 for $\alpha = 0$ converges adequately to the solution, and so do the CHJ and Jaiswal methods. Scheme M1 for $\alpha = \frac{9}{8}$ does not converge to the solution. We must remember that, according to Theorem 1, the values of $\alpha$ for which we can guarantee convergence are those in $\left(-\frac{3}{8}, \frac{1}{8}\right)$. The Sharma scheme does not converge to the solution either.
Example 4. 
Consider the system of equations of size $n = 20$ given by:
$$x_i - \cos\left(2x_i - x_1 - x_2 - x_3 - x_4\right) = 0, \qquad i = 1, 2, \ldots, n,$$
with $x^* = (0.5149, 0.5149, \ldots, 0.5149)^T$.
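A NumPy sketch (ours) of scheme (1) with $\alpha = 0$ on this system; note that at a uniform vector the equation reduces to $x = \cos(2x - 4x) = \cos(2x)$, whose root is $\approx 0.5149$:

```python
import numpy as np

n = 20

def F(x):
    s = x[0] + x[1] + x[2] + x[3]
    return x - np.cos(2.0 * x - s)

def JF(x):
    s = x[0] + x[1] + x[2] + x[3]
    si = np.sin(2.0 * x - s)              # sin(2 x_i - x1 - x2 - x3 - x4)
    J = np.eye(n) + 2.0 * np.diag(si)     # derivative of x_i and of 2 x_i
    J[:, :4] -= si[:, None]               # derivative w.r.t. x1, ..., x4
    return J

def step(x, alpha=0.0):
    newton = np.linalg.solve(JF(x), F(x))
    y = x - (2.0 / 3.0) * newton
    B = np.linalg.solve(JF(y), JF(x))
    C = np.linalg.solve(JF(x), JF(y))
    H = ((5.0 / 8.0 - alpha) * np.eye(n) + alpha * B
         + (alpha / 3.0) * C + (3.0 / 8.0 - alpha / 3.0) * (B @ B))
    return x - H @ newton

x = 0.5 * np.ones(n)                      # close to the uniform solution
for _ in range(8):
    x = step(x)
print(x[0])                               # ~ 0.5149
```

By symmetry, an iterate started at a uniform vector remains uniform, so the whole 20-dimensional run behaves like the scalar problem $x = \cos(2x)$.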
In a similar way to Example 3, the results from Table 3 show that the schemes M1 for $\alpha = 0$, CHJ and Jaiswal converge to the solution, while M1 for $\alpha = \frac{9}{8}$ and Sharma fail to converge. According to Theorem 1, the values of $\alpha$ for which we can guarantee convergence are those in $\left(-\frac{3}{8}, \frac{1}{8}\right)$.

5. Conclusions

The current paper presents a local convergence study of a family of iterative schemes in Banach spaces, provided that the Fréchet derivative satisfies the Lipschitz continuity condition. The radii of the convergence balls have been obtained according to the existence and uniqueness theorem. In the examples shown, scheme (1) gives smaller radii when the bound $M$ is not used.
All schemes have the same Ostrowski Efficiency Index, since this index focuses exclusively on the number of functional evaluations and the order of convergence. In terms of Computational Efficiency, scheme (1) with $\alpha = 0$ behaves better than all the other methods analyzed; furthermore, Sharma and M1 with $\alpha = \frac{9}{8}$ are second in efficiency.
For the Computational Efficiency Index, a correction factor $\mu$ was introduced, which transforms the number of evaluations into an equivalent number of operations. The value of this parameter can be estimated computationally to establish a relationship between the cost of evaluating a function and that of performing arithmetic operations. Nevertheless, the key is to use the same value as in other studies to ensure a fair comparison.
Future works related to this line of research and based on these results will focus on semilocal convergence analysis using majorizing sequences. Such a study extends the scope of local convergence by not only establishing conditions under which an initial approximation leads to a solution, but also providing the domain of existence of that solution, thus being valid for proving the existence of a solution to a problem for which no prior information was known. This analysis is particularly valuable for demonstrating the existence of solutions and ensuring the reliability of iterative algorithms in functional analysis, optimization, and numerical applications.

Author Contributions

Conceptualization, A.L. and E.M.; methodology, A.L. and E.M.; software, A.L.; validation, E.M.; formal analysis, A.L. and E.M.; writing—original draft, A.L.; writing—review and editing, E.M.; visualization, A.L.; supervision, E.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Ayuda a Primeros Proyectos de Investigación (PAID-06-23), Vicerrectorado de Investigación de la Universitat Politècnica de València (UPV).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K.; Cho, Y.; George, S. Contemporary Algorithms: Theory and Applications; Nova Science Publishers, Inc.: New York, NY, USA, 2022; Volume I, pp. 3–10. [Google Scholar] [CrossRef]
  2. Martínez, E.; Hueso, J.; Singh, S.; Gupta, D. Enlarging the convergence domain in local convergence studies for iterative methods in Banach spaces. Appl. Math. Comput. 2016, 281, 252–265. [Google Scholar] [CrossRef]
  3. Magreñán, Á.; Argyros, I. On the local convergence and the dynamics of Chebyshev–Halley methods with six and eight order of convergence. J. Comput. Appl. Math. 2016, 298, 236–251. [Google Scholar] [CrossRef]
  4. Argyros, I. On the Convergence and Application of Newton’s Method Under Weak Hölder Continuity Assumptions. Int. J. Comput. Math. 2010, 80, 767–780. [Google Scholar] [CrossRef]
  5. Zhou, X.; Chen, X.; Song, Y. On the convergence radius of the modified Newton method for multiple roots under the center–Hölder condition. Numer. Algorithms 2014, 65, 221–232. [Google Scholar] [CrossRef]
  6. Bi, W.; Ren, H.; Wu, X. Convergence of the modified Halley’s method for multiple zeros under Hölder continuous derivative. Numer. Algorithms 2011, 58, 497–512. [Google Scholar] [CrossRef]
  7. Behl, R.; Martínez, E.; Cevallos, F.; Alshomrani, A. Local Convergence Balls for Nonlinear Problems with Multiplicity and Their Extension to Eighth-Order Convergence. Hindawi Math. Probl. Eng. 2019, 2019, 1427809. [Google Scholar] [CrossRef]
  8. Zhou, X.; Song, Y. Convergence radius of Osada’s method under center-Hölder continuous condition. Appl. Math. Comput. 2014, 243, 809–816. [Google Scholar] [CrossRef]
  9. Singh, S.; Gupta, D.; Hueso, J.L. Semilocal and local convergence of a fifth order iteration with Fréchet derivative satisfying Hölder condition. Appl. Math. Comput. 2016, 276, 266–277. [Google Scholar] [CrossRef]
  10. Martínez, E.; Ledesma, A. Book of Abstracts of the Conference Mathematical Modelling in Engineering & Human Behaviour (MME&HB2024); Universitat Politècnica de València: València, Spain, 2024; pp. 47–50. ISBN 978-84-09-57681-4. [Google Scholar]
  11. Cordero, A.; Ledesma, A.; Torregrosa, J.R.; Maimó, J.G. Design and dynamical behavior of a fourth order family of iterative methods for solving nonlinear equations. AIMS Math. 2024, 9, 8564–8593. [Google Scholar] [CrossRef]
  12. Taylor, A.E. Introduction to Functional Analysis; John Wiley & Sons, Inc.: New York, NY, USA, 1958; pp. 160–165. [Google Scholar]
  13. Jaiswal, J. Some Class of Third and Fourth-Order Iterative Methods for Solving Nonlinear Equations. J. Appl. Math. 2014, 2014, 817656. [Google Scholar] [CrossRef]
  14. Choubey, N.; Jaiswal, J. Improving the Order of Convergence and Efficiency Index of an Iterative Method for Nonlinear Systems. Proc. Natl. Acad. Sci. India Sect. A 2016, 86, 221–227. [Google Scholar] [CrossRef]
  15. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
  16. Ostrowski, A.M. Solutions of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966; Volume I, pp. 19–21. [Google Scholar]
  17. Traub, J. Iterative Methods for the Solution of Equations, 2nd ed.; Chelsea Publishing Company: New York, NY, USA, 1982; pp. 11–12. [Google Scholar]
  18. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.; Vindel, P. Newton–Like Methods for Nonlinear Systems with Competitive Efficiency Indices. In Proceedings of the Seventh International Conference on Engineering Computational Technology, Valencia, Spain, 14–17 September 2010; Volume 41, pp. 1–13. [Google Scholar] [CrossRef]
  19. Hoffman, J.D. Numerical Methods for Engineers and Scientists, 2nd ed.; McGraw-Hill Book Company: New York, NY, USA, 2001; pp. 185–186. [Google Scholar]
  20. Kumar, D.; Argyros, I.K.; Sharma, J.R. Convergence Ball and Complex Geometry of an Iteration Function of Higher Order. Mathematics 2019, 7, 28. [Google Scholar] [CrossRef]
  21. Zachary, J.L. Introduction to Scientific Programming; Springer: New York, NY, USA, 1996; pp. 239–240. [Google Scholar]
  22. Argyros, C.I.; Regmi, S.; Argyros, I.K.; George, S. Local convergence for some third-order iterative methods under weak conditions. J. Korean Math. Soc. 2016, 53, 781–793. [Google Scholar] [CrossRef]
Figure 1. Computational Efficiency Index (CEI).
Figure 2. Beam problem.
Table 1. Computational Efficiency Index (CEI), where CEI = p^(1/(μd + op)) with p = 4.

| Method | LS (T) | LS (Other) | M×V | op | CEI, μ = 1/2 | CEI, μ = 3/4 | CEI, μ = 1 | CEI, μ = 3/2 |
|---|---|---|---|---|---|---|---|---|
| M1 (α = 0) | 1 | 2 | 1 | 2N³/3 + 4N² − 2N/3 | 4^(6/(4N³ + 30N² − N)) | 4^(12/(8N³ + 66N² + N)) | 4^(3/(2N³ + 18N² + N)) | 4^(6/(4N³ + 42N² + 5N)) |
| M1 (α = 9/8) | 2 | 1 | 2 | 2N³/3 + 5N² − 2N/3 | 4^(6/(4N³ + 36N² − N)) | 4^(12/(8N³ + 78N² + N)) | 4^(3/(2N³ + 21N² + N)) | 4^(6/(4N³ + 48N² + 5N)) |
| Jaiswal | 3 | 1 | 3 | 2N³/3 + 7N² − 2N/3 | 4^(6/(4N³ + 48N² − N)) | 4^(12/(8N³ + 102N² + N)) | 4^(3/(2N³ + 27N² + N)) | 4^(6/(4N³ + 60N² + 5N)) |
| CHJ | 3 | 1 | 2 | 2N³/3 + 6N² − 2N/3 | 4^(6/(4N³ + 42N² − N)) | 4^(12/(8N³ + 90N² + N)) | 4^(3/(2N³ + 24N² + N)) | 4^(6/(4N³ + 54N² + 5N)) |
| Sharma | 2 | 1 | 2 | 2N³/3 + 5N² − 2N/3 | 4^(6/(4N³ + 36N² − N)) | 4^(12/(8N³ + 78N² + N)) | 4^(3/(2N³ + 21N² + N)) | 4^(6/(4N³ + 48N² + 5N)) |
Table 2. Numerical performance of iterative schemes for a nonlinear system of equations, with x₀ = (0.5, 0.5, 0.5)ᵀ.

| Method | x̃ | ‖x_{k+1} − x_k‖ | ‖T(x_{k+1})‖ | Iter | ACOC | e-Time |
|---|---|---|---|---|---|---|
| M1 (α = 0) | (1.18 × 10^−40, 0, 0)ᵀ | 1.17 × 10^−31 | 1.17 × 10^−31 | 4 | 3.9754 | 0.33349 |
| M1 (α = 9/8) | nc | 1.13 × 10^−17 | 9.22 × 10^−51 | 50 | 1.00000 | 3.68628 |
| Jaiswal | (4.43 × 10^−41, 7.24 × 10^−71, 0)ᵀ | 8.49 × 10^−32 | 8.49 × 10^−32 | 4 | 3.9972 | 0.32919 |
| CHJ | (2.7 × 10^−43, 1.19 × 10^−66, 0)ᵀ | 1.62 × 10^−27 | 1.62 × 10^−27 | 4 | 3.9454 | 0.35557 |
| Sharma | nc | 1.13 × 10^−17 | 9.22 × 10^−51 | 50 | 1.00000 | 3.60789 |
Table 3. Numerical performance of iterative schemes for a nonlinear system of equations, with x₀ = (1, 1, …, 1)ᵀ.

| Method | x̃ | ‖x_{k+1} − x_k‖ | ‖T(x_{k+1})‖ | Iter | ACOC | e-Time |
|---|---|---|---|---|---|---|
| M1 (α = 0) | (0.514933, 0.514933, …, 0.514933)ᵀ | 1.18 × 10^−11 | 3.21 × 10^−11 | 3 | 3.6538 | 1.78310 |
| M1 (α = 9/8) | nc | 3.36 × 10^−30 | 1.03 × 10^−31 | 50 | - | 33.70610 |
| Jaiswal | (0.514933, 0.514933, …, 0.514933)ᵀ | 1.23 × 10^−11 | 3.35 × 10^−11 | 3 | 3.6476 | 1.80741 |
| CHJ | (0.514933, 0.514933, …, 0.514933)ᵀ | 1.73 × 10^−11 | 4.69 × 10^−11 | 3 | 3.5972 | 1.78408 |
| Sharma | nc | 3.36 × 10^−30 | 1.03 × 10^−31 | 50 | - | 40.46302 |
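The ACOC column in Tables 2 and 3 reports the approximated computational order of convergence, which is estimated from the norms of three or more consecutive correction steps. A minimal sketch of this standard estimate, using illustrative step norms rather than data from the experiments:

```python
import math

def acoc(norms):
    """Approximated computational order of convergence from the norms
    e_k = ||x_{k+1} - x_k|| of consecutive correction steps:
    rho_k = ln(e_k / e_{k-1}) / ln(e_{k-1} / e_{k-2})."""
    e = norms
    return [math.log(e[k] / e[k - 1]) / math.log(e[k - 1] / e[k - 2])
            for k in range(2, len(e))]

# Illustrative step norms decaying with order ~4 (not data from the tables):
steps = [1e-2, 1e-8, 1e-32]
print(acoc(steps))  # each estimate should be close to 4
```

For a fourth-order scheme, the estimates approach 4 as the iterates enter the convergence ball, which is the behavior reflected in the ACOC column for the converging methods.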
