Article

Convergence-Equivalent DF and AR Iterations with Refined Data Dependence: Non-Asymptotic Error Bounds and Robustness in Fixed-Point Computations

by Kadri Doğan 1, Emirhan Hacıoğlu 2, Faik Gürsoy 3, Müzeyyen Ertürk 3 and Gradimir V. Milovanović 4,5,*

1 Department of Basic Sciences, Faculty of Engineering, Artvin Çoruh University, 08100 Artvin, Türkiye
2 Department of Mathematics, Trakya University, 22030 Edirne, Türkiye
3 Department of Mathematics, Adiyaman University, 02040 Adiyaman, Türkiye
4 Serbian Academy of Sciences and Arts, 11000 Belgrade, Serbia
5 Faculty of Sciences and Mathematics, University of Niš, 18000 Niš, Serbia
* Author to whom correspondence should be addressed.
Axioms 2025, 14(10), 738; https://doi.org/10.3390/axioms14100738
Submission received: 22 August 2025 / Revised: 26 September 2025 / Accepted: 27 September 2025 / Published: 29 September 2025
(This article belongs to the Special Issue Advances in Fixed Point Theory with Applications)

Abstract

Recent developments in fixed-point theory have focused on iterative techniques for approximating solutions, yet there remain important questions about whether different methods are equivalent and how well they resist perturbations. In this study, two recently proposed algorithms, referred to as the DF and AR iteration methods, are shown to be connected by proving that they converge similarly when applied to contraction mappings in Banach spaces, provided that their control sequences meet specific, explicit conditions. This work extends previous research on data dependence by removing restrictive assumptions related to both the perturbed operator and the algorithmic parameters, thereby increasing the range of situations where the results are applicable. Utilizing a non-asymptotic analysis, the authors derive improved error bounds for fixed-point deviations under operator perturbations, achieving a tightening of these estimates by a factor of 3–15 compared to earlier results. A key contribution of this study is the demonstration that small approximation errors lead only to proportionally small deviations from equilibrium, which is formalized in bounds of the form $\|s^* - \tilde{s}^*\| \le O(\varepsilon/(1-\lambda))$. These theoretical findings are validated through applications involving integral equations and examples from function spaces. Overall, this work unifies the convergence analysis of different iterative methods, enhances guarantees regarding stability, and provides practical tools for robust computational methods in areas such as optimization, differential equations, and machine learning. By relaxing structural constraints and offering a detailed sensitivity analysis, this study significantly advances the design and understanding of iterative algorithms in applied mathematics.
MSC:
47H09; 47H10; 47H14; 47J25; 65L10

1. Introduction and Preliminaries

Throughout this study, $\mathbb{N}$ will denote the set of natural numbers. Let $C$ be a non-empty closed and convex subset of a Banach space $B$ equipped with a norm $\|\cdot\|$. We define $S : C \to C$ as a mapping, and $\mathrm{Fix}(S) = \{x \in C : Sx = x\}$ as the set encompassing all fixed points of $S$ within $C$.
Since Banach's pioneering contribution in 1922 [1], the Banach contraction principle, based on the class of contraction mappings, has attracted significant attention in mathematical research. A mapping $S : C \to C$ on a normed space $(C, \|\cdot\|)$ is called a contraction if there exists a constant $\lambda \in [0,1)$ such that

$$\|Sx - Sy\| \le \lambda\|x - y\| \quad \text{for all } x, y \in C. \tag{1}$$
Owing to its fundamental properties and strong potential for addressing complex problems across various disciplines, this class of mappings has been extensively studied under different structural frameworks. Consequently, it has emerged as a powerful tool in fixed-point theory. However, recognizing both the strengths and inherent limitations of contraction mappings, researchers have introduced more generalized classes that relax certain constraints while extending their applicability under broader conditions (see [2,3,4,5,6,7,8]). These generalized mappings have been investigated in terms of stability, existence, and uniqueness of fixed points, data dependence, and other qualitative aspects within diverse mathematical settings, including metric, topological, and normed spaces (see [9,10,11,12,13,14,15,16]). Moreover, numerous iterative algorithms have been developed to approximate the fixed points of such mappings, specifically addressing the challenges posed by modern computational problems (see [17,18,19,20,21,22,23,24,25]). While classical fixed-point iterations, such as those of Picard and Krasnoselskij, continue to serve as fundamental methods for solving nonlinear equations, emerging applications in high-dimensional optimization, machine learning, and perturbed dynamical systems expose the limitations of these traditional approaches. These domains require algorithms that offer enhanced stability, accelerated convergence, and robustness to approximation errors. To meet these demands, recent contributions by Filali et al. [26] and Alam and Rohen [27] have introduced two distinct approaches, hereinafter referred to as the “DF iteration algorithm” (Algorithm 1) and “AR iteration algorithm” (Algorithm 2), respectively, which will be examined in detail in the subsequent sections.
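As a point of reference for the schemes discussed below, the classical Picard iteration $x_{n+1} = Sx_n$ can be sketched in a few lines. This is an illustrative sketch of ours: the mapping $\cos$ used here is a standard textbook contraction on $[0,1]$ (since $|\cos'| \le \sin 1 < 1$ there), not an example taken from this paper.

```python
import math

def picard(S, x0, tol=1e-12, max_iter=1000):
    """Picard iteration x_{n+1} = S(x_n) for a contraction S."""
    x = x0
    for _ in range(max_iter):
        x_next = S(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction on [0, 1]; its fixed point is the Dottie number ~ 0.739085
fp = picard(math.cos, 1.0)
assert abs(math.cos(fp) - fp) < 1e-10
```

The more elaborate DF and AR schemes below follow the same pattern, but apply $S$ several times per step with convex combinations governed by the control sequences.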
Algorithm 1: DF iteration algorithm
   Input: A mapping $S$, initial point $x_0 \in C$, control sequences $\{\rho_n\}, \{\sigma_n\}, \{\tau_n\} \subset [0,1]$, and budget $N$.
   1: for $n = 0, 1, 2, \ldots, N$ do
   2:   $w_n = S\big((1-\tau_n)x_n + \tau_n Sx_n\big)$
        $y_n = S^2 w_n$
        $z_n = S\big((1-\sigma_n)Sy_n + \sigma_n Sw_n\big)$
        $x_{n+1} = S\big((1-\rho_n)Sy_n + \rho_n Sz_n\big)$
   3: end for
   Output: Approximate solution $x_N$
Algorithm 2: AR iteration algorithm
   Input: A mapping $S$, initial point $x_0^* \in C$, control sequences $\{\rho_n^*\}, \{\sigma_n^*\}, \{\tau_n^*\} \subset [0,1]$, and budget $N$.
   1: for $n = 0, 1, 2, \ldots, N$ do
   2:   $w_n^* = S\big((1-\tau_n^*)x_n^* + \tau_n^* Sx_n^*\big)$
        $y_n^* = S\big((1-\sigma_n^*)w_n^* + \sigma_n^* Sw_n^*\big)$
        $z_n^* = S\big((1-\rho_n^*)y_n^* + \rho_n^* Sy_n^*\big)$
        $x_{n+1}^* = S^2 z_n^*$
   3: end for
   Output: Approximate solution $x_N^*$
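For concreteness, the two schemes can be transcribed directly into code. The following is an illustrative sketch for a scalar contraction; the mapping $Sx = x/4 + 1/2$ (a $\lambda = 1/4$ contraction with fixed point $2/3$) and the constant control sequences are our own choices, not taken from [26,27].

```python
def df_iteration(S, x0, rho, sigma, tau, N):
    """DF scheme (Algorithm 1); rho, sigma, tau map n to a value in [0, 1]."""
    x = x0
    for n in range(N + 1):
        w = S((1 - tau(n)) * x + tau(n) * S(x))
        y = S(S(w))
        z = S((1 - sigma(n)) * S(y) + sigma(n) * S(w))
        x = S((1 - rho(n)) * S(y) + rho(n) * S(z))
    return x

def ar_iteration(S, x0, rho, sigma, tau, N):
    """AR scheme (Algorithm 2)."""
    x = x0
    for n in range(N + 1):
        w = S((1 - tau(n)) * x + tau(n) * S(x))
        y = S((1 - sigma(n)) * w + sigma(n) * S(w))
        z = S((1 - rho(n)) * y + rho(n) * S(y))
        x = S(S(z))
    return x

# λ = 1/4 contraction with fixed point 2/3 (illustrative choice)
S = lambda x: x / 4 + 0.5
half = lambda n: 0.5  # constant control sequences
x_df = df_iteration(S, 0.0, half, half, half, 30)
x_ar = ar_iteration(S, 0.0, half, half, half, 30)
assert abs(x_df - 2 / 3) < 1e-12 and abs(x_ar - 2 / 3) < 1e-12
```

Since each outer step applies $S$ several times, the distance to the fixed point shrinks by roughly $\lambda^5$ per step, which is why both runs agree with $2/3$ to machine precision after a few iterations.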
Both studies presented a range of theoretical results, including convergence and stability analyses, as well as insights into the sensitivity of fixed points with respect to perturbations in contraction mappings. More specifically, the authors derived the following results concerning the data dependence of fixed points:
Theorem 1.
Let $S, \tilde{S} : C \to C$ be two contractions with contractivity constant $\lambda \in [0,1)$ and fixed points $s^*$ and $\tilde{s}^*$, respectively. Suppose that there exists a maximum admissible error $\varepsilon > 0$ such that $\|Sx - \tilde{S}x\| \le \varepsilon$ for all $x \in C$ (in this case, $S$ and $\tilde{S}$ are called approximate operators of each other).
(i) ([26], Theorem 5) Let $\{x_n\}$ and $\{\tilde{x}_n\}$ be the iterative sequences generated by the DF iteration algorithm associated with $S$ and $\tilde{S}$, respectively. If the sequences $\{\rho_n\}, \{\sigma_n\}, \{\tau_n\}$ in the DF iteration algorithm satisfy $1/2 \le \rho_n\tau_n$ for all $n \in \mathbb{N}$ and $\sum_{n=1}^{\infty} \rho_n(1-\lambda) = \infty$, then it holds that

$$\|s^* - \tilde{s}^*\| \le \frac{15\varepsilon}{1-\lambda}. \tag{2}$$

(ii) ([27], Theorem 8) Let $\{x_n^*\}$ and $\{\tilde{x}_n^*\}$ be the iterative sequences generated by the AR iteration algorithm associated with $S$ and $\tilde{S}$, respectively. If the sequences $\{\rho_n^*\}, \{\sigma_n^*\}, \{\tau_n^*\}$ in the AR iteration algorithm satisfy

$$\sum_{n=1}^{\infty} \rho_n^*(1-\lambda) = \infty, \quad \sum_{n=1}^{\infty} \sigma_n^*(1-\lambda) = \infty, \quad \text{and} \quad \sum_{n=1}^{\infty} \tau_n^*(1-\lambda) = \infty,$$

then it holds that

$$\|s^* - \tilde{s}^*\| \le \frac{15\varepsilon}{1-\lambda}. \tag{3}$$
We should immediately point out that, in both results, the restrictions imposed on the control sequences $\{\rho_n\}, \{\sigma_n\}, \{\tau_n\}, \{\rho_n^*\}, \{\sigma_n^*\}, \{\tau_n^*\} \subset [0,1]$, together with the contraction condition imposed on the mapping $\tilde{S}$, significantly restrict the applicability of these theorems. Indeed, when $S$ and $\tilde{S}$ are approximate operators of each other and $S$ is a $\lambda$-contraction, the mapping $\tilde{S}$ satisfies the following condition for all $x, y \in C$:

$$\|\tilde{S}x - \tilde{S}y\| \le \|\tilde{S}x - Sx\| + \|Sx - Sy\| + \|Sy - \tilde{S}y\| \le \lambda\|x - y\| + \mu, \tag{4}$$

where $\mu = 2\varepsilon$. This demonstrates that $\tilde{S}$ inherits a weakened contraction-like property, deviating from strict contractivity by an additive constant $\mu$ dependent on the approximation error $\varepsilon$. As is evident, any contraction mapping satisfies condition (4). However, as the following example illustrates, a mapping $\tilde{S}$ that satisfies (4) does not necessarily satisfy the contraction condition given in (1).
Example 1.
Consider the space $C = [0,1] \subset \mathbb{R}$ equipped with the usual absolute-value norm. Define the mappings $S, \tilde{S} : C \to C$ as follows:

$$Sx = \frac{x}{4} \quad \text{and} \quad \tilde{S}x = \begin{cases} \dfrac{1-x}{4}, & 0 \le x \le \dfrac{1}{2}, \\[6pt] \dfrac{2-x}{4}, & \dfrac{1}{2} < x \le 1. \end{cases}$$
Then, we have

$$|\tilde{S}x - Sx| \le \frac{1}{4} \quad \text{for all } x \in C.$$

So, we can choose $\varepsilon = 1/4$. Since $\tilde{S}$ is discontinuous, it does not qualify as a contraction. However, it possesses a unique fixed point at $1/5$. On the other hand, the mapping $\tilde{S}$ satisfies condition (4) with $\lambda = 1/4$ and $\mu = 1/2$. To verify this, we examine the following cases:
Cases 1 and 2: If $x, y \in [0, 1/2]$ or $x, y \in (1/2, 1]$, then for any $\mu > 0$, we have

$$|\tilde{S}x - \tilde{S}y| = \frac{1}{4}|x - y| \le \frac{1}{4}|x - y| + \mu.$$

Cases 3 and 4: If $x \in [0, 1/2]$ and $y \in (1/2, 1]$ (or vice versa), then

$$|\tilde{S}x - \tilde{S}y| = \left|\frac{1-x}{4} - \frac{2-y}{4}\right| \le \frac{1}{4}|x - y| + \frac{1}{4}.$$

Thus, considering all cases together, we conclude that the mapping $\tilde{S}$ satisfies condition (4) with $\lambda = 1/4$ and $\mu = 1/2$. (Note that while the theory requires $\mu = 2\varepsilon = 1/2$, this example shows that $\mu = 1/4$ is sufficient for the inequality to hold, indicating a tighter bound for this specific mapping.)
The example provided below illustrates that condition (4) is not sufficient to guarantee the existence of a fixed point for a mapping $\tilde{S}$.
Example 2.
Let us consider the set $C = [0,1] \subset \mathbb{R}$, equipped with the norm induced by the usual absolute-value metric. Define the mappings $S, \tilde{S} : C \to C$ as follows:

$$Sx = \frac{8}{49}x \quad \text{and} \quad \tilde{S}x = \begin{cases} \dfrac{x+2}{x+3}, & 0 \le x < \dfrac{1}{2}, \\[6pt] \dfrac{x+1}{x+3}, & \dfrac{1}{2} \le x \le 1. \end{cases}$$
Then, we have

$$|\tilde{S}x - Sx| \le \frac{2}{3} \quad \text{for all } x \in C.$$

So, we can choose $\varepsilon = 2/3$. We now demonstrate that the mapping $\tilde{S}$ satisfies the inequality in (4) under four cases.
Cases 1 and 2: If $x, y \in [0, 1/2)$ or $x, y \in [1/2, 1]$, then for any $\mu > 0$, we obtain

$$|\tilde{S}x - \tilde{S}y| = \frac{|x - y|}{(x+3)(y+3)} \le \frac{1}{9}|x - y| \le \frac{1}{9}|x - y| + \mu$$

and

$$|\tilde{S}x - \tilde{S}y| = \frac{2|x - y|}{(x+3)(y+3)} \le \frac{8}{49}|x - y| \le \frac{8}{49}|x - y| + \mu,$$

respectively. Hence, in both cases, the mapping $\tilde{S}$ satisfies the inequality in (4) for any $\lambda \in [8/49, 1)$ and any $\mu > 0$.
Cases 3 and 4: If $x \in [0, 1/2)$ and $y \in [1/2, 1]$ (or vice versa), then $\tilde{S}x = (x+2)/(x+3)$ and $\tilde{S}y = (y+1)/(y+3)$. It is important to note that for every $x \in [0, 1/2)$,

$$\frac{d\tilde{S}}{dx} = \frac{1}{(x+3)^2} > 0,$$

and for every $y \in [1/2, 1]$,

$$\frac{d\tilde{S}}{dy} = \frac{2}{(y+3)^2} > 0,$$

indicating that $\tilde{S}x$ and $\tilde{S}y$ are increasing functions. Consequently,

$$|\tilde{S}x - \tilde{S}y| = \left|\frac{x+2}{x+3} - \frac{y+1}{y+3}\right|$$

approaches its supremum as $x \to 1/2$ and $y = 1/2$. This implies that

$$\sup_{x \in [0,1/2),\ y \in [1/2,1]} |\tilde{S}x - \tilde{S}y| = \frac{2}{7},$$

meaning that for every $x \in [0, 1/2)$ and $y \in [1/2, 1]$, $|\tilde{S}x - \tilde{S}y| \le 2/7$. On the other hand, employing similar arguments, we obtain

$$\inf_{x \in [0,1/2),\ y \in [1/2,1]} |x - y| = 0,$$

indicating that for every $x \in [0, 1/2)$ and $y \in [1/2, 1]$, $0 \le |x - y|$. Thus, if $x \in [0, 1/2)$ and $y \in [1/2, 1]$, then for any $\lambda \in (0, 1)$ and any $\mu \ge 2/7$, the following inequality holds:

$$|\tilde{S}x - \tilde{S}y| \le \lambda|x - y| + \mu.$$

As a result, for every $x, y \in [0, 1]$, the mapping $\tilde{S}$ satisfies the inequality in (4) for any $\lambda \in [8/49, 1)$ and for any $\mu \ge 2/7$. (Note that while the theory guarantees the inequality for $\mu = 2\varepsilon = 4/3$, this example demonstrates that a smaller $\mu = 2/7$ is sufficient, indicating a tighter bound for this specific mapping.) However, upon solving the equation $\tilde{S}x = x$, we find that for $x \in [0, 1/2)$, $x = -1 \pm \sqrt{3} \notin [0, 1/2)$, and for $x \in [1/2, 1]$, $x = -1 \pm \sqrt{2} \notin [1/2, 1]$. Consequently, the mapping $\tilde{S}$ does not possess any fixed points in $[0, 1]$.
By Banach's fixed-point theorem, the contraction mapping $S : C \to C$ has a unique fixed point $s^* \in C$. However, the approximate operator $\tilde{S} : C \to C$, while satisfying inequality (4), deviates from strict contractivity because of the additive perturbation term $\mu = 2\varepsilon > 0$. Although $\tilde{S}$ lacks the classical contraction property, its structural proximity to $S$ permits an analysis of fixed-point stability. Specifically, if $\mu$ is sufficiently small, as a consequence of the bounded approximation error $\varepsilon$, the fixed points of $\tilde{S}$, should they exist (say $\tilde{s}^*$), lie within a neighborhood of $s^*$. By utilizing inequality (4) and employing a non-asymptotic approach, we derive a concrete upper bound for $\|s^* - \tilde{s}^*\|$ without relying on the asymptotic assumptions required in Theorem 1. Specifically, we establish the following bound:
$$\|s^* - \tilde{s}^*\| \le \|Ss^* - \tilde{S}s^*\| + \|\tilde{S}s^* - \tilde{S}\tilde{s}^*\| \le \varepsilon + \lambda\|s^* - \tilde{s}^*\| + 2\varepsilon,$$

which leads to

$$\|s^* - \tilde{s}^*\| \le \frac{3\varepsilon}{1-\lambda}.$$
This result formalizes the intuition that small perturbations in the operator propagate controllably to its equilibria. While S ˜ does not inherently inherit the contraction property, the weakened inequality still enables meaningful conclusions about fixed-point proximity, illustrating the robustness of contraction-based frameworks under bounded approximations. Such insights are pivotal in applied settings, where numerical or modeling errors require tolerance analyses in dynamical systems and iterative algorithms.
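This bound can be checked numerically on the operators of Example 1, where $s^* = 0$, $\tilde{s}^* = 1/5$, $\lambda = 1/4$, and $\varepsilon = 1/4$. The grid check below is an illustrative sketch of ours:

```python
# Operators from Example 1: S x = x/4 (λ = 1/4, s* = 0) and the
# discontinuous approximate operator S̃ (fixed point s̃* = 1/5, ε = 1/4).
S = lambda x: x / 4
S_tilde = lambda x: (1 - x) / 4 if x <= 0.5 else (2 - x) / 4

lam, eps = 0.25, 0.25
s_star, s_tilde_star = 0.0, 0.2

# the operators stay within ε of each other on a fine grid ...
grid = [i / 1000 for i in range(1001)]
assert all(abs(S(x) - S_tilde(x)) <= eps + 1e-12 for x in grid)
# ... and the fixed points satisfy the non-asymptotic bound 3ε/(1−λ) = 1
assert abs(s_star - s_tilde_star) <= 3 * eps / (1 - lam)
```

Here the actual deviation $|s^* - \tilde{s}^*| = 0.2$ sits comfortably inside the guaranteed radius $3\varepsilon/(1-\lambda) = 1$.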
The preceding discussions indicate that the applicability of the data-dependence results in parts (i) and (ii) of Theorem 1 can be further extended by relaxing the contraction assumption imposed on the mapping S ˜ and instead treating S ˜ merely as an approximate operator of the mapping S .
In this study, we establish a convergence-equivalence result between the DF and AR iteration algorithms in approximating the fixed point of a contraction mapping. Moreover, we derive enhanced versions of the data-dependence results in parts (i) and (ii) of Theorem 1 by not only removing the contraction condition imposed on the mapping $\tilde{S}$ but also eliminating the constraints on the control sequences $\{\rho_n\}, \{\sigma_n\}, \{\tau_n\}, \{\rho_n^*\}, \{\sigma_n^*\}, \{\tau_n^*\} \subset [0,1]$.
The following lemma plays a key role in establishing our results:
Lemma 1
([28]). Let $\{\Phi_n^i\}_{n=0}^{\infty}$, $i = 1, 2, 3$, be three sequences such that $\Phi_n^i \ge 0$ for each $n \in \mathbb{N}$ and $i = 1, 2, 3$, $\Phi_n^3 \in (0,1)$ for all $n \ge n_0$, $\sum_{n=1}^{\infty} \Phi_n^3 = \infty$, $\Phi_n^2 = o(\Phi_n^3)$, and

$$\Phi_{n+1}^1 \le (1 - \Phi_n^3)\Phi_n^1 + \Phi_n^2 \quad \text{for all } n \in \mathbb{N}.$$

It then holds that $\lim_{n \to \infty} \Phi_n^1 = 0$.
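Lemma 1 can be illustrated numerically. The particular sequences $\Phi_n^3 = 1/(n+2)$ (whose series diverges) and $\Phi_n^2 = (n+2)^{-3} = o(\Phi_n^3)$ below are our own test choices satisfying its hypotheses:

```python
# Recursion Φ¹_{n+1} ≤ (1 − Φ³_n) Φ¹_n + Φ²_n realized with equality,
# with Φ³_n = 1/(n+2) (so Σ Φ³_n = ∞) and Φ²_n = (n+2)⁻³ = o(Φ³_n).
phi1 = 1.0
for n in range(200_000):
    phi3 = 1.0 / (n + 2)
    phi2 = phi3 ** 3
    phi1 = (1 - phi3) * phi1 + phi2
assert phi1 < 1e-3  # Lemma 1 predicts Φ¹_n → 0
```

The homogeneous part decays like $1/N$ and the forcing terms contribute at the same order, so $\Phi_N^1$ is roughly $10^{-5}$ after $N = 2 \times 10^5$ steps, consistent with the lemma.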

2. Main Results

In this section, we establish our main theoretical findings concerning the convergence behavior of the iterative processes under consideration. In particular, we compare the trajectories generated by the DF iteration and the AR iteration when applied to a contraction mapping. The significance of the following theorem lies in the fact that it provides conditions under which both iterative schemes not only converge to the unique fixed point s * of S , but also approach each other asymptotically. This comparison allows us to assess the relative stability and efficiency of the algorithms, and highlights the robustness of their convergence under mild assumptions on the control sequences. We will now state the results precisely:
Theorem 2.
Let $S : C \to C$ be a $\lambda$-contraction mapping with a fixed point $s^*$. Consider the sequences $\{x_n\}$ and $\{x_n^*\}$, which are generated by the DF and AR iteration algorithms, respectively. The following statements are equivalent:
(i) Define the sequence $\{M_n\}$ for all $n \in \mathbb{N}$ as

$$M_n = \max\Big\{1 - \tau_n(1-\lambda),\ \big(1 - \rho_n^*(1-\lambda)\big)\big(1 - \sigma_n^*(1-\lambda)\big)\big(1 - \tau_n^*(1-\lambda)\big)\Big\}. \tag{5}$$

If the sequence $\left\{\dfrac{2\lambda^5 M_n}{\tau_n(1-\lambda)}\right\}_{n=0}^{\infty}$ is bounded and $\sum_{n=0}^{\infty} \tau_n = \infty$, then the sequence $\{\|x_n - x_n^*\|\}_{n=0}^{\infty}$ converges strongly to $0$, and $\{x_n\}_{n=0}^{\infty}$ converges strongly to $s^*$.
(ii) Define the sequence $\{M_n\}$ for all $n \in \mathbb{N}$ as

$$M_n = \max\Big\{1 - \min\{\rho_n^*, \sigma_n^*, \tau_n^*\}(1-\lambda),\ \Big(1 - \rho_n\big(1 - \big((1-\sigma_n)\lambda^2 + \sigma_n\big)\big)\Big)\big(1 - \tau_n(1-\lambda)\big)\Big\}.$$

If the sequence $\left\{\dfrac{2\lambda^5 M_n}{\min\{\rho_n^*, \sigma_n^*, \tau_n^*\}(1-\lambda)}\right\}_{n=0}^{\infty}$ is bounded and $\sum_{n=0}^{\infty} \min\{\rho_n^*, \sigma_n^*, \tau_n^*\} = \infty$, then the sequence $\{\|x_n^* - x_n\|\}_{n=0}^{\infty}$ converges strongly to zero and $\{x_n^*\}_{n=0}^{\infty}$ converges strongly to $s^*$.
Proof. 
(i) Using the inequality in (1) and employing the DF and AR iteration algorithms, we obtain the following for every $n \in \mathbb{N}$:

$$\begin{aligned}
\|x_{n+1} - x_{n+1}^*\| &= \big\|S\big((1-\rho_n)Sy_n + \rho_n Sz_n\big) - S^2 z_n^*\big\| \\
&\le \lambda\big\|(1-\rho_n)Sy_n + \rho_n Sz_n - s^*\big\| + \lambda\|s^* - Sz_n^*\| \\
&\le \lambda(1-\rho_n)\|Sy_n - Ss^*\| + \lambda\rho_n\|Sz_n - Ss^*\| + \lambda\|Ss^* - Sz_n^*\| \\
&\le \lambda^2(1-\rho_n)\|y_n - s^*\| + \lambda^2\rho_n\|z_n - s^*\| + \lambda^2\|s^* - z_n^*\| \\
&= \lambda^2(1-\rho_n)\|S^2 w_n - s^*\| + \lambda^2\rho_n\big\|S\big((1-\sigma_n)Sy_n + \sigma_n Sw_n\big) - s^*\big\| \\
&\quad + \lambda^2\big\|S\big((1-\rho_n^*)y_n^* + \rho_n^* Sy_n^*\big) - s^*\big\| \\
&\le \lambda^4\big(1-\rho_n + \rho_n\big((1-\sigma_n)\lambda^2 + \sigma_n\big)\big)\|w_n - s^*\| + \lambda^3\big(1-\rho_n^*(1-\lambda)\big)\|y_n^* - s^*\| \\
&\le \lambda^5\big(1-\rho_n + \rho_n\big((1-\sigma_n)\lambda^2 + \sigma_n\big)\big)\big(1-\tau_n(1-\lambda)\big)\|x_n - s^*\| \\
&\quad + \lambda^5\big(1-\rho_n^*(1-\lambda)\big)\big(1-\sigma_n^*(1-\lambda)\big)\big(1-\tau_n^*(1-\lambda)\big)\|x_n^* - s^*\| \\
&\le \big(1-\tau_n(1-\lambda)\big)\|x_n - x_n^*\| + 2\lambda^5 M_n\|x_n^* - s^*\|,
\end{aligned} \tag{6}$$

where

$$M_n = \max\Big\{1 - \tau_n(1-\lambda),\ \big(1-\rho_n^*(1-\lambda)\big)\big(1-\sigma_n^*(1-\lambda)\big)\big(1-\tau_n^*(1-\lambda)\big)\Big\}.$$
Now, we set the following for every $n \in \mathbb{N}$:

$$\Phi_n^1 := \|x_n - x_n^*\| \ge 0, \quad \Phi_n^3 := \tau_n(1-\lambda) \in (0,1), \quad \Phi_n^2 := 2\lambda^5 M_n\|x_n^* - s^*\|.$$
Since $\left\{\dfrac{2\lambda^5 M_n}{\tau_n(1-\lambda)}\right\}_{n=0}^{\infty}$ is bounded, there exists a number $\mu > 0$ such that for every $n \in \mathbb{N}$, the following holds:

$$\frac{2\lambda^5 M_n}{\tau_n(1-\lambda)} < \mu.$$
Moreover, since $\lim_{n\to\infty}\|x_n^* - s^*\| = 0$ according to ([27], Theorem 3), for any given $\varepsilon > 0$, there exists an $n_0 \in \mathbb{N}$ such that for all $n \ge n_0$, $\|x_n^* - s^*\| < \varepsilon/\mu$. Thus, for every $n \ge n_0$, we have

$$\frac{2\lambda^5 M_n}{\tau_n(1-\lambda)}\|x_n^* - s^*\| < \varepsilon,$$

which implies $\lim_{n\to\infty}\Phi_n^2/\Phi_n^3 = 0$, i.e., $\Phi_n^2 = o(\Phi_n^3)$. Consequently, the inequality in (6) satisfies all the requirements of Lemma 1 and, therefore, we obtain $\lim_{n\to\infty}\|x_n - x_n^*\| = 0$. On the other hand, since $\|x_n - s^*\| \le \|x_n - x_n^*\| + \|x_n^* - s^*\|$, we can conclude that $\lim_{n\to\infty}\|x_n - s^*\| = 0$.
(ii) Utilizing the inequality in (1) with the AR and DF iteration algorithms, we obtain the following for all $n \in \mathbb{N}$:

$$\begin{aligned}
\|x_{n+1}^* - x_{n+1}\| &\le \|S^2 z_n^* - s^*\| + \big\|s^* - S\big((1-\rho_n)Sy_n + \rho_n Sz_n\big)\big\| \\
&\le \lambda^2\|z_n^* - s^*\| + \lambda^2\big((1-\rho_n)\|s^* - y_n\| + \rho_n\|s^* - z_n\|\big) \\
&\le \lambda^3\big(1-\rho_n^*(1-\lambda)\big)\|y_n^* - s^*\| + \lambda^4\big((1-\rho_n)\|s^* - w_n\| + \rho_n\big((1-\sigma_n)\|s^* - y_n\| + \sigma_n\|s^* - w_n\|\big)\big) \\
&\le \lambda^5\big(1-\rho_n^*(1-\lambda)\big)\big(1-\sigma_n^*(1-\lambda)\big)\big(1-\tau_n^*(1-\lambda)\big)\|x_n^* - s^*\| \\
&\quad + \lambda^5\Big(1-\rho_n\big(1-\big((1-\sigma_n)\lambda^2+\sigma_n\big)\big)\Big)\big(1-\tau_n(1-\lambda)\big)\|s^* - x_n\| \\
&\le \lambda^5\big(1-\rho_n^*(1-\lambda)\big)\big(1-\sigma_n^*(1-\lambda)\big)\big(1-\tau_n^*(1-\lambda)\big)\big(\|x_n^* - x_n\| + \|x_n - s^*\|\big) \\
&\quad + \lambda^5\Big(1-\rho_n\big(1-\big((1-\sigma_n)\lambda^2+\sigma_n\big)\big)\Big)\big(1-\tau_n(1-\lambda)\big)\|s^* - x_n\| \\
&\le \big(1-\min\{\rho_n^*, \sigma_n^*, \tau_n^*\}(1-\lambda)\big)\|x_n^* - x_n\| + 2\lambda^5 M_n\|x_n - s^*\|,
\end{aligned} \tag{7}$$

where

$$M_n = \max\Big\{1-\min\{\rho_n^*, \sigma_n^*, \tau_n^*\}(1-\lambda),\ \Big(1-\rho_n\big(1-\big((1-\sigma_n)\lambda^2+\sigma_n\big)\big)\Big)\big(1-\tau_n(1-\lambda)\big)\Big\}.$$
Define the following for all $n \in \mathbb{N}$:

$$\Phi_n^1 := \|x_n^* - x_n\| \ge 0, \quad \Phi_n^3 := \min\{\rho_n^*, \sigma_n^*, \tau_n^*\}(1-\lambda) \in (0,1), \quad \Phi_n^2 := 2\lambda^5 M_n\|x_n - s^*\|.$$
Given that $\left\{\dfrac{2\lambda^5 M_n}{\min\{\rho_n^*, \sigma_n^*, \tau_n^*\}(1-\lambda)}\right\}_{n=0}^{\infty}$ is bounded, there exists $\mu > 0$ such that for all $n \in \mathbb{N}$:

$$\frac{2\lambda^5 M_n}{\min\{\rho_n^*, \sigma_n^*, \tau_n^*\}(1-\lambda)} < \mu.$$
Furthermore, as $\lim_{n\to\infty}\|x_n - s^*\| = 0$ according to ([26], Theorem 2), for any $\varepsilon > 0$, there exists an $n_0 \in \mathbb{N}$ such that for all $n \ge n_0$, $\|x_n - s^*\| < \varepsilon/\mu$. Hence, for each $n \ge n_0$,

$$\frac{2\lambda^5 M_n}{\min\{\rho_n^*, \sigma_n^*, \tau_n^*\}(1-\lambda)}\|x_n - s^*\| < \varepsilon.$$

This implies $\lim_{n\to\infty}\Phi_n^2/\Phi_n^3 = 0$, i.e., $\Phi_n^2 = o(\Phi_n^3)$. Thus, the inequality in (7) satisfies the conditions of Lemma 1, leading to $\lim_{n\to\infty}\|x_n^* - x_n\| = 0$. Moreover, since

$$\|x_n^* - s^*\| \le \|x_n^* - x_n\| + \|x_n - s^*\|,$$

we conclude that $\lim_{n\to\infty}\|x_n^* - s^*\| = 0$.    □
To demonstrate the applicability of Theorem 2, we will now provide an example based on a nonlinear differential equation. This example serves two purposes. First, it shows how an abstract contraction mapping arising from an integral operator can be constructed in a concrete functional setting. Second, it illustrates that the theoretical convergence results established in Theorem 2 can be verified numerically by examining the behavior of the DF and AR iteration schemes. In particular, we consider an initial value problem whose solution can be reformulated as a fixed-point problem, and then show that the corresponding operator is indeed a contraction mapping. This allows us to apply Theorem 2 directly and validate the convergence through numerical simulations.
Example 3.
Let $C^2[0,1]$ denote the set of functions defined on $[0,1]$ that possess continuous second-order derivatives, equipped with the supremum norm $\|\cdot\|_\infty$. It is well known that $B = (C^2[0,1], \|\cdot\|_\infty)$ forms a Banach space. Now, consider the following initial value problem:

$$\frac{d^2}{dt^2}u(t) = \frac{u^2(t)}{2\big(u^2(t)+1\big)^2}, \quad u(0) = 1, \quad \frac{d}{dt}u(0) = 1. \tag{8}$$
A potential solution to this problem can be expressed in the following integral form:

$$u(t) = 1 + \int_0^t \left(1 + \int_0^s \frac{u^2(x)}{2\big(u^2(x)+1\big)^2}\,dx\right)ds.$$
Now, let $C = \{u(t) \in B : 0 \le u(t) \le 1\} \subset B$. Then, the operator $S : C \to C$, defined as

$$S(u)(t) = 1 + \int_0^t \left(1 + \int_0^s \frac{u^2(x)}{2\big(u^2(x)+1\big)^2}\,dx\right)ds,$$
is a contraction mapping. Indeed, it satisfies the contraction property

$$\begin{aligned}
\|S(u) - S(v)\| &= \left\|\int_0^t \int_0^s \left(\frac{u^2(x)}{2(u^2(x)+1)^2} - \frac{v^2(x)}{2(v^2(x)+1)^2}\right)dx\,ds\right\| \\
&\le \int_0^t \int_0^s \left\|\frac{u^2(x)}{2(u^2(x)+1)^2} - \frac{v^2(x)}{2(v^2(x)+1)^2}\right\|dx\,ds \\
&\le \left(\int_0^t \int_0^s dx\,ds\right) \left\|\frac{u^2(t)}{2(u^2(t)+1)^2} - \frac{v^2(t)}{2(v^2(t)+1)^2}\right\| \le \frac{3}{10}\|u(t) - v(t)\|.
\end{aligned}$$
Now, for each $n \ge 1$, we consider the following sequences:

$$\tau_n = 1 - \frac{1}{n+10}, \quad \rho_n = 1 - \frac{1}{(n+10)^2}, \quad \sigma_n = 1 - \frac{1}{(n+10)^2}, \quad \tau_n^* = \rho_n^* = \sigma_n^* = 1 - \frac{1}{n+10}.$$
As illustrated in Figure 1 (top left), the sequences

$$\left\{\frac{2 M_n \lambda^5}{\tau_n(1-\lambda)}\right\}_{n \ge 1} \quad \text{and} \quad \left\{\frac{2 M_n \lambda^5}{\min\{\tau_n^*, \rho_n^*, \sigma_n^*\}(1-\lambda)}\right\}_{n \ge 1}$$

are both bounded, thereby satisfying the hypotheses of Theorem 2. Let
$$\mathrm{RES}(t, x_n) = \frac{d^2}{dt^2}x_n(t) - \frac{x_n^2(t)}{2\big(x_n^2(t)+1\big)^2}$$

be the residual error for $n > 0$. Figure 1 (top right) shows that, starting with the initial norm $\|x_0 - x_0^*\| = 0$, where $x_0(t) = x_0^*(t) = t + 1$, the sequence $\{\|x_n - x_n^*\|\}_{n \ge 1}$ converges to $0$, while the diagrams below, together with Table 1, illustrate that the sequences $\{x_n\}_{n \ge 1}$ and $\{x_n^*\}_{n \ge 1}$ converge to the fixed point of the mapping $S$. Numbers in parentheses indicate decimal exponents.
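The computation behind Figure 1 can be reproduced in outline with a short grid-based sketch. This is our own discretization, not the authors' code: the nested integrals are approximated by cumulative trapezoidal sums on 201 nodes, and the iteration counts are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)

def cumtrap(f):
    # cumulative trapezoidal integral of samples f over the grid t
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
    return out

def S(u):
    # S(u)(t) = 1 + integral_0^t (1 + integral_0^s u^2/(2 (u^2+1)^2) dx) ds
    inner = cumtrap(u**2 / (2.0 * (u**2 + 1.0)**2))
    return 1.0 + cumtrap(1.0 + inner)

def df_step(u, rho, sigma, tau):
    # one step of the DF scheme (Algorithm 1)
    w = S((1 - tau) * u + tau * S(u))
    y = S(S(w))
    z = S((1 - sigma) * S(y) + sigma * S(w))
    return S((1 - rho) * S(y) + rho * S(z))

def ar_step(u, rho, sigma, tau):
    # one step of the AR scheme (Algorithm 2)
    w = S((1 - tau) * u + tau * S(u))
    y = S((1 - sigma) * w + sigma * S(w))
    z = S((1 - rho) * y + rho * S(y))
    return S(S(z))

x = xs = t + 1.0  # x_0(t) = x_0*(t) = t + 1
for n in range(1, 16):
    tau_n = 1 - 1 / (n + 10)        # tau_n = tau*_n = rho*_n = sigma*_n
    rho_n = 1 - 1 / (n + 10) ** 2   # rho_n = sigma_n
    x = df_step(x, rho_n, rho_n, tau_n)
    xs = ar_step(xs, tau_n, tau_n, tau_n)

gap = float(np.max(np.abs(x - xs)))
assert gap < 1e-8                       # ||x_n - x_n*|| -> 0
assert np.max(np.abs(S(x) - x)) < 1e-8  # x_n approximates the discrete fixed point
```

Both schemes share the same discrete operator $S$, so their iterates collapse onto the same discrete fixed point, mirroring the convergence equivalence asserted by Theorem 2.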
Theorem 3 below provides theoretical error bounds when the exact operator $S$ is replaced in the iterations by a perturbed operator $\tilde{S}$ (Algorithms 3 and 4), which can be interpreted as an approximation arising in practical applications. To illustrate the usefulness of this result, we consider a modified nonlinear differential equation. Such an example demonstrates how the admissible error $\varepsilon$ can be explicitly quantified in a functional setting, and how the theoretical estimates (9) and (10) compare with the actual numerical deviations observed in practice.
Theorem 3.
Let $S : C \to C$ be a $\lambda$-contraction mapping with a fixed point $s^*$, and let $\tilde{S} : C \to C$ be a mapping. Consider the sequences $\{\tilde{x}_n\}$ and $\{\tilde{x}_n^*\}$ generated by the DF and AR iteration algorithms associated with $\tilde{S}$, which are defined as follows:
Algorithm 3: DF iteration algorithm for $\tilde{S}$
   Input: A mapping $\tilde{S}$, initial point $\tilde{x}_0 \in C$, control sequences $\{\rho_n\}, \{\sigma_n\}, \{\tau_n\} \subset [0,1]$, and budget $N$.
   1: for $n = 0, 1, 2, \ldots, N$ do
   2:   $\tilde{w}_n = \tilde{S}\big((1-\tau_n)\tilde{x}_n + \tau_n \tilde{S}\tilde{x}_n\big)$
        $\tilde{y}_n = \tilde{S}^2 \tilde{w}_n$
        $\tilde{z}_n = \tilde{S}\big((1-\sigma_n)\tilde{S}\tilde{y}_n + \sigma_n \tilde{S}\tilde{w}_n\big)$
        $\tilde{x}_{n+1} = \tilde{S}\big((1-\rho_n)\tilde{S}\tilde{y}_n + \rho_n \tilde{S}\tilde{z}_n\big)$
   3: end for
   Output: Approximate solution $\tilde{x}_N$
Algorithm 4: AR iteration algorithm for $\tilde{S}$
   Input: A mapping $\tilde{S}$, initial point $\tilde{x}_0^* \in C$, control sequences $\{\rho_n^*\}, \{\sigma_n^*\}, \{\tau_n^*\} \subset [0,1]$, and budget $N$.
   1: for $n = 0, 1, 2, \ldots, N$ do
   2:   $\tilde{w}_n^* = \tilde{S}\big((1-\tau_n^*)\tilde{x}_n^* + \tau_n^* \tilde{S}\tilde{x}_n^*\big)$
        $\tilde{y}_n^* = \tilde{S}\big((1-\sigma_n^*)\tilde{w}_n^* + \sigma_n^* \tilde{S}\tilde{w}_n^*\big)$
        $\tilde{z}_n^* = \tilde{S}\big((1-\rho_n^*)\tilde{y}_n^* + \rho_n^* \tilde{S}\tilde{y}_n^*\big)$
        $\tilde{x}_{n+1}^* = \tilde{S}^2 \tilde{z}_n^*$
   3: end for
   Output: Approximate solution $\tilde{x}_N^*$
Suppose the following conditions hold:
(C1) There exists a maximum admissible error $\varepsilon > 0$ such that

$$\|Sx - \tilde{S}x\| \le \varepsilon \quad \text{for all } x \in C;$$

(C2) There exists $\tilde{s}^* \in C$ such that $\tilde{S}\tilde{s}^* = \tilde{s}^*$ and both iterative sequences $\{\tilde{x}_n\}$ and $\{\tilde{x}_n^*\}$ converge to $\tilde{s}^*$.
Then, the following bounds hold for the iterative sequences $\{\tilde{x}_n\}$ and $\{\tilde{x}_n^*\}$, respectively:

$$\|s^* - \tilde{s}^*\| \le \frac{(\lambda^2+1)^2(\lambda+1)}{1-\lambda^5}\,\varepsilon \tag{9}$$

and

$$\|s^* - \tilde{s}^*\| \le \frac{(\lambda^4+\lambda^3+\lambda^2+\lambda+1)(\lambda+1)}{1-\lambda^5}\,\varepsilon. \tag{10}$$
Proof. 
We begin by deriving the bound presented in (9) for the term $\|s^* - \tilde{s}^*\|$, utilizing the DF iteration algorithms associated with the mappings $S$ and $\tilde{S}$. Using the contraction property of $S$, condition (C1), and the DF iteration algorithm for both $S$ and $\tilde{S}$, we obtain the following estimates:

$$\begin{aligned}
\|w_n - \tilde{w}_n\| &\le \big\|S\big((1-\tau_n)x_n + \tau_n Sx_n\big) - S\big((1-\tau_n)\tilde{x}_n + \tau_n \tilde{S}\tilde{x}_n\big)\big\| \\
&\quad + \big\|S\big((1-\tau_n)\tilde{x}_n + \tau_n \tilde{S}\tilde{x}_n\big) - \tilde{S}\big((1-\tau_n)\tilde{x}_n + \tau_n \tilde{S}\tilde{x}_n\big)\big\| \\
&\le \lambda\big\|(1-\tau_n)(x_n - \tilde{x}_n) + \tau_n(Sx_n - \tilde{S}\tilde{x}_n)\big\| + \varepsilon \\
&\le \lambda\big(1-\tau_n(1-\lambda)\big)\|x_n - \tilde{x}_n\| + \lambda\tau_n\varepsilon + \varepsilon, \\[4pt]
\|y_n - \tilde{y}_n\| &\le \|SSw_n - S\tilde{S}\tilde{w}_n\| + \|S\tilde{S}\tilde{w}_n - \tilde{S}\tilde{S}\tilde{w}_n\| \\
&\le \lambda\big(\|Sw_n - S\tilde{w}_n\| + \|S\tilde{w}_n - \tilde{S}\tilde{w}_n\|\big) + \varepsilon \le \lambda^2\|w_n - \tilde{w}_n\| + (\lambda+1)\varepsilon, \\[4pt]
\|z_n - \tilde{z}_n\| &\le \lambda\big((1-\sigma_n)\|Sy_n - \tilde{S}\tilde{y}_n\| + \sigma_n\|Sw_n - \tilde{S}\tilde{w}_n\|\big) + \varepsilon \\
&\le \lambda^2\big((1-\sigma_n)\|y_n - \tilde{y}_n\| + \sigma_n\|w_n - \tilde{w}_n\|\big) + (\lambda+1)\varepsilon, \\[4pt]
\|x_{n+1} - \tilde{x}_{n+1}\| &\le \lambda\big((1-\rho_n)(\|Sy_n - S\tilde{y}_n\| + \varepsilon) + \rho_n(\|Sz_n - S\tilde{z}_n\| + \varepsilon)\big) + \varepsilon \\
&\le \lambda^2\big((1-\rho_n)\|y_n - \tilde{y}_n\| + \rho_n\|z_n - \tilde{z}_n\|\big) + (\lambda+1)\varepsilon.
\end{aligned}$$
By combining these inequalities, we obtain

$$\begin{aligned}
\|x_{n+1} - \tilde{x}_{n+1}\| &\le \lambda^5\big(1-\rho_n + \rho_n\big(\lambda^2(1-\sigma_n) + \sigma_n\big)\big)\big(1-\tau_n(1-\lambda)\big)\|x_n - \tilde{x}_n\| \\
&\quad + \lambda^4\big(1-\rho_n + \rho_n\big(\lambda^2(1-\sigma_n) + \sigma_n\big)\big)(\lambda\tau_n\varepsilon + \varepsilon) \\
&\quad + \lambda^2\big(1-\rho_n + \rho_n\lambda^2(1-\sigma_n)\big)(\lambda+1)\varepsilon + \lambda^2\rho_n(\lambda+1)\varepsilon + (\lambda+1)\varepsilon.
\end{aligned} \tag{11}$$

Since $\rho_n, \sigma_n, \tau_n \in [0,1]$ for every $n \in \mathbb{N}$ and $\lambda \in [0,1)$, we conclude that

$$\|x_{n+1} - \tilde{x}_{n+1}\| \le \lambda^5\|x_n - \tilde{x}_n\| + (\lambda^4 + 2\lambda^2 + 1)(\lambda+1)\varepsilon.$$
From ([26], Theorem 2), we have $\lim_{n\to\infty} x_n = s^*$, and under assumption (C2), we also have $\lim_{n\to\infty} \tilde{x}_n = \tilde{s}^*$. Taking the limit on both sides of the final inequality yields

$$\|s^* - \tilde{s}^*\| \le \frac{(\lambda^2+1)^2(\lambda+1)}{1-\lambda^5}\,\varepsilon.$$
Next, we derive the bound specified in (10) for the quantity $\|s^* - \tilde{s}^*\|$ by utilizing the AR iteration algorithms associated with the mappings $S$ and $\tilde{S}$. By leveraging the contraction property of $S$, assumption (C1), and the AR iteration algorithm for both $S$ and $\tilde{S}$, the following estimates can be derived in a manner similar to the previous ones:

$$\begin{aligned}
\|w_n^* - \tilde{w}_n^*\| &\le \lambda\big(1-\tau_n^*(1-\lambda)\big)\|x_n^* - \tilde{x}_n^*\| + \lambda\tau_n^*\varepsilon + \varepsilon, \\
\|y_n^* - \tilde{y}_n^*\| &\le \lambda\big(1-\sigma_n^*(1-\lambda)\big)\|w_n^* - \tilde{w}_n^*\| + \lambda\sigma_n^*\varepsilon + \varepsilon, \\
\|z_n^* - \tilde{z}_n^*\| &\le \lambda\big(1-\rho_n^*(1-\lambda)\big)\|y_n^* - \tilde{y}_n^*\| + \lambda\rho_n^*\varepsilon + \varepsilon, \\
\|x_{n+1}^* - \tilde{x}_{n+1}^*\| &\le \lambda^2\|z_n^* - \tilde{z}_n^*\| + (\lambda+1)\varepsilon.
\end{aligned}$$
By successively substituting these bounds, we obtain the following:

$$\begin{aligned}
\|x_{n+1}^* - \tilde{x}_{n+1}^*\| &\le \lambda^5\big(1-\rho_n^*(1-\lambda)\big)\big(1-\sigma_n^*(1-\lambda)\big)\big(1-\tau_n^*(1-\lambda)\big)\|x_n^* - \tilde{x}_n^*\| \\
&\quad + \lambda^4\big(1-\rho_n^*(1-\lambda)\big)\big(1-\sigma_n^*(1-\lambda)\big)(\lambda\tau_n^*\varepsilon + \varepsilon) \\
&\quad + \lambda^3\big(1-\rho_n^*(1-\lambda)\big)(\lambda\sigma_n^*\varepsilon + \varepsilon) + \lambda^2(\lambda\rho_n^*\varepsilon + \varepsilon) + (\lambda+1)\varepsilon.
\end{aligned} \tag{12}$$

Since $\rho_n^*, \sigma_n^*, \tau_n^* \in [0,1]$ for every $n \in \mathbb{N}$ and $\lambda \in [0,1)$, it follows that

$$\|x_{n+1}^* - \tilde{x}_{n+1}^*\| \le \lambda^5\|x_n^* - \tilde{x}_n^*\| + (\lambda^4 + \lambda^3 + \lambda^2 + \lambda + 1)(\lambda+1)\varepsilon.$$
From ([27], Theorem 3), we know that $\lim_{n\to\infty} x_n^* = s^*$, and under assumption (C2), we also have $\lim_{n\to\infty} \tilde{x}_n^* = \tilde{s}^*$. Taking limits on both sides of the last inequality, we finally obtain

$$\|s^* - \tilde{s}^*\| \le \frac{(\lambda^4+\lambda^3+\lambda^2+\lambda+1)(\lambda+1)}{1-\lambda^5}\,\varepsilon,$$
which completes the proof. □
To clarify the applicability of Theorem 3, we present a concrete example constructed from a nonlinear differential equation. This will allow us to explicitly see how the perturbation of the operator affects the fixed point and how the theoretical error bounds are reflected in practice.
Example 4.
Let $B$, $C$, $S$, $\rho_n$, $\sigma_n$, $\tau_n$, $\rho_n^*$, $\sigma_n^*$, and $\tau_n^*$ be defined as in Example 3. We now consider the following second-order initial value problem:

$$\frac{d^2}{dt^2}u(t) = \frac{u^2(t)}{2\left(u^2(t) + \dfrac{u^2(t)-u^3(t)}{10^2} + 1\right)^2}, \quad u(0) = 1, \quad \frac{d}{dt}u(0) = 1.$$
A possible solution to this problem can be formulated as an integral equation:

$$u(t) = 1 + \int_0^t \left(1 + \int_0^s \frac{u^2(x)}{2\left(u^2(x) + \dfrac{u^2(x)-u^3(x)}{10^2} + 1\right)^2}\,dx\right)ds.$$
Define an operator $\tilde{S} : C \to C$ as follows:

$$\tilde{S}(u)(t) = 1 + \int_0^t \left(1 + \int_0^s \frac{u^2(x)}{2\left(u^2(x) + \dfrac{u^2(x)-u^3(x)}{10^2} + 1\right)^2}\,dx\right)ds.$$
Thus, we can establish the following bound:

$$\begin{aligned}
\|S(u) - \tilde{S}(u)\| &= \left\|\int_0^t \int_0^s \left(\frac{u^2(x)}{2(u^2(x)+1)^2} - \frac{u^2(x)}{2\left(u^2(x) + \dfrac{u^2(x)-u^3(x)}{10^2} + 1\right)^2}\right)dx\,ds\right\| \\
&\le \frac{1}{2}\left\|\frac{u^2(t) - u^3(t)}{10^2}\right\| < 7.42 \times 10^{-4} = \varepsilon.
\end{aligned}$$
The obtained value $7.42 \times 10^{-4}$ represents the admissible tolerance $\varepsilon$, showing that the error in the approximation does not exceed this bound. As in Example 3, we introduce the residual error for the sequence $\{\tilde{x}_n\}$ by
$$\mathrm{RES}(t, \tilde{x}_n) = \frac{d^2}{dt^2}\tilde{x}_n(t) - \frac{\tilde{x}_n^2(t)}{2\left(\tilde{x}_n^2(t) + \dfrac{\tilde{x}_n^2(t)-\tilde{x}_n^3(t)}{10^2} + 1\right)^2}.$$
From Table 2 and Figure 2, starting with the initial function $\tilde{x}_0(t) = \tilde{x}_0^*(t) = t + 1$, it is evident that the sequences $\{\tilde{x}_n\}_{n \ge 1}$ and $\{\tilde{x}_n^*\}_{n \ge 1}$ exhibit convergence toward a fixed point of $\tilde{S}$. As a result, all the conditions of Theorem 3 are satisfied, thereby confirming the validity of the estimates in (9) and (10), as demonstrated below:
$$\|s^* - \tilde{s}^*\| \le 8.55 \times 10^{-4} \le 1.15 \times 10^{-3} = \frac{(\lambda^2+1)^2(\lambda+1)}{1-\lambda^5}\,\varepsilon,$$

$$\|s^* - \tilde{s}^*\| \le 9.678 \times 10^{-4} \le 1.38 \times 10^{-3} = \frac{(\lambda^4+\lambda^3+\lambda^2+\lambda+1)(\lambda+1)}{1-\lambda^5}\,\varepsilon.$$
The numerical estimates demonstrate that the deviations $\|s^* - \tilde{s}^*\|$ are bounded by $8.55 \times 10^{-4}$ and $9.678 \times 10^{-4}$, which remain well within the respective theoretical tolerances $\frac{(\lambda^2+1)^2(\lambda+1)}{1-\lambda^5}\varepsilon$ and $\frac{(\lambda^4+\lambda^3+\lambda^2+\lambda+1)(\lambda+1)}{1-\lambda^5}\varepsilon$, thereby confirming both the accuracy of the numerical scheme and the sharpness of the established analytical bounds.
The numerical outcomes presented in Table 2 and Figure 2 highlight two important observations. First, the residual errors decrease rapidly with each iteration, confirming the strong convergence of both DF and AR schemes towards the fixed point of S ˜ . Second, the measured deviations between s * and s ˜ * remain well within the analytical error bounds derived in Theorem 3. This shows that estimates (9) and (10) are not only mathematically valid, but also numerically sharp. Consequently, Example 4 provides concrete evidence of the stability and reliability of the proposed iterative methods when small perturbations are introduced into the underlying operator.
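The mechanics of Theorem 3 can also be checked on a deliberately simple pair of operators. The affine mappings below are our own illustrative choice (not the operators of Example 4): $Sx = \lambda x + c$ and $\tilde{S}x = Sx + \delta$, for which both fixed points are available in closed form and $|\delta|$ plays the role of $\varepsilon$ in condition (C1).

```python
lam, c, delta = 0.3, 1.0, 5e-4  # illustrative values: λ-contraction plus shift δ
S = lambda x: lam * x + c
S_tilde = lambda x: lam * x + c + delta  # perturbed operator, sup|S − S̃| = |δ|

s_star = c / (1 - lam)                   # fixed point of S
s_tilde_star = (c + delta) / (1 - lam)   # fixed point of S̃
eps = abs(delta)
dev = abs(s_star - s_tilde_star)

bound_df = (lam**2 + 1) ** 2 * (lam + 1) / (1 - lam**5) * eps                      # (9)
bound_ar = (lam**4 + lam**3 + lam**2 + lam + 1) * (lam + 1) / (1 - lam**5) * eps   # (10)
assert dev <= bound_df and dev <= bound_ar
# the still sharper estimate ε/(1−λ) from Remark 1 also holds (up to rounding)
assert dev <= eps / (1 - lam) + 1e-12
```

For this family the deviation is exactly $\delta/(1-\lambda)$, so it saturates the $\varepsilon/(1-\lambda)$ estimate while staying strictly inside the bounds (9) and (10).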
Remark 1.
(1) The estimates provided in Theorem 3 within (9) and (10) for the quantity $\|s^* - \tilde{s}^*\|$ offer substantially more precise approximations than the corresponding ones presented in parts (i) and (ii) of Theorem 1. Additionally, a comparative analysis of the estimates given in (9) and (10) reveals that the bound in (9) exhibits superior accuracy compared to that in (10).
Furthermore, by utilizing the identity

$$\frac{1}{1-\lambda^{k+1}} \sum_{i=0}^{k} \lambda^i = \frac{1}{1-\lambda} \quad \text{for all } k \in \mathbb{N},$$
we derive the following asymptotic results:
(D1) Taking the limit as $n \to \infty$ under the assumption that $\rho_n, \sigma_n, \tau_n \to 0$, the inequality given in (11) leads to

$$\|s^* - \tilde{s}^*\| \le \frac{\lambda^4+\lambda^3+\lambda^2+\lambda+1}{1-\lambda^5}\,\varepsilon = \frac{\varepsilon}{1-\lambda}. \tag{13}$$

(D2) As $n \to \infty$, assuming $\rho_n, \sigma_n, \tau_n \to 1$, the inequality from (11) results in

$$\|s^* - \tilde{s}^*\| \le \frac{\lambda^5+\lambda^4+\lambda^3+\lambda^2+\lambda+1}{1-\lambda^6}\,\varepsilon = \frac{\varepsilon}{1-\lambda}. \tag{14}$$

(D3) Taking the limit as $n \to \infty$ with $\rho_n^*, \sigma_n^*, \tau_n^* \to 0$, the inequality from (12) yields

$$\|s^* - \tilde{s}^*\| \le \frac{\lambda^4+\lambda^3+\lambda^2+\lambda+1}{1-\lambda^5}\,\varepsilon = \frac{\varepsilon}{1-\lambda}. \tag{15}$$

(D4) Finally, as $n \to \infty$ under the condition $\rho_n^*, \sigma_n^*, \tau_n^* \to 1$, applying the limit to the inequality in (12) gives

$$\|s^* - \tilde{s}^*\| \le \frac{\lambda^7+\lambda^6+\lambda^5+\lambda^4+\lambda^3+\lambda^2+\lambda+1}{1-\lambda^8}\,\varepsilon = \frac{\varepsilon}{1-\lambda}. \tag{16}$$
Based on these results, it follows that in cases (D1) and (D2), the estimates obtained in (13) and (14) are more precise than the bound in (9). Similarly, for cases (D3) and (D4), the estimates in (15) and (16) yield more refined approximations than that in (10).
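The geometric-series identity above, and the fact that all four limiting coefficients collapse to $1/(1-\lambda)$, can be verified numerically; the value $\lambda = 0.3$ below is an arbitrary test point of ours.

```python
lam = 0.3  # arbitrary test value of λ in [0, 1)

# geometric-series identity: (Σ_{i=0}^k λ^i) / (1 − λ^{k+1}) = 1 / (1 − λ)
for k in range(10):
    lhs = sum(lam**i for i in range(k + 1)) / (1 - lam**(k + 1))
    assert abs(lhs - 1 / (1 - lam)) < 1e-12

# limiting coefficients in (D1)-(D4) all collapse to 1/(1−λ)
d1 = sum(lam**i for i in range(5)) / (1 - lam**5)   # (13), same as (15)
d2 = sum(lam**i for i in range(6)) / (1 - lam**6)   # (14)
d4 = sum(lam**i for i in range(8)) / (1 - lam**8)   # (16)
assert all(abs(d - 1 / (1 - lam)) < 1e-12 for d in (d1, d2, d4))
```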
(2) Let the mappings $S$ and $\tilde{S}$ be as defined in Theorem 3. Then, we observe that

$$\|s^* - \tilde{s}^*\| = \|Ss^* - \tilde{S}\tilde{s}^*\| \le \|Ss^* - S\tilde{s}^*\| + \|S\tilde{s}^* - \tilde{S}\tilde{s}^*\| \le \lambda\|s^* - \tilde{s}^*\| + \sup_{x \in C}\|Sx - \tilde{S}x\|,$$

where, due to condition (C1), we have

$$\sup_{x \in C}\|Sx - \tilde{S}x\| \le \varepsilon,$$

which leads to

$$\|s^* - \tilde{s}^*\| \le \frac{\varepsilon}{1-\lambda}.$$
Thus, using a more straightforward approach, we obtain a tighter bound for s * s ˜ * compared to those provided in (9) and (10).
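The one-line argument in part (2) can be seen numerically on a toy scalar contraction (our own example; the operators $S$ and $\tilde{S}$ of Theorem 3 are more general):

```python
# Toy illustration of ||s* - s~*|| <= eps/(1 - lam): two lam-contractions on R
# whose pointwise gap is eps have fixed points at most eps/(1 - lam) apart.
lam, c, eps = 0.8, 1.0, 1e-3

S = lambda x: lam * x + c              # fixed point s* = c / (1 - lam)
S_tilde = lambda x: lam * x + c + eps  # sup_x |S x - S~ x| = eps

def fixed_point(f, x=0.0, n=2000):
    """Picard iteration; converges geometrically since f is a contraction."""
    for _ in range(n):
        x = f(x)
    return x

deviation = abs(fixed_point(S) - fixed_point(S_tilde))
print(deviation, eps / (1.0 - lam))
assert deviation <= eps / (1.0 - lam) + 1e-12
```

Here the bound is attained exactly, since the perturbation shifts the fixed point by $\varepsilon/(1-\lambda)$ itself.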
(3) Table 3 presents a comparison of the results obtained from the analyses conducted to establish an upper bound for $\|s^* - \tilde{s}^*\|$:
Example 5.
Let $B$, $C$, $S$, and $\tilde{S}$ be defined as in Example 4.
(D1) Consider the parametrized sequences $\tau_n$, $\rho_n$, $\sigma_n$, prescribed by
$$\tau_n=\frac{1}{n+10},\quad \rho_n=\frac{1}{(n+10)^{2}},\quad \sigma_n=\frac{1}{(n+10)^{2}}$$
for all $n\ge 1$. These sequences exhibit the asymptotic behavior $\tau_n,\rho_n,\sigma_n\to 0$ as $n\to\infty$. As empirically validated by Figure 3, the iterative sequence $\{x_n\}_{n\ge 1}$, starting from the initial $x_0(t)=t+1$, converges to the fixed point of $S$, while $\{\tilde{x}_n\}_{n\ge 1}$, starting from the initial $\tilde{x}_0(t)=t+1$, converges to the fixed point of $\tilde{S}$. A quantitative assessment yields
$$\|s^*-\tilde{s}^*\|\le 9.55\times 10^{-4}\le 1.05\times 10^{-3}=\frac{\varepsilon}{1-\lambda}.$$
(D2) Building on Example 4, define the sequences
$$\tau_n=1-\frac{1}{n+10},\quad \rho_n=1-\frac{1}{(n+10)^{2}},\quad \sigma_n=1-\frac{1}{(n+10)^{2}}$$
for all $n\ge 1$. Here, $\tau_n,\rho_n,\sigma_n\to 1$ as $n\to\infty$. Graphical results in Figure 3 (left and right) confirm that $\{x_n\}_{n\ge 1}$ and $\{\tilde{x}_n\}_{n\ge 1}$, starting from the initial $x_0(t)=\tilde{x}_0(t)=t+1$, converge to $s^*$ and $\tilde{s}^*$, respectively. The error propagation adheres to the bound
$$\|s^*-\tilde{s}^*\|\le 8.55\times 10^{-4}\le 1.05\times 10^{-3}=\frac{\varepsilon}{1-\lambda}.$$
(D3) Let the sequences $\tau_n^*$, $\rho_n^*$, and $\sigma_n^*$ be specified via
$$\tau_n^*=\frac{1}{n+10},\quad \rho_n^*=\frac{1}{n+10},\quad \sigma_n^*=\frac{1}{n+10}$$
for all $n\ge 1$. These sequences satisfy $\tau_n^*,\rho_n^*,\sigma_n^*\to 0$ as $n\to\infty$. As depicted in Figure 4 (left and right), $\{x_n^*\}_{n\ge 1}$ and $\{\tilde{x}_n^*\}_{n\ge 1}$, starting from the initial $x_0^*(t)=\tilde{x}_0^*(t)=t+1$, converge to $s^*$ and $\tilde{s}^*$, respectively. The deviation between fixed points is bounded by
$$\|s^*-\tilde{s}^*\|\le 8.73\times 10^{-4}\le 1.05\times 10^{-3}=\frac{\varepsilon}{1-\lambda}.$$
(D4) Adopting the framework of Example 4, define
$$\tau_n^*=1-\frac{1}{n+10},\quad \rho_n^*=1-\frac{1}{n+10},\quad \sigma_n^*=1-\frac{1}{n+10}$$
for all $n\ge 1$. Then, $\tau_n^*,\rho_n^*,\sigma_n^*\to 1$ as $n\to\infty$. Starting from the initial $x_0^*(t)=\tilde{x}_0^*(t)=t+1$, Figure 1 (bottom-right) and Figure 2 (right) illustrate the convergence of $\{x_n^*\}_{n\ge 1}$ and $\{\tilde{x}_n^*\}_{n\ge 1}$ to $s^*$ and $\tilde{s}^*$, respectively. The empirical error remains well within the theoretical upper limit
$$\|s^*-\tilde{s}^*\|\le 9.678\times 10^{-4}\le 1.05\times 10^{-3}=\frac{\varepsilon}{1-\lambda}.$$
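The experiments in (D1)–(D4) can be mimicked on a scalar model problem. The sketch below is an assumption-laden stand-in: it uses a generic three-step convex-combination scheme driven by the control sequences of case (D1), not the DF/AR recursions defined earlier in the paper, and a toy contraction in place of the integral operators of Example 4.

```python
lam = 0.5
S = lambda x: lam * x + 1.0          # toy contraction with fixed point s* = 2
S_tilde = lambda x: lam * x + 1.001  # perturbed copy with sup-gap eps = 1e-3
eps = 0.001

def iterate(f, x, n_steps=400):
    """Generic three-step scheme (illustrative stand-in for DF/AR) with the
    (D1) controls: tau_n = 1/(n+10), rho_n = sigma_n = 1/(n+10)^2."""
    for n in range(1, n_steps + 1):
        tau, rho, sigma = 1 / (n + 10), 1 / (n + 10) ** 2, 1 / (n + 10) ** 2
        z = (1 - sigma) * x + sigma * f(x)
        y = (1 - rho) * z + rho * f(z)
        x = (1 - tau) * y + tau * f(y)
    return x

gap = abs(iterate(S, 2.0) - iterate(S_tilde, 2.0))
print(gap, eps / (1 - lam))  # the gap stays below eps/(1 - lam) = 2e-3
assert gap <= eps / (1 - lam)
```

As in Example 5, the deviation between the two computed limits respects the bound $\varepsilon/(1-\lambda)$, even though the slowly decaying controls make the individual iterations converge sublinearly.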

3. Discussion

The equivalence of convergence between the DF and AR algorithms (Theorem 2) underscores a deeper structural symmetry in their iterative mechanisms, which had not been previously recognized. This finding simplifies comparative analyses in applications where algorithm selection is non-trivial. Our data-dependence results (Theorem 3) significantly improve upon the works of Filali et al. [26] and Alam et al. [27] by removing the requirement for $\tilde{S}$ to be a strict contraction, instead permitting it to satisfy the weakened inequality $\|\tilde{S}x-\tilde{S}y\|\le\lambda\|x-y\|+\mu$. This generalization is critical for real-world scenarios where numerical approximations or noisy data inherently violate strict contractivity. A notable limitation lies in the assumption of a uniform bound $\varepsilon$ on $\|Sx-\tilde{S}x\|$, which may not hold in unbounded domains. Additionally, while our focus on Banach spaces covers a broad class of problems, extending these results to metric spaces with non-linear structures, such as hyperbolic or CAT(0) spaces, remains an open challenge.
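The weakened inequality can be made concrete with a small script. The map below is our own toy example (the cosine defect is a stand-in for noise, not an operator from the paper): it satisfies $\|\tilde{S}x-\tilde{S}y\|\le\lambda\|x-y\|+\mu$ without being a strict contraction, and although orbits need not merge, any two of them are forced into a band of width $\mu/(1-\lambda)$.

```python
import math

lam, mu = 0.5, 0.2
# |S~x - S~y| <= lam|x - y| + (mu/2)|cos(10x) - cos(10y)| <= lam|x - y| + mu,
# yet S~ is not a strict contraction: its derivative exceeds 1 in places.
S_tilde = lambda x: lam * x + (mu / 2) * math.cos(10 * x)

x, y = -5.0, 7.0  # two far-apart starting points
for _ in range(100):
    x, y = S_tilde(x), S_tilde(y)

# Unrolling the per-step bound gives |x_n - y_n| <= lam^n|x_0 - y_0|
# + mu(1 - lam^n)/(1 - lam) < mu/(1 - lam) for large n.
print(abs(x - y), mu / (1 - lam))
assert abs(x - y) <= mu / (1 - lam) + 1e-12
```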

4. Conclusions

The primary contribution of this study is the derivation of enhanced data-dependence estimates without imposing stringent contraction conditions on the perturbed operator. By employing a non-asymptotic approach, we establish explicit error bounds on the perturbations of fixed points, given by $\|s^*-\tilde{s}^*\|\le O(\varepsilon/(1-\lambda))$, and demonstrate that small perturbations in the operator systematically propagate to their corresponding equilibria in a controlled and quantifiable manner. These refined estimates offer stronger theoretical guarantees compared to existing results, thereby improving the robustness of the framework in practical applications where numerical approximations and modeling inaccuracies are unavoidable. Furthermore, we characterize the conditions under which the DF and AR iteration algorithms exhibit equivalent convergence behavior, providing a unified theoretical perspective on their properties. Our analysis reveals that, under specific conditions, both algorithms demonstrate comparable stability and sensitivity characteristics. Additionally, we refine the data-dependence analysis in the context of contraction mappings by relaxing restrictive assumptions on control sequences and underlying mappings, thereby broadening the scope and applicability of asymptotic data-dependence results. These findings are substantiated through rigorous theoretical analysis and illustrative examples. The insights derived from this study contribute to a deeper understanding of fixed-point approximation methods and their sensitivity to perturbations. Future research directions include extending these results to more general classes of contractive mappings and investigating their implications in optimization, differential equations, and machine learning algorithms.

Author Contributions

Conceptualization, E.H., F.G., and G.V.M.; data curation, E.H. and K.D.; methodology, E.H., F.G., and G.V.M.; formal analysis, K.D., E.H., F.G., G.V.M., and M.E.; investigation, K.D., E.H., F.G., G.V.M., and M.E.; resources, K.D., E.H., F.G., G.V.M., and M.E.; writing—original draft preparation, K.D., E.H., and F.G.; writing—review and editing, K.D., E.H., F.G., G.V.M., and M.E.; visualization, K.D., E.H., F.G., and G.V.M.; supervision, E.H., F.G., and G.V.M.; project administration, G.V.M.; funding acquisition, E.H., F.G., G.V.M., and M.E. All authors have read and agreed to the published version of the manuscript.

Funding

The authors (E.H., F.G., and M.E.) acknowledge that their contribution to this work was partially supported by the Adiyaman University Scientific Research Projects Unit under Project No. FEFMAP/2025-0001, titled “A New Preconditional Forward-Backward Algorithm for Monotone Operators: Convergence Analysis and Applications”. The work of G.V.M. was supported in part by the Serbian Academy of Sciences and Arts (Φ-96).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest related to this work. They have no financial, personal, or professional relationships that could be construed as having influenced the research, analysis, or conclusions presented in this paper.

References

1. Banach, S. Sur les opérations dans les ensembles abstraits et leurs applications aux équations intégrales. Fundam. Math. 1922, 3, 133–181.
2. Berinde, V. Approximating fixed points of weak contractions using the Picard iteration. Nonlinear Anal. Forum 2004, 9, 43–53.
3. Chatterjea, S.K. Fixed-point theorems. C. R. Acad. Bulg. Sci. 1972, 25, 727–730.
4. Ćirić, L.B. A generalization of Banach’s contraction principle. Proc. Am. Math. Soc. 1974, 45, 267–273.
5. Kannan, R. Some results on fixed points. Bull. Calcutta Math. Soc. 1968, 60, 71–76.
6. Osilike, M.O. Stability results for fixed point iteration procedures. J. Nigerian Math. Soc. 1995, 14, 17–29.
7. Popescu, O. A new class of contractive mappings. Acta Math. Hung. 2021, 164, 570–579.
8. Rus, I.A. Some fixed point theorems in metric spaces. Rend. Istit. Mat. Univ. Trieste 1971, 3, 169–172.
9. Abbas, M.; Ali, B.; Butt, A.R. Existence and data dependence of the fixed points of generalized contraction mappings with applications. Rev. Real Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2015, 109, 603–621.
10. Khan, A.R.; Kumar, V.; Hussain, N. Analytical and numerical treatment of Jungck-type iterative schemes. Appl. Math. Comput. 2014, 231, 521–535.
11. Khatoon, S.; Uddin, I.; Başarır, M. A modified proximal point algorithm for a nearly asymptotically quasi-nonexpansive mapping with an application. Comput. Appl. Math. 2021, 40, 250.
12. Rus, I.A.; Petruşel, A.; Petruşel, G. Fixed Point Theory 1950–2000: Romanian Contributions; House of the Book of Science: Cluj-Napoca, Romania, 2002.
13. Şoltuz, Ş.M.; Grosan, T. Data dependence for Ishikawa iteration when dealing with contractive-like operators. Fixed Point Theory Appl. 2008, 2008, 242916.
14. Ullah, K.; Arshad, M. Numerical reckoning fixed points for Suzuki’s generalized nonexpansive mappings via new iteration process. Filomat 2018, 32, 187–196.
15. Usurelu, G.; Bejenaru, A.; Postolache, M. Newton-like methods and polynomiographic visualization of modified Thakur processes. Int. J. Comput. Math. 2021, 98, 1049–1068.
16. Zaslavski, A.J. Two convergence results for inexact orbits of nonexpansive operators in metric spaces with graphs. Axioms 2023, 12, 999.
17. Ali, F.; Ali, J. Convergence, stability, and data dependence of a new iterative algorithm with an application. Comput. Appl. Math. 2020, 39, 267.
18. Çelik, R.; Şimşek, N. Some convergence, stability, and data dependence results for K* iterative method of quasi-strictly contractive mappings. Turk. J. Math. 2022, 46, 2819–2833.
19. Çopur, A.K. Results of convergence, stability, and data dependency for an iterative algorithm. J. New Theory 2024, 48, 99–112.
20. Ertürk, M.; Gürsoy, F. Some convergence, stability and data dependency results for a Picard-S iteration method of quasi-strictly contractive operators. Math. Bohem. 2019, 144, 69–83.
21. Gürsoy, F. A robust alternative to examine data dependency of fixed points of quasi-contractive operators: An efficient approach that relies on the collage theorem. Comput. Appl. Math. 2024, 43, 168.
22. Hacıoğlu, E.; Gürsoy, F.; Maldar, S.; Atalan, Y.; Milovanović, G.V. Iterative approximation of fixed points and applications to two-point second-order boundary value problems and to machine learning. Appl. Numer. Math. 2021, 167, 143–172.
23. Karakaya, V.; Atalan, Y.; Doğan, K.; Bouzara, N.E.H. Some fixed point results for a new three steps iteration process in Banach spaces. Fixed Point Theory 2017, 18, 625–640.
24. Maldar, S. New parallel fixed point algorithms and their application to a system of variational inequalities. Symmetry 2022, 14, 1025.
25. Micula, S.; Milovanović, G.V. Iterative processes and integral equations of the second kind. In Matrix and Operator Equations and Applications; Moslehian, M.S., Ed.; Springer: Cham, Switzerland, 2023; pp. 661–711.
26. Filali, D.; Eljaneid, N.H.E.; Alatawi, A.; Alshaban, E.; Ali, M.S.; Khan, F.A. A novel and efficient iterative approach to approximating solutions of fractional differential equations. Mathematics 2025, 13, 33.
27. Alam, K.H.; Rohen, Y. Convergence of a refined iterative method and its application to fractional Volterra–Fredholm integro-differential equations. Comput. Appl. Math. 2025, 44, 2.
28. Weng, X. Fixed point iteration for local strictly pseudocontractive mapping. Proc. Am. Math. Soc. 1991, 113, 727–731.
Figure 1. (top) The first 100 terms of the sequences $\frac{2M_n\lambda^5}{\tau_n(1-\lambda)}$ and $\frac{2M_n\lambda^5}{\min\{\tau_n^*,\rho_n^*,\sigma_n^*\}(1-\lambda)}$ (left) and $\{x_n\}$ and $\{x_n^*\}$ (right); (bottom) residual errors $\mathrm{RES}(t,x_n)$ (left) and $\mathrm{RES}(t,x_n^*)$ (right) for $n=1$ and 2.
Figure 2. Residual errors $\mathrm{RES}(t,\tilde{x}_n)$ (left) and $\mathrm{RES}(t,\tilde{x}_n^*)$ (right) for $n=1$ and 2.
Figure 3. The residual errors $\mathrm{RES}(t,x_n)=\left(\frac{d^2}{dt^2}x_n(t)-\frac{x_n^2(t)}{2}-\frac{x_n(t)}{2}+1\right)^2$ (left) and $\mathrm{RES}(t,\tilde{x}_n)=\left(\frac{d^2}{dt^2}\tilde{x}_n(t)-\frac{\tilde{x}_n^2(t)}{2}-\frac{\tilde{x}_n(t)}{2}+\frac{\tilde{x}_n^2(t)-\tilde{x}_n^3(t)}{10^2}+1\right)^2$ (right) for $n=1$ and 2.
Figure 4. The residual errors $\mathrm{RES}(t,x_n^*)=\left(\frac{d^2}{dt^2}x_n^*(t)-\frac{(x_n^*)^2(t)}{2}-\frac{x_n^*(t)}{2}+1\right)^2$ (left) and $\mathrm{RES}(t,\tilde{x}_n^*)=\left(\frac{d^2}{dt^2}\tilde{x}_n^*(t)-\frac{(\tilde{x}_n^*)^2(t)}{2}-\frac{\tilde{x}_n^*(t)}{2}+\frac{(\tilde{x}_n^*)^2(t)-(\tilde{x}_n^*)^3(t)}{10^2}+1\right)^2$ (right) for $n=1$ and 2.
Table 1. Numerical values of the residual errors $\mathrm{RES}(t,x_n)$ and $\mathrm{RES}(t,x_n^*)$ for $n=1$ and 2; an entry $a(-b)$ denotes $a\times 10^{-b}$.

t    | RES(t, x_1) | RES(t, x_2) | RES(t, x_1*) | RES(t, x_2*)
0.0  | 0           | 0           | 0            | 0
0.1  | 6.39(−20)   | 2.10(−40)   | 2.08(−22)    | 2.10(−33)
0.2  | 1.14(−16)   | 1.53(−32)   | 1.89(−20)    | 1.53(−31)
0.3  | 1.10(−14)   | 1.17(−27)   | 4.97(−19)    | 1.17(−30)
0.4  | 3.19(−13)   | 4.56(−26)   | 1.28(−17)    | 4.56(−26)
0.5  | 4.72(−12)   | 1.28(−24)   | 2.15(−16)    | 1.28(−24)
0.6  | 4.56(−11)   | 2.98(−19)   | 2.49(−15)    | 2.99(−23)
0.7  | 3.27(−10)   | 6.09(−19)   | 2.22(−14)    | 6.10(−21)
0.8  | 1.89(−9)    | 1.13(−18)   | 1.63(−13)    | 1.13(−20)
0.9  | 9.25(−9)    | 1.96(−17)   | 1.03(−12)    | 1.97(−18)
1.0  | 3.98(−8)    | 3.21(−17)   | 5.78(−12)    | 3.22(−17)
Table 2. Numerical values of the residual errors $\mathrm{RES}(t,\tilde{x}_n)$ and $\mathrm{RES}(t,\tilde{x}_n^*)$ for $n=1$ and 2; an entry $a(-b)$ denotes $a\times 10^{-b}$.

t    | RES(t, x̃_1) | RES(t, x̃_2) | RES(t, x̃_1*) | RES(t, x̃_2*)
0.0  | 0           | 0           | 0            | 0
0.1  | 7.19(−20)   | 8.42(−40)   | 1.90(−24)    | 2.79(−37)
0.2  | 1.30(−16)   | 1.12(−33)   | 4.09(−21)    | 3.62(−33)
0.3  | 1.27(−14)   | 2.00(−27)   | 4.38(−19)    | 6.26(−29)
0.4  | 3.73(−13)   | 1.55(−26)   | 1.45(−17)    | 4.74(−25)
0.5  | 5.60(−12)   | 7.65(−22)   | 2.56(−16)    | 2.28(−24)
0.6  | 5.49(−11)   | 2.82(−22)   | 3.06(−15)    | 8.25(−23)
0.7  | 4.01(−10)   | 8.53(−20)   | 2.79(−14)    | 2.44(−21)
0.8  | 2.36(−9)    | 2.22(−20)   | 2.10(−13)    | 6.25(−20)
0.9  | 1.18(−8)    | 5.15(−19)   | 1.36(−12)    | 1.43(−18)
1.0  | 5.19(−8)    | 1.08(−18)   | 7.90(−12)    | 2.99(−18)
Table 3. Comparison of the coefficients of $\varepsilon/(1-\lambda)$ across these bounds for a typical $\lambda\in[0,1)$.

Bound in   | Coefficient | Improvement factor (vs. original 15)
(2) or (3) | 15          | baseline
(5)        | 3           | 5×
(9)        | $\frac{(\lambda^2+1)(2\lambda+1)}{\lambda^4+\lambda^3+\lambda^2+\lambda+1}$ | ≈3–6×
(10)       | $\frac{(\lambda^4+\lambda^3+\lambda^2+1)(\lambda+1)}{\lambda^4+\lambda^3+\lambda^2+\lambda+1}$ | ≈3–5×
(13)–(16)  | 1           | 15×
(17)       | 1           | 15×