Article

Generalized Ishikawa Iterative Algorithm with Errors and Variable Generalized Ishikawa Iterative Algorithm for Nonexpansive Mappings in Symmetric Banach Spaces

School of Mathematical Sciences, Mudanjiang Normal University, Mudanjiang 157000, China
* Author to whom correspondence should be addressed.
Symmetry 2026, 18(1), 125; https://doi.org/10.3390/sym18010125
Submission received: 1 December 2025 / Revised: 29 December 2025 / Accepted: 6 January 2026 / Published: 9 January 2026

Abstract

We present a generalized Ishikawa iterative algorithm with an error term and a variable generalized Ishikawa iterative algorithm. Leveraging the geometric symmetry inherent in uniformly convex Banach spaces, we establish their respective weak convergence theorems for nonexpansive mappings. As applications, we extend several recent results in the literature related to the proximal point algorithm and the split feasibility problem. Consequently, we propose a hyper-generalized proximal point algorithm and a hyper-generalized perturbation CQ algorithm. Our work not only broadens the application scope of these methods but also highlights the foundational role of symmetric space properties in ensuring algorithmic convergence.

1. Introduction

This paper adopts the following conventional notation: H denotes a Hilbert space, E a real Banach space with dual $E^*$, and $\mathrm{Fix}(T)$ the set of fixed points of an operator T. Furthermore, the symbols $\rightharpoonup$ and $\to$ denote weak and strong convergence, respectively, while $\omega(x_n) := \{x : \exists\,\{x_{n_k}\} \subset \{x_n\}\ \text{s.t.}\ x_{n_k} \rightharpoonup x\}$ denotes the set of all weak cluster points of the sequence $\{x_n\}$.
Fixed point theory for nonexpansive mappings plays a pivotal role in modern nonlinear analysis and optimization. The seminal contributions by Browder [1] and Kirk [2] paved the way for its rapid development. A particularly fruitful line of inquiry has been the extensive study of iterative methods for locating fixed points of these mappings. Among these methods, the KM iterative algorithm is the most widely used and has proved remarkably effective.
The famous KM iteration algorithm was proposed by Krasnosel'skii [3] and Mann [4]. Given some $x_0 \in H$, the iteration format is
$x_{n+1} = (1-\alpha)x_n + \alpha T x_n, \quad n = 0, 1, 2, \ldots.$
Subsequently, Reich [5] allowed the coefficient to vary with the iteration, taking $\alpha_n \in [0,1]$, and proposed the more general iteration format
$x_{n+1} = (1-\alpha_n)x_n + \alpha_n T x_n, \quad n = 0, 1, 2, \ldots,$
where $\alpha_n$ satisfies
$\sum_{n=0}^{\infty} \alpha_n(1-\alpha_n) = \infty.$
A key result regarding the weak convergence of (3) in Banach spaces was provided by Reich.
In 2017, the generalized KM iterative algorithm was introduced by Kanzow and Shehu [6] for nonexpansive mappings T in Hilbert spaces H, where $\alpha_n, \beta_n \in [0,1]$ satisfy $\alpha_n + \beta_n \le 1$ and $\{r_n\}$ is an error sequence. Given some $x_0 \in H$, the iteration format is
$x_{n+1} = \alpha_n x_n + \beta_n T x_n + r_n, \quad n = 0, 1, 2, \ldots.$
If the following conditions hold: (i) $\sum_{n=0}^{\infty}\alpha_n\beta_n = \infty$; (ii) $\sum_{n=0}^{\infty}(1-\alpha_n-\beta_n) < \infty$; (iii) $\sum_{n=0}^{\infty}\|r_n\| < \infty$, then the sequence $\{x_n\}$ converges weakly to a fixed point of T. The generalized KM iteration relaxes the convex-combination requirement on the coefficients, allowing their sum to be less than or equal to 1, which broadens the choice of coefficients and enhances their flexibility.
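For intuition, one sweep of scheme (4) is a single line of arithmetic. The following Python sketch (ours, for illustration only, not part of the original analysis) runs it with T taken to be the metric projection onto the closed unit ball of $\mathbb{R}^2$; the coefficient and error choices are assumptions that satisfy conditions (i)–(iii).

import numpy as np

def proj_unit_ball(x):
    """Metric projection onto {x : ||x|| <= 1}; a nonexpansive map."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def generalized_km(T, x0, steps=200):
    """Generalized KM iteration (4): x_{n+1} = a_n x_n + b_n T x_n + r_n."""
    x = np.asarray(x0, dtype=float)
    for n in range(steps):
        a_n = b_n = 0.5 - 0.5 / (n + 2) ** 2        # a_n + b_n = 1 - 1/(n+2)^2 <= 1, sum a_n*b_n = inf
        r_n = np.array([1.0, -1.0]) / (n + 2) ** 2  # summable error term
        x = a_n * x + b_n * T(x) + r_n
    return x

print(generalized_km(proj_unit_ball, [3.0, 4.0]))   # approaches a point of the unit ball, i.e. a fixed point of T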
Zhang et al. [7] presented the weak convergence of the generalized KM iterative algorithm in Banach spaces, thereby extending the result of [6], and applied this conclusion to the zero point problem. Zhang [8] subsequently proposed a variable generalized KM iterative algorithm in Banach spaces, whose iteration format is
$x_{n+1} = \alpha_n x_n + \beta_n T_n x_n, \quad n = 0, 1, 2, \ldots.$
Using similar techniques, the weak convergence of this algorithm was proved and applied to the split feasibility problem.
Over time, the KM iteration has seen continued advancement; the Ishikawa iteration [9], which is more general in form and wider in applicability, was proposed on the basis of the KM iteration. Its iteration format is
$x_{n+1} = t_n T(s_n T x_n + (1 - s_n)x_n) + (1 - t_n)x_n, \quad n = 0, 1, 2, \ldots,$
where $\{t_n\}, \{s_n\} \subset [0,1]$ are two sequences satisfying certain conditions. Clearly, when $s_n \equiv 0$, the Ishikawa format reduces to the KM format. In 1993, the weak convergence of the Ishikawa iteration on bounded sets was established by Tan and Xu [10].
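To make the two-step structure concrete, here is a minimal Python sketch (ours, not from [9]); the reflection $T(z) = -z$ is an illustrative nonexpansive mapping whose plain Picard iteration merely oscillates, while the averaged Ishikawa steps converge to its unique fixed point 0.

def ishikawa_step(T, x, t_n, s_n):
    """One Ishikawa step (6): an auxiliary averaged point feeds a second application of T."""
    y = s_n * T(x) + (1.0 - s_n) * x      # inner point
    return t_n * T(y) + (1.0 - t_n) * x   # outer update; s_n = 0 recovers the KM step

x = 5.0
for _ in range(60):
    x = ishikawa_step(lambda z: -z, x, t_n=0.5, s_n=0.5)
print(x)   # essentially 0, the fixed point of z -> -z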
In recent years, the primary trajectory of research on the generalized Ishikawa iterative algorithm has evolved from the convergence analysis of its classical form. It has progressively shifted toward constructing more general and applicable algorithmic frameworks by means of relaxed conditions, error terms, extended operators, and expanded spaces, ultimately achieving deep integration with specific applied problems [11,12,13,14,15].
In 2018, Wang [11] proposed the generalized Ishikawa iteration:
$x_{n+1} := \alpha_n x_n + \beta_n T(s_n x_n + t_n T x_n), \quad n = 0, 1, 2, \ldots.$
When the coefficients α n ,   β n ,   s n ,   t n satisfy certain conditions, he established the weak convergence of the iterative algorithm in Hilbert spaces, along with its application to variational inequalities.
The objective of this work is to enhance algorithm (7) and to investigate the weak convergence of the modified algorithm in Banach spaces, building on the research of Zhang [7] and Wang [11]. The improved generalized Ishikawa algorithm takes the form
$y_{n+1} = a_n T(u_n T y_n + v_n y_n + e_n) + b_n y_n + r_n, \quad n = 0, 1, 2, \ldots,$
which is called the generalized Ishikawa iterative algorithm with errors. More precisely, the weak convergence of algorithm (8) will be established in a uniformly convex Banach space E, provided that either E satisfies the Opial property or its dual $E^*$ has the KK-property (both of which will be defined later).
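Spelled out in code, one sweep of scheme (8) reads as follows; this is a minimal Python sketch with illustrative coefficient and error choices (our assumptions, not prescriptions of the analysis), whose convergence is what Theorem 1 below justifies.

import numpy as np

def generalized_ishikawa_with_errors(T, y0, a, b, u, v, e, r, steps):
    """Scheme (8): y_{n+1} = a_n T(u_n T y_n + v_n y_n + e_n) + b_n y_n + r_n."""
    y = np.asarray(y0, dtype=float)
    for n in range(steps):
        inner = u(n) * T(y) + v(n) * y + e(n)     # perturbed inner (Ishikawa) point
        y = a(n) * T(inner) + b(n) * y + r(n)     # relaxed outer step with error r_n
    return y

# Illustrative run: T(z) = max{0, -z} is nonexpansive with unique fixed point 0.
print(generalized_ishikawa_with_errors(
    lambda z: np.maximum(0.0, -z), y0=[-1.0],
    a=lambda n: 1.0 / (n + 2), b=lambda n: 1.0 - 1.0 / (n + 1),
    u=lambda n: 1.0 / (n + 2), v=lambda n: 1.0 - 1.0 / (n + 1),
    e=lambda n: 1.0 / (n + 1) ** 2, r=lambda n: 0.0, steps=20))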
The remainder of this paper is structured into the following sections: Section 2 provides a review of fundamental concepts and results that require further analysis. Section 3 is dedicated to analyzing the weak convergence of the generalized Ishikawa iterative algorithm with errors and explores its application to the hyper-generalized proximal point algorithm. Section 4 addresses the weak convergence of the variable generalized Ishikawa iterative algorithm and discusses its application to the hyper-generalized perturbation CQ algorithm. Finally, Section 5 presents the concluding remarks.

2. Preliminaries

A Banach space E is called uniformly convex if for every $\varepsilon \in (0, 2]$ there exists $\delta(\varepsilon) > 0$ such that, for all $x, y \in E$ with $\|x\| = \|y\| = 1$ and $\|x - y\| \ge \varepsilon$,
$\left\|\tfrac{1}{2}(x + y)\right\| \le 1 - \delta(\varepsilon).$
We say that E has the KK-property if, for any sequence $\{x_n\}$ in E, the conditions $x_n \rightharpoonup x$ and $\|x_n\| \to \|x\|$ imply that $x_n \to x$. On the other hand, E is said to satisfy the Opial property if, for any sequence $\{x_n\}$ in E with $x_n \rightharpoonup x$,
$\limsup_{n\to\infty}\|x_n - x\| < \limsup_{n\to\infty}\|x_n - y\|, \quad \forall\, y \in E,\ y \ne x.$
To conclude, this section presents several necessary lemmas and propositions. This will be beneficial for our subsequent convergence analysis.
Lemma 1
([10]). Let $\{\alpha_n\}$ and $\{\beta_n\}$ be sequences of nonnegative real numbers. If, for all $n \ge 0$,
$\alpha_{n+1} \le \alpha_n + \beta_n, \qquad \sum_{n=0}^{\infty}\beta_n < \infty,$
then $\lim_{n\to\infty}\alpha_n$ exists.
Lemma 2
([11]). If $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences of nonnegative real numbers satisfying
$\sum_{n=0}^{\infty}\alpha_n\beta_n < \infty, \qquad \sum_{n=0}^{\infty}\alpha_n = \infty,$
then $\liminf_{n\to\infty}\beta_n = 0$.
Lemma 3
(Demiclosedness principle [16]). Let D be a nonempty closed convex subset of E and let $T: D \to D$ be a nonexpansive mapping. The operator $I - T$ is called demiclosed if and only if
$\big(\{x_n\} \subset D,\ x_n \rightharpoonup x,\ x_n - T x_n \to y\big) \Longrightarrow x - T x = y.$
Remark 1
([16]). The demiclosedness principle holds automatically in uniformly convex Banach spaces.
Analogous to the proof of Proposition 3.2 in Kim and Xu [17], the following proposition can be verified in a straightforward manner.
Proposition 1.
Let E be a uniformly convex Banach space, and let the sequence $\{x_n\}$ be generated by the iteration
$x_{n+1} = \alpha_n T(s_n T x_n + t_n x_n + e_n) + \beta_n x_n + r_n, \quad n = 0, 1, 2, \ldots,$
then
(i) 
For $p, q \in \mathrm{Fix}(T)$ and $0 \le t \le 1$, $\lim_{n\to\infty}\|t x_n + (1 - t)p - q\|$ exists;
(ii) 
If, in addition, the dual space $E^*$ of E has the KK-property, then the weak limit set of $\{x_n\}$, denoted by $\omega(x_n) := \{x : \exists\,\{x_{n_k}\} \subset \{x_n\}\ \text{s.t.}\ x_{n_k} \rightharpoonup x\}$, is a singleton.

3. On the Convergence of the Generalized Ishikawa Iterative Algorithm with Errors and Its Applications

3.1. Weak Convergence of Generalized Ishikawa Iterative Algorithm with Errors

This section will establish the weak convergence of the generalized Ishikawa iterative algorithm with errors. The findings not only broaden the scope of the generalized Ishikawa iterative algorithm but also enhance the applicability and flexibility of the generalized KM iterative algorithm.
Theorem 1.
Let E be a uniformly convex Banach space such that either its dual $E^*$ has the KK property or E itself satisfies the Opial property. Let D be a nonempty closed convex subset of E, and let $T: D \to D$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. The sequence $\{y_n\}$ is generated by the following iterative scheme:
$y_{n+1} = a_n T(u_n T y_n + v_n y_n + e_n) + b_n y_n + r_n, \quad n = 0, 1, 2, \ldots,$
where $a_n, b_n, u_n, v_n \in [0,1]$, $\{a_n\}$ is consistently greater than zero, $a_n + b_n \le 1$, $u_n + v_n \le 1$, and $\{e_n\}$ and $\{r_n\}$ are error sequences. The following conditions are assumed to hold:
(i) 
$\sum_{n=0}^{\infty} u_n v_n = \infty$;
(ii) 
$\sum_{n=0}^{\infty}(1 - u_n - v_n) < \infty$, $\sum_{n=0}^{\infty}(1 - a_n - b_n) < \infty$;
(iii) 
$\sum_{n=0}^{\infty}\|e_n\| < \infty$, $\sum_{n=0}^{\infty}\|r_n\| < \infty$;
(iv) 
$\sum_{n=0}^{\infty} a_n u_n < \infty$.
Then the sequence $\{y_n\}$ generated by the iteration (9) converges weakly to a fixed point of T.
Proof. 
We divide the proof into four steps.
Step 1. For $y^* \in \mathrm{Fix}(T)$, we will show that $\lim_{n\to\infty}\|y_n - y^*\|$ exists.
By setting
$E_n := \|e_n - y^*\|, \qquad R_n := \|r_n - y^*\|,$
we know from the definitions of $e_n$ and $r_n$ that
$0 \le E_n < +\infty, \qquad 0 \le R_n < +\infty.$
Moreover,
$\begin{aligned} \|y_{n+1} - y^*\| &= \|a_n T(u_n T y_n + v_n y_n + e_n) + b_n y_n + r_n - y^*\| \\ &\le a_n\|u_n T y_n + v_n y_n + e_n - y^*\| + b_n\|y_n - y^*\| + (1 - a_n - b_n)R_n + (a_n + b_n)\|r_n\| \\ &\le a_n\big(\|u_n(T y_n - y^*) + v_n(y_n - y^*)\| + (1 - u_n - v_n)E_n + (u_n + v_n)\|e_n\|\big) + b_n\|y_n - y^*\| + (1 - a_n - b_n)R_n + \|r_n\| \\ &\le a_n\big(u_n\|T y_n - y^*\| + v_n\|y_n - y^*\|\big) + a_n(1 - u_n - v_n)E_n + a_n(u_n + v_n)\|e_n\| + b_n\|y_n - y^*\| + (1 - a_n - b_n)R_n + \|r_n\| \\ &\le a_n(u_n + v_n)\|y_n - y^*\| + a_n(1 - u_n - v_n)E_n + a_n\|e_n\| + b_n\|y_n - y^*\| + (1 - a_n - b_n)R_n + \|r_n\| \\ &\le \|y_n - y^*\| + a_n(1 - u_n - v_n)E_n + a_n\|e_n\| + (1 - a_n - b_n)R_n + \|r_n\|. \end{aligned}$
It follows from conditions (ii) and (iii) and Lemma 1, applied to (10) with $\alpha_n := \|y_n - y^*\|$ and $\beta_n := a_n(1 - u_n - v_n)E_n + a_n\|e_n\| + (1 - a_n - b_n)R_n + \|r_n\|$, that $\lim_{n\to\infty}\|y_n - y^*\|$ exists, whence the sequence $\{y_n\}$ is bounded.
Step 2. We now demonstrate that $\liminf_{n\to\infty}\|T y_n - y_n\| = 0$.
Set
$d_n := u_n T y_n + v_n y_n + e_n,$
and
M n : = ( 1 u n v n ) 2 E n 2 + e n 2 + ( 1 a n b n ) 2 R n 2 + r n 2 + 2 ( 1 u n v n ) E n ( e n + y n y * + a n b n y n y * + a n r n ) + 2 r n ( y n y * + a n e n ) + 2 ( 1 + a n b n ) e n · y n y * + 2 ( 1 a n b n ) R n ( r n + y n y * + a n e n ) + 2 a n ( 1 u n v n ) ( 1 a n b n ) E n · R n .
For $y^* \in \mathrm{Fix}(T)$, we obtain
y n + 1 y * 2 = a n T d n + b n y n + r n y * 2 = a n ( T d n y * ) + b n ( y n y * ) + ( 1 a n b n ) R n + ( a n + b n ) r n 2 a n 2 d n y * 2 + b n ( y n y * ) + ( 1 a n b n ) R n + ( a n + b n ) r n 2 + 2 a n d n y * · b n ( y n y * ) + ( 1 a n b n ) R n + ( a n + b n ) r n a n 2 u n ( T y n y * ) + v n ( y n y * ) 2 + a n 2 ( 1 u n v n ) E n + ( u n + v n ) e n 2 + 2 a n 2 u n ( T y n y * ) + v n ( y n y * ) · ( 1 u n v n ) E n + ( u n + v n ) e n + b n 2 y n y * 2 + ( 1 a n b n ) R n + ( a n + b n ) r n 2 + 2 b n y n y * · ( 1 a n b n ) R n + ( a n + b n ) r n + 2 a n u n T y n + v n y n + e n y * · b n ( y n y * ) + ( 1 a n b n ) R n + ( a n + b n ) r n a n 2 ( u n ( u n + v n ) T y n y * 2 + v n ( u n + v n ) y n y * 2 u n v n h ^ ( T y n y n ) ) + b n 2 y n y * 2 + 2 a n b n y n y * 2 + M n ( a n 2 + b n 2 + 2 a n b n ) y n y * 2 a n 2 u n v n h ^ ( T y n y n ) + M n y n y * 2 a n 2 u n v n h ^ ( T y n y n ) + M n ,
we know from (11) that
$u_n v_n \hat{h}(\|T y_n - y_n\|) \le \frac{1}{a_n^2}\big(\|y_n - y^*\|^2 - \|y_{n+1} - y^*\|^2 + M_n\big).$
It follows from conditions (ii) and (iii) that $\sum_{n=0}^{\infty} M_n < \infty$. This, together with (12), implies that
$\sum_{n=0}^{\infty} u_n v_n \hat{h}(\|T y_n - y_n\|) < \infty.$
In particular, $\lim_{n\to\infty} u_n v_n \hat{h}(\|T y_n - y_n\|) = 0$. By an application of Lemma 2 under condition (i), we obtain
$\liminf_{n\to\infty}\hat{h}(\|T y_n - y_n\|) = 0,$
and hence
$\liminf_{n\to\infty}\|T y_n - y_n\| = 0.$
Step 3. We now prove that $\lim_{n\to\infty}\|T y_n - y_n\| = 0$. To this end, first observe that
$\begin{aligned} \|y_{n+1} - y_n\| &= \|a_n T(u_n T y_n + v_n y_n + e_n) + b_n y_n + r_n - y_n\| \\ &\le a_n\|u_n T y_n + v_n y_n + e_n - y_n\| + a_n\|T y_n - y_n\| + (1 - a_n - b_n)\|y_n\| + \|r_n\| \\ &\le a_n u_n\|T y_n - y_n\| + a_n(1 - u_n - v_n)\|y_n\| + a_n\|e_n\| + a_n\|T y_n - y_n\| + (1 - a_n - b_n)\|y_n\| + \|r_n\|. \end{aligned}$
Then we have
$\begin{aligned} \|T y_{n+1} - y_{n+1}\| &= \|T y_{n+1} - a_n T(u_n T y_n + v_n y_n + e_n) - b_n y_n - r_n\| \\ &\le \|T y_{n+1} - T y_n\| + a_n\|T y_n - T(u_n T y_n + v_n y_n + e_n)\| + (1 - a_n)\|T y_n - y_n\| + (1 - a_n - b_n)\|y_n\| + \|r_n\| \\ &\le \|y_{n+1} - y_n\| + a_n\|y_n - u_n T y_n - v_n y_n - e_n\| + (1 - a_n)\|T y_n - y_n\| + (1 - a_n - b_n)\|y_n\| + \|r_n\| \\ &\le a_n(1 + u_n)\|T y_n - y_n\| + a_n\|e_n\| + \|r_n\| + a_n(1 - u_n - v_n)\|y_n\| + (1 - a_n - b_n)\|y_n\| \\ &\qquad + a_n\big(u_n\|T y_n - y_n\| + (1 - u_n - v_n)\|y_n\| + \|e_n\|\big) + (1 - a_n)\|T y_n - y_n\| + \|r_n\| + (1 - a_n - b_n)\|y_n\| \\ &\le \|T y_n - y_n\| + (1 + a_n)(1 - u_n - v_n)\|y_n\| + 2\big(a_n u_n\|T y_n - y_n\| + a_n\|e_n\| + \|r_n\| + (1 - a_n - b_n)\|y_n\|\big). \end{aligned}$
We set
$N_n := (1 + a_n)(1 - u_n - v_n)\|y_n\| + 2\big(a_n u_n\|T y_n - y_n\| + a_n\|e_n\| + \|r_n\| + (1 - a_n - b_n)\|y_n\|\big).$
It follows from conditions (ii)–(iv) and the boundedness of $\{y_n\}$ that $\sum_{n=0}^{\infty} N_n < \infty$.
Setting $g_n := \|T y_n - y_n\|$ and $k_n := N_n$, Formula (14) can be written as $g_{n+1} \le g_n + k_n$. By applying Lemma 1, we conclude that $\lim_{n\to\infty}\|y_n - T y_n\|$ exists. Since $\liminf_{n\to\infty}\|y_n - T y_n\| = 0$, this yields $\lim_{n\to\infty}\|y_n - T y_n\| = 0$.
Step 4. An application of the demiclosedness principle to the operator $I - T$ yields the inclusion $\omega(y_n) \subset \mathrm{Fix}(T)$, where $\omega(y_n) := \{y : \exists\,\{y_{n_k}\} \subset \{y_n\}\ \text{s.t.}\ y_{n_k} \rightharpoonup y\}$. Therefore, to establish the weak convergence of $\{y_n\}$ to a fixed point of T, it suffices to prove that the set of its weak cluster points $\omega(y_n)$ is a singleton. We proceed by considering two cases.
In the first case, where E * possesses the KK-property, we conclude that ω ( y n ) is a singleton by appealing to Proposition 1.
In the second case, E satisfies Opial’s property.
Take $y^*, y^{**} \in \omega(y_n)$, and let $\{y_{n_i}\}$ and $\{y_{m_j}\}$ be subsequences of $\{y_n\}$ such that $y_{n_i} \rightharpoonup y^*$ and $y_{m_j} \rightharpoonup y^{**}$, respectively. If $y^* \ne y^{**}$, then, by the Opial property and the existence of the limits from Step 1,
$\lim_{n\to\infty}\|y_n - y^*\| = \lim_{i\to\infty}\|y_{n_i} - y^*\| < \lim_{i\to\infty}\|y_{n_i} - y^{**}\| = \lim_{j\to\infty}\|y_{m_j} - y^{**}\| < \lim_{j\to\infty}\|y_{m_j} - y^*\| = \lim_{n\to\infty}\|y_n - y^*\|,$
a contradiction. Hence, in both cases, we have shown that ω ( y n ) is a singleton, which completes the proof. □
Setting e n = r n = 0 in Theorem 1 yields the following corollary.
Corollary 1.
Let E be a uniformly convex Banach space such that either its dual $E^*$ has the KK property or E satisfies the Opial property. Let D be a nonempty closed convex subset of E, and let $T: D \to D$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. The sequence $\{y_n\}$ is generated by the following iteration:
$y_{n+1} = a_n T(u_n T y_n + v_n y_n) + b_n y_n, \quad n = 0, 1, 2, \ldots,$
where $a_n, b_n, u_n, v_n \in [0,1]$, $\{a_n\}$ is consistently greater than zero, $a_n + b_n \le 1$, and $u_n + v_n \le 1$. Assume the following conditions hold:
(i) 
$\sum_{n=0}^{\infty} u_n v_n = \infty$;
(ii) 
$\sum_{n=0}^{\infty}(1 - u_n - v_n) < \infty$, $\sum_{n=0}^{\infty}(1 - a_n - b_n) < \infty$;
(iii) 
$\sum_{n=0}^{\infty} a_n u_n < \infty$.
Then the sequence $\{y_n\}$ generated by the iteration (15) converges weakly to a fixed point of T.
If we set $a_n := v_n$, $b_n := 1 - v_n$, $v_n := 1 - u_n$, and $e_n = r_n = 0$, then the following conclusion holds.
Corollary 2.
Let E be a uniformly convex Banach space. Assume, in addition, that either $E^*$ has the KK-property or E satisfies the Opial property. Let D be a nonempty closed convex subset of E, and let $T: D \to D$ be a nonexpansive mapping such that $\mathrm{Fix}(T) \ne \emptyset$. The sequence $\{y_n\}$ is generated by the following iteration:
$y_{n+1} = v_n T(u_n T y_n + (1 - u_n)y_n) + (1 - v_n)y_n, \quad n = 0, 1, 2, \ldots,$
where $u_n, v_n \in [0,1]$ and $\{v_n\}$ is consistently greater than zero. Assume the following conditions hold:
(i) 
$\sum_{n=0}^{\infty} u_n(1 - u_n) = \infty$;
(ii) 
$\sum_{n=0}^{\infty} v_n u_n < \infty$.
Then the sequence $\{y_n\}$ generated by the iteration (16) converges weakly to a fixed point of T.
We next provide a concrete example to demonstrate that the sequence generated by recursion (9) of Theorem 1 converges weakly to a fixed point of T when $a_n + b_n < 1$ and $u_n + v_n < 1$, and we compare the iterates with those of the generalized KM iteration.
Example 1.
Define the mapping $T: \mathbb{R} \to \mathbb{R}$ by $T y = \max\{0, -y\}$. Then T is nonexpansive and possesses a unique fixed point at $y = 0$.
Proof. 
We consider the following iterative scheme:
$y_{n+1} = a_n T(u_n T y_n + v_n y_n + e_n) + b_n y_n + r_n, \quad n = 1, 2, \ldots,$
where $a_n = u_n = \frac{1}{1+n}$, $b_n = v_n = 1 - \frac{1}{n}$, $e_n = \frac{1}{n^2}$, $r_n = 0$. Then we have
$y_{n+1} = \frac{1}{1+n}\max\left\{0, -\left(\frac{1}{1+n}\max\{0, -y_n\} + \left(1 - \frac{1}{n}\right)y_n + \frac{1}{n^2}\right)\right\} + \left(1 - \frac{1}{n}\right)y_n.$
Observe that the selections of $a_n$, $b_n$, $u_n$, $v_n$, $e_n$ and $r_n$ adhere to conditions (i)–(iv) in Theorem 1, with $a_n + b_n < 1$ and $u_n + v_n < 1$. Taking $y_1 = -1$, the iteration gives $y_2 = 0$.
We now turn to the generalized KM iterative scheme:
$y_{n+1} = a_n y_n + b_n T y_n + r_n, \quad n = 1, 2, \ldots,$
where $a_n = 1 - \frac{1}{n}$, $b_n = \frac{1}{1+n}$, $r_n = 0$. Then we have
$y_{n+1} = \left(1 - \frac{1}{n}\right)y_n + \frac{1}{1+n}\max\{0, -y_n\}.$
Taking $y_1 = -1$, the iteration gives $y_2 = \frac{1}{2}$, $y_3 = \frac{1}{4}$, $y_4 = \frac{1}{6}$, $y_5 = \frac{1}{8}, \ldots$; for $n \ge 2$, $y_n = \frac{1}{2(n-1)}$. □
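Both recursions in this example are easy to verify numerically; the following Python sketch (ours, not part of the paper) reproduces $y_2 = 0$ for the scheme with errors and the sequence $\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{6}, \ldots$ for the generalized KM scheme.

T = lambda y: max(0.0, -y)                 # the mapping of Example 1

# Generalized Ishikawa iteration with errors, Example 1 coefficients (n >= 1).
y = -1.0
for n in range(1, 6):
    a_n = u_n = 1.0 / (1 + n)
    b_n = v_n = 1.0 - 1.0 / n
    e_n = 1.0 / n ** 2
    y = a_n * T(u_n * T(y) + v_n * y + e_n) + b_n * y
print(y)                                   # 0.0: the fixed point is reached at the first step and kept

# Generalized KM iteration with the same mapping and starting point.
x, km = -1.0, []
for n in range(1, 6):
    x = (1.0 - 1.0 / n) * x + (1.0 / (1 + n)) * T(x)
    km.append(x)
print(km)                                  # [0.5, 0.25, 0.1666..., 0.125, 0.1]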
Figure 1 compares the convergence behavior of the generalized Ishikawa iteration with errors and the generalized KM iteration. As shown, the generalized Ishikawa algorithm with errors exhibits faster convergence and higher computational accuracy.
A subsequent example is provided to compare the convergence rates of the generalized Ishikawa iteration with the generalized Ishikawa iteration with errors.
Example 2.
Define the mapping $T: \mathbb{R} \to \mathbb{R}$ by $T y = -y + 2$, which is readily verified to be nonexpansive with unique fixed point $y = 1$.
First, consider the generalized Ishikawa iteration:
$y_{n+1} = a_n T(u_n T y_n + v_n y_n) + b_n y_n, \quad n = 1, 2, \ldots,$
where $a_n = u_n = \frac{1}{1+n}$, $b_n = v_n = 1 - \frac{1}{n}$; then we obtain
$y_{n+1} = \frac{1}{1+n}\left[-\left(\frac{1}{1+n}(-y_n + 2) + \left(1 - \frac{1}{n}\right)y_n\right) + 2\right] + \left(1 - \frac{1}{n}\right)y_n.$
Setting $y_1 = 0$ in the iteration yields $y_2 = \frac{1}{2}$, $y_3 = \frac{2}{3}$, $y_4 = \frac{3}{4}$, $y_5 = \frac{4}{5}, \ldots$. Furthermore, a straightforward induction shows that $y_n = \frac{n-1}{n}$ for $n \ge 2$.
Next, consider the generalized Ishikawa iteration with errors:
$y_{n+1} = a_n T(u_n T y_n + v_n y_n + e_n) + b_n y_n + r_n, \quad n = 1, 2, \ldots,$
where $a_n = u_n = \frac{1}{1+n}$, $b_n = v_n = 1 - \frac{1}{n}$, $e_n = 0$, $r_n = \frac{1}{n^2+1}$; then we obtain
$y_{n+1} = \frac{1}{1+n}\left[-\left(\frac{1}{1+n}(-y_n + 2) + \left(1 - \frac{1}{n}\right)y_n\right) + 2\right] + \left(1 - \frac{1}{n}\right)y_n + \frac{1}{n^2+1}.$
Setting $y_1 = 0$ in the iteration yields $y_2 = 1$.
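Again, the two schemes can be checked numerically; the Python sketch below (ours) reproduces the iterates $\tfrac{1}{2}, \tfrac{2}{3}, \tfrac{3}{4}, \ldots$ without errors and the immediate hit of the fixed point $y = 1$ with errors.

T = lambda y: -y + 2.0                     # nonexpansive, unique fixed point y = 1

def run(with_errors, steps=6):
    y, out = 0.0, []
    for n in range(1, steps):
        a_n = u_n = 1.0 / (1 + n)
        b_n = v_n = 1.0 - 1.0 / n
        r_n = 1.0 / (n ** 2 + 1) if with_errors else 0.0
        y = a_n * T(u_n * T(y) + v_n * y) + b_n * y + r_n
        out.append(round(y, 4))
    return out

print(run(False))   # [0.5, 0.6667, 0.75, 0.8, 0.8333] -> tends to 1
print(run(True))    # 1.0 at the first step, then values staying close to 1 as r_n decays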
A comparison of the convergence rates between the generalized Ishikawa iteration, with and without errors, is shown in Figure 2.
The comparison of the two figures indicates that the generalized Ishikawa iteration with errors proposed in this study exhibits faster convergence and higher accuracy than both the generalized KM iteration and the plain generalized Ishikawa iteration. The error term acts as a regulating factor that improves the efficiency of convergence.
We now present a further example addressing convergence for a rotation mapping. For certain special mappings, such as small-angle rotations, standard iterative methods exhibit extremely slow convergence or even fail to converge. To tackle this problem, this paper employs the generalized Ishikawa iteration algorithm with an error term: introducing a contractive error term significantly accelerates convergence. Taking a rotation by an angle of $\pi/100$ as an example, we demonstrate the superior performance of the algorithm. The experiment is conducted in $\mathbb{R}^2$, with the mapping T defined as the rotation about the origin by $\pi/100$ radians and the initial point set to $(1, 0)$.
Example 3.
The mapping $T: \mathbb{R}^2 \to \mathbb{R}^2$ is defined as the rotation about the origin by θ radians: $T(x) = R(\theta)x$, where the rotation matrix is $R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ and the rotation angle is $\theta = \pi/100$ (approximately $1.8^{\circ}$). T is a nonexpansive mapping with the origin $0 = (0, 0)$ as its unique fixed point.
For this problem, the generalized KM iteration algorithm cannot be directly applied because it is a single-step iteration (using T only once), whereas the generalized Ishikawa iteration is a two-step iteration (using T twice, i.e., a nested T). For rotation mappings, which exhibit strong directional behavior, algorithms with a two-layer structure can better adjust the direction through the construction of intermediate points, potentially leading to faster convergence. Therefore, in addressing this issue, we directly compare the performance of the generalized Ishikawa iteration algorithm and its variant incorporating an error term.
First, consider the generalized Ishikawa iteration:
$y_{n+1} = a_n T(u_n T y_n + v_n y_n) + b_n y_n, \quad n = 0, 1, 2, \ldots,$
where
$a_n = \frac{\epsilon_a}{(n+1)^{\alpha_a}}, \qquad b_n = 1 - a_n,$
$u_n = u_{\mathrm{const}} - \frac{\epsilon_u}{(n+1)^{\alpha_u}}, \qquad v_n = 1 - u_n,$
$\epsilon_a = 0.50, \quad \alpha_a = 1.05, \quad u_{\mathrm{const}} = 0.50, \quad \epsilon_u = 0.10, \quad \alpha_u = 0.5.$
Next, consider the generalized Ishikawa iteration with errors:
$y_{n+1} = a_n T(u_n T y_n + v_n y_n + e_n) + b_n y_n + r_n, \quad n = 0, 1, 2, \ldots,$
where the parameters a n ,   b n ,   u n ,   v n are the same as those in the generalized Ishikawa iteration.
$r_n = -c \cdot \frac{1}{(n+1)^{p}}\, y_n, \qquad c = 2.0, \quad p = 1.1, \qquad e_n = 0.$
Observe that the selections of a n , b n , u n , v n , e n , and r n adhere to conditions (i)–(iv) in Theorem 1.
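The rotation experiment itself fits in a few lines of Python; the sketch below (ours) mirrors the parameter choices above and reports the distance to the fixed point after the run.

import numpy as np

theta = np.pi / 100
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x                          # rotation about the origin; unique fixed point (0, 0)

eps_a, alpha_a = 0.50, 1.05
u_const, eps_u, alpha_u = 0.50, 0.10, 0.5
c, p = 2.0, 1.1

def run(with_error, steps=200):
    y = np.array([1.0, 0.0])
    for n in range(steps):
        a_n = eps_a / (n + 1) ** alpha_a
        b_n = 1.0 - a_n
        u_n = u_const - eps_u / (n + 1) ** alpha_u
        v_n = 1.0 - u_n
        r_n = -c * y / (n + 1) ** p if with_error else np.zeros(2)
        y = a_n * T(u_n * T(y) + v_n * y) + b_n * y + r_n
    return np.linalg.norm(y)

print(run(False))   # stays close to 1: the error-free iterates circle the origin very slowly
print(run(True))    # tiny: the error term contracts the iterates toward the fixed point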
For the rotation mapping problem, the comparison of convergence for the generalized Ishikawa iteration with and without errors is shown in Figure 3.
From Figure 3, it can be observed that the trajectory plot of the error-free generalized Ishikawa iteration shows the iteration points slowly circling on a circle, hardly converging to the fixed point. In contrast, the trajectory plot of the generalized Ishikawa iteration with errors shows the iteration points rapidly contracting toward the fixed point. The norm convergence plot displays the distance from the origin over time, while the norm convergence plot (log scale) more clearly illustrates the difference in convergence rates. A zoomed-in view of the last 50 iterations highlights the differences in convergence accuracy. The performance comparison table quantitatively presents various performance metrics, indicating that the generalized Ishikawa iteration with errors can approach the fixed point in as few as five steps, demonstrating excellent performance.

3.2. Application of Generalized Ishikawa Iterative Algorithm with Errors

This section demonstrates the applicability of Theorem 1 to the hyper-generalized proximal point algorithm (HGPPA) for locating zeros of an m-accretive operator, that is, for solving
$0 \in A(x),$
where $A: E \to 2^{E}$ is an m-accretive operator on a uniformly convex Banach space E. The solution set of problem (17), denoted by $\mathrm{zer}(A)$, is assumed to be nonempty. The proximal point algorithm (PPA) for solving (17), originating from the seminal work of Browder [16] and Martinet [18], iteratively generates a sequence $\{y_n\}$ by the scheme
$0 \in \gamma A(y_{n+1}) + y_{n+1} - y_n,$
where the proximal parameter γ is positive. In a real Hilbert space, the resolvent operator is defined as $J_\gamma^A := (I + \gamma A)^{-1}$; consequently, the PPA iteration (18) admits the compact form
$y_{n+1} := J_\gamma^A(y_n).$
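For a concrete feel of the resolvent, consider a standard one-dimensional illustration (ours, not specific to the operators studied here): for $A = \partial|\cdot|$ on $\mathbb{R}$, the resolvent $J_\gamma^A = (I + \gamma A)^{-1}$ is the soft-thresholding map, and the PPA (19) reaches the unique zero of A after finitely many steps.

def resolvent_abs(y, gamma=1.0):
    """J_gamma^A for A = subdifferential of |.| on R (soft-thresholding)."""
    if y > gamma:
        return y - gamma
    if y < -gamma:
        return y + gamma
    return 0.0

y = 5.0
for _ in range(10):
    y = resolvent_abs(y)      # PPA step: y_{n+1} = J_gamma^A(y_n)
print(y)                      # 0.0, the unique zero of A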
The convergence of the PPA (18) was first established in Hilbert spaces by Browder [16]. Kim and Xu [17] later generalized this convergence result to the setting of uniformly convex Banach spaces and m-accretive operators. Specifically, they studied an algorithm of the form
$y_{n+1} := (1 - a_n)y_n + a_n\big(J_\gamma^A(y_n) + e_n\big),$
where $a_n \in (0, 1)$ and $\{e_n\}$ is an error sequence in E. Zhang et al. [7] introduced the following generalized proximal point algorithm (GPPA):
$y_{n+1} := a_n y_n + b_n\big(J_\gamma^A(y_n) + e_n\big), \quad n = 0, 1, 2, \ldots,$
where $a_n$ and $b_n$ are real sequences in $[0,1]$ satisfying $a_n + b_n \le 1$. The GPPA affords greater flexibility in parameter selection and admits a wider application scope than algorithm (20). Zhang et al. also provided a proof of its weak convergence in uniformly convex Banach spaces.
This paper further extends algorithm (21) by considering the hyper-generalized proximal point algorithm (HGPPA):
$y_{n+1} = a_n J_\gamma^A\big(u_n J_\gamma^A y_n + v_n y_n + e_n\big) + b_n y_n + r_n, \quad n = 0, 1, 2, \ldots,$
where $a_n, b_n, u_n, v_n \in [0,1]$, $\{a_n\}$ is consistently greater than zero, $a_n + b_n \le 1$, $u_n + v_n \le 1$, and $\{e_n\}$, $\{r_n\}$ are error sequences. Algorithm (22) can be viewed as a generalization of several earlier proximal point algorithms.
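A sketch of the HGPPA sweep in Python (ours): the resolvent is the soft-thresholding map from the illustration above, restated here for self-containedness, and the coefficient and error choices are assumptions satisfying conditions (i)–(iv) of Theorem 2 below.

def soft_threshold(t, gamma=1.0):
    """Resolvent (I + gamma*A)^(-1) for A = subdifferential of |.| on R."""
    return max(abs(t) - gamma, 0.0) * (1.0 if t >= 0 else -1.0)

def hgppa(J, y0, steps=500):
    """HGPPA (22) with illustrative coefficients and summable error terms."""
    y = float(y0)
    for n in range(1, steps + 1):
        a_n = u_n = 1.0 / (1 + n)      # sum a_n*u_n < inf (iv); sum u_n*v_n = inf (i)
        b_n = v_n = 1.0 - 1.0 / n      # sum (1 - a_n - b_n) = sum (1 - u_n - v_n) < inf (ii)
        e_n = r_n = 1.0 / n ** 2       # summable error terms (iii)
        y = a_n * J(u_n * J(y) + v_n * y + e_n) + b_n * y + r_n
    return y

print(hgppa(soft_threshold, 5.0))      # slowly tends to 0, the unique zero of A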
Next, we proceed to establish the weak convergence of the HGPPA in uniformly convex Banach spaces.
Theorem 2.
Let E be a uniformly convex Banach space such that either its dual $E^*$ satisfies the KK property or E itself satisfies the Opial property. Let A be an m-accretive operator in E with $\mathrm{zer}(A) \ne \emptyset$. Suppose the sequence $\{y_n\}$ is generated by (22), where the parameters $a_n, b_n, u_n, v_n \in [0,1]$ satisfy $a_n + b_n \le 1$ and $u_n + v_n \le 1$, with $\{a_n\}$ consistently greater than zero, and $\{e_n\}$, $\{r_n\}$ are error sequences. The following conditions are assumed to hold:
(i) 
$\sum_{n=0}^{\infty} u_n v_n = \infty$;
(ii) 
$\sum_{n=0}^{\infty}(1 - u_n - v_n) < \infty$, $\sum_{n=0}^{\infty}(1 - a_n - b_n) < \infty$;
(iii) 
$\sum_{n=0}^{\infty}\|e_n\| < \infty$, $\sum_{n=0}^{\infty}\|r_n\| < \infty$;
(iv) 
$\sum_{n=0}^{\infty} a_n u_n < \infty$.
Then the sequence $\{y_n\}$ generated by the iteration (22) converges weakly to a point of $\mathrm{zer}(A)$.
Proof. 
Define the operator $T := J_\gamma^A$ for notational convenience. Note that $\mathrm{Fix}(T) = \mathrm{zer}(A)$; hence the HGPPA (22) can be rewritten in the form (9).
Since A is m-accretive, it follows from Lopez et al. [19] that T is nonexpansive. The application of Theorem 1 to (9) now yields the weak convergence of { y n } to a point in z e r ( A ) . This concludes the proof. □

4. Convergence of Variable Generalized Ishikawa Iterative Algorithm and Its Application

4.1. Weak Convergence of Variable Generalized Ishikawa Iterative Algorithm

This section refines the iterative process of the variable generalized KM algorithm and introduces a new variable generalized Ishikawa iterative algorithm, defined as follows:
$y_{n+1} = a_n T_n(u_n T y_n + v_n y_n) + b_n y_n, \quad n = 0, 1, 2, \ldots.$
We proceed to prove that the variable generalized Ishikawa iteration converges weakly.
Theorem 3.
Let E be a uniformly convex Banach space that satisfies the Opial property. Let D be a nonempty closed convex subset of E, and let $T: D \to D$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. Let $\{T_n\}$ be a sequence of nonexpansive operators on D. The sequence $\{y_n\}$ is generated by (23), where $a_n, b_n, u_n, v_n \in [0,1]$, $\{a_n\}$ is consistently greater than zero, $a_n + b_n \le 1$, and $u_n + v_n \le 1$. The following conditions are assumed to hold:
(i) 
$\sum_{n=0}^{\infty} u_n v_n = \infty$;
(ii) 
$\sum_{n=0}^{\infty} a_n u_n < \infty$;
(iii) 
$\sum_{n=0}^{\infty}(1 - u_n - v_n) < \infty$, $\sum_{n=0}^{\infty}(1 - a_n - b_n) < \infty$;
(iv) 
for every $\rho \ge 0$, $\sum_{n=0}^{\infty} a_n D_\rho(T_n, T) < \infty$, where
$D_\rho(T_n, T) := \sup\{\|T_n y - T y\| : \|y\| \le \rho,\ y \in E\},$
then the sequence $\{y_n\}$ generated by the iteration (23) converges weakly to a fixed point of T.
Proof. 
We structure the proof into four steps for clarity.
Step 1. For $z \in \mathrm{Fix}(T)$, we will show that $\lim_{n\to\infty}\|y_n - z\|$ exists.
y n + 1 z = a n T n ( u n T y n + v n y n ) + b n y n z = a n ( T n ( u n T y n + v n y n ) z ) + b n ( y n z ) ( 1 a n b n ) z a n T n ( u n T y n + v n y n ) z + b n y n z + ( 1 a n b n ) z a n T n ( u n T y n + v n y n ) T n z + a n T n z z + b n y n z + ( 1 a n b n ) z a n u n T y n + v n y n z + a n T n z z + b n y n z + ( 1 a n b n ) z = a n u n ( T y n z ) + v n ( y n z ) ( 1 u n v n ) z + a n T n z z + b n y n z + ( 1 a n b n ) z a n ( u n T y n z + v n y n z ( 1 u n v n ) z ) + a n T n z z + b n y n z + ( 1 a n b n ) z a n y n z + ( 1 u n v n ) a n z + a n T n z z + b n y n z + ( 1 a n b n ) z y n z + a n D z ( T n , T ) + ( 1 u n v n ) a n z + ( 1 a n b n ) z .
It follows from conditions (iii) and (iv) and Lemma 1, applied to (24) with $\alpha_n := \|y_n - z\|$ and $\beta_n := a_n D_{\|z\|}(T_n, T) + a_n(1 - u_n - v_n)\|z\| + (1 - a_n - b_n)\|z\|$, that $\lim_{n\to\infty}\|y_n - z\|$ exists, whence the sequence $\{y_n\}$ is bounded.
Step 2. We now demonstrate that $\liminf_{n\to\infty}\|T y_n - y_n\| = 0$.
Set
$p_n := T_n(u_n T y_n + v_n y_n) - T(u_n T y_n + v_n y_n),$
and
M n : = ( ( 1 u n v n ) 2 a n 2 + ( 1 a n b n ) 2 + 2 ( 1 u n v n ) ( 1 a n b n ) ) z 2 + 2 ( ( 1 u n v n ) a n 2 + ( 1 u n v n ) a n 2 + ( 1 a n b n ) ) y n z · z + 2 ( ( 1 a n b n ) + ( 1 u n v n ) ) a n p n · z + a n 2 p n 2 + 2 a n y n z · p n .
For $z \in \mathrm{Fix}(T)$, we have
y n + 1 z 2 = a n T n ( u n T y n + v n y n ) + b n y n z 2 = a n ( T n ( u n T y n + v n y n ) z ) + b n ( y n z ) ( 1 a n b n ) z 2 = a n ( T ( u n T y n + v n y n ) z ) + a n p n + b n ( y n z ) ( 1 a n b n ) z 2 a n ( T ( u n T y n + v n y n ) z ) + b n ( y n z ) 2 + a n p n ( 1 a n b n ) z 2 + 2 a n ( T ( u n T y n + v n y n ) z ) + b n ( y n z ) · a n p n ( 1 a n b n ) z a n ( T ( u n T y n + v n y n ) z ) 2 + b n ( y n z ) 2 + 2 a n T ( u n T y n + v n y n ) z · b n y n z + a n p n ( 1 a n b n ) z 2 + 2 a n ( T ( u n T y n + v n y n ) z ) + b n ( y n z ) · a n p n ( 1 a n b n ) z a n 2 ( u n ( T y n z ) + v n ( y n z ) 2 + ( 1 u n v n ) 2 z 2 ) + 2 a n 2 y n z · ( 1 u n v n ) z + b n 2 y n z 2 + 2 a n T ( u n T y n + v n y n ) z · b n y n z + a n p n ( 1 a n b n ) z 2 + 2 a n ( T ( u n T y n + v n y n ) z ) + b n ( y n z ) · a n p n ( 1 a n b n ) z a n 2 ( u n ( u n + v n ) T y n z 2 + v n ( u n + v n ) y n z 2 u n v n h ^ ( T y n y n ) ) + b n 2 y n z 2 + 2 a n b n y n z 2 + M n y n z 2 a n 2 u n v n h ^ ( T y n y n ) + M n ,
we know from (25) that
$u_n v_n \hat{h}(\|T y_n - y_n\|) \le \frac{1}{a_n^2}\big(\|y_n - z\|^2 - \|y_{n+1} - z\|^2 + M_n\big).$
It follows from conditions (iii) and (iv) that $\sum_{n=0}^{\infty} M_n < \infty$. This, together with (26), implies that
$\sum_{n=0}^{\infty} u_n v_n \hat{h}(\|T y_n - y_n\|) < \infty.$
In particular, $\lim_{n\to\infty} u_n v_n \hat{h}(\|T y_n - y_n\|) = 0$. By an application of Lemma 2 under condition (i), we obtain
$\liminf_{n\to\infty}\hat{h}(\|T y_n - y_n\|) = 0,$
and hence
$\liminf_{n\to\infty}\|T y_n - y_n\| = 0.$
Step 3. We now prove that $\lim_{n\to\infty}\|T y_n - y_n\| = 0$. To this end, first observe that
y n + 1 y n = a n T n ( u n T y n + v n y n ) + b n y n y n = a n ( T n ( u n T y n + v n y n ) y n ) ( 1 a n b n ) y n a n ( T n ( u n T y n + v n y n ) T n y n + T n y n T y n + T y n y n ) + ( 1 a n b n ) y n a n ( u n T y n + v n y n y n + T n y n T y n + T y n y n ) + ( 1 a n b n ) y n = a n ( u n ( T y n y n ) ( 1 u n v n ) y n ) + a n T n y n T y n + a n T y n y n + ( 1 a n b n ) y n a n u n T y n y n + ( 1 u n v n ) a n y n + a n T n y n T y n + a n T y n y n + ( 1 a n b n ) y n ,
Then we have
T y n + 1 y n + 1 = T y n + 1 a n T n ( u n T y n + v n y n ) b n y n T y n + 1 T y n + a n T n ( u n T y n + v n y n ) T y n + b n T y n y n + ( 1 a n b n ) T y n y n 1 y n + a n ( T n ( u n T y n + v n y n ) T n y n + T n y n T y n ) + b n T y n y n + ( 1 a n b n ) T y n y n 1 y n + a n ( u n ( T y n y n ) ( 1 u n v n ) y n + T n y n T y n ) + b n T y n y n + ( 1 a n b n ) T y n y n 1 y n + a n ( u n T y n y n + ( 1 u n v n ) y n + T n y n T y n ) + b n T y n y n + ( 1 a n b n ) T y n T y n y n + 2 a n ( 1 u n v n ) y n + T n y n T y n ) + 2 a n u n T y n y n + ( 1 a n b n ) ( y n + T y n ) .
We set
$N_n := 2a_n\big((1 - u_n - v_n)\|y_n\| + \|T_n y_n - T y_n\|\big) + 2a_n u_n\|T y_n - y_n\| + (1 - a_n - b_n)\big(\|y_n\| + \|T y_n\|\big).$
It follows from conditions (ii)–(iv) and the boundedness of $\{y_n\}$ that $\sum_{n=0}^{\infty} N_n < \infty$.
Setting $g_n := \|T y_n - y_n\|$ and $k_n := N_n$, Formula (28) can be written as $g_{n+1} \le g_n + k_n$. By applying Lemma 1, we conclude that $\lim_{n\to\infty}\|y_n - T y_n\|$ exists. Since $\liminf_{n\to\infty}\|y_n - T y_n\| = 0$, this yields $\lim_{n\to\infty}\|y_n - T y_n\| = 0$.
Step 4. An application of the demiclosedness principle to the operator $I - T$ yields the inclusion $\omega(y_n) \subset \mathrm{Fix}(T)$, where $\omega(y_n) := \{y : \exists\,\{y_{n_k}\} \subset \{y_n\}\ \text{s.t.}\ y_{n_k} \rightharpoonup y\}$. Therefore, to establish the weak convergence of $\{y_n\}$ to a fixed point of T, it suffices to prove that the set of its weak cluster points $\omega(y_n)$ is a singleton.
E satisfies the Opial’s property. Take y * , y * * ω ( y n ) , and let { y n i } and { y m j } be subsequences of { y n } such that y n i y * and y m j y * * , respectively. If y * y * * , then
lim n y n y * = lim i y n i y * = lim j y m j y * * < lim j y m j y * = lim n y n y * ,
a contradiction. Thus, we have shown that ω ( y n ) is a singleton, which completes the proof. □
By specializing Theorem 3 to a Hilbert space setting, we immediately obtain the following corollary.
Corollary 3.
Let H be a real Hilbert space and C a nonempty closed convex subset of H. Let $\{T_n\}$ be a sequence of nonexpansive operators on C, and let $T: C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. The sequence $\{y_n\}$ is generated by (23) with $a_n, b_n, u_n, v_n \in [0,1]$ satisfying $a_n + b_n \le 1$, $u_n + v_n \le 1$, and $\{a_n\}$ consistently greater than zero. The following conditions are assumed to hold:
(i) 
$\sum_{n=0}^{\infty} u_n v_n = \infty$;
(ii) 
$\sum_{n=0}^{\infty} a_n u_n < \infty$;
(iii) 
$\sum_{n=0}^{\infty}(1 - u_n - v_n) < \infty$, $\sum_{n=0}^{\infty}(1 - a_n - b_n) < \infty$;
(iv) 
for every $\rho \ge 0$, $\sum_{n=0}^{\infty} a_n D_\rho(T_n, T) < \infty$, where
$D_\rho(T_n, T) := \sup\{\|T_n y - T y\| : \|y\| \le \rho,\ y \in H\},$
then the sequence $\{y_n\}$ generated by the iteration (23) converges weakly to a fixed point of T.

4.2. Application of Variable Generalized Ishikawa Iterative Algorithm

We apply the variable generalized Ishikawa iterative algorithm to the split feasibility problem (SFP) in this section, thereby establishing a more general convergence result.
First, let us recall the split feasibility problem [20]: it consists of finding
$x \in C \quad \text{such that} \quad Ax \in Q,$
where C and Q are closed convex subsets of the Hilbert spaces $H_1$ and $H_2$, respectively, and $A: H_1 \to H_2$ is a bounded linear operator.
To solve the SFP (29), Byrne [21,22] proposed the CQ algorithm, which generates the sequence $\{x_n\}$ by
$x_{n+1} := P_C\big(x_n - \gamma A^*(I - P_Q)Ax_n\big), \quad n \ge 0,$
where $P_C$ and $P_Q$ denote the metric projections onto the sets C and Q, $\gamma \in (0, 2/\lambda)$, $\lambda$ is the largest eigenvalue of $A^*A$, and $A^*$ is the adjoint of A.
Assuming that the SFP (29) is solvable, it is straightforward to observe that $x \in C$ is a solution to the SFP (29) if and only if it satisfies the following fixed-point equation:
$x = P_C\big(I - \gamma A^*(I - P_Q)A\big)x, \quad x \in C,$
where $\gamma > 0$ is an arbitrary positive constant. Furthermore, it is shown in [23] that the operator $P_C(I - \gamma A^*(I - P_Q)A)$ defined in (31) is nonexpansive for all sufficiently small $\gamma > 0$. The CQ algorithm (30) can be viewed as a special case of the KM iteration: applying the KM algorithm to the fixed-point operator $P_C(I - \gamma A^*(I - P_Q)A)$ from (31) yields the following iterative scheme for the sequence $\{x_n\}$:
$x_{n+1} := (1 - \alpha_n)x_n + \alpha_n P_C\big(x_n - \gamma A^*(I - P_Q)Ax_n\big).$
If α n satisfies condition (3), then the resulting sequence { x n } from iteration (32) is weakly convergent.
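As a small numerical illustration of the CQ iteration (30)/(32) (ours, on a synthetic solvable instance with box-shaped C and Q, which are assumptions for the demo), the projections reduce to coordinate-wise clipping and γ is chosen in $(0, 2/\lambda)$.

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
P_C = lambda x: np.clip(x, 0.0, 1.0)        # projection onto the box C = [0, 1]^3
P_Q = lambda y: np.clip(y, 0.5, 1.0)        # projection onto the box Q = [0.5, 1]^2

lam = np.linalg.norm(A, 2) ** 2             # largest eigenvalue of A^T A
gamma = 1.0 / lam                           # any gamma in (0, 2/lam)

x = np.array([5.0, -3.0, 2.0])
for n in range(200):
    alpha = 0.5                             # constant relaxation: sum alpha*(1 - alpha) = inf
    grad = A.T @ (A @ x - P_Q(A @ x))       # A*(I - P_Q)A x
    x = (1 - alpha) * x + alpha * P_C(x - gamma * grad)
print(x, A @ x)                             # x is (nearly) in C and A x (nearly) in Q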
Zhao and Yang [24] incorporated perturbations of the closed convex sets C and Q and investigated the following perturbed CQ algorithm:
$x_{n+1} := (1 - \alpha_n)x_n + \alpha_n P_{C_n}\big(x_n - \gamma A^*(I - P_{Q_n})Ax_n\big), \quad n \ge 0,$
where $\{C_n\}$ and $\{Q_n\}$ are sequences of sets that converge to C and Q in the sense of Mosco, $\{\alpha_n\} \subset (0, 1)$, and $\gamma \in (0, 2/\lambda)$ with λ the spectral radius of $A^*A$.
Zhang [8] studied the following generalized perturbation CQ algorithm in Hilbert space:
$x_{n+1} := \alpha_n x_n + \beta_n P_{C_n}\big(x_n - \gamma A^*(I - P_{Q_n})Ax_n\big), \quad n \ge 0,$
where $\{\alpha_n\} \subset [0,1]$, $\{\beta_n\} \subset [0,1]$, and $\alpha_n + \beta_n \le 1$. Here, the ρ-distance between two closed convex subsets $E_1$ and $E_2$ of the Hilbert space is defined as
$d_\rho(E_1, E_2) = \sup\{\|P_{E_1}x - P_{E_2}x\| : \|x\| \le \rho\}.$
The following conditions are assumed to hold:
(a1) $\sum_{n=0}^{\infty}\alpha_n\beta_n = \infty$;
(a2) $\sum_{n=0}^{\infty}(1 - \alpha_n - \beta_n) < \infty$;
(a3) for every $\rho \ge 0$, $\sum_{n=0}^{\infty}\beta_n d_\rho(C_n, C) < \infty$ and $\sum_{n=0}^{\infty}\beta_n d_\rho(Q_n, Q) < \infty$.
Then the sequence { x n } generated by the generalized perturbation CQ algorithm (34) converges weakly to a solution of the SFP (29).
Based on Zhang’s research, we further extend it and consider the following hyper generalized perturbation CQ algorithm in the Hilbert space:
x n + 1 : = α n P C n ( I γ A * ( I P Q n ) A ) ( s n P C ( I γ A * ( I P Q ) A ) x n + t n x n ) + β n x n , n 0 ,
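For orientation, scheme (35) with the unperturbed choice $C_n = C$ and $Q_n = Q$ (so that (a3) holds trivially) can be sketched as follows; the data and coefficient sequences are illustrative assumptions of ours, reusing the synthetic SFP instance from the CQ sketch above.

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
P_C = lambda x: np.clip(x, 0.0, 1.0)                 # C = [0, 1]^3, with C_n = C for all n
P_Q = lambda y: np.clip(y, 0.5, 1.0)                 # Q = [0.5, 1]^2, with Q_n = Q for all n
gamma = 1.0 / np.linalg.norm(A, 2) ** 2
U = lambda x: P_C(x - gamma * (A.T @ (A @ x - P_Q(A @ x))))   # P_C(I - gamma A*(I - P_Q)A)

x = np.array([5.0, -3.0, 2.0])
for n in range(1, 500):
    alpha_n = s_n = 1.0 / (1 + n)                    # (a1) holds; sum alpha_n*s_n < inf
    beta_n = t_n = 1.0 - 1.0 / n                     # (a2): sum (1 - alpha_n - beta_n) < inf
    x = alpha_n * U(s_n * U(x) + t_n * x) + beta_n * x   # one sweep of (35) with C_n = C, Q_n = Q
print(x, A @ x)                                      # slowly approaches a solution of the SFP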
Next, we will prove the convergence of the hyper generalized perturbation projection algorithm.
Theorem 4.
Let H be a real Hilbert space and C a nonempty closed convex subset of H. The sequence $\{x_n\}$ is generated by the iterative scheme (35), where $\alpha_n, \beta_n, s_n, t_n \in [0,1]$, $\{\alpha_n\}$ is consistently greater than zero, $\alpha_n + \beta_n \le 1$, and $s_n + t_n \le 1$. If conditions (a1)–(a3) hold, then the sequence $\{x_n\}$ generated by the iteration (35) converges weakly to a solution of the SFP (29).
Proof. 
Set $T_n := P_{C_n}\big(I - \gamma A^*(I - P_{Q_n})A\big)$ and $T := P_C\big(I - \gamma A^*(I - P_Q)A\big)$; then both $T_n$ and T are nonexpansive. Since the solution set of the SFP (29) is nonempty, $\mathrm{Fix}(T) \ne \emptyset$. Noting that $\mathrm{Fix}(T)$ coincides with the solution set of the SFP (29), the hyper-generalized perturbation CQ algorithm (35) can be written as
$x_{n+1} = \alpha_n T_n(s_n T x_n + t_n x_n) + \beta_n x_n, \quad n = 0, 1, 2, \ldots.$
For $\rho > 0$, set
$\bar{\rho} = \sup\big\{\max\{\|Ax\|,\ \|x - \gamma A^*(I - P_Q)Ax\|\} : \|x\| \le \rho\big\} < \infty;$
then, for $x \in H$ with $\|x\| \le \rho$, we estimate $\|T_n x - T x\|$:
$\begin{aligned} \|T_n x - T x\| &\le \|P_{C_n}(I - \gamma A^*(I - P_{Q_n})A)x - P_{C_n}(I - \gamma A^*(I - P_Q)A)x\| \\ &\qquad + \|P_{C_n}(I - \gamma A^*(I - P_Q)A)x - P_C(I - \gamma A^*(I - P_Q)A)x\| \\ &\le \|P_{C_n}(I - \gamma A^*(I - P_Q)A)x - P_C(I - \gamma A^*(I - P_Q)A)x\| + \gamma\|A^*(P_{Q_n}Ax - P_Q Ax)\| \\ &\le d_{\bar\rho}(C_n, C) + \gamma\|A\|\, d_{\bar\rho}(Q_n, Q), \end{aligned}$
that is, $D_\rho(T_n, T) \le d_{\bar\rho}(C_n, C) + \gamma\|A\|\, d_{\bar\rho}(Q_n, Q)$. From assumption (a3), we conclude that
$\sum_{n=0}^{\infty}\beta_n D_\rho(T_n, T) \le \sum_{n=0}^{\infty}\beta_n d_{\bar\rho}(C_n, C) + \gamma\|A\|\sum_{n=0}^{\infty}\beta_n d_{\bar\rho}(Q_n, Q) < \infty.$
By Corollary 3, the sequence { x n } converges weakly to a solution of SFP (29). The proof is therefore complete. □

5. Conclusions

This work has established weak convergence theorems for the generalized Ishikawa iteration with errors and the variable generalized Ishikawa iteration in uniformly convex Banach spaces, generalizing prior work by Zhang [8] and Wang [11]. Furthermore, the applicability of these findings has been demonstrated through convergence analyses of the HGPPA and a perturbed CQ algorithm. Under specific regularity conditions, these weak convergence results can be strengthened to strong convergence in Hilbert spaces. A natural open question is whether such strong convergence remains valid in uniformly convex Banach spaces, which we leave as a direction for future research.

Author Contributions

Conceptualization, L.Y., Y.Z. and W.Z.; methodology, L.Y., Y.Z. and W.Z.; software, L.Y., Y.Z. and W.Z.; validation, L.Y., Y.Z. and W.Z.; formal analysis, L.Y., Y.Z. and W.Z.; writing—original draft preparation, L.Y., Y.Z. and W.Z.; writing—review and editing, L.Y., Y.Z. and W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was completed with the support of the Basic Research Funding for Young Researchers Project of Heilongjiang Province (No. 1453QN022).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors are very thankful to the referees for their valuable and helpful comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Browder, F.E. Nonexpansive nonlinear operators in a Banach space. Proc. Natl. Acad. Sci. USA 1965, 54, 1041–1044.
  2. Kirk, W.A. A fixed point theorem for mappings which do not increase distances. Am. Math. Mon. 1965, 72, 1004–1006.
  3. Krasnosel'skiĭ, M.A. Two remarks on the method of successive approximations. Uspekhi Mat. Nauk 1955, 10, 123–127.
  4. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
  5. Reich, S. Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67, 274–276.
  6. Kanzow, C.; Shehu, Y. Generalized Krasnoselskii–Mann-type iterations for nonexpansive mappings in Hilbert spaces. Comput. Optim. Appl. 2017, 67, 595–620.
  7. Zhang, Y.C.; Guo, K.; Wang, T. Generalized Krasnoselskii–Mann-type iteration for nonexpansive mappings in Banach spaces. J. Oper. Res. Soc. China 2021, 9, 195–206.
  8. Zhang, Y.C. Generalized KM Iterative Algorithm and Its Applications in Zero Point Problems and Split Feasibility Problems. Master's Thesis, China West Normal University, Nanchong, China, 2020.
  9. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150.
  10. Tan, K.K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301.
  11. Wang, T.; Guo, K.; Zhao, S.L. A class of generalized Ishikawa iterations in Hilbert spaces and its application to variational inequalities. J. Xihua Norm. Univ. Nat. Sci. Ed. 2018, 33, 153–160.
  12. Kondo, A. Iterative scheme generating method beyond Ishikawa iterative method. Math. Ann. 2024, 391, 1–22.
  13. Tomar, A.; Alam, H.K.; Sajid, M.; Rohen, Y.; Singh, S.S. Fibonacci–Ishikawa iterative method in modular spaces for asymptotically nonexpansive monotonic mathematical operators. J. Inequal. Appl. 2025, 2025, 126.
  14. Pragadeeswarar, V.; Gopi, R.; Park, C.; Lee, J.R. Convergence of fixed points for relatively nonexpansive mappings via Ishikawa iteration. Int. J. Appl. Comput. Math. 2025, 11, 129.
  15. Liu, X.; Song, X.; Chen, L.; Zhao, Y. Distributed Ishikawa algorithms for seeking the fixed points of multi-agent global operators over time-varying communication graphs. J. Comput. Appl. Math. 2025, 457, 116250.
  16. Browder, F.E. Convergence theorems for sequences of nonlinear operators in Banach spaces. Math. Z. 1967, 100, 201–225.
  17. Kim, T.H.; Xu, H.K. Robustness of Mann's algorithm for nonexpansive mappings. J. Math. Anal. Appl. 2007, 327, 1105–1115.
  18. Martinet, B. Régularisation d'inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Opér. 1970, 4, 154–159.
  19. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Forward–backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, 2012, 1–25.
  20. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  21. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441.
  22. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2003, 20, 103–120.
  23. Xu, H.K. A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034.
  24. Zhao, J.; Yang, Q. Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21, 1791–1799.
Figure 1. Comparison diagram of convergence rate between generalized Ishikawa iteration with errors and generalized KM iteration.
Figure 2. Comparison diagram of convergence rate of generalized Ishikawa iteration with errors and generalized Ishikawa iteration.
Figure 3. Comparison diagram of convergence rate of generalized Ishikawa iteration with errors and generalized Ishikawa iteration (Rotation Mapping Problem).