Article

Krasnosel’skiǐ–Mann-Type Subgradient Extragradient Algorithms for Variational Inequality and Hierarchical Fixed-Point Problems

1
Department of Mathematics, King Abdulaziz University, Jeddah 22254, Saudi Arabia
2
Department of Mathematics, Central University of Kashmir, Ganderbal 191131, Jammu and Kashmir, India
3
Department of Mathematics, College of Science, Qassim University, Saudi Arabia
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(23), 3740; https://doi.org/10.3390/math13233740
Submission received: 3 October 2025 / Revised: 11 November 2025 / Accepted: 18 November 2025 / Published: 21 November 2025
(This article belongs to the Special Issue Functional Analysis and Mathematical Optimization)

Abstract

In this work, we present a Krasnosel’skiǐ–Mann-type subgradient extragradient algorithm to solve variational inequalities and hierarchical fixed-point problems for nonexpansive and quasi-nonexpansive mappings in Hilbert spaces. We establish weak convergence of the generated sequences to a common solution and derive several related results. The algorithm is validated through numerical examples, and several applications are discussed to demonstrate the method’s applicability. The proposed approach extends and unifies existing methods and findings in this field.

1. Introduction

We work within a real Hilbert space $H$, with its associated inner product $\langle \cdot , \cdot \rangle$ and norm $\| \cdot \|$. The focus is on a nonempty, closed, and convex subset $C$ of $H$. A mapping $U : C \to C$ is nonexpansive if $\|Uz_1 - Uz_2\| \le \|z_1 - z_2\|$ for all $z_1, z_2 \in C$. Its fixed-point set $F(U) := \{ z \in C : Uz = z \}$ is closed and convex whenever nonempty. For a proper function $g : H \to (-\infty, +\infty]$, the subdifferential at $w \in H$ is $\partial g(w) = \{ z_1 \in H : \langle z_2 - w, z_1 \rangle + g(w) \le g(z_2), \ \forall z_2 \in H \}$. We say $g$ is subdifferentiable at $w$ if $\partial g(w) \ne \emptyset$. The indicator function $\psi_C : H \to (-\infty, +\infty]$ is defined as
$\psi_C(w) = \begin{cases} 0, & w \in C, \\ +\infty, & \text{otherwise}. \end{cases}$
Note that ψ C is a convex function when C is a convex set.
In 2006, Moudafi et al. [1] examined the convergence of a scheme for a hierarchical fixed-point problem (in short, HFPP): Find w ¯ F ( U ) such that
$\langle \bar w - V \bar w, \bar w - w \rangle \le 0, \quad \forall w \in F(U),$
where $U, V : C \to C$ are nonexpansive. Let $\Phi$ denote the set of solutions of HFPP (1). If $\bar w \in F(U)$, then $(1) \Leftrightarrow \langle (I - V)\bar w, w - \bar w \rangle + \psi_{F(U)}(\bar w) \le \psi_{F(U)}(w)$ for all $w \in H$ $\Leftrightarrow -(I - V)\bar w \in \partial\psi_{F(U)}(\bar w)$. Hence, HFPP (1) can be reformulated as a variational inclusion: find $\bar w \in F(U)$ such that
$0 \in (I - V)\bar w + N_{F(U)}(\bar w),$
where $N_{F(U)}(\bar w)$ denotes the normal cone to $F(U)$ at $\bar w$, given by
$N_{F(U)}(\bar w) = \partial\psi_{F(U)}(\bar w) = \begin{cases} \{ u \in H : \langle z - \bar w, u \rangle \le 0, \ \forall z \in F(U) \}, & \text{if } \bar w \in F(U), \\ \emptyset, & \text{otherwise}. \end{cases}$
If we set V = I , then Φ is just F ( U ) . Moreover, it is noted that HFPP (1) is of significant interest since it encompasses, as particular cases, several well-known problems, including variational inequalities over fixed-point sets, hierarchical minimization problems, and related models (see Moudafi [2]).
Moudafi [2], in 2007, introduced a Krasnoselskii–Mann-type iteration for HFPP (1): given $w_0 \in C$,
$w_{m+1} = (1 - \zeta_m) w_m + \zeta_m \big(\sigma_m V w_m + (1 - \sigma_m) U w_m\big), \quad m \ge 0,$
with $\{\zeta_m\}, \{\sigma_m\} \subset (0, 1)$. This scheme unifies many fixed-point methods applied in areas such as signal processing and image reconstruction, and its convergence theory provides a common framework for analyzing related algorithms (see [1,3,4,5,6,7,8]).
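Readers who wish to experiment with iteration (3) can use the following minimal Python/NumPy sketch (the mappings U and V, the parameter sequences, and the iteration count are placeholders to be supplied by the user; the numerical work reported later in this paper was carried out in MATLAB, so this listing is purely illustrative).

import numpy as np

def km_hierarchical(w0, U, V, zeta, sigma, num_iter=500):
    # Moudafi's Krasnoselskii-Mann-type iteration (3) for HFPP (1):
    # w_{m+1} = (1 - zeta_m) w_m + zeta_m (sigma_m V(w_m) + (1 - sigma_m) U(w_m)).
    w = np.asarray(w0, dtype=float)
    for m in range(num_iter):
        inner = sigma(m) * V(w) + (1.0 - sigma(m)) * U(w)
        w = (1.0 - zeta(m)) * w + zeta(m) * inner
    return w

# Illustrative call with simple linear mappings (hypothetical choices):
# km_hierarchical(np.array([1.0, -1.0]), U=lambda x: x, V=lambda x: 0.5 * x,
#                 zeta=lambda m: 0.5, sigma=lambda m: 1.0 / (m + 2))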
This work addresses the variational inequality (VI) defined by finding $\bar w \in C$ such that
$\langle A(\bar w), u - \bar w \rangle \ge 0, \quad \forall u \in C,$
introduced in [9], where $A : H \to H$; the solution set is denoted by Sol(VI (4)). A standard approach for solving VI (4) is the projected gradient method:
$w_{m+1} = P_C(w_m - \mu A w_m),$
with $\mu > 0$, where $P_C$ is the metric projection onto $C$. This method requires $A$ to be inverse strongly (or strongly) monotone, which can be restrictive. To relax this, Korpelevich [10] introduced the extragradient iteration
$v_m = P_C(w_m - \mu A w_m), \qquad w_{m+1} = P_C(w_m - \mu A v_m).$
Many subsequent works have proposed enhancements to this scheme; see, e.g., [11,12,13,14,15].
A key limitation of the extragradient method is the double projection onto C required in each step, which is often computationally intensive. This issue was addressed by Censor et al. [13,16,17] via the subgradient extragradient method. Their approach, designed for VI (4) and subsequently extended to equilibrium problems [18], reduces the computational load by calculating the second projection onto a half-space instead of the original set C .
$v_m = P_C(w_m - \mu A w_m), \qquad w_{m+1} = P_{T_m}(w_m - \mu A v_m),$ where $T_m = \{ w \in H : \langle w_m - \mu A w_m - v_m, w - v_m \rangle \le 0 \}.$
The sequence $\{w_m\}$ generated in this way converges weakly to a point of Sol(VI (4)).
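Computationally, the advantage is that projecting onto the half-space $T_m$ has a simple closed form, unlike the projection onto a general convex set $C$. The following is a minimal Python/NumPy sketch of this projection (the function name and signature are ours, for illustration only).

import numpy as np

def project_halfspace(x, a, v):
    # Project x onto the half-space T = {w : <a, w - v> <= 0}.
    # a is the normal vector; if x already lies in T it is returned unchanged,
    # otherwise it is shifted along a onto the bounding hyperplane.
    t = float(a @ (x - v))
    if t <= 0.0 or not a.any():
        return x
    return x - (t / float(a @ a)) * a

In the subgradient extragradient step one takes $a = w_m - \mu A w_m - v_m$ and $v = v_m$.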
The problem of determining an element belonging to $F(U) \cap \mathrm{Sol(VI (4))}$ was addressed by Takahashi and Toyoda [19], who developed the following iterative procedure:
$v_m = P_C(w_m - \lambda_m A w_m), \qquad w_{m+1} = (1 - \alpha_m) w_m + \alpha_m U v_m.$
Under suitable conditions, weak convergence of the iterates to a solution was guaranteed.
The preceding algorithm can fail when the operator A is merely monotone and L -Lipschitz-continuous. To overcome this limitation, Nadezhkina and Takahashi [20] incorporated Korpelevich’s extragradient technique, introducing the following scheme:
$v_m = P_C(w_m - \lambda_m A w_m), \qquad w_{m+1} = (1 - \alpha_m) w_m + \alpha_m U P_C(w_m - \lambda_m A v_m),$
with $\lambda_m \in \left(0, \frac{1}{L}\right)$ and $\alpha_m \in (0, 1)$. The sequence $\{w_m\}$ converges weakly to a point $\bar w \in F(U) \cap \mathrm{Sol(VI (4))}$.
Drawing inspiration from the preceding discussion, we present a Krasnosel’skiǐ–Mann iterative scheme that integrates the ideas from (3), (7), and (9) to approximate a common solution of HFPP (1) and VI (4). We establish a weak convergence result for this method and provide an illustrative example. This approach generalizes and unifies several existing results [2,13,16,17,20].

2. Preliminaries

Throughout the paper, we denote the strong and weak convergence of a sequence $\{x_n\}$ to a point $x \in H$ by $x_n \to x$ and $x_n \rightharpoonup x$, respectively. $\omega_w\{x_n\}$ denotes the set of all weak limits of $\{x_n\}$. Let us recall the following concepts, which are in common use in convex and nonlinear analysis.
It is well known that a real Hilbert space H satisfies the following:
(i)
The identity
$\|\lambda x + (1 - \lambda) y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2,$
for all $x, y \in H$ and $\lambda \in [0, 1]$;
(ii)
Opial's condition; i.e., for any sequence $\{x_n\}$ in $H$ such that $x_n \rightharpoonup x$ for some $x \in H$, the inequality
$\liminf_{n\to\infty}\|x_n - x\| < \liminf_{n\to\infty}\|x_n - y\|$
holds for all $y \ne x$.
For every point $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that
$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$
The mapping P C is called the metric projection of H onto C .
It is well known (see [21] for details) that P C is nonexpansive and satisfies
$\langle x - y, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2, \quad \forall x, y \in H.$
Moreover, $P_C x$ is characterized by the fact that $P_C x \in C$ and
$\langle x - P_C x, y - P_C x \rangle \le 0,$
and
$\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2, \quad \forall x \in H, \ y \in C.$
Definition 1
([22]). An operator $M : H \to 2^H$ is said to be
(i)
Monotone if
$\langle u - v, x - y \rangle \ge 0, \quad \text{whenever } u \in M(x), \ v \in M(y);$
(ii)
Maximal monotone if M is monotone and the graph, graph ( M ) : = { ( x , y ) H × H : y M ( x ) } , is not properly contained in the graph of any other monotone operator.
Lemma 1
([3,23]). Let $U : H \to H$. Then,
(i)
U is called L -Lipschitz-continuous with L > 0 if
$\|Ux - Uy\| \le L\|x - y\|, \quad \forall x, y \in H;$
(ii)
U is called quasi-nonexpansive if
$\|Ux - z\| \le \|x - z\|, \quad \forall z \in F(U), \ x \in H;$
(iii)
If $U$ is nonexpansive, then $I - U$ is a maximal monotone mapping on $H$;
(iv)
If $U$ is nonexpansive, then $U$ is demiclosed on $H$ in the sense that, if $\{x_n\}$ converges weakly to $x \in H$ and $\{x_n - U x_n\}$ converges strongly to 0, then $x \in F(U)$.
Lemma 2
([24]). Let C be a nonempty subset of H and { x k } be a sequence in H such that the following conditions hold:
(i)
For every $x \in C$, $\lim_{k\to\infty}\|x_k - x\|$ exists;
(ii)
Every sequential weak cluster point of { x k } is in C .
Then, { x k } converges weakly to a point in C .

3. Krasnosel’skiǐ–Mann-Type Subgradient Extragradient Algorithm

Our contribution is a Krasnosel’skiǐ–Mann-type subgradient extragradient algorithm designed to solve HFPP (1) in conjunction with VI (4).
Lemma 3.
Algorithm 1 generates a sequence $\{s_m\}$ that decreases monotonically and is bounded below by $\min\left\{\frac{\mu}{L}, s_0\right\}$.
Algorithm 1 Iterative scheme
 
Initialization: Given $s_0 > 0$ and $\mu \in (0, 1)$, select an arbitrary starting point $w_0 \in H$ and set $m = 0$.
 
Iterative Steps: For the current point w m :
 
Step 1. Calculate
$v_m = P_C(w_m - s_m A w_m),$
$u_m = P_{T_m}(w_m - s_m A v_m),$ where $T_m = \{ w \in H : \langle w_m - s_m A w_m - v_m, w - v_m \rangle \le 0 \},$
$w_{m+1} = (1 - \alpha_m) w_m + \alpha_m U\big(\sigma_m V u_m + (1 - \sigma_m) u_m\big),$ and
$s_{m+1} = \begin{cases} \min\left\{ \dfrac{\mu\|w_m - v_m\|}{\|A w_m - A v_m\|}, s_m \right\}, & \text{if } A w_m - A v_m \ne 0, \\ s_m, & \text{otherwise}, \end{cases}$
where $\{\alpha_m\}, \{\sigma_m\} \subset [a, b] \subset (0, 1)$ for all $m \ge 0$.
 
Step 2. Update $m \leftarrow m + 1$ and proceed to Step 1.
Proof. 
It is clear from the update rule that $\{s_m\}$ forms a monotonically decreasing sequence. Since $A$ is a Lipschitz-continuous mapping with constant $L$, for $A v_m - A w_m \ne 0$ we have
$\dfrac{\mu\|w_m - v_m\|}{\|A w_m - A v_m\|} \ge \dfrac{\mu\|w_m - v_m\|}{L\|w_m - v_m\|} = \dfrac{\mu}{L}.$
Clearly, $\{s_m\}$ admits the lower bound $\min\left\{\frac{\mu}{L}, s_0\right\}$.    □
Remark 1.
By Lemma 3, the limit $\lambda = \lim_{m\to\infty} s_m$ exists, and it follows that $\lambda \ge \min\left\{\frac{\mu}{L}, s_0\right\} > 0$.
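Before turning to the convergence analysis, we note that Algorithm 1 is straightforward to implement. The following is a minimal Python/NumPy sketch (illustrative only; the authors' experiments in Section 4 use MATLAB, the function and argument names here are our own, the parameters $\alpha_m$ and $\sigma_m$ are taken constant, and the operator $A$, the mappings $U$, $V$, and the projection $P_C$ must be supplied by the caller). The projection onto the half-space $T_m$ is computed in closed form.

import numpy as np

def algorithm1(w0, A, U, V, P_C, s0=1.0, mu=0.9, alpha=0.5, sigma=0.5,
               tol=1e-12, max_iter=10000):
    # Sketch of Algorithm 1: subgradient extragradient step, KM-type relaxation,
    # and the adaptive step-size rule for s_{m+1}.
    w, s = np.asarray(w0, dtype=float), float(s0)
    for _ in range(max_iter):
        v = P_C(w - s * A(w))
        a = w - s * A(w) - v                      # normal vector of the half-space T_m
        x = w - s * A(v)
        t = float(a @ (x - v))
        u = x if (t <= 0.0 or not a.any()) else x - (t / float(a @ a)) * a
        w_next = (1 - alpha) * w + alpha * U(sigma * V(u) + (1 - sigma) * u)
        dA = np.linalg.norm(A(w) - A(v))
        if dA > 0.0:                              # adaptive step-size update
            s = min(mu * np.linalg.norm(w - v) / dA, s)
        if np.linalg.norm(w_next - w) < tol:
            return w_next
        w = w_next
    return w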
Theorem 1.
Let $A : H \to H$ be monotone and $L$-Lipschitz-continuous ($L$ unknown), $U : C \to C$ be nonexpansive, and $V : C \to C$ be continuous quasi-nonexpansive with $I - V$ monotone. Assume that $\Gamma = \mathrm{Sol(VI (4))} \cap \Phi \cap F(V) \ne \emptyset$. Then, the sequences $\{w_m\}$ and $\{u_m\}$ produced by Algorithm 1 converge weakly to $\bar w \in \Gamma$, where $\bar w = P_\Gamma x_0$.
Proof. 
For convenience, we divide the proof into following steps:
Step I. Let $\{u_m\}$ be the sequence produced by Algorithm 1 and let $p \in \Gamma$. Then,
$\|u_m - p\|^2 \le \|w_m - p\|^2 - \left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|v_m - w_m\|^2 - \left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|u_m - v_m\|^2.$
Proof of Step I. Since $p \in \mathrm{Sol(VI (4))} \subset C \subset T_m$, we have
$\|u_m - p\|^2 = \|P_{T_m}(w_m - s_m A v_m) - P_{T_m} p\|^2 \le \langle u_m - p, w_m - s_m A v_m - p \rangle$
$= \tfrac12\|u_m - p\|^2 + \tfrac12\|w_m - s_m A v_m - p\|^2 - \tfrac12\|u_m - w_m + s_m A v_m\|^2$
$= \tfrac12\|u_m - p\|^2 + \tfrac12\|w_m - p\|^2 + \tfrac12 s_m^2\|A v_m\|^2 - \langle w_m - p, s_m A v_m \rangle - \tfrac12\|u_m - w_m\|^2 - \tfrac12 s_m^2\|A v_m\|^2 - \langle u_m - w_m, s_m A v_m \rangle$
$= \tfrac12\|u_m - p\|^2 + \tfrac12\|w_m - p\|^2 - \tfrac12\|u_m - w_m\|^2 - \langle u_m - p, s_m A v_m \rangle.$
This implies that
$\|u_m - p\|^2 \le \|w_m - p\|^2 - \|u_m - w_m\|^2 - 2\langle u_m - p, s_m A v_m \rangle. \quad (14)$
Since $A$ is monotone, we have $2 s_m \langle A v_m - A p, v_m - p \rangle \ge 0$. Thus, by combining it with (14), we get
$\|u_m - p\|^2 \le \|w_m - p\|^2 - \|u_m - w_m\|^2 - 2\langle u_m - p, s_m A v_m \rangle + 2 s_m \langle A v_m - A p, v_m - p \rangle$
$= \|w_m - p\|^2 - \|u_m - w_m\|^2 + 2\langle v_m - u_m, s_m A v_m \rangle - 2 s_m \langle A p, v_m - p \rangle$
$= \|w_m - p\|^2 - \|u_m - w_m\|^2 + 2 s_m \langle v_m - u_m, A v_m - A w_m \rangle + 2 s_m \langle A w_m, v_m - u_m \rangle - 2 s_m \langle A p, v_m - p \rangle. \quad (15)$
Next, we estimate
$2 s_m \langle v_m - u_m, A v_m - A w_m \rangle \le 2 s_m \|A v_m - A w_m\|\,\|v_m - u_m\| \le \dfrac{2\mu s_m}{s_{m+1}}\|w_m - v_m\|\,\|v_m - u_m\| \le \dfrac{\mu s_m}{s_{m+1}}\|w_m - v_m\|^2 + \dfrac{\mu s_m}{s_{m+1}}\|v_m - u_m\|^2. \quad (16)$
Since $v_m = P_C(w_m - s_m A w_m)$ and $u_m \in T_m$, we have $\langle w_m - s_m A w_m - v_m, u_m - v_m \rangle \le 0$. This implies that
$2 s_m \langle A w_m, v_m - u_m \rangle \le 2\langle v_m - w_m, u_m - v_m \rangle = \|u_m - w_m\|^2 - \|v_m - w_m\|^2 - \|u_m - v_m\|^2. \quad (17)$
Since $p \in \mathrm{Sol(VI (4))}$, we have $\langle A p, v_m - p \rangle \ge 0$, and using (16) and (17) in (15), we get
$\|u_m - p\|^2 \le \|w_m - p\|^2 - \left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|v_m - w_m\|^2 - \left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|u_m - v_m\|^2. \quad (18)$
Step II. The sequences { w m } , { v m } , and { u m } generated by Algorithm 1 are bounded.
Proof of Step II. Let $z_m = (1 - \sigma_m) u_m + \sigma_m V u_m$; then
$\|z_m - p\|^2 = \|(1 - \sigma_m)(u_m - p) + \sigma_m(V u_m - p)\|^2$
$= (1 - \sigma_m)\|u_m - p\|^2 + \sigma_m\|V u_m - p\|^2 - \sigma_m(1 - \sigma_m)\|V u_m - u_m\|^2$
$\le \|u_m - p\|^2 - \sigma_m(1 - \sigma_m)\|V u_m - u_m\|^2 \quad (19)$
$\le \|u_m - p\|^2. \quad (20)$
Hence, it follows from (18) and (19) that
$\|z_m - p\|^2 \le \|w_m - p\|^2 - \left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|v_m - w_m\|^2 - \left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|u_m - v_m\|^2 - \sigma_m(1 - \sigma_m)\|V u_m - u_m\|^2. \quad (21)$
It follows from Lemma 3 that
$\lim_{m\to\infty}\left(1 - \dfrac{\mu s_m}{s_{m+1}}\right) = 1 - \mu > 0,$
which yields that there exists $m_0 \in \mathbb{N}$ with $1 - \dfrac{\mu s_m}{s_{m+1}} > 0$ for all $m \ge m_0$. Hence, by (21), we get
$\|z_m - p\|^2 \le \|w_m - p\|^2. \quad (22)$
It follows from Algorithm 1 and (22) that
$\|w_{m+1} - p\|^2 = \|(1 - \alpha_m)(w_m - p) + \alpha_m(U z_m - p)\|^2$
$= (1 - \alpha_m)\|w_m - p\|^2 + \alpha_m\|U z_m - p\|^2 - \alpha_m(1 - \alpha_m)\|U z_m - w_m\|^2$
$\le (1 - \alpha_m)\|w_m - p\|^2 + \alpha_m\|z_m - p\|^2 - \alpha_m(1 - \alpha_m)\|U z_m - w_m\|^2$
$\le (1 - \alpha_m)\|w_m - p\|^2 + \alpha_m\|w_m - p\|^2 - \alpha_m(1 - \alpha_m)\|U z_m - w_m\|^2$
$= \|w_m - p\|^2 - \alpha_m(1 - \alpha_m)\|U z_m - w_m\|^2 \le \|w_m - p\|^2.$
This implies that lim m w m p exists and is finite. Therefore, { w m } is bounded, and consequently, we deduce that { z m } , { v m } , and { u m } are bounded.
Step III. $\bar w \in F(U) \cap F(V)$, where $\bar w$ is any weak cluster point of $\{w_m\}$.
Proof of Step III. Since $p \in \Gamma$, we have from Algorithm 1 and (21) that
$\|w_{m+1} - p\|^2 = \|(1 - \alpha_m)(w_m - p) + \alpha_m(U z_m - p)\|^2$
$\le (1 - \alpha_m)\|w_m - p\|^2 + \alpha_m\|z_m - p\|^2 - \alpha_m(1 - \alpha_m)\|U z_m - w_m\|^2$
$\le (1 - \alpha_m)\|w_m - p\|^2 + \alpha_m\left[\|w_m - p\|^2 - \left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|v_m - w_m\|^2 - \left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|u_m - v_m\|^2 - \sigma_m(1 - \sigma_m)\|V u_m - u_m\|^2\right] - \alpha_m(1 - \alpha_m)\|U z_m - w_m\|^2$
$= \|w_m - p\|^2 - \alpha_m\left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|v_m - w_m\|^2 - \alpha_m\left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|u_m - v_m\|^2 - \alpha_m\sigma_m(1 - \sigma_m)\|V u_m - u_m\|^2 - \alpha_m(1 - \alpha_m)\|U z_m - w_m\|^2.$
This implies that
$\alpha_m\left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|v_m - w_m\|^2 + \alpha_m\left(1 - \dfrac{\mu s_m}{s_{m+1}}\right)\|u_m - v_m\|^2 + \alpha_m\sigma_m(1 - \sigma_m)\|V u_m - u_m\|^2 + \alpha_m(1 - \alpha_m)\|U z_m - w_m\|^2 \le \|w_m - p\|^2 - \|w_{m+1} - p\|^2. \quad (23)$
Since $\lim_{m\to\infty}\|w_m - p\|$ exists and is finite, $\lim_{m\to\infty}\left(1 - \dfrac{\mu s_m}{s_{m+1}}\right) = 1 - \mu > 0$, and $\{\alpha_m\}, \{\sigma_m\} \subset [a, b] \subset (0, 1)$, (23) implies that
$\lim_{m\to\infty}\|v_m - w_m\| = \lim_{m\to\infty}\|u_m - v_m\| = \lim_{m\to\infty}\|V u_m - u_m\| = \lim_{m\to\infty}\|U z_m - w_m\| = 0. \quad (24)$
Since
$\|u_m - w_m\| \le \|u_m - v_m\| + \|v_m - w_m\|,$
it follows from (24) that
$\lim_{m\to\infty}\|u_m - w_m\| = 0. \quad (25)$
Since
$\|z_m - u_m\| \le \sigma_m\|V u_m - u_m\|,$
it follows from (24) that
$\lim_{m\to\infty}\|z_m - u_m\| = 0. \quad (26)$
Since
$\|w_{m+1} - w_m\| \le \alpha_m\|U z_m - w_m\|,$
it follows from (24) that
$\lim_{m\to\infty}\|w_{m+1} - w_m\| = 0. \quad (27)$
Since
$\|U z_m - z_m\| \le \|U z_m - w_m\| + \|w_m - u_m\| + \|u_m - z_m\|,$
it follows from (24)–(26) that
$\lim_{m\to\infty}\|U z_m - z_m\| = 0. \quad (28)$
Since
$\|U z_m - u_m\| \le \|U z_m - z_m\| + \|z_m - u_m\|,$
it follows from (26) and (28) that
$\lim_{m\to\infty}\|U z_m - u_m\| = 0. \quad (29)$
Since $\{w_m\}$ is bounded, there exists a subsequence $\{w_{m_k}\}$ of $\{w_m\}$ with $w_{m_k} \rightharpoonup \bar w$. Further, it follows from (24)–(26) that the corresponding subsequences $\{v_{m_k}\}$, $\{u_{m_k}\}$, and $\{z_{m_k}\}$ also converge weakly to $\bar w$. It then follows from Lemma 1(iv), (24), and (28) that $\bar w \in F(U)$ and $\bar w \in F(V)$.
Step IV. Next, we show that $\bar w \in \mathrm{Sol(VI (4))} \cap \Phi$.
Proof of Step IV. First, we will show that $\bar w \in \Phi$. It follows from Algorithm 1 that
$\dfrac{1}{\sigma_m}(U z_m - z_m) = (I - V) u_m + \dfrac{1}{\sigma_m}(U z_m - u_m). \quad (30)$
Applying the monotonicity of $I - V$, for $z \in F(U)$ we have
$\dfrac{1}{\sigma_m}\langle U z_m - z_m, u_m - z \rangle = \langle (I - V) u_m - (I - V) z, u_m - z \rangle + \langle (I - V) z, u_m - z \rangle + \dfrac{1}{\sigma_m}\langle U z_m - u_m, u_m - z \rangle$
$\ge \langle (I - V) z, u_m - z \rangle + \dfrac{1}{\sigma_m}\langle U z_m - u_m, u_m - z \rangle.$
It follows from (28)–(30) that
$\limsup_{m\to\infty}\langle z - V z, u_m - z \rangle \le 0, \quad \forall z \in F(U).$
Since $u_{m_k} \rightharpoonup \bar w$, we get
$\langle (I - V) z, \bar w - z \rangle \le 0, \quad \forall z \in F(U).$
Since $F(U)$ is convex, $\lambda z + (1 - \lambda)\bar w \in F(U)$ for all $\lambda \in (0, 1)$, and thus
$\langle (I - V)(\lambda z + (1 - \lambda)\bar w), \bar w - (\lambda z + (1 - \lambda)\bar w) \rangle = \lambda\langle (I - V)(\lambda z + (1 - \lambda)\bar w), \bar w - z \rangle \le 0, \quad \forall z \in F(U),$
which implies that
$\langle (I - V)(\lambda z + (1 - \lambda)\bar w), \bar w - z \rangle \le 0, \quad \forall z \in F(U).$
Letting $\lambda \to 0^+$ and using the continuity of $V$, we have
$\langle (I - V)\bar w, \bar w - z \rangle \le 0, \quad \forall z \in F(U).$
That is, $\bar w \in \Phi$.
We now show that $\bar w \in \mathrm{Sol(VI (4))}$. Define
$M u = \begin{cases} A u + N_C(u), & \text{if } u \in C, \\ \emptyset, & \text{if } u \notin C, \end{cases}$
where $N_C(u)$ is the normal cone to $C$ at $u \in C$. Since $M$ is maximal monotone, $0 \in M u$ if and only if $u \in \mathrm{Sol(VI (4))}$. For $(u, w) \in \mathrm{graph}(M)$, we have $w \in M u = A u + N_C(u)$ and hence $w - A u \in N_C(u)$; therefore $\langle u - v, w - A u \rangle \ge 0$ for all $v \in C$.
On the other hand, from $v_m = P_C(I - s_m A) w_m$ and $u \in C$, we have
$\langle w_m - s_m A w_m - v_m, v_m - u \rangle \ge 0.$
This implies that
$\left\langle u - v_m, \dfrac{v_m - w_m}{s_m} + A w_m \right\rangle \ge 0.$
Since $\langle u - v, w - A u \rangle \ge 0$ for all $v \in C$, $v_{m_k} \in C$, and $A$ is monotone, we obtain
$\langle u - v_{m_k}, w \rangle \ge \langle u - v_{m_k}, A u \rangle$
$\ge \langle u - v_{m_k}, A u \rangle - \left\langle u - v_{m_k}, \dfrac{v_{m_k} - w_{m_k}}{s_{m_k}} + A w_{m_k} \right\rangle$
$= \langle u - v_{m_k}, A u - A v_{m_k} \rangle + \langle u - v_{m_k}, A v_{m_k} - A w_{m_k} \rangle - \left\langle u - v_{m_k}, \dfrac{v_{m_k} - w_{m_k}}{s_{m_k}} \right\rangle$
$\ge \langle u - v_{m_k}, A v_{m_k} - A w_{m_k} \rangle - \left\langle u - v_{m_k}, \dfrac{v_{m_k} - w_{m_k}}{s_{m_k}} \right\rangle.$
Taking the limit $k \to \infty$ and using (24), the Lipschitz continuity of $A$, and $\lim_{m\to\infty} s_m = \lambda > 0$, we obtain $\langle u - \bar w, w \rangle \ge 0$. Hence, by the maximal monotonicity of $M$, $\bar w \in M^{-1}(0)$, implying $\bar w \in \mathrm{Sol(VI (4))}$ and therefore $\bar w \in \Gamma$.
Since $\lim_{m\to\infty}\|w_m - p\|$ exists for every $p \in \Gamma$ and every weak cluster point of $\{w_m\}$ lies in $\Gamma$, Lemma 2 implies that the sequence $\{w_m\}$ converges weakly to a point $\bar w \in \Gamma$; by (25), $\{u_m\}$ converges weakly to the same point.    □
Some direct consequences of Theorem 1 are derived below.
Letting $V = I$, Algorithm 1 reduces to the following Krasnosel’skiǐ–Mann subgradient extragradient scheme for approximating a fixed point of a nonexpansive mapping $U$ and solving VI (4).
Theorem 2.
Let $A : H \to H$ be monotone and $L$-Lipschitz-continuous ($L$ unknown), and let $U : C \to C$ be nonexpansive. Assume that $\Gamma_1 = \mathrm{Sol(VI (4))} \cap F(U) \ne \emptyset$. Then, the sequences $\{w_m\}$ and $\{u_m\}$ produced by Algorithm 2 converge weakly to $\bar w \in \Gamma_1$, where $\bar w = P_{\Gamma_1} x_0$.
Algorithm 2 Iterative scheme
 
Initialization: Given $s_0 > 0$ and $\mu \in (0, 1)$, select an arbitrary starting point $w_0 \in H$ and set $m = 0$.
 
Iterative Steps: For the current point w m :
 
Step 1. Calculate
$v_m = P_C(w_m - s_m A w_m),$
$u_m = P_{T_m}(w_m - s_m A v_m),$ where $T_m = \{ w \in H : \langle w_m - s_m A w_m - v_m, w - v_m \rangle \le 0 \},$
$w_{m+1} = (1 - \alpha_m) w_m + \alpha_m U u_m,$ and
$s_{m+1} = \begin{cases} \min\left\{ \dfrac{\mu\|w_m - v_m\|}{\|A w_m - A v_m\|}, s_m \right\}, & \text{if } A w_m - A v_m \ne 0, \\ s_m, & \text{otherwise}, \end{cases}$
where $\{\alpha_m\} \subset [a, b] \subset (0, 1)$ for all $m \ge 0$.
 
Step 2. Update $m \leftarrow m + 1$ and proceed to Step 1.

Applications in Optimization and Monotone Operator Theory

Consider a nonexpansive mapping $V : C \to C$ and a maximal monotone operator $F$. The associated inclusion problem is to find a zero of $F$:
Find $\bar w \in H$ such that $0 \in F(\bar w). \quad (35)$
For $\lambda > 0$, let $U := J_\lambda^F$ be the resolvent of $F$, which is nonexpansive. Define $\Phi = F(P_{F^{-1}(0)} V)$. By Theorem 1, we have the following result.
Theorem 3.
Let $A : H \to H$ be monotone and $L$-Lipschitz-continuous ($L$ unknown), and let $V : C \to C$ be a continuous quasi-nonexpansive mapping with $I - V$ monotone. Furthermore, let $F : D(F) \subseteq H \to 2^H$ be maximal monotone, and set $U = J_\lambda^F$ as its resolvent. Assume that $\Gamma_2 = \mathrm{Sol(VI (4))} \cap \Phi \cap F(V) \ne \emptyset$. Then, the sequences $\{w_m\}$ and $\{u_m\}$ produced by Algorithm 1 converge weakly to $\bar w \in \Gamma_2$, where $\bar w = P_{\Gamma_2} x_0$.
Proof. 
Noting that $F^{-1}(0) = F(U)$ for $U = J_\lambda^F$, problem (35) is equivalent to finding a fixed point of $U$. The result is then an immediate consequence of Theorem 1. □
Setting V = I in Theorem 2 yields the following.
Theorem 4.
Let $A : H \to H$ be monotone and $L$-Lipschitz-continuous ($L$ unknown), and let $U = J_\lambda^F$. Assume that $\Gamma_3 = \mathrm{Sol(VI (4))} \cap F^{-1}(0) \ne \emptyset$. Then, the sequences $\{w_m\}$ and $\{u_m\}$ produced by Algorithm 2 converge weakly to $\bar w \in \Gamma_3$, where $\bar w = P_{\Gamma_3} x_0$.
Remark 2.
Let $g : H \to (-\infty, +\infty]$ be a proper, convex, and lower semicontinuous function. Its subdifferential $F = \partial g$ is then a maximal monotone operator. Applying our framework to the minimization problem
$\min_{w \in H} g(w), \quad (36)$
we note that its solution set corresponds to $F^{-1}(0)$, the set of zeros of the subdifferential. Therefore, a solution to (36) is obtained directly from Theorem 4.
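To make Remark 2 concrete, recall the standard fact from convex analysis (not specific to this paper) that the resolvent of the subdifferential is the proximity operator of $g$:
$U(w) = J_\lambda^{\partial g}(w) = (I + \lambda\,\partial g)^{-1}(w) = \operatorname{prox}_{\lambda g}(w) = \arg\min_{u \in H}\left\{ g(u) + \frac{1}{2\lambda}\|u - w\|^2 \right\}.$
In particular, if $g = \psi_C$ is the indicator function of a closed convex set, then $\operatorname{prox}_{\lambda g} = P_C$, so the resulting scheme uses only metric projections.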

4. Numerical Example

The weak convergence behavior of Algorithm 1 is illustrated via the following two numerical examples.
Example 1.
We illustrate the proposed algorithm in the finite-dimensional Hilbert space R 2 . The monotone operator is chosen as
$A(x) = M x, \qquad M = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix},$
the feasible set is $C = [-2, 2]^2$, and the mappings are $U(x) = x$ and $V(x) = 0.5x$. The projection onto $C$ is defined componentwise as
$P_C(x) = \big(\min\{\max(x_1, -2), 2\}, \ \min\{\max(x_2, -2), 2\}\big).$
The distinct initial points are taken as w 0 = ( 1.8 , 1.5 ) , w 0 = ( 1.5 , 1.8 ) , and w 0 = ( 1 , 1 ) , with parameters s 0 = 0.5 , μ = 0.9 , α = 0.3 , and σ = 0.3 .
The sequence $\{w_m\}$ is generated by Algorithm 1. All numerical computations and graphical analyses were implemented in MATLAB R2015a. Figure 1 displays the convergence paths of the algorithm from different starting points. The trajectories demonstrate how different initial points converge to similar solution regions. Figure 2 shows the norm of the differences between consecutive iterates. All three curves exhibit a rapid initial decrease followed by gradual convergence. Figure 3 tracks the distance from each iterate to the approximate solution. Different initial points converge at comparable rates to the same solution region. Figure 4 and Figure 5 display the convergence of the generated sequence and the step differences $\|w_{m+1} - w_m\|$ for distinct values of $\alpha$ with initial point $w_0 = (1.8, 1.5)$. The numerical results show that the iterates converge to a unique point (close to the origin), and the step-to-step differences decay rapidly to zero, confirming the theoretical weak convergence of the algorithm. The iteration was terminated when $\|w_{m+1} - w_m\| < 10^{-12}$.
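For reproducibility, Example 1 can be run with the algorithm1 sketch given after Algorithm 1 (again a Python/NumPy illustration under our naming assumptions, not the authors' MATLAB code):

import numpy as np

M = np.array([[2.0, 0.0], [0.0, 1.0]])
w_approx = algorithm1(np.array([1.8, 1.5]),                 # initial point w_0
                      A=lambda x: M @ x,                    # monotone operator A(x) = Mx
                      U=lambda x: x,                        # U = identity
                      V=lambda x: 0.5 * x,                  # V(x) = 0.5 x
                      P_C=lambda x: np.clip(x, -2.0, 2.0),  # projection onto [-2, 2]^2
                      s0=0.5, mu=0.9, alpha=0.3, sigma=0.3)
print(w_approx)   # expected to settle near the origin, as reported above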

4.1. Convergence Comparison Analysis

We compare the performance of three optimization algorithms: the proposed method, the Korpelevich algorithm [10], and the Nadezhkina–Takahashi algorithm [20]. The comparison is based on their convergence behavior and computational efficiency. The convergence comparison of the three algorithms starts from the initial point $[1.8, 1.5]^T$. The parameter values were set as in Table 1.
Figure 6 illustrates the convergence behavior of the three algorithms, where the y-axis represents the norm w m (distance from solution) and the x-axis shows the iteration count. The proposed algorithm demonstrates significantly faster convergence, reaching the solution tolerance in substantially fewer iterations compared to the traditional methods. The Korpelevich method [10] exhibits steady but slower convergence, while the Nadezhkina algorithm [20] shows the slowest convergence rate among the three.
Figure 7 displays a bar chart comparing the average number of iterations required for convergence across multiple initial points. Each bar corresponds to one algorithm, with numeric labels indicating exact averages. The proposed algorithm requires the lowest average number of iterations to achieve convergence, establishing it as the most efficient method. The performance comparison is summarized in Table 2.
The parameter sensitivity study, as shown in Figure 8, Figure 9, Figure 10 and Figure 11, investigates how the μ parameter affects the proposed algorithm’s performance:
  • μ = 0.9 : Optimal performance with fastest convergence;
  • μ = 0.7 : Good performance, slightly slower than optimal;
  • μ = 0.4 and μ = 0.5 : Suboptimal performance with slower convergence rates.
Figure 8. Convergence for $\mu = 0.4$.
Figure 9. Convergence for $\mu = 0.5$.
Figure 10. Convergence for $\mu = 0.7$.
Figure 11. Convergence for $\mu = 0.9$.
Figure 12 demonstrates that μ = 0.9 provides the best performance with the lowest average iteration count. The performance degradation as μ moves away from this optimal value in either direction highlights the importance of proper parameter tuning.
The comprehensive analysis demonstrates that the proposed algorithm represents a significant advancement over traditional methods, offering both theoretical improvements and practical benefits for solving optimization problems in this class.
Example 2.
We now consider the Hilbert space $L^2([0, 1])$ with the inner product
$\langle f, g \rangle = \int_0^1 f(x)\, g(x)\, dx.$
Functions are discretized over N uniform grid points, and the norm is approximated by the discrete L 2 -norm
$\|u\|_{L^2} \approx \left( \dfrac{1}{N} \sum_{i=1}^{N} |u(x_i)|^2 \right)^{1/2}.$
Consider the feasible set $C = \{ u \in L^2([0,1]) : 0 \le u(x) \le 1 \text{ for a.e. } x \in [0,1] \}$; the projection onto $C$ is defined pointwise for any function $u \in L^2([0, 1])$ as
$(P_C u)(x) = \min\{\max(u(x), 0), 1\}, \quad x \in [0, 1].$
Also, the projection $P_C$ is illustrated in Figure 13. We define the operators $U(u) = u$ and $V(u) = u^* + \beta(u - u^*)$ with $\beta = 0.6$, where the target function is $u^*(x) = 1 + 0.5\sin(2\pi x)$. The iteration step applies $V$ and then projects the result onto $C$ via $P_C$. Since $U$ is the identity, $F(U) = C$. The HFPP condition
$\langle \bar w - V\bar w, \bar w - w \rangle \le 0, \quad \forall w \in F(U),$
together with the contraction $V$ and the projection $P_C$, leads numerically to the solution $\bar w(x) \approx 0.2$ for all $x \in [0, 1]$. Thus, the generated sequence converges weakly to $\bar w \approx 0.2$.
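For completeness, the discretization used in Example 2 can be written down directly (a Python/NumPy sketch; the grid size is our choice, and only the inner product, the discrete norm, the projection, and the mappings of the example are shown):

import numpy as np

N = 200                                          # number of uniform grid points (assumed)
x = np.linspace(0.0, 1.0, N)

def inner(f, g):
    # Discrete approximation of <f, g> = int_0^1 f(x) g(x) dx.
    return np.trapz(f * g, x)

def l2_norm(u):
    # Discrete L^2-norm: (1/N * sum |u(x_i)|^2)^(1/2).
    return np.sqrt(np.mean(np.abs(u) ** 2))

def P_C(u):
    # Pointwise projection onto C: clip function values to [0, 1].
    return np.clip(u, 0.0, 1.0)

u_star = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)     # target function u*(x)
V = lambda u: u_star + 0.6 * (u - u_star)        # V(u) = u* + beta (u - u*), beta = 0.6
U = lambda u: u                                  # identity mapping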

4.2. Application in Image Reconstruction

We analyze a MATLAB implementation of an iterative optimization algorithm for image deconvolution and reconstruction. The algorithm addresses the problem of recovering an original image from blurred and noisy observations using a variational approach with projection constraints (see [25] for details).
The algorithm solves the regularized least-squares optimization problem
$\min_x \ \tfrac12\|H x - y\|_2^2 + R(x)$
subject to box constraints
$0 \le x_i \le 1, \quad \forall i,$
where
  • H: Blurring operator (convolution with Gaussian kernel);
  • y: Observed noisy and blurred image;
  • x: Image to be reconstructed;
  • R ( x ) : Regularization term implicitly handled through operators U and V.

4.3. Implementation

(i)
Data Generation:
  • Original Image: Shepp–Logan phantom of size 64 × 64 .
  • Blur Kernel: Gaussian filter of size 7 × 7 with σ = 2 .
  • Noise Model: Additive Gaussian noise with variance 0.02 .
(ii)
The main algorithm components are as follows:
  • Forward Operator
    $A(x) = H x - y,$
    where H represents circular convolution with the Gaussian blur kernel.
  • Projection Operator
    $\mathrm{proj}_C(x) = \max(\min(x, 1), 0),$
    which ensures pixel values remain in the valid range [ 0 , 1 ] .
  • Regularization Operators
    $U(x) = 0.05x, \qquad V(x) = 0.05x.$
    These provide mild Tikhonov-type regularization.
(iii)
The algorithm parameters are described in Table 3; a code sketch of the overall setup is given below.
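The listing below is a minimal Python/NumPy sketch of this experimental setup (our reconstruction for illustration only: the synthetic phantom, the FFT-based circular convolution, and the simple projected-gradient-style loop with box constraints are assumptions, not the authors' MATLAB implementation or Algorithm 1 itself):

import numpy as np

rng = np.random.default_rng(0)
n = 64

# Synthetic "phantom": a bright disc on a dark background (stand-in for Shepp-Logan).
yy, xx = np.mgrid[0:n, 0:n]
x_true = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2).astype(float)

# 7x7 Gaussian blur kernel with sigma = 2, embedded for FFT-based circular convolution.
k = np.arange(-3, 4)
g1d = np.exp(-k ** 2 / (2 * 2.0 ** 2))
kernel = np.outer(g1d, g1d)
kernel /= kernel.sum()
pad = np.zeros((n, n))
pad[:7, :7] = kernel
pad = np.roll(pad, (-3, -3), axis=(0, 1))        # center the kernel at (0, 0)
K = np.fft.fft2(pad)

H = lambda x: np.real(np.fft.ifft2(K * np.fft.fft2(x)))             # blurring operator
Ht = lambda x: np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(x)))   # its adjoint

y = H(x_true) + np.sqrt(0.02) * rng.standard_normal((n, n))         # blurred + noisy data
proj_C = lambda x: np.clip(x, 0.0, 1.0)                             # box constraint [0, 1]

# Projected-gradient-style reconstruction loop (step size and iteration count assumed).
x = y.copy()
step = 0.5
for _ in range(200):
    x = proj_C(x - step * Ht(H(x) - y))

psnr = 10 * np.log10(1.0 / np.mean((x - x_true) ** 2))
print(f"reconstruction PSNR: {psnr:.1f} dB")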
The generated figure, shown in Figure 14, contains four subplots:
(i)
Original phantom: Ground truth image for comparison.
(ii)
Blurred + Noisy: Degraded input image (PSNR ≈ 20–25 dB).
(iii)
Reconstructed: Final output after algorithm convergence.
(iv)
PSNR Convergence: Monotonic improvement in reconstruction quality.
Figure 14. Image deconvolution, restoration, and convergence.
The iterative optimization algorithm successfully reconstructs images from blurred and noisy observations. The algorithm provides a solid foundation for image reconstruction problems and can be extended with more sophisticated regularization terms for improved performance in challenging scenarios.

5. Conclusions

We have introduced a novel Krasnosel’skiǐ–Mann-type subgradient extragradient algorithm to find a common solution to variational inequalities and hierarchical fixed-point problems. The main theoretical contribution is the proof of weak convergence under standard assumptions for nonexpansive and quasi-nonexpansive mappings in real Hilbert spaces. To underscore the value of our approach, we have detailed several of its applications and included numerical examples that illustrate its practical behavior and convergence. Our results demonstrate that the proposed method serves as a unifying framework, successfully generalizing and encompassing several well-known algorithms and theorems. Future work will focus on establishing strong convergence and applying this method to larger-scale real-world problems.

Author Contributions

M.A.: Review and Editing, Methodology; R.A.: Writing—Original Draft, Conceptualization; M.F.: Review and Editing, Software. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the Deanship of Graduate Studies and Scientific Research at Qassim University (QU-APC-2025).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The researchers would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2025).

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Moudafi, A.; Maingé, P.E. Towards viscosity approximations of hierarchical fixed-point problems. Fixed Point Theory Appl. 2006, 2006, 95453. [Google Scholar] [CrossRef]
  2. Moudafi, A. Krasnoselski-Mann iteration for hierarchical fixed-point problems. Inverse Probl. 2007, 23, 1635–1640. [Google Scholar] [CrossRef]
  3. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
  4. Kazmi, K.R.; Ali, R.; Furkan, M. Krasnoselski-Mann type iterative method for hierarchical fixed point problem and split mixed equilibrium problem. Numer. Algorithms 2018, 77, 289–308. [Google Scholar] [CrossRef]
  5. Kazmi, K.R.; Ali, R.; Furkan, M. Hybrid iterative method for split monotone variational inclusion problem and hierarchical fixed point problem for a finite family of nonexpansive mappings. Numer. Algorithms 2018, 79, 499–527. [Google Scholar] [CrossRef]
  6. Moudafi, A.; Maingé, P.E. Strong convergence of an iterative method for hierarchical fixed-point problems. Pac. J. Optim. 2007, 3, 529–538. [Google Scholar]
  7. Yang, Q.; Zhao, J. Generalized KM theorems and their applications. Inverse Probl. 2006, 22, 833–844. [Google Scholar] [CrossRef]
  8. Yao, Y.; Liou, Y.C. Weak and strong convergence of Krasnoselski-Mann iteration for hierarchical fixed-point problems. Inverse Probl. 2008, 24, 501–508. [Google Scholar] [CrossRef]
  9. Hartman, P.; Stampacchia, G. On some non-linear elliptic differential-functional equations. Acta Mathematica 1966, 115, 271–310. [Google Scholar] [CrossRef]
  10. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756. [Google Scholar]
  11. Farid, M. The subgradient extragradient method for solving mixed equilibrium problems and fixed point problems in Hilbert spaces. J. Appl. Numer. Optim. 2019, 1, 335–345. [Google Scholar] [CrossRef]
  12. He, B.S. A new method for a class of variational inequalities. Math. Program. 1994, 66, 137–144. [Google Scholar] [CrossRef]
  13. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef] [PubMed]
  14. Farid, M.; Peeyada, P.; Ali, R.; Cholamjiak, W. Extragradient method with inertial iterative technique for pseudomonotone split equilibrium and fixed point problems of new mappings. J. Anal. 2024, 32, 1463–1485. [Google Scholar] [CrossRef]
  15. Solodov, M.V.; Tseng, P. Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34, 1814–1830. [Google Scholar] [CrossRef]
  16. Censor, Y.; Gibali, A.; Reich, S. Extension of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optim. Methods Softw. 2011, 26, 827–845. [Google Scholar] [CrossRef]
  17. Censor, Y.; Gibali, A.; Reich, S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optimization 2011, 61, 1119–1132. [Google Scholar] [CrossRef]
  18. Lyashko, S.I.; Semenov, V.V.; Voitova, T.A. Low-cost modification of Korpelevich's methods for monotone equilibrium problems. Cybern. Syst. Anal. 2011, 47, 631–640. [Google Scholar] [CrossRef]
  19. Takahashi, W.; Toyoda, M. Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118, 417–428. [Google Scholar] [CrossRef]
  20. Nadezhkina, N.; Takahashi, W. Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128, 191–201. [Google Scholar] [CrossRef]
  21. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984. [Google Scholar]
  22. Brézis, H. Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. In Mathematical Studies; North Holland American Elsevier: Amsterdam, The Netherlands, 1973; p. 5. [Google Scholar]
  23. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory, Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 1990; p. 28. [Google Scholar]
  24. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef]
  25. Combettes, P.L.; Pesquet, J.C. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
Figure 1. Target point across different initializations.
Figure 2. Step-to-step convergence for distinct initial points.
Figure 3. Convergence of the sequence $\{w_m\}$ for distinct initial points.
Figure 4. Convergence of the sequence $\{w_m\}$ for distinct values of $\alpha$.
Figure 5. Step-to-step convergence for distinct values of $\alpha$.
Figure 6. Convergence comparison starting from initial point $[1.8, 1.5]^T$.
Figure 7. Average iteration count across multiple initial points.
Figure 12. Average iteration count vs. $\mu$.
Figure 13. Projection onto $C = [0, 1]$.
Table 1. Parameter values used in convergence comparison.
Parameter | Symbol | Value | Description
Initial step size | $s_0$ | 1.5 | Initial step size for proposed algorithm
Adaptivity coefficient | $\mu$ | 0.9 | Controls step-size scaling
Averaging parameter | $\alpha$ | 0.9 | Relaxation factor for $w$ update
Mixing parameter | $\sigma$ | 0.5 | Controls weighting between $U$ and $V$
Max iterations | $m_{\max}$ | 100 | Iteration cap for all algorithms
Tolerance | $\varepsilon$ | $10^{-12}$ | Stopping criterion for $\|w_{m+1} - w_m\|$
Korpelevich step | $\mu_{\mathrm{Kor}}$ | 0.1 | Fixed step for Korpelevich method
Nadezhkina step | $\lambda$ | 0.1 | Fixed step for Nadezhkina method
Nadezhkina weight | $\alpha_{\mathrm{Nad}}$ | 0.5 | Relaxation factor in Nadezhkina method
Table 2. Convergence comparison analysis.
Algorithm | Avg Iterations | Speedup
Proposed | 15.0 | Baseline
Korpelevich | 80.0 | 5.3× slower
Nadezhkina | 45.0 | 3.0× slower
Table 3. Algorithm parameters for image reconstruction.
Parameter | Value
Initial step size ($s_0$) | 0.1
Momentum parameter ($\mu$) | 0.85
Line search parameter ($\alpha$) | 0.4
Regularization parameter ($\sigma$) | 0.4
Maximum iterations | 200
Tolerance | $10^{-6}$
