Article

An Enhanced Subgradient Extragradient Method for Fixed Points of Quasi-Nonexpansive Mappings Without Demi-Closedness

by
Anchalee Sripattanet
1 and
Atid Kangtunyakarn
2,*
1
School of Education and Liberal Arts, Sarasas Suvarnabhumi Institute of Technology, Samut Prakan 10540, Thailand
2
Department of Mathematics, School of Science, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(18), 2937; https://doi.org/10.3390/math13182937
Submission received: 4 August 2025 / Revised: 6 September 2025 / Accepted: 8 September 2025 / Published: 11 September 2025
(This article belongs to the Section E1: Mathematics and Computer Science)

Abstract

This research focuses on developing a novel approach to finding fixed points of quasi-nonexpansive mappings without relying on the demi-closedness condition, a common requirement in previous studies. The approach is based on the Subgradient Extragradient technique, which builds upon the foundational extragradient method introduced by G.M. Korpelevich. Korpelevich’s method is a widely recognized tool in the fields of optimization and variational inequalities. This study extends Korpelevich’s technique by adapting it to a broader class of operators while maintaining critical convergence properties. This research demonstrates the effectiveness and practical applicability of this new method through detailed computational examples, highlighting its potential to address complex mathematical problems across various domains.

1. Introduction

Fixed point theory in Hilbert spaces is fundamental for solving equations, optimization problems, and analyzing dynamical systems. A function $f$ on a Hilbert space $H$ has a fixed point at $x$ if $f(x) = x$, often representing an equilibrium state or an optimal solution.
A mapping $T : H \to H$ is nonexpansive if it satisfies $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$, ensuring stability in iterative methods used in approximation and optimization. In 2012, Hideaki Iiduka [1] introduced an iterative algorithm grounded in fixed-point theory to address convex minimization problems involving nonexpansive mappings. He further implemented this algorithm to tackle the network bandwidth allocation problem, which serves as a practical instance of convex minimization, a core topic in networking that has been the focus of extensive research. When a data rate is allocated to a network source, it generates a corresponding utility, typically modeled as a concave function. A commonly used utility function for each source is given by
$$U_s(x) := w_s \log x,$$
where $x \in \mathbb{R}^+ \setminus \{0\}$ and $w_s > 0$ is the weight parameter for source $s$. The allocation for each source must be managed through a control mechanism to avoid network congestion. This mechanism distributes bandwidth by solving an optimization problem that maximizes the total utility while ensuring that the sum of the transmission rates of all sources sharing a link does not exceed its capacity. The network bandwidth allocation problem is represented as follows [1].
$$\text{Maximize } \sum_{s \in S} U_s(x_s) \quad \text{subject to } (x_s)_{s \in S} \in F(T),$$
where $T$ is a nonexpansive mapping and $\sum_{s \in S} U_s(x_s)$ is composed of the utilities $U_s$ defined in (1).
Quasi-nonexpansive mappings generalize this notion by focusing on fixed points, requiring only that $\|Tx - p\| \le \|x - p\|$ for any fixed point $p$ of $T$ and any $x \in H$, which facilitates convergence towards these points. The relationship between nonexpansive and quasi-nonexpansive mappings is foundational: every nonexpansive mapping with a fixed point is quasi-nonexpansive, but the reverse does not necessarily hold. This distinction allows quasi-nonexpansive mappings to encompass a broader class of operators, providing more application flexibility while maintaining critical convergence properties.
For example, the mapping on $\mathbb{R}$ given by $T(x) = x$ if $x \le 1$ and $T(x) = 1$ if $x > 1$ is quasi-nonexpansive but not nonexpansive, as it does not consistently maintain distances between all points. Another example is the mapping $T(x, y)$ defined by $T(x, y) = (x, y)$ if $x^2 + y^2 \le 1$ and $T(x, y) = \left(\frac{x}{\sqrt{x^2 + y^2}}, \frac{y}{\sqrt{x^2 + y^2}}\right)$ if $x^2 + y^2 > 1$, which is quasi-nonexpansive, as it projects points outside the unit circle onto it.
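To make the quasi-nonexpansive inequality concrete, the following Python snippet numerically spot-checks $\|T(z) - p\| \le \|z - p\|$ for the radial mapping above; the random sampling is an illustrative assumption, and a finite test is, of course, not a proof.

```python
import numpy as np

# Spot-check the quasi-nonexpansive inequality ||T(z) - p|| <= ||z - p||
# for the radial mapping of the 2D example, where p is any fixed point
# (i.e., any point of the closed unit disk).
def T(z):
    r = np.linalg.norm(z)
    return z if r <= 1 else z / r

rng = np.random.default_rng(0)
for _ in range(10_000):
    z = rng.normal(size=2) * 3          # arbitrary test point
    p = T(rng.normal(size=2))           # any point of the unit disk is fixed
    assert np.linalg.norm(T(z) - p) <= np.linalg.norm(z - p) + 1e-12
```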
A crucial principle related to quasi-nonexpansive mappings is the demiclosedness principle. This principle states that if a sequence $\{x_n\}$ converges weakly to $x$ and the sequence $\{(I - T)x_n\}$ converges strongly to $0$, then $x$ is a fixed point of $T$. This result is fundamental in operator theory and functional analysis, providing a robust tool for establishing the convergence of iterative methods. Numerous studies have focused on solving fixed-point problems of quasi-nonexpansive mappings using the demiclosedness condition together with the mapping $T_w = wI + (1 - w)T$, where $T$ is a quasi-nonexpansive mapping and $w \in [0, 1]$; this mapping plays a vital role in these investigations, as evidenced by references [2,3,4].
The general split feasibility problem (General SFP), introduced by Kangtunyakarn [5], is an extension of the traditional split feasibility problem (SFP), which was first introduced by Censor and Elfving [6]. The traditional SFP involves finding a point x in a closed convex set C such that its image under a linear operator A falls within another convex set Q. Mathematically, it is represented as follows:
$$\text{Find } x \in C \text{ such that } Ax \in Q,$$
where $A : H_1 \to H_2$ is a bounded linear operator, and $C \subseteq H_1$ and $Q \subseteq H_2$ are closed convex sets.
The General SFP further extends this framework by involving two operators $A$ and $B$, requiring the solution to satisfy the following: find $x \in C$ such that $Ax, Bx \in Q$, where $A, B : H_1 \to H_2$ are bounded linear operators. The solution set of this problem can be represented by $\Gamma = \{x \in C : Ax, Bx \in Q\}$. This generalization allows for addressing more complex scenarios involving multiple constraints and objectives. Kangtunyakarn has also connected the solution set to the fixed-point problem. Further details can be found in the next section.
In practical terms, SFPs, including traditional and general forms, are critical in various applications. In medical imaging, SFPs are used to reconstruct images from projection data, ensuring the reconstructed images meet specific criteria such as physical constraints and noise minimization. In signal processing, they aid in designing filters and reconstructing signals from incomplete or corrupted data, which is essential in applications like echo cancellation and noise reduction. Additionally, SFPs are used in optimization and operations research for solving resource allocation, network design, and logistics problems, as well as in data science and machine learning for training algorithms and models, especially in high-dimensional data scenarios. See, for example [7,8,9].
Researchers such as Censor and Elfving [6] and Censor and Segal [10] have significantly advanced this field, developing multiple sets of split feasibility problems and split variational inequality problems, incorporating various constraints and projection methods. These advancements highlight the versatility and utility of the SFP framework in solving complex real-world challenges across disciplines.
G.M. Korpelevich [11] was the first to introduce the extragradient method, which has since become a fundamental technique for solving variational inequality problems. This method, known for its robust convergence properties, is defined as follows:
$$y_n = P_C(x_n - \lambda_n F(x_n)), \qquad x_{n+1} = P_C(x_n - \lambda_n F(y_n)),$$
where P C denotes the projection onto the set C, λ n is the step size parameter, and F is a nonlinear mapping associated with the problem at hand. Specifically, F represents the gradient of the objective function in optimization problems or a monotone operator in variational inequality problems. The choice of λ n is crucial for ensuring the algorithm’s convergence and is often determined by a line-search procedure or fixed in advance based on problem characteristics.
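As an illustration of the two projection steps, here is a minimal Python sketch of the extragradient iteration; the feasible set $C$ (the unit ball), the monotone operator $F$, and the step size are toy assumptions chosen so that $x^* = 0$ solves the variational inequality.

```python
import numpy as np

def P_C(x):
    """Projection onto the closed unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

# A strongly monotone toy operator: rotation plus a small multiple of I.
F = lambda x: np.array([x[1], -x[0]]) + 0.1 * x

x = np.array([1.0, -0.5])
lam = 0.1                       # fixed step size with lam * Lip(F) < 1
for _ in range(200):
    y = P_C(x - lam * F(x))     # prediction step
    x = P_C(x - lam * F(y))     # correction step evaluates F at the prediction
print(x)                        # approaches the solution x* = 0
```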
Building upon Korpelevich’s foundational work, researchers have developed enhancements to improve the performance of this method.
X. J. Long, J. Yang, and Y. J. Cho [12] built upon Korpelevich’s extragradient method to enhance its performance for variational inequalities. Their modifications aim to increase convergence speed and computational efficiency. The improved algorithm consists of the following steps:
  • Initial Computation:
    $$w_n = x_n + u_n(x_n - x_{n-1}),$$
    where $u_n$ is a sequence used to control the step size.
  • Projection Step:
    $$y_n = P_C(w_n - \lambda_n F(w_n)).$$
  • Check for Solution: If $w_n = y_n$ or $F(y_n) = 0$, then $y_n$ is a solution of the problem. Otherwise, proceed to the next step.
  • Correction Step:
    $$z_n = P_{T_n}(w_n - \lambda_n F(y_n)).$$
  • Step Size Adjustment:
    $$\lambda_n = \gamma \nu^{m_n},$$
    with $m_n$ being the smallest non-negative integer that satisfies certain conditions to ensure convergence.
  • Final Update:
    $$x_{n+1} = \alpha_n f(e_n) + (1 - \alpha_n)z_n,$$
    where $e_n = (1 - \theta)w_n + \theta x_n$.
Similarly, B. Tan and S. Y. Cho [13] leverage Korpelevich’s method as a basis and further enhance it by incorporating a new line-search rule. The modified algorithm consists of the following steps:
  • Initial Computation:
    $$w_n = x_n + \theta_n(x_n - x_{n-1}),$$
    where $\theta_n$ is determined as follows:
    $$\theta_n = \begin{cases} \min\left\{\dfrac{\epsilon_n}{\|x_n - x_{n-1}\|},\ \theta\right\}, & \text{if } x_n \ne x_{n-1}, \\ \theta, & \text{otherwise}. \end{cases}$$
  • Projection Step:
    $$y_n = P_C(w_n - \lambda_n A w_n).$$
  • Check for Solution: If $w_n = y_n$ or $Ay_n = 0$, then $y_n$ is a solution of the variational inequality problem (VIP). Otherwise, proceed to the next step.
  • Correction Step:
    $$z_n = P_C(w_n - \beta_n A y_n),$$
    where $\beta_n$ is given by
    $$\beta_n = \frac{(1 - \mu)\gamma\,\|w_n - y_n\|^2}{\|Ay_n\|^2}.$$
  • Final Update:
    $$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)z_n.$$
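To make the structure of these inertial extragradient schemes concrete, the following Python sketch runs the pattern above on a toy problem. The operator $A$, the set $C$, the contraction $f$, the fixed $\lambda$ (in place of a line search), and all parameter values are illustrative assumptions, and the $\beta_n$ formula follows our reading of the garbled source.

```python
import numpy as np

P_C = lambda x: np.clip(x, -1.0, 1.0)   # example C: the box [-1, 1]^2
A   = lambda x: 0.5 * x                 # toy monotone operator with A(0) = 0
f   = lambda x: 0.5 * x                 # contraction for the viscosity update

theta, mu, gam, lam = 0.4, 0.5, 0.9, 0.5
x_prev = np.array([1.0, 1.0])
x = np.array([0.8, -0.6])
for n in range(1, 200):
    d = np.linalg.norm(x - x_prev)
    # Inertial weight: eps_n / ||x_n - x_{n-1}|| capped by theta (eps_n = 1/n^2).
    theta_n = min(1.0 / (n * n * d), theta) if d > 0 else theta
    w = x + theta_n * (x - x_prev)                      # inertial extrapolation
    y = P_C(w - lam * A(w))                             # projection step
    Ay = A(y)
    if np.allclose(w, y) or np.allclose(Ay, 0):
        x_prev, x = x, y
        break                                           # y already solves the VIP
    beta_n = (1 - mu) * gam * np.linalg.norm(w - y) ** 2 / (Ay @ Ay)
    z = P_C(w - beta_n * Ay)                            # correction step
    alpha_n = 1.0 / (n + 1)
    x_prev, x = x, alpha_n * f(x) + (1 - alpha_n) * z   # viscosity update
print(x)   # tends to the solution x* = 0
```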
G.M. Korpelevich’s original method has significantly impacted the field of optimization and variational inequalities. His extragradient method introduced an effective way to iteratively approach solutions for variational inequality problems, providing a robust framework upon which subsequent researchers built more efficient algorithms. Long, Yang, and Cho [12], as well as Tan and Cho [13], acknowledged the importance of Korpelevich’s contributions and used his method as the starting point for their advancements. This highlights the lasting relevance and adaptability of Korpelevich’s work in the ongoing development of mathematical optimization techniques. Numerous researchers have continued to expand upon Korpelevich’s foundational work, developing variations and enhancements tailored to specific applications and more complex problem settings, as demonstrated in references [14,15,16]. This ongoing research ensures that the extragradient method remains a vital tool in optimization and variational inequalities, proving its versatility and enduring significance.
Building upon the foundational techniques established by G.M. Korpelevich, specifically the subgradient extragradient method, this research develops an innovative approach to finding fixed points of quasi-nonexpansive mappings without relying on the demi-closedness condition. This method leverages the robustness of Korpelevich’s technique, adapting it to a broader class of operators while maintaining critical convergence properties.
Detailed computational examples are provided to validate the effectiveness of this new method, demonstrating its practical applicability and robustness. Additionally, the implications of these findings are extended to address various complex problems, showcasing the versatility of the approach. The solutions derived through this technique offer significant improvements in areas such as optimization, dynamical systems analysis, and solving equilibrium problems, proving its utility across multiple domains. This research enhances the theoretical understanding of quasi-nonexpansive mappings and presents a viable tool for practical applications related to mathematical problems and their applications.

2. Preliminaries

The foundational definitions, propositions, and lemmas necessary for the rigorous development of the theory are established in this section. A careful review of fundamental concepts forms the basis of our analysis, accompanied by the introduction of novel analytical tools crafted specifically to facilitate the proofs of the main theorems. These tools mark a significant advancement in addressing the challenges inherent to the core problems under investigation. The material presented provides the essential groundwork, ensuring that the reader is well prepared to engage with the advanced theoretical constructs and proofs that follow. Consider a Hilbert space H and a nonempty closed convex subset C of H. The metric projection P C of a point x H onto the set C is a fundamental concept in functional analysis and optimization.
The metric projection P C x is defined as the unique point in C that minimizes the distance to x, formally expressed by
$$P_C x = \arg\min_{y \in C}\|x - y\|.$$
This means that $P_C x$ is the closest point to $x$ within $C$, satisfying the equality
$$\|x - P_C x\| = \min_{y \in C}\|x - y\|.$$
The uniqueness and existence of P C x are guaranteed by the properties of the Hilbert space and the convexity of C.
Furthermore, the projection P C is a firmly nonexpansive operator, ensuring that
$$\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y,\ x - y \rangle,$$
for any x , y H . This property is essential in proving the convergence of iterative methods in convex optimization.
Additionally, the metric projection satisfies a key variational inequality:
$$\langle x - P_C x,\ z - P_C x \rangle \le 0$$
for all $z \in C$, indicating that $x - P_C x$ belongs to the normal cone of $C$ at $P_C x$.
Thus, the metric projection P C x is a crucial tool for exploring the geometric properties of convex sets in Hilbert spaces and is widely applied in mathematical analysis and related fields.
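The following short Python check illustrates both properties numerically, using the closed unit ball as an illustrative choice of $C$.

```python
import numpy as np

def P_C(x):
    """Metric projection onto the closed unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
px, py = P_C(x), P_C(y)

# Firm nonexpansiveness: ||P_C x - P_C y||^2 <= <P_C x - P_C y, x - y>.
assert np.linalg.norm(px - py) ** 2 <= (px - py) @ (x - y) + 1e-12

# Variational inequality: <x - P_C x, z - P_C x> <= 0 for all z in C.
for _ in range(100):
    z = P_C(rng.normal(size=3))     # sample a point of C
    assert (x - px) @ (z - px) <= 1e-12
```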
Lemma 1
([17]). Let $H$ be a Hilbert space and $C$ a nonempty closed convex subset of $H$. Suppose $A$ is a mapping from $C$ into $H$, and let $u \in C$. For any $\lambda > 0$, the point $u$ is a solution to the variational inequality problem $VI(C, A)$ if and only if
$$u = P_C(I - \lambda A)u,$$
where $P_C$ denotes the metric projection of $H$ onto $C$.
Lemma 2
([18]). Let $\{\alpha_n\}$ and $\{\gamma_n\}$ be sequences of non-negative real numbers such that
$$\alpha_{n+1} \le (1 - \delta_n)\alpha_n + \beta_n + \gamma_n$$
for all $n \in \mathbb{N}$, where $\{\delta_n\}$ is a sequence in $(0, 1)$ and $\{\beta_n\}$ is a real sequence. Assume that $\sum_{n=0}^{\infty}\gamma_n < \infty$. Then, the following results hold:
(i) If $\beta_n \le \delta_n M$ for some $M \ge 0$, then $\{\alpha_n\}$ is a bounded sequence.
(ii) If $\sum_{n=0}^{\infty}\delta_n = \infty$ and $\limsup_{n \to \infty}\frac{\beta_n}{\delta_n} \le 0$, then $\lim_{n \to \infty}\alpha_n = 0$.
Lemma 3
([19]). Let $X$ be a real inner product space. Then
(i) $\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2$,
(ii) $\|px + qy\|^2 = p(p + q)\|x\|^2 + q(p + q)\|y\|^2 - pq\|x - y\|^2$,
for all $x, y \in X$ and for all $p, q \in \mathbb{R}$.
The subsequent lemma is fundamental in establishing the main theorem in the following section.
Lemma 4.
Let X be a real inner product space. Then
$$\|\alpha x + \beta y + \gamma z\|^2 = (\alpha + \beta + \gamma)\alpha\|x\|^2 + (\alpha + \beta + \gamma)\beta\|y\|^2 + (\alpha + \beta + \gamma)\gamma\|z\|^2 - \alpha\beta\|x - y\|^2 - \gamma\alpha\|x - z\|^2 - \gamma\beta\|y - z\|^2,$$
for all x , y , z X and α , β , γ R .
Proof. 
Let $\alpha, \beta, \gamma \in \mathbb{R}$ and $x, y, z \in X$, and put $a = \alpha + \beta$ (for $\alpha + \beta = 0$ the identity follows directly from Lemma 3(ii), so we may assume $a \ne 0$). From Lemma 3, we have
\begin{align*}
\|\alpha x + \beta y + \gamma z\|^2
&= \left\|a\,\frac{\alpha x + \beta y}{\alpha + \beta} + \gamma z\right\|^2 \\
&= a(a + \gamma)\left\|\frac{\alpha x + \beta y}{\alpha + \beta}\right\|^2 + \gamma(a + \gamma)\|z\|^2 - a\gamma\left\|\frac{\alpha x + \beta y}{\alpha + \beta} - z\right\|^2 \\
&= a(a + \gamma)\left[\frac{\alpha\|x\|^2}{\alpha + \beta} + \frac{\beta\|y\|^2}{\alpha + \beta} - \frac{\alpha\beta}{(\alpha + \beta)^2}\|x - y\|^2\right] + \gamma(a + \gamma)\|z\|^2 \\
&\quad - a\gamma\left[\frac{\alpha\|x - z\|^2}{\alpha + \beta} + \frac{\beta\|y - z\|^2}{\alpha + \beta} - \frac{\alpha\beta}{(\alpha + \beta)^2}\|x - y\|^2\right] \\
&= (\alpha + \beta + \gamma)\alpha\|x\|^2 + (\alpha + \beta + \gamma)\beta\|y\|^2 - \frac{(\alpha + \beta + \gamma)\alpha\beta}{\alpha + \beta}\|x - y\|^2 \\
&\quad + (\alpha + \beta + \gamma)\gamma\|z\|^2 - \gamma\alpha\|x - z\|^2 - \gamma\beta\|y - z\|^2 + \frac{\alpha\beta\gamma}{\alpha + \beta}\|x - y\|^2 \\
&= (\alpha + \beta + \gamma)\alpha\|x\|^2 + (\alpha + \beta + \gamma)\beta\|y\|^2 + (\alpha + \beta + \gamma)\gamma\|z\|^2 \\
&\quad - \alpha\beta\|x - y\|^2 - \gamma\alpha\|x - z\|^2 - \gamma\beta\|y - z\|^2.
\end{align*}
Therefore, Equation (2) holds for all x , y , z X and α , β , γ R . □
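As a sanity check, the identity of Lemma 4 can be verified numerically for random vectors and arbitrary real coefficients; the following snippet is purely illustrative.

```python
import numpy as np

# Numerical verification of the three-point identity in Lemma 4.
rng = np.random.default_rng(1)
for _ in range(1000):
    x, y, z = rng.normal(size=(3, 4))
    a, b, c = rng.normal(size=3)
    s = a + b + c
    lhs = np.linalg.norm(a * x + b * y + c * z) ** 2
    rhs = (s * a * np.linalg.norm(x) ** 2 + s * b * np.linalg.norm(y) ** 2
           + s * c * np.linalg.norm(z) ** 2
           - a * b * np.linalg.norm(x - y) ** 2
           - c * a * np.linalg.norm(x - z) ** 2
           - c * b * np.linalg.norm(y - z) ** 2)
    assert abs(lhs - rhs) <= 1e-7 * (1 + abs(lhs))
```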
Lemma 5
([20]). Let C be a nonempty closed convex subset of a real Hilbert space H, and let T : C C be a quasi-nonexpansive mapping. Then
$$VI(C, I - T) = F(T).$$
Remark 1.
From Lemma 1 and Lemma 5, we have
$$F(T) = VI(C, I - T) = F\big(P_C(I - \zeta(I - T))\big),$$
for all $\zeta > 0$.
Lemma 6
([5]). Let $H_1$ and $H_2$ be real Hilbert spaces and let $C, Q$ be nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Let $A, B : H_1 \to H_2$ be bounded linear operators with adjoints $A^*$ and $B^*$, respectively, and assume $\Gamma \ne \emptyset$. Then the following statements are equivalent.
(i) 
$x^* \in \Gamma$,
(ii) 
$$P_C\left(I - a\left[\frac{A^*(I - P_Q)A}{2} + \frac{B^*(I - P_Q)B}{2}\right]\right)x^* = x^*,$$
where $L_A$ and $L_B$ are the spectral radii of $A^*A$ and $B^*B$, respectively, and $a \in \left(0, \frac{2}{L}\right)$ with $L = \max\{L_A, L_B\}$.
Lemma 7
([21]). Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\Gamma_{n_j}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_j} < \Gamma_{n_j + 1}$ for all $j \ge 0$. Consider the sequence of integers $\{\tau(n)\}_{n \ge n_0}$ defined by
$$\tau(n) = \max\{k \le n : \Gamma_k < \Gamma_{k+1}\}.$$
Then $\{\tau(n)\}_{n \ge n_0}$ is a nondecreasing sequence verifying $\lim_{n \to \infty}\tau(n) = \infty$ and, for all $n \ge n_0$,
$$\max\{\Gamma_{\tau(n)},\ \Gamma_n\} \le \Gamma_{\tau(n) + 1}.$$
Lemma 8.
Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. Let $T : C \to C$ be a continuous quasi-nonexpansive mapping, and let $A, B : H_1 \to H_2$ be bounded linear operators with $L_A$ and $L_B$ denoting the spectral radii of $A^*A$ and $B^*B$, respectively. Assume that the set $F = F(T) \cap \Gamma$ is nonempty. Let $x_1, u \in C$, and let the sequence $\{x_n\}$ be constructed by
$$y_n = P_C((I - aJ)x_n), \qquad z_n = P_C(x_n - \lambda(I - T)x_n),$$
where
$$J = \frac{1}{2}A^*(I - P_Q)A + \frac{1}{2}B^*(I - P_Q)B.$$
The sets used in the iteration are defined as follows:
$$T_n = \{x \in H : \langle (I - aJ)x_n - y_n,\ y_n - x \rangle \ge 0\}, \qquad W_n = \{x \in H : \langle (I - \lambda(I - T))x_n - z_n,\ z_n - x \rangle \ge 0\}.$$
The iteration is then given by
$$x_{n+1} = \alpha_n u + \beta_n P_{T_n}(x_n - aJy_n) + \gamma_n P_{W_n}(I - \lambda(I - T))x_n$$
for all $n \ge 1$, where $\alpha_n, \beta_n, \gamma_n \in [0, 1]$ with $\alpha_n + \beta_n + \gamma_n \le 1$, $\lambda \in (0, 1)$, and $a \in \left(0, \frac{1}{L}\right)$ with $L = \max\{L_A, L_B\}$.
Then the sequences $\{x_n\}$, $\{y_n\}$, and $\{z_n\}$ are bounded.
Proof. 
Let $v \in F$. By applying the method described in [5], we can conclude that $J$ is $\frac{1}{L}$-inverse strongly monotone.
Based on the definition of the sequence { x n } , we obtain the following:
\begin{align*}
\|x_{n+1} - v\| &= \big\|\alpha_n(u - v) + \beta_n\big(P_{T_n}(x_n - aJy_n) - v\big) + \gamma_n\big(P_{W_n}(x_n - \lambda(I - T)x_n) - v\big) - (1 - \alpha_n - \beta_n - \gamma_n)v\big\| \\
&\le \alpha_n\|u - v\| + \beta_n\big\|P_{T_n}(x_n - aJy_n) - v\big\| + \gamma_n\big\|P_{W_n}(x_n - \lambda(I - T)x_n) - v\big\| + (1 - \alpha_n - \beta_n - \gamma_n)\|v\|.
\end{align*}
Given that $v \in F$, we deduce that $v \in C$. From the properties of $z_n$ and $y_n$, it follows that $v \in T_n \cap W_n$. Defining $u_n = x_n - aJy_n$, we arrive at the following:
\begin{align*}
\|P_{T_n}(x_n - aJy_n) - v\|^2 &= \|P_{T_n}u_n - v\|^2 \\
&= \|P_{T_n}u_n - u_n\|^2 + \|u_n - v\|^2 + 2\langle P_{T_n}u_n - u_n,\ u_n - v \rangle \\
&= \|u_n - v\|^2 + \langle P_{T_n}u_n - u_n,\ P_{T_n}u_n - v \rangle + \langle P_{T_n}u_n - u_n,\ u_n - v \rangle \\
&\le \|u_n - v\|^2 - \|P_{T_n}u_n - u_n\|^2 \\
&= \|x_n - aJy_n - v\|^2 - \|P_{T_n}u_n - x_n + aJy_n\|^2 \\
&= \|x_n - v\|^2 + a^2\|Jy_n\|^2 - 2a\langle x_n - v,\ Jy_n \rangle - \|P_{T_n}u_n - x_n\|^2 - a^2\|Jy_n\|^2 - 2a\langle P_{T_n}u_n - x_n,\ Jy_n \rangle \\
&= \|x_n - v\|^2 - \|P_{T_n}u_n - x_n\|^2 - 2a\langle P_{T_n}u_n - v,\ Jy_n \rangle.
\end{align*}
Since $v \in \Gamma$ (so that $Jv = 0$), and given that $y_n = P_C(I - aJ)x_n$ along with the monotonicity of $J$, we obtain the following:
$$0 \le \langle Jy_n - Jv,\ y_n - v \rangle = \langle Jy_n,\ y_n - v \rangle - \langle Jv,\ y_n - v \rangle = \langle Jy_n,\ y_n - v \rangle = \langle Jy_n,\ y_n - P_{T_n}u_n \rangle + \langle Jy_n,\ P_{T_n}u_n - v \rangle.$$
From (4), we have
\begin{align*}
\|P_{T_n}(x_n - aJy_n) - v\|^2 &\le \|x_n - v\|^2 - \|P_{T_n}u_n - x_n\|^2 - 2a\langle P_{T_n}u_n - v,\ Jy_n \rangle \\
&\le \|x_n - v\|^2 - \|P_{T_n}u_n - x_n\|^2 + 2a\langle Jy_n,\ y_n - P_{T_n}u_n \rangle \\
&= \|x_n - v\|^2 - \|P_{T_n}u_n - y_n\|^2 - \|y_n - x_n\|^2 - 2\langle P_{T_n}u_n - y_n,\ y_n - x_n \rangle + 2a\langle Jy_n,\ y_n - P_{T_n}u_n \rangle \\
&= \|x_n - v\|^2 - \|P_{T_n}u_n - y_n\|^2 - \|y_n - x_n\|^2 - 2\langle y_n + aJy_n - x_n,\ P_{T_n}u_n - y_n \rangle \\
&= \|x_n - v\|^2 - \|P_{T_n}u_n - y_n\|^2 - \|y_n - x_n\|^2 + 2\langle x_n - aJx_n - y_n,\ P_{T_n}u_n - y_n \rangle + 2a\langle Jx_n - Jy_n,\ P_{T_n}u_n - y_n \rangle \\
&\le \|x_n - v\|^2 - \|P_{T_n}u_n - y_n\|^2 - \|y_n - x_n\|^2 + 2a\|Jx_n - Jy_n\|\,\|P_{T_n}u_n - y_n\| \\
&\le \|x_n - v\|^2 - \|P_{T_n}u_n - y_n\|^2 - \|y_n - x_n\|^2 + aL\|x_n - y_n\|^2 + aL\|P_{T_n}u_n - y_n\|^2 \\
&= \|x_n - v\|^2 - (1 - aL)\|P_{T_n}u_n - y_n\|^2 - (1 - aL)\|y_n - x_n\|^2.
\end{align*}
Let $A = I - T$, and define $v_n = (I - \lambda A)x_n$. Therefore, we have the following:
\begin{align*}
\|P_{W_n}(I - \lambda(I - T))x_n - v\|^2 &= \|P_{W_n}v_n - v\|^2 \\
&= \|P_{W_n}v_n - v_n\|^2 + \|v_n - v\|^2 + 2\langle P_{W_n}v_n - v_n,\ v_n - v \rangle \\
&= \|v_n - v\|^2 + \langle P_{W_n}v_n - v_n,\ P_{W_n}v_n - v \rangle + \langle P_{W_n}v_n - v_n,\ v_n - v \rangle \\
&\le \|v_n - v\|^2 - \|P_{W_n}v_n - v_n\|^2 \\
&= \|x_n - \lambda Ax_n - v\|^2 - \|z_n - x_n + \lambda Ax_n\|^2 \\
&= \|x_n - v\|^2 + \lambda^2\|Ax_n\|^2 - 2\lambda\langle x_n - v,\ Ax_n \rangle - \|z_n - x_n\|^2 - \lambda^2\|Ax_n\|^2 - 2\lambda\langle z_n - x_n,\ Ax_n \rangle \\
&= \|x_n - v\|^2 - \|z_n - x_n\|^2 - 2\lambda\langle z_n - v,\ Ax_n \rangle.
\end{align*}
Since $A = I - T$ and $Av = 0$, it follows that
\begin{align*}
\|Tx_n - Tv\|^2 &= \|(I - (I - T))x_n - (I - (I - T))v\|^2 = \|x_n - v - (Ax_n - Av)\|^2 \\
&= \|x_n - v\|^2 - 2\langle x_n - v,\ Ax_n - Av \rangle + \|Ax_n - Av\|^2 \le \|x_n - v\|^2,
\end{align*}
where the last inequality uses the quasi-nonexpansiveness of $T$. It implies that
$$\|Ax_n\|^2 \le 2\langle x_n - v,\ Ax_n \rangle.$$
From the above, it follows that
$$0 \le -\|Ax_n\|^2 + 2\langle Ax_n,\ x_n - v \rangle = -\|Ax_n\|^2 - 2\langle Ax_n,\ z_n - x_n \rangle + 2\langle Ax_n,\ z_n - v \rangle,$$
and hence
$$-2\langle Ax_n,\ z_n - v \rangle \le -2\langle Ax_n,\ z_n - x_n \rangle - \|Ax_n\|^2.$$
Utilizing (6) and the characteristics of the quasi-nonexpansive mapping, we deduce that
\begin{align*}
\|P_{W_n}(I - \lambda(I - T))x_n - v\|^2 &\le \|x_n - v\|^2 - \|z_n - x_n\|^2 - 2\lambda\langle z_n - v,\ Ax_n \rangle \\
&\le \|x_n - v\|^2 - \|z_n - x_n\|^2 - 2\lambda\langle Ax_n,\ z_n - x_n \rangle - \lambda\|Ax_n\|^2 \\
&\le \|x_n - v\|^2 - \|z_n - x_n\|^2 + 2\lambda\|Ax_n\|\,\|z_n - x_n\| - \lambda\|Ax_n\|^2 \\
&\le \|x_n - v\|^2 - \|z_n - x_n\|^2 + \lambda\|z_n - x_n\|^2 + \lambda\|Ax_n\|^2 - \lambda\|Ax_n\|^2 \\
&= \|x_n - v\|^2 - (1 - \lambda)\|z_n - x_n\|^2.
\end{align*}
By invoking (3), (5), and (7), we obtain the following:
\begin{align*}
\|x_{n+1} - v\| &\le \alpha_n\|u - v\| + \beta_n\big\|P_{T_n}(x_n - aJy_n) - v\big\| + \gamma_n\big\|P_{W_n}(I - \lambda(I - T))x_n - v\big\| + (1 - \alpha_n - \beta_n - \gamma_n)\|v\| \\
&\le \alpha_n\|u - v\| + \beta_n\|x_n - v\| + \gamma_n\|x_n - v\| + (1 - \alpha_n - \beta_n - \gamma_n)\|v\| \\
&\le (1 - \alpha_n)\|x_n - v\| + \alpha_n\|u - v\| + (1 - \alpha_n - \beta_n - \gamma_n)\|v\|.
\end{align*}
From Lemma 2, it can be concluded that the sequence $\{x_n\}$ is bounded, as are the sequences $\{y_n\}$ and $\{z_n\}$. □

3. Main Result

Theorem 1.
Let C and Q be nonempty closed convex subsets of real Hilbert spaces H 1 and H 2 , respectively. Let T : C C be a continuous quasi-nonexpansive mapping, and let A and B be defined as in Lemma 8. Assume that the set F = F ( T ) Γ is nonempty. Let x 1 , u C , and let the sequence { x n } be generated by y n , z n , T n , W n , and x n + 1 according to Lemma 8, including the parameter conditions specified therein. Suppose that the following conditions hold:
$$\text{(i) } \sum_{n=1}^{\infty}\alpha_n = \infty \text{ and } \lim_{n \to \infty}\alpha_n = 0, \qquad \text{(ii) } \sum_{n=1}^{\infty}(1 - \alpha_n - \beta_n - \gamma_n) < \infty, \qquad \text{(iii) } 0 < c \le \beta_n, \gamma_n \le d < 1 \text{ for some } c, d > 0.$$
Then, the sequence $\{x_n\}$ converges strongly to $x^* = P_F u$.
Proof. 
Utilizing the definition of $x_n$ and Lemma 4, we obtain the following:
\begin{align*}
\|x_{n+1} - v\|^2 &= \big\|\alpha_n(u - v) + \beta_n\big(P_{T_n}(x_n - aJy_n) - v\big) + \gamma_n\big(P_{W_n}(I - \lambda(I - T))x_n - v\big) - (1 - \alpha_n - \beta_n - \gamma_n)v\big\|^2 \\
&\le \big\|\alpha_n(u - v) + \beta_n(P_{T_n}u_n - v) + \gamma_n(P_{W_n}v_n - v)\big\|^2 - 2(1 - \alpha_n - \beta_n - \gamma_n)\langle v,\ x_{n+1} - v \rangle \\
&\le (\alpha_n + \beta_n + \gamma_n)\alpha_n\|u - v\|^2 + (\alpha_n + \beta_n + \gamma_n)\beta_n\|P_{T_n}u_n - v\|^2 + (\alpha_n + \beta_n + \gamma_n)\gamma_n\|P_{W_n}v_n - v\|^2 \\
&\quad - \beta_n\gamma_n\|P_{T_n}u_n - P_{W_n}v_n\|^2 + 2(1 - \alpha_n - \beta_n - \gamma_n)\|v\|\,\|x_{n+1} - v\| \\
&\le (\alpha_n + \beta_n + \gamma_n)\alpha_n\|u - v\|^2 + (\alpha_n + \beta_n + \gamma_n)\beta_n\big[\|x_n - v\|^2 - (1 - aL)\|y_n - x_n\|^2\big] \\
&\quad + (\alpha_n + \beta_n + \gamma_n)\gamma_n\big[\|x_n - v\|^2 - (1 - \lambda)\|x_n - z_n\|^2\big] + 2(1 - \alpha_n - \beta_n - \gamma_n)\|v\|\,\|x_{n+1} - v\| \\
&\le \alpha_n\|u - v\|^2 + (\beta_n + \gamma_n)\|x_n - v\|^2 - (1 - aL)(\alpha_n + \beta_n + \gamma_n)\beta_n\|y_n - x_n\|^2 \\
&\quad - (1 - \lambda)(\alpha_n + \beta_n + \gamma_n)\gamma_n\|z_n - x_n\|^2 + 2(1 - \alpha_n - \beta_n - \gamma_n)\|v\|\,\|x_{n+1} - v\|.
\end{align*}
It implies that
\begin{align*}
(1 - aL)(\alpha_n + \beta_n + \gamma_n)\beta_n\|y_n - x_n\|^2 + (1 - \lambda)(\alpha_n + \beta_n + \gamma_n)\gamma_n\|z_n - x_n\|^2 \\
\le \alpha_n\|u - v\|^2 + 2(1 - \alpha_n - \beta_n - \gamma_n)\|v\|\,\|x_{n+1} - v\| + \|x_n - v\|^2 - \|x_{n+1} - v\|^2.
\end{align*}
Case I: Define $E_n = \|x_n - v\|^2$ for all $n \in \mathbb{N}$, and suppose there exists an $n_0 \in \mathbb{N}$ such that $E_{n+1} \le E_n$ for all $n \ge n_0$. This implies that $\lim_{n \to \infty}E_n$ exists.
Utilizing (8), along with conditions (i) and (ii), we obtain
$$0 = \lim_{n \to \infty}\|y_n - x_n\| = \lim_{n \to \infty}\big\|P_C(I - aJ)x_n - x_n\big\|$$
and
$$0 = \lim_{n \to \infty}\|z_n - x_n\| = \lim_{n \to \infty}\big\|P_C(I - \lambda(I - T))x_n - x_n\big\|.$$
Since the sequence $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that
$$\limsup_{n \to \infty}\langle u - \bar{v},\ x_n - \bar{v} \rangle = \lim_{j \to \infty}\langle u - \bar{v},\ x_{n_j} - \bar{v} \rangle,$$
where $\bar{v} = P_F u$. Since the sequence $\{x_n\}$ is bounded in $H$, there exists a subsequence of $\{x_n\}$ that converges weakly in $H$. For convenience, we may assume that $\{x_{n_j}\}$ converges weakly to $z$.
From (10), it follows that the subsequence $\{z_{n_j}\}$ of $\{z_n\}$ also converges weakly to $z$.
By the definition of z n , we have
$$\langle (I - \lambda(I - T))x_{n_j} - z_{n_j},\ z_{n_j} - y \rangle \ge 0,$$
for all $y \in C$.
Since $T$ is continuous and the sequences $\{z_{n_j}\}$ and $\{x_{n_j}\}$ converge weakly to $z$, it follows that
$$\langle (I - \lambda(I - T))z - z,\ z - y \rangle \ge 0,$$
for all $y \in C$.
By the properties of the metric projection, we have $z = P_C(I - \lambda A)z$. From Remark 1, it follows that $z \in VI(C, I - T) = F(T)$.
Utilizing (9), the nonexpansiveness of $P_C(I - aJ)$, and the demiclosedness principle, we can conclude that $z \in F\big(P_C(I - aJ)\big) = VI(C, J) = \Gamma$. Therefore, we have
$$z \in F(T) \cap \Gamma = F.$$
From (11) and (12), we have
$$\limsup_{n \to \infty}\langle u - \bar{v},\ x_n - \bar{v} \rangle \le 0.$$
By the definition of $x_n$ and the fact that $\bar{v} = P_F u$, it follows that
\begin{align*}
\|x_{n+1} - \bar{v}\|^2 &= \big\|\alpha_n(u - \bar{v}) + \beta_n\big(P_{T_n}(x_n - aJy_n) - \bar{v}\big) + \gamma_n\big(P_{W_n}(I - \lambda(I - T))x_n - \bar{v}\big) - (1 - \alpha_n - \beta_n - \gamma_n)\bar{v}\big\|^2 \\
&\le \big\|\beta_n(P_{T_n}u_n - \bar{v}) + \gamma_n(P_{W_n}v_n - \bar{v}) - (1 - \alpha_n - \beta_n - \gamma_n)\bar{v}\big\|^2 + 2\alpha_n\langle u - \bar{v},\ x_{n+1} - \bar{v} \rangle \\
&\le \big\|\beta_n(P_{T_n}u_n - \bar{v}) + \gamma_n(P_{W_n}v_n - \bar{v})\big\|^2 - 2(1 - \alpha_n - \beta_n - \gamma_n)\big\langle \bar{v},\ \beta_n(P_{T_n}u_n - \bar{v}) + \gamma_n(P_{W_n}v_n - \bar{v}) - (1 - \alpha_n - \beta_n - \gamma_n)\bar{v}\big\rangle \\
&\quad + 2\alpha_n\langle u - \bar{v},\ x_{n+1} - \bar{v} \rangle \\
&\le \beta_n\|x_n - \bar{v}\|^2 + \gamma_n\|x_n - \bar{v}\|^2 + 2(1 - \alpha_n - \beta_n - \gamma_n)\|\bar{v}\|\,\big\|\beta_n(P_{T_n}u_n - \bar{v}) + \gamma_n(P_{W_n}v_n - \bar{v}) - (1 - \alpha_n - \beta_n - \gamma_n)\bar{v}\big\| \\
&\quad + 2\alpha_n\langle u - \bar{v},\ x_{n+1} - \bar{v} \rangle \\
&\le (1 - \alpha_n)\|x_n - \bar{v}\|^2 + 2\alpha_n\langle u - \bar{v},\ x_{n+1} - \bar{v} \rangle \\
&\quad + 2(1 - \alpha_n - \beta_n - \gamma_n)\|\bar{v}\|\,\big\|\beta_n(P_{T_n}u_n - \bar{v}) + \gamma_n(P_{W_n}v_n - \bar{v}) - (1 - \alpha_n - \beta_n - \gamma_n)\bar{v}\big\|.
\end{align*}
By invoking Lemma 2, along with conditions (i)–(iii) and Equation (13), we deduce that the sequence $\{x_n\}$ converges strongly to $\bar{v} = P_F u$.
Case II: There exists a subsequence $\{E_{n_i}\}$ of $\{E_n\}$ such that $E_{n_i} \le E_{n_i + 1}$ for all $i \in \mathbb{N}$. Define the mapping $\tau : \mathbb{N} \to \mathbb{N}$ by $\tau(n) = \max\{k \le n : E_k < E_{k+1}\}$.
Utilizing Equation (8), we obtain
\begin{align*}
(1 - aL)(\alpha_{\tau(n)} + \beta_{\tau(n)} + \gamma_{\tau(n)})\beta_{\tau(n)}\|y_{\tau(n)} - x_{\tau(n)}\|^2 + (1 - \lambda)(\alpha_{\tau(n)} + \beta_{\tau(n)} + \gamma_{\tau(n)})\gamma_{\tau(n)}\|z_{\tau(n)} - x_{\tau(n)}\|^2 \\
\le \alpha_{\tau(n)}\|u - v\|^2 + 2(1 - \alpha_{\tau(n)} - \beta_{\tau(n)} - \gamma_{\tau(n)})\|v\|\,\|x_{\tau(n)+1} - v\| + \|x_{\tau(n)} - v\|^2 - \|x_{\tau(n)+1} - v\|^2.
\end{align*}
Analogously to Case I, we obtain
$$0 = \lim_{n \to \infty}\|y_{\tau(n)} - x_{\tau(n)}\| = \lim_{n \to \infty}\big\|P_C(I - aJ)x_{\tau(n)} - x_{\tau(n)}\big\|$$
and
$$0 = \lim_{n \to \infty}\|z_{\tau(n)} - x_{\tau(n)}\| = \lim_{n \to \infty}\big\|P_C(I - \lambda(I - T))x_{\tau(n)} - x_{\tau(n)}\big\|$$
and
$$\limsup_{n \to \infty}\langle u - \bar{v},\ x_{\tau(n)} - \bar{v} \rangle \le 0,$$
where $\bar{v} = P_F u$ and
\begin{align*}
\|x_{\tau(n)+1} - \bar{v}\|^2 &\le (1 - \alpha_{\tau(n)})\|x_{\tau(n)} - \bar{v}\|^2 + 2\alpha_{\tau(n)}\langle u - \bar{v},\ x_{\tau(n)+1} - \bar{v} \rangle \\
&\quad + 2(1 - \alpha_{\tau(n)} - \beta_{\tau(n)} - \gamma_{\tau(n)})\|\bar{v}\|\,\big\|\beta_{\tau(n)}(P_{T_{\tau(n)}}u_{\tau(n)} - \bar{v}) + \gamma_{\tau(n)}(P_{W_{\tau(n)}}v_{\tau(n)} - \bar{v}) - (1 - \alpha_{\tau(n)} - \beta_{\tau(n)} - \gamma_{\tau(n)})\bar{v}\big\|.
\end{align*}
From Lemma 2, we have $\lim_{n \to \infty}\|x_{\tau(n)+1} - \bar{v}\| = 0$.
From Lemma 7, we have
$$0 \le \|x_n - \bar{v}\| \le \max\{\|x_{\tau(n)} - \bar{v}\|,\ \|x_n - \bar{v}\|\} \le \|x_{\tau(n)+1} - \bar{v}\|.$$
Since $\lim_{n \to \infty}\|x_{\tau(n)+1} - \bar{v}\| = 0$, we have
$$\lim_{n \to \infty}\|x_n - \bar{v}\| = 0.$$
Hence, we can conclude that the sequence $\{x_n\}$ converges strongly to $\bar{v} = P_F u$. This completes the proof. □

4. Applications

This section explores the application of mapping concepts in addressing various mathematical problems. Theorems 2–4 can be viewed as direct consequences and extensions of the main result established in Theorem 1. Specifically, Theorem 2 adapts the general framework of Theorem 1 to the case where the underlying mappings are nonspreading mappings, thereby simplifying the iterative process under this particular structure. Theorem 3 further develops this idea by considering nonexpansive mappings and refining the convergence conditions, which remain consistent with the scope of the assumptions in Theorem 1. Finally, Theorem 4 illustrates how the general convergence framework of Theorem 1 can also be applied to minimization models via the function g ( x ) , showing that the proposed method unifies both feasibility-type formulations and optimization-type formulations. Thus, Theorems 2–4 provide concrete refinements and applications of the general principle proved in Theorem 1.

4.1. Fixed-Point Problem of Nonlinear Mapping

The concept of nonspreading mapping was introduced by Kohsaka and Takahashi [22] in 2008 to address the fixed-point problem in Hilbert spaces, which are real vector spaces equipped with an inner product structure. Typically, studies on mappings in Hilbert spaces focus on distance-preserving properties, such as nonexpansive mappings, which ensure that the distance between two points does not increase under the mapping.
Nonspreading mappings, however, are defined with a stricter condition. Specifically, a mapping $T : C \to C$ on a nonempty closed convex subset $C$ of a Hilbert space $H$ is called nonspreading if it satisfies the following inequality for all $x, y \in C$:
$$2\|Tx - Ty\|^2 \le \|Tx - y\|^2 + \|x - Ty\|^2.$$
This definition was proposed to generalize certain properties of nonexpansive mappings while introducing new tools for solving fixed-point problems.
Nonspreading mappings are closely related to quasi-nonexpansive mappings, which satisfy the inequality $\|Tx - p\| \le \|x - p\|$ for all $x \in C$ and $p \in F(T)$, where $F(T)$ denotes the set of fixed points of $T$. While quasi-nonexpansive mappings ensure that distances from points to any fixed point do not increase, nonspreading mappings impose an additional structure that further constrains the behavior of the mapping.
It is important to note that if the fixed-point set $F(T)$ is nonempty, then a nonspreading mapping $T$ is also quasi-nonexpansive: taking $y = p \in F(T)$ in the nonspreading inequality gives $2\|Tx - p\|^2 \le \|Tx - p\|^2 + \|x - p\|^2$, which reduces to $\|Tx - p\| \le \|x - p\|$. This makes nonspreading mappings with fixed points a special case of quasi-nonexpansive mappings.
By utilizing the main theorems and properties of nonspreading mappings, it is possible to establish further significant theorems in fixed-point theory and related fields. These properties provide a foundational framework that can be applied to prove strong convergence results and other critical findings in mathematical analysis and optimization.
Theorem 2.
Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. Let $T : C \to C$ be a continuous nonspreading mapping, and let $A, B : H_1 \to H_2$ be bounded linear operators with $L_A$ and $L_B$ the spectral radii of $A^*A$ and $B^*B$, respectively. Assume that $F = F(T) \cap \Gamma \ne \emptyset$. Let $x_1, u \in C$, and consider the sequence $\{x_n\}$ generated by
$$y_n = P_C((I - aJ)x_n), \qquad z_n = P_C(x_n - \lambda(I - T)x_n),$$
where
$$J = \frac{1}{2}A^*(I - P_Q)A + \frac{1}{2}B^*(I - P_Q)B.$$
The sets used in the iteration are defined as follows:
$$T_n = \{x \in H : \langle (I - aJ)x_n - y_n,\ y_n - x \rangle \ge 0\}, \qquad W_n = \{x \in H : \langle (I - \lambda(I - T))x_n - z_n,\ z_n - x \rangle \ge 0\}.$$
The iteration is then given by
$$x_{n+1} = \alpha_n u + \beta_n P_{T_n}(x_n - aJy_n) + \gamma_n P_{W_n}(I - \lambda(I - T))x_n$$
for all $n \ge 1$, where $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\} \subset [0, 1]$ with $\alpha_n + \beta_n + \gamma_n \le 1$, $\lambda \in (0, 1)$, $a \in \left(0, \frac{1}{L}\right)$, and $L = \max\{L_A, L_B\}$. Suppose that the following conditions hold:
$$\text{(i) } \sum_{n=1}^{\infty}\alpha_n = \infty \text{ and } \lim_{n \to \infty}\alpha_n = 0, \qquad \text{(ii) } \sum_{n=1}^{\infty}(1 - \alpha_n - \beta_n - \gamma_n) < \infty, \qquad \text{(iii) } 0 < c \le \beta_n, \gamma_n \le d < 1 \text{ for some } c, d > 0.$$
Then the sequence $\{x_n\}$ converges strongly to $x^* = P_F u$.
Theorem 3.
Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. Let $T : C \to C$ be a nonexpansive mapping, and let $A, B : H_1 \to H_2$ be bounded linear operators with $L_A$ and $L_B$ the spectral radii of $A^*A$ and $B^*B$, respectively. Assume that $F = F(T) \cap \Gamma \ne \emptyset$. Let $x_1, u \in C$, and consider the sequence $\{x_n\}$ generated by
$$y_n = P_C((I - aJ)x_n), \qquad z_n = P_C(x_n - \lambda(I - T)x_n),$$
where
$$J = \frac{1}{2}A^*(I - P_Q)A + \frac{1}{2}B^*(I - P_Q)B.$$
The sets used in the iteration are defined as follows:
$$T_n = \{x \in H : \langle (I - aJ)x_n - y_n,\ y_n - x \rangle \ge 0\}, \qquad W_n = \{x \in H : \langle (I - \lambda(I - T))x_n - z_n,\ z_n - x \rangle \ge 0\}.$$
The iteration is then given by
$$x_{n+1} = \alpha_n u + \beta_n P_{T_n}(x_n - aJy_n) + \gamma_n P_{W_n}(I - \lambda(I - T))x_n$$
for all $n \ge 1$, where $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\} \subset [0, 1]$ with $\alpha_n + \beta_n + \gamma_n \le 1$, $\lambda \in (0, 1)$, $a \in \left(0, \frac{1}{L}\right)$, and $L = \max\{L_A, L_B\}$. Suppose that the following conditions hold:
$$\text{(i) } \sum_{n=1}^{\infty}\alpha_n = \infty \text{ and } \lim_{n \to \infty}\alpha_n = 0, \qquad \text{(ii) } \sum_{n=1}^{\infty}(1 - \alpha_n - \beta_n - \gamma_n) < \infty, \qquad \text{(iii) } 0 < c \le \beta_n, \gamma_n \le d < 1 \text{ for some } c, d > 0.$$
Then the sequence $\{x_n\}$ converges strongly to $x^* = P_F u$.
These theorems illustrate how both nonspreading and nonexpansive mappings can be applied to establish strong convergence results in iterative processes. They demonstrate the versatility and utility of these mapping concepts in fixed-point theory and optimization.

4.2. Minimization Problem

Consider the sets $C \subseteq H_1$ and $Q \subseteq H_2$, where $H_1$ and $H_2$ are Hilbert spaces, and let $A : H_1 \to H_2$ be a bounded linear operator. Let $g : H_1 \to \mathbb{R}$ be a continuously differentiable function. The optimization problem can be formulated as follows:
$$\min_{x \in C} g(x) := \frac{1}{2}\|(I - P_Q)Ax\|^2,$$
where the objective is to find a point $x^* \in C$ such that $g(x^*) \le g(x)$ for all $x \in C$.
During the investigation of this minimization problem, the general constrained minimization problem is introduced as follows:
$$\min_{x \in C} g(x) := \frac{1}{4}\|(I - P_Q)Ax\|^2 + \frac{1}{4}\|(I - P_Q)Bx\|^2.$$
Here, $\Gamma_g$ denotes the set of all solutions to Equation (15), defined by $\Gamma_g = \{x^* \in C : g(x^*) \le g(x),\ \forall x \in C\}$.
The minimization problem described plays a crucial role in various applied mathematics and engineering fields. By minimizing the function g ( x ) , one can find an optimal solution that satisfies specific constraints, making this framework applicable to several real-world problems.
For instance, such minimization problems are often used in signal processing, where the goal is to reconstruct a signal that best matches observed data while satisfying certain constraints. Similarly, in machine learning, optimization techniques like this are used to train models by minimizing loss functions, ensuring the model fits the training data while generalizing well to unseen data.
Moreover, this problem formulation can be applied in control theory, where minimizing an objective function subject to constraints is essential for designing systems that maintain desired performance levels. In resource allocation, such as in network bandwidth distribution, the minimization problem ensures efficient use of limited resources while meeting various user demands.
In economics, optimization problems are used to determine the best allocation of resources or to find equilibria in markets. Lastly, in inverse problems, this minimization framework is vital for deriving unknown parameters from observed data, ensuring that the solutions are consistent with the physical models.
The subsequent results elucidate the relationship between the general split feasibility problem and the general constrained minimization problem.
Lemma 9
([5]). Let $H_1$ and $H_2$ be real Hilbert spaces, and let $C$ and $Q$ be nonempty, closed, convex subsets of $H_1$ and $H_2$, respectively. Let $A, B : H_1 \to H_2$ be bounded linear operators, and suppose that $A^*$ and $B^*$ are the adjoints of $A$ and $B$, respectively. Define the function $g : H_1 \to \mathbb{R}$ as
$$g(x) = \frac{1}{4}\|(I - P_Q)Ax\|^2 + \frac{1}{4}\|(I - P_Q)Bx\|^2 \quad \text{for all } x \in H_1.$$
Assume that $\Gamma \ne \emptyset$. The following conditions are equivalent:
(i) 
$x^* \in \Gamma$,
(ii) 
$x^* \in \Gamma_g$.
The result of this theorem can be directly proved by utilizing Lemma 9 and the main theorem.
Theorem 4.
Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. Let $T : C \to C$ be a continuous quasi-nonexpansive mapping, and let $A, B : H_1 \to H_2$ be bounded linear operators with $L_A$ and $L_B$ denoting the spectral radii of $A^*A$ and $B^*B$, respectively. Let $g : H_1 \to \mathbb{R}$ be the continuously differentiable function defined by $g(x) = \frac{1}{4}\|(I - P_Q)Ax\|^2 + \frac{1}{4}\|(I - P_Q)Bx\|^2$. Assume that the set $F = F(T) \cap \Gamma$ is nonempty. Let $x_1, u \in C$, and let the sequence $\{x_n\}$ be constructed by
$$y_n = P_C(I - a\nabla g)x_n, \qquad z_n = P_C(I - \lambda(I - T))x_n,$$
and
$$T_n = \{x \in H : \langle (I - a\nabla g)x_n - y_n,\ y_n - x \rangle \ge 0\}, \qquad W_n = \{x \in H : \langle (I - \lambda(I - T))x_n - z_n,\ z_n - x \rangle \ge 0\}.$$
The iteration is then given by
$$x_{n+1} = \alpha_n u + \beta_n P_{T_n}(x_n - a\nabla g(y_n)) + \gamma_n P_{W_n}(I - \lambda(I - T))x_n$$
for all $n \ge 1$, where $\alpha_n, \beta_n, \gamma_n \in [0, 1]$ with $\alpha_n + \beta_n + \gamma_n \le 1$, $\lambda \in (0, 1)$, and $a \in \left(0, \frac{1}{L}\right)$ with $L = \max\{L_A, L_B\}$. Suppose that the following conditions hold:
$$\text{(i) } \sum_{n=1}^{\infty}\alpha_n = \infty \text{ and } \lim_{n \to \infty}\alpha_n = 0, \qquad \text{(ii) } \sum_{n=1}^{\infty}(1 - \alpha_n - \beta_n - \gamma_n) < \infty, \qquad \text{(iii) } 0 < c \le \beta_n, \gamma_n \le d < 1 \text{ for some } c, d > 0.$$
Then, the sequence $\{x_n\}$ converges strongly to $x^* = P_{F(T) \cap \Gamma_g}u$, which is a minimizer of the function $g$.

5. Examples

In the following example, we present a numerical illustration of the Enhanced Subgradient Extragradient method developed in Theorem 1. The aim is to demonstrate the practical applicability of the algorithm in the setting of real Hilbert spaces. We provide detailed computations involving the mappings, adjoints, spectral radius, projections, and iteration steps in order to clarify the structure and mechanism of the method. This example not only shows the convergence behavior of the sequence x n but also highlights the validity of the theoretical framework, the structural operation of the algorithm, and its potential for applications in various scenarios. In this way, both the theoretical predictions and the practical performance of the proposed scheme are confirmed.
Example 1.
(Convergence of Iterative Process). We work in $H_1 = H_2 = \mathbb{R}^2$.
Convex Sets:
$$C = \{x \in H_1 : \|x\| \le 1\}, \qquad Q = \{y \in H_2 : \|y\| \le 2\}.$$
Mappings:
$$A(x) = 2x, \qquad B(x) = 2x, \qquad T(x) = \frac{x}{2 + \|x\|}.$$
Adjoints and operator products: Since $A = B = 2I_2$, we have
$$A^* = A^T = 2I_2, \qquad B^* = B^T = 2I_2, \qquad A^*A = 4I_2, \qquad B^*B = 4I_2.$$
Thus, the spectral radii are
$$\rho(A^*A) = \rho(B^*B) = 4, \qquad L_A = L_B = 4, \qquad L = \max\{L_A, L_B\} = 4.$$
Hence, the step parameter must satisfy $a \in \left(0, \frac{1}{L}\right) = \left(0, \frac{1}{4}\right)$. In our case we choose $a = \frac{1}{5}$, which is admissible.
Averaged operator:
$$J(x) = \frac{1}{2}A^*(I - P_Q)Ax + \frac{1}{2}B^*(I - P_Q)Bx = 2(I - P_Q)(2x).$$
Projection Operators:
$$P_C(x) = \begin{cases} x, & \text{if } \|x\| \le 1, \\ \dfrac{x}{\|x\|}, & \text{if } \|x\| > 1, \end{cases} \qquad P_Q(y) = \begin{cases} y, & \text{if } \|y\| \le 2, \\ \dfrac{2y}{\|y\|}, & \text{if } \|y\| > 2. \end{cases}$$
Parameters and initial point:
$$\lambda = \frac{1}{4}, \quad \alpha_n = \frac{1}{n+1}, \quad \beta_n = \frac{1}{2} - \frac{1}{2n}, \quad \gamma_n = \frac{1}{2} - \frac{1}{2n}, \quad x_1 = (0.5, 0.5)^T, \quad u = (0, 0)^T.$$
Iteration scheme: For $n \ge 1$,
$$y_n = P_C(x_n - aJx_n), \qquad z_n = P_C(x_n - \lambda(I - T)x_n),$$
$$T_n = \{x \in H : \langle (I - aJ)x_n - y_n,\ y_n - x \rangle \ge 0\}, \qquad W_n = \{x \in H : \langle x_n - \lambda(x_n - Tx_n) - z_n,\ z_n - x \rangle \ge 0\},$$
$$x_{n+1} = \alpha_n u + \beta_n P_{T_n}(x_n - aJy_n) + \gamma_n P_{W_n}(I - \lambda(I - T))x_n.$$
Iteration steps and calculations:
Step 1: Calculating $y_1, z_1, T_1, W_1$, and $x_2$
Calculate $y_1$:
$$y_1 = P_C\left(x_1 - a\left[\tfrac{1}{2}A^*(I - P_Q)A + \tfrac{1}{2}B^*(I - P_Q)B\right]x_1\right).$$
Given $A(x) = B(x) = 2x$, we have $\|2x_1\| = \sqrt{2} \le 2$, so $Jx_1 = 0$ and
$$y_1 = P_C((0.5, 0.5)^T) = (0.5, 0.5)^T.$$
Calculate $z_1$:
$$z_1 = P_C\big((I - \lambda(I - T))x_1\big).$$
By calculating $T(x_1)$ and substituting,
$$T(x_1) \approx (0.1809, 0.1809)^T, \qquad z_1 \approx (0.4202, 0.4202)^T.$$
Calculate $T_1$ and $W_1$: Based on the properties of convex projection and the initial settings, we find $T_1 = H$ and $W_1 = H$, meaning the conditions are met for all $x \in H$.
Update $x_2$:
$$x_2 = \alpha_1 u + \beta_1 P_{T_1}(x_1 - aJy_1) + \gamma_1 P_{W_1}(I - \lambda(I - T))x_1.$$
With $P_{T_1} = I$ and $P_{W_1} = I$ (and $\beta_1 = \gamma_1 = 0$, $u = (0, 0)^T$), we obtain
$$x_2 = (0, 0)^T.$$
Step 2: Calculating $y_2, z_2, T_2, W_2$, and $x_3$
Calculate $y_2$:
$$y_2 = P_C\left(x_2 - a\left[\tfrac{1}{2}A^*(I - P_Q)A + \tfrac{1}{2}B^*(I - P_Q)B\right]x_2\right).$$
Since $x_2 = (0, 0)^T$, we have
$$y_2 = P_C((0, 0)^T) = (0, 0)^T.$$
Calculate $z_2$:
$$z_2 = P_C\big((I - \lambda(I - T))x_2\big).$$
Since $x_2 = (0, 0)^T$, we have
$$z_2 = P_C((0, 0)^T) = (0, 0)^T.$$
Calculate $T_2$:
$$T_2 = \left\{x \in H : \left\langle x_2 - a\left[\tfrac{1}{2}A^*(I - P_Q)A + \tfrac{1}{2}B^*(I - P_Q)B\right]x_2 - y_2,\ y_2 - x \right\rangle \ge 0\right\}.$$
This holds for all $x \in H$, so $T_2 = H$. Calculate $W_2$:
$$W_2 = \{x \in H : \langle x_2 - \lambda(x_2 - T(x_2)) - z_2,\ z_2 - x \rangle \ge 0\}.$$
This holds for all $x \in H$, so $W_2 = H$. Update $x_3$:
$$x_3 = \alpha_2 u + \beta_2 P_{T_2}(x_2 - aJy_2) + \gamma_2 P_{W_2}(I - \lambda(I - T))x_2.$$
With $P_{T_2} = I$ and $P_{W_2} = I$, we obtain
$$x_3 = (0, 0)^T.$$
Step 3: Calculating $y_3, z_3, T_3, W_3$, and $x_4$
Calculate $y_3$:
$$y_3 = P_C\left(x_3 - a\left[\tfrac{1}{2}A^*(I - P_Q)A + \tfrac{1}{2}B^*(I - P_Q)B\right]x_3\right).$$
Since $x_3 = (0, 0)^T$, we have
$$y_3 = P_C((0, 0)^T) = (0, 0)^T.$$
Calculate $z_3$:
$$z_3 = P_C\big((I - \lambda(I - T))x_3\big).$$
Since $x_3 = (0, 0)^T$, we have
$$z_3 = P_C((0, 0)^T) = (0, 0)^T.$$
Calculate $T_3$:
$$T_3 = \left\{x \in H : \left\langle x_3 - a\left[\tfrac{1}{2}A^*(I - P_Q)A + \tfrac{1}{2}B^*(I - P_Q)B\right]x_3 - y_3,\ y_3 - x \right\rangle \ge 0\right\}.$$
This holds for all $x \in H$, so $T_3 = H$. Calculate $W_3$:
$$W_3 = \{x \in H : \langle x_3 - \lambda(x_3 - T(x_3)) - z_3,\ z_3 - x \rangle \ge 0\}.$$
This holds for all $x \in H$, so $W_3 = H$. Update $x_4$:
$$x_4 = \alpha_3 u + \beta_3 P_{T_3}(x_3 - aJy_3) + \gamma_3 P_{W_3}(I - \lambda(I - T))x_3.$$
With $P_{T_3} = I$ and $P_{W_3} = I$, we obtain
$$x_4 = (0, 0)^T.$$
Table 1 below shows the resulting sequences $\{x_n\}$, $\{y_n\}$, and $\{z_n\}$, where $x_1 = y_1 = z_1 = (0.5, 0.5)^T$, $u = (0, 0)^T$, and $n = N = 4$.
Conclusion
The sequence { x n } converges to ( 0 , 0 ) T , and we have shown that T i = H and W i = H for i = 1 , 2 , 3 , 4 , supporting the convergence criteria specified in Theorem 1.
By this example, we have demonstrated the practical application of the Enhanced Subgradient Extragradient method to solve a fixed-point problem. The iterative process converges to a fixed point, aligning with the theoretical expectations outlined in Theorem 1. This example confirms the theorem’s claims and provides a clear computational illustration of the principles of the theorem.
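The hand computation above can be reproduced with a short Python script. This is a sketch under our reading of the garbled source: in particular, $T(x) = x/(2 + \|x\|)$ is an assumption, and the closed-form half-space projection used for $P_{T_n}$ and $P_{W_n}$ is the standard one.

```python
import numpy as np

def proj_ball(x, r):
    """Projection onto the closed ball of radius r centered at 0."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def proj_halfspace(x, w, d):
    """Projection onto {z : <w, z> <= d}; returns x when w = 0 (whole space)."""
    nw2 = w @ w
    return x if nw2 == 0 else x - max(0.0, w @ x - d) / nw2 * w

P_C = lambda x: proj_ball(x, 1.0)
P_Q = lambda y: proj_ball(y, 2.0)
T   = lambda x: x / (2.0 + np.linalg.norm(x))    # assumed reading of T
J   = lambda x: 2.0 * (2.0 * x - P_Q(2.0 * x))   # J = 2 (I - P_Q)(2x) since A = B = 2I
lam, a = 0.25, 0.2
u = np.zeros(2)
x = np.array([0.5, 0.5])

for n in range(1, 5):
    alpha = 1.0 / (n + 1)
    beta = gamma = 0.5 - 1.0 / (2 * n)
    y = P_C(x - a * J(x))
    z = P_C(x - lam * (x - T(x)))
    # T_n and W_n are half-spaces determined by y_n and z_n.
    w_T = (x - a * J(x)) - y
    w_W = (x - lam * (x - T(x))) - z
    p1 = proj_halfspace(x - a * J(y), w_T, w_T @ y)
    p2 = proj_halfspace(x - lam * (x - T(x)), w_W, w_W @ z)
    x = alpha * u + beta * p1 + gamma * p2
    print(n + 1, x)     # x_2, x_3, ... all equal (0, 0), matching Table 1
```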
Remark 2.
The above numerical example not only verifies Theorem 1 (feasibility formulation) but can also be reinterpreted under the minimization framework of Theorem 4. Hence, although the computational setting is the same, it simultaneously supports both Theorems 1 and 4 by showing consistency between the feasibility and minimization perspectives.
In addition, under the minimization formulation of Theorem 4, with C, Q, P C , P Q , the mappings A, B, T, and all parameters defined as in Example 1, we can formulate the minimization model as g ( x ) .
$$g(x) = \frac{1}{4}\|(I - P_Q)Ax\|^2 + \frac{1}{4}\|(I - P_Q)Bx\|^2 = \frac{1}{2}\|(I - P_Q)(2x)\|^2,$$
with gradient
$$\nabla g(x) = \frac{1}{2}A^*(I - P_Q)Ax + \frac{1}{2}B^*(I - P_Q)Bx = 2(I - P_Q)(2x).$$
Algorithm (Theorem 4 specialized to Example 1): For $n \ge 1$,
$$y_n = P_C(x_n - a\nabla g(x_n)), \qquad z_n = P_C(x_n - \lambda(I - T)x_n),$$
$$T_n = \{x \in H : \langle (I - a\nabla g)x_n - y_n,\ y_n - x \rangle \ge 0\}, \qquad W_n = \{x \in H : \langle x_n - \lambda(x_n - Tx_n) - z_n,\ z_n - x \rangle \ge 0\},$$
$$x_{n+1} = \alpha_n u + \beta_n P_{T_n}(x_n - a\nabla g(y_n)) + \gamma_n P_{W_n}(I - \lambda(I - T))x_n.$$
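Written directly from the formulas above, $g$ and $\nabla g$ take the following concrete form in the Example 1 setting ($A = B = 2I$, $Q$ the ball of radius 2); the test point is an arbitrary illustration.

```python
import numpy as np

def P_Q(y):
    n = np.linalg.norm(y)
    return y if n <= 2 else 2 * y / n

def g(x):
    r = 2 * x - P_Q(2 * x)              # (I - P_Q)(2x)
    return 0.5 * (r @ r)

def grad_g(x):
    return 2 * (2 * x - P_Q(2 * x))     # 2 (I - P_Q)(2x)

x = np.array([3.0, 4.0])                 # ||2x|| = 10 > 2, so g(x) > 0
print(g(x), grad_g(x))                   # g vanishes exactly when 2x lies in Q
```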
Next, we shall compare our method with classical algorithms, including Gradient Descent (GD), Projected Gradient Descent (PGD), and the Projection Method (PM).
Baseline methods for comparison:
$$\mathrm{GD}: \quad x_{n+1} = x_n - \eta_n\nabla g(x_n) \ \text{(backtracking)},$$
$$\mathrm{PGD}: \quad x_{n+1} = P_C(x_n - \eta_n\nabla g(x_n)),$$
$$\mathrm{PM}: \quad x_{n+1} = P_C(x_n - aJx_n), \quad \text{with } J = \frac{1}{2}A^*(I - P_Q)A + \frac{1}{2}B^*(I - P_Q)B.$$
Metrics and stopping rule: We monitor $\mathrm{obj}_k = g(x^{(k)})$,
$$\mathrm{viol}_k = \|(I - P_Q)Ax^{(k)}\|^2 + \|(I - P_Q)Bx^{(k)}\|^2,$$
and the fixed-point residual $\|x^{(k)} - T(x^{(k)})\|$. The stopping condition is
$$\max\big\{|\mathrm{obj}_k - \mathrm{obj}_{k-1}|,\ \mathrm{viol}_k,\ \|x^{(k)} - T(x^{(k)})\|\big\} < 10^{-6} \quad \text{or} \quad k = 5000.$$
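A sketch of this stopping test in Python, assuming callables `g`, `A`, `B`, `P_Q`, and `T` as in the preceding discussion (the function name and argument layout are our own placeholders):

```python
import numpy as np

def should_stop(x_new, x_old, A, B, P_Q, T, g, tol=1e-6):
    """Combined objective-gap / feasibility / fixed-point stopping test."""
    obj_gap = abs(g(x_new) - g(x_old))
    rA = A(x_new) - P_Q(A(x_new))              # (I - P_Q) A x
    rB = B(x_new) - P_Q(B(x_new))              # (I - P_Q) B x
    viol = rA @ rA + rB @ rB                   # feasibility violation
    fp_res = np.linalg.norm(x_new - T(x_new))  # fixed-point residual
    return max(obj_gap, viol, fp_res) < tol
```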
Then, the sequence $\{x_n\}$ converges to $(0, 0)^T$, which is a minimizer of the function $g$. Table 2 below presents results demonstrating that our method achieves faster convergence, smaller feasibility violation, and lower objective values than the classical methods, which supports the practical advantages of the proposed scheme.
Discussion. The proposed algorithm converges in fewer than 1000 iterations, achieving very small feasibility violation and low objective values. By contrast, GD requires more than 3000 iterations to reach similar accuracy, while PGD and PM both need over 2000 iterations. These results confirm that Example 1 not only supports Theorem 1 (feasibility formulation) but also provides strong numerical evidence for Theorem 4 (minimization setting).
Traffic flow analysis typically involves collecting data on parameters such as traffic volume, speed, and density using various methods, including surveys, traffic cameras, and sensors. Linear algebra can also be applied to model traffic flow on a single-lane road. In this context, traffic flow is represented using matrix equations, where the matrix corresponds to traffic density at different time points, and the vector represents the flow of traffic. There is extensive research on traffic flow; see, for example, refs. [23,24,25,26]. In the next example, we solve the traffic flow problem for four roads using the algorithm presented in Theorem 1.
Example 2.
Let us examine a traffic flow problem in a city, where the traffic flows on four roads are depicted in Figure 1. Each road is a one-way street, following the specified directions, with units given in vehicles per hour.
The four linear equations based on the four intersections in Figure 1 can be obtained as follows (Traffic in = Traffic out):
$$\text{Intersection A: } x_4 + 130 = x_1 + 200, \qquad \text{Intersection B: } x_1 + 210 = x_2 + 150,$$
$$\text{Intersection C: } x_2 + 200 = x_3 + 250, \qquad \text{Intersection D: } x_3 + 235 = x_4 + 175.$$
Let $H_1 = H_2 = \mathbb{R}^4$ and $C = H_1$. Take
$$A = \begin{pmatrix} -1 & 0 & 0 & 1 \\ -1 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & -1 & 1 \end{pmatrix}, \qquad b = \begin{pmatrix} 70 \\ 60 \\ 50 \\ 60 \end{pmatrix}, \qquad x^* = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix},$$
where $x^*$ is a solution of the system of linear equations $Ax = b$, with $C = \mathbb{R}^4$ and $Q = \{b\}$.
Next, we will find a solution $x^* \in C$ with $Ax^* \in Q = \{b\}$.
Since $C = H_1$, $Q = \{b\}$, $x^* \in C$, and $Ax^* \in Q$, we obtain $P_Q Ax^* = b = (70, 60, 50, 60)^T$.
The adjoint is
$$A^* = A^T = \begin{pmatrix} -1 & -1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & -1 & -1 \\ 1 & 0 & 0 & 1 \end{pmatrix}.$$
It follows that
$$A^*(I - P_Q)Ax^* = \begin{pmatrix} 2x_1 - x_2 - x_4 + 130 \\ -x_1 + 2x_2 - x_3 - 110 \\ -x_2 + 2x_3 - x_4 + 110 \\ -x_1 - x_3 + 2x_4 - 130 \end{pmatrix}.$$
Let the sequence $x_n = (x_n^{(1)}, x_n^{(2)}, x_n^{(3)}, x_n^{(4)})$ be generated from $x_1 \in C$. Let $T$ be the identity mapping, and take $A = B$ in Theorem 1. Given the parameters $\lambda = 0.25$, $a = \frac{1}{4}$, $\alpha_n = \frac{1}{n+1}$, $\gamma_n = \frac{1}{2} - \frac{1}{2n}$, and $\beta_n = \frac{1}{2} - \frac{1}{2n}$, we can rewrite the iteration of Theorem 1 as follows:
$$y_n = P_C\left(x_n - \frac{1}{4}A^*(I - P_Q)Ax_n\right), \qquad z_n = P_C\big((I - 0.25(I - T))x_n\big),$$
and
$$T_n = \left\{x \in \mathbb{R}^4 : \left\langle x_n - \tfrac{1}{4}A^*(I - P_Q)Ax_n - y_n,\ y_n - x \right\rangle \ge 0\right\}, \qquad W_n = \left\{x \in \mathbb{R}^4 : \langle (I - 0.25(I - T))x_n - z_n,\ z_n - x \rangle \ge 0\right\},$$
$$x_{n+1} = \frac{1}{n+1}u + \left(\frac{1}{2} - \frac{1}{2n}\right)P_{T_n}\left(x_n - \frac{1}{4}A^*(I - P_Q)Ay_n\right) + \left(\frac{1}{2} - \frac{1}{2n}\right)P_{W_n}\big((I - 0.25(I - T))x_n\big)$$
for all $n \ge 1$, where $A^*(I - P_Q)A(\cdot)$ denotes the vector expression displayed above.
Then the sequence $x_n = (x_n^{(1)}, x_n^{(2)}, x_n^{(3)}, x_n^{(4)})$ converges strongly to $x^* = (t - 70,\ t - 10,\ t - 60,\ t)^T$ for some $t \in \mathbb{R}$.
Solution. It is straightforward to check that the spectral radius of the matrix $A^*A$ is $2$; hence $L = 2$, and we can take $a = \frac{1}{4}$. By the definition of $T$, $T$ is a continuous quasi-nonexpansive mapping. Thus, from Theorem 1, we can conclude that the sequence $\{x_n\}$ converges strongly to
$$x^* = (t - 70,\ t - 10,\ t - 60,\ t)^T.$$
Then
$$x_1 = t - 70, \quad x_2 = t - 10, \quad x_3 = t - 60, \quad x_4 = t.$$
Given that traffic flow cannot be negative, consider the case $t = 70$. In this scenario, $x_4 = 70$ and $x_1 = 0$. Applying the condition $x_i \ge 0$ to all equations of the system shows that $t = 70$ is the minimum value required to keep all flows non-negative. Hence,
$$x^* = (0,\ 60,\ 10,\ 70)^T.$$
For instance, if the road between intersections A and B is under construction and there is a need to limit the number of vehicles using it, then the minimum flow $x_4 = 70$ yields $x_1 = 0$, $x_2 = 60$, and $x_3 = 10$, so the road between intersections A and B can be closed.
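The solution family can be verified directly with NumPy; the signed entries of $A$ below follow our reconstruction of the four intersection equations.

```python
import numpy as np

# Every member of the one-parameter family x*(t) solves A x = b.
A = np.array([[-1, 0, 0, 1],
              [-1, 1, 0, 0],
              [ 0, 1,-1, 0],
              [ 0, 0,-1, 1]], dtype=float)
b = np.array([70.0, 60.0, 50.0, 60.0])

for t in (70.0, 100.0):
    x = np.array([t - 70, t - 10, t - 60, t])
    assert np.allclose(A @ x, b)
print(np.linalg.matrix_rank(A))   # rank 3: one degree of freedom remains
```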
Networks consisting of branches and junctions serve as models in various fields such as economics, traffic analysis, and electrical engineering. In such a network model, it is assumed that the total inflow to a junction equals the total outflow from it. In the next example, we solve the flow through a network composed of five junctions by using the algorithm presented in Theorem 1.
Example 3.
Set up a system of linear equations to represent the network shown in Figure 2.
Each of the network’s five junctions gives rise to a linear equation, as follows.
$$\text{Junction 1: } x_1 + x_2 = 20, \qquad \text{Junction 2: } x_4 - x_3 = 20, \qquad \text{Junction 3: } x_2 + x_3 = 80,$$
$$\text{Junction 4: } x_5 - x_1 = 40, \qquad \text{Junction 5: } x_4 - x_5 = 40.$$
Let $H_1 = H_2 = \mathbb{R}^5$ and $C = H_1$. Take
$$A = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & -1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & -1 \end{pmatrix}, \qquad b = \begin{pmatrix} 20 \\ 20 \\ 80 \\ 40 \\ 40 \end{pmatrix}, \qquad x^* = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix},$$
where $x^*$ is a solution of the system of linear equations $Ax = b$, with $C = \mathbb{R}^5$ and $Q = \{b\}$.
Next, we will find a solution $x^* \in C$ with $Ax^* \in Q = \{b\}$.
Since $C = H_1$, $Q = \{b\}$, $x^* \in C$, and $Ax^* \in Q$, we obtain $P_Q Ax^* = b = (20, 20, 80, 40, 40)^T$.
The adjoint is
$$A^* = A^T = \begin{pmatrix} 1 & 0 & 0 & -1 & 0 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & -1 \end{pmatrix}.$$
It follows that
$$A^*(I - P_Q)Ax^* = \begin{pmatrix} 2x_1 + x_2 - x_5 + 20 \\ x_1 + 2x_2 + x_3 - 100 \\ x_2 + 2x_3 - x_4 - 60 \\ -x_3 + 2x_4 - x_5 - 60 \\ -x_1 - x_4 + 2x_5 \end{pmatrix}.$$
Let the sequence $x_n = (x_n^{(1)}, \ldots, x_n^{(5)})$ be generated from $x_1 \in C$. Let $T$ be the identity mapping, and take $A = B$ in Theorem 1. Given the parameters $\lambda = 0.25$, $a = 0.25$, $\alpha_n = \frac{1}{n+1}$, $\gamma_n = \frac{1}{2} - \frac{1}{2n}$, and $\beta_n = \frac{1}{2} - \frac{1}{2n}$, we can rewrite the iteration of Theorem 1 as follows:
$$y_n = P_C\big(x_n - 0.25\,A^*(I - P_Q)Ax_n\big), \qquad z_n = P_C\big((I - 0.25(I - T))x_n\big),$$
and
$$T_n = \left\{x \in \mathbb{R}^5 : \langle x_n - 0.25\,A^*(I - P_Q)Ax_n - y_n,\ y_n - x \rangle \ge 0\right\}, \qquad W_n = \left\{x \in \mathbb{R}^5 : \langle (I - 0.25(I - T))x_n - z_n,\ z_n - x \rangle \ge 0\right\},$$
$$x_{n+1} = \frac{1}{n+1}u + \left(\frac{1}{2} - \frac{1}{2n}\right)P_{T_n}\big(x_n - 0.25\,A^*(I - P_Q)Ay_n\big) + \left(\frac{1}{2} - \frac{1}{2n}\right)P_{W_n}\big((I - 0.25(I - T))x_n\big)$$
for all $n \ge 1$, where $A^*(I - P_Q)A(\cdot)$ denotes the vector expression displayed above.
Then the sequence $x_n = (x_n^{(1)}, \ldots, x_n^{(5)})$ converges strongly to $x^* = (p - 40,\ 60 - p,\ p + 20,\ p + 40,\ p)^T$ for some $p \in \mathbb{R}$.
Solution. It is straightforward to check that the spectral radius of the matrix $A^*A$ is $\frac{5 + \sqrt{5}}{2} \approx 3.618$; hence $L \approx 3.618$, and we can take $a = 0.25 \in \left(0, \frac{1}{L}\right)$. By the definition of $T$, $T$ is a continuous quasi-nonexpansive mapping. Thus, from Theorem 1, we can conclude that the sequence $\{x_n\}$ converges strongly to
$$x^* = (p - 40,\ 60 - p,\ p + 20,\ p + 40,\ p)^T.$$
Then
$$x_1 = p - 40, \quad x_2 = 60 - p, \quad x_3 = p + 20, \quad x_4 = p + 40, \quad x_5 = p,$$
where $p$ is any real number, so this system has infinitely many solutions. Assume that the flow along the branch labeled $x_5$ can be controlled. Based on the solution of this example, it is then possible to regulate the flows corresponding to the other variables. In particular, if $p = 40$, the flow $x_1$ is reduced to zero.
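As in Example 2, the stated family can be checked numerically; the signed entries of $A$ below follow our reconstruction of the junction equations.

```python
import numpy as np

# Every member of the family x*(p) satisfies the five junction equations.
A = np.array([[ 1, 1, 0, 0, 0],
              [ 0, 0,-1, 1, 0],
              [ 0, 1, 1, 0, 0],
              [-1, 0, 0, 0, 1],
              [ 0, 0, 0, 1,-1]], dtype=float)
b = np.array([20.0, 20.0, 80.0, 40.0, 40.0])

for p in (0.0, 40.0, 100.0):
    x = np.array([p - 40, 60 - p, p + 20, p + 40, p])
    assert np.allclose(A @ x, b)
print(np.linalg.matrix_rank(A))   # rank 4: a one-parameter family of solutions
```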
In the following example, we consider the metric projection onto a band
$$C := \{x \in H : \beta_1 \le \langle a, x \rangle \le \beta_2\},$$
where $a \in H$, $a \ne 0$, $\beta_1, \beta_2 \in \mathbb{R}$, and $\beta_1 < \beta_2$. It is clear that $C$ is closed and convex with
$$P_C x = \begin{cases} x - \dfrac{\langle a, x \rangle - \beta_2}{\|a\|^2}\,a, & \text{if } \langle a, x \rangle > \beta_2, \\[4pt] x, & \text{if } \beta_1 \le \langle a, x \rangle \le \beta_2, \\[4pt] x - \dfrac{\langle a, x \rangle - \beta_1}{\|a\|^2}\,a, & \text{if } \langle a, x \rangle < \beta_1. \end{cases}$$
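This band projection is easy to implement; the sample values of $a$, $\beta_1$, and $\beta_2$ in the snippet below anticipate the set $C$ of the next example.

```python
import numpy as np

def proj_band(x, a, beta1, beta2):
    """Projection onto the band {x : beta1 <= <a, x> <= beta2}."""
    s = a @ x
    if s > beta2:
        return x - (s - beta2) / (a @ a) * a
    if s < beta1:
        return x - (s - beta1) / (a @ a) * a
    return x

a = np.array([10.0, 20.0])
x = np.array([9.0, 9.0])          # <a, x> = 270 > 100
p = proj_band(x, a, 1.0, 100.0)
print(p, a @ p)                    # lands on the hyperplane <a, x> = 100
```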
Example 4.
Let $\mathbb{R}$ be the set of real numbers, $H_1 = H_2 = \mathbb{R}^2$,
$$C := \{x = (x_1, x_2) \in H_1 : 1 \le 10x_1 + 20x_2 \le 100\}, \qquad Q := \{y = (y_1, y_2) \in H_2 : 1 \le 4y_1 - 3y_2 \le 20\}.$$
Then
$$P_C x = \begin{cases} (x_1, x_2) - \dfrac{10x_1 + 20x_2 - 100}{500}(10, 20), & \text{if } 10x_1 + 20x_2 > 100, \\[4pt] (x_1, x_2), & \text{if } 1 \le 10x_1 + 20x_2 \le 100, \\[4pt] (x_1, x_2) - \dfrac{10x_1 + 20x_2 - 1}{500}(10, 20), & \text{if } 10x_1 + 20x_2 < 1, \end{cases}$$
for every $x = (x_1, x_2) \in H_1$, and
$$P_Q y = \begin{cases} (y_1, y_2) - \dfrac{4y_1 - 3y_2 - 20}{25}(4, -3), & \text{if } 4y_1 - 3y_2 > 20, \\[4pt] (y_1, y_2), & \text{if } 1 \le 4y_1 - 3y_2 \le 20, \\[4pt] (y_1, y_2) - \dfrac{4y_1 - 3y_2 - 1}{25}(4, -3), & \text{if } 4y_1 - 3y_2 < 1, \end{cases}$$
for every $y = (y_1, y_2) \in H_2$.
Let $T : C \to C$ be defined by $T(x_1, x_2) = \left(\frac{x_1}{10}, \frac{x_2}{10}\right)$, and let
$$A(x_1, x_2) = (2x_1, 2x_2), \qquad B(x_1, x_2) = \left(\frac{x_1}{2}, \frac{x_2}{2}\right).$$
Given the parameters $\lambda = \frac{1}{2}$, $a = \frac{1}{5}$, $\alpha_n = \frac{1}{(n+1)^2}$, $\beta_n = \frac{1}{3} + \frac{1}{3n}$, and $\gamma_n = \frac{1}{3} + \frac{1}{3n}$, we can rewrite the iteration of Theorem 1 as follows:
$$y_n = P_C\left(x_n - \frac{1}{5}\left[\frac{A^*(I - P_Q)A}{2} + \frac{B^*(I - P_Q)B}{2}\right]x_n\right), \qquad z_n = P_C\left(\left(I - \frac{1}{2}(I - T)\right)x_n\right),$$
and
$$T_n = \left\{x \in H : \left\langle \left(I - \frac{1}{5}\left[\frac{A^*(I - P_Q)A}{2} + \frac{B^*(I - P_Q)B}{2}\right]\right)x_n - y_n,\ y_n - x \right\rangle \ge 0\right\},$$
$$W_n = \left\{x \in H : \left\langle \left(I - \frac{1}{2}(I - T)\right)x_n - z_n,\ z_n - x \right\rangle \ge 0\right\},$$
$$x_{n+1} = \frac{1}{(n+1)^2}u + \left(\frac{1}{3} + \frac{1}{3n}\right)P_{T_n}\left(x_n - \frac{1}{5}\left[\frac{A^*(I - P_Q)A}{2} + \frac{B^*(I - P_Q)B}{2}\right]y_n\right) + \left(\frac{1}{3} + \frac{1}{3n}\right)P_{W_n}\left(\left(I - \frac{1}{2}(I - T)\right)x_n\right)$$
for all $n \ge 1$.
Then the sequence { x n } converges strongly to ( 0 , 0 ) .
Solution. It is straightforward to check that the spectral radius of $A^*A$ is $4$ and the spectral radius of $B^*B$ is $\frac{1}{4}$. Then we have $L = 4$, so we can take $a = \frac{1}{5}$. By the definition of $T$, $T$ is a continuous quasi-nonexpansive mapping. Thus, from Theorem 1, we can conclude that $\{x_n\}$ converges strongly to $(0, 0)$.
Figure 3 below shows the behavior of the sequence $\{x_n\}$ with $x_1 = (5, 5)$, $u = (0, 0)$, and $n = N = 20$.

Author Contributions

Conceptualization, A.K. and A.S.; Methodology, A.K.; Formal analysis, A.S.; Investigation, A.S.; Writing-original draft preparation, A.S.; Writing-review and editing, A.K. and A.S.; Supervision, A.K.; Funding acquisition, A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This project is funded by the National Research Council of Thailand (NRCT) and King Mongkut’s Institute of Technology Ladkrabang (No. N42A680467).

Data Availability Statement

The original contributions presented in this study are included in this article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to extend their sincere appreciation to the Research and Innovation Services of King Mongkut’s Institute of Technology Ladkrabang.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Iiduka, H. Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 2012, 236, 1733–1742.
  2. Liu, X.J.; Chen, Z.; Xiao, Y. General viscosity approximation methods for quasi-nonexpansive mappings with applications. J. Inequal. Appl. 2019, 2019, 71.
  3. Sahu, D.R. Applications of accelerated computational methods for quasi-nonexpansive operators to optimization problems. Soft Comput. 2020, 24, 17887–17911.
  4. Zhu, W.; Zhang, J.; Liu, X. Viscosity approximations considering boundary point method for fixed point and variational inequalities of quasi-nonexpansive mappings. Comput. Appl. Math. 2020, 39, 33.
  5. Kangtunyakarn, A. Iterative scheme for finding solution of the general split feasibility problem and the general constrained minimization problems. Filomat 2019, 33, 233–243.
  6. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  7. Thong, D.V.; Reich, S.; Li, X.H.; Tham, P.T.H. An efficient algorithm with double inertial steps for solving split common fixed point problems and an application to signal processing. Comput. Appl. Math. 2025, 44, 102.
  8. Bejenaru, A.; Ciobanescu, C. New partially projective algorithm for split feasibility problems with application to BVP. J. Nonlinear Convex Anal. 2022, 23, 485–500.
  9. Hammad, H.A.; Ur Rehman, H.; De la Sen, M. Convergence behavior of practical iterative schemes for split fixed point problems under fixed and variable stepsize strategies. AIMS Math. 2025, 10, 16068–16104.
  10. Censor, Y.; Segal, A. The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16, 587–600.
  11. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metod. 1976, 12, 747–756.
  12. Long, X.J.; Yang, J.; Cho, Y.J. Modified subgradient extragradient algorithms with a new line-search rule for variational inequalities. Bull. Malays. Math. Sci. Soc. 2023, 46, 140.
  13. Tan, B.; Cho, S.Y. Inertial extragradient methods for solving pseudomonotone variational inequalities with non-Lipschitz mappings and their optimization applications. Appl. Set-Valued Anal. Optim. 2021, 2, 165–192.
  14. Fang, C.J.; Chen, S.L. A subgradient extragradient algorithm for solving multi-valued variational inequality. Appl. Math. Comput. 2014, 229, 123–130.
  15. Khanh, P.Q.; Thong, D.V.; Vinh, N.T. Versions of the subgradient extragradient method for pseudomonotone variational inequalities. Acta Appl. Math. 2020, 170, 319–345.
  16. Vuong, P.T.; Shehu, Y. Convergence of an extragradient-type method for variational inequalities with applications to optimal control problems. Numer. Algorithms 2019, 81, 269–291.
  17. Takahashi, W. Nonlinear and Convex Analysis; Yokohama Publisher: Yokohama, Japan, 2009.
  18. Mainge, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479.
  19. Kanzow, C.; Shehu, Y. Generalized Krasnoselskii–Mann-type iteration for nonexpansive mappings in Hilbert spaces. Comput. Optim. Appl. 2017, 67, 595–620.
  20. Cheawchan, K.; Kangtunyakarn, A. Approximation method for fixed point of nonlinear mapping and variational inequalities with application. Thai J. Math. 2015, 13, 653–672.
  21. Mainge, P.E. A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 47, 1499–1515.
  22. Kohsaka, F.; Takahashi, W. Fixed point theorems for a class of nonlinear mappings related to maximal monotone operators in Banach spaces. Arch. Math. 2008, 91, 166–177.
  23. Adu, I.K.; Boah, D.K. Application of system of linear equations to traffic flow for a network of four one-way streets in Kumasi, Ghana. Int. J. Contemp. Math. Sci. 2014, 9, 653–660.
  24. Farhi, N. A min-plus algebra system theory for traffic networks. Mathematics 2023, 11, 4028.
  25. Shepelev, V.; Glushkov, A.; Slobodin, I.; Balfaqih, M. Studying the relationship between the traffic flow structure, the traffic capacity of intersections, and vehicle-related emissions. Mathematics 2023, 11, 3591.
  26. Singh, N.; Kumar, K.; Goswami, P.; Jafari, H. Analytical method to solve the local fractional vehicular traffic flow model. Math. Methods Appl. Sci. 2021, 45, 3983–4001.
Figure 1. The traffic flows of four roads.
Figure 2. The flows of a network with five junctions.
Figure 3. The convergence of $\{x_n\}$ with initial values $x_1 = (5, 5)$, $u = (0, 0)$, and $n = N = 20$.
Table 1. Summary of iterations.

Iteration $i$ | $x_i$ | $y_i$ | $z_i$
1 | $(0.5, 0.5)^T$ | $(0.5, 0.5)^T$ | $(0.5, 0.5)^T$
2 | $(0, 0)^T$ | $(0, 0)^T$ | $(0, 0)^T$
3 | $(0, 0)^T$ | $(0, 0)^T$ | $(0, 0)^T$
4 | $(0, 0)^T$ | $(0, 0)^T$ | $(0, 0)^T$
Table 2. Comparison of numerical results between the proposed method and GD, PGD, and PM.

Method | Iterations | Feasibility Violation | Objective Value $g(x)$ | CPU Time (s)
Proposed | 920 | $1.0 \times 10^{-6}$ | $2.7 \times 10^{-3}$ | 0.27
GD | 3050 | $7.8 \times 10^{-6}$ | $3.1 \times 10^{-3}$ | 0.56
PGD | 2280 | $3.9 \times 10^{-6}$ | $2.9 \times 10^{-3}$ | 0.44
PM | 2500 | $5.1 \times 10^{-6}$ | $2.8 \times 10^{-3}$ | 0.47