Article

A Double-Inertial Two-Subgradient Extragradient Algorithm for Solving Variational Inequalities with Minimum-Norm Solutions

1
Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2
Center for Research and Innovation, Asia International University, Bukhara 200100, Uzbekistan
3
School of Mathematics, Zhejiang Normal University, Jinhua 321004, China
4
School of Computational Science and Engineering, Georgia Institute of Technology, 225 North Avenue NW, Atlanta, GA 30313, USA
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(12), 1962; https://doi.org/10.3390/math13121962
Submission received: 14 May 2025 / Revised: 11 June 2025 / Accepted: 12 June 2025 / Published: 14 June 2025

Abstract

Variational inequality problems (VIPs) provide a versatile framework for modeling a wide range of real-world applications, including those in economics, engineering, transportation, and image processing. In this paper, we propose a novel iterative algorithm for solving VIPs in real Hilbert spaces. The method integrates a double-inertial mechanism with the two-subgradient extragradient scheme, leading to improved convergence speed and computational efficiency. A distinguishing feature of the algorithm is its self-adaptive step size strategy, which generates a non-monotonic sequence of step sizes without requiring prior knowledge of the Lipschitz constant. Under the assumption of monotonicity for the underlying operator, we establish strong convergence results. Numerical experiments under various initial conditions demonstrate the method’s effectiveness and robustness, confirming its practical advantages and its natural extension of existing techniques for solving VIPs.

1. Introduction

This paper investigates the variational inequality problem (VIP) in real Hilbert spaces, a fundamental mathematical framework with wide-ranging applications in operations research, split feasibility problems, equilibrium analysis, optimization theory, and fixed-point problems.
Let $D$ be a nonempty, closed, and convex subset of a real Hilbert space $X$, endowed with the inner product $\langle\cdot,\cdot\rangle$ and the induced norm $\|\cdot\|$. Given an operator $A : X \to X$, the VIP seeks to find a point $u^* \in D$ such that
\[
\langle A(u^*),\, x - u^* \rangle \ge 0, \quad \forall x \in D.
\]
Variational inequality problems have gained increasing attention in recent mathematical research due to their capacity to model a broad spectrum of phenomena across multiple disciplines. Applications span structural mechanics, economic theory, and optimization [1,2,3], as well as operations research, engineering systems, and the physical sciences [4,5,6,7]. The VIP framework provides a powerful tool for analyzing equilibrium in complex systems, including transportation networks, pricing mechanisms, financial markets, supply chains, ecological networks, and communication infrastructures [8,9,10].
The foundations of finite-dimensional VIP theory were established independently by Smith [11] and Dafermos [12], who applied the framework to traffic flow models. Subsequent developments by Lawphongpanich and Hearn [13] and Panicucci et al. [14] further extended these models to traffic assignment problems under Wardrop’s equilibrium principles. More recent studies have expanded VIP’s applications to economic models involving Nash equilibria, pricing mechanisms, internet-based resource allocation, financial networks, and environmental policy design [15,16,17,18,19,20,21].
Recent advances in iterative algorithms for VIPs have focused on identifying algorithmic properties that ensure convergence under weaker assumptions [22]. Among the various methods developed, two primary classes are widely studied: regularization techniques and projection-based methods. This paper emphasizes projection methods, with a particular focus on their convergence behavior and computational efficiency.
The projected gradient method is one of the most fundamental projection-based approaches for solving VIPs. Its iterative scheme is given by
\[
u_{n+1} = P_D\bigl(u_n - \lambda_n A(u_n)\bigr), \quad n \ge 1,
\]
where $P_D$ denotes the metric projection onto $D$. While this method benefits from simplicity, requiring only one projection per iteration, its convergence relies on strong assumptions: the operator $A$ must be both $L$-Lipschitz-continuous and $\mu$-strongly monotone, with step sizes constrained by $\lambda_n \in \bigl(0, \tfrac{2\mu}{L^2}\bigr)$. These restrictive conditions often limit the method's applicability in practice.
The strong monotonicity condition is particularly restrictive for many real-world problems. Korpelevich’s extragradient method (EM) [23] addresses this limitation by relaxing the assumptions while still ensuring convergence. The EM iteration scheme is given by
\[
\begin{aligned}
y_n &= P_D\bigl(u_n - \lambda_n A(u_n)\bigr),\\
u_{n+1} &= P_D\bigl(u_n - \lambda_n A(y_n)\bigr), \quad n \in \mathbb{N},
\end{aligned}
\]
which requires only that $A$ is monotone and $L$-Lipschitz-continuous, with step sizes $\lambda_n \in \bigl(0, \tfrac{1}{L}\bigr)$. Korpelevich established the weak convergence of the sequence $\{u_n\}$ to a solution of the VIP under these milder conditions.
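To make the two-projection structure concrete, the following minimal NumPy sketch performs extragradient iterations on a toy monotone affine problem; the operator A, the projector proj_D, and the test data are illustrative choices, not taken from the text.

```python
import numpy as np

def extragradient_step(u, A, proj_D, lam):
    """One Korpelevich iteration: y = P_D(u - lam*A(u)); u_next = P_D(u - lam*A(y))."""
    y = proj_D(u - lam * A(u))        # predictor step
    return proj_D(u - lam * A(y))     # corrector step, re-evaluating A at y

# Toy monotone problem: A(u) = M u on the nonnegative orthant; solution u* = 0.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
A = lambda u: M @ u
proj_D = lambda u: np.maximum(u, 0.0)           # projection onto the orthant
u = np.array([1.0, 1.0])
for _ in range(200):
    u = extragradient_step(u, A, proj_D, lam=0.2)   # lam < 1/L with L = ||M||
print(u)   # converges toward the solution (0, 0)
```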
In the past two decades, significant research has been devoted to improving and generalizing the extragradient method, resulting in numerous extensions and refinements [24,25,26,27,28,29,30,31,32].
The standard EM (3) requires two evaluations of the operator A and two projections onto the feasible set D per iteration. However, when D is a general closed and convex set, these projections can become computationally expensive. To mitigate this issue, Censor et al. [33] proposed the subgradient extragradient method (SEM), which replaces the second projection onto D with a computationally simpler projection onto a half-space. The SEM iteration scheme is defined as follows:
\[
\begin{aligned}
y_n &= P_D\bigl(u_n - \lambda_n A(u_n)\bigr),\\
X_n &= \bigl\{ z \in X : \langle u_n - \lambda_n A(u_n) - y_n,\, z - y_n \rangle \le 0 \bigr\},\\
u_{n+1} &= P_{X_n}\bigl(u_n - \lambda_n A(y_n)\bigr), \quad n \in \mathbb{N}.
\end{aligned}
\]
Building on this advancement, Censor et al. [34] introduced the two-subgradient extragradient method (TSEM), which further enhances computational efficiency by utilizing two half-space projections instead of projections onto D . The TSEM algorithm is formulated as
\[
\begin{aligned}
y_n &= P_{D_n}\bigl(u_n - \lambda_n A(u_n)\bigr),\\
D_n &= \bigl\{ u \in X : L(u_n) + \langle \nabla L(u_n),\, u - u_n \rangle \le 0 \bigr\},\\
u_{n+1} &= P_{D_n}\bigl(u_n - \lambda_n A(y_n)\bigr), \quad n \in \mathbb{N}.
\end{aligned}
\]
We now briefly introduce the inertial technique, which has become increasingly influential in the development of iterative schemes for solving VIPs. This approach originates from Polyak’s work [35] on the discretization of second-order dissipative dynamical systems. Its main characteristic is the use of information from the two most recent iterations to compute the next approximation. Although conceptually simple, this modification has proven remarkably effective in accelerating convergence in practice. The inertial idea has inspired numerous advanced algorithms, as reflected in recent studies [1,36,37].
A significant development occurred in 2024 when Alakoya et al. [38] introduced the inertial two-subgradient extragradient method (ITSEM). This method preserves computational efficiency while ensuring strong convergence to the minimum-norm solution of the VIP, a particularly desirable property, as minimum-norm solutions often possess meaningful physical interpretations in applications. The ITSEM algorithm is defined by the following iteration:
\[
\begin{aligned}
w_n &= u_n + \tau_n (u_n - u_{n-1}),\\
y_n &= P_{D_n}\bigl(w_n - \lambda_n A(w_n)\bigr),\\
D_n &= \bigl\{ u \in X : L(w_n) + \langle \nabla L(w_n),\, u - w_n \rangle \le 0 \bigr\},\\
z_n &= P_{D_n}\bigl(w_n - \lambda_n A(y_n)\bigr),\\
u_{n+1} &= (1 - \psi_n - \kappa_n)\, w_n + \kappa_n z_n, \quad n \in \mathbb{N}.
\end{aligned}
\]
Recent advances in projection-based methods have extended from single inertial schemes to more sophisticated double-inertial step techniques for solving VIPs. These enhanced approaches utilize multiple inertial extrapolations to accelerate convergence while operating under relaxed assumptions. Notable contributions include the double-inertial subgradient extragradient method for pseudomonotone VIPs developed by Yao et al. [39], the single projection approach with double-inertial steps proposed by Thong et al. [40], and the method introduced by Li and Wang [41] for quasi-monotone VIPs. These methodological innovations have significantly expanded the theoretical scope and practical utility of projection methods. A comprehensive overview of these developments can be found in [42,43] and the related literature.
Motivated by these recent trends, we propose a novel double-inertial two-subgradient extragradient algorithm that incorporates several key innovations. First, the proposed framework integrates double-inertial steps with a self-adaptive step size strategy, which eliminates the need for prior knowledge of the Lipschitz constant and avoids computationally expensive linesearch procedures. Second, our method guarantees strong convergence under milder assumptions than those required by existing algorithms, making it particularly effective for problems involving non-monotone operators or unknown Lipschitz parameters. Third, through extensive numerical experiments, we demonstrate the algorithm’s superior performance in terms of convergence rate, solution accuracy, and robustness across various problem instances. The results confirm the method’s practical advantages in terms of both efficiency and stability under diverse parameter settings.
The remainder of this paper is organized as follows. Section 2 presents the mathematical preliminaries and key definitions required for the development of our method. In Section 3, we establish the convergence properties of the proposed algorithm through a detailed theoretical analysis. Section 4 reports a series of numerical experiments that benchmark the algorithm against existing techniques. Finally, Section 5 concludes the paper by summarizing the main findings and outlining directions for future research.

2. Preliminaries

Let $D$ be a nonempty, closed, and convex subset of a real Hilbert space $X$. We denote weak convergence by $u_n \rightharpoonup u$ and strong convergence by $u_n \to u$. The weak limit set of the sequence $\{u_n\}$, denoted by $w_\omega(u_n)$, consists of all points $u \in X$ for which there exists a subsequence $\{u_{n_j}\}$ such that $u_{n_j} \rightharpoonup u$, i.e.,
\[
w_\omega(u_n) := \bigl\{ u \in X : u_{n_j} \rightharpoonup u \text{ for some subsequence } \{u_{n_j}\} \subseteq \{u_n\} \bigr\}.
\]
The following basic identities and inequality hold for all $p_1, p_2 \in X$ and $\phi \in \mathbb{R}$:
\[
\begin{aligned}
(1)\;& \|p_1 + p_2\|^2 = \|p_1\|^2 + 2\langle p_1, p_2 \rangle + \|p_2\|^2,\\
(2)\;& \|p_1 + p_2\|^2 \le \|p_1\|^2 + 2\langle p_2,\, p_1 + p_2 \rangle,\\
(3)\;& \|\phi p_1 + (1-\phi) p_2\|^2 = \phi\|p_1\|^2 + (1-\phi)\|p_2\|^2 - \phi(1-\phi)\|p_1 - p_2\|^2.
\end{aligned}
\]
The metric projection operator $P_D : X \to D$ maps each $u \in X$ to its unique nearest point in $D$, defined by
\[
\|u - P_D(u)\| = \inf\{ \|u - v\| : v \in D \}.
\]
The operator $P_D$ is nonexpansive and satisfies the following properties [44]:
(1)
For all $u, v \in X$,
\[
\|P_D(u) - P_D(v)\|^2 \le \langle P_D(u) - P_D(v),\, u - v \rangle.
\]
(2)
For all $u \in X$ and $z \in D$,
\[
z = P_D(u) \iff \langle u - z,\, z - y \rangle \ge 0, \quad \forall y \in D.
\]
(3)
For all $u \in X$ and $y \in D$,
\[
\|P_D(u) - y\|^2 + \|u - P_D(u)\|^2 \le \|u - y\|^2.
\]
(4)
Let $Q = \{ w \in X : \langle v,\, w - u \rangle \le 0 \}$ be a half-space defined by $u, v \in X$ with $v \ne 0$. Then the projection of $x \in X$ onto $Q$ is explicitly given by
\[
P_Q(x) = x - \max\Bigl\{ 0,\ \frac{\langle v,\, x - u \rangle}{\|v\|^2} \Bigr\}\, v.
\]
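Property (4) is what makes half-space projections essentially free compared with projections onto a general set $D$, and it is used by all of the (two-)subgradient extragradient schemes above. A small NumPy sketch of the formula (the function name is ours):

```python
import numpy as np

def project_halfspace(x, u, v):
    """P_Q(x) for Q = {w : <v, w - u> <= 0}, with v != 0:
    P_Q(x) = x - max{0, <v, x - u>/||v||^2} * v."""
    coef = max(0.0, np.dot(v, x - u) / np.dot(v, v))
    return x - coef * v

# Quick check: Q = {w in R^2 : w_1 <= 1}, i.e. u = (1, 0), v = (1, 0).
print(project_halfspace(np.array([3.0, 2.0]),
                        np.array([1.0, 0.0]),
                        np.array([1.0, 0.0])))   # -> [1. 2.]
```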
Definition 1 
([45]). Let $A : X \to X$ be an operator. Then
(1)
$A$ is $\gamma$-strongly monotone if there exists $\gamma > 0$ such that for all $p_1, p_2 \in X$,
\[
\langle A(p_1) - A(p_2),\, p_1 - p_2 \rangle \ge \gamma \|p_1 - p_2\|^2.
\]
(2)
$A$ is $\gamma$-inverse strongly monotone (or $\gamma$-cocoercive) if for some $\gamma > 0$,
\[
\langle A(p_1) - A(p_2),\, p_1 - p_2 \rangle \ge \gamma \|A(p_1) - A(p_2)\|^2.
\]
(3)
$A$ is monotone if for all $p_1, p_2 \in X$,
\[
\langle A(p_1) - A(p_2),\, p_1 - p_2 \rangle \ge 0.
\]
(4)
$A$ is $L$-Lipschitz-continuous for some $L > 0$ if
\[
\|A(p_1) - A(p_2)\| \le L \|p_1 - p_2\|, \quad \forall p_1, p_2 \in X.
\]
Definition 2 
([46]). A function $L : X \to \mathbb{R}$ is said to be Gâteaux-differentiable at $u \in X$ if there exists an element $\nabla L(u) \in X$ such that
\[
\lim_{h \to 0} \frac{L(u + h v) - L(u)}{h} = \langle v,\, \nabla L(u) \rangle, \quad \forall v \in X.
\]
The vector $\nabla L(u)$ is called the Gâteaux derivative of $L$ at $u$. If this holds for all $u \in X$, then $L$ is said to be Gâteaux-differentiable on $X$.
Definition 3 
([44]). Let $L : X \to \mathbb{R}$ be convex. The function $L$ is subdifferentiable at $u \in X$ if the subdifferential
\[
\partial L(u) := \bigl\{ \zeta \in X : L(y) \ge L(u) + \langle \zeta,\, y - u \rangle, \ \forall y \in X \bigr\}
\]
is nonempty. Each $\zeta \in \partial L(u)$ is called a subgradient of $L$ at $u$. If $L$ is Gâteaux-differentiable at $u$, then $\partial L(u) = \{\nabla L(u)\}$.
Definition 4 
([44]). Let $X$ be a real Hilbert space. A function $L : X \to \mathbb{R} \cup \{+\infty\}$ is said to be weakly lower semicontinuous at $u \in X$ if, for every sequence $\{u_n\} \subseteq X$ such that $u_n \rightharpoonup u$, we have
\[
L(u) \le \liminf_{n \to \infty} L(u_n).
\]
Lemma 1 
([44]). Let $L : X \to \mathbb{R} \cup \{+\infty\}$ be a convex function. Then, $L$ is weakly sequentially lower semicontinuous if and only if it is lower semicontinuous.
Lemma 2 
([47]). Let $D := \{ u \in X : L(u) \le 0 \}$, where $L : X \to \mathbb{R}$ is a continuously differentiable convex function. Suppose that the solution set $\mathrm{VI}(D,A)$ of the variational inequality problem (1) is nonempty. Then, for any $u \in D$, the point $u$ is a solution to $\mathrm{VI}(D,A)$ if and only if one of the following conditions holds:
1. 
$A(u) = 0$;
2. 
$u \in \partial D$ and there exists a constant $\eta_u > 0$ such that $A(u) = -\eta_u \nabla L(u)$,
where $\partial D$ denotes the boundary of the set $D$.
Lemma 3 
([31]). Let $D$ be a nonempty, closed, and convex subset of $X$, and let $A : D \to X$ be a continuous monotone operator. Then, for any $z \in D$, the following equivalence holds:
\[
z \in \mathrm{VI}(D,A) \iff \langle A(u),\, u - z \rangle \ge 0 \quad \text{for all } u \in D.
\]
Lemma 4 
([48]). Let $\{\xi_n\} \subset (0,1)$ be a sequence such that $\sum_{n=1}^{\infty} \xi_n = \infty$, and let $\{b_n\}$ be a nonnegative sequence. Suppose that $\{c_n\}$ is a real sequence satisfying
\[
b_{n+1} \le (1 - \xi_n) b_n + \xi_n c_n, \quad \text{for all } n \ge 1.
\]
If every subsequence $\{b_{n_k}\}$ of $\{b_n\}$ satisfying
\[
\liminf_{k \to \infty} \bigl( b_{n_k+1} - b_{n_k} \bigr) \ge 0
\]
also satisfies $\limsup_{k \to \infty} c_{n_k} \le 0$, then $\lim_{n \to \infty} b_n = 0$.
Lemma 5 
([49]). Let $\{\lambda_n\}$ and $\{\Phi_n\}$ be two nonnegative sequences such that
\[
\lambda_{n+1} \le \lambda_n + \Phi_n, \quad \text{for all } n \ge 1.
\]
If $\sum_{n=1}^{\infty} \Phi_n < +\infty$, then the limit $\lim_{n \to \infty} \lambda_n$ exists and is finite.

3. Main Results

We propose a new inertial subgradient extragradient method that incorporates both monotonic and non-monotonic step size strategies to solve variational inequality problems. The iterative sequence { u n } is generated according to the steps described in Algorithm 1. The convergence of the method is established under the following key assumptions.
Assumption 1. 
The following conditions are assumed:
(C1) 
The feasible set $D \subseteq X$ is defined as
\[
D = \{ u \in X : L(u) \le 0 \},
\]
where $L : X \to \mathbb{R}$ is a continuously differentiable convex function with $L_1$-Lipschitz-continuous gradient $\nabla L(\cdot)$. The constant $L_1$ is not assumed to be known in advance.
(C2) 
The operator A : X X satisfies the following conditions:
(a) 
A is monotone and L 2 -Lipschitz-continuous (where L 2 is also unknown);
(b) 
For all $u \in \partial D$, the growth condition $\|A(u)\| \le K \|\nabla L(u)\|$ holds for some constant $K > 0$;
(c) 
The solution set VI ( D , A ) is nonempty.
(C3) 
The parameters satisfy the following conditions:
(a) 
$\delta \in \bigl( 0,\ -K + \sqrt{1 + K^2}\, \bigr)$;
(b) 
The sequence $\{\phi_n\}$ of step-size increments is nonnegative and summable, i.e., $\sum_{n=1}^{\infty} \phi_n < +\infty$.
Remark 1. 
The proposed algorithm possesses the following key properties:
(i)
The half-space construction guarantees that $D \subseteq D_n$ for all $n \ge 1$, which follows directly from the convexity of $L$ and the subgradient inequality.
(ii)
Under the assumptions on $\{\sigma_{i,n}\}$ ($i = 1, 2$) and $\{\beta_n\}$, the inertial terms satisfy
\[
\lim_{n \to \infty} \frac{\tau_{i,n}}{\beta_n} \|u_{n-i+1} - u_{n-i}\| \le \lim_{n \to \infty} \frac{\sigma_{i,n}}{\beta_n} = 0, \quad i = 1, 2.
\]
Consequently, combining this with $\lim_{n \to \infty} \beta_n = 0$ and the uniform lower bound $\psi_n \ge \psi > 0$, we obtain the uniform boundedness result: for any $z \in \Omega$, there exists $M_z > 0$ such that
\[
\sup_{n \ge 1} \Bigl\{ \|z\| + \frac{1 - \beta_n}{\psi_n} \Bigl( \frac{\tau_{1,n}}{\beta_n} \|u_n - u_{n-1}\| + \frac{\tau_{2,n}}{\beta_n} \|u_{n-1} - u_{n-2}\| \Bigr) \Bigr\} \le M_z.
\]
Algorithm 1: Double-Inertial Two-Subgradient Extragradient Method
Require: 
Initial points $u_{-1}, u_0, u_1 \in X$; parameters $\tau_1, \tau_2, \lambda_1 > 0$; sequences $\{\psi_n\} \subset [\psi, 1]$ with $\psi \in (0,1)$, $\{\sigma_{i,n}\} \subset (0,\infty)$ ($i = 1, 2$), and $\{\beta_n\} \subset (0,1)$ such that:
\[
\lim_{n \to \infty} \beta_n = 0, \qquad \sum_{n=1}^{\infty} \beta_n = \infty, \qquad \lim_{n \to \infty} \frac{\sigma_{i,n}}{\beta_n} = 0 \quad (i = 1, 2).
\]
1:
Initialization: Set n = 1
2:
STEP 1 Compute
\[
w_n = u_n + \tau_{1,n}(u_n - u_{n-1}) + \tau_{2,n}(u_{n-1} - u_{n-2}),
\]
where
\[
\tau_{1,n} = \begin{cases} \min\Bigl\{ \tau_1,\ \dfrac{\sigma_{1,n}}{\|u_n - u_{n-1}\|} \Bigr\}, & \text{if } u_n \ne u_{n-1},\\[4pt] \tau_1, & \text{otherwise}, \end{cases}
\]
\[
\tau_{2,n} = \begin{cases} \min\Bigl\{ \tau_2,\ \dfrac{\sigma_{2,n}}{\|u_{n-1} - u_{n-2}\|} \Bigr\}, & \text{if } u_{n-1} \ne u_{n-2},\\[4pt] \tau_2, & \text{otherwise}. \end{cases}
\]
Then set
\[
p_n = \beta_n (1 - \psi_n) u_n + (1 - \beta_n) w_n.
\]
3:
STEP 2 Define the half-space
\[
D_n = \bigl\{ u \in X : L(p_n) + \langle \nabla L(p_n),\, u - p_n \rangle \le 0 \bigr\}.
\]
Compute
\[
y_n = P_{D_n}\bigl( p_n - \lambda_n A(p_n) \bigr).
\]
4:
if $L(y_n) \le 0$ and $p_n - y_n = 0$ then
5:
    return  y n (solution found)
6:
else
7:
    Proceed to STEP 3.
8:
STEP 3 Compute
\[
u_{n+1} = P_{D_n}\bigl( p_n - \lambda_n A(y_n) \bigr).
\]
9:
STEP 4 Update the step size
\[
\lambda_{n+1} = \begin{cases} \min\Bigl\{ \lambda_n + \phi_n,\ \dfrac{\delta \|p_n - y_n\|}{\|A(p_n) - A(y_n)\| + \|\nabla L(p_n) - \nabla L(y_n)\|} \Bigr\}, & \text{if } \|A(p_n) - A(y_n)\| + \|\nabla L(p_n) - \nabla L(y_n)\| \ne 0,\\[6pt] \lambda_n + \phi_n, & \text{otherwise}. \end{cases}
\]
10:
Increment: Set $n \leftarrow n + 1$ and return to STEP 1.
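To make the pseudocode concrete, the following NumPy sketch gives one possible finite-dimensional reading of Algorithm 1. The caller supplies the operator A, the constraint function L with its gradient grad_L, and the constant parameters; the sequences $\beta_n$, $\sigma_{i,n}$, $\phi_n$ are fixed to the choices reported in Section 4. The half-space projection uses the closed-form formula from Section 2. This is an illustrative sketch, not the authors' reference implementation.

```python
import numpy as np

def project_halfspace(x, point, normal):
    """P_Q(x) for the half-space Q = {w : <normal, w - point> <= 0} (normal != 0)."""
    coef = max(0.0, np.dot(normal, x - point) / np.dot(normal, normal))
    return x - coef * normal

def project_Dn(x, p, L, grad_L):
    """Projection onto D_n = {u : L(p) + <grad L(p), u - p> <= 0}.
    D_n is the half-space {u : <g, u - q> <= 0} with g = grad L(p) and
    q = p - L(p) * g / ||g||^2."""
    g = grad_L(p)
    gg = np.dot(g, g)
    if gg == 0.0:                       # degenerate case: D_n is the whole space
        return x
    q = p - (L(p) / gg) * g
    return project_halfspace(x, q, g)

def alg1(u_init, A, L, grad_L, tau1=0.65, tau2=0.65, lam=0.45, psi=0.7,
         delta=0.25, n_iters=200):
    """Sketch of Algorithm 1 with beta_n = 1/(n+1), sigma_{i,n} = 100/(n+1)^2,
    phi_n = 20/(2n+5)^2 (the choices used in Section 4)."""
    u_prev2 = u_prev = u = np.asarray(u_init, dtype=float)
    for n in range(1, n_iters + 1):
        beta, sigma, phi = 1.0/(n + 1), 100.0/(n + 1)**2, 20.0/(2*n + 5)**2
        d1, d2 = np.linalg.norm(u - u_prev), np.linalg.norm(u_prev - u_prev2)
        t1 = min(tau1, sigma / d1) if d1 > 0 else tau1        # tau_{1,n}
        t2 = min(tau2, sigma / d2) if d2 > 0 else tau2        # tau_{2,n}
        w = u + t1*(u - u_prev) + t2*(u_prev - u_prev2)       # double-inertial step
        p = beta*(1 - psi)*u + (1 - beta)*w                   # STEP 1
        y = project_Dn(p - lam*A(p), p, L, grad_L)            # STEP 2
        if L(y) <= 0 and np.allclose(p, y):                   # stopping test
            return y
        u_prev2, u_prev = u_prev, u
        u = project_Dn(p - lam*A(y), p, L, grad_L)            # STEP 3
        denom = (np.linalg.norm(A(p) - A(y))
                 + np.linalg.norm(grad_L(p) - grad_L(y)))
        lam = min(lam + phi, delta*np.linalg.norm(p - y)/denom) \
              if denom > 0 else lam + phi                     # STEP 4
    return u
```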
The following lemma is instrumental in establishing the boundedness of the iterative sequence and serves as a foundation for the convergence analysis of the proposed algorithm.
Lemma 6. 
Let { λ n } be the sequence generated by the update rule in Algorithm 1. Then,
\[
\lim_{n \to \infty} \lambda_n = \lambda \in \Bigl[ \min\Bigl\{ \frac{\delta}{L_2 + L_1},\ \lambda_1 \Bigr\},\ \lambda_1 + \Phi \Bigr],
\]
where $\Phi := \sum_{n=1}^{\infty} \phi_n$.
Proof. 
Assume that for all $n \ge 1$,
\[
\|A(p_n) - A(y_n)\| + \|\nabla L(p_n) - \nabla L(y_n)\| \ne 0.
\]
Since $A$ is $L_2$-Lipschitz-continuous and $\nabla L$ is $L_1$-Lipschitz-continuous, we have
\[
\|A(p_n) - A(y_n)\| + \|\nabla L(p_n) - \nabla L(y_n)\| \le (L_2 + L_1) \|p_n - y_n\|.
\]
Thus,
\[
\frac{\delta \|p_n - y_n\|}{\|A(p_n) - A(y_n)\| + \|\nabla L(p_n) - \nabla L(y_n)\|} \ge \frac{\delta}{L_2 + L_1}.
\]
From the update rule, it follows that
\[
\lambda_{n+1} \le \lambda_n + \phi_n \quad \text{for all } n \ge 1,
\]
so the sequence $\{\lambda_n\}$ is bounded above by $\lambda_1 + \Phi$ and bounded below by $\min\bigl\{ \tfrac{\delta}{L_2 + L_1},\ \lambda_1 \bigr\}$. Applying Lemma 5, we conclude that the limit $\lim_{n \to \infty} \lambda_n$ exists and is finite. Denoting this limit by $\lambda$, we obtain
\[
\lambda \in \Bigl[ \min\Bigl\{ \frac{\delta}{L_2 + L_1},\ \lambda_1 \Bigr\},\ \lambda_1 + \Phi \Bigr].
\]
Lemma 7. 
Let { p n } and { v n } be sequences generated by Algorithm 1. Then, for any solution u ^ VI ( D , A ) , the following estimate holds:
\[
\|u_{n+1} - \hat{u}\|^2 \le \|p_n - \hat{u}\|^2 - \Bigl( 1 - \frac{\delta^2 \lambda_n^2}{\lambda_{n+1}^2} - \frac{2\delta K \lambda_n}{\lambda_{n+1}} \Bigr) \|p_n - y_n\|^2.
\]
Proof. 
From the step size update rule, we observe that
\[
\lambda_{n+1} = \min\Bigl\{ \lambda_n + \phi_n,\ \frac{\delta \|p_n - y_n\|}{\|A(p_n) - A(y_n)\| + \|\nabla L(p_n) - \nabla L(y_n)\|} \Bigr\} \le \frac{\delta \|p_n - y_n\|}{\|A(p_n) - A(y_n)\| + \|\nabla L(p_n) - \nabla L(y_n)\|}.
\]
This inequality immediately yields the key estimate
\[
\|A(p_n) - A(y_n)\| + \|\nabla L(p_n) - \nabla L(y_n)\| \le \frac{\delta}{\lambda_{n+1}} \|p_n - y_n\|, \quad \forall n \ge 1.
\]
Let $\hat{u} \in \mathrm{VI}(D,A)$ be a solution. For simplicity, define $v_n = p_n - \lambda_n A(y_n)$, which gives $u_{n+1} = P_{D_n}(v_n)$. Since $\hat{u} \in D \subseteq D_n$, the projection property (6) yields
\[
\|u_{n+1} - \hat{u}\|^2 \le \|v_n - \hat{u}\|^2 - \|v_n - P_{D_n}(v_n)\|^2.
\]
We now expand these terms systematically:
\[
\begin{aligned}
\|v_n - \hat{u}\|^2 - \|v_n - u_{n+1}\|^2
&= \|(p_n - \hat{u}) - \lambda_n A(y_n)\|^2 - \|(p_n - u_{n+1}) - \lambda_n A(y_n)\|^2\\
&= \|p_n - \hat{u}\|^2 - 2\lambda_n \langle p_n - \hat{u},\, A(y_n) \rangle - \|p_n - u_{n+1}\|^2 + 2\lambda_n \langle p_n - u_{n+1},\, A(y_n) \rangle\\
&= \|p_n - \hat{u}\|^2 - \|p_n - u_{n+1}\|^2 + 2\lambda_n \langle \hat{u} - u_{n+1},\, A(y_n) \rangle\\
&= \|p_n - \hat{u}\|^2 - \bigl[ \|p_n - y_n\|^2 + \|y_n - u_{n+1}\|^2 + 2\langle p_n - y_n,\, y_n - u_{n+1} \rangle \bigr]\\
&\quad + 2\lambda_n \langle \hat{u} - y_n,\, A(y_n) \rangle + 2\lambda_n \langle y_n - u_{n+1},\, A(y_n) \rangle\\
&= \|p_n - \hat{u}\|^2 - \|p_n - y_n\|^2 - \|y_n - u_{n+1}\|^2 + 2\lambda_n \langle \hat{u} - y_n,\, A(y_n) \rangle\\
&\quad + 2\langle u_{n+1} - y_n,\, p_n - \lambda_n A(y_n) - y_n \rangle\\
&= \|p_n - \hat{u}\|^2 - \|p_n - y_n\|^2 - \|y_n - u_{n+1}\|^2 + 2\lambda_n \langle \hat{u} - y_n,\, A(y_n) \rangle\\
&\quad + 2\langle u_{n+1} - y_n,\, p_n - \lambda_n A(p_n) - y_n \rangle + 2\lambda_n \langle u_{n+1} - y_n,\, A(p_n) - A(y_n) \rangle.
\end{aligned}
\]
Since $y_n = P_{D_n}(p_n - \lambda_n A(p_n))$ and $u_{n+1} \in D_n$, the projection property yields
\[
\langle p_n - \lambda_n A(p_n) - y_n,\, u_{n+1} - y_n \rangle \le 0.
\]
Combining (10) with Young’s inequality, we obtain
\[
2\lambda_n \langle u_{n+1} - y_n,\, A(p_n) - A(y_n) \rangle \le \|u_{n+1} - y_n\|^2 + \lambda_n^2 \|A(p_n) - A(y_n)\|^2 \le \|u_{n+1} - y_n\|^2 + \frac{\delta^2 \lambda_n^2}{\lambda_{n+1}^2} \|p_n - y_n\|^2.
\]
Furthermore, the monotonicity of A implies
\[
\langle \hat{u} - y_n,\, A(y_n) \rangle \le \langle \hat{u} - y_n,\, A(\hat{u}) \rangle.
\]
Substituting (12)–(15) into (11) produces the key estimate
\[
\|u_{n+1} - \hat{u}\|^2 \le \|p_n - \hat{u}\|^2 - \Bigl( 1 - \frac{\delta^2 \lambda_n^2}{\lambda_{n+1}^2} \Bigr) \|p_n - y_n\|^2 + 2\lambda_n \langle \hat{u} - y_n,\, A(\hat{u}) \rangle.
\]
We now analyze two distinct cases:
Case 1: 
$A(\hat{u}) = 0$. In this scenario, the desired result follows immediately from (16).
Case 2: 
$A(\hat{u}) \ne 0$. By Lemma 2, we have
  • $\hat{u} \in \partial D$;
  • There exists $\eta_{\hat{u}} > 0$ such that $A(\hat{u}) = -\eta_{\hat{u}} \nabla L(\hat{u})$.
Since $\hat{u} \in \partial D$, we know $L(\hat{u}) = 0$. Applying the subdifferential inequality yields
\[
L(y_n) \ge L(\hat{u}) + \langle \nabla L(\hat{u}),\, y_n - \hat{u} \rangle = \frac{1}{\eta_{\hat{u}}} \langle A(\hat{u}),\, \hat{u} - y_n \rangle.
\]
This inequality implies
\[
\langle \hat{u} - y_n,\, A(\hat{u}) \rangle \le \eta_{\hat{u}}\, L(y_n).
\]
Furthermore, since $y_n \in D_n$, the half-space construction gives
\[
L(p_n) + \langle \nabla L(p_n),\, y_n - p_n \rangle \le 0.
\]
Applying the subdifferential inequality, we obtain
\[
L(y_n) + \langle \nabla L(y_n),\, p_n - y_n \rangle \le L(p_n).
\]
Combining inequalities (18) and (19) yields
\[
L(y_n) \le \langle \nabla L(y_n) - \nabla L(p_n),\, y_n - p_n \rangle.
\]
From (17) and (20), we derive the key estimate
\[
\langle \hat{u} - y_n,\, A(\hat{u}) \rangle \le \eta_{\hat{u}} \langle \nabla L(y_n) - \nabla L(p_n),\, y_n - p_n \rangle.
\]
We note that, by Assumption 1 C2(b), the constant $\eta_{\hat{u}}$ satisfies
\[
\eta_{\hat{u}} \le K,
\]
since $\|A(\hat{u})\| = \eta_{\hat{u}} \|\nabla L(\hat{u})\| \le K \|\nabla L(\hat{u})\|$ with $\nabla L(\hat{u}) \ne 0$.
Consequently, we derive the following estimates:
\[
2\lambda_n \eta_{\hat{u}} \langle \nabla L(y_n) - \nabla L(p_n),\, y_n - p_n \rangle \le 2\lambda_n \eta_{\hat{u}} \|\nabla L(y_n) - \nabla L(p_n)\|\, \|y_n - p_n\| \le 2\lambda_n K \|\nabla L(y_n) - \nabla L(p_n)\|\, \|y_n - p_n\|.
\]
Substituting (10), (21) and (22) into (16) yields
\[
\begin{aligned}
\|u_{n+1} - \hat{u}\|^2
&\le \|p_n - \hat{u}\|^2 - \Bigl( 1 - \frac{\delta^2 \lambda_n^2}{\lambda_{n+1}^2} \Bigr) \|p_n - y_n\|^2 + 2\lambda_n K \|\nabla L(y_n) - \nabla L(p_n)\|\, \|y_n - p_n\|\\
&\le \|p_n - \hat{u}\|^2 - \Bigl( 1 - \frac{\delta^2 \lambda_n^2}{\lambda_{n+1}^2} \Bigr) \|p_n - y_n\|^2 + \frac{2\delta K \lambda_n}{\lambda_{n+1}} \|p_n - y_n\|^2\\
&= \|p_n - \hat{u}\|^2 - \Bigl( 1 - \frac{\delta^2 \lambda_n^2}{\lambda_{n+1}^2} - \frac{2\delta K \lambda_n}{\lambda_{n+1}} \Bigr) \|p_n - y_n\|^2,
\end{aligned}
\]
which establishes the desired inequality (9). □
The following fundamental lemma establishes the boundedness of the iterative sequences generated by our algorithm.
Lemma 8. 
Under Assumption 1, the sequences { u n } and { p n } generated by Algorithm 1 are bounded.
Proof. 
Since the limit of $\{\lambda_n\}$ exists by Lemma 6, we have $\lim_{n \to \infty} \lambda_n = \lim_{n \to \infty} \lambda_{n+1} = \lambda$. From the parameter conditions in Assumption 1, we derive
\[
\lim_{n \to \infty} \Bigl( 1 - \frac{\delta^2 \lambda_n^2}{\lambda_{n+1}^2} - \frac{2\delta K \lambda_n}{\lambda_{n+1}} \Bigr) = 1 - \delta^2 - 2\delta K > 0.
\]
Consequently, there exists an integer $n_0 \ge 1$ such that for all $n \ge n_0$,
\[
1 - \frac{\delta^2 \lambda_n^2}{\lambda_{n+1}^2} - \frac{2\delta K \lambda_n}{\lambda_{n+1}} > 0.
\]
From inequality (9), we deduce that for all $n \ge n_0$,
\[
\|u_{n+1} - \hat{u}\| \le \|p_n - \hat{u}\|.
\]
Using the definition of { p n } and the bound (8), we systematically derive
\[
\begin{aligned}
\|p_n - \hat{u}\| &= \|\beta_n(1-\psi_n)u_n + (1-\beta_n)w_n - \hat{u}\|\\
&= \|\beta_n[(1-\psi_n)u_n - \hat{u}] + (1-\beta_n)(w_n - \hat{u})\|\\
&\le \beta_n \|(1-\psi_n)u_n - \hat{u}\| + (1-\beta_n) \|w_n - \hat{u}\|\\
&= \beta_n \|(1-\psi_n)u_n - \hat{u}\| + (1-\beta_n) \|u_n + \tau_{1,n}(u_n - u_{n-1}) + \tau_{2,n}(u_{n-1} - u_{n-2}) - \hat{u}\|\\
&\le \beta_n \bigl[ (1-\psi_n)\|u_n - \hat{u}\| + \psi_n\|\hat{u}\| \bigr] + (1-\beta_n) \bigl[ \|u_n - \hat{u}\| + \tau_{1,n}\|u_n - u_{n-1}\| + \tau_{2,n}\|u_{n-1} - u_{n-2}\| \bigr]\\
&= (1 - \beta_n\psi_n)\|u_n - \hat{u}\| + \beta_n\psi_n\|\hat{u}\| + (1-\beta_n)\bigl[ \tau_{1,n}\|u_n - u_{n-1}\| + \tau_{2,n}\|u_{n-1} - u_{n-2}\| \bigr]\\
&= (1 - \beta_n\psi_n)\|u_n - \hat{u}\| + \beta_n\psi_n \Bigl[ \|\hat{u}\| + \frac{1-\beta_n}{\psi_n}\Bigl( \frac{\tau_{1,n}}{\beta_n}\|u_n - u_{n-1}\| + \frac{\tau_{2,n}}{\beta_n}\|u_{n-1} - u_{n-2}\| \Bigr) \Bigr]\\
&\le (1 - \beta_n\psi_n)\|u_n - \hat{u}\| + \beta_n\psi_n M_{\hat{u}}, \quad \forall n \ge n_0.
\end{aligned}
\]
Combining the inequalities (24) and (25), we obtain the recursive estimate:
\[
\begin{aligned}
\|u_{n+1} - \hat{u}\| &\le \|p_n - \hat{u}\| \le (1 - \beta_n\psi_n)\|u_n - \hat{u}\| + \beta_n\psi_n M_{\hat{u}}\\
&\le \max\bigl\{ \|u_n - \hat{u}\|,\ M_{\hat{u}} \bigr\} \le \cdots \le \max\bigl\{ \|u_{n_0} - \hat{u}\|,\ M_{\hat{u}} \bigr\}, \quad \forall n \ge n_0.
\end{aligned}
\]
This recursive bound establishes the boundedness of both { u n } and { p n } , thus completing the proof of Lemma 8. □
Lemma 9. 
Let $\{u_n\}$ be generated by Algorithm 1 under Assumption 1. Then for any solution $u^* \in \mathrm{VI}(D,A)$, the following estimate holds:
\[
\|u_{n+1} - u^*\|^2 \le (1 - \gamma_n)\|u_n - u^*\|^2 + \varphi_n - \Bigl( 1 - \frac{\delta^2 \lambda_n^2}{\lambda_{n+1}^2} - \frac{2\delta K \lambda_n}{\lambda_{n+1}} \Bigr) \|p_n - y_n\|^2,
\]
where $\gamma_n = \dfrac{2\beta_n\psi_n}{1 - \beta_n + \beta_n\psi_n}$ and
\[
\begin{aligned}
\varphi_n = {}& \frac{\gamma_n\beta_n}{2\psi_n}\|u_n - u^*\|^2 + \frac{\gamma_n(1-\beta_n)^2}{2\psi_n\beta_n}\Bigl[ \tau_{1,n}^2\|u_n - u_{n-1}\|^2 + \tau_{2,n}^2\|u_{n-1} - u_{n-2}\|^2\\
&+ 2\tau_{1,n}\|u_n - u_{n-1}\|\,\|u_n - u^*\| + 2\tau_{2,n}\|u_{n-1} - u_{n-2}\|\,\|u_n - u^*\|\\
&+ 2\tau_{1,n}\tau_{2,n}\|u_n - u_{n-1}\|\,\|u_{n-1} - u_{n-2}\| \Bigr] + \gamma_n\langle u^*,\, u_n - p_n\rangle + \gamma_n\langle u^*,\, u^* - u_n\rangle.
\end{aligned}
\]
Proof. 
Using the definition of p n , we derive
\[
\begin{aligned}
\|p_n - u^*\|^2 &= \|\beta_n(1-\psi_n)u_n + (1-\beta_n)w_n - u^*\|^2\\
&= \|\beta_n[(1-\psi_n)u_n - u^*] + (1-\beta_n)(w_n - u^*)\|^2\\
&\le (1-\beta_n)^2\|w_n - u^*\|^2 + 2\beta_n\langle (1-\psi_n)u_n - u^*,\, p_n - u^*\rangle\\
&= (1-\beta_n)^2\|u_n + \tau_{1,n}(u_n - u_{n-1}) + \tau_{2,n}(u_{n-1} - u_{n-2}) - u^*\|^2\\
&\quad + 2\beta_n(1-\psi_n)\langle u_n - u^*,\, p_n - u^*\rangle + 2\beta_n\psi_n\langle -u^*,\, p_n - u^*\rangle.
\end{aligned}
\]
First, we expand the squared norm as follows:
\[
\begin{aligned}
\|u_n &+ \tau_{1,n}(u_n - u_{n-1}) + \tau_{2,n}(u_{n-1} - u_{n-2}) - u^*\|^2\\
={}& \|u_n - u^*\|^2 + \tau_{1,n}^2\|u_n - u_{n-1}\|^2 + \tau_{2,n}^2\|u_{n-1} - u_{n-2}\|^2\\
&+ 2\tau_{1,n}\langle u_n - u_{n-1},\, u_n - u^*\rangle + 2\tau_{2,n}\langle u_{n-1} - u_{n-2},\, u_n - u^*\rangle + 2\tau_{1,n}\tau_{2,n}\langle u_n - u_{n-1},\, u_{n-1} - u_{n-2}\rangle\\
\le{}& \|u_n - u^*\|^2 + \tau_{1,n}^2\|u_n - u_{n-1}\|^2 + \tau_{2,n}^2\|u_{n-1} - u_{n-2}\|^2\\
&+ 2\tau_{1,n}\|u_n - u_{n-1}\|\,\|u_n - u^*\| + 2\tau_{2,n}\|u_{n-1} - u_{n-2}\|\,\|u_n - u^*\| + 2\tau_{1,n}\tau_{2,n}\|u_n - u_{n-1}\|\,\|u_{n-1} - u_{n-2}\|.
\end{aligned}
\]
Substituting (27) into (26) yields
\[
\begin{aligned}
\|p_n - u^*\|^2 \le{}& (1-\beta_n)^2\bigl[ \|u_n - u^*\|^2 + \tau_{1,n}^2\|u_n - u_{n-1}\|^2 + \tau_{2,n}^2\|u_{n-1} - u_{n-2}\|^2\\
&+ 2\tau_{1,n}\|u_n - u_{n-1}\|\,\|u_n - u^*\| + 2\tau_{2,n}\|u_{n-1} - u_{n-2}\|\,\|u_n - u^*\|\\
&+ 2\tau_{1,n}\tau_{2,n}\|u_n - u_{n-1}\|\,\|u_{n-1} - u_{n-2}\| \bigr]\\
&+ 2\beta_n(1-\psi_n)\|u_n - u^*\|\,\|p_n - u^*\| + 2\beta_n\psi_n\langle u^*,\, u^* - p_n\rangle\\
\le{}& (1-\beta_n)^2\bigl[ \|u_n - u^*\|^2 + \tau_{1,n}^2\|u_n - u_{n-1}\|^2 + \tau_{2,n}^2\|u_{n-1} - u_{n-2}\|^2\\
&+ 2\tau_{1,n}\|u_n - u_{n-1}\|\,\|u_n - u^*\| + 2\tau_{2,n}\|u_{n-1} - u_{n-2}\|\,\|u_n - u^*\|\\
&+ 2\tau_{1,n}\tau_{2,n}\|u_n - u_{n-1}\|\,\|u_{n-1} - u_{n-2}\| \bigr]\\
&+ \beta_n(1-\psi_n)\bigl( \|u_n - u^*\|^2 + \|p_n - u^*\|^2 \bigr) + 2\beta_n\psi_n\langle u^*,\, u^* - p_n\rangle.
\end{aligned}
\]
We consequently derive the following key inequality:
\[
\begin{aligned}
\|p_n - u^*\|^2 \le{}& \frac{1 - \beta_n + \beta_n^2 - \beta_n\psi_n}{1 - \beta_n + \beta_n\psi_n}\|u_n - u^*\|^2 + \frac{(1-\beta_n)^2}{1 - \beta_n + \beta_n\psi_n}\bigl[ \tau_{1,n}^2\|u_n - u_{n-1}\|^2 + \tau_{2,n}^2\|u_{n-1} - u_{n-2}\|^2\\
&+ 2\tau_{1,n}\|u_n - u_{n-1}\|\,\|u_n - u^*\| + 2\tau_{2,n}\|u_{n-1} - u_{n-2}\|\,\|u_n - u^*\|\\
&+ 2\tau_{1,n}\tau_{2,n}\|u_n - u_{n-1}\|\,\|u_{n-1} - u_{n-2}\| \bigr] + \frac{2\beta_n\psi_n}{1 - \beta_n + \beta_n\psi_n}\langle u^*,\, u^* - p_n\rangle\\
={}& \Bigl( 1 - \frac{2\beta_n\psi_n}{1 - \beta_n + \beta_n\psi_n} \Bigr)\|u_n - u^*\|^2 + \frac{\beta_n^2}{1 - \beta_n + \beta_n\psi_n}\|u_n - u^*\|^2\\
&+ \frac{(1-\beta_n)^2}{1 - \beta_n + \beta_n\psi_n}\bigl[ \tau_{1,n}^2\|u_n - u_{n-1}\|^2 + \tau_{2,n}^2\|u_{n-1} - u_{n-2}\|^2 + \cdots \bigr] + \frac{2\beta_n\psi_n}{1 - \beta_n + \beta_n\psi_n}\langle u^*,\, u^* - p_n\rangle\\
={}& (1 - \gamma_n)\|u_n - u^*\|^2 + \varphi_n.
\end{aligned}
\]
Substituting inequality (28) into (9) yields the desired convergence result. □
Lemma 10. 
Let $\{p_n\}$ and $\{y_n\}$ be the sequences generated by Algorithm 1 under Assumption 1. If there exists a subsequence $\{p_{n_k}\}$ that converges weakly to $u^* \in X$ and satisfies $\lim_{k\to\infty}\|p_{n_k} - y_{n_k}\| = 0$, then $u^* \in \mathrm{VI}(D,A)$.
Proof. 
Let $\{p_{n_k}\}$ and $\{y_{n_k}\}$ be subsequences of the iterates generated by the algorithm such that $p_{n_k} \rightharpoonup u^*$ and
\[
\lim_{k\to\infty}\|p_{n_k} - y_{n_k}\| = 0, \quad \text{which implies } y_{n_k} \rightharpoonup u^*.
\]
Since $y_{n_k} \in D_{n_k}$, the half-space construction implies that
\[
L(p_{n_k}) + \langle \nabla L(p_{n_k}),\, y_{n_k} - p_{n_k}\rangle \le 0.
\]
Applying the Cauchy–Schwarz inequality gives
\[
L(p_{n_k}) \le \|\nabla L(p_{n_k})\|\,\|p_{n_k} - y_{n_k}\|.
\]
Since $\nabla L(\cdot)$ is $L_1$-Lipschitz-continuous and the sequence $\{p_{n_k}\}$ is bounded, it follows that $\{\nabla L(p_{n_k})\}$ is also bounded. Hence, there exists a constant $M > 0$ such that
\[
\|\nabla L(p_{n_k})\| \le M, \quad \text{for all } k \ge 0.
\]
Substituting into (29), we obtain
\[
L(p_{n_k}) \le M \|y_{n_k} - p_{n_k}\|.
\]
Since L ( · ) is convex and continuous, Lemma 1 ensures that L is weakly lower semi-continuous. Therefore, taking the limit inferior on both sides of (30) gives
\[
L(u^*) \le \liminf_{k\to\infty} L(p_{n_k}) \le \lim_{k\to\infty} M \|y_{n_k} - p_{n_k}\| = 0,
\]
which implies that $u^* \in D$. Furthermore, since $y_{n_k} = P_{D_{n_k}}(p_{n_k} - \lambda_{n_k}A(p_{n_k}))$, the projection characterization gives
\[
\langle y_{n_k} - p_{n_k} + \lambda_{n_k} A(p_{n_k}),\, z - y_{n_k}\rangle \ge 0, \quad \text{for all } z \in D \subseteq D_{n_k}.
\]
Using the monotonicity of A , we derive the following inequality
\[
\begin{aligned}
0 &\le \langle y_{n_k} - p_{n_k},\, z - y_{n_k}\rangle + \lambda_{n_k}\langle A(p_{n_k}),\, z - y_{n_k}\rangle\\
&= \langle y_{n_k} - p_{n_k},\, z - y_{n_k}\rangle + \lambda_{n_k}\langle A(p_{n_k}),\, z - p_{n_k}\rangle + \lambda_{n_k}\langle A(p_{n_k}),\, p_{n_k} - y_{n_k}\rangle\\
&\le \langle y_{n_k} - p_{n_k},\, z - y_{n_k}\rangle + \lambda_{n_k}\langle A(z),\, z - p_{n_k}\rangle + \lambda_{n_k}\langle A(p_{n_k}),\, p_{n_k} - y_{n_k}\rangle.
\end{aligned}
\]
Taking the limit as $k\to\infty$ and using the fact that
\[
\lim_{k\to\infty}\|y_{n_k} - p_{n_k}\| = 0 \quad \text{and} \quad \lim_{k\to\infty}\lambda_{n_k} = \lambda > 0,
\]
we obtain the variational inequality
\[
\langle A(z),\, z - u^*\rangle \ge 0, \quad \text{for all } z \in D.
\]
By Lemma 3, this implies that u * VI ( D , A ) . □
We now establish the strong convergence of the proposed algorithm.
Theorem 1. 
Under Assumption 1, the sequence $\{u_n\}$ generated by Algorithm 1 converges strongly to $\bar{u} \in \mathrm{VI}(D,A)$, where $\bar{u}$ is the minimum-norm solution:
\[
\bar{u} = \arg\min\bigl\{ \|p\| : p \in \mathrm{VI}(D,A) \bigr\}.
\]
Proof. 
Since $\bar{u} = P_{\mathrm{VI}(D,A)}(0)$, it follows from Lemma 9 that
\[
\|u_{n+1} - \bar{u}\|^2 \le (1 - \gamma_n)\|u_n - \bar{u}\|^2 + \gamma_n \cdot \frac{\varphi_n}{\gamma_n}.
\]
Since $\lim_{n\to\infty}\beta_n = 0$, there exists $n_1 \in \mathbb{N}$ such that for all $n \ge n_1$,
\[
\gamma_n = \frac{2\beta_n\psi_n}{1 - \beta_n + \beta_n\psi_n} \in (0,1), \quad \text{and} \quad \psi\beta_n \le \gamma_n \le 4\beta_n.
\]
The conditions on the parameters yield
\[
\lim_{n\to\infty}\gamma_n = 0 \quad \text{and} \quad \sum_{n=1}^{\infty}\gamma_n = \infty.
\]
To prove that $\lim_{n\to\infty}\|u_n - \bar{u}\| = 0$, we apply Lemma 4. It suffices to show that for any subsequence $\{\|u_{n_k} - \bar{u}\|\}$ of $\{\|u_n - \bar{u}\|\}$ satisfying
\[
\liminf_{k\to\infty}\bigl( \|u_{n_k+1} - \bar{u}\| - \|u_{n_k} - \bar{u}\| \bigr) \ge 0,
\]
we also have
\[
\limsup_{k\to\infty}\frac{\varphi_{n_k}}{\gamma_{n_k}} \le 0.
\]
Consider a subsequence $\{\|u_{n_k} - \bar{u}\|\}$ satisfying condition (33). From Lemma 9, we derive the key inequality
\[
\Bigl( 1 - \frac{\delta^2\lambda_{n_k}^2}{\lambda_{n_k+1}^2} - \frac{2\delta K\lambda_{n_k}}{\lambda_{n_k+1}} \Bigr)\|p_{n_k} - y_{n_k}\|^2 \le (1 - \gamma_{n_k})\|u_{n_k} - \bar{u}\|^2 - \|u_{n_k+1} - \bar{u}\|^2 + \gamma_{n_k}\cdot\frac{\varphi_{n_k}}{\gamma_{n_k}},
\]
where the normalized term expands as
\[
\begin{aligned}
\frac{\varphi_{n_k}}{\gamma_{n_k}} ={}& \frac{(1-\beta_{n_k})^2}{2\psi_{n_k}}\Bigl[ \beta_{n_k}\frac{\tau_{1,n_k}^2}{\beta_{n_k}^2}\|u_{n_k} - u_{n_k-1}\|^2 + \beta_{n_k}\frac{\tau_{2,n_k}^2}{\beta_{n_k}^2}\|u_{n_k-1} - u_{n_k-2}\|^2\\
&+ 2\frac{\tau_{1,n_k}}{\beta_{n_k}}\|u_{n_k} - u_{n_k-1}\|\,\|u_{n_k} - \bar{u}\| + 2\frac{\tau_{2,n_k}}{\beta_{n_k}}\|u_{n_k-1} - u_{n_k-2}\|\,\|u_{n_k} - \bar{u}\|\\
&+ 2\beta_{n_k}\frac{\tau_{1,n_k}}{\beta_{n_k}}\frac{\tau_{2,n_k}}{\beta_{n_k}}\|u_{n_k} - u_{n_k-1}\|\,\|u_{n_k-1} - u_{n_k-2}\| \Bigr]\\
&+ \langle \bar{u},\, u_{n_k} - p_{n_k}\rangle + \langle \bar{u},\, \bar{u} - u_{n_k}\rangle + \frac{\beta_{n_k}}{2\psi_{n_k}}\|u_{n_k} - \bar{u}\|^2.
\end{aligned}
\]
Applying the limit conditions (7), (32) and (33) to (34) yields
\[
\lim_{k\to\infty}\Bigl( 1 - \frac{\delta^2\lambda_{n_k}^2}{\lambda_{n_k+1}^2} - \frac{2\delta K\lambda_{n_k}}{\lambda_{n_k+1}} \Bigr)\|p_{n_k} - y_{n_k}\|^2 = 0.
\]
From (23), we obtain the key convergence:
\[
\lim_{k\to\infty}\|p_{n_k} - y_{n_k}\| = 0.
\]
Using the boundedness of $\{u_n\}$, condition (7), and $\lim_{k\to\infty}\beta_{n_k} = 0$, we analyze
\[
\begin{aligned}
\|p_{n_k} - u_{n_k}\| &= \|\beta_{n_k}(1-\psi_{n_k})u_{n_k} + (1-\beta_{n_k})w_{n_k} - u_{n_k}\|\\
&= \|\beta_{n_k}(1-\psi_{n_k})u_{n_k} + (1-\beta_{n_k})\bigl[u_{n_k} + \tau_{1,n_k}(u_{n_k} - u_{n_k-1}) + \tau_{2,n_k}(u_{n_k-1} - u_{n_k-2})\bigr] - u_{n_k}\|\\
&= \|\beta_{n_k}(1-\psi_{n_k})u_{n_k} + (1-\beta_{n_k})\bigl[\tau_{1,n_k}(u_{n_k} - u_{n_k-1}) + \tau_{2,n_k}(u_{n_k-1} - u_{n_k-2})\bigr] - \beta_{n_k}u_{n_k}\|\\
&\le \beta_{n_k}(1-\psi_{n_k})\|u_{n_k}\| + (1-\beta_{n_k})\bigl( \tau_{1,n_k}\|u_{n_k} - u_{n_k-1}\| + \tau_{2,n_k}\|u_{n_k-1} - u_{n_k-2}\| \bigr) + \beta_{n_k}\|u_{n_k}\|\\
&= \beta_{n_k}(2-\psi_{n_k})\|u_{n_k}\| + (1-\beta_{n_k})\beta_{n_k}\Bigl( \frac{\tau_{1,n_k}}{\beta_{n_k}}\|u_{n_k} - u_{n_k-1}\| + \frac{\tau_{2,n_k}}{\beta_{n_k}}\|u_{n_k-1} - u_{n_k-2}\| \Bigr)\\
&\to 0 \quad \text{as } k\to\infty.
\end{aligned}
\]
The boundedness of $\{u_n\}$ ensures that $w_\omega(u_n)$ is nonempty. Let $u^* \in w_\omega(u_n)$ be arbitrary, with $\{u_{n_k}\}$ being a subsequence such that $u_{n_k} \rightharpoonup u^*$. From (37), we deduce $p_{n_k} \rightharpoonup u^*$, and Lemma 10 combined with (36) establishes that $u^* \in \mathrm{VI}(D,A)$. This proves $w_\omega(u_n) \subseteq \mathrm{VI}(D,A)$. Now choose a subsequence $\{u_{n_{k_j}}\}$ of $\{u_{n_k}\}$ that attains $\limsup_{k\to\infty}\langle \bar{u},\, \bar{u} - u_{n_k}\rangle$ and, passing to a further subsequence if necessary, satisfies $u_{n_{k_j}} \rightharpoonup q$ for some $q \in w_\omega(u_n) \subseteq \mathrm{VI}(D,A)$. Since $\bar{u} = P_{\mathrm{VI}(D,A)}(0)$, the characterization of the metric projection gives
\[
\limsup_{k\to\infty}\langle \bar{u},\, \bar{u} - u_{n_k}\rangle = \lim_{j\to\infty}\langle \bar{u},\, \bar{u} - u_{n_{k_j}}\rangle = \langle \bar{u},\, \bar{u} - q\rangle \le 0.
\]
Taking $k\to\infty$ in (35) and applying (7), (37) and (38), we obtain
\[
\limsup_{k\to\infty}\frac{\varphi_{n_k}}{\gamma_{n_k}} \le 0.
\]
Finally, Lemma 4 applied to (31) yields
\[
\lim_{n\to\infty}\|u_n - \bar{u}\| = 0,
\]
completing the proof of strong convergence. □

4. Numerical Illustrations

In the following analysis, we present a detailed comparison of the algorithms applied to selected numerical examples. Each example is chosen to highlight distinct aspects of performance, including convergence rate, computational efficiency, and solution accuracy. All experiments are carried out under uniform conditions to ensure a fair and consistent comparison. The test cases are designed to represent a range of problem types, including ill-conditioned systems and problems with varying levels of computational complexity.
To assess the efficiency and robustness of the proposed method (Algorithm 1, hereafter referred to as Alg1), we compare its performance with several benchmark algorithms from the existing literature. In particular, we include Algorithm 3.2 from [38] (denoted as Alg3.2), Algorithm 2 from [50] (referred to as Alg2), and Algorithm 3.1 from [51] (referred to as Alg3.1).
The comparison is based on three representative examples. For each case, the problem settings and algorithmic parameters are selected and adjusted to maintain consistency across methods. Where applicable, parameter choices follow those recommended in the respective source references, with slight modifications to accommodate the test conditions. For the proposed Alg1, we employ parameters specifically tuned to balance rapid convergence with low computational cost.
Example 1 (HpHard Problem). 
We examine the classical HpHard problem originally proposed by Harker and Pang [52], which has been widely used as a benchmark in variational inequality research [32,53]. The problem is defined by several key components:
First, we have the linear operator $G : \mathbb{R}^m \to \mathbb{R}^m$ given by $G(u) = Mu + q$, where the matrix $M$ has the decomposition $M = NN^{T} + B + D$. Here, $B$ represents an $m\times m$ skew-symmetric matrix, $N$ is an arbitrary $m\times m$ matrix, and $D$ is a positive-definite diagonal matrix of the same dimension. The feasible set is the hypercube $C = \{u \in \mathbb{R}^m : -10 \le u_i \le 10\}$. The mapping $G$ is both monotone and Lipschitz-continuous, with Lipschitz constant $L = \|M\|$. In the special case where $q = 0$, the solution set reduces to $\Omega = \{0\}$.
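A sketch of how such a test instance is commonly assembled follows; the sampling distributions for N, B, and D are illustrative choices (the text only fixes their structural properties).

```python
import numpy as np

def make_hphard(m, seed=0):
    """Build G(u) = M u + q with M = N N^T + B + D (B skew-symmetric,
    D positive-definite diagonal), feasible set C = [-10, 10]^m."""
    rng = np.random.default_rng(seed)
    N = rng.uniform(-5, 5, size=(m, m))
    S = rng.uniform(-5, 5, size=(m, m))
    B = S - S.T                                  # skew-symmetric part
    D = np.diag(rng.uniform(0.1, 0.3, size=m))   # positive-definite diagonal
    M = N @ N.T + B + D
    q = np.zeros(m)                              # q = 0  =>  solution set {0}
    G = lambda u: M @ u + q
    proj_C = lambda u: np.clip(u, -10.0, 10.0)   # projection onto the hypercube
    lipschitz = np.linalg.norm(M, 2)             # L = ||M||
    return G, proj_C, lipschitz
```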
For our numerical experiments, we initialize all algorithms with $u_0 = u_1 = (1, 1, \ldots, 1) \in \mathbb{R}^m$ and employ the stopping criterion $\|w_n - y_n\| \le 10^{-4}$. The parameter configurations for each compared algorithm are as follows:
1. 
For Alg3.2 from [38]: $\lambda_1 = 0.93$, $\theta = 0.87$, $\mu = 0.8$, with the sequences $\xi_n = \bigl(\tfrac{2}{3n+2}\bigr)^2$, $\psi_n = \tfrac{2}{3n+2}$, $\kappa_n = \tfrac{1-\psi_n}{2}$, and $\phi_n = \tfrac{20}{(2n+5)^2}$.
2. 
For Alg2 from [50]: $\chi_0 = 0.20$, $\mu = 0.30$, $\theta = 0.70$, with $\phi_n = \tfrac{1}{n+2}$ and $\sigma_n = \tfrac{1}{(n+1)^2}$.
3. 
For Alg3.1 from [51]: $\lambda_0 = 0.5$, $\mu = 0.4$, $\gamma = 1.5$, using $\psi_n = \tfrac{1}{n+1}$, $p_n = \tfrac{1}{(n+1)^{1.1}}$, $q_n = \tfrac{n+1}{n}$, $\phi = 0.4$, and $\xi_n = \tfrac{100}{(n+1)^2}$.
4. 
For our proposed Algorithm 1 (Alg1), $\tau_1 = \tau_2 = 0.65$, $\lambda_1 = 0.45$, $\psi_n = 0.7$, $\beta_n = \tfrac{1}{n+1}$, with $\sigma_{1,n} = \sigma_{2,n} = \tfrac{100}{(n+1)^2}$, $\delta = 0.25$, and $\phi_n = \tfrac{20}{(2n+5)^2}$.
Comparative Numerical Analysis: This study evaluates the performance of four optimization algorithms (Alg1, Alg3.2, Alg2, and Alg3.1) across increasing problem dimensions m { 5 , 10 , 20 , 50 , 100 , 200 } . Our proposed Alg1 demonstrates superior efficiency in both iteration counts and computational time compared to existing methods, as evidenced by the numerical results in Table 1 and visualized in Figure 1 and Figure 2. The key findings reveal several important patterns.
First, Alg1 consistently achieves the lowest iteration counts and fastest computation times across all dimensions. For example, at m = 200 , Alg1 converges in just 39 iterations (0.105 s), while Alg3.2 requires 104 iterations (0.538 s), a 62.5% reduction in iterations and 80.5% faster computation.
Second, all algorithms exhibit dimension-dependent performance degradation, though Alg1 maintains the most favorable scaling. While Alg1’s iteration count grows from 28 ( m = 5 ) to 39 ( m = 200 ), Alg3.2 shows a more dramatic increase from 72 to 104 iterations for the same dimensional range.
Third, the computational time follows similar trends, with Alg1 demonstrating remarkable efficiency. At m = 5 , Alg1 completes in 0.0587 s compared to Alg3.2’s 0.1876 s, and this advantage persists at m = 200 (0.1052 vs. 0.538 s).
The results conclusively demonstrate Alg1’s superior scalability and computational efficiency, particularly for high-dimensional problems. This performance advantage stems from the algorithm’s innovative combination of inertial techniques and adaptive step sizes, which effectively mitigate the computational burden associated with increasing dimensionality.
Example 2. 
Consider the Hilbert space $H = L^2([0,1])$ equipped with the inner product
\[
\langle u, y\rangle = \int_0^1 u(t)\,y(t)\,dt, \quad u, y \in H,
\]
and induced norm
\[
\|u\| = \Bigl( \int_0^1 |u(t)|^2\,dt \Bigr)^{1/2}.
\]
Let $C := \{ u \in L^2([0,1]) : \|u\| \le 1 \}$ be the closed unit ball. We examine the nonlinear operator $G : C \to H$ defined by
\[
G(u)(t) = \int_0^1 \bigl( u(t) - H(t,s)\, f(u(s)) \bigr)\,ds + g(t),
\]
with kernel and nonlinearity given by
\[
H(t,s) = \frac{2ts\,e^{t+s}}{e\sqrt{e^2 - 1}}, \qquad f(u) = \cos(u), \qquad g(t) = \frac{2t\,e^{t}}{e\sqrt{e^2 - 1}}.
\]
This operator satisfies
  • Monotonicity: $\langle G(u) - G(y),\, u - y\rangle \ge 0$ for all $u, y \in C$;
  • Lipschitz continuity with $L = 2$: $\|G(u) - G(y)\| \le 2\|u - y\|$.
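Because experiments on $L^2([0,1])$ are necessarily run on a discretization, the sketch below realizes G, the norm, and the unit-ball projection on a uniform grid using trapezoidal quadrature; the grid size and quadrature rule are our illustrative choices.

```python
import numpy as np

N = 200
t = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1))
w[0] = w[-1] = 0.5 / (N - 1)          # trapezoidal quadrature weights

def l2_norm(u):
    """Discrete L^2([0,1]) norm."""
    return np.sqrt(np.sum(w * u**2))

c = 1.0 / (np.e * np.sqrt(np.e**2 - 1.0))
H = 2.0 * np.outer(t, t) * np.exp(t[:, None] + t[None, :]) * c   # kernel H(t,s)
g = 2.0 * t * np.exp(t) * c                                       # g(t)

def G(u):
    """Discretized G(u)(t) = int_0^1 (u(t) - H(t,s) cos(u(s))) ds + g(t)."""
    return u - H @ (w * np.cos(u)) + g

def proj_ball(u):
    """Projection onto the closed unit ball of L^2([0,1])."""
    nrm = l2_norm(u)
    return u if nrm <= 1.0 else u / nrm

u0 = np.sin(t)                 # one of the initial points used in the experiments
print(l2_norm(G(proj_ball(u0))))
```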
The compared algorithms use the parameter configurations summarized in Table 2.
Comparative Numerical Analysis: In this study, we conduct a comprehensive numerical comparison of four optimization algorithms (Table 2): Alg1, Alg3.2, Alg2, and Alg3.1. Our analysis focuses on their performance across different initial points x 0 and function complexities. The test functions range from elementary cases (x, sin ( x ) , cos ( x ) ) to more challenging examples ( x 2 sin ( x ) , x 2 exp ( x ) cos ( x ) ). The evaluation metrics include iteration counts and CPU time, with results presented in Table 3 and Figure 3 and Figure 4. Key findings are as follows.
(i)
Alg1 demonstrates superior efficiency, consistently achieving the lowest iteration counts and CPU times. For instance,
  • For x, Alg1 requires 32 iterations (1.225 s) versus Alg3.2’s 75 iterations (2.274 s).
  • For x 2 sin ( x ) , Alg1 converges in 20 iterations (2.634 s), while Alg3.2 needs 76 iterations (7.542 s).
(ii)
The initial point x 0 significantly impacts convergence. Complex functions (e.g., x 2 exp ( x ) cos ( x ) ) amplify this effect, with Alg3.2 requiring 84 iterations (11.222 s) compared to Alg1’s 20 iterations (3.336 s).
(iii)
Nonlinearities in functions like x 2 sin ( x ) increase computational demand, yet Alg1 maintains robust performance.
(iv)
The performance gap widens with function complexity, reinforcing Alg1’s scalability.
Alg1 outperforms competing algorithms across all test scenarios, demonstrating faster convergence and lower computational costs, particularly for complex functions. The results underscore its reliability for diverse optimization problems.
Example 3. 
The third example is adapted from [54], where the mapping $G : \mathbb{R}^2 \to \mathbb{R}^2$ is defined by
\[
G(u) = \begin{pmatrix} 0.5\,u_1 u_2 - 2u_2 - 10^{7}\\ 4u_1 - 0.1\,u_2^{2} - 10^{7} \end{pmatrix},
\]
with the feasible set given by $C = \{ u \in \mathbb{R}^2 : (u_1 - 2)^2 + (u_2 - 2)^2 \le 1 \}$.
This mapping $G$ satisfies several important properties. First, it is Lipschitz-continuous with Lipschitz constant $L = 5$. Second, it is pseudomonotone on the set $C$, though it is not monotone. Finally, the problem admits a unique solution, given by $u^* = (2.707, 2.707)$.
Parameter settings:
  • Alg3.2 [38]: $\lambda_1 = 0.93$, $\theta = 0.87$, $\mu = 0.8$, $\xi_n = \bigl(\tfrac{2}{3n+2}\bigr)^2$, $\psi_n = \tfrac{2}{3n+2}$, $\kappa_n = \tfrac{1-\psi_n}{2}$, $\phi_n = \tfrac{20}{(2n+5)^2}$
  • Alg2 [50]: $\chi_0 = 0.20$, $\mu = 0.30$, $\theta = 0.70$, $\phi_n = \tfrac{1}{n+2}$, $\sigma_n = \tfrac{1}{(n+1)^2}$
  • Alg3.1 [51]: $\lambda_0 = 0.5$, $\mu = 0.4$, $\gamma = 1.5$, $\psi_n = \tfrac{1}{n+1}$, $p_n = \tfrac{1}{(n+1)^{1.1}}$, $q_n = \tfrac{n+1}{n}$, $\phi = 0.4$, $\xi_n = \tfrac{100}{(n+1)^2}$
  • Alg1 (Algorithm 1): $\tau_1 = \tau_2 = 0.65$, $\lambda_1 = 0.45$, $\psi_n = 0.7$, $\beta_n = \tfrac{1}{n+1}$, $\sigma_{1,n} = \sigma_{2,n} = \tfrac{100}{(n+1)^2}$, $\delta = 0.25$, $\phi_n = \tfrac{20}{(2n+5)^2}$
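For completeness, here is a short sketch of the Example 3 data: the mapping G and the closed-form radial projection onto the disk C centered at (2, 2) with radius 1.

```python
import numpy as np

def G(u):
    """Mapping of Example 3 (pseudomonotone on C, Lipschitz with L = 5)."""
    return np.array([0.5*u[0]*u[1] - 2.0*u[1] - 1e7,
                     4.0*u[0] - 0.1*u[1]**2 - 1e7])

def proj_C(u, center=np.array([2.0, 2.0]), radius=1.0):
    """Projection onto the disk C = {u : ||u - center|| <= radius}."""
    d = u - center
    nrm = np.linalg.norm(d)
    return u if nrm <= radius else center + radius * d / nrm

print(proj_C(np.array([5.0, 3.0])))   # a point outside C mapped to its boundary
```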
Comparative Numerical Analysis: This study conducts a systematic numerical comparison of four optimization algorithms—Alg1, Alg3.2, Alg2, and Alg3.1—evaluating their computational efficiency through iteration counts and CPU time. The analysis examines algorithm performance across multiple initial points x 0 R 2 , including [ 1.5 , 1.7 ] , [ 2.0 , 3.0 ] , and others, to assess convergence behavior under varying starting conditions.
The numerical results, presented in Table 4 and Figure 5 and Figure 6, demonstrate that Alg1 consistently achieves superior performance. Notably, while iteration counts remain stable across different x 0 values, Alg1 maintains significantly lower computational times compared to competing methods. Key observations are as follows.
(i)
Alg1 outperforms all other algorithms in both metrics. For x 0 = [ 1.5 , 1.7 ] , it completes in 51 iterations (0.69481 s) versus Alg3.2’s 96 iterations (2.51503 s).
(ii)
The performance advantage persists across all test cases. At x 0 = [ 2.0 , 3.0 ] , Alg1 requires 51 iterations (0.62848 s) compared to Alg3.2’s 96 iterations (2.46246 s).
(iii)
Iteration counts remain constant for each algorithm regardless of x 0 :
  • Alg3.2: 96 iterations.
  • Alg2: 79–81 iterations.
  • Alg3.1: 66 iterations.
  • Alg1: 51 iterations.
(iv)
CPU times show minor variations with x 0 . For Alg1, they range from 0.62848 s ( [ 2.0 , 3.0 ] ) to 0.96045 s ( [ 1.0 , 2.0 ] ).
(v)
The computational advantage of Alg1 is most pronounced against Alg3.2. At x 0 = [ 2.7 , 2.6 ] , Alg1 uses 0.67053 s versus Alg3.2’s 2.90826 s.
(vi)
Alg1 demonstrates robust efficiency across all test cases, maintaining low CPU times without compromising convergence speed.
Alg1 emerges as the most efficient algorithm, demonstrating consistent superiority in both iteration counts and computational time across all tested initial points x 0 . The results highlight its robustness and practical value for optimization tasks.

5. Conclusions and Future Directions

This study introduces a novel algorithm for solving variational inequality problems in real Hilbert spaces, which is particularly effective for equilibrium problems involving monotone operators. The algorithm’s key innovation is its dynamic variable step size mechanism, which provides two significant advantages: first, it eliminates the need for prior knowledge of Lipschitz-type constants, and second, it avoids the computational overhead associated with line search procedures. Extensive numerical experiments demonstrate the algorithm’s superior performance compared to existing methods in the literature.

Future Research Directions

The promising results suggest several valuable extensions for future work.
1. Application to problems with non-monotone or quasi-monotone operators, which would expand the algorithm’s practical utility across a wider range of problems.
2. Extensions to handle equilibrium problems with additional constraints, such as composite or split feasibility constraints, enabling solutions to more complex optimization scenarios.
3. The development of parallel and distributed implementations to enhance computational efficiency, particularly for large-scale problems in high-dimensional spaces.
4. Further theoretical analysis to establish convergence under weaker or more general conditions, thereby broadening the algorithm’s applicability.
5. Practical applications in important domains, including image reconstruction, machine learning, and network optimization, to validate and demonstrate the method’s effectiveness in real-world settings.
6. Exploration of alternative dynamic step size strategies, potentially incorporating adaptive learning or machine learning techniques, to further improve convergence speed and robustness.

Author Contributions

Software, F.A. and H.u.R.; Formal analysis, C.A.; Investigation, H.u.R. and C.A.; Writing—original draft, I.K.A., F.A. and H.u.R.; Writing—review & editing, I.K.A., F.A. and H.u.R.; Supervision, I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

Habib ur Rehman acknowledges financial support from Zhejiang Normal University, Jinhua, China (Grant YS304223974).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Alakoya, T.O.; Mewomo, O.T. Viscosity s-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed point problems. Comput. Appl. Math. 2022, 41, 31–39. [Google Scholar] [CrossRef]
  2. Aubin, J.-P.; Ekeland, I. Applied Nonlinear Analysis; Wiley: Hoboken, NJ, USA, 1984. [Google Scholar]
  3. Ogwo, G.N.; Izuchukwu, C.; Shehu, Y.; Mewomo, O.T. Convergence of relaxed inertial subgradient extragradient methods for quasimonotone variational inequality problems. J. Sci. Comput. 2022, 90, 35. [Google Scholar] [CrossRef]
  4. Baiocchi, C.; Capelo, A. Variational and Quasivariational Inequalities: Applications to Free Boundary Problems; Wiley: Hoboken, NJ, USA, 1984. [Google Scholar]
  5. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323. [Google Scholar] [CrossRef]
  6. Godwin, E.C.; Mewomo, O.T.; Alakoya, T.O. A strongly convergent algorithm for solving multiple set split equality equilibrium and fixed point problems in Banach spaces. Proc. Edinb. Math. Soc. 2023, 66, 475–515. [Google Scholar] [CrossRef]
  7. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
  8. Geunes, J.; Pardalos, P.M. Network optimization in supply chain management and financial engineering: An annotated bibliography. Networks 2003, 42, 66–84. [Google Scholar] [CrossRef]
  9. Nagurney, A. Network Economics: A Variational Inequality Approach, 2nd ed.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999. [Google Scholar]
  10. Nagurney, A.; Dong, J. Supernetworks: Decision-Making for the Information Age; Edward Elgar Publishing: Northampton, MA, USA, 2002. [Google Scholar]
  11. Smith, M.J. The existence, uniqueness and stability of traffic equilibria. Transp. Res. B 1979, 13, 295–304. [Google Scholar] [CrossRef]
  12. Dafermos, S. Traffic equilibrium and variational inequalities. Transp. Sci. 1980, 14, 42–54. [Google Scholar] [CrossRef]
  13. Lawphongpanich, S.; Hearn, D.W. Simplicial decomposition of the asymmetric traffic assignment problem. Transp. Res. B 1984, 18, 123–133. [Google Scholar] [CrossRef]
  14. Panicucci, B.; Pappalardo, M.; Passacantando, M. A path-based double projection method for solving the asymmetric traffic network equilibrium problem. Optim. Lett. 2007, 1, 171–185. [Google Scholar] [CrossRef]
  15. Aussel, D.; Gupta, R.; Mehra, A. Evolutionary variational inequality formulation of the generalized Nash equilibrium problem. J. Optim. Theory Appl. 2016, 169, 74–90. [Google Scholar] [CrossRef]
  16. Ciarciá, C.; Daniele, P. New existence Theorems for quasi-variational inequalities and applications to financial models. Eur. J. Oper. Res. 2016, 251, 288–299. [Google Scholar] [CrossRef]
  17. Nagurney, A.; Parkes, D.; Daniele, P. The internet, evolutionary variational inequalities, and the time-dependent Braess paradox. Comput. Manag. Sci. 2007, 4, 355–375. [Google Scholar] [CrossRef]
  18. Scrimali, L.; Mirabella, C. Cooperation in pollution control problems via evolutionary variational inequalities. J. Glob. Optim. 2018, 70, 455–476. [Google Scholar] [CrossRef]
  19. Xu, S.; Li, S. A strongly convergent alternated inertial algorithm for solving equilibrium problems. J. Optim. Theory Appl. 2025, 206, 35. [Google Scholar] [CrossRef]
  20. Yao, Y.; Adamu, A.; Shehu, Y. Strongly convergent golden ratio algorithms for variational inequalities. Math. Methods Oper. Res. 2025. [Google Scholar] [CrossRef]
  21. Tan, B.; Qin, X. Two relaxed inertial forward-backward-forward algorithms for solving monotone inclusions and an application to compressed sensing. Can. J. Math. 2025, 1–22. [Google Scholar] [CrossRef]
  22. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2022. [Google Scholar]
  23. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekonom. Mat. Methods 1976, 12, 747–756. [Google Scholar]
  24. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
  25. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610. [Google Scholar] [CrossRef]
  26. Reich, S.; Thong, D.V.; Cholamjiak, P.; Long, L.V. Inertial projection-type methods for solving pseudomonotone variational inequality problems in Hilbert space. Numer. Algorithms 2021, 88, 813–835. [Google Scholar] [CrossRef]
  27. Suleiman, Y.I.; Kumam, P.; Rehman, H.U.; Kumam, W. A new extragradient algorithm with adaptive step-size for solving split equilibrium problems. J. Inequal. Appl. 2021, 2021, 136. [Google Scholar] [CrossRef]
  28. Rehman, H.U.; Tan, B.; Yao, J.C. Relaxed inertial subgradient extragradient methods for equilibrium problems in Hilbert spaces and their applications to image restoration. Commun. Nonlinear Sci. Numer. Simul. 2025, 146, 108795. [Google Scholar] [CrossRef]
  29. Ceng, L.C.; Ghosh, D.; Rehman, H.U.; Zhao, X. Composite Tseng-type extragradient algorithms with adaptive inertial correction strategy for solving bilevel split pseudomonotone VIP under split common fixed-point constraint. J. Comput. Appl. Math. 2025, 470, 116683. [Google Scholar] [CrossRef]
  30. Nwawuru, F.O.; Ezeora, J.N.; Rehman, H.U.; Yao, J.C. Self-adaptive subgradient extragradient algorithm for solving equilibrium and fixed point problems in Hilbert spaces. Numer. Algorithms 2025. [Google Scholar] [CrossRef]
  31. Dong, Q.; Cho, Y.; Zhong, L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704. [Google Scholar] [CrossRef]
  32. Solodov, M.V.; Svaiter, B.F. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37, 765–776. [Google Scholar] [CrossRef]
  33. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef]
  34. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132. [Google Scholar] [CrossRef]
  35. Polyak, B.T. Some methods of speeding up the convergence of iterative methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  36. Gibali, A.; Jolaoso, L.O.; Mewomo, O.T.; Taiwo, A. Fast and simple Bregman projection methods for solving variational inequalities and related problems in Banach spaces. Results Math. 2020, 75, 179. [Google Scholar] [CrossRef]
  37. Godwin, E.C.; Alakoya, T.O.; Mewomo, O.T.; Yao, J.-C. Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems. Appl. Anal. 2023, 102, 4253–4278. [Google Scholar] [CrossRef]
  38. Alakoya, T.O.; Mewomo, O.T. Strong convergent inertial two-subgradient extragradient method for finding minimum-norm solutions of variational inequality problems. Netw. Spat. Econ. 2024, 24, 425–459. [Google Scholar] [CrossRef]
  39. Yao, Y.H.; Iyiola, O.S.; Shehu, Y. Subgradient extragradient method with double inertial steps for variational inequalities. J. Sci. Comput. 2022, 90, 71. [Google Scholar] [CrossRef]
  40. Thong, D.V.; Dung, V.T.; Anh, P.K.; Van Thang, H. A single projection algorithm with double inertial extrapolation steps for solving pseudomonotone variational inequalities in Hilbert space. J. Comput. Appl. Math. 2023, 426, 115099. [Google Scholar] [CrossRef]
  41. Li, H.; Wang, X. Subgradient extragradient method with double inertial steps for quasi-monotone variational inequalities. Filomat 2023, 37, 9823–9844. [Google Scholar] [CrossRef]
  42. Pakkaranang, N. Double inertial extragradient algorithms for solving variational inequality problems with convergence analysis. Math. Methods Appl. Sci. 2024, 47, 11642–11669. [Google Scholar] [CrossRef]
  43. Wang, K.; Wang, Y.; Iyiola, O.S.; Shehu, Y. Double inertial projection method for variational inequalities with quasi-monotonicity. Optimization 2024, 73, 707–739. [Google Scholar] [CrossRef]
  44. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  45. Bianchi, M.; Schaible, G. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43. [Google Scholar] [CrossRef]
  46. Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26, 248–264. [Google Scholar] [CrossRef]
  47. He, S.; Xu, H.-K. Uniqueness of supporting hyperplanes and an alternative to solutions of variational inequalities. J. Glob. Optim. 2013, 57, 1375–1384. [Google Scholar] [CrossRef]
  48. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2012, 75, 742–750. [Google Scholar] [CrossRef]
  49. Tan, K.K.; Xu, H.-K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308. [Google Scholar] [CrossRef]
  50. Muangchoo, K.; Alreshidi, N.A.; Argyros, I.K. Approximation results for variational inequalities involving pseudomonotone bifunction in real Hilbert spaces. Symmetry 2021, 13, 182. [Google Scholar] [CrossRef]
  51. Tan, B.; Sunthrayuth, P.; Cholamjiak, P.; Cho, Y.J. Modified inertial extragradient methods for finding minimum-norm solution of the variational inequality problem with applications to optimal control problem. Int. J. Comput. Math. 2023, 100, 525–545. [Google Scholar] [CrossRef]
  52. Harker, P.T.; Pang, J.S. A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 1990, 26, 265–284. [Google Scholar]
  53. Van Hieu, D.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2017, 66, 75–96. [Google Scholar] [CrossRef]
  54. Shehu, Y.; Dong, Q.L.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2018, 68, 385–409. [Google Scholar] [CrossRef]
Figure 1. Convergence behavior and computational efficiency of the algorithms for dimensions m { 5 , 10 , 20 } . The top row shows error versus iterations; the bottom row shows error versus time. (a) Convergence ( m = 5 ). (b) Convergence ( m = 10 ). (c) Convergence ( m = 20 ). (d) Time efficiency ( m = 5 ). (e) Time efficiency ( m = 10 ). (f) Time efficiency ( m = 20 ).
Figure 2. Convergence behavior and computational efficiency of the algorithms for dimensions m { 50 , 100 , 200 } . The top row shows error versus iterations; the bottom row shows error versus time. (a) Convergence ( m = 50 ). (b) Convergence ( m = 100 ). (c) Convergence ( m = 200 ). (d) Time efficiency ( m = 50 ). (e) Time efficiency ( m = 100 ). (f) Time efficiency ( m = 200 ).
Figure 3. Convergence behavior of Alg3.2, Alg2, Alg3.1, and Alg1 for different initial points x 0 . The top row shows error versus iteration count (n); the bottom row shows error versus CPU time (t). (a) Error vs. iteration for x 0 = x . (b) Error vs. iteration for x 0 = sin ( x ) . (c) Error vs. iteration for x 0 = cos ( x ) . (d) Error vs. time for x 0 = x . (e) Error vs. time for x 0 = sin ( x ) . (f) Error vs. time for x 0 = cos ( x ) .
Figure 4. Convergence behavior for complex initial points x 0 . Top row: error versus iteration count (n); bottom row: error versus CPU time (t). Algorithms compared: Alg3.2, Alg2, Alg3.1, and Alg1. (a) Error vs. iteration for x 0 = exp ( x ) . (b) Error vs. iteration for x 0 = x 2 sin ( x ) . (c) Error vs. iteration for x 0 = x 2 exp ( x ) cos ( x ) . (d) Error vs. time for x 0 = exp ( x ) . (e) Error vs. time for x 0 = x 2 sin ( x ) . (f) Error vs. time for x 0 = x 2 exp ( x ) cos ( x ) .
Figure 5. Convergence behavior of Alg3.2, Alg2, Alg3.1, and Alg1 for different initial points. The top row shows error versus iteration count; the bottom row shows error versus CPU time. (a) Error vs. iteration for x 0 = [ 1.5 , 1.7 ] . (b) Error vs. iteration for x 0 = [ 2.0 , 3.0 ] . (c) Error vs. iteration for x 0 = [ 1.0 , 2.0 ] . (d) Error vs. time for x 0 = [ 1.5 , 1.7 ] . (e) Error vs. time for x 0 = [ 2.0 , 3.0 ] . (f) Error vs. time for x 0 = [ 1.0 , 2.0 ] .
Figure 6. Convergence behavior for additional initial points. Top row: error versus iteration count; bottom row: error versus CPU time. Algorithms compared: Alg3.2, Alg2, Alg3.1, and Alg1. (a) Error vs. iteration for x 0 = [ 2.7 , 2.6 ] . (b) Error vs. iteration for x 0 = [ 5.0 , 3.0 ] . (c) Error vs. iteration for x 0 = [ 4.0 , 6.0 ] . (d) Error vs. time for x 0 = [ 2.7 , 2.6 ] . (e) Error vs. time for x 0 = [ 5.0 , 3.0 ] . (f) Error vs. time for x 0 = [ 4.0 , 6.0 ] .
Table 1. Performance comparison across dimensions (iterations n and time t in seconds).
m    | Alg3.2 n | Alg3.2 t | Alg2 n | Alg2 t | Alg3.1 n | Alg3.1 t | Alg1 n | Alg1 t
5    | 72       | 0.1876   | 51     | 0.0937 | 38       | 0.0652   | 28     | 0.0587
10   | 81       | 0.2445   | 57     | 0.0789 | 48       | 0.0833   | 22     | 0.0560
20   | 86       | 0.2299   | 57     | 0.1320 | 41       | 0.0672   | 27     | 0.0554
50   | 92       | 0.2788   | 59     | 0.0836 | 46       | 0.0709   | 27     | 0.0796
100  | 99       | 0.2444   | 63     | 0.1448 | 45       | 0.1004   | 32     | 0.0695
200  | 104      | 0.5379   | 83     | 0.2917 | 69       | 0.2309   | 39     | 0.1052
Table 2. Parameter settings for the compared algorithms.
Algorithm          | Parameters
Alg3.2 [38]        | λ_1 = 0.93, θ = 0.87, μ = 0.8; ξ_n = (2/(3n+2))^2, ψ_n = 2/(3n+2); κ_n = (1 − ψ_n)/2, ϕ_n = 20/(2n+5)^2
Alg2 [50]          | χ_0 = 0.20, μ = 0.30, θ = 0.70; ϕ_n = 1/(n+2), σ_n = 1/(n+1)^2
Alg3.1 [51]        | λ_0 = 0.5, μ = 0.4, γ = 1.5; ψ_n = 1/(n+1), p_n = 1/(n+1)^{1.1}; q_n = (n+1)/n, ϕ = 0.4, ξ_n = 100/(n+1)^2
Algorithm 1 (Alg1) | τ_1 = τ_2 = 0.65, λ_1 = 0.45; ψ_n = 0.7, β_n = 1/(n+1); σ_{i,n} = 100/(n+1)^2 (i = 1, 2), δ = 0.25; ϕ_n = 20/(2n+5)^2
Table 3. Performance comparison: iteration counts (n) and CPU times (t, seconds).
Function            | Alg3.2 n | Alg3.2 t | Alg2 n | Alg2 t | Alg3.1 n | Alg3.1 t | Alg1 n | Alg1 t
x                   | 75       | 2.274    | 45     | 1.740  | 39       | 2.038    | 32     | 1.225
sin(x)              | 75       | 4.669    | 45     | 3.418  | 39       | 4.310    | 25     | 1.799
cos(x)              | 75       | 4.539    | 45     | 3.277  | 39       | 4.170    | 25     | 1.931
exp(x)              | 78       | 6.355    | 46     | 4.734  | 39       | 5.626    | 23     | 2.362
x^2 sin(x)          | 76       | 7.542    | 46     | 5.620  | 39       | 6.760    | 20     | 2.635
x^2 exp(x) cos(x)   | 84       | 11.222   | 46     | 7.699  | 41       | 9.201    | 20     | 3.336
Table 4. Performance comparison across initial points x 0 : iteration counts (n) and CPU times (t, seconds).
x_0          | Alg3.2 n | Alg3.2 t | Alg2 n | Alg2 t  | Alg3.1 n | Alg3.1 t | Alg1 n | Alg1 t
[1.5, 1.7]   | 96       | 2.51503  | 80     | 1.05539 | 66       | 0.83240  | 51     | 0.69481
[2.0, 3.0]   | 96       | 2.46246  | 79     | 1.12663 | 66       | 0.77810  | 51     | 0.62848
[1.0, 2.0]   | 96       | 3.61541  | 80     | 1.50510 | 66       | 1.23019  | 51     | 0.96045
[2.7, 2.6]   | 96       | 2.90826  | 79     | 1.06613 | 66       | 0.89569  | 51     | 0.67053
[5.0, 3.0]   | 96       | 2.59047  | 81     | 1.25559 | 66       | 0.82584  | 51     | 0.63624
[4.0, 6.0]   | 96       | 2.49513  | 79     | 1.07621 | 66       | 0.88292  | 51     | 0.63624
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Argyros, I.K.; Amir, F.; Rehman, H.u.; Argyros, C. A Double-Inertial Two-Subgradient Extragradient Algorithm for Solving Variational Inequalities with Minimum-Norm Solutions. Mathematics 2025, 13, 1962. https://doi.org/10.3390/math13121962

