Article

A Totally Relaxed, Self-Adaptive Tseng Extragradient Method for Monotone Variational Inequalities

by Olufemi Johnson Ogunsola 1, Olawale Kazeem Oyewole 2,3,*, Seithuti Philemon Moshokoa 2 and Hammed Anuoluwapo Abass 4

1 Department of Mathematics, Federal University of Agriculture, Alabata PMB 2240, Nigeria
2 Department of Mathematics and Statistics, Tshwane University of Technology, Arcadia 0007, South Africa
3 Department of Mathematics, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai 602105, India
4 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Ga-Rankuwa, Pretoria 0204, South Africa
* Author to whom correspondence should be addressed.
Axioms 2025, 14(5), 354; https://doi.org/10.3390/axioms14050354
Submission received: 2 April 2025 / Revised: 2 May 2025 / Accepted: 2 May 2025 / Published: 7 May 2025
(This article belongs to the Section Mathematical Analysis)

Abstract: In this work, we study a class of variational inequality problems defined over the intersection of sub-level sets of a countable family of convex functions. We propose a new iterative method for approximating the solution within the framework of Hilbert spaces. The method incorporates several strategies, including inertial effects, a self-adaptive step size, and a relaxation technique, to enhance convergence properties. Notably, it requires computing only a single projection onto a half space. Under some mild conditions, we prove that the sequence generated by our proposed method converges strongly to a minimum-norm solution of the problem. Finally, we present some numerical results that validate the applicability of our proposed method.

1. Introduction

Ever since the independent introduction of the classical variational inequality problem (VIP) by Fichera [1] and Stampacchia [2], this field has received great attention from numerous researchers. Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$, and let $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ denote the inner product and the induced norm on $H$, respectively. The VIP associated with an operator $F$ is to find a point $a \in C$ such that
$$\langle F(a), b - a \rangle \ge 0, \quad \forall\, b \in C. \tag{1}$$
We denote the solution set of the VIP (1) by $VI(C, F)$. The great attention received by the VIP is due to its applications in many areas of study, such as optimization, economics, structural analysis, engineering, physics, and operations research (see [3,4,5,6,7,8] and the references therein).
From the fixed point formulation, find $a \in C$ such that $a = P_C(a - \lambda F(a))$, many projection algorithms for approximating the solution of VIPs have been proposed (for more on fixed point articles, see [9,10,11]). Some of these can be found in [7,12,13,14,15]. The simplest known projection method is the gradient projection method (GPM), which is formulated as
$$a_{m+1} = P_C(a_m - \lambda F(a_m)), \quad m \ge 0, \tag{2}$$
where $\lambda$ is a positive real number; the iteration applies the fixed point formulation directly.
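As a quick illustration (not taken from the paper), the following is a minimal Python sketch of the GPM iteration (2); the operator $F$, the target point t and the feasible set (the closed unit ball, chosen because its projection has a closed form) are assumptions made for the example only:

```python
import numpy as np

def gpm(F, project_C, a0, lam=0.1, max_iter=5000, tol=1e-9):
    """Gradient projection iterates a_{m+1} = P_C(a_m - lam * F(a_m))."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        a_next = project_C(a - lam * F(a))
        if np.linalg.norm(a_next - a) <= tol:
            return a_next
        a = a_next
    return a

# Illustration: C is the closed unit ball and F(a) = a - t for a fixed
# target t; F is strongly monotone and the VIP solution is P_C(t) = [1, 0].
t = np.array([2.0, 0.0])
project_ball = lambda x: x / max(1.0, np.linalg.norm(x))
print(gpm(lambda a: a - t, project_ball, np.zeros(2)))  # approx. [1., 0.]
```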
This projection method is characterized by some stringent assumptions for its convergence. In a bid to weaken some of these assumptions, Korpelevich [7] proposed the following extragradient method (EgM):
EgM
$$a_0 \in C, \quad b_m = P_C(a_m - \lambda F(a_m)), \quad a_{m+1} = P_C(a_m - \lambda F(b_m)), \quad m \ge 0, \tag{3}$$
where $\lambda \in \left(0, \frac{1}{L}\right)$, $L$ is the Lipschitz constant of the operator $F$ and $P_C$ is the metric projection of $H$ onto $C$.
The two projections onto the feasible set $C$ and the two evaluations of the cost operator $F$ that must be performed at each iteration make the EgM computationally expensive. This significantly reduces the method's efficiency and applicability, especially when $F$ and $C$ have complex structures. The authors of [16,17,18,19,20] came up with different modifications to address the computational inefficiency of the EgM. One such notable modification, called the Tseng extragradient method (TEgM), was proposed by Tseng [19]. This method, presented as follows, reduces the two projections of the EgM to one:
TEgM
$$b_m = P_C(a_m - \lambda F(a_m)), \quad a_{m+1} = b_m + \lambda (F(a_m) - F(b_m)), \tag{4}$$
where $\lambda \in \left(0, \frac{1}{L}\right)$.
Another modification was given by Censor et al. [21] with the introduction of the subgradient extragradient method (SEgM), in which the second projection onto $C$ is replaced with a projection onto a constructible subgradient half space, a set with a simple structure. The following is the method proposed by Censor et al. in [21]:
SEgM
$$a_0 \in C, \quad b_m = P_C(a_m - \lambda F(a_m)), \quad T_m = \{\bar a \in H : \langle a_m - \lambda F(a_m) - b_m, \bar a - b_m \rangle \le 0\}, \quad a_{m+1} = P_{T_m}(a_m - \lambda F(b_m)), \tag{5}$$
where $\lambda \in \left(0, \frac{1}{L}\right)$.
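Part of the appeal of the SEgM is that, unlike the projection onto a general set $C$, projecting onto the half space $T_m$ has a one-line closed form. A small sketch under assumed names (here u plays the role of $a_m - \lambda F(a_m) - b_m$ and v the role of $b_m$):

```python
import numpy as np

def project_onto_halfspace(x, u, v):
    """Project x onto the half space T = {y : <u, y - v> <= 0}, with u != 0.
    Points already in T are returned unchanged."""
    viol = u @ (x - v)
    if viol <= 0:
        return x
    return x - (viol / (u @ u)) * u
```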
It is clear that the TEgM and SEgM algorithms still have the drawback of calculating a projection onto the feasible set C . In order to weaken this requirement, Censor et al. [3] proposed the following two-subgradient extragradient method (TSEgM):
TSEgM
$$a_0 \in C, \quad b_m = P_{T_m}(a_m - \lambda_m F(a_m)), \quad T_m := \{\bar a \in H : c(a_m) + \langle \epsilon_m, \bar a - a_m \rangle \le 0\}, \quad a_{m+1} = P_{T_m}(a_m - \lambda F(b_m)), \tag{6}$$
where $\epsilon_m \in \partial c(a_m)$ and $\partial c(a)$ is the subdifferential of the convex function $c(\cdot)$ at the point $a$, defined as in (12).
In the TSEgM, the main idea is that any closed and convex set $C$ can be expressed as
$$C = \{a \in H : c(a) \le 0\}, \tag{7}$$
where $c : H \to \mathbb{R}$ is a convex function. For instance, we can take $c(a) := \operatorname{dist}(a, C)$, where "dist" is the distance function. In this method, we observe that both projections are made onto a half space, an advantage that enhances the computational efficiency of the algorithm.
Another method for solving the VIP is the projection and contraction method (PCM), which has been developed by many researchers in the literature (see [22,23,24,25] and the references therein). These PCM algorithms have been shown through numerical experiments to frequently outperform the EgMs (see [1]). He et al. [14] recently proposed another variant of the projection and contraction algorithm for solving the VIP. The algorithm is as follows:
PCM
$$\begin{aligned}
b_m &= P_{C_m}(a_m - \lambda_m F(a_m)),\\
d(a_m, b_m) &= (a_m - b_m) - \alpha_m (F(a_m) - F(b_m)),\\
a_{m+1}^1 &= a_m - \gamma \beta_m d(a_m, b_m) \quad \text{or} \quad a_{m+1}^2 = P_{Q_m}(a_m - \gamma \beta_m \lambda_m F(b_m)),\\
C_m &:= \{\bar a \in H : c(a_m) + \langle c'(a_m), \bar a - a_m \rangle \le 0\},\\
Q_m &:= \{\bar a \in H : \langle \bar a - b_m, \lambda_m F(b_m) - d(a_m, b_m) \rangle \le 0\},
\end{aligned} \tag{8}$$
where $C_m$ and $Q_m$ are half spaces, $\gamma \in (0, 2)$ is a relaxation factor, $\lambda_m$ is the prediction step size and $\beta_m$ is the optimal correction step length.
Iterative methods with an improved rate of convergence for solving optimization problems have recently received great attention from many researchers. The inertial and relaxation techniques are the two techniques most commonly employed to speed up the convergence rate of algorithms. The inertial technique, introduced by Polyak [26], has been used by many authors, including those of [12,13,16,27]. The relaxation method, on the other hand, has also been adopted by many researchers; see [28,29,30]. The authors of [30] investigated the effect of these two techniques on the convergence properties of iterative schemes.
Cao and Guo [27] modified the work of Censor et al. [3] by adding an inertial term and proposed the following inertial two-subgradient extragradient algorithm for solving the VIP:
ITSEgM
$$\begin{aligned}
s_m &= a_m + \theta_m (a_m - a_{m-1}),\\
C_m &:= \{\bar a \in H : c(a_m) + \langle \epsilon_m, \bar a - a_m \rangle \le 0\},\\
b_m &= P_{C_m}(s_m - \lambda F(s_m)),\\
a_{m+1} &:= P_{C_m}(a_m - \lambda F(b_m)),
\end{aligned} \tag{9}$$
where $\lambda > 0$ and $\theta_m \ge 0$.
In this research, our interest is in studying VIPs whose feasible set $C$ is given as a finite intersection of sub-level sets of convex functions, defined as follows:
$$C := \bigcap_{i=1}^{k} C_i := \bigcap_{i=1}^{k} \{z \in H : c_i(z) \le 0\}, \tag{10}$$
where $k$ is a positive integer and $c_i : H \to \mathbb{R}$, for all $i \in I := \{1, 2, \ldots, k\}$, are convex functions.
In recent times, He et al. [31] proposed a new iterative algorithm called the totally relaxed and self-adaptive subgradient extragradient method (TRSSEM) for solving the VIP. The algorithm is as follows:
TRSSEM
$$\begin{aligned}
&a_0 \in H, \quad C_m^i := \{a \in H : c_i(a_m) + \langle c_i'(a_m), a - a_m \rangle \le 0\}, \quad C_m := \bigcap_i C_m^i,\\
&b_m = P_{C_m}(a_m - \beta_m F(a_m)), \quad \text{where } \beta_m \text{ satisfies } \beta_m^2 \|F(a_m) - F(b_m)\|^2 + \bar M \beta_m \|a_m - b_m\|^2 \le v^2 \|a_m - b_m\|^2,\\
&a_{m+1} = P_{C_m}(a_m - \beta_m F(b_m)) \quad \text{or} \quad a_{m+1} = P_{T_m}(a_m - \beta_m F(b_m)),
\end{aligned} \tag{11}$$
where $\beta_m = \sigma \rho^{l_m}$, $\sigma > 0$, $\rho \in (0, 1)$, $\bar M = ML$ with $L = \max\{L_i \mid i \in I\}$, $v \in (0, 1)$, and
$$T_m := \begin{cases} \{a \in H : \langle a_m, a - b_m \rangle \le 0\}, & \text{if } a_m \ne 0,\\ H, & \text{if } a_m = 0. \end{cases}$$
A weak convergence result for the proposed method in the framework of Hilbert spaces was obtained by the authors in [31].
This paper is motivated by the above studies, which necessitated our proposed study on inertial and relaxation methods for solving the VIP in the framework of Hilbert spaces. The following are the important features of our algorithm:
  • The combination of the inertial and relaxation techniques to speed up the convergence rate of the iterative scheme.
  • The presence of a simple self-adaptive step size, which is generated at each iteration by a few simple computations.
  • The algorithm does not require knowledge of the Lipschitz constant of the cost operator, an input commonly required by methods for solving the monotone variational inequality problem (MVIP).
  • Strong convergence of the generated sequence to a minimum-norm solution of the problem.
  • Computation of only one projection onto a half space per iteration.
The organization of the paper is as follows: Section 2 contains some definitions and well-known lemmas needed for our analysis. Section 3 presents our proposed algorithm, Section 4 is devoted to the strong convergence analysis of the algorithm, Section 5 contains numerical experiments used to validate our results, and concluding remarks are given in Section 6.

2. Preliminaries

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. The weak convergence and strong convergence of $\{a_m\}$ to $a$ are represented by $a_m \rightharpoonup a$ and $a_m \to a$, respectively, and $w_\omega(a_m)$ denotes the set of weak limits of $\{a_m\}$, that is,
$$w_\omega(a_m) = \{a \in H : a_{m_j} \rightharpoonup a \text{ for some subsequence } \{a_{m_j}\} \text{ of } \{a_m\}\}.$$
Definition 1.
Let $H$ be a real Hilbert space. The mapping $F : H \to H$ is said to be:
(1) $L$-Lipschitz-continuous, where $L > 0$, if
$$\|Fa - Fb\| \le L \|a - b\|, \quad \forall\, a, b \in H.$$
If $L \in [0, 1)$, then $F$ is a contraction;
(2) nonexpansive, if $F$ is $1$-Lipschitz-continuous.
Definition 2.
Given a mapping $F : H \to H$, $F$ is called monotone if
$$\langle Fa - Fb, a - b \rangle \ge 0, \quad \forall\, a, b \in H.$$
Definition 3.
([32]). A function $c : H \to \mathbb{R}$ is said to be Gâteaux-differentiable at $a \in H$ if there exists an element denoted by $c'(a) \in H$ such that
$$\lim_{h \to 0^+} \frac{c(a + hb) - c(a)}{h} = \langle b, c'(a) \rangle, \quad \forall\, b \in H,$$
where $c'(a)$ is called the Gâteaux differential of $c$ at $a$. Recall that if $c$ is Gâteaux-differentiable at each $a \in H$, then $c$ is Gâteaux-differentiable on $H$.
Definition 4.
([32]). Let $c : H \to \mathbb{R}$ be a convex function. $c$ is said to be subdifferentiable at a point $a \in H$ if the set
$$\partial c(a) = \{\zeta \in H : c(b) \ge c(a) + \langle \zeta, b - a \rangle, \ \forall\, b \in H\} \tag{12}$$
is nonempty. Each element in $\partial c(a)$ is called a subgradient of $c$ at $a$. We note that if $c$ is subdifferentiable at each $a \in H$, then $c$ is subdifferentiable on $H$. It is also known that if $c$ is Gâteaux-differentiable at $a$, then $c$ is subdifferentiable at $a$ and $\partial c(a) = \{c'(a)\}$.
Definition 5.
Let $H$ be a real Hilbert space. A function $c : H \to \mathbb{R} \cup \{+\infty\}$ is said to be weakly lower semi-continuous (w-lsc) at $a \in H$ if
$$c(a) \le \liminf_{m \to \infty} c(a_m)$$
holds for every sequence $\{a_m\}$ in $H$ satisfying $a_m \rightharpoonup a$.
Lemma 1.
Let $H$ be a real Hilbert space and $a, b \in H$. Then, the following results hold:
(i) $2\langle a, b \rangle = \|a\|^2 + \|b\|^2 - \|a - b\|^2 = \|a + b\|^2 - \|a\|^2 - \|b\|^2$;
(ii) $\|a + b\|^2 \le \|a\|^2 + 2\langle b, a + b \rangle$;
(iii) if $\alpha, \beta \in [0, 1]$ with $\alpha + \beta = 1$, we have
$$\|\alpha a + \beta b\|^2 = \alpha \|a\|^2 + \beta \|b\|^2 - \alpha \beta \|a - b\|^2.$$
Lemma 2.
Let $C$ be a nonempty, closed, and convex subset of $H$. Suppose $F : C \to H$ is a continuous monotone mapping and $\bar a \in C$; then,
$$\bar a \in VI(C, F) \iff \langle F(a), a - \bar a \rangle \ge 0, \quad \forall\, a \in C.$$
Lemma 3.
([33]). Let $C$ be a set defined as in (10), and let $F : H \to H$ be an operator. Suppose the solution set $VI(C, F)$ is nonempty. Then, the following alternative theorem holds for the solutions of the $VI(C, F)$: given $\hat a \in C$, we have $\hat a \in VI(C, F)$ if and only if one of the following holds.
(i) 
$F \hat a = 0$, or
(ii) 
$\hat a \in bd(C)$, and there exist $\beta_{\hat a} > 0$ (depending on the point $\hat a$) and $\kappa \in \operatorname{conv}\{c_i'(\hat a) : i \in I^*_{\hat a}\}$ such that $F(\hat a) = -\beta_{\hat a} \kappa$, where $bd(C)$ denotes the boundary of the set $C$, $I^*_{\hat a} = \{i \in I : c_i(\hat a) = 0\}$ and $\operatorname{conv}\{c_i'(\hat a) : i \in I^*_{\hat a}\}$ is the convex hull of the set $\{c_i'(\hat a) : i \in I^*_{\hat a}\}$.
Lemma 4.
([34]). Let $\{a_m\}$ be a sequence of non-negative real numbers, $\{\alpha_m\}$ be a sequence in $(0, 1)$ with $\sum_{m=1}^{\infty} \alpha_m = \infty$ and $\{b_m\}$ be a sequence of real numbers. Assume that
$$a_{m+1} \le (1 - \alpha_m) a_m + \alpha_m b_m, \quad \text{for all } m \ge 1.$$
If $\limsup_{k \to \infty} b_{m_k} \le 0$ for every subsequence $\{a_{m_k}\}$ of $\{a_m\}$ satisfying $\liminf_{k \to \infty} (a_{m_k + 1} - a_{m_k}) \ge 0$, then $\lim_{m \to \infty} a_m = 0$.
Lemma 5.
([35]). Suppose $\{\lambda_m\}$ and $\{\mu_m\}$ are two nonnegative real sequences such that
$$\lambda_{m+1} \le \lambda_m + \mu_m, \quad \forall\, m \ge 1.$$
If $\sum_{m=1}^{\infty} \mu_m < +\infty$, then $\lim_{m \to \infty} \lambda_m$ exists.

3. Proposed Algorithm

We present our proposed algorithm in this section and give the conditions for its convergence.
Assumption 1.
(1) 
The solution set $VI(C, F)$ is nonempty.
(2) 
The mapping $F : H \to H$ is monotone and $L$-Lipschitz-continuous on $H$.
(3) 
For all $i \in I$, the family of functions $c_i : H \to \mathbb{R}$ satisfies the following conditions.
(i) 
Each $c_i$ ($i \in I$) is convex on $H$.
(ii) 
Each $c_i$ ($i \in I$) is weakly lower semi-continuous on $H$.
(iii) 
Each $c_i$ ($i \in I$) is Gâteaux-differentiable and $c_i'$ ($i \in I$) is $L_i$-Lipschitz-continuous on $H$.
(iv) 
There exists a positive constant $M$ such that for all $\hat a \in bd(C)$, the following holds:
$$\|F \hat a\| \le M \inf\{\|m(\hat a)\| : m(\hat a) \in \operatorname{conv}\{c_i'(\hat a) : i \in I^*_{\hat a}\}\},$$
where $I^*_{\hat a}$ is defined as in Lemma 3.
(4) 
$\{\alpha_m\}_{m=1}^{\infty}$, $\{\beta_m\}_{m=1}^{\infty}$ and $\{\xi_m\}_{m=1}^{\infty}$ are non-negative sequences satisfying the following conditions:
(i) 
$\alpha_m \in (0, 1)$, $\lim_{m \to \infty} \alpha_m = 0$, $\sum_{m=1}^{\infty} \alpha_m = +\infty$, $\lim_{m \to \infty} \frac{\xi_m}{\alpha_m} = 0$.
(ii) 
$\{\beta_m\} \subset [a, b] \subset (0, 1 - \alpha_m)$, and $\{\phi_m\} \subset (0, 1]$ is such that $\lim_{m \to +\infty} \phi_m = \phi \in (0, 1]$.
(iii) 
$0 < \delta < \dfrac{2\phi - M - 2 + \sqrt{M^2 + 4M(1 - \phi) + 4}}{2\phi}$.
(iv) 
Let $\{\sigma_m\}$ be a nonnegative sequence such that $\sum_{m=1}^{\infty} \sigma_m < +\infty$.
The following is the proposed algorithm (Algorithm 1):
Algorithm 1: TRSTEM
Initialization: Given $\theta > 0$ and $\lambda_1 > 0$, let $a_0, a_1 \in H$ be two initial points and set $m = 1$.
Given the $(m-1)$th and $m$th iterates, choose $\theta_m$ such that $0 \le \theta_m \le \hat\theta_m$, with $\hat\theta_m$ defined by
$$\hat\theta_m = \begin{cases} \min\left\{\theta, \dfrac{\xi_m}{\|a_m - a_{m-1}\|}\right\}, & \text{if } a_m \ne a_{m-1},\\ \theta, & \text{otherwise}. \end{cases} \tag{13}$$
Iterative steps: Calculate the next iterate a m + 1 as follows:
$$\begin{aligned}
s_m &= a_m + \theta_m (a_m - a_{m-1}),\\
C_m^i &= \{a \in H : c_i(s_m) + \langle c_i'(s_m), a - s_m \rangle \le 0\}, \quad i \in I, \qquad C_m := \bigcap_{i \in I} C_m^i,\\
b_m &= P_{C_m}(s_m - \lambda_m F s_m),\\
e_m &= (1 - \phi_m) s_m + \phi_m (b_m + \lambda_m (F s_m - F b_m)),\\
a_{m+1} &= (1 - \alpha_m - \beta_m) s_m + \beta_m e_m,
\end{aligned} \tag{14}$$
where
$$\lambda_{m+1} = \begin{cases} \min\left\{\dfrac{\delta \|s_m - b_m\|}{\|F s_m - F b_m\| + \|c_i'(s_m) - c_i'(b_m)\|}, \ \lambda_m + \sigma_m\right\}, & \text{if } \|F s_m - F b_m\| + \|c_i'(s_m) - c_i'(b_m)\| \ne 0,\\ \lambda_m + \sigma_m, & \text{otherwise}, \end{cases} \tag{15}$$
and $\|c_i'(s_m) - c_i'(b_m)\| := \max_{i \in I} \|c_i'(s_m) - c_i'(b_m)\|$.
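To make the steps concrete, the following is a minimal Python sketch of Algorithm 1 restricted to a single constraint function (k = 1), in which case $P_{C_m}$ is a closed-form half-space projection. The function names, the parameter sequences (borrowed from Example 1 in Section 5) and the stopping rule are assumptions of this sketch, not part of the algorithm's formal statement:

```python
import numpy as np

def project_halfspace(x, g, rhs):
    # Closed-form projection of x onto {y : <g, y> <= rhs}.
    viol = g @ x - rhs
    return x if viol <= 0 else x - (viol / (g @ g)) * g

def trstem(F, c, grad_c, a0, a1, theta=0.5, lam=0.97, delta=0.5,
           max_iter=1000, tol=1e-4):
    a_prev, a = np.asarray(a0, float), np.asarray(a1, float)
    for m in range(1, max_iter + 1):
        # Parameter sequences as in Example 1 (assumed choices).
        alpha, xi = 1.0 / (m + 1), 1.0 / m**1.1
        phi, beta, sigma = 1.0 / (2*m + 1), 1.0 / (13*m + 15), 1.0 / m**1.5
        # Inertial extrapolation with the safeguard (13).
        diff = np.linalg.norm(a - a_prev)
        theta_m = theta if diff == 0 else min(theta, xi / diff)
        s = a + theta_m * (a - a_prev)
        # b_m = P_{C_m}(s_m - lam*F(s_m)), C_m the linearization of c at s_m.
        g = grad_c(s)
        b = project_halfspace(s - lam * F(s), g, g @ s - c(s))
        # Tseng-type correction, then the relaxed combination (14).
        e = (1 - phi) * s + phi * (b + lam * (F(s) - F(b)))
        a_next = (1 - alpha - beta) * s + beta * e
        # Self-adaptive step size (15): no Lipschitz constants are needed.
        den = np.linalg.norm(F(s) - F(b)) + np.linalg.norm(grad_c(s) - grad_c(b))
        lam = min(delta * np.linalg.norm(s - b) / den, lam + sigma) if den > 0 else lam + sigma
        if np.linalg.norm(a_next - a) <= tol:
            return a_next
        a_prev, a = a, a_next
    return a
```

Note that only one half-space projection is performed per iteration, which is the computational feature emphasized above.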
Remark 1.
  • We do not require knowledge of the Lipschitz constant of the cost operator $F$ or of the Lipschitz constant of each Gâteaux differential $c_i'(\cdot)$ of $c_i(\cdot)$ to implement our proposed algorithm, as is most often required by other researchers (for instance, see [27]).
  • Computation of only one projection onto a half space is another feature of our algorithm that makes it computationally efficient to implement.
Remark 2.
Observe that, by Assumption 1 4(i), it can easily be verified from (13) that
$$\lim_{m \to \infty} \theta_m \|a_m - a_{m-1}\| = 0 \quad \text{and} \quad \lim_{m \to \infty} \frac{\theta_m}{\alpha_m} \|a_m - a_{m-1}\| = 0.$$
Indeed, (13) gives $\theta_m \|a_m - a_{m-1}\| \le \xi_m$ whenever $a_m \ne a_{m-1}$, so that $\frac{\theta_m}{\alpha_m} \|a_m - a_{m-1}\| \le \frac{\xi_m}{\alpha_m} \to 0$.

4. Convergence Analysis

Here, we start with some relevant lemmas needed to establish the strong convergence theorem of our proposed algorithm.
Lemma 6.
Suppose $C$ and $C_m$ are the sets defined by (10) and (14), respectively. Then, we have that $C \subseteq C_m$ for all $m \ge 1$.
Proof. 
For all $i \in I$, let $C_i := \{a \in H : c_i(a) \le 0\}$. Thus, we see that $C = \bigcap_{i \in I} C_i$. Then, for each $i \in I$ and any $a \in C_i$, by the subdifferential inequality, it follows that
$$c_i(s_m) + \langle c_i'(s_m), a - s_m \rangle \le c_i(a) \le 0.$$
By the definition of the sets $C_m^i$ in (14), we see that $a \in C_m^i$. It then follows that $C_i \subseteq C_m^i$, $\forall\, m \ge 1$, $i \in I$. Therefore, $C \subseteq C_m$, $\forall\, m \ge 1$, as required. □
Lemma 7.
Let $\{\lambda_m\}$ be a sequence generated by Algorithm 1. Then, $\{\lambda_m\}$ is well defined and $\lim_{m \to \infty} \lambda_m = \lambda$, where $\lambda \in \left[\min\left\{\frac{\delta}{K}, \lambda_1\right\}, \lambda_1 + \sigma\right]$, for some positive constant $K$ and $\sigma = \sum_{m=1}^{\infty} \sigma_m$.
Proof. 
By the Lipschitz continuity of $F$ and of each $c_i'(\cdot)$, in the case $\|F s_m - F b_m\| + \|c_i'(s_m) - c_i'(b_m)\| \ne 0$ for all $m \ge 1$, we have
$$\frac{\delta \|s_m - b_m\|}{\|F s_m - F b_m\| + \|c_i'(s_m) - c_i'(b_m)\|} \ge \frac{\delta \|s_m - b_m\|}{L \|s_m - b_m\| + \hat L \|s_m - b_m\|} = \frac{\delta}{K},$$
where $\hat L = \max\{L_i : i \in I\}$ and $K = L + \hat L$. Clearly, from the definition of $\lambda_{m+1}$, the sequence $\{\lambda_m\}$ has lower bound $\min\{\frac{\delta}{K}, \lambda_1\}$ and upper bound $\lambda_1 + \sigma$. By Lemma 5, $\lim_{m \to \infty} \lambda_m$ exists, and we denote $\lambda = \lim_{m \to \infty} \lambda_m$. It is clear that $\lambda \in \left[\min\{\frac{\delta}{K}, \lambda_1\}, \lambda_1 + \sigma\right]$. □
Lemma 8.
Let $p \in VI(C, F)$ and $\{a_m\}$ be a sequence generated by Algorithm 1 under Assumption 1. Then, we have the following inequality:
$$\|e_m - p\|^2 \le \|s_m - p\|^2 - \phi_m \left[2 - \phi_m - \phi_m \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} - 2(1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}} - \frac{M \delta \lambda_m}{\lambda_{m+1}}\right] \|s_m - b_m\|^2. \tag{17}$$
Proof. 
From (15), we have
$$\lambda_{m+1} = \min\left\{\frac{\delta \|s_m - b_m\|}{\|F s_m - F b_m\| + \|c_i'(s_m) - c_i'(b_m)\|}, \ \lambda_m + \sigma_m\right\} \le \frac{\delta \|s_m - b_m\|}{\|F s_m - F b_m\| + \|c_i'(s_m) - c_i'(b_m)\|}, \tag{18}$$
which implies that
$$\|F s_m - F b_m\| + \|c_i'(s_m) - c_i'(b_m)\| \le \frac{\delta}{\lambda_{m+1}} \|s_m - b_m\|, \quad \forall\, m \ge 1. \tag{19}$$
Assume $p \in VI(C, F)$. Then, $p \in C \subseteq C_m$. Since $b_m = P_{C_m}(s_m - \lambda_m F s_m)$ and $p \in C_m$, by the characterization of $P_{C_m}$ we obtain
$$\langle s_m - \lambda_m F s_m - b_m, b_m - p \rangle \ge 0,$$
which is equivalent to
$$2\langle s_m - b_m, b_m - p \rangle - 2\lambda_m \langle F s_m - F b_m, b_m - p \rangle - 2\lambda_m \langle F b_m, b_m - p \rangle \ge 0. \tag{20}$$
Using Lemma 1(i), we have
$$2\langle s_m - b_m, b_m - p \rangle = \|s_m - p\|^2 - \|s_m - b_m\|^2 - \|b_m - p\|^2. \tag{21}$$
By the monotonicity of $F$, we obtain
$$\langle F b_m, b_m - p \rangle = \langle F b_m - F p, b_m - p \rangle + \langle F p, b_m - p \rangle \ge \langle F p, b_m - p \rangle. \tag{22}$$
Substituting (21) and (22) into (20), we have
$$\|b_m - p\|^2 \le \|s_m - p\|^2 - \|s_m - b_m\|^2 - 2\lambda_m \langle F s_m - F b_m, b_m - p \rangle + 2\lambda_m \langle F p, p - b_m \rangle. \tag{23}$$
Now, using the definition of $e_m$ together with Lemma 1, we obtain
$$\begin{aligned}
\|e_m - p\|^2 &= \|(1 - \phi_m) s_m + \phi_m b_m + \phi_m \lambda_m (F s_m - F b_m) - p\|^2\\
&= \|(1 - \phi_m)(s_m - p) + \phi_m (b_m - p) + \phi_m \lambda_m (F s_m - F b_m)\|^2\\
&= (1 - \phi_m)^2 \|s_m - p\|^2 + \phi_m^2 \|b_m - p\|^2 + \phi_m^2 \lambda_m^2 \|F s_m - F b_m\|^2\\
&\quad + 2\phi_m (1 - \phi_m) \langle s_m - p, b_m - p \rangle + 2\lambda_m \phi_m (1 - \phi_m) \langle s_m - p, F s_m - F b_m \rangle\\
&\quad + 2\lambda_m \phi_m^2 \langle b_m - p, F s_m - F b_m \rangle\\
&= (1 - \phi_m)^2 \|s_m - p\|^2 + \phi_m (1 - \phi_m) \left[\|s_m - p\|^2 + \|b_m - p\|^2 - \|s_m - b_m\|^2\right] + \phi_m^2 \|b_m - p\|^2\\
&\quad + \phi_m^2 \lambda_m^2 \|F s_m - F b_m\|^2 + 2\lambda_m \phi_m (1 - \phi_m) \langle s_m - p, F s_m - F b_m \rangle + 2\lambda_m \phi_m^2 \langle b_m - p, F s_m - F b_m \rangle\\
&= (1 - \phi_m) \|s_m - p\|^2 + \phi_m \|b_m - p\|^2 - \phi_m (1 - \phi_m) \|s_m - b_m\|^2 + \phi_m^2 \lambda_m^2 \|F s_m - F b_m\|^2\\
&\quad + 2\lambda_m \phi_m (1 - \phi_m) \langle s_m - p, F s_m - F b_m \rangle + 2\lambda_m \phi_m^2 \langle b_m - p, F s_m - F b_m \rangle. \tag{24}
\end{aligned}$$
Applying (23) in (24) gives
$$\begin{aligned}
\|e_m - p\|^2 &\le (1 - \phi_m) \|s_m - p\|^2 + \phi_m \left[\|s_m - p\|^2 - \|s_m - b_m\|^2 - 2\lambda_m \langle F s_m - F b_m, b_m - p \rangle + 2\lambda_m \langle F p, p - b_m \rangle\right]\\
&\quad - \phi_m (1 - \phi_m) \|s_m - b_m\|^2 + \phi_m^2 \lambda_m^2 \|F s_m - F b_m\|^2 + 2\phi_m (1 - \phi_m) \lambda_m \langle s_m - p, F s_m - F b_m \rangle\\
&\quad + 2\lambda_m \phi_m^2 \langle b_m - p, F s_m - F b_m \rangle\\
&= \|s_m - p\|^2 - \phi_m (2 - \phi_m) \|s_m - b_m\|^2 + \phi_m^2 \lambda_m^2 \|F s_m - F b_m\|^2\\
&\quad + 2\lambda_m \phi_m (1 - \phi_m) \langle F s_m - F b_m, s_m - b_m \rangle + 2\lambda_m \phi_m \langle F p, p - b_m \rangle. \tag{25}
\end{aligned}$$
Now, we consider the following two cases:
Case 1: $F p = 0$. If $F p = 0$, then from (25) and by applying (19), we have
$$\begin{aligned}
\|e_m - p\|^2 &\le \|s_m - p\|^2 - \phi_m (2 - \phi_m) \|s_m - b_m\|^2 + \phi_m^2 \lambda_m^2 \|F s_m - F b_m\|^2 + 2\lambda_m \phi_m (1 - \phi_m) \langle F s_m - F b_m, s_m - b_m \rangle\\
&\le \|s_m - p\|^2 - \phi_m (2 - \phi_m) \|s_m - b_m\|^2 + \phi_m^2 \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} \|s_m - b_m\|^2 + 2\phi_m (1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}} \|s_m - b_m\|^2\\
&= \|s_m - p\|^2 - \phi_m \left[2 - \phi_m - \phi_m \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} - 2(1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}}\right] \|s_m - b_m\|^2\\
&\le \|s_m - p\|^2 - \phi_m \left[2 - \phi_m - \phi_m \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} - 2(1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}} - \frac{M \delta \lambda_m}{\lambda_{m+1}}\right] \|s_m - b_m\|^2,
\end{aligned}$$
which is the required inequality (17).
Case 2: $F p \ne 0$. By Lemma 3, we have that $p \in bd(C)$ and
$$F p = -\beta_p \sum_{i \in I_p^*} \alpha_i c_i'(p), \tag{26}$$
where $\beta_p$ is some positive constant, $I_p^* = \{i \in I : c_i(p) = 0\}$, and $\{\alpha_i\}_{i \in I_p^*}$ are nonnegative constants satisfying $\sum_{i \in I_p^*} \alpha_i = 1$. Then, by the subdifferential inequality, we obtain
$$c_i(p) + \langle c_i'(p), b_m - p \rangle \le c_i(b_m), \quad \forall\, m \ge 1, \ i \in I_p^*. \tag{27}$$
Since $p \in bd(C)$, we have that $c_i(p) = 0$ for each $i \in I_p^*$, and then
$$\langle c_i'(p), b_m - p \rangle \le c_i(b_m), \quad \forall\, m \ge 0, \ i \in I_p^*. \tag{28}$$
From (26) and (28), we obtain
$$\langle F p, b_m - p \rangle \ge -\beta_p \sum_{i \in I_p^*} \alpha_i c_i(b_m). \tag{29}$$
Since $b_m \in C_m = \bigcap_{i \in I} C_m^i$, it follows that
$$c_i(s_m) + \langle c_i'(s_m), b_m - s_m \rangle \le 0. \tag{30}$$
Then, by the subdifferential inequality, we obtain
$$c_i(b_m) + \langle c_i'(b_m), s_m - b_m \rangle \le c_i(s_m), \quad \forall\, m \ge 1, \ i \in I_p^*. \tag{31}$$
Adding (30) and (31) gives
$$c_i(b_m) \le \langle c_i'(b_m) - c_i'(s_m), b_m - s_m \rangle. \tag{32}$$
Now, by applying (19) and (32), we obtain
$$c_i(b_m) \le \|c_i'(b_m) - c_i'(s_m)\| \|b_m - s_m\| \le \frac{\delta}{\lambda_{m+1}} \|b_m - s_m\|^2. \tag{33}$$
By condition (3)(iv) of Assumption 1, we have
$$\beta_p \le M. \tag{34}$$
So, from (29) and by applying (33), (34) and the definition of $\lambda_m$, we obtain
$$\langle F p, b_m - p \rangle \ge -\frac{M \delta}{\lambda_{m+1}} \|b_m - s_m\|^2, \quad \text{so that} \quad \lambda_m \langle F p, p - b_m \rangle \le \frac{M \delta \lambda_m}{\lambda_{m+1}} \|b_m - s_m\|^2. \tag{35}$$
Applying (19) and (35) in (25), we obtain
$$\begin{aligned}
\|e_m - p\|^2 &\le \|s_m - p\|^2 - \phi_m (2 - \phi_m) \|s_m - b_m\|^2 + \phi_m^2 \lambda_m^2 \|F s_m - F b_m\|^2\\
&\quad + 2\lambda_m \phi_m (1 - \phi_m) \langle s_m - b_m, F s_m - F b_m \rangle + \frac{M \phi_m \delta \lambda_m}{\lambda_{m+1}} \|b_m - s_m\|^2\\
&\le \|s_m - p\|^2 - \phi_m (2 - \phi_m) \|s_m - b_m\|^2 + \phi_m^2 \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} \|s_m - b_m\|^2\\
&\quad + 2\phi_m (1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}} \|s_m - b_m\|^2 + \frac{M \phi_m \delta \lambda_m}{\lambda_{m+1}} \|b_m - s_m\|^2\\
&= \|s_m - p\|^2 - \phi_m \left[2 - \phi_m - \phi_m \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} - 2(1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}} - \frac{M \delta \lambda_m}{\lambda_{m+1}}\right] \|s_m - b_m\|^2,
\end{aligned}$$
which is the required inequality. Thus, we have obtained the desired result. □
Lemma 9.
Let $\{a_m\}$ be a sequence generated by Algorithm 1. Then, under Assumption 1, $\{a_m\}$ is bounded.
Proof. 
Let $p \in VI(C, F)$. First, since the limit of $\{\lambda_m\}$ exists with $\lim_{m \to \infty} \lambda_m = \lambda$, by Assumption 1 (4)(ii)–(iii) we have
$$\lim_{m \to +\infty} \phi_m \left[2 - \phi_m - \phi_m \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} - 2(1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}} - \frac{M \delta \lambda_m}{\lambda_{m+1}}\right] = \phi \left[2 - \phi - \phi \delta^2 - 2(1 - \phi)\delta - M\delta\right] > 0. \tag{36}$$
Hence, there exists $m_0 \ge 1$ such that for all $m \ge m_0$, we have
$$\phi_m \left[2 - \phi_m - \phi_m \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} - 2(1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}} - \frac{M \delta \lambda_m}{\lambda_{m+1}}\right] > 0.$$
Consequently, from (17), we have that for all $m \ge m_0$,
$$\|e_m - p\| \le \|s_m - p\|. \tag{37}$$
Using the definition of $s_m$, we have
$$\|s_m - p\| = \|a_m + \theta_m (a_m - a_{m-1}) - p\| \le \|a_m - p\| + \theta_m \|a_m - a_{m-1}\| = \|a_m - p\| + \alpha_m \cdot \frac{\theta_m}{\alpha_m} \|a_m - a_{m-1}\|. \tag{38}$$
Hence, by Remark 2, there exists $K_1 > 0$ such that
$$\frac{\theta_m}{\alpha_m} \|a_m - a_{m-1}\| \le K_1, \quad \forall\, m \ge 1.$$
It then follows from (38) that
$$\|s_m - p\| \le \|a_m - p\| + \alpha_m K_1, \quad \forall\, m \ge 1. \tag{39}$$
From the definition of $a_{m+1}$, we have
$$\begin{aligned}
\|a_{m+1} - p\| &= \|(1 - \alpha_m - \beta_m) s_m + \beta_m e_m - p\|\\
&= \|(1 - \alpha_m - \beta_m)(s_m - p) + \beta_m (e_m - p) - \alpha_m p\|\\
&\le \|(1 - \alpha_m - \beta_m)(s_m - p) + \beta_m (e_m - p)\| + \alpha_m \|p\|. \tag{40}
\end{aligned}$$
On the other hand, by applying Lemma 1(i) and (37), we obtain
$$\begin{aligned}
\|(1 - \alpha_m - \beta_m)(s_m - p) + \beta_m (e_m - p)\|^2 &= (1 - \alpha_m - \beta_m)^2 \|s_m - p\|^2 + 2(1 - \alpha_m - \beta_m)\beta_m \langle s_m - p, e_m - p \rangle + \beta_m^2 \|e_m - p\|^2\\
&\le (1 - \alpha_m - \beta_m)^2 \|s_m - p\|^2 + 2(1 - \alpha_m - \beta_m)\beta_m \|e_m - p\| \|s_m - p\| + \beta_m^2 \|e_m - p\|^2\\
&\le (1 - \alpha_m - \beta_m)^2 \|s_m - p\|^2 + (1 - \alpha_m - \beta_m)\beta_m \left[\|e_m - p\|^2 + \|s_m - p\|^2\right] + \beta_m^2 \|e_m - p\|^2\\
&= (1 - \alpha_m - \beta_m)(1 - \alpha_m) \|s_m - p\|^2 + \beta_m (1 - \alpha_m) \|e_m - p\|^2\\
&\le (1 - \alpha_m - \beta_m)(1 - \alpha_m) \|s_m - p\|^2 + \beta_m (1 - \alpha_m) \|s_m - p\|^2\\
&= (1 - \alpha_m)^2 \|s_m - p\|^2,
\end{aligned}$$
which implies that
$$\|(1 - \alpha_m - \beta_m)(s_m - p) + \beta_m (e_m - p)\| \le (1 - \alpha_m) \|s_m - p\|. \tag{41}$$
Next, by applying (39) and (41) in (40), we have, for all $m \ge m_0$,
$$\begin{aligned}
\|a_{m+1} - p\| &\le (1 - \alpha_m) \|s_m - p\| + \alpha_m \|p\|\\
&\le (1 - \alpha_m) \|a_m - p\| + \alpha_m K_1 + \alpha_m \|p\|\\
&\le (1 - \alpha_m) \|a_m - p\| + \alpha_m (\|p\| + K_1)\\
&\le \max\{\|a_m - p\|, \|p\| + K_1\}\\
&\ \ \vdots\\
&\le \max\{\|a_{m_0} - p\|, \|p\| + K_1\}.
\end{aligned}$$
This implies that the sequence $\{a_m\}$ is bounded. Consequently, $\{s_m\}$, $\{b_m\}$ and $\{e_m\}$ are all bounded. □
Lemma 10.
Let $\{s_m\}$ and $\{b_m\}$ be sequences generated by Algorithm 1 such that $\lim_{m \to \infty} \|s_m - b_m\| = 0$. Suppose $\{s_{m_j}\}$ is a subsequence of $\{s_m\}$ that converges weakly to some $\bar a \in H$ and $\lim_{j \to \infty} \|s_{m_j} - b_{m_j}\| = 0$; then, $\bar a \in VI(C, F)$.
Proof. 
Suppose $\{s_m\}$ and $\{b_m\}$ are two sequences generated by Algorithm 1 with subsequences $\{s_{m_j}\}$ and $\{b_{m_j}\}$, respectively, such that $s_{m_j} \rightharpoonup \bar a$; then, by the hypothesis of the lemma, it follows that $b_{m_j} \rightharpoonup \bar a$ as $j \to +\infty$. Since $b_{m_j} \in C_{m_j}$, by the definition of $C_m$ we obtain
$$c_i(s_{m_j}) + \langle c_i'(s_{m_j}), b_{m_j} - s_{m_j} \rangle \le 0.$$
Using the Cauchy–Schwarz inequality, we have
$$c_i(s_{m_j}) \le \|c_i'(s_{m_j})\| \|b_{m_j} - s_{m_j}\|. \tag{42}$$
Since $c_i'(\cdot)$ is Lipschitz-continuous and $\{s_{m_j}\}$ is bounded, $\{c_i'(s_{m_j})\}$ is bounded. Thus, there exists a constant $K_0 > 0$ such that $\|c_i'(s_{m_j})\| \le K_0$ for all $j \ge 0$. Hence, from (42), we obtain
$$c_i(s_{m_j}) \le K_0 \|b_{m_j} - s_{m_j}\|. \tag{43}$$
Since $c_i(\cdot)$ is weakly lower semi-continuous, it follows from (43) and the definition of weak lower semi-continuity that
$$c_i(\bar a) \le \liminf_{j \to \infty} c_i(s_{m_j}) \le \lim_{j \to \infty} K_0 \|b_{m_j} - s_{m_j}\| = 0,$$
which implies that $\bar a \in C$. By the characterization of $P_{C_m}$, we obtain
$$\langle b_{m_j} - s_{m_j} + \lambda_{m_j} F s_{m_j}, t - b_{m_j} \rangle \ge 0, \quad \forall\, t \in C \subseteq C_{m_j}.$$
It follows from the monotonicity of $F$ that
$$\begin{aligned}
0 &\le \langle b_{m_j} - s_{m_j}, t - b_{m_j} \rangle + \lambda_{m_j} \langle F s_{m_j}, t - b_{m_j} \rangle\\
&= \langle b_{m_j} - s_{m_j}, t - b_{m_j} \rangle + \lambda_{m_j} \langle F s_{m_j}, t - s_{m_j} \rangle + \lambda_{m_j} \langle F s_{m_j}, s_{m_j} - b_{m_j} \rangle\\
&\le \langle b_{m_j} - s_{m_j}, t - b_{m_j} \rangle + \lambda_{m_j} \langle F t, t - s_{m_j} \rangle + \lambda_{m_j} \langle F s_{m_j}, s_{m_j} - b_{m_j} \rangle.
\end{aligned}$$
Letting $j \to \infty$ in the last inequality and applying $\lim_{j \to \infty} \|b_{m_j} - s_{m_j}\| = 0$ and $\lim_{j \to \infty} \lambda_{m_j} = \lambda > 0$, we have
$$\langle F t, t - \bar a \rangle \ge 0, \quad \forall\, t \in C.$$
Applying Lemma 2, we have $\bar a \in VI(C, F)$. □
Lemma 11.
Let $\{a_m\}$ be a sequence generated by Algorithm 1. Then, under Assumption 1, we have the following inequality for all $p \in VI(C, F)$ and $m \in \mathbb{N}$:
$$\begin{aligned}
\|a_{m+1} - p\|^2 &\le (1 - \alpha_m)^2 \|a_m - p\|^2 + 3K_2 \alpha_m (1 - \alpha_m)^2 \frac{\theta_m}{\alpha_m} \|a_m - a_{m-1}\| + 2\alpha_m \langle p, p - a_{m+1} \rangle\\
&\quad - \beta_m (1 - \alpha_m) \phi_m \left[2 - \phi_m - \phi_m \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} - 2(1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}} - \frac{M \delta \lambda_m}{\lambda_{m+1}}\right] \|s_m - b_m\|^2.
\end{aligned}$$
Proof. 
Let $p \in VI(C, F)$. Then, from the definition of $s_m$, the Cauchy–Schwarz inequality and Lemma 1, we obtain
$$\begin{aligned}
\|s_m - p\|^2 &= \|a_m + \theta_m (a_m - a_{m-1}) - p\|^2\\
&= \|a_m - p\|^2 + \theta_m^2 \|a_m - a_{m-1}\|^2 + 2\theta_m \langle a_m - p, a_m - a_{m-1} \rangle\\
&\le \|a_m - p\|^2 + \theta_m^2 \|a_m - a_{m-1}\|^2 + 2\theta_m \|a_m - a_{m-1}\| \|a_m - p\|\\
&= \|a_m - p\|^2 + \theta_m \|a_m - a_{m-1}\| \left(\theta_m \|a_m - a_{m-1}\| + 2\|a_m - p\|\right)\\
&\le \|a_m - p\|^2 + 3K_2 \theta_m \|a_m - a_{m-1}\|\\
&= \|a_m - p\|^2 + 3K_2 \alpha_m \frac{\theta_m}{\alpha_m} \|a_m - a_{m-1}\|, \tag{46}
\end{aligned}$$
where $K_2 := \sup_{m \in \mathbb{N}} \{\|a_m - p\|, \theta_m \|a_m - a_{m-1}\|\} > 0$.
Next, from the definition of $a_{m+1}$ and by applying (17) and (46) together with Lemma 1, we obtain
$$\begin{aligned}
\|a_{m+1} - p\|^2 &= \|(1 - \alpha_m - \beta_m)(s_m - p) + \beta_m (e_m - p) - \alpha_m p\|^2\\
&\le \|(1 - \alpha_m - \beta_m)(s_m - p) + \beta_m (e_m - p)\|^2 + 2\alpha_m \langle p, p - a_{m+1} \rangle\\
&= (1 - \alpha_m - \beta_m)^2 \|s_m - p\|^2 + \beta_m^2 \|e_m - p\|^2 + 2\beta_m (1 - \alpha_m - \beta_m) \langle s_m - p, e_m - p \rangle\\
&\quad + 2\alpha_m \langle p, p - a_{m+1} \rangle\\
&\le (1 - \alpha_m - \beta_m)^2 \|s_m - p\|^2 + \beta_m^2 \|e_m - p\|^2 + 2\beta_m (1 - \alpha_m - \beta_m) \|s_m - p\| \|e_m - p\|\\
&\quad + 2\alpha_m \langle p, p - a_{m+1} \rangle\\
&\le (1 - \alpha_m - \beta_m)^2 \|s_m - p\|^2 + \beta_m^2 \|e_m - p\|^2 + \beta_m (1 - \alpha_m - \beta_m) \left[\|s_m - p\|^2 + \|e_m - p\|^2\right]\\
&\quad + 2\alpha_m \langle p, p - a_{m+1} \rangle\\
&= (1 - \alpha_m - \beta_m)(1 - \alpha_m) \|s_m - p\|^2 + \beta_m (1 - \alpha_m) \|e_m - p\|^2 + 2\alpha_m \langle p, p - a_{m+1} \rangle\\
&\le (1 - \alpha_m - \beta_m)(1 - \alpha_m) \|s_m - p\|^2 + \beta_m (1 - \alpha_m) \|s_m - p\|^2\\
&\quad - \beta_m (1 - \alpha_m) \phi_m \left[2 - \phi_m - \phi_m \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} - 2(1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}} - \frac{M \delta \lambda_m}{\lambda_{m+1}}\right] \|s_m - b_m\|^2\\
&\quad + 2\alpha_m \langle p, p - a_{m+1} \rangle\\
&= (1 - \alpha_m)^2 \|s_m - p\|^2 + 2\alpha_m \langle p, p - a_{m+1} \rangle\\
&\quad - \beta_m (1 - \alpha_m) \phi_m \left[2 - \phi_m - \phi_m \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} - 2(1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}} - \frac{M \delta \lambda_m}{\lambda_{m+1}}\right] \|s_m - b_m\|^2\\
&\le (1 - \alpha_m)^2 \|a_m - p\|^2 + 3K_2 \alpha_m (1 - \alpha_m)^2 \frac{\theta_m}{\alpha_m} \|a_m - a_{m-1}\| + 2\alpha_m \langle p, p - a_{m+1} \rangle\\
&\quad - \beta_m (1 - \alpha_m) \phi_m \left[2 - \phi_m - \phi_m \delta^2 \frac{\lambda_m^2}{\lambda_{m+1}^2} - 2(1 - \phi_m) \frac{\delta \lambda_m}{\lambda_{m+1}} - \frac{M \delta \lambda_m}{\lambda_{m+1}}\right] \|s_m - b_m\|^2,
\end{aligned}$$
which is the required inequality. □
In the following theorem, we state and prove the strong convergence theorem for our proposed algorithm.
Theorem 1.
Let $\{a_m\}$ be a sequence generated by Algorithm 1 under Assumption 1. Then, the sequence $\{a_m\}$ converges strongly to a point $\hat a \in VI(C, F)$, where $\|\hat a\| = \min\{\|p\| : p \in VI(C, F)\}$.
Proof. 
Since $\|\hat a\| = \min\{\|p\| : p \in VI(C, F)\}$, we have $\hat a = P_{VI(C,F)}(0)$. From Lemma 11, we obtain
$$\begin{aligned}
\|a_{m+1} - \hat a\|^2 &\le (1 - \alpha_m) \|a_m - \hat a\|^2 + \alpha_m \left[3K_2 (1 - \alpha_m)^2 \frac{\theta_m}{\alpha_m} \|a_m - a_{m-1}\| + 2\langle \hat a, \hat a - a_{m+1} \rangle\right]\\
&= (1 - \alpha_m) \|a_m - \hat a\|^2 + \alpha_m d_m, \tag{47}
\end{aligned}$$
where $d_m = 3K_2 (1 - \alpha_m)^2 \frac{\theta_m}{\alpha_m} \|a_m - a_{m-1}\| + 2\langle \hat a, \hat a - a_{m+1} \rangle$. Next, we claim that the sequence $\{\|a_m - \hat a\|\}$ converges to zero. To show this, by Lemma 4, it suffices to establish that $\limsup_{k \to \infty} d_{m_k} \le 0$ for every subsequence $\{\|a_{m_k} - \hat a\|\}$ of $\{\|a_m - \hat a\|\}$ satisfying
$$\liminf_{k \to \infty} \left(\|a_{m_k + 1} - \hat a\| - \|a_{m_k} - \hat a\|\right) \ge 0. \tag{48}$$
Suppose $\{\|a_{m_k} - \hat a\|\}$ is a subsequence of $\{\|a_m - \hat a\|\}$ such that (48) holds. Again, from Lemma 11, we have
$$\begin{aligned}
\beta_{m_k} (1 - \alpha_{m_k}) \phi_{m_k} &\left[2 - \phi_{m_k} - \phi_{m_k} \delta^2 \frac{\lambda_{m_k}^2}{\lambda_{m_k+1}^2} - 2(1 - \phi_{m_k}) \frac{\delta \lambda_{m_k}}{\lambda_{m_k+1}} - \frac{M \delta \lambda_{m_k}}{\lambda_{m_k+1}}\right] \|s_{m_k} - b_{m_k}\|^2\\
&\le (1 - \alpha_{m_k})^2 \|a_{m_k} - \hat a\|^2 - \|a_{m_k+1} - \hat a\|^2 + 3K_2 \alpha_{m_k} (1 - \alpha_{m_k})^2 \frac{\theta_{m_k}}{\alpha_{m_k}} \|a_{m_k} - a_{m_k-1}\|\\
&\quad + 2\alpha_{m_k} \langle \hat a, \hat a - a_{m_k+1} \rangle.
\end{aligned}$$
Applying (48) and Remark 2, together with the fact that $\lim_{k \to \infty} \alpha_{m_k} = 0$, we obtain
$$\beta_{m_k} (1 - \alpha_{m_k}) \phi_{m_k} \left[2 - \phi_{m_k} - \phi_{m_k} \delta^2 \frac{\lambda_{m_k}^2}{\lambda_{m_k+1}^2} - 2(1 - \phi_{m_k}) \frac{\delta \lambda_{m_k}}{\lambda_{m_k+1}} - \frac{M \delta \lambda_{m_k}}{\lambda_{m_k+1}}\right] \|s_{m_k} - b_{m_k}\|^2 \to 0, \quad k \to \infty.$$
By the conditions on the control parameters together with (36), we obtain
$$\|s_{m_k} - b_{m_k}\| \to 0, \quad k \to \infty. \tag{49}$$
From the definition of $e_m$ and applying (19), we have
$$\begin{aligned}
\|e_{m_k} - s_{m_k}\| &= \|(1 - \phi_{m_k}) s_{m_k} + \phi_{m_k} (b_{m_k} + \lambda_{m_k} (F s_{m_k} - F b_{m_k})) - s_{m_k}\|\\
&\le \phi_{m_k} \left(\|b_{m_k} - s_{m_k}\| + \lambda_{m_k} \|F s_{m_k} - F b_{m_k}\|\right)\\
&\le \phi_{m_k} \left(\|b_{m_k} - s_{m_k}\| + \frac{\delta \lambda_{m_k}}{\lambda_{m_k+1}} \|s_{m_k} - b_{m_k}\|\right)\\
&= \phi_{m_k} \left(1 + \frac{\delta \lambda_{m_k}}{\lambda_{m_k+1}}\right) \|s_{m_k} - b_{m_k}\|. \tag{50}
\end{aligned}$$
By applying (49) together with the conditions on the control parameters, it follows from (50) that
$$\|e_{m_k} - s_{m_k}\| \to 0, \quad k \to \infty. \tag{51}$$
By Remark 2, we obtain
$$\|a_{m_k} - s_{m_k}\| = \theta_{m_k} \|a_{m_k} - a_{m_k-1}\| \to 0, \quad k \to \infty. \tag{52}$$
Moreover, by applying (51) and (52), we obtain
$$\|a_{m_k} - e_{m_k}\| \le \|a_{m_k} - s_{m_k}\| + \|s_{m_k} - e_{m_k}\| \to 0, \quad k \to \infty. \tag{53}$$
Next, by using (52) and (53), together with the fact that $\lim_{k \to \infty} \alpha_{m_k} = 0$, we obtain
$$\begin{aligned}
\|a_{m_k+1} - a_{m_k}\| &= \|(1 - \alpha_{m_k} - \beta_{m_k})(s_{m_k} - a_{m_k}) + \beta_{m_k} (e_{m_k} - a_{m_k}) - \alpha_{m_k} a_{m_k}\|\\
&\le (1 - \alpha_{m_k} - \beta_{m_k}) \|s_{m_k} - a_{m_k}\| + \beta_{m_k} \|e_{m_k} - a_{m_k}\| + \alpha_{m_k} \|a_{m_k}\| \to 0, \quad \text{as } k \to \infty. \tag{54}
\end{aligned}$$
Since $\{a_m\}$ is bounded, $w_\omega(a_m)$ is nonempty. Let $a^* \in w_\omega(a_m)$ be an arbitrary element. Then, there exists a subsequence $\{a_{m_k}\}$ of $\{a_m\}$ such that $a_{m_k} \rightharpoonup a^*$ as $k \to \infty$. It follows from (52) that $s_{m_k} \rightharpoonup a^*$ as $k \to \infty$. Moreover, by Lemma 10 and (49), we obtain $a^* \in VI(C, F)$. Since $a^* \in w_\omega(a_m)$ was chosen arbitrarily, it follows that $w_\omega(a_m) \subseteq VI(C, F)$.
Next, since $\{a_{m_k}\}$ is bounded, there exists a subsequence $\{a_{m_{k_j}}\}$ of $\{a_{m_k}\}$ such that $a_{m_{k_j}} \rightharpoonup q$ and
$$\limsup_{k \to \infty} \langle \hat a, \hat a - a_{m_k} \rangle = \lim_{j \to \infty} \langle \hat a, \hat a - a_{m_{k_j}} \rangle.$$
Since $\hat a = P_{VI(C,F)}(0)$, it follows that
$$\limsup_{k \to \infty} \langle \hat a, \hat a - a_{m_k} \rangle = \lim_{j \to \infty} \langle \hat a, \hat a - a_{m_{k_j}} \rangle = \langle \hat a, \hat a - q \rangle \le 0.$$
Therefore, it follows from the last inequality and (54) that
$$\limsup_{k \to \infty} \langle \hat a, \hat a - a_{m_k+1} \rangle \le 0. \tag{55}$$
Now, by Remark 2 and (55), we have $\limsup_{k \to \infty} d_{m_k} \le 0$. Thus, by invoking Lemma 4, it follows from (47) that $\{\|a_m - \hat a\|\}$ converges to zero, as desired. □

5. Numerical Example

In this section, we present two numerical examples to illustrate the behavior of the sequences generated by Algorithm 1 and compare them with the algorithms presented in [6,36,37]. All the programs are implemented in MATLAB 2023b on an Intel(R) Core(TM) i5-8250S CPU @ 1.60 GHz computer with 8.00 GB RAM.
Example 1.
Consider the variational inequality $VI(C, F)$ (1) with the feasible set $C := C_1 \cap C_2 \subset \mathbb{R}^2$, where
$$C_1 := \{(a_1, a_2) \in \mathbb{R}^2 \mid c_1(a_1, a_2) := a_1^2 - a_2 \le 0\}$$
and
$$C_2 := \{(a_1, a_2) \in \mathbb{R}^2 \mid c_2(a_1, a_2) := a_1^2 + a_2^2 - 2 \le 0\},$$
and $F : \mathbb{R}^2 \to \mathbb{R}^2$ is defined by $F(a_1, a_2) = (3h(a_1), 2a_1 + a_2)$, where
$$h(t) := \begin{cases} e(t - 1) + e, & \text{if } t > 1,\\ e^t, & \text{if } -1 \le t \le 1,\\ e^{-1}(t + 1) + e^{-1}, & \text{if } t < -1. \end{cases}$$
We observe from Lemma 3 that the solution set of the variational inequality problem $VI(C, F)$ is nonempty and $(-1, 1)$ is its solution. The following constants, which can be obtained through simple calculations, are used: the Lipschitz constants of the gradients $c_i'$, for $i = 1, 2$, are $L_1 = L_2 = 2$, and thus $L = \max\{L_1, L_2\} = 2$. Furthermore, based on Assumption 1 (3)(iv), $M = 3\sqrt{e^2 + 1}$ and $\bar M = ML = 6\sqrt{e^2 + 1}$. As in the TRSSEM, $v = 0.99$, $\sigma = 1$ and $\rho := 0.04$. To implement the SEM, we first need to estimate the Lipschitz constant of $F$, which is $\sqrt{9e^2 + 5}$, and also to calculate the projection operator $P_{C_1 \cap C_2}$. The lack of an explicit expression for this operator is a weakness of the SEM. Furthermore, the control parameters are taken as follows: $\theta = \frac{1}{2}$, $\xi_m = \frac{1}{m^{1.1}}$, $\phi_m = \frac{1}{2m+1}$, $\alpha_m = \frac{1}{m+1}$, $\beta_m = \frac{1}{13m+15}$, $\delta = 0.5$, $\sigma_m = \frac{1}{m\sqrt{m}}$, $\lambda_1 = 0.97$. The numerical and graphical results of the compared methods are shown in Figure 1 and Table 1. The process is terminated using the stopping criterion $\|a_{m+1} - a_m\| \le \epsilon$, where $\epsilon = 10^{-4}$. The different initial values of $a_0$ and $a_1$ are as follows.
(Case 1): $a_0 = [2.0, 2.0]^T$ and $a_1 = [3.0, 2.0]^T$;
(Case 2): $a_0 = [1.0, 1.0]^T$ and $a_1 = [2.0, 3.0]^T$;
(Case 3): $a_0 = [4.0, 2.0]^T$ and $a_1 = [3.0, 5.0]^T$;
(Case 4): $a_0 = [0.2, 1.1]^T$ and $a_1 = [0.7, 1.2]^T$.
The report of this example is given in Figure 1 and Table 1.
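For concreteness, the problem data of Example 1 can be coded as follows (names are ours, not the paper's). Feeding these into a TRSTEM implementation such as the single-constraint sketch in Section 3 is only a partial illustration, since a faithful run must project onto the intersection of both linearized half spaces; one way to do that is the Dykstra sketch given after Example 2.

```python
import numpy as np

E = np.e

def h(t):
    # Piecewise function h from Example 1; continuous at t = 1 and t = -1.
    if t > 1:
        return E * (t - 1) + E
    if t >= -1:
        return np.exp(t)
    return (t + 1) / E + 1.0 / E

def F(a):
    return np.array([3.0 * h(a[0]), 2.0 * a[0] + a[1]])

c1 = lambda a: a[0]**2 - a[1]
c2 = lambda a: a[0]**2 + a[1]**2 - 2.0
grad_c1 = lambda a: np.array([2.0 * a[0], -1.0])
grad_c2 = lambda a: np.array([2.0 * a[0], 2.0 * a[1]])
```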
Figure 1. Top left: Case 1; top right: Case 2; bottom left: Case 3; bottom right: Case 4 [6,36,37].
Example 2.
Suppose $F(a) = 2a$, $a \in H$, and let $C \subset H$ be a closed and convex feasible set defined as follows:
$$C := \bigcap_{i=1}^{m} C_i := \bigcap_{i=1}^{m} \{a \in H : c_i(a) := a_i^2 - 2 \le 0\},$$
for each $i = 1, \ldots, m$, where $H = (l_2(\mathbb{R}), \|\cdot\|)$, $\|a\|_2 = \left(\sum_{k=1}^{\infty} |a_k|^2\right)^{\frac{1}{2}}$, $\langle a, b \rangle = \sum_{k=1}^{\infty} a_k b_k$ for $a, b \in l_2(\mathbb{R})$, and $l_2(\mathbb{R}) := \left\{a = (a_1, a_2, \ldots, a_m, \ldots),\ a_k \in \mathbb{R} : \sum_{k=1}^{\infty} |a_k|^2 < \infty\right\}$. We choose the following as our control parameters: $\theta = \frac{1}{2}$, $\phi_m = \frac{1}{2m+1}$, $\alpha_m = \frac{1}{m+1}$, $\beta_m = \frac{1}{13m+15}$, $\delta = 0.5$, $\xi_m = \left(\frac{2}{3m+1}\right)^2$, $\lambda_1 = 3.97$, $\sigma_m = \frac{30}{(3m+4)^2}$. The process is terminated using the stopping criterion $\|a_{m+1} - a_m\| \le \epsilon$, where $\epsilon = 10^{-4}$. We consider the following cases for the initial points $a_0$ and $a_1$:
(Case i): $a_0 = (1.7, 1.6, \ldots, 0, \ldots)$ and $a_1 = (2.1, 2.2, \ldots, 0, \ldots)$;
(Case ii): $a_0 = (2.2, 1.1, \ldots, 0, \ldots)$ and $a_1 = (0.71, 1.12, \ldots, 0, \ldots)$;
(Case iii): $a_0 = (3.7, 2.6, \ldots, 0, \ldots)$ and $a_1 = (3.1, 2.5, \ldots, 0, \ldots)$;
(Case iv): $a_0 = (0.64, 1.75, \ldots, 0, \ldots)$ and $a_1 = (0.95, 1.75, \ldots, 0, \ldots)$.
The report of this example is given in Figure 2.
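The paper does not prescribe how $P_{C_m}$ is evaluated when several half spaces are active, as in this example. One standard option, offered here as an assumption rather than the authors' choice, is Dykstra's algorithm, which converges to the exact projection onto an intersection of closed convex sets using only the one-line half-space projections:

```python
import numpy as np

def project_intersection(x, halfspaces, sweeps=100):
    """Dykstra's algorithm for the projection onto an intersection of half
    spaces, each encoded as a pair (g, rhs) describing {y : <g, y> <= rhs}."""
    x = np.asarray(x, dtype=float)
    p = [np.zeros_like(x) for _ in halfspaces]  # Dykstra correction terms
    for _ in range(sweeps):
        for i, (g, rhs) in enumerate(halfspaces):
            y = x + p[i]
            viol = g @ y - rhs
            x = y - (viol / (g @ g)) * g if viol > 0 else y
            p[i] = y - x
    return x
```

Unlike simple cyclic projections, the correction terms p[i] guarantee convergence to the true metric projection, which is what $P_{C_m}$ requires.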

6. Conclusions

A novel iterative method was proposed for approximating the solution of a class of variational inequality problems defined over the intersection of sub-level sets of a countable family of convex functions. The method requires only a single projection onto a half space for computation. It was shown that the sequence generated by the proposed method converges strongly to the minimum-norm solution of the problem. The efficiency and validity of the method were demonstrated through two numerical examples. In the future, we intend to study the convergence behavior of our proposed algorithm in Banach spaces under weaker conditions, as well as to develop accelerated variants that further reduce computational complexity.

Author Contributions

Conceptualization, O.J.O.; Formal analysis, O.J.O. and O.K.O.; Methodology, S.P.M.; Software, O.K.O.; Validation, O.K.O. and S.P.M.; Writing—original draft, O.J.O.; Writing—review and editing, O.K.O. and H.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fichera, G. Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Nat. 1963, 34, 138–142.
  2. Stampacchia, G. Variational Inequalities. In Theory and Applications of Monotone Operators, Proceedings of the NATO Advanced Study Institute, Venice, Italy, 17–30 June 1968; Edizioni Oderisi: Gubbio, Italy, 1968; pp. 102–192.
  3. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
  4. Fukushima, M. A relaxed projection method for variational inequalities. Math. Program. 1986, 35, 58–70.
  5. Gu, Z.; Mani, G.; Gnanaprakasam, A.J.; Li, Y. Solving a System of Nonlinear Integral Equations via Common Fixed Point Theorems on Bicomplex Partial Metric Space. Mathematics 2021, 9, 1584.
  6. He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 2013, 942315.
  7. Korpelevich, G.M. An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metody 1976, 12, 747–756.
  8. Nallaselli, G.; Baazeem, A.S.; Gnanaprakasam, A.J.; Mani, G.; Javed, K.; Ameer, E.; Mlaiki, N. Fixed Point Theorems via Orthogonal Convex Contraction in Orthogonal b-Metric Spaces and Applications. Axioms 2023, 12, 143.
  9. Beg, I.; Mani, G.; Gnanaprakasam, A.J. Best proximity point of generalized F-proximal non-self contractions. J. Fixed Point Theory Appl. 2021, 23, 49.
  10. Gnanaprakasam, A.J.; Nallaselli, G.; Haq, A.U.; Mani, G.; Baloch, I.A.; Nonlaopon, K. Common Fixed-Points Technique for the Existence of a Solution to Fractional Integro-Differential Equations via Orthogonal Branciari Metric Spaces. Symmetry 2022, 14, 1859.
  11. Ramaswamy, R.; Mani, G.; Gnanaprakasam, A.J.; Abdelnaby, O.A.A.; Stojiljković, V.; Radojevic, S.; Radenović, S. Fixed Points on Covariant and Contravariant Maps with an Application. Mathematics 2022, 10, 4385.
  12. Alakoya, T.O.; Taiwo, A.; Mewomo, O.T.; Cho, Y.J. An iterative algorithm for solving variational inequality, generalized mixed equilibrium, convex minimization and zeros problems for a class of nonexpansive-type mappings. Ann. Univ. Ferrara Sez. VII Sci. Mat. 2021, 67, 1–31.
  13. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2017, 70, 687–704.
  14. He, S.; Dong, Q.-L.; Tian, H. Relaxed projection and contraction methods for solving Lipschitz-continuous monotone variational inequalities. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2019, 113, 2773–2791.
  15. Ogwo, G.N.; Izuchukwu, C.; Mewomo, O.T. A modified extragradient algorithm for a certain class of split pseudo-monotone variational inequality problem. Numer. Algebra Control Optim. 2022, 12, 373–393.
  16. Ceng, L.; Petrușel, A.; Qin, X.; Yao, J. A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 2020, 21, 93–108.
  17. Iusem, A.N.; Nasri, M. Korpelevich's method for variational inequality problems in Banach spaces. J. Glob. Optim. 2011, 50, 59–76.
  18. Ogwo, G.N.; Izuchukwu, C.; Mewomo, O.T. Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms 2021, 88, 1419–1456.
  19. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  20. Yang, J.; Liu, H. Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algorithms 2019, 80, 741–752.
  21. Censor, Y.; Gibali, A.; Reich, S. The Split Variational Inequality Problem; The Technion-Israel Institute of Technology: Haifa, Israel, 2010.
  22. He, B.S. A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 1997, 35, 69–76.
  23. He, B.; Yuan, X.; Zhang, J.J. Comparison of two kinds of prediction-correction methods for monotone variational inequalities. Comput. Optim. Appl. 2004, 27, 247–267.
  24. Solodov, M.V.; Tseng, P. Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34, 1814–1830.
  25. Sun, D. A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory Appl. 1996, 91, 123–140.
  26. Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  27. Cao, Y.; Guo, K. On the convergence of inertial two-subgradient extragradient method for solving variational inequality problems. Optimization 2020, 69, 1237–1253.
  28. Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 2004, 14, 773–782.
  29. Attouch, H.; Cabot, A. Convergence of a relaxed inertial forward–backward algorithm for structured monotone inclusions. Appl. Math. Optim. 2019, 80, 547–598.
  30. Iutzeler, F.; Hendrickx, J.M. A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optim. Methods Softw. 2019, 34, 383–405.
  31. He, S.; Wu, T.; Gibali, A.; Dong, Q.-L. Totally relaxed, self-adaptive algorithm for solving variational inequalities over the intersection of sub-level sets. Optimization 2018, 67, 1487–1504.
  32. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: New York, NY, USA, 2017.
  33. Nguyen, H.Q.; Xu, H.K. The supporting hyperplane and an alternative to solutions of variational inequalities. J. Nonlinear Convex Anal. 2015, 16, 2323–2331.
  34. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2012, 75, 742–750.
  35. Tan, K.K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308.
  36. Thong, D.V.; Gibali, A. Two strong convergence subgradient extragradient methods for solving variational inequalities in Hilbert spaces. Jpn. J. Ind. Appl. Math. 2019, 36, 299–321.
  37. Uzor, V.A.; Mewomo, O.T.; Alakoya, T.O.; Gibali, A. Outer approximated projection and contraction method for solving variational inequalities. J. Inequal. Appl. 2023, 2023, 141.
Figure 2. Top left: Case i; top right: Case ii; bottom left: Case iii; bottom right: Case iv [6,36,37].
Table 1. Numerical results for Example 1.

Algorithm               | Case 1           | Case 2           | Case 3           | Case 4
                        | Iter.  CPU Time  | Iter.  CPU Time  | Iter.  CPU Time  | Iter.  CPU Time
Algorithm 1             | 20     0.0073    | 19     0.0065    | 26     0.0061    | 11     0.0057
He et al. [6]           | 57     0.0118    | 47     0.0129    | 43     0.0108    | 40     0.0157
Thong and Gibali [36]   | 25     0.0094    | 25     0.090     | 32     0.0087    | 16     0.0085
Uzor et al. [37]        | 26     0.0217    | 57     0.0213    | 108    0.0202    | 19     0.0089