Article

Modified Tseng’s Extragradient Method for Solving Variational Inequality Problems and Fixed Point Problems with Applications in Optimal Control Problems

1 School of Mathematics and Information Science, North Minzu University, Xixia District, Yinchuan 750000, China
2 School of Mathematics and Statistics, Ningxia University, Xixia District, Yinchuan 750000, China
* Author to whom correspondence should be addressed.
Axioms 2025, 14(12), 881; https://doi.org/10.3390/axioms14120881
Submission received: 12 September 2025 / Revised: 21 November 2025 / Accepted: 24 November 2025 / Published: 28 November 2025
(This article belongs to the Special Issue Mathematics and Its Applications in Other Disciplines)

Abstract

This paper presents an enhanced inertial Tseng’s extragradient method designed to solve variational inequality problems involving pseudomonotone operators, together with fixed point problems governed by quasi-nonexpansive operators in real Hilbert spaces. Provided that the parameters satisfy appropriate conditions, the proposed method is shown to converge strongly. Finally, we provide computational results and illustrate their utility through applications to optimal control problems; these results demonstrate the efficacy and superiority of the proposed algorithm compared with several existing algorithms.

1. Introduction

Throughout this paper, we make the following assumptions: let H denote a real Hilbert space equipped with the inner product $\langle \cdot , \cdot \rangle$ and its induced norm $\| \cdot \|$, and let C be a nonempty, closed, and convex subset of H. For a mapping $K : H \to H$, the variational inequality problem (abbreviated as VIP) is formulated as: find a point $b \in C$ satisfying
$$\langle K b , c - b \rangle \ge 0 , \quad \forall c \in C , \tag{1}$$
where K represents a nonlinear mapping; the solution set of Equation (1) is denoted by $\mathrm{VI}(C, K)$.
As a fundamental modeling framework in applied mathematics, variational inequalities find wide applications across diverse domains, including optimal control, signal processing, image restoration, and composite minimization (see [1,2,3,4,5,6,7]). Researchers have proposed and analyzed numerous effective approaches for solving variational inequality problems over recent decades; see [8,9,10,11,12,13,14,15,16,17,18] and related references. It is worth noting that such approaches typically necessitate the mapping K to satisfy certain monotonicity properties.
Among projection-type schemes, the projected gradient method stands as the most rudimentary and historically antecedent approach to variational inequality problems; its iteration can be succinctly expressed as
$$a_{n+1} = P_C \left( a_n - \delta K a_n \right) . \tag{2}$$
Under the assumptions that the mapping K is L-Lipschitz continuous and $\lambda$-strongly monotone, and that the step size satisfies $\delta \in ( 0 , 2 \lambda / L^2 )$, the sequence $\{ a_n \}$ generated by Equation (2) converges to an element of $\mathrm{VI}(C, K)$.
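As a minimal illustration, iteration (2) can be sketched in a few lines of Python. The operator $K(a) = 2a$ and the box $C = [1, 5]^2$ below are our own illustrative choices (so $\lambda = L = 2$ and any $\delta \in (0, 1)$ is admissible), not data from the cited references.

```python
import numpy as np

def projected_gradient(K, proj, a0, delta, n_iter=1000, tol=1e-10):
    """Iterate a_{n+1} = P_C(a_n - delta * K(a_n)) until the update is tiny."""
    a = np.asarray(a0, dtype=float)
    for _ in range(n_iter):
        a_next = proj(a - delta * K(a))
        if np.linalg.norm(a_next - a) < tol:
            return a_next
        a = a_next
    return a

# Illustrative data: K(a) = 2a is 2-strongly monotone and 2-Lipschitz,
# and VI(C, K) = {(1, 1)} for the box C = [1, 5]^2.
K = lambda a: 2.0 * a
proj = lambda a: np.clip(a, 1.0, 5.0)   # P_C for the box
sol = projected_gradient(K, proj, np.array([4.0, 3.0]), delta=0.4)
```

Here `proj` plays the role of $P_C$; for a box it is coordinatewise clipping.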
In numerous studies addressing variational inequalities governed by pseudomonotone and Lipschitz continuous operators, the extragradient method stands as the most prevalently employed algorithm. Indeed, the extragradient method (EGM), introduced by Korpelevich [19] for finite-dimensional saddle-point problems, proceeds as detailed below.
$$b_n = P_C \left( a_n - \delta K a_n \right) , \qquad a_{n+1} = P_C \left( a_n - \delta K b_n \right) , \tag{3}$$
where K is monotone and L-Lipschitz continuous. Selecting a fixed step size $\delta \in ( 0 , 1 / L )$ ensures that the sequence $\{ a_n \}$ generated by Equation (3) converges weakly to a point in $\mathrm{VI}(C, K)$, provided that the solution set is nonempty. A notable drawback of the extragradient method is that it requires two projections onto the set C at each iteration, which can severely impair the overall computational efficiency.
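A compact sketch of iteration (3). The rotation operator below is our own illustrative example of a monotone (indeed skew) operator with $L = 1$, on which the plain projected gradient method fails while the extragradient method converges; it is not one of the paper's test problems.

```python
import numpy as np

def extragradient(K, proj, a0, delta, n_iter=5000, tol=1e-10):
    """Korpelevich's method: an extrapolation step followed by a correction
    step, i.e., two projections onto C per iteration."""
    a = np.asarray(a0, dtype=float)
    for _ in range(n_iter):
        b = proj(a - delta * K(a))        # extrapolation
        a_next = proj(a - delta * K(b))   # correction
        if np.linalg.norm(a_next - a) < tol:
            return a_next
        a = a_next
    return a

# Illustrative data: K(a) = (a_2, -a_1) is monotone (skew) with L = 1,
# C = R^2, and VI(C, K) = {0}; any fixed delta in (0, 1/L) works.
K = lambda a: np.array([a[1], -a[0]])
proj = lambda a: a                        # P_C is the identity for C = R^2
sol = extragradient(K, proj, np.array([1.0, 1.0]), delta=0.5)
```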
Subsequently, some researchers have presented two categories of methods aimed at improving the numerical efficiency of Equation (3). Tseng [20] first introduced the forward–backward–forward method, which is now commonly referred to as Tseng’s extragradient method (TEGM). Unlike standard extragradient methods, Equation (4) achieves improved efficiency by computing just one projection per iteration. The precise algorithmic formulation appears as follows:
$$b_n = P_C \left( a_n - \delta K a_n \right) , \qquad a_{n+1} = b_n - \delta \left( K b_n - K a_n \right) . \tag{4}$$
Provided that the mapping K is monotone and L-Lipschitz continuous, and that the constant step size is chosen as $\delta \in ( 0 , 1 / L )$, the sequence $\{ a_n \}$ generated by Equation (4) converges weakly to a point in $\mathrm{VI}(C, K)$. As the second alternative, the subgradient extragradient method (SEGM) was introduced by Censor, Gibali, and Reich [21]. Its iterations are specified below.
$$b_n = P_C \left( a_n - \delta K a_n \right) , \qquad T_n = \left\{ a \in H \mid \langle a_n - \delta K a_n - b_n , a - b_n \rangle \le 0 \right\} , \qquad a_{n+1} = P_{T_n} \left( a_n - \delta K b_n \right) . \tag{5}$$
Under the conditions that K is L-Lipschitz continuous and monotone and that the step size is fixed at $\delta \in ( 0 , 1 / L )$, Equation (5) is known to converge weakly to solutions of both monotone variational inequalities (see [22]) and their pseudomonotone counterparts (see [23,24]).
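The two remedies can be sketched side by side: in the Tseng step (4) the second projection is replaced by an explicit correction, while in the SEGM step (5) it is replaced by a projection onto the half-space $T_n$, which admits a closed form. The operator and feasible set below are our own illustrative choices, not taken from the cited references.

```python
import numpy as np

def tseng(K, proj, a0, delta, n_iter=2000, tol=1e-10):
    """Iteration (4): a single projection onto C per step."""
    a = np.asarray(a0, dtype=float)
    for _ in range(n_iter):
        b = proj(a - delta * K(a))
        a_next = b - delta * (K(b) - K(a))   # forward-backward-forward correction
        if np.linalg.norm(a_next - a) < tol:
            return a_next
        a = a_next
    return a

def segm(K, proj, a0, delta, n_iter=2000, tol=1e-10):
    """Iteration (5): the second projection is onto the half-space T_n."""
    a = np.asarray(a0, dtype=float)
    for _ in range(n_iter):
        w = a - delta * K(a)
        b = proj(w)
        x = a - delta * K(b)
        n_vec = w - b                        # outer normal of T_n at b
        s = np.dot(n_vec, x - b)
        if s > 0:                            # closed-form projection onto T_n
            x = x - (s / np.dot(n_vec, n_vec)) * n_vec
        if np.linalg.norm(x - a) < tol:
            return x
        a = x
    return a

# Illustrative data: K(a) = 2a (monotone, L = 2) on the box C = [1, 5]^2,
# so VI(C, K) = {(1, 1)}; delta = 0.3 lies in (0, 1/L).
K = lambda a: 2.0 * a
proj = lambda a: np.clip(a, 1.0, 5.0)
s1 = tseng(K, proj, np.array([4.0, 3.0]), delta=0.3)
s2 = segm(K, proj, np.array([4.0, 3.0]), delta=0.3)
```

Both variants evaluate $P_C$ once per iteration; the half-space projection in `segm` costs only one inner product and one vector update.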
Let $U : C \to C$ be a nonlinear mapping; the fixed point problem (FPP) is defined as follows:
$$\text{find } q \in C \text{ such that } U q = q . \tag{6}$$
We denote the fixed point set of U by $\mathrm{Fix}(U)$. The present work aims to find a point q that solves Equations (1) and (6) simultaneously, i.e., a point satisfying
$$q \in \mathrm{Fix}(U) \cap \mathrm{VI}(C, K) . \tag{7}$$
Numerous iterative methods have been developed for finding a common element of Equation (7) in a Hilbert space H, see, e.g., [25,26,27,28,29,30,31,32,33,34]. Specifically, Takahashi and Toyoda [35] introduced an iterative method for solving Equation (7), which is formulated as follows:
$$a_{n+1} = ( 1 - \beta_n ) a_n + \beta_n P_C \left( U a_n - \lambda_n K a_n \right) .$$
Here, the mapping K is $\lambda$-inverse strongly monotone, and U is nonexpansive. Under certain conditions, they established that the iterative sequence $\{ a_n \}$ generated by this algorithm converges weakly to a solution of Equation (7). Next, Censor, Gibali, and Reich [22] proved weak convergence to solutions of Equation (7) under the Lipschitz continuity and monotonicity of K and the nonexpansiveness of U. The algorithm is specified as follows:
$$b_n = P_C \left( a_n - \lambda K a_n \right) , \qquad T_n = \left\{ a \in H \mid \langle a_n - \lambda K a_n - b_n , a - b_n \rangle \le 0 \right\} , \qquad a_{n+1} = \beta_n a_n + ( 1 - \beta_n ) U P_{T_n} \left( a_n - \lambda K b_n \right) .$$
In infinite-dimensional spaces, norm convergence is generally preferable to weak convergence. This necessitates the development of algorithms with norm convergence guarantees for solutions of Equation (7) in Hilbert spaces. For the case where K is merely monotone and Lipschitz-continuous, Kraikaew and Saejung [36] proposed the Halpern subgradient extragradient method (HSEGM), which combines the subgradient extragradient method with Halpern’s iterative method. The complete algorithm is presented below:
$$b_n = P_C \left( a_n - \lambda K a_n \right) , \qquad T_n = \left\{ a \in H \mid \langle a_n - \lambda K a_n - b_n , a - b_n \rangle \le 0 \right\} , \qquad c_n = \gamma_n a_0 + ( 1 - \gamma_n ) P_{T_n} \left( a_n - \lambda K b_n \right) , \qquad a_{n+1} = \beta_n a_n + ( 1 - \beta_n ) U c_n , \tag{8}$$
where $\lambda \in ( 0 , 1 / L )$, K is a monotone and L-Lipschitz continuous mapping, and U is a quasi-nonexpansive mapping. Under these conditions, the sequence $\{ a_n \}$ generated by Equation (8) converges strongly to $P_{\mathrm{Fix}(U) \cap \mathrm{VI}(C, K)} ( a_0 )$. The method (8) has two main shortcomings. First, its applicability is restricted to monotone operators K. Second, it employs a fixed step size that depends on the Lipschitz constant of K.
Inertial techniques can accelerate the convergence rate of algorithms, which has motivated extensive research on inertial methods [37,38,39,40,41]. Next, Cai et al. [42] proposed an inertial Tseng extragradient method for solving Equation (7) by combining viscosity approximation and Tseng’s extragradient method. The algorithm is formally described as follows:
$$d_n = a_n + \theta_n ( a_n - a_{n-1} ) , \qquad b_n = P_C \left( d_n - \lambda K d_n \right) , \qquad c_n = b_n - \lambda \left( K b_n - K d_n \right) , \qquad a_{n+1} = \beta_n f(a_n) + ( 1 - \beta_n ) \left[ \gamma_n U c_n + ( 1 - \gamma_n ) c_n \right] . \tag{9}$$
Consider a pseudomonotone, L-Lipschitz continuous mapping K and a nonexpansive mapping U, with $\lambda \in ( 0 , 1 / L )$. Under the assumption that $\mathrm{VI}(C, K) \cap \mathrm{Fix}(U)$ is nonempty, the strong convergence of the iterative sequence $\{ a_n \}$ generated by Equation (9) was established. Specifically, the sequence converges to the point $q \in \mathrm{VI}(C, K) \cap \mathrm{Fix}(U)$ satisfying $q = P_{\mathrm{VI}(C, K) \cap \mathrm{Fix}(U)} f(q)$. Like method (8), this method employs a fixed step size dependent on the Lipschitz constant. This reliance necessitates an a priori estimate of the constant, posing a significant practical limitation, as the constant is frequently unavailable or challenging to determine for nonlinear problems. In our new algorithm, an adaptive step size is adopted to remove this defect.
Inspired by the research works mentioned above, we put forward an enhanced adaptive inertial Tseng extragradient algorithm. The following are the advantages of this algorithm:
(1)
We propose a new step size rule. The rule allows the algorithm to function without relying on the pre-known information about the Lipschitz constant of the mapping.
(2)
Our convergence analysis establishes the strong convergence of the generated sequences to a solution of Equation (7) under relaxed conditions, where one of the mappings is assumed to be pseudomonotone and l-Lipschitz continuous, and the other is quasi-nonexpansive.
(3)
Numerical simulations confirm the theoretical analysis. Comparative results show that the proposed algorithm achieves significantly faster convergence than conventional methods [10,11,12,13,25,30].
(4)
Applications of the algorithm in optimal control are presented.
Here is the organizational structure of our paper. In Section 2, we provide some essential definitions as well as technical lemmas that will be utilized later. Then, in Section 3, we put forward an algorithm and analyze its convergence. Subsequently, Section 4 presents computational experiments validating the theoretical framework, including comparative performance analyses and practical applications. Finally, in Section 5, we present a concise summary.

2. Preliminaries

This section introduces several fundamental definitions necessary for understanding the core results. Let C be a closed and convex subset of the real Hilbert space H. For a sequence $\{ c_n \}$, its weak convergence to a point c as $n \to \infty$ is denoted by $c_n \rightharpoonup c$, whereas its strong convergence to c is denoted by $c_n \to c$.
Definition 1.
Let $K : H \to H$ be a mapping. Then, K is called:
(1) 
L-Lipschitz continuous with $L > 0$, if
$$\| K b - K c \| \le L \| b - c \| , \quad \forall b , c \in H .$$
(2) 
nonexpansive, if
$$\| K b - K c \| \le \| b - c \| , \quad \forall b , c \in H .$$
(3) 
quasi-nonexpansive, if
$$\| K c - q \| \le \| c - q \| , \quad \forall c \in H , \ q \in \mathrm{Fix}(K) .$$
(4) 
monotone, if
$$\langle K b - K c , b - c \rangle \ge 0 , \quad \forall b , c \in H .$$
(5) 
pseudomonotone, if
$$\langle K c , b - c \rangle \ge 0 \implies \langle K b , b - c \rangle \ge 0 , \quad \forall b , c \in H .$$
(6) 
$\beta$-strongly monotone, if there exists a constant $\beta > 0$ such that
$$\langle K b - K c , b - c \rangle \ge \beta \| b - c \|^2 , \quad \forall b , c \in H .$$
(7) 
sequentially weakly continuous, if for each sequence $\{ c_n \}$ we have
$$c_n \rightharpoonup c \ \text{as} \ n \to \infty \quad \text{implies} \quad K c_n \rightharpoonup K c \ \text{as} \ n \to \infty .$$
For every element $a \in H$, there exists a unique closest point in C, denoted by $P_C a$, that satisfies the inequality $\| a - P_C a \| \le \| a - b \|$ for all $b \in C$. This operator $P_C$ is termed the metric projection of H onto C.
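For the simple feasible sets that appear in this paper's experiments, $P_C$ has a closed form. The following sketch (our own illustration) shows the two most common cases, a box and a half-space.

```python
import numpy as np

def proj_box(a, lo, hi):
    """P_C for a box C = [lo, hi]^m: coordinatewise clipping."""
    return np.clip(a, lo, hi)

def proj_halfspace(a, w, r):
    """P_C for a half-space C = {x : <w, x> <= r}."""
    s = np.dot(w, a) - r
    if s <= 0:
        return np.asarray(a, dtype=float)   # a already lies in C
    return a - (s / np.dot(w, w)) * w

# P_C a is the closest point of C: ||a - P_C a|| <= ||a - b|| for all b in C.
p = proj_box(np.array([6.0, 0.0]), 1.0, 5.0)                          # -> [5., 1.]
h = proj_halfspace(np.array([2.0, 0.0]), np.array([1.0, 0.0]), 1.0)   # -> [1., 0.]
```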
Remark 1.
A monotone mapping must be a pseudomonotone operator, and a nonexpansive mapping must be a quasi-nonexpansive mapping, but the converse is not true.
The lemmas presented below furnish indispensable support for the proof of this paper’s main results.
Lemma 1
([12]). The metric projection $c = P_C a$ in a real Hilbert space H is equivalently characterized by the following variational inequality:
$$\langle c - a , c - b \rangle \le 0 , \quad \forall b \in C ,$$
where $C \subseteq H$ is any nonempty closed convex subset.
Lemma 2
([25]). For any vectors $c , b \in H$ and any scalars $\gamma , \beta \in \mathbb{R}$, the following identities hold in the real Hilbert space H:
(i) 
$$\| c + b \|^2 \le \| c \|^2 + 2 \langle b , c + b \rangle ,$$
(ii) 
$$\| \gamma c + \beta b \|^2 = \gamma ( \gamma + \beta ) \| c \|^2 + \beta ( \gamma + \beta ) \| b \|^2 - \gamma \beta \| c - b \|^2 .$$
Lemma 3
([43]). Consider three sequences: $\{ c_n \}$ of nonnegative real numbers, $\{ \delta_n \}$ of real numbers in $( 0 , 1 )$ with $\sum_{n=1}^{\infty} \delta_n = \infty$, and $\{ d_n \}$ of real numbers. Suppose that
$$c_{n+1} \le ( 1 - \delta_n ) c_n + \delta_n d_n , \quad \forall n \ge 1 .$$
If $\limsup_{k \to \infty} d_{n_k} \le 0$ for every subsequence $\{ c_{n_k} \}$ of $\{ c_n \}$ satisfying $\liminf_{k \to \infty} ( c_{n_k + 1} - c_{n_k} ) \ge 0$, then $\lim_{n \to \infty} c_n = 0$.
Lemma 4
([44]). Let $\{ c_n \}$ be a sequence of non-negative real numbers with a subsequence $\{ c_{n_j} \}$ satisfying $c_{n_j} < c_{n_j + 1}$ for all $j \in \mathbb{N}$. Then, there exists a nondecreasing sequence $\{ v_k \}$ in $\mathbb{N}$ with $\lim_{k \to \infty} v_k = \infty$, satisfying the following properties for all sufficiently large $k \in \mathbb{N}$:
$$c_{v_k} \le c_{v_k + 1} \quad \text{and} \quad c_k \le c_{v_k + 1} .$$
In fact, $v_k = \max \{ n \in \{ 1 , 2 , \ldots , k \} : c_n < c_{n+1} \}$.
Lemma 5
([45]). Let $\beta$ be a parameter in $( 0 , 1 ]$ and $\delta$ a positive constant. Let $S : H \to H$ be an L-Lipschitz and $\alpha$-strongly monotone mapping, and let $U : H \to H$ be a nonexpansive mapping. A new mapping $U^{\delta} : H \to H$ can then be defined by
$$U^{\delta} c = ( I - \beta \delta S ) ( U c ) , \quad \forall c \in H .$$
Then, $U^{\delta}$ is a contraction provided $\delta < \frac{2 \alpha}{L^2}$, that is,
$$\| U^{\delta} c - U^{\delta} d \| \le ( 1 - \beta \eta ) \| c - d \| , \quad \forall c , d \in H ,$$
where $\eta = 1 - \sqrt{ 1 - \delta ( 2 \alpha - \delta L^2 ) } \in ( 0 , 1 )$.
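A quick numerical sanity check of Lemma 5 (our own illustration): take $S = I$ (so $\alpha = L = 1$), $\beta = 1$, $\delta = 0.5$, and U a projection onto a box, which is nonexpansive; then $\eta = 1 - \sqrt{1 - \delta(2\alpha - \delta L^2)} = 0.5$, and the contraction bound can be verified on random pairs of points.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = lip = 1.0          # S = I is 1-strongly monotone and 1-Lipschitz
beta, delta = 1.0, 0.5     # delta < 2*alpha/L^2 = 2
eta = 1.0 - np.sqrt(1.0 - delta * (2.0 * alpha - delta * lip ** 2))

U = lambda x: np.clip(x, -1.0, 1.0)          # nonexpansive mapping
T = lambda x: (1.0 - beta * delta) * U(x)    # U^delta c = (I - beta*delta*S)(U c)

violations = 0
for _ in range(200):
    c, d = rng.standard_normal(3), rng.standard_normal(3)
    lhs = np.linalg.norm(T(c) - T(d))
    rhs = (1.0 - beta * eta) * np.linalg.norm(c - d)
    violations += lhs > rhs + 1e-12
```

With these choices `eta` equals 0.5 and no violation of the bound occurs.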

3. Main Results

In the present section, a modified version of Tseng’s extragradient method is introduced, which is designed to solve Equation (7). The subsequent assumptions are pivotal for the establishment of our main results.
(C1)
Let C denote a nonempty closed convex subset of the real Hilbert space H.
(C2)
The operator K : H H is assumed to be pseudomonotone and l-Lipschitz continuous on H, and sequentially weakly continuous when restricted to C .
(C3)
Let $U : H \to H$ be quasi-nonexpansive with $I - U$ demiclosed at zero and $\mathrm{VI}(C, K) \cap \mathrm{Fix}(U) \neq \emptyset$.
(C4)
Let $S : H \to H$ be $\alpha$-strongly monotone and L-Lipschitz continuous on H, and let $g : H \to H$ be a contraction with a constant $\rho \in [ 0 , 1 )$ such that $\rho < \eta$, where $\eta$ is defined in Lemma 5. Let $\{ \sigma_n \}$ denote a positive sequence satisfying $\lim_{n \to \infty} \frac{\sigma_n}{\gamma_n} = 0$, where $\{ \gamma_n \}$ is a sequence taking values in $( 0 , 1 )$ with $\lim_{n \to \infty} \gamma_n = 0$ and $\sum_{n=1}^{\infty} \gamma_n = \infty$. Furthermore, let $\{ \xi_n \}$ be a sequence in $[ 0 , 1 ]$ that fulfills the following condition:
$$\liminf_{n \to \infty} \xi_n ( 1 - \xi_n ) > 0 .$$
We proceed to present the algorithm specified below.
Remark 2.
The sequence $\{ \tau_n \}$ is well defined and nondecreasing, with $\lim_{n \to \infty} \tau_n = \tau \le \max \{ \tau_1 , \frac{l}{\mu} \}$.
Proof. 
From the definition of $\{ \tau_n \}$, we have $\tau_n \le \tau_{n+1}$ for all $n \in \mathbb{N}$, so $\{ \tau_n \}$ is nondecreasing. In addition, when $d_n \neq b_n$, set
$$\lambda_n := \frac{\| K d_n - K b_n \|}{\mu \| d_n - b_n \|} .$$
Since K is l-Lipschitz continuous, we have
$$\lambda_n = \frac{\| K d_n - K b_n \|}{\mu \| d_n - b_n \|} \le \frac{l \| d_n - b_n \|}{\mu \| d_n - b_n \|} = \frac{l}{\mu} .$$
Hence,
$$\tau_2 = \max \{ \tau_1 , \lambda_1 \} \le \max \left\{ \tau_1 , \frac{l}{\mu} \right\} , \qquad \tau_3 = \max \{ \tau_2 , \lambda_2 \} \le \max \left\{ \tau_2 , \frac{l}{\mu} \right\} \le \max \left\{ \tau_1 , \frac{l}{\mu} \right\} .$$
By induction, we obtain $\tau_{n+1} \le \max \{ \tau_1 , \frac{l}{\mu} \}$.
It follows that $\{ \tau_n \}$ is a nondecreasing sequence bounded above. Therefore, $\lim_{n \to \infty} \tau_n = \tau \le \max \{ \tau_1 , \frac{l}{\mu} \}$ exists. □
Lemma 6
([30]). Consider a sequence $\{ d_n \}$ generated by Algorithm 1. If there exists a subsequence $\{ d_{n_k} \}$ of $\{ d_n \}$ such that $\{ d_{n_k} \}$ converges weakly to $c \in H$ and $\lim_{k \to \infty} \| d_{n_k} - b_{n_k} \| = 0$, then it follows that $c \in \mathrm{VI}(C, K)$.
Algorithm 1 Modified inertial viscosity-type Tseng’s extragradient algorithm
Initialization: Given $\theta > 0$, $\tau_1 > 0$, $a \in ( 0 , 1 ]$, $\mu \in ( 0 , 1 )$, $\{ \gamma_n \} \subset ( 0 , 1 )$. Let $a_0 , a_1 \in H$ be arbitrary. Iterative Steps: Calculate $a_{n+1}$ as follows:
Step 1. Given the iterates $a_{n-1}$ and $a_n$ ($n \ge 1$), choose $\theta_n$ such that $0 \le \theta_n \le \bar{\theta}_n$, where
$$\bar{\theta}_n = \begin{cases} \min \left\{ \dfrac{\sigma_n}{\| a_n - a_{n-1} \|} , \theta \right\} , & \text{if } a_n \neq a_{n-1} , \\ \theta , & \text{otherwise} . \end{cases}$$
Step 2. Set $d_n = a_n + \theta_n ( a_n - a_{n-1} )$ and compute
$$b_n = P_C \left( d_n - \frac{1}{\tau_n} K d_n \right) , \qquad c_n = b_n - \frac{a}{\tau_n} \left( K b_n - K d_n \right) .$$
Step 3. Compute
$$a_{n+1} = \gamma_n g ( a_n ) + ( I - \gamma_n \delta S ) \left[ \xi_n U c_n + ( 1 - \xi_n ) c_n \right] .$$
Update
$$\tau_{n+1} = \begin{cases} \max \left\{ \dfrac{\| K d_n - K b_n \|}{\mu \| d_n - b_n \|} , \tau_n \right\} , & \text{if } d_n \neq b_n , \\ \tau_n , & \text{otherwise} . \end{cases}$$
Set n : = n + 1 and go to Step 1.
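To make the steps concrete, the following Python sketch implements Algorithm 1 under illustrative choices of the parameter sequences ($\gamma_n = 1/(n+1)$, $\sigma_n = 1/(n+1)^2$, $\xi_n = 1/2$, $\theta_n = \bar{\theta}_n$) and illustrative problem data ($K(x) = 2x$ on $C = [1,5]^2$, $U = I$, $S = I$, $g(x) = 0.1x$, so that $\rho = 0.1 < \eta = 0.5$). These choices are ours, not the paper's test problems.

```python
import numpy as np

def algorithm1(K, U, S, g, proj, a0, a1, *, theta=0.5, tau=0.9, a=0.5,
               mu=0.9, delta=0.5, n_iter=2000, tol=1e-12):
    x_prev = np.asarray(a0, dtype=float)
    x = np.asarray(a1, dtype=float)
    for n in range(1, n_iter + 1):
        gamma = 1.0 / (n + 1)          # gamma_n -> 0, sum gamma_n = inf
        sigma = 1.0 / (n + 1) ** 2     # sigma_n / gamma_n -> 0
        xi = 0.5                       # liminf xi_n (1 - xi_n) > 0
        # Step 1: inertial extrapolation with theta_n = theta_bar_n
        diff = np.linalg.norm(x - x_prev)
        theta_n = min(sigma / diff, theta) if diff > 0 else theta
        d = x + theta_n * (x - x_prev)
        # Step 2: Tseng step with adaptive step size 1/tau_n
        b = proj(d - K(d) / tau)
        c = b - (a / tau) * (K(b) - K(d))
        # Step 3: viscosity-type update
        t = xi * U(c) + (1.0 - xi) * c
        x_next = gamma * g(x) + t - gamma * delta * S(t)
        # Step-size update: tau_n is nondecreasing, bounded by max{tau_1, l/mu}
        e = np.linalg.norm(d - b)
        if e > 0:
            tau = max(np.linalg.norm(K(d) - K(b)) / (mu * e), tau)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x_prev, x = x, x_next
    return x

# Illustrative run: VI(C, K) ∩ Fix(U) = {(1, 1)} for K(x) = 2x, U = I.
K = lambda x: 2.0 * x
proj = lambda x: np.clip(x, 1.0, 5.0)
I = lambda x: x
g = lambda x: 0.1 * x
sol = algorithm1(K, I, I, g, proj, np.array([4.0, 4.0]), np.array([3.0, 3.0]))
```

Note that the Lipschitz constant of K is never supplied: `tau` adapts from the observed ratio $\|Kd_n - Kb_n\| / \|d_n - b_n\|$.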
Lemma 7.
Given that Conditions (C1)–(C4) are satisfied, the sequences $\{ d_n \}$, $\{ b_n \}$, and $\{ c_n \}$ produced by Algorithm 1 satisfy
$$\| c_n - q \|^2 \le \| d_n - q \|^2 - \left( 1 - \frac{a^2 \mu^2 \tau_{n+1}^2}{\tau_n^2} \right) \| b_n - d_n \|^2 , \quad \forall q \in \mathrm{VI}(C, K) , \tag{10}$$
and
$$\| c_n - b_n \| \le \frac{a \mu \tau_{n+1}}{\tau_n} \| b_n - d_n \| . \tag{11}$$
Proof. 
First, leveraging the definition of $\{ \tau_n \}$, it can easily be deduced that
$$\| K d_n - K b_n \| \le \mu \tau_{n+1} \| d_n - b_n \| , \quad \forall n \ge 0 . \tag{12}$$
As defined, $\{ c_n \}$ satisfies
$$\begin{aligned} \| c_n - q \|^2 &= \left\| b_n - \frac{a}{\tau_n} ( K b_n - K d_n ) - q \right\|^2 = \left\| b_n - d_n - \frac{a}{\tau_n} ( K b_n - K d_n ) + d_n - q \right\|^2 \\ &= \| b_n - d_n \|^2 + \| d_n - q \|^2 + \frac{a^2}{\tau_n^2} \| K b_n - K d_n \|^2 + 2 \langle b_n - d_n , d_n - q \rangle - \frac{2 a}{\tau_n} \langle b_n - q , K b_n - K d_n \rangle \\ &= \| b_n - d_n \|^2 + \| d_n - q \|^2 + \frac{a^2}{\tau_n^2} \| K b_n - K d_n \|^2 + 2 \langle b_n - d_n , b_n - q \rangle - 2 \| b_n - d_n \|^2 - \frac{2 a}{\tau_n} \langle b_n - q , K b_n - K d_n \rangle \\ &= \| d_n - q \|^2 - \| b_n - d_n \|^2 + \frac{a^2}{\tau_n^2} \| K b_n - K d_n \|^2 + 2 \langle b_n - d_n , b_n - q \rangle - \frac{2 a}{\tau_n} \langle b_n - q , K b_n - K d_n \rangle . \end{aligned} \tag{13}$$
By Lemma 1, since $b_n = P_C ( d_n - \frac{1}{\tau_n} K d_n )$ and $q \in C$, we have
$$\left\langle b_n - \left( d_n - \frac{1}{\tau_n} K d_n \right) , b_n - q \right\rangle \le 0 ,$$
or equivalently
$$\langle b_n - d_n , b_n - q \rangle \le - \frac{1}{\tau_n} \langle K d_n , b_n - q \rangle . \tag{14}$$
From (13), (14), and $a \in ( 0 , 1 ]$, i.e., $\frac{1}{a} \ge 1$, we get the following:
$$\begin{aligned} \| c_n - q \|^2 &\le \| d_n - q \|^2 - \| b_n - d_n \|^2 + \frac{a^2 \mu^2 \tau_{n+1}^2}{\tau_n^2} \| b_n - d_n \|^2 - \frac{2}{\tau_n} \langle K d_n , b_n - q \rangle - \frac{2 a}{\tau_n} \langle b_n - q , K b_n - K d_n \rangle \\ &= \| d_n - q \|^2 - \left( 1 - \frac{a^2 \mu^2 \tau_{n+1}^2}{\tau_n^2} \right) \| b_n - d_n \|^2 - \frac{1}{a} \cdot \frac{2 a}{\tau_n} \langle K d_n , b_n - q \rangle - \frac{2 a}{\tau_n} \langle b_n - q , K b_n - K d_n \rangle \\ &\le \| d_n - q \|^2 - \left( 1 - \frac{a^2 \mu^2 \tau_{n+1}^2}{\tau_n^2} \right) \| b_n - d_n \|^2 - \frac{2 a}{\tau_n} \langle K d_n , b_n - q \rangle - \frac{2 a}{\tau_n} \langle b_n - q , K b_n - K d_n \rangle \\ &= \| d_n - q \|^2 - \left( 1 - \frac{a^2 \mu^2 \tau_{n+1}^2}{\tau_n^2} \right) \| b_n - d_n \|^2 - \frac{2 a}{\tau_n} \langle K b_n , b_n - q \rangle . \end{aligned} \tag{15}$$
Since $b_n \in C$ and $q \in \mathrm{VI}(C, K)$, we obtain $\langle K q , b_n - q \rangle \ge 0$; the pseudomonotonicity of K then yields
$$\langle K b_n , b_n - q \rangle \ge 0 . \tag{16}$$
Combining (15) and (16), we deduce
$$\| c_n - q \|^2 \le \| d_n - q \|^2 - \left( 1 - \frac{a^2 \mu^2 \tau_{n+1}^2}{\tau_n^2} \right) \| b_n - d_n \|^2 . \tag{17}$$
By (12) and the construction of $\{ c_n \}$, we obtain
$$\| c_n - b_n \| = \frac{a}{\tau_n} \| K b_n - K d_n \| \le \frac{a \mu \tau_{n+1}}{\tau_n} \| b_n - d_n \| . \quad \square$$
Theorem 1.
Assuming Conditions (C1)–(C4) and that $\{ \theta_n \}$ is chosen with
$$\lim_{n \to \infty} \frac{\theta_n}{\gamma_n} \| a_n - a_{n-1} \| = 0 ,$$
the sequence $\{ a_n \}$ from Algorithm 1 converges strongly to $q \in \mathrm{VI}(C, K) \cap \mathrm{Fix}(U)$, which is uniquely determined by $q = P_{\mathrm{VI}(C, K) \cap \mathrm{Fix}(U)} ( I - \delta S + g ) ( q )$, i.e., q solves the following variational inequality:
$$\langle ( g - \delta S ) ( q ) , b - q \rangle \le 0 , \quad \forall b \in \mathrm{VI}(C, K) \cap \mathrm{Fix}(U) .$$
Proof. 
The proof proceeds via four distinct claims.
Claim 1. The sequence $\{ a_n \}$ is bounded. Set $t_n = \xi_n U c_n + ( 1 - \xi_n ) c_n$; using Lemmas 2 and 7, we obtain the following:
$$\begin{aligned} \| t_n - q \|^2 &= \| \xi_n ( U c_n - q ) + ( 1 - \xi_n ) ( c_n - q ) \|^2 \\ &= \xi_n \| U c_n - q \|^2 + ( 1 - \xi_n ) \| c_n - q \|^2 - \xi_n ( 1 - \xi_n ) \| U c_n - c_n \|^2 \\ &\le \xi_n \| c_n - q \|^2 + ( 1 - \xi_n ) \| c_n - q \|^2 - \xi_n ( 1 - \xi_n ) \| U c_n - c_n \|^2 \\ &= \| c_n - q \|^2 - \xi_n ( 1 - \xi_n ) \| U c_n - c_n \|^2 \\ &\le \| d_n - q \|^2 - \left( 1 - \frac{a^2 \mu^2 \tau_{n+1}^2}{\tau_n^2} \right) \| b_n - d_n \|^2 - \xi_n ( 1 - \xi_n ) \| U c_n - c_n \|^2 , \end{aligned}$$
which implies
$$\| t_n - q \| \le \| d_n - q \| . \tag{18}$$
From the definition of $\{ d_n \}$, we get
$$\| d_n - q \| = \| a_n + \theta_n ( a_n - a_{n-1} ) - q \| \le \| a_n - q \| + \theta_n \| a_n - a_{n-1} \| = \| a_n - q \| + \gamma_n \cdot \frac{\theta_n}{\gamma_n} \| a_n - a_{n-1} \| . \tag{19}$$
Due to $\lim_{n \to \infty} \frac{\theta_n}{\gamma_n} \| a_n - a_{n-1} \| = 0$, there exists a positive constant $M_1$ satisfying
$$\frac{\theta_n}{\gamma_n} \| a_n - a_{n-1} \| \le M_1 , \quad \forall n \ge 1 . \tag{20}$$
By (19) and (20), we have
$$\| d_n - q \| \le \| a_n - q \| + \gamma_n M_1 . \tag{21}$$
According to Lemma 5, we obtain
$$\begin{aligned} \| a_{n+1} - q \| &= \| \gamma_n g ( a_n ) + ( I - \gamma_n \delta S ) t_n - q \| = \| \gamma_n ( g ( a_n ) - \delta S q ) + ( I - \gamma_n \delta S ) t_n - ( I - \gamma_n \delta S ) q \| \\ &\le \gamma_n \| g ( a_n ) - \delta S q \| + ( 1 - \gamma_n \eta ) \| t_n - q \| \\ &\le \gamma_n \| g ( a_n ) - g ( q ) \| + \gamma_n \| g ( q ) - \delta S q \| + ( 1 - \gamma_n \eta ) \left( \| a_n - q \| + \gamma_n M_1 \right) \\ &\le \gamma_n \rho \| a_n - q \| + \gamma_n \| g ( q ) - \delta S q \| + ( 1 - \gamma_n \eta ) \| a_n - q \| + \gamma_n M_1 \\ &= \left( 1 - ( \eta - \rho ) \gamma_n \right) \| a_n - q \| + \gamma_n ( \eta - \rho ) \cdot \frac{\| g ( q ) - \delta S q \| + M_1}{\eta - \rho} \\ &\le \max \left\{ \| a_n - q \| , \frac{\| g ( q ) - \delta S q \| + M_1}{\eta - \rho} \right\} . \end{aligned}$$
Hence, by induction, $\| a_{n+1} - q \| \le \max \left\{ \| a_0 - q \| , \frac{\| g ( q ) - \delta S q \| + M_1}{\eta - \rho} \right\}$.
It follows that the sequence { a n } is bounded. Thus, the sequences { c n } , { t n } , { d n } , and { g ( a n ) } are also bounded.
Claim 2. We prove that
$$\left( 1 - \frac{\mu^2 a^2 \tau_{n+1}^2}{\tau_n^2} \right) \| b_n - d_n \|^2 + \xi_n ( 1 - \xi_n ) \| c_n - U c_n \|^2 \le \| a_n - q \|^2 - \| a_{n+1} - q \|^2 + \gamma_n M_4 , \tag{22}$$
for some $M_4 > 0$. Actually, from (i) in Lemma 2 we obtain
$$\begin{aligned} \| a_{n+1} - q \|^2 &= \| t_n - q + \gamma_n ( g ( a_n ) - \delta S ( t_n ) ) \|^2 \\ &\le \| t_n - q \|^2 + 2 \gamma_n \langle g ( a_n ) - \delta S ( t_n ) , a_{n+1} - q \rangle \\ &\le \| t_n - q \|^2 + 2 \gamma_n \| g ( a_n ) - \delta S ( t_n ) \| \| a_{n+1} - q \| \\ &\le \| t_n - q \|^2 + \gamma_n M_2 , \end{aligned} \tag{23}$$
where $M_2 := \sup_{n \in \mathbb{N}} \{ 2 \| g ( a_n ) - \delta S ( t_n ) \| \| a_{n+1} - q \| \}$. Substituting (17) into (23), we have
$$\| a_{n+1} - q \|^2 \le \| d_n - q \|^2 - \left( 1 - \frac{\mu^2 a^2 \tau_{n+1}^2}{\tau_n^2} \right) \| b_n - d_n \|^2 - \xi_n ( 1 - \xi_n ) \| c_n - U c_n \|^2 + \gamma_n M_2 . \tag{24}$$
Using (21), we get
$$\| d_n - q \|^2 \le \left( \| a_n - q \| + \gamma_n M_1 \right)^2 = \| a_n - q \|^2 + \gamma_n \left( 2 M_1 \| a_n - q \| + \gamma_n M_1^2 \right) \le \| a_n - q \|^2 + \gamma_n M_3 , \tag{25}$$
where $M_3 := \sup_{n \in \mathbb{N}} \{ 2 M_1 \| a_n - q \| + \gamma_n M_1^2 \}$. Substituting (25) into (24), we have
$$\| a_{n+1} - q \|^2 \le \| a_n - q \|^2 + \gamma_n M_3 + \gamma_n M_2 - \left( 1 - \frac{\mu^2 a^2 \tau_{n+1}^2}{\tau_n^2} \right) \| b_n - d_n \|^2 - \xi_n ( 1 - \xi_n ) \| c_n - U c_n \|^2 . \tag{26}$$
This implies
$$\left( 1 - \frac{\mu^2 a^2 \tau_{n+1}^2}{\tau_n^2} \right) \| b_n - d_n \|^2 + \xi_n ( 1 - \xi_n ) \| c_n - U c_n \|^2 \le \| a_n - q \|^2 - \| a_{n+1} - q \|^2 + \gamma_n M_4 ,$$
where $M_4 := M_2 + M_3$.
Claim 3. We prove that
$$\| a_{n+1} - q \|^2 \le \left( 1 - \frac{2 ( \eta - \rho ) \gamma_n}{1 - \gamma_n \rho} \right) \| a_n - q \|^2 + \frac{2 ( \eta - \rho ) \gamma_n}{1 - \gamma_n \rho} \left[ \frac{\gamma_n \eta^2}{2 ( \eta - \rho )} M_6 + \frac{1}{2 ( \eta - \rho )} \cdot \frac{\theta_n}{\gamma_n} \| a_n - a_{n-1} \| M_5 + \frac{1}{\eta - \rho} \langle g ( q ) - \delta S q , a_{n+1} - q \rangle \right] , \tag{28}$$
for some $M_5 , M_6 > 0$. Actually, from the definition of $\{ d_n \}$ and (i) in Lemma 2, we obtain
$$\begin{aligned} \| d_n - q \|^2 &= \| a_n + \theta_n ( a_n - a_{n-1} ) - q \|^2 \le \| a_n - q \|^2 + 2 \theta_n \langle d_n - q , a_n - a_{n-1} \rangle \\ &\le \| a_n - q \|^2 + 2 \theta_n \| d_n - q \| \| a_n - a_{n-1} \| \le \| a_n - q \|^2 + \theta_n \| a_n - a_{n-1} \| M_5 , \end{aligned} \tag{27}$$
where $M_5 := \sup_{n \in \mathbb{N}} \{ 2 \| d_n - q \| \}$. From (18) and (27), we have
$$\| t_n - q \|^2 \le \| a_n - q \|^2 + \theta_n \| a_n - a_{n-1} \| M_5 ,$$
from which it follows that
$$\begin{aligned} \| a_{n+1} - q \|^2 &= \| \gamma_n g ( a_n ) + ( I - \gamma_n \delta S ) t_n - q \|^2 = \| \gamma_n ( g ( a_n ) - \delta S q ) + ( I - \gamma_n \delta S ) ( t_n - q ) \|^2 \\ &\le ( 1 - \gamma_n \eta )^2 \| t_n - q \|^2 + 2 \gamma_n \langle g ( a_n ) - \delta S q , a_{n+1} - q \rangle \\ &= ( 1 - \gamma_n \eta )^2 \| t_n - q \|^2 + 2 \gamma_n \langle g ( a_n ) - g ( q ) , a_{n+1} - q \rangle + 2 \gamma_n \langle g ( q ) - \delta S q , a_{n+1} - q \rangle \\ &\le ( 1 - \gamma_n \eta )^2 \| t_n - q \|^2 + 2 \gamma_n \rho \| a_n - q \| \| a_{n+1} - q \| + 2 \gamma_n \langle g ( q ) - \delta S q , a_{n+1} - q \rangle \\ &\le ( 1 - \gamma_n \eta )^2 \| t_n - q \|^2 + \gamma_n \rho \left( \| a_n - q \|^2 + \| a_{n+1} - q \|^2 \right) + 2 \gamma_n \langle g ( q ) - \delta S q , a_{n+1} - q \rangle \\ &\le \left[ ( 1 - \gamma_n \eta )^2 + \gamma_n \rho \right] \| a_n - q \|^2 + \gamma_n \rho \| a_{n+1} - q \|^2 + \theta_n \| a_n - a_{n-1} \| M_5 + 2 \gamma_n \langle g ( q ) - \delta S q , a_{n+1} - q \rangle . \end{aligned}$$
Then, we get that
$$( 1 - \gamma_n \rho ) \| a_{n+1} - q \|^2 \le \left[ ( 1 - \gamma_n \eta )^2 + \gamma_n \rho \right] \| a_n - q \|^2 + \theta_n \| a_n - a_{n-1} \| M_5 + 2 \gamma_n \langle g ( q ) - \delta S q , a_{n+1} - q \rangle .$$
This means that
$$\begin{aligned} \| a_{n+1} - q \|^2 &\le \left( 1 - \frac{2 ( \eta - \rho ) \gamma_n}{1 - \gamma_n \rho} \right) \| a_n - q \|^2 + \frac{\eta^2 \gamma_n^2}{1 - \gamma_n \rho} \| a_n - q \|^2 + \frac{1}{1 - \gamma_n \rho} \left( \theta_n \| a_n - a_{n-1} \| M_5 + 2 \gamma_n \langle g ( q ) - \delta S q , a_{n+1} - q \rangle \right) \\ &= \left( 1 - \frac{2 ( \eta - \rho ) \gamma_n}{1 - \gamma_n \rho} \right) \| a_n - q \|^2 + \frac{2 ( \eta - \rho ) \gamma_n}{1 - \gamma_n \rho} \left[ \frac{\gamma_n \eta^2}{2 ( \eta - \rho )} \| a_n - q \|^2 + \frac{1}{2 ( \eta - \rho )} \cdot \frac{\theta_n}{\gamma_n} \| a_n - a_{n-1} \| M_5 + \frac{1}{\eta - \rho} \langle g ( q ) - \delta S q , a_{n+1} - q \rangle \right] \\ &\le \left( 1 - \frac{2 ( \eta - \rho ) \gamma_n}{1 - \gamma_n \rho} \right) \| a_n - q \|^2 + \frac{2 ( \eta - \rho ) \gamma_n}{1 - \gamma_n \rho} \left[ \frac{\gamma_n \eta^2}{2 ( \eta - \rho )} M_6 + \frac{1}{2 ( \eta - \rho )} \cdot \frac{\theta_n}{\gamma_n} \| a_n - a_{n-1} \| M_5 + \frac{1}{\eta - \rho} \langle g ( q ) - \delta S q , a_{n+1} - q \rangle \right] , \end{aligned}$$
where $M_6 := \sup_{n \in \mathbb{N}} \{ \| a_n - q \|^2 \}$.
Claim 4. The convergence of $\{ \| a_n - q \|^2 \}$ to zero is established through an analysis of two possible cases concerning the sequence behavior.
Case 1. There exists some $N \in \mathbb{N}$ such that $\| a_{n+1} - q \|^2 \le \| a_n - q \|^2$ for all $n \ge N$. This implies that $\lim_{n \to \infty} \| a_n - q \|^2$ exists.
From (26), we obtain
$$\lim_{n \to \infty} \| b_n - d_n \| = 0 , \tag{29}$$
$$\lim_{n \to \infty} \| c_n - U c_n \| = 0 . \tag{30}$$
In view of the construction of $\{ d_n \}$, it follows that
$$\lim_{n \to \infty} \| a_n - d_n \| = \lim_{n \to \infty} \theta_n \| a_n - a_{n-1} \| = \lim_{n \to \infty} \gamma_n \cdot \frac{\theta_n}{\gamma_n} \| a_n - a_{n-1} \| = 0 . \tag{31}$$
By combining (29) and (31), we derive
$$\lim_{n \to \infty} \| a_n - b_n \| \le \lim_{n \to \infty} \| a_n - d_n \| + \lim_{n \to \infty} \| d_n - b_n \| = 0 .$$
From (11) and (29), we conclude
$$\lim_{n \to \infty} \| c_n - b_n \| \le \lim_{n \to \infty} \frac{a \mu \tau_{n+1}}{\tau_n} \| d_n - b_n \| = 0 . \tag{32}$$
Combining (29) and (32), we obtain
$$\lim_{n \to \infty} \| c_n - d_n \| \le \lim_{n \to \infty} \| c_n - b_n \| + \lim_{n \to \infty} \| b_n - d_n \| = 0 . \tag{33}$$
According to (31) and (33), we get
$$\lim_{n \to \infty} \| c_n - a_n \| \le \lim_{n \to \infty} \| c_n - d_n \| + \lim_{n \to \infty} \| d_n - a_n \| = 0 . \tag{34}$$
Moreover, applying the definition of $\{ a_n \}$ reveals that
$$\begin{aligned} \| a_{n+1} - a_n \| &\le \| a_{n+1} - t_n \| + \| t_n - a_n \| = \gamma_n \| g ( a_n ) - \delta S ( t_n ) \| + \| \xi_n U c_n + ( 1 - \xi_n ) c_n - a_n \| \\ &\le \gamma_n \| g ( a_n ) - \delta S ( t_n ) \| + \xi_n \| U c_n - c_n \| + \| c_n - a_n \| . \end{aligned} \tag{35}$$
Applying $\lim_{n \to \infty} \gamma_n = 0$, (30), and (34) to (35), we have
$$\lim_{n \to \infty} \| a_{n+1} - a_n \| = 0 . \tag{36}$$
The boundedness of $\{ a_n \}$ guarantees the existence of a subsequence $\{ a_{n_k} \}$ that converges weakly to some $z \in H$ and attains the limit superior, i.e.,
$$\limsup_{n \to \infty} \langle g ( q ) - \delta S ( q ) , a_n - q \rangle = \lim_{k \to \infty} \langle g ( q ) - \delta S ( q ) , a_{n_k} - q \rangle = \langle g ( q ) - \delta S ( q ) , z - q \rangle .$$
Relation (31) ensures that the subsequence $\{ d_{n_k} \}$ converges weakly to z; hence, invoking (29) together with Lemma 6, we deduce that $z \in \mathrm{VI}(C, K)$. Equation (34) implies that $c_{n_k} \rightharpoonup z$. Given (30) and the demiclosedness of $I - U$ at zero, it follows that $z \in \mathrm{Fix}(U)$. Consequently, $z \in \mathrm{VI}(C, K) \cap \mathrm{Fix}(U)$. Lemma 5 ensures that $P_{\mathrm{VI}(C, K) \cap \mathrm{Fix}(U)} ( I - \delta S + g )$ is a contraction; hence, by the Banach contraction mapping principle, there exists a unique q such that $q = P_{\mathrm{VI}(C, K) \cap \mathrm{Fix}(U)} ( I - \delta S + g ) ( q )$. From Lemma 1, we derive
$$\limsup_{n \to \infty} \langle g ( q ) - \delta S ( q ) , a_n - q \rangle = \langle g ( q ) - \delta S ( q ) , z - q \rangle \le 0 . \tag{37}$$
From (36) and (37), we get
$$\limsup_{n \to \infty} \langle g ( q ) - \delta S ( q ) , a_{n+1} - q \rangle \le \limsup_{n \to \infty} \| g ( q ) - \delta S ( q ) \| \| a_{n+1} - a_n \| + \limsup_{n \to \infty} \langle g ( q ) - \delta S ( q ) , a_n - q \rangle \le 0 .$$
Employing Lemma 3 on (28), we obtain $\lim_{n \to \infty} \| a_n - q \| = 0$.
Case 2. There exists a subsequence $\{ \| a_{n_j} - q \|^2 \}$ of $\{ \| a_n - q \|^2 \}$ such that $\| a_{n_j} - q \|^2 \le \| a_{n_j + 1} - q \|^2$ for all $j \in \mathbb{N}$. Under this circumstance, Lemma 4 guarantees the existence of a nondecreasing sequence $\{ v_k \}$ in $\mathbb{N}$ with $\lim_{k \to \infty} v_k = \infty$ such that
$$\| a_{v_k} - q \|^2 \le \| a_{v_k + 1} - q \|^2 \quad \text{and} \quad \| a_k - q \|^2 \le \| a_{v_k + 1} - q \|^2 , \quad \forall k \in \mathbb{N} . \tag{38}$$
Combining (22) and (38), we obtain
$$\left( 1 - \frac{\mu^2 a^2 \tau_{v_k + 1}^2}{\tau_{v_k}^2} \right) \| b_{v_k} - d_{v_k} \|^2 + \xi_{v_k} ( 1 - \xi_{v_k} ) \| c_{v_k} - U c_{v_k} \|^2 \le \| a_{v_k} - q \|^2 - \| a_{v_k + 1} - q \|^2 + \gamma_{v_k} M_4 \le \gamma_{v_k} M_4 .$$
We can deduce that $\lim_{k \to \infty} \| b_{v_k} - d_{v_k} \| = 0$ and $\lim_{k \to \infty} \| c_{v_k} - U c_{v_k} \| = 0$. Applying a methodology parallel to Case 1 leads to the conclusion that
$$\lim_{k \to \infty} \| c_{v_k} - d_{v_k} \| = 0 , \qquad \lim_{k \to \infty} \| a_{v_k} - c_{v_k} \| = 0 , \qquad \lim_{k \to \infty} \| a_{v_k + 1} - a_{v_k} \| = 0 ,$$
$$\limsup_{k \to \infty} \langle g ( q ) - \delta S ( q ) , a_{v_k + 1} - q \rangle \le 0 .$$
Considering (28), we observe
$$\| a_{v_k + 1} - q \|^2 \le \left( 1 - \frac{2 ( \eta - \rho ) \gamma_{v_k}}{1 - \gamma_{v_k} \rho} \right) \| a_{v_k} - q \|^2 + \frac{2 ( \eta - \rho ) \gamma_{v_k}}{1 - \gamma_{v_k} \rho} \left[ \frac{\gamma_{v_k} \eta^2}{2 ( \eta - \rho )} M_6 + \frac{1}{2 ( \eta - \rho )} \cdot \frac{\theta_{v_k}}{\gamma_{v_k}} \| a_{v_k} - a_{v_k - 1} \| M_5 + \frac{1}{\eta - \rho} \langle g ( q ) - \delta S q , a_{v_k + 1} - q \rangle \right] .$$
Together with the first inequality in (38), this implies that
$$\| a_{v_k + 1} - q \|^2 \le \frac{\gamma_{v_k} \eta^2}{2 ( \eta - \rho )} M_6 + \frac{1}{2 ( \eta - \rho )} \cdot \frac{\theta_{v_k}}{\gamma_{v_k}} \| a_{v_k} - a_{v_k - 1} \| M_5 + \frac{1}{\eta - \rho} \langle g ( q ) - \delta S q , a_{v_k + 1} - q \rangle .$$
Then, from (38) again, we have
$$\| a_k - q \|^2 \le \| a_{v_k + 1} - q \|^2 \le \frac{\gamma_{v_k} \eta^2}{2 ( \eta - \rho )} M_6 + \frac{1}{2 ( \eta - \rho )} \cdot \frac{\theta_{v_k}}{\gamma_{v_k}} \| a_{v_k} - a_{v_k - 1} \| M_5 + \frac{1}{\eta - \rho} \langle g ( q ) - \delta S q , a_{v_k + 1} - q \rangle .$$
Since the right-hand side tends to zero as $k \to \infty$, we conclude that $\lim_{k \to \infty} \| a_k - q \|^2 = 0$, i.e., $a_k \to q$, completing the proof. □
Setting $U = I$ in Algorithm 1 yields the following result.
Corollary 1.
Let $a_0 , a_1 \in H$ and the sequence $\{ a_n \}$ be generated by
$$d_n = a_n + \theta_n ( a_n - a_{n-1} ) , \qquad b_n = P_C \left( d_n - \frac{1}{\tau_n} K d_n \right) , \qquad c_n = b_n - \frac{a}{\tau_n} \left( K b_n - K d_n \right) , \qquad a_{n+1} = \gamma_n g ( a_n ) + c_n - \gamma_n \delta S c_n , \tag{39}$$
where $\theta_n$ and $\tau_n$ are defined as in Algorithm 1. The iterative sequence $\{ a_n \}$ generated by (39) converges in norm to $q \in \mathrm{VI}(C, K)$, where $q = P_{\mathrm{VI}(C, K)} ( I - \delta S + g ) ( q )$.
Remark 3.
It is worth noting that, in our algorithm, the mapping K is pseudomonotone and l-Lipschitz continuous, whereas U is quasi-nonexpansive. Since the operators we consider satisfy weaker assumptions than those in the introduction, our algorithm has a broader scope of applicability.

4. Numerical Experiments

In this part, we present two numerical examples to verify the proposed algorithm. The effectiveness of the algorithm we proposed was evaluated through comparison with the algorithms in the literature (see [10,11,12,13,25,30]). Furthermore, a comparison is drawn between our algorithm and results that exclusively focus on solving the variational inequality problem. For the purpose of ensuring a fair comparison with existing algorithms tailored to monotone or pseudomonotone variational inequalities, the quasi-nonexpansive mapping U in Hilbert space H is taken as U a = a in each example.
For simplicity, we refer to the algorithms as follows: Algorithm 3.1 in Singh et al. [25] as the DIMTEM; Algorithm 1 in Shehu et al. [10] as the S Algorithm; Algorithm 3.2 in Thong et al. [12] as the T Algorithm; Algorithm 1 in Yao et al. [11] as the Y Algorithm; Algorithm 3.4 in Anh et al. [13] as the A Algorithm; and Algorithm 3 in Zeng et al. [30] as the Z Algorithm. We measure the error at step n by $\| a_{n+1} - a_n \|$. Iterations stop when this error falls below the tolerance $\epsilon$, i.e., when $\| a_{n+1} - a_n \| < \epsilon$. The practical applicability of our algorithm is illustrated herein. The codebase was developed in MATLAB 2024(b) and executed on a HUASUO computer configured with an Intel(R) Core(TM) i5-8265U CPU at 1.60 GHz and 8.00 GB of RAM.
Example 1.
Let $K : \mathbb{R}^2 \to \mathbb{R}^2$ be the nonlinear operator given by
$$K ( a , b ) = ( a + b + \sin a , \, - a + b + \sin b ) .$$
The feasible set is $C = [ 2 , 5 ] \times [ 2 , 5 ]$. K is confirmed to be monotone and l-Lipschitz continuous with $l = 3$. Initial points $a_0 = ( 1 , 1 )$ and $a_1 = ( 2 , 2 )$ are selected, while S is designated as $S ( a , b ) = K ( a , b ) + \frac{1}{2} ( a , b )$ and g is assigned as $g ( a , b ) = \frac{1}{2} ( a , b )$.
Additionally, set $a = 0.5$, $\theta = 0.5$, $\tau_1 = 0.9$, $\mu = 0.9$, $\delta = 0.5$, $\sigma_n = \frac{1}{( n + 1 )^2}$. Iterations are terminated once the residual satisfies $\| a_{n+1} - a_n \| \le 10^{-12}$. Figure 1 shows the differences in the convergence speed of the algorithm with varying values of a. Figure 2 and Table 1 jointly reveal that Algorithm 1 drives the iterates to the solution markedly faster than its predecessors. The corresponding parameter settings are detailed below.
  • DIMTEM: θ = 0.5 , μ = 0.9 , τ 1 = 0.9 , v = 0.3 .
  • S Algorithm: μ = 0.9 , λ 1 = 0.9 , θ = 0.5 .
  • T Algorithm: ϵ n = 1 ( n + 1 ) 2 , μ = 0.9 , τ 1 = 0.9 , α = 0.5 .
  • Y Algorithm: θ = 0.5 , μ = 0.9 , δ = 0.8 , λ 1 = 0.9 .
  • A Algorithm: τ n = 1 ( n + 1 ) 2 , μ = 0.9 , λ 1 = 0.9 , α = 0.5 .
  • Z Algorithm: ϵ n = 1 ( n + 1 ) 2 , μ = 0.9 , γ = 0.5 , τ 1 = 0.9 , α = 0.5 , a = 0.5.
Example 2.
Let $A ( a ) := M a + q$, where
$$M = B B^{T} + C + D ,$$
$B \in \mathbb{R}^{m \times m}$, $C \in \mathbb{R}^{m \times m}$ is skew-symmetric, and $D \in \mathbb{R}^{m \times m}$ is diagonal with non-negative diagonal entries; consequently, M is positive semidefinite, and q denotes a vector in $\mathbb{R}^m$. Take the feasible set to be the closed convex polyhedron $C := \{ a \in \mathbb{R}^m : Q a \le b \}$, where $Q \in \mathbb{R}^{d \times m}$ and $b \in \mathbb{R}^d$ with $b \ge 0$. Clearly, A is monotone and Lipschitz continuous with Lipschitz constant $L = \| M \|$. By setting $q = 0$, the solution set $\mathrm{VI}(C, A) \cap \mathrm{Fix}(U) = \{ 0 \}$ is then derived.
For the numerical experiment, we fix $d = m = 5$ and set the algorithmic parameters to $a = 0.9$, $\theta = 0.4$, $\tau_1 = 0.1$, $\mu = 0.9$, $\delta = 0.5$, and $\sigma_n = \frac{1}{n + 1}$. The operators are prescribed as $S ( a ) = \frac{a}{8}$ and $g ( a ) = \frac{a}{20}$. Iterations are terminated once the residual satisfies $\| a_{n+1} - a_n \| \le 10^{-14}$. Figure 3 shows the effect of the parameter a on the convergence speed. Figure 4 and Table 1 jointly demonstrate that the iterates produced by Algorithm 1 approach the solution markedly faster than those generated by previously proposed methods. The adopted parameters are as follows:
  • DIMTEM: μ = 0.9 , θ = 0.4 , τ 1 = 0.1 , v = 0.3 .
  • S Algorithm: μ = 0.9 , λ 1 = 0.1 , θ = 0.4 .
  • T Algorithm: μ = 0.9 , α = 0.4 , τ 1 = 0.1 , ϵ n = 1 n + 1 .
  • Y Algorithm: μ = 0.9 , θ = 0.4 , λ 1 = 0.1 , δ = 0.5 .
  • A Algorithm: μ = 0.9 , α = 0.4 , λ 1 = 0.1 , τ n = 1 n + 1 .
  • Z Algorithm: μ = 0.9 , α = 0.4 , τ 1 = 0.1 , a = 0.9, γ = 10 , ϵ n = 1 n + 1 .
Subsequently, our proposed Corollary 1 is employed to solve the optimal control problems. Various methods for addressing this problem have been put forward by numerous scholars in recent years. For details regarding the relevant algorithms and problem descriptions, readers are referred to references [3,4,5].
Example 3
(Control of a harmonic oscillator, see [46]).
$$\begin{aligned} \text{minimize} \quad & x_2 ( 3 \pi ) \\ \text{subject to} \quad & \dot{x}_1 ( t ) = x_2 ( t ) , \\ & \dot{x}_2 ( t ) = - x_1 ( t ) + u ( t ) , \quad t \in [ 0 , 3 \pi ] , \\ & x ( 0 ) = 0 , \quad u ( t ) \in [ - 1 , 1 ] . \end{aligned}$$
The exact optimal control of this problem is known (see [46]):
$$u^{*} ( t ) = \begin{cases} 1 , & \text{if } t \in [ 0 , \pi / 2 ) \cup ( 3 \pi / 2 , 5 \pi / 2 ) , \\ - 1 , & \text{if } t \in ( \pi / 2 , 3 \pi / 2 ) \cup ( 5 \pi / 2 , 3 \pi ] . \end{cases}$$
We set our parameters in the following way:
$$a = 0.9 , \quad \delta = 0.5 , \quad \theta = 0.6 , \quad g ( x ) = 0.1 x , \quad N = 100 , \quad \sigma_n = \frac{10^{-4}}{( n + 1 )^2} , \quad \gamma_n = \frac{10^{-4}}{n + 1} .$$
Initial controls u_0(t) = u_1(t) are generated randomly over the interval [−1, 1], with the termination condition defined as either ‖u_{n+1} − u_n‖ ≤ 10^{−4} or a maximum of 1000 iterations. Corollary 1 attained the required error precision in 97 iterations, with a total runtime of 0.1397 s. The approximate optimal control derived from Corollary 1, along with its corresponding trajectories, is presented in Figure 5.
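One can check by the variation-of-constants formula that x_2(3π) = −∫_0^{3π} cos(s) u(s) ds, so the bang-bang control above is exactly u*(t) = sign(cos t) and attains the optimal value x_2(3π) = −6. The following sketch (our own numerical check, not the paper's implementation) integrates the dynamics under u* with classical RK4:

```python
import numpy as np

# Bang-bang control: u*(t) = sign(cos t), matching the switching times above
def u_star(t):
    return 1.0 if np.cos(t) > 0 else -1.0

def f(t, x):
    # harmonic oscillator dynamics: x1' = x2, x2' = -x1 + u
    return np.array([x[1], -x[0] + u_star(t)])

T, N = 3 * np.pi, 60000
dt = T / N
x, t = np.zeros(2), 0.0
for _ in range(N):                  # classical RK4 steps
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

print(x[1])   # x2(3*pi), close to the optimal value -6
```

The small residual error comes from the control discontinuities at the switching times, where the one-step accuracy of RK4 degrades to first order.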
Remark 4.
Regarding the experimental part, the following points are noteworthy:
(1) As can be observed from Figure 2 and Figure 4, the proposed algorithm converges more rapidly than the existing algorithms and demonstrates better computational performance.
(2) From Figure 1 and Figure 3, it can be observed that convergence is influenced by the parameter a, with Algorithm 1 performing better as the value of a increases.
(3) Example 3 and Figure 5 demonstrate that the algorithm proposed herein performs effectively in solving optimal control problems.

5. Conclusions

This paper proposes a novel algorithm for solving pseudomonotone variational inequality problems and quasi-nonexpansive fixed point problems in real Hilbert spaces. The algorithm combines Tseng's extragradient method, the viscosity method, and an adaptive step-size strategy. A key advantage of this method is that only one projection onto the feasible set is required per iteration. Strong convergence is proven without prior knowledge of the mapping's Lipschitz constant, and the incorporation of an inertial term further yields a substantial acceleration of the convergence rate. Numerical experiments indicate that, in the scenarios tested, the proposed algorithm outperforms existing methods documented in the literature. As an application, the algorithm is also employed to solve an optimal control problem.

Author Contributions

Conceptualization, Y.B.; Methodology, Y.B. and G.Y.; Software, Y.B., L.S. and S.W.; Resources, G.Y.; Writing—original draft, Y.B.; Writing—review & editing, G.Y., L.S. and S.W.; Visualization, L.S.; Supervision, S.W.; Funding acquisition, G.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No. 12361062) and the Natural Science Foundation of Ningxia Province, China (No. 2023AAC02053).

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

The authors are grateful to the reviewers for their constructive comments and suggestions, which enhanced the quality of this work.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Suparatulatorn, R.; Charoensawan, P.; Khemphet, A. An inertial subgradient extragradient method of variational inequality problems involving quasi-nonexpansive operators with applications. Math. Methods Appl. Sci. 2021, 44, 12760–12773.
  2. Sahu, D.R.; Yao, J.C.; Verma, M.; Shukla, K.K. Convergence rate analysis of proximal gradient methods with applications to composite minimization problems. Optimization 2021, 70, 75–100.
  3. Viet, T.D.; Tien, D.V.; Van, T.H.; Van, L.L. Novel projected gradient methods for solving pseudomonotone variational inequality problems with applications to optimal control problems. J. Ind. Manag. Optim. 2025, 21, 1393–1413.
  4. Tan, B.; Zhou, Z.; Li, S. Viscosity-type inertial extragradient algorithms for solving variational inequality problems and fixed point problems. J. Appl. Math. Comput. 2022, 68, 1387–1411.
  5. Tan, B.; Qin, X.; Yao, J.C. Two modified inertial projection algorithms for bilevel pseudomonotone variational inequalities with applications to optimal control problems. Numer. Algorithms 2021, 88, 1757–1786.
  6. Thu Thuy, N.T.; Thanh Tung, T. Strong convergence of one-step inertial algorithm for a class of bilevel variational inequalities. Optimization 2025, 1–33.
  7. Jolaoso, L.O.; Sunthrayuth, P.; Cholamjiak, P.; Cho, Y.J. Inertial projection and contraction methods for solving variational inequalities with applications to image restoration problems. Carpathian J. Math. 2023, 39, 683–704.
  8. Fu, Q.; Cai, G.; Cholamjiak, P.; Inkrong, P. Modified extragradient methods with inertial technique for solving pseudo-monotone variational inequality problems. J. Sci. Comput. 2025, 102, 59.
  9. Oyewole, O.K.; Reich, S. Two subgradient extragradient methods based on the golden ratio technique for solving variational inequality problems. Numer. Algorithms 2024, 97, 1215–1236.
  10. Shehu, Y.; Iyiola, O.S.; Reich, S. A modified inertial subgradient extragradient method for solving variational inequalities. Optim. Eng. 2022, 23, 421–449.
  11. Yao, Y.; Iyiola, O.S.; Shehu, Y. Subgradient extragradient method with double inertial steps for variational inequalities. J. Sci. Comput. 2022, 90, 71.
  12. Thong, D.V.; Van Hieu, D.; Rassias, T.M. Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optim. Lett. 2020, 14, 115–144.
  13. Ky Anh, P.; Viet Thong, D.; Vinh, N.T. Improved inertial extragradient methods for solving pseudo-monotone variational inequalities. Optimization 2022, 71, 505–528.
  14. Tang, H.; Gong, W. Two improved extrapolated gradient algorithms for pseudo-monotone variational inequalities. AIMS Math. 2025, 10, 2064–2082.
  15. Anh, P.N. Relaxed projection methods for solving variational inequality problems. J. Glob. Optim. 2024, 90, 909–930.
  16. Zaslavski, A.J. Numerical Optimization with Computational Errors; Springer: Cham, Switzerland, 2016; Volume 108.
  17. Pakkaranang, N. Double inertial extragradient algorithms for solving variational inequality problems with convergence analysis. Math. Methods Appl. Sci. 2024, 47, 11642–11669.
  18. Yao, Y.; Adamu, A.; Shehu, Y. Strongly convergent inertial forward-backward-forward algorithm without on-line rule for variational inequalities. Acta Math. Sci. 2024, 44, 551–566.
  19. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
  20. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  21. Censor, Y.; Gibali, A.; Reich, S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011, 26, 827–845.
  22. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  23. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
  24. Thong, D.V.; Shehu, Y.; Iyiola, O.S. Weak and strong convergence theorems for solving pseudo-monotone variational inequalities with non-Lipschitz mappings. Numer. Algorithms 2020, 84, 795–823.
  25. Singh, W.; Chandok, S. Mann-type extragradient algorithm for solving variational inequality and fixed point problems. Comput. Appl. Math. 2024, 43, 259.
  26. Guo, D.; Cai, G.; Tan, B. Convergence analysis of subgradient extragradient method with inertial technique for solving variational inequalities and fixed point problems. Commun. Nonlinear Sci. Numer. Simul. 2025, 148, 108851.
  27. Aremu, K.O.; Mona, M.I.; Ibrahim, M. A modified self-adaptive inertial Tseng algorithm for solving a quasimonotone variational inequality and fixed point problems in real Hilbert space. J. Anal. 2025, 33, 319–340.
  28. Alakoya, T.O.; Mewomo, O.T. S-Iteration inertial subgradient extragradient method for variational inequality and fixed point problems. Optimization 2024, 73, 1477–1517.
  29. Wang, Y.; Wu, C.; Shehu, Y.; Huang, B. Self adaptive alternated inertial algorithm for solving variational inequality and fixed point problems. AIMS Math. 2024, 9, 9705–9720.
  30. Zeng, Y.; Cai, G.; Dong, Q.L. Self adaptive iterative algorithm for solving variational inequality problems and fixed point problems in Hilbert spaces. Acta Appl. Math. 2023, 183, 2.
  31. Belay, Y.A.; Zegeye, H.; Boikanyo, O.A.; Kagiso, D.; Gidey, H.H. An inertial method for solving bilevel variational inequality problems with fixed point constraints. Ann. Univ. Ferrara 2025, 71, 15.
  32. Ogwo, G.N.; Alakoya, T.O.; Mewomo, O.T. Iterative algorithm with self-adaptive step size for approximating the common solution of variational inequality and fixed point problems. Optimization 2023, 72, 677–711.
  33. Godwin, E.C.; Alakoya, T.O.; Mewomo, O.T.; Yao, J.C. Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems. Appl. Anal. 2023, 102, 4253–4278.
  34. Khuangsatung, W.; Kangtunyakarn, A. An intermixed method for solving the combination of mixed variational inequality problems and fixed-point problems. J. Inequalities Appl. 2023, 2023, 1.
  35. Takahashi, W.; Toyoda, M. Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118, 417–428.
  36. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
  37. Thong, D.V.; Van Hieu, D. Inertial extragradient algorithms for strongly pseudomonotone variational inequalities. J. Comput. Appl. Math. 2018, 341, 80–98.
  38. Gibali, A.; Reich, S.; Zalas, R. Outer approximation methods for solving variational inequalities in Hilbert space. Optimization 2017, 66, 417–437.
  39. Boţ, R.I.; Csetnek, E.R. An inertial Tseng’s type proximal algorithm for nonsmooth and nonconvex optimization problems. J. Optim. Theory Appl. 2016, 171, 600–616.
  40. Boţ, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
  41. Shehu, Y.; Iyiola, O.S. Convergence analysis for the proximal split feasibility problem using an inertial extrapolation term method. J. Fixed Point Theory Appl. 2017, 19, 2483–2510.
  42. Cai, G.; Dong, Q.L.; Peng, Y. Strong convergence theorems for inertial Tseng’s extragradient method for solving variational inequality problems and fixed point problems. Optim. Lett. 2021, 15, 1457–1474.
  43. Shehu, Y.; Dong, Q.L.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2019, 68, 385–409.
  44. Cai, G. Viscosity implicit algorithms for a variational inequality problem and fixed point problem in Hilbert spaces. Acta Math. Sin. Chin. Ser. 2020, 40, 395–407.
  45. Cai, G. Viscosity iterative algorithm for variational inequality problems and fixed point problems of strict pseudo-contractions in uniformly smooth Banach spaces. Acta Math. Sin. Engl. Ser. 2015, 31, 1435–1448.
  46. Pietrus, A.; Scarinci, T.; Veliov, V. High order discrete approximations to Mayer’s problems for linear systems. SIAM J. Control Optim. 2018, 56, 102–119.
Figure 1. Convergence Comparison of Algorithm 1 for Different a Values.
Figure 2. Comparison of algorithms for Example 1.
Figure 3. Convergence Comparison of Algorithm 1 for Different a Values.
Figure 4. Numerical results for Example 2.
Figure 5. Numerical results for Example 3. (a) Initial and optimal controls. (b) Optimal trajectories.
Table 1. A comparison of iteration counts and computational errors is presented for the algorithms in Examples 1 and 2.

Algorithms   | Example 1 Iter. | Example 1 E_n   | Example 2 Iter. | Example 2 E_n
Algorithm 3  | 40              | 1.52984 × 10⁻¹² | 73              | 1.57752 × 10⁻¹²
DIMTEM       | 131             | 4.42261 × 10⁻¹² | 300             | 3.16653 × 10⁻¹¹
S Algorithm  | 145             | 2.49988 × 10⁻¹¹ | 285             | 1.02457 × 10⁻¹⁴
T Algorithm  | 84              | 2.43963 × 10⁻¹² | 110             | 1.00935 × 10⁻¹⁴
Y Algorithm  | 150             | 6.15847 × 10⁻⁷  | 176             | 1.11008 × 10⁻¹⁴
A Algorithm  | 111             | 3.20422 × 10⁻¹¹ | 241             | 1.03155 × 10⁻¹⁴
Z Algorithm  | 128             | 4.25814 × 10⁻¹² | 153             | 1.42964 × 10⁻¹⁴

Share and Cite

Bai, Y.; Yu, G.; Sun, L.; Weng, S. Modified Tseng’s Extragradient Method for Solving Variational Inequality Problems and Fixed Point Problems with Applications in Optimal Control Problems. Axioms 2025, 14, 881. https://doi.org/10.3390/axioms14120881