Article

Algorithms and Inertial Algorithms for Inverse Mixed Variational Inequality Problems in Hilbert Spaces

by
Chih-Sheng Chuang
Department of Applied Mathematics, National Chiayi University, Chiayi 600355, Taiwan
Mathematics 2025, 13(12), 1966; https://doi.org/10.3390/math13121966
Submission received: 20 May 2025 / Revised: 10 June 2025 / Accepted: 13 June 2025 / Published: 14 June 2025
(This article belongs to the Section C: Mathematical Analysis)

Abstract

The inverse mixed variational inequality problem arises from the classical variational inequality problem and has many applications. In this paper, we propose new algorithms for inverse mixed variational inequality problems in Hilbert spaces; these algorithms are based on the generalized projection operator. We then establish convergence theorems under inverse strong monotonicity conditions. In addition, we provide inertial-type algorithms for inverse mixed variational inequality problems under conditions that differ from those of the above convergence theorems.

1. Introduction

The variational inequality problem is a powerful tool for modeling and analyzing equilibrium problems in a variety of disciplines, including engineering, economics, and operations research [1,2,3,4,5]. Over the years, many extensions of the classical variational inequality problem have been proposed to deal with increasingly complex situations. In 2006, He et al. [6,7] first proposed the inverse variational inequality problem, which is different from the classical variational inequality problem.
$$\text{IVI}(F,C):\quad \text{find } \bar{x}\in H \text{ such that } F(\bar{x})\in C \text{ and } \langle \bar{x},\, y-F(\bar{x})\rangle \ge 0 \ \text{ for all } y\in C,$$
where $H$ is a real Hilbert space, $C$ is a nonempty closed convex subset of $H$, and $F:H\to H$ is a mapping. Indeed, the inverse variational inequality problem has many practical applications, such as dynamic power price problems [8], traffic congestion control problems [6], spatial price equilibrium control problems [9], ampere–volt characteristics for circuits [10], and neural network problems [11,12,13,14].
Furthermore, many algorithms have been proposed to study the inverse variational inequality problem, such as the proximal point-based algorithm, the projection-type algorithm, the alternating contraction projection algorithm, the regularized iterative algorithm, and the dynamical system method. One can see this range of proposed algorithms in [6,13,14,15,16,17,18,19,20].
In 2014, Li et al. [21] introduced the inverse mixed variational inequality problem:
$$\text{Find } \bar{x}\in H \text{ such that } F(\bar{x})\in C \text{ and } \langle \bar{x},\, y-F(\bar{x})\rangle + g(y) - g(F(\bar{x})) \ge 0 \ \text{ for all } y\in C,$$
where $H$ is a real Hilbert space, $C$ is a nonempty closed convex subset of $H$, $F:H\to H$ is a mapping, and $g:C\to\mathbb{R}$ is a proper, convex, and lower semicontinuous function. Further, Li et al. [21] proposed the following algorithm and related convergence theorems under suitable conditions:
$$x_1\in H \text{ is given arbitrarily},\qquad x_{n+1} := x_n - \frac{1}{\rho_n}\Big\{F(x_n) - P_C^{\,f}\big[F(x_n) - \rho\, x_n\big]\Big\},\quad n\in\mathbb{N},$$
where $\rho>0$, $\{\rho_n\}_{n\in\mathbb{N}}$ is a sequence of positive real numbers, and $P_C^{\,f}$ is a generalized projection (see Definition 3).
Although the inverse mixed variational inequality problem has been investigated in the literature [21], several theoretical and algorithmic questions remain unanswered, including the development of more efficient solution methods and the exploration of convergence properties under weaker assumptions.
Motivated by the above works, we consider the following inverse mixed variational inequality problem IMVI($F,\mu g,C$): find $\bar{x}\in H$ such that $F(\bar{x})\in C$ and
$$\langle \bar{x},\, y-F(\bar{x})\rangle + \mu g(y) - \mu g(F(\bar{x})) \ge 0 \quad \text{for all } y\in C,$$
where $\mu>0$, $H$ is a real Hilbert space, $C$ is a nonempty closed convex subset of $H$, $F:H\to H$ is a mapping, and $g:C\to\mathbb{R}$ is a proper, convex, and lower semicontinuous function. Further, we consider the following algorithms and related convergence theorems for the inverse mixed variational inequality problem.
(i)
The classical algorithm for the inverse mixed variational inequality problem:
$$x_1\in C \text{ is given arbitrarily},\qquad x_{n+1} = x_n + \lambda\big[P_C^{\alpha\mu,g}\big(F(x_n)-\alpha x_n\big) - F(x_n)\big],\quad n\in\mathbb{N},$$
where $\mu$, $\alpha$, and $\lambda$ are positive real numbers, and $P_C^{\alpha\mu,g}$ is a generalized $(\mu,g)$ projection operator (see Definition 3).
(ii)
A generalized algorithm from the classical algorithm:
$$x_1\in C \text{ is given arbitrarily},\qquad
\begin{cases}
y_n = x_n + \beta_n\lambda\big[P_C^{\alpha\mu,g}\big(F(x_n)-\alpha x_n\big) - F(x_n)\big],\\[2pt]
z_n = y_n + \lambda\big[P_C^{\alpha\mu,g}\big(F(y_n)-\alpha y_n\big) - F(y_n)\big],\\[2pt]
x_{n+1} = x_n + \kappa_n(z_n - x_n),
\end{cases}\quad n\in\mathbb{N},$$
where $\mu$, $\alpha$, and $\lambda$ are positive real numbers, and $\{\beta_n\}_{n\in\mathbb{N}}$ and $\{\kappa_n\}_{n\in\mathbb{N}}$ are sequences in the interval $[0,1]$.
(iii)
Inertial Algorithm (1):
$$x_0, x_1\in C \text{ are given arbitrarily},\qquad
\begin{cases}
u_n = x_n + t_n(x_n - x_{n-1}),\\[2pt]
x_{n+1} = u_n + \lambda\big[P_C^{\alpha\mu,g}\big(F(u_n)-\alpha u_n\big) - F(u_n)\big],
\end{cases}\quad n\in\mathbb{N},$$
where $\mu$, $\alpha$, and $\lambda$ are positive real numbers, and $\{t_n\}_{n\in\mathbb{N}}$ is a sequence in the interval $[0,c)\subseteq[0,1)$.
(iv)
Inertial Algorithm (2):
$$x_0, x_1\in C \text{ are given arbitrarily},\qquad
\begin{cases}
u_n = x_n + t_n(x_n - x_{n-1}),\\[2pt]
x_{n+1} = u_n + \lambda\beta_n\big[P_C^{\alpha\mu,g}\big(F(u_n)-\alpha u_n\big) - F(u_n)\big],
\end{cases}\quad n\in\mathbb{N},$$
where $\mu$, $\alpha$, and $\lambda$ are positive real numbers, $\{t_n\}_{n\in\mathbb{N}}$ is a sequence in the interval $[0,c]\subseteq[0,1]$, and $\{\beta_n\}_{n\in\mathbb{N}}$ is a sequence in the interval $[a,b]\subseteq(0,2)$.
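To make the classical scheme (i) concrete, the following sketch runs it on a hypothetical toy instance in $\mathbb{R}^2$ with $g\equiv 0$, so that the generalized projection reduces to the metric projection onto the box $C=[1,2]\times[1,2]$ (see Remark 1 below). The linear map $F(x)=Ax$, the box, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy instance: F(x) = A x with A symmetric positive definite,
# eigenvalues 0.4 and 0.8, so F is gamma-inverse strongly monotone
# with gamma = 1 / 0.8 = 1.25.
A = np.array([[0.6, 0.2],
              [0.2, 0.6]])

def proj_C(x):
    """Metric projection onto the box C = [1, 2] x [1, 2] (g == 0)."""
    return np.clip(x, 1.0, 2.0)

# Parameters chosen to satisfy lambda * alpha = 1 and lambda <= 2 * gamma.
lam, alpha = 2.0, 0.5

x = np.zeros(2)  # arbitrary starting point
for _ in range(200):
    Fx = A @ x
    x = x + lam * (proj_C(Fx - alpha * x) - Fx)

# For this instance the solution can be worked out by hand:
# x_bar = (1.25, 1.25), with F(x_bar) = (1, 1) on the boundary of C.
print(x)  # close to [1.25, 1.25]
```

For this toy instance the iterates contract geometrically onto the hand-computed solution, and one can check the defining inequality $\langle \bar{x}, y - F(\bar{x})\rangle \ge 0$ directly at the corners of the box.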
In Section 1, we review the existing inverse mixed variational inequality problems and algorithms discussed in the literature. In Section 2, we present the necessary definitions and tools. In Section 3, we propose algorithms for solving inverse mixed variational inequality problems under different assumptions, where the mapping F is assumed to be inverse strongly monotone rather than strongly monotone. Moreover, our convergence theorems differ from those proposed by Li et al. [21]. In Section 4, we present inertial algorithms and related convergence theorems for inverse mixed variational inequality problems.

2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. We denote the strong and weak convergence of $\{x_n\}_{n\in\mathbb{N}}$ to $x\in H$ by $x_n\to x$ and $x_n\rightharpoonup x$, respectively. For each $x,y,u,v\in H$ and $\lambda\in\mathbb{R}$, we have
$$\|x+y\|^2 = \|x\|^2 + 2\langle x,y\rangle + \|y\|^2,$$
$$\|\lambda x + (1-\lambda)y\|^2 = \lambda\|x\|^2 + (1-\lambda)\|y\|^2 - \lambda(1-\lambda)\|x-y\|^2,$$
$$2\langle x-y,\, u-v\rangle = \|x-v\|^2 + \|y-u\|^2 - \|x-u\|^2 - \|y-v\|^2.$$
Lemma 1
(Opial’s Theorem). Let $H$ be a real Hilbert space, and let $\{x_n\}_{n\in\mathbb{N}}$ be a sequence in $H$ with $x_n\rightharpoonup x$. Then $\liminf_{n\to\infty}\|x_n - x\| < \liminf_{n\to\infty}\|x_n - y\|$ for all $y\in H$ with $y\ne x$.
Definition 1.
Let $H$ be a real Hilbert space, $B:H\to H$ be a mapping, and $\beta>0$. Then:
(i) 
$B$ is monotone if $\langle x-y,\, Bx-By\rangle \ge 0$ for all $x,y\in H$;
(ii) 
$B$ is $\beta$-strongly monotone if $\langle x-y,\, Bx-By\rangle \ge \beta\|x-y\|^2$ for all $x,y\in H$;
(iii) 
$B$ is $\beta$-inverse strongly monotone if $\langle x-y,\, Bx-By\rangle \ge \beta\|Bx-By\|^2$ for all $x,y\in H$.
Definition 2.
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $T:C\to H$ be a mapping. Let $Fix(T) := \{x\in C : Tx = x\}$. Then:
(i) 
$T$ is a nonexpansive mapping if $\|Tx-Ty\| \le \|x-y\|$ for every $x,y\in C$;
(ii) 
$T$ is a firmly nonexpansive mapping if $\|Tx-Ty\|^2 \le \langle x-y,\, Tx-Ty\rangle$ for every $x,y\in C$; equivalently, $\|Tx-Ty\|^2 \le \|x-y\|^2 - \|(I-T)x - (I-T)y\|^2$ for every $x,y\in C$;
(iii) 
$T$ is a quasi-nonexpansive mapping if $Fix(T)\ne\emptyset$ and $\|Tx-y\| \le \|x-y\|$ for all $x\in C$ and $y\in Fix(T)$.
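As a quick numerical illustration of (ii), the metric projection onto an interval (a special case of the projections used below) satisfies the firmly nonexpansive inequality. The following sketch checks it on random points of $\mathbb{R}$; the interval $[0,1]$ and the sample size are arbitrary choices for illustration.

```python
import random

def proj(x, lo=0.0, hi=1.0):
    """Metric projection of a real number onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = (proj(x) - proj(y)) ** 2        # ||Tx - Ty||^2
    rhs = (x - y) * (proj(x) - proj(y))   # <x - y, Tx - Ty>
    assert lhs <= rhs + 1e-12             # firmly nonexpansive inequality
print("firm nonexpansiveness verified on 1000 random pairs")
```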
Lemma 2
([22,23]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $T:C\to H$ be a quasi-nonexpansive mapping. Then $Fix(T)$ is a closed convex set.
Lemma 3
([24]). Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$. Let $T:C\to H$ be a nonexpansive mapping, and let $\{x_n\}_{n\in\mathbb{N}}$ be a sequence in $C$. If $x_n\rightharpoonup w$ and $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$, then $Tw = w$.
Lemma 4.
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $T:H\to H$ be a firmly nonexpansive mapping with $Fix(T)\ne\emptyset$. Then $\langle x-Tx,\, Tx-y\rangle \ge 0$ for every $x\in H$ and $y\in Fix(T)$.
Proof. 
For every $x\in H$ and $y\in Fix(T)$, since $T$ is a firmly nonexpansive mapping, we have
$$\|Tx-y\|^2 \le \langle x-y,\, Tx-y\rangle = \langle x-Tx,\, Tx-y\rangle + \|Tx-y\|^2.$$
Hence, $\langle x-Tx,\, Tx-y\rangle \ge 0$, and the proof is completed. □
Lemma 5.
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Then, for each $x\in H$, there is a unique element $\bar{x} = \bar{x}(x)\in C$ such that $\|x-\bar{x}\| = \min_{y\in C}\|x-y\|$. Here, set $P_C(x) = \bar{x}$; $P_C$ is called the metric projection from $H$ onto $C$.
Definition 3
([21]). Let $\mu$ be a positive real number, $H$ a real Hilbert space, and $C$ a closed convex subset of $H$. Let $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function. We say that $P_C^{\mu,g}:H\to C$ is a generalized $(\mu,g)$ projection operator if
$$P_C^{\mu,g}(x) := \Big\{u\in C : G(x,u) = \inf_{v\in C} G(x,v)\Big\}$$
for each $x\in H$, where $G:H\times C\to\mathbb{R}\cup\{+\infty\}$ is defined as
$$G(x,v) := \|x\|^2 - 2\langle x,v\rangle + \|v\|^2 + 2\mu g(v).$$
Lemma 6
([21]). Let $\mu$ be a positive real number, $H$ a real Hilbert space, and $C$ a closed convex subset of $H$. Let $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function. Then:
(i) 
for each $x\in H$, the set $P_C^{\mu,g}(x)$ is nonempty and a singleton;
(ii) 
for each $x\in H$, $\bar{x} = P_C^{\mu,g}(x)$ if, and only if,
$$\langle x-\bar{x},\, \bar{x}-y\rangle + \mu g(y) - \mu g(\bar{x}) \ge 0 \quad \text{for all } y\in C;$$
(iii) 
$P_C^{\mu,g}$ is a firmly nonexpansive mapping.
Remark 1.
If $g(v) = 0$ for each $v\in C$, then the mapping $P_C^{\mu,g}$ reduces to the metric projection $P_C$ from $H$ onto $C$.
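For intuition, on a hypothetical one-dimensional instance the generalized $(\mu,g)$ projection can be computed in closed form. Take $C=[0,1]$ and $g(v)=v^2$; then $G(x,v)=x^2-2xv+v^2+2\mu v^2$ is a strictly convex quadratic in $v$, whose unconstrained minimizer $v = x/(1+2\mu)$ is clipped to $C$. The instance and the brute-force grid check below are illustrative, not taken from [21].

```python
import numpy as np

mu = 0.5                      # parameter of the generalized projection
g = lambda v: v ** 2          # proper, convex, lower semicontinuous on C = [0, 1]

def G(x, v):
    """G(x, v) = ||x||^2 - 2<x, v> + ||v||^2 + 2 mu g(v), in one dimension."""
    return x ** 2 - 2 * x * v + v ** 2 + 2 * mu * g(v)

def gen_proj(x):
    """Closed-form generalized (mu, g) projection onto C = [0, 1]:
    minimize G(x, .) over v -> v = x / (1 + 2 mu), clipped to [0, 1]."""
    return min(max(x / (1 + 2 * mu), 0.0), 1.0)

# Brute-force check against a fine grid over C.
grid = np.linspace(0.0, 1.0, 100001)
for x in (-1.0, 0.3, 0.8, 2.5):
    v_grid = grid[np.argmin(G(x, grid))]
    assert abs(gen_proj(x) - v_grid) < 1e-4
print("closed form matches grid minimizer")
```

Note that with $g\equiv 0$ the same formula degenerates to plain clipping, which is exactly the metric projection of Remark 1.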
Lemma 7
([21]). Let $\mu$ be a positive real number, $H$ a real Hilbert space, and $C$ a closed convex subset of $H$. Let $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function, and $F:H\to H$ be a mapping. Then $\bar{x}\in H$ is a solution of IMVI($F,\mu g,C$) if, and only if, $F(\bar{x}) = P_C^{\alpha\mu,g}\big(F(\bar{x}) - \alpha\bar{x}\big)$ for all $\alpha>0$.
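Lemma 7 can be sanity-checked numerically on a hypothetical one-dimensional instance: $H=\mathbb{R}$, $C=[0,1]$, $F(x)=x$, $g(v)=v^2$, $\mu=0.5$. A short calculation shows $\bar{x}=0$ is the unique solution of IMVI($F,\mu g,C$), and the sketch below verifies the fixed-point equation $F(\bar{x}) = P_C^{\alpha\mu,g}(F(\bar{x})-\alpha\bar{x})$ for several $\alpha$, while a non-solution fails it. All concrete values are illustrative assumptions.

```python
import numpy as np

mu = 0.5
C = np.linspace(0.0, 1.0, 100001)   # fine grid over C = [0, 1]

def gen_proj(x, m):
    """Generalized (m, g) projection onto C with g(v) = v^2, via grid search."""
    G = x ** 2 - 2 * x * C + C ** 2 + 2 * m * C ** 2
    return C[np.argmin(G)]

F = lambda x: x                     # F = identity

x_bar = 0.0                         # solution of IMVI(F, mu g, C)
for alpha in (0.5, 1.0, 2.0):
    # fixed-point equation holds for every alpha > 0
    assert abs(F(x_bar) - gen_proj(F(x_bar) - alpha * x_bar, alpha * mu)) < 1e-4

x_bad = 0.5                         # not a solution
alpha = 1.0
assert abs(F(x_bad) - gen_proj(F(x_bad) - alpha * x_bad, alpha * mu)) > 0.1
print("fixed-point characterization confirmed")
```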
To study the algorithms from a fixed-point perspective, we require the following two convergence results.
Lemma 8
([25]). Let $\{\alpha_n\}_{n\in\mathbb{N}}$ be a sequence in the interval $[a,b]\subseteq(0,1)$. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $T:C\to C$ be a nonexpansive mapping with $Fix(T)\ne\emptyset$. Let $\{x_n\}_{n\in\mathbb{N}}$ be generated by $x_1\in C$ and $x_{n+1} = \alpha_n x_n + (1-\alpha_n)Tx_n$ for each $n\in\mathbb{N}$. Then, the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to an element of $Fix(T)$.
Lemma 9
([26]). Let $\{\alpha_n\}_{n\in\mathbb{N}}$ and $\{\beta_n\}_{n\in\mathbb{N}}$ be sequences in the interval $[a,b]\subseteq(0,1)$. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $T:C\to C$ be a nonexpansive mapping with $Fix(T)\ne\emptyset$. Let $\{x_n\}_{n\in\mathbb{N}}$ be generated by $x_1\in C$ and
$$y_n := (1-\beta_n)x_n + \beta_n Tx_n,\qquad x_{n+1} := (1-\alpha_n)x_n + \alpha_n Ty_n,\quad n\in\mathbb{N}.$$
Then, the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to an element of $Fix(T)$.
To study the inertial algorithms, we need the following two results.
Lemma 10
([27]). Let $\{a_n\}_{n\in\mathbb{N}\cup\{0\}}$ and $\{b_n\}_{n\in\mathbb{N}\cup\{0\}}$ be sequences in $[0,\infty)$, and let $\{k_n\}_{n\in\mathbb{N}}$ be a sequence in $[0,k]\subseteq[0,1)$. Suppose that $\sum_{n=1}^{\infty} b_n < \infty$ and $a_{n+1} - a_n \le k_n(a_n - a_{n-1}) + b_n$ for all $n\in\mathbb{N}$. Then $\{a_n\}$ is a convergent sequence and $\sum_{n=1}^{\infty}[a_{n+1}-a_n]_+ < \infty$, where $[t]_+ = \max\{t,0\}$ for any $t\in\mathbb{R}$.
Lemma 11
([28]). Let $\{a_n\}_{n\in\mathbb{N}\cup\{0\}}$ and $\{b_n\}_{n\in\mathbb{N}}$ be sequences in $[0,\infty)$, and let $\{k_n\}_{n\in\mathbb{N}}$ be a sequence in $[0,k]\subseteq[0,1)$. Suppose that $a_{n+1} - a_n + b_{n+1} \le k_n(a_n - a_{n-1} + b_n)$ for all $n\in\mathbb{N}$. Then $\{a_n\}_{n\in\mathbb{N}}$ is a convergent sequence and $\sum_{n=1}^{\infty} b_n < \infty$.

3. Algorithms for the Inverse Mixed Variational Inequality Problem

Lemma 12.
Let $\mu$, $\alpha$, and $\lambda$ be positive real numbers, $H$ a real Hilbert space, and $C$ a closed convex subset of $H$. Let $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function, and $F:H\to H$ be a mapping. Let $T:H\to H$ be defined as
$$T(x) = \lambda\big[P_C^{\alpha\mu,g}\big(F(x)-\alpha x\big) - F(x)\big]$$
for each $x\in H$. Then, for each $u,v\in H$, we have
$$\|T(u)-T(v)\|^2 \le \lambda^2\|F(u)-F(v)\|^2 - \lambda^2\big\|P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v)\big\|^2 - 2\lambda^2\alpha\big\langle P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v),\, u-v\big\rangle.$$
Proof. 
First, we know that
$$T(x) = \lambda\big[P_C^{\alpha\mu,g}\big(F(x)-\alpha x\big) - \big(F(x)-\alpha x\big) - \alpha x\big]$$
for each $x\in H$. Hence, for each $u,v\in H$, we have
$$\begin{aligned}
\|T(u)-T(v)\|^2 = {}& \lambda^2\big\|P_C^{\alpha\mu,g}(F(u)-\alpha u)-P_C^{\alpha\mu,g}(F(v)-\alpha v)\big\|^2 + \lambda^2\big\|[F(u)-\alpha u]-[F(v)-\alpha v]\big\|^2 + \lambda^2\alpha^2\|u-v\|^2\\
&- 2\lambda^2\big\langle P_C^{\alpha\mu,g}(F(u)-\alpha u)-P_C^{\alpha\mu,g}(F(v)-\alpha v),\, [F(u)-\alpha u]-[F(v)-\alpha v]\big\rangle\\
&- 2\lambda^2\alpha\big\langle P_C^{\alpha\mu,g}(F(u)-\alpha u)-P_C^{\alpha\mu,g}(F(v)-\alpha v),\, u-v\big\rangle + 2\lambda^2\alpha\big\langle [F(u)-\alpha u]-[F(v)-\alpha v],\, u-v\big\rangle.
\end{aligned}\tag{6}$$
By Lemma 6, we have
$$\big\|P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v)\big\|^2 \le \big\langle P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v),\, [F(u)-\alpha u]-[F(v)-\alpha v]\big\rangle.\tag{7}$$
In addition,
$$\big\|[F(u)-\alpha u]-[F(v)-\alpha v]\big\|^2 = \|F(u)-F(v)\|^2 + \alpha^2\|u-v\|^2 - 2\alpha\langle F(u)-F(v),\, u-v\rangle.\tag{8}$$
Therefore, by (6), (7), and (8), we have
$$\|T(u)-T(v)\|^2 \le \lambda^2\|F(u)-F(v)\|^2 - \lambda^2\big\|P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v)\big\|^2 - 2\lambda^2\alpha\big\langle P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v),\, u-v\big\rangle.$$
The proof is completed. □
Lemma 13.
Let $\mu$, $\alpha$, $\lambda$, and $\gamma$ be positive real numbers with $\lambda\alpha = 1$ and $\lambda \le 2\gamma$. Let $H$ be a real Hilbert space, and $C$ be a closed convex subset of $H$. Let $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function, and $F:H\to H$ be a mapping. Let $S:H\to H$ be defined as $S(x) = x + \lambda\big[P_C^{\alpha\mu,g}(F(x)-\alpha x) - F(x)\big]$ for each $x\in H$. If $F$ is $\gamma$-inverse strongly monotone, then $S$ is a nonexpansive mapping.
Proof. 
Let $T:H\to H$ be defined as $T(x) = \lambda\big[P_C^{\alpha\mu,g}(F(x)-\alpha x) - F(x)\big]$ for each $x\in H$. Then $S(x) = x + T(x)$ for each $x\in H$. For each $u,v\in H$, we have
$$\|S(u)-S(v)\|^2 = \|u-v\|^2 + \|T(u)-T(v)\|^2 + 2\langle T(u)-T(v),\, u-v\rangle.\tag{10}$$
By Lemma 12 and the assumption, we have
$$\|T(u)-T(v)\|^2 \le \lambda^2\|F(u)-F(v)\|^2 - \lambda^2\big\|P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v)\big\|^2 - 2\lambda\big\langle P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v),\, u-v\big\rangle,\tag{11}$$
and
$$2\langle T(u)-T(v),\, u-v\rangle = 2\lambda\big\langle P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v),\, u-v\big\rangle - 2\lambda\langle F(u)-F(v),\, u-v\rangle.\tag{12}$$
By (10), (11), and (12),
$$\|S(u)-S(v)\|^2 \le \|u-v\|^2 - \lambda^2\big\|P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v)\big\|^2 + \lambda^2\|F(u)-F(v)\|^2 - 2\lambda\langle F(u)-F(v),\, u-v\rangle.$$
Since $F$ is $\gamma$-inverse strongly monotone and $\lambda \le 2\gamma$, we further have
$$\begin{aligned}
\|S(u)-S(v)\|^2 &\le \|u-v\|^2 - \lambda^2\big\|P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v)\big\|^2 + (\lambda^2 - 2\lambda\gamma)\|F(u)-F(v)\|^2\\
&\le \|u-v\|^2 - \lambda^2\big\|P_C^{\alpha\mu,g}(F(u)-\alpha u) - P_C^{\alpha\mu,g}(F(v)-\alpha v)\big\|^2 \le \|u-v\|^2.
\end{aligned}$$
Therefore, S is a nonexpansive mapping. □
Theorem 1.
Let $\mu$, $\alpha$, $\lambda$, and $\gamma$ be positive real numbers with $\lambda\alpha = 1$ and $\lambda \le 2\gamma$. Let $H$ be a real Hilbert space, and $C$ be a nonempty closed convex subset of $H$. Let $F:H\to H$ be a $\gamma$-inverse strongly monotone mapping, and $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let $\{\beta_n\}_{n\in\mathbb{N}}$ be a sequence in the interval $[a,b]\subseteq(0,1)$. Let $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$ be the solution set of the problem IMVI($F,\mu g,C$), and assume that $\Omega_{\mathrm{IMVI}(F,\mu g,C)}\ne\emptyset$. Let $\{x_n\}_{n\in\mathbb{N}}$ be generated by the following iterative process:
$$x_1\in C \text{ is given arbitrarily},\qquad x_{n+1} = x_n + \beta_n\lambda\big[P_C^{\alpha\mu,g}\big(F(x_n)-\alpha x_n\big) - F(x_n)\big],\quad n\in\mathbb{N}.$$
Then, the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to an element of $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$.
Proof. 
Let $S:H\to H$ be defined as
$$S(x) = x + \lambda\big[P_C^{\alpha\mu,g}\big(F(x)-\alpha x\big) - F(x)\big]$$
for each $x\in H$. From Lemma 13, $S$ is a nonexpansive mapping, and from Lemma 7, $Fix(S) = \Omega_{\mathrm{IMVI}(F,\mu g,C)}$. In addition, we have
$$x_{n+1} = (1-\beta_n)x_n + \beta_n\big[x_n + \lambda\big(P_C^{\alpha\mu,g}(F(x_n)-\alpha x_n) - F(x_n)\big)\big] = (1-\beta_n)x_n + \beta_n S(x_n).\tag{16}$$
So, by (16) and Lemma 8, there exists $\bar{x}\in H$ such that the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to $\bar{x}\in Fix(S) = \Omega_{\mathrm{IMVI}(F,\mu g,C)}$. □
Remark 2.
In Theorem 1, the stepsize parameter $\lambda$ is allowed to satisfy $\lambda \le 2\gamma$. This condition differs from the setting used in [21] (Algorithm 4.1), where the parameters are subject to different constraints. As a result, our algorithm and that of [21] are fundamentally different in terms of both structure and parameter selection. Consequently, Theorem 1 provides a convergence result distinct from [21] (Theorems 4.3 and 4.4). Moreover, we assume that $F$ is inverse strongly monotone, while [21] (Theorem 4.5) assumes that $F$ is strongly monotone. Therefore, Theorem 1 is essentially different from [21] (Theorem 4.5) in terms of the monotonicity assumption on $F$.
Theorem 2.
Let $\mu$, $\alpha$, $\lambda$, and $\gamma$ be positive real numbers with $\lambda\alpha = 1$ and $\lambda \le 2\gamma$. Let $H$ be a real Hilbert space, and $C$ be a nonempty closed convex subset of $H$. Let $F:H\to H$ be a $\gamma$-inverse strongly monotone mapping, and $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$ be the solution set of the problem IMVI($F,\mu g,C$), and assume that $\Omega_{\mathrm{IMVI}(F,\mu g,C)}\ne\emptyset$. Let $\{\beta_n\}_{n\in\mathbb{N}}$ and $\{\kappa_n\}_{n\in\mathbb{N}}$ be sequences in the interval $[a,b]\subseteq(0,1)$. Let $\{x_n\}_{n\in\mathbb{N}}$ be generated by the following iterative process:
$$x_1\in C \text{ is given arbitrarily},\qquad
\begin{cases}
y_n = x_n + \beta_n\lambda\big[P_C^{\alpha\mu,g}\big(F(x_n)-\alpha x_n\big) - F(x_n)\big],\\[2pt]
z_n = y_n + \lambda\big[P_C^{\alpha\mu,g}\big(F(y_n)-\alpha y_n\big) - F(y_n)\big],\\[2pt]
x_{n+1} = x_n + \kappa_n(z_n - x_n),
\end{cases}\quad n\in\mathbb{N}.$$
Then, the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to an element of $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$.
Proof. 
Let $S:H\to H$ be defined as
$$S(x) = x + \lambda\big[P_C^{\alpha\mu,g}\big(F(x)-\alpha x\big) - F(x)\big]$$
for each $x\in H$. By Lemma 13, $S$ is a nonexpansive mapping and $Fix(S) = \Omega_{\mathrm{IMVI}(F,\mu g,C)}$. In addition, we have
$$y_n = (1-\beta_n)x_n + \beta_n S(x_n),\qquad z_n = S(y_n),\qquad x_{n+1} = (1-\kappa_n)x_n + \kappa_n S(y_n).$$
From Lemma 9, there exists $\bar{x}\in H$ such that the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to $\bar{x}\in Fix(S) = \Omega_{\mathrm{IMVI}(F,\mu g,C)}$. □

4. Inertial Algorithms for the Inverse Mixed Variational Inequality Problem

In this section, we consider different assumptions on the parameters. As a result, Lemma 14 differs from Lemma 13, and the assumptions in the convergence theorems of Section 3 and Section 4 are likewise distinct.
Lemma 14.
Let $\mu$, $\alpha$, $\lambda$, and $\gamma$ be positive real numbers with $2\lambda\alpha = 1$ and $\lambda \le \gamma$. Let $H$ be a real Hilbert space, and $C$ be a closed convex subset of $H$. Let $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function, and $F:H\to H$ be a mapping. Let $S:H\to H$ be defined as $S(x) = x + \lambda\big[P_C^{\alpha\mu,g}(F(x)-\alpha x) - F(x)\big]$ for each $x\in H$. If $F$ is $\gamma$-inverse strongly monotone, then $S$ is a firmly nonexpansive mapping.
Proof. 
Let $W:H\to H$ be defined as
$$W(x) := x + 2\lambda\big[P_C^{\alpha\mu,g}\big(F(x)-\alpha x\big) - F(x)\big]$$
for each $x\in H$. Since $(2\lambda)\alpha = 1$ and $2\lambda \le 2\gamma$, Lemma 13 shows that $W$ is a nonexpansive mapping. Next, we have
$$S(x) = \frac{1}{2}x + \frac{1}{2}\Big[x + 2\lambda\big(P_C^{\alpha\mu,g}(F(x)-\alpha x) - F(x)\big)\Big] = \frac{1}{2}x + \frac{1}{2}W(x).$$
Hence, for $u,v\in H$, we have
$$\|S(u)-S(v)\|^2 = \frac{1}{2}\|u-v\|^2 + \frac{1}{2}\|W(u)-W(v)\|^2 - \frac{1}{4}\big\|(u-W(u)) - (v-W(v))\big\|^2 \le \|u-v\|^2 - \big\|(u-S(u)) - (v-S(v))\big\|^2.$$
Therefore, $S$ is a firmly nonexpansive mapping. □
Theorem 3.
Let $\mu$, $\alpha$, $\lambda$, and $\gamma$ be positive real numbers with $2\lambda\alpha = 1$ and $\lambda \le \gamma$. Let $H$ be a real Hilbert space, and $C$ be a nonempty closed convex subset of $H$. Let $F:H\to H$ be a $\gamma$-inverse strongly monotone mapping, and $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let $\{t_n\}_{n\in\mathbb{N}}$ be a sequence in the interval $[0,c)\subseteq[0,1)$. Let $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$ be the solution set of the problem IMVI($F,\mu g,C$), and assume that $\Omega_{\mathrm{IMVI}(F,\mu g,C)}\ne\emptyset$. Let $\{x_n\}_{n\in\mathbb{N}}$ be generated by the following iterative process:
$$x_0, x_1\in C \text{ are given arbitrarily},\qquad u_n = x_n + t_n(x_n - x_{n-1}),\qquad x_{n+1} = u_n + \lambda\big[P_C^{\alpha\mu,g}\big(F(u_n)-\alpha u_n\big) - F(u_n)\big],\quad n\in\mathbb{N}.$$
If we further assume that $\sum_{n=1}^{\infty} t_n\|x_n - x_{n-1}\|^2 < \infty$, then the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to an element of $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$.
Proof. 
Take any $z\in\Omega_{\mathrm{IMVI}(F,\mu g,C)}$ and $n\in\mathbb{N}$, and let $z$ and $n$ be fixed. Let $S:H\to H$ be defined as
$$S(x) = x + \lambda\big[P_C^{\alpha\mu,g}\big(F(x)-\alpha x\big) - F(x)\big]$$
for each $x\in H$. From Lemma 14, $S$ is a firmly nonexpansive mapping, and from Lemma 7, $Fix(S) = \Omega_{\mathrm{IMVI}(F,\mu g,C)}$. Next, the iteration can be written as
$$x_0, x_1\in C \text{ are given arbitrarily},\qquad x_{n+1} = S\big(x_n + t_n(x_n - x_{n-1})\big),\quad n\in\mathbb{N}.$$
From Lemma 4, we know that
$$\begin{aligned}
0 &\le 2\big\langle x_n + t_n(x_n-x_{n-1}) - x_{n+1},\, x_{n+1}-z\big\rangle\\
&= 2\langle x_n - x_{n+1},\, x_{n+1}-z\rangle + 2t_n\langle x_n - x_{n-1},\, x_{n+1}-z\rangle\\
&= \|x_n-z\|^2 - \|x_{n+1}-x_n\|^2 - \|x_{n+1}-z\|^2 + 2t_n\langle x_n-x_{n-1},\, x_{n+1}-z\rangle.
\end{aligned}\tag{21}$$
Next, we know that
$$\begin{aligned}
2\langle x_n-x_{n-1},\, x_{n+1}-z\rangle &= 2\langle x_n-x_{n-1},\, x_{n+1}-x_n\rangle + 2\langle x_n-x_{n-1},\, x_n-z\rangle\\
&= 2\langle x_n-x_{n-1},\, x_{n+1}-x_n\rangle + \|x_n-z\|^2 + \|x_n-x_{n-1}\|^2 - \|x_{n-1}-z\|^2.
\end{aligned}\tag{22}$$
From (21) and (22), we have
$$\begin{aligned}
\|x_{n+1}-z\|^2 - \|x_n-z\|^2 &\le -\|x_{n+1}-x_n\|^2 + 2t_n\langle x_n-x_{n-1},\, x_{n+1}-z\rangle\\
&= t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big) - \|x_{n+1}-x_n\|^2 + t_n\|x_n-x_{n-1}\|^2 + 2t_n\langle x_n-x_{n-1},\, x_{n+1}-x_n\rangle\\
&= t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big) + t_n\|x_n-x_{n-1}\|^2 + t_n^2\|x_n-x_{n-1}\|^2\\
&\quad - \big(\|x_{n+1}-x_n\|^2 - 2t_n\langle x_n-x_{n-1},\, x_{n+1}-x_n\rangle + t_n^2\|x_n-x_{n-1}\|^2\big)\\
&= t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big) + t_n\|x_n-x_{n-1}\|^2 + t_n^2\|x_n-x_{n-1}\|^2 - \big\|x_{n+1}-x_n - t_n(x_n-x_{n-1})\big\|^2\\
&\le t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big) + 2t_n\|x_n-x_{n-1}\|^2 - \big\|x_{n+1}-x_n - t_n(x_n-x_{n-1})\big\|^2\\
&\le t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big) + 2t_n\|x_n-x_{n-1}\|^2.
\end{aligned}\tag{23}$$
From (23) and Lemma 10, we know that lim n | | x n z | | 2 exists, { x n } n N is a bounded sequence, and
$$\sum_{n=1}^{\infty}\big[\|x_{n+1}-z\|^2 - \|x_n-z\|^2\big]_+ < \infty.$$
Further, by assumption and (23) again, we know that
$$\lim_{n\to\infty}\|S(u_n) - u_n\| = \lim_{n\to\infty}\big\|x_{n+1} - x_n - t_n(x_n - x_{n-1})\big\| = 0.$$
Since $\{x_n\}_{n\in\mathbb{N}}$ is a bounded sequence, there exist a subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$ of $\{x_n\}_{n\in\mathbb{N}}$ and $\bar{x}\in H$ such that $x_{n_k}\rightharpoonup\bar{x}$. Further, we know that
$$u_{n_k} = x_{n_k} + t_{n_k}(x_{n_k} - x_{n_k-1}) \rightharpoonup \bar{x}.$$
From Lemma 3, we know that $\bar{x}\in Fix(S)$.
Next, if $\{x_{m_k}\}_{k\in\mathbb{N}}$ is also a subsequence of $\{x_n\}_{n\in\mathbb{N}}$ with $x_{m_k}\rightharpoonup\bar{y}\in H$, then $\bar{y}\in Fix(S)$ by the same argument. We claim that $\bar{x} = \bar{y}$. If not, it follows from Lemma 1 that
$$\lim_{n\to\infty}\|x_n-\bar{x}\| = \lim_{k\to\infty}\|x_{n_k}-\bar{x}\| = \liminf_{k\to\infty}\|x_{n_k}-\bar{x}\| < \liminf_{k\to\infty}\|x_{n_k}-\bar{y}\| = \lim_{k\to\infty}\|x_{n_k}-\bar{y}\| = \lim_{n\to\infty}\|x_n-\bar{y}\|.$$
Similarly, we have
$$\lim_{n\to\infty}\|x_n-\bar{y}\| < \lim_{n\to\infty}\|x_n-\bar{x}\|.$$
This is a contradiction. So $\bar{x} = \bar{y}$, which implies that every weakly convergent subsequence of $\{x_n\}_{n\in\mathbb{N}}$ has the same weak limit $\bar{x}$. Therefore, $x_n\rightharpoonup\bar{x}$, and the proof is completed. □
Remark 3.
If we set $t_n = 0$ for all $n\in\mathbb{N}$, then the algorithm in Theorem 3 reduces to the following scheme:
$$x_1\in C \text{ is given arbitrarily},\qquad x_{n+1} = x_n + \lambda\big[P_C^{\alpha\mu,g}\big(F(x_n)-\alpha x_n\big) - F(x_n)\big],\quad n\in\mathbb{N}.$$
In addition, the conditions imposed on the parameters $\alpha$ and $\lambda$ differ from those in Theorem 1. Therefore, Theorems 1 and 3 describe different convergence theorems under distinct parameter settings.
Theorem 4.
Let $\mu$, $\alpha$, $\lambda$, and $\gamma$ be positive real numbers with $2\lambda\alpha = 1$ and $\lambda \le \gamma$. Let $H$ be a real Hilbert space, and $C$ be a nonempty closed convex subset of $H$. Let $F:H\to H$ be a $\gamma$-inverse strongly monotone mapping, and $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let $\{t_n\}_{n\in\mathbb{N}}$ be a nondecreasing sequence in the interval $[0,c]\subseteq[0,1/3)$. Let $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$ be the solution set of the problem IMVI($F,\mu g,C$), and assume that $\Omega_{\mathrm{IMVI}(F,\mu g,C)}\ne\emptyset$. Let $\{x_n\}_{n\in\mathbb{N}}$ be generated by the following iterative process:
$$x_0, x_1\in C \text{ are given arbitrarily},\qquad u_n = x_n + t_n(x_n - x_{n-1}),\qquad x_{n+1} = u_n + \lambda\big[P_C^{\alpha\mu,g}\big(F(u_n)-\alpha u_n\big) - F(u_n)\big],\quad n\in\mathbb{N}.$$
Then, the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to an element of $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$.
Proof. 
Take any $z\in\Omega_{\mathrm{IMVI}(F,\mu g,C)}$ and $n\in\mathbb{N}$, and let $z$ and $n$ be fixed. From (23), we know that
$$\begin{aligned}
\|x_{n+1}-z\|^2 - \|x_n-z\|^2 &\le t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big) - \|x_{n+1}-x_n\|^2 + t_n\|x_n-x_{n-1}\|^2 + 2t_n\langle x_n-x_{n-1},\, x_{n+1}-x_n\rangle\\
&\le t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big) - \|x_{n+1}-x_n\|^2 + t_n\|x_n-x_{n-1}\|^2 + t_n\big(\|x_n-x_{n-1}\|^2 + \|x_{n+1}-x_n\|^2\big).
\end{aligned}$$
This implies that
$$\|x_{n+1}-z\|^2 - \|x_n-z\|^2 \le t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big) + (t_n-1)\|x_{n+1}-x_n\|^2 + 2t_n\|x_n-x_{n-1}\|^2.$$
Hence,
$$\begin{aligned}
\|x_{n+1}-z\|^2 - t_{n+1}\|x_n-z\|^2 + 2t_{n+1}\|x_{n+1}-x_n\|^2 &\le \|x_{n+1}-z\|^2 - t_n\|x_n-z\|^2 + 2t_{n+1}\|x_{n+1}-x_n\|^2\\
&\le \|x_n-z\|^2 - t_n\|x_{n-1}-z\|^2 + 2t_n\|x_n-x_{n-1}\|^2 + (3t_{n+1}-1)\|x_{n+1}-x_n\|^2.
\end{aligned}\tag{31}$$
Here, set
$$k_n := \|x_n-z\|^2 - t_n\|x_{n-1}-z\|^2 + 2t_n\|x_n-x_{n-1}\|^2.\tag{32}$$
By (31) and (32),
$$0 \le (1-3c)\|x_{n+1}-x_n\|^2 \le (1-3t_{n+1})\|x_{n+1}-x_n\|^2 \le k_n - k_{n+1},$$
and this implies that
$$\sum_{n=0}^{\infty}(1-3c)\|x_{n+1}-x_n\|^2 < \infty \quad\text{and}\quad \sum_{n=0}^{\infty}\|x_{n+1}-x_n\|^2 < \infty.$$
Further,
$$\sum_{n=0}^{\infty} t_{n+1}\|x_{n+1}-x_n\|^2 \le \sum_{n=0}^{\infty}\|x_{n+1}-x_n\|^2 < \infty.$$
Therefore, we obtain the conclusion of Theorem 4 from Theorem 3. □
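The inertial scheme of Theorems 3 and 4 can be sketched on a hypothetical toy instance: $g\equiv 0$ (so the generalized projection is the metric projection onto a box, per Remark 1), $F(x)=Ax$ with a symmetric positive definite $A$, and parameters satisfying $2\lambda\alpha=1$, $\lambda\le\gamma$, with a constant inertial weight $t_n\equiv 0.3\in[0,1/3)$. All concrete values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy instance: F(x) = A x, eigenvalues of A are 0.4 and 0.8,
# so F is gamma-inverse strongly monotone with gamma = 1 / 0.8 = 1.25.
# g == 0, so the generalized projection is the metric projection
# onto the box C = [1, 2] x [1, 2].
A = np.array([[0.6, 0.2],
              [0.2, 0.6]])

def proj_C(x):
    return np.clip(x, 1.0, 2.0)

# Parameters satisfying 2 * lam * alpha = 1 and lam <= gamma.
lam, alpha = 1.0, 0.5
t = 0.3                        # constant inertial weight in [0, 1/3)

x_prev = np.array([1.0, 1.0])  # x_0
x = np.array([1.0, 1.0])       # x_1
for _ in range(200):
    u = x + t * (x - x_prev)                       # inertial step
    x_prev = x
    Fu = A @ u
    x = u + lam * (proj_C(Fu - alpha * u) - Fu)    # projection step

# For this instance the solution works out by hand to x_bar = (1.25, 1.25).
print(x)  # close to [1.25, 1.25]
```

On this instance the inertial iterates converge geometrically to the same point the non-inertial scheme reaches; the inertial term mainly changes the transient behavior.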
Theorem 5.
Let $\mu$, $\alpha$, $\lambda$, and $\gamma$ be positive real numbers with $2\lambda\alpha = 1$ and $\lambda \le \gamma$. Let $H$ be a real Hilbert space, and $C$ be a nonempty closed convex subset of $H$. Let $F:H\to H$ be a $\gamma$-inverse strongly monotone mapping, and $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let $\{t_n\}_{n\in\mathbb{N}}$ be a sequence in the interval $[0,c]\subseteq[0,1]$, and let $\{\beta_n\}_{n\in\mathbb{N}}$ be a sequence in the interval $[a,b]\subseteq(0,2)$. Let $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$ be the solution set of the problem IMVI($F,\mu g,C$), and assume $\Omega_{\mathrm{IMVI}(F,\mu g,C)}\ne\emptyset$. Let $\{x_n\}_{n\in\mathbb{N}}$ be generated by the following iterative process:
$$x_0, x_1\in C \text{ are given arbitrarily},\qquad u_n = x_n + t_n(x_n - x_{n-1}),\qquad x_{n+1} = u_n + \lambda\beta_n\big[P_C^{\alpha\mu,g}\big(F(u_n)-\alpha u_n\big) - F(u_n)\big],\quad n\in\mathbb{N}.$$
If we further assume that $\sum_{n=1}^{\infty} t_n\|x_n - x_{n-1}\|^2 < \infty$, then the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to an element of $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$.
Proof. 
Take any $z\in\Omega_{\mathrm{IMVI}(F,\mu g,C)}$ and $n\in\mathbb{N}$, and let $z$ and $n$ be fixed. Let $S:H\to H$ be defined as $S(x) = x + \lambda\big[P_C^{\alpha\mu,g}(F(x)-\alpha x) - F(x)\big]$ for each $x\in H$. From Lemma 14, we know that $S$ is a firmly nonexpansive mapping and $Fix(S) = \Omega_{\mathrm{IMVI}(F,\mu g,C)}$. Next, the iteration can be written as
$$x_0, x_1\in C \text{ are given arbitrarily},\qquad u_n = x_n + t_n(x_n - x_{n-1}),\qquad x_{n+1} = (1-\beta_n)u_n + \beta_n S(u_n),\quad n\in\mathbb{N}.$$
Thus, by Lemma 4, we know that
$$0 \le \langle u_n - S(u_n),\, S(u_n) - z\rangle = \langle u_n - S(u_n),\, S(u_n) - u_n + u_n - z\rangle,\tag{37}$$
and this implies that
$$\|u_n - S(u_n)\|^2 \le \langle u_n - S(u_n),\, u_n - z\rangle.\tag{38}$$
Next, we have
$$\|x_{n+1}-z\|^2 = \big\|(1-\beta_n)u_n + \beta_n S(u_n) - z\big\|^2 = \big\|u_n - z - \beta_n(u_n - S(u_n))\big\|^2 = \|u_n-z\|^2 + \beta_n^2\|u_n-S(u_n)\|^2 - 2\beta_n\langle u_n-z,\, u_n-S(u_n)\rangle.$$
From (37) and (38), we have
$$\|x_{n+1}-z\|^2 \le \|u_n-z\|^2 + \beta_n^2\|u_n-S(u_n)\|^2 - 2\beta_n\|u_n-S(u_n)\|^2 = \|u_n-z\|^2 - \beta_n(2-\beta_n)\|u_n-S(u_n)\|^2.\tag{39}$$
Further,
$$\begin{aligned}
\|u_n-z\|^2 &= \big\|(x_n-z) + t_n(x_n-x_{n-1})\big\|^2 = \|x_n-z\|^2 + t_n^2\|x_n-x_{n-1}\|^2 + 2t_n\langle x_n-z,\, x_n-x_{n-1}\rangle\\
&= \|x_n-z\|^2 + t_n^2\|x_n-x_{n-1}\|^2 + t_n\|x_n-x_{n-1}\|^2 + t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big)\\
&\le \|x_n-z\|^2 + 2t_n\|x_n-x_{n-1}\|^2 + t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big).
\end{aligned}\tag{40}$$
From (39) and (40),
$$\|x_{n+1}-z\|^2 \le \|x_n-z\|^2 + t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big) + 2t_n\|x_n-x_{n-1}\|^2 - \beta_n(2-\beta_n)\|u_n-S(u_n)\|^2.\tag{41}$$
From (41) and Lemma 10, we know that lim n | | x n z | | 2 exists, { x n } n N is a bounded sequence, and
$$\sum_{n=1}^{\infty}\big[\|x_{n+1}-z\|^2 - \|x_n-z\|^2\big]_+ < \infty.$$
Further, by assumption and (41) again, we know that
$$\lim_{n\to\infty}\|u_n - S(u_n)\| = 0.\tag{43}$$
Since $\{x_n\}_{n\in\mathbb{N}}$ is a bounded sequence, there exist a subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$ of $\{x_n\}_{n\in\mathbb{N}}$ and $\bar{x}\in H$ such that $x_{n_k}\rightharpoonup\bar{x}$. Further, we know that
$$u_{n_k} = x_{n_k} + t_{n_k}(x_{n_k} - x_{n_k-1}) \rightharpoonup \bar{x}.$$
From (43) and Lemma 3, we know that $\bar{x}\in Fix(S)$. Next, following the same argument as at the end of the proof of Theorem 3, we know that $x_n\rightharpoonup\bar{x}$, and the proof is completed. □
Remark 4.
If we set $t_n = 0$ for all $n\in\mathbb{N}$, then the algorithm in Theorem 5 reduces to the following scheme:
$$x_1\in C \text{ is given arbitrarily},\qquad x_{n+1} = x_n + \lambda\beta_n\big[P_C^{\alpha\mu,g}\big(F(x_n)-\alpha x_n\big) - F(x_n)\big],\quad n\in\mathbb{N}.$$
However, the conditions imposed on the parameters $\alpha$ and $\lambda$ differ from those in Theorem 1. Therefore, Theorems 1 and 5 describe different convergence theorems under distinct parameter settings.
Theorem 6.
Let $\mu$, $\alpha$, $\lambda$, and $\gamma$ be positive real numbers with $2\lambda\alpha = 1$ and $\lambda \le \gamma$. Let $H$ be a real Hilbert space, and $C$ be a nonempty closed convex subset of $H$. Let $F:H\to H$ be a $\gamma$-inverse strongly monotone mapping, and $g:C\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let $\{t_n\}_{n\in\mathbb{N}}$ be a sequence in the interval $[0,\frac{1}{7}]\subseteq[0,1)$, and let $\{\beta_n\}_{n\in\mathbb{N}}$ be a sequence in the interval $[a,\frac{1}{2}]\subseteq(0,2)$. Let $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$ be the solution set of the problem IMVI($F,\mu g,C$), and assume $\Omega_{\mathrm{IMVI}(F,\mu g,C)}\ne\emptyset$. Let $\{x_n\}_{n\in\mathbb{N}}$ be generated by the following iterative process:
$$x_0, x_1\in C \text{ are given arbitrarily},\qquad u_n = x_n + t_n(x_n - x_{n-1}),\qquad x_{n+1} = u_n + \lambda\beta_n\big[P_C^{\alpha\mu,g}\big(F(u_n)-\alpha u_n\big) - F(u_n)\big],\quad n\in\mathbb{N}.$$
Then, the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to an element of $\Omega_{\mathrm{IMVI}(F,\mu g,C)}$.
Proof. 
Take any $z\in\Omega_{\mathrm{IMVI}(F,\mu g,C)}$ and $n\in\mathbb{N}$, and let $z$ and $n$ be fixed. From the proof of Theorem 5, by (39) and (40), we obtain
$$\begin{aligned}
\|x_{n+1}-z\|^2 - \|x_n-z\|^2 &\le t_n\big(\|x_n-z\|^2-\|x_{n-1}-z\|^2\big) + (t_n+t_n^2)\|x_n-x_{n-1}\|^2 - \beta_n(2-\beta_n)\|u_n-S(u_n)\|^2\\
&= t_n\big(\|x_n-z\|^2-\|x_{n-1}-z\|^2\big) + (t_n+t_n^2)\|x_n-x_{n-1}\|^2 - \frac{2-\beta_n}{\beta_n}\|x_{n+1}-u_n\|^2\\
&\le t_n\big(\|x_n-z\|^2-\|x_{n-1}-z\|^2\big) + (t_n+t_n^2)\|x_n-x_{n-1}\|^2 - 3\|x_{n+1}-u_n\|^2\\
&\le t_n\big(\|x_n-z\|^2-\|x_{n-1}-z\|^2\big) + (t_n+t_n^2)\|x_n-x_{n-1}\|^2 - 3\|x_{n+1}-x_n\|^2 - 3t_n^2\|x_n-x_{n-1}\|^2\\
&\quad + 6t_n\|x_{n+1}-x_n\|\cdot\|x_n-x_{n-1}\|\\
&\le t_n\big(\|x_n-z\|^2-\|x_{n-1}-z\|^2\big) + (t_n+t_n^2)\|x_n-x_{n-1}\|^2 - 3\|x_{n+1}-x_n\|^2 - 3t_n^2\|x_n-x_{n-1}\|^2\\
&\quad + \|x_{n+1}-x_n\|^2 + 9t_n^2\|x_n-x_{n-1}\|^2.
\end{aligned}$$
Hence,
$$\|x_{n+1}-z\|^2 - \|x_n-z\|^2 + 2\|x_{n+1}-x_n\|^2 \le t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2\big) + t_n(1+7t_n)\|x_n-x_{n-1}\|^2 \le t_n\big(\|x_n-z\|^2 - \|x_{n-1}-z\|^2 + 2\|x_n-x_{n-1}\|^2\big).\tag{46}$$
From (46) and Lemma 11, we know $\sum_{n=1}^{\infty}\|x_{n+1}-x_n\|^2 < \infty$. Therefore, we obtain the conclusion of Theorem 6 by using Theorem 5. □
The following are convergence theorems for the inverse variational inequality problems, and they are special cases of Theorems 4 and 6, respectively.
Corollary 1.
Let $\alpha$, $\lambda$, and $\gamma$ be positive real numbers with $2\lambda\alpha = 1$ and $\lambda \le \gamma$. Let $H$ be a real Hilbert space, and $C$ be a nonempty closed convex subset of $H$. Let $F:H\to H$ be a $\gamma$-inverse strongly monotone mapping. Let $\{t_n\}_{n\in\mathbb{N}}$ be a nondecreasing sequence in the interval $[0,c]\subseteq[0,1/3)$. Let $\Omega_{\mathrm{IVI}(F,C)}$ be the solution set of the problem IVI($F,C$), and assume that $\Omega_{\mathrm{IVI}(F,C)}\ne\emptyset$. Let $\{x_n\}_{n\in\mathbb{N}}$ be generated by the following iterative process:
$$x_0, x_1\in C \text{ are given arbitrarily},\qquad u_n = x_n + t_n(x_n - x_{n-1}),\qquad x_{n+1} = u_n + \lambda\big[P_C\big(F(u_n)-\alpha u_n\big) - F(u_n)\big],\quad n\in\mathbb{N}.$$
Then, the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to an element of $\Omega_{\mathrm{IVI}(F,C)}$.
Corollary 2.
Let α, λ, and γ be positive real numbers with 2λα = 1 and λ ≤ γ. Let H be a real Hilbert space, and let C be a nonempty closed convex subset of H. Let F : H → H be a γ-inversely strongly monotone mapping. Let {t_n}_{n∈ℕ} be a sequence in the interval [0, 1/7] ⊂ [0, 1). Let {β_n}_{n∈ℕ} be a sequence in an interval [a, 1/2] ⊂ (0, 2). Let Ω_IVI(F, C) be the solution set of the problem IVI(F, C), and assume that Ω_IVI(F, C) ≠ ∅. Let {x_n}_{n∈ℕ} be generated by the following iterative process:
$$\begin{cases}
x_0,\ x_1 \in C \ \text{given arbitrarily},\\
u_n = x_n + t_n(x_n - x_{n-1}),\\
x_{n+1} = u_n + \lambda\beta_n\bigl(P_C(F(u_n) - \alpha u_n) - F(u_n)\bigr), \quad n \in \mathbb{N}.
\end{cases}$$
Then the sequence {x_n}_{n∈ℕ} converges weakly to an element of Ω_IVI(F, C).
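The relaxed step of Corollary 2 can be sketched on the same illustrative toy instance (again an assumption for demonstration only: F = identity on R², C = [0, 1]²), now with a relaxation parameter β_n and the smaller inertial bound t_n ≤ 1/7.

```python
# Toy sketch of Corollary 2's relaxed iteration under illustrative
# assumptions (F = identity, C = [0, 1]^2); not the paper's experiments.

def proj_box(y, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^2."""
    return [min(max(c, lo), hi) for c in y]

lam, alpha = 0.5, 1.0        # 2*lam*alpha = 1 and lam <= gamma = 1
t, beta = 1.0 / 7.0, 0.4     # t_n in [0, 1/7], beta_n in [a, 1/2]

x_prev, x = [0.9, 0.2], [0.5, 0.7]   # x_0, x_1 in C
for _ in range(300):
    u = [x[i] + t * (x[i] - x_prev[i]) for i in range(2)]
    p = proj_box([u[i] - alpha * u[i] for i in range(2)])   # F(u) = u here
    x_prev, x = x, [u[i] + lam * beta * (p[i] - u[i]) for i in range(2)]

print(x)  # again approaches the solution (0, 0)
```

The factor β_n damps the step relative to Corollary 1, so more iterations are used here; the iterates still contract toward the solution (0, 0).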

5. Conclusions

The main purpose of this paper is to contribute additional algorithmic strategies for solving the inverse mixed variational inequality problem, particularly because existing algorithms for this problem remain relatively scarce in the literature. Our goal is to provide more feasible and applicable methods for addressing such problems.
Inertial algorithms are well known for their ability to enhance the practical performance of iterative schemes. The motivation behind our proposed inertial algorithm lies in its potential to accelerate convergence, thereby offering a more efficient alternative for solving the target problem.

Funding

This research was supported by the National Science and Technology Council (NSTC 113-2115-M-415-003).

Data Availability Statement

All original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IVI: Inverse variational inequality problem
IMVI: Inverse mixed variational inequality problem

References

  1. Alber, Y. The regularization method for variational inequalities with nonsmooth unbounded operator in Banach space. Appl. Math. Lett. 1993, 6, 63–68. [Google Scholar] [CrossRef]
  2. Baiocchi, C.; Capelo, A. Variational and Quasi-Variational Inequalities. Application to Free Boundary Problems; Wiley: New York, NY, USA; London, UK, 1984. [Google Scholar]
  3. Browder, F.E. On the unification of the calculus of variations and the theory of monotone nonlinear operators in Banach spaces. Proc. Natl. Acad. Sci. USA 1966, 56, 419–425. [Google Scholar] [CrossRef] [PubMed]
  4. Deimling, K. Nonlinear Functional Analysis; Springer: Berlin/Heidelberg, Germany, 1985. [Google Scholar]
  5. Fan, J.H.; Liu, X.; Li, J.L. Iterative schemes for approximating solutions of generalized variational inequalities in Banach spaces. Nonlinear Anal. 2009, 70, 3997–4007. [Google Scholar] [CrossRef]
  6. He, B.S.; Liu, H.X. Inverse Variational Inequalities in the Economic Field: Applications and Algorithms. 2006. Available online: http://www.paper.edu.cn/releasepaper/content/200609-260 (accessed on 1 May 2025).
  7. He, B.S.; Liu, H.X.; Li, M.; He, X.Z. PPA-Based Methods for Monotone Inverse Variational Inequalities. 2006. Available online: http://www.paper.edu.cn/releasepaper/content/200606-219 (accessed on 1 May 2025).
  8. Yang, J. Dynamic Power Price Problem: An Inverse Variational Inequality Approach. J. Ind. Manag. Optim. 2008, 4, 673–684. [Google Scholar] [CrossRef]
  9. Scrimali, L. An inverse variational inequality approach to the evolutionary spatial price equilibrium problem. Optim. Eng. 2012, 13, 375–387. [Google Scholar] [CrossRef]
  10. Addi, K.; Brogliato, B.; Goeleven, D. A qualitative mathematical analysis of a class of linear variational inequalities via semi-complementarity problems: Applications in electronics. Math. Program. 2011, 126, 31–67. [Google Scholar] [CrossRef]
  11. He, B. Inexact implicit methods for monotone general variational inequalities. Math. Program. 1999, 86, 199–217. [Google Scholar] [CrossRef]
  12. Ju, X.; Li, C.D.; He, X.; Feng, G. A proximal neurodynamic model for solving inverse mixed variational inequalities. Neural Netw. 2021, 138, 1–9. [Google Scholar] [CrossRef] [PubMed]
  13. Xu, H.K.; Dey, S.; Vetrivel, V. Notes on a neural network approach to inverse variational inequalities. Optimization 2021, 70, 901–910. [Google Scholar] [CrossRef]
  14. Zou, X.; Gong, D.; Wang, L.; Chen, Z. A novel method to solve inverse variational inequality problems based on neural networks. Neurocomputing 2016, 173, 1163–1168. [Google Scholar] [CrossRef]
  15. Chen, J.W.; Ju, X.X.; Köbis, E.; Liou, Y.C. Tikhonov-type regularization methods for inverse mixed variational inequalities. Optimization 2020, 69, 401–413. [Google Scholar] [CrossRef]
  16. Dey, S.; Reich, S. A dynamical system for solving inverse quasi-variational inequalities. Optimization 2023, 73, 1681–1701. [Google Scholar] [CrossRef]
  17. He, S.; Dong, Q.L. An existence-uniqueness theorem and alternating contraction projection methods for inverse variational inequalities. J. Inequal. Appl. 2018, 2018, 351. [Google Scholar] [CrossRef] [PubMed]
  18. He, X.Z.; Liu, H.X. Inverse variational inequalities with projection-based solution methods. Eur. J. Oper. Res. 2011, 208, 12–18. [Google Scholar] [CrossRef]
  19. Luo, X.P.; Yang, J. Regularization and iterative methods for monotone inverse variational inequalities. Optim. Lett. 2014, 8, 1261–1272. [Google Scholar] [CrossRef]
  20. Vuong, P.T.; He, X.; Thong, D.V. Global exponential stability of a neural network for inverse variational inequalities. J. Optim. Theory Appl. 2021, 190, 915–930. [Google Scholar] [CrossRef]
  21. Li, X.; Li, X.; Huang, N.J. A generalized f-projection algorithm for inverse mixed variational inequalities. Optim. Lett. 2014, 8, 1063–1076. [Google Scholar] [CrossRef]
  22. Itoh, S.; Takahashi, W. The common fixed point theory of single-valued mappings and multi-valued mappings. Pacific J. Math. 1978, 79, 493–508. [Google Scholar] [CrossRef]
  23. Kocourek, P.; Takahashi, W.; Yao, J.C. Fixed point theorems and weak convergence theorems for generalized hybrid mappings in Hilbert spaces. Taiwan. J. Math. 2010, 14, 2497–2511. [Google Scholar] [CrossRef]
  24. Browder, F.E. Fixed point theorems for noncompact mappings in Hilbert spaces. Proc. Natl. Acad. Sci. USA 1965, 53, 1272–1276. [Google Scholar] [CrossRef]
  25. Mann, W. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  26. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150. [Google Scholar] [CrossRef]
  27. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236. [Google Scholar] [CrossRef]
  28. Chuang, C.S.; Hong, C.C. New self-adaptive algorithms and inertial self-adaptive algorithms for the split variational inclusion problems in Hilbert space. Numer. Funct. Anal. Optim. 2022, 43, 1050–1068. [Google Scholar] [CrossRef]