Abstract
The inverse mixed variational inequality problem arises from the classical variational inequality problem and has many applications. In this paper, we propose new algorithms, based on the generalized projection operator, for solving inverse mixed variational inequality problems in Hilbert spaces. We then establish convergence theorems under inverse strong monotonicity conditions. In addition, we provide inertial-type algorithms for the inverse mixed variational inequality problem under conditions that differ from those of the above convergence theorems.
Keywords:
variational inequality; inverse variational inequality; inverse strong monotonicity; inertial algorithm; generalized projection
MSC:
47J20; 47H05; 65K10
1. Introduction
The variational inequality problem is a powerful tool for modeling and analyzing equilibrium problems in a variety of disciplines, including engineering, economics, and operations research [1,2,3,4,5]. Over the years, many extensions of the classical variational inequality problem have been proposed to deal with increasingly complex situations. In 2006, He et al. [6,7] first proposed the inverse variational inequality problem, which is different from the classical variational inequality problem: find $u \in H$ such that $F(u) \in C$ and
$$\langle v - F(u), u \rangle \geq 0 \quad \text{for all } v \in C,$$
where H is a real Hilbert space, C is a nonempty closed convex subset of H, and $F : H \to H$ is a mapping. Indeed, the inverse variational inequality problem has many practical applications, such as in dynamic power price problems [8], traffic congestion control problems [6], spatial price equilibrium control problems [9], ampere–volt characteristics for circuits [10], and neural network problems [11,12,13,14].
Furthermore, many algorithms have been proposed to study the inverse variational inequality problem, such as the proximal point-based algorithm, the projection-type algorithm, the alternating contraction projection algorithm, the regularized iterative algorithm, and the dynamical system method. One can see this range of proposed algorithms in [6,13,14,15,16,17,18,19,20].
In 2014, Li et al. [21] introduced the inverse mixed variational inequality problem: find $u \in H$ such that $F(u) \in C$ and
$$\langle v - F(u), u \rangle + f(v) - f(F(u)) \geq 0 \quad \text{for all } v \in C,$$
where H is a real Hilbert space, C is a nonempty closed convex subset of H, $F : H \to H$ is a mapping, and $f : H \to \mathbb{R} \cup \{+\infty\}$ is a proper, convex, and lower semicontinuous function. Further, Li et al. [21] proposed the following algorithm and related convergence theorems under suitable conditions.
where $\{\rho_n\}$ is a sequence of positive real numbers and $\Pi^{f}_{C,\mu}$ is a generalized projection (see Definition 3).
Although the inverse mixed variational inequality problem has been investigated in the literature [21], several theoretical and algorithmic questions remain unanswered, including the development of more efficient solution methods and the exploration of convergence properties under weaker assumptions.
Motivated by the above works, we consider the following inverse mixed variational inequality problem IMVI($F, f$): find $u \in H$ such that $F(u) \in C$ and
$$\langle v - F(u), u \rangle + f(v) - f(F(u)) \geq 0 \quad \text{for all } v \in C,$$
where H is a real Hilbert space, C is a nonempty closed convex subset of H, $F : H \to H$ is a mapping, and $f : H \to \mathbb{R} \cup \{+\infty\}$ is a proper, convex, and lower semicontinuous function. Further, we consider the following algorithms and related convergence theorems for the inverse mixed variational inequality problem.
- (i)
- The classical algorithm for the inverse mixed variational inequality problem, where μ, α, and λ are positive real numbers, and $\Pi^{f}_{C,\mu}$ is a generalized projection operator (see Definition 3).
- (ii)
- A generalized algorithm derived from the classical algorithm, where μ, α, and λ are positive real numbers, and $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences in the interval $(0, 1)$.
- (iii)
- Inertial Algorithm (1), where μ, α, and λ are positive real numbers, and $\{\theta_n\}$ is a sequence in the interval $[0, 1)$.
- (iv)
- Inertial Algorithm (2), where μ, α, and λ are positive real numbers, $\{\theta_n\}$ is a sequence in the interval $[0, 1)$, and $\{\alpha_n\}$ is a sequence in the interval $(0, 1)$.
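As a rough illustration of how projection-type schemes of this kind behave in practice, the following sketch runs a fixed-point iteration of the form $u_{k+1} = u_k - \lambda\,(F(u_k) - P_C(F(u_k) - \alpha u_k))$ for an inverse variational inequality with $f \equiv 0$, so the generalized projection reduces to the metric projection. The specific update rule, the box constraint set, and the choice $F = I$ are illustrative assumptions, not the exact algorithms analyzed below.

```python
import numpy as np

# Hypothetical projection-type iteration for an inverse variational
# inequality with f = 0, so the generalized projection is the metric
# projection onto the box C = [1, 2] x [1, 2].  The update
#   u_{k+1} = u_k - lam * (F(u_k) - P_C(F(u_k) - alpha * u_k))
# is an illustrative assumption, not the paper's exact scheme.

def proj_box(x, lo=1.0, hi=2.0):
    """Metric projection onto the box [lo, hi]^n (componentwise clip)."""
    return np.clip(x, lo, hi)

def F(u):
    """The identity map, which is 1-inverse strongly monotone."""
    return u

alpha, lam = 0.5, 0.5
u = np.array([2.0, 1.8])
for _ in range(60):
    u = u - lam * (F(u) - proj_box(F(u) - alpha * u))

# The iterates approach u* = (1, 1), which solves this inverse VI:
# F(u*) lies in C and <v - F(u*), u*> >= 0 for all v in C.
print(u)
```

For this separable example each coordinate contracts geometrically toward 1, so the residual of the fixed-point characterization vanishes in the limit.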
In Section 1, we review the existing inverse mixed variational inequality problems and algorithms discussed in the literature. In Section 2, we present the necessary definitions and tools. In Section 3, we propose algorithms for solving inverse mixed variational inequality problems under different assumptions, where the mapping F is assumed to be inverse strongly monotone rather than strongly monotone. Moreover, our convergence theorems differ from those proposed by Li et al. [21]. In Section 4, we present inertial algorithms and related convergence theorems for inverse mixed variational inequality problems.
2. Preliminaries
Let H be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. We denote the strong and weak convergence of $\{x_n\}$ to $x \in H$ by $x_n \to x$ and $x_n \rightharpoonup x$, respectively. For each $x, y \in H$ and $t \in [0, 1]$, we have
$$\|tx + (1-t)y\|^2 = t\|x\|^2 + (1-t)\|y\|^2 - t(1-t)\|x - y\|^2.$$
Lemma 1
(Opial’s Theorem). Let H be a real Hilbert space, and $\{x_n\}$ be a sequence in H with $x_n \rightharpoonup w$. Then, $\liminf_{n\to\infty}\|x_n - w\| < \liminf_{n\to\infty}\|x_n - v\|$ for all $v \in H$ with $v \neq w$.
Definition 1.
Let H be a real Hilbert space, $B : H \to H$ be a mapping, and $\beta$ be a positive real number. Thus,
- (i)
- B is monotone if $\langle Bx - By, x - y \rangle \geq 0$ for all $x, y \in H$;
- (ii)
- B is β-strongly monotone if $\langle Bx - By, x - y \rangle \geq \beta\|x - y\|^2$ for all $x, y \in H$;
- (iii)
- B is β-inverse strongly monotone if $\langle Bx - By, x - y \rangle \geq \beta\|Bx - By\|^2$ for all $x, y \in H$.
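A quick numerical sanity check (not a proof) can make part (iii) concrete: for a symmetric positive semidefinite matrix A, the linear map $B(x) = Ax$ is $(1/L)$-inverse strongly monotone with $L$ the largest eigenvalue of A. The matrix and sample count below are arbitrary choices for illustration.

```python
import numpy as np

# Numerically check that B(x) = A x satisfies
#   <Bx - By, x - y> >= (1/L) * ||Bx - By||^2   for all x, y,
# where L = lambda_max(A) and A is symmetric positive semidefinite.

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # symmetric, positive definite
L = np.linalg.eigvalsh(A).max()     # largest eigenvalue

ok = True
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    d = x - y
    Bd = A @ d                      # Bx - By, since B is linear
    ok &= d @ Bd >= (1.0 / L) * (Bd @ Bd) - 1e-9   # small float slack
print(bool(ok))
```

The inequality holds exactly in this symmetric case, which is why only a tiny floating-point tolerance is needed.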
Definition 2.
Let C be a nonempty closed convex subset of a real Hilbert space H, and let $T : C \to H$ be a mapping. Let $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$. Thus,
- (i)
- T is a nonexpansive mapping if $\|Tx - Ty\| \leq \|x - y\|$ for every $x, y \in C$;
- (ii)
- T is a firmly nonexpansive mapping if $\|Tx - Ty\|^2 \leq \langle x - y, Tx - Ty \rangle$ for every $x, y \in C$, that is, $\|Tx - Ty\|^2 \leq \|x - y\|^2 - \|(x - Tx) - (y - Ty)\|^2$ for every $x, y \in C$;
- (iii)
- T is a quasi-nonexpansive mapping if $\mathrm{Fix}(T) \neq \emptyset$ and $\|Tx - z\| \leq \|x - z\|$ for all $x \in C$ and $z \in \mathrm{Fix}(T)$.
Lemma 2
([22,23]). Let C be a nonempty closed convex subset of a real Hilbert space H, and let $T : C \to C$ be a quasi-nonexpansive mapping. Then, $\mathrm{Fix}(T)$ is a closed convex set.
Lemma 3
([24]). Let C be a nonempty closed and convex subset of a real Hilbert space H. Let $T : C \to C$ be a nonexpansive mapping, and $\{x_n\}$ be a sequence in C. If $x_n \rightharpoonup w$ and $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$, then $Tw = w$.
Lemma 4.
Let C be a nonempty closed convex subset of a real Hilbert space H, and let $T : C \to H$ be a firmly nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$. Thus, $\|Tx - z\|^2 \leq \|x - z\|^2 - \|x - Tx\|^2$ for every $x \in C$ and $z \in \mathrm{Fix}(T)$.
Proof.
For every $x \in C$ and $z \in \mathrm{Fix}(T)$, since T is a firmly nonexpansive mapping, we have
$$\|Tx - z\|^2 \leq \langle x - z, Tx - z \rangle = \frac{1}{2}\left(\|x - z\|^2 + \|Tx - z\|^2 - \|x - Tx\|^2\right).$$
Hence, we know $\|Tx - z\|^2 \leq \|x - z\|^2 - \|x - Tx\|^2$, and the proof is completed. □
Lemma 5.
Let C be a nonempty closed convex subset of a real Hilbert space H. Then, for each $x \in H$, there is a unique element $\bar{x} \in C$ such that $\|x - \bar{x}\| = \min_{y \in C}\|x - y\|$. Here, set $P_C(x) = \bar{x}$, and $P_C$ is called the metric projection from H onto C.
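For sets with simple geometry, the nearest point of Lemma 5 has a closed form. The sketch below computes the metric projection onto a closed ball and onto a box; both formulas are standard, and the particular points and sets are arbitrary illustrations.

```python
import numpy as np

# Metric projection onto two standard closed convex sets, in the sense
# of Lemma 5: P_C(x) is the unique nearest point of C to x.

def proj_ball(x, center, radius):
    """Projection onto the closed ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + (radius / n) * d

def proj_box(x, lo, hi):
    """Projection onto the box {y : lo <= y <= hi}, componentwise."""
    return np.clip(x, lo, hi)

x = np.array([3.0, 4.0])
p = proj_ball(x, center=np.zeros(2), radius=1.0)   # radial shrink
q = proj_box(x, lo=np.zeros(2), hi=np.ones(2))     # coordinatewise clip
print(p, q)
```

Since $\|(3,4)\| = 5$, the ball projection rescales the point to unit norm, while the box projection clips each coordinate independently.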
Definition 3
([21]). Let μ be a positive real number, H be a real Hilbert space, and C be a closed convex subset of H. Let $f : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semicontinuous function. We say that $\Pi^{f}_{C,\mu} : H \to C$ is a generalized projection operator if
$$\Pi^{f}_{C,\mu}(x) = \left\{ u \in C : G(x, u) = \inf_{y \in C} G(x, y) \right\}$$
for each $x \in H$, where $G : H \times C \to \mathbb{R} \cup \{+\infty\}$ is defined as
$$G(x, y) = \|x\|^2 - 2\langle x, y \rangle + \|y\|^2 + 2\mu f(y).$$
Lemma 6
([21]). Let μ be a positive real number, H be a real Hilbert space, and C be a closed convex subset of H. Let $f : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semicontinuous function. Thus,
- (i)
- for each $x \in H$, the set $\Pi^{f}_{C,\mu}(x)$ is nonempty and a singleton;
- (ii)
- for each $x \in H$ and $u \in C$, $u = \Pi^{f}_{C,\mu}(x)$ if, and only if, $\langle u - x, y - u \rangle + \mu f(y) - \mu f(u) \geq 0$ for all $y \in C$;
- (iii)
- $\Pi^{f}_{C,\mu}$ is a firmly nonexpansive mapping.
Remark 1.
If $f(x) = 0$ for each $x \in C$, then the mapping $\Pi^{f}_{C,\mu}$ reduces to the metric projection $P_C$ from H onto C.
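For separable data the generalized projection can be computed coordinatewise. The sketch below assumes C is a box and $f(y) = \tau\|y\|_1$, so that minimizing $\|x - y\|^2 + 2\mu f(y)$ over C splits into one-dimensional convex problems, each solved by soft-thresholding followed by clipping (for a one-dimensional convex objective, projecting the unconstrained minimizer onto the interval gives the constrained minimizer). The function name and the choice of f are illustrative assumptions.

```python
import numpy as np

# Generalized projection of Definition 3 in a separable special case:
#   Pi(x) = argmin_{y in C} ||x - y||^2 + 2*mu*tau*||y||_1,
# with C = [lo, hi]^n a box.  Per coordinate: soft-threshold, then clip.

def gen_proj_box_l1(x, mu, tau, lo, hi):
    soft = np.sign(x) * np.maximum(np.abs(x) - mu * tau, 0.0)
    return np.clip(soft, lo, hi)

x = np.array([1.5, -0.2, 0.7])
lo, hi = -np.ones(3), np.ones(3)

p = gen_proj_box_l1(x, mu=0.5, tau=0.4, lo=lo, hi=hi)
# With f = 0 (tau = 0), the operator reduces to the metric projection
# onto C, as noted in Remark 1.
p0 = gen_proj_box_l1(x, mu=0.5, tau=0.0, lo=lo, hi=hi)
print(p, p0)
```

Here the threshold is $\mu\tau = 0.2$, so the coordinates shrink toward zero before being clipped into the box, whereas with $\tau = 0$ only the clipping remains.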
Lemma 7
([21]). Let μ be a positive real number, H be a real Hilbert space, and C be a closed convex subset of H. Let $f : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semicontinuous function, and $F : H \to H$ be a mapping. Then, $z \in H$ is a solution of IMVI($F, f$) if, and only if, $F(z) = \Pi^{f}_{C,\mu}\big(F(z) - \mu z\big)$.
To study the algorithms from the perspective of a fixed-point approach, we require the following two convergence results.
Lemma 8
([25]). Let $\{\alpha_n\}$ be a sequence in the interval $(0, 1)$. Let C be a nonempty closed convex subset of a real Hilbert space H, and let $T : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$. Let $\{x_n\}$ be generated by $x_1 \in C$ and $x_{n+1} = (1 - \alpha_n)x_n + \alpha_n Tx_n$ for each $n \in \mathbb{N}$. Then, the sequence $\{x_n\}$ converges weakly to an element of $\mathrm{Fix}(T)$.
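The averaging in Mann's iteration is what produces convergence: for a plane rotation, a nonexpansive (indeed isometric) map whose only fixed point is the origin, the plain Picard iteration $x_{n+1} = Tx_n$ circles forever, while the Mann step drives the iterates to $\mathrm{Fix}(T)$. The rotation and the constant coefficient are illustrative choices.

```python
import numpy as np

# Mann's iteration (Lemma 8): x_{n+1} = (1 - a_n) x_n + a_n T x_n,
# here with T a rotation by pi/2 (nonexpansive, Fix(T) = {0}) and a
# constant coefficient a_n = 0.5 in (0, 1).

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # rotation by 90 degrees

x = np.array([1.0, 0.0])
for n in range(200):
    a = 0.5
    x = (1 - a) * x + a * (R @ x)

# The averaged map has spectral radius sqrt(0.5) < 1, so the iterates
# spiral into the unique fixed point at the origin.
print(np.linalg.norm(x))
```

In finite dimensions, the weak convergence guaranteed by the lemma is of course strong convergence, which is what the shrinking norm displays.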
Lemma 9
([26]). Let $\{\alpha_n\}$ and $\{\beta_n\}$ be sequences in the interval $(0, 1)$. Let C be a nonempty closed convex subset of a real Hilbert space H, and let $T : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$. Let $\{x_n\}$ be generated by $x_1 \in C$ and
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n T\big((1 - \beta_n)x_n + \beta_n Tx_n\big) \quad \text{for each } n \in \mathbb{N}.$$
Then, the sequence $\{x_n\}$ converges weakly to an element of $\mathrm{Fix}(T)$.
To study the inertial algorithms, we need the following two results.
Lemma 10
([27]). Let $\{\varphi_n\}$ and $\{\delta_n\}$ be sequences in $[0, \infty)$, and let $\{\theta_n\}$ be a sequence in $[0, \theta]$ with $\theta \in [0, 1)$. Suppose that $\sum_{n=1}^{\infty}\delta_n < \infty$ and $\varphi_{n+1} - \varphi_n \leq \theta_n(\varphi_n - \varphi_{n-1}) + \delta_n$ for all $n \in \mathbb{N}$. Then, $\{\varphi_n\}$ is a convergent sequence and $\sum_{n=1}^{\infty}[\varphi_{n+1} - \varphi_n]_+ < \infty$, where $[t]_+ = \max\{t, 0\}$ (for any $t \in \mathbb{R}$).
Lemma 11
([28]). Let $\{\varphi_n\}$ and $\{\delta_n\}$ be sequences in $[0, \infty)$, and let $\{\theta_n\}$ be a sequence in $[0, 1)$. Suppose that $\varphi_{n+1} - \varphi_n \leq \theta_n(\varphi_n - \varphi_{n-1}) + \delta_n$ for all $n \in \mathbb{N}$ and $\sum_{n=1}^{\infty}\delta_n < \infty$. Then, $\{\varphi_n\}$ is a convergent sequence and $\sum_{n=1}^{\infty}[\varphi_{n+1} - \varphi_n]_+ < \infty$.
3. Algorithms for the Inverse Mixed Variational Inequality Problem
Lemma 12.
Let μ, α, and λ be positive real numbers, H be a real Hilbert space, and C be a closed convex subset of H. Let be a proper, convex, and lower semicontinuous function, and be a mapping. Let be defined as
for each . Thus, for each , we have
Proof.
Lemma 13.
Let μ, α, λ, and γ be positive real numbers with and . Let H be a real Hilbert space, and C be a closed convex subset of H. Let be a proper, convex, and lower semicontinuous function, and be a mapping. Let be defined as for each . If F is γ-inverse strongly monotone, then S is a nonexpansive mapping.
Proof.
Theorem 1.
Let μ, α, λ, and γ be positive real numbers with and . Let H be a real Hilbert space, and C be a nonempty closed convex subset of H. Let $F : H \to H$ be a γ-inverse strongly monotone mapping, and $f : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let $\{\alpha_n\}$ be a sequence in the interval $(0, 1)$. Let Ω be the solution set of the problem IMVI($F, f$), and assume that $\Omega \neq \emptyset$. Let $\{x_n\}$ be generated by the following iterative process:
Then, the sequence $\{x_n\}$ converges weakly to an element of Ω.
Proof.
Let be defined as
for each . From Lemma 13, we know that S is a nonexpansive mapping and . In addition, we have
So, by (16) and Lemma 8, there exists such that the sequence converges weakly to . □
Remark 2.
In Theorem 1, the stepsize parameter λ is allowed to satisfy . This condition differs from the setting used in [21] (Algorithm 4.1), where the parameters are subject to different constraints. As a result, our algorithm and that of [21] are fundamentally different in terms of both structure and parameter selection. Consequently, Theorem 1 provides a distinct convergence result compared to [21] (Theorems 4.3 and 4.4). Moreover, we assume that F is inverse strongly monotone, while [21] (Theorem 4.5) assumes that F is strongly monotone. Therefore, Theorem 1 is essentially different from [21] (Theorem 4.5) in terms of the monotonicity assumption of F.
Theorem 2.
Let μ, α, λ, and γ be positive real numbers with and . Let H be a real Hilbert space, and C be a nonempty closed convex subset of H. Let $F : H \to H$ be a γ-inverse strongly monotone mapping, and $f : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let Ω be the solution set of the problem IMVI($F, f$), and assume that $\Omega \neq \emptyset$. Let $\{\alpha_n\}$ and $\{\beta_n\}$ be sequences in the interval $(0, 1)$. Let $\{x_n\}$ be generated by the following iterative process:
Then, the sequence $\{x_n\}$ converges weakly to an element of Ω.
Proof.
Let be defined as
for each . By Lemma 13, we know S is a nonexpansive mapping and . In addition, we have
From Lemma 9, there exists such that the sequence converges weakly to . □
4. Inertial Algorithms for the Inverse Mixed Variational Inequality Problem
In this section, we impose different assumptions on the parameters. As a result, Lemma 14 differs from Lemma 13, and the assumptions in the convergence theorems of Sections 3 and 4 are likewise distinct.
Lemma 14.
Let μ, α, λ, and γ be positive real numbers with and . Let H be a real Hilbert space, and C be a closed convex subset of H. Let be a proper, convex, and lower semicontinuous function, and be a mapping. Let be defined as for each . If F is γ-inverse strongly monotone, then S is a firmly nonexpansive mapping.
Proof.
Let be defined as
for each . From Lemma 13, we know that W is a nonexpansive mapping. Next, we have
Hence, for , we have
Therefore, S is a firmly nonexpansive mapping. □
Theorem 3.
Let μ, α, λ, and γ be positive real numbers with and . Let H be a real Hilbert space, and C be a nonempty closed convex subset of H. Let $F : H \to H$ be a γ-inverse strongly monotone mapping, and $f : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let $\{\theta_n\}$ be a sequence in the interval $[0, 1)$. Let Ω be the solution set of the problem IMVI($F, f$), and assume that $\Omega \neq \emptyset$. Let $\{x_n\}$ be generated by the following iterative process:
If we further assume that , then the sequence $\{x_n\}$ converges weakly to an element of Ω.
Proof.
Take any and , and let z and n be fixed. Let be defined as
for each . From Lemma 14, we know S is a firmly nonexpansive mapping. From Lemma 7, we know . Next, we have
From Lemma 4, we know that
Next, we know that
From (21) and (22), we have
From (23) and Lemma 10, we know that exists, is a bounded sequence, and
Further, by assumption and (23) again, we know that
Since is a bounded sequence, there exist a subsequence of and such that . Further, we know that
From Lemma 3, we know that .
Next, if is also a subsequence of with , then we know that by using the same argument. Here, we want to show that . If not, then it follows from Lemma 1 that
Similarly, we have
This leads to a contradiction. So, we know that , and this implies that every weakly convergent subsequence of has the same weak limit . Therefore, we know that and the proof is completed. □
Remark 3.
If we set for all , then the algorithm in Theorem 3 reduces to the following scheme:
In addition, the conditions imposed on the parameters α and λ differ from those in Theorem 1. Therefore, Theorems 1 and 3 provide different convergence results under distinct parameter settings.
Theorem 4.
Let μ, α, λ, and γ be positive real numbers with and . Let H be a real Hilbert space, and C be a nonempty closed convex subset of H. Let $F : H \to H$ be a γ-inverse strongly monotone mapping, and $f : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let $\{\theta_n\}$ be a nondecreasing sequence in the interval $[0, 1)$. Let Ω be the solution set of the problem IMVI($F, f$), and assume that $\Omega \neq \emptyset$. Let $\{x_n\}$ be generated by the following iterative process:
Then, the sequence $\{x_n\}$ converges weakly to an element of Ω.
Proof.
Theorem 5.
Let μ, α, λ, and γ be positive real numbers with and . Let H be a real Hilbert space, and C be a nonempty closed convex subset of H. Let $F : H \to H$ be a γ-inverse strongly monotone mapping, and $f : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let $\{\theta_n\}$ be a sequence in the interval $[0, 1)$. Let $\{\alpha_n\}$ be a sequence in the interval $(0, 1)$. Let Ω be the solution set of the problem IMVI($F, f$), and assume that $\Omega \neq \emptyset$. Let $\{x_n\}$ be generated by the following iterative process:
If we further assume that , then the sequence $\{x_n\}$ converges weakly to an element of Ω.
Proof.
Take any and , and let z and n be fixed. Let be defined as for each . From Lemma 14, we know that S is a firmly nonexpansive mapping and . Next,
Thus, we know that
and this implies that
Next, we have
From (37) and (38), we have
Further,
From (39) and (40),
From (41) and Lemma 10, we know that exists, is a bounded sequence, and
Further, by assumption and (41) again, we know that
Since is a bounded sequence, there exist a subsequence of and such that . Further, we know that
From (43) and Lemma 3, we know that . Next, following the same argument as in the final part of the proof of Theorem 3, we know that , and the proof is completed. □
Remark 4.
If we set for all , then the algorithm in Theorem 5 reduces to the following scheme:
However, the conditions imposed on the parameters α and λ are different from that in Theorem 1. Therefore, Theorems 1 and 5 describe different convergence theorems under distinct parameter settings.
Theorem 6.
Let μ, α, λ, and γ be positive real numbers with and . Let H be a real Hilbert space, and C be a nonempty closed convex subset of H. Let $F : H \to H$ be a γ-inverse strongly monotone mapping, and $f : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semicontinuous function. Let $\{\theta_n\}$ be a sequence in the interval $[0, 1)$. Let $\{\alpha_n\}$ be a sequence in the interval $(0, 1)$. Let Ω be the solution set of the problem IMVI($F, f$), and assume that $\Omega \neq \emptyset$. Let $\{x_n\}$ be generated by the following iterative process:
Then, the sequence $\{x_n\}$ converges weakly to an element of Ω.
Proof.
The following are convergence theorems for the inverse variational inequality problem; they are special cases of Theorems 4 and 6, respectively.
Corollary 1.
Let α, λ, and γ be positive real numbers with and . Let H be a real Hilbert space, and C be a nonempty closed convex subset of H. Let $F : H \to H$ be a γ-inverse strongly monotone mapping. Let $\{\theta_n\}$ be a nondecreasing sequence in the interval $[0, 1)$. Let Ω be the solution set of the problem IVI($F$), and assume that $\Omega \neq \emptyset$. Let $\{x_n\}$ be generated by the following iterative process:
Then, the sequence $\{x_n\}$ converges weakly to an element of Ω.
Corollary 2.
Let α, λ, and γ be positive real numbers with and . Let H be a real Hilbert space, and C be a nonempty closed convex subset of H. Let $F : H \to H$ be a γ-inverse strongly monotone mapping. Let $\{\theta_n\}$ be a sequence in the interval $[0, 1)$. Let $\{\alpha_n\}$ be a sequence in the interval $(0, 1)$. Let Ω be the solution set of the problem IVI($F$), and assume that $\Omega \neq \emptyset$. Let $\{x_n\}$ be generated by the following iterative process:
Then, the sequence $\{x_n\}$ converges weakly to an element of Ω.
5. Conclusions
The main purpose of this paper is to contribute additional algorithmic strategies for solving the inverse mixed variational inequality problem, particularly because existing algorithms for this problem remain relatively scarce in the literature. Our goal is to provide more feasible and applicable methods for addressing such problems.
Inertial algorithms are well known for their ability to enhance the practical performance of iterative schemes. The motivation behind our proposed inertial algorithm lies in its potential to accelerate convergence, thereby offering a more efficient alternative for solving the target problem.
Funding
This research was supported by the National Science and Technology Council (NSTC 113-2115-M-415-003).
Data Availability Statement
All original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The author declares no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| IVI | Inverse variational inequality problem |
| IMVI | Inverse mixed variational inequality problem |
References
- Alber, Y. The regularization method for variational inequalities with nonsmooth unbounded operator in Banach space. Appl. Math. Lett. 1993, 6, 63–68. [Google Scholar] [CrossRef]
- Baiocchi, C.; Capelo, A. Variational and Quasi-Variational Inequalities. Application to Free Boundary Problems; Wiley: New York, NY, USA; London, UK, 1984. [Google Scholar]
- Browder, F.E. On the unification of the variations and the theory of monotone nonlinear operators in Banach spaces. Proc. Natl. Acad. Sci. USA 1966, 56, 419–425. [Google Scholar] [CrossRef] [PubMed]
- Deimling, K. Nonlinear Functional Analysis; Springer: Berlin/Heidelberg, Germany, 1985. [Google Scholar]
- Fan, J.H.; Liu, X.; Li, J.L. Iterative schemes for approximating solutions of generalized variational inequalities in Banach spaces. Nonlinear Anal. 2009, 70, 3997–4007. [Google Scholar] [CrossRef]
- He, B.S.; Liu, H.X. Inverse Variational Inequalities in the Economic Field: Applications and Algorithms. 2006. Available online: http://www.paper.edu.cn/releasepaper/content/200609-260 (accessed on 1 May 2025).
- He, B.S.; Liu, H.X.; Li, M.; He, X.Z. PPA-Based Methods for Monotone Inverse Variational Inequalities. 2006. Available online: http://www.paper.edu.cn/releasepaper/content/200606-219 (accessed on 1 May 2025).
- Yang, J. Dynamic Power Price Problem: An Inverse Variational Inequality Approach. J. Ind. Manag. Optim. 2008, 4, 673–684. [Google Scholar] [CrossRef]
- Scrimali, L. An inverse variational inequality approach to the evolutionary spatial price equilibrium problem. Optim. Eng. 2012, 13, 375–387. [Google Scholar] [CrossRef]
- Addi, K.; Brogliato, B.; Goeleven, D. A qualitative mathematical analysis of a class of linear variational inequalities via semi-complementarity problems: Applications in electronics. Math. Program. 2011, 126, 31–67. [Google Scholar] [CrossRef]
- He, B. Inexact implicit methods for monotone general variational inequalities. Math. Program. 1999, 86, 199–217. [Google Scholar] [CrossRef]
- Ju, X.; Li, C.D.; He, X.; Feng, G. A proximal neurodynamic model for solving inverse mixed variational inequalities. Neural Netw. 2021, 138, 1–9. [Google Scholar] [CrossRef] [PubMed]
- Xu, H.K.; Dey, S.; Vetrivel, V. Notes on a neural network approach to inverse variational inequalities. Optimization 2021, 70, 901–910. [Google Scholar] [CrossRef]
- Zou, X.; Gong, D.; Wang, L.; Chen, Z. A novel method to solve inverse variational inequality problems based on neural networks. Neurocomputing 2016, 173, 1163–1168. [Google Scholar] [CrossRef]
- Chen, J.W.; Ju, X.X.; Köbis, E.; Liou, Y.C. Tikhonov-type regularization methods for inverse mixed variational inequalities. Optimization 2020, 69, 401–413. [Google Scholar] [CrossRef]
- Dey, S.; Reich, S. A dynamical system for solving inverse quasi-variational inequalities. Optimization 2023, 73, 1681–1701. [Google Scholar] [CrossRef]
- He, S.; Dong, Q.L. An existence-uniqueness theorem and alternating contraction projection methods for inverse variational inequalities. J. Inequal. Appl. 2018, 2018, 351. [Google Scholar] [CrossRef] [PubMed]
- He, X.Z.; Liu, H.X. Inverse variational inequalities with projection-based solution methods. Eur. J. Oper. Res. 2011, 208, 12–18. [Google Scholar] [CrossRef]
- Luo, X.P.; Yang, J. Regularization and iterative methods for monotone inverse variational inequalities. Optim. Lett. 2014, 8, 1261–1272. [Google Scholar] [CrossRef]
- Vuong, P.T.; He, X.; Thong, D.V. Global exponential stability of a neural network for inverse variational inequalities. J. Optim. Theory Appl. 2021, 190, 915–930. [Google Scholar] [CrossRef]
- Li, X.; Li, X.; Huang, N.J. A generalized f-projection algorithm for inverse mixed variational inequalities. Optim. Lett. 2014, 8, 1063–1076. [Google Scholar] [CrossRef]
- Itoh, S.; Takahashi, W. The common fixed point theory of single-valued mappings and multi-valued mappings. Pacific J. Math. 1978, 79, 493–508. [Google Scholar] [CrossRef]
- Kocourek, P.; Takahashi, W.; Yao, J.C. Fixed point theorems and weak convergence theorems for generalized hybrid mappings in Hilbert spaces. Taiwan. J. Math. 2010, 14, 2497–2511. [Google Scholar] [CrossRef]
- Browder, F.E. Fixed point theorems for noncompact mappings in Hilbert spaces. Proc. Natl. Acad. Sci. USA 1965, 53, 1272–1276. [Google Scholar] [CrossRef]
- Mann, W. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
- Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150. [Google Scholar] [CrossRef]
- Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236. [Google Scholar] [CrossRef]
- Chuang, C.S.; Hong, C.C. New self-adaptive algorithms and inertial self-adaptive algorithms for the split variational inclusion problems in Hilbert space. Numer. Funct. Anal. Optim. 2022, 43, 1050–1068. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).