Article

A Forward–Backward–Forward Algorithm for Quasi-Variational Inequalities in the Moving Set Case

by Nevena Mijajlović 1,*, Ajlan Zajmović 1 and Milojica Jaćimović 2
1 Faculty of Science and Mathematics, University of Montenegro, 81000 Podgorica, Montenegro
2 Montenegrin Academy of Sciences and Arts, 81000 Podgorica, Montenegro
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(12), 1956; https://doi.org/10.3390/math13121956
Submission received: 17 March 2025 / Revised: 21 April 2025 / Accepted: 28 April 2025 / Published: 13 June 2025
(This article belongs to the Special Issue Mathematical Programming and Optimization Algorithms)

Abstract:
This paper addresses the challenge of solving quasi-variational inequalities (QVIs) by developing and analyzing a forward–backward–forward algorithm from both a continuous and an iterative perspective. QVIs extend classical variational inequalities by allowing the constraint set to depend on the decision variable, a formulation that arises naturally in applications such as generalized Nash equilibrium problems with shared constraints. A critical computational challenge in these settings is the expensive nature of projection operations, especially when closed-form solutions are unavailable. To mitigate this, we consider the moving set case and propose a forward–backward–forward algorithm that requires only one projection per iteration. Under the assumption that the operator is strongly monotone, we establish that the continuous trajectories generated by the corresponding dynamical system converge exponentially to the unique solution of the QVI. We extend Tseng’s well-known forward–backward–forward algorithm for variational inequalities by adapting it to the more complex framework of QVIs, prove its convergence for strongly monotone QVIs, and derive its convergence rate. Finally, we implement the proposed algorithm numerically and compare it with related gradient projection algorithms for quasi-variational inequalities from the literature.

1. Introduction

Variational inequalities (VIs) have played an important role in mathematical optimization, providing a framework for modeling equilibrium conditions in diverse applications such as convex Nash games, traffic flow, and economic markets; see [1,2,3]. In the classical VI setting, the problem involves finding a point $x^* \in C$ within a fixed, non-empty, convex, and closed subset $C$ of a Euclidean space $E$ such that
$$\langle F(x^*), z - x^* \rangle \ge 0, \quad \forall z \in C,$$
where $F : E \to E$ is a given operator. This approach has proven effective across many fields, including game theory, mechanics, stochastic systems, and hydrodynamics. However, the assumption that the constraint set $C$ remains static can be overly restrictive when addressing the dynamic and evolving nature of many real-world problems.
To overcome this limitation, the concept of quasi-variational inequalities (QVIs) has been introduced [4,5,6]. QVIs extend the classical framework by allowing the constraint set to depend on the decision variable. More formally, the fixed set $C$ is replaced by a set-valued mapping $C : E \to 2^E$, where for every $x \in E$ the corresponding set $C(x)$ is non-empty, closed, and convex. This yields the following QVI: find $x^* \in C(x^*)$ such that
$$\langle F(x^*), z - x^* \rangle \ge 0, \quad \forall z \in C(x^*). \qquad (1)$$
This modification facilitates the modeling of complex interdependencies—such as those encountered in generalized Nash equilibrium scenarios with shared resources—thereby offering a more realistic representation across various fields, including transportation network equilibria, engineering, electricity market models, and dynamic traffic assignment [7,8,9,10,11,12].
Despite the superficial resemblance between VIs and QVIs, their underlying theories differ significantly. While the VI framework is supported by a comprehensive body of literature addressing solution existence, uniqueness, and algorithmic strategies, the theoretical development and computational methods for QVIs remain relatively underdeveloped. For recent applications, numerical methods, and other aspects of QVIs, see [13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33].
A critical challenge in designing efficient algorithms for both VIs and QVIs is reducing the number of projection operations performed during each iteration. Projections can be computationally expensive, particularly when closed-form solutions are unavailable. In this work, we address this issue by associating a dynamical system of forward–backward–forward type with the QVI formulation. We then perform a convergence analysis of the trajectories generated by this dynamical system under the assumption that F is strongly monotone, showing that these trajectories converge exponentially to the unique solution of the QVIs.
This approach builds on earlier studies—most notably by Banert and Bot (see [34])—who examined continuous trajectories for locating zeros of the sum of a maximally monotone operator and a Lipschitz continuous monotone operator. By discretizing time explicitly, this framework naturally leads to Tseng’s forward–backward–forward algorithm with relaxation parameters (see [35]). The forward–backward–forward method for solving VIs has been extensively discussed in the literature; see, for instance [32,36,37,38,39,40,41]. In our work, we adapt these ideas for the more complex setting of QVIs.
Among the various formulations of QVIs, the moving set scenario is by far the most extensively studied. In this case, the constraint set is expressed as
$$C(x) = c(x) + C_0, \qquad (2)$$
where $C_0$ is a fixed set and $c(x)$ is a function that captures the non-static (moving) aspect of the constraints. This formulation strikes a balance between modeling flexibility and analytical tractability. In [13,15,16,18,23,24,25,26,42], methods for solving QVIs in the moving set case have been considered.
Through this study, we aim to contribute to the burgeoning field of QVIs by providing both theoretical insights and practical algorithmic solutions that bridge the gap between traditional VIs and their more flexible quasi-variational counterparts.

Outline of the Paper

In Section 2, we recall some basic facts, concepts, and lemmas, which are needed in the convergence analysis. Section 3 introduces the dynamical system of the forward–backward–forward type and carries out a convergence analysis for the generated trajectories to the unique solution of the QVI. In Section 4, we adapt Tseng’s forward–backward–forward algorithm for QVIs, prove the convergence, and derive the rate of convergence. Section 5 discusses the numerical implementations of the proposed algorithm with comparisons with other related algorithms. In Section 6, we summarize our key findings and propose directions for future research.

2. Preliminaries

In this section, we begin by establishing the key notations, followed by outlining the definitions, assumptions, and crucial lemmas necessary for our convergence analysis.
Throughout the article, $\|\cdot\|$ denotes the Euclidean vector norm and $\langle\cdot,\cdot\rangle$ is the corresponding scalar product. $P_C(x)$ is the projection of $x$ onto the set $C$, i.e.,
$$P_C(x) = \operatorname*{argmin}_{y \in C} \|y - x\|.$$
Here, we will focus on an operator $F : E \to E$. If the operator $F$ is the gradient of a function $f$, then the VI can be understood as a necessary condition for optimality when minimizing $f$ over the set $C$, and the QVI becomes a necessary condition for minimization with coupled constraints. In this article, we consider an operator $F$ which is $L$-Lipschitz continuous, i.e.,
$$\|F(x) - F(y)\| \le L\|x - y\|, \quad \forall x, y \in E,$$
and $\mu$-strongly monotone, i.e.,
$$\langle F(x) - F(y), x - y \rangle \ge \mu\|x - y\|^2, \quad \forall x, y \in E.$$
If $\mu = 0$, then $F$ is a monotone operator. In this article, we always assume $\mu > 0$. From the definitions above, it is straightforward to deduce that $\mu \le L$: indeed, $\mu\|x-y\|^2 \le \langle F(x)-F(y), x-y\rangle \le \|F(x)-F(y)\|\,\|x-y\| \le L\|x-y\|^2$ for all $x, y \in E$.
By applying Theorem 4.3.5 from [43] to the operator F, we obtain the following corollary, which we will use in the study of convergence of our method.
Corollary 1.
Let $C \subseteq E$ be a convex set with $\operatorname{int} C \neq \emptyset$ and let $F : C \to E$ be an operator. The operator $F$ is strongly monotone with parameter $\mu > 0$ and Lipschitz continuous with Lipschitz constant $L > 0$ if and only if
$$\|F(x) - F(y)\|^2 + \mu L\|x - y\|^2 \le (L + \mu)\langle F(x) - F(y), x - y\rangle, \quad \forall x, y \in C. \qquad (3)$$
Assumption 1.
The operator $F : E \to E$ is $L$-Lipschitz continuous and $\mu$-strongly monotone.
In our analysis, the following lemma for projection mappings is used.
Lemma 1.
Let $C$ be a closed convex set in $E$. Then, for a given $z \in E$, $u \in C$ satisfies
$$u = P_C(z)$$
if and only if
$$\langle u - z, v - u \rangle \ge 0, \quad \forall v \in C.$$
The primary challenge in developing algorithms for solving QVIs lies in the dynamic nature of the constraint set, which changes from iteration to iteration. It is therefore crucial that $C(x)$ does not change drastically as $x$ varies, a requirement met by imposing a contractiveness condition on the projection operator. Moreover, one effective approach for establishing existence results for QVIs is to reformulate the problem as a fixed-point problem: $x^* \in C(x^*)$ is a solution of problem (1) if and only if
$$x^* = P_{C(x^*)}(x^* - \alpha F(x^*)), \quad \alpha > 0.$$
The existence theorem for solutions highlights a key distinction between VIs and QVIs. Specifically, if the operator $F$ is strongly monotone and Lipschitz continuous over a closed and convex set, the VI admits a unique solution. However, in addition to these assumptions, the following assumption is crucial for convergence in QVI problems and is present in all existing results, indicating its necessity for current approaches [44,45]: there exists $l > 0$ such that
$$\|P_{C(x)}(z) - P_{C(y)}(z)\| \le l\|x - y\|, \quad l < \frac{\mu}{L}, \quad \forall x, y, z \in E. \qquad (4)$$
As we mentioned, in many important applications, the convex-valued set $C(x)$ is of the form (2). In this case,
$$P_{C(x)}(u) = P_{c(x)+C_0}(u) = c(x) + P_{C_0}(u - c(x)), \quad \forall x, u \in E.$$
If c is l-Lipschitz continuous, then
$$\begin{aligned}
\|P_{C(x)}(u) - P_{C(y)}(u)\|^2 &= \|c(x) + P_{C_0}(u - c(x)) - c(y) - P_{C_0}(u - c(y))\|^2 \\
&= \|c(x) - c(y)\|^2 + \|P_{C_0}(u - c(x)) - P_{C_0}(u - c(y))\|^2 + 2\langle c(x) - c(y),\, P_{C_0}(u - c(x)) - P_{C_0}(u - c(y))\rangle.
\end{aligned}$$
Since
$$\langle c(x) - c(y),\, P_{C_0}(u - c(x)) - P_{C_0}(u - c(y))\rangle \le \langle P_{C_0}(u - c(y)) - (u - c(x)),\, P_{C_0}(u - c(x)) - P_{C_0}(u - c(y))\rangle \le -\|P_{C_0}(u - c(x)) - P_{C_0}(u - c(y))\|^2,$$
it follows that
$$\|P_{C(x)}(u) - P_{C(y)}(u)\|^2 \le \|c(x) - c(y)\|^2 \le l^2\|x - y\|^2.$$
This shows that in the moving set case, (4) holds if $l < \mu/L$, and our assumption is as follows.
Assumption 2.
$C : E \to 2^E$ is a set-valued mapping of the form $C(x) = c(x) + C_0$, $x \in E$, where $C_0 \subseteq E$ is a non-empty closed convex set with $\operatorname{int} C_0 \neq \emptyset$, and $c : E \to E$ is an $l$-Lipschitz continuous function such that $l < \mu/L$.
Let us note that, from Assumptions 1 and 2, it follows that there exists a unique solution $x^* \in C(x^*)$ of problem (1), since the conditions of the theorems in [44,45] are satisfied.
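To make the moving-set projection formula above concrete, the following minimal Python sketch (the ball-shaped base set $C_0$ and the map $c(x) = 0.1x$ in the example are illustrative assumptions, not requirements of Assumption 2) evaluates $P_{C(x)}(u) = c(x) + P_{C_0}(u - c(x))$ when the projection onto $C_0$ is available in closed form.
```python
import numpy as np

def project_ball(z, radius):
    # Euclidean projection onto the ball C0 = {x : ||x|| <= radius}.
    nz = np.linalg.norm(z)
    return z if nz <= radius else (radius / nz) * z

def project_moving_set(u, x, c, project_C0):
    # P_{C(x)}(u) for C(x) = c(x) + C0, via P_{c(x)+C0}(u) = c(x) + P_{C0}(u - c(x)).
    cx = c(x)
    return cx + project_C0(u - cx)

# Illustrative data: C0 the unit ball and c(x) = 0.1 x (so l = 0.1).
c = lambda x: 0.1 * x
P0 = lambda z: project_ball(z, 1.0)
x = np.array([2.0, -1.0, 0.5, 0.0])
u = np.array([3.0, 3.0, 3.0, 3.0])
print(project_moving_set(u, x, c, P0))
```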

3. A Dynamical System of Forward–Backward–Forward Type

In this section, we will analyze the trajectory generated by the following forward–backward–forward dynamical system for solving QVIs:
$$y(t) = P_{C(x(t))}\big(x(t) - \alpha F(x(t))\big), \qquad (5)$$
$$\dot{x}(t) + x(t) = y(t) + \alpha\big(F(x(t)) - F(y(t))\big), \quad x(0) = x_0, \qquad (6)$$
where $\alpha > 0$ is a parameter of the method and $x_0 \in E$. The formulation of this method has its roots in [38], where a continuous forward–backward–forward algorithm was considered for solving VIs.
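As a rough illustration of the continuous dynamics (5) and (6), the sketch below integrates the system with a small explicit Euler step. All problem data here are placeholders chosen only so that Assumptions 1 and 2 hold; they are assumptions of this sketch and not the test problem of Section 5.
```python
import numpy as np

# Illustrative data (assumptions of this sketch): F(x) = 2x + b is
# mu-strongly monotone and L-Lipschitz with mu = L = 2, C0 is the unit
# ball, and c(x) = 0.1 x, so Assumptions 1 and 2 hold with l = 0.1 < mu/L.
b = np.array([1.0, -2.0, 0.5, 0.0])
F = lambda x: 2.0 * x + b
c = lambda x: 0.1 * x
proj_C0 = lambda z: z if np.linalg.norm(z) <= 1.0 else z / np.linalg.norm(z)
proj_C = lambda u, x: c(x) + proj_C0(u - c(x))      # P_{C(x)}(u) for the moving set

alpha, h, T = 0.2, 1e-2, 200.0                      # method parameter, Euler step, horizon
x = np.array([5.0, 5.0, 5.0, 5.0])                  # x(0) = x0
for _ in range(int(T / h)):
    y = proj_C(x - alpha * F(x), x)                 # projection ("backward") step, Equation (5)
    x = x + h * (y + alpha * (F(x) - F(y)) - x)     # explicit Euler step for Equation (6)

print("approximate limit of the trajectory:", x)
print("fixed-point residual:", np.linalg.norm(x - proj_C(x - alpha * F(x), x)))
```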
In the following, we will investigate the asymptotic behavior of the trajectory generated by the dynamical system (5) and (6).
Theorem 1.
Let $x(t)$ be a trajectory generated by the forward–backward–forward method (5) and (6). If Assumptions 1 and 2 hold and the conditions
$$\frac{2l(L+\mu)}{2L\mu - l^2(L+\mu)^2} < \alpha < \frac{1}{\sqrt{2}\,L},$$
$$\frac{\alpha L}{1-2\alpha^2L^2}\left(\frac{2\mu}{L+\mu} + \sqrt{2(1-2\alpha^2L^2)}\right) < 1 - \frac{l}{2}$$
are satisfied, then $x(t)$ converges to the unique solution $x^* \in C(x^*)$ of (1) with the exponential rate
$$\|x(t) - x^*\| \le e^{A_1(\alpha)t}\|x_0 - x^*\|,$$
where $A_1(\alpha) = \frac{l}{2} + \frac{\alpha l^2(L+\mu)}{4} - \frac{\alpha L\mu}{2(L+\mu)}$. Furthermore,
$$\lim_{t\to\infty}\|\dot{x}(t)\| = 0.$$
Proof. 
System (5) and (6) is an autonomous system of the type $\dot{x} = g(x)$, where
$$g(x) = P_{C(x)}(x - \alpha F(x)) + \alpha F(x) - \alpha F\big(P_{C(x)}(x - \alpha F(x))\big) - x.$$
Since the operator $F$ is Lipschitz continuous and condition (4) is satisfied, the function $g(\cdot)$ is Lipschitz continuous. Indeed,
$$\begin{aligned}
\|g(x) - g(z)\| &\le \|x - z\| + \alpha\|F(x) - F(z)\| + \|P_{C(x)}(x - \alpha F(x)) - P_{C(z)}(z - \alpha F(z))\| + \alpha\|F(P_{C(x)}(x - \alpha F(x))) - F(P_{C(z)}(z - \alpha F(z)))\| \\
&\le \|x - z\| + \alpha L\|x - z\| + (1 + \alpha L)\big(\|P_{C(x)}(x - \alpha F(x)) - P_{C(x)}(z - \alpha F(z))\| + \|P_{C(x)}(z - \alpha F(z)) - P_{C(z)}(z - \alpha F(z))\|\big) \\
&\le \|x - z\| + \alpha L\|x - z\| + (1 + \alpha L)\big(\|x - z\| + \alpha L\|x - z\| + l\|x - z\|\big) \\
&= (2 + l + 3\alpha L + l\alpha L + \alpha^2 L^2)\|x - z\|.
\end{aligned}$$
Then, the classical Cauchy–Lipschitz–Picard theorem ensures the existence and uniqueness of the trajectory $x \in C^1([0, +\infty), E)$ associated with (5) and (6). Also, note that the conditions of the theorem guarantee the existence and uniqueness of the solution $x^* \in C(x^*)$ of QVI (1). Let us prove that the trajectory $x(t)$, $t \ge 0$, generated by (5) and (6) converges exponentially to $x^*$.
Equation (5) can be written as
$$y(t) = c(x(t)) + P_{C_0}\big(x(t) - \alpha F(x(t)) - c(x(t))\big).$$
From here, we can see that $y(t) - c(x(t)) \in C_0$. Using this and Lemma 1, we obtain
$$\langle y(t) - x(t) + \alpha F(x(t)),\, w - y(t) + c(x(t)) \rangle \ge 0, \quad \forall w \in C_0.$$
Substituting $w = x^* - c(x^*) \in C_0$ into the inequality above, we get
$$\langle y(t) - x(t) + \alpha F(x(t)),\, x^* - y(t) + c(x(t)) - c(x^*) \rangle \ge 0. \qquad (7)$$
From (6), we conclude that $y(t) = \dot{x}(t) + x(t) - \alpha(F(x(t)) - F(y(t)))$. Then, (7) becomes
$$\langle \dot{x}(t) + \alpha F(y(t)),\, x^* - y(t) + c(x(t)) - c(x^*) \rangle \ge 0. \qquad (8)$$
From (5), we see that $y(t) \in C(x(t))$, so $y(t) - c(x(t)) \in C_0$. Now, substituting $z = c(x^*) + y(t) - c(x(t)) \in c(x^*) + C_0 = C(x^*)$ into (1) and multiplying by $\alpha > 0$, we obtain
$$\alpha\langle F(x^*),\, y(t) - x^* + c(x^*) - c(x(t)) \rangle \ge 0. \qquad (9)$$
By combining inequalities (8) and (9), we have
$$\langle \dot{x}(t) + \alpha(F(y(t)) - F(x^*)),\, x^* - y(t) + c(x(t)) - c(x^*) \rangle \ge 0,$$
which is equivalent to
$$\langle \dot{x}(t), x^* - x(t)\rangle + \langle \dot{x}(t), x(t) - y(t)\rangle + \langle \dot{x}(t), c(x(t)) - c(x^*)\rangle + \alpha\langle F(y(t)) - F(x^*), x^* - y(t)\rangle + \alpha\langle F(y(t)) - F(x^*), c(x(t)) - c(x^*)\rangle \ge 0.$$
Let us note that the following obvious equalities hold:
$$\langle \dot{x}(t), \dot{x}(t)\rangle = \|\dot{x}(t)\|^2, \qquad \langle \dot{x}(t), x(t) - x^*\rangle = \frac{1}{2}\frac{d}{dt}\|x(t) - x^*\|^2.$$
Now, we have
$$\frac{1}{2}\frac{d}{dt}\|x(t) - x^*\|^2 \le \langle \dot{x}(t), c(x(t)) - c(x^*)\rangle - \|\dot{x}(t)\|^2 + \alpha\langle \dot{x}(t), F(x(t)) - F(y(t))\rangle + \alpha\langle F(y(t)) - F(x^*), x^* - y(t)\rangle + \alpha\langle F(y(t)) - F(x^*), c(x(t)) - c(x^*)\rangle. \qquad (10)$$
In what follows, we shall estimate each term of the previously stated sum.
Using the Peter-Paul inequality, the l-Lipschitz continuity of c, and the L-Lipschitz continuity of F, we obtain
$$\begin{aligned}
\langle \dot{x}(t), c(x(t)) - c(x^*)\rangle &\le \frac{l}{2}\|\dot{x}(t)\|^2 + \frac{l}{2}\|x(t) - x^*\|^2, \\
\alpha\langle \dot{x}(t), F(x(t)) - F(y(t))\rangle &\le \frac{\alpha\gamma}{2}\|\dot{x}(t)\|^2 + \frac{\alpha L^2}{2\gamma}\|x(t) - y(t)\|^2 \quad (\gamma > 0), \\
\alpha\langle F(y(t)) - F(x^*), c(x(t)) - c(x^*)\rangle &\le \frac{\alpha}{L+\mu}\|F(y(t)) - F(x^*)\|^2 + \frac{\alpha l^2(L+\mu)}{4}\|x(t) - x^*\|^2.
\end{aligned}$$
From Assumptions 1 and 2, it follows that the conditions from Corollary 1 are satisfied, so inequality (3) holds. Hence,
$$\alpha\langle F(y(t)) - F(x^*), x^* - y(t)\rangle \le -\frac{\alpha}{L+\mu}\|F(y(t)) - F(x^*)\|^2 - \frac{\alpha L\mu}{L+\mu}\|y(t) - x^*\|^2.$$
Inserting the obtained inequalities in (10), we get
$$\frac{1}{2}\frac{d}{dt}\|x(t) - x^*\|^2 + \left(1 - \frac{l}{2} - \frac{\alpha\gamma}{2}\right)\|\dot{x}(t)\|^2 \le \left(\frac{l}{2} + \frac{\alpha l^2(L+\mu)}{4}\right)\|x(t) - x^*\|^2 + \frac{\alpha L^2}{2\gamma}\|x(t) - y(t)\|^2 - \frac{\alpha L\mu}{L+\mu}\|y(t) - x^*\|^2.$$
Let us estimate $\|y(t) - x^*\|$ in terms of $\|x(t) - x^*\|$ and $\|x(t) - y(t)\|$:
$$\|y(t) - x^*\|^2 = \|y(t) - x(t)\|^2 + \|x(t) - x^*\|^2 + 2\langle y(t) - x(t), x(t) - x^*\rangle \ge -\|y(t) - x(t)\|^2 + \frac{1}{2}\|x(t) - x^*\|^2.$$
Now, we have
$$\frac{1}{2}\frac{d}{dt}\|x(t) - x^*\|^2 + \left(1 - \frac{l}{2} - \frac{\alpha\gamma}{2}\right)\|\dot{x}(t)\|^2 \le \left(\frac{\alpha L^2}{2\gamma} + \frac{\alpha L\mu}{L+\mu}\right)\|x(t) - y(t)\|^2 + \left(\frac{l}{2} + \frac{\alpha l^2(L+\mu)}{4} - \frac{\alpha L\mu}{2(L+\mu)}\right)\|x(t) - x^*\|^2.$$
Similarly, we derive an estimate for $\|x(t) - y(t)\|$ in terms of $\|\dot{x}(t)\|$:
$$\|x(t) - y(t)\|^2 = \|\dot{x}(t) - \alpha(F(x(t)) - F(y(t)))\|^2 \le \|\dot{x}(t)\|^2 + \alpha^2L^2\|x(t) - y(t)\|^2 - 2\langle \dot{x}(t), \alpha(F(x(t)) - F(y(t)))\rangle \le 2\|\dot{x}(t)\|^2 + 2\alpha^2L^2\|x(t) - y(t)\|^2,$$
hence,
$$\|x(t) - y(t)\|^2 \le \frac{2}{1 - 2\alpha^2L^2}\|\dot{x}(t)\|^2.$$
Note that $1 - 2\alpha^2L^2 > 0$ holds due to the conditions of the theorem. Finally, from all of the above, we obtain the following estimate:
$$\frac{1}{2}\frac{d}{dt}\|x(t) - x^*\|^2 + B_1(\alpha)\|\dot{x}(t)\|^2 \le A_1(\alpha)\|x(t) - x^*\|^2,$$
where
$$B_1(\alpha) = 1 - \frac{l}{2} - \frac{\alpha\gamma}{2} - \frac{2}{1 - 2\alpha^2L^2}\left(\frac{\alpha L^2}{2\gamma} + \frac{\alpha L\mu}{L+\mu}\right),$$
$$A_1(\alpha) = \frac{l}{2} + \frac{\alpha l^2(L+\mu)}{4} - \frac{\alpha L\mu}{2(L+\mu)}.$$
Inequality (11) holds for all $\gamma > 0$. We choose $\gamma$ so that the expression $B_1(\alpha)$ is maximized, i.e.,
$$\frac{\alpha\gamma}{2} + \frac{\alpha L^2}{\gamma(1 - 2\alpha^2L^2)} \to \min.$$
This is achieved for $\gamma = \frac{\sqrt{2}\,L}{\sqrt{1 - 2\alpha^2L^2}}$. Now, $B_1(\alpha)$ becomes
$$B_1(\alpha) = 1 - \frac{l}{2} - \frac{\alpha L}{1 - 2\alpha^2L^2}\left(\frac{2\mu}{L+\mu} + \sqrt{2(1 - 2\alpha^2L^2)}\right).$$
It follows from the conditions of the theorem that there exists a choice of α such that B 1 ( α ) > 0 and A 1 ( α ) < 0 . More on this is presented after the end of the proof.
Since $B_1(\alpha) > 0$, it follows that
$$\frac{1}{2}\frac{d}{dt}\|x(t) - x^*\|^2 \le A_1(\alpha)\|x(t) - x^*\|^2,$$
from which we obtain
$$\|x(t) - x^*\|^2 \le A_0 e^{2A_1(\alpha)t}.$$
Substituting the initial condition $x(0) = x_0$, we obtain $A_0 = \|x_0 - x^*\|^2$, and finally we get that the trajectory $x(t)$ converges to the unique solution $x^* \in C(x^*)$:
$$\|x(t) - x^*\| \le e^{A_1(\alpha)t}\|x_0 - x^*\| \to 0 \quad \text{as } t \to \infty.$$
In addition, integrating inequality (12) on $[0, T]$, we obtain
$$\frac{1}{2}\|x(T) - x^*\|^2 - \frac{1}{2}\|x_0 - x^*\|^2 + B_1(\alpha)\int_0^T\|\dot{x}(t)\|^2\,dt \le A_1(\alpha)\int_0^T\|x(t) - x^*\|^2\,dt \le 0.$$
From here, letting $T \to +\infty$, we get
$$B_1(\alpha)\int_0^{+\infty}\|\dot{x}(t)\|^2\,dt \le \frac{1}{2}\|x_0 - x^*\|^2 < \infty.$$
Having in mind that $B_1(\alpha) > 0$, it follows that
$$\lim_{t\to\infty}\|\dot{x}(t)\|^2 = 0, \quad \text{i.e.,} \quad \lim_{t\to\infty}\|\dot{x}(t)\| = 0.$$
This completes the proof of the theorem.    □
Remark 1.
The conditions stated in the theorem appear to be quite complex, raising the question of whether there are any real situations in which these conditions are met. We will graphically illustrate some of the cases where the conditions of the theorem are satisfied (Figure 1).
We have used the following parameters:
(a)
l = 0.1 ,   L = 0.4 , and μ = 0.2 for Graph 1;
(b)
l = 0.01 ,   L = 1 , and μ = 0.5 for Graph 2;
(c)
l = 0.1 ,   L = 1 , and μ = 0.5 for Graph 3.
The part of the graph of $B_1(\alpha)$ that is positive is marked in green, while the part of the graph of $A_1(\alpha)$ that is negative is marked in red. Clearly, we can find an interval for $\alpha$ where both conditions are satisfied.
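For readers who prefer a numerical check to a graph, the following short sketch evaluates $A_1(\alpha)$ and $B_1(\alpha)$, as written above, on a grid of step sizes and reports the interval where $B_1(\alpha) > 0$ and $A_1(\alpha) < 0$ hold simultaneously (shown here for the parameters of Graph 1).
```python
import numpy as np

def A1(alpha, l, L, mu):
    return l / 2 + alpha * l**2 * (L + mu) / 4 - alpha * L * mu / (2 * (L + mu))

def B1(alpha, l, L, mu):
    d = 1 - 2 * alpha**2 * L**2          # requires alpha < 1/(sqrt(2) L)
    return 1 - l / 2 - (alpha * L / d) * (2 * mu / (L + mu) + np.sqrt(2 * d))

l, L, mu = 0.1, 0.4, 0.2                 # parameters of Graph 1
alphas = np.linspace(1e-3, 1 / (np.sqrt(2) * L) - 1e-3, 2000)
ok = [a for a in alphas if B1(a, l, L, mu) > 0 and A1(a, l, L, mu) < 0]
print("admissible step sizes: roughly [%.3f, %.3f]" % (min(ok), max(ok)))
```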

4. A Forward–Backward–Forward Algorithm

In this section, we analyze the convergence of Tseng’s forward–backward–forward algorithm, derived by the time discretization of the dynamical system (5) and (6), in the context of solving strongly monotone quasi-variational inequalities. The explicit time discretization of the dynamical system (5) and (6) with initial point $x_0 \in E$ yields, for every $n \ge 0$, the following iteration:
$$x_{n+1} = P_{C(x_n)}(x_n - \alpha F(x_n)) + \alpha F(x_n) - \alpha F\big(P_{C(x_n)}(x_n - \alpha F(x_n))\big).$$
Denoting $y_n = P_{C(x_n)}(x_n - \alpha F(x_n))$, we can rewrite this scheme as
$$y_n = P_{C(x_n)}(x_n - \alpha F(x_n)), \qquad (13)$$
$$x_{n+1} = y_n + \alpha(F(x_n) - F(y_n)). \qquad (14)$$
In the case $C(x) \equiv C$ for all $x \in E$, this iterative scheme reduces to the classical forward–backward–forward algorithm for solving variational inequalities, as introduced in [35].
Next, we present a forward–backward–forward algorithm (Algorithm 1) for solving the QVI in the moving set case.
Algorithm 1 Forward–Backward–Forward Algorithm
Initialization: Choose the starting point $x_0 \in E$ and the step size $\alpha > 0$. Set $n = 0$.
Step 1: Compute $y_n = P_{C(x_n)}(x_n - \alpha F(x_n))$.
If $y_n = x_n$ or $F(y_n) = 0$, then STOP: $y_n$ is a solution.
Step 2: Set $x_{n+1} = y_n + \alpha(F(x_n) - F(y_n))$, update $n$ to $n+1$, and go to Step 1.
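A minimal Python sketch of Algorithm 1 for the moving set case is given below. The practical stopping rule $\|x_{n+1} - x_n\| \le \mathrm{tol}$ replaces the exact criteria of Step 1, and the operator $F$, the base set $C_0$, and the map $c$ used in the usage example are illustrative assumptions rather than the test problem of Section 5.
```python
import numpy as np

def fbf_qvi(F, proj_C0, c, x0, alpha, tol=1e-6, max_iter=10_000):
    """Forward-backward-forward method (Algorithm 1) for the moving set case
    C(x) = c(x) + C0, where proj_C0 is the Euclidean projection onto C0."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        y = c(x) + proj_C0(x - alpha * F(x) - c(x))   # Step 1: y_n = P_{C(x_n)}(x_n - alpha F(x_n))
        x_new = y + alpha * (F(x) - F(y))             # Step 2: forward correction
        if np.linalg.norm(x_new - x) <= tol:          # practical stopping rule
            return x_new, n + 1
        x = x_new
    return x, max_iter

# Illustrative usage with assumed data (not the test problem of Section 5):
b = np.array([1.0, -2.0, 0.5, 0.0])
F = lambda x: 2.0 * x + b                             # strongly monotone, Lipschitz
proj_ball = lambda z: z if np.linalg.norm(z) <= 1.0 else z / np.linalg.norm(z)
sol, iters = fbf_qvi(F, proj_ball, lambda x: 0.1 * x, np.zeros(4), alpha=0.2)
print(iters, sol)
```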
For the convergence analysis, we assume that Algorithm 1 does not terminate after a finite number of iterations. In other words, we assume that for every $n \ge 0$, it holds that $x_n \neq y_n$ and $F(y_n) \neq 0$.
Since $C(x) = c(x) + C_0$ for all $x \in E$, equality (13) can be written in the form
$$y_n - c(x_n) = P_{C_0}\big(x_n - c(x_n) - \alpha F(x_n)\big).$$
According to Lemma 1, this can be rewritten as
$$\langle y_n - x_n + \alpha F(x_n),\, w - y_n + c(x_n)\rangle \ge 0, \quad \forall w \in C_0.$$
Let $w = x^* - c(x^*) \in C_0$, so
$$\langle y_n - x_n + \alpha F(x_n),\, x^* - y_n + c(x_n) - c(x^*)\rangle \ge 0. \qquad (15)$$
Since $x^* \in C(x^*)$ is the unique solution of the quasi-variational inequality (1), for every $z \in C(x^*)$ we have
$$\langle F(x^*), z - x^*\rangle \ge 0.$$
Substituting $z = y_n - c(x_n) + c(x^*) \in C(x^*)$ into this inequality yields
$$\langle F(x^*),\, y_n - x^* - c(x_n) + c(x^*)\rangle \ge 0. \qquad (16)$$
Multiplying both sides of (16) by α > 0 and adding the resulting inequality to (15) yields
$$\langle y_n - x_n + \alpha(F(x_n) - F(x^*)),\, x^* - y_n + c(x_n) - c(x^*)\rangle \ge 0.$$
From (14), we have $y_n = x_{n+1} - \alpha(F(x_n) - F(y_n))$, so
$$\langle x_{n+1} - x_n + \alpha(F(y_n) - F(x^*)),\, x^* - y_n + c(x_n) - c(x^*)\rangle \ge 0.$$
By expanding the terms and applying the properties of the scalar product, we obtain
$$2\langle x_{n+1} - x_n, x^* - x_n\rangle + 2\langle x_{n+1} - x_n, x_n - y_n\rangle + 2\langle x_{n+1} - x_n, c(x_n) - c(x^*)\rangle + 2\alpha\langle F(y_n) - F(x^*), x^* - y_n\rangle + 2\alpha\langle F(y_n) - F(x^*), c(x_n) - c(x^*)\rangle \ge 0.$$
Using the identity for scalar products
$$2\langle u - v, u - w\rangle = \|u - v\|^2 + \|u - w\|^2 - \|v - w\|^2, \quad \forall u, v, w \in E,$$
we get
$$2\langle x_{n+1} - x_n, x^* - x_n\rangle = \|x_{n+1} - x_n\|^2 + \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2,$$
$$2\langle x_{n+1} - x_n, x_n - y_n\rangle = -\|x_{n+1} - x_n\|^2 - \|x_n - y_n\|^2 + \|x_{n+1} - y_n\|^2.$$
Since F is L-Lipschitz continuous, we have
$$\|x_{n+1} - y_n\|^2 = \|x_{n+1} - (x_{n+1} - \alpha(F(x_n) - F(y_n)))\|^2 = \alpha^2\|F(x_n) - F(y_n)\|^2 \le \alpha^2L^2\|x_n - y_n\|^2.$$
By deriving estimates analogous to those in the proof of Theorem 1, we have
$$\begin{aligned}
\langle x_{n+1} - x_n, c(x_n) - c(x^*)\rangle &\le \frac{l}{2}\|x_{n+1} - x_n\|^2 + \frac{l}{2}\|x_n - x^*\|^2, \\
\alpha\langle F(y_n) - F(x^*), c(x_n) - c(x^*)\rangle &\le \frac{\alpha}{L+\mu}\|F(y_n) - F(x^*)\|^2 + \frac{\alpha l^2(L+\mu)}{4}\|x_n - x^*\|^2, \\
\alpha\langle F(y_n) - F(x^*), x^* - y_n\rangle &\le -\frac{\alpha}{L+\mu}\|F(y_n) - F(x^*)\|^2 - \frac{\alpha L\mu}{L+\mu}\|y_n - x^*\|^2.
\end{aligned}$$
Substituting the obtained estimates yields
$$\|x_{n+1} - x^*\|^2 \le \left(1 + l + \frac{\alpha l^2(L+\mu)}{2}\right)\|x_n - x^*\|^2 + l\|x_{n+1} - x_n\|^2 + (\alpha^2L^2 - 1)\|x_n - y_n\|^2 - \frac{2\alpha L\mu}{L+\mu}\|y_n - x^*\|^2.$$
Furthermore, reasoning analogously to the continuous method, we obtain
$$\|y_n - x^*\|^2 \ge -\|y_n - x_n\|^2 + \frac{1}{2}\|x_n - x^*\|^2,$$
and
$$\|x_n - y_n\|^2 = \|x_n - x_{n+1} + \alpha(F(x_n) - F(y_n))\|^2 \le 2\|x_n - x_{n+1}\|^2 + 2\alpha^2L^2\|x_n - y_n\|^2,$$
from which it follows that
$$\|x_n - y_n\|^2 \le \frac{2}{1 - 2\alpha^2L^2}\|x_{n+1} - x_n\|^2,$$
assuming that $\alpha < (\sqrt{2}\,L)^{-1}$. Finally, we conclude that
$$\|x_{n+1} - x^*\|^2 + B_2(\alpha)\|x_{n+1} - x_n\|^2 \le A_2(\alpha)\|x_n - x^*\|^2,$$
where
$$B_2(\alpha) = \frac{2}{1 - 2\alpha^2L^2}\left(1 - \alpha^2L^2 - \frac{2\alpha L\mu}{L+\mu}\right) - l$$
and
$$A_2(\alpha) = 1 + l + \frac{\alpha l^2(L+\mu)}{2} - \frac{\alpha L\mu}{L+\mu}.$$
For the choice of $\alpha$ such that
$$\frac{2l(L+\mu)}{2L\mu - l^2(L+\mu)^2} < \alpha < \frac{\sqrt{\frac{L^2\mu^2}{(L+\mu)^2} + L^2(1-l)\left(1 - \frac{l}{2}\right)} - \frac{L\mu}{L+\mu}}{(1-l)L^2},$$
we obtain that $B_2(\alpha) > 0$ and $0 < A_2(\alpha) < 1$.
Thus, the following theorem is proven.
Theorem 2.
Let $(x_n)_{n\ge 0}$ be the sequence of iterates generated by Algorithm 1 with a step size $\alpha > 0$ satisfying
$$\frac{2l(L+\mu)}{2L\mu - l^2(L+\mu)^2} < \alpha < \min\left\{\frac{\sqrt{\frac{L^2\mu^2}{(L+\mu)^2} + L^2(1-l)\left(1 - \frac{l}{2}\right)} - \frac{L\mu}{L+\mu}}{(1-l)L^2},\ \frac{1}{\sqrt{2}\,L}\right\}.$$
Suppose Assumptions 1 and 2 hold. Then, for any $n \ge 0$, we have
$$\|x_{n+1} - x^*\|^2 \le A_2^{n+1}(\alpha)\|x_0 - x^*\|^2,$$
and $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$.
Remark 2.
By discretizing the dynamical system (5) and (6), we can form a forward–backward–forward iterative algorithm with relaxation parameters as follows:
$$\frac{x_{n+1} - x_n}{h_n} + x_n = P_{C(x_n)}(x_n - \alpha F(x_n)) + \alpha F(x_n) - \alpha F\big(P_{C(x_n)}(x_n - \alpha F(x_n))\big).$$
Denoting $y_n = P_{C(x_n)}(x_n - \alpha F(x_n))$, we can rewrite this scheme as
$$y_n = P_{C(x_n)}(x_n - \alpha F(x_n)), \qquad x_{n+1} = h_n\big(y_n + \alpha(F(x_n) - F(y_n))\big) + (1 - h_n)x_n,$$
where $(h_n)_{n\ge 0}$ are relaxation parameters. Here, convergence has been proven and a convergence estimate has been derived for $h_n \equiv 1$. Establishing convergence for an arbitrary sequence $(h_n)$ presents a promising direction for further research.
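For illustration, the relaxed scheme of this remark changes only the update line of the iteration sketched after Algorithm 1; the choice of the relaxation parameter below is an assumption left to the user, since the analysis above covers only $h_n \equiv 1$.
```python
def fbf_qvi_relaxed_step(F, proj_C0, c, x, alpha, h):
    # One relaxed forward-backward-forward step: a convex combination of the
    # Tseng update and the current iterate, with relaxation parameter h = h_n.
    y = c(x) + proj_C0(x - alpha * F(x) - c(x))
    return h * (y + alpha * (F(x) - F(y))) + (1.0 - h) * x
```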

5. Numerical Experiments

In this section, we present a numerical experiment carried out in order to compare Algorithm 1 with other algorithms in the literature designed for solving quasi-variational inequalities. We implemented the numerical codes in Matlab and performed all computations on an Acer laptop running Windows, with an Intel(R) Core(TM) i5-6300U processor (Acer Inc., New Taipei City, Taiwan) at 2.40 GHz.
Consider the QVI in which
$$C_0 = \{x \in \mathbb{R}^4 : \|x\| \le d\},$$
$$c(x) = lx,$$
and
$$F : \mathbb{R}^4 \to \mathbb{R}^4, \quad F(x) = \left(x_1^2 + x_2^2 - 1,\; x_3^2 + x_4^2 - 1,\; x_3^2 + x_2^2 + 1,\; x_1^2 + x_4^2 + 1\right),$$
where $\|\cdot\|$ denotes the Euclidean norm on $\mathbb{R}^4$, $l = 0.1$, and $d = 1$. We computed the solution $x^*$ of the quasi-variational inequality by running Algorithm 1.
We used $x_0 = (0, 0, 0, 0)^T$ as the starting point and $\|x_{k+1} - x_k\| \le 10^{-6}$ as the stopping criterion. We compared the performance of the forward–backward–forward method (Alg1), the classical gradient projection method (Alg2), and the three-step method (Alg3) from [22], using the step size $\alpha = 0.4$ for all three methods. To compare the algorithms, we used so-called performance profiles, based on the number of iterations and on the execution time of each algorithm.
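The performance profiles mentioned above can be computed with a short routine of the following kind (a generic Dolan–Moré-style sketch; the cost matrix in the usage example contains made-up illustrative numbers, not the measured results reported in Figure 2 and Figure 3).
```python
import numpy as np

def performance_profile(costs, taus):
    """costs[p, s] = cost (iterations or time) of solver s on problem p;
    returns rho[s, k] = fraction of problems solved within ratio taus[k]
    of the best solver for that problem."""
    costs = np.asarray(costs, dtype=float)
    ratios = costs / costs.min(axis=1, keepdims=True)          # r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= t) for t in taus]
                     for s in range(costs.shape[1])])

# Made-up illustrative iteration counts for Alg1, Alg2, Alg3 on three instances.
iters = [[12, 30, 25], [15, 41, 33], [10, 28, 22]]
print(performance_profile(iters, taus=[1.0, 1.5, 2.0, 3.0]))
```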
Figure 2 presents a detailed performance profile analysis that concentrates on the number of iterations each algorithm requires. The analysis shows that the proposed algorithm distinctly outperforms the others by requiring the fewest iterations, which indicates that it converges more rapidly when solving QVI problems.
Similarly, we executed the program 10,000 times to record the average execution times for the three algorithms, with the results illustrated in Figure 3. Notably, Alg3 performs three projections in every iteration, which naturally leads to a higher average execution time. These findings collectively confirm that the proposed algorithm is the most efficient choice in terms of both iteration count and execution time for the given function and parameters.
We observe similar outcomes when using alternative values for parameter α that satisfy the conditions of Theorem 2, as well as when employing a different starting point.

6. Conclusions

Our research focused on a quasi-variational inequality defined over a moving set and governed by an operator that is both strongly monotone and Lipschitz continuous. We developed an associated forward–backward–forward dynamical system and conducted a detailed analysis establishing that its trajectories converge exponentially to the unique solution of the quasi-variational inequality. By explicitly discretizing time in this dynamical system, we derived a forward–backward–forward algorithm. We showed that the sequence of iterates converges to the solution of the quasi-variational inequality and that, under strong monotonicity, this convergence is linear. Numerical comparisons showed that the proposed forward–backward–forward algorithm is efficient and outperforms some related gradient projection-type algorithms from the literature for quasi-variational inequalities. The findings presented in this paper may serve as a catalyst for further research in this field.

Author Contributions

Conceptualization, N.M. and M.J.; methodology, N.M.; software, A.Z.; validation, N.M., A.Z. and M.J.; writing—original draft preparation, A.Z.; writing—review and editing, N.M.; supervision, M.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been partially supported by the Montenegrin Academy of Sciences and Arts, project title: Methods of optimization, solving variational and quasi-variational inequalities. This research has also been partially supported by the project Mathematical analysis, optimization and machine learning, funded by the Ministry of Education, Science and Innovation of Montenegro.

Data Availability Statement

The authors will supply the relevant data in response to reasonable requests.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
VI: Variational inequality
QVI: Quasi-variational inequality

References

1. Cavazzuti, E.; Pappalardo, M.; Passacantando, M. Nash Equilibria, Variational Inequalities, and Dynamical Systems. J. Optim. Theory Appl. 2002, 114, 491–506.
2. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, 2003.
3. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Classics in Applied Mathematics (Book 31); Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1987.
4. Baiocchi, A.; Capelo, A. Variational and Quasi-Variational Inequalities; Wiley: New York, NY, USA, 1984.
5. Bensoussan, A.; Goursat, M.; Lions, J.L. Contrôle impulsionnel et inéquations quasi-variationnelles stationnaires. Comptes Rendus L’Académie Sci. Paris Ser. A 1973, 276, 1279–1284.
6. Mosco, U. Implicit variational problems and quasi variational inequalities. In Nonlinear Operators and the Calculus of Variations: Summer School Held in Bruxelles, 8–9 September 1975; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1976; Volume 543, pp. 83–156.
7. Bliemer, M.; Bovy, P. Quasi-variational inequality formulation of the multiclass dynamic traffic assignment problem. Transp. Res. Part B Methodol. 2003, 37, 501–519.
8. Harker, P.T. Generalized Nash games and quasivariational inequalities. Eur. J. Oper. Res. 1991, 54, 81–94.
9. Kocvara, M.; Outrata, J.V. On a class of quasi-variational inequalities. Optim. Methods Softw. 1995, 5, 275–295.
10. Pang, J.S.; Fukushima, M. Quasi-variational inequalities, generalized Nash equilibria and Multi-leader-follower games. Comput. Manag. Sci. 2005, 1, 21–56.
11. Wei, J.Y.; Smeers, Y. Spatial oligopolistic electricity models with Cournot generators and regulated transmission prices. Oper. Res. 1999, 47, 102–112.
12. Yao, J.C. The generalized quasi-variational inequality problem with applications. J. Math. Anal. Appl. 1991, 158, 139–160.
13. Alizadeh, Z.; Jalilzadeh, A. Convergence Analysis of Non-Strongly-Monotone Stochastic Quasi-Variational Inequalities. arXiv 2024, arXiv:2401.03076.
14. Antipin, A.S.; Jaćimović, M.; Mijajlović, N. Extragradient method for solving quasivariational inequalities. Optimization 2017, 67, 103–112.
15. Antipin, A.S.; Mijajlović, N.; Jaćimović, M. A Second-Order Continuous Method for Solving Quasi-Variational Inequalities. Comput. Math. Math. Phys. 2011, 51, 1856–1863.
16. Antipin, A.S.; Mijajlović, N.; Jaćimović, M. A Second-Order Iterative Method for Solving Quasi-Variational Inequalities. Comput. Math. Math. Phys. 2013, 53, 258–264.
17. Barbagallo, A. Existence results for a class of quasi-variational inequalities and applications to a noncooperative model. Optimization 2024, 1–22.
18. Dey, S.; Reich, S. A dynamical system for solving inverse quasi-variational inequalities. Optimization 2024, 73, 1681–1701.
19. Facchinei, F.; Kanzow, C.; Karl, S.; Sagratella, S. The semismooth Newton method for the solution of quasi-variational inequalities. Comput. Optim. Appl. 2015, 62, 85–109.
20. Facchinei, F.; Kanzow, C.; Sagratella, S. Solving quasi-variational inequalities via their KKT conditions. Math. Program. 2014, 144, 369–412.
21. Lara, F.; Marcavillaca, R.T. Bregman proximal point type algorithms for quasiconvex minimization. Optimization 2022, 73, 497–515.
22. Mijajlović, N.; Jaćimović, M. Three-step approximation methods from continuous and discrete perspective for quasi-variational inequalities. Comput. Math. Math. Phys. 2024, 64, 605–613.
23. Mijajlović, N.; Jaćimović, M. Strong convergence theorems by an extragradient-like approximation methods for quasi-variational inequalities. Optim. Lett. 2023, 17, 901–916.
24. Mijajlović, N.; Jaćimović, M.; Noor, M.A. Gradient-type projection methods for quasi-variational inequalities. Optim. Lett. 2019, 13, 1885–1896.
25. Mijajlović, N.; Jaćimović, M. Some Continuous Methods for Solving Quasi-Variational Inequalities. Comput. Math. Math. Phys. 2018, 58, 190–195.
26. Mijajlović, N.; Jaćimović, M. Proximal methods for solving quasi-variational inequalities. Comput. Math. Math. Phys. 2015, 55, 1981–1985.
27. Nguyen, L.V. An existence result for strongly pseudomonotone quasi-variational inequalities. Ric. Mat. 2023, 72, 803–813.
28. Nguyen, L.V.; Qin, X. Some Results on Strongly Pseudomonotone Quasi-Variational Inequalities. Set-Valued Var. Anal. 2020, 28, 239–257.
29. Noor, M.A.; Noor, K.I. Some new iterative schemes for solving general quasi variational inequalities. Le Matematiche 2024, 79, 327–370.
30. Shehu, Y. Linear Convergence for Quasi-Variational Inequalities with Inertial Projection-Type Method. Numer. Funct. Anal. Optim. 2021, 42, 1865–1879.
31. Shehu, Y.; Gibali, A.; Sagratella, S. Inertial Projection-Type Methods for Solving Quasi-Variational Inequalities in Real Hilbert Spaces. J. Optim. Theory Appl. 2020, 184, 877–894.
32. Yao, Y.; Adamu, A.; Shehu, Y. Strongly convergent inertial forward-backward-forward algorithm without on-line rule for variational inequalities. Acta Math. Sci. 2024, 44, 551–566.
33. Yao, Y.; Jolaoso, L.O.; Shehu, Y. C-FISTA type projection algorithm for quasi-variational inequalities. Numer. Algor. 2025, 98, 1781–1798.
34. Banert, S.; Bot, R.I. A forward-backward-forward differential equation and its asymptotic properties. J. Convex Anal. 2018, 25, 371–388.
35. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
36. Wang, K.; Wang, Y.; Shehu, Y.; Jiang, B. Double inertial Forward–Backward–Forward method with adaptive step-size for variational inequalities with quasi-monotonicity. Commun. Nonlinear Sci. Numer. Simul. 2024, 132, 107924.
37. Yin, T.C.; Hussain, N. A Forward-Backward-Forward Algorithm for Solving Quasimonotone Variational Inequalities. J. Funct. Spaces 2022, 2022, 7117244.
38. Bot, R.I.; Csetnek, E.R.; Vuong, P.T. The Forward-Backward-Forward Method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces. Eur. J. Oper. Res. 2020, 287, 49–60.
39. Bot, R.I.; Mertikopoulos, P.; Staudigl, M.; Vuong, P.T. Minibatch forward-backward-forward methods for solving stochastic variational inequalities. Stoch. Syst. 2021, 11, 112–139.
40. Izuchukwu, C.; Shehu, Y.; Yao, J.C. New inertial forward-backward type for variational inequalities with Quasi-monotonicity. J. Glob. Optim. 2022, 84, 441–464.
41. Attouch, H.; Czarnecki, M.-O.; Peypouquet, J. Coupling Forward-Backward with Penalty Schemes and Parallel Splitting for Constrained Variational Inequalities. SIAM J. Optim. 2011, 21, 1251–1274.
42. Ryazantseva, I.P. First-order methods for certain quasi-variational inequalities in a Hilbert space. Comput. Math. Math. Phys. 2007, 47, 183–190.
43. Vasil’ev, F.P. Optimization Methods; Faktorial Press: Moscow, Russia, 2002.
44. Nesterov, Y.; Scrimali, L. Solving strongly monotone variational and quasi-variational inequalities. Discret. Contin. Dyn. Syst. 2011, 31, 1383–1396.
45. Noor, M.A.; Oettli, W. On general nonlinear complementarity problems and quasi-equilibria. Le Matematiche 1994, 49, 313–331.
Figure 1. The positive part of the graph of $B_1(\alpha)$ is shown in green and the negative part of the graph of $A_1(\alpha)$ in red, for: (a) $l = 0.1$, $L = 0.4$, $\mu = 0.2$; (b) $l = 0.01$, $L = 1$, $\mu = 0.5$; (c) $l = 0.1$, $L = 1$, $\mu = 0.5$.
Figure 2. Comparison of the convergence behavior of the forward–backward–forward method (Alg1), the classical gradient projection method (Alg2), and the three-step method (Alg3).
Figure 3. Performance profile based on the average time of execution.