Article

A Regularized Generalized Popov’s Method to Solve the Hierarchical Variational Inequality Problem with Generalized Lipschitzian Mappings

College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua 321004, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2022, 14(2), 187; https://doi.org/10.3390/sym14020187
Submission received: 15 November 2021 / Revised: 20 December 2021 / Accepted: 13 January 2022 / Published: 18 January 2022

Abstract

In this article, we introduce a new inertial multi-step regularized generalized Popov’s extra-gradient method to solve the hierarchical variational inequality problem (HVIP). We extend the previous Lipschitzian and strongly monotone mapping to a hemicontinuous, generalized Lipschitzian and strongly monotone mapping. We also obtain a strong convergence theorem about the new Popov’s algorithm. Furthermore, we utilize some numerical experiments to highlight the feasibility and effectiveness of our method.

1. Introduction

Let $H$ be a real Hilbert space, and let $C$ be a nonempty, closed, and convex subset of $H$. The variational inequality problem (VIP) is a fundamental nonlinear problem, which plays a significant role in engineering, economics, and control [1,2,3,4,5,6,7,8]. Specifically, it is to find a point $\tilde{y}\in C$ such that
$$\langle A\tilde{y},\, w-\tilde{y}\rangle \ge 0,\quad \forall\, w\in C, \tag{1}$$
where $A:H\to H$ is a mapping. We denote by $\mathrm{VI}(A,C)$ the solution set of (1). In recent years, many scholars have studied the hierarchical variational inequality problem (HVIP), which plays an important role in physical and practical issues. The HVIP is to find a point $w^*\in \mathrm{VI}(A,C)$ such that
$$\langle Fw^*,\, z-w^*\rangle \ge 0,\quad \forall\, z\in \mathrm{VI}(A,C), \tag{2}$$
where the mapping $A:C\to H$ is Lipschitzian and monotone, and the mapping $F:H\to H$ is Lipschitzian and strongly monotone.
In order to study the HVIP, many scholars have proposed different iterative methods. The simplest is the projection algorithm,
$$w_{n+1}=P_C(w_n-\lambda A w_n),$$
where $\lambda>0$ and the mapping $A$ is $L$-Lipschitzian and strongly monotone. However, establishing the convergence of this algorithm requires strict conditions on $A$: if $A$ is not strongly monotone, we obtain neither strong nor even weak convergence. To weaken the conditions on $A$, Korpelevich [9] proposed the extra-gradient algorithm, in which $A$ is a Lipschitzian and monotone mapping:
$$\begin{aligned} z_n&=P_C(w_n-\lambda A w_n),\\ w_{n+1}&=P_C(w_n-\lambda A z_n),\end{aligned}$$
where $\lambda>0$. Afterwards, Popov [10] presented the following algorithm:
$$\begin{aligned} w_{n+1}&=P_C(w_n-\lambda A z_n),\\ z_{n+1}&=P_C(w_{n+1}-\lambda A z_n),\end{aligned}$$
where $\lambda\in\left(0,\frac{\sqrt{2}-1}{L}\right)$. It is not difficult to see that Popov's method only needs to evaluate the operator $A$ at $z_n$ in each iteration, and does not need the value of $A$ at $w_n$. Therefore, compared with the extra-gradient method, the computational cost of Popov's method is reduced. However, Popov's method still requires two projections onto $C$. If the structure of $C$ is intricate, Popov's method is also difficult to implement. To address this problem, Malitsky et al. [11] proposed the following algorithm, which replaces one of the projections onto $C$ by a projection onto a half-space $T_n$:
$$\begin{aligned} T_n&=\{y\in H: \langle w_n-\lambda A z_{n-1}-z_n,\, y-z_n\rangle \le 0\},\\ w_{n+1}&=P_{T_n}(w_n-\lambda A z_n),\\ z_{n+1}&=P_C(w_{n+1}-\lambda A z_n),\end{aligned}$$
where $\lambda\in\left(0,\frac{1}{3L}\right)$ and the mapping $A$ is $L$-Lipschitzian and monotone. However, $L$ is not always easy to obtain. In 2019, Hieu et al. [12] proposed a new step size whose calculation is independent of the Lipschitz constant of $A$. The algorithm is as follows:
$$\begin{aligned} T_n&=\{w\in H: \langle w_n-\lambda_n A z_{n-1}-z_n,\, w-z_n\rangle \le 0\},\\ w_{n+1}&=P_{T_n}(w_n-\lambda_n A z_n),\\ z_{n+1}&=P_C(w_{n+1}-\lambda_{n+1} A z_n),\end{aligned}$$
where
$$\lambda_{n+1}=\begin{cases} \min\left\{\lambda_n,\ \dfrac{\mu\|z_{n-1}-z_n\|}{\|A z_n-A z_{n-1}\|}\right\}, & \text{if } A z_n\neq A z_{n-1},\\[2mm] \lambda_n, & \text{otherwise},\end{cases}$$
and they established the weak convergence of the sequences $\{w_n\}$ and $\{z_n\}$ generated by the above algorithm.
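As a hedged illustration, the Popov-type method with the adaptive step size just described can be sketched in a few lines of Python on a one-dimensional example. The operator $A(w)=w+\sin w$ and the box $C=[-2,5]$ are borrowed from the numerical section of this paper; the starting points, the value $\mu=0.3$, and the helper names are our own illustrative assumptions, not part of the original method description.

```python
import numpy as np

# Sketch of the Popov-type method with the adaptive step size of Hieu et al. [12],
# specialized to a 1-D example: A(w) = w + sin(w), C = [-2, 5].
A = lambda w: w + np.sin(w)
P_C = lambda w: np.clip(w, -2.0, 5.0)

def proj_halfspace(x, a, z):
    """Project x onto the half-space {w : a*(w - z) <= 0} (scalar case of P_{T_n})."""
    s = a * (x - z)
    return x if s <= 0.0 else x - s / a   # boundary point when the constraint is violated

mu = 0.3
lam = lam_next = 0.2                      # lambda_1; no Lipschitz constant is needed
w, z_prev, z = 1.0, 1.0, 0.8              # arbitrary starting points
for _ in range(200):
    lam = lam_next
    # w_{n+1} = P_{T_n}(w_n - lam_n * A z_n), with T_n built from z_{n-1}
    w = proj_halfspace(w - lam * A(z), w - lam * A(z_prev) - z, z)
    # adaptive rule: uses only the last two iterates and operator values
    if A(z) != A(z_prev):
        lam_next = min(lam, mu * abs(z_prev - z) / abs(A(z) - A(z_prev)))
    z_prev, z = z, P_C(w - lam_next * A(z))

print(abs(w))   # tends to the unique solution 0 of VI(A, C)
```

Note that the step sizes generated this way stay bounded below by $\mu/L$ (here $L=2$), which is exactly the property used in Lemma 4 below.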
The regularization method is an important tool for solving the VIP, and many scholars have studied it. In 2020, Hieu et al. [13] proposed the regularized Popov's extra-gradient method (RPEGM) to solve HVIP (2). The proposed algorithm is given as
$$\begin{aligned} T_n&=\{y\in H: \langle w_n-\lambda_n (A z_{n-1}+\alpha_n F w_n)-z_n,\, y-z_n\rangle \le 0\},\\ w_{n+1}&=P_{T_n}[w_n-\lambda_n (A z_{n-1}+\alpha_n F w_n)],\\ z_{n+1}&=P_C[w_{n+1}-\lambda_{n+1} (A z_n+\alpha_{n+1} F w_{n+1})],\end{aligned}$$
where
$$\lambda_{n+1}=\begin{cases} \min\left\{\lambda_n,\ \dfrac{\mu\|z_{n-1}-z_n\|}{\|A z_n-A z_{n-1}\|}\right\}, & \text{if } A z_n\neq A z_{n-1},\\[2mm] \lambda_n, & \text{otherwise}.\end{cases}$$
In recent years, many scholars have studied HVIP (2) with inertial methods. Jiang et al. [14] proposed a new method (IRSEGM) for solving HVIP (2) with a hemicontinuous, generalized Lipschitzian, and strongly monotone mapping, which combines multi-step inertial and regularization techniques. The algorithm is as follows, and a strong convergence theorem is obtained when the parameters satisfy certain conditions:
$$\begin{aligned} u_n&=w_n+\sum_{i=1}^{\min\{N,n\}}\alpha_{i,n}(w_{n-i+1}-w_{n-i}),\\ z_n&=P_C[u_n-\lambda_n(Au_n+\beta_n F u_n)],\\ T_n&=\{w\in H:\langle u_n-\lambda_n(Au_n+\beta_n F u_n)-z_n,\, w-z_n\rangle \le 0\},\\ w_{n+1}&=P_{T_n}[u_n-\lambda_n(Az_n+\beta_n F u_n)],\end{aligned}$$
where
$$\lambda_{n+1}=\begin{cases}\min\left\{\lambda_n,\ \dfrac{\mu\|z_{n-1}-z_n\|}{\|Az_n-Az_{n-1}\|}\right\}, & \text{if } Az_n\neq Az_{n-1},\\[2mm] \lambda_n, & \text{otherwise},\end{cases}$$
$$\alpha_{i,n}=\begin{cases}\min\left\{\alpha_i,\ \dfrac{\sigma_{i,n}}{\|w_{n-i+1}-w_{n-i}\|}\right\}, & \text{if } w_{n-i+1}\neq w_{n-i},\\[2mm] \alpha_i, & \text{otherwise},\end{cases}$$
$N$ is a chosen positive integer, the mapping $A$ is $L$-Lipschitzian and monotone, and the mapping $F$ is hemicontinuous, generalized Lipschitzian, and strongly monotone.
In this article, motivated by the above results, we propose a new multi-step inertial regularized generalized Popov's extra-gradient method in order to accelerate the convergence of the generated sequences. On the basis of previous studies, we extend $F$ in HVIP (2) to a hemicontinuous and generalized Lipschitzian mapping. Finally, we obtain a strong convergence result for the new algorithm under suitable conditions. The structure of the paper is as follows. In the first part, we give some research background. In the second part, we introduce some important definitions and lemmas. In the third part, we present a new method for HVIP (2), combining the multi-step inertial regularization method with Popov's extra-gradient method in Hilbert space, and obtain a strong convergence theorem for our algorithm when the mapping $F$ is hemicontinuous, generalized Lipschitzian, and strongly monotone. In the last part, numerical examples are used to exhibit the validity of our algorithm.

2. Preliminaries

In this part, we give some significant lemmas and definitions, which are important for the rest of the proof.
We use $\to$ to denote strong convergence and $\rightharpoonup$ to denote weak convergence.
Definition 1
([14,15]). Let A : H H be a mapping.
(i) 
If the mapping $A$ satisfies
$$\langle Aw-Az,\, w-z\rangle \ge 0,\quad \forall\, w,z\in H,$$
then $A$ is monotone.
(ii) 
If for some $\eta>0$ the mapping $A$ satisfies
$$\langle Aw-Az,\, w-z\rangle \ge \eta\|w-z\|^2,\quad \forall\, w,z\in H,$$
then $A$ is $\eta$-strongly monotone.
(iii) 
If for some $L>0$ the mapping $A$ satisfies
$$\|Aw-Az\|\le L\|w-z\|,\quad \forall\, w,z\in H,$$
then $A$ is $L$-Lipschitzian.
(iv) 
If for some $L>0$ the mapping $A$ satisfies
$$\|Aw-Az\|\le L(\|w-z\|+1),\quad \forall\, w,z\in H,$$
then $A$ is $L$-generalized Lipschitzian.
(v) 
$A$ is hemicontinuous if
$$\forall\, w,h\in H,\quad t_n\to 0 \ \Longrightarrow\ A(w+t_n h)\rightharpoonup Aw \ \text{ as } n\to\infty.$$
Remark 1.
According to the definitions of Lipschitzian and generalized Lipschitzian mappings, it is not difficult to see that the class of generalized Lipschitzian mappings is broader than that of Lipschitzian mappings. In fact, a generalized Lipschitzian mapping is not even necessarily hemicontinuous. A specific example is as follows.
Example 1.
Let $g:\mathbb{R}\to\mathbb{R}$ be defined by
$$g(w)=\begin{cases} w-1, & w<-1,\\ w-1+(w+1)^{1/2}, & -1\le w\le 0,\\ w+1-(1-w)^{1/2}, & 0<w\le 1,\\ w+1, & w>1.\end{cases}$$
Through simple computations, it is not difficult to show that $g$ is generalized Lipschitzian, but it is clearly not Lipschitzian. Therefore, the algorithm proposed in this paper is valuable and meaningful. Other examples can be found in the literature ([14,16]).
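Both claims of Example 1 can be checked numerically. The square-root reading of the middle pieces above is our reconstruction of the garbled original, so the code below is a sanity check under that assumption: the difference quotients of $g$ blow up near $w=-1$ (so $g$ is not Lipschitzian), while $|g(w)-g(z)|\le 2(|w-z|+1)$ holds on sampled pairs (generalized Lipschitzian with $L=2$).

```python
import numpy as np

def g(w):
    # Piecewise mapping of Example 1 (square-root reading of the middle pieces).
    if w < -1:
        return w - 1
    if w <= 0:
        return w - 1 + np.sqrt(w + 1)
    if w <= 1:
        return w + 1 - np.sqrt(1 - w)
    return w + 1

quot = lambda w, z: abs(g(w) - g(z)) / abs(w - z)
# Not Lipschitzian: the difference quotient is unbounded near w = -1.
print(quot(-1.0, -1.0 + 1e-2), quot(-1.0, -1.0 + 1e-6))

# Generalized Lipschitzian with L = 2 on randomly sampled pairs.
rng = np.random.default_rng(0)
for w, z in rng.uniform(-3.0, 3.0, (1000, 2)):
    assert abs(g(w) - g(z)) <= 2 * (abs(w - z) + 1)
```

The printed quotients grow like $1+h^{-1/2}$ as the second point approaches $-1$, so no global Lipschitz constant can exist.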
Lemma 1
([16]). Suppose that the mapping $A:H\to H$ in VIP (1) is hemicontinuous and strongly monotone. Then VIP (1) has one and only one solution.
Lemma 2
([17,18]). Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. For any $w\in H$, there is one and only one element $q\in C$ satisfying $\|w-q\|\le\|w-z\|$ for all $z\in C$; we denote $q=P_C w$. Moreover,
$$q=P_C w \iff \langle w-q,\, z-q\rangle \le 0,\quad \forall\, z\in C.$$
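Lemma 2 is easy to exercise numerically: for a box constraint the projection is a componentwise clip, and the variational characterization can be verified on random points of $C$. The box bounds below are an arbitrary illustration (they match the feasible set used later in the experiments).

```python
import numpy as np

def proj_box(w, lo=-2.0, hi=5.0):
    """Projection onto the box C = {w : lo <= w_j <= hi} is a componentwise clip."""
    return np.clip(w, lo, hi)

rng = np.random.default_rng(0)
w = rng.uniform(-10.0, 10.0, size=5)
q = proj_box(w)

# Characterization from Lemma 2: <w - q, z - q> <= 0 for every z in C.
for _ in range(1000):
    z = rng.uniform(-2.0, 5.0, size=5)
    assert np.dot(w - q, z - q) <= 1e-9
```

The check passes coordinatewise: whenever a coordinate is clipped, $w_j-q_j$ and $z_j-q_j$ have opposite signs, and unclipped coordinates contribute zero.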
Lemma 3
([19]). Let $\{b_n\}$ be a non-negative real sequence such that
$$b_{n+1}\le (1-\kappa_n)b_n+\kappa_n\sigma_n+\delta_n,\quad n=1,2,\ldots,$$
where $\{\kappa_n\}$, $\{\sigma_n\}$, and $\{\delta_n\}$ satisfy the following criteria:
(i) 
$\{\kappa_n\}\subset(0,1)$;
(ii) 
$\sum_{n=1}^{\infty}\kappa_n=\infty$;
(iii) 
$\limsup_{n\to\infty}\sigma_n\le 0$;
(iv) 
$\sum_{n=1}^{\infty}|\delta_n|<\infty$.
Then $\lim_{n\to\infty}b_n=0$.
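Lemma 3 can be sanity-checked numerically with a concrete choice of the sequences. The choice $\kappa_n=\sigma_n=1/n$ and $\delta_n=1/n^2$ below is our own illustration (it satisfies (i)-(iv)); simulating the recursion with equality gives the largest sequence allowed by the bound, and it still decays to zero.

```python
# Simulate b_{n+1} = (1 - kappa_n) b_n + kappa_n sigma_n + delta_n
# with kappa_n = sigma_n = 1/n and delta_n = 1/n^2 (illustrative choice).
b = 1.0
for n in range(1, 200001):
    kappa, sig, delta = 1.0 / n, 1.0 / n, 1.0 / n ** 2
    b = (1 - kappa) * b + kappa * sig + delta
print(b)   # decays toward 0, roughly like (log n)/n
```

With these parameters the recursion solves approximately to $b_n\approx (2\ln n)/n$, consistent with the conclusion $b_n\to 0$.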

3. Main Results

In this part, we propose a new method to deal with HVIP (2), which combines the multi-step inertial method with the regularization technique in Popov's extra-gradient method. Through a series of derivations and proofs, the main result of our paper is obtained. In what follows, we assume that our algorithm satisfies the following conditions:
(C1)
$A$ is $k$-Lipschitzian on $H$ and monotone on $C$.
(C2)
$F$ is hemicontinuous, $\beta$-generalized Lipschitzian, and $\gamma$-strongly monotone on $H$.
(C3)
The solution set $\mathrm{VI}(A,C)$ is nonempty.
(C4)
Let $\{\alpha_n\}$ be a sequence in $(0,\infty)$ satisfying $\sum_{n=1}^{\infty}\alpha_n=\infty$, $\sum_{n=1}^{\infty}\alpha_n^2<\infty$, and $\lim_{n\to\infty}\frac{|\alpha_{n+1}-\alpha_n|}{\alpha_n^2}=0$.
(C5)
Let $\{\sigma_{i,n}\}$ be sequences satisfying $\{\sigma_{i,n}\}\subset(0,\infty)$, $\lim_{n\to\infty}\frac{\sigma_{i,n}}{\alpha_n}=0$, and $\sum_{n=1}^{\infty}\sigma_{i,n}<\infty$ for $i=1,2,\ldots,N$, where $N$ is a chosen positive integer.
Remark 2.
In condition (C4), we can take $\alpha_n=n^{-p}$, where $\frac{1}{2}<p<1$.
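A quick numerical sanity check of (C4) for $\alpha_n=n^{-p}$ with $p=\frac{3}{4}$ (our own choice within the admissible range, matching the value used in the experiments of Section 4): the partial sums of $\alpha_n$ keep growing like $4N^{1/4}$, the partial sums of $\alpha_n^2$ stay bounded by $\zeta(3/2)\approx 2.612$, and $|\alpha_{n+1}-\alpha_n|/\alpha_n^2\approx \frac{3}{4}n^{-1/4}$ decays to zero.

```python
import numpy as np

n = np.arange(1, 200001, dtype=float)
alpha = n ** -0.75                      # alpha_n = n^{-p} with p = 3/4

print(alpha.sum())                      # diverges as the range grows (~ 4*N^{1/4})
print((alpha ** 2).sum())               # bounded partial sums of n^{-3/2}
ratio = np.abs(np.diff(alpha)) / alpha[:-1] ** 2
print(ratio[0], ratio[-1])              # the ratio in (C4) tends to 0
```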
Now, we represent our new multi-step inertial regularized generalized Popov’s extra-gradient method.
Lemma 4.
The sequence $\{\lambda_n\}$ generated by Algorithm 1 is nonincreasing, and $\lim_{n\to\infty}\lambda_n\ge\min\{\lambda_1,\frac{\mu}{k}\}>0$.
Algorithm 1: The multi-step inertial regularized generalized Popov's extra-gradient method.
Initialization: Given $\lambda_0,\lambda_1,\alpha_0>0$ and $\mu\in(0,\sqrt{2}-1)$. Let $w_0,z_0,u_0$ be any three members of $H$.
Step 1. Compute
$$w_1=P_C[u_0-\lambda_0(Az_0+\alpha_0 F u_0)],$$
$$u_1=w_1+\theta_{1,1}(w_1-w_0),$$
$$z_1=P_C[u_1-\lambda_1(Az_0+\alpha_1 F u_1)].$$
Step 2. Given the current iterates $w_n$, $z_n$, and $z_{n-1}$, compute $w_{n+1}$ as follows:
$$T_n=\{w\in H:\langle u_n-\lambda_n(Az_{n-1}+\alpha_n F u_n)-z_n,\, w-z_n\rangle\le 0\},$$
$$w_{n+1}=P_{T_n}[u_n-\lambda_n(Az_n+\alpha_n F u_n)].$$
Step 3. Compute
$$u_{n+1}=w_{n+1}+\sum_{i=1}^{\min\{N,n+1\}}\theta_{i,n+1}(w_{n+2-i}-w_{n+1-i}),$$
$$z_{n+1}=P_C[u_{n+1}-\lambda_{n+1}(Az_n+\alpha_{n+1}Fu_{n+1})],$$
 where $0<\theta_{i,n}\le\theta_i$ for some constants $\theta_i>0$, with
$$\theta_{i,n}=\begin{cases}\min\left\{\theta_i,\ \dfrac{\sigma_{i,n}}{\|w_{n+1-i}-w_{n-i}\|}\right\}, & \text{if } w_{n+1-i}\neq w_{n-i},\\[2mm] \theta_i, & \text{otherwise},\end{cases}$$
$$\lambda_{n+1}=\begin{cases}\min\left\{\lambda_n,\ \dfrac{\mu\|z_{n-1}-z_n\|}{\|Az_n-Az_{n-1}\|}\right\}, & \text{if } Az_n\neq Az_{n-1},\\[2mm] \lambda_n, & \text{otherwise}.\end{cases}$$
Step 4. Set $n:=n+1$ and go to Step 2.
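As a hedged, one-step ($N=1$) sketch, Algorithm 1 can be transcribed in Python for the scalar example used later in Section 4: $A(w)=w+\sin w$, $C=[-2,5]$, $F=I$. The parameter values follow Example 2, the half-space projection is specialized to the 1-D case, and the variable names are our own; this is an illustration under those assumptions, not the authors' Matlab code.

```python
import numpy as np

A = lambda w: w + np.sin(w)                 # Lipschitzian, monotone (Example 2)
F = lambda w: w                             # F = I
P_C = lambda w: np.clip(w, -2.0, 5.0)       # projection onto C = [-2, 5]

def proj_halfspace(x, a, z):
    """Project x onto {w : a*(w - z) <= 0} (scalar case of P_{T_n})."""
    s = a * (x - z)
    return x if s <= 0.0 else x - s / a     # boundary point z when violated

mu, theta_bar = 0.3, 0.1
alpha = lambda n: (n + 1.0) ** -0.75        # regularization parameters, cf. (C4)
sigma = lambda n: 1.0 / n ** 2              # bounds on the inertial weights, cf. (C5)

# Initialization and Step 1.
lam = 0.2                                   # lambda_0 = lambda_1 = 0.2
w0 = z0 = u0 = 1.0
w = P_C(u0 - lam * (A(z0) + alpha(0) * F(u0)))                 # w_1
th = theta_bar if w == w0 else min(theta_bar, sigma(1) / abs(w - w0))
u = w + th * (w - w0)                                          # u_1
z_prev, z = z0, P_C(u - lam * (A(z0) + alpha(1) * F(u)))       # z_1

for n in range(1, 300):
    # Step 2: project onto the half-space T_n.
    v = u - lam * (A(z_prev) + alpha(n) * F(u))                # point defining T_n
    w_new = proj_halfspace(u - lam * (A(z) + alpha(n) * F(u)), v - z, z)
    # Adaptive step size lambda_{n+1}.
    lam_new = lam if A(z) == A(z_prev) else \
        min(lam, mu * abs(z_prev - z) / abs(A(z) - A(z_prev)))
    # Step 3: inertial point u_{n+1} and z_{n+1}.
    th = theta_bar if w_new == w else min(theta_bar, sigma(n + 1) / abs(w_new - w))
    u = w_new + th * (w_new - w)
    z_prev, z = z, P_C(u - lam_new * (A(z) + alpha(n + 1) * F(u)))
    w, lam = w_new, lam_new

print(abs(w))    # approaches the unique HVIP solution w* = 0
```

Since $A$ here has Lipschitz constant $k=2$, the step sizes stay in $[\mu/k,\lambda_1]=[0.15,0.2]$, which is the lower bound established in Lemma 4.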
Proof. 
By the definition of the sequence $\{\lambda_n\}$, it is obvious that $\{\lambda_n\}$ is nonincreasing. Since $A$ is $k$-Lipschitz continuous with $k>0$, we have
$$\|Aw-Az\|\le k\|w-z\|,\quad \forall\, w,z\in H.$$
In the case of $Az_n\neq Az_{n-1}$, we have
$$\frac{\mu\|z_{n-1}-z_n\|}{\|Az_n-Az_{n-1}\|}\ge\frac{\mu\|z_{n-1}-z_n\|}{k\|z_n-z_{n-1}\|}=\frac{\mu}{k}.$$
Clearly, $\min\{\lambda_1,\frac{\mu}{k}\}$ is a lower bound of the sequence $\{\lambda_n\}$. □
By conditions (C1)–(C3), we can easily see that $(A+\alpha F)$ is hemicontinuous and strongly monotone. Therefore, in light of Lemma 1, for each $\alpha>0$ there is exactly one solution $w_\alpha$ of the following variational problem (6):
$$\text{Find } z\in C \text{ such that } \langle (A+\alpha F)z,\, w-z\rangle\ge 0,\quad \forall\, w\in C. \tag{6}$$
In the same way, for every $n\in\mathbb{N}$, there is a unique element $w_{\alpha_n}\in C$ such that
$$\langle (A+\alpha_n F)w_{\alpha_n},\, w-w_{\alpha_n}\rangle\ge 0,\quad \forall\, w\in C.$$
On the other hand, according to Lemma 1, it is easy to see that when conditions (C1)–(C3) are satisfied, there is a unique solution $w^*$ of HVIP (2).
Lemma 5
([14]). For $w_\alpha$ and $w^*$ above, the following hold:
(i) 
For all $\mu,\nu>0$, $\|w_\mu-w_\nu\|\le\frac{|\mu-\nu|}{\mu}\tau$, where $\tau$ is a positive constant,
$$\tau=\frac{1}{\gamma}\left(1+\frac{\beta}{\gamma}\right)\left(\|Fw^*\|+2\beta\|w^*\|+\beta\right).$$
(ii) 
$\{w_\alpha\}$ is bounded, and $\|w_\alpha\|\le\frac{1}{\gamma}\|Fw^*\|+\|w^*\|$.
(iii) 
$\lim_{\alpha\to 0^+}w_\alpha=w^*$.
Lemma 6.
For all $n+1\ge N$,
$$\|u_{n+1}-z_n\|^2\le\left(1+\sum_{i=1}^{N}\sigma_{i,n+1}\right)\|w_{n+1}-z_n\|^2+\bar{\sigma}_{n+1},$$
where $\bar{\sigma}_{n+1}=\sum_{i=1}^{N}\sigma_{i,n+1}^2+2\sum_{1\le i<j\le N}\sigma_{i,n+1}\sigma_{j,n+1}+\sum_{i=1}^{N}\sigma_{i,n+1}$.
Proof. 
By the definition of $u_n$, we deduce
$$\begin{aligned}
\|u_{n+1}-z_n\|^2&=\Big\|(w_{n+1}-z_n)+\sum_{i=1}^{N}\theta_{i,n+1}(w_{n+2-i}-w_{n+1-i})\Big\|^2\\
&\le\Big(\|w_{n+1}-z_n\|+\sum_{i=1}^{N}\theta_{i,n+1}\|w_{n+2-i}-w_{n+1-i}\|\Big)^2\\
&=\|w_{n+1}-z_n\|^2+\sum_{i=1}^{N}\theta_{i,n+1}^2\|w_{n+2-i}-w_{n+1-i}\|^2\\
&\quad+2\|w_{n+1}-z_n\|\sum_{i=1}^{N}\theta_{i,n+1}\|w_{n+2-i}-w_{n+1-i}\|\\
&\quad+2\sum_{1\le i<j\le N}\theta_{i,n+1}\theta_{j,n+1}\|w_{n+2-i}-w_{n+1-i}\|\,\|w_{n+2-j}-w_{n+1-j}\|\\
&\le\|w_{n+1}-z_n\|^2+\sum_{i=1}^{N}\sigma_{i,n+1}^2+\|w_{n+1}-z_n\|^2\sum_{i=1}^{N}\sigma_{i,n+1}+\sum_{i=1}^{N}\sigma_{i,n+1}\\
&\quad+2\sum_{1\le i<j\le N}\sigma_{i,n+1}\sigma_{j,n+1}\\
&=\left(1+\sum_{i=1}^{N}\sigma_{i,n+1}\right)\|w_{n+1}-z_n\|^2+\bar{\sigma}_{n+1}.\qquad\Box
\end{aligned}$$
Lemma 7.
Let $m$ and $n$ be two arbitrary real numbers and let $a$ be an arbitrary positive real number. Then:
(i) 
$mn\le\frac{a}{2}m^2+\frac{1}{2a}n^2$;
(ii) 
$(m+n)^2\le(2+\sqrt{2})m^2+\sqrt{2}\,n^2$.
Proof. 
(i) Since $(n-am)^2=n^2-2amn+a^2m^2\ge 0$, we get $2amn\le n^2+a^2m^2$, so we deduce
$$mn\le\frac{a}{2}m^2+\frac{1}{2a}n^2.$$
In particular, taking $a=\sqrt{2}$, we obtain $mn\le\frac{\sqrt{2}}{2}m^2+\frac{\sqrt{2}}{4}n^2$.
(ii) Taking $a=1+\sqrt{2}$ in (i), we have
$$2mn\le(1+\sqrt{2})m^2+(\sqrt{2}-1)n^2,$$
so we deduce
$$(m+n)^2=m^2+2mn+n^2\le(2+\sqrt{2})m^2+\sqrt{2}\,n^2.\qquad\Box$$
Theorem 1.
Assume that Algorithm 1 satisfies conditions (C1)–(C5). Then the sequence $\{w_n\}$ produced by the algorithm converges strongly to the unique solution $w^*$ of HVIP (2).
Proof. 
According to Lemma 5, we have $w_{\alpha_n}\to w^*$. So it suffices to prove $\|w_{\alpha_n}-w_n\|\to 0$ in order to obtain $w_n\to w^*$; the proof is as follows. Since
$$\|u_n-w_{\alpha_n}\|^2=\|u_n-w_{n+1}\|^2+\|w_{n+1}-w_{\alpha_n}\|^2+2\langle u_n-w_{n+1},\, w_{n+1}-w_{\alpha_n}\rangle, \tag{7}$$
from (7) we have
$$\begin{aligned}
\|w_{\alpha_n}-w_{n+1}\|^2&=\|u_n-w_{\alpha_n}\|^2-2\langle w_{n+1}-u_n,\, w_{\alpha_n}-w_{n+1}\rangle-\|u_n-w_{n+1}\|^2\\
&=\|u_n-w_{\alpha_n}\|^2-2\langle w_{n+1}-u_n,\, w_{\alpha_n}-w_{n+1}\rangle-\|(w_{n+1}-z_n)+(z_n-u_n)\|^2\\
&=\|u_n-w_{\alpha_n}\|^2-2\langle w_{n+1}-u_n,\, w_{\alpha_n}-w_{n+1}\rangle-\|w_{n+1}-z_n\|^2-\|u_n-z_n\|^2\\
&\quad-2\langle w_{n+1}-z_n,\, z_n-u_n\rangle\\
&=\|u_n-w_{\alpha_n}\|^2-\|w_{n+1}-z_n\|^2-\|u_n-z_n\|^2\\
&\quad+2\langle u_n-\lambda_n(Az_{n-1}+\alpha_n Fu_n)-z_n,\, w_{n+1}-z_n\rangle\\
&\quad+2\langle u_n-\lambda_n(Az_n+\alpha_n Fu_n)-w_{n+1},\, w_{\alpha_n}-w_{n+1}\rangle\\
&\quad+2\lambda_n\langle Az_{n-1}+\alpha_n Fu_n,\, w_{n+1}-z_n\rangle+2\lambda_n\langle Az_n+\alpha_n Fu_n,\, w_{\alpha_n}-w_{n+1}\rangle\\
&=\|u_n-w_{\alpha_n}\|^2-\|w_{n+1}-z_n\|^2-\|u_n-z_n\|^2\\
&\quad+2\langle u_n-\lambda_n(Az_{n-1}+\alpha_n Fu_n)-z_n,\, w_{n+1}-z_n\rangle\\
&\quad+2\langle u_n-\lambda_n(Az_n+\alpha_n Fu_n)-w_{n+1},\, w_{\alpha_n}-w_{n+1}\rangle\\
&\quad+2\lambda_n\langle Az_n+\alpha_n Fu_n,\, w_{\alpha_n}-z_n\rangle+2\lambda_n\langle Az_{n-1}-Az_n,\, w_{n+1}-z_n\rangle.
\end{aligned}\tag{8}$$
By the definition of $w_{n+1}$ and $T_n$, we have
$$2\langle u_n-\lambda_n(Az_{n-1}+\alpha_n Fu_n)-z_n,\, w_{n+1}-z_n\rangle\le 0. \tag{9}$$
Similarly, by the definition of $w_{n+1}$, Lemma 2, and $w_{\alpha_n}\in C\subset T_n$, we also have
$$\langle u_n-\lambda_n(Az_n+\alpha_n Fu_n)-w_{n+1},\, w_{\alpha_n}-w_{n+1}\rangle\le 0.$$
Combining (8) and (9), we obtain
$$\begin{aligned}
\|w_{\alpha_n}-w_{n+1}\|^2&\le\|u_n-w_{\alpha_n}\|^2-\|w_{n+1}-z_n\|^2-\|u_n-z_n\|^2\\
&\quad+2\lambda_n\langle Az_n+\alpha_n Fu_n,\, w_{\alpha_n}-z_n\rangle+2\lambda_n\langle Az_{n-1}-Az_n,\, w_{n+1}-z_n\rangle.
\end{aligned}\tag{10}$$
Now let us estimate $2\lambda_n\langle Az_{n-1}-Az_n,\, w_{n+1}-z_n\rangle$ and $2\lambda_n\langle Az_n+\alpha_n Fu_n,\, w_{\alpha_n}-z_n\rangle$. According to the definition of $\{\lambda_n\}$ and Lemma 7, we have
$$\begin{aligned}
2\lambda_n\langle Az_{n-1}-Az_n,\, w_{n+1}-z_n\rangle&\le 2\lambda_n\|Az_n-Az_{n-1}\|\,\|w_{n+1}-z_n\|\\
&\le\frac{2\lambda_n\mu}{\lambda_{n+1}}\|z_n-z_{n-1}\|\,\|w_{n+1}-z_n\|\\
&\le\frac{\lambda_n\mu}{\lambda_{n+1}}\left(\frac{1}{\sqrt{2}}\|z_n-z_{n-1}\|^2+\sqrt{2}\,\|w_{n+1}-z_n\|^2\right)\\
&\le\frac{\lambda_n\mu}{\lambda_{n+1}}\left[\frac{1}{\sqrt{2}}\Big((2+\sqrt{2})\|z_n-u_n\|^2+\sqrt{2}\,\|u_n-z_{n-1}\|^2\Big)+\sqrt{2}\,\|w_{n+1}-z_n\|^2\right]\\
&=(1+\sqrt{2})\frac{\lambda_n\mu}{\lambda_{n+1}}\|u_n-z_n\|^2+\frac{\lambda_n\mu}{\lambda_{n+1}}\|u_n-z_{n-1}\|^2+\sqrt{2}\,\frac{\lambda_n\mu}{\lambda_{n+1}}\|w_{n+1}-z_n\|^2.
\end{aligned}\tag{11}$$
Next, we consider $2\lambda_n\langle Az_n+\alpha_n Fu_n,\, w_{\alpha_n}-z_n\rangle$. Since $\alpha_n\to 0$ and $\frac{\sigma_{i,n+1}}{\alpha_{n+1}}\to 0$ for each $i=1,2,\ldots,N$, let $\xi_1,\xi_2,\xi_3$ be three positive real numbers satisfying
$$2\gamma-\beta\xi_1-\xi_2-\xi_3>0,$$
$$\sum_{i=1}^{N}\sigma_{i,n}\le\xi_3\lambda_n\alpha_n,\quad \forall\, n\ge n_0.$$
From Lemma 4, it is not difficult to see that there is a constant $c$ such that $0<c\le\lambda_n\le\lambda_1$. So, since $\alpha_n\to 0$ as $n\to\infty$, we obtain $\frac{\lambda_n\alpha_n\beta}{\xi_1}\to 0$. On the other hand, from Lemma 4, we have $(1+\sqrt{2})\frac{\mu\lambda_n}{\lambda_{n+1}}\to(1+\sqrt{2})\mu$. Since $\mu\in(0,\sqrt{2}-1)$, without loss of generality, we have
$$1-(1+\sqrt{2})\frac{\lambda_n\mu}{\lambda_{n+1}}-\frac{\lambda_n\alpha_n\beta}{\xi_1}-\xi_2>0,\quad \forall\, n\ge n_0. \tag{12}$$
Since $w_{\alpha_n}\in \mathrm{VI}(A+\alpha_n F,\, C)$ and $z_n\in C$, we have
$$\langle Aw_{\alpha_n}+\alpha_n Fw_{\alpha_n},\, w_{\alpha_n}-z_n\rangle\le 0. \tag{13}$$
Combining conditions (C1) and (C2), we obtain
$$\begin{aligned}
2\lambda_n\langle Az_n+\alpha_n Fu_n,\, w_{\alpha_n}-z_n\rangle
&=2\lambda_n\langle Az_n-Aw_{\alpha_n},\, w_{\alpha_n}-z_n\rangle+2\lambda_n\langle Aw_{\alpha_n}+\alpha_n Fu_n,\, w_{\alpha_n}-z_n\rangle\\
&\le 2\lambda_n\langle Aw_{\alpha_n}+\alpha_n Fu_n,\, w_{\alpha_n}-z_n\rangle\\
&=2\lambda_n\langle Aw_{\alpha_n}+\alpha_n Fw_{\alpha_n},\, w_{\alpha_n}-z_n\rangle+2\lambda_n\alpha_n\langle Fu_n-Fw_{\alpha_n},\, w_{\alpha_n}-z_n\rangle\\
&\le 2\lambda_n\alpha_n\langle Fu_n-Fw_{\alpha_n},\, w_{\alpha_n}-z_n\rangle\\
&=2\lambda_n\alpha_n\langle Fu_n-Fw_{\alpha_n},\, w_{\alpha_n}-u_n\rangle+2\lambda_n\alpha_n\langle Fu_n-Fw_{\alpha_n},\, u_n-z_n\rangle\\
&\le-2\lambda_n\alpha_n\gamma\|u_n-w_{\alpha_n}\|^2+2\lambda_n\alpha_n\langle Fu_n-Fw_{\alpha_n},\, u_n-z_n\rangle\\
&\le-2\lambda_n\alpha_n\gamma\|u_n-w_{\alpha_n}\|^2+2\lambda_n\alpha_n\|Fu_n-Fw_{\alpha_n}\|\,\|u_n-z_n\|\\
&\le-2\lambda_n\alpha_n\gamma\|u_n-w_{\alpha_n}\|^2+2\lambda_n\alpha_n\beta(\|u_n-w_{\alpha_n}\|+1)\|u_n-z_n\|\\
&=-2\lambda_n\alpha_n\gamma\|u_n-w_{\alpha_n}\|^2+2\lambda_n\alpha_n\beta\|u_n-w_{\alpha_n}\|\,\|u_n-z_n\|+2\lambda_n\alpha_n\beta\|u_n-z_n\|\\
&\le-2\lambda_n\alpha_n\gamma\|u_n-w_{\alpha_n}\|^2+2\lambda_n\alpha_n\beta\left(\frac{\xi_1}{2}\|u_n-w_{\alpha_n}\|^2+\frac{1}{2\xi_1}\|u_n-z_n\|^2\right)\\
&\quad+2\left(\frac{\xi_2}{2}\|u_n-z_n\|^2+\frac{1}{2\xi_2}\lambda_n^2\alpha_n^2\beta^2\right)\\
&=-(2\gamma-\xi_1\beta)\lambda_n\alpha_n\|u_n-w_{\alpha_n}\|^2+\left(\frac{\lambda_n\alpha_n\beta}{\xi_1}+\xi_2\right)\|u_n-z_n\|^2+\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}.
\end{aligned}\tag{14}$$
Substituting (11) and (14) into (10), we deduce
$$\begin{aligned}
\|w_{\alpha_n}-w_{n+1}\|^2&\le\|u_n-w_{\alpha_n}\|^2-\|w_{n+1}-z_n\|^2-\|u_n-z_n\|^2\\
&\quad+2\lambda_n\langle Az_{n-1}-Az_n,\, w_{n+1}-z_n\rangle+2\lambda_n\langle Az_n+\alpha_n Fu_n,\, w_{\alpha_n}-z_n\rangle\\
&\le[1-(2\gamma-\xi_1\beta)\lambda_n\alpha_n]\|u_n-w_{\alpha_n}\|^2-\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)\|w_{n+1}-z_n\|^2\\
&\quad-\left[1-(1+\sqrt{2})\frac{\lambda_n\mu}{\lambda_{n+1}}-\frac{\lambda_n\alpha_n\beta}{\xi_1}-\xi_2\right]\|u_n-z_n\|^2\\
&\quad+\frac{\lambda_n\mu}{\lambda_{n+1}}\|u_n-z_{n-1}\|^2+\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}.
\end{aligned}\tag{15}$$
Combining (12) and (15), for all $n\ge n_0$, we have
$$\begin{aligned}
\|w_{\alpha_n}-w_{n+1}\|^2&\le[1-(2\gamma-\xi_1\beta)\lambda_n\alpha_n]\|u_n-w_{\alpha_n}\|^2-\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)\|w_{n+1}-z_n\|^2\\
&\quad+\frac{\lambda_n\mu}{\lambda_{n+1}}\|u_n-z_{n-1}\|^2+\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}.
\end{aligned}\tag{16}$$
From Lemma 6, without loss of generality, for all $n\ge n_0$, we obtain
$$\begin{aligned}
\|w_{n+1}-z_n\|^2&\ge\frac{\|u_{n+1}-z_n\|^2-\bar{\sigma}_{n+1}}{1+\sum_{i=1}^{N}\sigma_{i,n+1}}\ge\frac{\|u_{n+1}-z_n\|^2-\bar{\sigma}_{n+1}}{1+\xi_3\lambda_n\alpha_n}\\
&=\frac{1}{1+\xi_3\lambda_n\alpha_n}\|u_{n+1}-z_n\|^2-\frac{\bar{\sigma}_{n+1}}{1+\xi_3\lambda_n\alpha_n}.
\end{aligned}\tag{17}$$
Analogously to the proof of Lemma 6, for all $n\ge n_0$, we obtain
$$\|u_n-w_{\alpha_n}\|^2\le(1+\xi_3\lambda_n\alpha_n)\|w_n-w_{\alpha_n}\|^2+\bar{\sigma}_{n+1}. \tag{18}$$
Substituting (17) and (18) into (16), for all $n\ge n_0$, we obtain
$$\begin{aligned}
\|w_{\alpha_n}-w_{n+1}\|^2&\le[1-(2\gamma-\xi_1\beta)\lambda_n\alpha_n]\big[(1+\xi_3\lambda_n\alpha_n)\|w_n-w_{\alpha_n}\|^2+\bar{\sigma}_{n+1}\big]\\
&\quad-\frac{1}{1+\xi_3\lambda_n\alpha_n}\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)\|u_{n+1}-z_n\|^2+\frac{\lambda_n\mu}{\lambda_{n+1}}\|u_n-z_{n-1}\|^2\\
&\quad+\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}+\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)\frac{\bar{\sigma}_{n+1}}{1+\xi_3\lambda_n\alpha_n}\\
&\le\big[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n-(2\gamma-\beta\xi_1)\xi_3\lambda_n^2\alpha_n^2\big]\|w_n-w_{\alpha_n}\|^2\\
&\quad-\frac{1}{1+\xi_3\lambda_n\alpha_n}\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)\|u_{n+1}-z_n\|^2+\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}\\
&\quad+\frac{\lambda_n\mu}{\lambda_{n+1}}\|u_n-z_{n-1}\|^2+\frac{\bar{\sigma}_{n+1}}{1+\xi_3\lambda_n\alpha_n}+[1-(2\gamma-\beta\xi_1)\lambda_n\alpha_n]\bar{\sigma}_{n+1}\\
&\le[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n]\|w_n-w_{\alpha_n}\|^2-\frac{1}{1+\xi_3\lambda_n\alpha_n}\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)\|u_{n+1}-z_n\|^2\\
&\quad+\frac{\lambda_n\mu}{\lambda_{n+1}}\|u_n-z_{n-1}\|^2+\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}+\frac{\bar{\sigma}_{n+1}}{1+\xi_3\lambda_n\alpha_n}+\bar{\sigma}_{n+1}.
\end{aligned}$$
Since $\lim_{n\to\infty}\alpha_n=0$ and $\lim_{n\to\infty}\lambda_n=\lambda>0$, without loss of generality, for $n\ge n_0$, we have $1-\xi_2\lambda_n\alpha_n>0$. From Lemma 5 and Lemma 7, we obtain
$$\begin{aligned}
\|w_{n+1}-w_{\alpha_n}\|^2&=\|w_{n+1}-w_{\alpha_{n+1}}\|^2+\|w_{\alpha_{n+1}}-w_{\alpha_n}\|^2-2\langle w_{\alpha_{n+1}}-w_{n+1},\, w_{\alpha_{n+1}}-w_{\alpha_n}\rangle\\
&\ge\|w_{n+1}-w_{\alpha_{n+1}}\|^2+\|w_{\alpha_{n+1}}-w_{\alpha_n}\|^2-2\|w_{n+1}-w_{\alpha_{n+1}}\|\,\|w_{\alpha_{n+1}}-w_{\alpha_n}\|\\
&\ge\|w_{n+1}-w_{\alpha_{n+1}}\|^2+\|w_{\alpha_{n+1}}-w_{\alpha_n}\|^2-\xi_2\lambda_n\alpha_n\|w_{n+1}-w_{\alpha_{n+1}}\|^2\\
&\quad-\frac{1}{\xi_2\lambda_n\alpha_n}\|w_{\alpha_{n+1}}-w_{\alpha_n}\|^2\\
&=(1-\xi_2\lambda_n\alpha_n)\|w_{n+1}-w_{\alpha_{n+1}}\|^2-\frac{1-\xi_2\lambda_n\alpha_n}{\xi_2\lambda_n\alpha_n}\|w_{\alpha_{n+1}}-w_{\alpha_n}\|^2\\
&\ge(1-\xi_2\lambda_n\alpha_n)\|w_{n+1}-w_{\alpha_{n+1}}\|^2-\frac{1-\xi_2\lambda_n\alpha_n}{\xi_2\lambda_n\alpha_n}\cdot\frac{(\alpha_{n+1}-\alpha_n)^2}{\alpha_n^2}\tau^2.
\end{aligned}$$
By rearranging the above inequalities, for $n\ge n_0$, we obtain
$$\begin{aligned}
(1-\xi_2\lambda_n\alpha_n)\|w_{n+1}-w_{\alpha_{n+1}}\|^2&\le[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n]\|w_n-w_{\alpha_n}\|^2\\
&\quad-\frac{1}{1+\xi_3\lambda_n\alpha_n}\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)\|u_{n+1}-z_n\|^2+\frac{\lambda_n\mu}{\lambda_{n+1}}\|u_n-z_{n-1}\|^2\\
&\quad+\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}+\frac{\bar{\sigma}_{n+1}}{1+\xi_3\lambda_n\alpha_n}+\bar{\sigma}_{n+1}+\frac{1-\xi_2\lambda_n\alpha_n}{\xi_2\lambda_n\alpha_n}\cdot\frac{(\alpha_{n+1}-\alpha_n)^2}{\alpha_n^2}\tau^2\\
&\le[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n]\|w_n-w_{\alpha_n}\|^2\\
&\quad-\frac{1}{1+\xi_3\lambda_n\alpha_n}\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)\|u_{n+1}-z_n\|^2+\frac{\lambda_n\mu}{\lambda_{n+1}}\|u_n-z_{n-1}\|^2\\
&\quad+\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}+2\bar{\sigma}_{n+1}+\frac{1-\xi_2\lambda_n\alpha_n}{\xi_2\lambda_n\alpha_n}\cdot\frac{(\alpha_{n+1}-\alpha_n)^2}{\alpha_n^2}\tau^2.
\end{aligned}$$
Therefore, we have
$$\begin{aligned}
\|w_{n+1}-w_{\alpha_{n+1}}\|^2&\le\frac{1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n}{1-\xi_2\lambda_n\alpha_n}\|w_n-w_{\alpha_n}\|^2+\frac{\lambda_n\mu}{(1-\xi_2\lambda_n\alpha_n)\lambda_{n+1}}\|u_n-z_{n-1}\|^2\\
&\quad-\frac{1}{(1-\xi_2\lambda_n\alpha_n)(1+\xi_3\lambda_n\alpha_n)}\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)\|u_{n+1}-z_n\|^2\\
&\quad+\frac{1}{1-\xi_2\lambda_n\alpha_n}\left(\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}+2\bar{\sigma}_{n+1}\right)+\frac{(\alpha_{n+1}-\alpha_n)^2}{\xi_2\lambda_n\alpha_n^3}\tau^2\\
&=\frac{1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n}{1-\xi_2\lambda_n\alpha_n}\left\{\|w_n-w_{\alpha_n}\|^2+\frac{\lambda_n\mu}{[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n]\lambda_{n+1}}\|u_n-z_{n-1}\|^2\right\}\\
&\quad-\frac{1}{(1-\xi_2\lambda_n\alpha_n)(1+\xi_3\lambda_n\alpha_n)}\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)\|u_{n+1}-z_n\|^2\\
&\quad+\frac{1}{1-\xi_2\lambda_n\alpha_n}\left(\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}+2\bar{\sigma}_{n+1}\right)+\frac{(\alpha_{n+1}-\alpha_n)^2}{\xi_2\lambda_n\alpha_n^3}\tau^2.
\end{aligned}$$
Adding $\frac{\lambda_{n+1}\mu}{\lambda_{n+2}[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_{n+1}\alpha_{n+1}]}\|u_{n+1}-z_n\|^2$ to both sides of this inequality, we deduce
$$\begin{aligned}
&\|w_{n+1}-w_{\alpha_{n+1}}\|^2+\frac{\lambda_{n+1}\mu}{\lambda_{n+2}[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_{n+1}\alpha_{n+1}]}\|u_{n+1}-z_n\|^2\\
&\quad\le\frac{1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n}{1-\xi_2\lambda_n\alpha_n}\left\{\|w_n-w_{\alpha_n}\|^2+\frac{\lambda_n\mu}{[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n]\lambda_{n+1}}\|u_n-z_{n-1}\|^2\right\}\\
&\qquad-\left\{\frac{1}{(1-\xi_2\lambda_n\alpha_n)(1+\xi_3\lambda_n\alpha_n)}\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)-\frac{\lambda_{n+1}\mu}{\lambda_{n+2}[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_{n+1}\alpha_{n+1}]}\right\}\|u_{n+1}-z_n\|^2\\
&\qquad+\frac{1}{1-\xi_2\lambda_n\alpha_n}\left(\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}+2\bar{\sigma}_{n+1}\right)+\frac{(\alpha_{n+1}-\alpha_n)^2}{\xi_2\lambda_n\alpha_n^3}\tau^2.
\end{aligned}\tag{19}$$
Since $\mu\in(0,\sqrt{2}-1)$, we get
$$\lim_{n\to\infty}\left\{\frac{1}{(1-\xi_2\lambda_n\alpha_n)(1+\xi_3\lambda_n\alpha_n)}\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)-\frac{\lambda_{n+1}\mu}{\lambda_{n+2}[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_{n+1}\alpha_{n+1}]}\right\}=1-(\sqrt{2}+1)\mu>0.$$
So, without loss of generality, for all $n\ge n_0$, we have
$$\frac{1}{(1-\xi_2\lambda_n\alpha_n)(1+\xi_3\lambda_n\alpha_n)}\left(1-\frac{\sqrt{2}\lambda_n\mu}{\lambda_{n+1}}\right)-\frac{\lambda_{n+1}\mu}{\lambda_{n+2}[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_{n+1}\alpha_{n+1}]}>0.$$
From (19), for all $n\ge n_0$, we have
$$\begin{aligned}
&\|w_{n+1}-w_{\alpha_{n+1}}\|^2+\frac{\lambda_{n+1}\mu}{\lambda_{n+2}[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_{n+1}\alpha_{n+1}]}\|u_{n+1}-z_n\|^2\\
&\quad\le\frac{1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n}{1-\xi_2\lambda_n\alpha_n}\left\{\|w_n-w_{\alpha_n}\|^2+\frac{\lambda_n\mu}{[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n]\lambda_{n+1}}\|u_n-z_{n-1}\|^2\right\}\\
&\qquad+\frac{1}{1-\xi_2\lambda_n\alpha_n}\left(\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}+2\bar{\sigma}_{n+1}\right)+\frac{(\alpha_{n+1}-\alpha_n)^2}{\xi_2\lambda_n\alpha_n^3}\tau^2\\
&\quad=(1-\varsigma_n)\left\{\|w_n-w_{\alpha_n}\|^2+\frac{\lambda_n\mu}{[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n]\lambda_{n+1}}\|u_n-z_{n-1}\|^2\right\}+\varsigma_n\delta_n+\varepsilon_n,
\end{aligned}$$
where
$$\varsigma_n=\frac{(2\gamma-\beta\xi_1-\xi_2-\xi_3)\lambda_n\alpha_n}{1-\xi_2\lambda_n\alpha_n},$$
$$\delta_n=\left(\frac{\alpha_{n+1}-\alpha_n}{\alpha_n^2}\right)^2\frac{1-\xi_2\lambda_n\alpha_n}{(2\gamma-\beta\xi_1-\xi_2-\xi_3)\xi_2\lambda_n^2}\tau^2,$$
$$\varepsilon_n=\frac{1}{1-\xi_2\lambda_n\alpha_n}\left(\frac{\lambda_n^2\alpha_n^2\beta^2}{\xi_2}+2\bar{\sigma}_{n+1}\right).$$
Because $\varsigma_n=\frac{(2\gamma-\beta\xi_1-\xi_2-\xi_3)\lambda_n\alpha_n}{1-\xi_2\lambda_n\alpha_n}\ge(2\gamma-\beta\xi_1-\xi_2-\xi_3)\lambda_n\alpha_n$, $\sum_{n=1}^{\infty}\alpha_n=\infty$, and $\lim_{n\to\infty}\alpha_n=0$, we have $\sum_{n=1}^{\infty}\varsigma_n=\infty$ and $\lim_{n\to\infty}\varsigma_n=0$. It is easy to see that $\lim_{n\to\infty}\delta_n=0$ and $\sum_{n=1}^{\infty}\varepsilon_n<\infty$. So, using Lemma 3, we obtain
$$\|w_n-w_{\alpha_n}\|^2+\frac{\lambda_n\mu}{[1-(2\gamma-\beta\xi_1-\xi_3)\lambda_n\alpha_n]\lambda_{n+1}}\|u_n-z_{n-1}\|^2\to 0.$$
Thus, we have
$$\|w_n-w_{\alpha_n}\|\to 0.$$
This finishes the proof. □

4. Numerical Examples

In this part, three numerical experiments are used to demonstrate the effectiveness of our proposed algorithm. The analysis of the results shows that the efficiency of the algorithm proposed in this paper is higher. In the following three numerical experiments, we demonstrate the advantages of our algorithms by studying the effects of one-step, two-step, and three-step inertia on the convergence of the sequences. All procedures were implemented in Matlab 9.0 and executed on a desktop PC with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz (1.19 GHz) and 16.0 GB RAM.
Example 2.
Let $C=[-2,5]$. Define the mapping $A:H\to H$ by
$$Aw=w+\sin w,$$
for each $w\in\mathbb{R}$, where $H=\mathbb{R}$. By the definition of $A$, we can show that the operator $A$ is Lipschitzian and monotone. We use IRPEGM, 2-MIRPEGM, and 3-MIRPEGM to denote the one-step, two-step, and three-step inertial regularized Popov's extra-gradient methods of this paper, respectively. For IRPEGM, 2-MIRPEGM, and 3-MIRPEGM, we take $w_0=z_0=u_0=1$, $\theta_i=0.1$, $\sigma_{i,n}=\frac{1}{n^2}$; for RPEGM, we take $w_0=z_0=1$; and we take $\mu=0.3$, $\alpha_0=1$, and $\alpha_n=(n+1)^{-\frac{3}{4}}$ for each method. Let $F=I$; through calculation, it is not difficult to deduce that $\mathrm{VI}(A,C)=\{0\}$. Therefore, HVIP (2) has one and only one solution, $w^*=0$. In this case, we set the algorithm to stop when $\|w_n-w^*\|\le 10^{-6}$, and for each algorithm we take $\lambda_0=\lambda_1=0.2$, $0.1$, and $0.05$, respectively. The numerical results are presented in Figure 1, Figure 2 and Figure 3.
It is not difficult to see from Figure 1, Figure 2 and Figure 3 that the numbers of iterations required for convergence of our IRPEGM, 2-MIRPEGM, and 3-MIRPEGM are about 15%, 30%, and 40% less than that of RPEGM in [13], respectively. So our algorithm is both broader and more efficient.
Example 3.
Let $Q,K,S\in\mathbb{R}^{s\times s}$, where $K$ is a skew-symmetric matrix and $S$ is a diagonal matrix with positive diagonal entries. Let $M=QQ^{T}+K+S$; then $M$ is positive definite. Define the mapping $A:\mathbb{R}^s\to\mathbb{R}^s$ by
$$Aw=Mw+p,$$
for each $w\in\mathbb{R}^s$, where $p\in\mathbb{R}^s$. Let $H=\mathbb{R}^s$, and define $C$ by
$$C=\{(w^{(1)},w^{(2)},w^{(3)},\ldots,w^{(s)})^{T}\in\mathbb{R}^s:\ -2\le w^{(j)}\le 5,\ j=1,2,\ldots,s\}.$$
According to the definition of $A$, it is obvious that the operator $A$ is Lipschitzian and monotone. As before, we use IRPEGM, 2-MIRPEGM, and 3-MIRPEGM to denote the one-step, two-step, and three-step inertial regularized Popov's extra-gradient algorithms, respectively. For IRPEGM, 2-MIRPEGM, and 3-MIRPEGM, we take $\sigma_{i,n}=n^{-2}$, $\theta_i=0.2$, $w_0=z_0=u_0=(1,1,1,\ldots,1)^{T}$; for RPEGM, we take $w_0=z_0=(1,1,1,\ldots,1)^{T}$; and we take $\mu=0.3$, $\alpha_0=1$, $\alpha_n=(n+1)^{-\frac{3}{2}}$ for each method. Letting $F=I$, it is easy to obtain the solution set $\mathrm{VI}(A,C)=\{(0,0,0,\ldots,0)^{T}\}$; therefore, HVIP (2) has a unique solution $w^*=(0,0,0,\ldots,0)^{T}$. In this case, the algorithm stops when $\|w_n-w^*\|\le 10^{-4}$, and we consider $s=10,20,30$, respectively. Throughout this experiment, $p=(0,0,\ldots,0)^{T}$, the diagonal entries of $S$ are generated uniformly at random in $(0,2)$, and all entries of $Q$ and $K$ are generated uniformly at random in $(-2,2)$. The results are shown in Figure 4, Figure 5 and Figure 6.
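The construction of $M$ is easy to reproduce; with $K$ skew-symmetric, $x^{T}Mx=\|Q^{T}x\|^2+x^{T}Sx>0$ for $x\neq 0$, so $M$ is positive definite. The sketch below follows the random ranges stated in the text (reading $K$ as skew-symmetric, as in the classical Harker-Pang test construction); the seed is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(42)
s = 10
Q = rng.uniform(-2.0, 2.0, (s, s))
K0 = rng.uniform(-2.0, 2.0, (s, s))
K = (K0 - K0.T) / 2.0                      # skew-symmetric: x^T K x = 0
S = np.diag(rng.uniform(0.0, 2.0, s))      # positive diagonal entries
M = Q @ Q.T + K + S                        # the operator matrix of Example 3

# Positive definiteness of M is equivalent to that of its symmetric part.
eigs = np.linalg.eigvalsh((M + M.T) / 2.0)
print(eigs.min() > 0.0)                    # True
```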
By looking at the features of Figure 4, Figure 5 and Figure 6, we can easily see that our algorithm has obvious advantages over RPEGM.
Example 4.
Let $H$, $C$, and $A$ be as in Example 2. By the definition of $A$, we can show that the operator $A$ is Lipschitzian and monotone. We use IRSEGM to denote the one-step inertial regularized subgradient extra-gradient method proposed by Jiang et al. [14], and IRPEGM to denote the one-step version of our method. For IRPEGM and IRSEGM, we take $w_0=z_0=u_0=1$, $\theta_1=0.1$, $\sigma_{i,n}=\frac{1}{n^2}$, $\alpha_0=1$, $\alpha_n=(n+1)^{-\frac{3}{4}}$ for each method. Let
$$F(w)=\begin{cases} w-1, & w<-1,\\ w-1+(w+1)^{1/2}, & -1\le w\le 0,\\ w+1-(1-w)^{1/2}, & 0<w\le 1,\\ w+1, & w>1.\end{cases}$$
It is easy to verify that $F$ is hemicontinuous, generalized Lipschitzian, and strongly monotone on $H$, but not Lipschitzian. Through calculation, it is not difficult to deduce that $\mathrm{VI}(A,C)=\{0\}$; therefore, HVIP (2) has one and only one solution, $w^*=0$. In this case, the algorithms stop when $\|w_n-w^*\|\le 10^{-6}$. The numerical results are presented in Table 1.

5. Conclusions

In this paper, we propose a new multi-step inertial regularized generalized Popov's extra-gradient method to solve the hierarchical variational inequality problem, building on previous studies. Compared with previous algorithms, our algorithm has the following advantages. Firstly, compared with the algorithm of Hieu et al. [13], we introduce the multi-step inertial technique to accelerate the convergence of the sequence. Secondly, we relax the requirement in [13] that $F$ be Lipschitzian to $F$ being hemicontinuous and generalized Lipschitzian; therefore, our algorithm applies to a broader class of problems. Thirdly, compared with the algorithm of Jiang et al. [14], our algorithm only requires the value of $z_n$ under the action of $A$, with no need to know the value of $w_n$ under the action of $A$; therefore, our algorithm is relatively simpler and more efficient.

Author Contributions

Conceptualization, Y.G. and B.J.; data curation, B.J.; formal analysis, Y.W. and Y.G.; funding acquisition, Y.G.; methodology, Y.W. and B.J.; project administration, Y.W.; resources, B.J.; writing—original draft, Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

The National Natural Science Foundation of China: 12171435.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Acknowledgments

The authors thank the referees for their helpful comments, which notably improved the presentation of this paper. This work was supported by the National Natural Science Foundation of China (grant No. 12171435).

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Ceng, L.C.; Shehu, Y.; Wang, Y. Parallel Tseng's extragradient methods for solving systems of variational inequalities on Hadamard manifolds. Symmetry 2020, 12, 43. [Google Scholar] [CrossRef] [Green Version]
  3. Hieu, D.V.; Strodiot, J.J.; Muu, L.D. An extragradient algorithm for solving variational inequalities. J. Optim. Theory Appl. 2020, 185, 476–503. [Google Scholar] [CrossRef]
  4. Wang, Y.; Li, C.; Lu, L. A new algorithm for the common solutions of a generalized variational inequality system and a nonlinear operator equation in Banach spaces. Mathematics 2020, 8, 1994. [Google Scholar] [CrossRef]
  5. Wang, Y.; Pan, C. Viscosity approximation methods for a general variational inequality system and fixed point problems in Banach spaces. Symmetry 2020, 12, 36. [Google Scholar] [CrossRef] [Green Version]
  6. Yang, J.; Liu, H.; Li, G. Convergence of a subgradient extragradient algorithm for solving monotone variational inequalities. Numer. Algor. 2020, 84, 389–405. [Google Scholar] [CrossRef]
  7. Reich, S.; Thong, D.V.; Cholamjiak, P.; Long, L.V. Inertial projection-type methods for solving pseudomonotone variational inequality problems in Hilbert spaces. Numer. Algor. 2021, 88, 813–835. [Google Scholar] [CrossRef]
  8. Thong, D.V.; Shehu, Y.; Iyiola, O.S.; Thang, H.V. New hybrid projection methods for variational inequalities involving pseudomonotone mappings. Optim. Eng. 2021, 22, 363–386. [Google Scholar] [CrossRef]
  9. Korpelevich, G.M. An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metod. 1976, 12, 747–756. [Google Scholar]
  10. Popov, L.D. A modification of the Arrow–Hurwicz method for searching for saddle points. Mat. Zametki 1980, 28, 777–784. [Google Scholar]
  11. Malitsky, Y.V.; Semenov, V.V. An extragradient algorithm for monotone variational inequalities. Cybern. Syst. Anal. 2014, 50, 271–277. [Google Scholar] [CrossRef]
  12. Hieu, D.V.; Anh, P.K.; Muu, L.D. Modified extragradient-like algorithms with new stepsizes for variational inequalities. Comput. Optim. Appl. 2019, 73, 913–932. [Google Scholar] [CrossRef]
  13. Hieu, D.V.; Moudafi, A. Regularization projection method for solving bilevel variational inequality problem. Optim. Lett. 2020, 15, 205–229. [Google Scholar] [CrossRef]
  14. Jiang, B.; Wang, Y.; Yao, J.C. Multi-step inertial regularized methods for hierarchical variational inequality problems involving generalized lipschitz continuous and hemicontinuous mappings. Mathematics 2021, 9, 2103. [Google Scholar] [CrossRef]
  15. Hammad, H.A.; Rehman, H.; Almusawa, H. Tikhonov regularization terms for accelerating inertial Mann-like algorithm with applications. Symmetry 2021, 13, 554. [Google Scholar] [CrossRef]
  16. Zhou, H.; Zhou, Y.; Feng, G. Iterative methods for solving a class of monotone variational inequality problems with applications. J. Inequal. Appl. 2015, 2015, 68. [Google Scholar] [CrossRef] [Green Version]
  17. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378. [Google Scholar] [CrossRef]
  18. Yang, J.; Liu, H. Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algor. 2019, 80, 741–752. [Google Scholar] [CrossRef]
  19. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412. [Google Scholar] [CrossRef]
Figure 1. Comparison of RPEGM, IRPEGM, 2-MIRPEGM and 3-MIRPEGM in Example 2 with λ 1 = 0.2 .
Figure 2. Comparison of RPEGM, IRPEGM, 2-MIRPEGM and 3-MIRPEGM in Example 2 with λ 1 = 0.1 .
Figure 3. Comparison of RPEGM, IRPEGM, 2-MIRPEGM and 3-MIRPEGM in Example 2 with λ 1 = 0.05 .
Figure 4. Comparison of RPEGM, IRPEGM, 2-MIRPEGM and 3-MIRPEGM in Example 3 with s = 10 .
Figure 4. Comparison of RPEGM, IRPEGM, 2-MIRPEGM and 3-MIRPEGM in Example 3 with s = 10 .
Symmetry 14 00187 g004
Figure 5. Comparison of RPEGM, IRPEGM, 2-MIRPEGM and 3-MIRPEGM in Example 3 with s = 20 .
Figure 6. Comparison of RPEGM, IRPEGM, 2-MIRPEGM and 3-MIRPEGM in Example 3 with s = 30 .
Table 1. Numerical results of IRPEGM and IRSEGM as regards Example 4.
| $\mu$ | $\lambda_1$ | IRPEGM Iter. | IRPEGM Time [s] | IRSEGM Iter. | IRSEGM Time [s] |
|---|---|---|---|---|---|
| 0.2 | 0.2 | 128 | 0.9596 | 167 | 1.2030 |
| 0.2 | 0.3 | 73 | 0.5893 | 99 | 0.8926 |
| 0.2 | 0.4 | 73 | 0.5891 | 167 | 1.2683 |
| 0.3 | 0.2 | 76 | 0.6052 | 188 | 1.3341 |
| 0.3 | 0.3 | 157 | 1.0814 | 184 | 1.3190 |
| 0.3 | 0.4 | 76 | 0.6191 | 184 | 1.3587 |

Wang, Y.; Gao, Y.; Jiang, B. A Regularized Generalized Popov’s Method to Solve the Hierarchical Variational Inequality Problem with Generalized Lipschitzian Mappings. Symmetry 2022, 14, 187. https://doi.org/10.3390/sym14020187


