Advanced Algorithms and Common Solutions to Variational Inequalities

Abstract: The paper presents advanced algorithms obtained by adding inertial and shrinking projection terms to ordinary parallel and cyclic hybrid sub-gradient extra-gradient algorithms (for short, PCHISE). Via these algorithms, common solutions of variational inequality problems (CSVIP) and strong convergence results are obtained in Hilbert spaces. The structure of this problem is to find a solution to a system of unrelated VIs corresponding to set-valued mappings. To demonstrate the acceleration, effectiveness, and performance of our parallel and cyclic algorithms, numerical contributions have been incorporated. In this direction, our results unify and generalize some related papers in the literature.


Introduction
In this manuscript, we discuss the problem of finding fixed points which also solve VIs in a Hilbert space (Hs). Throughout, let C be a nonempty closed convex subset (ccs) of a real Hs H with induced norm ‖·‖ and inner product ⟨·,·⟩.
The variational inequality problem (VIP) was introduced by the authors of [1]: find ℘* ∈ C such that

⟨ג(℘*), ℘ − ℘*⟩ ≥ 0 for all ℘ ∈ C, (1)

where ג : C → H is a nonlinear mapping. The solution set of VIP (1) is denoted by VI(ג, C). VIs arise in many interesting fields, such as transportation, economics, engineering mechanics, and mathematical programming, and are considered an indispensable tool in these areas (see, for example, [2][3][4][5][6][7][8]). VIs are also widespread in optimization problems (OPs), where iterative algorithms are used to solve them; see [7,9].
Under suitable assumptions, there are two main approaches to VIPs: projection methods and regularization methods. Along these lines, many iterative schemes have been presented and analyzed for solving VIPs; here, we focus on the first type. One of the simplest is the gradient projection method, since each iteration requires only one projection onto the feasible set. However, its convergence requires rather strong assumptions, namely that the operator is strongly monotone or inverse strongly monotone. For a Lipschitz continuous and monotone mapping ג, the extra-gradient method iterates

℘_n = P_C(x_n − λ ג(x_n)),
x_{n+1} = P_C(x_n − λ ג(℘_n)),

for a suitable parameter λ > 0, where P_C is the metric projection onto C. The practicality of the method depends on how easily these projections can be computed: if P_C is simple, the extra-gradient method is computable and very useful; otherwise it becomes complicated, since each iteration requires solving two distance OPs over the cc set C.
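As a minimal illustration of the extra-gradient iteration above, here is a Python sketch (not the MATLAB code used in the paper's experiments): the monotone Lipschitz operator ג(x) = Mx with a skew-dominated matrix, the unit ball as feasible set, and the step size λ = 0.2 are all illustrative choices of ours.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Euclidean projection onto the closed ball of radius r (a simple ccs)."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def extragradient(g, proj, x0, lam, iters=200):
    """Extra-gradient method: two projections onto C per iteration."""
    x = x0
    for _ in range(iters):
        y = proj(x - lam * g(x))   # ℘_n = P_C(x_n − λ ג(x_n))
        x = proj(x - lam * g(y))   # x_{n+1} = P_C(x_n − λ ג(℘_n))
    return x

# g(x) = M x is monotone (symmetric part 0.1·I) and Lipschitz with L ≈ 1,
# so λ = 0.2 < 1/L; the unique solution of this VI over the ball is 0.
M = np.array([[0.1, 1.0], [-1.0, 0.1]])
sol = extragradient(lambda x: M @ x, proj_ball, np.array([1.0, 1.0]), lam=0.2)
```

Note that plain gradient projection diverges for this operator (its symmetric part is too weak), while the extra-gradient correction step restores convergence.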
In a Hs, weak convergence to a solution of the VIP is obtained by the sub-gradient extra-gradient method [11]:

℘_n = P_C(x_n − λ ג(x_n)),
x_{n+1} = P_{ξ_n}(x_n − λ ג(℘_n)),

where ξ_n is the half-space

ξ_n = {z ∈ H : ⟨x_n − λ ג(x_n) − ℘_n, z − ℘_n⟩ ≤ 0}.

The authors of [12] accelerated the speed of convergence by building a modified version of this algorithm. Our paper is concerned with finding common solutions of variational inequality problems (CSVIP): given a finite family of nonempty ccs C_i of H with ∩_{i=1}^N C_i ≠ ∅ and nonlinear mappings ג_i : C_i → H, find a point

℘* ∈ ∩_{i=1}^N VI(ג_i, C_i). (2)

Please note that if N = 1, CSVIP (2) reduces to VIP (1). The CSVIP covers many special cases, such as: the convex feasibility problem (CFP), obtained by taking all ג_i = 0, where we seek a point ℘* ∈ ∩_{i=1}^N C_i in the nonempty intersection of a finite family of cc sets; and the common fixed point problem (CFPP), obtained by taking the sets C_i in the CFP to be fixed point sets. These problems have been studied in depth, and their numerous applications have attracted the attention of many researchers; see [13][14][15][16][17][18][19].
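The practical advantage of the sub-gradient extra-gradient method is that the second projection is onto the half-space ξ_n, which has a closed form. A hedged Python sketch (operator, feasible set, and parameters are illustrative choices of ours, mirroring the scheme above):

```python
import numpy as np

def proj_ball(x, r=1.0):
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_halfspace(u, a, b):
    """Projection onto {z : <a, z> <= b}: closed form, no inner solver needed."""
    viol = a @ u - b
    return u if viol <= 0 else u - (viol / (a @ a)) * a

def subgrad_extragradient(g, proj_C, x0, lam, iters=200):
    x = x0
    for _ in range(iters):
        y = proj_C(x - lam * g(x))        # ℘_n = P_C(x_n − λ ג(x_n))
        a = x - lam * g(x) - y            # outward normal of the half-space ξ_n
        # x_{n+1} = P_{ξ_n}(x_n − λ ג(℘_n)), with ξ_n = {z : <a, z − ℘_n> <= 0}
        x = proj_halfspace(x - lam * g(y), a, a @ y)
    return x

M = np.array([[0.1, 1.0], [-1.0, 0.1]])   # monotone, Lipschitz with L ≈ 1
sol = subgrad_extragradient(lambda x: M @ x, proj_ball,
                            np.array([1.0, 1.0]), lam=0.2)
```

Only one projection onto C remains per iteration; the half-space projection is a single vector update, which is what makes the method attractive when P_C is expensive.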
For multi-valued mappings ג_i : C → 2^H, i = 1, …, N, an algorithm for solving the CSVIP was given in [20]. For simplicity, we state it for single-valued ג_i: the approximation x_{n+1} in algorithm (3) is found by constructing N + 1 subsets ξ_n^1, ξ_n^2, …, ξ_n^N and £_n and solving the minimization problem (4) over their intersection. When N is large, this task can be very costly: the number of subcases in the explicit solution formula of problem (4) is two to the power of the number of half-spaces. In Banach spaces, the authors of [21,22] derived two strongly convergent parallel hybrid iterative methods for finding a common element of the sets of fixed points of a family of asymptotically quasi-φ-nonexpansive mappings. Formulated in Hilbert spaces, this algorithm involves parameters η_n ∈ (0, 1) with lim sup_{n→∞} η_n < 1; the approximation x_{n+1} is defined as the projection of x_0 onto a set C_{n+1}, and finding the explicit form of the sets C_n and performing numerical experiments seems complicated. In the same spirit, Hieu [23] introduced two PCHSE algorithms for CSVIPs in Hilbert spaces and supported their convergence analysis with numerical results. Our main goal in this paper is to present iterative procedures, which we call PCHISE algorithms, for solving CSVIPs and to prove their strong convergence. Our algorithms generate a sequence that converges strongly to the nearest-point projection of the starting point onto the solution set of the CSVIP. To obtain this convergence, we use the inertial technique and the shrinking projection method. Some numerical experiments supporting our results are also given.
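To make the parallel idea concrete, the following Python sketch (our own illustration, not the authors' PCHISE algorithm: the shrinking set C_{n+1} and its quadratic program are omitted) runs one extra-gradient sub-step per operator ג_i in parallel and keeps the candidate farthest from the current point, the usual selection rule in parallel hybrid methods. Both operators and all parameters are illustrative.

```python
import numpy as np

def proj_ball(x, r=1.0):
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def parallel_step(ops, x, lam):
    """One parallel sweep: an extra-gradient sub-step per operator ג_i,
    then keep the candidate farthest from x (farthest-point selection)."""
    candidates = []
    for g in ops:
        y = proj_ball(x - lam * g(x))
        candidates.append(proj_ball(x - lam * g(y)))
    return max(candidates, key=lambda c: np.linalg.norm(c - x))

# Two monotone Lipschitz operators sharing the common solution 0.
M1 = np.array([[0.2, 1.0], [-1.0, 0.2]])
M2 = np.array([[0.3, -0.5], [0.5, 0.3]])
ops = [lambda v: M1 @ v, lambda v: M2 @ v]

x = np.array([1.0, -1.0])
for _ in range(300):
    x = parallel_step(ops, x, lam=0.2)
```

The full hybrid methods replace the farthest-point shortcut with a projection of x_1 onto the shrinking set C_{n+1}, which is what guarantees strong convergence.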
The outline of this work is as follows. In the next section, we give the definitions and lemmas used in the study of the strong convergence analysis. Strong convergence results for our procedures are obtained in Section 3. Finally, in Section 4, two non-trivial computational examples are incorporated to discuss the performance of our algorithms and support the theoretical results.

Definition and Necessary Lemmas
In this section, we recall some definitions and results which will be used later.

Definition 1 ([24]). For all x, ℘ ∈ H, a nonlinear operator ג is called (iv) maximal monotone if it is monotone and its graph is not a proper subset of the graph of any other monotone mapping.

Here, the set Θ(ג) = {x ∈ C : גx = x} denotes the set of all fixed points of the mapping ג.

Let H be a real Hilbert space (rHs). Then, for each x, ℘ ∈ H and ø ∈ [0, 1],

‖øx + (1 − ø)℘‖² = ø‖x‖² + (1 − ø)‖℘‖² − ø(1 − ø)‖x − ℘‖².

For each x ∈ H, the projection P_C is defined by P_C x = arg min{‖℘ − x‖ : ℘ ∈ C}. Moreover, P_C x exists and is unique because C is a nonempty ccs of H. The projection P_C : H → C has the following properties: (ii) for all ℘ ∈ C and x ∈ H, ⟨x − P_C x, ℘ − P_C x⟩ ≤ 0.

Lemma 3 ([25]). Suppose that ג is a monotone, hemi-continuous mapping from C into H, where C is a nonempty ccs of a Hs H. Then ℘* ∈ VI(ג, C) if and only if ⟨ג(℘), ℘ − ℘*⟩ ≥ 0 for all ℘ ∈ C.

Lemma 4 ([26]). Suppose that C ≠ ∅ is a ccs of a Hs H. Given x, ℘, Λ ∈ H and ι ∈ ℝ, the set {v ∈ C : ‖℘ − v‖² ≤ ‖x − v‖² + ⟨Λ, v⟩ + ι} is closed and convex.

The normal cone N_C to the set C at a point x ∈ C is defined by N_C(x) = {w ∈ H : ⟨w, ℘ − x⟩ ≤ 0 for all ℘ ∈ C}. With this notion, the following result is very important.
Lemma 5 ([27]). Suppose that ג is a monotone, hemi-continuous mapping from C into H, where C is a nonempty ccs of a Hs H, with D(ג) = C. Let M be the mapping defined by

M(x) = ג(x) + N_C(x) if x ∈ C, and M(x) = ∅ otherwise.

Then M is maximal monotone and M⁻¹(0) = VI(ג, C).
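Property (ii) of the metric projection above (the variational characterization ⟨x − P_C x, ℘ − P_C x⟩ ≤ 0 for all ℘ ∈ C) can be sanity-checked numerically. In this sketch of ours, the ball, the dimension, and the sample count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x, r=1.0):
    """Projection onto the closed unit ball, a simple ccs."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

# Characterization (ii): p = P_C(x) iff <x − p, w − p> <= 0 for every w in C.
x = rng.normal(size=5) * 3.0          # a point typically outside the ball
p = proj_ball(x)
max_inner = max((x - p) @ (w - p)     # should never be positive
                for w in (proj_ball(rng.normal(size=5)) for _ in range(1000)))
```

Geometrically, the inequality says C lies entirely in the half-space on the far side of the supporting hyperplane through P_C x, which is exactly why the half-space ξ_n in the sub-gradient extra-gradient method contains C.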

Main Theorems
This part is devoted to discussing the strong convergence of our proposed algorithms under the following considerations.

Theorem 1. Let ג_i : C_i → H, i = 1, …, N, be a finite collection of monotone and L-Lipschitz continuous mappings and suppose that the solution set Θ = ∩_{i=1}^N VI(ג_i, C_i) is nonempty. Let {x_n} be the sequence generated by x_0, x_1 ∈ C, C_1^i = C_1 = C for all i = 1, …, N, and

Υ_n = x_n + π_n(x_n − x_{n−1}),
℘_n^i = P_{C_i}(Υ_n − λ_n ג_i(Υ_n)),
Λ_n^i = P_{ξ_n^i}(Υ_n − λ_n ג_i(℘_n^i)), where ξ_n^i = {z ∈ H : ⟨Υ_n − λ_n ג_i(Υ_n) − ℘_n^i, z − ℘_n^i⟩ ≤ 0},
C_{n+1} = {z ∈ C_n : ‖Λ_n − z‖ ≤ ‖Υ_n − z‖}, with Λ_n the farthest of the Λ_n^i from Υ_n,
x_{n+1} = P_{C_{n+1}} x_1,

where λ_n and π_n are suitable step-size and inertial parameters with λ_n L < 1. Then the sequence {x_n} converges strongly to u = P_Θ x_1.

Proof. The proof is divided into the following steps.
Step 1. Show that

‖Λ_n^i − ℘*‖² ≤ ‖Υ_n − ℘*‖² − r(‖℘_n^i − Υ_n‖² + ‖Λ_n^i − ℘_n^i‖²), (6)

where ℘* ∈ Θ and r = 1 − λ_n L > 0.
Let ℘* ∈ Θ. Then, by Lemma 1 (i), we can write (7); by simple calculations, we can also find (8) and, similarly, (9). Since ג_i is monotone on C_i and ℘_n^i ∈ C_i, we get ⟨ג_i(℘_n^i) − ג_i(℘*), ℘_n^i − ℘*⟩ ≥ 0. This, together with ℘* ∈ VI(ג_i, C_i), yields

⟨ג_i(℘_n^i), ℘_n^i − ℘*⟩ ≥ 0. (10)

Since Λ_n^i ∈ ξ_n^i, the definition of the metric projection onto ξ_n^i gives

⟨Υ_n − λ_n ג_i(Υ_n) − ℘_n^i, Λ_n^i − ℘_n^i⟩ ≤ 0; (11)

thus, by (10), we obtain a refined bound. Put s_n^i = Υ_n − λ_n ג_i(℘_n^i) and write again Λ_n^i = P_{ξ_n^i}(s_n^i). From Lemma 2 (ii) and (9), one can write (12); from (11), we have (13). Using (13) in (12) and applying (7) and (8), we arrive at the desired estimate. Hence, we have inequality (6).
Step 2. Show that x_{n+1} is well-defined for all x_1 ∈ C and Θ ⊂ C_{n+1}. Since each ג_i is Lipschitz continuous, Lemma 3 confirms that VI(ג_i, C_i) is closed and convex for all i = 1, …, N. Hence, Θ is closed and convex. It follows from the definition of C_{n+1} and Lemma 4 that C_{n+1} is closed and convex for each n ≥ 1.
Step 3. Prove that lim_{n→∞} ‖x_n − x_1‖ exists. Since Θ ≠ ∅ is a ccs of H, there is a unique u ∈ Θ such that u = P_Θ x_1. From x_n = P_{C_n} x_1, C_{n+1} ⊂ C_n and x_{n+1} ∈ C_n, we get ‖x_n − x_1‖ ≤ ‖x_{n+1} − x_1‖. On the other hand, as Θ ⊂ C_n, we have ‖x_n − x_1‖ ≤ ‖u − x_1‖. This proves that {‖x_n − x_1‖} is bounded and non-decreasing; hence, lim_{n→∞} ‖x_n − x_1‖ exists.

Step 4. Prove that, for all i = 1, …, N, the following relation holds. From x_{n+1} ∈ C_{n+1} ⊂ C_n and x_n = P_{C_n} x_1, we get ‖x_{n+1} − x_n‖² ≤ ‖x_{n+1} − x_1‖² − ‖x_n − x_1‖². Letting n → ∞ in this inequality and using Step 3, we find lim_{n→∞} ‖x_{n+1} − x_n‖ = 0.
Proof. By arguing similarly to the proof of Theorem 1, we obtain that Θ and C_{n+1} are closed and convex and Θ ⊂ C_{n+1} for all n ≥ 1. We have already shown that {x_n}, {℘_n} and {Λ_n} are bounded. Now, let p be a weak cluster point of the sequence {x_n} with subsequence {x_{n_k}}. Let i ∈ {1, 2, …, N} be fixed; since the set of indices is finite, we may assume that x_{n_k} ⇀ p and [n_k] = i for all k. Also, (27) gives ℘_{n_k} ⇀ p as k → ∞. By the same argument as in (22)–(25), one gets p ∈ VI(ג_i, C_i) for all i, and hence p ∈ Θ. The rest of the proof follows immediately from the proof of Theorem 1.

Numerical Experiments
In this section, we consider two numerical examples to illustrate the efficiency of the proposed algorithms. The MATLAB codes were run in MATLAB version 9.5 (R2018b) on an Intel(R) Core(TM) i5-6200 CPU @ 2.30 GHz (2.40 GHz) PC with 8.00 GB RAM. We use quadratic programming to solve the minimization problems. (1) For Van Hieu's Algorithm 3.1 in [23] (Alg. 1), we use λ = 1/(2L). (2) For our proposed algorithms (Alg. 2), we use λ_n = 1/(2L) and π_n = 0.2.

Example 1.
Let the operators ג_i be defined on the convex set C ⊂ R^m by ג_i(x) = (B_i B_i^T + S_i + D_i)x + q_i, where q_i ∈ R^m, B_i is an m × m matrix, S_i is an m × m skew-symmetric matrix, and D_i is an m × m diagonal matrix whose diagonal entries are non-negative. All these matrices and the vectors q_i are randomly generated (B = rand(m), C = rand(m), S = 0.5C − 0.5C^T, D = diag(rand(m, 1))) with entries in (0, 1). The feasible set C_i = C ⊂ R^m is a cc set defined by C = {x ∈ R^m : Ax ≤ d}, where A is a 20 × m matrix and d is a non-negative vector. It is clear that each ג_i is monotone and L-Lipschitz continuous with L = max{‖B_i B_i^T + S_i + D_i‖ : i = 1, …, N}. In this example, we choose q_i = 0; thus, the solution set is Ω = {0}. Throughout Example 1, we use x_0 = x_1 = (1, 1, …, 1) and D_n = C_n.

Example 2. Let C = {x ∈ H : ‖x‖ ≤ 1} be the unit ball. Define the operators ג_i : C_i → H, for all x ∈ C, t ∈ [0, 1] and i = 1, 2, as in [14]. As shown in [14], each ג_i is monotone (hence pseudo-monotone) and L-Lipschitz continuous with L = 2. Moreover, the solution set of the CSVIP for the operators ג_i on C_i is Ω = {0}. Throughout Example 2, we use x_0 = x_1 = t and D_n = C_n.
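A Python transcription of the Example 1 operator setup (mirroring the MATLAB rand calls above; the seed and the dimension m = 10 are illustrative choices of ours) also checks the two properties claimed for ג_i: monotonicity and the Lipschitz constant.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 10
B = rng.random((m, m))            # B = rand(m)
Cmat = rng.random((m, m))         # C = rand(m)
S = 0.5 * Cmat - 0.5 * Cmat.T     # skew-symmetric part
D = np.diag(rng.random(m))        # diagonal with non-negative entries
M = B @ B.T + S + D               # ג(x) = M x  (here q = 0)

L = np.linalg.norm(M, 2)          # Lipschitz constant ‖B Bᵀ + S + D‖

# Monotonicity: <ג(u) − ג(v), u − v> = (u−v)ᵀ M (u−v) >= 0, because the
# symmetric part B Bᵀ + D is positive semi-definite and S contributes 0.
u, v = rng.normal(size=m), rng.normal(size=m)
mono = (M @ u - M @ v) @ (u - v)
```

Since q = 0 and 0 is feasible, ג(0) = 0 confirms that 0 solves each VI, consistent with Ω = {0} above.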

Discussion
We have the following observations concerning the above-mentioned experiments: (i) Figures 1 and 2 and Table 1 demonstrate the behavior of both algorithms as the size m of the problem varies. The performance of the algorithms depends on the size of the problem: more time and a significantly larger number of iterations are required for large-dimensional problems. In this case, the inertial effect strengthens the efficiency of the algorithm and improves the convergence rate. (ii) Figure 3 and Table 2 display the behavior of both algorithms while the number of problems N varies. The performance of the algorithms also depends on the number of problems involved: roughly the same number of iterations is required, but the execution time depends strongly on N. (iii) Figures 4-6 and Table 3 show the behavior of both algorithms as the tolerance varies. As the tolerance approaches zero, the number of iterations and the elapsed time increase. (iv) Based on the numerical results, our methods are effective and successful in finding solutions of the VIP, and our algorithms converge faster than the algorithms of Hieu [23].

Conclusions
In this manuscript, we propose two strongly convergent parallel and cyclic hybrid inertial CQ-sub-gradient extra-gradient algorithms for solving the CSVIP. This problem consists of finding a common solution to a system of unrelated variational inequalities corresponding to set-valued mappings in a Hs. The algorithms presented in this article combine the inertial technique, the shrinking projection method, and CQ terms with parallel and cyclic hybrid sub-gradient extra-gradient algorithms, in order to develop practical numerical methods when the number of sub-problems is large. Finally, non-trivial numerical examples are given to verify the efficiency of the proposed parallel and cyclic algorithms.