On Strengthened Extragradient Methods with Non-Convex Combination and an Adaptive Step-Size Rule for Equilibrium Problems

Abstract: Symmetries play a vital role in the study of physical phenomena in diverse areas such as dynamical systems, optimization, physics, scientific computing, engineering, mathematical biology, chemistry, and medicine, to mention a few. These phenomena mostly reduce to solving equilibrium-like problems in abstract spaces. Motivated by these facts, this research provides two innovative modified extragradient strategies for solving pseudomonotone equilibrium problems in a real Hilbert space under a Lipschitz-like bifunction constraint. These strategies use step sizes that are updated at each iteration in terms of the previous iterates. Their strength lies in the fact that they were developed with no prior knowledge of the Lipschitz-type parameters and without any line-search procedure. Strong convergence theorems for the proposed strategies are proved under mild assumptions. Various numerical tests are reported to demonstrate the numerical behavior of the techniques and to contrast them with others.


Introduction
Let Σ be a nonempty, closed, and convex subset of a real Hilbert space Π. The inner product and norm are denoted by ⟨·, ·⟩ and ‖·‖, respectively. Furthermore, R and N symbolize the set of real numbers and the set of natural numbers, respectively. Let R : Π × Π → R be a bifunction satisfying R(r_1, r_1) = 0 for all r_1 ∈ Σ, with equilibrium problem solution set EP(R, Σ), and let s^* = P_{EP(R,Σ)}(θ), where θ denotes the zero element of Π. The equilibrium problem [1,2] for R on Σ is to: Find s^* ∈ Σ such that R(s^*, r_1) ≥ 0, ∀ r_1 ∈ Σ. (1)
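For the reader's convenience, the two standard structural assumptions on R used throughout this class of methods (pseudomonotonicity and Lipschitz-type continuity with constants c_1, c_2) can be stated in their usual formulation as follows; the exact labeling (R1)-(R4) used later may group them differently:

```latex
% Pseudomonotonicity of R on \Sigma:
R(r_1, r_2) \ge 0 \;\Longrightarrow\; R(r_2, r_1) \le 0,
\qquad \forall\, r_1, r_2 \in \Sigma.
% Lipschitz-type continuity: there exist constants c_1, c_2 > 0 such that
R(r_1, r_3) \le R(r_1, r_2) + R(r_2, r_3)
  + c_1 \lVert r_1 - r_2 \rVert^2 + c_2 \lVert r_2 - r_3 \rVert^2,
\qquad \forall\, r_1, r_2, r_3 \in \Sigma.
```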
The above framework is a general mathematical model that incorporates a variety of problems, including vector and scalar minimization problems, saddle point problems, variational inequality problems, complementarity problems, Nash equilibrium problems in non-cooperative games, and inverse optimization problems [1,3,4]. Problem (1) is closely connected to the Ky Fan inequality on the grounds of Fan's pioneering contributions to the field [2]. It is also important to consider approximate solutions when the problem has no exact solution or its solution is difficult to compute. Several methodologies have been proposed and tested to tackle various classes of equilibrium problems of the form (1). Many successful algorithmic techniques, together with their theoretical properties, have already been proposed to solve problem (1) in both finite- and infinite-dimensional spaces.
The regularization technique is one of the most significant methods for dealing with ill-posed problems in various subfields of applied and pure mathematics. In the equilibrium setting, the regularization approach converts the original monotone equilibrium problem into a family of strongly monotone equilibrium subproblems. As a result, each subproblem is strongly monotone, has a unique solution, and is computationally tractable. Such a subproblem may be resolved more effectively than the initial problem, and the regularized solutions converge to a solution of the original problem as the regularization parameters tend to an appropriate limit. The two most prevalent regularization schemes are the proximal point method and Tikhonov regularization. These approaches were recently extended to equilibrium problems [5-13]. Techniques for non-monotone equilibrium problems can be found in [14-26].
The proximal method [27] is an innovative approach for solving equilibrium problems that is founded on solving regularized minimization subproblems. Together with Korpelevich's extragradient technique [28] for the saddle point problem, this procedure became known as the two-step extragradient method in [29]. Tran et al. [29] constructed an iterative sequence {s_k} in the following manner: m_k = argmin_{y∈Σ} {λ R(s_k, y) + (1/2)‖s_k − y‖²}, r_k = argmin_{y∈Σ} {λ R(m_k, y) + (1/2)‖s_k − y‖²}, where 0 < λ < min{1/(2c_1), 1/(2c_2)}. The iterative sequence created by this approach exhibits only weak convergence, and prior knowledge of the Lipschitz-type parameters c_1, c_2 is necessary in order to use it; these parameters are frequently unknown or difficult to estimate. To address this issue, Hieu et al. [30] introduced the following adaptation of the approach in [31] for equilibrium problems: let [t]_+ = max{t, 0}, select s_1 ∈ Σ, µ ∈ (0, 1) and λ_0 > 0, and update the step size by a rule of the type λ_{k+1} = min{λ_k, µ(‖s_k − m_k‖² + ‖r_k − m_k‖²)/(2[R(s_k, r_k) − R(s_k, m_k) − R(m_k, r_k)]_+)}. To solve a pseudomonotone equilibrium problem, the authors of [32] suggested a non-convex combination iterative technique, choosing 0 < λ_k < min{1/(2c_1), 1/(2c_2)}. Its main contribution is a strongly convergent iterative sequence obtained without hybrid projection or viscosity techniques. The main objective of this study is to use well-known projection algorithms, which are in general easier to apply due to their efficient and simple mathematical computations. Inspired by the works of [30,33], we design an explicit subgradient extragradient method to solve pseudomonotone equilibrium problems and certain classes of variational inequality and fixed-point problems. Our techniques are a variation on the approaches described in [32]. Strong convergence results for the sequences generated by the two methods are achieved under specific, moderate conditions. Some applications to variational inequality and fixed-point problems are given.
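To make the two-step structure concrete, here is a minimal Python sketch for the variational-inequality special case R(x, y) = ⟨G(x), y − x⟩ on a box, where both proximal subproblems reduce to projections. The operator G, the box constraint, and all parameter values are illustrative assumptions, not the test problems of this paper.

```python
import numpy as np

def adaptive_extragradient(G, x0, lo, hi, lam0=0.5, mu=0.4, iters=200):
    """Two-step extragradient iteration with a self-adaptive step size.

    The update lam <- min(lam, mu * ||x - y|| / ||G(x) - G(y)||) needs no
    Lipschitz constant, in the spirit of the adaptive rules cited above
    (a sketch, not the exact method of the paper)."""
    x, lam = x0.astype(float), lam0
    for _ in range(iters):
        y = np.clip(x - lam * G(x), lo, hi)       # extrapolation step onto the box
        x_next = np.clip(x - lam * G(y), lo, hi)  # correction step
        d = np.linalg.norm(G(x) - G(y))
        if d > 1e-12:                             # shrink only when informative
            lam = min(lam, mu * np.linalg.norm(x - y) / d)
        x = x_next
    return x

# Toy monotone affine operator G(x) = Ax + b on the box [-1, 1]^2;
# its VI solution is the interior point x* = -A^{-1} b = (-0.6, 0.2).
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, -1.0])
sol = adaptive_extragradient(lambda x: A @ x + b, np.array([0.5, 0.5]), -1.0, 1.0)
```

Because the operator here is affine with ‖A d‖ = √5 ‖d‖, the adaptive rule settles at lam ≈ µ/√5 after the first iteration without ever being told the Lipschitz constant.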
Moreover, experimental investigations show that the proposed strategies are more effective than the existing one in [32].
The rest of the article is organized as follows: Section 2 includes basic definitions and lemmas. Section 3 proposes the new methods and their convergence analysis theorems. Section 4 contains several applications of our findings to variational inequality and fixed-point problems. Section 5 contains numerical tests to demonstrate the computational effectiveness of our proposed methods.

Preliminaries
Suppose that g : Σ → R is a convex function. The subdifferential of g at r_1 ∈ Σ is defined as ∂g(r_1) = {z ∈ Π : g(r_2) − g(r_1) ≥ ⟨z, r_2 − r_1⟩, ∀ r_2 ∈ Σ}. The normal cone of Σ at r_1 ∈ Σ is defined as N_Σ(r_1) = {z ∈ Π : ⟨z, r_2 − r_1⟩ ≤ 0, ∀ r_2 ∈ Σ}.

Lemma 1. ([34]) Suppose that a convex function g : Σ → R is subdifferentiable and lower semicontinuous on Σ. Then r_1 ∈ Σ is a minimizer of g if and only if 0 ∈ ∂g(r_1) + N_Σ(r_1), where ∂g(r_1) and N_Σ(r_1) denote the subdifferential of g at r_1 ∈ Σ and the normal cone of Σ at r_1, respectively.

Main Results
We introduce a method and establish strong convergence results for it. The detailed algorithm is given below. The following lemma shows that the step-size sequence λ_k generated by the rule in Step 4 is monotonically decreasing and bounded, as required for the convergence of the iterative sequence.
The following lemma can be used to verify the boundedness of an iterative sequence.
Proof. By the definition of r_k and Lemma 1, we obtain From the definition of Π_k, we have Using the value of λ_{k+1}, we can write Expressions (2)-(4) imply that (see Lemma 3.3 in [42]): The strong convergence analysis for Algorithm 1 is presented in the following theorem.

Algorithm 1 Self-Adaptive Explicit Extragradient Method with Non-Convex Combination
Step 0: Let Step 1: Compute If m_k = s_k, stop; then s_k ∈ EP(R, Σ). Otherwise, go to the next step.
Step 2: Step 4: Revise the step size as follows and continue: Set k := k + 1 and move back to Step 1.
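As a sketch of how a step-size revision of this adaptive type can be implemented, the following Python function follows the rule of [30] quoted in the Introduction; the exact rule of Algorithm 1 may differ in detail, and the bifunction below is a toy assumption.

```python
import numpy as np

def update_step_size(lam, mu, R, s, m, r):
    """Adaptive step-size rule of the type cited from [30] (sketch):

        lam_{k+1} = min{ lam_k,
            mu*(||s-m||^2 + ||r-m||^2) / (2*[R(s,r) - R(s,m) - R(m,r)]_+) },

    with the convention a/0 = +inf, so the step is nonincreasing and
    requires no knowledge of the Lipschitz-type constants c1, c2."""
    denom = 2.0 * max(R(s, r) - R(s, m) - R(m, r), 0.0)
    if denom == 0.0:
        return lam  # [.]_+ = 0: treat the quotient as +inf, keep lam
    num = mu * (np.linalg.norm(np.atleast_1d(s - m)) ** 2
                + np.linalg.norm(np.atleast_1d(r - m)) ** 2)
    return min(lam, num / denom)

# Toy one-dimensional bifunction R(x, y) = (y - x) * G(x) with G(x) = 2x.
R = lambda x, y: (y - x) * 2.0 * x
lam1 = update_step_size(0.5, 0.4, R, 1.0, 0.0, 0.5)
```

For these toy values the bracketed quotient equals 0.25, so the step size shrinks from 0.5 to 0.25 and can never grow again.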
Theorem 1. Let a sequence {s k } be generated by Algorithm 1. Then, sequence {s k } converges strongly to s * ∈ EP(R, Σ).

Proof. Given that
As a result, there exists a finite number k_1 ∈ N such that Using Lemma 7, we have Using Lemma 3 (i), we derive, for any k ≥ k_1, Notice that By Lemma 3 (ii) and (9), expression (10) implies that (see Equation (3.6) in [32]) The remainder of the proof can be split into two cases: Case 1: Thus, lim_{k→+∞} ‖s_k − s^*‖ exists; let lim_{k→+∞} ‖s_k − s^*‖ = l. By relation (7), we have The existence of lim_{k→+∞} ‖s_k − s^*‖ = l provides that and accordingly Thus, {s_k} is a bounded sequence. Hence, we may select a subsequence {s_{k_j}} of {s_k} such that {s_{k_j}} converges weakly to a certain s ∈ Σ and lim sup From (13), the subsequence {m_{k_j}} also converges weakly to s as j → +∞. Due to expression (3), we obtain Letting j → +∞ entails that As a result, s ∈ EP(R, Σ). Eventually, using (15) and Lemma 2 (ii), we derive The desired result follows from the assumptions on φ_k, δ_k, together with (11), (13), (14), (18), and Lemma 4.
Case 2: According to Lemma 5, there exists a nondecreasing sequence {n_j} ⊂ N with n_j → +∞ such that ‖s_{n_j} − s^*‖ ≤ ‖s_{n_j+1} − s^*‖ and ‖s_j − s^*‖ ≤ ‖s_{n_j+1} − s^*‖, for all j ∈ N.
By expression (7), we have ‖s_{n_j} − m_{n_j}‖² + ‖r_{n_j} − m_{n_j}‖² ≤ ‖s_{n_j} − s^*‖² − ‖s_{n_j+1} − s^*‖² + φ_{n_j} δ_{n_j} ‖s^*‖² + φ_{n_j} (‖s_{n_j} − m_{n_j}‖² + ‖r_{n_j} − m_{n_j}‖²), ∀ n_j ≥ k_1. (20) The above expression implies that lim_{j→+∞} ‖s_{n_j} − m_{n_j}‖ = lim_{j→+∞} ‖r_{n_j} − m_{n_j}‖ = 0, and thus lim_{j→+∞} ‖s_{n_j} − r_{n_j}‖ ≤ lim_{j→+∞} ‖s_{n_j} − m_{n_j}‖ + lim_{j→+∞} ‖m_{n_j} − r_{n_j}‖ = 0.
By arguments identical to those in expression (18), we have lim sup_{j→+∞} ⟨−s^*, s_{n_j} − s^*⟩ ≤ 0.
From expression (11), we obtain It is given that ‖s_{n_j} − s^*‖ ≤ ‖s_{n_j+1} − s^*‖, which implies that Expressions (19) and (25) imply that Because φ_{n_j} → 0, it follows from expressions (21) and (22) that Consequently, s_k → s^*. This is the required result.
We now modify Algorithm 1 and prove a strong convergence theorem for the modification. For simplicity, we adopt the notation [t]_+ = max{0, t} and the conventions 0/0 = +∞ and a/0 = +∞ (a ≠ 0). The detailed algorithm is given below. Lemma 8. Let R : Π × Π → R be a bifunction satisfying conditions (R1)-(R4). For any s^* ∈ EP(R, Σ) ≠ ∅, we have The strong convergence analysis for Algorithm 2 is presented in the following theorem.

Algorithm 2 Modified Self-Adaptive Explicit Extragradient Method with Non-Convex Combination
Step 0: Let Step 1: Compute If m_k = s_k, then s_k is a solution of problem (EP); stop. Otherwise, go to the next step.

Step 2: First, choose ω_k ∈ ∂R(P_Σ(s_k), m_k) satisfying P_Σ(s_k) − λ_k ω_k − m_k ∈ N_Σ(m_k), and generate the half-space Step 3: Compute Step 4: Update the step size as follows: Set k := k + 1 and go back to Step 1.
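The half-space construction of Step 2 is the subgradient-extragradient device: the second projection is onto a half-space containing Σ, which has a closed-form projection, rather than onto Σ itself. Below is a minimal Python sketch for the variational-inequality case R(x, y) = ⟨G(x), y − x⟩; the operator, the box constraint, and all parameter values are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def project_halfspace(z, w, m):
    """Closed-form projection onto H = {u : <w, u - m> <= 0}."""
    viol = np.dot(w, z - m)
    return z if viol <= 0.0 else z - (viol / np.dot(w, w)) * w

def subgradient_extragradient(G, proj_C, x0, lam=0.2, iters=300):
    """Subgradient-extragradient structure behind Step 2 (sketch):
    only the first projection is onto the feasible set C; the second is
    onto the cheap half-space H_k = {u : <v - y, u - y> <= 0}, which
    contains C by the projection characterization."""
    x = x0.astype(float)
    for _ in range(iters):
        v = x - lam * G(x)
        y = proj_C(v)                                    # projection onto C
        x = project_halfspace(x - lam * G(y), v - y, y)  # projection onto H_k
    return x

# Toy monotone affine operator G(x) = Ax + b on the box [-1, 1]^2,
# whose VI solution is the interior point (-0.6, 0.2).
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, -1.0])
sol = subgradient_extragradient(lambda x: A @ x + b,
                                lambda z: np.clip(z, -1.0, 1.0),
                                np.array([0.8, -0.3]))
```

The design point is cost: when Σ is complicated, P_Σ is the expensive operation, and replacing the second P_Σ with a half-space projection halves the number of expensive projections per iteration.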

Theorem 2. Let {s_k} be a sequence generated by Algorithm 2, and suppose that R satisfies conditions (R1)-(R4). Then {s_k} converges strongly to an element s^* of EP(R, Σ).

Proof. Using Lemma 8, we have
Since λ_k → λ, there exists a fixed number ϵ_0 ∈ (0, 1 − µ) such that Thus, there exists a fixed number m_1 ∈ N such that Combining expressions (28) and (29), we obtain The value of s_{k+1} together with Lemma 3 provides (see Equation (3.17) in [32]) The rest of the discussion is divided into two cases: Case 1: Assume that there exists an integer m_2 ∈ N (m_2 ≥ m_1) such that Thus, lim_{k→+∞} ‖s_k − s^*‖ exists. By expression (28), we have The above, together with the assumptions on λ_k, φ_k and ϕ_k, yields that As a result, {s_k} is bounded, and we may choose a subsequence {s_{k_j}} of {s_k} such that {s_{k_j}} converges weakly to s ∈ Σ and lim sup As with expression (3) together with (34), we have Letting j → +∞ indicates that R(s, y) ≥ 0, ∀ y ∈ Σ. It follows that s ∈ EP(R, Σ). In the end, by expression (35) and Lemma 2, we obtain The needed result is obtained using Equation (31) and Lemma 4. Case 2: Assume that there exists a subsequence {k_i} of {k} such that Thus, by Lemma 5, there exists a nondecreasing sequence {n_j} ⊂ N with n_j → +∞ such that ‖s_{n_j} − s^*‖ ≤ ‖s_{n_j+1} − s^*‖ and ‖s_j − s^*‖ ≤ ‖s_{n_j+1} − s^*‖, for all j ∈ N. (38) Using expression (31), we have The remaining proof is analogous to Case 2 of Theorem 1.

Applications
In this section, we derive consequences of our main results for solving fixed-point and variational inequality problems. An operator T : Σ → Σ is said to be (i) a κ-strict pseudocontraction if ‖T x − T y‖² ≤ ‖x − y‖² + κ‖(x − T x) − (y − T y)‖² for all x, y ∈ Σ; (ii) weakly sequentially continuous on Σ if T(s_k) ⇀ T(s^*) for each sequence {s_k} in Σ satisfying s_k ⇀ s^*.
Note: If we take R(x, y) = ⟨x − T x, y − x⟩, ∀ x, y ∈ Σ, the equilibrium problem converts into the fixed-point problem with 2c_1 = 2c_2 = (3 − 2κ)/(1 − κ). The algorithm's m_k and r_k values become (for more information, see [32]): The following fixed-point results are derived from the results in Section 3.
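A small numerical illustration of this reduction: with R(x, y) = ⟨x − T x, y − x⟩, the equilibrium residual becomes G = I − T, and the extragradient scheme iterates on G. The operator T and all parameters below are toy assumptions, with Σ taken as the whole space so that both projections are trivial.

```python
import numpy as np

def fixed_point_via_ep(T, x0, lam=0.3, mu=0.4, iters=200):
    """Fixed-point problems as equilibrium problems (sketch).

    With R(x, y) = <x - Tx, y - x>, EP(R, Sigma) = Fix(T), so the
    two-step extragradient scheme reduces to iterating on G = I - T
    with the same adaptive step-size idea."""
    G = lambda x: x - T(x)
    x = x0.astype(float)
    for _ in range(iters):
        y = x - lam * G(x)        # extrapolation step
        x_next = x - lam * G(y)   # correction step
        d = np.linalg.norm(G(x) - G(y))
        if d > 1e-12:
            lam = min(lam, mu * np.linalg.norm(x - y) / d)
        x = x_next
    return x

# Toy strict contraction T(x) = 0.5 x + c; its unique fixed point is 2c.
c = np.array([1.0, -2.0])
fp = fixed_point_via_ep(lambda x: 0.5 * x + c, np.zeros(2))
```

For this contraction the iterates converge linearly to the fixed point 2c = (2, -4), mirroring the strong convergence asserted in the corollaries below.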

Corollary 1.
Suppose that Σ is a nonempty, closed, and convex subset of a Hilbert space Π. Let T : Σ → Σ be a weakly continuous κ-strict pseudocontraction with Fix(T) ≠ ∅. Let s_1 ∈ Σ, and let the sequence {s_k} be created as follows: The relevant step size λ_{k+1} is obtained: Then the sequence {s_k} strongly converges to s^* = P_{Fix(T)}(θ).

Corollary 2.
Suppose that Σ is a nonempty, closed, and convex subset of a Hilbert space Π. Let T : Σ → Σ be a weakly continuous κ-strict pseudocontraction with Fix(T) ≠ ∅. Let s_1 ∈ Π, and let the sequence {s_k} be created as follows: The relevant step size λ_{k+1} is obtained as follows: Then the sequence {s_k} strongly converges to s^* = P_{Fix(T)}(θ).
An operator G : Π → Π is said to be L-Lipschitz continuous on Σ if ‖G(x) − G(y)‖ ≤ L‖x − y‖ for all x, y ∈ Σ. Note: If R(x, y) := ⟨G(x), y − x⟩ for all x, y ∈ Σ, the equilibrium problem converts into a variational inequality problem with L = 2c_1 = 2c_2 (for more information, see [44]). From the values of m_k and r_k in Algorithm 1, we derive Since ω_k ∈ ∂R(s_k, m_k), we obtain and consequently 0 ≤ ⟨G(s_k) − ω_k, z − m_k⟩, ∀ z ∈ Π. This implies that Assumption 1. Assume that G fulfills the following conditions: (i) G is pseudomonotone on Σ and VI(G, Σ) is nonempty; (ii) G is L-Lipschitz continuous on Σ with L > 0; (iii) lim sup_{k→+∞} ⟨G(s_k), y − s_k⟩ ≤ ⟨G(s^*), y − s^*⟩ for any y ∈ Σ and any {s_k} ⊂ Σ with s_k ⇀ s^*.
Corollary 3. Let G : Σ → Π be an operator satisfying Assumption 1. Let the sequence {s_k} be generated as follows: Next, the step size λ_{k+1} is obtained as follows: Then the sequence {s_k} strongly converges to the solution s^* ∈ VI(G, Σ).

Numerical Illustration
The computational results in this section show that our proposed algorithms are more efficient than Algorithms 3.1 and 3.2 in [32]. The MATLAB programs were executed in MATLAB version 9.5 (R2018b) on a PC with an Intel(R) Core(TM) i3-4010U CPU @ 1.70 GHz and 4.00 GB RAM. In all our algorithms, we used the built-in MATLAB fmincon function to solve the minimization subproblems. (i) The design variables for Algorithm 3.1 (Algo. 3.1) and Algorithm 3.2 (Algo. 3.2) in [32] take different values, given in each example.
(ii) The settings for the design variables for Algorithm 1 (Algo. 1) and Algorithm 2 (Algo. 2) use the stopping criterion D_k = ‖s_k − m_k‖ ≤ ϵ and different values of λ_0.
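For readers without MATLAB, the proximal subproblem that fmincon solves at each iteration can be handled by any constrained minimizer. Below is a minimal Python stand-in using projected gradient with a finite-difference gradient; the bifunction, box, and parameters are illustrative assumptions, not the paper's test problems.

```python
import numpy as np

def prox_step(R, s, lam, lo, hi, steps=500, t=0.1):
    """One proximal subproblem of the extragradient scheme,

        m = argmin_{y in [lo, hi]^n}  lam * R(s, y) + 0.5 * ||s - y||^2,

    solved by projected gradient with a central-difference gradient
    (an illustrative stand-in for the paper's use of MATLAB's fmincon)."""
    f = lambda y: lam * R(s, y) + 0.5 * np.dot(s - y, s - y)
    def grad(y, h=1e-6):
        g = np.empty_like(y)
        for i in range(len(y)):         # central differences, coordinate-wise
            e = np.zeros_like(y)
            e[i] = h
            g[i] = (f(y + e) - f(y - e)) / (2.0 * h)
        return g
    y = s.astype(float).copy()
    for _ in range(steps):
        y = np.clip(y - t * grad(y), lo, hi)  # gradient step, then box projection
    return y

# Toy bifunction R(x, y) = <Ax + b, y - x> on the box [-1, 1]^2; for this
# linear-in-y case the exact minimizer is the projection of s - lam*(A s + b).
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, -1.0])
R = lambda x, y: np.dot(A @ x + b, y - x)
m = prox_step(R, np.array([0.5, 0.5]), 0.2, -1.0, 1.0)
```

Since the objective is strongly convex in y, any off-the-shelf bound-constrained solver (fmincon, scipy.optimize.minimize, etc.) returns the same unique minimizer; the loop above is only meant to keep the sketch dependency-free.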

Figures 3 and 4 and Tables 3 and 4 report the corresponding numerical results. As illustrated in [45], G is monotone and L-Lipschitz continuous with L = 2. Figures 5 and 6 and Tables 5 and 6 illustrate the numerical results with s_1 = t and ϵ = 10⁻⁶.

Discussion About Numerical Experiments:
The following conclusions may be drawn from the numerical experiments outlined above: (i) Examples 1-3 report data for the various methods in both finite- and infinite-dimensional domains. The given algorithms outperformed the others in terms of number of iterations and elapsed time in practically all circumstances; all trials demonstrate that the suggested algorithms outperform the previously available techniques. (ii) In most cases, the scale of the problem and the relative standard deviation used impact the algorithms' effectiveness. (iii) An inappropriate choice of the variable step size generates a hump in the error graphs in all examples, but it has no impact on the overall effectiveness of the algorithms. (iv) For large-dimensional problems, all approaches typically took longer and showed significant variation in execution time, whereas the number of iterations varied considerably less.

Conclusions
This paper provides two explicit extragradient-like methods for solving equilibrium problems involving a pseudomonotone, Lipschitz-type bifunction in a real Hilbert space. A new step-size rule has been presented that does not rely on Lipschitz-type constant information. The strong convergence of both algorithms has been established. Several tests are presented to show the numerical behavior of the two algorithms and to compare them with well-known algorithms from the literature.