Abstract
This paper presents a Mann-type inertial accelerated subgradient extragradient algorithm with non-monotonic step sizes for solving split equilibrium and fixed point problems involving pseudomonotone and Lipschitz-type continuous bifunctions and nonexpansive mappings in the framework of real Hilbert spaces. Under sufficient conditions on the control sequences of the parameters involved, we establish a strong convergence theorem for the proposed algorithm, which requires neither prior knowledge of the Lipschitz constants of the bifunctions nor the operator norm of the bounded linear operator. Some numerical experiments are performed to show the efficacy of the proposed algorithm.
Keywords:
split equilibrium problems; split fixed point problems; pseudomonotone bifunction; nonexpansive mapping; subgradient extragradient method; inertial method
MSC:
47H09; 49M37; 65K15; 90C33
1. Introduction
The fixed point problem is a powerful instrument in various mathematical models arising in chemistry, physics, engineering, and economics (see [1,2,3,4]). The fixed point problem is expressed as follows:
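In its conventional formulation (the notation below is the standard one and is recorded here only for reference), the problem is to
$$\text{find } x \in H \ \text{ such that } \ Tx = x,$$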
where H is a real Hilbert space, and is a mapping. The set of fixed points of the mapping T is denoted by . Fixed points of a nonexpansive mapping T can be found by one of the most widely used methods, which was proposed by Mann [5]:
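In its classical form, with the relaxation sequence denoted here by $\{\alpha_n\} \subset [0,1]$ (our labeling, which may differ from the symbols of (2)), the Mann iteration reads
$$x_{n+1} = \alpha_n x_n + (1 - \alpha_n) T x_n, \qquad n \in \mathbb{N},$$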
where C is a nonempty closed convex subset of H, and . In [6], the author showed that if T has a fixed point and , then the sequence generated by (2) converges weakly to a fixed point of T.
On the other hand, since the publication of the paper by Blum and Oettli [7], the equilibrium problem has garnered considerable attention and has been utilized in the study of a range of mathematical problems, such as variational inequality problems, optimization problems, minimax problems, saddle point problems, and Nash equilibrium problems (see [7,8,9,10]). The equilibrium problem is formulated as follows:
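In standard notation, for a bifunction f defined on a nonempty closed convex set C, the equilibrium problem is to
$$\text{find } x^* \in C \ \text{ such that } \ f(x^*, y) \ge 0 \quad \text{for all } y \in C,$$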
where is a bifunction. The solution set of the equilibrium problem (3) is represented by . A well-known approach for solving the equilibrium problem (3) is the proximal point method, which applies when f is a monotone bifunction (see [11]). Unfortunately, if f is only a pseudomonotone bifunction, which is a weaker property than monotonicity, the convergence of the proximal point method is no longer assured. To overcome this obstacle, Tran et al. [12] presented the following extragradient method to solve the equilibrium problem when f is a pseudomonotone and Lipschitz-type continuous bifunction:
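A standard statement of this extragradient scheme, written with a step size $\lambda$ and generic iterate labels (ours, which may differ from the authors' numbering), is
$$\begin{aligned} y_n &= \arg\min_{y \in C}\Big\{\lambda f(x_n, y) + \tfrac{1}{2}\|y - x_n\|^2\Big\},\\ x_{n+1} &= \arg\min_{y \in C}\Big\{\lambda f(y_n, y) + \tfrac{1}{2}\|y - x_n\|^2\Big\}, \end{aligned}$$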
where and are Lipschitz constants of f, and . They showed that the sequence generated by (4) converges weakly to a solution of the equilibrium problem. It is important to note that, when the feasible set C has a complicated structure, the computational performance of the extragradient method, a two-step iteration technique, may deteriorate. Specifically, each iteration requires solving two optimization problems over the feasible set C in order to find and . To overcome this drawback, Hieu [13] extended the following subgradient extragradient method to solve the equilibrium problem when f is a pseudomonotone and Lipschitz-type continuous bifunction:
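A typical formulation of the Halpern-type subgradient extragradient method of [13], written in generic symbols (the subgradient $w_n$, the half-space $T_n$, and the sequence $\alpha_n$ are our labels and may differ from the original statement), is
$$\begin{aligned} y_n &= \arg\min_{y \in C}\Big\{\lambda f(x_n, y) + \tfrac{1}{2}\|y - x_n\|^2\Big\},\\ T_n &= \big\{ v \in H : \langle x_n - \lambda w_n - y_n,\, v - y_n\rangle \le 0 \big\}, \qquad w_n \in \partial_2 f(x_n, y_n),\\ z_n &= \arg\min_{y \in T_n}\Big\{\lambda f(y_n, y) + \tfrac{1}{2}\|y - x_n\|^2\Big\},\\ x_{n+1} &= \alpha_n x_0 + (1 - \alpha_n) z_n, \end{aligned}$$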
where and are Lipschitz constants of f, , with such that . The author showed that the sequence generated by (5) converges strongly to . We highlight that, in the second step, the subgradient extragradient method replaces the optimization problem over the feasible set C by an optimization problem over a half-space in order to obtain at each iteration. As a result, the optimization problem over the feasible set C has to be solved only once per iteration, in order to obtain , and the computational performance of the extragradient method is thereby improved. Additionally, the step sizes of the previously stated algorithms depend on the values and . This indicates that prior knowledge of the values and is necessary for these algorithms. In practical applications, it may be difficult to acquire this kind of data.
At this point, the inertial method, which originates from a discrete version of a second-order dissipative dynamic system [14,15], has been extensively studied as a means of accelerating the convergence of algorithms; see, for instance, [16,17,18,19,20] and the references therein. The key characteristic of this method is that the previous two iterates are used to construct the next iterate. In 2023, by combining the techniques of the subgradient extragradient and inertial methods with the Mann-type method, Panyanak et al. [21] presented the following algorithm to solve the equilibrium and fixed point problems, when T is a -demicontractive mapping and f is a pseudomonotone and Lipschitz-type continuous bifunction:
where the parameter is chosen in with
and the step size is given as
when , , , such that , , , with , , and . They showed that the sequence generated by (6) converges strongly to . It is apparent that (6) effectively handles the unavailability of the Lipschitz constants of f through the use of the adaptive step size. Furthermore, the adaptive step size criterion depends only on previously computed information, so updating the step size at each iteration is straightforward.
In 2016, Dinh et al. [22] introduced the following split equilibrium and fixed point problems:
where and are real Hilbert spaces; and are nonempty closed convex subsets of and , respectively; and are mappings; and are bifunctions; and is a bounded linear operator. Dinh et al. [22] presented the following algorithm by utilizing the extragradient and proximal point methods to solve the split equilibrium and fixed point problems (7), where S and T are nonexpansive mappings, is a pseudomonotone and Lipschitz-type continuous bifunction, and is a monotone bifunction:
where and are Lipschitz constants of , , , with , such that , and , and is the adjoint operator of A. They showed that the sequence generated by (8) converges weakly to a solution of the split equilibrium and fixed point problems (7). This kind of problem has attracted a lot of attention due to its broad applications in intensity-modulated radiation therapy, image restoration, network resource allocation, signal processing, phase retrievals, and other real-world applications (see, e.g., [23,24,25]).
Inspired by the above results, Petrot et al. [26] presented the following algorithm by applying the concept of the extragradient method to solve the split equilibrium and fixed point problems (7), where S and T are nonexpansive mappings and and are pseudomonotone and Lipschitz-type continuous bifunctions:
where and are Lipschitz constants of ; and are Lipschitz constants of , with , with , with , with , with such that ; and h is a -contraction mapping. They showed that the sequence generated by (9) converges strongly to a solution of the split equilibrium and fixed point problems (7). It is noteworthy that the step size of this algorithm is determined by the operator norm of the bounded linear operator A. In order to overcome this drawback, Ezeora et al. [27] presented the following algorithm by employing the ideas of extragradient and inertial methods to solve the split equilibrium and fixed point problems (7), where S and T are nonexpansive mappings and and are pseudomonotone and Lipschitz-type continuous bifunctions:
where the parameter is chosen in with
and for small enough, the step size is given as
when and are Lipschitz constants of ; and are Lipschitz constants of , , , , with , with , with , such that , and ; and h is a -contraction mapping. They showed that the sequence generated by the EIM Algorithm (10) converges strongly to a solution of the split equilibrium and fixed point problems (7). We observe that, in order to guarantee the convergence of the EIM Algorithm, the maximum values of the scalars and depend on the Lipschitz constants of and , respectively.
Motivated by the advantageous method presented in the EIM Algorithm by Ezeora et al. [27], this paper continues to focus on methods that solve the split equilibrium and fixed point problems (7). We consider an updated iterative algorithm that does not require prior knowledge of either the Lipschitz constants of the bifunctions or the operator norm of the bounded linear operator. This approach aims to find the minimum-norm solution for the split equilibrium and fixed point problems (7). Numerical experiments and comparisons with some recently developed algorithms are conducted to evaluate the performance of the introduced algorithm.
This paper is arranged as follows: Section 2 reviews some preliminary definitions and properties for use later on. Section 3 presents the Mann-type inertial accelerated subgradient extragradient algorithm and the corresponding strong convergence result. In Section 4, we discuss the performance of the proposed algorithm in comparison to some recently developed algorithms through numerical experiments. In Section 5, we close this paper with some conclusions.
2. Preliminaries
In this section, we provide some important definitions and properties that are used throughout the work. Let H be a real Hilbert space endowed with inner product , and its corresponding norm . For a sequence in H, we denote the strong convergence and the weak convergence of the sequence to a point by and , respectively. The sets of real numbers and natural numbers are represented by and , respectively.
We begin with some definitions and results concerning the nonlinear mappings.
Definition 1.
Let be a mapping. The mapping T is called nonexpansive if
Remark 1.
We notice that if T is a nonexpansive mapping, is closed and convex; see [28].
Definition 2.
Let be a mapping. The mapping T is called demiclosed at if for each sequence , , and , we have .
Lemma 1
([28]). Let be a nonexpansive mapping having a fixed point. Then, is demiclosed at 0.
In what follows, we collect some fundamental concepts which are needed in the sequel. For each , it is known that
and
for each such that ; see [17].
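Two standard Hilbert-space identities of this type, stated here in generic symbols (our labeling; they are the facts usually invoked in this context), are
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle \qquad \text{for all } x, y \in H,$$
and
$$\|t x + (1 - t) y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2 \qquad \text{for all } x, y \in H \text{ and } t \in [0,1].$$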
For a point , the metric projection of a point x onto C is represented by in the sense of
where C is a nonempty closed convex subset of H.
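In the usual notation, the metric projection onto C is the mapping
$$P_C(x) = \arg\min_{y \in C} \|x - y\|, \qquad x \in H.$$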
Lemma 2
([28,29]). Let C be a nonempty closed convex subset of H. Then, the following results hold:
- (i)
- For each , is well-defined and singleton;
- (ii)
- if and only if , ;
- (iii)
- is a nonexpansive mapping.
Let be a function. The subdifferential of f at is given by
The function f is called subdifferentiable at y if .
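For reference, the standard definition of the subdifferential of a convex function $f : H \to \mathbb{R} \cup \{+\infty\}$ at a point $x \in H$ (our notation) is
$$\partial f(x) = \big\{ w \in H : f(z) \ge f(x) + \langle w, z - x\rangle \ \text{ for all } z \in H \big\}.$$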
Lemma 3
([29]). For each , the subdifferential of a continuous convex function f is a weakly closed and bounded convex set.
Lemma 4
([9]). Let C be a convex subset of H. Suppose that is subdifferentiable on C. Then, is a solution to the following convex problem:
if and only if , where is the normal cone of C at .
The following technical lemmas are necessary to obtain the convergence results.
Lemma 5
([30]). Let and be sequences of non-negative real numbers and be a sequence of real numbers satisfying the following relation:
where with . If and , then .
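A commonly used version of this lemma, stated in generic symbols (our notation; see [30]), reads as follows: if
$$a_{n+1} \le (1 - \gamma_n) a_n + b_n + \gamma_n c_n \qquad \text{for all } n \in \mathbb{N},$$
where $\{\gamma_n\} \subset (0,1)$ with $\sum_{n=1}^{\infty} \gamma_n = \infty$, $\sum_{n=1}^{\infty} b_n < \infty$, and $\limsup_{n \to \infty} c_n \le 0$, then $\lim_{n \to \infty} a_n = 0$.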
Lemma 6
([31]). Let be a sequence of real numbers in which there exists a subsequence of with , for any . Then, there exists a non-decreasing sequence of positive integers with , and the following relations hold:
for any (sufficiently large) numbers . Indeed, is the largest number k in the set satisfying the following relation:
We end this section by providing some definitions and properties relating to the equilibrium problems.
Definition 3.
Let C be a nonempty closed convex subset of H and be a bifunction. The bifunction f is called
- (i)
- Monotone on C if
- (ii)
- Pseudomonotone on C if
- (iii)
- Lipschitz-type continuous on H if there exist constants and satisfying (standard forms of conditions (i)–(iii) are recalled after this definition)
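In standard notation, with the Lipschitz-type constants denoted here by $c_1, c_2 > 0$, the three conditions of Definition 3 are usually written as
$$\begin{aligned} &\text{(i)} && f(x, y) + f(y, x) \le 0 && \text{for all } x, y \in C;\\ &\text{(ii)} && f(x, y) \ge 0 \ \Longrightarrow\ f(y, x) \le 0 && \text{for all } x, y \in C;\\ &\text{(iii)} && f(x, y) + f(y, z) \ge f(x, z) - c_1 \|x - y\|^2 - c_2 \|y - z\|^2 && \text{for all } x, y, z \in H. \end{aligned}$$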
Remark 2.
Note that a monotone bifunction is a pseudomonotone bifunction. However, in general, the converse is not true; see, e.g., [32].
Let C be a nonempty closed convex subset of a real Hilbert space H and be a bifunction. In this paper, we take into account the following assumptions:
- (A1)
- For each fixed , is sequentially weakly upper semicontinuous on C, that is, if is a sequence in C with , then ;
- (A2)
- For each fixed , is convex, subdifferentiable, and lower semicontinuous on H;
- (A3)
- f is pseudomonotone on C;
- (A4)
- f is Lipschitz-type continuous on H.
Remark 3.
- (i)
- It is well-known that the solution set is closed and convex, where the bifunction f satisfies assumptions ; see [12,33,34].
- (ii)
- We observe that , for each , where the bifunction f satisfies assumptions and ; see [17].
- (iii)
- For each fixed , the subdifferential of a bifunction at is provided by
3. Main Results
Let and be real Hilbert spaces and and be nonempty closed convex subsets of and , respectively. We start by recalling the split equilibrium and fixed point problems:
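Written out in generic symbols (our notation; $EP(f_1, C_1)$ and $EP(f_2, C_2)$ denote the corresponding equilibrium solution sets and $\operatorname{Fix}(\cdot)$ the fixed point sets), the problem is to
$$\text{find } x^* \in EP(f_1, C_1) \cap \operatorname{Fix}(T) \ \text{ such that } \ A x^* \in EP(f_2, C_2) \cap \operatorname{Fix}(S),$$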
where and are bifunctions, and are mappings, is a bounded linear operator, and is the adjoint operator of A. We observe that the setting of problem (13) differs from that of problem (7). Specifically, in problem (13), the domain of each operator is the entire space, whereas in problem (7), the domain of the considered operator is a closed convex subset of the corresponding space. Consequently, the fixed point sets of the operators T and S in problem (13) are independent of the sets and , respectively. This distinction can lead to different applications and interpretations for problems (7) and (13). The solution set of the split equilibrium and fixed point problems (13) is, henceforth, represented by in the form of
It is noteworthy that, by using Remarks 1, 3(i), and the linearity property of the operator A, the solution set is closed and convex; see [35].
The following algorithm (Algorithm 1) is presented to solve the split equilibrium and fixed point problems (13).
Algorithm 1. Mann-type inertial accelerated subgradient extragradient algorithm
Initialization. Select , , , , , , with , with , with , with , with , with , , such that , , , and , for some positive constants a and b. Put and arbitrarily.
Step 1. Select satisfying , where
Step 2. Compute
Step 3. Select and create a half-space
Step 4. Compute
Step 5. Calculate
Step 6. Compute
Step 7. Select and create a half-space
Step 8. Compute
Step 9. Construct by using the following expression:
Step 10. Calculate
Step 11. Set and return to Step 1.
Remark 4.
- (i)
- The auxiliary sequences and in Algorithm 1 are introduced to account for bias in the bifunctions and in steps 2 and 6, respectively. These auxiliary biases contribute to the acceleration of Algorithm 1. We highlight that the numerical behavior of Algorithm 1 is affected by the choice of the sequences and . Note that the accelerated subgradient extragradient method contained in Algorithm 1 reduces to the situation presented in [13,21] if and , for each .
- (ii)
- The self-adaptive step sizes , , and are implemented without requiring prior knowledge of the Lipschitz constants of the bifunctions and or the operator norm of the bounded linear operator A, respectively. This demonstrates that Algorithm 1 automatically updates the iteration step sizes , , and by utilizing some previously existing data. We emphasize that the step sizes , , and in Algorithm 1 reduce to the non-increasing step sizes in the case of , , and for each , as presented in [21,36]. These may have a significant impact on the numerical results; see Section 4 for further discussion.
- (iii)
- It is important to note that step 9 in Algorithm 1 is improved by using the metric projection onto the half-space instead of the metric projection onto the nonempty closed convex subset , as highlighted in [22,26,27]. This is an advantage of Algorithm 1, given that the metric projection onto a half-space has an explicit formula (see the sketch following this remark). Meanwhile, the metric projection onto may be difficult if has a complicated structure.
- (iv)
- In step 3, the nonemptiness of the set is always guaranteed. Indeed, by the definition of and Lemma 4, one sees that
Thus, there exist and such that
This ensures that there exists a solution within , confirming its nonemptiness. Similarly, in step 7, the nonemptiness of the set is guaranteed by utilizing the definition of and Lemma 4.
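Since the half-spaces constructed in steps 3, 7, and 9 admit an explicit projection formula, the corresponding projections can be computed without an inner optimization solver. The following sketch illustrates this; the function name and arguments are ours (illustrative only), and the normal vector is assumed to be nonzero.
```python
import numpy as np

def project_halfspace(x, a, b):
    """Project the point x onto the half-space {z : <a, z> <= b}.

    Standard explicit formula; `a` is assumed to be a nonzero vector.
    The names are illustrative and are not the authors' notation.
    """
    x = np.asarray(x, dtype=float)
    a = np.asarray(a, dtype=float)
    violation = float(np.dot(a, x)) - b
    if violation <= 0.0:
        return x                                   # x already lies in the half-space
    return x - (violation / float(np.dot(a, a))) * a

# Example: projecting (2, 0) onto {z : z_1 <= 1} gives (1, 0).
print(project_halfspace([2.0, 0.0], [1.0, 0.0], 1.0))
```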
The following lemmas are very useful for analyzing the convergence of Algorithm 1.
Lemma 7.
Let and be bifunctions that satisfy –, and be nonexpansive mappings, be a bounded linear operator, and be the adjoint operator of A. Let and . Assume that the sequences , , and are generated by (14), (15), and (16), respectively. Then, the following results hold:
and
Proof.
The proof of this Lemma follows the technique in (Lemma 3.1 of [37]). Firstly, we note that
This, together with the definition of and the assumptions on the parameter , yields
Thus, the sequence has a lower bound as by induction.
Additionally, in view of the definition of , we obtain for each . Then, by induction, the sequence has an upper bound as , where . Consequently, we conclude that is a bounded sequence and .
In what follows, we set
Combining it with the definition of sequence , we have
Thus, the series is convergent. Next, we assert the convergence of the series . Suppose that . We observe that
This implies that
Then, by taking in (17), we have , as . This contradicts the boundedness of . Owing to the convergence of the series and , by taking in (17), we deduce that and .
On the other hand, from the Lipschitz-type continuity of on and of on , there exist positive constants and , respectively, such that
and
These, together with the definitions of and , and the conditions on the sequences and , yield
and
Hence, by induction, we find that the sequences and have lower bounds as and , respectively. By a technique similar to the one above, we can show that
and
This completes the proof. □
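For intuition only, the non-monotonic step-size updates analyzed above typically follow the pattern of Liu and Yang [37]: the step size is allowed to increase by a summable amount whenever a certain ratio does not constrain it. The sketch below is an illustrative pattern in this spirit; it is not a reproduction of the exact rules (14)-(16), and the function and parameter names are ours.
```python
import numpy as np

def update_step_size(lam, p, x, y, z, f, mu=0.5):
    """Illustrative non-monotonic step-size update in the spirit of [37].

    lam : current step size; p : current term of a summable nonnegative
    sequence; f : bifunction; mu : constant in (0, 1).  This is an
    assumption-laden sketch, not the authors' exact rule.
    """
    denom = f(x, z) - f(x, y) - f(y, z)
    if denom > 0.0:
        candidate = mu * (np.linalg.norm(x - y) ** 2 + np.linalg.norm(z - y) ** 2) / (2.0 * denom)
        return min(candidate, lam + p)
    # when the denominator is nonpositive, the step size is allowed to grow
    return lam + p
```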
Lemma 8.
Let and be bifunctions that satisfy –, and be nonexpansive mappings, be a bounded linear operator, and be the adjoint operator of A. Assume that the solution set Ω is nonempty. Let and . If , , , , , and are created by the process of Algorithm 1, then the following results hold:
and
.
Proof.
First, we show that for every . Let be fixed and . Since , it follows that there exists such that
So, from , we obtain
This implies that . Hence, we conclude that for every . Similarly, since , we can show that for every . As a result, we ensure that Algorithm 1 is well-defined.
Afterwards, by utilizing the above facts, we display the results of the Lemma. Let . That is, , , and , . Due to and the subdifferentiability of , we obtain
It follows from that
Likewise, by using the definition of and , we obtain
Combined with relation (20), one sees that
Furthermore, from the definition of and Lemma 4, we obtain
Then, there exists and such that
This, together with the subdifferentiability of , yields
Additionally, from , we obtain
It follows from equality (22) that
Due to relation (23), we obtain
In particular, from , one sees that
Combined with the pseudomonotonicity of , we have
So, relations (21) and (26) imply that
On the other hand, we observe from the definition of that
Using this together with relation (27), we obtain
Owing to the above relation, we have the following facts:
This implies that
Hence, by applying the choice of the parameter , we deduce that
Similarly, we can show that
This completes the proof. □
We are presently equipped to consider the strong convergence of Algorithm 1.
Theorem 1.
Let and be bifunctions that satisfy –, and be nonexpansive mappings, be a bounded linear operator, and be the adjoint operator of A. Assume that the solution set Ω is nonempty. Then, the sequence that is generated by Algorithm 1 converges strongly to the minimum-norm element of Ω.
Proof.
Let . That is, , , and , . We start by regarding the definition of and using the nonexpansivity of as follows:
Consider,
Using this together with relation (30), we have
It follows from the nonexpansivity of S and the definition of that
Combined with the condition on the parameter and Lemma 7, we obtain
So, there exists such that
On the other hand, from the assumption of the parameter , the fact that , and Lemma 7, we have
Thus, there exists such that
Moreover, considering the choice of the parameter , the fact that , and Lemma 7, we obtain
Then, there exists such that
Choose . Let us now consider the case for every satisfying . So, by applying relations (33) and (34) to Lemma 8, we have the following relations:
and
Using this together with relation (31), we have
This, together with the assumption on the parameter and the fact (32), yields
On the other hand, we note from the definition of , the nonexpansivity of T, and relation (35) that
It follows from the definition of that
Owing to the condition on the sequence , we obtain
This, together with the facts and , yields
Then, there exists a constant such that
Combined with relations (38) and (39), we have
It follows that is a bounded sequence. As such, the sequence is bounded.
On the other hand, in view of the definition of , one sees that
Furthermore, from the definition of and (12), we obtain
Thus, by utilizing relations (35), (38), (42), and (43), the nonexpansivity of T, and Lemma 8, we have
This implies that
Now, we are ready to assert that the sequence converges strongly to by considering two cases.
Case 1. Suppose that , for every . This means that the sequence is non-increasing. Since the sequence is bounded, the limit of exists. Combining relation (45) with the facts that and , and the properties of the sequences and , we have
and
These imply that
In addition, from , we obtain
This, together with (49), yields
It follows from (47) that
Consider
So, by using (48) and the fact that , we have
Combining it with (51), we obtain
On the other hand, in view of relation (37), one sees that
Thus, by utilizing (51) and (53), and the existence of , we have
Furthermore, from Lemma 8 and the nonexpansivity of S, we obtain
So, by using (55) and the above relation, we obtain
and
These imply that
Since , it follows from (55) and (58) that
Now, let and be a subsequence of with , as . By using (50)–(52) and (54), we have , , , and , as . It follows that , as . Combining (56) and (58), we have and as . Since and are closed convex subsets, and are weakly closed. Hence, and .
Next, due to relations (21), (25) and (28), we obtain
for each . So, by utilizing the facts (46), (47) and (49), and the boundedness of , we find that the right-hand side of the inequality (60) tends to zero. It follows from the sequential weak upper semicontinuity of and that
Similarly, we can show that
for each . Using this together with (56)–(58) and the boundedness of , we find that the right-hand side of the inequality (61) tends to zero. Thus, by using the sequential weak upper semicontinuity of and , we have
On the other hand, since , as , and considering (48), by the demiclosedness at zero of , we have . Similarly, since , as , and considering (59), it follows from the demiclosedness at zero of that . Hence, we can conclude that .
Put . Relation (35) and the nonexpansivity of T imply that
From the definition of , one sees that
It follows from (11) that
Using this together with relations (38) and (62), we obtain
Consider
where . It follows from relation (64) that
Indeed, from and the properties of , we have
Therefore, by using (40), (48), (66), (67), and Lemma 5, we obtain
Case 2. Suppose that there exists a subsequence of satisfying
According to Lemma 6, there exists a non-decreasing sequence with , and
Using this together with relation (45), we have
4. Numerical Experiments
This section presents some numerical experiments in finite- and infinite-dimensional Hilbert spaces to illustrate the effectiveness of the introduced Algorithm 1 and compare it with the EIM Algorithm. We focus on the effects of two groups of auxiliary parameters: , and , based on different choices of these parameters. All the numerical computations were implemented in Matlab R2021b and performed on an Apple M1 with 8.00 GB RAM.
Example 1.
Let and be equipped with the Euclidean norm. We consider the functions and , which are formed by and , where is an invertible symmetric positive semidefinite matrix. The mappings and with respect to the functions and , respectively, are given as follows:
and
where is the identity matrix of dimension n. Observe that T and S are nonexpansive mappings, so that and ; see [38].
On the other hand, we consider the bifunctions and , which arise from the Nash–Cournot oligopolistic equilibrium models of electricity markets; see [33,39],
where , and , are matrices such that , are symmetric positive semidefinite and , are negative semidefinite. We notice that . So, by utilizing this property of , we find that is monotone. Likewise, we find that is monotone.
Afterwards, the bifunctions and are defined as follows:
and
where and are the constrained boxes. Notice that and are Lipschitz-type continuous; see [12,19].
In this numerical experiment, the matrices , , , and were generated randomly in the interval of such that they satisfy the qualifications mentioned above (an illustrative construction is sketched after the parameter list below). The linear operator is an matrix, in which each of its entries is generated randomly in the interval of . Notice that the solution set Ω is nonempty due to . Algorithm 1 was tested in conjunction with the EIM Algorithm by applying the stopping criterion . The starting points were generated randomly in the interval of . We randomly selected 10 starting points, and the average results are shown, where and . Also, we took into account the control parameters of Algorithm 1 and the EIM Algorithm as follows.
- In Algorithm 1, we chose , , , , , , and .
- In the EIM Algorithm, we set , , , , , and , which are the appropriate values as presented in [27]. Indeed, and are Lipschitz constants of and , respectively. Observe that, if and , then and are Lipschitz constants of both and . Thus, we set .
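As mentioned above, the matrices in this example must be symmetric positive semidefinite or negative semidefinite. One illustrative way to generate such matrices in numpy is sketched below; the construction, the interval bounds, and the function names are our assumptions and are not necessarily the authors' exact procedure.
```python
import numpy as np

def random_psd(n, low=-2.0, high=2.0, seed=None):
    """Random symmetric positive semidefinite n-by-n matrix, built from a
    factor with entries drawn uniformly from [low, high].  Illustrative
    construction only; the interval here is an assumption."""
    rng = np.random.default_rng(seed)
    B = rng.uniform(low, high, size=(n, n))
    return B @ B.T                      # symmetric and positive semidefinite

def random_nsd(n, low=-2.0, high=2.0, seed=None):
    """Random symmetric negative semidefinite matrix (negative of a PSD one)."""
    return -random_psd(n, low, high, seed)
```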
In order to evaluate the optimal values of the control parameters, for the first experiment, the numerical results were obtained by varying the parameters , , and . The results of the numerical comparison are presented in Table 1 for any collections of parameters , , , , and , and , where the parameters are fixed. We observe that when , , and , the step sizes considered in Algorithm 1 reduce to forms equivalent to the step sizes provided in [21,36,40].
Table 1.
For for all . Numerical performance for Example 1 based on the different choices of parameters , , and .
In Table 1, the number of iterations (Iter) and the CPU time (Time) in seconds are displayed. Also, the parameters , , and yield better results, in terms of both the number of iterations and the CPU time, than the parameters , , and , respectively. This illustrates that Algorithm 1 can be improved efficiently by selecting the parameters , and . Additionally, the results indicate that Algorithm 1 performs most effectively with the suggested parameters , , and . Finally, when the parameters and , Algorithm 1 is superior to the EIM Algorithm in terms of the number of iterations and the CPU time.
In the next experiment, we consider the numerical results for the control parameters and , which are included to allow for bias in the bifunctions and , respectively. The numerical computations are presented in Table 2 for different choices of the control parameters , , , , and , , , , while preserving fixed values of the control parameters .
Table 2.
For , for all . Numerical performance for Example 1 based on the different choices of parameters and .
Table 2 points out that the choices of pertinent parameters, especially the values of parameters and , are important. The parameters and , which converge to 1 the fastest, such as and , exhibit better performance than other cases when considering the number of iterations. Nevertheless, we point out that the fastest converging sequence to 1 is the constant sequence 1, but the numerical results turn out to be slow (see Table 1). This demonstrates that the proposed parameters and in Algorithm 1 lead to the improved performance of Algorithm 1.
Example 2.
Let be two infinite-dimensional Hilbert spaces endowed with inner product
and its corresponding norm
Assume that , , and , are positive real numbers such that , for some and , for some . Take the feasible sets as and . We consider the operators and , which are defined by
Set
Observe that the operators and are pseudomonotone rather than monotone and are Lipschitz continuous; see [41]. This implies that and are pseudomonotone and Lipschitz-type continuous bifunctions. Here, we consider the nonexpansive mappings and , which are defined by and , . Also, the linear operator is defined by , .
In this numerical experiment, the control parameters of Algorithm 1 are determined as in Example 1. Meanwhile, for the EIM Algorithm, we are required to know the Lipschitz constants of and . Indeed, and are Lipschitz constants of and , respectively. Similar to Example 1, if and , then and are Lipschitz constants of both and . So, we choose for the EIM Algorithm. The remaining parameters of the EIM Algorithm are given as in Example 1. Here, we take , , , and , , . In the above setting, the solution set Ω is ; see [41]. Algorithm 1 was tested along with the EIM Algorithm by utilizing the stopping criterion with the starting points . The numerical results are reported in Table 3 for any collections of parameters , , , , and , by fixing the parameters as in Example 1.
Table 3.
For for all . Numerical behavior for Example 2 based on the different choices of parameters , , and .
Table 3 indicates that the number of iterations and the CPU time of the concerned parameters , , and in all cases are equal. Ultimately, when the parameters and , Algorithm 1 outperforms the EIM Algorithm in terms of both the number of iterations and the CPU time.
To see the optimum values of the control parameters, the following numerical comparison results were obtained while accounting for the different choices of the control parameters , , , , and , , , by choosing the appropriate values of parameters as in Example 1.
From Table 4, the number of iterations in the case of the parameters and , which converge to 1 the fastest, is better than in the other cases.
Table 4.
For , for all . Numerical behavior for Example 2 based on the different choices of parameters and .
Based on these observations, we deduce that choosing the auxiliary parameters , , and , , , as allowed in Algorithm 1, can improve the effectiveness of Algorithm 1 in finite- and infinite-dimensional Hilbert spaces.
5. Conclusions
This paper introduces a Mann-type inertial accelerated subgradient extragradient algorithm with non-monotonic step sizes for finding the minimum-norm solution of split equilibrium and fixed point problems involving pseudomonotone and Lipschitz-type continuous bifunctions and nonexpansive mappings within the context of real Hilbert spaces. Without requiring prior knowledge of the Lipschitz constants of the bifunctions or the operator norm of the bounded linear operator, we combined the Mann-type and inertial methods with the accelerated subgradient extragradient method to construct a sequence that converges strongly to the minimum-norm solution of the split equilibrium and fixed point problems under sufficient conditions on the control sequences of the parameters involved. Numerical experiments were conducted to demonstrate the efficacy of the proposed algorithm in both finite- and infinite-dimensional Hilbert spaces. The results confirm that focusing on the fine-tuning of auxiliary control parameters within algorithms is a promising and important research direction, as it tends to lead to significant improvements in performance.
Author Contributions
Conceptualization, M.K., K.K. and N.P.; methodology, M.K., K.K. and N.P.; software, M.K., K.K. and N.P.; formal analysis, M.K., K.K. and N.P.; investigation, M.K., K.K. and N.P.; writing—original draft preparation, M.K., K.K. and N.P.; writing—review and editing, M.K., K.K. and N.P. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by Naresuan University (NU) and National Science, Research and Innovation Fund (NRMF) Grant No. R2567B011. Narin Petrot received funding support from the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research and Innovation Grant No. B41G670027.
Data Availability Statement
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Ansari, Q.H.; Nimana, N.; Petrot, N. Split hierarchical variational inequality problems and related problems. Fixed Point Theory Appl. 2014, 2014, 208.
- Iiduka, H. Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings. Math. Program. 2016, 159, 509–538.
- Iiduka, H.; Yamada, I. A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 2009, 58, 251–261.
- Inchan, I. Iterative scheme for fixed point problem of asymptotically nonexpansive semigroups and split equilibrium problem in Hilbert spaces. J. Nonlinear Anal. Optim. 2020, 11, 41–57.
- Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
- Reich, S. Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67, 274–276.
- Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 127–149.
- Bigi, G.; Castellani, M.; Pappalardo, M.; Passacantando, M. Existence and solution methods for equilibria. Eur. J. Oper. Res. 2013, 227, 1–11.
- Daniele, P.; Giannessi, F.; Maugeri, A. Equilibrium Problems and Variational Models; Kluwer: Dordrecht, The Netherlands, 2003.
- Dinh, B.; Thanh, H.; Ngoc, H.; Huyen, T. Strong convergence algorithms for equilibrium problems without monotonicity. J. Nonlinear Anal. Optim. 2018, 9, 139–150.
- Moudafi, A. Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 1999, 15, 91–100.
- Tran, D.Q.; Dung, L.M.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
- Hieu, D.V. Halpern subgradient extragradient method extended to equilibrium problems. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A Mat. 2017, 111, 823–840.
- Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert spaces. SIAM J. Optim. 2004, 9, 773–782.
- Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator damping. Set-Valued Anal. 2001, 9, 3–11.
- Hieu, D.V. An inertial-like proximal algorithm for equilibrium problems. Math. Methods Oper. Res. 2018, 88, 399–415.
- Vinh, N.T.; Muu, L.D. Inertial extragradient algorithms for solving equilibrium problems. Acta Math. Vietnam. 2019, 44, 639–663.
- Shehu, Y.; Izuchukwu, C.; Yao, J.C.; Qin, X. Strongly convergent inertial extragradient type methods for equilibrium problems. Appl. Anal. 2023, 102, 2160–2188.
- Suantai, S.; Petrot, N.; Khonchaliew, M. Inertial extragradient methods for solving split equilibrium problems. Mathematics 2021, 9, 1884.
- Tan, B.; Cho, S.Y.; Yao, J. Accelerated inertial subgradient extragradient algorithms with non-monotonic step sizes for equilibrium problems and fixed point problems. J. Nonlinear Var. Anal. 2022, 6, 89–122.
- Panyanak, B.; Khunpanuk, C.; Pholasa, N.; Pakkaranang, N. Dynamical inertial extragradient techniques for solving equilibrium and fixed-point problems in real Hilbert spaces. J. Ineq. Appl. 2023, 2023, 7.
- Dinh, B.V.; Son, D.X.; Anh, T.V. Extragradient-proximal methods for split equilibrium and fixed point problems in Hilbert spaces. Vietnam J. Math. 2017, 45, 651–668.
- Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
- Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
- Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
- Petrot, N.; Rabbani, M.; Khonchaliew, M.; Dadashi, V. A new extragradient algorithm for split equilibrium problems and fixed point problems. J. Ineq. Appl. 2019, 2019, 137.
- Ezeora, J.N.; Enyi, C.D.; Nwawuru, F.O.; Ogbonna, R.C. An algorithm for split equilibrium and fixed-point problems using inertial extragradient techniques. Comput. Appl. Math. 2023, 42, 103.
- Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984.
- Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Lecture Notes in Mathematics 2057; Springer: Berlin/Heidelberg, Germany, 2012.
- Xu, H.-K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
- Mainge, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
- Karamardian, S.; Schaible, S.; Crouzeix, J.P. Characterizations of generalized monotone maps. J. Optim. Theory Appl. 1993, 76, 399–413.
- Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2012, 52, 139–159.
- Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43.
- Gebrie, A.G.; Wangkeeree, R. Hybrid projected subgradient-proximal algorithms for solving split equilibrium problems and split common fixed point problems of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2018, 2018, 5.
- Ogbuisi, F.U. The projection method with inertial extrapolation for solving split equilibrium problems in Hilbert spaces. Appl. Set-Valued Anal. Optim. 2021, 3, 239–255.
- Liu, H.; Yang, J. Weak convergence of iterative methods for solving quasimonotone variational inequalities. Comput. Optim. Appl. 2020, 77, 491–508.
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011.
- Contreras, J.; Klusch, M.; Krawczyk, J.B. Numerical solution to Nash-Cournot equilibria in coupled constraint electricity markets. IEEE Trans. Power Syst. 2004, 19, 195–206.
- Wairojjana, N.; Younis, M.; Rehman, H.U.; Pakkaranang, N.; Pholasa, N. Modified viscosity subgradient extragradient-like algorithms for solving monotone variational inequalities problems. Axioms 2020, 9, 118.
- Hieu, D.V.; Cho, Y.J.; Xiao, Y.B.; Kumam, P. Modified extragradient method for pseudomonotone variational inequalities in infinite dimensional Hilbert spaces. Vietnam J. Math. 2021, 49, 1165–1183.