Convergence Theorems for Common Solutions of Split Variational Inclusion and Systems of Equilibrium Problems

In this paper, the split variational inclusion problem (SVIP) and the system of equilibrium problems (EP) are considered in Hilbert spaces. Inspired by the works of Byrne et al., López et al., Moudafi and Thakur, Sombut and Plubtieng, Sitthithakerngkiet et al., and Eslamian and Fakhri, a new self-adaptive step size algorithm is proposed to find a common element of the solution sets of the problems SVIP and EP. Convergence theorems for the algorithm are established under suitable conditions, and an application to the common solution of the fixed point problem and the split convex optimization problem is considered. Finally, computational experiments and a comparison with related algorithms are presented to illustrate the efficiency and applicability of our new algorithms.

On the other hand, many real-world inverse problems can be cast into the framework of the split inverse problem (SIP), which is formulated as follows: Find a point x* ∈ X that solves the problem IP1 and such that the point y* = Ax* ∈ Y solves the problem IP2, where IP1 and IP2 are two inverse problems, X and Y are two vector spaces, and A : X → Y is a linear operator. Realistic problems can be represented by making different choices of the spaces X and Y (including the case X = Y) and by choosing appropriate inverse problems for IP1 and IP2. In particular, the well-known split convex feasibility problem (SCFP) (Censor and Elfving [24]) is formulated as follows: Find a point x* such that x* ∈ C and Ax* ∈ Q, where C and Q are nonempty closed convex subsets of real Hilbert spaces H1 and H2, respectively, and A is a bounded linear operator from H1 to H2. The problem SCFP (2) has received considerable attention due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy, compressed sensing, approximation theory, and control theory (see, for example, [25-32] and the references therein). Initiated by the problem SCFP, several split-type problems have been investigated and studied, for example, split variational inequality problems, split common fixed point problems, and split null point problems. In particular, Moudafi [33] introduced the split monotone variational inclusion problem (SMVIP) for two operators f1 : H1 → H1 and f2 : H2 → H2 and multi-valued maximal monotone mappings B1, B2 as follows: Find a point x* ∈ H1 such that

0 ∈ f1(x*) + B1(x*)    (3)

and the point y* = Ax* ∈ H2 solves

0 ∈ f2(y*) + B2(y*).    (4)

If f1 = f2 = 0 in the problem SMVIP (3) and (4), then the problem SMVIP reduces to the following split variational inclusion problem (SVIP): Find a point x* ∈ H1 such that

0 ∈ B1(x*)    (5)

and the point y* = Ax* ∈ H2 solves

0 ∈ B2(y*).    (6)

The SVIPs (5) and (6) constitute a pair of variational inclusion problems which have to be solved so that the image y* = Ax* of the solution of the problem SVIP (5) in H1 under a given bounded linear operator A solves the other SVIP (6) in the Hilbert space H2. Indeed, one can see that x* ∈ B1^{-1}(0) and y* = Ax* ∈ B2^{-1}(0). The SVIPs (5) and (6) are at the core of modeling many inverse problems arising from phase retrieval and other real-world problems, for instance, in sensor networks, computerized tomography, and data compression (see, for example, [34-36]).
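To fix ideas, the SCFP above can be solved by Byrne's well-known CQ iteration x_{n+1} = P_C(x_n − γ A^T(A x_n − P_Q(A x_n))) with a constant step γ ∈ (0, 2/‖A‖²). The following is a minimal numerical sketch; the matrix A and the boxes C and Q are illustrative choices, not taken from the paper.

```python
import numpy as np

# Byrne's CQ iteration for the SCFP: find x in C with Ax in Q.
# Here C = [-1, 1]^2 and Q = [0, 2]^2 are boxes, so the metric projections
# have closed forms; A is an illustrative 2x2 matrix.

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def P_C(x):
    # projection onto the box [-1, 1]^2
    return np.clip(x, -1.0, 1.0)

def P_Q(y):
    # projection onto the box [0, 2]^2
    return np.clip(y, 0.0, 2.0)

gamma = 0.9 / np.linalg.norm(A, 2) ** 2   # constant step in (0, 2/||A||^2)
x = np.array([5.0, -3.0])                 # infeasible starting point
for _ in range(200):
    x = P_C(x - gamma * A.T @ (A @ x - P_Q(A @ x)))
```

After a few hundred iterations, x lies in C and Ax lies in Q up to machine precision.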
In the process of studying equilibrium problems and split inverse problems, not only have techniques and methods for solving the respective problems been proposed (see, for example, the CQ-algorithm in Byrne [37,38], the relaxed CQ-algorithm in Yang [39] and Gibali et al. [40], and the self-adaptive algorithms in López et al. [41], Moudafi and Thakur [42], and Gibali [43]), but also common solutions of equilibrium problems, split inverse problems, and other problems have been considered in many works. For example, Plubtieng and Sombut [44] considered the common solution of equilibrium problems and nonspreading mappings; Sombut and Plubtieng [45] studied a common solution of equilibrium problems and split feasibility problems in Hilbert spaces; Sitthithakerngkiet et al. [46] investigated a common solution of split monotone variational inclusion problems and the fixed point problem of nonlinear operators; Eslamian and Fakhri [47] considered split equality monotone variational inclusion problems and the fixed point problem of set-valued operators; and Censor and Segal [48] and Plubtieng and Sriprad [49] explored split common fixed point problems for directed operators. In particular, there are applications to mathematical models for studying a common solution of convex optimization and compressed sensing whose constraints can be presented as equilibrium problems and split variational inclusion problems, which stimulated our research on this kind of problem.
Motivated by the above works, we consider the following split variational inclusion problem and equilibrium problem. Let H1 and H2 be real Hilbert spaces, A : H1 → H2 be a bounded linear operator, B1 : H1 → 2^{H1} and B2 : H2 → 2^{H2} be two set-valued mappings with nonempty values, and φ : C × C → R be a bifunction, where C is a nonempty closed convex subset of H1. The split monotone variational inclusion and equilibrium problem (SMVIEP) is as follows: Find a point x* ∈ EP(φ) such that

0 ∈ B1(x*)    (7)

and the point y* = Ax* satisfies

0 ∈ B2(y*),    (8)

where EP(φ) denotes the solution set of the problem EP. Combined with the techniques of Byrne et al., López et al., Moudafi and Thakur, Sitthithakerngkiet et al., as well as of Sombut and Plubtieng and Eslamian and Fakhri, the purpose of this paper is to introduce a new iterative method, called a new self-adaptive step size algorithm, for solving the problems SMVIEP (7) and (8) in Hilbert spaces.
The outline of the paper is as follows. In Section 2, we collect definitions and results which are needed for our further analysis. In Section 3, our new self-adaptive step size algorithms are introduced and analyzed, and the weak and strong convergence theorems for the proposed algorithms are obtained under suitable conditions. Moreover, as applications, the existence of a fixed point of a pseudo-contractive mapping and a solution of the split convex optimization problem is considered in Section 4. Finally, numerical examples and a comparison with some related algorithms are presented to illustrate the performance of our new algorithms.

Preliminaries
Now, we recall some concepts and results which are needed in the sequel. Let H be a Hilbert space with the inner product ⟨·, ·⟩ and the induced norm ‖·‖, and let I be the identity operator on H. Let Fix(T) denote the fixed point set of an operator T whenever T has a fixed point. The symbols "→" and "⇀" represent the strong and the weak convergence, respectively. For any sequence {x_n} ⊂ H, w_w(x_n) denotes the weak ω-limit set of {x_n}, that is, w_w(x_n) = {x : x_{n_j} ⇀ x for some subsequence {n_j} of {n}}.
The following properties of the norm in a Hilbert space H are well known for all x, y, z ∈ H and α, β, γ ≥ 0. Moreover, the following inequality holds for all x, y ∈ H:

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩.

Let C be a closed convex subset of H. For all x ∈ H, there exists a unique nearest point in C, denoted by P_C x, such that ‖x − P_C x‖ ≤ ‖x − y‖ for all y ∈ C. The operator P_C is called the metric projection of H onto C. Some properties of the operator P_C are as follows: for all x, y ∈ H,

⟨x − y, P_C x − P_C y⟩ ≥ ‖P_C x − P_C y‖²    (13)

and, for all x ∈ H and y ∈ C,

⟨x − P_C x, y − P_C x⟩ ≤ 0.

Definition 1. Let H be a real Hilbert space and D be a subset of H. For all x, y ∈ D, an operator h : D → H is said to be:
(1) firmly nonexpansive on D if ⟨x − y, h(x) − h(y)⟩ ≥ ‖h(x) − h(y)‖²;
(4) hemicontinuous if it is continuous along each line segment in D.
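The firm nonexpansiveness of P_C in the inequality (13) can be verified numerically. Below is a small sketch using the closed unit ball as C, for which P_C has the closed form x/max(1, ‖x‖); the set C and the random samples are illustrative choices.

```python
import numpy as np

# Check <x - y, P_C x - P_C y> >= ||P_C x - P_C y||^2 on random pairs,
# with C the closed unit ball in R^3.

rng = np.random.default_rng(0)

def P_C(x):
    # projection onto the closed unit ball
    return x / max(1.0, np.linalg.norm(x))

ok = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = P_C(x), P_C(y)
    lhs = np.dot(x - y, px - py)
    rhs = np.dot(px - py, px - py)
    ok = ok and (lhs >= rhs - 1e-12)
```

The flag `ok` remains true over all random pairs, as the inequality predicts.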
Remark 1. The following can be easily obtained:
(1) An operator h is firmly nonexpansive if and only if I − h is firmly nonexpansive (see [50], Lemma 2.3); in either case, h is nonexpansive.
(2) If h1 and h2 are averaged, then their composition S = h1 ∘ h2 is averaged (see [50], Lemma 2.2).

Definition 2. Let H be a real Hilbert space and λ > 0. The operator B : H → 2^H is said to be:
(1) monotone if ⟨u − v, x − y⟩ ≥ 0 for all u ∈ B(x) and v ∈ B(y);
(2) maximal monotone if the graph Graph(B) = {(x, u) : u ∈ B(x)} of B is not properly contained in the graph of any other monotone mapping;
(3) The resolvent of B with parameter λ > 0 is denoted by J_λ^B = (I + λB)^{−1}, where I is the identity operator.
Remark 2. For any λ > 0, the following hold:
(1) B is maximal monotone if and only if J_λ^B is single-valued, firmly nonexpansive, and dom(J_λ^B) = H, where dom(B) = {x ∈ H : B(x) ≠ ∅}. The solution set Ω of the problem SVIPs (5) and (6) can be characterized as Ω = {x* ∈ H1 : x* ∈ B1^{-1}(0), Ax* ∈ B2^{-1}(0)}.
(4) For some more results, refer to [33,36,51].
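To make Remark 2 concrete, consider the scalar monotone operator B(x) = βx with β ≥ 0 (an illustrative choice): its resolvent has the closed form J_λ^B(x) = x/(1 + λβ), is single-valued and nonexpansive, and its unique fixed point is the zero of B.

```python
# Resolvent J_lambda^B = (I + lambda*B)^{-1} of the monotone operator
# B(x) = beta * x, which has the closed form x / (1 + lambda * beta).

beta, lam = 4.0, 0.5

def J(x):
    return x / (1.0 + lam * beta)

x = 10.0
for _ in range(100):
    x = J(x)   # iterating the resolvent drives x to the zero of B (x = 0)
```

Since 1 + λβ > 1, each resolvent step contracts toward the unique zero of B, illustrating why resolvent-based iterations locate points of B^{-1}(0).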
Lemma 1 (see [2]). Let C be a nonempty closed convex subset of a Hilbert space H and suppose that φ : C × C → R satisfies the conditions (A1)-(A4). For all r > 0 and x ∈ H, define a mapping T_r : H → C by

T_r x = { z ∈ C : φ(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C }.

Then the following hold:
(1) T_r x is nonempty and T_r is single-valued.
(2) T_r is firmly nonexpansive, that is, for all x, y ∈ C, ⟨T_r x − T_r y, x − y⟩ ≥ ‖T_r x − T_r y‖²; in particular, T_r is nonexpansive.
(3) EP(φ) = Fix(T_r) is closed and convex.

Lemma 2 (see [26,52]). Assume that {a_n} is a sequence of nonnegative real numbers such that, for each n ≥ 0, a_{n+1} ≤ (1 − θ_n) a_n + δ_n, where {θ_n} is a sequence in (0, 1) and {δ_n} is a sequence satisfying condition (a). Then the limit of the sequence {a_n} exists and lim_{n→∞} a_n = 0.

Lemma 3 (see [53]). Assume that {a_n} and {δ_n} are sequences of nonnegative numbers such that, for each n ≥ 0, a_{n+1} ≤ a_n + δ_n. If lim sup_{n→∞} δ_n ≤ 0 and {a_n} has a subsequence converging to zero, then lim_{n→∞} a_n = 0.
Lemma 4 (see [52]). Let {Γ_n} be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence {Γ_{n_j}} of {Γ_n} such that Γ_{n_j} < Γ_{n_j+1} for each j ≥ 0. Also, consider the sequence {σ(n)}_{n≥n_0} of integers defined by σ(n) = max{ k ≤ n : Γ_k < Γ_{k+1} }. Then {σ(n)}_{n≥n_0} is a nondecreasing sequence satisfying lim_{n→∞} σ(n) = ∞ and, for each n ≥ n_0, Γ_{σ(n)} ≤ Γ_{σ(n)+1} and Γ_n ≤ Γ_{σ(n)+1}.

Lemma 5 (see [54]). Let C be a nonempty closed convex subset of a real Hilbert space H. If T : C → C is nonexpansive and Fix(T) ≠ ∅, then the mapping I − T is demiclosed at 0, that is, if {x_n} is a sequence in C that converges weakly to x* and x_n − Tx_n → 0, then x* = Tx*.

The Main Results
In this section, we introduce our algorithms and state our main results. Throughout this paper, we always assume that H1 and H2 are Hilbert spaces, C is a nonempty closed convex subset of H1, and the bifunction φ : C × C → R satisfies the conditions (A1)-(A4). Now, we define the functions f and g as follows, where J_λ^B = (I + λB)^{−1} for any λ > 0. From Aubin [55], one can see that f and g are weakly lower semi-continuous and convex differentiable. Moreover, it is known that the functions F and G are Lipschitz continuous according to Byrne et al. [36]. Denote the solution set of the problem SVIPs (5) and (6) by Ω.
Algorithm 1. Iterative Step: For the current iterate x_n for each n ≥ 0, compute y_n and calculate the next iterate x_{n+1}, where {α_n} and {β_n} are sequences in (0, 1). Stop Criterion: If x_{n+1} = x_n = y_n, then stop. Otherwise, set n := n + 1 and return to the Iterative Step.
Algorithm 2. Iterative Step: For the current iterate x_n for each n ≥ 0, compute y_n and calculate the next iterate x_{n+1}, where {α_n} and {β_n} are sequences in (0, 1). Stop Criterion: If x_{n+1} = x_n = y_n, then stop. Otherwise, set n := n + 1 and return to the Iterative Step.
Algorithm 3. Iterative Step: For the current iterate x_n for each n ≥ 0, compute y_n and calculate the next iterate x_{n+1}, where {α_n}, {β_n} and {τ_n} are sequences in (0, 1) with α_n + τ_n ≤ 1. Stop Criterion: If x_{n+1} = x_n = y_n, then stop. Otherwise, set n := n + 1 and return to the Iterative Step.

Weak Convergence Analysis for Algorithm 1
First, we give one lemma for our main result.
If x_n = y_n, then it follows from the construction of y_n that x_n = z_n and hence x_n ∈ Fix(T_r), that is, x_n ∈ EP(φ). On the other hand, the operators J_λ^{B1} and I − γ_n A*(I − J_λ^{B2})A are averaged from Remark 1. Since Ω ≠ ∅, it follows from Lemma 2.1 of [51], applied with the averaged operators J_λ^{B1} and I − γ_n A*(I − J_λ^{B2})A, and from the recursion (31) that w = 0, and hence x_n ∈ Ω and, furthermore, x_n ∈ EP(φ) ∩ Ω. This completes the proof.

Theorem 1. Let H1, H2 be two real Hilbert spaces and A : H1 → H2 be a bounded linear operator. Let φ be a bifunction satisfying the conditions (A1)-(A4). Assume that B1 : H1 → 2^{H1} and B2 : H2 → 2^{H2} are maximal monotone mappings with EP(φ) ∩ Ω ≠ ∅. Then the sequence {x_n} generated by Algorithm 1 converges weakly to an element of EP(φ) ∩ Ω, where the parameters {α_n}, {β_n} are in (0, 1) and satisfy the following conditions:

Proof. The operator J_λ^{B1}(I − γ_n A*(I − J_λ^{B2})A) is averaged and hence nonexpansive. First, we show that the sequences {x_n} and {y_n} are bounded. Since EP(φ) ∩ Ω ≠ ∅, we take z ∈ EP(φ) ∩ Ω, so that z = T_r(z), z = J_λ^{B1} z and Az = J_λ^{B2} Az. At the same time, z_n = T_r x_n and T_r is nonexpansive according to Lemma 1, so that ‖y_n − z‖ ≤ ‖x_n − z‖, which means that the sequence {x_n} is bounded and so are {y_n}, {z_n} and {u_n}, where z_n = T_r x_n and u_n = T_n y_n. Next, we show x_{n+1} − x_n → 0.
Since z_n = T_r x_n, we have z_{n−1} = T_r x_{n−1} and, for all y ∈ C, the inequalities (39) and (40) hold. Putting y = z_{n−1} in (39) and y = z_n in (40) and adding the resulting inequalities (41) and (42), since φ is monotone from (A2), we obtain the required estimates. Furthermore, according to the definition of the iterative sequence {y_n}, since u_n = T_n y_n and J_λ^{B1} is nonexpansive, we obtain the corresponding bounds. For any ε > 0 small enough, the sequence {x_n} is Fejér monotone with respect to EP(φ) ∩ Ω, which ensures the existence of the limit of {‖x_n − z‖}, so that we can denote l(z) = lim_{n→∞} ‖x_n − z‖. Thus the claim follows from (48), since F and G are Lipschitz continuous from Byrne et al. [36] and hence F(y_n) and G(y_n) are bounded.
In addition, it also follows from (48) that f(y_n) → 0 and g(y_n) → 0. Thus it follows from the above inequalities that the estimate (51) holds. Note that, by the conditions on α_n and β_n combined with the formula (51), the limit of ‖x_{n+1} − x_n‖ exists from Lemma 3. Since the limit of ‖x_n − z‖ exists, there exists a subsequence {‖x_{n_k} − z‖} of {‖x_n − z‖} converging to a point with ‖x_{n_k+1} − x_{n_k}‖ → 0. Thus it follows from Lemma 3 that ‖x_{n+1} − x_n‖ → 0. Next, we show x* ∈ EP(φ) ∩ Ω, where x* is a weak cluster point of the sequence {x_n}. Note that {x_n} is bounded, so there exists a point x* ∈ C such that x_n ⇀ x*, and then y_n ⇀ x* and z_n ⇀ x*. Also, since z_n = T_r x_n and z_n − x_n = T_r x_n − x_n → 0, we can see that x* ∈ Fix(T_r) from Lemma 5, which implies that x* ∈ EP(φ) according to Lemma 1.
On the other hand, it follows from the formula (48) and the lower semi-continuity of f and g that f(x*) ≤ lim inf_{n→∞} f(y_n) = 0 and g(x*) ≤ lim inf_{n→∞} g(y_n) = 0, and so we have Ax* ∈ B2^{-1}(0) and x* ∈ B1^{-1}(0) from Remark 2. Therefore, x* ∈ EP(φ) ∩ Ω. This completes the proof.

Remark 3. If the operators B1 and B2 are set-valued, odd, and maximal monotone mappings, then the operator J_λ^{B1}(I − γA*(I − J_λ^{B2})A) is asymptotically regular (see Theorem 4.1 in [56] and Theorem 5 in [57]) and odd. Consequently, the strong convergence of Algorithm 1 is obtained (for a similar proof, see Theorem 1.1 in [58] and Theorem 4.3 in [36]).
Remark 4. If we take γ_n ≡ γ, where γ ∈ (0, 2/L) is a constant and L depends on the norm of the operator A, then the conclusion of Theorem 1 also holds.

Strong Convergence Analysis for Algorithms 2 and 3
Theorem 2. Let H1, H2 be two real Hilbert spaces and A : H1 → H2 be a bounded linear operator. Let φ be a bifunction satisfying the conditions (A1)-(A4). Assume that B1 : H1 → 2^{H1} and B2 : H2 → 2^{H2} are maximal monotone mappings with EP(φ) ∩ Ω ≠ ∅. If the sequence {α_n} in (0, 1) satisfies the following conditions: then the sequence {x_n} generated by Algorithm 2 converges strongly to the point z = P_{EP(φ)∩Ω} x_0.
Proof. First, we show that the sequences {x_n} and {y_n} are bounded. Take p ∈ EP(φ) ∩ Ω. As in the proof of Theorem 1, we can see that ‖y_n − p‖ ≤ ‖x_n − p‖, which implies that the sequence {x_n} is bounded and so is the sequence {y_n}.
Next, we show lim sup_{n→∞} ⟨x_0 − z, x_n − z⟩ ≤ 0, where z = P_{EP(φ)∩Ω} x_0. Indeed, there exists a subsequence {x_{n_k}} of {x_n} such that lim sup_{n→∞} ⟨x_0 − z, x_n − z⟩ = lim_{k→∞} ⟨x_0 − z, x_{n_k} − z⟩. Since {x_{n_k}} converges weakly to x* because {x_n} is bounded, according to (11), we can see that ⟨x_0 − z, x* − z⟩ ≤ 0. Now, we show x_n → z. As in the proof of Theorem 1, the operator J_λ^{B1}(I − γ_n A*(I − J_λ^{B2})A) is averaged and hence nonexpansive. Thus it follows from (2) that the estimate (59) holds, where θ_n is defined accordingly. It is easy to see that lim_{n→∞} θ_n α_n ≤ 0 and hence we have ‖x_n − z‖ → 0 from Lemma 2 and (59), which means that the sequence {x_n} converges strongly to z. This completes the proof.
For the following strong convergence theorem of Algorithm 3, we recall the minimum-norm element of EP(φ) ∩ Ω, which is the solution of the problem argmin{ ‖x‖ : x solves the problem EP (1) and the problem SVIPs (5) and (6) }.

Theorem 3. Under suitable conditions on the parameters, the sequences {x_n} and {y_n} generated by Algorithm 3 converge strongly to the point z = P_{EP(φ)∩Ω}(0), the minimum-norm element of EP(φ) ∩ Ω.
Proof.We show several steps to prove the result.
Step 1. We show that the sequences {x_n} and {y_n} are bounded. Since EP(φ) ∩ Ω is nonempty, take a point p ∈ EP(φ) ∩ Ω. Since the operator J_λ^{B1}(I − γ_n A*(I − J_λ^{B2})A) is nonexpansive and ‖y_n − p‖ ≤ ‖x_n − p‖, it follows that {x_n} is bounded and so is {y_n}.
Step 2. We show that x_{n+1} − x_n → 0 and x_n → z, where z = P_{EP(φ)∩Ω}(0) is the minimum-norm element of EP(φ) ∩ Ω. For the point z = P_{EP(φ)∩Ω}(0), arguing similarly as in the proof of Theorem 1, we obtain, for any ε > 0 small enough, the estimates (64) and (65). Moreover, since ‖y_n − z‖ ≤ ‖x_n − z‖, it follows from (10), (33), and (59) that the estimate (68) holds. Now, we consider two possible cases for the convergence of the sequence {‖x_n − z‖²}.
Case I. Assume that {‖x_n − z‖} is eventually nonincreasing, that is, there exists n_0 ≥ 0 such that ‖x_{n+1} − z‖ ≤ ‖x_n − z‖ for each n ≥ n_0. Then the limit lim_{n→∞} ‖x_n − z‖ exists. Since lim_{n→∞} τ_n = 0 and F and G are Lipschitz continuous, it follows from (68) that, for any ε > 0 small enough, f(y_n) → 0, g(y_n) → 0, and u_n − x_n → 0 as n → ∞. Therefore, it follows from (65) that x_{n+1} − x_n → 0. Since {x_n} is bounded, as in the proof of Theorem 1, the sequence {x_n} converges weakly to a point x* ∈ EP(φ) ∩ Ω, and, from (11), (33), and (66), the inequality (70) holds by the property (12). Since τ_n → 0 and Σ_{n=1}^∞ τ_n = ∞, applying Lemma 2 to the formula (70), we deduce that ‖x_n − z‖ → 0, that is, the sequence {x_n} converges strongly to z. Furthermore, it follows from the property of the metric projection that ‖z‖ ≤ ‖p‖ for all p ∈ EP(φ) ∩ Ω, which implies that z is the minimum-norm solution of the system of the problem EP (1) and the problem SVIPs (5) and (6).
Case II. If the sequence {‖x_n − z‖²} is increasing, then it is easy to see from (68) that u_n − x_n → 0 because τ_n → 0, and so, from (65), we get x_{n+1} − x_n → 0.
Without loss of generality, we assume that there exists a subsequence {‖x_{n_j} − z‖} with ‖x_{n_j} − z‖ < ‖x_{n_j+1} − z‖ for each j ≥ 0, and so, from (68), the corresponding estimates hold along σ(n). Since τ_{σ(n)} → 0 as n → ∞, similarly as in the proof of Case I, we obtain the estimates (75) and (76). Combining (75) and (76) yields (77), and hence, from (77), we can see that lim_{n→∞} ‖x_{σ(n)+1} − z‖² = 0. Therefore, according to Lemma 4, we have ‖x_n − z‖ → 0, which implies that the sequence {x_n} converges strongly to z, the minimum-norm element of EP(φ) ∩ Ω. This completes the proof.

Corollary 1.
Let H1, H2 be two real Hilbert spaces and A : H1 → H2 be a bounded linear operator. Let φ be a bifunction satisfying the conditions (A1)-(A4). Assume that B1 : H1 → 2^{H1} and B2 : H2 → 2^{H2} are maximal monotone mappings. If the sequence {α_n} in (0, 1) satisfies the following conditions:

Denote the solution set of the split convex optimization problem by Ω. Now, we show the existence of a common element of the fixed point set of a pseudo-contractive mapping and the solution set of the split convex minimization problem as follows.

Theorem 4. Let H1, H2 be two real Hilbert spaces and A : H1 → H2 be a bounded linear operator. Let T be a pseudo-contractive mapping. Assume that h : H1 → R and l : H2 → R are two proper, convex, and lower semi-continuous functions such that ∂h and ∂l are maximal monotone mappings with Fix(T) ∩ Ω ≠ ∅. If the sequences {α_n}, {β_n} in (0, 1) satisfy the following conditions: and, for any λ > 0 and r > 0, the sequence {x_n} is generated by the following iterations: where γ_n is defined as in Algorithm 1, then the sequence {x_n} converges weakly to an element of Fix(T) ∩ Ω.
Thus, according to Theorem 1, the conclusion follows.
Theorem 5. Let H1, H2 be two real Hilbert spaces and A : H1 → H2 be a bounded linear operator. Let T be a pseudo-contractive mapping. Assume that h : H1 → R and l : H2 → R are two proper, convex, and lower semi-continuous functions such that ∂h and ∂l are maximal monotone mappings with Fix(T) ∩ Ω ≠ ∅. If the sequence {α_n} in (0, 1) satisfies the following conditions: and, for any x_0 ∈ H1, the sequence {x_n} is generated by the following iterations: where γ_n is defined as in Algorithm 2, then the sequence {x_n} converges strongly to the point z = P_{Fix(T)∩Ω} x_0.

Theorem 6. Let H1, H2 be two real Hilbert spaces and A : H1 → H2 be a bounded linear operator. Let T be a pseudo-contractive mapping. Assume that h : H1 → R and l : H2 → R are two proper, convex, and lower semi-continuous functions such that ∂h and ∂l are maximal monotone mappings with Fix(T) ∩ Ω ≠ ∅. If the sequence {α_n} in (0, 1) satisfies the following conditions: and, for any λ > 0 and r > 0, the sequence {x_n} is generated by the following iterations: where γ_n is defined as in Algorithm 3, then the sequence {x_n} converges strongly to the point z = P_{Fix(T)∩Ω}(0).

Numerical Examples
In this section, we present some examples to illustrate the applicability, efficiency, and stability of our self-adaptive step size iterative algorithms. All codes were written in Matlab R2016b (MathWorks, Natick, MA, USA) and performed on an LG dual-core personal computer (LG Display, Seoul, Korea).

Numerical Behavior of Algorithm 1
Example 1. Let H1 = H2 = R and define the operators A, B1, and B2 on the real line R by Ax = 3x, B1x = 2x, B2x = 4x for all x ∈ R, and the bifunction φ by φ(x, y) = −3x² + xy + 2y², and set the parameters of Algorithm 1 accordingly. From the definition of φ, for the quadratic function of y, since it has at most one solution in R, the discriminant must satisfy ∆ = (z_n + 5 z_n r − x_n)² = 0, that is, z_n = x_n/(1 + 5r). According to Algorithm 1, if G(y_n)² + H(y_n)² ≠ 0, then we compute the new iterate x_{n+1} by the iterative process. In this way, the step size γ_n is self-adaptive and not given beforehand. First, we test three cases of λ = 0.01, 0.5, 2 and r = 0.01, 0.5, 2 for the initial point x_0 = 1, and then we test three initial points x_0 randomly generated by Matlab for fixed λ and r. The values of {y_n} and {x_n} are reported in Figures 1 and 2 and Table 1. In these figures, the x-axes represent the number of iterations while the y-axes represent the values of x_n and y_n; the stopping criterion for Figure 1 is ‖x_{n+1} − y_n‖ ≤ 10^{−6}. These figures show that the sequence {x_n} generated by Algorithm 1 converges to the same solution, i.e., 0 ∈ EP(φ) ∩ Ω is a solution of this example.

Table 1. The convergence of Algorithm 1.
We can summarize the following observations from Figures 1 and 2 and Table 1:
(1) The results presented in Figures 1 and 2 and Table 1 imply that Algorithm 1 converges to the same solution;
(2) The convergence of Algorithm 1 is fast, efficient, and stable, and the algorithm is simple to implement; the number of iterations remains almost consistent irrespective of the initial point x_0 and the parameters λ and r;
(3) The error ‖x_{n+1} − y_n‖ becomes approximately 10^{−15} or even smaller within 20 iterations.
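The closed forms derived in Example 1 are easy to check in a few lines. The sketch below uses a constant step size γ, as permitted by Remark 4, instead of the exact self-adaptive rule of Algorithm 1, and the relaxation parameter α is an illustrative choice, so iteration counts differ from Table 1; the iterates nevertheless converge to the common solution 0.

```python
# Example 1 data: A x = 3x, B1 x = 2x, B2 x = 4x, phi(x, y) = -3x^2 + xy + 2y^2,
# so T_r x = x/(1 + 5r) and the resolvents are J1(x) = x/(1 + 2*lam),
# J2(x) = x/(1 + 4*lam).  Constant step gamma in (0, 2/||A||^2) = (0, 2/9).

lam, r, alpha, gamma, A = 0.5, 0.5, 0.5, 0.1, 3.0

def T_r(x): return x / (1.0 + 5.0 * r)
def J1(x):  return x / (1.0 + 2.0 * lam)
def J2(x):  return x / (1.0 + 4.0 * lam)

x = 1.0   # initial point x_0 = 1 as in Example 1
for _ in range(100):
    u = T_r(x)                                    # equilibrium step
    y = J1(u - gamma * A * (A * u - J2(A * u)))   # split-inclusion step
    x = (1.0 - alpha) * x + alpha * y             # relaxation
```

Each pass contracts x by a constant factor, so the iterates decay geometrically to 0 ∈ EP(φ) ∩ Ω.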

Numerical Behaviours of Algorithms 2 and 3
Example 2. Let H1 = H2 = R³ and define the operators A, B1, and B2 as follows, with the bifunction φ given by φ(x, y) = −3‖x‖² + ⟨x, y⟩ + 2‖y‖². In this example, we set the parameters in Algorithms 2 and 3 accordingly. First, we take the initial point x_0 = (13, −12, 25); the test results are reported in Figure 3. Next, we present the test results for the initial point x_0 = (1, −1, −2) in Table 2. From Table 2 and Figure 3, one can see that the convergence rate of Algorithm 3 is faster than that of Algorithm 2 and that (0, 0, 0) ∈ EP(φ) ∩ Ω is the minimum-norm solution of the experiment.

Comparisons with Other Algorithms
In this part, we present several experiments comparing our Algorithms 1 and 3 with other algorithms. The two methods used for comparison are the algorithm in Byrne et al. [36] and the algorithm in Sitthithakerngkiet et al. [46]. The step size is γ ≡ 0.001 for the algorithm in Byrne et al. [36] and for the algorithm in Sitthithakerngkiet et al. [46], which depends on the norm of the operator A. In addition, let S_n : R³ → R³ be an infinite family of nonexpansive mappings S_n(x) = x/2^n with the nonnegative real sequence ζ_n = 1; then the W-mapping W_n is generated by S_n and ζ_n, and the bounded linear operator is D = I in Sitthithakerngkiet et al. [46]. We choose the stopping criterion ‖x_{n+1} − y_n‖ ≤ DOL for our algorithms, ‖x_{n+1} − x_n‖ ≤ DOL for Byrne et al. [36], and ‖x_{n+1} − y_n‖ ≤ DOL for Sitthithakerngkiet et al. [46]. For the three algorithms, the operators A, B1, B2 are defined as in Example 2, with the parameters α_n = 10^{−3}/n and β_n = 0.5 − 1/(10n + 2), u = (0, 0, 1) in Sitthithakerngkiet et al. [46], and α_n = 1/(n + 1) and β_n = 1/(10n + 2) in our Algorithm 3. We take λ = 1 and x_0 = (13, −12, 25) for all these algorithms and compare the numbers of iterations and the computing times. The experimental results are reported in Table 3. From all the above figures and tables, one can see the behavior of the sequences {x_n} and {y_n}: both converge to a solution, and our algorithms are fast, efficient, stable, and simple to implement (on average, about 10 iterations to converge). In particular, our Algorithms 1 and 3 seem to have a competitive advantage. However, as mentioned in the previous sections, the main advantage of our algorithms is that the step size is self-adaptive, without prior knowledge of the operator norms.

Compressed Sensing
In the last example, we choose a problem from the field of compressed sensing, that is, the recovery of a sparse and noisy signal from a limited number of samples. Let x_0 ∈ R^n be a K-sparse signal, K ≪ n. The sampling matrix A ∈ R^{m×n}, m < n, is simulated from the standard Gaussian distribution and the vector b = Ax_0 + ε, where ε is additive noise. When ε = 0, there is no noise in the observed data. Our task is to recover the signal x_0 from the data b. For further details, one can consult Nguyen and Shin [34].
To solve the problem, we recall the LASSO (Least Absolute Shrinkage and Selection Operator) problem of Tibshirani [60]. In this example, we take h(x) = (1/2)‖Ax − b‖², φ(x, y) = h(y) − h(x); then the problem EP (1) is equivalent to the minimization problem min_x (1/2)‖Ax − b‖₂².
For the experiment setting, we choose the parameters in our Algorithm 3 as follows: A ∈ R^{m×n} is generated randomly with m = 2^{10}, n = 2^{12}, and x_0 ∈ R^n has K spikes with amplitude ±1 distributed randomly over the whole domain. In addition, for simplicity, we take t = K and the stopping criterion ‖x_{i+1} − x_i‖ ≤ DOL with DOL = 10^{−6}. All the numerical results are presented in Figures 4 and 5.
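As a point of comparison for the experiment above, the LASSO problem can also be attacked with the classical proximal-gradient (ISTA) iteration built on the soft-thresholding operator. The sketch below is this standard baseline on randomly generated data (the dimensions m, n, sparsity K, and penalty t are illustrative choices), not the Algorithm 3 used in our experiments.

```python
import numpy as np

# ISTA for min_x 0.5*||Ax - b||^2 + t*||x||_1 with a soft-thresholding prox.

rng = np.random.default_rng(1)
m, n, K, t = 64, 128, 5, 0.1
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, K, replace=False)] = rng.choice([-1.0, 1.0], K)
b = A @ x_true                       # noiseless measurements

def soft(v, tau):
    # proximal operator of tau*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
x = np.zeros(n)
for _ in range(500):
    x = soft(x - step * A.T @ (A @ x - b), t * step)
```

ISTA monotonically decreases the LASSO objective, so after a few hundred iterations the objective is well below its value at x = 0 and the recovered x approximates the K-sparse signal up to the shrinkage bias.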

Conclusions
A variety of problems in finance, physics, network analysis, signal processing, image reconstruction, economics, and optimization reduce to finding a common solution of split inverse and equilibrium problems, which suggests numerous possible applications to mathematical models whose constraints can be presented as the problem EP (1) and a split inverse problem.
This motivated the study of a common solution set of split variational inclusion problems and equilibrium problems.
The main contribution of this paper is a new self-adaptive step size algorithm, requiring no prior knowledge of the operator norms, for solving the split variational inclusion and equilibrium problems in Hilbert spaces. Convergence theorems are obtained under suitable assumptions, and numerical examples and comparisons are presented to illustrate the efficiency and reliability of the algorithms. In one sense, the algorithms and theorems in this paper complement, extend, and unify some related results on the split variational inclusion and equilibrium problems.

Figure 2. Values of ‖x_{n+1} − y_n‖ and x_n.

Figure 3. Values of x_n, y_n, z_n for Algorithms 2 and 3.
Figure 1. Values of x_n and y_n for different λ and r.

Table 2. The convergence of Algorithm 3.

Table 3. Comparison of Algorithms 1 and 3 with other algorithms.