Relaxed projection methods with self-adaptive step size for solving variational inequality and fixed point problems for an infinite family of multivalued relatively nonexpansive mappings in Banach spaces

Abstract: In each iteration, projection methods require computing at least one projection onto the closed convex set. However, projections onto a general closed convex set are not easily executed, a fact that might affect the efficiency and applicability of projection methods. To overcome this drawback, we propose two iterative methods with self-adaptive step size that combine the Halpern method with a relaxed projection method for approximating a common solution of variational inequality and fixed point problems for an infinite family of multivalued relatively nonexpansive mappings in the setting of Banach spaces. The core of our algorithms is to replace every projection onto the closed convex set with a projection onto some half-space, and this guarantees the easy implementation of our proposed methods. Moreover, the step size of each algorithm is self-adaptive. We prove strong convergence theorems without knowledge of the Lipschitz constant of the monotone operator, and we apply our results to finding a common solution of constrained convex minimization and fixed point problems in Banach spaces. Finally, we present some numerical examples in order to demonstrate the efficiency of our algorithms in comparison with some recent iterative methods.


Introduction
Let E be a real Banach space with norm || • ||, and let E* be the dual of E. For x ∈ E and f ∈ E*, let ⟨x, f⟩ denote the value of f at x. Suppose that C is a nonempty, closed, and convex subset of E. The Variational Inequality Problem (VIP) associated with C and A is formulated as follows: find a point x* ∈ C such that

⟨Ax*, y − x*⟩ ≥ 0, ∀ y ∈ C, (1)

where A : E → E* is a single-valued mapping. We denote the solution set of VIP (1) by VI(C, A).
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. A mapping P_C is said to be the metric projection of H onto C if, for all x ∈ H, there exists a unique nearest point in C, denoted by P_C x, such that ||x − P_C x|| ≤ ||x − y||, ∀ y ∈ C.
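For concrete feasible sets the metric projection often has a closed form. The following small Euclidean sketch (an illustrative assumption: C is a coordinate box in R^n, so P_C is a componentwise clip) verifies the defining inequality numerically:

```python
import numpy as np

def project_box(x, lo, hi):
    # Metric projection onto the box C = [lo, hi]^n: clip each coordinate.
    return np.clip(x, lo, hi)

x = np.array([2.0, -3.0, 0.5])
px = project_box(x, -1.0, 1.0)          # nearest point of C to x
y = np.array([0.0, 0.0, 0.0])           # an arbitrary point of C
# ||x - P_C x|| <= ||x - y|| for every y in C
assert np.linalg.norm(x - px) <= np.linalg.norm(x - y)
```

The clip is exact here because the box constraint separates coordinatewise; for a general closed convex set no such formula exists, which is precisely the difficulty the relaxed methods below address.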
Variational inequality theory was first introduced independently by Fichera [1] and Stampacchia [2]. The VIP is a useful mathematical model that unifies many important concepts in applied mathematics, such as necessary optimality conditions, complementarity problems, network equilibrium problems, and systems of nonlinear equations (see [3][4][5][6][7]). Many authors have proposed and analysed several iterative algorithms for solving the VIP (1) and related optimization problems; see [8][9][10][11][12][13][14][15][16][17][18][19][20][21] and the references therein. One of the most famous of these methods is the extragradient method (EgM) proposed by Korpelevich [22], which is presented in Algorithm 1 as follows:

x_0 ∈ C,
y_n = P_C(x_n − λAx_n),
x_{n+1} = P_C(x_n − λAy_n),

where C ⊆ R^n, A : C → R^n is a monotone and L-Lipschitz continuous operator, and λ ∈ (0, 1/L). If the solution set VI(C, A) is nonempty, then the sequence {x_n} generated by EgM converges to an element of VI(C, A). The EgM was further extended to infinite-dimensional spaces by many authors; see, for instance, [11,23,24]. Observe that the EgM involves two projections onto the closed convex set C and two evaluations of A per iteration. Computing the projection onto an arbitrary closed convex set is a difficult task, a drawback that may affect the efficiency of the EgM, as mentioned in [25]. Hence, a major improvement on the EgM is to minimize the number of evaluations of P_C per iteration. Censor et al. [25] initiated an attempt in this direction, modifying the EgM by replacing the second projection with a projection onto a half-space. This new method involves only one projection onto C and is called the subgradient extragradient method (SEgM). The SEgM is given in Algorithm 2 as follows:

x_0 ∈ H,
y_n = P_C(x_n − λAx_n),
T_n = {w ∈ H : ⟨x_n − λAx_n − y_n, w − y_n⟩ ≤ 0},
x_{n+1} = P_{T_n}(x_n − λAy_n),

where H is a Hilbert space.
Censor et al. [25] showed that if the solution set V I(C, A) is nonempty, the sequence {x n } generated by SEgM converges weakly to an element p ∈ V I(C, A), where p = lim n→∞ P V I(C,A) (x n ).
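For intuition, the EgM recursion above is easy to run in the Euclidean case. The sketch below is illustrative only (assumptions: C = [−1, 1]², A(x) = Mx with M skew-symmetric, hence monotone and 1-Lipschitz, and unique solution x* = 0):

```python
import numpy as np

def extragradient(A, proj_C, x0, lam, iters=500):
    # Korpelevich extragradient: two projections onto C per iteration.
    x = x0.copy()
    for _ in range(iters):
        y = proj_C(x - lam * A(x))      # predictor step
        x = proj_C(x - lam * A(y))      # corrector step
    return x

M = np.array([[0.0, 1.0], [-1.0, 0.0]])     # skew-symmetric => monotone, L = 1
A = lambda x: M @ x
proj = lambda x: np.clip(x, -1.0, 1.0)      # C = [-1, 1]^2
sol = extragradient(A, proj, np.array([0.7, -0.4]), lam=0.4)
# sol is close to the solution x* = (0, 0)
```

Note that a plain projected-gradient step fails for this rotation-like operator; the predictor-corrector structure is what yields convergence for merely monotone A.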
Bauschke and Combettes [26] pointed out that, in solving optimization problems, strong convergence of iterative schemes is more desirable than weak convergence. Hence the need to develop algorithms that generate strongly convergent sequences.
Let S : E → E be a nonlinear mapping. A point x* ∈ E is called a fixed point of S if Sx* = x*. We denote by F(S) the set of all fixed points of S, i.e., F(S) = {x ∈ E : Sx = x}. If S is a multivalued mapping, i.e., S : E → 2^E, then x* ∈ E is called a fixed point of S if x* ∈ Sx*. Here, we consider the fixed point problem for an infinite family of multivalued relatively nonexpansive mappings. The fixed point theory for multivalued mappings can be utilized in various areas, such as game theory, control theory, mathematical economics, and differential inclusions (see [13,27,28] and the references therein). The existence of common fixed points for a family of mappings has been considered by many authors (see [13,[28][29][30] and the references therein). Many optimization problems can be formulated as finding a point in the intersection of the fixed point sets of a family of nonlinear mappings. For instance, the well-known convex feasibility problem reduces to finding a point in the intersection of the fixed point sets of a family of nonexpansive mappings (see [31,32]). The problem of finding an optimal point that minimizes a given cost function over the common set of fixed points of a family of nonlinear mappings is of wide interdisciplinary interest and practical importance (see [33,34]). A simple algorithmic solution to the problem of minimizing a quadratic function over the common set of fixed points of a family of nonexpansive mappings is of extreme value in many applications, including set-theoretic signal estimation (see [34,35]).
In this article, we are interested in studying the problem of finding a common solution of both the VIP (1) and the common fixed point problem for multivalued mappings. The motivation for studying such problems lies in their potential application to mathematical models whose constraints can be expressed as a fixed point problem and a VIP. This happens, in particular, in practical problems such as signal processing, network resource allocation, and image recovery. One scenario is the network bandwidth allocation problem for two services in heterogeneous wireless access networks, in which the bandwidths of the services are mathematically related (see, for instance, [36][37][38][39] and the references therein).
Observe that all of the above results about the extragradient method and the subgradient extragradient method are confined to Hilbert spaces. However, many important practical problems are naturally defined in Banach spaces. For instance, Zhang et al. [40] pointed out that, in machine learning, Banach spaces possess much richer geometric structures, which are potentially useful for developing learning algorithms. This is due to the fact that any two Hilbert spaces over C of the same dimension are isometrically isomorphic. Der and Lee [41] also pointed out that most data in machine learning do not come with any natural notion of distance that can be induced from an inner product. Zhang et al. [40] further argued that most data come with intrinsic structures that make them impossible to embed into a Hilbert space. Hence, it is desirable to propose an iterative algorithm for finding a solution of VIP (1) in Banach spaces.
Very recently, Liu [42] extended the subgradient extragradient method from Hilbert spaces to Banach spaces and proposed the following strong convergence theorem.

Theorem 1. Let E be a two-uniformly convex and uniformly smooth Banach space with the two-uniform convexity constant c_1, and let C be a nonempty closed convex subset of E. Let S : E → E be a relatively nonexpansive mapping and A : E → E* a monotone and L-Lipschitz mapping on C with L > 0. Let {λ_n} be a real number sequence satisfying 0 < inf_{n≥1} λ_n ≤ sup_{n≥1} λ_n < c_1/L. Suppose that VI(C, A) ∩ F(S) is nonempty. Let {x_n} ⊂ E be a sequence generated by Algorithm 3.

We notice that Algorithm 3 makes one projection onto the closed convex set C. As pointed out earlier, computing the projection onto an arbitrary closed convex set is a difficult task. Another shortcoming of Algorithm 3, the EgM, and the SEgM is that the step size is defined by a constant (or a sequence) which depends on the Lipschitz constant of the monotone operator. The Lipschitz constant is typically assumed to be known, or at least estimated a priori. However, in many cases, this parameter is unknown or difficult to estimate. Moreover, the step size defined by such a constant is often very small and slows down the convergence rate. In practice, a larger step size can often be used and yields better numerical results. To overcome these drawbacks, in this article, motivated by the cited works and the ongoing research in this area, we propose some relaxed subgradient extragradient methods with self-adaptive step size for approximating a common solution of variational inequality and fixed point problems for an infinite family of relatively nonexpansive mappings in the setting of Banach spaces. In our algorithms, the two projections are made onto some half-spaces, which guarantees the easy implementation of the proposed methods. Moreover, the step size can be selected adaptively. We prove strong convergence theorems without the knowledge of
the Lipschitz constant of the monotone operator and we apply our results to finding a common solution of constrained convex minimization and fixed point problems in Banach spaces.Finally, we present some numerical examples to demonstrate the efficiency of our algorithms.
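Since the step-size rule is central to the proposed methods, the following small sketch illustrates the general idea of a self-adaptive rule of this kind (this is a common form from the literature, stated here as an assumption rather than the paper's exact update): the step shrinks only when the locally observed Lipschitz quotient demands it, so no prior knowledge of L is needed.

```python
import numpy as np

def adaptive_step(lam, mu, x, y, Ax, Ay):
    # A common self-adaptive rule (assumed form, not necessarily the
    # paper's exact formula): keep lambda unless the local Lipschitz
    # estimate ||Ax - Ay|| / ||x - y|| forces a smaller step.
    denom = np.linalg.norm(Ax - Ay)
    if denom == 0.0:
        return lam
    return min(lam, mu * np.linalg.norm(x - y) / denom)

# For A(x) = 3x (Lipschitz constant L = 3) and mu = 0.5, the rule settles
# at mu / L = 1/6 without L ever being supplied.
x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
lam = adaptive_step(1.0, 0.5, x, y, 3 * x, 3 * y)
```

Because the sequence of step sizes is nonincreasing and bounded below by mu/L, it converges, which is what makes the convergence analysis go through without L appearing in the algorithm.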

Preliminaries
In what follows, we let N and R be the set of positive integers and real numbers, respectively. Let E be a Banach space, E* the dual space of E, and let ⟨•, •⟩ denote the duality pairing between E and E*. When {x_n} is a sequence in E, we denote the strong convergence of {x_n} to x ∈ E by x_n → x and the weak convergence by x_n ⇀ x. An element z ∈ E is called a weak cluster point of {x_n} if there exists a subsequence {x_{n_j}} of {x_n} converging weakly to z. We write ω_w(x_n) for the set of all weak cluster points of {x_n}.
Next, we present some definitions and results that are employed in our subsequent analysis.

Definition 1.
A function f : E → R is said to be weakly lower semicontinuous (w-lsc) at x ∈ E if f(x) ≤ lim inf_{n→∞} f(x_n) holds for an arbitrary sequence {x_n}_{n=0}^∞ in E satisfying x_n ⇀ x.
Let g : E → R be a function. The subdifferential of g at x is defined by ∂g(x) = {ξ ∈ E* : g(y) ≥ g(x) + ⟨y − x, ξ⟩, ∀ y ∈ E}. If ∂g(x) ≠ ∅, then we say that g is subdifferentiable at x.
Definition 2. A mapping A : E → E* is said to be: (i) monotone if ⟨x − y, Ax − Ay⟩ ≥ 0 for all x, y ∈ E; (ii) L-Lipschitz continuous if there exists L > 0 such that ||Ax − Ay|| ≤ L||x − y|| for all x, y ∈ E; (iii) α-inverse-strongly monotone if there exists α > 0 such that ⟨x − y, Ax − Ay⟩ ≥ α||Ax − Ay||² for all x, y ∈ E.

Clearly, an α-inverse-strongly monotone mapping is monotone and (1/α)-Lipschitz continuous. However, the converse is not always true.
A Banach space E is said to be smooth if the limit lim_{t→0} (||x + ty|| − ||x||)/t exists for all x, y ∈ S_E, where S_E = {x ∈ E : ||x|| = 1}. Moreover, if this limit is attained uniformly for x, y ∈ S_E, then E is said to be uniformly smooth. It is obvious that a uniformly smooth space is smooth. In particular, a Hilbert space is uniformly smooth.
For p > 1, the generalized duality mapping J_p : E → 2^{E*} is defined as follows: J_p x = {f ∈ E* : ⟨x, f⟩ = ||x||^p, ||f|| = ||x||^{p−1}}. In particular, J = J_2 is called the normalized duality mapping. If E = H, where H is a real Hilbert space, then J = I. The normalized duality mapping J has the following properties (see [43]): (1) if E is smooth, then J is single-valued; (2) if E is strictly convex, then J is one-to-one and strictly monotone; (3) if E is reflexive, then J is surjective; and (4) if E is uniformly smooth, then J is uniformly norm-to-norm continuous on each bounded subset of E.
Let E be a smooth Banach space. The Lyapunov functional φ : E × E → R (see [44]) is defined by

φ(x, y) = ||x||² − 2⟨x, Jy⟩ + ||y||², ∀ x, y ∈ E. (2)

From the definition, it is easy to see that φ(x, x) = 0 for every x ∈ E. If E is strictly convex, then φ(x, y) = 0 ⇐⇒ x = y. If E is a Hilbert space, it is easy to see that φ(x, y) = ||x − y||² for all x, y ∈ E. Moreover, for every x, y, z ∈ E and α ∈ (0, 1), the Lyapunov functional φ satisfies, among others, the following properties: (||x|| − ||y||)² ≤ φ(x, y) ≤ (||x|| + ||y||)² and φ(x, y) = φ(x, z) + φ(z, y) + 2⟨z − x, Jy − Jz⟩. Next, we define the functional V : E × E* → R by V(x, x*) = ||x||² − 2⟨x, x*⟩ + ||x*||². It can be deduced from (2) that V is non-negative and V(x, x*) = φ(x, J^{−1}x*) for all x ∈ E and x* ∈ E*. We have the following result in a reflexive, strictly convex, and smooth Banach space.
Lemma 1. ([45]) Let E be a reflexive, strictly convex, and smooth Banach space with E* as its dual. Then V(x, x*) + 2⟨J^{−1}x* − x, y*⟩ ≤ V(x, x* + y*) for all x ∈ E and x*, y* ∈ E*.
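In a Hilbert space, where J is the identity, the Lyapunov functional collapses to the squared norm of the difference, which the following check confirms numerically:

```python
import numpy as np

def phi(x, y):
    # Lyapunov functional in a Hilbert space, where J is the identity:
    # phi(x, y) = ||x||^2 - 2<x, Jy> + ||y||^2 = ||x - y||^2.
    return np.dot(x, x) - 2.0 * np.dot(x, y) + np.dot(y, y)

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert abs(phi(x, y) - np.linalg.norm(x - y) ** 2) < 1e-12
```

This is why every φ-based estimate in the sequel reduces to a familiar squared-distance estimate when E = H; the Banach-space arguments replace ||x − y||² with φ(x, y) throughout.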
Definition 3. Let C be a nonempty closed convex subset of a real Banach space E. A point p ∈ C is called an asymptotic fixed point (see [46]) of T if C contains a sequence {x_n} which converges weakly to p such that lim_{n→∞} ||x_n − Tx_n|| = 0. We denote the set of asymptotic fixed points of T by F̂(T). A mapping T : C → C is said to be:
1. relatively nonexpansive (see [47]) if F(T) ≠ ∅, F̂(T) = F(T), and φ(p, Tx) ≤ φ(p, x) for all x ∈ C and p ∈ F(T);
2. generalized nonspreading [48] if there are α, β, γ, δ ∈ R such that αφ(Tx, Ty) + (1 − α)φ(x, Ty) + γ[φ(Ty, Tx) − φ(Ty, x)] ≤ βφ(Tx, y) + (1 − β)φ(x, y) + δ[φ(y, Tx) − φ(y, x)] for all x, y ∈ C.
The following result shows the relationship between generalized nonspreading mappings and relatively nonexpansive mappings.

Lemma 2. ([48]) Let E be a strictly convex Banach space with a uniformly Gâteaux differentiable norm, C a nonempty closed convex subset of E, and T a generalized nonspreading mapping of C into itself such that F(T) ≠ ∅. Then T is relatively nonexpansive.
Let N(C) and CB(C) denote the family of nonempty subsets and nonempty closed bounded subsets of C, respectively. The Hausdorff metric on CB(C) is defined by H(A, B) = max{ sup_{a∈A} d(a, B), sup_{b∈B} d(b, A) } for A, B ∈ CB(C), where d(a, B) = inf_{b∈B} ||a − b||. The class of relatively nonexpansive multivalued mappings contains the class of relatively nonexpansive single-valued mappings.

Remark 1. ([49]) Let E be a strictly convex and smooth Banach space, and C a nonempty closed convex subset of E. Suppose T : C → N(C) is a relatively nonexpansive multivalued mapping. If p ∈ F(T), then Tp = {p}.

Lemma 3. ([49]) Let E be a strictly convex and smooth Banach space and C be a nonempty closed convex subset of E. Let T : C → N(C) be a relatively nonexpansive multivalued mapping. Then F(T) is closed and convex.

Lemma 4. ([50]) Let E be a smooth and uniformly convex Banach space, and let {x_n} and {y_n} be sequences in E such that either {x_n} or {y_n} is bounded. If lim_{n→∞} φ(x_n, y_n) = 0, then lim_{n→∞} ||x_n − y_n|| = 0.

Remark 2. From Property (P4) of the Lyapunov functional, it follows that the converse of Lemma 4 also holds if the sequences {x n } and {y n } are bounded (see [51]).
Let C be a nonempty closed convex subset of a smooth, strictly convex, and reflexive Banach space E. The generalized projection Π_C : E → C is defined by Π_C x = argmin_{y∈C} φ(y, x); that is, Π_C x is the unique minimizer of the Lyapunov functional φ(·, x) over C. It is known that, if E is a real Hilbert space, then Π_C coincides with the metric projection P_C. The following results relating to the generalized projection are well known.

Lemma 5. ([52]) Let C be a nonempty closed convex subset of a reflexive, strictly convex, and smooth Banach space E. Given x ∈ E and z ∈ C, z = Π_C x implies φ(y, z) + φ(z, x) ≤ φ(y, x), ∀ y ∈ C.

Lemma 6. ([52,53]) Let C be a nonempty closed and convex subset of a smooth Banach space E and x ∈ E. Then x_0 = Π_C x if and only if ⟨x_0 − y, Jx − Jx_0⟩ ≥ 0, ∀ y ∈ C.

Lemma 7. ([52]) Let p be a real number with p ≥ 2. Then E is p-uniformly convex if and only if there exists c ∈ (0, 1] such that the corresponding p-uniform convexity inequality holds (see [52] for its precise form). Here, the best constant 1/c is called the p-uniform convexity constant of E.

Lemma 8. ([54]) Let E be a two-uniformly convex and smooth Banach space. Then, for every x, y ∈ E, ||x − y|| ≤ (2/c²)||Jx − Jy||, where 1/c is the two-uniform convexity constant of E.

Lemma 10. ([56]) Let E be a uniformly convex Banach space, r > 0 a positive number, and B_r(0) a closed ball of E. Then, for any given sequence {x_i}_{i=1}^∞ ⊂ B_r(0) and any given sequence {λ_i}_{i=1}^∞ of positive numbers with ∑_{i=1}^∞ λ_i = 1, there exists a continuous, strictly increasing, and convex function g : [0, 2r) → [0, ∞) with g(0) = 0 such that, for any positive integers i, j with i < j, ||∑_{n=1}^∞ λ_n x_n||² ≤ ∑_{n=1}^∞ λ_n ||x_n||² − λ_i λ_j g(||x_i − x_j||).

Lemma 11. ([57]) Let {a_n} be a sequence of nonnegative real numbers, {α_n} a sequence in (0, 1) with ∑_{n=1}^∞ α_n = ∞, and {b_n} a sequence of real numbers. Assume that a_{n+1} ≤ (1 − α_n)a_n + α_n b_n for all n ≥ 1. If lim sup_{k→∞} b_{n_k} ≤ 0 for every subsequence {a_{n_k}} of {a_n} satisfying lim inf_{k→∞}(a_{n_k+1} − a_{n_k}) ≥ 0, then lim_{n→∞} a_n = 0.
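Lemma 11 is the engine behind the strong convergence proofs. A quick numerical illustration (with the hypothetical choices α_n = 1/(n + 1) and b_n = 1/n, which satisfy the hypotheses) shows the recursion driving a_n to zero:

```python
# Illustration of the convergence lemma:
#   a_{n+1} <= (1 - alpha_n) a_n + alpha_n b_n,
# with sum(alpha_n) = infinity and b_n -> 0, forces a_n -> 0.
a = 1.0
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)   # diverging sum, vanishing terms
    b = 1.0 / n             # b_n -> 0
    a = (1 - alpha) * a + alpha * b
# a is now of order log(n)/n, i.e. close to zero
```

The divergence of ∑ α_n is essential: it is what lets the b_n terms eventually dominate the memory of the initial value a_1.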

Lemma 12. ([52]) Let C be a nonempty, closed, and convex subset of a Banach space E and A a monotone, hemicontinuous operator of C into E*. Then VI(C, A) = {u ∈ C : ⟨v − u, Av⟩ ≥ 0, ∀ v ∈ C}. It is obvious from Lemma 12 that the set VI(C, A) is a closed and convex subset of C.

Main Results
In this section, we present our algorithms and prove some strong convergence results for the proposed algorithms.We establish the convergence of the algorithms under the following conditions:

Condition A:
(A1) E is a 2-uniformly convex and uniformly smooth Banach space with the 2-uniform convexity constant 1/c;
(A2) C is a nonempty closed convex set given as a sublevel set C = {x ∈ E : g(x) ≤ 0} of a convex function g : E → R;
(A3) g is weakly lower semicontinuous on E;
(A4) for any x ∈ E, at least one subgradient ξ ∈ ∂g(x) can be calculated (i.e., g is subdifferentiable on E), where ∂g(x) is defined as follows: ∂g(x) = {ξ ∈ E* : g(y) ≥ g(x) + ⟨y − x, ξ⟩, ∀ y ∈ E}. In addition, ∂g is bounded on bounded sets.

Condition B:
(B1) The solution set Ω = VI(C, A) ∩ (∩_{i=1}^∞ F(S_i)) is nonempty, where S_i : E → CB(E), i ∈ N, is an infinite family of multivalued relatively nonexpansive mappings;
(B2) The mapping A : E → E* is monotone and Lipschitz continuous with Lipschitz constant L > 0.

Condition C:
(C1) {α_n} ⊂ (0, 1) with lim_{n→∞} α_n = 0 and ∑_{n=1}^∞ α_n = ∞;
(C2) {β_{n,i}} ⊂ (0, 1) with ∑_{i=0}^∞ β_{n,i} = 1 for each n and lim inf_{n→∞} β_{n,0} β_{n,i} > 0 for each i ≥ 1;
(C3) λ_1 > 0 and µ ∈ (0, c), where 1/c is the constant from (A1).
Now, we present our first algorithm in Algorithm 4, as follows:
Step 0. Select {α_n}, {β_{n,i}}, λ_1, and µ such that Condition C holds. Choose x_1 ∈ E and set n = 1.

Step 1.
Construct the half-space C_n = {x ∈ E : g(x_n) + ⟨ξ_n, x − x_n⟩ ≤ 0}, where ξ_n ∈ ∂g(x_n), and compute y_n = Π_{C_n} J^{−1}(Jx_n − λ_n Ax_n). If x_n − y_n = 0, then set x_n = w_n = z_n and go to Step 4. Else, go to Step 2.

Step 2.
Construct the half-space T_n = {x ∈ E : ⟨x − y_n, Jx_n − λ_n Ax_n − Jy_n⟩ ≤ 0} and compute w_n = Π_{T_n} J^{−1}(Jx_n − λ_n Ay_n).
Step 3. Compute z_n.
Step 4. Compute x_{n+1}.
Step 5. Compute the self-adaptive step size λ_{n+1}. Set n := n + 1 and return to Step 1.

Remark 3.
From the construction of the half-spaces C n and T n , it can easily be verified that C ⊆ C n and C ⊆ T n .
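The practical payoff of Remark 3 is that, unlike Π_C, the projection onto each half-space has a one-line closed form. In the Hilbert-space case (where the generalized projection reduces to the metric projection) it reads as follows; the particular g and subgradient below are illustrative assumptions:

```python
import numpy as np

def project_halfspace(z, xn, g_xn, xi):
    # Euclidean projection onto C_n = { w : g(x_n) + <xi, w - x_n> <= 0 }.
    # If z already satisfies the inequality it is its own projection;
    # otherwise shift z along xi just enough to make the constraint active.
    v = g_xn + xi @ (z - xn)
    return z.copy() if v <= 0 else z - (v / (xi @ xi)) * xi

xn = np.array([2.0, 0.0])
xi = np.array([4.0, 0.0])   # e.g. a subgradient of g(x) = ||x||^2 - 1 at xn
g_xn = 3.0                  # g(xn) = 4 - 1
p = project_halfspace(np.array([2.0, 1.0]), xn, g_xn, xi)
# p lies on the bounding hyperplane: the constraint is active at p
```

Because C ⊆ C_n, projecting onto C_n never excludes feasible points; the price is that y_n may be infeasible for C, which the convergence analysis absorbs.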
Lemma 13. Let {x_n} be a sequence generated by Algorithm 4. Then the following inequality holds for all p ∈ Ω and n ∈ N: In particular, there exists N_0 > 0 such that, for all n > N_0, we have
Proof. Let p ∈ Ω. Then, by applying Lemma 5 and Property (P3) of the Lyapunov functional together with the monotonicity of A, we have By the definition of T_n, we have that Then, by the definition of λ_{n+1}, the Lipschitz continuity of A, Lemma 8, and the Cauchy–Schwarz inequality, we obtain Combining (5) and (6), we obtain the desired inequality.
Lemma 14. The sequence {x_n} generated by Algorithm 4 is bounded.
Proof.Let p ∈ Ω, then by applying Lemma 10 and (4), we have that for all n > N 0 Hence, the sequence {x n } is bounded.Consequently, {w n }, {y n } and {z n } are also bounded.

Lemma 15.
The following inequality holds for all p ∈ Ω and n > N_0:
Proof. Let p ∈ Ω. Then, from (7) and by applying (4), we have Since {x_n} is bounded, by Condition (A4) there exists a constant M > 0 such that ||ξ_{n_k}|| ≤ M for all k ≥ 0. Hence, by Condition (A3), we have that Hence, it follows from Condition (A2) that x ∈ C. From Lemma 6, we obtain By the monotonicity of A, we obtain Letting k → ∞, and since {Ax_n} is bounded and lim_{n→∞} ||x_n − y_n|| = 0, we have By Lemma 12, it follows that x ∈ VI(C, A), as required.
Lemma 17. Let {x_n} be the sequence generated by Algorithm 4 under Conditions A–C. Then ω_w(x_n) ⊂ Ω.
Proof. By the hypothesis of the lemma, the construction of {x_{n+1}}, the properties of the Lyapunov functional, and applying (3), we obtain Then it follows from Lemma 13, Lemma 8, and Lemma 16 that lim Next, we show that ω_w(x_n) ⊂ ∩_{i=1}^∞ F(S_i). Suppose x_{n_k} ⇀ x* ∈ ω_w(x_n). Then, by applying Lemma 15, we obtain It follows from the property of g that lim Since J^{−1} is uniformly norm-to-norm continuous on bounded sets, we have lim From (8), we have that lim Because J is uniformly norm-to-norm continuous on bounded sets, we obtain lim By the definition of z_n, applying (10) and the fact that lim_{n→∞} α_n = 0, we get Because J^{−1} is uniformly norm-to-norm continuous on bounded sets, it follows that By applying (9) and (11), we have Hence, it follows that From (11) and (12), and the definition of S_i for all i ≥ 1, we have that Hence, it follows from (11) and (13) that This, together with (8), implies that ω_w(x_n) ⊂ Ω, as required.
Theorem 2. Let {x_n} be a sequence generated by Algorithm 4 such that Conditions A–C are satisfied. Then the sequence {x_n} converges strongly to x† = Π_Ω x_1.
Proof. Let x† = Π_Ω x_1. Then it follows from Lemma 15 that Now, we claim that the sequence {φ(x†, x_n)} converges to zero. In view of Lemma 11, it suffices to show that lim sup_{k→∞} ⟨z_{n_k} − x†, Jx_1 − Jx†⟩ ≤ 0 along the relevant subsequences. It follows from Lemma 17 that ω_w(x_n) ⊂ Ω. Additionally, from (11) we have that Because x† = Π_Ω x_1, it follows from Lemma 6 and (15) that lim sup_{k→∞} ⟨z_{n_k} − x†, Jx_1 − Jx†⟩ ≤ 0. Applying Lemma 11 to (14), together with (16), we deduce that lim_{n→∞} φ(x†, x_n) = 0, that is, lim_{n→∞} x_n = x†, and this completes the proof.
Next, we propose our second algorithm in Algorithm 5, which is a slight modification of Algorithm 4. Because the proof of this result is very similar to that of Theorem 2, the proof is left for the reader to verify.Theorem 3. Let {x n } be a sequence generated by Algorithm 5, as follows:

Step 1.
Construct the half-space C_n = {x ∈ E : g(x_n) + ⟨ξ_n, x − x_n⟩ ≤ 0}, where ξ_n ∈ ∂g(x_n), and compute y_n = Π_{C_n} J^{−1}(Jx_n − λ_n Ax_n). If x_n − y_n = 0, then set x_n = w_n = z_n and go to Step 4. Else, go to Step 2.

Step 2.
Construct the half-space T_n = {x ∈ E : ⟨x − y_n, Jx_n − λ_n Ax_n − Jy_n⟩ ≤ 0} and compute w_n = Π_{T_n} J^{−1}(Jx_n − λ_n Ay_n).
Step 3.
Suppose that all of the conditions of Theorem 2 are satisfied. Then the sequence {x_n} generated by Algorithm 5 converges strongly to x† = Π_Ω x_1. Now, we obtain some results which are consequences of Theorem 2. If we take the multivalued relatively nonexpansive mappings S_i, i ∈ N, in Theorem 2 to be single-valued relatively nonexpansive mappings, then we obtain the following result.

Corollary 1. Let S_i : E → E, i ∈ N, be an infinite family of single-valued relatively nonexpansive mappings and let {x_n} be a sequence generated by Algorithm 6, as follows:

Step 1.
Construct the half-space C_n = {x ∈ E : g(x_n) + ⟨ξ_n, x − x_n⟩ ≤ 0}, where ξ_n ∈ ∂g(x_n), and compute y_n = Π_{C_n} J^{−1}(Jx_n − λ_n Ax_n). Construct the half-space T_n = {x ∈ E : ⟨x − y_n, Jx_n − λ_n Ax_n − Jy_n⟩ ≤ 0} and compute w_n = Π_{T_n} J^{−1}(Jx_n − λ_n Ay_n).
Suppose that the solution set Ω = VI(C, A) ∩ (∩_{i=1}^∞ F(S_i)) is nonempty and the remaining conditions of Theorem 2 are satisfied. Then the sequence {x_n} generated by Algorithm 6 converges strongly to x† = Π_Ω x_1.
Remark 5. Corollary 1 improves and extends Theorem 1 of [42] in the following senses: (i) the projection onto the closed convex set C is replaced with a projection onto a half-space, which can easily be computed; (ii) the step size is self-adaptive and independent of the Lipschitz constant of the monotone operator.
(iii) The result extends the fixed point problem from a single relatively nonexpansive mapping to an infinite family of relatively nonexpansive mappings.
The next result follows from Lemma 2 and Corollary 1.
Corollary 2. Let S i : E → E, i ∈ N be an infinite family of generalized nonspreading mappings and {x n } be a sequence generated by Algorithm 7, as follows: Algorithm 7: Step 0. Select {α n }, {β n,i }, λ 1 , and µ such that Condition C holds.Choose x 1 ∈ E and set n = 1.

Step 1.
Construct the half-space C_n = {x ∈ E : g(x_n) + ⟨ξ_n, x − x_n⟩ ≤ 0}, where ξ_n ∈ ∂g(x_n), and compute y_n = Π_{C_n} J^{−1}(Jx_n − λ_n Ax_n). If x_n − y_n = 0, then set x_n = w_n = z_n and go to Step 4. Else, go to Step 2.

Step 2.
Construct the half-space T_n = {x ∈ E : ⟨x − y_n, Jx_n − λ_n Ax_n − Jy_n⟩ ≤ 0} and compute w_n = Π_{T_n} J^{−1}(Jx_n − λ_n Ay_n).
Step 3.

Suppose that the solution set Ω = VI(C, A) ∩ (∩_{i=1}^∞ F(S_i)) is nonempty and the remaining conditions of Theorem 2 are satisfied. Then the sequence {x_n} generated by Algorithm 7 converges strongly to x† = Π_Ω x_1.

Constrained Convex Minimization and Fixed Point Problems
In this section, we present an application of our results to finding a common solution of the constrained convex minimization problem [59][60][61] and a fixed point problem in Banach spaces.
Let E be a real Banach space and C be a nonempty closed convex subset of E. The constrained convex minimization problem is to find a point x* ∈ C such that

f(x*) = min_{y ∈ C} f(y), (17)

where f is a real-valued convex function. Convex optimization theory is a powerful tool for solving many practical problems in operations research. In particular, it has been widely employed to solve practical minimization problems over complicated constraints [32,62], e.g., convex optimization problems with a fixed point constraint and a variational inequality constraint. The following lemma will be required.

Lemma 18. ([63]) Let E be a real Banach space and let C be a nonempty closed convex subset of E. Let f be a convex function of E into R.

Now, we present Algorithm 8, as follows:
Step 0. Select {α_n}, {β_{n,i}}, λ_1, and µ such that Condition C holds. Choose x_1 ∈ E and set n = 1.

Step 1.
Construct the half-space C_n = {x ∈ E : g(x_n) + ⟨ξ_n, x − x_n⟩ ≤ 0}, where ξ_n ∈ ∂g(x_n), and compute y_n = Π_{C_n} J^{−1}(Jx_n − λ_n ∇f(x_n)). If x_n − y_n = 0, then set x_n = w_n = z_n and go to Step 4. Else, go to Step 2.

Step 2.
Construct the half-space T_n = {x ∈ E : ⟨x − y_n, Jx_n − λ_n ∇f(x_n) − Jy_n⟩ ≤ 0} and compute w_n = Π_{T_n} J^{−1}(Jx_n − λ_n ∇f(y_n)).
Step 3. Compute z_n.
Step 4. Compute x_{n+1}.
Step 5. Compute λ_{n+1}. Set n := n + 1 and return to Step 1.

Suppose that the solution set Ω = VI(C, ∇f) ∩ (∩_{i=1}^∞ F(S_i)) is nonempty and the remaining conditions of Theorem 2 are satisfied. Then the sequence {x_n} generated by Algorithm 8 converges strongly to x† = Π_Ω x_1.
Proof. Because f is convex, its gradient ∇f is monotone [63]. The result then follows by letting A = ∇f in Theorem 2 and applying Lemma 18.
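The key fact used in this proof, that the gradient of a convex function is a monotone operator, can be sanity-checked numerically; the quadratic f below is an illustrative choice:

```python
import numpy as np

# f(x) = 0.5 x^T Q x with Q symmetric positive semidefinite is convex,
# and grad f(x) = Q x is monotone and ||Q||-Lipschitz, so Theorem 2
# applies with A = grad f.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
grad_f = lambda x: Q @ x
x, y = np.array([1.0, 2.0]), np.array([-1.0, 0.5])
mono = (x - y) @ (grad_f(x) - grad_f(y))   # <x - y, grad f(x) - grad f(y)>
# mono >= 0, as monotonicity requires
```

For this Q the quantity equals (x − y)ᵀQ(x − y), which is nonnegative for every pair of points since Q is positive semidefinite; that is exactly the monotonicity inequality specialized to gradients.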

Numerical Example
In this section, we present some numerical examples to demonstrate the efficiency of our methods, Algorithms 4 and 5, in comparison with Algorithm 3. All of the numerical computations were carried out using Matlab R2019b.
Now, we consider the mapping S_i : R² → R² defined by S_i z = ||B||^{−1} Bz for all i, where z = (x, y)^T. It is easily verified that each S_i is quasi-nonexpansive (note that in a Hilbert space a relatively nonexpansive mapping reduces to a quasi-nonexpansive mapping). The solution of the problem is x† = (0, 0)^T. We test the algorithms for three different starting points, using ||x_{n+1} − x_n|| < ε as the stopping criterion, where ε = 10^{−9}. The numerical results are reported in Figure 1 and Table 1.
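For reference, the Euclidean (Hilbert-space) specialization of the subgradient extragradient idea can be run directly on the operator of Example 1. This sketch uses a fixed step λ < 1/L rather than the paper's adaptive rule and is meant only to reproduce the qualitative behaviour:

```python
import numpy as np

def A(v):
    # Operator of Example 1: A(x, y) = (x + y + sin x, -x + y + sin y),
    # monotone and Lipschitz with constant L = 3.
    x, y = v
    return np.array([x + y + np.sin(x), -x + y + np.sin(y)])

def proj_box(v):
    # Feasible set C = [-1, 1] x [-1, 1].
    return np.clip(v, -1.0, 1.0)

lam, v = 0.2, np.array([0.9, -0.7])   # lam < 1/L with L = 3
for _ in range(2000):
    u = v - lam * A(v)
    y = proj_box(u)                   # first projection, onto C
    d = u - y                         # normal of the half-space T_n at y
    z = v - lam * A(y)
    t = d @ (z - y)
    v = z if t <= 0 else z - (t / (d @ d)) * d   # project z onto T_n
print(v)   # close to the solution (0, 0)^T reported in the paper
```

When no box constraint is active, d = 0 and the half-space projection is vacuous, so the scheme reduces to a plain extragradient update; the half-space only matters near the boundary of C.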


Conclusions
In this paper, we studied a classical monotone and Lipschitz continuous variational inequality and fixed point problems defined on a level set of a convex function in the setting of Banach spaces. We proposed two iterative methods with self-adaptive step size that combine the Halpern method with a relaxed projection method for approximating a common solution of variational inequality and fixed point problems for an infinite family of relatively nonexpansive mappings in Banach spaces. The main advantage of our algorithms is that every projection onto the closed convex set is replaced with a projection onto some half-space, which guarantees easy implementation of the proposed methods. Moreover, the step size can be adaptively selected. We obtained strong convergence results for the proposed methods without knowledge of the Lipschitz constant of the monotone operator, and we applied our results to finding a common solution of constrained convex minimization and fixed point problems in Banach spaces. The obtained results improve and extend several existing results in the current literature in this direction.
Lemma 16. Assume that {x_n} and {y_n} are sequences generated by Algorithm 4 such that lim_{n→∞} ||x_n − y_n|| = 0. Let {x_{n_k}} be a subsequence of {x_n} which converges weakly to some x ∈ E as k → ∞; then x ∈ VI(C, A).
If f in Problem (17) is Fréchet differentiable, then z is a solution of Problem (17) if and only if z ∈ VI(C, ∇f). Applying Theorem 2 and Lemma 18, we can approximate a common solution of the constrained convex minimization Problem (17) and the fixed point problem for an infinite family of multivalued relatively nonexpansive mappings. Let E be a two-uniformly convex and uniformly smooth Banach space with the 2-uniform convexity constant 1/c, and let S_i : E → CB(E), i ∈ N, be an infinite family of multivalued relatively nonexpansive mappings. Let f : E → R be a Fréchet differentiable convex function and suppose ∇f is L-Lipschitz continuous with L > 0. Assume that Problem (17) is consistent. Let {x_n} be a sequence generated by Algorithm 8.

Example 1. Consider a nonlinear operator A : R² → R² defined by A(x, y) = (x + y + sin x, −x + y + sin y), and let the feasible set C be the box C = [−1, 1] × [−1, 1]. It can easily be verified that A is monotone and Lipschitz continuous with constant L = 3. Let B be a 2 × 2 matrix defined by

Table 1 .
Numerical results for Example 1.

Table 2 .
Numerical results for Example 2.