Inertial Krasnoselskii–Mann Method in Banach Spaces

Abstract: In this paper, we present a general inertial Krasnoselskii–Mann algorithm for solving inclusion problems in Banach spaces. First, we establish weak convergence in real uniformly convex and q-uniformly smooth Banach spaces for finding fixed points of nonexpansive mappings. Then, strong convergence is obtained for the inertial generalized forward-backward splitting method for the inclusion. Our results extend many recent and related results obtained in real Hilbert spaces.


Introduction
Let X be a real Banach space, and let A : X → X be a single-valued operator and B : X → 2^X a set-valued operator. We consider the following inclusion problem: find x ∈ X such that

0 ∈ Ax + Bx. (1)

Such inclusion problems are quite general, since they include as special cases various problems such as non-smooth convex optimization problems, variational inequalities and convex-concave saddle-point problems, to name a few (see, e.g., [1][2][3][4][5]).
The following method was introduced in [16] (see also [14]) for finding a zero of (1) when A = 0 and B is a maximal monotone operator:

x_0, x_1 ∈ H; y_n = x_n + θ_n(x_n − x_{n−1}); x_{n+1} = J^B_{r_n}(y_n), n ≥ 1.
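To make the two-step scheme concrete (our illustration, not from the paper), it can be sketched in Python for B = ∂f with f(x) = ‖x‖₁, whose resolvent J^B_r is the soft-thresholding map; the constants θ and r are illustrative choices.

```python
import numpy as np

def soft_threshold(y, r):
    # Resolvent J_r^B = (I + r*B)^(-1) for B = subdifferential of ||.||_1
    return np.sign(y) * np.maximum(np.abs(y) - r, 0.0)

def inertial_proximal_point(x0, x1, resolvent, theta=0.3, r=1.0, n_iter=50):
    """Inertial proximal point iteration:
       y_n = x_n + theta*(x_n - x_{n-1}),  x_{n+1} = J_r^B(y_n)."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for _ in range(n_iter):
        y = x + theta * (x - x_prev)      # inertial extrapolation step
        x_prev, x = x, resolvent(y, r)    # backward (resolvent) step
    return x

x = inertial_proximal_point([5.0, -3.0], [5.0, -3.0], soft_threshold)
print(x)  # converges to the zero of B (the origin)
```

Here the iterates reach the zero of B after a handful of resolvent steps; the inertial term θ_n(x_n − x_{n−1}) merely accelerates the early iterations.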
Alvarez and Attouch [16] established the weak convergence of (3) under appropriate conditions on {θ_n} and {r_n}. Several other modifications of (2) with an inertial extrapolation step have been considered in Hilbert spaces by many authors; see, for example, [17][18][19][20][21]. Based on the above-mentioned results [19,[22][23][24][25][26], our main contribution in this paper is the following. We extend the results of [17] concerning the inertial Krasnoselskii–Mann iteration for fixed points of nonexpansive mappings to uniformly convex and q-uniformly smooth Banach spaces. We also extend the forward-backward splitting method with inertial extrapolation step for solving (1) from Hilbert spaces to Banach spaces. While the mentioned results establish only weak convergence, we also provide a strong convergence analysis in Banach spaces.
The outline of the paper is as follows. We first recall some basic definitions and results in Section 2. Our algorithms are presented and analysed in Section 3. In Section 4, an infinite-dimensional example is presented, and final remarks and conclusions are given in Section 5.

Preliminaries
Let X be a real Banach space. The modulus of convexity of X is the function δ_X : (0, 2] → [0, 1] defined by

δ_X(ε) = inf{1 − ‖(x + y)/2‖ : ‖x‖ = ‖y‖ = 1, ‖x − y‖ ≥ ε},

and X is said to be uniformly convex if δ_X(ε) > 0 for every ε ∈ (0, 2]. The modulus of smoothness of X is the function ρ : R_+ → R_+ defined by

ρ(t) = sup{(‖x + y‖ + ‖x − y‖)/2 − 1 : ‖x‖ = 1, ‖y‖ ≤ t}.

We say X is uniformly smooth if lim_{t→0} ρ(t)/t = 0. X is said to be q-uniformly smooth with 1 < q ≤ 2 if there exists a constant k_q > 0 such that ρ(t) ≤ k_q t^q for t > 0. If X is q-uniformly smooth, then it is uniformly smooth (see, e.g., [27]). Suppose that X* is the dual space of X. The generalized duality mapping J_q (q > 1) of X is defined by

J_q(x) := {j_q(x) ∈ X* : ⟨x, j_q(x)⟩ = ‖x‖^q, ‖j_q(x)‖ = ‖x‖^{q−1}}, ∀x ∈ X,

where ⟨·, ·⟩ denotes the duality pairing between X and X*. In particular, we call J_2 := J the normalized duality mapping on X (see, e.g., [28] (p. 1128)). It is well known (see, for example, [27]) that X is uniformly smooth if and only if the duality mapping J_q is single-valued and norm-to-norm uniformly continuous on bounded subsets of X.
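For a concrete feel of the generalized duality mapping (our own finite-dimensional check, not part of the paper), in X = (R^n, ‖·‖_p) it has the explicit form j_q(x)_i = ‖x‖_p^{q−p} |x_i|^{p−1} sgn(x_i), and the two defining identities can be verified numerically:

```python
import numpy as np

def j_q(x, p=3.0, q=2.0):
    """Generalized duality map on (R^n, ||.||_p): the element j_q(x) with
    <x, j_q(x)> = ||x||^q and ||j_q(x)||_{p'} = ||x||^{q-1} (p' conjugate to p)."""
    nx = np.linalg.norm(x, ord=p)
    return nx ** (q - p) * np.abs(x) ** (p - 1) * np.sign(x)

x = np.array([1.0, -2.0, 0.5])
p, q = 3.0, 2.0
f = j_q(x, p, q)
nx = np.linalg.norm(x, ord=p)
print(np.isclose(f @ x, nx ** q))                                  # <x, j_q(x)> = ||x||^q
print(np.isclose(np.linalg.norm(f, ord=p / (p - 1)), nx ** (q - 1)))  # ||j_q(x)|| = ||x||^{q-1}
```

Both checks print True; for p = q = 2 the formula reduces to j_2(x) = x, i.e., the normalized duality mapping is the identity on a Hilbert space.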
Let B : X → 2^X. We denote the domain of B by D(B) = {x ∈ X : Bx ≠ ∅} and its range by R(B) = ∪{Bz : z ∈ D(B)}. We say that B is accretive if, for each x, y ∈ D(B), there exists j(x − y) ∈ J(x − y) such that (see, for example, [25])

⟨u − v, j(x − y)⟩ ≥ 0, ∀u ∈ Bx, ∀v ∈ By.

B is said to be m-accretive if it is accretive and R(I + rB) = X for all r > 0. Given α > 0 and q ∈ (1, ∞), we say that a single-valued accretive operator A is α-inverse strongly accretive (α-isa, for short) of order q if, for each x, y ∈ D(A), there exists j_q(x − y) ∈ J_q(x − y) such that

⟨Ax − Ay, j_q(x − y)⟩ ≥ α‖Ax − Ay‖^q.

We say that A is α-strongly accretive of order q if, for each x, y ∈ D(A), there exists j_q(x − y) ∈ J_q(x − y) such that

⟨Ax − Ay, j_q(x − y)⟩ ≥ α‖x − y‖^q.

Let ∅ ≠ C ⊂ X and T : C → C be a nonlinear mapping. The set of fixed points of T is defined by Fix(T) := {x ∈ C : Tx = x}. For the rest of this paper, we shall adopt the following notation: J^B_r := (I + rB)^{−1} denotes the resolvent of B, T^{A,B}_r := J^B_r(I − rA), and S := (A + B)^{−1}(0) denotes the solution set of (1).

Lemma 1 ([29] p. 33). Let q > 1 and X be a real normed space with the generalized duality mapping J_q. Then, for any x, y ∈ X, we have

‖x + y‖^q ≤ ‖x‖^q + q⟨y, j_q(x + y)⟩ for all j_q(x + y) ∈ J_q(x + y).

Lemma 2 ([28] Cor. 1). Let 1 < q ≤ 2 and X be a smooth Banach space. Then the following statements are equivalent:
(i) X is q-uniformly smooth.
(ii) There is a constant k_q > 0 such that for all x, y ∈ X,

‖x + y‖^q ≤ ‖x‖^q + q⟨y, j_q(x)⟩ + k_q‖y‖^q.
The best constant k q will be called the q-uniform smoothness coefficient of X.
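In a Hilbert space (q = 2, j_2 = I) the inequality of Lemma 2 holds with k_2 = 1 and is in fact an identity, which a quick numerical check (ours) confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)

# Lemma 2 (ii) in the Hilbert-space case: q = 2, j_2(x) = x, k_2 = 1
lhs = np.linalg.norm(x + y) ** 2
rhs = np.linalg.norm(x) ** 2 + 2 * (y @ x) + np.linalg.norm(y) ** 2
print(np.isclose(lhs, rhs))  # True: ||x+y||^2 = ||x||^2 + 2<y,x> + ||y||^2
```

This is why Hilbert spaces are 2-uniformly smooth with the best possible coefficient; in general q-uniformly smooth spaces the inequality is strict.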
Lemma 3. Let X be a Banach space. Let A : X → X be an α-isa of order q and B : X → 2^X an m-accretive operator. Then, for any r > 0, Fix(T^{A,B}_r) = (A + B)^{−1}(0) = S.

Lemma 4 ([25] Lem. 3.3). Let X be a uniformly convex and q-uniformly smooth Banach space for some q ∈ (1, 2]. Assume that A is a single-valued α-isa of order q in X. Then, given r > 0, there exists a continuous, strictly increasing and convex function φ_q : R_+ → R_+ with φ_q(0) = 0 such that, for all x, y in the ball B_r,

‖T^{A,B}_r x − T^{A,B}_r y‖^q ≤ ‖x − y‖^q − r(αq − r^{q−1}k_q)‖Ax − Ay‖^q − φ_q(‖(I − J^B_r)(I − rA)x − (I − J^B_r)(I − rA)y‖),

where k_q is the q-uniform smoothness coefficient of X.
Lemma 6 (Maingé [30]). Let {ϕ_n}, {δ_n} and {θ_n} be sequences in [0, +∞) such that

ϕ_{n+1} ≤ ϕ_n + θ_n(ϕ_n − ϕ_{n−1}) + δ_n for all n ≥ 1, with ∑_{n=1}^∞ δ_n < ∞,

and there exists a real number θ with 0 ≤ θ_n ≤ θ < 1 for all n ∈ N. Then the following hold:
(i) ∑_{n≥1} [ϕ_n − ϕ_{n−1}]_+ < ∞, where [t]_+ := max{t, 0};
(ii) there exists ϕ* ∈ [0, +∞) such that lim_{n→∞} ϕ_n = ϕ*.

Lemma 7 (Goebel and Reich [31]). Let E be a uniformly convex Banach space, C ⊂ E nonempty, closed and convex, and T : C → C a nonexpansive mapping. Then I − T is demi-closed at 0.
Lemma 8 (Xu [28]). Let E be a real q-uniformly smooth Banach space. Then there exists a constant c_q > 0 such that

‖x + y‖^q ≤ ‖x‖^q + q⟨j_q(x), y⟩ + c_q‖y‖^q, ∀x, y ∈ E.
Lemma 9 (Xu [32]). Let {a_n} be a sequence of nonnegative real numbers satisfying

a_{n+1} ≤ (1 − α_n)a_n + γ_n, n ≥ 0,

where {α_n} ⊂ (0, 1) with ∑_{n=0}^∞ α_n = ∞ and {γ_n} ⊂ [0, ∞) with ∑_{n=0}^∞ γ_n < ∞. Then a_n → 0 as n → ∞.
Notation: x_n ⇀ x, n → ∞ means that {x_n} converges weakly to x, and x_n → x, n → ∞ means that {x_n} converges strongly to x.

The Algorithm
In this section, we introduce our method and give its convergence analysis. Recall that ℓ_1 is the space of all real sequences {ε_n} with ∑_{n=1}^∞ |ε_n| < ∞.
Let E be a uniformly convex Banach space and T : E → E a nonexpansive mapping with Fix(T) ≠ ∅.

Remark 1.
Observe that, since the value of ‖x_n − x_{n−1}‖ is known a priori before choosing θ_n, Step (2) in Algorithm 1 is easily implemented. Furthermore, observe that by the assumption {ε_n}_{n=1}^∞ ⊂ ℓ_1, we have ∑_{n=0}^∞ θ_n‖x_n − x_{n−1}‖ < ∞ and ∑_{n=0}^∞ θ_n‖x_n − x_{n−1}‖^q < ∞.

Convergence Analysis
We start with the weak convergence analysis of Algorithm 1 for nonexpansive mappings. Throughout our analysis, we assume that E is a uniformly convex Banach space.

Theorem 1. Suppose T : E → E is a nonexpansive mapping with Fix(T) ≠ ∅. Assume that 0 < a ≤ λ_n ≤ b < 1. Then {x_n} generated by Algorithm 1 converges weakly to a point in Fix(T).
Proof. Take z ∈ Fix(T). From (16) and (17), we have (18) (noting that 0 ≤ θ_n ≤ θ). It follows from (15) and (18) that lim_{n→∞} ‖x_n − z‖ exists. We next show that lim_{n→∞} ‖Tw_n − w_n‖ = 0. From the update of x_{n+1} in Algorithm 1, we get (20). Using (18) in (20), we get (21); similarly, we obtain (22). Since lim_{n→∞} θ_n‖x_n − x_{n−1}‖^q = 0 and lim_{n→∞} ‖x_n − z‖^q exists, we obtain from (21) that lim_{n→∞} w_q(λ_n)ϕ(‖Tw_n − w_n‖) = 0. Since lim inf_{n→∞} λ_n(1 − λ_n) > 0, we get lim_{n→∞} ϕ(‖Tw_n − w_n‖) = 0 and, by the continuity of ϕ, lim_{n→∞} ‖Tw_n − w_n‖ = 0.
Furthermore, since {x_n} is bounded, there exists a subsequence {x_{n_k}} ⊂ {x_n} such that x_{n_k} ⇀ p ∈ E. By (22), we then have w_{n_k} ⇀ p. Using the demi-closedness of I − T in Lemma 7, we get that p ∈ Fix(T). By the results in [33], {x_n} has exactly one weak limit point, and hence {x_n} is weakly convergent. This ends the proof.

Remark 2.
(a) We mention here that quasi-nonexpansiveness of T is a weaker condition that is still sufficient for Theorem 1. (b) It can also be shown in Theorem 1 that lim_{n→∞} ‖x_n − Tx_n‖ = 0; therefore, Algorithm 1 preserves certain properties (asymptotic regularity) of the Krasnoselskii–Mann iteration. Moreover, T^{A,B}_r is nonexpansive; therefore, by Theorem 1, {x_n} converges weakly to a point in S and the desired result is obtained.
We give two instances of strong convergence of the relaxed forward-backward Algorithm 1.
Theorem 3. Suppose that one of the following holds:
(i) A is α-isa of order q, B is β-strongly accretive of order q, and r ∈ (0, (αq/c_q)^{1/(q−1)});
(ii) A is Lipschitz continuous with constant L, B is β-strongly accretive of order q, and r(βq − c_q r^{q−1}L^q) ∈ (0, 1).
Then {x_n} generated by Algorithm 1 with T := T^{A,B}_r converges strongly to the unique point in S.

Proof.
We first show that the inclusion problem (1) has a unique solution by showing that, in each of the cases above, T^{A,B}_r is a contraction on E.
(i) For all x, y ∈ E, we have ‖(I − rA)x − (I − rA)y‖ ≤ ‖x − y‖; therefore, I − rA is a nonexpansive mapping. Since B is β-strongly accretive of order q, its resolvent J^B_r is a contraction; hence T^{A,B}_r = J^B_r(I − rA) is a contraction with some constant τ ∈ (0, 1).
(ii) Observe that r(βq − c_q r^{q−1}L^q) ∈ (0, 1) and define τ := (1 − r(βq − c_q r^{q−1}L^q))^{1/q}. Then, for all x, y ∈ E, ‖T^{A,B}_r x − T^{A,B}_r y‖ ≤ τ‖x − y‖.
Therefore, in both cases (i) and (ii), T^{A,B}_r is a contraction on E with constant τ, so the inclusion problem (1) has a unique solution x* ∈ S. Consequently, using the update of x_{n+1} in Algorithm 1 with T = T^{A,B}_r, we get

‖x_{n+1} − x*‖ ≤ (1 − λ_n(1 − τ))‖x_n − x*‖ + θ_n‖x_n − x_{n−1}‖.

Observe that, by the update of θ_n in Algorithm 1, we have ∑_{n=1}^∞ θ_n‖x_n − x_{n−1}‖ < ∞; using Lemma 9, we get x_n → x*, n → ∞, and the proof is complete.
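For intuition (a Hilbert-space sketch of ours, not the Banach-space setting of the theorem), the contraction T^{A,B}_r = J^B_r(I − rA) can simply be iterated; here we take A = ∇g for the strongly monotone choice g(x) = ½‖x − b‖² and B = ∂‖·‖₁, whose resolvent is soft-thresholding:

```python
import numpy as np

def T_r(x, A, resolvent, r):
    # T_r^{A,B} x = J_r^B((I - r*A)x): forward step on A, backward step on B
    return resolvent(x - r * A(x), r)

b = np.array([2.0, -0.3])
A = lambda x: x - b                                                   # gradient of 0.5*||x - b||^2
resolvent = lambda y, r: np.sign(y) * np.maximum(np.abs(y) - r, 0.0)  # J_r^B for B = d||.||_1

x = np.zeros(2)
for _ in range(100):
    x = T_r(x, A, resolvent, r=0.5)   # Banach-Picard iteration on the contraction
print(x)  # -> [1. 0.]: the unique solution of 0 in (x - b) + d||x||_1
```

Because A here is 1-strongly monotone, T^{A,B}_r is a contraction and the Picard iterates converge linearly, mirroring the strong convergence mechanism in the proof.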
We next present a complexity bound for Algorithm 1.
Proof. From the proof of Theorem 3, for any n ≥ 1 we get (25). Without loss of generality, we assume that for every n < n̄ we have (26). Concatenating (25) and (26), we obtain a bound valid for every k < k̄. Therefore, by the definition of n̄, the claimed estimate holds for all n ≤ n̄. For any n > n̄ there are two possibilities. In the first case, by (25) and recalling that 1 − λ(1 − τ) ≤ 1, we obtain that x_n satisfies (24). Otherwise, the complementary estimate applies and the desired result holds.

Remark 3.
We observe that, in contrast to the assumptions of Theorem 2, the summability of {ε_n} is not required in Theorem 4. However, if one wants a good bound in (24), then a small value of ε must be set; but, in this case, only small values of θ_n are allowed.
To summarize and emphasize the novelty and major advantages of our proposed scheme, we list next several relations to recent works.

1. Our result in Theorem 1 extends the results in [17,26,30,34,35] from Hilbert spaces to uniformly convex and q-uniformly smooth Banach spaces. Furthermore, when θ_n = 0 in Algorithm 1, Theorem 1 reduces to the results in [33] and other related papers.
Shehu [37] obtained a nonasymptotic O(1/n) convergence rate for a Krasnoselskii–Mann iteration with inertial extrapolation step in real Hilbert spaces under the stringent conditions of Boţ et al. [17] (Theorem 5). In this paper, we obtain results for the Krasnoselskii–Mann iteration with inertial extrapolation step under milder assumptions and give some complexity results in uniformly convex Banach spaces.

4. Themelis and Patrinos [38] studied a Newton-type generalization of the classical Krasnoselskii–Mann iteration in Hilbert spaces and obtained superlinear convergence when the direction satisfies a Dennis–Moré condition. However, they did not consider the Krasnoselskii–Mann iteration with inertial steps. Our results here involve the inertial Krasnoselskii–Mann iteration in a more general setting, namely uniformly convex Banach spaces, which include Hilbert spaces.
5. In [39], Phon-on et al. established an inertial S-iteration in Banach spaces and obtained convergence under a boundedness assumption on a generated sequence. In this paper, no boundedness assumption on any generated sequence is needed. Therefore, our results improve on those of [39].

Numerical Illustration
In this section, we present two numerical examples in order to illustrate the behaviour of our proposed method. The first example concerns the split convex feasibility problem (SCFP) (Censor and Elfving [40]) in an infinite-dimensional Hilbert space. Let H_1 and H_2 be two real Hilbert spaces, T : H_1 → H_2 a bounded linear operator and T* its adjoint. Let C ⊆ H_1 and Q ⊆ H_2 be nonempty, closed and convex sets. The split convex feasibility problem is formulated as follows: find a point x ∈ C such that Tx ∈ Q.
If we take Ax := ∇(1/2)‖Tx − P_Q Tx‖² = T*(I − P_Q)Tx, where P_Q is the metric projection onto Q and ∇ denotes the gradient, and B = ∂i_C, the subdifferential of the indicator function i_C of the set C (i_C(x) = 0 if x ∈ C and i_C(x) = ∞ if x ∉ C), then the SCFP has an inclusion structure as in (1). It can be seen that A is Lipschitz continuous with constant L = ‖T‖² and B is maximal monotone; see, e.g., [41].
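A small finite-dimensional sketch of this identification (our toy example, in place of the paper's infinite-dimensional one): with C and Q simple boxes, the forward-backward step becomes x ↦ P_C(x − r T*(I − P_Q)Tx), since the resolvent of ∂i_C is the projection P_C:

```python
import numpy as np

# Toy SCFP: C = [0,1]^2, Q = [0,1], T x = x_1 + x_2
T = np.array([[1.0, 1.0]])
P_C = lambda x: np.clip(x, 0.0, 1.0)   # resolvent of B = d(i_C) is the projection onto C
P_Q = lambda y: np.clip(y, 0.0, 1.0)

def A(x):
    # A = T^*(I - P_Q)T x, the gradient of 0.5*dist(Tx, Q)^2; L = ||T||^2 = 2
    Tx = T @ x
    return T.T @ (Tx - P_Q(Tx))

r = 0.25                       # step size chosen with r < 1/L
x = np.array([1.0, 1.0])       # infeasible start: Tx = 2 lies outside Q
for _ in range(60):
    x = P_C(x - r * A(x))      # forward-backward step for the SCFP
print(x)  # -> [0.5 0.5]: x is in C and Tx = 1.0 is in Q
```

The iterates reach a point solving the SCFP, i.e., a zero of A + B as in (1).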
Moreover, the solution set of (30) is nonempty, since clearly x(t) = 0 is a solution. As explained above, we define Ax := ∇(1/2)‖Tx − P_Q Tx‖² = T*(I − P_Q)Tx and B = ∂i_C, and translate (30) into an inclusion of the form (1).
We implement our algorithm with different starting points x_0(t) = x_1(t), t ∈ [0, 2π]. We choose the stopping criterion ‖x_n − y_n‖ < 10^{−5}, and the other parameters are chosen as ε_n = 1/n², λ_n = 1/n, θ = 0.5, r = 0.5. To justify our algorithm's name, we compare it with the standard Krasnoselskii–Mann method, which is the update of x_{n+1} in Algorithm 1 with w_n replaced by x_n and λ_n ∈ (0, 1). The results for different starting points are presented in Table 1.

Conclusions
In this paper, we gave weak and strong convergence results for a relaxed inertial forward-backward splitting method in uniformly convex and q-uniformly smooth Banach spaces under appropriate conditions. Our results are new in Banach spaces and generalize some existing results in the literature. In future work, we will extend the results of this paper to finding zeros of maximal monotone operators in more general Banach spaces.
Author Contributions: Y.S. and A.G. contributed equally to this paper with regard to all aspects, such as conceptualization, methodology, software, validation, formal analysis, investigation, resources, data curation, writing (original draft preparation, review and editing), visualization, supervision, project administration, and funding acquisition. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.