Relaxed Inertial Tseng’s Type Method for Solving the Inclusion Problem with Application to Image Restoration

Abstract: A relaxed inertial Tseng-type method for solving the inclusion problem involving a maximally monotone mapping and a monotone mapping is proposed in this article. The study modifies Tseng's forward-backward-forward splitting method by using both a relaxation parameter and an inertial extrapolation step. The proposed method follows from an explicit time discretization of a dynamical system. Weak convergence of the iterates generated by the method is established for monotone operators. Moreover, the iterative scheme uses a variable step size given by a simple updating rule, which does not depend on the Lipschitz constant of the underlying operator. Furthermore, the proposed algorithm is modified to derive a scheme for solving the split feasibility problem. The proposed schemes are applied to the image deblurring problem to illustrate their applicability in comparison with existing state-of-the-art methods.

In the study of the monotone inclusion problem (1), relaxation techniques are important tools, as they give iterative schemes more versatility [30,31]. Inertial effects, in turn, were introduced to accelerate the convergence of numerical methods. This technique traces back to the pioneering work of Polyak [32], who introduced the heavy ball method to speed up the convergence of the gradient method and to allow the identification of different critical points. The inertial idea was later used and developed by Nesterov [33] and by Alvarez and Attouch [34,35] for solving smooth convex minimization problems and monotone inclusion/non-smooth convex minimization problems, respectively. A considerable amount of literature has been devoted to inertial algorithms over the last decade [36-40].
Due to the advantages of inertial effects and relaxation techniques, Attouch and Cabot extensively studied inertial algorithms for monotone inclusion and convex optimization problems. To be precise, they focused on the relaxed inertial proximal algorithm (RIPA) in [41,42] and the relaxed inertial forward-backward method (RIFB) in [43]. In [44], a relaxed inertial Douglas-Rachford algorithm for monotone inclusions was proposed. Similarly, in [45], Iutzeler and Hendrickx studied the influence of inertial effects and relaxation techniques on the numerical performance of algorithms. The interplay between relaxation and inertial parameters for relative-error inexact under-relaxed algorithms was addressed in [46,47].
The last equality follows from an explicit discretization of (2) in time, with a step size h_n > 0. Taking h_n = 1, we obtain:

u_{n+1} = (1 − ρ)u_n + ρ[(I + λB)^{−1}(I − λA)u_n + λ(Au_n − A(I + λB)^{−1}(I − λA)u_n)]. (4)

Setting s_n = (I + λB)^{−1}(I − λA)u_n in (4), we get:

u_{n+1} = (1 − ρ)u_n + ρ[s_n + λ(Au_n − As_n)]. (5)

It can be observed that in the case ρ = 1, Equation (5) reduces to Tseng's forward-backward-forward method [20]. The convergence of the scheme in [20] requires that 0 < λ < 1/L, where L is the Lipschitz constant of A, or that λ be computed by a line search procedure with a finite stopping criterion. It is known that line search procedures involve extra function evaluations, thereby reducing the computational performance of a given scheme. In this article, we propose a simple variable step size, which does not involve any line search.
The main iterative scheme in this study is given by:

t_n = u_n + θ(u_n − u_{n−1}),
s_n = (I + λ_n B)^{−1}(I − λ_n A)t_n,
u_{n+1} = (1 − ρ)t_n + ρ[s_n + λ_n(At_n − As_n)], (6)

where ρ is the relaxation parameter and θ is the extrapolation parameter. It is well known that the extrapolation step speeds up the convergence of a scheme. The step size λ_n is self-adaptively updated according to a new simple step size rule. Furthermore, (6) without the relaxation in the last step is exactly the scheme proposed in [49], which converges weakly to the solution of (1) under a restrictive assumption on A. Moreover, (6) can be considered a relaxed version of the scheme proposed by Tseng [20].
Recently, Gibali et al. [7] proposed a modified Tseng algorithm by incorporating the Mann method with a variable step size for solving (1). The question now is: Can we have a fast iterative scheme involving a more general class of operators with a variable step size? We provide a positive answer to this question in this study.
Inspired and motivated by [7,20,49], we propose a relaxed inertial scheme with variable step sizes by incorporating the inertial extrapolation step and the relaxation parameter into the forward-backward-forward scheme. The aim of this modification is to obtain a self-adaptive scheme with fast convergence properties involving a more general class of operators. Furthermore, we present a modified version of the proposed scheme for solving the split feasibility problem. Moreover, to illustrate the performance and applicability of the proposed methods in comparison with existing algorithms in the literature, we apply the proposed algorithms to solve the problem of image recovery.
The outline of this work is as follows: In the next section, we give some definitions and lemmas used in our convergence analysis. We present the convergence analysis of our proposed scheme in Section 3, and lastly, in Section 4, we illustrate the inertial effect and the computational performance of our algorithms through numerical experiments in which the proposed algorithms are applied to the image recovery problem.

Preliminaries
This section recalls some known facts and necessary tools needed for the convergence analysis of our method. Throughout this article, H is a real Hilbert space with inner product ⟨·, ·⟩ and induced norm ‖·‖, and E is a nonempty closed and convex subset of H. The notation u_j ⇀ u (resp. u_j −→ u) indicates that the sequence {u_j} converges weakly (resp. strongly) to u. The following identity is known to hold for every t, s ∈ H in a Hilbert space [30]:

‖t + s‖² = ‖t‖² + 2⟨t, s⟩ + ‖s‖².

The following definitions can be found, for example, in [7,30].

Definition 1 ([7,30]). A mapping A : H −→ H is said to be:
(1) monotone if ⟨Au − Av, u − v⟩ ≥ 0 for all u, v ∈ H;
(2) firmly nonexpansive if ‖Au − Av‖² ≤ ⟨Au − Av, u − v⟩ for all u, v ∈ H, or equivalently, ‖Au − Av‖² ≤ ‖u − v‖² − ‖(I − A)u − (I − A)v‖² for all u, v ∈ H;
(3) L-Lipschitz continuous on H if there exists a constant L > 0 such that ‖Au − Av‖ ≤ L‖u − v‖ for all u, v ∈ H. If L = 1, then A is called nonexpansive.

Definition 2 ([30]).
A multi-valued mapping B : H −→ 2^H is said to be monotone if, for every u, v ∈ H, x ∈ Bu and y ∈ Bv imply ⟨x − y, u − v⟩ ≥ 0. Furthermore, B is said to be maximal monotone if it is monotone and if, for every (u, x) ∈ H × H, ⟨x − y, u − v⟩ ≥ 0 for all (v, y) ∈ Graph(B) implies x ∈ Bu.

Definition 3. Let B : H −→ 2^H be a multi-valued maximal monotone mapping. Then, the resolvent mapping J_λ^B : H −→ H associated with B is defined by:

J_λ^B(u) := (I + λB)^{−1}(u), u ∈ H,

for some λ > 0, where I stands for the identity operator on H. It is well known that if B : H −→ 2^H is a set-valued maximal monotone mapping and λ > 0, then Dom(J_λ^B) = H, and J_λ^B is a single-valued and firmly nonexpansive mapping (see [50] for more properties of maximal monotone mappings).
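As a concrete illustration (a minimal sketch of our own, not taken from the paper), when B = ∂‖·‖₁ on R^n, the resolvent J_λ^B reduces to the well-known soft-thresholding operator, and its firm nonexpansiveness can be checked numerically:

```python
import numpy as np

def soft_threshold(x, lam):
    # Resolvent of B = subdifferential of the l1-norm:
    # (I + lam*B)^{-1}(x) = sign(x) * max(|x| - lam, 0).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
u, v = rng.standard_normal(5), rng.standard_normal(5)
Ju, Jv = soft_threshold(u, 0.5), soft_threshold(v, 0.5)

# Firm nonexpansiveness: ||Ju - Jv||^2 <= <Ju - Jv, u - v>.
assert np.linalg.norm(Ju - Jv) ** 2 <= np.dot(Ju - Jv, u - v) + 1e-12
```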

Lemma 1 ([51]). Let A : H −→ H be a Lipschitz continuous and monotone mapping and B : H −→ 2^H be a maximal monotone mapping. Then, the mapping A + B is maximal monotone.
Lemma 2 ([52]). Let {φ_n}, {δ_n}, and {θ_n} be sequences in [0, +∞) such that φ_{n+1} ≤ φ_n + θ_n(φ_n − φ_{n−1}) + δ_n for all n ≥ 1, ∑_{n=1}^{∞} δ_n < +∞, and there exists θ ∈ R with 0 ≤ θ_n ≤ θ < 1 for all n ≥ 1. Then, the following are satisfied:
(i) ∑_{n=1}^{∞} [φ_n − φ_{n−1}]_+ < +∞, where [t]_+ := max{t, 0};
(ii) there exists φ* ∈ [0, +∞) such that lim_{n→∞} φ_n = φ*.

Lemma 3 ([53]). Let E ⊂ H be a nonempty set and {u_j} a sequence in H such that the following are satisfied:
(a) for every u ∈ E, lim_{j→∞} ‖u_j − u‖ exists;
(b) every sequential weak cluster point of {u_j} is in E.
Then, {u_j} converges weakly to a point in E.

Lemma 4. Let {a_n} be a sequence of non-negative real numbers, {γ_n} be a sequence of real numbers in (0, 1) with ∑_{n=1}^{∞} γ_n = ∞, and {δ_n} be a sequence of real numbers satisfying:

a_{n+1} ≤ (1 − γ_n)a_n + γ_n δ_n for all n ≥ 1.

If lim sup_{j→∞} δ_{n_j} ≤ 0 for every subsequence {a_{n_j}} of {a_n} satisfying lim inf_{j→∞}(a_{n_j+1} − a_{n_j}) ≥ 0, then lim_{n→∞} a_n = 0.

Relaxed Inertial Tseng-Type Algorithm for the Variational Inclusion Problem
In this section, we give a detailed description of our proposed algorithm, and we present the weak convergence analysis of the iterates generated by the algorithm to the solution of the inclusion problem (1) involving the sum of a maximally monotone operator and a monotone operator. We make the following assumptions for the analysis of our method.

Assumption 1.
A1: The feasible set of (1) is a nonempty closed and convex subset of H.
A2: The solution set Γ of (1) is nonempty.
A3: A : H −→ H is monotone and L-Lipschitz continuous on H, and B : H −→ 2^H is maximally monotone.
Lemma 5. Let {λ_n} be the sequence generated by the step size rule (11). Then, {λ_n} is well defined, monotonically non-increasing, and bounded below by min{µ/L, λ_0}.

Proof. It can be observed that the sequence {λ_n} is monotonically decreasing. Since A is Lipschitz continuous with Lipschitz constant L, for At_n ≠ As_n, we have:

µ‖t_n − s_n‖ / ‖At_n − As_n‖ ≥ µ‖t_n − s_n‖ / (L‖t_n − s_n‖) = µ/L. (8)

For At_n = As_n, the inequality (8) is obviously satisfied. Hence, it follows that λ_n ≥ min{µ/L, λ_0}.

Remark 1. By Lemma 5, the update (11) is well defined, and lim_{n→∞} λ_n = λ exists with λ ≥ min{µ/L, λ_0}.
Next, the following lemma and its proof are crucial for the convergence analysis of the sequence generated by Algorithm 1.

Algorithm 1
Relaxed inertial Tseng-type algorithm for the VI problem.
Initialization: Choose λ_0 > 0, µ ∈ (0, 1), θ > 0, ρ > 0, and u_{−1}, u_0 ∈ H.
Iterative steps: Given the current iterates u_{n−1} and u_n ∈ H.
Step 1. Set t_n as:

t_n := u_n + θ(u_n − u_{n−1}). (10)

Step 2. Compute:

s_n = (I + λ_n B)^{−1}(I − λ_n A)t_n.

If t_n = s_n, stop: t_n is a solution of (1). Else, go to Step 3.
Step 3. Compute:

u_{n+1} = (1 − ρ)t_n + ρ[s_n + λ_n(At_n − As_n)],

where the step size sequence λ_{n+1} is updated as follows:

λ_{n+1} = min{ µ‖t_n − s_n‖ / ‖At_n − As_n‖, λ_n } if At_n ≠ As_n, and λ_{n+1} = λ_n otherwise. (11)

Set n := n + 1, and go back to Step 1.
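To make the steps above concrete, the following Python sketch implements Algorithm 1 as reconstructed here; the function and parameter names are ours, `A` is the monotone Lipschitz operator, and `resolvent(x, lam)` evaluates (I + λB)^{−1}x (e.g., the soft-thresholding operator from Section 2). This is an illustrative sketch under those assumptions, not the authors' reference implementation:

```python
import numpy as np

def relaxed_inertial_tseng(A, resolvent, u0, theta=0.9, rho=0.1,
                           lam0=1.0, mu=0.3, tol=1e-6, max_iter=1000):
    # Sketch of Algorithm 1 with the self-adaptive step size rule (11).
    u_prev, u, lam = u0.copy(), u0.copy(), lam0
    for _ in range(max_iter):
        t = u + theta * (u - u_prev)              # Step 1: inertial step (10)
        At = A(t)
        s = resolvent(t - lam * At, lam)          # Step 2: forward-backward step
        if np.linalg.norm(t - s) < tol:           # t_n = s_n: t_n solves (1)
            return t
        As = A(s)                                 # Step 3: correction + relaxation
        u_prev, u = u, (1 - rho) * t + rho * (s + lam * (At - As))
        # Step size update (11): no knowledge of the Lipschitz constant needed.
        d = np.linalg.norm(At - As)
        if d > 0:
            lam = min(mu * np.linalg.norm(t - s) / d, lam)
    return u
```

Note that the step size can only decrease, and by Lemma 5 it stabilizes above min{µ/L, λ_0}, so no line search is performed.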

Lemma 6.
Let A be an operator satisfying assumption (A3). Then, for all ȗ ∈ Γ ≠ ∅, we have:

‖s_n + λ_n(At_n − As_n) − ȗ‖² ≤ ‖t_n − ȗ‖² − (1 − µ²λ_n²/λ_{n+1}²)‖t_n − s_n‖².

Proof. From the fact that the resolvent J_{λ_n}^B is firmly nonexpansive and s_n = (I + λ_n B)^{−1}(I − λ_n A)t_n = J_{λ_n}^B(I − λ_n A)t_n, we have:

(1/λ_n)(t_n − s_n − λ_n At_n) ∈ Bs_n.

Since ȗ ∈ Γ, we also have −Aȗ ∈ Bȗ. Hence, by the monotonicity of B and of A, we get:

⟨(1/λ_n)(t_n − s_n) − At_n + As_n, s_n − ȗ⟩ ≥ 0,

which is the same as:

2⟨t_n − s_n, s_n − ȗ⟩ − 2λ_n⟨At_n − As_n, s_n − ȗ⟩ ≥ 0.

Lemma 7.
Let {t_n} be a sequence generated by Algorithm 1, and let Assumptions (A1)-(A3) be satisfied. If there exists a subsequence {t_{n_i}} weakly convergent to q ∈ H with lim_{n→∞} ‖t_n − s_n‖ = 0, then q ∈ Γ.
Proof. Suppose (y, x) ∈ Graph(A + B), that is, x − Ay ∈ By. Since s_{n_i} = (I + λ_{n_i}B)^{−1}(I − λ_{n_i}A)t_{n_i}, we get:

t_{n_i} − λ_{n_i}At_{n_i} ∈ (I + λ_{n_i}B)s_{n_i}.

This implies that:

(1/λ_{n_i})(t_{n_i} − s_{n_i} − λ_{n_i}At_{n_i}) ∈ Bs_{n_i}.

By the maximal monotonicity of B, we have:

⟨x − Ay − (1/λ_{n_i})(t_{n_i} − s_{n_i} − λ_{n_i}At_{n_i}), y − s_{n_i}⟩ ≥ 0.

Hence, using the monotonicity of A,

⟨x, y − s_{n_i}⟩ ≥ ⟨Ay − At_{n_i} + (1/λ_{n_i})(t_{n_i} − s_{n_i}), y − s_{n_i}⟩ ≥ ⟨As_{n_i} − At_{n_i}, y − s_{n_i}⟩ + (1/λ_{n_i})⟨t_{n_i} − s_{n_i}, y − s_{n_i}⟩.

From the fact that A is Lipschitz continuous and lim_{n→∞} ‖t_n − s_n‖ = 0, it follows that lim_{n→∞} ‖At_n − As_n‖ = 0; since lim_{n→∞} λ_n exists and is positive and s_{n_i} ⇀ q, we get:

⟨x, y − q⟩ ≥ 0.

The above inequality, together with the maximal monotonicity of A + B, implies that 0 ∈ (A + B)q, that is, q ∈ Γ; hence the proof.

Theorem 1. Let A be an operator satisfying assumption (A3), and let the extrapolation parameter θ and the relaxation parameter ρ satisfy the required conditions. Then, if Γ ≠ ∅, the sequence {u_n} generated by Algorithm 1 converges weakly to a point ȗ ∈ Γ.

Application to the Split Feasibility Problem
In this section, we derive a scheme for solving the split feasibility problem from Algorithm 1. The split feasibility problem (SFP) is the problem of finding a point ǔ ∈ C such that Aǔ ∈ Q, where C and Q are nonempty closed and convex subsets of H_1 and H_2, respectively, and A : H_1 −→ H_2 is a bounded linear operator. Censor and Elfving [54] introduced the problem (SFP) in finite-dimensional Hilbert spaces, using a multi-distance idea to obtain an iterative method for its solution. A number of problems that arise in phase retrieval and medical image reconstruction can be formulated as split feasibility problems [3,55]. The problem (SFP) also finds applications in various disciplines such as image restoration, dynamic emission tomographic image reconstruction, and radiation therapy treatment planning [2,7,56].

Suppose f : H −→ (−∞, +∞] is a proper, lower semi-continuous, convex function. Then, for all u ∈ H, the subdifferential ∂f of f is defined as follows:

∂f(u) := {z ∈ H : f(v) − f(u) ≥ ⟨z, v − u⟩ for all v ∈ H}.

For a nonempty closed and convex subset C of H, the indicator function i_C of C is given by:

i_C(u) := 0 if u ∈ C, and i_C(u) := +∞ otherwise.

Furthermore, the normal cone N_C(u) of C at u is given by:

N_C(u) := {z ∈ H : ⟨z, v − u⟩ ≤ 0 for all v ∈ C}.

It is known that the indicator function i_C is a proper, lower semi-continuous, and convex function on H. Thus, the subdifferential ∂i_C of i_C is a maximal monotone operator, and:

∂i_C(u) = N_C(u) for all u ∈ C.

Therefore, for all u ∈ H, we can define the resolvent of ∂i_C as J_λ^{∂i_C} = (I + λ∂i_C)^{−1} for each λ > 0. Hence, we can see that for λ > 0:

J_λ^{∂i_C}(u) = P_C(u),

where P_C denotes the metric projection onto C. Now, based on the above derivation, Algorithm 1 can be reduced to the following scheme. Let C and Q be nonempty closed convex subsets of Hilbert spaces H_1 and H_2, respectively, A : H_1 −→ H_2 be a bounded linear operator with adjoint A*, and Γ_SFP be the solution set of the problem (SFP). Let u_{−1}, u_0 ∈ H_1 be arbitrary, λ_0 > 0, θ > 0, and ρ > 0. Let {u_n} be a sequence generated by the following scheme:

t_n = u_n + θ(u_n − u_{n−1}),
s_n = P_C(t_n − λ_n A*(I − P_Q)At_n),
u_{n+1} = (1 − ρ)t_n + ρ[s_n + λ_n(A*(I − P_Q)At_n − A*(I − P_Q)As_n)], (43)

where the step size λ_n is updated using (11) with A replaced by A*(I − P_Q)A. If Γ_SFP ≠ ∅, then the sequence {u_n} converges weakly to an element of Γ_SFP.
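Below is a minimal sketch of Scheme (43), assuming user-supplied projections `proj_C` and `proj_Q` onto C and Q (names and default parameter values are ours, chosen for illustration); the composite operator u ↦ A*(I − P_Q)Au plays the role of the monotone operator in Algorithm 1, and P_C replaces the resolvent:

```python
import numpy as np

def relaxed_inertial_tseng_sfp(M, proj_C, proj_Q, u0, theta=0.9, rho=0.1,
                               lam0=1.0, mu=0.3, tol=1e-6, max_iter=1000):
    # Sketch of Scheme (43): Algorithm 1 specialized to the SFP, where the
    # bounded linear operator is the matrix M and J_lam^B becomes P_C.
    def G(u):
        Mu = M @ u
        return M.T @ (Mu - proj_Q(Mu))            # A*(I - P_Q)A u

    u_prev, u, lam = u0.copy(), u0.copy(), lam0
    for _ in range(max_iter):
        t = u + theta * (u - u_prev)              # inertial extrapolation
        Gt = G(t)
        s = proj_C(t - lam * Gt)                  # projection step
        if np.linalg.norm(t - s) < tol:
            return t
        Gs = G(s)
        u_prev, u = u, (1 - rho) * t + rho * (s + lam * (Gt - Gs))
        d = np.linalg.norm(Gt - Gs)               # step size rule (11)
        if d > 0:
            lam = min(mu * np.linalg.norm(t - s) / d, lam)
    return u
```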

Application to the Image Restoration Problem
As mentioned in Section 1, the VI problem can be applied to many problems. Of particular interest, in this subsection, we use Algorithm 1 and Scheme (43) (Algorithm 2) to solve the image deblurring problem. To illustrate the effectiveness of the proposed schemes, we give a comparative analysis of Algorithm 1 and the algorithms proposed in [49,57]. In addition, we compare Scheme (43) with Byrne's algorithm proposed in [3] for solving the split feasibility problem.
Recall that the image deblurring problem in image processing can be expressed as:

c = Mu + δ, (44)

where u ∈ R^n represents the original image, M is the blurring matrix, c is the observed image, and δ ∈ R^m is the Gaussian noise. It is known that solving (44) is equivalent to solving the convex unconstrained optimization problem:

min_{u ∈ R^n} { (1/2)‖Mu − c‖²_2 + ρ‖u‖_1 }, (45)

with ρ > 0 the regularization parameter. To solve (45), we set A = ∇S and B = ∂T, where S(u) = (1/2)‖Mu − c‖²_2 and T(u) = ‖u‖_1. Then, ∇S(u) = M^T(Mu − c) is (1/‖M‖²)-cocoercive. Therefore, for any 0 < τ < 2/‖M‖², (I − τ∇S) is nonexpansive [58]. The subdifferential ∂T is maximal monotone [21]. It is well known that u is a solution of (45) if and only if, for τ > 0, u = prox_{τρT}(u − τ∇S(u)), where prox_{ρT}(x) = arg min_{u ∈ R^n} { T(u) + (1/2ρ)‖u − x‖² }; for more details, see [1].
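Under this identification, the deblurring model plugs directly into the `relaxed_inertial_tseng` sketch given after Algorithm 1: A is ∇S, and the resolvent of λ_nB = λ_n∂T is soft-thresholding with threshold λ_n times the regularization weight. A toy example on synthetic data (illustrative only; `reg` denotes the regularization parameter, written ρ in (45), and `soft_threshold` is the helper from Section 2):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 50))                 # toy "blurring" matrix
u_true = np.zeros(50)
u_true[rng.choice(50, size=5, replace=False)] = 1.0
c = M @ u_true + 0.01 * rng.standard_normal(30)   # observed data as in (44)

reg = 0.1                                         # regularization parameter in (45)
A = lambda u: M.T @ (M @ u - c)                   # gradient of S(u) = 0.5*||Mu - c||^2
resolvent = lambda x, lam: soft_threshold(x, lam * reg)   # J_{lam * dT}

u_rec = relaxed_inertial_tseng(A, resolvent, np.zeros(50))
```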
For the split feasibility problem (SFP), we reformulate Problem (45) as a convex constrained optimization problem:

min_{u ∈ R^n} (1/2)‖Mu − c‖²_2 subject to ‖u‖_1 ≤ t, (46)

where t > 0 is a given constant. To solve (46) via Scheme (43), we consider C := {u ∈ R^n : ‖u‖_1 ≤ t} and Q := {c}, and note that, with A := M, the operator A*(I − P_Q)Au = M^T(Mu − c) = ∇S(u).
To measure the quality of the recovered images, we adopted the improved signal-to-noise ratio (ISNR) [26] and the structural similarity index measure (SSIM) [59]. We considered motion blur from MATLAB as the blurring function (using "fspecial('motion', 9, 40)"). For the comparison, we considered the standard test images Butterfly (654 × 654), Lena (512 × 512), and Pepper (512 × 512) (see Figure 1). For the control parameters, we took θ = 0.9, λ_0 = 1, µ = 0.3, and ρ = 0.1 for Algorithm 1 and Algorithm 2 (Scheme (43)), and α_n = 0.9 and λ_n = 0.5 − 150n/(1000n + 150) for Algorithm 3.1 in [49], Algorithm 1.3 in [57], and Algorithm 1.1 in [3]. For all the algorithms, we took ‖u_{n+1} − u_n‖_2/‖u_{n+1}‖_2 < 10^{−4} as the stopping criterion. For reference, all codes were written in MATLAB R2018b on a personal computer. It can be seen from Figures 2-6 and Table 1 that the images recovered by the proposed Algorithm 1 had higher ISNR and SSIM values, which means that the quality of the images recovered by Algorithm 1 was better than that of the compared algorithms. It can be observed from Figures 5-7 that the restoration quality of the images restored by the modified algorithm (Algorithm 2) was better than that of the compared algorithm, as verified by the higher ISNR and SSIM values of Algorithm 2 in Table 2.
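For reproducibility, the ISNR and the stopping rule used above can be computed as follows (a small helper of ours; the ISNR formula is the standard one, assumed to match the definition in [26]):

```python
import numpy as np

def isnr(original, observed, restored):
    # Improved signal-to-noise ratio in dB; larger values mean the restored
    # image is closer to the original than the observed (blurred) one.
    return 10 * np.log10(np.sum((observed - original) ** 2)
                         / np.sum((restored - original) ** 2))

def should_stop(u_new, u_old, tol=1e-4):
    # Relative-change criterion ||u_{n+1} - u_n||_2 / ||u_{n+1}||_2 < tol.
    return np.linalg.norm(u_new - u_old) / np.linalg.norm(u_new) < tol
```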

Conclusions
A relaxed inertial self-adaptive Tseng-type method for solving the variational inclusion problem was proposed in this work, and the scheme was derived from an explicit time discretization of a dynamical system. The main advantage of this scheme is that it involves both an extrapolation step and a relaxation parameter, and the iterates generated by the proposed scheme converge weakly to a zero of the sum of a maximally monotone operator and a monotone operator. Furthermore, the proposed method does not require prior knowledge of the Lipschitz constant of the cost operator, and the iterates converge quickly due to the inertial extrapolation step. A modified scheme derived from the proposed method was given for solving the split feasibility problem. The application of the proposed methods to image recovery and the comparison with some existing state-of-the-art methods illustrated that the proposed methods are robust and efficient.