Article

Relaxed Inertial Tseng’s Type Method for Solving the Inclusion Problem with Application to Image Restoration

by Jamilu Abubakar 1,2, Poom Kumam 1,3,4,*, Abdulkarim Hassan Ibrahim 1 and Anantachai Padcharoen 5

1 Department of Mathematics, King Mongkut’s University of Technology Thonburi, Bangkok 10140, Thailand
2 Department of Mathematics, Usmanu Danfodiyo University, Sokoto 840004, Nigeria
3 Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
4 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
5 Department of Mathematics, Faculty of Science and Technology, Rambhai Barni Rajabhat University, Chanthaburi 22000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(5), 818; https://doi.org/10.3390/math8050818
Submission received: 30 March 2020 / Revised: 22 April 2020 / Accepted: 24 April 2020 / Published: 18 May 2020

Abstract: A relaxed inertial Tseng-type method for solving the inclusion problem involving a maximally monotone mapping and a monotone mapping is proposed in this article. The study modifies Tseng's forward-backward-forward splitting method by using both a relaxation parameter and an inertial extrapolation step. The proposed method follows from an explicit time discretization of a dynamical system. Weak convergence of the iterates generated by the method is established for monotone operators. Moreover, the iterative scheme uses a variable step size, given by a simple updating rule, which does not depend on the Lipschitz constant of the underlying operator. Furthermore, the proposed algorithm is modified to derive a scheme for solving the split feasibility problem. The proposed schemes are applied to the image deblurring problem to illustrate their applicability in comparison with existing state-of-the-art methods.

1. Introduction

This paper considers the problem of finding a point $\breve{u} \in \mathcal{H}$ such that:

$$0 \in (A + B)\breve{u}, \qquad (1)$$

where $A : \mathcal{H} \to \mathcal{H}$ and $B : \mathcal{H} \to 2^{\mathcal{H}}$ are, respectively, single-valued and multi-valued operators on a real Hilbert space $\mathcal{H}$. The variational inclusion (VI) problem (1) is a fundamental problem in optimization theory and arises in many areas of study, such as image processing, machine learning, transportation problems, equilibrium problems, economics, and engineering [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17].
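A standard example, included here for orientation (our own illustration, not taken from the cited references): constrained convex minimization fits the template (1) directly,

$$\min_{u \in C} f(u) \quad \Longleftrightarrow \quad 0 \in \nabla f(\breve{u}) + N_C(\breve{u}),$$

where $f$ is convex and differentiable, $C$ is nonempty, closed, and convex, and $N_C$ denotes the normal cone of $C$; here $A = \nabla f$ is single-valued and $B = N_C$ is maximally monotone.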
There are several approaches to the VI problem, the most popular being the forward-backward splitting method introduced in [18,19]. Several studies have been carried out, and a number of algorithms have been proposed to solve (1) [5,7,20,21,22,23,24,25,26,27,28,29].
Relaxation techniques are important tools in the study of the monotone inclusion problem (1), as they give iterative schemes more versatility [30,31]. Inertial effects, in turn, were introduced to accelerate the convergence of numerical methods. This technique traces back to the pioneering work of Polyak [32], who introduced the heavy ball method to speed up the convergence behavior of the gradient algorithm and to allow the identification of various critical points. The inertial idea was later used and developed by Nesterov [33] and by Alvarez and Attouch (see [34,35]) for solving smooth convex minimization problems and monotone inclusions/non-smooth convex minimization problems, respectively. A considerable amount of literature has contributed to inertial algorithms over the last decade [36,37,38,39,40].
Due to the advantages of inertial effects and relaxation techniques, Attouch and Cabot extensively studied inertial algorithms for monotone inclusion and convex optimization problems. To be precise, they focused on the relaxed inertial proximal method (RIPA) in [41,42] and the relaxed inertial forward-backward method (RIFB) in [43]. In [44], a relaxed inertial Douglas–Rachford algorithm for monotone inclusions was proposed. Similarly, in [45], Iutzeler and Hendrickx studied the influence of inertial effects and relaxation techniques on the numerical performance of algorithms. The relationship between relaxation and inertial parameters for relative-error inexact under-relaxed algorithms was addressed in [46,47].
In this study, we associate (1) with the following dynamical system [48]:
$$\frac{du(x)}{dx} = \rho\left[-u(x) + (I + \lambda B)^{-1}\big(u(x) - \lambda A u(x)\big) + \lambda A u(x) - \lambda A (I + \lambda B)^{-1}\big(u(x) - \lambda A u(x)\big)\right], \qquad (2)$$

where $\rho > 0$ and $\lambda > 0$. An explicit time discretization of (2) with step size $h_n > 0$ gives:

$$\frac{u_{n+1} - u_n}{h_n} = \rho\left[-u_n + (I + \lambda B)^{-1}(u_n - \lambda A u_n) + \lambda A u_n - \lambda A (I + \lambda B)^{-1}(u_n - \lambda A u_n)\right]. \qquad (3)$$

Taking $h_n = 1$, we obtain:

$$u_{n+1} = (1 - \rho) u_n + \rho (I + \lambda B)^{-1}(u_n - \lambda A u_n) + \rho \lambda A u_n - \rho \lambda A (I + \lambda B)^{-1}(u_n - \lambda A u_n). \qquad (4)$$

Setting $s_n = (I + \lambda B)^{-1}(I - \lambda A) u_n$ in (4), we get:

$$u_{n+1} = (1 - \rho) u_n + \rho s_n + \rho \lambda (A u_n - A s_n), \qquad n \geq 1. \qquad (5)$$
It can be observed that in the case $\rho = 1$, Equation (5) reduces to Tseng's forward-backward-forward method [20]. The convergence of the scheme in [20] requires that $0 < \lambda < \frac{1}{L}$, where $L$ is the Lipschitz constant of $A$, or that $\lambda$ be computed by a line search procedure with a finite stopping criterion. It is well known that line search procedures involve extra function evaluations, thereby reducing the computational performance of a given scheme. In this article, we propose a simple variable step size, which does not involve any line search.
The main iterative scheme in this study is given by:
$$\begin{cases} t_n = u_n + \varrho (u_n - u_{n-1}), \\ s_n = (I + \lambda_n B)^{-1}(I - \lambda_n A) t_n, \\ u_{n+1} = (1 - \rho) t_n + \rho s_n + \rho \lambda_n (A t_n - A s_n), \end{cases} \qquad n \geq 1, \qquad (6)$$
where $\rho$ is the relaxation parameter and $\varrho$ is the extrapolation parameter. It is well known that the extrapolation step speeds up the convergence of a scheme. The step size $\lambda_n$ is self-adaptively updated according to a new, simple step size rule.
Furthermore, (6) without the correction term in the last step is exactly the scheme proposed in [49], which converges weakly to the solution of (1) under a restrictive assumption on $A$. Moreover, (6) can be considered as a relaxed version of the scheme proposed by Tseng [20].
Recently, Gibali and Thong [7] proposed a modified Tseng algorithm incorporating the Mann method with a variable step size for solving (1). The question now is: can we have a fast iterative scheme involving a more general class of operators with a variable step size? We provide a positive answer to this question in this study.
Inspired and motivated by [7,20,49], we propose a relaxed inertial scheme with variable step sizes by incorporating the inertial extrapolation step and the relaxation parameter into the forward-backward-forward scheme. The aim of this modification is to obtain a self-adaptive scheme with fast convergence properties involving a more general class of operators. Furthermore, we present a modified version of the proposed scheme for solving the split feasibility problem. Moreover, to illustrate the performance and applicability of the proposed methods in comparison with existing algorithms in the literature, we apply the proposed algorithms to solve the problem of image recovery.
The outline of this work is as follows: In the next section, we give some definitions and lemmas that are used in our convergence analysis. We present the convergence analysis of the proposed scheme in Section 3, and lastly, in Section 4, we illustrate the inertial effect and the computational performance of our algorithms through experiments in which the proposed algorithms are applied to the problem of image recovery.

2. Preliminaries

This section recalls some known facts and necessary tools needed for the convergence analysis of our method. Throughout this article, $\mathcal{H}$ is a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, and $E$ is a nonempty closed and convex subset of $\mathcal{H}$. The notation $u_j \rightharpoonup u$ (resp. $u_j \to u$) indicates that the sequence $\{u_j\}$ converges weakly (resp. strongly) to $u$. The following identity holds in a Hilbert space for every $t, s \in \mathcal{H}$ [30]:

$$\|t \pm s\|^2 = \|t\|^2 + \|s\|^2 \pm 2\langle t, s \rangle. \qquad (7)$$

The following definitions can be found, for example, in [7,30].
Definition 1.
Let $A : \mathcal{H} \to \mathcal{H}$ be a mapping defined on a real Hilbert space $\mathcal{H}$. For all $u, v \in E$, $A$ is said to be:

(1) Monotone if:

$$\langle Au - Av, u - v \rangle \geq 0;$$

(2) Firmly nonexpansive if:

$$\|Au - Av\|^2 \leq \langle Au - Av, u - v \rangle,$$

or equivalently,

$$\|Au - Av\|^2 \leq \|u - v\|^2 - \|(I - A)u - (I - A)v\|^2;$$

(3) $L$-Lipschitz continuous on $\mathcal{H}$ if there exists a constant $L > 0$ such that:

$$\|Au - Av\| \leq L \|u - v\|.$$

If $L = 1$, then $A$ is called nonexpansive.
Definition 2
([30]). A multi-valued mapping $B : \mathcal{H} \to 2^{\mathcal{H}}$ is said to be monotone if, for every $u, v \in \mathcal{H}$, $x \in Bu$ and $y \in Bv$ imply $\langle x - y, u - v \rangle \geq 0$. Furthermore, $B$ is said to be maximal monotone if it is monotone and if, for every $(u, x) \in \mathcal{H} \times \mathcal{H}$, $\langle x - y, u - v \rangle \geq 0$ for every $(v, y) \in \operatorname{Graph}(B)$ implies $x \in Bu$.
Definition 3.
Let $B : \mathcal{H} \to 2^{\mathcal{H}}$ be a multi-valued maximal monotone mapping. Then, the resolvent mapping $J_{\lambda B} : \mathcal{H} \to \mathcal{H}$ associated with $B$ is defined by:

$$J_{\lambda B}(u) = (I + \lambda B)^{-1}(u),$$

for $\lambda > 0$, where $I$ stands for the identity operator on $\mathcal{H}$.
It is well known that if $B : \mathcal{H} \to 2^{\mathcal{H}}$ is a set-valued maximal monotone mapping and $\lambda > 0$, then $\operatorname{Dom}(J_{\lambda B}) = \mathcal{H}$, and $J_{\lambda B}$ is a single-valued and firmly nonexpansive mapping (see [50] for more properties of maximal monotone mappings).
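As a concrete illustration (our own, not drawn from the paper): for $B = \partial \|\cdot\|_1$ on $\mathbb{R}^n$, the resolvent $J_{\lambda B}$ is the familiar componentwise soft-thresholding operator. A minimal NumPy sketch, assuming this identification:

```python
import numpy as np

def resolvent_l1(lam, u):
    """Resolvent J_{lam B} = (I + lam*B)^{-1} for B = subdifferential of the
    l1-norm, i.e., componentwise soft-thresholding with threshold lam."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Firm nonexpansiveness implies the resolvent is 1-Lipschitz; quick sanity check:
rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)
assert (np.linalg.norm(resolvent_l1(0.5, u) - resolvent_l1(0.5, v))
        <= np.linalg.norm(u - v) + 1e-12)
```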
Lemma 1
([51]). Let $A : \mathcal{H} \to \mathcal{H}$ be a Lipschitz continuous and monotone mapping and $B : \mathcal{H} \to 2^{\mathcal{H}}$ be a maximal monotone mapping; then, the mapping $A + B$ is a maximal monotone mapping.
Lemma 2
([52]). Suppose $\{\gamma_n\}$, $\{\phi_n\}$, and $\{\varrho_n\}$ are sequences in $[0, \infty)$ such that, for all $n \geq 1$,

$$\gamma_{n+1} \leq \gamma_n + \varrho_n (\gamma_n - \gamma_{n-1}) + \phi_n, \qquad \sum_{n=1}^{\infty} \phi_n < \infty,$$

and there exists $\varrho \in \mathbb{R}$ with $0 \leq \varrho_n \leq \varrho < 1$ for all $n \geq 1$. Then, the following are satisfied:

(i) $\sum_{n=1}^{\infty} [\gamma_n - \gamma_{n-1}]_+ < \infty$, where $[a]_+ = \max\{a, 0\}$;

(ii) there exists $\gamma^* \in [0, \infty)$ with $\lim_{n \to \infty} \gamma_n = \gamma^*$.
Lemma 3
([53]). Let $E \subseteq \mathcal{H}$ be a nonempty set and $\{u_j\}$ a sequence in $\mathcal{H}$ such that the following are satisfied:

(a) for every $u \in E$, $\lim_{j \to \infty} \|u_j - u\|$ exists;

(b) every sequential weak cluster point of $\{u_j\}$ is in $E$.

Then, $\{u_j\}$ converges weakly to a point in $E$.
Lemma 4.
Let $\{\varrho_n\}$ be a sequence of non-negative real numbers, $\{\gamma_n\}$ be a sequence of real numbers in $(0, 1)$ with $\sum_{n=1}^{\infty} \gamma_n = \infty$, and $\{\delta_n\}$ be a sequence of real numbers satisfying:

$$\varrho_{n+1} \leq (1 - \gamma_n)\varrho_n + \gamma_n \delta_n \quad \text{for all } n \geq 1.$$

If $\limsup_{j \to \infty} \delta_{n_j} \leq 0$ for every subsequence $\{\varrho_{n_j}\}$ of $\{\varrho_n\}$ satisfying $\liminf_{j \to \infty} (\varrho_{n_j+1} - \varrho_{n_j}) \geq 0$, then $\lim_{n \to \infty} \varrho_n = 0$.

3. Relaxed Inertial Tseng-Type Algorithm for the Variational Inclusion Problem

In this section, we give a detailed description of the proposed algorithm, and we present the weak convergence analysis of the iterates generated by the algorithm to a solution of the inclusion problem (1) involving the sum of a maximally monotone operator and a monotone operator. We make the following assumptions for the analysis of our method.
Assumption 1.
(A1) The feasible set of (1) is a nonempty closed and convex subset of $\mathcal{H}$.

(A2) The solution set $\Gamma$ of (1) is nonempty.

(A3) $A : \mathcal{H} \to \mathcal{H}$ is monotone and $L$-Lipschitz continuous on $\mathcal{H}$, and $B : \mathcal{H} \to 2^{\mathcal{H}}$ is maximally monotone.
Lemma 5.
The sequence $\{\lambda_n\}$ generated by (11) is monotonically nonincreasing and bounded from below by $\min\{\frac{\mu}{L}, \lambda_0\}$.
Proof. 
It can be observed from (11) that the sequence $\{\lambda_n\}$ is monotonically nonincreasing. Since $A$ is Lipschitz continuous with Lipschitz constant $L$, in the case $A t_n \neq A s_n$ we have:

$$\frac{\mu \|t_n - s_n\|}{\|A t_n - A s_n\|} \geq \frac{\mu}{L}. \qquad (8)$$

In the case $A t_n = A s_n$, the update (11) leaves $\lambda_n$ unchanged. Hence, it follows that $\lambda_n \geq \min\{\frac{\mu}{L}, \lambda_0\}$.  □
Remark 1.
By Lemma 5, the update (11) is well defined and:
$$\lambda_{n+1} \|A t_n - A s_n\| \leq \mu \|t_n - s_n\|. \qquad (9)$$
The following lemma and its proof are crucial for the convergence analysis of the sequence generated by Algorithm 1.
Algorithm 1 Relaxed inertial Tseng-type algorithm for the VI problem.

Initialization: Choose $u_0, u_1 \in \mathcal{H}$, $\mu \in (0, 1)$, $\varrho \geq 0$, $\lambda_0 > 0$, and $\rho > 0$.

Iterative steps: Given the current iterates $u_{n-1}$ and $u_n \in \mathcal{H}$:

    Step 1. Set:

$$t_n := u_n + \varrho (u_n - u_{n-1}). \qquad (10)$$

    Step 2. Compute:

$$s_n = (I + \lambda_n B)^{-1}(I - \lambda_n A) t_n.$$

    If $t_n = s_n$, stop: $t_n$ is a solution of (1). Else, go to Step 3.

    Step 3. Compute:

$$u_{n+1} = (1 - \rho) t_n + \rho s_n + \rho \lambda_n (A t_n - A s_n),$$

    where the step size sequence $\lambda_{n+1}$ is updated as follows:

$$\lambda_{n+1} := \begin{cases} \min\left\{\lambda_n, \dfrac{\mu \|t_n - s_n\|}{\|A t_n - A s_n\|}\right\}, & \text{if } A t_n \neq A s_n, \\ \lambda_n, & \text{otherwise}. \end{cases} \qquad (11)$$

Set $n := n + 1$, and go back to Step 1.
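For readers who prefer code, the following is a minimal NumPy sketch of Algorithm 1, assuming the operator $A$ and the resolvent of $B$ are supplied as callables; the default parameter values mirror those used in the experiments of Section 4, and the relative stopping test is our own surrogate for the exact condition $t_n = s_n$:

```python
import numpy as np

def relaxed_inertial_tseng(A, resolvent, u0, u1=None, *, rho=0.1, varrho=0.9,
                           mu=0.3, lam0=1.0, tol=1e-4, max_iter=1000):
    """Sketch of Algorithm 1. `A(u)` evaluates the monotone Lipschitz operator;
    `resolvent(lam, x)` evaluates (I + lam*B)^{-1} x for the maximal monotone B."""
    u_prev = u0.copy()
    u = u0.copy() if u1 is None else u1.copy()
    lam = lam0
    for _ in range(max_iter):
        t = u + varrho * (u - u_prev)            # Step 1: inertial extrapolation (10)
        s = resolvent(lam, t - lam * A(t))       # Step 2: forward-backward step
        if np.linalg.norm(t - s) <= tol * max(1.0, np.linalg.norm(t)):
            return s                             # surrogate for the stop test t_n = s_n
        At, As = A(t), A(s)
        u_prev, u = u, (1 - rho) * t + rho * s + rho * lam * (At - As)  # Step 3
        denom = np.linalg.norm(At - As)          # self-adaptive step size rule (11):
        if denom > 0:                            # no Lipschitz constant required
            lam = min(lam, mu * np.linalg.norm(t - s) / denom)
    return u
```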
Lemma 6.
Let $A$ be an operator satisfying Assumption (A3). Then, for all $\breve{u} \in \Gamma \neq \emptyset$, we have:

$$\|u_{n+1} - \breve{u}\|^2 \leq \|t_n - \breve{u}\|^2 - \eta \rho \|t_n - s_n\|^2, \qquad (12)$$

where $\eta = 2 - \rho - 2\mu(1 - \rho)\frac{\lambda_n}{\lambda_{n+1}} - \rho \mu^2 \frac{\lambda_n^2}{\lambda_{n+1}^2}$.
Proof. 
Since $\breve{u} \in \Gamma$, we have $\breve{u} = J_{\lambda_n B}(I - \lambda_n A)\breve{u}$. From the fact that the resolvent $J_{\lambda_n B}$ is firmly nonexpansive and $s_n = (I + \lambda_n B)^{-1}(I - \lambda_n A) t_n = J_{\lambda_n B}(I - \lambda_n A) t_n$, we have:

$$\begin{aligned} \langle s_n - \breve{u}, t_n - s_n - \lambda_n A t_n \rangle &= \big\langle J_{\lambda_n B}(I - \lambda_n A) t_n - J_{\lambda_n B}(I - \lambda_n A) \breve{u}, \, (I - \lambda_n A) t_n - (I - \lambda_n A) \breve{u} + (I - \lambda_n A)\breve{u} - s_n \big\rangle \\ &\geq \|s_n - \breve{u}\|^2 + \langle s_n - \breve{u}, \breve{u} - s_n \rangle - \langle s_n - \breve{u}, \lambda_n A \breve{u} \rangle \\ &= -\langle s_n - \breve{u}, \lambda_n A \breve{u} \rangle. \end{aligned}$$

Hence, using the monotonicity of $A$, we get:

$$\langle s_n - \breve{u}, t_n - s_n - \lambda_n (A t_n - A s_n) \rangle \geq 0,$$

which is the same as:

$$2\langle t_n - s_n, s_n - \breve{u} \rangle - 2\lambda_n \langle A t_n - A s_n, s_n - \breve{u} \rangle \geq 0. \qquad (13)$$
However,

$$2\langle t_n - s_n, s_n - \breve{u} \rangle = \|t_n - \breve{u}\|^2 - \|t_n - s_n\|^2 - \|s_n - \breve{u}\|^2. \qquad (14)$$
Substituting Equation (14) into Inequality (13), we get:

$$\|s_n - \breve{u}\|^2 \leq \|t_n - \breve{u}\|^2 - \|t_n - s_n\|^2 - 2\lambda_n \langle A t_n - A s_n, s_n - \breve{u} \rangle. \qquad (15)$$
On the other hand, from the definition of $u_{n+1}$, we have:

$$\begin{aligned} \|u_{n+1} - \breve{u}\|^2 &= \|(1 - \rho) t_n + \rho s_n + \rho \lambda_n (A t_n - A s_n) - \breve{u}\|^2 \\ &= \|(1 - \rho)(t_n - \breve{u}) + \rho (s_n - \breve{u}) + \rho \lambda_n (A t_n - A s_n)\|^2 \\ &= (1 - \rho)^2 \|t_n - \breve{u}\|^2 + \rho^2 \|s_n - \breve{u}\|^2 + \rho^2 \lambda_n^2 \|A t_n - A s_n\|^2 + 2\rho(1 - \rho)\langle t_n - \breve{u}, s_n - \breve{u} \rangle \\ &\quad + 2\lambda_n \rho (1 - \rho) \langle t_n - \breve{u}, A t_n - A s_n \rangle + 2\lambda_n \rho^2 \langle s_n - \breve{u}, A t_n - A s_n \rangle. \end{aligned} \qquad (16)$$
Using Equation (7), we have:

$$2\langle t_n - \breve{u}, s_n - \breve{u} \rangle = \|t_n - \breve{u}\|^2 + \|s_n - \breve{u}\|^2 - \|t_n - s_n\|^2. \qquad (17)$$
Substituting Equation (17) into Equation (16), we get:

$$\begin{aligned} \|u_{n+1} - \breve{u}\|^2 &= (1 - \rho)^2 \|t_n - \breve{u}\|^2 + \rho^2 \|s_n - \breve{u}\|^2 + \rho^2 \lambda_n^2 \|A t_n - A s_n\|^2 \\ &\quad + \rho(1 - \rho)\big[\|t_n - \breve{u}\|^2 + \|s_n - \breve{u}\|^2 - \|t_n - s_n\|^2\big] \\ &\quad + 2\lambda_n \rho (1 - \rho) \langle t_n - \breve{u}, A t_n - A s_n \rangle + 2\lambda_n \rho^2 \langle s_n - \breve{u}, A t_n - A s_n \rangle \\ &= (1 - \rho) \|t_n - \breve{u}\|^2 + \rho \|s_n - \breve{u}\|^2 - \rho(1 - \rho) \|t_n - s_n\|^2 + \lambda_n^2 \rho^2 \|A t_n - A s_n\|^2 \\ &\quad + 2\lambda_n \rho (1 - \rho) \langle t_n - \breve{u}, A t_n - A s_n \rangle + 2\lambda_n \rho^2 \langle s_n - \breve{u}, A t_n - A s_n \rangle. \end{aligned} \qquad (18)$$
Putting Inequality (15) into Equation (18), and then using (9) together with the Cauchy–Schwarz inequality, we have:

$$\begin{aligned} \|u_{n+1} - \breve{u}\|^2 &\leq (1 - \rho) \|t_n - \breve{u}\|^2 + \rho \big[\|t_n - \breve{u}\|^2 - \|t_n - s_n\|^2 - 2\lambda_n \langle A t_n - A s_n, s_n - \breve{u} \rangle\big] - \rho(1 - \rho)\|t_n - s_n\|^2 \\ &\quad + \lambda_n^2 \rho^2 \|A t_n - A s_n\|^2 + 2\lambda_n \rho (1 - \rho) \langle t_n - \breve{u}, A t_n - A s_n \rangle + 2\lambda_n \rho^2 \langle s_n - \breve{u}, A t_n - A s_n \rangle \\ &= \|t_n - \breve{u}\|^2 - \rho(2 - \rho) \|t_n - s_n\|^2 + \lambda_n^2 \rho^2 \|A t_n - A s_n\|^2 + 2\lambda_n \rho (1 - \rho) \langle t_n - s_n, A t_n - A s_n \rangle \\ &\leq \|t_n - \breve{u}\|^2 - \rho(2 - \rho) \|t_n - s_n\|^2 + \frac{\lambda_n^2 \rho^2 \mu^2}{\lambda_{n+1}^2} \|t_n - s_n\|^2 + \frac{2\lambda_n \rho (1 - \rho)\mu}{\lambda_{n+1}} \|t_n - s_n\|^2 \\ &= \|t_n - \breve{u}\|^2 - \rho\left[2 - \rho - 2\mu(1 - \rho)\frac{\lambda_n}{\lambda_{n+1}} - \rho \mu^2 \frac{\lambda_n^2}{\lambda_{n+1}^2}\right] \|t_n - s_n\|^2; \end{aligned} \qquad (19)$$
hence the proof.  □
Lemma 7.
Let $\{t_n\}$ be the sequence generated by Algorithm 1 and let Assumptions (A1)–(A3) be satisfied. If there exists a subsequence $\{t_{n_i}\}$ converging weakly to $q \in \mathcal{H}$ with $\lim_{n \to \infty} \|t_n - s_n\| = 0$, then $q \in \Gamma$.
Proof. 
Suppose $(y, x) \in \operatorname{Graph}(A + B)$, that is, $x - A y \in B y$. Since $s_{n_i} = (I + \lambda_{n_i} B)^{-1}(I - \lambda_{n_i} A) t_{n_i}$, we get:

$$(I - \lambda_{n_i} A) t_{n_i} \in (I + \lambda_{n_i} B) s_{n_i}.$$

This implies that:

$$\frac{1}{\lambda_{n_i}} \left(t_{n_i} - s_{n_i} - \lambda_{n_i} A t_{n_i}\right) \in B s_{n_i}.$$

By the maximal monotonicity of $B$, we have:

$$\left\langle y - s_{n_i}, \, x - A y - \frac{1}{\lambda_{n_i}}\left(t_{n_i} - s_{n_i} - \lambda_{n_i} A t_{n_i}\right) \right\rangle \geq 0.$$

Hence, using the monotonicity of $A$ in the last step,

$$\begin{aligned} \langle y - s_{n_i}, x \rangle &\geq \left\langle y - s_{n_i}, \, A y + \frac{1}{\lambda_{n_i}}\left(t_{n_i} - s_{n_i} - \lambda_{n_i} A t_{n_i}\right) \right\rangle \\ &= \langle y - s_{n_i}, A y - A t_{n_i} \rangle + \left\langle y - s_{n_i}, \frac{1}{\lambda_{n_i}}(t_{n_i} - s_{n_i}) \right\rangle \\ &= \langle y - s_{n_i}, A y - A s_{n_i} \rangle + \langle y - s_{n_i}, A s_{n_i} - A t_{n_i} \rangle + \left\langle y - s_{n_i}, \frac{1}{\lambda_{n_i}}(t_{n_i} - s_{n_i}) \right\rangle \\ &\geq \langle y - s_{n_i}, A s_{n_i} - A t_{n_i} \rangle + \left\langle y - s_{n_i}, \frac{1}{\lambda_{n_i}}(t_{n_i} - s_{n_i}) \right\rangle. \end{aligned}$$
From the fact that $A$ is Lipschitz continuous and $\lim_{n \to \infty} \|t_n - s_n\| = 0$, it follows that $\lim_{n \to \infty} \|A t_n - A s_n\| = 0$. Since $\lim_{n \to \infty} \lambda_n$ exists and is positive, and $s_{n_i} \rightharpoonup q$ (because $t_{n_i} \rightharpoonup q$ and $\|t_{n_i} - s_{n_i}\| \to 0$), we get:

$$\langle y - q, x \rangle = \lim_{i \to \infty} \langle y - s_{n_i}, x \rangle \geq 0.$$

The above inequality together with the maximal monotonicity of $A + B$ (Lemma 1) implies that $0 \in (A + B) q$, that is, $q \in \Gamma$; hence the proof.  □
Theorem 1.
Let $A$ be an operator satisfying Assumption (A3), suppose $\Gamma \neq \emptyset$, and let:

$$0 < \varrho < \frac{\sqrt{1 + 8\kappa} - 1 - 2\kappa}{2(1 - \kappa)}, \qquad (20)$$

with $\kappa \in \left(0, \, \dfrac{\rho(1 - \mu)(2 - \rho + \rho\mu)}{\left[2 - \rho(1 - \mu)\right]^2}\right)$. Then, the sequence $\{u_n\}$ generated by Algorithm 1 converges weakly to a point of $\Gamma$.
Proof. 
From the definition of $u_{n+1}$ and (9), we have:

$$\begin{aligned} \|u_{n+1} - s_n\| &= \|(1 - \rho) t_n + \rho s_n + \rho \lambda_n (A t_n - A s_n) - s_n\| \\ &= \|(1 - \rho)(t_n - s_n) + \rho \lambda_n (A t_n - A s_n)\| \\ &\leq (1 - \rho)\|t_n - s_n\| + \rho \lambda_n \|A t_n - A s_n\| \\ &\leq (1 - \rho)\|t_n - s_n\| + \rho \mu \frac{\lambda_n}{\lambda_{n+1}} \|t_n - s_n\| \\ &= \left[1 - \rho\left(1 - \mu \frac{\lambda_n}{\lambda_{n+1}}\right)\right] \|t_n - s_n\|. \end{aligned} \qquad (21)$$
Furthermore,

$$\|u_{n+1} - t_n\| \leq \|u_{n+1} - s_n\| + \|s_n - t_n\|. \qquad (22)$$

Substituting Inequality (21) into Inequality (22), we get:

$$\|u_{n+1} - t_n\| \leq \left[1 - \rho\left(1 - \mu \frac{\lambda_n}{\lambda_{n+1}}\right)\right] \|t_n - s_n\| + \|s_n - t_n\| = \left[2 - \rho\left(1 - \mu \frac{\lambda_n}{\lambda_{n+1}}\right)\right] \|t_n - s_n\|. \qquad (23)$$
Hence,

$$\|t_n - s_n\| \geq \frac{1}{2 - \rho\left(1 - \mu \frac{\lambda_n}{\lambda_{n+1}}\right)} \|u_{n+1} - t_n\|. \qquad (24)$$
Putting Inequality (24) and Lemma 6 together, we have:

$$\|u_{n+1} - \breve{u}\|^2 \leq \|t_n - \breve{u}\|^2 - \frac{\eta \rho}{\left[2 - \rho\left(1 - \mu \frac{\lambda_n}{\lambda_{n+1}}\right)\right]^2} \|u_{n+1} - t_n\|^2. \qquad (25)$$
Set:

$$\zeta_n = \frac{\rho\left[2 - \rho - 2\mu(1 - \rho)\frac{\lambda_n}{\lambda_{n+1}} - \rho \mu^2 \frac{\lambda_n^2}{\lambda_{n+1}^2}\right]}{\left[2 - \rho\left(1 - \mu \frac{\lambda_n}{\lambda_{n+1}}\right)\right]^2}. \qquad (26)$$
Observe that, since $\lambda_n \to \lambda$ as $n \to \infty$, we get:

$$\lim_{n \to \infty} \zeta_n = \frac{\rho\left[2 - \rho - 2\mu(1 - \rho) - \rho \mu^2\right]}{\left[2 - \rho(1 - \mu)\right]^2} = \frac{\rho(1 - \mu)(2 - \rho + \rho\mu)}{\left[2 - \rho(1 - \mu)\right]^2} > 0.$$
It follows from the choice of $\kappa$ that there exists $n_0$ such that:

$$\zeta_n > \kappa \quad \text{for all } n \geq n_0;$$

without loss of generality, we assume this holds for all $n \geq 1$.
This, together with Inequality (25), implies that:

$$\|u_{n+1} - \breve{u}\|^2 \leq \|t_n - \breve{u}\|^2 - \kappa \|u_{n+1} - t_n\|^2. \qquad (27)$$
On the other hand, from the definition (10), we have:

$$\begin{aligned} \|t_n - \breve{u}\|^2 &= \|u_n + \varrho(u_n - u_{n-1}) - \breve{u}\|^2 \\ &= \|(1 + \varrho)(u_n - \breve{u}) - \varrho(u_{n-1} - \breve{u})\|^2 \\ &= (1 + \varrho)\|u_n - \breve{u}\|^2 - \varrho \|u_{n-1} - \breve{u}\|^2 + \varrho(1 + \varrho)\|u_n - u_{n-1}\|^2. \end{aligned} \qquad (28)$$
Furthermore, from the definition (10) and the Cauchy–Schwarz inequality, we get:

$$\begin{aligned} \|u_{n+1} - t_n\|^2 &= \|u_{n+1} - u_n - \varrho(u_n - u_{n-1})\|^2 \\ &= \|u_{n+1} - u_n\|^2 + \varrho^2 \|u_n - u_{n-1}\|^2 - 2\varrho \langle u_{n+1} - u_n, u_n - u_{n-1} \rangle \\ &\geq \|u_{n+1} - u_n\|^2 + \varrho^2 \|u_n - u_{n-1}\|^2 - 2\varrho \|u_{n+1} - u_n\| \|u_n - u_{n-1}\| \\ &\geq \|u_{n+1} - u_n\|^2 + \varrho^2 \|u_n - u_{n-1}\|^2 - \varrho \|u_{n+1} - u_n\|^2 - \varrho \|u_n - u_{n-1}\|^2 \\ &= (1 - \varrho)\|u_{n+1} - u_n\|^2 + (\varrho^2 - \varrho)\|u_n - u_{n-1}\|^2. \end{aligned} \qquad (29)$$
Putting Equations (27)–(29) together, we obtain:

$$\begin{aligned} \|u_{n+1} - \breve{u}\|^2 &\leq (1 + \varrho)\|u_n - \breve{u}\|^2 - \varrho \|u_{n-1} - \breve{u}\|^2 + \varrho(1 + \varrho)\|u_n - u_{n-1}\|^2 \\ &\quad - \kappa\left[(1 - \varrho)\|u_{n+1} - u_n\|^2 + (\varrho^2 - \varrho)\|u_n - u_{n-1}\|^2\right] \\ &= (1 + \varrho)\|u_n - \breve{u}\|^2 - \varrho \|u_{n-1} - \breve{u}\|^2 - \kappa(1 - \varrho)\|u_{n+1} - u_n\|^2 \\ &\quad + \left[\varrho(1 + \varrho) - \kappa(\varrho^2 - \varrho)\right]\|u_n - u_{n-1}\|^2 \\ &= (1 + \varrho)\|u_n - \breve{u}\|^2 - \varrho \|u_{n-1} - \breve{u}\|^2 - \beta_n \|u_{n+1} - u_n\|^2 + \gamma_n \|u_n - u_{n-1}\|^2, \end{aligned} \qquad (30)$$

where $\beta_n = \kappa(1 - \varrho)$ and $\gamma_n = \varrho(1 + \varrho) - \kappa(\varrho^2 - \varrho)$.
Set:

$$\phi_n = \|u_n - \breve{u}\|^2 - \varrho \|u_{n-1} - \breve{u}\|^2 + \gamma_n \|u_n - u_{n-1}\|^2. \qquad (31)$$

Thus, using (30),

$$\begin{aligned} \phi_{n+1} - \phi_n &= \|u_{n+1} - \breve{u}\|^2 - \varrho \|u_n - \breve{u}\|^2 + \gamma_{n+1} \|u_{n+1} - u_n\|^2 - \|u_n - \breve{u}\|^2 + \varrho \|u_{n-1} - \breve{u}\|^2 - \gamma_n \|u_n - u_{n-1}\|^2 \\ &= \|u_{n+1} - \breve{u}\|^2 - (1 + \varrho)\|u_n - \breve{u}\|^2 + \varrho \|u_{n-1} - \breve{u}\|^2 + \gamma_{n+1} \|u_{n+1} - u_n\|^2 - \gamma_n \|u_n - u_{n-1}\|^2 \\ &\leq (\gamma_{n+1} - \beta_n)\|u_{n+1} - u_n\|^2. \end{aligned}$$
Notice that:

$$\beta_n - \gamma_{n+1} = \kappa(1 - \varrho) - \varrho(1 + \varrho) + \kappa(\varrho^2 - \varrho) = \kappa - (1 + 2\kappa)\varrho - (1 - \kappa)\varrho^2.$$
ϕ n + 1 ϕ n τ u n + 1 u n 2 ,
with τ = ( 1 κ ) ϱ 2 ( 1 + 2 κ ) ϱ + κ , it follows from (20) that τ > 0 . Therefore, the sequence { ϕ n } is nonincreasing. Further, from the definition of ϕ n + 1 , we have
$$\phi_{n+1} = \|u_{n+1} - \breve{u}\|^2 - \varrho \|u_n - \breve{u}\|^2 + \gamma_{n+1} \|u_{n+1} - u_n\|^2 \geq -\varrho \|u_n - \breve{u}\|^2.$$
In addition, we have:

$$\phi_n = \|u_n - \breve{u}\|^2 - \varrho \|u_{n-1} - \breve{u}\|^2 + \gamma_n \|u_n - u_{n-1}\|^2 \geq \|u_n - \breve{u}\|^2 - \varrho \|u_{n-1} - \breve{u}\|^2.$$
The last inequality, together with the monotonicity of $\{\phi_n\}$, implies that:

$$\begin{aligned} \|u_n - \breve{u}\|^2 &\leq \phi_n + \varrho \|u_{n-1} - \breve{u}\|^2 \\ &\leq \phi_1 + \varrho \|u_{n-1} - \breve{u}\|^2 \\ &\;\;\vdots \\ &\leq \phi_1 \left(\varrho^{n-1} + \cdots + 1\right) + \varrho^n \|u_0 - \breve{u}\|^2 \\ &\leq \frac{\phi_1}{1 - \varrho} + \varrho^n \|u_0 - \breve{u}\|^2. \end{aligned} \qquad (33)$$
Combining Equations (31) and (33), we obtain:

$$\phi_{n+1} \geq -\varrho \|u_n - \breve{u}\|^2 \geq -\frac{\varrho \phi_1}{1 - \varrho} - \varrho^{n+1} \|u_0 - \breve{u}\|^2. \qquad (34)$$
It follows from Expressions (32) and (34) that:

$$\tau \sum_{n=1}^{i} \|u_{n+1} - u_n\|^2 \leq \phi_1 - \phi_{i+1} \leq \phi_1 + \frac{\varrho \phi_1}{1 - \varrho} + \varrho^{i+1} \|u_0 - \breve{u}\|^2 \leq \frac{\phi_1}{1 - \varrho} + \|u_0 - \breve{u}\|^2. \qquad (35)$$
Letting $i \to \infty$ in the above expression implies that:

$$\sum_{n=1}^{\infty} \|u_{n+1} - u_n\|^2 < +\infty, \quad \text{hence} \quad \lim_{n \to \infty} \|u_{n+1} - u_n\| = 0. \qquad (36)$$
Moreover, from:

$$\|u_{n+1} - t_n\|^2 = \|u_{n+1} - u_n\|^2 + \varrho^2 \|u_n - u_{n-1}\|^2 - 2\varrho \langle u_{n+1} - u_n, u_n - u_{n-1} \rangle, \qquad (37)$$

we can obtain:

$$\|u_{n+1} - t_n\| \to 0 \quad \text{as } n \to \infty. \qquad (38)$$
By Relation (30) together with Lemma 2, we obtain:

$$\lim_{n \to \infty} \|u_n - \breve{u}\|^2 = l, \quad \text{for some finite } l \geq 0. \qquad (39)$$

By Equation (28) together with (36), we also obtain:

$$\lim_{n \to \infty} \|t_n - \breve{u}\|^2 = l. \qquad (40)$$
Moreover,

$$0 \leq \|u_n - t_n\| \leq \|u_n - u_{n+1}\| + \|u_{n+1} - t_n\| \to 0. \qquad (41)$$

It follows from Lemma 6, together with (39), (40), and the fact that $\lim_{n \to \infty} \eta \rho > 0$, that:

$$\|s_n - t_n\| \to 0. \qquad (42)$$
Since $\lim_{n \to \infty} \|u_n - \breve{u}\|^2$ exists, the sequence $\{u_n\}$ is bounded. Let $\{u_{n_i}\}$ be a subsequence of $\{u_n\}$ such that $u_{n_i} \rightharpoonup u^*$; then, from (41), we have $t_{n_i} \rightharpoonup u^*$. Now, since $\lim_{n \to \infty} \|t_n - s_n\| = 0$, by Lemma 7 we get $u^* \in \Gamma$. Consequently, by Lemma 3, the sequence $\{u_n\}$ converges weakly to a solution of (1).  □

4. Application to the Split Feasibility Problem

In this section, we derive from Algorithm 1 a scheme for solving the split feasibility problem. The split feasibility problem (SFP) is the problem of finding a point $\check{u} \in C$ such that $A\check{u} \in Q$, where $C$ and $Q$ are nonempty closed and convex subsets of $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, and $A : \mathcal{H}_1 \to \mathcal{H}_2$ is a bounded linear operator. Censor and Elfving [54] introduced the SFP in finite-dimensional Hilbert spaces and used a multi-distance method to obtain an iterative scheme for solving it. A number of problems that arise in phase retrieval and in medical image reconstruction can be formulated as split feasibility problems [3,55]. The SFP also appears in various disciplines such as image restoration, dynamic emission tomographic image reconstruction, and radiation therapy treatment planning [2,7,56]. Suppose $f : \mathcal{H} \to (-\infty, +\infty]$ is a proper lower semi-continuous convex function. Then, for all $u \in \mathcal{H}$, the subdifferential $\partial f$ of $f$ is defined by:

$$\partial f(u) = \{\omega \in \mathcal{H} : f(u) \leq \langle \omega, u - v \rangle + f(v) \;\; \forall v \in \mathcal{H}\}.$$
For a nonempty closed and convex subset $C$ of $\mathcal{H}$, the indicator function $i_C$ of $C$ is given by:

$$i_C(u) = \begin{cases} 0 & \text{if } u \in C, \\ \infty & \text{if } u \notin C. \end{cases}$$
Furthermore, the normal cone $N_C u$ of $C$ at $u \in C$ is given by:

$$N_C u = \{\omega \in \mathcal{H} : \langle \omega, u - v \rangle \geq 0 \;\; \forall v \in C\}.$$
It is known that the indicator function $i_C$ is a proper, lower semi-continuous, and convex function on $\mathcal{H}$. Thus, the subdifferential $\partial i_C$ of $i_C$ is a maximal monotone operator, and, for $u \in C$:

$$\partial i_C u = \{\omega \in \mathcal{H} : i_C u \leq \langle \omega, u - v \rangle + i_C v \;\; \forall v \in \mathcal{H}\} = \{\omega \in \mathcal{H} : \langle \omega, u - v \rangle \geq 0 \;\; \forall v \in C\} = N_C u.$$
Therefore, for each $\lambda > 0$, we can define the resolvent of $\partial i_C$ as $J_{\lambda \partial i_C} = (I + \lambda \partial i_C)^{-1}$. Hence, we can see that, for $\lambda > 0$:

$$v = J_{\lambda \partial i_C} u \iff u \in (I + \lambda \partial i_C) v \iff u - v \in \lambda N_C v \iff v = P_C u.$$
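A quick numerical check of this characterization (our own illustration, using a box for $C$, where the projection is a componentwise clip):

```python
import numpy as np

# For the box C = [-1, 1]^n, v = P_C u is np.clip, and the projection is
# characterized variationally by <u - v, w - v> <= 0 for all w in C.
rng = np.random.default_rng(2)
u = 3.0 * rng.standard_normal(6)
v = np.clip(u, -1.0, 1.0)                  # P_C u for the box
for _ in range(100):
    w = rng.uniform(-1.0, 1.0, size=6)     # arbitrary point of C
    assert np.dot(u - v, w - v) <= 1e-12
```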
Now, based on the above derivation, Algorithm 1 can be reduced to the following scheme.
Let $C$ and $Q$ be nonempty closed convex subsets of Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, let $A : \mathcal{H}_1 \to \mathcal{H}_2$ be a bounded linear operator with adjoint $A^*$, and let $\Gamma_{SFP}$ denote the solution set of the SFP. Let $u_0, u_1 \in \mathcal{H}_1$ be arbitrary, $\lambda_0 > 0$, $\varrho > 0$, and $\rho > 0$. Let $\{u_n\}$ be the sequence generated by the following scheme:

$$\begin{cases} t_n := u_n + \varrho(u_n - u_{n-1}), \\ s_n := P_C\big(t_n - \lambda_n A^*(I - P_Q) A t_n\big), \\ u_{n+1} := (1 - \rho) t_n + \rho s_n + \rho \lambda_n \big[A^*(I - P_Q) A t_n - A^*(I - P_Q) A s_n\big], \end{cases} \qquad (43)$$

where the step size $\lambda_n$ is updated using Equation (11), with the operator $A$ there replaced by $A^*(I - P_Q)A$. If $\Gamma_{SFP} \neq \emptyset$, then the sequence $\{u_n\}$ converges weakly to an element of $\Gamma_{SFP}$.
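A minimal NumPy sketch of Scheme (43), under the assumption that the linear operator is given as a matrix and that the projections onto $C$ and $Q$ are available as callables (the function names and the stopping test are ours):

```python
import numpy as np

def relaxed_inertial_sfp(Amat, proj_C, proj_Q, u0, *, rho=0.1, varrho=0.9,
                         mu=0.3, lam0=1.0, tol=1e-4, max_iter=1000):
    """Sketch of Scheme (43): find u in C with (Amat @ u) in Q. The driving
    operator is F = A*(I - P_Q)A, the gradient of 0.5*||(I - P_Q)Au||^2."""
    def F(u):
        Au = Amat @ u
        return Amat.T @ (Au - proj_Q(Au))
    u_prev, u, lam = u0.copy(), u0.copy(), lam0
    for _ in range(max_iter):
        t = u + varrho * (u - u_prev)          # inertial extrapolation
        Ft = F(t)
        s = proj_C(t - lam * Ft)               # projected forward step
        if np.linalg.norm(t - s) <= tol * max(1.0, np.linalg.norm(t)):
            return s
        Fs = F(s)
        u_prev, u = u, (1 - rho) * t + rho * s + rho * lam * (Ft - Fs)
        denom = np.linalg.norm(Ft - Fs)        # step size rule (11) for F
        if denom > 0:
            lam = min(lam, mu * np.linalg.norm(t - s) / denom)
    return u
```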

Application to the Image Restoration Problem

As mentioned in Section 1, the VI problem can be applied to solve many problems. Of particular interest, in this subsection, we use Algorithm 1 and Scheme (43) (referred to as Algorithm 2) to solve the problem of image deblurring. Furthermore, to illustrate the effectiveness of the proposed scheme, we give a comparative analysis of Algorithm 1 and the algorithms proposed in [49,57]. We also compare Scheme (43) with Byrne's algorithm proposed in [3] for solving the split feasibility problem.
Recall that the image deblurring problem in image processing can be expressed as:

$$c = M u + \delta, \qquad (44)$$

where $u \in \mathbb{R}^n$ represents the original image, $M$ is the blurring matrix, $c$ is the observed image, and $\delta \in \mathbb{R}^m$ is Gaussian noise. It is known that solving (44) is equivalent to solving the convex unconstrained optimization problem:

$$\min_{u \in \mathbb{R}^n} \frac{1}{2}\|M u - c\|_2^2 + \rho \|u\|_1, \qquad (45)$$

with $\rho > 0$ as the regularization parameter. To solve (45), we set $A = \nabla S$ and $B = \partial T$, where $S(u) = \frac{1}{2}\|M u - c\|_2^2$ and $T(u) = \|u\|_1$. Then, $\nabla S(u) = M^{\top}(M u - c)$ is $\frac{1}{\|M\|^2}$-cocoercive. Therefore, for any $0 < \tau < \frac{2}{\|M\|^2}$, $(I - \tau \nabla S)$ is nonexpansive [58]. The subdifferential $\partial T$ is maximal monotone [21]. It is well known that:

$$u \text{ is a solution of } (45) \iff 0 \in \nabla S(u) + \partial T(u) \iff u = \operatorname{prox}_{\rho T}(I - \tau \nabla S)(u),$$

where $\operatorname{prox}_{\rho T}(u) = \arg\min_{x \in \mathbb{R}^n} \left\{ T(x) + \frac{1}{2\rho}\|u - x\|^2 \right\}$; for more details, see [1].
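As a usage illustration (our own toy setup, with a small random matrix standing in for the blurring matrix $M$), Algorithm 1 can be applied to (45) by taking $A = \nabla S$ and using soft-thresholding as the resolvent of $B$, reusing the `relaxed_inertial_tseng` sketch given after Algorithm 1:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((64, 128))               # stand-in for the blurring matrix
u_true = np.zeros(128)
u_true[rng.choice(128, size=10, replace=False)] = rng.standard_normal(10)
c = M @ u_true + 0.01 * rng.standard_normal(64)  # observed data, model (44)
reg = 0.1                                        # regularization weight rho in (45)

A = lambda u: M.T @ (M @ u - c)                  # A = grad S, monotone and Lipschitz
# Resolvent of B = subdifferential of reg*||.||_1: soft-thresholding with lam*reg.
soft = lambda lam, x: np.sign(x) * np.maximum(np.abs(x) - lam * reg, 0.0)

u_rec = relaxed_inertial_tseng(A, soft, np.zeros(128))
```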
For the split feasibility problem (SFP), we reformulate Problem (45) as a convex constrained optimization problem:

$$\min_{u \in \mathbb{R}^n} \frac{1}{2}\|M u - c\|_2^2 \quad \text{subject to} \quad \|u\|_1 \leq t, \qquad (46)$$

where $t > 0$ is a given constant. To solve (46), we take $A u = \nabla S(u)$, and we consider $C := \{u \in \mathbb{R}^n : \|u\|_1 \leq t\}$ and $Q := \{c\}$.
To measure the quality of the recovered images, we adopted the improved signal-to-noise ratio (ISNR) [26] and the structural similarity index measure (SSIM) [59]. We considered motion blur from MATLAB as the blurring function, generated by fspecial('motion',9,40). For the comparison, we considered the standard test images Butterfly ($654 \times 654$), Lena ($512 \times 512$), and Pepper ($512 \times 512$) (see Figure 1). For the control parameters, we took $\varrho = 0.9$, $\lambda_0 = 1$, $\mu = 0.3$, and $\rho = 0.1$ for Algorithm 1 and Algorithm 2 (Scheme (43)), and $\alpha_n = 0.9$ and $\lambda_n = 0.5 \cdot \frac{150 n}{1000 n + 150}$ for Algorithm 3.1 in [49], Algorithm 1.3 in [57], and Algorithm 1.1 in [3]. For all the algorithms, we took $\frac{\|u_{n+1} - u_n\|_2}{\|u_{n+1}\|_2} < 10^{-4}$ as the stopping criterion. All codes were written in MATLAB R2018b on a personal computer.
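For reference, the ISNR figure of merit can be computed as below; this is the standard definition as we read it from [26], stated in Python for consistency with the earlier sketches (the paper's experiments themselves used MATLAB):

```python
import numpy as np

def isnr(original, degraded, restored):
    """Improved signal-to-noise ratio in dB: the error of the degraded image
    relative to the error of the restored image, both against the original."""
    num = np.sum((degraded - original) ** 2)
    den = np.sum((restored - original) ** 2)
    return 10.0 * np.log10(num / den)
```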
It can be seen from Figure 2, Figure 3 and Figure 4 and Table 1 that the images recovered by the proposed Algorithm 1 had higher ISNR and SSIM values, which means that their quality was better than that of the images recovered by the compared algorithms.
It can be observed from Figure 5, Figure 6 and Figure 7 that the restoration quality of the images restored by the modified algorithm (Algorithm 2) was better than that of the images restored by the compared algorithm, and this is confirmed by the higher ISNR and SSIM values of Algorithm 2 in Table 2.

5. Conclusions

A relaxed inertial self-adaptive Tseng-type method for solving the variational inclusion problem was proposed in this work; the scheme was derived from an explicit time discretization of a dynamical system. The main advantage of this scheme is that it uses both an extrapolation step and a relaxation parameter, and the iterates generated by the proposed scheme converge weakly to a zero of the sum of a maximally monotone operator and a monotone operator. Furthermore, the proposed method does not require prior knowledge of the Lipschitz constant of the cost operator, and the iterates converge quickly to a solution of the problem owing to the inertial extrapolation step. A modified scheme derived from the proposed method was given for solving the split feasibility problem. The application of the proposed methods to image recovery and the comparison with some existing state-of-the-art methods illustrated that the proposed methods are robust and efficient.

Author Contributions

Conceptualization, J.A. and A.H.I.; methodology, J.A.; software, A.P.; validation, J.A., P.K. and A.H.I.; formal analysis, J.A.; investigation, P.K.; resources, P.K.; data curation, A.P.; writing—original draft preparation, J.A.; writing—review and editing, J.A. and A.H.I.; visualization, A.P.; supervision, P.K.; project administration, P.K.; funding acquisition, P.K. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Faculty of Science, KMUTT. The first and the third authors were supported by “the Petchra Pra Jom Klao Ph.D. Research Scholarship” from King Mongkut’s University of Technology Thonburi (Grant Nos. 38/2018 and 16/2018, respectively).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
2. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2003, 20, 103.
3. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441.
4. Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378.
5. Thong, D.V.; Cholamjiak, P. Strong convergence of a forward–backward splitting method with a new step size for solving monotone inclusions. Comput. Appl. Math. 2019, 38, 94.
6. Marcotte, P. Application of Khobotov’s algorithm to variational inequalities and network equilibrium problems. INFOR Inf. Syst. Oper. Res. 1991, 29, 258–270.
7. Gibali, A.; Thong, D.V. Tseng type methods for solving inclusion problems and its applications. Calcolo 2018, 55, 49.
8. Khobotov, E.N. Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 1987, 27, 120–127.
9. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Academic Press: New York, NY, USA, 1980; Volume 31.
10. Trémolières, R.; Lions, J.L.; Glowinski, R. Numerical Analysis of Variational Inequalities; North Holland: Amsterdam, The Netherlands, 2011.
11. Baiocchi, C. Variational and quasivariational inequalities. In Applications to Free-Boundary Problems; Springer: Basel, Switzerland, 1984.
12. Konnov, I. Combined Relaxation Methods for Variational Inequalities; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2001; Volume 495.
13. Jaiboon, C.; Kumam, P. An extragradient approximation method for system of equilibrium problems and variational inequality problems. Thai J. Math. 2012, 7, 77–104.
14. Kumam, W.; Piri, H.; Kumam, P. Solutions of system of equilibrium and variational inequality problems on fixed points of infinite family of nonexpansive mappings. Appl. Math. Comput. 2014, 248, 441–455.
15. Chamnarnpan, T.; Phiangsungnoen, S.; Kumam, P. A new hybrid extragradient algorithm for solving the equilibrium and variational inequality problems. Afrika Matematika 2015, 26, 87–98.
16. Deepho, J.; Kumam, W.; Kumam, P. A new hybrid projection algorithm for solving the split generalized equilibrium problems and the system of variational inequality problems. J. Math. Model. Algorithms Oper. Res. 2014, 13, 405–423.
17. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibrium problems. J. Inequal. Appl. 2019, 2019, 1–25.
18. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
19. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390.
20. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
21. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
22. Kitkuan, D.; Kumam, P.; Martínez-Moreno, J. Generalized Halpern-type forward–backward splitting methods for convex minimization problems with application to image restoration problems. Optimization 2019, 1–25.
23. Kitkuan, D.; Kumam, P.; Martínez-Moreno, J.; Sitthithakerngkiet, K. Inertial viscosity forward–backward splitting algorithm for monotone inclusions and its application to image restoration problems. Int. J. Comput. Math. 2020, 97, 482–497.
24. Huang, Y.; Dong, Y. New properties of forward–backward splitting and a practical proximal-descent algorithm. Appl. Math. Comput. 2014, 237, 60–68.
25. Goldstein, A.A. Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70, 709–710.
26. Padcharoen, A.; Kitkuan, D.; Kumam, W.; Kumam, P. Tseng methods with inertial for solving inclusion problems and application to image deblurring and image recovery problems. Comput. Math. Methods 2020, e1088.
27. Attouch, H.; Peypouquet, J.; Redont, P. Backward–forward algorithms for structured monotone inclusions in Hilbert spaces. J. Math. Anal. Appl. 2018, 457, 1095–1117.
28. Dadashi, V.; Postolache, M. Forward–backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2020, 9, 89–99.
29. Suparatulatorn, R.; Khemphet, A. Tseng type methods for inclusion and fixed point problems with applications. Mathematics 2019, 7, 1175.
30. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011; Volume 408.
31. Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318.
32. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
33. Nesterov, Y. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
34. Alvarez, F. On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM J. Control Optim. 2000, 38, 1102–1119.
35. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
36. Dong, Q.; Cho, Y.; Zhong, L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704.
37. Abubakar, J.; Sombut, K.; Ibrahim, A.H. An accelerated subgradient extragradient algorithm for strongly pseudomonotone variational inequality problems. Thai J. Math. 2019, 18, 166–187.
38. Van Hieu, D. An inertial-like proximal algorithm for equilibrium problems. Math. Methods Oper. Res. 2018, 1–17.
39. Thong, D.V.; Van Hieu, D. Inertial extragradient algorithms for strongly pseudomonotone variational inequalities. J. Comput. Appl. Math. 2018, 341, 80–98.
40. Abubakar, J.; Kumam, P.; Rehman, H.; Ibrahim, A.H. Inertial iterative schemes with variable step sizes for variational inequality problem involving pseudomonotone operator. Mathematics 2020, 8, 609.
41. Attouch, H.; Cabot, A. Convergence of a relaxed inertial proximal algorithm for maximally monotone operators. Math. Program. 2019.
42. Attouch, H.; Cabot, A. Convergence rate of a relaxed inertial proximal algorithm for convex minimization. Optimization 2019.
43. Attouch, H.; Cabot, A. Convergence of a relaxed inertial forward–backward algorithm for structured monotone inclusions. Appl. Math. Optim. 2019, 80, 547–598.
44. Boţ, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
45. Iutzeler, F.; Hendrickx, J.M. A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optim. Methods Softw. 2019, 34, 383–405.
46. Alves, M.M.; Marcavillaca, R.T. On inexact relative-error hybrid proximal extragradient, forward-backward and Tseng’s modified forward-backward methods with inertial effects. Set-Valued Var. Anal. 2019, 1–25.
47. Alves, M.M.; Eckstein, J.; Geremia, M.; Melo, J.G. Relative-error inertial-relaxed inexact versions of Douglas–Rachford and ADMM splitting algorithms. Comput. Optim. Appl. 2020, 75, 389–422.
48. Xia, Y.; Wang, J. A general methodology for designing globally convergent optimization neural networks. IEEE Trans. Neural Netw. 1998, 9, 1331–1343.
49. Lorenz, D.A.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
50. Takahashi, W. Nonlinear Functional Analysis—Fixed Point Theory and Its Applications; Springer: New York, NY, USA, 2000.
51. Brezis, H. Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert; North-Holland: Amsterdam, The Netherlands, 1973.
52. Ofoedu, E. Strong convergence theorem for uniformly L-Lipschitzian asymptotically pseudocontractive mapping in real Banach space. J. Math. Anal. Appl. 2006, 321, 722–728.
53. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
54. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
55. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071.
56. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353.
57. Moudafi, A.; Oliny, M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454.
58. Iiduka, H.; Takahashi, W. Strong convergence theorems for nonexpansive nonself-mappings and inverse-strongly-monotone mappings. J. Convex Anal. 2004, 11, 69–79.
59. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Original test images. (a) Butterfly, (b) Lena, and (c) Pepper.

Figure 2. Degraded and restored (a–d) Butterfly images and (e–h) enlarged Butterfly images by the various algorithms.

Figure 3. Degraded and restored (a–d) Lena images and (e–h) enlarged Lena images by the various algorithms.

Figure 4. Degraded and restored (a–d) Pepper images and (e–h) enlarged Pepper images by the various algorithms.

Figure 5. Degraded and restored (a–c) Butterfly images and (d–f) enlarged Butterfly images by the various algorithms.

Figure 6. Degraded and restored (a–c) Lena images and (d–f) enlarged Lena images by the various algorithms.

Figure 7. Degraded and restored (a–c) Pepper images and (d–f) enlarged Pepper images by the various algorithms.
Table 1. The ISNR and SSIM values of the compared algorithms.

            Algorithm 1          Moudafi and Oliny    Lorenz and Pock
Images      ISNR       SSIM      ISNR       SSIM      ISNR       SSIM
Butterfly   7.774553   0.9692    7.546909   0.9686    7.587748   0.9688
Lena        7.110084   0.9819    7.126583   0.9813    7.147807   0.9814
Pepper      8.489581   0.9789    8.373034   0.9780    8.354713   0.9779
Table 2. The ISNR and SSIM values of the compared algorithms.

            Equation (43)         Byrne
Images      ISNR        SSIM      ISNR       SSIM
Butterfly   7.7741909   0.9999    5.078051   0.9998
Lena        7.112128    0.9999    5.396904   0.9996
Pepper      8.488140    0.9999    6.127068   0.9997
