Article

Modified Tseng’s Method with Inertial Viscosity Type for Solving Inclusion Problems and Its Application to Image Restoration Problems

by Nattakarn Kaewyong 1 and Kanokwan Sitthithakerngkiet 2,*
1 Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), Wongsawang, Bangsue, Bangkok 10800, Thailand
2 Intelligent and Nonlinear Dynamic Innovations Research Center, Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), Wongsawang, Bangsue, Bangkok 10800, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(10), 1104; https://doi.org/10.3390/math9101104
Submission received: 25 April 2021 / Revised: 5 May 2021 / Accepted: 11 May 2021 / Published: 13 May 2021
(This article belongs to the Section Mathematics and Computer Science)

Abstract: In this paper, we study a monotone inclusion problem in the framework of Hilbert spaces. (1) We introduce a new modified Tseng’s method that combines inertial and viscosity techniques. Our aim is to obtain an algorithm with better performance that can be applied to a broader class of mappings. (2) We prove a strong convergence theorem to approximate a solution to the monotone inclusion problem under some mild conditions. (3) We present a modified version of the proposed iterative scheme for solving convex minimization problems. (4) We present numerical examples for the image restoration problem that illustrate the computational performance of our proposed algorithm.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. The zero-point problem for a monotone operator is defined as follows: find $x \in H$ such that
$0 \in Tx, \quad (1)$
where $T$ is a monotone operator. Over the past decade, many authors have intensively studied the convergence of iterative methods for finding a zero-point of monotone operators in the framework of Hilbert spaces. Many iterative methods have been constructed and studied to solve the zero-point problem (1), since it is connected to various optimization and nonlinear analysis problems, such as variational inequality problems, convex minimization problems, and so on. The proximal point algorithm (PPA) [1], constructed by Martinet in 1970, is well known as the first algorithm for solving problem (1). It is given below:
$x_{n+1} = (I + \lambda_n T)^{-1} x_n, \quad n \geq 1, \quad (2)$
where $I$ is the identity mapping and $\{\lambda_n\}$ is a sequence of positive real numbers. After Martinet [1] proposed the proximal point algorithm (PPA), many algorithms were developed to solve the zero-point problem. The reader can see [2,3,4] and the references therein for more details.
In this paper, we focus on the following monotone inclusion problem:
find $x \in H$ such that $0 \in (A + B)x, \quad (3)$
where $A : H \to H$ and $B : H \to 2^H$ are single-valued and multi-valued mappings, respectively. The monotone inclusion problem (3) can be written as the zero-point problem (1) by setting $T = A + B$. However, the full resolvent $(I + \lambda T)^{-1}$ is generally much harder to compute than the separate resolvents $(I + \lambda A)^{-1}$ and $(I + \lambda B)^{-1}$.
Because the monotone inclusion problem (3) is at the core of image processing and many other mathematical problems [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22], many researchers have proposed and developed iterative methods for solving it. The forward-backward splitting method, constructed and studied by Lions and Mercier [23] in 1979, is the most popular algorithm for solving problem (3). It is defined by the iteration
$x_{n+1} = (I + \lambda B)^{-1}(I - \lambda A) x_n, \quad n \geq 1, \quad (4)$
where $x_1 \in H$ is arbitrarily chosen and $\lambda > 0$. In algorithm (4), the operators $A$ and $B$ are usually called the forward operator and the backward operator, respectively. For more details about forward-backward methods that have been constructed for solving the inclusion problem (3), the reader is directed to [2,9,11,24,25,26,27,28,29,30,31,32].
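For concreteness, the iteration (4) can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from the cited works; the callables `forward_A` and `resolvent_B` are placeholder names that the user must supply.

```python
import numpy as np

def forward_backward(x, forward_A, resolvent_B, lam, n_iters=100):
    """Forward-backward splitting (4): x_{n+1} = (I + lam*B)^{-1}(I - lam*A) x_n.

    forward_A   : callable x -> A(x), the single-valued forward operator
    resolvent_B : callable (v, lam) -> (I + lam*B)^{-1} v, the backward step
    lam         : fixed step size lambda > 0
    """
    for _ in range(n_iters):
        x = resolvent_B(x - lam * forward_A(x), lam)
    return x
```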
To speed up the convergence rate of iterative methods, Polyak [33] introduced inertial extrapolation as an acceleration technique in 1964; this method is well known as the heavy ball method. Polyak [33] used his algorithm to solve the smooth convex minimization problem. In recent years, many researchers have used this concept intensively, combining their algorithms with an inertial term to accelerate convergence.
In 2001, Alvarez and Attouch [34] constructed an algorithm for finding zeros of monotone operators that combines the heavy ball method with the proximal point algorithm. The algorithm is defined as follows:
$w_n = x_n + \theta_n (x_n - x_{n-1}),$
$x_{n+1} = (I + \lambda_n B)^{-1} w_n, \quad n \geq 1, \quad (5)$
where $x_0, x_1 \in H$ are arbitrarily chosen, $\{\theta_n\} \subset [0, 1)$, and $\{\lambda_n\}$ is nondecreasing with
$\sum_{n=1}^{\infty} \theta_n \|x_n - x_{n-1}\|^2 < \infty. \quad (6)$
They proved that the sequence $\{x_n\}$ generated by algorithm (5) converges weakly to a zero-point of the monotone operator $B$.
Moudafi and Oliny [35] studied the monotone inclusion problem (3). They constructed the inertial proximal point algorithm, which combines the heavy ball method with the proximal point algorithm:
$w_n = x_n + \theta_n (x_n - x_{n-1}),$
$x_{n+1} = (I + \lambda_n B)^{-1}(x_n - \lambda_n A w_n), \quad n \geq 1, \quad (7)$
where $x_0, x_1 \in H$ are arbitrarily chosen, and $A : H \to H$ and $B : H \to 2^H$ are single-valued and multi-valued mappings, respectively. It was proven that if $\lambda_n < 2/L$, where $L$ is the Lipschitz constant of the monotone operator $A$, and condition (6) holds, then the sequence $\{x_n\}$ generated by algorithm (7) converges weakly to a solution of the inclusion problem (3). Moreover, it has been observed that for $\theta_n > 0$, algorithm (7) cannot be written as the forward-backward splitting method (4), since the operator $A$ is not evaluated at the point $x_n$.
Lorenz and Pock [36] also studied the monotone inclusion problem (3). They proposed the inertial forward-backward algorithm for monotone operators, which is defined as
$w_n = x_n + \theta_n (x_n - x_{n-1}),$
$x_{n+1} = (I + \lambda_n B)^{-1}(I - \lambda_n A) w_n, \quad n \geq 1, \quad (8)$
where $x_0, x_1 \in H$ are arbitrarily chosen, and $A : H \to H$ and $B : H \to 2^H$ are single-valued and multi-valued mappings, respectively. They proved that, under suitable conditions, the sequence $\{x_n\}$ generated by algorithm (8) converges weakly to a solution of the monotone inclusion problem (3).
Kitkuan and Kumam [26] combined the forward-backward splitting method (4) with the viscosity approximation method [37] for solving the monotone inclusion problem (3). The resulting method, called the inertial viscosity forward-backward splitting algorithm, is defined as
$w_n = x_n + \theta_n (x_n - x_{n-1}),$
$x_{n+1} = \alpha_n \nabla h(x_n) + (1 - \alpha_n)(I + \lambda_n B)^{-1}(I - \lambda_n A) w_n, \quad n \geq 1, \quad (9)$
where $x_0, x_1 \in H$ are arbitrarily chosen, $h : H \to \mathbb{R}$ is a differentiable function whose gradient $\nabla h$ is a contraction with constant $k \in (0, 1)$, and $A : H \to H$ and $B : H \to 2^H$ are an inverse strongly monotone operator and a maximal monotone operator, respectively. They proved that the sequence $\{x_n\}$ generated by algorithm (9) converges strongly to a solution of the monotone inclusion problem (3) under suitable conditions.
Besides algorithms based on the heavy ball idea, there are other ways to solve the monotone inclusion problem. Tseng [24] introduced a powerful iterative method for solving (3), called the modified forward-backward splitting method; in short, it is known as Tseng’s splitting algorithm. Let $C$ be a closed and convex subset of a real Hilbert space $H$. Tseng’s splitting algorithm is defined as
$y_n = (I + \lambda_n B)^{-1}(I - \lambda_n A) x_n,$
$x_{n+1} = P_C(y_n - \lambda_n (A y_n - A x_n)), \quad n \geq 1, \quad (10)$
where $x_1 \in H$ is arbitrarily chosen and $\lambda_n$ is chosen as the largest $\lambda \in \{\delta, \delta l, \delta l^2, \ldots\}$ satisfying $\lambda \|A y_n - A x_n\| \leq \mu \|x_n - y_n\|$, with $\delta > 0$, $l \in (0, 1)$, $\mu \in (0, 1)$, and where $P_C$ is the metric projection onto the closed convex subset $C$ of $H$. However, Tseng’s splitting algorithm obtains only weak convergence in real Hilbert spaces.
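A single step of (10), including the Armijo-type backtracking rule for $\lambda_n$, can be sketched as follows. This is an illustration under our own simplifying assumption that $C = H$, so that $P_C$ is the identity; `forward_A` and `resolvent_B` are placeholder callables.

```python
import numpy as np

def tseng_step(x, forward_A, resolvent_B, delta=1.0, l=0.5, mu=0.9,
               max_backtracks=50):
    """One iteration of Tseng's splitting (10).

    lam is taken as the largest value in {delta, delta*l, delta*l^2, ...}
    satisfying lam * ||A(y) - A(x)|| <= mu * ||x - y||, where
    y = (I + lam*B)^{-1}(x - lam*A(x)).
    For simplicity, C = H here, so the projection P_C is the identity.
    """
    Ax = forward_A(x)
    lam = delta
    for _ in range(max_backtracks):
        y = resolvent_B(x - lam * Ax, lam)       # forward-backward trial point
        Ay = forward_A(y)
        if lam * np.linalg.norm(Ay - Ax) <= mu * np.linalg.norm(x - y):
            break                                # step size accepted
        lam *= l                                 # backtrack
    return y - lam * (Ay - Ax)                   # Tseng correction step
```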
Recently, Dilshad, Aljohani, and Akram [38] introduced and studied an iterative scheme to approximate a common solution of a split variational inclusion and a fixed-point problem for a finite collection of nonexpansive mappings. Let $H_1$ and $H_2$ be two real Hilbert spaces and $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^* : H_2 \to H_1$. Their algorithm is
$v_n = (I + \lambda G_1)^{-1}\big(I + \mu A^*((I + \lambda G_2)^{-1} - I)A\big) x_n,$
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) T_{[n+1]} v_n, \quad n \geq 1, \quad (11)$
where $x_1 \in H_1$ is arbitrarily chosen, $\lambda > 0$, $\{\alpha_n\} \subset (0, 1)$, $f$ is a contraction with constant $k \in (0, 1)$, and $T_{[n+1]}$ is drawn from a finite collection of nonexpansive mappings. For more details about the split variational inclusion for other classes of mappings and methods for solving it, the reader is directed to [39,40,41,42].
Based on the above ideas, we introduce a new modified Tseng’s method that combines inertial and viscosity techniques to solve inclusion problems in the framework of real Hilbert spaces. Our aim is to obtain an algorithm with better performance that can be applied to a broader class of mappings. Furthermore, we present a modified version of the proposed iterative scheme for solving convex minimization problems. Finally, we illustrate the computational performance of our proposed algorithms in experiments on the image restoration problem.
The outline of this paper is as follows: definitions and lemmas used to analyze our algorithm are given in Section 2. In Section 3, we present the convergence analysis of our main theorem. Finally, we conduct experiments in which we use our proposed algorithm to solve the image restoration problem.

2. Preliminaries

In this section, we present some notation that is used throughout this work. Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$, and let $C$ be a nonempty closed and convex subset of $H$. We write $x_n \to x$ and $x_n \rightharpoonup x$ for the strong and weak convergence, respectively, of a sequence $\{x_n\}_{n=1}^{\infty}$ to $x \in H$. For any point $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C(x)$, such that $\|x - P_C(x)\| \leq \|x - y\|$ for all $y \in C$. The operator $P_C$ is called the metric projection of $H$ onto $C$. It is well known that the metric projection $P_C$ is nonexpansive and satisfies
$\langle x - P_C(x), P_C(x) - y \rangle \geq 0,$
for all $y \in C$. Next, we present several properties of operators and set-valued mappings that will be helpful later. For a mapping $T : H \to H$, $\mathrm{Fix}(T)$ denotes the set of fixed points of $T$, i.e.,
$\mathrm{Fix}(T) = \{x \in H \mid Tx = x\}.$
Proposition 1.
Let $H$ be a real Hilbert space and $T : H \to H$ be a mapping.
1. $T$ is called a nonexpansive mapping if
$\|Tx - Ty\| \leq \|x - y\|,$
for all $x, y \in H$.
2. $T$ is called a firmly nonexpansive mapping if
$\|Tx - Ty\|^2 + \|(I - T)x - (I - T)y\|^2 \leq \|x - y\|^2,$
for all $x, y \in H$.
Proposition 2
([43]). Let $T : H \to H$ be a mapping. Then, the following items are equivalent:
(i) $T$ is firmly nonexpansive;
(ii) $(I - T)$ is firmly nonexpansive;
(iii) $\|Tx - Ty\|^2 \leq \langle x - y, Tx - Ty \rangle$, for all $x, y \in H$.
It is well known that the metric projection $P_C$ is a firmly nonexpansive mapping, i.e.,
$\|P_C(x) - P_C(y)\|^2 \leq \langle P_C(x) - P_C(y), x - y \rangle,$
for all $x, y \in H$.
For convenience, we let $B : H \to 2^H$ be a set-valued mapping and write
$J_{\lambda}^{B} = (I + \lambda B)^{-1}$
for the resolvent of the mapping $B$, where $\lambda > 0$. It is well known that $J_{\lambda}^{B}$ is single-valued with domain $D(J_{\lambda}^{B}) = H$, and that $J_{\lambda}^{B}$ is a firmly nonexpansive mapping for all $\lambda > 0$.
Definition 1.
Let $B : H \to 2^H$ be a set-valued mapping with graph $G(B)$. $B$ is called monotone if
$\langle x - y, u - v \rangle \geq 0,$
for all $x, y \in H$, $u \in Bx$, and $v \in By$. A monotone mapping $B : H \to 2^H$ is maximal if its graph $G(B)$ is not properly contained in the graph of any other monotone mapping.
Definition 2
([44]). Let $T : H \to H$ be a mapping.
1. $T$ is called $L$-Lipschitz continuous if there exists a non-negative real number $L \geq 0$ such that
$\|Tx - Ty\| \leq L \|x - y\|,$
for all $x, y \in H$.
2. $T$ is called $\alpha$-inverse strongly monotone if there exists a positive real number $\alpha$ such that
$\langle x - y, Tx - Ty \rangle \geq \alpha \|Tx - Ty\|^2,$
for all $x, y \in H$. Moreover, if $T$ is $\alpha$-inverse strongly monotone, then $T$ is $1/\alpha$-Lipschitz continuous.
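The last assertion follows directly from the Cauchy–Schwarz inequality: if $T$ is $\alpha$-inverse strongly monotone, then $\alpha \|Tx - Ty\|^2 \leq \langle x - y, Tx - Ty \rangle \leq \|x - y\| \, \|Tx - Ty\|$, and dividing by $\alpha \|Tx - Ty\|$ (when $Tx \neq Ty$) gives $\|Tx - Ty\| \leq \frac{1}{\alpha}\|x - y\|$.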
Lemma 1
([43]). Let $H$ be a real Hilbert space. Then, the following relations hold:
(i) $\|x - y\|^2 = \|x\|^2 - \|y\|^2 - 2\langle x - y, y \rangle$, for all $x, y \in H$;
(ii) $\|x + y\|^2 \leq \|x\|^2 + 2\langle y, x + y \rangle$, for all $x, y \in H$;
(iii) $\|\alpha x + (1 - \alpha) y\|^2 = \alpha \|x\|^2 + (1 - \alpha)\|y\|^2 - \alpha(1 - \alpha)\|x - y\|^2$, for all $x, y \in H$ and $\alpha \in [0, 1]$.
Lemma 2
([45]). Let $H$ be a real Hilbert space. Let $A : H \to H$ be an $\alpha$-inverse strongly monotone operator and $B : H \to 2^H$ be a maximal monotone operator. Then, for $\lambda > 0$, the following relation holds:
$\mathrm{Fix}(J_{\lambda}^{B}(I - \lambda A)) = (A + B)^{-1}(0).$
Lemma 3
([46]). Let $\{a_n\}$ be a sequence of non-negative real numbers that satisfies the relation
$a_{n+1} \leq (1 - \alpha_n) a_n + \alpha_n \sigma_n + \gamma_n, \quad n \geq 0,$
where
(i) $\{\alpha_n\} \subset [0, 1]$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(ii) $\limsup_{n \to \infty} \sigma_n \leq 0$;
(iii) $\gamma_n \geq 0$ for all $n \geq 1$ and $\sum_{n=1}^{\infty} \gamma_n < \infty$.
Then, $a_n \to 0$ as $n \to \infty$.
Lemma 4
([47]). Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\Gamma_{n_i}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_i} < \Gamma_{n_i + 1}$ for all $i \geq 0$. Additionally, consider the sequence of integers $\{\eta(n)\}_{n \geq n_0}$ defined by
$\eta(n) = \max\{k \leq n \mid \Gamma_k \leq \Gamma_{k+1}\}.$
Then, $\{\eta(n)\}_{n \geq n_0}$ is a nondecreasing sequence that satisfies $\lim_{n \to \infty} \eta(n) = \infty$ and, for all $n \geq n_0$,
$\max\{\Gamma_{\eta(n)}, \Gamma_n\} \leq \Gamma_{\eta(n)+1}.$

3. Results

In this section, we present a convergence analysis of the proposed algorithm, which generates sequences that converge strongly to a solution to the monotone inclusion problem (3). Throughout this section, Ω = ( A + B ) 1 ( 0 ) is used to denote the set of all the solutions to the monotone inclusion problem (3). We use the following conditions for the analysis of our method.
Assumption 1.
(A1) $\Omega$ is nonempty.
(A2) $A$ is $L$-Lipschitz continuous and monotone, and $B$ is maximal monotone.
(A3) $h : H \to H$ is $\sigma$-Lipschitz continuous with $\sigma \in [0, 1)$, i.e., $h$ is a contraction.
(A4) $\{\alpha_n\}$ is a sequence in $(0, 1)$ such that $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$.
Remark 1.
Observe that if, in accordance with Assumption 1 (A4) and Algorithm 1, $\{\theta_n\} \subset [0, 1)$ is chosen such that
(1) $\lim_{n \to \infty} \frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$,
then we also have
(2) $\lim_{n \to \infty} \theta_n \|x_n - x_{n-1}\| = 0$.
Algorithm 1 An iterative algorithm for solving inclusion problems
Initialization: Given $\lambda_1 > 0$ and $\mu \in (0, 1)$. Let $x_0, x_1 \in H$ be arbitrary. Choose $\{\alpha_n\}$ and $h$ to satisfy Assumption 1 and $\{\theta_n\}$ to satisfy Remark 1.
Iterative Step: Given the current iterate $x_n$, calculate the next iterate as follows:
Step 1. Compute
$w_n = x_n + \theta_n (x_n - x_{n-1}),$
$y_n = J_{\lambda_n}^{B}(I - \lambda_n A) w_n,$
$s_n = y_n - \lambda_n (A y_n - A w_n), \quad (12)$
and
$x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n) s_n. \quad (13)$
Step 2. Update
$\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{\mu \|w_n - y_n\|}{\|A w_n - A y_n\|}, \lambda_n\right\}, & \text{if } A w_n - A y_n \neq 0, \\ \lambda_n, & \text{otherwise}. \end{cases} \quad (14)$
Replace $n$ with $n + 1$ and then repeat Step 1.
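To fix ideas, the following is a minimal NumPy sketch of Algorithm 1. It is an illustration, not the authors’ reference implementation; the callables `A`, `resolvent_B`, and `h`, as well as the schedules `alpha` and `theta`, are placeholders that the user must supply in accordance with Assumption 1 and Remark 1.

```python
import numpy as np

def algorithm1(x0, x1, A, resolvent_B, h, alpha, theta,
               lam1=1.0, mu=0.5, n_iters=200):
    """Inertial viscosity Tseng-type method (Algorithm 1).

    A           : callable, the L-Lipschitz monotone operator
    resolvent_B : callable (v, lam) -> (I + lam*B)^{-1} v
    h           : callable, sigma-contraction (viscosity mapping)
    alpha, theta: callables n -> alpha_n, theta_n
    """
    x_prev, x, lam = x0, x1, lam1
    for n in range(1, n_iters + 1):
        w = x + theta(n) * (x - x_prev)          # inertial extrapolation
        Aw = A(w)
        y = resolvent_B(w - lam * Aw, lam)       # forward-backward step
        Ay = A(y)
        s = y - lam * (Ay - Aw)                  # Tseng correction, cf. (12)
        x_prev, x = x, alpha(n) * h(x) + (1 - alpha(n)) * s  # viscosity step (13)
        # Step 2: adaptive stepsize update (14)
        denom = np.linalg.norm(Aw - Ay)
        if denom > 0:
            lam = min(mu * np.linalg.norm(w - y) / denom, lam)
    return x
```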
Next, we provide a useful lemma for analyzing our main theorem.
Lemma 5
([11]). The sequence $\{\lambda_n\}$ generated by (14) is non-increasing and
$\lim_{n \to \infty} \lambda_n = \lambda \geq \min\left\{\lambda_1, \frac{\mu}{L}\right\}.$
Lemma 6
([11]). Assume that Assumption 1 holds, and let $\{s_n\}$ be the sequence generated by Algorithm 1. Then,
$\|s_n - p\|^2 \leq \|w_n - p\|^2 - \left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right)\|w_n - y_n\|^2,$
for all $p \in \Omega$, and
$\|s_n - y_n\| \leq \frac{\mu \lambda_n}{\lambda_{n+1}}\|w_n - y_n\|.$
Theorem 1.
Assume that Assumption 1 (A1)–(A4) holds. Let $\{x_n\}$ be the sequence generated by Algorithm 1. Then $x_n \to p$, where $p = P_\Omega h(p)$.
Proof. 
Step 1. We prove that $\{x_n\}$, $\{w_n\}$, $\{y_n\}$, and $\{s_n\}$ are bounded sequences. Let $p = P_\Omega h(p)$. From [11], we have
$\|s_n - p\|^2 \leq \|w_n - p\|^2 - \left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right)\|w_n - y_n\|^2, \quad (17)$
and
$\|s_n - y_n\| \leq \frac{\mu \lambda_n}{\lambda_{n+1}}\|w_n - y_n\|. \quad (18)$
Moreover, we observe that
$\|y_n - p\| = \|J_{\lambda_n}^{B}(I - \lambda_n A) w_n - p\| \leq \|w_n - p\|,$
and
$\|w_n - p\| = \|x_n + \theta_n (x_n - x_{n-1}) - p\| \leq \|x_n - p\| + \theta_n \|x_n - x_{n-1}\|.$
Consider
$\|x_{n+1} - p\| = \|\alpha_n h(x_n) + (1 - \alpha_n) s_n - p\|$
$\leq \alpha_n \|h(x_n) - p\| + (1 - \alpha_n)\|s_n - p\|$
$\leq \alpha_n \|h(x_n) - h(p)\| + \alpha_n \|h(p) - p\| + (1 - \alpha_n)\|s_n - p\|$
$\leq \alpha_n \sigma \|x_n - p\| + \alpha_n \|h(p) - p\| + (1 - \alpha_n)\|w_n - p\|$
$\leq (1 - \alpha_n (1 - \sigma))\|x_n - p\| + \alpha_n (1 - \sigma)\frac{\|h(p) - p\|}{1 - \sigma} + (1 - \alpha_n (1 - \sigma))\theta_n \|x_n - x_{n-1}\|$
$\leq \max\left\{\|x_n - p\| + \theta_n \|x_n - x_{n-1}\|, \frac{\|h(p) - p\|}{1 - \sigma}\right\}$
$\leq \cdots \leq \max\left\{\|x_1 - p\| + \theta_1 \|x_1 - x_0\|, \frac{\|h(p) - p\|}{1 - \sigma}\right\}.$
Thus, $\{x_n\}$ is bounded, and consequently $\{s_n\}$, $\{y_n\}$, and $\{w_n\}$ are bounded as well.
Next, we observe that
$\|w_n - p\|^2 = \|x_n + \theta_n (x_n - x_{n-1}) - p\|^2 = \|x_n - p\|^2 + 2\theta_n \langle x_n - x_{n-1}, x_n - p \rangle + \theta_n^2 \|x_n - x_{n-1}\|^2. \quad (22)$
It follows that
$\|(x_n - x_{n-1}) - (x_n - p)\|^2 = \|x_n - x_{n-1}\|^2 - 2\langle x_n - x_{n-1}, x_n - p \rangle + \|x_n - p\|^2,$
and, since the left-hand side equals $\|x_{n-1} - p\|^2$,
$2\theta_n \langle x_n - x_{n-1}, x_n - p \rangle = \theta_n \|x_n - x_{n-1}\|^2 + \theta_n (\|x_n - p\|^2 - \|x_{n-1} - p\|^2). \quad (23)$
Then, by combining (22) with (23), we find that
$\|w_n - p\|^2 = \|x_n - p\|^2 + \theta_n \|x_n - x_{n-1}\|^2 + \theta_n (\|x_n - p\|^2 - \|x_{n-1} - p\|^2) + \theta_n^2 \|x_n - x_{n-1}\|^2$
$\leq \|x_n - p\|^2 + \theta_n (1 + \theta_n)\|x_n - x_{n-1}\|^2 + \theta_n (\|x_n - p\|^2 - \|x_{n-1} - p\|^2)$
$\leq \|x_n - p\|^2 + 2\theta_n \|x_n - x_{n-1}\|^2 + \theta_n (\|x_n - p\|^2 - \|x_{n-1} - p\|^2).$
Next, by combining (17) with the above inequality, we find that
$\|s_n - p\|^2 \leq \|x_n - p\|^2 + 2\theta_n \|x_n - x_{n-1}\|^2 + \theta_n (\|x_n - p\|^2 - \|x_{n-1} - p\|^2) - \left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right)\|w_n - y_n\|^2.$
Consider
$\|x_{n+1} - p\|^2 = \|\alpha_n h(x_n) + (1 - \alpha_n) s_n - p\|^2 = \langle \alpha_n h(x_n) + (1 - \alpha_n) s_n - p, x_{n+1} - p \rangle$
$= \alpha_n \langle h(x_n) - p, x_{n+1} - p \rangle + (1 - \alpha_n)\langle s_n - p, x_{n+1} - p \rangle$
$\leq \frac{\alpha_n}{2}\left(\|h(x_n) - h(p)\|^2 + \|x_{n+1} - p\|^2\right) + \alpha_n \langle h(p) - p, x_{n+1} - p \rangle + \frac{1 - \alpha_n}{2}\left(\|s_n - p\|^2 + \|x_{n+1} - p\|^2\right)$
$\leq \frac{\alpha_n \sigma^2}{2}\|x_n - p\|^2 + \alpha_n \langle h(p) - p, x_{n+1} - p \rangle + \frac{1 - \alpha_n}{2}\Big(\|x_n - p\|^2 + 2\theta_n \|x_n - x_{n-1}\|^2 + \theta_n (\|x_n - p\|^2 - \|x_{n-1} - p\|^2) - \big(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\big)\|w_n - y_n\|^2\Big) + \frac{1}{2}\|x_{n+1} - p\|^2$
$\leq \frac{1 - \alpha_n (1 - \sigma^2)}{2}\|x_n - p\|^2 + \frac{1}{2}\|x_{n+1} - p\|^2 + \alpha_n \langle h(p) - p, x_{n+1} - p \rangle + (1 - \alpha_n)\theta_n \|x_n - x_{n-1}\|^2 + \frac{1 - \alpha_n}{2}\theta_n (\|x_n - p\|^2 - \|x_{n-1} - p\|^2) - \frac{1 - \alpha_n}{2}\left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right)\|w_n - y_n\|^2.$
It follows that
$\|x_{n+1} - p\|^2 \leq (1 - \alpha_n (1 - \sigma^2))\|x_n - p\|^2 + \alpha_n (1 - \sigma^2)\frac{2}{1 - \sigma^2}\langle h(p) - p, x_{n+1} - p \rangle + 2\theta_n (1 - \alpha_n)\|x_n - x_{n-1}\|^2 + (1 - \alpha_n)\theta_n (\|x_n - p\|^2 - \|x_{n-1} - p\|^2) - (1 - \alpha_n)\left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right)\|w_n - y_n\|^2. \quad (26)$
Therefore, we obtain
$(1 - \alpha_n)\left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right)\|w_n - y_n\|^2 \leq \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + 2(1 - \alpha_n)\theta_n \|x_n - x_{n-1}\|^2 + \alpha_n (1 - \sigma^2)\frac{2}{1 - \sigma^2}\langle h(p) - p, x_{n+1} - p \rangle + (1 - \alpha_n)\theta_n (\|x_n - p\|^2 - \|x_{n-1} - p\|^2). \quad (27)$
Moreover, by using (26), we obtain
$\|x_{n+1} - p\|^2 \leq (1 - \alpha_n (1 - \sigma^2))\|x_n - p\|^2 + \alpha_n (1 - \sigma^2)\Big(\frac{2}{1 - \sigma^2}\langle h(p) - p, x_{n+1} - p \rangle + \frac{2(1 - \alpha_n)}{1 - \sigma^2}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|^2 + \frac{1 - \alpha_n}{1 - \sigma^2}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|\left(\|x_n - p\| + \|x_{n-1} - p\|\right)\Big). \quad (28)$
Next, we consider two possible cases to show that $\|x_n - p\| \to 0$.
Case 1. Suppose that the sequence $\Gamma_n = \|x_n - p\|^2$ is eventually non-increasing; that is, there exists $N \in \mathbb{N}$ such that $\Gamma_{n+1} \leq \Gamma_n$ for each $n \geq N$. Therefore, $\{\Gamma_n\}$ converges.
Since $\lim_{n \to \infty} \alpha_n = 0$ and $\lim_{n \to \infty}\left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right) = 1 - \mu^2 > 0$, by using Remark 1, we obtain from (27) that
$\lim_{n \to \infty} \|w_n - y_n\| = 0.$
Thus, from (18), we immediately obtain
$\lim_{n \to \infty} \|s_n - y_n\| = 0.$
If we consider
$\|s_n - w_n\| \leq \|s_n - y_n\| + \|w_n - y_n\|,$
then
$\lim_{n \to \infty} \|s_n - w_n\| = 0.$
Since $\{x_n\}$ is bounded, we can take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup q \in H$. By setting $T_n = J_{\lambda_n}^{B}(I - \lambda_n A)$, we have
$\|(I - T_n)q\|^2 = \langle (I - T_n)q, (I - T_n)q \rangle = \langle (I - T_n)q, q - w_{n_k} \rangle + \langle (I - T_n)q, w_{n_k} - T_n w_{n_k} \rangle + \langle (I - T_n)q, T_n w_{n_k} - T_n q \rangle.$
Using the facts that $\|x_n - w_n\| \to 0$ and $\|w_n - T_n w_n\| = \|w_n - y_n\| \to 0$, we obtain
$\lim_{n \to \infty} \|(I - T_n)q\| = 0.$
Therefore, $q \in \Omega$ by Lemma 2. We can then obtain
$\limsup_{n \to \infty} \frac{2}{1 - \sigma^2}\langle h(p) - p, x_{n+1} - p \rangle = \limsup_{k \to \infty} \frac{2}{1 - \sigma^2}\langle h(p) - p, x_{n_k} - p \rangle = \frac{2}{1 - \sigma^2}\langle h(p) - p, q - p \rangle \leq 0, \quad (35)$
where the last inequality follows from the characterization of the metric projection, since $q \in \Omega$ and $p = P_\Omega h(p)$. By applying Lemma 3 to (28), using (35) and the conditions on all the parameters, we can claim that $x_n \to p = P_\Omega h(p)$.
Case 2. Suppose that the sequence $\Gamma_n = \|x_n - p\|^2$ does not decrease at infinity. Let $\eta : \mathbb{N} \to \mathbb{N}$ be the mapping defined for all $n \geq N$ (where $N$ is large enough) by
$\eta(n) := \max\{k \leq n : \Gamma_k \leq \Gamma_{k+1}\}.$
Then, $\eta(n) \to \infty$ as $n \to \infty$ and $\Gamma_{\eta(n)} \leq \Gamma_{\eta(n)+1}$ for all $n \geq N$. By using (27) and the conditions on the parameters, for each $n \geq N$ we have
$\|w_{\eta(n)} - y_{\eta(n)}\|^2 \leq \Gamma_{\eta(n)} - \Gamma_{\eta(n)+1} + 2(1 - \alpha_{\eta(n)})\theta_{\eta(n)}\|x_{\eta(n)} - x_{\eta(n)-1}\|^2 + \alpha_{\eta(n)}(1 - \sigma^2)\frac{2}{1 - \sigma^2}\langle h(p) - p, x_{\eta(n)+1} - p \rangle + (1 - \alpha_{\eta(n)})\theta_{\eta(n)}(\Gamma_{\eta(n)} - \Gamma_{\eta(n)-1}).$
Since $\alpha_{\eta(n)} \to 0$, $\Gamma_{\eta(n)} - \Gamma_{\eta(n)+1} \leq 0$, and Remark 1 holds, we can conclude that
$\lim_{n \to \infty} \|w_{\eta(n)} - y_{\eta(n)}\| = 0.$
Moreover, by following the proof of Case 1, we obtain
$\limsup_{n \to \infty} \langle h(p) - p, x_{\eta(n)+1} - p \rangle \leq 0. \quad (38)$
Using (28), we have
$\Gamma_{\eta(n)+1} \leq (1 - \alpha_{\eta(n)}(1 - \sigma^2))\Gamma_{\eta(n)} + \alpha_{\eta(n)}(1 - \sigma^2)\Big(\frac{2}{1 - \sigma^2}\langle h(p) - p, x_{\eta(n)+1} - p \rangle + \frac{2(1 - \alpha_{\eta(n)})}{1 - \sigma^2}\frac{\theta_{\eta(n)}}{\alpha_{\eta(n)}}\|x_{\eta(n)} - x_{\eta(n)-1}\|^2 + \frac{1 - \alpha_{\eta(n)}}{1 - \sigma^2}\frac{\theta_{\eta(n)}}{\alpha_{\eta(n)}}\|x_{\eta(n)} - x_{\eta(n)-1}\|\left(\|x_{\eta(n)} - p\| + \|x_{\eta(n)-1} - p\|\right)\Big). \quad (39)$
By applying Lemma 3 to (39), using (38) and the conditions on all the parameters, we can claim that
$\lim_{n \to \infty} \|x_{\eta(n)+1} - p\| = 0.$
Using Lemma 4, we obtain
$0 \leq \Gamma_n \leq \max\{\Gamma_{\eta(n)}, \Gamma_n\} \leq \Gamma_{\eta(n)+1} \to 0 \quad \text{as } n \to \infty.$
Therefore, $x_n \to p = P_\Omega h(p)$, which completes the proof.    □

4. Applications and Numerical Results

Let $F : H \to \mathbb{R}$ be a convex, differentiable function and $G : H \to \mathbb{R}$ be a convex, lower-semicontinuous, and nonsmooth function. Consider the convex minimization problem of finding $\bar{x} \in H$ such that
$F(\bar{x}) + G(\bar{x}) = \min_{x \in H}\{F(x) + G(x)\}. \quad (41)$
Using Fermat’s rule, an equivalent form of problem (41) is obtained as the inclusion
$0 \in \nabla F(\bar{x}) + \partial G(\bar{x}),$
where $\partial G$ is the subdifferential of $G$, which is maximal monotone (for more detail, we direct the reader to [48]), and $\nabla F$ is the gradient of $F$, which is $L$-Lipschitz continuous and hence $1/L$-inverse strongly monotone [49]. By setting $A = \nabla F$ and $B = \partial G$, we obtain the following algorithm and theorem:
Algorithm 2 An iterative algorithm for solving convex minimization problems
Initialization: Given $\lambda_1 > 0$ and $\mu \in (0, 1)$. Let $x_0, x_1 \in H$ be arbitrary. Choose $\{\alpha_n\}$ and $h$ to satisfy Assumption 1 and $\{\theta_n\}$ to satisfy Remark 1.
Iterative Step: Given the current iterate $x_n$, calculate the next iterate as follows:
Step 1. Compute
$w_n = x_n + \theta_n (x_n - x_{n-1}),$
$y_n = J_{\lambda_n}^{\partial G}(I - \lambda_n \nabla F) w_n,$
$s_n = y_n - \lambda_n (\nabla F(y_n) - \nabla F(w_n)),$
and
$x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n) s_n.$
Step 2. Update
$\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{\mu \|w_n - y_n\|}{\|\nabla F(w_n) - \nabla F(y_n)\|}, \lambda_n\right\}, & \text{if } \nabla F(w_n) - \nabla F(y_n) \neq 0, \\ \lambda_n, & \text{otherwise}. \end{cases}$
Replace $n$ with $n + 1$ and then repeat Step 1.
Theorem 2.
Assume that Assumption 1 (A1)–(A4) holds. Let $\{x_n\}$ be the sequence generated by Algorithm 2. Then $x_n \to p$, where $p = P_\Omega h(p)$.
In this paper, we focus on the topic of image restoration. The image restoration problem can be formulated as the inversion of the following model:
$y = Ax + b, \quad (43)$
where $x \in \mathbb{R}^{n \times 1}$ is the original image, $y \in \mathbb{R}^{m \times 1}$ is the observed image, $b$ is additive noise, and $A \in \mathbb{R}^{m \times n}$ is the blurring matrix. To solve problem (43), we can transform it into the least squares minimization problem
$\min_x \frac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1, \quad (44)$
where $\lambda > 0$ is a regularization parameter and, by a slight abuse of notation, $b$ now denotes the observed image. We set $G(x) = \|x\|_1$, $F(x) = \frac{1}{2}\|Ax - b\|_2^2$, and $\lambda_1 = 0.001$. The Lipschitz continuous gradient of $F$ has the form
$\nabla F(x) = A^T (Ax - b),$
where $A^T$ is the transpose of the operator $A$. An iteration is now used to find the solution of the following convex minimization problem: find $x \in \mathbb{R}^n$ such that
$x \in \arg\min_x \left\{\frac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1\right\}, \quad (45)$
where $A$ is a bounded linear operator and $b$ is the degraded image. Therefore, we use Theorem 2 to solve (45) by setting $h(x) = \frac{x}{12}$, $\theta_n = \frac{70n - 9}{100n}$, and $\alpha_n = \frac{1}{1000n + 1}$. Next, since $G(x) = \|x\|_1$, we immediately know from [50] that
$(I + \lambda \partial G)^{-1}(x) = \left(\max\{|x_1| - \lambda, 0\}\,\mathrm{sign}(x_1),\ \max\{|x_2| - \lambda, 0\}\,\mathrm{sign}(x_2),\ \ldots,\ \max\{|x_n| - \lambda, 0\}\,\mathrm{sign}(x_n)\right).$
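In code, this resolvent is the familiar componentwise soft-thresholding operator. A short sketch (ours, for illustration) follows; the commented wiring shows how it could be passed, together with the gradient $\nabla F(x) = A^T(Ax - b)$, to the Algorithm 1 sketch given earlier, assuming `A` and `b` are given NumPy arrays.

```python
import numpy as np

def soft_threshold(x, lam):
    """Resolvent (I + lam * dG)^{-1} of G = ||.||_1: componentwise
    max(|x_i| - lam, 0) * sign(x_i)."""
    return np.maximum(np.abs(x) - lam, 0.0) * np.sign(x)

# Example wiring for problem (45):
# grad_F      = lambda x: A.T @ (A @ x - b)
# resolvent_B = lambda v, lam: soft_threshold(v, lam)
```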
In this part, we present the restoration of images corrupted by the following blurring processes: a motion blur with a motion length of 22 pixels and a motion orientation of 45° (blur matrix $A_1$); a Gaussian blur with a filter size of $9 \times 9$ and a standard deviation of $\sigma = 2$ (blur matrix $A_2$); an out-of-focus blur, i.e., a circular averaging filter with a radius of $r = 5$ (blur matrix $A_3$); and an average blur with a filter size of $9 \times 9$ (blur matrix $A_4$). We use Algorithm 2 to restore the original grey (cameraman) and RGB (baboon) images, which are shown in Figure 1. The blurred grey images and blurred RGB images produced by the blur matrices $A_1$–$A_4$ are shown in Figure 2 and Figure 3, respectively. The reconstructed grey images corrupted by the blur matrices $A_1$–$A_4$ are shown in Figure 4, and the reconstructed RGB images corrupted by the blur matrices $A_1$–$A_4$ are shown in Figure 5.
In order to measure the quality of the restored images, we use the signal-to-noise ratio
$\mathrm{SNR} = 20 \log_{10} \frac{\|x\|_2}{\|x - x_{n+1}\|_2},$
where $x$ is the original image and $x_{n+1}$ is the restored image at iteration $n + 1$. The behavior of the SNR for Algorithm 2 in all cases is shown for grey images in Figure 6 and for RGB images in Figure 7.
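For reference, the SNR above can be computed with a one-line helper (a base-10 logarithm is assumed, giving values in decibels):

```python
import numpy as np

def snr(x_true, x_restored):
    """Signal-to-noise ratio 20*log10(||x||_2 / ||x - x_restored||_2) in dB."""
    return 20.0 * np.log10(np.linalg.norm(x_true)
                           / np.linalg.norm(x_true - x_restored))
```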

5. Conclusions

In this paper, we proposed a modified Tseng’s method that combines inertial and viscosity techniques to solve monotone inclusion problems in real Hilbert spaces, and we established a strong convergence theorem for it. Our modifications improve the practicality of the algorithm: it performs better and can be applied to a broader class of mappings. Moreover, we applied our algorithm to image restoration problems.

Author Contributions

N.K. and K.S. contributed equally in writing this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Thailand Science Research and Innovation Fund and King Mongkut’s University of Technology North Bangkok under Contract no. KMUTNB-BasicR-64-33-1.

Acknowledgments

The authors would like to thank the Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Martinet, B. Régularisation d’inéquations variationnelles par approximations successives. Rev. Française Informat. Recherche Opérationnelle 1970, 4, 154–158.
  2. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
  3. Reich, S. Extension problems for accretive sets in Banach spaces. J. Funct. Anal. 1977, 26, 378–395.
  4. Nevanlinna, O.; Reich, S. Strong convergence of contraction semigroups and of iterative methods for accretive operators in Banach spaces. Isr. J. Math. 1979, 32, 44–58.
  5. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
  6. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2003, 20, 103.
  7. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441.
  8. Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378.
  9. Thong, D.V.; Cholamjiak, P. Strong convergence of a forward–backward splitting method with a new step size for solving monotone inclusions. Comput. Appl. Math. 2019, 38, 1–16.
  10. Marcotte, P. Application of Khobotov’s algorithm to variational inequalities and network equilibrium problems. INFOR Inf. Syst. Oper. Res. 1991, 29, 258–270.
  11. Gibali, A.; Thong, D.V. Tseng type methods for solving inclusion problems and its applications. Calcolo 2018, 55, 1–22.
  12. Khobotov, E.N. Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 1987, 27, 120–127.
  13. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000.
  14. Trémolières, R.; Lions, J.L.; Glowinski, R. Numerical Analysis of Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2011.
  15. Konnov, I.V. Combined relaxation methods for variational inequality problems over product sets. Lobachevskii J. Math. 1999, 2, 3–9.
  16. Baiocchi, C. Variational and quasivariational inequalities. In Applications to Free-Boundary Problems; Wiley: New York, NY, USA, 1984.
  17. Jaiboon, C.; Kumam, P. An extragradient approximation method for system of equilibrium problems and variational inequality problems. Thai J. Math. 2012, 7, 77–104.
  18. Kumam, W.; Piri, H.; Kumam, P. Solutions of system of equilibrium and variational inequality problems on fixed points of infinite family of nonexpansive mappings. Appl. Math. Comput. 2014, 248, 441–455.
  19. Chamnarnpan, T.; Phiangsungnoen, S.; Kumam, P. A new hybrid extragradient algorithm for solving the equilibrium and variational inequality problems. Afr. Mat. 2015, 26, 87–98.
  20. Deepho, J.; Kumam, W.; Kumam, P. A new hybrid projection algorithm for solving the split generalized equilibrium problems and the system of variational inequality problems. J. Math. Model. Algorithms Oper. Res. 2014, 13, 405–423.
  21. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibrium problems. J. Inequalities Appl. 2019, 2019, 1–25.
  22. Peters, J.F. Foundations of Computer Vision: Computational Geometry, Visual Image Structures and Object Shape Detection; Springer: Berlin, Germany, 2017; Volume 124.
  23. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
  24. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  25. Kitkuan, D.; Kumam, P.; Martínez-Moreno, J. Generalized Halpern-type forward–backward splitting methods for convex minimization problems with application to image restoration problems. Optimization 2020, 69, 1557–1581.
  26. Kitkuan, D.; Kumam, P.; Martínez-Moreno, J.; Sitthithakerngkiet, K. Inertial viscosity forward–backward splitting algorithm for monotone inclusions and its application to image restoration problems. Int. J. Comput. Math. 2020, 97, 482–497.
  27. Huang, Y.; Dong, Y. New properties of forward–backward splitting and a practical proximal-descent algorithm. Appl. Math. Comput. 2014, 237, 60–68.
  28. Goldstein, A.A. Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70, 709–710.
  29. Padcharoen, A.; Kitkuan, D.; Kumam, W.; Kumam, P. Tseng methods with inertial for solving inclusion problems and application to image deblurring and image recovery problems. Comput. Math. Methods 2020, 3, e1088.
  30. Attouch, H.; Peypouquet, J.; Redont, P. Backward–forward algorithms for structured monotone inclusions in Hilbert spaces. J. Math. Anal. Appl. 2018, 457, 1095–1117.
  31. Dadashi, V.; Postolache, M. Forward–backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2020, 9, 89–99.
  32. Suparatulatorn, R.; Khemphet, A. Tseng type methods for inclusion and fixed point problems with applications. Mathematics 2019, 7, 1175.
  33. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  34. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
  35. Moudafi, A.; Oliny, M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454.
  36. Lorenz, D.A.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
  37. Cholamjiak, P.; Suantai, S. Viscosity approximation methods for a nonexpansive semigroup in Banach spaces with gauge functions. J. Glob. Optim. 2012, 54, 185–197.
  38. Dilshad, M.; Aljohani, A.; Akram, M. Iterative scheme for split variational inclusion and a fixed-point problem of a finite collection of nonexpansive mappings. J. Funct. Spaces 2020, 2020, 3567648.
  39. Abbas, M.; Ibrahim, Y.; Khan, A.R.; De la Sen, M. Split variational inclusion problem and fixed point problem for a class of multivalued mappings in CAT(0) spaces. Mathematics 2019, 7, 749.
  40. De la Sen, M. Stability and convergence results based on fixed point theory for a generalized viscosity iterative scheme. Fixed Point Theory Appl. 2009, 2009, 314581.
  41. Sitthithakerngkiet, K.; Deepho, J.; Kumam, P. A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion in image reconstruction and fixed point problems. Appl. Math. Comput. 2015, 250, 986–1001.
  42. Pan, C.; Wang, Y. Generalized viscosity implicit iterative process for asymptotically non-expansive mappings in Banach spaces. Mathematics 2019, 7, 379.
  43. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011; Volume 408.
  44. Suwannaut, S.; Suantai, S.; Kangtunyakarn, A. The method for solving variational inequality problems with numerical results. Afr. Mat. 2019, 30, 311–334.
  45. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, 2012, 109236.
  46. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  47. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  48. Rockafellar, R. On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33, 209–216.
  49. Baillon, J.B.; Haddad, G. Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26, 137–150.
  50. Hale, E.T.; Yin, W.; Zhang, Y. A Fixed-Point Continuation Method for L1-Regularized Minimization with Applications to Compressed Sensing; CAAM TR07-07; Rice University: Houston, TX, USA, 2007; Volume 43, p. 44.
Figure 1. Original images.
Figure 2. Blurred grey images with blur matrices $A_1$–$A_4$, respectively.
Figure 3. Blurred RGB images with blur matrices $A_1$–$A_4$, respectively.
Figure 4. Reconstructed grey images corrupted by blur matrices $A_1$–$A_4$, respectively.
Figure 5. Reconstructed RGB images corrupted by blur matrices $A_1$–$A_4$, respectively.
Figure 6. The behavior of the SNR for Algorithm 2 in all cases for grey images.
Figure 7. The behavior of the SNR for Algorithm 2 in all cases for RGB images.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

