Article

Inertial Mann-Type Algorithm for a Nonexpansive Mapping to Solve Monotone Inclusion and Image Restoration Problems

by Natthaphon Artsawang 1 and Kasamsuk Ungchittrakool 1,2,*
1 Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
2 Research Center for Academic Excellence in Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(5), 750; https://doi.org/10.3390/sym12050750
Submission received: 17 February 2020 / Revised: 25 March 2020 / Accepted: 31 March 2020 / Published: 6 May 2020

Abstract: In this article, we establish a new Mann-type method combining both inertial terms and errors to find a fixed point of a nonexpansive mapping in a Hilbert space. We show strong convergence of the iterates under appropriate assumptions on the parameters. By virtue of the main theorem, the method can be applied to approximate a zero point of the sum of three monotone operators. We compare the convergence performance of our proposed method, the Mann-type algorithm without inertial terms and errors, and the Halpern-type algorithm on a convex minimization problem with a constraint involving a non-zero asymmetric linear transformation. Finally, we illustrate the functionality of the algorithm through numerical experiments on image restoration problems.

1. Introduction

Throughout this article, $H$ denotes a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\| = \sqrt{\langle \cdot, \cdot \rangle}$. Let $T : H \to H$ be a nonexpansive mapping, that is, $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$. Let $C$ be a nonempty closed convex subset of $H$. The set of all fixed points of the operator $T$ is denoted by $\mathrm{Fix}(T) := \{x \in H : Tx = x\}$. The metric projection of $H$ onto $C$, $\mathrm{proj}_C : H \to C$, is defined by $\mathrm{proj}_C(x) = \arg\min_{c \in C} \|x - c\|$ for all $x \in H$ (see [1] and the references therein for more detail).
Problem: The fixed point problem for the mapping $T$ is to
$$\text{find } x \in H \ \text{ such that } \ x = Tx.$$
Many problems in the real world, such as optimal control problems, economic modeling, variational analysis, game theory, and data analysis, can be formulated as fixed point problems of nonexpansive mappings (see Bagirov et al.'s book [2] for more applications and recent developments). A solution of the fixed point problem for nonexpansive mappings can be approximated by the iterative method introduced by Mann [3]. The Mann iteration is given by
$$x_{n+1} = \alpha_n x_n + (1 - \alpha_n) T x_n, \qquad n \ge 1, \qquad (1)$$
where $x_1 \in H$ and $(\alpha_n)_{n\ge1}$ is a real sequence in $[0,1]$. Weak convergence of the iterates $(x_n)_{n\ge1}$ was obtained under the control condition $\sum_{n\ge1} \alpha_n (1 - \alpha_n) = +\infty$ (see [4,5]). To obtain strong convergence to a fixed point of a nonexpansive mapping, one of the most important methods is the one introduced by Halpern [6]:
$$x_{n+1} = \alpha_n u + (1 - \alpha_n) T x_n, \qquad n \ge 1, \qquad (2)$$
where $x_1, u \in H$ and $(\alpha_n)_{n\ge1}$ is a real sequence in $[0,1]$. Many results have been presented that study and improve the algorithm in Equation (2) (see [7,8,9,10,11,12,13,14]). In 2000, Moudafi [15] proposed an iterative method based on the viscosity technique and proved strong convergence of the iterates. Moreover, many authors have studied and developed Moudafi's algorithm; several related methods are reviewed extensively in, for example, [7,16,17,18,19,20]. Recently, Boţ et al. [21] proposed a new Mann-type algorithm (MTA) to solve the fixed point problem for a nonexpansive mapping and proved strong convergence of the iterates without using the viscosity or projection methods, under some control conditions on the parameter sequences. Their algorithm is defined by
$$(\mathrm{MTA}) \qquad x_{n+1} = (1 - \alpha_n)\delta_n x_n + \alpha_n T(\delta_n x_n), \qquad n \ge 1,$$
where $x_1 \in H$ and $(\alpha_n)_{n\ge0}$, $(\delta_n)_{n\ge0}$ are sequences in $(0,1]$.
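To make the update rules above concrete, the following is a minimal NumPy sketch of the Mann iteration in Equation (1) and of MTA. The choice of $T$ (the projection onto the closed unit ball), the starting point, and the constant parameter schedule are illustrative assumptions only, not the settings used in the experiments of Section 5.

```python
import numpy as np

def mann(T, x1, alpha, n_iter=100):
    # Mann iteration (1): x_{n+1} = a_n x_n + (1 - a_n) T x_n
    x = x1
    for n in range(1, n_iter + 1):
        a = alpha(n)
        x = a * x + (1 - a) * T(x)
    return x

def mta(T, x1, alpha, delta, n_iter=100):
    # MTA: x_{n+1} = (1 - a_n) d_n x_n + a_n T(d_n x_n)
    x = x1
    for n in range(1, n_iter + 1):
        a, d = alpha(n), delta(n)
        x = (1 - a) * d * x + a * T(d * x)
    return x

# Example: T = projection onto the closed unit ball (a nonexpansive mapping).
T = lambda x: x / max(1.0, np.linalg.norm(x))
x = mann(T, np.array([3.0, 4.0]), alpha=lambda n: 0.5)
```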
Polyak [22] first proposed inertial extrapolation as an acceleration process for solving smooth convex minimization problems. An inertial algorithm is a two-step iterative method in which the next iterate is defined by making use of the previous two iterates. It is well known that combining an inertial term in an algorithm can accelerate the convergence of the generated sequence. Subsequently, many authors have studied inertial-type algorithms; we refer interested readers to [23,24,25,26,27,28,29,30,31] for more information. In 2015, Combettes and Yamada [32] presented a new Mann algorithm combining an error term for solving a common fixed point problem of averaged nonexpansive mappings in a Hilbert space. By using the concept of the inertial method, the technique of the Halpern method, and error terms, Shehu et al. [33] introduced the following algorithm for finding a fixed point of a nonexpansive mapping:
$$x_0, x_1 \in H, \qquad y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad x_{n+1} = \alpha_n x_0 + \delta_n y_n + \gamma_n T y_n + e_n, \qquad (3)$$
for all $n \ge 1$, where $(\theta_n)_{n\ge0} \subseteq [0,\theta]$ with $\theta \in [0,1)$, $(\alpha_n)_{n\ge0}$, $(\delta_n)_{n\ge0}$, and $(\gamma_n)_{n\ge0}$ are sequences in $(0,1]$, and $(e_n)_{n\ge0}$ is a sequence in $H$.
Motivated by the above facts, we aim to accelerate convergence while avoiding the viscosity technique; hence, we propose a Mann-type method combining both inertial terms and errors for finding a fixed point of a nonexpansive mapping in a Hilbert space.
Let $T : H \to H$ be a nonexpansive mapping such that $\mathrm{Fix}(T) \neq \emptyset$. We propose the following algorithm:
$$(\text{Algorithm 1}) \qquad x_0, x_1 \in H, \qquad y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad x_{n+1} = \delta_n y_n + \alpha_n\big(T(\delta_n y_n) - \delta_n y_n\big) + \varepsilon_n,$$
for all $n \ge 1$, where $(\theta_n)_{n\ge0} \subseteq [0,\theta]$ with $\theta \in [0,1)$, $(\alpha_n)_{n\ge0}$ and $(\delta_n)_{n\ge0}$ are sequences in $(0,1]$, and $(\varepsilon_n)_{n\ge0}$ is a sequence in $H$.
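A minimal NumPy sketch of Algorithm 1 follows. The parameter schedules mirror Remark 1 in Section 2, and the inertial safeguard follows Remark 2 in Section 3 with the summable sequence $c_n = 1/(n+1)^2$; the mapping $T$ and the vector $z$ driving the error term are illustrative assumptions of this example.

```python
import numpy as np

def algorithm1(T, x0, x1, theta, alpha, delta, eps, n_iter=200):
    # Algorithm 1: y_n = x_n + theta_n (x_n - x_{n-1}),
    # x_{n+1} = d_n y_n + a_n (T(d_n y_n) - d_n y_n) + eps_n
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        y = x + theta(n, x, x_prev) * (x - x_prev)   # inertial extrapolation
        u = delta(n) * y
        x_prev, x = x, u + alpha(n) * (T(u) - u) + eps(n)
    return x

# Illustrative schedules in the spirit of Remark 1; theta uses the safeguard
# of Remark 2 with c_n = 1/(n+1)^2 so that sum_n theta_n ||x_n - x_{n-1}|| < +inf.
z = np.ones(5)
delta = lambda n: 1.0 - 1.0 / (n + 2)
alpha = lambda n: 0.25 - 1.0 / (n + 3) ** 2
eps   = lambda n: z / (n + 1) ** 3
def theta(n, x, x_prev, cap=0.5):
    gap = np.linalg.norm(x - x_prev)
    return cap if gap == 0 else min(cap, 1.0 / ((n + 1) ** 2 * gap))

x_star = algorithm1(lambda v: v / max(1.0, np.linalg.norm(v)),
                    np.zeros(5), np.ones(5), theta, alpha, delta, eps)
```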
On the other hand, consider the problem of finding a zero of the sum of three monotone operators $A$, $B$, $C$:
$$\text{find } x \in H \ \text{ such that } \ 0 \in Ax + Bx + Cx, \qquad (4)$$
where $A$, $B$, $C$ are maximal monotone operators on a Hilbert space $H$ and $C$ is $\delta$-cocoercive with parameter $\delta > 0$. The problem in Equation (4) was considered by Davis and Yin [34], and it can be reformulated as a fixed point problem for a nonexpansive mapping. Therefore, it is interesting to study the fixed point problem in order to apply it to the problem of finding zeros of maximal monotone operators.
As an application, the main fixed point problem can be specialized to finding a zero point of the sum of three maximal monotone operators. Furthermore, the convergence behavior of the algorithms derived from Algorithm 1 is illustrated by numerical experiments.

2. Preliminaries

This section gathers results on real Hilbert spaces that are useful for this study, in particular for the convergence analysis.
Lemma 1.
[20] Let $H$ be a real Hilbert space. Then the following statements hold:
1. $\|x - y\|^2 = \|x\|^2 - \|y\|^2 - 2\langle x - y, y\rangle$ for all $x, y \in H$;
2. $\|x + y\|^2 \le \|x\|^2 + 2\langle x + y, y\rangle$ for all $x, y \in H$;
3. $\|rx + (1-r)y\|^2 = r\|x\|^2 + (1-r)\|y\|^2 - r(1-r)\|x - y\|^2$ for all $r \in [0,1]$ and $x, y \in H$.
Lemma 2.
[14,35] Let $(a_n)_{n\ge1}$, $(\mu_n)_{n\ge0}$, and $(\varepsilon_n)_{n\ge0}$ be sequences of nonnegative real numbers satisfying the inequality
$$a_{n+1} \le (1 - \delta_n) a_n + \mu_n + \varepsilon_n, \qquad n \ge 0, \qquad (5)$$
where $0 \le \delta_n \le 1$ for all $n \ge 0$. Assume that $\sum_{n\ge1} \varepsilon_n < +\infty$. Then, the following statements hold:
1. If $\mu_n \le c\,\delta_n$ (where $c \ge 0$), then $(a_n)_{n\ge1}$ is bounded.
2. If $\sum_{n\ge0} \delta_n = \infty$ and $\limsup_{n\to+\infty} \mu_n/\delta_n \le 0$, then the sequence $(a_n)_{n\ge0}$ converges to 0.
Lemma 3.
[1] Let $T$ be a nonexpansive operator from $H$ into itself. Let $(x_n)_{n\ge0}$ be a sequence in $H$ and let $x \in H$ be such that $x_n \rightharpoonup x$ as $n \to +\infty$ (i.e., $(x_n)_{n\ge0}$ converges weakly to $x$) and $x_n - Tx_n \to 0$ as $n \to +\infty$ (i.e., $(x_n - Tx_n)_{n\ge0}$ converges strongly to 0). Then, $x \in \mathrm{Fix}(T)$.
Assumption 1.
Let $(\alpha_n)_{n\ge0}$ and $(\delta_n)_{n\ge0}$ be sequences in $(0,1]$ and let $(\varepsilon_n)_{n\ge0}$ be a sequence in $H$. Assume that the following conditions hold:
1. $\liminf_{n\to+\infty} \alpha_n > 0$ and $\sum_{n\ge1} |\alpha_n - \alpha_{n-1}| < +\infty$;
2. $\lim_{n\to+\infty} \delta_n = 1$, $\sum_{n\ge0} (1 - \delta_n) = +\infty$, and $\sum_{n\ge1} |\delta_n - \delta_{n-1}| < +\infty$;
3. $\sum_{n\ge0} \|\varepsilon_n\| < +\infty$.
The following remark shows that Assumption 1 can be satisfied.
Remark 1.
Let $z \in H$. We set $\delta_n = 1 - \frac{1}{n+2}$, $\alpha_n = \frac{1}{4} - \frac{1}{(n+3)^2}$, and $\varepsilon_n = \frac{z}{(n+1)^3}$ for all $n \ge 0$. It is easy to see that Assumption 1 is satisfied.

3. Main Results

This section presents the convergence analysis of the proposed algorithm, beginning with the boundedness of the generated sequence, as in the following lemma.
Lemma 4.
Let $T : H \to H$ be a nonexpansive mapping such that $\mathrm{Fix}(T) \neq \emptyset$ and let $(x_n)_{n\ge0}$ be generated by Algorithm 1. Let $(\theta_n)_{n\ge0}$ be a sequence in $[0,\theta]$ with $\theta \in [0,1)$ such that $\sum_{n\ge1} \theta_n \|x_n - x_{n-1}\| < +\infty$. Suppose Assumption 1 holds. Then, $(x_n)_{n\ge0}$ is bounded.
Proof. 
Let $n \in \mathbb{N}$ and let a sequence $(z_n)_{n\ge1}$ be defined by
$$z_{n+1} = \delta_n z_n + \alpha_n\big(T(\delta_n z_n) - \delta_n z_n\big) + \varepsilon_n. \qquad (6)$$
By nonexpansiveness of T, we have
$$\begin{aligned} \|x_{n+1} - z_{n+1}\| &= \big\|(1-\alpha_n)\delta_n (y_n - z_n) + \alpha_n\big(T(\delta_n y_n) - T(\delta_n z_n)\big)\big\| \\ &\le (1-\alpha_n)\delta_n \|y_n - z_n\| + \alpha_n \delta_n \|y_n - z_n\| = \delta_n \|y_n - z_n\| \\ &= \delta_n \|x_n - z_n + \theta_n(x_n - x_{n-1})\| \\ &\le \delta_n \|x_n - z_n\| + \delta_n \theta_n \|x_n - x_{n-1}\| \le \delta_n \|x_n - z_n\| + \theta_n \|x_n - x_{n-1}\|. \end{aligned}$$
By applying Lemma 2, we have $\lim_{n\to+\infty} \|x_n - z_n\| = 0$.
Next, we show that $(z_n)_{n\ge1}$ is bounded. Let $x^* \in \mathrm{Fix}(T)$. It follows that
$$\begin{aligned} \|z_{n+1} - x^*\| &= \big\|\delta_n z_n + \alpha_n\big(T(\delta_n z_n) - \delta_n z_n\big) + \varepsilon_n - x^*\big\| \\ &\le (1-\alpha_n)\|\delta_n z_n - x^*\| + \alpha_n \|T(\delta_n z_n) - x^*\| + \|\varepsilon_n\| \\ &\le \|\delta_n z_n - x^*\| + \|\varepsilon_n\| = \|\delta_n(z_n - x^*) + (\delta_n - 1)x^*\| + \|\varepsilon_n\| \\ &\le \delta_n \|z_n - x^*\| + (1-\delta_n)\|x^*\| + \|\varepsilon_n\|. \end{aligned}$$
Notice that $\sum_{n\ge0} \|\varepsilon_n\| < +\infty$. We can apply Lemma 2 to obtain that $(z_n)_{n\ge1}$ is bounded. Since $\lim_{n\to+\infty} \|x_n - z_n\| = 0$ and $(z_n)_{n\ge1}$ is bounded, we get that $(x_n)_{n\ge0}$ is bounded. □
Theorem 1.
Let $T : H \to H$ be a nonexpansive mapping such that $\mathrm{Fix}(T) \neq \emptyset$ and let $(x_n)_{n\ge0}$ be generated by Algorithm 1. Let $(\theta_n)_{n\ge0}$ be a sequence in $[0,\theta]$ with $\theta \in [0,1)$ such that $\sum_{n\ge1} \theta_n \|x_n - x_{n-1}\| < +\infty$. Suppose Assumption 1 holds. Then, the sequence $(x_n)_{n\ge0}$ converges strongly to $x^* := \mathrm{proj}_{\mathrm{Fix}(T)}(0)$.
Proof. 
From Lemma 4, $(x_n)_{n\ge0}$ is bounded; hence, $(y_n)_{n\ge1}$ is also bounded. Let $x^* := \mathrm{proj}_{\mathrm{Fix}(T)}(0)$. Then, $x^* \in \mathrm{Fix}(T)$. By using Lemma 1 and the definition of $y_n$ in Algorithm 1, we get that
$$\begin{aligned} \|\delta_n y_n - x^*\|^2 &= \|\delta_n(y_n - x^*) + (\delta_n - 1)x^*\|^2 \\ &= \delta_n^2 \|y_n - x^*\|^2 + 2\delta_n(1-\delta_n)\langle -x^*, y_n - x^*\rangle + (1-\delta_n)^2\|x^*\|^2 \\ &\le \delta_n \|x_n - x^* + \theta_n(x_n - x_{n-1})\|^2 + (1-\delta_n)\big[2\delta_n \langle -x^*, y_n - x^*\rangle + (1-\delta_n)\|x^*\|^2\big] \\ &\le \delta_n \|x_n - x^*\|^2 + 2\delta_n \theta_n \langle x_n - x_{n-1}, y_n - x^*\rangle + (1-\delta_n)\big[2\delta_n \langle -x^*, y_n - x^*\rangle + (1-\delta_n)\|x^*\|^2\big]. \qquad (7) \end{aligned}$$
By using Lemma 1 and the nonexpansiveness of T, we have
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \big\|\delta_n y_n + \alpha_n\big(T(\delta_n y_n) - \delta_n y_n\big) + \varepsilon_n - x^*\big\|^2 \\ &= \big\|(1-\alpha_n)(\delta_n y_n - x^*) + \alpha_n\big(T(\delta_n y_n) - x^*\big) + \varepsilon_n\big\|^2 \\ &\le \big\|(1-\alpha_n)(\delta_n y_n - x^*) + \alpha_n\big(T(\delta_n y_n) - x^*\big)\big\|^2 + 2\langle \varepsilon_n, x_{n+1} - x^*\rangle \\ &= (1-\alpha_n)\|\delta_n y_n - x^*\|^2 + \alpha_n\|T(\delta_n y_n) - x^*\|^2 - \alpha_n(1-\alpha_n)\|T(\delta_n y_n) - \delta_n y_n\|^2 + 2\langle \varepsilon_n, x_{n+1} - x^*\rangle \\ &\le \|\delta_n y_n - x^*\|^2 + 2\langle \varepsilon_n, x_{n+1} - x^*\rangle. \qquad (8) \end{aligned}$$
Combining Equations (7) and (8), we obtain that
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &\le \delta_n\|x_n - x^*\|^2 + (1-\delta_n)\big[2\delta_n\langle -x^*, y_n - x^*\rangle + (1-\delta_n)\|x^*\|^2\big] \\ &\qquad + 2\delta_n\theta_n\langle x_n - x_{n-1}, y_n - x^*\rangle + 2\langle \varepsilon_n, x_{n+1} - x^*\rangle \\ &\le \delta_n\|x_n - x^*\|^2 + (1-\delta_n)\big[2\delta_n\langle -x^*, y_n - x^*\rangle + (1-\delta_n)\|x^*\|^2\big] \\ &\qquad + 2\delta_n\|y_n - x^*\|\,\theta_n\|x_n - x_{n-1}\| + 2\|x_{n+1} - x^*\|\,\|\varepsilon_n\|. \qquad (9) \end{aligned}$$
Next, we claim that $\|x_{n+1} - x_n\| \to 0$ as $n \to +\infty$. By the boundedness of the sequence $(y_n)_{n\ge1}$ and the nonexpansiveness of $T$, we have
$$\begin{aligned} \|x_{n+1} - x_n\| &= \big\|\delta_n y_n + \alpha_n\big(T(\delta_n y_n) - \delta_n y_n\big) + \varepsilon_n - \big(\delta_{n-1} y_{n-1} + \alpha_{n-1}\big(T(\delta_{n-1} y_{n-1}) - \delta_{n-1} y_{n-1}\big) + \varepsilon_{n-1}\big)\big\| \\ &\le \big\|(1-\alpha_n)(\delta_n y_n - \delta_{n-1} y_{n-1}) - (\alpha_n - \alpha_{n-1})\delta_{n-1} y_{n-1} + \alpha_n\big(T(\delta_n y_n) - T(\delta_{n-1} y_{n-1})\big) + (\alpha_n - \alpha_{n-1})T(\delta_{n-1} y_{n-1})\big\| + \|\varepsilon_n - \varepsilon_{n-1}\| \\ &\le \|\delta_n y_n - \delta_{n-1} y_{n-1}\| + |\alpha_n - \alpha_{n-1}|\big(\|\delta_{n-1} y_{n-1}\| + \|T(\delta_{n-1} y_{n-1})\|\big) + \|\varepsilon_n - \varepsilon_{n-1}\| \\ &\le \|\delta_n y_n - \delta_{n-1} y_{n-1}\| + |\alpha_n - \alpha_{n-1}|C_1 + \|\varepsilon_n - \varepsilon_{n-1}\|, \qquad (10) \end{aligned}$$
where $C_1 > 0$ is a bound for $\|\delta_{n-1} y_{n-1}\| + \|T(\delta_{n-1} y_{n-1})\|$. Next, we consider the term $\|\delta_n y_n - \delta_{n-1} y_{n-1}\|$ in the inequality in Equation (10).
Let us consider
$$\begin{aligned} \|\delta_n y_n - \delta_{n-1} y_{n-1}\| &= \|\delta_n(y_n - y_{n-1}) + (\delta_n - \delta_{n-1})y_{n-1}\| \\ &\le \delta_n \|y_n - y_{n-1}\| + |\delta_n - \delta_{n-1}|\,\|y_{n-1}\| \\ &\le \delta_n \|x_n - x_{n-1}\| + \delta_n \theta_n \|x_n - x_{n-1}\| + \delta_n \theta_{n-1} \|x_{n-1} - x_{n-2}\| + |\delta_n - \delta_{n-1}|\,\|y_{n-1}\| \\ &\le \delta_n \|x_n - x_{n-1}\| + \theta_n \|x_n - x_{n-1}\| + \theta_{n-1} \|x_{n-1} - x_{n-2}\| + |\delta_n - \delta_{n-1}|C_2, \qquad (11) \end{aligned}$$
where $C_2 > 0$ is a bound for $\|y_{n-1}\|$. Combining Equations (10) and (11), we get that
$$\|x_{n+1} - x_n\| \le \delta_n \|x_n - x_{n-1}\| + \theta_n \|x_n - x_{n-1}\| + \theta_{n-1} \|x_{n-1} - x_{n-2}\| + |\alpha_n - \alpha_{n-1}|C_1 + |\delta_n - \delta_{n-1}|C_2 + \|\varepsilon_n - \varepsilon_{n-1}\|. \qquad (12)$$
By applying Lemma 2 and Assumption 1, we conclude that $\|x_{n+1} - x_n\| \to 0$ as $n \to +\infty$.
In the following, we prove that $\|T(\delta_n y_n) - \delta_n y_n\| \to 0$ as $n \to +\infty$. We observe that
$$\begin{aligned} \|T(\delta_n y_n) - \delta_n y_n\| &\le \|T(\delta_n y_n) - x_{n+1}\| + \|x_{n+1} - \delta_n y_n\| \\ &= \big\|(1-\alpha_n)\big(T(\delta_n y_n) - \delta_n y_n\big) - \varepsilon_n\big\| + \big\|(1-\delta_n)x_{n+1} + \delta_n(x_{n+1} - y_n)\big\| \\ &\le (1-\alpha_n)\|T(\delta_n y_n) - \delta_n y_n\| + \|\varepsilon_n\| + (1-\delta_n)\|x_{n+1}\| + \delta_n \|x_{n+1} - x_n\| + \delta_n \theta_n \|x_n - x_{n-1}\|. \qquad (13) \end{aligned}$$
It follows that
$$\|T(\delta_n y_n) - \delta_n y_n\| \le \frac{1}{\alpha_n}\Big(\|\varepsilon_n\| + (1-\delta_n)\|x_{n+1}\| + \delta_n \|x_{n+1} - x_n\| + \delta_n \theta_n \|x_n - x_{n-1}\|\Big). \qquad (14)$$
Since $\lim_{n\to+\infty} \|x_{n+1} - x_n\| = 0$, and by the properties of the sequences involved, we conclude that $\lim_{n\to+\infty} \|T(\delta_n y_n) - \delta_n y_n\| = 0$.
To show that the sequence $(x_n)_{n\ge0}$ converges strongly to $x^*$, it is sufficient to prove that
$$\limsup_{n\to+\infty} \langle -x^*, y_n - x^*\rangle \le 0. \qquad (15)$$
On the contrary, assume that the inequality in Equation (15) does not hold. Then, there exist a real number $k > 0$ and a subsequence $(y_{n_i})_{i\ge1}$ of $(y_n)_{n\ge1}$ such that
$$\langle -x^*, y_{n_i} - x^*\rangle \ge k > 0 \qquad \text{for all } i \ge 1. \qquad (16)$$
Since $(y_n)_{n\ge1}$ is bounded in the Hilbert space $H$, we can find a subsequence of $(y_{n_i})_{i\ge1}$ that converges weakly to a point $y \in H$. Without loss of generality, we may assume that $y_{n_i} \rightharpoonup y$ as $i \to +\infty$. Therefore,
$$0 < k \le \lim_{i\to+\infty} \langle -x^*, y_{n_i} - x^*\rangle = \langle -x^*, y - x^*\rangle.$$
Notice that $\lim_{n\to+\infty} \delta_n = 1$; hence, $\delta_{n_i} y_{n_i} \rightharpoonup y$ as $i \to +\infty$. Together with $\lim_{n\to+\infty} \|T(\delta_n y_n) - \delta_n y_n\| = 0$, Lemma 3 yields $y \in \mathrm{Fix}(T)$. Since $x^* = \mathrm{proj}_{\mathrm{Fix}(T)}(0)$, the characterization of the metric projection gives $\langle -x^*, y - x^*\rangle = \langle 0 - x^*, y - x^*\rangle \le 0$, which is a contradiction. Hence, the inequality in Equation (15) is verified. It follows that
$$\limsup_{n\to+\infty} \big[2\delta_n \langle -x^*, y_n - x^*\rangle + (1-\delta_n)\|x^*\|^2\big] \le 0.$$
Using Lemma 2 and Equation (9), we conclude that $\lim_{n\to+\infty} x_n = x^*$. This completes the proof. □
Remark 2.
The assumption on the sequence $(\theta_n)_{n\ge0}$ in Theorem 1 is satisfied if we choose $\theta_n$ such that $0 \le \theta_n \le \bar{\theta}_n$, where
$$\bar{\theta}_n = \begin{cases} \min\Big\{\theta, \dfrac{c_n}{\|x_n - x_{n-1}\|}\Big\}, & \text{if } x_n \neq x_{n-1}, \\[4pt] \theta, & \text{otherwise}, \end{cases}$$
and $\sum_{n\ge0} c_n < +\infty$.
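A minimal sketch of this safeguard, assuming the summable choice $c_n = 1/(n+1)^2$ that is also used in Equation (22) of Section 5:

```python
import numpy as np

def theta_bar(n, x, x_prev, theta=0.5):
    # Remark 2: any theta_n in [0, theta_bar(n)] makes
    # sum_n theta_n * ||x_n - x_{n-1}|| finite, since c_n = 1/(n+1)^2 is summable.
    c_n = 1.0 / (n + 1) ** 2
    gap = np.linalg.norm(x - x_prev)
    return theta if gap == 0 else min(theta, c_n / gap)
```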

4. Applications

This section is devoted to applications of the proposed algorithm to monotone inclusion problems.
An operator $K : H \to H$ is called monotone if $\langle Kx - Ky, x - y\rangle \ge 0$ for all $x, y \in H$, and is said to be $\delta$-cocoercive with $\delta > 0$ if $\langle Kx - Ky, x - y\rangle \ge \delta \|Kx - Ky\|^2$ for all $x, y \in H$. The set of all zeros of the operator $K$ is denoted by $\mathrm{zer}(K) := \{z \in H : 0 = Kz\}$.
Let $L$ be a set-valued operator on $H$ and denote its graph by $\mathrm{gra}(L) := \{(x,u) \in H \times H : u \in Lx\}$. The operator $L$ is called maximal monotone if it is monotone and there exists no proper monotone extension of its graph. The operator $L$ is said to be $\rho$-strongly monotone with $\rho > 0$ if $\langle x - y, u - v\rangle \ge \rho \|x - y\|^2$ for all $(x,u), (y,v) \in \mathrm{gra}(L)$.
The resolvent of the operator $L$ is the operator $J_L : H \to 2^H$ defined by $J_L := (\mathrm{Id} + L)^{-1}$, where $\mathrm{Id}$ is the identity operator on $H$. Furthermore, $J_L$ is single-valued when $L$ is a maximal monotone operator.
Let $C$ be a nonempty closed convex subset of $H$. The indicator function of $C$ is defined by
$$\delta_C(x) = \begin{cases} 0, & \text{if } x \in C, \\ +\infty, & \text{otherwise}, \end{cases}$$
for all $x \in H$. We consider the monotone inclusion problem:
$$\text{find } x \in H \ \text{ such that } \ 0 \in Ax + Bx + Cx, \qquad (17)$$
where $A : H \to 2^H$ and $B : H \to 2^H$ are maximal monotone operators and $C : H \to H$ is a $\delta$-cocoercive operator with $\delta > 0$. We assume that $\mathrm{zer}(A + B + C) \neq \emptyset$. We propose the following algorithm for solving the problem in Equation (17).
$$(\text{Algorithm 2}) \qquad \begin{cases} a_n = x_n + \theta_n(x_n - x_{n-1}), \\ y_n = J_{\mu B}(\delta_n a_n), \\ z_n = J_{\mu A}(2y_n - \delta_n a_n - \mu C y_n), \\ x_{n+1} = \delta_n a_n + \alpha_n(z_n - y_n) + \varepsilon_n, \end{cases} \qquad (18)$$
for all $n \ge 1$, where $x_0, x_1 \in H$, $\mu \in (0, 2\delta)$, $(\theta_n)_{n\ge1} \subseteq [0,\theta]$ with $\theta \in [0,1)$, $(\alpha_n)_{n\ge0}$ and $(\delta_n)_{n\ge0}$ are sequences in $(0,1]$, and $(\varepsilon_n)_{n\ge0}$ is a sequence in $H$.
The above iterative scheme can be rewritten as
$$x_{n+1} = \delta_n a_n + \alpha_n \Big(\big[J_{\mu A}\circ(2J_{\mu B} - \mathrm{Id} - \mu C \circ J_{\mu B}) + \mathrm{Id} - J_{\mu B}\big](\delta_n a_n) - \delta_n a_n\Big) + \varepsilon_n = \delta_n a_n + \alpha_n\big(T(\delta_n a_n) - \delta_n a_n\big) + \varepsilon_n, \qquad (19)$$
where $x_0, x_1 \in H$, $a_n := x_n + \theta_n(x_n - x_{n-1})$, and
$$T := J_{\mu A}\circ(2J_{\mu B} - \mathrm{Id} - \mu C \circ J_{\mu B}) + \mathrm{Id} - J_{\mu B}.$$
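The following is a minimal NumPy sketch of Algorithm 2, assuming that the two resolvents and the cocoercive operator are supplied as callables (for instance, the closed forms derived in Example 1 of Section 5); the parameter schedules are left to the caller.

```python
import numpy as np

def algorithm2(JA, JB, C, x0, x1, mu, theta, alpha, delta, eps, n_iter=200):
    # Algorithm 2 (18): JA = J_{mu A}, JB = J_{mu B}, C delta-cocoercive,
    # with step size mu in (0, 2*delta).
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        a = x + theta(n, x, x_prev) * (x - x_prev)   # inertial step
        u = delta(n) * a
        y = JB(u)
        z = JA(2 * y - u - mu * C(y))
        x_prev, x = x, u + alpha(n) * (z - y) + eps(n)
    return x, y   # by Theorem 2 below, y_n -> J_{mu B}(x*) in zer(A + B + C)
```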
The following proposition is an important tool for verifying the convergence of Algorithm 2 (see Proposition 2.1 in [34]).
Proposition 1.
Let $T_1, T_2 : H \to H$ be two firmly nonexpansive operators and let $C$ be a $\delta$-cocoercive operator with $\delta > 0$. Let $\mu \in (0, 2\delta)$. Then, the operator $T := \mathrm{Id} - T_2 + T_1 \circ (2T_2 - \mathrm{Id} - \mu C \circ T_2)$ is $\alpha$-averaged with coefficient $\alpha := \frac{2\delta}{4\delta - \mu} < 1$.
In particular, the following inequality holds for all $z, w \in H$:
$$\|Tz - Tw\|^2 \le \|z - w\|^2 - \frac{1-\alpha}{\alpha}\big\|(\mathrm{Id} - T)z - (\mathrm{Id} - T)w\big\|^2.$$
The following lemma gives a characterization of $\mathrm{zer}(A + B + C)$.
Lemma 5.
(Lemma 2.2 in [34]) Let $A : H \to 2^H$ and $B : H \to 2^H$ be maximal monotone operators and let $C : H \to H$ be a $\delta$-cocoercive operator with $\delta > 0$. Suppose that $\mathrm{zer}(A + B + C) \neq \emptyset$. Then,
$$\mathrm{zer}(A + B + C) = J_{\mu B}(\mathrm{Fix}(T)),$$
where $T := J_{\mu A}\circ(2J_{\mu B} - \mathrm{Id} - \mu C \circ J_{\mu B}) + \mathrm{Id} - J_{\mu B}$ with $\mu > 0$.
Remark 3.
1. If we set $Cx = 0$ for all $x \in H$ in Lemma 5, then $\mathrm{zer}(A + B) = J_{\mu B}(\mathrm{Fix}(T))$, where $T := J_{\mu A}\circ(2J_{\mu B} - \mathrm{Id}) + \mathrm{Id} - J_{\mu B}$ with $\mu > 0$.
2. If we set $Bx = 0$ for all $x \in H$ in Lemma 5, then $\mathrm{zer}(A + C) = \mathrm{Fix}(T)$, where $T := J_{\mu A}\circ(\mathrm{Id} - \mu C)$ with $\mu > 0$.
Theorem 2.
Let $A, B : H \to 2^H$ be two maximal monotone operators and let $C : H \to H$ be $\delta$-cocoercive with $\delta > 0$. Suppose that $\mathrm{zer}(A + B + C) \neq \emptyset$. Let $(\theta_n)_{n\ge1}$ be a sequence in $[0,\theta]$ with $\theta \in [0,1)$ and let $\mu \in (0, 2\delta)$. Let $(x_n)_{n\ge0}$, $(y_n)_{n\ge1}$, and $(z_n)_{n\ge1}$ be generated by Algorithm 2. Assume that Assumption 1 holds and $\sum_{n\ge1} \theta_n \|x_n - x_{n-1}\| < +\infty$. Then, the following statements are true:
1. $(x_n)_{n\ge0}$ converges strongly to $x^* := \mathrm{proj}_{\mathrm{Fix}(T)}(0)$, where $T := J_{\mu A}\circ(2J_{\mu B} - \mathrm{Id} - \mu C \circ J_{\mu B}) + \mathrm{Id} - J_{\mu B}$.
2. $(y_n)_{n\ge1}$ and $(z_n)_{n\ge1}$ converge strongly to $J_{\mu B}(x^*) \in \mathrm{zer}(A + B + C)$.
Proof. 
(1): Let $(x_n)_{n\ge0}$ be generated by Algorithm 2. Then, the iterative scheme can be rewritten as
$$x_{n+1} = \delta_n a_n + \alpha_n\big(T(\delta_n a_n) - \delta_n a_n\big) + \varepsilon_n,$$
where $x_0, x_1 \in H$, $a_n := x_n + \theta_n(x_n - x_{n-1})$, and $T := J_{\mu A}\circ(2J_{\mu B} - \mathrm{Id} - \mu C \circ J_{\mu B}) + \mathrm{Id} - J_{\mu B}$.
By applying Proposition 1, $T$ is averaged and hence nonexpansive.
On the other hand, by Lemma 5, we obtain that
$$J_{\mu B}(\mathrm{Fix}(T)) = \mathrm{zer}(A + B + C) \neq \emptyset.$$
This means that $\mathrm{Fix}(T) \neq \emptyset$. By applying Theorem 1, the sequence $(x_n)_{n\ge0}$ converges strongly to $x^* := \mathrm{proj}_{\mathrm{Fix}(T)}(0)$ as $n \to +\infty$.
(2): For the sequence $(a_n)_{n\ge1}$ defined in Algorithm 2, we obtain that $a_n \to x^*$ as $n \to +\infty$. Since $J_{\mu B}$ is continuous and $\delta_n \to 1$, we have $y_n = J_{\mu B}(\delta_n a_n) \to J_{\mu B}(x^*) \in \mathrm{zer}(A + B + C)$. From the last line of Algorithm 2, we get that $\lim_{n\to+\infty} \|z_n - y_n\| = 0$; hence, $(z_n)_{n\ge1}$ converges to the same limit. This completes the proof. □
Using similar arguments as in Theorem 2 and setting $Cx = 0$ for all $x \in H$, we can prove the following result.
Corollary 1.
Let $A, B : H \to 2^H$ be two maximal monotone operators with $\mathrm{zer}(A + B) \neq \emptyset$. We consider the following algorithm:
$$\begin{cases} a_n = x_n + \theta_n(x_n - x_{n-1}), \\ y_n = J_{\mu B}(\delta_n a_n), \\ z_n = J_{\mu A}(2y_n - \delta_n a_n), \\ x_{n+1} = \delta_n a_n + \alpha_n(z_n - y_n) + \varepsilon_n, \end{cases} \qquad n \ge 1,$$
where $x_0, x_1 \in H$, $\mu > 0$, $(\theta_n)_{n\ge1} \subseteq [0,\theta]$ with $\theta \in [0,1)$, $(\alpha_n)_{n\ge0}$ and $(\delta_n)_{n\ge0}$ are sequences in $(0,1]$, and $(\varepsilon_n)_{n\ge0}$ is a sequence in $H$. Assume that Assumption 1 holds and $\sum_{n\ge1} \theta_n \|x_n - x_{n-1}\| < +\infty$. Then, the following statements hold:
1. $(x_n)_{n\ge0}$ converges strongly to $x^* := \mathrm{proj}_{\mathrm{Fix}(T)}(0)$, where $T := J_{\mu A}\circ(2J_{\mu B} - \mathrm{Id}) + \mathrm{Id} - J_{\mu B}$.
2. $(y_n)_{n\ge1}$ and $(z_n)_{n\ge1}$ converge strongly to $J_{\mu B}(x^*) \in \mathrm{zer}(A + B)$.
Proof. 
It follows from the proof of Theorem 2. □
Using similar arguments as in Theorem 2 and setting $Bx = 0$ for all $x \in H$, we can prove the following result.
Corollary 2.
Let $A : H \to 2^H$ be a maximal monotone operator and let $C : H \to H$ be a $\delta$-cocoercive operator with $\delta > 0$ such that $\mathrm{zer}(A + C) \neq \emptyset$. Let $\mu \in (0, 2\delta)$ and let $(x_n)_{n\ge0}$ be generated by the following iterative scheme:
$$x_0, x_1 \in H, \qquad y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad x_{n+1} = (1-\alpha_n)\delta_n y_n + \alpha_n J_{\mu A}\big(\delta_n y_n - \mu C(\delta_n y_n)\big) + \varepsilon_n,$$
for all $n \ge 1$, where $(\theta_n)_{n\ge1} \subseteq [0,\theta]$ with $\theta \in [0,1)$, $(\alpha_n)_{n\ge0}$ and $(\delta_n)_{n\ge0}$ are sequences in $(0,1]$, and $(\varepsilon_n)_{n\ge0}$ is a sequence in $H$. Assume that Assumption 1 holds and $\sum_{n\ge1} \theta_n \|x_n - x_{n-1}\| < +\infty$.
Then, the sequence $(x_n)_{n\ge0}$ converges strongly to the point $\mathrm{proj}_{\mathrm{zer}(A+C)}(0)$.

5. Numerical Experiments

To illustrate the behavior of the proposed iterative method, we provide a numerical example for a convex minimization problem and compare the convergence performance of the proposed algorithm with some algorithms in the literature. Moreover, we also employ our algorithm in the context of image restoration problems. All the experiments were implemented in MATLAB R2016b running on a MacBook Air (13-inch, Early 2017) with a 1.8 GHz Intel Core i5 processor and 8 GB 1600 MHz DDR3 memory.

5.1. Convex Minimization Problems

In this subsection, we present some comparisons among Algorithm 2, MTA, and Shehu et al.'s algorithm in Equation (3) (Algorithm 3.1 in [33]) on a convex minimization problem.
Example 1.
Let $f : \mathbb{R}^s \to \mathbb{R}$ be defined by $f(x) = \|x\|_1$ for all $x \in \mathbb{R}^s$; let $g : \mathbb{R}^s \to (-\infty, +\infty]$ be the indicator function $g(x) = \delta_W(x)$ with $W := \{x : Ax = b\}$, where $A : \mathbb{R}^s \to \mathbb{R}^l$ is a non-zero linear transformation, $b \in \mathbb{R}^l$, and $s > l$; and let $h : \mathbb{R}^s \to \mathbb{R}$ be defined by $h(x) = \frac{1}{2}\|x\|_2^2$ for all $x \in \mathbb{R}^s$. Since $s > l$, $A$ is an asymmetric (non-square) transformation. Find the solution of the following problem:
$$\text{minimize } \|x\|_1 + \delta_W(x) + \tfrac{1}{2}\|x\|_2^2 \quad \text{subject to } x \in \mathbb{R}^s. \qquad (20)$$
The problem in Equation (20) can be written in the form of the problem in Equation (17) as:
$$\text{find } x \in \mathbb{R}^s \ \text{ such that } \ 0 \in \partial\|x\|_1 + \partial\delta_W(x) + \nabla h(x), \qquad (21)$$
where $A = \partial\|\cdot\|_1$, $B = \partial\delta_W(\cdot)$, and $C = \nabla h(\cdot)$.
In this setting, we have $J_{\mu \partial\delta_W}(x) = x + A^T(AA^T)^{-1}(b - Ax)$,
$$J_{\mu \partial\|\cdot\|_1}(x) = \Big(\max\Big\{0, 1 - \frac{\mu}{|x_1|}\Big\}x_1, \max\Big\{0, 1 - \frac{\mu}{|x_2|}\Big\}x_2, \ldots, \max\Big\{0, 1 - \frac{\mu}{|x_s|}\Big\}x_s\Big),$$
and $\nabla h(x) = x$, where $x = (x_1, x_2, \ldots, x_s) \in \mathbb{R}^s$.
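These closed forms translate directly into code. Below is a minimal NumPy rendering; the dense solve of $AA^T$ is an illustrative simplification (in practice one would cache a factorization of $AA^T$), and the componentwise soft-thresholding is written in its equivalent $\operatorname{sign}(x_i)\max\{|x_i| - \mu, 0\}$ form.

```python
import numpy as np

def prox_l1(x, mu):
    # J_{mu d||.||_1}(x)_i = max{0, 1 - mu/|x_i|} x_i = sign(x_i) max{|x_i| - mu, 0}
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def proj_affine(x, A, b):
    # J_{mu d delta_W}(x) = x + A^T (A A^T)^{-1} (b - A x), the projection onto {x : Ax = b}
    return x + A.T @ np.linalg.solve(A @ A.T, b - A @ x)

grad_h = lambda x: x   # h(x) = 0.5 ||x||_2^2, so grad h(x) = x
```

With these three callables, Algorithm 2 can be run exactly as in the sketch given in Section 4.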
We begin by generating random vectors $z, x_0, x_1 \in \mathbb{R}^s$ and $b \in \mathbb{R}^l$ and a random matrix $A \in \mathbb{R}^{l \times s}$. Next, we compare the performance of Algorithm 2 with that of the other two algorithms. The parameters used in our algorithm are chosen as follows: $\alpha_n = 1 - \frac{1}{(n+2)^2}$, $\delta_n = 1 - \frac{1}{n+2}$, $\varepsilon_n = \frac{z}{(100n)^2}$, and
$$\theta_n = \begin{cases} \min\Big\{\frac{1}{2}, \dfrac{1}{(n+1)^2 \|x_n - x_{n-1}\|}\Big\}, & \text{if } x_n \neq x_{n-1}, \\[4pt] \frac{1}{2}, & \text{otherwise}. \end{cases} \qquad (22)$$
For Shehu et al.'s algorithm in Equation (3) [33], we choose $\alpha_n = \frac{1}{n+1}$, $\delta_n = \gamma_n = \frac{1}{2(n+1)}$, and $e_n = \varepsilon_n$. We record the CPU times (in seconds) and the numbers of iterations using the stopping criterion $\|y_n - y_{n-1}\| \le 10^{-4}$.
In Table 1, we present a comparison of the numerical results of Algorithm 2, MTA, and Shehu et al.'s algorithm in Equation (3) for different sizes of the matrix $A$. Algorithm 2 requires the smallest number of iterations for all sizes of the matrix $A$. Moreover, Algorithm 2 requires the least CPU time to reach the optimality tolerance in all cases.
Figure 1 shows the behavior of $\|y_n - y_{n-1}\|$ for Algorithm 2, MTA, and Shehu et al.'s algorithm in Equation (3) for two different choices of $(l, s)$. Our algorithm corresponds to the red line, and we can observe that Algorithm 2 has the best performance.

5.2. Image Restoration Problems

In this subsection, we apply the proposed algorithm to image restoration problems, which involve deblurring and denoising images. We consider the following degradation model:
$$y = Hx + w, \qquad (23)$$
where $y$, $H$, $x$, and $w$ represent the degraded image, the degradation (blurring) operator, the original image, and the noise, respectively.
The reconstructed image is obtained by solving the following regularized least-squares problem:
$$\min_x \frac{1}{2}\|Hx - y\|_2^2 + \mu \phi(x), \qquad (24)$$
where $\mu > 0$ is the regularization parameter and $\phi(\cdot)$ is the regularization functional. A well-known regularization functional used to remove noise in restoration problems is the $l_1$ norm, which is called Tikhonov regularization [36]. The problem in Equation (24) can then be written in the following form:
$$\text{find } x \in \arg\min_{x \in \mathbb{R}^k} \frac{1}{2}\|Hx - y\|_2^2 + \mu\|x\|_1, \qquad (25)$$
where $y$ is the degraded image and $H$ is a bounded linear operator. Note that the problem in Equation (25) is a special case of the problem in Equation (4) obtained by setting $A = \partial f(\cdot)$, $B = 0$, and $C = \nabla L(\cdot)$, where $f(x) = \|x\|_1$ and $L(x) = \frac{1}{2}\|Hx - y\|_2^2$. In this setting, we have $C(x) = \nabla L(x) = H^*(Hx - y)$, where $H^*$ is the transpose of $H$. We begin by choosing images and degrading them by random noise and different types of blurring. The random noise in this study is Gaussian white noise with zero mean and variance 0.001. We solve the problem in Equation (25) using our algorithm in Corollary 2. We set $\alpha_n = 1 - \frac{1}{(n+1)^2}$, $\delta_n = 1 - \frac{1}{100n+1}$, $\mu = 0.001$, $\varepsilon_n = 0$, and $\theta_n$ as defined in Equation (22).
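As a hedged sketch of this setup, the following NumPy/SciPy code applies the scheme of Corollary 2 to the problem in Equation (25). The use of 'same'-mode FFT convolution for $H$, the flipped kernel for its adjoint $H^*$, and the zero initialization are illustrative assumptions rather than the exact MATLAB implementation used in the experiments.

```python
import numpy as np
from scipy.signal import fftconvolve

def soft(x, t):
    # soft-thresholding: the resolvent J_{tA} for A = subdifferential of ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def restore(y, kernel, mu=0.001, theta_cap=0.5, n_iter=300):
    # Corollary 2's scheme for problem (25): A = d||.||_1, B = 0, C(x) = H*(Hx - y)
    H  = lambda x: fftconvolve(x, kernel, mode='same')
    Ht = lambda x: fftconvolve(x, kernel[::-1, ::-1], mode='same')  # adjoint (approximate at the boundary)
    x_prev = x = np.zeros_like(y)
    for n in range(1, n_iter + 1):
        alpha = 1.0 - 1.0 / (n + 1) ** 2           # alpha_n as in this subsection
        delta = 1.0 - 1.0 / (100 * n + 1)          # delta_n as in this subsection
        gap = np.linalg.norm(x - x_prev)
        theta = theta_cap if gap == 0 else min(theta_cap, 1.0 / ((n + 1) ** 2 * gap))  # Equation (22)
        w = delta * (x + theta * (x - x_prev))     # delta_n * y_n
        x_prev, x = x, (1 - alpha) * w + alpha * soft(w - mu * Ht(H(w) - y), mu)
    return x
```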
We compare our proposed algorithm with the inertial Mann-type algorithm introduced by Kitkuan et al. [30]. In Kitkuan et al.'s algorithm (the algorithm in Theorem 3.1 of [30]), we choose $\varsigma_n = \theta_n$, $\alpha_n = \frac{1}{n+1}$, $\lambda_n = 0.001$, and $h(x) = \frac{1}{12}\|x\|_2^2$. We assess the quality of the reconstructed image using the signal-to-noise ratio (SNR) for monochrome images, which is defined by
$$\mathrm{SNR}(n) = 20 \log_{10}\frac{\|x\|_2^2}{\|x - x_n\|_2^2}, \qquad (26)$$
where x and x n denote the original and the restored image at iteration n, respectively.
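A one-line NumPy version of this quality measure, following Equation (26) exactly:

```python
import numpy as np

def snr(x, x_n):
    # SNR (26) in dB: 20 log10( ||x||_2^2 / ||x - x_n||_2^2 )
    return 20.0 * np.log10(np.sum(x**2) / np.sum((x - x_n)**2))
```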
For color images, we estimate the quality of the reconstructed image using the normalized color difference (NCD) [37], which is defined by
$$\mathrm{NCD}(n) = \frac{\sum_{i=1}^{N}\sum_{j=1}^{M}\sqrt{\big(L^o_{i,j} - L^{(n)}_{i,j}\big)^2 + \big(u^o_{i,j} - u^{(n)}_{i,j}\big)^2 + \big(v^o_{i,j} - v^{(n)}_{i,j}\big)^2}}{\sum_{i=1}^{N}\sum_{j=1}^{M}\sqrt{\big(L^o_{i,j}\big)^2 + \big(u^o_{i,j}\big)^2 + \big(v^o_{i,j}\big)^2}}, \qquad (27)$$
where $i, j$ are the indices of the sample position; $N, M$ characterize the image size; and $L^o_{i,j}, u^o_{i,j}, v^o_{i,j}$ and $L^{(n)}_{i,j}, u^{(n)}_{i,j}, v^{(n)}_{i,j}$ are the values of the perceived lightness and the two chrominance components of the original image and of the restored image at iteration $n$, respectively. We generated the noised models so that the differences between the degraded and original figures are clearly visible, as follows. Figure 2 first shows the original image; second, the degraded image, corrupted by average blur (size 20 by 20) and Gaussian noise (zero mean and 0.001 variance), where the parameters were selected randomly so that the difference in sharpness is visible; and, lastly, the reconstructed images. Figure 3 first shows the original image; second, the degraded image, corrupted by Gaussian blur (size 20 by 20 with standard deviation 20) and Gaussian noise (zero mean and 0.001 variance); here, we found that a small standard deviation might not show a visible difference between the degraded and original figures; and, lastly, the reconstructed images. Figure 4 first shows the original image; second, the degraded image, corrupted by motion blur (linear motion of a camera by 30 pixels at an angle of 60 degrees) and Gaussian noise (zero mean and 0.001 variance), with parameters again selected randomly so that the difference in sharpness is visible; and, lastly, the reconstructed images. The comparisons between our proposed algorithm in Equation (19) and Kitkuan et al.'s algorithm (Theorem 3.1 in [30]) on image restoration problems are presented in Figure 5 and Table 2. Furthermore, we also compare Kitkuan et al.'s algorithm, our algorithm, and the well-known Wiener filtering (WF) technique for image restoration [38,39]. Figure 6 presents the comparative results for two degraded images, 'Artsawang' and 'Mandril', corrupted by motion blur and salt-and-pepper noise varying from 0% to 10%.

6. Conclusions

In this paper, we propose a new Mann-type method combining both inertial terms and errors to solve the fixed point problem for a nonexpansive mapping, and we prove the strong convergence of the proposed algorithm under sufficient conditions on the involved parameters. We apply the results to approximate solutions of monotone inclusion problems. Furthermore, we provide a numerical example comparing the proposed algorithm with other algorithms on a convex minimization problem. Finally, we use our method to solve image restoration problems.

Author Contributions

Conceptualization, N.A. and K.U.; methodology, N.A. and K.U.; formal analysis, N.A. and K.U.; investigation, N.A. and K.U.; writing, original draft preparation, N.A. and K.U.; and writing, review and editing, N.A. and K.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Thailand Research Fund through the Royal Golden Jubilee Ph.D. Program.

Acknowledgments

The second author would like to thank Naresuan University and The Thailand Research Fund for financial support. Moreover, N. Artsawang was supported by the Thailand Research Fund through the Royal Golden Jubilee Ph.D. Program (Grant No. PHD/0158/2557).

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this paper.

References

1. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; CMS Books in Mathematics; Springer: New York, NY, USA, 2011.
2. Bagirov, A.; Karmitsa, N.; Mäkelä, M.M. Introduction to Nonsmooth Optimization: Theory, Practice and Software; Springer: New York, NY, USA, 2014.
3. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
4. Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26, 248–264.
5. Reich, S. Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67, 274–276.
6. Halpern, B. Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73, 957–961.
7. Shioji, N.; Takahashi, W. Strong convergence of approximated sequences for nonexpansive mappings in Banach spaces. Proc. Am. Math. Soc. 1997, 125, 3641–3645.
8. Chidume, C.E.; Chidume, C.O. Iterative approximation of fixed points of nonexpansive mappings. J. Math. Anal. Appl. 2006, 318, 288–295.
9. Cholamjiak, P. A generalized forward-backward splitting method for solving quasi inclusion problems in Banach spaces. Numer. Algorithms 2016, 71, 915–932.
10. Lions, P.-L. Approximation de points fixes de contractions. C. R. Acad. Sci. Paris Sér. A-B 1977, 284, A1357–A1359.
11. Reich, S. Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 1980, 75, 287–292.
12. Reich, S. Some problems and results in fixed point theory. Contemp. Math. 1983, 21, 179–187.
13. Wittmann, R. Approximation of fixed points of nonexpansive mappings. Arch. Math. 1992, 58, 486–491.
14. Xu, H.-K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
15. Moudafi, A. Viscosity approximation methods for fixed points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
16. Dong, Q.L.; Lu, Y.Y. A new hybrid algorithm for a nonexpansive mapping. Fixed Point Theory Appl. 2015, 2015, 37.
17. Dong, Q.L.; Yuan, H.B. Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping. Fixed Point Theory Appl. 2015, 2015, 125.
18. Kanzow, C.; Shehu, Y. Generalized Krasnoselskii-Mann-type iterations for nonexpansive mappings in Hilbert spaces. Comput. Optim. Appl. 2017, 67, 595–620.
19. Kim, T.H.; Xu, H.K. Strong convergence of modified Mann iterations. Nonlinear Anal. 2005, 61, 51–60.
20. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
21. Boţ, R.I.; Csetnek, E.R.; Meier, D. Inducing strong convergence into the asymptotic behaviour of proximal splitting algorithms in Hilbert spaces. Optim. Methods Softw. 2019.
22. Polyak, B.T. Some methods of speeding up the convergence of iterative methods. Zh. Vychisl. Mat. Mat. Fiz. 1964, 4, 1–17.
23. Boţ, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
24. Boţ, R.I.; Csetnek, E.R.; László, S. An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions. EURO J. Comput. Optim. 2016, 4, 3–25.
25. Boţ, R.I.; Csetnek, E.R.; Nimana, N. Gradient-type penalty method with inertial effects for solving constrained convex optimization problems with smooth data. Optim. Lett. 2017.
26. Boţ, R.I.; Csetnek, E.R.; Nimana, N. An inertial proximal-gradient penalization scheme for constrained convex optimization problems. Vietnam J. Math. 2017, 46, 53–71.
27. Cholamjiak, W.; Cholamjiak, P.; Suantai, S. An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 2018, 2018.
28. Cholamjiak, P.; Shehu, Y. Inertial forward-backward splitting method in Banach spaces with application to compressed sensing. Appl. Math. 2019, 64, 409–435.
29. Kitkuan, D.; Kumam, P.; Martínez-Moreno, J. Generalized Halpern-type forward-backward splitting methods for convex minimization problems with application to image restoration problems. Optimization 2019.
30. Kitkuan, D.; Kumam, P.; Martínez-Moreno, J.; Sitthithakerngkiet, K. Inertial viscosity forward-backward splitting algorithm for monotone inclusions and its application to image restoration problems. Int. J. Comput. Math. 2020, 97, 1–19.
31. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236.
32. Combettes, P.L.; Yamada, I. Compositions and convex combinations of averaged nonexpansive operators. J. Math. Anal. Appl. 2015, 425, 55–70.
33. Shehu, Y.; Iyiola, O.S.; Ogbuisi, F.U. Iterative method with inertial terms for nonexpansive mappings: Applications to compressed sensing. Numer. Algorithms 2019.
34. Davis, D.; Yin, W. A three-operator splitting scheme and its optimization applications. Set-Valued Var. Anal. 2017, 25, 829–858.
35. Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479.
36. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems. SIAM Rev. 1979, 21, 266–267.
37. Ma, Z.; Wu, H.R. Partition based vector filtering technique for color suppression of noise in digital color images. IEEE Trans. Image Process. 2006, 15, 2324–2342.
38. Lim, J.S. Two-Dimensional Signal Processing; Prentice-Hall: Englewood Cliffs, NJ, USA, 1990.
39. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 4th ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 2017.
Figure 1. Illustration of the behavior of $\|y_n - y_{n-1}\|$ for Algorithm 2, MTA, and Shehu et al.'s algorithm in Equation (3).
Figure 2. (a) The original image 'camera man'; (b) the image degraded by average blur and random noise (Gaussian noise); and (c–e) the reconstructed images obtained using the Wiener filter, Kitkuan et al.'s algorithm, and our algorithm in Equation (19), respectively.
Figure 3. (a) The original image 'Artsawang'; (b) the image degraded by Gaussian blur and random noise (Gaussian noise); and (c–e) the reconstructed images obtained using the Wiener filter, Kitkuan et al.'s algorithm, and our algorithm in Equation (19), respectively.
Figure 4. (a) The original image 'Mandril'; (b) the image degraded by motion blur and random noise (Gaussian noise); and (c–e) the reconstructed images obtained using the Wiener filter, Kitkuan et al.'s algorithm, and our algorithm in Equation (19), respectively.
Figure 5. (a) The behavior of SNR for the two algorithms in Figure 2d,e; (b) the behavior of NCD for the two algorithms in Figure 3d,e; and (c) the behavior of NCD for the two algorithms in Figure 4d,e.
Figure 6. (a,b) The behavior of NCD under motion blur and different levels of salt-and-pepper noise from 0% to 10%.
Table 1. Comparison: Algorithm 2, MTA, and Shehu et al.'s algorithm in Equation (3).

(l, s) | Algorithm 2: CPU Time (s) | Algorithm 2: Iterations | MTA: CPU Time (s) | MTA: Iterations | Shehu et al. (3): CPU Time (s) | Shehu et al. (3): Iterations
(20, 700) | 0.0218 | 7 | 0.0428 | 278 | 0.0756 | 626
(20, 800) | 0.0189 | 7 | 0.0914 | 350 | 0.1745 | 796
(20, 7000) | 0.0302 | 7 | 1.7751 | 1273 | 0.0977 | 53
(20, 8000) | 0.0308 | 6 | 1.2419 | 1290 | 0.0671 | 54
(200, 7000) | 0.0365 | 8 | 1.9452 | 858 | 4.6538 | 2028
(200, 8000) | 0.0406 | 7 | 2.5115 | 977 | 0.1425 | 53
(500, 7000) | 0.0403 | 7 | 4.1647 | 892 | 8.3620 | 1956
(500, 8000) | 0.0548 | 8 | 4.3239 | 813 | 9.0929 | 1835
(1000, 7000) | 0.0703 | 7 | 6.7954 | 786 | 14.1693 | 1751
(1000, 8000) | 0.0728 | 7 | 7.8302 | 825 | 16.3752 | 1784
(3000, 7000) | 0.1597 | 7 | 18.0559 | 779 | 44.8129 | 1940
(3000, 8000) | 0.1763 | 7 | 22.3514 | 841 | 49.6872 | 1891
(100, 80,000) | 0.1376 | 8 | 26.6863 | 1489 | 1.5926 | 94
(1000, 80,000) | 0.6949 | 8 | 344.7048 | 3289 | 9.4181 | 93
Table 2. The performance of the normalized color difference (NCD) on two images.

n | Kitkuan et al.'s Algorithm: Artsawang Image | Kitkuan et al.'s Algorithm: Mandril Image | Our Algorithm in Equation (19): Artsawang Image | Our Algorithm in Equation (19): Mandril Image
1 | 0.99803 | 0.99842 | 0.99663 | 0.99731
50 | 0.99660 | 0.99730 | 0.99659 | 0.99727
100 | 0.99661 | 0.99729 | 0.99658 | 0.99726
200 | 0.99660 | 0.99728 | 0.99658 | 0.99726
300 | 0.99659 | 0.99727 | 0.99658 | 0.99726
400 | 0.99659 | 0.99727 | 0.99658 | 0.99726
