Article

A New Accelerated Viscosity Iterative Method for an Infinite Family of Nonexpansive Mappings with Applications to Image Restoration Problems

1 PhD Degree Program in Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Data Science Research Center, Research Center in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(4), 615; https://doi.org/10.3390/math8040615
Submission received: 16 March 2020 / Revised: 5 April 2020 / Accepted: 9 April 2020 / Published: 16 April 2020
(This article belongs to the Special Issue Mathematical Methods in Images and Signals Processing)

Abstract

The image restoration problem is one of the most popular topics in image processing and has been studied extensively because of its applications in various areas of science, engineering and medical imaging. The main aim of this paper is to introduce a new accelerated fixed-point algorithm that combines the viscosity approximation technique with an inertial effect for finding a common fixed point of an infinite family of nonexpansive mappings in a Hilbert space, and to prove a strong convergence result for the proposed method under suitable control conditions. As an application, we apply our algorithm to image restoration problems and compare its efficiency with that of FISTA, a popular algorithm for image restoration. Numerical experiments show that our algorithm is more efficient than FISTA.

1. Introduction

Let us first consider a simple linear inverse problem of the following form:
A x = b + w,        (1)
where x ∈ ℝ^{n×1} is the solution of the problem to be approximated, A ∈ ℝ^{m×n} and b ∈ ℝ^{m×1} are known, and w ∈ ℝ^{m×1} is an additive noise vector. Problems of the form (1) arise in various applications such as image and signal processing, astrophysics and data classification.
Further, one of the well-known applications is the problem of approximating the original image from an observed blurred and noisy image, which is known as the image restoration problem. In this problem, x, A and b represent the original image, the blur operator and the observed image, respectively.
The purpose of the image restoration problem is to minimize the effect of the additive noise; the classical estimator is the least squares (LS) model:
x̂ := arg min_x ‖Ax − b‖²_2,        (2)
where ‖·‖_2 is the ℓ_2-norm. However, this model is ill-conditioned: the least-squares solution may have a huge norm and is thus meaningless. In 1977, Tikhonov and Arsenin [1] dealt with this ill-posed problem by introducing a regularization technique, known as the Tikhonov regularization (TR) model, of the following form:
x̂ := arg min_x { ‖Ax − b‖²_2 + λ‖Lx‖²_2 },        (3)
for some regularization parameter λ > 0 and Tikhonov matrix L.
On the other hand, another successful regularization method, improving on Tikhonov regularization, is the least absolute shrinkage and selection operator (LASSO), introduced by Tibshirani [2] in 1996. The method finds a solution
x̂ := arg min_x { ‖Ax − b‖²_2 + λ‖x‖_1 },        (4)
where ‖·‖_1 is the ℓ_1-norm. The LASSO can be applied to regression problems and image restoration problems (see [2,3] for examples).
To solve (3) and (4), we extend them to a natural general formulation, namely the problem of finding a minimizer of the sum of two functions:
x̂ := arg min_x { h(x) + g(x) }.        (5)
In order to solve (5), we assume the following:
  • h : ℝ^n → ℝ is a smooth convex loss function, differentiable with an L-Lipschitz continuous gradient for some L > 0, i.e.,
    ‖∇h(x) − ∇h(y)‖ ≤ L‖x − y‖,  ∀x, y ∈ ℝ^n;
  • g : ℝ^n → ℝ ∪ {+∞} is a proper, convex and lower semicontinuous function.
We denote the set of all solutions of the above problem by arg min(h + g). It is well known that the solution of (5) can be reformulated as the problem of finding a zero x̂ such that
0 ∈ ∂g(x̂) + ∇h(x̂),        (6)
where ∂g is the subdifferential of the function g and ∇h is the gradient operator of the function h (see [4] for more details). Moreover, problem (6) can be solved by using the proximal gradient technique presented by Parikh and Boyd [5]: if x̂ is a solution of (6), then it is a fixed point of the forward-backward operator T defined by T := prox_{λg}(I − λ∇h) for λ > 0. The operator prox_{λg} is called the proximity operator of g with parameter λ. We know that T is a nonexpansive mapping whenever λ ∈ (0, 2/L). It is easily seen that prox_{λg} is an example of the resolvent of ∂g, that is, prox_{λg} = J_λ^{∂g} = (I + λ∂g)^{-1}; see Section 2 for more details.
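To make the forward-backward operator concrete, the following sketch (in Python with NumPy; the names and the toy data are ours, not the authors') evaluates T = prox_{λg}(I − λ∇h) for the LASSO case, using h(x) = ½‖Ax − b‖²_2 (the factor ½ is our convention and only rescales the Lipschitz constant) and g(x) = μ‖x‖_1, for which the proximity operator is componentwise soft-thresholding, and then runs the Picard iteration on T:

```python
import numpy as np

def soft_threshold(x, tau):
    # Componentwise proximity operator of tau*||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward_operator(x, A, b, lam, mu):
    # One evaluation of T = prox_{lam*g}(I - lam*grad h) for
    # h(x) = 0.5*||Ax - b||_2^2 and g(x) = mu*||x||_1.
    grad_h = A.T @ (A @ x - b)          # gradient of the smooth part
    return soft_threshold(x - lam * grad_h, lam * mu)

# Iterating T approximates a fixed point, i.e. a LASSO solution,
# provided 0 < lam < 2/L with L the largest eigenvalue of A^T A.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
L = np.linalg.eigvalsh(A.T @ A).max()
x = np.zeros(50)
for _ in range(200):
    x = forward_backward_operator(x, A, b, lam=1.0 / L, mu=0.1)
```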
We have seen from the above facts that fixed point theory plays a very important role in solving and developing image and signal processing problems, which can be applied to many real-world problems in digital image processing such as medical imaging and astronomy, as well as image processing for security applications. Fixed point theory focuses on two important problems. The first is the existence of a solution to many kinds of real-world problems, while the second is how to approximate such solutions. Over the past two decades, many fixed point iteration processes have been introduced and studied for solving practical problems. It is well known, by the Banach Contraction Principle, that every contraction mapping from a complete metric space X into itself has a unique fixed point.
A mapping T from a metric space (X, d) into itself is called a contraction if there is a k ∈ [0, 1) such that d(Tx, Ty) ≤ k d(x, y) for all x, y ∈ X.
It is well known that the Picard iteration process, defined by x_1 ∈ X and
x_{n+1} = T x_n,  n ≥ 1,
converges to the unique fixed point x* of T.
Observe that when k = 1 in the above inequality, we obtain a new class of nonlinear mappings, called nonexpansive mappings. This type of mapping plays a crucial role in solving many problems in optimization and economics.
From now on, we would like to provide some background concerning various iteration methods for finding a fixed point of nonexpansive and other nonlinear mappings.
Mann [6] was the first to introduce a modified iterative method, known as the Mann iteration process, in a Hilbert space H as follows: x_1 ∈ H,
x_{n+1} = α_n x_n + (1 − α_n) T x_n,  n ≥ 1,
where {α_n} is a real sequence in [0, 1]. In 1974, Ishikawa extended the Mann iteration to the Ishikawa iteration process: for an initial point x_1,
y_n = (1 − α_n) x_n + α_n T x_n,
x_{n+1} = (1 − β_n) x_n + β_n T y_n,  n ≥ 1,
where {α_n}, {β_n} ⊂ [0, 1]. Agarwal et al. employed the idea of the Ishikawa method to introduce the S-iteration process: for an initial point x_1,
y_n = (1 − α_n) x_n + α_n T x_n,
x_{n+1} = (1 − β_n) T x_n + β_n T y_n,  n ≥ 1,
where {α_n} and {β_n} are sequences in [0, 1]. They showed that the convergence behavior of the S-iteration is better than that of the Mann and Ishikawa iterations.
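For illustration, a minimal sketch of the Mann and S-iteration schemes in Python/NumPy is given below; the mapping T and all numerical choices are ours and serve only as a toy example:

```python
import numpy as np

def mann(T, x, alphas):
    # Mann iteration: x_{n+1} = a_n x_n + (1 - a_n) T x_n.
    for a in alphas:
        x = a * x + (1 - a) * T(x)
    return x

def s_iteration(T, x, alphas, betas):
    # S-iteration: y_n = (1 - a_n) x_n + a_n T x_n,
    #              x_{n+1} = (1 - b_n) T x_n + b_n T y_n.
    for a, b in zip(alphas, betas):
        y = (1 - a) * x + a * T(x)
        x = (1 - b) * T(x) + b * T(y)
    return x

theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
T = lambda v: 0.5 * (R @ v)          # a contraction on R^2, fixed point at the origin
x0 = np.array([1.0, 2.0])
n = 50
print(mann(T, x0, alphas=[0.5] * n))
print(s_iteration(T, x0, alphas=[0.5] * n, betas=[0.5] * n))
```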
Since the Mann iteration yields only weak convergence (see [7] for more details), Moudafi [8] introduced in 2000 the well-known viscosity approximation method, defined as follows: for x_1 ∈ H,
x_{n+1} = α_n f(x_n) + (1 − α_n) T x_n,  n ≥ 1,
where {α_n} ⊂ [0, 1] and f is a contraction mapping. Under suitable control conditions, he proved that {x_n} converges strongly to a fixed point of T when T is a nonexpansive mapping. Recently, the authors of [9] proposed the viscosity-based inertial forward-backward algorithm (VIFBA) for solving (5) by finding a common fixed point of an infinite family {T_n} of forward-backward operators. For initial points x_0, x_1 ∈ H, their method is defined as follows:
y_n = x_n + θ_n(x_n − x_{n−1}),
z_n = (1 − β_n) y_n + β_n f(y_n),
x_{n+1} = T_n z_n,  n ≥ 1,
where f is a contraction mapping on H and {α_n}, {β_n} are sequences in [0, 1]. Here the inertial effect is represented by the term θ_n(x_n − x_{n−1}), which was first introduced by Nesterov [10]. This algorithm was also applied to regression and recognition problems.
In 2009, Beck and Teboulle introduced the fast iterative shrinkage-thresholding algorithm (FISTA), defined by
y_n = T x_n,
t_{n+1} = (1 + √(1 + 4t_n²)) / 2,
θ_n = (t_n − 1) / t_{n+1},
x_{n+1} = y_n + θ_n(y_n − y_{n−1}),  n ≥ 1,
where T = prox_{λg}(I − λ∇h) for λ > 0, with initial points x_1 = y_0 ∈ ℝ^n and t_1 = 1. Moreover, they applied their algorithm to image restoration problems (see [3] for more details). This work points out that the LASSO model is a suitable model for image restoration problems.
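A minimal sketch of FISTA as written above, assuming the LASSO setting with h(x) = ½‖Ax − b‖²_2 (our ½ convention) and g(x) = μ‖x‖_1, so that T is the forward-backward operator with soft-thresholding and the step size is λ = 1/L as in [3]; the function and variable names are ours:

```python
import numpy as np

def fista(A, b, mu, n_iter=200):
    # FISTA of Beck and Teboulle applied to min 0.5*||Ax - b||^2 + mu*||x||_1.
    L = np.linalg.eigvalsh(A.T @ A).max()      # Lipschitz constant of grad h
    lam = 1.0 / L
    prox = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
    T = lambda v: prox(v - lam * (A.T @ (A @ v - b)), lam * mu)
    x = y_prev = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        y = T(x)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        theta = (t - 1.0) / t_next
        x = y + theta * (y - y_prev)           # inertial (momentum) step
        y_prev, t = y, t_next
    return x
```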
Motivated and inspired by the research in this direction, in this paper we introduce a new accelerated algorithm for finding a common fixed point of a family of nonexpansive mappings {T_n} in Hilbert spaces, based on the concepts of the inertial forward-backward, Mann and viscosity algorithms. A strong convergence theorem is then established under suitable control conditions. Moreover, we apply the main results to image restoration problems and compare the efficiency of our proposed algorithm with others. The results presented in this work also improve some well-known results in the literature.
This paper is organized as follows. In Section 2 (Preliminaries), we recall some definitions and useful facts which will be used in the later sections. We prove and analyze the strong convergence of the proposed algorithm in Section 3 (Main Results). In Section 4 (Applications), we apply our main result to image restoration problems. Finally, Section 5 (Conclusions) summarizes our work.

2. Preliminaries

Throughout this paper, we let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖. Let {x_n} be a sequence in H. We write x_n → x to mean that {x_n} converges strongly to x, and x_n ⇀ x to mean that {x_n} converges weakly to x. Let T : C → C be a mapping from a nonempty closed convex subset C of H into itself. A fixed point of T is a point x ∈ C such that x = Tx. The set of all fixed points of T is denoted by F(T), that is,
F(T) := { x ∈ C : x = Tx }.
A mapping T : C → H is said to be L-Lipschitzian if there exists a constant L > 0 such that
‖Tx − Ty‖ ≤ L‖x − y‖,  ∀x, y ∈ C.
If L = 1, then T is said to be nonexpansive. It is well known that if T is nonexpansive, then F(T) is closed and convex.
We call a mapping f : C → H a contraction if there exists a constant k ∈ [0, 1) such that
‖f(x) − f(y)‖ ≤ k‖x − y‖,  ∀x, y ∈ C.
In this case, we say that f is a k-contraction mapping.
Let A : H → 2^H be a multi-valued operator. The domain of A is the set D(A) := { x ∈ H : Ax ≠ ∅ } and the range of A is the set R(A) := { Az : z ∈ D(A) }. The inverse of A, denoted by A^{-1}, is defined by x ∈ A^{-1}y if and only if y ∈ Ax. The graph of A is denoted by G(A), where G(A) := { (x, u) : u ∈ Ax }.
An operator A : H → 2^H is said to be monotone if ⟨u − v, x − y⟩ ≥ 0 for all u ∈ Ax and v ∈ Ay. A monotone operator A on H is said to be maximal if the graph of A is not properly contained in the graph of any other monotone operator on H. It is well known that A is maximal if and only if, whenever ⟨u − v, x − y⟩ ≥ 0 for all (y, v) ∈ G(A), it follows that (x, u) ∈ G(A).
Moreover, a monotone operator A is maximal if and only if R(I + λA) = H for every λ > 0, where I is the identity operator. We also know that the subdifferential of a proper lower semicontinuous convex function is a nice example of a maximally monotone operator.
For a function g : H → [−∞, +∞], the subdifferential ∂g : H → 2^H of g at x ∈ H with g(x) ∈ ℝ is the set ∂g(x) := { x* ∈ H : g(x) + ⟨y − x, x*⟩ ≤ g(y), ∀y ∈ H }. By convention, ∂g(x) := ∅ if g(x) ∈ {±∞}. If g ∈ Γ_0(H), the set of proper lower semicontinuous convex functions from H to (−∞, +∞], then ∂g is maximally monotone (see [11] for more details).
For a maximally monotone operator A and λ > 0, the resolvent of A for λ is the single-valued operator J_λ^A : R(I + λA) → D(A) defined by J_λ^A := (I + λA)^{-1}. It is well known that J_λ^A is nonexpansive and F(J_λ^A) = A^{-1}0, where A^{-1}0 := { x ∈ H : 0 ∈ Ax } is called the set of all zero (or null) points of A.
Let A : H → 2^H be a multi-valued mapping and B : H → H a single-valued nonlinear mapping. The quasi-variational inclusion problem is the problem of finding a point x ∈ H such that
0 ∈ Ax + Bx.        (12)
The set of all solutions of problem (12) is denoted by (A + B)^{-1}0.
A classical method for solving problem (12) is the forward-backward method [12,13,14], which was first introduced by Combettes and Hirstoaga [15] in the following manner: x_1 ∈ H and
x_{n+1} = J_λ^A(x_n − λ B x_n),  n ≥ 1,
where λ > 0. Moreover, it follows from [16] that if A is a maximally monotone operator and B is L-Lipschitz continuous, then F(J_λ^A(I − λB)) = (A + B)^{-1}0.
Definition 1.
Let g ∈ Γ_0(H) and λ > 0. The proximity operator of g with parameter λ at x ∈ H is denoted by prox_{λg} and is defined by
prox_{λg} x := arg min_{y∈H} { g(y) + (1/(2λ))‖y − x‖² }.
It is well known that if g ∈ Γ_0(H), then J_λ^{∂g} = prox_{λg}, that is, the proximity operator is an example of a resolvent operator. Moreover, if g = ‖·‖_1, then
prox_{λ‖·‖_1} x = sign(x) max{ |x| − λ, 0 },
where sign is the signum function and the operations are understood componentwise (see [4] for more details).
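As a quick numerical check of the formula above (the values are ours, chosen only for illustration), take λ = 1:

```python
import numpy as np

x = np.array([3.0, -0.5, -2.0, 0.2])
lam = 1.0
prox = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # componentwise soft-thresholding
# prox is [2., -0., -1., 0.]: entries with |x_i| <= lam are set to zero,
# the others are shrunk towards zero by lam.
```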
The following basic definitions and well-known results are also needed for proving our main results.
Lemma 1. 
([17,18]) Let H be a real Hilbert space. For x, y ∈ H and any real number λ ∈ [0, 1], the following hold:
1. ‖x ± y‖² = ‖x‖² ± 2⟨x, y⟩ + ‖y‖²;
2. ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩;
3. ‖λx + (1 − λ)y‖² = λ‖x‖² + (1 − λ)‖y‖² − λ(1 − λ)‖x − y‖².
The identity in Lemma 1(3) implies that the following equality holds:
‖αx + βy + γz‖² = α‖x‖² + β‖y‖² + γ‖z‖² − αβ‖x − y‖² − βγ‖y − z‖² − αγ‖x − z‖²        (14)
for all x, y, z ∈ H and α, β, γ ∈ [0, 1] with α + β + γ = 1.
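The three-term identity above is easy to verify numerically; the following snippet (ours, for illustration only) checks it for random vectors in ℝ³:

```python
import numpy as np

rng = np.random.default_rng(42)
x, y, z = rng.standard_normal((3, 3))
a, b = 0.3, 0.5
c = 1.0 - a - b                      # alpha + beta + gamma = 1

lhs = np.linalg.norm(a * x + b * y + c * z) ** 2
rhs = (a * np.linalg.norm(x) ** 2 + b * np.linalg.norm(y) ** 2 + c * np.linalg.norm(z) ** 2
       - a * b * np.linalg.norm(x - y) ** 2 - b * c * np.linalg.norm(y - z) ** 2
       - a * c * np.linalg.norm(x - z) ** 2)
assert abs(lhs - rhs) < 1e-10
```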
Let C be a nonempty closed convex subset of a Hilbert space H. We know that for each x ∈ H there exists a unique point in C, denoted by P_C x, such that
‖x − P_C x‖ ≤ ‖x − y‖,  ∀y ∈ C.
The mapping P_C is called the metric projection of H onto C. It is well known that P_C is nonexpansive. Moreover, P_C can be characterized by the inequality
⟨x − P_C x, y − P_C x⟩ ≤ 0,        (15)
which holds for all x ∈ H and y ∈ C (see [19] for more details).
We next recall the following notions, which are useful for proving our main result; we refer to [20,21].
Let {T_n} and 𝒯 be families of nonexpansive mappings of H into itself such that ∅ ≠ F(𝒯) ⊂ ∩_{n=1}^∞ F(T_n), where F(𝒯) is the set of all common fixed points of 𝒯. We say that {T_n} satisfies the NST-condition (I) with 𝒯 if, for each bounded sequence {x_n} such that lim_{n→∞} ‖x_n − T_n x_n‖ = 0, it follows that
lim_{n→∞} ‖x_n − T x_n‖ = 0 for all T ∈ 𝒯.
In particular, if 𝒯 consists of one mapping T, i.e., 𝒯 = {T}, then {T_n} is said to satisfy the NST-condition (I) with T.
Lemma 2.
Let {T_n} be a family of nonexpansive mappings of H into itself and T : H → H a nonexpansive mapping with ∅ ≠ F(T) ⊂ ∩_{n=1}^∞ F(T_n). If {T_n} satisfies the NST-condition (I) with T, then {T_t} also satisfies the NST-condition (I) with T for every subsequence {t} of the positive integers.
Proof. 
Let {x_t} be a bounded sequence such that ‖x_t − T_t x_t‖ → 0 as t → ∞. Take u ∈ F(T). Define the sequence {x_n} by
x_n := x_t if n = t, and x_n := u otherwise.
Then {x_n} is bounded. Moreover, since u is a fixed point of T_n for all n ∈ ℕ, we have
lim_{n→∞} ‖x_n − T_n x_n‖ = lim_{t→∞} ‖x_t − T_t x_t‖ = 0.
By the NST-condition (I) of {T_n} with T, we obtain
lim_{t→∞} ‖x_t − T x_t‖ = lim_{n→∞} ‖x_n − T x_n‖ = 0.
Thus, {T_t} satisfies the NST-condition (I) with T. ☐
Proposition 1. 
([22]) Let H be a Hilbert space, A : H → 2^H a maximally monotone operator and B : H → H an L-Lipschitz operator with L > 0. Let T_n = J_{λ_n}^A(I − λ_n B), where 0 < λ_n < 2/L for all n ≥ 1, and let T = J_λ^A(I − λB), where 0 < λ < 2/L and λ_n → λ. Then {T_n} satisfies the NST-condition (I) with T.
The following lemmas are crucial for proving our main results.
Lemma 3. 
([23]) Let H be a real Hilbert space and T : H → H a nonexpansive mapping with F(T) ≠ ∅. Then the mapping I − T is demiclosed at zero, i.e., for any sequence {x_n} in H, x_n ⇀ x ∈ H and ‖x_n − T x_n‖ → 0 imply x ∈ F(T).
Lemma 4. 
([24,25]) Let {s_n} and {ξ_n} be sequences of nonnegative real numbers, {δ_n} a sequence in [0, 1] and {t_n} a sequence of real numbers such that
s_{n+1} ≤ (1 − δ_n)s_n + δ_n t_n + ξ_n
for all n ∈ ℕ. If the following conditions hold:
1. ∑_{n=1}^∞ δ_n = ∞;
2. ∑_{n=1}^∞ ξ_n < ∞;
3. lim sup_{n→∞} t_n ≤ 0,
then lim_{n→∞} s_n = 0.
Lemma 5. 
([26]) Let {Θ_n} be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence {Θ_{n_i}} of {Θ_n} such that Θ_{n_i} < Θ_{n_i+1} for all i ∈ ℕ. Define the sequence {τ(n)}_{n≥n_0} of integers by
τ(n) := max{ k ≤ n : Θ_k < Θ_{k+1} },
where n_0 ∈ ℕ is such that { k ≤ n_0 : Θ_k < Θ_{k+1} } ≠ ∅. Then the following hold:
1. τ(n_0) ≤ τ(n_0 + 1) ≤ ⋯ and τ(n) → ∞;
2. Θ_{τ(n)} ≤ Θ_{τ(n)+1} and Θ_n ≤ Θ_{τ(n)+1} for all n ≥ n_0.

3. Main Results

In this section, we first give a new algorithm for finding a common fixed point of a family of nonexpansive mappings in a real Hilbert space. We then prove its strong convergence under some suitable conditions.
We now propose a new accelerated algorithm for approximating a solution of our common fixed point problem, as follows.
Let H be a real Hilbert space, {T_n} a family of nonexpansive mappings of H into itself, and f a k-contraction mapping on H with k ∈ (0, 1). Let {η_n} ⊂ (0, ∞) and {σ_n}, {α_n}, {β_n}, {γ_n} ⊂ (0, 1).
We next prove the convergence of the sequence generated by Algorithm 1. To this end, we assume that the algorithm does not stop after finitely many iterations.
Algorithm 1: NAVA (New Accelerated Viscosity Algorithm).
    Initialization: Take x_0, x_1 ∈ H. Choose θ ≥ 0.
    For n ≥ 1:
        Set
              θ_n := min{ θ, η_n α_n / ‖x_n − x_{n−1}‖ } if x_n ≠ x_{n−1}; θ_n := θ otherwise.

        Compute
                y_n := x_n + θ_n(x_n − x_{n−1}),
                z_n := (1 − σ_n) y_n + σ_n T_n y_n,
                x_{n+1} := α_n f(x_n) + β_n T_n y_n + γ_n T_n z_n.
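A minimal sketch of Algorithm 1 in Python/NumPy, assuming the mappings T_n, the contraction f and the parameter sequences are supplied by the caller; all function and argument names are ours:

```python
import numpy as np

def nava(T, f, x0, x1, theta, alpha, beta, gamma, sigma, eta, n_iter):
    # Algorithm 1 (NAVA).  T(n, v) evaluates T_n at v; alpha, beta, gamma,
    # sigma, eta are callables n -> parameter value, with alpha + beta + gamma = 1.
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        diff = np.linalg.norm(x - x_prev)
        theta_n = min(theta, eta(n) * alpha(n) / diff) if diff > 0 else theta
        y = x + theta_n * (x - x_prev)                       # inertial step
        z = (1 - sigma(n)) * y + sigma(n) * T(n, y)
        x_next = alpha(n) * f(x) + beta(n) * T(n, y) + gamma(n) * T(n, z)
        x_prev, x = x, x_next
    return x
```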
Theorem 1.
Let {T_n} be a family of nonexpansive mappings of H into itself and T : H → H a nonexpansive mapping such that ∅ ≠ F(T) ⊂ ∩_{n=1}^∞ F(T_n). Suppose that {T_n} satisfies the NST-condition (I) with T. Let {x_n} be the sequence generated by Algorithm 1, and suppose that the following additional conditions hold:
1. α_n + β_n + γ_n = 1;
2. 0 < a ≤ σ_n ≤ a′ < 1;
3. 0 < b ≤ β_n ≤ b′ < 1;
4. 0 < c ≤ γ_n ≤ c′ < 1;
5. lim_{n→∞} η_n = 0;
6. lim_{n→∞} α_n = 0 and ∑_{n=1}^∞ α_n = ∞,
for some positive real numbers a, a′, b, b′, c, c′. Then the sequence {x_n} converges strongly to u ∈ F(T), where u = P_{F(T)} f(u).
Proof. 
Let u ∈ F(T) be such that u = P_{F(T)} f(u). First, we show that {x_n} is bounded. By the definitions of y_n and z_n, we have
‖y_n − u‖ = ‖x_n + θ_n(x_n − x_{n−1}) − u‖ ≤ ‖x_n − u‖ + θ_n‖x_n − x_{n−1}‖,  n ≥ 1,        (16)
and
‖z_n − u‖ = ‖(1 − σ_n)y_n + σ_n T_n y_n − u‖ ≤ (1 − σ_n)‖y_n − u‖ + σ_n‖T_n y_n − u‖ = (1 − σ_n)‖y_n − u‖ + σ_n‖T_n y_n − T_n u‖ ≤ (1 − σ_n)‖y_n − u‖ + σ_n‖y_n − u‖ = ‖y_n − u‖,  n ≥ 1.        (17)
From (16) and (17), we also have that
‖x_{n+1} − u‖ = ‖α_n f(x_n) + β_n T_n y_n + γ_n T_n z_n − u‖
= ‖α_n(f(x_n) − u) + β_n(T_n y_n − u) + γ_n(T_n z_n − u)‖
≤ α_n‖f(x_n) − u‖ + β_n‖T_n y_n − T_n u‖ + γ_n‖T_n z_n − T_n u‖
≤ α_n‖f(x_n) − f(u)‖ + α_n‖f(u) − u‖ + β_n‖y_n − u‖ + γ_n‖z_n − u‖
≤ α_n k‖x_n − u‖ + α_n‖f(u) − u‖ + (β_n + γ_n)‖y_n − u‖
≤ α_n k‖x_n − u‖ + α_n‖f(u) − u‖ + (β_n + γ_n)(‖x_n − u‖ + θ_n‖x_n − x_{n−1}‖)
= (α_n k + β_n + γ_n)‖x_n − u‖ + α_n‖f(u) − u‖ + (β_n + γ_n)θ_n‖x_n − x_{n−1}‖
≤ (1 − α_n(1 − k))‖x_n − u‖ + α_n‖f(u) − u‖ + (b′ + c′)α_n · (θ_n/α_n)‖x_n − x_{n−1}‖,  n ≥ 1.        (18)
According to the definition of θ_n and condition (5), we have
(θ_n/α_n)‖x_n − x_{n−1}‖ → 0 as n → ∞.
Then there exists a positive constant M_1 > 0 such that
(θ_n/α_n)‖x_n − x_{n−1}‖ ≤ M_1,  n ≥ 1.
From (18), we obtain
‖x_{n+1} − u‖ ≤ (1 − α_n(1 − k))‖x_n − u‖ + α_n‖f(u) − u‖ + α_n(b′ + c′)M_1
= (1 − α_n(1 − k))‖x_n − u‖ + α_n(1 − k) · (‖f(u) − u‖ + (b′ + c′)M_1)/(1 − k)
≤ max{ ‖x_n − u‖, (‖f(u) − u‖ + (b′ + c′)M_1)/(1 − k) }
≤ ⋯ ≤ max{ ‖x_1 − u‖, (‖f(u) − u‖ + (b′ + c′)M_1)/(1 − k) },  n ≥ 1.
This implies { x n } is bounded and so are { y n } , { z n } , { f ( x n ) } and { T n y n } .
Indeed, we have, for all n ≥ 1,
‖y_n − u‖² = ‖x_n + θ_n(x_n − x_{n−1}) − u‖² = ‖(x_n − u) + θ_n(x_n − x_{n−1})‖² ≤ ‖x_n − u‖² + 2θ_n‖x_n − u‖‖x_n − x_{n−1}‖ + θ_n²‖x_n − x_{n−1}‖².        (19)
By Lemma 1(2), (14) and (17), we have, for all n ≥ 1,
‖x_{n+1} − u‖² = ‖α_n f(x_n) + β_n T_n y_n + γ_n T_n z_n − u‖²
= ‖α_n(f(x_n) − f(u)) + β_n(T_n y_n − u) + γ_n(T_n z_n − u) + α_n(f(u) − u)‖²
≤ ‖α_n(f(x_n) − f(u)) + β_n(T_n y_n − u) + γ_n(T_n z_n − u)‖² + 2α_n⟨f(u) − u, x_{n+1} − u⟩
≤ α_n‖f(x_n) − f(u)‖² + β_n‖T_n y_n − u‖² + γ_n‖T_n z_n − u‖² + 2α_n⟨f(u) − u, x_{n+1} − u⟩
= α_n‖f(x_n) − f(u)‖² + β_n‖T_n y_n − T_n u‖² + γ_n‖T_n z_n − T_n u‖² + 2α_n⟨f(u) − u, x_{n+1} − u⟩
≤ α_n k²‖x_n − u‖² + β_n‖y_n − u‖² + γ_n‖z_n − u‖² + 2α_n⟨f(u) − u, x_{n+1} − u⟩
≤ α_n k²‖x_n − u‖² + (β_n + γ_n)‖y_n − u‖² + 2α_n⟨f(u) − u, x_{n+1} − u⟩.
It follows from (19) and the fact that 0 < k < 1 that, for all n ≥ 1,
‖x_{n+1} − u‖² ≤ α_n k‖x_n − u‖² + (β_n + γ_n)(‖x_n − u‖² + 2θ_n‖x_n − u‖‖x_n − x_{n−1}‖ + θ_n²‖x_n − x_{n−1}‖²) + 2α_n⟨f(u) − u, x_{n+1} − u⟩
= (1 − α_n(1 − k))‖x_n − u‖² + (β_n + γ_n)θ_n‖x_n − x_{n−1}‖(2‖x_n − u‖ + θ_n‖x_n − x_{n−1}‖) + 2α_n⟨f(u) − u, x_{n+1} − u⟩.        (21)
Since
θ_n‖x_n − x_{n−1}‖ = α_n · (θ_n/α_n)‖x_n − x_{n−1}‖ → 0 as n → ∞,
there exists a positive constant M_2 > 0 such that
θ_n‖x_n − x_{n−1}‖ ≤ M_2,  n ≥ 1.
From inequality (21), we derive that, for all n ≥ 1,
‖x_{n+1} − u‖² ≤ (1 − α_n(1 − k))‖x_n − u‖² + 3M_3(β_n + γ_n)θ_n‖x_n − x_{n−1}‖ + 2α_n⟨f(u) − u, x_{n+1} − u⟩
≤ (1 − α_n(1 − k))‖x_n − u‖² + 3M_3(b′ + c′)θ_n‖x_n − x_{n−1}‖ + 2α_n⟨f(u) − u, x_{n+1} − u⟩
≤ (1 − α_n(1 − k))‖x_n − u‖² + α_n(1 − k)[ (3M_3(b′ + c′)/(1 − k)) · (θ_n/α_n)‖x_n − x_{n−1}‖ + (2/(1 − k))⟨f(u) − u, x_{n+1} − u⟩ ],
where M_3 := sup_{n≥1}{ ‖x_n − u‖, M_2 }. In view of the above inequality, we set
s_n := ‖x_n − u‖²,  δ_n := α_n(1 − k)
and
t_n := (3M_3(b′ + c′)/(1 − k)) · (θ_n/α_n)‖x_n − x_{n−1}‖ + (2/(1 − k))⟨f(u) − u, x_{n+1} − u⟩,  n ≥ 1.
Then we obtain
s_{n+1} ≤ (1 − δ_n)s_n + δ_n t_n        (23)
for all n ≥ 1.
Now, we consider two cases for the proof as follows:
Case 1. Suppose that there exists a natural number n_0 such that the sequence {‖x_n − u‖}_{n≥n_0} is nonincreasing. Then {‖x_n − u‖} converges, since it is bounded below by 0. Using condition (6), we get ∑_{n=1}^∞ δ_n = ∞. In order to apply Lemma 4, we next claim that
lim sup_{n→∞} ⟨f(u) − u, x_{n+1} − u⟩ ≤ 0.
Coming back to the definition of z_n, by Lemma 1(3), one has, for all n ≥ 1,
‖z_n − u‖² = ‖(1 − σ_n)y_n + σ_n T_n y_n − u‖² = ‖(1 − σ_n)(y_n − u) + σ_n(T_n y_n − u)‖²
= (1 − σ_n)‖y_n − u‖² + σ_n‖T_n y_n − u‖² − σ_n(1 − σ_n)‖y_n − T_n y_n‖²
= (1 − σ_n)‖y_n − u‖² + σ_n‖T_n y_n − T_n u‖² − σ_n(1 − σ_n)‖y_n − T_n y_n‖²
≤ ‖y_n − u‖² − σ_n(1 − σ_n)‖y_n − T_n y_n‖².        (24)
Using (14), (17), (19) and (24), we obtain, for all n ≥ 1,
‖x_{n+1} − u‖² = ‖α_n f(x_n) + β_n T_n y_n + γ_n T_n z_n − u‖²
= ‖α_n(f(x_n) − u) + β_n(T_n y_n − u) + γ_n(T_n z_n − u)‖²
≤ α_n‖f(x_n) − u‖² + β_n‖T_n y_n − u‖² + γ_n‖T_n z_n − u‖²
= α_n‖f(x_n) − u‖² + β_n‖T_n y_n − T_n u‖² + γ_n‖T_n z_n − T_n u‖²
≤ α_n‖f(x_n) − u‖² + β_n‖y_n − u‖² + γ_n‖z_n − u‖²
≤ α_n‖f(x_n) − u‖² + β_n‖y_n − u‖² + γ_n(‖y_n − u‖² − σ_n(1 − σ_n)‖y_n − T_n y_n‖²)
= α_n‖f(x_n) − u‖² + (β_n + γ_n)‖y_n − u‖² − γ_n σ_n(1 − σ_n)‖y_n − T_n y_n‖²
≤ α_n‖f(x_n) − u‖² + (β_n + γ_n)‖x_n − u‖² + 2(β_n + γ_n)θ_n‖x_n − u‖‖x_n − x_{n−1}‖ + (β_n + γ_n)θ_n²‖x_n − x_{n−1}‖² − γ_n σ_n(1 − σ_n)‖y_n − T_n y_n‖²
= α_n‖f(x_n) − u‖² + (1 − α_n)‖x_n − u‖² + 2(β_n + γ_n)θ_n‖x_n − u‖‖x_n − x_{n−1}‖ + (β_n + γ_n)θ_n²‖x_n − x_{n−1}‖² − γ_n σ_n(1 − σ_n)‖y_n − T_n y_n‖².
It follows that, for all n ≥ 1,
γ_n σ_n(1 − σ_n)‖y_n − T_n y_n‖² ≤ α_n(‖f(x_n) − u‖² − ‖x_n − u‖²) + ‖x_n − u‖² − ‖x_{n+1} − u‖² + (β_n + γ_n)θ_n‖x_n − x_{n−1}‖(2‖x_n − u‖ + θ_n‖x_n − x_{n−1}‖).
It follows from conditions (2), (3), (4), (6) and the convergence of the sequences {‖x_n − u‖} and {θ_n‖x_n − x_{n−1}‖} that
‖y_n − T_n y_n‖ → 0 as n → ∞.        (27)
Since {T_n} satisfies the NST-condition (I) with T, we obtain
‖y_n − T y_n‖ → 0 as n → ∞.        (28)
From the definitions of y_n and z_n, we obtain
‖y_n − x_n‖ = θ_n‖x_n − x_{n−1}‖ → 0 as n → ∞,        (29)
and
‖z_n − y_n‖ ≤ σ_n‖y_n − T_n y_n‖ → 0 as n → ∞.        (30)
Using (27) and (30) together with condition (6), we have, for all n ≥ 1,
‖x_{n+1} − y_n‖ ≤ ‖x_{n+1} − T_n y_n‖ + ‖T_n y_n − y_n‖
= ‖α_n f(x_n) + β_n T_n y_n + γ_n T_n z_n − T_n y_n‖ + ‖T_n y_n − y_n‖
= ‖α_n(f(x_n) − T_n y_n) + γ_n(T_n z_n − T_n y_n)‖ + ‖T_n y_n − y_n‖
≤ α_n‖f(x_n) − T_n y_n‖ + γ_n‖T_n z_n − T_n y_n‖ + ‖T_n y_n − y_n‖
≤ α_n‖f(x_n) − T_n y_n‖ + γ_n‖z_n − y_n‖ + ‖T_n y_n − y_n‖,
which implies
‖x_{n+1} − y_n‖ → 0 as n → ∞.
Hence
‖x_{n+1} − x_n‖ ≤ ‖x_{n+1} − y_n‖ + ‖y_n − x_n‖ → 0 as n → ∞.
Let
v := lim sup_{n→∞} ⟨f(u) − u, x_{n+1} − u⟩.
Then there exists a subsequence {x_t} of {x_n} such that
v = lim_{t→∞} ⟨f(u) − u, x_{t+1} − u⟩.
Since {x_t} is bounded, it has a subsequence converging weakly to some w ∈ H. Without loss of generality, we may assume that x_t ⇀ w and
v = lim_{t→∞} ⟨f(u) − u, x_{t+1} − u⟩.
From (28) and (29), we derive, for all n ≥ 1,
‖x_n − T x_n‖ ≤ ‖x_n − y_n‖ + ‖y_n − T y_n‖ + ‖T y_n − T x_n‖ ≤ 2‖x_n − y_n‖ + ‖y_n − T y_n‖,
and hence
‖x_n − T x_n‖ → 0 as n → ∞.
It follows from Lemma 3 that w ∈ F(T). Since ‖x_{n+1} − x_n‖ → 0, we also get x_{t+1} ⇀ w. Moreover, using u = P_{F(T)} f(u) and (15), we obtain
v = lim_{t→∞} ⟨f(u) − u, x_{t+1} − u⟩ = ⟨f(u) − u, w − u⟩ ≤ 0.
Then
lim sup_{n→∞} ⟨f(u) − u, x_{n+1} − u⟩ ≤ 0.        (37)
Together with the fact that (θ_n/α_n)‖x_n − x_{n−1}‖ → 0, (37) implies that lim sup_{n→∞} t_n ≤ 0. Coming back to (23) and using Lemma 4, we conclude that x_n → u.
Case 2. Suppose that the sequence {‖x_n − u‖}_{n≥n_0} is not monotonically decreasing for some n_0 large enough. Set
Θ_n := ‖x_n − u‖².
Then there exists a subsequence {Θ_{n_j}} of {Θ_n} such that Θ_{n_j} < Θ_{n_j+1} for all j ∈ ℕ. In this case, we define τ : {n : n ≥ n_0} → ℕ by
τ(n) := max{ k ∈ ℕ : k ≤ n, Θ_k < Θ_{k+1} }.
By Lemma 5, we have Θ_{τ(n)} ≤ Θ_{τ(n)+1} for all n ≥ n_0, that is,
‖x_{τ(n)} − u‖ ≤ ‖x_{τ(n)+1} − u‖,  n ≥ n_0.
As in Case 1, we can conclude that, for all n ≥ n_0,
γ_{τ(n)} σ_{τ(n)}(1 − σ_{τ(n)})‖y_{τ(n)} − T_{τ(n)} y_{τ(n)}‖²
≤ α_{τ(n)}(‖f(x_{τ(n)}) − u‖² − ‖x_{τ(n)} − u‖²) + ‖x_{τ(n)} − u‖² − ‖x_{τ(n)+1} − u‖² + (β_{τ(n)} + γ_{τ(n)})θ_{τ(n)}‖x_{τ(n)} − x_{τ(n)−1}‖(2‖x_{τ(n)} − u‖ + θ_{τ(n)}‖x_{τ(n)} − x_{τ(n)−1}‖)
≤ α_{τ(n)}(‖f(x_{τ(n)}) − u‖² − ‖x_{τ(n)} − u‖²) + (β_{τ(n)} + γ_{τ(n)})θ_{τ(n)}‖x_{τ(n)} − x_{τ(n)−1}‖(2‖x_{τ(n)} − u‖ + θ_{τ(n)}‖x_{τ(n)} − x_{τ(n)−1}‖),
and hence
‖y_{τ(n)} − T_{τ(n)} y_{τ(n)}‖ → 0 as n → ∞.        (38)
Similarly to the proof of Case 1, we get
‖y_{τ(n)} − x_{τ(n)}‖ → 0,        (39)
‖z_{τ(n)} − y_{τ(n)}‖ → 0
and
‖x_{τ(n)+1} − y_{τ(n)}‖ → 0
as n → ∞, and hence
‖x_{τ(n)+1} − x_{τ(n)}‖ → 0 as n → ∞.        (42)
We next show that lim sup_{n→∞} ⟨f(u) − u, x_{τ(n)+1} − u⟩ ≤ 0. Put
v := lim sup_{n→∞} ⟨f(u) − u, x_{τ(n)+1} − u⟩.
Without loss of generality, there exists a subsequence {x_{τ(t)}} of {x_{τ(n)}} which converges weakly to some point w ∈ H and such that
v = lim_{t→∞} ⟨f(u) − u, x_{τ(t)+1} − u⟩.
By Lemma 2, {T_{τ(t)}} satisfies the NST-condition (I) with T. Hence, by (38), ‖y_{τ(t)} − T_{τ(t)} y_{τ(t)}‖ → 0, and we obtain
‖y_{τ(t)} − T y_{τ(t)}‖ → 0 as t → ∞,
which, by (39) and Lemma 3 again, implies that w ∈ F(T). Since ‖x_{τ(t)+1} − x_{τ(t)}‖ → 0, we get x_{τ(t)+1} ⇀ w. Moreover, using u = P_{F(T)} f(u) and (15), we obtain
v = lim_{t→∞} ⟨f(u) − u, x_{τ(t)+1} − u⟩ = ⟨f(u) − u, w − u⟩ ≤ 0.
Then
lim sup_{n→∞} ⟨f(u) − u, x_{τ(n)+1} − u⟩ ≤ 0.        (45)
Since Θ_{τ(n)} ≤ Θ_{τ(n)+1} and α_{τ(n)}(1 − k) > 0, as in the proof of Case 1 we have, for all n ≥ n_0,
‖x_{τ(n)} − u‖² ≤ (3M_3(b′ + c′)/(1 − k)) · (θ_{τ(n)}/α_{τ(n)})‖x_{τ(n)} − x_{τ(n)−1}‖ + (2/(1 − k))⟨f(u) − u, x_{τ(n)+1} − u⟩.
It follows from the fact that (θ_n/α_n)‖x_n − x_{n−1}‖ → 0 together with (45) that
lim sup_{n→∞} ‖x_{τ(n)} − u‖² ≤ 0,
and hence ‖x_{τ(n)} − u‖ → 0 as n → ∞. By (42), this implies that ‖x_{τ(n)+1} − u‖ → 0 as n → ∞.
By Lemma 5, we get
‖x_n − u‖ ≤ ‖x_{τ(n)+1} − u‖ → 0 as n → ∞.
Hence x_n → u. The proof is completed. ☐
As a direct consequence of Theorem 1, by using Proposition 1, we obtain the following corollary.
Corollary 1.
Let H be a real Hilbert space, A : H → 2^H a maximally monotone operator and B : H → H an L-Lipschitz operator with L > 0. Let T_n = J_{λ_n}^A(I − λ_n B), where 0 < λ_n < 2/L for all n ≥ 1, and let T = J_λ^A(I − λB), where 0 < λ < 2/L and λ_n → λ. Suppose that (A + B)^{-1}0 ≠ ∅. Let f be a k-contraction mapping on H with k ∈ (0, 1). Let {x_n} be a sequence in H generated by Algorithm 1 under the same conditions on the parameters as in Theorem 1. Then {x_n} converges strongly to u ∈ (A + B)^{-1}0, where u = P_{(A+B)^{-1}0} f(u).
Proof. 
Since (A + B)^{-1}0 = F(T) ⊂ ∩_{n=1}^∞ F(T_n) and T, T_n are nonexpansive for each n ∈ ℕ, the conclusion follows from Proposition 1 and Theorem 1. ☐

4. Applications

In this section, we first present the algorithm obtained from our main results. Throughout this section we work under the following setting:
  • H is a real Hilbert space;
  • h : H → ℝ is a differentiable convex function with an L-Lipschitz continuous gradient ∇h, where L > 0;
  • g ∈ Γ_0(H);
  • arg min(h + g) ≠ ∅;
  • f is a k-contraction mapping on H with k ∈ (0, 1);
  • λ ∈ (0, 2/L) and {λ_n} ⊂ (0, 2/L) with λ_n → λ;
  • {η_n} ⊂ (0, ∞) and {σ_n}, {α_n}, {β_n}, {γ_n} ⊂ (0, 1).
The algorithm we propose in this context has the following formulation.
We next prove the strong convergence of the sequence generated by our proposed algorithm.
Theorem 2.
Let {x_n} be a sequence generated by Algorithm 2 under the same conditions on the parameters as in Theorem 1. Then {x_n} converges strongly to a point u ∈ arg min(h + g).
Algorithm 2: AVFBA (Accelerated Viscosity Forward-Backward Algorithm).
    Initialization: Take x_0, x_1 ∈ H. Choose θ ≥ 0.
    For n ≥ 1:
        Set
        θ_n := min{ θ, η_n α_n / ‖x_n − x_{n−1}‖ } if x_n ≠ x_{n−1}; θ_n := θ otherwise.

        Compute
        y_n := x_n + θ_n(x_n − x_{n−1}),
        z_n := (1 − σ_n) y_n + σ_n prox_{λ_n g}(I − λ_n ∇h) y_n,
        x_{n+1} := α_n f(x_n) + β_n prox_{λ_n g}(I − λ_n ∇h) y_n + γ_n prox_{λ_n g}(I − λ_n ∇h) z_n.
Proof. 
In Corollary 1, we set A := ∂g and B := ∇h. Then A is a maximally monotone operator and B is L-Lipschitz continuous, so the required result follows directly from Corollary 1. ☐
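A minimal sketch of Algorithm 2 for the LASSO objective used below, with h(x) = ½‖Ax − b‖²_2 (our ½ convention) and g(x) = μ‖x‖_1; the parameter sequences are passed as callables and all names are ours:

```python
import numpy as np

def avfba(A, b, mu, f, theta, alpha, beta, gamma, sigma, eta, lam, n_iter):
    # Algorithm 2 (AVFBA) applied to min 0.5*||Ax - b||^2 + mu*||x||_1.
    # f is the contraction; alpha, beta, gamma, sigma, eta, lam map n to a value.
    prox = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
    def T(n, v):                                   # forward-backward operator T_n
        return prox(v - lam(n) * (A.T @ (A @ v - b)), lam(n) * mu)
    x_prev = x = np.zeros(A.shape[1])
    for n in range(1, n_iter + 1):
        diff = np.linalg.norm(x - x_prev)
        theta_n = min(theta, eta(n) * alpha(n) / diff) if diff > 0 else theta
        y = x + theta_n * (x - x_prev)
        z = (1 - sigma(n)) * y + sigma(n) * T(n, y)
        x_prev, x = x, alpha(n) * f(x) + beta(n) * T(n, y) + gamma(n) * T(n, z)
    return x
```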
We next discuss some experimental results obtained by applying our proposed algorithm to the image restoration problem. The image restoration problem (2) can be related to
min_x { ‖Ax − b‖²_2 + λ‖x‖_1 },
where x ∈ ℝ^n is the original image, b is the observed image and A represents the blurring operator. In this situation, we choose the regularization parameter λ = 5 × 10^{-5}. For this example, we consider the 256 × 256 image of Schonbrunn Palace (original image). We use a Gaussian blur of size 9 × 9 with standard deviation σ = 4 to create the blurred and noisy image (observed image). These two images are shown in Figure 1.
Following Thung and Raveendran [27], we measure the quality of each restored image x_n by the peak signal-to-noise ratio (PSNR), defined by
PSNR(x_n) = 10 log_{10}( 255² / MSE ),
where MSE = (1/256²)‖x_n − x‖² is the mean squared error with respect to the original image x. We note that a higher PSNR indicates a higher quality of the deblurred image.
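A small sketch of the PSNR computation for an 8-bit grayscale image of size 256 × 256, matching the formula above (the function name is ours):

```python
import numpy as np

def psnr(x_restored, x_original):
    # PSNR in dB for 8-bit images; both arguments are 256x256 arrays.
    mse = np.mean((x_restored - x_original) ** 2)   # (1/256^2) * ||x_n - x||^2
    return 10.0 * np.log10(255.0 ** 2 / mse)
```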
In Theorem 2, we set h(x) = ‖Ax − b‖²_2 and g(x) = λ‖x‖_1, and choose the Lipschitz constant L of the gradient ∇h to be the maximum eigenvalue of the matrix AᵀA.
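In practice the blur is applied as a linear operator rather than stored as a dense matrix, but for a modest dense A the constant can be obtained directly, as sketched below (an illustration with random data, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))          # stand-in for the blurring matrix
L = np.linalg.eigvalsh(A.T @ A).max()        # largest eigenvalue of A^T A
# Equivalently: L = np.linalg.norm(A, 2) ** 2 (squared spectral norm of A).
```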
Let us begin with the first experiment. We study the convergence behavior of our method by considering the following six cases of control parameters:

Parameters | Case (a) | Case (b) | Case (c) | Case (d) | Case (e) | Case (f)
α_n | 1/(33n) | 1/(33n) | 1/(33n) | 1/(33n) | 1/(33n) | 1/(33n)
β_n | n/(n+1) | n/(300n+1) | n/(300n+1) | n/(300n+1) | n/(300n+1) | n/(300n+1)
γ_n | 1 − α_n − β_n | 1 − α_n − β_n | 1 − α_n − β_n | 1 − α_n − β_n | 1 − α_n − β_n | 1 − α_n − β_n
σ_n | n/(10(n+1)) | n/(10(n+1)) | 99n/(100(n+1)) | 99n/(100(n+1)) | 99n/(100(n+1)) | 99n/(100(n+1))
η_n | 33·10²⁰/n | 33·10²⁰/n | 33·10²⁰/n | 33·10²⁰/n | 33·10²⁰/n | 33·10²⁰/n
λ_n | n/(L(n+1)) | n/(L(n+1)) | n/(L(n+1)) | 31n/(20L(n+1)) | 31n/(20L(n+1)) | 31n/(20L(n+1))
θ | 0.5 | 0.5 | 0.5 | 0.5 | 0.09 | 0.99
It is clear that these control parameters satisfy all conditions of Theorem 2. In this experiment, we take f(x) = 0.25·x. By Theorem 2, the sequence {x_n} converges to the original image, and its convergence behavior in each case is shown by the PSNR values in Table 1.
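For instance, the parameters of case (f) can be passed to the AVFBA sketch above as follows (an illustration; L denotes the Lipschitz constant of ∇h and is set to a placeholder here):

```python
L = 1.0                            # placeholder; use the largest eigenvalue of A^T A in practice
alpha = lambda n: 1.0 / (33.0 * n)
beta  = lambda n: n / (300.0 * n + 1.0)
gamma = lambda n: 1.0 - alpha(n) - beta(n)
sigma = lambda n: 99.0 * n / (100.0 * (n + 1.0))
eta   = lambda n: 33.0e20 / n
lam   = lambda n: 31.0 * n / (20.0 * L * (n + 1.0))
f     = lambda x: 0.25 * x         # the contraction used in this experiment
theta = 0.99                       # theta = 0.99 corresponds to case (f)
```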
The second experiment considers the behavior of the sequence {x_n} for different choices of contraction mapping f(x) = k·x. We consider the following four cases:
Case (1) f ( x ) = 0.1 · x
Case (2) f ( x ) = 0.5 · x
Case (3) f ( x ) = 0.75 · x
Case (4) f ( x ) = 0.95 · x
We choose the parameters as follows:
α_n = 1/(33n), β_n = n/(300n+1), γ_n = 1 − α_n − β_n, σ_n = 99n/(100(n+1)), η_n = 33·10²⁰/n, λ_n = 31n/(20L(n+1)).
Here θ = 0.99, so θ_n = min{ 0.99, 10²⁰/(n²‖x_n − x_{n−1}‖) } if x_n ≠ x_{n−1}, and θ_n = 0.99 otherwise.
From Table 2, the PSNR values at x_500 for the four cases are 32.212326, 32.929758, 33.580650 and 34.170032, respectively. We also observe from Table 2 and Figure 2 that when k is close to 1, the PSNR values are higher than for smaller k.
The last experiment compares the quality of the image restored by our algorithm with the quality of the image restored by the FISTA method [3]. All parameters in Theorem 2 are the same as in the previous experiment, and we use f(x) = 0.95·x.
For the FISTA method [3], we set
h(x) = ‖Ax − b‖²_2,  g(x) = λ‖x‖_1  and  T = prox_{λg}(I − λ∇h),
where the parameter λ = 1/L.
The resulting PSNR values of our algorithm and of FISTA are given in Table 3, Table 4 and Figure 3. The images restored by both algorithms at the 500th iteration are presented in Figure 4.
Our experiments show that our algorithm performs better at restoring the blurred image than FISTA [3].

5. Conclusions

In this paper, we present a new accelerated fixed point algorithm that combines the viscosity approximation and inertial techniques for solving image restoration problems. A strong convergence theorem for the proposed method, Theorem 1, is established under suitable conditions. We then compare its convergence behavior with other methods by applying it to an image restoration problem. We find that our algorithm has better convergence behavior than FISTA, a well-known and popular method for image restoration.

Author Contributions

Funding acquisition and supervision, S.S.; writing—review and editing and software, J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Chiang Mai University, Chiang Mai, Thailand.

Acknowledgments

The authors would like to thank Chiang Mai University, Chiang Mai, Thailand for the financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tikhonov, A.N.; Arsenin, V.Y. Solution of Ill-Posed Problems; V.H. Winston: Washington, DC, USA, 1977.
  2. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288.
  3. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
  4. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: New York, NY, USA, 2017.
  5. Parikh, N.; Boyd, S. Proximal algorithms. Found. Trends Optim. 2014, 1, 127–239.
  6. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
  7. Reich, S. Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67, 274–276.
  8. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
  9. Verma, M.; Sahu, D.R.; Shukla, K.K. VAGA: A novel viscosity-based accelerated gradient algorithm: Convergence analysis and applications. Appl. Intell. 2018, 48, 2613–2627.
  10. Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k²). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
  11. Rockafellar, R.T. On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33, 209–216.
  12. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
  13. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390.
  14. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  15. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136.
  16. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012.
  17. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
  18. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
  19. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984.
  20. Nakajo, K.; Shimoji, K.; Takahashi, W. Strong convergence to a common fixed point of families of nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 2007, 8, 11–34.
  21. Takahashi, W. Viscosity approximation methods for countable families of nonexpansive mappings in Banach spaces. Nonlinear Anal. 2009, 70, 719–734.
  22. Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 35–44.
  23. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge Studies in Advanced Mathematics, Volume 28; Cambridge University Press: Cambridge, UK, 1990.
  24. Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. On a strongly nonexpansive sequence in a Hilbert space. J. Nonlinear Convex Anal. 2007, 8, 471–489.
  25. Xu, H.K. Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 2002, 65, 109–113.
  26. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  27. Thung, K.-H.; Raveendran, P. A survey of image quality measures. In Proceedings of the IEEE Technical Postgraduates (TECHPOS) International Conference, Kuala Lumpur, Malaysia, 14–15 December 2009; pp. 1–4.
Figure 1. Original image.
Figure 2. Comparison of four cases in Theorem 2.
Figure 3. Schonbrunn palace.
Figure 4. Original palace.
Table 1. The values of PSNR of the six cases in Theorem 2 at x_1, x_5, x_10, x_25, x_50, x_100, x_250, x_500.

No. Iterations | Case (a) | Case (b) | Case (c) | Case (d) | Case (e) | Case (f)
1 | 19.435266 | 19.440476 | 19.528889 | 19.686937 | 19.686937 | 19.686937
5 | 20.539773 | 20.568442 | 20.849492 | 21.151064 | 20.888207 | 21.598216
10 | 21.126851 | 21.174059 | 21.606305 | 22.068696 | 21.554700 | 23.435160
25 | 22.136789 | 22.228847 | 22.949968 | 23.587612 | 22.788089 | 25.383824
50 | 23.125241 | 23.246531 | 24.087158 | 24.748261 | 23.893124 | 27.283463
100 | 24.198479 | 24.331965 | 25.199378 | 25.852069 | 25.000206 | 29.440357
250 | 25.605450 | 25.741976 | 26.608033 | 27.265159 | 26.408127 | 31.342524
500 | 26.645135 | 26.784026 | 27.674898 | 28.365891 | 27.466181 | 32.443743
Table 2. The values of PSNR of the four cases in Theorem 2 at x_1, x_5, x_10, x_25, x_50, x_100, x_250, x_500.

No. Iterations | Case (1) | Case (2) | Case (3) | Case (4)
1 | 19.645629 | 19.742816 | 19.781947 | 19.800875
5 | 21.595741 | 21.600924 | 21.601822 | 21.601207
10 | 23.430366 | 23.441553 | 23.445829 | 23.447628
25 | 25.433348 | 25.289174 | 25.178923 | 25.079196
50 | 27.329109 | 27.167872 | 26.997762 | 26.819210
100 | 29.375156 | 29.468395 | 29.352150 | 29.121785
250 | 31.091760 | 31.811123 | 32.260577 | 32.431003
500 | 32.212326 | 32.929758 | 33.580650 | 34.170032
Table 3. The values of PSNR at x_1, x_5, x_10, x_25, x_50, x_100, x_250, x_500 (Schonbrunn palace).

No. Iterations | Our Algorithm | FISTA Method
1 | 19.800875 | 19.785363
5 | 21.601207 | 20.774354
10 | 23.447628 | 21.530504
25 | 25.079196 | 23.502806
50 | 26.819210 | 25.401943
100 | 29.121785 | 27.342763
250 | 32.431003 | 30.290802
500 | 34.170032 | 32.356010
Table 4. The values of PSNR at x_1, x_5, x_10, x_25, x_50, x_100, x_250, x_500 (Cameraman).

No. Iterations | Our Algorithm | FISTA Method
1 | 21.738865 | 21.730405
5 | 23.165748 | 22.429808
10 | 24.702169 | 23.081284
25 | 25.986847 | 24.741192
50 | 27.383389 | 26.213578
100 | 29.320097 | 27.633632
250 | 32.287817 | 29.889833
500 | 34.262873 | 32.016958
