
A Fast Image Restoration Algorithm Based on a Fixed Point and Optimization Method

by Adisak Hanjing and Suthep Suantai *
1 Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Data Science Research Center, Research Center in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(3), 378; https://doi.org/10.3390/math8030378
Submission received: 19 February 2020 / Revised: 5 March 2020 / Accepted: 6 March 2020 / Published: 8 March 2020
(This article belongs to the Special Issue Mathematical Approaches to Image Processing with Applications)

Abstract:
In this paper, a new accelerated fixed point algorithm for finding a common fixed point of a family of nonexpansive operators is introduced and studied, and a weak convergence result and the convergence behavior of the proposed method are proven and discussed. Using our main result, we obtain a new accelerated image restoration algorithm, called the forward-backward modified W-algorithm (FBMWA), for solving a minimization problem in the form of the sum of two proper, lower semi-continuous and convex functions. As applications, we apply the FBMWA algorithm to solving image restoration problems. We analyze and compare the convergence behavior of our method with the others for image deblurring. We found that our algorithm has higher efficiency than the others in the literature.

1. Introduction

It is well-known that fixed point theory has relevant applications in many branches of analysis [1,2,3,4,5,6,7,8,9] and can be applied in many areas of science, applied science, engineering, economics and medicine, such as image/signal processing [10,11,12,13,14,15,16,17] and modeling intensity-modulated radiation therapy treatment planning [18,19,20]. Many real-life problems can be equivalently formulated as fixed point problems, meaning that one has to find a fixed point of some operator. One of the most popular fixed point algorithms is the Picard iteration. Up to now, many fixed point algorithms have been introduced and studied to solve various kinds of real-world problems, such as the Mann iteration [7], Ishikawa iteration [4], SP-iteration [21] and W-iteration [22].
The image restoration problem is an important topic in image processing. This problem can be transformed into an optimization problem using the least absolute shrinkage and selection operator (LASSO) model. There are several optimization and fixed point methods for such problems; see [23,24,25,26,27] for examples. One of the most popular methods for solving the image restoration problem is FISTA (the fast iterative shrinkage-thresholding algorithm). This method was shown by Beck and Teboulle in [28] to be more efficient than the previous methods in the literature.
In this paper, we focus our attention on a new accelerated algorithm developed from the viewpoint of fixed point theory. For instance, Wongyai and Suantai [22] proposed the W-algorithm for solving a fixed point problem of a continuous function, and proved that the W-algorithm has a better convergence rate than the others. Motivated by this idea, we propose a new algorithm, obtained by modifying the W-algorithm, for solving a common fixed point problem of a countable family of nonexpansive operators. We also prove the convergence of our algorithm under some conditions, apply it to solving the image restoration problem, and compare its efficiency with other methods in terms of PSNR (peak signal-to-noise ratio).
The organization of this paper is as follows. In Section 2, we briefly describe the background and related algorithms in the literature. In Section 3, we collect some notation and useful lemmas for the subsequent sections. In Section 4, we introduce our proposed algorithm for the common fixed point problem and give theoretical proofs of its convergence under particular conditions. In Section 5, we apply our algorithm to solving the image restoration problem and compare its performance with other existing methods. Finally, we conclude our work in Section 6.

2. Background and Related Algorithms

In this section, we recall the background of a mathematical model for the image restoration problem and some related algorithms used to solve this problem. A simple model for the image restoration problem is given by the linear model:

$$Ay = a + v, \qquad (1)$$

where $y \in \mathbb{R}^{n \times 1}$ is the original image, $a \in \mathbb{R}^{m \times 1}$ is the observed image, $v$ is additive noise and $A \in \mathbb{R}^{m \times n}$ is the blurring operator. In order to solve problem (1), Tibshirani [29] introduced the least absolute shrinkage and selection operator (LASSO) for solving the following minimization problem:

$$\min_{y} \left\{ \|Ay - a\|_2^2 + \lambda \|y\|_1 \right\}, \qquad (2)$$

where $\lambda > 0$ is a regularization parameter, $\|y\|_1 = \sum_{i=1}^{n} |y_i|$ and $\|y\|_2 = \sqrt{\sum_{i=1}^{n} |y_i|^2}$. A general minimization problem which includes (2) as a special case is the following:
$$\min_{y \in \mathbb{R}^n} \left\{ F(y) := f(y) + h(y) \right\}, \qquad (3)$$

where $h : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is proper, convex and lower semi-continuous, and $f : \mathbb{R}^n \to \mathbb{R}$ is a convex and differentiable function such that $\nabla f$ is Lipschitz continuous with constant $L > 0$. The set of minimizers of $F$ is denoted by $\mathrm{Argmin}(F)$.
The classical forward-backward splitting (FBS) algorithm [30] for problem (3) is given by the following iterative formula:
$$x_{n+1} = \underbrace{\mathrm{prox}_{c_n h}}_{\text{backward step}} \, \underbrace{(I - c_n \nabla f)}_{\text{forward step}} (x_n), \quad c_n \in (0, 2/L), \ n \in \mathbb{N}, \qquad (4)$$

where $x_1 \in \mathbb{R}^n$, $c_n$ is the step size, $I$ is the identity operator and $\mathrm{prox}_h$ is the proximity operator of $h$ defined by $\mathrm{prox}_h(x) := \mathrm{argmin}_y \{ h(y) + \frac{1}{2}\|x - y\|_2^2 \}$; see [31] for more details. In the literature, FBS is also called the iterative denoising method [32], Landweber iteration [33] or the fixed point continuation (FPC) algorithm [34]. In the last several years, some acceleration techniques have been proposed in order to improve the convergence rate of such algorithms.
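The paper's experiments were implemented in Matlab; for concreteness, the following is a minimal NumPy sketch of one FBS iteration (4) for the LASSO objective (2), assuming $f(y) = \|Ay - a\|_2^2$ so that $\nabla f(y) = 2A^T(Ay - a)$. The helper names soft_threshold and fbs_step are ours, not the paper's.

import numpy as np

def soft_threshold(x, tau):
    # Closed-form proximity operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fbs_step(x, A, a, c, lam):
    # One FBS iteration for min ||A y - a||_2^2 + lam * ||y||_1,
    # with c in (0, 2/L), L the Lipschitz constant of the gradient below.
    grad = 2.0 * A.T @ (A @ x - a)                 # forward (gradient) step
    return soft_threshold(x - c * grad, c * lam)   # backward (proximal) step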
The inertial forward-backward splitting (IFBS) was proposed by Moudafi and Oliny in [35] as follows:
$$\begin{cases} y_n = x_n + \theta_n (x_n - x_{n-1}), \\ x_{n+1} = \mathrm{prox}_{c_n h}(y_n - c_n \nabla f(x_n)), \end{cases} \quad c_n \in (0, 2/L), \ n \in \mathbb{N}, \qquad (5)$$

where $x_0, x_1 \in \mathbb{R}^n$ and $\theta_n$ is the inertial parameter, which controls the momentum $x_n - x_{n-1}$. The convergence of IFBS can be guaranteed under proper choices of $c_n$ and $\theta_n$.
The fast iterative shrinkage-thresholding algorithm (FISTA) was proposed by Beck and Teboulle in [28] as follows:
$$\begin{cases} y_n = \mathrm{prox}_{\frac{1}{L} h}\left(x_n - \frac{1}{L} \nabla f(x_n)\right), \\ t_{n+1} = \dfrac{1 + \sqrt{1 + 4 t_n^2}}{2}, \\ \theta_n = \dfrac{t_n - 1}{t_{n+1}}, \\ x_{n+1} = y_n + \theta_n (y_n - y_{n-1}), \end{cases} \quad n \in \mathbb{N}, \qquad (6)$$

where $x_1 = y_0 \in \mathbb{R}^n$ and $t_1 = 1$. They established the convergence rate of FISTA and applied it to the image restoration problem. Very recently, Liang and Schonlieb [36] modified FISTA by replacing the update of $t_{n+1}$ with $t_{n+1} = \left(p + \sqrt{q + r t_n^2}\right)/2$, where $p, q > 0$ and $0 < r \le 4$, and proved a weak convergence theorem for the resulting FISTA variant.
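A NumPy sketch of (6), under the same assumptions as the FBS sketch above and reusing its soft_threshold helper; the function name and loop structure are ours.

import numpy as np

def fista(x1, A, a, lam, L, n_iter=100):
    # FISTA (6): a proximal-gradient step at x_n, then extrapolation with
    # momentum weight theta_n = (t_n - 1) / t_{n+1}.
    x, y_prev, t = x1.copy(), x1.copy(), 1.0
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - a)
        y = soft_threshold(x - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        x = y + ((t - 1.0) / t_next) * (y - y_prev)   # momentum step
        y_prev, t = y, t_next
    return x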
The new accelerated proximal gradient algorithm (NAGA) was proposed by Verma and Shukla in [37] as follows:
$$\begin{cases} y_n = x_n + \theta_n (x_n - x_{n-1}), \\ x_{n+1} = T_n [(1 - \alpha_n) y_n + \alpha_n T_n y_n], \end{cases} \quad n \in \mathbb{N}, \qquad (7)$$

where $x_0, x_1 \in \mathbb{R}^n$ and $T_n$ is the forward-backward operator of $f$ and $h$ with respect to $c_n \in (0, 2/L)$. They proved the convergence and stability of the algorithm under a few specific conditions, and applied it to solving convex minimization problems with sparsity-inducing regularizers for the multitask learning framework.
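A sketch of one NAGA iteration (7), with the $n$-th forward-backward operator passed in as a function; the name naga_step is ours.

def naga_step(x_prev, x, T, theta, alpha):
    # One iteration of (7): inertial extrapolation followed by a two-stage
    # application of the forward-backward operator T = T_n.
    y = x + theta * (x - x_prev)
    x_next = T((1.0 - alpha) * y + alpha * T(y))
    return x, x_next   # the new pair (x_n, x_{n+1})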

3. Preliminaries

Let us review some important definitions and useful lemmas needed for the convergence theorem presented in the next section.
Let $H$ be a real Hilbert space with norm $\|\cdot\|$ and inner product $\langle \cdot, \cdot \rangle$, and let $C$ be a nonempty closed convex subset of $H$. A mapping $T : C \to C$ is said to be an $L$-Lipschitz operator if there exists $L > 0$ such that $\|Tx - Ty\| \le L \|x - y\|$ for all $x, y \in C$. An $L$-Lipschitz operator with $L = 1$ is called a nonexpansive operator. The set of all fixed points of $T$ is denoted by $\mathrm{Fix}(T)$; i.e., $\mathrm{Fix}(T) := \{x \in C : Tx = x\}$. Let $\{T_n\}$ and $\Omega$ be families of nonexpansive operators of $C$ into itself such that $\emptyset \neq \mathrm{Fix}(\Omega) \subset \Gamma := \bigcap_{n=1}^{\infty} \mathrm{Fix}(T_n)$, where $\mathrm{Fix}(\Omega)$ is the set of all common fixed points of $\Omega$, and let $\omega_w(x_n)$ denote the set of all weak-cluster points of a bounded sequence $\{x_n\}$ in $C$. A sequence $\{T_n\}$ is said to satisfy the NST (Nakajo, Shimoji and Takahashi) condition (I) with $\Omega$ [38] if, for every bounded sequence $\{x_n\}$ in $C$,

$$\lim_{n \to +\infty} \|x_n - T_n x_n\| = 0 \implies \lim_{n \to +\infty} \|x_n - T x_n\| = 0 \quad \text{for all } T \in \Omega. \qquad (8)$$

If $\Omega$ is a singleton, i.e., $\Omega = \{T\}$, then $\{T_n\}$ is said to satisfy the NST-condition (I) with $T$. Later, Nakajo et al. [39] introduced the NST*-condition, which is more general than the NST-condition (I). A sequence $\{T_n\}$ is said to satisfy the NST*-condition if, for every bounded sequence $\{x_n\}$ in $C$,

$$\lim_{n \to +\infty} \|x_n - T_n x_n\| = \lim_{n \to +\infty} \|x_n - x_{n+1}\| = 0 \implies \omega_w(x_n) \subset \Gamma. \qquad (9)$$
It follows directly from the above definitions that if $\{T_n\}$ satisfies the NST-condition (I), then $\{T_n\}$ satisfies the NST*-condition. Observe that if $h : H \to \mathbb{R} \cup \{+\infty\}$ is a proper, convex and lower semicontinuous function, then for all $x \in H$, $\mathrm{prox}_h(x)$ exists and is unique; cf. [40]. It is well-known that

$$x^* \in \mathrm{Argmin}(f + h) \iff 0 \in \partial h(x^*) + \nabla f(x^*),$$

where $\partial h$ is the subdifferential of $h$ defined by $\partial h(x^*) := \{u \in H : h(x) \ge \langle u, x - x^* \rangle + h(x^*) \ \forall x \in H\}$ and $\nabla f$ is the gradient of $f$; see [41] for more details.
Note that the subdifferential operator $\partial h$ is maximal monotone (see [42] for more details) and a solution of (3) is characterized as a fixed point of the following operator:

$$x^* \in \mathrm{Argmin}(f + h) \iff x^* = J_{c \partial h}(I - c \nabla f)(x^*) = \mathrm{prox}_{ch}(I - c \nabla f)(x^*),$$

where $c > 0$ and $J_{\partial h}$ is the resolvent of $\partial h$ defined by $J_{\partial h} = (I + \partial h)^{-1}$. If $c \in (0, 2/L)$, we know that $\mathrm{prox}_{ch}(I - c \nabla f)$ is a nonexpansive mapping. The operator $\mathrm{prox}_{ch}(I - c \nabla f)$ is known as the forward-backward operator of $f$ and $h$ with respect to $c$. We end this section with the following lemmas, which will be used to prove our main results.
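For instance (a standard closed form, stated here for concreteness), when $h = \lambda \|\cdot\|_1$ as in problem (2), the proximity operator reduces to componentwise soft-thresholding:

$$\mathrm{prox}_{ch}(x)_i = \mathrm{sign}(x_i) \max\{|x_i| - c\lambda, 0\},$$

so the forward-backward operator $\mathrm{prox}_{ch}(I - c \nabla f)$ is cheap to evaluate in this setting.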
Lemma 1.
([43]). Let $H$ be a real Hilbert space, $h : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex and lower semi-continuous function, and $f : H \to \mathbb{R}$ be convex and differentiable with $\nabla f$ $L$-Lipschitz for some $L > 0$. If $T_n$ is the forward-backward operator of $f$ and $h$ with respect to $c_n \in (0, 2/L)$ such that $c_n$ converges to $c$, then $\{T_n\}$ satisfies the NST-condition (I) with $T$, where $T$ is the forward-backward operator of $f$ and $h$ with respect to $c \in (0, 2/L)$.
Lemma 2.
([44]). Let H be a real Hilbert space. Then the following results hold:
(i) 
$\|t u + (1 - t) v\|^2 = t \|u\|^2 + (1 - t) \|v\|^2 - t(1 - t) \|u - v\|^2$ for all $t \in [0, 1]$ and $u, v \in H$;
(ii) 
$\|u \pm v\|^2 = \|u\|^2 \pm 2 \langle u, v \rangle + \|v\|^2$ for all $u, v \in H$.
Lemma 3.
([45]). Let $\{a_n\}$, $\{b_n\}$ and $\{\gamma_n\}$ be sequences of nonnegative real numbers such that $a_{n+1} \le (1 + \gamma_n) a_n + b_n$ for all $n \in \mathbb{N}$. If $\sum_{n=1}^{\infty} \gamma_n < +\infty$ and $\sum_{n=1}^{\infty} b_n < +\infty$, then $\lim_{n \to +\infty} a_n$ exists.
Lemma 4.
(Opial [46]). Let $H$ be a Hilbert space and $\{x_n\}$ be a sequence in $H$ such that there exists a nonempty set $\Gamma \subset H$ satisfying:
(i) 
For every $p \in \Gamma$, $\lim_{n \to +\infty} \|x_n - p\|$ exists;
(ii) 
Each weak-cluster point of the sequence { x n } is in Γ .
Then there exists $x^* \in \Gamma$ such that $\{x_n\}$ converges weakly to $x^*$.

4. Main Results

In this section, we propose a modified W-algorithm, called "MWA", for finding a common fixed point of a countable family of nonexpansive operators in a real Hilbert space. We are now ready to introduce the MWA algorithm under the following assumptions:
  • $H$ is a real Hilbert space;
  • $\{T_n : H \to H\}$ is a family of nonexpansive operators;
  • $\{T_n\}$ satisfies the NST*-condition;
  • $\Gamma := \bigcap_{n=1}^{\infty} \mathrm{Fix}(T_n) \neq \emptyset$.
We aim to prove a weak convergence theorem for Algorithm 1 (MWA) to a common fixed point of the family $\{T_n\}$. We start with the following supporting lemma.
Lemma 5.
Let $\{a_n\}$ and $\{\theta_n\}$ be sequences of nonnegative real numbers such that

$$a_{n+1} \le (1 + \theta_n) a_n + \theta_n a_{n-1}, \quad n \in \mathbb{N}.$$

Then the following holds:

$$a_{n+1} \le K \cdot \prod_{j=1}^{n} (1 + 2\theta_j), \quad \text{where } K = \max\{a_1, a_2\}.$$

Moreover, if $\sum_{n=1}^{\infty} \theta_n < +\infty$, then $\{a_n\}$ is bounded.
Proof. 
Given $K = \max\{a_1, a_2\}$, we have

$$a_3 \le (1 + 2\theta_2) K,$$
$$a_4 \le (1 + \theta_3) a_3 + \theta_3 a_2 \le (1 + \theta_3)(1 + 2\theta_2) K + \theta_3 K \le (1 + \theta_3)(1 + 2\theta_2) K + \theta_3 (1 + 2\theta_2) K = (1 + 2\theta_3)(1 + 2\theta_2) K.$$

By mathematical induction, we obtain

$$a_{n+1} \le K \cdot \prod_{j=2}^{n} (1 + 2\theta_j) \le K \cdot \prod_{j=1}^{n} (1 + 2\theta_j).$$

Note that the infinite product $\prod (1 + 2\theta_j)$ converges whenever $\sum \theta_j$ converges. Hence $\{a_n\}$ is bounded if $\sum_{n=1}^{\infty} \theta_n < +\infty$.  ☐
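As an illustrative (non-rigorous) numerical check of the bound, the following snippet runs the recurrence of Lemma 5 with equality, the worst case, for a summable choice of $\theta_n$; all names are ours.

import numpy as np

theta = 1.0 / np.arange(1, 60) ** 2        # theta[k-1] = theta_k, summable
a = [1.0, 2.0]                             # a_1, a_2
for n in range(2, 59):
    # worst case of Lemma 5: a_{n+1} = (1 + theta_n) a_n + theta_n a_{n-1}
    a.append((1.0 + theta[n - 1]) * a[-1] + theta[n - 1] * a[-2])
K = max(a[0], a[1])
bound = K * np.prod(1.0 + 2.0 * theta[: len(a) - 1])
print(a[-1] <= bound)                      # True: a_{n+1} <= K * prod(1 + 2 theta_j)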
Algorithm 1 (MWA): A modified W-algorithm.
  • Initialization. Take $x_0, x_1 \in H$ arbitrarily and set $n = 1$.
  • Step 1. Compute $w_n$, $z_n$, $y_n$ and $x_{n+1}$ via

$$\begin{cases} w_n = x_n + \theta_n (x_n - x_{n-1}), \\ z_n = (1 - \gamma_n) w_n + \gamma_n T_n w_n, \\ y_n = (1 - \beta_n) T_n w_n + \beta_n T_n z_n, \\ x_{n+1} = (1 - \alpha_n) T_n z_n + \alpha_n T_n y_n. \end{cases}$$

  • Then update $n := n + 1$ and go to Step 1.
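A minimal NumPy-style sketch of one MWA iteration, with the $n$-th operator passed in as a function T (the name mwa_step is ours):

def mwa_step(x_prev, x, T, theta, gamma, beta, alpha):
    # One iteration of Algorithm 1 (MWA) for a nonexpansive operator T = T_n.
    w = x + theta * (x - x_prev)               # inertial step
    Tw = T(w)
    z = (1.0 - gamma) * w + gamma * Tw
    Tz = T(z)
    y = (1.0 - beta) * Tw + beta * Tz
    x_next = (1.0 - alpha) * Tz + alpha * T(y)
    return x, x_next                           # the new pair (x_n, x_{n+1})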
Now, we present the main convergence result of Algorithm 1 (MWA) under some suitable control conditions.
Theorem 6.
Let $\{x_n\}$ be a sequence generated by Algorithm 1 (MWA), where $\gamma_n \in [a_1, b_1] \subset (0, 1)$, $\beta_n \in [0, 1]$, $\alpha_n \in [0, b_2] \subset [0, 1)$, $\theta_n \ge 0$ and $\sum_{n=1}^{\infty} \theta_n < +\infty$. Then the following hold:
(i) 
$\|x_{n+1} - x^*\| \le K \cdot \prod_{j=1}^{n} (1 + 2\theta_j)$, where $K = \max\{\|x_1 - x^*\|, \|x_2 - x^*\|\}$ and $x^* \in \Gamma$;
(ii) 
$\{x_n\}$ converges weakly to a point in $\Gamma$.
Proof. 
(i) Let $x^* \in \Gamma$. By Algorithm 1, we have

$$\|w_n - x^*\| \le \|x_n - x^*\| + \theta_n \|x_n - x_{n-1}\|, \qquad (10)$$

$$\|z_n - x^*\| \le (1 - \gamma_n) \|w_n - x^*\| + \gamma_n \|T_n w_n - x^*\| \le \|w_n - x^*\|, \qquad (11)$$

$$\|y_n - x^*\| \le (1 - \beta_n) \|T_n w_n - x^*\| + \beta_n \|T_n z_n - x^*\| \le (1 - \beta_n) \|w_n - x^*\| + \beta_n \|z_n - x^*\| \le \|w_n - x^*\|, \qquad (12)$$

and

$$\|x_{n+1} - x^*\| \le (1 - \alpha_n) \|T_n z_n - x^*\| + \alpha_n \|T_n y_n - x^*\| \le (1 - \alpha_n) \|z_n - x^*\| + \alpha_n \|y_n - x^*\| \le \|w_n - x^*\|. \qquad (13)$$

From (10) and (13), we get

$$\|x_{n+1} - x^*\| \le \|x_n - x^*\| + \theta_n \|x_n - x_{n-1}\|. \qquad (14)$$

This implies

$$\|x_{n+1} - x^*\| \le (1 + \theta_n) \|x_n - x^*\| + \theta_n \|x_{n-1} - x^*\|. \qquad (15)$$

Applying Lemma 5, we get $\|x_{n+1} - x^*\| \le K \cdot \prod_{j=1}^{n} (1 + 2\theta_j)$, where $K = \max\{\|x_1 - x^*\|, \|x_2 - x^*\|\}$. Since $\sum_{n=1}^{\infty} \theta_n < +\infty$, it follows that $\{x_n\}$ is bounded. This implies $\sum_{n=1}^{\infty} \theta_n \|x_n - x_{n-1}\| < +\infty$.
(ii) By (14) and Lemma 3, we obtain that $\lim_{n \to +\infty} \|x_n - x^*\|$ exists. By Lemma 2(ii), we obtain

$$\|w_n - x^*\|^2 \le \|x_n - x^*\|^2 + \theta_n^2 \|x_n - x_{n-1}\|^2 + 2 \theta_n \|x_n - x^*\| \|x_n - x_{n-1}\|. \qquad (16)$$

By Lemma 2(i), we obtain

$$\|z_n - x^*\|^2 = (1 - \gamma_n) \|w_n - x^*\|^2 + \gamma_n \|T_n w_n - x^*\|^2 - \gamma_n (1 - \gamma_n) \|w_n - T_n w_n\|^2 \le \|w_n - x^*\|^2 - \gamma_n (1 - \gamma_n) \|w_n - T_n w_n\|^2. \qquad (17)$$

Using Lemma 2(i) again, together with (12) and (17), we get

$$\|x_{n+1} - x^*\|^2 \le (1 - \alpha_n) \|T_n z_n - x^*\|^2 + \alpha_n \|T_n y_n - x^*\|^2 \le (1 - \alpha_n) \|z_n - x^*\|^2 + \alpha_n \|y_n - x^*\|^2 \le \|w_n - x^*\|^2 - (1 - \alpha_n) \gamma_n (1 - \gamma_n) \|w_n - T_n w_n\|^2 \le \|x_n - x^*\|^2 + \theta_n^2 \|x_n - x_{n-1}\|^2 + 2 \theta_n \|x_n - x^*\| \|x_n - x_{n-1}\| - (1 - \alpha_n) \gamma_n (1 - \gamma_n) \|w_n - T_n w_n\|^2. \qquad (18)$$

Since $\sum_{n=1}^{\infty} \theta_n \|x_n - x_{n-1}\| < +\infty$ and $\lim_{n \to +\infty} \|x_n - x^*\|$ exists, while $\alpha_n \le b_2 < 1$ and $\gamma_n \in [a_1, b_1] \subset (0, 1)$, it follows from (18) that $\lim_{n \to +\infty} \|w_n - T_n w_n\| = 0$. Note that

$$\|x_n - T_n x_n\| \le \|x_n - w_n\| + \|w_n - T_n w_n\| + \|T_n w_n - T_n x_n\| \le 2 \|x_n - w_n\| + \|w_n - T_n w_n\|, \qquad (19)$$

and

$$\|y_n - z_n\| \le \|y_n - w_n\| + \|w_n - z_n\| \le \|T_n w_n - w_n\| + \beta_n \|T_n z_n - T_n w_n\| + \|w_n - z_n\| \le \|T_n w_n - w_n\| + \beta_n \|z_n - w_n\| + \|w_n - z_n\| = (1 + (1 + \beta_n) \gamma_n) \|T_n w_n - w_n\|. \qquad (20)$$

These imply, by Algorithm 1, that $\lim_{n \to +\infty} \|x_n - T_n x_n\| = 0$ and $\lim_{n \to +\infty} \|y_n - z_n\| = 0$. By Algorithm 1 and the nonexpansivity of $T_n$, we have

$$\|x_{n+1} - x_n\| \le \|T_n z_n - x_n\| + \alpha_n \|T_n y_n - T_n z_n\| \le \|T_n z_n - T_n x_n\| + \|T_n x_n - x_n\| + \alpha_n \|y_n - z_n\| \le \|z_n - x_n\| + \|T_n x_n - x_n\| + \alpha_n \|y_n - z_n\| \le \|z_n - w_n\| + \|w_n - x_n\| + \|T_n x_n - x_n\| + \alpha_n \|y_n - z_n\|,$$

with $\|w_n - x_n\| = \theta_n \|x_n - x_{n-1}\| \to 0$ and $\|z_n - w_n\| = \gamma_n \|T_n w_n - w_n\| \to 0$.

These imply $\lim_{n \to +\infty} \|x_n - x_{n+1}\| = 0$. Since $\{T_n\}$ satisfies the NST*-condition, we get $\omega_w(x_n) \subset \Gamma := \bigcap_{n=1}^{\infty} \mathrm{Fix}(T_n)$. Therefore, by Opial's lemma (Lemma 4), we conclude that $\{x_n\}$ converges weakly to a point in $\Gamma$. This completes the proof. ☐
Finally, we apply our proposed algorithm, MWA, to solving the minimization problem (3) by setting $T_n = \mathrm{prox}_{c_n h}(I - c_n \nabla f)$, the forward-backward operator of $f$ and $h$ with respect to $c_n$, where $h : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is proper, convex and lower semi-continuous, and $f : \mathbb{R}^n \to \mathbb{R}$ is a convex and differentiable function such that $\nabla f$ is Lipschitz continuous with constant $L > 0$.
By using the convergence result of Algorithm 1 (MWA) in Theorem 6, we obtain the convergence of Algorithm 2 (FBMWA) as in the following theorem.
Algorithm 2 (FBMWA): A forward-backward modified W-algorithm.
  • Initialization. Take $x_0, x_1 \in H$ arbitrarily and set $n = 1$.
  • Step 1. Compute $w_n$, $z_n$, $y_n$ and $x_{n+1}$ via

$$\begin{cases} w_n = x_n + \theta_n (x_n - x_{n-1}), \\ z_n = (1 - \gamma_n) w_n + \gamma_n \, \mathrm{prox}_{c_n h}(I - c_n \nabla f) w_n, \\ y_n = (1 - \beta_n) \, \mathrm{prox}_{c_n h}(I - c_n \nabla f) w_n + \beta_n \, \mathrm{prox}_{c_n h}(I - c_n \nabla f) z_n, \\ x_{n+1} = (1 - \alpha_n) \, \mathrm{prox}_{c_n h}(I - c_n \nabla f) z_n + \alpha_n \, \mathrm{prox}_{c_n h}(I - c_n \nabla f) y_n. \end{cases}$$

  • Then update $n := n + 1$ and go to Step 1.
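Putting the pieces together, here is a sketch of FBMWA for the LASSO objective, reusing soft_threshold and mwa_step from the sketches above and the step size $c_n = n / (L(n+1))$ reported in Section 5; theta_fn is a user-supplied momentum schedule, and all names are ours.

def fbmwa(x1, A, a, lam, L, n_iter, theta_fn, gamma=0.5, beta=0.5, alpha=0.5):
    # Algorithm 2 (FBMWA): MWA with T_n = prox_{c_n h}(I - c_n grad f),
    # where f(y) = ||A y - a||_2^2 and h(y) = lam * ||y||_1.
    x_prev, x = x1.copy(), x1.copy()
    for n in range(1, n_iter + 1):
        c = n / (L * (n + 1))   # c_n in (0, 2/L), converging to 1/L
        T = lambda u, c=c: soft_threshold(u - c * (2.0 * A.T @ (A @ u - a)), c * lam)
        x_prev, x = mwa_step(x_prev, x, T, theta_fn(n), gamma, beta, alpha)
    return x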
Theorem 7.
Let $\{x_n\}$ be a sequence generated by Algorithm 2 (FBMWA), where $\gamma_n$, $\beta_n$, $\alpha_n$ and $\theta_n$ are the same as in Theorem 6, and $c_n \in (0, 2/L)$ is such that $\{c_n\}$ converges to $c$. Then the following hold:
(i) 
$\|x_{n+1} - x^*\| \le K \cdot \prod_{j=1}^{n} (1 + 2\theta_j)$, where $K = \max\{\|x_1 - x^*\|, \|x_2 - x^*\|\}$ and $x^* \in \mathrm{Argmin}(f + h)$;
(ii) 
$\{x_n\}$ converges weakly to a point in $\mathrm{Argmin}(f + h)$.
Proof. 
Let $T$ be the forward-backward operator of $f$ and $h$ with respect to $c$, and $T_n$ the forward-backward operator of $f$ and $h$ with respect to $c_n$; that is, $T := \mathrm{prox}_{ch}(I - c \nabla f)$ and $T_n := \mathrm{prox}_{c_n h}(I - c_n \nabla f)$. Then $T$ and $T_n$ are nonexpansive operators for all $n$, and $\mathrm{Fix}(T) = \bigcap_{n=1}^{\infty} \mathrm{Fix}(T_n) = \mathrm{Argmin}(f + h)$; see Proposition 26.1 in [41]. By Lemma 1, $\{T_n\}$ satisfies the NST-condition (I) with $T$, and hence the NST*-condition. Therefore, we obtain the required result directly from Theorem 6. ☐

5. Simulated Results for the Image Restoration Problem

In this section, we apply Algorithm 2 (FBMWA) to solving the image restoration problem (2) and compare the deblurring efficiency of the FBMWA algorithm with FBS [30], IFBS [35], FISTA [28] and NAGA [37]. Our programs were written in Matlab, and all algorithms were run on a laptop (Intel Core i5, 4.00 GB RAM). All algorithms were applied to solving problem (2), where $f(y) = \|Ay - a\|_2^2$, $h(y) = \lambda \|y\|_1$, $A$ is the blurring operator, $a$ is the observed image and $\lambda$ is the regularization parameter.
In this experiment, two gray-scale images, Lenna and Cameraman, of size $256 \times 256$, are considered as the original images. The images went through a Gaussian blur of size $9 \times 9$ with standard deviation $\sigma = 4$. We use the peak signal-to-noise ratio (PSNR) [47] to measure the performance of the algorithms, where $\mathrm{PSNR}(x_n)$ is defined by
$$\mathrm{PSNR}(x_n) = 10 \log_{10} \left( \frac{255^2}{\mathrm{MSE}} \right),$$

where $\mathrm{MSE} = \frac{1}{M} \|x_n - \bar{x}\|_2^2$, $M$ is the number of image samples and $\bar{x}$ is the original image. It is noted that a higher PSNR at the same iteration number indicates a higher quality of the deblurred image. The relative error is defined by

$$\frac{\|x_n - x_{n-1}\|_2}{\|x_{n-1}\|_2} \le tol,$$
where $tol$ denotes a prescribed tolerance value. For these experiments, the regularization parameter was chosen as $\lambda = 5 \times 10^{-5}$, and the initial image was the blurred image. The Lipschitz constant $L$ was computed as the maximum eigenvalue of the matrix $A^T A$. We set the parameters as follows:
$$\alpha_n = \beta_n = \gamma_n = 0.5, \quad c_n = \frac{n}{L(n+1)}, \quad c = \frac{1}{L}, \quad \theta_n \text{ defined by (6)} \quad (\text{for NAGA}),$$

$$\theta_n = \begin{cases} \dfrac{1}{n^2 \|x_n - x_{n-1}\|_2^2} & \text{if } x_n \neq x_{n-1}, \\ 0 & \text{otherwise}, \end{cases} \quad (\text{for IFBS}),$$

and

$$\theta_n = \begin{cases} \dfrac{t_n - 1}{t_{n+1}} & \text{if } 1 \le n < N, \\ \dfrac{1}{2^n} & \text{otherwise}, \end{cases} \quad (\text{for FBMWA}),$$

where $t_n$ is the sequence defined by $t_1 = 1$ and $t_{n+1} = \frac{1 + \sqrt{1 + 4 t_n^2}}{2}$, and $N$ is the iteration number at which we stop. The results of deblurring the Cameraman and Lenna images at the 1000th iteration of the studied algorithms are shown in Table 1 and Figures 1 and 2.
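For reference, a small sketch of the quality measure and the FBMWA momentum schedule above (helper names are ours; the experiments themselves were run in Matlab):

import math
import numpy as np

def psnr(x, x_bar):
    # PSNR in dB with peak value 255; MSE = (1/M) * ||x - x_bar||_2^2.
    mse = np.mean((np.asarray(x, float) - np.asarray(x_bar, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def make_theta_fbmwa(N):
    # FISTA-type momentum (t_n - 1)/t_{n+1} for n < N, then the summable
    # tail 1/2^n, so that Theorem 7 guarantees convergence.
    thetas, t = [], 1.0
    for _ in range(1, N):
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        thetas.append((t - 1.0) / t_next)
        t = t_next
    return lambda n: thetas[n - 1] if n < N else 0.5 ** n

A hypothetical usage, with lam, L, A, a and the blurred image x_blur as assumed inputs, would be x_rec = fbmwa(x_blur, A, a, lam, L, 1000, make_theta_fbmwa(1000)).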
From Table 1 and the PSNR graphs in Figure 1, we see that FBMWA gives a higher PSNR than the other algorithms, so the image restoration performance of FBMWA is better than those of FBS, IFBS, FISTA and NAGA. We also see that, after 1000 iterations, FBMWA gives a better deblurring result for Cameraman and Lenna, as shown in Figure 2.
The results of deblurring the Cameraman and Lenna images at the 1000th iteration of FBMWA under different parameters $\theta_n$ are shown in Table 2 and Figures 3 and 4, where $\theta_n$ is defined by

$$\theta_n = \begin{cases} \mu_n & \text{if } 1 \le n < N, \\ \dfrac{1}{2^n} & \text{otherwise}, \end{cases} \qquad (21)$$

where $\mu_n$ is a sequence of nonnegative real numbers and $N$ is the iteration number at which we stop. We observe that the inertial parameter $\theta_n$ used by FBMWA plays an important role in improving the quality of the deblurred image. It is noted that if $\{\theta_n\}$ is nondecreasing and tends to 1, the PSNR values increase, as shown in Table 2 and Figure 3. The results of deblurring with FBMWA under the seven different inertial parameter choices are shown in Figure 4. We also observe from Table 2 that the parameter $\mu_n = \frac{n}{n+1}$ gives a higher PSNR than the others.
Open problem: Note that we can choose $\theta_n$ as in (21) for Algorithm 2, and the convergence of Algorithm 2 is then guaranteed by Theorem 7. Can $\theta_n$ as defined by (6) be used for Algorithm 2?

6. Conclusions

In this work, we proposed a modified W-algorithm for solving a common fixed point problem of a family of nonexpansive operators and proved a weak convergence result for the proposed method under some control conditions. We applied our main result to solving a minimization problem in the form of the sum of two proper, lower semi-continuous and convex functions. As applications, we applied our algorithm, FBMWA, to solving image restoration problems. Moreover, we carried out numerical experiments to illustrate the performance of the studied algorithms and showed that the PSNR of FBMWA is higher than those of FBS [30], IFBS [35], FISTA [28] and NAGA [37].

Author Contributions

Funding acquisition and supervision, S.S.; writing—review and editing and software, A.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Thailand Science Research and Innovation under the project IRN62W0007 and Chiang Mai University.

Acknowledgments

This work was supported by Thailand Science Research and Innovation under the project IRN62W0007. The opinions expressed in this report are those of the researchers and do not necessarily reflect the views of Thailand Science Research and Innovation. The second author would also like to thank Chiang Mai University for the financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bauschke, H.H. The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space. J. Math. Anal. Appl. 1996, 202, 150–159.
  2. Chidume, C.E.; Bashir, A. Convergence of path and iterative method for families of nonexpansive mappings. Appl. Anal. 2008, 67, 117–129.
  3. Halpern, B. Fixed points of nonexpansive maps. Bull. Am. Math. Soc. 1967, 73, 957–961.
  4. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150.
  5. Klen, R.; Manojlović, V.; Simić, S.; Vuorinen, M. Bernoulli inequality and hypergeometric functions. Proc. Am. Math. Soc. 2014, 142, 559–573.
  6. Kunze, H.; La Torre, D.; Mendivil, F.; Vrscay, E.R. Generalized fractal transforms and self-similar objects in cone metric spaces. Comput. Math. Appl. 2012, 64, 1761–1769.
  7. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
  8. Radenović, S.; Rhoades, B.E. Fixed point theorem for two non-self mappings in cone metric spaces. Comput. Math. Appl. 2009, 57, 1701–1707.
  9. Todorcević, V. Harmonic Quasiconformal Mappings and Hyperbolic Type Metrics; Springer Nature Switzerland AG: Basel, Switzerland, 2019.
  10. Byrne, C. Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
  11. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
  12. Cholamjiak, P.; Shehu, Y. Inertial forward-backward splitting method in Banach spaces with application to compressed sensing. Appl. Math. 2019, 64, 409–435.
  13. Combettes, P.L.; Wajs, V. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
  14. Kunrada, K.; Pholasa, N.; Cholamjiak, P. On convergence and complexity of the modified forward-backward method involving new linesearches for convex minimization. Math. Meth. Appl. Sci. 2019, 42, 1352–1362.
  15. Suantai, S.; Eiamniran, N.; Pholasa, N.; Cholamjiak, P. Three-step projective methods for solving the split feasibility problems. Mathematics 2019, 7, 712.
  16. Suantai, S.; Kesornprom, S.; Cholamjiak, P. Modified proximal algorithms for finding solutions of the split variational inclusions. Mathematics 2019, 7, 708.
  17. Thong, D.V.; Cholamjiak, P. Strong convergence of a forward-backward splitting method with a new step size for solving monotone inclusions. Comput. Appl. Math. 2019, 38.
  18. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
  19. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple set split feasibility problem and its applications. Inverse Probl. 2005, 21, 2071–2084.
  20. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256.
  21. Phuengrattana, W.; Suantai, S. On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235, 3006–3014.
  22. Wongyai, S.; Suantai, S. Convergence theorem and rate of convergence of a new iterative method for continuous functions on closed interval. In Proceedings of the AMM and APAM Conference, Bangkok, Thailand, 23–25 May 2016; pp. 111–118.
  23. Ben-Tal, A.; Nemirovski, A. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications; MPS/SIAM Ser. Optim.; SIAM: Philadelphia, PA, USA, 2001.
  24. Bioucas-Dias, J.; Figueiredo, M. A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Trans. Image Process. 2007, 16, 2992–3004.
  25. Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 1998, 20, 33–61.
  26. Donoho, D.L.; Johnstone, I.M. Adapting to unknown smoothness via wavelet shrinkage. J. Am. Stat. Assoc. 1995, 90, 1200–1224.
  27. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597.
  28. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
  29. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Methodol. 1996, 58, 267–288.
  30. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
  31. Moreau, J.J. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. Fr. 1965, 93, 273–299.
  32. Figueiredo, M.A.T.; Nowak, R.D. An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12, 906–916.
  33. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457.
  34. Hale, E.T.; Yin, W.; Zhang, Y. A fixed-point continuation method for l1-regularized minimization. SIAM J. Optim. 2008, 19, 1107–1130.
  35. Moudafi, A.; Oliny, M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454.
  36. Liang, J.; Schonlieb, C.B. Improving FISTA: Faster, smarter and greedier. arXiv 2018, arXiv:1811.01430.
  37. Verma, M.; Shukla, K.K. A new accelerated proximal gradient technique for regularized multitask learning framework. Pattern Recognit. Lett. 2017, 95, 98–103.
  38. Nakajo, K.; Shimoji, K.; Takahashi, W. Strong convergence to common fixed points of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 2007, 8, 11–34.
  39. Nakajo, K.; Shimoji, K.; Takahashi, W. On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal. Theory Methods Appl. 2009, 71, 112–119.
  40. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: New York, NY, USA, 2004.
  41. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011.
  42. Burachik, R.S.; Iusem, A.N. Set-Valued Mappings and Enlargements of Monotone Operators; Springer Science+Business Media: New York, NY, USA, 2007.
  43. Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 21–30.
  44. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
  45. Tan, K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308.
  46. Moudafi, A.; Al-Shemas, E. Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 2013, 1, 1–11.
  47. Thung, K.; Raveendran, P. A survey of image quality measures. In Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December 2009; pp. 1–4.
Figure 1. The graphs of peak signal-to-noise ratio (PSNR) for Cameraman (left) and Lenna (right).
Figure 2. Results for deblurring of the Cameraman and Lenna images.
Figure 3. The graphs of PSNR of the FBMWA under different parameters $\theta_n$ for Cameraman (left) and Lenna (right).
Figure 4. Results of FBMWA for deblurring of the Cameraman and Lenna images.
Table 1. Comparison of image restorations of the studied methods.

                        Cameraman                    Lenna
Algorithm     PSNR      Tol.             PSNR       Tol.
FBS           27.1953   2.32 × 10⁻⁵      29.4907    1.73 × 10⁻⁵
IFBS          27.1953   2.32 × 10⁻⁵      29.4907    1.73 × 10⁻⁵
FISTA         34.6659   4.13 × 10⁻⁵      36.9324    3.34 × 10⁻⁵
NAGA          35.6670   4.15 × 10⁻⁵      37.8088    3.32 × 10⁻⁵
FBMWA         36.2783   4.21 × 10⁻⁵      38.2989    3.31 × 10⁻⁵

Table 2. Effective parameters of our method for image restoration.

                                                  Cameraman                   Lenna
Case   Parameters                        PSNR      Tol.            PSNR       Tol.
1      μ_n = 1/2ⁿ                        27.8911   2.13 × 10⁻⁵     30.1603    1.66 × 10⁻⁵
2      μ_n = 10/n²                       27.9003   2.12 × 10⁻⁵     30.1693    1.65 × 10⁻⁵
3      μ_n = 0.5                         28.7146   2.00 × 10⁻⁵     30.9771    1.60 × 10⁻⁵
4      μ_n = 0.9                         30.9920   1.81 × 10⁻⁵     33.2838    1.47 × 10⁻⁵
5      μ_n = (t_n − 1)/t_{n+1}, t_1 = 1, 36.2783   4.21 × 10⁻⁵     38.2989    3.31 × 10⁻⁵
       t_{n+1} = (1 + √(1 + 4t_n²))/2
6      μ_n = n/(n + 1)                   37.0979   1.63 × 10⁻⁴     38.8562    1.30 × 10⁻⁴
7      μ_n = 1                           30.6832   9.13 × 10⁻⁴     32.7996    7.07 × 10⁻⁴
