Article

A Fast Fixed-Point Algorithm for Convex Minimization Problems and Its Application in Image Restoration Problems

by Panadda Thongpaen 1 and Rattanakorn Wattanataweekul 2,*
1 Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Department of Mathematics, Statistics and Computer, Faculty of Science, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(20), 2619; https://doi.org/10.3390/math9202619
Submission received: 21 September 2021 / Revised: 12 October 2021 / Accepted: 12 October 2021 / Published: 17 October 2021
(This article belongs to the Special Issue Functional Inequalities and Equations)

Abstract: In this paper, we introduce a new iterative method using an inertial technique for approximating a common fixed point of an infinite family of nonexpansive mappings in a Hilbert space. A weak convergence theorem for the proposed method is established under suitable conditions. Furthermore, we apply our main results to solve convex minimization problems and image restoration problems.

1. Introduction

Let us first describe a mathematical model for the image restoration problem, as well as some algorithms that will be employed to solve it. A simple model of the image restoration problem is the following linear system:
$$ Ax = b + y, \tag{1} $$
where $A \in \mathbb{R}^{m \times n}$ is the blurring operator, $x \in \mathbb{R}^{n \times 1}$ is an image, $b \in \mathbb{R}^{m \times 1}$ is the observed image, and $y$ is additive noise. The image restoration problem consists of finding the original image $x \in \mathbb{R}^{n \times 1}$ that satisfies (1). It is well known that the image restoration problem is a dominant topic in image processing.
In order to find a solution of the problem (1), we minimize the additive noise to approximate the original image by solving the least squares (LS) problem:
$$ \min_{x} \|Ax - b\|_2^2, \tag{2} $$
where $\|\cdot\|_2$ is the $\ell_2$-norm defined by $\|x\|_2 = \sqrt{\sum_{k=1}^{n} |x_k|^2}$. Many iterative methods can solve the problem (2), such as the Richardson iteration; see [1] for the details. However, the number of unknown variables is much larger than the number of observations, which makes (2) an ill-posed problem: the computed solution has a huge norm and is thus meaningless; see [2,3]. Therefore, in order to handle the ill-conditioned least squares problem, several regularization methods were introduced. One of the most popular is the Tikhonov regularization [4], which solves the following minimization problem:
$$ \min_{x} \|Ax - b\|_2^2 + \beta \|Lx\|_2^2, \tag{3} $$
where $\beta$ is a positive regularization parameter and $L \in \mathbb{R}^{m \times n}$ is called the Tikhonov matrix. In the standard form, $L$ is set to be the identity. In statistics, (3) is known as ridge regression. To improve on the original LS problem (2) and classical regularization techniques such as subset selection and ridge regression (3) for solving (1), Tibshirani [5] introduced the least absolute shrinkage and selection operator (LASSO) model:
$$ \min_{x} \|Ax - b\|_2^2 + \beta \|x\|_1, \tag{4} $$
where $\beta$ is a positive regularization parameter and $\|x\|_1 = \sum_{k=1}^{n} |x_k|$. This model can be used to solve the problem (1) by optimization methods; see [5,6] for instance. The problem (4) can be extended to the following general formulation:
$$ \min_{x} \phi(x) + \psi(x). \tag{5} $$
The existence of solutions of Problem (5) is usually established under the following assumptions:
(i)
$\psi$ is a proper, convex, and lower semicontinuous function from a Hilbert space $H$ into $\mathbb{R} \cup \{+\infty\}$;
(ii)
$\phi$ is a convex differentiable function from $H$ into $\mathbb{R}$ with $\nabla\phi$ being $\ell$-Lipschitz continuous for some $\ell > 0$, that is, $\|\nabla\phi(x) - \nabla\phi(y)\| \le \ell\|x - y\|$ for all $x, y \in H$.
The set of all solutions of the problem (5) is denoted by $\operatorname{argmin}(\phi + \psi)$.
It is well known that $x^* \in \operatorname{argmin}(\phi + \psi)$ if and only if $x^*$ solves the inclusion problem:
$$ 0 \in \partial\psi(x^*) + \nabla\phi(x^*), \tag{6} $$
where $\nabla\phi$ is the gradient operator of $\phi$ and $\partial\psi$ is the subdifferential of $\psi$; see [7] for more details. Furthermore, Parikh and Boyd [8] solved the problem (6) by using the proximal gradient technique, that is, if $x^*$ solves (6), then:
$$ x^* = \operatorname{prox}_{\kappa\psi}(I - \kappa\nabla\phi)(x^*), $$
where $\kappa$ is a positive parameter, $\operatorname{prox}_{\kappa\psi} = (I + \kappa\partial\psi)^{-1}$, and $I$ is the identity operator. This means that $x^*$ is a fixed point of the operator $\operatorname{prox}_{\kappa\psi}(I - \kappa\nabla\phi)$. In [9,10,11], the authors established many important properties of proximal operators; for instance, $\operatorname{prox}_{\kappa\psi}$ is well defined with a full domain, single-valued, and even nonexpansive.
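For $\psi = \beta\|\cdot\|_1$, the proximity operator has the well-known closed form of componentwise soft-thresholding. The following NumPy sketch (an illustration we add here, not code from the paper) evaluates it:
```python
import numpy as np

def prox_l1(x, t):
    # prox_{t||.||_1}(x) = (I + t * subdifferential of ||.||_1)^{-1}(x):
    # componentwise soft-thresholding, shrinking each entry toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([3.0, -0.2, 0.5, -1.5])
print(prox_l1(x, 1.0))  # entries with |x_k| <= 1 vanish; others shrink by 1
```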
In addition, the classical forward–backward splitting algorithm (FBS) [12] is generated by $x_1 \in \mathbb{R}^n$ and:
$$ x_{n+1} = \operatorname{prox}_{\kappa_n\psi}(I - \kappa_n\nabla\phi)(x_n), \tag{7} $$
where $\kappa_n \in (0, 2/\ell)$ is the step size and $I$ is the identity operator with $\operatorname{prox}_{\psi}$ the proximity operator of $\psi$ defined by $\operatorname{prox}_{\psi}(x) := \arg\min_{y \in \mathbb{R}^n}\{\psi(y) + \frac{1}{2}\|x - y\|_2^2\}$; see [13] for more details. Because of its simplicity, the method (7) has been widely utilized to solve the problem (5), and as a result, it has been enhanced by many works, as seen in [11,14,15,16].
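To make (7) concrete, here is a minimal NumPy sketch (our illustration; the matrix sizes and the value of $\beta$ are assumptions) applying FBS to the LASSO problem (4):
```python
import numpy as np

def prox_l1(x, t):
    # Soft-thresholding: the proximity operator of t*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fbs(A, b, beta, n_iter=500):
    # Forward-backward splitting (7) for min_x ||Ax - b||_2^2 + beta*||x||_1.
    lip = 2.0 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of grad(phi)
    kappa = 1.0 / lip                       # constant step size in (0, 2/lip)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - b)      # gradient of phi(x) = ||Ax - b||^2
        x = prox_l1(x - kappa * grad, kappa * beta)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80); x_true[:5] = 1.0     # sparse ground truth
x_hat = fbs(A, A @ x_true, beta=0.01)
print(np.round(x_hat[:8], 3))               # close to x_true on its support
```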
From the work [8], it is worth noting that the fixed-point theory can be applied to solve the problem (5). The fixed-point theory plays a very important role for solving many problems in science, data science, economics, medicine, and engineering; see [11,17,18,19,20,21,22,23] for more details. There are several methods for finding the approximate solutions of fixed-point problems; see [24,25,26,27,28,29,30]. Shoaib [31] proved a result of Al Mazrooei et al. [32] by using new contractive conditions on a closed set in b-multiplicative metric space. They obtained a unique common solution of Fredholm multiplicative integral equations. Recently, Kim [33] introduced the coupled Mann pair iterative scheme for a common coupled fixed point in Hilbert spaces.
In order to accelerate the convergence rate of the studied methods, Polyak [34] introduced the technique for improving the rate of convergence and giving a better convergence behavior of those methods by adding an inertial step. The following iterative methods with an inertial step can be used for improving the performance of (7).
The inertial forward–backward splitting (IFBS) was presented by Moudafi and Oliny in [35] as follows:
$$ \begin{aligned} z_n &= x_n + \rho_n(x_n - x_{n-1}), \\ x_{n+1} &= \operatorname{prox}_{\kappa_n\psi}(z_n - \kappa_n\nabla\phi(x_n)), \end{aligned} $$
where $x_0, x_1 \in \mathbb{R}^n$, $\kappa_n \in (0, 2/\ell)$, and $\rho_n$ is the inertial parameter that controls the momentum $x_n - x_{n-1}$. The convergence of the IFBS can be guaranteed by proper choices of $\kappa_n$ and $\rho_n$.
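In the same setting as the FBS sketch above, the following illustration adds the inertial step (the rule for $\rho_n$ mirrors the IFBS row of Table 1 in Section 4; capping it at 1 is our assumption):
```python
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ifbs(A, b, beta, n_iter=500):
    # Inertial forward-backward splitting: FBS plus a momentum term
    # rho_n * (x_n - x_{n-1}) added before the forward-backward step.
    lip = 2.0 * np.linalg.norm(A, 2) ** 2
    kappa = 1.0 / lip
    x_prev = x = np.zeros(A.shape[1])
    for n in range(1, n_iter + 1):
        d = x - x_prev
        # Inertial parameter vanishing fast enough (cf. Table 1 of Section 4).
        rho = min(1.0, 1.0 / (n ** 2 * (d @ d))) if d @ d > 0 else 0.0
        z = x + rho * d
        grad = 2.0 * A.T @ (A @ x - b)       # gradient evaluated at x_n
        x_prev, x = x, prox_l1(z - kappa * grad, kappa * beta)
    return x
```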
The fast iterative shrinkage-thresholding algorithm (FISTA) is defined by:
$$ \begin{aligned} z_n &= \operatorname{prox}_{\frac{1}{\ell}\psi}\Big(x_n - \frac{1}{\ell}\nabla\phi(x_n)\Big), \\ t_{n+1} &= \frac{1 + \sqrt{1 + 4t_n^2}}{2}, \quad \rho_n = \frac{t_n - 1}{t_{n+1}}, \\ x_{n+1} &= z_n + \rho_n(z_n - z_{n-1}), \end{aligned} $$
where $n \in \mathbb{N}$, $x_1 = z_0 \in \mathbb{R}^n$, and $t_1 = 1$. This notion was suggested by Beck and Teboulle [6], who also proved FISTA's convergence rate and applied it to solve image restoration problems.
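A NumPy sketch of FISTA in the same LASSO setting (our illustration; problem data are assumed as before):
```python
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, beta, n_iter=500):
    # FISTA: a forward-backward step with step size 1/l followed by
    # Nesterov-type extrapolation with momentum rho_n = (t_n - 1)/t_{n+1}.
    lip = 2.0 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant l of grad(phi)
    x = z_prev = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - b)
        z = prox_l1(x - grad / lip, beta / lip)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        x = z + ((t - 1.0) / t_next) * (z - z_prev)
        z_prev, t = z, t_next
    return z_prev
```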
Recently, Verma and Shukla [16] proposed the new accelerated proximal gradient algorithm (NAGA) as follows:
$$ \begin{aligned} z_n &= x_n + \rho_n(x_n - x_{n-1}), \\ y_n &= (1 - \tau_n)z_n + \tau_n \operatorname{prox}_{\kappa_n\psi}(z_n - \kappa_n\nabla\phi(z_n)), \\ x_{n+1} &= \operatorname{prox}_{\kappa_n\psi}(y_n - \kappa_n\nabla\phi(y_n)), \end{aligned} $$
where $n \in \mathbb{N}$, $x_0, x_1 \in \mathbb{R}^n$, $\tau_n \in (0, 1)$, $\kappa_n \in (0, 2/\ell)$, and $\rho_n \in (0, 1)$ is the inertial parameter, which controls the momentum $x_n - x_{n-1}$. The authors proved NAGA's convergence theorem under the condition $\rho_n\|x_n - x_{n-1}\|^2 \to 0$ and applied it to solve the convex minimization problem for a multitask learning framework using sparsity-inducing regularizers.
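Similarly, a sketch of NAGA (the value of $\tau_n$ and the inertial rule below are our assumptions, chosen so that $\rho_n\|x_n - x_{n-1}\|^2 \to 0$):
```python
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def naga(A, b, beta, tau=0.5, n_iter=500):
    # NAGA: an inertial step followed by two forward-backward evaluations.
    lip = 2.0 * np.linalg.norm(A, 2) ** 2
    fb = lambda u, k: prox_l1(u - k * 2.0 * A.T @ (A @ u - b), k * beta)
    x_prev = x = np.zeros(A.shape[1])
    for n in range(1, n_iter + 1):
        kappa = n / (lip * (n + 1))          # kappa_n -> 1/l, inside (0, 2/l)
        d = x - x_prev
        rho = min(0.9, 1.0 / (n ** 2 * (d @ d))) if d @ d > 0 else 0.0
        z = x + rho * d                      # inertial step
        y = (1.0 - tau) * z + tau * fb(z, kappa)
        x_prev, x = x, fb(y, kappa)
    return x
```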
Motivated and inspired by all the works mentioned above, in this article, we introduce a new iterative method for the approximation of a common fixed point of an infinite family of nonexpansive mappings in Hilbert spaces. We prove a weak convergence theorem for the introduced method under some suitable conditions. Furthermore, we apply our main results to solving a convex minimization problem and image restoration problems.
This paper is organized as follows: The next section collects some preliminary results that will be utilized throughout the paper. In Section 3, we introduce a new accelerated algorithm using the inertial technique and analyze its weak convergence to a solution of (5). After that, we apply our main results to solving image restoration problems, and some numerical experiments of the proposed methods are given in Section 4. In the last section, we present a brief conclusion of our work.

2. Preliminaries

Throughout this article, let $\mathbb{N}$ and $\mathbb{R}$ be the set of positive integers and the set of real numbers, respectively. Let H be a real Hilbert space with the inner product $\langle \cdot, \cdot \rangle$ and the norm $\|\cdot\|$ induced by the inner product. For a sequence $\{x_n\}$ in H, the weak and strong convergence of $\{x_n\}$ to $x \in H$ are denoted by $x_n \rightharpoonup x$ and $x_n \to x$, respectively.
Let C be a nonempty closed convex subset of a real Hilbert space H. A mapping T from C into itself is said to be an $\ell$-Lipschitz operator if there exists $\ell > 0$ such that:
$$ \|Tx - Ty\| \le \ell\|x - y\|, \quad \forall x, y \in C. $$
If $\ell = 1$, then T is called a nonexpansive operator. The set of all fixed points of T is denoted by $F(T)$, that is, $F(T) = \{x \in C : Tx = x\}$. Let $\{T_n\}$ and $\Omega$ be families of nonexpansive mappings of C into itself such that $\emptyset \neq F(\Omega) \subset F := \bigcap_{n=1}^{\infty} F(T_n)$, where $F(\Omega)$ is the set of all common fixed points of $\Omega$.
A sequence $\{T_n\}$ is said to satisfy the NST condition (I) with $\Omega$ [36] if, for every bounded sequence $\{x_n\}$ in C,
$$ \lim_{n\to+\infty} \|x_n - T_n x_n\| = 0 \implies \lim_{n\to+\infty} \|x_n - T x_n\| = 0, \quad \forall T \in \Omega. $$
Note that $\{T_n\}$ is said to satisfy the NST condition (I) with T when $\Omega$ is a singleton, that is, $\Omega = \{T\}$. After that, the concept of the NST condition was introduced by Nakajo et al. [37], and examples of mappings that satisfy the NST condition were given.
A sequence $\{T_n\}$ is said to satisfy the NST condition if, for every bounded sequence $\{x_n\}$ in C,
$$ \lim_{n\to\infty} \|x_n - T_n x_n\| = 0 = \lim_{n\to\infty} \|x_n - x_{n+1}\| \implies \omega_w(x_n) \subset F(\Omega), $$
where $\omega_w(x_n)$ is the set of all weak cluster points of $\{x_n\}$.
Note that the NST condition is more general than the NST condition (I): it follows directly from the definitions above that if $\{T_n\}$ satisfies the NST condition (I), then $\{T_n\}$ satisfies the NST condition.
Lemma 1
([38,39]). Let H be a real Hilbert space. For any $u, v \in H$ and $r \in [0, 1]$, the following results hold:
(i) 
$\|u - v\|^2 = \|u\|^2 - 2\langle u, v \rangle + \|v\|^2$;
(ii) 
$\|ru + (1 - r)v\|^2 = r\|u\|^2 + (1 - r)\|v\|^2 - r(1 - r)\|u - v\|^2$.
The identity in Lemma 1 (ii) implies that the following equality holds:
$$ \|ru + sv + tw\|^2 = r\|u\|^2 + s\|v\|^2 + t\|w\|^2 - rs\|u - v\|^2 - st\|v - w\|^2 - rt\|u - w\|^2 $$
for all $u, v, w \in H$ and $r, s, t \in [0, 1]$ with $r + s + t = 1$.
In proving our main theorem, we need the following lemmas.
Lemma 2
([40]). Let $\{u_n\}$, $\{v_n\}$, and $\{w_n\}$ be sequences of nonnegative real numbers such that $u_{n+1} \le (1 + w_n)u_n + v_n$ for all $n \in \mathbb{N}$. If $\sum_{n=1}^{\infty} w_n < +\infty$ and $\sum_{n=1}^{\infty} v_n < +\infty$, then $\lim_{n\to+\infty} u_n$ exists.
Lemma 3
([35]). Let H be a Hilbert space, and let $\{u_n\}$ be a sequence in H such that there exists a nonempty set $F \subset H$ satisfying: (i) for every $p \in F$, $\lim_{n\to+\infty} \|u_n - p\|$ exists; (ii) any weak cluster point of $\{u_n\}$ is in F. Then, there exists $u^* \in F$ with $\{u_n\}$ converging weakly to $u^*$.
We end this section with the following lemmas, which will be used to prove our main results in the next section.
Lemma 4
([41]). Let $\{u_n\}$ and $\{\rho_n\}$ be sequences of nonnegative real numbers such that $u_{n+1} \le (1 + \rho_n)u_n + \rho_n u_{n-1}$ for all $n \in \mathbb{N}$. Then, the following holds:
$$ u_{n+1} \le M \cdot \prod_{j=1}^{n} (1 + 2\rho_j), $$
where $M = \max\{u_1, u_2\}$. Moreover, if $\sum_{n=1}^{\infty} \rho_n < +\infty$, then $\{u_n\}$ is bounded.
Recall the definition of the forward–backward operator of proper, convex, and lower semicontinuous functions $\phi, \psi: \mathbb{R}^n \to (-\infty, +\infty]$ as follows: the forward–backward operator T is defined by $T := \operatorname{prox}_{\kappa\psi}(I - \kappa\nabla\phi)$ for $\kappa > 0$, where $\nabla\phi$ is the gradient operator of $\phi$ and $\operatorname{prox}_{\kappa\psi}(x) := \arg\min_{y \in H}\{\psi(y) + \frac{1}{2\kappa}\|y - x\|^2\}$ (see [7,11]). The operator $\operatorname{prox}_{\kappa\psi}$ was defined by Moreau in 1962 [42] and is called the proximity operator with respect to $\kappa$ and $\psi$. We know that T is a nonexpansive mapping whenever $\kappa \in (0, 2/\ell)$.
Lemma 5
([14]). Let ψ be a proper, convex, and lower semicontinuous function from a Hilbert space H into $\mathbb{R} \cup \{+\infty\}$, and let ϕ be a convex differentiable function from H into $\mathbb{R}$ with $\nabla\phi$ being ℓ-Lipschitz continuous for some $\ell > 0$. Let T be the forward–backward operator of ϕ and ψ with respect to κ. If $\{T_n\}$ is a family of forward–backward operators of ϕ and ψ with respect to $\kappa_n$ such that $\kappa_n \to \kappa$, with $\kappa_n, \kappa \in (0, 2/\ell)$, then $\{T_n\}$ satisfies the NST condition (I) with T.

3. Main Results

In this section, we begin by formally introducing a new algorithm for finding a common fixed point of a countable family of nonexpansive mappings in a real Hilbert space H. Let $\{T_n : H \to H\}$ be a family of nonexpansive mappings, and let $\{\tau_n\}$, $\{\epsilon_n\}$, $\{\mu_n\}$, and $\{\zeta_n\}$ be sequences in $(0, 1)$.
Next, we prove a weak convergence theorem of Algorithm 1 for a family of nonexpansive mappings in a real Hilbert space.
Algorithm 1: (MSA): A modified S-algorithm.
Initial. Take $x_0, x_1 \in H$ arbitrarily and $n = 1$. Choose $\rho_n \ge 0$ with $\sum_{n=1}^{\infty} \rho_n < +\infty$.
Step 1. Compute $z_n$, $y_n$, and $x_{n+1}$ using:
$$ \begin{aligned} z_n &= x_n + \rho_n(x_n - x_{n-1}), \\ y_n &= (1 - \tau_n - \epsilon_n)z_n + \tau_n T_n z_n + \epsilon_n T_n x_n, \\ x_{n+1} &= (1 - \mu_n - \zeta_n)y_n + \mu_n T_n z_n + \zeta_n T_n y_n. \end{aligned} $$

 Then, update $n := n + 1$, and go to Step 1.
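To illustrate the iteration, here is a minimal NumPy sketch of Algorithm 1 for the simple case $T_n := T$ for every n (a single nonexpansive map, which satisfies the NST condition trivially); the parameter values mirror Table 1 in Section 4, and the summable inertial rule $\rho_n = 1/2^n$ is our assumption:
```python
import numpy as np

def msa(T, x0, x1, tau=0.95, eps=0.005, mu=0.005, zeta=0.95, n_iter=200):
    # Algorithm 1 (MSA) with T_n := T for all n; rho_n = 1/2^n is summable.
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        rho = 0.5 ** n
        z = x + rho * (x - x_prev)                    # inertial step
        y = (1 - tau - eps) * z + tau * T(z) + eps * T(x)
        x_prev, x = x, (1 - mu - zeta) * y + mu * T(z) + zeta * T(y)
    return x

# Example: T is the metric projection onto the unit ball, a nonexpansive map.
proj_ball = lambda v: v / max(1.0, np.linalg.norm(v))
x = msa(proj_ball, np.array([5.0, -3.0]), np.array([4.0, -2.0]))
print(x)   # a fixed point of T, i.e., a point of the unit ball
```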
Theorem 1.
Let H be a real Hilbert space, and let $\{T_n : H \to H\}$ be a family of nonexpansive mappings such that $F := \bigcap_{n=1}^{\infty} F(T_n) \neq \emptyset$. Let $\{x_n\}$ be a sequence generated by Algorithm 1 and $\{\tau_n\}$, $\{\epsilon_n\}$, $\{\mu_n\}$, and $\{\zeta_n\}$ be sequences in $(0, 1)$ satisfying the following conditions:
(i) 
$0 < \liminf_{n\to\infty} (\tau_n + \epsilon_n) \le \limsup_{n\to\infty} (\tau_n + \epsilon_n) < 1$;
(ii) 
$0 < \liminf_{n\to\infty} \tau_n$;
(iii) 
$0 < \limsup_{n\to\infty} \mu_n < 1$.
If { T n } satisfies the NST condition, then { x n } converges weakly to an element in F .
Proof. 
Let $p \in F$. Then, by Algorithm 1 and the nonexpansiveness of $T_n$, we have:
$$ \|z_n - p\| = \|x_n + \rho_n(x_n - x_{n-1}) - p\| \le \|x_n - p\| + \rho_n\|x_n - x_{n-1}\|, \tag{8} $$
and:
$$ \begin{aligned} \|y_n - p\| &= \|(1 - \tau_n - \epsilon_n)z_n + \tau_n T_n z_n + \epsilon_n T_n x_n - p\| \\ &\le (1 - \tau_n - \epsilon_n)\|z_n - p\| + \tau_n\|T_n z_n - p\| + \epsilon_n\|T_n x_n - p\| \\ &\le (1 - \tau_n - \epsilon_n)\|z_n - p\| + \tau_n\|z_n - p\| + \epsilon_n\|x_n - p\| \\ &= (1 - \epsilon_n)\|z_n - p\| + \epsilon_n\|x_n - p\|. \end{aligned} $$
It follows that:
$$ \|y_n - p\| \le (1 - \epsilon_n)\big[\|x_n - p\| + \rho_n\|x_n - x_{n-1}\|\big] + \epsilon_n\|x_n - p\| \le \|x_n - p\| + \rho_n\|x_n - x_{n-1}\|. $$
The above inequalities imply:
$$ \begin{aligned} \|x_{n+1} - p\| &= \|(1 - \mu_n - \zeta_n)y_n + \mu_n T_n z_n + \zeta_n T_n y_n - p\| \\ &\le (1 - \mu_n - \zeta_n)\|y_n - p\| + \mu_n\|T_n z_n - p\| + \zeta_n\|T_n y_n - p\| \\ &\le (1 - \mu_n - \zeta_n)\|y_n - p\| + \mu_n\|z_n - p\| + \zeta_n\|y_n - p\| \\ &= (1 - \mu_n)\|y_n - p\| + \mu_n\|z_n - p\| \\ &\le (1 - \mu_n)\big[\|x_n - p\| + \rho_n\|x_n - x_{n-1}\|\big] + \mu_n\big[\|x_n - p\| + \rho_n\|x_n - x_{n-1}\|\big] \\ &= \|x_n - p\| + \rho_n\|x_n - x_{n-1}\| \\ &\le \|x_n - p\| + \rho_n(\|x_n - p\| + \|p - x_{n-1}\|) \\ &= (1 + \rho_n)\|x_n - p\| + \rho_n\|x_{n-1} - p\|. \end{aligned} \tag{9} $$
By Lemma 4, we obtain that $\|x_{n+1} - p\| \le M \cdot \prod_{j=1}^{n}(1 + 2\rho_j)$, where $M = \max\{\|x_1 - p\|, \|x_2 - p\|\}$. Since $\sum_{n=1}^{\infty} \rho_n < +\infty$, we obtain that $\{x_n\}$ is bounded. The boundedness of $\{x_n\}$ together with $\sum_{n=1}^{\infty} \rho_n < +\infty$ gives $\sum_{n=1}^{\infty} \rho_n\|x_n - x_{n-1}\| < +\infty$. Using (9) and Lemma 2, we obtain that $\lim_{n\to\infty} \|x_n - p\|$ exists for all $p \in F$. Coming back to the definition of $y_n$, from (8), one has:
$$ \begin{aligned} \|y_n - p\|^2 &= \|(1 - \tau_n - \epsilon_n)(z_n - p) + \tau_n(T_n z_n - p) + \epsilon_n(T_n x_n - p)\|^2 \\ &= (1 - \tau_n - \epsilon_n)\|z_n - p\|^2 + \tau_n\|T_n z_n - p\|^2 + \epsilon_n\|T_n x_n - p\|^2 \\ &\quad - \tau_n(1 - \tau_n - \epsilon_n)\|z_n - T_n z_n\|^2 - \tau_n\epsilon_n\|T_n z_n - T_n x_n\|^2 - \epsilon_n(1 - \tau_n - \epsilon_n)\|z_n - T_n x_n\|^2 \\ &\le (1 - \epsilon_n)\|z_n - p\|^2 + \epsilon_n\|x_n - p\|^2 - \tau_n(1 - \tau_n - \epsilon_n)\|z_n - T_n z_n\|^2. \end{aligned} \tag{10} $$
By (8), (10), together with Lemma 1 and the nonexpansiveness of $T_n$, we have:
$$ \begin{aligned} \|x_{n+1} - p\|^2 &\le (1 - \mu_n - \zeta_n)\|y_n - p\|^2 + \mu_n\|T_n z_n - p\|^2 + \zeta_n\|T_n y_n - p\|^2 \\ &\le (1 - \mu_n - \zeta_n)\|y_n - p\|^2 + \mu_n\|z_n - p\|^2 + \zeta_n\|y_n - p\|^2 \\ &= (1 - \mu_n)\|y_n - p\|^2 + \mu_n\|z_n - p\|^2 \\ &\le (1 - \mu_n)\big[(1 - \epsilon_n)\|z_n - p\|^2 + \epsilon_n\|x_n - p\|^2\big] - (1 - \mu_n)\tau_n(1 - \tau_n - \epsilon_n)\|z_n - T_n z_n\|^2 + \mu_n\|z_n - p\|^2 \\ &= (1 - \epsilon_n + \mu_n\epsilon_n)\|z_n - p\|^2 + \epsilon_n(1 - \mu_n)\|x_n - p\|^2 - (1 - \mu_n)\tau_n(1 - \tau_n - \epsilon_n)\|z_n - T_n z_n\|^2 \\ &\le (1 - \epsilon_n + \mu_n\epsilon_n)\big[\|x_n - p\|^2 + 2\rho_n\|x_n - p\|\|x_n - x_{n-1}\| + \rho_n^2\|x_n - x_{n-1}\|^2\big] \\ &\quad + \epsilon_n(1 - \mu_n)\|x_n - p\|^2 - (1 - \mu_n)\tau_n(1 - \tau_n - \epsilon_n)\|z_n - T_n z_n\|^2 \\ &= \|x_n - p\|^2 + 2\rho_n(1 - \epsilon_n + \mu_n\epsilon_n)\|x_n - p\|\|x_n - x_{n-1}\| + (1 - \epsilon_n + \mu_n\epsilon_n)\rho_n^2\|x_n - x_{n-1}\|^2 \\ &\quad - (1 - \mu_n)\tau_n(1 - \tau_n - \epsilon_n)\|z_n - T_n z_n\|^2. \end{aligned} $$
Since $\lim_{n\to\infty} \|x_n - p\|$ exists for all $p \in F$ and $\sum_{n=1}^{\infty} \rho_n\|x_n - x_{n-1}\| < +\infty$, we have from the above inequality that:
$$ \lim_{n\to\infty} (1 - \mu_n)\tau_n(1 - \tau_n - \epsilon_n)\|z_n - T_n z_n\|^2 = 0. $$
By the conditions (i)–(iii), we conclude that:
$$ \lim_{n\to\infty} \|z_n - T_n z_n\| = 0. \tag{11} $$
This implies, by the nonexpansiveness of $T_n$, that:
$$ \begin{aligned} \|x_n - T_n x_n\| &\le \|x_n - z_n\| + \|z_n - T_n z_n\| + \|T_n z_n - T_n x_n\| \\ &\le \|x_n - z_n\| + \|z_n - T_n z_n\| + \|z_n - x_n\| \\ &= 2\|x_n - z_n\| + \|z_n - T_n z_n\|. \end{aligned} $$
Thus:
$$ \|x_n - T_n x_n\| \le 2\|x_n - z_n\| + \|z_n - T_n z_n\|. \tag{12} $$
By the definition of z n , we obtain:
$$ \|z_n - x_n\| = \|x_n + \rho_n(x_n - x_{n-1}) - x_n\| = |\rho_n|\|x_n - x_{n-1}\| = \rho_n\|x_n - x_{n-1}\|. \tag{13} $$
From (12) and (13), we obtain:
$$ \|x_n - T_n x_n\| \le 2\rho_n\|x_n - x_{n-1}\| + \|z_n - T_n z_n\|. \tag{14} $$
By (11), (14), and $\sum_{n=1}^{\infty} \rho_n\|x_n - x_{n-1}\| < +\infty$, we obtain:
$$ \lim_{n\to\infty} \|x_n - T_n x_n\| = 0. \tag{15} $$
From (13) and $\sum_{n=1}^{\infty} \rho_n\|x_n - x_{n-1}\| < +\infty$, we obtain:
$$ \lim_{n\to\infty} \|z_n - x_n\| = 0. \tag{16} $$
Since:
$$ \|y_n - z_n\| \le \tau_n\|T_n z_n - z_n\| + \epsilon_n\|T_n x_n - z_n\| \le \tau_n\|T_n z_n - z_n\| + \epsilon_n(\|T_n x_n - x_n\| + \|x_n - z_n\|), $$
by (11), (15), and (16), we obtain:
$$ \lim_{n\to\infty} \|y_n - z_n\| = 0. \tag{17} $$
From $\|y_n - x_n\| \le \|y_n - z_n\| + \|z_n - x_n\|$, (16), and (17), we obtain:
$$ \lim_{n\to\infty} \|y_n - x_n\| = 0. \tag{18} $$
Since:
$$ \begin{aligned} \|x_{n+1} - x_n\| &\le (1 - \mu_n - \zeta_n)\|y_n - x_n\| + \mu_n\|T_n z_n - x_n\| + \zeta_n\|T_n y_n - x_n\| \\ &\le (1 - \mu_n - \zeta_n)\|y_n - x_n\| + \mu_n\big[\|T_n z_n - T_n x_n\| + \|T_n x_n - x_n\|\big] + \zeta_n\big[\|T_n y_n - T_n x_n\| + \|T_n x_n - x_n\|\big] \\ &\le (1 - \mu_n - \zeta_n)\|y_n - x_n\| + \mu_n\big[\|z_n - x_n\| + \|T_n x_n - x_n\|\big] + \zeta_n\big[\|y_n - x_n\| + \|T_n x_n - x_n\|\big], \end{aligned} $$
it follows by (15)–(18) that $\lim_{n\to\infty} \|x_{n+1} - x_n\| = 0$. Since $\{T_n\}$ satisfies the NST condition, the set of all weak cluster points of the sequence $\{x_n\}$ is a subset of F. Applying Lemma 3, we obtain that there exists $x^* \in F$ such that $x_n \rightharpoonup x^*$. □
Now, we move on to the application of our introduced algorithm for solving the convex minimization problem (5) by setting $T_n := \operatorname{prox}_{\kappa_n\psi}(I - \kappa_n\nabla\phi)$ in Algorithm 1.
Next, we prove that a sequence { x n } generated by Algorithm 2 converges weakly to the solution of the convex minimization problem (5).
Algorithm 2: (FBMSA): A forward-backward modified S-algorithm.
Initial. Take $x_0, x_1 \in H$ arbitrarily and $n = 1$. Choose $\rho_n \ge 0$ with $\sum_{n=1}^{\infty} \rho_n < +\infty$.
Step 1. Compute $z_n$, $y_n$, and $x_{n+1}$ using:
$$ \begin{aligned} z_n &= x_n + \rho_n(x_n - x_{n-1}), \\ y_n &= (1 - \tau_n - \epsilon_n)z_n + \tau_n \operatorname{prox}_{\kappa_n\psi}(I - \kappa_n\nabla\phi)z_n + \epsilon_n \operatorname{prox}_{\kappa_n\psi}(I - \kappa_n\nabla\phi)x_n, \\ x_{n+1} &= (1 - \mu_n - \zeta_n)y_n + \mu_n \operatorname{prox}_{\kappa_n\psi}(I - \kappa_n\nabla\phi)z_n + \zeta_n \operatorname{prox}_{\kappa_n\psi}(I - \kappa_n\nabla\phi)y_n. \end{aligned} $$

 Then, update $n := n + 1$, and go to Step 1.
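A sketch of Algorithm 2 for the LASSO objective (our illustration; the parameter values follow Table 1 of Section 4, with the N-switch of $\rho_n$ simplified to the summable tail $1/2^n$):
```python
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fbmsa(A, b, beta, tau=0.95, eps=0.005, mu=0.005, zeta=0.95, n_iter=500):
    # Algorithm 2 (FBMSA): Algorithm 1 with T_n the forward-backward
    # operator of phi(x) = ||Ax - b||^2 and psi(x) = beta*||x||_1.
    lip = 2.0 * np.linalg.norm(A, 2) ** 2
    fb = lambda u, k: prox_l1(u - k * 2.0 * A.T @ (A @ u - b), k * beta)
    x_prev = x = np.zeros(A.shape[1])
    for n in range(1, n_iter + 1):
        kappa = n / (lip * (n + 1))     # kappa_n -> 1/l, inside (0, 2/l)
        rho = 0.5 ** n                  # summable inertial parameter
        z = x + rho * (x - x_prev)
        Tz = fb(z, kappa)
        y = (1 - tau - eps) * z + tau * Tz + eps * fb(x, kappa)
        x_prev, x = x, (1 - mu - zeta) * y + mu * Tz + zeta * fb(y, kappa)
    return x
```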
Theorem 2.
Let ψ be a proper, convex, and lower semicontinuous function from a real Hilbert space H into $\mathbb{R} \cup \{+\infty\}$, and let ϕ be a convex differentiable function from H into $\mathbb{R}$ with $\nabla\phi$ being ℓ-Lipschitz continuous for some $\ell > 0$. Let $\{x_n\}$ be a sequence generated by Algorithm 2 such that $\kappa_n \to \kappa$ with $\kappa_n, \kappa \in (0, 2/\ell)$. Suppose $\{\tau_n\}$, $\{\epsilon_n\}$, $\{\mu_n\}$, and $\{\zeta_n\}$ are sequences in $(0, 1)$ satisfying the assumptions of Theorem 1. Then, $\{x_n\}$ converges weakly to an element of $\operatorname{argmin}(\phi + \psi)$.
Proof. 
Let T and $T_n$ be the forward–backward operators of ϕ and ψ with respect to κ and $\kappa_n$, respectively, that is, $T := \operatorname{prox}_{\kappa\psi}(I - \kappa\nabla\phi)$ and $T_n := \operatorname{prox}_{\kappa_n\psi}(I - \kappa_n\nabla\phi)$. Then, T and $\{T_n\}$ are nonexpansive operators for all n. By Proposition 26.1 in [7], $F := \bigcap_{n=1}^{\infty} F(T_n) = \operatorname{argmin}(\phi + \psi)$. It follows from Lemma 5 that $\{T_n\}$ satisfies the NST condition (I) with T and, hence, the NST condition. Using Theorem 1, we obtain the required result. □

4. Applications

In this section, the image restoration problem is solved using Algorithm 2. We also compare the deblurring efficiency of Algorithm 2 with NAGA [16], FISTA [6], IFBS [35], and FBS [12]. As mentioned in the literature, image restoration problems can be related to the LASSO problem, that is, $\min_x \{\|Ax - b\|_2^2 + \beta\|x\|_1\}$, where A represents the blurring operator, $x \in \mathbb{R}^n$ is the original image, b is the observed image, and β is a positive regularization parameter.
To solve the image restoration problem, especially for true RGB images, this model is highly costly to compute because of the sizes of the matrix A and the vector x. In order to overcome this problem, most researchers in this area employ the 2D fast Fourier transform of the true RGB images, and the above model is slightly reformulated as the following form:
$$ \min_x \{\|Ax - b\|_2^2 + \beta\|Wx\|_1\}, $$
where A is the blurring operator, which is often chosen as $A = BW$, B is the blurring matrix, W is the 2D fast Fourier transform, $b \in \mathbb{R}^{m \times n}$ is the observed image of size $m \times n$, and β is a positive regularization parameter. Hence, the problem can be viewed as the minimization of the sum of two convex functions, that is, $\min_x \{\phi(x) + \psi(x)\}$. Therefore, Algorithm 2, FBS [12], IFBS [35], FISTA [6], and NAGA [16] can be applied to solve an image restoration problem by setting $\phi(x) = \|Ax - b\|_2^2$ and $\psi(x) = \beta\|Wx\|_1$.
In our experiment, we selected the regularization parameter $\beta = 5 \times 10^{-5}$ and considered an original image of size $256 \times 256$ px. A Gaussian blur of size $9 \times 9$ with standard deviation $\xi = 4$ was used to create the blurred and noisy image. Figure 1 shows the original and observed images.
We used the peak signal-to-noise ratio (PSNR) to measure the performance of the algorithms, defined as follows:
$$ \mathrm{PSNR}(x_n) = 10\log_{10}\left(\frac{255^2}{\mathrm{MSE}}\right), $$
where $\mathrm{MSE} = \frac{1}{256^2}\|x_n - x\|_2^2$ is the mean squared error with respect to the original image x; see the survey of Thung and Raveendran [43]. It is worth noting that a higher PSNR indicates a higher quality of the deblurred image. We then computed the Lipschitz constant by using the maximum eigenvalue of the matrix $A^T A$.
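In code, the PSNR and the Lipschitz constant can be computed as follows (a sketch; the factor 2 in the Lipschitz constant corresponds to $\phi(x) = \|Ax - b\|_2^2$, whose gradient is $2A^T(Ax - b)$, and should be dropped if ϕ carries a factor $\frac{1}{2}$):
```python
import numpy as np

def psnr(x_n, x_true):
    # PSNR in dB for images with pixel range [0, 255]; np.mean realizes
    # the (1/256^2)*||x_n - x||_2^2 mean squared error for 256x256 images.
    mse = np.mean((np.asarray(x_n, float) - np.asarray(x_true, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def lipschitz_const(A):
    # Lipschitz constant of grad(phi) for phi(x) = ||Ax - b||_2^2:
    # twice the maximum eigenvalue of the (symmetric) matrix A^T A.
    return 2.0 * np.linalg.eigvalsh(A.T @ A).max()
```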
Table 1 shows the parameters for Algorithm 2, FISTA, NAGA, IFBS, and FBS.
As seen in Table 1, all parameters were chosen to satisfy the conditions of the corresponding convergence theorem for each algorithm. By Theorem 2, the sequence $\{x_n\}$ generated by Algorithm 2 converges to the original image.
For this experiment, our programs were run on an Intel(R) Core(TM) i7-9700 CPU with 32.00 GB RAM, Windows 10, in the MATLAB computing environment. With the parameters set as above, we obtained the results of deblurring the image of Wat Phra Singh Woramahaviharn over 1000 iterations, as shown in Table 2.
Table 2 shows the image recovery efficiency of the studied methods under different numbers of iterations. It is seen from Table 2 that Algorithm 2 attains a higher PSNR than the other algorithms. Therefore, the convergence behavior of our algorithm is better than those of NAGA, FISTA, IFBS, and FBS.
Moreover, the results of deblurring the image of Wat Phra Singh Woramahaviharn at the 1000th iteration of all the studied algorithms are presented in Figure 2.
The graph of the PSNR in Figure 2 shows that Algorithm 2 gives a higher value of the PSNR than the other algorithms. This demonstrates that Algorithm 2's image restoration performance is better than those of NAGA, FISTA, IFBS, and FBS.
We also observed from Figure 3 that Algorithm 2 gives a better deblurring result for Wat Phra Singh Woramahaviharn at every number of iterations considered.

5. Conclusions

This paper introduced a new accelerated algorithm for solving a common fixed-point problem of a family of nonexpansive operators. A weak convergence theorem for this method was proven under suitable conditions. Our main results can be applied to solve a minimization problem involving two proper, convex, and lower semicontinuous functions. The proposed method was also used to solve image restoration problems. To compare the performance of the studied algorithms, we conducted numerical experiments and found that the PSNR of our proposed algorithm is higher than those of FBS [12], IFBS [35], FISTA [6], and NAGA [16].

Author Contributions

Conceptualization, R.W.; Formal analysis, P.T. and R.W.; Investigation, P.T.; Methodology, R.W.; Supervision, R.W.; Validation, R.W.; Writing—original draft, P.T.; Writing—review and editing, R.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are very grateful to the anonymous referees for their helpful comments, which improved the presentation of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vogel, C.R. Computational Methods for Inverse Problems; SIAM: Philadelphia, PA, USA, 2002. [Google Scholar]
  2. Eldén, L. Algorithms for the regularization of ill-conditioned least squares problems. BIT Numer. Math. 1977, 17, 134–145. [Google Scholar] [CrossRef]
  3. Hansen, P.C.; Nagy, J.G.; O’Leary, D.P. Deblurring Images: Matrices, Spectra, and Filtering (Fundamentals of Algorithms 3) (Fundamentals of Algorithms); SIAM: Philadelphia, PA, USA, 2006. [Google Scholar]
  4. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; VH Winston & Sons: Washington, DC, USA; John Wiley & Sons: New York, NY, USA, 1977. [Google Scholar]
  5. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288. [Google Scholar]
  6. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef] [Green Version]
  7. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: New York, NY, USA, 2017. [Google Scholar]
  8. Parikh, N.; Boyd, S. Proximal Algorithms. Found. Trends Optim. 2014, 1, 127–239. [Google Scholar] [CrossRef]
  9. Combettes, P.L. Quasi-Fejérian analysis of some optimization algorithms. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Studies in Computational Mathematics; North-Holland: Amsterdam, The Netherlands, 2001; Volume 8, pp. 115–152. [Google Scholar]
  10. Combettes, P.L.; Pesquet, J.-C. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems. Science and Engineering; Springer Optimization and Its Applications: New York, NY, USA, 2011; Volume 49, pp. 185–212. [Google Scholar]
  11. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef] [Green Version]
  12. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
  13. Moreau, J.J. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. Fr. 1965, 93, 273–299. [Google Scholar] [CrossRef]
  14. Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward–backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 21–30. [Google Scholar] [CrossRef]
  15. Moudafi, A.; Oliny, M. Convergence of splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454. [Google Scholar] [CrossRef] [Green Version]
  16. Verma, M.; Shukla, K.K. A new accelerated proximal gradient technique for regularized multitask learning framework. Pattern Recogn. Lett. 2017, 95, 98–103. [Google Scholar] [CrossRef]
  17. Byrne, C. Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
  18. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef] [Green Version]
  19. Cholamjiak, P.; Shehu, Y. Inertial forward–backward splitting method in Banach spaces with application to compressed sensing. Appl. Math. 2019, 64, 409–435. [Google Scholar] [CrossRef]
  20. Kunrada, K.; Pholasa, N.; Cholamjiak, P. On convergence and complexity of the modified forward–backward method involving new linesearches for convex minimization. Math. Meth. Appl. Sci. 2019, 42, 1352–1362. [Google Scholar]
  21. Suantai, S.; Eiamniran, N.; Pholasa, N.; Cholamjiak, P. Three-step projective methods for solving the split feasibility problems. Mathematics 2019, 7, 712. [Google Scholar] [CrossRef] [Green Version]
  22. Suantai, S.; Kesornprom, S.; Cholamjiak, P. Modified proximal algorithms for finding solutions of the split variational inclusions. Mathematics 2019, 7, 708. [Google Scholar] [CrossRef] [Green Version]
  23. Thong, D.V.; Cholamjiak, P. Strong convergence of a forward–backward splitting method with a new step size for solving monotone inclusions. Comput. Appl. Math. 2019, 38, 1–16. [Google Scholar] [CrossRef]
  24. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  25. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150. [Google Scholar] [CrossRef]
  26. Phuengrattana, W.; Suantai, S. On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235, 3006–3014. [Google Scholar] [CrossRef] [Green Version]
  27. Hanjing, A.; Suthep, S. The split fixed-point problem for demicontractive mappings and applications. Fixed Point Theory 2020, 21, 507–524. [Google Scholar] [CrossRef]
  28. Wongyai, S.; Suantai, S. Convergence Theorem and Rate of Convergence of a New Iterative Method for Continuous Functions on Closed Interval. In Proceedings of the AMM and APAM Conference Proceedings, Bangkok, Thailand, 23–25 May 2016; pp. 111–118. [Google Scholar]
  29. De la Sen, M.; Agarwal, R.P. Common fixed points and best proximity points of two cyclic self-mappings. Fixed Point Theory Appl. 2012, 2012, 1–17. [Google Scholar] [CrossRef] [Green Version]
  30. Gdawiec, K.; Kotarski, W. Polynomiography for the polynomial infinity norm via Kalantari’s formula and nonstandard iterations. Appl. Math. Comput. 2017, 307, 17–30. [Google Scholar] [CrossRef] [Green Version]
  31. Shoaib, A. Common fixed point for generalized contraction in b-multiplicative metric spaces with applications. Bull. Math. Anal. Appl. 2020, 12, 46–59. [Google Scholar]
  32. Al-Mazrooei, A.E.; Lateef, D.; Ahmad, J. Common fixed point theorems for generalized contractions. J. Math. Anal. 2017, 8, 157–166. [Google Scholar]
  33. Kim, K.S. A Constructive scheme for a common coupled fixed-point problems in Hilbert space. Mathematics 2020, 8, 1717. [Google Scholar] [CrossRef]
  34. Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  35. Moudafi, A.; Al-Shemas, E. Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 2013, 1, 1–11. [Google Scholar]
  36. Nakajo, K.; Shimoji, K.; Takahashi, W. Strong convergence to common fixed points of families of nonexpansive mapping in Banach spaces. J. Nonlinear Convex Anal. 2007, 8, 11–34. [Google Scholar]
  37. Nakajo, K.; Shimoji, K.; Takahashi, W. On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal. Theor. Methods Appl. 2009, 71, 112–119. [Google Scholar] [CrossRef]
  38. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009. [Google Scholar]
  39. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000. [Google Scholar]
  40. Tan, K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308. [Google Scholar] [CrossRef] [Green Version]
  41. Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378. [Google Scholar] [CrossRef] [Green Version]
  42. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math. 1962, 255, 2897–2899. [Google Scholar]
  43. Thung, K.; Raveendran, P. A survey of image quality measures. In Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December 2009; pp. 1–4. [Google Scholar]
Figure 1. The Wat Phra Singh Woramahaviharn.
Figure 2. The graph of the peak signal-to-noise ratio (PSNR) for Wat Phra Singh Woramahaviharn.
Figure 3. Results for Wat Phra Singh Woramahaviharn's image deblurring.
Table 1. Algorithms and their setting controls.
Methods | Setting
Algorithm 2 | $\tau_n = \zeta_n = 0.950$, $\epsilon_n = \mu_n = 0.005$, $\kappa_n = \frac{n}{\ell(n+1)}$, and $\rho_n = \frac{n}{n+1}$ if $1 \le n < N$ and $\rho_n = \frac{1}{2^n}$ otherwise, where N is the stopping iteration number
FISTA | $\kappa = \frac{1}{\ell}$ and $\rho_n = \frac{t_n - 1}{t_{n+1}}$, where $t_{n+1} = \frac{1 + \sqrt{1 + 4t_n^2}}{2}$
NAGA | $\tau_n = 0.500$, $\kappa_n = \frac{n}{\ell(n+1)}$, and $\rho_n = \frac{t_n - 1}{t_{n+1}}$, where $t_{n+1} = \frac{1 + \sqrt{1 + 4t_n^2}}{2}$
IFBS | $\kappa_n = \frac{n}{\ell(n+1)}$ and $\rho_n = \frac{1}{n^2\|x_n - x_{n-1}\|_2^2}$ if $x_n \neq x_{n-1}$ and $\rho_n = 0$ otherwise
FBS | $\kappa_n = \frac{n}{\ell(n+1)}$
Table 2. The values of the PSNR at $x_{200}$, $x_{300}$, $x_{400}$, $x_{500}$, $x_{1000}$.
Iteration No. | Algorithm 2 | NAGA | FISTA | IFBS | FBS (PSNR values)
200 | 33.8764 | 33.1457 | 32.6173 | 28.2840 | 28.2840
300 | 34.5951 | 34.1018 | 33.6556 | 28.8650 | 28.8650
400 | 34.8902 | 34.6174 | 34.2689 | 29.2593 | 29.2593
500 | 35.0391 | 34.8766 | 34.6409 | 29.5532 | 29.5532
1000 | 35.2068 | 35.1961 | 35.1562 | 30.4187 | 30.4186
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
