1. Introduction
Let $H$ be a real Hilbert space. The unconstrained minimization problem for the sum of two convex functions is modeled in the following form:
$$\min_{x \in H} f(x) + g(x), \tag{1}$$
where $f$ and $g$ are proper, lower semi-continuous, and convex functions. If $f$ is differentiable on $H$, we know that problem (1) can be described by the fixed-point equation, that is,
$$x^{*} = \operatorname{prox}_{\alpha g}\bigl(x^{*} - \alpha \nabla f(x^{*})\bigr),$$
where $\alpha > 0$ and $\operatorname{prox}_{\alpha g}$ is the proximal operator of $g$, i.e.,
$$\operatorname{prox}_{\alpha g} = (I + \alpha \partial g)^{-1},$$
where $I$ is the identity operator on $H$ and $\partial g$ is the subdifferential of $g$. Therefore, the forward-backward algorithm is defined in the following manner:
$$x_{n+1} = \operatorname{prox}_{\alpha_n g}\bigl(x_n - \alpha_n \nabla f(x_n)\bigr),$$
where $\alpha_n > 0$. Some works related to the forward-backward method for convex optimization problems can be found in [1,2,3,4,5,6]. This method covers the gradient method [7,8,9] and the proximal algorithm [10,11,12].
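To make the iteration concrete, here is a minimal Python sketch of the forward-backward scheme above for the common special case $f(x) = \frac{1}{2}\|Bx - b\|^2$ and $g = \mu\|\cdot\|_1$; the quadratic $f$, the $\ell_1$ regularizer, and all identifiers are illustrative assumptions on our part, not objects taken from the paper.

```python
import numpy as np

def grad_f(B, b, x):
    # Gradient of the smooth part f(x) = 0.5*||Bx - b||^2.
    return B.T @ (B @ x - b)

def prox_g(x, alpha, mu):
    # Proximal operator of alpha*mu*||.||_1 (componentwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - alpha * mu, 0.0)

def forward_backward(B, b, mu, n_iter=500):
    # Fixed step size alpha in (0, 2/L), where L = ||B||^2 is the
    # Lipschitz constant of grad_f.
    alpha = 1.0 / np.linalg.norm(B, 2) ** 2
    x = np.zeros(B.shape[1])
    for _ in range(n_iter):
        x = prox_g(x - alpha * grad_f(B, b, x), alpha, mu)  # one FB step
    return x
```

Each iteration consists of one explicit (forward) gradient step on $f$ followed by one implicit (backward) proximal step on $g$.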
Combettes and Wajs [13] introduced the following relaxed forward-backward method (Algorithm 1).
Cruz and Nghia [14] suggested a forward-backward method with a linesearch (Algorithm 2), which does not need the Lipschitz constant in its implementation. It was shown that the generated sequence converges weakly to a minimizer of $f + g$.
Now, inspired by Cruz and Nghia [14], we suggest a new projected forward-backward algorithm for solving the constrained convex minimization problem, which is modeled as follows:
$$\min_{x \in C} f(x) + g(x), \tag{4}$$
where $C$ is a nonempty, closed, and convex subset of $H$, and $f$ and $g$ are convex functions on $H$ such that $f$ is differentiable on $H$. We denote by $\Omega$ the solution set of (4).
To obtain a better convergence rate, Polyak [15] introduced the heavy ball method for solving the smooth convex minimization problem. For this smooth setting, Nesterov [16] modified the heavy ball method as follows (Algorithm 3).
In this work, motivated by Algorithm 1 [13], Algorithm 2 [14], Algorithm 3 [16], and Algorithm 4 [17], we design a new projected forward-backward algorithm for solving the constrained convex minimization problem (4) and establish a convergence theorem. We also apply our method to image inpainting and provide comparisons and numerical results. Finally, we show the effect of each parameter in the proposed algorithm.
Algorithm 1 (Ref. [13]). Let $x_1 \in H$ and $\lambda_n \in (0, 1]$. For $n \in \mathbb{N}$, define
$$x_{n+1} = x_n + \lambda_n \bigl(\operatorname{prox}_{\alpha_n g}(x_n - \alpha_n \nabla f(x_n)) - x_n\bigr),$$
where $0 < \alpha_n < 2/L$, and $L$ is the Lipschitz constant of the gradient of $f$.
Algorithm 2 (Ref. [14]). Let $\sigma > 0$, $\theta \in (0, 1)$, and $\delta \in (0, \tfrac{1}{2})$. Let $x_1 \in \operatorname{dom} g$. For $n \in \mathbb{N}$, define
$$x_{n+1} = \operatorname{prox}_{\alpha_n g}\bigl(x_n - \alpha_n \nabla f(x_n)\bigr),$$
where $\alpha_n = \sigma \theta^{m_n}$ and $m_n$ is the smallest nonnegative integer satisfying
$$\alpha_n \bigl\|\nabla f\bigl(\operatorname{prox}_{\alpha_n g}(x_n - \alpha_n \nabla f(x_n))\bigr) - \nabla f(x_n)\bigr\| \le \delta \bigl\|\operatorname{prox}_{\alpha_n g}(x_n - \alpha_n \nabla f(x_n)) - x_n\bigr\|.$$
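A minimal Python sketch of this linesearch, assuming `grad_f(x)` and `prox_g(v, alpha)` callables like those in the earlier sketch; the stopping inequality is our transcription of the rule above.

```python
import numpy as np

def linesearch_step(x, grad_f, prox_g, sigma, theta, delta):
    # Backtracking: alpha = sigma * theta**m for the smallest m >= 0 with
    # alpha*||grad_f(z) - grad_f(x)|| <= delta*||z - x||,
    # where z = prox_g(x - alpha*grad_f(x), alpha).
    gx = grad_f(x)
    alpha = sigma
    z = prox_g(x - alpha * gx, alpha)
    while alpha * np.linalg.norm(grad_f(z) - gx) > delta * np.linalg.norm(z - x):
        alpha *= theta                      # shrink the trial step size
        z = prox_g(x - alpha * gx, alpha)
    return z, alpha                         # next iterate and accepted step
```

Note that no Lipschitz constant of $\nabla f$ enters the computation, which is precisely the point of Algorithm 2.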
Algorithm 3 (Ref. [16]). Let $x_0 = x_1 \in H$ and $\alpha > 0$. For $n \in \mathbb{N}$, define
$$y_n = x_n + \theta_n (x_n - x_{n-1}), \qquad x_{n+1} = y_n - \alpha \nabla f(y_n),$$
where $\theta_n \in [0, 1)$. The term $\theta_n (x_n - x_{n-1})$ is called the inertial term.
In 2003, Moudafi and Oliny [17] suggested the inertial forward-backward splitting as follows:
Algorithm 4 (Ref. [17]). Let $x_0 = x_1 \in H$ and $\alpha_n > 0$. For $n \in \mathbb{N}$, define
$$y_n = x_n + \theta_n (x_n - x_{n-1}), \qquad x_{n+1} = \operatorname{prox}_{\alpha_n g}\bigl(y_n - \alpha_n \nabla f(x_n)\bigr),$$
where $\alpha_n < 2/L$. Many works have shown that algorithms involving an inertial term enjoy a good convergence rate [3,18,19,20]. The complexity of some variants of the forward-backward algorithm can be found in the work of Cruz and Nghia [14].
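As a sketch, the inertial update of Algorithm 4 changes only one line of the plain forward-backward loop: an extrapolated point $y_n$ is formed before the proximal step, while the gradient is still evaluated at $x_n$. The callables and the constant step size are illustrative assumptions.

```python
def inertial_forward_backward(x1, grad_f, prox_g, alpha, theta_n, n_iter=500):
    # Algorithm 4 sketch: y_n = x_n + theta_n(n)*(x_n - x_{n-1});
    # x_{n+1} = prox_{alpha g}(y_n - alpha*grad_f(x_n)).
    x_prev = x = x1
    for n in range(n_iter):
        y = x + theta_n(n) * (x - x_prev)    # inertial extrapolation
        x_prev, x = x, prox_g(y - alpha * grad_f(x), alpha)
    return x
```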
This paper is organized as follows: In Section 2, we recall some preliminaries and mathematical tools. In Section 3, we prove the weak convergence theorem of the proposed method. In Section 4, we provide numerical experiments on image inpainting to validate the convergence theorem and, finally, in Section 5, we give the conclusions of this paper.
2. Preliminaries
Let us review some important definitions and lemmas for proving the convergence theorem. Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. Let $h : H \to (-\infty, +\infty]$ be a proper, lower semicontinuous (l.s.c.), and convex function. The domain of $h$ is defined by
$$\operatorname{dom} h = \{u \in H : h(u) < +\infty\}.$$
For any $u \in H$, we know that the orthogonal projection of $u$ onto a nonempty, closed, and convex subset $C$ of $H$ is defined by
$$P_C u = \operatorname*{argmin}_{v \in C} \|u - v\|.$$
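When $C$ has simple structure, $P_C$ is available in closed form. For instance, for a box constraint of the kind used in the experiments of Section 4 (the bounds here are illustrative), the projection is componentwise clipping:

```python
import numpy as np

def project_box(u, lo=0.0, hi=1.0):
    # Orthogonal projection of u onto C = {x : lo <= x_ij <= hi},
    # computed componentwise; lo and hi are illustrative bounds.
    return np.clip(u, lo, hi)
```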
Lemma 1 ([21]). Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Then, for any $u \in H$, we have:
- (i) $\langle u - P_C u, v - P_C u \rangle \le 0$ for all $v \in C$;
- (ii) $\|P_C u - v\|^2 \le \|u - v\|^2 - \|u - P_C u\|^2$ for all $v \in C$;
- (iii) $\|u - P_C u\| \le \|u - v\|$ for all $v \in C$.
The directional derivative of $h$ at $u$ in the direction $d$ is
$$h'(u; d) = \lim_{t \downarrow 0} \frac{h(u + t d) - h(u)}{t}.$$
Definition 1. The subdifferential of $h$ at $u$ is defined by
$$\partial h(u) = \bigl\{z \in H : h(v) \ge h(u) + \langle z, v - u \rangle \ \text{for all } v \in H\bigr\}.$$
It is known that $\partial h$ is maximal monotone and, if $h$ is differentiable, then $\partial h(u)$ consists of a single element, the gradient of $h$, denoted by $\nabla h(u)$. Moreover, $\partial h$ is monotone, that is, $\langle z_1 - z_2, u_1 - u_2 \rangle \ge 0$ for all $z_1 \in \partial h(u_1)$ and $z_2 \in \partial h(u_2)$
. From (4), we also know that $x^{*} \in \Omega$ if and only if
$$0 \in \nabla f(x^{*}) + \partial g(x^{*}) + N_C(x^{*}),$$
where $N_C(x^{*}) = \{z \in H : \langle z, v - x^{*} \rangle \le 0 \ \text{for all } v \in C\}$ is the normal cone to $C$ at $x^{*}$ and $\Omega$ is the solution set of (4).
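As a standard one-dimensional illustration (added here for concreteness; it is not part of the original text), take $h(u) = |u|$ on $\mathbb{R}$. Then
$$\partial h(u) = \begin{cases} \{1\}, & u > 0,\\ [-1, 1], & u = 0,\\ \{-1\}, & u < 0, \end{cases} \qquad \operatorname{prox}_{\alpha h}(u) = \operatorname{sign}(u)\max\{|u| - \alpha, 0\},$$
so the proximal step on $h$ is exactly the soft-thresholding map; the nuclear-norm prox used in Section 4 applies this map to singular values.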
Lemma 2 ([22]). $\partial h$ is demiclosed, i.e., if a sequence $(u_n)$ with $z_n \in \partial h(u_n)$ satisfies that $(u_n)$ converges weakly to $u$ and $(z_n)$ converges strongly to $a$, then $a \in \partial h(u)$.
Lemma 3. Let $(a_n)$, $(b_n)$, and $(c_n)$ be real positive sequences such that
$$a_{n+1} \le (1 + c_n) a_n + b_n.$$
If $\sum_{n=1}^{\infty} c_n < \infty$ and $\sum_{n=1}^{\infty} b_n < \infty$, then $\lim_{n \to \infty} a_n$ exists.
Lemma 4 ([23]). Let $(a_n)$ and $(b_n)$ be real positive sequences such that
$$a_{n+1} \le (1 + b_n) a_n.$$
Then, $a_{n+1} \le a_1 \prod_{j=1}^{n} (1 + b_j) \le a_1 e^{s_n}$, where $s_n = \sum_{j=1}^{n} b_j$. Moreover, if $\sum_{n=1}^{\infty} b_n < \infty$, then $(a_n)$ is bounded.
Definition 2. Let $S$ be a nonempty subset of $H$. A sequence $(x_n)$ in $H$ is said to be quasi-Fejér convergent to $S$ if and only if, for all $x \in S$, there exists a positive sequence $(\varepsilon_n)$ such that $\sum_{n=1}^{\infty} \varepsilon_n < +\infty$ and $\|x_{n+1} - x\|^2 \le \|x_n - x\|^2 + \varepsilon_n$ for all $n \in \mathbb{N}$. When $(\varepsilon_n)$ is the null sequence (i.e., $\varepsilon_n = 0$ for all $n$), we say that $(x_n)$ is Fejér convergent to $S$.
Lemma 5 ([21,24]). If $(x_n)$ is quasi-Fejér convergent to $S$, then we have:
- (i) $(x_n)$ is bounded.
- (ii) If all weak accumulation points of $(x_n)$ are in $S$, then $(x_n)$ converges weakly to a point in $S$.
4. Numerical Experiments
In this section, we aim to apply our result to solving an image inpainting problem, which has the following mathematical model:
$$\min_{x \in \mathbb{R}^{m \times n}} \frac{1}{2}\|A(x) - u\|_F^2 + \mu \|x\|_{*},$$
where $x \in \mathbb{R}^{m \times n}$, $A$ is a linear map that selects a subset of the entries of an $m \times n$ matrix by setting each unknown entry in the matrix to 0, $u$ is the matrix of known entries, and $\mu > 0$ is a regularization parameter.
In particular, we investigate the constrained image inpainting problem [25,26]:
$$\min_{x \in C} \frac{1}{2}\|A(x) - u\|_F^2 + \mu \|x\|_{*}, \tag{21}$$
where $\|\cdot\|_F$ is the Frobenius matrix norm and $\|\cdot\|_{*}$ is the nuclear matrix norm. Here, we take $C$ to be the box of matrices whose entries lie in the admissible pixel range.
The optimization problem (21) relates to (4). In fact, let $f(x) = \frac{1}{2}\|A(x) - u\|_F^2$ and $g(x) = \mu \|x\|_{*}$. Then, $\nabla f(x) = A^{*}(A(x) - u)$ is 1-Lipschitz continuous (indeed, $A$ is a self-adjoint idempotent mask, so $\|A^{*}A\| \le 1$). Moreover, $\operatorname{prox}_{\alpha g}$ is obtained by the singular value decomposition (SVD) [27].
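For concreteness, the resulting gradient and proximal steps can be written in a few lines of numpy; the mask-based implementation of $A$ and all identifiers are our illustrative assumptions, with the prox being the standard singular value thresholding from [27]:

```python
import numpy as np

def grad_f(x, mask, u):
    # Gradient of f(x) = 0.5*||A(x) - u||_F^2 with A(x) = mask * x
    # (mask has 1 at known entries and 0 elsewhere, and mask * u = u).
    return mask * x - u

def prox_nuclear(x, alpha, mu):
    # prox of alpha*mu*||.||_*: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(x, full_matrices=False)
    s = np.maximum(s - alpha * mu, 0.0)
    return (U * s) @ Vt          # reassemble U diag(s) V^T
```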
From Algorithm 5, we obtain Algorithm 6 for image inpainting.
To measure the quality of the restored images, we consider the signal-to-noise ratio (SNR) and the structural similarity index (SSIM) [28], which are given by
$$\mathrm{SNR} = 10 \log_{10} \frac{\|u\|_F^2}{\|u - u'\|_F^2}$$
and
$$\mathrm{SSIM}(u, u') = \frac{(2\mu_u \mu_{u'} + c_1)(2\sigma_{u u'} + c_2)}{(\mu_u^2 + \mu_{u'}^2 + c_1)(\sigma_u^2 + \sigma_{u'}^2 + c_2)},$$
where $u$ is the original image, $u'$ is the restored image, $\mu_u$ and $\mu_{u'}$ are the mean values of the original image $u$ and the restored image $u'$, respectively, $\sigma_u^2$ and $\sigma_{u'}^2$ are the variances, $\sigma_{u u'}$ is the covariance of the two images, $c_1 = (k_1 L)^2$ and $c_2 = (k_2 L)^2$, and $L$ is the dynamic range of the pixel values. SSIM ranges from 0 to 1, and 1 means perfect recovery. Next, we analyze the convergence behavior of Algorithm 5, including the effects of its parameters. We now present the corresponding numerical results (the number of iterations is denoted by Iter and the CPU time by CPU).
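Both measures are straightforward to compute; the sketch below transcribes the displayed formulas directly, with a single global SSIM window and the usual constants $k_1 = 0.01$, $k_2 = 0.03$ taken as our assumption (the paper's exact choices are not shown here):

```python
import numpy as np

def snr(u, u_rec):
    # SNR = 10*log10( ||u||_F^2 / ||u - u_rec||_F^2 )
    return 10.0 * np.log10(np.sum(u ** 2) / np.sum((u - u_rec) ** 2))

def ssim_global(u, u_rec, L=1.0, k1=0.01, k2=0.03):
    # Single-window SSIM from global means, variances, and covariance.
    mu1, mu2 = u.mean(), u_rec.mean()
    v1, v2 = u.var(), u_rec.var()
    cov = ((u - mu1) * (u_rec - mu2)).mean()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu1 * mu2 + c1) * (2 * cov + c2)) / (
        (mu1 ** 2 + mu2 ** 2 + c1) * (v1 + v2 + c2))
```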
First, we investigate the effect of a single parameter of Algorithm 5, with the other parameters fixed as follows:
Algorithm 6 Forward-backward algorithm for image inpainting.
Step 1: Input $x_0 = x_1 \in C$ and the parameters $\sigma > 0$, $\theta \in (0, 1)$, $\delta \in (0, \tfrac{1}{2})$.
Step 2: Compute the inertial step
$$y_n = x_n + \theta_n (x_n - x_{n-1}).$$
Step 3: (Linesearch rule) Set $\alpha_n = \sigma$.
While $\alpha_n \bigl\|\nabla f\bigl(\operatorname{prox}_{\alpha_n g}(y_n - \alpha_n \nabla f(y_n))\bigr) - \nabla f(y_n)\bigr\| > \delta \bigl\|\operatorname{prox}_{\alpha_n g}(y_n - \alpha_n \nabla f(y_n)) - y_n\bigr\|$, set $\alpha_n = \theta \alpha_n$.
End while.
Step 4: Compute
$$z_n = \operatorname{prox}_{\alpha_n g}\bigl(y_n - \alpha_n \nabla f(y_n)\bigr)$$
and
$$x_{n+1} = P_C(z_n).$$
Set $n := n + 1$ and go to Step 2.
where $(\theta_n)$ is the inertial sequence defined by (25).
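Assembling the pieces above, one pass of Algorithm 6 can be sketched as follows. This is our reading of the reconstructed steps, not the authors' code: the mask model of $A$, the pixel range $[0, 1]$ for $C$, and all parameter values are placeholders.

```python
import numpy as np

def prox_nuclear(x, alpha, mu):
    # Singular value soft-thresholding (as in the earlier sketch).
    U, s, Vt = np.linalg.svd(x, full_matrices=False)
    return (U * np.maximum(s - alpha * mu, 0.0)) @ Vt

def inpaint(u, mask, mu, theta_seq, sigma=1.0, theta=0.5, delta=0.25, n_iter=300):
    # Sketch of Algorithm 6 for min_{x in C} 0.5*||A(x)-u||_F^2 + mu*||x||_*.
    x_prev = x = np.zeros_like(u)
    for n in range(1, n_iter + 1):
        y = x + theta_seq(n) * (x - x_prev)        # Step 2: inertial step
        gy = mask * y - u                          # grad_f at y
        alpha = sigma                              # Step 3: linesearch
        z = prox_nuclear(y - alpha * gy, alpha, mu)
        while alpha * np.linalg.norm((mask * z - u) - gy) > \
                delta * np.linalg.norm(z - y):
            alpha *= theta
            z = prox_nuclear(y - alpha * gy, alpha, mu)
        x_prev, x = x, np.clip(z, 0.0, 1.0)        # Step 4: project onto C
    return x
```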
From Table 1, we observe that the SNR and SSIM of Algorithm 5 become larger as this parameter approaches its limiting value. Moreover, the CPU time of Algorithm 5 decreases as the parameter tends to this value.
Next, we discuss the effect of the second parameter. The numerical experiments are given in Table 2. From Table 2, we observe that the SNR, SSIM, and CPU time of Algorithm 5 become larger as this parameter approaches its limiting value.
Next, we discuss the effect of the third parameter. The numerical experiments are given in Table 3. From Table 3, we observe that the SNR, SSIM, and CPU time of Algorithm 5 become larger as it increases. The SNR values and the reconstructed images are shown in Figure 3, Figure 4 and Figure 5.
Now, we present the performance of Algorithm 5 and its comparison with the projected versions of Algorithm 1 [13] and Algorithm 2 [14]. The initial points $x_0$ and $x_1$ are chosen to be zero, with the relaxation and step-size parameters fixed in Algorithm 1, and with the linesearch parameters and the inertial sequence $(\theta_n)$ defined by (25) in Algorithms 2 and 5, respectively. The numerical results are shown in Table 4.
From Table 4, we see that the experimental results of Algorithm 5 are better than those of Algorithms 1 and 2 in terms of SNR and SSIM in all cases.
The inpainted images at the 260th and 310th iterations are shown in Figure 7, Figure 8 and Figure 9.