A Projected Forward-Backward Algorithm for Constrained Minimization with Applications to Image Inpainting

In this research, we study the convex minimization problem of the sum of two proper, lower semicontinuous, and convex functions. We introduce a new projected forward-backward algorithm using linesearch and inertial techniques, and we establish a weak convergence theorem under mild conditions. It is known that image processing tasks such as inpainting can be modeled as a constrained minimization problem over a sum of convex functions. In this connection, we apply the suggested method to image inpainting and compare it with other methods in the literature. It is shown that the proposed algorithm outperforms the others in terms of the number of iterations. Finally, we analyze the parameters assumed in our hypotheses.


Introduction
Let H be a real Hilbert space. The unconstrained minimization problem of the sum of two convex functions is modeled in the following form:

min_{u∈H} f(u) + g(u), (1)

where f, g : H → R ∪ {+∞} are proper, lower semicontinuous, and convex functions. If f is differentiable on H, we know that problem (1) can be described by the fixed point equation

u = prox_{αg}(u − α∇f(u)), (2)

where α > 0 and prox_{αg} is the proximal operator of αg, i.e., prox_{αg} = (Id + α∂g)^{−1}, where Id is the identity operator on H and ∂g is the subdifferential of g. Therefore, the forward-backward algorithm was defined in the following manner:

u_{k+1} = prox_{α_k g}(u_k − α_k ∇f(u_k)), (3)

where α_k > 0. Works related to the forward-backward method for convex optimization problems can be found in [1][2][3][4][5][6]. This method covers the gradient method [7][8][9] and the proximal point algorithm [10][11][12]. Combettes and Wajs [13] introduced a relaxed forward-backward method. Cruz and Nghia [14] suggested a forward-backward method using a linesearch approach, which does not need the Lipschitz constant in implementation.
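As a concrete illustration (not taken from the paper), the forward-backward iteration (3) can be sketched in Python for the special case g = λ‖·‖₁, whose proximal operator is componentwise soft-thresholding; the toy LASSO problem and all parameter values below are assumptions for the example only.

```python
import numpy as np

def prox_l1(u, alpha):
    """Proximal operator of alpha*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - alpha, 0.0)

def forward_backward_step(u, grad_f, alpha, prox_g):
    """One iteration of (3): u_{k+1} = prox_{alpha g}(u_k - alpha * grad f(u_k))."""
    return prox_g(u - alpha * grad_f(u), alpha)

# Toy problem: f(u) = 0.5*||A u - b||^2, g(u) = lam*||u||_1 (LASSO)
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
lam = 0.1
grad_f = lambda u: A.T @ (A @ u - b)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2  # constant step 1/L, L = Lipschitz constant of grad f
u = np.zeros(2)
for _ in range(200):
    u = forward_backward_step(u, grad_f, alpha, lambda z, a: prox_l1(z, lam * a))
# u approaches the minimizer (0.9, 0.975)
```

Here the step size 1/L is the classical constant-step choice; the linesearch variants discussed below remove the need to know L.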
It was shown that (u_k) converges weakly to a minimizer of f + g. Now, inspired by Cruz and Nghia [14], we suggest a new projected forward-backward algorithm for solving the constrained convex minimization problem, which is modeled as follows:

min_{u∈Ω} f(u) + g(u), (4)

where Ω is a nonempty closed convex subset of H, and f and g are convex functions on H such that f is differentiable on H. We denote by S* the solution set of (4). To obtain a better convergence rate, Polyak [15] introduced the heavy ball method for solving smooth convex minimization problems. In the case g = 0, Nesterov [16] modified the heavy ball method as follows:

w_k = u_k + θ_k(u_k − u_{k−1}), u_{k+1} = w_k − α_k ∇f(w_k),

where θ_k ∈ [0, 1) is an extrapolation (inertial) parameter.
In this work, motivated by Algorithm 1 [13], Algorithm 2 [14], Algorithm 3 [16], and Algorithm 4 [17], we design a new projected forward-backward algorithm for solving the constrained convex minimization problem (4) and establish a convergence theorem. We also apply our method to image inpainting and provide comparisons and numerical results. Finally, we show the effect of each parameter in the proposed algorithm.
where λ_k ∈ (0, 1] and the step size α is chosen according to the Lipschitz constant of the gradient of f.
where α_k = σθ^{m_k} and m_k is the smallest nonnegative integer satisfying

α_k ‖∇f(v_k) − ∇f(u_k)‖ ≤ δ ‖v_k − u_k‖, with v_k = prox_{α_k g}(u_k − α_k ∇f(u_k)).

In 2003, Moudafi and Oliny [17] suggested the inertial forward-backward splitting as follows:

w_k = u_k + θ_k(u_k − u_{k−1}), u_{k+1} = prox_{α_k g}(w_k − α_k ∇f(u_k)),

where θ_k ∈ [0, 1). Many works have observed that algorithms involving an inertial term enjoy a good rate of convergence [3,[18][19][20]]. The complexity of some variants of the forward-backward algorithm can be found in the work of Cruz and Nghia [14].
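The backtracking rule α_k = σθ^{m_k} above can be sketched as follows; this is an illustrative Python sketch of a Cruz–Nghia-type linesearch (function names, the toy f, and default parameter values are assumptions for the example).

```python
import numpy as np

def linesearch(u, grad_f, prox_g, sigma=1.0, theta=0.5, delta=0.4, max_m=50):
    """Return alpha = sigma * theta**m with m the smallest nonnegative integer
    such that alpha*||grad_f(v) - grad_f(u)|| <= delta*||v - u||,
    where v = prox_{alpha g}(u - alpha*grad_f(u))."""
    gu = grad_f(u)
    for m in range(max_m):
        alpha = sigma * theta ** m
        v = prox_g(u - alpha * gu, alpha)
        if alpha * np.linalg.norm(grad_f(v) - gu) <= delta * np.linalg.norm(v - u):
            return alpha, v
    raise RuntimeError("linesearch did not terminate")

# Example: f(u) = 2u^2 (so grad f(u) = 4u, Lipschitz constant 4), g = 0
alpha, v = linesearch(np.array([1.0]), lambda u: 4 * u, lambda z, a: z)
```

Note that the accepted step depends only on evaluations of ∇f and the proximal map, so the Lipschitz constant never has to be known in advance.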
This paper is organized as follows: In Section 2, we recall some preliminaries and mathematical tools. In Section 3, we prove the weak convergence theorem of the proposed method. In Section 4, we provide numerical experiments on image inpainting to validate the convergence theorem. Finally, in Section 5, we give the conclusions of this paper.

Preliminaries
Let us review some important definitions and lemmas for proving the convergence theorem. Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖. Let h : H → R ∪ {+∞} be a proper, lower semicontinuous (l.s.c.), and convex function. The domain of h is defined by dom h := {u ∈ H | h(u) < +∞}. For any u ∈ H, the orthogonal projection of u onto a nonempty, closed, and convex subset C of H is defined by

P_C u := argmin_{a∈C} ‖u − a‖.

Lemma 1 ([21]). Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Then, for any u ∈ H, we have (i) ⟨u − P_C u, a − P_C u⟩ ≤ 0 for all a ∈ C; (ii) ‖P_C u − a‖² ≤ ‖u − a‖² − ‖u − P_C u‖² for all a ∈ C.

The directional derivative of h at u in the direction d is

h′(u; d) := lim_{t↓0} (h(u + td) − h(u)) / t.
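Two standard projections that admit closed forms, the Euclidean ball and the box, illustrate Lemma 1; this Python sketch (names and test points are illustrative, not from the paper) also verifies property (i) numerically.

```python
import numpy as np

def project_ball(u, center, radius):
    """Orthogonal projection of u onto the closed ball B(center, radius)."""
    d = u - center
    n = np.linalg.norm(d)
    return u.copy() if n <= radius else center + (radius / n) * d

def project_box(u, lo, hi):
    """Orthogonal projection onto the box {x : lo <= x <= hi} (clipping)."""
    return np.clip(u, lo, hi)

# Property (i) of Lemma 1: <u - P_C u, a - P_C u> <= 0 for every a in C
u = np.array([3.0, 4.0])
p = project_ball(u, np.zeros(2), 1.0)  # p = (0.6, 0.8)
a = np.array([0.0, 1.0])               # a lies in the unit ball
assert np.dot(u - p, a - p) <= 1e-12
```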

Definition 1. The subdifferential of h at u is defined by

∂h(u) := {z ∈ H | h(v) ≥ h(u) + ⟨z, v − u⟩ for all v ∈ H}.
It is known that ∂h is maximal monotone, and if h is differentiable, then ∂h reduces to the gradient of h, denoted by ∇h. Moreover, ∇h is monotone, that is, ⟨∇h(u) − ∇h(v), u − v⟩ ≥ 0 for all u, v ∈ H. From (4), we also know that u ∈ S* if and only if u = P_Ω(prox_{cg}(u − c∇f(u))), where c > 0 and prox_{cg} = (Id + c∂g)^{−1}.
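As an illustrative check (not from the paper), the resolvent identity prox_{cg} = (Id + c∂g)^{−1} says that v = prox_{cg}(x) exactly when (x − v)/c ∈ ∂g(v). For g = ‖·‖₁, whose subdifferential is known componentwise, this can be verified numerically:

```python
import numpy as np

def prox_l1(x, c):
    """prox_{c||.||_1}(x): soft-thresholding, i.e. the resolvent (Id + c d||.||_1)^{-1}."""
    return np.sign(x) * np.maximum(np.abs(x) - c, 0.0)

def in_subdiff_l1(v, z, tol=1e-12):
    """Componentwise test for z in the subdifferential of ||.||_1 at v:
    z_i = sign(v_i) if v_i != 0, and z_i in [-1, 1] if v_i == 0."""
    return all(abs(zi - np.sign(vi)) <= tol if vi != 0 else abs(zi) <= 1 + tol
               for vi, zi in zip(v, z))

x = np.array([1.5, -0.2, 0.7])
c = 0.5
v = prox_l1(x, c)                      # v = [1.0, 0.0, 0.2]
assert in_subdiff_l1(v, (x - v) / c)   # (x - v)/c lies in the subdifferential at v
```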

Lemma 3. Let (a_k), (b_k), and (r_k) be real positive sequences such that

a_{k+1} ≤ (1 + r_k) a_k + b_k for all k ∈ N.

If Σ_k r_k < +∞ and Σ_k b_k < +∞, then lim_{k→∞} a_k exists.

Lemma 4 ([23]). Let (a_k) and (θ_k) be real positive sequences such that a_{k+1} ≤ (1 + θ_k) a_k + θ_k a_{k−1} for all k ∈ N. Then a_{k+1} ≤ K · Π_{j=1}^{k} (1 + 2θ_j), where K = max{a_1, a_2}. In particular, if Σ_k θ_k < +∞, then (a_k) is bounded.

Definition 2. Let S be a nonempty subset of H. A sequence (u_k) in H is said to be quasi-Fejér convergent to S if and only if for all u ∈ S there exists a positive sequence (ε_k) with Σ_k ε_k < +∞ such that ‖u_{k+1} − u‖² ≤ ‖u_k − u‖² + ε_k for all k ∈ N.

Lemma 5 ([21,24]). If (u_k) is quasi-Fejér convergent to S, then we have: (i) (u_k) is bounded; (ii) if all weak accumulation points of (u_k) are in S, then (u_k) converges weakly to a point in S.

Results
In this section, we suggest a new projected forward-backward algorithm and establish its weak convergence. The following conditions are assumed:

(A1) f, g : H → R ∪ {+∞} are proper, l.s.c., and convex functions, and f is differentiable on H;
(A2) ∇f is uniformly continuous on bounded subsets of H and is bounded on any bounded subset of H.

Next, we prove the weak convergence theorem for the proposed algorithm.

Theorem 1. Let (u_k) be a sequence generated by Algorithm 5 under assumptions (A1) and (A2). Then (u_k) converges weakly to a point in S*.

Algorithm 5
Initialization: Let Ω be a nonempty closed convex subset of H. Choose u_0, u_1 ∈ H, σ > 0, δ, φ ∈ (0, 1), and an inertial sequence (θ_k) ⊂ [0, 1).
Step 1. Compute the inertial point

w_k = u_k + θ_k(u_k − u_{k−1})

and

v_k = prox_{α_k g}(w_k − α_k ∇f(w_k)),

where α_k = σφ^{m_k} and m_k is the smallest nonnegative integer such that

α_k ‖∇f(v_k) − ∇f(w_k)‖ ≤ δ ‖v_k − w_k‖.

Step 2. Set u_{k+1} = P_Ω(v_k), let k := k + 1, and return to Step 1.

Proof. Let u* be a solution in S*. By the definition of the proximal operator and of v_k, the convexity of g, and the convexity of f, we obtain the estimates (12)-(14). Using (13) and (14) with any u ∈ H and y = w_k yields (15). From (12) and (15), we deduce (16), and hence

‖u_{k+1} − u*‖ ≤ (1 + θ_k) ‖u_k − u*‖ + θ_k ‖u_{k−1} − u*‖.

By Lemma 4, the sequence (‖u_k − u*‖) is bounded. Next, combining (15)-(17), it is easily seen that w_k − u_k → 0, and hence u_{k+1} − u_k → 0. On the other hand, it follows that (u_k) is quasi-Fejér convergent to S*; in particular, (18) holds with M = sup{‖u_k − u*‖ | k ∈ N} < +∞.

Since (u_k) is bounded, the set of its weak accumulation points is nonempty. Take any weak accumulation point ū of (u_k). Then there is a subsequence (u_{k_n}) of (u_k) weakly converging to ū, and (w_{k_n}) also weakly converges to ū. Since (u_{k_n}) is bounded and w_{k_n} − v_{k_n} → 0, assumption (A2) gives ∇f(w_{k_n}) − ∇f(v_{k_n}) → 0. Since v_{k_n} = prox_{α_{k_n} g}(w_{k_n} − α_{k_n} ∇f(w_{k_n})), it follows from (7) that (19) holds. Passing n → ∞ in (19), we obtain from (18) and Lemma 2 that 0 ∈ ∂(f + g)(ū). Thus, ū ∈ S*. By Lemma 5 (ii), we conclude that (u_k) converges weakly to a point in S*.
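A minimal Python sketch of the scheme just analyzed — inertial extrapolation, proximal step with linesearch, then projection — is given below on a toy constrained problem; the function names, default parameters, and the test instance are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

def algorithm5(u0, u1, grad_f, prox_g, project, theta,
               sigma=1.0, phi=0.5, delta=0.4, iters=100):
    """Sketch of the projected inertial forward-backward scheme:
    w_k = u_k + theta_k (u_k - u_{k-1})               (inertial step)
    v_k = prox_{alpha_k g}(w_k - alpha_k grad f(w_k)) (forward-backward step)
    u_{k+1} = P_Omega(v_k)                            (projection step)
    with alpha_k = sigma * phi**m_k chosen by the linesearch."""
    u_prev, u = u0, u1
    for k in range(iters):
        w = u + theta(k) * (u - u_prev)
        gw = grad_f(w)
        m = 0
        while True:  # backtracking: smallest m satisfying the linesearch inequality
            alpha = sigma * phi ** m
            v = prox_g(w - alpha * gw, alpha)
            if alpha * np.linalg.norm(grad_f(v) - gw) <= delta * np.linalg.norm(v - w):
                break
            m += 1
        u_prev, u = u, project(v)
    return u

# Toy instance: minimize f(u) = 0.5*||u - c||^2 (g = 0) over the box [0,1]^2;
# the solution is the projection of c onto the box, i.e. (1, 0).
c = np.array([2.0, -0.5])
sol = algorithm5(np.zeros(2), np.zeros(2),
                 grad_f=lambda u: u - c,
                 prox_g=lambda z, a: z,       # g = 0, proximal map is the identity
                 project=lambda z: np.clip(z, 0.0, 1.0),
                 theta=lambda k: 0.0)         # inertia switched off for clarity
```

For this smooth toy problem the linesearch settles on α_k = σφ² = 0.25 (the first step size with α ≤ δ), and the iterates reach the constrained minimizer after a few projections.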

Numerical Experiments
In this section, we apply our result to an image inpainting problem, which can be written in the form (4) with the data-fidelity term f(u) = (1/2)‖A(u) − A(u_0)‖²_F and a convex regularizer g weighted by µ > 0, where u_0 ∈ R^{M×N} (M < N) is the original image, A is a linear map that selects a subset of the entries of an M × N matrix by setting each unknown entry to 0, and A(u_0) is the matrix of known entries.
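The masking operator A and data term can be sketched as follows; the nuclear-norm regularizer used here (with its singular-value soft-thresholding proximal map) is an assumed choice consistent with low-rank inpainting models, since the paper's display is not reproduced, and the tiny 2×2 example is purely illustrative.

```python
import numpy as np

def make_mask_op(mask):
    """A(u) keeps the known entries (mask == 1) and zeroes the unknown ones."""
    return lambda u: mask * u

def prox_nuclear(u, alpha):
    """Proximal map of alpha*||.||_* : soft-thresholding of the singular values."""
    U, s, Vt = np.linalg.svd(u, full_matrices=False)
    return (U * np.maximum(s - alpha, 0.0)) @ Vt

# Data term f(u) = 0.5*||A(u) - b||_F^2 with b = A(u0).
# Since the mask operator is self-adjoint and idempotent, grad f(u) = A(u) - b.
mask = np.array([[1, 0], [1, 1]], dtype=float)
u0 = np.array([[1.0, 2.0], [3.0, 4.0]])
A = make_mask_op(mask)
b = A(u0)
grad_f = lambda u: A(u) - b
```

With these two ingredients, Algorithm 5's proximal step for g = µ‖·‖_* is a single SVD shrinkage per iteration.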

Algorithm 6: Forward-backward algorithm for image inpainting.
To measure the quality of images, we consider the signal-to-noise ratio (SNR) and the structural similarity index (SSIM) [28], which are given by

SNR = 10 log_10( ‖u‖²_F / ‖u − u_r‖²_F )

and

SSIM = ((2 a_u a_{u_r} + c_1)(2 σ_{u u_r} + c_2)) / ((a_u² + a_{u_r}² + c_1)(σ_u² + σ_{u_r}² + c_2)), (24)

where u is the original image, u_r is the restored image, a_u and a_{u_r} are the mean values of u and u_r, respectively, σ_u² and σ_{u_r}² are their variances, σ_{u u_r} is the covariance of the two images, c_1 = (0.01 L)², c_2 = (0.03 L)², and L is the dynamic range of the pixel values. SSIM ranges from 0 to 1, where 1 means perfect recovery.

Next, we analyze the convergence of Algorithm 5, including the effects of the parameters δ, φ, and σ proposed in it. We now present the corresponding numerical results (the number of iterations is denoted by Iter and the CPU time by CPU). First, we investigate the effect of δ. We set the parameters with the inertial sequence

θ_k = (t_k − 1)/t_{k+1}, where t_1 = 1 and t_{k+1} = (1 + √(1 + 4t_k²))/2. (25)

From Table 1, we observe that the SNR and SSIM of Algorithm 5 increase as the parameter δ approaches 0.5, while the CPU time decreases as δ tends to 0.5.

[Figure captions: restored images in Table 1 for δ = 0.5 (SNR = 22.8626, SSIM = 0.9476); in Table 2 for φ = 0.5 (SNR = 23.0594, SSIM = 0.9479); in Table 3 for σ = 5 (SNR = 22.9865, SSIM = 0.9477). For the second test image: Table 1 for δ = 0.5 (SNR = 26.3994, SSIM = 0.9210); Table 2 for φ = 0.5 (SNR = 26.4002, SSIM = 0.9210); Table 3 for σ = 0.5 (SNR = 26.4084, SSIM = 0.9210).]
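For completeness, these metrics can be computed directly; the SNR convention below (10 log₁₀ of the Frobenius energy ratio) is a common choice assumed here, and the SSIM is the single-window (global) version of Eq. (24) rather than the usual windowed average.

```python
import numpy as np

def snr(u, ur):
    """SNR in dB: 10*log10(||u||_F^2 / ||u - u_r||_F^2)."""
    return 10.0 * np.log10(np.sum(u ** 2) / np.sum((u - ur) ** 2))

def ssim_global(u, ur, L=255.0):
    """Global (single-window) SSIM following Eq. (24)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_u, mu_r = u.mean(), ur.mean()
    var_u, var_r = u.var(), ur.var()
    cov = ((u - mu_u) * (ur - mu_r)).mean()
    return ((2 * mu_u * mu_r + c1) * (2 * cov + c2)) / \
           ((mu_u ** 2 + mu_r ** 2 + c1) * (var_u + var_r + c2))
```

By construction, ssim_global(u, u) evaluates to 1 (perfect recovery), matching the interpretation above.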
Next, we discuss the effect of φ. The numerical experiments are given in Table 2. From Table 2, we observe that the SNR, SSIM, and CPU time of Algorithm 5 increase as the parameter φ approaches 0.5.
Next, we discuss the effect of σ. The numerical experiments are given in Table 3. From Table 3, we observe that the SNR, SSIM, and CPU time of Algorithm 5 increase as σ increases. The SNR values and the reconstructed images are shown in Figures 3-5, and the original and input images are shown in Figure 6. Now, we present the performance of Algorithm 5 and compare it with the projected versions of Algorithm 1 [13] and Algorithm 2 [14]. The initial points u_0 and u_1 are chosen to be zero; we let α_k = 1/‖A‖² and λ_k = 0.09 in Algorithm 1, and σ = 0.1, δ = 0.13, φ = 0.5, and θ_k defined by (25) in Algorithms 2 and 5, respectively. The numerical results are shown in Table 4. From Table 4, we see that Algorithm 5 outperforms Algorithms 1 and 2 in terms of SNR and SSIM in all cases.
The inpainted images at the 260th and 310th iterations are shown in Figures 7-9, respectively.

Conclusions
In this research, we investigated an inertial projected forward-backward algorithm using a linesearch for constrained minimization problems. A weak convergence result was proved under suitable control conditions. The proposed algorithm does not need to compute the Lipschitz constant of the gradient. We applied our results to image inpainting and presented the effects of all parameters assumed in our method.
For our future research, we aim to find a new linesearch technique that does not require the Lipschitz continuity assumption on the gradient of the function. We note that the proposed algorithm depends on the computation of the projection, which may be difficult to compute in some cases. It would be interesting to construct new algorithms that do not involve the projection.
Author Contributions: Funding acquisition and supervision, S.S.; writing-original draft preparation, K.K.; writing-review and editing and software, P.C. All authors have read and agreed to the published version of the manuscript.