Abstract
In algorithm development, symmetry plays a vital role in managing optimization problems arising in scientific models. The aim of this work is to propose a new accelerated method for finding a common point of convex minimization problems and then to use the fixed point equation of the forward-backward operator to establish a weak convergence result for the proposed algorithm in real Hilbert spaces under certain conditions. As applications, we apply the suggested method to image inpainting and image restoration problems.
Keywords:
Hilbert space; forward-backward algorithm; convergence theorems; convex minimization problems; fixed point MSC:
47H10; 47J25; 65K05; 90C30
1. Introduction
In this study, let $H$ be a real Hilbert space with an inner product $\langle \cdot,\cdot\rangle$ and the induced norm $\|\cdot\|$. Let $\mathbb{N}$ be the set of all positive integers and $\mathbb{R}$ the set of all real numbers. The operator $I$ denotes the identity operator. Weak and strong convergence are denoted by the symbols ⇀ and →, respectively.
In recent years, the convex minimization problem in the form of the sum of two convex functions has played an important role in solving real-world problems such as signal and image processing, machine learning and medical image reconstruction; see [,,,,,,,,,], for instance. This problem can be written in the following form:
$$\min_{x \in H} \; f(x) + g(x), \tag{1}$$
where $f : H \to \mathbb{R}$ is a convex and differentiable function such that $\nabla f$ is $L$-Lipschitz continuous and $g : H \to \mathbb{R}\cup\{+\infty\}$ is a convex and proper lower semi-continuous function. Symmetry, or invariance, serves as the foundation for the solution of problem (1). The solution set of problem (1) coincides with the solution set of the fixed point Equation (2),
$$x = \operatorname{prox}_{\alpha g}\bigl(x - \alpha \nabla f(x)\bigr), \tag{2}$$
where $\alpha > 0$, $\operatorname{prox}_{\alpha g}$ is the proximity operator of $\alpha g$ and $\nabla f$ stands for the gradient of $f$. It is known that if the step size $\alpha \in (0, 2/L)$, then $\operatorname{prox}_{\alpha g}(I - \alpha \nabla f)$ is nonexpansive. For the past decade, many algorithms based on the fixed point method have been proposed to solve problem (1); see [,,,,,,].
Lions and Mercier proposed the forward-backward splitting (FBS) algorithm [] as follows:
$$x_{n+1} = \operatorname{prox}_{\alpha g}\bigl(x_n - \alpha \nabla f(x_n)\bigr), \quad n \in \mathbb{N},$$
where $x_1 \in H$ and $\alpha \in (0, 2/L)$.
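As a concrete sketch of the forward-backward iteration above (our own minimal illustration, not the authors' code), consider the instance $f(x) = \frac{1}{2}\|Ax - b\|^2$ and $g(x) = \lambda\|x\|_1$, for which the backward step is componentwise soft-thresholding:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximity operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fbs(A, b, lam, n_iter=200):
    """Forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1.
    The step size alpha is taken in (0, 2/L), where L = ||A||_2^2 is the
    Lipschitz constant of the gradient of the smooth part."""
    L = np.linalg.norm(A, 2) ** 2
    alpha = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                           # forward (gradient) step
        x = soft_threshold(x - alpha * grad, alpha * lam)  # backward (proximal) step
    return x
```

For instance, with $A = I$ the iteration reaches the closed-form solution, the soft-thresholded data, in a single step.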
Combettes and Wajs [] studied the relaxed forward-backward splitting (R-FBS) method in 2005, which was defined as follows:
$$x_{n+1} = (1 - \lambda_n)x_n + \lambda_n \operatorname{prox}_{\alpha_n g}\bigl(x_n - \alpha_n \nabla f(x_n)\bigr),$$
where $x_1 \in H$, $\alpha_n \in (0, 2/L)$, and $\lambda_n \in (0, 1]$.
An inertial technique is often used to speed up the forward-backward splitting procedure. As a result, numerous inertial algorithms have been created and explored in order to speed up convergence behavior; see [,,,] for example. Beck and Teboulle [] published FISTA, a fast iterative shrinkage-thresholding algorithm, to solve problem (1). FISTA takes the following form:
$$x_n = \operatorname{prox}_{\frac{1}{L}g}\Bigl(y_n - \tfrac{1}{L}\nabla f(y_n)\Bigr), \quad t_{n+1} = \frac{1 + \sqrt{1 + 4t_n^2}}{2}, \quad y_{n+1} = x_n + \frac{t_n - 1}{t_{n+1}}(x_n - x_{n-1}),$$
where $t_1 = 1$ and $y_1 = x_0 \in H$. It is worth noting that $\theta_n = \frac{t_n - 1}{t_{n+1}}$ is an inertial parameter that determines the momentum of the iteration.
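The FISTA steps above can be sketched as follows for the same $\ell_1$-regularized least-squares instance as before (a hedged illustration with our own naming; the momentum coefficient is computed exactly as in the scheme above):

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    """FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 (Beck-Teboulle scheme)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        z = y - A.T @ (A @ y - b) / L      # forward step at the extrapolated point
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # inertial (momentum) step
        x, t = x_new, t_new
    return x
```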
In this work, we are interested in constructing a new accelerated algorithm for finding a common point of the convex minimization problems (6) by using inertial and fixed point techniques for forward-backward operators:
$$\min_{x \in H} f_1(x) + g_1(x) \quad \text{and} \quad \min_{x \in H} f_2(x) + g_2(x), \tag{6}$$
where $f_1, f_2$ are convex and differentiable functions with Lipschitz continuous gradients and $g_1, g_2$ are convex and proper lower semi-continuous functions. Then, we prove a weak convergence result for the proposed algorithm in real Hilbert spaces under certain conditions and illustrate the theoretical results via some numerical experiments on image inpainting and image restoration problems.
2. Preliminaries
Basic concepts, definitions, notations and some relevant lemmas for use in the following sections are discussed here.
Let $g : H \to \mathbb{R}\cup\{+\infty\}$ be a convex and proper lower semi-continuous function. The proximity operator $\operatorname{prox}_g$ can be written in the equivalent form:
$$\operatorname{prox}_{g}(x) = (I + \partial g)^{-1}(x),$$
where $\partial g$ is the subdifferential of $g$ given by
$$\partial g(x) = \bigl\{u \in H : g(x) + \langle u, y - x\rangle \le g(y) \ \text{for all } y \in H\bigr\}.$$
We notice that $\operatorname{prox}_{\iota_C} = P_C$, where $C$ is a nonempty closed convex set, $\iota_C$ is the indicator function of $C$, and $P_C$ is the orthogonal projection operator onto $C$. The subdifferential operator $\partial g$ is maximal monotone (for additional information, see []), and the solution of (1) is a fixed point of the operator below:
$$\operatorname{prox}_{\alpha g}(I - \alpha \nabla f),$$
where $\alpha \in (0, 2/L)$ and $\operatorname{Fix}\bigl(\operatorname{prox}_{\alpha g}(I - \alpha \nabla f)\bigr)$ is the solution set for problem (1).
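Two standard instances of the proximity operator mentioned above can be sketched in code (our own minimal helpers, not from the paper): the proximity operator of the indicator of a box is the orthogonal projection (clipping), and the proximity operator of $t\|\cdot\|_1$ follows from the subdifferential optimality condition as componentwise soft-thresholding:

```python
import numpy as np

def prox_box_indicator(x, lo, hi):
    """prox of the indicator of C = [lo, hi]^n: the orthogonal projection onto C."""
    return np.clip(x, lo, hi)

def prox_l1(x, t):
    """prox of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```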
The following Lipschitz continuous and nonexpansive operators are considered. An operator $T : H \to H$ is called Lipschitz continuous if there exists $L > 0$ such that
$$\|Tx - Ty\| \le L\|x - y\| \quad \text{for all } x, y \in H.$$
When $T$ is 1-Lipschitz continuous, it is referred to as nonexpansive. If $Tx = x$, a point $x$ is called a fixed point of $T$, and $\operatorname{Fix}(T)$ denotes the set of fixed points of $T$.
An operator $A : H \to H$ is called demiclosed at zero if, whenever a sequence $\{x_n\}$ converges weakly to $z$ and the sequence $\{Ax_n\}$ converges strongly to zero, then $Az = 0$. If $T$ is a nonexpansive operator, then $I - T$ is known to be demiclosed at zero [].
Let $\{T_n\}$ and $T$ be nonexpansive operators on $H$ such that $\emptyset \neq \operatorname{Fix}(T) \subset \bigcap_{n=1}^{\infty}\operatorname{Fix}(T_n)$. Then, $\{T_n\}$ is said to satisfy NST-condition (I) with $T$ [] if, for each bounded sequence $\{x_n\}$, $\lim_{n\to\infty}\|x_n - T_n x_n\| = 0$ implies $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$.
The following basic property on $H$ will be used in this study (see []): for all $x, y \in H$ and $\lambda \in [0, 1]$,
$$\|\lambda x + (1-\lambda)y\|^2 = \lambda\|x\|^2 + (1-\lambda)\|y\|^2 - \lambda(1-\lambda)\|x - y\|^2.$$
Lemma 1
([]). Let $f : H \to \mathbb{R}$ be a convex and differentiable function such that $\nabla f$ is $L$-Lipschitz continuous and let $g$ be a convex and proper lower semi-continuous function. Let $T_n := \operatorname{prox}_{\alpha_n g}(I - \alpha_n \nabla f)$ and $T := \operatorname{prox}_{\alpha g}(I - \alpha \nabla f)$, where $\alpha_n \to \alpha$ with $\alpha_n, \alpha \in (0, 2/L)$. Then $\{T_n\}$ satisfies NST-condition (I) with $T$.
Lemma 2
([]). Let and be two sequences of non-negative real numbers such that
Then . Moreover, if then is bounded.
Lemma 3
([]). Let $\{a_n\}$ and $\{b_n\}$ be two sequences of non-negative real numbers such that
$$a_{n+1} \le a_n + b_n$$
for all $n \in \mathbb{N}$. If $\sum_{n=1}^{\infty} b_n < \infty$, then $\lim_{n\to\infty} a_n$ exists.
Lemma 4
([]). Let $\{x_n\}$ be a sequence in $H$ and let $\emptyset \neq \Omega \subset H$ satisfy:
- (I)
- for every $z \in \Omega$, $\lim_{n\to\infty}\|x_n - z\|$ exists;
- (II)
- $\omega_w(x_n) \subset \Omega$, where $\omega_w(x_n)$ is the set of all weak-cluster points of $\{x_n\}$.
Then, $\{x_n\}$ converges weakly to a point in $\Omega$.
3. Main Results
In this section, we suggest an inertial forward-backward splitting algorithm for finding common points of convex minimization problems and prove weak convergence of the proposed algorithm. The assumptions used throughout this section are as follows:
- ►
- $f_1$ and $f_2$ are convex and differentiable functions from $H$ to $\mathbb{R}$;
- ►
- $\nabla f_1$ and $\nabla f_2$ are Lipschitz continuous with constants $L_1$ and $L_2$, respectively;
- ►
- $g_1$ and $g_2$ are convex and proper lower semi-continuous functions from $H$ to $\mathbb{R}\cup\{+\infty\}$;
- ►
Remark 1.
Let If then and are nonexpansive operators with Moreover, if then Lemma 1 asserts that satisfies NST-condition (I) with .
| Algorithm 1: Given: Choose and |
Fordo end for. |
Next, the convergence result of Algorithm 1 can be shown as follows:
Theorem 1.
Let be the sequence created by Algorithm 1. Suppose that and are the sequences which satisfy the following conditions:
- (A1)
- for some with and ;
- (A2)
- and ;
- (A3)
- such that and as .
Then, the following hold:
- (i)
- where and
- (ii)
- converges weakly to a common point in
Proof.
Define operators as follows:
Then, Algorithm 1 can be written as follows:
Let By (10), we have
By (11)–(13) and the nonexpansiveness of and we have
This implies
When we apply Lemma 2 to the Equation (15), we obtain where Hence, the proof of (i) is now complete.
By (15) and condition (A2), we have that is bounded. This implies By (14) and Lemma 3, we obtain that exists. By (9) and (10), we obtain
By (8), (11) and the nonexpansiveness of we obtain
By (8), (12), (16), (17) and the nonexpansiveness of and we have
From (18) and by condition (A1), (A2), and exists, we obtain
From (19), we obtain
From (10) and , we have
Since is bounded, we have . By (19) and (21), we obtain By Condition (A3) and Remark 1, we know that and satisfies NST-condition (I) with and , respectively. From (19), (20) and by using the demiclosedness of and we obtain From Lemma 4, we conclude that converges weakly to a point in This completes the proof. □
Open Problem: Can the step sizes be chosen independently of the Lipschitz constants of the gradients of the functions while still obtaining the convergence result of the proposed algorithm?
If we set and for all then Algorithm 1 is reduced to Algorithm 2.
| Algorithm 2: Given: Choose and |
Fordo end for. |
The following result is immediately obtained from Theorem 1.
Corollary 1.
Let be the sequence created by Algorithm 2. Suppose that and are the sequences which satisfy the following conditions:
- (A1)
- , for some with and ;
- (A2)
- and ;
- (A3)
- such that as .
Then the following hold:
- (i)
- where and
- (ii)
- converges weakly to a point in
4. Applications
In this part, we apply Algorithm 1 to solving the constrained image inpainting problem (22) and Algorithm 2 to solving the image restoration problem (24). As the image quality metric, we utilize the peak signal-to-noise ratio (PSNR) in decibels (dB) [], which is formulated as follows:
$$\mathrm{PSNR} = 10\log_{10}\!\Bigl(\frac{255^2}{\mathrm{MSE}}\Bigr), \qquad \mathrm{MSE} = \frac{1}{M}\|x - z\|_F^2,$$
where $z$ and $M$ are the original image and the number of image samples, respectively. All experimental simulations are performed in MATLAB R2022a on a PC with an Intel Core i5 processor and 4.00 GB of RAM running Windows 8 64-bit.
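A direct computation of this metric can be sketched as follows (our own helper, assuming 8-bit images with peak value 255):

```python
import numpy as np

def psnr(x, z, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a restored image x and the
    original image z; the mean squared error averages over all M samples."""
    mse = np.mean((np.asarray(x, dtype=float) - np.asarray(z, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```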
4.1. Image Inpainting Problems
In this experiment, we apply the Algorithm 1 to solving the following constrained image inpainting problems []:
where is a given image, are observed, is a subset of the index set , which indicates where data are available in the image domain (the remaining entries are missing), and is defined by
In Algorithm 1, we set
where $\mu > 0$ is a regularization parameter, $\|\cdot\|_F$ is the Frobenius matrix norm and $\|\cdot\|_*$ is the nuclear matrix norm. Then $f$ is convex and differentiable with a 1-Lipschitz continuous gradient. We note that the proximity operator of the nuclear norm can be computed by the singular value decomposition (SVD), see [], and the proximity operator of the indicator function is the orthogonal projection onto the closed convex set. Therefore, Algorithm 1 reduces to Algorithm 3, which can be used for solving the constrained image inpainting problem (22):
| Algorithm 3: Given: Choose and |
Fordo end for. |
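The SVD-based proximity operator of the nuclear norm used above (singular value thresholding) can be sketched as follows; a minimal illustration with our own naming:

```python
import numpy as np

def prox_nuclear(X, t):
    """Proximity operator of t*||.||_* : soft-threshold the singular values
    of X and recompose (singular value thresholding, SVT)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt
```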
Using a standard gallery image, we marked and fixed the damaged portion of the image, and we compared Algorithm 3 under different inertial parameter settings. The details of the parameters for Algorithm 3 are as follows:
where is a positive integer depending on the number of iterations of Algorithm 3.
The regularization parameter was set to and the stopping criterion is as follows:
where is a given small constant. The number of iterations is indicated by Iter., and CPU time is indicated by CPU (seconds). We use the parameter selection Cases I–V in Table 1 to evaluate the performance of Algorithm 3. Table 2 displays the results achieved. We observe from Table 2 that, when the stopping criterion is met or at the 2000th iteration, Algorithm 3 with the inertial parameter of Case V outperforms the other cases in terms of PSNR. We may infer from Table 2 that Algorithm 3 is more effective at recovering images when inertial parameters are added. The test image and the restored images are shown in Figure 1 and Figure 2.
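A stopping criterion of this relative-change type is commonly implemented as follows (a hypothetical helper, under the assumption that the criterion compares the change between consecutive iterates to the norm of the current iterate):

```python
import numpy as np

def stop_by_relative_change(x_new, x_old, tol):
    """Return True when ||x_{n+1} - x_n|| / ||x_{n+1}|| < tol."""
    denom = np.linalg.norm(x_new)
    return denom > 0.0 and np.linalg.norm(x_new - x_old) / denom < tol
```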
Table 1.
The different inertial parameter settings.
Table 2.
Results of comparing the selection of inertial parameters in terms of number of iterations, CPU time, PSNR, and the stopping criteria for Algorithm 3.
Figure 1.
Test image.
Figure 2.
The painted image and restored images. (a) The painted image; (b–f) images recovered for Cases I through V, respectively.
To solve a general convex optimization problem, consider a model given by the sum of three convex functions:
where and are convex and proper lower semi-continuous functions and is a differentiable function with a -Lipschitz continuous gradient. Cui et al. introduced an inertial three-operator splitting (iTOS) algorithm [], which can be applied to solving the constrained image inpainting problem (22).
In the next experiment, we set for Algorithm 4 (the iTOS algorithm) and use the parameter selection in Table 3 to evaluate its performance. Table 3 displays the results achieved. We observe from Table 2 and Table 3 that, when the stopping criterion is met or at the 2000th iteration, Algorithm 3 with the inertial parameter of Case V outperforms all cases of the iTOS algorithm in terms of PSNR.
| Algorithm 4: An inertial three-operator splitting (iTOS) algorithm []. |
Let and where For , let
where is nondecreasing with and for all and such that
Table 3.
Results of comparing the selection of parameters in terms of number of iterations, CPU time, PSNR, and the stopping criteria for iTOS algorithm.
4.2. Image Restoration Problems
In this experiment, we apply Algorithm 2 to solving image restoration problems using the LASSO model []:
$$\min_{x}\;\tfrac{1}{2}\|Ax - b\|_2^2 + \lambda\|x\|_1, \tag{24}$$
where $\lambda > 0$, $\|\cdot\|_1$ is the $\ell_1$-norm and $\|\cdot\|_2$ is the Euclidean norm.
In Algorithm 2, we set where is the observed image and , where R and W are the kernel matrix and the 2-D fast Fourier transform, respectively.
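The action of such a blur operator and its adjoint, built from a kernel and the 2-D fast Fourier transform as described above, can be sketched as follows (a hedged illustration assuming periodic boundary conditions; the function names are our own):

```python
import numpy as np

def make_blur_ops(kernel_fft):
    """Build a circular-convolution blur operator A and its adjoint A^T,
    both applied via the 2-D FFT; kernel_fft is the 2-D FFT of the
    zero-padded point-spread function."""
    def A(x):
        return np.real(np.fft.ifft2(kernel_fft * np.fft.fft2(x)))
    def AT(y):
        return np.real(np.fft.ifft2(np.conj(kernel_fft) * np.fft.fft2(y)))
    return A, AT
```

With the FFT of a unit impulse (all ones in the frequency domain), both operators reduce to the identity, which gives a quick sanity check.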
We will use two test images (Pepper and Bird, with sizes of and , respectively) to exhibit two scenarios of blurring processes in Table 4 and add random Gaussian white noise , with the original and blurred images shown in Figure 3.
Table 4.
Blurring processes in detail.
Figure 3.
The original and blurred images of Pepper and Bird.
We examine and compare the efficiency of our algorithm (Algorithm 2 := ALG 2) with that of the FBS, R-FBS and FISTA algorithms. The image restoration performance of the examined methods is then tested by setting as described in (25) and using blurred images as starting points. For all algorithms, the maximum number of iterations is set to 300. The regularization parameter in the LASSO model (24) is set to . The parameters for the studied algorithms are as follows:
where is a positive integer depending on the number of iterations of Algorithm 2.
Figure 4, Figure 5, Figure 6 and Figure 7 present the deblurred test images produced by the studied algorithms. In Figure 8, we see that the PSNR graph of Algorithm 2 is higher than the others, which means that the images restored by Algorithm 2 are of better quality than those of the other methods. The number of iterations is indicated by Iter., and CPU time is indicated by CPU (seconds).
Figure 4.
The PSNR, Iter. and CPU of the FBS, R-FBS, FISTA and ALG 2 for scenario I of the Pepper.
Figure 5.
The PSNR, Iter. and CPU of the FBS, R-FBS, FISTA and ALG 2 for scenario II of the Pepper.
Figure 6.
The PSNR, Iter. and CPU of the FBS, R-FBS, FISTA and ALG 2 for scenario I of the Bird.
Figure 7.
The PSNR, Iter. and CPU of the FBS, R-FBS, FISTA and ALG 2 for scenario II of the Bird.
Figure 8.
The PSNR graphs of the studied algorithms: (a,b) for Pepper; (c,d) for Bird.
5. Conclusions
In this research, an inertial forward-backward splitting algorithm for finding a common point of convex minimization problems is developed. We investigated the weak convergence of the suggested algorithm based on the fixed point equation of the forward-backward operator under some suitable control conditions. Finally, we use numerical simulations to show the benefits of the inertial terms in the studied algorithms for the constrained image inpainting problem (22) and the image restoration problem (24).
Author Contributions
Formal analysis, writing—original draft preparation, methodology, writing—review and editing, A.H.; software, N.P.; conceptualization, supervision, manuscript revision, S.S. All authors have read and agreed to the published version of the manuscript.
Funding
This research project was supported by Rajamangala University of Technology Isan, Contract No. RMUTI/RF/01, the NSRF via the program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183] and Chiang Mai University.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
This research project was supported by Rajamangala University of Technology Isan, Contract No. RMUTI/RF/01, the NSRF via the program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183]. We also would like to thank Chiang Mai University and Rajamangala University of Technology Isan for the partial financial support. N. Pholasa was supported by University of Phayao and Thailand Science Research and Innovation grant no. FF66-UoE.
Conflicts of Interest
The authors declare that they have no competing interests.
References
- Bertsekas, D.P.; Tsitsiklis, J.N. Parallel and Distributed Computation: Numerical Methods; Athena Scientific: Belmont, MA, USA, 1997. [Google Scholar]
- Combettes, P.L.; Pesquet, J.C. A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 2007, 1, 564–574. [Google Scholar] [CrossRef]
- Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef]
- Hanjing, A.; Suantai, S. An inertial alternating projection algorithm for convex minimization problems with applications to signal recovery problems. J. Nonlinear Convex Anal. 2022, 22, 2647–2660. [Google Scholar]
- Lin, L.J.; Takahashi, W. A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 2012, 16, 429–453. [Google Scholar] [CrossRef]
- Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
- Martinet, B. Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. D’Inform. Rech. Oper. 1970, 4, 154–158. [Google Scholar]
- Yatakoat, P.; Suantai, S.; Hanjing, A. On some accelerated optimization algorithms based on fixed point and linesearch techniques for convex minimization problems with applications. Adv. Contin. Discret. Model. 2022, 25. [Google Scholar] [CrossRef]
- Suantai, S.; Jailoka, P.; Hanjing, A. An accelerated viscosity forward-backward splitting algorithm with the linesearch process for convex minimization problems. J. Inequal. Appl. 2021, 42. [Google Scholar] [CrossRef]
- Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 17, 877–898. [Google Scholar] [CrossRef]
- Aremu, K.O.; Izuchukwu, C.; Grace, O.N.; Mewomo, O.T. Multi-step iterative algorithm for minimization and fixed point problems in p-uniformly convex metric spaces. J. Ind. Manag. Optim. 2020, 13, 2161–2180. [Google Scholar] [CrossRef]
- Bot, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487. [Google Scholar] [CrossRef]
- Cui, F.; Tang, Y.; Yang, Y. An inertial three-operator splitting algorithm with applications to image inpainting. arXiv 2019, arXiv:1904.11684. [Google Scholar]
- Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization. Mathematics 2020, 8, 378. [Google Scholar] [CrossRef]
- Thongpaen, P.; Wattanataweekul, R. A fast fixed-point algorithm for convex minimization problems and its application in image restoration problems. Mathematics 2021, 9, 2619. [Google Scholar] [CrossRef]
- Suantai, S.; Kankam, K.; Cholamjiak, P. A novel forward-backward algorithm for solving convex minimization problem in Hilbert spaces. Mathematics 2020, 8, 42. [Google Scholar] [CrossRef]
- Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
- Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 35–44. [Google Scholar] [CrossRef]
- Burachik, R.S.; Iusem, A.N. Set-Valued Mappings and Enlargements of Monotone Operators; Springer Science+Business Media: New York, NY, USA, 2007. [Google Scholar]
- Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef]
- Nakajo, K.; Shimoji, K.; Takahashi, W. On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal. Theory Methods Appl. 2009, 71, 112–119. [Google Scholar] [CrossRef]
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
- Tan, K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308. [Google Scholar] [CrossRef]
- Moudafi, A.; Al-Shemas, E. Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 2013, 1, 1–11. [Google Scholar] [CrossRef]
- Thung, K.; Raveendran, P. A survey of image quality measures. In Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December 2009; pp. 1–4. [Google Scholar]
- Cai, J.F.; Candes, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).