Abstract
In this paper, we introduce a new iterative method using an inertial technique for approximating a common fixed point of an infinite family of nonexpansive mappings in a Hilbert space. A weak convergence theorem for the proposed method is established under suitable conditions. Furthermore, we apply our main results to solve convex minimization problems and image restoration problems.
1. Introduction
Let us first mention a mathematical model for an image restoration problem, as well as some algorithms that will be employed to solve it. A simple linear model of an image restoration problem is the following:
$b = Ax + y, \qquad (1)$
where $A$ is the blurring operator, $x$ is an original image, $b$ is the observed image, and $y$ is additive noise. The image restoration problem is finding the original image $x$ that satisfies (1). It is well known that the image restoration problem is a dominant topic in image processing.
In order to find a solution of problem (1), we minimize the additive noise to approximate the original image by using the method known as the least squares (LS) problem:
$\min_{x} \|b - Ax\|_2^2, \qquad (2)$
where $\|\cdot\|_2$ is the $\ell_2$-norm defined by $\|x\|_2 = \big(\sum_{i}|x_i|^2\big)^{1/2}$. There are many iterative methods that can solve problem (2), such as the Richardson iteration; see [1] for the details. However, the number of unknown variables is typically much larger than the number of observations, which makes (2) an ill-posed problem: the least squares solution can have a huge norm and is thus meaningless; see [2,3]. Therefore, in order to remedy the ill-conditioned least squares problem, several regularization methods were introduced. One of the most popular regularization methods is the Tikhonov regularization [4], which solves the following minimization problem:
$\min_{x} \{\|b - Ax\|_2^2 + \lambda\|Lx\|_2^2\}, \qquad (3)$
where $\lambda > 0$ is called the regularization parameter and $L$ is called the Tikhonov matrix. In the standard form, $L$ is set to be the identity. In statistics, (3) is known as ridge regression. To improve on the original LS problem (2) and on classical regularization techniques such as subset selection and ridge regression (3) for solving (1), Tibshirani [5] defined a new method, called the least absolute shrinkage and selection operator (LASSO) model, in the following form:
$\min_{x} \{\|b - Ax\|_2^2 + \lambda\|x\|_1\}, \qquad (4)$
where $\lambda$ is a positive regularization parameter and $\|x\|_1 = \sum_{i}|x_i|$ is the $\ell_1$-norm. This model can be used to solve problem (1) utilizing optimization methods; see [5,6] for instances. The problem presented in (4) can be extended to the following general formulation:
$\min_{x \in H} \{\phi(x) + \psi(x)\}. \qquad (5)$
Solutions of problem (5) are usually studied under the following assumptions:
- (i) $\psi$ is a proper, convex, lower semicontinuous function from a Hilbert space $H$ into $(-\infty, \infty]$;
- (ii) $\phi$ is a convex differentiable function from $H$ into $\mathbb{R}$ with $\nabla\phi$ being $\ell$-Lipschitz continuous for some $\ell > 0$; that is, $\|\nabla\phi(x) - \nabla\phi(y)\| \le \ell\|x - y\|$ for all $x, y \in H$.
The set of all solutions of problem (5) is denoted by $\operatorname{argmin}(\phi + \psi)$.
It is well known that if $\operatorname{argmin}(\phi + \psi) \neq \emptyset$, then (5) can be reformulated as the problem of finding $x^* \in H$ such that:
$0 \in \nabla\phi(x^*) + \partial\psi(x^*), \qquad (6)$
where $\nabla\phi$ is the gradient operator of the function $\phi$ and $\partial\psi$ is the subdifferential of the function $\psi$; see [7] for more details. Furthermore, Parikh and Boyd [8] solved problem (6) by using the proximal gradient technique; that is, if $x^*$ solves (6), then:
$x^* = \operatorname{prox}_{c\psi}(I - c\nabla\phi)(x^*),$
where $c$ is a positive parameter and $I$ is the identity operator. This means that $x^*$ is a fixed point of the proximal operator. In [9,10,11], the authors established many important properties of proximal operators; for instance, prox is well defined with a full domain, single-valued, and even nonexpansive.
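For intuition, the proximity operator of $\psi = \lambda\|\cdot\|_1$ has a well-known closed form, componentwise soft-thresholding. The following minimal NumPy sketch (the helper name prox_l1 is ours) also checks the nonexpansiveness property mentioned above on random data:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximity operator of lam * ||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Nonexpansiveness check: ||prox(x) - prox(y)|| <= ||x - y||.
rng = np.random.default_rng(0)
x, y = rng.normal(size=100), rng.normal(size=100)
assert np.linalg.norm(prox_l1(x, 0.5) - prox_l1(y, 0.5)) <= np.linalg.norm(x - y)
```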
In addition, the classical forward–backward splitting algorithm (FBS) [12] is generated by $x_1 \in H$ and:
$x_{n+1} = \operatorname{prox}_{c_n\psi}\big(x_n - c_n\nabla\phi(x_n)\big), \quad n \in \mathbb{N}, \qquad (7)$
where $c_n \in (0, 2/\ell)$ is the step size and $I$ is the identity operator, with $\operatorname{prox}_{\psi}$ the proximity operator of $\psi$ defined by $\operatorname{prox}_{\psi}(x) = \operatorname{argmin}_{y \in H}\big(\psi(y) + \frac{1}{2}\|x - y\|^2\big)$; see [13] for more details. Because of its simplicity, the method (7) has been widely utilized to solve problem (5), and as a result, it has been enhanced by many works, as seen in [11,14,15,16].
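As a concrete instance, here is a minimal sketch of (7) for the LASSO model, assuming the commonly used $\frac{1}{2}$-scaled data term $\phi(x) = \frac{1}{2}\|Ax - b\|_2^2$ (the factor only rescales $\lambda$), a constant step size $c_n = 1/\ell$, and prox_l1 from the sketch above:

```python
import numpy as np

def fbs_lasso(A, b, lam, n_iter=500):
    """Forward-backward splitting (7) for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    ell = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of grad(phi)
    c = 1.0 / ell                           # constant step size in (0, 2/ell)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # forward (gradient) step
        x = prox_l1(x - c * grad, c * lam)  # backward (proximal) step
    return x
```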
From the work [8], it is worth noting that fixed-point theory can be applied to solve problem (5). Fixed-point theory plays a very important role in solving many problems in science, data science, economics, medicine, and engineering; see [11,17,18,19,20,21,22,23] for more details. There are several methods for finding approximate solutions of fixed-point problems; see [24,25,26,27,28,29,30]. Shoaib [31] generalized a result of Al-Mazrooei et al. [32] by using new contractive conditions on a closed set in a b-multiplicative metric space and obtained a unique common solution of Fredholm multiplicative integral equations. Recently, Kim [33] introduced a coupled Mann pair iterative scheme for a common coupled fixed point in Hilbert spaces.
In order to accelerate the convergence of the studied methods, Polyak [34] introduced the inertial step, a technique that improves the rate of convergence and gives those methods a better convergence behavior. The following iterative methods with an inertial step can be used to improve the performance of (7).
The inertial forward–backward splitting (IFBS) was presented by Moudafi and Oliny in [15] as follows:
$y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad x_{n+1} = \operatorname{prox}_{c_n\psi}\big(y_n - c_n\nabla\phi(x_n)\big),$
where $c_n < 2/\ell$ and $\theta_n$ is the inertial parameter that controls the momentum $x_n - x_{n-1}$. The convergence of the IFBS can be guaranteed by proper choices of $\theta_n$ and $c_n$.
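In code, the inertial step is a one-line change to the FBS sketch above; following the description of the IFBS, the gradient is still evaluated at $x_n$ while the proximal step starts from the extrapolated point (illustrative names, prox_l1 as before):

```python
import numpy as np

def ifbs_lasso(A, b, lam, theta=0.3, n_iter=500):
    """IFBS: forward-backward splitting preceded by an inertial extrapolation."""
    ell = np.linalg.norm(A, 2) ** 2
    c = 1.0 / ell
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        y = x + theta * (x - x_prev)        # inertial step with momentum x_n - x_{n-1}
        grad = A.T @ (A @ x - b)            # gradient taken at x_n, not at y_n
        x_prev, x = x, prox_l1(y - c * grad, c * lam)
    return x
```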
The fast iterative shrinkage-thresholding algorithm (FISTA) is defined by:
$y_n = x_n + \frac{t_n - 1}{t_{n+1}}(x_n - x_{n-1}), \qquad x_{n+1} = \operatorname{prox}_{\frac{1}{\ell}\psi}\Big(y_n - \frac{1}{\ell}\nabla\phi(y_n)\Big),$
where $t_1 = 1$ and $t_{n+1} = \frac{1 + \sqrt{1 + 4t_n^2}}{2}$. This notion was suggested by Beck and Teboulle [6]. They also proved FISTA's convergence rate and applied it to solve image restoration problems.
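A minimal FISTA sketch with the classical $t$-sequence; in contrast to the IFBS sketch above, the gradient is evaluated at the extrapolated point $y_n$ (prox_l1 as before):

```python
import numpy as np

def fista_lasso(A, b, lam, n_iter=500):
    """FISTA for the LASSO with step size 1/ell and the classical t-sequence."""
    ell = np.linalg.norm(A, 2) ** 2
    x_prev = x = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # inertial extrapolation
        grad = A.T @ (A @ y - b)                      # gradient at y_n
        x_prev, x = x, prox_l1(y - grad / ell, lam / ell)
        t = t_next
    return x
```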
Recently, Verma and Shukla [16] proposed the new accelerated proximal gradient algorithm (NAGA), an inertial variant of (7), where $\theta_n$ is the inertial parameter, which controls the momentum $x_n - x_{n-1}$. The authors proved NAGA's convergence theorem under a suitable condition on $\{\theta_n\}$ and applied it to solve the convex minimization problem for a multitask learning framework using sparsity-inducing regularizers.
Motivated and inspired by all the works mentioned above, in this article, we introduce a new iterative method for the approximation of a common fixed point of an infinite family of nonexpansive mappings in Hilbert spaces. We also prove a weak convergence theorem for the introduced method under suitable conditions. Furthermore, we apply our main results to solving a convex minimization problem and image restoration problems.
This paper is organized as follows: The next section presents some preliminary results that will be utilized throughout the paper. In Section 3, we introduce a new accelerated algorithm using the inertial technique and analyze its weak convergence to a solution of (5). After that, we apply our main results to solving image restoration problems, and some numerical experiments with the proposed methods are given in Section 4. In the last section, we present a brief conclusion of our work.
2. Preliminaries
Throughout this article, let $\mathbb{N}$ and $\mathbb{R}$ be the sets of positive integers and real numbers, respectively. Let $H$ be a real Hilbert space with the inner product $\langle \cdot, \cdot \rangle$ and the norm $\|\cdot\|$ induced by the inner product. The weak and strong convergence of a sequence $\{x_n\}$ in $H$ to $x \in H$ are denoted by $x_n \rightharpoonup x$ and $x_n \to x$, respectively.
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T$ from $C$ into itself is said to be an $\ell$-Lipschitz operator if there exists $\ell > 0$ such that:
$\|Tx - Ty\| \le \ell\|x - y\| \quad \text{for all } x, y \in C.$
If $\ell = 1$, then $T$ is called a nonexpansive operator. The set of all fixed points of $T$ is denoted by $F(T)$; that is, $F(T) = \{x \in C : Tx = x\}$. Let $\mathcal{T}$ and $\{T_n\}$ be families of nonexpansive mappings of $C$ into itself such that $\emptyset \neq F(\mathcal{T}) \subseteq \bigcap_{n=1}^{\infty} F(T_n)$, where $F(\mathcal{T})$ is the set of all common fixed points of $\mathcal{T}$.
A sequence $\{T_n\}$ is said to satisfy the NST condition (I) with $\mathcal{T}$ [36] if, for every bounded sequence $\{z_n\}$ in $C$,
$\lim_{n\to\infty}\|z_n - T_n z_n\| = 0 \ \text{implies} \ \lim_{n\to\infty}\|z_n - Tz_n\| = 0 \ \text{for all } T \in \mathcal{T}.$
Note that $\{T_n\}$ is said to satisfy the NST condition (I) with $T$ when $\mathcal{T} = \{T\}$ is a singleton. After that, the concept of the NST$^{*}$ condition was introduced by Nakajo et al. [37], and examples of mappings that satisfy these conditions were given.
A sequence $\{T_n\}$ is said to satisfy the NST$^{*}$ condition if, for every bounded sequence $\{z_n\}$ in $C$,
$\lim_{n\to\infty}\|z_n - T_n z_n\| = 0 \ \text{and} \ \lim_{n\to\infty}\|z_n - z_{n+1}\| = 0 \ \text{imply} \ \omega_w(z_n) \subset \bigcap_{n=1}^{\infty} F(T_n),$
where $\omega_w(z_n)$ is the set of all weak cluster points of $\{z_n\}$.
Note that the NST$^{*}$ condition is more general than the NST condition (I). It can be directly obtained from the definitions given above that if $\{T_n\}$ satisfies the NST condition (I), then $\{T_n\}$ satisfies the NST$^{*}$ condition.
Lemma 1
([38,39]). Let $H$ be a real Hilbert space. For any $x, y \in H$ and $\lambda \in [0, 1]$, the following results hold:
- (i) $\|x \pm y\|^2 = \|x\|^2 \pm 2\langle x, y \rangle + \|y\|^2$;
- (ii) $\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2$.
The identity in Lemma 1 (ii) implies that the following equality holds:
$\|\lambda x + \mu y + \delta z\|^2 = \lambda\|x\|^2 + \mu\|y\|^2 + \delta\|z\|^2 - \lambda\mu\|x - y\|^2 - \lambda\delta\|x - z\|^2 - \mu\delta\|y - z\|^2$
for all $x, y, z \in H$ and $\lambda, \mu, \delta \in [0, 1]$ with $\lambda + \mu + \delta = 1$.
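A quick numerical sanity check of the three-point identity stated above (it verifies the formula on random vectors; it is not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y, z = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)
lam, mu, delta = 0.5, 0.3, 0.2   # nonnegative weights with lam + mu + delta = 1

lhs = np.linalg.norm(lam * x + mu * y + delta * z) ** 2
rhs = (lam * np.linalg.norm(x) ** 2 + mu * np.linalg.norm(y) ** 2
       + delta * np.linalg.norm(z) ** 2
       - lam * mu * np.linalg.norm(x - y) ** 2
       - lam * delta * np.linalg.norm(x - z) ** 2
       - mu * delta * np.linalg.norm(y - z) ** 2)
assert np.isclose(lhs, rhs)      # the three-point identity holds
```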
In proving our main theorem, we need the following lemmas.
Lemma 2
([40]). Let $\{a_n\}$, $\{b_n\}$, and $\{\delta_n\}$ be sequences of nonnegative real numbers such that $a_{n+1} \le (1 + \delta_n)a_n + b_n$ for all $n \in \mathbb{N}$. If $\sum_{n=1}^{\infty}\delta_n < \infty$ and $\sum_{n=1}^{\infty} b_n < \infty$, then $\lim_{n\to\infty} a_n$ exists.
Lemma 3
([35]). Let $H$ be a Hilbert space, and let $\{x_n\}$ be a sequence in $H$ such that there exists a nonempty set $F \subset H$ satisfying: for every $u \in F$, $\lim_{n\to\infty}\|x_n - u\|$ exists, and any weak cluster point of $\{x_n\}$ is in $F$. Then, there exists $u^* \in F$ with $\{x_n\}$ converging weakly to $u^*$.
We end this section with the following lemmas, which will be used to prove our main results in the next section.
Lemma 4
([41]). Let $\{a_n\}$ and $\{\theta_n\}$ be sequences of nonnegative real numbers such that $a_{n+1} \le (1 + \theta_n)a_n + \theta_n a_{n-1}$ for all $n \in \mathbb{N}$. Then, the following holds:
$a_{n+1} \le K\prod_{j=1}^{n}(1 + 2\theta_j),$
where $K = \max\{a_1, a_2\}$. Moreover, if $\sum_{n=1}^{\infty}\theta_n < \infty$, then $\{a_n\}$ is bounded.
Recall the definition of the forward–backward operator of lower semicontinuous and convex functions as follows: a forward–backward operator $T$ is defined by $T := \operatorname{prox}_{c\psi}(I - c\nabla\phi)$ for $c > 0$, where $\nabla\phi$ is the gradient operator of the function $\phi$ and $\operatorname{prox}_{c\psi}(x) = \operatorname{argmin}_{y \in H}\big(\psi(y) + \frac{1}{2c}\|x - y\|^2\big)$ (see [7,11]). The operator prox was defined by Moreau in 1962 [42] and is called the proximity operator with respect to $\psi$ and $c$. We know that $T$ is a nonexpansive mapping whenever $c \in (0, 2/\ell)$.
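In code, the forward–backward operator is simply the composition of a gradient step and a proximal step. A sketch with hypothetical callables grad_phi and prox_psi(x, c) (the latter computing the prox of $c\psi$); a point satisfying forward_backward_operator(x, ...) == x is exactly a solution of (6):

```python
def forward_backward_operator(x, grad_phi, prox_psi, c):
    """Apply T = prox_{c*psi} o (I - c*grad_phi); nonexpansive for c in (0, 2/ell)."""
    return prox_psi(x - c * grad_phi(x), c)
```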
Lemma 5
([14]). Let $\psi$ be a proper, convex, lower semicontinuous function from a Hilbert space $H$ into $(-\infty, \infty]$, and let $\phi$ be a convex differentiable function from $H$ into $\mathbb{R}$ with $\nabla\phi$ being $\ell$-Lipschitz continuous for some $\ell > 0$. Let $T$ be the forward–backward operator of $\phi$ and $\psi$ with respect to $c \in (0, 2/\ell)$, and let $T_n$ be the forward–backward operator of $\phi$ and $\psi$ with respect to $c_n \in (0, 2/\ell)$ such that $c_n \to c$. Then, the sequence $\{T_n\}$ satisfies the NST condition (I) with $T$.
3. Main Results
In this section, we begin by formally introducing a new algorithm for finding a common fixed point of a countable family of nonexpansive mappings in a real Hilbert space $H$. Let $\{T_n\}$ be a family of nonexpansive mappings of $H$ into itself with $F := \bigcap_{n=1}^{\infty} F(T_n) \neq \emptyset$, and let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be sequences in $(0, 1)$.
Next, we prove a weak convergence theorem of Algorithm 1 for a family of nonexpansive mappings in a real Hilbert space.
| Algorithm 1: (MSA): A modified S-algorithm. |
| Initial. Take $x_0, x_1 \in H$ arbitrarily and set $n = 1$. Choose the inertial parameters $\{\theta_n\}$ and the sequences $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\} \subset (0, 1)$. Step 1. Compute $y_n$, $z_n$, and $x_{n+1}$ using the update rules of the scheme. Then, update $n := n + 1$, and go to Step 1. |
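To convey the structure, the following sketch implements a generic inertial S-type iteration for a countable family of nonexpansive mappings T(n, .); the update rules and the roles of the parameter sequences here are our own illustrative choices, not necessarily the exact update of Algorithm 1:

```python
import numpy as np

def inertial_s_iteration(T, x0, x1, theta, alpha, beta, n_iter=200):
    """A generic inertial S-type scheme for mappings T(n, .) (illustrative only)."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(n_iter):
        w = x + theta(n) * (x - x_prev)             # inertial step
        y = (1 - beta(n)) * w + beta(n) * T(n, w)   # first S-type convex combination
        x_prev, x = x, (1 - alpha(n)) * T(n, w) + alpha(n) * T(n, y)
    return x
```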
Theorem 1.
Let $H$ be a real Hilbert space, and let $\{T_n\}$ be a family of nonexpansive mappings of $H$ into itself such that $F := \bigcap_{n=1}^{\infty} F(T_n) \neq \emptyset$. Let $\{x_n\}$ be a sequence generated by Algorithm 1, and let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be sequences in $(0, 1)$ satisfying the following conditions:
- (i)
- ;
- (ii)
- ;
- (iii)
- .
If $\{T_n\}$ satisfies the NST$^{*}$ condition, then $\{x_n\}$ converges weakly to an element of $F$.
Proof.
Let $v \in F$. Then, by Algorithm 1 and the nonexpansiveness of each $T_n$, we have:
and:
It follows that:
The above inequality implies:
By Lemma 4, we obtain $\|x_{n+1} - v\| \le K\prod_{j=1}^{n}(1 + 2\theta_j)$, where $K = \max\{\|x_1 - v\|, \|x_2 - v\|\}$. Since $\sum_{n=1}^{\infty}\theta_n < \infty$, we obtain that $\{x_n\}$ is bounded. This, together with (8), gives the boundedness of $\{y_n\}$ and $\{z_n\}$. Using (9) and Lemma 2, we obtain that $\lim_{n\to\infty}\|x_n - v\|$ exists for all $v \in F$. Coming back to the definition of $y_n$ in (8), one has that:
By (8) and (10), together with Lemma 1 and the nonexpansiveness of $T_n$, we have:
Since $\lim_{n\to\infty}\|x_n - v\|$ exists for all $v \in F$, we have from the above inequality that:
By the conditions (i) and (iii), we conclude that:
This implies, by the nonexpansiveness of $T_n$, that:
Thus:
By the definition of $z_n$, we obtain:
From (12) and (13), we obtain:
By (11) and (14), we obtain:
From (13), we obtain:
Since:
by (11), (15), and (16), we obtain:
From (16) and (17), we obtain:
Since:
it follows by (15)–(18) that $\lim_{n\to\infty}\|x_n - T_n x_n\| = 0$. Since $\{T_n\}$ satisfies the NST$^{*}$ condition, we obtain that the set of all weak cluster points of the sequence $\{x_n\}$ is a subset of $F$. Applying Lemma 3, we obtain that there exists $u \in F$ such that $x_n \rightharpoonup u$. □
Now, we move on to the application of our introduced algorithm to solving the convex minimization problem (5) by setting $T_n = \operatorname{prox}_{c_n\psi}(I - c_n\nabla\phi)$ in Algorithm 1.
Next, we prove that a sequence generated by Algorithm 2 converges weakly to a solution of the convex minimization problem (5).
| Algorithm 2: (FBMSA): A forward-backward modified S-algorithm. |
| Initial. Take $x_0, x_1 \in H$ arbitrarily and set $n = 1$. Choose the inertial parameters $\{\theta_n\}$, the sequences $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\} \subset (0, 1)$, and the step sizes $\{c_n\} \subset (0, 2/\ell)$. Step 1. Compute $y_n$, $z_n$, and $x_{n+1}$ using the update rules of Algorithm 1 with $T_n = \operatorname{prox}_{c_n\psi}(I - c_n\nabla\phi)$. Then, update $n := n + 1$, and go to Step 1. |
Theorem 2.
Let $\psi$ be a proper, convex, lower semicontinuous function from a real Hilbert space $H$ into $(-\infty, \infty]$, and let $\phi$ be a convex differentiable function from $H$ into $\mathbb{R}$ with $\nabla\phi$ being $\ell$-Lipschitz continuous for some $\ell > 0$. Let $\{x_n\}$ be a sequence generated by Algorithm 2 with $c_n \in (0, 2/\ell)$ such that $c_n \to c \in (0, 2/\ell)$. Suppose $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are sequences in $(0, 1)$ satisfying the assumptions of Theorem 1. Then, $\{x_n\}$ converges weakly to an element of $\operatorname{argmin}(\phi + \psi)$.
Proof.
Let $T$ and $T_n$ be the forward–backward operators of $\phi$ and $\psi$ with respect to $c$ and $c_n$, respectively. Then, $T = \operatorname{prox}_{c\psi}(I - c\nabla\phi)$ and $T_n = \operatorname{prox}_{c_n\psi}(I - c_n\nabla\phi)$, and $T$ and $T_n$ are nonexpansive operators for all $n \in \mathbb{N}$. By Proposition 26.1 in [7], $F(T) = \operatorname{argmin}(\phi + \psi)$. It follows from Lemma 5 that $\{T_n\}$ satisfies the NST condition (I) with $T$ and hence the NST$^{*}$ condition. Using Theorem 1, we obtain the required result. □
4. Applications
In this part, the image restoration problem is solved using Algorithm 2. We also compare the deblurring efficiency of Algorithm 2 with NAGA [16], FISTA [6], IFBS [15], and FBS [12]. As mentioned in the literature, image restoration problems can be related to the LASSO problem
$\min_{x}\{\|Ax - b\|_2^2 + \lambda\|x\|_1\},$
where $A$ represents the blurring operator, $x$ is the original image, $b$ is the observed image, and $\lambda$ is a positive regularization parameter.
To solve the image restoration problem, especially for true RGB images, this model is highly costly to compute because of the multiplication of $A$ and $x$, owing to the size of the matrix $A$ and the vector $x$. In order to overcome this problem, most researchers in this area employ the 2D fast Fourier transform of the true RGB images, and the above model is slightly reformulated accordingly, where the blurring operator is built from the blurring matrix and the 2D fast Fourier transform, $b$ is the observed image, and $\lambda$ is a positive regularization parameter. Hence, the objective can be viewed as the sum of two convex functions. Therefore, Algorithm 2, FBS [12], IFBS [15], FISTA [6], and NAGA [16] can be applied to solve the image restoration problem by setting $\phi(x) = \|Ax - b\|_2^2$ and $\psi(x) = \lambda\|x\|_1$.
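Besides the prox of $\lambda\|\cdot\|_1$, the algorithms only need the gradient of the smooth part and its Lipschitz constant; a sketch under the assumption $\phi(x) = \frac{1}{2}\|Ax - b\|_2^2$ (the $\frac{1}{2}$ only rescales $\lambda$):

```python
import numpy as np

def grad_phi(A, b, x):
    """Gradient of phi(x) = 0.5 * ||A x - b||_2^2."""
    return A.T @ (A @ x - b)

def lipschitz_constant(A):
    """Lipschitz constant of grad_phi: the maximum eigenvalue of A^T A."""
    return np.linalg.norm(A, 2) ** 2
```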
In our experiment, we selected a positive regularization parameter and considered a fixed original image size in pixels. A Gaussian blur of a given size and standard deviation was used to generate the blurred and noisy image. Figure 1 shows the original and observed images.
Figure 1.
The Wat Phra Singh Woramahaviharn.
We used the peak signal-to-noise ratio (PSNR) as a measure of the performance of our algorithm, which is defined as follows:
$\mathrm{PSNR} = 10\log_{10}\Big(\frac{255^2}{\mathrm{MSE}}\Big),$
where $\mathrm{MSE} = \frac{1}{M}\|x - x_r\|_2^2$ is the mean-squared error between the original image $x$ and the restored image $x_r$, and $M$ is the number of pixels. The PSNR is discussed in the survey of image quality measures by Thung and Raveendran [43]. It is worth noting that a higher PSNR demonstrates a higher quality of the deblurred image. Then, we computed the Lipschitz constant $\ell$ by using the maximum eigenvalue of the matrix $A^{\top}A$.
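A minimal implementation of the PSNR matching the definition above, assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(x_true, x_restored, peak=255.0):
    """PSNR = 10 * log10(peak^2 / MSE); higher values indicate better restoration."""
    mse = np.mean((np.asarray(x_true, float) - np.asarray(x_restored, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```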
Table 1 shows the parameters for Algorithm 2, FISTA, NAGA, IFBS, and FBS.
Table 1.
Algorithms and their setting controls.
As seen in Table 1, all parameters were chosen to satisfy the conditions of the corresponding convergence theorem for each algorithm. By Theorem 2, the sequence generated by Algorithm 2 converges weakly to a solution of the restoration model, which approximates the original image.
For this experiment, our programs were run on an Intel(R) Core(TM) i7-9700 CPU with 32.00 GB RAM, Windows 10, in the MATLAB computing environment. With the controls set as above, we obtained the results of deblurring the image of Wat Phra Singh Woramahaviharn over 1000 iterations, as reported in Table 2.
Table 2.
The values of the PSNR at different numbers of iterations.
Table 2 shows the image recovery efficiency of the studied methods under different numbers of iterations. It is seen from Table 2 that Algorithm 2 has a higher PSNR than the other algorithms. Therefore, the convergence behavior of our algorithm is better than those of NAGA, FISTA, IFBS, and FBS.
Moreover, the PSNR values obtained by all the studied algorithms when deblurring the image of Wat Phra Singh Woramahaviharn up to the 1000th iteration are presented in Figure 2.
Figure 2.
The graph of the peak signal-to-noise ratio (PSNR) for Wat Phra Singh Woramahaviharn.
It can be seen from the graph of the PSNR in Figure 2 that Algorithm 2 gives a higher value of the PSNR than the other algorithms. This demonstrates that Algorithm 2's image restoration performance is better than those of NAGA, FISTA, IFBS, and FBS.
We observe from Figure 3 that Algorithm 2 gives a better deblurring result for Wat Phra Singh Woramahaviharn at every number of iterations considered.
Figure 3.
Results for Wat Phra Singh Woramahaviharn’s image deblurring.
5. Conclusions
This paper introduced a new accelerated algorithm for solving the common fixed-point problem for a family of nonexpansive operators. A weak convergence theorem for this method was proven under suitable conditions. Our main results can be applied to solve a minimization problem involving the sum of two proper, lower semicontinuous, convex functions. The proposed method was also used to solve image restoration problems. To compare the performance of the studied algorithms, we conducted numerical experiments and found that the PSNR of our proposed algorithm is higher than those of FBS [12], IFBS [15], FISTA [6], and NAGA [16].
Author Contributions
Conceptualization, R.W.; Formal analysis, P.T. and R.W.; Investigation, P.T.; Methodology, R.W.; Supervision, R.W.; Validation, R.W.; Writing—original draft, P.T.; Writing—review and editing, R.W. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The authors are very grateful to the anonymous referees for their helpful comments, which improved the presentation of this manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Vogel, C.R. Computational Methods for Inverse Problems; SIAM: Philadelphia, PA, USA, 2002. [Google Scholar]
- Eldén, L. Algorithms for the regularization of ill-conditioned least squares problems. BIT Numer. Math. 1977, 17, 134–145. [Google Scholar] [CrossRef]
- Hansen, P.C.; Nagy, J.G.; O’Leary, D.P. Deblurring Images: Matrices, Spectra, and Filtering; Fundamentals of Algorithms 3; SIAM: Philadelphia, PA, USA, 2006. [Google Scholar]
- Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; V.H. Winston & Sons: Washington, DC, USA; John Wiley & Sons: New York, NY, USA, 1977. [Google Scholar]
- Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288. [Google Scholar]
- Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: New York, NY, USA, 2017. [Google Scholar]
- Parikh, N.; Boyd, S. Proximal algorithms. Found. Trends Optim. 2014, 1, 127–239. [Google Scholar] [CrossRef]
- Combettes, P.L. Quasi-Fejérian analysis of some optimization algorithms. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Studies in Computational Mathematics; North-Holland: Amsterdam, The Netherlands, 2001; Volume 8, pp. 115–152. [Google Scholar]
- Combettes, P.L.; Pesquet, J.-C. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer Optimization and Its Applications; Springer: New York, NY, USA, 2011; Volume 49, pp. 185–212. [Google Scholar]
- Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef]
- Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
- Moreau, J.J. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. Fr. 1965, 93, 273–299. [Google Scholar] [CrossRef]
- Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward–backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 21–30. [Google Scholar] [CrossRef]
- Moudafi, A.; Oliny, M. Convergence of splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454. [Google Scholar] [CrossRef]
- Verma, M.; Shukla, K.K. A new accelerated proximal gradient technique for regularized multitask learning framework. Pattern Recogn. Lett. 2017, 95, 98–103. [Google Scholar] [CrossRef]
- Byrne, C. Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
- Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
- Cholamjiak, P.; Shehu, Y. Inertial forward–backward splitting method in Banach spaces with application to compressed sensing. Appl. Math. 2019, 64, 409–435. [Google Scholar] [CrossRef]
- Kunrada, K.; Pholasa, N.; Cholamjiak, P. On convergence and complexity of the modified forward–backward method involving new linesearches for convex minimization. Math. Meth. Appl. Sci. 2019, 42, 1352–1362. [Google Scholar]
- Suantai, S.; Eiamniran, N.; Pholasa, N.; Cholamjiak, P. Three-step projective methods for solving the split feasibility problems. Mathematics 2019, 7, 712. [Google Scholar] [CrossRef]
- Suantai, S.; Kesornprom, S.; Cholamjiak, P. Modified proximal algorithms for finding solutions of the split variational inclusions. Mathematics 2019, 7, 708. [Google Scholar] [CrossRef]
- Thong, D.V.; Cholamjiak, P. Strong convergence of a forward–backward splitting method with a new step size for solving monotone inclusions. Comput. Appl. Math. 2019, 38, 1–16. [Google Scholar] [CrossRef]
- Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
- Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150. [Google Scholar] [CrossRef]
- Phuengrattana, W.; Suantai, S. On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235, 3006–3014. [Google Scholar] [CrossRef]
- Hanjing, A.; Suantai, S. The split fixed-point problem for demicontractive mappings and applications. Fixed Point Theory 2020, 21, 507–524. [Google Scholar] [CrossRef]
- Wongyai, S.; Suantai, S. Convergence theorem and rate of convergence of a new iterative method for continuous functions on closed interval. In Proceedings of the AMM and APAM Conference, Bangkok, Thailand, 23–25 May 2016; pp. 111–118. [Google Scholar]
- De la Sen, M.; Agarwal, R.P. Common fixed points and best proximity points of two cyclic self-mappings. Fixed Point Theory Appl. 2012, 2012, 1–17. [Google Scholar] [CrossRef]
- Gdawiec, K.; Kotarski, W. Polynomiography for the polynomial infinity norm via Kalantari’s formula and nonstandard iterations. Appl. Math. Comput. 2017, 307, 17–30. [Google Scholar] [CrossRef]
- Shoaib, A. Common fixed point for generalized contraction in b-multiplicative metric spaces with applications. Bull. Math. Anal. Appl. 2020, 12, 46–59. [Google Scholar]
- Al-Mazrooei, A.E.; Lateef, D.; Ahmad, J. Common fixed point theorems for generalized contractions. J. Math. Anal. 2017, 8, 157–166. [Google Scholar]
- Kim, K.S. A Constructive scheme for a common coupled fixed-point problems in Hilbert space. Mathematics 2020, 8, 1717. [Google Scholar] [CrossRef]
- Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
- Moudafi, A.; Al-Shemas, E. Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 2013, 1, 1–11. [Google Scholar]
- Nakajo, K.; Shimoji, K.; Takahashi, W. Strong convergence to common fixed points of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 2007, 8, 11–34. [Google Scholar]
- Nakajo, K.; Shimoji, K.; Takahashi, W. On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal. Theor. Methods Appl. 2009, 71, 112–119. [Google Scholar] [CrossRef]
- Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009. [Google Scholar]
- Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000. [Google Scholar]
- Tan, K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308. [Google Scholar] [CrossRef]
- Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378. [Google Scholar] [CrossRef]
- Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math. 1962, 255, 2897–2899. [Google Scholar]
- Thung, K.; Raveendran, P. A survey of image quality measures. In Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December 2009; pp. 1–4. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).