Abstract
Many authors have proposed fixed-point algorithms for finding a fixed point of G-nonexpansive mappings without using inertial techniques. To improve convergence behavior, several accelerated fixed-point methods have been introduced. The main aim of this paper is to use a coordinate affine structure to create an accelerated fixed-point algorithm with an inertial technique for a countable family of G-nonexpansive mappings in a Hilbert space with a symmetric directed graph G, and to prove a weak convergence theorem for the proposed algorithm. As an application, we apply the proposed algorithm to image restoration and convex minimization problems. The numerical experiments show that our algorithm is more efficient than FBA, FISTA, the Ishikawa iteration, S-iteration, Noor iteration and SP-iteration.
1. Introduction
Let $H$ be a real Hilbert space with norm $\|\cdot\|$ and $C$ be a nonempty closed convex subset of $H$. A mapping $T : C \to C$ is said to be nonexpansive if it satisfies the following symmetric contractive-type condition:
$$\|Tx - Ty\| \leq \|x - y\|$$
for all $x, y \in C$; see [1].
The set of all fixed points of $T$ is denoted by $F(T) = \{x \in C : Tx = x\}$.
Many mathematicians have studied iterative schemes for approximating fixed points of nonexpansive mappings over many years; see [2,3]. One of these is the Picard iteration process, which is well known and popular. Picard’s iteration process is defined by
$$x_{n+1} = Tx_n,$$
where $n \geq 1$ and an initial point $x_1 \in C$ is randomly selected.
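A minimal numerical sketch of the Picard process, with the illustrative contraction $T(x) = \cos x$ on the real line (the mapping, starting point and tolerance are chosen here only for demonstration, not taken from the paper):

```python
import math

def picard(T, x0, tol=1e-10, max_iter=10_000):
    """Iterate x_{n+1} = T(x_n) until successive iterates are close."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos has a unique real fixed point (the Dottie number, about 0.739085)
fixed_point = picard(math.cos, x0=1.0)
```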
The iterative process of Picard has been developed extensively by many mathematicians, as follows:
The Mann iteration process [4] is defined by
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n Tx_n,$$
where $n \geq 1$, an initial point $x_1 \in C$ is randomly selected and $\{\alpha_n\}$ is a sequence in $[0,1]$.
The Ishikawa iteration process [5] is defined by
$$y_n = (1 - \beta_n)x_n + \beta_n Tx_n, \qquad x_{n+1} = (1 - \alpha_n)x_n + \alpha_n Ty_n,$$
where $n \geq 1$, an initial point $x_1 \in C$ is randomly selected and $\{\alpha_n\}$, $\{\beta_n\}$ are sequences in $[0,1]$.
The S-iteration process [6] is defined by
$$y_n = (1 - \beta_n)x_n + \beta_n Tx_n, \qquad x_{n+1} = (1 - \alpha_n)Tx_n + \alpha_n Ty_n, \qquad (3)$$
where $n \geq 1$, an initial point $x_1 \in C$ is randomly selected and $\{\alpha_n\}$, $\{\beta_n\}$ are sequences in $[0,1]$. We know that the S-iteration process (3) is independent of the Mann and Ishikawa iterative schemes and converges quicker than both; see [6].
The Noor iteration process [7] is defined by
$$z_n = (1 - \gamma_n)x_n + \gamma_n Tx_n, \qquad y_n = (1 - \beta_n)x_n + \beta_n Tz_n, \qquad x_{n+1} = (1 - \alpha_n)x_n + \alpha_n Ty_n,$$
where $n \geq 1$, an initial point $x_1 \in C$ is randomly selected and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$ are sequences in $[0,1]$. We can see that the Mann and Ishikawa iterations are special cases of the Noor iteration.
The SP-iteration process [8] is defined by
$$z_n = (1 - \gamma_n)x_n + \gamma_n Tx_n, \qquad y_n = (1 - \beta_n)z_n + \beta_n Tz_n, \qquad x_{n+1} = (1 - \alpha_n)y_n + \alpha_n Ty_n,$$
where $n \geq 1$, an initial point $x_1 \in C$ is randomly selected and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$ are sequences in $[0,1]$. We know that the Mann, Ishikawa, Noor and SP-iterations are equivalent and the SP-iteration converges faster than the others; see [8].
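To make the comparison concrete, the following toy experiment contrasts the Mann and SP processes on the illustrative mapping $T(x) = \cos x$ with arbitrarily chosen step sizes $\alpha_n = \beta_n = \gamma_n = 0.5$; it is a demonstration only, not the paper's experiment.

```python
import math

def mann(T, x, n, a=0.5):
    # x_{k+1} = (1 - a) x_k + a T(x_k)
    for _ in range(n):
        x = (1 - a) * x + a * T(x)
    return x

def sp(T, x, n, a=0.5, b=0.5, g=0.5):
    # three nested averaged steps per iteration, as in the SP-iteration
    for _ in range(n):
        z = (1 - g) * x + g * T(x)
        y = (1 - b) * z + b * T(z)
        x = (1 - a) * y + a * T(y)
    return x

d = 0.7390851332151607  # fixed point of cos (the Dottie number)
err_mann = abs(mann(math.cos, 1.0, 5) - d)
err_sp = abs(sp(math.cos, 1.0, 5) - d)
# With the same number of outer iterations, SP lands much closer to d.
```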
The fixed-point theory is a rapidly growing field of research because of its many applications. Under specific conditions, a self-map on a set admits a fixed point. One of the recent generalizations is due to Jachymski.
Jachymski [9] proved some generalizations of the Banach contraction principle in a complete metric space endowed with a directed graph using a combination of fixed-point theory and graph theory. In Banach spaces with a graph, Aleomraninejad et al. [10] proposed an iterative scheme for G-contraction and G-nonexpansive mappings. G-monotone nonexpansive multivalued mappings on hyperbolic metric spaces endowed with graphs were defined by Alfuraidan and Khamsi [11]. On a Banach space with a directed graph, Alfuraidan [12] showed the existence of fixed points of monotone nonexpansive mappings. For G-nonexpansive mappings in Hilbert spaces with a graph, Tiammee et al. [13] demonstrated Browder’s convergence theorem and a strong convergence theorem of the Halpern iterative scheme. The convergence theorem of the three-step iteration approach for solving general variational inequality problems was investigated by Noor [7]. According to [14,15,16,17], the three-step iterative method gives better numerical results than the one-step and two-step approximate iterative methods. For approximating common fixed points of a finite family of G-nonexpansive mappings, Suantai et al. [18] combined the shrinking projection with the parallel monotone hybrid method. Additionally, they used a graph to derive a strong convergence theorem in Hilbert spaces under certain conditions and applied it to signal recovery. There is also research related to the application of some fixed-point theorem on the directed graph representations of some chemical compounds; see [19,20].
Several fixed-point algorithms have been introduced by many authors [7,9,10,11,12,13,14,15,16,17,18] for finding a fixed point of G-nonexpansive mappings without an inertial technique. Among these, algorithms that solve the problem efficiently are needed. Hence, some accelerated fixed-point algorithms have been introduced to improve convergence behavior; see [21,22,23,24,25,26,27,28]. Inspired by the works mentioned above, we employ a coordinate affine structure to define an accelerated fixed-point algorithm with an inertial technique for a countable family of G-nonexpansive mappings, applied to image restoration and convex minimization problems.
This paper is divided into four sections. The first section is the introduction. In Section 2, we recall the basic concepts of mathematics, definitions, and lemmas that will be used to prove the main results. In Section 3, we prove a weak convergence theorem of an iterative scheme with the inertial step for finding a common fixed point of a countable family of G-nonexpansive mappings. Furthermore, we apply our proposed method for solving image restoration and convex minimization problems; see Section 4.
2. Preliminaries
The basic concepts of mathematics, definitions, and lemmas discussed in this section are all important and useful in proving our main results.
Let $X$ be a real normed space and $C$ be a nonempty subset of $X$. Let $\Delta$ denote the diagonal of the Cartesian product $C \times C$. Consider a directed graph $G$ in which the set $V(G)$ of its vertices coincides with $C$, and the set $E(G)$ of its edges contains all loops, that is, $\Delta \subseteq E(G)$. Assume that $G$ does not have parallel edges. Then, $G$ can be identified with the pair $(V(G), E(G))$. The conversion of a graph $G$ is denoted by $G^{-1}$; it is the graph obtained from $G$ by reversing the direction of its edges. Thus, we have $E(G^{-1}) = \{(x, y) \in C \times C : (y, x) \in E(G)\}$.
A graph $G$ is said to be symmetric if, whenever $(x, y) \in E(G)$, we have $(y, x) \in E(G)$.
A graph $G$ is said to be transitive if, for any $x, y, z \in V(G)$ such that $(x, y), (y, z) \in E(G)$, we have $(x, z) \in E(G)$.
Recall that a graph $G$ is connected if there is a path between any two vertices of the graph $G$. Readers might refer to [29] for additional information on some basic graph concepts.
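For finite graphs, the symmetry and transitivity conditions above can be checked directly on the edge set; the following sketch uses a made-up edge set represented as ordered pairs.

```python
def is_symmetric(E):
    """Every edge (x, y) has its reverse (y, x) in E."""
    return all((y, x) in E for (x, y) in E)

def is_transitive(E):
    """Whenever (x, y) and (y, z) are edges, so is (x, z)."""
    return all((x, z) in E
               for (x, y) in E
               for (y2, z) in E if y == y2)

# a complete graph on {1, 2} with loops: symmetric and transitive
E = {(1, 1), (2, 2), (1, 2), (2, 1)}
```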
A mapping $T : C \to C$ is said to be a G-contraction [9] if $T$ is edge preserving, i.e., $(Tx, Ty) \in E(G)$ for all $(x, y) \in E(G)$, and there exists $k \in [0, 1)$ such that
$$\|Tx - Ty\| \leq k\|x - y\|$$
for all $(x, y) \in E(G)$, where $k$ is called a contraction factor. If $T$ is edge preserving, and
$$\|Tx - Ty\| \leq \|x - y\|$$
for all $(x, y) \in E(G)$, then $T$ is said to be G-nonexpansive; see [13].
A mapping $T : C \to C$ is called G-demiclosed at 0 if, for any sequence $\{x_n\}$ in $C$ with $x_n \rightharpoonup x$ and $\|x_n - Tx_n\| \to 0$, we have $Tx = x$.
To prove our main result, we need to introduce the concept of the coordinate affine of the graph . For any , with , is said to be left coordinate affine if
for all , Similar to this, is said to be right coordinate affine if
for all ,
If is both left and right coordinate affine, then is said to be coordinate affine.
The following lemmas are the fundamental results for proving our main theorem; see also [21,30,31].
Lemma 1
([30]). Let $\{a_n\}$, $\{b_n\}$ and $\{\delta_n\}$ be sequences of nonnegative real numbers such that
$$a_{n+1} \leq (1 + \delta_n)a_n + b_n,$$
where $n \geq 1$. If $\sum_{n=1}^{\infty} \delta_n < \infty$ and $\sum_{n=1}^{\infty} b_n < \infty$, then $\lim_{n \to \infty} a_n$ exists.
Lemma 2
([31]). For a real Hilbert space H, the following results hold:
(i) For any $x, y \in H$ and $\alpha \in [0, 1]$, $\|\alpha x + (1 - \alpha)y\|^2 = \alpha\|x\|^2 + (1 - \alpha)\|y\|^2 - \alpha(1 - \alpha)\|x - y\|^2$.
(ii) For any $x, y \in H$, $\|x \pm y\|^2 = \|x\|^2 \pm 2\langle x, y \rangle + \|y\|^2$.
Lemma 3
([21]). Let $\{a_n\}$ and $\{\theta_n\}$ be sequences of nonnegative real numbers such that
$$a_{n+1} \leq (1 + \theta_n)a_n + \theta_n a_{n-1},$$
where $n \geq 1$. Then,
$$a_{n+1} \leq K \prod_{j=1}^{n}(1 + 2\theta_j),$$
where $K = \max\{a_1, a_2\}$. Furthermore, if $\sum_{n=1}^{\infty} \theta_n < \infty$, then $\{a_n\}$ is bounded.
Let $\{x_n\}$ be a sequence in $H$. We write $x_n \rightharpoonup v$ to indicate that the sequence $\{x_n\}$ converges weakly to a point $v \in H$. Similarly, $x_n \to v$ will symbolize strong convergence. For $v \in H$, if there is a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup v$, then $v$ is called a weak cluster point of $\{x_n\}$. Let $\omega_w(x_n)$ be the set of all weak cluster points of $\{x_n\}$.
The following lemma was proved by Moudafi and Al-Shemas; see [32].
Lemma 4
([32]). Let $\{x_n\}$ be a sequence in a real Hilbert space H such that there exists a nonempty set $S \subseteq H$ satisfying:
(i) For any $z \in S$, $\lim_{n \to \infty}\|x_n - z\|$ exists.
(ii) Any weak cluster point of $\{x_n\}$ belongs to $S$.
Then, there exists $x^* \in S$ such that $x_n \rightharpoonup x^*$.
Let $\{T_n\}$ and $\tau$ be families of nonexpansive mappings of C into itself such that $\emptyset \neq F(\tau) \subseteq \bigcap_{n=1}^{\infty} F(T_n)$, where $F(\tau)$ is the set of all common fixed points of each $T \in \tau$. A sequence $\{T_n\}$ satisfies the NST-condition (I) with $\tau$ if, for any bounded sequence $\{x_n\}$ in C,
$$\lim_{n \to \infty}\|x_n - T_n x_n\| = 0 \implies \lim_{n \to \infty}\|x_n - T x_n\| = 0$$
for all $T \in \tau$; see [33]. If $\tau = \{T\}$, then $\{T_n\}$ satisfies the NST-condition (I) with $T$.
The forward–backward operator of lower semi-continuous and convex functions $f$ and $g$ of $H$ into $\mathbb{R} \cup \{+\infty\}$ has the following definition:
A forward–backward operator T is defined by $Tx = \mathrm{prox}_{cg}(x - c\nabla f(x))$ for $c > 0$, where $\nabla f$ is the gradient operator of the function f and $\mathrm{prox}_{cg}(x) = \operatorname{argmin}_{y \in H}\{g(y) + \frac{1}{2c}\|y - x\|^2\}$ (see [34,35]). Moreau [36] defined the operator $\mathrm{prox}_{cg}$ as the proximity operator with respect to $c$ and the function g. Whenever $c \in (0, 2/L)$, we know that T is a nonexpansive mapping, where L is a Lipschitz constant of $\nabla f$. We have the following remark for the definition of the proximity operator; see [37].
Remark 1.
Let $g : \mathbb{R}^n \to \mathbb{R}$ be given by $g(x) = \lambda\|x\|_1$. The proximity operator of g is evaluated by the following formula
$$\big(\mathrm{prox}_{\lambda\|\cdot\|_1}(x)\big)_i = \operatorname{sign}(x_i)\max\{|x_i| - \lambda, 0\},$$
where $x = (x_1, \ldots, x_n)$ and $\lambda > 0$.
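A minimal sketch of the soft-thresholding formula in Remark 1, assuming vectors are represented as plain Python lists; the scale parameter c is included so the same routine covers the scaled operator $\mathrm{prox}_{c\lambda\|\cdot\|_1}$.

```python
def prox_l1(x, lam, c=1.0):
    """Componentwise soft thresholding: sign(x_i) * max(|x_i| - lam*c, 0)."""
    t = lam * c
    return [max(abs(xi) - t, 0.0) * (1 if xi >= 0 else -1) for xi in x]
```

Components whose magnitude falls below the threshold lam*c are shrunk exactly to zero, which is what produces sparse solutions.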
The following lemma was proved by Bussaban et al.; see [22].
Lemma 5.
Let H be a real Hilbert space and T be the forward–backward operator of f and g, where g is a proper lower semi-continuous convex function from H into $\mathbb{R} \cup \{+\infty\}$, and f is a convex differentiable function from H into $\mathbb{R}$ with an L-Lipschitz gradient for some $L > 0$. If $T_n$ is the forward–backward operator of f and g with respect to $c_n \in [a, b] \subset (0, 2/L)$ with $c_n \to c$, then $\{T_n\}$ satisfies the NST-condition (I) with T.
3. Main Results
In this section, we obtain a useful proposition and a weak convergence theorem of our proposed algorithm by using the inertial technique.
Let C be a nonempty closed and convex subset of a real Hilbert space H with a directed graph $G = (V(G), E(G))$ such that $V(G) = C$. Let $\{T_n\}$ be a family of G-nonexpansive mappings of C into itself such that $F := \bigcap_{n=1}^{\infty} F(T_n) \neq \emptyset$.
The following proposition is useful for our main theorem.
Proposition 1.
Let and be such that , . Let be a sequence generated by Algorithm 1. Suppose is symmetric, transitive and left coordinate affine. Then, for all
| Algorithm 1 (MSPA) A modified SP-algorithm |
|
Proof.
We shall prove the results by using mathematical induction. From Algorithm 1, we obtain
Since , and is left coordinate affine, we obtain and
Since and is edge preserving, we obtain . Next, suppose that
for We shall show that and By Algorithm 1, we obtain
and
In the following theorem, we prove the weak convergence of Algorithm 1 for a countable family of G-nonexpansive mappings.
Theorem 1.
Let C be a nonempty closed and convex subset of a real Hilbert space H with a directed graph with and is symmetric, transitive and left coordinate affine. Let and be a sequence in H defined by Algorithm 1. Suppose that satisfies the NST-condition (I) with T such that and for all Then, converges weakly to a point in
Proof.
Let . By the definitions of and , we obtain
and
By the definition of and (11), we obtain
So, we obtain , where from Lemma 3. Thus, is bounded because . Then,
4. Applications
In this section, we are interested in applying our proposed method to solving a convex minimization problem. Furthermore, we also compare the convergence behavior of our proposed algorithm with that of the others and give some applications to the image restoration problem.
4.1. Convex Minimization Problems
Our proposed method will be used to solve a convex minimization problem of the sum of two convex and lower semicontinuous functions $f$ and $g$. So, we consider the following convex minimization problem: $\min_{x \in H} f(x) + g(x)$ (22). It is well known that $x^*$ is a minimizer of (22) if and only if $x^* = Tx^*$, where $T = \mathrm{prox}_{cg}(I - c\nabla f)$; see Proposition 3.1 (iii) [35]. It is also known that T is nonexpansive if $c \in (0, 2/L)$, where L is a Lipschitz constant of $\nabla f$. Over the past two decades, several algorithms have been introduced for solving the problem (22). A simple and classical algorithm is the forward–backward algorithm (FBA), which was introduced by Lions and Mercier [23].
The forward–backward algorithm (FBA) is defined by
$$x_{n+1} = \mathrm{prox}_{c_n g}\big(x_n - c_n \nabla f(x_n)\big),$$
where $n \geq 1$, $x_1 \in H$, L is a Lipschitz constant of $\nabla f$, and $\{c_n\}$ is a sequence in $(0, 2/L)$. A technique for improving speed and giving a better convergence behavior of such algorithms was first introduced by Polyak [38] by adding an inertial step. Since then, many authors have employed the inertial technique to accelerate their algorithms for various kinds of problems; see [21,22,24,25,26,27,28]. The performance of FBA can be improved using an iterative method with the inertial steps described below.
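As a sanity-check sketch, the FBA iteration can be run on a tiny hand-made problem of the form $\min_x \frac{1}{2}\|Bx - c\|^2 + \lambda\|x\|_1$; the data, step size and iteration count below are illustrative only, and the proximity step is the soft thresholding of Remark 1.

```python
def matvec(B, x):
    return [sum(bij * xj for bij, xj in zip(row, x)) for row in B]

def grad_f(B, c, x):
    # gradient of 0.5*||Bx - c||^2 is B^T (Bx - c)
    r = [bi - ci for bi, ci in zip(matvec(B, x), c)]
    return [sum(B[i][j] * r[i] for i in range(len(B))) for j in range(len(x))]

def soft(x, t):
    return [max(abs(xi) - t, 0.0) * (1 if xi >= 0 else -1) for xi in x]

def fba(B, c, lam, step, n_iter=500):
    """Forward step (gradient descent on f) then backward step (prox of g)."""
    x = [0.0] * len(B[0])
    for _ in range(n_iter):
        g = grad_f(B, c, x)
        x = soft([xi - step * gi for xi, gi in zip(x, g)], lam * step)
    return x

B = [[1.0, 0.0], [0.0, 2.0]]
c = [1.0, 2.0]
# L = lambda_max(B^T B) = 4 here, so step = 0.2 < 2/L = 0.5
x = fba(B, c, lam=0.1, step=0.2)
```

For this separable problem the limit can be computed by hand: the first coordinate settles at 0.9 and the second at 0.975.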
The fast iterative shrinkage–thresholding algorithm (FISTA) [27] is defined by
$$x_n = \mathrm{prox}_{\frac{1}{L}g}\Big(y_n - \frac{1}{L}\nabla f(y_n)\Big), \qquad t_{n+1} = \frac{1 + \sqrt{1 + 4t_n^2}}{2}, \qquad y_{n+1} = x_n + \theta_n(x_n - x_{n-1}),$$
where $t_1 = 1$, $y_1 = x_0 \in H$, and $\theta_n = \frac{t_n - 1}{t_{n+1}}$ is the inertial step size. The FISTA was suggested by Beck and Teboulle [27]. They proved the convergence rate of the FISTA and applied it to the image restoration problem [27]. The inertial step size of the FISTA was first introduced by Nesterov [39].
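The behavior of the Nesterov sequence driving FISTA's inertia can be illustrated in isolation; the sketch below computes only the momentum coefficients $\theta_n = (t_n - 1)/t_{n+1}$, for a demonstrative number of steps.

```python
import math

def fista_momentum(n):
    """Yield the first n FISTA momentum coefficients (t_k - 1) / t_{k+1}."""
    t = 1.0
    for _ in range(n):
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        yield (t - 1.0) / t_next
        t = t_next

coeffs = list(fista_momentum(5))
# The coefficients start at 0 and increase monotonically toward 1,
# so the inertial push grows as the iteration proceeds.
```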
A new accelerated proximal gradient algorithm (nAGA) [28] is defined by
where $x_1 \in H$, $T_n$ is the forward–backward operator of f and g with respect to $c_n$, and $\{\alpha_n\}$, $\{\beta_n\}$ are sequences in $(0, 1)$. The nAGA was introduced, together with its convergence theorem, by Verma and Shukla [28]. The nonsmooth convex minimization problem with sparsity-inducing regularizers was solved using this method for the multitask learning framework.
The convergence of Algorithm 2 is obtained using the convergence result of Algorithm 1, as shown in the following theorem.
| Algorithm 2 (FBMSPA) A forward–backward modified SP-algorithm |
|
Theorem 2.
Let g be a convex function and f a smooth convex function whose gradient has Lipschitz constant L. Let $\{c_n\}$ be such that it converges to c, and let $\{x_n\}$ be a sequence generated by Algorithm 2, where the control sequences are the same as in Algorithm 1. Then, the following holds:
(i) , where and
(ii) $\{x_n\}$ converges weakly to a point in $\mathrm{Argmin}(f + g)$.
Proof.
We know that T and are nonexpansive operators, and for all n; see Proposition 26.1 in [34]. By Lemma 5, we find that satisfies the NST-condition (I) with T. From Theorem 1, we obtain the required result directly by putting , the complete graph, on . □
4.2. The Image Restoration Problem
We can describe the image restoration problem as a simple linear model
$$c = Bx + u,$$
where the matrix B and the vector c are known, u is an additive noise vector, and x is the “true” image. In image restoration problems, the observed blurred image is represented by c, x is the unknown true image, and the blur operator is described by the matrix B. The problem of finding the original image x from the observed blurred and noisy image c is called an image restoration problem. Several methods have been proposed for finding the solution of problem (25); see, for instance, [40,41,42,43].
A new method for estimating a solution of (25), called the least absolute shrinkage and selection operator (LASSO), was proposed by Tibshirani [44] as follows:
$$\min_{x}\Big\{\frac{1}{2}\|c - Bx\|_2^2 + \lambda\|x\|_1\Big\},$$
where $\lambda > 0$ is called a regularization parameter and $\|\cdot\|_1$ is the $\ell_1$-norm defined by $\|x\|_1 = \sum_i |x_i|$. The LASSO can also be applied to solve image and regression problems [27,44], etc.
Due to the size of the matrix B and the vector x, the model (26) incurs a high computational cost from the matrix–vector multiplications when solving the RGB image restoration problem. In order to address this issue, many mathematicians in this field have used the 2-D fast Fourier transform for true RGB image transformation. Therefore, the model (26) was slightly modified using the 2-D fast Fourier transform as follows:
where is a positive regularization parameter, R is the blurring matrix, W is the 2-D fast Fourier transform, is the blurring operation with and is the observed blurred and noisy image of size .
We apply Algorithm 2 to solve the image restoration problem (27) by using Theorem 2 when and . Then, we compare Algorithm 2’s deblurring to that of FISTA and FBA. In this experiment, we consider the true RGB images, Suan Dok temple and Aranyawiwek temple of size , as the original images. We blur the images with a Gaussian blur of size and , where is the standard deviation. To evaluate the performance of these methods, we utilize the peak signal-to-noise ratio (PSNR) [45], defined by
$$\mathrm{PSNR}(x_n) = 10\log_{10}\Big(\frac{255^2}{\mathrm{MSE}}\Big), \qquad \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\big(x_{n,i} - x^*_i\big)^2,$$
where a monochrome image with 8 bits/pixel has a maximum gray level of 255, $x_{n,i}$ and $x^*_i$ are the i-th samples in the images $x_n$ and $x^*$, respectively, N is the number of image samples and $x^*$ is the original image. We can see that a higher PSNR indicates a better deblurred image quality. For these experiments, we set the blurred image as the initial image. The Lipschitz constant L is calculated as the maximum eigenvalue of the matrix $B^{\mathsf{T}}B$.
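A minimal sketch of the PSNR measure, assuming 8-bit images flattened to lists of gray levels; here x_true stands for the original image and x_rec for the restored one (names chosen for illustration).

```python
import math

def psnr(x_true, x_rec, max_level=255.0):
    """Peak signal-to-noise ratio: 10*log10(max_level^2 / MSE)."""
    n = len(x_true)
    mse = sum((a - b) ** 2 for a, b in zip(x_true, x_rec)) / n
    if mse == 0:
        return float("inf")  # a perfect reconstruction
    return 10.0 * math.log10(max_level ** 2 / mse)
```

Larger restoration errors increase the MSE and therefore lower the PSNR, which is why a higher PSNR indicates a better deblurred image.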
The parameters of Algorithm 2, FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration are the same as in Table 1.
Table 1.
Methods and their setting controls.
Note that all of the parameters in Table 1 satisfy the convergence theorems for each method. The convergence of the sequence generated by Algorithm 2 to the original image is guaranteed by Theorem 2. However, the PSNR value is used to measure the convergence behavior of this sequence. It is known that PSNR is a suitable measurement for image restoration problems.
The following experiments show the efficacy of the deblurring results for Suan Dok and Aranyawiwek temples at the 500th iteration of Algorithm 2, FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration, using PSNR as our measurement, as shown in the tables and figures below.
It is observed from Figure 1 and Figure 2 that the PSNR curve of Algorithm 2 is higher than those of FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration, which shows that Algorithm 2 gives a better performance than the others.
Figure 1.
The graphs of PSNR of each algorithm for Suan Dok temple.
Figure 2.
The graphs of PSNR of each algorithm for Aranyawiwek temple.
The efficiency of each algorithm for image restoration is shown in Table 2, Table 3, Table 4 and Table 5 for different numbers of iterations. The PSNR values of Algorithm 2 are shown to be higher than those of FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration. Thus, Algorithm 2 has a better convergence behavior than the others.
Table 2.
The values of PSNR for Algorithm 2, FISTA, FBA of Suan Dok temple.
Table 3.
The values of PSNR for Ishikawa iteration, S-iteration, Noor iteration and SP-iteration of Suan Dok temple.
Table 4.
The values of PSNR for Algorithm 2, FISTA and FBA of Aranyawiwek temple.
Table 5.
The values of PSNR for Algorithm 2, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration of Aranyawiwek temple.
We show the original images, blurred images, and deblurred images by Algorithm 2, FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration for Suan Dok (Figure 3) and Aranyawiwek temples (Figure 4).
Figure 3.
Results for Suan Dok temple’s deblurred image.
Figure 4.
Results for Aranyawiwek temple’s deblurred image.
5. Conclusions
In this study, we used a coordinate affine structure to propose an accelerated fixed-point algorithm with an inertial technique for a countable family of G-nonexpansive mappings in a Hilbert space with a symmetric directed graph G. Moreover, we proved the weak convergence theorem of the proposed algorithm under some suitable conditions. Then, we compared the convergence behavior of our proposed algorithm with FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration. We also applied our results to image restoration and convex minimization problems. We found that Algorithm 2 gave the best results out of all of them.
Author Contributions
Conceptualization, R.W.; Formal analysis, K.J. and R.W.; Investigation, K.J.; Methodology, R.W.; Supervision, R.W.; Validation, R.W.; Writing—original draft, K.J.; Writing—review and editing, R.W. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by Fundamental Fund 2022, Chiang Mai University and Ubon Ratchathani University.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The first author was supported by Fundamental Fund 2022, Chiang Mai University, Thailand. The second author would like to thank Ubon Ratchathani University.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Berinde, V. A Modified Krasnosel’skiǐ–Mann Iterative Algorithm for Approximating Fixed Points of Enriched Nonexpansive Mappings. Symmetry 2022, 14, 123.
- Bin Dehaish, B.A.; Khamsi, M.A. Mann iteration process for monotone nonexpansive mappings. Fixed Point Theory Appl. 2015, 2015, 177.
- Dong, Y. New inertial factors of the Krasnosel’skii–Mann iteration. Set-Valued Var. Anal. 2021, 29, 145–161.
- Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
- Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150.
- Agarwal, R.P.; O’Regan, D.; Sahu, D.R. Iterative construction of fixed point of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 2007, 8, 61–79.
- Noor, M.A. New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000, 251, 217–229.
- Phuengrattana, W.; Suantai, S. On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235, 3006–3014.
- Jachymski, J. The contraction principle for mappings on a metric space with a graph. Proc. Am. Math. Soc. 2008, 136, 1359–1373.
- Aleomraninejad, S.M.A.; Rezapour, S.; Shahzad, N. Some fixed point results on a metric space with a graph. Topol. Appl. 2012, 159, 659–663.
- Alfuraidan, M.R.; Khamsi, M.A. Fixed points of monotone nonexpansive mappings on a hyperbolic metric space with a graph. Fixed Point Theory Appl. 2015, 2015, 44.
- Alfuraidan, M.R. Fixed points of monotone nonexpansive mappings with a graph. Fixed Point Theory Appl. 2015, 2015, 49.
- Tiammee, J.; Kaewkhao, A.; Suantai, S. On Browder’s convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs. Fixed Point Theory Appl. 2015, 2015, 187.
- Tripak, O. Common fixed points of G-nonexpansive mappings on Banach spaces with a graph. Fixed Point Theory Appl. 2016, 2016, 87.
- Sridarat, P.; Suparaturatorn, R.; Suantai, S.; Cho, Y.J. Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs. Bull. Malays. Math. Sci. Soc. 2019, 42, 2361–2380.
- Glowinski, R.; Tallec, P.L. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics; SIAM: Philadelphia, PA, USA, 1989.
- Haubruge, S.; Nguyen, V.H.; Strodiot, J.J. Convergence analysis and applications of the Glowinski–Le Tallec splitting method for finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1998, 97, 645–673.
- Suantai, S.; Kankam, K.; Cholamjiak, P.; Cholamjiak, W. A parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a graph applicable in signal recovery. Comp. Appl. Math. 2021, 40, 145.
- Baleanu, D.; Etemad, S.; Mohammadi, H.; Rezapour, S. A novel modeling of boundary value problems on the glucose graph. Commun. Nonlinear Sci. Numer. Simulat. 2021, 100, 105844.
- Etemad, S.; Rezapour, S. On the existence of solutions for fractional boundary value problems on the ethane graph. Adv. Differ. Equ. 2020, 2020, 276.
- Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378.
- Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 21–30.
- Lions, P.L.; Mercier, B. Splitting Algorithms for the Sum of Two Nonlinear Operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
- Janngam, K.; Suantai, S. An accelerated forward-backward algorithm with applications to image restoration problems. Thai J. Math. 2021, 19, 325–339.
- Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Two modifications of the inertial Tseng extragradient method with self-adaptive step size for solving monotone variational inequality problems. Demonstr. Math. 2020, 53, 208–224.
- Gebrie, A.G.; Wangkeeree, R. Strong convergence of an inertial extrapolation method for a split system of minimization problems. Demonstr. Math. 2020, 53, 332–351.
- Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
- Verma, M.; Shukla, K. A new accelerated proximal gradient technique for regularized multitask learning framework. Pattern Recogn. Lett. 2018, 95, 98–103.
- Johnsonbaugh, R. Discrete Mathematics; Pearson: Hoboken, NJ, USA, 1997.
- Tan, K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308.
- Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
- Moudafi, A.; Al-Shemas, E. Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 2013, 1, 1–11.
- Nakajo, K.; Shimoji, K.; Takahashi, W. Strong convergence to a common fixed point of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 2007, 8, 11–34.
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: Berlin, Germany, 2017.
- Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
- Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. Comptes Rendus Hebd. Des Séances L’académie Des Sci. 1962, 255, 2897–2899.
- Beck, A. First-Order Methods in Optimization; Tel-Aviv University: Tel Aviv-Yafo, Israel, 2017; pp. 129–177. ISBN 978-1-61197-498-0.
- Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
- Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k2). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
- Vogel, C.R. Computational Methods for Inverse Problems; SIAM: Philadelphia, PA, USA, 2002.
- Eldén, L. Algorithms for the regularization of ill-conditioned least squares problems. BIT Numer. Math. 1977, 17, 134–145.
- Hansen, P.C.; Nagy, J.G.; O’Leary, D.P. Deblurring Images: Matrices, Spectra, and Filtering (Fundamentals of Algorithms 3); SIAM: Philadelphia, PA, USA, 2006.
- Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; V.H. Winston: Washington, DC, USA, 1977.
- Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B Methodol. 1996, 58, 267–288.
- Thung, K.; Raveendran, P. A survey of image quality measures. In Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December 2009; pp. 1–4.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).