An Accelerated Fixed-Point Algorithm with an Inertial Technique for a Countable Family of G -Nonexpansive Mappings Applied to Image Recovery

Abstract: Many authors have proposed fixed-point algorithms for obtaining a fixed point of G-nonexpansive mappings without using inertial techniques. To improve convergence behavior, some accelerated fixed-point methods have been introduced. The main aim of this paper is to use a coordinate affine structure to create an accelerated fixed-point algorithm with an inertial technique for a countable family of G-nonexpansive mappings in a Hilbert space with a symmetric directed graph G and to prove a weak convergence theorem for the proposed algorithm. As an application, we apply our proposed algorithm to solve image restoration and convex minimization problems. The numerical experiments show that our algorithm is more efficient than FBA, FISTA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration.


Introduction
Let H be a real Hilbert space with the norm ‖·‖ and let C be a nonempty closed convex subset of H. A mapping T : C → C is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C; see [1].
The set of all fixed points of T is denoted by F(T) := {x ∈ C : x = Tx}. Many mathematicians have studied iterative schemes for approximating fixed points of nonexpansive mappings over the years; see [2,3]. One of the best known is the Picard iteration process, defined by

x_{n+1} = Tx_n,

where n ≥ 1 and the initial point x_1 is arbitrarily chosen. Picard's iteration process has been developed extensively by many mathematicians, as follows. The Mann iteration process [4] is defined by

x_{n+1} = (1 − ρ_n)x_n + ρ_n Tx_n, (1)

where n ≥ 1, the initial point x_1 is arbitrarily chosen and {ρ_n} is a sequence in [0, 1].
The Ishikawa iteration process [5] is defined by

y_n = (1 − ρ_n)x_n + ρ_n Tx_n,
x_{n+1} = (1 − ζ_n)x_n + ζ_n Ty_n, (2)

where n ≥ 1, the initial point x_1 is arbitrarily chosen and {ζ_n}, {ρ_n} are sequences in [0, 1]. The S-iteration process [6] is defined by

y_n = (1 − ρ_n)x_n + ρ_n Tx_n,
x_{n+1} = (1 − ζ_n)Tx_n + ζ_n Ty_n, (3)

where n ≥ 1, the initial point x_1 is arbitrarily chosen and {ζ_n}, {ρ_n} are sequences in [0, 1]. It is known that the S-iteration process (3) is independent of the Mann and Ishikawa iterative schemes and converges more quickly than both; see [6].
The Noor iteration process [7] is defined by

z_n = (1 − η_n)x_n + η_n Tx_n,
y_n = (1 − ζ_n)x_n + ζ_n Tz_n,
x_{n+1} = (1 − ρ_n)x_n + ρ_n Ty_n, (4)

where n ≥ 1, the initial point x_1 is arbitrarily chosen and {η_n}, {ζ_n}, {ρ_n} are sequences in [0, 1]. We can see that the Mann and Ishikawa iterations are special cases of the Noor iteration. The SP-iteration process [8] is defined by

z_n = (1 − η_n)x_n + η_n Tx_n,
y_n = (1 − ζ_n)z_n + ζ_n Tz_n,
x_{n+1} = (1 − ρ_n)y_n + ρ_n Ty_n, (5)

where n ≥ 1, the initial point x_1 is arbitrarily chosen and {η_n}, {ζ_n}, {ρ_n} are sequences in [0, 1]. It is known that the Mann, Ishikawa, Noor and SP-iterations are equivalent and that the SP-iteration converges faster than the others; see [8].
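To make the differences among these schemes concrete, the following sketch runs the Mann and SP iterations on a simple nonexpansive mapping (the metric projection onto the closed unit ball in R²). The mapping, parameter values and step counts are illustrative choices, not taken from the paper.

```python
import numpy as np

# A simple nonexpansive mapping on R^2: the metric projection onto
# the closed unit ball. Its fixed-point set is the ball itself.
def T(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def mann(x, T, rho, steps):
    # Mann iteration: x_{n+1} = (1 - rho) x_n + rho T x_n
    for _ in range(steps):
        x = (1 - rho) * x + rho * T(x)
    return x

def sp(x, T, eta, zeta, rho, steps):
    # SP-iteration: three successive Mann-type steps, each applied to
    # the output of the previous stage.
    for _ in range(steps):
        z = (1 - eta) * x + eta * T(x)
        y = (1 - zeta) * z + zeta * T(z)
        x = (1 - rho) * y + rho * T(y)
    return x

x0 = np.array([3.0, 4.0])                      # starts outside the ball
x_mann = mann(x0.copy(), T, 0.5, 50)
x_sp = sp(x0.copy(), T, 0.5, 0.5, 0.5, 50)
# Both iterates approach the fixed-point set; after the same number of
# outer steps, the SP iterate is at least as close as the Mann iterate.
print(np.linalg.norm(x_mann), np.linalg.norm(x_sp))
```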
Fixed-point theory is a rapidly growing field of research because of its many applications. It is known that a self-map on a set admits a fixed point under specific conditions. One recent generalization is due to Jachymski.
Jachymski [9] proved some generalizations of the Banach contraction principle in a complete metric space endowed with a directed graph using a combination of fixed-point theory and graph theory. In Banach spaces with a graph, Aleomraninejad et al. [10] proposed an iterative scheme for G-contraction and G-nonexpansive mappings. G-monotone nonexpansive multivalued mappings on hyperbolic metric spaces endowed with graphs were defined by Alfuraidan and Khamsi [11]. On a Banach space with a directed graph, Alfuraidan [12] showed the existence of fixed points of monotone nonexpansive mappings. For G-nonexpansive mappings in Hilbert spaces with a graph, Tiammee et al. [13] demonstrated Browder's convergence theorem and a strong convergence theorem of the Halpern iterative scheme. The convergence theorem of the three-step iteration approach for solving general variational inequality problems was investigated by Noor [7]. According to [14–17], the three-step iterative method gives better numerical results than the one-step and two-step approximate iterative methods. For approximating common fixed points of a finite family of G-nonexpansive mappings, Suantai et al. [18] combined the shrinking projection with the parallel monotone hybrid method. Additionally, they used a graph to derive a strong convergence theorem in Hilbert spaces under certain conditions and applied it to signal recovery. There is also research related to the application of some fixed-point theorems on the directed graph representations of some chemical compounds; see [19,20].
This paper is divided into four sections. The first section is the introduction. In Section 2, we recall the basic mathematical concepts, definitions and lemmas that will be used to prove the main results. In Section 3, we prove a weak convergence theorem for an iterative scheme with an inertial step for finding a common fixed point of a countable family of G-nonexpansive mappings. Furthermore, in Section 4, we apply our proposed method to solve image restoration and convex minimization problems.

Preliminaries
This section collects the basic mathematical concepts, definitions and lemmas that are needed to prove our main results.
Let X be a real normed space and C a nonempty subset of X. Let Δ = {(u, u) : u ∈ C}, where Δ stands for the diagonal of the Cartesian product C × C. Consider a directed graph G in which the set V(G) of its vertices coincides with C and the set E(G) of its edges contains all loops, that is, E(G) ⊇ Δ. Assume that G has no parallel edges; then, G = (V(G), E(G)). The converse of a graph G is denoted by G⁻¹; thus, we have

E(G⁻¹) = {(u, v) ∈ C × C : (v, u) ∈ E(G)}.

Recall that a graph G is connected if there is a path between any two vertices of G. Readers may refer to [29] for additional basic graph concepts.
A mapping T : C → C is said to be a G-contraction [9] if T is edge preserving, i.e., (Tu, Tv) ∈ E(G) for all (u, v) ∈ E(G), and there exists ρ ∈ [0, 1) such that

‖Tu − Tv‖ ≤ ρ‖u − v‖ for all (u, v) ∈ E(G),

where ρ is called a contraction factor. If T is edge preserving and ‖Tu − Tv‖ ≤ ‖u − v‖ for all (u, v) ∈ E(G), then T is said to be G-nonexpansive; see [13].
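The role of the graph in these definitions can be illustrated on a finite vertex set: edge preservation and the nonexpansiveness inequality are only required along edges of G, not for arbitrary pairs. The vertices, edges and mapping below are invented for the example.

```python
# Toy check of edge preservation and G-nonexpansiveness on a finite
# vertex set. E contains all loops plus a few directed edges.
V = [0.0, 1.0, 2.0, 4.0]
E = {(u, u) for u in V} | {(0.0, 1.0), (1.0, 2.0), (2.0, 4.0)}

# T fixes 0 and pulls every other vertex one step toward it
# (an illustrative mapping, given as a lookup table).
T = {0.0: 0.0, 1.0: 0.0, 2.0: 1.0, 4.0: 2.0}

# Edge preserving: (Tu, Tv) must be an edge whenever (u, v) is.
edge_preserving = all((T[u], T[v]) in E for (u, v) in E)

# G-nonexpansive: additionally |Tu - Tv| <= |u - v| along every edge.
g_nonexpansive = edge_preserving and all(
    abs(T[u] - T[v]) <= abs(u - v) for (u, v) in E
)
print(edge_preserving, g_nonexpansive)
```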
A mapping T : C → C is said to be demiclosed at 0 if, whenever {u_n} is a sequence in C such that u_n ⇀ u and Tu_n → 0, we have Tu = 0. To prove our main result, we need the concept of coordinate affineness of a graph. The edge set E(G) is said to be right coordinate affine if

(x, ty + (1 − t)z) ∈ E(G) for all t ∈ [0, 1] and all (x, y), (x, z) ∈ E(G),

and left coordinate affine if (tx + (1 − t)y, z) ∈ E(G) for all t ∈ [0, 1] and all (x, z), (y, z) ∈ E(G).
If E(G) is both left and right coordinate affine, then E(G) is said to be coordinate affine.
The following lemmas are the fundamental results for proving our main theorem; see also [21,30,31].
Lemma 2 ([31]). For a real Hilbert space H, the following results hold for all u, v ∈ H and t ∈ [0, 1]:
(i) ‖u + v‖² ≤ ‖u‖² + 2⟨v, u + v⟩;
(ii) ‖tu + (1 − t)v‖² = t‖u‖² + (1 − t)‖v‖² − t(1 − t)‖u − v‖².
Let {u_n} be a sequence in H. We write u_n ⇀ u to indicate that {u_n} converges weakly to a point u ∈ H; similarly, u_n → u denotes strong convergence.
Let ω_w(u_n) denote the set of all weak cluster points of {u_n}.
The following lemma was proved by Moudafi and Al-Shemas; see [32].

Lemma 4 ([32]). Let {u_n} be a sequence in a real Hilbert space H such that there exists a nonempty set Λ ⊂ H satisfying:
(i) for every p ∈ Λ, lim_{n→∞} ‖u_n − p‖ exists;
(ii) every weak cluster point of {u_n} belongs to Λ.
Then, there exists x* ∈ Λ such that u_n ⇀ x*.
Let {T_n} and ψ be families of nonexpansive mappings of C into itself. The family {T_n} is said to satisfy the NST-condition (I) with ψ if, for every bounded sequence {z_n} in C, lim_{n→∞} ‖z_n − T_n z_n‖ = 0 implies lim_{n→∞} ‖z_n − Tz_n‖ = 0 for all T ∈ ψ; see [33]. If ψ = {T}, then {T_n} is said to satisfy the NST-condition (I) with T.
The forward-backward operator of proper lower semi-continuous convex functions f, g : R^n → (−∞, +∞] is defined as follows: T := prox_{λg}(I − λ∇f) for λ > 0, where ∇f is the gradient of f and

prox_{λg}(x) := argmin_{y ∈ H} { g(y) + (1/(2λ))‖y − x‖² };

see [34,35]. Moreau [36] introduced prox_{λg} as the proximity operator with respect to λ and g. Whenever λ ∈ (0, 2/L), where L is a Lipschitz constant of ∇f, T is a nonexpansive mapping. We have the following remark on evaluating the proximity operator; see [37].
Remark 1. Let g : R^n → R be given by g(x) = λ‖x‖₁. The proximity operator of g is evaluated by the componentwise formula

(prox_g(x))_i = sign(x_i) max{|x_i| − λ, 0}, i = 1, . . . , n.

The following lemma was proved by Bussaban et al.; see [22].
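The formula in Remark 1 is the well-known soft-thresholding operator, and it yields a concrete form of the forward-backward operator T = prox_{ag}(I − a∇f) when g = λ‖·‖₁. A minimal sketch; the function names and parameter values are illustrative:

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximity operator of g(x) = lam * ||x||_1 (Remark 1):
    # componentwise sign(x_i) * max(|x_i| - lam, 0).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def forward_backward_operator(x, grad_f, a, lam):
    # T = prox_{a g}(I - a grad f) with g = lam * ||.||_1; the prox of
    # a*lam*||.||_1 is soft thresholding at level a*lam.
    return soft_threshold(x - a * grad_f(x), a * lam)

x = np.array([3.0, -0.5, 0.2])
print(soft_threshold(x, 1.0))
# With f(v) = ||v||^2 / 2 (so grad f = identity), step a = 0.5, lam = 1:
print(forward_backward_operator(x, lambda v: v, 0.5, 1.0))
```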
Lemma 5. Let H be a real Hilbert space and T be the forward-backward operator of f and g, where g is a proper lower semi-continuous convex function from H into R ∪ {∞} and f is a convex differentiable function from H into R whose gradient ∇f is L-Lipschitz for some L > 0.
If {T n } is the forward-backward operator of f and g such that a n → a with a, a n ∈ (0, 2/L), then {T n } satisfies the NST-condition (I) with T.

Main Results
In this section, we obtain a useful proposition and a weak convergence theorem of our proposed algorithm by using the inertial technique.
Let C be a nonempty closed and convex subset of a real Hilbert space H. The following proposition is useful for our main theorem.
Algorithm 1 (MSPA): A modified SP-algorithm. Step 1. Compute y_n, z_n and x_{n+1}. Then, set n := n + 1 and go to Step 1.
In the following theorem, we prove weak convergence of Algorithm 1 for a countable family of G-nonexpansive mappings.
Theorem 1. Let C be a nonempty closed and convex subset of a real Hilbert space H endowed with a directed graph G = (V(G), E(G)), where V(G) = C and E(G) is symmetric, transitive and left coordinate affine. Let x_0, x_1 ∈ C and let {x_n} be the sequence in H defined by Algorithm 1. Suppose that {T_n} satisfies the NST-condition (I) with T and that F(T) ≠ ∅. Then, {x_n} converges weakly to a point in F(T).
Proof. By the definitions of y_n and z_n, we obtain (10) and (11). By the definition of x_{n+1} and (11), we obtain (12). From (10)–(12), we obtain (13). Note that the boundedness of {x_n} implies that {y_n} and {z_n} are also bounded. By Lemma 1 and (13), we find that lim_{n→∞} ‖x_n − x*‖ exists; let lim_{n→∞} ‖x_n − x*‖ = a. From the boundedness of {y_n} and (12), we obtain the lim inf estimate (14), and by (10) and (14), we obtain the lim sup estimate (15). From (15) and (16), it follows that (17) holds. Similarly, from (11), (12), (17) and the boundedness of {z_n}, we obtain the lim sup estimate (18). From (18), we obtain lim_{n→∞} ‖z_n − x*‖ = a; in particular, lim_{n→∞} ‖z_n − x*‖ exists. By the definition of x_{n+1} and Lemma 2 (i), we obtain (19), and from (14) and (19), we obtain (20). From (20), it follows that (21). Since {z_n} is bounded, (20) holds and {T_n} satisfies the NST-condition (I) with T, we obtain z_n − Tz_n → 0. Let ω_w(z_n) be the set of all weak cluster points of {z_n}. Then, ω_w(z_n) ⊆ F(T) by the demiclosedness of I − T at 0. By Lemma 3, we conclude that there exists x* ∈ F(T) such that z_n ⇀ x*, and it follows from (21) that x_n ⇀ x*. The proof is now complete.

Applications
In this section, we apply our proposed method to solve a convex minimization problem. Furthermore, we compare the convergence behavior of our proposed algorithm with that of the other algorithms and give an application to the image restoration problem.

Convex Minimization Problems
Our proposed method will be used to solve a convex minimization problem for the sum of two convex lower semicontinuous functions f, g : R^n → (−∞, +∞]. We consider the following convex minimization problem:

min_{x ∈ R^n} f(x) + g(x). (22)

It is well known that x* is a minimizer of (22) if and only if x* = Tx*, where T = prox_{ρg}(I − ρ∇f); see Proposition 3.1 (iii) of [35]. It is also known that T is nonexpansive if ρ ∈ (0, 2/L), where L is a Lipschitz constant of ∇f. Over the past two decades, several algorithms have been introduced for solving problem (22). A simple and classical one is the forward-backward algorithm (FBA), which was introduced by Lions and Mercier [23].
The forward-backward algorithm (FBA) is defined by

x_{n+1} = prox_{λ_n g}(x_n − λ_n ∇f(x_n)),

where n ≥ 1 and λ_n ∈ (0, 2/L). A technique for improving the speed and convergence behavior of such algorithms was first introduced by Polyak [38] by adding an inertial step. Since then, many authors have employed the inertial technique to accelerate their algorithms for various kinds of problems; see [21,22,24–28].
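A minimal FBA sketch on a small synthetic LASSO-type problem min ‖Bx − c‖₂² + λ‖x‖₁, with ∇f(x) = 2Bᵀ(Bx − c) and L = 2λ_max(BᵀB). The problem data, step size and iteration count are illustrative choices:

```python
import numpy as np

# Synthetic, well-posed LASSO-type instance (made up for illustration).
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]              # sparse "true" vector
c = B @ x_true
lam = 0.01                                  # regularization parameter
L = 2 * np.linalg.eigvalsh(B.T @ B).max()   # Lipschitz constant of grad f
a = 1.0 / L                                 # step size in (0, 2/L)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(10)
for n in range(2000):
    grad = 2 * B.T @ (B @ x - c)            # forward (gradient) step
    x = soft_threshold(x - a * grad, a * lam)   # backward (prox) step
print(np.linalg.norm(B @ x - c))            # small residual at the end
```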
The performance of FBA can be improved using an iterative method with the inertial steps described below.
A fast iterative shrinkage-thresholding algorithm (FISTA) [27] is defined by

y_n = x_n + θ_n(x_n − x_{n−1}),
x_{n+1} = prox_{(1/L)g}(y_n − (1/L)∇f(y_n)),

where n ≥ 1, t_1 = 1, t_{n+1} = (1 + √(1 + 4t_n²))/2, and θ_n = (t_n − 1)/t_{n+1} is the inertial step size. FISTA was suggested by Beck and Teboulle [27], who proved its convergence rate and applied it to the image restoration problem. The inertial step size θ_n of FISTA was first introduced by Nesterov [39].
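The FISTA recursion with Nesterov's inertial step size can be sketched on the same kind of synthetic LASSO-type problem; the data and iteration count are again illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((20, 10))
c = B @ rng.standard_normal(10)             # made-up observation
lam = 0.01
L = 2 * np.linalg.eigvalsh(B.T @ B).max()

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x_prev = x = np.zeros(10)
t = 1.0
for n in range(1000):
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2   # Nesterov sequence
    theta = (t - 1) / t_next                    # inertial step size
    y = x + theta * (x - x_prev)                # inertial extrapolation
    grad = 2 * B.T @ (B @ y - c)
    x_prev, x = x, soft_threshold(y - grad / L, lam / L)
    t = t_next
print(np.linalg.norm(B @ x - c))
```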
A new accelerated proximal gradient algorithm (nAGA) [28] is defined for n ≥ 1, where T_n is the forward-backward operator of f and g with a_n ∈ (0, 2/L), {µ_n}, {ρ_n} are sequences in (0, 1) and ‖x_n − x_{n−1}‖²/µ_n → 0. The nAGA and its convergence theorem were introduced by Verma and Shukla [28], who used this method, in the multitask learning framework, to solve nonsmooth convex minimization problems with sparsity-inducing regularizers.
The convergence of Algorithm 2 is obtained using the convergence result of Algorithm 1, as shown in the following theorem.
Algorithm 2 (FBMSPA): A forward-backward modified SP-algorithm. 1: Initial. Take arbitrary x_0, x_1 ∈ C and set n := 1. Step 1. Compute y_n, z_n and x_{n+1}. Then, set n := n + 1 and go to Step 1.
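The display for Algorithm 2 did not survive extraction, so the following is only a plausible sketch of an inertial SP-type forward-backward scheme consistent with the surrounding text (an inertial extrapolation step followed by three SP-type stages built from the forward-backward operator). It should not be read as the authors' exact Algorithm 2, and all parameter choices below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((20, 10))
c = B @ rng.standard_normal(10)
lam = 0.01
L = 2 * np.linalg.eigvalsh(B.T @ B).max()
a = 1.0 / L                                  # a_n in (0, 2/L)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def T(v):
    # forward-backward operator prox_{a g}(I - a grad f), g = lam||.||_1
    return soft_threshold(v - a * (2 * B.T @ (B @ v - c)), a * lam)

beta, alpha = 0.5, 0.9                       # illustrative parameters
x_prev = x = np.zeros(10)
for n in range(1, 501):
    theta = 1.0 / n**2                       # summable inertial step (assumed)
    w = x + theta * (x - x_prev)             # inertial extrapolation
    z = (1 - beta) * w + beta * T(w)         # three SP-type stages
    y = (1 - alpha) * z + alpha * T(z)
    x_prev, x = x, (1 - alpha) * y + alpha * T(y)
print(np.linalg.norm(B @ x - c))
```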
Theorem 2. Let f, g : R^n → (−∞, ∞], where g is a convex function and f is a smooth convex function whose gradient has a Lipschitz constant L. Let a_n ∈ (0, 2/L) be such that {a_n} converges to a, let T := prox_{ag}(I − a∇f) and T_n := prox_{a_n g}(I − a_n ∇f), and let {x_n} be a sequence generated by Algorithm 2, where β_n, α_n and θ_n are the same as in Algorithm 1. Then, the following hold: (i) (ii) {x_n} converges weakly to a point in Argmin(f + g).
Proof. We know that T and {T_n} are nonexpansive operators; see [34]. By Lemma 5, we find that {T_n} satisfies the NST-condition (I) with T. From Theorem 1, we obtain the required result directly by putting G = R^n × R^n, the complete graph on R^n.

The Image Restoration Problem
We can describe the image restoration problem by the simple linear model

c = Bx + u, (25)

where B ∈ R^{m×n} and c ∈ R^{m×1} are known, u is an additive noise vector and x ∈ R^{n×1} is the unknown true image. In image restoration problems, c represents the blurred image and the matrix B describes the blur operator. The problem of finding the original image x* ∈ R^{n×1} from the observed blurred and noisy image is called an image restoration problem. Several methods have been proposed for solving problem (25); see, for instance, [40–43].
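A tiny 1-D analogue of the degradation model c = Bx + u can be simulated as follows; the signal size, blur kernel and noise level are illustrative choices.

```python
import numpy as np

# 1-D analogue of c = Bx + u: B is a circulant 3-tap moving-average
# blur matrix and u is additive Gaussian noise.
n = 32
x = np.zeros(n)
x[8], x[20] = 1.0, -0.5                      # a sparse "true" signal
B = np.zeros((n, n))
for i in range(n):
    for k in (-1, 0, 1):
        B[i, (i + k) % n] = 1.0 / 3.0        # uniform 3-tap blur kernel
rng = np.random.default_rng(3)
u = 0.01 * rng.standard_normal(n)            # additive noise vector
c = B @ x + u                                # observed blurred, noisy data
print(c.shape)
```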
A method for estimating a solution of (25), called the least absolute shrinkage and selection operator (LASSO), was proposed by Tibshirani [44] as follows:

min_x { ‖Bx − c‖₂² + λ‖x‖₁ }, (26)

where λ > 0 is called a regularization parameter and ‖·‖₁ is the l₁-norm defined by ‖x‖₁ = Σ_i |x_i|. The LASSO can also be applied to solve image and regression problems [27,44]. Due to the size of the matrix B and the vector x, model (26) has a high computational cost for the multiplication Bx and the evaluation of ‖x‖₁ when solving the RGB image restoration problem. To address this issue, many mathematicians in this field have used the 2-D fast Fourier transform for true RGB image transformation. Therefore, model (26) was slightly modified using the 2-D fast Fourier transform as follows:

min_x { ‖Bx − C‖₂² + λ‖Wx‖₁ }, (27)

where λ is a positive regularization parameter, R is the blurring matrix, W is the 2-D fast Fourier transform, B = RW is the blurring operation and C ∈ R^{m×n} is the observed blurred and noisy image of size m × n.
We apply Algorithm 2 to solve the image restoration problem (27) by using Theorem 2 with f(x) = ‖Bx − C‖₂² and g(x) = λ‖Wx‖₁. We then compare the deblurring performance of Algorithm 2 with that of FISTA and FBA. In this experiment, we consider the true RGB images of Suan Dok temple and Aranyawiwek temple, each of size 500 × 500, as the original images. We blur the images with a Gaussian blur of size 9 × 9 and standard deviation σ = 4. To evaluate the performance of these methods, we use the peak signal-to-noise ratio (PSNR) [45], defined by

PSNR(x_n) = 10 log₁₀ (255²/MSE),

where a monochrome image with 8 bits/pixel has a maximum gray level of 255, MSE = (1/N) Σ_{i=1}^N (x_n(i) − x*(i))², x_n(i) and x*(i) are the i-th samples of the images x_n and x*, respectively, N is the number of image samples and x* is the original image. A higher PSNR indicates better deblurred image quality. For these experiments, we set λ = 5 × 10⁻⁵ and took the blurred image as the initial point. The Lipschitz constant L is calculated as the maximum eigenvalue of the matrix BᵀB.
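The PSNR measure used in these experiments can be computed as below; the 4 × 4 test images are illustrative.

```python
import numpy as np

def psnr(x, x_star, peak=255.0):
    # PSNR(x) = 10 log10(peak^2 / MSE), where MSE is the mean squared
    # error between the restored image x and the original x_star.
    mse = np.mean((x.astype(float) - x_star.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100.0)          # "original" image
b = a.copy()
b[0, 0] = 110.0                     # restored image, one pixel off by 10
print(psnr(b, a))                   # higher PSNR = closer to the original
```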
The parameters of Algorithm 2, FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration are set as in Table 1.
Table 1.Methods and their setting controls.

Methods Setting
Algorithm 2: α_n = 0.9,

Note that all of the parameters in Table 1 satisfy the convergence theorem for each method. The convergence of the sequence {x_n} generated by Algorithm 2 to the original image x* is guaranteed by Theorem 2; the PSNR value is used to measure the convergence behavior of this sequence, as PSNR is a suitable measurement for image restoration problems.
The following experiments show the efficacy of the deblurring results for the Suan Dok and Aranyawiwek temple images at the 500th iteration of Algorithm 2, FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration, using PSNR as our measurement, as shown in the tables and figures below.
It is observed from Figures 1 and 2 that the PSNR graph of Algorithm 2 is higher than those of FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration, which shows that Algorithm 2 performs better than the others.
The efficiency of each algorithm for image restoration is shown in Tables 2–5 for different numbers of iterations. The PSNR value of Algorithm 2 is higher than that of FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration. Thus, Algorithm 2 has better convergence behavior than the others.

Figure 1. The graphs of PSNR of each algorithm for Suan Dok temple.

Figure 2. The graphs of PSNR of each algorithm for Aranyawiwek temple.

Table 2. The values of PSNR for Algorithm 2, FISTA and FBA for the Suan Dok temple image.

Table 3. The values of PSNR for Ishikawa iteration, S-iteration, Noor iteration and SP-iteration for the Suan Dok temple image.

Table 4. The values of PSNR for Algorithm 2, FISTA and FBA for the Aranyawiwek temple image.

Table 5. The values of PSNR for Algorithm 2, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration for the Aranyawiwek temple image.