Iterative Methods for Computing the Resolvent of Composed Operators in Hilbert Spaces

The resolvent is a fundamental concept in the study of various operator splitting algorithms. In this paper, we investigate the problem of computing the resolvent of compositions of maximally monotone operators with bounded linear operators. First, we discuss several explicit solutions of this resolvent operator obtained by imposing additional constraints on the linear operator. Second, we propose a fixed point approach for computing this resolvent operator in the general case. Based on the Krasnoselskii-Mann algorithm for finding fixed points of non-expansive operators, we prove the strong convergence of the sequence generated by the proposed algorithm. As a consequence, we obtain an effective iterative algorithm for evaluating the scaled proximity operator of a convex function composed with a linear operator, which has wide applications in image restoration and image reconstruction problems. Furthermore, we propose and study iterative algorithms for computing the resolvent operator of a finite sum of maximally monotone operators as well as the proximal operator of a finite sum of proper, lower semi-continuous convex functions.


Introduction
Let H be a real Hilbert space whose inner product is denoted by ⟨·, ·⟩ and whose norm is denoted by ‖·‖. Let T : H → 2^H be a maximally monotone operator with domain Dom(T) and range R(T), and let I be the identity operator. We consider the simplest monotone inclusion problem: find x ∈ H such that 0 ∈ Tx. (1)
Many problems in variational inequalities, partial differential equations, and signal and image processing can be solved via the monotone inclusion problem (1); see, for example, [1-4]. It is well known that x is a solution of (1) if and only if x = J_{λT}x, for any λ > 0. Here and in what follows, the resolvent of T with parameter λ > 0 is defined by J_{λT} = (I + λT)^{−1}, and the Yosida approximation of T with index λ is defined by ^{λ}T = (1/λ)(I − J_{λT}). The proximal point algorithm (PPA) is the most popular method for solving (1).
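For intuition, the resolvent and Yosida approximation admit closed forms for simple operators. The following sketch is our own illustration, not from the paper: it takes T = ∂|·| on H = R, whose resolvent is the soft-thresholding map, checks the identity ^{λ}T = (1/λ)(I − J_{λT}) numerically, and runs a few PPA steps on min |x|.

```python
import numpy as np

def resolvent_abs(x, lam):
    # J_{lam*T} for T = subdifferential of |.|: soft-thresholding with threshold lam
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def yosida_abs(x, lam):
    # Yosida approximation of T with index lam: (1/lam) * (I - J_{lam*T})
    return (x - resolvent_abs(x, lam)) / lam

lam = 0.5
print(resolvent_abs(3.0, lam))  # 2.5
print(yosida_abs(3.0, lam))     # 1.0  (= sign(3), since |x| > lam)

# PPA for min |x|: x_{k+1} = J_{lam*T}(x_k) converges to the minimizer 0
xk = 3.0
for _ in range(10):
    xk = resolvent_abs(xk, lam)
print(xk)  # 0.0
```

Here the PPA iterates decrease by λ per step until they reach the minimizer and then stay there, which matches the fixed point characterization x = J_{λT}x.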
Let ϕ : H → (−∞, +∞] be a proper, lower semi-continuous convex function. By Fermat's rule, the convex minimization problem min_{x∈H} ϕ(x) (2) is equivalent to the monotone inclusion problem (1) with T = ∂ϕ, where ∂ϕ is the classical subdifferential of ϕ. For any λ > 0 and x_0 ∈ H, the PPA for solving the convex minimization problem (2) is defined by x_{k+1} = J_{λ∂ϕ}(x_k), k ≥ 0. In fact, the resolvent of ∂ϕ coincides with the proximity operator prox_ϕ, which was first introduced by Moreau [5]. More precisely, we have prox_{λϕ} = (I + λ∂ϕ)^{−1} = J_{λ∂ϕ}. Proximity operators play an important role in studying various operator splitting algorithms for solving convex optimization problems; see, for example, [6-8]. In particular, Combettes et al. [9] proposed a forward-backward splitting algorithm for solving the dual of the problem of computing the proximity operator of a sum of composed convex functions. Adly et al. [10] provided an explicit decomposition of the proximity operator of the sum of two closed convex functions. Since the resolvent operator is a natural generalization of the proximity operator, Bauschke and Combettes [11] extended the Dykstra algorithm [12] for computing the projection onto the intersection of two closed convex sets to compute the resolvent of the sum of two maximally monotone operators. Combettes [13] generalized the Douglas-Rachford splitting and a Dykstra-like algorithm to compute the resolvent of a sum of maximally monotone operators. Very recently, Artacho and Campoy [14] generalized the averaged alternating modified reflections algorithm [15] to compute the resolvent of the sum of two maximally monotone operators. On the other hand, consider the problem of computing the resolvent of the composed operator A*TA, where H_1 and H_2 are two Hilbert spaces, A : H_1 → H_2 is a continuous linear operator with adjoint A*, and T : H_2 → 2^{H_2} is a maximally monotone operator. Fukushima [16] proved that, if AA* is invertible, then A*TA is maximally monotone. Moreover, Fukushima [16] derived the formulas (5) and (6) for this resolvent, where in (6) z is the unique solution of 0 ∈
(1/λ)(AA*)^{−1}(z − Ax) + Tz. The difference between (5) and (6) is that the former requires evaluating T^{−1}, while the latter requires computing (AA*)^{−1}. In Proposition 23.25 of [17], Bauschke and Combettes proved that, if AA* = µI for some µ > 0, then J_{λA*TA} = I + µ^{−1}A*(J_{λµT} − I)A. (7) Since the computation of the resolvents of composed operators in (5)-(7) is restricted, Moudafi [18] developed a fixed point approach for computing the resolvent of the composed operator A*TA without requiring (AA*)^{−1} or T^{−1}. The basic assumption is that the operator A*TA is maximally monotone. This is true if 0 ∈ ri(R(A) − Dom(T)) [19,20], where ri stands for the relative interior of a set, or, alternatively, if cone(R(A) − Dom(T)) = span(R(A) − Dom(T)) [17], where cone denotes the conical hull of a set and span stands for the closed span of a set. The most general condition for the maximal monotonicity of the composition A*TA can be found in [21].
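The case AA* = µI can be checked in a one-dimensional example. The sketch below is our own sanity check, not from the paper: with H_1 = H_2 = R, A = 2 (so µ = AA* = 4) and T = ∂|·|, the identity J_{λA*TA} = I + µ^{−1}A*(J_{λµT} − I)A from Proposition 23.25 of [17] is compared against a direct computation, using the fact that here ϕ(A·) = 2|·| so the composed resolvent is soft-thresholding with threshold 2λ.

```python
import numpy as np

def soft(z, t):
    # resolvent of t * subdifferential(|.|), i.e. soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

c, lam, x = 2.0, 1.0, 3.0
mu = c * c                       # AA* = c^2 = mu for the scalar operator A = c

# Direct: J_{lam A*TA}(x) = prox of lam*|c(.)| = soft-thresholding with threshold lam*c
direct = soft(x, lam * c)

# Formula: J_{lam A*TA}(x) = x + mu^{-1} * A*( J_{lam*mu*T}(Ax) - Ax )
formula = x + (1.0 / mu) * c * (soft(c * x, lam * mu) - c * x)

print(direct, formula)  # 1.0 1.0
```

Both routes give the same value, illustrating why the restriction AA* = µI makes the composed resolvent computable from the resolvent of T alone.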
Let T = ∂ϕ; then computing the resolvent of A*TA is equivalent to evaluating the proximity operator prox_{ϕ∘A}. More precisely, we have prox_{ϕ∘A}(x) = argmin_{u∈H_1} { (1/2)‖u − x‖² + ϕ(Au) }. (8) The convex optimization problem (8) is a general extension of the famous Rudin-Osher-Fatemi (ROF) image denoising model [22]. It is worth mentioning that Micchelli et al. [23] proposed a fixed point algorithm for solving the proximity operator (8). The work of Moudafi [18] is a generalization of that of Micchelli et al. [23].
In recent years, Newton-type methods have been combined with the forward-backward splitting (FBS) algorithm to accelerate the original FBS algorithm; see, for example, [24-26]. Argyriou et al. [27] considered the convex optimization problem (9), where b ∈ R^n, A : R^n → R^m is a linear transformation, U ∈ R^{n×n} is a symmetric positive definite matrix, and ϕ : R^m → (−∞, +∞] is a proper, lower semi-continuous convex function. They proved that the minimizer of (9) can be found via a fixed point equation. In particular, when U = I, the convex optimization problem (9) is equivalent to the problem of computing the proximity operator of ϕ ∘ A in (8). In [28], Chen et al. proposed an accelerated primal-dual fixed point (PDFP²O) algorithm based on an adapted metric method for solving the convex optimization problem of minimizing the sum of two convex functions, one of which is differentiable while the other is composed with a linear transformation. This algorithm can be viewed as a combination of the original PDFP²O [29] with a quasi-Newton method. The convex optimization problem (9) can be viewed as evaluating the proximity operator of ϕ ∘ A relative to the metric induced by U.
Recall that the proximity operator of a proper, lower semi-continuous convex function f : R^n → (−∞, +∞] relative to the metric induced by U is defined by prox^U_f(x) = argmin_{u∈R^n} { (1/2)‖u − x‖²_U + f(u) }, (10) which was introduced in [30]; see also [31]. Thus, (9) is equivalent to min_{u∈R^n} (1/2)‖u − x‖²_U + ϕ(Au), (11) where x ∈ R^n and A, U, and ϕ are the same as in (9). Letting A = I in (11), it becomes the scaled proximal operator (10). By the first-order optimality condition, the convex optimization problem (11) is equivalent to the following monotone inclusion problem: 0 ∈ U(u − x) + A*∂ϕ(Au), (12) which means that u = (I + U^{−1}A*∂ϕA)^{−1}(x). It is worth mentioning that the scaled proximity operator (10) was extensively used in [32,33] for deriving effective iterative algorithms for structured convex optimization problems. However, these works did not consider the general scaled proximity operator of ϕ ∘ A and the related resolvent operator problem. For this purpose, in this paper we investigate the solution of the resolvent of the composed operator U^{−1}A*TA, where A : H_1 → H_2 is a continuous linear operator, T : H_2 → 2^{H_2} is a maximally monotone operator, and U is a self-adjoint strongly positive definite operator. In particular, when T = ∂ϕ, the resolvent of the composed operator U^{−1}A*TA is equivalent to the proximity operator prox^U_{ϕ∘A}(x) = argmin_{u∈H_1} { (1/2)‖u − x‖²_U + ϕ(Au) }. (13) The convex minimization problem (13) is an extension of the convex minimization problem (8). In this paper, we always assume that A*TA is maximally monotone under suitable qualification conditions. According to Minty's theorem, if A*TA is maximally monotone, then R(I + λA*TA) = H_1, for any λ > 0. Thus, the resolvent J_{λA*TA}(x) is single-valued, for any x ∈ H_1. To study the solution of the resolvent of the composed operator U^{−1}A*TA, we divide our work into two parts. First, we present an explicit solution of the resolvent of the composed operator under some conditions on A and U.
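When ϕ is quadratic, the scaled proximity operator of ϕ ∘ A reduces to a linear system, which gives a convenient reference solution. The sketch below is our own illustration with ϕ(y) = (1/2)‖y − b‖², where b, the random test data, and all variable names are our choices: the first-order optimality condition becomes U(u − x) + Aᵀ(Au − b) = 0, a linear equation in u.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
b = rng.standard_normal(m)
M = rng.standard_normal((n, n))
U = M @ M.T + np.eye(n)          # self-adjoint, strongly positive definite

# phi(y) = 0.5*||y - b||^2  =>  prox^U_{phi o A}(x) solves (U + A^T A) u = U x + A^T b
u = np.linalg.solve(U + A.T @ A, U @ x + A.T @ b)

# Check the first-order optimality condition U(u - x) + A^T (A u - b) = 0
residual = U @ (u - x) + A.T @ (A @ u - b)
print(np.linalg.norm(residual))  # ~0
```

For non-quadratic ϕ no such closed form exists in general, which is what motivates the fixed point approach developed later in the paper.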
Second, we develop a fixed point algorithm to solve the resolvent of the composed operator. As an application, we discuss the resolvent of a finite sum of maximally monotone operators. Furthermore, we employ the obtained results to solve the problems of computing the scaled proximity operator of a convex function composed with a linear operator and of a finite sum of proper, lower semi-continuous convex functions, respectively.
The rest of the paper is organized as follows. In Section 2, we review some background on monotone operator theory. In Section 3, we first investigate the solution of the resolvent of the composed operator U^{−1}A*TA; second, we propose a fixed point approach for computing this resolvent; finally, we employ the proposed fixed point algorithm to compute the resolvent of the sum of a finite number of maximally monotone operators with U. In Section 4, we apply the obtained results to solve the problems of computing the scaled proximity operator of a convex function composed with a linear operator and of a finite sum of proper, lower semi-continuous convex functions, respectively. We give some conclusions and future work in the last section.

Preliminaries
In this section, we review some definitions and lemmas from monotone operator theory and convex analysis that are used throughout the paper. Most of them can be found in [17].
Let H, H_1, and H_2 be real Hilbert spaces with inner product ⟨·, ·⟩ and induced norm ‖·‖. The notation x_k ⇀ x stands for {x_k} converging weakly to x, and x_k → x stands for {x_k} converging strongly to x. I denotes the identity operator. Let A : H_1 → H_2 be a continuous linear operator, and let its adjoint A* : H_2 → H_1 satisfy ⟨Ax, y⟩ = ⟨x, A*y⟩, for any x ∈ H_1 and y ∈ H_2.
Let T : H → 2^H be a set-valued operator. We denote its domain by Dom(T) = {x ∈ H : Tx ≠ ∅}, its range by R(T) = {y ∈ H : ∃x ∈ H, y ∈ Tx}, its graph by gra(T) = {(x, y) ∈ H × H : y ∈ Tx}, and its set of zeros by zer(T) = {x ∈ H : 0 ∈ Tx}. We say that T is monotone if ⟨x − y, u − v⟩ ≥ 0 for all (x, u), (y, v) ∈ gra(T). T is said to be maximally monotone if its graph is not properly contained in the graph of any other monotone operator. Letting λ > 0, the resolvent of λT is defined by J_{λT} = (I + λT)^{−1}, and the Yosida approximation of T with index λ is ^{λ}T = (1/λ)(I − J_{λT}). The resolvent and the Yosida approximation of λT are therefore related by J_{λT} = I − λ(^{λ}T). We follow the notation of [31]. Let B(H_1, H_2) be the space of bounded linear operators from H_1 to H_2, and B(H) = B(H, H). We set S(H) = {U ∈ B(H) | U = U*}, where U* denotes the adjoint of U. On S(H), the Loewner partial ordering is defined by U ⪰ V if and only if ⟨Ux, x⟩ ≥ ⟨Vx, x⟩, for all x ∈ H. Let α > 0. We set P_α(H) = {U ∈ S(H) | ⟨Ux, x⟩ ≥ α‖x‖², for all x ∈ H}. Every U ∈ P_α(H) admits a well-defined square root U^{1/2}. For every U ∈ P_α(H), we define a scalar product and a norm by ⟨x, y⟩_U = ⟨x, Uy⟩ and ‖x‖_U = ⟨x, Ux⟩^{1/2}, for x, y ∈ H. Let T : H → H be a single-valued operator. We say that T is β-averaged, for β ∈ (0, 1), if there exists a non-expansive operator R : H → H such that T = (1 − β)I + βR. We collect several useful lemmas.

Lemma 2 ([17]). Let C be a nonempty subset of H and let T : C → H.

Lemma 3 ([34,35]). Let S be a nonempty subset of H, let T_1 : S → H be α_1-averaged, and let T_2 : S → H be α_2-averaged. Then, T_2 ∘ T_1 is (α_1 + α_2 − 2α_1α_2)/(1 − α_1α_2)-averaged.

Lemma 4 ([31]). Let T : H → 2^H be a maximally monotone operator, let α > 0, and let U ∈ P_α(H). Equip H with the scalar product ⟨x, y⟩_U = ⟨x, Uy⟩, for any x, y ∈ H. Then, the following hold: (i) U^{−1}T is maximally monotone on H equipped with ⟨·, ·⟩_U; (ii) the resolvent J_{U^{−1}T} is firmly non-expansive with respect to ‖·‖_U.

The Krasnoselskii-Mann algorithm is a popular iterative method for finding fixed points of non-expansive operators. Its convergence is summarized in the following theorem.
Then, the following hold: (i) {x_k} is Fejér monotone with respect to Fix(T), i.e., ‖x_{k+1} − p‖ ≤ ‖x_k − p‖, for any p ∈ Fix(T); (ii) x_{k+1} − Tx_k converges strongly to 0; (iii) {x_k} converges weakly to a fixed point in Fix(T).
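A minimal numerical illustration of the Krasnoselskii-Mann iteration, which is our own example and not from the paper: T is a rotation of the plane by 90 degrees, a non-expansive operator with Fix(T) = {0}. The unrelaxed iterates x_{k+1} = Tx_k would cycle forever, while the relaxed iterates converge to the fixed point.

```python
import numpy as np

T = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # rotation by 90 degrees: non-expansive, Fix(T) = {0}

x = np.array([1.0, 0.0])
alpha = 0.5                      # constant relaxation parameter in (0, 1)
for _ in range(60):
    x = (1 - alpha) * x + alpha * (T @ x)   # Krasnoselskii-Mann step

print(np.linalg.norm(x))         # close to 0
```

Each relaxed step contracts the norm by the factor |(1 − α) + αi| = √2/2, so the iterates spiral into the unique fixed point, consistent with conclusions (i)-(iii) above.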

Computing Method for the Resolvent of Composed Operators
In this section, we consider the problem of computing the resolvent of the composed operator in (13). First, we present explicit solutions; the obtained results extend and generalize the corresponding results of Fukushima [16] and of Bauschke and Combettes [17], respectively. Second, we develop a fixed point approach for computing the resolvent of U^{−1}A*TA, and we propose a simple and efficient iterative algorithm to approximate the fixed point. The convergence of this algorithm is established in general Hilbert spaces. Finally, we apply the fixed point method to compute the resolvent of the sum of a finite family of maximally monotone operators.

An Analytic Approach to the Resolvent Operator
The following proposition is a direct generalization of Proposition 23.25 of [17].
Proposition 1. Let α > 0 and U ∈ P_α(H_1). Let T : H_2 → 2^{H_2} be a maximally monotone operator, and let A : H_1 → H_2 be a continuous linear operator with adjoint A*. Suppose that AU^{−1}A* is invertible. Let λ > 0. Then, the following hold: (i) J_{λU^{−1}A*TA}(x) = x − λU^{−1}A*u, where u = (T^{−1} + λAU^{−1}A*)^{−1}(Ax). (21) (ii) Suppose that AU^{−1}A* = νI, for some ν > 0. Then, J_{λU^{−1}A*TA}(x) = x − λU^{−1}A*(^{λν}T(Ax)).

Proof. By Lemma 4, λU^{−1}A*TA is maximally monotone whenever λA*TA is maximally monotone. Thus, the resolvent in question is well defined and single-valued. (ii) Substituting νI = AU^{−1}A* into u of (21), we find that u = (T^{−1} + λνI)^{−1}(Ax). Since the Yosida approximation satisfies ^{λν}T = (T^{−1} + λνI)^{−1}, this is equivalent to u = ^{λν}T(Ax). Then, the formula in (ii) follows from (i).

In Formula (21), T^{−1} needs to be evaluated, which is sometimes difficult. Inspired by the method introduced by Fukushima [16], we provide an alternative way to compute the resolvent of the composed operator that avoids computing the inverse of the operator T.

Proposition 2. Let α > 0 and U ∈ P_α(H_1). Let T : H_2 → 2^{H_2} be a maximally monotone operator, and let A : H_1 → H_2 be a continuous linear operator with adjoint A*. Suppose that AU^{−1}A* is invertible. Then, the resolvent of λU^{−1}A*TA is J_{λU^{−1}A*TA}(x) = x + U^{−1}A*(AU^{−1}A*)^{−1}(z − Ax), where z is the unique solution of 0 ∈ (1/λ)(AU^{−1}A*)^{−1}(z − Ax) + Tz.

Proof. Let ω = x − λU^{−1}A*u, with u as in (21), and let z = Aω. By (25) and the definition of ω, we obtain u = (1/λ)(AU^{−1}A*)^{−1}(Ax − z). It follows from (24) and (25) that u ∈ Tz. Taking into account that u ∈ Tz and (26), we get that z is the unique solution of the inclusion above. Finally, we come to the conclusion that J_{λU^{−1}A*TA}(x) = ω = x + U^{−1}A*(AU^{−1}A*)^{−1}(z − Ax).
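The case AU^{−1}A* = νI of Proposition 1 can be checked in a scalar example. The sketch below is our own illustration with our own choice of parameters: H_1 = H_2 = R, U = 2, A = 1, so that AU^{−1}A* = 1/2 = ν, and T = ∂|·|. Then the composed resolvent is soft-thresholding with threshold λν, and the Yosida-based formula reproduces it.

```python
import numpy as np

def soft(z, t):
    # resolvent of t * subdifferential(|.|), i.e. soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

lam, x = 1.0, 3.0
Uinv, a = 0.5, 1.0              # U = 2, A = 1, so A U^{-1} A* = 0.5 = nu
nu = a * Uinv * a

# Direct: resolvent of lam * U^{-1} A* T A = lam * 0.5 * subdiff(|.|)
direct = soft(x, lam * nu)

# Proposition-style formula: u = Yosida_{lam*nu}T(Ax), then x - lam * U^{-1} A* u
u = (a * x - soft(a * x, lam * nu)) / (lam * nu)
formula = x - lam * Uinv * a * u

print(direct, formula)  # 2.5 2.5
```

The agreement of the two values illustrates how, under the invertibility assumption, the composed resolvent reduces to one Yosida approximation of T applied to Ax.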

A Fixed-Point Approach to the Resolvent Operator
In Propositions 1 and 2, the resolvent of the composed operator U^{−1}A*TA is computed either through T^{−1} or by requiring AU^{−1}A* to satisfy additional conditions. In practice, it is still difficult to evaluate the resolvent when these conditions do not hold. To overcome this difficulty, in this subsection we propose a fixed point algorithm to compute the resolvent of U^{−1}A*TA. Our method discards the conditions on T^{−1} and AU^{−1}A*.

Lemma 5. Let α > 0 and U ∈ P_α(H_1). Let T : H_2 → 2^{H_2} be a maximally monotone operator, and let A : H_1 → H_2 be a continuous linear operator. Let x ∈ H_1. Then, the following hold:

Proof. (1) Let x ∈ H_1; then the first identity follows. (2) Let x ∈ H_1; the second identity follows similarly.

In the next lemma, we provide a fixed point characterization of the resolvent of the composed operator U^{−1}A*TA. To achieve this goal, we define two operators F and Q, where

Qy := (I − J_{(1/λ)T})(Fy).

Lemma 6. Let α > 0 and U ∈ P_α(H_1). Let T : H_2 → 2^{H_2} be a maximally monotone operator, and let A : H_1 → H_2 be a continuous linear operator with adjoint A*. Let λ > 0. Then, we have J_{λU^{−1}A*TA}(x) = x − λU^{−1}A*y if and only if y is a fixed point of Q.
Proof. According to (32), we have (33). Multiplying both sides of (33) by U^{−1}A*, we obtain (34) and, finally, (35). By comparing (35) with (27), it is easy to see that the claimed equivalence holds. The proof is complete.

Lemma 7. Let α > 0 and U ∈ P_α(H_1). Let T : H_2 → 2^{H_2} be a maximally monotone operator, and let A : H_1 → H_2 be a continuous linear operator with adjoint A*. Let λ > 0, and define the operator W as follows. Then, the following hold:

Proof. (i) Let y_1, y_2 ∈ H_2; then we can estimate the difference of the images. In virtue of U = U* and UU^{−1} = I, the estimate simplifies. Because U ∈ P_α(H_1), so that ⟨x, Ux⟩ ≥ α‖x‖² for any x ∈ H_1, we obtain that F is (λ‖A‖²/(2α))-averaged, and, by Lemma 3, we find that Q = (I − J_{(1/λ)T}) ∘ F is averaged. This completes the proof.
Lemma 6 tells us that the resolvent of the composed operator U^{−1}A*TA can be computed via the fixed points of the operator Q. Furthermore, Lemma 7 shows that Q is an averaged operator. Therefore, we can define an iterative algorithm to approximate a fixed point of Q. For any y_0 ∈ H_2, let the sequences {u_k} and {y_k} be defined by (36), where {α_k} lies in the admissible range determined by the averagedness constant of Q and λ ∈ (0, 2α/‖A‖²). Now, we are ready to prove the convergence of the iterative scheme (36).

Theorem 2. Let α > 0 and U ∈ P_α(H_1). Let T : H_2 → 2^{H_2} be a maximally monotone operator, and let A : H_1 → H_2 be a continuous linear operator with adjoint A*. Let the sequences {y_k} and {u_k} be generated by (36). Assume that inf_k α_k > 0 and λ ∈ (0, 2α/‖A‖²). Then, {u_k} converges strongly to the resolvent J_{λU^{−1}A*TA}(x).

Proof. (i) Since Q is β-averaged for some β ∈ (0, 1), there exists a non-expansive operator C such that Q = (1 − β)I + βC. Therefore, the iterative sequence {y_{k+1}} in (36) can be rewritten as y_{k+1} = (1 − α_kβ)y_k + α_kβCy_k. The condition on {α_k} implies that α_kβ ∈ (0, 1) and ∑_k α_kβ(1 − α_kβ) = +∞. It follows from Lemma 6 that Fix(Q) ≠ ∅, and we observe that Fix(Q) = Fix(C); then, Fix(C) ≠ ∅.
According to Theorem 1, we can conclude that: (a) lim_{k→+∞} ‖y_k − y‖ exists, for any y ∈ Fix(Q) = Fix(C); (b) lim_{k→+∞} ‖y_k − Cy_k‖ = 0 and lim_{k→+∞} ‖y_k − Qy_k‖ = 0; (c) {y_k} converges weakly to a fixed point of C, which is also a fixed point of Q.
(ii) Let y ∈ Fix(Q). Using the facts that F is (λ‖A‖²/(2α))-averaged and that I − J_{(1/λ)T} is non-expansive, we obtain (38). For y_{k+1} and y, we have (39). Combining (38) with (39), we obtain (40), and hence we arrive at (41). We notice that lim_{k→+∞} ‖y_k − y‖ exists and that lim_{k→+∞} ‖y_k − Qy_k‖ = 0. Letting k → +∞, the right-hand side of inequality (41) therefore tends to zero. Together with the conditions inf_k α_k > 0 and 2α/(λ‖A‖²) − 1 > 0, we obtain (42). By virtue of Lemma 6, J_{λU^{−1}A*TA}(x) = x − λU^{−1}A*y, for y ∈ Fix(Q). Then, we can bound ‖u_k − J_{λU^{−1}A*TA}x‖_U. Taking into account the facts that lim_{k→+∞} ‖y_k − y‖ exists and (42), we obtain from this bound that lim_{k→+∞} ‖u_k − J_{λU^{−1}A*TA}x‖_U = 0. Since the two norms ‖·‖ and ‖·‖_U are equivalent, we have lim_{k→+∞} ‖u_k − J_{λU^{−1}A*TA}x‖ = 0. Hence, {u_k} converges strongly to the resolvent J_{λU^{−1}A*TA}(x). This completes the proof.
Remark 1. Letting U = I in (36), it reduces to the iterative algorithm introduced by Moudafi [18]. Therefore, the corresponding result of Moudafi [18] is a special case of ours. At the same time, the proposed iterative algorithm (36) allows a larger range of relaxation parameters than that of [18].
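The scheme (36) can be made concrete when T = ∂ϕ. The sketch below is our own dual fixed-point formulation, written in the same spirit as (36) but not claimed to reproduce it verbatim: the dual variable v satisfies u = x − U^{−1}Aᵀv and is updated through the resolvent of the Fenchel conjugate via Moreau's decomposition. The step size sigma, the choice ϕ = ‖·‖₁, and all variable names are our assumptions for illustration.

```python
import numpy as np

def soft(z, t):
    # prox of t*||.||_1 (soft-thresholding)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_comp(x, A, U, sigma=1.0, iters=200):
    """Approximate prox^U_{||.||_1 o A}(x) by a dual fixed-point iteration.

    Optimality: u = x - U^{-1} A^T v with v in subdiff ||.||_1(Au), which is
    equivalent to v = prox_{sigma*phi*}(v + sigma*A u) for any sigma > 0.
    A sufficient step-size condition is sigma * ||A U^{-1} A^T|| < 2.
    """
    Uinv = np.linalg.inv(U)
    v = np.zeros(A.shape[0])
    for _ in range(iters):
        u = x - Uinv @ (A.T @ v)
        z = v + sigma * (A @ u)
        # Moreau: prox_{sigma*phi*}(z) = z - sigma * prox_{phi/sigma}(z/sigma)
        v = z - sigma * soft(z / sigma, 1.0 / sigma)
    return x - Uinv @ (A.T @ v)

# Sanity check with A = U = I: prox_{||.||_1}(x) is plain soft-thresholding
x = np.array([3.0, -0.2, 1.5])
u = prox_comp(x, np.eye(3), np.eye(3))
print(u)  # soft(x, 1) = [2.0, 0.0, 0.5]
```

The step-size bound here plays the role of the condition λ ∈ (0, 2α/‖A‖²) in Theorem 2: both keep the composed update an averaged map so that the Krasnoselskii-Mann machinery applies.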

Resolvent of a Sum of m Maximally Monotone Operators with U
In this subsection, we apply the fixed-point approach proposed in Section 3.2 to compute the resolvent of the sum of a finite number of maximally monotone operators.

Problem 1. Let α > 0 and U ∈ P_α(H). Let m ≥ 2 be an integer. For any i ∈ {1, ..., m}, let T_i : H → 2^H be a maximally monotone operator. Given x ∈ H, the problem is to evaluate the resolvent operator J_{λU^{−1}∑_{i=1}^m T_i}(x). (43)

To solve the resolvent operator (43), we reformulate it as a special case of the resolvent operator (13), which was studied in the previous section. More precisely, we obtain the following convergence theorem.

Theorem 3. Let α > 0 and U ∈ P_α(H). Let m ≥ 2. For any i ∈ {1, ..., m}, let T_i : H → 2^H be a maximally monotone operator. Let x ∈ H and let y_i^0 ∈ H, i = 1, ..., m. Let the sequences {u_k} and {y_i^k}_{i=1}^m be generated by (44), where α_k ∈ (0, (4α − λm)/(2α)) and λ ∈ (0, 2α/m). Then, the sequence {u_k} converges strongly to the resolvent operator (43).
Proof. Consider the product space H = H^m endowed with the inner product ⟨x, y⟩ = ∑_{i=1}^m ⟨x_i, y_i⟩. The associated norm is ‖x‖ = (∑_{i=1}^m ‖x_i‖²)^{1/2}. Let us introduce the operators T : H → 2^H : (y_1, ..., y_m) ↦ T_1y_1 × ··· × T_my_m and A : H → H : x ↦ (x, ..., x). Therefore, T is a maximally monotone operator, and A is a bounded linear operator with ‖A‖ = √m. Let y ∈ H and x ∈ H; by the definition of A, we have ⟨Ax, y⟩ = ∑_{i=1}^m ⟨x, y_i⟩ = ⟨x, ∑_{i=1}^m y_i⟩.

Hence, we have A*y = ∑_{i=1}^m y_i, for all y ∈ H.
In addition, letting y ∈ H, the resolvent of λT acts componentwise: J_{λT}(y) = (J_{λT_1}(y_1), ..., J_{λT_m}(y_m)). Let y^k = (y_1^k, ..., y_m^k) ∈ H. Then, the iterative scheme (44) can be rewritten in the form of (36). According to Theorem 2 (ii), we can conclude that the sequence {u_k} converges strongly to the resolvent operator (43).
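The product-space construction above can be exercised numerically. The example below is our own: for T_i = c_i·I (the gradients of the quadratics (c_i/2)‖·‖²), the resolvent J_{λ(T_1+T_2)}(x) = x/(1 + λ(c_1 + c_2)) is known in closed form, so a splitting iteration that touches each T_i only through its conjugate prox (with U = I; the step size sigma and all names are our choices) can be checked against it.

```python
import numpy as np

c = [1.0, 3.0]                   # T_i = c_i * I, i.e. gradient of (c_i/2)*||.||^2
lam, x, sigma = 0.5, 2.0, 0.5

# One dual variable per operator; the primal iterate is u = x - sum_i v_i.
v = np.zeros(len(c))
for _ in range(200):
    u = x - v.sum()
    # prox of sigma * (lam*phi_i)^* for phi_i = (c_i/2)||.||^2 is linear shrinkage
    v = np.array([(v[i] + sigma * u) / (1.0 + sigma / (lam * c[i]))
                  for i in range(len(c))])

u = x - v.sum()
closed_form = x / (1.0 + lam * sum(c))   # J_{lam(T_1+T_2)}(x)
print(u, closed_form)                    # both ~ 0.6667
```

At the fixed point each v_i = λc_i·u, i.e. v_i ∈ λT_i(u), so u solves u + λ(T_1 + T_2)u = x; the iteration never needs the resolvent of the sum, only the (here trivial) resolvents of the summands, which is the point of the product-space splitting.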

Applications
In this section, we apply the results obtained in the previous section to solve problems of computing proximity operators of convex functions.
Proof. The proximity operator prox^U_{ϕ∘A}(x) is equal to the resolvent operator J_{U^{−1}A*∘∂ϕ∘A}(x). Applying Theorem 2 with T = ∂ϕ, we can conclude that the sequence {u_k} generated by (49) converges strongly to the proximity operator prox^U_{ϕ∘A}(x).
Proof. Let T_i = ∂f_i, i = 1, ..., m. We know that the proximity operator (50) is equivalent to the resolvent operator (43) with T_i = ∂f_i, for any i = 1, ..., m. By Theorem 3, we can conclude that the sequence {u_k} generated by (51) converges strongly to the proximity operator prox^U_{∑_{i=1}^m f_i}(x).

Conclusions
Inspired and motivated by the work of Moudafi [18], in this paper we discussed the resolvent of the composed operator U^{−1}A*TA. Under some additional conditions, we obtained explicit solutions of the resolvent of the composed operator. The obtained results generalize and extend the classical results of Fukushima [16] and of Bauschke and Combettes [17]. On the other hand, we presented a fixed point approach for computing the resolvent of the composed operator. By virtue of the Krasnoselskii-Mann algorithm for finding fixed points of non-expansive operators, we proved the strong convergence of the proposed fixed-point iterative algorithm. As applications, we employed the proposed algorithm to solve the scaled proximity operator of a convex function composed with a linear operator, as well as the proximity operator of a finite sum of proper, lower semi-continuous convex functions.

Theorem 1 ([17], Krasnoselskii-Mann algorithm). Let C be a nonempty closed convex subset of H, let T : C → C be a non-expansive operator such that Fix(T) ≠ ∅, where Fix(T) denotes the fixed point set of T. Let {α_k} be a sequence in [0, 1] such that ∑_k α_k(1 − α_k) = +∞, let x_0 ∈ C, and set x_{k+1} = (1 − α_k)x_k + α_kTx_k, for every k ≥ 0.