A Fixed-Point Subgradient Splitting Method for Solving Constrained Convex Optimization Problems

Abstract: In this work, we consider a bilevel optimization problem consisting of minimizing the sum of two convex functions, one of which is a composition of a convex function and a nonzero linear transformation, subject to the set of all feasible points represented in the form of common fixed-point sets of nonlinear operators. To find an optimal solution to the problem, we present a fixed-point subgradient splitting method and analyze the convergence properties of the proposed method under some additional assumptions. We investigate how some well-known problems can be solved by the proposed method. Finally, we present some numerical experiments showing the effectiveness of the obtained theoretical results.


Introduction
Many applications in science and engineering have shown a huge interest in solving the inverse problem of finding x ∈ R^n satisfying

    Bx = b,     (1)

where b ∈ R^r is the observed data and B ∈ R^{r×n} is the corresponding nonzero matrix. In practice, the inverse problem typically suffers from the ill-conditioning of the matrix B, so that it may have no solution. Then, an approach for finding an approximate solution by minimizing the squared norm of the residual term has been considered:

    minimize_{x ∈ R^n} ‖Bx − b‖².     (2)

Observe that the problem (2) may have several optimal solutions; in this situation, it is not clear which of these solutions should be considered. One strategy for pursuing the best optimal solution among these many solutions is to add a regularization term to the objective function. The classical technique is to consider the celebrated Tikhonov regularization [1] of the form

    minimize_{x ∈ R^n} ‖Bx − b‖² + λ‖x‖²,     (3)

where λ > 0 is a regularization parameter. In this setting, the uniqueness of the solution to (3) is acquired. However, from a practical point of view, the shortcoming of this strategy is that the unique solution to the regularization problem (3) need not be optimal in the original sense of (2); see [2,3] for further discussions.
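For concreteness, a short Python sketch is given below (an illustration only, not taken from the paper; the matrix B, the data b, and the parameter lam are hypothetical). It compares a plain least-squares solution of (2) with the Tikhonov-regularized solution of (3), obtained from the normal equations (BᵀB + λI)x = Bᵀb.

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((20, 50))       # underdetermined, so (2) has many minimizers
    b = rng.standard_normal(20)
    lam = 1e-1                              # regularization parameter lambda > 0

    # Minimum-norm least-squares solution of (2) via the pseudoinverse.
    x_ls = np.linalg.pinv(B) @ b

    # Tikhonov-regularized solution of (3): (B^T B + lam I) x = B^T b.
    x_tik = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ b)

    print(np.linalg.norm(B @ x_ls - b), np.linalg.norm(B @ x_tik - b))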
To overcome this, one may consider the strategy of selecting a specific solution among the optimal solutions to (2) by minimizing an additional prior function over these optimal solutions. This brings the framework of the following bilevel optimization problem:

    minimize f(x) + h(Ax) subject to x ∈ argmin_{u ∈ R^n} ‖Bu − b‖²,     (4)

where f : R^n → R and h : R^m → R are convex functions, and A : R^n → R^m is a nonzero linear transformation. It is very important to point out that many problems can be formulated in this form. When f and h are ℓ1-norm penalties and A is the discrete difference matrix in R^{(n−1)×n}, the problem (4) becomes the fused lasso [4] solution to the problem (2). This situation also occurs in image denoising problems (r = n and B is the identity matrix) and in image inpainting problems (r = n and B is a symmetric diagonal matrix), where the term ‖Ax‖₁ is known as the 1D total variation [5]. When m = n, f = ‖·‖², h = ‖·‖₁, and A is the identity matrix, the problem (4) becomes the elastic net [6] solution to the problem (2). Moreover, in wavelet-based image restoration problems, the matrix A is given by an inverse wavelet transform [7]. Let us consider the constraint set of (4); it is known that introducing the Landweber operator T : R^n → R^n of the form

    T(x) := x − (1/‖B‖²) Bᵀ(Bx − b)

yields that T is firmly nonexpansive and the set of all fixed points of T is nothing else than the set argmin_{u ∈ R^n} ‖Bu − b‖²; see [8] for more details. Motivated by this observation, the bilevel problem (4) can be considered in the general setting

    minimize f(x) + h(Ax) subject to x ∈ Fix T,     (5)

where T : R^n → R^n is a nonlinear operator with Fix T := {x ∈ R^n : Tx = x} ≠ ∅. Note that problem (5) encompasses not only the problem (4), but also many other problems in the literature, for instance, the minimization over the intersection of a finite number of sublevel sets of convex nonsmooth functions (see Section 5.2) and the minimization over the intersection of many convex sets in which the metric projection onto such an intersection cannot be computed explicitly; see [9–11] for more details. There are some existing methods for solving convex optimization problems over fixed-point sets in the form of (5), but the most celebrated one is the hybrid steepest descent method, which was first investigated in [12]. Note that the algorithm proposed by Yamada [12] relies on the hypotheses that the objective function is strongly convex and smooth and the operator T is nonexpansive. Several variants and generalizations of this well-known method exist: for instance, Yamada and Ogura [11] considered the same scheme for solving the problem (5) when T belongs to the class of so-called quasi-shrinking operators; Cegielski [10] proposed a generalized hybrid steepest descent method by using a sequence of quasi-nonexpansive operators; Iiduka [13,14] considered a nonsmooth convex optimization problem (5) with fixed-point constraints of certain quasi-nonexpansive operators.
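As a small numerical check of this observation (illustrative only; the data are hypothetical), the following Python sketch implements the Landweber operator T(x) = x − (1/‖B‖²)Bᵀ(Bx − b) and verifies that a least-squares solution of (2) is a fixed point of T.

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.standard_normal((30, 10))
    b = rng.standard_normal(30)
    normB2 = np.linalg.norm(B, 2) ** 2              # ||B||^2 = largest eigenvalue of B^T B

    def landweber(x):
        # T(x) = x - (1/||B||^2) B^T (Bx - b)
        return x - (B.T @ (B @ x - b)) / normB2

    x_star = np.linalg.lstsq(B, b, rcond=None)[0]   # a minimizer of ||Bx - b||^2
    print(np.linalg.norm(landweber(x_star) - x_star))   # approximately 0: x_star is a fixed point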
On the other hand, in the recent decade, the split common fixed point problem [15,16] has turned out to be one of the most attractive nonlinear problems due to its wide applicability in many image and signal processing problems. Actually, given a nonzero linear transformation A : R^n → R^m and two nonlinear operators T : R^n → R^n and S : R^m → R^m with Fix(T) ≠ ∅ and R(A) ∩ Fix(S) ≠ ∅, the split common fixed point problem is to find a point x* ∈ R^n such that x* ∈ Fix(T) and Ax* ∈ Fix(S). The key idea of this problem is to find a point in the fixed-point set of a nonlinear operator in a primal space whose image under an appropriate linear transformation is a fixed point of another nonlinear operator in another space. This situation appears, for instance, in dynamic emission tomographic image reconstruction [17] and in intensity-modulated radiation therapy treatment planning; see [18] for more details. Of course, many authors have investigated iterative algorithms for split common fixed point problems and proposed generalizations in several aspects; see, for example, [9,19–22] and the references therein. The aim of this paper is to present a nonsmooth and non-strongly convex version of the hybrid steepest descent method for minimizing the sum of two convex functions over fixed-point constraints of the form

    minimize f(x) + h(Ax) subject to x ∈ X ∩ Fix T ∩ A^{−1}(Fix S),

where f : R^n → R and h : R^m → R are convex nonsmooth functions, A : R^n → R^m is a nonzero linear transformation, T : R^n → R^n and S : R^m → R^m are certain quasi-nonexpansive operators with Fix(T) ≠ ∅ and R(A) ∩ Fix(S) ≠ ∅, and X ⊂ R^n is a simple closed convex bounded set. We prove the convergence of the function values to the minimum value under some control conditions on a step-size sequence and a parameter. The paper is organized as follows. After recalling and introducing some useful notions and tools in Section 2, we present our algorithm and discuss its convergence analysis in Section 3. Furthermore, in Section 4, we discuss an important implication of our problem and algorithm for minimizing a sum of convex functions over coupling constraints. In Section 5, we discuss in detail some remarkably practical applications, and Section 6 describes the results of numerical experiments on a fused lasso-like problem. Finally, conclusions are given in Section 7.

Preliminaries
We summarize some useful notations, definitions, and properties, which we will utilize later. For further details, the reader can consult the well-known books, for instance, [8,23–25]. Let R^n be the n-dimensional Euclidean space with inner product ⟨·,·⟩ and the corresponding norm ‖·‖.
Let T : R^n → R^n be an operator. We denote the set of all fixed points of T by Fix T, that is, Fix T := {x ∈ R^n : Tx = x}. We say that T is ρ-strongly quasi-nonexpansive (ρ-SQNE), where ρ ≥ 0, if Fix T ≠ ∅ and ‖Tx − z‖² ≤ ‖x − z‖² − ρ‖Tx − x‖² for all x ∈ R^n and z ∈ Fix T. If ρ > 0, then T is called strongly quasi-nonexpansive (SQNE). If ρ = 0, then T is called quasi-nonexpansive (QNE), that is, ‖Tx − z‖ ≤ ‖x − z‖ for all x ∈ R^n and z ∈ Fix T. Clearly, if T is SQNE, then it is QNE. We say that T is a cutter if Fix T ≠ ∅ and ⟨x − Tx, z − Tx⟩ ≤ 0 for all x ∈ R^n and all z ∈ Fix T. We say that T is firmly nonexpansive (FNE) if ‖Tx − Ty‖² ≤ ⟨Tx − Ty, x − y⟩ for all x, y ∈ R^n.
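For intuition, the metric projection onto a closed convex set is a standard example of an FNE operator, hence also a cutter and a QNE operator. The following Python sketch (illustrative only; the unit ball is a hypothetical choice of set) checks the cutter inequality ⟨x − Tx, z − Tx⟩ ≤ 0 for the projection onto the Euclidean unit ball.

    import numpy as np

    def proj_ball(x):
        # Metric projection onto the closed unit ball {u : ||u|| <= 1}.
        n = np.linalg.norm(x)
        return x if n <= 1.0 else x / n

    rng = np.random.default_rng(2)
    x = 5.0 * rng.standard_normal(4)            # an arbitrary point
    z = rng.standard_normal(4)
    z = 0.5 * z / np.linalg.norm(z)             # a point of the ball, i.e., a fixed point
    Tx = proj_ball(x)
    print(np.dot(x - Tx, z - Tx))               # <= 0 by the cutter property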
The following properties will be applied in the next sections.
Fact 1 (Lemma 2.1.21 [8]). If T : R^n → R^n is QNE, then Fix T is closed and convex.
Let f : R^n → R be a function and x ∈ R^n. The subdifferential of f at x is the set ∂f(x) := {g ∈ R^n : f(y) ≥ f(x) + ⟨g, y − x⟩ for all y ∈ R^n}. Fact 3 (Corollary 16.15 [24]). Let f : R^n → R be a convex function. Then, the subdifferential ∂f(x) ≠ ∅ for all x ∈ R^n.
Fact 4 (Proposition 16.17 [24]). Let f : R^n → R be a convex function. Then, the subdifferential ∂f maps every bounded subset of R^n to a bounded set.
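As a concrete example of these notions, for f = ‖·‖₁ a subgradient at x can be taken coordinatewise as a sign vector. The sketch below (illustrative only) builds such a subgradient and checks the subgradient inequality f(y) ≥ f(x) + ⟨g, y − x⟩ at a few random points.

    import numpy as np

    def subgrad_l1(x):
        # One element of the subdifferential of ||.||_1 at x (0 is chosen where x_i = 0).
        return np.sign(x)

    rng = np.random.default_rng(3)
    x = rng.standard_normal(5)
    g = subgrad_l1(x)
    for _ in range(3):
        y = rng.standard_normal(5)
        # Subgradient inequality: ||y||_1 >= ||x||_1 + <g, y - x>.
        print(np.linalg.norm(y, 1) >= np.linalg.norm(x, 1) + g @ (y - x) - 1e-12)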
As we work on the n-dimensional Euclidean space, we will use the notion of a matrix instead of the notion of a linear transformation throughout this work. Denote by R^{m×n} the set of all real-valued m × n matrices. Let A ∈ R^{m×n} be given. We denote by R(A) := {y ∈ R^m : y = Ax for some x ∈ R^n} its range and by Aᵀ its transpose. We denote the induced norm of A by ‖A‖, which is given by ‖A‖ := max{‖Ax‖ : x ∈ R^n, ‖x‖ ≤ 1} = √λ_max(AᵀA), where λ_max(AᵀA) denotes the largest eigenvalue of AᵀA.
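A quick numerical confirmation of this formula (illustrative only; the matrix is hypothetical) compares the induced norm of A with the square root of the largest eigenvalue of AᵀA.

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((6, 9))
    induced = np.linalg.norm(A, 2)                        # max ||Ax|| over ||x|| <= 1
    via_eig = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))
    print(induced, via_eig)                               # the two values coincide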

Method and its Convergence
Now, we formulate the composite nonsmooth convex minimization problem over the intersection of fixed-point sets which we aim to investigate throughout this paper. Problem 1. Let R^n and R^m be two Euclidean spaces. Assume that f : R^n → R and h : R^m → R are convex functions, A ∈ R^{m×n} is a nonzero matrix, T : R^n → R^n is a quasi-nonexpansive operator with Fix T ≠ ∅, S : R^m → R^m is a cutter with R(A) ∩ Fix S ≠ ∅, and X ⊂ R^n is a nonempty closed convex bounded set. Our objective is to solve

    minimize f(x) + h(Ax) subject to x ∈ X ∩ Fix T ∩ A^{−1}(Fix S).

Throughout this work, we denote the solution set of Problem 1 by Γ and assume that it is nonempty.
Problem 1 can be viewed as a bilevel problem in which data are given from two sources in a system. Actually, let us consider a system of two users at different sources (they may have different numbers of factors, n and m) who can communicate with each other via the transformation A. The first user aims to find the best solutions with respect to the criterion f among the feasible points represented in the form of the fixed-point set of an appropriate operator T. Similarly, the second user has its own objective, in the same fashion, of finding the best solutions among the feasible points in Fix S with respect to the prior criterion h. Now, to find the best solutions of this system, we consider the fixed-point subgradient splitting method (FSSM, for short) stated as Algorithm 1.

Algorithm 1: Fixed-Point Subgradient Splitting Method.
Initialization: Choose a positive sequence {α_k}_{k≥1}, a parameter γ ∈ (0, +∞), and an arbitrary x^1 ∈ R^n. Iterative Step: For a given x^k ∈ R^n, compute the iterates z^k ∈ R^m, y^k ∈ R^n, and x^{k+1} ∈ X. Remark 1. This algorithm simultaneously enjoys the following features: (i) splitting computation, (ii) a simple scheme, and (iii) boundedness of the iterates. Concerning the first feature, the iterative scheme processes a subgradient of f and the operator T in the space R^n, and a subgradient of h and the operator S in the space R^m, separately. Regarding the simplicity of the iterative scheme, we need not compute the inverse of the matrix A; the transpose of A is enough. Finally, the third feature is typically required when establishing the convergence of subgradient-type methods. Of course, the boundedness is often imposed in image processing and machine learning in the form of a (large) box constraint or a large Euclidean ball.
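Since the display formulas of the iterative step are not reproduced here, the following Python sketch records only one plausible reading of the update, inferred from the quantities appearing in the analysis below: a subgradient step on h after applying S, a transfer back to R^n through Aᵀ weighted by γ, an application of T, a subgradient step on f, and a projection onto X. The helper arguments subgrad_f, subgrad_h, T, S, and proj_X, as well as the exact order and form of the operations, are assumptions and should be checked against the published algorithm.

    import numpy as np

    def fssm(x1, A, subgrad_f, subgrad_h, T, S, proj_X, gamma, alpha, iters=100):
        # Hypothetical reconstruction of Algorithm 1 (FSSM); not verbatim from the paper.
        x = x1
        for k in range(1, iters + 1):
            Sx = S(A @ x)
            z = Sx - alpha(k) * subgrad_h(Sx)            # subgradient step on h in R^m
            y = x + gamma * (A.T @ (z - A @ x))          # transfer back to R^n via A^T
            Ty = T(y)                                    # fixed-point step for T
            x = proj_X(Ty - alpha(k) * subgrad_f(Ty))    # subgradient step on f, then projection onto X
        return x

    # Toy usage with f = h = ||.||_1, S = T = identity, and X = [-1, 1]^n (degenerate but runnable):
    A = np.eye(3)
    sol = fssm(np.ones(3), A, np.sign, np.sign, lambda v: v, lambda v: v,
               lambda v: np.clip(v, -1.0, 1.0), gamma=0.5, alpha=lambda k: 1.0 / k)
    print(sol)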
To study the convergence properties of the function values of a sequence generated by Algorithm 1, we start with the following technical result. Lemma 1. Let {x^k}_{k≥1} be a sequence generated by Algorithm 1. Then, for every k ≥ 1 and u ∈ X ∩ Fix T ∩ A^{−1}(Fix S), it holds that Proof. Let k ≥ 1 be arbitrary. By the definition of {y^k}_{k≥1}, we have Now, by using the definition of {z^k}_{k≥1} and the cutter property of S, we derive which in turn implies that (6) becomes We now focus on the last two terms of the right-hand side of (7). Observe that Now, inequalities (6)–(8) together give On the other hand, using the definition of {x^k}_{k≥1} and the assumption that T is QNE, we obtain Replacing (9) in (10), we obtain Next, the convexities of f and h give By making use of these two inequalities in (11), we obtain which is the required inequality; the proof is complete.
The following lemma is very useful for the convergence result. Proof. As X is a bounded set, it is clear that the sequence {x^k}_{k≥1} is bounded. Now, let u ∈ Γ be given. The linearity of A and the quasi-nonexpansiveness of S yield This implies that {SAx^k}_{k≥1} is bounded. Consequently, applying Fact 4, we obtain that {h'(SAx^k)}_{k≥1} is also bounded.
By the triangle inequality, we have Therefore, the boundedness of {α_k}_{k≥1} implies that {z^k}_{k≥1} is bounded. Consequently, the triangle inequality and the linearity of A yield the boundedness of {y^k}_{k≥1}. As T is QNE, the sequence {Ty^k}_{k≥1} is bounded. Thus, {f'(Ty^k)}_{k≥1} is bounded by Fact 4.
For the sake of simplicity, we denote the optimal value of Problem 1 by (f + h ∘ A)* and assume that (f + h ∘ A)* > −∞.
We consider a convergence property of the objective values with diminishing step sizes in the following theorem. Theorem 1. Let {x^k}_{k≥1} be a sequence generated by Algorithm 1 and suppose that the following control conditions hold. Proof. Let z ∈ Γ be given. We note from Lemma 1 that, for every k ≥ 1, the inequality (12) holds; this is true because −((1 − γ‖A‖²)/γ)‖z^k − Ax^k‖² ≤ 0 by assumption (i). Summing up (12) for 1, . . . , k, we obtain that This implies that Next, we show that lim inf_{k→+∞} ((f(Ty^k) + h(SAx^k)) − (f + h ∘ A)*) ≤ 0. Supposing, to the contrary, that this lower limit is positive, there exist k_0 ≥ 1 and ε > 0 such that which is a contradiction. Therefore, we can conclude that lim inf_{k→+∞} (f(Ty^k) + h(SAx^k)) ≤ (f + h ∘ A)*.

Remark 2.
The convergence results obtained in Theorem 1 are slightly different from the convergence results obtained by the classical gradient method or even the projected gradient method, namely, lim inf_{k→+∞} (f(Ty^k) + h(SAx^k)) = (f + h ∘ A)*. This is because, in each iteration, we cannot ensure whether the estimate Ty^k belongs to the constraint set Fix(T) or not, which means that the property f(Ty^k) ≥ f* may not be true in general. Similarly, we cannot ensure that h(SAx^k) ≥ (h ∘ A)*.

Convex Minimization Involving Sum of Composite Functions
The aim of this section is to show that Algorithm 1 and its convergence properties can be employed in solving a convex minimization problem involving the sum of a finite number of composite functions.
Let us take a look at the composite convex minimization problem

    minimize f(x) + Σ_{i=1}^{l} h_i(A_i x) subject to x ∈ X ∩ Fix T ∩ (∩_{i=1}^{l} A_i^{−1}(Fix S_i)),     (13)

where we assume further that, for all i = 1, . . . , l, the function h_i : R^{m_i} → R is convex, the matrix A_i ∈ R^{m_i×n} is nonzero, and the operator S_i : R^{m_i} → R^{m_i} is a cutter with R(A_i) ∩ Fix S_i ≠ ∅. In this section, the solution set of (13) is denoted by Ω, and we assume that it is nonempty.

Define the matrix A ∈ R^{m×n}, where m := m_1 + · · · + m_l, by stacking the matrices A_1, . . . , A_l, that is, A := [A_1; A_2; . . . ; A_l], and define the operator S : R^m → R^m blockwise by S(x_1, x_2, . . . , x_l) := (S_1 x_1, S_2 x_2, . . . , S_l x_l) for all x = (x_1, x_2, . . . , x_l) ∈ R^m, so that Fix S = Fix S_1 × · · · × Fix S_l. Furthermore, defining a function h : R^m → R by h(x) := Σ_{i=1}^{l} h_i(x_i) for all x = (x_1, x_2, . . . , x_l) ∈ R^m, we also have that the function h is convex (see [24], Proposition 8.25). By the above setting, we can rewrite the problem (13) as

    minimize f(x) + h(Ax) subject to x ∈ X ∩ Fix T ∩ A^{−1}(Fix S),

which is nothing else than Problem 1.
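For illustration, the following Python sketch (not part of the paper; the blocks A_i, h_i, and S_i are hypothetical) stacks the matrices A_i into A, applies S blockwise, and assembles a stacked subgradient from subgradients of the individual h_i, which is the structure used for d_k in Algorithm 2 below; it also checks the bound ‖A‖² ≤ Σ_i ‖A_i‖² used later.

    import numpy as np

    # Hypothetical blocks: two composite terms h_i(A_i x) with S_i = identity and h_i = ||.||_1.
    A_blocks = [np.arange(6.0).reshape(2, 3), np.ones((3, 3))]
    subgrad_h = [np.sign, np.sign]                  # one element of each subdifferential of h_i
    S_blocks = [lambda v: v, lambda v: v]

    A = np.vstack(A_blocks)                         # stacked matrix A in R^{m x n}, m = sum of m_i

    def S(y):
        # Apply S blockwise on y = (y_1, ..., y_l) in R^m.
        out, start = [], 0
        for Ai, Si in zip(A_blocks, S_blocks):
            mi = Ai.shape[0]
            out.append(Si(y[start:start + mi]))
            start += mi
        return np.concatenate(out)

    x = np.array([1.0, -2.0, 0.5])
    d_k = np.concatenate([g(Si(Ai @ x)) for Ai, Si, g in zip(A_blocks, S_blocks, subgrad_h)])
    print(A @ x, S(A @ x), d_k)
    print(np.linalg.norm(A, 2) ** 2 <= sum(np.linalg.norm(Ai, 2) ** 2 for Ai in A_blocks))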
Here, to investigate the solving of the problem (13), we state Algorithm 2 as follows.

Initialization:
The positive sequence {α_k}_{k≥1}, the parameter γ ∈ (0, +∞), and an arbitrary x^1 ∈ R^n. Iterative Step: For a given x^k ∈ R^n, compute the next iterate x^{k+1} as in Algorithm 1, with A, h, and S given above. As a consequence of the above setting, we note that for all x = (x_1, x_2, . . . , x_l) ∈ R^m. Furthermore, we know that ∂h(SAx^k) = ∂h_1(S_1 A_1 x^k) × · · · × ∂h_l(S_l A_l x^k); see ([25], Corollary 2.4.5). Putting d_k := (d_{k,1}, . . . , d_{k,l}), where d_{k,i} ∈ ∂h_i(S_i A_i x^k), i = 1, . . . , l, for all k ≥ 1, we obtain that for all k ≥ 1. Thus, Algorithm 2 can be rewritten as for all k ≥ 1. Since ‖A‖² ≤ Σ_{i=1}^{l} ‖A_i‖², the convergence result therefore follows from Theorem 1 and can be stated as the following corollary.

Corollary 1.
Let {x^k}_{k≥1} be a sequence generated by Algorithm 2. If the following control conditions hold:

Related Problems
The aim of this section is to show that Algorithm 1 and its convergence results can be employed in solving some well-known problems. Furthermore, we also present some notable applications related to Algorithm 1 and its convergence results. For simplicity, we assume here that S = I.

Convex Minimization with Least Square Constraint
Let us discuss the composite minimization problem over the set of all minimizers of the proximity function of a system of linear equations:

    minimize f(x) + h(Ax) subject to x ∈ X ∩ argmin_{u ∈ R^n} (1/2)‖Bu − b‖²,     (14)

where B is a nonzero p × n matrix and b is a p × 1 vector, and we denote the proximity function by g(u) := (1/2)‖Bu − b‖². Now, we define the Landweber operator L : R^n → R^n by

    L(x) := x − (1/‖B‖²) Bᵀ(Bx − b).

Since B is nonzero, we have ‖B‖² = λ_max(BᵀB) ≠ 0, which yields that the Landweber operator L is well defined. Furthermore, if argmin g ≠ ∅, then it holds that L is FNE with Fix L = argmin g ([8], Lemma 4.6.2, Theorem 4.6.3). In view of T := L, the problem (14) is a special case of Problem 1, so that the problem (14) can be solved by Algorithm 1. Furthermore, if the set {u ∈ R^n : Bu = b} ≠ ∅, the problem (14) is equivalent to minimizing f(x) + h(Ax) subject to x ∈ X ∩ {u ∈ R^n : Bu = b}.

Convex Minimization with Nonsmooth Functional Constraints
Let us discuss a convex minimization problem with nonsmooth functional constraints

    minimize f(x) + h(Ax) subject to x ∈ X ∩ {u ∈ R^n : g_i(u) ≤ 0, i = 1, . . . , l},     (15)

where g_i : R^n → R are convex functions, i = 1, . . . , l. Denote the sublevel set Ξ(g_i, 0) := {x ∈ R^n : g_i(x) ≤ 0} and assume that ∩_{i=1}^{l} Ξ(g_i, 0) ≠ ∅. Furthermore, define the subgradient projection related to g_i by

    P_i(x) := x − (g_i(x)/‖g_i'(x)‖²) g_i'(x) if g_i(x) > 0, and P_i(x) := x otherwise,

where g_i'(x) is a (fixed) subgradient of g_i at x ∈ R^n. It is well known that P_i is a cutter (a 1-SQNE operator) with Fix P_i = Ξ(g_i, 0). Note that, if ∩_{i=1}^{l} Ξ(g_i, 0) ≠ ∅, the cyclic subgradient projection operator P := P_l P_{l−1} · · · P_1, which is a composition of the SQNE operators P_i, is SQNE with Fix P = ∩_{i=1}^{l} Ξ(g_i, 0) (Theorem 2.1.50, [8]). In 2018, Cegielski and Nimana [26] proposed the extrapolated operator

    P_{λ,σ}(x) := x + λσ(x)(P(x) − x),

where λ ∈ (0, 2) and σ : R^n → R is a step size function defined in terms of the operators U_i := P_i P_{i−1} · · · P_1 for i = 1, 2, . . . , l and U_0 := I (see [26] for its precise form). Note that the operator P_{λ,σ} is SQNE with Fix P_{λ,σ} = Fix P ≠ ∅ ([26], Theorem 3.2). By means of T := P_{λ,σ} or T := P, the problem (15) is nothing else than Problem 1 and Algorithm 1 is applicable for the problem (15).
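A minimal Python sketch of the subgradient projection and its cyclic composition is given below (illustrative only; the constraints g_1, g_2 and their subgradients are hypothetical), following the piecewise form of P_i stated above.

    import numpy as np

    def subgrad_projection(g, g_prime, x):
        # Subgradient projection onto the sublevel set {u : g(u) <= 0}.
        val = g(x)
        if val <= 0.0:
            return x
        gp = g_prime(x)
        return x - (val / np.dot(gp, gp)) * gp

    # Two hypothetical convex constraints g_1(x) <= 0 and g_2(x) <= 0.
    g1 = lambda x: np.linalg.norm(x) ** 2 - 1.0          # the unit ball
    g1p = lambda x: 2.0 * x
    g2 = lambda x: x[0] - 0.5                            # the half-space x_1 <= 0.5
    g2p = lambda x: np.array([1.0, 0.0])

    def cyclic(x):
        # Cyclic subgradient projection P = P_2 P_1.
        x = subgrad_projection(g1, g1p, x)
        return subgrad_projection(g2, g2p, x)

    x = np.array([3.0, 3.0])
    for _ in range(50):
        x = cyclic(x)
    print(x, g1(x), g2(x))     # x approaches the intersection of the two sets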

Convex Minimization with Complex Constraints
Let B : R^n ⇒ R^n be a set-valued operator. We denote by zer B := {x ∈ R^n : 0 ∈ Bx} the set of all zeros of B and, for r > 0, by J_{rB} := (I + rB)^{−1} the resolvent of rB. It is well known that if B is maximally monotone and r > 0, then the resolvent of rB is a (single-valued) FNE operator with zer(B) = Fix J_{rB}; see ([24], Corollary 23.31, Proposition 23.38). Now, let us consider the minimal norm-like solution of the classical monotone inclusion problem:

    minimize f(x) + h(Ax) subject to x ∈ X ∩ zer(B),     (16)

where B : R^n ⇒ R^n is a maximally monotone operator such that zer(B) ≠ ∅. For a given r > 0, putting T := J_{rB}, the problem (16) is nothing else than Problem 1 and Algorithm 1 is also applicable for the problem (16).
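As a simple instance, if B is the (maximally monotone) linear operator x ↦ Mx induced by a symmetric positive semidefinite matrix M, the resolvent J_{rB}(x) = (I + rM)^{−1}x can be computed directly. The Python sketch below (illustrative only; the matrix M is hypothetical) checks that a zero of B is a fixed point of J_{rB}.

    import numpy as np

    M = np.array([[2.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0]])              # symmetric PSD, so x -> Mx is maximally monotone
    r = 0.7

    def resolvent(x):
        # J_{rB}(x) = (I + r M)^{-1} x for B = M (single-valued linear operator).
        return np.linalg.solve(np.eye(3) + r * M, x)

    x_zero = np.array([0.0, 0.0, 5.0])           # M @ x_zero = 0, i.e., x_zero in zer(B)
    print(np.allclose(resolvent(x_zero), x_zero))   # True: zer(B) = Fix J_{rB}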
In particular, let us consider the following minimal norm-like solution of a minimization problem:

    minimize f(x) + h(Ax) subject to x ∈ X ∩ argmin_{u ∈ R^n} ϕ(u),     (17)

where ϕ : R^n → (−∞, +∞] is a proper convex lower semicontinuous function. The problem (17) has been considered by many authors, for instance, [2,27–30] and the references therein.
Recall that, for a given r > 0 and a proper convex lower semicontinuous function ϕ : R^n → (−∞, +∞], we denote by prox_{rϕ}(x) the proximal point of parameter r of ϕ at x, which is the unique optimal solution of the optimization problem

    minimize_{u ∈ R^n} ϕ(u) + (1/(2r))‖u − x‖².

Note that prox_{rϕ} = J_{r∂ϕ}. Therefore, putting T := prox_{rϕ}, Algorithm 1 is also applicable for the problem (17).
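For instance, when ϕ = ‖·‖₁, the proximal point prox_{rϕ}(x) is the componentwise soft thresholding of x with threshold r. The short Python sketch below (illustrative only) computes it in one dimension and compares it with a brute-force minimization of ϕ(u) + (1/(2r))‖u − x‖² on a grid.

    import numpy as np

    def prox_l1(x, r):
        # prox_{r||.||_1}(x): componentwise soft thresholding with threshold r.
        return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

    x, r = 1.3, 0.5
    grid = np.linspace(-3.0, 3.0, 60001)
    objective = np.abs(grid) + (grid - x) ** 2 / (2.0 * r)
    print(prox_l1(np.array([x]), r)[0], grid[np.argmin(objective)])   # both approximately 0.8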

Numerical Experiments
In this section, to demonstrate the effectiveness of the fixed-point subgradient splitting method (Algorithm 1), we apply the proposed method to solve a fused lasso-like problem. All the experiments were performed in MATLAB 9.6 (R2019a) running on a MacBook Pro (13-inch, 2019) with a 2.4 GHz Intel Core i5 processor and 8 GB 2133 MHz LPDDR3 memory.
For a given design matrix A := [a_1 | · · · | a_r] ∈ R^{r×s}, where a_i = (a_{1i}, . . . , a_{si}) ∈ R^s, and a response vector b = (b_1, . . . , b_r) ∈ R^r, we consider the fused lasso-like problem of the form (18). By taking S := Id, the identity operator, and the constraint set X := [−1, 1]^s, we obtain that the problem (18) is a special case of the problem (14), so that it can be solved by Algorithm 1 (see Section 5.1 for more details). We generate the matrix A with normally distributed random entries chosen in (−10, 10) with a given percentage p_A of nonzero elements. We generate the vector b = (b_1, . . . , b_r) ∈ R^r corresponding to A by the linear model b = Ax_0 + ε, where ε ∼ N(0, ‖Ax_0‖²) and the vector x_0 has 10% nonzero components generated from a normal distribution. The initial point is a vector whose coordinates are chosen randomly in (−1, 1). In the numerical experiment, we consider the behavior of the average of the relative changes of the iterates with the optimality tolerance 10^{−3}. We performed 10 independent tests for each collection of dimensions (r, s) and percentages of nonzero elements of A for various step-size parameters α_k. The results are shown in Table 1, where the average number of iterations (#Iters) and the average CPU time (Time) needed to reach the optimality tolerance for each collection of parameters are presented. We see that the method with the parameter α_k = 0.1/k behaves significantly better than the others in terms of both the average number of iterations and the average CPU time for all dimensions and percentages of nonzero elements of A. Moreover, in the case p_A = 10%, we observed a much larger average number of iterations and CPU time for the choices α_k = 0.3/k, 0.5/k, 0.7/k, and 0.9/k for all problem sizes.
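A rough Python sketch of the data generation described above is given below (the experiments themselves were run in MATLAB; the exact clipping of the normal entries to (−10, 10), the sparsity mechanism, and the noise scaling are assumptions drawn from the description).

    import numpy as np

    def generate_data(r, s, p_A=0.3, seed=0):
        rng = np.random.default_rng(seed)
        # Design matrix with normally distributed entries, clipped to (-10, 10),
        # and a given percentage p_A of nonzero elements.
        A = np.clip(3.0 * rng.standard_normal((r, s)), -10.0, 10.0)
        A *= rng.random((r, s)) < p_A
        # True signal with 10% nonzero normally distributed components.
        x0 = rng.standard_normal(s) * (rng.random(s) < 0.1)
        # Linear model b = A x0 + eps with eps ~ N(0, ||A x0||^2).
        eps = np.linalg.norm(A @ x0) * rng.standard_normal(r)
        return A, x0, A @ x0 + eps

    A, x0, b = generate_data(100, 200)
    print(A.shape, np.count_nonzero(x0), b.shape)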

Conclusions
In this paper, we introduced a simple fixed-point subgradient splitting method, whose main feature is the combination of the subgradient method with the hybrid steepest descent method relating to a nonlinear operator. We performed the convergence analysis of the method by proving the convergence of the function values to the minimum value. The result is obtained under some suitable assumptions on the step sizes. We discussed in detail the applications of the proposed scheme to some remarkable problems in the literature. Numerical experiments on a fused lasso-like problem show evidence of the performance of our method.
Funding: This work was financially supported by the young researcher development project of Khon Kaen University.