Article

Generalized Forward–Backward Methods and Splitting Operators for a Sum of Maximal Monotone Operators

Hongying Xiao, Zhaofeng Li, Yuanyuan Zhang and Xiaoyou Liu
1 Faculty of Science, Yibin University, Yibin 644000, China
2 Department of Mathematics, China Three Gorges University, Yichang 443002, China
3 School of Mathematics and Computing Sciences, Hunan University of Science and Technology, Xiangtan 411201, China
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(7), 880; https://doi.org/10.3390/sym16070880
Submission received: 16 May 2024 / Revised: 24 June 2024 / Accepted: 26 June 2024 / Published: 11 July 2024

Abstract

Suppose that each of $A_1,\ldots,A_n$ is a maximal monotone operator and that $\beta B$ is firmly nonexpansive for some $\beta>0$. This paper has two purposes: the first is to find the zeros of $\sum_{j=1}^{n}A_j+B$, and the second is to find the zeros of $\sum_{j=1}^{n}A_j$. To address the first problem, we derive fixed-point equations on the original Hilbert space as well as on the product space, and we show that these equations involve crucial operators, which we call generalized forward–backward splitting operators. To tackle the second problem, we point out that it can be reduced to a special instance of $n=2$ by defining new operators on the product space. Iterative schemes are given which produce convergent sequences, and these sequences ultimately lead to solutions of the two problems.

1. Introduction

In the field of image processing and restoration, the simultaneous optimization of multiple objectives is often necessary to achieve the desired result. For instance, when restoring historical paintings, preserving the original brushstrokes while removing degradation is crucial. To tackle such complex optimization problems, researchers have proposed various algorithms and techniques.
In what follows, we abstract the above issues into the following two types of problems and proceed to solve them. Let $\gamma$ and $\beta$ be two positive numbers, and let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and associated norm $\|\cdot\|$. We consider two classes of minimization problems,
$$\min_{x\in H}\ \sum_{i=1}^{n}f_i(x)+g(x) \qquad (1)$$
and
$$\min_{x\in H}\ \sum_{i=1}^{n}f_i(x), \qquad (2)$$
where each $f_i$ is a proper, lower semicontinuous, convex function on $H$ and $g$ is differentiable with a $1/\beta$-Lipschitz gradient. As described in [1], the discussion can be conducted in a general setting where problems (1) and (2) are special instances of the following two problems, respectively:
$$\text{find } a \text{ such that } a\in\operatorname{zer}\Big(\sum_{i=1}^{n}A_i+B\Big), \qquad (3)$$
$$\text{find } a \text{ such that } a\in\operatorname{zer}\Big(\sum_{i=1}^{n}A_i\Big), \qquad (4)$$
where each $A_i$ is maximal monotone and $\beta B$ is firmly nonexpansive. The main purpose of this paper is to present algorithms that solve these inclusions and to study the convergence of the proposed methods.
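To make the reduction from (1) to (3) concrete, the following minimal Python sketch (our own illustration, not part of the paper) uses the standard identification $A_i=\partial f_i$ and $B=\nabla g$ on a one-dimensional toy instance: the resolvent of $\gamma\,\partial|\cdot|$ is soft-thresholding, and the minimizer $x=1$ of $|x|+\tfrac12(x-2)^2$ satisfies the resolvent characterization of a zero of $A+B$. The function names and the choices $f(x)=|x|$, $g(x)=\tfrac12(x-2)^2$ are illustrative assumptions.

```python
import numpy as np

# Resolvent J_{gamma A} of A = subdifferential of |.|: the proximity
# operator of gamma*|.|, i.e. soft-thresholding.
def resolvent_abs(x, gamma):
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

# B = grad g for the smooth term g(x) = 0.5*(x - 2)^2; grad g is
# 1-Lipschitz, so beta*B is firmly nonexpansive with beta = 1.
def grad_g(x):
    return x - 2.0

# The minimizer of |x| + 0.5*(x - 2)^2 is x = 1; it is a zero of A + B and
# therefore satisfies x = J_{gamma A}(x - gamma * B x) for every gamma > 0.
x_star, gamma = 1.0, 0.7
print(resolvent_abs(x_star - gamma * grad_g(x_star), gamma))  # prints 1.0
```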
Finding the zeros of an operator is a crucial task in operator theory [1,2]. A number of ideas have been proposed for the case in which the operator is decomposed into a sum of two operators. For instance, Ref. [3] developed the forward–backward method for the case where one of the operators is maximal monotone and the other is firmly nonexpansive. We remark that this yields a solution of (3) in the case $n=1$. On the other hand, for the case where both operators are maximal monotone, the authors of [4] presented the Douglas–Rachford method, while the author of [5] developed the corresponding splitting operator. Likewise, we remark that this is exactly the special case of problem (4) with $n=2$. Details of these two algorithms can be found in Section 2.3.
It is rather difficult to extend the forward–backward method to address problem (3) for a general $n$. Nevertheless, the authors of [6] obtained such an algorithm; it turns out to be covered by the four generalized forward–backward methods given in this paper.
As for problem (4), several extensions of the Douglas–Rachford method have emerged. Most of them rely on reducing the original problem to the simple case of $n=2$ by introducing auxiliary variables. In this paper, we point out that this problem can be solved by applying the generalized forward–backward methods described above with $B=0$. Additionally, we show that this approach can also be interpreted as a reduction of the original problem to the case $n=2$. A benefit of this observation is that we can design splitting operators which allow us to solve (4) by the proximal point algorithm. The work of [7] is also covered by the present framework. For recent developments in this field, please refer to references [8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24].
This paper is divided into five sections. The notion of the product space as well as some of the operators on it are introduced in Section 2. To prepare for the general problem, we also review in that section some classical methods for finding the zeros of a sum of two operators. Section 3 is devoted to solving problem (3) by deriving fixed-point equations; in this process, we obtain the so-called generalized forward–backward splitting methods. In Section 4, we solve problem (4) from two different perspectives. Firstly, we view it as an instance of (3), so we obtain fixed-point equations as a consequence of the generalized forward–backward methods. Secondly, we point out that, by introducing new operators on the product space, this problem can be reduced to the case of $n=2$, which has been solved in Section 2. Based on this, we can obtain fixed-point equations in another way and generate splitting operators with good properties. In Section 5, we provide iterative schemes which produce convergent sequences, and these sequences eventually lead to solutions of either problem (3) or (4).

2. Foundations

In this section, we introduce some notation needed to formulate problems (3) and (4). We also review existing work that can be used to address these two problems for special values of $n$.

2.1. Notations and Definitions

We begin by introducing some notations and properties of operators on H .
Definition 1 
([1]). Let $A$ be a set-valued operator $A:H\to 2^{H}$, whose graph is defined as the set $\mathrm{gra}(A)=\{(x,y)\in H\times H : y\in Ax\}$. No distinction is made between an operator and its graph, that is, $(x,y)\in A$ means $y\in Ax$. Its inverse $A^{-1}$ is defined by $(x,y)\in A^{-1}\Leftrightarrow(y,x)\in A$. The set of zeros of $A$ is defined as
$$\operatorname{zer}(A)=A^{-1}(0)=\{x\in H : (x,0)\in A\}=\{x\in H : 0\in Ax\}.$$
The domain and range of $A$ are, respectively,
$$\mathrm{dom}(A)=\{x\in H : Ax\neq\emptyset\},\qquad \mathrm{im}(A)=\{y : \exists\,x\in H,\ (x,y)\in A\}.$$
Definition 2 
([1]). An operator $A:H\to 2^{H}$ is single-valued if, for every $x\in H$, the cardinality of $Ax$ is at most 1.
Definition 3 
([1]). If $A,B:H\to 2^{H}$ are two set-valued operators on $H$, the resolvent $J_A$ and the reflection operator $R_A$ are defined, respectively, as
$$J_A=(\mathrm{Id}+A)^{-1},\qquad R_A=2J_A-\mathrm{Id}.$$
Throughout this paper, let $\mathrm{Id}$ denote the identity operator on the appropriate Hilbert space.
Definition 4 
([1]). An operator $A:H\to 2^{H}$ is monotone if $\langle\tilde y-y,\ \tilde x-x\rangle\geq 0$ for all $(x,y),(\tilde x,\tilde y)\in A$. Moreover, $A$ is maximal monotone if its graph is not properly contained in the graph of any other monotone operator.
Definition 5 
([1]). An operator $A:H\to 2^{H}$ is nonexpansive if
$$\|\tilde y-y\|\leq\|\tilde x-x\|\qquad \forall\,(x,y),(\tilde x,\tilde y)\in A.$$
Let $\alpha\in(0,1)$. An operator $A$ is $\alpha$-averaged nonexpansive if there exists a nonexpansive operator $R$ such that $A=(1-\alpha)\mathrm{Id}+\alpha R$. In this case, we use the notation $A\in\mathcal{A}(\alpha)$.
Definition 6 
([1]). An operator $A:H\to 2^{H}$ is firmly nonexpansive if
$$\|\tilde y-y\|^{2}\leq\langle\tilde y-y,\ \tilde x-x\rangle\qquad \forall\,(x,y),(\tilde x,\tilde y)\in A.$$
Furthermore, $A$ is firmly nonexpansive if and only if $A\in\mathcal{A}\big(\tfrac{1}{2}\big)$.
Lemma 1 
([1]). Let $A:H\to 2^{H}$ be a monotone operator. Then $A$ is maximal monotone if and only if $\mathrm{im}(\mathrm{Id}+A)=H$.
Lemma 2 
([1]). Let $A:H\to 2^{H}$ be such that $\mathrm{dom}(A)\neq\emptyset$, set $D=\mathrm{im}(\mathrm{Id}+A)$, and $T=J_A|_{D}$. Then, $A$ is maximal monotone if and only if $T$ is firmly nonexpansive and $D=H$.

2.2. Product Space

Let $\omega_j$ be positive constants such that $\sum_{j=1}^{n}\omega_j=1$, and define the product space $H^n$ endowed with the scalar product $\langle x,y\rangle:=\sum_{i=1}^{n}\omega_i\langle x_i,y_i\rangle$ for $x=(x_i)_{i=1}^{n},\,y=(y_i)_{i=1}^{n}\in H^n$, together with the corresponding norm. Below are some elementary operators on the product space $H^n$. Let $K$ be the following subspace of $H^n$, and let $K^{\perp}$ and $N_K$ denote its orthogonal complement and its normal cone operator, respectively:
$$K=\{(x_1,\ldots,x_n) : x_i\in H,\ x_1=\cdots=x_n\},\qquad K^{\perp}=\Big\{(x_1,\ldots,x_n) : x_i\in H,\ \sum_{i=1}^{n}\omega_i x_i=0\Big\},\qquad N_K(x)=\begin{cases}K^{\perp}, & x\in K;\\ \emptyset, & \text{otherwise.}\end{cases}$$
Moreover, let $A_i$ and $B$ be the operators given in (3), from which two operators $\Gamma_A$ and $\boldsymbol{B}$ on $H^n$ are defined as follows: for $x=(x_i)_{i=1}^{n}$, $y=(y_i)_{i=1}^{n}$,
$$(x,y)\in\Gamma_A \iff (x_i,y_i)\in\frac{\gamma}{\omega_i}A_i,\qquad y=\boldsymbol{B}x \iff y_i=Bx_i.$$
It is easy to establish that $\Gamma_A$ is maximal monotone and that $\beta\boldsymbol{B}\in\mathcal{A}\big(\tfrac{1}{2}\big)$; in particular, $\boldsymbol{B}$ is single-valued.
Lemma 3 
([1]). Define the canonical isometry $C:H\to K$ as $C(x):=(x,\ldots,x)$. Then, for any $z\in H^n$, the following statements hold:
(i) 
$J_{N_K}z=C\big(\sum_{i=1}^{n}\omega_i z_i\big)$, $R_{N_K}z=2\,C\big(\sum_{i=1}^{n}\omega_i z_i\big)-z$;
(ii) 
Let $u\in K$; then $R_{N_K}u=u$;
(iii) 
$R_{\Gamma_A}z=\big(R_{\frac{\gamma}{\omega_i}A_i}(z_i)\big)_{i=1}^{n}$.
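For readers who wish to experiment numerically, the following Python sketch (our own illustration, not part of the paper) encodes these product-space building blocks, storing a point of $H^n$ as an $n\times d$ array and assuming that each resolvent $J_{\frac{\gamma}{\omega_i}A_i}$ is supplied as a callable.

```python
import numpy as np

# A point z of the product space H^n is an (n, d) array: row i is z_i in H = R^d.
# omega is a vector of positive weights summing to one.

def J_NK(z, omega):
    """Resolvent of N_K: projection onto the diagonal K, i.e. C(sum_i omega_i z_i)."""
    bar = (omega[:, None] * z).sum(axis=0)
    return np.tile(bar, (z.shape[0], 1))

def R_NK(z, omega):
    """Reflection R_{N_K} = 2 J_{N_K} - Id."""
    return 2.0 * J_NK(z, omega) - z

def R_GammaA(z, resolvents):
    """Reflection R_{Gamma_A}, applied row by row via R = 2 J - Id (Lemma 3 (iii));
    resolvents[i] is assumed to implement J_{(gamma/omega_i) A_i}."""
    return np.stack([2.0 * J(zi) - zi for J, zi in zip(resolvents, z)])
```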
Lemma 4. 
Let $u_i,v_i$, $i=1,\ldots,n$, be elements of $H$. Then
$$u_i=2\sum_{j=1}^{n}\omega_j v_j - v_i,\ 1\le i\le n \iff v_i=2\sum_{j=1}^{n}\omega_j u_j - u_i,\ 1\le i\le n.$$
Proof. 
This relation can be verified by applying a weighted sum on both sides of the equations. □
Below is an equivalent form of the last lemma in the product space.
Lemma 5. 
Let $u,v\in H^n$; then $u=R_{N_K}v$ if and only if $v=R_{N_K}u$.

2.3. Zeros of a Sum of Two Operators

In this subsection, we will review some classical methods to find the zeros of a sum of two operators, i.e.,
$$\text{find } x\in H \text{ such that } 0\in Ax+Bx. \qquad (7)$$
When A is maximal monotone and B is firmly nonexpansive, we find that the inclusion (7) is equivalent to the next fixed-point equation. The corresponding operator T is called the forward–backward splitting operator.
Lemma 6. 
Let A be maximal monotone and B be firmly nonexpansive. Define
$$T:=J_{\gamma A}\circ(\mathrm{Id}-\gamma B); \qquad (8)$$
then
$$x\in\operatorname{zer}(A+B) \iff x=Tx.$$
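As a quick illustration (a minimal sketch under the same toy assumptions as in the Introduction: $A=\partial|\cdot|$, $B=\nabla g$ with $g(x)=\tfrac12(x-2)^2$, function names ours), Picard iteration of the forward–backward operator $T$ in (8) converges to the unique zero $x=1$.

```python
import numpy as np

def soft(x, t):                                # J_{gamma A} for A = subdifferential of |.|
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

gamma = 0.5
B = lambda x: x - 2.0                          # gradient of g(x) = 0.5*(x - 2)^2 (beta = 1)
T = lambda x: soft(x - gamma * B(x), gamma)    # forward-backward operator (8)

x = 10.0
for _ in range(100):                           # T is averaged, so Picard iteration converges
    x = T(x)
print(x)                                       # -> 1.0, the unique zero of A + B
```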
When A and B are both maximal monotone, we recall two fixed-point equations that solve problem (7).
Lemma 7. 
Let A and B be maximal monotone. Then $x\in\operatorname{zer}(A+B)$ if and only if $x=J_Bz$ and $z$ is a fixed point of the following firmly nonexpansive operator: $G_{A,B}=\frac{\mathrm{Id}+R_AR_B}{2}$.
Lemma 8. 
Let A and B be maximal monotone. Then $x\in\operatorname{zer}(A+B)$ if and only if $x=J_{\gamma B}z$ and $z$ is a fixed point of the following firmly nonexpansive operator $G_{\gamma,A,B}$:
$$G_{\gamma,A,B}:=J_{\gamma A}\circ(2J_{\gamma B}-\mathrm{Id})+(\mathrm{Id}-J_{\gamma B}).$$
The operator $G_{\gamma,A,B}$ is called the Douglas–Rachford splitting operator, and it was presented by Lions and Mercier [4]. It is well known that $G_{\gamma,A,B}$ is firmly nonexpansive; thus, Lemma 2 says that there exists a maximal monotone operator $S_{\gamma,A,B}$ such that $G_{\gamma,A,B}=J_{S_{\gamma,A,B}}$. In [5], Eckstein constructed this splitting operator explicitly as follows:
$$S_{\gamma,A,B}=\Big\{(v+\gamma b,\ u-v)\ \Big|\ (u,b)\in B,\ (v,a)\in A,\ v+\gamma a=u-\gamma b\Big\}. \qquad (11)$$
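To connect this with computation, here is a minimal, self-contained sketch (our own toy example, not from the paper) of the Douglas–Rachford operator $G_{\gamma,A,B}$ for $A=N_{[0,1]}$ (normal cone of an interval, resolvent = projection) and $B=\partial|\cdot-3|$ (resolvent = a shifted soft-threshold); the unique zero of $A+B$ is $x=1$, and it is recovered as $J_{\gamma B}$ of the fixed point.

```python
import numpy as np

gamma = 1.0

def J_gA(z):                                   # resolvent of gamma*N_C with C = [0, 1]
    return np.clip(z, 0.0, 1.0)                # (projection onto C)

def J_gB(z):                                   # resolvent of gamma*d|.-3|: shifted soft-threshold
    return 3.0 + np.sign(z - 3.0) * np.maximum(np.abs(z - 3.0) - gamma, 0.0)

def G(z):                                      # Douglas-Rachford operator G_{gamma,A,B}
    return J_gA(2.0 * J_gB(z) - z) + (z - J_gB(z))

z = 10.0
for _ in range(200):                           # G is firmly nonexpansive; Picard iteration converges
    z = G(z)
print(J_gB(z))                                 # -> 1.0, the unique zero of N_C + d|.-3|
```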

3. Generalized Forward–Backward Methods

In this section, we consider problem (3). Recall that Lemma 6 deals with this problem for $n=1$ by means of a fixed-point equation. We shall now develop several fixed-point equations and algorithms for general $n$.

3.1. Fixed-Point Equations on H

Proposition 1. 
Let $a\in H$. Then $a\in\operatorname{zer}\big(\sum_{i=1}^{n}A_i+B\big)$ if and only if there exist $v_j\in H$, $1\le j\le n$, such that
$$a=\gamma Ba+\sum_{j=1}^{n}\omega_j v_j,\qquad a=J_{\frac{\gamma}{\omega_i}A_i}(v_i),\ 1\le i\le n. \qquad (12)$$
Proof. 
By definition, we have $a\in\operatorname{zer}\big(\sum_{i=1}^{n}A_i+B\big) \iff 0\in\gamma Ba+\sum_{j=1}^{n}\gamma A_j a \iff \exists\,\theta_j\in\frac{\gamma}{\omega_j}A_j a$ with $0=\gamma Ba+\sum_{j=1}^{n}\omega_j\theta_j$. In addition, $\theta_j\in\frac{\gamma}{\omega_j}A_j a$ if and only if $a=J_{\frac{\gamma}{\omega_j}A_j}(a+\theta_j)$. Thus, we obtain the equivalent condition (12) by setting $v_j:=a+\theta_j$. In this and later discussions, note that $B$ is single-valued, which follows from $\beta B\in\mathcal{A}(\tfrac{1}{2})$. □
As a direct consequence of this proposition, we have the following result.
Proposition 2. 
Let $a\in H$. Then $a\in\operatorname{zer}\big(\sum_{i=1}^{n}A_i+B\big)$ if and only if there exist $z_j\in H$, $1\le j\le n$, such that
$$a=\sum_{j=1}^{n}\omega_j z_j,\qquad a=J_{\frac{\gamma}{\omega_i}A_i}(2a-\gamma Ba-z_i),\ 1\le i\le n. \qquad (13)$$
Proof. 
Let $z_i:=2a-\gamma Ba-v_i$ in Equation (12). □
Proposition 3. 
Let $a\in H$. Then $a\in\operatorname{zer}\big(\sum_{i=1}^{n}A_i+B\big)$ if and only if there exist $z_j\in H$, $1\le j\le n$, such that
$$a=\gamma Ba+\sum_{j=1}^{n}\omega_j z_j,\qquad a=J_{\frac{\gamma}{\omega_i}A_i}\Big(2\sum_{j=1}^{n}\omega_j z_j-z_i\Big),\ 1\le i\le n. \qquad (14)$$
Proof. 
Let $z_i:=2\sum_{j=1}^{n}\omega_j v_j-v_i$ in Equation (12). □
Proposition 4. 
Let $a\in H$. Then $a\in\operatorname{zer}\big(\sum_{i=1}^{n}A_i+B\big)$ if and only if there exist $z_j\in H$, $1\le j\le n$, such that
$$a=\sum_{j=1}^{n}\omega_j z_j,\qquad a=J_{\frac{\gamma}{\omega_i}A_i}(z_i-\gamma Ba),\ 1\le i\le n. \qquad (15)$$
Proof. 
Let $z_i:=v_i+\gamma Ba$ in Equation (12). □

3.2. Fixed-Point Equations on the Product Space H n

In this subsection, the fixed-point Equations (12)–(15) will be further studied. Firstly, we define four operators on the product space $H^n$.
Definition 7. 
Let $z:=(z_i)_{i=1}^{n}$. Define four operators $T_1,T_2,T_3,T_4:H^n\to H^n$ as follows:
$$(T_1z)_i := z_i + J_{\frac{\gamma}{\omega_i}A_i}(2x-\gamma Bx-z_i) - x, \qquad (16)$$
$$(T_2z)_i := -x + z_i + J_{\frac{\gamma}{\omega_i}A_i}(2x-z_i) - \gamma B\Big(\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(2x-z_j)\Big), \qquad (17)$$
$$(T_3z)_i := z_i - x + 2\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(z_j) - J_{\frac{\gamma}{\omega_i}A_i}(z_i) - \gamma B\Big(\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(z_j)\Big), \qquad (18)$$
$$(T_4z)_i := z_i - x + 2\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(z_j-\gamma Bx) - J_{\frac{\gamma}{\omega_i}A_i}(z_i-\gamma Bx), \qquad (19)$$
where $x:=\sum_{j=1}^{n}\omega_jz_j$ and $1\le i\le n$.
Analogous to the forward–backward operator T in (8), these four operators are called generalized forward–backward splitting operators. We propose below the new fixed-point equations in the product space.
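The following minimal Python sketch (our own toy instance, not from the paper) implements $T_1$ from (16) for $n=2$, $f_1=|\cdot|$, $f_2=|\cdot-4|$ and $g(x)=\tfrac12(x-1)^2$, so that $A_i=\partial f_i$, $B=\nabla g$ and the unique zero of $A_1+A_2+B$ is $x=1$; by Proposition 5 (i) below, the weighted average of a fixed point of $T_1$ recovers this zero.

```python
import numpy as np

# Toy instance of (3): A_1 = d|.|, A_2 = d|.-4|, B = grad g with g(x) = 0.5*(x-1)^2.
# The unique zero of A_1 + A_2 + B is x = 1.
omega = np.array([0.5, 0.5])
gamma = 1.0                                      # beta = 1 > gamma/2, as required later

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

resolvents = [lambda w: soft(w, gamma / omega[0]),              # J_{(gamma/omega_1) A_1}
              lambda w: 4.0 + soft(w - 4.0, gamma / omega[1])]  # J_{(gamma/omega_2) A_2}

grad_g = lambda x: x - 1.0                       # B = grad g

def T1(z):                                       # (T_1 z)_i = z_i + J_i(2x - gamma*Bx - z_i) - x
    x = float(omega @ z)
    w = 2.0 * x - gamma * grad_g(x)
    return np.array([zi + J(w - zi) - x for J, zi in zip(resolvents, z)])

z = np.array([5.0, -5.0])
for _ in range(500):                             # T_1 is averaged (Proposition 7); Picard converges
    z = T1(z)
print(float(omega @ z))                          # -> 1.0, a zero of A_1 + A_2 + B
```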
Proposition 5. 
Let $a\in H$. Then $a\in\operatorname{zer}\big(\sum_{i=1}^{n}A_i+B\big)$ if and only if there exists some $z=(z_i)_{i=1}^{n}\in H^n$ such that one of the following systems of equations holds:
(i) 
$$a=\sum_{j=1}^{n}\omega_j z_j,\qquad T_1z=z. \qquad (20)$$
(ii) 
$$a=J_{\frac{\gamma}{\omega_i}A_i}\Big(2\sum_{j=1}^{n}\omega_j z_j-z_i\Big),\ 1\le i\le n,\qquad T_2z=z. \qquad (21)$$
(iii) 
$$a=J_{\frac{\gamma}{\omega_i}A_i}(z_i),\ 1\le i\le n,\qquad T_3z=z. \qquad (22)$$
(iv) 
$$a=\sum_{j=1}^{n}\omega_j z_j,\qquad T_4z=z. \qquad (23)$$
Proof. 
(i) In the proofs of all four parts, we only prove sufficiency, since necessity is readily obtained. Assume $T_1z=z$; then
$$x=J_{\frac{\gamma}{\omega_i}A_i}(2x-z_i-\gamma Bx),\ 1\le i\le n,\qquad x=\sum_{j=1}^{n}\omega_j z_j.$$
Combining this with the first identity of (20), we have $a=x$, so that relation (13) holds.
(ii)
Assume that $T_2z=z$; then
$$x=J_{\frac{\gamma}{\omega_i}A_i}(2x-z_i)-\gamma B\Big(\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(2x-z_j)\Big),\ 1\le i\le n,\qquad x=\sum_{j=1}^{n}\omega_j z_j. \qquad (25)$$
Let $a_i:=J_{\frac{\gamma}{\omega_i}A_i}(2x-z_i)$; then the first identity of Equation (25) can be rewritten as
$$x=a_i-\gamma B\Big(\sum_{j=1}^{n}\omega_j a_j\Big).$$
This, combined with (25), implies that there exists $b\in H$ such that
$$b=a_i=J_{\frac{\gamma}{\omega_i}A_i}(2x-z_i),\ 1\le i\le n,\qquad x=b-\gamma Bb.$$
Comparing with the first identity of relation (21), we conclude that $a=b$, and thus we obtain Equation (14).
(iii)
Assume that $T_3z=z$; then
$$x=2\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(z_j)-J_{\frac{\gamma}{\omega_i}A_i}(z_i)-\gamma B\Big(\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(z_j)\Big),\ 1\le i\le n,\qquad x=\sum_{j=1}^{n}\omega_j z_j.$$
Let $a_i:=J_{\frac{\gamma}{\omega_i}A_i}(z_i)$; then we have
$$x=2\sum_{j=1}^{n}\omega_j a_j-a_i-\gamma B\Big(\sum_{j=1}^{n}\omega_j a_j\Big).$$
Thus, there exists $b\in H$ such that $b=a_i=J_{\frac{\gamma}{\omega_i}A_i}(z_i)$, $i=1,\ldots,n$, and $x=2b-b-\gamma Bb=b-\gamma Bb$. This, combined with the first identity of (22), implies that $a=b$. Renaming $z_i$ as $v_i$, we obtain relation (12).
(iv)
Assume that $T_4z=z$; then we have
$$x=2\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(z_j-\gamma Bx)-J_{\frac{\gamma}{\omega_i}A_i}(z_i-\gamma Bx),\ 1\le i\le n,\qquad x=\sum_{j=1}^{n}\omega_j z_j.$$
It is readily deduced from the first identity that $x=J_{\frac{\gamma}{\omega_i}A_i}(z_i-\gamma Bx)$, $1\le i\le n$. Moreover, by the first identity of relation (23), we have $a=x$. Thus, we obtain relation (15). □

3.3. Characterization of Forward–Backward Splitting Operators

Proposition 6. 
Let $T_1,T_2,T_3,T_4$ be defined as in Definition 7; then we have
$$T_1=\frac{\mathrm{Id}+R_{\Gamma_A}R_{N_K}}{2}\circ\big(\mathrm{Id}-\gamma\boldsymbol{B}\circ J_{N_K}\big), \qquad (30)$$
$$T_2=\big(\mathrm{Id}-\gamma\boldsymbol{B}\circ J_{N_K}\big)\circ\frac{\mathrm{Id}+R_{\Gamma_A}R_{N_K}}{2}, \qquad (31)$$
$$T_3=\big(\mathrm{Id}-\gamma\boldsymbol{B}\circ J_{N_K}\big)\circ\frac{\mathrm{Id}+R_{N_K}R_{\Gamma_A}}{2}, \qquad (32)$$
$$T_4=\frac{\mathrm{Id}+R_{N_K}R_{\Gamma_A}}{2}\circ\big(\mathrm{Id}-\gamma\boldsymbol{B}\circ J_{N_K}\big). \qquad (33)$$
Proof. 
(i) Let $u:=(u_i)_{i=1}^{n}$ with $u_i=z_i-\gamma Bx$; then we can check that
$$u=z-\gamma\boldsymbol{B}J_{N_K}z,\qquad 2x-z_i-\gamma Bx=2(x-\gamma Bx)-(z_i-\gamma Bx)=2\sum_{j=1}^{n}\omega_j u_j-u_i. \qquad (34)$$
Moreover, definition (16) can be written equivalently as
$$(T_1z)_i=\frac{R_{\frac{\gamma}{\omega_i}A_i}(2x-z_i-\gamma Bx)+z_i-\gamma Bx}{2}.$$
These results lead to
$$(T_1z)_i=\frac{R_{\frac{\gamma}{\omega_i}A_i}\Big(2\sum_{j=1}^{n}\omega_j u_j-u_i\Big)+u_i}{2},$$
or equivalently,
$$T_1z=\frac{R_{\Gamma_A}R_{N_K}u+u}{2}.$$
By noting (34), we obtain relation (30).
(ii)
Let $u:=(u_i)_{i=1}^{n}$ with $u_i=\frac{z_i+R_{\frac{\gamma}{\omega_i}A_i}(2x-z_i)}{2}$; then we can check that
$$-x+z_i+J_{\frac{\gamma}{\omega_i}A_i}(2x-z_i)=u_i,\qquad \sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(2x-z_j)=\sum_{j=1}^{n}\omega_j u_j.$$
Thus, we can deduce from (17) that $(T_2z)_i=u_i-\gamma B\big(\sum_{j=1}^{n}\omega_j u_j\big)$, i.e.,
$$T_2z=\big(\mathrm{Id}-\gamma\boldsymbol{B}\circ J_{N_K}\big)u.$$
Combining this with the fact that $u=\frac{z+R_{\Gamma_A}R_{N_K}z}{2}$, we can verify equality (31).
(iii)
Let $u:=(u_i)_{i=1}^{n}$ with $u_i=z_i-x+2\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(z_j)-J_{\frac{\gamma}{\omega_i}A_i}(z_i)$; then we can check that $\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(z_j)=\sum_{j=1}^{n}\omega_j u_j$. Furthermore, we have
$$u_i=\frac{z_i+2\sum_{j=1}^{n}\omega_jR_{\frac{\gamma}{\omega_j}A_j}(z_j)-R_{\frac{\gamma}{\omega_i}A_i}(z_i)}{2},$$
or equivalently,
$$u=\frac{z+R_{N_K}R_{\Gamma_A}z}{2}.$$
So, it can be deduced from definition (18) that
$$(T_3z)_i=u_i-\gamma B\Big(\sum_{j=1}^{n}\omega_j u_j\Big),$$
and thus,
$$T_3z=\big(\mathrm{Id}-\gamma\boldsymbol{B}\circ J_{N_K}\big)u=\big(\mathrm{Id}-\gamma\boldsymbol{B}\circ J_{N_K}\big)\circ\frac{\mathrm{Id}+R_{N_K}R_{\Gamma_A}}{2}\,z.$$
(iv)
It is readily deduced from (19) that
$$(T_4z)_i=\frac{z_i-\gamma Bx+2\sum_{j=1}^{n}\omega_jR_{\frac{\gamma}{\omega_j}A_j}(z_j-\gamma Bx)-R_{\frac{\gamma}{\omega_i}A_i}(z_i-\gamma Bx)}{2}.$$
Let $u:=(u_i)_{i=1}^{n}$ with $u_i=z_i-\gamma Bx$; then we can check that $u=z-\gamma\boldsymbol{B}J_{N_K}z$ and
$$(T_4z)_i=\frac{u_i+2\sum_{j=1}^{n}\omega_jR_{\frac{\gamma}{\omega_j}A_j}(u_j)-R_{\frac{\gamma}{\omega_i}A_i}(u_i)}{2}.$$
In other words, we obtain
$$T_4z=\frac{u+R_{N_K}R_{\Gamma_A}u}{2}=\frac{\mathrm{Id}+R_{N_K}R_{\Gamma_A}}{2}\circ\big(\mathrm{Id}-\gamma\boldsymbol{B}\circ J_{N_K}\big)\,z,$$
which gives relation (33). □
Lemma 9 
([1]). For any $\gamma>0$ and $\beta>\frac{\gamma}{2}$, we have
$$\frac{\mathrm{Id}+R_{\Gamma_A}R_{N_K}}{2},\ \frac{\mathrm{Id}+R_{N_K}R_{\Gamma_A}}{2}\in\mathcal{A}\big(\tfrac{1}{2}\big),\qquad \mathrm{Id}-\gamma\boldsymbol{B}\circ J_{N_K}\in\mathcal{A}\Big(\frac{\gamma}{2\beta}\Big).$$
Proposition 7. 
Let $T_1,T_2,T_3,T_4$ be defined as in Definition 7. For any $\gamma>0$ and $\beta>\frac{\gamma}{2}$, define $\alpha:=\max\big(\frac{2}{3},\ \frac{2}{1+2\beta/\gamma}\big)$; then each $T_i$ is $\alpha$-averaged nonexpansive.
Proof. 
It follows from [6], Proposition 4.6. □
Note that a fixed-point equation for problem (3) was also proposed in [6]. We remark that the associated operator in that context is nothing but $T_1$ in our work; thus, we have covered the method given in [6].

4. Zeros of a Sum of n Maximal Monotone Operators

The main focus of this section is to present fixed-point equations as well as algorithms to solve problem (4).

4.1. An Application of the Generalized Forward–Backward Methods

Observe that problem (4) can be identified as a special instance of problem (3) with $B=0$, so we can apply the methods given in Proposition 5 to obtain a solution. In this case, the operators $T_1,T_2$ both simplify to $P$, and $T_3,T_4$ both simplify to $Q$, where
$$P=\frac{\mathrm{Id}+R_{\Gamma_A}R_{N_K}}{2},\qquad Q=\frac{\mathrm{Id}+R_{N_K}R_{\Gamma_A}}{2}. \qquad (45)$$
The following results are a direct consequence of Lemma 9 and Proposition 5.
Proposition 8. 
Let $x\in H$. Then $x\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j\big)$ if and only if there exists $z=(z_j)_{j=1}^{n}$ such that $z$ is a fixed point of P or Q and $x=\sum_{j=1}^{n}\omega_j z_j$.
Proposition 9. 
The operators P and Q are both firmly nonexpansive.
We also present some properties of the fixed points of the operators P and Q.
Lemma 10. 
Let $z=(z_j)_{j=1}^{n}$ be a fixed point of the operator P; then we have
$$\sum_{j=1}^{n}\omega_j z_j=J_{\frac{\gamma}{\omega_i}A_i}\Big(2\sum_{j=1}^{n}\omega_j z_j-z_i\Big),\qquad 1\le i\le n.$$
Proof. 
It suffices to prove that $J_{N_K}z=J_{\Gamma_A}R_{N_K}z$. By assumption, we have $z=R_{\Gamma_A}R_{N_K}z$. So we obtain $z=2J_{\Gamma_A}R_{N_K}z-R_{N_K}z$, and thus $J_{N_K}z=\frac{z+R_{N_K}z}{2}=J_{\Gamma_A}R_{N_K}z$. □
Lemma 11. 
Let $z=(z_j)_{j=1}^{n}$ be a fixed point of the operator Q; then we have
$$\sum_{j=1}^{n}\omega_j z_j=J_{\frac{\gamma}{\omega_i}A_i}(z_i),\qquad 1\le i\le n.$$
Proof. 
It suffices to prove that $J_{N_K}z=J_{\Gamma_A}z$. By assumption, we have $z=R_{N_K}R_{\Gamma_A}z$. It follows from the linearity of the operator $R_{N_K}$ that $z=2R_{N_K}J_{\Gamma_A}z-R_{N_K}z$, and thus $J_{N_K}z=R_{N_K}J_{\Gamma_A}z$. So we obtain $J_{\Gamma_A}z=R_{N_K}J_{N_K}z=J_{N_K}z$. Note that the first and second identities hold due to Lemma 5 and Lemma 3 (ii), respectively. □
It may seem that the problem is now completely solved. Nevertheless, we shall investigate the intrinsic mechanism behind these results, and this investigation is conducted from a different perspective.

4.2. Reformulation of Fixed-Point Equations

We shall begin with a crucial proposition which reveals that finding the zeros of the sum of n maximal monotone operators can be reduced to the simple case of $n=2$.
Proposition 10. 
Let $x\in H$. Then $x\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j\big)$ if and only if $\boldsymbol{x}\in\operatorname{zer}(\Gamma_A+N_K)$, where $\boldsymbol{x}=(x,\ldots,x)$.
Proof. 
We investigate problem (4) as follows:
$$0\in\sum_{j=1}^{n}A_j x \iff 0\in\sum_{j=1}^{n}A_j x_j,\ x_1=\cdots=x_n \iff x_1=\cdots=x_n,\ \exists\,\theta_j\in\frac{\gamma}{\omega_j}A_j x_j,\ \sum_{j=1}^{n}\omega_j\theta_j=0.$$
Define $\boldsymbol{x}:=(x_1,\ldots,x_n)$; then we obtain the equivalent relation
$$0\in(\Gamma_A+N_K)\,\boldsymbol{x}. \qquad\square$$
Based on the above discussion, we can recover Proposition 8 by an alternative approach.
Proposition 11. 
Let $x\in H$. Then $x\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j\big)$ if and only if there exists $z=(z_j)_{j=1}^{n}$ such that $z$ is a fixed point of P and $x=\sum_{j=1}^{n}\omega_j z_j$.
Proof. 
Let $\boldsymbol{x}:=(x,\ldots,x)$; then, by Lemma 7, $\boldsymbol{x}\in\operatorname{zer}(\Gamma_A+N_K)$ is equivalent to
$$z=\frac{\mathrm{Id}+R_{\Gamma_A}R_{N_K}}{2}\,z,\qquad \boldsymbol{x}=J_{N_K}z.$$
Note that the second identity states that $x=\sum_{j=1}^{n}\omega_j z_j$, and thus we conclude from Lemma 10 that $x\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j\big)$. □
Proposition 12. 
Let $x\in H$. Then $x\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j\big)$ if and only if there exists $z=(z_j)_{j=1}^{n}$ such that $z$ is a fixed point of Q and $x=\sum_{j=1}^{n}\omega_j z_j$.
Proof. 
Similarly to the proof of the last proposition, let $\boldsymbol{x}:=(x,\ldots,x)$; then, by Lemma 7, $\boldsymbol{x}\in\operatorname{zer}(\Gamma_A+N_K)$ is equivalent to
$$z=\frac{\mathrm{Id}+R_{N_K}R_{\Gamma_A}}{2}\,z,\qquad \boldsymbol{x}=J_{\Gamma_A}z.$$
The second identity states that $x=J_{\frac{\gamma}{\omega_i}A_i}(z_i)$, $1\le i\le n$. It follows from Lemma 11 that $x=\sum_{j=1}^{n}\omega_j z_j$, and we conclude from Lemma 10 that $x\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j\big)$. □
Therefore, although the above fixed-point methods originate from applications of the generalized forward–backward methods, they intrinsically rely on reducing the original problem to the case of $n=2$. As previously discussed, most methods can be explained as extensions of the Douglas–Rachford splitting. We remark that our work is an extension of the algorithms described in Lemma 7.

4.3. Splitting Operators

In this subsection, we design splitting operators for problem (4) with general n. It is clear from Proposition 10 that we only have to investigate the following inclusion:
$$0\in(\Gamma_A+N_K)\,\boldsymbol{x}.$$
Inspired by construction (11), we can generate a splitting operator as
$$\tilde S_1=\Big\{(v+\gamma b,\ u-v)\ \Big|\ (u,b)\in\Gamma_A,\ (v,a)\in N_K,\ v+\gamma a=u-\gamma b\Big\}. \qquad (52)$$
Note that here $v,a,u,b\in H^n$ and
$$(v,a)\in N_K \iff v_1=\cdots=v_n,\ \sum_{i=1}^{n}\omega_i a_i=0, \qquad (53)$$
$$(u,b)\in\Gamma_A \iff (u_i,b_i)\in\frac{\gamma}{\omega_i}A_i.$$
Writing the identity $v+\gamma a=u-\gamma b$ componentwise as $v_i+\gamma a_i=u_i-\gamma b_i$ and combining it with identity (53), we obtain
$$v_i=\sum_{j=1}^{n}\omega_j(u_j-\gamma b_j),\quad i=1,\ldots,n,\qquad a_i=\frac{(u_i-\gamma b_i)-\sum_{j=1}^{n}\omega_j(u_j-\gamma b_j)}{\gamma}.$$
Therefore, the operator in (52) can be written equivalently as
$$\tilde S_1=\Big\{(p,q)\ \Big|\ p=(p_i)_{i=1}^{n},\ q=(q_i)_{i=1}^{n},\ p_i=\sum_{j=1}^{n}\omega_j(u_j-\gamma b_j)+\gamma b_i,\ q_i=-\sum_{j=1}^{n}\omega_j(u_j-\gamma b_j)+u_i,\ (u_i,b_i)\in\frac{\gamma}{\omega_i}A_i\Big\}.$$
With some changes in notation, we obtain the following splitting operator.
Definition 8. 
Let each $A_i$ be any maximal monotone operator, and define
$$S_1=\Big\{(u,v)\ \Big|\ u=(u_i)_{i=1}^{n},\ v=(v_i)_{i=1}^{n},\ u_i=\sum_{j=1}^{n}\omega_j\Big(a_j-\frac{\gamma b_j}{\omega_j}\Big)+\frac{\gamma b_i}{\omega_i},\ v_i=-\sum_{j=1}^{n}\omega_j\Big(a_j-\frac{\gamma b_j}{\omega_j}\Big)+a_i,\ (a_i,b_i)\in A_i\Big\}.$$
Proposition 13. 
Let $u\in\operatorname{zer}(S_1)$; then $\sum_{j=1}^{n}\omega_j u_j\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j\big)$.
Proof. 
By assumption, we have
$$u_i=\sum_{j=1}^{n}\omega_j\Big(a_j-\frac{\gamma b_j}{\omega_j}\Big)+\frac{\gamma b_i}{\omega_i}, \qquad (57)$$
$$0=-\sum_{j=1}^{n}\omega_j\Big(a_j-\frac{\gamma b_j}{\omega_j}\Big)+a_i. \qquad (58)$$
A weighted sum on both sides of (58) leads to
$$\sum_{j=1}^{n}b_j=0,\qquad a_1=\cdots=a_n. \qquad (59)$$
Let $a:=a_i$; then we claim that $a\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j\big)$. Likewise, a weighted sum on both sides of (57) implies that $a=\sum_{j=1}^{n}\omega_j u_j$. By (59), we have completed the proof. □
Proposition 14. 
Let Q be given in (45); then $J_{S_1}=Q$.
Proof. 
By definition, we obtain
$$J_{S_1}=(\mathrm{Id}+S_1)^{-1}=\Big\{(u+v,\ u)\ \Big|\ (u,v)\in S_1\Big\}=\Big\{(v,u)\ \Big|\ u=(u_i)_{i=1}^{n},\ v=(v_i)_{i=1}^{n},\ v_i=a_i+\frac{\gamma b_i}{\omega_i},\ u_i=\sum_{j=1}^{n}\omega_j\Big(a_j-\frac{\gamma b_j}{\omega_j}\Big)+\frac{\gamma b_i}{\omega_i},\ (a_i,b_i)\in A_i\Big\}. \qquad (60)$$
It follows from the expression for $v_i$ in (60) that $a_i=J_{\frac{\gamma}{\omega_i}A_i}(v_i)$. Substituting this into the expression for $u_i$ in (60), we obtain
$$u_i=2\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(v_j)-J_{\frac{\gamma}{\omega_i}A_i}(v_i)-\sum_{j=1}^{n}\omega_j v_j+v_i,$$
so
$$2u_i-v_i=2\Big(2\sum_{j=1}^{n}\omega_jJ_{\frac{\gamma}{\omega_j}A_j}(v_j)-J_{\frac{\gamma}{\omega_i}A_i}(v_i)\Big)-\Big(2\sum_{j=1}^{n}\omega_j v_j-v_i\Big).$$
This is equivalent to
$$2u-v=2R_{N_K}J_{\Gamma_A}v-R_{N_K}v=R_{N_K}(2J_{\Gamma_A}-\mathrm{Id})v=R_{N_K}R_{\Gamma_A}v.$$
Thus, we have $u=\frac{v+R_{N_K}R_{\Gamma_A}v}{2}=Qv$, which completes the proof. □
Proposition 15. 
$S_1$ is a maximal monotone operator.
Proof. 
Assume that $(u,v)$ and $(\tilde u,\tilde v)$ are arbitrary elements of $S_1$. That is,
$$u=(u_i)_{i=1}^{n},\quad v=(v_i)_{i=1}^{n},\quad \tilde u=(\tilde u_i)_{i=1}^{n},\quad \tilde v=(\tilde v_i)_{i=1}^{n},$$
$$u_i=\sum_{j=1}^{n}\omega_j\Big(a_j-\frac{\gamma b_j}{\omega_j}\Big)+\frac{\gamma b_i}{\omega_i},\qquad v_i=-\sum_{j=1}^{n}\omega_j\Big(a_j-\frac{\gamma b_j}{\omega_j}\Big)+a_i,\qquad (a_i,b_i)\in A_i,$$
$$\tilde u_i=\sum_{j=1}^{n}\omega_j\Big(\tilde a_j-\frac{\gamma\tilde b_j}{\omega_j}\Big)+\frac{\gamma\tilde b_i}{\omega_i},\qquad \tilde v_i=-\sum_{j=1}^{n}\omega_j\Big(\tilde a_j-\frac{\gamma\tilde b_j}{\omega_j}\Big)+\tilde a_i,\qquad (\tilde a_i,\tilde b_i)\in A_i.$$
Let $a:=\sum_{j=1}^{n}\omega_j(\tilde a_j-a_j)$ and $b:=\gamma\sum_{j=1}^{n}(\tilde b_j-b_j)$. Then
$$\tilde u_i-u_i=a-b+\frac{\gamma}{\omega_i}(\tilde b_i-b_i),\qquad \tilde v_i-v_i=-a+b+(\tilde a_i-a_i).$$
This implies that
$$\langle\tilde v_i-v_i,\ \tilde u_i-u_i\rangle=-\|a-b\|^{2}+\frac{\gamma}{\omega_i}\langle b-a,\ \tilde b_i-b_i\rangle+\langle\tilde a_i-a_i,\ a-b\rangle+\frac{\gamma}{\omega_i}\langle\tilde a_i-a_i,\ \tilde b_i-b_i\rangle.$$
Thus,
$$\langle\tilde v-v,\ \tilde u-u\rangle=\sum_{i=1}^{n}\omega_i\langle\tilde v_i-v_i,\ \tilde u_i-u_i\rangle=\gamma\sum_{i=1}^{n}\langle\tilde a_i-a_i,\ \tilde b_i-b_i\rangle\geq 0.$$
Note that the last inequality holds because $(a_i,b_i)\in A_i$, $(\tilde a_i,\tilde b_i)\in A_i$, and each operator $A_i$ is monotone. Thus, we have proved that $S_1$ is a monotone operator. By Lemma 1, to prove that it is also maximal, it remains to show that $\mathrm{im}(\mathrm{Id}+S_1)=H^n$. This is immediate, since we proved in Proposition 14 that $J_{S_1}=Q$ is firmly nonexpansive and defined on all of $H^n$. □
By exchanging the two operators $\Gamma_A$ and $N_K$ in (52) and applying the same method, we obtain a splitting operator $S_2$, which possesses properties similar to those of $S_1$.
Definition 9. 
Let each $A_i$ be any maximal monotone operator, and define
$$S_2=\Big\{(u,v)\ \Big|\ u=(u_i)_{i=1}^{n},\ v=(v_i)_{i=1}^{n},\ u_i=\sum_{j=1}^{n}\omega_j\Big(a_j+\frac{\gamma b_j}{\omega_j}\Big)-\frac{\gamma b_i}{\omega_i},\ v_i=\sum_{j=1}^{n}\omega_j\Big(a_j+\frac{\gamma b_j}{\omega_j}\Big)-a_i,\ (a_i,b_i)\in A_i\Big\}.$$
Proposition 16. 
Let $u\in\operatorname{zer}(S_2)$; then $\sum_{j=1}^{n}\omega_j u_j\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j\big)$.
Proposition 17. 
Let P be given in (45); then $J_{S_2}=P$.
Proposition 18. 
$S_2$ is a maximal monotone operator.
Recall that we proved in Proposition 9 that P and Q are both firmly nonexpansive. In this subsection, we have given constructive approaches that produce maximal monotone operators $S_1$ and $S_2$ such that $Q=J_{S_1}$ and $P=J_{S_2}$.

5. Iterative Algorithms and Convergence

The main focus of this section is to present iterative schemes which produce convergent sequences leading to a solution of problem (3) or (4). Firstly, we recall three lemmas which can be used to find $\mathrm{Fix}(A)$, the set of fixed points of an averaged nonexpansive operator A.
Lemma 12. 
Let $k\in(0,1)$, let A be a k-averaged nonexpansive operator on $H$ such that $\mathrm{Fix}(A)\neq\emptyset$, and let $(\lambda_n)_{n\in\mathbb{N}}$ be a sequence in $[0,1/k]$ such that $\sum_{n\in\mathbb{N}}\lambda_n(1-k\lambda_n)=+\infty$. Set $x_0\in H$, and define, for $n\in\mathbb{N}$,
$$x_{n+1}=x_n+\lambda_n(Ax_n-x_n);$$
then $(x_n)_{n\in\mathbb{N}}$ converges weakly to a point in $\mathrm{Fix}(A)$.
Under particular circumstances, we can use Picard iteration and obtain sequences which converge to a fixed point of A.
Lemma 13. 
Let $H=\mathbb{R}^m$, and let C be a closed and convex subset of $H$. If $A:C\to C$ is a k-averaged nonexpansive operator with $k\in(0,1)$ and $\mathrm{Fix}(A)\neq\emptyset$, set $x_0\in C$, and define, for $n\in\mathbb{N}$,
$$x_{n+1}=Ax_n;$$
then $(x_n)_{n\in\mathbb{N}}$ converges to a point in $\mathrm{Fix}(A)$.
Lemma 14. 
Let A be a k-averaged nonexpansive operator on $H$ with $k\in(0,\tfrac{1}{2}]$ and $\mathrm{Fix}(A)\neq\emptyset$, set $x_0\in H$, and define, for $n\in\mathbb{N}$,
$$x_{n+1}=Ax_n;$$
then $(x_n)_{n\in\mathbb{N}}$ converges weakly to a point in $\mathrm{Fix}(A)$.
Based on the above discussion, we present the following series of propositions. We remark that the corresponding iterative schemes lead to a solution of problem (3) or (4). Therefore, we have developed efficient algorithms for solving the minimization problems (1) and (2).
Proposition 19. 
Suppose that $\operatorname{zer}\big(\sum_{j=1}^{n}A_j+B\big)\neq\emptyset$, $\gamma>0$, and $\beta>\frac{\gamma}{2}$. Let $(\lambda_n)_{n\in\mathbb{N}}$ be a sequence in $[0,1/\alpha]$ such that $\sum_{n\in\mathbb{N}}\lambda_n(1-\alpha\lambda_n)=+\infty$, where $\alpha$ is the averagedness constant given in Proposition 7. For each $1\le i\le 4$, set $x_0^{(i)}\in H^n$, and define, for $n\in\mathbb{N}$,
$$x_{n+1}^{(i)}=x_n^{(i)}+\lambda_n\big(T_ix_n^{(i)}-x_n^{(i)}\big);$$
then, the following hold:
(i) 
For each $1\le i\le 4$, there exists $z^{(i)}=(z_j^{(i)})_{j=1}^{n}$ such that $x_n^{(i)}\rightharpoonup z^{(i)}$.
(ii) 
For $i=1,4$, we have $\sum_{j=1}^{n}\omega_j z_j^{(i)}\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j+B\big)$.
(iii) 
For $i=2,3$ and $1\le l\le n$, we have $J_{\frac{\gamma}{\omega_l}A_l}\Big(2\sum_{j=1}^{n}\omega_j z_j^{(i)}-z_l^{(i)}\Big)\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j+B\big)$.
Proof. 
This follows from Propositions 5 and 7 and Lemma 12. □
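For completeness, here is a minimal generic driver for the relaxed scheme above (our own sketch; T stands for any of the operators $T_i$, supplied as a callable, and lam is a constant relaxation parameter in $(0,1/\alpha)$):

```python
import numpy as np

def relaxed_iteration(T, z0, lam=1.0, n_iter=1000, tol=1e-10):
    """Krasnosel'skii-Mann scheme z_{k+1} = z_k + lam * (T(z_k) - z_k)."""
    z = np.asarray(z0, dtype=float)
    for _ in range(n_iter):
        step = T(z) - z
        z = z + lam * step
        if np.linalg.norm(step) < tol:         # stop once z is (numerically) a fixed point
            break
    return z
```

For instance, with T1, omega and the toy data from the sketch after Definition 7, relaxed_iteration(T1, [5.0, -5.0], lam=1.2) returns approximately (-1, 3), whose weighted average is the zero x = 1.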
Similarly, Propositions 5 and 7 together with Lemma 13 yield the following result.
Proposition 20. 
Suppose that $H=\mathbb{R}^m$, $\operatorname{zer}\big(\sum_{j=1}^{n}A_j+B\big)\neq\emptyset$, $\gamma>0$, and $\beta>\frac{\gamma}{2}$. For each $1\le i\le 4$, set $x_0^{(i)}\in H^n$, and define, for $n\in\mathbb{N}$,
$$x_{n+1}^{(i)}=T_ix_n^{(i)};$$
then, the following hold:
(i) 
For each $1\le i\le 4$, there exists $z^{(i)}=(z_j^{(i)})_{j=1}^{n}$ such that $x_n^{(i)}\to z^{(i)}$.
(ii) 
For $i=1,4$, we have $\sum_{j=1}^{n}\omega_j z_j^{(i)}\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j+B\big)$.
(iii) 
For $i=2,3$ and $1\le l\le n$, we have $J_{\frac{\gamma}{\omega_l}A_l}\Big(2\sum_{j=1}^{n}\omega_j z_j^{(i)}-z_l^{(i)}\Big)\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j+B\big)$.
Proposition 21. 
Suppose that $\operatorname{zer}\big(\sum_{j=1}^{n}A_j\big)\neq\emptyset$ and $\gamma>0$. Set $x_0,y_0\in H^n$, and define
$$x_{n+1}=Px_n,\qquad y_{n+1}=Qy_n;$$
then, the following hold:
(i) 
There exist $x=(x_j)_{j=1}^{n},\ y=(y_j)_{j=1}^{n}\in H^n$ such that $x_n\rightharpoonup x$ and $y_n\rightharpoonup y$.
(ii) 
$\sum_{j=1}^{n}\omega_j x_j,\ \sum_{j=1}^{n}\omega_j y_j\in\operatorname{zer}\big(\sum_{j=1}^{n}A_j\big)$.
Proof. 
This relation follows from Propositions 8 and 9 and Lemma 14. □
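To illustrate Proposition 21 numerically, the following self-contained Python sketch (our own toy example, not from the paper) finds the zero of $\sum_{i=1}^{3}\partial|\cdot-c_i|$ with $c=(1,2,3)$, whose unique zero is the median $x=2$, by iterating P and Q on the product space; the weighted averages of the iterates recover the zero, as claimed in the proposition.

```python
import numpy as np

# Toy instance of (4): A_i = d|.-c_i| with c = (1, 2, 3); the unique zero
# of A_1 + A_2 + A_3 is the median x = 2.
c = np.array([1.0, 2.0, 3.0])
omega = np.full(3, 1.0 / 3.0)
gamma = 1.0

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def J_Gamma(z):                                # componentwise resolvents J_{(gamma/omega_i) A_i}
    return c + soft(z - c, gamma / omega)

def J_NK(z):                                   # projection onto the diagonal subspace K
    return np.full_like(z, omega @ z)

refl = lambda J, z: 2.0 * J(z) - z             # reflection 2 J - Id
P_op = lambda z: 0.5 * (z + refl(J_Gamma, refl(J_NK, z)))
Q_op = lambda z: 0.5 * (z + refl(J_NK, refl(J_Gamma, z)))

x = np.array([10.0, 0.0, -10.0])
y = x.copy()
for _ in range(300):                           # P, Q are firmly nonexpansive; Picard converges
    x, y = P_op(x), Q_op(y)
print(float(omega @ x), float(omega @ y))      # both -> 2.0
```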

6. Conclusions

In this paper, we have explored iterative schemes for solving problems (3) and (4) and have introduced the generalized forward–backward splitting methods. By introducing new operators on the product space, we were able to reduce problem (4) to the case of $n=2$ and thereby obtain fixed-point equations and splitting operators with desirable properties in a different way.
Our research has shown that the generalized forward–backward splitting methods are effective: they generate convergent sequences whose limits yield solutions of the problems under consideration.
Additionally, the introduction of new operators has proved to be effective in simplifying the problems and enabling a better understanding and handling of complex situations. When the zeros of the sum of n operators are reduced to the zeros of just two operators, symmetry manifests in the potential alignment of properties and patterns across the operators. If the operators possess symmetry, such as invariance under certain transformations, the zeros of these operators may exhibit similar behaviors or locations. This symmetry allows us to generalize and simplify the problem by focusing on just two representative operators.

Author Contributions

All authors contributed equally to this paper: Conceptualization, H.X.; validation, Z.L. and Y.Z.; data curation, X.L.; writing—original draft preparation, H.X.; writing—review and editing, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the specialized research fund of Yibin University (Grant No. 412-2021QH027).

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article.

Acknowledgments

The authors would like to express special thanks to Wang Jinrong and Qiu Xiaoling for their valuable suggestions during the writing process. All authors contributed equally to this work.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Space; Springer: Berlin/Heidelberg, Germany, 2010.
  2. Minty, G.J. Monotone (nonlinear) operators in Hilbert space. Duke Math. J. 1962, 29, 341–346.
  3. Chen, G.H.-G.; Rockafellar, R.T. Convergence rates in forward-backward splitting. SIAM J. Optim. 1997, 7, 421–444.
  4. Douglas, J.; Rachford, H.H. On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 1956, 82, 421–439.
  5. Eckstein, J. The Lions-Mercier Splitting Algorithm and the Alternating Direction Method Are Instances of the Proximal Point Algorithm; Laboratory for Information and Decision Systems: Cambridge, MA, USA, 1988.
  6. Raguet, H.; Fadili, J.; Peyré, G. Generalized forward-backward splitting. SIAM J. Imaging Sci. 2013, 6, 1199–1226.
  7. Xiao, H.; Zeng, X. A proximal point method for the sum of maximal monotone operators. Math. Methods Appl. Sci. 2014, 37, 2638–2650.
  8. Camlibel, K.; Iannelli, L.; Tanwani, A. Convergence of proximal solutions for evolution inclusions with time-dependent maximal monotone operators. Math. Program. 2022, 194, 1017–1059.
  9. Dadashi, V.; Postolache, M. Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2020, 9, 89–99.
  10. Boikanyo, O.A. The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators. Abstr. Appl. Anal. 2016, 2016, 2371857.
  11. Dadashi, V. On a hybrid proximal point algorithm in Banach spaces. Univ. Politeh. Buchar. Ser. A 2018, 80, 45–54.
  12. Dadashi, V.; Khatibzadeh, H. On the weak and strong convergence of the proximal point algorithm in reflexive Banach spaces. Optimization 2017, 66, 1487–1494.
  13. Dadashi, V.; Postolache, M. Hybrid proximal point algorithm and applications to equilibrium problems and convex programming. J. Optim. Theory Appl. 2017, 174, 518–529.
  14. Moudafi, A. On the convergence of the forward-backward algorithm for null-point problems. J. Nonlinear Var. Anal. 2018, 2, 263–268.
  15. Yuan, H. A splitting algorithm in a uniformly convex and 2-uniformly smooth Banach space. J. Nonlinear Funct. Anal. 2018, 26, 1–2.
  16. Combettes, P.L. Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53, 475–504.
  17. Combettes, P.L.; Pesquet, J.-C. A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 2007, 1, 564–574.
  18. Combettes, P.L.; Pesquet, J.-C. A proximal decomposition method for solving convex variational inverse problems. Inverse Problems 2008, 24, 65014–65040.
  19. Eckstein, J. Parallel alternating direction multiplier decomposition of convex programs. J. Optim. Theory Appl. 1994, 80, 39–62.
  20. Eckstein, J.; Svaiter, B.F. General projective splitting methods for sums of maximal monotone operators. SIAM J. Control Optim. 2009, 48, 787–811.
  21. Minty, G.J. On the maximal domain of a "monotone" function. Mich. Math. J. 1961, 8, 135–137.
  22. Rockafellar, R.T. On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149, 75–88.
  23. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
  24. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.