Article

A Symmetric FBF Method for Solving Monotone Inclusions

Aviv Gibali 1,2,*,† and Yekini Shehu 3,†
1 Department of Mathematics, ORT Braude College, Karmiel 2161002, Israel
2 The Center for Mathematics and Scientific Computation, University of Haifa, Mt. Carmel, Haifa 3498838, Israel
3 Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2020, 12(9), 1456; https://doi.org/10.3390/sym12091456
Submission received: 17 July 2020 / Revised: 29 August 2020 / Accepted: 2 September 2020 / Published: 4 September 2020
(This article belongs to the Special Issue Symmetry in Optimization and Control with Real World Applications)

Abstract

The forward–backward–forward (FBF) splitting method is a popular iterative procedure for finding zeros of the sum of a maximal monotone operator and a Lipschitz continuous monotone operator. In this paper, we introduce a forward–backward–forward splitting method with reflection steps (symmetric) in real Hilbert spaces. Weak and strong convergence analyses of the proposed method are established under suitable assumptions. Moreover, a linear convergence rate of an inertial modified forward–backward–forward splitting method is also presented.

1. Introduction

In this paper, we are concerned with solving monotone inclusion problems, and we introduce a new self-adaptive, reflected forward–backward–forward method for solving them. Monotone inclusions appear naturally and play an important role in many applied fields, such as fixed point and equilibrium problems, among many others; see, for example, [1,2,3,4,5,6,7,8]. More precisely, various problems in signal processing, computer vision and machine learning can be modelled mathematically using this formulation; see, for example, [9] and the references therein.
Let us recall the definition of the monotone inclusion problem. Given a maximal monotone operator $B : H \to 2^H$ and a Lipschitz continuous monotone operator $A : H \to H$ defined on a real Hilbert space $H$, the monotone inclusion problem consists in finding a point $x \in H$ such that
$$0 \in (A + B)x. \tag{1}$$
One of the simplest and most popular methods for solving (1) is the well-known forward–backward splitting method, introduced by Passty [7] and Lions and Mercier [5]. The iterative step of the method is phrased as follows.
$$x_{n+1} = J_{\lambda_n B}(I - \lambda_n A)x_n, \quad \lambda_n > 0, \tag{2}$$
where $J_{\lambda_n B} := (I + \lambda_n B)^{-1}$ denotes the resolvent of the maximal monotone operator $B$. Tseng, in [10], introduced a modification of (2) that includes an extra step and thereby yields convergence under weaker assumptions than those mentioned above.
The Tseng [10] iterative step is formulated as follows.
$$y_n = J_{\lambda_n B}(x_n - \lambda_n A x_n), \qquad x_{n+1} = y_n + \lambda_n(A x_n - A y_n), \quad n \ge 1, \tag{3}$$
where $\lambda_n \in \left(0, \frac{1}{L}\right)$ ($L$ is the Lipschitz constant of $A$) is obtained using an Armijo line search rule, as seen in ([10] (2.4)). The forward–backward–forward algorithm (3) has been studied extensively in the literature due to its applicability; see, e.g., [11,12,13,14,15,16,17,18,19,20,21].
Recently, Malitsky and Tam [22] introduced the following forward–reflected–backward splitting method for solving (1).
$$x_{n+1} = J_{\lambda_n B}\big(x_n - \lambda_n A x_n - \lambda_{n-1}(A x_n - A x_{n-1})\big), \tag{4}$$
where $\lambda_n \in \left[\epsilon, \frac{1 - 2\epsilon}{2L}\right]$, $\epsilon > 0$, and $\lambda_n$ is defined via a line search procedure, as seen in ([22] Algorithm 1).
Csetnek et al., in [23], proposed the following iterative step for solving (1).
$$x_{n+1} = J_{\lambda B}(x_n - \lambda A x_n) - \lambda(A x_n - A x_{n-1}), \tag{5}$$
where $\lambda \in \left[\epsilon, \frac{1 - 3\epsilon}{3L}\right]$, $\epsilon > 0$.
Observe that the iterative methods (4) and (5) coincide when $B = 0$, but (4) is more general due to the choice of $\lambda_n$. In any case, a major drawback of these two methods is that they require prior knowledge of the Lipschitz constant of $A$, which in most applications is either unknown or difficult to approximate. Furthermore, when a line search procedure is used as an inner loop, it may involve extra computations per iteration, resulting in slow convergence and potentially making the method inefficient.
Very recently, Hieu et al., in [24], proposed a self-adaptive forward–backward splitting variant that does not depend on the Lipschitz constant of $A$ and requires no line search procedure: choose $\lambda_1 > 0$ and $\mu \in \left(0, \frac{1}{2}\right)$, and iterate
$$x_{n+1} = J_{\lambda_n B}\big(x_n - \lambda_n A x_n - \lambda_{n-1}(A x_n - A x_{n-1})\big), \qquad \lambda_{n+1} = \min\left\{\frac{\mu \|x_{n+1} - x_n\|}{\|A x_{n+1} - A x_n\|}, \lambda_n\right\}. \tag{6}$$
Motivated by the above recent developments in the field of algorithms for solving inclusion problems (1), our contributions in this paper are:
  • We propose a new reflected forward–backward–forward iterative method for solving inclusion problems that has a different structure from the methods proposed in [10,22,23,24].
  • Our scheme is a self-adaptive procedure that does not require prior knowledge of the Lipschitz constant of A and no line search procedure is needed.
  • We also propose a modification of the forward–backward–forward method with an inertial extrapolation step and obtain a linear convergence result under some standard assumptions.
For deeper understanding and motivation, we next present the relations between dynamical systems and monotone inclusions.

Dynamical Systems and Monotone Inclusions

The forward–backward splitting method (2) can be interpreted as a discretization of the dynamical system (see [25,26])
$$\dot{x}(t) + x(t) = J_{\lambda B}\big(x(t) - \lambda A x(t)\big), \tag{7}$$
which consequently takes the form
$$-\dot{x}(t) \in \lambda B\big(\dot{x}(t) + x(t)\big) + \lambda A x(t) \tag{8}$$
as a monotone inclusion of type (1). Therefore, (7) can be considered as the dynamical system for the monotone inclusion (1) when $A$ is co-coercive.
Now, let us consider the monotone inclusion (1) in which $A$ is monotone and Lipschitz continuous on $H$. To solve (1), let us consider the following dynamical system, in the spirit of (8):
$$\dot{x}(t) + \alpha(t)x(t) = \alpha(t)\Big[J_{\lambda B}(I - \lambda A) - \lambda\big(A J_{\lambda B}(I - \lambda A) - A\big)\Big]\big(\dot{x}(t) + x(t)\big), \tag{9}$$
where $\alpha : [0, \infty) \to [0, \infty)$ is a Lebesgue measurable function. Observe that the dynamical system (9) is not explicit because $\dot{x}(t)$ appears on both sides of (9). The dynamical system (9) is different from the second-order dynamical system considered in ([27] Section 2).
Using the forward discretization $\dot{x}(t) \approx \frac{x_{n+1} - x_n}{h}$ on the left-hand side and the backward discretization $\dot{x}(t) \approx \frac{x_n - x_{n-1}}{h}$ on the right-hand side of (9), we have
$$\frac{x_{n+1} - x_n}{h} + \alpha_n x_n = \alpha_n\Big[J_{\lambda B}(I - \lambda A) - \lambda\big(A J_{\lambda B}(I - \lambda A) - A\big)\Big]\left(x_n + \frac{x_n - x_{n-1}}{h}\right). \tag{10}$$
When $h = 1$, (10) becomes
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\Big[J_{\lambda B}(I - \lambda A) - \lambda\big(A J_{\lambda B}(I - \lambda A) - A\big)\Big](2x_n - x_{n-1}), \tag{11}$$
that is,
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\Big[J_{\lambda B}(I - \lambda A)(2x_n - x_{n-1}) - \lambda\big(A J_{\lambda B}(I - \lambda A)(2x_n - x_{n-1}) - A(2x_n - x_{n-1})\big)\Big]. \tag{12}$$
We call (11) the reflected forward–backward–forward method. Setting $w_n := 2x_n - x_{n-1}$ and $y_n := J_{\lambda B}(I - \lambda A)w_n$, (11) becomes
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\big(y_n - \lambda(A y_n - A w_n)\big). \tag{13}$$
Our interest is to study (13) in the more general setting $\lambda := \lambda_n$ for solving (1).
The rest of the paper is organized as follows. In Section 2 we recall some basic definitions and results; in Section 3 we present and analyze our two forward–backward–forward methods; final conclusions are given in Section 4.

2. Preliminaries

We give some lemmas for our analysis.
Lemma 1
(See [2]). The following statements hold in $H$:
(i) $\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2$ for all $x, y \in H$;
(ii) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$ for all $x, y \in H$;
(iii) $\|x + y\|^2 = 2\|x\|^2 + 2\|y\|^2 - \|x - y\|^2$ for all $x, y \in H$.
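These identities are used repeatedly in Section 3. As a quick numerical sanity check (ours, purely illustrative), one can verify them in $\mathbb{R}^5$ with the usual inner product:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
sq = lambda v: float(np.dot(v, v))  # squared norm ||v||^2

# (i) expansion of the squared norm of a sum
assert np.isclose(sq(x + y), sq(x) + 2 * np.dot(x, y) + sq(y))
# (ii) one-sided bound: the right side exceeds the left by ||y||^2 >= 0
assert sq(x + y) <= sq(x) + 2 * np.dot(y, x + y) + 1e-12
# (iii) the identity used to derive (26) in the proof of Lemma 6
assert np.isclose(sq(x + y), 2 * sq(x) + 2 * sq(y) - sq(x - y))
```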
Lemma 2
(Maingé [28]). Let $\{\varphi_n\}$, $\{\delta_n\}$ and $\{\theta_n\}$ be sequences in $[0, +\infty)$ such that
$$\varphi_{n+1} \le \varphi_n + \theta_n(\varphi_n - \varphi_{n-1}) + \delta_n, \quad n \ge 1, \qquad \sum_{n=1}^{+\infty} \delta_n < +\infty,$$
and there exists a real number $\theta$ with $0 \le \theta_n \le \theta < 1$ for all $n \in \mathbb{N}$. Then the following hold:
(i) $\sum_{n=1}^{+\infty} [\varphi_n - \varphi_{n-1}]_+ < +\infty$, where $[t]_+ := \max\{t, 0\}$;
(ii) there exists $\varphi^* \in [0, +\infty)$ such that $\lim_{n\to\infty} \varphi_n = \varphi^*$.
Lemma 3
(Opial [29]). Let $C$ be a nonempty subset of $H$ and $\{x_n\}$ a sequence in $H$ such that the following two conditions hold:
(i) for any $x \in C$, $\lim_{n\to\infty}\|x_n - x\|$ exists;
(ii) every sequential weak cluster point of $\{x_n\}$ is in $C$.
Then $\{x_n\}$ converges weakly to a point in $C$.
Lemma 4
(Xu [30]). Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the following relation:
$$a_{n+1} \le (1 - \alpha_n)a_n + \alpha_n\sigma_n + \gamma_n, \quad n \ge 1,$$
where
(a) $\{\alpha_n\} \subset [0, 1]$, $\sum_{n=1}^{\infty} \alpha_n = \infty$;
(b) $\limsup_{n\to\infty} \sigma_n \le 0$;
(c) $\gamma_n \ge 0$ ($n \ge 1$), $\sum_{n=1}^{\infty} \gamma_n < \infty$.
Then $a_n \to 0$ as $n \to \infty$.
Definition 1.
A mapping $A : H \to H$ is called
(a) strongly monotone with modulus $\gamma > 0$ on $H$ if
$$\langle Ax - Ay, x - y \rangle \ge \gamma\|x - y\|^2, \quad \forall x, y \in H.$$
In this case, we say that $A$ is $\gamma$-strongly monotone;
(b) monotone on $H$ if
$$\langle Ax - Ay, x - y \rangle \ge 0, \quad \forall x, y \in H;$$
(c) co-coercive on $H$ if, for some $\gamma > 0$,
$$\langle Ax - Ay, x - y \rangle \ge \gamma\|Ax - Ay\|^2, \quad \forall x, y \in H;$$
(d) Lipschitz continuous on $H$ if there exists a constant $L > 0$ such that
$$\|Ax - Ay\| \le L\|x - y\| \quad \text{for all } x, y \in H.$$
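The gap between (b), (c) and (d) is precisely what motivates the FBF framework: a classical example (included here for illustration; it is not from the paper) is the rotation $A(x_1, x_2) = (-x_2, x_1)$ on $\mathbb{R}^2$, which is monotone and 1-Lipschitz but not co-coercive, since $\langle Ax - Ay, x - y\rangle = 0$ while $\|Ax - Ay\|^2 > 0$ whenever $x \ne y$; the plain forward–backward method need not converge for such an $A$, while Tseng-type methods still apply.

```python
import numpy as np

# 90-degree rotation: skew-symmetric, hence monotone; an isometry, hence 1-Lipschitz.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
A = lambda v: R @ v

x, y = np.array([1.0, 2.0]), np.array([0.0, 0.0])
print(np.dot(A(x) - A(y), x - y))                            # 0.0: monotone (with equality)
print(np.linalg.norm(A(x) - A(y)) / np.linalg.norm(x - y))   # 1.0: 1-Lipschitz
# Co-coercivity would require 0 = <Ax - Ay, x - y> >= gamma*||Ax - Ay||^2
# with gamma > 0, which is impossible here.
```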
Definition 2.
A multi-valued operator $B : H \to 2^H$ with graph $G(B) = \{(x, x^*) : x^* \in Bx\}$ is said to be monotone if, for any $x, y \in D(B)$, $x^* \in Bx$ and $y^* \in By$,
$$\langle x - y, x^* - y^* \rangle \ge 0.$$
A monotone operator $B$ is said to be maximal if $B = S$ whenever $S : H \to 2^H$ is monotone and $G(B) \subseteq G(S)$. For more details see, for instance, [31].
Lemma 5
(See [2]). Let $B : H \to 2^H$ be a maximal monotone mapping and $A : H \to H$ a mapping. Define the mapping
$$T_\lambda x := J_{\lambda B}(I - \lambda A)(x), \quad x \in H, \ \lambda > 0.$$
Then $F(T_\lambda) = (A + B)^{-1}(0)$, where $F(T_\lambda)$ denotes the set of all fixed points of $T_\lambda$.

3. Our Results

We present two new forward–backward–forward algorithms and analyze them under suitable conditions. We assume that the following conditions hold throughout the rest of this paper.
Assumption 1
(a) 
Let $B : H \to 2^H$ be a maximal monotone operator and $A : H \to H$ a monotone and $L$-Lipschitz continuous operator.
(b) 
The solution set $(A + B)^{-1}(0)$ of the inclusion problem (1) is nonempty.
(c) 
$0 < \alpha \le \alpha_n \le \alpha_{n+1} \le \frac{1}{2 + \delta} =: \epsilon$ for some $\delta > 0$.
Remark 1.
1. 
Observe that by (15) it is clear that $\lambda_{n+1} \le \lambda_n$ for all $n \ge 1$. Moreover, if $A w_n \ne A y_n$, then
$$\frac{\mu \|w_n - y_n\|}{\|A w_n - A y_n\|} \ge \frac{\mu \|w_n - y_n\|}{L \|w_n - y_n\|} = \frac{\mu}{L},$$
and thus $\lambda_n \ge \min\left\{\lambda_1, \frac{\mu}{L}\right\} > 0$ for all $n \ge 1$. This means that $\lim_{n\to\infty} \lambda_n$ exists, and therefore $\lim_{n\to\infty} \lambda_n = \lambda > 0$.
2. 
By Lemma 5, it is clear that if $w_n = y_n$, then $w_n$ is a solution of problem (1).
Lemma 6.
The sequence $\{x_n\}_{n=1}^{\infty}$ generated by Algorithm 1 is bounded.
Algorithm 1 Forward–Backward–Forward Algorithm 1
1: Choose parameters as in Assumption 1(c), $\mu \in (0, 1)$ and $\lambda_1 > 0$. Choose arbitrary starting points $x_0, x_1 \in H$. Set $n := 1$.
2: Given the iterates $x_{n-1}$ and $x_n$, compute
$$w_n = 2x_n - x_{n-1}, \qquad y_n = J_{\lambda_n B}(w_n - \lambda_n A w_n), \tag{14}$$
where
$$\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{\mu\|w_n - y_n\|}{\|A w_n - A y_n\|}, \lambda_n\right\}, & A w_n \ne A y_n, \\ \lambda_n, & \text{otherwise}. \end{cases} \tag{15}$$
If $w_n = y_n$, then STOP.
3: Otherwise, compute the next iterate via
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\big[y_n - \lambda_n(A y_n - A w_n)\big]. \tag{16}$$
4: Set $n \leftarrow n + 1$ and go to 2.
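To make the order of operations in Algorithm 1 concrete, here is a minimal NumPy sketch (ours, for illustration only). The operator `A`, the resolvent `JB` (for instance, one of the resolvents sketched earlier), and all parameter values are placeholders; a constant relaxation parameter $\alpha_n \equiv \alpha$ with $\alpha < \frac{1}{2}$ is one admissible choice under Assumption 1(c).

```python
import numpy as np

def reflected_fbf(A, JB, x0, x1, alpha=0.4, mu=0.9, lam=1.0,
                  max_iter=1000, tol=1e-10):
    """Sketch of Algorithm 1: A is a monotone Lipschitz operator,
    JB(z, lam) evaluates the resolvent (I + lam*B)^(-1) at z,
    alpha plays the role of a constant alpha_n, mu is in (0, 1),
    and lam is the initial step size lambda_1 > 0."""
    x_prev, x = x0, x1
    for _ in range(max_iter):
        w = 2.0 * x - x_prev                      # reflection step in (14)
        Aw = A(w)
        y = JB(w - lam * Aw, lam)                 # backward (resolvent) step in (14)
        if np.linalg.norm(w - y) <= tol:          # w_n = y_n, so w_n solves (1)
            return y
        Ay = A(y)
        gap = np.linalg.norm(Aw - Ay)
        lam_next = min(mu * np.linalg.norm(w - y) / gap, lam) if gap > 0 else lam  # rule (15)
        x_prev, x = x, (1 - alpha) * x + alpha * (y - lam * (Ay - Aw))             # update (16)
        lam = lam_next
    return x
```

For instance, taking `A = lambda v: M @ v` for a monotone (positive semidefinite, possibly nonsymmetric) matrix `M` and `JB = resolvent_l1` from the earlier snippet drives the iterates toward a zero of $A + B$.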
Proof. 
Let $p \in (A + B)^{-1}(0)$ and define $u_n := y_n - \lambda_n(A y_n - A w_n)$, $n \ge 1$. Then
$$\begin{aligned}
\|u_n - p\|^2 &= \|y_n - \lambda_n(A y_n - A w_n) - p\|^2 \\
&= \|y_n - p\|^2 + \lambda_n^2\|A y_n - A w_n\|^2 - 2\lambda_n\langle y_n - p, A y_n - A w_n\rangle \\
&= \|w_n - p\|^2 + \|w_n - y_n\|^2 + 2\langle y_n - w_n, w_n - p\rangle + \lambda_n^2\|A y_n - A w_n\|^2 - 2\lambda_n\langle y_n - p, A y_n - A w_n\rangle \\
&= \|w_n - p\|^2 + \|w_n - y_n\|^2 - 2\langle y_n - w_n, y_n - w_n\rangle + 2\langle y_n - w_n, y_n - p\rangle + \lambda_n^2\|A y_n - A w_n\|^2 - 2\lambda_n\langle y_n - p, A y_n - A w_n\rangle \\
&= \|w_n - p\|^2 - \|w_n - y_n\|^2 + 2\langle y_n - w_n, y_n - p\rangle + \lambda_n^2\|A y_n - A w_n\|^2 - 2\lambda_n\langle y_n - p, A y_n - A w_n\rangle \\
&= \|w_n - p\|^2 - \|w_n - y_n\|^2 + \lambda_n^2\|A y_n - A w_n\|^2 - 2\langle w_n - y_n - \lambda_n(A w_n - A y_n), y_n - p\rangle \\
&\le \|w_n - p\|^2 - \|w_n - y_n\|^2 + \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\|y_n - w_n\|^2 - 2\langle w_n - y_n - \lambda_n(A w_n - A y_n), y_n - p\rangle \\
&= \|w_n - p\|^2 - \left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right)\|y_n - w_n\|^2 - 2\langle w_n - y_n - \lambda_n(A w_n - A y_n), y_n - p\rangle,
\end{aligned}\tag{17}$$
where the inequality follows from the step-size rule (15).
We now show that
$$\langle w_n - y_n - \lambda_n(A w_n - A y_n), y_n - p\rangle \ge 0. \tag{18}$$
From $y_n = (I + \lambda_n B)^{-1}(w_n - \lambda_n A w_n)$, we obtain $w_n - \lambda_n A w_n \in y_n + \lambda_n B y_n$. Since $B$ is maximal monotone, there exists $v_n \in B y_n$ such that
$$(I - \lambda_n A)w_n = y_n + \lambda_n v_n. \tag{19}$$
Therefore,
$$v_n = \frac{1}{\lambda_n}(w_n - y_n - \lambda_n A w_n).$$
Furthermore, $0 \in (A + B)p$ and $A y_n + v_n \in (A + B)y_n$. Since $A + B$ is maximal monotone, one has
$$\langle A y_n + v_n, y_n - p\rangle \ge 0. \tag{20}$$
Substituting (19) into (20), we have
$$\frac{1}{\lambda_n}\langle w_n - y_n - \lambda_n A w_n + \lambda_n A y_n, y_n - p\rangle \ge 0.$$
Thus,
$$\langle w_n - y_n - \lambda_n(A w_n - A y_n), y_n - p\rangle \ge 0.$$
This is (18). Using (18) in (17), we get
$$\|u_n - p\|^2 \le \|w_n - p\|^2 - \left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right)\|y_n - w_n\|^2. \tag{21}$$
Using Algorithm 1, we get
$$\|x_{n+1} - p\|^2 = \|(1 - \alpha_n)(x_n - p) + \alpha_n(u_n - p)\|^2 = (1 - \alpha_n)\|x_n - p\|^2 + \alpha_n\|u_n - p\|^2 - \alpha_n(1 - \alpha_n)\|x_n - u_n\|^2, \tag{22}$$
which in turn implies that
$$\|x_{n+1} - p\|^2 \le (1 - \alpha_n)\|x_n - p\|^2 + \alpha_n\|w_n - p\|^2 - \alpha_n(1 - \alpha_n)\|x_n - u_n\|^2. \tag{23}$$
Note that
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n u_n,$$
and this implies
$$u_n - x_n = \frac{1}{\alpha_n}(x_{n+1} - x_n), \quad \forall n. \tag{24}$$
Using (24) in (23), we get
$$\|x_{n+1} - p\|^2 \le (1 - \alpha_n)\|x_n - p\|^2 + \alpha_n\|w_n - p\|^2 - \frac{1 - \alpha_n}{\alpha_n}\|x_{n+1} - x_n\|^2. \tag{25}$$
Additionally, by Lemma 1 (iii),
$$\|w_n - p\|^2 = \|2x_n - x_{n-1} - p\|^2 = \|(x_n - p) + (x_n - x_{n-1})\|^2 = 2\|x_n - p\|^2 - \|x_{n-1} - p\|^2 + 2\|x_n - x_{n-1}\|^2. \tag{26}$$
Using (26) in (25):
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le (1 - \alpha_n)\|x_n - p\|^2 + 2\alpha_n\|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_n\|x_n - x_{n-1}\|^2 - \frac{1 - \alpha_n}{\alpha_n}\|x_{n+1} - x_n\|^2 \\
&= (1 + \alpha_n)\|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_n\|x_n - x_{n-1}\|^2 - \frac{1 - \alpha_n}{\alpha_n}\|x_{n+1} - x_n\|^2.
\end{aligned}\tag{27}$$
Define
$$\Gamma_n := \|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_n\|x_n - x_{n-1}\|^2, \quad n \ge 1.$$
Since $\alpha_n \le \alpha_{n+1}$, we have
$$\begin{aligned}
\Gamma_{n+1} - \Gamma_n &= \|x_{n+1} - p\|^2 - (1 + \alpha_{n+1})\|x_n - p\|^2 + \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_{n+1}\|x_{n+1} - x_n\|^2 - 2\alpha_n\|x_n - x_{n-1}\|^2 \\
&\le \|x_{n+1} - p\|^2 - (1 + \alpha_n)\|x_n - p\|^2 + \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_{n+1}\|x_{n+1} - x_n\|^2 - 2\alpha_n\|x_n - x_{n-1}\|^2.
\end{aligned}\tag{28}$$
Now, using (27) in (28), one gets
$$\Gamma_{n+1} - \Gamma_n \le -\frac{1 - \alpha_n}{\alpha_n}\|x_{n+1} - x_n\|^2 + 2\alpha_{n+1}\|x_{n+1} - x_n\|^2 = -\left(\frac{1 - \alpha_n}{\alpha_n} - 2\alpha_{n+1}\right)\|x_{n+1} - x_n\|^2. \tag{29}$$
By condition (c) of Assumption 1, one gets
$$\frac{1 - \alpha_n}{\alpha_n} - 2\alpha_{n+1} = \frac{1}{\alpha_n} - 1 - 2\alpha_{n+1} \ge (2 + \delta) - 1 - \frac{2}{2 + \delta} = \delta + \frac{\delta}{2 + \delta} \ge \delta. \tag{30}$$
Using (30) in (29), we have
$$\Gamma_{n+1} - \Gamma_n \le -\delta\|x_{n+1} - x_n\|^2. \tag{31}$$
Therefore, $\{\Gamma_n\}$ is non-increasing. Similarly,
$$\Gamma_n = \|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_n\|x_n - x_{n-1}\|^2 \ge \|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2. \tag{32}$$
Note that
$$\alpha_n \le \frac{1}{2 + \delta} = \epsilon < 1.$$
From (32), we have
$$\|x_n - p\|^2 \le \alpha_n\|x_{n-1} - p\|^2 + \Gamma_n \le \epsilon\|x_{n-1} - p\|^2 + \Gamma_1 \le \cdots \le \epsilon^n\|x_0 - p\|^2 + (1 + \epsilon + \cdots + \epsilon^{n-1})\Gamma_1 \le \epsilon^n\|x_0 - p\|^2 + \frac{\Gamma_1}{1 - \epsilon}. \tag{33}$$
Consequently,
$$\Gamma_{n+1} = \|x_{n+1} - p\|^2 - \alpha_{n+1}\|x_n - p\|^2 + 2\alpha_{n+1}\|x_{n+1} - x_n\|^2 \ge -\alpha_{n+1}\|x_n - p\|^2,$$
and this means, from (33), that
$$-\Gamma_{n+1} \le \alpha_{n+1}\|x_n - p\|^2 \le \epsilon\|x_n - p\|^2 \le \epsilon^{n+1}\|x_0 - p\|^2 + \frac{\epsilon\,\Gamma_1}{1 - \epsilon}. \tag{34}$$
By (31) and (34), we get
$$\delta\sum_{n=1}^{k}\|x_{n+1} - x_n\|^2 \le \Gamma_1 - \Gamma_{k+1} \le \epsilon^{k+1}\|x_0 - p\|^2 + \frac{\Gamma_1}{1 - \epsilon}. \tag{35}$$
This implies
$$\sum_{n=1}^{\infty}\|x_{n+1} - x_n\|^2 \le \frac{\Gamma_1}{\delta(1 - \epsilon)} < +\infty. \tag{36}$$
Therefore, $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$. Additionally, since $w_n = 2x_n - x_{n-1}$, we get
$$\|w_n - x_n\| = \|x_n - x_{n-1}\| \to 0, \quad n \to \infty. \tag{37}$$
From (27), we get
$$\|x_{n+1} - p\|^2 \le (1 + \alpha_n)\|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2 + 2\|x_n - x_{n-1}\|^2. \tag{38}$$
Using Lemma 2 in (38) (noting (36)), we get
$$\lim_{n\to\infty}\|x_n - p\|^2 = l < \infty. \tag{39}$$
Hence, $\{\|x_n - p\|\}$ is bounded, and therefore $\{x_n\}$ is bounded. □
Theorem 1. 
The sequence $\{x_n\}_{n=1}^{\infty}$ generated by Algorithm 1 converges weakly to a point in $(A + B)^{-1}(0)$.
Proof. 
By (24), we have
$$\|u_n - x_n\| = \frac{1}{\alpha_n}\|x_{n+1} - x_n\| \le \frac{1}{\alpha}\|x_{n+1} - x_n\| \to 0, \quad n \to \infty. \tag{40}$$
Therefore,
$$\|w_n - u_n\| \le \|u_n - x_n\| + \|w_n - x_n\| \to 0, \quad n \to \infty. \tag{41}$$
Now, $\{w_n\}$ and $\{u_n\}$ are bounded by the boundedness of $\{x_n\}$. Hence, there exists $M > 0$ such that (noting (21))
$$\left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right)\|y_n - w_n\|^2 \le \|w_n - p\|^2 - \|u_n - p\|^2 = \big(\|w_n - p\| + \|u_n - p\|\big)\big(\|w_n - p\| - \|u_n - p\|\big) \le M\|w_n - u_n\| \to 0, \quad n \to \infty.$$
Therefore,
$$\|y_n - w_n\| \to 0, \quad n \to \infty. \tag{42}$$
Additionally, by the boundedness of $\{x_n\}$, there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j} \rightharpoonup z \in H$.
Let $(v, u) \in \mathrm{Graph}(A + B)$; then $u - Av \in Bv$. Moreover, $y_{n_j} = (I + \lambda_{n_j}B)^{-1}(I - \lambda_{n_j}A)w_{n_j}$, and hence $(I - \lambda_{n_j}A)w_{n_j} \in (I + \lambda_{n_j}B)y_{n_j}$, which yields
$$\frac{1}{\lambda_{n_j}}\big(w_{n_j} - y_{n_j} - \lambda_{n_j}A w_{n_j}\big) \in B y_{n_j}.$$
Since $B$ is maximal monotone, this gives
$$\left\langle v - y_{n_j},\, u - Av - \frac{1}{\lambda_{n_j}}\big(w_{n_j} - y_{n_j} - \lambda_{n_j}A w_{n_j}\big)\right\rangle \ge 0.$$
Thus,
$$\begin{aligned}
\langle v - y_{n_j}, u\rangle &\ge \left\langle v - y_{n_j},\, Av + \frac{1}{\lambda_{n_j}}(w_{n_j} - y_{n_j}) - A w_{n_j}\right\rangle \\
&= \langle v - y_{n_j}, Av - A y_{n_j}\rangle + \langle v - y_{n_j}, A y_{n_j} - A w_{n_j}\rangle + \left\langle v - y_{n_j}, \frac{1}{\lambda_{n_j}}(w_{n_j} - y_{n_j})\right\rangle \\
&\ge \langle v - y_{n_j}, A y_{n_j} - A w_{n_j}\rangle + \left\langle v - y_{n_j}, \frac{1}{\lambda_{n_j}}(w_{n_j} - y_{n_j})\right\rangle,
\end{aligned}\tag{43}$$
where the last inequality uses the monotonicity of $A$.
Since $\lim_{n\to\infty}\|w_n - y_n\| = 0$ and $A$ is Lipschitz continuous, we get $\lim_{j\to\infty}\|A w_{n_j} - A y_{n_j}\| = 0$. Moreover, since $\|x_n - y_n\| \le \|x_n - w_n\| + \|w_n - y_n\| \to 0$ and $x_{n_j} \rightharpoonup z$, we also have $y_{n_j} \rightharpoonup z$. Furthermore, since $\lambda_n \ge \min\{\lambda_1, \frac{\mu}{L}\} > 0$ for all $n \ge 1$, we get from (43) that
$$\langle v - z, u\rangle = \lim_{j\to\infty}\langle v - y_{n_j}, u\rangle \ge 0.$$
This implies, by the maximal monotonicity of $A + B$ (see ([2] Corollary 24.4(i))), that $0 \in (A + B)z$; thus $z \in (A + B)^{-1}(0)$.
By (39), $\lim_{n\to\infty}\|x_n - p\|$ exists for every $p \in (A + B)^{-1}(0)$. Hence, Opial's lemma (Lemma 3) shows that $\{x_n\}$ converges weakly to a point in $(A + B)^{-1}(0)$. This completes the proof. □
If $A$ is strongly monotone and Lipschitz continuous on $H$, we can show that the sequence $\{x_n\}_{n=1}^{\infty}$ generated by Algorithm 1 converges strongly. Note that in this case the splitting operator $T_\lambda$ in Lemma 5 is a contraction mapping, and hence $(A + B)^{-1}(0)$ is a singleton.
Theorem 2.
Suppose $A$ is strongly monotone and Lipschitz continuous on $H$. Then $\{x_n\}_{n=1}^{\infty}$ generated by Algorithm 1 converges strongly to the unique point in $(A + B)^{-1}(0)$.
Proof. 
Let $p$ denote the unique point in $(A + B)^{-1}(0)$. From the definition of $y_n$, there exists $v_n \in B y_n$ such that
$$y_n + \lambda_n v_n = w_n - \lambda_n A w_n.$$
From $0 \in Ap + Bp$, there exists $v^* \in Bp$ such that $Ap + v^* = 0$. Given that $u_n = y_n - \lambda_n(A y_n - A w_n)$, $n \ge 1$, we have
$$\begin{aligned}
\|w_n - p\|^2 &= \|w_n - y_n + y_n - u_n + u_n - p\|^2 \\
&= \|w_n - y_n\|^2 + \|y_n - u_n\|^2 + \|u_n - p\|^2 + 2\langle w_n - y_n, y_n - p\rangle + 2\langle y_n - u_n, u_n - p\rangle \\
&= \|w_n - y_n\|^2 - \|y_n - u_n\|^2 + \|u_n - p\|^2 + 2\langle w_n - u_n, y_n - p\rangle \\
&= \|w_n - y_n\|^2 - \lambda_n^2\|A w_n - A y_n\|^2 + \|u_n - p\|^2 + 2\lambda_n\langle A y_n + v_n, y_n - p\rangle \\
&= \|w_n - y_n\|^2 - \lambda_n^2\|A w_n - A y_n\|^2 + \|u_n - p\|^2 + 2\lambda_n\langle A y_n - Ap + v_n - v^*, y_n - p\rangle.
\end{aligned}\tag{44}$$
Since $A + B$ is strongly monotone, there exists $\eta > 0$ such that
$$\langle A y_n - Ap + v_n - v^*, y_n - p\rangle \ge \eta\|y_n - p\|^2. \tag{45}$$
Now, from (44) and (45), we have
$$\|u_n - p\|^2 \le \|w_n - p\|^2 - \|w_n - y_n\|^2 + \lambda_n^2\|A w_n - A y_n\|^2 - 2\lambda_n\eta\|y_n - p\|^2 \le \|w_n - p\|^2 - \left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right)\|w_n - y_n\|^2 - 2\lambda_n\eta\|y_n - p\|^2. \tag{46}$$
Since $\lim_{n\to\infty}\lambda_n = \lambda$ and the sequence $\{\lambda_n\}$ is monotonically non-increasing, we have $\lambda_n \ge \lambda$ for all $n \ge 1$. Let $\rho$ be a fixed number in the interval $(\mu, 1)$. Additionally, since $\lim_{n\to\infty}\frac{\lambda_n\mu}{\lambda_{n+1}} = \mu < \rho$, there exists $N > 0$ such that $\frac{\lambda_n\mu}{\lambda_{n+1}} < \rho$ for all $n \ge N$. So, for all $n \ge N$, we have
$$\lambda_n \ge \lambda, \qquad \frac{\lambda_n\mu}{\lambda_{n+1}} < \rho. \tag{47}$$
Plugging (26) and (47) into (46), we have, for all $n \ge N$,
$$\begin{aligned}
\|u_n - p\|^2 &\le \|w_n - p\|^2 - (1 - \rho^2)\|w_n - y_n\|^2 - 2\lambda\eta\|y_n - p\|^2 \\
&= \|2x_n - x_{n-1} - p\|^2 - (1 - \rho^2)\|w_n - y_n\|^2 - 2\lambda\eta\|y_n - p\|^2 \\
&= 2\|x_n - p\|^2 - \|x_{n-1} - p\|^2 + 2\|x_n - x_{n-1}\|^2 - (1 - \rho^2)\|w_n - y_n\|^2 - 2\lambda\eta\|y_n - p\|^2.
\end{aligned}\tag{48}$$
Consequently,
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le (1 - \alpha_n)\|x_n - p\|^2 + \alpha_n\|u_n - p\|^2 \\
&\le (1 - \alpha_n)\|x_n - p\|^2 + \alpha_n\big[2\|x_n - p\|^2 - \|x_{n-1} - p\|^2 + 2\|x_n - x_{n-1}\|^2 - (1 - \rho^2)\|w_n - y_n\|^2 - 2\lambda\eta\|y_n - p\|^2\big] \\
&\le (1 + \alpha_n)\|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_n\|x_n - x_{n-1}\|^2 - \alpha_n(1 - \rho^2)\|w_n - y_n\|^2 - 2\alpha_n\lambda\eta\|y_n - p\|^2.
\end{aligned}\tag{49}$$
Therefore,
$$2\alpha\lambda\eta\|y_n - p\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n\big(\|x_n - p\|^2 - \|x_{n-1} - p\|^2\big) + 2\|x_n - x_{n-1}\|^2.$$
So,
$$2\alpha\lambda\eta\sum_{k=N}^{n}\|y_k - p\|^2 \le \|x_N - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n\|x_n - p\|^2 - \alpha_{N-1}\|x_{N-1} - p\|^2 + 2\sum_{k=N}^{n}\|x_k - x_{k-1}\|^2.$$
Since $\{x_n\}$ is bounded by Lemma 6 and $\sum_{k=N}^{\infty}\|x_k - x_{k-1}\|^2 < \infty$ by (36), we obtain $\sum_{k=N}^{\infty}\|y_k - p\|^2 < \infty$. Hence $\lim_{n\to\infty}\|y_n - p\| = 0$. Consequently, we get
$$\|x_n - p\| \le \|x_n - y_n\| + \|y_n - p\| \le \|x_n - w_n\| + \|w_n - y_n\| + \|y_n - p\| \to 0$$
as $n \to \infty$. This concludes the proof. □
Remark 2.
1. 
Observe that the convergence Theorems 1 and 2 assume that the mapping $A$ is monotone and Lipschitz continuous. In case the Lipschitz constant $L$ of $A$ is known or can easily be estimated, one can choose the step sizes $\lambda_n$ in Algorithm 1 as follows, and the convergence theorems remain valid:
$$0 < a \le \lambda_n \le b < \frac{1}{L}. \tag{50}$$
This choice and our adaptive step-size rule for determining $\{\lambda_n\}$ are quite flexible and general, and thus extend several related results in the literature, e.g., those in [22,23,24,32,33].
2. 
In case we incorporate the general inertial term $w_n = x_n + \theta(x_n - x_{n-1})$ for $\theta \in [0, 1)$ (not necessarily $\theta = 1$), then Lemma 6 and Theorems 1 and 2 still hold. Moreover, with this general term, and under $\eta$-strong monotonicity and $L$-Lipschitz continuity of $A$, we are able to establish linear convergence of the next algorithm.
Theorem 3.
Suppose that $A$ is $\eta$-strongly monotone and $L$-Lipschitz continuous on $H$. Then the sequence $\{x_n\}_{n=1}^{\infty}$ generated by Algorithm 2 converges linearly to the unique point in $(A + B)^{-1}(0)$.
Algorithm 2 Forward–Backward–Forward Algorithm 2
1: Choose the step sizes $\{\lambda_n\}$ as in (50), define $\sigma := \min\{1 - b^2L^2, 2a\eta\} \in (0, 1)$, and choose $\theta \in \left[0, \frac{\sigma}{2 - \sigma}\right)$ and $\alpha \in \left(0, \frac{1}{2}\right)$. Choose arbitrary starting points $x_0, x_1 \in H$ and set $n := 1$.
2: Given $x_n$ and $x_{n-1}$, compute
$$w_n = x_n + \theta(x_n - x_{n-1}), \qquad y_n = J_{\lambda_n B}(w_n - \lambda_n A w_n). \tag{51}$$
If $w_n - y_n = 0$, then STOP.
3: Otherwise, compute
$$x_{n+1} = (1 - \alpha)x_n + \alpha\big[y_n - \lambda_n(A y_n - A w_n)\big]. \tag{52}$$
4: Set $n \leftarrow n + 1$ and go to 2.
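Under the same placeholder conventions as the earlier sketch (the operator `A`, the resolvent `JB`, and all constants are ours, chosen to satisfy the stated restrictions), Algorithm 2 differs only in the inertial extrapolation (51) and the fixed step size from (50):

```python
import numpy as np

def inertial_fbf(A, JB, x0, x1, L, eta, alpha=0.25,
                 max_iter=1000, tol=1e-10):
    """Sketch of Algorithm 2 for A eta-strongly monotone and L-Lipschitz.
    Uses a fixed step lam with a = b = lam < 1/L; sigma and theta are set
    as in the statement of Algorithm 2, so Theorem 3 gives a linear rate."""
    lam = 0.9 / L                                   # one valid choice of a = b in (50)
    sigma = min(1.0 - (lam * L) ** 2, 2.0 * lam * eta)
    theta = 0.5 * sigma / (2.0 - sigma)             # strictly inside [0, sigma/(2-sigma))
    x_prev, x = x0, x1
    for _ in range(max_iter):
        w = x + theta * (x - x_prev)                # inertial extrapolation (51)
        Aw = A(w)
        y = JB(w - lam * Aw, lam)
        if np.linalg.norm(w - y) <= tol:            # w_n = y_n solves (1)
            return y
        x_prev, x = x, (1 - alpha) * x + alpha * (y - lam * (A(y) - Aw))  # update (52)
    return x
```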
Proof. 
Following the lines of argument used in obtaining (44) and (45), one can show that
$$\begin{aligned}
\|u_n - p\|^2 &\le \|w_n - p\|^2 - \|w_n - y_n\|^2 + \lambda_n^2\|A w_n - A y_n\|^2 - 2\lambda_n\eta\|y_n - p\|^2 \\
&\le \|w_n - p\|^2 - (1 - \lambda_n^2L^2)\|w_n - y_n\|^2 - 2\lambda_n\eta\|y_n - p\|^2 \\
&\le \|w_n - p\|^2 - (1 - b^2L^2)\|w_n - y_n\|^2 - 2a\eta\|y_n - p\|^2 \\
&\le \|w_n - p\|^2 - \min\{1 - b^2L^2, 2a\eta\}\big(\|w_n - y_n\|^2 + \|y_n - p\|^2\big) \\
&\le \|w_n - p\|^2 - \frac{1}{2}\min\{1 - b^2L^2, 2a\eta\}\|w_n - p\|^2 = \zeta^2\|w_n - p\|^2,
\end{aligned}\tag{53}$$
where $\zeta^2 := 1 - \frac{1}{2}\min\{1 - b^2L^2, 2a\eta\} \in (0, 1)$, and the last inequality uses $\|w_n - y_n\|^2 + \|y_n - p\|^2 \ge \frac{1}{2}\|w_n - p\|^2$. Consequently,
$$\begin{aligned}
\|x_{n+1} - p\|^2 &= (1 - \alpha)\|x_n - p\|^2 + \alpha\|u_n - p\|^2 - \alpha(1 - \alpha)\|x_n - u_n\|^2 \\
&\le (1 - \alpha)\|x_n - p\|^2 + \alpha\zeta^2\|w_n - p\|^2 - \frac{1 - \alpha}{\alpha}\|x_{n+1} - x_n\|^2 \\
&= (1 - \alpha)\|x_n - p\|^2 + \alpha\zeta^2\big[(1 + \theta)\|x_n - p\|^2 - \theta\|x_{n-1} - p\|^2 + \theta(1 + \theta)\|x_n - x_{n-1}\|^2\big] - \frac{1 - \alpha}{\alpha}\|x_{n+1} - x_n\|^2 \\
&= \big(1 - \alpha(1 - \zeta^2(1 + \theta))\big)\|x_n - p\|^2 - \alpha\zeta^2\theta\|x_{n-1} - p\|^2 + \alpha\zeta^2\theta(1 + \theta)\|x_n - x_{n-1}\|^2 - \frac{1 - \alpha}{\alpha}\|x_{n+1} - x_n\|^2.
\end{aligned}\tag{54}$$
Since $\alpha < \frac{1}{2}$, we have $\frac{1 - \alpha}{\alpha} > 1$, and therefore
$$\|x_{n+1} - p\|^2 + \|x_{n+1} - x_n\|^2 \le \|x_{n+1} - p\|^2 + \frac{1 - \alpha}{\alpha}\|x_{n+1} - x_n\|^2 \le \big(1 - \alpha(1 - \zeta^2(1 + \theta))\big)\|x_n - p\|^2 + \alpha\zeta^2\theta(1 + \theta)\|x_n - x_{n-1}\|^2. \tag{55}$$
Observe that $0 < 1 - \alpha(1 - \zeta^2(1 + \theta)) < 1$, since $\theta \in \left[0, \frac{\sigma}{2 - \sigma}\right)$. We obtain from (55) that
$$\|x_{n+1} - p\|^2 + \|x_{n+1} - x_n\|^2 \le \big(1 - \alpha(1 - \zeta^2(1 + \theta))\big)\left[\|x_n - p\|^2 + \frac{\alpha\zeta^2\theta(1 + \theta)}{1 - \alpha(1 - \zeta^2(1 + \theta))}\|x_n - x_{n-1}\|^2\right] \le \big(1 - \alpha(1 - \zeta^2(1 + \theta))\big)\big[\|x_n - p\|^2 + \|x_n - x_{n-1}\|^2\big], \tag{56}$$
where $\frac{\alpha\zeta^2\theta(1 + \theta)}{1 - \alpha(1 - \zeta^2(1 + \theta))} < 1$ since $\alpha < \frac{1}{2} < \frac{1}{1 - \zeta^2(1 + \theta)(1 - \theta)}$.
Denote $b_n := \|x_n - p\|^2 + \|x_n - x_{n-1}\|^2$. Then (56) implies
$$b_{n+1} \le \big(1 - \alpha(1 - \zeta^2(1 + \theta))\big)^n b_1. \tag{57}$$
Therefore,
$$\|x_{n+1} - p\|^2 \le b_{n+1} \le \big(1 - \alpha(1 - \zeta^2(1 + \theta))\big)^n b_1. \tag{58}$$
This concludes the proof. □
We give some remarks about the contributions of our proposed methods and the consequent improvements over some related methods in the literature.
Remark 3.
(a) The methods proposed in [22,23] use a fixed constant step size that depends on the Lipschitz constant of the forward operator $A$. This approach is quite restrictive and has limited applicability, since the Lipschitz constant, or an estimate of it, must be known before the methods in [22,23] can be applied. Our proposed Algorithm 1 uses the self-adaptive step size (15), which is more widely applicable and requires no knowledge of the Lipschitz constant or an estimate of it.
(b) In the methods proposed in this paper, the forward–backward operator $J_{\lambda_n B}(I - \lambda_n A)$ acts on the reflection $2x_n - x_{n-1}$ (that is, the term $J_{\lambda_n B}(I - \lambda_n A)(2x_n - x_{n-1})$ appears), which accelerates our proposed methods, since any $x$ satisfying $x = J_{\lambda_n B}(I - \lambda_n A)x$ solves the inclusion problem (1). This is not the case in the methods proposed in [22,23,24], where $J_{\lambda_n B}(I - \lambda_n A)$ does not act on the reflection $2x_n - x_{n-1}$.
(c) We give a linear convergence rate in Theorem 3. No linear convergence rate is given for the methods proposed in [22,23].

4. Conclusions

In this work, we study a Tseng-type algorithm with a reflection step for solving monotone inclusions in real Hilbert spaces. We propose two variants and establish their weak and strong convergence under suitable conditions, as well as a linear convergence rate under stronger assumptions. Our contributions show that the Tseng algorithm can be modified with the extrapolation term $w_n = x_n + \theta(x_n - x_{n-1})$, with $\theta = 1$, while a full convergence analysis is retained; this approach has not been considered before in the literature. Our work generalizes and extends some related results in the literature, such as those in [10,22,23,24]. A continuing project that can be studied further is splitting algorithms for finding a zero of the sum of three monotone operators, in which two are maximal monotone and the third is Lipschitz continuous, e.g., as in [16,33].

Author Contributions

Analysis, Y.S. and A.G.; Investigation, Y.S.; Methodology, Y.S.; Visualization, A.G.; Writing—original draft, Y.S.; Writing—review and editing, A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aoyama, K.; Kimura, Y.; Takahashi, W. Maximal monotone operators and maximal monotone functions for equilibrium problems. J. Convex Anal. 2008, 15, 395–409.
  2. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; CMS Books in Mathematics; Springer: New York, NY, USA, 2011.
  3. Chen, G.H.-G.; Rockafellar, R.T. Convergence rates in forward-backward splitting. SIAM J. Optim. 1997, 7, 421–444.
  4. Combettes, P.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
  5. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
  6. Moudafi, A.; Thera, M. Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1997, 94, 425–448.
  7. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces. J. Math. Anal. Appl. 1979, 72, 383–390.
  8. Peaceman, D.H.; Rachford, H.H. The numerical solutions of parabolic and elliptic differential equations. J. Soc. Indust. Appl. Math. 1955, 3, 28–41.
  9. Beck, A.; Teboulle, M. Gradient-Based Algorithms with Applications to Signal Recovery Problems. In Convex Optimization in Signal Processing and Communications; Eldar, Y., Palomar, D., Eds.; Cambridge University Press: Cambridge, UK, 2009.
  10. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  11. Alves, M.M.; Geremia, M. Iteration complexity of an inexact Douglas-Rachford method and of a Douglas-Rachford-Tseng's F-B four-operator splitting method for solving monotone inclusions. Numer. Algorithms 2019, 82, 263–295.
  12. Boţ, R.I.; Csetnek, E.R. An inertial Tseng's type proximal algorithm for nonsmooth and nonconvex optimization problems. J. Optim. Theory Appl. 2016, 171, 600–616.
  13. Cholamjiak, W.; Cholamjiak, P.; Suantai, S. An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 2018, 20, 42.
  14. Gibali, A.; Thong, D.V. Tseng type methods for solving inclusion problems and its applications. Calcolo 2018, 55, 49.
  15. Khatibzadeh, H.; Moroşanu, G.; Ranjbar, S. A splitting method for approximating zeros of the sum of two monotone operators. J. Nonlinear Convex Anal. 2017, 18, 763–776.
  16. Latafat, P.; Patrinos, P. Asymmetric forward-backward-adjoint splitting for solving monotone inclusions involving three operators. Comput. Optim. Appl. 2017, 68, 57–93.
  17. Shehu, Y. Convergence results of forward-backward algorithms for sum of monotone operators in Banach spaces. Results Math. 2019, 74, 138.
  18. Shehu, Y.; Cai, G. Strong convergence result of forward-backward splitting methods for accretive operators in Banach spaces with applications. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Math. RACSAM 2018, 112, 71–87.
  19. Thong, D.V.; Vuong, P.T. Modified Tseng's extragradient methods for solving pseudo-monotone variational inequalities. Optimization 2019, 68, 2203–2222.
  20. Thong, D.V.; Vinh, N. The inertial methods for fixed point problems and zero point problems of the sum of two monotone mappings. Optimization 2019, 68, 1037–1072.
  21. Wang, Y.; Wang, F. Strong convergence of the forward-backward splitting method with multiple parameters in Hilbert spaces. Optimization 2018, 67, 493–505.
  22. Malitsky, Y.; Tam, M.K. A Forward-Backward splitting method for monotone inclusions without cocoercivity. arXiv 2020, arXiv:1808.04162.
  23. Csetnek, E.R.; Malitsky, Y.; Tam, M.K. Shadow Douglas–Rachford Splitting for Monotone Inclusions. Appl. Math. Optim. 2019, 80, 665–678.
  24. Van Hieu, D.; Anh, P.K.; Muu, L.D. Modified forward–backward splitting method for variational inclusions. 4OR-Q. J. Oper. Res. 2020.
  25. Abbas, B.; Attouch, H.; Svaiter, B.F. Newton-like dynamics and forward-backward methods for structured monotone inclusions in Hilbert spaces. J. Optim. Theory Appl. 2014, 161, 331–360.
  26. Attouch, H.; Cabot, A. Convergence of a relaxed inertial forward–backward algorithm for structured monotone inclusions. Appl. Math. Optim. 2019.
  27. Boţ, R.I.; Sedlmayer, M.; Vuong, P.T. A relaxed inertial Forward-Backward-Forward algorithm for solving monotone inclusions with application to GANs. arXiv 2020, arXiv:2003.07886.
  28. Maingé, P.-E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236.
  29. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
  30. Xu, H.K. Iterative algorithms for nonlinear operators. J. London Math. Soc. 2002, 66, 240–256.
  31. Barbu, V. Nonlinear Semigroups and Differential Equations in Banach Spaces; Editura Academiei RS Romania: Bucharest, Romania, 1976.
  32. Cevher, V.; Vũ, B.C. A reflected Forward-Backward splitting method for monotone inclusions involving Lipschitzian operators. Set-Valued Var. Anal. 2020.
  33. Rieger, J.; Tam, M.K. Backward-Forward-Reflected-Backward splitting for three operator monotone inclusions. Appl. Math. Comput. 2020, 381, 125248.
