Article

New Approach to Split Variational Inclusion Issues through a Three-Step Iterative Process

by
Andreea Bejenaru
1 and
Mihai Postolache
1,2,3,*
1
Department of Mathematics and Informatics, University Politehnica of Bucharest, 060042 Bucharest, Romania
2
Business School, Sichuan University, Chengdu 610064, China
3
Gh. Mihoc—C. Iacob Institute of Mathematical Statistics and Applied Mathematics, Romanian Academy, 050711 Bucharest, Romania
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(19), 3617; https://doi.org/10.3390/math10193617
Submission received: 13 September 2022 / Revised: 27 September 2022 / Accepted: 28 September 2022 / Published: 2 October 2022
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications II)

Abstract:
Split variational inclusions form a large class of problems that includes several pre-existing split-type problems: split feasibility problems, split zeroes problems, split variational inequalities, and so on. This makes them not only a rich direction of theoretical study, but also one with important and varied practical applications: large-dimensional linear systems, optimization, signal reconstruction, boundary value problems and others. In this paper, the existing algorithmic tools are complemented by a new procedure based on a three-step iterative process. The resulting approximating sequence is proved to be weakly convergent toward a solution. The behavior of the new algorithm is examined in connection with mixed optimization–feasibility and mixed linear–feasibility systems. Standard polynomiographic techniques are applied for a comparative visual analysis of the convergence behavior.

1. Introduction

In general, the setting for split problems includes two Hilbert spaces and a linear operator between them. The problem consists of finding the points of the domain that satisfy some given conditions, such that their images under the linear operator satisfy analogous conditions within the range. Depending on the restrictions initially set, as well as on the other mappings involved, several types of split problems have been defined and analyzed so far. Most often, there is a very well-defined inclusion relationship between these problems. However, the most important feature common to all split problems is their application area, which takes all of them beyond a simple theoretical exercise.
For instance, the split monotone variational inclusion problem (SMVI) [1] is suited to settings involving both multivalued mappings and single-valued self-mappings on the Hilbert spaces. Through certain assignments, several other split problems arise from this general framework:
  • When the multivalued mappings are the normal cones corresponding to some given closed and convex subsets of the two Hilbert spaces, one finds the split variational inequality problem (SVI) [2]. Furthermore, the split feasibility problem (SFP) [3] (please also see [4,5,6,7,8,9,10,11,12,13]) is obtained by special assignments for the self-mapping involved in variational inequalities. A panoramic picture about various iteration algorithms developed so far to solve the SFP was provided by Hamdi et al. in [14].
  • Because the split equality problem (SEP) (see [15,16]) can equivalently be converted to a split feasibility problem in some product space, it follows that the SEP also provides a subclass of the SMVI.
  • When the multivalued mappings in a split monotone variational inclusion problem are null, one reaches the split zeroes problem (SZP) [2] and, in particular, the split common fixed-point problem (SCFPP) (an interesting survey of iteration algorithms fit for this type of problem is provided in [14]).
  • By excluding the self-mappings and keeping only the multivalued mappings, we find the split common null-point problem (SCNPP) [17]. Some possible iterative algorithms to solve this problem are listed in Xiong et al. [18]. In particular, when the multivalued mappings are precisely the subdifferential operators of two lower semi-continuous convex functions, we reach the split minimization problem (SMP).
  • When the two Hilbert spaces coincide, the self-mappings and the multivalued mappings coincide, and the linear operator is the identity map, then the (simple, single-space) inclusion problem is recovered. A thorough survey of it is given by Luo in [19], together with an inertial self-adaptive splitting algorithm, as an alternative to the classical forward–backward splitting method. If, furthermore, the self-mapping is null, then Rockafellar's variational inclusion problem [20] is obtained, with its particular case of a convex minimization problem. In this case, the forward–backward splitting procedure is nothing else than the proximal point algorithm [21]. An interesting approach to implicit variational inclusion problems was given by Agarwal and Verma in [22].
All the problems listed above prove the significant generality of the SMVI problem, which strongly motivates us to provide new solution methods for it. We aim to introduce a new algorithm to solve the SMVI problem. The inspiration comes from the recently developed algorithm initiated by Feng et al. in [7] to solve the split feasibility problem. Feng et al. combined the CQ algorithm with Thakur et al.'s [23] iteration procedure and obtained the so-called SFP-TTP projective algorithm. Similar ideas were developed in [24,25] using other three-step iteration procedures, resulting in the so-called partially projective algorithms. They benefit from an appealing upgrade: one of the projections is computed only once per iteration. In the following, we shall adapt these ideas to the SMVI problem. To this end, we start by properly defining the framework, as well as by recalling some of the original procedures.

2. Preliminaries

In this section, we recall the formal statements of all the split-type problems listed above, starting with the most general one, the split monotone variational inclusion problem.
Let $H_1$ and $H_2$ be two real Hilbert spaces and let $B_1: H_1 \to 2^{H_1}$ and $B_2: H_2 \to 2^{H_2}$ be two multivalued mappings on the Hilbert spaces $H_1$ and $H_2$, respectively. Consider also a bounded linear operator $A: H_1 \to H_2$, as well as $f: H_1 \to H_1$ and $g: H_2 \to H_2$, two given single-valued operators. The split monotone variational inclusion problem (SMVI) was initiated by Moudafi in [1] as a generalization of the split variational inequality problem (SVI) defined and analyzed by Censor et al. in [2]. The statement of the SMVI problem is the following:
find $x \in H_1$ such that $f(x) + B_1(x) \ni 0$,
and, simultaneously,
$Ax \in H_2$ solves $g(Ax) + B_2(Ax) \ni 0$.
The SVI problem is recovered by considering C and Q, two closed and convex subsets of $H_1$ and $H_2$, respectively, and by taking the multivalued mappings as the corresponding normal cones: $B_1(x) = N_C(x)$ and $B_2(y) = N_Q(y)$. Consequently, the split variational inequality problem adopts the following statement:
find $x \in C$ such that $\langle f(x), u - x \rangle \ge 0, \ \forall u \in C$,
and, simultaneously,
$Ax \in Q$ solves $\langle g(Ax), v - Ax \rangle \ge 0, \ \forall v \in Q$.
Furthermore, by letting $f \equiv g \equiv 0$, one finds the split feasibility problem (SFP) introduced by Censor and Elfving in [3]:
find $x \in C$ such that $Ax \in Q$.
By letting $B_1$ and $B_2$ be the zero operators, one reaches the split zeroes problem (SZP), which was also introduced in [2]:
find $x \in H_1$ such that $f(x) = 0$,
and, simultaneously,
$Ax \in H_2$ solves $g(Ax) = 0$.
In particular, if $f = I - S$ and $g = I - T$, we recover the split common fixed-point problem (SCFPP):
find $x \in H_1$ such that $x \in \operatorname{Fix}(S)$,
and, simultaneously,
$Ax \in H_2$ solves $Ax \in \operatorname{Fix}(T)$.
Last, but not least, by taking f and g in the SMVI problem as zero maps, one finds an important particular case; this resulting case was also studied by Byrne et al. in [17] as a split common null-point problem (SCNPP) for two set-valued mappings:
find $x \in H_1$ such that $0 \in B_1(x)$,
and, simultaneously,
$Ax \in H_2$ with $0 \in B_2(Ax)$.
Finally, from the optimality conditions for convex minimization, it is well known that if φ is a lower semi-continuous convex function defined on a closed convex subset C of some Hilbert space H, then $x^* \in C$ minimizes φ if and only if $0 \in \partial\varphi(x^*) + N_C(x^*)$, where $\partial\varphi$ stands for the subdifferential operator (which is well known to be maximal monotone). That is why, given two closed and convex subsets C and Q and two lower semi-continuous convex functions φ and ψ, by setting $B_1(x) = \partial\varphi(x) + N_C(x)$ and $B_2(y) = \partial\psi(y) + N_Q(y)$, one reaches the split minimization problem (SMP):
find $x \in C$ such that $x \in \operatorname*{argmin}_{u \in C} \varphi(u)$,
and, simultaneously,
$Ax \in Q$ with $Ax \in \operatorname*{argmin}_{v \in Q} \psi(v)$.
In the following, we recall some standard definitions which appear naturally in connection with split-type problems and are commonly encountered in the literature.
The usual assumptions about the split monotone variational inclusion problem refer, first of all, to its consistency, meaning that the solution set
$\Omega = \{x \in H_1 : f(x) + B_1(x) \ni 0 \ \text{and} \ g(Ax) + B_2(Ax) \ni 0\}$
is nonempty. Secondly, B 1 and B 2 are usually considered to be maximal monotone operators, while f and g are assumed inverse strongly monotone. In this paper, we shall adopt these standard hypotheses too.
Definition 1
([5] and the references herein). Let H be a Hilbert space and $T: H \to H$ be a (possibly nonlinear) self-mapping of H. Then:
  • T is said to be nonexpansive if
    $\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in H.$
  • T is said to be an averaged operator if $T = (1 - \alpha)I + \alpha N$, where $\alpha \in (0, 1)$, I is the identity map and $N: H \to H$ is a nonexpansive mapping.
  • T is called monotone if
    $\langle Tx - Ty, x - y \rangle \ge 0, \quad \forall x, y \in H.$
  • Assume $\nu > 0$. Then, T is called ν-inverse strongly monotone (ν-ism) if
    $\langle Tx - Ty, x - y \rangle \ge \nu \|Tx - Ty\|^2, \quad \forall x, y \in H.$
  • Any 1-ism T is also known as being firmly nonexpansive, that is,
    $\langle Tx - Ty, x - y \rangle \ge \|Tx - Ty\|^2, \quad \forall x, y \in H.$
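As a quick numerical illustration of the last notion (a toy check of ours, not taken from the paper), the metric projection onto a closed convex set, here the closed unit ball of $\mathbb{R}^2$, is a classical example of a firmly nonexpansive operator, and the defining inequality can be verified on random sample points:

```python
import numpy as np

# Toy check: the metric projection P onto the closed unit ball of R^2 is
# firmly nonexpansive, i.e. <Px - Py, x - y> >= ||Px - Py||^2.
rng = np.random.default_rng(0)
P = lambda v: v if np.linalg.norm(v) <= 1 else v / np.linalg.norm(v)

for _ in range(200):
    x, y = rng.normal(size=2) * 3, rng.normal(size=2) * 3
    Px, Py = P(x), P(y)
    assert np.dot(Px - Py, x - y) >= np.dot(Px - Py, Px - Py) - 1e-12
```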
Several properties are worth being mentioned next.
Proposition 1
([5] and the references herein). The following statements hold true on Hilbert spaces.
(i)
Each firmly nonexpansive mapping is averaged and each averaged operator is nonexpansive.
(ii)
T is a firmly nonexpansive mapping if and only if its complement $I - T$ is firmly nonexpansive.
(iii)
The composition of a finite number of averaged operators is averaged.
(iv)
An operator N is nonexpansive if and only if its complement $I - N$ is a $\frac{1}{2}$-ism.
(v)
An operator T is averaged if and only if its complement is a ν-ism, for some $\nu > \frac{1}{2}$. Moreover, if $T = (1 - \alpha)I + \alpha N$, then $I - T$ is a $\frac{1}{2\alpha}$-ism.
(vi)
If T is a ν-ism and $\gamma > 0$, then $\gamma T$ is a $\frac{\nu}{\gamma}$-ism.
Definition 2.
Let H be a real Hilbert space. Let $B: H \to 2^H$ and $\lambda > 0$.
  • B is called a monotone mapping if
    $\langle u - v, x - y \rangle \ge 0 \quad \text{for all } u \in B(x) \text{ and } v \in B(y).$
  • B is called a maximal monotone mapping if B is monotone and its graph $\operatorname{gph}(B) = \{(x, u) \mid x \in H, \ u \in B(x)\}$ is not properly contained in the graph of any other monotone operator.
  • The resolvent of B with parameter λ is denoted and defined by $J_\lambda^B = (I + \lambda B)^{-1}$, where I is the identity operator (recall that the scalar multiplication and the addition of multivalued operators are defined as follows: $(\lambda B)(x) = \{\lambda u \mid u \in B(x)\}$ and $(B_1 + B_2)(x) = \{u_1 + u_2 \mid u_1 \in B_1(x), u_2 \in B_2(x)\}$, while the inverse $B^{-1}$ of B is the operator defined by $x \in B^{-1}(y) \Leftrightarrow y \in B(x)$).
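For a concrete feel of the resolvent (an illustration of ours, not from the paper), take $B = \partial|\cdot|$ on $\mathbb{R}$, whose resolvent is the classical soft-thresholding map; the defining relation $x \in (I + \lambda B)(J_\lambda^B x)$ can then be checked pointwise:

```python
import numpy as np

# Illustrative sketch: for B = the subdifferential of |.| on R, the resolvent
# J_lam^B = (I + lam*B)^(-1) is the classical soft-thresholding operator.
def resolvent_abs(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Check the defining relation: (x - Jx)/lam must belong to the
# subdifferential of |.| at Jx.
lam = 0.7
for x in [-3.0, -0.5, 0.2, 2.4]:
    jx = resolvent_abs(x, lam)
    sub = (x - jx) / lam
    assert abs(sub) <= 1 + 1e-12             # subdifferential is within [-1, 1]
    if jx != 0:
        assert np.isclose(sub, np.sign(jx))  # equals {sign(jx)} for jx != 0
```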
The most important properties of maximal monotone operators from the perspective of variational inclusions are given below.
Proposition 2.
If B is a maximal monotone operator, then
(i) 
$J_\lambda^B$ is a firmly nonexpansive (single-valued) operator (see [26,27]). Furthermore, according to Proposition 1 (i), it is also averaged and, ultimately, nonexpansive.
(ii) 
$0 \in f(x) + B(x) \Leftrightarrow x \in \operatorname{Fix}(J_\lambda^B(I - \lambda f))$ (an immediate consequence of the definition).
(iii) 
Let $h: H \to H$ be an α-inverse strongly monotone operator. Then, $J_\lambda^B(I - \lambda h)$ is averaged for each $\lambda \in (0, 2\alpha)$ (see [1,17]). Again, Proposition 1 (i) ensures the nonexpansiveness of $J_\lambda^B(I - \lambda h)$.
Moudafi presented in [1] an algorithm (we shall call it the SMVI-Standard Algorithm) that converges weakly to a solution of the SMVI under certain conditions. It relies on an iteration procedure defined as follows: for $\lambda > 0$ and an arbitrary initial point $x_0 \in H_1$, the sequence $\{x_n\}$ is generated by
$x_{n+1} = U[I - \gamma A^*(I - T)A]x_n, \quad n \ge 0,$
where $0 < \gamma < 1/L$, $A^*: H_2 \to H_1$ being the adjoint operator of A, while $L = \|A\|^2$ is the spectral radius (i.e., the largest eigenvalue) of the self-adjoint operator $A^*A$; moreover, $T = J_\lambda^{B_2}(I - \lambda g)$ and $U = J_\lambda^{B_1}(I - \lambda f)$.
In particular, for the split common null-point problem for two set-valued mappings, this procedure becomes: for $\lambda > 0$ and an arbitrary initial point $x_0 \in H_1$, the sequence $\{x_n\}$ is generated by
$x_{n+1} = J_\lambda^{B_1}[I - \gamma A^*(I - J_\lambda^{B_2})A]x_n, \quad n \ge 0.$
We shall further refer to it as SCNPP-Standard Algorithm.
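For readers who wish to experiment, the SMVI-Standard iteration can be sketched in a few lines of Python/NumPy (our own illustration, not part of the original analysis; the operators U and T, which in the paper are the resolvent-type mappings $J_\lambda^{B_1}(I - \lambda f)$ and $J_\lambda^{B_2}(I - \lambda g)$, are passed in as callables):

```python
import numpy as np

def smvi_standard(x0, A, U, T, gamma, tol=1e-8, max_iter=10_000):
    """Sketch of the SMVI-Standard iteration
       x_{n+1} = U[x_n - gamma * A^*(I - T)(A x_n)],
    with U, T supplied as callables and 0 < gamma < 1/||A||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Ax = A @ x
        x_new = U(x - gamma * (A.T @ (Ax - T(Ax))))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Toy instance (ours, for illustration): U = projection onto the nonnegative
# orthant, T = projection onto the unit ball, A = identity. The limit point
# should then lie in the intersection of the two sets.
U = lambda v: np.maximum(v, 0.0)
T = lambda v: v if np.linalg.norm(v) <= 1 else v / np.linalg.norm(v)
sol = smvi_standard([2.0, 3.0], np.eye(2), U, T, gamma=0.5)
assert np.all(sol >= -1e-6) and np.linalg.norm(sol) <= 1 + 1e-6
```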
The main result in [1] is stated below.
Theorem 1
([1]). Consider a bounded linear operator $A: H_1 \to H_2$, $H_1$ and $H_2$ being two Hilbert spaces. Let $f: H_1 \to H_1$ and $g: H_2 \to H_2$ be $\alpha_1$- and $\alpha_2$-inverse strongly monotone operators on $H_1$ and $H_2$, respectively, let $B_1, B_2$ be two maximal monotone operators, and set $\alpha := \min\{\alpha_1, \alpha_2\}$. Consider the operators $T = J_\lambda^{B_2}(I - \lambda g)$, $U = J_\lambda^{B_1}(I - \lambda f)$ with $\lambda \in (0, 2\alpha)$. Then, any sequence $\{x_n\}$ generated by the SMVI-Standard Algorithm weakly converges to $x \in \Omega$ (a solution for the SMVI), provided that $0 < \gamma < 1/L$.
In addition, for the particular case of a split common null-point problem (i.e., f = g 0 ), Byrne et al. [17] provided a similar result under the more relaxed condition 0 < γ < 2 / L . A strong convergence result was also stated and proved in [17].
Theorem 2
([17]). Let $H_1$ and $H_2$ be two real Hilbert spaces. Let two set-valued, odd and maximal monotone operators $B_1: H_1 \to 2^{H_1}$ and $B_2: H_2 \to 2^{H_2}$, and a bounded linear operator $A: H_1 \to H_2$ be given. If $\gamma \in (0, 2/L)$, then any sequence $\{x_n\}$ generated by the SCNPP-Standard Algorithm converges strongly to $x \in \Omega$ (a solution for the SCNPP).
The following Lemmas will provide important tools in the main chapter.
Lemma 1
([28]). If, in a Hilbert space H, the sequence $\{x_n\}$ is weakly convergent to a point x, then for any $y \ne x$, the following inequality holds true:
$\liminf_{n \to \infty} \|x_n - y\| > \liminf_{n \to \infty} \|x_n - x\|.$
Lemma 2
([28]). In a Hilbert space H, for every nonexpansive mapping $T: C \to H$ defined on a closed convex subset $C \subset H$, the mapping $I - T$ is demiclosed at 0 (if $x_n \rightharpoonup x$ and $\lim_{n \to \infty} \|x_n - Tx_n\| = 0$, then $x = Tx$).
Lemma 3
([29], Lemma 1.3). Suppose that X is a uniformly convex Banach space and $0 < p \le t_n \le q < 1$ for all $n \ge 1$ (i.e., $\{t_n\}$ is bounded away from 0 and 1). Let $\{x_n\}$ and $\{y_n\}$ be two sequences of X such that $\limsup_{n \to \infty} \|x_n\| \le r$, $\limsup_{n \to \infty} \|y_n\| \le r$ and $\lim_{n \to \infty} \|t_n x_n + (1 - t_n)y_n\| = r$ hold true for some $r \ge 0$. Then, $\lim_{n \to \infty} \|x_n - y_n\| = 0$.

3. New Three-Step Algorithm for Split Variational Inclusions

Further on, we will work under the following hypotheses: $B_1$ and $B_2$ are maximal monotone set-valued mappings, and f and g are $\alpha_1$- and $\alpha_2$-inverse strongly monotone operators, respectively. Set $\alpha := \min\{\alpha_1, \alpha_2\}$ and denote $T = J_\lambda^{B_2}(I - \lambda g)$, $U = J_\lambda^{B_1}(I - \lambda f)$ with $\lambda \in (0, 2\alpha)$. According to Proposition 2 (iii), both T and U are averaged, meaning that they are also nonexpansive.
Feng et al. initiated in [7] a three-step algorithm to solve the split feasibility problem. Their starting point was the TTP three-step iterative procedure introduced in [23] to solve the fixed-point problem for a nonexpansive mapping T. For an arbitrary initial point $x_0 \in C$, the sequence $\{x_n\}$ is generated through the TTP procedure by the iteration scheme
$u_n = (1 - \gamma_n)x_n + \gamma_n Tx_n, \quad v_n = (1 - \beta_n)u_n + \beta_n Tu_n, \quad x_{n+1} = (1 - \alpha_n)Tu_n + \alpha_n Tv_n,$
where $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$ are three real sequences in (0,1).
Turning back to Moudafi's SMVI-Standard Algorithm, we may rewrite it as follows:
$x_{n+1} = U[(1 - \gamma\|A\|^2)x_n + \gamma\|A\|^2 Sx_n], \quad n \ge 0,$
where
$S: H_1 \to H_1, \quad S = I - \frac{1}{\|A\|^2}A^*(I - T)A.$ (1)
We notice that the resulting procedure has the pattern of a Krasnosel'skii iterative process. This inspires a modification involving the TTP procedure. Moreover, we use the mapping U partially (once per iteration step, as in [24,25]). Our procedure is defined as follows: for an arbitrary initial point $x_0 \in H_1$, the sequence $\{x_n\}$ is generated by
$u_n = (1 - \gamma_n)x_n + \gamma_n Sx_n, \quad v_n = (1 - \beta_n)u_n + \beta_n Su_n, \quad x_{n+1} = U[(1 - \alpha_n)Su_n + \alpha_n Sv_n],$ (2)
where $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$ are three real sequences in (0,1). We shall refer to this iteration procedure as the SMVI-TTP Algorithm.
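A minimal sketch of the SMVI-TTP procedure in Python/NumPy (ours, for illustration; S and U are supplied as callables, and the coefficient sequences as functions of n):

```python
import numpy as np

def smvi_ttp(x0, S, U, alphas, betas, gammas, tol=1e-8, max_iter=10_000):
    """Sketch of the SMVI-TTP three-step iteration:
       u_n = (1-g_n) x_n + g_n S x_n,
       v_n = (1-b_n) u_n + b_n S u_n,
       x_{n+1} = U[(1-a_n) S u_n + a_n S v_n].
    U is applied only once per iteration step."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        a, b, g = alphas(n), betas(n), gammas(n)
        u = (1 - g) * x + g * S(x)
        Su = S(u)
        v = (1 - b) * u + b * Su
        x_new = U((1 - a) * Su + a * S(v))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Toy instance (ours): with A the identity and ||A|| = 1, the operator
# S = I - A^*(I - T)A reduces to T; here T projects onto the unit ball
# and U onto the nonnegative orthant, so the limit lies in their intersection.
T = lambda v: v if np.linalg.norm(v) <= 1 else v / np.linalg.norm(v)
U = lambda v: np.maximum(v, 0.0)
x_sol = smvi_ttp([2.0, 3.0], T, U, lambda n: 0.5, lambda n: 0.5, lambda n: 0.5)
assert np.all(x_sol >= -1e-6) and np.linalg.norm(x_sol) <= 1 + 1e-6
```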
We start our approach with some fundamental Lemmas.
Lemma 4.
If S is defined by relation (1), then $\Omega = F(U) \cap F(S)$, where F(U) and F(S) denote the sets of fixed points of the operators U and S, respectively.
Proof. 
According to Proposition 2 (ii), $\Omega = \{x \in H_1 : x \in F(U) \ \text{and} \ Ax \in F(T)\}$. Moreover, because $S = I - \frac{1}{\|A\|^2}A^*(I - T)A$, if Ax is a fixed point of T, it follows that x is a fixed point of S. Therefore, $\Omega \subset F(U) \cap F(S)$. Let us prove next the converse inclusion.
Let $x \in F(U) \cap F(S)$. It follows that $A^*(I - T)Ax = 0$. Let $\omega = (I - T)Ax$. We wish to prove that $\omega = 0$. Assume the contrary, i.e., $\omega \ne 0$. Let $z \in \Omega$. Using the fact that $A^*\omega = 0$ and $TAz = Az$, we find
$\|TAx - TAz\|^2 = \|Ax - \omega - Az\|^2 = \|Ax - Az\|^2 + \|\omega\|^2 - 2\langle Ax - Az, \omega \rangle = \|Ax - Az\|^2 + \|\omega\|^2 - 2\langle x - z, A^*\omega \rangle = \|Ax - Az\|^2 + \|\omega\|^2.$
Because $\omega \ne 0$, it follows that
$\|TAx - TAz\| > \|Ax - Az\|,$
which contradicts the fact that T is nonexpansive (see Proposition 2 (iii)).
In conclusion, $\omega = Ax - TAx = 0$, and the proof is complete. □
Lemma 5.
The mapping S defined by relation (1) is nonexpansive.
Proof. 
According to Proposition 2 (iii), T is averaged. It follows from Proposition 1 (v) that $I - T$ is a ν-ism, for some $\nu > \frac{1}{2}$. Therefore,
$\langle A^*(I - T)Ax - A^*(I - T)Ay, x - y \rangle = \langle (I - T)Ax - (I - T)Ay, Ax - Ay \rangle \ge \nu\|(I - T)Ax - (I - T)Ay\|^2 \ge \frac{\nu}{\|A\|^2}\|A^*(I - T)Ax - A^*(I - T)Ay\|^2;$
hence, $A^*(I - T)A$ is a $\frac{\nu}{\|A\|^2}$-ism. Moreover, from Proposition 1 (vi), it follows that $\frac{1}{\|A\|^2}A^*(I - T)A$ is a ν-ism. Applying Proposition 1 (v) again, we obtain that $S = I - \frac{1}{\|A\|^2}A^*(I - T)A$ is averaged and also nonexpansive. □
Lemma 6.
Let $\{x_n\}$ be the sequence generated by the SMVI-TTP Algorithm (2). Then, $\lim_{n \to \infty} \|x_n - p\|$ exists for any $p \in \Omega$.
Proof. 
Let $p \in \Omega = F(U) \cap F(S)$ (according to Lemma 4). Because S is nonexpansive, it follows that S is also quasi-nonexpansive, i.e., $\|Sx - p\| \le \|x - p\|$ for each $x \in H_1$. Thus, the iteration procedure (2) leads to
$\|u_n - p\| = \|(1 - \gamma_n)x_n + \gamma_n Sx_n - p\| = \|(1 - \gamma_n)(x_n - p) + \gamma_n(Sx_n - p)\| \le (1 - \gamma_n)\|x_n - p\| + \gamma_n\|Sx_n - p\| \le (1 - \gamma_n)\|x_n - p\| + \gamma_n\|x_n - p\| = \|x_n - p\|.$ (3)
The same reasoning applies to $\|v_n - p\|$, and one obtains
$\|v_n - p\| = \|(1 - \beta_n)u_n + \beta_n Su_n - p\| = \|(1 - \beta_n)(u_n - p) + \beta_n(Su_n - p)\| \le (1 - \beta_n)\|u_n - p\| + \beta_n\|Su_n - p\| \le (1 - \beta_n)\|u_n - p\| + \beta_n\|u_n - p\| = \|u_n - p\|.$ (4)
Now, using inequality (3), one finds
$\|v_n - p\| \le \|x_n - p\|.$ (5)
In addition, using the nonexpansiveness of U (according to Proposition 2 (iii)) and the fact that p is a fixed point of U, we find that
$\|x_{n+1} - p\| = \|U[(1 - \alpha_n)Su_n + \alpha_n Sv_n] - p\| \le \|(1 - \alpha_n)Su_n + \alpha_n Sv_n - p\| = \|(1 - \alpha_n)(Su_n - p) + \alpha_n(Sv_n - p)\| \le (1 - \alpha_n)\|u_n - p\| + \alpha_n\|v_n - p\|,$ (6)
and together with (3) and (5), these lead to
$\|x_{n+1} - p\| \le (1 - \alpha_n)\|x_n - p\| + \alpha_n\|x_n - p\| = \|x_n - p\|.$ (7)
We conclude from (7) that $\{\|x_n - p\|\}$ is nonincreasing and bounded below for all $p \in \Omega$. Hence, $\lim_{n \to \infty} \|x_n - p\|$ exists. □
Lemma 7.
Let $\{x_n\}$ be the sequence generated by the SMVI-TTP Algorithm (2), with $\{\gamma_n\}$ bounded away from 0 and 1. Then:
(i) 
$\lim_{n \to \infty} \|x_n - Sx_n\| = 0$;
(ii) 
$\lim_{n \to \infty} \|x_n - Ux_n\| = 0$.
Proof. 
(i) Let $p \in \Omega = F(U) \cap F(S)$. By Lemma 6, it follows that $\lim_{n \to \infty} \|x_n - p\|$ exists. Let us denote
$r = \lim_{n \to \infty} \|x_n - p\|.$ (8)
From (3), it is known that $\|u_n - p\| \le \|x_n - p\|$. Taking lim sup on both sides of the inequality, one obtains
$\limsup_{n \to \infty} \|u_n - p\| \le \limsup_{n \to \infty} \|x_n - p\| = r.$ (9)
Again, because S is quasi-nonexpansive, one has
$\limsup_{n \to \infty} \|Sx_n - p\| \le \limsup_{n \to \infty} \|x_n - p\| = r.$ (10)
Now, inequality (6) combined with (4) leads to
$\|x_{n+1} - p\| \le (1 - \alpha_n)\|u_n - p\| + \alpha_n\|v_n - p\| \le \|u_n - p\|.$ (11)
Applying lim sup to (11) and using (8) together with (9), one obtains
$r = \limsup_{n \to \infty} \|x_{n+1} - p\| \le \limsup_{n \to \infty} \|u_n - p\| \le r,$
which implies
$\limsup_{n \to \infty} \|u_n - p\| = r.$ (12)
Relation (12) can be rewritten as
$\limsup_{n \to \infty} \|u_n - p\| = \limsup_{n \to \infty} \|(1 - \gamma_n)x_n + \gamma_n Sx_n - p\| = \limsup_{n \to \infty} \|(1 - \gamma_n)(x_n - p) + \gamma_n(Sx_n - p)\| = r.$
From (8), (10), (12) and Lemma 3, one finds $\lim_{n \to \infty} \|Sx_n - x_n\| = 0$.
(ii) To prove the second part of the Lemma, we start with some additional limits. First of all, using the definition of $u_n$ from scheme (2), one finds
$\|u_n - x_n\| = \|\gamma_n(Sx_n - x_n)\| = \gamma_n\|Sx_n - x_n\|,$
so
$\lim_{n \to \infty} \|u_n - x_n\| = 0.$ (13)
Furthermore, again using the nonexpansiveness of S, as well as relation (13), we obtain
$\|Su_n - x_n\| \le \|Su_n - Sx_n\| + \|Sx_n - x_n\| \le \|u_n - x_n\| + \|Sx_n - x_n\|,$
from which it follows that
$\lim_{n \to \infty} \|Su_n - x_n\| = 0.$ (14)
Moreover, combining (13) and (14), we also obtain
$\|Su_n - u_n\| \le \|Su_n - x_n\| + \|x_n - u_n\|,$
leading to
$\lim_{n \to \infty} \|Su_n - u_n\| = 0.$ (15)
Similarly, by considering the definition of $\{v_n\}$, we obtain
$\lim_{n \to \infty} \|Sv_n - u_n\| = 0,$
which, together with the limit (13), finally leads to
$\|Sv_n - x_n\| \le \|Sv_n - u_n\| + \|u_n - x_n\|;$
hence
$\lim_{n \to \infty} \|Sv_n - x_n\| = 0.$ (16)
Next, we use the results included in relations (14) and (16) and the nonexpansiveness of U to evaluate $\|x_{n+1} - Ux_n\|$. We have
$\|x_{n+1} - Ux_n\| = \|U[(1 - \alpha_n)Su_n + \alpha_n Sv_n] - Ux_n\| \le \|(1 - \alpha_n)Su_n + \alpha_n Sv_n - x_n\| \le (1 - \alpha_n)\|Su_n - x_n\| + \alpha_n\|Sv_n - x_n\|,$
and therefore
$\lim_{n \to \infty} \|x_{n+1} - Ux_n\| = 0.$ (17)
Moreover,
$\|x_{n+1} - p\| \le \|x_{n+1} - Ux_n\| + \|Ux_n - p\| \le \|x_{n+1} - Ux_n\| + \|x_n - p\|,$
and by taking the limit $n \to \infty$, based on (17), we obtain
$\lim_{n \to \infty} \|Ux_n - p\| = \lim_{n \to \infty} \|x_n - p\| = r.$ (18)
Next, we shall support our proof on the following identity, which relates an operator F to its complement $G = I - F$ (see [5]):
$\|x - y\|^2 - \|Fx - Fy\|^2 = 2\langle Gx - Gy, x - y \rangle - \|Gx - Gy\|^2.$ (19)
Applying identity (19) to the mapping U and taking $x = x_n$ and $y = p$, we find
$\|x_n - p\|^2 - \|Ux_n - p\|^2 = 2\langle (I - U)x_n - (I - U)p, x_n - p \rangle - \|(I - U)x_n - (I - U)p\|^2.$ (20)
Let us also recall that U is an averaged mapping (see Proposition 2 (iii)); so, according to Proposition 1 (v), its complement $I - U$ is a ν-ism, for some $\nu > \frac{1}{2}$. From Definition 1, we may conclude that
$\langle (I - U)x_n - (I - U)p, x_n - p \rangle \ge \nu\|(I - U)x_n - (I - U)p\|^2.$
By turning back to identity (20) and using the fact that p is a fixed point of U, we obtain
$\|x_n - p\|^2 - \|Ux_n - p\|^2 \ge (2\nu - 1)\|x_n - Ux_n\|^2.$
Based on (18), we may conclude that $\lim_{n \to \infty} \|x_n - Ux_n\| = 0$. □
Theorem 3.
Let { x n } be the sequence generated by the SMVI-TTP Algorithm (2), with { γ n } bounded away from 0 and 1. Then, { x n } is weakly convergent to a point of Ω.
Proof. 
One immediate consequence of Lemma 6 is that $\{x_n\}$ is bounded. Consequently, there exists at least one weakly convergent subsequence. Let
$\omega_w(x_n) = \{x \in H_1 : \text{there exists a subsequence } \{x_{n_i}\} \text{ weakly convergent to } x\}$
denote the weak subsequential limit set of the sequence $\{x_n\}$. Then, $\omega_w(x_n)$ is nonempty. We prove next that it contains exactly one weak limit point. To start, let us assume the contrary: let $x, y \in \omega_w(x_n)$, $x \ne y$, and let $x_{n_i} \rightharpoonup x$ and $x_{n_j} \rightharpoonup y$. By Lemma 7, we have $\lim_{i \to \infty} \|x_{n_i} - Sx_{n_i}\| = 0$ and $\lim_{i \to \infty} \|x_{n_i} - Ux_{n_i}\| = 0$, where S and U are nonexpansive mappings (see Lemma 5). Applying Lemma 2, we find that $Sx = x$ and $Ux = x$; hence, $x \in F(U) \cap F(S) = \Omega$. Similar arguments provide $y \in \Omega$. In general, $\omega_w(x_n) \subset \Omega$.
From Lemma 6, the sequences $\{\|x_n - x\|\}$ and $\{\|x_n - y\|\}$ are convergent. These properties, together with Lemma 1, generate the following inequalities:
$\lim_{n \to \infty} \|x_n - x\| = \lim_{i \to \infty} \|x_{n_i} - x\| < \lim_{i \to \infty} \|x_{n_i} - y\| = \lim_{n \to \infty} \|x_n - y\| = \lim_{j \to \infty} \|x_{n_j} - y\| < \lim_{j \to \infty} \|x_{n_j} - x\| = \lim_{n \to \infty} \|x_n - x\|.$
This provides the expected contradiction. Hence, $\omega_w(x_n)$ is a singleton. Let $\omega_w(x_n) = \{p\}$. We just need to prove that $x_n \rightharpoonup p$. Assume the contrary. Then, for a certain point $y_0 \in H_1$, there exists $\epsilon > 0$ such that, for all $k \in \mathbb{N}$, one can find $n_k \ge k$ satisfying $|\langle x_{n_k} - p, y_0 \rangle| > \epsilon$. The resulting subsequence $\{x_{n_k}\}$ is itself bounded (because $\{x_n\}$ is bounded); hence, it contains a weakly convergent subsequence $\{x_{n_{k_l}}\}$. However, this new subsequence is also a weakly convergent subsequence of $\{x_n\}$; hence, its weak limit must be p. Taking $l \to \infty$ in the inequality
$|\langle x_{n_{k_l}} - p, y_0 \rangle| > \epsilon,$
one finds $0 \ge \epsilon > 0$, a contradiction. Hence, $x_n \rightharpoonup p \in \Omega$. □

4. Mixed Optimization–Feasibility Systems with Simulation

Let us suppose that, from the minimizers of some given function φ (assuming there is more than one such optimizing element), we wish to select those having a particular feature. We obtain an optimization–feasibility system. For instance, consider a system of the following type:
find $x \in H_1$ such that $x \in \operatorname*{argmin}_u \varphi(u)$ and $Ax \in Q \subset H_2$,
where $H_1$ and $H_2$ denote some Hilbert spaces, Q is a nonempty, closed and convex subset of $H_2$, $A: H_1 \to H_2$ is a bounded linear operator and φ is a proper, lower semi-continuous and convex function on $H_1$.
This problem can be regarded as a split variational inclusion by taking $f = 0$ and $g = 0$, $B_1 = \partial\varphi$ and $B_2 = N_Q$. Let us notice that f and g are ν-inverse strongly monotone for each $\nu > 0$. Because φ is proper, lower semi-continuous and convex, $B_1$ is a maximal monotone operator. Moreover, $B_2$ is maximal monotone provided that Q is closed and convex. In addition, for an arbitrarily selected parameter $\lambda > 0$,
$Ux = J_\lambda^{B_1}x = \operatorname*{argmin}_{u \in H_1}\left\{\lambda\varphi(u) + \frac{1}{2}\|x - u\|^2\right\} = \operatorname{Prox}_{\lambda\varphi}x,$
that is, the proximal mapping defined by Moreau in [30], and
$Ty = J_\lambda^{B_2}y = P_Q y,$
making
$S = I - \frac{1}{\|A\|^2}A^*(I - P_Q)A.$
Adapting the SMVI-TTP Algorithm and the SMVI-Standard Algorithm to this particular setting, we obtain the following proximal procedures to solve the optimization–feasibility system:
  • the Prox-TTP Algorithm: for an arbitrary initial point $x_0 \in H_1$, the sequence $\{x_n\}$ is generated by
    $u_n = (1 - \gamma_n)x_n + \gamma_n Sx_n, \quad v_n = (1 - \beta_n)u_n + \beta_n Su_n, \quad x_{n+1} = \operatorname{Prox}_{\lambda\varphi}[(1 - \alpha_n)Su_n + \alpha_n Sv_n], \quad n \ge 0,$ (21)
    where $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$ are three real sequences in (0,1) and $\lambda > 0$;
  • the Prox-Standard Algorithm: for an arbitrary initial point $x_0 \in H_1$, the sequence $\{x_n\}$ is generated by
    $x_{n+1} = \operatorname{Prox}_{\lambda\varphi}[I - \gamma A^*(I - P_Q)A]x_n, \quad n \ge 0,$ (22)
    where $0 < \gamma < \frac{1}{\|A\|^2}$ and $\lambda > 0$.
Based on the results obtained in the previous section, any sequence { x n } resulting from the previous procedures is weakly convergent to a solution of the mixed system.
In particular, we can apply the same algorithms for solving a mixed linear–feasibility system of the following type:
find $x \in \mathbb{R}^n$ such that $Mx = b$ and $Ax \in Q$,
where M is a $k \times n$ real matrix ($k < n$), $b \in \mathbb{R}^k$, A is an $m \times n$ real matrix and Q is a closed and convex subset of $\mathbb{R}^m$. Using the least-squares function, we can rephrase the problem as an optimization–feasibility system:
find $x \in \mathbb{R}^n$ such that $x \in \operatorname*{argmin}_u \frac{1}{2}\|Mu - b\|^2$ and $Ax \in Q$.
In this particular case, we have $\varphi(x) = \frac{1}{2}\|Mx - b\|^2$, so
$\operatorname{Prox}_{\lambda\varphi}x = (I + \lambda M^T M)^{-1}(x + \lambda M^T b).$ (23)
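The closed-form prox above amounts to solving one linear system per evaluation. A minimal NumPy sketch (ours; the sample M, b, x are arbitrary illustration data), together with a check of the first-order optimality condition $u - x + \lambda M^T(Mu - b) = 0$ that characterizes the prox point:

```python
import numpy as np

def prox_least_squares(x, M, b, lam):
    # Prox of lam * (1/2)||Mu - b||^2: solve (I + lam M^T M) u = x + lam M^T b.
    n = M.shape[1]
    return np.linalg.solve(np.eye(n) + lam * (M.T @ M), x + lam * (M.T @ b))

# Arbitrary sample data, for illustration only.
M = np.array([[1.0, -1.0]])
b = np.array([-1.0])
x = np.array([1.0, 1.0])
u = prox_least_squares(x, M, b, lam=1.0)
# The prox point must satisfy the first-order optimality condition.
assert np.allclose(u - x + 1.0 * (M.T @ (M @ u - b)), 0.0)
```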
Example 1.
Consider the mixed system
$x_1 - x_2 + 1 = 0, \quad \|Ax\| \le 1,$
where $\|\cdot\|$ denotes the Euclidean norm ($l_2$-norm) on $\mathbb{R}^3$ and $A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \\ 3 & 1 \end{pmatrix}$. We notice that it is a linear–feasibility-type system, obtained by identifying M with the row matrix $(1 \ {-1})$, $b = -1$ and Q with the $l_2$-unit ball of $\mathbb{R}^3$. Moreover,
$P_Q(y) = \begin{cases} y, & \|y\| \le 1, \\ \frac{y}{\|y\|}, & \|y\| > 1, \end{cases}$
while $\operatorname{Prox}_{\lambda\varphi}x$ will be computed using Formula (23), for each particular value assigned to the parameter λ.
To find a solution, we repeatedly apply the Prox-TTP Algorithm (21) until the distance between two consecutive estimates falls below a certain allowable error, say $\varepsilon = 10^{-5}$, and count the number of iterations performed, $n^*$. For this, we need to assign values to the parameters involved in the procedure. For instance, by choosing $\lambda = 1$ and the initial estimate $x_0 = (1, 1)$, as well as the iteration step coefficients $\alpha_n = \frac{2n+1}{4n+5}$, $\beta_n = \frac{2n+1}{12n+8}$ and $\gamma_n = \frac{2n+2}{3n+4}$, the algorithm reaches the approximate solution $x^* = \left(-\frac{1}{2}, \frac{1}{2}\right)$ after $n^* = 17$ iterations.
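The experiment can be replayed with a short NumPy script (ours; note that the signs of M, b and the entries of A are our reconstruction of the example data, and constant coefficients 1/2 are used, so the iteration count need not match the value reported above):

```python
import numpy as np

# Reconstructed data for the mixed system (signs are our assumption).
M = np.array([[1.0, -1.0]]); b = np.array([-1.0])
A = np.array([[1.0, 1.0], [1.0, 1.0], [3.0, 1.0]])
lam = 1.0
L = np.linalg.norm(A, 2) ** 2                 # ||A||^2 (squared spectral norm)

prox = lambda x: np.linalg.solve(np.eye(2) + lam * (M.T @ M), x + lam * (M.T @ b))
P_Q = lambda y: y if np.linalg.norm(y) <= 1 else y / np.linalg.norm(y)
S = lambda x: x - (A.T @ (A @ x - P_Q(A @ x))) / L

x = np.array([1.0, 1.0])
for n in range(50_000):
    al = be = ga = 0.5                        # constant coefficients in (0,1)
    u = (1 - ga) * x + ga * S(x)
    Su = S(u)
    v = (1 - be) * u + be * Su
    x_new = prox((1 - al) * Su + al * S(v))
    if np.linalg.norm(x_new - x) < 1e-10:
        x = x_new
        break
    x = x_new

assert abs(x[0] - x[1] + 1) < 1e-3            # optimality: x1 - x2 + 1 = 0
assert np.linalg.norm(A @ x) <= 1 + 1e-3      # feasibility: Ax in the unit ball
```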
It would be interesting to see what happens if the parametric features of the algorithm or the initial estimation are changed. The resulting data, for different parametric assignments, are included in Table 1.
By comparing the first three lines, we notice that the approximate solution obtained, starting from a fixed initial estimate, does not seem to be affected by the choice of the iteration step coefficients. Therefore, to simplify the procedure, we could use constant coefficients. Comparing lines 3, 4 and 5, we may conclude that the larger λ is, the smaller $n^*$, the number of required iterations. Moreover, lines 5, 6 and 7 tell us that the problem has multiple solutions. As expected, not all the solutions of the equation $x_1 - x_2 + 1 = 0$ also satisfy the feasibility condition so as to provide a solution for the system. The initial estimate $x_0 = \left(-\frac{1}{4}, \frac{3}{4}\right)$ is itself a solution for the system (as pointed out in line 7), while $x_0 = (0, 1)$ is not (see line 6).
Another interesting issue with the newly introduced algorithm is its efficiency compared to other procedures. We will apply Prox-TTP Algorithm (21), as well as Prox-Standard Algorithm (22) and compare the results. In order to have a global picture about the convergence behavior of these two algorithms, we shall apply a special technique, called polynomiography. Generally, this means that instead of analyzing the resulting approximate solutions and the required number of iterations starting from a single particular initial estimate x 0 , we will choose an entire region of R 2 and take each point of the region as a starting point. Then, for each such initial approximation, we count the number of iterations required for its orbit to reach a system solution. Depending on this number, we assign to that particular starting element a color. The color–number correspondence is usually set through a colorbar that accompanies the picture. The result is a colored image in which the particular color of a pixel encodes the number of iterations needed to obtain a solution when the algorithm is set to start from that point. Obviously, the color corresponding to one iteration will define the solution set itself.
In our example, we choose the square region $[-1, 1] \times [-1, 1]$ and set the inputs for the algorithms as follows: the admissible error controlling the exit criterion is $\varepsilon = 10^{-5}$; the iteration step coefficients are set to the constants $\alpha_n = \beta_n = \gamma_n = \frac{1}{2}$ for the Prox-TTP procedure and $\gamma = \frac{1}{8\|A\|^2}$ for the Prox-Standard procedure; and the resolvent parameter is chosen as $\lambda = 1$. In addition to the error-related exit command, we consider an extra stopping condition: if the prescribed accuracy is not reached after 30 iterative steps, the algorithm is set to break. This helps us avoid infinite loops or very slow processes corresponding to those initial points which generate slowly convergent iteration sequences. For these points, we supplement the colorbar with white. We also assign the color black to points for which only one iteration is required (the approximate solutions). The resulting polynomiographs are included in Figure 1 and Figure 2.
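The polynomiographic scan itself is straightforward to code. The sketch below (ours) iterates a given one-step map from every grid point of $[-1, 1] \times [-1, 1]$ and records the number of steps needed to reach the prescribed accuracy, capped at 30 (the cap encoding the white region); the placeholder step map is a simple contraction, purely for illustration:

```python
import numpy as np

def iteration_counts(step, lo=-1.0, hi=1.0, n=50, eps=1e-5, cap=30):
    # For each grid point, count iterations of `step` until two consecutive
    # iterates are closer than eps, stopping after at most `cap` steps.
    xs = np.linspace(lo, hi, n)
    counts = np.empty((n, n), dtype=int)
    for i, x1 in enumerate(xs):
        for j, x2 in enumerate(xs):
            x = np.array([x1, x2])
            for k in range(1, cap + 1):
                x_new = step(x)
                if np.linalg.norm(x_new - x) < eps:
                    break
                x = x_new
            counts[i, j] = k              # k == cap encodes "white"
    return counts

counts = iteration_counts(lambda x: 0.5 * x)  # placeholder contraction map
assert counts.min() >= 1 and counts.max() <= 30
```

The resulting counts array can then be rendered with any colormap to produce pictures in the style of Figure 1 and Figure 2.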
Analyzing the two images, the most important conclusion refers to the system solutions. As expected, both algorithms provide the same image of the solution set (the segment marked by the dark line). Moreover, the TTP procedure appears more efficient, because it generally uses colors from the bottom of the palette, while the standard procedure also uses colors from the top, as well as white (indicating slow convergence). However, we must not forget that these images resulted from a particular selection of inputs ($\lambda$, $\alpha_n$, $\beta_n$, $\gamma_n$, $\gamma$ and $\varepsilon$). It is not excluded that, with a different choice of inputs, the standard procedure could become more efficient.

5. Conclusions

The standard algorithm that Moudafi suggested for the split variational inclusion problem involves two control parameters, γ and λ, so simply adjusting their values may already yield more efficient procedures. It is natural to expect that the performance can be controlled even better when more parameters are involved. The three-step iteration procedure analyzed here includes four control inputs: λ and the iteration coefficients α n , β n and γ n . Using polynomiography, we showed that a particular selection of these parameters improves the overall convergence behavior of the approximating sequences. More importantly, two new types of systems were addressed as examples of SMVI problems; they combine an optimization problem and a linear subsystem, respectively, with a feasibility condition. For a particular experiment, we verified that the standard Moudafi procedure and the newly introduced algorithm provide matching images of the solution set.

Author Contributions

Conceptualization, A.B. and M.P.; software, A.B.; validation, A.B.; formal analysis, A.B. and M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
2. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
3. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
4. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
5. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
6. Dang, Y.Z.; Gao, Y. The strong convergence of a three-step algorithm for the split feasibility problem. Optim. Lett. 2013, 7, 1325–1339.
7. Feng, M.; Shi, L.; Chen, R. A new three-step iterative algorithm for solving the split feasibility problem. Univ. Politeh. Buch. Ser. A 2019, 81, 93–102.
8. Sahu, D.R.; Pitea, A.; Verma, M. A new iteration technique for nonlinear operators as concerns convex programming and feasibility problems. Numer. Algorithms 2020, 83, 421–449.
9. Vuong, P.T.; Strodiot, J.J.; Nguyen, V.H. A gradient projection method for solving split equality and split feasibility problems in Hilbert spaces. Optimization 2014, 64, 2321–2341.
10. Wang, F.; Xu, H.K. Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 2010, 102085.
11. Xu, H.K. A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034.
12. Yao, Y.; Postolache, M.; Liou, Y.C. Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 2013, 201.
13. Yao, Y.; Postolache, M.; Zhu, Z. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2020, 69, 269–281.
14. Hamdi, A.; Liou, Y.C.; Yao, Y.; Luo, C. The common solutions of the split feasibility problems and fixed point problems. J. Inequal. Appl. 2015, 2015, 385.
15. Tian, D.; Jiang, L.; Shi, L. Gradient methods with selection technique for the multiple-sets split equality problem. Mathematics 2019, 7, 928.
16. Xu, H.K.; Cegielski, A. The Landweber operator approach to the split equality problem. SIAM J. Optim. 2021, 31, 626–652.
17. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775.
18. Xiong, J.F.; Ma, Z.L.; Zhang, L.S. Convergence theorems for the split variational inclusion problem in Hilbert spaces. J. Nonlinear Funct. Anal. 2021, 40, 1–12.
19. Luo, Y. An inertial splitting algorithm for solving inclusion problems and its applications to compressed sensing. J. Appl. Numer. Optim. 2020, 2, 279–295.
20. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
21. Martinet, B. Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Autom. Inform. Rech. Opér. 1970, 4, 154–158.
22. Agarwal, R.P.; Verma, R.U. General implicit variational inclusion problems based on A-maximal (m)-relaxed monotonicity (AMRM) framework. Appl. Math. Comput. 2009, 215, 367–379.
23. Thakur, B.S.; Thakur, D.; Postolache, M. A new iteration scheme for approximating fixed points of nonexpansive mappings. Filomat 2016, 30, 2711–2720.
24. Bejenaru, A.; Ciobanescu, C. New partially projective algorithm for split feasibility problems with applications to BVP. J. Nonlinear Convex Anal. 2022, 23, 485–500.
25. Usurelu, G.I. Split feasibility handled by a single-projection three-step iteration with comparative analysis. J. Nonlinear Convex Anal. 2021, 22, 543–557.
26. Bauschke, H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: New York, NY, USA, 2017.
27. Minty, G.J. Monotone (nonlinear) operators in Hilbert space. Duke Math. J. 1962, 29, 341–346.
28. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
29. Schu, J. Weak and strong convergence to fixed points of asymptotically nonexpansive mappings. Bull. Austral. Math. Soc. 1991, 43, 153–159.
30. Moreau, J.J. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. Fr. 1965, 93, 273–299.
Figure 1. Convergence behavior via the Prox-TTP Algorithm.
Figure 2. Convergence behavior via the Prox-Standard Algorithm.
Table 1. The resulting solutions for different parametric assignments.

| No. | x 0 | ( α n , β n , γ n ) | λ | x* | n* |
|-----|-----|----------------------|---|----|----|
| 1 | (1, 1) | ( (2n+1)/(4n+5), ∛((2n+1)/(12n+8)), (2n+2)/(3n+4) ) | 1 | (0.5, 0.5) | 17 |
| 2 | (1, 1) | ( 1/(n+1), n/(n+1), 1/2 ) | 1 | (0.5, 0.5) | 17 |
| 3 | (1, 1) | ( 1/2, 1/2, 1/2 ) | 1 | (0.5, 0.5) | 17 |
| 4 | (1, 1) | ( 1/2, 1/2, 1/2 ) | 0.5 | (0.5, 0.5) | 23 |
| 5 | (1, 1) | ( 1/2, 1/2, 1/2 ) | 2 | (0.5, 0.5) | 14 |
| 6 | (0, 1) | ( 1/2, 1/2, 1/2 ) | 2 | (0.1667, 0.8333) | 14 |
| 7 | (0.25, 0.75) | ( 1/2, 1/2, 1/2 ) | 2 | (0.25, 0.75) | 1 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
