Article

An Inertial Generalized Viscosity Approximation Method for Solving Multiple-Sets Split Feasibility Problems and Common Fixed Point of Strictly Pseudo-Nonspreading Mappings

by Hammed Anuoluwapo Abass 1,2 and Lateef Olakunle Jolaoso 3,*
1 School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4001, South Africa
2 DSI-NRF Center of Excellence in Mathematical and Statistical Sciences (CoE-MaSS), Johannesburg 2193, South Africa
3 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Pretoria 0204, South Africa
* Author to whom correspondence should be addressed.
Submission received: 30 November 2020 / Revised: 14 December 2020 / Accepted: 14 December 2020 / Published: 24 December 2020
(This article belongs to the Special Issue Nonlinear Analysis and Optimization with Applications)

Abstract: In this paper, we propose a generalized viscosity iterative algorithm which includes a sequence of contractions and a self-adaptive step size for approximating a common solution of a multiple-sets split feasibility problem and a fixed point problem for a countable family of k-strictly pseudo-nonspreading mappings in the framework of real Hilbert spaces. The advantage of the step size introduced in our algorithm is that it does not require computation of the Lipschitz constant of the gradient operator, which is very difficult in practice. We also introduce an inertial version of the generalized viscosity approximation method with a self-adaptive step size. We prove strong convergence results for the sequences generated by the algorithms for solving the aforementioned problems and present some numerical examples to show the efficiency and accuracy of our algorithms. The results presented in this paper extend and complement many recent results in the literature.

1. Introduction

The problem of finding a point in the intersection of closed and convex subsets of a real Hilbert space arises frequently in diverse areas of mathematics and the physical sciences. This problem is commonly referred to as the Convex Feasibility Problem (shortly, CFP), and it finds applications in various disciplines such as image restoration, computed tomography and radiation therapy treatment planning; see [1]. A generalization of the CFP is the Split Feasibility Problem (SFP), which was introduced by Censor and Elfving [2] and is defined as finding a point in a nonempty closed convex set whose image under a bounded operator lies in another set. Mathematically, the SFP can be formulated as:
$$\text{find } x^* \in C \text{ such that } Ax^* \in Q,$$ (1)
where C and Q are nonempty closed convex subsets of $\mathbb{R}^N$ and $\mathbb{R}^M$, respectively, and A is a given $M \times N$ real matrix. The SFP also models inverse problems arising from phase retrieval and intensity-modulated radiation therapy [2]. Censor et al. [3] further introduced another generalization of the CFP and SFP called the Multiple-Sets Split Feasibility Problem (MSSFP), which is formulated as
$$\text{find } x^* \in C := \bigcap_{i=1}^{k} C_i \text{ such that } Ax^* \in Q := \bigcap_{j=1}^{t} Q_j,$$ (2)
where $k \ge 1$ and $t \ge 1$ are given integers, A is a given $M \times N$ real matrix with $A^*$ its transpose, and $\{C_i\}_{i=1}^{k}$ and $\{Q_j\}_{j=1}^{t}$ are nonempty closed convex subsets of $\mathbb{R}^N$ and $\mathbb{R}^M$, respectively. Observe that when $k = t = 1$, the MSSFP reduces to the SFP (1). In this paper, we focus on the MSSFP in a unified framework. We denote the set of solutions of (2) by $\Omega$ and assume that $\Omega$ is consistent (i.e., nonempty). It is well known that the MSSFP is equivalent to the following minimization problem:
$$\min_{x}\ \frac{1}{2}\|x - P_C(x)\|^2 + \frac{1}{2}\|Ax - P_Q(Ax)\|^2,$$ (3)
where P C and P Q are the orthogonal projections onto C and Q respectively. For solving (3), Censor et al. [3] defined a proximity function p ( x ) for measuring the distance of a point to all sets as follows:
$$p(x) := \frac{1}{2}\sum_{i=1}^{k}\alpha_i\|x - P_{C_i}(x)\|^2 + \frac{1}{2}\sum_{j=1}^{t}\beta_j\|Ax - P_{Q_j}(Ax)\|^2,$$ (4)
where $\alpha_i > 0$ and $\beta_j > 0$ for all i and j, respectively, and $\sum_{i=1}^{k}\alpha_i + \sum_{j=1}^{t}\beta_j = 1$. It is easy to see that
$$\nabla p(x) = \sum_{i=1}^{k}\alpha_i\big(x - P_{C_i}(x)\big) + \sum_{j=1}^{t}\beta_j\,A^*\big(I - P_{Q_j}\big)Ax.$$
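To make (4) and its gradient concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper; the projection callables proj_Cs and proj_Qs are assumed to be supplied by the user):

```python
import numpy as np

def proximity(x, A, proj_Cs, proj_Qs, alphas, betas):
    """Evaluate the proximity function p(x) of (4) and its gradient."""
    Ax = A @ x
    p, grad = 0.0, np.zeros_like(x)
    for a_i, PC in zip(alphas, proj_Cs):
        r = x - PC(x)                    # residual x - P_{C_i}(x)
        p += 0.5 * a_i * (r @ r)
        grad += a_i * r
    for b_j, PQ in zip(betas, proj_Qs):
        s = Ax - PQ(Ax)                  # residual (I - P_{Q_j}) A x
        p += 0.5 * b_j * (s @ s)
        grad += b_j * (A.T @ s)          # A^* (I - P_{Q_j}) A x
    return p, grad
```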
Censor et al. [3] also introduced the following projection method for solving the MSSFP:
$$x_{n+1} = P_\Omega\big(x_n - s\,\nabla p(x_n)\big),$$ (5)
where s is a positive scalar. They further proved the weak convergence of (5) under the condition that the stepsize s satisfies
$$0 < s_L \le s \le s_U < \frac{2}{L},$$
where $L = \sum_{i=1}^{k}\alpha_i + \rho(A^*A)\sum_{j=1}^{t}\beta_j$ is the Lipschitz constant of $\nabla p$. A major setback of (5) is that the algorithm uses a fixed stepsize which is restricted by the Lipschitz constant (this depends on the largest eigenvalue of the matrix $A^*A$). Computing the largest eigenvalue of $A^*A$ is usually difficult, and a conservative estimate of it results in slow convergence. Moreover, the projections onto the sets C and Q are often difficult to calculate when the sets are not simple, which further complicates (5). Several efforts have been made to find appropriate modifications of (5) without these setbacks in infinite-dimensional real Hilbert spaces. For instance, Zhao and Yang [4] introduced a new projection method in which the stepsize s is selected via an Armijo line search technique for solving the MSSFP. However, this line search process requires extra inner iterations to obtain a suitable stepsize. The authors in [5] also introduced a self-adaptive projection method which computes the stepsize directly without any inner iteration. Moreover, López et al. [6] introduced a relaxed projection method with a fixed stepsize and proved a weak convergence result for solving the MSSFP. He et al. [7] further combined a Halpern iterative scheme with the relaxed projection method and proved a strong convergence result for solving the MSSFP. Recently, Suantai et al. [8] introduced an inertial relaxed projection method with a self-adaptive stepsize for solving the MSSFP. Also, Wen et al. [9] introduced a cyclic and simultaneous projection method and proved a weak convergence result for solving the MSSFP.
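The Lipschitz constant L above is the concrete bottleneck: it requires the spectral radius of $A^*A$. A short sketch of the direct computation (assuming a dense NumPy matrix A; $\rho(A^*A)$ equals the squared spectral norm of A):

```python
import numpy as np

def lipschitz_constant(A, alphas, betas):
    # rho(A^T A) is the largest eigenvalue of A^T A, i.e., ||A||_2^2;
    # computing it is the expensive step that self-adaptive stepsizes avoid.
    rho = np.linalg.norm(A, 2) ** 2
    return sum(alphas) + rho * sum(betas)
```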
Constructing iterative schemes with a faster rate of convergence is usually of great interest. The inertial-type algorithm, which originates from the equation for an oscillator with damping and a conservative restoring force, has been an important tool for improving the performance of algorithms and has some nice convergence characteristics. In general, the main feature of inertial-type algorithms is that the previous iterates are used to construct the next one. Since the introduction of inertial-like algorithms, many authors have combined the inertial term $\theta_n(x_n - x_{n-1})$ with different kinds of iterative algorithms, including the Mann, Krasnoselskii, Halpern and viscosity iterations, to mention a few, to approximate solutions of fixed point problems and optimization problems. Most authors were able to prove weak convergence results, while a few proved strong convergence results.
Polyak [10] was the first to propose the heavy ball method; Alvarez and Attouch [11] employed it in the setting of a general maximal monotone operator using the Proximal Point Algorithm (PPA). The resulting method, called the inertial PPA, is of the form:
$$y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad x_{n+1} = (I + r_n B)^{-1} y_n, \quad n > 1.$$ (6)
They proved that if $\{r_n\}$ is non-decreasing and $\{\theta_n\} \subset [0,1)$ with
$$\sum_{n=1}^{+\infty}\theta_n\|x_n - x_{n-1}\|^2 < +\infty,$$ (7)
then Algorithm (6) converges weakly to a zero of the maximal monotone operator B. In particular, condition (7) holds for $\theta_n < \frac{1}{3}$. Here $\theta_n$ is an extrapolation factor. Other inertial-type algorithms can be found, for instance, in [12,13,14,15,16,17].
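As a toy illustration of (6) (our own example, not one from [11]), take the maximal monotone operator $Bx = x - b$, whose unique zero is b and whose resolvent has the closed form $(I + rB)^{-1}y = (y + rb)/(1+r)$:

```python
import numpy as np

def inertial_ppa(x0, x1, b, r=1.0, theta=0.3, iters=60):
    """Inertial PPA (6) for Bx = x - b; theta < 1/3 satisfies condition (7)."""
    x_prev, x = x0, x1
    for _ in range(iters):
        y = x + theta * (x - x_prev)             # inertial extrapolation
        x_prev, x = x, (y + r * b) / (1.0 + r)   # resolvent (I + rB)^{-1} y
    return x

b = np.array([1.0, -2.0])
print(inertial_ppa(np.zeros(2), np.ones(2), b))  # converges to b = [1, -2]
```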
Motivated by the works of Wen et al. [9] and López et al. [6], in this paper we introduce a generalized viscosity relaxed projection method with an inertial process for solving the MSSFP together with the fixed point problem for strictly pseudo-nonspreading mappings in real Hilbert spaces. The stepsize of our algorithm is selected self-adaptively in each iteration, and its convergence does not require a prior estimate of the norm of $A^*A$. Moreover, we define some sublevel sets whose projections can be calculated explicitly using the formula in [18]. The generalized viscosity approximation method guarantees strong convergence of the sequences generated by the algorithm. This improves the weak convergence results proved in [6,9,19]. We further provide some numerical experiments to illustrate the performance and accuracy of our algorithm. Our results improve and complement the results of [6,7,8,9,19,20,21,22,23,24] and many other results in this direction.

2. Preliminaries

We state some known and useful results which will be needed in the proof of our main theorem. In the sequel, we denote strong and weak convergence by “→” and “⇀”, respectively.
Let C be a nonempty closed convex subset of a real Hilbert space H with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. Let $S : C \to C$ be a nonlinear mapping and $F(S) = \{x \in C : Sx = x\}$ be the set of all fixed points of S.
A mapping $S : C \to C$ is called
  • nonexpansive, if
    $$\|Sx - Sy\| \le \|x - y\|, \quad \forall x, y \in C;$$
  • quasi-nonexpansive, if $F(S)$ is nonempty and
    $$\|Sx - p\| \le \|x - p\|, \quad \forall x \in C,\ p \in F(S);$$
  • nonspreading [25], if
    $$2\|Sx - Sy\|^2 \le \|Sx - y\|^2 + \|Sy - x\|^2, \quad \forall x, y \in C;$$
  • k-strictly pseudo-nonspreading in the sense of Browder–Petryshyn [26], if there exists $k \in [0,1)$ such that
    $$\|Sx - Sy\|^2 \le \|x - y\|^2 + k\|x - Sx - (y - Sy)\|^2 + 2\langle x - Sx, y - Sy\rangle, \quad \forall x, y \in C.$$
Remark 1.
(a) 
If $S : C \to C$ is a nonspreading mapping with $F(S) \neq \emptyset$, then S is quasi-nonexpansive and $F(S)$ is closed and convex.
(b) 
It is also clear that every nonspreading mapping is k-strictly pseudo-nonspreading with k = 0, but the converse is not true; see Example 3 in [27].
Lemma 1 ([27]).
Let $T : C \to C$ be a k-strictly pseudo-nonspreading mapping with $k \in [0,1)$, and denote $T_\beta := \beta I + (1-\beta)T$, where $\beta \in [k,1)$. Then:
(a) 
$F(T) = F(T_\beta)$;
(b) 
the following inequality holds:
$$\|T_\beta x - T_\beta y\|^2 \le \|x - y\|^2 + \frac{2}{1-\beta}\langle x - T_\beta x, y - T_\beta y\rangle, \quad \forall x, y \in C;$$
(c) 
$T_\beta$ is a quasi-nonexpansive mapping.
Lemma 2 ([28]).
Let $C \subset H$ be a nonempty, closed and convex set. Then, for all $x, y \in H$ and $z \in C$:
1. 
$\langle x - P_C x, z - P_C x\rangle \le 0$;
2. 
$\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y\rangle$;
3. 
$\|P_C x - z\|^2 \le \|x - z\|^2 - \|P_C x - x\|^2$.
Lemma 3 ([29]).
Let H be a real Hilbert space and $\{x_i\}_{i\ge1}$ be a bounded sequence in H. For $\alpha_i \in (0,1)$ such that $\sum_{i=1}^{\infty}\alpha_i = 1$, the following identity holds:
$$\Big\|\sum_{i=1}^{\infty}\alpha_i x_i\Big\|^2 = \sum_{i=1}^{\infty}\alpha_i\|x_i\|^2 - \sum_{1\le i<j<\infty}\alpha_i\alpha_j\|x_i - x_j\|^2.$$
Moreover, from Lemma 3 we obtain the following result.
Lemma 4 ([30]).
For all $x_1, x_2, \ldots, x_n \in H$, the following identity holds:
$$\Big\|\sum_{i=1}^{n}\lambda_i x_i\Big\|^2 = \sum_{i=1}^{n}\lambda_i\|x_i\|^2 - \frac{1}{2}\sum_{i,j=1}^{n}\lambda_i\lambda_j\|x_i - x_j\|^2, \quad n \ge 2,$$
where $\lambda_i \in [0,1]$, $i = 1, 2, \ldots, n$, and $\sum_{i=1}^{n}\lambda_i = 1$.
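The identity in Lemma 4 is easy to check numerically; the sketch below (an illustration we add, with n = 4 random vectors in $\mathbb{R}^5$) confirms that both sides agree up to rounding:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 4, 5
xs = rng.standard_normal((n, dim))
lam = rng.random(n); lam /= lam.sum()    # lambda_i in [0,1] summing to 1

lhs = np.linalg.norm((lam[:, None] * xs).sum(axis=0)) ** 2
rhs = sum(lam[i] * np.linalg.norm(xs[i]) ** 2 for i in range(n)) \
    - 0.5 * sum(lam[i] * lam[j] * np.linalg.norm(xs[i] - xs[j]) ** 2
                for i in range(n) for j in range(n))
print(abs(lhs - rhs))                    # ~ 1e-15
```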
Lemma 5 ([27]).
Let C be a closed convex subset of H and let $T : C \to C$ be a k-strictly pseudo-nonspreading mapping with $F(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in C which converges weakly to p and $\{(I-T)x_n\}$ converges strongly to q, then $(I-T)p = q$. In particular, if $q = 0$, then $p = Tp$.
Lemma 6 ([31]).
Let $\{a_n\}$ be a sequence of nonnegative real numbers, $\{\gamma_n\}$ a sequence of real numbers in (0,1) with $\sum_{n=1}^{\infty}\gamma_n = \infty$, and $\{d_n\}$ a sequence of real numbers. Assume that
$$a_{n+1} \le (1-\gamma_n)a_n + \gamma_n d_n, \quad n \ge 1.$$
If $\limsup_{k\to\infty} d_{n_k} \le 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying the condition
$$\liminf_{k\to\infty}\big(a_{n_k+1} - a_{n_k}\big) \ge 0,$$
then $\lim_{n\to\infty} a_n = 0$.

3. Main Results

In this section, we present our iterative algorithm and its convergence result.
Let $H_1$ and $H_2$ be real Hilbert spaces, let C be a nonempty, closed and convex subset of $H_1$, and let $\{g_n\}$ be a sequence of $\sigma_n$-contractive self-maps of $H_1$ with $\liminf_{n\to\infty}\sigma_n \le \limsup_{n\to\infty}\sigma_n = \sigma_\mu < 1$. Suppose that $\{g_n(x)\}$ converges uniformly to $g(x)$ for any $x \in D$, where D is a bounded subset of C. Let $S_m : H_1 \to H_1$ be a countable family of $k_m$-strictly pseudo-nonspreading mappings with $k := \sup_{m\ge1} k_m \in (0,1)$, and set $S_{m,\beta} := \beta I + (1-\beta)S_m$, where $\beta \in [k,1)$ and $m \in \mathbb{N}\setminus\{0\}$.
Before we state our algorithm, we assume that the following conditions hold:
(A1)
The sets $C_i$ are given by $C_i = \{x \in H_1 : c_i(x) \le 0\}$, where $c_i : H_1 \to \mathbb{R}$ ($i = 1, 2, \ldots, k$) are convex functions, and the sets $Q_j$ are given by $Q_j = \{y \in H_2 : q_j(y) \le 0\}$, where $q_j : H_2 \to \mathbb{R}$ ($j = 1, 2, \ldots, t$) are convex functions. In addition, we assume that $c_i$ and $q_j$ are subdifferentiable on $H_1$ and $H_2$, respectively, and that $\partial c_i$ and $\partial q_j$ are bounded operators (i.e., bounded on bounded sets).
(A2)
For any $x \in H_1$ and $y \in H_2$, at least one subgradient $\xi_i \in \partial c_i(x)$ and $\eta_j \in \partial q_j(y)$ can be calculated, where $\partial c_i(x)$ and $\partial q_j(y)$ denote the subdifferentials of $c_i$ and $q_j$ at x and y, respectively, i.e.,
$$\partial c_i(x) = \{\xi_i \in H_1 : c_i(z) \ge c_i(x) + \langle \xi_i, z - x\rangle,\ \forall z \in H_1\},$$
and
$$\partial q_j(y) = \{\eta_j \in H_2 : q_j(u) \ge q_j(y) + \langle \eta_j, u - y\rangle,\ \forall u \in H_2\}.$$
(A3)
We set $C_i^n$ and $Q_j^n$ as the half-spaces defined by
$$C_i^n = \{x \in H_1 : c_i(x_n) + \langle \xi_i^n, x - x_n\rangle \le 0\},$$
where $\xi_i^n \in \partial c_i(x_n)$ ($i = 1, 2, \ldots, k$), and
$$Q_j^n = \{y \in H_2 : q_j(Ax_n) + \langle \eta_j^n, y - Ax_n\rangle \le 0\},$$
where $\eta_j^n \in \partial q_j(Ax_n)$ ($j = 1, 2, \ldots, t$). Projections onto these half-spaces have a simple closed form; see the sketch after this list.
(A4)
We define the proximity function by
$$f_n(x) = \frac{1}{2}\sum_{j=1}^{t}\lambda_j\big\|Ax - P_{Q_j^n}(Ax)\big\|^2,$$
where $\lambda_j > 0$ for all $1 \le j \le t$. Then the gradient of $f_n$ is given by
$$\nabla f_n(x) = \sum_{j=1}^{t}\lambda_j A^*\big(I - P_{Q_j^n}\big)(Ax).$$
(A5)
The control sequences $\{\alpha_n\}$, $\{w_i\}$, $\{\gamma_{n,m}\}$ and $\{\rho_n\}$ are chosen such that
- $\{\alpha_n\} \subset (0,1)$, $\lim_{n\to+\infty}\alpha_n = 0$, $\sum_{n=1}^{+\infty}\alpha_n = +\infty$;
- $\{\gamma_{n,m}\} \subset (0,1)$, $\liminf_{n\to+\infty}\gamma_{n,0}\gamma_{n,m} > 0$, $\sum_{m=0}^{+\infty}\gamma_{n,m} = 1$;
- $\{w_i\} \subset [0,1]$ with $\sum_{i=1}^{k} w_i = 1$;
- $\{\rho_n\} \subset (0,4)$ and $\liminf_{n\to+\infty}\rho_n(4-\rho_n) > 0$.
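The closed-form half-space projection promised in (A3) is standard (it underlies the relaxed projection idea of [18]); a minimal sketch, where c_val plays the role of $c_i(x_n)$ and xi that of the chosen subgradient $\xi_i^n$:

```python
import numpy as np

def proj_halfspace(x, x_n, c_val, xi):
    """Project x onto C_i^n = {z : c_i(x_n) + <xi, z - x_n> <= 0}.
    Points already in the half-space are their own projection."""
    viol = c_val + xi @ (x - x_n)
    if viol <= 0:
        return x
    return x - (viol / (xi @ xi)) * xi   # step along -xi onto the boundary
```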
We now present our algorithm as follows. First, we show that the sequence $\{x_n\}$ generated by Algorithm 1 is bounded (Lemma 7 below).
Algorithm 1: GVA
Step 0:
Select the initial point $x_1 \in H_1$ and the sequences $\{\alpha_n\}$, $\{w_i\}$, $\{\gamma_{n,m}\}$, $\{\rho_n\}$ such that Assumption (A5) is satisfied. Set n = 1.
Step 1:
Given the nth iterate $x_n$ ($n \ge 1$), if $f_n(x_n) = 0$, STOP. Otherwise, compute
$$y_n = \sum_{i=1}^{k}\omega_i P_{C_i^n}\big(x_n - \tau_n\nabla f_n(x_n)\big),$$
where the stepsize $\tau_n$ is defined by
$$\tau_n = \frac{\rho_n f_n(x_n)}{\|\nabla f_n(x_n)\|^2}.$$
Step 2:
Compute
$$x_{n+1} = \alpha_n g_n(x_n) + (1-\alpha_n)\Big(\gamma_{n,0}y_n + \sum_{m=1}^{\infty}\gamma_{n,m}S_{m,\beta}y_n\Big).$$
Step 3:
Set $n \leftarrow n + 1$ and return to Step 1.
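Before proceeding to Lemma 7, the following is a minimal end-to-end sketch of Algorithm 1 in Python (our own illustration, not code from the paper). All concrete choices here are ours: the countable family is truncated to the single map $S = -I$, which a direct computation shows to be $\frac{1}{2}$-strictly pseudo-nonspreading with $F(S) = \{0\}$; the sets $C_i$ and $Q_j$ are balls containing the origin, so $\Gamma = \{0\}$; and the weights $\gamma_{n,m}$ are truncated to two terms.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 5, 4
A = rng.standard_normal((M, N))

# C_i = {x : ||x - d_i||^2 - rC^2 <= 0} and Q_j = {y : ||y - e_j||^2 - rQ^2 <= 0};
# all sets contain the origin, so x* = 0 solves the MSSFP and Gamma = {0}.
d   = [np.zeros(N), 0.1 * np.ones(N)];  rC = 2.0
e   = [np.zeros(M)];                    rQ = 2.0
lam = [1.0]                             # weights lambda_j in f_n
w   = [0.5, 0.5]                        # weights w_i in Step 1

def proj_halfspace(x, x_ref, c_val, xi):
    """Closed-form projection onto {z : c_val + <xi, z - x_ref> <= 0}."""
    viol = c_val + xi @ (x - x_ref)
    return x if viol <= 0 else x - (viol / (xi @ xi)) * xi

S      = lambda y: -y                   # 1/2-strictly pseudo-nonspreading, F(S) = {0}
beta   = 0.6                            # beta in [k, 1) with k = 1/2
S_beta = lambda y: beta * y + (1 - beta) * S(y)
g      = lambda x: x / 4                # sigma-contraction g_n (sigma = 1/4)

x = rng.standard_normal(N)
for n in range(1, 200):
    alpha_n, rho_n = 1.0 / (n + 1), 2.0           # rho_n in (0, 4)
    Ax = A @ x
    fn, grad = 0.0, np.zeros(N)                   # f_n(x_n) and grad f_n(x_n)
    for lj, ej in zip(lam, e):
        q_val = (Ax - ej) @ (Ax - ej) - rQ**2     # q_j(A x_n)
        PQ = proj_halfspace(Ax, Ax, q_val, 2 * (Ax - ej))
        res = Ax - PQ
        fn += 0.5 * lj * (res @ res)
        grad += lj * (A.T @ res)
    if grad @ grad == 0:
        break                                     # stationary for f_n: stop
    tau_n = rho_n * fn / (grad @ grad)            # self-adaptive stepsize
    u = x - tau_n * grad
    y = np.zeros(N)                               # Step 1: y_n = sum_i w_i P_{C_i^n}(u)
    for wi, di in zip(w, d):
        c_val = (x - di) @ (x - di) - rC**2       # c_i(x_n)
        y += wi * proj_halfspace(u, x, c_val, 2 * (x - di))
    wn = 0.5 * y + 0.5 * S_beta(y)                # gamma_{n,0} = gamma_{n,1} = 1/2
    x = alpha_n * g(x) + (1 - alpha_n) * wn       # Step 2

print(np.linalg.norm(x))                          # -> ~0, the unique point of Gamma
```

With these toy choices the viscosity limit of Theorem 1 is the origin, so the printed norm decays toward zero; none of the specific constants above come from the paper's experiments.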
Lemma 7.
Suppose the solution set $\Gamma = \Omega \cap \bigcap_{m=1}^{\infty} F(S_m) \neq \emptyset$ and let $\{x_n\}$ be the sequence generated by Algorithm 1. Then $\{x_n\}$ is bounded.
Proof. 
Let $x^* \in \Gamma$ and $w_n = \gamma_{n,0}y_n + \sum_{m=1}^{\infty}\gamma_{n,m}S_{m,\beta}y_n$. By applying the nonexpansivity of the projection mapping and Lemma 4, we have
$$\|y_n - x^*\|^2 = \Big\|\sum_{i=1}^{k}\omega_i P_{C_i^n}\big(x_n - \tau_n\nabla f_n(x_n)\big) - x^*\Big\|^2 \le \|x_n - \tau_n\nabla f_n(x_n) - x^*\|^2 = \|x_n - x^*\|^2 - 2\tau_n\langle\nabla f_n(x_n), x_n - x^*\rangle + \tau_n^2\|\nabla f_n(x_n)\|^2.$$ (8)
Also, from Lemma 2, we obtain
$$\langle\nabla f_n(x_n), x_n - x^*\rangle = \sum_{j=1}^{t}\lambda_j\big\langle A^*(I - P_{Q_j^n})Ax_n, x_n - x^*\big\rangle = \sum_{j=1}^{t}\lambda_j\big\langle(I - P_{Q_j^n})Ax_n, Ax_n - P_{Q_j^n}(Ax_n)\big\rangle + \sum_{j=1}^{t}\lambda_j\big\langle(I - P_{Q_j^n})Ax_n, P_{Q_j^n}(Ax_n) - Ax^*\big\rangle \ge \sum_{j=1}^{t}\lambda_j\big\|Ax_n - P_{Q_j^n}(Ax_n)\big\|^2 = 2f_n(x_n).$$ (9)
On substituting (9) into (8), we have
$$\|y_n - x^*\|^2 \le \|x_n - x^*\|^2 - 4\tau_n f_n(x_n) + \tau_n^2\|\nabla f_n(x_n)\|^2 = \|x_n - x^*\|^2 - \rho_n(4-\rho_n)\frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2} \le \|x_n - x^*\|^2.$$ (10)
Moreover, from Lemma 1 we get
$$\|w_n - x^*\| = \Big\|\gamma_{n,0}y_n + \sum_{m=1}^{\infty}\gamma_{n,m}S_{m,\beta}y_n - x^*\Big\| \le \gamma_{n,0}\|y_n - x^*\| + \sum_{m=1}^{\infty}\gamma_{n,m}\|S_{m,\beta}y_n - x^*\| \le \gamma_{n,0}\|y_n - x^*\| + \sum_{m=1}^{\infty}\gamma_{n,m}\|y_n - x^*\| = \|y_n - x^*\|.$$ (11)
Therefore, from (10) and (11), we have
$$\begin{aligned} \|x_{n+1} - x^*\| &= \|\alpha_n g_n(x_n) + (1-\alpha_n)w_n - x^*\| \\ &\le \alpha_n\|g_n(x_n) - x^*\| + (1-\alpha_n)\|w_n - x^*\| \\ &\le \alpha_n\sigma_n\|x_n - x^*\| + \alpha_n\|g_n(x^*) - x^*\| + (1-\alpha_n)\|x_n - x^*\| \\ &= \big(1 - \alpha_n(1-\sigma_n)\big)\|x_n - x^*\| + \alpha_n(1-\sigma_n)\frac{\|g_n(x^*) - x^*\|}{1-\sigma_n} \\ &\le \max\Big\{\|x_n - x^*\|,\ \frac{\|g_n(x^*) - x^*\|}{1-\sigma_n}\Big\}. \end{aligned}$$
Since $\{g_n\}$ is uniformly convergent on D, it follows that $\{g_n(x^*)\}$ is bounded. Thus, there exists a positive constant M such that $\|g_n(x^*) - x^*\| \le M$. By induction, we obtain
$$\|x_n - x^*\| \le \max\Big\{\|x_1 - x^*\|,\ \frac{M}{1-\sigma_\mu}\Big\}.$$
Hence, $\{x_n\}$ is bounded. Consequently, $\{S_{m,\beta}y_n\}$, $\{g_n(x_n)\}$, $\{y_n\}$ and $\{w_n\}$ are all bounded. □
We now give our main convergence theorem.
Theorem 1.
Suppose that $\Gamma = \Omega \cap \bigcap_{m=1}^{\infty} F(S_m) \neq \emptyset$ and Assumptions (A1)–(A5) hold. Then the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to a point $z \in \Gamma$ (namely $z = P_\Gamma g(z)$), which is the unique solution of the variational inequality
$$\langle g(z) - z, x^* - z\rangle \le 0, \quad \forall x^* \in \Gamma.$$
Proof. 
From Lemma 1(c), Lemma 3 and (10), we have
$$\begin{aligned} \|w_n - x^*\|^2 &= \Big\|\gamma_{n,0}y_n + \sum_{m=1}^{\infty}\gamma_{n,m}S_{m,\beta}y_n - x^*\Big\|^2 \\ &= \gamma_{n,0}\|y_n - x^*\|^2 + \sum_{m=1}^{\infty}\gamma_{n,m}\|S_{m,\beta}y_n - x^*\|^2 - \sum_{m=1}^{\infty}\gamma_{n,0}\gamma_{n,m}\|y_n - S_{m,\beta}y_n\|^2 - \sum_{1\le m<r<\infty}\gamma_{n,m}\gamma_{n,r}\|S_{m,\beta}y_n - S_{r,\beta}y_n\|^2 \\ &\le \gamma_{n,0}\|y_n - x^*\|^2 + \sum_{m=1}^{\infty}\gamma_{n,m}\|y_n - x^*\|^2 - \sum_{m=1}^{\infty}\gamma_{n,0}\gamma_{n,m}\|y_n - S_{m,\beta}y_n\|^2 \\ &= \|y_n - x^*\|^2 - \sum_{m=1}^{\infty}\gamma_{n,0}\gamma_{n,m}\|y_n - S_{m,\beta}y_n\|^2 \\ &\le \|x_n - x^*\|^2 - \rho_n(4-\rho_n)\frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2} - \sum_{m=1}^{\infty}\gamma_{n,0}\gamma_{n,m}\|y_n - S_{m,\beta}y_n\|^2. \end{aligned}$$ (12)
Now, from (10) and (12), we have
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \|\alpha_n g_n(x_n) + (1-\alpha_n)w_n - x^*\|^2 \\ &\le (1-\alpha_n)^2\|w_n - x^*\|^2 + 2\alpha_n\langle x_{n+1} - x^*, g_n(x_n) - x^*\rangle \\ &\le (1-\alpha_n)\|x_n - x^*\|^2 - (1-\alpha_n)\rho_n(4-\rho_n)\frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2} - (1-\alpha_n)\sum_{m=1}^{\infty}\gamma_{n,0}\gamma_{n,m}\|y_n - S_{m,\beta}y_n\|^2 + 2\alpha_n\langle x_{n+1} - x^*, g_n(x_n) - x^*\rangle. \end{aligned}$$ (13)
Putting $d_n = 2\langle x_{n+1} - x^*, g_n(x_n) - x^*\rangle$, in view of Lemma 6 we need to prove that $\limsup_{k\to\infty} d_{n_k} \le 0$ for every subsequence $\{\|x_{n_k} - x^*\|\}$ of $\{\|x_n - x^*\|\}$ satisfying the condition
$$\liminf_{k\to+\infty}\big(\|x_{n_k+1} - x^*\| - \|x_{n_k} - x^*\|\big) \ge 0.$$ (14)
To show this, suppose that $\{\|x_{n_k} - x^*\|\}$ is a subsequence of $\{\|x_n - x^*\|\}$ such that (14) holds. Then
$$\liminf_{k\to+\infty}\big(\|x_{n_k+1} - x^*\|^2 - \|x_{n_k} - x^*\|^2\big) = \liminf_{k\to+\infty}\Big(\big(\|x_{n_k+1} - x^*\| - \|x_{n_k} - x^*\|\big)\big(\|x_{n_k+1} - x^*\| + \|x_{n_k} - x^*\|\big)\Big) \ge 0.$$
Now, using (13), we have
$$\begin{aligned} \limsup_{k\to+\infty}\Big((1-\alpha_{n_k})\sum_{m=1}^{+\infty}\gamma_{n_k,0}\gamma_{n_k,m}\|y_{n_k} - S_{m,\beta}y_{n_k}\|^2\Big) &\le \limsup_{k\to+\infty}\Big(\|x_{n_k} - x^*\|^2 - \|x_{n_k+1} - x^*\|^2 + 2\alpha_{n_k}\langle x_{n_k+1} - x^*, g_{n_k}(x_{n_k}) - x^*\rangle\Big) \\ &\le \limsup_{k\to+\infty}\big(\|x_{n_k} - x^*\|^2 - \|x_{n_k+1} - x^*\|^2\big) + \limsup_{k\to+\infty}2\alpha_{n_k}\langle x_{n_k+1} - x^*, g_{n_k}(x_{n_k}) - x^*\rangle \\ &= -\liminf_{k\to+\infty}\big(\|x_{n_k+1} - x^*\|^2 - \|x_{n_k} - x^*\|^2\big) \le 0. \end{aligned}$$ (15)
Hence,
$$\lim_{k\to+\infty}\|y_{n_k} - S_{m,\beta}y_{n_k}\| = 0.$$ (16)
Note that
$$\|S_{m,\beta}y_n - y_n\| = \|\beta y_n + (1-\beta)S_m y_n - y_n\| = (1-\beta)\|S_m y_n - y_n\|.$$
Then it follows that
$$\|S_m y_{n_k} - y_{n_k}\| = \frac{1}{1-\beta}\|S_{m,\beta}y_{n_k} - y_{n_k}\| \to 0.$$ (17)
Furthermore, using (13) and following the same approach as in (15), we also have
$$\rho_{n_k}(4-\rho_{n_k})\frac{f_{n_k}^2(x_{n_k})}{\|\nabla f_{n_k}(x_{n_k})\|^2} \to 0, \quad \text{as } k \to \infty.$$ (18)
Since $\liminf_{n\to+\infty}\rho_n(4-\rho_n) > 0$, this implies that
$$\lim_{k\to+\infty}\frac{f_{n_k}^2(x_{n_k})}{\|\nabla f_{n_k}(x_{n_k})\|^2} = 0.$$
Since $\nabla f_n$ is Lipschitz continuous and $\{x_n\}$ is bounded, $\{\nabla f_n(x_n)\}$ is also bounded. Hence, from (18) we can conclude that
$$\lim_{k\to+\infty} f_{n_k}(x_{n_k}) = \lim_{k\to+\infty}\frac{1}{2}\sum_{j=1}^{t}\lambda_j\big\|Ax_{n_k} - P_{Q_j^{n_k}}(Ax_{n_k})\big\|^2 = 0,$$ (19)
which also implies that
$$\lim_{k\to+\infty}\big\|Ax_{n_k} - P_{Q_j^{n_k}}(Ax_{n_k})\big\| = 0, \quad \text{for } j = 1, 2, \ldots, t.$$ (20)
Since $\partial q_j$ is bounded on bounded sets, there exists $\eta > 0$ such that $\|\eta_j^n\| \le \eta$ for all n. Note that $P_{Q_j^{n_k}}(Ax_{n_k}) \in Q_j^{n_k}$; thus we get
$$q_j(Ax_{n_k}) \le \big\langle\eta_j^{n_k}, Ax_{n_k} - P_{Q_j^{n_k}}(Ax_{n_k})\big\rangle \le \|\eta_j^{n_k}\|\cdot\big\|Ax_{n_k} - P_{Q_j^{n_k}}(Ax_{n_k})\big\| \le \eta\big\|Ax_{n_k} - P_{Q_j^{n_k}}(Ax_{n_k})\big\| \to 0 \quad \text{as } k \to +\infty.$$ (21)
Since $\{x_n\}$ is bounded and C is closed and convex, we may assume that the subsequence $\{x_{n_k}\}$ of $\{x_n\}$ converges weakly to some $\bar{x} \in C$. We now show that $\bar{x} \in \Omega$. By the weak lower semicontinuity of $q_j$ and the boundedness of A, we have
$$q_j(A\bar{x}) \le \liminf_{k\to+\infty} q_j(Ax_{n_k}) \le 0.$$
Then $A\bar{x} \in Q_j$ for $j = 1, 2, \ldots, t$, which implies that $A\bar{x} \in \bigcap_{j=1}^{t} Q_j$. Next, we show that $\bar{x} \in \bigcap_{i=1}^{k} C_i$.
Let $u_n = x_n - \tau_n\nabla f_n(x_n)$. Since $\{u_n\}$, $\{w_n\}$ and $\{y_n\}$ are bounded, there exist subsequences $\{u_{n_k}\}$, $\{w_{n_k}\}$ and $\{y_{n_k}\}$ which all converge weakly to $\bar{x}$. Using (10), we have
$$\|u_{n_k} - x^*\|^2 \le \|x_{n_k} - x^*\|^2 - \rho_{n_k}(4-\rho_{n_k})\frac{f_{n_k}^2(x_{n_k})}{\|\nabla f_{n_k}(x_{n_k})\|^2}.$$
By applying Lemma 2(3), we have
$$\begin{aligned} \Big\|\sum_{i=1}^{k}\omega_i P_{C_i^{n_k}}(u_{n_k}) - u_{n_k}\Big\|^2 &\le \|u_{n_k} - x^*\|^2 - \Big\|\sum_{i=1}^{k}\omega_i P_{C_i^{n_k}}(u_{n_k}) - x^*\Big\|^2 \\ &\le \|x_{n_k} - x^*\|^2 - \|y_{n_k} - x^*\|^2 \\ &\le \|x_{n_k} - x^*\|^2 - \|w_{n_k} - x^*\|^2 \\ &= \|x_{n_k} - x^*\|^2 - \|x_{n_k+1} - x^*\|^2 + \|x_{n_k+1} - x^*\|^2 - \|w_{n_k} - x^*\|^2 \\ &\le \|x_{n_k} - x^*\|^2 - \|x_{n_k+1} - x^*\|^2 + \alpha_{n_k}\|g_{n_k}(x_{n_k}) - x^*\|^2 + (1-\alpha_{n_k})\|w_{n_k} - x^*\|^2 - \|w_{n_k} - x^*\|^2. \end{aligned}$$ (22)
By taking the lim sup as $k \to +\infty$ on both sides of (22) and following the same argument as in (15), we have
$$\lim_{k\to+\infty}\big\|P_{C_i^{n_k}}(u_{n_k}) - u_{n_k}\big\| = 0 = \lim_{k\to+\infty}\|y_{n_k} - u_{n_k}\|.$$ (23)
Also, from the definition $u_{n_k} = x_{n_k} - \tau_{n_k}\nabla f_{n_k}(x_{n_k})$, we have from (19) that
$$\lim_{k\to+\infty}\|u_{n_k} - x_{n_k}\| = 0.$$ (24)
Using (23) and (24), we obtain
$$\lim_{k\to+\infty}\big\|P_{C_i^{n_k}}(u_{n_k}) - x_{n_k}\big\| = 0.$$
Since $\partial c_i$ is bounded on bounded sets, there exists $\xi > 0$ such that $\|\xi_i^n\| \le \xi$ for all n. Since $P_{C_i^{n_k}}(u_{n_k}) \in C_i^{n_k}$, we thus have
$$c_i(x_{n_k}) \le \big\langle\xi_i^{n_k}, x_{n_k} - P_{C_i^{n_k}}(u_{n_k})\big\rangle \le \xi\big(\|x_{n_k} - u_{n_k}\| + \big\|u_{n_k} - P_{C_i^{n_k}}(u_{n_k})\big\|\big) \to 0.$$
By the weak lower semicontinuity of $c_i$, we have
$$c_i(\bar{x}) \le \liminf_{k\to+\infty} c_i(x_{n_k}) \le 0.$$
Hence $\bar{x} \in C_i$ for $i = 1, 2, \ldots, k$, which implies that $\bar{x} \in \bigcap_{i=1}^{k} C_i$, and hence $\bar{x} \in \Omega$. Furthermore, we have from (23) and (24) that
$$\lim_{k\to+\infty}\|y_{n_k} - x_{n_k}\| = 0.$$ (25)
Then, from the demiclosedness of k-strictly pseudo-nonspreading mappings (Lemma 5), together with (17) and (25), we obtain $\bar{x} \in \bigcap_{m=1}^{\infty} F(S_m)$. Therefore, $\bar{x} \in \Gamma$.
Next, we prove that $\{x_n\}$ converges strongly to $z \in \Gamma$. From (16), we have
$$\lim_{k\to+\infty}\|w_{n_k} - y_{n_k}\| = 0.$$ (26)
Moreover, from (25) and (26), we obtain
$$\lim_{k\to+\infty}\|w_{n_k} - x_{n_k}\| = 0.$$ (27)
From (27) and the fact that $\alpha_n \to 0$, we obtain
$$\|x_{n_k+1} - x_{n_k}\| \le \alpha_{n_k}\|g_{n_k}(x_{n_k}) - x_{n_k}\| + (1-\alpha_{n_k})\|w_{n_k} - x_{n_k}\| \to 0.$$ (28)
Next, we prove that $\limsup_{k\to+\infty}\langle x_{n_k+1} - x^*, g_{n_k}(x_{n_k}) - x^*\rangle \le 0$.
Indeed, take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup z$. Hence, we have
$$\limsup_{n\to+\infty}\langle g(x^*) - x^*, x_n - x^*\rangle = \lim_{k\to+\infty}\langle g(x^*) - x^*, x_{n_k} - x^*\rangle.$$
Since $\{g_n(x)\}$ is uniformly convergent on D, we have
$$\lim_{n\to+\infty}\big(g_n(x^*) - x^*\big) = g(x^*) - x^*.$$ (29)
Now, from (28) and the characterization of the metric projection in Lemma 2(1), we obtain
$$\lim_{k\to+\infty}\langle g(x^*) - x^*, x_{n_k} - x^*\rangle = \langle g(x^*) - x^*, z - x^*\rangle \le 0.$$
Using Schwartz’s inequality, we have
lim sup k + x n k + 1 x * , g n k ( x * ) x * lim k + x n k + 1 x * g n k ( x * ) g ( x * ) + lim sup k + x n k + 1 x * , g ( x * ) x * .
By the boundedness of { x n } , g n ( x ) g ( x ) , then by (28) and (29), we have
lim sup k + x n k + 1 x * , g n k ( x * ) x * 0 .
Applying (30) and Lemma 6 to (13), we conclude that $\{x_n\}$ converges strongly to z. This completes the proof. □
Next, we give a generalized viscosity approximation method with an inertial term, which can be regarded as a procedure for speeding up the convergence of Algorithm 1. In addition to Assumptions (A1)–(A5), we choose a sequence $\{\epsilon_n\} \subset (0, \epsilon)$ with $\epsilon \in [0,1)$ and
$$\lim_{n\to\infty}\frac{\epsilon_n}{\alpha_n} = 0.$$ (31)
Remark 2.
From (31) and Step 1 of Algorithm 2, it is easy to see that $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$. Indeed, we have $\theta_n\|x_n - x_{n-1}\| \le \epsilon_n$ for each $n \ge 1$, which together with (31) implies that
$$\lim_{n\to+\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \le \lim_{n\to+\infty}\frac{\epsilon_n}{\alpha_n} = 0.$$
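The extrapolation that distinguishes Algorithm 2 (Step 1 below) is cheap to implement; here is a sketch with our illustrative choices $\epsilon_n = \frac{1}{(n+1)^2}$ and $\theta = 0.01$, so that $\epsilon_n/\alpha_n \to 0$ when $\alpha_n = \frac{1}{n+1}$:

```python
import numpy as np

def inertial_point(x_n, x_prev, n, theta=0.01):
    """Step 1 of Algorithm 2: pick theta_n <= theta_bar_n and form a_n."""
    eps_n = 1.0 / (n + 1) ** 2
    gap = np.linalg.norm(x_n - x_prev)
    theta_n = theta if gap == 0 else min(theta, eps_n / gap)
    return x_n + theta_n * (x_n - x_prev)   # a_n = x_n + theta_n (x_n - x_{n-1})
```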
Lemma 8.
Suppose the solution set $\Gamma = \Omega \cap \bigcap_{m=1}^{+\infty} F(S_m) \neq \emptyset$ and let $\{x_n\}$ be the sequence generated by Algorithm 2. Then $\{x_n\}$ is bounded.
Algorithm 2: IGVA
Step 0:
Select the initial points $x_0, x_1 \in H_1$ and the sequences $\{\alpha_n\}$, $\{w_i\}$, $\{\gamma_{n,m}\}$, $\{\rho_n\}$, $\{\epsilon_n\}$ such that Assumption (A5) and (31) are satisfied. Set n = 1.
Step 1:
Given the (n-1)th and nth iterates $x_{n-1}$ and $x_n$ ($n \ge 1$), choose $\theta_n$ such that $0 \le \theta_n \le \bar{\theta}_n$, where
$$\bar{\theta}_n = \begin{cases}\min\Big\{\theta,\ \dfrac{\epsilon_n}{\|x_n - x_{n-1}\|}\Big\}, & \text{if } x_n \neq x_{n-1},\\ \theta, & \text{otherwise},\end{cases}$$
with $\theta > 0$. Compute
$$a_n = x_n + \theta_n(x_n - x_{n-1}),$$
and
$$y_n = \sum_{i=1}^{k}\omega_i P_{C_i^n}\big(a_n - \tau_n\nabla f_n(a_n)\big),$$
where the stepsize $\tau_n$ is defined by
$$\tau_n = \frac{\rho_n f_n(a_n)}{\|\nabla f_n(a_n)\|^2}.$$
Step 2:
Compute
$$x_{n+1} = \alpha_n g_n(x_n) + (1-\alpha_n)\Big(\gamma_{n,0}y_n + \sum_{m=1}^{\infty}\gamma_{n,m}S_{m,\beta}y_n\Big).$$
Step 3:
Set $n \leftarrow n + 1$ and return to Step 1.
Proof. 
Let $x^* \in \Gamma$. Using Step 1, we get
$$\|a_n - x^*\| = \|x_n + \theta_n(x_n - x_{n-1}) - x^*\| \le \|x_n - x^*\| + \theta_n\|x_n - x_{n-1}\| = \|x_n - x^*\| + \alpha_n\cdot\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|.$$ (32)
Since $\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \to 0$, there exists a constant $M_1 > 0$ such that $\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \le M_1$ for all $n \ge 1$. Following a similar argument as in the proof of (10) for Algorithm 1, we have
$$\|y_n - x^*\| \le \|a_n - x^*\|.$$ (33)
Also, as in (11), putting $w_n = \gamma_{n,0}y_n + \sum_{m=1}^{\infty}\gamma_{n,m}S_{m,\beta}y_n$, we get
$$\|w_n - x^*\| \le \|y_n - x^*\|.$$ (34)
Then, it follows from (32), (33) and (34) that
$$\|w_n - x^*\| \le \|x_n - x^*\| + \alpha_n M_1.$$ (35)
Thus, we have
$$\begin{aligned} \|x_{n+1} - x^*\| &= \|\alpha_n g_n(x_n) + (1-\alpha_n)w_n - x^*\| \\ &\le \alpha_n\|g_n(x_n) - x^*\| + (1-\alpha_n)\|w_n - x^*\| \\ &\le \alpha_n\sigma_n\|x_n - x^*\| + (1-\alpha_n)\big[\|x_n - x^*\| + \alpha_n M_1\big] + \alpha_n\|g_n(x^*) - x^*\| \\ &\le \big(1 - \alpha_n(1-\sigma_n)\big)\|x_n - x^*\| + \alpha_n(1-\sigma_n)\frac{\|g_n(x^*) - x^*\| + (1-\alpha_n)M_1}{1-\sigma_n} \\ &\le \max\Big\{\|x_n - x^*\|,\ \frac{\|g_n(x^*) - x^*\| + M_1}{1-\sigma_n}\Big\}. \end{aligned}$$
Since $\{g_n\}$ is uniformly convergent on D, it follows that $\{g_n(x^*)\}$ is bounded. Thus, there exists a positive constant $M_2$ such that $\|g_n(x^*) - x^*\| \le M_2$. It then follows by induction that
$$\|x_n - x^*\| \le \max\Big\{\|x_1 - x^*\|,\ \frac{M_1 + M_2}{1-\sigma_\mu}\Big\}.$$
Therefore, $\{x_n\}$ is bounded. □
Theorem 2.
Suppose that $\Gamma = \Omega \cap \bigcap_{m=1}^{\infty} F(S_m) \neq \emptyset$ and Assumptions (A1)–(A5) together with (31) hold. Then the sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to a point $z \in \Gamma$ (namely $z = P_\Gamma g(z)$), which is the unique solution of the variational inequality
$$\langle g(z) - z, x^* - z\rangle \le 0, \quad \forall x^* \in \Gamma.$$
Proof. 
Let $x^* \in \Gamma$. From Step 1 we have
$$\begin{aligned} \|a_n - x^*\|^2 &= \|x_n + \theta_n(x_n - x_{n-1}) - x^*\|^2 = \|(x_n - x^*) + \theta_n(x_n - x_{n-1})\|^2 \\ &= \|x_n - x^*\|^2 + 2\theta_n\langle x_n - x^*, x_n - x_{n-1}\rangle + \theta_n^2\|x_n - x_{n-1}\|^2 \\ &\le \|x_n - x^*\|^2 + \theta_n\|x_n - x_{n-1}\|\big(2\|x_n - x^*\| + \theta_n\|x_n - x_{n-1}\|\big) \\ &\le \|x_n - x^*\|^2 + \theta_n\|x_n - x_{n-1}\|\,M_3, \end{aligned}$$ (36)
where $M_3 = \sup_{n\ge1}\big\{2\|x_n - x^*\| + \theta_n\|x_n - x_{n-1}\|\big\}$.
Similarly as in (12), we get
$$\|w_n - x^*\|^2 \le \|a_n - x^*\|^2 \le \|x_n - x^*\|^2 + \theta_n\|x_n - x_{n-1}\|\,M_3.$$ (37)
Using Step 1, we have
$$\|a_n - x_n\| = \alpha_n\cdot\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \to 0, \quad \text{as } n \to \infty.$$
Now, from (37), we have
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \|\alpha_n g_n(x_n) + (1-\alpha_n)w_n - x^*\|^2 \\ &\le (1-\alpha_n)^2\|w_n - x^*\|^2 + 2\alpha_n\langle x_{n+1} - x^*, g_n(x_n) - x^*\rangle \\ &\le (1-\alpha_n)\|x_n - x^*\|^2 + \alpha_n\Big[(1-\alpha_n)\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|\,M_3 + 2\langle x_{n+1} - x^*, g_n(x_n) - x^*\rangle\Big]. \end{aligned}$$ (38)
Next, we show that $\limsup_{k\to+\infty}\langle x_{n_k+1} - x^*, g_{n_k}(x_{n_k}) - x^*\rangle \le 0$.
Indeed, take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup z$. Hence, we have
$$\limsup_{n\to+\infty}\langle g(x^*) - x^*, x_n - x^*\rangle = \lim_{k\to+\infty}\langle g(x^*) - x^*, x_{n_k} - x^*\rangle.$$
Since $\{g_n(x)\}$ is uniformly convergent on D, we have
$$\lim_{n\to+\infty}\big(g_n(x^*) - x^*\big) = g(x^*) - x^*.$$ (39)
Now, arguing as in the proof of Theorem 1, we obtain
$$\lim_{k\to+\infty}\langle g(x^*) - x^*, x_{n_k} - x^*\rangle = \langle g(x^*) - x^*, z - x^*\rangle \le 0.$$
By applying the Cauchy–Schwarz inequality, we get
$$\limsup_{k\to+\infty}\langle x_{n_k+1} - x^*, g_{n_k}(x^*) - x^*\rangle \le \lim_{k\to+\infty}\|x_{n_k+1} - x^*\|\,\|g_{n_k}(x^*) - g(x^*)\| + \limsup_{k\to+\infty}\langle x_{n_k+1} - x^*, g(x^*) - x^*\rangle.$$
By the boundedness of $\{x_n\}$ and the convergence $g_n(x) \to g(x)$, together with (39), we have
$$\limsup_{k\to+\infty}\langle x_{n_k+1} - x^*, g_{n_k}(x^*) - x^*\rangle \le 0.$$ (40)
On substituting (40) into (38) and applying Lemma 6, we obtain that $\{x_n\}$ converges strongly to z. This completes the proof. □

4. Numerical Example

In this section, we give some numerical experiments to illustrate the performance of our algorithms in comparison with some other algorithms in the literature. All computations are carried out on a Lenovo PC with the following specifications: Intel(R) Core i7-600 CPU at 2.48 GHz, 8.0 GB RAM, MATLAB version 9.5 (R2019b).
Example 1.
We consider the MSSFP where $H_1 = \mathbb{R}^N$ and $H_2 = \mathbb{R}^M$, and $A : \mathbb{R}^N \to \mathbb{R}^M$ is given by $A(x) = G_{M\times N}x$, where $G_{M\times N}$ is an $M \times N$ matrix. The closed convex sets $C_i$ ($i \in \{1, \ldots, k\}$) of $\mathbb{R}^N$ are given by
$$C_i = \{x = (x_1, \ldots, x_N)^T \in \mathbb{R}^N : c_i(x) \le 0\},$$
where $c_i(x) = \|x - d_i\|^2 - p_i^2$ with $p_i = p$, where p is a positive real number, and $d_i = (x_{1,i}, \ldots, x_{N,i})^T = (0, \ldots, 0, i-1)^T \in \mathbb{R}^N$ for each $i = 1, 2, \ldots, k$. Also, $Q_j$ ($j \in \{1, \ldots, t\}$) is defined by
$$Q_j = \{y \in \mathbb{R}^M : q_j(y) \le 0\},$$
where $q_j(y) = \frac{1}{2}y^T B_j y + b_j^T y + c_j$ for $j = 1, 2, \ldots, t$; here $B_j$ is a Hessian matrix, and $b_j$ and $c_j$ are vectors generated randomly. For each $i \in \{1, \ldots, k\}$ and $j \in \{1, \ldots, t\}$, the subdifferentials are given by
$$\partial c_i(x_n) = \begin{cases}\dfrac{x_n - d_i}{\|x_n - d_i\|}, & \text{if } \|x_n - d_i\| \neq 0,\\ \{a_i \in \mathbb{R}^N : \|a_i\| \le 1\}, & \text{otherwise},\end{cases}$$
and $\partial q_j(Ax_n) = \{(b_{1,j}, \ldots, b_{M,j})^T\}$. Note that the projection
$$P_{C_i^n}(x_n) = \arg\min\{\|x - x_n\| : x \in C_i^n\},$$
where $C_i^n = \{x \in H_1 : c_i(x_n) \le \langle\xi_i^n, x_n - x\rangle\}$, is equivalent to the following quadratic programming problem:
$$\text{minimize } \frac{1}{2}x^T\bar{H}x + \bar{B}_n^T x + \bar{c}, \quad \text{subject to } \bar{D}_{i,n}x \le \bar{F}_i,$$ (41)
where $\bar{H} = 2I_{N\times N}$, $\bar{B}_n = -2x_n$, $\bar{c} = \|x_n\|^2$, $\bar{D}_{i,n} = \bar{\xi}_i^n = [\bar{\xi}_{i,1}^n, \ldots, \bar{\xi}_{i,N}^n]$, and $\bar{F}_i = p_i^2 - \|x_n - d_i\|^2 + \langle\bar{\xi}_i^n, x_n\rangle$. Problem (41), as well as the projection onto $Q_j^n$, can be solved effectively using the Optimization Toolbox solver quadprog in MATLAB. We define the mapping $S_m : \mathbb{R}^N \to \mathbb{R}^N$ by
$$S_m x = \begin{cases}(x_1, x_2, \ldots, x_N), & \text{if } \sum_{i=1}^{N} x_i < 0,\\ (-2x_1, -2x_2, \ldots, -2x_N), & \text{if } \sum_{i=1}^{N} x_i \ge 0.\end{cases}$$
It is easy to see that $S_m$ is $\frac{1}{3}$-strictly pseudo-nonspreading. For each $n \in \mathbb{N}$ and $m \ge 0$, let $\{\gamma_{n,m}\}$ be defined by
$$\gamma_{n,m} = \begin{cases}\dfrac{1}{b^{m+1}}\cdot\dfrac{n}{n+1}, & n \ge m+1,\\[4pt] 1 - \dfrac{n}{n+1}\displaystyle\sum_{k=1}^{n}\dfrac{1}{b^k}, & n = m,\\[4pt] 0, & n < m,\end{cases}$$
where $b > 1$. For simplicity, we consider the case $k = t$ and compare the performance of Algorithm 1, Algorithm 2 and Algorithm (42) of Wen et al. [9] for various dimensions N. We choose $b = 5$, $g_n(x) = \frac{x}{4}$, $\beta = 0.2$, $\alpha_n = \frac{1}{n+1}$, $\epsilon_n = \frac{1}{n+1}$, $\theta = 0.01$, $\rho_n = \frac{n}{n+1}$ and $w_i = \frac{1}{k}$. Similarly, for Algorithm (42) of Wen et al. [9], we take $\rho_n = \frac{n}{n+1}$ and $w_i = \frac{1}{k}$. The initial points $x_0, x_1$ and the matrices $G_{M\times N}$ are generated randomly for the following values of N and M (a sketch of the weight sequence $\gamma_{n,m}$ follows the list):
Case I: N = 4 and M = 10 ;
Case II: N = 10 and M = 5 ;
Case III: N = 10 and M = 10 ;
Case IV: N = 15 and M = 20 .
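A sketch of the weight sequence $\gamma_{n,m}$ defined above (with b = 5 as in the experiments), checking that each row sums to 1 as required by (A5):

```python
def gamma(n, m, b=5.0):
    if n < m:
        return 0.0
    if n == m:
        return 1.0 - (n / (n + 1)) * sum(1.0 / b**k for k in range(1, n + 1))
    return (1.0 / b**(m + 1)) * (n / (n + 1))    # case n >= m + 1

for n in (1, 5, 20):
    print(sum(gamma(n, m) for m in range(n + 1)))  # 1.0 for every n
```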
We use $E_n = \|x_{n+1} - x_n\| < 10^{-4}$ as the stopping criterion and plot the graphs of $E_n$ against the number of iterations. The numerical results are shown in Table 1 and Figure 1.
Finally, we present an example in infinite-dimensional Hilbert spaces.
Example 2.
Let $H_1 = H_2 = L_2([0,1])$ with norm $\|x\| = \big(\int_0^1 |x(t)|^2\,dt\big)^{1/2}$ and inner product $\langle x, y\rangle = \int_0^1 x(t)y(t)\,dt$. We define the nonempty, closed convex sets $C = \{x \in L_2([0,1]) : \langle x(t), 3t^2\rangle = 0\}$ and $Q = \{y \in L_2([0,1]) : \langle y, t^3\rangle \ge 1\}$. We define the bounded linear operator $A : L_2([0,1]) \to L_2([0,1])$ by $(Ax)(t) = x(t)$. The projections onto C and Q are given by
$$P_C(x(t)) = \begin{cases}x(t) - \dfrac{\langle x(t), 3t^2\rangle}{\|3t^2\|^2}\,3t^2, & \text{if } \langle x(t), 3t^2\rangle \neq 0,\\[4pt] x(t), & \text{if } \langle x(t), 3t^2\rangle = 0,\end{cases}$$
and
$$P_Q(y(t)) = \begin{cases}y(t) - \dfrac{\langle y(t), t^3\rangle - 1}{\|t^3\|^2}\,t^3, & \text{if } \langle y(t), t^3\rangle < 1,\\[4pt] y(t), & \text{if } \langle y(t), t^3\rangle \ge 1.\end{cases}$$
We consider the MSSFP with $k = t = 1$, $C_i = C$, $Q_j = Q$, and $S_m = I$ (the identity mapping) for $m = 1, \ldots, 4$. We compare our Algorithm 2 with the CQ-type algorithm (Algorithm 3.1) of Vinh et al. [20]. For Algorithm 2, we take $g_n(x) = \frac{x}{8}$, $\beta = 0.5$, $w_i = 1$, $\alpha_n = \frac{1}{n+1}$, $\epsilon_n = \frac{1}{(n+1)^2}$, and $\gamma_{n,m} = \frac{1}{5}$ for $m = 0, 1, \ldots, 4$. Also, for the algorithm of Vinh et al., we take $\rho_n = \frac{n}{n+1}$ and $\beta_n = \frac{1}{n+1}$. We use $E_n = \frac{1}{2}\|Ax_n - P_Q(Ax_n)\|^2 < 10^{-4}$ as the stopping criterion and test the algorithms with the following initial points:
Case I: $x_0 = \exp(2t)$, $x_1 = \frac{t^3\sin(3t)}{3}$;
Case II: $x_0 = t^2 + 2t - 1$, $x_1 = \frac{\cos(2t) + \sin(3t)}{5}$;
Case III: $x_0 = 2t\cos(3t)$, $x_1 = 4\sin(2t)$;
Case IV: $x_0 = \frac{\exp(2t)}{2}$, $x_1 = t^3 + 3t - 1$.
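A discretized sketch of this $L_2([0,1])$ setting (our illustration: functions sampled on a uniform grid, inner products by a simple Riemann sum; exact quadrature is not essential here):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
def ip(x, y):                       # <x, y> on L2([0,1]) via a Riemann sum
    return float(np.sum(x * y) * dt)

a = 3.0 * t**2                      # the function 3t^2 defining C
c = t**3                            # the function t^3 defining Q

def proj_C(x):                      # P_C from the formula above
    s = ip(x, a)
    return x if s == 0 else x - (s / ip(a, a)) * a

def proj_Q(y):                      # P_Q from the formula above
    s = ip(y, c)
    return y if s >= 1 else y - ((s - 1) / ip(c, c)) * c

x0 = np.exp(2 * t)                  # Case I initial point
print(abs(ip(proj_C(x0), a)))       # ~ 0: proj_C(x0) lies in C
print(ip(proj_Q(x0), c))            # >= 1: proj_Q(x0) lies in Q
```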
The numerical results are reported in Table 2 and Figure 2.

5. Conclusions

In this paper, we introduce a generalized viscosity approximation method with a self-adaptive stepsize for finding a common solution of the multiple-sets split feasibility problem and the fixed point problem for a countable family of k-strictly pseudo-nonspreading mappings in real Hilbert spaces. We also introduce a generalized viscosity approximation method with inertial extrapolation and a self-adaptive stepsize for solving the same problem. We prove strong convergence results for the sequences generated by the algorithms under some mild conditions. We also provide some numerical examples to show the performance of the proposed methods relative to some other methods in the literature. These results improve and complement several other results (e.g., [6,7,8,9,20]) in the literature.

Author Contributions

Conceptualization, H.A.A. and L.O.J.; methodology, H.A.A. and L.O.J.; software, H.A.A. and L.O.J.; validation, H.A.A. and L.O.J.; formal analysis, H.A.A. and L.O.J.; writing—original draft preparation, H.A.A. and L.O.J.; writing—review and editing, H.A.A. and L.O.J.; supervision, H.A.A. and L.O.J.; project administration, H.A.A. and L.O.J.; funding acquisition, H.A.A. and L.O.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the Mathematical research fund at the Sefako Makgatho Health Sciences University.

Acknowledgments

The authors acknowledge with thanks the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University for making its facilities available for the research.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Combettes, P.L. The convex feasibility problem in image recovery. In Advances in Imaging and Electron Physics; Hawkes, P., Ed.; Academic Press: New York, NY, USA, 1996; Volume 95, pp. 155–270.
2. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
3. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
4. Zhao, J.L.; Yang, Q.Z. Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl. 2011, 27, 035009.
5. Zhang, J.L.; Yang, Q.Z. A simple projection method for solving multiple-sets split feasibility problem. Inverse Probl. Sci. Eng. 2013, 21, 537–546.
6. López, G.; Martín-Márquez, V.; Wang, F.H.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004.
7. He, S.; Zhao, Z.; Luo, B. A relaxed self-adaptive CQ algorithm for the multiple-sets split feasibility problem. Optimization 2015, 64, 1907–1918.
8. Suantai, S.; Pholasa, N.; Cholamjiak, P. Relaxed CQ algorithms involving the inertial technique for multiple-sets split feasibility problems. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2019, 113, 1081–1097.
9. Wen, M.; Peng, J.; Tang, Y. A cyclic and simultaneous iterative method for solving the multiple-sets split feasibility problem. J. Optim. Theory Appl. 2015, 166, 844–860.
10. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
11. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
12. Abass, H.A.; Aremu, K.O.; Jolaoso, L.O.; Mewomo, O.T. An inertial forward-backward splitting method for approximating solutions of certain optimization problems. J. Nonlinear Funct. Anal. 2020, 2020, 6.
13. Abass, H.A.; Izuchukwu, C.; Mewomo, O.T.; Dong, Q.L. Strong convergence of an inertial forward-backward splitting method for accretive operators in real Banach spaces. Fixed Point Theory 2020, 21, 397–412.
14. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A self adaptive inertial subgradient extragradient algorithm for variational inequality and common fixed point of multivalued mappings in Hilbert spaces. Demonstr. Math. 2019, 52, 183–203.
15. Jolaoso, L.O.; Abass, H.A.; Mewomo, O.T. A viscosity-proximal gradient method with inertial extrapolation for solving certain minimization problems in Hilbert space. Arch. Math. 2019, 55, 167–194.
16. Jolaoso, L.O.; Alakoya, T.O.; Taiwo, A.; Mewomo, O.T. An inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert spaces. Optimization 2020.
17. Jolaoso, L.O.; Aphane, M. A self-adaptive inertial subgradient extragradient method for pseudomonotone equilibrium and common fixed point problems. Fixed Point Theory Appl. 2020, 2020, 9.
18. Fukushima, M. A relaxed projection method for variational inequalities. Math. Program. 1986, 35, 58–70.
19. Wang, J.; Hu, Y.; Yu, C.K.W.; Zhuang, Y. A family of projection gradient methods for solving the multiple-sets split feasibility problem. J. Optim. Theory Appl. 2019, 183, 520–534.
20. Vinh, N.T.; Cholamjiak, P.; Suantai, S. A new CQ algorithm for solving split feasibility problems in Hilbert spaces. Bull. Malays. Math. Sci. Soc. 2019, 42, 2517–2534.
21. Abass, H.A.; Ogbuisi, F.U.; Mewomo, O.T. Common solution of split equilibrium problem with no prior knowledge of operator norm. UPB Sci. Bull. Ser. A 2018, 80, 175–190.
22. Abass, H.A.; Okeke, C.C.; Mewomo, O.T. On split equality mixed equilibrium and fixed point problems of generalized ki-strictly pseudo-contractive multivalued mappings. Dyn. Contin. Discrete Impuls. Syst. Ser. B Appl. Algorithms 2018, 25, 369–395.
23. Duan, P.; He, S. Generalized viscosity approximation methods for nonexpansive mappings. Fixed Point Theory Appl. 2014, 2014, 68.
24. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
25. Kohsaka, F.; Takahashi, W. Existence and approximation of fixed points of firmly nonexpansive-type mappings in Banach spaces. SIAM J. Optim. 2008, 19, 824–835.
26. Browder, F.E.; Petryshyn, W.V. Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967, 20, 197–228.
27. Zhao, T.; Chang, S.S. Weak and strong convergence theorems for strictly pseudo-nonspreading mappings and equilibrium problems in Hilbert spaces. Abstr. Appl. Anal. 2013, 2013, 169206.
28. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426.
29. Chidume, C.E.; Okpala, M.E. Fixed point iteration for a countable family of multivalued strictly pseudo-contractive-type mappings. SpringerPlus 2015, 4, 506.
30. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: London, UK, 2011.
31. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2012, 75, 742–750.
Figure 1. Example 1, Case I–Case IV (top to bottom).
Figure 2. Example 2, Case I–Case IV (top to bottom).
Table 1. Computational results for Example 1.

                          Algorithm 1   Algorithm 2   Wen et al. alg.
Case I     No. of Iter.        28            13             48
           CPU time (s)      0.1731        0.2499         0.4600
Case II    No. of Iter.        29            14             50
           CPU time (s)      0.1693        0.1523         0.4719
Case III   No. of Iter.        30            14             51
           CPU time (s)      0.1702        0.1971         0.4222
Case IV    No. of Iter.        30            14             53
           CPU time (s)      0.1932        0.2240         0.5702
Table 2. Computational results for Example 2.

                          Algorithm 2   Vinh et al. alg.
Case I     No. of Iter.         4              9
           CPU time (s)      0.4563         1.3020
Case II    No. of Iter.         5             10
           CPU time (s)      1.5565         3.1576
Case III   No. of Iter.         9             13
           CPU time (s)      0.8736         2.4094
Case IV    No. of Iter.         8             12
           CPU time (s)      0.7985         1.1278