Article

Convergence Result for Solving the Split Fixed Point Problem with Multiple Output Sets in Nonlinear Spaces

1 Department of Mathematics & Statistics, International Islamic University, Islamabad 44000, Pakistan
2 Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(12), 1825; https://doi.org/10.3390/math12121825
Submission received: 22 April 2024 / Revised: 31 May 2024 / Accepted: 3 June 2024 / Published: 12 June 2024

Abstract

We study the split fixed point problem with multiple output sets in nonlinear spaces, particularly in CAT(0) spaces. We modify the existing self-adaptive algorithm for solving the split common fixed point problem with multiple output sets in the settings of generalized structures. We also present the consequences of our main theorem in terms of the split feasibility problem and the split common fixed point problem.
MSC:
47H09; 47H10; 49J53; 90C25

1. Introduction

Let $A$ and $B$ be nonempty, closed, and convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $T : H_1 \to H_2$ be a bounded operator with adjoint $T^* : H_2 \to H_1$. The split convex feasibility problem is to:
find an element $x^* \in A$ such that $T x^* \in B$.
The split convex feasibility problem (SCFP) was initially developed by Censor and Elfving [1] in 1994 for the modeling of inverse problems. In the real world, the SCFP appears in various domains, such as signal processing [2], medical care, and image reconstruction. One of its key uses can be seen in intensity-modulated radiation therapy (IMRT) [3]. Many mathematicians have generalized the problem in various directions, including the multiple-sets split feasibility problem [4,5], the split common fixed point problem [1,6], the split variational inequality/inclusion problem [7,8,9,10], and the split common null point problem [11,12,13]. Byrne's CQ algorithm [2] is a popular technique for finding a solution of the SCFP, and it has been extended to the multiple-sets split convex feasibility problem by many authors; see, e.g., [1,14,15]. In 2020, Reich and Tuyen [13] introduced and examined the split feasibility problem with multiple output sets in Hilbert spaces.
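For orientation, a minimal Python sketch of Byrne's CQ iteration for the SCFP in $\mathbb{R}^n$ is given below; it is not the algorithm studied in this paper, and the box-shaped sets $A$ and $B$, the step size, and the helper names are illustrative assumptions.

```python
import numpy as np

def cq_algorithm(T, proj_A, proj_B, x0, gamma=None, n_iter=200):
    """Byrne's CQ iteration: x_{k+1} = P_A(x_k - gamma * T^T (I - P_B) T x_k)."""
    if gamma is None:
        # Any step size in (0, 2 / ||T||^2) is the classical choice.
        gamma = 1.0 / np.linalg.norm(T, 2) ** 2
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Tx = T @ x
        x = proj_A(x - gamma * T.T @ (Tx - proj_B(Tx)))
    return x

# Illustrative data: A and B are boxes, so their projections are coordinate-wise clamps.
T = np.array([[1.0, 2.0], [0.0, 1.0]])
proj_A = lambda x: np.clip(x, -1.0, 1.0)
proj_B = lambda y: np.clip(y, 0.0, 2.0)
x_star = cq_algorithm(T, proj_A, proj_B, x0=np.array([5.0, -3.0]))
print(x_star, T @ x_star)  # x_star should lie in A with T x_star in B
```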
In 2022, Reich et al. [16] considered a more general problem (the split common fixed point problem with multiple output sets) in Hilbert spaces and proposed a new self-adaptive algorithm for solving it. The problem is stated as follows:
Assume that $H$ and $H_i$, $i = 1, 2, \ldots, N$, are real Hilbert spaces and $T_i : H \to H_i$, $i = 1, 2, \ldots, N$, are bounded operators. Let $S_j : H \to H$, $j = 1, 2, \ldots, M$, and $\Xi_k^i : H_i \to H_i$, $i = 1, 2, \ldots, N$, $k = 1, 2, \ldots, M_i$, be nonexpansive mappings. Find $x \in H$ such that
$$x \in \Omega = \Big(\bigcap_{j=1}^{M} \mathrm{Fix}(S_j)\Big) \cap \Big(\bigcap_{i=1}^{N} T_i^{-1}\Big(\bigcap_{k=1}^{M_i} \mathrm{Fix}(\Xi_k^i)\Big)\Big) \neq \emptyset.$$

Motivation

Motivated by all the above-mentioned advancements regarding feasibility problems, we propose the split fixed point problem with multiple output sets in nonlinear spaces, particularly in CAT(0) spaces. We also extend the self-adaptive algorithm to this setting as follows:
Let $\Upsilon$ and $\Upsilon_1$ be complete CAT(0) spaces and let $T : \Upsilon \to \Upsilon_1$ be a bounded operator. Let $S_j : \Upsilon \to \Upsilon$, $j = 1, 2, \ldots, M$, and $\Xi_k : \Upsilon_1 \to \Upsilon_1$, $k = 1, 2, \ldots, K$, be nonexpansive mappings. The split common fixed point problem (SCFPP) is to:
find $x \in C = \bigcap_{j=1}^{M} \mathrm{Fix}(S_j)$ such that $T x \in Q = \bigcap_{k=1}^{K} \mathrm{Fix}(\Xi_k)$.
This is a generalization of Problem (1.3) given in [16].

2. Preliminaries

Let $(\Upsilon, \zeta)$ be a metric space. A geodesic path [17] joining $r$ and $s$, for a pair $(r, s) \in \Upsilon \times \Upsilon$, is a map $\varphi$ from $[0, L] \subset \mathbb{R}$ to $\Upsilon$, where $L = \zeta(r, s)$, such that $\varphi(0) = r$, $\varphi(L) = s$, and $\zeta(\varphi(\kappa), \varphi(\kappa')) = |\kappa - \kappa'|$ for all $\kappa, \kappa' \in [0, L]$. The image $\vartheta$ of $\varphi$ is called a geodesic segment with endpoints $r$ and $s$. The metric space $(\Upsilon, \zeta)$ is called a geodesic metric space if every two points of $\Upsilon$ are joined by a geodesic segment, and it is called uniquely geodesic if $r$ and $s$ are joined by exactly one geodesic for each $r, s \in \Upsilon$. Assume that $(\Upsilon, \zeta)$ is a geodesic metric space. A geodesic triangle $\triangle([\varrho_1, \varrho_2], [\varrho_2, \varrho_3], [\varrho_3, \varrho_1])$ consists of three points $\varrho_1, \varrho_2, \varrho_3 \in \Upsilon$ and three geodesic segments $[\varrho_1, \varrho_2]$, $[\varrho_2, \varrho_3]$, $[\varrho_3, \varrho_1]$. A comparison triangle of a geodesic triangle $\triangle([\varrho_1, \varrho_2], [\varrho_2, \varrho_3], [\varrho_3, \varrho_1])$ in the Euclidean space $\mathbb{R}^2$ is a triangle $\overline{\triangle}(\varrho_1, \varrho_2, \varrho_3) = \triangle(\bar{\varrho}_1, \bar{\varrho}_2, \bar{\varrho}_3)$ such that $\zeta(\varrho_i, \varrho_j) = \zeta_{\mathbb{R}^2}(\bar{\varrho}_i, \bar{\varrho}_j)$ for all $i, j = 1, 2, 3$. A comparison triangle exists for every geodesic triangle.
A geodesic metric space $(\Upsilon, \zeta)$ is called a CAT(0) space if, for every geodesic triangle $\triangle$ in $\Upsilon$ and a comparison triangle $\overline{\triangle}$ of $\triangle$, with $r, s \in \triangle$ and the corresponding comparison points $\bar{r}, \bar{s} \in \overline{\triangle}$, the CAT(0) inequality
$$\zeta(r, s) \le \zeta_{\mathbb{R}^2}(\bar{r}, \bar{s})$$
holds.
is established. Pre-Hilbert spaces, R -trees, Euclidean buildings [18], and the complex Hilbert ball with a hyperbolic metric [19] are some examples of CAT(0) spaces.
Let $p, \varrho_1, \varrho_2$ be points in a CAT(0) space and let $\varrho_0$ be the midpoint of $[\varrho_1, \varrho_2]$, written $\frac{\varrho_1 \oplus \varrho_2}{2}$. Then the CAT(0) inequality implies
$$\zeta^2\Big(p, \frac{\varrho_1 \oplus \varrho_2}{2}\Big) = \zeta^2(p, \varrho_0) \le \frac{1}{2}\zeta^2(p, \varrho_1) + \frac{1}{2}\zeta^2(p, \varrho_2) - \frac{1}{4}\zeta^2(\varrho_1, \varrho_2). \qquad (2)$$
Inequality (2) is called the (CN) inequality [20]. A geodesic metric space is a CAT(0) space if and only if it satisfies the (CN) inequality.
Complete CAT(0) spaces are often called Hadamard spaces (in honor of Jacques Hadamard) [19].
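In the Euclidean plane the (CN) inequality holds with equality (it is the classical median identity), which gives an easy numerical sanity check; the snippet below is only such an illustration in $\mathbb{R}^2$ and is not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q1, q2 = rng.standard_normal((3, 2))  # three random points in the Euclidean plane
m = 0.5 * (q1 + q2)                      # midpoint of the segment [q1, q2]

d = lambda a, b: np.linalg.norm(a - b)
lhs = d(p, m) ** 2
rhs = 0.5 * d(p, q1) ** 2 + 0.5 * d(p, q2) ** 2 - 0.25 * d(q1, q2) ** 2
print(lhs, rhs)  # in R^2 the two sides coincide; in a general CAT(0) space lhs <= rhs
```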
In 1976, Lim [21] introduced the concept of $\Delta$-convergence in general metric spaces, and Kirk and Panyanak [22] later adapted it to CAT(0) spaces, where it closely resembles weak convergence in Banach spaces. Assume that $\{\eta_n\}$ is a bounded sequence in $\Upsilon$ and define the continuous function $\upsilon(\cdot, \{\eta_n\}) : \Upsilon \to [0, \infty)$ by
$$\upsilon(\eta, \{\eta_n\}) = \limsup_{n\to\infty} \zeta(\eta, \eta_n).$$
The asymptotic radius of $\{\eta_n\}$ is defined as
$$\upsilon(\{\eta_n\}) = \inf\{ \upsilon(\eta, \{\eta_n\}) : \eta \in \Upsilon\}.$$
The asymptotic center $A_c(\{\eta_n\})$ of $\{\eta_n\}$ is
$$A_c(\{\eta_n\}) = \{ \eta \in \Upsilon : \upsilon(\eta, \{\eta_n\}) = \upsilon(\{\eta_n\})\}.$$
It is well known from Proposition 7 of [23] that, in a complete CAT(0) space, $A_c(\{\eta_n\})$ consists of exactly one point.
A sequence $\{\eta_n\}$ in a complete CAT(0) space $(\Upsilon, \zeta)$ is said to $\Delta$-converge to $\eta \in \Upsilon$ if $A_c(\{\eta_{n_k}\}) = \{\eta\}$ for every subsequence $\{\eta_{n_k}\}$ of $\{\eta_n\}$ [22].
The notion of quasilinearization in a CAT(0) space $\Upsilon$ was introduced by Berg and Nikolaev [24]. They denote a pair $(c, w) \in \Upsilon \times \Upsilon$ by $\overrightarrow{cw}$ and call it a vector. The quasilinearization map $\langle \cdot, \cdot \rangle : (\Upsilon \times \Upsilon) \times (\Upsilon \times \Upsilon) \to \mathbb{R}$ is defined by
$$\langle \overrightarrow{cw}, \overrightarrow{ez} \rangle = \frac{1}{2}\big[\zeta^2(c, z) + \zeta^2(w, e) - \zeta^2(c, e) - \zeta^2(w, z)\big] \quad \text{for all } c, w, e, z \in \Upsilon.$$
It is simple to verify that $\langle \overrightarrow{cw}, \overrightarrow{ez} \rangle = \langle \overrightarrow{ez}, \overrightarrow{cw} \rangle$, $\langle \overrightarrow{cw}, \overrightarrow{ez} \rangle = -\langle \overrightarrow{wc}, \overrightarrow{ez} \rangle$, $\langle \overrightarrow{cw}, \overrightarrow{cw} \rangle = \zeta^2(c, w)$, and $\langle \overrightarrow{cw}, \overrightarrow{ez} \rangle = \langle \overrightarrow{cx}, \overrightarrow{ez} \rangle + \langle \overrightarrow{xw}, \overrightarrow{ez} \rangle$ for all $c, w, e, z, x \in \Upsilon$. A geodesic metric space is a CAT(0) space if and only if it satisfies the Cauchy–Schwarz inequality $\langle \overrightarrow{cw}, \overrightarrow{ez} \rangle \le \zeta(c, w)\,\zeta(e, z)$ for all $c, w, e, z \in \Upsilon$.
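A quick way to internalize the quasilinearization formula is to verify, in the Euclidean plane, that it reproduces the ordinary inner product and satisfies the Cauchy–Schwarz inequality; the snippet below is only such a numerical illustration.

```python
import numpy as np

def quasi_inner(d, c, w, e, z):
    """Berg-Nikolaev quasilinearization <cw, ez> expressed through distances only."""
    return 0.5 * (d(c, z) ** 2 + d(w, e) ** 2 - d(c, e) ** 2 - d(w, z) ** 2)

d = lambda a, b: np.linalg.norm(a - b)
rng = np.random.default_rng(1)
c, w, e, z = rng.standard_normal((4, 2))

lhs = quasi_inner(d, c, w, e, z)
print(lhs, np.dot(w - c, z - e))          # in R^2 the two values agree
print(lhs <= d(c, w) * d(e, z) + 1e-12)   # Cauchy-Schwarz holds
```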
Theorem 1 
([25]). Assume that $\Phi$ is a nonempty, closed, and convex subset of a complete CAT(0) space $\Upsilon$, $\mu \in \Upsilon$, and $\nu \in \Phi$. Then the metric projection $P_\Phi$ satisfies
$$\nu = P_\Phi\,\mu \iff \langle \overrightarrow{y\nu}, \overrightarrow{\nu\mu} \rangle \ge 0 \quad \text{for each } y \in \Phi.$$
Assume that $(\Upsilon, \zeta)$ is a metric space. A mapping $\omega : \Upsilon \to \Upsilon$ is called nonexpansive if
$$\zeta(\omega(\gamma), \omega(\xi)) \le \zeta(\gamma, \xi)$$
for all $\gamma, \xi \in \Upsilon$. We denote the set of fixed points of $\omega$ by $F(\omega)$, i.e.,
$$F(\omega) = \{ x \in \Upsilon : \omega x = x\}.$$
Suppose that $(\Upsilon, \zeta)$ is a CAT(0) space. A mapping $T^* : \Upsilon \to \Upsilon$ is an adjoint operator [26] of $T$ if, for all $\xi, \eta, \mu, \nu \in \Upsilon$, we have
$$\langle \overrightarrow{T\xi\, T\mu}, \overrightarrow{\eta\nu} \rangle = \langle T\overrightarrow{\xi\mu}, \overrightarrow{\eta\nu} \rangle = \langle \overrightarrow{\xi\mu}, T^*\overrightarrow{\eta\nu} \rangle = \langle \overrightarrow{\xi\mu}, \overrightarrow{T^*\eta\, T^*\nu} \rangle.$$
Whenever $T$ is a linear operator, $T^*$ is also linear. As in a Hilbert space, we have
$$d^2(T^*\eta, T^*\nu) = d^2(T\xi, T\mu) \le M\, d^2(\xi, \nu),$$
and so $T^*$ is bounded on $\Upsilon$.
Lemma 1 
([27]). Let $p_1, p_2, p \in \Upsilon$ and $\mu \in [0, 1]$. Then
(i) 
$$d\big(\mu p_1 \oplus (1 - \mu) p_2,\ p\big) \le \mu\, d(p_1, p) + (1 - \mu)\, d(p_2, p),$$
(ii) 
$$d^2\big(\mu p_1 \oplus (1 - \mu) p_2,\ p\big) \le \mu\, d^2(p_1, p) + (1 - \mu)\, d^2(p_2, p) - \mu(1 - \mu)\, d^2(p_1, p_2).$$
Lemma 2 
([28]). Let $p_1, p_2, p \in \Upsilon$ and $\mu, \gamma \in [0, 1]$. Then
(i) 
$$d\big(\mu p_1 \oplus (1 - \mu) p_2,\ \gamma p_1 \oplus (1 - \gamma) p_2\big) = |\mu - \gamma|\, d(p_1, p_2),$$
(ii) 
$$d\big(\mu p_1 \oplus (1 - \mu) p_2,\ \mu p_1 \oplus (1 - \mu) p\big) \le (1 - \mu)\, d(p_2, p).$$
Lemma 3 
([25]). Let $p_1, p_2, p \in \Upsilon$ and $\mu \in [0, 1]$. Then
$$d^2\big(\mu p_1 \oplus (1 - \mu) p_2,\ p\big) \le \mu^2 d^2(p_1, p) + (1 - \mu)^2 d^2(p_2, p) + 2\mu(1 - \mu)\, \langle \overrightarrow{p_1 p}, \overrightarrow{p_2 p} \rangle.$$
Lemma 4 
([29]). For $(\Upsilon, d)$, the inequality stated below holds:
$$d^2(p_1, p_3) \le d^2(p_2, p_3) + 2\, \langle \overrightarrow{p_1 p_2}, \overrightarrow{p_1 p_3} \rangle \quad \text{for all } p_1, p_2, p_3 \in \Upsilon.$$
Lemma 5 
([22]). In a complete CAT(0) space $(\Upsilon, d)$, every bounded sequence has a $\Delta$-convergent subsequence.
Lemma 6. 
Suppose that $\Phi \neq \emptyset$ is a convex and closed subset of the CAT(0) space $\Upsilon$ and that $\Pi : \Phi \to \Phi$ is a nonexpansive mapping such that $\{\iota_n\}$ $\Delta$-converges to $\iota \in \Phi$ and $d(\iota_n, \Pi\iota_n) \to 0$. Then $\Pi\iota = \iota$.
Proof. 
Since
$$\limsup_{n\to\infty} d(\Pi(\iota), \iota_n) \le \limsup_{n\to\infty}\big[d(\Pi(\iota), \Pi(\iota_n)) + d(\Pi(\iota_n), \iota_n)\big] \le \limsup_{n\to\infty}\big[d(\iota, \iota_n) + d(\iota_n, \Pi\iota_n)\big] = \limsup_{n\to\infty} d(\iota, \iota_n).$$
By Opial’s condition in C A T ( 0 ) spaces, Π ( ι ) = ι .    □
Lemma 7 
([30]). Assume that $\{\xi_n\}$ is a sequence of real numbers for which there exists a subsequence $\{\xi_{n_j}\}$ of $\{\xi_n\}$ with $\xi_{n_j} < \xi_{n_j + 1}$ for each $j \ge 1$. Then there exists a nondecreasing sequence $\{b_k\}$ of positive integers with $b_k \to \infty$ such that the two inequalities
$$\xi_{b_k} \le \xi_{b_k + 1} \quad \text{and} \quad \xi_k \le \xi_{b_k + 1}$$
hold for all (sufficiently large) numbers $k$. Indeed, $b_k$ is the largest number $n$ in the set $\{1, 2, \ldots, k\}$ such that $\xi_n < \xi_{n+1}$.
Lemma 8 
([31]). Assume that $\{\gamma_n\}$ is a sequence of nonnegative real numbers, $\{\rho_n\}$ is a sequence of real numbers with $\limsup_{n\to\infty} \rho_n \le 0$, and $\{\beta_n\}$ is a sequence in $(0, 1)$ with $\sum_{n=0}^{\infty} \beta_n = \infty$. Suppose that
$$\gamma_{n+1} \le (1 - \beta_n)\gamma_n + \beta_n \rho_n \quad \text{for each } n \ge 1.$$
Then $\lim_{n\to\infty} \gamma_n = 0$.
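As a quick numerical illustration of Lemma 8 (with hypothetical choices of the sequences, not taken from the paper), the recursion below drives $\gamma_n$ to zero:

```python
# Illustrative check of Lemma 8 with beta_n = rho_n = 1/(n+1), so that
# sum(beta_n) diverges and limsup(rho_n) <= 0; gamma_n should tend to 0.
gamma = 10.0
for n in range(1, 100_001):
    beta = rho = 1.0 / (n + 1)
    gamma = (1.0 - beta) * gamma + beta * rho  # worst case: equality in the recursion
    if n % 25_000 == 0:
        print(n, round(gamma, 6))  # values decay toward 0, slowly since beta_n -> 0
```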
Lemma 9. 
Suppose that $\{\gamma_n\}$ is a sequence of positive real numbers, $\{\rho_n\}$ is a sequence of real numbers, and $\{\beta_n\}$ is a sequence such that $0 < \beta_n < 1$ and $\sum_{n=0}^{\infty} \beta_n = \infty$. Let
$$\gamma_{n+1} \le (1 - \beta_n)\gamma_n + \beta_n \rho_n, \quad n \ge 1.$$
If $\limsup_{k\to\infty} \rho_{n_k} \le 0$ for each subsequence $\{\gamma_{n_k}\}$ of $\{\gamma_n\}$ satisfying $\liminf_{k\to\infty}(\gamma_{n_k+1} - \gamma_{n_k}) \ge 0$, then $\lim_{n\to\infty} \gamma_n = 0$.
Proof. 
There are two cases in the proof.
Case 1: There exists an $n_0 \in \mathbb{N}$ such that $\gamma_{n+1} \le \gamma_n$ for every $n \ge n_0$. Then $\{\gamma_n\}$ is eventually nonincreasing and bounded below, hence convergent, so that
$$\liminf_{n\to\infty}(\gamma_{n+1} - \gamma_n) = 0.$$
Therefore, by the hypothesis, we have
$$\limsup_{n\to\infty} \rho_n \le 0.$$
Lemma 8 now leads to the conclusion.
Case 2: There is a subsequence $\{\gamma_{m_j}\}$ of $\{\gamma_n\}$ such that $\gamma_{m_j} < \gamma_{m_j + 1}$ for each $j \in \mathbb{N}$. In this case, Lemma 7 yields a nondecreasing sequence $\{n_k\}$ of positive integers with $n_k \to \infty$ such that the two inequalities
$$\gamma_{n_k} \le \gamma_{n_k + 1} \quad \text{and} \quad \gamma_k \le \gamma_{n_k + 1}$$
hold for all (sufficiently large) numbers $k$. The first inequality implies that
$$\liminf_{k\to\infty}(\gamma_{n_k+1} - \gamma_{n_k}) \ge 0.$$
It follows that
$$\limsup_{k\to\infty} \rho_{n_k} \le 0.$$
Additionally, using the first inequality again, we obtain
$$\gamma_{n_k+1} \le (1 - \beta_{n_k})\gamma_{n_k} + \beta_{n_k}\rho_{n_k} \le (1 - \beta_{n_k})\gamma_{n_k+1} + \beta_{n_k}\rho_{n_k}.$$
In particular, since each $\beta_{n_k} > 0$, we have
$$\gamma_{n_k+1} \le \rho_{n_k}.$$
Finally, due to the second inequality, we obtain
$$\limsup_{k\to\infty} \gamma_k \le \limsup_{k\to\infty} \gamma_{n_k+1} \le \limsup_{k\to\infty} \rho_{n_k} \le 0.$$
Thus
$$\lim_{k\to\infty} \gamma_k = 0.$$
   □

3. Problem Formulation

Let $\Upsilon$ and $\Upsilon_i$, $i = 1, 2, \ldots, N$, be complete CAT(0) spaces and let $T_i : \Upsilon \to \Upsilon_i$, $i = 1, 2, \ldots, N$, be bounded operators. Let $S_j : \Upsilon \to \Upsilon$, $j = 1, 2, \ldots, M$, and $\Xi_k^i : \Upsilon_i \to \Upsilon_i$, $i = 1, 2, \ldots, N$, $k = 1, 2, \ldots, M_i$, be nonexpansive mappings. We denote the solution set of the SCFPP with multiple output sets by
$$\Omega = \Big(\bigcap_{j=1}^{M}\mathrm{Fix}(S_j)\Big) \cap \Big(\bigcap_{i=1}^{N} T_i^{-1}\Big(\bigcap_{k=1}^{M_i}\mathrm{Fix}(\Xi_k^i)\Big)\Big) \neq \emptyset.$$
Assume that $h : \Upsilon \to \Upsilon$ is a strict contraction with contraction coefficient $l \in [0, 1)$. Our variational inequality problem (VIP) over the split fixed point problem with multiple output sets is to find an element $x \in \Omega$ such that
$$\langle \overrightarrow{h(x)\,x}, \overrightarrow{x\,y} \rangle \ge 0 \quad \text{for all } y \in \Omega.$$

4. Results

We first introduce the following algorithm for solving the problem formulated in Section 3.
Algorithm 1: For $x_0 \in \Upsilon$ arbitrarily chosen, define the sequence $\{x_n\}$ as given below.
Step 1 Calculate
$$y_{j,n} = S_j x_n,$$
for each $j = 1, 2, \ldots, M$, and set
$$g_n = \max_{j = 1, 2, \ldots, M}\{ d(y_{j,n}, x_n)\}.$$
Step 2 Compute
$$z_{k,n}^i = \Xi_k^i(T_i x_n),$$
for each $i = 1, 2, \ldots, N$ and $k = 1, 2, \ldots, M_i$, and set
$$g_{i,n} = \max_{k = 1, 2, \ldots, M_i}\{ d(z_{k,n}^i, T_i x_n)\}, \quad i = 1, 2, \ldots, N.$$
Step 3 Let
$$\Gamma_n = \max\big\{ g_n,\ \max_{i = 1, 2, \ldots, N}\{ g_{i,n}\}\big\}.$$
If $g_n = \Gamma_n$, let $t_n = y_{j_n, n}$ for an index $j_n \in \{1, 2, \ldots, M\}$ attaining the maximum, and let $\Theta = I$.
Otherwise, $g_{i_n, n} = \Gamma_n$ for some $i_n$; then let $t_n = z_{k_n, n}^{i_n}$ for an index $k_n \in \{1, 2, \ldots, M_{i_n}\}$ attaining the maximum, and let $\Theta = T_{i_n}$.
Step 4 Compute
$$u_n = \lambda x_n \oplus (1 - \lambda)\, \Theta^*\big(\mu\, \Theta x_n \oplus (1 - \mu)\, t_n\big).$$
Step 5 Compute
$$x_{n+1} = \beta_n h(x_n) \oplus (1 - \beta_n) u_n, \quad n \ge 0,$$
where $\lambda, \mu \in (0, 1)$, $\{\beta_n\} \subset (0, 1)$, and $h : \Upsilon \to \Upsilon$ is a strict contraction with contraction coefficient $l \in [0, 1)$.
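To make the steps concrete, the following is a minimal Python sketch of Algorithm 1 in the special Euclidean (Hilbert-space) case, where the geodesic combination $a \oplus b$ reduces to a linear convex combination and the adjoint $\Theta^*$ of a matrix is its transpose; the parameter defaults and helper names are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def run_algorithm1(S_list, Xi_lists, T_list, h, x0, lam=0.5, mu=0.5,
                   beta=lambda n: 1.0 / (n + 2), n_iter=100):
    """Euclidean sketch of Algorithm 1.

    S_list   : list of maps S_j : R^d -> R^d
    Xi_lists : Xi_lists[i] is the list of maps Xi_k^i acting on the range of T_i
    T_list   : list of matrices T_i (their adjoints are the transposes)
    h        : strict contraction h : R^d -> R^d
    """
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        # Step 1: displacements under the mappings S_j.
        ys = [S(x) for S in S_list]
        dists_S = [np.linalg.norm(y - x) for y in ys]
        g_n = max(dists_S)

        # Step 2: displacements under the mappings Xi_k^i in the image spaces.
        g_in, data = [], []
        for T_i, Xis in zip(T_list, Xi_lists):
            Tx = T_i @ x
            zs = [Xi(Tx) for Xi in Xis]
            dists = [np.linalg.norm(z - Tx) for z in zs]
            g_in.append(max(dists))
            data.append((Tx, zs, dists))

        # Steps 3-4: pick t_n and Theta from the largest displacement and form u_n.
        if g_n >= max(g_in):
            t_n = ys[int(np.argmax(dists_S))]                     # Theta = I
            u = lam * x + (1 - lam) * (mu * x + (1 - mu) * t_n)
        else:
            i_n = int(np.argmax(g_in))
            Tx, zs, dists = data[i_n]
            t_n = zs[int(np.argmax(dists))]
            T_i = T_list[i_n]                                     # Theta = T_{i_n}
            u = lam * x + (1 - lam) * (T_i.T @ (mu * Tx + (1 - mu) * t_n))

        # Step 5: viscosity step with the strict contraction h.
        b = beta(n)
        x = b * h(x) + (1 - b) * u
    return x
```

Any nonexpansive maps and matrices of compatible dimensions can be passed in; the adjoint step that this sketch performs via the transpose is also carried out by hand in Example 1 of Section 6.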
We begin the analysis of the algorithm with the following proposition.
Proposition 1. 
Let $\Upsilon_1$ and $\Upsilon_2$ be two CAT(0) spaces, let $A : \Upsilon_1 \to \Upsilon_2$ be a bounded linear operator, and let $\Xi : \Upsilon_2 \to \Upsilon_2$ be a nonexpansive mapping. Suppose that $\Omega = \{ q \in \Upsilon_1 : A q \in \mathrm{Fix}(\Xi)\} \neq \emptyset$. Then, for any $q \in \Omega$, $x \in \Upsilon_1$, and $r \in \Upsilon_2$, we have
$$\begin{aligned} d^2\big(\lambda x \oplus (1-\lambda) A^*(\mu I \oplus (1-\mu)\Xi) A x,\ q\big) \le{}& \lambda\, d^2(x, q) + 2\mu(1-\lambda)\, d^2(I(Ax), r) + 2(1-\mu)(1-\lambda)\, d^2(\Xi(Ax), r) \\ &- 2\mu(1-\mu)(1-\lambda)\, d^2(I(Ax), \Xi(Ax)) + (1-\lambda)\, d^2(r, Aq) + (1-\lambda)\, d^2(A^* r, q). \end{aligned}$$
Proof. 
Consider
$$\upsilon = \lambda x \oplus (1-\lambda) A^*(\mu I \oplus (1-\mu)\Xi) A x.$$
Then, we have
d 2 ( υ , q ) = d 2 ( λ x ( 1 λ ) A * ( μ I ( 1 μ ) Ξ ) A x , q ) = λ 2 d 2 ( x , q ) + ( 1 λ ) 2 d 2 ( A * ( μ I ( 1 μ ) Ξ ) A x , q ) + 2 λ ( 1 λ ) x q , A * ( μ I ( 1 μ ) Ξ ) A x q λ 2 d 2 ( x , q ) + ( 1 λ ) 2 d 2 ( A * ( μ I ( 1 μ ) Ξ ) A x , q ) + λ ( 1 λ ) 2 d ( x , q ) d ( A * ( μ I ( 1 μ ) Ξ ) A x , q ) λ 2 d 2 ( x , q ) + ( 1 λ ) 2 d 2 ( A * ( μ I ( 1 μ ) Ξ ) A x , q ) + λ ( 1 λ ) [ d 2 ( x , q ) + d 2 ( A * ( μ I ( 1 μ ) Ξ ) A x , q ) ] = λ 2 d 2 ( x , q ) + ( 1 + λ 2 2 λ ) d 2 ( A * ( μ I ( 1 μ ) Ξ ) A x , q ) + λ d 2 ( x , q ) λ 2 d 2 ( x , q ) + ( λ λ 2 ) d 2 ( A * ( μ I ( 1 μ ) Ξ ) A x , q ) = λ d 2 ( x , q ) + ( 1 λ ) d 2 ( A * ( μ I ( 1 μ ) Ξ ) A x , q ) .
Using the properties of quasilinearization and adjoint operator, we have
d 2 ( υ , q ) λ d 2 ( x , q ) + ( 1 λ ) A * ( μ I ( 1 μ ) Ξ ) A x q , A * ( μ I ( 1 μ ) Ξ ) A x q = λ d 2 ( x , q ) + ( 1 λ ) { A * ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) A * r , A * ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r + A * r q , A * ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) q } = λ d 2 ( x , q ) + ( 1 λ ) { A * [ ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r ] , A * ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) A * r + A * [ ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r ] , A * r q + A * r q , A * ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) A * r + A * r q , A * r q } = λ d 2 ( x , q ) + ( 1 λ ) { ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r , A ( A * [ ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r ] + ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r , A ( A * r q ) + A * r q , A * [ ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r ] + A * r q , A * r q } = λ d 2 ( x , q ) + ( 1 λ ) ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r , ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r + ( 1 λ ) ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r , r A q + ( 1 λ ) A ( A * r q ) , ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r + ( 1 λ ) A * r q , A * r q
= λ d 2 ( x , q ) + ( 1 λ ) d 2 ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) , r ) + 2 ( 1 λ ) ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) ) r , r A q + ( 1 λ ) d 2 ( A * r , q ) λ d 2 ( x , q ) + ( 1 λ ) d 2 ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) , r ) + 2 ( 1 λ ) d ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) , r ) d ( r , A q ) + ( 1 λ ) d 2 ( A * r , q ) λ d 2 ( x , q ) + ( 1 λ ) d 2 ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) , r ) + ( 1 λ ) d 2 ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) , r ) + ( 1 λ ) d 2 ( r , A q ) + ( 1 λ ) d 2 ( A * r , q ) = λ d 2 ( x , q ) + 2 ( 1 λ ) d 2 ( μ I ( A x ) ( 1 μ ) Ξ ( A x ) , r ) + ( 1 λ ) d 2 ( r , A q ) + ( 1 λ ) d 2 ( A * r , q ) λ d 2 ( x , q ) + 2 ( 1 λ ) { μ d 2 ( I ( A x ) , r ) + ( 1 μ ) d 2 ( Ξ ( A x ) , r ) μ ( 1 μ ) d 2 ( I ( A x ) , Ξ ( A x ) ) } + ( 1 λ ) d 2 ( r , A q ) + ( 1 λ ) d 2 ( A * r , q )
d 2 ( υ , q ) λ d 2 ( x , q ) + 2 μ ( 1 λ ) d 2 ( I ( A x ) , r ) + 2 ( 1 μ ) ( 1 λ ) d 2 ( Ξ ( A x ) , r ) 2 μ ( 1 μ ) ( 1 λ ) d 2 ( I ( A x ) , Ξ ( A x ) ) + ( 1 λ ) d 2 ( r , A q ) + ( 1 λ ) d 2 ( A * r , q ) .
Thus, we proved that
$$\begin{aligned} d^2\big(\lambda x \oplus (1-\lambda) A^*(\mu I \oplus (1-\mu)\Xi) A x,\ q\big) \le{}& \lambda\, d^2(x, q) + 2\mu(1-\lambda)\, d^2(I(Ax), r) + 2(1-\mu)(1-\lambda)\, d^2(\Xi(Ax), r) \\ &- 2\mu(1-\mu)(1-\lambda)\, d^2(I(Ax), \Xi(Ax)) + (1-\lambda)\, d^2(r, Aq) + (1-\lambda)\, d^2(A^* r, q). \end{aligned}$$
Our proof is now complete.    □
Now, we will prove a proposition consisting of two important properties that will be used to prove our convergence result.
Proposition 2. 
Assume that $\{x_n\}$ is a sequence produced by Algorithm 1. Then the following two statements hold:
(i) For $\sigma \in \Omega$:
when $g_n = \max\{ g_n,\ \max_{i=1,2,\ldots,N}\{ g_{i,n}\}\}$,
$$\Gamma_n^2 \le \frac{1}{2\mu(1-\mu)(1-\lambda)}\Big[\lambda s_n - s_{n+1} + \beta_n\, d^2(h(x_n), \sigma) + 2\mu(1-\lambda)\, d^2(x_n, r_n) + 2(1-\mu)(1-\lambda)\, d^2(y_{j,n}, r_n) + 2(1-\lambda)\, d^2(r_n, \sigma)\Big],$$
and when $g_{i_n,n} = \max\{ g_n,\ \max_{i=1,2,\ldots,N}\{ g_{i,n}\}\}$,
$$\Gamma_n^2 \le \frac{1}{2\mu(1-\mu)(1-\lambda)}\Big[\lambda s_n - s_{n+1} + \beta_n\, d^2(h(x_n), \sigma) + 2\mu(1-\lambda)\, d^2(\Pi_{i_n} x_n, r_n) + 2(1-\mu)(1-\lambda)\, d^2(\Xi_{k_n}(\Pi_{i_n} x_n), r_n) + (1-\lambda)\, d^2(r_n, \Pi_{i_n}\sigma) + (1-\lambda)\, d^2(\Pi_{i_n}^* r_n, \sigma)\Big],$$
where $s_n = d^2(x_n, \sigma)$ and $\{r_n\}$ is a bounded sequence in the corresponding image space.
(ii) The following inequality holds:
$$s_{n+1} \le (1 - \bar{\beta}_n) s_n + \bar{\beta}_n b_n,$$
where
$$\bar{\beta}_n = \frac{\beta_n(1 - 2l)}{1 - \beta_n l} \quad \text{and} \quad b_n = \frac{1}{1 - 2l}\, \langle \overrightarrow{h(\sigma)\,\sigma}, \overrightarrow{x_{n+1}\,\sigma} \rangle.$$
Proof. 
We take into consideration the given two cases.
Case I: $g_n = \max\{ g_n,\ \max_{i=1,2,\ldots,N}\{ g_{i,n}\}\}$, i.e., $\Gamma_n = g_n$.
Applying Proposition 1 with $A = \Theta = I$ and $\Xi = S_{j_n}$ in this case, we have, for any $\sigma \in \Omega$,
$$d^2(u_n, \sigma) = d^2\big(\lambda x_n \oplus (1-\lambda)\Theta^*(\mu\Theta x_n \oplus (1-\mu) t_n),\ \sigma\big) = d^2\big(\lambda x_n \oplus (1-\lambda)\Theta^*(\mu\Theta x_n \oplus (1-\mu) S_{j_n} x_n),\ \sigma\big) = d^2\big(\lambda x_n \oplus (1-\lambda)\Theta^*(\mu I \oplus (1-\mu) S_{j_n})\Theta x_n,\ \sigma\big).$$
Proposition 1 implies
$$\begin{aligned} d^2(u_n, \sigma) \le{}& \lambda\, d^2(x_n, \sigma) + 2\mu(1-\lambda)\, d^2(\Theta x_n, r_n) + 2(1-\mu)(1-\lambda)\, d^2(S_{j_n}(\Theta x_n), r_n) \\ &- 2\mu(1-\mu)(1-\lambda)\, d^2(\Theta x_n, S_{j_n}(\Theta x_n)) + (1-\lambda)\, d^2(r_n, \Theta\sigma) + (1-\lambda)\, d^2(\Theta^* r_n, \sigma) \\ ={}& \lambda\, d^2(x_n, \sigma) + 2\mu(1-\lambda)\, d^2(x_n, r_n) + 2(1-\mu)(1-\lambda)\, d^2(S_{j_n}(x_n), r_n) \\ &- 2\mu(1-\mu)(1-\lambda)\, d^2(x_n, S_{j_n} x_n) + (1-\lambda)\, d^2(r_n, \sigma) + (1-\lambda)\, d^2(r_n, \sigma) \\ ={}& \lambda\, d^2(x_n, \sigma) + 2\mu(1-\lambda)\, d^2(x_n, r_n) + 2(1-\mu)(1-\lambda)\, d^2(y_{j,n}, r_n) \\ &- 2\mu(1-\mu)(1-\lambda)\, d^2(x_n, y_{j,n}) + 2(1-\lambda)\, d^2(r_n, \sigma). \end{aligned}$$
Case II: $g_{i_n,n} = \max\{ g_n,\ \max_{i=1,2,\ldots,N}\{ g_{i,n}\}\}$, i.e., $\Gamma_n = g_{i_n,n}$.
Applying Proposition 1 with $A = \Theta = \Pi_{i_n}$ and $\Xi = \Xi_{k_n}$, we have
$$\begin{aligned} d^2(u_n, \sigma) \le{}& \lambda\, d^2(x_n, \sigma) + 2\mu(1-\lambda)\, d^2(\Pi_{i_n} x_n, r_n) + 2(1-\mu)(1-\lambda)\, d^2(\Xi_{k_n}(\Pi_{i_n} x_n), r_n) \\ &- 2\mu(1-\mu)(1-\lambda)\, d^2(\Pi_{i_n} x_n, \Xi_{k_n}(\Pi_{i_n} x_n)) + (1-\lambda)\, d^2(r_n, \Pi_{i_n}\sigma) + (1-\lambda)\, d^2(\Pi_{i_n}^* r_n, \sigma). \end{aligned}$$
We now demonstrate the boundedness of the sequence { x n } .
$$\begin{aligned} d(x_{n+1}, \sigma) &= d(\beta_n h(x_n) \oplus (1-\beta_n) u_n,\ \sigma) \\ &\le \beta_n\, d(h(x_n), \sigma) + (1-\beta_n)\, d(u_n, \sigma) \\ &\le \beta_n\big[d(h(x_n), h(\sigma)) + d(h(\sigma), \sigma)\big] + (1-\beta_n)\, d(u_n, \sigma) \\ &\le l\beta_n\, d(x_n, \sigma) + \beta_n\, d(h(\sigma), \sigma) + (1-\beta_n)\, d(u_n, \sigma) \\ &\le l\beta_n\, d(x_n, \sigma) + \beta_n\, d(h(\sigma), \sigma) + (1-\beta_n)\, d(x_n, \sigma) \\ &= \big[l\beta_n + (1-\beta_n)\big]\, d(x_n, \sigma) + \beta_n\, d(h(\sigma), \sigma) \\ &= \big[1 - (1-l)\beta_n\big]\, d(x_n, \sigma) + \beta_n(1-l)\,\frac{d(h(\sigma), \sigma)}{1-l} \\ &\le \max\Big\{ d(x_n, \sigma),\ \frac{d(h(\sigma), \sigma)}{1-l} \Big\} \le \cdots \le \max\Big\{ d(x_0, \sigma),\ \frac{d(h(\sigma), \sigma)}{1-l} \Big\}. \end{aligned}$$
It follows that { x n } is a bounded sequence.
$$d^2(x_{n+1}, \sigma) = d^2(\beta_n h(x_n) \oplus (1-\beta_n) u_n,\ \sigma) \le \beta_n\, d^2(h(x_n), \sigma) + (1-\beta_n)\, d^2(u_n, \sigma) \le \beta_n\, d^2(h(x_n), \sigma) + d^2(u_n, \sigma).$$
In Case I,
$$\Gamma_n = g_n = \max_{j=1,2,\ldots,M}\{ d(x_n, y_{j,n})\},$$
combining (8) and (10)
d 2 ( x n + 1 , σ ) β n d 2 ( h ( x n ) , σ ) + λ d 2 ( x n , σ ) + 2 μ ( 1 λ ) d 2 ( x n , r n ) + 2 ( 1 μ ) ( 1 λ ) d 2 ( y j , n , r n ) 2 μ ( 1 μ ) ( 1 λ ) Γ n 2 + 2 ( 1 λ ) d 2 ( r n , σ ) 2 μ ( 1 μ ) ( 1 λ ) Γ n 2 β n d 2 ( h ( x n ) , σ ) + λ d 2 ( x n , σ ) d 2 ( x n + 1 , σ ) + 2 μ ( 1 λ ) d 2 ( x n , r n ) + 2 ( 1 μ ) ( 1 λ ) d 2 ( y j , n , r n ) + 2 ( 1 λ ) d 2 ( r n , σ ) Γ n 2 1 2 μ ( 1 μ ) ( 1 λ ) [ λ s n s n + 1 + β n d 2 ( h ( x n ) , σ ) + 2 μ ( 1 λ ) d 2 ( x n , r n ) + 2 ( 1 μ ) ( 1 λ ) d 2 ( y j , n , r n ) + 2 ( 1 λ ) d 2 ( r n , σ ) ] .
In Case II
$$\Gamma_n = g_{i_n,n}, \quad \text{where } g_{i,n} = \max_{k=1,2,\ldots,M_i}\{ d(z_{k,n}^i, \Pi_i x_n)\},\ \ i = 1, 2, \ldots, N.$$
By joining (9) and (10), we obtain
d 2 ( x n + 1 , σ ) β n d 2 ( h ( x n ) , σ ) + λ d 2 ( x n , σ ) + 2 μ ( 1 λ ) d 2 ( Π i n x n , r n ) + 2 ( 1 μ ) ( 1 λ ) d 2 ( Ξ k n ( Π i n x n ) , r n ) 2 μ ( 1 μ ) ( 1 λ ) Γ n 2 + ( 1 λ ) d 2 ( r n , Π i n σ ) + ( 1 λ ) d 2 ( Π i n * r n , σ ) 2 μ ( 1 μ ) ( 1 λ ) Γ n 2 β n d 2 ( h ( x n ) , σ ) + λ d 2 ( x n , σ ) d 2 ( x n + 1 , σ ) + 2 μ ( 1 λ ) d 2 ( Π i n x n , r n ) + 2 ( 1 μ ) ( 1 λ ) d 2 ( Ξ k n ( Π i n x n ) , r n ) + ( 1 λ ) d 2 ( r n , Π i n σ ) + ( 1 λ ) d 2 ( Π i n * r n , σ ) Γ n 2 1 2 μ ( 1 μ ) ( 1 λ ) [ λ s n s n + 1 + β n d 2 ( h ( x n ) , σ ) + 2 μ ( 1 λ ) d 2 ( Π i n x n , r n ) + 2 ( 1 μ ) ( 1 λ ) d 2 ( Ξ k n ( Π i n x n ) , r n ) + ( 1 λ ) d 2 ( r n , Π i n σ ) + ( 1 λ ) d 2 ( Π i n * r n , σ ) ] .
Let us set
$$z_n = \beta_n \sigma \oplus (1-\beta_n) u_n.$$
It follows
d 2 ( x n + 1 , σ ) = d 2 ( β n h ( x n ) ( 1 β n ) u n , σ ) d 2 ( z n , σ ) + 2 x n + 1 z n , x n + 1 σ = d ( β n σ ( 1 β n ) u n , σ ) 2 + 2 ( β n h ( x n ) ( 1 β n ) u n ) z n , x n + 1 σ [ β n d ( σ , σ ) + ( 1 β n ) d ( u n , σ ) ] 2 + 2 [ β n h ( x n ) z n , x n + 1 σ + ( 1 β n ) u n z n , x n + 1 σ ] = ( 1 β n ) 2 d 2 ( u n , σ ) + 2 [ β n h ( x n ) ( β n σ ( 1 β n ) u n ) , x n + 1 σ + ( 1 β n ) u n ( β n σ ( 1 β n ) u n ) , x n + 1 σ ] ( 1 β n ) 2 d 2 ( u n , σ ) + 2 [ β n 2 h ( x n ) σ , x n + 1 σ + { β n ( 1 β n ) × h ( x n ) u n , x n + 1 σ } + β n ( 1 β n ) u n σ , x n + 1 σ + { ( 1 β n ) 2 × u n u n , x n + 1 σ } ] = ( 1 β n ) 2 d 2 ( u n , σ ) + 2 β n 2 h ( x n ) σ , x n + 1 σ + 2 β n ( 1 β n ) h ( x n ) σ , x n + 1 σ = ( 1 β n ) 2 d 2 ( u n , σ ) + 2 β n 2 h ( x n ) σ , x n + 1 σ + 2 β n h ( x n ) σ , x n + 1 σ 2 β n 2 h ( x n ) σ , x n + 1 σ = ( 1 β n ) 2 d 2 ( u n , σ ) + 2 β n h ( x n ) σ , x n + 1 σ . d 2 ( x n + 1 , σ ) 1 β n d 2 ( u n , σ ) + 2 β n h ( x n ) σ , x n + 1 σ .
Consider
$$\begin{aligned} \langle \overrightarrow{h(x_n)\,\sigma}, \overrightarrow{x_{n+1}\,\sigma} \rangle &= \langle \overrightarrow{h(x_n)\,h(\sigma)}, \overrightarrow{x_{n+1}\,\sigma} \rangle + \langle \overrightarrow{h(\sigma)\,\sigma}, \overrightarrow{x_{n+1}\,\sigma} \rangle \\ &\le d(h(x_n), h(\sigma))\, d(x_{n+1}, \sigma) + \langle \overrightarrow{h(\sigma)\,\sigma}, \overrightarrow{x_{n+1}\,\sigma} \rangle \\ &\le l\, d(x_n, \sigma)\, d(x_{n+1}, \sigma) + \langle \overrightarrow{h(\sigma)\,\sigma}, \overrightarrow{x_{n+1}\,\sigma} \rangle \\ &\le \frac{l}{2}\big[d^2(x_n, \sigma) + d^2(x_{n+1}, \sigma)\big] + \langle \overrightarrow{h(\sigma)\,\sigma}, \overrightarrow{x_{n+1}\,\sigma} \rangle. \end{aligned}$$
Now, by putting (12) in (11), we obtain
d 2 ( x n + 1 , σ ) ( 1 β n ) d 2 ( u n , σ ) + β n l d 2 ( x n , σ ) + β n l d 2 ( x n + 1 , σ ) + 2 β n h ( σ ) σ , x n + 1 σ d 2 ( x n + 1 , σ ) β n l d 2 ( x n + 1 , σ ) ( 1 β n + β n l ) d 2 ( x n , σ ) + 2 β n h ( σ ) σ , x n + 1 σ ( 1 β n l ) d 2 ( x n + 1 , σ ) ( 1 ( 1 l ) β n ) d 2 ( x n , σ ) + 2 β n h ( σ ) σ , x n + 1 σ d 2 ( x n + 1 , σ ) ( 1 ( 1 l ) β n ) 1 β n l d 2 ( x n , σ ) + 2 β n 1 β n l h ( σ ) σ , x n + 1 σ . s n + 1 ( 1 β n ) s n + β n b n ,
where
$$\bar{\beta}_n = \frac{\beta_n(1 - 2l)}{1 - \beta_n l} \quad \text{and} \quad b_n = \frac{1}{1 - 2l}\, \langle \overrightarrow{h(\sigma)\,\sigma}, \overrightarrow{x_{n+1}\,\sigma} \rangle.$$
   □
We now present the strong convergence of the sequence produced by Algorithm 1.
Theorem 2. 
If $\{x_n\}$ is a bounded sequence and the sequence $\{\beta_n\}$ satisfies
$$\beta_n \to 0, \qquad \sum_{n=0}^{\infty} \beta_n = \infty,$$
then the sequence $\{x_n\}$ iteratively produced by Algorithm 1 converges strongly to an element $x \in \Omega$, which is the unique solution of the variational inequality
$$\langle \overrightarrow{h(x)\,x}, \overrightarrow{x\,y} \rangle \ge 0 \quad \text{for all } y \in \Omega.$$
Proof. 
Let $x$ be the solution of the variational inequality
$$\langle \overrightarrow{h(x)\,x}, \overrightarrow{x\,y} \rangle \ge 0 \quad \text{for all } y \in \Omega,$$
i.e., $x = P_\Omega h(x)$.
Set $s_n = d^2(x_n, x)$. By applying Proposition 2, we obtain
$$s_{n+1} \le (1 - \bar{\beta}_n) s_n + \bar{\beta}_n b_n,$$
where
$$\bar{\beta}_n = \frac{\beta_n(1 - 2l)}{1 - \beta_n l} \quad \text{and} \quad b_n = \frac{1}{1 - 2l}\, \langle \overrightarrow{h(x)\,x}, \overrightarrow{x_{n+1}\,x} \rangle.$$
We now use Lemma 9 to show that $s_n \to 0$. Assume that $\{s_{n_k}\}$ is an arbitrarily chosen subsequence of $\{s_n\}$ that satisfies $\liminf_{k\to\infty}(s_{n_k+1} - s_{n_k}) \ge 0$.
By (5), we have
$$\Gamma_{n_k}^2 \le \frac{1}{2\mu(1-\mu)(1-\lambda)}\Big[\lambda s_{n_k} - s_{n_k+1} + \beta_{n_k}\, d^2(h(x_{n_k}), x) + 2\mu(1-\lambda)\, d^2(x_{n_k}, r_{n_k}) + 2(1-\mu)(1-\lambda)\, d^2(y_{j,n_k}, r_{n_k}) + 2(1-\lambda)\, d^2(r_{n_k}, x)\Big].$$
Since $\{x_n\}$ and $\{r_n\}$ are bounded, and so are $\{h(x_n)\}$ and $\{y_{j,n}\}$, we have
$$d^2(x_{n_k}, r_{n_k}) \le \beta_{n_k} M, \qquad d^2(y_{j,n_k}, r_{n_k}) \le \beta_{n_k} N, \qquad d^2(r_{n_k}, x) \le \beta_{n_k} R.$$
Also, $\beta_{n_k} \to 0$, which implies that
lim sup k Γ n k 2 1 2 μ ( 1 μ ) ( 1 λ ) [ lim sup k ( λ s n k s n k + 1 ) + lim sup k β n k d 2 ( h ( x n k ) , x ) + 2 lim sup k μ ( 1 λ ) d 2 ( x n k , r n k ) + 2 lim sup k ( 1 μ ) ( 1 λ ) d 2 ( y j , n , r n k ) + 2 lim sup k ( 1 λ ) d 2 ( r n k , x ) ]
1 2 μ ( 1 μ ) ( 1 λ ) [ lim sup k ( λ s n k s n k + 1 ) + lim sup k β n k d 2 ( h ( x n k ) , x ) + 2 lim sup k μ ( 1 λ ) β n k M + 2 lim sup k ( 1 μ ) ( 1 λ ) β n k N + 2 lim sup k ( 1 λ ) β n k R ] 0 .
Now, by (6)
Γ n k 2 1 2 μ ( 1 μ ) ( 1 λ ) [ λ s n k s n k + 1 + β n d 2 ( h ( x n k ) , x ) + 2 μ ( 1 λ ) d 2 ( Π i n x n k , r n k ) + 2 ( 1 μ ) ( 1 λ ) d 2 ( Ξ k n ( Π i n x n k ) , r n k ) + ( 1 λ ) d 2 ( r n k , Π i n x ) + ( 1 λ ) d 2 ( Π i n * r n k , x ) ] .
We have
d 2 ( Π i n x n k , r n k ) β n k H , d 2 ( Ξ k n ( Π i n x n k ) , r n k ) β n k I , d 2 ( r n k , Π i n x ) β n k J , d 2 ( Π i n * r n k , x ) β n k K .
It follows that
lim sup k Γ n k 2 1 2 μ ( 1 μ ) ( 1 λ ) [ lim sup k ( λ s n k s n k + 1 ) + lim sup k β n d 2 ( h ( x n k ) , x ) + 2 lim sup k μ ( 1 λ ) d 2 ( Π i n x n k , r n k ) + 2 lim sup k ( 1 μ ) ( 1 λ ) d 2 ( Ξ k n ( Π i n x n k ) , r n k ) + lim sup k ( 1 λ ) d 2 ( r n k , Π i n x ) + lim sup k ( 1 λ ) d 2 ( Π i n * r n k , x ) ] 1 2 μ ( 1 μ ) ( 1 λ ) [ lim sup k ( λ s n k s n k + 1 ) + lim sup k β n d 2 ( h ( x n k ) , x ) + 2 lim sup k μ ( 1 λ ) β n k H + 2 lim sup k ( 1 μ ) ( 1 λ ) β n k I + lim sup k ( 1 λ ) β n k J + lim sup k ( 1 λ ) β n k K ] 0 .
Hence, $\Gamma_{n_k} \to 0$ as $k \to \infty$ in both cases. According to the definition of $\Gamma_n$ in Algorithm 1, the sequences $\{g_{n_k}\}$ and $\{g_{i,n_k}\}$ also converge to 0. It follows that
$$\lim_{k\to\infty} d(x_{n_k}, S_j x_{n_k}) = 0, \quad j = 1, 2, \ldots, M,$$
and
$$\lim_{k\to\infty} d\big(\Pi_i x_{n_k}, \Xi_k^i(\Pi_i x_{n_k})\big) = 0, \quad i = 1, 2, \ldots, N,\ k = 1, 2, \ldots, M_i.$$
We now show that $\limsup_{k\to\infty} b_{n_k} \le 0$. Let $\{x_{n_{k_l}}\}$ be a subsequence of $\{x_{n_k}\}$ such that
$$\limsup_{k\to\infty}\, \langle \overrightarrow{h(x)\,x}, \overrightarrow{x_{n_k}\,x} \rangle = \lim_{l\to\infty}\, \langle \overrightarrow{h(x)\,x}, \overrightarrow{x_{n_{k_l}}\,x} \rangle.$$
Since $\{x_{n_{k_l}}\}$ is bounded, it has a $\Delta$-convergent subsequence; without loss of generality, assume that $\{x_{n_{k_l}}\}$ itself $\Delta$-converges to some $x^*$. We claim that $x^* \in \Omega$.
Every $\Pi_i$ is a bounded linear operator, hence $\Pi_i x_{n_{k_l}}$ $\Delta$-converges to $\Pi_i x^*$ for each $i = 1, 2, \ldots, N$. Hence, by Lemma 6 and $d(\Pi_i x_{n_k}, \Xi_k^i(\Pi_i x_{n_k})) \to 0$, which holds for each $i = 1, 2, \ldots, N$ and $k = 1, 2, \ldots, M_i$, we obtain
$$\Xi_k^i(\Pi_i x^*) = \Pi_i x^*.$$
This implies
$$\Pi_i x^* \in \bigcap_{k=1}^{M_i}\mathrm{Fix}(\Xi_k^i) \quad\Longrightarrow\quad x^* \in \bigcap_{i=1}^{N} \Pi_i^{-1}\Big(\bigcap_{k=1}^{M_i}\mathrm{Fix}(\Xi_k^i)\Big).$$
Similarly, Lemma 6 together with $\lim_{k\to\infty} d(x_{n_k}, S_j x_{n_k}) = 0$ gives $S_j x^* = x^*$ for each $j = 1, 2, \ldots, M$, so $x^* \in \bigcap_{j=1}^{M}\mathrm{Fix}(S_j)$. Thus, $x^* \in \Omega$, as claimed.
In view of $x = P_\Omega h(x)$ and Theorem 1,
$$\limsup_{k\to\infty}\, \langle \overrightarrow{h(x)\,x}, \overrightarrow{x_{n_k}\,x} \rangle = \langle \overrightarrow{h(x)\,x}, \overrightarrow{x^*\,x} \rangle \le 0.$$
We next show that $d(x_{n_k+1}, x_{n_k}) \to 0$ as $k \to \infty$. In fact, since the sequences $\{x_n\}$, $\{u_n\}$, and $\{h(x_n)\}$ are bounded and $\beta_n \to 0$, the inequality
$$d(x_{n_k+1}, u_{n_k}) = d\big(\beta_{n_k} h(x_{n_k}) \oplus (1 - \beta_{n_k}) u_{n_k},\ u_{n_k}\big) \le \beta_{n_k}\, d(h(x_{n_k}), u_{n_k}) + (1 - \beta_{n_k})\, d(u_{n_k}, u_{n_k}) = \beta_{n_k}\, d(h(x_{n_k}), u_{n_k})$$
implies that
$$d(x_{n_k+1}, u_{n_k}) \to 0.$$
So, from Step 4 of Algorithm 1, we obtain
$$d(u_{n_k}, x_{n_k}) = d\big(\lambda x_{n_k} \oplus (1 - \lambda)\Theta^*(\mu\Theta x_{n_k} \oplus (1 - \mu) t_{n_k}),\ x_{n_k}\big) \le \lambda\, d(x_{n_k}, x_{n_k}) + (1 - \lambda)\, d\big(\Theta^*(\mu\Theta x_{n_k} \oplus (1 - \mu) t_{n_k}),\ x_{n_k}\big) \le \mu(1 - \lambda)\, d(x_{n_k}, x_{n_k}) + (1 - \lambda)(1 - \mu)\, d(\Theta^* t_{n_k}, x_{n_k}).$$
In case 1, Θ = I and t n k = S j n x n k , so
$$d(u_{n_k}, x_{n_k}) \le (1 - \lambda)(1 - \mu)\, d(t_{n_k}, x_{n_k}).$$
So, from (13), we have
$$d(u_{n_k}, x_{n_k}) \to 0,$$
which, when combined with (16), gives
$$d(x_{n_k+1}, x_{n_k}) \to 0.$$
From (13) and (15), it follows that $\limsup_{k\to\infty} b_{n_k} \le 0$, as claimed. Thus, all the assumptions of Lemma 9 are satisfied. Hence, $s_n \to 0$, that is, $x_n \to x = P_\Omega h(x)$.
Our proof is now complete.    □

5. Some Consequences

In this section, we state some corollaries concerning the solution of the variational inequality problem (VIP) over the split feasibility problem with multiple output sets and over the split common fixed point problem for nonexpansive mappings.

5.1. Split Feasibility Problem with Multiple Output Sets

Assume that $\Upsilon$ and $\Upsilon_i$, $i = 1, 2, \ldots, N$, are complete CAT(0) spaces and $\Pi_i : \Upsilon \to \Upsilon_i$, $i = 1, 2, \ldots, N$, are bounded operators. Suppose that $\Phi_j$, $j = 1, 2, \ldots, M$, and $\Psi_k^i$, $i = 1, 2, \ldots, N$, $k = 1, 2, \ldots, M_i$, are nonempty, convex, and closed subsets of $\Upsilon$ and $\Upsilon_i$, respectively.
We have to find $x \in \Upsilon$ such that
$$x \in \Omega_{SFPMOS} = \Big(\bigcap_{j=1}^{M} \Phi_j\Big) \cap \Big(\bigcap_{i=1}^{N} \Pi_i^{-1}\Big(\bigcap_{k=1}^{M_i} \Psi_k^i\Big)\Big).$$
By applying Theorem 2 with $S_j = P_{\Phi_j}$ for all $j = 1, 2, \ldots, M$ and $\Xi_k^i = P_{\Psi_k^i}$ for all $i = 1, 2, \ldots, N$, $k = 1, 2, \ldots, M_i$, we obtain the following corollary for finding the solution of Problem (18).
Corollary 1. 
Let $\{x_n\}$ be a sequence produced by Algorithm 1 with $S_j = P_{\Phi_j}$ for all $j = 1, 2, \ldots, M$ and $\Xi_k^i = P_{\Psi_k^i}$ for all $i = 1, 2, \ldots, N$, $k = 1, 2, \ldots, M_i$, respectively. If the sequence $\{\beta_n\}$ satisfies the conditions $\beta_n \to 0$ and $\sum_{n=0}^{\infty} \beta_n = \infty$, then $\{x_n\}$ converges strongly to $x \in \Omega_{SFPMOS}$, which is the unique solution of the variational inequality
$$\langle \overrightarrow{h(x)\,x}, \overrightarrow{x\,y} \rangle \ge 0 \quad \text{for all } y \in \Omega_{SFPMOS}.$$
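For Corollary 1 the nonexpansive mappings are metric projections. The helpers below are a small illustrative sketch (not from the paper) of projections onto simple closed convex sets in $\mathbb{R}^d$; they can be plugged into the `run_algorithm1` sketch from Section 4 as entries of `S_list` and `Xi_lists`, mirroring the specialization $S_j = P_{\Phi_j}$, $\Xi_k^i = P_{\Psi_k^i}$.

```python
import numpy as np

def proj_box(lo, hi):
    """Metric projection onto the box [lo, hi]^d (coordinate-wise clamp); nonexpansive."""
    return lambda x: np.clip(x, lo, hi)

def proj_ball(center, radius):
    """Metric projection onto the closed ball B(center, radius); nonexpansive."""
    center = np.asarray(center, dtype=float)
    def P(x):
        x = np.asarray(x, dtype=float)
        v = x - center
        nv = np.linalg.norm(v)
        return x if nv <= radius else center + radius * v / nv
    return P
```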

5.2. The Split Common Fixed Point Problem for Nonexpansive Mappings

Suppose that $\Upsilon$ and $\Upsilon_1$ are complete CAT(0) spaces and $\Pi : \Upsilon \to \Upsilon_1$ is a bounded operator. Suppose that $S_j : \Upsilon \to \Upsilon$, $j = 1, 2, \ldots, M$, and $\Xi_k : \Upsilon_1 \to \Upsilon_1$, $k = 1, 2, \ldots, K$, are nonexpansive mappings. We have to find $x \in \Upsilon$ such that
$$x \in \Omega_{SCFPP} = \Big(\bigcap_{j=1}^{M}\mathrm{Fix}(S_j)\Big) \cap \Pi^{-1}\Big(\bigcap_{k=1}^{K}\mathrm{Fix}(\Xi_k)\Big) \neq \emptyset.$$
Based on Algorithm 1, we propose the following algorithm for obtaining a solution of the split common fixed point problem for nonexpansive mappings (19).
Theorem 2 gives the following corollary for the strong convergence of Algorithm 2.
Algorithm 2: Let $x_0 \in \Upsilon$; the sequence $\{x_n\}$ is generated iteratively as follows.
Step 1 Calculate
$$y_{j,n} = S_j x_n,$$
for each $j = 1, 2, \ldots, M$, and set
$$g_{1,n} = \max_{j=1,2,\ldots,M}\{ d(y_{j,n}, x_n)\}.$$
Step 2 Compute
$$z_{k,n} = \Xi_k(\Pi x_n),$$
for each $k = 1, 2, \ldots, K$, and set
$$g_{2,n} = \max_{k=1,2,\ldots,K}\{ d(z_{k,n}, \Pi x_n)\}.$$
Step 3 Let
$$\Gamma_n = \max\{ g_{1,n}, g_{2,n}\}.$$
If $g_{1,n} = \Gamma_n$, let $t_n = y_{j_n, n}$ for an index $j_n \in \{1, 2, \ldots, M\}$ attaining the maximum, and let $\Theta = I$.
Otherwise, if $g_{2,n} = \Gamma_n$, let $t_n = z_{k_n, n}$ for an index $k_n \in \{1, 2, \ldots, K\}$ attaining the maximum, and let $\Theta = \Pi$.
Step 4 Compute
$$u_n = \lambda x_n \oplus (1 - \lambda)\Theta^*\big(\mu\Theta x_n \oplus (1 - \mu) t_n\big).$$
Step 5 Compute
$$x_{n+1} = \beta_n h(x_n) \oplus (1 - \beta_n) u_n, \quad n \ge 0,$$
where $\{\beta_n\} \subset (0, 1)$ and $h : \Upsilon \to \Upsilon$ is a strict contraction with contraction coefficient $l \in [0, 1)$.
Corollary 2. 
If $\{\beta_n\}$ is a sequence satisfying the conditions $\beta_n \to 0$ and $\sum_{n=0}^{\infty} \beta_n = \infty$, then the sequence $\{x_n\}$ produced by Algorithm 2 converges strongly to $x \in \Omega_{SCFPP}$, which is the unique solution of the variational inequality (V.I.)
$$\langle \overrightarrow{h(x)\,x}, \overrightarrow{x\,y} \rangle \ge 0 \quad \text{for all } y \in \Omega_{SCFPP}.$$
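Since Algorithm 2 is the single-output-set case of Algorithm 1, it can be mimicked with the Euclidean sketch `run_algorithm1` from Section 4 by passing one matrix and one family of maps; the call below is a hypothetical usage example with toy data, not the authors' experiment.

```python
import numpy as np

# Toy single-output data: Pi is a matrix, S_1 and Xi_1 are nonexpansive maps.
Pi = np.array([[1.0, 0.0], [1.0, 1.0]])
S_list = [lambda x: 0.5 * x]                     # S_1: contraction toward 0
Xi_lists = [[lambda y: np.array([y[0], 0.0])]]   # Xi_1: projection onto the first axis
h = lambda x: 0.4 * x                            # strict contraction, l = 0.4

# Here Omega = Fix(S_1) ∩ Pi^{-1}(Fix(Xi_1)) = {0}.
x = run_algorithm1(S_list, Xi_lists, [Pi], h, x0=np.array([3.0, -2.0]), n_iter=300)
print(x)  # approaches the common solution 0 in this toy setting
```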

6. Numerical Illustration

In this section, we provide a numerical implementation of Algorithm 1 and verify the convergence behavior established in our main result.
Example 1. 
Consider the following problem: find an element $x \in \mathbb{R}^4$ such that
$$x \in \Omega = \Big(\bigcap_{j=1}^{M}\mathrm{Fix}(S_j)\Big) \cap \Big(\bigcap_{i=1}^{N} T_i^{-1}\Big(\bigcap_{k=1}^{M_i}\mathrm{Fix}(\Xi_k^i)\Big)\Big) \neq \emptyset,$$
where $S_j : H \to H$, $j = 1, 2, \ldots, M$, and $\Xi_k^i : H_i \to H_i$, $i = 1, 2, \ldots, N$, $k = 1, 2, \ldots, M_i$, are the nonexpansive mappings defined by
$$S_j : \mathbb{R}^4 \to \mathbb{R}^4, \qquad S_j(a_1, a_2, a_3, a_4) = j\Big(\tfrac{1}{2} a_1, \tfrac{1}{2} a_2, \tfrac{1}{2} a_3, \tfrac{1}{2} a_4\Big),$$
$$\Xi_k^i : \mathbb{R}^2 \to \mathbb{R}^2, \qquad \Xi_k^i(a_1, a_2) = ik\,(2a_1,\ 0),$$
and $T_i : H \to H_i$, $i = 1, 2, \ldots, N$, are the bounded linear operators defined by
$$T_i : \mathbb{R}^4 \to \mathbb{R}^2, \qquad T_i(a_1, a_2, a_3, a_4) = i\,(a_1,\ a_1 + a_3).$$
We define the strict contraction $h : H \to H$ with contraction coefficient $l \in [0, 1)$ by
$$h : \mathbb{R}^4 \to \mathbb{R}^4, \qquad h(a_1, a_2, a_3, a_4) = (0.4 a_1, 0.4 a_2, 0.4 a_3, 0.4 a_4).$$
Now, we examine the convergence of the sequence $\{x_n\}$ generated by Algorithm 1. Let $x_0 = (0.1, 0, 0.2, 0)$.
Step 1: for $j = 1$, $n = 0$:
$$y_{1,0} = S_1 x_0 = S_1(0.1, 0, 0.2, 0) = \big(\tfrac{1}{2}(0.1), \tfrac{1}{2}(0), \tfrac{1}{2}(0.2), \tfrac{1}{2}(0)\big) = (0.05, 0, 0.1, 0),$$
$$g_0 = \max\{ d(y_{1,0}, x_0)\} = d\big((0.05, 0, 0.1, 0),\ (0.1, 0, 0.2, 0)\big) = 0.180278.$$
Step 2: for $i = 1$, $k = 1$:
$$z_{1,0}^1 = \Xi_1^1(T_1 x_0) = \Xi_1^1\big(T_1(0.1, 0, 0.2, 0)\big) = (0.2, 0),$$
$$g_{1,0} = \max\{ d(z_{1,0}^1, T_1 x_0)\} = \sqrt{(0.1 - 0.2)^2 + (0.3 - 0)^2} = \sqrt{0.01 + 0.09} = 0.316228.$$
Step 3:
$$\Gamma_0 = \max\{ g_0, g_{1,0}\} = g_0 = 0.180278, \qquad t_0 = y_{1,0}, \quad \Theta = I, \quad t_0 = (0.05, 0, 0.1, 0).$$
Step 4: for $\lambda = 0.3$, $\mu = 0.5$:
$$u_0 = \lambda x_0 \oplus (1 - \lambda)\Theta^*\big(\mu\Theta x_0 \oplus (1 - \mu) t_0\big) = (0.0475, 0, 0.48, 0).$$
Step 5: let $\beta_0 = 0.6$:
$$x_1 = \beta_0 h(x_0) \oplus (1 - \beta_0) u_0 = (0.043, 0, 0.24, 0).$$
2nd Iteration:
Step 1: for $j = 2$, $n = 1$:
$$y_{2,1} = S_2 x_1 = S_2\big(2(0.043), 2(0), 2(0.24), 2(0)\big) = (0.0432, 0, 0.24, 0),$$
$$g_1 = \max\{ d(y_{2,1}, x_1)\} = d\big((0.0432, 0, 0.24, 0),\ (0.043, 0, 0.24, 0)\big) = 0.086197.$$
Step 2: for $i = 2$, $k = 2$:
$$z_{2,1}^2 = \Xi_2^2(T_2 x_1) = \Xi_2^2\big(T_2(0.043, 0, 0.24, 0)\big) = \Xi_2^2\big(2(0.043),\ 2(0.043 + 0.24)\big) = (0.664, 0),$$
$$g_{2,1} = \max\{ d(z_{2,1}^2, T_2 x_1)\} = d\big((0.664, 0),\ (0.086, 1.34)\big) = 1.45931.$$
Step 3:
$$\Gamma_1 = \max\{ g_1, g_{2,1}\} = g_{2,1} = 1.45931, \qquad t_1 = z_{2,1}^2, \quad \Theta = T_2, \quad t_1 = (0.664, 0).$$
Step 4:
$$u_1 = \lambda x_1 \oplus (1 - \lambda)\Theta^*\big(\mu\Theta x_1 \oplus (1 - \mu) t_1\big) = 0.3(0.043, 0, 0.24, 0) + 0.7\, T_2^{*}\big[0.5\, T_2(0.043, 0, 0.24, 0) + 0.5\,(0.664, 0)\big] = (0.0129, 0, 0.072, 0) + 0.7\, T_2^{*}\big[0.5(0.086, 1.34) + (0.332, 0)\big] = (0.0129, 0, 0.072, 0) + 0.7\, T_2^{*}(0.375, 0.67).$$
To find the adjoint of $T_2$:
$$\langle (a_1, a_2, a_3, a_4),\ T_2^{*}(b_1, b_2)\rangle = \langle (a_1, a_1 + a_3),\ (b_1, b_2)\rangle = b_1 a_1 + b_2(a_1 + a_3) = (b_1 + b_2) a_1 + b_2 a_3 = \langle (a_1, a_2, a_3, a_4),\ (b_1 + b_2, 0, b_2, 0)\rangle,$$
so
$$T_2^{*}(b_1, b_2) = (b_1 + b_2, 0, b_2, 0), \qquad T_2^{*}(0.375, 0.67) = (0.375 + 0.67, 0, 0.67, 0) = (1.045, 0, 0.67, 0).$$
Hence
$$u_1 = (0.0129, 0, 0.072, 0) + 0.7\,(1.045, 0, 0.67, 0) = (0.7444, 0, 0.541, 0).$$
Step 5:
$$x_2 = \beta_1 h(x_1) \oplus (1 - \beta_1) u_1 = 0.7\, h(0.043, 0, 0.24, 0) + 0.3\,(0.7444, 0, 0.541, 0) = 0.7\,(0.4(0.043), 0.4(0), 0.4(0.24), 0.4(0)) + (0.22332, 0, 0.1623, 0) = (0.23536, 0, 0.2295, 0).$$
Continuing in this way, we can construct the required convergent sequence, which satisfies the conditions of Algorithm 1.
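The adjoint computation carried out by hand in the second iteration can be checked with a few lines of NumPy; the matrix below encodes the unscaled map $(a_1, a_2, a_3, a_4) \mapsto (a_1, a_1 + a_3)$ used in that step (an illustrative check, not part of the paper).

```python
import numpy as np

# Matrix form of (a1, a2, a3, a4) |-> (a1, a1 + a3), as used in the adjoint step above.
T = np.array([[1.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0]])

b = np.array([0.375, 0.67])
print(T.T @ b)  # [1.045, 0., 0.67, 0.], matching T*(b1, b2) = (b1 + b2, 0, b2, 0)

# Generic adjoint identity <T a, b> = <a, T^T b> for random vectors.
rng = np.random.default_rng(2)
a, b = rng.standard_normal(4), rng.standard_normal(2)
print(np.isclose(np.dot(T @ a, b), np.dot(a, T.T @ b)))  # True
```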

7. Conclusions

We first introduced the split fixed point problems with multiple output sets and some other generalizations in C A T ( 0 ) spaces. We presented two algorithms to solve the variational inequality problem over the split common fixed point problem with multiple output sets and the split common fixed point problem for nonexpansive mappings in C A T ( 0 ) spaces, and then we proved the strong convergence of the sequence generated by the first algorithm and also stated some corollaries related to the solution of our variational inequality problem. We also proved some Lemmas and Propositions, which are essential for our main result.

Author Contributions

Conceptualization, M.R. and A.K.; methodology, M.R., A.K. and H.S.; writing and original draft preparation, H.S.; review and editing, A.H.A. and A.H.; supervision, M.R.; funding acquisition, A.H.A. and A.H. All authors have read and agreed to the published version of the manuscript.

Funding

The authors thank King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia, for its financial support and encouragement.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors are thankful to the reviewers for their careful reading and for enhancing the quality of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Censor, Y.; Segal, A. The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16, 587–600. [Google Scholar]
  2. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441. [Google Scholar] [CrossRef]
  3. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. The Split Feasibility Model Leading to a Unified Approach for Inversion Problems in Intensity-Modulated Radiation Therapy; Technical Report 20 April; Department of Mathematics, University of Haifa: Haifa, Israel, 2005. [Google Scholar]
  4. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071. [Google Scholar] [CrossRef]
  5. Masad, E.; Reich, S. A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 2007, 8, 367–371. [Google Scholar]
  6. Moudafi, A. The split common fixed-point problem for demicontractive mappings. Inverse Probl. 2010, 26, 055007. [Google Scholar] [CrossRef] [PubMed]
  7. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323. [Google Scholar] [CrossRef]
  8. Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. Halpern-type iterative process for solving split common fixed point and monotone variational inclusion problem between Banach spaces. Numer. Algorithms 2021, 86, 1359–1389. [Google Scholar] [CrossRef]
  9. Younis, M.; Dar, A.H.; Hussain, N. A revised algorithm for finding a common solution of variational inclusion and fixed point problems. Filomat 2023, 37, 6949–6960. [Google Scholar] [CrossRef]
  10. Ogwo, G.N.; Izuchukwu, C.; Mewomo, O.T. Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms 2021, 88, 1419–1456. [Google Scholar] [CrossRef]
  11. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. The split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775. [Google Scholar]
  12. Tuyen, T.M. A strong convergence theorem for the split common null point problem in Banach spaces. Appl. Math. Optim. 2019, 79, 207–227. [Google Scholar] [CrossRef]
  13. Reich, S.; Tuyen, T.M. Iterative methods for solving the generalized split common null point problem in Hilbert spaces. Optimization 2020, 69, 1013–1038. [Google Scholar] [CrossRef]
  14. Gupta, N.; Postolache, M.; Nandal, A.; Chugh, R. A cyclic iterative algorithm for multiple-sets split common fixed point problem of demicontractive mappings without prior knowledge of operator norm. Mathematics 2021, 9, 372. [Google Scholar] [CrossRef]
  15. Cui, H. Multiple-sets split common fixed-point problems for demicontractive mappings. J. Math. 2021, 2021, 1–6. [Google Scholar] [CrossRef]
  16. Reich, S.; Tuyen, T.M.; Thuy, N.T.T.; Ha, M.T.N. A new self-adaptive algorithm for solving the split common fixed point problem with multiple output sets in Hilbert spaces. Numer. Algorithms 2022, 89, 1031–1047. [Google Scholar] [CrossRef]
  17. Bridson, M.R.; Haefliger, A. Metric Spaces of Non-Positive Curvature; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2013; Volume 319. [Google Scholar]
  18. Brown, K.S. Buildings; Springer: New York, NY, USA, 1989; pp. 76–98. [Google Scholar]
  19. Kirk, W.A. Fixed point theorems in CAT(0) spaces and R-trees. Fixed Point Theory Appl. 2004, 2004, 1–8. [Google Scholar] [CrossRef]
  20. Bruhat, F.; Tits, J. Groupes reductifs sur un corps local: I. Donnees radicielles valuees. Publ. Math. L’Ihes 1972, 41, 5–251. [Google Scholar] [CrossRef]
  21. Lim, T.C. Remarks on some fixed point Theorems. Proc. Am. Math. Soc. 1976, 60, 179–182. [Google Scholar] [CrossRef]
  22. Kirk, W.A.; Panyanak, B. A concept of convergence in geodesic spaces. Nonlinear Anal. Theory Methods Appl. 2008, 68, 3689–3696. [Google Scholar] [CrossRef]
  23. Dhompongsa, S.; Kirk, W.A.; Sims, B. Fixed points of uniformly Lipschitzian mappings. Nonlinear Anal. Theory, Methods Appl. 2006, 65, 762–772. [Google Scholar] [CrossRef]
  24. Berg, I.D.; Nikolaev, I.G. Quasilinearization and curvature of Aleksandrov spaces. Geom. Dedicata 2008, 133, 195–218. [Google Scholar] [CrossRef]
  25. Dehghan, H.; Rooin, J. A characterization of metric projection in Hadamard spaces with applications. J. Nonlinear Convex Anal. 2013; to appear. [Google Scholar]
  26. Abbas, M.; Ibrahim, Y.; Khan, A.R.; De la Sen, M. Split Variational Inclusion Problem and Fixed Point Problem for a Class of Multivalued Mappings in CAT(0) Spaces. Mathematics 2019, 7, 749. [Google Scholar] [CrossRef]
  27. Dhompongsa, S.; Panyanak, B. On Δ-convergence Theorems in CAT(0) spaces. Comput. Math. Appl. 2008, 56, 2572–2579. [Google Scholar] [CrossRef]
  28. Chaoha, P.; Phon-On, A. A note on fixed point sets in CAT(0) spaces. J. Math. Anal. Appl. 2006, 320, 983–987. [Google Scholar] [CrossRef]
  29. Wangkeeree, R.; Preechasilp, P. Viscosity approximation methods for nonexpansive semigroups in CAT(0) spaces. Fixed Point Theory Appl. 2013, 2013, 1–16. [Google Scholar] [CrossRef]
  30. Mainge, P.E. The viscosity approximation process for quasi-nonexpansive mappings in Hilbert spaces. Comput. Math. Appl. 2010, 59, 74–79. [Google Scholar] [CrossRef]
  31. Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. Theory Methods Appl. 2007, 67, 2350–2360. [Google Scholar] [CrossRef]