Article

Convergence Theorem of Two Sequences for Solving the Modified Generalized System of Variational Inequalities and Numerical Analysis

by
Anchalee Sripattanet
and
Atid Kangtunyakarn
*
Department of Mathematics, Faculty of Science, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(10), 916; https://doi.org/10.3390/math7100916
Submission received: 29 August 2019 / Revised: 26 September 2019 / Accepted: 26 September 2019 / Published: 2 October 2019
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract:
The purpose of this paper is to introduce an iterative algorithm for two sequences that depend on each other, by using the intermixed method. We then prove a strong convergence theorem for solving fixed-point problems of nonlinear mappings and treat two variational inequality problems which form an approximate modified generalized system of variational inequalities (MGSV). By using our main theorem, we obtain additional results involving the split feasibility problem and the constrained convex minimization problem. In support of our main result, a numerical example is also presented.

Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖, let C be a nonempty closed convex subset of H, and let T be a self-mapping of C. We use F(T) to denote the set of fixed points of T (i.e., F(T) = {x ∈ C : Tx = x}).
Recall that T is said to be a κ-strict pseudo-contraction if there exists a constant κ ∈ [0, 1) such that
‖Tx − Ty‖² ≤ ‖x − y‖² + κ‖(I − T)x − (I − T)y‖², ∀x, y ∈ C.
Please note that the class of κ-strict pseudo-contractions strictly includes the class of nonexpansive mappings, i.e., self-mappings T on C such that
‖Tx − Ty‖ ≤ ‖x − y‖, ∀x, y ∈ C.
In particular, T is a nonexpansive mapping if and only if T is a 0-strict pseudo-contraction.
Iterative methods for finding fixed points of nonexpansive mappings are an important topic in the theory of weak and strong convergence; see, for example, [1,2,3] and the references therein.
Over recent decades, many authors have constructed various types of iterative methods to approximate fixed points. The first one is the Mann iteration, introduced by Mann [4] in 1953, which is defined as follows:
x_{n+1} = α_n x_n + (1 − α_n)Tx_n, n ≥ 0,
where x₀ ∈ C is chosen arbitrarily, α_n ∈ [0, 1], and T : C → C is a mapping. If T is a nonexpansive mapping, the sequence {x_n} generated by (3) converges weakly to an element of F(T).
It is well known that in an infinite-dimensional Hilbert space, the normal Mann iterative algorithm [4] is only weakly convergent.
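As a concrete illustration of why the averaging in (3) matters, the following sketch (our own toy example, not taken from the paper) applies both the plain Picard iteration and the Mann iteration to a 90° rotation of the plane, a nonexpansive map whose only fixed point is the origin: Picard iteration circles forever, while Mann averaging with α_n = 1/2 converges to the fixed point.

```python
import math

def T(p):
    # 90-degree rotation of the plane: nonexpansive (it is an isometry)
    # with unique fixed point (0, 0), but not a contraction.
    x, y = p
    return (-y, x)

def norm(p):
    return math.hypot(p[0], p[1])

picard = (1.0, 0.0)   # Picard: x_{n+1} = T x_n
mann = (1.0, 0.0)     # Mann: x_{n+1} = a_n x_n + (1 - a_n) T x_n, a_n = 1/2
for _ in range(100):
    picard = T(picard)
    tx = T(mann)
    mann = (0.5 * mann[0] + 0.5 * tx[0], 0.5 * mann[1] + 0.5 * tx[1])

print(norm(picard), norm(mann))  # Picard stays on the unit circle; Mann tends to 0
```

Each Mann step multiplies the distance to the fixed point by √2/2 here, so the averaged iterates contract even though T itself does not.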
It is clear that strict pseudo-contractions are more general than nonexpansive mappings, and therefore they have a wider range of applications; hence it is important to develop the theory of iterative methods for strict pseudo-contractions. Indeed, Browder and Petryshyn [5] proved that if the sequence {x_n} is generated by (3) with a constant control parameter α_n ≡ α for all n ∈ ℕ, then {x_n} converges weakly to a fixed point of the strict pseudo-contraction T. Moreover, many mathematicians have proposed iterative algorithms and proved strong convergence theorems for nonexpansive and κ-strictly pseudo-contractive mappings in Hilbert spaces to find their fixed points; see, for example, [6,7,8,9].
To prove the strong convergence of iterations determined by a nonexpansive mapping, Moudafi [1] established a theorem for finding fixed points of nonexpansive mappings. More precisely, he established the following result, known as the viscosity approximation method.
Theorem 1.
Let C be a nonempty closed convex subset of a real Hilbert space H and let S be a nonexpansive mapping of C into itself such that F ( S ) is nonempty. Let f be a contraction of C into itself and let { x n } be a sequence defined as follows:
x₁ ∈ C is arbitrarily chosen, x_{n+1} = (1/(1 + ε_n))Sx_n + (ε_n/(1 + ε_n))f(x_n), n ∈ ℕ,
where {ε_n} is a sequence of positive real numbers tending to zero. Then the sequence {x_n} converges strongly to z ∈ F(S), where z = P_{F(S)}f(z) and P_{F(S)} is the metric projection of H onto F(S).
The Moudafi viscosity approximation method can be applied to elliptic differential equations, linear programming, convex optimization, and monotone inclusions; it has been widely studied in the literature (see [10,11,12]).
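A minimal sketch of scheme (4) in ℝ, under simplifying assumptions of our own: S is the identity (so F(S) = ℝ and P_{F(S)} is the identity), and f(x) = 0.5x + 1 is a 0.5-contraction. Theorem 1 then predicts convergence to the point z with z = f(z) = 2.

```python
# Viscosity scheme (4) in R, a sketch under simplifying assumptions:
# S = identity, so every point is fixed and the limit z = P_{F(S)} f(z)
# reduces to the fixed point of the contraction f, namely z = 2.
S = lambda t: t
f = lambda t: 0.5 * t + 1.0

x = 10.0
for n in range(1, 100_000):
    eps = 1.0 / (n + 1)                     # positive, tending to zero
    x = S(x) / (1 + eps) + eps * f(x) / (1 + eps)

print(x)  # close to 2
```

Convergence is slow here (roughly O(n^(-1/2))) because ε_n = 1/(n + 1) vanishes; the theorem guarantees the limit, not a rate.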
To construct an iterative algorithm that converges strongly to the fixed points of a finite family of strict pseudo-contractions, by combining the viscosity approximation method (4) with Mann's iteration (3), Yao et al. [13] proposed the intermixed algorithm for two strict pseudo-contractions as follows:
Algorithm 1.
For arbitrarily given x₀ ∈ C, y₀ ∈ C, let the sequences {x_n} and {y_n} be generated iteratively by
x_{n+1} = (1 − β_n)x_n + β_nP_C[α_nf(y_n) + (1 − k − α_n)x_n + kTx_n], n ≥ 0,
y_{n+1} = (1 − β_n)y_n + β_nP_C[α_ng(x_n) + (1 − k − α_n)y_n + kSy_n], n ≥ 0,
where {α_n} and {β_n} are two sequences of real numbers in (0, 1), T, S : C → C are λ-strict pseudo-contractions, f : C → H is a ρ₁-contraction, g : C → H is a ρ₂-contraction, and k ∈ (0, 1 − λ) is a constant.
Then they proved the following strong convergence theorem for the iterative sequences {x_n} and {y_n} defined by (5).
Theorem 2.
Suppose that F(S) ≠ ∅ and F(T) ≠ ∅. Assume the following conditions are satisfied:
(C1) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞,
(C2) β_n ∈ [ξ₁, ξ₂] ⊂ (0, 1) for all n ≥ 0.
Then the sequences {x_n} and {y_n} generated by (5) converge strongly to P_{F(T)}f(y*) and P_{F(S)}g(x*), respectively.
Putting C = H and β_n = 1 in (5), we have
x_{n+1} = α_nf(y_n) + (1 − k − α_n)x_n + kTx_n, n ≥ 0,
y_{n+1} = α_ng(x_n) + (1 − k − α_n)y_n + kSy_n, n ≥ 0,
which is a modified version of the viscosity approximation method. Observe that the sequences {x_n} and {y_n} are mutually dependent on each other.
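The mutual dependence of the two sequences in (6) can be seen in a toy instance with data of our own choosing: T = S = −I on ℝ are 0-strict pseudo-contractions (nonexpansive) with F(T) = F(S) = {0}, k = 1/2 ∈ (0, 1 − λ) with λ = 0, and f, g are 0.5-contractions. Both limits P_{F(T)}f(y*) and P_{F(S)}g(x*) equal 0, since the projection onto {0} sends everything to 0.

```python
# A toy instance of the intermixed scheme (6), our own illustration:
# T(t) = S(t) = -t, F(T) = F(S) = {0}, k = 1/2,
# f and g are 0.5-contractions; both sequences should tend to 0.
T = S = lambda t: -t
f = lambda t: 0.5 * t + 1.0
g = lambda t: 0.5 * t - 1.0
k = 0.5

x, y = 5.0, -5.0
for n in range(1, 2000):
    a_n = 1.0 / (n + 1)                     # alpha_n -> 0, sum diverges
    x, y = (a_n * f(y) + (1 - k - a_n) * x + k * T(x),
            a_n * g(x) + (1 - k - a_n) * y + k * S(y))

print(x, y)  # both near 0
```

Note that x_{n+1} uses y_n and y_{n+1} uses x_n, so the two updates must be performed simultaneously, as the tuple assignment above does.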
Let B : C → H. The variational inequality problem is to find a point u* ∈ C such that
⟨Bu*, v − u*⟩ ≥ 0,
for all v ∈ C. The set of solutions of (7) is denoted by VI(C, B). It is known that the variational inequality, as a strong and important tool, has already been studied for a wide class of optimization problems in economics and equilibrium problems arising in physics and several other branches of pure and applied sciences; see, for example, [14,15,16,17].
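A solution of (7) can be characterized as a fixed point of u ↦ P_C(u − λBu) for λ > 0. The following one-dimensional sketch (our own example) uses C = [1, 5] and B(u) = u, for which B(u) > 0 on C and hence the unique solution is the left endpoint u* = 1.

```python
# Solving a one-dimensional variational inequality (7), a sketch:
# C = [1, 5], B(u) = u.  Since B(u) > 0 on C, <B(u*), v - u*> >= 0
# for all v in C forces u* = 1.  Iterate u -> P_C(u - lam * B(u)).
def P_C(u):
    return min(max(u, 1.0), 5.0)            # metric projection onto [1, 5]

B = lambda u: u
lam = 0.5

u = 5.0
for _ in range(50):
    u = P_C(u - lam * B(u))

print(u)  # 1.0, the solution of VI(C, B)
```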
Recently, in 2018, Siriyan and Kangtunyakarn [18] introduced the following modified generalized system of variational inequalities (MGSV), which involves finding (x*, y*, z*) ∈ C × C × C such that
⟨x* − (I − λ₁D₁)(ax* + (1 − a)y*), x − x*⟩ ≥ 0, ∀x ∈ C,
⟨y* − (I − λ₂D₂)(ax* + (1 − a)z*), x − y*⟩ ≥ 0, ∀x ∈ C,
⟨z* − (I − λ₃D₃)x*, x − z*⟩ ≥ 0, ∀x ∈ C,
where D₁, D₂, D₃ : C → H, λ₁, λ₂, λ₃ > 0 and a ∈ [0, 1].
Putting a = 0 in (8), we have
⟨x* − (I − λ₁D₁)y*, x − x*⟩ ≥ 0, ∀x ∈ C,
⟨y* − (I − λ₂D₂)z*, x − y*⟩ ≥ 0, ∀x ∈ C,
⟨z* − (I − λ₃D₃)x*, x − z*⟩ ≥ 0, ∀x ∈ C,
which is the generalized system of variational inequalities introduced by Ceng et al. [19].
To find an element of the set of solutions of the modified generalized system of variational inequalities (8), Siriyan and Kangtunyakarn [18] introduced the following iterative scheme:
x_{n+1} = β_n¹x_n + β_n²Tx_n + β_n³P_C(I − λD)y_n, y_n = α_nγf(x_n) + (I − α_nĀ)Gx_n,
where D, D₁, D₂, D₃ : C → H are d, d₁, d₂, d₃-inverse strongly monotone mappings, respectively, and G : C → C is defined by
G(x) = P_C(I − λ₁D₁)(ax + (1 − a)P_C(I − λ₂D₂)(ax + (1 − a)P_C(I − λ₃D₃)x)),
with a ∈ [0, 1). Under some suitable conditions (see [18] for details), they proved that the sequence {x_n} converges strongly to x₀ = P_Ω(I − Ā + γf)x₀ and that (x₀, y₀, z₀) is a solution of (10), where y₀ = P_C(I − λ₂D₂)(ax₀ + (1 − a)z₀) and z₀ = P_C(I − λ₃D₃)x₀.
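To make the three-layer structure of the mapping G in (11) concrete, here is a one-dimensional sketch with data of our own choosing: C = [−10, 10], D₁ = D₂ = D₃ = I (which is 1-inverse strongly monotone), λ_j = 1/2 ∈ (0, 2), and a = 1/2. Then G is a contraction whose unique fixed point x* = 0 solves the corresponding MGSV with y* = z* = 0, in line with Lemma 3 below.

```python
# A one-dimensional sketch of the mapping G of (11), our own illustration:
# C = [-10, 10], D1 = D2 = D3 = identity, lambda_j = 1/2, a = 1/2.
# Then x* = 0 is the unique fixed point of G.
def P_C(u):
    return min(max(u, -10.0), 10.0)

lam, a = 0.5, 0.5
D = lambda u: u

def G(x):
    inner = P_C(x - lam * D(x))                      # P_C(I - lam3 D3)x
    v = a * x + (1 - a) * inner
    mid = P_C(v - lam * D(v))                        # P_C(I - lam2 D2)(...)
    v = a * x + (1 - a) * mid
    return P_C(v - lam * D(v))                       # P_C(I - lam1 D1)(...)

x = 8.0
for _ in range(60):
    x = G(x)                                         # Picard iteration on G

print(x)  # essentially 0, the fixed point of G
```

With this data G(x) = 0.34375·x on C, so the Picard iterates collapse geometrically to the fixed point.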
Moreover, they proved Lemma 3 in the next section, which relates the MGSV to the fixed-point set of a nonlinear mapping built from the metric projection onto C. This lemma is very important for proving our main result in Section 2.
By using the concept of (5), we introduce a new iterative method for solving the modified generalized system of variational inequalities as follows:
Algorithm 2.
Starting with x₁, w₁ ∈ C, let the sequences {x_n} and {w_n} be defined by
x_{n+1} = δ_nx_n + η_nP_C(I − λ₁B₁)x_n + μ_nP_C(α_nf(w_n) + (1 − α_n)G_C¹x_n),
w_{n+1} = δ_nw_n + η_nP_C(I − λ₂B₂)w_n + μ_nP_C(α_ng(x_n) + (1 − α_n)G_C²w_n).
By putting B₁ = B₂ = 0, we get
x_{n+1} = δ_nx_n + η_nx_n + μ_nP_C(α_nf(w_n) + (1 − α_n)G_C¹x_n),
w_{n+1} = δ_nw_n + η_nw_n + μ_nP_C(α_ng(x_n) + (1 − α_n)G_C²w_n),
which is a modified version of (5).
Under the conditions stated in Theorem 3, we prove a strong convergence theorem for solving fixed-point problems of nonlinear mappings and two variational inequality problems by using Algorithm 2, which approximates the MGSV. Moreover, using our main result, we obtain additional results involving the split feasibility problem (SFP) and the constrained convex minimization problem. Finally, we give a numerical example for the main theorem.

1. Preliminaries

We denote weak convergence and strong convergence by ⇀ and →, respectively. For every x ∈ H, there exists a unique nearest point P_Cx in C such that ‖x − P_Cx‖ ≤ ‖x − y‖ for all y ∈ C. P_C is called the metric projection of H onto C.
Definition 1.
A mapping f : C → C is called contractive if there exists a constant ξ ∈ (0, 1) such that
‖f(u) − f(v)‖ ≤ ξ‖u − v‖,
for all u, v ∈ C.
A mapping f : C → H is called α-inverse strongly monotone if there exists a positive real number α > 0 such that
⟨fu − fv, u − v⟩ ≥ α‖fu − fv‖²,
for all u, v ∈ C.
The following lemmas are needed to prove the main theorem.
Lemma 1
([20]). Each Hilbert space H satisfies Opial's condition, i.e., for any sequence {x_n} ⊂ H with x_n ⇀ x, the inequality
lim inf_{n→∞} ‖x_n − x‖ < lim inf_{n→∞} ‖x_n − y‖
holds for every y ∈ H with y ≠ x.
Lemma 2.
Let H be a real Hilbert space. Then, for all x, y, z ∈ H and α, β, γ ∈ [0, 1] with α + β + γ = 1, we have
(i) 
‖αx + βy + γz‖² = α‖x‖² + β‖y‖² + γ‖z‖² − αβ‖x − y‖² − αγ‖x − z‖² − βγ‖y − z‖²,
(ii) 
‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, ∀x, y ∈ H.
Lemma 3
([18]). Let C be a nonempty closed convex subset of a real Hilbert space H and let D₁, D₂, D₃ : C → H be three mappings. For every λ₁, λ₂, λ₃ > 0 and a ∈ [0, 1], the following statements are equivalent:
(i) 
(x*, y*, z*) ∈ C × C × C is a solution of problem (8);
(ii) 
x* is a fixed point of the mapping G : C → C, i.e., x* ∈ F(G), where G is defined by G(x) = P_C(I − λ₁D₁)(ax + (1 − a)P_C(I − λ₂D₂)(ax + (1 − a)P_C(I − λ₃D₃)x)), x ∈ C, and where y* = P_C(I − λ₂D₂)(ax* + (1 − a)z*) and z* = P_C(I − λ₃D₃)x*.
Lemma 4
([21]). Let {s_n} be a sequence of nonnegative real numbers satisfying
s_{n+1} ≤ (1 − α_n)s_n + δ_n, n ≥ 0,
where {α_n} is a sequence in (0, 1) and {δ_n} is a sequence such that
(i) 
Σ_{n=1}^∞ α_n = ∞,
(ii) 
lim sup_{n→∞} δ_n/α_n ≤ 0 or Σ_{n=1}^∞ |δ_n| < ∞.
Then lim_{n→∞} s_n = 0.
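A quick numerical illustration of Lemma 4 with parameters of our own choosing: α_n = 1/(n + 1), so Σα_n = ∞, and δ_n = α_n², so δ_n/α_n → 0; the recursion then drives s_n to zero, although slowly.

```python
# Numerical illustration of Lemma 4: alpha_n = 1/(n+1) has a divergent
# sum, and delta_n = alpha_n**2 gives delta_n/alpha_n -> 0, so
# s_{n+1} = (1 - alpha_n) s_n + delta_n forces s_n -> 0.
s = 1.0
for n in range(1, 100_000):
    alpha = 1.0 / (n + 1)
    delta = alpha * alpha
    s = (1 - alpha) * s + delta

print(s)  # small; with this data s_n behaves like (log n)/n
```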
Lemma 5
([22]). For a given z ∈ H and u ∈ C, u = P_Cz ⇔ ⟨u − z, v − u⟩ ≥ 0, ∀v ∈ C. Furthermore, P_C is a firmly nonexpansive mapping of H onto C, i.e., ‖P_Cx − P_Cy‖² ≤ ⟨P_Cx − P_Cy, x − y⟩, ∀x, y ∈ H.
Lemma 6
([23]). Let C be a nonempty closed convex subset of a real Hilbert space H and let T : C → C be a κ-strictly pseudo-contractive mapping with F(T) ≠ ∅. Then, the following statements hold:
(i) 
F(T) = VI(C, I − T);
(ii) 
For every u ∈ C and v ∈ F(T),
‖P_C(I − λ(I − T))u − v‖ ≤ ‖u − v‖,
for every λ ∈ (0, 1 − κ).

2. Main Result

In this section, we introduce a strong convergence theorem for solving fixed-point problems of nonlinear mappings and two variational inequality problems by using Algorithm 2.
Theorem 3.
Let C be a nonempty closed convex subset of a real Hilbert space H. For i = 1, 2, let B_i : C → H be an α_i-inverse strongly monotone mapping with α = min{α₁, α₂}, and let f, g : H → H be a_f- and a_g-contraction mappings with a = max{a_f, a_g}. For i = 1, 2 and j = 1, 2, 3, let D_j^i : C → H be d_j^i-inverse strongly monotone, where λ_j^i ∈ (0, 2ω_i) with ω_i = min_{j=1,2,3}{d_j^i}. For i = 1, 2, define G_i : C → C by G_i(x) = P_C(I − λ₁^iD₁^i)(ax + (1 − a)P_C(I − λ₂^iD₂^i)(ax + (1 − a)P_C(I − λ₃^iD₃^i)x)), x ∈ C. Let the sequences {x_n} and {w_n} be generated by x₁, w₁ ∈ C and by
x_{n+1} = δ_nx_n + η_nP_C(I − γ₁B₁)x_n + μ_nP_C(α_nf(w_n) + (1 − α_n)G₁x_n),
w_{n+1} = δ_nw_n + η_nP_C(I − γ₂B₂)w_n + μ_nP_C(α_ng(x_n) + (1 − α_n)G₂w_n),
where δ_n, η_n, μ_n, α_n ∈ [0, 1] with δ_n + η_n + μ_n = 1, and γ₁, γ₂ ∈ (0, 2α) with γ = min{γ₁, γ₂}. Assume the following conditions hold:
(i) 
F_i = F(G_i) ∩ VI(C, B_i) ≠ ∅ for i = 1, 2,
(ii) 
Σ_{n=1}^∞ α_n = ∞ and lim_{n→∞} α_n = 0,
(iii) 
0 < θ̄ ≤ δ_n, η_n, μ_n ≤ θ for all n ∈ ℕ and for some θ̄, θ > 0,
(iv) 
Σ_{n=1}^∞ |δ_{n+1} − δ_n| < ∞, Σ_{n=1}^∞ |η_{n+1} − η_n| < ∞, Σ_{n=1}^∞ |α_{n+1} − α_n| < ∞.
Then {x_n} converges strongly to x₁* = P_{F₁}f(x₂*), where y₁* = P_C(I − λ₂¹D₂¹)(ax₁* + (1 − a)z₁*) and z₁* = P_C(I − λ₃¹D₃¹)x₁*, and {w_n} converges strongly to x₂* = P_{F₂}g(x₁*), where y₂* = P_C(I − λ₂²D₂²)(ax₂* + (1 − a)z₂*) and z₂* = P_C(I − λ₃²D₃²)x₂*.
Proof. 
The proof of this theorem will be divided into five steps.
Step 1. We will show that { x n } is bounded.
First, we will prove that I − γB_i is nonexpansive with γ = min{γ₁, γ₂}. For i = 1, 2 we get
‖(I − γB_i)x − (I − γB_i)w‖² = ‖x − w − γ(B_ix − B_iw)‖²
= ‖x − w‖² − 2γ⟨x − w, B_ix − B_iw⟩ + γ²‖B_ix − B_iw‖²
≤ ‖x − w‖² − 2αγ‖B_ix − B_iw‖² + γ²‖B_ix − B_iw‖²
= ‖x − w‖² − γ(2α − γ)‖B_ix − B_iw‖²
≤ ‖x − w‖².
Thus, I − γB_i is a nonexpansive mapping for i = 1 and i = 2.
Let x̃ ∈ F₁ and w̃ ∈ F₂. Then we have
‖x_{n+1} − x̃‖ = ‖δ_nx_n + η_nP_C(I − γ₁B₁)x_n + μ_nP_C(α_nf(w_n) + (1 − α_n)G₁x_n) − (δ_n + η_n + μ_n)x̃‖
≤ δ_n‖x_n − x̃‖ + η_n‖P_C(I − γ₁B₁)x_n − x̃‖ + μ_n‖P_C(α_nf(w_n) + (1 − α_n)G₁x_n) − x̃‖
≤ (1 − μ_n)‖x_n − x̃‖ + μ_n‖α_n(f(w_n) − x̃) + (1 − α_n)(G₁x_n − x̃)‖
≤ (1 − μ_n)‖x_n − x̃‖ + μ_nα_n‖f(w_n) − x̃‖ + μ_n(1 − α_n)‖x_n − x̃‖
≤ (1 − μ_n)‖x_n − x̃‖ + μ_nα_na‖w_n − w̃‖ + μ_nα_n‖f(w̃) − x̃‖ + μ_n(1 − α_n)‖x_n − x̃‖
= (1 − μ_nα_n)‖x_n − x̃‖ + μ_nα_na‖w_n − w̃‖ + μ_nα_n‖f(w̃) − x̃‖.
Similarly, we get
‖w_{n+1} − w̃‖ ≤ (1 − μ_nα_n)‖w_n − w̃‖ + μ_nα_na‖x_n − x̃‖ + μ_nα_n‖g(x̃) − w̃‖.
Combining (12) and (13), we have
‖x_{n+1} − x̃‖ + ‖w_{n+1} − w̃‖ ≤ (1 − μ_nα_n)(‖x_n − x̃‖ + ‖w_n − w̃‖) + μ_nα_na(‖x_n − x̃‖ + ‖w_n − w̃‖) + μ_nα_n(‖g(x̃) − w̃‖ + ‖f(w̃) − x̃‖)
= (1 − μ_nα_n(1 − a))(‖x_n − x̃‖ + ‖w_n − w̃‖) + μ_nα_n(‖g(x̃) − w̃‖ + ‖f(w̃) − x̃‖).
By induction, we can derive that
‖x_n − x̃‖ + ‖w_n − w̃‖ ≤ max{‖x₁ − x̃‖ + ‖w₁ − w̃‖, (‖g(x̃) − w̃‖ + ‖f(w̃) − x̃‖)/(1 − a)},
for every n N . This implies that { x n } and { w n } are bounded.
Step 2. Claim that lim_{n→∞} ‖x_{n+1} − x_n‖ = lim_{n→∞} ‖w_{n+1} − w_n‖ = 0.
First, we let U_n = P_C(α_nf(w_n) + (1 − α_n)G₁x_n) and V_n = P_C(α_ng(x_n) + (1 − α_n)G₂w_n). Then, observe that
‖U_n − U_{n−1}‖ = ‖P_C(α_nf(w_n) + (1 − α_n)G₁x_n) − P_C(α_{n−1}f(w_{n−1}) + (1 − α_{n−1})G₁x_{n−1})‖
≤ α_n‖f(w_n) − f(w_{n−1})‖ + |α_n − α_{n−1}|‖f(w_{n−1})‖ + (1 − α_n)‖G₁x_n − G₁x_{n−1}‖ + |α_n − α_{n−1}|‖G₁x_{n−1}‖
≤ α_na‖w_n − w_{n−1}‖ + |α_n − α_{n−1}|(‖f(w_{n−1})‖ + ‖G₁x_{n−1}‖) + (1 − α_n)‖x_n − x_{n−1}‖.
By the definition of x_n and (14), we obtain
‖x_{n+1} − x_n‖ = ‖δ_nx_n + η_nP_C(I − γ₁B₁)x_n + μ_nU_n − δ_{n−1}x_{n−1} − η_{n−1}P_C(I − γ₁B₁)x_{n−1} − μ_{n−1}U_{n−1}‖
≤ δ_n‖x_n − x_{n−1}‖ + |δ_n − δ_{n−1}|‖x_{n−1}‖ + η_n‖P_C(I − γ₁B₁)x_n − P_C(I − γ₁B₁)x_{n−1}‖ + |η_n − η_{n−1}|‖P_C(I − γ₁B₁)x_{n−1}‖ + μ_n‖U_n − U_{n−1}‖ + |μ_n − μ_{n−1}|‖U_{n−1}‖
≤ (1 − μ_n)‖x_n − x_{n−1}‖ + |δ_n − δ_{n−1}|‖x_{n−1}‖ + |η_n − η_{n−1}|‖P_C(I − γ₁B₁)x_{n−1}‖ + μ_n|α_n − α_{n−1}|(‖f(w_{n−1})‖ + ‖G₁x_{n−1}‖) + μ_n(1 − α_n)‖x_n − x_{n−1}‖ + |μ_n − μ_{n−1}|‖U_{n−1}‖ + μ_nα_na‖w_n − w_{n−1}‖.
Using the same method as derived in ( 15 ) , we have
‖w_{n+1} − w_n‖ ≤ (1 − μ_n)‖w_n − w_{n−1}‖ + |δ_n − δ_{n−1}|‖w_{n−1}‖ + |η_n − η_{n−1}|‖P_C(I − γ₂B₂)w_{n−1}‖ + μ_n|α_n − α_{n−1}|(‖g(x_{n−1})‖ + ‖G₂w_{n−1}‖) + μ_n(1 − α_n)‖w_n − w_{n−1}‖ + |μ_n − μ_{n−1}|‖V_{n−1}‖ + μ_nα_na‖x_n − x_{n−1}‖.
From (15) and (16), we get
‖x_{n+1} − x_n‖ + ‖w_{n+1} − w_n‖ ≤ (1 − μ_n)(‖x_n − x_{n−1}‖ + ‖w_n − w_{n−1}‖) + |δ_n − δ_{n−1}|(‖x_{n−1}‖ + ‖w_{n−1}‖) + |η_n − η_{n−1}|[‖P_C(I − γ₁B₁)x_{n−1}‖ + ‖P_C(I − γ₂B₂)w_{n−1}‖] + |μ_n − μ_{n−1}|(‖U_{n−1}‖ + ‖V_{n−1}‖) + μ_nα_na(‖w_n − w_{n−1}‖ + ‖x_n − x_{n−1}‖) + μ_n|α_n − α_{n−1}|[‖f(w_{n−1})‖ + ‖G₁x_{n−1}‖ + ‖g(x_{n−1})‖ + ‖G₂w_{n−1}‖] + μ_n(1 − α_n)(‖x_n − x_{n−1}‖ + ‖w_n − w_{n−1}‖)
≤ (1 − α_nθ̄(1 − a))(‖x_n − x_{n−1}‖ + ‖w_n − w_{n−1}‖) + |δ_n − δ_{n−1}|(‖x_{n−1}‖ + ‖w_{n−1}‖) + |η_n − η_{n−1}|[‖P_C(I − γ₁B₁)x_{n−1}‖ + ‖P_C(I − γ₂B₂)w_{n−1}‖] + |μ_n − μ_{n−1}|(‖U_{n−1}‖ + ‖V_{n−1}‖) + θ|α_n − α_{n−1}|[‖f(w_{n−1})‖ + ‖G₁x_{n−1}‖ + ‖g(x_{n−1})‖ + ‖G₂w_{n−1}‖].
Applying Lemma 4 and conditions (ii), (iii), and (iv), we can conclude that
‖x_{n+1} − x_n‖ → 0 and ‖w_{n+1} − w_n‖ → 0 as n → ∞.
Step 3. Prove that lim_{n→∞} ‖U_n − P_C(I − γ₁B₁)U_n‖ = lim_{n→∞} ‖U_n − G₁U_n‖ = 0.
To show this, take ũ_n = α_nf(w_n) + (1 − α_n)G₁x_n, n ∈ ℕ. Then we derive that
‖x_{n+1} − x̃‖² = ‖δ_n(x_n − x̃) + η_n(P_C(I − γ₁B₁)x_n − x̃) + μ_n(U_n − x̃)‖²
≤ δ_n‖x_n − x̃‖² + η_n‖P_C(I − γ₁B₁)x_n − x̃‖² − δ_nη_n‖x_n − P_C(I − γ₁B₁)x_n‖² + μ_n‖ũ_n − x̃‖²
≤ (1 − μ_n)‖x_n − x̃‖² − δ_nη_n‖x_n − P_C(I − γ₁B₁)x_n‖² + μ_n‖α_n(f(w_n) − G₁x_n) + (G₁x_n − x̃)‖²
≤ (1 − μ_n)‖x_n − x̃‖² − δ_nη_n‖x_n − P_C(I − γ₁B₁)x_n‖² + μ_n(‖G₁x_n − x̃‖² + 2α_n⟨f(w_n) − G₁x_n, ũ_n − x̃⟩)
≤ ‖x_n − x̃‖² − δ_nη_n‖x_n − P_C(I − γ₁B₁)x_n‖² + 2μ_nα_n‖f(w_n) − G₁x_n‖‖ũ_n − x̃‖,
which implies that
δ_nη_n‖x_n − P_C(I − γ₁B₁)x_n‖² ≤ ‖x_n − x̃‖² − ‖x_{n+1} − x̃‖² + 2μ_nα_n‖f(w_n) − G₁x_n‖‖ũ_n − x̃‖
≤ ‖x_n − x_{n+1}‖(‖x_n − x̃‖ + ‖x_{n+1} − x̃‖) + 2μ_nα_n‖f(w_n) − G₁x_n‖‖ũ_n − x̃‖.
Then, we have
‖x_n − P_C(I − γ₁B₁)x_n‖ → 0 as n → ∞.
Observe that
x_{n+1} − x_n = η_n(P_C(I − γ₁B₁)x_n − x_n) + μ_n(U_n − x_n).
It follows that
μ_n‖U_n − x_n‖ ≤ η_n‖P_C(I − γ₁B₁)x_n − x_n‖ + ‖x_{n+1} − x_n‖.
From (17) and (18), we obtain
‖U_n − x_n‖ → 0 as n → ∞.
Observe that
‖U_n − P_C(I − γ₁B₁)U_n‖ ≤ ‖U_n − x_n‖ + ‖x_n − P_C(I − γ₁B₁)x_n‖ + ‖P_C(I − γ₁B₁)x_n − P_C(I − γ₁B₁)U_n‖
≤ ‖U_n − x_n‖ + ‖x_n − P_C(I − γ₁B₁)x_n‖ + ‖x_n − U_n‖
= 2‖U_n − x_n‖ + ‖x_n − P_C(I − γ₁B₁)x_n‖;
by (18) and (19), we obtain
‖U_n − P_C(I − γ₁B₁)U_n‖ → 0 as n → ∞.
Applying the same arguments as for deriving (20), we also obtain
‖V_n − P_C(I − γ₂B₂)V_n‖ → 0 as n → ∞.
Consider
‖x_{n+1} − U_n‖ ≤ ‖x_{n+1} − x_n‖ + ‖x_n − U_n‖.
From (17) and (19), we have
‖x_{n+1} − U_n‖ → 0 as n → ∞.
Observe that
‖x_n − G₁x_n‖ ≤ ‖x_n − x_{n+1}‖ + ‖x_{n+1} − U_n‖ + ‖U_n − G₁x_n‖
≤ ‖x_n − x_{n+1}‖ + ‖x_{n+1} − U_n‖ + ‖ũ_n − G₁x_n‖
= ‖x_n − x_{n+1}‖ + ‖x_{n+1} − U_n‖ + ‖α_nf(w_n) + (1 − α_n)G₁x_n − G₁x_n‖
= ‖x_n − x_{n+1}‖ + ‖x_{n+1} − U_n‖ + α_n‖f(w_n) − G₁x_n‖.
From (17), (21), and condition (ii), we get
‖x_n − G₁x_n‖ → 0 as n → ∞.
Consider
‖U_n − G₁U_n‖ ≤ ‖U_n − x_n‖ + ‖x_n − G₁x_n‖ + ‖G₁x_n − G₁U_n‖
≤ ‖U_n − x_n‖ + ‖x_n − G₁x_n‖ + ‖x_n − U_n‖
= 2‖U_n − x_n‖ + ‖x_n − G₁x_n‖.
From (19) and (22), we have
‖U_n − G₁U_n‖ → 0 as n → ∞.
Applying the same method as in (22), we also have
‖V_n − G₂V_n‖ → 0 as n → ∞.
Step 4. Claim that lim sup_{n→∞} ⟨f(x₂*) − x₁*, U_n − x₁*⟩ ≤ 0, where x₁* = P_{F₁}f(x₂*).
First, take a subsequence {U_{n_k}} of {U_n} such that
lim sup_{n→∞} ⟨f(x₂*) − x₁*, U_n − x₁*⟩ = lim_{k→∞} ⟨f(x₂*) − x₁*, U_{n_k} − x₁*⟩.
Since {x_n} is bounded, there exists a subsequence {x_{n_k}} of {x_n} such that x_{n_k} ⇀ x̂ ∈ C as k → ∞. From (19), we obtain U_{n_k} ⇀ x̂ as k → ∞.
Next, we need to show that x̂ ∈ F₁ = F(G₁) ∩ VI(C, B₁). Assume x̂ ∉ F(G₁); then x̂ ≠ G₁x̂. By Opial's condition, we obtain
lim inf_{k→∞} ‖U_{n_k} − x̂‖ < lim inf_{k→∞} ‖U_{n_k} − G₁x̂‖
≤ lim inf_{k→∞} (‖U_{n_k} − G₁U_{n_k}‖ + ‖G₁U_{n_k} − G₁x̂‖)
≤ lim inf_{k→∞} ‖U_{n_k} − x̂‖.
This is a contradiction.
Therefore
x̂ ∈ F(G₁).
Assume x̂ ∉ VI(C, B₁); then x̂ ≠ P_C(I − γ₁B₁)x̂.
From Opial's condition and (20), we have
lim inf_{k→∞} ‖U_{n_k} − x̂‖ < lim inf_{k→∞} ‖U_{n_k} − P_C(I − γ₁B₁)x̂‖
≤ lim inf_{k→∞} (‖U_{n_k} − P_C(I − γ₁B₁)U_{n_k}‖ + ‖P_C(I − γ₁B₁)U_{n_k} − P_C(I − γ₁B₁)x̂‖)
≤ lim inf_{k→∞} ‖U_{n_k} − x̂‖.
This is a contradiction.
Therefore
x̂ ∈ VI(C, B₁).
By (25) and (26), this yields
x̂ ∈ F₁ = F(G₁) ∩ VI(C, B₁).
Since U_{n_k} ⇀ x̂ as k → ∞, from (27) and Lemma 5 we can derive that
lim sup_{n→∞} ⟨f(x₂*) − x₁*, U_n − x₁*⟩ = lim_{k→∞} ⟨f(x₂*) − x₁*, U_{n_k} − x₁*⟩ = ⟨f(x₂*) − x₁*, x̂ − x₁*⟩ ≤ 0.
Following the same method as for (28), we obtain
lim sup_{n→∞} ⟨g(x₁*) − x₂*, V_n − x₂*⟩ ≤ 0.
Step 5. Finally, we prove that the sequences {x_n} and {w_n} converge strongly to x₁* = P_{F₁}f(x₂*) and x₂* = P_{F₂}g(x₁*), respectively.
By the firm nonexpansiveness of P_C, we derive that
‖U_n − x₁*‖² = ‖P_Cũ_n − x₁*‖² ≤ ⟨ũ_n − x₁*, U_n − x₁*⟩
= ⟨α_n(f(w_n) − x₁*) + (1 − α_n)(G₁x_n − x₁*), U_n − x₁*⟩
= α_n⟨f(w_n) − x₁*, U_n − x₁*⟩ + (1 − α_n)⟨G₁x_n − x₁*, U_n − x₁*⟩
= α_n⟨f(w_n) − f(x₂*), U_n − x₁*⟩ + α_n⟨f(x₂*) − x₁*, U_n − x₁*⟩ + (1 − α_n)⟨G₁x_n − x₁*, U_n − x₁*⟩
≤ α_na‖w_n − x₂*‖‖U_n − x₁*‖ + α_n⟨f(x₂*) − x₁*, U_n − x₁*⟩ + (1 − α_n)‖x_n − x₁*‖‖U_n − x₁*‖
≤ (α_na/2)(‖w_n − x₂*‖² + ‖U_n − x₁*‖²) + α_n⟨f(x₂*) − x₁*, U_n − x₁*⟩ + ((1 − α_n)/2)(‖x_n − x₁*‖² + ‖U_n − x₁*‖²)
= (α_na/2)‖w_n − x₂*‖² + ((1 − α_n)/2)‖x_n − x₁*‖² + ((α_na + 1 − α_n)/2)‖U_n − x₁*‖² + α_n⟨f(x₂*) − x₁*, U_n − x₁*⟩
= (α_na/2)‖w_n − x₂*‖² + ((1 − α_n)/2)‖x_n − x₁*‖² + ((1 − α_n(1 − a))/2)‖U_n − x₁*‖² + α_n⟨f(x₂*) − x₁*, U_n − x₁*⟩,
which yields
‖U_n − x₁*‖² ≤ (α_na/(1 + α_n(1 − a)))‖w_n − x₂*‖² + ((1 − α_n)/(1 + α_n(1 − a)))‖x_n − x₁*‖² + (2α_n/(1 + α_n(1 − a)))⟨f(x₂*) − x₁*, U_n − x₁*⟩.
From the definition of x_n and (30), we get
‖x_{n+1} − x₁*‖² ≤ δ_n‖x_n − x₁*‖² + η_n‖P_C(I − γ₁B₁)x_n − x₁*‖² + μ_n‖U_n − x₁*‖²
≤ (1 − μ_n)‖x_n − x₁*‖² + (μ_nα_na/(1 + α_n(1 − a)))‖w_n − x₂*‖² + (2μ_nα_n/(1 + α_n(1 − a)))⟨f(x₂*) − x₁*, U_n − x₁*⟩ + (μ_n(1 − α_n)/(1 + α_n(1 − a)))‖x_n − x₁*‖²
= (1 − μ_nα_n(2 − a)/(1 + α_n(1 − a)))‖x_n − x₁*‖² + (μ_nα_na/(1 + α_n(1 − a)))‖w_n − x₂*‖² + (2μ_nα_n/(1 + α_n(1 − a)))⟨f(x₂*) − x₁*, U_n − x₁*⟩.
Similarly, as derived above, we also have
‖w_{n+1} − x₂*‖² ≤ (1 − μ_nα_n(2 − a)/(1 + α_n(1 − a)))‖w_n − x₂*‖² + (μ_nα_na/(1 + α_n(1 − a)))‖x_n − x₁*‖² + (2μ_nα_n/(1 + α_n(1 − a)))⟨g(x₁*) − x₂*, V_n − x₂*⟩.
From (31) and (32), we deduce that
‖x_{n+1} − x₁*‖² + ‖w_{n+1} − x₂*‖² ≤ (1 − μ_nα_n(2 − a)/(1 + α_n(1 − a)))(‖x_n − x₁*‖² + ‖w_n − x₂*‖²) + (μ_nα_na/(1 + α_n(1 − a)))(‖x_n − x₁*‖² + ‖w_n − x₂*‖²) + (2μ_nα_n/(1 + α_n(1 − a)))(⟨f(x₂*) − x₁*, U_n − x₁*⟩ + ⟨g(x₁*) − x₂*, V_n − x₂*⟩)
= (1 − 2μ_nα_n(1 − a)/(1 + α_n(1 − a)))(‖x_n − x₁*‖² + ‖w_n − x₂*‖²) + (2μ_nα_n/(1 + α_n(1 − a)))(⟨f(x₂*) − x₁*, U_n − x₁*⟩ + ⟨g(x₁*) − x₂*, V_n − x₂*⟩).
Applying condition (ii), (28), (29), and Lemma 4, we can conclude that the sequences {x_n} and {w_n} converge strongly to x₁* = P_{F₁}f(x₂*) and x₂* = P_{F₂}g(x₁*), respectively. This completes the proof. □
Corollary 1.
Let C be a nonempty closed convex subset of a real Hilbert space H. For i = 1, 2, let T_i : C → C be a κ_i-strictly pseudo-contractive mapping with F(T_i) ≠ ∅, and let f, g : H → H be a_f- and a_g-contraction mappings with a = max{a_f, a_g}. For i = 1, 2 and j = 1, 2, 3, let D_j^i : C → H be d_j^i-inverse strongly monotone, where λ_j^i ∈ (0, 2ω_i) with ω_i = min_{j=1,2,3}{d_j^i}. For i = 1, 2, define G_i : C → C by G_i(x) = P_C(I − λ₁^iD₁^i)(ax + (1 − a)P_C(I − λ₂^iD₂^i)(ax + (1 − a)P_C(I − λ₃^iD₃^i)x)), x ∈ C. Let the sequences {x_n} and {w_n} be generated by x₁, w₁ ∈ C and by
x_{n+1} = δ_nx_n + η_nP_C(I − γ₁(I − T₁))x_n + μ_nP_C(α_nf(w_n) + (1 − α_n)G₁x_n),
w_{n+1} = δ_nw_n + η_nP_C(I − γ₂(I − T₂))w_n + μ_nP_C(α_ng(x_n) + (1 − α_n)G₂w_n),
where δ_n, η_n, μ_n, α_n ∈ [0, 1] with δ_n + η_n + μ_n = 1, γ₁, γ₂ ∈ (0, 2α) with α = min{(1 − κ₁)/2, (1 − κ₂)/2} and γ = min{γ₁, γ₂}. Assume the following conditions hold:
(i) 
F_i = F(G_i) ∩ F(T_i) ≠ ∅ for i = 1, 2,
(ii) 
Σ_{n=1}^∞ α_n = ∞ and lim_{n→∞} α_n = 0,
(iii) 
0 < θ̄ ≤ δ_n, η_n, μ_n ≤ θ for all n ∈ ℕ and for some θ̄, θ > 0,
(iv) 
Σ_{n=1}^∞ |δ_{n+1} − δ_n| < ∞, Σ_{n=1}^∞ |η_{n+1} − η_n| < ∞, Σ_{n=1}^∞ |α_{n+1} − α_n| < ∞.
Then {x_n} converges strongly to x₁* = P_{F₁}f(x₂*), where y₁* = P_C(I − λ₂¹D₂¹)(ax₁* + (1 − a)z₁*) and z₁* = P_C(I − λ₃¹D₃¹)x₁*, and {w_n} converges strongly to x₂* = P_{F₂}g(x₁*), where y₂* = P_C(I − λ₂²D₂²)(ax₂* + (1 − a)z₂*) and z₂* = P_C(I − λ₃²D₃²)x₂*.
Proof .
From Theorem 3 and Lemma 6, we have the desired conclusion. □

3. Application

In this section, we obtain Theorems 4 and 5, which solve the split feasibility problem and the constrained convex minimization problem, respectively. To prove these theorems, the following definition and lemmas are needed.
Let H₁ and H₂ be real Hilbert spaces and let C, Q be nonempty closed convex subsets of H₁ and H₂, respectively. Let A₁, A₂ : H₁ → H₂ be bounded linear operators and let A₁*, A₂* be the adjoints of A₁ and A₂, respectively.

3.1. The Split Feasibility Problem

The split feasibility problem (SFP) is to find a point x ∈ C such that Ax ∈ Q. This problem was introduced by Censor and Elfving [24]. The set of all solutions of the SFP is denoted by Γ = {x ∈ C : Ax ∈ Q}. The split feasibility problem has been studied extensively as an extremely powerful tool in various fields such as medical image reconstruction, signal processing, intensity-modulated radiation therapy, and computed tomography; see [25,26,27] and the references therein.
In 2012, Ceng [28] introduced the following lemma to solve the SFP.
Lemma 7.
Given x* ∈ H₁, the following statements are equivalent:
(i) 
x* ∈ Γ;
(ii) 
x* = P_C(I − λA*(I − P_Q)A)x*, where λ > 0 and A* is the adjoint of A;
(iii) 
x* solves the variational inequality problem (VIP) of finding x* ∈ C such that ⟨y − x*, g(x*)⟩ ≥ 0 for all y ∈ C, where g = A*(I − P_Q)A.
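The equivalences in Lemma 7 can be checked on a one-dimensional instance with data of our own choosing: C = [0, 2], Q = [0, 4], and A(x) = 3x, so that A* = 3, the spectral radius of A*A is L = 9, and Γ = {x ∈ C : 3x ∈ Q} = [0, 4/3]. A point of Γ is fixed by x ↦ P_C(I − λA*(I − P_Q)A)x, and iterating this map from an infeasible start converges to a point of Γ.

```python
# One-dimensional check of Lemma 7, our own illustration:
# C = [0, 2], Q = [0, 4], A(x) = 3x, L = 9, Gamma = [0, 4/3],
# step size lam = 0.05 in (0, 2/L).
P_C = lambda u: min(max(u, 0.0), 2.0)
P_Q = lambda u: min(max(u, 0.0), 4.0)
A = Astar = 3.0
lam = 0.05

def step(x):
    residual = A * x - P_Q(A * x)           # (I - P_Q)Ax, zero iff Ax in Q
    return P_C(x - lam * Astar * residual)

print(step(1.0))   # 1.0: a point of Gamma is a fixed point, as in (ii)

x = 2.0            # infeasible start: A(2) = 6 is not in Q
for _ in range(200):
    x = step(x)

print(x)  # converges to 4/3, the boundary point of Gamma
```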
By using these results, we obtain the following theorem.
Theorem 4.
Let H₁ and H₂ be real Hilbert spaces and let C, Q be nonempty closed convex subsets of H₁ and H₂, respectively. Let B₁, B₂ : H₁ → H₂ be bounded linear operators with adjoints B₁* and B₂*, and let L₁, L₂ be the spectral radii of B₁*B₁ and B₂*B₂, respectively, with L = max{L₁, L₂}. Let f, g : H₁ → H₁ be a_f- and a_g-contraction mappings with a = max{a_f, a_g}. For i = 1, 2 and j = 1, 2, 3, let D_j^i : C → H₁ be d_j^i-inverse strongly monotone, where λ_j^i ∈ (0, 2ω_i) with ω_i = min_{j=1,2,3}{d_j^i}. For i = 1, 2, define G_i : C → C by G_i(x) = P_C(I − λ₁^iD₁^i)(ax + (1 − a)P_C(I − λ₂^iD₂^i)(ax + (1 − a)P_C(I − λ₃^iD₃^i)x)), x ∈ C. Let the sequences {x_n} and {w_n} be generated by x₁, w₁ ∈ C and by
x_{n+1} = δ_nx_n + η_nP_C(I − γ₁Φ₁)x_n + μ_nP_C(α_nf(w_n) + (1 − α_n)G₁x_n),
w_{n+1} = δ_nw_n + η_nP_C(I − γ₂Φ₂)w_n + μ_nP_C(α_ng(x_n) + (1 − α_n)G₂w_n),
where Φ₁x = B₁*(I − P_Q)B₁x and Φ₂x = B₂*(I − P_Q)B₂x for all x ∈ H₁, δ_n, η_n, μ_n, α_n ∈ [0, 1] with δ_n + η_n + μ_n = 1, and γ₁, γ₂ ∈ (0, 2/L) with γ = min{γ₁, γ₂}. Assume the following conditions hold:
(i) 
F_i = F(G_i) ∩ Γ_i ≠ ∅, where Γ_i = {x ∈ C : B_ix ∈ Q}, for i = 1, 2,
(ii) 
Σ_{n=1}^∞ α_n = ∞ and lim_{n→∞} α_n = 0,
(iii) 
0 < θ̄ ≤ δ_n, η_n, μ_n ≤ θ for all n ∈ ℕ and for some θ̄, θ > 0,
(iv) 
Σ_{n=1}^∞ |δ_{n+1} − δ_n| < ∞, Σ_{n=1}^∞ |η_{n+1} − η_n| < ∞, Σ_{n=1}^∞ |α_{n+1} − α_n| < ∞.
Then {x_n} converges strongly to x₁* = P_{F₁}f(x₂*), where y₁* = P_C(I − λ₂¹D₂¹)(ax₁* + (1 − a)z₁*) and z₁* = P_C(I − λ₃¹D₃¹)x₁*, and {w_n} converges strongly to x₂* = P_{F₂}g(x₁*), where y₂* = P_C(I − λ₂²D₂²)(ax₂* + (1 − a)z₂*) and z₂* = P_C(I − λ₃²D₃²)x₂*.
Proof .
Let x, y ∈ H₁.
First, we will show that Φ₁ is (1/L₁)-inverse strongly monotone.
Consider
‖Φ₁(x) − Φ₁(y)‖² = ‖B₁*(I − P_Q)B₁x − B₁*(I − P_Q)B₁y‖²
= ⟨B₁*(I − P_Q)B₁x − B₁*(I − P_Q)B₁y, B₁*(I − P_Q)B₁x − B₁*(I − P_Q)B₁y⟩
= ⟨(I − P_Q)B₁x − (I − P_Q)B₁y, B₁B₁*(I − P_Q)B₁x − B₁B₁*(I − P_Q)B₁y⟩
≤ L₁‖(I − P_Q)B₁x − (I − P_Q)B₁y‖².
From the properties of P_Q, we have
‖(I − P_Q)B₁x − (I − P_Q)B₁y‖² = ⟨(I − P_Q)B₁x − (I − P_Q)B₁y, (I − P_Q)B₁x − (I − P_Q)B₁y⟩
= ⟨(I − P_Q)B₁x − (I − P_Q)B₁y, B₁x − B₁y⟩ − ⟨(I − P_Q)B₁x − (I − P_Q)B₁y, P_QB₁x − P_QB₁y⟩
= ⟨B₁*(I − P_Q)B₁x − B₁*(I − P_Q)B₁y, x − y⟩ − ⟨(I − P_Q)B₁x − (I − P_Q)B₁y, P_QB₁x − P_QB₁y⟩
≤ ⟨B₁*(I − P_Q)B₁x − B₁*(I − P_Q)B₁y, x − y⟩.
Since Φ₁(x) = B₁*(I − P_Q)B₁x, we get
⟨Φ₁(x) − Φ₁(y), x − y⟩ ≥ (1/L₁)‖Φ₁(x) − Φ₁(y)‖².
Then Φ₁ = B₁*(I − P_Q)B₁ is (1/L₁)-inverse strongly monotone.
Using the same method as for (35), we have
⟨Φ₂(x) − Φ₂(y), x − y⟩ ≥ (1/L₂)‖Φ₂(x) − Φ₂(y)‖².
Then Φ₂ = B₂*(I − P_Q)B₂ is (1/L₂)-inverse strongly monotone.
By using Theorem 3 and Lemma 7, we obtain the conclusion. □

3.2. The Constrained Convex Minimization Problem

Let C be a closed convex subset of H. The constrained convex minimization problem is to find u* ∈ C such that
ℑ(u*) = min_{u∈C} ℑ(u),
where ℑ : H → ℝ is a continuously differentiable function. The set of all solutions of (36) is denoted by Γ.
It is known that the gradient-projection algorithm is one of the most powerful methods for solving the minimization problem (36); see [29,30,31].
Before we prove the theorem, we need the following lemma.
Lemma 8
([32]). A necessary condition of optimality for a point u* ∈ C to be a solution of the minimization problem (36) is that u* solves the variational inequality
⟨∇ℑ(u*), x − u*⟩ ≥ 0, ∀x ∈ C.
Equivalently, u* ∈ C solves the fixed-point equation
u* = P_C(u* − λ∇ℑ(u*)),
for every constant λ > 0. If, in addition, ℑ is convex, then the optimality condition (37) is also sufficient.
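Lemma 8 is easy to visualize in one dimension. The sketch below (our own example) minimizes ℑ(u) = (u − 3)² over C = [0, 2]: the unconstrained minimum 3 lies outside C, the constrained minimizer is u* = 2, and the optimality condition (37) holds since ⟨∇ℑ(2), x − 2⟩ = −2(x − 2) ≥ 0 on C.

```python
# Gradient-projection sketch for (36), our own illustration:
# minimize J(u) = (u - 3)**2 over C = [0, 2]; solution u* = 2.
P_C = lambda u: min(max(u, 0.0), 2.0)
grad = lambda u: 2.0 * (u - 3.0)            # J'(u)
lam = 0.25                                  # step size

u = 0.0
for _ in range(50):
    u = P_C(u - lam * grad(u))              # u* solves u = P_C(u - lam J'(u))

print(u)  # 2.0
```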
By using these results, we obtain the following theorem.
Theorem 5.
Let C be nonempty closed convex subset of a real Hilbert H . For i = 1 and i = 2 let i : H R be continuous differentiable function with i is 1 L i -inverse strongly monotone with L = max { L 1 , L 2 } . Let f , g : H H be a f and a g -contraction mappings with a = max { a f , a g } . For i = 1 , 2 and j = 1 , 2 , 3 let D j i : C H be d j i -inverse strongly monotone, where λ j i ( 0 , 2 ω i ) with ω i = min j = 1 , 2 , 3 { d j i } . For i = 1 , 2 , define G i : C C by G i ( x ) = P C ( I λ 1 i D 1 i ) ( a x + ( 1 a ) P C ( I λ 2 i D 2 i ) ( a x + ( 1 a ) P C ( I λ 3 i D 3 i ) x ) ) , x C . Let the sequences { x n } and { w n } be recursively defined by x 1 , w 1 C and by
x n + 1 = δ n x n + η n P C ( I γ 1 1 ) x n + μ n P C ( α n f ( w n ) + ( 1 α n ) G 1 x n ) w n + 1 = δ n w n + η n P C ( I γ 2 2 ) w n + μ n P C ( α n g ( x n ) + ( 1 α n ) G 2 w n )
where δ n , η n , μ n , α n [ 0 , 1 ] with δ n + η n + μ n = 1 , γ ( 0 , 2 α ) with α= min { 1 L 1 , 1 L 2 } and γ = min { γ 1 , γ 2 } . Assume that the following conditions are satisfied:
(i) F_i = F(G_i) ∩ Γ_i ≠ ∅ for i = 1, 2;
(ii) ∑_{n=1}^∞ α_n = ∞ and lim_{n→∞} α_n = 0;
(iii) 0 < θ̄ ≤ δ_n, η_n, μ_n ≤ θ for all n ∈ ℕ and for some θ̄, θ > 0;
(iv) ∑_{n=1}^∞ |δ_{n+1} − δ_n| < ∞, ∑_{n=1}^∞ |η_{n+1} − η_n| < ∞, and ∑_{n=1}^∞ |α_{n+1} − α_n| < ∞.
Then {x_n} converges strongly to x_1* = P_{F_1} f(x_2*), where y_1* = P_C(I − λ_2^1 D_2^1)(a x_1* + (1 − a) z_1*) and z_1* = P_C(I − λ_3^1 D_3^1) x_1*, and {w_n} converges strongly to x_2* = P_{F_2} g(x_1*), where y_2* = P_C(I − λ_2^2 D_2^2)(a x_2* + (1 − a) z_2*) and z_2* = P_C(I − λ_3^2 D_3^2) x_2*.
Proof.
By using Theorem 3 and Lemma 8, we obtain the conclusion. □
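A scalar illustration of the two-sequence scheme in Theorem 5 can be sketched as follows (NumPy assumed; all data are illustrative choices of ours: H = ℝ, C = [−50, 50], ℑ_1 = ℑ_2 = (u − 6)²/2 so that ∇ℑ_i(u) = u − 6 is 1-inverse strongly monotone with L_1 = L_2 = 1, and the same coefficient sequences as in Example 1 below):

```python
import numpy as np

P_C = lambda u: np.clip(u, -50.0, 50.0)   # projection onto C = [-50, 50]

# Both objectives are J_i(u) = (u - 6)^2 / 2, so grad J_i(u) = u - 6 is
# 1-Lipschitz, hence 1-inverse strongly monotone (L_1 = L_2 = 1, alpha = 1).
grad1 = grad2 = lambda u: u - 6.0
f = lambda u: u / 2.0                     # (1/2)-contraction
g = lambda u: u / 3.0                     # (1/3)-contraction
a = 0.5
gam1, gam2 = 0.5, 0.7                     # gamma_1, gamma_2 in (0, 2*alpha)

# Scalar analogues of the D_j^i of Example 1, with step sizes lambda_j^i.
D1 = [lambda u: (u - 6) / 3, lambda u: (u - 6) / 5, lambda u: (u - 6) / 7]
D2 = [lambda u: (u - 6) / 2, lambda u: (u - 6) / 3, lambda u: (u - 6) / 4]
lam1, lam2 = [0.5, 1 / 3, 0.25], [0.75, 0.25, 0.3]

def G(u, D, lam):
    """G(u) = P_C(I - lam_1 D_1)(a u + (1-a) P_C(I - lam_2 D_2)(a u + (1-a) P_C(I - lam_3 D_3) u))."""
    z = P_C(u - lam[2] * D[2](u))
    v = a * u + (1 - a) * z
    y = P_C(v - lam[1] * D[1](v))
    t = a * u + (1 - a) * y
    return P_C(t - lam[0] * D[0](t))

x, w = 20.0, 20.0
for n in range(1, 2001):
    dn, en, mn = n / (5*n + 2), (2*n + 0.5) / (5*n + 2), (2*n + 1.5) / (5*n + 2)
    an = 1 / (8 * n)
    # Simultaneous update: both right-hand sides use the old (x, w).
    x, w = (dn * x + en * P_C(x - gam1 * grad1(x)) + mn * P_C(an * f(w) + (1 - an) * G(x, D1, lam1)),
            dn * w + en * P_C(w - gam2 * grad2(w)) + mn * P_C(an * g(x) + (1 - an) * G(w, D2, lam2)))
print(x, w)  # both tend to the common solution 6
```

Here Γ_i = {6} and G_i(6) = 6, so condition (i) holds with F_i = {6}, and both sequences drift to 6 as α_n = 1/(8n) → 0.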

4. A Numerical Example

In this section, we give an example to support our main theorem.
Example 1.
Let ℝ be the set of real numbers, H = ℝ², and C = [−50, 50] × [−50, 50]. Let T_1, T_2 : C → C be defined by T_1 x = (max{0, 12 − x_1}, max{0, 12 − x_2}) and T_2 x = (max{(18 − x_1)/2, 0}, max{(18 − x_2)/2, 0}) for every x = (x_1, x_2) ∈ C. For every i = 1, 2, let B_i : C → H be defined by B_i(x) = x − T_i x for every x = (x_1, x_2) ∈ C. Let f, g : ℝ² → ℝ² be defined by f(x) = (x_1/2, x_2/2) and g(x) = (x_1/3, x_2/3) for all x = (x_1, x_2) ∈ ℝ². For every i = 1, 2 and j = 1, 2, 3, let D_j^i : C → H be defined by
D_1^1(x) = ((x_1 − 6)/3, 0), D_2^1(x) = ((x_1 − 6)/5, 0), D_3^1(x) = ((x_1 − 6)/7, 0),
D_1^2(x) = (0, (x_2 − 6)/2), D_2^2(x) = (0, (x_2 − 6)/3), D_3^2(x) = (0, (x_2 − 6)/4),
and define G_1, G_2 : C → C by
G_1(x) = P_C(I − (1/2) D_1^1)((1/2)x + (1/2) P_C(I − (1/3) D_2^1)((1/2)x + (1/2) P_C(I − (1/4) D_3^1) x))
and
G_2(x) = P_C(I − 0.75 D_1^2)((1/2)x + (1/2) P_C(I − 0.25 D_2^2)((1/2)x + (1/2) P_C(I − 0.3 D_3^2) x)).
Let the sequences x_n = (x_n^1, x_n^2) and w_n = (w_n^1, w_n^2) be generated by x_1, w_1 ∈ C and
x_{n+1} = (n/(5n+2)) x_n + ((2n + 1/2)/(5n+2)) P_C(I − 0.5 B_1) x_n + ((2n + 3/2)/(5n+2)) P_C((1/(8n)) f(w_n) + (1 − 1/(8n)) G_1 x_n),
w_{n+1} = (n/(5n+2)) w_n + ((2n + 1/2)/(5n+2)) P_C(I − 0.7 B_2) w_n + ((2n + 3/2)/(5n+2)) P_C((1/(8n)) g(x_n) + (1 − 1/(8n)) G_2 w_n),
for all n ∈ ℕ. Then the sequence x_n = (x_n^1, x_n^2) converges strongly to (6, 6) and w_n = (w_n^1, w_n^2) converges strongly to (6, 6).
Solution. 
By the definitions of T_i, B_i, f, g, D_j^i, and G_i for every i = 1, 2 and j = 1, 2, 3, we have (6, 6) ∈ F(G_i) ∩ VI(C, B_i). From Theorem 3, we conclude that the sequences {x_n} and {w_n} converge strongly to (6, 6).
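The recursion of Example 1 can be reproduced directly; the sketch below (NumPy assumed; helper names such as `G1`, `G2`, and `D` are ours) follows the trajectory summarized in Table 1 starting from x_1 = w_1 = (20, 20):

```python
import numpy as np

P_C = lambda u: np.clip(u, -50.0, 50.0)   # projection onto C = [-50, 50] x [-50, 50]

T1 = lambda x: np.array([max(0.0, 12 - x[0]), max(0.0, 12 - x[1])])
T2 = lambda x: np.array([max((18 - x[0]) / 2, 0.0), max((18 - x[1]) / 2, 0.0)])
B1 = lambda x: x - T1(x)
B2 = lambda x: x - T2(x)
f = lambda x: x / 2.0
g = lambda x: x / 3.0

def D(x, coord, c):
    """D_j^i(x): acts as (x_coord - 6)/c on one coordinate, 0 on the other."""
    out = np.zeros(2)
    out[coord] = (x[coord] - 6.0) / c
    return out

def G1(x):
    z = P_C(x - 0.25 * D(x, 0, 7.0))      # P_C(I - (1/4) D_3^1) x
    v = 0.5 * x + 0.5 * z
    y = P_C(v - (1 / 3) * D(v, 0, 5.0))   # P_C(I - (1/3) D_2^1)
    t = 0.5 * x + 0.5 * y
    return P_C(t - 0.5 * D(t, 0, 3.0))    # P_C(I - (1/2) D_1^1)

def G2(x):
    z = P_C(x - 0.3 * D(x, 1, 4.0))       # P_C(I - 0.3 D_3^2) x
    v = 0.5 * x + 0.5 * z
    y = P_C(v - 0.25 * D(v, 1, 3.0))      # P_C(I - 0.25 D_2^2)
    t = 0.5 * x + 0.5 * y
    return P_C(t - 0.75 * D(t, 1, 2.0))   # P_C(I - 0.75 D_1^2)

x, w = np.array([20.0, 20.0]), np.array([20.0, 20.0])
for n in range(1, 30):                     # computes x_2, ..., x_30
    dn, en, mn = n / (5*n + 2), (2*n + 0.5) / (5*n + 2), (2*n + 1.5) / (5*n + 2)
    an = 1.0 / (8 * n)
    # Simultaneous update: both right-hand sides use the old (x_n, w_n).
    x, w = (dn * x + en * P_C(x - 0.5 * B1(x)) + mn * P_C(an * f(w) + (1 - an) * G1(x)),
            dn * w + en * P_C(w - 0.7 * B2(w)) + mn * P_C(an * g(x) + (1 - an) * G2(w)))
print(x, w)  # both close to (6, 6), as in Table 1
```

The first computed pair agrees with Table 1: x_2 ≈ (14.570064, 15.803571) and w_2 ≈ (14.166667, 11.644491).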
Table 1 and Figure 1 show the numerical results for the sequences {x_n} and {w_n} with initial values x_1 = (20, 20) and w_1 = (20, 20) and N = 30 iterations.

5. Conclusions

The numerical results in Table 1 and Figure 1 show that the sequences {x_n} and {w_n} converge to (6, 6), and this convergence is guaranteed by Theorem 3.

Author Contributions

Conceptualization, A.K.; formal analysis, A.K. and A.S.; writing-original draft, A.S.; supervision, A.K.; writing-review and editing, A.K. and A.S.

Funding

This research was supported by the Royal Golden Jubilee (RGJ) Ph.D. Programme, the Thailand Research Fund (TRF), under Grant No. PHD/0170/2561.

Acknowledgments

The authors would like to extend their sincere appreciation to the Research and Innovation Services of King Mongkut’s Institute of Technology Ladkrabang.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
2. Kuhfittig, P.K.F. Common fixed points of nonexpansive mappings by iteration. Pacific J. Math. 1981, 97.
3. Zi, X.; Ma, Z.; Liu, Y.; Yang, L. Strong convergence theorem of split equality fixed point for nonexpansive mappings in Banach spaces. Appl. Math. Sci. 2018, 12, 739–758.
4. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
5. Browder, F.E.; Petryshyn, W.V. Construction of fixed points of nonlinear mappings in Hilbert spaces. J. Math. Anal. Appl. 1967, 20, 197–228.
6. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150.
7. Cho, Y.J.; Kang, S.M.; Qin, X. Some results on κ-strictly pseudo-contractive mappings in Hilbert spaces. Nonlinear Anal. 2009, 70, 1956–1964.
8. Ke, Y.; Ma, C. Strong convergence theorem for a common fixed point of a finite family of strictly pseudo-contractive mappings and a strictly pseudononspreading mapping. Fixed Point Theory Appl. 2015, 2015, 116.
9. Yao, Y.; Cheng, Y.C.; Kang, S. Iterative methods for κ-strictly pseudo-contractive mappings in Hilbert spaces. Analele Stiintifice ale Universitatii Ovidius Constanta Seria Matematica 2011, 19, 313–330.
10. Chang, S.S. Viscosity approximation methods for a finite family of nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 2006, 323, 1402–1416.
11. Wangkeeree, R.; Boonkong, U.; Preechasilp, P. Viscosity approximation methods for asymptotically nonexpansive mapping in CAT(0) spaces. Fixed Point Theory Appl. 2015.
12. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291.
13. Yao, Z.; Kang, S.M.; Li, H.J. An intermixed algorithm for strict pseudo-contractions in Hilbert spaces. Fixed Point Theory Appl. 2015, 2015, 206.
14. Ceng, L.C.; Yao, J.C. Iterative algorithm for generalized set-valued strong nonlinear mixed variational-like inequalities. J. Optim. Theory Appl. 2005, 124, 725–738.
15. Nadezhkina, N.; Takahashi, W. Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128, 191–201.
16. Yao, J.C.; Chadli, O. Pseudomonotone complementarity problems and variational inequalities. In Handbook of Generalized Convexity and Monotonicity; Springer: New York, NY, USA, 2005; pp. 501–558.
17. Yao, Y.; Yao, J.C. On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2007, 186, 1551–1558.
18. Siriyan, K.; Kangtunyakarn, A. A new general system of variational inequalities for convergence theorem and application. Numer. Algorithms 2018, 81, 99–123.
19. Ceng, L.C.; Wang, C.Y.; Yao, J.C. Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67, 375–390.
20. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
21. Xu, H.K. An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116, 659–678.
22. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
23. Kangtunyakarn, A. Convergence theorem of κ-strictly pseudo-contractive mapping and a modification of generalized equilibrium problems. Fixed Point Theory Appl. 2012, 2012, 89.
24. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
25. Qu, B.; Xiu, N. A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21, 1655–1665.
26. Censor, Y.; Bortfeld, T.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
27. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256.
28. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64, 633–642.
29. Jung, J.S. A general iterative approach to variational inequality problems and optimization problems. Fixed Point Theory Appl. 2011.
30. Polyak, B.T. Introduction to Optimization; Optimization Software: New York, NY, USA, 1987.
31. Witthayarat, U.; Jitpeera, T.; Kumam, P. A new modified hybrid steepest-descent by using a viscosity approximation method with a weakly contractive mapping for a system of equilibrium problems and fixed point problems with minimization problems. Abstr. Appl. Anal. 2012, 2012, 206345.
32. Su, M.; Xu, H.K. Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1, 35–43.
Figure 1. The convergence of { x n } and { w n } with initial values x 1 = ( 20 , 20 ) , w 1 = ( 20 , 20 ) and n = N = 30 .
Table 1. The values of { x n } and { w n } with initial values x 1 = ( 20 , 20 ) , w 1 = ( 20 , 20 ) and n = N = 30 .
n | x_n = (x_n^1, x_n^2) | w_n = (w_n^1, w_n^2)
1 | (20.000000, 20.000000) | (20.000000, 20.000000)
2 | (14.570064, 15.803571) | (14.166667, 11.644491)
3 | (10.882109, 12.554478) | (10.857685, 9.291785)
15 | (5.976595, 5.983309) | (5.970351, 5.980201)
28 | (5.987723, 5.984952) | (5.984472, 5.986233)
29 | (5.988190, 5.985531) | (5.985060, 5.986751)
30 | (5.988622, 5.986069) | (5.985605, 5.987233)
