Article

Novel Multistep Implicit Iterative Methods for Solving Common Solution Problems with Asymptotically Demicontractive Operators and Applications

1
College of Mathematics and Statistics, Sichuan University of Science & Engineering, Zigong 643000, China
2
South Sichuan Center for Applied Mathematics, Zigong 643000, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(18), 3871; https://doi.org/10.3390/math11183871
Submission received: 13 August 2023 / Revised: 7 September 2023 / Accepted: 8 September 2023 / Published: 11 September 2023

Abstract:
It is both meaningful and challenging to efficiently seek common solutions to operator systems (CSOSs), which are widespread in pure and applied mathematics, as well as in some closely related optimization problems. The purpose of this paper is to introduce a novel class of multistep implicit iterative algorithms (MSIIAs) for solving general CSOSs. By using Xu's lemma and Maingé's fundamental results, we first obtain strong convergence theorems for both one-step and multistep implicit iterative schemes for CSOSs involving asymptotically demicontractive operators. Finally, to illustrate the applications and benefits of the main results presented in this paper, we give two numerical examples and present an iterative approximation for solving the general common solution problem for variational inequalities and operator equations.

1. Introduction

Throughout this paper, let $H$, $H_1$, and $H_2$ be three real Hilbert spaces with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Assume that $\jmath_1 := \{1,2,\ldots,p_1\}$ and $\jmath_2 := \{1,2,\ldots,p_2\}$ are two index sets, where $p_1$ and $p_2$ are arbitrary positive integers. We assume that all the problems and iterative schemes considered below are well defined.
For each $i \in \jmath_1$ and each $j \in \jmath_2$, let $S_i : H \to H$ and $F_j : H \to H$ be nonlinear operators, and let $Q_j \subseteq H$ be a given nonempty closed convex subset. In order to find the general common solution to variational inequalities and operator equations (GCSVIOE), we consider the following problem:
$$0 = x - S_i x, \quad i \in \jmath_1; \qquad \langle F_j x,\ y_j - x\rangle \ge 0, \quad \forall y_j \in Q_j,\ j \in \jmath_2. \tag{1}$$
It is not difficult to note that (1) can also be reformulated as the following nonlinear operator system (see [1]):
$$0 = x - S_i x, \quad i \in \jmath_1; \qquad 0 = x - P_{Q_j}(I - \rho F_j)x, \quad j \in \jmath_2, \tag{2}$$
where $\rho$ is a positive constant, $I$ is the identity operator, and, for each $j \in \jmath_2$, $P_{Q_j}$ is the metric projection from $H$ onto $Q_j$, which assigns to each $x \in H$ the unique point $P_{Q_j}x \in Q_j$ fulfilling
$$\|x - P_{Q_j}x\| = \inf\{\|x - y\| \ |\ y \in Q_j\},$$
i.e.,
$$\langle x - P_{Q_j}x,\ y - P_{Q_j}x\rangle \le 0, \quad \forall y \in Q_j.$$
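For intuition, the metric projection has a closed form when $Q_j$ is a simple set such as a closed ball. The following sketch (our own illustration; the set, point, and sampling are assumed, not from the paper) computes the projection onto the unit ball and spot-checks the variational characterization above:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection P_Q x onto the closed ball Q = B(center, radius)."""
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return x.copy()
    return center + (radius / dist) * d

x = np.array([3.0, 4.0])                 # a point outside the unit ball
p = project_ball(x, np.zeros(2), 1.0)    # its nearest point on the unit sphere

# Variational characterization: <x - Px, y - Px> <= 0 for every y in Q.
rng = np.random.default_rng(0)
for _ in range(100):
    y = rng.normal(size=2)
    y = y / max(1.0, np.linalg.norm(y))  # a point of the unit ball
    assert np.dot(x - p, y - p) <= 1e-12
```

Here the projection of $(3,4)$ onto the unit ball is $(0.6, 0.8)$, and the inner-product test holds for every sampled point of the ball.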
If $P_{Q_j}(I - \rho F_j)$ in (2) is replaced by a general operator $T_j: H \to H$ for all $j \in \jmath_2$, then one can easily see that the GCSVIOE is a special case of the following common solution problem for an operator system (CSOS) involving the nonlinear operators $S_i$ and $T_j$, which aims to locate a point $x \in H$ such that
$$0 = x - S_i x, \quad i \in \jmath_1; \qquad 0 = x - T_j x, \quad j \in \jmath_2. \tag{3}$$
Example 1.
When $\jmath_1$ and $\jmath_2$ are singletons, i.e., $p_k = 1$ for $k = 1, 2$, and $S_1$ and $T_1$ are denoted by $S$ and $T$, respectively, one obtains the following special nonlinear operator system from (3):
$$0 = x - Sx \quad \text{and} \quad 0 = x - Tx. \tag{4}$$
Example 2.
If $T_j = I$ for every $j \in \jmath_2$, then (3) reduces to the following family of operator equations:
$$(I - S_i)x = 0, \quad i \in \jmath_1, \tag{5}$$
which was considered by Gu and He [2].
Furthermore, the split common fixed point problem (SCFPP), which is used to describe intensity-modulated radiation therapy, can be transformed into a CSOS.
Example 3.
Given a bounded linear operator $A: H_1 \to H_2$ and two families of nonlinear operators $S_i: H_1 \to H_1$ for $i \in \jmath_1$ and $T_j: H_2 \to H_2$ for $j \in \jmath_2$, the SCFPP consists of finding a point $(x_1, x_2) \in H_1 \times H_2$ such that
$$Ax_1 = x_2; \qquad x_1 = S_i x_1, \quad i \in \jmath_1; \qquad x_2 = T_j x_2, \quad j \in \jmath_2,$$
which was first introduced by Censor and Segal [3] in 2009 and has attracted widespread attention; see [4]. According to Lemma 3.2 in [5], the SCFPP can be transformed into the following CSOS: find $x \in H_1$ such that
$$0 = x - S_i x, \quad i \in \jmath_1; \qquad 0 = x - (A^*A)^{-1}A^*T_jAx, \quad j \in \jmath_2,$$
where A * is the adjoint operator of A.
Remark 1.
We remark that CSOSs have a wide range of applications in physics [6], mechanics [7], control theory [8], economics [9], information science [10], and other problems in pure and applied mathematics and some highly related optimization problems [11,12].
In order to solve (5), Gu and He [2] introduced the following multistep iterative process with errors $u_n^{(i)}$ for $i \in \jmath_1$ and $n \in \mathbb{N}$:
$$x_1 \in C; \quad x_{n+1} = x_n^{(0)}; \quad x_n^{(i-1)} = a_n^{(i)}S_i x_n^{(i)} + b_n^{(i)}x_n + c_n^{(i)}u_n^{(i)}; \quad x_n^{(p_1)} = x_n, \tag{6}$$
where $C$ is a nonempty closed convex subset of $H$, $S_i: C \to C$, and $\{a_n^{(i)}\}$, $\{b_n^{(i)}\}$, and $\{c_n^{(i)}\}$ are three real sequences in $[0,1]$ satisfying certain conditions. They also proved that (6) converges strongly to a common solution when $\{S_i\}_{i\in\jmath_1}$ is a family of nonexpansive operators in a Banach space. Afterward, for the case where each $S_i$ in (5) is a more general asymptotically demicontractive operator, Wang et al. [13] introduced the following iteration scheme:
$$x_1 \in H, \quad x_{n+1} = (1 - a_n - b_n)x_n + b_n\sum_{i=1}^{p_1}c_i S_i^n x_n, \quad n \in \mathbb{N}, \tag{7}$$
where $\{a_n\}, \{b_n\} \subset (0,1)$ are two real sequences; a strong convergence theorem was also obtained in a real Hilbert space.
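The averaging in a scheme of type (7) can be illustrated numerically. In the sketch below, the operators $S_i$, the weights $c_i$, and the parameter sequences are our own toy assumptions (not those of [13]); the common fixed point of the chosen operators is $0$:

```python
# Toy 1-D illustration of an averaged scheme of type (7).
# Assumed operators: S_i x = (0.5 - 0.1*i) x, i = 1..3, common fixed point 0.
def S_pow(i, n, x):
    """n-th iterate S_i^n x of the linear operator S_i x = (0.5 - 0.1*i) x."""
    return (0.5 - 0.1 * i) ** n * x

p1 = 3
c = [1.0 / p1] * p1                      # convex weights c_i summing to 1
x = 1.0                                  # initial point x_1
for n in range(1, 50):
    a_n, b_n = 1.0 / (n + 1), 0.5        # assumed parameter choice
    x = (1 - a_n - b_n) * x + b_n * sum(
        c[i - 1] * S_pow(i, n, x) for i in range(1, p1 + 1))
print(abs(x))  # small: the iterates approach the common solution 0
```

Since each $S_i$ here is a linear contraction, the weighted average shrinks the iterate at every step, mirroring the strong convergence asserted for (7).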
In order to solve (3) involving a family of nonexpansive operators $\{S_i\}_{i\in\jmath}$ and a family of asymptotically nonexpansive operators $\{T_j\}_{j\in\jmath}$, where $\jmath = \{1,2,\ldots,k\}$ and $k \in \mathbb{N}$, Yolacan and Kiziltunc [14] proposed the following multistep approximation algorithm (YKMSA):
$$x_1 \in H; \quad x_{n+1} = x_n^{(k)}; \quad x_n^{(i)} = a_n^{(i)}T_i^n x_n^{(i-1)} + b_n^{(i)}S_i x_n + c_n^{(i)}u_n^{(i)}; \quad x_n^{(0)} = x_n. \tag{8}$$
On the other hand, it is well known that the implicit rule is a powerful tool in the field of ordinary differential equations and is widely used to construct iteration schemes for (asymptotically) nonexpansive-type operators; see, for example, [15,16,17,18,19] and the references therein. Of particular note, Aibinu and Kim [20] compared the convergence rates of the following two viscosity implicit iterations:
$$x_1 \in H, \quad x_{n+1} = a_n S x_n + (1-a_n)T(b_n x_n + (1-b_n)x_{n+1}), \quad n \ge 1, \tag{9}$$
and
$$x_1 \in H, \quad x_{n+1} = a_n S x_n + b_n x_n + c_n T(d_n x_n + (1-d_n)x_{n+1}), \quad n \ge 1, \tag{10}$$
where $\{a_n\}, \{b_n\}, \{c_n\}$, and $\{d_n\}$ are four sequences satisfying special conditions, and $S$ and $T$ are two self-mappings of $H$. They also proved that iteration (10) converges faster than (9) under some prerequisites.
Owing to the complexity and effectiveness of implicit rules (see [19]), there is still little research on implicit iterations for the more general asymptotically demicontractive operators. Thus, the following question arises naturally:
Question 1.
How can a novel iteration scheme with an implicit rule be established for the CSOS (3) involving asymptotically demicontractive operators? What conditions should be satisfied to guarantee strong convergence?
Motivated and inspired by the above-mentioned works, we provide a novel class of multistep implicit iteration algorithms (MSIIAs) to answer Question 1. The basic definitions of the related nonlinear operators and some useful lemmas are given in Section 2. In Section 3, we present the details of the proposed MSIIA and prove the main results. Two numerical experiments and an application to the GCSVIOE are shown in Section 4. Finally, we give a brief summary of this paper in Section 5. Our studies extend and generalize the results of Gu and He [2], Wang et al. [13], and Yolacan and Kiziltunc [14].

2. Preliminaries

In a real Hilbert space $H$, the following inequalities hold for all $x, y \in H$:
$$\|x+y\|^2 \le \|x\|^2 + 2\langle y,\ x+y\rangle \tag{11}$$
and
$$\|ax + (1-a)y\|^2 \le a\|x\|^2 + (1-a)\|y\|^2, \quad \forall a \in [0,1]. \tag{12}$$
In the remainder of this section, we recall some useful definitions and lemmas.
Definition 1.
A nonlinear operator $U: H \to H$ with fixed point set $Fix(U) \ne \emptyset$ is said to be
(i)
a $c$-contraction if there exists a constant $c \in [0,1)$ such that
$$\|Ux - Uy\| \le c\|x - y\|, \quad \forall x, y \in H;$$
(ii)
nonexpansive if
$$\|Ux - Uy\| \le \|x - y\|, \quad \forall x, y \in H;$$
(iii)
$L$-Lipschitz continuous if there exists a constant $L \ge 0$ such that
$$\|Ux - Uy\| \le L\|x - y\|, \quad \forall x, y \in H;$$
(iv)
$L$-uniformly Lipschitz continuous if there exists a constant $L \ge 0$ such that
$$\|U^n x - U^n y\| \le L\|x - y\|, \quad \forall x, y \in H,\ n \in \mathbb{N};$$
(v)
$\delta$-demicontractive if there exists a constant $\delta \in [0,1)$ such that
$$\|Ux - p\|^2 \le \|x - p\|^2 + \delta\|Ux - x\|^2, \quad \forall x \in H \ \text{and} \ p \in Fix(U),$$
which is equivalent to
$$\langle x - Ux,\ x - p\rangle \ge \frac{1-\delta}{2}\|x - Ux\|^2;$$
(vi)
asymptotically demicontractive if there exist a sequence $\{k_n\} \subset [0,\infty)$ with $\lim_{n\to\infty}k_n = 1$ and a constant $\kappa \in [0,1)$ such that
$$\|U^n x - q\|^2 \le k_n^2\|x - q\|^2 + \kappa\|x - U^n x\|^2, \quad \forall x \in H,\ q \in Fix(U),$$
which is equivalent to the following inequalities:
$$\langle U^n x - q,\ x - q\rangle \le \frac{k_n^2+1}{2}\|x - q\|^2 + \frac{\kappa-1}{2}\|x - U^n x\|^2, \qquad \langle U^n x - x,\ x - q\rangle \le \frac{k_n^2-1}{2}\|x - q\|^2 + \frac{\kappa-1}{2}\|x - U^n x\|^2.$$
For clarity and precision, we refer to the operator defined above as a $(k_n, \kappa)$-asymptotically demicontractive operator.
Lemma 1 
([21]). Let $C$ be a nonempty closed convex subset of $H$ and $U: C \to C$ be an $L$-uniformly Lipschitz continuous and asymptotically demicontractive operator. Then, $Fix(U)$ is a closed convex subset of $C$.
Definition 2.
Let $U: H \to H$ be an operator. Then, $I - U$ is said to be demiclosed at zero if, for any $\{x_n\} \subset H$, the following implication holds:
$$x_n \rightharpoonup x \ \text{ and } \ (I - U)x_n \to 0 \ \implies \ x = Ux,$$
where $\rightharpoonup$ and $\to$ represent weak and strong convergence, respectively.
Definition 3 
([22]). An operator $T: H \to H$ is called uniformly asymptotically regular if, for any bounded subset $C$ of $H$, the following equality holds:
$$\lim_{n\to\infty}\sup_{x\in C}\|T^{n+1}x - T^n x\| = 0.$$
Example 4 
([23]). Let $K = \mathbb{R}$, and let $T: K \to K$ be defined by
$$T(x) = \begin{cases} rx, & \text{if } 0 \le x \le 1/2,\\[2pt] \dfrac{r(r-x)}{2r-1}, & \text{if } 1/2 \le x \le r,\\[2pt] 0, & \text{if } x < 0 \ \text{or} \ x > r, \end{cases} \tag{13}$$
where $1/2 < r < 1$ is a given constant. Then, $T$ is a uniformly Lipschitz continuous and $(r, 0)$-asymptotically demicontractive operator that is uniformly asymptotically regular on $K$, and $I - T$ is demiclosed at 0. The unique fixed point of $T$ is 0.
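The operator of Example 4 is easy to examine numerically. A minimal sketch (with $r = 0.75$ as an assumed concrete instance) verifies that $0$ is fixed and that iterates of other starting points are driven toward it:

```python
r = 0.75  # an assumed instance of the constant 1/2 < r < 1 in Example 4

def T(x):
    """Operator from Example 4 (Equation (13)) with r = 0.75."""
    if 0 <= x <= 0.5:
        return r * x
    if 0.5 <= x <= r:
        return r * (r - x) / (2 * r - 1)
    return 0.0

assert T(0.0) == 0.0          # 0 is the fixed point of T
x = 0.6                       # a starting point in the middle branch
for _ in range(100):
    x = T(x)
print(x)                      # close to 0 after repeated application
```

Note that the two branches agree at $x = 1/2$ (both give $r/2$), so $T$ is continuous there, consistent with the Lipschitz claim in the example.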
Example 5.
Let $C$ be a nonempty closed convex subset of $H$, and let the operator $F: C \to H$ be $\sigma/\mu^2$-inverse strongly monotone, i.e., $\frac{\sigma}{\mu^2}\|Fx - Fy\|^2 \le \langle Fx - Fy,\ x - y\rangle$ for any $x, y \in C$. Then, $T := P_C(I - \rho F)$ is uniformly asymptotically regular if the constant $\rho \in (0,\ 2\sigma/\mu^2)$.
Proof. 
Since $F$ is $\sigma/\mu^2$-inverse strongly monotone, $F$ is also $\mu$-Lipschitz continuous and $\sigma$-strongly monotone, i.e., $\sigma\|x - y\|^2 \le \langle Fx - Fy,\ x - y\rangle$.
Now, for all $z \in C$ and $n \ge 1$, we have
$$\begin{aligned} \|T^{n+1}z - T^n z\|^2 &= \|[P_C(I-\rho F)]^{n+1}z - [P_C(I-\rho F)]^n z\|^2\\ &\le \|(I-\rho F)T^n z - (I-\rho F)T^{n-1}z\|^2\\ &= \|T^n z - T^{n-1}z\|^2 - 2\rho\langle T^n z - T^{n-1}z,\ FT^n z - FT^{n-1}z\rangle + \rho^2\|FT^n z - FT^{n-1}z\|^2\\ &\le \|T^n z - T^{n-1}z\|^2 - 2\rho\sigma\|T^n z - T^{n-1}z\|^2 + \rho^2\mu^2\|T^n z - T^{n-1}z\|^2\\ &= (1 + \rho^2\mu^2 - 2\rho\sigma)\|T^n z - T^{n-1}z\|^2\\ &\le \cdots \le (1 + \rho^2\mu^2 - 2\rho\sigma)^n\|Tz - z\|^2, \end{aligned}$$
where we use, in turn, the nonexpansiveness of $P_C$, the strong monotonicity of $F$, and the Lipschitz continuity of $F$. It follows from $\rho \in (0,\ 2\sigma/\mu^2)$ that $0 \le 1 + \rho^2\mu^2 - 2\rho\sigma < 1$, and hence
$$\lim_{n\to\infty}\sup_{z\in C}\|T^{n+1}z - T^n z\| \le \lim_{n\to\infty}(1 + \rho^2\mu^2 - 2\rho\sigma)^{n/2}\sup_{z\in C}\|Tz - z\| = 0,$$
which means that $T$ is uniformly asymptotically regular.    □
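Example 5 can be checked numerically. The instance below is our own assumption (not from the paper): $C = [0,1]$, $F(x) = 2x$ (so $\mu = \sigma = 2$), and $\rho = 0.25 \in (0, 2\sigma/\mu^2) = (0, 1)$:

```python
import numpy as np

# Assumed instance of Example 5: C = [0, 1], F(x) = 2x, rho = 0.25.
rho = 0.25
def T(x):
    """T = P_C (I - rho F), with P_C the projection onto [0, 1]."""
    return np.clip(x - rho * 2 * x, 0.0, 1.0)

grid = np.linspace(0.0, 1.0, 101)       # a bounded subset of C
prev = grid.copy()
for n in range(60):
    cur = T(prev)
    gap = np.max(np.abs(cur - prev))    # sup_z |T^{n+1} z - T^n z| on the grid
    prev = cur
print(gap)  # shrinks geometrically, consistent with asymptotic regularity
```

Here $T(x) = 0.5x$ on the grid, so the successive-iterate gap decays like $0.5^{n}$, matching the geometric bound in the proof above.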
Lemma 2 
([24]). Let { Ψ n } be a sequence of nonnegative real numbers such that
$$\Psi_{n+1} \le (1 - a_n)\Psi_n + a_n b_n, \quad \forall n \in \mathbb{N},$$
where $\{a_n\}$ and $\{b_n\}$ satisfy the following conditions:
(i)
$a_n \in [0,1]$ and $\sum_{n=0}^{\infty}a_n = \infty$; (ii) $\limsup_{n\to\infty}b_n \le 0$.
Then, $\lim_{n\to\infty}\Psi_n = 0$.
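Lemma 2 (Xu's lemma) can be illustrated with a quick numerical experiment; the sequences below are toy choices of ours, not from [24]:

```python
# Toy illustration of Lemma 2: a_n = 1/(n+1) (so the sum diverges) and
# b_n = 1/n -> 0, so limsup b_n <= 0. The recursion (with equality, the
# worst case allowed by the lemma) should drive Psi_n to 0.
psi = 10.0                               # Psi_1, an arbitrary starting value
for n in range(1, 10_000):
    a_n, b_n = 1.0 / (n + 1), 1.0 / n
    psi = (1 - a_n) * psi + a_n * b_n
print(psi)  # small and positive, approaching 0 as n grows
```

A short calculation shows $n\Psi_n = \Psi_1 + H_{n-1}$ for these choices (with $H_n$ the harmonic number), so $\Psi_n$ decays like $\log n / n$, consistent with the lemma's conclusion.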
Lemma 3 
([25]). Suppose that $\{\Psi_k\}$ is a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\Psi_{k_j}\}_{j\ge 0}$ of $\{\Psi_k\}$ such that
$$\Psi_{k_j} < \Psi_{k_j+1} \quad \text{for all } j \ge 0.$$
Let $\{\tau(k)\}_{k\ge k_0}$ be the sequence of integers defined by
$$\tau(k) = \max\{i \le k \ |\ \Psi_i < \Psi_{i+1}\}.$$
Then, the following statements hold:
(i)
$\{\tau(k)\}_{k\ge k_0}$ is a nondecreasing sequence, and $\lim_{k\to\infty}\tau(k) = \infty$;
(ii)
$\max\{\Psi_{\tau(k)},\ \Psi_k\} \le \Psi_{\tau(k)+1}$ for all $k \ge k_0$.
Lemma 4.
In a real Hilbert space, the following inequality holds:
$$\|(1-a-b-c)x + by + cz - p\|^2 \le \frac{1-a-b-c}{1-a}\|x-p\|^2 + \frac{b}{1-a}\|y-p\|^2 + \frac{c}{1-a}\|z-p\|^2 + a\|p\|^2,$$
where $a \in [0,1)$ and $b, c \in [0,1]$ with $a + b + c \le 1$.
Proof. 
According to (12), we have
$$\begin{aligned} \|(1-a-b-c)x + by + cz - p\|^2 &= \Big\|(1-a)\cdot\frac{1}{1-a}\big[(1-a-b-c)(x-p) + b(y-p) + c(z-p)\big] + a(-p)\Big\|^2\\ &\le (1-a)\Big\|\frac{1}{1-a}\big[(1-a-b-c)(x-p) + b(y-p) + c(z-p)\big]\Big\|^2 + a\|p\|^2\\ &= \frac{1}{1-a}\big\|(1-a-b-c)(x-p) + b(y-p) + c(z-p)\big\|^2 + a\|p\|^2. \end{aligned}$$
Then, applying the same convexity argument to the combination inside the norm, it can be proved that
$$\begin{aligned} \|(1-a-b-c)x + by + cz - p\|^2 &\le \frac{1-a-b-c}{1-a}\|x-p\|^2 + a\|p\|^2 + \frac{a+b+c}{1-a}\Big\|\frac{b}{a+b+c}(y-p) + \frac{c}{a+b+c}(z-p)\Big\|^2\\ &= \frac{1-a-b-c}{1-a}\|x-p\|^2 + a\|p\|^2 + \frac{a+b+c}{1-a}\Big\|\frac{b+c}{a+b+c}\Big[\frac{b}{b+c}(y-p) + \frac{c}{b+c}(z-p)\Big]\Big\|^2\\ &\le \frac{1-a-b-c}{1-a}\|x-p\|^2 + a\|p\|^2 + \frac{b+c}{1-a}\Big\|\frac{b}{b+c}(y-p) + \frac{c}{b+c}(z-p)\Big\|^2\\ &\le \frac{1-a-b-c}{1-a}\|x-p\|^2 + \frac{b}{1-a}\|y-p\|^2 + \frac{c}{1-a}\|z-p\|^2 + a\|p\|^2. \end{aligned}$$
This completes the proof.    □
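The inequality of Lemma 4 can also be spot-checked numerically; the sketch below (a sanity check of ours, not a proof) samples random coefficients satisfying the hypotheses and random vectors in $\mathbb{R}^3$:

```python
import numpy as np

# Numerical spot-check of the Lemma 4 inequality on random data.
rng = np.random.default_rng(1)
violations = 0
for _ in range(1000):
    a = rng.uniform(0.0, 0.9)                        # a in [0, 1)
    b, c = rng.uniform(0.0, (1.0 - a) / 2, size=2)   # ensures a + b + c <= 1
    x, y, z, p = rng.normal(size=(4, 3))
    lhs = np.linalg.norm((1 - a - b - c) * x + b * y + c * z - p) ** 2
    rhs = ((1 - a - b - c) / (1 - a) * np.linalg.norm(x - p) ** 2
           + b / (1 - a) * np.linalg.norm(y - p) ** 2
           + c / (1 - a) * np.linalg.norm(z - p) ** 2
           + a * np.linalg.norm(p) ** 2)
    if lhs > rhs + 1e-9:
        violations += 1
print(violations)  # 0: no counterexample found
```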

3. Main Results

In this section, we first introduce a one-step implicit approximation algorithm for (4), with a contraction operator and an asymptotically demicontractive operator, and prove its strong convergence. Going a step further, to solve (3), which involves a family of contraction operators and a finite family of asymptotically demicontractive operators, a multistep implicit iteration method is proposed, and its strong convergence is proved.
Throughout this section, we denote the solution set of (4) by $\Gamma := Fix(S) \cap Fix(T)$, assuming that $\Gamma$ is nonempty, and let $x^* \in \Gamma$ be a common solution. We introduce the following implicit Algorithm 1.
Algorithm 1 Novel one-step implicit iteration for CSOS
  • Choose an initial point $x_1 \in H$, and for any $n \in \mathbb{N}$ do
    $$\begin{cases} y_n = \eta_n x_n + (1-\eta_n)x_{n+1},\\ x_{n+1} = (1-\alpha_n-\beta_n-\gamma_n)Sx_n + \beta_n x_n + \gamma_n\big[\delta_n y_n + (1-\delta_n)T^n y_n\big], \end{cases}$$
    where $S: H \to H$ is a $c$-contraction operator and $T: H \to H$ is an $L$-uniformly Lipschitz continuous and $(k_n, \kappa)$-asymptotically demicontractive operator with $\{k_n\} \subset [0,\infty)$, $\lim_{n\to\infty}k_n = 1$, and $\kappa \in [0,1)$. The real sequences $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\}, \{\delta_n\}$, and $\{\eta_n\}$ satisfy the following conditions:
(i)
$\alpha_n, \beta_n, \gamma_n, \delta_n, \eta_n$ are all in $[0,1]$;
(ii)
$\dfrac{\alpha_n + \gamma_n}{1 - \alpha_n - \beta_n - \gamma_n} \le 1 - c$;
(iii)
$\alpha_n/\gamma_n \to 0$, $\eta_n \to 1$, and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(iv)
$\max\big\{\kappa,\ 1 - \frac{1}{M-1}\big\} + \epsilon \le \delta_n \le 1 - \epsilon$, where $M = \sup_n\{k_n^2\}$ and $\epsilon > 0$.
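To make the implicit rule concrete, the following one-dimensional sketch runs an Algorithm 1-type iteration with toy operators of our own choosing ($Sx = x/2$ and $T$ from Example 4 with $r = 3/4$); the parameter sequences and the inner fixed-point loop used to resolve the implicit step are assumptions, not prescriptions from the paper:

```python
def S(x):
    """Assumed c-contraction: S x = x / 2 (c = 1/2)."""
    return 0.5 * x

r = 0.75
def T(x):
    """Assumed operator from Example 4 with r = 3/4."""
    if 0 <= x <= 0.5:
        return r * x
    if 0.5 <= x <= r:
        return r * (r - x) / (2 * r - 1)
    return 0.0

def T_pow(n, x):
    """n-th iterate T^n x."""
    for _ in range(n):
        x = T(x)
    return x

x = 0.6                                   # initial point x_1
for n in range(1, 200):
    # assumed parameters loosely modeled on conditions (i)-(iv)
    alpha, beta, gamma = 1.0 / (9 * n + 1), 0.25, n / (9.0 * n + 9)
    delta, eta = 0.9, 1.0 - 1.0 / (n + 1)
    z = x                                 # inner loop resolves the implicit step
    for _ in range(50):
        y = eta * x + (1 - eta) * z
        z = ((1 - alpha - beta - gamma) * S(x) + beta * x
             + gamma * (delta * y + (1 - delta) * T_pow(n, y)))
    x = z
print(abs(x))  # approaches the common solution 0
```

In one dimension the implicit update is a contraction in $x_{n+1}$ (the coefficient $\gamma_n(1-\eta_n)$ is small), so a short inner fixed-point loop suffices; in general Hilbert spaces one would solve the implicit equation by a comparable inner scheme.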
Lemma 5.
If $\{x_n\}$ is a sequence generated by Algorithm 1, then the following inequality holds:
$$\|x_{n+1} - x^*\|^2 \le (1 - A_n)\|x_n - x^*\|^2 + A_n\|x^*\|^2, \tag{14}$$
where $x^* \in \Gamma$ and
$$A_n = \frac{\alpha_n}{1 - \alpha_n - 2\gamma_n(1-\eta_n)}.$$
Proof. 
Due to Lemma 4, we can easily obtain
$$\|x_{n+1}-x^*\|^2 \le \frac{1-\alpha_n-\beta_n-\gamma_n}{1-\alpha_n}\|Sx_n-x^*\|^2 + \alpha_n\|x^*\|^2 + \frac{\beta_n}{1-\alpha_n}\|x_n-x^*\|^2 + \frac{\gamma_n}{1-\alpha_n}\|\delta_n y_n + (1-\delta_n)T^n y_n - x^*\|^2.$$
Note that
$$\|Sx_n - x^*\|^2 = \|Sx_n - Sx^*\|^2 \le c\|x_n - x^*\|^2$$
and
$$\begin{aligned} \|[\delta_n y_n + (1-\delta_n)T^n y_n] - x^*\|^2 &= \delta_n^2\|y_n-x^*\|^2 + (1-\delta_n)^2\|T^n y_n - x^*\|^2 + 2\delta_n(1-\delta_n)\langle T^n y_n - x^*,\ y_n - x^*\rangle\\ &\le \big[\delta_n^2 + (1-\delta_n)^2k_n^2 + \delta_n(1-\delta_n)(k_n^2+1)\big]\|y_n-x^*\|^2 + \big[(1-\delta_n)^2\kappa + \delta_n(1-\delta_n)(\kappa-1)\big]\|y_n - T^n y_n\|^2\\ &= \big(\delta_n + (1-\delta_n)k_n^2\big)\|y_n-x^*\|^2 + \big(\delta_n^2 - (1+\kappa)\delta_n + \kappa\big)\|y_n - T^n y_n\|^2\\ &\le 2\|y_n-x^*\|^2 \le 2\eta_n\|x_n-x^*\|^2 + 2(1-\eta_n)\|x_{n+1}-x^*\|^2. \end{aligned} \tag{16}$$
It follows that
$$\begin{aligned} \|x_{n+1}-x^*\|^2 &\le \frac{c(1-\alpha_n-\beta_n-\gamma_n)}{1-\alpha_n}\|x_n-x^*\|^2 + \frac{\beta_n}{1-\alpha_n}\|x_n-x^*\|^2 + \alpha_n\|x^*\|^2 + \frac{2\gamma_n\eta_n}{1-\alpha_n}\|x_n-x^*\|^2 + \frac{2\gamma_n(1-\eta_n)}{1-\alpha_n}\|x_{n+1}-x^*\|^2\\ &= \frac{c(1-\alpha_n-\beta_n-\gamma_n)+\beta_n+2\gamma_n\eta_n}{1-\alpha_n}\|x_n-x^*\|^2 + \alpha_n\|x^*\|^2 + \frac{2\gamma_n(1-\eta_n)}{1-\alpha_n}\|x_{n+1}-x^*\|^2. \end{aligned} \tag{17}$$
Collecting the $\|x_{n+1}-x^*\|^2$ terms in (17), it follows that
$$\|x_{n+1}-x^*\|^2 \le \frac{c(1-\alpha_n-\beta_n-\gamma_n)+\beta_n+2\gamma_n\eta_n}{1-\alpha_n-2\gamma_n(1-\eta_n)}\|x_n-x^*\|^2 + \frac{\alpha_n}{1-\alpha_n-2\gamma_n(1-\eta_n)}\|x^*\|^2.$$
Since $\frac{\alpha_n+\gamma_n}{1-\alpha_n-\beta_n-\gamma_n} \le 1-c$ and $A_n = \frac{\alpha_n}{1-\alpha_n-2\gamma_n(1-\eta_n)}$, we immediately obtain (14).    □
Lemma 6.
If $\{x_n\}$ is a sequence generated by Algorithm 1, then the following inequality holds:
$$\gamma_n\big[(1+\kappa)\delta_n - \delta_n^2 - \kappa\big]\|y_n - T^n y_n\|^2 \le \big[c(1-\alpha_n-\beta_n-\gamma_n)+\beta_n+2\gamma_n\eta_n\big]\|x_n-x^*\|^2 - \big[1-\alpha_n-2\gamma_n(1-\eta_n)\big]\|x_{n+1}-x^*\|^2 + \alpha_n\|x^*\|^2. \tag{18}$$
Moreover, if the limit of $\|x_n - x^*\|$ exists, then $\lim_{n\to\infty}\|y_n - T^n y_n\| = 0$.
Proof. 
According to (16), we have
$$\begin{aligned} \|[\delta_n y_n + (1-\delta_n)T^n y_n] - x^*\|^2 &\le \big(\delta_n + (1-\delta_n)k_n^2\big)\|y_n-x^*\|^2 + \big(\delta_n^2-(1+\kappa)\delta_n+\kappa\big)\|y_n-T^ny_n\|^2\\ &\le 2\eta_n\|x_n-x^*\|^2 + 2(1-\eta_n)\|x_{n+1}-x^*\|^2 + \big(\delta_n^2-(1+\kappa)\delta_n+\kappa\big)\|y_n-T^ny_n\|^2, \end{aligned}$$
and then
$$\|x_{n+1}-x^*\|^2 \le \frac{c(1-\alpha_n-\beta_n-\gamma_n)+\beta_n+2\gamma_n\eta_n}{1-\alpha_n}\|x_n-x^*\|^2 + \alpha_n\|x^*\|^2 + \frac{2\gamma_n(1-\eta_n)}{1-\alpha_n}\|x_{n+1}-x^*\|^2 + \frac{\gamma_n\big(\delta_n^2-(1+\kappa)\delta_n+\kappa\big)}{1-\alpha_n}\|y_n-T^ny_n\|^2.$$
Multiplying both sides by $1-\alpha_n$ and rearranging, we obtain
$$\gamma_n\big[(1+\kappa)\delta_n-\delta_n^2-\kappa\big]\|y_n-T^ny_n\|^2 \le \big[c(1-\alpha_n-\beta_n-\gamma_n)+\beta_n+2\gamma_n\eta_n\big]\|x_n-x^*\|^2 + \alpha_n\|x^*\|^2 - \big[1-\alpha_n-2\gamma_n(1-\eta_n)\big]\|x_{n+1}-x^*\|^2,$$
which is the objective inequality.
According to the conditions in Algorithm 1, we have $\frac{\alpha_n+\gamma_n}{1-\alpha_n-\beta_n-\gamma_n} \le 1-c$, $\alpha_n/\gamma_n \to 0$, and $\eta_n \to 1$; thus,
$$\limsup_{n\to\infty}\frac{c(1-\alpha_n-\beta_n-\gamma_n)+\beta_n+2\gamma_n\eta_n - \big[1-\alpha_n-2\gamma_n(1-\eta_n)\big]}{\gamma_n} \le 0$$
holds. Assuming that $\lim_{n\to\infty}\|x_n-x^*\|^2 = L$ and dividing (18) by $\gamma_n$, we have
$$\big[(1+\kappa)\delta_n-\delta_n^2-\kappa\big]\|y_n-T^ny_n\|^2 \le \frac{c(1-\alpha_n-\beta_n-\gamma_n)+\beta_n+2\gamma_n\eta_n-\big[1-\alpha_n-2\gamma_n(1-\eta_n)\big]}{\gamma_n}\,L + \frac{\alpha_n}{\gamma_n}\|x^*\|^2 + o(1).$$
Due to the definition of $\delta_n$, one has $(1+\kappa)\delta_n-\delta_n^2-\kappa = (\delta_n-\kappa)(1-\delta_n) \ge \epsilon_1$ for some constant $\epsilon_1 \in (0,1)$. Thus, we can deduce that
$$\|y_n - T^n y_n\| \to 0 \quad \text{as} \quad n \to \infty.$$
   □
Lemma 7.
If $\{x_n\}$ is a sequence generated by Algorithm 1 and $\lim_{n\to\infty}\|y_n - T^n y_n\| = 0$, then $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$.
Proof. 
According to Lemma 4 (with $p$ replaced by $y_n$), we have
$$\begin{aligned} \|x_{n+1}-y_n\|^2 &= \|(1-\alpha_n-\beta_n-\gamma_n)Sx_n + \beta_n x_n + \gamma_n[\delta_n y_n + (1-\delta_n)T^n y_n] - y_n\|^2\\ &\le \frac{1-\alpha_n-\beta_n-\gamma_n}{1-\alpha_n}\|Sx_n-y_n\|^2 + \alpha_n\|y_n\|^2 + \frac{\beta_n}{1-\alpha_n}\|x_n-y_n\|^2 + \frac{\gamma_n(1-\delta_n)}{1-\alpha_n}\|T^ny_n-y_n\|^2. \end{aligned}$$
Note that $\alpha_n \to 0$ and $\|T^n y_n - y_n\|^2 \to 0$, which implies
$$\lim_{n\to\infty}\|x_{n+1}-y_n\|^2 \le \lim_{n\to\infty}\|x_n-y_n\|^2.$$
Since $\eta_n \to 1$ and $\{x_n\}$ is bounded, we have $\|y_n - x_n\| \to 0$, that is,
$$\lim_{n\to\infty}\|x_{n+1}-x_n\| = 0.$$
The proof is completed.    □
Lemma 8.
If $\{x_n\}$ is a sequence generated by Algorithm 1, then the following inequality holds:
$$\|x_{n+1}-x^*\|^2 \le (1-A_n)\|x_n-x^*\|^2 + A_n\cdot 2\langle 0-x^*,\ x_{n+1}-x^*\rangle, \tag{19}$$
where $A_n$ is defined as in Lemma 5.
Proof. 
Applying inequality (11), we have
$$\begin{aligned} \|x_{n+1}-x^*\|^2 &\le \|(1-\alpha_n-\beta_n-\gamma_n)(Sx_n-x^*) + \beta_n(x_n-x^*) + \gamma_n[\delta_n y_n + (1-\delta_n)T^ny_n - x^*]\|^2 + 2\alpha_n\langle 0-x^*,\ x_{n+1}-x^*\rangle\\ &\le (1-\alpha_n-\beta_n-\gamma_n)\|Sx_n-x^*\|^2 + \beta_n\|x_n-x^*\|^2 + \gamma_n\|\delta_ny_n+(1-\delta_n)T^ny_n-x^*\|^2 + 2\alpha_n\langle 0-x^*,\ x_{n+1}-x^*\rangle\\ &\le c(1-\alpha_n-\beta_n-\gamma_n)\|x_n-x^*\|^2 + \beta_n\|x_n-x^*\|^2 + 2\gamma_n\eta_n\|x_n-x^*\|^2 + 2\gamma_n(1-\eta_n)\|x_{n+1}-x^*\|^2 + 2\alpha_n\langle 0-x^*,\ x_{n+1}-x^*\rangle, \end{aligned}$$
where the second inequality follows from the convexity of $\|\cdot\|^2$ (the coefficients sum to $1-\alpha_n \le 1$) and the third from (16). This leads to
$$\|x_{n+1}-x^*\|^2 \le \frac{c(1-\alpha_n-\beta_n-\gamma_n)+\beta_n+2\gamma_n\eta_n}{1-2\gamma_n(1-\eta_n)}\|x_n-x^*\|^2 + \frac{2\alpha_n}{1-2\gamma_n(1-\eta_n)}\langle 0-x^*,\ x_{n+1}-x^*\rangle \le \frac{c(1-\alpha_n-\beta_n-\gamma_n)+\beta_n+2\gamma_n\eta_n}{1-\alpha_n-2\gamma_n(1-\eta_n)}\|x_n-x^*\|^2 + \frac{2\alpha_n}{1-\alpha_n-2\gamma_n(1-\eta_n)}\langle 0-x^*,\ x_{n+1}-x^*\rangle.$$
Since $A_n = \frac{\alpha_n}{1-\alpha_n-2\gamma_n(1-\eta_n)}$ and $\frac{\alpha_n+\gamma_n}{1-\alpha_n-\beta_n-\gamma_n} \le 1-c$, we immediately obtain the objective inequality (19).    □
Next, we give a strong convergence theorem for Algorithm 1.
Theorem 1.
If $\{x_n\}$ is a sequence generated by Algorithm 1, $T$ is uniformly asymptotically regular on $H$, and $I - T$ is demiclosed at 0, then $\{x_n\}$ converges strongly to $P_\Gamma 0$.
Proof. 
According to Lemma 5, the sequence $\{\|x_n - x^*\|\}$ is bounded, and hence $\{x_n\}$ is bounded. In the sequel, we consider the proof in two possible cases.
(Case I) If there exists a positive integer $N^*$ such that $\|x_{n+1}-x^*\| \le \|x_n-x^*\|$ for all $n \ge N^*$, then $\lim_{n\to\infty}\|x_n-x^*\|$ exists, and because $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup q$. Then, from Lemma 6, it follows that $\lim_{n\to\infty}\|y_n - T^ny_n\| = 0$. Since $\|y_n - x_n\| \to 0$ as $n \to \infty$, one also has $\lim_{n\to\infty}\|x_n - T^nx_n\| = 0$.
Recall that $T$ is uniformly asymptotically regular on $H$ and $\{x_n\}$ is bounded. This means that we can find a nonempty bounded closed convex subset $K$ of $H$ such that $x_n \in K$ for all $n \in \mathbb{N}$. Then, one has
$$\begin{aligned} \|x_n - Tx_n\| &\le \|x_n - T^nx_n\| + \|T^nx_n - T^{n+1}x_n\| + \|T^{n+1}x_n - Tx_n\|\\ &\le (1+L)\|x_n - T^nx_n\| + \sup_{z\in K}\|T^nz - T^{n+1}z\| \to 0. \end{aligned} \tag{20}$$
Next, according to Lemma 7, we have $\|x_{n+1}-x_n\| \to 0$ as $n \to \infty$, so $q \in Fix(T)$ by the demiclosedness of $I - T$. Hence, we have
$$\lim_{k\to\infty}\langle 0-x^*,\ x_{n_k+1}-x^*\rangle = \lim_{k\to\infty}\langle 0-x^*,\ x_{n_k}-x^*\rangle = \langle 0-x^*,\ q-x^*\rangle.$$
Letting $x^* = P_\Gamma 0$, the variational characterization of the metric projection yields
$$\limsup_{n\to\infty}\langle 0-x^*,\ x_{n+1}-x^*\rangle = \langle 0-x^*,\ q-x^*\rangle \le 0.$$
Note that
$$\sum_{n=1}^{\infty}A_n = \sum_{n=1}^{\infty}\frac{\alpha_n}{1-\alpha_n-2\gamma_n(1-\eta_n)} \ge \sum_{n=1}^{\infty}\alpha_n = \infty.$$
According to Lemma 8 and Lemma 2, we now obtain x n x * 0 .
(Case II) Put $\Psi_n = \|x_n-x^*\|^2$. If there does not exist a positive integer $N^*$ such that $\Psi_{n+1} \le \Psi_n$ for all $n \ge N^*$, then, according to Lemma 3, there exists a sequence $\{\tau(n)\}$ such that $\Psi_{\tau(n)} \le \Psi_{\tau(n)+1}$ and $\Psi_n \le \Psi_{\tau(n)+1}$, where $\{\tau(n)\}$ is a nondecreasing sequence with $\tau(n) \to \infty$ as $n \to \infty$.
From Lemma 6, it is not difficult to verify the following equality:
$$\lim_{n\to\infty}\|y_{\tau(n)} - T^{\tau(n)}y_{\tau(n)}\| = 0.$$
Since $\|y_{\tau(n)} - x_{\tau(n)}\| \to 0$, we also have
$$\lim_{n\to\infty}\|x_{\tau(n)} - T^{\tau(n)}x_{\tau(n)}\| = 0.$$
Similar to inequality (20), one can obtain $\|x_{\tau(n)} - Tx_{\tau(n)}\| \to 0$ as $n \to \infty$. Then, according to Lemma 7, we have
$$\|x_{\tau(n)+1} - x_{\tau(n)}\| \to 0.$$
According to the demiclosedness principle, we again have $x_{\tau(n)} \rightharpoonup q \in Fix(T)$, and
$$\limsup_{n\to\infty}\langle 0-x^*,\ x_{\tau(n)+1}-x^*\rangle = \limsup_{n\to\infty}\langle 0-x^*,\ x_{\tau(n)}-x^*\rangle = \langle 0-x^*,\ q-x^*\rangle \le 0.$$
It follows from Lemma 8 that
$$\Psi_{\tau(n)+1} \le (1-A_{\tau(n)})\Psi_{\tau(n)} + A_{\tau(n)}\cdot 2\langle 0-x^*,\ x_{\tau(n)+1}-x^*\rangle,$$
which can be rearranged as
$$A_{\tau(n)}\Psi_{\tau(n)+1} + (1-A_{\tau(n)})(\Psi_{\tau(n)+1}-\Psi_{\tau(n)}) \le A_{\tau(n)}\cdot 2\langle 0-x^*,\ x_{\tau(n)+1}-x^*\rangle.$$
Recalling that $\Psi_{\tau(n)} \le \Psi_{\tau(n)+1}$, we then have
$$A_{\tau(n)}\Psi_{\tau(n)+1} \le A_{\tau(n)}\cdot 2\langle 0-x^*,\ x_{\tau(n)+1}-x^*\rangle,$$
and so
$$\Psi_{\tau(n)+1} \le 2\langle 0-x^*,\ x_{\tau(n)+1}-x^*\rangle \to 0.$$
Finally, since $\Psi_n \le \Psi_{\tau(n)+1}$, we obtain $\Psi_n \to 0$, meaning that $\{x_n\}$ converges strongly to $x^* = P_\Gamma 0$.    □
Assuming that $\eta_n \equiv 1$, the implicit Algorithm 1 reduces to the following explicit Algorithm 2.
Algorithm 2 Novel one-step explicit iteration for CSOS
  • Choose an initial point $x_1 \in H$, and for any $n \in \mathbb{N}$ do
    $$x_{n+1} = (1-\alpha_n-\beta_n-\gamma_n)Sx_n + \beta_n x_n + \gamma_n\big[\delta_n x_n + (1-\delta_n)T^n x_n\big],$$
    where the real sequences $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\}$, and $\{\delta_n\}$ satisfy the following conditions:
(i)
$\alpha_n, \beta_n, \gamma_n$, and $\delta_n$ are all in $[0,1]$;
(ii)
$\dfrac{\alpha_n + \gamma_n}{1 - \alpha_n - \beta_n - \gamma_n} \le 1 - c$;
(iii)
$\alpha_n/\gamma_n \to 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(iv)
$\max\big\{\kappa,\ 1 - \frac{1}{M-1}\big\} + \epsilon \le \delta_n \le 1 - \epsilon$, where $M = \sup_n\{k_n^2\}$ and $\epsilon > 0$.
Corollary 1.
If $S: H \to H$ is a $c$-contraction operator, $T: H \to H$ is an $L$-uniformly Lipschitz continuous and $(k_n, \kappa)$-asymptotically demicontractive operator, $T$ is uniformly asymptotically regular on $H$, and $I - T$ is demiclosed at 0, then the sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to a point in $\Gamma$.
In the remainder of this section, we introduce a multistep implicit iteration algorithm (MSIIA) for (3) involving a family of contraction operators and a finite family of asymptotically demicontractive operators.
Theorem 2.
For all $i \in \{1,2,\ldots,p\}$, let $S_i: H \to H$ be a $c_i$-contraction operator and $T_i: H \to H$ be an $L_i$-uniformly Lipschitz continuous and $(k_n^{(i)}, \kappa^{(i)})$-asymptotically demicontractive operator, where $\{k_n^{(i)}\} \subset [0,\infty)$, $\lim_{n\to\infty}k_n^{(i)} = 1$, and $\kappa^{(i)} \in [0,1)$. Moreover, assume that $T_i$ is uniformly asymptotically regular on $H$ and $I - T_i$ is demiclosed at 0. Let $\Xi$ be the solution set of (3). If $\{x_n\}$ is a sequence generated by Algorithm 3, then $\{x_n\}$ converges strongly to $P_\Xi 0$.
Algorithm 3 Novel multistep implicit iteration for CSOS
  • Choose an initial point $x_1 \in H$, and for any $n \in \mathbb{N}$ do the following:
    $$\begin{cases} y_n^{(1)} = \eta_n^{(1)}x_n + (1-\eta_n^{(1)})x_n^{(1)},\\ x_n^{(1)} = (1-\alpha_n^{(1)}-\beta_n^{(1)}-\gamma_n^{(1)})S_1x_n + \beta_n^{(1)}x_n + \gamma_n^{(1)}\big[\delta_n^{(1)}y_n^{(1)} + (1-\delta_n^{(1)})T_1^ny_n^{(1)}\big],\\ \text{for } i = 2,3,\ldots,p{:}\\ \quad y_n^{(i)} = \eta_n^{(i)}x_n^{(i-1)} + (1-\eta_n^{(i)})x_n^{(i)},\\ \quad x_n^{(i)} = (1-\alpha_n^{(i)}-\beta_n^{(i)}-\gamma_n^{(i)})S_ix_n^{(i-1)} + \beta_n^{(i)}x_n^{(i-1)} + \gamma_n^{(i)}\big[\delta_n^{(i)}y_n^{(i)} + (1-\delta_n^{(i)})T_i^ny_n^{(i)}\big],\\ x_{n+1} = x_n^{(p)}. \end{cases}$$
  • The real sequences $\{\alpha_n^{(i)}\}, \{\beta_n^{(i)}\}, \{\gamma_n^{(i)}\}, \{\delta_n^{(i)}\}$, and $\{\eta_n^{(i)}\}$ satisfy
(i)
$\alpha_n^{(i)}, \beta_n^{(i)}, \gamma_n^{(i)}, \delta_n^{(i)}, \eta_n^{(i)}$ are all in $[0,1]$;
(ii)
$\dfrac{\alpha_n^{(i)} + \gamma_n^{(i)}}{1 - \alpha_n^{(i)} - \beta_n^{(i)} - \gamma_n^{(i)}} \le 1 - c_i$;
(iii)
$\alpha_n^{(i)}/\gamma_n^{(i)} \to 0$, $\eta_n^{(i)} \to 1$, and $\sum_{n=1}^{\infty}\alpha_n^{(i)} = \infty$;
(iv)
$\max\big\{\kappa^{(i)},\ 1 - \frac{1}{M_i-1}\big\} + \epsilon_i \le \delta_n^{(i)} \le 1 - \epsilon_i$, where $M_i = \sup_n\{(k_n^{(i)})^2\}$ and $\epsilon_i > 0$.
Proof. 
Let x * Ξ . We divide the whole proof into four parts.
Step 1. First, we prove that the sequence $\{x_n\}$ is bounded. Setting
$$A_n^{(j)} := \frac{\alpha_n^{(j)}}{1-\alpha_n^{(j)}-2\gamma_n^{(j)}(1-\eta_n^{(j)})},$$
according to Lemma 5, we then obtain
$$\begin{aligned} \|x_{n+1}-x^*\|^2 = \|x_n^{(p)}-x^*\|^2 &\le (1-A_n^{(p)})\|x_n^{(p-1)}-x^*\|^2 + A_n^{(p)}\|x^*\|^2\\ &\le (1-A_n^{(p)})\big[(1-A_n^{(p-1)})\|x_n^{(p-2)}-x^*\|^2 + A_n^{(p-1)}\|x^*\|^2\big] + A_n^{(p)}\|x^*\|^2\\ &\ \ \vdots\\ &\le \prod_{i=0}^{p-1}(1-A_n^{(p-i)})\|x_n-x^*\|^2 + \Big(A_n^{(p)} + \sum_{i=1}^{p-1}A_n^{(p-i)}\prod_{j=0}^{i-1}(1-A_n^{(p-j)})\Big)\|x^*\|^2. \end{aligned}$$
Note that
$$\prod_{i=0}^{p-1}(1-A_n^{(p-i)}) + A_n^{(p)} + \sum_{i=1}^{p-1}A_n^{(p-i)}\prod_{j=0}^{i-1}(1-A_n^{(p-j)}) = 1,$$
and letting
$$B_n = \prod_{i=0}^{p-1}(1-A_n^{(p-i)}),$$
one then has
$$\begin{aligned} \|x_{n+1}-x^*\|^2 &\le B_n\|x_n-x^*\|^2 + (1-B_n)\|x^*\|^2\\ &\le B_n\big[B_{n-1}\|x_{n-1}-x^*\|^2 + (1-B_{n-1})\|x^*\|^2\big] + (1-B_n)\|x^*\|^2\\ &= B_nB_{n-1}\|x_{n-1}-x^*\|^2 + (1-B_nB_{n-1})\|x^*\|^2\\ &\ \ \vdots\\ &\le \prod_{i=1}^{n}B_i\,\|x_1-x^*\|^2 + \Big(1-\prod_{i=1}^{n}B_i\Big)\|x^*\|^2. \end{aligned}$$
Since $B_i \in [0,1]$, one has $\|x_{n+1}-x^*\|^2 \le \max\{\|x_1-x^*\|^2,\ \|x^*\|^2\}$, which means that $\{x_n\}$ is bounded, and so are the sequences $\{y_n^{(i)}\}$.
Step 2. According to Lemma 6, it is easy to see that
$$\gamma_n^{(1)}\big[(1+\kappa^{(1)})\delta_n^{(1)} - (\delta_n^{(1)})^2 - \kappa^{(1)}\big]\|y_n^{(1)}-T_1^ny_n^{(1)}\|^2 \le \big[c_1(1-\alpha_n^{(1)}-\beta_n^{(1)}-\gamma_n^{(1)})+\beta_n^{(1)}+2\eta_n^{(1)}\gamma_n^{(1)}\big]\|x_n-x^*\|^2 + \alpha_n^{(1)}\|x^*\|^2 - \big[1-\alpha_n^{(1)}-2\gamma_n^{(1)}(1-\eta_n^{(1)})\big]\|x_n^{(1)}-x^*\|^2,$$
and, for $i \in \{2,3,\ldots,p\}$, we have
$$\gamma_n^{(i)}\big[(1+\kappa^{(i)})\delta_n^{(i)} - (\delta_n^{(i)})^2 - \kappa^{(i)}\big]\|y_n^{(i)}-T_i^ny_n^{(i)}\|^2 \le \big[c_i(1-\alpha_n^{(i)}-\beta_n^{(i)}-\gamma_n^{(i)})+\beta_n^{(i)}+2\eta_n^{(i)}\gamma_n^{(i)}\big]\|x_n^{(i-1)}-x^*\|^2 + \alpha_n^{(i)}\|x^*\|^2 - \big[1-\alpha_n^{(i)}-2\gamma_n^{(i)}(1-\eta_n^{(i)})\big]\|x_n^{(i)}-x^*\|^2.$$
Step 3. Set
$$\chi_n := \sup_{i\in\{1,2,\ldots,p\}} 2\langle 0-x^*,\ x_n^{(i)}-x^*\rangle. \tag{21}$$
We have the following inequality according to Lemma 8 and (21):
$$\begin{aligned} \|x_{n+1}-x^*\|^2 &\le (1-A_n^{(p)})\|x_n^{(p-1)}-x^*\|^2 + A_n^{(p)}\chi_n\\ &\le (1-A_n^{(p)})(1-A_n^{(p-1)})\|x_n^{(p-2)}-x^*\|^2 + \big[(1-A_n^{(p)})A_n^{(p-1)} + A_n^{(p)}\big]\chi_n\\ &\ \ \vdots\\ &\le \prod_{i=0}^{p-1}(1-A_n^{(p-i)})\|x_n-x^*\|^2 + \Big(A_n^{(p)} + \sum_{i=1}^{p-1}A_n^{(p-i)}\prod_{j=0}^{i-1}(1-A_n^{(p-j)})\Big)\chi_n. \end{aligned}$$
Because $B_n = \prod_{i=0}^{p-1}(1-A_n^{(p-i)})$, we immediately have
$$\|x_{n+1}-x^*\|^2 \le B_n\|x_n-x^*\|^2 + (1-B_n)\chi_n. \tag{22}$$
Step 4. To prove the strong convergence, we consider two possible cases.
(Case I) If there exists a positive integer $N^*$ such that $\|x_{n+1}-x^*\| \le \|x_n-x^*\|$ for all $n \ge N^*$, then $\lim_{n\to\infty}\|x_n-x^*\|$ exists, and because $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup q$.
According to Step 1, we have
$$\|x_{n+1}-x^*\|^2 \le (1-A_n^{(p)})\|x_n^{(p-1)}-x^*\|^2 + A_n^{(p)}\|x^*\|^2,$$
and, for all $v = 1,2,\ldots,p-2$, the following holds:
$$\|x_{n+1}-x^*\|^2 \le \prod_{i=0}^{v}(1-A_n^{(p-i)})\|x_n^{(p-1-v)}-x^*\|^2 + \Big(A_n^{(p)}+\sum_{i=1}^{v}A_n^{(p-i)}\prod_{j=0}^{i-1}(1-A_n^{(p-j)})\Big)\|x^*\|^2 \le \prod_{i=0}^{p-1}(1-A_n^{(p-i)})\|x_n-x^*\|^2 + \Big(A_n^{(p)}+\sum_{i=1}^{p-1}A_n^{(p-i)}\prod_{j=0}^{i-1}(1-A_n^{(p-j)})\Big)\|x^*\|^2.$$
Let $W := \lim_{k\to\infty}\|x_{n_k}-x^*\|^2$. Under the conditions in Algorithm 3, it can be seen that $A_n^{(i)} \to 0$ as $n \to \infty$ for all $i \in \{1,2,\ldots,p\}$; then, we have
$$W \le \lim_{k\to\infty}\|x_{n_k}^{(i)}-x^*\|^2 \le W, \quad i = 1,2,\ldots,p,$$
which means that $\lim_{k\to\infty}\|x_{n_k}^{(i)}-x^*\|^2 = W$. According to Lemma 6, one directly obtains
$$\lim_{k\to\infty}\|y_{n_k}^{(i)} - T_i^{n_k}y_{n_k}^{(i)}\| = 0, \quad i = 1,2,\ldots,p.$$
Similar to the proof of Lemma 7, the following equalities hold:
$$\lim_{k\to\infty}\|x_{n_k}^{(1)}-x_{n_k}\| = 0, \qquad \lim_{k\to\infty}\|x_{n_k}^{(i)}-x_{n_k}^{(i-1)}\| = 0, \quad i = 2,3,\ldots,p.$$
The above equations lead to $x_{n_k}^{(i)} \rightharpoonup q$. Since the $T_i$ are all uniformly Lipschitz continuous operators, one has
$$\lim_{k\to\infty}\|x_{n_k}^{(i)} - T_i^{n_k}x_{n_k}^{(i)}\| = 0, \quad i = 1,2,\ldots,p.$$
Because each $T_i$ is uniformly asymptotically regular on $H$, we have
$$\lim_{k\to\infty}\|x_{n_k}^{(i)} - T_ix_{n_k}^{(i)}\| = 0, \quad i = 1,2,\ldots,p.$$
According to demiclosedness, we have $q \in \bigcap_{i=1}^{p}Fix(T_i)$ and
$$\limsup_{n\to\infty}\langle 0-x^*,\ x_n^{(i)}-x^*\rangle = \limsup_{k\to\infty}\langle 0-x^*,\ x_{n_k}^{(i)}-x^*\rangle = \langle 0-x^*,\ q-x^*\rangle \le 0$$
for every $i = 1,2,\ldots,p$. Now, we have $\limsup_{n\to\infty}\chi_n \le 0$. Note that since $\sum_{n=1}^{\infty}\alpha_n^{(i)} = \infty$, a simple calculation also gives $\sum_{n=1}^{\infty}(1-B_n) = \infty$. Together with (22) and Lemma 2, this implies $\|x_n-x^*\|^2 \to 0$.
(Case II) Similar to the proof of Theorem 1, put $\Psi_n = \|x_n-x^*\|^2$. If there does not exist a positive integer $N^*$ such that $\Psi_{n+1} \le \Psi_n$ for all $n \ge N^*$, then there exists a sequence $\{\Psi_{\tau(n)}\}$ such that $\Psi_{\tau(n)} \le \Psi_{\tau(n)+1}$ and $\Psi_n \le \Psi_{\tau(n)+1}$. Moreover, $\{\tau(n)\}$ is a nondecreasing sequence such that $\tau(n) \to \infty$ as $n \to \infty$.
Since $\Psi_{\tau(n)} \le \Psi_{\tau(n)+1}$, it follows from Lemma 6 that
$$\lim_{n\to\infty}\|y_{\tau(n)}^{(i)} - T_i^{\tau(n)}y_{\tau(n)}^{(i)}\| = 0, \quad i = 1,2,\ldots,p.$$
By Lemma 7, the following equalities hold:
$$\lim_{n\to\infty}\|x_{\tau(n)}^{(1)}-x_{\tau(n)}\| = 0, \qquad \lim_{n\to\infty}\|x_{\tau(n)}^{(i)}-x_{\tau(n)}^{(i-1)}\| = 0, \quad i = 2,3,\ldots,p.$$
Thus, we may assume that $x_{\tau(n)}^{(i)} \rightharpoonup q$. Since $\|y_{\tau(n)}^{(i)} - x_{\tau(n)}^{(i)}\| \to 0$, one gets
$$\lim_{n\to\infty}\|x_{\tau(n)}^{(i)} - T_i^{\tau(n)}x_{\tau(n)}^{(i)}\| = 0, \quad i = 1,2,\ldots,p.$$
By the uniform asymptotic regularity of $T_i$, one has $\|x_{\tau(n)}^{(i)} - T_ix_{\tau(n)}^{(i)}\| \to 0$, and, using the demiclosedness principle again, we have $x_{\tau(n)} \rightharpoonup q \in \Xi$ and
$$\limsup_{n\to\infty}\langle 0-x^*,\ x_{\tau(n)+1}^{(i)}-x^*\rangle = \limsup_{n\to\infty}\langle 0-x^*,\ x_{\tau(n)}^{(i)}-x^*\rangle = \langle 0-x^*,\ q-x^*\rangle \le 0$$
for any $i \in \{1,2,\ldots,p\}$. Then, according to (22), we have
$$\Psi_{\tau(n)+1} \le B_{\tau(n)}\Psi_{\tau(n)} + (1-B_{\tau(n)})\chi_{\tau(n)},$$
which can be rearranged as
$$(1-B_{\tau(n)})\Psi_{\tau(n)+1} + B_{\tau(n)}(\Psi_{\tau(n)+1}-\Psi_{\tau(n)}) \le (1-B_{\tau(n)})\chi_{\tau(n)}.$$
Recalling that $\Psi_{\tau(n)} \le \Psi_{\tau(n)+1}$, it is easy to see that
$$\Psi_{\tau(n)+1} \le \chi_{\tau(n)} = \sup_{i\in\{1,2,\ldots,p\}}2\langle 0-x^*,\ x_{\tau(n)}^{(i)}-x^*\rangle \to 0.$$
Finally, as $\Psi_n \le \Psi_{\tau(n)+1}$, we also get $\Psi_n \to 0$, which means that $\{x_n\}$ converges strongly to the solution $x^* = P_\Xi 0$ of (3).    □
Like the implicit one-step Algorithm 1, the multistep implicit Algorithm 3 can also be simplified to the multistep explicit Algorithm 4. Letting $\eta_n^{(i)} \equiv 1$ for every $i \in \{1,2,\ldots,p\}$, one easily obtains the following Corollary 2 for Algorithm 4.
Algorithm 4 Novel multistep explicit iteration for CSOS
  • Choose an initial point x 1 H , and for any n N do the following:
    x n ( 1 ) = ( 1 α n ( 1 ) β n ( 1 ) γ n ( 1 ) ) S 1 x n + β n ( 1 ) x n + γ n ( 1 ) δ n ( 1 ) x n ( 1 ) + ( 1 δ n ( 1 ) ) T 1 n x n ( 1 ) , for i = 2 , 3 , , p , x n ( i ) = ( 1 α n ( i ) β n ( i ) γ n ( i ) ) S i x n ( i 1 ) + β n ( i ) x n ( i 1 ) + γ n ( i ) δ n ( i ) x n ( i ) + ( 1 δ n ( i ) ) T i n x n ( i ) , x n + 1 = x n ( p ) .
  • The real sequences {α_n^{(i)}}, {β_n^{(i)}}, {γ_n^{(i)}}, and {δ_n^{(i)}} satisfy the following conditions:
(i)
α n ( i ) , β n ( i ) , γ n ( i ) , δ n ( i ) are all in [ 0 , 1 ] ;
(ii)
α_n^{(i)} + γ_n^{(i)} ≤ 1 − α_n^{(i)} − β_n^{(i)} − γ_n^{(i)} ≤ 1 − c_i;
(iii)
α_n^{(i)}/γ_n^{(i)} → 0, and ∑_{j=1}^{∞} α_j^{(i)} = ∞;
(iv)
max{κ^{(i)}, 1 − 1/(M_i − 1)} + ϵ_i ≤ δ_n^{(i)} ≤ 1 − ϵ_i; here, M_i = sup_n {(k_n^{(i)})²} and ϵ_i > 0 is a constant.
Corollary 2.
For all i ∈ {1, 2, …, p}, let S_i : H → H be a c_i-contraction operator and T_i : H → H be an L_i-uniformly Lipschitzian-continuous and (k_n^{(i)}, κ^{(i)})-asymptotically demicontractive operator. Moreover, suppose that T_i is uniformly asymptotically regular on H and that I − T_i is demiclosed at 0. Then, {x_n} converges strongly to a common solution of (3) if it is generated by Algorithm 4.

4. Applications

In this section, we first give two numerical experiments to show the efficiency of the algorithms proposed in this paper. Then, by applying the main results, we solve the nonlinear optimization problem GCSVIOE corresponding to (2). All codes are written in Matlab 2020a and run on a laptop with a 2.5 GHz Intel Core i5 processor.
Example 6.
Let H = ℝ, let S(x) := x/2, and let T be defined by (13) with r = 3/4. It is not difficult to verify that {0} is the only common solution of (4). Note that S and T satisfy all conditions in Theorem 1 and Corollary 1; thus, the sequences generated by Algorithms 1 and 2 both converge to 0.
Example 7.
Let H = ℝ and, for i ∈ {1, 2, …, 10}, let S_i(x) := (0.5 − 0.02 i)x and let T_i be defined by (13) with r_i = 0.6 − 0.02 i. The common solution of (3) is {0}. It is easy to see that S_i and T_i satisfy all conditions in Theorem 2 and Corollary 2, so the sequences produced by Algorithms 3 and 4 converge to 0.
We compare the convergence speed of Algorithms 1–4 to YKMSA [14] (i.e., (8)) through Examples 6 and 7. The parameters are set as follows. In the proposed algorithms, set α_n = 1/(9n + 1), β_n = 1/4, γ_n = n/(9n + 9), δ_n = 0.99 − 1/(4n + 1), and η_n = 1 − 1/n, and let α_n^{(i)} = α_n, β_n^{(i)} = β_n, γ_n^{(i)} = γ_n, δ_n^{(i)} = δ_n, and η_n^{(i)} = η_n. For YKMSA, set a_n = b_n = (n² − 1)/(2n²), c_n = 1/n², and u_n = 1 + 1/n according to Lemma 3.1 of [14]. The stopping criterion is ‖x_{n+1} − x_n‖ ≤ 10^{−10} or a maximum of 10³ iterations. The initial points are taken as 0.6 and 0.9 for Example 6 and 0.5 for Example 7. The numerical results are shown in Figure 1, Figure 2 and Figure 3. As the figures show, both the one-step and multistep algorithms involving implicit rules perform slightly better than the explicit algorithms. Furthermore, since the one-step and multistep YKMSAs [14] contain Halpern-type constants, the algorithms proposed in this article converge much faster than YKMSA.
According to (2), we have shown that GCSVIOE is equivalent to a CSOS. Now, we give the following multistep implicit Algorithm 5 and Theorem 3 to find the solution of GCSVIOE.
Algorithm 5 Multistep implicit iteration for GCSVIOE
  • Choose an initial point x 1 H , and for any n N do the following:
    z_n^{(1)} = η_n^{(1)} z_n + (1 − η_n^{(1)}) x_n^{(1)},
    y_n^{(1)} = δ_n^{(1)} z_n^{(1)} + (1 − δ_n^{(1)}) P_{Q_1}(z_n^{(1)} − r F_1 z_n^{(1)}),
    x_n^{(1)} = (1 − α_n^{(1)} − β_n^{(1)} − γ_n^{(1)}) S_1(x_n) + β_n^{(1)} x_n + γ_n^{(1)} y_n^{(1)},
    for i = 2, 3, …, p,
    z_n^{(i)} = η_n^{(i)} z_n^{(i−1)} + (1 − η_n^{(i)}) x_n^{(i)},
    y_n^{(i)} = δ_n^{(i)} z_n^{(i)} + (1 − δ_n^{(i)}) P_{Q_i}(z_n^{(i)} − r F_i z_n^{(i)}),
    x_n^{(i)} = (1 − α_n^{(i)} − β_n^{(i)} − γ_n^{(i)}) S_i(x_n^{(i−1)}) + β_n^{(i)} x_n^{(i−1)} + γ_n^{(i)} y_n^{(i)},
    x_{n+1} = x_n^{(p)}.
  • The real sequences {α_n^{(i)}}, {β_n^{(i)}}, {γ_n^{(i)}}, {δ_n^{(i)}}, and {η_n^{(i)}} satisfy the following conditions:
(i)
α n ( i ) , β n ( i ) , γ n ( i ) , δ n ( i ) , η n ( i ) are all in [ 0 , 1 ] ;
(ii)
α_n^{(i)} + γ_n^{(i)} ≤ 1 − α_n^{(i)} − β_n^{(i)} − γ_n^{(i)} ≤ 1 − c_i;
(iii)
α_n^{(i)}/γ_n^{(i)} → 0, η_n^{(i)} → 1, and ∑_{j=1}^{∞} α_j^{(i)} = ∞;
(iv)
ϵ_i ≤ δ_n^{(i)} ≤ 1 − ϵ_i; here, ϵ_i > 0 is a positive constant;
(v)
r ∈ (0, min_{i∈{1,2,…,p}} 2σ_i/μ_i²).
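Condition (v) reflects the standard fact used below in the proof of Theorem 3: if F is ν-inverse strongly monotone with ν = σ_i/μ_i², then P_{Q_i}(I − rF) is nonexpansive whenever r ∈ (0, 2ν). A minimal numerical check in Python, with a hypothetical F(x) = Lx on Q = [−1, 1] (such an F is (1/L)-inverse strongly monotone):

```python
import random

L_const = 4.0          # F(x) = L*x is (1/L)-inverse strongly monotone
nu = 1.0 / L_const
r = 0.9 * (2 * nu)     # any r in (0, 2*nu), as in condition (v)

def proj(x, lo=-1.0, hi=1.0):
    """Projection onto the closed interval Q = [lo, hi]."""
    return max(lo, min(hi, x))

def T(x):
    # T = P_Q(I - r F): the operator used to rewrite the VI as a fixed point.
    return proj(x - r * L_const * x)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    # Nonexpansiveness: |T x - T y| <= |x - y|.
    assert abs(T(x) - T(y)) <= abs(x - y) + 1e-12
print("nonexpansive check passed; fixed point:", T(0.0))
```

Here I − rF scales by 1 − rL = −0.8, whose modulus is at most 1, and the projection is itself nonexpansive, so the composition is nonexpansive, in line with condition (v).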
Theorem 3.
For all i ∈ {1, 2, …, p}, suppose that S_i : H → H is a c_i-contraction operator and F_i : Q_i → H is a σ_i/μ_i²-inverse strongly monotone operator, where Q_i is a nonempty closed convex subset of H. Then, the sequence {x_n} generated by Algorithm 5 converges strongly to the solution of (2); that is, GCSVIOE is solved by Algorithm 5.
Proof. 
For any i ∈ {1, 2, …, p}, let T_i := P_{Q_i}(I − r F_i). Then, according to [26], T_i is a nonexpansive operator for all i ∈ {1, 2, …, p}. Thus, for each i ∈ {1, 2, …, p}, T_i is also 1-Lipschitzian-continuous and (1, 0)-asymptotically demicontractive, and I − T_i is demiclosed at 0 [27]. Recalling Example 5, one can easily see that T_i is also uniformly asymptotically regular for all i ∈ {1, 2, …, p}. Hence, T_i (i = 1, 2, …, p) satisfies all the conditions required in Theorem 2.
Then, by directly applying Theorem 2, one sees that the sequence {x_n} generated by Algorithm 5 converges strongly to a point in ⋂_{i∈{1,2,…,p}} (Fix(S_i) ∩ Fix(T_i)). This completes the proof. □

5. Conclusions

In this paper, to answer Question 1, we propose a brand-new multistep algorithm (i.e., Algorithm 3) for solving CSOSs (3), which are closely related to nonlinear optimization problems. We first give two strong convergence theorems for both one-step and multistep iterations, utilizing the implicit rule for CSOSs that involve asymptotically demicontractive operators. To show the efficiency of the proposed algorithms, two numerical simulations on single-set and multi-set CSOSs are given in Section 4, and the main results are applied to GCSVIOE.
However, the following two areas are worthy of future research:
(i)
The common solution problem for finite or infinite families of asymptotically demicontractive operators requires in-depth exploration.
(ii)
Convergence for CSOSs involving multivalued operators—see [28].

Author Contributions

Conceptualization, H.-Y.X. and H.-Y.L.; methodology, H.-Y.X.; software, H.-Y.X.; validation, H.-Y.X. and H.-Y.L.; formal analysis, H.-Y.X.; investigation, H.-Y.X.; resources, H.-Y.X.; data curation, H.-Y.X.; writing—original draft preparation, H.-Y.X.; writing—review and editing, H.-Y.L.; visualization, H.-Y.X.; supervision, H.-Y.L.; project administration, H.-Y.L.; funding acquisition, H.-Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Teaching Construction Project of Postgraduates, Sichuan University of Science & Engineering, grant number YZ202211, and the Innovation Fund of Postgraduates, Sichuan University of Science & Engineering, grant number Y2022188.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to express our thanks to the anonymous referees and editors for their valuable comments and helpful suggestions to improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CSOSs: Common solutions of operator systems
MSIIA: Multistep implicit iterative algorithm
GCSVIOEs: General common solutions to variational inequalities and operator equations
SCFPP: Split common fixed point problem
YKMSA: Multistep approximation algorithm proposed by Yolacan and Kiziltunc

References

  1. Censor, Y.; Gibali, A.; Reich, S. A von Neumann alternating method for finding common solutions to variational inequalities. Nonlinear Anal. Theory Methods Appl. 2012, 75, 4596–4603. [Google Scholar] [CrossRef]
  2. Gu, F.; He, Z. Multi-step iterative process with errors for common solutions of a finite family of nonexpansive operators. Math. Commun. 2006, 11, 47–54. [Google Scholar]
  3. Censor, Y.; Segal, A. The split common solution problem for directed operators. J. Convex. Anal. 2009, 16, 587–600. [Google Scholar]
  4. Zhao, J.; Wang, H.; Zhao, N. Accelerated cyclic iterative algorithms for the multiple-set split common fixed-point problem of quasi-nonexpansive operators. J. Nonlinear Var. Anal. 2023, 7, 1–22. [Google Scholar]
  5. Wang, F. A new iterative method for the split common solution problem in Hilbert spaces. Optimization 2017, 66, 407–415. [Google Scholar] [CrossRef]
  6. Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H. The evolving Planck mass in classically scale-invariant theories. J. High Energy Phys. 2017, 4, 26. [Google Scholar] [CrossRef]
  7. Benaceur, A.; Ern, A.; Ehrlacher, V. A reduced basis method for parametrized variational inequalities applied to contact mechanics. Int. J. Numer. Methods Eng. 2019, 121, 1170–1197. [Google Scholar] [CrossRef]
  8. Nwaigwe, C.; Benedict, D.N. Generalized Banach fixed-point theorem and numerical discretization for nonlinear Volterra-Fredholm equations. J. Comput. Appl. Math. 2023, 425, 115019. [Google Scholar] [CrossRef]
  9. Acemoglu, D.; Akcigit, U.; Alp, H.; Bloom, N.; Kerr, W. Innovation, reallocation and growth. Am. Econ. Rev. 2018, 108, 3450–3491. [Google Scholar] [CrossRef]
  10. Chan, S.H.; Wang, X.; Elgendy, O.A. Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 2016, 3, 84–98. [Google Scholar] [CrossRef]
  11. Scellier, B.; Bengio, Y. Equilibrium propagation: Bridging the gap between energy-based models and backpropagation. Front. Comput. Neurosci. 2016, 11, 24. [Google Scholar] [CrossRef] [PubMed]
  12. Wu, Z.H.; Pan, S.R.; Chen, F.W.; Long, G.D.; Zhang, C.Q.; Yu, P.S. A comprehensive survey on graph neural networks. IEEE Trans. Neural. Netw. Learn. Syst. 2019, 32, 4–24. [Google Scholar] [CrossRef]
  13. Wang, Y.; Fang, X.; Kimi, T.H. A new algorithm for common fixed-point problems of a finite family of asymptotically demicontractive operators and its applications. J. Nonlinear Convex. Anal. 2020, 21, 1875–1887. [Google Scholar]
  14. Yolacan, E.; Kiziltunc, H. Convergence theorems for a finite family of nonexpansive and total asymptotically nonexpansive operators. Hacet. J. Math. Stat. 2012, 41, 657–673. [Google Scholar]
  15. Thuy, N.T.T.; Hieu, P.T. Implicit iteration methods for variational inequalities in Banach spaces. Bull. Malays. Math. Sci. Soc. 2013, 36, 917–926. [Google Scholar]
  16. Preechasilp, P. Viscosity approximation methods for implicit midpoint rule of nonexpansive mappings in geodesic spaces. Bull. Malays. Math. Sci. Soc. 2018, 41, 1561–1579. [Google Scholar] [CrossRef]
  17. Xiong, T.; Lan, H. General modified viscosity implicit rules for generalized asymptotically nonexpansive operators in complete CAT(0) spaces. J. Inequal. Appl. 2019, 2019, 176. [Google Scholar] [CrossRef]
  18. Vaish, R.; Kalimuddin-Ahmad, M.D. Hybrid viscosity implicit scheme for variational inequalities over the fixed point set of an asymptotically nonexpansive operator in the intermediate sense in Banach spaces. Appl. Numer. Math. 2021, 160, 296–312. [Google Scholar] [CrossRef]
  19. Xu, H.; Lan, H.; Zhang, F. General semi-implicit approximations with errors for common solutions of nonexpansive-type operators and applications to Stampacchia variational inequality. Comput. Appl. Math. 2022, 41, 190. [Google Scholar] [CrossRef]
  20. Aibinu, M.; Kim, J. On the rate of convergence of viscosity implicit iterative algorithms. Nonlinear Funct. Anal. Appl. 2020, 25, 135–152. [Google Scholar]
  21. Wang, Y.; Song, Y.; Fang, X. Strong convergence of the split equality fixed point problem for asymptotically quasi-pseudocontractive operators. J. Nonlinear Convex Anal. 2020, 21, 63–75. [Google Scholar]
  22. Zhou, H.; Gao, G.; Tan, B. Convergence theorems of a modified hybrid algorithm for a family of quasi-ϕ-asymptotically nonexpansive operators. J. Appl. Math. Comput. 2010, 32, 453–464. [Google Scholar] [CrossRef]
  23. Kim, D.H. Demiclosedness principle for continuous TAN operators. Ph.D. Thesis, Pukyong National University, Busan, Republic of Korea, 2010. [Google Scholar]
  24. Xu, H. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
  25. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912. [Google Scholar] [CrossRef]
  26. Takahashi, W.; Toyoda, M. Weak convergence theorems for nonexpansive operators and monotone operators. J. Optim. Theory Appl. 2003, 118, 417–428. [Google Scholar] [CrossRef]
  27. Gebrie, A.G. Weak and strong convergence adaptive algorithms for generalized split common solution problems. Optimization 2021, 71, 3711–3736. [Google Scholar] [CrossRef]
  28. Mewomo, O.T.; Okeke, C.C.; Ogbuisi, F.U. Iterative solutions of split fixed point and monotone inclusion problems in Hilbert spaces. J. Appl. Numer. Optim. 2023, 5, 271–285. [Google Scholar]
Figure 1. Results of Algorithms 1 and 2, and YKMSA for Example 6 with an initial point of x 0 = 0.6 .
Figure 2. Results of Algorithms 1 and 2, and YKMSA for Example 6 with an initial point of x 0 = 0.9 .
Figure 3. Results of Algorithms 3 and 4, and YKMSA for Example 7 with an initial point of x 0 = 0.5 .