Article

Hybrid Inertial Self-Adaptive Iterative Methods for Split Variational Inclusion Problems

by Doaa Filali 1, Mohammad Dilshad 2,*, Atiaf Farhan Yahya Alfaifi 2 and Mohammad Akram 3,*
1 Department of Mathematical Science, College of Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 Department of Mathematics, Faculty of Science, University of Tabuk, P.O. Box 741, Tabuk 71491, Saudi Arabia
3 Department of Mathematics, Faculty of Science, Islamic University of Madinah, P.O. Box 170, Madinah 42351, Saudi Arabia
* Authors to whom correspondence should be addressed.
Axioms 2025, 14(5), 373; https://doi.org/10.3390/axioms14050373
Submission received: 11 March 2025 / Revised: 11 May 2025 / Accepted: 12 May 2025 / Published: 15 May 2025
(This article belongs to the Section Mathematical Analysis)

Abstract:
Herein, we present two hybrid inertial self-adaptive iterative methods for determining the combined solution of the split variational inclusions and fixed-point problems. Our methods include viscosity approximation, fixed-point iteration, and inertial extrapolation in the initial step of each iteration. We employ two self-adaptive step sizes to compute the iterative sequence, which do not require the pre-calculated norm of a bounded linear operator. We prove strong convergence theorems to approximate the common solution of the split variational inclusions and fixed-point problems. Further, we implement our methods and results to examine split variational inequality and split common fixed-point problems. Finally, we illustrate our methods and compare them with some known methods existing in the literature.

1. Introduction

Fixed-point theory provides a coherent and logical framework for a wide range of nonlinear interdisciplinary problems, including differential equations, control theory, game theory, variational inequalities, equilibrium problems, optimization problems, and split feasibility problems. Over the last few years, fixed-point theory has become an active research area, leading to the design and development of efficient, effective, flexible, and easily implementable methods for approximating the solutions of nonlinear and inverse problems. The fixed-point problem (FPP) of a self-mapping Z : U → U is defined by
find v ∈ U so that Z(v) = v. (1)
Numerous methods have been used to address fixed-point problems. Among them, the majority of methods used to approximate the fixed points are motivated by Mann’s iterative method [1]. In order to obtain a fast convergence rate, Moudafi [2] introduced the viscosity approximation technique by blending Z with a contraction mapping.
The first split problem, namely the split feasibility problem, was initially presented by Censor and Elfving [3]. A more general formulation, the split inverse problem, was studied by Censor et al. [4]. Because of their relevance to mathematical models of real-life problems appearing in cancer therapy [3,5], image restoration [6], computerized tomography, and data compression [7,8], several inverse problems and methods for solving them have been developed and studied in the last few years. Moudafi [9] explored the split monotone variational inclusion problem (SplitMVIP) in the framework of Hilbert spaces. Byrne et al. [10] introduced the split common null-point problem (SplitCNPP). A special case of (SplitCNPP) is the split variational inclusion problem (SplitVIP), which is defined by
find v ∈ U so that 0 ∈ M(v) and u = Bv ∈ U solves 0 ∈ N(u),
where M : U → 2^U and N : U → 2^U are multi-valued monotone operators, B : U → U is a bounded linear operator, and U is a Hilbert space. Using the fact that a zero of the monotone operator M is a fixed point of the resolvent of M, that is, 0 ∈ M(v) ⇔ v = J_λ^M(v), Byrne et al. [10] suggested the following method for (SplitVIP):
v_{n+1} = J_μ^M[v_n − η B*(I − J_μ^N)Bv_n], (2)
where B* is the adjoint of B, μ > 0, η ∈ (0, 2/Q), and Q = ‖B*B‖. Based on the iterative method (2), numerous iterative methods have been developed and studied to solve (SplitVIP). Kazmi and Rizvi [11] extended method (2) to investigate the common solution of (SplitVIP) and (FPP) as follows:
u_n = J_μ^M[v_n + η B*(J_μ^N − I)Bv_n],  v_{n+1} = ζ_n F(v_n) + (1 − ζ_n)S u_n, (3)
where F is a contraction, η ∈ (0, 1/Q), and {ζ_n} ⊂ (0, 1) is a real sequence such that lim_{n→∞} ζ_n = 0, ∑_{n=1}^∞ ζ_n = ∞, and ∑_{n=1}^∞ |ζ_n − ζ_{n−1}| < ∞. Akram et al. [12] modified method (3) in the following manner to study the same problem:
u_n = v_n − η[(I − J_{μ_1}^M)v_n + B*(I − J_{μ_2}^N)Bv_n],  v_{n+1} = ζ_n F(v_n) + (1 − ζ_n)S u_n, (4)
where μ_1 > 0, μ_2 > 0, and η = 1/(1 + ‖B‖²). Some other iterative methods for solving (SplitVIP) and (FPP) can also be seen in [13,14,15,16] and the references therein.
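To fix ideas, iteration (2) can be sketched numerically. The operators below (M(v) = v, N(u) = 3u, and B the identity on ℝ²) are illustrative assumptions chosen so that the resolvents reduce to simple scalings and the unique common solution is v = 0; they are not the setting of [10].

```python
import numpy as np

# Sketch of iteration (2) for (SplitVIP) under toy assumptions:
# M(v) = v and N(u) = 3u (both monotone), B = I on R^2, so the
# unique common solution is v = 0.
mu = 0.25
B = np.eye(2)

def J_M(v):                      # resolvent (I + mu*M)^(-1) with M(v) = v
    return v / (1.0 + mu)

def J_N(u):                      # resolvent (I + mu*N)^(-1) with N(u) = 3u
    return u / (1.0 + 3.0 * mu)

Q = np.linalg.norm(B.T @ B, 2)   # this step size needs ||B* B||
eta = 1.0 / Q                    # any eta in (0, 2/Q)

v = np.array([5.0, -3.0])
for _ in range(200):
    v = J_M(v - eta * B.T @ (B @ v - J_N(B @ v)))

print(np.linalg.norm(v))         # essentially 0
```

Note that the step size η is tied to the pre-computed norm ‖B*B‖; removing exactly this requirement is the point of the self-adaptive methods discussed next.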
The common disadvantage of these methods is the calculation of the step size, which depends on the operator norm ‖B*B‖, and computing ‖B*B‖ is a challenging task. To address this challenge, researchers have developed iterative methods that eliminate the estimation of ‖B*B‖. López et al. [17] investigated split feasibility problems without knowing the norm of the matrix. Dilshad et al. [18] studied the split common null-point problem without a prior estimate of the operator norm as follows:
u_n = v_n − J_{μ_1}^M v_n + B*(I − J_{μ_2}^N)Bv_n,  v_{n+1} = ζ_n u + (1 − ζ_n)(v_n − η_n u_n), (5)
for some fixed u U and
η_n = (‖v_n − J_{μ_1}^M v_n‖² + ‖(I − J_{μ_2}^N)Bv_n‖²) / ‖v_n − J_{μ_1}^M v_n + B*(I − J_{μ_2}^N)Bv_n‖². (6)
In this direction, several research papers have caught the attention of researchers: see [19,20,21,22] and references therein.
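The norm-free step size (6) is built entirely from residuals that are computed anyway, so ‖B‖ never appears. A minimal sketch of method (5) with step size (6), under toy assumptions of our own (M(v) = v, N(u) = u, B = I on ℝ², Halpern anchor u = 0 — not the setting of [18]):

```python
import numpy as np

# Sketch of method (5) with the self-adaptive step size (6).
# Toy assumptions: M(v) = v, N(u) = u, B = I on R^2,
# Halpern anchor u = 0; the solution is v = 0.
mu1 = mu2 = 0.25
B = np.eye(2)
anchor = np.zeros(2)                     # the fixed point "u" in (5)

J_M = lambda v: v / (1.0 + mu1)          # resolvent of M(v) = v
J_N = lambda u: u / (1.0 + mu2)          # resolvent of N(u) = u

v = np.array([4.0, 1.0])
for n in range(1, 101):
    zeta = 1.0 / (n + 1)
    rM = v - J_M(v)                      # v_n - J^M v_n
    rN = B @ v - J_N(B @ v)              # (I - J^N) B v_n
    u_n = rM + B.T @ rN                  # the residual u_n in (5)
    denom = np.dot(u_n, u_n)
    if denom == 0:                       # exact solution reached
        break
    eta = (np.dot(rM, rM) + np.dot(rN, rN)) / denom   # step size (6)
    v = zeta * anchor + (1.0 - zeta) * (v - eta * u_n)

print(np.linalg.norm(v))                 # essentially 0
```

The design choice is visible in the `eta` line: both the numerator and denominator are norms of quantities the iteration already produces, so no spectral computation on B is ever needed.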
To obtain fast convergence of iterative algorithms, Alvarez and Attouch [23] introduced a new algorithm, named the inertial proximal point algorithm, for estimating the solution of variational inclusions. The sequence generated by the inertial proximal point method is observed to converge rapidly because of its design. As a result, numerous researchers have employed the inertial term, since it plays a crucial role in accelerating convergence; see [24,25,26,27,28] and the references therein.
In continuation to the above study, our aim is to present two hybrid inertial self-adaptive iterative methods to estimate the common solution of (SplitVIP) and ( FPP ), which can be summarized as follows:
  • Our motive is to introduce fast and traditionally different viscosity methods to estimate the common solution of (SplitVIP) and ( FPP ). Unlike method (3) and method (4) [or method (5)], our hybrid algorithms compute the viscosity approximation and fixed-point iteration [or Halpern-type iteration] in the initial step of each iteration.
  • To accelerate the convergence, we also add the inertial term in the initial step of the iteration. Therefore, in the first step, we compute the inertial extrapolation, fixed-point iteration, and viscosity approximation all at the same time.
  • In method (3) and method (4), the pre-calculated norm of B is essential, which is a tedious task. However, we are using two self-adaptive step-sizes, which do not require the pre-calculated norms of a bounded linear operator B.
  • Our methods are efficient and an accelerated version of method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18], which is demonstrated by numerical examples.

2. Preliminaries

Throughout the text, U denotes a real Hilbert space and D a closed and convex subset of U. We denote the strong and weak convergence of a sequence {v_n} to v by v_n → v and v_n ⇀ v, respectively.
For all η_1, η_2, η_3 ∈ U and t_1, t_2, t_3 ∈ [0, 1] such that t_1 + t_2 + t_3 = 1, the following equality and inequality hold:
‖t_1η_1 + t_2η_2 + t_3η_3‖² = t_1‖η_1‖² + t_2‖η_2‖² + t_3‖η_3‖² − t_1t_2‖η_1 − η_2‖² − t_2t_3‖η_2 − η_3‖² − t_3t_1‖η_3 − η_1‖² (7)
and
‖η_1 ± η_2‖² = ‖η_1‖² ± 2⟨η_1, η_2⟩ + ‖η_2‖² ≤ ‖η_1‖² ± 2⟨η_2, η_1 ± η_2⟩. (8)
Definition 1.
A mapping Z : U → U is said to be
(1) averaged if there exist a nonexpansive mapping f : U → U and α ∈ (0, 1) such that Z = (1 − α)I + αf;
(2) Lipschitz continuous if there exists θ > 0 such that ‖Z(ϱ_1) − Z(ϱ_2)‖ ≤ θ‖ϱ_1 − ϱ_2‖ for all ϱ_1, ϱ_2 ∈ U;
(3) a contraction if ‖Z(ϱ_1) − Z(ϱ_2)‖ ≤ θ‖ϱ_1 − ϱ_2‖ for all ϱ_1, ϱ_2 ∈ U, for some θ ∈ (0, 1);
(4) nonexpansive if ‖Z(ϱ_1) − Z(ϱ_2)‖ ≤ ‖ϱ_1 − ϱ_2‖ for all ϱ_1, ϱ_2 ∈ U;
(5) firmly nonexpansive if ‖Z(ϱ_1) − Z(ϱ_2)‖² ≤ ⟨ϱ_1 − ϱ_2, Z(ϱ_1) − Z(ϱ_2)⟩ for all ϱ_1, ϱ_2 ∈ U;
(6) κ-inverse strongly monotone (κ-ism) if there exists κ > 0 such that
⟨Z(ϱ_1) − Z(ϱ_2), ϱ_1 − ϱ_2⟩ ≥ κ‖Z(ϱ_1) − Z(ϱ_2)‖² for all ϱ_1, ϱ_2 ∈ U;
(7) monotone if
⟨Z(ϱ_1) − Z(ϱ_2), ϱ_1 − ϱ_2⟩ ≥ 0 for all ϱ_1, ϱ_2 ∈ U.
Definition 2
([29]). Let N : U → 2^U be a set-valued mapping. Then,
(1) N is called monotone if ⟨ϱ_1 − ϱ_2, η_1 − η_2⟩ ≥ 0 for all ϱ_1, ϱ_2 ∈ U, η_1 ∈ N(ϱ_1), η_2 ∈ N(ϱ_2);
(2) Graph(N) = {(ϱ_1, η_1) ∈ U × U : η_1 ∈ N(ϱ_1)};
(3) N is said to be maximal monotone if N is monotone and (I + μN)(U) = U for every μ > 0, where I is the identity mapping on U;
(4) the resolvent of N is defined by J_μ^N = (I + μN)^{−1}, where I is the identity mapping and μ > 0.
Remark 1.
(1) 
It can be easily seen that a κ-inverse strongly monotone mapping is also monotone and (1/κ)-Lipschitz continuous.
(2) 
Every averaged mapping is nonexpansive, but the converse need not be true in general.
(3) 
The operator Z is firmly nonexpansive if and only if I − Z is firmly nonexpansive.
(4) 
The composition of two averaged operators is also averaged.
Remark 2.
(1) The resolvent J_μ^N of a maximal monotone mapping N is single-valued, nonexpansive, and firmly nonexpansive for any μ > 0.
(2) The resolvent J_μ^N is firmly nonexpansive if and only if
‖J_μ^N ϱ_1 − J_μ^N ϱ_2‖² ≤ ‖ϱ_1 − ϱ_2‖² − ‖(I − J_μ^N)ϱ_1 − (I − J_μ^N)ϱ_2‖², for all ϱ_1, ϱ_2 ∈ U.
(3) The operator I − J_μ^N is nonexpansive and so it is demiclosed at zero.
(4) If N : U → 2^U is monotone, then J_μ^N and I − J_μ^N are firmly nonexpansive for μ > 0, where J_μ^N is the resolvent of N.
Lemma 1
([30]). Let D be a closed and convex subset of a Hilbert space U, and let Z : D → D be a nonexpansive mapping such that
(1) Fix(Z) ≠ ∅;
(2) the sequence {v_n} ⇀ v and lim_{n→∞} ‖Z(v_n) − v_n‖ = 0.
Then, Z(v) = v.
Lemma 2
([31]). If {v_n} is a sequence of non-negative real numbers satisfying
v_{n+1} ≤ (1 − ψ_n)v_n + ψ_n φ_n, n ≥ 0,
where {ψ_n} is a sequence in (0, 1) and {φ_n} is a sequence of real numbers such that
(1) lim_{n→∞} ψ_n = 0 and ∑_{n=1}^∞ ψ_n = ∞;
(2) lim sup_{n→∞} φ_n ≤ 0;
then lim_{n→∞} v_n = 0.
Lemma 3
([32]). Suppose D is a closed and convex subset of U. If the sequence {v_n} ⊆ U satisfies the following:
(1) lim_{n→∞} ‖v_n − v‖ exists for all v ∈ D;
(2) any weak cluster point of {v_n} belongs to D;
then there exists v ∈ D such that v_n ⇀ v.
Lemma 4
([33]). Let {v_n} be a real sequence that does not decrease at infinity, in the sense that there exists a subsequence {v_{n_k}} of {v_n} such that v_{n_k} < v_{n_k+1} for all k ≥ 0. Consider the sequence of integers {ε(n)}_{n ≥ n_0} defined by
ε(n) = max{k ≤ n : v_k ≤ v_{k+1}}.
Then, {ε(n)}_{n ≥ n_0} is a nondecreasing sequence verifying lim_{n→∞} ε(n) = ∞ and, for all n ≥ n_0,
max{v_{ε(n)}, v_n} ≤ v_{ε(n)+1}.

3. Main Results

The solution sets of (SplitVIP) and (FPP) are denoted by Δ_1 and Δ_2, respectively. To establish the convergence of the suggested methods, we make the following assumptions:
(X1) F : U → U is a θ-contraction;
(X2) M, N : U → 2^U are monotone operators and Z : U → U is a nonexpansive mapping;
(X3) {ζ_n} is a sequence in (0, 1) such that lim_{n→∞} ζ_n = 0 and ∑_{n=1}^∞ ζ_n = ∞;
(X4) {ϕ_n} is a positive and bounded sequence such that lim_{n→∞} ϕ_n/ζ_n = 0;
(X5) the common solution set of (SplitVIP) and (FPP) is expressed by Δ_1 ∩ Δ_2, and Δ_1 ∩ Δ_2 ≠ ∅.
Now, we are in a position to design our hybrid Algorithm 1. It is constructed in such a way that the initial step iterates the inertial extrapolation term τ_n(v_n − v_{n−1}) combined with the viscosity approximation. We implement our hybrid Algorithm 1 to estimate the common solution of (SplitVIP) and (FPP).
Algorithm 1. Hybrid Algorithm 1
Choose λ > 0, μ > 0, τ ∈ [0, 1), and 0 < ς_n < 2. Select initial points v_0 and v_1 and fix n = 0.
Iterative Step: For n ≥ 1, given the iterates v_n and v_{n−1}, select 0 < τ_n ≤ τ̄_n, where
τ̄_n = min{ϕ_n/‖v_n − v_{n−1}‖, τ} if v_n ≠ v_{n−1}, and τ̄_n = τ otherwise. (9)
Compute
s_n = ζ_n F(v_n) + (1 − ζ_n)Z(v_n) + τ_n(v_n − v_{n−1}), (10)
u_n = s_n − λ_n(I − J_λ^M)(s_n), (11)
v_{n+1} = u_n − μ_n B*(I − J_μ^N)(Bu_n), (12)
where λ_n and μ_n are defined by
λ_n = ς_n‖(I − J_λ^M)s_n‖ / ‖(I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n‖ if (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n ≠ 0, and λ_n = 0 otherwise, (13)
and
μ_n = ς_n‖(I − J_μ^N)Bu_n‖ / ‖(I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n‖ if (I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n ≠ 0, and μ_n = 0 otherwise. (14)
If v_{n+1} = u_n = v_n = s_n, then stop; otherwise, set n = n + 1 and go back to the computation.
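Algorithm 1 can be sketched in a few lines of code. All choices below (M(v) = v, N(u) = u, B = I on ℝ², Z the identity, F(v) = v/2, and the parameter values) are illustrative assumptions for the demo, not the authors' experimental setup; the common solution set is {0}.

```python
import numpy as np

# Minimal sketch of hybrid Algorithm 1 under toy assumptions:
# M(v) = v, N(u) = u, B = I on R^2, Z = I (every point is a fixed
# point of Z), F(v) = v/2 (a 1/2-contraction).
lam = mu = 0.25
tau, sigma = 0.5, 1.5                     # tau in [0, 1), varsigma_n = 1.5 in (0, 2)
B = np.eye(2)
J_M = lambda v: v / (1.0 + lam)           # resolvent of M
J_N = lambda u: u / (1.0 + mu)            # resolvent of N
Z = lambda v: v                           # nonexpansive mapping
F = lambda v: 0.5 * v                     # theta-contraction

v_prev, v = np.array([8.0, -2.0]), np.array([3.0, 6.0])
for n in range(1, 501):
    zeta, phi = 1.0 / (n + 2), 1.0 / (n * n + 100)
    d = v - v_prev
    nd = np.linalg.norm(d)
    tau_n = tau if nd == 0 else min(phi / nd, tau)
    # first step: viscosity approximation + fixed-point iterate + inertia
    s = zeta * F(v) + (1 - zeta) * Z(v) + tau_n * d
    rM = s - J_M(s)                       # (I - J_M) s_n
    rN = B @ s - J_N(B @ s)               # (I - J_N) B s_n
    den = np.linalg.norm(rM + B.T @ rN)
    lam_n = sigma * np.linalg.norm(rM) / den if den > 0 else 0.0
    u = s - lam_n * rM
    rMu = u - J_M(u)
    rNu = B @ u - J_N(B @ u)
    den = np.linalg.norm(rMu + B.T @ rNu)
    mu_n = sigma * np.linalg.norm(rNu) / den if den > 0 else 0.0
    v_prev, v = v, u - mu_n * (B.T @ rNu)

print(np.linalg.norm(v))                  # approaches the solution 0
```

Both step sizes are quotients of residual norms, so the loop never touches ‖B‖; the inertial coefficient is capped by ϕ_n/‖v_n − v_{n−1}‖ exactly as the algorithm prescribes.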
Remark 3.
Let u_n = s_n in Algorithm 1. If (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n ≠ 0, we obtain from (11) that λ_n(I − J_λ^M)(s_n) = 0, that is, ς_n‖(I − J_λ^M)s_n‖² / ‖(I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n‖ = 0, which implies that (I − J_λ^M)s_n = 0 and hence 0 ∈ M(s_n). If v_{n+1} = u_n and (I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n ≠ 0, we obtain from (12) that μ_n B*(I − J_μ^N)(Bu_n) = 0. Since B is a bounded linear operator, we obtain (I − J_μ^N)(Bu_n) = 0, that is, 0 ∈ N(Bu_n). If (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n = 0, then there is nothing to show.
Remark 4.
From (9) and Assumption (X4), we have lim_{n→∞} τ_n‖v_n − v_{n−1}‖/ζ_n ≤ lim_{n→∞} ϕ_n/ζ_n = 0. Therefore, there exists a constant L_1 such that τ_n‖v_n − v_{n−1}‖/ζ_n ≤ L_1, that is, τ_n‖v_n − v_{n−1}‖ ≤ L_1ζ_n.
Next, we utilize our hybrid Algorithm 1 to establish a strong convergence theorem, which approximates the common solution of (SplitVIP) and (FPP). The implemented method computes two step sizes that free us from calculating the norm of the bounded linear operator B.
Theorem 1.
If assumptions (X1)–(X5) hold, then the sequence {v_n} induced by Algorithm 1 converges strongly to v ∈ Δ_1 ∩ Δ_2, where v = P_{Δ_1∩Δ_2}F(v).
Proof. 
Let l Δ 1 Δ 2 . By using (8), (11) and Remark 2 (4), we have
‖u_n − l‖² = ‖s_n − λ_n(I − J_λ^M)(s_n) − l‖²
= ‖s_n − l‖² + λ_n²‖(I − J_λ^M)(s_n)‖² − 2λ_n⟨(I − J_λ^M)(s_n), s_n − l⟩
= ‖s_n − l‖² + λ_n²‖(I − J_λ^M)(s_n)‖² − 2λ_n⟨(I − J_λ^M)(s_n) − (I − J_λ^M)(l), s_n − l⟩
≤ ‖s_n − l‖² + λ_n²‖(I − J_λ^M)(s_n)‖² − 2λ_n‖(I − J_λ^M)(s_n) − (I − J_λ^M)(l)‖²
= ‖s_n − l‖² + (λ_n² − 2λ_n)‖(I − J_λ^M)(s_n)‖². (15)
Now, using (13), we estimate that
(λ_n² − 2λ_n)‖(I − J_λ^M)(s_n)‖² = ς_n²‖(I − J_λ^M)(s_n)‖⁴ / ‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖² − 2ς_n‖(I − J_λ^M)(s_n)‖³ / ‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖
≤ (ς_n² − 2ς_n)‖(I − J_λ^M)(s_n)‖³ / ‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖. (16)
From (15) and (16), we obtain
‖u_n − l‖² ≤ ‖s_n − l‖² + (ς_n² − 2ς_n)‖(I − J_λ^M)(s_n)‖³ / ‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖. (17)
Since ς_n ∈ (0, 2), we obtain
‖u_n − l‖ ≤ ‖s_n − l‖. (18)
Applying the same steps as in the calculation of (16) and (17), we can easily obtain the following:
‖v_{n+1} − l‖² ≤ ‖u_n − l‖² + (μ_n² − 2μ_n)‖(I − J_μ^N)(Bu_n)‖². (19)
By using (14), we can obtain
(μ_n² − 2μ_n)‖(I − J_μ^N)(Bu_n)‖² ≤ (ς_n² − 2ς_n)‖(I − J_μ^N)(Bu_n)‖³ / ‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖. (20)
It follows from (19) and (20) that
‖v_{n+1} − l‖² ≤ ‖u_n − l‖² + (ς_n² − 2ς_n)‖(I − J_μ^N)(Bu_n)‖³ / ‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖, (21)
and hence
‖v_{n+1} − l‖ ≤ ‖u_n − l‖. (22)
Combining (17) and (21), we obtain
‖v_{n+1} − l‖² ≤ ‖s_n − l‖² + ς_n(ς_n − 2)‖(I − J_λ^M)(s_n)‖³ / ‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖ + ς_n(ς_n − 2)‖(I − J_μ^N)(Bu_n)‖³ / ‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖. (23)
Since ς_n ∈ (0, 2), we conclude that
‖v_{n+1} − l‖ ≤ ‖s_n − l‖.
Since F is θ -contraction, using (10) and Remark 4, we have
‖s_n − l‖ = ‖ζ_n F(v_n) + (1 − ζ_n)Z(v_n) + τ_n(v_n − v_{n−1}) − l‖
≤ ζ_n‖F(v_n) − l‖ + (1 − ζ_n)‖Z(v_n) − l‖ + τ_n‖v_n − v_{n−1}‖
≤ ζ_n‖F(v_n) − F(l)‖ + ζ_n‖F(l) − l‖ + (1 − ζ_n)‖Z(v_n) − l‖ + τ_n‖v_n − v_{n−1}‖
≤ ζ_nθ‖v_n − l‖ + ζ_n‖F(l) − l‖ + (1 − ζ_n)‖v_n − l‖ + ζ_nL_1
= [1 − ζ_n(1 − θ)]‖v_n − l‖ + ζ_n(1 − θ)·(‖F(l) − l‖ + L_1)/(1 − θ)
≤ max{‖v_n − l‖, (‖F(l) − l‖ + L_1)/(1 − θ)}. (24)
Taking advantage of (24) and by mathematical induction, we achieve that the sequence {s_n} is bounded, and so are {v_n} and {u_n}. Let r_n = ζ_n F(v_n) + (1 − ζ_n)Z(v_n), which is also bounded. By using (8), we obtain
‖r_n − l‖² = ‖ζ_n F(v_n) + (1 − ζ_n)Z(v_n) − l‖²
= ζ_n²‖F(v_n) − l‖² + (1 − ζ_n)²‖Z(v_n) − l‖² + 2ζ_n(1 − ζ_n)⟨F(v_n) − l, Z(v_n) − l⟩
≤ ζ_n²‖F(v_n) − l‖² + (1 − ζ_n)²‖v_n − l‖² + 2ζ_n⟨F(v_n) − F(l), Z(v_n) − l⟩ + 2ζ_n⟨F(l) − l, Z(v_n) − l⟩ − 2ζ_n²⟨F(v_n) − l, Z(v_n) − l⟩
≤ ζ_n²‖F(v_n) − l‖² + (1 − ζ_n)²‖v_n − l‖² + 2ζ_nθ‖v_n − l‖² + 2ζ_n⟨F(l) − l, Z(v_n) − l⟩ + 2ζ_n²‖F(v_n) − l‖‖v_n − l‖
≤ [1 − 2ζ_n(1 − θ)]‖v_n − l‖² + ζ_n[ζ_n(‖F(v_n) − l‖ + ‖v_n − l‖)² + 2⟨F(l) − l, Z(v_n) − l⟩]. (25)
We also estimate
⟨r_n − l, τ_n(v_n − v_{n−1})⟩ = ζ_nτ_n⟨F(v_n) − l, v_n − v_{n−1}⟩ + (1 − ζ_n)τ_n⟨Z(v_n) − l, v_n − v_{n−1}⟩
≤ τ_n‖v_n − v_{n−1}‖[ζ_n‖F(v_n) − l‖ + (1 − ζ_n)‖v_n − l‖]
≤ ϕ_n[‖F(v_n) − l‖ + ‖v_n − l‖], (26)
since τ_n‖v_n − v_{n−1}‖ ≤ ϕ_n. By using the estimates (25) and (26), we obtain
‖s_n − l‖² = ‖r_n + τ_n(v_n − v_{n−1}) − l‖²
= ‖r_n − l‖² + 2τ_n⟨v_n − v_{n−1}, r_n − l⟩ + τ_n²‖v_n − v_{n−1}‖²
≤ [1 − ζ_n(1 − θ)]‖v_n − l‖² + 2ϕ_n[‖F(v_n) − l‖ + ‖v_n − l‖] + ζ_n[ζ_n(‖F(v_n) − l‖ + ‖v_n − l‖)² + 2⟨F(l) − l, Z(v_n) − l⟩] + ϕ_n²
= (1 − e_n)‖v_n − l‖² + e_n d_n, (27)
where e_n = ζ_n(1 − θ) and
d_n = [2(ϕ_n/ζ_n)(‖F(v_n) − l‖ + ‖v_n − l‖) + ζ_n(‖F(v_n) − l‖ + ‖v_n − l‖)² + 2⟨F(l) − l, Z(v_n) − l⟩ + ϕ_n²/ζ_n] / (1 − θ).
Combining (23) and (27), we obtain
‖v_{n+1} − l‖² ≤ (1 − e_n)‖v_n − l‖² + e_n d_n + ς_n(ς_n − 2)‖(I − J_λ^M)(s_n)‖³ / ‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖ + ς_n(ς_n − 2)‖(I − J_μ^N)(Bu_n)‖³ / ‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖. (28)
The remaining proof can be split in two possible cases:
  • Case I: If {‖v_n − l‖} is not monotonically increasing, then there exists a number N_1 such that ‖v_{n+1} − l‖ ≤ ‖v_n − l‖ for all n ≥ N_1. Hence, the boundedness of {‖v_n − l‖} implies that {‖v_n − l‖} is convergent. Therefore, using (28), we have
ς_n(2 − ς_n)[‖(I − J_λ^M)(s_n)‖³ / ‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖ + ‖(I − J_μ^N)(Bu_n)‖³ / ‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖] ≤ ‖v_n − l‖² − ‖v_{n+1} − l‖² − e_n‖v_n − l‖² + e_n d_n.
Since ς_n ∈ (0, 2) and e_n → 0 as n → ∞, by taking the limit as n → ∞, we obtain
lim_{n→∞} ‖(I − J_λ^M)(s_n)‖³ / ‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖ = 0
and
lim_{n→∞} ‖(I − J_μ^N)(Bu_n)‖³ / ‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖ = 0,
which implies that
lim_{n→∞} ‖(I − J_λ^M)(s_n)‖ = lim_{n→∞} ‖(I − J_μ^N)(Bu_n)‖ = 0. (29)
From (11) and (13), we infer that
lim_{n→∞} ‖u_n − s_n‖ = 0. (30)
Using (12) and (14), we obtain
lim_{n→∞} ‖v_{n+1} − u_n‖ = 0. (31)
Taking (30) and (31) together, we have
‖v_{n+1} − s_n‖ ≤ ‖v_{n+1} − u_n‖ + ‖u_n − s_n‖ → 0 as n → ∞. (32)
It is not difficult to obtain that
‖v_{n+1} − v_n‖ → 0 as n → ∞. (33)
By using s_n − v_n = r_n + τ_n(v_n − v_{n−1}) − v_n, together with (32), (33), and Remark 4, we immediately see that
‖r_n − v_n‖ ≤ τ_n‖v_n − v_{n−1}‖ + ‖v_{n+1} − v_n‖ + ‖v_{n+1} − s_n‖ → 0 as n → ∞. (34)
Hence, we can obtain
‖s_n − v_n‖ ≤ τ_n‖v_n − v_{n−1}‖ + ‖r_n − v_n‖ → 0 as n → ∞. (35)
From (10), we can easily write that
s_n − Z(v_n) = ζ_n(F(v_n) − l) + ζ_n(Z(l) − Z(v_n)) + τ_n(v_n − v_{n−1}). (36)
Using the boundedness of {v_n}, Condition (X3), the nonexpansive property of Z, and Remark 4, we achieve
‖s_n − Z(v_n)‖ → 0 as n → ∞. (37)
Similarly, we can show that
‖u_n − Z(v_n)‖ → 0 and ‖v_n − Z(v_n)‖ → 0 as n → ∞. (38)
Since {v_n} is bounded, there exists a subsequence {v_{n_k}} converging weakly to l̄; the corresponding subsequences {s_{n_k}} and {u_{n_k}} of {s_n} and {u_n}, respectively, also converge weakly to l̄. It follows from (29) and (38) that
lim_{k→∞} ‖(I − J_λ^M)(s_{n_k})‖ = 0, lim_{k→∞} ‖(I − J_μ^N)(Bu_{n_k})‖ = 0, and lim_{k→∞} ‖v_{n_k} − Z(v_{n_k})‖ = 0. (39)
Keeping in mind (30), (31), and (39), we infer that l̄ ∈ Δ_1 ∩ Δ_2.
Finally, we prove that the sequence { v n } strongly converges. From (28), we have
‖v_{n+1} − l‖² ≤ (1 − e_n)‖v_n − l‖² + e_n d_n. (40)
Furthermore,
lim sup_{n→∞} d_n = lim sup_{n→∞} [2(ϕ_n/ζ_n)(‖F(v_n) − l‖ + ‖v_n − l‖) + ζ_n(‖F(v_n) − l‖ + ‖v_n − l‖)² + 2⟨F(l) − l, Z(v_n) − l⟩ + ϕ_n²/ζ_n] / (1 − θ)
= lim sup_{k→∞} [2⟨F(l) − l, Z(v_{n_k}) − v_{n_k}⟩ + 2⟨F(l) − l, v_{n_k} − l⟩] / (1 − θ)
= 0 + (2/(1 − θ))⟨F(l) − l, l̄ − l⟩ ≤ 0,
by the characterization of the metric projection, since l = P_{Δ_1∩Δ_2}F(l) and l̄ ∈ Δ_1 ∩ Δ_2.
Now we are in position to apply Lemma 2 in (40) and conclude that { v n } converges strongly to l . Hence, the result is proved.
  • Case II: If {‖v_n − l‖} is not eventually decreasing, then, as in Lemma 4, the sequence of integers ε : ℕ → ℕ defined for n ≥ n_0 by ε(n) = max{k ≤ n : ‖v_k − l‖ ≤ ‖v_{k+1} − l‖} is nondecreasing, ε(n) → ∞ as n → ∞, and
0 ≤ ‖v_{ε(n)} − l‖ ≤ ‖v_{ε(n)+1} − l‖, for all n ≥ n_0. (41)
By using (28), we have
ς_{ε(n)}(2 − ς_{ε(n)})[‖(I − J_λ^M)(s_{ε(n)})‖³ / ‖(I − J_λ^M)(s_{ε(n)}) + B*(I − J_μ^N)(Bs_{ε(n)})‖ + ‖(I − J_μ^N)(Bu_{ε(n)})‖³ / ‖(I − J_λ^M)(u_{ε(n)}) + B*(I − J_μ^N)(Bu_{ε(n)})‖]
≤ ‖v_{ε(n)} − l‖² − ‖v_{ε(n)+1} − l‖² − e_{ε(n)}‖v_{ε(n)} − l‖² + e_{ε(n)}d_{ε(n)} ≤ −e_{ε(n)}‖v_{ε(n)} − l‖² + e_{ε(n)}d_{ε(n)}.
By passing to the limit as n → ∞, we obtain
‖(I − J_λ^M)(s_{ε(n)})‖ → 0, ‖(I − J_μ^N)(Bu_{ε(n)})‖ → 0.
Using the same techniques as in the proof of Case I, we obtain ‖v_{ε(n)+1} − s_{ε(n)}‖ → 0, ‖v_{ε(n)+1} − u_{ε(n)}‖ → 0, ‖v_{ε(n)+1} − v_{ε(n)}‖ → 0, and ‖Z(v_{ε(n)}) − v_{ε(n)}‖ → 0 as n → ∞. From (40) and (41), we obtain
0 ≤ ‖v_{ε(n)} − l‖² ≤ d_{ε(n)}.
Thus, lim sup_{n→∞} ‖v_{ε(n)} − l‖ ≤ 0. By passing to the limit as n → ∞ and using Lemma 4,
0 ≤ ‖v_n − l‖ ≤ max{‖v_n − l‖, ‖v_{ε(n)} − l‖} ≤ ‖v_{ε(n)+1} − l‖.
It follows that ‖v_n − l‖ → 0, that is, v_n → l as n → ∞. This completes the proof. □
Further, we construct the hybrid Algorithm 2, which is a slightly modified version of hybrid Algorithm 1. In hybrid Algorithm 2, the initial step iterates the viscosity approximation, which is the convex combination of F(v_n) and Z(v_n) + τ_n(v_n − v_{n−1}), where the inertial extrapolation term τ_n(v_n − v_{n−1}) is added to accelerate the convergence.
Algorithm 2. Hybrid Algorithm 2
Choose λ > 0, μ > 0, τ ∈ [0, 1), and 0 < ς_n < 2. Select initial points v_0 and v_1 and fix n = 0.
Iterative Step: For n ≥ 1, given the iterates v_n and v_{n−1}, select 0 < τ_n ≤ τ̄_n, where
τ̄_n = min{ϕ_n/‖v_n − v_{n−1}‖, τ} if v_n ≠ v_{n−1}, and τ̄_n = τ otherwise. (42)
Compute
s_n = ζ_n F(v_n) + (1 − ζ_n)[Z(v_n) + τ_n(v_n − v_{n−1})], (43)
u_n = s_n − λ_n(I − J_λ^M)(s_n), (44)
v_{n+1} = u_n − μ_n B*(I − J_μ^N)(Bu_n), (45)
where λ_n and μ_n are defined by
λ_n = ς_n‖(I − J_λ^M)s_n‖ / ‖(I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n‖ if (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n ≠ 0, and λ_n = 0 otherwise, (46)
and
μ_n = ς_n‖(I − J_μ^N)Bu_n‖ / ‖(I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n‖ if (I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n ≠ 0, and μ_n = 0 otherwise. (47)
If v_{n+1} = u_n = v_n = s_n, then stop; otherwise, set n = n + 1 and go back to the computation.
The following is the convergence analysis of hybrid Algorithm 2, which is similar to that of the proof of Theorem 1.
Theorem 2.
If assumptions (X1)–(X5) hold, then the sequence {v_n} generated by Algorithm 2 converges strongly to v ∈ Δ_1 ∩ Δ_2, where v = P_{Δ_1∩Δ_2}F(v).
Proof. 
Take l ∈ Δ_1 ∩ Δ_2; then, from (43) and using Remark 4, we see that
‖s_n − l‖ = ‖ζ_n F(v_n) + (1 − ζ_n)[Z(v_n) + τ_n(v_n − v_{n−1})] − l‖
≤ ζ_n‖F(v_n) − l‖ + (1 − ζ_n)[‖Z(v_n) − l‖ + τ_n‖v_n − v_{n−1}‖]
≤ ζ_n‖F(v_n) − F(l)‖ + ζ_n‖F(l) − l‖ + (1 − ζ_n)‖v_n − l‖ + τ_n‖v_n − v_{n−1}‖
≤ ζ_nθ‖v_n − l‖ + ζ_n‖F(l) − l‖ + (1 − ζ_n)‖v_n − l‖ + ζ_nL_1
= [1 − ζ_n(1 − θ)]‖v_n − l‖ + ζ_n(1 − θ)·(‖F(l) − l‖ + L_1)/(1 − θ)
≤ max{‖v_n − l‖, (‖F(l) − l‖ + L_1)/(1 − θ)}.
Keeping this bound in mind and using mathematical induction, we obtain that the sequence {s_n} is bounded, and so are {v_n} and {u_n}. Denote y_n = Z(v_n) + τ_n(v_n − v_{n−1}) and t_n = τ_n‖v_n − v_{n−1}‖ ≤ ϕ_n; then, by using (8), we establish that
‖y_n − l‖² = ‖Z(v_n) + τ_n(v_n − v_{n−1}) − l‖²
≤ ‖Z(v_n) − l‖² + 2τ_n⟨v_n − v_{n−1}, y_n − l⟩
≤ ‖v_n − l‖² + 2t_n‖y_n − l‖, (48)
and
⟨F(v_n) − l, y_n − l⟩ = ⟨F(v_n) − F(l), y_n − l⟩ + ⟨F(l) − l, y_n − l⟩
≤ ‖F(v_n) − F(l)‖‖y_n − l‖ + ⟨F(l) − l, y_n − l⟩
≤ θ‖v_n − l‖‖y_n − l‖ + ⟨F(l) − l, y_n − l⟩
≤ (1/2)[θ²‖v_n − l‖² + ‖y_n − l‖²] + ⟨F(l) − l, y_n − l⟩. (49)
Using (48) and (49), we obtain
‖s_n − l‖² = ‖ζ_n F(v_n) + (1 − ζ_n)y_n − l‖²
= ζ_n²‖F(v_n) − l‖² + (1 − ζ_n)²‖y_n − l‖² + 2ζ_n(1 − ζ_n)⟨F(v_n) − l, y_n − l⟩
≤ ζ_n²‖F(v_n) − l‖² + ζ_n(1 − ζ_n)[θ²‖v_n − l‖² + ‖y_n − l‖² + 2⟨F(l) − l, y_n − l⟩] + (1 − ζ_n)²‖y_n − l‖²
≤ ζ_n²‖F(v_n) − l‖² + ζ_nθ²‖v_n − l‖² + [(1 − ζ_n)² + ζ_n(1 − ζ_n)]‖y_n − l‖² + 2ζ_n(1 − ζ_n)⟨F(l) − l, y_n − l⟩
≤ ζ_n²‖F(v_n) − l‖² + ζ_nθ²‖v_n − l‖² + (1 − ζ_n)[‖v_n − l‖² + 2t_n‖y_n − l‖] + 2ζ_n(1 − ζ_n)⟨F(l) − l, y_n − l⟩
≤ [1 − ζ_n(1 − θ²)]‖v_n − l‖² + ζ_n{2(1 − ζ_n)⟨F(l) − l, y_n − l⟩ + ζ_n‖F(v_n) − l‖² + 2(t_n/ζ_n)‖y_n − l‖}. (50)
Combining (23) and (50), we obtain
‖v_{n+1} − l‖² ≤ (1 − c_n)‖v_n − l‖² + c_n g_n + ς_n(ς_n − 2)‖(I − J_λ^M)(s_n)‖³ / ‖(I − J_λ^M)(s_n) + B*(I − J_μ^N)(Bs_n)‖ + ς_n(ς_n − 2)‖(I − J_μ^N)(Bu_n)‖³ / ‖(I − J_λ^M)(u_n) + B*(I − J_μ^N)(Bu_n)‖, (51)
where c_n = ζ_n(1 − θ²) and g_n = [2(1 − ζ_n)⟨F(l) − l, y_n − l⟩ + ζ_n‖F(v_n) − l‖² + 2(ϕ_n/ζ_n)‖y_n − l‖] / (1 − θ²).
Considering Case I of Theorem 1, we can easily obtain
lim_{n→∞} ‖(I − J_λ^M)(s_n)‖ = lim_{n→∞} ‖(I − J_μ^N)(Bu_n)‖ = 0, (52)
and
lim_{n→∞} ‖u_n − s_n‖ = 0, lim_{n→∞} ‖v_{n+1} − u_n‖ = 0, (53)
and, hence,
lim_{n→∞} ‖v_{n+1} − s_n‖ = 0, lim_{n→∞} ‖v_{n+1} − v_n‖ = 0. (54)
Next, we show that ‖Z(v_n) − v_n‖ → 0 as n → ∞. Since y_n − Z(v_n) = τ_n(v_n − v_{n−1}) and τ_n‖v_n − v_{n−1}‖ ≤ ϕ_n → 0, we have
lim_{n→∞} ‖y_n − Z(v_n)‖ = 0,
and
s_n − Z(v_n) = ζ_n F(v_n) + (1 − ζ_n)y_n − Z(v_n) = ζ_n(F(v_n) − l) + ζ_n(l − y_n) + (y_n − Z(v_n)).
The assumption on ζ_n and the boundedness of {v_n} imply that
‖s_n − Z(v_n)‖ ≤ ζ_n‖F(v_n) − l‖ + ζ_n‖l − y_n‖ + ‖y_n − Z(v_n)‖ → 0 as n → ∞.
Also,
‖Z(v_n) − u_n‖ ≤ ‖Z(v_n) − s_n‖ + ‖u_n − s_n‖ → 0 as n → ∞.
Together with (52)–(54), we obtain, by taking n → ∞,
‖Z(v_n) − v_n‖ ≤ ‖Z(v_n) − u_n‖ + ‖u_n − s_n‖ + ‖v_{n+1} − s_n‖ + ‖v_{n+1} − v_n‖ → 0. (55)
The boundedness of {v_n}, {u_n}, and {s_n} implies the existence of subsequences {v_{n_k}}, {u_{n_k}}, and {s_{n_k}} that converge weakly to some point v̂; from (51)–(53) and (55), we conclude that v̂ ∈ Δ_1 ∩ Δ_2. The remaining proof can be obtained easily by using similar steps as in the proof of Theorem 1. □
Let q ∈ U be arbitrary. Then, by replacing F(v) with q in hybrid Algorithm 1 and hybrid Algorithm 2, we define the following Halpern-type iterative methods, which can be seen as particular cases of our hybrid methods:
Corollary 1.
If assumptions (X2)–(X5) hold, then the sequence {v_n} induced by Algorithm 3 converges strongly to q* = P_{Δ_1∩Δ_2}(q).
Algorithm 3. A Particular Case of Hybrid Algorithm 1
Choose λ > 0, μ > 0, τ ∈ [0, 1), and 0 < ς_n < 2. Select initial points v_0 and v_1, any q ∈ U, and fix n = 0.
Iterative Step: For n ≥ 1, given the iterates v_n and v_{n−1}, select 0 < τ_n ≤ τ̄_n, where
τ̄_n = min{ϕ_n/‖v_n − v_{n−1}‖, τ} if v_n ≠ v_{n−1}, and τ̄_n = τ otherwise.
Compute
s_n = ζ_n q + (1 − ζ_n)Z(v_n) + τ_n(v_n − v_{n−1}),
u_n = s_n − λ_n(I − J_λ^M)(s_n),
v_{n+1} = u_n − μ_n B*(I − J_μ^N)(Bu_n),
where λ_n and μ_n are defined by
λ_n = ς_n‖(I − J_λ^M)s_n‖ / ‖(I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n‖ if (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n ≠ 0, and λ_n = 0 otherwise,
and
μ_n = ς_n‖(I − J_μ^N)Bu_n‖ / ‖(I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n‖ if (I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n ≠ 0, and μ_n = 0 otherwise.
If v_{n+1} = u_n = v_n = s_n, then stop; otherwise, set n = n + 1 and go back to the computation.
Proof. 
Replacing F ( v ) = q in Algorithm 1 as well as in the proof of Theorem 1, we obtain the required result. □
Corollary 2.
If assumptions (X2)–(X5) hold, then the sequence {v_n} induced by Algorithm 4 converges strongly to q* = P_{Δ_1∩Δ_2}(q).
Algorithm 4. A Particular Case of Hybrid Algorithm 2
Let λ > 0, μ > 0, τ ∈ [0, 1), and 0 < ς_n < 2 be given. Select initial points v_0 and v_1, any q ∈ U, and fix n = 0.
Iterative Step: For n ≥ 1, given the iterates v_n and v_{n−1}, select 0 < τ_n ≤ τ̄_n, where
τ̄_n = min{ϕ_n/‖v_n − v_{n−1}‖, τ} if v_n ≠ v_{n−1}, and τ̄_n = τ otherwise.
Compute
s_n = ζ_n q + (1 − ζ_n)[Z(v_n) + τ_n(v_n − v_{n−1})],
u_n = s_n − λ_n(I − J_λ^M)(s_n),
v_{n+1} = u_n − μ_n B*(I − J_μ^N)(Bu_n),
where λ_n and μ_n are defined by
λ_n = ς_n‖(I − J_λ^M)s_n‖ / ‖(I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n‖ if (I − J_λ^M)s_n + B*(I − J_μ^N)Bs_n ≠ 0, and λ_n = 0 otherwise,
and
μ_n = ς_n‖(I − J_μ^N)Bu_n‖ / ‖(I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n‖ if (I − J_λ^M)u_n + B*(I − J_μ^N)Bu_n ≠ 0, and μ_n = 0 otherwise.
If v_{n+1} = u_n = v_n = s_n, then stop; otherwise, set n = n + 1 and go back to the computation.
Proof. 
By replacing q in place of F ( v ) in Algorithm 2 as well as in the proof of Theorem 2, we obtain the desired proof. □

4. Some Advantages

Some applications of the suggested methods for solving split variational inequality and split common fixed-point problems are discussed below.

4.1. Split Variational Inequality Problem

Let D be a nonempty, closed, and convex subset of U, let P_D be the projection onto D, let G_1 : U → U and G_2 : U → U be monotone operators, and let B : U → U be a bounded linear operator. The split variational inequality problem (SplitVItP) is defined by
find v ∈ D so that ⟨G_1(v), w − v⟩ ≥ 0 for all w ∈ D, and u = Bv ∈ D solves ⟨G_2(u), w − u⟩ ≥ 0 for all w ∈ D.
Then, by setting J_λ^M = J_μ^N = P_D in Algorithms 1–4, we can obtain the hybrid algorithms and their convergence results for (SplitVItP) and (FPP).

4.2. Split Common Fixed-Point Problem

Let T_1 : U → U and T_2 : U → U be nonexpansive self-mappings and B : U → U be a bounded linear operator; then, the split common fixed-point problem (SplitCFPP) is defined as follows:
find v ∈ U so that v ∈ Fix(T_1), and u = Bv ∈ U solves u ∈ Fix(T_2).
Then, by setting J_λ^M = T_1, J_μ^N = T_2, and Z = I (the identity mapping) in Algorithms 1–4, we can obtain the hybrid algorithms and their convergence results for (SplitCFPP).
Next, we present numerical examples in finite and infinite dimensional Hilbert spaces, showing the efficiency of our hybrid methods and their comparison with the work studied in [10,11,12,18].

5. Numerical Examples

Example 1
(Finite dimensional). Let U = R 4 , equipped with the inner product t , q = q 1 t 1 + q 2 t 2 + q 3 t 3 + q 4 t 4 for t = ( t 1 , t 2 , t 3 , t 4 ) and q = ( q 1 , q 2 , q 3 , q 4 ) and the norm q 2 = | q 1 | 2 + | q 2 | 2 + | q 3 | 2 + | q 4 | 2 . The operators M, N, and B are defined by
M ( q 1 , q 2 , q 3 , q 4 ) = q 1 , q 2 2 , q 3 3 , q 4 4 , N ( q 1 , q 2 , q 3 , q 4 ) = q 1 , q 2 3 , q 3 , q 4 3 , B ( q 1 , q 2 , q 3 , q 4 ) = q 1 , q 2 , q 3 , q 4 ,
  • such that M is 1 4 -inverse strongly monotone and N is 1 3 -inverse strongly monotone (hence monotone); B is a bounded and linear operator. The nonexpansive mapping Z is defined by Z ( q 1 , q 2 , q 3 , q 4 ) = q 1 , 0 , q 3 , 0 , and F ( q 1 , q 2 , q 3 , q 4 ) = q 1 2 , q 2 2 , q 3 2 , q 4 2 is a θ-contraction with θ = 1 2 .
To run our algorithms, we select ϕ_n = 1/(n² + 100), ς_n = 2 − 1/(n + 1), and τ = 0.75, and τ_n is selected randomly from (0, τ̄_n), where
τ̄_n = min{ 1/((n² + 100)‖v_n − v_{n−1}‖), 0.75 } if v_n ≠ v_{n−1}, and τ̄_n = 0.75 otherwise.
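The step-size rule above can be coded directly; the following is a sketch of one possible implementation (function names are illustrative, not from the paper):

```python
import random
import numpy as np

def tau_bar(n, v_n, v_prev, cap=0.75):
    """Upper bound tau_bar_n for the inertial parameter at iteration n."""
    diff = np.linalg.norm(np.asarray(v_n, dtype=float) - np.asarray(v_prev, dtype=float))
    if diff > 0:                      # case v_n != v_{n-1}
        return min(1.0 / ((n**2 + 100) * diff), cap)
    return cap                        # otherwise the fixed cap tau

def sample_tau(n, v_n, v_prev, cap=0.75):
    """tau_n drawn randomly from (0, tau_bar_n)."""
    return random.uniform(0.0, tau_bar(n, v_n, v_prev, cap))
```

Note that the min with 1/((n² + 100)‖v_n − v_{n−1}‖) forces τ_n‖v_n − v_{n−1}‖ ≤ 1/(n² + 100), so the inertial correction is bounded by a summable sequence.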
We compare our algorithms with method (2), method (3), method (4), and method (5) using the following common parameters: λ = μ = μ₁ = μ₂ = 0.25 and ζ_n = 1/(n + 100) for all the methods; η = 0.5 for methods (2), (3), and (4); u = (0, 0, 0, 1) and η_n given by (6) for method (5). The stopping condition is ‖v_{n+1} − v_n‖ < 10⁻²⁰, and we consider two cases of initial values:
  • Case (a): v 0 = ( 130 , 10 , 54 , 95 ) , v 1 = ( 14 , 30 , 105 , 100 )
  • Case (b): v 0 = ( 0 , 100 , 100 , 0 ) , v 1 = ( 1 / 2 , 0 , 1 / 100 , 5 )
It can be seen that our algorithms are efficient and effective and can be implemented easily without calculating ‖B‖. The convergence of {v_n} to {0} = Δ₁ ∩ Δ₂ is shown in Figure 1 and Figure 2 for different initial values. Our algorithms approach the solution in fewer steps than method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18].
Example 2 (Infinite dimensional). Let U = l² := { t = (t₁, t₂, t₃, …, t_n, …), t_n ∈ ℝ : Σ_{n=1}^∞ |t_n|² < ∞ }, the space of all square-summable sequences, with inner product ⟨t, q⟩ = Σ_{n=1}^∞ t_n q_n and norm ‖t‖ = (Σ_{n=1}^∞ |t_n|²)^{1/2}. The mappings M, N, Z, F, and B are defined by
M(t) := t/5,  N(t) := t/3,  Z(t) := t,  F(t) := t/2,  B(t) := t,  for all t ∈ l².
Clearly, M and N are monotone, Z is nonexpansive, F is a contraction, and B is a bounded linear operator. We choose ς_n = 2 − 2/(n + 10), ϕ_n = 1/(n² + 1), and τ = 0.85, and τ_n is selected randomly from (0, τ̄_n), where
τ̄_n = min{ 1/((n² + 100)‖v_n − v_{n−1}‖), 0.85 } if v_n ≠ v_{n−1}, and τ̄_n = 0.85 otherwise.
The common parameters for our algorithms and methods (2)–(5) are as follows: λ = μ = μ₁ = μ₂ = 1/3 and ζ_n = 1/(n + 1) for all the methods; η = 0.5 for methods (2), (3), and (4); u = (0, 1, 0, 0, 0, …) and η_n given by (6) for method (5). We plot the convergence of the sequences generated by Algorithms 1–4. The stopping condition is ‖v_{n+1} − v_n‖ < 10⁻²⁵ for the following two initial values:
  • Case (a’): v₀ = {1/(3n)}_{n=1}^∞, v₁ = {1/n}_{n=1}^∞;
  • Case (b’): v₀ = {1/n²}_{n=1}^∞, v₁ = {(−1)ⁿ/n²}_{n=1}^∞.
Our algorithms are effective and efficient in the sense that they are implemented easily without calculating ‖B‖. It can be seen in Figure 3 and Figure 4 that the sequences generated by our methods approach the solution in fewer steps than method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18].
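As a rough illustration of this behavior, one can truncate l² to finitely many coordinates and run a simple iteration built from the stated ingredients. The loop below is only one plausible combination of inertial extrapolation, a resolvent step, and a viscosity term under Example 2's operators and parameters; it is not the paper's exact Algorithms 1–4:

```python
import numpy as np

K = 50                      # truncate l^2 to its first K coordinates
lam = 1.0 / 3.0             # lambda = 1/3, as in the common parameters

def J_M(t):
    # Resolvent (I + lam*M)^(-1) for M(t) = t/5: a scalar shrink.
    return t / (1.0 + lam / 5.0)

F = lambda t: t / 2.0       # the contraction
Z = lambda t: t             # the nonexpansive mapping (identity here)

v_prev = np.array([1.0 / (3 * n) for n in range(1, K + 1)])  # Case (a') v_0
v = np.array([1.0 / n for n in range(1, K + 1)])             # Case (a') v_1

for n in range(1, 300):
    diff = np.linalg.norm(v - v_prev)
    t_bar = min(1.0 / ((n**2 + 100) * diff), 0.85) if diff > 0 else 0.85
    tau = 0.5 * t_bar                        # any tau_n in (0, tau_bar_n)
    w = v + tau * (v - v_prev)               # inertial extrapolation
    zeta = 1.0 / (n + 1)
    v_prev, v = v, zeta * F(w) + (1 - zeta) * Z(J_M(w))  # viscosity-type step

final_norm = np.linalg.norm(v)               # tends to 0, the common solution
```

Since every map here is a scalar shrink with factor at most 15/16 and the inertial correction is summable, the iterates contract geometrically toward 0, mirroring the behavior reported in Figures 3 and 4.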
Remark 5.
We do not present any result on the rate of convergence of the proposed methods. Studying and comparing the convergence rates of our methods with those of other techniques is an interesting direction for future work.

6. Conclusions

We present two hybrid inertial self-adaptive iterative methods for estimating the common solution of (FPP) and (SplitVIP), and we establish two strong convergence theorems. Some special cases of the proposed methods are noted, and we implement our hybrid methods to explore the solutions of split variational inequality and split common fixed-point problems. Our algorithms are distinctive in that they combine viscosity approximation, fixed-point iteration, and inertial extrapolation in the initial step of each iteration. They are also efficient: they employ two self-adaptive step sizes and do not require the pre-estimated norm of a bounded linear operator in the iteration process. The effectiveness and efficiency of the proposed methods are illustrated by Examples 1 and 2, where the methods are easily implemented and the iterative sequences estimate the common solution of (SplitVIP) and (FPP) in fewer steps than method (2) of [10], method (3) of [11], method (4) of [12], and method (5) of [18].

Author Contributions

Conceptualization, D.F.; methodology, M.D.; validation, M.A.; formal analysis, A.F.Y.A.; investigation, A.F.Y.A.; writing, original draft preparation, M.D. and M.A.; review and editing, M.A. and M.D.; funding acquisition, D.F. and M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research project was funded by Deanship of Scientific Research and Libraries, Princess Nourah bint Abdulrahman University, through the Program of Research Project Funding After Publication, grant No. (RPFAP-81-1445).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

All authors would like to offer thanks to the journal editor and reviewers for their fruitful suggestions and comments, which enhanced the overall quality of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  2. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef]
  3. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084. [Google Scholar] [CrossRef]
  4. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323. [Google Scholar] [CrossRef]
  5. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365. [Google Scholar] [CrossRef]
  6. Cao, Y.; Wang, Y.; Rehman, H.; Shehu, Y.; Yao, J.C. Convergence analysis of a new forward-reflected-backward algorithm for four operators without cocoercivity. J. Optim. Theory Appl. 2024, 203, 256–284. [Google Scholar] [CrossRef]
  7. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
  8. Combettes, P.L. The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 1996, 95, 155–270. [Google Scholar]
  9. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283. [Google Scholar] [CrossRef]
  10. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775. [Google Scholar]
  11. Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124. [Google Scholar] [CrossRef]
  12. Akram, M.; Dilshad, M.; Rajpoot, B.F.; Ahmad, R.; Yao, J.-C. Modified iterative schemes for a fixed point problem and a split variational inclusion problem. Mathematics 2022, 10, 2098. [Google Scholar] [CrossRef]
  13. Abass, H.A.; Ugwunnadi, G.C.; Narain, O.K. A modified inertial Halpern method for solving split monotone variational inclusion problems in Banach spaces. Rend. Circ. Mat. Palermo Ser. 2 2023, 72, 2287–2310. [Google Scholar] [CrossRef]
  14. Dilshad, M.; Aljohani, A.F.; Akram, M. Iterative scheme for split variational inclusion and a fixed-point problem of a finite collection of nonexpansive mappings. J. Funct. Spaces 2020, 2020, 3567648. [Google Scholar] [CrossRef]
  15. Deepho, J.; Thounthong, P.; Kumam, P.; Phiangsungnoen, S. A new general iterative scheme for split variational inclusion and fixed point problems of k-strict pseudo-contraction mappings with convergence analysis. J. Comput. Appl. Math. 2017, 318, 293–306. [Google Scholar] [CrossRef]
  16. Sitthithakerngkiet, K.; Deepho, J.; Kumam, P. A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion in image reconstruction and fixed point problems. Appl. Math. Comput. 2015, 250, 986–1001. [Google Scholar] [CrossRef]
  17. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without knowledge of matrix norm. Inverse Probl. 2012, 28, 085004. [Google Scholar] [CrossRef]
  18. Dilshad, M.; Akram, M.; Ahmad, I. Algorithms for split common null point problem without pre-existing estimation of operator norm. J. Math. Inequal. 2020, 14, 1151–1163. [Google Scholar] [CrossRef]
  19. Ezeora, J.N.; Enyi, C.D.; Nwawuru, F.O.; Richard, C.O. An algorithm for split equilibrium and fixed-point problems using inertial extragradient techniques. Comp. Appl. Math. 2023, 42, 103. [Google Scholar] [CrossRef]
  20. Tang, Y. New algorithms for split common null point problems. Optimization 2020, 70, 1141–1160. [Google Scholar] [CrossRef]
  21. Tang, Y.; Gibali, A. New self-adaptive step size algorithms for solving split variational inclusion problems and its applications. Numer. Algorithms 2019, 83, 305–331. [Google Scholar] [CrossRef]
  22. Tang, Y.; Zhang, Y.; Gibali, A. New self-adaptive inertial-like proximal point methods for the split common null point problem. Symmetry 2021, 13, 2316. [Google Scholar] [CrossRef]
  23. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
  24. Alamer, A.; Dilshad, M. Halpern-type inertial iteration methods with self-adaptive step size for split common null point problem. Mathematics 2024, 12, 747. [Google Scholar] [CrossRef]
  25. Filali, D.; Dilshad, M.; Alyasi, L.S.M.; Akram, M. Inertial Iterative Algorithms for Split Variational Inclusion and Fixed Point Problems. Axioms 2023, 12, 848. [Google Scholar] [CrossRef]
  26. Nwawuru, F.O.; Narain, O.K.; Dilshad, M.; Ezeora, J.N. Splitting method involving two-step inertial for solving inclusion and fixed point problems with applications. Fixed Point Theory Algorithms Sci. Eng. 2025, 2025, 8. [Google Scholar] [CrossRef]
  27. Reich, S.; Taiwo, A. Fast hybrid iterative schemes for solving variational inclusion problems. Math. Methods Appl. Sci. 2023, 46, 17177–17198. [Google Scholar] [CrossRef]
  28. Ugwunnadi, G.C.; Abass, H.A.; Aphane, M.; Oyewole, O.K. Inertial Halpern-type method for solving split feasibility and fixed point problems via dynamical stepsize in real Banach spaces. Ann. Univ. Ferrara 2024, 70, 307–330. [Google Scholar] [CrossRef]
  29. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Space; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  30. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  31. Xu, H.K. Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 2002, 65, 109–113. [Google Scholar] [CrossRef]
  32. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef]
  33. Mainge, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912. [Google Scholar] [CrossRef]
Figure 1. The comparison of our proposed methods with the other methods studied in [10,11,12,18] for Case (a) of Example 1.
Figure 2. The comparison of our proposed methods with the other methods studied in [10,11,12,18] for Case (b) of Example 1.
Figure 3. The comparison of our proposed methods with the other methods studied in [10,11,12,18] for Case (a′) of Example 2.
Figure 4. The comparison of our proposed methods with the other methods studied in [10,11,12,18] for Case (b′) of Example 2.