Article

A Refined Inertial-like Subgradient Method for Split Equality Problems

Department of Mathematics, Government College University, Katchery Road, Lahore 54000, Pakistan
*
Author to whom correspondence should be addressed.
AppliedMath 2025, 5(3), 117; https://doi.org/10.3390/appliedmath5030117
Submission received: 12 July 2025 / Revised: 23 August 2025 / Accepted: 26 August 2025 / Published: 2 September 2025

Abstract

This paper presents the convergence analysis of a newly proposed algorithm for approximating solutions to split equality variational inequality and fixed point problems in real Hilbert spaces. We establish that, under reasonably mild conditions, specifically when the involved mappings are quasimonotone, uniformly continuous, and quasi-nonexpansive, the sequences generated by the algorithm converge strongly to a solution of the problem. Furthermore, we provide several numerical experiments to demonstrate the practical effectiveness of the proposed method and compare its performance with that of existing algorithms.

1. Introduction

Variational inequality problems (VIPs) and fixed-point problems (FPPs) play a central role in optimization and equilibrium theory, with wide-ranging applications in fields like engineering, economics, and operations research. Simply put, FPPs help us find values that remain unchanged under a given transformation, an idea deeply explored by mathematicians like Banach, Brouwer, and Schauder [1]. On the other hand, VIPs are used to model systems where certain inequality conditions must be met, such as physical or economic equilibrium situations. These were formalized in the mid-20th century by researchers like Stampacchia [2] and Fichera [3].
As problems in real-world systems grew more complex, so did the mathematical models needed to handle them. In many cases, constraints and objectives are distributed across different spaces or subsystems, which traditional VIPs and FPPs cannot fully capture. This gave rise to extended models like the split equality variational inequality problem (SEVIP) and the split equality fixed point problem (SEFPP). These models aim to find solutions that not only satisfy variational inequalities or fixed point conditions but also meet additional coupling constraints that link variables from separate domains.
The idea of splitting constraints across domains originally came from the split feasibility problem (SFP), introduced by Censor and Elfving and later generalized by Moudafi into the more flexible split equality feasibility problem (SEFP) [4]. These problems are especially relevant in applications such as medical image reconstruction, signal processing, and decentralized decision-making systems [5,6]. Further extending these ideas, the split equality variational inequality and fixed point problem (SEVIFPP) combines both VIP and FPP conditions, making it a powerful framework for tackling hybrid equilibrium problems.
Solving these problems efficiently requires robust iterative algorithms. Early approaches relied on projection methods and the well-known extragradient technique introduced by Korpelevich [7]. Later, improvements such as the subgradient extragradient method [8] and Tseng modified method [9] addressed various limitations, including high computational cost and limited applicability to nonmonotone settings. Other researchers introduced enhancements using inertial and viscosity terms to increase convergence speed and algorithm stability [10,11,12].
More recent contributions by Zhao [13], Kwelegano et al. [14], and others have further advanced the field by designing iterative schemes tailored to the structure of SEVIP and SEVIFPP. These methods are particularly effective when dealing with quasimonotone, pseudomonotone, or quasi-nonexpansive mappings, each of which poses unique mathematical challenges [15,16,17,18,19]. Building on these developments, Mekuriaw et al. [20] recently introduced new inertial-like algorithms that show promising convergence behavior.
Inspired by this evolving body of work, the current study proposes new iterative algorithms to solve SEVIFPP more efficiently. The aim is to develop techniques that not only guarantee convergence under more general conditions but also improve computation time and practical applicability.
Let Ω be a Hilbert space with a non-empty closed convex subset B. A fixed point problem (FPP) seeks an element r ∈ Ω such that Zr = r for a given operator Z : Ω → Ω, with the set of solutions denoted by F(Z). Fixed point theory has a rich history, originating from foundational results by Poincaré, Brouwer, Kakutani, Schauder, and Banach, particularly the Banach contraction principle [1].
Variational inequality problems (VIPs) provide another important framework in non-linear analysis and optimization. For a mapping Z : B → Ω, the classical VIP seeks r ∈ B such that
⟨Zr, s − r⟩ ≥ 0, ∀s ∈ B,
with the solution set denoted by VI(B, Z). First studied by Stampacchia [2] and Fichera [3], VIPs are crucial for modeling equilibrium problems under constraints, especially in systems with non-linearity or interacting subsystems. A related concept is the Minty variational inequality problem (MVIP), which seeks r ∈ B such that
⟨Zs, s − r⟩ ≥ 0, ∀s ∈ B,
with the solution set denoted by MVI(B, Z). If Z is continuous and B is convex, then MVI(B, Z) ⊆ VI(B, Z) [15]; for pseudomonotone and continuous Z, the two sets coincide [17]; for quasi-monotone Z, however, the reverse inclusion may fail [16].
The split equality feasibility problem (SEFP), introduced by Moudafi [4], extends these ideas to multiple domains. Given two Hilbert spaces Ω₁, Ω₂, subsets B ⊆ Ω₁ and E ⊆ Ω₂, and bounded linear operators X : Ω₁ → Ω₃, Y : Ω₂ → Ω₃, the SEFP seeks (r, v) ∈ B × E such that
Xr = Yv.
This formulation generalizes the split feasibility problem (SFP) introduced by Censor and Elfving [21] and has applications in image reconstruction, signal processing, and optimization [5,6].
By replacing the sets B and E in the SEFP with solution sets of VIPs, we obtain the split equality variational inequality problem (SEVIP): find
r ∈ VI(B, Z), v ∈ VI(E, J), such that Xr = Yv,
where Z : Ω₁ → Ω₁ and J : Ω₂ → Ω₂ are operators.
Likewise, the split equality fixed point problem (SEFPP) is defined as: find
r ∈ F(Z), v ∈ F(J), such that Xr = Yv,
where F(Z) and F(J) are the respective fixed point sets.
A further extension, the split equality variational inequality and fixed point problem (SEVIFPP), combines both structures. It involves finding
r ∈ VI(B, Z) ∩ F(M), s ∈ VI(E, J) ∩ F(N), such that Xr = Ys,
where Z, M : Ω₁ → Ω₁ and J, N : Ω₂ → Ω₂ are non-linear mappings.
Various projection-type iterative techniques have been developed to solve the VIP under different conditions. The most basic is the projection gradient method, defined as
r₁ ∈ Ω, r_{n+1} = P_B(r_n − μZr_n), n ≥ 1,
where μ is a positive constant. It can easily be shown that iteration (1) converges weakly to a solution of VI(B, Z) if Z is inverse strongly monotone and Lipschitz continuous.
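As a concrete illustration, the projection gradient iteration takes only a few lines. The box set B = [0,1]², its clipping projection, and the operator Z(r) = r − a (the gradient of ½‖r − a‖², which is inverse strongly monotone) are illustrative choices, not taken from the paper:

```python
import numpy as np

def projected_gradient(proj_B, Z, r1, mu, iters=100):
    """Iterate r_{n+1} = P_B(r_n - mu * Z(r_n))."""
    r = np.asarray(r1, dtype=float)
    for _ in range(iters):
        r = proj_B(r - mu * Z(r))
    return r

# Illustrative instance: B = [0,1]^2 (projection = coordinatewise clipping),
# Z(r) = r - a, whose VI solution over B is P_B(a).
a = np.array([0.25, 2.0])
proj_box = lambda x: np.clip(x, 0.0, 1.0)
sol = projected_gradient(proj_box, lambda r: r - a, np.zeros(2), mu=0.5)
# sol approaches P_B(a) = (0.25, 1.0)
```

For this strongly monotone toy operator, the iteration is a contraction and converges quickly; for merely monotone operators it can fail, which is what motivates the extragradient-type corrections below.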
To relax the inverse strong monotonicity requirement to plain monotonicity, Korpelevich [7] suggested the extragradient method for solving the VIP:
r₁ ∈ Ω, v_n = P_B(r_n − μZr_n), r_{n+1} = P_B(r_n − μZv_n), n ≥ 1,
where Z is monotone and Lipschitz continuous from a non-empty, closed, and convex subset B of a real Hilbert space Ω into Ω, and μ is a positive constant. Weak convergence of this method was established for μ ∈ (0, 1/L). Its primary drawback is that it requires the computation of two metric projections onto B at every iteration; if the set B is not simple, the extragradient method becomes complex and expensive to implement. This motivated numerous authors to propose modified extragradient techniques.
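A minimal sketch of iteration (2) follows. The rotation operator Z(r) = Ar with a skew matrix A is an illustrative choice (not from the paper): it is monotone and 1-Lipschitz but not inverse strongly monotone, so the plain projected-gradient iteration spirals for it, while the extragradient correction damps the rotation:

```python
import numpy as np

def extragradient(proj_B, Z, r1, mu, iters=200):
    """Korpelevich iteration: v_n = P_B(r_n - mu Z r_n),
    r_{n+1} = P_B(r_n - mu Z v_n). Two projections per step."""
    r = np.asarray(r1, dtype=float)
    for _ in range(iters):
        v = proj_B(r - mu * Z(r))
        r = proj_B(r - mu * Z(v))
    return r

# Illustrative instance: B = [-1,1]^2, Z(r) = A r with A skew-symmetric
# (monotone, 1-Lipschitz); the unique solution of VI(B,Z) is the origin.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
proj_box = lambda x: np.clip(x, -1.0, 1.0)
sol = extragradient(proj_box, lambda r: A @ r, np.array([1.0, 0.5]), mu=0.5)
# ||sol|| shrinks geometrically toward 0
```

With μ = 0.5 < 1/L = 1, each unconstrained step contracts the norm by roughly √((1−μ²)² + μ²) ≈ 0.9.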
Censor et al. [8] developed the subgradient extragradient method, which addresses this problem by replacing the second projection onto B with a projection onto a half-space B_n:
r₁ ∈ Ω, v_n = P_B(r_n − μZr_n), B_n = {r ∈ Ω : ⟨r_n − μZr_n − v_n, r − v_n⟩ ≤ 0}, r_{n+1} = P_{B_n}(r_n − μZv_n), n ≥ 1.
Here, μ ∈ (0, 1/L) and L denotes the Lipschitz constant of Z. They demonstrated that (3) converges weakly to a point of VI(B, Z) if Z is monotone and Lipschitz continuous. Several authors have since established the weak convergence of the subgradient extragradient method (3) when Z is monotone (or pseudomonotone); see [22,23] for some recent results.
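The point of the half-space B_n is that projecting onto it has a closed form, so only the first projection needs the (possibly expensive) set B. A hedged sketch, with an illustrative skew monotone operator that is not from the paper:

```python
import numpy as np

def proj_halfspace(a, v, x):
    """Project x onto {r : <a, r - v> <= 0} (the whole space if a = 0)."""
    aa = float(a @ a)
    if aa == 0.0:
        return x
    return x - max(0.0, float(a @ (x - v)) / aa) * a

def subgradient_extragradient(proj_B, Z, r1, mu, iters=200):
    """Censor et al.: the second projection onto B is replaced by one onto
    the half-space B_n = {r : <r_n - mu Z r_n - v_n, r - v_n> <= 0}."""
    r = np.asarray(r1, dtype=float)
    for _ in range(iters):
        u = r - mu * Z(r)
        v = proj_B(u)
        r = proj_halfspace(u - v, v, r - mu * Z(v))
    return r

# Illustrative instance: B = [-1,1]^2, Z(r) = A r (skew, hence monotone);
# VI(B,Z) = {0}.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
sol = subgradient_extragradient(lambda x: np.clip(x, -1.0, 1.0),
                                lambda r: A @ r, np.array([1.0, 0.5]), mu=0.5)
```

Note that by (P2), B ⊆ B_n, so the half-space projection never cuts off the feasible set.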
It was later noted that each iteration of the method of Censor et al. [8] still requires two projections and two operator evaluations. This makes the subgradient extragradient method (3) computationally costly in applications where Z has a complicated structure and is expensive to evaluate.
Tseng [9] presented the following extragradient-type method in 2000:
r₁ ∈ Ω, v_n = P_B(r_n − μZr_n), r_{n+1} = v_n − μ(Zv_n − Zr_n), n ≥ 1,
where μ ∈ (0, 1/L) and L is the Lipschitz constant of Z. He proved that the sequence {r_n} converges weakly to a point of VI(B, Z). The advantage of Tseng's method over that of Censor et al. is that it requires only one projection onto the feasible set per iteration. Numerous extensions of Tseng's algorithm have been documented in the literature (see, e.g., [24,25,26]).
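Tseng's correction step replaces the second projection by a plain operator evaluation, so the iterate r_{n+1} may leave B, which is harmless for the convergence theory. A minimal sketch with an illustrative skew monotone operator (not from the paper):

```python
import numpy as np

def tseng(proj_B, Z, r1, mu, iters=200):
    """Tseng's method (4): one projection per iteration,
    r_{n+1} = v_n - mu (Z v_n - Z r_n), v_n = P_B(r_n - mu Z r_n)."""
    r = np.asarray(r1, dtype=float)
    for _ in range(iters):
        v = proj_B(r - mu * Z(r))
        r = v - mu * (Z(v) - Z(r))
    return r

# Illustrative instance: B = [-1,1]^2, Z(r) = A r with the skew matrix below;
# the unique solution of VI(B,Z) is the origin.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
sol = tseng(lambda x: np.clip(x, -1.0, 1.0), lambda r: A @ r,
            np.array([1.0, 0.5]), mu=0.5)
```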
In 2022, Kwelegano et al. [14] developed the following iterative method for solving the corresponding SEVIP in Hilbert spaces.
Let ℓ ∈ (0, 1), ϑ > 0, λ ∈ (0, 1/ϑ). Select (r₀, s₀) ∈ B × E arbitrarily.
For n ≥ 1, perform the simultaneous computations
v_n = P_B(r_n − λZr_n),
m_n = r_n − μ_n(r_n − v_n),
y_n = P_E(s_n − λJs_n),
w_n = s_n − δ_n(s_n − y_n),
d_n = P_B(r_n − ν_n X*(Xr_n − Yy_n)),
r_{n+1} = β_n f(r_n) + (1 − β_n)[b_n P_{B_n} r_n + (1 − b_n)d_n],
z_n = P_E(s_n − ν_n Y*(Ys_n − Xr_n)),
s_{n+1} = β_n g(s_n) + (1 − β_n)[b_n P_{E_n} s_n + (1 − b_n)z_n],
where
B_n = {r ∈ B : ⟨Zm_n, r − m_n⟩ ≤ 0}, E_n = {s ∈ E : ⟨Jw_n, s − w_n⟩ ≤ 0},
μ_n = ℓ^{t_n} and δ_n = ℓ^{v_n}, with t_n and v_n the smallest non-negative integers t and v, respectively, such that
⟨Zr_n − Z(r_n − ℓ^t(r_n − v_n)), r_n − v_n⟩ ≤ ϑ‖r_n − v_n‖²,
⟨Js_n − J(s_n − ℓ^v(s_n − y_n)), s_n − y_n⟩ ≤ ϑ‖s_n − y_n‖²,
and the sequences {ν_n}, {β_n}, {b_n} ⊂ ℝ⁺. Additionally, the mappings Z : Ω₁ → Ω₁ and J : Ω₂ → Ω₂ are pseudomonotone, uniformly continuous, and sequentially weakly continuous on bounded subsets of the non-empty closed convex subsets B and E of Ω₁ and Ω₂, respectively. They demonstrated that, under appropriate assumptions, the resulting sequence converges strongly to a solution of problem (5). Several scholars have utilized line search strategies to remove the requirement that the underlying mappings Z and J be Lipschitz continuous.
In 2015, Zhao [13] presented the following iterative procedure for the class of quasi-nonexpansive mappings, i.e., Z : Ω₁ → Ω₁ and J : Ω₂ → Ω₂ with non-empty fixed point sets F(Z) and F(J):
s_n = r_n − ν_n X*(Xr_n − Ym_n),
r_{n+1} = β_n s_n + (1 − β_n)Zs_n,
y_n = m_n − ν_n Y*(Ym_n − Xr_n),
m_{n+1} = β_n y_n + (1 − β_n)Jy_n.
It was shown that, under certain assumptions, the procedure in (6) converges weakly to a solution of the split equality fixed point problem, without requiring knowledge of the norms of X and Y.
In 2022, Tan [11] proposed the following method to find a common solution of the fixed point problem for a demicontractive mapping J and the variational inequality problem for a monotone, Lipschitz continuous mapping Z:
x_n = r_n + ψ_n(r_n − r_{n−1}),
m_n = P_B(x_n − μ_n Zx_n),
v_n = m_n + μ_n(Zx_n − Zm_n),
r_{n+1} = β_n f(r_n) + (1 − β_n)[(1 − ζ_n)v_n + ζ_n Jv_n].
The method (7) converges strongly to a common point of VI(B, Z) and F(J), utilizing inertial and viscosity techniques.
Motivated by Tan and Cho [10], Kwelegano et al. [14], Thong and Vuong [27], Zhao [13], and Polyak [28], Mekuriaw, Zegeye, Takele and Tufa [20] proposed an inertial-like subgradient extragradient algorithm and an inertial-like Tseng subgradient extragradient method for solving split equality variational inequality and fixed point problems involving uniformly continuous quasi-monotone mappings for the MVIP and quasi-nonexpansive demiclosed mappings for the fixed point problem in Hilbert spaces. In this direction, we extend the recent results of Mekuriaw, Zegeye, Takele and Tufa [20] and propose a new algorithm for solving the SEVIFPP.

2. Preliminaries

In this section, we give an overview of the core definitions and known results needed to prove our key results. In the following, let Ω be a Hilbert space whose norm and inner product are denoted by ‖·‖ and ⟨·,·⟩, respectively. We write r_n ⇀ r to indicate that the sequence {r_n} converges weakly to r, and r_n → r to indicate that {r_n} converges strongly to r. Recall the metric (nearest point) projection of Ω onto B, denoted P_B: for each r ∈ Ω there is a unique nearest point in B, denoted P_B(r), such that
‖P_B r − r‖ = inf{‖r − v‖ : v ∈ B}.
Here, B is a non-empty closed convex subset of the Hilbert space Ω.
Definition 1 
([29]). Suppose we have a real Hilbert space Ω. Then, a non-linear operator Z : Ω → Ω is called L-Lipschitz continuous if there is a constant L > 0 such that ‖Zr − Zv‖ ≤ L‖r − v‖ for all r, v ∈ Ω.
Definition 2 
([29]). Assume we have a real Hilbert space Ω. Then, a non-linear operator Z : Ω → Ω is called a contraction if there is a constant 0 < L < 1 such that ‖Zr − Zv‖ ≤ L‖r − v‖ for all r, v ∈ Ω.
Definition 3 
([29]). Let Ω be a Hilbert space and Z : Ω → Ω an operator. Z is called nonexpansive if it is Lipschitz continuous with L = 1, i.e., ‖Zr − Zv‖ ≤ ‖r − v‖ for all r, v ∈ Ω.
Definition 4 
([29]). Consider a non-linear operator Z : Ω → Ω, where Ω is a Hilbert space. Then Z is called quasi-nonexpansive if F(Z) is non-empty and
‖Zr − v‖ ≤ ‖r − v‖, ∀r ∈ Ω, v ∈ F(Z).
Definition 5 
([29]). A non-linear operator Z : Ω → Ω is called firmly nonexpansive if 2Z − I is nonexpansive, or equivalently,
⟨r − v, Zr − Zv⟩ ≥ ‖Zr − Zv‖², ∀r, v ∈ Ω.
Equivalently, Z is firmly nonexpansive precisely when Z can be written as
Z = ½(I + J),
where J : Ω → Ω is nonexpansive; projections are firmly nonexpansive.
The following properties of P_B are well known.
(P1)
‖P_B r − P_B v‖² ≤ ⟨P_B r − P_B v, r − v⟩, ∀r, v ∈ Ω.
Thus, P_B is firmly nonexpansive; in particular, P_B is nonexpansive from Ω onto B.
Additionally, there is
(P2)
⟨r − P_B r, v − P_B r⟩ ≤ 0, ∀v ∈ B.
So, for each r ∈ Ω, we obtain
(P3)
‖P_B r − v‖² ≤ ‖r − v‖² − ‖P_B r − r‖², ∀v ∈ B.
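Properties (P2) and (P3) are easy to sanity-check numerically. Here the box B = [0,1]², whose metric projection is coordinatewise clipping, is an illustrative choice (not an example from the paper):

```python
import numpy as np

proj = lambda x: np.clip(x, 0.0, 1.0)   # P_B for the box B = [0,1]^2
rng = np.random.default_rng(0)
for _ in range(1000):
    r = rng.normal(size=2) * 3.0        # arbitrary point of the space
    v = rng.uniform(size=2)             # arbitrary point of B
    p = proj(r)                         # P_B r
    # (P2): <r - P_B r, v - P_B r> <= 0 for every v in B
    assert (r - p) @ (v - p) <= 1e-12
    # (P3): ||P_B r - v||^2 <= ||r - v||^2 - ||P_B r - r||^2
    assert np.sum((p - v)**2) <= np.sum((r - v)**2) - np.sum((p - r)**2) + 1e-12
```

Geometrically, (P2) says the residual r − P_B r makes an obtuse angle with every direction into B, and (P3) is the resulting Pythagoras-type inequality.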
Definition 6 
([30]). Suppose we have a real Hilbert space Ω. A non-linear operator Z : Ω → Ω is called monotone if it satisfies
⟨Zr − Zv, r − v⟩ ≥ 0, ∀r, v ∈ Ω.
Definition 7 
([31]). An operator Z : Ω → Ω, where Ω is a real Hilbert space, is referred to as pseudomonotone if
⟨Zv, r − v⟩ ≥ 0 implies ⟨Zr, r − v⟩ ≥ 0, ∀r, v ∈ Ω.
Definition 8 
([31]). A non-linear operator Z : Ω → Ω is called quasi-monotone if
⟨Zv, r − v⟩ > 0 implies ⟨Zr, r − v⟩ ≥ 0, ∀r, v ∈ B.
Definition 9 
([31]). Consider a non-linear operator Z : Ω → Ω such that F(Z) ≠ ∅. The operator I − Z, where I is the identity operator on Ω, is said to be demiclosed at zero if, for any sequence {r_n} ⊂ Ω, the following holds:
r_n ⇀ r and (I − Z)r_n → 0 imply r ∈ F(Z).
Lemma 1 
([32]). Let Ω be a real Hilbert space. Then, the following hold:
(1) ‖a + b‖² = ‖a‖² + 2⟨a, b⟩ + ‖b‖², ∀a, b ∈ Ω.
(2) ‖a + b‖² ≤ ‖a‖² + 2⟨b, a + b⟩, ∀a, b ∈ Ω.
(3) ‖βa + ζb + νc‖² = β‖a‖² + ζ‖b‖² + ν‖c‖² − βζ‖a − b‖² − βν‖a − c‖² − ζν‖b − c‖², ∀a, b, c ∈ Ω, where β, ζ, ν ∈ [0, 1] with β + ζ + ν = 1.
Lemma 2 
([33]). If Z : Ω → Ω is a continuous and quasi-nonexpansive operator, and B is a closed convex subset of a Hilbert space Ω, then F(Z) is closed, convex and non-empty.
Lemma 3 
([34]). Suppose that Ω = Ω₁ × Ω₂, where Ω₁ and Ω₂ are real Hilbert spaces. If (s, t) ∈ Ω and (s′, t′) = P_B(s, t), where B is a non-empty, closed and convex subset of Ω, then
⟨(s, t) − (s′, t′), (r, v) − (s′, t′)⟩ ≤ 0, for all (r, v) ∈ B.
Lemma 4 
([35]). Let X : Ω → Ω be an operator, and B a closed, convex and non-empty subset of a real Hilbert space Ω. Then the following inequality holds:
‖s − P_B(s − βXs)‖ / β ≤ ‖s − P_B(s − ζXs)‖ / ζ, ∀s ∈ B and β ≥ ζ > 0.
Lemma 5 
([16]). Let Z : Ω → Ω be an operator, Ω a Hilbert space, and B a non-empty, closed, and convex subset of Ω. Each of the following conditions guarantees that MVI(B, Z) is non-empty:
(i) Z is pseudomonotone on B and VI(B, Z) ≠ ∅;
(ii) Z is quasi-monotone on B, Z ≠ 0 on B, and B is bounded;
(iii) Z is quasi-monotone on B, int(B) ≠ ∅, and there exists r ∈ VI(B, Z) such that Zr ≠ 0.
Lemma 6 
([36]). Let {b_n} be a sequence of non-negative real numbers and {b_{n_k}} a subsequence of {b_n} with the property b_{n_k} < b_{n_k + 1} for all k ∈ ℕ. Then, there exists a non-decreasing sequence {a_s} ⊂ ℕ such that lim_{s→∞} a_s = ∞ and
max{b_{a_s}, b_s} ≤ b_{a_s + 1}.
Lemma 7 
([37]). Assume that {α_n} is a sequence of real numbers, {β_n} a sequence of non-negative real numbers, and {ζ_n} ⊂ (0, 1) a sequence with Σ_{n=1}^∞ ζ_n = ∞, satisfying
β_{n+1} ≤ (1 − ζ_n)β_n + ζ_n α_n, n ≥ 1.
If lim sup_{n→∞} α_n ≤ 0, then lim_{n→∞} β_n = 0.
Lemma 8 
([20]). Assume that conditions (C1)–(C6) described in Section 3 hold. Suppose Zr ≠ 0 for all r ∈ B and Js ≠ 0 for all s ∈ E. Let {x_n}, {m_n}, {y_n} and {w_n} be the sequences generated by the proposed algorithm. Let {(x_{n_j}, y_{n_j})} be a subsequence of {(x_n, y_n)} such that ‖x_{n_j} − m_{n_j}‖ → 0, ‖y_{n_j} − w_{n_j}‖ → 0 and (x_{n_j}, y_{n_j}) ⇀ (s, t). Then (s, t) ∈ MVI(B, Z) × MVI(E, J).

3. Refined Algorithm for Split Equality Variational Inequality and Fixed Point Problems

This section covers the convergence analysis of the refined inertial-like subgradient extragradient algorithm. Throughout, we make the following assumptions.
  • (C1) Let the sets B and E be non-empty, closed, and convex subsets of the real Hilbert spaces Ω 1 and Ω 2 , respectively.
  • (C2) Let Z : Ω₁ → Ω₁ and J : Ω₂ → Ω₂ be quasi-monotone and uniformly continuous, with Zr_n ⇀ Zr and Js_n ⇀ Js whenever {r_n} and {s_n} are sequences in B and E, respectively, such that r_n ⇀ r and s_n ⇀ s.
  • (C3) Let M : Ω 1 Ω 1 and N : Ω 2 Ω 2 be quasi-nonexpansive mappings such that I M and I N are demiclosed at zero.
  • (C4) Let X : Ω₁ → Ω₃ and Y : Ω₂ → Ω₃ be bounded linear mappings and let X* and Y* be the adjoints of X and Y, respectively, where Ω₃ is another real Hilbert space.
  • (C5) Let Υ = {(c, d) ∈ (MVI(B, Z) ∩ F(M)) × (MVI(E, J) ∩ F(N)) : Xc = Yd} ≠ ∅.
  • (C6) Let {ϵ_n}, {η_n}, {ξ_n}, {π_n} and {b_n} be sequences satisfying Σ_{n=1}^∞ ϵ_n < ∞ and lim_{n→∞} ϵ_n/β_n = 0, where β_n ∈ (0, 1) with Σ_{n=1}^∞ β_n = ∞ and lim_{n→∞} β_n = 0, and η_n, ξ_n, π_n, b_n ∈ [e, f] ⊂ (0, 1), for some e, f > 0.

Proposed Inertial-like Subgradient Extragradient Algorithm

Initialization: Let r̄, r₀, r₁ ∈ B, s̄, s₀, s₁ ∈ E, 0 ≤ ψ < 1, λ, ζ, κ, δ ∈ (0, 1). Set n = 1. Iterative steps:
Step 1. Given the iterates r_{n−1}, s_{n−1} and r_n, s_n in B × E, choose ψ_n such that 0 ≤ ψ_n ≤ ψ̄_n, where
ψ̄_n = min{ψ, ϵ_n/‖r_n − r_{n−1}‖, ϵ_n/‖s_n − s_{n−1}‖} if r_n ≠ r_{n−1} and s_n ≠ s_{n−1}, and ψ̄_n = ψ otherwise.
Step 2. Set
x_n = r_n + ψ_n(r_{n−1} − r_n) and y_n = s_n + ψ_n(s_{n−1} − s_n).
Step 3. Compute
m_n = P_B(x_n − μ_n Zx_n),
v_n = P_{Z_n}(x_n − μ_n Zm_n),
w_n = P_E(y_n − θ_n Jy_n),
z_n = P_{J_n}(y_n − θ_n Jw_n),
where
Z_n = {r ∈ B : ⟨x_n − μ_n Zx_n − m_n, r − m_n⟩ ≤ 0} and J_n = {s ∈ E : ⟨y_n − θ_n Jy_n − w_n, s − w_n⟩ ≤ 0},
μ_n = ζδ^{t_k}, with t_k the smallest non-negative integer t satisfying
ζδ^t ‖Zx_n − Zm_n‖ ≤ λ‖x_n − m_n‖,
and θ_n = κ^{v_i}, with v_i the smallest non-negative integer v satisfying
κ^v ‖Jy_n − Jw_n‖ ≤ λ‖y_n − w_n‖.
Step 4. Compute
d_n = P_B(x_n − ν_n X*(Xx_n − Yy_n)),
h_n = P_B(π_n d_n + (1 − π_n)Md_n),
g_n = P_B(ξ_n d_n + (1 − ξ_n)Mh_n),
r_{n+1} = β_n r̄ + (1 − β_n)[(1 − η_n)x_n + η_n(b_n v_n + (1 − b_n)g_n)],
p_n = P_E(y_n + ν_n Y*(Xx_n − Yy_n)),
q_n = P_E(π_n p_n + (1 − π_n)Np_n),
f_n = P_E(ξ_n p_n + (1 − ξ_n)Nq_n),
s_{n+1} = β_n s̄ + (1 − β_n)[(1 − η_n)y_n + η_n(b_n z_n + (1 − b_n)f_n)],
where 0 < ν ≤ ν_n ≤ ν̂_n with
ν̂_n = min{ν + 1, ‖Xx_n − Yy_n‖² / (‖X*(Xx_n − Yy_n)‖² + ‖Y*(Yy_n − Xx_n)‖²)}
for n ∈ Ῡ = {m ∈ ℕ : Xx_m ≠ Yy_m}; otherwise, ν_n = ν, for some ν > 0. Set n := n + 1 and go to Step 1.
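The inertial bound of Step 1 and the adaptive step-size rule for ν̂_n can be sketched in a few lines. All names are illustrative; finite-dimensional matrices stand in for X, Y, with the transpose playing the role of the adjoints X*, Y*:

```python
import numpy as np

def psi_bar(psi, eps_n, r_n, r_prev, s_n, s_prev):
    """Step 1: upper bound for the admissible inertial parameter psi_n."""
    dr = np.linalg.norm(r_n - r_prev)
    ds = np.linalg.norm(s_n - s_prev)
    if dr == 0.0 or ds == 0.0:
        return psi
    return min(psi, eps_n / dr, eps_n / ds)

def nu_hat(X, Y, x, y, nu):
    """Adaptive upper step size from the split-equality residual X x - Y y."""
    res = X @ x - Y @ y
    if not np.any(res):
        return nu                         # X x = Y y: keep the default nu
    denom = np.sum((X.T @ res) ** 2) + np.sum((Y.T @ res) ** 2)
    return min(nu + 1.0, float(res @ res) / denom)

# Illustrative call: pick psi_n = psi_bar and extrapolate (Step 2).
r_n, r_prev = np.array([1.0, 0.0]), np.array([0.0, 0.0])
s_n, s_prev = np.array([0.0, 2.0]), np.array([0.0, 0.0])
pb = psi_bar(psi=0.5, eps_n=0.01, r_n=r_n, r_prev=r_prev, s_n=s_n, s_prev=s_prev)
x_n = r_n + pb * (r_prev - r_n)           # inertial point x_n
```

The min with ϵ_n/‖r_n − r_{n−1}‖ is what makes the inertial term summable-in-effect (Remark 1), while ν̂_n keeps the coupled step consistent with the operator norms of X and Y without estimating them directly.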
Remark 1. 
Note that, from (8) and condition (C6), we have ψ_n‖r_n − r_{n−1}‖ ≤ ϵ_n for all n ≥ 1 and ϵ_n/β_n → 0 as n → ∞, which implies that
lim_{n→∞} (ψ_n/β_n)‖r_n − r_{n−1}‖ = 0
and
lim_{n→∞} ψ_n‖r_n − r_{n−1}‖ = lim_{n→∞} β_n · (ψ_n/β_n)‖r_n − r_{n−1}‖ = 0.
Similarly, we get lim_{n→∞} (ψ_n/β_n)‖s_n − s_{n−1}‖ = 0 and lim_{n→∞} ψ_n‖s_n − s_{n−1}‖ = 0.
Remark 2. 
Assume that conditions (C1)–(C6) are satisfied. Then, the line search rules (11) and (12) are well-defined.
Proof. 
If x_n ∈ VI(B, Z), then x_n = P_B(x_n − ζδ^t Zx_n) for every t ≥ 0. Thus, we have x_n = m_n, and hence (11) is satisfied for t = 0. Let x_n ∉ VI(B, Z) and assume, on the contrary, that
ζδ^t ‖Zx_n − Zm_n‖ > λ‖x_n − m_n‖, ∀t ≥ 0.
That is,
(1/λ)‖Zx_n − Z(P_B(x_n − ζδ^t Zx_n))‖ > (1/(ζδ^t))‖x_n − P_B(x_n − ζδ^t Zx_n)‖, ∀t ≥ 0.
Since P_B is continuous, we have
lim_{t→∞} ‖P_B(x_n − ζδ^t Zx_n) − x_n‖ = 0,
which implies, by the uniform continuity of Z on B, that
lim_{t→∞} ‖Z(P_B(x_n − ζδ^t Zx_n)) − Zx_n‖ = 0.
Thus, by substituting (18) in (16), we get
lim_{t→∞} (1/(ζδ^t))‖x_n − P_B(x_n − ζδ^t Zx_n)‖ = 0.
Now, put b_t = P_B(x_n − ζδ^t Zx_n). Then, by the projection property (P2), we have
⟨x_n − ζδ^t Zx_n − b_t, r − b_t⟩ ≤ 0, ∀r ∈ B,
which implies that
⟨(x_n − b_t)/(ζδ^t), r − b_t⟩ + ⟨Zx_n, b_t − x_n⟩ ≤ ⟨Zx_n, r − x_n⟩, ∀r ∈ B.
Taking the limit as t → ∞ in (21) and making use of (17) and (19), we obtain ⟨Zx_n, r − x_n⟩ ≥ 0 for all r ∈ B, which contradicts our assumption that x_n ∉ VI(B, Z). Thus, there exists a non-negative integer t which satisfies (11), so (11) is well-defined. The proof concerning line search rule (12) is similar, and hence the proof is complete. □
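The backtracking rule (11) can be sketched as follows; `proj_B` and the test operator are illustrative stand-ins, and the fallback cap `t_max` is an implementation convenience not present in the paper (Remark 2 guarantees termination):

```python
import numpy as np

def armijo_mu(proj_B, Z, x, zeta, delta, lam, t_max=60):
    """Find mu_n = zeta * delta**t, the first trial step satisfying
    mu_n * ||Z x - Z m|| <= lam * ||x - m||, with m = P_B(x - mu_n Z x)."""
    mu = zeta
    for _ in range(t_max):
        m = proj_B(x - mu * Z(x))
        if mu * np.linalg.norm(Z(x) - Z(m)) <= lam * np.linalg.norm(x - m):
            return mu, m                # accepted: both sides may be 0 if x = m
        mu *= delta                     # t -> t + 1
    return mu, m                        # safety fallback after t_max trials

# Illustrative instance: Z(x) = 2x on B = R^2 (projection = identity).
# Here ||Zx - Zm|| = 2||x - m||, so the rule accepts the first mu <= lam/2.
mu, m = armijo_mu(lambda x: x, lambda x: 2.0 * x,
                  np.array([1.0, 0.0]), zeta=1.0, delta=0.5, lam=0.4)
```

With ζ = 1, δ = 0.5, λ = 0.4, the trials 1, 0.5, 0.25 all fail and μ = 0.125 ≤ λ/2 is accepted, mirroring how the rule adapts to the local Lipschitz behaviour of Z without knowing L.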
Theorem 1. 
Suppose conditions (C1)–(C6) are satisfied. Then, the sequences {r_n} and {s_n} generated by the newly refined algorithm are bounded.
Proof. 
Suppose (c, d) ∈ Υ and let
e_n = (1 − η_n)x_n + η_n[b_n v_n + (1 − b_n)g_n], t_n = (1 − η_n)y_n + η_n[b_n z_n + (1 − b_n)f_n].
Then, we obtain
‖e_n − c‖² ≤ (1 − η_n)‖x_n − c‖² + η_n b_n‖v_n − c‖² + η_n(1 − b_n)‖g_n − c‖²,
and
‖t_n − d‖² ≤ (1 − η_n)‖y_n − d‖² + η_n b_n‖z_n − d‖² + η_n(1 − b_n)‖f_n − d‖².
Moreover, we have from the quasi-nonexpansive property of M (which gives ‖Mh_n − c‖ ≤ ‖h_n − c‖ ≤ ‖d_n − c‖) that
‖g_n − c‖² = ‖P_B(ξ_n d_n + (1 − ξ_n)Mh_n) − P_B(c)‖² ≤ ‖ξ_n d_n + (1 − ξ_n)Mh_n − c‖² = ‖ξ_n(d_n − c) + (1 − ξ_n)(Mh_n − c)‖² = ξ_n‖d_n − c‖² + (1 − ξ_n)‖Mh_n − c‖² − ξ_n(1 − ξ_n)‖Mh_n − d_n‖² ≤ ‖d_n − c‖² − ξ_n(1 − ξ_n)‖Mh_n − d_n‖².
Similarly, we obtain that
‖f_n − d‖² ≤ ‖p_n − d‖² − ξ_n(1 − ξ_n)‖Nq_n − p_n‖².
Substituting (25) into (23), we get
‖e_n − c‖² ≤ (1 − η_n)‖x_n − c‖² + η_n b_n‖v_n − c‖² + η_n(1 − b_n)‖d_n − c‖² − ξ_n η_n(1 − b_n)(1 − ξ_n)‖Mh_n − d_n‖².
Similarly, we get
‖t_n − d‖² ≤ (1 − η_n)‖y_n − d‖² + η_n b_n‖z_n − d‖² + η_n(1 − b_n)‖p_n − d‖² − ξ_n η_n(1 − b_n)(1 − ξ_n)‖Nq_n − p_n‖².
From (13), (22) and Lemma 1, we get
‖r_{n+1} − c‖² = ‖β_n r̄ + (1 − β_n)e_n − c‖² = ‖β_n(r̄ − c) + (1 − β_n)(e_n − c)‖² ≤ β_n‖r̄ − c‖² + (1 − β_n)‖e_n − c‖².
Similarly, we have
‖s_{n+1} − d‖² ≤ β_n‖s̄ − d‖² + (1 − β_n)‖t_n − d‖².
Combining (29) and (30), we get
‖r_{n+1} − c‖² + ‖s_{n+1} − d‖² ≤ β_n(‖r̄ − c‖² + ‖s̄ − d‖²) + (1 − β_n)(‖e_n − c‖² + ‖t_n − d‖²).
Substituting (27) and (28) into (31), we obtain
‖r_{n+1} − c‖² + ‖s_{n+1} − d‖² ≤ β_n(‖r̄ − c‖² + ‖s̄ − d‖²) + (1 − β_n)(1 − η_n)(‖x_n − c‖² + ‖y_n − d‖²) + (1 − β_n)η_n b_n(‖v_n − c‖² + ‖z_n − d‖²) + (1 − β_n)η_n(1 − b_n)(‖d_n − c‖² + ‖p_n − d‖²) − (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)‖Mh_n − d_n‖² − (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)‖Nq_n − p_n‖².
Since c ∈ Z_n, using the definition of v_n in (10), property (P1) of the metric projection and Lemma 1, we have
‖v_n − c‖² = ‖P_{Z_n}(x_n − μ_n Zm_n) − P_{Z_n}(c)‖² ≤ ⟨v_n − c, x_n − μ_n Zm_n − c⟩ = ½(‖v_n − c‖² + ‖x_n − μ_n Zm_n − c‖² − ‖v_n − x_n + μ_n Zm_n‖²) = ½(‖v_n − c‖² + ‖x_n − c‖² + μ_n²‖Zm_n‖² − ‖v_n − x_n‖² − μ_n²‖Zm_n‖² − 2⟨x_n − c, μ_n Zm_n⟩ − 2⟨v_n − x_n, μ_n Zm_n⟩) = ½(‖v_n − c‖² + ‖x_n − c‖² − ‖v_n − x_n‖² − 2⟨v_n − c, μ_n Zm_n⟩) = ½(‖v_n − c‖² + ‖x_n − c‖² − ‖v_n − x_n‖² + 2⟨c − v_n, μ_n Zm_n⟩).
Thus, the inequality in (33) implies that
‖v_n − c‖² ≤ ‖x_n − c‖² − ‖v_n − x_n‖² + 2⟨c − v_n, μ_n Zm_n⟩.
Since c ∈ MVI(B, Z), we have ⟨Zr, r − c⟩ ≥ 0 for all r ∈ B. Taking r = m_n ∈ B and rearranging, we get ⟨Zm_n, c − m_n⟩ ≤ 0.
Hence,
⟨c − v_n, μ_n Zm_n⟩ = μ_n⟨Zm_n, c − v_n⟩ = μ_n⟨Zm_n, c − m_n⟩ + μ_n⟨Zm_n, m_n − v_n⟩ ≤ μ_n⟨Zm_n, m_n − v_n⟩.
Using (34) and (35) and Lemma 1,
‖v_n − c‖² ≤ ‖x_n − c‖² − ‖v_n − x_n‖² + 2μ_n⟨Zm_n, m_n − v_n⟩ = ‖x_n − c‖² − ‖v_n − m_n‖² − ‖m_n − x_n‖² − 2⟨v_n − m_n, m_n − x_n⟩ + 2μ_n⟨Zm_n, m_n − v_n⟩ = ‖x_n − c‖² − ‖v_n − m_n‖² − ‖x_n − m_n‖² − 2⟨m_n − x_n, v_n − m_n⟩ − 2μ_n⟨Zm_n, v_n − m_n⟩ = ‖x_n − c‖² − ‖v_n − m_n‖² − ‖x_n − m_n‖² + 2⟨x_n − μ_n Zm_n − m_n, v_n − m_n⟩.
Since v_n ∈ Z_n, the definition of Z_n and (P2) give ⟨x_n − μ_n Zx_n − m_n, v_n − m_n⟩ ≤ 0. This, together with the definition of μ_n, the Cauchy–Schwarz inequality, and the fact that 2xy ≤ x² + y² for any two real numbers x and y, gives us
2⟨x_n − μ_n Zm_n − m_n, v_n − m_n⟩ = 2⟨x_n − μ_n Zx_n − m_n, v_n − m_n⟩ + 2μ_n⟨Zx_n − Zm_n, v_n − m_n⟩ ≤ 2μ_n⟨Zx_n − Zm_n, v_n − m_n⟩ ≤ 2μ_n‖Zx_n − Zm_n‖‖v_n − m_n‖ ≤ 2λ‖x_n − m_n‖‖m_n − v_n‖ ≤ λ(‖x_n − m_n‖² + ‖m_n − v_n‖²).
Using (37) in (36),
‖v_n − c‖² ≤ ‖x_n − c‖² − (1 − λ)(‖m_n − x_n‖² + ‖v_n − m_n‖²).
Similarly, we get
‖z_n − d‖² ≤ ‖y_n − d‖² − (1 − λ)(‖w_n − y_n‖² + ‖z_n − w_n‖²).
Adding (38) and (39), we get
‖v_n − c‖² + ‖z_n − d‖² ≤ ‖x_n − c‖² + ‖y_n − d‖² − (1 − λ)(‖m_n − x_n‖² + ‖v_n − m_n‖²) − (1 − λ)(‖w_n − y_n‖² + ‖z_n − w_n‖²).
Now, using the definition of d_n in (13) and property (P3) of the metric projection P_B, we have
‖d_n − c‖² = ‖P_B(x_n − ν_n X*(Xx_n − Yy_n)) − c‖² ≤ ‖x_n − ν_n X*(Xx_n − Yy_n) − c‖² − ‖d_n − x_n + ν_n X*(Xx_n − Yy_n)‖² = ‖x_n − c‖² + ν_n²‖X*(Xx_n − Yy_n)‖² − 2ν_n⟨x_n − c, X*(Xx_n − Yy_n)⟩ − ‖d_n − x_n + ν_n X*(Xx_n − Yy_n)‖² = ‖x_n − c‖² + ν_n²‖X*(Xx_n − Yy_n)‖² − 2ν_n⟨Xx_n − Xc, Xx_n − Yy_n⟩ − ‖d_n − x_n + ν_n X*(Xx_n − Yy_n)‖².
Similarly,
‖p_n − d‖² ≤ ‖y_n − d‖² + ν_n²‖Y*(Xx_n − Yy_n)‖² + 2ν_n⟨Yy_n − Yd, Xx_n − Yy_n⟩ − ‖p_n − y_n − ν_n Y*(Xx_n − Yy_n)‖².
Adding (41) and (42) and using (14) for the case Xx_n ≠ Yy_n, together with Xc = Yd, we have
‖d_n − c‖² + ‖p_n − d‖² ≤ ‖x_n − c‖² + ‖y_n − d‖² + ν_n²(‖X*(Xx_n − Yy_n)‖² + ‖Y*(Xx_n − Yy_n)‖²) − 2ν_n⟨Xx_n − Xc − Yy_n + Yd, Xx_n − Yy_n⟩ − ‖d_n − x_n + ν_n X*(Xx_n − Yy_n)‖² − ‖p_n − y_n − ν_n Y*(Xx_n − Yy_n)‖² ≤ ‖x_n − c‖² + ‖y_n − d‖² + ν_n‖Xx_n − Yy_n‖² − 2ν_n‖Xx_n − Yy_n‖² − ‖d_n − x_n + ν_n X*(Xx_n − Yy_n)‖² − ‖p_n − y_n − ν_n Y*(Xx_n − Yy_n)‖² = ‖x_n − c‖² + ‖y_n − d‖² − ν_n‖Xx_n − Yy_n‖² − ‖d_n − x_n + ν_n X*(Xx_n − Yy_n)‖² − ‖p_n − y_n − ν_n Y*(Xx_n − Yy_n)‖².
For the case Xx_n = Yy_n in (14), one can easily show that (43) still holds. Using the definitions of x_n and y_n given in (8) and applying Lemma 1, we have
‖x_n − c‖² = ‖r_n + ψ_n(r_{n−1} − r_n) − c‖² = ‖(1 − ψ_n)(r_n − c) + ψ_n(r_{n−1} − c)‖² ≤ (1 − ψ_n)‖r_n − c‖² + ψ_n‖r_{n−1} − c‖²,
and
‖y_n − d‖² ≤ (1 − ψ_n)‖s_n − d‖² + ψ_n‖s_{n−1} − d‖²,
which imply that
‖x_n − c‖² + ‖y_n − d‖² ≤ (1 − ψ_n)(‖r_n − c‖² + ‖s_n − d‖²) + ψ_n(‖r_{n−1} − c‖² + ‖s_{n−1} − d‖²).
Substituting (40) and (43) into (32),
‖r_{n+1} − c‖² + ‖s_{n+1} − d‖² ≤ β_n(‖r̄ − c‖² + ‖s̄ − d‖²) + (1 − β_n)(1 − η_n)(‖x_n − c‖² + ‖y_n − d‖²) + (1 − β_n)η_n b_n[(‖x_n − c‖² + ‖y_n − d‖²) − (1 − λ)(‖m_n − x_n‖² + ‖v_n − m_n‖²) − (1 − λ)(‖w_n − y_n‖² + ‖z_n − w_n‖²)] + (1 − β_n)η_n(1 − b_n)[(‖x_n − c‖² + ‖y_n − d‖²) − ν_n‖Xx_n − Yy_n‖² − ‖d_n − x_n + ν_n X*(Xx_n − Yy_n)‖² − ‖p_n − y_n − ν_n Y*(Xx_n − Yy_n)‖²] − (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)‖Mh_n − d_n‖² − (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)‖Nq_n − p_n‖².
Hence,
‖r_{n+1} − c‖² + ‖s_{n+1} − d‖² ≤ β_n(‖r̄ − c‖² + ‖s̄ − d‖²) + (1 − β_n)(‖x_n − c‖² + ‖y_n − d‖²) − (1 − β_n)η_n b_n[(1 − λ)(‖m_n − x_n‖² + ‖v_n − m_n‖²) + (1 − λ)(‖w_n − y_n‖² + ‖z_n − w_n‖²)] − (1 − β_n)η_n(1 − b_n)[ν_n‖Xx_n − Yy_n‖² + ‖d_n − x_n + ν_n X*(Xx_n − Yy_n)‖² + ‖p_n − y_n − ν_n Y*(Xx_n − Yy_n)‖²] − (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)‖Mh_n − d_n‖² − (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)‖Nq_n − p_n‖².
Substituting (44) into (46) and taking the properties of λ, η_n, β_n, ξ_n and b_n into account, we obtain
‖r_{n+1} − c‖² + ‖s_{n+1} − d‖² ≤ β_n(‖r̄ − c‖² + ‖s̄ − d‖²) + (1 − β_n)[(1 − ψ_n)(‖r_n − c‖² + ‖s_n − d‖²) + ψ_n(‖r_{n−1} − c‖² + ‖s_{n−1} − d‖²)] − (1 − β_n)η_n b_n[(1 − λ)(‖m_n − x_n‖² + ‖v_n − m_n‖²) + (1 − λ)(‖w_n − y_n‖² + ‖z_n − w_n‖²)] − (1 − β_n)η_n(1 − b_n)[ν_n‖Xx_n − Yy_n‖² + ‖d_n − x_n + ν_n X*(Xx_n − Yy_n)‖² + ‖p_n − y_n − ν_n Y*(Xx_n − Yy_n)‖²] − (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)‖Mh_n − d_n‖² − (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)‖Nq_n − p_n‖²
≤ β_n(‖r̄ − c‖² + ‖s̄ − d‖²) + (1 − β_n)[(1 − ψ_n)(‖r_n − c‖² + ‖s_n − d‖²) + ψ_n(‖r_{n−1} − c‖² + ‖s_{n−1} − d‖²)],
which can be written as
Δ_{n+1}(c, d) ≤ (1 − β_n)[(1 − ψ_n)Δ_n(c, d) + ψ_n Δ_{n−1}(c, d)] + β_n(‖r̄ − c‖² + ‖s̄ − d‖²),
where Δ_n(c, d) = ‖r_n − c‖² + ‖s_n − d‖².
Using (48) and the properties of convex combinations of real numbers, we get
Δ_{n+1}(c, d) ≤ max{‖r̄ − c‖² + ‖s̄ − d‖², max{Δ_n(c, d), Δ_{n−1}(c, d)}}.
But,
max{Δ_n(c, d), Δ_{n−1}(c, d)} ≤ max{(1 − β_{n−1})[(1 − ψ_{n−1})Δ_{n−1}(c, d) + ψ_{n−1}Δ_{n−2}(c, d)] + β_{n−1}(‖r̄ − c‖² + ‖s̄ − d‖²), Δ_{n−1}(c, d)} ≤ max{max{‖r̄ − c‖² + ‖s̄ − d‖², max{Δ_{n−1}(c, d), Δ_{n−2}(c, d)}}, Δ_{n−1}(c, d)} = max{‖r̄ − c‖² + ‖s̄ − d‖², max{Δ_{n−1}(c, d), Δ_{n−2}(c, d)}}.
Substituting (50) into (49) and repeating the process n − 2 times, we get
Δ_{n+1}(c, d) ≤ max{‖r̄ − c‖² + ‖s̄ − d‖², max{Δ₂(c, d), Δ₁(c, d)}},
which implies that {Δ_n(c, d)} is bounded. Therefore, {r_n} and {s_n} are bounded, and hence {x_n}, {y_n}, {Xx_n} and {Yy_n} are bounded too. □
Theorem 2. 
Suppose conditions (C1)–(C6) hold. Let Zr ≠ 0 for all r ∈ B and Js ≠ 0 for all s ∈ E. Then, the sequence {(r_n, s_n)} produced by the refined algorithm strongly converges to a point (c, d) ∈ Υ such that (c, d) = P_Υ(r̄, s̄).
Proof. 
Let (c, d) = P_Υ(r̄, s̄). From the definitions of r_{n+1} and s_{n+1}, (22), (27), (28) and Lemma 1, we have
‖r_{n+1} − c‖² + ‖s_{n+1} − d‖² = ‖β_n(r̄ − c) + (1 − β_n)(e_n − c)‖² + ‖β_n(s̄ − d) + (1 − β_n)(t_n − d)‖² ≤ (1 − β_n)²‖e_n − c‖² + 2β_n⟨r̄ − c, r_{n+1} − c⟩ + (1 − β_n)²‖t_n − d‖² + 2β_n⟨s̄ − d, s_{n+1} − d⟩ ≤ (1 − β_n)²(‖e_n − c‖² + ‖t_n − d‖²) + 2β_n(⟨r̄ − c, r_{n+1} − c⟩ + ⟨s̄ − d, s_{n+1} − d⟩),
which implies
‖r_{n+1} − c‖² + ‖s_{n+1} − d‖² ≤ (1 − β_n)[(1 − η_n)(‖x_n − c‖² + ‖y_n − d‖²) + η_n b_n(‖v_n − c‖² + ‖z_n − d‖²) + η_n(1 − b_n)(‖d_n − c‖² + ‖p_n − d‖²)] − (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)(‖Mh_n − d_n‖² + ‖Nq_n − p_n‖²) + 2β_n⟨(r̄, s̄) − (c, d), (r_{n+1}, s_{n+1}) − (c, d)⟩.
But (40) and (43) imply that ‖v_n − c‖² + ‖z_n − d‖² ≤ ‖x_n − c‖² + ‖y_n − d‖² and ‖d_n − c‖² + ‖p_n − d‖² ≤ ‖x_n − c‖² + ‖y_n − d‖², respectively. Thus, by putting these two inequalities into (51), we get
‖r_{n+1} − c‖² + ‖s_{n+1} − d‖² ≤ (1 − β_n)(1 − η_n)(‖x_n − c‖² + ‖y_n − d‖²) + (1 − β_n)η_n b_n(‖x_n − c‖² + ‖y_n − d‖²) + (1 − β_n)η_n(1 − b_n)(‖x_n − c‖² + ‖y_n − d‖²) − (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)(‖Mh_n − d_n‖² + ‖Nq_n − p_n‖²) + 2β_n⟨(r̄, s̄) − (c, d), (r_{n+1}, s_{n+1}) − (c, d)⟩ ≤ (1 − β_n)(‖x_n − c‖² + ‖y_n − d‖²) + 2β_n⟨(r̄, s̄) − (c, d), (r_{n+1}, s_{n+1}) − (c, d)⟩.
From Remark 1, the sequences {ψ_n‖r_n − r_{n−1}‖} and {ψ_n‖s_n − s_{n−1}‖} are bounded. Thus, we obtain from the boundedness of {r_n} and {s_n} that
‖x_n − c‖² = ‖r_n − c + ψ_n(r_{n−1} − r_n)‖² ≤ ‖r_n − c‖² + ψ_n²‖r_n − r_{n−1}‖² + 2ψ_n‖r_n − c‖‖r_n − r_{n−1}‖ ≤ ‖r_n − c‖² + M₁ψ_n‖r_n − r_{n−1}‖,
for some M₁ ≥ 0. Similarly, we have
‖y_n − d‖² ≤ ‖s_n − d‖² + M₂ψ_n‖s_n − s_{n−1}‖,
for some M₂ ≥ 0.
From (53) and (54), we obtain
‖x_n − c‖² + ‖y_n − d‖² ≤ ‖r_n − c‖² + ‖s_n − d‖² + M₃ψ_n(‖r_n − r_{n−1}‖ + ‖s_n − s_{n−1}‖),
where M₃ = max{M₁, M₂}.
Substituting (55) into (52), we get
‖r_{n+1} − c‖² + ‖s_{n+1} − d‖² ≤ (1 − β_n)(‖r_n − c‖² + ‖s_n − d‖²) + β_n[M₃(ψ_n/β_n)(‖r_n − r_{n−1}‖ + ‖s_n − s_{n−1}‖) + 2⟨(r̄, s̄) − (c, d), (r_{n+1}, s_{n+1}) − (c, d)⟩],
which can be written as
Δ_{n+1}(c, d) ≤ (1 − β_n)Δ_n(c, d) + β_n δ_n,
where δ_n = M₃(ψ_n/β_n)(‖r_n − r_{n−1}‖ + ‖s_n − s_{n−1}‖) + 2⟨(r̄, s̄) − (c, d), (r_{n+1}, s_{n+1}) − (c, d)⟩. From (46), we obtain that
(1 − β_n)η_n b_n[(1 − λ)(‖m_n − x_n‖² + ‖v_n − m_n‖²) + (1 − λ)(‖w_n − y_n‖² + ‖z_n − w_n‖²)] + (1 − β_n)η_n(1 − b_n)[ν_n‖Xx_n − Yy_n‖² + ‖d_n − x_n + ν_n X*(Xx_n − Yy_n)‖² + ‖p_n − y_n − ν_n Y*(Xx_n − Yy_n)‖²] + (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)‖Mh_n − d_n‖² + (1 − β_n)ξ_n η_n(1 − b_n)(1 − ξ_n)‖Nq_n − p_n‖² ≤ (1 − β_n)(‖x_n − c‖² + ‖y_n − d‖²) − Δ_{n+1}(c, d) + β_n(‖r̄ − c‖² + ‖s̄ − d‖²).
Using (55) in (57), we obtain that the left-hand side of (57) is bounded above by
(1 − β_n)[‖r_n − c‖² + ‖s_n − d‖² + M₃ψ_n(‖r_n − r_{n−1}‖ + ‖s_n − s_{n−1}‖)] − Δ_{n+1}(c, d) + β_n(‖r̄ − c‖² + ‖s̄ − d‖²) ≤ Δ_n(c, d) − Δ_{n+1}(c, d) + β_n[M₃(ψ_n/β_n)(‖r_n − r_{n−1}‖ + ‖s_n − s_{n−1}‖) − Δ_n(c, d) + ‖r̄ − c‖² + ‖s̄ − d‖²].
We now consider two cases for the sequence Δ_n(c, d).
Case 1. Suppose Δ_{n+1}(c, d) ≤ Δ_n(c, d) for all n ≥ n_0, for some n_0 ∈ ℕ.
Since Δ_n(c, d) is bounded, lim_{n→∞} Δ_n(c, d) exists. Then, taking the limit on both sides of (58) as n → ∞ and taking into account the conditions on the parameters b_n, λ, β_n, η_n, ξ_n, π_n, we obtain
‖m_n − x_n‖ → 0 and ‖v_n − m_n‖ → 0, and hence ‖v_n − x_n‖ → 0, as n → ∞.
In the same way, we also get
‖w_n − y_n‖ → 0 and ‖z_n − w_n‖ → 0, and hence ‖z_n − y_n‖ → 0.
Furthermore, we have
‖Xx_n − Yy_n‖ → 0, ‖d_n − x_n − ν_n X*(Xx_n − Yy_n)‖ → 0,
and
‖p_n − y_n + ν_n Y*(Xx_n − Yy_n)‖ → 0, ‖Mh_n − d_n‖ → 0, ‖Nq_n − p_n‖ → 0 as n → ∞.
By the definition of d_n in (13) and property (P3) of the metric projection, we have
‖d_n − x_n − ν_n X*(Xx_n − Yy_n)‖² + ‖d_n − x_n‖² ≤ ‖ν_n X*(Xx_n − Yy_n)‖².
Thus, combining (61) and (63), we get
‖d_n − x_n‖ → 0 and also ‖p_n − y_n‖ → 0.
By the nonexpansivity of P_B and from (62), we have
‖g_n − d_n‖ ≤ (1 − ξ_n)‖Mh_n − d_n‖ → 0 as n → ∞.
By the boundedness of (r_n), (s_n) and Theorem 1, there is a subsequence (r_{n_j}, s_{n_j}) of (r_n, s_n) with (r_{n_j}, s_{n_j}) ⇀ (s, t) and
lim sup_{n→∞} ⟨(r̄, s̄) − (c, d), (r_n, s_n) − (c, d)⟩ = lim sup_{j→∞} ⟨(r̄, s̄) − (c, d), (r_{n_j}, s_{n_j}) − (c, d)⟩.
Hence, from (59) and (60), we get
lim_{j→∞} ‖m_{n_j} − x_{n_j}‖ = 0 and lim_{j→∞} ‖w_{n_j} − y_{n_j}‖ = 0.
Now, from (8), we get that ‖x_{n_j} − r_{n_j}‖ → 0 and ‖y_{n_j} − s_{n_j}‖ → 0 as j → ∞, and hence x_{n_j} ⇀ s and y_{n_j} ⇀ t. Using (67) and Lemma 8, we obtain (s, t) ∈ MVI(B, Z) × MVI(E, J). From (64), we obtain
‖d_{n_j} − r_{n_j}‖ ≤ ‖d_{n_j} − x_{n_j}‖ + ‖x_{n_j} − r_{n_j}‖ → 0 as j → ∞.
Thus, we have from (68) that d_{n_j} ⇀ s, which together with (62) and the demiclosedness of (I − M) gives s ∈ F(M). Similarly, we obtain t ∈ F(N). Hence,
(s, t) ∈ (MVI(B, Z) ∩ F(M)) × (MVI(E, J) ∩ F(N)).
Then, using Lemma 1(2), we get
‖Xs − Yt‖² = ‖Xx_{n_j} − Yy_{n_j} + (Xs − Xx_{n_j}) + (Yy_{n_j} − Yt)‖² ≤ ‖Xx_{n_j} − Yy_{n_j}‖² + 2⟨Xs − Xx_{n_j} + Yy_{n_j} − Yt, Xs − Yt⟩.
Since X is a bounded linear mapping, we get Xx_{n_j} ⇀ Xs, and similarly Yy_{n_j} ⇀ Yt.
Thus, taking limsup on both sides of (69) and using (61), we obtain ‖Xs − Yt‖ = 0. Thus Xs = Yt and hence (s, t) ∈ Υ. In addition, since (c, d) = P_Υ(r̄, s̄), Equation (66) and Lemma 3 give
lim sup_{n→∞} ⟨(r̄, s̄) − (c, d), (r_n, s_n) − (c, d)⟩ = lim_{j→∞} ⟨(r̄, s̄) − (c, d), (r_{n_j}, s_{n_j}) − (c, d)⟩ = ⟨(r̄, s̄) − (c, d), (s, t) − (c, d)⟩ ≤ 0.
From (8) and (13),
‖r_{n+1} − r_n‖ ≤ ‖r_{n+1} − x_n‖ + ‖x_n − r_n‖ = ‖β_n(r̄ − x_n) + (1 − β_n)η_n[b_n(v_n − x_n) + (1 − b_n)(g_n − x_n)]‖ + ‖x_n − r_n‖ ≤ β_n‖r̄ − x_n‖ + (1 − β_n)η_n[b_n‖v_n − x_n‖ + (1 − b_n)‖g_n − x_n‖] + ‖x_n − r_n‖.
From (64) and (65), we have that
‖g_n − x_n‖ ≤ ‖g_n − d_n‖ + ‖d_n − x_n‖ → 0 as n → ∞.
Using the assumptions on β_n, b_n and η_n, together with the boundedness of (x_n), Remark 1, (59) and (72), the right-hand side of the last inequality in (71) tends to zero.
Thus, we have
‖r_{n+1} − r_n‖ → 0.
Similarly, we have
‖s_{n+1} − s_n‖ → 0.
Using (73) and (74) together with (70), we have
lim sup_{n→∞} ⟨(r̄, s̄) − (c, d), (r_{n+1}, s_{n+1}) − (c, d)⟩ ≤ 0.
Hence, combining the assumption on β_n, Remark 1 and (75), we obtain lim sup_{n→∞} δ_n ≤ 0. Therefore, from (56) and Lemma 7, we get Δ_n(c, d) → 0, which implies that r_n → c and s_n → d as n → ∞.
Case 2. Assume that there is a subsequence (Δ_{n_r}(c, d)) of (Δ_n(c, d)) such that Δ_{n_r}(c, d) < Δ_{n_r+1}(c, d) for all r ∈ ℕ. In this case, by Lemma 6 there exists a non-decreasing sequence (a_s) ⊂ ℕ such that lim_{s→∞} a_s = ∞ and the following inequality holds for all s ∈ ℕ:
max{Δ_{a_s}(c, d), Δ_s(c, d)} ≤ Δ_{a_s+1}(c, d).
From (56) we have
Δ_{a_s+1}(c, d) ≤ (1 − β_{a_s})Δ_{a_s}(c, d) + β_{a_s}δ_{a_s}.
Combining (76) and (77), we obtain
Δ_{a_s+1}(c, d) ≤ (1 − β_{a_s})Δ_{a_s+1}(c, d) + β_{a_s}δ_{a_s}, and hence Δ_s(c, d) ≤ δ_{a_s}.
This implies that
lim sup_{s→∞} Δ_s(c, d) ≤ lim sup_{s→∞} δ_{a_s}.
Reasoning as in Case 1 shows that lim sup_{s→∞} δ_{a_s} ≤ 0, which implies that lim_{s→∞} Δ_s(c, d) = 0; that is, (r_s, s_s) → (c, d) as s → ∞. □
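Both cases rest on the standard recursion lemma invoked as Lemma 7: if Δ_{n+1} ≤ (1 − β_n)Δ_n + β_nδ_n with β_n ∈ (0, 1), Σβ_n = ∞ and lim sup δ_n ≤ 0, then Δ_n → 0. A minimal numerical sketch of this behavior (our own illustration; the sample sequences β_n = 1/(n+1) and δ_n = n^(−1/2) are assumptions, not taken from the paper):

```python
# Illustration of the recursion Delta_{n+1} <= (1 - beta_n)*Delta_n + beta_n*delta_n:
# with beta_n -> 0, sum(beta_n) = infinity and limsup delta_n <= 0,
# the sequence Delta_n is driven to 0 regardless of its starting value.
def run_recursion(delta0: float, n_max: int = 100000) -> float:
    value = delta0
    for n in range(1, n_max + 1):
        beta = 1.0 / (n + 1)      # beta_n in (0, 1), divergent sum
        delta = 1.0 / n ** 0.5    # delta_n -> 0, so limsup delta_n <= 0
        value = (1 - beta) * value + beta * delta
    return value

print(run_recursion(10.0))  # a small value near 0
```

Running the recursion longer drives the value closer to zero, mirroring the strong convergence obtained in the theorem.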

4. Numerical Examples

Some numerical examples illustrating the behavior of our proposed method are provided in this section.
Example 1. 
Let Ω₁ = Ω₂ = ℝ² be the Hilbert space with norm ‖y‖ = √(y₁² + y₂²) and inner product ⟨y, z⟩ = ⟨(y₁, y₂), (z₁, z₂)⟩ = y₁z₁ + y₂z₂ for y = (y₁, y₂), z = (z₁, z₂) ∈ ℝ². Let B = [0, 2] × [−1, 0] and E = [1, 2] × [0, 1], which are non-empty, closed, and convex subsets of ℝ². We define Z and J on ℝ² by
Z(y₁, y₂) = (0, 1/(1 + e^(−y₂))) and J(y₁, y₂) = (1/(1 + e^(−y₁)), 0).
If
⟨Z(y₁, y₂), (x₁, x₂) − (y₁, y₂)⟩ > 0, i.e., ⟨(0, 1/(1 + e^(−y₂))), (x₁ − y₁, x₂ − y₂)⟩ = (1/(1 + e^(−y₂)))(x₂ − y₂) > 0,
then, since the denominator 1 + e^(−y₂) is positive (because e^(−y₂) > 0), we have 1/(1 + e^(−y₂)) > 0, and the inequality (1/(1 + e^(−y₂)))(x₂ − y₂) > 0 holds only if x₂ > y₂. Now, to check the implication required for quasimonotonicity,
⟨Z(x₁, x₂), (x₁, x₂) − (y₁, y₂)⟩ = ⟨(0, 1/(1 + e^(−x₂))), (x₁ − y₁, x₂ − y₂)⟩ = (1/(1 + e^(−x₂)))(x₂ − y₂).
Since 1/(1 + e^(−x₂)) > 0 and x₂ > y₂, it follows that
⟨Z(x₁, x₂), (x₁, x₂) − (y₁, y₂)⟩ ≥ 0.
This shows that Z is a quasi-monotone operator. In the same way, we can show that J is quasi-monotone.
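The argument above can be probed numerically. The following sketch (our own code, not from the paper) samples random pairs of points and checks the quasimonotonicity implication ⟨Z(y), x − y⟩ > 0 ⟹ ⟨Z(x), x − y⟩ ≥ 0 for Z(y₁, y₂) = (0, 1/(1 + e^(−y₂))):

```python
import math
import random

# Z(y1, y2) = (0, s(y2)) with the sigmoid s(t) = 1/(1 + e^(-t)); since s > 0,
# <Z(y), x - y> = s(y2)(x2 - y2) > 0 forces x2 > y2, and then
# <Z(x), x - y> = s(x2)(x2 - y2) > 0 as well.
def Z(y):
    return (0.0, 1.0 / (1.0 + math.exp(-y[1])))

def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

def quasimonotone_on_samples(trials: int = 10000, seed: int = 0) -> bool:
    rng = random.Random(seed)
    for _ in range(trials):
        x = (rng.uniform(-5, 5), rng.uniform(-5, 5))
        y = (rng.uniform(-5, 5), rng.uniform(-5, 5))
        d = (x[0] - y[0], x[1] - y[1])
        if inner(Z(y), d) > 0 and inner(Z(x), d) < 0:
            return False  # a counterexample would refute quasimonotonicity
    return True

print(quasimonotone_on_samples())  # True
```

No sampled pair violates the implication, consistent with the proof above.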
The distance between the points (y₁, y₂) and (y₁′, y₂′) is
d((y₁, y₂), (y₁′, y₂′)) = ‖(y₁, y₂) − (y₁′, y₂′)‖ = ‖(y₁ − y₁′, y₂ − y₂′)‖ = √((y₁ − y₁′)² + (y₂ − y₂′)²).
The distance between their function values is
d(Z(y₁, y₂), Z(y₁′, y₂′)) = ‖Z(y₁, y₂) − Z(y₁′, y₂′)‖ = ‖(0, 1/(1 + e^(−y₂))) − (0, 1/(1 + e^(−y₂′)))‖ = √(0² + (1/(1 + e^(−y₂)) − 1/(1 + e^(−y₂′)))²) = |1/(1 + e^(−y₂)) − 1/(1 + e^(−y₂′))|.
For uniform continuity, given ε > 0 we want δ > 0 such that √((y₁ − y₁′)² + (y₂ − y₂′)²) < δ implies |1/(1 + e^(−y₂)) − 1/(1 + e^(−y₂′))| < ε. Notice that the function Z(y₂) = 1/(1 + e^(−y₂)) is continuous: it transitions smoothly between 0 and 1 as y₂ moves from −∞ to ∞, with no jumps or discontinuities. It is also bounded, since its values lie between 0 and 1. This function is uniformly continuous on the real line because its derivative is bounded. Specifically,
Z′(y₂) = e^(−y₂)/(1 + e^(−y₂))².
Z′(0) = 1/4, and Z′(y₂) → 0 as y₂ → ∞ and as y₂ → −∞. So the derivative lies between 0 and 1/4; that is,
0 < Z′(y₂) ≤ 1/4 for all y₂ ∈ ℝ.
Since the maximum of Z′(y₂) is 1/4, the Lipschitz constant is L = 1/4, which implies
|Z(y₂) − Z(y₂′)| ≤ (1/4)|y₂ − y₂′|.
If we want
|Z(y₂) − Z(y₂′)| < ε,
we can use the Lipschitz condition (1/4)|y₂ − y₂′| < ε, which gives |y₂ − y₂′| < 4ε. So for a given ε, choosing δ = 4ε yields |Z(y₂) − Z(y₂′)| < ε. Hence, Z is uniformly continuous.
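The Lipschitz bound with constant 1/4 can also be checked numerically. A short sketch (our own code; the sampling range is an arbitrary choice) tests |s(a) − s(b)| ≤ (1/4)|a − b| for the sigmoid s(t) = 1/(1 + e^(−t)):

```python
import math
import random

# The sigmoid has derivative s'(t) = e^(-t)/(1 + e^(-t))^2 bounded by 1/4,
# so by the mean value theorem |s(a) - s(b)| <= (1/4)|a - b| for all a, b.
def sigmoid(t: float) -> float:
    return 1.0 / (1.0 + math.exp(-t))

def lipschitz_holds(trials: int = 10000, seed: int = 1) -> bool:
    rng = random.Random(seed)
    for _ in range(trials):
        a, b = rng.uniform(-50, 50), rng.uniform(-50, 50)
        # small tolerance guards against floating-point rounding
        if abs(sigmoid(a) - sigmoid(b)) > 0.25 * abs(a - b) + 1e-12:
            return False
    return True

print(lipschitz_holds())  # True
```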
In the same way, it can be shown that J is a quasi-monotone and uniformly continuous mapping on ℝ².
If we also define X : ℝ² → ℝ² and Y : ℝ² → ℝ² by X(y₁, y₂) = (4y₁, 2y₂) and Y(y₁, y₂) = (2y₂, 3y₁), respectively, then X and Y are both bounded linear mappings with adjoints X*(y₁, y₂) = (4y₁, 2y₂) and Y*(y₁, y₂) = (3y₂, 2y₁), respectively. Let M, N : ℝ² → ℝ² be given by M(y₁, y₂) = (y₁, 0) and N(y₁, y₂) = (0.5, y₁), so that F(M) ≠ ∅, F(N) ≠ ∅ and both are quasi-nonexpansive mappings. Take λ = 0.3, ζ = 0.6, κ = 0.2, δ = 0.8, ψ = 0.3, ϵ_n = 1/n³, β_n = 1/(1 + n), η_n = 0.5, b_n = 0.6, ξ_n = 0.9 and π_n = 0.75, with given point (r̄, s̄) = ((0.2, −0.2), (1.5, 0.4)) ∈ B × E and initial points (r₀, s₀) = ((0.6, 0.1), (1.6, 0.5)) and (r₁, s₁) = ((1.3, 0), (1.1, 0.3)). Then, conditions (C1)–(C6) are satisfied. Using MATLAB R2020a, we obtain Figure 1, which shows that the sequences generated by the previously introduced inertial-like subgradient extragradient method and by our proposed inertial-like subgradient extragradient algorithm, as given in Section Proposed Inertial-like Subgradient Extragradient Algorithm, converge strongly to the solution (c, d) = ((1.2, −1), (1, 1.6)) (see Figure 1).
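The adjoint of Y can be read off from the defining identity ⟨Yy, z⟩ = ⟨y, Y*z⟩. A quick numerical check (our own code; the function names are illustrative, not from the paper) confirms that for Y(y₁, y₂) = (2y₂, 3y₁) this identity forces Y*(z₁, z₂) = (3z₂, 2z₁):

```python
import random

# <Y y, z> = 2*y2*z1 + 3*y1*z2 and <y, Y* z> = y1*(3*z2) + y2*(2*z1),
# which agree for every pair of vectors.
def Yop(y):
    return (2 * y[1], 3 * y[0])

def Y_adj(z):
    return (3 * z[1], 2 * z[0])

def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

def adjoint_identity_holds(trials: int = 1000, seed: int = 2) -> bool:
    rng = random.Random(seed)
    for _ in range(trials):
        y = (rng.uniform(-9, 9), rng.uniform(-9, 9))
        z = (rng.uniform(-9, 9), rng.uniform(-9, 9))
        if abs(inner(Yop(y), z) - inner(y, Y_adj(z))) > 1e-9:
            return False
    return True

print(adjoint_identity_holds())  # True
```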
Example 2. 
Let Ω = Ω₁ = Ω₂ = Ω₃ = L²([0, 1]) with norm ‖x‖ = (∫₀¹ |x(t)|² dt)^(1/2) for all x ∈ Ω and inner product ⟨x, y⟩ = ∫₀¹ x(t)y(t) dt for all x, y ∈ Ω. Consider B = {x ∈ Ω : ‖x‖ ≤ 3} and E = {y ∈ Ω : ‖y‖ ≤ 6}. Then, B and E are closed and convex subsets of Ω. Let Z, J : Ω → Ω be defined by
Z x(t) = |x(t) − 2| if ‖x‖ > 3, and Z x(t) = x²(t) + 1 if ‖x‖ ≤ 3,
and
J y(t) = |y(t) − 5| if ‖y‖ > 6, and J y(t) = y²(t) + 3 if ‖y‖ ≤ 6.
We have two cases for both operators. Let us consider Z.
Case (i): ‖x‖ > 3. Here Z x(t) = |x(t) − 2|. If
⟨Z y(t), x(t) − y(t)⟩ > 0,
which implies
∫₀¹ |y(t) − 2| (x(t) − y(t)) dt > 0,
then, since |y(t) − 2| ≥ 0, the inequality ∫₀¹ |y(t) − 2|(x(t) − y(t)) dt > 0 holds only if x(t) > y(t). Now, to check the implication required for quasimonotonicity,
⟨Z x(t), x(t) − y(t)⟩ = ∫₀¹ |x(t) − 2| (x(t) − y(t)) dt.
Since |x(t) − 2| ≥ 0 and x(t) ≥ y(t), it follows that
⟨Z x(t), x(t) − y(t)⟩ ≥ 0,
which establishes quasimonotonicity in case (i).
Case (ii): ‖x‖ ≤ 3. Here Z x(t) = x²(t) + 1. If
∫₀¹ (y²(t) + 1)(x(t) − y(t)) dt > 0,
then, since y²(t) + 1 > 0, this inequality holds only if x(t) > y(t). Now we check the implication:
⟨Z x(t), x(t) − y(t)⟩ = ∫₀¹ (x²(t) + 1)(x(t) − y(t)) dt.
Since x²(t) + 1 > 0 and x(t) ≥ y(t), we have
⟨Z x(t), x(t) − y(t)⟩ ≥ 0.
Hence, Z is a quasi-monotone operator. Similarly, J is also quasi-monotone.
We now check the uniform continuity of Z.
Case (i): ‖x‖ > 3, where Z x(t) = |x(t) − 2|. The function x(t) − 2 is affine, and affine functions are uniformly continuous; composing with the absolute value (which is 1-Lipschitz) preserves uniform continuity, so Z x(t) = |x(t) − 2| is uniformly continuous.
Case (ii): ‖x‖ ≤ 3, where Z x(t) = x²(t) + 1. Since ‖x‖ ≤ 3, the quantity x²(t) + 1 is bounded on this domain, and it is uniformly continuous there because the derivative 2x(t) is bounded on B. Hence, for all x₁, x₂ ∈ B, the difference between x₁²(t) + 1 and x₂²(t) + 1 can be made arbitrarily small by controlling the difference between x₁(t) and x₂(t). Therefore, Z is uniformly continuous on both parts of its piecewise definition. In a similar way, J is uniformly continuous.
Now, we check which values of x ∈ B satisfy the MVI. Let us try x = −3. We have x = −3 ∈ MVI(B, Z) if ⟨Z(x), x(t) + 3⟩ ≥ 0 for all x ∈ B. Since ‖x‖ ≤ 3 for every x ∈ B, only one case of Z arises, namely Z x(t) = x²(t) + 1. Then
⟨Z x(t), x(t) + 3⟩ = ⟨x²(t) + 1, x(t) + 3⟩ = ∫₀¹ (x³(t) + 3x²(t) + x(t) + 3) dt.
Checking x(t) = −3,
⟨x²(t) + 1, x(t) + 3⟩ = 0.
Therefore, MVI(B, Z) = {−3}. Similarly, we can see that MVI(E, J) = {−4}. Define the mappings M, N : Ω → Ω by
M x(t) = 2x(t) + 3 if x ∈ B, and M x(t) = −3 if x ∉ B,
and
N y(t) = 2y(t) + 4 if y ∈ E, and N y(t) = −4 if y ∉ E.
If x ∈ B, then M x(t) = 2x(t) + 3 = x(t) gives x(t) = −3; and if x ∉ B, then M x(t) = −3, so in this case also x(t) = −3. Therefore, F(M) = {−3} and likewise F(N) = {−4}, and thus MVI(B, Z) ∩ F(M) = {−3} and MVI(E, J) ∩ F(N) = {−4}.
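The pointwise computations above reduce to simple arithmetic for constant functions. As a sanity check (our own code, not from the paper):

```python
# For the constant function x(t) = -3, the MVI integrand
# x^3 + 3x^2 + x + 3 = (x^2 + 1)(x + 3) vanishes, and -3, -4 are the
# fixed points of M x = 2x + 3 and N y = 2y + 4, respectively.
x = -3.0
assert x**3 + 3 * x**2 + x + 3 == 0.0  # integrand in the MVI test for Z
assert 2 * x + 3 == x                  # M x = x at x = -3
y = -4.0
assert 2 * y + 4 == y                  # N y = y at y = -4
print("all checks passed")
```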
Moreover, we have
‖M x(t) − M(−3)‖ = ‖2x(t) + 3 − (−3)‖ = ‖2x(t) + 6‖;
thus, M is quasi-nonexpansive. Similarly, one can show that N is quasi-nonexpansive. Also note that
(I − M) x(t) = −x(t) − 3 if x ∈ B, and (I − M) x(t) = x(t) + 3 if x ∉ B.
By Definition 9, assume x_n(t) ⇀ x₀(t) and (I − M) x_n(t) → 0. If x ∈ B, then (I − M) x_n(t) = −x_n(t) − 3 → 0, which implies x_n(t) → −3; and if x ∉ B, then (I − M) x_n(t) = x_n(t) + 3 → 0, which also implies x_n(t) → −3. By the convergence property, both cases imply that x₀(t) = −3.
If x₀(t) ∈ B, then
(I − M) x₀(t) = −x₀(t) − 3 = 0,
and if x₀(t) ∉ B, then
(I − M) x₀(t) = x₀(t) + 3 = 0.
Both cases imply that I − M is demiclosed at zero. In a similar way, we can see that I − N is demiclosed at zero.
Define X, Y : Ω → Ω by X x(t) = 8x(t) and Y y(t) = 6y(t). Then X and Y are bounded linear mappings with X* x(t) = 8x(t) and Y* y(t) = 6y(t). Moreover, we have X(−3) = −24 = Y(−4), and hence (−3, −4) ∈ Υ. Taking λ = 0.8, ζ = 0.3, κ = 0.3, δ = 0.6, ψ = 0.5, ϵ_n = 1/n^1.1, β_n = 1/n^0.5, η_n = 0.65, b_n = 0.4, ξ_n = 0.7 and π_n = 0.8, the conditions (C1)–(C6) are satisfied.
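For constant functions, the split equality constraint reduces to scalar arithmetic. A quick check (our own code, not from the paper):

```python
# With X x = 8x and Y y = 6y acting pointwise, the pair of constant
# functions (-3, -4) satisfies the split equality constraint X(-3) = Y(-4).
def X(x):
    return 8 * x

def Y(y):
    return 6 * y

print(X(-3) == Y(-4) == -24)  # True
```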

5. Conclusions

In this study, we introduced a novel inertial-like subgradient extragradient algorithm designed to solve split equality variational inequality and fixed point problems in real Hilbert spaces. The proposed method generates strongly convergent sequences under the assumption that the involved mappings are quasi-monotone and uniformly continuous, an assumption that generalizes the more restrictive Lipschitz continuity and pseudomonotonicity assumptions commonly found in the existing literature. To enhance the performance of the algorithm, we incorporated both inertial and subgradient-based strategies. Additionally, we explored potential applications of our approach to various related problems. Finally, we provided numerical examples to illustrate the effectiveness and practical utility of the proposed methods.

Author Contributions

Conceptualization, K.A. (Khushdil Ahmad) and K.S.; methodology, K.S.; software, K.A. (Khushdil Ahmad); validation, K.A. (Khadija Ahsan) and K.A. (Khushdil Ahmad); formal analysis, K.A. (Khadija Ahsan); investigation, K.A. (Khadija Ahsan); resources, K.A. (Khushdil Ahmad); data curation, K.A. (Khushdil Ahmad); writing—original draft preparation, K.A. (Khadija Ahsan); writing—review and editing, K.A. (Khushdil Ahmad); visualization, K.S.; supervision, K.S.; project administration, K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The second author expresses their thanks to the Office of Research Innovation and Commercialization (ORIC), Government College University, Lahore, Pakistan, for its generous support and for facilitating this research work under project #0421/ORIC/24.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundam. Math. 1922, 3, 133–181. [Google Scholar] [CrossRef]
  2. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences 1964, 258, 4413. [Google Scholar]
  3. Fichera, G. Problemi Elastostatici con Vincoli Unilaterali: Il Problema di Signorini con Ambigue Condizioni al Contorno; Accademia nazionale dei Lincei: Roma, Italy, 1964. [Google Scholar]
  4. Moudafi, A. Alternating CQ-algorithm for convex feasibility and split fixed-point problems. J. Nonlinear Convex. Anal. 2014, 15, 809–818. [Google Scholar]
  5. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2003, 20, 103. [Google Scholar] [CrossRef]
  6. Combettes, P.L. The convex feasibility problem in image recovery. In Advances in Imaging and Electron Physics; Elsevier: Amsterdam, The Netherlands, 1996; Volume 95, pp. 155–270. [Google Scholar]
  7. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756. [Google Scholar]
  8. Censor, Y.; Gibali, A.; Reich, S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011, 26, 827–845. [Google Scholar] [CrossRef]
  9. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
  10. Tan, B.; Cho, S.Y. Inertial extragradient algorithms with non-monotone stepsizes for pseudomonotone variational inequalities and applications. Comput. Appl. Math. 2022, 41, 121. [Google Scholar] [CrossRef]
  11. Tan, B.; Zhou, Z.; Li, S. Viscosity-type inertial extragradient algorithms for solving variational inequality problems and fixed point problems. J. Appl. Math. Comput. 2022, 68, 1387–1411. [Google Scholar] [CrossRef]
  12. Sunthrayuth, P.; Adamu, A.; Muangchoo, K.; Ekvittayaniphon, S. Strongly convergent two-step inertial subgradient extragradient methods for solving quasi-monotone variational inequalities with applications. Commun. Nonlinear Sci. Numer. Simul. 2025, 150, 108959. [Google Scholar] [CrossRef]
  13. Zhao, J. Solving split equality fixed-point problem of quasi-nonexpansive mappings without prior knowledge of operators norms. Optimization 2015, 64, 2619–2630. [Google Scholar] [CrossRef]
  14. Kwelegano, K.M.; Zegeye, H.; Boikanyo, O.A. An Iterative method for split equality variational inequality problems for non-Lipschitz pseudomonotone mappings. Rendiconti del Circolo Matematico di Palermo Series 2 2022, 71, 325–348. [Google Scholar] [CrossRef]
  15. Zheng, L. A double projection algorithm for quasimonotone variational inequalities in Banach spaces. J. Inequalities Appl. 2018, 2018, 256. [Google Scholar] [CrossRef] [PubMed]
  16. Ye, M.; He, Y. A double projection method for solving variational inequalities without monotonicity. Comput. Optim. Appl. 2015, 60, 141–150. [Google Scholar] [CrossRef]
  17. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295. [Google Scholar] [CrossRef]
  18. Rahman, L.U.; Arshad, M.; Thabet, S.T.M.; Kedim, I. Iterative construction of fixed points for functional equations and fractional differential equations. J. Math. 2023, 2023, 6677650. [Google Scholar] [CrossRef]
  19. Ullah, K.; Thabet, S.T.M.; Kamal, A.; Ahmad, J.; Ahmad, F. Convergence analysis of an iteration process for a class of generalized nonexpansive mappings with application to fractional differential equations. Discret. Dyn. Nat. Soc. 2023, 2023, 8432560. [Google Scholar] [CrossRef]
  20. Mekuriaw, G.; Zegeye, H.; Takele, M.H.; Tufa, A.R. Algorithms for split equality variational inequality and fixed problems. Appl. Anal. 2024, 103, 3267–3294. [Google Scholar] [CrossRef]
  21. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  22. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef]
  23. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412. [Google Scholar] [CrossRef]
  24. Shehu, Y. Single projection algorithm for variational inequalities in Banach spaces with application to contact problem. Acta Math. Sci. 2020, 40, 1045–1063. [Google Scholar] [CrossRef]
  25. Zegeye, H.; Shahzad, N. Extragradient method for solutions of variational inequality problems in Banach spaces. Abstr. Appl. Anal. 2013, 2013, 832548. [Google Scholar] [CrossRef]
  26. Zhu, L.J.; Liou, Y.C. A Tseng-Type algorithm with self-adaptive techniques for solving the split problem of fixed points and pseudomonotone variational inequalities in Hilbert spaces. Axioms 2021, 10, 152. [Google Scholar] [CrossRef]
  27. Thong, D.V.; Vuong, P.T. Modified Tseng’s extragradient methods for solving pseudo-monotone variational inequalities. Optimization 2019, 68, 2207–2226. [Google Scholar] [CrossRef]
  28. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. Ussr Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  29. Kirk, W.A. Contraction Mappings and Extensions. In Handbook of Metric Fixed Point Theory; Kirk, W.A., Sims, B., Eds.; Springer: Dordrecht, The Netherlands, 2001. [Google Scholar]
  30. Kesornprom, S.; Cholamjiak, P. A modified inertial proximal gradient method for minimization problems and applications. AIMS Math. 2022, 7, 8147–8161. [Google Scholar] [CrossRef]
  31. Thong, D.V.; Cholamjiak, P.; Michael, T.; Cho, Y.J. Strong convergence of inertial subgradient extragradient algorithm for solving pseudomonotone equilibrium problems. Optim. Lett. 2022, 16, 545–573. [Google Scholar] [CrossRef]
  32. Zegeye, H.; Shahzad, N. Convergence of Mann type iteration method for generalized asymptotically nonexpansive mappings. Comput. Math. Appl. 2011, 62, 4007–4014. [Google Scholar] [CrossRef]
  33. Dotson, W., Jr. Fixed points of quasi-nonexpansive mappings. J. Aust. Math. Soc. 1972, 13, 167–170. [Google Scholar] [CrossRef]
  34. Boikanyo, O.A.; Zegeye, H. Split equality variational inequality problems for pseudomonotone mappings in Banach spaces. Stud. Univ. Babes-Bolyai Math. 2021, 66, 139–158. [Google Scholar] [CrossRef]
  35. Denisov, S.; Semenov, V.; Chabak, L. Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 2015, 51, 757–765. [Google Scholar] [CrossRef]
  36. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
  37. Maingé, P.E. A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 47, 1499–1515. [Google Scholar] [CrossRef]
Figure 1. Convergence of (r_n, s_n) with tolerance D_n < 10^(−4) [10,13,14,20].
