Article

On the Convergence of the Yosida–Cayley Variational Inclusion Problem with the XOR Operation and Inertial Extrapolation Scheme

1 Department of Mathematics, Aligarh Muslim University, Aligarh 202002, UP, India
2 Department of Mechanical Engineering, College of Engineering, Qassim University, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2447; https://doi.org/10.3390/math13152447
Submission received: 8 May 2025 / Revised: 20 July 2025 / Accepted: 24 July 2025 / Published: 29 July 2025

Abstract

This article studies the structure and properties of real-ordered Hilbert spaces, highlighting the roles of the XOR and XNOR logical operators in conjunction with the Yosida and Cayley approximation operators. These fundamental elements are utilized to formulate the Yosida–Cayley Variational Inclusion Problem (YCVIP) and its associated Yosida–Cayley Resolvent Equation Problem (YCREP). To address these problems, we develop and examine several solution methods, with particular attention given to the convergence behavior of the proposed algorithms. We prove both the existence of solutions and the strong convergence of iterative sequences generated under the influence of the aforesaid operators. The theoretical results are supported by a numerical result, demonstrating the practical applicability and efficiency of the suggested approaches.

1. Introduction

Variational inequality problems originate from the study of functionals constrained to convex sets and were formally introduced by Stampacchia in 1966 [1]. In his foundational work, Stampacchia developed the concept of variational inequalities to address problems involving inequalities and used the Lax–Milgram theorem [2] to investigate the regularity of solutions to partial differential equations. Since then, variational inequalities have found extensive applications in various fields, such as artificial intelligence and data science, optimal control, mechanics, finance, transportation equilibrium, and engineering sciences. Rockafellar presented a significant extension of this concept in 1976 [3], which introduced the variational inclusion problem—a broader framework that encompasses variational inequalities. In recent decades, variational inequalities and their generalizations have been actively studied in multiple mathematical settings by numerous researchers [4,5,6,7,8,9,10,11]. Hassouni and Moudafi [12] examined a class of combined variational inequalities involving single-valued mappings, later termed variational inclusions. A typical variational inclusion seeks a point at which a maximal monotone operator maps to zero. These problems serve as a unifying framework that generalizes concepts from variational inequalities, equilibrium and optimization problems, complementarity systems, mechanical models, and Nash-type equilibrium formulations.
In parallel with the development of variational theories, logical operations such as exclusive OR (XOR) and exclusive NOR (XNOR) play an essential role in digital computation. The XOR operation returns true if the two Boolean inputs differ, while XNOR yields true when the inputs are equal. These binary operations are fundamental in fault-tolerant systems, parity checking, cryptographic algorithms, and pseudorandom number generation. Both XOR and XNOR are associative and commutative and are instrumental in applications involving linear separability and logic design [13,14,15,16,17,18].
To solve non-linear operator equations and variational problems in Hilbert spaces, various approximation operators have been employed, including the resolvent, Cayley, and Yosida operators. These operators are particularly effective in approximating derivatives of convex functionals and are widely used in the study of diffusion, wave propagation, and heat transfer problems. The development of iterative algorithms based on generalized resolvent operators has been an active area of research [19], with particular attention given to improving the convergence rates of such algorithms. An important acceleration technique involves the use of inertial methods. Initially proposed by Polyak in 1964 [20] for the heavy-ball method, the inertial approach generates each new iteration using a linear combination of the two preceding terms. Chang et al. [21] applied an inertial forward–backward splitting technique to address variational inclusion problems in Hilbert spaces. More recently, Gebrie and Bedane [22] introduced a computational algorithm based on inertial extrapolation to solve generalized split common fixed-point problems, demonstrating its wide applicability. These contributions have inspired further advances in the field, including the development of new algorithms with enhanced convergence properties. Rajpoot et al. [23] recently studied a Yosida variational inclusion problem and its corresponding Yosida-resolvent equation. They employed an inertial extrapolation scheme and provided supporting numerical examples to validate their theoretical results.
Motivated by these recent advances, the present study revisits the Yosida–Cayley resolvent equation and its associated variational inclusion problem. We propose a novel iterative approach based on inertial extrapolation and analyze its convergence properties within Hilbert spaces. To support our theoretical findings, we provide numerical results obtained using MATLAB R2024b. The results are illustrated through convergence graphs and estimation tables, demonstrating the efficiency of the proposed algorithms.

2. Preliminaries

Let Λ˘ be a real-ordered Hilbert space equipped with the norm ∥·∥ and the inner product ⟨·, ·⟩. Denote by κ(Λ˘) the collection of non-empty compact subsets of Λ˘, and let κ_{Λ˘} ⊂ Λ˘ be a closed convex cone. Furthermore, let 2^{Λ˘} denote the family of non-empty subsets of Λ˘.
Definition 1
([18]). Let π₁, π₂ ∈ Λ˘, and let κ_{Λ˘} ⊂ Λ˘ be a cone, so that λˇπ₁ ∈ κ_{Λ˘} whenever π₁ ∈ κ_{Λ˘} and λˇ > 0. The cone κ_{Λ˘} is said to be normal if and only if there exists a constant λˇ_{Π_{Λ˘}} > 0 such that
∥π₁∥ ≤ λˇ_{Π_{Λ˘}}∥π₂∥
whenever 0 ≤ π₁ ≤ π₂.
Definition 2
([18]). A cone κ_{Λ˘} induces a partial order ≤ on Λ˘ defined by
π₁ ≤ π₂ ⟺ π₂ − π₁ ∈ κ_{Λ˘}.
Two elements π₁, π₂ ∈ Λ˘ are said to be comparable, denoted by π₁ ∝ π₂, if either π₁ ≤ π₂ or π₂ ≤ π₁.
Definition 3
([16]). Let ⊕ and ⊙ denote the XOR and XNOR operations, respectively. Consider π₁, π₂ ∈ Λ˘, and let ∨ and ∧ denote the least upper bound (lub) and the greatest lower bound (glb), respectively. The following properties are satisfied:
 (i) π₁ ∧ π₂ = inf{π₁, π₂};
 (ii) π₁ ∨ π₂ = sup{π₁, π₂};
 (iii) π₁ ⊕ π₂ = (π₁ − π₂) ∨ (π₂ − π₁);
 (iv) π₁ ⊙ π₂ = (π₁ − π₂) ∧ (π₂ − π₁);
 (v) π₁ ⊕ π₁ = 0, π₁ ⊕ π₂ = π₂ ⊕ π₁, π₁ ⊙ π₁ = 0, π₁ ⊙ π₂ = π₂ ⊙ π₁, and π₁ ⊕ π₂ = −(π₁ ⊙ π₂);
 (vi) if π₁ ∝ 0, then −π₁ ⊕ 0 ≤ π₁ ≤ π₁ ⊕ 0;
 (vii) (λˇπ₁) ⊕ (λˇπ₂) = |λˇ|(π₁ ⊕ π₂), for any scalar λˇ;
 (viii) if π₁ ∝ π₂, then 0 ≤ π₁ ⊕ π₂;
 (ix) if π₁ ∝ π₂, then π₁ ⊕ π₂ = 0 if and only if π₁ = π₂.
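On the real line with its usual order (the setting of the numerical example in Section 7), these operations take a concrete form: π₁ ∨ π₂ = max{π₁, π₂}, π₁ ∧ π₂ = min{π₁, π₂}, π₁ ⊕ π₂ = |π₁ − π₂|, and π₁ ⊙ π₂ = −|π₁ − π₂|. The short Python sketch below is our own illustration, not part of the paper; it checks several of the listed properties for scalars.

```python
# Lattice operations on R with the usual order (illustrative sketch):
# x ∨ y = max, x ∧ y = min, hence x ⊕ y = (x − y) ∨ (y − x) = |x − y|
# and x ⊙ y = (x − y) ∧ (y − x) = −|x − y|.

def lub(x, y):       # x ∨ y
    return max(x, y)

def glb(x, y):       # x ∧ y
    return min(x, y)

def xor_op(x, y):    # x ⊕ y
    return lub(x - y, y - x)

def xnor_op(x, y):   # x ⊙ y
    return glb(x - y, y - x)

x, y, lam = 3, -7, -2                       # integers keep the checks exact
assert xor_op(x, x) == 0                    # x ⊕ x = 0
assert xor_op(x, y) == xor_op(y, x)         # commutativity of ⊕
assert xor_op(x, y) == -xnor_op(x, y)       # x ⊕ y = −(x ⊙ y)
assert xor_op(lam * x, lam * y) == abs(lam) * xor_op(x, y)   # property (vii)
assert xor_op(x, y) >= 0                    # any two reals are comparable
print(xor_op(x, y), xnor_op(x, y))          # 10 -10
```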
Proposition 1
([18]). Let κ_{Λ˘} be a normal cone with normal constant λˇ_{Π_{Λ˘}} > 0. Then for all π₁, π₂ ∈ Λ˘, the following conditions hold:
 (i) ∥0 ⊕ 0∥ = ∥0∥ = 0;
 (ii) ∥π₁ ∨ π₂∥ ≤ ∥π₁∥ ∨ ∥π₂∥ ≤ ∥π₁∥ + ∥π₂∥;
 (iii) ∥π₁ ⊕ π₂∥ ≤ ∥π₁ − π₂∥ ≤ λˇ_{Π_{Λ˘}}∥π₁ ⊕ π₂∥;
 (iv) if π₁ ∝ π₂, then ∥π₁ ⊕ π₂∥ = ∥π₁ − π₂∥.
Definition 4.
A mapping p : Λ˘ → Λ˘ is said to be Lipschitz continuous if there exists a constant λˇ_p > 0 such that
∥p(π₁) ⊕ p(π₂)∥ ≤ λˇ_p∥π₁ ⊕ π₂∥, ∀ π₁, π₂ ∈ Λ˘.
Definition 5.
A mapping p : Λ˘ → Λ˘ is said to be a δ_p-ordered non-extended mapping if there exists a constant δ_p > 0 such that
∥p(π₁) ⊕ p(π₂)∥ ≥ δ_p∥π₁ ⊕ π₂∥, ∀ π₁, π₂ ∈ Λ˘.
Definition 6.
Let Σ : Λ˘ × Λ˘ × Λ˘ → Λ˘ be a single-valued mapping, and let Δ : Λ˘ × Λ˘ → 2^{Λ˘} be a multi-valued mapping. Then
 (i) the mapping Σ is said to be Lipschitz continuous in the first argument if there exists a constant λˇ_{Σ₁} > 0 such that, for any μ₁ ∈ Δ(·, π₁), μ₂ ∈ Δ(·, π₂),
∥Σ(μ₁, ·, ·) ⊕ Σ(μ₂, ·, ·)∥ ≤ λˇ_{Σ₁}∥μ₁ ⊕ μ₂∥, ∀ π₁, π₂ ∈ Λ˘;
 (ii) the mapping Σ is said to be Lipschitz continuous in the second argument if there exists a constant λˇ_{Σ₂} > 0 such that, for any μ₁ ∈ Δ(·, π₁), μ₂ ∈ Δ(·, π₂),
∥Σ(·, μ₁, ·) ⊕ Σ(·, μ₂, ·)∥ ≤ λˇ_{Σ₂}∥μ₁ ⊕ μ₂∥, ∀ π₁, π₂ ∈ Λ˘.
Lipschitz continuity in the third argument, with constant λˇ_{Σ₃} > 0, is defined analogously.
Definition 7.
Let p : Λ˘ → Λ˘ be a single-valued mapping, and let Δ : Λ˘ × Λ˘ → 2^{Λ˘} be a multi-valued mapping. Then
 (i) the mapping p is called a comparison mapping if, for all π₁, π₂ ∈ Λ˘ with π₁ ∝ π₂, π₁ ∝ p(π₁), and π₂ ∝ p(π₂), it holds that
p(π₁) ∝ p(π₂);
 (ii) the comparison mapping Δ is said to be an α-non-ordinary difference mapping if there exist ϑ_{π₁} ∈ Δ(·, π₁) and ϑ_{π₂} ∈ Δ(·, π₂) such that
(ϑ_{π₁} ⊕ ϑ_{π₂}) ⊕ α(π₁ ⊕ π₂) = 0, ∀ π₁, π₂ ∈ Λ˘;
 (iii) the comparison mapping Δ is called a λˇ-ordered rectangular mapping if there exist ϑ_{π₁} ∈ Δ(·, π₁) and ϑ_{π₂} ∈ Δ(·, π₂) such that
⟨ϑ_{π₁} ⊕ ϑ_{π₂}, π₁ ⊕ π₂⟩ ≥ λˇ∥π₁ ⊕ π₂∥², ∀ π₁, π₂ ∈ Λ˘, λˇ > 0;
 (iv) the mapping Δ is a λˇ-weak-ordered different mapping if there exist a constant λˇ > 0 and elements ϑ_{π₁} ∈ Δ(·, π₁) and ϑ_{π₂} ∈ Δ(·, π₂) such that
λˇ(ϑ_{π₁} ⊕ ϑ_{π₂}) ∝ (π₁ ⊕ π₂), ∀ π₁, π₂ ∈ Λ˘.
Definition 8.
A mapping ψ : Λ˘ → κ(Λ˘) is said to be D-Lipschitz continuous if there exists a constant λˇ_{Dψ} > 0 such that
D(ψ(π₁), ψ(π₂)) ≤ λˇ_{Dψ}∥π₁ ⊕ π₂∥, ∀ π₁, π₂ ∈ Λ˘,
where D(·, ·) denotes the Hausdorff metric on κ(Λ˘).
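For intuition, the Hausdorff metric between compact sets A and B is D(A, B) = max{ sup_{a∈A} d(a, B), sup_{b∈B} d(A, b) }. The following sketch is ours; the finite subsets of ℝ are only illustrative stand-ins for compact sets, and it recovers the D-Lipschitz constant 1/7 for the single-valued choice ψ(π) = π/7 used later in Section 7.

```python
# Minimal sketch of the Hausdorff distance D(A, B) for finite subsets of R.

def hausdorff(A, B):
    d_ab = max(min(abs(a - b) for b in B) for a in A)   # sup_{a in A} d(a, B)
    d_ba = max(min(abs(a - b) for a in A) for b in B)   # sup_{b in B} d(A, b)
    return max(d_ab, d_ba)

# Single-valued case psi(pi) = {pi/7}: D(psi(pi1), psi(pi2)) = |pi1 - pi2| / 7.
print(hausdorff([2.0 / 7], [(-1.5) / 7]))   # 0.5 = |2 - (-1.5)| / 7
```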
Definition 9
([19]). Let Δ : Λ˘ × Λ˘ → 2^{Λ˘} be a multi-valued mapping. The resolvent operator associated with Δ(·, π₁), denoted by β_{(τ,λˇ)}^{Δ(·,π₁)} : Λ˘ → Λ˘, is defined, for all π₁, π₂ ∈ Λ˘ and λˇ > 0, by
β_{(τ,λˇ)}^{Δ(·,π₁)}(π₂) = [τ + λˇΔ(·, π₁)]⁻¹(π₂),
where τ denotes the identity operator on Λ ˘ .
Definition 10.
The Yosida approximation operator Y_{(τ,λˇ)}^{Δ(·,π₁)} : Λ˘ → Λ˘ is defined as
Y_{(τ,λˇ)}^{Δ(·,π₁)}(π₂) = (1/λˇ)[τ − β_{(τ,λˇ)}^{Δ(·,π₁)}](π₂), ∀ π₁, π₂ ∈ Λ˘, λˇ > 0,
where τ denotes the identity operator on Λ ˘ .
Definition 11.
The Cayley approximation operator associated with the multi-valued mapping Δ(·, π₁), denoted by C_{(τ,λˇ)}^{Δ(·,π₁)} : Λ˘ → Λ˘, is defined as
C_{(τ,λˇ)}^{Δ(·,π₁)}(π₂) = [2β_{(τ,λˇ)}^{Δ(·,π₁)} − τ](π₂), ∀ π₁, π₂ ∈ Λ˘, λˇ > 0,
where τ is the identity operator.
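To make these three operators concrete, the sketch below (our illustration) evaluates them on ℝ for a linear single-valued choice of Δ; the values Δ(π, π) = (17/5)π and λˇ = 3 are borrowed from the numerical example in Section 7, where the resolvent of a linear map x ↦ a·x reduces to division by 1 + λˇa.

```python
# Resolvent, Yosida, and Cayley operators on R for a linear map (sketch).

lam = 3.0
a = 17.0 / 5.0            # Delta(pi, pi) = (17/5) pi, as in Section 7

def resolvent(x):
    # beta(x) = (I + lam * Delta)^{-1}(x) = x / (1 + lam * a) = (5/56) x
    return x / (1.0 + lam * a)

def yosida(x):
    # Y(x) = (1/lam) * (I - beta)(x)
    return (x - resolvent(x)) / lam

def cayley(x):
    # C(x) = (2 * beta - I)(x)
    return 2.0 * resolvent(x) - x

x = 1.0
print(resolvent(x))   # 5/56  ≈ 0.0893
print(yosida(x))      # 51/168 = 17/56 ≈ 0.3036
print(cayley(x))      # -46/56 ≈ -0.8214, so |C(x)| = (46/56)|x|
```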
We need the following lemmas to prove the main results of this paper.
Lemma 1
([15]). Let Δ : Λ˘ × Λ˘ → 2^{Λ˘} be a γ-ordered rectangular multi-valued mapping with respect to the resolvent operator β_{(τ,λˇ)}^{Δ(·,π₁)}. Then, for all l̆, m̆ ∈ Λ˘, the following inequality holds:
∥β_{(τ,λˇ)}^{Δ(·,π₁)}(l̆) ⊕ β_{(τ,λˇ)}^{Δ(·,π₁)}(m̆)∥ ≤ θ∥l̆ ⊕ m̆∥,
where θ = 1/(γλˇ − 1), provided that λˇ > 1/γ.
Thus, the resolvent operator β ( τ , λ ˇ ) Δ ( · , π 1 ) is Lipschitz-type continuous.
Lemma 2
([14]). Let Δ : Λ˘ × Λ˘ → 2^{Λ˘} be a (γ, λˇ)-weak-ordered rectangular different multi-valued mapping associated with the resolvent operator β_{(τ,λˇ)}^{Δ(·,π₁)}. Let Y_{(τ,λˇ)}^{Δ(·,π₁)} be the corresponding Yosida approximation operator. Then, for all l̆, m̆ ∈ Λ˘, the following inequality holds:
∥Y_{(τ,λˇ)}^{Δ(·,π₁)}(l̆) ⊕ Y_{(τ,λˇ)}^{Δ(·,π₁)}(m̆)∥ ≤ θ′∥l̆ ⊕ m̆∥,
where θ′ = λˇ/(γλˇ − 1), provided that λˇ > 1/γ.
Thus, the Yosida approximation operator Y ( τ , λ ˇ ) Δ ( · , π 1 ) is Lipschitz-type continuous.
Lemma 3
([17]). Let Δ : Λ˘ × Λ˘ → 2^{Λ˘} be a (γ, λˇ)-weak-ordered rectangular different multi-valued mapping with respect to the resolvent operator β_{(τ,λˇ)}^{Δ(·,π₁)}, and let C_{(τ,λˇ)}^{Δ(·,π₁)} denote the associated Cayley approximation operator. Then, for all l̆, m̆ ∈ Λ˘, the following inequality holds:
∥C_{(τ,λˇ)}^{Δ(·,π₁)}(l̆) ⊕ C_{(τ,λˇ)}^{Δ(·,π₁)}(m̆)∥ ≤ (2θ + 1)∥l̆ ⊕ m̆∥,
where θ = 1/(γλˇ − 1), provided that λˇ > 1/γ.
Thus, the Cayley approximation operator C ( τ , λ ˇ ) Δ ( · , π 1 ) is Lipschitz-type continuous.

3. The Yosida–Cayley Variational Inclusion Problem (YCVIP)

Let p : Λ˘ → Λ˘, Δ : Λ˘ × Λ˘ → 2^{Λ˘}, Σ : Λ˘ × Λ˘ × Λ˘ → Λ˘, and ψ, ϕ, φ : Λ˘ → κ(Λ˘) be mappings. Let Y_{(τ,λˇ)}^{Δ(·,π)} and C_{(τ,λˇ)}^{Δ(·,π)} denote the Yosida and Cayley approximation operators associated with Δ(·, π), respectively. Find π ∈ Λ˘, l̆ ∈ ψ(π), m̆ ∈ ϕ(π), and n̆ ∈ φ(π) such that
0 ∈ Y_{(τ,λˇ)}^{Δ(·,π)}(C_{(τ,λˇ)}^{Δ(·,π)}(π)) + Σ(l̆, m̆, n̆) ⊕ Δ(p(π), π), (1)
where λˇ > 0 is a constant and τ denotes the identity mapping on Λ˘.
Special Case 1. 
If Y_{(τ,λˇ)}^{Δ(·,π)} = τ, Σ(l̆, m̆, n̆) = 0, and Δ(p(π), π) = Δ(π), then (1) reduces to the problem of finding π ∈ Λ˘ such that
0 ∈ C_{(τ,λˇ)}^{Δ(·,π)}(π) + Δ(π).
Special Case 2. 
If Y_{(τ,λˇ)}^{Δ(·,π)}(C_{(τ,λˇ)}^{Δ(·,π)}(π)) = 0, Σ(l̆, m̆, n̆) = 0, and Δ(p(π), π) = Δ(π), then (1) further simplifies to the classical inclusion problem of finding π ∈ Λ˘ such that
0 ∈ Δ(π).
This is the fundamental variational inclusion problem studied by Rockafellar [3].

4. Fixed-Point Formulation and Iterative Algorithms

Lemma 4.
The Yosida–Cayley Variational Inclusion Problem (1) admits a solution π ∈ Λ˘, with corresponding elements l̆ ∈ ψ(π), m̆ ∈ ϕ(π), and n̆ ∈ φ(π), if and only if the following condition is satisfied:
p(π) = β_{(τ,λˇ)}^{Δ(·,π)}[p(π) + λˇ(Σ(l̆, m̆, n̆) ⊕ Y_{(τ,λˇ)}^{Δ(·,π)}(C_{(τ,λˇ)}^{Δ(·,π)}(π)))], (2)
where λˇ > 0, and τ denotes the identity operator.
Proof. 
Suppose π Λ ˘ , l ˘ ψ ( π ) , m ˘ ϕ ( π ) , and n ˘ φ ( π ) satisfy Equation (2). Then we have,
p ( π ) = β ( τ , λ ˇ ) Δ ( . , π ) p ( π ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) = τ + λ ˇ Δ ( . , π ) 1 p ( π ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) τ + λ ˇ Δ ( . , π ) p ( π ) = p ( π ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) p ( π ) + λ ˇ Δ ( p ( π ) , π ) = p ( π ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) Δ ( p ( π ) , π ) = Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) Σ ( l ˘ , m ˘ , n ˘ ) Δ ( p ( π ) , π ) = Σ ( l ˘ , m ˘ , n ˘ ) Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) = Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) 0 Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) + Σ ( l ˘ , m ˘ , n ˘ ) Δ ( p ( π ) , π ) .
To solve the Yosida–Cayley Variational Inclusion Problem (YCVIP), we now develop the following method based on Lemma 4.
Algorithm 1. 
Let π₀ ∈ Λ˘, and l̆₀ ∈ ψ(π₀), m̆₀ ∈ ϕ(π₀), and n̆₀ ∈ φ(π₀) be initial elements. Define the sequences {πₙ}, {l̆ₙ}, {m̆ₙ}, and {n̆ₙ} iteratively as follows:
p(πₙ₊₁) = β_{(τ,λˇ)}^{Δ(·,πₙ)}[p(πₙ) + λˇ(Σ(l̆ₙ, m̆ₙ, n̆ₙ) ⊕ Y_{(τ,λˇ)}^{Δ(·,πₙ)}(C_{(τ,λˇ)}^{Δ(·,πₙ)}(πₙ)))]. (3)
The above expression can equivalently be written in the symmetrized form:
p(π) = β_{(τ,λˇ)}^{Δ(·,π)}[(1/2)(p(π) + p(π)) + λˇ(Σ(l̆, m̆, n̆) ⊕ Y_{(τ,λˇ)}^{Δ(·,π)}(C_{(τ,λˇ)}^{Δ(·,π)}(π)))]. (4)
We now propose the iterative strategy based on (4).
Algorithm 2. 
Let π₀ ∈ Λ˘, l̆₀ ∈ ψ(π₀), m̆₀ ∈ ϕ(π₀), and n̆₀ ∈ φ(π₀) be initial elements. Compute the sequences {πₙ}, {l̆ₙ}, {m̆ₙ}, and {n̆ₙ} iteratively according to the following step-by-step procedure:
p(πₙ₊₁) = (1 − αₙ)p(πₙ) + αₙβ_{(τ,λˇ)}^{Δ(·,πₙ)}[(1/2)(p(πₙ) + p(πₙ₊₁)) + λˇ(Σ(l̆ₙ, m̆ₙ, n̆ₙ) ⊕ Y_{(τ,λˇ)}^{Δ(·,πₙ)}(C_{(τ,λˇ)}^{Δ(·,πₙ)}(πₙ)))], (5)
where n = 0, 1, 2, …, and αₙ ∈ [0, 1].
The predictor–corrector approach [24] is employed to describe the following inertial extrapolation scheme.
Algorithm 3. 
Let π₀ ∈ Λ˘, with initial values l̆₀ ∈ ψ(π₀), m̆₀ ∈ ϕ(π₀), and n̆₀ ∈ φ(π₀). Compute the sequences {πₙ}, {l̆ₙ}, {m̆ₙ}, and {n̆ₙ} recursively as follows:
Jₙ = πₙ + eₙ(πₙ − πₙ₋₁); (6)
p(πₙ₊₁) = (1 − αₙ)p(πₙ) + αₙβ_{(τ,λˇ)}^{Δ(·,πₙ)}[(1/2)(p(πₙ) + p(Jₙ)) + λˇ(Σ(l̆ₙ, m̆ₙ, n̆ₙ) ⊕ Y_{(τ,λˇ)}^{Δ(·,πₙ)}(C_{(τ,λˇ)}^{Δ(·,πₙ)}(Jₙ)))], (7)
where eₙ, αₙ ∈ [0, 1] for all n ≥ 1, and eₙ is the extrapolation coefficient.
Let l̆ₙ₊₁ ∈ ψ(πₙ₊₁), m̆ₙ₊₁ ∈ ϕ(πₙ₊₁), and n̆ₙ₊₁ ∈ φ(πₙ₊₁) be such that
∥l̆ₙ ⊕ l̆ₙ₊₁∥ ≤ D(ψ(πₙ), ψ(πₙ₊₁)); (8)
∥m̆ₙ ⊕ m̆ₙ₊₁∥ ≤ D(ϕ(πₙ), ϕ(πₙ₊₁)); (9)
∥n̆ₙ ⊕ n̆ₙ₊₁∥ ≤ D(φ(πₙ), φ(πₙ₊₁)), (10)
where D(·, ·) denotes the Hausdorff metric on κ(Λ˘), and λˇ > 0 is a constant.
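A schematic implementation of Algorithm 3 is sketched below for the scalar, single-valued case (our illustration, not the authors' code): all operator arguments are user-supplied placeholders, p is assumed invertible so that the implicit relation p(πₙ₊₁) = RHS can be solved, and the XOR π₁ ⊕ π₂ is evaluated as |π₁ − π₂|, its value on ℝ.

```python
# Sketch of the inertial scheme of Algorithm 3 in the scalar setting.
# p, p_inv, resolvent, yosida, cayley, Sigma, psi, phi, varphi, alpha, e
# are all placeholder callables chosen by the user.

def algorithm3(pi0, p, p_inv, resolvent, yosida, cayley, Sigma,
               psi, phi, varphi, lam, alpha, e, n_iter=50):
    xor = lambda a, b: abs(a - b)          # x ⊕ y on R
    pi_prev, pi_curr = pi0, pi0            # pi_{-1} := pi_0 (an assumption)
    for n in range(1, n_iter + 1):
        J = pi_curr + e(n) * (pi_curr - pi_prev)          # inertial step (6)
        l, m, k = psi(pi_curr), phi(pi_curr), varphi(pi_curr)
        inner = 0.5 * (p(pi_curr) + p(J)) + lam * xor(Sigma(l, m, k),
                                                      yosida(cayley(J)))
        rhs = (1 - alpha(n)) * p(pi_curr) + alpha(n) * resolvent(inner)
        pi_prev, pi_curr = pi_curr, p_inv(rhs)            # main step (7)
    return pi_curr
```

Plugging in the Section 7 operators (with p(π) = (5/3)π, hence p⁻¹(y) = (3/5)y) gives a concrete scalar instance of the scheme.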

5. Main Result

In this section, we establish the existence and strong convergence of solutions to the Yosida–Cayley Variational Inclusion Problem (YCVIP), utilizing the XOR and XNOR operations.
Theorem 1.
Suppose that Λ˘ is a real-ordered Hilbert space and κ_{Λ˘} is a cone that induces a partial order ≤. Let Σ : Λ˘ × Λ˘ × Λ˘ → Λ˘ be a single-valued mapping, and let ψ, ϕ, φ : Λ˘ → κ(Λ˘) be multi-valued mappings. Assume that Δ : Λ˘ × Λ˘ → 2^{Λ˘} is a multi-valued mapping such that Δ(·, π) is a γ-ordered and (γ, λˇ)-weak-ordered rectangular different mapping with respect to its first argument. Additionally, let p : Λ˘ → Λ˘ be a single-valued mapping that is Lipschitz continuous with constant λˇ_p and δ_p-ordered non-extended. Assume that, for all n = 0, 1, 2, …, we have πₙ₊₁ ∝ πₙ and p(πₙ₊₁) ∝ p(πₙ). Suppose that the following conditions hold:
∥β_{(τ,λˇ)}^{Δ(·,πₙ)}(l̆) ⊕ β_{(τ,λˇ)}^{Δ(·,πₙ₊₁)}(l̆)∥ ≤ μ∥πₙ ⊕ πₙ₊₁∥; (11)
∥Y_{(τ,λˇ)}^{Δ(·,πₙ)}(l̆) ⊕ Y_{(τ,λˇ)}^{Δ(·,πₙ₊₁)}(l̆)∥ ≤ μ∥πₙ ⊕ πₙ₊₁∥; (12)
∥C_{(τ,λˇ)}^{Δ(·,πₙ)}(l̆) ⊕ C_{(τ,λˇ)}^{Δ(·,πₙ₊₁)}(l̆)∥ ≤ μ∥πₙ ⊕ πₙ₊₁∥. (13)
Further, suppose that the following inequality is satisfied:
0 < λ ˇ Π Λ ˘ δ p { ( 1 α n ) λ ˇ p + α n μ + 3 2 α n θ λ ˇ p + α n θ λ ˇ ( μ + θ μ ) + 2 α n θ λ ˇ θ ( 2 θ + 1 ) + α n θ λ ˇ ( λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) } < 1
where θ = 1/(γλˇ − 1) and θ′ = λˇ/(γλˇ − 1), with λˇ > 1/γ, for all l̆, m̆, n̆ ∈ Λ˘.
Let eₙ, αₙ ∈ [0, 1] for all n ≥ 1, where eₙ is an extrapolation term satisfying
∑_{n=1}^{∞} αₙ = ∞ and ∑_{n=1}^{∞} eₙ∥πₙ − 2πₙ₋₁ + πₙ₋₂∥ < ∞.
Then, the sequences { π n } , { l ˘ n } , { m ˘ n } , and { n ˘ n } generated by Algorithm 3 strongly converge to the solution π Λ ˘ , l ˘ ψ ( π ) , m ˘ ϕ ( π ) , and n ˘ φ ( π ) of the YCVIP.
Proof. 
We have
0 p ( π n + 1 ) p ( π n ) = { ( 1 α n ) p ( π n ) + α n β ( τ , λ ˇ ) Δ ( . , π n ) [ 1 2 p ( π n ) + p ( J n ) + λ ˇ { Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) } ] } { ( 1 α n ) p ( π n 1 ) + α n β ( τ , λ ˇ ) Δ ( . , π n 1 ) [ 1 2 ( p ( π n 1 ) + p ( J n 1 ) ) + λ ˇ Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) ] } . = ( 1 α n ) p ( π n ) p ( π n 1 ) + α n { β ( τ , λ ˇ ) Δ ( . , π n ) [ 1 2 p ( π n ) + p ( J n ) + λ ˇ Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) ] β ( τ , λ ˇ ) Δ ( . , π n 1 ) [ 1 2 ( p ( π n 1 ) + p ( J n 1 ) ) + λ ˇ Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) ] } .
Using (iii) of Proposition 1, we get the following.
| | p ( π n + 1 ) p ( π n ) | | ( 1 α n ) λ ˇ Π Λ ˘ | | p ( π n ) p ( π n 1 ) | | + α n λ ˇ Π Λ ˘ | | β ( τ , λ ˇ ) Δ ( . , π n ) [ 1 2 ( p ( π n ) + p ( J n ) ) + λ ˇ Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) ] β ( τ , λ ˇ ) Δ ( . , π n 1 ) [ 1 2 p ( π n 1 ) + p ( J n 1 ) + λ ˇ { Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . π n 1 ) ( J n 1 ) } ] | | .
Substituting Equations (11)–(13) into Equation (15), we obtain the following.
| | p ( π n + 1 ) p ( π n ) | | ( 1 α n ) λ ˇ Π Λ ˘ | | p ( π n ) p ( π n 1 ) | | + α n λ ˇ Π Λ ˘ | | β ( τ , λ ˇ ) Δ ( . , π n ) [ 1 2 ( p ( π n ) + p ( J n ) ) + λ ˇ Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) ] β ( τ , λ ˇ ) Δ ( . , π n ) [ 1 2 p ( π n 1 ) + p ( J n 1 ) + λ ˇ { Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) } ] β ( τ , λ ˇ ) Δ ( . , π n ) [ 1 2 p ( π n 1 ) + p ( J n 1 ) + λ ˇ Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) ] β ( τ , λ ˇ ) Δ ( . , π n 1 ) [ 1 2 p ( π n 1 ) + p ( J n 1 ) + λ ˇ { Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) } ] | | .
( 1 α n ) λ ˇ Π Λ ˘ | | p ( π n ) p ( π n 1 ) | | + α n λ ˇ Π Λ ˘ | | β ( τ , λ ˇ ) Δ ( . , π n ) [ 1 2 ( p ( π n ) + p ( J n ) ) + λ ˇ Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) ] β ( τ , λ ˇ ) Δ ( . , π n ) [ 1 2 p ( π n 1 ) + p ( J n 1 ) + λ ˇ { Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) } ] | | + α n λ ˇ Π Λ ˘ | | β ( τ , λ ˇ ) Δ ( . , π n ) [ 1 2 ( p ( π n 1 ) + p ( J n 1 ) ) + λ ˇ { Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) ( C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) ) } ] β ( τ , λ ˇ ) Δ ( . , π n 1 ) [ 1 2 p ( π n 1 ) + p ( J n 1 ) + λ ˇ { Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . π n 1 ) ( J n 1 ) } ] | | ( 1 α n ) λ ˇ Π Λ ˘ | | p ( π n ) p ( π n 1 ) | | + α n μ λ ˇ Π Λ ˘ | | π n π n 1 | | + α n θ λ ˇ Π Λ ˘ | | [ 1 2 p ( π n ) + p ( J n ) + λ ˇ { Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) } ] [ 1 2 p ( π n 1 ) + p ( J n 1 ) + λ ˇ Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) ] | | . λ ˇ Π Λ ˘ { ( 1 α n ) + 1 2 α n θ | | p ( π n ) p ( π n 1 ) | | + α n μ | | π n π n 1 | | + 1 2 α n θ | | p ( J n ) p ( J n 1 ) ) | | + α n θ λ ˇ | | Σ ( l ˘ n , m ˘ n , n ˘ n ) Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) | | + α n θ λ ˇ | | Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) | | } .
From the definition of D -Lipschitz continuity and part (i) of Proposition 1, it follows that
| | Σ ( l ˘ n , m ˘ n , n ˘ n ) Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) | | = | | Σ ( l ˘ n , m ˘ n , n ˘ n ) Σ ( l ˘ n 1 , m ˘ n , n ˘ n ) Σ ( l ˘ n 1 , m ˘ n , n ˘ n ) Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n ) Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n ) Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) | | = | | Σ ( l ˘ n , m ˘ n , n ˘ n ) Σ ( l ˘ n 1 , m ˘ n , n ˘ n ) | | + | | Σ ( l ˘ n 1 , m ˘ n , n ˘ n ) Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n ) | | + | | Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n ) Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) | | λ ˇ Σ 1 | | l ˘ n l ˘ n 1 | | + λ ˇ Σ 2 | | m ˘ n m ˘ n 1 | | + λ ˇ Σ 3 | | n ˘ n n ˘ n 1 | | λ ˇ Σ 1 D ( ψ ( π n ) , ψ ( π n 1 ) ) + λ ˇ Σ 2 D ( ϕ ( π n ) , ϕ ( π n 1 ) ) + λ ˇ Σ 3 D ( φ ( π n ) , φ ( π n 1 ) ) λ ˇ Σ 1 λ ˇ D ψ | | π n π n 1 | | + λ ˇ Σ 2 λ ˇ D ϕ | | π n π n 1 | | + λ ˇ Σ 3 λ ˇ D φ | | π n π n 1 | | { λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ } | | π n π n 1 | | .
Furthermore, by the Lipschitz continuity of the YCVIP operator and (i) of Proposition 1, we obtain the following.
| | Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) | | = | | Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) | |
| | Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) | | + | | Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) | | μ | | π n π n 1 | | + θ | | C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) C ( τ , λ ) Δ ( . , π n 1 ) ( J n 1 ) | | μ | | π n π n 1 | | + θ | | C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) | | μ | | π n π n 1 | | + θ | | C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n ) | | + θ | | C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) | | μ | | π n π n 1 | | + θ μ | | π n π n 1 | | + θ ( 2 θ + 1 ) | | J n J n 1 | | ( μ + θ μ ) | | π n π n 1 | | + θ ( 2 θ + 1 ) | | J n J n 1 | | .
Now, combining (16)–(18), we get the following.
| | p ( π n + 1 ) p ( π n ) | | λ ˇ Π Λ ˘ { ( 1 α n ) + 1 2 α n θ | | p ( π n ) p ( π n 1 ) | | + ( α n μ + α n θ λ ˇ ( μ + θ μ ) + α n θ λ ˇ ( λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) ) | | π n π n 1 | | + 1 2 α n θ | | p ( J n ) p ( J n 1 ) | | + α n θ λ ˇ θ ( 2 θ + 1 ) | | J n J n 1 | | } .
Since p is a δ_p-ordered non-extended mapping, we have
∥p(πₙ₊₁) ⊕ p(πₙ)∥ ≥ δ_p∥πₙ₊₁ ⊕ πₙ∥, i.e., ∥πₙ₊₁ ⊕ πₙ∥ ≤ (1/δ_p)∥p(πₙ₊₁) ⊕ p(πₙ)∥. (20)
Combining (19) and (20) yields the following result.
| | π n + 1 π n | | λ ˇ Π Λ ˘ δ p { ( 1 α n ) + 1 2 α n θ | | p ( π n ) p ( π n 1 ) | | + ( α n μ + α n θ λ ˇ ( μ + θ μ ) + α n θ λ ˇ λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) | | π n π n 1 | | + 1 2 α n θ | | p ( J n ) p ( J n 1 ) | | + α n θ λ ˇ θ ( 2 θ + 1 ) | | J n J n 1 | | } .
Since π n π n 1 , p ( π n ) p ( π n 1 ) , J n J n 1 , and p ( J n ) p ( J n 1 ) for all n = 0 , 1 , 2 , , and using part (iv) of Proposition 1 along with the Lipschitz continuity and the strong convergence of p, Equation (21) becomes
| | π n + 1 π n | | λ ˇ Π Λ ˘ δ p { ( 1 α n ) + 1 2 α n θ λ ˇ p | | π n π n 1 | | + ( α n μ + α n θ λ ˇ ( μ + θ μ ) + α n θ λ ˇ λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) | | π n π n 1 | | + 1 2 α n θ λ ˇ p | | J n J n 1 | | + α n θ λ ˇ θ ( 2 θ + 1 ) | | J n J n 1 | | } .
λ ˇ Π Λ ˘ δ p { ( 1 α n ) λ ˇ p + α n μ + 1 2 α n θ λ ˇ p + α n θ λ ˇ ( μ + θ μ ) + α n θ λ ˇ λ ˇ p λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ } | | π n π n 1 | | + λ ˇ Π Λ ˘ δ p 1 2 α n θ λ ˇ p + α n θ λ ˇ θ ( 2 θ + 1 ) | | J n J n 1 | | .
It follows that
∥Jₙ ⊕ Jₙ₋₁∥ = ∥{πₙ + eₙ(πₙ − πₙ₋₁)} ⊕ {πₙ₋₁ + eₙ(πₙ₋₁ − πₙ₋₂)}∥ = ∥(πₙ − πₙ₋₁) + eₙ(πₙ − πₙ₋₁) − eₙ(πₙ₋₁ − πₙ₋₂)∥ = ∥(πₙ − πₙ₋₁) + eₙ(πₙ − 2πₙ₋₁ + πₙ₋₂)∥ ≤ ∥πₙ − πₙ₋₁∥ + eₙ∥πₙ − 2πₙ₋₁ + πₙ₋₂∥. (23)
By combining (22) and (23), we obtain the following.
| | π n + 1 π n | | λ ˇ Π Λ ˘ δ p { ( 1 α n ) λ ˇ p + α n μ + 1 2 α n θ λ ˇ p + α n θ λ ˇ ( λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) + α n θ λ ˇ ( θ + θ μ ) } | | π n π n 1 | | + λ ˇ Π Λ ˘ δ p { 1 2 α n θ λ ˇ p + α n θ λ ˇ θ ( 2 θ + 1 ) } | | π n π n 1 | | + e n | | π n 2 π n 1 + π n 2 | | . λ ˇ Π Λ ˘ δ p { ( 1 α n ) λ ˇ p + α n μ + α n θ λ ˇ p + α n θ λ ˇ ( λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) + α n θ λ ˇ ( θ + θ μ ) + α n θ λ ˇ θ ( 2 θ + 1 ) } | | π n π n 1 | | + λ ˇ Π Λ ˘ δ p 1 2 α n θ λ ˇ p + α n θ λ ˇ θ ( 2 θ + 1 ) e n | | π n 2 π n 1 + π n 2 | | ω 1 ( θ ) | | π n π n 1 | | + ω 2 ( θ ) e n | | π n 2 π n 1 + π n 2 | | .
where ω 1 ( θ ) = λ ˇ Π Λ ˘ δ p ( 1 α n ) λ ˇ p + α n μ + α n θ λ ˇ p + α n θ λ ˇ λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ + α n θ λ ˇ ( θ + θ μ ) + α n θ λ ˇ θ ( 2 θ + 1 ) and ω 2 ( θ ) = λ ˇ Π Λ ˘ δ p 1 2 α n θ λ ˇ p + α n θ λ ˇ θ ( 2 θ + 1 ) .
Since eₙ, αₙ ∈ [0, 1], where eₙ is the extrapolating term for all n ≥ 1, we have
∑_{n=1}^{∞} αₙ = ∞ and ∑_{n=1}^{∞} αₙeₙ∥πₙ − 2πₙ₋₁ + πₙ₋₂∥ < ∞.
We infer from condition (14) that
ω(θ) = ω₁(θ) + ω₂(θ) < 1,
where ω₁(θ) and ω₂(θ) are as defined above. Consequently, from Equation (24), it follows that the sequence {πₙ} is a Cauchy sequence in Λ˘. Since Λ˘ is complete, there exists π ∈ Λ˘ such that πₙ → π as n → ∞. Next, using Equations (8)–(10), we obtain
∥l̆ₙ ⊕ l̆ₙ₋₁∥ ≤ D(ψ(πₙ), ψ(πₙ₋₁)) ≤ λˇ_{Dψ}∥πₙ ⊕ πₙ₋₁∥ ≤ λˇ_{Dψ}∥πₙ − πₙ₋₁∥,
∥m̆ₙ ⊕ m̆ₙ₋₁∥ ≤ D(ϕ(πₙ), ϕ(πₙ₋₁)) ≤ λˇ_{Dϕ}∥πₙ ⊕ πₙ₋₁∥ ≤ λˇ_{Dϕ}∥πₙ − πₙ₋₁∥,
∥n̆ₙ ⊕ n̆ₙ₋₁∥ ≤ D(φ(πₙ), φ(πₙ₋₁)) ≤ λˇ_{Dφ}∥πₙ ⊕ πₙ₋₁∥ ≤ λˇ_{Dφ}∥πₙ − πₙ₋₁∥.
Thus, {l̆ₙ}, {m̆ₙ}, and {n̆ₙ} are also Cauchy sequences in Λ˘. Therefore, there exist l̆, m̆, n̆ ∈ Λ˘ such that l̆ₙ → l̆, m̆ₙ → m̆, and n̆ₙ → n̆ as n → ∞. Next, we show that l̆ ∈ ψ(π), m̆ ∈ ϕ(π), and n̆ ∈ φ(π).
Furthermore,
d(l̆, ψ(π)) ≤ inf{∥l̆ − t∥ : t ∈ ψ(π)} ≤ ∥l̆ − l̆ₙ∥ + d(l̆ₙ, ψ(π)) ≤ ∥l̆ − l̆ₙ∥ + D(ψ(πₙ), ψ(π)) ≤ ∥l̆ − l̆ₙ∥ + λˇ_{Dψ}∥πₙ ⊕ π∥ ≤ ∥l̆ − l̆ₙ∥ + λˇ_{Dψ}∥πₙ − π∥ → 0, as n → ∞.
Since ψ ( π ) is closed, it follows that l ˘ ψ ( π ) . Similarly, we obtain m ˘ ϕ ( π ) and n ˘ φ ( π ) .
Finally, applying the continuity of the mappings p, Σ, C ( τ , λ ˇ ) Δ ( . , π ) , β ( τ , λ ˇ ) Δ ( . , π ) , and Y ( τ , λ ˇ ) Δ ( . , π ) , we conclude that
p(π) = β_{(τ,λˇ)}^{Δ(·,π)}[p(π) + λˇ(Σ(l̆, m̆, n̆) ⊕ Y_{(τ,λˇ)}^{Δ(·,π)}(C_{(τ,λˇ)}^{Δ(·,π)}(π)))].
Therefore, by Lemma 4, π Λ ˘ is a solution to the Yosida–Cayley Variational Inclusion Problem (YCVIP), where l ˘ ψ ( π ) , m ˘ ϕ ( π ) , and n ˘ φ ( π ) . □

6. Yosida–Cayley Resolvent Equation Problem (YCREP)

For the Yosida–Cayley Variational Inclusion Problem (YCVIP), the iterative schemes described in Algorithms 1 and 2 can be used to establish both the existence and the convergence of solutions; in particular, by employing the inertial extrapolation term used in Algorithm 3, we derived a convergence result for the YCVIP. In this context, we define the Yosida–Cayley Resolvent Equation Problem (YCREP) as follows:
Find π , z Λ ˘ , l ˘ ψ ( π ) , m ˘ ϕ ( π ) , and n ˘ φ ( π ) such that
Y_{(τ,λˇ)}^{Δ(·,π)}(C_{(τ,λˇ)}^{Δ(·,π)}(π)) ⊕ Σ(l̆, m̆, n̆) − λˇ⁻¹T_{(τ,λˇ)}^{Δ(·,π)}(z) = 0,
where the operator T ( τ , λ ˇ ) Δ ( . , π ) is defined by
T_{(τ,λˇ)}^{Δ(·,π)}(z) := [τ − β_{(τ,λˇ)}^{Δ(·,π)}](z),
with τ denoting the identity operator and λ ˇ > 0 being a constant.
Therefore, the Yosida–Cayley Variational Inclusion Problem (YCVIP) and the Yosida–Cayley Resolvent Equation Problem (YCREP) are equivalent.
Lemma 5.
The Yosida–Cayley Variational Inclusion Problem (YCVIP) admits a solution π Λ ˘ with l ˘ ψ ( π ) , m ˘ ϕ ( π ) , and n ˘ φ ( π ) if and only if the Yosida–Cayley Resolvent Equation Problem (YCREP) admits a solution π , z Λ ˘ together with l ˘ ψ ( π ) , m ˘ ϕ ( π ) , and n ˘ φ ( π ) such that
p(π) = β_{(τ,λˇ)}^{Δ(·,π)}(z), (32)
z = p(π) + λˇ(Σ(l̆, m̆, n̆) ⊕ Y_{(τ,λˇ)}^{Δ(·,π)}(C_{(τ,λˇ)}^{Δ(·,π)}(π))), (33)
where τ denotes the identity mapping and λ ˇ > 0 is a given constant.
Proof. 
Let π Λ ˘ , l ˘ ψ ( π ) , m ˘ ϕ ( π ) , and n ˘ φ ( π ) be the solution of the Yosida–Cayley Variational Inclusion Problem (YCVIP), satisfying the following equation:
p(π) = β_{(τ,λˇ)}^{Δ(·,π)}[p(π) + λˇ(Σ(l̆, m̆, n̆) ⊕ Y_{(τ,λˇ)}^{Δ(·,π)}(C_{(τ,λˇ)}^{Δ(·,π)}(π)))]
= β_{(τ,λˇ)}^{Δ(·,π)}(z),
where z = p(π) + λˇ(Σ(l̆, m̆, n̆) ⊕ Y_{(τ,λˇ)}^{Δ(·,π)}(C_{(τ,λˇ)}^{Δ(·,π)}(π))).
Now, we get from (32) and (33)
z = β ( τ , λ ˇ ) Δ ( . , π ) ( z ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) z β ( τ , λ ˇ ) Δ ( . , π ) ( z ) = λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) τ β ( τ , λ ˇ ) Δ ( . , π ) ( z ) = λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) Σ ( l ˘ , m ˘ , n ˘ ) λ ˇ 1 T ( τ , λ ˇ ) Δ ( . , π ) ( z ) = 0 .
which is the required YCREP.
Conversely, let π , z Λ ˘ , l ˘ ψ ( π ) , m ˘ ϕ ( π ) , and n ˘ φ ( π ) be the solutions of the YCREP. Then, we have
Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) Σ ( l ˘ , m ˘ , n ˘ ) λ ˇ 1 T ( τ , λ ˇ ) Δ ( . , π ) ( z ) = 0 T ( τ , λ ˇ ) Δ ( . , π ) ( z ) = λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ) Δ ( . , π ) ( π ) τ β ( τ , λ ˇ ) Δ ( . , π ) ( z ) = λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) z β ( τ , λ ˇ ) Δ ( . , π ) ( z ) = λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) = p ( π ) + λ ˇ { Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) } β ( τ , λ ˇ ) Δ ( . , π ) p ( π ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) H e n c e , p ( π ) = β ( τ , λ ˇ ) Δ ( . , π ) p ( π ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) .
Thus, π ∈ Λ˘, l̆ ∈ ψ(π), m̆ ∈ ϕ(π), and n̆ ∈ φ(π) solve the YCVIP, which completes the proof. □
Algorithm 4. 
For given initial elements π 0 , z 0 Λ ˘ , l ˘ 0 ψ ( π 0 ) , m ˘ 0 ϕ ( π 0 ) , and n ˘ 0 φ ( π 0 ) , construct the sequences { π n } , { z n } , { l ˘ n } , { m ˘ n } , and { n ˘ n } iteratively.
π n = β ( τ , λ ˇ ) Δ ( . , π ) ( z n ) .
z n + 1 = p ( π n ) + λ ˇ Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( π n ) ,
where n = 0, 1, 2, …, and αₙ ∈ [0, 1]. Now, the YCREP becomes
z = p ( π ) + Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) + 1 λ ˇ 1 T ( τ , λ ˇ ) Δ ( . , π ) ( z ) .
From (32) and (36), it follows that
z = β ( τ , λ ˇ ) Δ ( . , π ) ( z ) + Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) + 1 λ ˇ 1 T ( τ , λ ˇ ) Δ ( . , π ) ( z ) z β ( τ , λ ˇ ) Δ ( . , π ) ( z ) = Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) + 1 λ ˇ 1 T ( τ , λ ˇ ) Δ ( . , π ) ( z ) τ β ( τ , λ ˇ ) Δ ( . , π ) ( z ) = Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) + 1 λ ˇ 1 T ( τ , λ ˇ ) Δ ( . , π ) ( z ) T ( τ , λ ˇ ) Δ ( . , π ) ( z ) = Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) + 1 λ ˇ 1 T ( τ , λ ˇ ) Δ ( . , π ) ( z ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) Σ ( l ˘ , m ˘ , n ˘ ) λ ˇ 1 T ( τ , λ ˇ ) Δ ( . , π ) ( z ) = 0 ,
which establishes the desired YCREP.
Algorithm 5. 
For given initial elements π 0 , z 0 Λ ˘ , l ˘ 0 ψ ( π 0 ) , m ˘ 0 ϕ ( π 0 ) , and n ˘ 0 φ ( π 0 ) , construct the sequences { π n } , { z n } , { l ˘ n } , { m ˘ n } , and { n ˘ n } iteratively.
π n = β ( τ , λ ˇ ) Δ ( . , π ) ( z n ) .
z n + 1 = p ( π n ) + λ ˇ Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( π n ) .
When δ > 0 , the YCREP can be written as
π = π δ z + β ( τ , λ ˇ ) Δ ( . , π ) ( z ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) .
We have from (39)
π = π δ z + β ( τ , λ ˇ ) Δ ( . , π ) ( z ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) = π δ { τ β ( τ , λ ˇ ) Δ ( . , π ) } ( z ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) = π δ T ( τ , λ ˇ ) Δ ( . , π ) ( z ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) 0 = δ T ( τ , λ ˇ ) Δ ( . , π ) ( z ) + λ ˇ Σ ( l ˘ , m ˘ , n ˘ ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) Y ( τ , λ ˇ ) Δ ( . , π ) C ( τ , λ ˇ ) Δ ( . , π ) ( π ) Σ ( l ˘ , m ˘ , n ˘ ) λ ˇ 1 T ( τ , λ ˇ ) Δ ( . , π ) ( z ) = 0 ,
which establishes the desired YCREP.
Algorithm 6. 
For given initial elements π 0 , z 0 Λ ˘ , l ˘ 0 ψ ( π 0 ) , m ˘ 0 ϕ ( π 0 ) , and n ˘ 0 φ ( π 0 ) , construct the sequences { π n } , { z n } , { l ˘ n } , { m ˘ n } , and { n ˘ n } iteratively.
p ( π n ) = β ( τ , λ ˇ ) Δ ( . , π n ) ( z n )
z n + 1 = ( 1 α n ) p ( z n ) + α n [ 1 2 p ( π n ) + p ( π n + 1 ) + λ ˇ { Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( π n ) } ] .
where n = 0, 1, 2, …, and αₙ ∈ [0, 1].
Utilizing the predictor–corrector method [24], we construct an inertial extrapolation scheme to solve the YCREP.
Algorithm 7. 
For given initial elements π 0 , z 0 Λ ˘ , l ˘ 0 ψ ( π 0 ) , m ˘ 0 ϕ ( π 0 ) , and n ˘ 0 φ ( π 0 ) , construct the sequences { π n } , { z n } , { l ˘ n } , { m ˘ n } , and { n ˘ n } iteratively.
Jₙ = zₙ + eₙ(zₙ − zₙ₋₁);
zₙ₊₁ = (1 − αₙ)p(zₙ) + αₙ[(1/2)(p(zₙ) + p(Jₙ)) + λˇ(Σ(l̆ₙ, m̆ₙ, n̆ₙ) ⊕ Y_{(τ,λˇ)}^{Δ(·,πₙ)}(C_{(τ,λˇ)}^{Δ(·,πₙ)}(Jₙ)))],
where n = 0, 1, 2, …, and eₙ, αₙ ∈ [0, 1].
Theorem 2.
Suppose that Δ : Λ˘ × Λ˘ → 2^{Λ˘} and ψ, ϕ, φ : Λ˘ → κ(Λ˘) are multi-valued mappings defined on the real-ordered Hilbert space Λ˘, such that Δ(·, π) is a γ-ordered and (γ, λˇ)-weak-ordered rectangular different mapping in the first argument. Let Σ : Λ˘ × Λ˘ × Λ˘ → Λ˘ and p : Λ˘ → Λ˘ be single-valued mappings, where p is Lipschitz continuous with constant λˇ_p and is also a δ_p-ordered non-extended mapping. Assume that πₙ₊₁ ∝ πₙ, zₙ₊₁ ∝ zₙ, p(πₙ) ∝ p(πₙ₋₁), and define
T_{(τ,λˇ)}^{Δ(·,π)}(z) := [τ − β_{(τ,λˇ)}^{Δ(·,π)}](z),
for all n = 0 , 1 , 2 , . Suppose that the following conditions hold:
∥Y_{(τ,λˇ)}^{Δ(·,πₙ)}(l̆) ⊕ Y_{(τ,λˇ)}^{Δ(·,πₙ₊₁)}(l̆)∥ ≤ μ∥πₙ ⊕ πₙ₊₁∥;
∥C_{(τ,λˇ)}^{Δ(·,πₙ)}(l̆) ⊕ C_{(τ,λˇ)}^{Δ(·,πₙ₊₁)}(l̆)∥ ≤ μ∥πₙ ⊕ πₙ₊₁∥.
The following inequality is then satisfied:
0 < λ ˇ Π Λ ˘ [ ( 1 α n ) λ ˇ p + α n λ ˇ p + α n λ ˇ θ ( 2 θ + 1 ) + { α n λ ˇ ( μ + θ μ ) + α n λ ˇ ( λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) } θ δ p μ ] < 1 ,
where θ = 1/(γλˇ − 1), θ′ = λˇ/(γλˇ − 1), and λˇ > 1/γ, for all l̆, m̆, n̆ ∈ Λ˘.
Let eₙ, αₙ ∈ [0, 1] for all n ≥ 1, where eₙ is an inertial extrapolation parameter satisfying
∑_{n=1}^{∞} αₙ = ∞ and ∑_{n=1}^{∞} eₙ∥zₙ − 2zₙ₋₁ + zₙ₋₂∥ < ∞.
Then, the sequences { π n } , { z n } , { l ˘ n } , { m ˘ n } , and { n ˘ n } generated by Algorithm 7 converge strongly to the solution π , z Λ ˘ with l ˘ ψ ( π ) , m ˘ ϕ ( π ) , and n ˘ φ ( π ) of the Yosida–Cayley Resolvent Equation Problem (YCREP).
Proof. 
We have
0 z n + 1 z n = { ( 1 α n ) p ( z n ) + α n [ 1 2 p ( z n ) + p ( J n ) + λ ˇ { Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) } ] } { ( 1 α n ) p ( z n 1 ) + α n [ 1 2 p ( z n 1 ) + p ( J n 1 ) + λ ˇ Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) ] }
= ( 1 α n ) ( p ( z n ) p ( z n 1 ) ) + α n { [ 1 2 p ( z n ) + p ( J n ) + λ ˇ { Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) } ] [ 1 2 p ( z n 1 ) + p ( J n 1 ) + λ ˇ Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) ] } .
Applying (iii) of Proposition 1, we obtain the following.
| | z n + 1 z n | | ( 1 α n ) λ ˇ Π Λ ˘ | | p ( z n ) p ( z n 1 ) | | + α n λ ˇ Π Λ ˘ | | [ 1 2 p ( z n ) + p ( J n ) + λ ˇ Σ ( l ˘ n , m ˘ n , n ˘ n ) Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) ] [ 1 2 ( p ( z n 1 ) + p ( J n 1 ) ) + λ ˇ Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) ] | | . ( 1 α n ) λ ˇ Π Λ ˘ | | p ( z n ) p ( z n 1 ) | | + 1 2 α n λ ˇ Π Λ ˘ | | p ( z n ) p ( z n 1 ) | | + 1 2 α n λ ˇ Π Λ ˘ | | p ( J n ) p ( J n 1 ) | | + α n λ ˇ λ ˇ Π Λ ˘ | | Σ ( l ˘ n , m ˘ n , n ˘ n ) Σ ( l ˘ n 1 , m ˘ n 1 , n ˘ n 1 ) | | + α n λ ˇ λ ˇ Π Λ ˘ | | Y ( τ , λ ˇ ) Δ ( . , π n ) C ( τ , λ ˇ ) Δ ( . , π n ) ( J n ) Y ( τ , λ ˇ ) Δ ( . , π n 1 ) C ( τ , λ ˇ ) Δ ( . , π n 1 ) ( J n 1 ) | | . λ ˇ Π Λ ˘ { ( 1 α n ) + 1 2 α n | | p ( z n ) p ( z n 1 ) | | + ( α n λ ˇ ( λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) + α n ( μ + θ μ ) ) | | π n π n 1 | | + α n λ ˇ θ ( 2 θ + 1 ) | | J n J n 1 | | + 1 2 α n | | p ( J n ) p ( J n 1 ) | | } .
Using the Lipschitz continuity of p in (47), we obtain
| | z n + 1 z n | | λ ˇ Π Λ ˘ { ( 1 α n ) λ ˇ p + 1 2 α n λ ˇ p | | z n z n 1 | | + ( α n λ ˇ ( λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) + α n ( μ + θ μ ) ) | | π n π n 1 | | + α n λ ˇ θ ( 2 θ + 1 ) | | J n J n 1 | | + 1 2 α n λ ˇ p | | J n J n 1 | | } . λ ˇ Π Λ ˘ { ( 1 α n ) λ ˇ p + 1 2 α n λ ˇ p | | z n z n 1 | | + ( α n λ ˇ ( λ ˇ Σ 1 λ ˇ D ψ λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) + α n ( μ + θ μ ) ) | | π n π n 1 | | + α n λ ˇ θ ( 2 θ + 1 ) + 1 2 α n λ ˇ p | | J n J n 1 | | } .
Now, we have
| | p ( π n ) p ( π n 1 ) | | = β ( τ , λ ˇ ) Δ ( . , π n ) ( z n ) β ( τ , λ ˇ ) Δ ( . , π n 1 ) ( z n 1 ) β ( τ , λ ˇ ) Δ ( . , π n ) ( z n ) β ( τ , λ ˇ ) Δ ( . , π n ) ( z n 1 ) β ( τ , λ ˇ ) Δ ( . , π n ) ( z n 1 ) β ( τ , λ ˇ ) Δ ( . , π n 1 ) ( z n 1 ) . β ( τ , λ ˇ ) Δ ( . , π n ) ( z n ) β ( τ , λ ˇ ) Δ ( . , π n ) ( z n 1 ) + β ( τ , λ ˇ ) Δ ( . , π n ) ( z n 1 ) β ( τ , λ ˇ ) Δ ( . , π n 1 ) ( z n 1 ) θ | | z n z n 1 | | + μ | | π n π n 1 | | .
Since p is a δ p -order non-extended mapping, we have
∥p(πₙ₊₁) ⊕ p(πₙ)∥ ≥ δ_p∥πₙ₊₁ ⊕ πₙ∥, i.e., ∥πₙ₊₁ ⊕ πₙ∥ ≤ (1/δ_p)∥p(πₙ₊₁) ⊕ p(πₙ)∥. (50)
Now, combining (49) and (50), we have
∥πₙ ⊕ πₙ₋₁∥ ≤ (θ/δ_p)∥zₙ ⊕ zₙ₋₁∥ + (μ/δ_p)∥πₙ ⊕ πₙ₋₁∥, i.e., ∥πₙ ⊕ πₙ₋₁∥ ≤ [θ/(δ_p − μ)]∥zₙ ⊕ zₙ₋₁∥. (51)
It follows from (48) and (51) that
| | z n + 1 z n | | λ ˇ Π Λ ˘ { ( 1 α n ) λ ˇ p + 1 2 α n λ ˇ p | | z n z n 1 | | + ( α n λ ˇ ( μ + θ μ ) + α n λ ˇ ( λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) ) θ δ p μ | | z n z n 1 | | + 1 2 α n λ ˇ p + α n λ ˇ θ ( 2 θ + 1 ) | | J n J n 1 | | } λ ˇ Π Λ ˘ [ { ( 1 α n ) λ ˇ p + 1 2 α n λ ˇ p + ( α n λ ˇ ( μ + θ μ ) + α n λ ˇ ( λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) ) θ δ p μ } | | z n z n 1 | | + 1 2 α n λ ˇ p + α n λ ˇ θ ( 2 θ + 1 ) | | J n J n 1 | | ] .
Since p is strongly convergent and π n π n 1 , J n J n 1 , and z n z n 1 for all n = 0 , 1 , 2 , , it follows that
| | z n + 1 z n | | λ Π Λ ˘ [ { ( 1 α n ) λ ˇ p + 1 2 α n λ ˇ p + { α n λ ˇ ( μ + θ μ ) + α n λ ˇ λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ } θ δ p μ } | | z n z n 1 | | + 1 2 α n λ ˇ p + α n λ ˇ θ ( 2 θ + 1 ) | | J n J n 1 | | ] .
Also, we have
∥Jₙ ⊕ Jₙ₋₁∥ = ∥{zₙ + eₙ(zₙ − zₙ₋₁)} ⊕ {zₙ₋₁ + eₙ(zₙ₋₁ − zₙ₋₂)}∥ = ∥(zₙ − zₙ₋₁) + eₙ(zₙ − 2zₙ₋₁ + zₙ₋₂)∥ ≤ ∥zₙ − zₙ₋₁∥ + eₙ∥zₙ − 2zₙ₋₁ + zₙ₋₂∥. (54)
Combining (53) and (54), we obtain
| | z n + 1 z n | | λ ˇ Π Λ ˘ [ { ( 1 α n ) λ ˇ p + 1 2 α n λ ˇ p + { α n λ ˇ ( μ + θ μ ) + α n λ ˇ ( λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ ) } θ δ p μ } | | z n z n 1 | | + 1 2 α n λ ˇ p + α n λ ˇ θ ( 2 θ + 1 ) { | | z n z n 1 | | + e n | | z n 2 z n 1 + z n 2 | | } ] . λ ˇ Π Λ ˘ [ ( 1 α n ) λ ˇ p + 1 2 α n λ ˇ p + { α n λ ˇ ( μ + θ μ ) + α n λ ˇ λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ } θ δ p μ } + { 1 2 α n λ ˇ p + α n λ ˇ θ ( 2 θ + 1 ) } ] | | z n z n 1 | | + λ ˇ Π Λ ˘ { 1 2 α n λ ˇ p + α n λ ˇ θ ( 2 θ + 1 ) } e n | | z n 2 z n 1 + z n 2 | | . ω 1 ( θ ) | | z n z n 1 | | + ω 2 ( θ ) e n | | z n 2 z n 1 + z n 2 | | ,
where
ω 1 ( θ ) = λ ˇ Π Λ ˘ [ ( 1 α n ) λ ˇ p + 1 2 α n λ ˇ p + α n λ ˇ ( μ + θ μ ) + α n λ ˇ λ ˇ Σ 1 λ ˇ D ψ + λ ˇ Σ 2 λ ˇ D ϕ + λ ˇ Σ 3 λ ˇ D φ · θ δ p μ + 1 2 α n λ ˇ p + α n λ ˇ θ ( 2 θ + 1 ) ] , ω 2 ( θ ) = λ ˇ Π Λ ˘ 1 2 α n λ ˇ p + α n λ ˇ θ ( 2 θ + 1 ) .
Since eₙ, αₙ ∈ [0, 1] and eₙ is the extrapolation term for all n ≥ 1 such that ∑_{n=1}^{∞} αₙ = ∞ and ∑_{n=1}^{∞} eₙ∥zₙ − 2zₙ₋₁ + zₙ₋₂∥ < ∞, it follows that
ω ( θ ) = ω 1 ( θ ) + ω 2 ( θ ) < 1 .
Therefore, by inequality (55), the sequence { z n } is a Cauchy sequence in Λ ˘ , and hence there exists z Λ ˘ such that z n z as n .
From Equations (8)–(10), we obtain the following.
∥l̆ₙ ⊕ l̆ₙ₋₁∥ ≤ D(ψ(πₙ), ψ(πₙ₋₁)) ≤ λˇ_{Dψ}∥πₙ ⊕ πₙ₋₁∥ ≤ λˇ_{Dψ}∥πₙ − πₙ₋₁∥,
∥m̆ₙ ⊕ m̆ₙ₋₁∥ ≤ D(ϕ(πₙ), ϕ(πₙ₋₁)) ≤ λˇ_{Dϕ}∥πₙ ⊕ πₙ₋₁∥ ≤ λˇ_{Dϕ}∥πₙ − πₙ₋₁∥,
∥n̆ₙ ⊕ n̆ₙ₋₁∥ ≤ D(φ(πₙ), φ(πₙ₋₁)) ≤ λˇ_{Dφ}∥πₙ ⊕ πₙ₋₁∥ ≤ λˇ_{Dφ}∥πₙ − πₙ₋₁∥.
Thus, the sequences {l̆ₙ}, {m̆ₙ}, and {n̆ₙ} are also Cauchy sequences in Λ˘. Hence, there exist π ∈ Λ˘ and l̆, m̆, n̆ ∈ Λ˘ such that
πₙ → π, l̆ₙ → l̆, m̆ₙ → m̆, and n̆ₙ → n̆ as n → ∞.
We now show that l̆ ∈ ψ(π), m̆ ∈ ϕ(π), and n̆ ∈ φ(π).
Additionally,
d(l̆, ψ(π)) ≤ inf{∥l̆ − t∥ : t ∈ ψ(π)} ≤ ∥l̆ − l̆ₙ∥ + d(l̆ₙ, ψ(π)) ≤ ∥l̆ − l̆ₙ∥ + D(ψ(πₙ), ψ(π)) ≤ ∥l̆ − l̆ₙ∥ + λˇ_{Dψ}∥πₙ ⊕ π∥ ≤ ∥l̆ − l̆ₙ∥ + λˇ_{Dψ}∥πₙ − π∥ → 0, as n → ∞.
Since ψ ( π ) is closed, it follows that l ˘ ψ ( π ) . Similarly, we can conclude that m ˘ ϕ ( π ) and n ˘ φ ( π ) . Furthermore, applying the continuity of the mappings p, Σ, C ( τ , λ ˇ ) Δ ( . , π ) , β ( τ , λ ˇ ) Δ ( . , π ) , and Y ( τ , λ ˇ ) Δ ( . , π ) , we obtain the following result.
p(π) = β_{(τ,λˇ)}^{Δ(·,π)}[p(π) + λˇ(Σ(l̆, m̆, n̆) ⊕ Y_{(τ,λˇ)}^{Δ(·,π)}(C_{(τ,λˇ)}^{Δ(·,π)}(π)))].
Therefore, π ∈ Λ˘, l̆ ∈ ψ(π), m̆ ∈ ϕ(π), and n̆ ∈ φ(π) solve the YCVIP and hence, by Lemma 5, (π, z) together with these selections solves the Yosida–Cayley Resolvent Equation Problem (YCREP). □

7. Numerical Result

To validate Theorems 1 and 2, we present six estimation tables and the corresponding convergence graphs obtained using MATLAB-R2024b. These numerical results illustrate the effectiveness and convergence behavior of the proposed iterative schemes. Assume that Λ˘ = ℝ is a real Hilbert space equipped with the standard inner product ⟨·, ·⟩ and the associated norm ∥·∥. Let p : Λ˘ → Λ˘ be a single-valued mapping, Δ : Λ˘ × Λ˘ → 2^{Λ˘} be a set-valued mapping, and ψ, ϕ, φ : Λ˘ → κ(Λ˘) be multi-valued mappings defined as follows:
Δ(π₁, π₂) = (12/5)π₁ + π₂, p(π₁) = (5/3)π₁.
(i)
Suppose Δ is a γ-ordered rectangular mapping; then, there exist
v_{π₁} = (17/5)π₁ ∈ Δ(·, π₁) and v_{π₂} = (17/5)π₂ ∈ Δ(·, π₂).
Then, we get
⟨v_{π₁} ⊕ v_{π₂}, π₁ ⊕ π₂⟩ = ⟨v_{π₁} − v_{π₂}, π₁ − π₂⟩ = ⟨(17/5)π₁ − (17/5)π₂, π₁ − π₂⟩ = (17/5)⟨π₁ − π₂, π₁ − π₂⟩ = (17/5)∥π₁ − π₂∥² ≥ 3∥π₁ − π₂∥².
Thus, Δ is a γ-ordered rectangular mapping with γ = 3.
(ii)
Suppose p is a λˇ_p-Lipschitz continuous and δ_p-ordered non-extended mapping.
We have
∥p(π₁) ⊕ p(π₂)∥ = ∥(5/3)π₁ − (5/3)π₂∥ = (5/3)∥π₁ − π₂∥ ≤ 2∥π₁ − π₂∥.
Hence, p is a Lipschitz continuous mapping with constant λˇ_p = 2. Also, we have
⟨p(π₁) ⊕ p(π₂), π₁ ⊕ π₂⟩ = ⟨(5/3)π₁ − (5/3)π₂, π₁ − π₂⟩ = (5/3)⟨π₁ − π₂, π₁ − π₂⟩ ≥ (4/3)∥π₁ − π₂∥².
Thus, p is a δ_p-ordered non-extended mapping with δ_p = 4/3.
(iii)
Let Σ : Λ ˘ × Λ ˘ × Λ ˘ Λ ˘ be a single-valued mapping and ψ , ϕ , φ : Λ ˘ κ ( Λ ˘ ) be multi-valued mappings such that
ψ(π) = π/7, ϕ(π) = π/6, φ(π) = π/5, and Σ(l̆, m̆, n̆) = l̆/2 + m̆/2 + n̆/2.
Thus, we obtain
D(ψ(π₁), ψ(π₂)) = max{ sup_{a ∈ ψ(π₁)} d(a, ψ(π₂)), sup_{b ∈ ψ(π₂)} d(ψ(π₁), b) } = max{ ∥π₁/7 − π₂/7∥, ∥π₂/7 − π₁/7∥ } ≤ (1/7)max{ ∥π₁ − π₂∥, ∥π₂ − π₁∥ } ≤ (1/6)∥π₁ − π₂∥.
Thus, ψ is D -Lipschitz continuous with constant λ ˇ D ψ = 1 6 . Similarly, we can show that λ ˇ D ϕ = 1 5 , and λ ˇ D φ = 1 4 .
Therefore, Σ is Lipschitz continuous in all three arguments with constants λˇ_{Σ₁} = λˇ_{Σ₂} = λˇ_{Σ₃} = 1. Consequently, we obtain
Σ(l̆, m̆, n̆) = π/14 + π/12 + π/10 = (107/420)π.
(iv)
Consider λˇ = 3; then evaluate the resolvent operator as
β_{(τ,λˇ)}^{Δ(·,π)}(π) = [τ + λˇΔ(·, π)]⁻¹(π) = [1 + 3·(17/5)]⁻¹π = (5/56)π.
Now, we have
∥β_{(τ,λˇ)}^{Δ(·,π₁)}(π₁) ⊕ β_{(τ,λˇ)}^{Δ(·,π₁)}(π₂)∥ = ∥(5/56)π₁ − (5/56)π₂∥ = (5/56)∥π₁ − π₂∥ ≤ (1/8)∥π₁ − π₂∥.
Therefore, β_{(τ,λˇ)}^{Δ(·,π)} is Lipschitz continuous; here, θ = 1/(γλˇ − 1) = 1/8, λˇ = 3, and γ = 3.
Also, we have
∥β_{(τ,λˇ)}^{Δ(·,π₁)}(π₁) ⊕ β_{(τ,λˇ)}^{Δ(·,π₂)}(π₂)∥ = ∥(5/56)π₁ − (5/56)π₂∥ = (5/56)∥π₁ − π₂∥ ≤ (1/9)∥π₁ − π₂∥.
Then, we have the Lipschitz constant μ = 1/9.
(v)
Again, from the Yosida approximation operator we have
Y_{(τ,λˇ)}^{Δ(·,π)}(π) = (1/λˇ)[τ − β_{(τ,λˇ)}^{Δ(·,π)}](π) = (1/3)[π − (5/56)π] = (51/168)π.
Also, we get
∥Y_{(τ,λˇ)}^{Δ(·,π₁)}(π₁) ⊕ Y_{(τ,λˇ)}^{Δ(·,π₁)}(π₂)∥ = ∥(51/168)π₁ − (51/168)π₂∥ = (51/168)∥π₁ − π₂∥ ≤ (3/8)∥π₁ − π₂∥.
That is, Y_{(τ,λˇ)}^{Δ(·,π)} is Lipschitz continuous with constant θ′ = λˇ/(γλˇ − 1) = 3/8, where λˇ = 3 and γ = 3.
Again, we get,
Y ( τ , λ ˇ ) Δ ( . , π 1 ) ( π 1 ) Y ( τ , λ ˇ ) Δ ( . , π 1 ) ( π 2 ) = 51 168 π 1 51 168 π 2 = 51 168 | | π 1 π 2 | | 1 4 | | π 1 π 2 | | .
Thus, we get the Lipschitz constant μ = 1 4 .
(vi)
Now, we evaluate the Cayley approximation operator.
C_{(τ,λˇ)}^{Δ(·,π)}(π) = [2β_{(τ,λˇ)}^{Δ(·,π)} − τ](π) = 2(5/56)π − π = −(46/56)π.
Now, we have
∥C_{(τ,λˇ)}^{Δ(·,π₁)}(π₁) ⊕ C_{(τ,λˇ)}^{Δ(·,π₁)}(π₂)∥ = ∥(46/56)π₂ − (46/56)π₁∥ = (46/56)∥π₁ − π₂∥ ≤ (5/4)∥π₁ − π₂∥.
That is, C_{(τ,λˇ)}^{Δ(·,π)} is Lipschitz continuous with constant (2θ + 1) = 5/4, where θ = 1/(γλˇ − 1) = 1/8, λˇ = 3, and γ = 3.
And also, we have
∥C_{(τ,λˇ)}^{Δ(·,π₁)}(π₁) ⊕ C_{(τ,λˇ)}^{Δ(·,π₂)}(π₂)∥ = (46/56)∥π₁ − π₂∥ ≤ (5/6)∥π₁ − π₂∥.
Therefore, the Lipschitz constant μ = 5/6.
(vii)
We consider the interval 0 ≤ (1/10)π ≤ 1 and take λˇ_{Π_{Λ˘}} = 1/5.
(viii)
All values of the constants fulfill the requirements (14) and (46) stated in Theorems 1 and 2.
(ix)
From Iterative Algorithm 3, we obtain
Jₙ = πₙ + eₙ(πₙ − πₙ₋₁),
p(πₙ₊₁) = (1 − αₙ)p(πₙ) + αₙβ_{(τ,λˇ)}^{Δ(·,πₙ)}[(1/2)(p(πₙ) + p(Jₙ)) + λˇ(Σ(l̆ₙ, m̆ₙ, n̆ₙ) ⊕ Y_{(τ,λˇ)}^{Δ(·,πₙ)}(C_{(τ,λˇ)}^{Δ(·,πₙ)}(Jₙ)))],
which reduces to
πₙ₊₁ = (1 − αₙ)πₙ + αₙ[(671/7840)πₙ + (401/87808)Jₙ].
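The scalar recursion above is easy to reproduce; the sketch below (ours, not part of the paper) runs it for the first scenario, eₙ = 1/n and αₙ = 6/(n + 6), re-using π₀ as the required π₋₁, and the iterates indeed approach the solution π = 0 for each listed starting value.

```python
# Run the derived scalar recursion for the first scenario (sketch).

def run(pi0, n_iter=30):
    pi_prev, pi_curr = pi0, pi0              # pi_0 also plays the role of pi_{-1}
    for n in range(1, n_iter + 1):
        e_n = 1.0 / n
        alpha_n = 6.0 / (n + 6)
        J = pi_curr + e_n * (pi_curr - pi_prev)
        pi_next = (1 - alpha_n) * pi_curr + alpha_n * (
            671.0 / 7840.0 * pi_curr + 401.0 / 87808.0 * J)
        pi_prev, pi_curr = pi_curr, pi_next
    return pi_curr

for pi0 in (-2, -1.5, -0.5, 0.5, 1.5, 2):
    print(pi0, run(pi0))   # every run tends to the solution 0
```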
Using MATLAB-R2024b, we investigate six different scenarios that involve the construction of estimation tables and convergence graphs. In each case, multiple initial values π₀ are considered along with parameter sequences αₙ and eₙ satisfying 0 ≤ αₙ ≤ 1 and 0 ≤ eₙ ≤ 1 for all n = 0, 1, 2, 3, ….
In the first scenario, the parameters are selected as eₙ = 1/n and αₙ = 6/(n + 6), with initial values π₀ = −2, −1.5, −0.5, 0.5, 1.5, 2. The sequence {πₙ₊₁} generated for the Yosida–Cayley Variational Inclusion Problem (YCVIP) converges to the solution π = 0 after 26 iterations. The corresponding results are presented in the convergence graph (Figure 1) and the estimation shown in Table 1.
In the second scenario, the parameters are set as eₙ = 1/n and αₙ = 9/(n + 9), with the same initial values π₀ = −2, −1.5, −0.5, 0.5, 1.5, 2. The results of the estimation are summarized in Table 2, and the convergence behavior of the sequence {πₙ₊₁}, which converges to π = 0 after 15 iterations, is illustrated in Figure 2.
In the third scenario, we set αₙ = 12/(n + 12) while keeping the same eₙ = 1/n and initial points π₀ = −2, −1.5, −0.5, 0.5, 1.5, 2. The numerical results are presented in Table 3, and the convergence behavior of the sequence {πₙ₊₁}, which converges to π = 0 after 12 iterations, is illustrated in Figure 3.
In the fourth scenario, we consider eₙ = 1/n and αₙ = 24/(n + 24), while maintaining the same initial values π₀ = −2, −1.5, −0.5, 0.5, 1.5, 2. The sequence {πₙ₊₁} converges to the solution π = 0 after seven iterations. The numerical results are summarized in Table 4, and the convergence behavior is illustrated in Figure 4.
In the fifth scenario, we take eₙ = 1/n and αₙ = 38/(n + 38) with the same initial values. The estimation is given in Table 5 and the convergence graph in Figure 5; the sequence {πₙ₊₁} converges to π = 0 after six iterations.
In the final scenario, we take αₙ = 98/(n + 98) with the same initial values and the same eₙ. The estimation is shown in Table 6 and the convergence graph in Figure 6; the sequence {πₙ₊₁} converges to π = 0 after five iterations.
To support our main results, six convergence graphs were obtained, demonstrating that the sequence {πₙ} converges to the solution π = 0 under different parameter settings. In the first scenario, convergence occurred within 27 iterations for αₙ = 6/(n + 6); in the second, within 16 iterations for αₙ = 9/(n + 9); in the third, within 13 iterations for αₙ = 12/(n + 12); in the fourth, within 8 iterations for αₙ = 24/(n + 24); in the fifth, within 7 iterations for αₙ = 38/(n + 38); and in the sixth and final scenario, convergence was achieved in only 6 iterations for αₙ = 98/(n + 98).
These findings indicate that a slower decay in the sequence {αₙ}, as in the final scenario, leads to a faster convergence rate. This behavior is observed while maintaining 0 ≤ αₙ ≤ 1 and 0 ≤ eₙ ≤ 1. Compared to existing methods such as those of Gebrie and Bedane [22] and Rajpoot et al. [23], the proposed inertial extrapolation model exhibits superior convergence performance. The numerical evidence clearly demonstrates that the integration of inertial extrapolation techniques enhances convergence speed and effectiveness in approaching the optimal solution.

8. Conclusions

In this paper, we studied the Yosida–Cayley Variational Inclusion Problem (YCVIP) and the Yosida–Cayley Resolvent Equation Problem (YCREP) within the framework of real-ordered Hilbert spaces, incorporating both multi-valued and single-valued mappings influenced by XOR and XNOR operations. Our primary focus was on the convergence analysis of these problems through the development of an inertial extrapolation scheme. We proposed and rigorously analyzed several iterative algorithms to solve these problems, establishing both the existence and convergence of solutions under suitable conditions on the involved operators and mappings. Furthermore, a comprehensive numerical experiment was presented to demonstrate the computational efficiency and fast convergence of the proposed methods, thereby highlighting their practical relevance and potential for broader applications.

Author Contributions

Conceptualization: A. and S.S.I.; Methodology: A. and S.S.I.; Software: A. and S.S.I.; Validation: A. and I.A.; Formal analysis: A. and S.S.I.; Writing—original draft preparation: A. and I.A.; Writing—review and editing: I.A. and S.S.I.; Funding: I.A. All authors have read and agreed to the published version of the manuscript.

Funding

The Researchers would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2025).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hartman, P.; Stampacchia, G. On some non-linear elliptic differential-functional equations. Acta Math. 1966, 115, 271–310. [Google Scholar] [CrossRef]
  2. Fechner, W. Functional inequalities motivated by the Lax–Milgram theorem. J. Math. Anal. Appl. 2013, 402, 411–414. [Google Scholar] [CrossRef]
  3. Rockafellar, R. Monotone operators and the proximal point algorithm. SIAM J. Cont. Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef]
  4. Kegl, M.; Butinar, B.J.; Kegl, B. An efficient gradient-based optimization algorithm for mechanical systems. Commun. Numer. Methods Eng. 2002, 18, 363–371. [Google Scholar] [CrossRef]
  5. Salahuddin. Solutions of Variational Inclusions over the Sets of Common Fixed Points in Banach Spaces. J. Appl. Non. Dy. 2021, 11, 75–85. [Google Scholar]
  6. Akram, M.; Dilshad, M. A Unified Inertial Iterative Approach for General Quasi Variational Inequality with Application. Fractal Fract. 2022, 6, 395. [Google Scholar] [CrossRef]
  7. AlNemer, G.; Ali, R.; Farid, M. On the Strong Convergence of Combined Generalized Equilibrium and Fixed Point Problems in a Banach Space. Axioms 2025, 14, 428. [Google Scholar] [CrossRef]
  8. Sitthithakerngkiet, K.; Rehman, H.U.; Argyros, I.K.; Seangwattana, T. Strong convergence of dual inertial fixed point algorithms for computing fixed points over the solution set of a variational inequality problem in real Hilbert spaces. Rend. Circ. Mat. Palermo 2025, 74, 82. [Google Scholar] [CrossRef]
  9. Daoud, S.M.; Shehab, M.; Al-Mimi, M.H.; Abualigah, L.; Zitar, A.R.; Shambour, M.K.Y. Gradient-Based Optimizer (GBO): A Review, Theory, Variants, and Applications. Arch. Comput. Methods Eng. 2023, 30, 2431–2449. [Google Scholar] [CrossRef]
  10. Rehman, H.U.; Sitthithakerngkiet, K.; Seangwattana, T. Dual-Inertial Viscosity-Based Subgradient Extragradient Methods for Equilibrium Problems Over Fixed Point Sets. Math. Methods Appl. Sci. 2025, 48, 6866–6888. [Google Scholar] [CrossRef]
  11. Altbawi, S.M.A.; Khalid, S.B.A.; Mokhtar, A.S.B.; Shareef, H.; Husain, N.; Yahya, A.; Haider, S.A.; Moin, L.; Alsisi, H.R. An Improved Gradient-Based Optimization Algorithm for Solving Complex Optimization Problems. Processes 2023, 11, 498. [Google Scholar] [CrossRef]
  12. Hassouni, A.; Moudafi, A. A perturbed algorithm for variational inclusions. J. Math. Anal. Appl. 1994, 185, 706–712. [Google Scholar] [CrossRef]
  13. Ahmad, I.; Irfan, S.S.; Farid, M.; Shukla, P. Nonlinear ordered variational inclusion problem involving XOR operation with fuzzy mappings. J. Inequal. Appl. 2020, 2020, 36. [Google Scholar] [CrossRef]
  14. Ahmad, I.; Pang, C.T.; Ahmad, R.; Ishtyak, M. System of Yosida inclusions involving XOR-operation. J. Nonlinear Convex Anal. 2017, 18, 831–845. [Google Scholar]
  15. Li, H.G.; Pan, X.B.; Deng, Z.Y.; Wang, C.Y. Solving GNOVI frameworks involving (γG, λ)-weak-GRD set-valued mappings in positive Hilbert spaces. Fixed Point Theory Appl. 2014, 2014, 146. [Google Scholar] [CrossRef]
  16. Iqbal, J.; Rajpoot, A.K.; Islam, M.; Ahmad, R.; Wang, Y. System of generalized variational inclusions involving Cayley operators and XOR-operation in q-uniformly smooth Banach spaces. Mathematics 2022, 10, 2837. [Google Scholar] [CrossRef]
  17. Ali, I.; Ahmad, R.; Wen, C.F. Cayley inclusion problem involving XOR-operation. Mathematics 2019, 7, 302. [Google Scholar] [CrossRef]
  18. Iqbal, J.; Wang, Y.; Rajpoot, A.K.; Ahmad, R. Generalized Yosida inclusion problem involving multi-valued operator with XOR operation. Demonstr. Math. 2024, 57, 20240011. [Google Scholar] [CrossRef]
  19. Noor, M.A. Generalized set-valued variational inclusions and resolvent equations. J. Math. Anal. Appl. 1998, 228, 206–220. [Google Scholar] [CrossRef]
  20. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  21. Chang, S.; Yao, J.C.; Wang, L.; Liu, M.; Zhao, L. On the inertial forward-backward splitting technique for solving a system of inclusion problems in Hilbert spaces. Optimization 2021, 70, 2511–2525. [Google Scholar] [CrossRef]
  22. Gebrie, A.G.; Bedane, S.D. A simple computational algorithm with inertial extrapolation for generalized split common fixed point problems. Heliyon 2021, 7, e08373. [Google Scholar] [CrossRef] [PubMed]
  23. Rajpoot, A.K.; Ishtyak, M.; Ahmad, R.; Wang, Y.; Yao, J.C. Convergence analysis for Yosida variational inclusion problem with its corresponding Yosida resolvent equation problem through inertial extrapolation scheme. Mathematics 2023, 11, 763. [Google Scholar] [CrossRef]
  24. Noor, M.A.; Noor, K.I. General bivariational inclusions and iterative methods. Int. J. Nonlinear Anal. Appl. 2023, 14, 309–324. [Google Scholar]
Figure 1. An illustration of the convergence sequence {π_{n+1}} for various initial values when α_n = 6/(n + 6).
Figure 2. An illustration of the convergence sequence {π_{n+1}} for various initial values when α_n = 9/(n + 9).
Figure 3. An illustration of the convergence sequence {π_{n+1}} for various initial values when α_n = 12/(n + 12).
Figure 4. An illustration of the convergence sequence {π_{n+1}} for various initial values when α_n = 24/(n + 24).
Figure 5. An illustration of the convergence sequence {π_{n+1}} for various initial values when α_n = 38/(n + 38).
Figure 6. An illustration of the convergence sequence {π_{n+1}} for various initial values when α_n = 98/(n + 98).
Table 1. The convergence sequence {π_n} is initialized with the values π_0 = −2, −1.5, −0.5, 0.5, 1.5, 2.

Iterations, n    π_0 = −2    π_0 = −1.5    π_0 = −0.5    π_0 = 0.5    π_0 = 1.5    π_0 = 2
1                −0.43243    −0.32433      −0.10811      0.10811      0.32433      0.43243
2                −0.13587    −0.1019       −0.033967     0.033967     0.1019       0.13587
3                −0.05304    −0.03978      −0.01326      0.01326      0.03978      0.05304
6                −0.00651    −0.00488      −0.00162      0.00162      0.00488      0.00651
8                −0.00145    −0.00108      −0.00036      0.00036      0.00108      0.00145
10               −0.00064    −0.00048      −0.00016      0.00016      0.00048      0.00064
13               −0.00023    −0.00017      −0.00005      0.00005      0.00017      0.00023
17               −0.00007    −0.00005      −0.00001      0.00001      0.00005      0.00007
20               −0.00003    −0.00002      −0.00000      0.00000      0.00002      0.00003
23               −0.00001    −0.00001      −0.00000      0.00000      0.00001      0.00001
26               −0.00001    −0.00000      −0.00000      0.00000      0.00000      0.00001
27               −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
30               −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
Table 2. The convergence sequence {π_n} is initialized with the values π_0 = −2, −1.5, −0.5, 0.5, 1.5, 2.

Iterations, n    π_0 = −2    π_0 = −1.5    π_0 = −0.5    π_0 = 0.5    π_0 = 1.5    π_0 = 2
1                −0.35406    −0.26554      −0.08851      0.08851      0.26554      0.35406
2                −0.08916    −0.06687      −0.02229      0.02229      0.06687      0.08916
3                −0.02801    −0.02101      −0.00700      0.00700      0.02101      0.028015
4                −0.01028    −0.00771      −0.00257      0.00257      0.00771      0.01028
5                −0.00423    −0.00317      −0.00105      0.00105      0.00317      0.00423
6                −0.00191    −0.00143      −0.00047      0.00047      0.00143      0.00191
8                −0.00047    −0.00035      −0.00011      0.00011      0.00035      0.00047
10               −0.00008    −0.00006      −0.00002      0.00002      0.000006     0.00008
12               −0.00003    −0.00002      −0.00000      0.00000      0.00002      0.00003
15               −0.00001    −0.00001      −0.00000      0.00000      0.00001      0.00001
16               −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
20               −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
Table 3. The convergence sequence {π_n} is initialized with the values π_0 = −2, −1.5, −0.5, 0.5, 1.5, 2.

Iterations, n    π_0 = −2    π_0 = −1.5    π_0 = −0.5    π_0 = 0.5    π_0 = 1.5    π_0 = 2
1                −0.31185    −0.23389      −0.07796      0.07796      0.23389      0.31185
2                −0.06742    −0.05057      −0.01685      0.01685      0.05057      0.06742
3                −0.01810    −0.01357      −0.00452      0.00452      0.01357      0.01810
4                −0.00568    −0.00426      −0.00142      0.00142      0.00426      0.00568
5                −0.00201    −0.00151      −0.00050      0.00050      0.00151      0.00201
6                −0.00078    −0.00059      −0.00019      0.00019      0.00059      0.00078
8                −0.00015    −0.00011      −0.00003      0.00003      0.00011      0.00015
10               −0.00003    −0.00002      −0.00000      0.00000      0.00002      0.00003
12               −0.00001    −0.00000      −0.00000      0.00000      0.00000      0.00001
13               −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
15               −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
Table 4. The convergence sequence {π_n} is initialized with the values π_0 = −2, −1.5, −0.5, 0.5, 1.5, 2.

Iterations, n    π_0 = −2    π_0 = −1.5    π_0 = −0.5    π_0 = 0.5    π_0 = 1.5    π_0 = 2
1                −0.24433    −0.18324      −0.061082     0.061082     0.18324      0.24433
2                −0.03809    −0.02857      −0.00952      0.00952      0.02857      0.03809
3                −0.00713    −0.0053       −0.00178      0.00178      0.00534      0.00713
4                −0.00154    −0.00115      −0.00038      0.00038      0.00115      0.00154
5                −0.00037    −0.00028      −0.00009      0.00009      0.00028      0.00037
6                −0.00010    −0.00007      −0.00002      0.00002      0.00007      0.00010
7                −0.00002    −0.00002      −0.00000      0.00000      0.00002      0.00002
8                −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
10               −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
Table 5. The convergence sequence {π_n} is initialized with the values π_0 = −2, −1.5, −0.5, 0.5, 1.5, 2.

Iterations, n    π_0 = −2    π_0 = −1.5    π_0 = −0.5    π_0 = 0.5    π_0 = 1.5    π_0 = 2
1                −0.21807    −0.16355      −0.05451      0.05451      0.16355      0.21807
2                −0.02863    −0.02147      −0.00715      0.00715      0.02147      0.02863
3                −0.00436    −0.00327      −0.00109      0.00109      0.00327      0.00436
4                −0.00075    −0.00056      −0.00018      0.00018      0.00056      0.00075
5                −0.00014    −0.00010      −0.00003      0.00003      0.00010      0.00014
6                −0.00003    −0.00002      −0.00000      0.00000      0.00002      0.00003
7                −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
8                −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
Table 6. The convergence sequence {π_n} is initialized with the values π_0 = −2, −1.5, −0.5, 0.5, 1.5, 2.

Iterations, n    π_0 = −2    π_0 = −1.5    π_0 = −0.5    π_0 = 0.5    π_0 = 1.5    π_0 = 2
1                −0.18965    −0.14223      −0.047412     0.047412     0.14223      0.18965
2                −0.01970    −0.01477      −0.00492      0.00492      0.01477      0.01970
3                −0.00222    −0.00166      −0.00055      0.00055      0.00166      0.00222
4                −0.00026    −0.00020      −0.00006      0.00006      0.00020      0.00026
5                −0.00003    −0.00002      −0.00000      0.00000      0.00002      0.00003
6                −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
7                −0.00000    −0.00000      −0.00000      0.00000      0.00000      0.00000
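As a compact summary of Tables 1–6, the short sketch below (again only an illustration, built from the iteration counts reported in the discussion above and the assumed form α_n = k/(n + k)) pairs each parameter choice with the number of iterations needed to reach the solution π = 0.

```python
# Hypothetical summary script: scenario parameter k in alpha_n = k / (n + k)
# versus the reported number of iterations needed to reach pi = 0.
REPORTED_ITERATIONS = {6: 27, 9: 16, 12: 13, 24: 8, 38: 7, 98: 6}

for k, iterations in sorted(REPORTED_ITERATIONS.items()):
    print(f"alpha_n = {k}/(n + {k}): converged in {iterations} iterations")
```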