Article

Three-Step Iterative Methodology for the Solution of Extended Ordered XOR-Inclusion Problems Incorporating Generalized Cayley–Yosida Operators

1 Department of Mathematical Science, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 Department of Mathematics, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram 522302, Andhra Pradesh, India
3 Department of Mathematics, Faculty of Science, University of Tabuk, Tabuk 71491, Saudi Arabia
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(12), 1969; https://doi.org/10.3390/math13121969
Submission received: 8 May 2025 / Revised: 5 June 2025 / Accepted: 9 June 2025 / Published: 14 June 2025
(This article belongs to the Special Issue Advances in Mathematical Analysis and Inequalities)

Abstract

The system of extended ordered XOR-inclusion problems (in short, SEOXORIP) involving generalized Cayley and Yosida operators is introduced and studied in this paper. The solution is obtained in a real ordered Banach space using a fixed-point approach. First, we establish a fixed-point lemma characterizing the solutions of SEOXORIP. Using this lemma, we develop a three-step iterative scheme for obtaining an approximate solution of SEOXORIP. Under Lipschitz continuity assumptions on the cost mappings, the strong convergence of the scheme is demonstrated. Lastly, we provide a numerical example, with a convergence graph generated using MATLAB 2018a, to verify the convergence of the sequence generated by the proposed scheme.

1. Introduction

It is widely acknowledged that convex minimization problems, variational inequalities, equilibrium problems, and feasibility problems are particular instances of variational inclusion problems. Given a single-valued mapping $A : X \to X$ and a multivalued (possibly nonlinear) operator $B : X \to 2^X$, the objective is to determine $x \in X$ such that
$$0 \in (A + B)(x),$$
which is designated as a variational inclusion problem, with applications in signal processing [1], image processing [2], machine learning [3], and multiple-set split feasibility problems [4].
The inability of projection methods to effectively address variational inclusion problems necessitated the emergence of resolvent operator methods, which have proven to be efficient in solving these problems. In their seminal works, Fang and Huang [5,6] put forth the concepts of H-monotone operators and H-accretive mappings, and by establishing the resolvent operators that correspond to these concepts, they investigated a specific category of variational inclusions within the context of Hilbert and Banach spaces. Following this, Xia and Huang [7] proposed the notion of general H-monotone operators as an extension of J-proximal mapping [8] and articulated a proximal mapping that is associated with general H-monotone operators, which diverges from the resolvent operator linked with the H-accretive mapping [6].
In 1972, a multitude of solutions pertaining to nonlinear equations were presented and analyzed by Amann [9]. In the recent past, fixed-point theory alongside its applications has undergone rigorous examination within the context of real ordered Banach spaces. Consequently, it is of paramount significance and a natural progression for generalized nonlinear ordered variational inequalities (ordered equations) to be meticulously investigated and deliberated. In 2008, Li [10] delineated the framework of generalized nonlinear ordered variational inequalities and proposed a computational algorithm aimed at approximating solutions for a specific class of generalized nonlinear ordered variational inequalities (ordered equations) within real ordered Banach spaces. Over the past two to three years, various forms of ordered variational inequalities and ordered inclusion problems have been extensively scrutinized, as evidenced in works such as [11,12,13] and the associated references therein.
On the other hand, Glowinski in 1989 [14] and Noor between 2000 and 2001 [15,16] formulated three-step iterative algorithms for various categories of variational inequalities through the Lagrangian multiplier and auxiliary principle methodologies. Three-step iterative algorithms consequently occupy a pivotal role in solving diverse problems in both pure and applied science. Glowinski et al. [14] and Noor [17] demonstrated that three-step schemes yield superior numerical outcomes in comparison with two-step and one-step approximation iterations. In 2018, Iqbal et al. [18] introduced a three-step iterative framework for generalized mixed ordered quasi-variational inclusions incorporating the XOR operator, while more recently, Ali et al. [19] examined the convergence and stability of a three-step iterative algorithm applied to the extended Cayley–Yosida inclusion problem in 2-uniformly smooth Banach spaces.
Remark 1.
It is imperative to acknowledge that the selection of the governing sequences can significantly influence the efficacy of iterative algorithms.
Motivated by the preceding research, and keeping Remark 1 in mind, this manuscript presents a framework for extended ordered XOR-inclusion problems incorporating generalized Cayley and Yosida operators. The solution is attained within a real ordered Banach space via a fixed-point methodology. First, we establish the fixed-point lemma characterizing the solutions of the proposed system. Applying this lemma, we formulate a three-step iterative scheme for deriving an approximate solution of SEOXORIP. Under Lipschitz continuity assumptions on the cost mappings, we establish the strong convergence of the iterative scheme. Finally, we furnish a numerical example, accompanied by a convergence graph generated through MATLAB 2018a, to substantiate the convergence of the sequence produced by the proposed scheme.

2. Prerequisites and Formulation of SEOXORIP

Throughout this manuscript, we consider $X$ to be a real ordered Banach space equipped with the norm $\|\cdot\|$ and the pairing $\langle \cdot , \cdot \rangle$ on $X \times X$, the metric $d$ generated by the norm $\|\cdot\|$, $2^X$ (correspondingly, $CB(X)$) representing the collection of nonempty (closed and bounded) subsets of $X$, and the Hausdorff metric $D(\cdot,\cdot)$ on $CB(X)$, which is defined as follows:
$$D(A, B) = \max\Big\{ \sup_{s \in A} d(s, B), \; \sup_{t \in B} d(A, t) \Big\}, \quad A, B \in CB(X),$$
where $d(s, B) = \inf_{t \in B} \|s - t\|$ and $d(A, t) = \inf_{s \in A} \|s - t\|$.
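For intuition, the Hausdorff metric above can be evaluated directly when the sets are finite. The following is a minimal Python sketch (the point sets are illustrative stand-ins for members of $CB(X)$, not objects from the paper):

```python
# Hausdorff distance D(A, B) = max{ sup_{s in A} d(s, B), sup_{t in B} d(A, t) }
# evaluated for finite subsets of the real line.

def point_to_set(s, B):
    # d(s, B) = inf_{t in B} |s - t|
    return min(abs(s - t) for t in B)

def hausdorff(A, B):
    forward = max(point_to_set(s, B) for s in A)   # sup_{s in A} d(s, B)
    backward = max(point_to_set(t, A) for t in B)  # sup_{t in B} d(A, t)
    return max(forward, backward)

DH = hausdorff([0.0, 1.0, 2.0], [0.5, 1.5])  # both directed distances equal 0.5
```

Note that both directed suprema are needed: the map is symmetric only after taking the outer maximum.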
A nonempty, closed, and convex subset $C$ of $X$ is called a cone if, for any $x \in C$ and any scalar $\lambda > 0$, we have $\lambda x \in C$. The cone $C$ is called pointed if $C \cap (-C) = \{0\}$. For any $x, y \in X$, we define an ordering $\leq_C$ such that $x \leq_C y$ if and only if $y - x \in C$. If either $x \leq_C y$ or $y \leq_C x$, then $x$ and $y$ are called comparable elements, and this is denoted by $x \propto y$. The real Banach space $X$ equipped with the ordering $\leq_C$ is said to be an ordered Banach space.
Let $\mathrm{lub}\{x, y\}$ and $\mathrm{glb}\{x, y\}$ denote the least upper bound and greatest lower bound of the set $\{x, y\}$, respectively, for all $x, y \in X$. Suppose that $\mathrm{lub}\{x, y\}$ and $\mathrm{glb}\{x, y\}$ for the set $\{x, y\}$ exist; then, we define the binary operations:
(i)
$x \vee y = \mathrm{lub}\{x, y\}$,
(ii)
$x \wedge y = \mathrm{glb}\{x, y\}$,
(iii)
$x \oplus y = (x \vee y) - (x \wedge y)$,
(iv)
$x \odot y = (x \wedge y) - (x \vee y)$.
The operations $\vee$, $\wedge$, $\oplus$, and $\odot$ are called the OR, AND, XOR, and XNOR operations, respectively. For more details, refer to [20].
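On $X = \mathbb{R}^n$ ordered by the cone $C = \mathbb{R}^n_+$, lub and glb reduce to the componentwise maximum and minimum, so the four operations can be realized concretely. A minimal Python sketch (the vectors are illustrative assumptions):

```python
# Componentwise realization of the lattice operations on R^n ordered by C = R^n_+.

def lub(x, y):
    return [max(a, b) for a, b in zip(x, y)]

def glb(x, y):
    return [min(a, b) for a, b in zip(x, y)]

def xor_op(x, y):
    # x XOR y = lub{x, y} - glb{x, y}
    return [a - b for a, b in zip(lub(x, y), glb(x, y))]

def xnor_op(x, y):
    # x XNOR y = glb{x, y} - lub{x, y} = -(x XOR y)
    return [-a for a in xor_op(x, y)]
```

For instance, with $x = (1, 4)$ and $y = (3, 2)$ one gets $x \vee y = (3, 4)$, $x \wedge y = (1, 2)$, and $x \oplus y = (2, 2)$; the identities $x \oplus x = 0$ and $(\lambda x) \oplus (\lambda y) = |\lambda|(x \oplus y)$ of Proposition 1 can be checked numerically.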
Proposition 1
([11,13]). Let $\oplus$ be an XOR operation and $\odot$ be an XNOR operation. Then, the following hold:
(i)
$x \oplus x = 0$ and $x \oplus y = y \oplus x = -(x \odot y) = -(y \odot x)$;
(ii)
If $x \propto 0$, then $-x \oplus 0 \leq x \leq x \oplus 0$;
(iii)
$(\lambda x) \oplus (\lambda y) = |\lambda|\,(x \oplus y)$;
(iv)
If $x \propto y$, then $x \oplus y = 0$ if and only if $x = y$;
(v)
If $x$, $y$, and $w$ are comparable to each other, then $(x \oplus y) \leq (x \oplus w) + (w \oplus y)$;
(vi)
$\|x \oplus y\| \leq \lambda_P \|x - y\|$, where $\lambda_P$ is the normal constant of the cone;
(vii)
If $x \propto y$, then $\|x \oplus y\| = \|x - y\|$;
(viii)
If $x$, $y$, and $w$ are comparable to each other, then $(w \oplus x) \oplus (w \oplus y) \leq (x \oplus y)$.
Definition 1
([11,13]). A single-valued mapping $f : X \to X$ is called
(i)
A comparison mapping if, for every $x, y \in X$ with $x \propto y$, we have $f(x) \propto f(y)$, $x \propto f(x)$, and $y \propto f(y)$;
(ii)
A strong comparison mapping if $f$ is a comparison mapping and $f(x) \propto f(y)$ if and only if $x \propto y$, for any $x, y \in X$;
(iii)
A $\theta_f$-ordered compression mapping if $f$ is a comparison mapping and
$$f(x) \oplus f(y) \leq \theta_f\,(x \oplus y), \quad \text{for } \theta_f \in (0, 1) \text{ and } x, y \in X.$$
Definition 2.
Let $f, g : X \to X$ be strong comparison single-valued mappings. Then, the mapping $P : X \times X \to X$ is called
(i)
A $\theta_{P_f}$-ordered compression mapping associated with $f$ in the first component if there exists a constant $\theta_{P_f} > 0$ such that
$$P(f(x), \cdot) \oplus P(f(y), \cdot) \leq \theta_{P_f}\,(x \oplus y);$$
(ii)
A $\theta_{P_g}$-ordered compression mapping associated with $g$ in the second component if there exists a constant $\theta_{P_g} > 0$ such that
$$P(\cdot, g(x)) \oplus P(\cdot, g(y)) \leq \theta_{P_g}\,(x \oplus y).$$
Definition 3
([21]). Let $(X, d)$ be a metric space and $D$ be the Hausdorff metric on $CB(X)$. Then, the multivalued map $S : X \times X \to 2^X$ is called a multivalued Lipschitz-type mapping with respect to $N$ if there is a constant $\lambda_{N_D} > 0$ such that
$$D\big(S(h, u_x), S(h, u_y)\big) \leq \lambda_{N_D}\,\|x - y\|, \quad \forall\, u_x \in N(x),\ u_y \in N(y),\ x, y \in X,$$
where $h : X \to X$ is a single-valued mapping.
Definition 4
([22]). Let $h : X \to X$ be a single-valued mapping, and let $N : X \to CB(X)$ and $S : X \times X \to 2^X$ be multivalued mappings. Then, $S(h(\cdot), z)$ is said to be $\alpha$-strongly accretive with respect to $h$ if there is a constant $\alpha > 0$ such that
$$\langle u - v, x - y \rangle \geq \alpha \|x - y\|^2, \quad \forall\, x, y \in X,\ u \in S(h(x), z),\ v \in S(h(y), z).$$
Proposition 2
([22]). Let $h : X \to X$ be a single-valued mapping, $N : X \to CB(X)$ be a multivalued mapping, and $S : X \times X \to 2^X$ be $\alpha$-strongly accretive with respect to $h$. Then, the mapping $[I + \lambda S(h(\cdot), z)]^{-1}$ is single-valued for $\lambda > 0$ and $z \in N(x)$.
Definition 5
([22]). Let $h : X \to X$ be a single-valued mapping, $N : X \to CB(X)$ be a multivalued mapping, and $S : X \times X \to 2^X$ be $\alpha$-strongly accretive with respect to $h$. Then, $S$ is said to be generalized $\alpha$-maximal monotone with respect to $h$ if
$$[I + \lambda S(h, z)](X) = X, \quad \text{for } \lambda > 0 \text{ and } z \in N(x).$$
Definition 6.
Let $h : X \to X$ be a single-valued mapping and $N : X \to CB(X)$ be a multivalued mapping. Let $S : X \times X \to 2^X$ be generalized $\alpha$-strongly accretive with respect to $h$. Then, the generalized proximal-point mapping $R_{I,\lambda}^{S(h(\cdot),z)} : X \to X$ associated with $h$ and $N$ is defined by
$$R_{I,\lambda}^{S(h(\cdot),z)}(x) = \big[I + \lambda S(h(\cdot), z)\big]^{-1}(x), \quad \forall\, x \in X.$$
Proposition 3
([11,22]). Let $h : X \to X$ be a single-valued mapping and $N : X \to CB(X)$ be a multivalued mapping. Let $S : X \times X \to 2^X$ be generalized $\alpha$-maximal monotone with respect to $h$. Then, the generalized proximal-point mapping $R_{I,\lambda}^{S(h(\cdot),z)}$ is $L$-Lipschitz continuous. That is,
$$\big\| R_{I,\lambda}^{S(h(\cdot),z)}(x) - R_{I,\lambda}^{S(h(\cdot),z)}(y) \big\| \leq L \|x - y\|, \quad \forall\, x, y \in X,$$
where $L = \dfrac{1}{\alpha\lambda + 1}$.
Definition 7
([19]). The generalized Yosida approximation operator $Y_{I,\lambda}^{S(h(\cdot),z)}$ associated with the proximal-point mapping $R_{I,\lambda}^{S(h(\cdot),z)}$ is the single-valued mapping $Y_{I,\lambda}^{S(h(\cdot),z)} : X \to X$ defined by
$$Y_{I,\lambda}^{S(h(\cdot),z)}(x) = \frac{1}{\lambda}\big[I - R_{I,\lambda}^{S(h(\cdot),z)}\big](x), \quad \forall\, x \in X.$$
Definition 8
([19]). The generalized Cayley operator $C_{I,\lambda}^{S(h(\cdot),z)}$ associated with the proximal-point mapping $R_{I,\lambda}^{S(h(\cdot),z)}$ is the single-valued mapping $C_{I,\lambda}^{S(h(\cdot),z)} : X \to X$ defined by
$$C_{I,\lambda}^{S(h(\cdot),z)}(x) = \big[2 R_{I,\lambda}^{S(h(\cdot),z)} - I\big](x), \quad \forall\, x \in X.$$
Remark 2.
(i)
It is well known that the generalized Yosida approximation operator is Lipschitz-type continuous with constant $\Theta_Y = \frac{1}{\lambda}(1 + L)$.
(ii)
Similarly, the generalized Cayley operator is Lipschitz-type continuous with constant $\Theta_C = 2L + 1$.
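For a concrete feel for these three operators and the constants $L$, $\Theta_Y$, $\Theta_C$, consider the scalar toy operator $S(x) = \alpha x$ (an assumption chosen for illustration; it is $\alpha$-strongly accretive on $\mathbb{R}$), for which the proximal-point, Yosida, and Cayley maps have closed forms. A Python sketch:

```python
# Scalar sketch: S(x) = alpha * x on R (assumed toy operator, not the
# paper's multivalued mapping). Closed forms of the proximal-point map,
# Yosida approximation, and Cayley operator, checked against the
# Lipschitz constants L, Theta_Y, Theta_C stated above.

alpha, lam = 2.0, 0.5

def resolvent(x):      # [I + lam*S]^{-1}(x) = x / (1 + lam*alpha)
    return x / (1.0 + lam * alpha)

def yosida(x):         # (1/lam) * [I - resolvent](x)
    return (x - resolvent(x)) / lam

def cayley(x):         # [2*resolvent - I](x)
    return 2.0 * resolvent(x) - x

L = 1.0 / (alpha * lam + 1.0)        # L = 1/(alpha*lam + 1)
theta_Y = (1.0 + L) / lam            # Theta_Y = (1/lam)(1 + L)
theta_C = 2.0 * L + 1.0              # Theta_C = 2L + 1

x, y = 3.0, -1.0
assert abs(resolvent(x) - resolvent(y)) <= L * abs(x - y) + 1e-12
assert abs(yosida(x) - yosida(y)) <= theta_Y * abs(x - y) + 1e-12
assert abs(cayley(x) - cayley(y)) <= theta_C * abs(x - y) + 1e-12
```

In this scalar case the resolvent bound is attained with equality, while the Yosida and Cayley bounds hold with room to spare.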

SEOXORIP and the Existence of Its Solution

We assume that $k \in \mathbb{N}$ and $i = 1, 2, \ldots, k$. Let $f_i, g_i, h_i : X \to X$ and $P_i : X \times X \to X$ be single-valued mappings. Suppose that $S_i : X \times X \to 2^X$ is a generalized $\alpha_i$-strongly accretive multivalued mapping with respect to $h_i$, and $N_i : X \to CB(X)$ is a closed and bounded multivalued mapping. Then, our interest is in finding $(x_1, \ldots, x_k) \in X^k$ and $(v_1, \ldots, v_k) \in N_1(x_1) \times \cdots \times N_k(x_k)$ such that
w 1 P 1 ( C I , λ 1 S 1 ( h 1 ( x 1 ) , v 2 ) ( x 1 ) , Y I , λ 2 S 2 ( h 2 ( x 2 ) , v 3 ) ( x 2 ) ) + f 1 ( x 1 ) g 2 ( x 2 ) + S 1 ( h 1 ( x 1 ) , v 2 ) , w 2 P 2 ( C I , λ 2 S 2 ( h 2 ( x 2 ) , v 3 ) ( x 2 ) , Y I , λ 3 S 3 ( h 3 ( x 3 ) , v 4 ) ( x 3 ) ) + f 2 ( x 2 ) g 3 ( x 3 ) + S 2 ( h 2 ( x 2 ) , v 3 ) , w k P k ( C I , λ k S k ( h k ( x k ) , v 1 ) ( x k ) , Y I , λ 1 S 1 ( h 1 ( x 1 ) , v 2 ) ( x 1 ) ) + f k ( x k ) g 1 ( x 1 ) + S k ( h k ( x k ) , v 1 ) ,
holds, for some $\hat{w} = (w_1, \ldots, w_k) \in X^k$. This system is called an extended nonlinear system of ordered XOR-inclusion problems involving generalized Cayley–Yosida operators (in short, SEOXORIP).
Moreover, to highlight the level of generalization of our problem, we present several special cases below.
Special Cases:
  • If we define $C_{I,\lambda_i}^{S_i(h_i(x_i),v_{i+1})}(x_i) = f_i(x_i)$ and $Y_{I,\lambda_i}^{S_i(h_i(x_i),v_{i+1})}(x_i) = x_i$ for all $x_i$, and $f_i = g_{i+1}$ for all $i$, then SEOXORIP (5) becomes a system for finding $(x_1, x_2, \ldots, x_k) \in X^k$ such that
    w 1 P 1 ( f 1 ( x 1 ) , x 2 ) + S 1 ( h 1 ( x 1 ) , v 2 ) , w 2 P 2 ( f 2 ( x 2 ) , x 3 ) + S 2 ( h 2 ( x 2 ) , v 3 ) , w k P k ( f k ( x k ) , x 1 ) + S k ( h k ( x k ) , v 1 ) ,
    for some ( w 1 , w 2 , , w k ) X k . System (6) was introduced and studied by [23].
  • Let $i = 1, 2$. If we define $v_i = \{g_i(x_i)\}$ and $C_{I,\lambda_i}^{S_i(h_i(x_i),v_{i+1})}(x_i) = Y_{I,\lambda_i}^{S_i(h_i(x_i),v_{i+1})}(x_i) = x_i$ for all $x_i$, then, for $w_i = 0$ for all $i$, system (5) reduces to the following System of Generalized Variational Inclusions (SGVI), that is, a problem where $(x, y) \in X_1 \times X_2$ must be found, such that
    θ 1 P 1 ( x , y ) + S 1 ( h 1 ( x ) , g 2 ( y ) ) , θ 2 P 2 ( x , y ) + S 2 ( h 2 ( x ) , g 1 ( y ) ) ,
    hold, where θ 1 = 0 and θ 2 = 0 are the zeros of X 1 and X 2 , respectively. System (7) was studied by Kazmi et al. [22].
By using the proximal-point mapping, we can characterize the solution of system (5) in terms of fixed-point equations.
Lemma 1.
SEOXORIP (5) has the solution ( x , v ) , i.e., x = ( x 1 , , x k ) X k and v = ( v 1 , , v k ) N 1 ( x 1 ) × × N k ( x k ) , if and only if the following equations hold:
x 1 = I , λ 1 S 1 ( h 1 ( · ) , v 2 ) x 1 λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) ) + f 1 ( x 1 ) g 2 ( x 2 ) w 1 } , x 2 = I , λ 2 S 2 ( h 2 ( · ) , v 3 ) x 2 λ 2 { P 2 ( C I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) , Y I , λ 3 S 3 ( h 3 ( · ) , v 4 ) ( x 3 ) ) + f 2 ( x 2 ) g 3 ( x 3 ) w 2 } , x k = I , λ k S k ( h k ( · ) , v 1 ) x k λ k { P k ( C I , λ k S k ( h k ( · ) , v 1 ) ( x k ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) ) + f k ( x k ) g 1 ( x 1 ) w k } ,
where $\lambda_i > 0$, $i = 1, \ldots, k$, are positive real numbers.
Proof. 
The proof follows directly from Definition 6 of the proximal-point mapping. □
Now, we have the following theorem for the existence of the solution to system (5).
Theorem 1.
Let $i = 1, 2, \ldots, k$ with $k \in \mathbb{N}$, and set $k + 1 \equiv 1$. Let $f_i, g_i, h_i : X \to X$ be $\theta_{f_i}$-, $\theta_{g_i}$-, and $\theta_{h_i}$-ordered compression, strong comparison mappings. Assume that $P_i : X \times X \to X$ is a $\theta_{P_iC_i}$-ordered compression mapping associated with $C_{I,\lambda_i}^{S_i(h_i(\cdot),z)}$ in the first component and a $\theta_{P_iY_i}$-ordered compression mapping associated with $Y_{I,\lambda_i}^{S_i(h_i(\cdot),z)}$ in the second component. Let $S_i : X \times X \to 2^X$ be generalized $\alpha_i$-strongly accretive with respect to $h_i$ and $N_i : X \to CB(X)$ be a closed and bounded multivalued mapping. Suppose $R_{I,\lambda_i}^{S_i(h_i(\cdot),z)}$, $Y_{I,\lambda_i}^{S_i(h_i(\cdot),z)}$, and $C_{I,\lambda_i}^{S_i(h_i(\cdot),z)}$ are Lipschitz continuous with constants $L_i$, $\Theta_{Y_i}$, and $\Theta_{C_i}$, respectively.
Additionally, let $x_i \propto x_{i+1}$, $y_i \propto y_{i+1}$, $f_i(x_i) \propto f_i(y_i)$, $g_i(x_i) \propto g_i(y_i)$, and $R_{I,\lambda_i}^{S_i(h_i(\cdot),z)}(x_i) \propto R_{I,\lambda_i}^{S_i(h_i(\cdot),z)}(y_i)$, and suppose that, for all constants $\lambda_1, \lambda_2, \ldots, \lambda_k > 0$, the following relations hold:
$$\lambda_1\big(\theta_{P_1C_1}\Theta_{C_1} + \theta_{f_1}\big) + \lambda_k\big(\theta_{P_kY_1}\Theta_{Y_1} + \theta_{g_1}\big) < \frac{1}{L_1} - 1,$$
$$\lambda_2\big(\theta_{P_2C_2}\Theta_{C_2} + \theta_{f_2}\big) + \lambda_1\big(\theta_{P_1Y_2}\Theta_{Y_2} + \theta_{g_2}\big) < \frac{1}{L_2} - 1,$$
$$\vdots$$
$$\lambda_k\big(\theta_{P_kC_k}\Theta_{C_k} + \theta_{f_k}\big) + \lambda_{k-1}\big(\theta_{P_{k-1}Y_k}\Theta_{Y_k} + \theta_{g_k}\big) < \frac{1}{L_k} - 1,$$
where
$$L_i = \frac{1}{\alpha_i \lambda_i + 1}, \quad \Theta_{Y_i} = \frac{1}{\lambda_i}(L_i + 1), \quad \text{and} \quad \Theta_{C_i} = 2L_i + 1.$$
Then, SEOXORIP (5) admits a solution ( x i , v i ) .
Proof. 
We define the mapping $Q : X^k \to X^k$ by
$$Q(x_1, \ldots, x_k) := \big(Q_1(x_1, x_2), \ldots, Q_k(x_k, x_1)\big), \quad \forall\, (x_1, \ldots, x_k) \in X^k,$$
where the mappings Q i : X × X X are defined as
Q 1 ( x 1 , x 2 ) = I , λ 1 S 1 ( h 1 ( · ) , v 2 ) x 1 λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) ) + f 1 ( x 1 ) g 2 ( x 2 ) w 1 } , Q 2 ( x 2 , x 3 ) = I , λ 2 S 2 ( h 2 ( · ) , v 3 ) x 2 λ 2 { P 2 ( C I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) , Y I , λ 3 S 3 ( h 3 ( · ) , v 4 ) ( x 3 ) ) + f 2 ( x 2 ) g 3 ( x 3 ) w 2 } , Q k ( x κ , x 1 ) = I , λ k S k ( h k ( · ) , v 1 ) x k λ k { P k ( C I , λ k S k ( h k ( · ) , v 1 ) ( x k ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) ) + f k ( x k ) g 1 ( x 1 ) w k } .
Let $x_i, y_i \in X$ and $v_i \in N_i(x_i)$ be such that $x_i \propto x_j$, $y_i \propto y_j$, and $v_i \propto v_j$; then, by applying Proposition 3, we have
Q 1 ( x 1 , x 2 ) Q 1 ( y 1 , y 2 ) = I , λ 1 S 1 ( h 1 ( · ) , v 2 ) x 1 λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) ) + f 1 ( x 1 ) g 2 ( x 2 ) w 1 } I , λ 1 S 1 ( h 1 ( · ) , v 2 ) y 1 λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( y 2 ) ) + f 1 ( y 1 ) g 2 ( y 2 ) w 1 } L 1 { x 1 y 1 + λ 1 P 1 C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( y 2 ) ) + λ 1 f 1 ( x 1 ) g 2 ( x 2 ) f 1 ( y 1 ) g 2 ( y 2 ) } .
By applying Proposition 1 and the ordered compressionness of f 1 and g 2 , we have
f 1 ( x 1 ) g 2 ( x 2 ) f 1 ( y 1 ) g 2 ( y 2 ) = f 1 ( x 1 ) f 1 ( y 1 ) g 2 ( x 2 ) g 2 ( y 2 ) f 1 ( x 1 ) f 1 ( y 1 ) + g 2 ( x 2 ) g 2 ( y 2 ) f 1 x 1 y 1 + g 2 x 2 y 2 .
Since P 1 is a P 1 C 1 -ordered compression mapping in the first component with respect to C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) and a P 1 Y 1 -ordered compression mapping in the second component with respect to Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) , we have
P 1 C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) P 1 C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( y 2 ) P 1 C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) P 1 C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) + P 1 C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) P 1 C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( y 2 ) P 1 C 1 C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) + P 1 Y 1 Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( y 2 ) P 1 C 1 Θ C 1 x 1 y 1 + P 1 Y 1 Θ Y 1 x 2 y 2 P 1 C 1 Θ C 1 x 1 y 1 + P 1 Y 1 Θ Y 1 x 2 y 2 .
Inserting (11) and (12) into (10) and applying Proposition 1, we get
Q 1 ( x 1 , x 2 ) Q 1 ( y 1 , y 2 ) = Q 1 ( x 1 , x 2 ) Q 1 ( y 1 , y 2 ) L 1 { x 1 y 1 + λ 1 ( P 1 C 1 Θ C 1 x 1 y 1 + P 1 Y 1 Θ Y 1 x 2 y 2 + f 1 x 1 y 1 + g 2 x 2 y 2 ) } = L 1 [ 1 + λ 1 ( P 1 C 1 Θ C 1 + f 1 ) ] x 1 y 1 + λ 1 ( P 1 Y 1 Θ Y 1 + g 2 ) x 2 y 2 .
Continuing in similar fashion, let x i , y i X , v i N i ( x i ) such that x i x j , y i y j , and  v i v j ; applying Proposition 3, we have
Q k ( x k , x 1 ) Q k ( y k , y 1 ) = I , λ k S k ( h k ( · ) , v 1 ) x k λ k { P k ( C I , λ k S k ( h k ( · ) , v 1 ) ( x k ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) ) + f k ( x k ) g 1 ( x 1 ) w k } I , λ k S k ( h k ( · ) , v 1 ) y k λ k { P k ( C I , λ k S k ( h k ( · ) , v 1 ) ( y k ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) ) + f k ( y k ) g 1 ( y 1 ) w k } L k { x k y k + λ k P k C I , λ k S k ( h k ( · ) , v 1 ) ( x k ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) P k ( C I , λ k S k ( h k ( · ) , v 1 ) ( y k ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) ) + λ k f k ( x k ) g 1 ( x 1 ) f k ( y k ) g 1 ( y 1 ) } .
By applying Proposition 1 and the ordered compressionness of f k and g 1 , we have
f k ( x k ) g 1 ( x 1 ) f k ( y k ) g 1 ( y 1 ) = f k ( x k ) f k ( y k ) g 1 ( x 1 ) g 1 ( y 1 ) f k ( x k ) f k ( y k ) + g 1 ( x 1 ) g 1 ( y 1 ) f k x k y k + g 1 x 1 y 1 .
Since P k is a P k C k -ordered compression mapping associated with C I , λ k S k ( h k ( · ) , v 1 ) in the first component and a P k Y 1 -ordered compression mapping associated with Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) in the second component, we have
P k C I , λ k S k ( h k ( · ) , v 1 ) ( x k ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) P k C I , λ k S k ( h k ( · ) , v 1 ) ( y k ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) P k C I , λ k S k ( h k ( · ) , v 1 ) ( x k ) , Y I , λ k S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) P k C I , λ k S k ( h k ( · ) , v 1 ) ( y k ) , Y I , λ 1 S 1 ( h 2 ( · ) , v 2 ) ( x 1 ) + P k C I , λ k S k ( h k ( · ) , v 1 ) ( y k ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) P k C I , λ 1 S k ( h k ( · ) , v 1 ) ( y 1 ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) P k C k C I , λ k S k ( h k ( · ) , v 1 ) ( x k ) C I , λ k S k ( h k ( · ) , v 1 ) ( y k ) + P k Y k Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 ) P k C k Θ C k x k y k + P k Y k Θ Y k x 1 y 1 P k C k Θ C k x k y k + P k Y k Θ Y k x 1 y 1 .
Inserting (15) and (16) into (14) and applying Proposition 1, we get
Q k ( x k , x 1 ) Q k ( y k , y 1 ) = Q k ( x k , x 1 ) Q k ( y k , y 1 ) L k { x k y k + λ k ( P k C k Θ C k x k y k + P k Y k Θ Y 1 x 1 y 1 + f k x k y k + g 1 x 1 y 1 ) } = L k [ 1 + λ k ( P k C k Θ C k + f k ) ] x k y k + λ k ( P k Y 1 Θ Y 1 + g 1 ) x 1 y 1 .
Based on (13) and (17), we have
Q 1 ( x 1 , x 2 ) Q 1 ( y 1 , y 2 ) + + Q k ( x k , x 1 ) Q k ( y k , y 1 ) L 1 [ 1 + λ 1 ( P 1 C 1 Θ C 1 + f 1 ) + λ k ( P k Y 1 Θ Y 1 + g 1 ) ] x 1 y 1 + L 2 [ 1 + λ 2 ( P 2 C 2 Θ C 2 + f 2 ) + λ 1 ( P 1 Y 2 Θ Y 2 + g 2 ) ] x 2 y 2 + + + L k [ 1 + λ k ( P k C k Θ C k + f k ) + λ k ( P k 1 Y k Θ Y k + g k ) ] x k y k ,
that is,
i = 1 k Q i ( x i , x i + 1 ) Q i ( y i , y i + 1 ) i = 1 k Ξ i k + i 1 L i x i y i ,
where
Ξ 1 k = L 1 [ 1 + λ 1 ( P 1 C 1 Θ C 1 + f 1 ) + λ k ( P k Y 1 Θ Y 1 + g 1 ) ] , Ξ 2 1 = L 2 [ 1 + λ 2 ( P 2 C 2 Θ C 2 + f 2 ) + λ 1 ( P 1 Y 2 Θ Y 2 + g 2 ) ] , Ξ k k 1 = L k [ 1 + λ k ( P k C k Θ C k + f k ) + λ k ( P k 1 Y k Θ Y k + g k ) ] .
The norm $\|(x_1, \ldots, x_k)\|_1$ on $X^k$ is defined as
$$\|(x_1, \ldots, x_k)\|_1 = \|x_1\| + \cdots + \|x_k\|, \quad \forall\, (x_1, \ldots, x_k) \in X^k.$$
Clearly, the structure $(X^k, \|\cdot\|_1)$ forms a Banach space with respect to the norm (19). Thus, the definition of the mapping $Q$, together with (18) and (19), implies that
Q ( x 1 , , x κ ) Q ( y 1 , , y κ ) 1 = Q 1 ( x 1 , x 2 ) Q 1 ( y 1 , y 2 ) + + Q k ( x k , x 1 ) Q k ( y k , y 1 ) max Ξ 1 k , Ξ 2 1 , , Ξ k k 1 ( x 1 y 1 + x 2 y 2 + x k y k ) .
From (9), we conclude that $\max\{\Xi_1^k, \Xi_2^1, \ldots, \Xi_k^{k-1}\} < 1$, so $Q$ is a contraction on $(X^k, \|\cdot\|_1)$. Therefore, there exists a unique fixed point $(x_1, \ldots, x_k) \in X^k$ of the mapping $Q$. That is,
Q ( x 1 , , x k ) = ( x 1 , , x k ) .
This leads to
x 1 = I , λ 1 S 1 ( h 1 ( · ) , v 2 ) x 1 λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) ) + f 1 ( x 1 ) g 2 ( x 2 ) w 1 } , x 2 = I , λ 2 S 2 ( h 2 ( · ) , v 3 ) x 2 λ 2 { P 2 ( C I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 ) , Y I , λ 3 S 3 ( h 3 ( · ) , v 4 ) ( x 3 ) ) + f 2 ( x 2 ) g 3 ( x 3 ) w 2 } , x k = I , λ k S k ( h k ( · ) , v 1 ) x k λ k { P k ( C I , λ k S k ( h k ( · ) , v 1 ) ( x k ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 ) ) + f k ( x k ) g 1 ( x 1 ) w k } .
Hence, Lemma 1 ensures that $(x, v)$ is a solution of system (5).    □
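The contraction argument in the proof can be visualized with a toy affine map on $(\mathbb{R}^2, \|\cdot\|_1)$. The coefficients below are assumptions chosen so that the Lipschitz constant in the $\ell_1$ norm is $0.5 < 1$ (the maximal absolute column sum), mirroring the role of $\max\{\Xi_1^k, \ldots, \Xi_k^{k-1}\} < 1$:

```python
# Toy contraction on (R^2, ||.||_1): the linear part of Q has ell-1 operator
# norm max(0.3 + 0.2, 0.1 + 0.4) = 0.5 < 1, so Picard iteration converges to
# the unique fixed point, here (1.25, -1.25).

def Q(x1, x2):
    return (0.3 * x1 + 0.1 * x2 + 1.0,
            0.2 * x1 + 0.4 * x2 - 1.0)

x1, x2 = 0.0, 0.0          # arbitrary starting point
for _ in range(100):
    x1, x2 = Q(x1, x2)     # error shrinks by a factor <= 0.5 per step
```

Any other starting point yields the same limit, which is exactly the uniqueness assertion used in the proof.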

3. Three-Step Iterative Scheme and Its Convergence

In this section, leveraging Lemma 1, we formulate a three-step iterative algorithm aimed at determining the approximate solution for the newly established system of ordered XOR-inclusion problems involving Cayley and Yosida operators (Algorithm 1). The convergence properties of the sequence produced by the algorithm are demonstrated under certain appropriate assumptions.
Algorithm 1: Three-Step Iterative Algorithm for the Approximate Solution of SEOXORIP
Let $i = 1, 2, \ldots, k$ with $k \in \mathbb{N}$, and set $k + 1 \equiv 1$. Let $f_i, g_i, h_i : X \to X$ and $P_i : X \times X \to X$ be single-valued mappings. Let $N_i : X \to CB(X)$ be a $D$-Lipschitz continuous mapping with constant $\lambda_{D_{N_i}}$ and let $S_i : X \times X \to 2^X$ be a generalized $\alpha_i$-strongly accretive mapping with respect to $h_i$. Then,
Initially: Choose ( x 1 0 , , x k 0 ) X k and ( v 1 0 , , v k 0 ) N 1 ( x 1 0 ) × × N k ( x k 0 ) .
Step I: Let x i ( n + 1 ) x i ( n ) and v i ( n ) v j ( n ) . We define
x 1 ( n + 1 ) = α 1 n I , λ 1 S 1 ( h 1 ( · ) , v 2 ) [ y 1 n λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 n ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( y 2 n ) ) + f 1 ( y 1 n ) g 2 ( y 2 n ) w 1 } ] + ( 1 α 1 n ) x 1 n + α 1 n δ 1 n y 1 ( n ) = β 1 n I , λ 1 S 1 ( h 1 ( · ) , v 2 ) [ z 1 n λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( z 1 n ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( z 2 n ) ) + f 1 ( z 1 n ) g 2 ( z 2 n ) w 1 } ] + ( 1 β 1 n ) x 1 ( n ) + β 1 n δ 2 n z 1 ( n ) = γ 1 n I , λ 1 S 1 ( h 1 ( · ) , v 2 ) [ x 1 n λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 n ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 n ) ) + f 1 ( x 1 n ) g 2 ( x 2 n ) w 1 } ] + ( 1 γ 1 n ) x 1 ( n ) + γ 1 n δ 3 n x k ( n + 1 ) = α k n I , λ 1 S k ( h k ( · ) , v 1 ) [ y k n λ k { P 1 ( C I , λ k S k ( h k ( · ) , v 1 ) ( y k n ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( y 1 n ) ) + f k ( y k n ) g 1 ( y 1 n ) w k } ] + ( 1 α k n ) x k n + α k n δ k n y k ( n ) = β k n I , λ k S k ( h k ( · ) , v 1 ) [ z k n λ k { P k ( C I , λ k S k ( h k ( · ) , v 1 ) ( z k n ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( z 1 n ) ) + f k ( z k n ) g 1 ( z 1 n ) w k } ] + ( 1 β k n ) x k ( n ) + β k n δ k + 1 n z k ( n ) = γ k n I , λ k S k ( h k ( · ) , v 1 ) [ x k n λ k { P k ( C I , λ k S k ( h k ( · ) , v 1 ) ( x k n ) , Y I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 n ) ) + f k ( x k n ) g 1 ( x 1 n ) w k } ] + ( 1 γ k n ) x k ( n ) + γ k n δ k + 2 n .
for $n = 0, 1, 2, \ldots$, where $\lambda_i > 0$ is a constant and $\alpha_i^n$, $\beta_i^n$, and $\gamma_i^n$ are real sequences in $(0, 1)$ such that
$$\sum_{n=1}^{\infty} \alpha_i^n = \infty, \quad \forall\, i.$$
Step II:  Choose $v_i^{(n+1)} \in N_i(x_i^{(n+1)})$ such that
$$\big\| v_i^{(n+1)} - v_i^{(n)} \big\| \leq \Big(1 + \frac{1}{n+1}\Big)\, D\big(N_i(x_i^{(n+1)}), N_i(x_i^{(n)})\big),$$
where $D(\cdot,\cdot)$ is the Hausdorff metric on $CB(X)$.
Step III:  If the accuracy is satisfactory and $x_i^{(n+1)}$ and $v_i^{(n)}$, for all $i$, satisfy Step I, then stop; otherwise, set $n = n + 1$ and return to Step I.
Remark 3.
Algorithm 1 becomes a two-step (Ishikawa-type) iterative algorithm if $\gamma_i^n = 0$ for all $n \geq 0$, and it reduces to a one-step (Mann-type) iterative scheme when $\beta_i^n = \gamma_i^n = 0$ for all $n \geq 0$. We also observe that, by choosing the operators in Algorithm 1 appropriately, we can easily recover many further methods that have been studied by several authors for addressing ordered variational inclusions; see, e.g., [24,25,26].
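The three-step structure of Algorithm 1 can be sketched on a scalar toy inclusion $0 \in P(x) + S(x)$ with $P(x) = x - 1$ and $S(x) = 2x$ (assumed operators, not the paper's cost mappings), whose exact solution is $x^* = 1/3$. A minimal Python sketch with constant governing sequences:

```python
# Toy three-step (Noor-type) scheme for the scalar inclusion 0 in P(x) + S(x),
# with S(x) = 2x and P(x) = x - 1 (assumed example operators); exact solution
# x* = 1/3. Each step applies one resolvent update T.

lam = 0.5
alpha_n, beta_n, gamma_n = 0.9, 0.8, 0.7   # governing sequences, held constant

def resolvent(u):
    # [I + lam*S]^{-1}(u) for S(x) = 2x
    return u / (1.0 + 2.0 * lam)

def T(u):
    # one fixed-point step: u -> resolvent(u - lam * P(u))
    return resolvent(u - lam * (u - 1.0))

x = 5.0
for _ in range(60):
    z = gamma_n * T(x) + (1.0 - gamma_n) * x   # third-level update
    y = beta_n * T(z) + (1.0 - beta_n) * x     # second-level update
    x = alpha_n * T(y) + (1.0 - alpha_n) * x   # main update
```

Setting `gamma_n = 0` recovers the Ishikawa-type scheme and `beta_n = gamma_n = 0` the Mann-type scheme of Remark 3.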
Lemma 2.
Let $\{v_n\}$ and $\{\eta_n\}$ be sequences in $[0, \infty)$ such that
(i)
$0 \leq \eta_n < 1$, $n = 0, 1, 2, \ldots$, and $\limsup_{n \to \infty} \eta_n < 1$;
(ii)
$v_{n+1} \leq \eta_n v_n$, $n = 0, 1, 2, \ldots$
Then, $v_n \to 0$ as $n \to \infty$.
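A quick numerical illustration of Lemma 2 (the sequence $\eta_n$ below is an assumption with $\limsup_{n\to\infty} \eta_n = 0.5$):

```python
# If v_{n+1} <= eta_n * v_n with eta_n in [0, 1) and limsup eta_n < 1,
# the product of the eta_n forces v_n to 0.

v = 1.0
for n in range(200):
    eta_n = 0.5 + 0.4 / (n + 1)   # eta_n in [0, 1), limsup = 0.5 < 1
    v = eta_n * v                  # equality is the worst admissible case
```

Only the limsup condition matters: finitely many $\eta_n$ close to 1 (here the early terms) cannot prevent the decay.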
Theorem 2.
Let all the mappings and conditions be the same as in Theorem 1, except for condition (9). Additionally, if
$$\big\| R_{I,\lambda_i}^{S_i(h(\cdot),u)}(x) - R_{I,\lambda_i}^{S_i(h(\cdot),v)}(x) \big\| \leq \Lambda_i \|u - v\|,$$
sup n 1 { α i n β i n ν i 2 { γ i n ν i + 1 } + α i n ν i + 1 } < sup n 1 { α i n β i n ν i 2 γ i n ν i + α i n ν i β i n + α i n } , sup n 1 { α i n β i n ν i ( ν i γ i n + 1 ) ( δ i + 1 + μ i + 1 ) + α i n β i n ν i δ i + 1 γ i + 1 n ν i + 1 + α i n ν i + 1 β i + 1 n δ i + 1 ν i + 1 γ i + 1 n + α i n δ i + 1 } , < sup n 1 { α i n β i n ν i δ i + 1 γ i + 1 n + α i n ν i + 1 β i + 1 n δ i + 1 γ i + 1 n + 1 + α i n δ i + 1 β i + 1 n } , sup n 1 { α i n γ i + 1 n δ i + 1 ( β i n ν i + β i + 1 n ν i + 1 ) ( δ i + 2 + μ i + 2 ) + α i n β i + 2 n τ i + 2 ν i + 2 ( γ i + 2 n ν i + 2 + 1 ) + α i n β i n ν i τ i + 2 ( ν i γ i n + γ i + 2 n ν i + 2 ) + α i n β i + 1 n δ i + 1 [ γ i + 1 n δ i + 2 ν i + 2 + δ i + 2 + μ i + 2 ] } < sup n 1 α i n β i + 2 n τ i + 2 ν i + 2 γ i + 2 n + α i n β i n ν i τ i + 2 γ i + 2 n + 1 + α i n β i + 1 n δ i + 1 γ i + 1 n δ i + 2 + β i + 1 n , sup n 1 { α i n β i n ν i [ γ i + 1 n δ i + 1 δ i + 3 + γ i + 2 n τ i + 2 ( δ i + 3 + μ i + 3 ) ] + α i n β i + 1 n δ i + 1 [ γ i + 1 n ν i + 1 δ i + 3 + γ i + 2 n δ i + 2 δ i + 3 + γ i + 2 n δ i + 2 μ i + 3 + τ i + 3 + γ i + 3 n τ i + 3 ν i + 3 ] + α i n β i + 2 n τ i + 2 [ γ i + 2 n ν i + 2 ( δ i + 3 + μ i + 3 ) + γ i + 3 n δ i + 3 ν i + 3 + δ i + 3 + τ i + 3 ] } < sup n 1 { 1 + γ i + 3 n τ i + 3 + α i n β i + 2 n τ i + 2 γ i + 3 n δ i + 3 } , sup n 1 { α i n β i n ν i τ i + 2 τ i + 4 γ i + 2 n + α i n β i + 1 n δ i + 1 [ δ i + 2 γ i + 2 n τ i + 4 + γ i + 3 n τ i + 3 ( δ i + 4 + μ i + 4 ) ] + α i n β i + 2 n [ τ i + 2 τ i + 4 ( γ i + 2 n ν i + 2 + γ i + 4 n ν i + 4 + 1 ) + γ i + 3 n δ i + 3 ( τ i + 3 δ i + 4 + τ i + 2 μ i + 4 ) ] } < sup n 1 { 1 + α i n β i + 2 n τ i + 2 τ i + 4 γ i + 4 n } , sup n 1 { α i n β i + 1 n γ i + 3 n δ i + 1 τ i + 3 τ i + 5 + α i n β i + 2 n τ i + 2 [ γ i + 3 n δ i + 3 τ i + 5 + γ i + 4 n τ i + 4 ( δ i + 5 + μ i + 5 ) ] } < 1 , sup n 1 { α i n β i + 2 n γ i + 4 n τ i + 2 τ i + 4 τ i + 6 } < 1 ,
and
$$\lim_{n \to \infty} \delta_i^n = 0, \quad i = 1, 2, \ldots, k,$$
hold, then the sequences { ( x i ( n ) , v i ( n ) ) } generated by Algorithm 1 converge strongly to the solution ( x i , v i ) of system (5).
Proof of Convergence.
Theorem 1 guarantees that system (5) admits a solution $(x_i, v_i)$. Let us assume that $x^* = (x_1^*, x_2^*, \ldots, x_k^*)$ is the unique solution of SEOXORIP (5). Then, we have
x i * = [ α i n I , λ i S i ( h i ( · ) , v i + 1 ) ( x i * λ i { P i C I , λ i S i ( h i ( · ) , v i + 1 ) ( x i * ) , Y I , λ i + 1 S i + 1 ( h i + 1 ( · ) , v i + 2 ) ( x i + 1 * ) + f i ( x i ) * g i + 1 ( x i + 1 * ) w i } ) + ( 1 α i n ) x i * ] = [ β i n I , λ i S i ( h i ( · ) , v i + 1 ) [ x i * λ i { P i C I , λ i S i ( h i ( · ) , v i + 1 ) ( x i * ) , Y I , λ i + 1 S i + 1 ( h i + 1 ( · ) , v i + 2 ) ( x i + 1 * ) + f i ( x i * ) g i + 1 ( x i + 1 * ) w i } ] + ( 1 β i n ) x i * ] = [ γ i n I , λ i S i ( h i ( · ) , v i + 1 ) [ x i * λ i { P i C I , λ i S i ( h i ( · ) , v i + 1 ) ( x i * ) , Y I , λ i + 1 S i + 1 ( h i + 1 ( · ) , v i + 2 ) ( x i + 1 * ) + f i ( x i * ) g i + 1 ( x i + 1 * ) w i } ] + ( 1 γ i n ) x i * ] .
From (22), (26), and Proposition 3, we obtain
x 1 ( n + 1 ) x 1 * = [ α 1 n I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( y 1 n λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( y 1 n ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 n ) ( y 2 n ) ) + f 1 ( y 1 n ) g 2 ( y 2 n ) w 1 } ) + ( 1 α 1 n ) x 1 n + α 1 n δ 1 n ] [ α 1 n I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 * λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 * ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 * ) ) + f 1 ( x 1 * ) g 2 ( x 2 * ) w 1 } ) + ( 1 α 1 n ) x 1 * ] α 1 n I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( y 1 n λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( y 1 n ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 n ) ( y 2 n ) ) + f 1 ( y 1 n ) g 2 ( y 2 n ) w 1 } ) I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 * λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 * ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 * ) ) + f 1 ( x 1 * ) g 2 ( x 2 * ) w 1 } ) + ( 1 α 1 n ) x 1 n x 1 * + α 1 n δ 1 n .
In a similar way to (10), using (24), we have
x 1 ( n + 1 ) x 1 * α 1 n [ Λ 2 v 2 n v 2 + L 1 { y 1 n x 1 * + λ 1 P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( y 1 n ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( y 2 n ) ) P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( x 1 * ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 * ) ) + λ 1 f 1 ( y 1 n ) g 2 ( y 2 n ) f 1 ( x 1 * ) g 2 ( x 2 * ) } ] + ( 1 α 1 n ) x 1 n x 1 * + α 1 n δ 1 n 0 .
From Definitions 7 and 8 and (12), we have
P 1 C I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( y 1 n ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( y 2 n ) P 1 C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 * ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 * ) P 1 C 1 C I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( y 1 n ) C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 * ) + P 1 Y 1 Y I , λ 2 S 2 ( h 2 ( · ) , v 3 n ) ( y 2 n ) Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 * ) 3 P 1 C 1 y 1 n x 1 * + P 1 C 1 Λ 2 1 + 1 1 + n λ D N 2 y 2 n x 2 * + P 1 Y 1 λ 2 ( 2 y 2 n x 2 * + Λ 3 1 + 1 1 + n λ D N 3 y 3 n x 3 * ) .
Since v 2 n ∈ N 2 ( x 2 n ) , by Nadler's theorem [27], there exists v 2 ∈ N 2 ( x 2 * ) such that
v 2 n v 2 1 + 1 n + 1 D ( N 2 ( x 2 n ) , N 2 ( x 2 * ) ) .
Using the D -Lipschitz continuity of N 2 , we obtain
v 2 n v 2 1 + 1 n + 1 λ D N 2 x 2 n x 2 * .
Using (28), (30), and Proposition 3 in (27), we obtain
x 1 ( n + 1 ) x 1 * α 1 n [ Λ 2 1 + 1 n + 1 λ D N 2 x 2 n x 2 * + L 1 { y 1 n x 1 * + λ 1 ( 3 P 1 C 1 y 1 n x 1 * + P 1 C 1 Λ 2 1 + 1 1 + n λ D N 2 y 2 n x 2 * + P 1 Y 1 λ 2 ( 2 y 2 n x 2 * + Λ 3 1 + 1 1 + n λ D N 3 y 3 n x 3 * ) + λ 1 f 1 y 1 n x 1 * + g 2 y 2 n x 2 * } ] + ( 1 α 1 n ) x 1 n x 1 * + α 1 n δ 1 n 0 .
By ( v i i ) of Proposition 1, we have
x 1 ( n + 1 ) x 1 * = x 1 ( n + 1 ) x 1 * = α 1 n L 1 1 + λ 1 f 1 y 1 n x 1 * + ( 1 α 1 n ) x 1 n x 1 * + α 1 n Λ 2 1 + 1 1 + n λ D N 2 L 1 P 1 C 1 + 2 P 1 Y 1 λ 2 + λ 1 g 2 y 2 n x 2 * + α 1 n Λ 2 1 + 1 1 + n λ D N 2 x 2 n x 2 * + α 1 n L 1 P 1 Y 1 λ 2 Λ 3 1 + 1 1 + n λ D N 3 y 3 n x 3 * + α 1 n δ 1 n 0 .
Let
ν 1 = L 1 1 + λ 1 f 1 , δ 2 = Λ 2 1 + 1 1 + n λ D N 2 L 1 P 1 C 1 + 2 P 1 Y 1 λ 2 + λ 1 g 2 ,
μ 2 = Λ 2 1 + 1 1 + n λ D N 2 a n d τ 3 = L 1 P 1 Y 1 λ 2 Λ 3 1 + 1 1 + n λ D N 3 .
x 1 ( n + 1 ) x 1 * = α 1 n ν 1 y 1 n x 1 * + α 1 n δ 2 y 2 n x 2 * + α 1 n μ 2 x 2 n x 2 * + ( 1 α 1 n ) x 1 n x 1 * + α 1 n τ 3 y 3 n x 3 * + α 1 n δ 1 n 0 .
Similarly, we calculate
y 1 ( n ) x 1 * ( β 1 n I , λ 1 S 1 ( h 1 ( · ) , v 2 ) [ z 1 n λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( z 1 n ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( z 2 n ) ) + f 1 ( z 1 n ) g 2 ( z 2 n ) w 1 } ] + ( 1 β 1 n ) x 1 ( n ) + β 1 n δ 2 n ) ( β 1 n I , λ 1 S 1 ( h 1 ( · ) , v 2 ) [ x 1 * λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 * ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 * ) ) + f 1 ( x 1 * ) g 2 ( x 2 * ) w 1 } ] + ( 1 β 1 n ) x 1 * ) .
y 1 ( n ) x 1 * β 1 n [ Λ 2 v 2 n v 2 + L 1 { z 1 n x 1 * + λ 1 P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( z 1 n ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( z 2 n ) ) P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( x 1 * ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 * ) ) + λ 1 f 1 ( z 1 n ) g 2 ( z 2 n ) f 1 ( x 1 * ) g 2 ( x 2 * ) } ] + ( 1 β 1 n ) x 1 n x 1 * + β 1 n δ 2 n 0 β 1 n [ Λ 2 1 + 1 n + 1 λ D N 2 x 2 n x 2 * + L 1 { z 1 n x 1 * + λ 1 ( 3 P 1 C 1 z 1 n x 1 * + P 1 C 1 Λ 2 1 + 1 1 + n λ D N 2 z 2 n x 2 * + P 1 Y 1 λ 2 ( 2 z 2 n x 2 * + Λ 3 1 + 1 1 + n λ D N 3 z 3 n x 3 * ) + λ 1 f 1 z 1 n x 1 * + g 2 y 2 n x 2 * } ] ] + ( 1 β 1 n ) x 1 n x 1 * + β 1 n δ 2 n 0 = β 1 n L 1 1 + λ 1 f 1 z 1 n x 1 * + ( 1 β 1 n ) x 1 n x 1 * + β 1 n Λ 2 1 + 1 1 + n λ D N 2 L 1 P 1 C 1 + 2 P 1 Y 1 λ 2 + λ 1 g 2 z 2 n x 2 * + β 1 n Λ 2 1 + 1 1 + n λ D N 2 x 2 n x 2 * + β 1 n L 1 P 1 Y 1 λ 2 Λ 3 1 + 1 1 + n λ D N 3 z 3 n x 3 * + β 1 n δ 2 n 0 = β 1 n ν 1 z 1 n x 1 * + β 1 n δ 2 z 2 n x 2 * + β 1 n μ 2 x 2 n x 2 * + ( 1 β 1 n ) x 1 n x 1 * + β 1 n τ 3 z 3 n x 3 * + β 1 n δ 2 n 0 .
In a similar way, by using the definition of z 1 n in Algorithm 1, we obtain
z 1 ( n ) x 1 * ( γ 1 n I , λ 1 S 1 ( h 1 ( · ) , v 2 ) [ x 1 n λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 n ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 n ) ) + f 1 ( x 1 n ) g 2 ( x 2 n ) w 1 } ] + ( 1 γ 1 n ) x 1 ( n ) + γ 1 n δ 2 n ) ( γ 1 n I , λ 1 S 1 ( h 1 ( · ) , v 2 ) [ x 1 * λ 1 { P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 ) ( x 1 * ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 * ) ) + f 1 ( x 1 * ) g 2 ( x 2 * ) w 1 } ] + ( 1 γ 1 n ) x 1 * ) γ 1 n [ Λ 2 v 2 n v 2 + L 1 { x 1 n x 1 + λ 1 P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( x 1 n ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 n ) ) P 1 ( C I , λ 1 S 1 ( h 1 ( · ) , v 2 n ) ( x 1 * ) , Y I , λ 2 S 2 ( h 2 ( · ) , v 3 ) ( x 2 * ) ) + λ 1 f 1 ( x 1 n ) g 2 ( x 2 n ) f 1 ( x 1 * ) g 2 ( x 2 * ) } ] + ( 1 γ 1 n ) x 1 n x 1 * + γ 1 n δ 3 n 0 γ 1 n [ Λ 2 1 + 1 n + 1 λ D N 2 x 2 n x 2 + L 1 { x 1 n x 1 * + λ 1 ( 3 P 1 C 1 x 1 n x 1 * + P 1 C 1 Λ 2 1 + 1 1 + n λ D N 2 x 2 n x 2 * + P 1 Y 1 λ 2 ( 2 x 2 n x 2 * + Λ 3 1 + 1 1 + n λ D N 3 x 3 n x 3 * ) + λ 1 f 1 x 1 n x 1 * + g 2 x 2 n x 2 * } ] ] + ( 1 γ 1 n ) x 1 n x 1 * + γ 1 n δ 3 n 0 .
z 1 ( n ) x 1 * = γ 1 n L 1 1 + λ 1 f 1 x 1 n x 1 * + ( 1 γ 1 n ) x 1 n x 1 * + γ 1 n Λ 2 1 + 1 1 + n λ D N 2 L 1 P 1 C 1 + 2 P 1 Y 1 λ 2 + λ 1 g 2 x 2 n x 2 * + γ 1 n Λ 2 1 + 1 1 + n λ D N 2 x 2 n x 2 * + γ 1 n L 1 P 1 Y 1 λ 2 Λ 3 1 + 1 1 + n λ D N 3 x 3 n x 3 * + γ 1 n δ 3 n 0 = γ 1 n ν 1 + ( 1 γ 1 n ) x 1 n x 1 * + γ 1 n δ 2 + γ 1 n μ 2 x 2 n x 2 * + γ 1 n τ 3 x 3 n x 3 * + γ 1 n δ 3 n 0 .
Using (34) and (35) in (33), we have
x 1 n + 1 x 1 * α 1 n ν 1 { β 1 n ν 1 z 1 n x 1 * + β 1 n δ 2 z 2 n x 2 * + β 1 n μ 2 x 2 n x 2 * + ( 1 β 1 n ) x 1 n x 1 * + β 1 n τ 3 z 3 n x 3 * + β 1 n δ 2 n 0 } + α 1 n δ 2 { β 2 n ν 2 z 2 n x 2 * + β 2 n δ 3 z 3 n x 3 * + β 2 n μ 3 x 3 n x 3 * + ( 1 β 2 n ) x 2 n x 2 * + β 2 n τ 4 z 4 n x 4 * + β 2 n δ 3 n 0 } + α 1 n μ 2 x 2 n x 2 * + ( 1 α 1 n ) x 1 n x 1 * + α 1 n δ 1 n 0 + α 1 n τ 3 { β 3 n ν 3 z 3 n x 3 * + β 3 n δ 4 z 4 n x 4 * + β 3 n μ 4 x 4 n x 4 * + ( 1 β 3 n ) x 3 n x 3 * + β 3 n τ 5 z 5 n x 5 * + β 3 n δ 4 n 0 } α 1 n β 1 n ν 1 2 z 1 n x 1 * + α 1 n β 1 n ν 1 δ 2 + α 1 n ν 2 β 2 n δ 2 z 2 n x 2 * + α 1 n β 1 n ν 1 τ 3 + α 1 n β 2 n δ 2 δ 3 + α 1 n β 3 n τ 3 ν 3 z 3 n z 3 * + α 1 n β 2 n δ 2 τ 4 + α 1 n β 3 n τ 3 δ 4 z 4 n x 4 * + α 1 n β 3 n τ 3 τ 5 z 5 x 5 * + ( 1 α 1 n ) + α 1 n ν 1 ( 1 β 1 n ) x 1 n x 1 * + α 1 n β 1 n ν 1 μ 2 + α 1 n ( 1 β 2 n ) δ 2 + α 1 n μ 2 x 2 * x 2 + ( 1 β 3 n ) + α 1 n β 2 n δ 2 μ 3 x 3 x 3 * + α 1 n β 3 n τ 3 τ 4 x 4 n x 4 * + α 1 n ν 1 β 1 n δ 2 n 0 + α 1 n δ 2 β 2 n δ 3 n 0 + α 1 n τ 3 β 3 n δ 4 n 0 + α 1 n δ 1 n 0 α 1 n β 1 n ν 1 2 { γ 1 n ν 1 + ( 1 γ 1 n ) x 1 n x 1 * + γ 1 n δ 2 + γ 1 n μ 2 x 2 n x 2 * + γ 1 n τ 3 x 3 n x 3 * + γ 1 n δ 3 n 0 } + α 1 n β 1 n ν 1 δ 2 + α 1 n ν 2 β 2 n δ 2 { γ 2 n ν 2 + ( 1 γ 2 n ) x 2 n x 2 * + γ 2 n δ 3 + γ 2 n μ 3 x 3 n x 3 * + γ 2 n τ 4 x 4 n x 4 * + γ 2 n δ 4 n 0 } + ( α 1 n β 1 n ν 1 τ 3 + α 1 n β 2 n δ 2 δ 3 + α 1 n β 3 n τ 3 ν 3 ) { γ 3 n ν 3 + ( 1 γ 3 n ) x 3 n x 3 * + γ 3 n δ 4 + γ 3 n μ 4 x 4 n x 4 * + γ 3 n τ 5 x 5 n x 5 * + γ 3 n δ 4 n 0 } + α 1 n β 2 n δ 2 τ 4 + α 1 n β 3 n τ 3 δ 4 { γ 4 n ν 4 + ( 1 γ 4 n ) x 4 n x 4 * + γ 4 n δ 5 + γ 4 n μ 5 x 5 n x 5 * + γ 4 n τ 6 x 6 n x 6 * + γ 4 n δ 5 n 0 } + α 1 n β 3 n τ 3 τ 5 { ( γ 5 n ν 5 + ( 1 γ 5 n ) ) x 5 n x 5 * + γ 5 n δ 6 + γ 5 n μ 6 x 6 n x 6 * + γ 5 n τ 7 x 7 n x 7 * + γ 5 n δ 7 n 0 } + ( 1 α 1 n ) + α 1 n ν 1 ( 1 β 1 n ) x 1 n x 1 * + α 1 n β 1 n ν 1 μ 2 + α 1 n ( 1 β 2 n ) δ 2 + α 1 n μ 2 x 2 n x 2 * + ( 1 β 3 n ) + α 
1 n β 2 n δ 2 μ 3 x 3 n x 3 * + α 1 n β 3 n τ 3 τ 4 x 4 n x 4 + α 1 n ν 1 β 1 n δ 2 n 0 + α 1 n δ 2 β 2 n δ 3 n 0 + α 1 n τ 3 β 3 n δ 4 n 0 + α 1 n δ 1 n 0 .
After simplification and by Proposition 1, we obtain
x 1 n + 1 x 1 * [ α 1 n β 1 n ν 1 2 { γ 1 n ( ν 1 1 ) + 1 } + α 1 n { ν 1 ( 1 β 1 n ) 1 } + 1 ] x 1 n x 1 * + [ α 1 n β 1 n ν 1 ( ν 1 γ 1 n + 1 ) ( δ 2 + μ 2 ) + α 1 n β 1 n ν 1 δ 2 γ 2 n ( ν 2 1 ) + α 1 n ν 2 β 2 n δ 2 ( ν 2 γ 2 n γ 2 n + 1 ) + α 1 n δ 2 ( 1 β 2 n ) ] x 2 n x 2 * + [ α 1 n γ 2 n δ 2 ( β 1 n ν 1 + β 2 n ν 2 ) ( δ 3 + μ 3 ) + α 1 n β 3 n τ 3 ν 3 ( γ 3 n ν 3 + 1 γ 3 n ) + α 1 n β 1 n ν 1 τ 3 ( ν 1 γ 1 n + γ 3 n ν 3 γ 3 n + 1 ) + α 1 n β 2 n δ 2 [ γ 3 n δ 3 ( ν 3 1 ) + δ 3 + μ 3 ] β 3 n + 1 ] x 3 n x 3 * + [ α 1 n β 1 n ν 1 [ γ 2 n δ 2 δ 4 + γ 3 n τ 3 ( δ 4 + μ 4 ) ] + α 1 n β 2 n δ 2 [ γ 2 n ν 2 δ 4 + γ 3 n δ 3 δ 4 + γ 3 n δ 3 μ 4 + τ 4 + γ 4 n τ 4 ( ν 4 1 ) ] + α 1 n β 3 n τ 3 [ γ 3 n ν 3 ( δ 4 + μ 4 ) + γ 4 n δ 4 ( ν 4 1 ) + δ 4 + τ 4 ] ] x 4 n x 4 * + [ α 1 n β 1 n ν 1 τ 3 τ 5 γ 3 n + α 1 n β 2 n δ 2 [ δ 3 γ 3 n τ 5 + γ 4 n τ 4 ( δ 5 + μ 5 ) ] + α 1 n β 3 n [ τ 3 τ 5 ( γ 3 n ν 3 + γ 5 n ν 5 γ 5 n + 1 ) + γ 4 n δ 4 ( τ 4 δ 5 + τ 3 μ 5 ) ] ] x 5 n x 5 * + [ α 1 n β 2 n γ 4 n δ 2 τ 4 τ 6 + α 1 n β 3 n τ 3 [ γ 4 n δ 4 τ 6 + γ 5 n τ 5 ( δ 6 + μ 6 ) ] ] x 6 n x 6 * + α 1 n β 3 n γ 5 n τ 3 τ 5 τ 7 x 7 n x 7 * + α 1 n β 1 n ν 1 2 γ 1 n δ 3 n 0 + α 1 n β 1 n ν 1 δ 2 + α 1 n ν 2 β 2 n δ 2 γ 2 n δ 4 n 0 + α 1 n β 1 n ν 1 τ 3 + α 1 n β 2 n δ 2 δ 3 + α 1 n β 3 n τ 3 ν 3 γ 3 n δ 4 n 0 + α 1 n β 2 n δ 2 τ 4 + α 1 n β 3 n τ 3 δ 4 γ 4 n δ 5 n 0 + α 1 n β 3 n τ 3 τ 5 γ 5 n δ 7 n 0 + α 1 n ν 1 β 1 n δ 2 n 0 + α 1 n δ 2 β 2 n δ 3 n 0 + α 1 n τ 3 β 3 n δ 4 n 0 + α 1 n δ 1 n 0 .
By applying the same logic as above, we have
x 2 n + 1 x 2 * [ α 2 n β 2 n ν 2 2 { γ 2 n ( ν 2 1 ) + 1 } + α 2 n { ν 2 ( 1 β 2 n ) 1 } + 1 ] x 2 n x 2 * + [ α 2 n β 2 n ν 2 ( ν 2 γ 2 n + 1 ) ( δ 3 + μ 3 ) + α 2 n β 2 n ν 2 δ 3 γ 3 n ( ν 3 1 ) + α 2 n ν 3 β 3 n δ 3 ( ν 3 γ 3 n γ 3 n + 1 ) + α 2 n δ 3 ( 1 β 3 n ) ] x 3 n x 3 * + [ α 2 n γ 3 n δ 3 ( β 2 n ν 2 + β 3 n ν 3 ) ( δ 4 + μ 4 ) + α 2 n β 4 n τ 4 ν 4 ( γ 4 n ν 4 + 1 γ 4 n ) + α 2 n β 2 n ν 2 τ 4 ( ν 2 γ 2 n + γ 4 n ν 4 γ 4 n + 1 ) + α 2 n β 3 n δ 3 [ γ 4 n δ 4 ( ν 4 1 ) + δ 4 + μ 4 ] β 4 n + 1 ] x 4 n x 4 * + [ α 2 n β 2 n ν 2 [ γ 3 n δ 3 δ 5 + γ 4 n τ 4 ( δ 5 + μ 5 ) ] + α 2 n β 3 n δ 3 [ γ 3 n ν 3 δ 5 + γ 4 n δ 4 δ 5 + γ 4 n δ 4 μ 5 + τ 5 + γ 5 n τ 5 ( ν 5 1 ) ] + α 2 n β 4 n τ 4 [ γ 4 n ν 4 ( δ 5 + μ 5 ) + γ 5 n δ 5 ( ν 5 1 ) + δ 5 + τ 5 ] ] x 5 n x 5 * + [ α 2 n β 2 n ν 2 τ 4 τ 6 γ 4 n + α 2 n β 3 n δ 3 [ δ 4 γ 4 n τ 6 + γ 5 n τ 5 ( δ 6 + μ 6 ) ] + α 2 n β 4 n [ τ 4 τ 6 ( γ 4 n ν 4 + γ 6 n ν 6 γ 6 n + 1 ) + γ 5 n δ 5 ( τ 5 δ 6 + τ 4 μ 6 ) ] ] x 6 n x 6 * + α 2 n β 3 n γ 5 n δ 3 τ 5 τ 7 + α 2 n β 4 n τ 4 [ γ 5 n δ 5 τ 7 + γ 6 n τ 6 ( δ 7 + μ 7 ) ] x 7 n x 7 * + α 2 n β 4 n γ 6 n τ 4 τ 6 τ 8 x 8 n x 8 * + α 2 n β 2 n ν 2 2 γ 2 n δ 4 n 0 + α 2 n β 2 n ν 2 δ 3 + α 2 n ν 3 β 3 n δ 3 γ 3 n δ 5 n 0 + α 2 n β 2 n ν 2 τ 4 + α 2 n β 3 n δ 3 δ 4 + α 2 n β 4 n τ 4 ν 4 γ 4 n δ 5 n 0 + α 2 n β 3 n δ 3 τ 5 + α 2 n β 4 n τ 4 δ 5 γ 5 n δ 6 n 0 + α 2 n β 4 n τ 4 τ 6 γ 6 n δ 8 n 0 + α 2 n ν 2 β 2 n δ 3 n 0 + α 2 n δ 3 β 3 n δ 4 n 0 + α 2 n τ 4 β 4 n δ 5 n 0 + α 2 n δ 2 n 0 .
Continuing in a similar way, we have
x k n + 1 x k * [ α k n β k n ν k 2 { γ k n ( ν k 1 ) + 1 } + α k n { ν k ( 1 β k n ) 1 } + 1 ] x k n x k * + [ α k n β k n ν k ( ν k γ k n + 1 ) ( δ 1 + μ 1 ) + α k n β k n ν k δ 1 γ 1 n ( ν 1 1 ) + α k n ν 1 β 1 n δ 1 ( ν 1 γ 1 n γ 1 n + 1 ) + α k n δ 1 ( 1 β 1 n ) ] x 1 n x 1 * + [ α k n γ 1 n δ 1 ( β k n ν k + β 1 n ν 1 ) ( δ 2 + μ 2 ) + α k n β 2 n τ 2 ν 2 ( γ 2 n ν 2 + 1 γ 2 n ) + α k n β k n ν k τ 2 ( ν k γ k n + γ 2 n ν 2 γ 2 n + 1 ) + α k n β 1 n δ 1 [ γ 2 n δ 2 ( ν 2 1 ) + δ 2 + μ 2 ] β 2 n + 1 ] x 2 n x 2 * + [ α k n β k n ν k [ γ 1 n δ 1 δ 3 + γ 2 n τ 2 ( δ 3 + μ 3 ) ] + α k n β 1 n δ 1 [ γ 1 n ν 1 δ 3 + γ 2 n δ 2 δ 3 + γ 2 n δ 2 μ 3 + τ 3 + γ 3 n τ 3 ( ν 3 1 ) ] + α k n β 2 n τ 2 [ γ 2 n ν 2 ( δ 3 + μ 3 ) + γ 3 n δ 3 ( ν 3 1 ) + δ 3 + τ 3 ] ] x 3 n x 3 * + [ α k n β k n ν k τ 2 τ 4 γ 2 n + α k n β 1 n δ 1 [ δ 2 γ 2 n τ 4 + γ 3 n τ 3 ( δ 4 + μ 4 ) ] + α k n β 2 n [ τ 2 τ 4 ( γ 2 n ν 2 + γ 4 n ν 4 γ 4 n + 1 ) + γ 3 n δ 3 ( τ 3 δ 4 + τ 2 μ 4 ) ] ] x 4 n x 4 * + α k n β 1 n γ 3 n δ 1 τ 3 τ 5 + α k n β 2 n τ 2 [ γ 3 n δ 3 τ 5 + γ 4 n τ 4 ( δ 5 + μ 5 ) ] x 5 n x 5 * + α k n β 2 n γ 4 n τ 2 τ 4 τ 6 x 6 n x 6 * + α k n β k n ν k 2 γ k n δ 2 n 0 + α k n β k n ν k δ 1 + α k n ν 1 β 1 n δ 1 γ 1 n δ 3 n 0 + α k n β k n ν k τ 2 + α k n β 1 n δ 1 δ 2 + α k n β 2 n τ 2 ν 2 γ 2 n δ 3 n 0 + α k n β 1 n δ 1 τ 3 + α k n β 2 n τ 2 δ 3 γ 3 n δ 4 n 0 + α k n β 2 n τ 2 τ 4 γ 4 n δ 6 n 0 + α k n ν k β k n δ 1 n 0 + α k n δ 1 β 1 n δ 2 n 0 + α k n τ 2 β 2 n δ 3 n 0 + α k n δ k n 0 .
From (37)–(39), we have
x 1 ( n + 1 ) x 1 + x 2 ( n + 1 ) x 2 + + x κ ( n + 1 ) x κ i = 1 n Ω i ( n ) 1 + 2 Ω i ( n ) + 3 Ω i ( n ) + 4 Ω i ( n ) + 5 Ω i ( n ) + 6 Ω i ( n ) + 7 Ω i ( n ) x i ( n ) x i + i = 1 k α i n β i n ν i 2 γ i n δ i + 2 n ( δ i + 2 n ) + i = 1 k α i n β i n ν i δ i + 1 + α i n ν i + 1 β i + 1 n δ i + 1 γ i + 1 n δ i + 3 n ( δ i + 3 n ) + i = 1 k α i n β i n ν i τ i + 2 + α i n β i + 1 n δ i + 1 δ i + 2 + α i n β i + 2 n τ i + 2 ν i + 2 γ i + 2 n δ i + 3 n ( δ i + 3 n ) + i = 1 k α i n β i + 1 n δ i + 1 τ i + 3 + α i n β i + 2 n τ i + 2 δ i + 3 γ i + 3 n δ i + 4 n ( δ i + 4 n ) + i = 1 k α i n β i + 2 n τ i + 2 τ i + 4 γ i + 4 n δ i + 6 n ( δ i + 6 n ) + i = 1 k α i n ν i β i n δ i + 1 n ( δ i + 1 n ) + i = 1 k α i n δ i + 1 β i + 1 n δ i + 2 n ( δ i + 2 n ) + i = 1 k α i n τ i + 2 β i + 2 n δ i + 3 n ( δ i + 3 n ) + i = 1 k α i n δ i n ( δ i n ) .
where
Ω i ( n ) 1 = α i n β i n ν i 2 { γ i n ( ν i 1 ) + 1 } + α i n { ν i ( 1 β i n ) 1 } + 1 , Ω i ( n ) 2 = [ α i n β i n ν i ( ν i γ i n + 1 ) ( δ i + 1 + μ i + 1 ) + α i n β i n ν i δ i + 1 γ i + 1 n ( ν i + 1 1 ) + α i n ν i + 1 β i + 1 n δ i + 1 ( ν i + 1 γ i + 1 n γ i + 1 n + 1 ) + α i n δ i + 1 ( 1 β i + 1 n ) ] , Ω i ( n ) 3 = [ α i n γ i + 1 n δ i + 1 ( β i n ν i + β i + 1 n ν i + 1 ) ( δ i + 2 + μ i + 2 ) + α i n β i + 2 n τ i + 2 ν i + 2 ( γ i + 2 n ν i + 2 + 1 γ i + 2 n ) + α i n β i n ν i τ i + 2 ( ν i γ i n + γ i + 2 n ν i + 2 γ i + 2 n + 1 ) + α i n β i + 1 n δ i + 1 [ γ i + 1 n δ i + 2 ( ν i + 2 1 ) + δ i + 2 + μ i + 2 ] β i + 1 n + 1 ] ,
Ω i ( n ) 4 = [ α i n β i n ν i [ γ i + 1 n δ i + 1 δ i + 3 + γ i + 2 n τ i + 2 ( δ i + 3 + μ i + 3 ) ] + α i n β i + 1 n δ i + 1 [ γ i + 1 n ν i + 1 δ i + 3 + γ i + 2 n δ i + 2 δ i + 3 + γ i + 2 n δ i + 2 μ i + 3 + τ i + 3 + γ i + 3 n τ i + 3 ( ν i + 3 1 ) ] + α i n β i + 2 n τ i + 2 [ γ i + 2 n ν i + 2 ( δ i + 3 + μ i + 3 ) + γ i + 3 n δ i + 3 ( ν i + 3 1 ) + δ i + 3 + τ i + 3 ] ] , Ω i ( n ) 5 = [ α i n β i n ν i τ i + 2 τ i + 4 γ i + 2 n + α i n β i + 1 n δ i + 1 [ δ i + 2 γ i + 2 n τ i + 4 + γ i + 3 n τ i + 3 ( δ i + 4 + μ i + 4 ) ] + α i n β i + 2 n [ τ i + 2 τ i + 4 ( γ i + 2 n ν i + 2 + γ i + 4 n ν i + 4 γ i + 4 n + 1 ) + γ i + 3 n δ i + 3 ( τ i + 3 δ i + 4 + τ i + 2 μ i + 4 ) ] ] , Ω i ( n ) 6 = α i n β i + 1 n γ i + 3 n δ i + 1 τ i + 3 τ i + 5 + α i n β i + 2 n τ i + 2 [ γ i + 3 n δ i + 3 τ i + 5 + γ i + 4 n τ i + 4 ( δ i + 5 + μ i + 5 ) ] , Ω i ( n ) 7 = α i n β i + 2 n γ i + 4 n τ i + 2 τ i + 4 τ i + 6 .
Let Φ ( n ) = max { Ω i ( n ) 1 , Ω i ( n ) 2 , Ω i ( n ) 3 , Ω i ( n ) 4 , Ω i ( n ) 5 , Ω i ( n ) 6 , Ω i ( n ) 7 } . From (25) and by the algebra of convergent sequences Ω i ( n ) k , 1 ≤ k ≤ 7 , we may say that Φ ( n ) → Φ as n → ∞ , where Φ = max { Ω i 1 , Ω i 2 , Ω i 3 , Ω i 4 , Ω i 5 , Ω i 6 , Ω i 7 } . Condition (25) implies that Φ < 1 , so Φ ( n ) < 1 for sufficiently large n.
Let v n + 1 = x 1 ( n + 1 ) x 1 + x 2 ( n + 1 ) x 2 + + x k ( n + 1 ) x k . Then, (40) can be written as
v n + 1 Φ ( n ) v n , n = 1 , 2 , .
Clearly, lim sup n → ∞ Φ ( n ) < 1 , since Φ ( n ) < 1 for sufficiently large n. With the help of Lemma 2, we may claim that v n → 0 as n → ∞ . Hence, { ( x i ( n ) , v i ( n ) ) } converges strongly to the solution ( x i , v i ) of system (5). □
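The closing step of the proof is the standard contraction estimate v n + 1 ≤ Φ ( n ) v n with lim sup Φ ( n ) < 1. The following Python sketch (an illustration of Lemma 2's mechanism, not the paper's computation) simulates such a recursion with a made-up coefficient sequence Φ ( n ) tending to 0.9 and shows that v n → 0 geometrically:

```python
# Illustration of Lemma 2's mechanism: if v_{n+1} <= Phi(n) * v_n and
# limsup Phi(n) < 1, then v_n -> 0.  phi(n) below is a hypothetical
# sequence with limit 0.9 < 1, standing in for max of the Omega terms.

def phi(n):
    return 0.9 + 0.5 / (n + 1)  # tends to 0.9; may exceed 1 for small n

v = 1.0          # v_0: initial sum of errors ||x_i^0 - x_i*||
history = [v]
for n in range(200):
    v = phi(n) * v
    history.append(v)

assert history[-1] < 1e-6          # geometric decay dominates eventually
assert history[-1] < history[50]   # errors keep shrinking for large n
print(f"v_200 = {history[-1]:.3e}")
```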
In the next section, a numerical example with a convergence graph is provided.

4. Numerical Example

Let k = 2 , X = R , x = ( x 1 , x 2 ) ∈ R × R , and ( v 1 , v 2 ) ∈ N 1 ( x 1 ) × N 2 ( x 2 ) such that
w 1 P 1 ( C I , λ 1 S 1 ( h 1 ( x 1 ) , v 2 ) ( x 1 ) , Y I , λ 2 S 2 ( h 2 ( x 2 ) , v 1 ) ( x 2 ) ) + f 1 ( x 1 ) g 2 ( x 2 ) + S 1 ( h 1 ( x 1 ) , v 2 ) , w 2 P 2 ( C I , λ 2 S 2 ( h 2 ( x 2 ) , v 1 ) ( x 2 ) , Y I , λ 3 S 2 ( h 2 ( x 2 ) , v 1 ) ( x 1 ) ) + f 2 ( x 2 ) g 3 ( x 1 ) + S 2 ( h 2 ( x 2 ) , v 1 ) ,
for some w ^ = ( w 1 , w 2 ) ∈ R × R .
Let the mappings f i , g i , h i : R → R , S i : R × R → 2 R , and N i : R → C B ( R ) be defined by
f i ( x i ) = 2 x i / ( 7 ( i + 1 ) ) , g i ( x i ) = 3 x i / ( 49 i ) , h i ( x i ) = 2 x i , S i ( h i ( x i ) , v i + 1 ) = ( h i ( x i ) + 3 v i + 1 ) / ( 2 ( i + 1 ) ) ,
and N i ( x i ) = 5 x i / ( 16 i ) .
Suppose that the mapping P i : R × R R is defined as
P i ( x i , x i + 1 ) = ( x i + x i + 1 ) / ( 7 + i ) .
Now, we define the associated resolvent operator,
I , λ i S i ( h i , v i + 1 ) ( x i ) = ( 2 i ( i + 1 ) x i − 3 v i + 1 ) / ( 2 ( i 2 + i + 1 ) ) ,
the Yosida approximation operator,
Y I , λ i S i ( h i , v i + 1 ) ( x i ) = ( 2 x i + 3 v i + 1 ) i / ( 2 ( i 2 + i + 1 ) ) ,
and the Cayley operator,
C I , λ i S i ( h i , v i + 1 ) ( x i ) = ( ( i 2 + i − 1 ) x i − 3 v i + 1 ) / ( i 2 + i + 1 ) .
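As a consistency check on these closed forms (our own verification sketch, with λ i = 1 / i inferred from the displayed formulas; the text does not state λ i explicitly), one can confirm numerically that the resolvent R solves u + λ i S i ( h i ( u ) , v i + 1 ) = x i , that the Yosida operator equals ( 1 / λ i ) ( I − R ) , and that the Cayley operator equals 2 R − I :

```python
# Numerical consistency check of the example's operators, assuming
# lambda_i = 1/i (inferred, not stated).  R is the displayed resolvent,
# S_i(h_i(u), v) = (2u + 3v) / (2(i+1)) since h_i(u) = 2u.

def S(i, u, v):
    return (2.0 * u + 3.0 * v) / (2.0 * (i + 1))

def R(i, x, v):   # displayed resolvent operator
    return (2.0 * i * (i + 1) * x - 3.0 * v) / (2.0 * (i * i + i + 1))

def Y(i, x, v):   # displayed Yosida approximation operator
    return (2.0 * x + 3.0 * v) * i / (2.0 * (i * i + i + 1))

def C(i, x, v):   # displayed Cayley operator
    return ((i * i + i - 1) * x - 3.0 * v) / (i * i + i + 1)

for i in (1, 2, 3):
    lam = 1.0 / i
    for x, v in [(0.7, -1.3), (-2.0, 0.4), (5.0, 5.0)]:
        u = R(i, x, v)
        assert abs(u + lam * S(i, u, v) - x) < 1e-12    # resolvent equation
        assert abs((x - u) / lam - Y(i, x, v)) < 1e-12  # Y = (I - R)/lambda
        assert abs(2 * u - x - C(i, x, v)) < 1e-12      # C = 2R - I
print("resolvent, Yosida, and Cayley forms are mutually consistent")
```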
Let us choose the controlling sequences { α i n } = { i / ( 6 n ) } , { β i n } = { ( 7 n − 3 ) i / ( 12 n ) } , { γ i n } = { ( 3 n + 1 ) i / ( 12 n ) } , and { δ i n } = { i / ( 6 n + 3 ) } . Clearly, { α i n } , { β i n } , { γ i n } , and { δ i n } satisfy the conditions of Algorithm 1.
The sequences for the approximate solution of (41) are obtained by using Algorithm 1 in the following way:
x i n + 1 = i 6 n 160 i 2 ( i + 1 ) 160 i 2 ( i + 1 ) + 171 [ y i n 1 2 ( 7 + i ) ( i 2 + i + 1 ) { 2 ( i 2 + i 1 ) y i n + 2 i y i + 1 n ( 3 i 6 ) v i + 1 + w i n } ] + 6 n i 6 n x i n + i 2 6 n ( 6 n + 3 ) ,
y i n = 7 n 3 12 n 160 i 3 ( i + 1 ) 160 i 2 ( i + 1 ) + 171 [ z i n 1 2 ( 7 + i ) ( i 2 + i + 1 ) { 2 ( i 2 + i 1 ) z i n + 2 i z i + 1 n ( 3 i 6 ) v i + 1 + w i n } ] + 1 ( 7 n 3 ) i 12 n x i n + ( 7 n 3 ) ( i 2 + i ) 12 n ( 6 n + 3 ) ,
and
z i n = 3 n + 1 12 n 160 i 3 ( i + 1 ) 160 i 2 ( i + 1 ) + 171 [ x i n 1 2 ( 7 + i ) ( i 2 + i + 1 ) { 2 ( i 2 + i 1 ) x i n + 2 i x i + 1 n ( 3 i 6 ) v i + 1 + w i n } ] + 1 ( 3 n + 1 ) i 12 n x i n + ( 3 n + 1 ) ( i 2 + 2 i ) 12 n ( 6 n + 3 ) .
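For readers who wish to reproduce Figure 1, the three-step recursion above can be coded directly. The Python sketch below is our own re-implementation (the paper used MATLAB 2018a) under stated assumptions that the text leaves open: indices are read cyclically (index 3 wraps to 1), v i n = N i ( x i n ) = 5 x i n / ( 16 i ) , w = ( 1 , 1 ) , the stripped operation in ( 3 i 6 ) v i + 1 is read as ( 3 i − 6 ) v i + 1 , and the leading coefficient 160 i 2 ( i + 1 ) / ( 160 i 2 ( i + 1 ) + 171 ) is used in all three steps.

```python
# A reproducibility sketch of the three-step scheme for k = 2, under the
# assumptions stated above (ours, not fixed by the paper's text).

def T(i, u, u_next, v_next, w):
    """Inner bracket shared by the x-, y-, and z-updates (our reading)."""
    coef = 160 * i**2 * (i + 1) / (160 * i**2 * (i + 1) + 171)
    inner = (2 * (i * i + i - 1) * u + 2 * i * u_next
             - (3 * i - 6) * v_next + w)
    return coef * (u - inner / (2 * (7 + i) * (i * i + i + 1)))

def step(x, n, w=(1.0, 1.0), k=2):
    nxt = lambda i: i % k + 1                      # cyclic successor index
    v = {i: 5 * x[i] / (16 * i) for i in (1, 2)}   # v_i = N_i(x_i)
    z, y, xn = {}, {}, {}
    for i in (1, 2):                               # z-step
        j = nxt(i)
        z[i] = ((3 * n + 1) / (12 * n) * T(i, x[i], x[j], v[j], w[i - 1])
                + (1 - (3 * n + 1) * i / (12 * n)) * x[i]
                + (3 * n + 1) * (i * i + 2 * i) / (12 * n * (6 * n + 3)))
    for i in (1, 2):                               # y-step
        j = nxt(i)
        y[i] = ((7 * n - 3) / (12 * n) * T(i, z[i], z[j], v[j], w[i - 1])
                + (1 - (7 * n - 3) * i / (12 * n)) * x[i]
                + (7 * n - 3) * (i * i + i) / (12 * n * (6 * n + 3)))
    for i in (1, 2):                               # x-step
        j = nxt(i)
        xn[i] = (i / (6 * n) * T(i, y[i], y[j], v[j], w[i - 1])
                 + (6 * n - i) / (6 * n) * x[i]
                 + i * i / (6 * n * (6 * n + 3)))
    return xn

x = {1: -1.0, 2: 2.0}                # initial point of Figure 1
for n in range(1, 301):
    x_new = step(x, n)
    diff = max(abs(x_new[i] - x[i]) for i in (1, 2))
    x = x_new

assert all(abs(x[i]) < 10 for i in (1, 2))  # iterates stay bounded
assert diff < 0.05                          # 1/n-damped steps shrink
print(f"x_300 = ({x[1]:.5f}, {x[2]:.5f}), last step size = {diff:.2e}")
```

The assertions only check boundedness and step-size decay; the exact limit depends on how the XOR (⊕) terms are resolved, which this sketch does not attempt to model.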
Also, it is verified that all conditions of Theorem 2 are satisfied. Clearly, for w ^ = ( 1 , 1 ) , the sequence x n = ( x 1 n , x 2 n ) converges strongly to the solution x * = ( − 0.39 , 0.60 ) of system (41). In this regard, the convergence graph (Figure 1) and the computational table (Table 1) are shown below. Also, the convergence of each sequence involved in Algorithm 1 is shown individually in Figure 2.
Remark 4.
We use the same operators as in the numerical illustration above and compare our three-step iterative Algorithm 1 with the two-step (Ishikawa-style) and one-step (Mann-style) iterative algorithms. By setting γ i n = 0 , we derive the sequences { x i n } and { y i n } as detailed below:
x i n + 1 = i 6 n 160 i 2 ( i + 1 ) 160 i 2 ( i + 1 ) + 171 [ y i n 1 2 ( 7 + i ) ( i 2 + i + 1 ) { 2 ( i 2 + i 1 ) y i n + 2 i y i + 1 n ( 3 i 6 ) v i + 1 + w i n } ] + 6 n i 6 n x i n + i 2 6 n ( 6 n + 3 ) ,
y i n = 7 n 3 12 n 160 i 3 ( i + 1 ) 160 i 2 ( i + 1 ) + 171 [ x i n 1 2 ( 7 + i ) ( i 2 + i + 1 ) { 2 ( i 2 + i 1 ) x i n + 2 i x i + 1 n ( 3 i 6 ) v i + 1 + w i n } ] + 1 ( 7 n 3 ) i 12 n x i n + ( 7 n 3 ) ( i 2 + i ) 12 n ( 6 n + 3 ) .
Moreover, by taking β i n = γ i n = 0 for all n ≥ 0 , we may approximate the solution by using the Mann-style iterative scheme as follows:
x i n + 1 = i 6 n 160 i 2 ( i + 1 ) 160 i 2 ( i + 1 ) + 171 [ x i n 1 2 ( 7 + i ) ( i 2 + i + 1 ) { 2 ( i 2 + i 1 ) x i n + 2 i x i + 1 n ( 3 i 6 ) v i + 1 + w i n } ] + 6 n i 6 n x i n + i 2 6 n ( 6 n + 3 ) .
The iterative methods stop once the stopping criterion ∥ x i ( n + 1 ) − x i n ∥ ≤ 10 − 6 is met. In Table 2 and Figure 3, we compare the performance of the proposed three-step Algorithm 1 with the Ishikawa-type scheme (48) and (49) and the Mann-type scheme (50), all initiated from the starting point ( − 0.5 , 1 ) .
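A stopping rule of this kind is straightforward to implement; the toy loop below (Python, with a simple made-up contraction x ↦ x / 2 standing in for the iterative schemes) halts exactly when successive iterates differ by at most 10 − 6 :

```python
# Generic stopping criterion ||x^{n+1} - x^n|| <= 1e-6, demonstrated on a
# toy one-dimensional contraction x -> x/2 (a stand-in for the schemes).
TOL = 1e-6

def iterate_until_tol(x0, step, tol=TOL, max_iter=10_000):
    x, iters = x0, 0
    while iters < max_iter:
        x_new = step(x)
        iters += 1
        if abs(x_new - x) <= tol:   # the stopping criterion
            return x_new, iters
        x = x_new
    return x, iters

x_final, iters = iterate_until_tol(1.0, lambda t: t / 2)
# Here |x^{n+1} - x^n| = 0.5^n, so the rule triggers at n = 20,
# since 0.5^20 ~ 9.5e-7 <= 1e-6 while 0.5^19 ~ 1.9e-6 > 1e-6.
assert iters == 20
print(f"stopped after {iters} iterations at x = {x_final:.2e}")
```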
The numerical results detailed in Table 2 and the graphs in Figure 3 indicate that the proposed three-step Algorithm 1 outperforms the other methods, typically requiring about 10 to 12 iterations to converge.

5. Conclusions

In this manuscript, the framework of extended ordered XOR-inclusion problems incorporating Cayley and Yosida operators was presented and examined in the setting of real ordered Banach spaces; this framework is broader in scope than the problems addressed in [11,19,28]. We investigated the existence of solutions to SEOXORIP by utilizing proximal-point mappings under suitable conditions. A three-step iterative methodology was proposed for deriving approximate solutions to SEOXORIP, and the convergence of the proposed scheme was analyzed. Finally, we provided a numerical example, accompanied by a convergence graph generated using MATLAB 2018a, to substantiate the convergence of the sequence produced by the proposed methodology.

Author Contributions

Conceptualization, D.F., I.A., and F.A.K.; methodology, I.A., N.H.E.E., and E.A.; formal analysis, M.S.A.; investigation, N.H.E.E.; resources, D.F., I.A., and F.A.K.; writing—original draft, I.A., N.H.E.E., and F.A.K.; writing—review and editing, D.F., E.A., and M.S.A.; supervision, F.A.K.; funding acquisition, D.F., E.A., and M.S.A. All authors have read and agreed to the published version of the manuscript.

Funding

The first author acknowledges Princess Nourah bint Abdulrahman University Researchers supporting Project Number (PNURSP2025R174), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Acknowledgments

The authors are grateful to the anonymous reviewers for their valuable remarks which improved the results and presentation of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Adamu, A.; Abass, H.H.; Ibrahim, A.I.; Kilicman, A. An accelerated Halpern-type algorithm for solving variational inclusion problems with applications. Bangmod J. Math. Comput. Sci. 2022, 8, 37–55. [Google Scholar] [CrossRef]
  2. Adamu, A.; Deepho, J.; Ibrahim, A.H.; Abubakar, A.B. Approximation of zeros of sum of monotone mappings with applications to variational inequality problems and image processing. Nonlinear Funct. Anal. Appl. 2021, 262, 411–432. [Google Scholar]
  3. Taiwo, A.; Reich, S.; Agarwal, R.P. Tseng-type algorithms for solving variational inequalities over the solution sets of split variational inclusion problems with an application to a bilevel optimization problem. J. Appl. Numer. Optim. 2024, 6, 41–57. [Google Scholar]
  4. Lin, L.-J.; Chen, Y.-D.; Chuang, C.-S. Solutions for a variational inclusion problem with applications to multiple sets split feasibility problems. Fixed Point Theory Appl. 2013, 2013, 333. [Google Scholar] [CrossRef]
  5. Fang, Y.-P.; Huang, N.-J. H-monotone operator and resolvent operator technique for variational inclusions. Appl. Math. Comput. 2003, 145, 795–803. [Google Scholar] [CrossRef]
  6. Fang, Y.-P.; Huang, N.-J. H-accretive operators and resolvent operator technique for solving variational inclusions in Banach spaces. Appl. Math. Lett. 2004, 17, 647–653. [Google Scholar] [CrossRef]
  7. Xia, F.-Q.; Huang, N.-J. Variational inclusions with a general H-monotone operator in Banach spaces. Comput. Math. Appl. 2007, 54, 24–30. [Google Scholar] [CrossRef]
  8. Ding, X.P.; Xia, F.Q. A new class of completely generalized quasivariational inclusions in Banach spaces. J. Comput. Appl. Math. 2002, 147, 369–383. [Google Scholar] [CrossRef]
  9. Amann, H. On the number of solutions of nonlinear equations in ordered Banach spaces. J. Funct. Anal. 1972, 11, 346–384. [Google Scholar] [CrossRef]
  10. Li, H.G. Approximation solution for general nonlinear ordered variational inequalities and ordered equations in ordered Banach space. Nonlinear Anal. Forum 2008, 13, 205–214. [Google Scholar]
  11. Ali, I.; Ahmad, R.; Wen, C.F. Cayley Inclusion problem involving XOR-operation. Mathematics 2019, 7, 302. [Google Scholar] [CrossRef]
  12. Li, H.G.; Li, L.P.; Jin, M.M. A class of nonlinear mixed ordered inclusion problems for ordered (αA,λ)-ANODM set-valued mappings with strong compression mapping. Fixed Point Theory Appl. 2014, 2014, 79. [Google Scholar] [CrossRef]
  13. Li, H.G. A nonlinear inclusion problem involving (α,λ)-NODM set-valued mappings in ordered Hilbert space. Appl. Math. Lett. 2012, 25, 1384–1388. [Google Scholar] [CrossRef]
  14. Glowinski, R.; Le Tallec, P. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics; SIAM: Philadelphia, PA, USA, 1989. [Google Scholar]
  15. Noor, M.A. New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000, 251, 217–229. [Google Scholar] [CrossRef]
  16. Noor, M.A. A predictor-corrector algorithm for general variational inequalities. Appl. Math. Lett. 2001, 14, 53–58. [Google Scholar] [CrossRef]
  17. Noor, M.A. Three-step iterative algorithms for multivalued quasi-variational inclusions. J. Math. Anal. Appl. 2001, 255, 589–604. [Google Scholar] [CrossRef]
  18. Ahmad, I.; Rahaman, M.; Ahmad, R.; Ali, I. Convergence analysis and stability of perturbed three-step iterative algorithm for generalized mixed ordered quasi-variational inclusion involving XOR operator. Optimization 2020, 69, 821–845. [Google Scholar] [CrossRef]
  19. Ali, I.; Wang, Y.; Ahmad, R. Convergence and Stability of a Three-Step Iterative Algorithm for the Extended Cayley–Yosida Inclusion Problem in 2-Uniformly Smooth Banach Spaces: Convergence and Stability Analysis. Mathematics 2024, 12, 1977. [Google Scholar] [CrossRef]
  20. Schaefer, H.H. Banach Lattices and Positive Operators; Springer: Berlin/Heidelberg, Germany, 1974. [Google Scholar]
  21. Aubin, J.P.; Cellina, A. Differential Inclusions; Springer: Berlin, Germany, 1984. [Google Scholar]
  22. Kazmi, K.R.; Khan, F.A.; Shahzad, M. A system of generalized variational inclusions involving generalized H(·,·)-accretive mapping in real q-uniformly smooth Banach spaces. Appl. Math. Comput. 2011, 217, 9679–9688. [Google Scholar] [CrossRef]
  23. Salahuddin. System of Generalized Mixed Nonlinear Ordered Variational Inclusions. Numer. Algebra Control. Optim. 2019, 9, 445–460. [Google Scholar] [CrossRef]
  24. Du, Y.H. Fixed points of increasing operators in ordered Banach spaces and applications. Appl. Anal. 1990, 38, 1–20. [Google Scholar] [CrossRef]
  25. Ahmad, I.; Ahmad, R.; Iqbal, J. A resolvent approach for solving a set-valued variational inclusion problem using weak-RRD set-valued mapping. Korean J. Math. 2016, 16, 199–213. [Google Scholar] [CrossRef]
  26. Osilike, M.O. Stability results for the Ishikawa fixed point iteration procedure. Indian J. Pure Appl. Math. 1995, 26, 937–945. [Google Scholar]
  27. Nadler, S.B., Jr. Multi-valued contraction mappings. Pac. J. Math. 1969, 30, 475–488. [Google Scholar] [CrossRef]
  28. Arifuzzaman; Irfan, S.S.; Ahmad, I. Convergence Analysis for Cayley Variational Inclusion Problem Involving XOR and XNOR Operations. Axioms 2025, 14, 149. [Google Scholar] [CrossRef]
Figure 1. Convergence of x n = ( x 1 n , x 2 n ) with initial value ( − 1 , 2 ) .
Figure 2. Convergence of x n = ( x 1 n , x 2 n ) , y n = ( y 1 n , y 2 n ) , and z n = ( z 1 n , z 2 n ) with initial values ( − 1 , 2 ) , ( 3 , − 2 ) , and ( 4 , − 1 ) , respectively.
Figure 3. Convergence of x n = ( x 1 n , x 2 n ) with initial value ( − 0.5 , 1 ) .
Table 1. The computational table of { x n } , { y n } , and { z n } starting with ( x 1 0 , x 2 0 ) = ( − 1 , 2 ) , ( y 1 0 , y 2 0 ) = ( 3 , − 2 ) , and ( z 1 0 , z 2 0 ) = ( 4 , − 1 ) .
No. of Iterations | x n = ( x 1 n , x 2 n ) | y n = ( y 1 n , y 2 n ) | z n = ( z 1 n , z 2 n )
n = 1 | (−1, 2) | (3, −2) | (4, −1)
n = 2 | (−0.49930, 0.81207) | (0.23862, 0.24273) | (−0.72659, 2.10466)
n = 3 | (−0.45165, 0.71122) | (−0.44464, 1.89461) | (−0.43555, 0.89865)
n = 4 | (−0.46018, 0.80844) | (−0.33075, 0.90799) | (−0.43376, 0.89865)
n = 5 | (−0.45581, 0.80425) | (−0.33841, 0.75610) | (−0.46637, 0.82427)
n = 10 | (−0.44173, 0.76545) | (−0.36189, 0.69389) | (−0.4845, 0.74925)
n = 15 | (−0.43366, 0.73856) | (−0.36339, 0.63498) | (−0.48451, 0.71074)
n = 20 | (−0.42794, 0.71821) | (−0.36241, 0.60042) | (−0.48211, 0.68513)
n = 25 | (−0.42341, 0.70210) | (−0.36087, 0.57680) | (−0.47936, 0.66615)
n = 30 | (−0.41972, 0.68884) | (−0.35921, 0.55919) | (−0.47668, 0.65118)
n = 35 | (−0.41659, 0.67762) | (−0.35758, 0.54531) | (−0.47417, 0.63887)
n = 40 | (−0.41389, 0.66791) | (−0.35604, 0.53394) | (−0.47186, 0.62845)
n = 45 | (−0.41149, 0.65938) | (−0.35458, 0.52436) | (−0.46972, 0.61943)
n = 55 | (−0.40742, 0.64494) | (−0.35194, 0.50891) | (−0.46591, 0.60444)
n = 70 | (−0.40252, 0.62779) | (−0.34853, 0.49157) | (−0.46113, 0.58701)
n = 80 | (−0.39981, 0.61841) | (−0.34656, 0.48248) | (−0.45835, 0.57760)
n = 90 | (−0.39742, 0.61020) | (−0.34479, 0.47473) | (−0.45588, 0.56946)
n = 100 | (−0.39523, 0.60290) | (−0.34318, 0.46796) | (−0.45365, 0.56228)
Table 2. The values of x n = ( x 1 n , x 2 n ) with the initial value ( − 0.5 , 1 ) .
No. of Iterations | Three-Step Iterative Algorithm x n = ( x 1 n , x 2 n ) | Two-Step Iterative Algorithm x n = ( x 1 n , x 2 n ) | One-Step Iterative Algorithm x n = ( x 1 n , x 2 n )
n = 1 | (−0.5, 1) | (−0.5, 1) | (−0.5, 1)
n = 2 | (0.12570, −0.25747) | (0.02153, −0.08605) | (0.03702, −0.37147)
n = 3 | (0.20262, −0.30925) | (0.11337, −0.14096) | (0.18311, −0.41916)
n = 4 | (0.16102, −0.13137) | (0.11288, −0.0982) | (0.22754, −0.34455)
n = 5 | (0.11547, −0.05666) | (0.09339, −0.06932) | (0.2329, −0.27319)
n = 10 | (0.01631, 0.00114) | (0.02223, −0.01653) | (0.15164, −0.11691)
n = 15 | (0.00225, 0.00301) | (0.00459, −0.00439) | (0.09413, −0.07328)
n = 20 | (0.00017, 0.00172) | (0.00071, −0.00100) | (0.06543, −0.05345)
n = 25 | (0, 0.00093) | (0.00013, −0.0001) | (0.04982, −0.04209)
n = 30 | (0, 0.00052) | (0, 0.00016) | (0.04026, −0.03471)
n = 35 | (0, 0.00029) | (0, 0.00013) | (0.03382, −0.02954)
n = 40 | (0, 0.00016) | (0, 0.00009) | (0.02918, −0.02571)
n = 45 | (0, 0.00009) | (0, 0.00006) | (0.02566, −0.02276)
n = 55 | (0, 0) | (0, 0.00003) | (0.02069, 0)
n = 70 | (0, 0) | (0, 0) | (0.01604, 0)
n = 80 | (0, 0) | (0, 0) | (0.01395, −0.01262)
n = 90 | (0, 0) | (0, 0) | (0.01234, −0.0112)
n = 100 | (0, 0) | (0, 0) | (0.01107, −0.01006)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
