Article

Co-Variational Inequality Problem Involving Two Generalized Yosida Approximation Operators

1 Department of Mathematics, Aligarh Muslim University, Aligarh 202002, India
2 College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua 321004, China
3 Department of Applied Mathematics, Zakir Hussain College of Engineering and Technology, Aligarh Muslim University, Aligarh 202002, India
4 Department of Mathematics and Sciences, College of Arts and Applied Sciences, Dhofar University, Salalah 211, Oman
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(8), 615; https://doi.org/10.3390/fractalfract7080615
Submission received: 22 June 2023 / Revised: 1 August 2023 / Accepted: 4 August 2023 / Published: 10 August 2023

Abstract

We focus our study on a co-variational inequality problem involving two generalized Yosida approximation operators in a real uniformly smooth Banach space. We establish some characteristics of a generalized Yosida approximation operator, which are used in our main proof. We apply the concept of a nonexpansive sunny retraction to obtain a solution to our problem. Convergence analysis is also discussed.
MSC:
65J15; 47J25; 65K15

1. Introduction

Variational inequality theory is an influential unifying methodology for solving many problems of pure as well as applied sciences. In 1966, Hartman and Stampacchia [1] initiated the study of variational inequalities while dealing with some problems of mechanics.
The concept of variational inequalities provides various tools for modelling problems arising in variational analysis and the applied sciences; with these tools one can establish the existence of a solution and the convergence of iterative sequences. Variational inequalities are used in the study of stochastic control, network economics, the computation of equilibria, and many other real-life problems. For more applications, see [2,3,4,5,6,7,8,9,10,11,12] and the references therein.
Alber and Yao [13] first considered and studied a co-variational inequality problem using the concept of a nonexpansive sunny retraction. They obtained a solution of the co-variational inequality problem and discussed the convergence criteria. Their work was extended by Ahmad and Irfan [14] with a slightly different approach.
Yosida approximation operators are useful for obtaining solutions of various types of differential equations. Petterson [15] first solved stochastic differential equations using the Yosida approximation operator approach. The concept of the Yosida approximation operator is also applicable to the study of heat equations, the problem of coupled sound and heat flow in compressible fluids, wave equations, and so on. For our purpose, we consider a generalized Yosida approximation operator and show that it is Lipschitz continuous as well as strongly accretive. For more details, we refer to [16,17,18,19,20].
In light of the above discussion, the aim of this work is to introduce a new version of the co-variational inequality problem, one involving two generalized Yosida approximation operators. We obtain a solution of our problem and discuss the convergence of the sequences generated by the iterative method.

2. Preliminaries

Throughout this paper, we denote a real Banach space by $E$ and its dual space by $E^{*}$. Let $\langle \dot{a}, \dot{b} \rangle$ denote the duality pairing between $\dot{a} \in E$ and $\dot{b} \in E^{*}$. The usual norm on $E$ is denoted by $\|\cdot\|$, the class of nonempty subsets of $E$ by $2^{E}$, and the class of nonempty compact subsets of $E$ by $\hat{C}(E)$.
Definition 1.
The Hausdorff metric $D(\cdot,\cdot)$ on $\hat{C}(E)$ is defined by
$$D(P,Q)=\max\Big\{\sup_{x\in P} d(x,Q),\ \sup_{y\in Q} d(P,y)\Big\},\quad \text{where } d(x,Q)=\inf_{y\in Q} d(x,y) \text{ and } d(P,y)=\inf_{x\in P} d(x,y),$$
and $d$ is the metric induced by the norm $\|\cdot\|$.
Definition 2.
The normalized duality operator $J: E \to E^{*}$ is defined by
$$J(\dot{a})=\big\{\dot{b}\in E^{*} : \langle \dot{a},\dot{b}\rangle = \|\dot{a}\|^{2} = \|\dot{b}\|^{2}\big\},\quad \forall\, \dot{a}\in E.$$
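For orientation (an observation added here, not taken from the paper): when $E$ is a real Hilbert space $H$, identified with its dual via the Riesz representation theorem, the duality pairing is the inner product, and the conditions $\langle \dot{a},\dot{b}\rangle=\|\dot{a}\|^{2}=\|\dot{b}\|^{2}$ force equality in the Cauchy–Schwarz inequality, so
$$J(\dot{a})=\dot{a},\qquad \forall\,\dot{a}\in H,$$
i.e., $J$ reduces to the identity operator. This is the fact used in Remarks 1 and 2 below.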
Some characteristics of the normalized duality operator can be found in [21].
Definition 3.
The modulus of smoothness for the space $E$ is given by the function:
$$\rho_{E}(t)=\sup\Big\{\frac{\|\dot{c}+\dot{d}\|+\|\dot{c}-\dot{d}\|}{2}-1 : \|\dot{c}\|=1,\ \|\dot{d}\|=t\Big\}.$$
Definition 4.
The Banach space $E$ is uniformly smooth if and only if
$$\lim_{t\to 0}\frac{\rho_{E}(t)}{t}=0.$$
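A familiar example (included here for orientation; it is not needed for the proofs): every Hilbert space $H$ is uniformly smooth, since the parallelogram law gives
$$\rho_{H}(t)=\sqrt{1+t^{2}}-1\le \frac{t^{2}}{2},$$
so that $t^{-1}\rho_{H}(t)\to 0$ as $t\to 0$. In particular, the growth condition $\rho_{E}(t)\le Ct^{2}$ assumed in Theorem 3 below holds in every Hilbert space with $C=\tfrac{1}{2}$.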
The following result is instrumental in proving our main result.
Proposition 1
([13]). Let $E$ be a uniformly smooth Banach space and $J$ the normalized duality operator. Then, for any $\dot{a},\dot{b}\in E$, we have
(i)
$\|\dot{a}+\dot{b}\|^{2}\le \|\dot{a}\|^{2}+2\langle \dot{b},\ J(\dot{a}+\dot{b})\rangle$;
(ii)
$\langle \dot{a}-\dot{b},\ J(\dot{a})-J(\dot{b})\rangle \le 2 d^{2}\rho_{E}\big(4\|\dot{a}-\dot{b}\|/d\big)$, where $d=\sqrt{\big(\|\dot{a}\|^{2}+\|\dot{b}\|^{2}\big)/2}$.
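As a quick check of (i) in the special case of a Hilbert space (an illustration we add here; it is not part of the statement), $J$ is the identity and
$$\|\dot{a}+\dot{b}\|^{2}=\|\dot{a}\|^{2}+2\langle \dot{b},\dot{a}+\dot{b}\rangle-\|\dot{b}\|^{2}\le\|\dot{a}\|^{2}+2\langle \dot{b},J(\dot{a}+\dot{b})\rangle.$$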
Definition 5.
The operator $h_{1}: E \to E$ is called:
(i)
Accretive, if
$$\langle h_{1}(\dot{a})-h_{1}(\dot{b}),\ J(\dot{a}-\dot{b})\rangle \ge 0,\quad \forall\,\dot{a},\dot{b}\in E;$$
(ii)
Strongly accretive, if
$$\langle h_{1}(\dot{a})-h_{1}(\dot{b}),\ J(\dot{a}-\dot{b})\rangle \ge r_{1}\|\dot{a}-\dot{b}\|^{2},\quad \forall\,\dot{a},\dot{b}\in E,$$
where $r_{1}>0$ is a constant;
(iii)
Lipschitz continuous, if
$$\|h_{1}(\dot{a})-h_{1}(\dot{b})\|\le \lambda_{h_{1}}\|\dot{a}-\dot{b}\|,\quad \forall\,\dot{a},\dot{b}\in E,$$
where $\lambda_{h_{1}}>0$ is a constant;
(iv)
Expansive, if
$$\|h_{1}(\dot{a})-h_{1}(\dot{b})\|\ge \beta_{h_{1}}\|\dot{a}-\dot{b}\|,\quad \forall\,\dot{a},\dot{b}\in E,$$
where $\beta_{h_{1}}>0$ is a constant.
Remark 1.
If E is a Hilbert space then the definitions of the accretive operator and the strongly accretive operator become the definitions of the monotone operator and the strongly monotone operator, respectively. For more literature on different types of operators, see [22,23,24].
Definition 6.
Let $\tilde{A}: E \to \hat{C}(E)$ be a multi-valued operator. The operator $S: E\times E\times E \to E$ is said to be:
(i)
Lipschitz continuous in the first slot, if
$$\|S(u_{1},\cdot,\cdot)-S(u_{2},\cdot,\cdot)\|\le \delta_{S_{1}}\|u_{1}-u_{2}\|,\quad \forall\,\dot{a},\dot{b}\in E \text{ and for some } u_{1}\in\tilde{A}(\dot{a}),\ u_{2}\in\tilde{A}(\dot{b}),$$
where $\delta_{S_{1}}>0$ is a constant.
Lipschitz continuity of $S$ in the other slots is defined similarly;
(ii)
Strongly accretive in the first slot with respect to $\tilde{A}$, if
$$\langle S(u_{1},\cdot,\cdot)-S(u_{2},\cdot,\cdot),\ J(\dot{a}-\dot{b})\rangle \ge \lambda_{S_{1}}\|\dot{a}-\dot{b}\|^{2},\quad \forall\,\dot{a},\dot{b}\in E \text{ and for some } u_{1}\in\tilde{A}(\dot{a}),\ u_{2}\in\tilde{A}(\dot{b}),$$
where $\lambda_{S_{1}}>0$ is a constant.
Strong accretivity of $S$ in the other slots and with respect to other operators is defined analogously.
Definition 7.
The multi-valued operator $\tilde{A}: E \to \hat{C}(E)$ is called $D$-Lipschitz continuous if
$$D(\tilde{A}(\dot{a}),\tilde{A}(\dot{b}))\le \alpha_{\tilde{A}}\|\dot{a}-\dot{b}\|,\quad \forall\,\dot{a},\dot{b}\in E,$$
where $\alpha_{\tilde{A}}>0$ is a constant and $D(\cdot,\cdot)$ denotes the Hausdorff metric.
Definition 8
([13]). Suppose that Ω is a nonempty closed convex subset of $E$. Then an operator $Q_{\Omega}: E\to\Omega$ is called:
(i)
A retraction on Ω, if $Q_{\Omega}^{2}=Q_{\Omega}$;
(ii)
A nonexpansive retraction on Ω, if it satisfies the inequality
$$\|Q_{\Omega}(\dot{a})-Q_{\Omega}(\dot{b})\|\le\|\dot{a}-\dot{b}\|,\quad \forall\,\dot{a},\dot{b}\in E;$$
(iii)
A nonexpansive sunny retraction on Ω, if
$$Q_{\Omega}\big(Q_{\Omega}(\dot{a})+\hat{t}\,(\dot{a}-Q_{\Omega}(\dot{a}))\big)=Q_{\Omega}(\dot{a}),$$
for all $\dot{a}\in E$ and for $0\le \hat{t}<+\infty$.
Nonexpansive sunny retraction operators are characterized as follows; the characterization can be found in [25,26,27].
Proposition 2.
The operator $Q_{\Omega}$ is a nonexpansive sunny retraction if and only if
$$\langle \dot{a}-Q_{\Omega}(\dot{a}),\ J(Q_{\Omega}(\dot{a})-\dot{b})\rangle \ge 0,$$
for all $\dot{a}\in E$ and $\dot{b}\in\Omega$.
Remark 2.
If $E$ is a Hilbert space, then the operator $Q_{\Omega}$ is a nonexpansive sunny retraction if and only if
$$\langle \dot{a}-Q_{\Omega}(\dot{a}),\ Q_{\Omega}(\dot{a})-\dot{b}\rangle \ge 0,$$
for all $\dot{a}\in E$ and $\dot{b}\in\Omega$.
Proposition 3.
Suppose $\tilde{m}=\tilde{m}(\dot{a}): E\to E$ and $Q_{\Omega}: E\to\Omega$ is a nonexpansive sunny retraction. Then, for all $\dot{a}\in E$, we have
$$Q_{\Omega+\tilde{m}(\dot{a})}(\dot{a})=\tilde{m}(\dot{a})+Q_{\Omega}\big(\dot{a}-\tilde{m}(\dot{a})\big).$$
Remark 3.
Let $E$ be a Hilbert space and Ω a nonempty closed convex subset of $E$. Then the nearest point projection $P_{\Omega}$ of $E$ onto Ω is an example of a nonexpansive sunny retraction of $E$ onto Ω. This fact does not carry over to all Banach spaces because, outside a Hilbert space, nearest point projections are sunny but, in general, not nonexpansive. In view of Proposition 2, a nonexpansive sunny retraction behaves in a Banach space just as the nearest point projection behaves in a Hilbert space. Bruck [28] has shown that if a subset of a uniformly smooth Banach space is the range of a nonexpansive retraction, then it is also the range of a nonexpansive sunny retraction.
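To make Remark 2 and the sunny property concrete, the following small numerical check, written for the finite-dimensional Hilbert space $\mathbb{R}^{3}$, verifies both properties for the nearest point projection onto the closed unit ball. The example (the set, the projection proj_ball, and the random sampling) is an illustration of ours and is not taken from the paper.

```python
import numpy as np

def proj_ball(x):
    """Nearest-point projection of x onto the closed unit ball of R^3."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.normal(size=3) * 3.0              # arbitrary point of E = R^3
    b = proj_ball(rng.normal(size=3) * 3.0)   # arbitrary point of Omega (the unit ball)
    q = proj_ball(a)
    # Remark 2: <a - Q(a), Q(a) - b> >= 0 for all b in Omega.
    assert np.dot(a - q, q - b) >= -1e-12
    # Sunny property (Definition 8 (iii)): Q(Q(a) + t(a - Q(a))) = Q(a) for all t >= 0.
    for t in (0.0, 0.5, 2.0, 10.0):
        assert np.allclose(proj_ball(q + t * (a - q)), q)
print("Remark 2 and the sunny property hold on all sampled points.")
```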
Definition 9.
The multi-valued operator $\hat{M}: E\to 2^{E}$ is called accretive, if
$$\langle u-v,\ J(\dot{a}-\dot{b})\rangle \ge 0,\quad \forall\,\dot{a},\dot{b}\in E \text{ and for some } u\in\hat{M}(\dot{a}),\ v\in\hat{M}(\dot{b}).$$
Definition 10.
Let h 1 : E E be an operator. The multi-valued operator M ^ : E 2 E is said to be h 1 -accretive if M ^ is accretive and the range of [ h 1 + λ M ^ ] is E, where λ > 0 is a constant.
Definition 11.
Let $\hat{M}: E\to 2^{E}$ be a multi-valued operator. The operator $R^{\hat{M}}_{I,\lambda}: E\to E$ defined by
$$R^{\hat{M}}_{I,\lambda}(\dot{a})=[I+\lambda\hat{M}]^{-1}(\dot{a}),\quad \forall\,\dot{a}\in E,$$
is called a classical resolvent operator, where $I$ is the identity operator and $\lambda>0$ is a constant.
Definition 12.
We define $R^{\hat{M}}_{h_{1},\lambda}: E\to E$ such that
$$R^{\hat{M}}_{h_{1},\lambda}(\dot{a})=[h_{1}+\lambda\hat{M}]^{-1}(\dot{a}),\quad \forall\,\dot{a}\in E,\ \text{where }\lambda>0\text{ is a constant.}$$
We call it a generalized resolvent operator.
Definition 13.
The classical Yosida approximation operator is defined by
$$Y^{\hat{M}}_{I,\lambda}(\dot{a})=\frac{1}{\lambda}\big[I-R^{\hat{M}}_{I,\lambda}\big](\dot{a}),\quad \forall\,\dot{a}\in E,$$
where $I$ is the identity operator and $\lambda>0$ is a constant.
Definition 14.
We define $Y^{\hat{M}}_{h_{1},\lambda}: E\to E$ such that
$$Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})=\frac{1}{\lambda}\big[h_{1}-R^{\hat{M}}_{h_{1},\lambda}\big](\dot{a}),\quad \forall\,\dot{a}\in E,\ \text{where }\lambda>0\text{ is a constant.}$$
We call it a generalized Yosida approximation operator.
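To illustrate Definitions 11–14 in the simplest possible setting, consider the following one-dimensional example (our own illustration, not taken from the paper): let $E=\mathbb{R}$, $h_{1}(x)=2x$, $\hat{M}(x)=x$, and $\lambda=1$. Then
$$R^{\hat{M}}_{I,\lambda}(x)=[I+\hat{M}]^{-1}(x)=\frac{x}{2},\qquad Y^{\hat{M}}_{I,\lambda}(x)=x-\frac{x}{2}=\frac{x}{2},$$
$$R^{\hat{M}}_{h_{1},\lambda}(x)=[h_{1}+\hat{M}]^{-1}(x)=\frac{x}{3},\qquad Y^{\hat{M}}_{h_{1},\lambda}(x)=h_{1}(x)-R^{\hat{M}}_{h_{1},\lambda}(x)=2x-\frac{x}{3}=\frac{5x}{3}.$$
For this $h_{1}$ one has $r_{1}=\beta_{h_{1}}=\lambda_{h_{1}}=2$ in the sense of Definition 5; the example is referred to again after Propositions 4–6 below.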
Proposition 4
([29]). Let $h_{1}: E\to E$ be $r_{1}$-strongly accretive and let $\hat{M}: E\to 2^{E}$ be an $h_{1}$-accretive multi-valued operator. Then, the operator $R^{\hat{M}}_{h_{1},\lambda}: E\to E$ satisfies the following condition:
$$\big\|R^{\hat{M}}_{h_{1},\lambda}(\dot{a})-R^{\hat{M}}_{h_{1},\lambda}(\dot{b})\big\|\le \frac{1}{r_{1}}\|\dot{a}-\dot{b}\|,\quad \forall\,\dot{a},\dot{b}\in E.$$
That is, $R^{\hat{M}}_{h_{1},\lambda}$ is $\frac{1}{r_{1}}$-Lipschitz continuous.
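For the one-dimensional example given after Definition 14 (again, only an illustration of ours), Proposition 4 can be verified directly:
$$\big|R^{\hat{M}}_{h_{1},\lambda}(\dot{a})-R^{\hat{M}}_{h_{1},\lambda}(\dot{b})\big|=\frac{1}{3}|\dot{a}-\dot{b}|\le\frac{1}{2}|\dot{a}-\dot{b}|=\frac{1}{r_{1}}|\dot{a}-\dot{b}|.$$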
Proposition 5.
If $h_{1}: E\to E$ is an $r_{1}$-strongly accretive, $\beta_{h_{1}}$-expansive, $\lambda_{h_{1}}$-Lipschitz continuous operator, and $R^{\hat{M}}_{h_{1},\lambda}: E\to E$ is a $\frac{1}{r_{1}}$-Lipschitz continuous operator, then the operator $Y^{\hat{M}}_{h_{1},\lambda}: E\to E$ satisfies the following condition:
$$\big\langle Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{b}),\ J\big(h_{1}(\dot{a})-h_{1}(\dot{b})\big)\big\rangle \ge \delta_{Y_{h_{1}}}\|\dot{a}-\dot{b}\|^{2},\quad \forall\,\dot{a},\dot{b}\in E,$$
where $\delta_{Y_{h_{1}}}=\dfrac{\beta_{h_{1}}^{2}r_{1}-\lambda_{h_{1}}}{\lambda r_{1}}$, with $\beta_{h_{1}}^{2}r_{1}>\lambda_{h_{1}}$, $\lambda r_{1}\ne 0$, and all the constants involved positive. That is, $Y^{\hat{M}}_{h_{1},\lambda}$ is $\delta_{Y_{h_{1}}}$-strongly accretive with respect to the operator $h_{1}$.
Proof. 
Since $Y^{\hat{M}}_{h_{1},\lambda}=\frac{1}{\lambda}\big[h_{1}-R^{\hat{M}}_{h_{1},\lambda}\big]$, we evaluate
$$\begin{aligned}
\big\langle Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{b}),\ J(h_{1}(\dot{a})-h_{1}(\dot{b}))\big\rangle
&=\frac{1}{\lambda}\big\langle \big(h_{1}(\dot{a})-R^{\hat{M}}_{h_{1},\lambda}(\dot{a})\big)-\big(h_{1}(\dot{b})-R^{\hat{M}}_{h_{1},\lambda}(\dot{b})\big),\ J(h_{1}(\dot{a})-h_{1}(\dot{b}))\big\rangle\\
&=\frac{1}{\lambda}\big\langle h_{1}(\dot{a})-h_{1}(\dot{b}),\ J(h_{1}(\dot{a})-h_{1}(\dot{b}))\big\rangle-\frac{1}{\lambda}\big\langle R^{\hat{M}}_{h_{1},\lambda}(\dot{a})-R^{\hat{M}}_{h_{1},\lambda}(\dot{b}),\ J(h_{1}(\dot{a})-h_{1}(\dot{b}))\big\rangle.
\end{aligned}$$
Using the expansiveness of $h_{1}$, the Lipschitz continuity of $h_{1}$, and the Lipschitz continuity of the generalized resolvent operator $R^{\hat{M}}_{h_{1},\lambda}$, we obtain
$$\begin{aligned}
\big\langle Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{b}),\ J(h_{1}(\dot{a})-h_{1}(\dot{b}))\big\rangle
&\ge \frac{1}{\lambda}\|h_{1}(\dot{a})-h_{1}(\dot{b})\|^{2}-\frac{1}{\lambda}\big\|R^{\hat{M}}_{h_{1},\lambda}(\dot{a})-R^{\hat{M}}_{h_{1},\lambda}(\dot{b})\big\|\,\|h_{1}(\dot{a})-h_{1}(\dot{b})\|\\
&\ge \frac{1}{\lambda}\|h_{1}(\dot{a})-h_{1}(\dot{b})\|^{2}-\frac{1}{\lambda}\cdot\frac{1}{r_{1}}\|\dot{a}-\dot{b}\|\,\|h_{1}(\dot{a})-h_{1}(\dot{b})\|\\
&\ge \frac{\beta_{h_{1}}^{2}}{\lambda}\|\dot{a}-\dot{b}\|^{2}-\frac{\lambda_{h_{1}}}{\lambda r_{1}}\|\dot{a}-\dot{b}\|^{2}\\
&=\frac{\beta_{h_{1}}^{2}r_{1}-\lambda_{h_{1}}}{\lambda r_{1}}\|\dot{a}-\dot{b}\|^{2}=\delta_{Y_{h_{1}}}\|\dot{a}-\dot{b}\|^{2}.
\end{aligned}$$
That is,
$$\big\langle Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{b}),\ J(h_{1}(\dot{a})-h_{1}(\dot{b}))\big\rangle \ge \delta_{Y_{h_{1}}}\|\dot{a}-\dot{b}\|^{2},$$
i.e., $Y^{\hat{M}}_{h_{1},\lambda}$ is $\delta_{Y_{h_{1}}}$-strongly accretive with respect to $h_{1}$. □
Proposition 6.
Let $h_{1}: E\to E$ be a $\lambda_{h_{1}}$-Lipschitz continuous, $r_{1}$-strongly accretive operator and let $R^{\hat{M}}_{h_{1},\lambda}: E\to E$ be a $\frac{1}{r_{1}}$-Lipschitz continuous operator. Then the operator $Y^{\hat{M}}_{h_{1},\lambda}: E\to E$ satisfies the following condition:
$$\big\|Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{b})\big\|\le \lambda_{Y_{h_{1}}}\|\dot{a}-\dot{b}\|,\quad \forall\,\dot{a},\dot{b}\in E,$$
where $\lambda_{Y_{h_{1}}}=\dfrac{\lambda_{h_{1}}r_{1}+1}{\lambda r_{1}}$, $\lambda r_{1}\ne 0$. That is, $Y^{\hat{M}}_{h_{1},\lambda}$ is $\lambda_{Y_{h_{1}}}$-Lipschitz continuous.
Proof. 
Since $h_{1}$ and the generalized resolvent operator $R^{\hat{M}}_{h_{1},\lambda}$ are Lipschitz continuous, we obtain
$$\begin{aligned}
\big\|Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{b})\big\|
&=\Big\|\frac{1}{\lambda}\big[h_{1}(\dot{a})-R^{\hat{M}}_{h_{1},\lambda}(\dot{a})\big]-\frac{1}{\lambda}\big[h_{1}(\dot{b})-R^{\hat{M}}_{h_{1},\lambda}(\dot{b})\big]\Big\|\\
&\le \frac{1}{\lambda}\|h_{1}(\dot{a})-h_{1}(\dot{b})\|+\frac{1}{\lambda}\big\|R^{\hat{M}}_{h_{1},\lambda}(\dot{a})-R^{\hat{M}}_{h_{1},\lambda}(\dot{b})\big\|\\
&\le \frac{1}{\lambda}\lambda_{h_{1}}\|\dot{a}-\dot{b}\|+\frac{1}{\lambda}\cdot\frac{1}{r_{1}}\|\dot{a}-\dot{b}\|
=\frac{\lambda_{h_{1}}r_{1}+1}{\lambda r_{1}}\|\dot{a}-\dot{b}\|=\lambda_{Y_{h_{1}}}\|\dot{a}-\dot{b}\|.
\end{aligned}$$
That is,
$$\big\|Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{b})\big\|\le \lambda_{Y_{h_{1}}}\|\dot{a}-\dot{b}\|,\quad \forall\,\dot{a},\dot{b}\in E.$$
Thus, the operator $Y^{\hat{M}}_{h_{1},\lambda}$ is $\lambda_{Y_{h_{1}}}$-Lipschitz continuous. □
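The constants of Propositions 5 and 6 can be checked numerically on the one-dimensional example given after Definition 14 ($h_{1}(x)=2x$, $\hat{M}(x)=x$, $\lambda=1$, so that $Y^{\hat{M}}_{h_{1},\lambda}(x)=\tfrac{5x}{3}$). The short script below is a sanity check of ours, not part of the original development; on $\mathbb{R}$ the duality operator $J$ is the identity.

```python
import random

# One-dimensional example: h1(x) = 2x, M(x) = x, lambda = 1.
lam, r1, beta, lip_h1 = 1.0, 2.0, 2.0, 2.0      # constants of h1 (Definition 5)
h1 = lambda x: 2.0 * x
R = lambda x: x / 3.0                            # generalized resolvent [h1 + lam*M]^{-1}
Y = lambda x: (h1(x) - R(x)) / lam               # generalized Yosida operator, Y(x) = 5x/3

delta_Y = (beta**2 * r1 - lip_h1) / (lam * r1)   # Proposition 5 constant: 3.0
lam_Y = (lip_h1 * r1 + 1.0) / (lam * r1)         # Proposition 6 constant: 2.5

random.seed(0)
for _ in range(10_000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    # Proposition 5: <Y(a)-Y(b), J(h1(a)-h1(b))> >= delta_Y |a-b|^2.
    assert (Y(a) - Y(b)) * (h1(a) - h1(b)) >= delta_Y * (a - b) ** 2 - 1e-9
    # Proposition 6: |Y(a)-Y(b)| <= lam_Y |a-b|.
    assert abs(Y(a) - Y(b)) <= lam_Y * abs(a - b) + 1e-9
print("Propositions 5 and 6 verified on the 1-D example:", delta_Y, lam_Y)
```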

3. Problem Formation and Iterative Method

Suppose $S: E\times E\times E\to E$ is a nonlinear operator, $\tilde{A},\tilde{B},\tilde{C}: E\to\hat{C}(E)$ are multi-valued operators, and $\tilde{K}: E\to 2^{E}$ is a multi-valued operator such that $\tilde{K}(\dot{a})$ is a nonempty, closed, and convex set for all $\dot{a}\in E$. Let $h_{1},h_{2}: E\to E$ be single-valued operators, let $\hat{M}: E\to 2^{E}$ be an $h_{1}$-accretive multi-valued operator and $\hat{N}: E\to 2^{E}$ an $h_{2}$-accretive multi-valued operator, and let $Y^{\hat{M}}_{h_{1},\lambda}: E\to E$ and $Y^{\hat{N}}_{h_{2},\lambda}: E\to E$ be the corresponding generalized Yosida approximation operators, where $\lambda>0$ is a constant.
We consider the problem of finding $\dot{a}\in E$, $u\in\tilde{A}(\dot{a})$, $v\in\tilde{B}(\dot{a})$, and $w\in\tilde{C}(\dot{a})$ such that
$$\big\langle Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a}),\ J(S(u,v,w))\big\rangle \ge 0,\qquad S(u,v,w)\in\tilde{K}(\dot{a}). \tag{1}$$
We call problem (1) a co-variational inequality problem involving two generalized Yosida approximation operators.
Clearly, from problem (1) one can easily obtain the co-variational inequalities studied by Alber and Yao [13] and by Ahmad and Irfan [14] as special cases.
We now provide a few characterizations of a solution of problem (1).
Theorem 1.
Let $\tilde{A},\tilde{B},\tilde{C}: E\to\hat{C}(E)$ be multi-valued operators, $S: E\times E\times E\to E$ a nonlinear operator, and $\tilde{K}: E\to 2^{E}$ a multi-valued operator such that $\tilde{K}(\dot{a})$ is a nonempty, closed, and convex set for all $\dot{a}\in E$. Let $h_{1},h_{2}: E\to E$ be single-valued operators, $\hat{M}: E\to 2^{E}$ the $h_{1}$-accretive multi-valued operator, $\hat{N}: E\to 2^{E}$ the $h_{2}$-accretive multi-valued operator, and $Y^{\hat{M}}_{h_{1},\lambda}, Y^{\hat{N}}_{h_{2},\lambda}: E\to E$ the generalized Yosida approximation operators, where $\lambda>0$ is a constant. Then, the following assertions are equivalent:
(i)
$\dot{a}\in E$, $u\in\tilde{A}(\dot{a})$, $v\in\tilde{B}(\dot{a})$, $w\in\tilde{C}(\dot{a})$ constitute a solution of problem (1);
(ii)
there exist $\dot{a}\in E$, $u\in\tilde{A}(\dot{a})$, $v\in\tilde{B}(\dot{a})$, $w\in\tilde{C}(\dot{a})$ such that
$$S(u,v,w)=Q_{\tilde{K}(\dot{a})}\big[S(u,v,w)-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a})\big)\big].$$
Proof. 
For proof, see [7,21]. □
Combining Proposition 3 and Theorem 1, we obtain the following theorem.
Theorem 2.
Suppose all the conditions of Theorem 1 are fulfilled and, additionally, $\tilde{K}(\dot{a})=\tilde{m}(\dot{a})+F$ for all $\dot{a}\in E$, where $F$ is a nonempty closed convex subset of $E$ and $Q_{F}: E\to F$ is a nonexpansive sunny retraction. Then, $\dot{a}\in E$, $u\in\tilde{A}(\dot{a})$, $v\in\tilde{B}(\dot{a})$, and $w\in\tilde{C}(\dot{a})$ constitute a solution of problem (1) if and only if
$$\dot{a}=\dot{a}+\tilde{m}(\dot{a})-S(u,v,w)+Q_{F}\big[S(u,v,w)-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a})\big)-\tilde{m}(\dot{a})\big], \tag{2}$$
where $\lambda>0$ is a constant.
Using Theorem 2, we construct the following iterative method.
Iterative Method 1. For given initial points $\dot{a}_{0}\in E$, $u_{0}\in\tilde{A}(\dot{a}_{0})$, $v_{0}\in\tilde{B}(\dot{a}_{0})$, $w_{0}\in\tilde{C}(\dot{a}_{0})$, let
$$\dot{a}_{1}=\dot{a}_{0}+\tilde{m}(\dot{a}_{0})-S(u_{0},v_{0},w_{0})+Q_{F}\big[S(u_{0},v_{0},w_{0})-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{0})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a}_{0})\big)-\tilde{m}(\dot{a}_{0})\big].$$
Since $\tilde{A}(\dot{a}_{0})$, $\tilde{B}(\dot{a}_{0})$, and $\tilde{C}(\dot{a}_{0})$ are nonempty compact sets, by Nadler [30] there exist $u_{1}\in\tilde{A}(\dot{a}_{1})$, $v_{1}\in\tilde{B}(\dot{a}_{1})$, and $w_{1}\in\tilde{C}(\dot{a}_{1})$ such that
$$\|u_{1}-u_{0}\|\le D(\tilde{A}(\dot{a}_{1}),\tilde{A}(\dot{a}_{0})),\quad \|v_{1}-v_{0}\|\le D(\tilde{B}(\dot{a}_{1}),\tilde{B}(\dot{a}_{0})),\quad \|w_{1}-w_{0}\|\le D(\tilde{C}(\dot{a}_{1}),\tilde{C}(\dot{a}_{0})),$$
where D ( · , · ) denotes the Hausdorff metric.
Proceeding in a similar manner, we can find the sequences $\{\dot{a}_{n}\}$, $\{u_{n}\}$, $\{v_{n}\}$, and $\{w_{n}\}$ using the following method:
$$\dot{a}_{n+1}=\dot{a}_{n}+\tilde{m}(\dot{a}_{n})-S(u_{n},v_{n},w_{n})+Q_{F}\big[S(u_{n},v_{n},w_{n})-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a}_{n})\big)-\tilde{m}(\dot{a}_{n})\big], \tag{3}$$
$$u_{n}\in\tilde{A}(\dot{a}_{n}),\quad \|u_{n+1}-u_{n}\|\le D(\tilde{A}(\dot{a}_{n+1}),\tilde{A}(\dot{a}_{n})), \tag{4}$$
$$v_{n}\in\tilde{B}(\dot{a}_{n}),\quad \|v_{n+1}-v_{n}\|\le D(\tilde{B}(\dot{a}_{n+1}),\tilde{B}(\dot{a}_{n})), \tag{5}$$
$$w_{n}\in\tilde{C}(\dot{a}_{n}),\quad \|w_{n+1}-w_{n}\|\le D(\tilde{C}(\dot{a}_{n+1}),\tilde{C}(\dot{a}_{n})), \tag{6}$$
for $n=0,1,2,3,\ldots$, where $\lambda>0$ is a constant.
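To illustrate how Iterative Method 1 operates, the sketch below runs the scheme for a deliberately simple, single-valued instance on $E=\mathbb{R}$ (so $u_{n}$, $v_{n}$, $w_{n}$ are uniquely determined and the selection step via Nadler's theorem is trivial). All of the operator choices, including $\tilde{m}\equiv 0$, $F=[-1,1]$, and $Q_{F}$ taken as the projection onto $F$, are hypothetical illustrations of ours and are not taken from the paper.

```python
# Single-valued illustrative instance of Iterative Method 1 on E = R.
lam = 1.0
h1, h2 = (lambda x: 3.0 * x), (lambda x: 2.0 * x)
R1 = lambda x: x / 4.0                          # [h1 + lam*M]^{-1} with M(x) = x
R2 = lambda x: x / 3.0                          # [h2 + lam*N]^{-1} with N(x) = x
Y1 = lambda x: (h1(x) - R1(x)) / lam            # generalized Yosida operators
Y2 = lambda x: (h2(x) - R2(x)) / lam

A, B, C = (lambda x: 0.5 * x), (lambda x: 0.4 * x), (lambda x: 0.3 * x)
S = lambda u, v, w: u + v + w
m = lambda x: 0.0                               # m~ = 0, so K~(x) = F
Q_F = lambda x: max(-1.0, min(1.0, x))          # sunny nonexpansive retraction onto F = [-1, 1]

a = 5.0                                         # initial point a_0
for n in range(200):
    u, v, w = A(a), B(a), C(a)
    a_next = a + m(a) - S(u, v, w) + Q_F(S(u, v, w) - lam * (Y1(a) - Y2(a)) - m(a))
    if abs(a_next - a) < 1e-12:
        break
    a = a_next

u, v, w = A(a), B(a), C(a)
# The iterates converge to a = 0; there (Y1 - Y2)(a) * S(u, v, w) >= 0 and S(u, v, w) lies in F,
# i.e. the limit solves this simplified instance of problem (1).
print(a, (Y1(a) - Y2(a)) * S(u, v, w) >= 0, -1.0 <= S(u, v, w) <= 1.0)
```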

4. Convergence Result

Theorem 3.
Suppose $E$ is a real uniformly smooth Banach space with modulus of smoothness satisfying $\rho_{E}(t)\le Ct^{2}$ for some constant $C>0$. Suppose $F$ is a closed convex subset of $E$, $S(\cdot,\cdot,\cdot): E\times E\times E\to E$ is an operator, $\tilde{A},\tilde{B},\tilde{C}: E\to\hat{C}(E)$ are multi-valued operators, and $\tilde{m}: E\to E$ is an operator. Let $Q_{F}: E\to F$ be a nonexpansive sunny retraction operator and $\tilde{K}: E\to 2^{E}$ a multi-valued operator such that $\tilde{K}(\dot{a})=\tilde{m}(\dot{a})+F$ for all $\dot{a}\in E$. Let $\hat{M},\hat{N}: E\to 2^{E}$ be multi-valued operators and $h_{1},h_{2}: E\to E$ operators. Let $Y^{\hat{M}}_{h_{1},\lambda}$ be the generalized Yosida approximation operator associated with the generalized resolvent operator $R^{\hat{M}}_{h_{1},\lambda}$ and $Y^{\hat{N}}_{h_{2},\lambda}$ the generalized Yosida approximation operator associated with the generalized resolvent operator $R^{\hat{N}}_{h_{2},\lambda}$. Suppose that the following conditions are satisfied:
(i)
$S(\cdot,\cdot,\cdot)$ is $\lambda_{S_{1}}$-strongly accretive with respect to $\tilde{A}$ in the first slot, $\lambda_{S_{2}}$-strongly accretive with respect to $\tilde{B}$ in the second slot, and $\lambda_{S_{3}}$-strongly accretive with respect to $\tilde{C}$ in the third slot; it is $\delta_{S_{1}}$-Lipschitz continuous in the first slot, $\delta_{S_{2}}$-Lipschitz continuous in the second slot, and $\delta_{S_{3}}$-Lipschitz continuous in the third slot;
(ii)
$\tilde{A}$ is $\alpha_{\tilde{A}}$-$D$-Lipschitz continuous, $\tilde{B}$ is $\alpha_{\tilde{B}}$-$D$-Lipschitz continuous, and $\tilde{C}$ is $\alpha_{\tilde{C}}$-$D$-Lipschitz continuous;
(iii)
$\tilde{m}$ is $\lambda_{m}$-Lipschitz continuous;
(iv)
$h_{1}$ is $r_{1}$-strongly accretive, $\beta_{h_{1}}$-expansive, and $\lambda_{h_{1}}$-Lipschitz continuous; $h_{2}$ is $r_{2}$-strongly accretive, $\beta_{h_{2}}$-expansive, and $\lambda_{h_{2}}$-Lipschitz continuous;
(v)
$R^{\hat{M}}_{h_{1},\lambda}$ is $\frac{1}{r_{1}}$-Lipschitz continuous and $R^{\hat{N}}_{h_{2},\lambda}$ is $\frac{1}{r_{2}}$-Lipschitz continuous;
(vi)
$Y^{\hat{M}}_{h_{1},\lambda}$ is $\delta_{Y_{h_{1}}}$-strongly accretive and $\lambda_{Y_{h_{1}}}$-Lipschitz continuous, and $Y^{\hat{N}}_{h_{2},\lambda}$ is $\delta_{Y_{h_{2}}}$-strongly accretive and $\lambda_{Y_{h_{2}}}$-Lipschitz continuous;
(vii)
Suppose that
$$0<\sqrt{1-2(\lambda_{S_{1}}+\lambda_{S_{2}}+\lambda_{S_{3}})+64C\big(\delta_{S_{1}}^{2}\alpha_{\tilde{A}}^{2}+\delta_{S_{2}}^{2}\alpha_{\tilde{B}}^{2}+\delta_{S_{3}}^{2}\alpha_{\tilde{C}}^{2}\big)}+2\lambda_{m}+\big(\delta_{S_{1}}\alpha_{\tilde{A}}+\delta_{S_{2}}\alpha_{\tilde{B}}+\delta_{S_{3}}\alpha_{\tilde{C}}\big)+\sqrt{1-2\lambda\delta_{Y_{h_{1}}}+64C\lambda^{4}\lambda_{Y_{h_{1}}}^{2}}+\sqrt{1-2\lambda\delta_{Y_{h_{2}}}+64C\lambda^{4}\lambda_{Y_{h_{2}}}^{2}}<1,$$
where
$$\delta_{Y_{h_{1}}}=\frac{\beta_{h_{1}}^{2}r_{1}-\lambda_{h_{1}}}{\lambda r_{1}},\quad \delta_{Y_{h_{2}}}=\frac{\beta_{h_{2}}^{2}r_{2}-\lambda_{h_{2}}}{\lambda r_{2}},\quad \lambda_{Y_{h_{1}}}=\frac{\lambda_{h_{1}}r_{1}+1}{\lambda r_{1}},\quad \lambda_{Y_{h_{2}}}=\frac{\lambda_{h_{2}}r_{2}+1}{\lambda r_{2}},\quad \beta_{h_{1}}^{2}r_{1}>\lambda_{h_{1}},\ \beta_{h_{2}}^{2}r_{2}>\lambda_{h_{2}}.$$
Then, there exist $\dot{a}\in E$, $u\in\tilde{A}(\dot{a})$, $v\in\tilde{B}(\dot{a})$, and $w\in\tilde{C}(\dot{a})$ which constitute a solution of problem (1). Moreover, the sequences $\{\dot{a}_{n}\}$, $\{u_{n}\}$, $\{v_{n}\}$, and $\{w_{n}\}$ converge strongly to $\dot{a}$, $u$, $v$, and $w$, respectively.
Proof. 
Using (3) of Iterative Method 1 and the nonexpansiveness of the sunny retraction $Q_{F}$, we estimate
$$\begin{aligned}
\|\dot{a}_{n+1}-\dot{a}_{n}\|
&=\big\|\big[\dot{a}_{n}+\tilde{m}(\dot{a}_{n})-S(u_{n},v_{n},w_{n})+Q_{F}\big[S(u_{n},v_{n},w_{n})-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a}_{n})\big)-\tilde{m}(\dot{a}_{n})\big]\big]\\
&\qquad-\big[\dot{a}_{n-1}+\tilde{m}(\dot{a}_{n-1})-S(u_{n-1},v_{n-1},w_{n-1})+Q_{F}\big[S(u_{n-1},v_{n-1},w_{n-1})-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a}_{n-1})\big)-\tilde{m}(\dot{a}_{n-1})\big]\big]\big\|\\
&\le\big\|\dot{a}_{n}-\dot{a}_{n-1}-\big(S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big)\big\|+\|\tilde{m}(\dot{a}_{n})-\tilde{m}(\dot{a}_{n-1})\|\\
&\qquad+\big\|Q_{F}\big[S(u_{n},v_{n},w_{n})-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a}_{n})\big)-\tilde{m}(\dot{a}_{n})\big]-Q_{F}\big[S(u_{n-1},v_{n-1},w_{n-1})-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a}_{n-1})\big)-\tilde{m}(\dot{a}_{n-1})\big]\big\|\\
&\le\big\|\dot{a}_{n}-\dot{a}_{n-1}-\big(S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big)\big\|+2\|\tilde{m}(\dot{a}_{n})-\tilde{m}(\dot{a}_{n-1})\|+\big\|S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big\|\\
&\qquad+\big\|\dot{a}_{n}-\dot{a}_{n-1}-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1})\big)\big\|+\big\|\dot{a}_{n}-\dot{a}_{n-1}-\lambda\big(Y^{\hat{N}}_{h_{2},\lambda}(\dot{a}_{n})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a}_{n-1})\big)\big\|.
\end{aligned}$$
Applying Proposition 1, we evaluate
$$\begin{aligned}
\big\|\dot{a}_{n}&-\dot{a}_{n-1}-\big(S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big)\big\|^{2}\\
&\le\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}-2\big\langle S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1}),\ J\big(\dot{a}_{n}-\dot{a}_{n-1}-\big(S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big)\big)\big\rangle\\
&=\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}-2\big\langle S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1}),\ J(\dot{a}_{n}-\dot{a}_{n-1})\big\rangle\\
&\qquad-2\big\langle S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1}),\ J\big(\dot{a}_{n}-\dot{a}_{n-1}-\big(S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big)\big)-J(\dot{a}_{n}-\dot{a}_{n-1})\big\rangle\\
&=\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}-2\big\langle S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n},w_{n}),\ J(\dot{a}_{n}-\dot{a}_{n-1})\big\rangle-2\big\langle S(u_{n-1},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n}),\ J(\dot{a}_{n}-\dot{a}_{n-1})\big\rangle\\
&\qquad-2\big\langle S(u_{n-1},v_{n-1},w_{n})-S(u_{n-1},v_{n-1},w_{n-1}),\ J(\dot{a}_{n}-\dot{a}_{n-1})\big\rangle\\
&\qquad-2\big\langle S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1}),\ J\big(\dot{a}_{n}-\dot{a}_{n-1}-\big(S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big)\big)-J(\dot{a}_{n}-\dot{a}_{n-1})\big\rangle.
\end{aligned}$$
Since $S(\cdot,\cdot,\cdot)$ is $\lambda_{S_{1}}$-strongly accretive with respect to $\tilde{A}$ in the first slot, $\lambda_{S_{2}}$-strongly accretive with respect to $\tilde{B}$ in the second slot, and $\lambda_{S_{3}}$-strongly accretive with respect to $\tilde{C}$ in the third slot, applying (ii) of Proposition 1 the preceding estimate becomes
$$\begin{aligned}
\big\|\dot{a}_{n}-\dot{a}_{n-1}-\big(S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big)\big\|^{2}
&\le\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}-2\big(\lambda_{S_{1}}+\lambda_{S_{2}}+\lambda_{S_{3}}\big)\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}\\
&\qquad-2\big\langle S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1}),\ J\big(\dot{a}_{n}-\dot{a}_{n-1}-\big(S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big)\big)-J(\dot{a}_{n}-\dot{a}_{n-1})\big\rangle\\
&\le\big(1-2(\lambda_{S_{1}}+\lambda_{S_{2}}+\lambda_{S_{3}})\big)\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}+4d^{2}\rho_{E}\!\left(\frac{4\|S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\|}{d}\right).
\end{aligned}$$
As $S(\cdot,\cdot,\cdot)$ is $\delta_{S_{1}}$-Lipschitz continuous in the first slot, $\delta_{S_{2}}$-Lipschitz continuous in the second slot, and $\delta_{S_{3}}$-Lipschitz continuous in the third slot, and $\tilde{A}$ is $\alpha_{\tilde{A}}$-$D$-Lipschitz continuous, $\tilde{B}$ is $\alpha_{\tilde{B}}$-$D$-Lipschitz continuous, and $\tilde{C}$ is $\alpha_{\tilde{C}}$-$D$-Lipschitz continuous, we have
$$\begin{aligned}
\|S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\|
&=\|S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n},w_{n})+S(u_{n-1},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n})+S(u_{n-1},v_{n-1},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\|\\
&\le \delta_{S_{1}}\|u_{n}-u_{n-1}\|+\delta_{S_{2}}\|v_{n}-v_{n-1}\|+\delta_{S_{3}}\|w_{n}-w_{n-1}\|\\
&\le \delta_{S_{1}}D(\tilde{A}(\dot{a}_{n}),\tilde{A}(\dot{a}_{n-1}))+\delta_{S_{2}}D(\tilde{B}(\dot{a}_{n}),\tilde{B}(\dot{a}_{n-1}))+\delta_{S_{3}}D(\tilde{C}(\dot{a}_{n}),\tilde{C}(\dot{a}_{n-1}))\\
&\le \big(\delta_{S_{1}}\alpha_{\tilde{A}}+\delta_{S_{2}}\alpha_{\tilde{B}}+\delta_{S_{3}}\alpha_{\tilde{C}}\big)\|\dot{a}_{n}-\dot{a}_{n-1}\|.
\end{aligned}$$
Using the preceding estimate and the assumption $\rho_{E}(t)\le Ct^{2}$, we evaluate
$$\begin{aligned}
4d^{2}\rho_{E}\!\left(\frac{4\|S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\|}{d}\right)
&=4d^{2}\rho_{E}\!\left(\frac{4}{d}\,\big\|S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n},w_{n})+S(u_{n-1},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n})+S(u_{n-1},v_{n-1},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big\|\right)\\
&\le 64C\big(\|S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n},w_{n})\|^{2}+\|S(u_{n-1},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n})\|^{2}+\|S(u_{n-1},v_{n-1},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\|^{2}\big)\\
&\le 64C\big(\delta_{S_{1}}^{2}\|u_{n}-u_{n-1}\|^{2}+\delta_{S_{2}}^{2}\|v_{n}-v_{n-1}\|^{2}+\delta_{S_{3}}^{2}\|w_{n}-w_{n-1}\|^{2}\big)\\
&\le 64C\big(\delta_{S_{1}}^{2}D^{2}(\tilde{A}(\dot{a}_{n}),\tilde{A}(\dot{a}_{n-1}))+\delta_{S_{2}}^{2}D^{2}(\tilde{B}(\dot{a}_{n}),\tilde{B}(\dot{a}_{n-1}))+\delta_{S_{3}}^{2}D^{2}(\tilde{C}(\dot{a}_{n}),\tilde{C}(\dot{a}_{n-1}))\big)\\
&\le 64C\big(\delta_{S_{1}}^{2}\alpha_{\tilde{A}}^{2}+\delta_{S_{2}}^{2}\alpha_{\tilde{B}}^{2}+\delta_{S_{3}}^{2}\alpha_{\tilde{C}}^{2}\big)\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}.
\end{aligned}$$
Combining the two estimates above, we have
$$\big\|\dot{a}_{n}-\dot{a}_{n-1}-\big(S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big)\big\|^{2}\le\Big[1-2(\lambda_{S_{1}}+\lambda_{S_{2}}+\lambda_{S_{3}})+64C\big(\delta_{S_{1}}^{2}\alpha_{\tilde{A}}^{2}+\delta_{S_{2}}^{2}\alpha_{\tilde{B}}^{2}+\delta_{S_{3}}^{2}\alpha_{\tilde{C}}^{2}\big)\Big]\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2},$$
which implies that
$$\big\|\dot{a}_{n}-\dot{a}_{n-1}-\big(S(u_{n},v_{n},w_{n})-S(u_{n-1},v_{n-1},w_{n-1})\big)\big\|\le\sqrt{1-2(\lambda_{S_{1}}+\lambda_{S_{2}}+\lambda_{S_{3}})+64C\big(\delta_{S_{1}}^{2}\alpha_{\tilde{A}}^{2}+\delta_{S_{2}}^{2}\alpha_{\tilde{B}}^{2}+\delta_{S_{3}}^{2}\alpha_{\tilde{C}}^{2}\big)}\;\|\dot{a}_{n}-\dot{a}_{n-1}\|.$$
Since $\tilde{m}$ is $\lambda_{m}$-Lipschitz continuous, we have
$$\|\tilde{m}(\dot{a}_{n})-\tilde{m}(\dot{a}_{n-1})\|\le\lambda_{m}\|\dot{a}_{n}-\dot{a}_{n-1}\|.$$
As the Yosida approximation operator $Y^{\hat{M}}_{h_{1},\lambda}$ is $\delta_{Y_{h_{1}}}$-strongly accretive and $\lambda_{Y_{h_{1}}}$-Lipschitz continuous, applying Proposition 1 we evaluate
$$\begin{aligned}
\big\|\dot{a}_{n}-\dot{a}_{n-1}-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1})\big)\big\|^{2}
&\le\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}-2\lambda\big\langle Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1}),\ J\big(\dot{a}_{n}-\dot{a}_{n-1}-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1})\big)\big)\big\rangle\\
&=\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}-2\lambda\big\langle Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1}),\ J(\dot{a}_{n}-\dot{a}_{n-1})\big\rangle\\
&\qquad-2\lambda\big\langle Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1}),\ J\big(\dot{a}_{n}-\dot{a}_{n-1}-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1})\big)\big)-J(\dot{a}_{n}-\dot{a}_{n-1})\big\rangle\\
&\le\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}-2\lambda\delta_{Y_{h_{1}}}\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}+4d^{2}\rho_{E}\!\left(\frac{4\lambda^{2}\big\|Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1})\big\|}{d}\right)\\
&\le\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}-2\lambda\delta_{Y_{h_{1}}}\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}+64C\lambda^{4}\big\|Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1})\big\|^{2}\\
&\le\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}-2\lambda\delta_{Y_{h_{1}}}\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}+64C\lambda^{4}\lambda_{Y_{h_{1}}}^{2}\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2}\\
&=\big(1-2\lambda\delta_{Y_{h_{1}}}+64C\lambda^{4}\lambda_{Y_{h_{1}}}^{2}\big)\|\dot{a}_{n}-\dot{a}_{n-1}\|^{2},
\end{aligned}$$
that is,
$$\big\|\dot{a}_{n}-\dot{a}_{n-1}-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n})-Y^{\hat{M}}_{h_{1},\lambda}(\dot{a}_{n-1})\big)\big\|\le\sqrt{1-2\lambda\delta_{Y_{h_{1}}}+64C\lambda^{4}\lambda_{Y_{h_{1}}}^{2}}\;\|\dot{a}_{n}-\dot{a}_{n-1}\|.$$
Using the same arguments for $Y^{\hat{N}}_{h_{2},\lambda}$, we have
$$\big\|\dot{a}_{n}-\dot{a}_{n-1}-\lambda\big(Y^{\hat{N}}_{h_{2},\lambda}(\dot{a}_{n})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a}_{n-1})\big)\big\|\le\sqrt{1-2\lambda\delta_{Y_{h_{2}}}+64C\lambda^{4}\lambda_{Y_{h_{2}}}^{2}}\;\|\dot{a}_{n}-\dot{a}_{n-1}\|.$$
Substituting the above estimates into the first inequality of this proof, we obtain
$$\begin{aligned}
\|\dot{a}_{n+1}-\dot{a}_{n}\|
&\le\sqrt{1-2(\lambda_{S_{1}}+\lambda_{S_{2}}+\lambda_{S_{3}})+64C\big(\delta_{S_{1}}^{2}\alpha_{\tilde{A}}^{2}+\delta_{S_{2}}^{2}\alpha_{\tilde{B}}^{2}+\delta_{S_{3}}^{2}\alpha_{\tilde{C}}^{2}\big)}\;\|\dot{a}_{n}-\dot{a}_{n-1}\|+2\lambda_{m}\|\dot{a}_{n}-\dot{a}_{n-1}\|+\big(\delta_{S_{1}}\alpha_{\tilde{A}}+\delta_{S_{2}}\alpha_{\tilde{B}}+\delta_{S_{3}}\alpha_{\tilde{C}}\big)\|\dot{a}_{n}-\dot{a}_{n-1}\|\\
&\qquad+\sqrt{1-2\lambda\delta_{Y_{h_{1}}}+64C\lambda^{4}\lambda_{Y_{h_{1}}}^{2}}\;\|\dot{a}_{n}-\dot{a}_{n-1}\|+\sqrt{1-2\lambda\delta_{Y_{h_{2}}}+64C\lambda^{4}\lambda_{Y_{h_{2}}}^{2}}\;\|\dot{a}_{n}-\dot{a}_{n-1}\|\\
&=\theta\,\|\dot{a}_{n}-\dot{a}_{n-1}\|,
\end{aligned}$$
where
$$\theta=\sqrt{1-2(\lambda_{S_{1}}+\lambda_{S_{2}}+\lambda_{S_{3}})+64C\big(\delta_{S_{1}}^{2}\alpha_{\tilde{A}}^{2}+\delta_{S_{2}}^{2}\alpha_{\tilde{B}}^{2}+\delta_{S_{3}}^{2}\alpha_{\tilde{C}}^{2}\big)}+2\lambda_{m}+\big(\delta_{S_{1}}\alpha_{\tilde{A}}+\delta_{S_{2}}\alpha_{\tilde{B}}+\delta_{S_{3}}\alpha_{\tilde{C}}\big)+\sqrt{1-2\lambda\delta_{Y_{h_{1}}}+64C\lambda^{4}\lambda_{Y_{h_{1}}}^{2}}+\sqrt{1-2\lambda\delta_{Y_{h_{2}}}+64C\lambda^{4}\lambda_{Y_{h_{2}}}^{2}}.$$
In view of assumption (vii), $0<\theta<1$, and hence $\{\dot{a}_{n}\}$ is a Cauchy sequence in $E$; say $\dot{a}_{n}\to\dot{a}\in E$. Using (4)–(6) of Iterative Method 1, the $D$-Lipschitz continuity of $\tilde{A},\tilde{B},\tilde{C}$, and the techniques of Ahmad and Irfan [14], it follows that $\{u_{n}\}$, $\{v_{n}\}$, and $\{w_{n}\}$ are also Cauchy sequences in $E$. Thus, $u_{n}\to u\in E$, $v_{n}\to v\in E$, and $w_{n}\to w\in E$. Since $Q_{F}$, $S(\cdot,\cdot,\cdot)$, $\tilde{A}$, $\tilde{B}$, $\tilde{C}$, $h_{1}$, $h_{2}$, $\hat{M}$, $\hat{N}$, $Y^{\hat{M}}_{h_{1},\lambda}$, and $Y^{\hat{N}}_{h_{2},\lambda}$ are all continuous operators on $E$, we have
$$\dot{a}=\dot{a}+\tilde{m}(\dot{a})-S(u,v,w)+Q_{F}\big[S(u,v,w)-\lambda\big(Y^{\hat{M}}_{h_{1},\lambda}(\dot{a})-Y^{\hat{N}}_{h_{2},\lambda}(\dot{a})\big)-\tilde{m}(\dot{a})\big].$$
It remains to show that $u\in\tilde{A}(\dot{a})$, $v\in\tilde{B}(\dot{a})$, and $w\in\tilde{C}(\dot{a})$. In fact,
$$d(u,\tilde{A}(\dot{a}))=\inf\{\|u-h\| : h\in\tilde{A}(\dot{a})\}\le\|u-u_{n}\|+d(u_{n},\tilde{A}(\dot{a}))\le\|u-u_{n}\|+D(\tilde{A}(\dot{a}_{n}),\tilde{A}(\dot{a}))\le\|u-u_{n}\|+\alpha_{\tilde{A}}\|\dot{a}_{n}-\dot{a}\|\to 0.$$
Hence, $d(u,\tilde{A}(\dot{a}))=0$ and, since $\tilde{A}(\dot{a})$ is closed, $u\in\tilde{A}(\dot{a})$. Similarly, $v\in\tilde{B}(\dot{a})$ and $w\in\tilde{C}(\dot{a})$. From Theorem 2, the result follows. □
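As a final, purely numerical illustration (a hedged addition of ours, not taken from the paper), condition (vii) of Theorem 3 can be checked once all constants are known. The helper below computes $\theta$ for plug-in values chosen only to show the calculation; they are not claimed to arise from a concrete family of operators.

```python
import math

def theta(C, lam, lam_S, del_S, alpha, lam_m, del_Y1, del_Y2, lam_Y1, lam_Y2):
    """Contraction constant theta of Theorem 3; lam_S, del_S, alpha are 3-tuples."""
    t1 = math.sqrt(1 - 2 * sum(lam_S)
                   + 64 * C * sum(d * d * a * a for d, a in zip(del_S, alpha)))
    t2 = 2 * lam_m
    t3 = sum(d * a for d, a in zip(del_S, alpha))
    t4 = math.sqrt(1 - 2 * lam * del_Y1 + 64 * C * lam**4 * lam_Y1**2)
    t5 = math.sqrt(1 - 2 * lam * del_Y2 + 64 * C * lam**4 * lam_Y2**2)
    return t1 + t2 + t3 + t4 + t5

# Hypothetical plug-in values (illustrative only).
th = theta(C=0.002, lam=1.0,
           lam_S=(0.163, 0.163, 0.163), del_S=(0.25, 0.25, 0.25), alpha=(0.2, 0.2, 0.2),
           lam_m=0.01, del_Y1=0.48, del_Y2=0.48, lam_Y1=0.5, lam_Y2=0.5)
print(th, 0 < th < 1)   # approximately 0.86, so condition (vii) holds for these inputs
```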

5. Conclusions

In this work, we considered a version of the co-variational inequality problem that differs from those available in the literature: a co-variational inequality problem involving two generalized Yosida approximation operators, each depending on a different generalized resolvent operator. Some properties of generalized Yosida approximation operators were proved. Using the concept of a nonexpansive sunny retraction, we proved an existence and convergence result for problem (1).
Our results may serve as a basis for further generalizations and for computational experiments.

Author Contributions

Conceptualization, R.A.; methodology, R.A. and A.K.R.; software, M.I.; validation, R.A., M.I. and H.A.R.; formal analysis, Y.W.; resources, Y.W. and H.A.R.; writing—original draft preparation, M.I.; writing—review and editing, A.K.R.; visualization, A.K.R.; supervision, R.A.; project administration, R.A. and Y.W.; funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant no. 12171435).

Data Availability Statement

Not Applicable.

Acknowledgments

The authors are thankful to the anonymous referees for their valuable suggestions, which improved this paper considerably.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hartman, P.; Stampacchia, G. On some nonlinear elliptic differential-functional equations. Acta Math. 1966, 115, 271–310. [Google Scholar] [CrossRef]
  2. Ahmad, R.; Ansari, Q.H. An iterative algorithm for generalized nonlinear variational inclusions. Appl. Math. Lett. 2000, 13, 23–26. [Google Scholar] [CrossRef] [Green Version]
  3. Ansari, Q.H.; Yao, J.C. A fixed point theorem and its applications to a system of variational inequalities. Bull. Aust. Math. Soc. 1995, 159, 433–442. [Google Scholar] [CrossRef] [Green Version]
  4. Aubin, J.P.; Ekeland, L. Applied Nonlinear Analysis; John Wiley and Sons: New York, NY, USA, 1984. [Google Scholar]
  5. Bensoussan, A.; Lions, J.L. Applications of Variational Inequalities in Stochastic Control; Studies in Mathematics and Its Applications; North-Holland: Amsterdam, The Netherelands, 1982; Volume 12. [Google Scholar]
  6. Giannessi, F.; Maugeri, A. Variational Inequalities and Network Equilibrium Problems; Plenum: New York, NY, USA, 1995. [Google Scholar]
  7. Guo, J.S.; Yao, J.C. Extension of strongly nonlinear quasivariational inequalities. Appl. Math. Lett. 1992, 5, 35–38. [Google Scholar] [CrossRef]
  8. Liu, L.; Yao, J.C. Iterative methods for solving variational inequality problems with a double-hierarchical structure in Hilbert spaces. Optimization 2022. [Google Scholar] [CrossRef]
  9. Nagurney, A. Variational Inequalities. In Encyclopedia of Optimization; Floudas, C., Pardalos, P., Eds.; Springer: Boston, MA, USA, 2008. [Google Scholar]
  10. Nagurney, A. Network Economics: Handbook of Computational Econometrics; Belsley, D., Kontoghiorghes, E., Eds.; John Wiley and Sons: Chichester, UK, 2009; pp. 429–486. [Google Scholar]
  11. Siddiqi, A.H.; Ansari, Q.H. Strongly nonlinear quasivariational inequalities. J. Math. Anal. Appl. 1990, 149, 444–450. [Google Scholar] [CrossRef] [Green Version]
  12. Yao, J.C. The generalized quasivariational inequality problem with applications. J. Math. Anal. Appl. 1991, 158, 139–160. [Google Scholar] [CrossRef] [Green Version]
  13. Alber, Y.; Yao, J.C. Algorithm for generalized multi-valued covariational inequalities in Banach spaces. Funct. Differ. Equ. 2000, 7, 5–13. [Google Scholar]
  14. Ahmad, R.; Irfan, S.S. On completely generalized multi-valued co-variational inequalities involving strongly accretive operators. Filomat 2012, 26, 657–663. [Google Scholar] [CrossRef]
  15. Petterson, R. Projection scheme for stochastic differential equations with convex contractions. Stoch. Process Appl. 2000, 88, 125–134. [Google Scholar] [CrossRef] [Green Version]
  16. De, A. Hille-Yosida Theorem and Some Applications. Ph.D Thesis, Central European University, Budapest, Hungary, 2017. [Google Scholar]
  17. Ayaka, M.; Tomomi, Y. Applications of the Hille-Yosida theorem to the linearized equations of coupled sound and heat flow. AIMS Math. 2016, 1, 165–177. [Google Scholar]
  18. Sinestrari, E. On the Hille-Yosida Operators, Dekker Lecture Notes; Dekker: New York, NY, USA, 1994; Volume 155, pp. 537–543. [Google Scholar]
  19. Sinestrari, E. Hille-Yosida Operators and Cauchy Problems. In Semigroup Forum 82; Springer: New York, NY, USA, 2011; pp. 10–34. [Google Scholar] [CrossRef]
  20. Yosida, K. Functional Analysis; Grundlehren der Mathematischen Wissenschaften; Springer: Heidelberg, Germany, 1971; Volume 123. [Google Scholar]
  21. Alber, Y. Metric and generalized projection operators in Banach spaces: Properties and applications. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type; Kartsatos, A., Ed.; Marcel Dekker: New York, NY, USA, 1996. [Google Scholar]
  22. Kato, T. Perturbation Theory for Linear Operators; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 1980. [Google Scholar]
  23. Kukushkin, M.V. Abstract fractional calculus for m-accretive operators. Int. J. Appl. Math. 2021, 34, 1–41. [Google Scholar] [CrossRef]
  24. Kukushkin, M.V. On one method of studying spectral properties of non-selfadjoint operators. Abstr. Appl. Anal. 2020, 2020, 1461647. [Google Scholar] [CrossRef]
  25. Benyamini, Y.; Linderstrauss, J. Geometric Nonlinear Functional Analysis, I; AMS, Coloquium Publications: Providence, RI, USA, 2000; Volume 48. [Google Scholar]
  26. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984. [Google Scholar]
  27. Reich, S. Asymptotic behavior of contractions in Banach Spaces. J. Math. Anal. Appl. 1973, 44, 57–70. [Google Scholar] [CrossRef] [Green Version]
  28. Bruck, R.E. Nonexpansive projections on subsets of Banach spaces. Pac. J. Math. 1973, 47, 341–355. [Google Scholar] [CrossRef]
  29. Ahmad, R.; Ali, I.; Rahaman, M.; Ishtyak, M.; Yao, J.C. Cayley inclusion problem with its corresponding generalized resolvent equation problem in uniformly smooth Banach spaces. Appl. Anal. 2020, 101, 1354–1368. [Google Scholar] [CrossRef]
  30. Nadler, S.B., Jr. Multi-valued contraction mappings. Pac. J. Math. 1969, 30, 475–488. [Google Scholar] [CrossRef] [Green Version]