Article

Convergence Analysis for Generalized Yosida Inclusion Problem with Applications

1 Department of Mathematics, Faculty of Science, Islamic University of Madinah, Madinah 42351, Saudi Arabia
2 Department of Mathematics, Faculty of Science, University of Tabuk, P.O. Box 4279, Tabuk 71491, Saudi Arabia
3 Department of Mathematics, College of Arts and Science, Wadi-Ad-Dwasir, Prince Sattam Bin Abdulaziz University, Al-Kharj 11991, Saudi Arabia
4 School of Mathematics, Thapar Institute of Engineering & Technology, Patiala 147004, Punjab, India
5 Department of Mathematics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
6 Center for Intelligent Secure Systems, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(6), 1409; https://doi.org/10.3390/math11061409
Submission received: 29 January 2023 / Revised: 6 March 2023 / Accepted: 11 March 2023 / Published: 14 March 2023

Abstract: A new generalized Yosida inclusion problem, involving $A$-relaxed co-accretive mappings, is introduced. The resolvent and the associated generalized Yosida approximation operator are constructed and a few of their characteristics are discussed. The existence result is established in $q$-uniformly smooth Banach spaces. A four-step iterative scheme is proposed and its convergence analysis is discussed. Our theoretical assertions are illustrated by a numerical example. In addition, we confirm that the developed method is almost stable for contractions. Further, an equivalent generalized resolvent equation problem is established. Finally, by utilizing the Yosida inclusion problem, we investigate a resolvent equation problem and, by employing our proposed method, a Volterra–Fredholm integral equation is examined.

1. Introduction

The investigation of variational inequality theory began in the mid-1960s with Hartman and Stampacchia [1]. The theory was designed to handle complicated problems in mathematical programming, partial differential equations, and mechanics, and it has grown into a widely accepted framework for modeling mathematical problems arising in the pure and applied sciences. Owing to its indispensable role and application-oriented features, the theory has been investigated and analyzed in diverse directions; see [2,3,4,5,6,7,8,9] and the references therein.
One of the most innovative and unifying generalizations is reported in [10]. Let $\mathcal{H}$ be a real Hilbert space, and let $P : \mathcal{H} \to \mathcal{H}$ and $B : \mathcal{H} \to 2^{\mathcal{H}}$ be single-valued and set-valued mappings, respectively. The variational inclusion problem is to find $s \in \mathcal{H}$ such that
$$0 \in P(s) + B(s), \qquad (1)$$
where $0$ represents the zero vector in $\mathcal{H}$. If $\mathcal{H} = \mathbb{R}^n$, then relation (1) coincides with the generalized equation presented by Robinson [11]. If $P = 0$, then (1) coincides with the variational inclusion due to Rockafellar [12]. It is worth mentioning that variational inclusions provide an appropriate substructure for studying optimization and related problems; see [13,14,15,16,17,18,19,20]. The resolvent technique is an effective tool for exploring variational inclusions. Resolvent and Yosida approximation operators play a salient role in many problems arising in pure and applied analysis, particularly in partial differential equations and convex analysis; see, for example, [21,22,23,24,25,26,27].
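To make the inclusion above concrete, here is a minimal numerical sketch in one dimension, assuming the toy data $P(s) = s - 1$ and $B = \partial|\cdot|$ (both hypothetical, chosen only for illustration). The resolvent of $B$ is soft-thresholding, and the classical forward-backward iteration $s_{k+1} = (I + \varrho B)^{-1}(s_k - \varrho P(s_k))$ drives $s_k$ toward a point satisfying $0 \in P(s) + B(s)$:

```python
# Toy 1-D inclusion 0 in P(s) + B(s) with P(s) = s - 1 and B = subdifferential
# of |.| (hypothetical data). The resolvent (I + rho*B)^{-1} is soft-thresholding.
def soft_threshold(v, lam):
    # resolvent of lam * d|.| : shrink v toward 0 by lam
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def forward_backward(s0=2.0, rho=0.5, iters=60):
    # s <- (I + rho*B)^{-1}(s - rho*P(s)), the classical forward-backward step
    s = s0
    for _ in range(iters):
        s = soft_threshold(s - rho * (s - 1.0), rho)
    return s
```

For this toy instance the solution is $s = 0$, since $-P(0) = 1 \in \partial|0| = [-1, 1]$, and the iterates contract toward it geometrically.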
In practice, the central goal is to seek approximate solutions to these problems. One of the best approaches for dealing with variational inclusions and Yosida inclusions is the resolvent operator and its alternative forms, including resolvent equations, because these techniques yield an equivalent fixed point formulation, which plays a vital role in constructing fixed point iterative procedures for solving Yosida inclusions and optimization-related problems. Mann-like iterative algorithms and their variants are fundamental tools among fixed point algorithms for exploring nonlinear problems. To date, numerous iterative methods have been devised to approximate solutions of nonlinear problems; see, for example, [28,29,30,31,32,33].
Motivated by the facts mentioned above, we study the generalized Yosida inclusion problem and the corresponding generalized resolvent equation. We define a resolvent and an associated generalized Yosida approximation operator for an $A$-relaxed co-accretive mapping, and discuss some characteristics of the generalized Yosida approximation operator. An existence result for the generalized Yosida inclusion problem is proved. A four-step iterative algorithm is proposed and its convergence analysis is presented; we also show that the proposed scheme is almost stable for contraction mappings. Further, a generalized resolvent equation corresponding to the generalized Yosida inclusion problem is presented and the equivalence between the two is established. Finally, we define an iterative scheme and discuss its convergence, to examine the generalized resolvent equation. Our results and their convergence are verified by a numerical illustration.

2. Prefatory and Supplementals

Let $\mathcal{B}$ be a real Banach space with norm $\|\cdot\|$, and let $d$ be the induced metric. Let $\langle \cdot, \cdot \rangle$ be the duality pairing between $\mathcal{B}$ and its topological dual $\mathcal{B}^*$. We denote by $CB(\mathcal{B})$ (respectively, $2^{\mathcal{B}}$) the family of all nonempty closed and bounded subsets (respectively, all nonempty subsets) of $\mathcal{B}$, and by $D(\cdot,\cdot)$ the Hausdorff metric on $CB(\mathcal{B})$, defined by
$$D(P,Q) = \max\Big\{ \sup_{s_1 \in P} d(s_1, Q),\; \sup_{s_2 \in Q} d(P, s_2) \Big\},$$
where $d(s_1, Q) = \inf_{s_2 \in Q} d(s_1, s_2)$ and $d(P, s_2) = \inf_{s_1 \in P} d(s_1, s_2)$. Define the generalized duality mapping $J_q : \mathcal{B} \to 2^{\mathcal{B}^*}$ by
$$J_q(s) = \{ f \in \mathcal{B}^* : \langle s, f \rangle = \|s\|^q,\; \|f\| = \|s\|^{q-1} \}, \quad q > 1,\; s \in \mathcal{B},$$
where $J_2$ is the normalized duality mapping. Note that $J_q(s) = \|s\|^{q-2} J_2(s)$ for all $s (\neq 0) \in \mathcal{B}$. If $\mathcal{B} = \mathcal{H}$ is a real Hilbert space, then $J_2$ reduces to the identity mapping. A Banach space $\mathcal{B}$ is said to be uniformly convex if, for each $\varepsilon \in (0, 2]$, there exists $\delta > 0$ such that
$$\|s_1\| = \|s_2\| = 1,\; \|s_1 - s_2\| \geq \varepsilon \;\Longrightarrow\; \Big\|\frac{s_1 + s_2}{2}\Big\| \leq 1 - \delta.$$
A Banach space $\mathcal{B}$ is called smooth if
$$\lim_{t \to 0} \frac{\|s_1 + t s_2\| - \|s_1\|}{t}$$
exists for each $s_1, s_2 \in \{ s_3 \in \mathcal{B} : \|s_3\| = 1 \}$. The modulus of smoothness $\rho_{\mathcal{B}} : [0, \infty) \to [0, \infty)$ of $\mathcal{B}$ is defined by
$$\rho_{\mathcal{B}}(t) = \sup\Big\{ \frac{\|s_1 + s_2\| + \|s_1 - s_2\|}{2} - 1 : \|s_1\| \leq 1,\; \|s_2\| \leq t \Big\}.$$
$\mathcal{B}$ is uniformly smooth if $\lim_{t \to 0} \rho_{\mathcal{B}}(t)/t = 0$. For a real number $q > 1$, $\mathcal{B}$ is said to be $q$-uniformly smooth if there exists $c > 0$ such that $\rho_{\mathcal{B}}(t) \leq c t^q$; note that a $q$-uniformly smooth Banach space is uniformly smooth. Moreover, $J_q$ is single-valued whenever $\mathcal{B}$ is uniformly smooth. The following fundamental result, due to Xu [34], is significant.
Lemma 1.
A real uniformly smooth Banach space $\mathcal{B}$ is $q$-uniformly smooth if and only if, for all $s_1, s_2 \in \mathcal{B}$, there exists $c_q > 0$ such that
$$\|s_1 + s_2\|^q \leq \|s_1\|^q + q\langle s_2, J_q(s_1) \rangle + c_q\|s_2\|^q, \quad q > 1.$$
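In the Hilbert-space case ($q = 2$, $J_2 = I$, $c_2 = 1$), Xu's inequality reduces to the identity $\|s_1 + s_2\|^2 = \|s_1\|^2 + 2\langle s_2, s_1 \rangle + \|s_2\|^2$, so it holds with equality. A quick numerical sanity check of this special case (illustrative only):

```python
import random

# Check Xu's inequality in R^3 with q = 2, c_2 = 1 (Hilbert case), where it is
# an identity: |s1 + s2|^2 = |s1|^2 + 2<s2, s1> + |s2|^2.
def check_xu_q2(trials=1000):
    for _ in range(trials):
        s1 = [random.uniform(-1, 1) for _ in range(3)]
        s2 = [random.uniform(-1, 1) for _ in range(3)]
        lhs = sum((a + b) ** 2 for a, b in zip(s1, s2))
        rhs = (sum(a * a for a in s1)
               + 2 * sum(a * b for a, b in zip(s1, s2))
               + sum(b * b for b in s2))
        if abs(lhs - rhs) > 1e-9:
            return False
    return True
```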
Definition 1.
Let $A : \mathcal{B} \to \mathcal{B}$ and $N : \mathcal{B} \times \mathcal{B} \to \mathcal{B}$ be single-valued mappings. Then
 (i) 
$A$ is called $\sigma_1$-strongly accretive if, for all $s_1, s_2 \in \mathcal{B}$, there exists $\sigma_1 > 0$ satisfying
$$\langle A(s_1) - A(s_2), J_q(s_1 - s_2) \rangle \geq \sigma_1\|s_1 - s_2\|^q;$$
 (ii) 
$A$ is called relaxed $(\varsigma, \kappa)$-cocoercive if, for all $s_1, s_2 \in \mathcal{B}$, there exist constants $\varsigma, \kappa > 0$ satisfying
$$\langle A(s_1) - A(s_2), J_q(s_1 - s_2) \rangle \geq (-\varsigma)\|A(s_1) - A(s_2)\|^q + \kappa\|s_1 - s_2\|^q;$$
 (iii) 
$A$ is called $\delta_A$-Lipschitz continuous if, for all $s_1, s_2 \in \mathcal{B}$, there exists $\delta_A > 0$ satisfying
$$\|A(s_1) - A(s_2)\| \leq \delta_A\|s_1 - s_2\|;$$
 (iv) 
$N$ is called Lipschitz continuous in the first argument if, for all $s_1, s_2 \in \mathcal{B}$, there exists $\delta_{N_1} > 0$ satisfying
$$\|N(s_1, \cdot) - N(s_2, \cdot)\| \leq \delta_{N_1}\|s_1 - s_2\|.$$
Similarly, Lipschitz continuity of $N$ in the second argument can be defined.
Definition 2.
A set-valued mapping $T : \mathcal{B} \to CB(\mathcal{B})$ is called $D$-Lipschitz continuous if, for all $s_1, s_2 \in \mathcal{B}$, there exists $\delta_{D_T} > 0$ satisfying
$$D(T(s_1), T(s_2)) \leq \delta_{D_T}\|s_1 - s_2\|.$$
Definition 3
([35]). Let $\varphi, \psi, A : \mathcal{B} \to \mathcal{B}$ and $G : \mathcal{B} \times \mathcal{B} \to 2^{\mathcal{B}}$ be single-valued and multi-valued mappings, respectively. Then,
 (i) 
$G(\varphi, \cdot)$ is termed $\alpha$-strongly accretive with regard to $\varphi$ if, for all $s_1, s_2, s_3 \in \mathcal{B}$ and for all $p_1 \in G(\varphi(s_1), s_3)$, $p_2 \in G(\varphi(s_2), s_3)$, there exists $\alpha > 0$ such that
$$\langle p_1 - p_2, J_q(s_1 - s_2) \rangle \geq \alpha\|s_1 - s_2\|^q;$$
 (ii) 
$G(\cdot, \psi)$ is termed $\beta$-relaxed accretive with regard to $\psi$ if, for all $s_1, s_2, s_3 \in \mathcal{B}$ and for all $p_1 \in G(s_3, \psi(s_1))$, $p_2 \in G(s_3, \psi(s_2))$, there exists $\beta > 0$ such that
$$\langle p_1 - p_2, J_q(s_1 - s_2) \rangle \geq (-\beta)\|s_1 - s_2\|^q;$$
 (iii) 
if $G(\varphi, \cdot)$ is strongly accretive with regard to $\varphi$ and $G(\cdot, \psi)$ is relaxed accretive with regard to $\psi$, then $G(\varphi, \psi)$ is called symmetric accretive with regard to $\varphi$ and $\psi$.
Definition 4.
Let $\varphi, \psi, A : \mathcal{B} \to \mathcal{B}$ be single-valued mappings. A multi-valued mapping $G : \mathcal{B} \times \mathcal{B} \to 2^{\mathcal{B}}$ is called $A$-relaxed co-accretive with respect to $\varphi$ and $\psi$ if $A$ is relaxed $(\varsigma, \kappa)$-cocoercive, $G(\varphi, \psi)$ is symmetric accretive with respect to $\varphi$ and $\psi$, and for every $\varrho > 0$, $(A + \varrho G(\varphi, \psi))(\mathcal{B}) = \mathcal{B}$.
Definition 5.
Let $\varphi, \psi, A : \mathcal{B} \to \mathcal{B}$ be single-valued mappings and let $G : \mathcal{B} \times \mathcal{B} \to 2^{\mathcal{B}}$ be $A$-relaxed co-accretive with respect to $\varphi$ and $\psi$. Then the resolvent $R^{G(\cdot,\cdot)}_{\varrho, A} : \mathcal{B} \to \mathcal{B}$ is defined by
$$R^{G(\cdot,\cdot)}_{\varrho, A}(s) = [A + \varrho G(\varphi, \psi)]^{-1}(s), \quad s \in \mathcal{B},\; \varrho > 0. \qquad (2)$$
Lemma 2.
Let $\varphi, \psi, A : \mathcal{B} \to \mathcal{B}$ be single-valued mappings such that $A$ is $\delta_A$-Lipschitz continuous, and let $G : \mathcal{B} \times \mathcal{B} \to 2^{\mathcal{B}}$ be an $A$-relaxed co-accretive mapping with regard to $\varphi$ and $\psi$. Then, for all $\varrho > 0$ with $\alpha > \beta$ and $\kappa > \varsigma\delta_A^q$, the resolvent operator $R^{G(\cdot,\cdot)}_{\varrho, A}$ is single-valued.
Proof. 
Given $p \in \mathcal{B}$ and $\varrho > 0$, let $s_1, s_2 \in [A + \varrho G(\varphi, \psi)]^{-1}(p)$. Then, we have
$$\Lambda(s_1) = \frac{1}{\varrho}[p - A(s_1)] \in G(\varphi(s_1), \psi(s_1)),$$
$$\Lambda(s_2) = \frac{1}{\varrho}[p - A(s_2)] \in G(\varphi(s_2), \psi(s_2)).$$
Since $G$ is an $A$-relaxed co-accretive mapping with regard to $\varphi$ and $\psi$, we obtain
$$(\alpha - \beta)\|s_1 - s_2\|^q \leq \langle \Lambda(s_1) - \Lambda(s_2), J_q(s_1 - s_2) \rangle = \frac{1}{\varrho}\langle p - A(s_1) - (p - A(s_2)), J_q(s_1 - s_2) \rangle = -\frac{1}{\varrho}\langle A(s_1) - A(s_2), J_q(s_1 - s_2) \rangle \leq \frac{\varsigma\|A(s_1) - A(s_2)\|^q - \kappa\|s_1 - s_2\|^q}{\varrho}.$$
Utilizing the $\delta_A$-Lipschitz continuity of $A$, we acquire
$$(\alpha - \beta)\|s_1 - s_2\|^q \leq \frac{(\varsigma\delta_A^q - \kappa)}{\varrho}\|s_1 - s_2\|^q,$$
which implies $0 \leq [\varrho(\alpha - \beta) + (\kappa - \varsigma\delta_A^q)]\|s_1 - s_2\|^q \leq 0$. Since $\alpha > \beta$ and $\kappa > \varsigma\delta_A^q$, we deduce that $s_1 = s_2$. Therefore, the resolvent operator defined by $R^{G(\cdot,\cdot)}_{\varrho, A}(s) = [A + \varrho G(\varphi, \psi)]^{-1}(s)$ is single-valued. □
Lemma 3.
Assume that the mappings $\varphi, \psi, A$, and $G$ are the same as described in Lemma 2. Then, the resolvent $R^{G(\cdot,\cdot)}_{\varrho, A} : \mathcal{B} \to \mathcal{B}$ is $\vartheta$-Lipschitz continuous, i.e.,
$$\|R^{G(\cdot,\cdot)}_{\varrho, A}(s_1) - R^{G(\cdot,\cdot)}_{\varrho, A}(s_2)\| \leq \vartheta\|s_1 - s_2\|, \quad \forall s_1, s_2 \in \mathcal{B} \text{ and } \varrho > 0, \qquad (7)$$
where $\vartheta = \dfrac{1}{\varrho(\alpha - \beta) + (\kappa - \varsigma\delta_A^q)}$, $\alpha > \beta$ and $\kappa > \varsigma\delta_A^q$.
Proof. 
Let $s_1, s_2 \in \mathcal{B}$. It follows from (2) that
$$R^{G(\cdot,\cdot)}_{\varrho, A}(s_1) = [A + \varrho G(\varphi, \psi)]^{-1}(s_1), \qquad R^{G(\cdot,\cdot)}_{\varrho, A}(s_2) = [A + \varrho G(\varphi, \psi)]^{-1}(s_2).$$
Therefore, writing $R_i = R^{G(\cdot,\cdot)}_{\varrho, A}(s_i)$ for brevity,
$$\Lambda(s_1) = \frac{1}{\varrho}(s_1 - A(R_1)) \in G(\varphi(R_1), \psi(R_1)),$$
$$\Lambda(s_2) = \frac{1}{\varrho}(s_2 - A(R_2)) \in G(\varphi(R_2), \psi(R_2)).$$
The symmetric accretivity of $G$ with respect to $\varphi$ and $\psi$ yields
$$(\alpha - \beta)\|R_1 - R_2\|^q \leq \langle \Lambda(s_1) - \Lambda(s_2), J_q(R_1 - R_2) \rangle = \frac{1}{\varrho}\langle s_1 - A(R_1) - (s_2 - A(R_2)), J_q(R_1 - R_2) \rangle = \frac{1}{\varrho}\langle s_1 - s_2, J_q(R_1 - R_2) \rangle - \frac{1}{\varrho}\langle A(R_1) - A(R_2), J_q(R_1 - R_2) \rangle.$$
Employing the relaxed $(\varsigma, \kappa)$-cocoercivity and $\delta_A$-Lipschitz continuity of $A$, we obtain
$$(\alpha - \beta)\|R_1 - R_2\|^q \leq \frac{1}{\varrho}\langle s_1 - s_2, J_q(R_1 - R_2) \rangle + \frac{1}{\varrho}\big[\varsigma\|A(R_1) - A(R_2)\|^q - \kappa\|R_1 - R_2\|^q\big] \leq \frac{1}{\varrho}\langle s_1 - s_2, J_q(R_1 - R_2) \rangle + \frac{(\varsigma\delta_A^q - \kappa)}{\varrho}\|R_1 - R_2\|^q,$$
which implies that
$$\langle s_1 - s_2, J_q(R_1 - R_2) \rangle \geq [\varrho(\alpha - \beta) + (\kappa - \varsigma\delta_A^q)]\|R_1 - R_2\|^q.$$
That is,
$$\|s_1 - s_2\|\,\|R_1 - R_2\|^{q-1} \geq [\varrho(\alpha - \beta) + (\kappa - \varsigma\delta_A^q)]\|R_1 - R_2\|^q.$$
Consequently, we get
$$\|R_1 - R_2\| \leq \vartheta\|s_1 - s_2\|,$$
where $\vartheta = \dfrac{1}{\varrho(\alpha - \beta) + (\kappa - \varsigma\delta_A^q)}$. □

3. Generalized Yosida Approximation Operator

Here, we explore a few characteristics of the generalized Yosida approximation operator associated with an $A$-relaxed co-accretive mapping.
Definition 6.
The generalized Yosida approximation operator $J^{G(\cdot,\cdot)}_{\varrho, A} : \mathcal{B} \to \mathcal{B}$ is defined as
$$J^{G(\cdot,\cdot)}_{\varrho, A}(s) = \frac{1}{\varrho}\big[A - R^{G(\cdot,\cdot)}_{\varrho, A}\big](s), \quad s \in \mathcal{B},\; \varrho > 0, \qquad (8)$$
where $R^{G(\cdot,\cdot)}_{\varrho, A}$ is defined in (2).
Lemma 4.
Let $\varphi, \psi, A : \mathcal{B} \to \mathcal{B}$ be single-valued mappings such that $A$ is $\delta_A$-Lipschitz continuous and $\eta$-strongly accretive. Suppose $G : \mathcal{B} \times \mathcal{B} \to 2^{\mathcal{B}}$ is an $A$-relaxed co-accretive mapping with regard to $\varphi$ and $\psi$. Then, for $\varrho > 0$ with $\alpha > \beta$ and $\kappa > \varsigma\delta_A^q$, the generalized Yosida approximation operator $J^{G(\cdot,\cdot)}_{\varrho, A}$ is
 (i) 
$\mathcal{L}$-Lipschitz continuous, where $\mathcal{L} = \dfrac{\delta_A + \vartheta}{\varrho}$;
 (ii) 
$\rho$-strongly accretive, where $\rho = \dfrac{\eta - \vartheta}{\varrho}$ and $\vartheta = \dfrac{1}{\varrho(\alpha - \beta) + (\kappa - \varsigma\delta_A^q)}$.
Proof. 
(i). Let $s_1, s_2 \in \mathcal{B}$. Utilizing the Lipschitz continuity of $A$ and of $R^{G(\cdot,\cdot)}_{\varrho, A}$ in (8), we achieve
$$\|J^{G(\cdot,\cdot)}_{\varrho, A}(s_1) - J^{G(\cdot,\cdot)}_{\varrho, A}(s_2)\| = \frac{1}{\varrho}\big\|[A(s_1) - R^{G(\cdot,\cdot)}_{\varrho, A}(s_1)] - [A(s_2) - R^{G(\cdot,\cdot)}_{\varrho, A}(s_2)]\big\| \leq \frac{1}{\varrho}\big[\|A(s_1) - A(s_2)\| + \|R^{G(\cdot,\cdot)}_{\varrho, A}(s_1) - R^{G(\cdot,\cdot)}_{\varrho, A}(s_2)\|\big] \leq \frac{1}{\varrho}\big[\delta_A\|s_1 - s_2\| + \vartheta\|s_1 - s_2\|\big] = \frac{\delta_A + \vartheta}{\varrho}\|s_1 - s_2\|,$$
i.e., $\|J^{G(\cdot,\cdot)}_{\varrho, A}(s_1) - J^{G(\cdot,\cdot)}_{\varrho, A}(s_2)\| \leq \mathcal{L}\|s_1 - s_2\|$.
(ii). Let $s_1, s_2 \in \mathcal{B}$; then from (8) and the $\eta$-strong accretivity of $A$, we get
$$\langle J^{G(\cdot,\cdot)}_{\varrho, A}(s_1) - J^{G(\cdot,\cdot)}_{\varrho, A}(s_2), J_q(s_1 - s_2) \rangle = \frac{1}{\varrho}\big[\langle A(s_1) - A(s_2), J_q(s_1 - s_2) \rangle - \langle R^{G(\cdot,\cdot)}_{\varrho, A}(s_1) - R^{G(\cdot,\cdot)}_{\varrho, A}(s_2), J_q(s_1 - s_2) \rangle\big] \geq \frac{1}{\varrho}\big[\eta\|s_1 - s_2\|^q - \|R^{G(\cdot,\cdot)}_{\varrho, A}(s_1) - R^{G(\cdot,\cdot)}_{\varrho, A}(s_2)\|\,\|s_1 - s_2\|^{q-1}\big].$$
Making use of the Lipschitz continuity of $R^{G(\cdot,\cdot)}_{\varrho, A}$, we obtain
$$\langle J^{G(\cdot,\cdot)}_{\varrho, A}(s_1) - J^{G(\cdot,\cdot)}_{\varrho, A}(s_2), J_q(s_1 - s_2) \rangle \geq \frac{1}{\varrho}\big[\eta\|s_1 - s_2\|^q - \vartheta\|s_1 - s_2\|^q\big] = \frac{\eta - \vartheta}{\varrho}\|s_1 - s_2\|^q,$$
i.e., $\langle J^{G(\cdot,\cdot)}_{\varrho, A}(s_1) - J^{G(\cdot,\cdot)}_{\varrho, A}(s_2), J_q(s_1 - s_2) \rangle \geq \rho\|s_1 - s_2\|^q$, where $\rho = \dfrac{\eta - \vartheta}{\varrho}$. Thus, $J^{G(\cdot,\cdot)}_{\varrho, A}$ is $\rho$-strongly accretive. □
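As a sanity check on Lemma 4, consider a scalar toy setting (hypothetical, not the general mapping of the lemma): $\mathcal{B} = \mathbb{R}$, $A = I$ (so $\delta_A = \eta = 1$), and $G = \partial|\cdot|$. The resolvent is then soft-thresholding and (8) reduces to the classical Yosida approximation, which the sketch below verifies numerically to be monotone (accretive) and Lipschitz:

```python
import random

def resolvent(s, rho):
    # resolvent of rho * d|.| : soft-thresholding
    return max(abs(s) - rho, 0.0) * (1.0 if s >= 0 else -1.0)

def yosida(s, rho):
    # (8) with A = I: J_rho(s) = (s - R_rho(s)) / rho, i.e. clamp(s/rho, -1, 1)
    return (s - resolvent(s, rho)) / rho

def check_lemma4(rho=0.5, trials=500):
    for _ in range(trials):
        s1, s2 = random.uniform(-3, 3), random.uniform(-3, 3)
        dj, ds = yosida(s1, rho) - yosida(s2, rho), s1 - s2
        if dj * ds < -1e-12:                          # accretive (monotone)
            return False
        if abs(dj) > (2.0 / rho) * abs(ds) + 1e-12:   # within L = (delta_A + theta)/rho
            return False
    return True
```

Here the actual Lipschitz constant is $1/\varrho$, comfortably inside the bound $\mathcal{L} = (\delta_A + \vartheta)/\varrho \leq 2/\varrho$ of Lemma 4 (i).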

4. Main Results

Now, we formulate the generalized Yosida inclusion problem, followed by the existence result and an inspection of the convergence of the recommended scheme. Hereafter, we assume $(\mathcal{B}, \|\cdot\|)$ is a $q$-uniformly smooth Banach space.
Let $\phi, \varphi, \psi, A : \mathcal{B} \to \mathcal{B}$ and $N : \mathcal{B} \times \mathcal{B} \to \mathcal{B}$ be single-valued mappings, let $T : \mathcal{B} \to CB(\mathcal{B})$ be a multi-valued mapping, and let $G : \mathcal{B} \times \mathcal{B} \to 2^{\mathcal{B}}$ be an $A$-relaxed co-accretive mapping with regard to $\varphi$ and $\psi$. The generalized Yosida inclusion problem (GYIP) is to locate $s \in \mathcal{B}$, $u \in T(s)$, such that
$$0 \in J^{G(\varphi,\psi)}_{\varrho, A}(s) + G(\varphi(s), \psi(s)) + N(u, s - \phi(s)). \qquad (9)$$

4.1. Existence Result

Lemma 5.
A point $s \in \mathcal{B}$, $u \in T(s)$, is a solution of GYIP (9) if, for some $\varrho > 0$, $s \in F(\Phi)$, where $F(\Phi)$ is the set of all fixed points of $\Phi$ and
$$\Phi(s) = R^{G(\varphi,\psi)}_{\varrho, A}\big[A(s) - \varrho\big(J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s))\big)\big]. \qquad (10)$$
Proof. 
The proof is a direct consequence of (2), so we omit it. □
Theorem 1.
Let $\phi, \varphi, \psi, A : \mathcal{B} \to \mathcal{B}$ and $N : \mathcal{B} \times \mathcal{B} \to \mathcal{B}$ be single-valued mappings, where $\phi$ is $\delta_\phi$-Lipschitz continuous and $\sigma$-strongly accretive, $A$ is $\delta_A$-Lipschitz continuous, and $N$ is $\delta_{N_1}$- and $\delta_{N_2}$-Lipschitz continuous in the first and second argument, respectively. Let $T : \mathcal{B} \to CB(\mathcal{B})$ be a $D$-Lipschitz continuous mapping with constant $\delta_{D_T}$, and let $G : \mathcal{B} \times \mathcal{B} \to 2^{\mathcal{B}}$ be $A$-relaxed co-accretive with regard to $\varphi$ and $\psi$. Suppose that $\varrho > 0$ satisfies
$$0 < \vartheta\Big[\delta_A + \varrho\Big(\mathcal{L} + \delta_{N_1}\delta_{D_T} + \delta_{N_2}\sqrt[q]{1 - q\sigma + c_q\delta_\phi^q}\Big)\Big] < 1, \quad q\sigma - c_q\delta_\phi^q < 1, \qquad (11)$$
where $\vartheta = \dfrac{1}{\varrho(\alpha - \beta) + (\kappa - \varsigma\delta_A^q)}$ and $\mathcal{L} = \dfrac{\delta_A + \vartheta}{\varrho}$, $\alpha > \beta$, $\kappa > \varsigma\delta_A^q$. Then GYIP (9) has a unique solution.
Proof. 
Let $s_1, s_2 \in \mathcal{B}$ and $u_1 \in T(s_1)$, $u_2 \in T(s_2)$. Utilizing Lemma 3, Lemma 4, the Lipschitz continuity of $A$, and (10), we obtain
$$\|\Phi(s_1) - \Phi(s_2)\| = \big\|R^{G(\varphi,\psi)}_{\varrho, A}\big[A(s_1) - \varrho\big(J^{G(\varphi,\psi)}_{\varrho, A}(s_1) + N(u_1, s_1 - \phi(s_1))\big)\big] - R^{G(\varphi,\psi)}_{\varrho, A}\big[A(s_2) - \varrho\big(J^{G(\varphi,\psi)}_{\varrho, A}(s_2) + N(u_2, s_2 - \phi(s_2))\big)\big]\big\| \leq \vartheta\big[\|A(s_1) - A(s_2)\| + \varrho\|J^{G(\varphi,\psi)}_{\varrho, A}(s_1) - J^{G(\varphi,\psi)}_{\varrho, A}(s_2)\| + \varrho\|N(u_1, s_1 - \phi(s_1)) - N(u_2, s_2 - \phi(s_2))\|\big] \leq \vartheta\delta_A\|s_1 - s_2\| + \vartheta\varrho\mathcal{L}\|s_1 - s_2\| + \vartheta\varrho\|N(u_1, s_1 - \phi(s_1)) - N(u_2, s_2 - \phi(s_2))\|. \qquad (12)$$
By the Lipschitz continuity of $N$ and the $D$-Lipschitz continuity of $T$, we acquire
$$\|N(u_1, s_1 - \phi(s_1)) - N(u_2, s_2 - \phi(s_2))\| \leq \delta_{N_1}\|u_1 - u_2\| + \delta_{N_2}\|(s_1 - s_2) - (\phi(s_1) - \phi(s_2))\| \leq \delta_{N_1}\delta_{D_T}\|s_1 - s_2\| + \delta_{N_2}\|(s_1 - s_2) - (\phi(s_1) - \phi(s_2))\|. \qquad (13)$$
In view of the $\sigma$-strong accretivity and $\delta_\phi$-Lipschitz continuity of $\phi$, together with Lemma 1, we achieve
$$\|(s_1 - s_2) - (\phi(s_1) - \phi(s_2))\|^q \leq \|s_1 - s_2\|^q - q\langle \phi(s_1) - \phi(s_2), J_q(s_1 - s_2) \rangle + c_q\|\phi(s_1) - \phi(s_2)\|^q \leq \|s_1 - s_2\|^q - q\sigma\|s_1 - s_2\|^q + c_q\delta_\phi^q\|s_1 - s_2\|^q = (1 - q\sigma + c_q\delta_\phi^q)\|s_1 - s_2\|^q,$$
which implies that
$$\|(s_1 - s_2) - (\phi(s_1) - \phi(s_2))\| \leq \sqrt[q]{1 - q\sigma + c_q\delta_\phi^q}\,\|s_1 - s_2\|. \qquad (14)$$
By making use of (13) and (14), (12) becomes
$$\|\Phi(s_1) - \Phi(s_2)\| \leq \vartheta\Big[\delta_A + \varrho\Big(\mathcal{L} + \delta_{N_1}\delta_{D_T} + \delta_{N_2}\sqrt[q]{1 - q\sigma + c_q\delta_\phi^q}\Big)\Big]\|s_1 - s_2\|. \qquad (15)$$
Thus, from (15), we deduce that
$$\|\Phi(s_1) - \Phi(s_2)\| \leq \Delta\|s_1 - s_2\|, \qquad (16)$$
where $\Delta = \vartheta\Big[\delta_A + \varrho\Big(\mathcal{L} + \delta_{N_1}\delta_{D_T} + \delta_{N_2}\sqrt[q]{1 - q\sigma + c_q\delta_\phi^q}\Big)\Big]$, $\vartheta = \dfrac{1}{\varrho(\alpha - \beta) + (\kappa - \varsigma\delta_A^q)}$, and $\mathcal{L} = \dfrac{\delta_A + \vartheta}{\varrho}$. From Condition (11), we have $\Delta < 1$. Then, (16) becomes
$$\|\Phi(s_1) - \Phi(s_2)\| < \|s_1 - s_2\|.$$
By the Banach contraction principle, there exists $s \in \mathcal{B}$ such that $\Phi(s) = s$. Hence, by employing Lemma 5, we acquire $s \in \mathcal{B}$, $u \in T(s)$ as the unique solution of GYIP (9). □

4.2. Convergence Result

Algorithm 1.
Suppose that the mappings $\phi, \varphi, \psi, A, N, T, G$ are the same as in Theorem 1. Then, for any given $s_0 \in \mathcal{B}$, $u_0 \in T(s_0)$, we approximate $\{s_n\}$ and $\{u_n\}$ by the following scheme:
$$\begin{aligned} \mu_n &= \Phi[(1 - p_n)s_n + p_n\Phi(s_n)], \\ \theta_n &= \Phi[(1 - r_n)\mu_n + r_n\Phi(\mu_n)], \\ \eta_n &= \Phi(\theta_n), \\ s_{n+1} &= \Phi(\eta_n), \\ u_{n+1} &\in T(s_{n+1}) : \|u_{n+1} - u_n\| \leq D(T(s_{n+1}), T(s_n)), \end{aligned} \qquad (17)$$
where $u_n \in T(s_n)$, and $\{p_n\}$, $\{r_n\}$ are sequences in $(0,1)$ satisfying $\sum_{n=0}^{\infty} p_n = \infty$, $n = 0, 1, 2, \ldots$.
Next, we present and analyze the convergence of the scheme (17).
Theorem 2.
Let the mappings $\phi, \varphi, \psi, A, N, T, G$ be the same and satisfy all the assumptions of Theorem 1, and suppose that Condition (11) holds. Then the sequences $\{s_n\}$ and $\{u_n\}$, generated by iterative Algorithm 1, converge strongly to $s$ and $u$, respectively, where $s \in \mathcal{B}$, $u \in T(s)$ solves GYIP (9).
Proof. 
Let $s \in F(\Phi)$. Then, from (16) and (17), we obtain
$$\|\mu_n - s\| = \|\Phi[(1 - p_n)s_n + p_n\Phi(s_n)] - s\| \leq \Delta[(1 - p_n)\|s_n - s\| + p_n\|\Phi(s_n) - s\|] \leq \Delta[(1 - p_n)\|s_n - s\| + p_n\Delta\|s_n - s\|] = \Delta[1 - p_n(1 - \Delta)]\|s_n - s\|. \qquad (18)$$
$$\|\theta_n - s\| = \|\Phi[(1 - r_n)\mu_n + r_n\Phi(\mu_n)] - s\| \leq \Delta[(1 - r_n)\|\mu_n - s\| + r_n\Delta\|\mu_n - s\|] = \Delta[1 - r_n(1 - \Delta)]\|\mu_n - s\| \leq \Delta^2[1 - r_n(1 - \Delta)][1 - p_n(1 - \Delta)]\|s_n - s\|. \qquad (19)$$
$$\|\eta_n - s\| = \|\Phi(\theta_n) - s\| \leq \Delta\|\theta_n - s\| \leq \Delta^3[1 - r_n(1 - \Delta)][1 - p_n(1 - \Delta)]\|s_n - s\|. \qquad (20)$$
$$\|s_{n+1} - s\| = \|\Phi(\eta_n) - s\| \leq \Delta\|\eta_n - s\| \leq \Delta^4[1 - r_n(1 - \Delta)][1 - p_n(1 - \Delta)]\|s_n - s\|. \qquad (21)$$
Since $\{r_n\}$ is a sequence in $(0,1)$ and $\Delta < 1$, we have $1 - r_n(1 - \Delta) < 1$. Hence, (21) yields
$$\|s_{n+1} - s\| \leq \Delta^4[1 - p_n(1 - \Delta)]\|s_n - s\|. \qquad (22)$$
By replicating the process, we obtain
$$\|s_{n+1} - s\| \leq \Delta^4[1 - p_n(1 - \Delta)]\|s_n - s\| \leq \Delta^4[1 - p_n(1 - \Delta)]\cdot\Delta^4[1 - p_{n-1}(1 - \Delta)]\|s_{n-1} - s\| \leq \cdots \leq \Delta^{4(n+1)}\prod_{m=0}^{n}[1 - p_m(1 - \Delta)]\|s_0 - s\|. \qquad (23)$$
Using the facts that $p_m \in (0,1)$ for all $m \in \mathbb{N}$, $0 < \Delta < 1$, and $1 - x \leq e^{-x}$ for $x \in [0,1]$, (23) becomes
$$\|s_{n+1} - s\| \leq \Delta^{4(n+1)}\|s_0 - s\|\,e^{-(1 - \Delta)\sum_{m=0}^{n} p_m}, \qquad (24)$$
which, after taking limits on both sides, leads to $\lim_{n \to \infty} s_n = s$. From Algorithm 1, we achieve
$$\|u_n - u\| \leq D(T(s_n), T(s)) \leq \delta_{D_T}\|s_n - s\|. \qquad (25)$$
From (25), one can see that $u_n \to u$ as $n \to \infty$. Next, we show that $u \in T(s)$. Since $u_n \in T(s_n)$, we obtain
$$d(u, T(s)) \leq \|u - u_n\| + d(u_n, T(s)) \leq \|u - u_n\| + D(T(s_n), T(s)) \leq \|u - u_n\| + \delta_{D_T}\|s_n - s\| \to 0, \quad \text{as } n \to \infty. \qquad (26)$$
Hence, $d(u, T(s)) = 0$; therefore $u \in T(s)$, as $T(s) \in CB(\mathcal{B})$. By utilizing Lemma 5, it follows that $s \in \mathcal{B}$, $u \in T(s)$ is the unique solution of GYIP (9). □
Example 1.
Let $\mathcal{B} = \mathbb{R}$, with inner product $\langle s, t \rangle = s \cdot t$ and induced norm $\|s\| = |s|$. Define $\phi, \varphi, \psi, A : \mathcal{B} \to \mathcal{B}$ and $N : \mathcal{B} \times \mathcal{B} \to \mathcal{B}$ by
$$\phi(s) = \frac{5}{7}s, \quad \varphi(s) = \frac{6}{5}s, \quad \psi(s) = \frac{3}{5}s, \quad A(s) = \frac{6}{5}s \quad \text{and} \quad N(s, t) = \frac{s + t}{3}, \quad \text{for all } s, t \in \mathcal{B}.$$
Then
$$\|\phi(s) - \phi(t)\| = \Big|\frac{5}{7}s - \frac{5}{7}t\Big| = \frac{5}{7}|s - t| \leq \frac{5}{6}\|s - t\|,$$
i.e., $\phi$ is $\delta_\phi$-Lipschitz continuous with $\delta_\phi = \frac{5}{6}$.
$$\langle \phi(s) - \phi(t), s - t \rangle = \Big\langle \frac{5}{7}s - \frac{5}{7}t, s - t \Big\rangle = \frac{5}{7}\|s - t\|^2 \geq \frac{5}{8}\|s - t\|^2,$$
i.e., $\phi$ is $\frac{5}{8}$-strongly monotone.
$$\|A(s) - A(t)\| = \Big|\frac{6}{5}s - \frac{6}{5}t\Big| = \frac{6}{5}|s - t| \leq \frac{5}{4}\|s - t\|,$$
i.e., $A$ is $\delta_A$-Lipschitz continuous with $\delta_A = \frac{5}{4}$.
$$\langle A(s) - A(t), s - t \rangle = \Big\langle \frac{6}{5}s - \frac{6}{5}t, s - t \Big\rangle = \frac{6}{5}\|s - t\|^2 \geq -\frac{1}{2}\|A(s) - A(t)\|^2 + \frac{1}{3}\|s - t\|^2,$$
i.e., $A$ is relaxed $(\frac{1}{2}, \frac{1}{3})$-cocoercive.
$$\Big\|N\Big(\frac{1}{5}s, w\Big) - N\Big(\frac{1}{5}t, w\Big)\Big\| = \Big|\frac{s/5 + w}{3} - \frac{t/5 + w}{3}\Big| = \frac{1}{15}|s - t| \leq \frac{1}{9}\|s - t\|,$$
i.e., $N$ is $\delta_{N_1}$-Lipschitz continuous in the first argument with $\delta_{N_1} = \frac{1}{9}$.
$$\|N(u, s - \phi(s)) - N(u, t - \phi(t))\| = \Big|\frac{u + s - \phi(s)}{3} - \frac{u + t - \phi(t)}{3}\Big| = \frac{2}{21}|s - t| \leq \frac{1}{10}\|s - t\|,$$
i.e., $N$ is $\delta_{N_2}$-Lipschitz continuous in the second argument with $\delta_{N_2} = \frac{1}{10}$. Define $T : \mathcal{B} \to 2^{\mathcal{B}}$ and $G : \mathcal{B} \times \mathcal{B} \to 2^{\mathcal{B}}$ by
$$T(s) = \Big\{\frac{1}{5}s\Big\} \quad \text{and} \quad G(\varphi(s), \psi(s)) = 5(\varphi(s) + \psi(s)), \quad \text{for all } s \in \mathcal{B}.$$
$$D(T(s), T(t)) = \Big|\frac{1}{5}s - \frac{1}{5}t\Big| \leq \frac{1}{5}\|s - t\|,$$
i.e., $T$ is $\delta_{D_T}$-Lipschitz continuous with $\delta_{D_T} = \frac{1}{5}$.
$$\langle G(\varphi(s), w) - G(\varphi(t), w), s - t \rangle = \langle 5(\varphi(s) + w) - 5(\varphi(t) + w), s - t \rangle = 5\langle \varphi(s) - \varphi(t), s - t \rangle = 5 \cdot \frac{6}{5}\|s - t\|^2 = 6\|s - t\|^2 \geq \frac{11}{2}\|s - t\|^2,$$
i.e., $G$ is $\frac{11}{2}$-strongly accretive with respect to $\varphi$. Similarly, one can easily prove that $G$ is $\frac{7}{2}$-relaxed accretive with respect to $\psi$. Thus, $G$ is symmetric accretive. Now, for any $s \in \mathcal{B}$, it is easy to notice that
$$[A + \varrho G(\varphi, \psi)](s) = \frac{15\varrho + 6}{5}s,$$
i.e., $[A + \varrho G(\varphi, \psi)](\mathcal{B}) = \mathcal{B}$ for all $\varrho > 0$. Hence $G$ is an $A$-relaxed co-accretive mapping.
Now, for $\varrho = 1$, the resolvent operator is
$$R^{G(\cdot,\cdot)}_{\varrho, A}(s) = [A + \varrho G(\varphi, \psi)]^{-1}(s) = \frac{5}{21}s.$$
Also,
$$\|R^{G(\cdot,\cdot)}_{\varrho, A}(s) - R^{G(\cdot,\cdot)}_{\varrho, A}(t)\| = \Big|\frac{5}{21}s - \frac{5}{21}t\Big| \leq \frac{2}{7}\|s - t\|,$$
i.e., $R^{G(\cdot,\cdot)}_{\varrho, A}$ is $\vartheta$-Lipschitz continuous with $\vartheta = \frac{2}{7}$. Further, the Yosida approximation operator is given by
$$J^{G(\cdot,\cdot)}_{\varrho, A}(s) = \frac{1}{\varrho}\big[A - R^{G(\cdot,\cdot)}_{\varrho, A}\big](s) = \frac{101}{105}s.$$
Next,
$$\|J^{G(\cdot,\cdot)}_{\varrho, A}(s) - J^{G(\cdot,\cdot)}_{\varrho, A}(t)\| = \Big|\frac{101}{105}s - \frac{101}{105}t\Big| \leq \frac{11}{10}\|s - t\|,$$
i.e., $J^{G(\cdot,\cdot)}_{\varrho, A}$ is $\mathcal{L}$-Lipschitz continuous with $\mathcal{L} = \frac{11}{10}$.
$$\langle J^{G(\cdot,\cdot)}_{\varrho, A}(s) - J^{G(\cdot,\cdot)}_{\varrho, A}(t), s - t \rangle = \Big\langle \frac{101}{105}s - \frac{101}{105}t, s - t \Big\rangle \geq \frac{9}{10}\|s - t\|^2,$$
i.e., $J^{G(\cdot,\cdot)}_{\varrho, A}$ is $\frac{9}{10}$-strongly monotone. Additionally, for $\varrho = 1$, we have
$$\vartheta\Big[\delta_A + \varrho\Big(\mathcal{L} + \delta_{N_1}\delta_{D_T} + \delta_{N_2}\sqrt[q]{1 - q\sigma + c_q\delta_\phi^q}\Big)\Big] = \frac{2}{7}\Bigg[\frac{5}{4} + \frac{11}{10} + \frac{1}{9}\cdot\frac{1}{5} + \frac{1}{10}\sqrt{1 - 2\cdot\frac{5}{8} + \Big(\frac{5}{6}\Big)^2}\,\Bigg] = 0.6968 < 1.$$
Thus, all the assumptions of Theorem 1 and Condition (11) are verified. Clearly, $s = 0$ is the fixed point of $\Phi(s) = R^{G(\cdot,\cdot)}_{\varrho, A}\big[A(s) - \varrho\big(J^{G(\cdot,\cdot)}_{\varrho, A}(s) + N(u, s - \phi(s))\big)\big]$.
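The constant above can be re-checked mechanically. The short script below (an independent arithmetic check, not part of the proof) reproduces $\Delta \approx 0.6968$ from the constants of Example 1 with $q = 2$ and $c_q = 1$:

```python
import math
from fractions import Fraction as F

# Constants from Example 1 (q = 2, c_q = 1, rho = 1)
theta, delta_A, rho = F(2, 7), F(5, 4), 1
L, dN1, dDT, dN2 = F(11, 10), F(1, 9), F(1, 5), F(1, 10)
sigma, delta_phi, q, cq = F(5, 8), F(5, 6), 2, 1

root = math.sqrt(1 - q * sigma + cq * delta_phi ** q)   # sqrt(4/9) = 2/3
Delta = float(theta) * (float(delta_A)
                        + rho * (float(L) + float(dN1 * dDT) + float(dN2) * root))
```

Exactly, $\sqrt{1 - 2\cdot\frac{5}{8} + (\frac{5}{6})^2} = \frac{2}{3}$, giving $\Delta = \frac{439}{630} \approx 0.6968 < 1$.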
Let $p_n = \frac{1}{n+1}$ and $r_n = \frac{n+1}{n+2}$. Now, we approximate the sequences $\{\mu_n\}$, $\{\theta_n\}$, $\{\eta_n\}$, and $\{s_n\}$ by employing Algorithm 1 as below:
$$\mu_n = \Phi\Big[\Big(1 - \frac{1}{n+1}\Big)s_n + \frac{8}{441(n+1)}s_n\Big], \quad \theta_n = \Phi\Big[\frac{1}{n+2}\mu_n + \frac{8(n+1)}{441(n+2)}\mu_n\Big], \quad \eta_n = \Phi(\theta_n), \quad s_{n+1} = \Phi(\eta_n), \quad \text{where } \Phi(s_n) = \frac{8}{441}s_n.$$
Thus, all the assumptions of Theorem 2 are fulfilled, and $\{s_n\}$ converges strongly to $s = 0$, which solves GYIP (9).
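Since the composed operator of Example 1 reduces to the scalar contraction $\Phi(s) = \frac{8}{441}s$, the four-step scheme converges to $s = 0$ very quickly. A direct numerical sketch of Algorithm 1 for this example (illustrative only):

```python
def Phi(s):
    # composed operator of Example 1: Phi(s) = (8/441) s
    return 8.0 * s / 441.0

def algorithm1(s0=1.0, steps=10):
    traj, s = [s0], s0
    for n in range(steps):
        p, r = 1.0 / (n + 1), (n + 1) / (n + 2)
        mu = Phi((1 - p) * s + p * Phi(s))        # first averaged step
        theta = Phi((1 - r) * mu + r * Phi(mu))   # second averaged step
        s = Phi(Phi(theta))                       # eta_n = Phi(theta_n), s_{n+1} = Phi(eta_n)
        traj.append(s)
    return traj
```

Each outer step applies $\Phi$ four times, so $|s_{n+1}| \lesssim (8/441)^4 |s_n|$ and the trajectory collapses to $0$ within a few iterations.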

4.3. Stability Result

When dealing with real-world problems, one computes an approximate sequence, say $\{t_n\}$, in lieu of the exact sequence $\{s_n\}$, owing to rounding errors in the approximating functions. An iterative method is said to be $\Phi$-stable if $\{t_n\}$ converges to the fixed point of $\Phi$. Ostrowski [36] coined the concept of stability of a fixed point iterative scheme.
Definition 7
([36]). Let $\{s_n\}$ be an approximate sequence in a Banach space $\mathcal{B}$. For some function $\Gamma$, an iterative sequence $t_{n+1} = \Gamma(\Phi, t_n)$ converging to $s$ is said to be $\Phi$-stable if, for $\epsilon_n = \|t_{n+1} - \Gamma(\Phi, t_n)\|$, $n \in \mathbb{N}$, we have $\lim_{n \to \infty} \epsilon_n = 0 \Rightarrow \lim_{n \to \infty} t_n = s$.
The concept of almost stability, a weaker notion than stability, was defined by Osilike as follows:
Definition 8
([37]). Let $\{s_n\}$ be an approximate sequence in a Banach space $\mathcal{B}$. For some function $\Gamma$, the iterative sequence $t_{n+1} = \Gamma(\Phi, t_n)$ converging to $s$ is said to be almost $\Phi$-stable if, for $\epsilon_n = \|t_{n+1} - \Gamma(\Phi, t_n)\|$, $n \in \mathbb{N}$, we have $\sum_{n=1}^{\infty} \epsilon_n < \infty \Rightarrow \lim_{n \to \infty} t_n = s$.
Clearly, any Φ -stable iterative method is almost Φ -stable, but the converse statement is not true in general, see [37].
To prove the stability of the iterative procedure, we need the following essential lemma.
Lemma 6
([38]). Let $\{\varepsilon_n\}$ and $\{s_n\}$ be non-negative real sequences and $0 \leq \vartheta < 1$, such that $s_{n+1} \leq \vartheta s_n + \varepsilon_n$ for all $n \in \mathbb{N}$. Then $\sum_{n=1}^{\infty} s_n < \infty$, provided $\sum_{n=1}^{\infty} \varepsilon_n < \infty$.
Theorem 3.
If $\phi, \varphi, \psi, A, N, T$, and $G$ are the same as in Theorem 1 and satisfy Condition (11), then the Scheme (17) is almost $\Phi$-stable.
Proof. 
Let $\{t_n\}$ be an arbitrary sequence in $\mathcal{B}$ approximating $\{s_n\}$. Assume that $\{t_{n+1}\}$, generated by (17), fulfills the relation $t_{n+1} = \Gamma(\Phi, t_n)$, converging to the unique solution $s$, and set $\epsilon_n = \|t_{n+1} - \Gamma(\Phi, t_n)\|$, $n \in \mathbb{N}$. To establish the almost stability of $\Phi$, we show that $\sum_{n=1}^{\infty} \epsilon_n < \infty \Rightarrow \lim_{n \to \infty} t_n = s$. Suppose that $\sum_{n=1}^{\infty} \epsilon_n < \infty$; making use of (17), we obtain
$$\|t_{n+1} - s\| \leq \|t_{n+1} - \Gamma(\Phi, t_n)\| + \|\Gamma(\Phi, t_n) - s\| = \epsilon_n + \|\Phi(\Phi(\Phi(\Phi[(1 - p_n)t_n + p_n\Phi(t_n)])))\ - s\| \leq \epsilon_n + \Delta^4[1 - r_n(1 - \Delta)][1 - p_n(1 - \Delta)]\|t_n - s\|. \qquad (27)$$
Since $\Delta < 1$ and $\{r_n\}, \{p_n\} \subset (0, 1)$, setting $\partial_n = \|t_n - s\|$, (27) turns into $\partial_{n+1} \leq \epsilon_n + \Delta^4\partial_n$. As $\sum_{n=1}^{\infty} \epsilon_n < \infty$, Lemma 6 yields $\sum_{n=1}^{\infty} \partial_n < \infty$, whence $\lim_{n \to \infty} \partial_n = 0$. Hence, $\lim_{n \to \infty} t_n = s$; that is, the iterative method (17) is almost $\Phi$-stable. □
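A quick numerical illustration of the recursion behind Theorem 3 (toy values: $\Delta^4$ is borrowed from the contraction factor $\frac{8}{441}$ of Example 1, and the summable perturbation $\epsilon_n = 2^{-n}$ is hypothetical):

```python
def perturbed_iteration(t0=1.0, steps=60):
    # t_{n+1} = Delta^4 * t_n + eps_n with summable eps_n = 2^{-n};
    # almost stability predicts t_n -> s* = 0 despite the perturbations.
    Delta4 = (8.0 / 441.0) ** 4
    t = t0
    for n in range(steps):
        t = Delta4 * t + 0.5 ** n
    return t
```

The perturbed iterates still approach the fixed point $s^* = 0$, consistent with $\sum_n \epsilon_n < \infty \Rightarrow t_n \to s^*$.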

5. Applications

Here, we employ the generalized Yosida inclusion problem to investigate the generalized resolvent equation problem. In addition, we utilize our developed scheme to examine the Volterra–Fredholm integral equation.

5.1. Generalized Resolvent Equation Problem

This subsection begins with the formulation of the resolvent equation problem for the generalized Yosida inclusion problem, which is to locate $s, w \in \mathcal{B}$ such that
$$J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s)) + \varrho^{-1}S^{G(\varphi,\psi)}_{\varrho, A}(w) = 0, \qquad (28)$$
where $S^{G(\varphi,\psi)}_{\varrho, A}(w) = \big[I - A\big(R^{G(\varphi,\psi)}_{\varrho, A}\big)\big](w)$ and $R^{G(\varphi,\psi)}_{\varrho, A}$ is the resolvent associated with the $A$-relaxed co-accretive mapping. We call problem (28) the generalized resolvent equation problem (GREP, for short).
Now, we establish equivalence between GYIP (9) and GREP (28).
Proposition 1.
A point $s \in \mathcal{B}$, $u \in T(s)$, is a solution of GYIP (9) if and only if GREP (28) has a solution $s, w \in \mathcal{B}$, provided $A$ is one-to-one and
$$s = R^{G(\varphi,\psi)}_{\varrho, A}(w), \qquad (29)$$
$$w = A(s) - \varrho\big[J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s))\big]. \qquad (30)$$
Proof. 
Suppose $s \in \mathcal{B}$ is a solution of GYIP (9); then, by Lemma 5,
$$s = R^{G(\varphi,\psi)}_{\varrho, A}\big[A(s) - \varrho\big(J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s))\big)\big].$$
Since $w = A(s) - \varrho[J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s))]$, we have $s = R^{G(\varphi,\psi)}_{\varrho, A}(w)$. Thus,
$$w = A(R^{G(\varphi,\psi)}_{\varrho, A}(w)) - \varrho\big[J^{G(\varphi,\psi)}_{\varrho, A}(R^{G(\varphi,\psi)}_{\varrho, A}(w)) + N(u, R^{G(\varphi,\psi)}_{\varrho, A}(w) - \phi(R^{G(\varphi,\psi)}_{\varrho, A}(w)))\big],$$
which implies that
$$w - A(R^{G(\varphi,\psi)}_{\varrho, A}(w)) = -\varrho\big[J^{G(\varphi,\psi)}_{\varrho, A}(R^{G(\varphi,\psi)}_{\varrho, A}(w)) + N(u, R^{G(\varphi,\psi)}_{\varrho, A}(w) - \phi(R^{G(\varphi,\psi)}_{\varrho, A}(w)))\big],$$
i.e.,
$$S^{G(\varphi,\psi)}_{\varrho, A}(w) = -\varrho\big[J^{G(\varphi,\psi)}_{\varrho, A}(R^{G(\varphi,\psi)}_{\varrho, A}(w)) + N(u, R^{G(\varphi,\psi)}_{\varrho, A}(w) - \phi(R^{G(\varphi,\psi)}_{\varrho, A}(w)))\big].$$
It follows from (29) that
$$S^{G(\varphi,\psi)}_{\varrho, A}(w) = -\varrho\big[J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s))\big].$$
Thus, we have
$$J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s)) + \varrho^{-1}S^{G(\varphi,\psi)}_{\varrho, A}(w) = 0.$$
Conversely, suppose that $s, w \in \mathcal{B}$ is a solution of GREP (28); then we have
$$\varrho\big[J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s))\big] = -S^{G(\varphi,\psi)}_{\varrho, A}(w) = A(R^{G(\varphi,\psi)}_{\varrho, A}(w)) - w = A\big(R^{G(\varphi,\psi)}_{\varrho, A}\big[A(s) - \varrho(J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s)))\big]\big) - \big[A(s) - \varrho\big(J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s))\big)\big],$$
which implies
$$A(s) = A\big(R^{G(\varphi,\psi)}_{\varrho, A}\big[A(s) - \varrho\big(J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s))\big)\big]\big).$$
Since the mapping $A$ is one-to-one, we have
$$s = R^{G(\varphi,\psi)}_{\varrho, A}\big[A(s) - \varrho\big(J^{G(\varphi,\psi)}_{\varrho, A}(s) + N(u, s - \phi(s))\big)\big].$$
Thus, from Lemma 5, we deduce that $s \in \mathcal{B}$, $u \in T(s)$ is the solution of GYIP (9). □
Based on Proposition 1, we estimate the solution of GREP (28) by composing the following iterative procedure.
Algorithm 2.
For initial points $s_0, w_0 \in \mathcal{B}$, we approximate the sequences $\{s_n\}$ and $\{w_n\}$ by the following scheme:
$$s_n = R^{G(\varphi,\psi)}_{\varrho, A}(w_n), \qquad (32)$$
$$w_{n+1} = A(s_n) - \varrho\big[J^{G(\varphi,\psi)}_{\varrho, A}(s_n) + N(u_n, s_n - \phi(s_n))\big]. \qquad (33)$$
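In the setting of Example 1 (with $\varrho = 1$), the scheme above reduces to scalar recursions: $s_n = \frac{5}{21}w_n$ and $w_{n+1} = \frac{6}{5}s_n - \big(\frac{101}{105}s_n + \frac{17}{105}s_n\big) = \frac{8}{105}s_n$, so $w_{n+1} = \frac{8}{441}w_n$. A minimal sketch of this toy instance (illustrative only):

```python
def algorithm2(w0=1.0, steps=25):
    # Algorithm 2 specialized to Example 1 (rho = 1, u_n = s_n / 5)
    w = w0
    for _ in range(steps):
        s = 5.0 * w / 21.0                                          # s_n = R(w_n)
        w = 6.0 * s / 5.0 - (101.0 * s / 105.0 + 17.0 * s / 105.0)  # w_{n+1}
    return w
```

One step maps $w_0 = 1$ to $w_1 = \frac{8}{441}$, and the sequence contracts geometrically to the solution $w = 0$ (hence $s = 0$).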
Theorem 4.
Suppose that the mappings $\phi, \varphi, \psi, A, N, T, G$ are the same as in Theorem 1 and satisfy all of its assumptions along with Condition (11). Then the sequences $\{s_n\}$ and $\{w_n\}$, generated by (32)–(33), converge strongly to $s$ and $w$, respectively, which solve GREP (28).
Proof. 
Taking (33) of Algorithm 2 into account, and utilizing the Lipschitz continuity of A , we obtain
w n + 1 w n = A ( s n ) ϱ [ J ϱ , A G ( φ , ψ ) ( s n ) + N ( u n , s n ϕ ( s n ) ) ] ( A ( s n 1 ) ϱ [ J ϱ , A G ( φ , ψ ) ( s n 1 ) + N ( u n 1 , s n 1 ϕ ( s n 1 ) ) ] ) A ( s n ) A ( s n 1 ) + ϱ J ϱ , A G ( φ , ψ ) ( s n ) J ϱ , A G ( φ , ψ ) ( s n 1 ) + ϱ N ( u n , s n ϕ ( ( s n ) ) N ( u n 1 , s n 1 ϕ ( ( s n 1 ) ) δ A s n s n 1 + ϱ J ϱ , A G ( φ , ψ ) ( s n ) J ϱ , A G ( φ , ψ ) ( s n 1 ) + ϱ N ( u n , s n ϕ ( s n ) ) N ( u n 1 , s n 1 ϕ ( s n 1 ) ) .
Using the same facts as used in Theorem 1, we have
\[
\|w_{n+1}-w_n\| \le \Big[\delta_A + \varrho\Big(L + \delta_{N_1}\delta_{D_T} + \delta_{N_2}\sqrt[q]{1-q\sigma+c_q\delta_\phi^{\,q}}\Big)\Big]\|s_n-s_{n-1}\|. \tag{34}
\]
Now, making use of (32) and (7), we acquire
\[
\|s_n-s_{n-1}\| = \big\|R_{\varrho,A}^{G(\varphi,\psi)}(w_n)-R_{\varrho,A}^{G(\varphi,\psi)}(w_{n-1})\big\| \le \frac{1}{\varrho(\alpha-\beta)+(\kappa-\varsigma\delta_A^{\,q})}\,\|w_n-w_{n-1}\|. \tag{35}
\]
It follows from (34) and (35) that
\[
\|w_{n+1}-w_n\| \le \frac{\delta_A + \varrho\big(L + \delta_{N_1}\delta_{D_T} + \delta_{N_2}\sqrt[q]{1-q\sigma+c_q\delta_\phi^{\,q}}\big)}{\varrho(\alpha-\beta)+(\kappa-\varsigma\delta_A^{\,q})}\,\|w_n-w_{n-1}\| = \Delta\,\|w_n-w_{n-1}\|.
\]
From (11), we infer that $\Delta<1$. Thus, $\{w_n\}$ is a Cauchy sequence in $B$, and hence there exists $w\in B$ such that $w_n\to w$ as $n\to\infty$. From (35), $s_n\to s$ as $n\to\infty$, and from (25), $\lim_{n\to\infty}u_n=u$. Employing the continuity of $A$, $N$, $\phi$, $R_{\varrho,A}^{G(\varphi,\psi)}$ and $J_{\varrho,A}^{G(\varphi,\psi)}$, we obtain
\[
w = A(s) - \varrho\big[J_{\varrho,A}^{G(\varphi,\psi)}(s)+N(u,\,s-\phi(s))\big].
\]
Thus, from Proposition 1, we deduce that $s,w\in B$ solve GREP (28). □
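To see Condition (11) in action, one can evaluate the contraction factor $\Delta$ from the proof for a sample set of constants. All numerical values below are made up purely for illustration; they merely exhibit one admissible choice for which $\Delta<1$:

```python
# Contraction factor Delta from the proof of Theorem 4 for one illustrative
# (entirely made-up) set of constants. The variable names mirror the paper's
# symbols: delta_A (Lipschitz constant of A), L, delta_N1, delta_DT,
# delta_N2, q, sigma, c_q, delta_phi, rho, alpha, beta, kappa, varsigma.
delta_A, L, delta_N1, delta_DT, delta_N2 = 0.9, 0.2, 0.3, 0.5, 0.4
q, sigma, c_q, delta_phi = 2, 0.4, 1.0, 0.3
rho, alpha, beta, kappa, varsigma = 0.5, 3.0, 1.0, 2.0, 0.8

num = delta_A + rho * (L + delta_N1 * delta_DT
                       + delta_N2 * (1 - q * sigma + c_q * delta_phi**q) ** (1 / q))
den = rho * (alpha - beta) + (kappa - varsigma * delta_A**q)
Delta = num / den
print(Delta, Delta < 1)   # for these values Delta is about 0.5, so (11) holds
```

Any other set of constants can be screened the same way before running the algorithm; if the computed $\Delta$ reaches $1$, the convergence guarantee of Theorem 4 no longer applies.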

5.2. Volterra–Fredholm Integral Equation

Next, we employ our iterative scheme (17) to investigate the approximate solution of the Volterra–Fredholm integral equation given below:
\[
x(\omega,\kappa) = f\big(\omega,\kappa,l(x(\omega,\kappa))\big) + \int_0^{\omega}\!\!\int_0^{\kappa}\zeta\big(\omega,\kappa,a,b,x(a,b)\big)\,da\,db, \qquad \omega,\kappa\in\mathbb{R}_+. \tag{37}
\]
Problem (37) was previously studied by Lungu and Rus [39]. For some $t>0$, define
\[
\Delta_t = \big\{\,x\in C(\mathbb{R}_+^2,B) : \exists\, M(x)>0 \text{ such that } |x(\omega,\kappa)| \le e^{t(\omega+\kappa)}M(x)\,\big\},
\]
and define the norm on $\Delta_t$ by
\[
\|x\|_t = \sup_{\omega,\kappa\in\mathbb{R}_+}|x(\omega,\kappa)|\,e^{-t(\omega+\kappa)}.
\]
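For intuition, this weighted (Bielecki) norm can be approximated numerically by taking the supremum over a finite grid. The helper below is ours, not from the paper, and the truncation of $\mathbb{R}_+^2$ to a box is an illustrative assumption:

```python
# Illustrative numerical approximation of the Bielecki norm
# ||x||_t = sup_{w,k >= 0} |x(w,k)| e^{-t(w+k)}, with the supremum over
# R_+^2 truncated to a finite grid on [0, box]^2.
import math

def bielecki_norm(x, t=1.0, grid=50, box=10.0):
    h = box / grid
    return max(
        abs(x(i * h, j * h)) * math.exp(-t * (i * h + j * h))
        for i in range(grid + 1)
        for j in range(grid + 1)
    )

# For x(w,k) = e^{w+k} and t = 1 the weighted value is identically 1,
# so the computed norm equals 1 up to floating-point rounding, even though
# x itself is unbounded: the weight e^{-t(w+k)} tames exponential growth.
print(bielecki_norm(lambda w, k: math.exp(w + k), t=1.0))
```

This is exactly why $\Delta_t$ admits exponentially growing functions while $\|\cdot\|_t$ stays finite on it.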
Note that $(\Delta_t,\|\cdot\|_t)$ is a Banach space; see [40]. The following lemma, due to Lungu and Rus [39], plays a decisive role in proving our result.
Lemma 7
([39]). Suppose that the following assertions hold:
 (A1) 
$f\in C(\mathbb{R}_+^2\times B,\,B)$, $\zeta\in C(\mathbb{R}_+^4\times B,\,B)$;
 (A2) 
for some $\delta_l>0$, $l:\Delta_t\to\Delta_t$ satisfies
\[
|l(x(\omega,\kappa))-l(y(\omega,\kappa))| \le \delta_l\,\|x-y\|_t\,e^{t(\omega+\kappa)}, \qquad \forall\,\omega,\kappa\in\mathbb{R}_+,\ x,y\in\Delta_t;
\]
 (A3) 
there exists $\delta_f>0$ such that $|f(\omega,\kappa,\mu_1)-f(\omega,\kappa,\mu_2)| \le \delta_f|\mu_1-\mu_2|$ for all $\omega,\kappa\in\mathbb{R}_+$, $\mu_1,\mu_2\in B$;
 (A4) 
there exists $\delta_\zeta(\omega,\kappa,a,b)>0$ such that $|\zeta(\omega,\kappa,a,b,\mu_1)-\zeta(\omega,\kappa,a,b,\mu_2)| \le \delta_\zeta(\omega,\kappa,a,b)|\mu_1-\mu_2|$ for all $\omega,\kappa,a,b\in\mathbb{R}_+$, $\mu_1,\mu_2\in B$;
 (A5) 
$\delta_\zeta\in C(\mathbb{R}_+^4,\mathbb{R}_+)$ and, for some $\delta>0$,
\[
\int_0^{\omega}\!\!\int_0^{\kappa}\delta_\zeta(\omega,\kappa,a,b)\,e^{t(a+b)}\,da\,db \le \delta\,e^{t(\omega+\kappa)}, \qquad \forall\,\omega,\kappa\in\mathbb{R}_+;
\]
 (A6) 
$\delta_f\delta_l+\delta<1$.
Then the Volterra–Fredholm integral Equation (37) admits a unique solution $s\in\Delta_t$, and the iterative sequence
\[
x_{n+1}(\omega,\kappa) = f\big(\omega,\kappa,l(x_n(\omega,\kappa))\big) + \int_0^{\omega}\!\!\int_0^{\kappa}\zeta\big(\omega,\kappa,a,b,x_n(a,b)\big)\,da\,db
\]
converges uniformly to $s$.
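The Picard iteration of Lemma 7 is straightforward to test numerically. The sketch below solves a toy instance of (37) on $[0,1]^2$; the data $f$, $l$, $\zeta$ are made-up choices satisfying (A6), and the double integral is discretized by a rectangle rule:

```python
# Discrete Picard iteration of Lemma 7 for a toy instance of (37) on
# [0,1]^2. Made-up data: f(w,k,mu) = 1 + 0.25*mu, l = identity,
# zeta(w,k,a,b,mu) = 0.1*mu, so delta_f*delta_l + delta <= 0.35 < 1 and
# (A6) holds. The double integral uses a left-endpoint rectangle rule.
n = 20                       # grid points per axis on [0,1]
h = 1.0 / n

def picard_step(x):
    new = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):           # omega = i*h
        for j in range(n + 1):       # kappa = j*h
            integral = sum(0.1 * x[a][b] for a in range(i) for b in range(j)) * h * h
            new[i][j] = 1.0 + 0.25 * x[i][j] + integral
    return new

x = [[0.0] * (n + 1) for _ in range(n + 1)]   # x_0 = 0
for _ in range(30):
    x = picard_step(x)

# residual of the discrete fixed-point equation x = G(x)
y = picard_step(x)
residual = max(abs(y[i][j] - x[i][j]) for i in range(n + 1) for j in range(n + 1))
print(residual)   # essentially zero: the Picard iterates have converged
```

Since the discrete map inherits the contraction constant $\delta_f\delta_l+\delta\le 0.35$, about thirty sweeps already drive the residual below machine precision; at the origin the equation decouples to $x=1+0.25x$, giving the checkable value $x(0,0)=4/3$.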
Theorem 5.
Let $\{s_n\}$ and $\{u_n\}$ be the approximate sequences generated by (17), and suppose that assumptions (A1)–(A6) of Lemma 7 are fulfilled. Then the Volterra–Fredholm integral Equation (37) admits a unique solution $s\in\Delta_t$, and $\{s_n\}$ converges strongly to $s$.
Proof. 
Let $\{s_n\}$ and $\{u_n\}$ be the approximate sequences generated by (17), and let $G:\Delta_t\to\Delta_t$ be the mapping defined by
\[
G(x(\omega,\kappa)) = f\big(\omega,\kappa,l(x(\omega,\kappa))\big) + \int_0^{\omega}\!\!\int_0^{\kappa}\zeta\big(\omega,\kappa,a,b,x(a,b)\big)\,da\,db.
\]
We show that $\|s_n-s\|_t\to 0$ as $n\to\infty$. Employing (17), we acquire
\[
\|s_{n+1}-s\|_t = \sup_{\omega,\kappa\in\mathbb{R}_+}\big|G(\eta_n(\omega,\kappa))-G(s(\omega,\kappa))\big|\,e^{-t(\omega+\kappa)}.
\]
Next, we estimate
\[
\begin{aligned}
\big|G(\eta_n(\omega,\kappa))-G(s(\omega,\kappa))\big|
&\le \big|f\big(\omega,\kappa,l(\eta_n(\omega,\kappa))\big)-f\big(\omega,\kappa,l(s(\omega,\kappa))\big)\big|\\
&\qquad+\Big|\int_0^{\omega}\!\!\int_0^{\kappa}\zeta\big(\omega,\kappa,a,b,\eta_n(a,b)\big)\,da\,db-\int_0^{\omega}\!\!\int_0^{\kappa}\zeta\big(\omega,\kappa,a,b,s(a,b)\big)\,da\,db\Big|\\
&\le \delta_f\big|l(\eta_n(\omega,\kappa))-l(s(\omega,\kappa))\big| + \int_0^{\omega}\!\!\int_0^{\kappa}\big|\zeta\big(\omega,\kappa,a,b,\eta_n(a,b)\big)-\zeta\big(\omega,\kappa,a,b,s(a,b)\big)\big|\,da\,db\\
&\le \delta_f\delta_l\,\|\eta_n-s\|_t\,e^{t(\omega+\kappa)} + \int_0^{\omega}\!\!\int_0^{\kappa}\delta_\zeta(\omega,\kappa,a,b)\,|\eta_n(a,b)-s(a,b)|\,da\,db\\
&\le \delta_f\delta_l\,\|\eta_n-s\|_t\,e^{t(\omega+\kappa)} + \delta\,\|\eta_n-s\|_t\,e^{t(\omega+\kappa)}\\
&= (\delta_f\delta_l+\delta)\,\|\eta_n-s\|_t\,e^{t(\omega+\kappa)}.
\end{aligned}
\]
Thus, we have
\[
\|s_{n+1}-s\|_t \le (\delta_f\delta_l+\delta)\,\|\eta_n-s\|_t. \tag{40}
\]
In a similar fashion, from (17), we obtain
\[
\|\eta_n-s\|_t \le (\delta_f\delta_l+\delta)\,\|\theta_n-s\|_t. \tag{41}
\]
Taking into consideration (40) and (41), we obtain
\[
\|s_{n+1}-s\|_t \le (\delta_f\delta_l+\delta)^2\,\|\theta_n-s\|_t. \tag{42}
\]
Moreover,
\[
\|\theta_n-s\|_t = \sup_{\omega,\kappa\in\mathbb{R}_+}\big|G\big(((1-r_n)\mu_n+r_nG(\mu_n))(\omega,\kappa)\big)-G(s(\omega,\kappa))\big|\,e^{-t(\omega+\kappa)}.
\]
Now, we estimate
\[
\begin{aligned}
\big|G\big(((1&-r_n)\mu_n+r_nG(\mu_n))(\omega,\kappa)\big)-G(s(\omega,\kappa))\big|\\
&\le \big|f\big(\omega,\kappa,l(((1-r_n)\mu_n+r_nG(\mu_n))(\omega,\kappa))\big)-f\big(\omega,\kappa,l(s(\omega,\kappa))\big)\big|\\
&\qquad+\Big|\int_0^{\omega}\!\!\int_0^{\kappa}\zeta\big(\omega,\kappa,a,b,((1-r_n)\mu_n+r_nG(\mu_n))(a,b)\big)\,da\,db-\int_0^{\omega}\!\!\int_0^{\kappa}\zeta\big(\omega,\kappa,a,b,s(a,b)\big)\,da\,db\Big|\\
&\le \delta_f\big|l(((1-r_n)\mu_n+r_nG(\mu_n))(\omega,\kappa))-l(s(\omega,\kappa))\big|\\
&\qquad+\int_0^{\omega}\!\!\int_0^{\kappa}\big|\zeta\big(\omega,\kappa,a,b,((1-r_n)\mu_n+r_nG(\mu_n))(a,b)\big)-\zeta\big(\omega,\kappa,a,b,s(a,b)\big)\big|\,da\,db\\
&\le \delta_f\delta_l\,\|(1-r_n)\mu_n+r_nG(\mu_n)-s\|_t\,e^{t(\omega+\kappa)}\\
&\qquad+\int_0^{\omega}\!\!\int_0^{\kappa}\delta_\zeta(\omega,\kappa,a,b)\,\big|((1-r_n)\mu_n+r_nG(\mu_n))(a,b)-s(a,b)\big|\,da\,db\\
&\le (\delta_f\delta_l+\delta)\,\|(1-r_n)\mu_n+r_nG(\mu_n)-s\|_t\,e^{t(\omega+\kappa)}.
\end{aligned}
\]
Thus, we get
\[
\|\theta_n-s\|_t \le (\delta_f\delta_l+\delta)\,\|(1-r_n)\mu_n+r_nG(\mu_n)-s\|_t. \tag{44}
\]
Also,
\[
\|(1-r_n)\mu_n+r_nG(\mu_n)-s\|_t = \|(1-r_n)(\mu_n-s)+r_n(G(\mu_n)-s)\|_t \le (1-r_n)\|\mu_n-s\|_t + r_n\|G(\mu_n)-s\|_t. \tag{45}
\]
Now,
\[
\|G(\mu_n)-G(s)\|_t = \sup_{\omega,\kappa\in\mathbb{R}_+}\big|G(\mu_n(\omega,\kappa))-G(s(\omega,\kappa))\big|\,e^{-t(\omega+\kappa)}, \tag{46}
\]
and
\[
\begin{aligned}
\big|G(\mu_n(\omega,\kappa))-G(s(\omega,\kappa))\big|
&\le \big|f\big(\omega,\kappa,l(\mu_n(\omega,\kappa))\big)-f\big(\omega,\kappa,l(s(\omega,\kappa))\big)\big|\\
&\qquad+\Big|\int_0^{\omega}\!\!\int_0^{\kappa}\zeta\big(\omega,\kappa,a,b,\mu_n(a,b)\big)\,da\,db-\int_0^{\omega}\!\!\int_0^{\kappa}\zeta\big(\omega,\kappa,a,b,s(a,b)\big)\,da\,db\Big|\\
&\le \delta_f\big|l(\mu_n(\omega,\kappa))-l(s(\omega,\kappa))\big| + \int_0^{\omega}\!\!\int_0^{\kappa}\big|\zeta\big(\omega,\kappa,a,b,\mu_n(a,b)\big)-\zeta\big(\omega,\kappa,a,b,s(a,b)\big)\big|\,da\,db\\
&\le \delta_f\delta_l\,\|\mu_n-s\|_t\,e^{t(\omega+\kappa)} + \int_0^{\omega}\!\!\int_0^{\kappa}\delta_\zeta(\omega,\kappa,a,b)\,|\mu_n(a,b)-s(a,b)|\,da\,db\\
&\le (\delta_f\delta_l+\delta)\,\|\mu_n-s\|_t\,e^{t(\omega+\kappa)}.
\end{aligned} \tag{47}
\]
Thus, from (47), (46) becomes
\[
\|G(\mu_n)-G(s)\|_t \le (\delta_f\delta_l+\delta)\,\|\mu_n-s\|_t. \tag{48}
\]
Taking into consideration (45) and (48), we obtain
\[
\|(1-r_n)\mu_n+r_nG(\mu_n)-s\|_t \le \big[1-r_n\big(1-(\delta_f\delta_l+\delta)\big)\big]\,\|\mu_n-s\|_t. \tag{49}
\]
From (44) and (49), we acquire
\[
\|\theta_n-s\|_t \le (\delta_f\delta_l+\delta)\big[1-r_n\big(1-(\delta_f\delta_l+\delta)\big)\big]\,\|\mu_n-s\|_t. \tag{50}
\]
By utilizing (50), (42) becomes
\[
\|s_{n+1}-s\|_t \le (\delta_f\delta_l+\delta)^3\big[1-r_n\big(1-(\delta_f\delta_l+\delta)\big)\big]\,\|\mu_n-s\|_t. \tag{51}
\]
In the same manner, we estimate from (17) that
\[
\|\mu_n-s\|_t \le (\delta_f\delta_l+\delta)\big[1-p_n\big(1-(\delta_f\delta_l+\delta)\big)\big]\,\|s_n-s\|_t. \tag{52}
\]
Thus, from (51) and (52), we obtain
\[
\|s_{n+1}-s\|_t \le (\delta_f\delta_l+\delta)^4\big[1-r_n\big(1-(\delta_f\delta_l+\delta)\big)\big]\big[1-p_n\big(1-(\delta_f\delta_l+\delta)\big)\big]\,\|s_n-s\|_t.
\]
Invoking (A6), we have $\delta_f\delta_l+\delta<1$, and since $p_n\in[0,1]$, by induction we deduce that
\[
\|s_{n+1}-s\|_t \le \|s_0-s\|_t\,\prod_{i=0}^{n}\big[1-r_i\big(1-(\delta_f\delta_l+\delta)\big)\big].
\]
Utilizing the fact that $1-k\le e^{-k}$ for all $k\in[0,1]$, we acquire
\[
\|s_{n+1}-s\|_t \le \|s_0-s\|_t\, e^{-\big(1-(\delta_f\delta_l+\delta)\big)\sum_{i=0}^{n}r_i},
\]
which yields $\lim_{n\to\infty}\|s_n-s\|_t=0$. □
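The last step of the proof rests on the elementary bound $1-k\le e^{-k}$. The snippet below numerically checks the resulting product estimate, with an illustrative contraction gap `c` standing in for $1-(\delta_f\delta_l+\delta)$ and random parameters $r_i\in[0,1]$ (all values are our own choices, not from the paper):

```python
# Sanity check of the closing estimate: with contraction gap
# c = 1 - (delta_f*delta_l + delta) in (0,1) and parameters r_i in [0,1],
# the product prod_i (1 - r_i*c) is dominated by exp(-c * sum_i r_i),
# because each factor satisfies 1 - k <= e^{-k}.
import math
import random

random.seed(0)
c = 0.3                                    # illustrative contraction gap
r = [random.random() for _ in range(100)]  # parameters r_i in [0, 1]

prod = 1.0
for ri in r:
    prod *= 1.0 - ri * c

bound = math.exp(-c * sum(r))
print(prod <= bound, prod)   # the product tends to 0 as sum(r_i) grows
```

This makes the mechanism of the proof concrete: as long as the partial sums of the $r_i$ grow without bound, the exponential envelope, and hence the error $\|s_{n+1}-s\|_t$, is driven to zero.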

6. Concluding Remarks

A new Yosida inclusion problem involving an A-relaxed co-accretive mapping, called the generalized Yosida inclusion problem, has been introduced. The existence of a solution of the generalized Yosida inclusion has been established by means of the resolvent technique. A four-step iterative scheme has been proposed and its convergence analysis discussed; the scheme has also been shown to be almost stable for contractions. An equivalence between the generalized Yosida inclusion and an associated generalized resolvent equation has been established, and the convergence of the developed scheme has been analyzed for the generalized resolvent equation. The theoretical results have been illustrated by a numerical example. Finally, the developed method has been employed to investigate the approximate solution of a Volterra–Fredholm integral equation.

Author Contributions

Methodology, M.A.; Validation, M.D. and I.A.; Formal analysis, M.D., S.C. and I.A.; Resources, A.K.; Writing—original draft, M.A.; Writing—review & editing, S.C.; Funding acquisition, A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This study is supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2023/R/1444).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hartman, P.; Stampacchia, G. On some non-linear elliptic differential-functional equations. Acta Math. 1966, 115, 271–310.
  2. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145.
  3. Chen, Y.Q.; Cho, Y.J.; Kumam, P. On the maximality of sums of two maximal monotone operators. J. Math. Anal. 2016, 7, 24–30.
  4. Cho, S.Y.; Qin, X.; Wang, L. A strong convergence theorem for solutions of zero point problems and fixed point problems. Bull. Iran. Math. Soc. 2014, 40, 891–910.
  5. Lai, X.; Zhang, Y. Fixed point and asymptotic analysis of cellular neural networks. J. Appl. Math. 2012, 2012, 689845.
  6. Wang, G.Q.; Cheng, S.S. Fixed point theorems arising from seeking steady states of neural networks. Appl. Math. Model. 2009, 33, 499–506.
  7. Xiong, T.J.; Lan, H.Y. On general system of generalized quasi-variational-like inclusions with maximal η-monotone mappings in Hilbert spaces. J. Comput. Anal. Appl. 2015, 18, 506–514.
  8. Yao, Y.; Noor, M.A.; Zainab, S.; Liou, Y.C. Mixed equilibrium problems and optimization problems. J. Math. Anal. Appl. 2009, 354, 319–329.
  9. Zhao, X.P.; Sahu, D.R.; Wen, C.F. Iterative methods for system of variational inclusions involving accretive operators and applications. Fixed Point Theory 2018, 19, 801–822.
  10. Hassouni, A.; Moudafi, A. A perturbed algorithm for variational inclusions. J. Math. Anal. Appl. 1994, 185, 706–712.
  11. Robinson, S.M. Generalized equations and their solutions, Part I: Basic theory. Math. Program. Study 1979, 10, 128–141.
  12. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
  13. Takahashi, S.; Takahashi, W.; Toyoda, M. Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147, 27–41.
  14. Zhang, C.; Wang, Y. Proximal algorithm for solving monotone variational inclusion. Optimization 2018, 67, 1197–1209.
  15. Shehu, Y. An iterative method for fixed point problems, variational inclusions and generalized equilibrium problems. Math. Comput. Model. 2011, 54, 1394–1404.
  16. Peng, J.W.; Wang, Y.; Shyu, D.S.; Yao, J.C. Common solutions of an iterative scheme for variational inclusions, equilibrium problems and fixed point problems. J. Inequal. Appl. 2008, 2008, 720371.
  17. Ansari, Q.H.; Lalitha, C.S.; Mehta, M. Generalized Convexity, Nonsmooth Variational Inequalities and Nonsmooth Optimization; Taylor & Francis Group: New York, NY, USA, 2014.
  18. Ceng, L.C.; Guu, S.M.; Yao, J.C. Weak convergence theorem by a modified extragradient method for variational inclusions, variational inequalities and fixed point problems. J. Nonlinear Convex Anal. 2013, 14, 21–31.
  19. Lan, H.Y.; Cho, Y.J.; Verma, R.U. Nonlinear relaxed cocoercive variational inclusions involving (A, η)-accretive mappings in Banach spaces. Comput. Math. Appl. 2006, 51, 1529–1538.
  20. Brezis, H. Analyse Fonctionnelle; Dunod: Paris, France, 1999.
  21. Akram, M.; Chen, J.W.; Dilshad, M. Generalized Yosida approximation operator with an application to a system of Yosida inclusions. J. Nonlinear Funct. Anal. 2018, 2018, 17.
  22. Cao, H.W. Yosida approximation equations technique for system of generalized set-valued variational inclusions. J. Inequal. Appl. 2013, 2013, 455.
  23. Akram, M. Existence and iterative approximation of solution for generalized Yosida inclusion problem. Iran. J. Math. Sci. Inform. 2020, 15, 147–161.
  24. Al-Homidan, S.; Ansari, Q.H. Fixed point theorems on product topological semilattice spaces, generalized abstract economies and systems of generalized vector quasi-equilibrium problems. Taiwan. J. Math. 2011, 15, 307–330.
  25. Yao, Y.; Chen, R.; Xu, H.K. Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 2010, 72, 3447–3456.
  26. Alansari, M.; Dilshad, M.; Akram, M. Remark on the Yosida approximation iterative technique for split monotone Yosida variational inclusions. Comput. Appl. Math. 2020, 39, 203.
  27. Ahmad, I.; Pang, C.-T.; Ahmad, R.; Ishtyak, M. System of Yosida inclusions involving XOR operator. J. Nonlinear Convex Anal. 2017, 18, 831–845.
  28. Khan, A.; Akram, M.; Dilshad, M.; Shafi, J. A new iterative algorithm for general variational inequality problem with applications. J. Funct. Spaces 2022, 2022, 7618683.
  29. Deng, L.; Hu, R.; Fang, Y.P. Inertial extragradient algorithms for solving equilibrium problems without any monotonicity in Hilbert spaces. J. Comput. Appl. Math. 2022, 44, 639–663.
  30. Akram, M.; Khan, A.; Dilshad, M. Convergence of some iterative algorithms for system of generalized set-valued variational inequalities. J. Funct. Spaces 2021, 2021, 6674349.
  31. Agarwal, R.P.; O'Regan, D.; Sahu, D.R. Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 2007, 8, 61–79.
  32. Okeke, G.A.; Abbas, M.; de la Sen, M.; Iqbal, H. Accelerated modified Tseng's extragradient method for solving variational inequality problems in Hilbert spaces. Axioms 2021, 10, 248.
  33. Ma, X.; Liu, H. Two optimization approaches for solving split variational inclusion problems with applications. J. Sci. Comput. 2022, 91, 58.
  34. Xu, H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16, 1127–1138.
  35. Ahmad, R.; Akram, M.; Dilshad, M. Graph convergence for the H(·,·)-co-accretive mapping with an application. Bull. Malays. Math. Sci. Soc. 2015, 38, 1481–1506.
  36. Ostrowski, A.M. The round-off stability of iterations. Z. Angew. Math. Mech. 1967, 47, 77–81.
  37. Osilike, M.O. Stability of the Mann and Ishikawa iteration procedures for ϕ-strong pseudo-contractions and nonlinear equations of the ϕ-strongly accretive type. J. Math. Anal. Appl. 1998, 227, 319–334.
  38. Berinde, V. Picard iteration converges faster than Mann iteration for a class of quasi-contractive operators. Fixed Point Theory Appl. 2004, 2, 97–105.
  39. Lungu, N.; Rus, I.A. On a functional Volterra–Fredholm integral equation via Picard operators. J. Math. Inequal. 2009, 3, 519–527.
  40. Bielecki, A. Une remarque sur l'application de la méthode de Banach–Cacciopoli–Tikhonov dans la théorie de l'équation s = f(x, y, z, p, q). Bull. Pol. Acad. Sci. Math. 1956, 4, 265–357.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
