Article

Inertial Method for Solving Pseudomonotone Variational Inequality and Fixed Point Problems in Banach Spaces

by Rose Maluleka 1,2, Godwin Chidi Ugwunnadi 1,3,* and Maggie Aphane 1
1 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Pretoria 0204, South Africa
2 Department of Mathematics and Statistics, Tshwane University of Technology, Staatsartillerie Rd, Pretoria West, Pretoria 0183, South Africa
3 Department of Mathematics, Faculty of Science and Engineering, University of Eswatini, Private Bag 4, Kwaluseni M201, Eswatini
* Author to whom correspondence should be addressed.
Axioms 2023, 12(10), 960; https://doi.org/10.3390/axioms12100960
Submission received: 25 August 2023 / Revised: 20 September 2023 / Accepted: 8 October 2023 / Published: 11 October 2023
(This article belongs to the Special Issue Research on Fixed Point Theory and Application)

Abstract: In this paper, we introduce a new iterative method that combines the inertial subgradient extragradient method and the modified Mann method for solving the pseudomonotone variational inequality problem and the fixed point problem of a quasi-Bregman nonexpansive mapping in p-uniformly convex and uniformly smooth real Banach spaces. Under some standard assumptions imposed on the cost operators, we prove a strong convergence theorem for our proposed method. Finally, we perform numerical experiments to validate the efficiency of our proposed method.

1. Introduction

Let C be a nonempty subset of a real Banach space E with norm ‖·‖ and dual space E*. We denote the value of y* ∈ E* at x ∈ E by ⟨x, y*⟩. For a nonlinear operator A : C → E*, the variational inequality problem (VIP), introduced by Stampacchia [1], is defined as follows:
Find d ∈ C such that ⟨A(d), e − d⟩ ≥ 0 for all e ∈ C.
We use VI(C, A) to denote the solution set of (1). The study of the VIP originates in the minimization of infinite-dimensional functionals and the calculus of variations. Motivated by applications of mechanics to partial differential equations in infinite-dimensional spaces, Hartman and Stampacchia [2] initiated the systematic study of the VIP in 1964. In 1966, Stampacchia [1] proved the first existence and uniqueness result for the VIP. In 1979, Smith [3] formulated the traffic assignment problem in finite-dimensional spaces without realizing that his formulation was exactly a variational inequality problem; Dafermos [4] made this observation in 1980 while working on traffic equilibrium problems. Since then, a variety of VIP models have been used in real-world settings. These models have a rich mathematical theory, intriguing crossovers between various fields, and several significant applications in engineering and economics. Furthermore, variational inequalities provide a unifying tool for a wide range of problems in mathematical programming, such as nonlinear systems of equations, optimization problems, and fixed point theorems. Numerous real-world “equilibrium” problems can be systematically formulated as variational inequalities (see [5]).
There are a number of well-known techniques for solving variational inequalities. The regularization method and the projection method are two prominent and general approaches to solving VIPs, and numerous methods based on these ideas have been proposed for solving the VIP (1). The extragradient method, first proposed by Korpelevich [6] and later extended to relax the strong assumptions of the original result, uses two projections onto the underlying closed and convex feasible set in each iteration, which can affect the computational efficiency of the method. There are ways to circumvent this difficulty. The first is the subgradient extragradient technique, Algorithm 1, proposed by Censor et al. [7]. This method replaces the second projection onto C with a projection onto a particular constructible half-space. They use the following approach:
Algorithm 1: Subgradient Extragradient Technique
f_n = P_C(e_n − τ A e_n),  T_n = { d ∈ H : ⟨e_n − τ A e_n − f_n, d − f_n⟩ ≤ 0 },  e_{n+1} = P_{T_n}(e_n − τ A f_n),  n ≥ 0,
where τ ∈ (0, 1/L). Several authors have studied iterative methods for solving variational inequality problems and fixed points of nonexpansive and quasi-nonexpansive mappings, as well as their generalizations, in real Hilbert spaces (see, for instance, [7,8] and the references therein). Bregman [9] developed methods using the Bregman distance function D_f in (2) rather than the norm when constructing and investigating feasibility and optimization problems. This approach circumvents the difficulty that useful examples of nonexpansive operators in Hilbert spaces H, such as the metric projection P_C onto a nonempty, closed, and convex subset C of H, are no longer nonexpansive in Banach spaces. This led to a growing body of research on approximating solutions of variational inequality, fixed point, and related problems (see, e.g., [10,11] and the references therein).
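As a finite-dimensional illustration, the subgradient extragradient steps of Algorithm 1 can be sketched in Python. The operator A, the feasible set C (the nonnegative orthant, so P_C is a coordinatewise clip), and the Lipschitz constant below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def subgradient_extragradient(A, proj_C, e0, tau, iters=200):
    """Sketch of Censor et al.'s subgradient extragradient method in R^n.

    A      : monotone, L-Lipschitz operator (tau < 1/L assumed)
    proj_C : Euclidean projection onto the feasible set C
    """
    e = e0
    for _ in range(iters):
        f = proj_C(e - tau * A(e))
        # Second projection is onto the half-space
        # T_n = { d : <e - tau*A(e) - f, d - f> <= 0 }, not onto C.
        u = e - tau * A(f)           # point to be projected onto T_n
        normal = e - tau * A(e) - f  # outward normal of the half-space
        ns = normal @ normal
        if ns > 0 and normal @ (u - f) > 0:   # u violates the half-space
            u = u - (normal @ (u - f)) / ns * normal
        e = u
    return e

# Illustrative VIP: A(x) = M x + q with M positive definite, C = R^2_+.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
A = lambda x: M @ x + q
proj_C = lambda x: np.maximum(x, 0.0)
L = np.linalg.norm(M, 2)             # Lipschitz constant of A
sol = subgradient_extragradient(A, proj_C, np.zeros(2), tau=0.9 / L)
```

For this choice, A vanishes at (1/3, 1/3), which lies in the interior of C, so the iterates approach that point.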
Recently, Ma et al. [12] developed the following Algorithm 2, known as the modified subgradient extragradient method, for solving variational inequality and fixed point problems in the context of Banach space:
Algorithm 2: Modified Subgradient Extragradient Method
Let λ_0 > 0, μ ∈ (0, 1), and e_0 ∈ C be arbitrary. Choose a nonnegative real sequence {θ_n} such that Σ_{n=1}^∞ θ_n < ∞.
(Step1) 
      Calculate f_n = P_C(J e_n − λ_n A(e_n)). If e_n = f_n and T e_n = e_n, then stop: e_n ∈ VI(C, A) ∩ F(T); otherwise, go to the next step.
(Step2) 
      Construct T_n = { e ∈ E : ⟨J e_n − λ_n A(e_n) − J f_n, e − f_n⟩ ≤ 0 } and compute
a_n = P_{T_n}(J e_n − λ_n A f_n),  b_n = J^{−1}(α_n J e_0 + (1 − α_n) J a_n),  e_{n+1} = J^{−1}(β_n J a_n + (1 − β_n) J(T b_n)),  n ≥ 0,
(Step3) 
      Compute
λ_{n+1} = min{ μ(‖e_n − f_n‖² + ‖a_n − f_n‖²) / (2⟨A(e_n) − A(f_n), a_n − f_n⟩), λ_n + θ_n },  if ⟨A(e_n) − A(f_n), a_n − f_n⟩ > 0;  λ_{n+1} = λ_n + θ_n,  otherwise.
Let n : = n + 1 and return to Step 1.
where P_C is the generalized projection on E, J is the duality mapping, A : E → E* is a pseudomonotone mapping, and T is a nonexpansive mapping. It was proven that the sequence {x_n} generated by Algorithm 2 converges strongly to a point x* ∈ VI(C, A) ∩ F(T), where x* = P_{VI(C,A) ∩ F(T)} x_0, under some mild conditions in 2-uniformly convex real Banach spaces. For more information on the common solution of the VIP and fixed point problems in real Banach spaces, which are more general than Hilbert spaces, the reader may refer to the recent papers [13,14].
Motivated by the above results, this paper investigates the strong convergence of the inertial subgradient extragradient method for solving the pseudomonotone variational inequality problem and the fixed point problem of a quasi-Bregman nonexpansive mapping in p-uniformly convex and uniformly smooth real Banach spaces. We demonstrate that, under a number of suitable conditions on the parameters, the proposed method converges strongly to a point in VI(C, A) ∩ F(T). Finally, we offer a few numerical experiments that support our main result in comparison with previously published papers.

2. Preliminaries

Let 1 < q ≤ 2 ≤ p < ∞ with 1/p + 1/q = 1. Let E be a real normed space with dual E* and unit sphere S := {x ∈ E : ‖x‖ = 1}. Then E is said to be (i) strictly convex if ‖λx + (1 − λ)y‖ < 1 for all x, y ∈ S with x ≠ y and λ ∈ (0, 1); (ii) smooth if the limit lim_{t→0} (‖x + ty‖ − ‖x‖)/t exists for each x, y ∈ S.
A function δ E : ( 0 , 2 ] [ 0 , 1 ] defined by
δ_E(ϵ) = inf{ 1 − ‖x + y‖/2 : x, y ∈ S, ‖x − y‖ ≥ ϵ },
is known as the modulus of convexity. The space E is uniformly convex if δ_E(ϵ) > 0 for every ϵ ∈ (0, 2]; additionally, E is p-uniformly convex (1 < p < ∞) if there exists a positive constant c_p such that δ_E(ϵ) ≥ c_p ϵ^p for all ϵ ∈ (0, 2]. As a result, every p-uniformly convex space is uniformly convex. The function ρ_E : [0, ∞) → [0, ∞) defined by
ρ_E(τ) = sup{ (‖x + τy‖ + ‖x − τy‖)/2 − 1 : x, y ∈ S }
is the modulus of smoothness of E. The space E is uniformly smooth if lim_{τ→0} ρ_E(τ)/τ = 0; if a positive constant C_q exists such that ρ_E(τ) ≤ C_q τ^q for all τ > 0, then E is q-uniformly smooth. As a result, every q-uniformly smooth space is uniformly smooth. Moreover, E is q-uniformly smooth if and only if its dual E* is p-uniformly convex; see [15]. It is widely known that L_p, ℓ_p, and W_m^p are 2-uniformly convex and p-uniformly smooth for 1 < p ≤ 2, and 2-uniformly smooth and p-uniformly convex for 2 ≤ p < ∞ (see [16]). The expression
J_E^p(x) := { x* ∈ E* : ⟨x, x*⟩ = ‖x‖^p, ‖x*‖ = ‖x‖^{p−1} },  x ∈ E,
defines the generalized duality mapping J_E^p from E to 2^{E*}. In the case p = 2, the mapping J_E^2 = J is referred to as the normalized duality mapping. It is well known that J_E^p is norm-to-norm uniformly continuous on bounded subsets of E if E is uniformly smooth, and that J_E^p is single-valued if E is smooth. If E is reflexive and strictly convex with a strictly convex dual, then the duality mapping J_{E*}^q from E* to E is injective and surjective, and J_E^p ∘ J_{E*}^q = I_{E*} (the identity map on E*) (see [17]); thus, J_E^p = (J_{E*}^q)^{−1}. As an example of the generalized duality mapping, let a = (a_1, a_2, …) ∈ ℓ_p (1 < p < ∞). The generalized duality mapping J_E^p in ℓ_p is then given by
J E p ( a ) = ( | a 1 | p 1 sgn ( a 1 ) , | a 2 | p 1 sgn ( a 2 ) , ) .
Additionally, if E = L_p[α, β] (1 < p < ∞), the generalized duality mapping J_E^p is given, for any g ∈ L_p[α, β], by
J_E^p(g)(s) = |g(s)|^{p−1} sgn(g(s)),  s ∈ [α, β].
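As a quick numerical sanity check (an illustrative sketch, not part of the paper), the ℓ_p formula above can be verified against the defining identities ⟨a, J_E^p(a)⟩ = ‖a‖^p and ‖J_E^p(a)‖_q = ‖a‖^{p−1} on a finitely supported sequence:

```python
import numpy as np

def J_p(a, p):
    """Generalized duality mapping on (finitely supported) l_p sequences."""
    return np.abs(a) ** (p - 1) * np.sign(a)

p = 3.0
q = p / (p - 1)                  # conjugate exponent, 1/p + 1/q = 1
a = np.array([0.5, -1.2, 2.0, 0.0])

norm_p = np.sum(np.abs(a) ** p) ** (1 / p)
x_star = J_p(a, p)

# <a, J_p(a)> = ||a||_p^p
assert np.isclose(a @ x_star, norm_p ** p)
# ||J_p(a)||_q = ||a||_p^(p-1)
norm_q = np.sum(np.abs(x_star) ** q) ** (1 / q)
assert np.isclose(norm_q, norm_p ** (p - 1))
```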
We recall the following definitions, which were introduced in [18]. Let B denote the closed unit ball of E and, for r > 0, let rB = { u ∈ E : ‖u‖ ≤ r }. A function f : E → ℝ is said to be uniformly convex on bounded sets if ρ_r(t) > 0 for every r, t > 0, where ρ_r : [0, ∞) → [0, ∞] is defined by
ρ_r(t) = inf_{x, y ∈ rB, ‖x − y‖ = t, δ ∈ (0, 1)} [ δf(x) + (1 − δ)f(y) − f(δx + (1 − δ)y) ] / ( δ(1 − δ) ),
for all t ≥ 0. The function ρ_r is known as the gauge of uniform convexity of f and is nondecreasing. The following well-known lemma, valid when f is uniformly convex, is crucial for the verification of our main result.
Lemma 1 
([19]). Let E be a Banach space and f : E → ℝ a uniformly convex function on bounded subsets of E. If r > 0 and δ_i ∈ (0, 1) for each i = 0, 1, 2, …, s with Σ_{i=0}^s δ_i = 1, then
f( Σ_{i=0}^s δ_i x_i ) ≤ Σ_{i=0}^s δ_i f(x_i) − δ_j δ_k ρ_r(‖x_j − x_k‖),
where ρ_r is the gauge of uniform convexity of f, for each j, k ∈ {0, 1, 2, …, s} and x_i ∈ rB.
The Bregman distance in relation to f is given by
Δ_f(x, y) = f(x) − f(y) − ⟨∇f(y), x − y⟩,  for every x, y ∈ E.
In particular, let f_p(x) := (1/p)‖x‖^p. The derivative of the function f_p is the generalized duality mapping J_E^p from E to 2^{E*}. Consequently, the Bregman distance with respect to f_p is given by
Δ_p(x, y) = (1/p)‖x‖^p − ⟨J_E^p(y), x⟩ + (1/q)‖y‖^p.
The three-point identity, a crucial property of the Bregman distance, is defined as:
Δ_p(x, y) = Δ_p(x, z) + Δ_p(z, y) + ⟨J_E^p(z) − J_E^p(y), x − z⟩,  ∀ x, y, z ∈ E.
Due to the lack of symmetry, the Bregman distance is not a metric in the traditional sense, but it does possess some distance-like characteristics. If E is a p-uniformly convex space, then the Bregman distance function Δ p and the metric function satisfy the relation shown below (see [20]), which proves to be extremely helpful in the demonstration of our result: let τ p > 0 be any fixed constant.
τ_p ‖x − y‖^p ≤ Δ_p(x, y) ≤ ⟨J_E^p(x) − J_E^p(y), x − y⟩
for all x , y E . Additionally, for q > 1 and 1 p + 1 q = 1 , recall from Young’s inequality, that
⟨J_E^p(x), y⟩ ≤ ‖J_E^p(x)‖ ‖y‖ ≤ (1/q)‖x‖^p + (1/p)‖y‖^p.
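In ℓ_p (truncated here to R^n) the quantities above are directly computable, which makes the three-point identity (4) easy to check numerically. The following is an illustrative sketch, not part of the paper's argument:

```python
import numpy as np

p = 3.0
q = p / (p - 1)

def J_p(x):                      # duality mapping in l_p (componentwise)
    return np.abs(x) ** (p - 1) * np.sign(x)

def norm_p(x):
    return np.sum(np.abs(x) ** p) ** (1 / p)

def bregman(x, y):
    """Delta_p(x, y) = ||x||^p / p - <J_p(y), x> + ||y||^p / q."""
    return norm_p(x) ** p / p - J_p(y) @ x + norm_p(y) ** p / q

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 4))

# Delta_p is nonnegative and vanishes at x = y.
assert bregman(x, y) >= 0 and np.isclose(bregman(x, x), 0.0)

# Three-point identity:
# Delta_p(x,y) = Delta_p(x,z) + Delta_p(z,y) + <J_p(z) - J_p(y), x - z>
lhs = bregman(x, y)
rhs = bregman(x, z) + bregman(z, y) + (J_p(z) - J_p(y)) @ (x - z)
assert np.isclose(lhs, rhs)
```

The identity holds exactly (up to rounding) because it follows algebraically from the definition Δ_f(x, y) = f(x) − f(y) − ⟨∇f(y), x − y⟩.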
Let E be a smooth and strictly convex real Banach space and C a nonempty, closed, and convex subset of E. The Bregman projection operator in the sense of Bregman [9] is Π_C : E → C defined by
Π_C x = arg min_{y ∈ C} Δ_p(y, x),  x ∈ E.
The Bregman projection is described in the following way [21]:
⟨J_E^p(x) − J_E^p(Π_C x), z − Π_C x⟩ ≤ 0,  ∀ z ∈ C.
With respect to Bregman function Δ p , we obtain
Δ_p(Π_C x, z) ≤ Δ_p(x, z) − Δ_p(x, Π_C x),  ∀ z ∈ C.
The Bregman projection in terms of f 2 and the metric projection are identical in Hilbert spaces, but otherwise they are different. More significantly, in Banach spaces, the metric projection cannot share the same property, (9), as the Bregman projection.
Let E be a smooth, strictly convex, and reflexive Banach space. We define the function V_p : E × E* → ℝ associated with f_p by
V_p(x, x̄) = (1/p)‖x‖^p − ⟨x, x̄⟩ + (1/q)‖x̄‖^q,  x ∈ E, x̄ ∈ E*,
with 1 p + 1 q = 1 (see [22]). It is well known that V p is nonnegative, and with respect to the Bregman function, we also have
V_p(x, x̄) = Δ_p(x, J_{E*}^q(x̄)),  ∀ x ∈ E, x̄ ∈ E*.
Furthermore, V p satisfies the following inequality:
V_p(x, x̄) ≤ V_p(x, x̄ + ȳ) − ⟨ȳ, J_{E*}^q(x̄) − x⟩,  ∀ x ∈ E and x̄, ȳ ∈ E*.
Additionally, V_p is convex in the second variable; that is, for all z ∈ E,
Δ_p(z, J_{E*}^q(Σ_{i=1}^N t_i J_E^p(x_i))) = V_p(z, Σ_{i=1}^N t_i J_E^p(x_i)) ≤ Σ_{i=1}^N t_i Δ_p(z, x_i),
where {x_i}_{i=1}^N ⊂ E and {t_i}_{i=1}^N ⊂ (0, 1) with Σ_{i=1}^N t_i = 1 (see [23,24,25]).
We also need the nonlinear operators, which are introduced below.
If C is a nonempty subset of a Banach space E and T : C → E is a mapping, then T is nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C, and T is said to be quasi-nonexpansive if F(T) ≠ ∅ and ‖Tx − q‖ ≤ ‖x − q‖ for all x ∈ C and q ∈ F(T), where F(T) := {x ∈ C : T(x) = x} denotes the set of fixed points of T. An element q ∈ C is an asymptotic fixed point of T if there exists a sequence {x_n} in C converging weakly to q such that lim_{n→∞} ‖Tx_n − x_n‖ = 0. We denote the set of asymptotic fixed points of T by F̂(T).
Definition 1 
([26]). Let C be a nonempty subset of a real Banach space E that is uniformly smooth and p-uniformly convex (1 < p < ∞). Let T : C → E be a mapping with F(T) ≠ ∅; then T is said to be:
(n1)
quasi-Bregman nonexpansive if
Δ_p(q, Tx) ≤ Δ_p(q, x),  ∀ x ∈ C, q ∈ F(T);
(n2)
Bregman nonexpansive if
Δ_p(q, Tx) ≤ Δ_p(q, x),  ∀ x ∈ C, q ∈ F(T), and F̂(T) = F(T);
(n3)
Bregman firmly nonexpansive if, for all x , y C
⟨J_E^p(Tx) − J_E^p(Ty), Tx − Ty⟩ ≤ ⟨J_E^p(x) − J_E^p(y), Tx − Ty⟩,
or equivalently,
Δ_p(Tx, Ty) + Δ_p(Ty, Tx) + Δ_p(Tx, x) + Δ_p(Ty, y) ≤ Δ_p(Tx, y) + Δ_p(Ty, x).
The well known demiclosedness principle plays an important role in our main result.
Definition 2. 
Assume that C is a nonempty, closed, convex subset of a uniformly convex Banach space E and that T : C → C is a nonlinear mapping. Then, T is called demiclosed at 0 if, whenever {x_n} is a sequence in C such that x_n ⇀ x and lim_{n→∞} ‖x_n − Tx_n‖ = 0, it follows that x = Tx.
Next, we outline a few ideas about the monotonicity of an operator.
Definition 3. 
Let E be a Banach space that has E * as its dual. The operator A : E E * is referred to as:
(m1)
(p, L)-Lipschitz, if
‖Ax − Ay‖ ≤ L‖x − y‖^p,  ∀ x, y ∈ E,
where L ≥ 0 and p ∈ [1, ∞) are two constants;
(m2)
monotone, if ⟨Ax − Ay, x − y⟩ ≥ 0 for all x, y ∈ E;
(m3)
pseudomonotone, if for all x, y ∈ E, ⟨Ax, y − x⟩ ≥ 0 implies ⟨Ay, y − x⟩ ≥ 0;
(m4)
weakly sequentially continuous, if for any {x_n} in E, x_n ⇀ x implies Ax_n ⇀ Ax.
It is clear that (m2) implies (m3); the following example demonstrates that the converse implication is not true in general. Let A(x) = 1/x for all x ∈ E := (0, 1]. Since A is positive, it is pseudomonotone; since A is strictly decreasing, it is not monotone.
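The one-dimensional example above can be checked numerically on a grid; the grid and tolerance are illustrative choices:

```python
import numpy as np

# A positive, strictly decreasing operator on an interval of the real line
# is pseudomonotone but not monotone; here A(x) = 1/x on a grid in (0, 1].
A = lambda x: 1.0 / x
grid = np.linspace(0.05, 1.0, 40)

pseudo_ok = True      # (m3): A(x)(y-x) >= 0  =>  A(y)(y-x) >= 0
monotone_ok = True    # (m2): (A(x)-A(y))(x-y) >= 0 for all x, y
for x in grid:
    for y in grid:
        if A(x) * (y - x) >= 0 and A(y) * (y - x) < 0:
            pseudo_ok = False
        if x != y and (A(x) - A(y)) * (x - y) < 0:
            monotone_ok = False

assert pseudo_ok and not monotone_ok
```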
When demonstrating the strong convergence of our sequence, the following result is helpful:
Lemma 2 
([27]). Let { a n } be a nonnegative sequence of real numbers, and { α n } a real sequence of numbers in ( 0 , 1 ) , with
Σ_{n=1}^∞ α_n = ∞,
and { b n } is a real sequence of numbers. Suppose that
a_{n+1} ≤ (1 − α_n)a_n + α_n b_n,  n ≥ 1.
If lim sup_{k→∞} b_{n_k} ≤ 0 for every subsequence {a_{n_k}} of {a_n} satisfying the condition
lim inf_{k→∞} (a_{n_k+1} − a_{n_k}) ≥ 0,
then lim_{n→∞} a_n = 0.
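Lemma 2 can be illustrated numerically: with α_n = 1/(n + 1) (so Σ α_n = ∞) and b_n → 0, the recursion drives a_n to 0. The concrete sequences below are illustrative choices, not taken from the paper:

```python
a = 5.0                      # a_1: arbitrary nonnegative start
for n in range(1, 200000):
    alpha = 1.0 / (n + 1)    # alpha_n in (0,1), sum alpha_n = infinity
    b = 1.0 / n              # b_n -> 0, so limsup of b along any subsequence <= 0
    a = (1 - alpha) * a + alpha * b   # equality case of the recursion

assert 0 <= a < 1e-3         # a_n -> 0, as Lemma 2 predicts
```

Telescoping shows a_N is roughly (a_1 + Σ b_n)/N, which decays like (log N)/N for these choices.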

3. Main Results

For the purpose of solving pseudomonotone variational inequality and fixed point problems, in this section, we formulate Algorithm 3, combining a modified inertial Mann-type method with a subgradient extragradient algorithm. For the convergence of the method, we require the following conditions:
Assumption 1. 
(C1)
E is a p -uniformly convex real Banach space which is also uniformly smooth and C is a nonempty, closed, and convex subset of E.
(C2)
A : C → E* is pseudomonotone and L-Lipschitz continuous on E.
(C3)
A is weakly sequentially continuous; that is, for any {x_n} ⊂ E, x_n ⇀ x* implies Ax_n ⇀ Ax*.
(C4)
{δ_n} is a sequence in (a, b) for some 0 < a < b < 1; {μ_n} is a positive sequence in (0, pτ_p/2^{p−1}), where τ_p is defined in (5), with μ_n = o(α_n), where {α_n} is a sequence in (0, 1) such that lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞.
(C5)
T : E → E is a quasi-Bregman nonexpansive mapping with F(T) ≠ ∅.
(C6)
The solution set Γ := VI(C, A) ∩ F(T) is assumed to be nonempty. Then Γ is closed and convex.
Now, we describe the modified inertial Mann-type subgradient extragradient methods for finding a common solution for the fixed point problem and the pseudomonotone variational inequality problem:
Algorithm 3: Modified Inertial Mann-type Subgradient Extragradient Method
Initialization: Choose x_0, x_1 ∈ E arbitrarily, θ ∈ (0, τ_p), μ ∈ (0, τ_p), and λ_1 > 0.
Iterative Steps: Calculate x n + 1 as follows:
(Step1) 
      Given the iterates x_{n−1} and x_n for each n ≥ 1 and θ > 0, choose θ_n such that 0 ≤ θ_n ≤ θ̄_n, where
θ̄_n = min{ θ, μ_n / ‖J_E^p(x_n) − J_E^p(x_{n−1})‖ },  if x_n ≠ x_{n−1};  θ̄_n = θ,  otherwise.
(Step2) 
      Compute
y_n = J_{E*}^q[(1 − α_n)(J_E^p(x_n) + θ_n(J_E^p(x_n) − J_E^p(x_{n−1})))],  w_n = P_C(J_{E*}^q[J_E^p(y_n) − λ_n A(y_n)]).
If x_n = w_n = y_n for some n ≥ 1, then stop. Otherwise,
(Step3) 
      Construct
T_n = { y ∈ E : ⟨J_E^p(y_n) − λ_n A(y_n) − J_E^p(w_n), y − w_n⟩ ≤ 0 }
and Compute
v_n = P_{T_n}(J_{E*}^q[J_E^p(y_n) − λ_n A(w_n)]),  x_{n+1} = J_{E*}^q((1 − δ_n)J_E^p(v_n) + δ_n J_E^p(Tv_n)),
where
λ_{n+1} = min{ μ(‖y_n − w_n‖^p + ‖v_n − w_n‖^p) / (min{p, q}⟨A(y_n) − A(w_n), v_n − w_n⟩), λ_n },  if ⟨A(y_n) − A(w_n), v_n − w_n⟩ > 0;  λ_{n+1} = λ_n,  otherwise.
Set n : = n + 1 and return to Step 1.
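In the special case where E = H is a Hilbert space with p = q = 2 (so J_E^p and J_{E*}^q reduce to the identity and Δ_2 is proportional to the squared distance), Algorithm 3 can be sketched as below. The operator A, the set C, the mapping T, and the parameter sequences are illustrative assumptions, not the experiments of the paper:

```python
import numpy as np

def alg3_hilbert(A, proj_C, T, x0, x1, lam=0.5, mu=0.4, theta=0.3,
                 iters=2000):
    """Hilbert-space sketch of Algorithm 3 (p = q = 2, J = identity)."""
    x_prev, x = x0, x1
    for n in range(1, iters + 1):
        alpha = 1.0 / (n + 1)          # alpha_n -> 0, sum alpha_n = infinity
        mu_n = alpha / (n + 1)         # mu_n = o(alpha_n)
        delta = 0.5                    # delta_n in (a, b), a subset of (0, 1)
        diff = np.linalg.norm(x - x_prev)
        theta_n = min(theta, mu_n / diff) if diff > 0 else theta
        # Step 2: inertial extrapolation with Halpern-type damping toward 0
        y = (1 - alpha) * (x + theta_n * (x - x_prev))
        w = proj_C(y - lam * A(y))
        # Step 3: project y - lam*A(w) onto the half-space T_n
        u = y - lam * A(w)
        normal = y - lam * A(y) - w
        ns = normal @ normal
        if ns > 0 and normal @ (u - w) > 0:
            u = u - (normal @ (u - w)) / ns * normal
        v = u
        # adaptive stepsize update (min{p, q} = 2 here)
        denom = (A(y) - A(w)) @ (v - w)
        if denom > 0:
            num = np.linalg.norm(y - w) ** 2 + np.linalg.norm(v - w) ** 2
            lam = min(mu * num / (2 * denom), lam)
        x_prev, x = x, (1 - delta) * v + delta * T(v)
    return x

# Illustrative data: A(x) = M x + q_v, C = R^2_+, T = metric projection onto C
# (nonexpansive with F(T) = C, hence quasi-nonexpansive).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q_v = np.array([-1.0, -1.0])
A = lambda x: M @ x + q_v
proj_C = lambda x: np.maximum(x, 0.0)
sol = alg3_hilbert(A, proj_C, proj_C, np.ones(2), np.zeros(2))
```

For this data, VI(C, A) ∩ F(T) is the singleton {(1/3, 1/3)}, so the iterates approach that point; the stepsize λ_n is nonincreasing by construction, as Lemma 3 states in general.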
Lemma 3. 
The sequence {λ_n} generated by (17) is monotonically decreasing and bounded from below by min{λ_1, μ/L}.
Proof. 
Let x* ∈ Γ and u_n := J_{E*}^q(J_E^p x_n + θ_n(J_E^p x_n − J_E^p x_{n−1})); then it follows from (5), (6) and (14) that
⟨J_E^p u_n − J_E^p x_n, u_n − x*⟩ ≤ ‖u_n − x*‖ ‖J_E^p u_n − J_E^p x_n‖
= θ_n ‖J_E^p x_n − J_E^p x_{n−1}‖ ‖u_n − x*‖
≤ θ_n ‖J_E^p x_n − J_E^p x_{n−1}‖ ( (1/p)‖u_n − x*‖^p + 1/q )
≤ (2^{p−1}/p) θ_n ‖J_E^p x_n − J_E^p x_{n−1}‖ ( ‖x_n − u_n‖^p + ‖x_n − x*‖^p ) + (θ_n/q) ‖J_E^p x_n − J_E^p x_{n−1}‖
≤ ((2^{p−1} μ_n)/(p τ_p)) ( Δ_p(x_n, u_n) + Δ_p(x_n, x*) ) + μ_n/q.
Using (4), we obtain
Δ_p(u_n, x*) = Δ_p(x_n, x*) − Δ_p(x_n, u_n) + ⟨J_E^p u_n − J_E^p x_n, u_n − x*⟩
≤ (1 + (2^{p−1} μ_n)/(p τ_p)) Δ_p(x_n, x*) − (1 − (2^{p−1} μ_n)/(p τ_p)) Δ_p(x_n, u_n) + μ_n/q.
Observe from (C4) that for any ϵ ∈ (0, pτ_p/2^{p−1}), there exists a natural number N such that for all n ≥ N,
μ_n/α_n < (p τ_p ϵ)/2^{p−1},  which implies  (2^{p−1} μ_n)/(p τ_p) < α_n ϵ;
then, for some M > 0, letting σ denote the zero vector in E, we obtain from (13), (15) and (18) that
Δ_p(y_n, x*) = Δ_p(J_{E*}^q[(1 − α_n)J_E^p(u_n)], x*)
≤ (1 − α_n)Δ_p(u_n, x*) + α_n Δ_p(σ, x*)
≤ (1 − α_n[1 − ϵ])Δ_p(x_n, x*) − (1 − α_n ϵ)Δ_p(x_n, u_n) + α_n[Δ_p(σ, x*) + M].
Using (8), (10) and (16), we obtain
Δ_p(v_n, x*) = Δ_p(P_{T_n}[J_{E*}^q(J_E^p(y_n) − λ_n A(w_n))], x*)
≤ Δ_p(J_{E*}^q(J_E^p(y_n) − λ_n A(w_n)), x*) − Δ_p(J_{E*}^q(J_E^p(y_n) − λ_n A(w_n)), v_n)
= V_p(J_E^p(y_n) − λ_n A(w_n), x*) − V_p(J_E^p(y_n) − λ_n A(w_n), v_n)
= Δ_p(y_n, x*) − Δ_p(y_n, v_n) + λ_n⟨A(w_n), x* − v_n⟩.
Since w_n = P_C(J_{E*}^q[J_E^p(y_n) − λ_n A(y_n)]) lies in C and A is pseudomonotone, we have ⟨A(w_n), w_n − x*⟩ ≥ 0. Thus,
⟨A(w_n), x* − v_n⟩ ≤ ⟨A(w_n), w_n − v_n⟩.
By using definition of T n , we have
⟨J_E^p(y_n) − λ_n A(y_n) − J_E^p(w_n), v_n − w_n⟩ ≤ 0,
hence
⟨J_E^p(y_n) − λ_n A(w_n) − J_E^p(w_n), v_n − w_n⟩ ≤ λ_n⟨A(y_n) − A(w_n), v_n − w_n⟩.
Using (4), (5), (10) and (17), we obtain
Δ_p(v_n, x*) ≤ Δ_p(y_n, x*) − Δ_p(y_n, v_n) + λ_n⟨A(w_n), x* − v_n⟩
≤ Δ_p(y_n, x*) − Δ_p(y_n, w_n) − Δ_p(w_n, v_n) + λ_n⟨A(y_n) − A(w_n), v_n − w_n⟩
≤ Δ_p(y_n, x*) − Δ_p(y_n, w_n) − Δ_p(w_n, v_n) + ((μ λ_n)/(min{p, q} λ_{n+1})) (‖y_n − w_n‖^p + ‖v_n − w_n‖^p)
≤ Δ_p(y_n, x*) − Δ_p(y_n, w_n) − Δ_p(w_n, v_n) + ((μ λ_n)/(τ_p min{p, q} λ_{n+1})) (Δ_p(y_n, w_n) + Δ_p(w_n, v_n))
= Δ_p(y_n, x*) − (1 − (μ λ_n)/(τ_p min{p, q} λ_{n+1})) Δ_p(y_n, w_n) − (1 − (μ λ_n)/(τ_p min{p, q} λ_{n+1})) Δ_p(w_n, v_n).
Since lim_{n→∞} λ_n exists and μ ∈ (0, τ_p), we have lim_{n→∞} (1 − (μ λ_n)/(τ_p min{p, q} λ_{n+1})) = 1 − μ/(τ_p min{p, q}) > 0. Then, for all n ≥ N, using Lemma 1 and (10), it follows from the definition of x_{n+1} in (16), (19) and (20) that
Δ_p(x_{n+1}, x*) = V_p((1 − δ_n)J_E^p(v_n) + δ_n J_E^p(Tv_n), x*)
≤ (1/p)‖x*‖^p − (1 − δ_n)⟨J_E^p(v_n), x*⟩ − δ_n⟨J_E^p(Tv_n), x*⟩ + ((1 − δ_n)/q)‖J_E^p(v_n)‖^q + (δ_n/q)‖J_E^p(Tv_n)‖^q − (1 − δ_n)δ_n ρ_r(‖J_E^p(v_n) − J_E^p(Tv_n)‖)
≤ Δ_p(v_n, x*) − (1 − δ_n)δ_n ρ_r(‖J_E^p(v_n) − J_E^p(Tv_n)‖)
≤ (1 − α_n[1 − ϵ])Δ_p(x_n, x*) + α_n[Δ_p(σ, x*) + M] − (1 − (μ λ_n)/(τ_p min{p, q} λ_{n+1}))[Δ_p(y_n, w_n) + Δ_p(w_n, v_n)] − (1 − α_n ϵ)Δ_p(x_n, u_n) − (1 − δ_n)δ_n ρ_r(‖J_E^p(v_n) − J_E^p(Tv_n)‖)
≤ (1 − α_n[1 − ϵ])Δ_p(x_n, x*) + α_n[Δ_p(σ, x*) + M]
≤ max{ Δ_p(x_n, x*), [Δ_p(σ, x*) + M]/(1 − ϵ) }
≤ ⋯ ≤ max{ Δ_p(x_N, x*), [Δ_p(σ, x*) + M]/(1 − ϵ) }.
By induction
Δ_p(x_n, x*) ≤ max{ Δ_p(x_N, x*), [Δ_p(σ, x*) + M]/(1 − ϵ) },  ∀ n ≥ N.
Thus, {Δ_p(x_n, x*)} is bounded, and since τ_p‖x_n − x*‖^p ≤ Δ_p(x_n, x*) by (5), we conclude that {x_n} is bounded. This means that {v_n}, {w_n}, and {y_n} are also bounded.    □
The following lemma, which was essentially proved in [13], is important and crucial in the proof of our main result.
Lemma 4 
([13], Lemma 3.4). Let {y_n} and {w_n} be the two sequences generated by Algorithm 3. If there exists a subsequence {y_{n_s}} of {y_n} that converges weakly to a point z ∈ E and lim_{s→∞} ‖y_{n_s} − w_{n_s}‖ = 0, then z ∈ VI(C, A).
We demonstrate that Algorithm 3 converges strongly under assumptions (C1)–(C6), based on the analysis described above and Lemma 4.
Theorem 1. 
Suppose that Assumption 1 holds. Then, the sequence {x_n} defined by Algorithm 3 converges strongly to a point x* ∈ Γ.
Proof. 
Let x* ∈ Γ and u_n := J_{E*}^q(J_E^p x_n + θ_n(J_E^p x_n − J_E^p x_{n−1})); then, using (11), (12), (15) and (18), we obtain
Δ_p(y_n, x*) = V_p((1 − α_n)J_E^p(u_n), x*)
≤ V_p(α_n J_E^p(x*) + (1 − α_n)J_E^p(u_n), x*) + α_n⟨y_n − x*, J_E^p(x*)⟩
≤ (1 − α_n)Δ_p(u_n, x*) + α_n⟨y_n − x*, J_E^p(x*)⟩
≤ (1 − α_n)(1 + (2^{p−1}μ_n)/(pτ_p))Δ_p(x_n, x*) + α_n⟨y_n − x*, J_E^p(x*)⟩ + μ_n/q − (1 − α_n)(1 − (2^{p−1}μ_n)/(pτ_p))Δ_p(x_n, u_n).
For any ϵ > 0 such that ϵ 0 , p τ p 2 p 1 , there exists a natural number N such that for all n N , we obtain
Δ_p(y_n, x*) ≤ (1 − α_n(1 − ϵ))Δ_p(x_n, x*) + α_n[⟨y_n − x*, J_E^p(x*)⟩ + μ_n/(α_n q)] − (1 − α_n ϵ)Δ_p(x_n, u_n).
Using (20) and (21), it follows that
Δ_p(x_{n+1}, x*) ≤ (1 − α_n[1 − ϵ])Δ_p(x_n, x*) + α_n[⟨y_n − x*, J_E^p(x*)⟩ + μ_n/(α_n q)] − (1 − (μλ_n)/(τ_p min{p, q}λ_{n+1}))[Δ_p(y_n, w_n) + Δ_p(w_n, v_n)] − (1 − α_n ϵ)Δ_p(x_n, u_n) − (1 − δ_n)δ_n ρ_r(‖J_E^p(v_n) − J_E^p(Tv_n)‖)
≤ (1 − α_n[1 − ϵ])Δ_p(x_n, x*) + α_n[⟨y_n − x*, J_E^p(x*)⟩ + μ_n/(α_n q)].
Next, using Lemma 2 and (23), it remains to show that
lim sup_{s→∞} ⟨y_{n_s} − x*, J_E^p(x*)⟩ ≤ 0
for every subsequence { Δ p ( x n s , x * ) } of { Δ p ( x n , x * ) } satisfying
lim inf_{s→∞} (Δ_p(x_{n_s+1}, x*) − Δ_p(x_{n_s}, x*)) ≥ 0.
Now, let { Δ p ( x n s , x * ) } be a subsequence of { Δ p ( x n , x * ) } such that
lim inf_{s→∞} (Δ_p(x_{n_s+1}, x*) − Δ_p(x_{n_s}, x*)) ≥ 0
holds and, from (22), we define {Υ_{n_s}} as follows:
Υ_{n_s} := (1 − α_{n_s} ϵ)Δ_p(x_{n_s}, u_{n_s}) + (1 − δ_{n_s})δ_{n_s} ρ_r(‖J_E^p(v_{n_s}) − J_E^p(Tv_{n_s})‖) + (1 − (μλ_{n_s})/(τ_p min{p, q}λ_{n_s+1}))[Δ_p(y_{n_s}, w_{n_s}) + Δ_p(w_{n_s}, v_{n_s})];
thus, from (22), we obtain
lim sup_{s→∞} Υ_{n_s} ≤ lim sup_{s→∞} [Δ_p(x_{n_s}, x*) − Δ_p(x_{n_s+1}, x*)] + lim sup_{s→∞} α_{n_s}[‖y_{n_s} − x*‖ ‖J_E^p(x*)‖ + μ_{n_s}/(α_{n_s} q) − (1 − ϵ)Δ_p(x_{n_s}, x*)]
= lim sup_{s→∞} [Δ_p(x_{n_s}, x*) − Δ_p(x_{n_s+1}, x*)]
= −lim inf_{s→∞} [Δ_p(x_{n_s+1}, x*) − Δ_p(x_{n_s}, x*)] ≤ 0.
Hence, lim sup s Υ n s 0 , which implies that lim s Υ n s = 0 . It follows from (24) that
lim_{s→∞} Δ_p(x_{n_s}, u_{n_s}) = lim_{s→∞} Δ_p(y_{n_s}, w_{n_s}) = lim_{s→∞} Δ_p(w_{n_s}, v_{n_s}) = 0
and
lim s ρ r ( | | J E p ( v n s ) J E p ( T v n s ) | | ) = 0 .
By the property of ρ r , we obtain
lim s | | J E p ( v n s ) J E p ( T v n s ) | | = 0
and, since J E * q is uniformly continuous on a bounded subset of E * , we obtain
lim s | | v n s T v n s | | = 0 .
Additionally, using (5) and (25), we obtain
lim_{s→∞} ‖x_{n_s} − u_{n_s}‖ = lim_{s→∞} ‖y_{n_s} − w_{n_s}‖ = lim_{s→∞} ‖w_{n_s} − v_{n_s}‖ = 0.
With J E p being uniformly norm-to-norm continuous on bounded sets, we also have
lim_{s→∞} ‖J_E^p x_{n_s} − J_E^p u_{n_s}‖ = lim_{s→∞} ‖J_E^p y_{n_s} − J_E^p w_{n_s}‖ = lim_{s→∞} ‖J_E^p w_{n_s} − J_E^p v_{n_s}‖ = 0.
From the definition, y_n := J_{E*}^q((1 − α_n)J_E^p u_n), where u_n = J_{E*}^q[J_E^p x_n + θ_n(J_E^p x_n − J_E^p x_{n−1})]; then
| | J E p y n J E p u n | | = α n | | J E p u n | |
which implies from the fact lim n α n = 0 and the boundedness of { J E p u n } that
lim s | | J E p y n s J E p u n s | | = 0
with
‖J_E^p v_{n_s} − J_E^p x_{n_s}‖ ≤ ‖J_E^p v_{n_s} − J_E^p w_{n_s}‖ + ‖J_E^p w_{n_s} − J_E^p y_{n_s}‖ + ‖J_E^p y_{n_s} − J_E^p u_{n_s}‖ + ‖J_E^p u_{n_s} − J_E^p x_{n_s}‖;
it follows from (29) and (30) that
lim s | | J E p v n s J E p x n s | | = 0
Moreover, since J_{E*}^q is uniformly continuous, we obtain from (28) and (30) that
lim s | | y n s x n s | | = 0
and from (16), we obtain ‖J_E^p x_{n+1} − J_E^p v_n‖ = δ_n‖J_E^p Tv_n − J_E^p v_n‖; with (26), since δ_n ∈ (0, 1) for all n ≥ 1, we obtain
lim s | | J E p x n s + 1 J E p v n s | | = 0 .
Thus, from (31), we obtain
lim s | | J E p x n s + 1 J E p x n s | | = 0 .
By the uniform continuity of J_{E*}^q on bounded subsets of E*, we conclude from (31) that
lim s | | v n s x n s | | = 0
and
lim s | | x n s + 1 x n s | | = 0 .
Since {x_{n_s}} is bounded, there exists a subsequence {x_{n_{s_k}}} of {x_{n_s}} that converges weakly to some point z ∈ E. By (33), we obtain v_{n_s} ⇀ z; from (27) and Definition 2, we conclude that z ∈ F(T). Furthermore, from (32), we obtain y_{n_s} ⇀ z. This, together with lim_{s→∞} ‖y_{n_s} − w_{n_s}‖ = 0 in (28) and Lemma 4, implies that z ∈ VI(C, A); therefore z ∈ Γ. Finally, with σ the zero vector in C, it follows from the definition of the Bregman projection that
lim sup_{s→∞} ⟨y_{n_s} − x*, J_E^p(x*)⟩ = lim_{k→∞} ⟨y_{n_{s_k}} − x*, J_E^p(x*)⟩ = ⟨z − x*, J_E^p(x*)⟩ = ⟨x* − z, J_E^p(σ) − J_E^p(x*)⟩ ≤ 0.
We know from (23), that
Δ_p(x_{n_s+1}, x*) ≤ (1 − α_{n_s}[1 − ϵ])Δ_p(x_{n_s}, x*) + α_{n_s}[⟨y_{n_s} − x*, J_E^p(x*)⟩ + μ_{n_s}/(α_{n_s} q)].
Hence, combining (34) and (35) with Lemma 2, we conclude that lim_{n→∞} Δ_p(x_n, x*) = 0; together with the fact that τ_p‖x_n − x*‖^p ≤ Δ_p(x_n, x*), we obtain x_n → x* as n → ∞. This completes the proof.    □
We obtain the following corollary from Theorem 1 by setting T = I, the identity mapping, in Algorithm 3.
Corollary 1. 
Let (C1)–(C3) of Assumption 1 hold. Choose x 0 , x 1 E to be arbitrary, θ ( 0 , τ p ) , μ ( 0 , τ p ) , and λ 1 > 0 . Calculate x n + 1 as follows:
(Step1) 
      Given the iterates x n 1 and x n for each n 1 , θ > 0 , choose θ n such that 0 θ n θ n ¯ , where
θ̄_n = min{ θ, μ_n / ‖J_E^p(x_n) − J_E^p(x_{n−1})‖ },  if x_n ≠ x_{n−1};  θ̄_n = θ,  otherwise.
(Step2) 
      Compute
y_n = J_{E*}^q[(1 − α_n)(J_E^p(x_n) + θ_n(J_E^p(x_n) − J_E^p(x_{n−1})))],  w_n = P_C(J_{E*}^q[J_E^p(y_n) − λ_n A(y_n)]).
If x_n = w_n = y_n for some n ≥ 1, then stop. Otherwise,
(Step3) 
      Construct
T_n = { y ∈ E : ⟨J_E^p(y_n) − λ_n A(y_n) − J_E^p(w_n), y − w_n⟩ ≤ 0 }
and Compute
x_{n+1} = P_{T_n}(J_{E*}^q[J_E^p(y_n) − λ_n A(w_n)]),
where
λ_{n+1} = min{ μ(‖y_n − w_n‖^p + ‖x_{n+1} − w_n‖^p) / (min{p, q}⟨A(y_n) − A(w_n), x_{n+1} − w_n⟩), λ_n },  if ⟨A(y_n) − A(w_n), x_{n+1} − w_n⟩ > 0;  λ_{n+1} = λ_n,  otherwise.
Set n : = n + 1 and return to Step 1.
Then, {x_n}_{n=0}^∞ converges strongly to a point p ∈ VI(C, A).
Next, if, in Algorithm 3, we assume that A = 0 , we obtain the following corollary:
Corollary 2. 
Let E be a p-uniformly convex and uniformly smooth real Banach space with weakly sequentially continuous duality mapping J_E^p. Let T : E → E be a quasi-Bregman nonexpansive mapping such that F(T) ≠ ∅. Suppose {δ_n} is a sequence in (a, b) for some 0 < a < b < 1 and {μ_n} is a positive sequence in (0, pτ_p/2^{p−1}), where τ_p is defined in (5), with μ_n = o(α_n), where {α_n} is a sequence in (0, 1) such that lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞. Let {x_n}_{n=0}^∞ be the sequence generated by Algorithm 4 as follows:
Algorithm 4: First Modified Inertial Mann-type Method
Initialization: Choose x 0 , x 1 E to be arbitrary, θ ( 0 , τ p ) , μ ( 0 , τ p ) and λ 1 > 0 .
Iterative Steps: Calculate x n + 1 as follows:
(Step1) 
      Given the iterates x n 1 and x n for each n 1 , θ > 0 , choose θ n such that 0 θ n θ n ¯ , where
θ̄_n = min{ θ, μ_n / ‖J_E^p(x_n) − J_E^p(x_{n−1})‖ },  if x_n ≠ x_{n−1};  θ̄_n = θ,  otherwise.
(Step2) 
      Compute
y_n = J_{E*}^q[(1 − α_n)(J_E^p(x_n) + θ_n(J_E^p(x_n) − J_E^p(x_{n−1})))],  x_{n+1} = J_{E*}^q((1 − δ_n)J_E^p(y_n) + δ_n J_E^p(Ty_n)).
Then, { x n } n = 0 converges strongly to a point p F ( T ) .
Proof. 
We observe that the necessary assertion is provided by the method of proof of Theorem 1.    □
Let B : E → 2^{E*} be a set-valued mapping with domain D(B) = {x ∈ E : B(x) ≠ ∅} and range R(B) = {x* ∈ E* : x* ∈ B(x) for some x ∈ D(B)}, and let the graph of B be Gra(B) := {(x, x*) ∈ E × E* : x* ∈ Bx}. Then B is said to be monotone if ⟨x* − y*, x − y⟩ ≥ 0 whenever (x, x*), (y, y*) ∈ Gra(B), and B is said to be maximal monotone if it is monotone and its graph is not properly contained in the graph of any other monotone operator on E. It is known that if B is maximal monotone, then the set B^{−1}(0) := {u ∈ E : 0 ∈ B(u)} is closed and convex. The resolvent of B is the operator Res_σB : E → 2^E defined by
Res σ B = ( J E p + σ B ) 1 J E p .
It is known that Res_{σB} is single-valued, Bregman firmly nonexpansive, and F̂(Res_{σB}) = F(Res_{σB}) = B^{−1}(0) (see [28,29]). Since every Bregman firmly nonexpansive mapping is quasi-Bregman nonexpansive, Corollary 2 yields the following result as a special case:
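As a concrete illustration of the resolvent, in a Hilbert space J_E^p is the identity and Res_{σB} = (I + σB)^{−1}. For the particular choice B = ∂‖·‖_1 (our choice, not from the text), this resolvent is the classical soft-thresholding operator, whose fixed-point set is exactly B^{−1}(0) = {0}:

```python
import numpy as np

def resolvent_l1(x, sigma):
    """Resolvent of B = subdifferential of the l1-norm in R^n:
    (I + sigma*B)^{-1}(x) is componentwise soft-thresholding at level sigma."""
    return np.sign(x) * np.maximum(np.abs(x) - sigma, 0.0)
```

Any x with a component larger than sigma in absolute value is moved, so the only fixed point is the origin, matching F(Res_{σB}) = B^{−1}(0).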
Corollary 3. 
Let E be a p-uniformly convex and uniformly smooth real Banach space with sequentially continuous duality mapping J_E^p. Let B : E → 2^{E*} be a maximal monotone operator with B^{−1}(0) ≠ ∅. Suppose {δ_n} is a sequence in (a, b) for some 0 < a < b < 1, and {μ_n} is a positive sequence in (0, pτ_p/2^{p−1}), where τ_p is defined in (5), with μ_n = o(α_n), where {α_n} is a sequence in (0, 1) such that lim_{n→∞} α_n = 0 and ∑_{n=1}^∞ α_n = ∞. Let {x_n}_{n=0}^∞ be the sequence generated by Algorithm 5 as follows:
Algorithm 5: Second Modified Inertial Mann-type Method
Initialization: Choose x_0, x_1 ∈ E to be arbitrary and θ ∈ (0, τ_p).
Iterative Steps: Calculate x n + 1 as follows:
(Step1) 
      Given the iterates x_{n−1} and x_n for each n ≥ 1 and θ > 0, choose θ_n such that 0 ≤ θ_n ≤ θ̄_n, where
θ̄_n = min{ θ, μ_n / ‖J_E^p(x_n) − J_E^p(x_{n−1})‖ }, if x_n ≠ x_{n−1}; θ̄_n = θ, otherwise.
(Step2) 
      Compute
y_n = J_{E*}^q[(1 − α_n)(J_E^p(x_n) + θ_n(J_E^p(x_n) − J_E^p(x_{n−1})))],
x_{n+1} = J_{E*}^q((1 − δ_n) J_E^p(y_n) + δ_n J_E^p(Res_{σB} y_n)).
Then, { x_n }_{n=0}^∞ converges strongly to a point p ∈ B^{−1}(0).
Remark 1. 
We make the following observations:
(a)
Theorem 1 improves, extends, and generalizes the corresponding results in [12,13,30,31,32,33], in the sense that our method incorporates an inertial term to improve the convergence rate and/or the space considered is more general.
(b)
We observe that the result in Corollary 1 improves and extends the results in [7,34,35,36] from Hilbert spaces to p-uniformly convex and uniformly smooth real Banach spaces, and from the monotone variational inequality problem to the pseudomonotone variational inequality problem.
(c)
Corollary 3 improves and extends the corresponding results of Wei et al. [37], Ibaraki [38], and Tianchai [39], in the sense that our iterative method does not require the computation of C_{n+1} at each step, the class of mappings considered is more general, and the inertial term in our method increases the convergence rate of the generated sequence.

4. Numerical Examples

In this section, we demonstrate the efficiency of Algorithm 3 through numerical experiments. Furthermore, we compare our iterative method with the methods of Censor et al. [7] (Algorithm 1) and Ma et al. [12] (Algorithm 2).
Example 1. 
Let E = L²[0, 1] and C = {x ∈ L²[0, 1] : ⟨a, x⟩ ≤ b}, where a(t) = t² + 1 and b = 1, with norm ‖x‖ = (∫_0^1 |x(t)|² dt)^{1/2} and inner product ⟨x, y⟩ = ∫_0^1 x(t)y(t) dt for all x, y ∈ L²[0, 1]. The metric projection P_C is given by
P_C(x) = x, if x ∈ C; P_C(x) = x + ((b − ⟨a, x⟩)/‖a‖²_{L²}) a, otherwise.
Let A : L²[0, 1] → L²[0, 1] be defined by A(x)(t) = e^{−‖x‖} ∫_0^t x(s) ds for all x ∈ L²[0, 1] and t ∈ [0, 1]; then A is pseudomonotone and uniformly continuous (see [40]). Let T(x)(t) = ∫_0^t x(s) ds for all x ∈ L²[0, 1] and t ∈ [0, 1]; then T is a nonexpansive mapping. For the control parameters, we use α_n = 1/(5n + 2), δ_n = 1/2 − α_n, μ_n = α_n/n^{0.01}, and θ_n = θ̄_n. Since the exact solution of the problem is unknown, we define TOL_n := ‖x_{n+1} − x_n‖² and stop the iteration once TOL_n < ε, where ε is a predetermined error tolerance; here we set ε = 10^{−5}. For the numerical experiments illustrated in Figure 1 and Table 1 below, we consider the following cases.
Case 1: 
x_0 = t^3 and x_1 = t^2 + t.
Case 2: 
x_0 = t^3 and x_1 = t.
Case 3: 
x_0 = (t^2/2) + t and x_1 = 2t^3 + t.
Case 4: 
x_0 = t^2 and x_1 = (t/5)^3 + t.
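A minimal discretized model of Example 1 can be set up as follows; the grid size and the trapezoidal quadrature are implementation choices of ours, not taken from the paper:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)                    # uniform grid on [0, 1]
h = t[1] - t[0]
w = np.full_like(t, h); w[0] = w[-1] = h / 2.0    # trapezoidal weights

def inner(x, y):                                  # <x, y> = integral of x(t) y(t) over [0, 1]
    return float(np.sum(w * x * y))

a, b = t**2 + 1.0, 1.0                            # C = {x : <a, x> <= b}

def proj_C(x):                                    # metric projection onto the half-space C
    s = inner(a, x)
    return x if s <= b else x + (b - s) / inner(a, a) * a

def cum_int(x):                                   # t -> integral of x over [0, t], trapezoid rule
    return np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) * h / 2.0)))

def A(x):                                         # A(x)(t) = exp(-||x||) * integral_0^t x(s) ds
    return np.exp(-np.sqrt(inner(x, x))) * cum_int(x)

def T(x):                                         # T(x)(t) = integral_0^t x(s) ds (nonexpansive)
    return cum_int(x)
```

For instance, x(t) = t^3 already lies in C because ⟨a, t^3⟩ = 5/12 ≤ 1, so the projection leaves it unchanged, while a constant function of value 3 is projected onto the boundary hyperplane ⟨a, x⟩ = b.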
Table 1. Comparison of Algorithm 3, Algorithm 2, and Algorithm 1.
Cases              Algorithm 3    Algorithm 2    Algorithm 1
1   Iter.               16             38             89
    CPU (time)        4.2781         5.7812         7.7115
2   Iter.               13             45            104
    CPU (time)        3.3712         6.7367        10.3962
3   Iter.               15             46            116
    CPU (time)        3.8396         7.0305        10.5921
4   Iter.               15             40             94
    CPU (time)        3.7721         6.2475         7.9981
Figure 1. Error plots for the comparison of Algorithm 3, Algorithm 2, and Algorithm 1 in Example 1. (Top Left): Case 1; (Top Right): Case 2; (Bottom Left): Case 3; (Bottom Right): Case 4.
Example 2. 
Let E = R^N. Define A : R^N → R^N by A(x) = Mx + q, where M = VΣV^⊤, with V = I − 2vv^⊤/‖v‖² the Householder reflector determined by v, and Σ = diag(σ_{11}, σ_{12}, …, σ_{1N}) the diagonal matrix whose entries are
σ_{1j} = cos(jπ/(N + 1)) + 1 + [cos(π/(N + 1)) + 1 − Ĉ(cos(Nπ/(N + 1)) + 1)]/(Ĉ − 1), j = 1, 2, …, N,
with Ĉ the prescribed condition number of M ([41], Example 5.2). In the numerical computation, we choose Ĉ = 10^4 and q = 0, and take the entries of the vector v ∈ R^N uniformly in (−1, 1). Then A is pseudomonotone and Lipschitz continuous with constant K = ‖M‖ (see [41]). By setting C = {x ∈ R^N : ‖x‖ ≤ 1}, the projection onto C is computed efficiently in Matlab. Moreover, we examine various instances of the problem dimension, namely N = 20, 30, 40, 60, with starting points x_1 = (1, 1, …, 1) and x_0 = (0, 0, …, 0). In this example, we take the stopping criterion to be ε = 10^{−5} and obtain the numerical results shown in Table 2 and Figure 2.
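The construction of M can be checked directly: since V is orthogonal, the σ_{1j} are the singular values of M, and the formula for σ_{1j} forces cond(M) = Ĉ. A sketch (seed and dimension are illustrative):

```python
import numpy as np

N, C_hat = 20, 1.0e4                       # problem size and target condition number
rng = np.random.default_rng(0)             # any seed; entries of v uniform in (-1, 1)
v = rng.uniform(-1.0, 1.0, size=N)

V = np.eye(N) - 2.0 * np.outer(v, v) / (v @ v)   # Householder reflector (orthogonal, symmetric)
j = np.arange(1, N + 1)
c = np.cos(j * np.pi / (N + 1))
sigma = c + 1.0 + (c[0] + 1.0 - C_hat * (c[-1] + 1.0)) / (C_hat - 1.0)
M = V @ np.diag(sigma) @ V.T               # symmetric positive definite, cond(M) = C_hat

q = np.zeros(N)
def A(x):
    return M @ x + q                       # Lipschitz with constant ||M||
```

A short computation shows σ_max = Ĉ(c_1 − c_N)/(Ĉ − 1) and σ_min = (c_1 − c_N)/(Ĉ − 1), so their ratio is exactly Ĉ.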
Example 3. 
Let E = l_2(R) with norm ‖·‖_{l_2}, where l_2(R) := {x = (x_1, x_2, x_3, …), x_i ∈ R : ∑_{i=1}^∞ |x_i|² < ∞} and ‖x‖_{l_2} := (∑_{i=1}^∞ |x_i|²)^{1/2} for x ∈ l_2(R). Let C = {x ∈ l_2(R) : |x_i| ≤ 1/i, i = 1, 2, 3, …}, so that we have an explicit componentwise formula for P_C. Now, define the operator A : l_2(R) → l_2(R) by
A(x) = ((‖x‖ + 1)/(‖x‖ + α)) x,
for some α > 0. Then A is pseudomonotone on l_2(R) (see [42]). In this experiment, the stopping criterion is ε = 10^{−8}, and the starting points are selected as follows:
Case 1: 
Take x_1 = (1, 1/2, 1/3, …) and x_0 = (1/2, 1/5, 1/10, …).
Case 2: 
Take x_1 = (1/2, 1/5, 1/10, …) and x_0 = (1, 1/2, 1/3, …).
Case 3: 
Take x_1 = (1, 1/4, 1/9, …) and x_0 = (1/2, 1/4, 1/8, …).
Case 4: 
Take x_1 = (1/2, 1/4, 1/8, …) and x_0 = (1, 1/4, 1/9, …).
The numerical results are reported in Table 3 and Figure 3.
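Under our reading of Example 3, the projection P_C is a componentwise clip and A acts by a scalar factor; the sketch below works in a finite truncation of l_2, and both formulas (the radii 1/i of C and the expression for A) are our reconstruction of the displays above:

```python
import numpy as np

def proj_C(x):
    """Componentwise projection onto C = {x : |x_i| <= 1/i} (truncated l2 model)."""
    i = np.arange(1, x.size + 1)
    return np.clip(x, -1.0 / i, 1.0 / i)

def A(x, alpha=2.0):
    """Operator of Example 3 as reconstructed here: A(x) = ((||x|| + 1)/(||x|| + alpha)) x."""
    nx = np.linalg.norm(x)
    return (nx + 1.0) / (nx + alpha) * x
```

Clipping each coordinate independently is exact here because C is a product of intervals, so the projection decouples coordinatewise.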

5. Conclusions

The paper has proposed a new inertial subgradient extragradient method with the modified Mann algorithm for solving the Lipschitz pseudomonotone variational inequality problem and the fixed point of quasi-Bregman nonexpansive mapping in p-uniformly convex and uniformly smooth real Banach spaces. Under some suitable conditions imposed on parameters, we have proved the strong convergence of the algorithms. The efficiency of the proposed algorithm has also been illustrated by numerical experiments in comparison with other existing methods.

Author Contributions

All the authors contributed equally in the development of this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful to the Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, South Africa, and the Department of Mathematics and Statistics, Tshwane University of Technology, South Africa, for supporting this research work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. Comptes Rendus Acad. Sci. Paris 1964, 258, 4413–4416.
  2. Hartman, P.; Stampacchia, G. On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115, 271–310.
  3. Smith, M.J. The existence, uniqueness and stability of traffic equilibria. Transp. Res. Part B 1979, 13, 295–304.
  4. Dafermos, S. Traffic equilibria and variational inequalities. Transp. Sci. 1980, 14, 42–54.
  5. Isac, G.; Cojocaru, M.G. Variational inequalities, complementarity problems and pseudo-monotonicity. Dynamical aspects. Semin. Fixed Point Theory Cluj-Napoca 2002, 3, 41–62.
  6. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
  7. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  8. Ali, B.; Ugwunnadi, G.C. Convergence of implicit and explicit schemes for common fixed points for finite families of asymptotically nonexpansive mappings. Nonlinear Anal. Hybrid Syst. 2011, 5, 492–501.
  9. Bregman, L.M. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7, 200–217.
  10. Khan, A.R.; Ugwunnadi, G.C.; Makukula, Z.G.; Abbas, M. Strong convergence of inertial subgradient extragradient method for solving variational inequality in Banach space. Carpathian J. Math. 2019, 35, 327–338.
  11. Kuo, L.W.; Sahu, D.R. Bregman distance and strong convergence of proximal-type algorithms. Abstr. Appl. Anal. 2013, 2013, 590519.
  12. Ma, F.; Yang, J.; Yin, M. A strong convergence theorem for solving pseudo-monotone variational inequalities and fixed point problems using subgradient extragradient method in Banach spaces. AIMS Math. 2021, 7, 5015–5028.
  13. Ceng, L.-C.; Liou, Y.-C.; Yin, T.-C. On Mann-type accelerated projection methods for pseudomonotone variational inequalities and common fixed points in Banach spaces. AIMS Math. 2023, 8, 21138–21160.
  14. Ugwunnadi, G.C.; Ali, B.; Minjibir, M.S.; Idris, I. Strong convergence theorem for quasi-Bregman strictly pseudocontractive mappings and equilibrium problems in reflexive Banach spaces. Fixed Point Theory Appl. 2014, 2014, 231.
  15. Lindenstrauss, J.; Tzafriri, L. Classical Banach Spaces II; Springer: Berlin, Germany, 1979.
  16. Xu, H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16, 1127–1138.
  17. Cioranescu, I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems; Kluwer Academic: Dordrecht, The Netherlands, 1990.
  18. Zalinescu, C. Convex Analysis in General Vector Spaces; World Scientific Publishing: Singapore, 2002.
  19. Naraghirad, E.; Yao, J.C. Bregman weak relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2013, 2013, 141.
  20. Schöpfer, F.; Schuster, T.; Louis, A.K. An iterative regularization method for the solution of the split feasibility problem in Banach spaces. Inverse Probl. 2008, 24, 055008.
  21. Takahashi, W. Nonlinear Functional Analysis—Fixed Point Theory and Applications; Yokohama Publishers: Yokohama, Japan, 2000.
  22. Alber, Y.I. Metric and generalized projection operators in Banach spaces: Properties and applications. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type; Lecture Notes in Pure and Applied Mathematics; Dekker: New York, NY, USA, 1996; Volume 178, pp. 15–50.
  23. Kohsaka, F.; Takahashi, W. Proximal point algorithm with Bregman function in Banach spaces. J. Nonlinear Convex Anal. 2005, 6, 505–523.
  24. Martín-Márquez, V.; Reich, S.; Sabach, S. Iterative methods for approximating fixed points of Bregman nonexpansive operators. Discrete Contin. Dyn. Syst. Ser. S 2013, 6, 1043–1063.
  25. Phelps, R.R. Convex Functions, Monotone Operators and Differentiability, 2nd ed.; Lecture Notes in Mathematics; Springer: Berlin, Germany, 1993.
  26. Reich, S.; Sabach, S. Two strong convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 2010, 31, 22–44.
  27. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 742–750.
  28. Bauschke, H.H.; Borwein, J.M.; Combettes, P.L. Bregman monotone optimization algorithms. SIAM J. Control Optim. 2003, 42, 596–636.
  29. Reich, S.; Sabach, S. Existence and approximation of fixed points of Bregman firmly nonexpansive mappings in reflexive Banach spaces. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer Optimization and Its Applications; Springer: New York, NY, USA, 2011; pp. 301–316.
  30. Chang, S.S.; Wen, C.F.; Yao, J.C. Zero point problem of accretive operators in Banach spaces. Bull. Malays. Math. Sci. Soc. 2019, 42, 105–118.
  31. Liu, Y.; Kong, H. Strong convergence theorems for relatively nonexpansive mappings and Lipschitz continuous monotone mappings in Banach spaces. Indian J. Pure Appl. Math. 2019, 50, 1049–1065.
  32. Ma, F. A subgradient extragradient algorithm for solving monotone variational inequalities in Banach spaces. J. Inequal. Appl. 2020, 2020, 26.
  33. Thong, D.V.; Hieu, D.V. Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms 2019, 80, 1283–1307.
  34. Tan, B.; Li, S.; Qin, X. On modified subgradient extragradient methods for pseudomonotone variational inequality problems with applications. Comput. Appl. Math. 2021, 40, 253.
  35. Thong, D.V.; Hieu, D.V. Strong convergence of extragradient methods with a new step size for solving variational inequality problems. Comput. Appl. Math. 2019, 38, 136.
  36. Thong, D.V.; Vinh, N.T.; Cho, Y.J. A strong convergence theorem for Tseng's extragradient method for solving variational inequality problems. Optim. Lett. 2020, 14, 1157–1175.
  37. Wei, L.; Su, Y.-F.; Zhou, H.-Y. Iterative convergence theorems for maximal monotone operators and relatively nonexpansive mappings. Appl. Math. J. Chin. Univ. 2008, 23, 319–325.
  38. Ibaraki, T. Approximation of a zero point of monotone operators with nonsummable errors. Fixed Point Theory Appl. 2016, 2016, 48.
  39. Tianchai, P. The zeros of monotone operators for the variational inclusion problem in Hilbert spaces. J. Inequal. Appl. 2021, 2021, 126.
  40. Thong, D.V.; Shehu, Y.; Iyiola, O.S. Weak and strong convergence theorems for solving pseudo-monotone variational inequalities with non-Lipschitz mappings. Numer. Algorithms 2019, 84, 795–823.
  41. He, H.; Ling, C.; Xu, H.K. A relaxed projection method for split variational inequalities. J. Optim. Theory Appl. 2015, 166, 213–233.
  42. Thong, D.V.; Vuong, P.T. Modified Tseng's extragradient methods for solving pseudo-monotone variational inequalities. Optimization 2019, 68, 2207–2226.
Figure 2. The behavior of TOL_n with ε = 10^{−5} for Example 2: (Top Left): N = 20; (Top Right): N = 30; (Bottom Left): N = 40; (Bottom Right): N = 60.
Figure 3. The behavior of TOL_n with ε = 10^{−8} for Example 3: (Top Left): Case 1; (Top Right): Case 2; (Bottom Left): Case 3; (Bottom Right): Case 4.
Table 2. Numerical results for Example 2 with ε = 10^{−5}.
N                  Algorithm 3    Algorithm 2    Algorithm 1
20  Iter.               57            698           2802
    CPU (time)        0.0293         0.1414         0.1593
30  Iter.               57            698           2802
    CPU (time)        0.0256         0.1405         0.1472
40  Iter.               84            530           2651
    CPU (time)        0.0171         0.1081         0.1259
60  Iter.              104            692           2810
    CPU (time)        0.0358         0.1379         0.1537
Table 3. Numerical results for Example 3 with ε = 10^{−8}.
Cases              Algorithm 3    Algorithm 2    Algorithm 1
1   Iter.               21            597           8017
    CPU (time)        0.0907         0.1926         0.8535
2   Iter.               22            539           2644
    CPU (time)        0.0286         0.0506         0.1519
3   Iter.               21            639          10221
    CPU (time)        0.0671         0.2695         1.1225
4   Iter.               21            707          19870
    CPU (time)        0.0197         0.0609         3.9870