Article

Proximal Point Methods for Solving Equilibrium Problems in Hadamard Spaces

by Behzad Djafari Rouhani * and Vahid Mohebbi
Department of Mathematical Sciences, University of Texas at El Paso, 500 W. University Avenue, El Paso, TX 79968, USA
* Author to whom correspondence should be addressed.
Axioms 2025, 14(2), 127; https://doi.org/10.3390/axioms14020127
Submission received: 9 December 2024 / Revised: 26 January 2025 / Accepted: 8 February 2025 / Published: 10 February 2025

Abstract

We investigate the Δ-convergence and strong convergence of a sequence generated by the proximal point method for pseudo-monotone equilibrium problems in Hadamard spaces. First, we show the Δ-convergence of the generated sequence to a solution of the equilibrium problem. Next, we prove the strong convergence of the generated sequence under some additional conditions imposed on the bifunction. Finally, we prove the strong convergence of the generated sequence, by using Halpern's regularization method, without any additional condition.

1. Introduction

There are many problems arising in nonlinear analysis and optimization that can be modeled as equilibrium problems. The equilibrium problem contains, as particular cases, variational inequalities, convex optimization problems, and Nash equilibrium problems, as well as other problems of interest in many applications.
Equilibrium problems have been studied extensively by many authors for monotone and pseudo-monotone bifunctions in Hilbert spaces, Banach spaces, as well as metric spaces, because of their applications in game theory, optimization, etc. (see, e.g., [1,2,3,4,5]). Recently, equilibrium problems and vector equilibrium problems have been investigated in Hadamard manifolds by many authors (see, e.g., [6,7,8]). Iusem and Mohebbi [9] used the extragradient method with linesearch to solve equilibrium problems of pseudo-monotone type in Hadamard spaces and proved the Δ-convergence and the strong convergence of the generated sequence (with regularization) to a solution of the equilibrium problem. In [10], the authors studied the strong convergence of a sequence generated by the proximal point method with unbounded errors to find a zero of a monotone operator in Hadamard spaces. Then, as an application of their result, they showed the strong convergence of the generated sequence to a solution of a monotone equilibrium problem. The authors in [11,12] used the extragradient method and the proximal point algorithm to show the Δ-convergence and strong convergence of the sequences generated by those algorithms to a solution of the equilibrium problem for pseudo-monotone bifunctions. In [11], the authors solved two minimization problems and used the Lipschitz constant of the bifunction in each iteration.
In this paper, motivated by the above results, we investigate the Δ-convergence and strong convergence of the sequences generated by two proximal point methods for pseudo-monotone equilibrium problems in Hadamard spaces, solving only one minimization problem per iteration and assuming A3, without any knowledge of the constant L; in [11], by contrast, the authors solved two minimization problems and used the Lipschitz continuity of the bifunction. Moreover, the regularization parameter $\lambda_k$ in our algorithms is self-adaptive, while in [11] it is an a priori given sequence satisfying certain conditions.
This paper is organized as follows. In Section 2, we introduce some preliminaries related to the geometry of Hadamard spaces. In Section 3, we propose a new proximal point method for solving equilibrium problems in Hadamard spaces. We prove the Δ-convergence of the generated sequence to a solution of the equilibrium problem. Then, we show the strong convergence of the generated sequence under some additional conditions imposed on the bifunction. In Section 4, by using Halpern's regularization method, we prove the strong convergence of the generated sequence without any additional condition imposed on the bifunction.

2. Preliminaries

Let $(X, d)$ be a metric space. For $x, y \in X$, a mapping $\gamma : [0, l] \to X$, where $l > 0$, is called a geodesic with endpoints $x, y$ if $\gamma(0) = x$, $\gamma(l) = y$, and $d(\gamma(t), \gamma(t')) = |t - t'|$ for all $t, t' \in [0, l]$. If a geodesic with endpoints $x, y$ exists for every $x, y \in X$, then $(X, d)$ is called a geodesic metric space. Furthermore, if for each $x, y \in X$ there exists a unique geodesic, then $(X, d)$ is said to be uniquely geodesic.
A subset $K$ of a uniquely geodesic space $X$ is called convex if the geodesic joining $x$ and $y$ is contained in $K$ for any $x, y \in K$. The image of a geodesic $\gamma$ with endpoints $x, y$ is said to be a geodesic segment joining $x$ and $y$ and is denoted by $[x, y]$.
Suppose that $X$ is a uniquely geodesic metric space. For each $x, y \in X$ and for each $t \in [0, 1]$, there exists a unique point $z \in [x, y]$ such that $d(x, z) = t\, d(x, y)$ and $d(y, z) = (1 - t)\, d(x, y)$. We use the notation $(1 - t)x \oplus ty$ to denote the unique point $z$ satisfying the above statement.
Definition 1 
([13]). A geodesic space $X$ is said to be a CAT(0) space if for all $x, y, z \in X$ and $t \in [0, 1]$, it holds that
$$d^2(tx \oplus (1-t)y, z) \le t\, d^2(x, z) + (1-t)\, d^2(y, z) - t(1-t)\, d^2(x, y).$$
A complete CAT(0) space is said to be a Hadamard space.
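Euclidean space is the model Hadamard space: geodesics are line segments, $tx \oplus (1-t)y$ is the ordinary convex combination, and the CAT(0) inequality of Definition 1 holds with equality. The following minimal numerical check is our own sketch (assuming NumPy), not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = lambda p, q: np.linalg.norm(p - q)

x, y, z = rng.standard_normal((3, 4))   # three random points in R^4
for t in np.linspace(0.0, 1.0, 11):
    m = t * x + (1 - t) * y              # t*x (+) (1-t)*y reduces to a convex combination here
    lhs = d(m, z) ** 2
    rhs = t * d(x, z)**2 + (1 - t) * d(y, z)**2 - t * (1 - t) * d(x, y)**2
    assert lhs <= rhs + 1e-9             # CAT(0) inequality; equality in Euclidean space
```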
In [14,15], Berg and Nikolaev introduced the notion of quasi-linearization as follows. We denote a pair $(a, b) \in X \times X$ by $\overrightarrow{ab}$ and call it a vector. Then, quasi-linearization is defined as a map $\langle \cdot, \cdot \rangle : (X \times X) \times (X \times X) \to \mathbb{R}$ given by
$$\left\langle \overrightarrow{ab}, \overrightarrow{cd} \right\rangle = \frac{1}{2}\left\{ d^2(a, d) + d^2(b, c) - d^2(a, c) - d^2(b, d) \right\}, \quad a, b, c, d \in X.$$
It is clear that $\langle \overrightarrow{ab}, \overrightarrow{cd} \rangle = \langle \overrightarrow{cd}, \overrightarrow{ab} \rangle$, $\langle \overrightarrow{ab}, \overrightarrow{cd} \rangle = -\langle \overrightarrow{ba}, \overrightarrow{cd} \rangle$, and $\langle \overrightarrow{ax}, \overrightarrow{cd} \rangle + \langle \overrightarrow{xb}, \overrightarrow{cd} \rangle = \langle \overrightarrow{ab}, \overrightarrow{cd} \rangle$ for all $a, b, c, d, x \in X$. Also, $X$ is said to satisfy the Cauchy–Schwarz inequality if $\langle \overrightarrow{ab}, \overrightarrow{cd} \rangle \le d(a, b)\, d(c, d)$ for all $a, b, c, d \in X$. We know from Corollary 3 of [15] that a geodesically connected metric space is a CAT(0) space if and only if it satisfies the Cauchy–Schwarz inequality.
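In a Hilbert space, a direct expansion of the defining formula shows $\langle \overrightarrow{ab}, \overrightarrow{cd} \rangle = \langle b - a, d - c \rangle$, so quasi-linearization recovers the inner product and the inequality above reduces to the classical Cauchy–Schwarz inequality. A quick numerical sanity check (our sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
d2 = lambda p, q: np.dot(p - q, p - q)

def quasi_inner(a, b, c, dd):
    # <ab, cd> = (1/2){ d^2(a,d) + d^2(b,c) - d^2(a,c) - d^2(b,d) }
    return 0.5 * (d2(a, dd) + d2(b, c) - d2(a, c) - d2(b, dd))

a, b, c, e = rng.standard_normal((4, 5))
assert np.isclose(quasi_inner(a, b, c, e), np.dot(b - a, e - c))  # equals <b-a, e-c>
assert quasi_inner(a, b, c, e) <= np.linalg.norm(b - a) * np.linalg.norm(e - c) + 1e-12  # Cauchy-Schwarz
```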
Let $\{x_k\}$ be a bounded sequence in a Hadamard space $(X, d)$. For $x \in X$, we define $r(x, \{x_k\}) = \limsup_{k \to \infty} d(x, x_k)$. The asymptotic radius of $\{x_k\}$ is defined by
$$r(\{x_k\}) = \inf\left\{ r(x, \{x_k\}) \mid x \in X \right\}$$
and the asymptotic center of $\{x_k\}$ as the set $A(\{x_k\}) = \{ x \in X \mid r(x, \{x_k\}) = r(\{x_k\}) \}$. It is known that $A(\{x_k\})$ is a singleton in a Hadamard space (see [16,17]).
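For intuition (our toy example, not from the paper): in the Hadamard space $\mathbb{R}$, the alternating sequence $x_k = (-1)^k$ has $r(x, \{x_k\}) = \max\{|x - 1|, |x + 1|\}$, so $A(\{x_k\}) = \{0\}$, even though the even-indexed subsequence has asymptotic center $\{1\}$; by Definition 2 below, the full sequence therefore does not Δ-converge. A grid-based approximation (sketch assuming NumPy; for this periodic sequence the limsup is a max over a late tail):

```python
import numpy as np

xs = np.array([(-1.0) ** k for k in range(200)])   # bounded sequence 1, -1, 1, -1, ...
tail = xs[100:]                                    # limsup over k = max over the tail (periodic)

def r(x):  # r(x, {x_k}) = limsup_k d(x, x_k)
    return np.max(np.abs(x - tail))

grid = np.linspace(-2, 2, 4001)
center = grid[np.argmin([r(x) for x in grid])]
print(center, r(center))   # ~ 0.0 and radius ~ 1.0, so A({x_k}) = {0}
```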
Definition 2 
([18], p. 3690). A sequence $\{x_k\}$ in a Hadamard space $(X, d)$ Δ-converges to $x \in X$ if $A(\{x_{k_n}\}) = \{x\}$ for each subsequence $\{x_{k_n}\}$ of $\{x_k\}$.
We define the Δ-cluster set of a bounded sequence $\{x_k\}$ to be the set of all Δ-limits of its Δ-convergent subsequences. It is worthwhile to mention that the concept of Δ-convergence is an extension of the concept of weak convergence in linear spaces. Throughout this paper, we assume that $X$ is a Hadamard space unless otherwise specified. We also denote Δ-convergence in $X$ by $\xrightarrow{\Delta}$ and metric convergence by $\to$.
Theorem 1 
([19]). Let $(X, d)$ be a Hadamard space, $\{x_k\}$ a sequence in $X$, and $x \in X$. Then, $\{x_k\}$ Δ-converges to $x$ if and only if $\limsup_{k \to \infty} \left\langle \overrightarrow{x x_k}, \overrightarrow{x y} \right\rangle \le 0$ for all $y \in X$.
In the following lemma, we recall a result related to the notion of Δ-convergence.
Lemma 1 
([18], Proposition 3.6). Let X be a Hadamard space. Then, every bounded closed and convex subset of X is Δ-compact, i.e., every bounded sequence in it has a Δ-convergent subsequence.
Lemma 2 
([13]). Let $(X, d)$ be a CAT(0) space. Then, for all $x, y, z \in X$ and $t \in [0, 1]$,
$$d(tx \oplus (1-t)y, z) \le t\, d(x, z) + (1-t)\, d(y, z).$$
A function $g : X \to\ ]-\infty, +\infty]$ is called
(i) convex if
$$g(tx \oplus (1-t)y) \le t\, g(x) + (1-t)\, g(y), \quad \forall x, y \in X \text{ and } t \in [0, 1];$$
(ii) strongly convex if
$$g(tx \oplus (1-t)y) \le t\, g(x) + (1-t)\, g(y) - t(1-t)\, d^2(x, y), \quad \forall x, y \in X \text{ and } t \in [0, 1];$$
(iii) quasi-convex if
$$g(tx \oplus (1-t)y) \le \max\{ g(x), g(y) \}, \quad \forall x, y \in X \text{ and } t \in [0, 1],$$
or equivalently, for each $r \in \mathbb{R}$, the sub-level set $L_r g := \{ x \in X : g(x) \le r \}$ is a convex subset of $X$. A function $g$ is concave (resp., strongly concave, or quasi-concave) when $-g$ is convex (resp., strongly convex, or quasi-convex).
Definition 3. 
A function $g : X \to (-\infty, +\infty]$ is called lower semicontinuous, abbreviated lsc (resp., Δ-lower semicontinuous), at $x \in D(g)$ if
$$g(x) \le \liminf_{k \to \infty} g(x_k)$$
for each sequence $x_k \to x$ (resp., $x_k \xrightarrow{\Delta} x$) as $k \to +\infty$. $g$ is called lower semicontinuous (resp., Δ-lower semicontinuous) if it is lower semicontinuous (resp., Δ-lower semicontinuous) at each point of its domain. Also, $g$ is called upper semicontinuous (resp., Δ-upper semicontinuous) whenever $-g$ is lower semicontinuous (resp., Δ-lower semicontinuous).
The following lemma shows that every lower semicontinuous and quasi-convex function is Δ-lower semicontinuous.
Lemma 3 
([10], Lemma 2.4). Let $g : X \to \mathbb{R}$ be a lower semicontinuous and quasi-convex function. Then, $g$ is Δ-lower semicontinuous.
It is clear from Lemma 3 that a quasi-concave and upper semicontinuous function is always Δ-upper semicontinuous.
Let $g : X \to (-\infty, +\infty]$ be a proper, convex, and lsc function, where $X$ is a Hadamard space. The resolvent of $g$ of order $\lambda > 0$ is defined at each point $x \in X$ as follows:
$$J_\lambda^g x := \operatorname*{Argmin}_{y \in X} \left\{ g(y) + \frac{1}{2\lambda}\, d^2(y, x) \right\}.$$
By Lemma 3.1.2 of [20] (see also Lemma 2.2.19 of [21,22]), $J_\lambda^g x$ exists for each $x \in X$. Therefore, the sequences generated by Algorithms 1 and 2 in the following sections are well defined.
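In the Hadamard space $\mathbb{R}$ (or any Hilbert space), $J_\lambda^g$ is the classical Moreau proximal operator; for $g(y) = |y|$ it has the closed form $\operatorname{sign}(x)\max\{|x| - \lambda, 0\}$ (soft-thresholding). The sketch below, which is our illustration and not from the paper, compares that formula with a direct numerical minimization (assuming SciPy):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def resolvent_num(g, x, lam):
    # J_lam^g x = Argmin_y { g(y) + (1/(2*lam)) * d(y, x)^2 } on the real line
    res = minimize_scalar(lambda y: g(y) + (y - x) ** 2 / (2 * lam))
    return res.x

lam, x = 0.7, 2.3
soft = np.sign(x) * max(abs(x) - lam, 0.0)   # closed-form prox of g(y) = |y|
assert np.isclose(resolvent_num(abs, x, lam), soft, atol=1e-6)
```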
Let $X$ be a Hadamard space and $C \subseteq X$ be nonempty, closed, and convex. It is well known that for any $x \in X$, there exists a unique $u \in C$ such that
$$d(u, x) = \inf\{ d(z, x) : z \in C \}. \qquad (1)$$
We define the projection map onto $C$, $P_C : X \to C$, by defining $P_C(x)$ to be the unique $u \in C$ which satisfies (1).
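Note that $P_C$ is itself a resolvent: taking $g$ to be the indicator function of $C$ in the definition above gives $J_\lambda^g = P_C$ for every $\lambda > 0$. For a coordinate box in $\mathbb{R}^n$ the projection is computed by clipping; a minimal sketch (our example, assuming NumPy):

```python
import numpy as np

def project_box(x, lower, upper):
    # unique nearest point of C = prod [lower_i, upper_i]; attains d(u, x) = inf_{z in C} d(z, x)
    return np.clip(x, lower, upper)

x = np.array([2.0, -3.0, 0.5])
print(project_box(x, lower=-1.0, upper=1.0))   # [ 1. -1.  0.5]
```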
Lemma 4 
([23]). Let $\{s_k\}$ be a sequence of nonnegative real numbers, $\{\alpha_k\}$ be a sequence of real numbers in $(0, 1)$ with $\sum_{k=0}^{\infty} \alpha_k = \infty$, and $\{t_k\}$ be a sequence of real numbers. Suppose that
$$s_{k+1} \le (1 - \alpha_k)\, s_k + \alpha_k t_k, \quad \forall k \ge 0.$$
If $\limsup_{n \to \infty} t_{k_n} \le 0$ for every subsequence $\{s_{k_n}\}$ of $\{s_k\}$ satisfying $\liminf_{n \to \infty} \left( s_{k_n+1} - s_{k_n} \right) \ge 0$, then $\lim_{k \to \infty} s_k = 0$.
Let $K$ be a nonempty, closed, and convex subset of $X$ and $f : K \times K \to \mathbb{R}$. The equilibrium problem $EP(f, K)$ consists of finding $x^* \in K$ such that
$$f(x^*, y) \ge 0, \quad \forall y \in K.$$
The set of solutions of $EP(f, K)$ is denoted by $S(f, K)$.
Definition 4. 
A bifunction $f$ is said to be monotone if $f(x, y) + f(y, x) \le 0$ for all $x, y \in K$, and strongly monotone if there exists $\alpha > 0$ such that $f(x, y) + f(y, x) \le -\alpha\, d^2(x, y)$, for all $x, y \in K$.
Definition 5. 
A bifunction $f : K \times K \to \mathbb{R}$ is called pseudo-monotone if for any pair $x, y \in K$, $f(x, y) \ge 0$ implies $f(y, x) \le 0$. Also, $f$ is called strongly pseudo-monotone if there exists $\alpha > 0$ such that $f(x, y) \ge 0$ implies $f(y, x) \le -\alpha\, d^2(x, y)$, for all $x, y \in K$.
If f is strongly monotone, then f is monotone and strongly pseudo-monotone, and if f is strongly pseudo-monotone, then f is pseudo-monotone. We introduce the following conditions that we need for the convergence analysis:
A1: $f(x, \cdot) : K \to \mathbb{R}$ is convex and lower semicontinuous for all $x \in K$.
A2: $f(\cdot, y)$ is Δ-upper semicontinuous for all $y \in K$.
A3: $f$ satisfies the condition $f(x, y) \ge -L\, d^2(x, y)$ for all $x, y \in K$, where $L > 0$ is a constant.
A4: $f$ is pseudo-monotone.
Clearly, A3 shows that $f(x, x) \ge 0$ for all $x \in K$. Also, A3 and A4 imply that $f(x, x) = 0$ for all $x \in K$: indeed, $f(x, x) \ge 0$ together with pseudo-monotonicity applied with $y = x$ gives $f(x, x) \le 0$. In the sequel, it is important to mention that in this paper, we prove the Δ-convergence and the strong convergence of the generated sequences to a solution of the equilibrium problem without any knowledge of the constant $L$ in A3.
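As a toy illustration of Definition 5 (our example, not from the paper): in $\mathbb{R}^n$, the bifunction $f(x, y) = \langle Ax + b, y - x \rangle$ with $A$ positive semidefinite is monotone, since $f(x, y) + f(y, x) = -\langle A(y - x), y - x \rangle \le 0$, and hence pseudo-monotone. The sketch below randomly checks the implication in Definition 5 (assuming NumPy); note that this linear bifunction satisfies A1, A2, and A4 but not, in general, the quadratic lower bound A3:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
A, b = M @ M.T, rng.standard_normal(3)   # A = M M^T is positive semidefinite

def f(x, y):
    # f(x, y) = <Ax + b, y - x>, the bifunction of the variational inequality for F(x) = Ax + b
    return (A @ x + b) @ (y - x)

for _ in range(10_000):
    x, y = rng.standard_normal((2, 3))
    if f(x, y) >= 0:                      # pseudo-monotonicity: f(x, y) >= 0 must force f(y, x) <= 0
        assert f(y, x) <= 1e-12
```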

3. Δ-Convergence and Strong Convergence

In this section, we first study the Δ-convergence of the sequence generated by the proximal point method to an equilibrium point of $EP(f, K)$. Then, we show the strong convergence of the generated sequence under some additional conditions on the bifunction. Let $X$ be a Hadamard space, $K \subseteq X$ be a closed and convex set, and $f : K \times K \to \mathbb{R}$ be a bifunction. We assume that the bifunction $f$ satisfies A1–A4 and $S(f, K) \ne \emptyset$, and we propose the following proximal point method for solving the problem.
First of all, we note that the sequence $\{x_k\}$ is well defined, e.g., by Lemma 3.1.2 of [20]. It follows from (2) that $f(x_k, x_{k+1}) \le 0$, because
$$f(x_k, x_{k+1}) + \frac{1}{2\lambda_k}\, d^2(x_k, x_{k+1}) \le f(x_k, x_k) + \frac{1}{2\lambda_k}\, d^2(x_k, x_k) = 0.$$
The condition A3 on $f$ shows that $\frac{\mu}{L} = \frac{\mu\, d^2(x_k, x_{k+1})}{L\, d^2(x_k, x_{k+1})} \le \frac{\mu\, d^2(x_k, x_{k+1})}{-f(x_k, x_{k+1})}$, where $L$ is the constant in A3. Clearly, the lower bound of the sequence $\{\lambda_k\}$ is $\min\{\lambda_0, \mu/L\}$, and its upper bound is $\lambda_0$. Since $\lambda_{k+1} \le \lambda_k$, $\lim_{k \to \infty} \lambda_k$ exists and is different from zero.
In order to prove the Δ-convergence of the sequence generated by Algorithm 1 to an equilibrium point, we need the following lemmas.
Algorithm 1: Proximal Point Method
Initialization: $x_0 \in X$, $\lambda_0 > 0$ and $\mu \in (0, \frac{1}{2})$.
Iterative step: Given $x_k$, compute
$$x_{k+1} = \operatorname*{Argmin}_{y \in K} \left\{ f(x_k, y) + \frac{1}{2\lambda_k}\, d^2(x_k, y) \right\} \qquad (2)$$
and
$$\lambda_{k+1} := \begin{cases} \min\left\{ \dfrac{\mu\, d^2(x_k, x_{k+1})}{-f(x_k, x_{k+1})},\ \lambda_k \right\}, & \text{if } f(x_k, x_{k+1}) < 0, \\ \lambda_k, & \text{otherwise.} \end{cases} \qquad (3)$$
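To make the iteration concrete, here is a toy run of Algorithm 1 (a sketch under our own assumptions, not an experiment from the paper) on the Hadamard space $X = \mathbb{R}$ with $K = [0, 1]$ and the monotone, hence pseudo-monotone, bifunction $f(x, y) = (x - 2)(y - x)$, whose unique equilibrium point is $x^* = 1$. For this $f$, the proximal subproblem is a one-dimensional quadratic, so the Argmin in (2) has the closed form $\operatorname{clip}(x_k - \lambda_k(x_k - 2), 0, 1)$:

```python
import numpy as np

def F(x):                 # f(x, y) = F(x) * (y - x) with F(x) = x - 2
    return x - 2.0

def f(x, y):
    return F(x) * (y - x)

def prox_step(x, lam):
    # closed-form Argmin over K = [0, 1] of f(x, y) + (1 / (2 * lam)) * (y - x)^2
    return float(np.clip(x - lam * F(x), 0.0, 1.0))

x, lam, mu = 0.1, 1.0, 0.3                  # x_0 in X, lambda_0 > 0, mu in (0, 1/2)
for k in range(50):
    x_next = prox_step(x, lam)
    fk = f(x, x_next)
    if fk < 0:                              # self-adaptive stepsize update of Algorithm 1
        lam = min(mu * (x - x_next) ** 2 / (-fk), lam)
    x = x_next

print(x)  # 1.0: the iterates reach the equilibrium point of EP(f, K)
```

The update never divides by zero: when $f(x_k, x_{k+1}) = 0$, the optimality of $x_{k+1}$ forces $d(x_k, x_{k+1}) = 0$, and $\lambda_k$ is kept unchanged.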
Lemma 5. 
Assume that $\{x_k\}$ is generated by Algorithm 1 and $x^* \in S(f, K)$; then,
$$\left(1 - \frac{2\lambda_k}{\lambda_{k+1}}\, \mu\right) d^2(x_k, x_{k+1}) \le d^2(x_k, x^*) - d^2(x_{k+1}, x^*).$$
Proof. 
Let $x^* \in S(f, K)$. Note that $x_{k+1}$ solves the minimization problem in (2). Now, letting $y = t x_{k+1} \oplus (1 - t) x^*$, where $t \in [0, 1)$, we have
$$\begin{aligned} f(x_k, x_{k+1}) + \frac{1}{2\lambda_k}\, d^2(x_k, x_{k+1}) &\le f(x_k, t x_{k+1} \oplus (1-t) x^*) + \frac{1}{2\lambda_k}\, d^2(x_k, t x_{k+1} \oplus (1-t) x^*) \\ &\le t f(x_k, x_{k+1}) + (1-t) f(x_k, x^*) \\ &\quad + \frac{1}{2\lambda_k}\left[ t\, d^2(x_k, x_{k+1}) + (1-t)\, d^2(x_k, x^*) - t(1-t)\, d^2(x_{k+1}, x^*) \right]. \end{aligned}$$
Since $f(x^*, x_k) \ge 0$, the pseudo-monotonicity of $f$ implies that $f(x_k, x^*) \le 0$. Hence, we can write the above inequality as
$$f(x_k, x_{k+1}) \le \frac{1}{2\lambda_k}\left[ d^2(x_k, x^*) - d^2(x_k, x_{k+1}) - t\, d^2(x_{k+1}, x^*) \right].$$
By letting $t \to 1^-$, we obtain
$$f(x_k, x_{k+1}) \le \frac{1}{2\lambda_k}\left[ d^2(x_k, x^*) - d^2(x_k, x_{k+1}) - d^2(x_{k+1}, x^*) \right],$$
that is,
$$2\lambda_k f(x_k, x_{k+1}) \le d^2(x_k, x^*) - d^2(x_k, x_{k+1}) - d^2(x_{k+1}, x^*).$$
Using (3), we obtain
$$\lambda_{k+1} f(x_k, x_{k+1}) \ge -\mu\, d^2(x_k, x_{k+1}).$$
Therefore,
$$\lambda_k f(x_k, x_{k+1}) \ge -\frac{\lambda_k}{\lambda_{k+1}}\, \mu\, d^2(x_k, x_{k+1}).$$
Using (6) and (8), we have
$$-\frac{2\lambda_k}{\lambda_{k+1}}\, \mu\, d^2(x_k, x_{k+1}) \le d^2(x_k, x^*) - d^2(x_k, x_{k+1}) - d^2(x_{k+1}, x^*),$$
which implies that
$$\left(1 - \frac{2\lambda_k}{\lambda_{k+1}}\, \mu\right) d^2(x_k, x_{k+1}) \le d^2(x_k, x^*) - d^2(x_{k+1}, x^*).$$
   □
Lemma 6. 
The sequence $\{x_k\}$ is bounded, and $\lim_{k \to \infty} d(x_k, x_{k+1}) = 0$.
Proof. 
Note that the sequence $\{\lambda_k\}$ is nonincreasing and bounded away from zero; therefore, $\lim_{k \to \infty} \lambda_k$ exists and is different from zero. Now, by our assumptions on $\mu$, we have
$$\lim_{k \to \infty} \left(1 - \frac{2\lambda_k}{\lambda_{k+1}}\, \mu\right) = 1 - 2\mu > \delta > 0,$$
for some $\delta > 0$. Therefore, using (10), for large enough $k$, we have
$$\delta\, d^2(x_k, x_{k+1}) \le d^2(x_k, x^*) - d^2(x_{k+1}, x^*).$$
This implies that the sequence $\{d^2(x_k, x^*)\}$ is non-increasing for large enough $k$, and hence $\lim_{k \to \infty} d(x_k, x^*)$ exists. Hence, $\{x_k\}$ is bounded. Since $\lim_{k \to \infty} d(x_k, x^*)$ exists, (12) shows that $\lim_{k \to \infty} d(x_k, x_{k+1}) = 0$.    □
Theorem 2. 
Assume that the bifunction $f$ satisfies A1–A4, and the solution set $S(f, K)$ is nonempty. Then, the sequence $\{x_k\}$ generated by Algorithm 1 Δ-converges to a point of $S(f, K)$.
Proof. 
Note that $x_{k+1}$ solves the minimization problem in (2). By letting $z = t x_{k+1} \oplus (1-t) y$, where $t \in [0, 1)$ and $y \in K$, we obtain
$$\begin{aligned} f(x_k, x_{k+1}) + \frac{1}{2\lambda_k}\, d^2(x_k, x_{k+1}) &\le f(x_k, t x_{k+1} \oplus (1-t) y) + \frac{1}{2\lambda_k}\, d^2(x_k, t x_{k+1} \oplus (1-t) y) \\ &\le t f(x_k, x_{k+1}) + (1-t) f(x_k, y) \\ &\quad + \frac{1}{2\lambda_k}\left[ t\, d^2(x_k, x_{k+1}) + (1-t)\, d^2(x_k, y) - t(1-t)\, d^2(x_{k+1}, y) \right]. \end{aligned}$$
From the above inequality, we obtain
$$f(x_k, x_{k+1}) - f(x_k, y) \le \frac{1}{2\lambda_k}\left[ d^2(x_k, y) - d^2(x_k, x_{k+1}) - t\, d^2(x_{k+1}, y) \right].$$
Now, by letting $t \to 1^-$, we obtain
$$\frac{1}{2\lambda_k}\left[ d^2(x_k, x_{k+1}) + d^2(x_{k+1}, y) - d^2(x_k, y) \right] \le f(x_k, y) - f(x_k, x_{k+1}),$$
which implies that
$$-\frac{1}{2\lambda_k}\, d(x_k, x_{k+1}) \left[ d(x_{k+1}, y) + d(x_k, y) \right] \le f(x_k, y) - f(x_k, x_{k+1}).$$
Note that the sequence $\{x_k\}$ is bounded, $\lim_{k \to \infty} d(x_k, x_{k+1}) = 0$, and $f(x_k, x_{k+1}) \le 0$ for all $k$. Hence, by A3 (since $0 \ge f(x_k, x_{k+1}) \ge -L\, d^2(x_k, x_{k+1}) \to 0$), we have
$$\lim_{k \to \infty} f(x_k, x_{k+1}) = 0.$$
Letting $k \to +\infty$ in (14), we obtain
$$0 \le \liminf_{k \to \infty} f(x_k, y), \quad \forall y \in K.$$
By Lemma 1, there exist a subsequence $\{x_{k_n}\}$ of $\{x_k\}$ and $p \in K$ such that $x_{k_n} \xrightarrow{\Delta} p$. Replacing $k$ by $k_n$ in (15), it follows from A2 that
$$0 \le \limsup_{n \to \infty} f(x_{k_n}, y) \le f(p, y), \quad \forall y \in K.$$
Therefore, $p \in S(f, K)$.
It remains to prove that $\{x_k\}$ has only one Δ-cluster point. Let $\bar{x}, \hat{x}$ be two Δ-cluster points of $\{x_k\}$, so that there exist two subsequences $\{x_{k_i}\}$ and $\{x_{n_j}\}$ of $\{x_k\}$ whose Δ-limits are $\bar{x}$ and $\hat{x}$, respectively. We have already proved that $\bar{x}$ and $\hat{x}$ are solutions of $EP(f, K)$. Hence, by the proof of Lemma 6, we may assume that $\lim_{k \to \infty} d(x_k, \bar{x}) = \delta_1$ and $\lim_{k \to \infty} d(x_k, \hat{x}) = \delta_2$. On the other hand, we have
$$2\left\langle \overrightarrow{x_{k_i} x_{n_j}}, \overrightarrow{\hat{x} \bar{x}} \right\rangle = d^2(x_{k_i}, \bar{x}) + d^2(x_{n_j}, \hat{x}) - d^2(x_{k_i}, \hat{x}) - d^2(x_{n_j}, \bar{x}).$$
Letting $i \to +\infty$, and then $j \to +\infty$, we obtain $\lim_{j \to +\infty} \lim_{i \to +\infty} \left\langle \overrightarrow{x_{k_i} x_{n_j}}, \overrightarrow{\hat{x} \bar{x}} \right\rangle = 0$. Also, we can write the left-hand side of the above equality as
$$2\left\langle \overrightarrow{x_{k_i} x_{n_j}}, \overrightarrow{\hat{x} \bar{x}} \right\rangle = 2\left\langle \overrightarrow{x_{k_i} \bar{x}}, \overrightarrow{\hat{x} \bar{x}} \right\rangle + 2\left\langle \overrightarrow{\bar{x} \hat{x}}, \overrightarrow{\hat{x} \bar{x}} \right\rangle + 2\left\langle \overrightarrow{\hat{x} x_{n_j}}, \overrightarrow{\hat{x} \bar{x}} \right\rangle.$$
Taking the limsup in the above equality, by letting $i \to +\infty$ and then $j \to +\infty$, and using Theorem 1, we conclude that $d^2(\bar{x}, \hat{x}) \le 0$; hence, $\bar{x} = \hat{x}$. This establishes that the set of all Δ-cluster points of $\{x_k\}$ is a singleton, and hence $\{x_k\}$ Δ-converges to a point of $S(f, K)$.    □
In the following remark, we give a sufficient condition for the solution set of the problem to be nonempty. In this case, the sequence $\{x_k\}$ generated by the algorithm Δ-converges to a solution of the problem.
Remark 1. 
Assume that the bifunction $f$ satisfies A1–A4. If there is a bounded subsequence $\{x_{k_n}\}$ of $\{x_k\}$ satisfying $\lim_{n \to \infty} d(x_{k_n}, x_{k_n+1}) = 0$, then $S(f, K) \ne \emptyset$. In fact, without loss of generality, we may assume that there exists $p \in K$ such that $x_{k_n} \xrightarrow{\Delta} p$. Then, by replacing $k$ by $k_n$ in (14) and using the same method as in Theorem 2, we obtain $p \in S(f, K)$; that is, $S(f, K) \ne \emptyset$.
Theorem 3. 
Assume that the bifunction $f$ satisfies A1–A4 and $S(f, K) \ne \emptyset$. If any one of the following conditions is satisfied:
(i) $f$ is strongly pseudo-monotone;
(ii) $f(x, \cdot)$ is strongly convex for all $x \in K$;
(iii) $f(\cdot, y)$ is strongly concave for all $y \in K$;
then the sequence $\{x_k\}$ generated by Algorithm 1 converges strongly to a point of $S(f, K)$.
Proof. 
First of all, note that Theorem 2 shows that $\{x_k\}$ Δ-converges to a point $x^*$ of $S(f, K)$. Under each condition, we show that $\{x_k\}$ converges strongly to $x^* \in S(f, K)$.
(i) Since $f(x^*, x_k) \ge 0$, by assumption there is $\alpha > 0$ such that $f(x_k, x^*) \le -\alpha\, d^2(x_k, x^*)$ for all $k \in \mathbb{N}$. Next, by (15) in the proof of Theorem 2, we have $\liminf_{k \to \infty} f(x_k, x^*) \ge 0$. Therefore, by taking the liminf as $k \to \infty$ in the above inequality, we obtain
$$0 \le \liminf_{k \to \infty} f(x_k, x^*) \le \liminf_{k \to \infty} \left( -\alpha\, d^2(x_k, x^*) \right) = -\alpha \limsup_{k \to \infty} d^2(x_k, x^*),$$
and hence we deduce that $x_k \to x^*$.
(ii) Let $\lambda \in (0, 1)$ and set $w_k = \lambda x_k \oplus (1 - \lambda) x^*$ for all $k \in \mathbb{N}$. Since $f(x_k, \cdot)$ is strongly convex, we have
$$f(x_k, w_k) \le \lambda f(x_k, x_k) + (1 - \lambda) f(x_k, x^*) - \lambda(1 - \lambda)\, d^2(x_k, x^*) \le -\lambda(1 - \lambda)\, d^2(x_k, x^*).$$
Setting $y = w_k$ in (14) and using (16), we obtain
$$-\frac{1}{2\lambda_k}\, d(x_k, x_{k+1}) \left[ d(x_{k+1}, w_k) + d(x_k, w_k) \right] \le -\lambda(1 - \lambda)\, d^2(x_k, x^*) - f(x_k, x_{k+1}).$$
Now, since $f(x_k, x_{k+1}) \le 0$ for all $k$ and $\lim_{k \to \infty} d(x_k, x_{k+1}) = 0$ by Lemma 6, it follows from A3 that $\lim_{k \to \infty} f(x_k, x_{k+1}) = 0$. Letting $k \to \infty$ in (17), we conclude that $x_k \to x^*$.
(iii) Let $\lambda \in (0, 1)$ and set $w_k = \lambda x_k \oplus (1 - \lambda) x^*$, for all $k \in \mathbb{N}$. Since $f(\cdot, x^*)$ is strongly concave, we have
$$\lambda f(x_k, x^*) + (1 - \lambda) f(x^*, x^*) + \lambda(1 - \lambda)\, d^2(x_k, x^*) \le f(w_k, x^*) \le 0.$$
Since $f(x^*, x^*) = 0$, we have $f(x_k, x^*) \le -(1 - \lambda)\, d^2(x_k, x^*)$. Then, by using (15) and taking the liminf as $k \to \infty$ in the above inequality, we obtain
$$0 \le \liminf_{k \to \infty} f(x_k, x^*) \le -(1 - \lambda) \limsup_{k \to \infty} d^2(x_k, x^*).$$
Therefore, the sequence $\{x_k\}$ converges strongly to $x^* \in S(f, K)$.    □

4. Strong Convergence of Halpern’s Regularization Method

In this section, we prove the strong convergence of the generated sequence by using Halpern's regularization method, without assuming any of the conditions (i), (ii), and (iii) in Theorem 3. We assume in the sequel that $X$ is a Hadamard space, $K \subseteq X$ is nonempty, closed, and convex, and $f : K \times K \to \mathbb{R}$ is a bifunction which satisfies A1–A4 with $S(f, K) \ne \emptyset$.
Similar to Algorithm 1, by the assumptions on the bifunction $f$, the sequence $\{x_k\}$ is well defined (see, e.g., Lemma 3.1.2 of [20]). It follows from (18) that $f(x_k, y_k) \le 0$, because
$$f(x_k, y_k) + \frac{1}{2\lambda_k}\, d^2(x_k, y_k) \le f(x_k, x_k) + \frac{1}{2\lambda_k}\, d^2(x_k, x_k) = 0.$$
Also, the condition A3 shows that $\frac{\mu}{L} = \frac{\mu\, d^2(x_k, y_k)}{L\, d^2(x_k, y_k)} \le \frac{\mu\, d^2(x_k, y_k)}{-f(x_k, y_k)}$, where $L$ is the constant in A3; hence, the lower bound of the sequence $\{\lambda_k\}$ is $\min\{\lambda_0, \mu/L\}$, and its upper bound is $\lambda_0$. Since $\lambda_{k+1} \le \lambda_k$, $\lim_{k \to \infty} \lambda_k$ exists and is different from zero.
In order to prove the strong convergence of the sequence $\{x_k\}$ generated by Algorithm 2, we need the following lemmas.
Algorithm 2: Halpern's Regularization Method
Initialization: $x_0, u \in K$, $\lambda_0 > 0$ and $\mu \in (0, \frac{1}{2})$. Let $\{\alpha_k\} \subset (0, 1)$ be such that $\lim_{k \to \infty} \alpha_k = 0$ and $\sum_{k=0}^{+\infty} \alpha_k = +\infty$.
Iterative step: Given $x_k$, compute
$$y_k = \operatorname*{Argmin}_{y \in K} \left\{ f(x_k, y) + \frac{1}{2\lambda_k}\, d^2(x_k, y) \right\}, \qquad (18)$$
$$x_{k+1} = \alpha_k u \oplus (1 - \alpha_k) y_k, \qquad (19)$$
and
$$\lambda_{k+1} := \begin{cases} \min\left\{ \dfrac{\mu\, d^2(x_k, y_k)}{-f(x_k, y_k)},\ \lambda_k \right\}, & \text{if } f(x_k, y_k) < 0, \\ \lambda_k, & \text{otherwise.} \end{cases} \qquad (20)$$
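A matching toy run of Algorithm 2 on the same instance as before ($X = \mathbb{R}$, $K = [0, 1]$, $f(x, y) = (x - 2)(y - x)$, so $S(f, K) = \{1\}$), again only our sketch and not an experiment from the paper. One caveat: this linear bifunction does not satisfy the quadratic lower bound A3, so the self-adaptive update (20) would drive $\lambda_k \to 0$ on it; for the illustration we therefore freeze $\lambda_k \equiv \lambda_0$, a simplification that keeps the Halpern averaging step (19) visible. With $\alpha_k = 1/(k + 2)$, the iterates approach $P_{S(f,K)} u = 1$, as Theorem 4 predicts under its hypotheses:

```python
import numpy as np

def F(x):
    return x - 2.0

def f(x, y):                                # f(x, y) = (x - 2)(y - x)
    return F(x) * (y - x)

def prox_step(x, lam):                      # Argmin over K = [0, 1], in closed form
    return float(np.clip(x - lam * F(x), 0.0, 1.0))

x, u, lam = 0.1, 0.0, 1.0                   # x_0, anchor u in K, frozen lambda (see caveat above)
for k in range(1000):
    alpha = 1.0 / (k + 2)                   # alpha_k -> 0 and sum alpha_k = +infinity
    y = prox_step(x, lam)
    x = alpha * u + (1 - alpha) * y         # Halpern step: alpha_k u (+) (1 - alpha_k) y_k

print(x)  # ~ 0.999: converges (slowly, at the Halpern rate) to P_{S(f,K)} u = 1
```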
Lemma 7. 
Assume that $\{x_k\}$ and $\{y_k\}$ are generated by Algorithm 2 and $x^* \in S(f, K)$; then,
$$\left(1 - \frac{2\lambda_k}{\lambda_{k+1}}\, \mu\right) d^2(x_k, y_k) \le d^2(x_k, x^*) - d^2(y_k, x^*).$$
Proof. 
Let $x^* \in S(f, K)$. Since $y_k$ solves the minimization problem in (18), by letting $y = t y_k \oplus (1-t) x^*$ with $t \in [0, 1)$, we have
$$\begin{aligned} f(x_k, y_k) + \frac{1}{2\lambda_k}\, d^2(x_k, y_k) &\le f(x_k, t y_k \oplus (1-t) x^*) + \frac{1}{2\lambda_k}\, d^2(x_k, t y_k \oplus (1-t) x^*) \\ &\le t f(x_k, y_k) + (1-t) f(x_k, x^*) \\ &\quad + \frac{1}{2\lambda_k}\left[ t\, d^2(x_k, y_k) + (1-t)\, d^2(x_k, x^*) - t(1-t)\, d^2(y_k, x^*) \right]. \end{aligned}$$
Since $f(x^*, x_k) \ge 0$, A4 implies that $f(x_k, x^*) \le 0$. Hence, we can write the above inequality as $f(x_k, y_k) \le \frac{1}{2\lambda_k}\left[ d^2(x_k, x^*) - d^2(x_k, y_k) - t\, d^2(y_k, x^*) \right]$. Now, if $t \to 1^-$, we obtain
$$2\lambda_k f(x_k, y_k) \le d^2(x_k, x^*) - d^2(x_k, y_k) - d^2(y_k, x^*).$$
Using (20), we obtain
$$\lambda_{k+1} f(x_k, y_k) \ge -\mu\, d^2(x_k, y_k).$$
Therefore,
$$\lambda_k f(x_k, y_k) \ge -\frac{\lambda_k}{\lambda_{k+1}}\, \mu\, d^2(x_k, y_k).$$
Using (22) and (24), we have
$$-\frac{2\lambda_k}{\lambda_{k+1}}\, \mu\, d^2(x_k, y_k) \le d^2(x_k, x^*) - d^2(x_k, y_k) - d^2(y_k, x^*),$$
which implies that
$$\left(1 - \frac{2\lambda_k}{\lambda_{k+1}}\, \mu\right) d^2(x_k, y_k) \le d^2(x_k, x^*) - d^2(y_k, x^*).$$
□
Lemma 8. 
The sequences $\{x_k\}$ and $\{y_k\}$ generated by Algorithm 2 are bounded.
Proof. 
Let $x^* \in S(f, K)$. Note that the sequence $\{\lambda_k\}$ is nonincreasing and bounded away from zero; therefore, $\lim_{k \to \infty} \lambda_k$ exists and is different from zero. Now, by our assumptions on $\mu$, we have
$$\lim_{k \to \infty} \left(1 - \frac{2\lambda_k}{\lambda_{k+1}}\, \mu\right) = 1 - 2\mu > \delta > 0,$$
for some $\delta > 0$. Therefore, using Lemma 7, for all $k \ge k_0$ with $k_0$ large enough, we have
$$\delta\, d^2(x_k, y_k) \le d^2(x_k, x^*) - d^2(y_k, x^*),$$
which shows that
$$d(y_k, x^*) \le d(x_k, x^*) \quad \text{for all } k \ge k_0.$$
Then, by (19) and (29), for $k \ge k_0$, we obtain
$$\begin{aligned} d(x_{k+1}, x^*) &\le \alpha_k\, d(u, x^*) + (1 - \alpha_k)\, d(y_k, x^*) \le \alpha_k\, d(u, x^*) + (1 - \alpha_k)\, d(x_k, x^*) \\ &\le \max\{ d(u, x^*), d(x_k, x^*) \} \le \cdots \le \max\{ d(u, x^*), d(x_{k_0}, x^*) \}, \end{aligned}$$
which implies that $\{x_k\}$ is bounded. Thus, by (29), $\{y_k\}$ is also bounded. □
Theorem 4. 
Assume that the bifunction $f$ satisfies A1–A4, and the solution set $S(f, K)$ is nonempty. Then, the sequence $\{x_k\}$ generated by Algorithm 2 converges strongly to $P_{S(f,K)}\, u$.
Proof. 
Let $x^* = P_{S(f,K)}\, u$. Using (20) and (29), for $k \ge k_0$, we obtain
$$\begin{aligned} d^2(x_{k+1}, x^*) &\le (1 - \alpha_k)\, d^2(y_k, x^*) + \alpha_k\, d^2(u, x^*) - \alpha_k(1 - \alpha_k)\, d^2(u, y_k) \\ &\le (1 - \alpha_k)\, d^2(x_k, x^*) + \alpha_k\, d^2(u, x^*) - \alpha_k(1 - \alpha_k)\, d^2(u, y_k). \end{aligned}$$
We use Lemma 4 to show that $d^2(x_k, x^*) \to 0$. It suffices to show that
$$\limsup_{n \to \infty} \left( d^2(u, x^*) - (1 - \alpha_{k_n})\, d^2(u, y_{k_n}) \right) \le 0$$
for every subsequence $\{d^2(x_{k_n}, x^*)\}$ of $\{d^2(x_k, x^*)\}$ satisfying $\liminf_{n \to \infty} \left( d^2(x_{k_n+1}, x^*) - d^2(x_{k_n}, x^*) \right) \ge 0$. Consider such a subsequence. We have
$$\begin{aligned} 0 &\le \liminf_{n \to \infty} \left( d^2(x_{k_n+1}, x^*) - d^2(x_{k_n}, x^*) \right) \\ &\le \liminf_{n \to \infty} \left( \alpha_{k_n}\, d^2(u, x^*) + (1 - \alpha_{k_n})\, d^2(y_{k_n}, x^*) - d^2(x_{k_n}, x^*) \right) \\ &= \liminf_{n \to \infty} \left( \alpha_{k_n} \left( d^2(u, x^*) - d^2(y_{k_n}, x^*) \right) + d^2(y_{k_n}, x^*) - d^2(x_{k_n}, x^*) \right) \\ &\le \limsup_{n \to \infty}\, \alpha_{k_n} \left( d^2(u, x^*) - d^2(y_{k_n}, x^*) \right) + \liminf_{n \to \infty} \left( d^2(y_{k_n}, x^*) - d^2(x_{k_n}, x^*) \right) \\ &= \liminf_{n \to \infty} \left( d^2(y_{k_n}, x^*) - d^2(x_{k_n}, x^*) \right) \le \limsup_{n \to \infty} \left( d^2(y_{k_n}, x^*) - d^2(x_{k_n}, x^*) \right) \le 0. \end{aligned}$$
This shows that
$$\lim_{n \to \infty} \left( d^2(y_{k_n}, x^*) - d^2(x_{k_n}, x^*) \right) = 0.$$
Replacing $k$ by $k_n$ in (28) and taking the limit as $n \to \infty$, we conclude that
$$\lim_{n \to \infty} d^2(x_{k_n}, y_{k_n}) = 0.$$
Since $f(x_{k_n}, y_{k_n}) \le 0$ for all $n$, using A3, we obtain
$$\lim_{n \to \infty} f(x_{k_n}, y_{k_n}) = 0.$$
On the other hand, there exist a subsequence $\{y_{k_{n_l}}\}$ of $\{y_{k_n}\}$ and $p \in K$ such that $y_{k_{n_l}} \xrightarrow{\Delta} p$ and
$$\limsup_{n \to \infty} \left( d^2(u, x^*) - (1 - \alpha_{k_n})\, d^2(u, y_{k_n}) \right) = \lim_{l \to \infty} \left( d^2(u, x^*) - (1 - \alpha_{k_{n_l}})\, d^2(u, y_{k_{n_l}}) \right).$$
It is clear that we also have $x_{k_{n_l}} \xrightarrow{\Delta} p$. By the Δ-lower semicontinuity of $d^2(u, \cdot)$, we obtain
$$\limsup_{n \to \infty} \left( d^2(u, x^*) - (1 - \alpha_{k_n})\, d^2(u, y_{k_n}) \right) = \lim_{l \to \infty} \left( d^2(u, x^*) - (1 - \alpha_{k_{n_l}})\, d^2(u, y_{k_{n_l}}) \right) \le d^2(u, x^*) - d^2(u, p).$$
Now, note that since $y_k$ solves the minimization problem in (18), taking $t \in [0, 1)$ and $y \in K$, we have
$$\begin{aligned} f(x_k, y_k) + \frac{1}{2\lambda_k}\, d^2(x_k, y_k) &\le f(x_k, t y_k \oplus (1-t) y) + \frac{1}{2\lambda_k}\, d^2(x_k, t y_k \oplus (1-t) y) \\ &\le t f(x_k, y_k) + (1-t) f(x_k, y) \\ &\quad + \frac{1}{2\lambda_k}\left[ t\, d^2(x_k, y_k) + (1-t)\, d^2(x_k, y) - t(1-t)\, d^2(y_k, y) \right], \end{aligned}$$
which implies that $f(x_k, y_k) - f(x_k, y) \le \frac{1}{2\lambda_k}\left[ d^2(x_k, y) - d^2(x_k, y_k) - t\, d^2(y_k, y) \right]$. Letting $t \to 1^-$, we obtain
$$\frac{1}{2\lambda_k}\left[ d^2(x_k, y_k) + d^2(y_k, y) - d^2(x_k, y) \right] \le f(x_k, y) - f(x_k, y_k).$$
It follows from (36) that
$$-\frac{1}{2\lambda_k}\, d(x_k, y_k) \left[ d(y_k, y) + d(x_k, y) \right] \le f(x_k, y) - f(x_k, y_k).$$
Since $x_{k_{n_l}} \xrightarrow{\Delta} p$, replacing $k$ by $k_{n_l}$ in (37), then taking the liminf as $l \to \infty$, and using (33) and (34), we obtain $0 \le \liminf_{l \to \infty} f(x_{k_{n_l}}, y)$ for all $y \in K$. Since $f(\cdot, y)$ is Δ-upper semicontinuous by A2, we obtain $f(p, y) \ge 0$ for all $y \in K$, i.e., $p \in S(f, K)$. Thus, we obtain $d(u, x^*) \le d(u, p)$ by the definition of $x^*$. Then, (35) implies that
$$\limsup_{n \to \infty} \left( d^2(u, x^*) - (1 - \alpha_{k_n})\, d^2(u, y_{k_n}) \right) \le 0,$$
i.e., (31) is satisfied, and hence $d^2(x_k, x^*) \to 0$ by Lemma 4. □
In the following remark, we give a sufficient condition for the solution set of the problem to be nonempty. In this case, the sequence $\{x_k\}$ generated by Algorithm 2 converges strongly to a solution of the problem.
Remark 2. 
Assume that the bifunction $f$ satisfies A1–A4. If there is a bounded subsequence $\{x_{k_n}\}$ of $\{x_k\}$ satisfying $\lim_{n \to \infty} d(x_{k_n}, y_{k_n}) = 0$, then $S(f, K) \ne \emptyset$. In fact, without loss of generality, we may assume that there exists $p \in K$ such that $x_{k_n} \xrightarrow{\Delta} p$. Then, by replacing $k$ by $k_n$ in (37) and using the same method as in Theorem 4, we obtain $p \in S(f, K)$; that is, $S(f, K) \ne \emptyset$.

5. Conclusions

We studied the Δ-convergence, as well as the strong convergence, of the sequence generated by our proximal point algorithm for pseudo-monotone equilibrium problems in Hadamard spaces, solving only one minimization problem per iteration and assuming A3, without any knowledge of the constant L; in [11], by contrast, the authors solved two minimization problems and used the Lipschitz continuity of the bifunction. Moreover, the regularization parameter $\lambda_k$ in our algorithm is self-adaptive, while in [11] it is an a priori given sequence satisfying certain conditions. We also showed the strong convergence of the generated sequence under some additional conditions imposed on the bifunction, without using the regularization. A path for future investigations could be the exploration of possible extensions of our results to Banach spaces.

Author Contributions

The authors contributed equally to the present research, at all stages from the formulation of the problem to the final findings and solution. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43. [Google Scholar] [CrossRef]
  2. Chadli, O.; Chbani, Z.; Riahi, H. Equilibrium problems with generalized monotone bifunctions and applications to variational inequalities. J. Optim. Theory Appl. 2000, 105, 299–323. [Google Scholar]
  3. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136. [Google Scholar]
  4. Iusem, A.N.; Kassay, G.; Sosa, W. On certain conditions for the existence of solutions of equilibrium problems. Math. Program. 2009, 116, 259–273. [Google Scholar] [CrossRef]
  5. Iusem, A.N.; Sosa, W. On the proximal point method for equilibrium problems in Hilbert spaces. Optimization 2010, 59, 1259–1274. [Google Scholar] [CrossRef]
  6. Colao, V.; López, G.; Marino, G.; Martín-Márquez, V. Equilibrium problems in Hadamard manifolds. J. Math. Anal. Appl. 2012, 388, 61–77. [Google Scholar] [CrossRef]
  7. Iusem, A.N.; Mohebbi, V. An extragradient method for vector equilibrium problems on Hadamard manifolds. J. Nonlinear Var. Anal. 2021, 5, 459–476. [Google Scholar]
  8. Noor, M.A.; Noor, K.I. Some algorithms for equilibrium problems on Hadamard manifolds. J. Inequal. Appl. 2012, 2012, 230. [Google Scholar] [CrossRef]
  9. Iusem, A.N.; Mohebbi, V. Convergence analysis of the extragradient method for equilibrium problems in Hadamard spaces. Comput. Appl. Math. 2020, 39, 44. [Google Scholar] [CrossRef]
  10. Djafari-Rouhani, B.; Mohebbi, V. Proximal point methods with possible unbounded errors for monotone operators in Hadamard spaces. Optimization 2023, 72, 2345–2366. [Google Scholar] [CrossRef]
  11. Khatibzadeh, H.; Mohebbi, V. Approximating solutions of equilibrium problems in Hadamard spaces. Miskolc Math. Notes 2019, 20, 281–297. [Google Scholar] [CrossRef]
  12. Khatibzadeh, H.; Mohebbi, V. Monotone and Pseudo-Monotone Equilibrium Problems in Hadamard Spaces. J. Aust. Math. Soc. 2021, 110, 220–242. [Google Scholar] [CrossRef]
  13. Dhompongsa, S.; Panyanak, B. On Δ-convergence theorems in CAT(0) spaces. Comput. Math. Appl. 2008, 56, 2572–2579. [Google Scholar] [CrossRef]
  14. Berg, I.D.; Nikolaev, I.G. On a distance between directions in an Alexandrov space of curvature ≤ K. Mich. Math. J. 1998, 45, 275–289. [Google Scholar] [CrossRef]
  15. Berg, I.D.; Nikolaev, I.G. Quasilinearization and curvature of Alexandrov spaces. Geom. Dedicata 2008, 133, 195–218. [Google Scholar] [CrossRef]
  16. Dhompongsa, S.; Kirk, W.A.; Sims, B. Fixed points of uniformly lipschitzian mappings. Nonlinear Anal. 2006, 65, 762–772. [Google Scholar] [CrossRef]
  17. Reich, S.; Salinas, Z. Weak convergence of infinite products of operators in Hadamard spaces. Rend. Circ. Mat. Palermo 2016, 65, 55–71. [Google Scholar] [CrossRef]
  18. Kirk, W.A.; Panyanak, B. A concept of convergence in geodesic spaces. Nonlinear Anal. 2008, 68, 3689–3696. [Google Scholar] [CrossRef]
  19. Ahmadi Kakavandi, B. Weak topologies in complete CAT(0) spaces. Proc. Am. Math. Soc. 2012, 141, 1029–1039. [Google Scholar] [CrossRef]
  20. Jost, J. Nonpositive Curvature: Geometric and Analytic Aspects; Birkhauser: Basel, Switzerland, 1997. [Google Scholar]
  21. Bačák, M. Convex Analysis and Optimization in Hadamard Spaces; De Gruyter: Berlin, Germany, 2014. [Google Scholar]
  22. Bačák, M.; Reich, S. The asymptotic behavior of a class of nonlinear semigroups in Hadamard spaces. J. Fixed Point Theory Appl. 2014, 16, 189–202. [Google Scholar] [CrossRef]
  23. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 742–750. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
