Article

Resolvent-Free Method for Solving Monotone Inclusions

1. School of Mathematics and Statistics, Chongqing Technology and Business University, Chongqing 400067, China
2. College of Mathematics, Sichuan University, Chengdu 610065, China
3. Department of Mathematics, Braude College, Karmiel 2161002, Israel
* Author to whom correspondence should be addressed.
Axioms 2023, 12(6), 557; https://doi.org/10.3390/axioms12060557
Submission received: 19 April 2023 / Revised: 27 May 2023 / Accepted: 3 June 2023 / Published: 5 June 2023

Abstract

In this work, we consider the monotone inclusion problem in real Hilbert spaces and propose a simple inertial method that does not require any evaluations of the associated resolvent and projection. Under suitable assumptions, we establish the strong convergence of the method to a minimal-norm solution. Saddle points of minimax problems and critical point problems are considered as applications. Numerical examples in finite- and infinite-dimensional spaces illustrate the performance of our scheme.
MSC:
65K05; 65K10; 47H10; 47L25

1. Introduction

Since Minty [1], and many others to follow, such as [2,3,4], introduced the theory of monotone operators, a large number of theoretical and practical developments have been presented. Pascali and Sburlan [5] pointed out that the class of monotone operators is important and, due to the simple structure of the monotonicity condition, can be handled easily. The monotone inclusion problem is one of the highlights of this theory due to its significance in convex analysis and convex optimization; it encompasses convex minimization, monotone variational inequalities, convex–concave minimax problems, linear programming problems and many others. For further information and applications, see, e.g., Bot and Csetnek [6], Korpelevich [7], Khan et al. [8], Sicre et al. [9], Xu [10], Yin et al. [11] and the many references therein [12,13,14,15].
Let $H$ be a real Hilbert space and $A : H \to 2^H$ be a given operator with domain $\mathrm{Dom}(A) = \{x \in H : Ax \neq \emptyset\}$. The monotone inclusion problem is formulated as finding a point $x^*$ such that
$$0 \in A x^*.\tag{1}$$
The monotonicity in (1) refers to the monotonicity of $A$, which means that, for all $x, y \in H$,
$$\langle u - v,\; x - y\rangle \geq 0, \quad \forall u \in Ax,\; v \in Ay.\tag{2}$$
We denote the solution set of (1) by $\Omega = A^{-1}(0)$.
One of the simplest classical algorithms for solving the monotone inclusion problem (1) is the proximal point method of Martinet [16]. Given a maximal monotone mapping $A : H \to 2^H$ and its associated resolvent $J_r^A = (I + rA)^{-1}$, the proximal point algorithm generates a sequence according to the update rule
$$x_{n+1} = J_r^A x_n.$$
The proximal point algorithm, also known as the regularization algorithm, is a first-order optimization method that requires the function and gradient (subgradient) evaluations, and thus attracts much interest. For more relevant improvements and achievements on the regularization methods in Hilbert spaces, one can refer to [17,18,19,20,21,22,23].
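As a point of comparison with the resolvent-free scheme proposed later, the following minimal sketch (our illustration, not taken from the paper) runs the proximal point update for the one-dimensional operator $A = \partial|\cdot|$, whose resolvent is the soft-thresholding map.

```python
import numpy as np

# Minimal illustration (not from the paper) of the proximal point update
# x_{n+1} = J_r^A x_n for A = d|.| (subdifferential of the absolute value);
# its resolvent (I + rA)^{-1} is the soft-thresholding map.
def resolvent_abs(x, r):
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

x, r = 5.0, 0.8
for _ in range(20):
    x = resolvent_abs(x, r)      # one proximal point step
print(x)                         # tends to 0, the unique zero of A
```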
One important application of monotone inclusions is the convex minimization problem. Given a nonempty, closed and convex set $C \subseteq \mathbb{R}^n$ and a continuously differentiable function $f$, the constrained minimization problem aims to find a point $x^* \in C$ such that
$$f(x^*) = \min_{x \in C} f(x).\tag{3}$$
Using some properties from operator theory, it is known that $x^*$ solves (3) if and only if $x^* = P_C(I - \lambda \nabla f)x^*$ for some $\lambda > 0$. This relationship translates into the projected gradient method
$$x_{n+1} = P_C(x_n - \lambda \nabla f(x_n)),$$
where $P_C$ is the metric projection onto $C$ and $\nabla f$ is the gradient of $f$.
The projected gradient method calls for the evaluation of the projection onto the feasible set $C$ as well as the gradient of $f$; this guarantees a reduction in the objective function while keeping the iterates feasible. With the set $C$ as above and an operator $A : H \to H$, another important problem worth mentioning is the monotone variational inequality problem, which consists of finding a point $x^* \in C$ such that
$$\langle Ax^*,\; x - x^*\rangle \geq 0 \quad \text{for all } x \in C.\tag{4}$$
Using the relationship between the projection $P_C$, the resolvent and the normal cone $N_C$ of the set $C$, that is,
$$y = J_{\lambda N_C}(x) \;\Longleftrightarrow\; x \in y + \lambda N_C(y) \;\Longleftrightarrow\; x - y \in \lambda N_C(y) \;\Longleftrightarrow\; \langle x - y,\; d - y\rangle \leq 0 \;\; \forall d \in C \;\Longleftrightarrow\; y = P_C x,$$
we obtain the iterative step rule for solving (4):
$$x_{n+1} = P_C(x_n - \lambda A x_n).\tag{5}$$
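For concreteness, here is a small sketch (illustrative, not from the paper) of the projection-type iteration (5) for a toy problem where both the operator and the projection are available in closed form: $A = \nabla f$ with $f(x) = \frac{1}{2}\|x - b\|^2$ and $C$ a box, so $P_C$ is a coordinate-wise clip.

```python
import numpy as np

# Illustration (not from the paper) of the projection-type iteration (5):
# A = grad f with f(x) = 0.5*||x - b||^2 (1-cocoercive) and C = [0,1]^3,
# whose metric projection P_C is a coordinate-wise clip.
b = np.array([1.5, -0.3, 0.4])
A = lambda x: x - b
P_C = lambda x: np.clip(x, 0.0, 1.0)

x, lam = np.zeros(3), 1.0        # lam in (0, 2*mu) with mu = 1
for _ in range(50):
    x = P_C(x - lam * A(x))
print(x)                         # [1.0, 0.0, 0.4], the solution of (3)/(4) for this data
```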
Indeed, the optimization methods mentioned above now “dominate” modern optimization algorithms based on first-order information (such as function values and gradients/subgradients), and it can be predicted that they will become increasingly important as the scale of practical application problems increases. For excellent works, one can refer to Teboulle [24], Drusvyatskiy and Lewis [25], etc. However, it is undeniable that they depend heavily on the structure of the given problem; computationally, these methods rely on the ability to compute resolvents/projections at every iteration. Taking algorithm (5), for instance, the complexity of each step depends on the computation of the projection onto the convex set $C$.
Hence, in this work, we wish to combine the popular inertial technique (see, e.g., Nesterov [26], Alvarez [27] and Alvarez–Attouch [28]) with a simple forward step and establish a strongly convergent iterative method that uses neither resolvents nor projections and that enjoys good convergence properties thanks to the inertial term.
The outline of this paper is as follows. In Section 2, we collect the definitions and results needed for our analysis. In Section 3, the resolvent/projection-free algorithm and its convergence analysis are presented. Later, in Section 4, we present two applications of the monotone inclusion problem, saddle points of the minimax problem and the critical points problem. Finally, in Section 5, numerical experiments illustrate the performances of our scheme in finite- and infinite-dimensional spaces.

2. Preliminaries

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$ equipped with the inner product $\langle\cdot,\cdot\rangle$ and the induced norm $\|\cdot\|$. Denote the strong convergence of $\{x_n\}$ to $x$ by $x_n \to x$, and the $\omega$-weak limit set of $\{x_n\}$ by
$$w_\omega(x_n) = \{x \in H : x_{n_j} \rightharpoonup x \text{ for some subsequence } \{x_{n_j}\} \text{ of } \{x_n\}\}.$$
We recall two useful properties of the norm:
$$\|x + y\|^2 \leq \|x\|^2 + 2\langle y,\; x + y\rangle;$$
$$\|\alpha x + \beta y + \gamma z\|^2 = \alpha\|x\|^2 + \beta\|y\|^2 + \gamma\|z\|^2 - \alpha\beta\|x - y\|^2 - \beta\gamma\|y - z\|^2 - \alpha\gamma\|x - z\|^2,$$
for all $x, y, z \in H$ and $\alpha, \beta, \gamma \in \mathbb{R}$ such that $\alpha + \beta + \gamma = 1$.
Definition 1.
Let $H$ be a real Hilbert space. An operator $A : H \to H$ is called $\mu$-inverse strongly monotone ($\mu$-ism) (or $\mu$-cocoercive) if there exists a number $\mu > 0$ such that
$$\langle x - y,\; Ax - Ay\rangle \geq \mu\|Ax - Ay\|^2, \quad \forall x, y \in H.$$
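As a quick worked check (our illustration, not from the paper), a scaled identity is cocoercive; this is consistent with the operators used later in the numerical section.

```latex
% Worked check (illustration, not from the paper): a scaled identity is cocoercive.
% For $A = cI$ with $c > 0$ and all $x, y \in H$,
\[
  \langle x - y,\, Ax - Ay \rangle \;=\; c\,\|x - y\|^{2}
  \;=\; \tfrac{1}{c}\,\|c(x - y)\|^{2} \;=\; \tfrac{1}{c}\,\|Ax - Ay\|^{2},
\]
% so $A$ is $\mu$-ism with $\mu = 1/c$; e.g. $A(x)(t) = 2x(t)/3$ of Example 2 is
% $\tfrac{3}{2}$-cocoercive.  Similarly, a symmetric positive semidefinite matrix
% $A$ is $\mu$-ism with $\mu = 1/\lambda_{\max}(A)$, as used in Example 1.
```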
Definition 2.
Let $C$ be a nonempty, closed and convex subset of $H$. The operator $P_C$ is called the metric projection of $H$ onto $C$: for every element $x \in H$, there is a unique nearest point $P_C x$ in $C$ such that
$$\|x - P_C x\| = \min\{\|x - y\| : y \in C\}.$$
The characterization of the metric projection is
$$\langle x - P_C x,\; y - P_C x\rangle \leq 0, \quad \forall x \in H,\; y \in C.$$
Lemma 1
(Xu [29], Maingé [30]). Assume that $\{a_n\}$ and $\{c_n\}$ are nonnegative real sequences such that
$$a_{n+1} \leq (1 - \gamma_n)a_n + b_n + c_n, \quad n \geq 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{b_n\}$ is a real sequence, provided that
(a)  $\lim_{n\to\infty}\gamma_n = 0$, $\sum_{n=1}^{\infty}\gamma_n = \infty$ and $\sum_{n=1}^{\infty}c_n < \infty$;
(b)  $\limsup_{n\to\infty} b_n/\gamma_n \leq 0$.
Then, the limit of the sequence $\{a_n\}$ exists and $\lim_{n\to\infty}a_n = 0$.
Lemma 2
(see, e.g., Opial [31]). Let $H$ be a real Hilbert space and $\{x_n\}_{n=0}^{\infty} \subset H$ be such that there exists a nonempty, closed and convex set $S \subset H$ satisfying the following:
(1) For every $z \in S$, $\lim_{n\to\infty}\|x_n - z\|$ exists;
(2) Any weak cluster point of $\{x_n\}_{n=0}^{\infty}$ belongs to $S$.
Then, there exists $\bar x \in S$ such that $\{x_n\}_{n=0}^{\infty}$ converges weakly to $\bar x$.
Lemma 3
(see, e.g., Maingé [30]). Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\Gamma_{n_j}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_j} < \Gamma_{n_j+1}$ for all $j \geq 0$. Also consider the sequence of integers $\{\sigma(n)\}_{n \geq n_0}$ defined by
$$\sigma(n) = \max\{k \leq n : \Gamma_k \leq \Gamma_{k+1}\}.$$
Then, $\{\sigma(n)\}_{n \geq n_0}$ is a nondecreasing sequence verifying $\lim_{n\to\infty}\sigma(n) = \infty$ and, for all $n \geq n_0$,
$$\max\{\Gamma_{\sigma(n)},\; \Gamma_n\} \leq \Gamma_{\sigma(n)+1}.$$

3. Main Result

We are concerned with the following monotone inclusion problem: finding $x^* \in H$ such that
$$0 \in A x^*,\tag{9}$$
where $A$ is a monotone-type operator on $H$.
Remark 1.
Clearly, if $y_n = z_n = x_n$ for some $n \geq 1$, then $x_n$ is a solution of (9) and the iteration process terminates after finitely many iterations. In general, the algorithm does not stop after finitely many iterations, and thus we assume that it generates an infinite sequence.

Convergence Analysis

For the convergence analysis of our algorithm, we assume the following assumptions:
(A1)  $A$ is a continuous maximal monotone operator from $H$ to $H$ with cocoercivity coefficient $\mu$;
(A2)  The solution set $\Omega$ of (9) is nonempty.
Theorem 1.
Suppose that assumptions (A1)–(A2) hold. If the sequences $\{\alpha_n\}$, $\{\gamma_n\}$ are in $(0, 1)$ and satisfy the following conditions:
(B1)  $\lim_{n\to\infty}\gamma_n = 0$, $\liminf_{n\to\infty}(1 - \alpha_n - \gamma_n)\alpha_n > 0$ and $\sum_{n=1}^{\infty}\gamma_n = \infty$;
(B2)  $\epsilon_n = o(\gamma_n)$;
then the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to the element $p \in \Omega$ which is closest to $0$, that is, $p = P_\Omega(0)$.
    Algorithm 1
Initialization: Choose $\lambda_n \in (0, 2\mu)$, $\theta \in (0, 1)$ and $\epsilon_n \in (0, \infty)$ such that $\sum_{n=1}^{\infty}\epsilon_n < \infty$, select arbitrary starting points $x_0, x_1 \in H$, and set $n = 1$.
Iterative Step: Given the iterates $x_{n-1}$ and $x_n$ for each $n \geq 1$, choose $\theta_n$ such that $0 < \theta_n < \bar\theta_n$, and compute
$$y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad z_n = y_n - \lambda_n A y_n, \qquad x_{n+1} = (1 - \alpha_n - \gamma_n)y_n + \alpha_n z_n,\tag{10}$$
where
$$\bar\theta_n = \begin{cases} \min\big\{\theta,\; \epsilon_n\big[\max(\|x_n - x_{n-1}\|^2, \|x_n - x_{n-1}\|)\big]^{-1}\big\}, & x_n \neq x_{n-1};\\[2pt] \theta, & \text{otherwise}. \end{cases}$$
Stopping Criterion: If $y_n = z_n$, then stop. Otherwise, set $n := n + 1$ and return to the Iterative Step.
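Before turning to the proof, the following is a minimal Python sketch of Algorithm 1 for a cocoercive operator on $\mathbb{R}^d$ (the paper's experiments use Matlab; the operator, parameter sequences and stopping tolerance below are illustrative choices satisfying (B1)–(B2), not the authors' code).

```python
import numpy as np

def algorithm1(A, mu, x0, x1, n_iter=500, theta=0.5):
    """Sketch of Algorithm 1 for a mu-cocoercive operator A on R^d.
    The parameter sequences below are illustrative choices satisfying (B1)-(B2)."""
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        eps_n   = 1.0 / (n + 1) ** 2          # summable and eps_n = o(gamma_n)
        gamma_n = 1.0 / (10 * n)              # gamma_n -> 0, sum gamma_n = infinity
        alpha_n = 0.5                         # liminf (1 - alpha_n - gamma_n) * alpha_n > 0
        lam_n   = mu                          # step size in (0, 2*mu)
        diff = np.linalg.norm(x - x_prev)
        theta_bar = min(theta, eps_n / max(diff ** 2, diff)) if diff > 0 else theta
        theta_n = 0.5 * theta_bar             # an admissible choice 0 < theta_n < theta_bar_n
        y = x + theta_n * (x - x_prev)        # inertial step
        z = y - lam_n * A(y)                  # resolvent-free forward step
        x_prev, x = x, (1 - alpha_n - gamma_n) * y + alpha_n * z
        if np.linalg.norm(y - z) < 1e-10:     # stopping criterion y_n = z_n
            break
    return x

# toy usage: A x = M x with M diagonal positive semidefinite, mu = 1 / max(diag(M))
M = np.diag([6.0, 7.0, 8.0, 3.0, 4.0, 1.0])
print(algorithm1(lambda v: M @ v, mu=1.0 / 8, x0=np.ones(6), x1=np.full(6, 2.0)))
# the iterates approach the minimal-norm zero of A, here the origin
```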
Proof. 
First, we prove that $\{x_n\}$ is bounded. Without loss of generality, let $p$ be the closest element to $0$ in $\Omega$, which exists because $\Omega \neq \emptyset$. It follows from the cocoercivity of $A$ with coefficient $\mu$ that
$$\langle A x_n,\; x_n - p\rangle = \langle A x_n - A p,\; x_n - p\rangle \geq \mu\|A x_n\|^2.$$
Taking into account the definition of $y_n$ in the recursion (10), we have
$$\|y_n - p\| = \|x_n + \theta_n(x_n - x_{n-1}) - p\| \leq \|x_n - p\| + \theta_n\|x_n - x_{n-1}\|,$$
and
$$\begin{aligned}\|z_n - p\|^2 &= \|y_n - \lambda_n A y_n - p\|^2 = \|y_n - p\|^2 + \lambda_n^2\|A y_n\|^2 - 2\lambda_n\langle A y_n,\; y_n - p\rangle\\ &\leq \|y_n - p\|^2 + \lambda_n^2\|A y_n\|^2 - 2\mu\lambda_n\|A y_n\|^2 = \|y_n - p\|^2 + (\lambda_n - 2\mu)\lambda_n\|A y_n\|^2,\end{aligned}\tag{11}$$
which implies that $\|z_n - p\| \leq \|y_n - p\|$. Furthermore, we have
$$\begin{aligned}\|x_{n+1} - p\| &= \|(1 - \alpha_n - \gamma_n)y_n + \alpha_n z_n - p\| \leq (1 - \alpha_n - \gamma_n)\|y_n - p\| + \alpha_n\|z_n - p\| + \gamma_n\|p\|\\ &\leq (1 - \gamma_n)\|y_n - p\| + \gamma_n\|p\| \leq (1 - \gamma_n)\big[\|x_n - p\| + \theta_n\|x_n - x_{n-1}\|\big] + \gamma_n\|p\|\\ &\leq (1 - \gamma_n)\|x_n - p\| + \gamma_n\Big[\|p\| + \tfrac{\theta_n}{\gamma_n}\|x_n - x_{n-1}\|\Big].\end{aligned}$$
In view of the assumption on $\theta_n$, we obtain $\theta_n\|x_n - x_{n-1}\| \leq \epsilon_n = o(\gamma_n)$, which entails that there exists some positive constant $\sigma$ such that $\sigma = \sup_n \tfrac{\theta_n}{\gamma_n}\|x_n - x_{n-1}\|$; therefore,
$$\|x_{n+1} - p\| \leq (1 - \gamma_n)\|x_n - p\| + \gamma_n(\|p\| + \sigma) \leq \max\{\|x_0 - p\|,\; \|p\| + \sigma\},$$
namely, the sequence $\{x_n\}$ is bounded, and so are $\{y_n\}$ and $\{z_n\}$.
It follows from (10) and (11) that
$$\begin{aligned}\|x_{n+1} - p\|^2 &= \|(1 - \alpha_n - \gamma_n)y_n + \alpha_n z_n - p\|^2 = \|(1 - \alpha_n - \gamma_n)(y_n - p) + \alpha_n(z_n - p) - \gamma_n p\|^2\\ &\leq (1 - \alpha_n - \gamma_n)\|y_n - p\|^2 + \alpha_n\|z_n - p\|^2 + \gamma_n\|p\|^2 - (1 - \alpha_n - \gamma_n)\alpha_n\|y_n - z_n\|^2\\ &\leq (1 - \alpha_n - \gamma_n)\|y_n - p\|^2 + \alpha_n\big[\|y_n - p\|^2 + (\lambda_n - 2\mu)\lambda_n\|A y_n\|^2\big] + \gamma_n\|p\|^2 - (1 - \alpha_n - \gamma_n)\alpha_n\|y_n - z_n\|^2\\ &= (1 - \gamma_n)\|y_n - p\|^2 + \alpha_n(\lambda_n - 2\mu)\lambda_n\|A y_n\|^2 + \gamma_n\|p\|^2 - (1 - \alpha_n - \gamma_n)\alpha_n\|y_n - z_n\|^2.\end{aligned}\tag{12}$$
By using again the formation of $y_n$, we obtain
$$\begin{aligned}\|y_n - p\|^2 &= \|x_n + \theta_n(x_n - x_{n-1}) - p\|^2 = \|(1 + \theta_n)(x_n - p) - \theta_n(x_{n-1} - p)\|^2\\ &= (1 + \theta_n)\|x_n - p\|^2 - \theta_n\|x_{n-1} - p\|^2 + \theta_n(1 + \theta_n)\|x_n - x_{n-1}\|^2\\ &\leq (1 + \theta_n)\|x_n - p\|^2 - \theta_n\|x_{n-1} - p\|^2 + 2\theta_n\|x_n - x_{n-1}\|^2.\end{aligned}\tag{13}$$
Substituting (13) into (12), we have
$$\begin{aligned}\|x_{n+1} - p\|^2 &\leq (1 - \gamma_n)\big[(1 + \theta_n)\|x_n - p\|^2 - \theta_n\|x_{n-1} - p\|^2 + 2\theta_n\|x_n - x_{n-1}\|^2\big]\\ &\quad + \alpha_n(\lambda_n - 2\mu)\lambda_n\|A y_n\|^2 + \gamma_n\|p\|^2 - (1 - \alpha_n - \gamma_n)\alpha_n\|y_n - z_n\|^2\\ &= (1 - \gamma_n)\big[\|x_n - p\|^2 + \theta_n(\|x_n - p\|^2 - \|x_{n-1} - p\|^2) + 2\theta_n\|x_n - x_{n-1}\|^2\big]\\ &\quad + \alpha_n(\lambda_n - 2\mu)\lambda_n\|A y_n\|^2 + \gamma_n\|p\|^2 - (1 - \alpha_n - \gamma_n)\alpha_n\|y_n - z_n\|^2\\ &= \|x_n - p\|^2 + \gamma_n(\|p\|^2 - \|x_n - p\|^2) + 2(1 - \gamma_n)\theta_n\|x_n - x_{n-1}\|^2\\ &\quad + (1 - \gamma_n)\theta_n(\|x_n - p\|^2 - \|x_{n-1} - p\|^2) + \alpha_n(\lambda_n - 2\mu)\lambda_n\|A y_n\|^2 - (1 - \alpha_n - \gamma_n)\alpha_n\|y_n - z_n\|^2,\end{aligned}$$
and transposing, we have
$$\begin{aligned}(1 - \alpha_n - \gamma_n)\alpha_n\|y_n - z_n\|^2 + \alpha_n(2\mu - \lambda_n)\lambda_n\|A y_n\|^2 &\leq (\|x_n - p\|^2 - \|x_{n+1} - p\|^2) + (1 - \gamma_n)\theta_n(\|x_n - p\|^2 - \|x_{n-1} - p\|^2)\\ &\quad + \gamma_n(\|p\|^2 - \|x_n - p\|^2) + 2(1 - \gamma_n)\theta_n\|x_n - x_{n-1}\|^2.\end{aligned}\tag{14}$$
Here, two cases should be considered.
Case I. Assume that the sequence $\{\|x_n - p\|\}$ is eventually nonincreasing, namely, there exists $N_0 > 0$ such that $\|x_{n+1} - p\| \leq \|x_n - p\|$ for each $n > N_0$; then the limit of $\|x_n - p\|$ exists and $\lim_{n\to\infty}(\|x_{n+1} - p\| - \|x_n - p\|) = 0$. It follows from (14) and condition (B1) that
$$(1 - \alpha_n - \gamma_n)\alpha_n\|y_n - z_n\|^2 \to 0, \qquad \alpha_n(2\mu - \lambda_n)\lambda_n\|A y_n\|^2 \to 0,$$
which implies that $\|y_n - z_n\|^2 \to 0$ and $\|A y_n\|^2 \to 0$.
Furthermore, setting $u_n = (1 - \alpha_n)y_n + \alpha_n z_n$, we have $\|u_n - y_n\| = \alpha_n\|z_n - y_n\| \to 0$ and $\|x_{n+1} - u_n\| = \gamma_n\|y_n\| \to 0$, which together with $\|x_n - y_n\| = \theta_n\|x_n - x_{n-1}\| \to 0$ yields that
$$\|x_{n+1} - x_n\| \leq \|x_{n+1} - u_n\| + \|u_n - y_n\| + \|y_n - x_n\| \to 0.$$
Because $\{x_n\}$ is bounded, it follows from the Eberlein–Šmulian theorem that, for an arbitrary point $q \in w_\omega(x_n)$, there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j}$ converges weakly to $q$. Since $\|x_n - y_n\| \to 0$, $\|A y_n\|^2 \to 0$ and $A$ is continuous, we have
$$0 = \lim_{n\to\infty} A y_n = \lim_{j\to\infty} A y_{n_j} = A q,$$
which entails that $q \in A^{-1}(0)$. In view of the fact that the choice of $q$ in $w_\omega(x_n)$ was arbitrary, we conclude that $w_\omega(x_n) \subseteq \Omega$, which makes Lemma 2 applicable, that is, $\{x_n\}_{n=0}^{\infty}$ converges weakly to some point in $\Omega$.
Now, we claim that $x_n \to p$, where $p = P_\Omega(0)$.
For this purpose, recall that $u_n = (1 - \alpha_n)y_n + \alpha_n z_n$; then we have
$$x_{n+1} = (1 - \alpha_n - \gamma_n)y_n + \alpha_n z_n = (1 - \gamma_n)u_n + \gamma_n(u_n - y_n) = (1 - \gamma_n)u_n + \gamma_n\alpha_n(z_n - y_n),$$
which yields that
$$\begin{aligned}\|x_{n+1} - p\|^2 &= \|(1 - \gamma_n)u_n + \gamma_n\alpha_n(z_n - y_n) - p\|^2 = \|(1 - \gamma_n)(u_n - p) + \gamma_n\alpha_n(z_n - y_n) - \gamma_n p\|^2\\ &\leq (1 - \gamma_n)^2\|u_n - p\|^2 + 2\langle\gamma_n\alpha_n(z_n - y_n) - \gamma_n p,\; x_{n+1} - p\rangle\\ &= (1 - \gamma_n)^2\|u_n - p\|^2 - 2\gamma_n\alpha_n\langle y_n - z_n,\; x_{n+1} - p\rangle + 2\gamma_n\langle -p,\; x_{n+1} - p\rangle.\end{aligned}\tag{15}$$
In addition, by using again the formation of $\{y_n\}$, we obtain
$$\|y_n - p\|^2 = \|x_n + \theta_n(x_n - x_{n-1}) - p\|^2 = (1 + \theta_n)\|x_n - p\|^2 - \theta_n\|x_{n-1} - p\|^2 + \theta_n(1 + \theta_n)\|x_n - x_{n-1}\|^2 \leq (1 + \theta_n)\|x_n - p\|^2 - \theta_n\|x_{n-1} - p\|^2 + 2\theta_n\|x_n - x_{n-1}\|^2,$$
and substituting the above inequality, together with $\|u_n - p\| \leq \|y_n - p\|$, into (15), we have
$$\begin{aligned}\|x_{n+1} - p\|^2 &\leq (1 - \gamma_n)^2\|y_n - p\|^2 - 2\gamma_n\alpha_n\langle y_n - z_n,\; x_{n+1} - p\rangle + 2\gamma_n\langle -p,\; x_{n+1} - p\rangle\\ &\leq (1 - \gamma_n)^2\big[(1 + \theta_n)\|x_n - p\|^2 - \theta_n\|x_{n-1} - p\|^2 + 2\theta_n\|x_n - x_{n-1}\|^2\big]\\ &\quad - 2\gamma_n\alpha_n\langle y_n - z_n,\; x_{n+1} - p\rangle + 2\gamma_n\langle -p,\; x_{n+1} - p\rangle\\ &\leq (1 - \gamma_n)\big[\|x_n - p\|^2 + \theta_n(\|x_n - p\|^2 - \|x_{n-1} - p\|^2) + 2\theta_n\|x_n - x_{n-1}\|^2\big]\\ &\quad - 2\gamma_n\alpha_n\langle y_n - z_n,\; x_{n+1} - p\rangle + 2\gamma_n\langle -p,\; x_{n+1} - p\rangle\\ &\leq (1 - \gamma_n)\big[\|x_n - p\|^2 + \theta_n\|x_n - x_{n-1}\|\,(\|x_n - p\| + \|x_{n-1} - p\|) + 2\theta_n\|x_n - x_{n-1}\|^2\big]\\ &\quad - 2\gamma_n\alpha_n\langle y_n - z_n,\; x_{n+1} - p\rangle + 2\gamma_n\langle -p,\; x_{n+1} - p\rangle\\ &\leq (1 - \gamma_n)\|x_n - p\|^2 + \theta_n(1 - \gamma_n)M\|x_n - x_{n-1}\| + 2\gamma_n\langle -p,\; x_{n+1} - p\rangle - 2\gamma_n\alpha_n\langle y_n - z_n,\; x_{n+1} - p\rangle,\end{aligned}\tag{16}$$
where $M = \sup_n\big(\|x_n - p\| + \|x_{n-1} - p\| + 2\theta_n\|x_n - x_{n-1}\|\big)$.
Owing to $p = P_\Omega(0)$, we can infer that $\langle 0 - P_\Omega(0),\; y - P_\Omega(0)\rangle \leq 0$ for each $y \in \Omega$, so we have
$$\limsup_{n\to\infty}\langle -p,\; x_{n+1} - p\rangle = \max_{q \in w_\omega(x_n)}\langle -p,\; q - p\rangle \leq 0.$$
In addition, from the assumption on $\{\theta_n\}$, we have
$$\sum_{n=1}^{\infty}\theta_n(1 - \gamma_n)M\|x_n - x_{n-1}\| < \infty,$$
and from $\|y_n - z_n\| \to 0$, we have
$$\limsup_{n\to\infty}\big\{-2\gamma_n\alpha_n\langle y_n - z_n,\; x_{n+1} - p\rangle + 2\gamma_n\langle -p,\; x_{n+1} - p\rangle\big\}/\gamma_n \leq 0,$$
and therefore Lemma 1 is applicable to (16), namely, $\|x_n - p\| \to 0$.
Case II. Suppose that the sequence $\{\|x_n - p\|\}$ is not decreasing at infinity, in the sense that there exists a subsequence $\{\|x_{n_j} - p\|\}$ of $\{\|x_n - p\|\}$ such that $\|x_{n_j} - p\| \leq \|x_{n_j+1} - p\|$ for all $j$. Owing to Lemma 3, we can deduce that $\|x_{\sigma(n)} - p\| \leq \|x_{\sigma(n)+1} - p\|$ and $\|x_n - p\| \leq \|x_{\sigma(n)+1} - p\|$, where $\sigma(n)$ is the index defined by $\sigma(n) = \max\{k \leq n : \|x_k - p\| \leq \|x_{k+1} - p\|\}$ and $\sigma(n) \to \infty$ as $n \to \infty$.
Taking into account the fact that formula (14) still holds for each $\sigma(n)$, that is,
$$\begin{aligned}&(1 - \alpha_{\sigma(n)} - \gamma_{\sigma(n)})\alpha_{\sigma(n)}\|y_{\sigma(n)} - z_{\sigma(n)}\|^2 + \alpha_{\sigma(n)}(2\mu - \lambda_{\sigma(n)})\lambda_{\sigma(n)}\|A y_{\sigma(n)}\|^2\\ &\quad\leq (\|x_{\sigma(n)} - p\|^2 - \|x_{\sigma(n)+1} - p\|^2) + \gamma_{\sigma(n)}(\|p\|^2 - \|x_{\sigma(n)} - p\|^2)\\ &\qquad + (1 - \gamma_{\sigma(n)})\theta_{\sigma(n)}(\|x_{\sigma(n)} - p\|^2 - \|x_{\sigma(n)-1} - p\|^2) + 2(1 - \gamma_{\sigma(n)})\theta_{\sigma(n)}\|x_{\sigma(n)} - x_{\sigma(n)-1}\|^2\\ &\quad\leq \gamma_{\sigma(n)}(\|p\|^2 - \|x_{\sigma(n)} - p\|^2) + (1 - \gamma_{\sigma(n)})\theta_{\sigma(n)}(\|x_{\sigma(n)} - p\|^2 - \|x_{\sigma(n)-1} - p\|^2) + 2(1 - \gamma_{\sigma(n)})\theta_{\sigma(n)}\|x_{\sigma(n)} - x_{\sigma(n)-1}\|^2,\end{aligned}$$
it follows from the theorem's assumptions (B1) and (B2) that
$$\|y_{\sigma(n)} - z_{\sigma(n)}\|^2 \to 0, \qquad \|A y_{\sigma(n)}\|^2 \to 0.\tag{17}$$
Similarly to the proof of (16) in Case I, we have $w_\omega(x_n) \subseteq \Omega$ and
$$\limsup_{n\to\infty}\langle -p,\; x_{\sigma(n)+1} - p\rangle \leq 0,\tag{18}$$
and
$$\begin{aligned}\|x_{\sigma(n)+1} - p\|^2 &\leq (1 - \gamma_{\sigma(n)})\|x_{\sigma(n)} - p\|^2 + \theta_{\sigma(n)}(1 - \gamma_{\sigma(n)})M\|x_{\sigma(n)} - x_{\sigma(n)-1}\|\\ &\quad - 2\gamma_{\sigma(n)}\alpha_{\sigma(n)}\langle y_{\sigma(n)} - z_{\sigma(n)},\; x_{\sigma(n)+1} - p\rangle + 2\gamma_{\sigma(n)}\langle -p,\; x_{\sigma(n)+1} - p\rangle.\end{aligned}\tag{19}$$
Transposing again, we have
$$\begin{aligned}\gamma_{\sigma(n)}\|x_{\sigma(n)} - p\|^2 &\leq (\|x_{\sigma(n)} - p\|^2 - \|x_{\sigma(n)+1} - p\|^2) + \theta_{\sigma(n)}(1 - \gamma_{\sigma(n)})M\|x_{\sigma(n)} - x_{\sigma(n)-1}\|\\ &\quad + 2\gamma_{\sigma(n)}\langle -p,\; x_{\sigma(n)+1} - p\rangle - 2\gamma_{\sigma(n)}\alpha_{\sigma(n)}\langle y_{\sigma(n)} - z_{\sigma(n)},\; x_{\sigma(n)+1} - p\rangle\\ &\leq \theta_{\sigma(n)}(1 - \gamma_{\sigma(n)})M\|x_{\sigma(n)} - x_{\sigma(n)-1}\| + 2\gamma_{\sigma(n)}\langle -p,\; x_{\sigma(n)+1} - p\rangle - 2\gamma_{\sigma(n)}\alpha_{\sigma(n)}\langle y_{\sigma(n)} - z_{\sigma(n)},\; x_{\sigma(n)+1} - p\rangle,\end{aligned}$$
which amounts to
$$\|x_{\sigma(n)} - p\|^2 \leq \frac{\theta_{\sigma(n)}}{\gamma_{\sigma(n)}}(1 - \gamma_{\sigma(n)})M\|x_{\sigma(n)} - x_{\sigma(n)-1}\| + 2\langle -p,\; x_{\sigma(n)+1} - p\rangle - 2\alpha_{\sigma(n)}\langle y_{\sigma(n)} - z_{\sigma(n)},\; x_{\sigma(n)+1} - p\rangle.\tag{20}$$
Noting that $\epsilon_{\sigma(n)} = o(\gamma_{\sigma(n)})$, we have $\frac{\theta_{\sigma(n)}}{\gamma_{\sigma(n)}}(1 - \gamma_{\sigma(n)})M\|x_{\sigma(n)} - x_{\sigma(n)-1}\| \to 0$. Putting (17) and (18) into (20), it follows that $\|x_{\sigma(n)} - p\| \to 0$.
It follows from (19) that
$$\lim_{n\to\infty}\|x_{\sigma(n)+1} - p\| = \lim_{n\to\infty}\|x_{\sigma(n)} - p\| = 0,$$
which makes Lemma 3 applicable, and hence
$$0 \leq \|x_n - p\| \leq \max\{\|x_n - p\|,\; \|x_{\sigma(n)} - p\|\} \leq \|x_{\sigma(n)+1} - p\| \to 0.$$
Consequently, the sequence $\{x_n\}$ converges strongly to $p$, which is the closest point to $0$ in $\Omega$. This completes the proof. □
Remark 2.
If the operator A is accretive with $\mu$-cocoercivity, or maximal monotone, then all the above results hold.

4. Applications

4.1. Minimax Problem

Suppose $H_1$ and $H_2$ are two real Hilbert spaces. The general convex–concave minimax problem in a Hilbert space setting is formulated as follows:
$$\min_{x \in Q}\max_{\lambda \in S} L(x, \lambda),\tag{21}$$
where $Q$ and $S$ are nonempty, closed and convex subsets of the Hilbert spaces $H_1$ and $H_2$, respectively, and $L(x, \lambda)$ is convex in $x$ (for each fixed $\lambda \in S$) and concave in $\lambda$ (for each fixed $x \in Q$).
A solution $(x^*, \lambda^*) \in Q \times S$ of the minimax problem (21) is interpreted as a saddle point, satisfying the inequality
$$L(x^*, \lambda) \leq L(x^*, \lambda^*) \leq L(x, \lambda^*), \quad \forall x \in Q,\; \lambda \in S,$$
which amounts to the fact that $x^* \in Q$ is a minimizer over $Q$ of the function $L(\cdot, \lambda^*)$, and $\lambda^* \in S$ is a maximizer over $S$ of the function $L(x^*, \cdot)$.
Minimax problems are an important modeling tool due to their ability to handle many important applications in machine learning, in particular, in generative adversarial nets (GANs), statistical learning, certification of robustness in deep learning and distributed computing. Some recent works can be seen in, e.g., Ataş [32], Ji-Zhao [33] and Hassanpour et al. [34].
For example, consider the standard convex programming problem
$$\min f(x), \quad \text{s.t. } h_i(x) \leq 0,\; i = 1, 2, \ldots, l,\tag{22}$$
where $f$ and $h_i$ ($i = 1, 2, \ldots, l$) are convex functions. Using the Lagrange function $L$, problem (22) can be reformulated as the minimax problem (see, e.g., Qi and Sun [35]) associated with
$$L(x, \lambda) = f(x) + \sum_{i=1}^{l}\lambda_i h_i(x).\tag{23}$$
It can be seen that $L(x, \lambda)$ in (23) is a convex–concave function on $Q \times S$, where
$$Q = \{x : h_i(x) \leq 0,\; i = 1, 2, \ldots, l\}, \qquad S = \{\lambda : \lambda_i \geq 0,\; i = 1, 2, \ldots, l\},$$
and the Kuhn–Tucker vector $(x^*, \lambda^*)$ of (22) is exactly the saddle point of the Lagrangian function $L(x, \lambda)$ in (23).
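As a small worked instance of (22) and (23) (our illustration, not from the paper), consider minimizing $f(x) = x^2$ subject to $h(x) = 1 - x \leq 0$.

```latex
% Small worked instance of (22)--(23) (illustration, not from the paper):
% minimize $f(x) = x^{2}$ subject to $h(x) = 1 - x \le 0$.  The Lagrangian is
\[
  L(x,\lambda) \;=\; x^{2} + \lambda\,(1 - x), \qquad \lambda \ge 0 .
\]
% From $\nabla_x L = 2x - \lambda = 0$ and $h(x^{*}) = 0$ we obtain the
% Kuhn--Tucker pair $(x^{*},\lambda^{*}) = (1, 2)$, and indeed
% $L(1,\lambda) = 1 \le L(1,2) = 1 \le (x-1)^{2} + 1 = L(x,2)$ for all $x$
% and all $\lambda \ge 0$, so $(1,2)$ is a saddle point of $L$.
```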
Another nice example is the Tchebychev approximation problem, which consists of finding $(x, \lambda)$ such that
$$\min_{\lambda \in Q}\max_{x \in S}\big(g(x) - \lambda(x)\big)^2;$$
that is, for a given $g : S \subseteq \mathbb{R}^n \to \mathbb{R}$, finding $\lambda(x) \in Q$ approximating $g(x)$, where $\lambda : \mathbb{R}^n \to \mathbb{R}$ and $Q$ is the space composed of the functions $\lambda$.
It is known that $L$ has a saddle point if and only if
$$\min_{x \in Q}\max_{\lambda \in S}L(x, \lambda) = \max_{\lambda \in S}\min_{x \in Q}L(x, \lambda).$$
If $L$ is convex–concave and differentiable, let $\nabla_x L(x, \lambda)$ and $\nabla_\lambda L(x, \lambda)$ denote the partial derivatives of $L$ with respect to $x$ and $\lambda$, respectively, and define $\nabla L(z) = [\nabla_x L(x, \lambda),\; -\nabla_\lambda L(x, \lambda)]^T$, where $z = (x, \lambda)$.
Note that $\nabla L$ is maximal monotone in the unconstrained case (i.e., $Q = H_1$, $S = H_2$), and finding a saddle point $z^* = (x^*, \lambda^*) \in Q \times S$ of $L$ is equivalent to solving the equation $\nabla L(z^*) = 0$. For more details on the minimax problem and its solutions, one can refer to the von Neumann works from the 1920s and 1930s [36,37] and Ky Fan's minimax theorem [38].
Now, we consider the minimax problem (21) in the unconstrained case and assume that its solution set is nonempty. By taking $A = \nabla L$, we can obtain a saddle point of the minimax problem in $H_1 \times H_2$ from the following result.
Theorem 2.
Let $H_1$ and $H_2$ be two real Hilbert spaces. Suppose that the function $L$ is convex–concave and differentiable and that its saddle point set $\Omega$ is nonempty. Under the setting of the parameters in Algorithm 1, if the sequences $\{\alpha_n\}, \{\gamma_n\}, \{\epsilon_n\}$ are in $(0, 1)$ and satisfy the conditions of Theorem 1, then the sequence $\{z_n\}$ generated by the following scheme
$$y_n = z_n + \theta_n(z_n - z_{n-1}), \qquad \bar z_n = y_n - \lambda_n\nabla L(y_n), \qquad z_{n+1} = (1 - \alpha_n - \gamma_n)y_n + \alpha_n\bar z_n,\tag{24}$$
converges strongly to the least-norm element $z^* \in \Omega$, where $z_0 \in H_1 \times H_2$ and $z_1 \in H_1 \times H_2$ are two arbitrary initial points.
Proof. 
Noting that $\nabla L$ is maximal monotone, let $A$ be $\nabla L$ in Algorithm 1; the result then follows from Theorem 1. □
Indeed, if we denote $z_n = (x_n, \lambda_n) \in H_1 \times H_2$, then the recursion (24) can be rewritten componentwise as follows for arbitrary initial points $x_0, x_1, \lambda_0, \lambda_1$ (the step sizes of Algorithm 1 are denoted here by $\tau_n \in (0, 2\mu)$ to avoid a clash with the dual variable $\lambda_n$):
$$\begin{aligned} u_n &= x_n + \theta_n(x_n - x_{n-1}), & v_n &= \lambda_n + \theta_n(\lambda_n - \lambda_{n-1}),\\ \bar x_n &= u_n - \tau_n\nabla_x L(u_n, v_n), & \bar\lambda_n &= v_n + \tau_n\nabla_\lambda L(u_n, v_n),\\ x_{n+1} &= (1 - \alpha_n - \gamma_n)u_n + \alpha_n\bar x_n, & \lambda_{n+1} &= (1 - \alpha_n - \gamma_n)v_n + \alpha_n\bar\lambda_n,\end{aligned}\tag{25}$$
and the sequence of pairs $(x_n, \lambda_n)$ converges strongly to the saddle point $(x^*, \lambda^*)$ which is closest to $(0, 0)$.
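The following is a minimal Python sketch of the componentwise recursion (25) on a toy quadratic saddle problem (the problem, parameters and step sizes are illustrative assumptions, not taken from the paper).

```python
import numpy as np

# Illustrative sketch of the componentwise recursion (25) (problem and parameters
# are assumptions, not from the paper) for the convex-concave function
#   L(x, lam) = 0.5*(x - 1)^2 + x*lam - 0.5*lam^2,
# whose unique saddle point is (x*, lam*) = (0.5, 0.5).  The operator
# T(z) = (grad_x L, -grad_lam L) is maximal monotone and 1/2-cocoercive.
def T(z):
    x, lam = z
    return np.array([x - 1.0 + lam, -(x - lam)])

z_prev, z = np.array([2.0, -1.0]), np.array([0.0, 0.0])
theta = 0.5
for n in range(1, 2001):
    eps_n, gamma_n, alpha_n, tau_n = 1.0 / (n + 1) ** 2, 1.0 / (10 * n), 0.5, 0.5
    diff = np.linalg.norm(z - z_prev)
    theta_n = 0.5 * (min(theta, eps_n / max(diff ** 2, diff)) if diff > 0 else theta)
    y = z + theta_n * (z - z_prev)            # inertial step
    z_bar = y - tau_n * T(y)                  # forward (gradient ascent-descent) step
    z_prev, z = z, (1 - alpha_n - gamma_n) * y + alpha_n * z_bar
print(z)                                      # approaches the saddle point (0.5, 0.5)
```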

4.2. Critical Points Problem

In this part, we focus on finding the critical points of the functional $F : H \to \mathbb{R} \cup \{+\infty\}$ defined by
$$F := \Psi + \Phi,\tag{26}$$
where $H$ is a real Hilbert space, $\Psi : H \to \mathbb{R} \cup \{+\infty\}$ is a proper, convex and lower semi-continuous function and $\Phi : H \to \mathbb{R}$ is a convex, locally Lipschitz mapping.
A point $x^*$ is said to be a critical point of $F = \Psi + \Phi$ if $x^* \in \mathrm{dom}(\Psi)$ and if it satisfies
$$\Psi(x^*) - \Psi(v) \leq \Phi^{\circ}(x^*;\; v - x^*), \quad \forall v \in H,$$
where $\Phi^{\circ}$ is the generalized directional derivative of $\Phi$ at $x^*$ in the direction $v \in H$, defined by
$$\Phi^{\circ}(x^*; v) = \limsup_{w \to x^*,\; t \downarrow 0}\frac{\Phi(w + tv) - \Phi(w)}{t}.$$
Critical point theory is a powerful theoretical tool which has been greatly developed in recent years and widely used in many fields, such as differential equations and optimization in operations research. For some recent works on applications of critical point theory, we refer to Trushnikov et al. [39], Turgut et al. [40] and the references therein.
A typical instance is the impulsive differential equation model arising in medicine, biology, rocket and aerospace motion, and optimization theory, whose solution can be transformed into finding a critical point of some functional.
Specifically, we consider the following impulsive differential equation:
$$\ddot q(t) = \lambda q(t) + f(t, q(t)), \quad t \in (s_{k-1}, s_k); \qquad \Delta\dot q(s_k) = g_k(q(s_k)), \quad k = 1, 2, \ldots,\tag{27}$$
where $k \in \mathbb{Z}$, $\lambda \geq 0$, $q(t) \in \mathbb{R}^n$, $\Delta\dot q(s_k) = \dot q(s_k^+) - \dot q(s_k^-)$, $\dot q(s_k^{\pm}) = \lim_{t \to s_k^{\pm}}\dot q(t)$, $f(t, q) = \mathrm{grad}_q I(t, q)$ with $I(t, q) \in C^1(\mathbb{R} \times \mathbb{R}^n, \mathbb{R})$, and $g_k(q) = \mathrm{grad}_q G_k(q)$ with $G_k \in C^1(\mathbb{R}^n, \mathbb{R})$. In addition, there exist $m \in \mathbb{N}$ and $T \in \mathbb{R}^+$ such that $0 = s_0 < s_1 < s_2 < \cdots < s_m = T$, $s_{k+m} = s_k + T$ and $g_{k+m} = g_k$ hold for all $k \in \mathbb{Z}$.
Let $H = \{q : \mathbb{R} \to \mathbb{R}^n \mid q$ is absolutely continuous, $\dot q \in L^2((0, T), \mathbb{R}^n)$, $q(t) = q(t + T)$ for $t \in \mathbb{R}\}$, with the norm $\|\cdot\|$ induced by the inner product $\langle q, p\rangle = \int_0^T \dot q(t)\dot p(t) + q(t)p(t)\,dt$, $p, q \in H$.
Denote $K = \{1, 2, \ldots, m\}$, and define the functional on $H$ by
$$F(q) = \int_0^T\Big[\frac{1}{2}|\dot q|^2 - \frac{1}{2}\lambda|q|^2 - I(t, q)\Big]dt + \sum_{k \in K}G_k(q(s_k));$$
then the periodic solutions of system (27) correspond one to one to the critical points of the functional $F$.
If the functional $F$ in (26) satisfies the Palais–Smale compactness condition and $F$ is bounded from below, then there exists a critical point $x^*$ such that $F(x^*) = \inf_{u \in H}F(u)$ (see, e.g., Motreanu and Panagiotopoulos [41]). From Fermat's theorem, one can infer that the critical point $x^*$ is a solution of the inclusion (see, e.g., Moameni [42])
$$0 \in \partial\Psi(x^*) + \partial_c\Phi(x^*),$$
where $\partial_c\Phi(\cdot)$ is the generalized (Clarke) subdifferential of $\Phi$ defined by
$$\partial_c\Phi(u) = \{u^* \in H^* : \Phi^{\circ}(u; v) \geq \langle u^*, v\rangle,\; \forall v \in H\}.$$
From Clarke [43], $\partial_c\Phi$ carries bounded sets of $H$ into bounded sets of $H^*$ and is hemicontinuous. Moreover, we can infer that $\partial_c\Phi$ is a monotone mapping because $\Phi$ is convex, which makes Browder ([17], Theorem 2) applicable, namely, $\partial\Psi + \partial_c\Phi$ is a maximal monotone mapping. Denote by $\Omega$ the critical point set of problem (26). By taking $A = \partial\Psi + \partial_c\Phi$, we have the following result.
Theorem 3.
Let $H$ be a real Hilbert space. Suppose that $F : H \to (-\infty, \infty]$ is of the form (26), bounded from below and satisfies the Palais–Smale compactness condition, so that the critical point set $\Omega$ is nonempty. Under the setting of the parameters in Algorithm 1, if $\{\alpha_n\}, \{\gamma_n\}, \{\epsilon_n\}$ are sequences in $(0, 1)$ satisfying the conditions of Theorem 1, then the sequence $\{z_n\}$ generated by the following scheme
$$y_n = z_n + \theta_n(z_n - z_{n-1}), \qquad \bar z_n = y_n - \lambda_n(\partial\Psi + \partial_c\Phi)(y_n), \qquad z_{n+1} = (1 - \alpha_n - \gamma_n)y_n + \alpha_n\bar z_n,$$
converges strongly to an element $\bar x \in \Omega$ which is closest to $0$.

5. Numerical Examples

In this section, we present numerical examples in finite- and infinite-dimensional spaces to illustrate the applicability, efficiency and stability of Algorithm 1. All codes were written in Matlab R2016b and executed on an LG dual-core personal computer.
Example 1.
Here, we test the effectiveness of our algorithm in a finite-dimensional space; very high dimensions are not needed for this purpose. Let $H = \mathbb{R}^6$, and define the monotone operator $A$ as follows:
$$A = \begin{pmatrix} 6 & 0 & 0 & 0 & 0 & 0\\ 0 & 7 & 0 & 0 & 0 & 0\\ 0 & 0 & 8 & 0 & 0 & 0\\ 0 & 0 & 0 & 3 & 0 & 0\\ 0 & 0 & 0 & 0 & 4 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix};$$
it is easy to verify that the cocoercivity coefficient is $\mu = \frac{1}{8}$, so we set $\lambda_n = \frac{1}{8} - \frac{1}{10n}$.
Next, let us compare our Algorithm 1 with the regularization method. Specifically, the regularization algorithm (RM) is considered as
$$y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad z_n = J_r^A y_n, \qquad x_{n+1} = (1 - \alpha_n - \gamma_n)y_n + \alpha_n z_n.$$
For both our Algorithm 1 and the regularization method (RM), the initial points $x_0, x_1$ are generated randomly by Matlab, and the inertial coefficient $\theta_n$ is chosen as follows: if $\theta > \epsilon_n\big[\max(\|x_{n-1} - x_n\|, \|x_{n-1} - x_n\|^2)\big]^{-1}$, then $\theta_n = \frac{1}{(n+2)^2\max(\|x_{n-1} - x_n\|,\, \|x_{n-1} - x_n\|^2)}$; otherwise, $\theta_n = \frac{\theta}{2}$, where $\theta = 0.6$, $\epsilon_n = \frac{1}{(n+1)^2}$ and $\gamma_n = \frac{1}{10n}$. The experimental results are presented in Figure 1. Moreover, the iterations and convergence rates of Algorithm 1 for different values of $\{\alpha_n\}$ are reported in Table 1.
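A Python sketch of this comparison is given below (the paper's code is Matlab; the random seed, iteration budget, the simplified inertial rule and the resolvent parameter $r$ shown are illustrative assumptions, while the operator and the other parameter sequences follow the text).

```python
import numpy as np

# Python sketch of the Example 1 comparison (the paper's code is Matlab; the seed,
# iteration budget and resolvent parameter r are illustrative choices).
A = np.diag([6.0, 7.0, 8.0, 3.0, 4.0, 1.0])
mu, d = 1.0 / 8, 6
J_r = np.linalg.inv(np.eye(d) + 0.1 * A)      # resolvent (I + rA)^{-1} with r = 0.1

def run(step, n_iter=300, theta=0.6):
    rng = np.random.default_rng(0)
    x_prev, x = rng.standard_normal(d), rng.standard_normal(d)
    for n in range(1, n_iter + 1):
        eps_n, gamma_n = 1.0 / (n + 1) ** 2, 1.0 / (10 * n)
        alpha_n = 0.5 - 1.0 / (100 * n)       # one of the choices reported in Table 1
        diff = np.linalg.norm(x - x_prev)
        theta_bar = min(theta, eps_n / max(diff ** 2, diff)) if diff > 0 else theta
        theta_n = 0.5 * theta_bar             # an admissible choice 0 < theta_n < theta_bar_n
        y = x + theta_n * (x - x_prev)
        z = step(y, n)
        x_prev, x = x, (1 - alpha_n - gamma_n) * y + alpha_n * z
    return x

x_alg1 = run(lambda y, n: y - (mu - 1.0 / (10 * n)) * (A @ y))   # resolvent-free step
x_rm   = run(lambda y, n: J_r @ y)                               # regularization (RM) step
print(np.linalg.norm(x_alg1), np.linalg.norm(x_rm))              # both tend to 0 = A^{-1}(0)
```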
Example 2.
Now, we test our Algorithm 1 in $H = L^2[0, 1]$ with $\|x\| = \big(\int_0^1 x^2(t)\,dt\big)^{1/2}$. Define the mapping $A$ by $A(x)(t) := 2x(t)/3$ for all $x(t) \in L^2[0, 1]$; it can then be shown that $A$ is a $\frac{3}{2}$-cocoercive monotone mapping. All the parameters $\theta_n$, $\theta$, $\lambda_n$, $\epsilon_n$ and $\gamma_n$ are chosen as in Example 1. The stopping criterion is $\|x_{n+1} - x_n\| \leq 10^{-6}$. We test Algorithm 1 for the following three different pairs of initial points:
Case I: $x_0 = 2t^3e^{5t}$, $x_1 = \sin(3t)e^{t}/100$;
Case II: $x_0 = \sin(3t) + \cos(5t)/2$, $x_1 = 2t\sin(3t)e^{5t}/200$;
Case III: $x_0 = 2t\sin(3t)e^{5t}/200$, $x_1 = e^{t} - e^{2t}$.
In addition, we also test the regularization method as illustrated in Example 1; the behavior of the sequences is shown in Figure 2, Figure 3 and Table 2.
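For completeness, a discretized Python sketch of Example 2 is given below (illustrative; functions in $L^2[0,1]$ are sampled on a uniform grid and the norm is approximated by a Riemann sum, which is an assumption of this sketch rather than the paper's implementation).

```python
import numpy as np

# Discretized sketch of Example 2 (an illustration; the paper's code is Matlab).
# Functions in L^2[0,1] are sampled on a uniform grid and the norm is the
# Riemann-sum approximation of (int_0^1 x(t)^2 dt)^(1/2).
t = np.linspace(0.0, 1.0, 1001)
h = t[1] - t[0]
norm = lambda x: np.sqrt(h * np.sum(x ** 2))
A = lambda x: 2.0 * x / 3.0                   # 3/2-cocoercive, A^{-1}(0) = {0}

x_prev = 2 * t ** 3 * np.exp(5 * t)           # Case I initial points (as reconstructed above)
x = np.sin(3 * t) * np.exp(t) / 100
theta, mu = 0.6, 1.5
for n in range(1, 10001):
    eps_n, gamma_n, alpha_n, lam_n = 1.0 / (n + 1) ** 2, 1.0 / (10 * n), 0.5, mu
    diff = norm(x - x_prev)
    theta_n = 0.5 * (min(theta, eps_n / max(diff ** 2, diff)) if diff > 0 else theta)
    y = x + theta_n * (x - x_prev)
    z = y - lam_n * A(y)                      # resolvent-free step
    x_new = (1 - alpha_n - gamma_n) * y + alpha_n * z
    if norm(x_new - x) <= 1e-6:               # stopping criterion from the text
        break
    x_prev, x = x, x_new
print(n, norm(x_new))                         # the iterates tend to the zero function
```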

6. Conclusions

The proximal point method (regularization method) and projection-based method are two classical and significant methods for solving monotone inclusions, variational inequalities and related problems.
However, the evaluations of resolvents/projections in these methods heavily rely on the structure of the given problem, and in the general case, this might seriously affect the computational effort of the given method. Thus, motivated by the ideas of Chidume et al. [44], Alvarez [27], Alvarez–Attouch [28] and Zegeye [45], we present a simple strong convergence method that avoids the need to compute resolvents/projections.
We presented several theoretical applications, such as minimax problems and critical point problems, as well as some numerical experiments illustrating the performance of our scheme.

Author Contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Funding

This article was funded by the National Natural Science Foundation of China (12071316) and the Natural Science Foundation of Chongqing (cstc2021jcyj-msxmX0177).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Minty, G.J. Monotone (nonlinear) operators in Hilbert spaces. Duke Math. J. 1962, 29, 341–346. [Google Scholar] [CrossRef]
  2. Browder, F. The solvability of nonlinear functional equations. Duke Math. J. 1963, 30, 557–566. [Google Scholar] [CrossRef]
  3. Leray, J.; Lions, J. Quelques résultats de Višik sur les problèmes elliptiques non linéaires par les méthodes de Minty-Browder. Bull. Soc. Math. Fr. 1965, 93, 97–107. [Google Scholar] [CrossRef]
  4. Minty, G.J. On a monotonicity method for the solution of non-linear equations in Banach spaces. Proc. Nat. Acad. Sci. USA 1963, 50, 1038–1041. [Google Scholar] [CrossRef] [PubMed]
  5. Pascali, D.; Sburlan, S. Nonlinear Mappings of Monotone Type; Editura Academia Bucuresti: Bucharest, Romania, 1978; p. 101. [Google Scholar]
  6. Bot, R.I.; Csetnek, E.R. An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms 2016, 71, 519–540. [Google Scholar] [CrossRef]
  7. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekonomika i Matematicheskie Metody 1976, 12, 747–756. [Google Scholar]
  8. Khan, S.A.; Suantai, S.; Cholamjiak, W. Shrinking projection methods involving inertial forward–backward splitting methods for inclusion problems. Rev. Real Acad. Cienc. Exactas Fis. Nat. A Mat. 2019, 113, 645–656. [Google Scholar] [CrossRef]
  9. Sicre, M.R. On the complexity of a hybrid proximal extragradient projective method for solving monotone inclusion problems. Comput. Optim. Appl. 2020, 76, 991–1019. [Google Scholar] [CrossRef]
  10. Xu, H.K. A regularization method for the proximal point algorithm. J. Glob. Optim. 2006, 36, 115–125. [Google Scholar] [CrossRef]
  11. Yin, J.H.; Jian, J.B.; Jiang, X.Z.; Liu, M.X.; Wang, L.Z. A hybrid three-term conjugate gradient projection method for constrained nonlinear monotone equations with applications. Numer. Algorithms 2021, 88, 389–418. [Google Scholar] [CrossRef]
  12. Berinde, V. Iterative Approximation of Fixed Points; Lecture Notes in Mathematics; Springer: London, UK, 2007. [Google Scholar]
  13. Chidume, C.E. An approximation method for monotone Lipschitz operators in Hilbert spaces. J. Austral. Math. Soc. Ser. A 1986, 41, 59–63. [Google Scholar] [CrossRef]
  14. Kačurovskii, R.I. On monotone operators and convex functionals. Usp. Mat. Nauk. 1960, 15, 213–215. [Google Scholar]
  15. Zarantonello, E.H. Solving Functional Equations by Contractive Averaging; Technical Report #160; U. S. Army Mathematics Research Center: Madison, WI, USA, 1960. [Google Scholar]
  16. Martinet, B. Regularisation d’inequations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 1970, 4, 154–158. [Google Scholar]
  17. Browder, F.E. Nonlinear maximal monotone operators in Banach space. Math. Annalen 1968, 175, 89–113. [Google Scholar] [CrossRef]
  18. Bruck, R.E., Jr. A strongly convergent iterative method for the solution of 0∈Ux for a maximal monotone operator U in Hilbert space. J. Math. Anal. Appl. 1974, 48, 114–126. [Google Scholar] [CrossRef]
  19. Boikanyo, O.A.; Morosanu, G. A proximal point algorithm converging strongly for general errors. Optim. Lett. 2010, 4, 635–641. [Google Scholar] [CrossRef]
  20. Khatibzadeh, H. Some Remarks on the Proximal Point Algorithm. J. Optim. Theory Appl. 2012, 153, 769–778. [Google Scholar] [CrossRef]
  21. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef]
  22. Shehu, Y. Single projection algorithm for variational inequalities in Banach spaces with applications to contact problems. Acta Math Sci. 2020, 40B, 1045–1063. [Google Scholar] [CrossRef]
  23. Yao, Y.H.; Shahzad, N. Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 2012, 6, 621–628. [Google Scholar] [CrossRef]
  24. Teboulle, M. A simplified view of first order methods for optimization. Math. Program. Ser. B 2018, 170, 67–96. [Google Scholar] [CrossRef]
  25. Drusvyatskiy, D.; Lewis, A.S. Error bounds, quadratic growth, and linear convergence of proximal methods. Math. Oper. Res. 2018, 43, 919–948. [Google Scholar] [CrossRef]
  26. Nesterov, Y. Introductory Lectures on Convex Optimization; Kluwer Academic Publishers: Boston, MA, USA, 2004. [Google Scholar]
  27. Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert spaces. SIAM J. Optim. 2004, 14, 773–782. [Google Scholar] [CrossRef]
  28. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
  29. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
  30. Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479. [Google Scholar] [CrossRef]
  31. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull Amer Math Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef]
  32. Ataş, İ. Comparison of deep convolution and least squares GANs for diabetic retinopathy image synthesis. Neural Comput. Appl. 2023, 35, 14431–14448. [Google Scholar] [CrossRef]
  33. Ji, M.M.; Zhao, P. Image restoration based on the minimax-concave and the overlapping group sparsity. Signal Image Video Process. 2023, 17, 1733–1741. [Google Scholar] [CrossRef]
  34. Hassanpour, H.; Hosseinzadeh, E.; Moodi, M. Solving intuitionistic fuzzy multi-objective linear programming problem and its application in supply chain management. Appl. Math. 2023, 68, 269–287. [Google Scholar] [CrossRef]
  35. Qi, L.Q.; Sun, W.Y. Nonconvex Optimization and Its Applications; Book Series (NOIA, Volume 4), Minimax and Applications; Kluwer Academic Publishers: London, UK, 1995; pp. 55–67. [Google Scholar]
  36. Von Neumann, J. Zur Theorie der Gesellschaftsspiele. Math. Ann. 1928, 100, 295–320. [Google Scholar] [CrossRef]
  37. Von Neumann, J. Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes. Ergebn. Math. Kolloqu. Wien 1935, 8, 73–83. [Google Scholar]
  38. Fan, K. A minimax inequality and applications. In Inequalities, III; Shisha, O., Ed.; Academic Press: San Diego, CA, USA, 1972; pp. 103–113. [Google Scholar]
  39. Trushnikov, D.N.; Krotova, E.L.; Starikov, S.S.; Musikhin, N.A.; Varushkin, S.V.; Matveev, E.V. Solving the inverse problem of surface reconstruction during electron beam surfacing. Russ. J. Nondestruct. Test. 2023, 59, 240–250. [Google Scholar] [CrossRef]
  40. Turgut, O.E.; Turgut, M.S.; Kirtepe, E. A systematic review of the emerging metaheuristic algorithms on solving complex optimization problems. Neural Comput. Appl. 2023, 35, 14275–14378. [Google Scholar] [CrossRef]
  41. Motreanu, D.; Panagiotopoulos, P.D. Minimax Theorems and Qualitative Properties of the Solutions of Hemivariational Inequalities; Nonconvex Optimization and Its Applications; Kluwer Academic: New York, NY, USA, 1999. [Google Scholar]
  42. Moameni, A. Critical point theory on convex subsets with applications in differential equations and analysis. J. Math. Pures. Appl. 2020, 141, 266–315. [Google Scholar] [CrossRef]
  43. Clarke, F. Functional Analysis Calculus of Variations and Optimal Control; Springer: London, UK, 2013; pp. 193–209. [Google Scholar]
  44. Chidume, C.E.; Osilike, M.O. Iterative solutions of nonlinear accretive operator equations in arbitrary Banach spaces. Nonlinear Anal. Theory Methods Appl. 1999, 36, 863–872. [Google Scholar] [CrossRef]
  45. Zegeye, H. Strong convergence theorems for maximal monotone mappings in Banach spaces. J. Math. Anal. Appl. 2008, 343, 663–671. [Google Scholar] [CrossRef]
Figure 1. Algorithm 1 and the Regularization Method.
Figure 2. Algorithm 1 for Case I, Case II, Case III in Example 2.
Figure 3. Regularization Method for Case I, Case II, Case III in Example 2.
Table 1. Example 1: Numerical Results for Algorithm 1 and the Regularization Method.

| $\{\alpha_n\}$          | Algorithm 1: CPU Time | Iter. | $\|x_{n+1}-x^*\|/\|x_{n+1}-x_0\|$ | $r$  | RM: CPU Time | Iter. | $\|x_{n+1}-x^*\|/\|x_{n+1}-x_0\|$ |
| $1 - \frac{1}{100n}$    | 0.0201                | 42    | 6.1691 × 10⁻⁴                      | 0.1  | 0.0324       | 63    | 5.3741 × 10⁻⁴                      |
| $\frac12 - \frac{1}{100n}$ | 0.0594             | 88    | 9.159 × 10⁻⁴                       | 0.05 | 0.0530       | 115   | 0.0013                             |
| $\frac18 - \frac{1}{100n}$ | 0.0422             | 276   | 0.0039                             | 0.01 | 0.2205       | 367   | 0.0078                             |
Table 2. Example 2: Numerical Results for Algorithm 1 and the Regularization Method.

|                    | Case I: Algorithm 1 | RM   | Case II: Algorithm 1 | RM  | Case III: Algorithm 1 | RM   |
| CPU time           | 2.64                | 5.28 | 4.29                 | 8.6 | 3.62                  | 7.59 |
| Iteration number   | 9                   | 17   | 9                    | 23  | 13                    | 27   |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
