Article

Quantitative Stability of Optimization Problems with Stochastic Constraints

School of Mathematics, Yunnan Normal University, Kunming 650500, China
* Author to whom correspondence should be addressed.
Current address: Yunnan Key Laboratory of Modern Analytical Mathematics and Applications, Kunming 650500, China.
Mathematics 2023, 11(18), 3885; https://doi.org/10.3390/math11183885
Submission received: 9 August 2023 / Revised: 8 September 2023 / Accepted: 11 September 2023 / Published: 12 September 2023

Abstract

In this paper, we consider optimization problems with stochastic constraints. We derive quantitative stability results for the optimal value function, the optimal solution set and the feasible solution set of optimization models in which the underlying stochastic constraints involve the mathematical expectation of random single-valued and set-valued mappings, respectively. New primal sufficient conditions are developed for the uniform error bound property of the stochastic constraint system for the single-valued case.

1. Introduction

Consider the following optimization problem with stochastic constraint (OPSC):
$$\min_{x \in X} f(x) \quad \text{s.t.} \quad \mathbb{E}_P[\gamma(x,\xi)] \le 0,$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is a deterministic function, $X \subset \mathbb{R}^n$ is the direct constraint set for the decision vector $x \in \mathbb{R}^n$, $\gamma : \mathbb{R}^n \times \Xi \to \mathbb{R}$, $\xi : \Omega \to \Xi$ is a random variable defined on the probability space $(\Omega, \mathcal{F}, P)$ with support set $\Xi \subset \mathbb{R}^m$ and probability distribution P, and $\mathbb{E}_P[\cdot]$ denotes the expected value with respect to P.
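For illustration, the expectation constraint can be tested numerically by a sample average. The following minimal sketch does this for a toy choice $\gamma(x,\xi) = \xi - x$, $f(x) = x^2$ and $\xi$ uniform on $[0,1]$; these specific functions, the sample size and the grid are assumptions made only for this sketch and are not part of the model above.

import numpy as np

# Illustrative toy instance (not from the paper): gamma(x, xi) = xi - x,
# f(x) = x^2, xi ~ Uniform(0, 1), direct constraint set X = [0, 2].
rng = np.random.default_rng(0)
xi_samples = rng.uniform(0.0, 1.0, size=100_000)  # samples approximating P

def f(x):
    return x ** 2

def expected_gamma(x):
    # Sample-average approximation of E_P[gamma(x, xi)] (= 1/2 - x here).
    return float(np.mean(xi_samples - x))

grid = np.linspace(0.0, 2.0, 2001)
feasible = [x for x in grid if expected_gamma(x) <= 0.0]  # approximates S(P)
x_star = min(feasible, key=f)                             # approximate minimizer
print(x_star, f(x_star))  # close to x = 0.5 with optimal value 0.25

For this toy instance the exact solution is $x = 1/2$ with optimal value $1/4$, so the sketch only illustrates how the constraint $\mathbb{E}_P[\gamma(x,\xi)] \le 0$ enters the problem.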
Model (1) can be viewed as an extension of the classic deterministic mathematical programming problem. It is closely related to the distributionally robust optimization model, as well as optimization problems with stochastic dominance constraints, which have received considerable attention over the past decade for their theoretical importance and extensive applications (for more details, please refer to references [1,2,3,4,5,6,7,8,9,10] and the references therein).
The probability distribution of the random variables in traditional stochastic programs is often assumed to be completely known. However, in many practical applications, it is usually impossible or expensive to know the true probability distribution, while we are able to obtain partial information, such as prior moments, samples of the random variables, or historical data, and use it to construct an ambiguity set of probability distributions which contains or approximates the true probability distribution (see references [11,12,13,14,15,16,17,18] and the references therein). This motivates us to consider the following perturbed optimization model:
$$\min_{x \in X} f(x) \quad \text{s.t.} \quad \mathbb{E}_Q[\gamma(x,\xi)] \le 0,$$
where Q is taken from the ambiguity set of possible probability distributions of the random vector $\xi : \Omega \to \Xi \subset \mathbb{R}^m$ on some probability space $(\Omega, \mathcal{F}, P)$.
It is of great importance to investigate the quantitative stability of OPSC by analyzing the impact of the variation of probability distributions on the optimal value, the optimal solution set and the feasible solution set. To this end, by utilizing Ekeland's Variational Principle, one of the major tasks of this paper is to develop new primal sufficient conditions for the uniform error bound property of the inequality constraint system in (2), which enables us to derive stability estimates for OPSC (2) as Q approximates P under appropriate metrics. The uniform error bound property plays an essential role in the proofs of the main results. For more information regarding error bounds of deterministic systems, please refer to references [19,20] and the references therein.
OPSC (1) can be viewed as a particular case of the following optimization problem constrained by a stochastic generalized equation (OPSGE):
$$\min_{x \in X} f(x) \quad \text{s.t.} \quad 0 \in \mathbb{E}_P[\Gamma(x,\xi)] + G(x),$$
where $X \subset \mathbb{R}^n$, $f : \mathbb{R}^n \to \mathbb{R}$, $\Gamma : X \times \Xi \rightrightarrows Y$ and $G : X \rightrightarrows Y$ are closed set-valued mappings, X and Y are subsets of $\mathbb{R}^n$ and $\mathbb{R}^d$, respectively, $\xi : \Omega \to \Xi$ is a random vector defined on a probability space $(\Omega, \mathcal{F}, P)$ with support set $\Xi \subset \mathbb{R}^m$ and probability distribution P, and $\mathbb{E}_P[\cdot]$ denotes the expected value with respect to P, that is,
$$\mathbb{E}_P[\Gamma(x,\xi)] := \int_\Xi \Gamma(x,\xi)\, dP(\xi) = \left\{ \int_\Xi \psi(\xi)\, P(d\xi) : \psi \text{ is a Bochner integrable selection of } \Gamma(x,\cdot) \right\}.$$
The expected value of $\Gamma$ is widely known as the Aumann integral of the set-valued mapping (for more details, please refer to references [21,22,23]). Indeed, if we set $d = 1$ and $G(x) = [0, +\infty)$ for all $x \in X$, and $\Gamma$ is single-valued, then OPSGE (3) reduces to OPSC (1).
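As a short worked check of this reduction (with data chosen only for illustration), take $d = 1$, $G(x) = [0,+\infty)$ and a single-valued $\Gamma(x,\xi) = \gamma(x,\xi)$. Then
$$0 \in \mathbb{E}_P[\gamma(x,\xi)] + [0,+\infty) \iff \mathbb{E}_P[\gamma(x,\xi)] = -t \ \text{for some } t \ge 0 \iff \mathbb{E}_P[\gamma(x,\xi)] \le 0,$$
which is exactly the constraint in (1). For a genuinely set-valued toy example, if $\Gamma(x,\xi) = [0, \xi]$ and P is the uniform distribution on $[0,1]$, then the Aumann integral is $\mathbb{E}_P[\Gamma(x,\xi)] = \{ \mathbb{E}_P[\psi(\xi)] : 0 \le \psi(\xi) \le \xi \} = [0, 1/2]$.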
The stochastic generalized equation (SGE) formulation in model (3) is a natural extension of the deterministic generalized equation [24]. It provides a unified platform for describing the first-order optimality/equilibrium conditions of nonsmooth stochastic optimization problems, stochastic equilibrium problems and stochastic games (see references [15,16] and the references therein). In particular, when $\Gamma$ is single-valued and $G(x)$ is the normal cone to a set, the SGE in (3) is known as a stochastic variational inequality, which has been studied extensively over the past few years (see references [11,18] and the references therein).
Another major objective of this paper is to establish quantitative stability results for OPSGE (3) by estimating the variation of the feasible solution set and the optimal value function, respectively, as the underlying probability distribution P is perturbed within an ambiguity set. The obtained results supplement and extend existing ones in the literature, and the proofs are completely self-contained.
The rest of the paper is structured as follows. Section 2 contains definitions of the basic properties under consideration and some preliminary results used throughout the following sections. In Section 3, we conduct stability analysis for both OPSC (1) and OPSGE (3) under perturbation of the underlying probability distribution. Estimates for the variation of the optimal value function, the optimal solution set and the feasible solution set of the aforementioned optimization models are obtained.

2. Preliminaries

Throughout this paper, we use the following notation. Let $\mathbb{N}$ and $\mathbb{R}$ denote the set of natural numbers and the set of real numbers, respectively. Let $\|\cdot\|$ denote the Euclidean norm in $\mathbb{R}^n$ and $[a]_+ := \max\{a, 0\}$ for all $a \in \mathbb{R}$. The symbol $\mathbb{B}(x, r)$ indicates the closed ball of radius $r > 0$ centered at $x \in \mathbb{R}^n$. Given subsets $C, D \subset \mathbb{R}^n$, define the distance from a point $x \in \mathbb{R}^n$ to C by
$$d(x, C) := \inf\{\|x - c\| : c \in C\}.$$
The excess from C to D is defined by
$$\mathbb{D}(C, D) := \sup\{d(c, D) : c \in C\}.$$
The Pompeiu–Hausdorff distance between C and D is defined as follows:
$$\mathbb{H}(C, D) := \max\{\mathbb{D}(C, D), \mathbb{D}(D, C)\}.$$
We use the conventions that $d(x, \emptyset) := +\infty$, $\mathbb{D}(\emptyset, D) := 0$ if $D \ne \emptyset$, and $\mathbb{D}(\emptyset, D) := +\infty$ if $D = \emptyset$. Let $\mathcal{P}(\Omega)$ denote the set of all Borel probability measures on $\Omega$ and let $\mathcal{P} \subset \mathcal{P}(\Omega)$ represent the ambiguity set of possible probability distributions of P.
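For finite point sets these quantities can be computed directly. The following small sketch (with arbitrary illustrative point sets in $\mathbb{R}^2$) implements $d(x, C)$, the excess $\mathbb{D}(C, D)$ and the Pompeiu–Hausdorff distance $\mathbb{H}(C, D)$ exactly as defined above.

import numpy as np

def dist_point_to_set(x, C):
    # d(x, C) = inf_{c in C} ||x - c||, with C a finite array of points.
    return np.min(np.linalg.norm(C - x, axis=1))

def excess(C, D):
    # D(C, D) = sup_{c in C} d(c, D).
    return np.max([dist_point_to_set(c, D) for c in C])

def hausdorff(C, D):
    # H(C, D) = max{ D(C, D), D(D, C) }.
    return max(excess(C, D), excess(D, C))

# Illustrative point sets in R^2 (chosen only for this example).
C = np.array([[0.0, 0.0], [1.0, 0.0]])
D = np.array([[0.0, 0.5], [2.0, 0.0]])
print(hausdorff(C, D))  # prints 1.0 for these particular sets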
Let $F : \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ be a set-valued mapping, and let its domain and graph be denoted, respectively, by
$$\mathrm{dom}(F) := \{x \in \mathbb{R}^n : F(x) \ne \emptyset\} \quad \text{and} \quad \mathrm{gph}(F) := \{(x, y) \in \mathbb{R}^n \times \mathbb{R}^m : y \in F(x)\}.$$
The symbol $F^{-1} : \mathbb{R}^m \rightrightarrows \mathbb{R}^n$ stands for the inverse mapping of F, with $F^{-1}(y) = \{x \in \mathbb{R}^n : y \in F(x)\}$ for all $y \in \mathbb{R}^m$.
Next, we recall the notions of metric regularity and Lipschitz continuity of set-valued mappings (cf. reference [25]) which will be employed in our study.
Definition 1.
Let $X \subset \mathbb{R}^n$, $Y \subset \mathbb{R}^m$ and $\Phi : \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ be a set-valued mapping. If there exists $\kappa > 0$ such that
$$d\big(x, \Phi^{-1}(y)\big) \le \kappa\, d\big(y, \Phi(x)\big), \quad \forall x \in X \ \text{and} \ y \in Y,$$
then we say that Φ is metrically regular on X × Y with constant κ.
Definition 2.
Let $X \subset \mathbb{R}^n$, $Y \subset \mathbb{R}^m$ and $\Phi : \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ be a set-valued mapping. We say that $\Phi$ is Lipschitz continuous on X with constant $\iota > 0$ if
$$\mathbb{H}\big(\Phi(x), \Phi(x')\big) \le \iota \|x - x'\|, \quad \forall x, x' \in X.$$
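As a simple one-dimensional illustration (chosen only for this purpose), consider $\Phi(x) = [x, x+1]$ on $\mathbb{R}$. Then $\Phi^{-1}(y) = [y-1, y]$, and a direct computation gives
$$d\big(x, \Phi^{-1}(y)\big) = \max\{x - y,\ y - 1 - x,\ 0\} = d\big(y, \Phi(x)\big), \qquad \mathbb{H}\big(\Phi(x), \Phi(x')\big) = |x - x'|,$$
so $\Phi$ is metrically regular on $\mathbb{R} \times \mathbb{R}$ with constant $\kappa = 1$ and Lipschitz continuous with constant $\iota = 1$.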
Let us finish this section by recalling the well-known Ekeland Variational Principle (cf. reference [26]) which plays a fundamental role in the proofs of our main results.
Lemma 1
(Ekeland Variational Principle). Let X be a complete metric space and $f : X \to \mathbb{R} \cup \{+\infty\}$ be a proper lower semicontinuous function. Let $\varepsilon > 0$ and $\bar{x} \in X$ be such that
$$f(\bar{x}) \le \inf_{x \in X} f(x) + \varepsilon.$$
Then, for any $\lambda > 0$ there exists $x \in X$ such that
  • $d(x, \bar{x}) \le \lambda$,
  • $f(x) \le f(\bar{x})$,
  • $f(x) < f(u) + \frac{\varepsilon}{\lambda} d(u, x)$ for all $u \in X \setminus \{x\}$.
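For orientation (this remark is illustrative and not part of the lemma), taking $\lambda = \sqrt{\varepsilon}$ recovers the classical "square-root" form of the principle: every $\varepsilon$-minimizer $\bar{x}$ admits a point $x$ with
$$d(x, \bar{x}) \le \sqrt{\varepsilon}, \qquad f(x) \le f(\bar{x}), \qquad f(x) < f(u) + \sqrt{\varepsilon}\, d(u, x) \quad \forall u \in X \setminus \{x\}.$$
In Section 3 the lemma is applied with $\lambda$ chosen so that the penalty slope $\varepsilon/\lambda$ matches the constants $1/n$ and $\tau$ appearing there.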

3. Stability Analysis of Optimization Problems with Stochastic Constraints

In this section, we aim to investigate the variation of the optimal value function, the optimal solution set and the feasible solution set of optimization problems with stochastic constraints when the probability measure is perturbed. The obtained results are twofold: we provide stability analyses for both optimization model (2) and model (3), which cover the cases of single-valued and set-valued SGE constraints, respectively.

3.1. Stability Analysis of OPSC (2)

Recall that in optimization model (2), we are looking for candidates $x \in X$ such that
$$\mathbb{E}_Q[\gamma(x,\xi)] \le 0,$$
where $X \subset \mathbb{R}^n$ is the direct constraint set for the decision vector $x \in \mathbb{R}^n$ and Q is a perturbation of the probability distribution P of the random vector $\xi \in \mathbb{R}^m$.
To characterize the variation of probability measures with respect to OPSC (2), we define the following pseudometric induced by $\mathcal{H}$:
$$\mathcal{D}(P, Q) := \sup_{h \in \mathcal{H}} \big| \mathbb{E}_P[h(\xi)] - \mathbb{E}_Q[h(\xi)] \big|, \quad \forall P, Q \in \mathcal{P},$$
where $\mathcal{H}$ is a set of random functions:
$$\mathcal{H} := \{ h(\cdot) : h(\xi) = \gamma(x, \xi) \ \text{for some } x \in X \}.$$
For convenience, let us denote by $S(Q)$ the feasible solution set of (2), i.e.,
$$S(Q) := \{ x \in X : \mathbb{E}_Q[\gamma(x,\xi)] \le 0 \}.$$
Furthermore, let $S^*(Q)$ and $\vartheta(Q)$ denote the optimal solution set and the optimal value of model (2), respectively.
Definition 3.
We say that system (6) has a uniform error bound with respect to the constraint set X and the ambiguity set $\mathcal{P} \subset \mathcal{P}(\Omega)$ if there exists $\kappa > 0$ such that
$$d\big(x, S(Q)\big) \le \kappa \big[\mathbb{E}_Q[\gamma(x,\xi)]\big]_+, \quad \forall (x, Q) \in X \times \mathcal{P}.$$
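To see what the definition asks for, consider the toy data $X = [0,1]$, $\gamma(x,\xi) = \xi - x$, and an ambiguity set $\mathcal{P}$ of distributions whose means $m_Q := \mathbb{E}_Q[\xi]$ lie in $[0,1]$ (all of these choices are made only for illustration). Then $S(Q) = \{x \in [0,1] : x \ge m_Q\} = [m_Q, 1]$ and
$$d\big(x, S(Q)\big) = [m_Q - x]_+ = \big[\mathbb{E}_Q[\gamma(x,\xi)]\big]_+, \quad \forall (x, Q) \in X \times \mathcal{P},$$
so the uniform error bound holds with $\kappa = 1$.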
To start with, we establish the following sufficient condition for the uniform error bound property of constraint system (6) by utilizing Ekeland's Variational Principle, which will be instrumental in the proofs of our main results.
Proposition 1.
Suppose that the mapping $x \mapsto \gamma(x, \xi)$ is lower semicontinuous for any $\xi \in \mathbb{R}^m$. Let $\tau \in (0, +\infty)$ be such that, for any $x \in X$ and $Q \in \mathcal{P}$ with $\mathbb{E}_Q[\gamma(x,\xi)] > 0$, there exists $x' \in X \setminus \{x\}$ satisfying
$$[\gamma(x', \cdot)]_+ \le \gamma(x, \cdot) - \tau \|x - x'\|.$$
Then, we have $\mathrm{dom}(S) = \mathcal{P}$ and
$$d\big(x, S(Q)\big) \le \frac{1}{\tau} \big[\mathbb{E}_Q[\gamma(x,\xi)]\big]_+, \quad \forall (x, Q) \in X \times \mathcal{P}.$$
Consequently, system (6) has a uniform error bound with respect to X and $\mathcal{P}$.
Proof. 
Note that $\gamma(\cdot, \xi(\omega))$ is lower semicontinuous; it follows from reference [27] (Theorem 7.42) that, for any $Q \in \mathcal{P}$, the mapping $x \mapsto \mathbb{E}_Q[\gamma(x,\xi)]$ is also lower semicontinuous. Using (8), we know that for any $x \in X$ and $Q \in \mathcal{P}$ with $\mathbb{E}_Q[\gamma(x,\xi)] > 0$, there exists $x' \in X \setminus \{x\}$ such that
$$\mathbb{E}_Q\big[[\gamma(x', \xi)]_+\big] \le \mathbb{E}_Q[\gamma(x, \xi)] - \tau \|x - x'\|.$$
For any $Q \in \mathcal{P}$, we claim that $S(Q) \ne \emptyset$. Indeed, if this is not the case, we have
$$\mathbb{E}_Q[\gamma(x, \xi)] > 0, \quad \forall x \in X.$$
To proceed, we pick a sequence $\{x_n\} \subset X$ such that
$$\mathbb{E}_Q[\gamma(x_n, \xi)] < \inf_{u \in X} \mathbb{E}_Q[\gamma(u, \xi)] + \frac{1}{n^2}, \quad \forall n \in \mathbb{N}.$$
Applying Lemma 1 to the function $x \mapsto \mathbb{E}_Q[\gamma(x, \xi)]$, we are able to find a sequence $\{x_n'\} \subset X$ such that $\|x_n - x_n'\| \le \frac{1}{n}$ and
$$\mathbb{E}_Q[\gamma(x_n', \xi)] < \mathbb{E}_Q[\gamma(x, \xi)] + \frac{1}{n} \|x - x_n'\|, \quad \forall x \in X \setminus \{x_n'\}.$$
For n large enough that $\frac{1}{n} < \tau$, this contradicts (10) and (11). Therefore, $S(Q) \ne \emptyset$, and hence $\mathrm{dom}(S) = \mathcal{P}$.
Now we are ready to show that (9) holds. To this end, assume, to the contrary, that there exists $(x_0, Q_0) \in X \times \mathcal{P}$ such that
$$\big[\mathbb{E}_{Q_0}[\gamma(x_0, \xi)]\big]_+ < \tau\, d\big(x_0, S(Q_0)\big).$$
Then, $d(x_0, S(Q_0)) > 0$, which indicates that $\mathbb{E}_{Q_0}[\gamma(x_0, \xi)] > 0$. Let $\tau' \in (0, \tau)$ be close enough to $\tau$ such that
$$\mathbb{E}_{Q_0}[\gamma(x_0, \xi)] < \inf_{u \in X} \big[\mathbb{E}_{Q_0}[\gamma(u, \xi)]\big]_+ + \tau'\, d\big(x_0, S(Q_0)\big).$$
Applying Lemma 1 to the function $x \mapsto \big[\mathbb{E}_{Q_0}[\gamma(x, \xi)]\big]_+$ again, we obtain $x_0' \in X$ such that $\|x_0 - x_0'\| \le \frac{\tau'}{\tau} d(x_0, S(Q_0))$ and
$$\big[\mathbb{E}_{Q_0}[\gamma(x_0', \xi)]\big]_+ < \big[\mathbb{E}_{Q_0}[\gamma(x, \xi)]\big]_+ + \tau \|x - x_0'\|, \quad \forall x \in X \setminus \{x_0'\}.$$
Together with (10), we conclude that $\big[\mathbb{E}_{Q_0}[\gamma(x_0', \xi)]\big]_+ = 0$, and then $x_0' \in S(Q_0)$, which contradicts the fact that
$$\|x_0 - x_0'\| \le \frac{\tau'}{\tau} d\big(x_0, S(Q_0)\big) < d\big(x_0, S(Q_0)\big).$$
Hence, (9) holds true. □
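To illustrate how the decrease condition (8) can be verified, consider the toy data (used only for this example) $X = [0,1]$ and $\gamma(x,\xi) = (1+\xi)(1-x)$ with $\xi \ge 0$. Whenever $\mathbb{E}_Q[\gamma(x,\xi)] > 0$ we have $x < 1$, and the choice $x' = 1 \in X \setminus \{x\}$ gives, pointwise in $\xi$,
$$[\gamma(1, \xi)]_+ = 0 \le \xi (1 - x) = \gamma(x, \xi) - \|x - 1\|,$$
so (8) holds with $\tau = 1$. Accordingly, $S(Q) = \{1\}$ for every Q, and the bound (9) reads $1 - x \le \big[(1 + \mathbb{E}_Q[\xi])(1 - x)\big]_+$, which is indeed valid.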
Without the semicontinuity assumption, we next provide a different primal condition to ensure the uniform error bound property of the system (6).
Proposition 2.
Let $\rho \in [0, 1)$ and $\tau \in (0, +\infty)$ be constants. Suppose that for each $Q \in \mathcal{P}$ and $x \in X \setminus S(Q)$, there exists $x' \in X \setminus \{x\}$ such that
$$d\big(x', S(Q)\big) \le \rho\, d\big(x, S(Q)\big)$$
and
$$[\gamma(x', \cdot)]_+ \le \gamma(x, \cdot) - \tau \|x - x'\|.$$
Then, we conclude that (9) holds.
Proof. 
To show the validity of (9), we fix any $Q \in \mathcal{P}$ and $x \in X$. If $x \in S(Q)$, then (9) holds automatically; hence we may assume that $x \notin S(Q)$. Let $x_0 = x$, and suppose that $x_0, \ldots, x_k$ have been selected from X such that
$$d\big(x_i, S(Q)\big) \le \rho^i d\big(x, S(Q)\big) \quad \text{and} \quad \tau \|x_{i-1} - x_i\| \le \gamma(x_{i-1}, \cdot) - [\gamma(x_i, \cdot)]_+, \quad 1 \le i \le k.$$
If $x_k \in S(Q)$, we set $x_{k+1} = x_k$. Otherwise, by inequalities (13) and (14), there exists $x_{k+1} \in X \setminus \{x_k\}$ such that
$$d\big(x_{k+1}, S(Q)\big) \le \rho\, d\big(x_k, S(Q)\big) \le \rho^{k+1} d\big(x, S(Q)\big)$$
and
$$\tau \|x_k - x_{k+1}\| \le \gamma(x_k, \cdot) - [\gamma(x_{k+1}, \cdot)]_+.$$
Hence, inductively we obtain a sequence $\{x_n\}$ such that $x_0 = x$ and
$$d\big(x_n, S(Q)\big) \le \rho^n d\big(x, S(Q)\big) \quad \text{and} \quad \tau \|x_{n-1} - x_n\| \le \gamma(x_{n-1}, \cdot) - [\gamma(x_n, \cdot)]_+, \quad \forall n \in \mathbb{N}.$$
Therefore,
$$\mathbb{E}_Q\big[[\gamma(x_n, \xi)]_+\big] \le \mathbb{E}_Q[\gamma(x_{n-1}, \xi)] - \tau \|x_{n-1} - x_n\|, \quad \forall n \in \mathbb{N}.$$
To proceed, we divide the proof into two cases. (i) Suppose that some elements of the sequence $\{x_n\}$ are contained in $S(Q)$; for convenience, let $x_{n_0}$ denote the first such term. Then $\mathbb{E}_Q[\gamma(x_n, \xi)] > 0$ for all $n = 1, 2, \ldots, n_0 - 1$ and $\big[\mathbb{E}_Q[\gamma(x_{n_0}, \xi)]\big]_+ = 0$. Hence,
$$\tau\, d\big(x, S(Q)\big) \le \tau \|x - x_{n_0}\| \le \tau \sum_{i=0}^{n_0 - 1} \|x_i - x_{i+1}\| \le \sum_{i=0}^{n_0 - 1} \big( \mathbb{E}_Q[\gamma(x_i, \xi)] - \mathbb{E}_Q[\gamma(x_{i+1}, \xi)] \big) \le \mathbb{E}_Q[\gamma(x_0, \xi)] = \big[\mathbb{E}_Q[\gamma(x, \xi)]\big]_+.$$
This shows that (9) holds. (ii) Assume that $x_n \notin S(Q)$ for all $n \in \mathbb{N}$. Then, $\mathbb{E}_Q[\gamma(x_n, \xi)] > 0$ for every $n \in \mathbb{N}$. Note that $x_0 = x$; it follows from inequality (15) that
$$\tau\, d\big(x, S(Q)\big) \le \tau\, d\big(x_n, S(Q)\big) + \tau \|x - x_n\| \le \tau \rho^n d\big(x, S(Q)\big) + \tau \sum_{i=0}^{n-1} \|x_i - x_{i+1}\| \le \tau \rho^n d\big(x, S(Q)\big) + \sum_{i=0}^{n-1} \big( \mathbb{E}_Q[\gamma(x_i, \xi)] - \mathbb{E}_Q[\gamma(x_{i+1}, \xi)] \big) \le \tau \rho^n d\big(x, S(Q)\big) + \mathbb{E}_Q[\gamma(x, \xi)] = \tau \rho^n d\big(x, S(Q)\big) + \big[\mathbb{E}_Q[\gamma(x, \xi)]\big]_+.$$
Then,
$$d\big(x, S(Q)\big) \le \frac{1}{\tau (1 - \rho^n)} \big[\mathbb{E}_Q[\gamma(x, \xi)]\big]_+, \quad \forall n \in \mathbb{N}.$$
Letting $n \to \infty$, we conclude that (9) holds, which completes the proof. □
With the help of the uniform error bound property established in Proposition 1, we obtain the following Lipschitz continuity of the feasible solution set with respect to the variation of the probability measure.
Theorem 1.
Assume that the conditions in Proposition 1 hold. Then,
$$\mathbb{H}\big(S(Q), S(Q')\big) \le \frac{1}{\tau} \mathcal{D}(Q, Q'), \quad \forall Q, Q' \in \mathcal{P}.$$
Proof. 
According to the assumptions, it follows from Proposition 1 that, for any $Q \in \mathcal{P}$, we have $S(Q) \ne \emptyset$ and (9) holds. Pick any $Q, Q' \in \mathcal{P}$ and $x \in S(Q')$; then $\mathbb{E}_{Q'}[\gamma(x, \xi)] \le 0$ and
$$d\big(x, S(Q)\big) \le \frac{1}{\tau} \big[\mathbb{E}_Q[\gamma(x, \xi)]\big]_+ \le \frac{1}{\tau} \big[\mathbb{E}_Q[\gamma(x, \xi)] - \mathbb{E}_{Q'}[\gamma(x, \xi)]\big]_+ \le \frac{1}{\tau} \big|\mathbb{E}_Q[\gamma(x, \xi)] - \mathbb{E}_{Q'}[\gamma(x, \xi)]\big| \le \frac{1}{\tau} \sup_{x \in X} \big|\mathbb{E}_Q[\gamma(x, \xi)] - \mathbb{E}_{Q'}[\gamma(x, \xi)]\big| = \frac{1}{\tau} \mathcal{D}(Q, Q'),$$
which indicates that $\mathbb{D}\big(S(Q'), S(Q)\big) \le \frac{1}{\tau} \mathcal{D}(Q, Q')$. Since Q and Q' are arbitrarily chosen from $\mathcal{P}$, we also have $\mathbb{D}\big(S(Q), S(Q')\big) \le \frac{1}{\tau} \mathcal{D}(Q, Q')$. Hence, (16) holds true. □
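The bound in Theorem 1 can also be checked numerically on a toy instance; the data below ($\gamma(x,\xi) = \xi - x$ on $X = [0,1]$ with two discrete distributions) are illustrative assumptions. For this instance the uniform error bound (9) holds with $\tau = 1$ directly, as in the example following Definition 3, which is all that the proof of Theorem 1 uses. The sketch computes $\mathbb{H}(S(Q), S(Q'))$ and $\mathcal{D}(Q, Q')$ by brute force on a grid.

import numpy as np

# Illustrative toy instance: gamma(x, xi) = xi - x on X = [0, 1],
# Q and Q' are discrete distributions on {0.2, 0.6, 0.9}.
X = np.linspace(0.0, 1.0, 2001)
xi_vals = np.array([0.2, 0.6, 0.9])
wQ  = np.array([0.5, 0.3, 0.2])   # weights of Q
wQp = np.array([0.2, 0.3, 0.5])   # weights of Q'

def E_gamma(w, x):
    # E[gamma(x, xi)] = E[xi] - x for the discrete distribution with weights w.
    return np.dot(w, xi_vals) - x

def feasible_set(w):
    return X[E_gamma(w, X) <= 0.0]

def hausdorff(A, B):
    d_AB = np.max([np.min(np.abs(B - a)) for a in A])
    d_BA = np.max([np.min(np.abs(A - b)) for b in B])
    return max(d_AB, d_BA)

SQ, SQp = feasible_set(wQ), feasible_set(wQp)
H = hausdorff(SQ, SQp)
D = np.max(np.abs(E_gamma(wQ, X) - E_gamma(wQp, X)))  # grid approximation of D(Q, Q')
print(H, D)  # both are approximately 0.21, consistent with H <= D / tau for tau = 1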
Now we are ready to state the first main result of this section. To this end, we define the growth function of problem (1) by $\psi_P : \mathbb{R}_+ \to \mathbb{R}$, i.e.,
$$\psi_P(t) := \min\big\{ f(x) - \vartheta(P) : d\big(x, S^*(P)\big) \ge t, \ x \in S(P) \big\}, \quad t \in \mathbb{R}_+.$$
Its inverse function is $\psi_P^{-1}(s) := \sup\{ t \in \mathbb{R}_+ : \psi_P(t) \le s \}$ for any $s \in \mathbb{R}$. For convenience, let $\varphi_P(s) := s + \psi_P^{-1}(2s)$.
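As a small illustration (with data chosen only for this purpose), suppose $S(P) = [0, 1]$ and $f(x) = x$, so that $\vartheta(P) = 0$ and $S^*(P) = \{0\}$. Then
$$\psi_P(t) = \min\{x : x \ge t, \ x \in [0,1]\} = t \ \ (t \in [0,1]), \qquad \psi_P^{-1}(s) = \min\{s, 1\} \ \ (s \ge 0), \qquad \varphi_P(s) = s + \min\{2s, 1\},$$
so in this case the residual term $\varphi_P$ appearing in Theorem 2 below grows linearly in the perturbation for small $s$.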
By virtue of Theorem 1, we can now deduce quantitative stability results for the optimal value function and the optimal solution set of the optimization model (2) when the probability measure is perturbed.
Theorem 2.
Assume that the conditions in Proposition 1 hold. Let X be a bounded subset of $\mathbb{R}^n$ and let f be Lipschitz continuous on X with constant $L > 0$. Then, we have the following inequalities:
$$|\vartheta(Q) - \vartheta(Q')| \le \frac{L}{\tau} \mathcal{D}(Q, Q'), \quad \forall Q, Q' \in \mathcal{P}$$
and
$$\mathbb{D}\big( S^*(Q), S^*(P) \big) \le \varphi_P\Big( \frac{L}{\tau} \mathcal{D}(Q, P) \Big), \quad \forall Q \in \mathcal{P}.$$
Proof. 
According to the assumptions, it follows from Proposition 1 that, for any $Q \in \mathcal{P}$, $S(Q)$ is a closed subset of X. Then, $S(Q)$ is compact, and it follows from the continuity of f that $S^*(Q) \ne \emptyset$. To show the first inequality, we pick $Q, Q' \in \mathcal{P}$, $x \in S^*(Q)$ and $x' \in S^*(Q') \subset S(Q')$. It follows from Theorem 1 that we can choose $\hat{x} \in S(Q)$ such that
$$\|\hat{x} - x'\| \le \frac{1}{\tau} \mathcal{D}(Q, Q').$$
Then, we have the following estimate:
$$\vartheta(Q) - \vartheta(Q') = f(x) - f(x') = f(x) - f(\hat{x}) + f(\hat{x}) - f(x') \le f(\hat{x}) - f(x') \le L \|\hat{x} - x'\| \le \frac{L}{\tau} \mathcal{D}(Q, Q').$$
Since Q and Q' are arbitrarily chosen and the pseudometric $\mathcal{D}(Q, Q')$ is symmetric, we also have $\vartheta(Q') - \vartheta(Q) \le \frac{L}{\tau} \mathcal{D}(Q, Q')$. Therefore, (17) holds.
For the second inequality, we pick an arbitrary $Q \in \mathcal{P}$ and $x \in S^*(Q)$. Using Theorem 1, we are able to select $\hat{x} \in S(P)$ such that $\|\hat{x} - x\| \le \frac{1}{\tau} \mathcal{D}(Q, P)$. Then, we have
$$\frac{2L}{\tau} \mathcal{D}(Q, P) \ge f(\hat{x}) - f(x) + \frac{L}{\tau} \mathcal{D}(Q, P) \ge f(\hat{x}) - f(x) + \vartheta(Q) - \vartheta(P) = f(\hat{x}) - \vartheta(P) \ge \psi_P\big( d(\hat{x}, S^*(P)) \big).$$
Therefore, $d\big(\hat{x}, S^*(P)\big) \le \psi_P^{-1}\big( \frac{2L}{\tau} \mathcal{D}(Q, P) \big)$. By the triangle inequality, we arrive at
$$d\big(x, S^*(P)\big) \le \|\hat{x} - x\| + d\big(\hat{x}, S^*(P)\big) \le \frac{L}{\tau} \mathcal{D}(Q, P) + \psi_P^{-1}\Big( \frac{2L}{\tau} \mathcal{D}(Q, P) \Big) = \varphi_P\Big( \frac{L}{\tau} \mathcal{D}(Q, P) \Big).$$
Since x is arbitrarily chosen from $S^*(Q)$, we conclude that inequality (18) holds. □
Remark 1.
In Theorems 1 and 2, the stability results for the optimization model (2) are obtained under the primal condition (8), which ensures the uniform error bound property of the constraint system (6). This supplements the Slater-type condition imposed for the stability analysis of the distributionally robust model in reference [2] (Theorems 4.3 and 4.4).

3.2. Stability Analysis of OPSGE (3)

In this subsection, we consider optimization model (3) with a set-valued stochastic generalized equation constraint of the following form:
$$0 \in \mathbb{E}_Q[\Gamma(x, \xi)] + G(x),$$
where $Q \in \mathcal{P}$ is a perturbation of the probability distribution P, $\Gamma : X \times \Xi \rightrightarrows Y$ and $G : X \rightrightarrows Y$ are closed set-valued mappings, $X \subset \mathbb{R}^n$, $Y \subset \mathbb{R}^d$, and $\xi : \Omega \to \Xi$ is a random vector defined on a probability space $(\Omega, \mathcal{F}, P)$ with support set $\Xi \subset \mathbb{R}^m$ and probability distribution P.
For convenience, we denote by $\tilde{S}(Q)$ the feasible solution set of (3), i.e.,
$$\tilde{S}(Q) := \{ x \in X : 0 \in \mathbb{E}_Q[\Gamma(x, \xi)] + G(x) \}.$$
Furthermore, let $\tilde{S}^*(Q)$ and $\tilde{\vartheta}(Q)$ denote the optimal solution set and the optimal value of model (3), respectively.
To establish quantitative stability results for OPSGE (3) as the probability distribution is perturbed, we need to define the following distance for probability measures induced by $\mathscr{F}$:
$$\tilde{\mathcal{D}}(Q, P) := \sup_{g \in \mathscr{F}} \big( \mathbb{E}_Q[g(\xi)] - \mathbb{E}_P[g(\xi)] \big), \quad \forall P, Q \in \mathcal{P},$$
where $\mathscr{F}$ consists of all functions generated by the support function $\sigma(\Gamma(x, \cdot), u)$ over the set $X \times \mathbb{B}_{\mathbb{R}^d}$, i.e.,
$$\mathscr{F} := \big\{ g(\cdot) : g(\xi) = \sigma(\Gamma(x, \xi), u) \ \text{for some } x \in X, \ \|u\| \le 1 \big\}.$$
It is well known that $\tilde{\mathcal{D}}$ is not a metric unless the set $\mathscr{F}$ is sufficiently rich. Furthermore, $\tilde{\mathcal{D}}$ is in general not symmetric (for more details, please refer to reference [28] and the references therein).
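A one-dimensional toy computation (the data are chosen only to illustrate the asymmetry) may help: take $d = 1$, $X \subset [0, +\infty)$ and $\Gamma(x, \xi) = [0, \xi + x]$ with $\xi \ge 0$, so that $\sigma(\Gamma(x,\xi), u) = [u]_+ (\xi + x)$ for $|u| \le 1$. Writing $m_Q := \mathbb{E}_Q[\xi]$, we obtain
$$\tilde{\mathcal{D}}(Q, P) = \sup_{x \in X, \ |u| \le 1} [u]_+ (m_Q - m_P) = [m_Q - m_P]_+, \qquad \tilde{\mathcal{D}}(P, Q) = [m_P - m_Q]_+,$$
which differ whenever $m_Q \ne m_P$.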
Theorem 3.
Let $P \in \mathcal{P}(\Omega)$, let $\Gamma : X \times \Xi \rightrightarrows Y$ and $G : X \rightrightarrows Y$ be defined as in (20), and let $\tilde{S}$ be defined as in (21) with $\mathrm{dom}(\tilde{S}) = \mathcal{P}$. Assume that $\kappa, \iota \in (0, +\infty)$ satisfy $\kappa \iota < 1$. Suppose that the following conditions hold:
(i) 
X and Y are compact subsets of $\mathbb{R}^n$ and $\mathbb{R}^d$, respectively;
(ii) 
$\Gamma$ takes convex set values in Y and is upper semicontinuous with respect to x for every $\xi \in \Xi$. Furthermore, $\Gamma$ is bounded by a P-integrable function $\rho(\xi)$ for $x \in X$;
(iii) 
$\Psi_P(x) = \mathbb{E}_P[\Gamma(x, \xi)]$ is metrically regular on $X \times Y$ with constant $\kappa$;
(iv) 
G is Lipschitz continuous on X with constant ι.
Then, the feasible solution set is closed and we have the following estimate for the variation of the feasible sets with constant $\frac{\kappa}{1 - \kappa \iota}$:
$$\mathbb{D}\big( \tilde{S}(Q), \tilde{S}(P) \big) \le \frac{\kappa}{1 - \kappa \iota} \tilde{\mathcal{D}}(Q, P), \quad \forall Q \in \mathcal{P}.$$
Proof. 
For any $x \in X$ and any sequence $\{x_k\} \subset X$ with $x_k \to x$, since $\Gamma$ is closed, upper semicontinuous and integrably bounded, it follows from reference [29] (Theorem 2.8) that
$$\limsup_{k \to \infty} \Psi_P(x_k) = \limsup_{k \to \infty} \mathbb{E}_P[\Gamma(x_k, \xi)] \subset \mathbb{E}_P\Big[ \limsup_{k \to \infty} \Gamma(x_k, \xi) \Big] \subset \mathbb{E}_P[\Gamma(x, \xi)] = \Psi_P(x).$$
Therefore, the mapping $\Psi_P$ is closed, and the same argument shows that $\Psi_Q$ is closed for each $Q \in \mathcal{P}$. Pick any $Q \in \mathcal{P}$ and any sequence $\{x_k\} \subset \tilde{S}(Q)$ with $x_k \to x$; then $0 \in \Psi_Q(x_k) + G(x_k)$, so for each $k \in \mathbb{N}$ there exists $y_k \in Y$ such that $y_k \in \Psi_Q(x_k)$ and $-y_k \in G(x_k)$. Since Y is compact, without loss of generality, we may assume that $y_k \to y \in Y$. By this and the closedness of $\Psi_Q$ and G, one has $y \in \Psi_Q(x)$ and $-y \in G(x)$. Then, $0 \in \Psi_Q(x) + G(x)$, i.e., $x \in \tilde{S}(Q)$. Hence, the feasible solution set is closed.
From assumptions (i) and (ii), it is easy to observe that $\Psi_P$ takes convex and compact set values in Y. Then, it follows from Lemma 2.5 that, for all $x \in X$ and $Q \in \mathcal{P}$,
$$\mathbb{D}\big( \Psi_Q(x), \Psi_P(x) \big) = \max_{\|u\| \le 1} \Big( \sigma\big( \mathbb{E}_Q[\Gamma(x, \xi)], u \big) - \sigma\big( \mathbb{E}_P[\Gamma(x, \xi)], u \big) \Big).$$
According to reference [30] (Proposition 3.4), we have
$$\mathbb{E}_Q[\sigma(\Gamma(x, \xi), u)] - \mathbb{E}_P[\sigma(\Gamma(x, \xi), u)] = \sigma\big( \mathbb{E}_Q[\Gamma(x, \xi)], u \big) - \sigma\big( \mathbb{E}_P[\Gamma(x, \xi)], u \big).$$
Then, it follows from the definition of $\tilde{\mathcal{D}}(Q, P)$ that
$$\mathbb{D}\big( \Psi_Q(x), \Psi_P(x) \big) \le \tilde{\mathcal{D}}(Q, P), \quad \forall x \in X, \ Q \in \mathcal{P}.$$
By assumptions (iii) and (iv), one has
$$d\big( x, \Psi_P^{-1}(y) \big) \le \kappa\, d\big( y, \Psi_P(x) \big), \quad \forall (x, y) \in X \times Y$$
and
$$\mathbb{H}\big( G(x), G(x') \big) \le \iota \|x - x'\|, \quad \forall x, x' \in X.$$
Since $\mathrm{dom}(\tilde{S}) = \mathcal{P}$, it follows from (25) that
$$\Psi_P^{-1}(y) \ne \emptyset, \quad \forall y \in Y,$$
and from (26) that
$$G(x) \ne \emptyset, \quad \forall x \in X.$$
Let $\varepsilon \in (0, 1)$ be sufficiently small such that
$$\kappa(\iota + \varepsilon) + \varepsilon < 1.$$
To show that (23) holds, we pick any $Q \in \mathcal{P}$ with $Q \ne P$ and keep it fixed; then it suffices to show that
$$\tilde{S}(Q) \subset \tilde{S}(P) + \frac{\kappa}{1 - \kappa \iota} \tilde{\mathcal{D}}(Q, P)\, \mathbb{B}_{\mathbb{R}^n}.$$
Let $x_0 \in \tilde{S}(Q)$, i.e., $x_0 \in X$ and $0 \in \Psi_Q(x_0) + G(x_0)$. Then, there exists $y_0 \in Y$ such that $y_0 \in \Psi_Q(x_0)$ and $-y_0 \in G(x_0)$. It follows from (24), (25) and (27) that there exists $x_1 \in \Psi_P^{-1}(y_0)$ such that
$$\|x_1 - x_0\| < d\big( x_0, \Psi_P^{-1}(y_0) \big) + \varepsilon \le \kappa\, d\big( y_0, \Psi_P(x_0) \big) + \varepsilon \le \kappa\, \mathbb{D}\big( \Psi_Q(x_0), \Psi_P(x_0) \big) + \varepsilon \le \kappa \tilde{\mathcal{D}}(Q, P) + \varepsilon.$$
If $x_1 = x_0$, then $y_0 \in \Psi_P(x_0)$, and then $x_0 \in \tilde{S}(P)$, so (30) holds automatically. Thus, let $x_1 \ne x_0$; then $\|x_1 - x_0\| > 0$. According to (26) and (28), there exists $y_1 \in Y$ with $-y_1 \in G(x_1)$ such that
$$\|y_1 - y_0\| < d\big( -y_0, G(x_1) \big) + \varepsilon \|x_1 - x_0\| \le \mathbb{H}\big( G(x_0), G(x_1) \big) + \varepsilon \|x_1 - x_0\| \le (\iota + \varepsilon) \|x_1 - x_0\|.$$
By induction, we construct sequences of points $x_k \in X$ and $y_k \in Y$ such that, for $k = 0, 1, 2, \ldots$,
$$x_{k+1} \in \Psi_P^{-1}(y_k) \quad \text{and} \quad -y_{k+1} \in G(x_{k+1}),$$
with
$$\|x_{k+1} - x_k\| \le \big( \kappa(\iota + \varepsilon) + \varepsilon \big)^k \|x_1 - x_0\| \quad \text{and} \quad \|y_{k+1} - y_k\| \le (\iota + \varepsilon) \|x_{k+1} - x_k\|.$$
According to (32), we see that $x_1$ and $y_1$ satisfy (33) and (34) with $k = 0$. Suppose that for some $n \ge 1$ we have generated $x_1, x_2, \ldots, x_n$ and $y_1, y_2, \ldots, y_n$ such that (33) and (34) hold. Our goal is to show that there exist $x_{n+1} \in X$ and $y_{n+1} \in Y$ which satisfy (33) and (34). If $x_n = x_{n-1}$, we set $x_{n+1} = x_n$ and $y_{n+1} = y_n$. Otherwise, since $x_n \in \Psi_P^{-1}(y_{n-1})$, it follows from (25) and (27) that there exists $x_{n+1} \in \Psi_P^{-1}(y_n)$ such that
$$\|x_{n+1} - x_n\| \le d\big( x_n, \Psi_P^{-1}(y_n) \big) + \varepsilon \|x_n - x_{n-1}\| \le \kappa\, d\big( y_n, \Psi_P(x_n) \big) + \varepsilon \|x_n - x_{n-1}\| \le \kappa \|y_n - y_{n-1}\| + \varepsilon \|x_n - x_{n-1}\|.$$
Then, by invoking the induction hypothesis (34), we have $\|y_n - y_{n-1}\| \le (\iota + \varepsilon) \|x_n - x_{n-1}\|$, and then
$$\|x_{n+1} - x_n\| \le \big( \kappa(\iota + \varepsilon) + \varepsilon \big) \|x_n - x_{n-1}\| \le \big( \kappa(\iota + \varepsilon) + \varepsilon \big)^n \|x_1 - x_0\|.$$
If $x_{n+1} = x_n$, we set $y_{n+1} = y_n$. If $x_{n+1} \ne x_n$, note that $-y_n \in G(x_n)$; according to (28), there exists $y_{n+1} \in Y$ with $-y_{n+1} \in G(x_{n+1})$ such that
$$\|y_{n+1} - y_n\| \le \mathbb{H}\big( G(x_{n+1}), G(x_n) \big) + \varepsilon \|x_{n+1} - x_n\| \le (\iota + \varepsilon) \|x_{n+1} - x_n\|.$$
This completes the induction step, and hence (33) and (34) hold true for all $k \in \mathbb{N}$.
By virtue of the inequalities for $x_k$ and $y_k$ in (34), we observe that, for any positive integers m and n,
$$\|x_{n+m} - x_n\| \le \sum_{k=n}^{n+m-1} \|x_{k+1} - x_k\| \le \sum_{k=n}^{n+m-1} \big( \kappa(\iota + \varepsilon) + \varepsilon \big)^k \|x_1 - x_0\| \le \frac{\big( \kappa(\iota + \varepsilon) + \varepsilon \big)^n}{1 - \big( \kappa(\iota + \varepsilon) + \varepsilon \big)} \|x_1 - x_0\|$$
and
$$\|y_{n+m} - y_n\| \le \sum_{k=n}^{n+m-1} \|y_{k+1} - y_k\| \le \sum_{k=n}^{n+m-1} (\iota + \varepsilon) \|x_{k+1} - x_k\| \le \frac{(\iota + \varepsilon)\big( \kappa(\iota + \varepsilon) + \varepsilon \big)^n}{1 - \big( \kappa(\iota + \varepsilon) + \varepsilon \big)} \|x_1 - x_0\|,$$
which indicates that both $\{x_k\}$ and $\{y_k\}$ are Cauchy sequences; hence $x_k \to x^*$ and $y_k \to y^*$ for some $x^*$ and $y^*$. Note that $(x_k, y_{k-1}) \in \mathrm{gph}(\Psi_P)$ and $(x_k, -y_k) \in \mathrm{gph}(G)$; since $\mathrm{gph}(\Psi_P)$ and $\mathrm{gph}(G)$ are closed, we obtain $(x^*, y^*) \in \mathrm{gph}(\Psi_P)$ and $(x^*, -y^*) \in \mathrm{gph}(G)$. This shows that $x^* \in \tilde{S}(P)$. Utilizing (31) and (34), we finally obtain that
$$d\big( x_0, \tilde{S}(P) \big) \le \|x_0 - x^*\| = \lim_{k \to \infty} \|x_k - x_0\| \le \sum_{k=0}^{\infty} \|x_{k+1} - x_k\| \le \frac{1}{1 - \big( \kappa(\iota + \varepsilon) + \varepsilon \big)} \|x_1 - x_0\| \le \frac{\kappa \tilde{\mathcal{D}}(Q, P) + \varepsilon}{1 - \big( \kappa(\iota + \varepsilon) + \varepsilon \big)}.$$
Taking the limit as $\varepsilon \to 0$ gives us (30), which completes the proof. □
Remark 2.
In reference [28], the authors provided a qualitative stability analysis of the feasible solution mapping under a metric regularity assumption on the set-valued mapping $\mathbb{E}_P[\Gamma(\cdot, \xi)] + G(\cdot)$. In contrast, Theorem 3 imposes metric regularity on the mapping $\mathbb{E}_P[\Gamma(\cdot, \xi)]$ and Lipschitz continuity on G, which does not necessarily guarantee the metric regularity of the mapping $\mathbb{E}_P[\Gamma(\cdot, \xi)] + G(\cdot)$. Furthermore, the proof of Theorem 3 is completely self-contained. Hence, Theorem 3 is a supplement to the results in reference [28] (Theorem 3.1 (iii)). In reference [31], the authors focused on the case where the mapping $\Gamma$ is single-valued and obtained similar stability results. Therefore, Theorem 3 also extends the results in reference [31] (Theorem 8).
With the help of Theorem 3, we are able to deduce the following quantitative stability results regarding the optimal value function.
Theorem 4.
Under the assumptions of Theorem 3, assume further that f is Lipschitz continuous on X with constant $L > 0$. Then, the optimal solution mapping $\tilde{S}^*$ of model (3) is compact valued, $\mathrm{dom}(\tilde{S}^*) = \mathcal{P}$, and
$$\tilde{\vartheta}(P) \le \tilde{\vartheta}(Q) + \frac{\kappa L}{1 - \kappa \iota} \tilde{\mathcal{D}}(Q, P), \quad \forall Q \in \mathcal{P}.$$
Proof. 
It is shown in Theorem 3 that the feasible solution set of model (3) is closed. Since X is compact, we know that $\tilde{S}(Q)$ is compact for all $Q \in \mathcal{P}$. Due to the continuity of f, we have $\tilde{S}^*(Q) \ne \emptyset$ for all $Q \in \mathcal{P}$; hence $\mathrm{dom}(\tilde{S}^*) = \mathcal{P}$. Let $Q \in \mathcal{P}$ and $\{x_k\} \subset \tilde{S}^*(Q)$ with $x_k \to x$; one has
$$\tilde{\vartheta}(Q) = f(x_k) = \lim_{k \to \infty} f(x_k) = f(x),$$
which indicates that $x \in \tilde{S}^*(Q)$. Taking into account the fact that X is compact, we ascertain that $\tilde{S}^*$ is compact valued.
To establish (35), we pick $Q \in \mathcal{P}$, $x \in \tilde{S}^*(Q)$ and $x' \in \tilde{S}^*(P)$. Note that $x \in \tilde{S}^*(Q) \subset \tilde{S}(Q)$; it follows from Theorem 3 that, for any $\varepsilon > 0$, we can choose $\hat{x} \in \tilde{S}(P)$ such that
$$\|\hat{x} - x\| \le \frac{\kappa}{1 - \kappa \iota} \tilde{\mathcal{D}}(Q, P) + \varepsilon.$$
Then, we have the following estimation:
$$\tilde{\vartheta}(P) - \tilde{\vartheta}(Q) = f(x') - f(x) = f(x') - f(\hat{x}) + f(\hat{x}) - f(x) \le f(\hat{x}) - f(x) \le L \|\hat{x} - x\| \le \frac{\kappa L}{1 - \kappa \iota} \tilde{\mathcal{D}}(Q, P) + L \varepsilon.$$
By letting $\varepsilon \to 0$, we arrive at (35), which completes the proof. □

4. Conclusions

The main contribution of this paper is twofold. First, we establish new primal sufficient conditions for the uniform error bound property of the constraint system (6) by utilizing Ekeland's Variational Principle. Based on these, we then study the quantitative stability of OPSC (2) and OPSGE (3), respectively, by estimating the variation of the optimal value function, the optimal solution set and the feasible solution set of the aforementioned optimization models when the underlying probability distribution is perturbed. The obtained results are new; they supplement and extend existing ones in the literature, and the proofs are completely self-contained.

Author Contributions

Conceptualization, W.O.; writing—original draft preparation, W.O. and K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 11801500 and the Basic Research Program of Yunnan Province, grant number 202301AT070080.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Claus, M.; Schultz, R. Lipschitzian properties and stability of a class of first-order stochastic dominance constraints. SIAM J. Optim. 2015, 25, 396–415.
  2. Chen, Z.; Jiang, J. Stability analysis of optimization problems with kth order stochastic and distributionally robust dominance constraints induced by full random recourse. SIAM J. Optim. 2018, 28, 1396–1419.
  3. Dentcheva, D.; Ruszczyński, A. Optimization with stochastic dominance constraints. SIAM J. Optim. 2003, 14, 548–566.
  4. Dentcheva, D.; Ruszczyński, A. Robust stochastic dominance and its application to risk-averse optimization. Math. Program. 2010, 123, 85–100.
  5. Guo, S.; Xu, H.; Zhang, L. Convergence analysis for mathematical programs with distributionally robust chance constraint. SIAM J. Optim. 2017, 27, 784–816.
  6. Hadar, J.; Russell, W.R. Rules for ordering uncertain prospects. Am. Econ. Rev. 1969, 59, 25–34.
  7. Pflug, G.C.; Pichler, A. Approximations for probability distributions and stochastic optimization problems. In Stochastic Optimization Methods in Finance and Energy; International Series in Operations Research & Management Science; Springer: New York, NY, USA, 2011; Volume 163, pp. 343–387.
  8. Wang, C.; Song, Y.; Zhang, F.; Zhao, Y. Exponential stability of a class of neutral inertial neural networks with multi-proportional delays and leakage delays. Mathematics 2023, 11, 2596.
  9. Xue, Y.; Han, J.; Tu, Z.; Chen, X. Stability analysis and design of cooperative control for linear delta operator system. AIMS Math. 2023, 8, 12671–12693.
  10. Zhao, Y.; Wang, L. Practical exponential stability of impulsive stochastic food chain system with time-varying delays. Mathematics 2023, 11, 147.
  11. Chen, X.; Fukushima, M. Expected residual minimization method for stochastic linear complementarity problems. Math. Oper. Res. 2005, 30, 1022–1038.
  12. Jia, Z.; Li, C. Almost sure exponential stability of uncertain stochastic Hopfield neural networks based on subadditive measures. Mathematics 2023, 11, 3110.
  13. Li, G.D.; Zhang, Y.; Guan, Y.J.; Li, W.J. Stability analysis of multi-point boundary conditions for fractional differential equation with non-instantaneous integral impulse. Math. Biosci. Eng. 2023, 20, 7020–7041.
  14. Ma, Z.; Yuan, S.; Meng, K.; Mei, S. Mean-square stability of uncertain delayed stochastic systems driven by G-Brownian motion. Mathematics 2023, 11, 2405.
  15. Ralph, D.; Xu, H. Asymptotic analysis of stationary points of sample average two-stage stochastic programs: A generalized equations approach. Math. Oper. Res. 2011, 36, 568–592.
  16. Ravat, U.; Shanbhag, U.V. On the characterization of solution sets of smooth and nonsmooth convex stochastic Nash games. SIAM J. Optim. 2011, 21, 1168–1199.
  17. Xia, M.; Liu, L.; Fang, J.; Zhang, Y. Stability analysis for a class of stochastic differential equations with impulses. Mathematics 2023, 11, 1541.
  18. Xu, H. Sample average approximation methods for a class of stochastic variational inequality problems. Asia-Pac. J. Oper. Res. 2010, 27, 103–119.
  19. Ng, K.F.; Zheng, X.Y. Global weak sharp minima on Banach spaces. SIAM J. Control Optim. 2003, 41, 1868–1885.
  20. Wu, Z.; Ye, J.J. Sufficient conditions for error bounds. SIAM J. Optim. 2001, 12, 421–435.
  21. Aubin, J.-P.; Frankowska, H. Set-Valued Analysis; Birkhäuser: Boston, MA, USA, 1990.
  22. Aumann, R.J. Integrals of set-valued functions. J. Math. Anal. Appl. 1965, 12, 1–12.
  23. Hess, C. Set-valued integration and set-valued probability theory: An overview. In Handbook of Measure Theory; Pap, E., Ed.; North-Holland: Amsterdam, The Netherlands, 2002; pp. 617–673.
  24. Robinson, S.M. Generalized equations. In Mathematical Programming: The State of the Art, Bonn 1982; Springer: Berlin/Heidelberg, Germany, 1983; pp. 346–367.
  25. Dontchev, A.L.; Rockafellar, R.T. Implicit Functions and Solution Mappings; Springer: Berlin/Heidelberg, Germany, 2009.
  26. Ekeland, I. On the variational principle. J. Math. Anal. Appl. 1974, 47, 324–353.
  27. Shapiro, A.; Dentcheva, D.; Ruszczyński, A. Lectures on Stochastic Programming: Modeling and Theory, 2nd ed.; MOS-SIAM Series on Optimization 16; SIAM: Philadelphia, PA, USA, 2014.
  28. Liu, Y.; Römisch, W.; Xu, H. Quantitative stability analysis of stochastic generalized equations. SIAM J. Optim. 2014, 24, 467–497.
  29. Hiai, F. Convergence of conditional expectations and strong laws of large numbers for multivalued random variables. Trans. Am. Math. Soc. 1985, 291, 603–627.
  30. Papageorgiou, N. On the theory of Banach space valued multifunctions. 1. Integration and conditional expectation. J. Multivar. Anal. 1985, 17, 185–206.
  31. Liu, Q.; Zhang, J.; Lin, S.; Zhang, L.W. Stability analysis of stochastic generalized equation via Brouwer's fixed point theorem. Math. Probl. Eng. 2018, 2018, 8680540.