Article

An Alternating Iteration Algorithm for a Parameter-Dependent Distributionally Robust Optimization Model

Shuang Lin, Jie Zhang and Nan Shi
1 Department of Basic Courses Teaching, Dalian Polytechnic University, Dalian 116034, China
2 School of Mathematics, Liaoning Normal University, Dalian 116029, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(7), 1175; https://doi.org/10.3390/math10071175
Submission received: 9 March 2022 / Revised: 24 March 2022 / Accepted: 30 March 2022 / Published: 4 April 2022

Abstract

Based on a successive convex programming method, an alternating iteration algorithm is proposed for solving a parameter-dependent distributionally robust optimization model. Under a Slater-type condition, the convergence analysis of the algorithm is obtained. When the objective function is convex, a modified algorithm is proposed and a less-conservative solution is obtained. Finally, numerical test results are presented to illustrate the efficiency of the algorithm.

1. Introduction

In stochastic programming, the random variables involved are usually assumed to follow a known distribution. In the real world, however, this distribution may be unknown or only partially known. The distributionally robust optimization (DRO) method is an effective way to handle such uncertain problems.
The study of the DRO method can be traced back to Scarf's early work [1], which addressed potential uncertainties in supply chain and inventory control. In the DRO approach, since historical data may not be sufficient to estimate the future distribution, a larger distribution set containing the true distribution is used to hedge against this ambiguity. The DRO model has been widely used in operations research, finance and management science; see [2,3,4,5,6] for recent developments and further research. However, in most DRO models the ambiguity set is independent of the decision variables.
Recently, Zhang, Xu and Zhang [7] proposed a parameter-dependent DRO model, where the probability distribution of the underlying random variables depends on the decision variables and the ambiguity set is defined through parametric moment conditions with generic cone constraints. Under Slater-type conditions, quantitative stability results were established for the parameter-dependent DRO. Using recent developments in variational theory, Royset and Wets [8] established convergence results for approximations of a class of DRO problems with decision-dependent ambiguity sets. Their discussion covers a variety of ambiguity sets, including moment-based and stochastic-dominance-based ones. Luo and Mehrotra [9] obtained reformulations for problems whose distributional ambiguity sets are defined by decision-dependent bounds on moments. Until recently, DRO with decision-dependent ambiguity sets has been an almost untouched research field. The few studies [7,8,9] on such problems are mostly theoretical, and algorithms for solving this class of DRO models have not been addressed.
In this paper, we propose an alternating iteration algorithm for solving the parameter-dependent DRO model in [7], and we propose a less-conservative solution strategy for a special case of it.
The main contributions of this paper can be summarized as follows. Firstly, we carry out convergence analysis for the alternating iteration algorithm. Under the Slater constraint qualification, we show that any cluster point of the sequence generated by the alternating iteration algorithm is an optimal solution of the parameter-dependent DRO. Note that the convergence proof of the successive convex programming method in [10] does not cover our setting, since the uncertainty set in Equation (1) depends on x; therefore, our convergence analysis can be seen as an extension of the proposition in [10]. Secondly, when the corresponding objective function is convex, a less-conservative DRO model is constructed and a modified algorithm is proposed for it. Finally, numerical experiments are carried out to show the efficiency of the algorithm.
The paper is organized as follows. Section 2 presents the structure of the algorithm for the parameter-dependent DRO and establishes its convergence. In Section 3, the modified algorithm is proposed for a special case of the DRO model and a less-conservative solution is obtained. In Section 4, numerical test results are presented to illustrate the less-conservative property of the solutions obtained by the modified algorithm.
Throughout the paper, we use the following notation. By convention, $\mathbb{R}^{n\times n}$ and $\mathcal{S}^{n\times n}$ denote the spaces of all $n\times n$ matrices and symmetric matrices, respectively. For a matrix $A\in\mathcal{S}^{n\times n}$, $A\preceq 0$ means that $A$ is a negative semidefinite symmetric matrix; $\|x\|$ denotes the Euclidean norm of a vector $x\in\mathbb{R}^n$. For a real-valued function $\varphi:\mathbb{R}^n\to\mathbb{R}$, $\nabla\varphi(x)$ denotes the gradient of $\varphi$ at $x$.

2. DRO Model and Its Algorithm

Consider the following distributionally robust optimization (DRO) problem:
$$(\mathrm{P})\qquad \min_{x}\ \sup_{P\in\mathcal{P}(x)}\ \mathbb{E}_P[f(x,\xi(\omega))]\quad \text{s.t.}\ x\in X, \qquad\qquad (1)$$
where $X$ is a compact subset of $\mathbb{R}^n$, $f:\mathbb{R}^n\times\mathbb{R}^k\to\mathbb{R}$ is a continuously differentiable function, $\xi:\Omega\to\Xi$ is a vector of random variables defined on the probability space $(\Omega,\mathcal{F},\mathbb{P})$ with support set $\Xi\subseteq\mathbb{R}^k$, for fixed $x\in X$, $\mathcal{P}(x)$ is a set of distributions which contains the true probability distribution of the random variable $\xi$, and $\mathbb{E}_P[\cdot]$ denotes the expected value with respect to the probability measure $P\in\mathcal{P}(x)$.
In this paper, we consider the case when $\mathcal{P}(x)$ is constructed through the moment condition
$$\mathcal{P}(x):=\left\{P\in\mathscr{P}:\ \mathbb{E}_P[\Psi(x,\xi(\omega))]\in K\right\}, \qquad\qquad (2)$$
where $\Psi$ is a random map whose components are measurable random vectors and/or matrices, $\mathscr{P}$ denotes the set of all probability distributions/measures on the space $(\Omega,\mathcal{F})$, and $K$ is a closed convex cone in a finite-dimensional vector and/or matrix space. If we consider $(\Xi,\mathcal{B})$ as a measurable space equipped with its Borel sigma algebra $\mathcal{B}$, then $\mathcal{P}(x)$ may be viewed as a set of probability measures defined on $(\Xi,\mathcal{B})$ induced by the random variate $\xi$. To ease notation, we will use $\xi$ to denote either the random vector $\xi(\omega)$ or an element of $\mathbb{R}^k$, depending on the context.
When $\Xi$ is a finite discrete set, that is, $\Xi=\{\xi^1,\ldots,\xi^N\}$ for some $N$, (2) can be written as
$$\mathcal{P}(x)=\left\{(p_1,\ldots,p_N):\ \sum_{j=1}^N p_j\,\Psi(x,\xi^j)\in K,\ \ p_j\ge 0,\ \ \sum_{j=1}^N p_j=1\right\}. \qquad\qquad (3)$$
In this section, we consider the DRO model (1) with P ( x ) defined by (3). In this case,
$$\mathbb{E}_P[f(x,\xi(\omega))]=\sum_{j=1}^N p_j\, f(x,\xi^j) \quad\text{and}\quad \mathbb{E}_P[\Psi(x,\xi(\omega))]=\sum_{j=1}^N p_j\,\Psi(x,\xi^j).$$
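In this discrete setting, the expectations above are finite weighted sums. The following minimal sketch (Python/NumPy rather than the Matlab used for the experiments in Section 4, with a hypothetical f and Ψ) evaluates them for a given weight vector p.

```python
import numpy as np

def discrete_expectations(p, xis, x, f, Psi):
    """E_P[f(x, xi)] and E_P[Psi(x, xi)] for P = (p_1, ..., p_N) on the scenarios xis."""
    f_vals = np.array([f(x, xi) for xi in xis])        # shape (N,)
    Psi_vals = np.array([Psi(x, xi) for xi in xis])    # shape (N, m) for a vector-valued Psi
    return p @ f_vals, p @ Psi_vals

# Toy usage with a hypothetical scalar f and two-dimensional Psi.
f = lambda x, xi: (x - xi) ** 2
Psi = lambda x, xi: np.array([xi - x, x - 2.0 * xi])
p = np.array([0.2, 0.5, 0.3])
xis = [1.0, 2.0, 3.0]
Ef, EPsi = discrete_expectations(p, xis, x=1.5, f=f, Psi=Psi)
```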
In [10], a successive convex programming (SCP) method is proposed for a max–min problem over a fixed compact set. However, the SCP method in [10] cannot be used to solve (1) directly, since $\mathcal{P}(x)$ in (1) depends on $x$.
Based on the SCP method, we propose an alternating iteration algorithm for solving (1). In the proposed algorithm, an optimal solution is obtained by alternately solving the inner maximization and outer minimization problems in (1). For convenience, let
$$C=\left\{(p_1,\ldots,p_N)\in\mathbb{R}^N:\ p_j\ge 0,\ j=1,2,\ldots,N,\ \ \sum_{j=1}^N p_j=1\right\}.$$
The alternating iteration algorithm is stated in Table 1. We know from the algorithm that if it stops in finitely many steps with $C_{k+1}=C_k$ or $v_k\le t_k$, then $x_k$ is an optimal solution of (1). In practice, problem (6) can be solved via its dual problem. In the case when an infinite sequence is produced, we use the following theorem to ensure the validity of the algorithm.
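To make the structure of the Algorithm in Table 1 concrete, the following minimal sketch (Python with NumPy/SciPy; the paper's own experiments use Matlab and SDPT3) implements the alternating loop for the discrete ambiguity set (3) under the simplifying assumptions that K is the nonpositive orthant (so the inner problem is a linear program), that X is a box, and that the outer step is handled by a general-purpose local minimizer applied to the worst-case value over the distributions collected so far. All functions and data below are hypothetical illustrations, not the implementation used in Section 4.

```python
import numpy as np
from scipy.optimize import linprog, minimize

def inner_max(x, xis, f, Psi):
    """Step 3: maximize sum_j p_j f(x, xi_j) over the weights p subject to
    sum_j p_j Psi(x, xi_j) <= 0 (K taken as the nonpositive orthant),
    p_j >= 0 and sum_j p_j = 1.  Returns the worst-case weights and the value v_k."""
    fv = np.array([f(x, xi) for xi in xis])
    A_ub = np.array([Psi(x, xi) for xi in xis]).T          # moment matrix, shape (m, N)
    res = linprog(-fv, A_ub=A_ub, b_ub=np.zeros(A_ub.shape[0]),
                  A_eq=np.ones((1, len(xis))), b_eq=[1.0], bounds=(0, None))
    return res.x, -res.fun

def outer_value(x, collected_P, xis, f, Psi):
    """Objective of Step 2: worst case over the collected distributions that satisfy
    the moment constraint at x; if none is feasible, the max over all is used."""
    f_vals = np.array([f(x, xi) for xi in xis])
    Psi_vals = np.array([Psi(x, xi) for xi in xis])
    vals = [p @ f_vals for p in collected_P]
    feas = [v for p, v in zip(collected_P, vals) if np.all(p @ Psi_vals <= 1e-9)]
    return max(feas) if feas else max(vals)

def alternating_iteration(x0, bounds, xis, f, Psi, tol=1e-6, max_iter=50):
    x = np.asarray(x0, dtype=float)
    p0, _ = inner_max(x, xis, f, Psi)                      # Step 1: C_0 = {P_hat}
    collected_P = [p0]
    for _ in range(max_iter):
        out = minimize(outer_value, x, args=(collected_P, xis, f, Psi),
                       bounds=bounds)                      # Step 2: outer minimization over X
        x, t_k = out.x, out.fun
        p_k, v_k = inner_max(x, xis, f, Psi)               # Step 3: inner maximization at x_k
        if v_k <= t_k + tol:                               # Step 5: stopping rule
            break
        collected_P.append(p_k)                            # Step 4: C_{k+1} = C_k U {P_k}
    return x

# Hypothetical one-dimensional example: quadratic loss, single moment constraint E_P[xi] <= 2x.
f = lambda x, xi: (x[0] - xi) ** 2
Psi = lambda x, xi: np.array([xi - 2.0 * x[0]])
x_opt = alternating_iteration([1.0], [(0.0, 3.0)], xis=[0.5, 1.0, 1.5, 2.0], f=f, Psi=Psi)
```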
We now introduce a notion that is used in the proof of the convergence of the Algorithm in Table 1. Let $P,Q\in\mathscr{P}$; the total variation metric between $P$ and $Q$ is defined as (see, e.g., page 270 in [11])
$$d_{TV}(P,Q):=\sup_{h\in\mathcal{H}}\left\{\mathbb{E}_P[h(\xi)]-\mathbb{E}_Q[h(\xi)]\right\},$$
where
$$\mathcal{H}:=\left\{h:\mathbb{R}^k\to\mathbb{R}\ :\ h\ \text{is}\ \mathcal{B}\text{-measurable},\ \sup_{\xi\in\Xi}|h(\xi)|\le 1\right\}.$$
Using the total variation metric, we can define the distance from a probability measure $Q\in\mathscr{P}$ to a set of probability measures $\mathcal{P}\subseteq\mathscr{P}$, that is,
$$d_{TV}(Q,\mathcal{P}):=\inf_{P\in\mathcal{P}}\ d_{TV}(Q,P).$$
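For two discrete distributions on the same N scenarios, the supremum in the definition of d_TV is attained at h(ξ^j) = sign(p_j − q_j), so the metric reduces to the ℓ1 distance between the weight vectors. A short sketch (Python/NumPy, hypothetical data):

```python
import numpy as np

def d_tv(p, q):
    """Total variation metric between two discrete distributions on a common support,
    using sup_{|h| <= 1} E_P[h] - E_Q[h] = ||p - q||_1."""
    return np.abs(np.asarray(p) - np.asarray(q)).sum()

def d_tv_to_set(q, P_set):
    """Distance from q to a finite collection of distributions: inf over the collection."""
    return min(d_tv(q, p) for p in P_set)

print(d_tv([0.2, 0.5, 0.3], [0.3, 0.4, 0.3]))   # 0.2
```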
We next provide the convergence of the Algorithm in Table 1.
Theorem 1.
Let $\{x_n\}$ be a sequence generated by the Algorithm in Table 1 and let $x_0$ be a cluster point of this sequence. If (a) $(x,P)\mapsto\mathbb{E}_P[f(x,\xi)]$ and $(x,P)\mapsto\mathbb{E}_P[\Psi(x,\xi)]$ are both continuous on $X\times C$, (b) for every $x\in X$, $f(x,\cdot)$ and $\Psi(x,\cdot)$ are finite valued and continuous on $\Xi$, and (c) $0\in\operatorname{int}\{\mathbb{E}_P[\Psi(x_0,\xi)]-K : P\in C\}$, then $x_0$ is an optimal solution of problem (1).
Proof. 
Since $\{C_n\}$ is an increasing sequence of sets and $C$ is a compact set, we have $\lim_{n\to\infty}C_n=\mathrm{cl}\left[\bigcup_{n=1}^{\infty}C_n\right]:=C_+$. Since $x_0$ is a cluster point of $\{x_n\}$, there exists a subsequence of $\{x_n\}$ converging to $x_0$. Without loss of generality and for simplicity, we assume that $x_0$ is the limit of $\{x_n\}$. We know from Step 2 of the algorithm that $x_n$ is an optimal solution of
$$\min_{x}\ \sup_{P\in\mathcal{P}(x)\cap C_n}\ \mathbb{E}_P[f(x,\xi(\omega))]\quad \text{s.t.}\ x\in X. \qquad\qquad (8)$$
Let $\hat S_n(x)$ and $\hat v_n(x)$ denote the optimal solution set and the optimal value of
$$\sup_{P\in\mathcal{P}_n(x)}\ \mathbb{E}_P[f(x,\xi)],$$
respectively, and let $\hat S(x)$ and $\hat v(x)$ denote the optimal solution set and the optimal value of
$$\sup_{P\in\mathcal{P}(x)\cap C_+}\ \mathbb{E}_P[f(x,\xi)],$$
respectively. Then we have from (8) that
$$\hat v_n(x_n)\le \hat v_n(x)\quad \text{for any } x\in X. \qquad\qquad (9)$$
We proceed with the rest of the proof in three steps.
Step 1. We first show that
$$\lim_{n\to\infty}\hat v_n(x_n)=\hat v(x_0). \qquad\qquad (10)$$
Let $\hat P_n\in\hat S_n(x_n)$. By the compactness of $C_+$, $\{\hat P_n\}$ has cluster points. Assume that $\hat P^*$ is a cluster point of $\{\hat P_n\}$; then there exists a subsequence $\{n_k\}\subseteq\{n\}$ such that $\hat P_{n_k}$ converges to $\hat P^*$ weakly as $k\to\infty$ and $\hat P^*\in C_+$. Under conditions (a) and (b), we have
$$\hat v_{n_k}(x_{n_k})=\mathbb{E}_{\hat P_{n_k}}[f(x_{n_k},\xi)]\ \to\ \mathbb{E}_{\hat P^*}[f(x_0,\xi)]\ \le\ \hat v(x_0)$$
as $k\to\infty$. Hence, we have
$$\limsup_{n\to\infty}\hat v_n(x_n)\le\hat v(x_0). \qquad\qquad (11)$$
Since $\mathcal{P}_n(x_n)=\mathcal{P}(x_n)\cap C_n\neq\emptyset$, we have from condition (a) that $\mathcal{P}(x_0)\cap C_+\neq\emptyset$, which means that $\hat S(x_0)\neq\emptyset$. Let $P^*\in\hat S(x_0)$; we next show that there exists a sequence $\{\hat P_n\}$ with $\hat P_n\in\mathcal{P}_n(x_n)$ such that $\hat P_n$ converges to $P^*$ weakly as $n\to\infty$. Under conditions (b) and (c), we know from Theorem 2.1 in [7] that there exist positive constants $\gamma$ and $\nu\in(0,1)$ such that
$$d_{TV}(Q,\mathcal{P}(x_n))\le\gamma\|x_n-x_0\|^{\nu}$$
for all $Q\in\mathcal{P}(x_0)$ and all $n$ large enough, which means that, for $P^*\in\hat S(x_0)$,
$$d_{TV}(P^*,\mathcal{P}(x_n)\cap C_n)\le d_{TV}(P^*,\mathcal{P}(x_n))+d_{TV}(P^*,C_n)\le\gamma\|x_n-x_0\|^{\nu}+d_{TV}(P^*,C_n) \qquad\qquad (12)$$
for $n$ large enough. Let $\hat P_n=\Pi_{\mathcal{P}(x_n)\cap C_n}(P^*)$ denote the projection of $P^*$ onto $\mathcal{P}(x_n)\cap C_n$; then, by (12), $\hat P_n$ converges to $P^*$ weakly as $n\to\infty$. Consequently, under condition (b),
$$\hat v_n(x_n)\ge\mathbb{E}_{\hat P_n}[f(x_n,\xi)]\ \to\ \mathbb{E}_{P^*}[f(x_0,\xi)]=\hat v(x_0)$$
as $n\to\infty$, and hence
$$\liminf_{n\to\infty}\hat v_n(x_n)\ge\hat v(x_0). \qquad\qquad (13)$$
Combining (11) and (13), we have that $\hat v_n(x_n)$ converges to $\hat v(x_0)$ as $n\to\infty$.
Step 2. We next show that, for any fixed $x\in X$,
$$\lim_{n\to\infty}\hat v_n(x)=\hat v(x). \qquad\qquad (14)$$
Since $\lim_{n\to\infty}C_n=C_+$, we have $\lim_{n\to\infty}\mathcal{P}(x)\cap C_n=\mathcal{P}(x)\cap C_+$. Then, under conditions (a) and (b) and similarly to the proof of Step 1, $\hat v_n(x)$ converges to $\hat v(x)$ as $n\to\infty$.
Step 3. Combining (9), (10) and (14), we have
$$\hat v(x_0)\le\hat v(x)\quad\text{for any } x\in X, \qquad\qquad (15)$$
which means that $x_0$ is an optimal solution of
$$\min_{x}\ \sup_{P\in\mathcal{P}(x)\cap C_+}\ \mathbb{E}_P[f(x,\xi(\omega))]\quad \text{s.t.}\ x\in X.$$
By Step 3 of the algorithm, we have
$$\sup_{P\in\mathcal{P}(x_n)\cap C_+}\mathbb{E}_P[f(x_n,\xi)]\ \le\ \sup_{P\in\mathcal{P}(x_n)\cap C}\mathbb{E}_P[f(x_n,\xi)]\ =\ \mathbb{E}_{\hat P_{n+1}}[f(x_n,\xi)]\ \le\ \sup_{P\in\mathcal{P}(x_n)\cap C_+}\mathbb{E}_P[f(x_n,\xi)],$$
which means that
$$\sup_{P\in\mathcal{P}(x_n)\cap C_+}\mathbb{E}_P[f(x_n,\xi)]=\sup_{P\in\mathcal{P}(x_n)\cap C}\mathbb{E}_P[f(x_n,\xi)].$$
Then, by the arguments in Step 1, letting $n\to\infty$, we have $\hat v(x_0)=\sup_{P\in\mathcal{P}(x_0)\cap C}\mathbb{E}_P[f(x_0,\xi)]$. Consequently, by (15), we have
$$\hat v(x_0)\le\hat v(x)\le\sup_{P\in\mathcal{P}(x)\cap C}\mathbb{E}_P[f(x,\xi)]$$
for all $x\in X$. Therefore, $x_0$ is an optimal solution of (1). □
Remark 1.
In [10], the convergence of the SCP method is proved without any constraint qualification. In our proof, however, the uncertainty set in (1) depends on $x$, and the Slater-type condition is needed to make the argument work. We can see from the above proof that if the uncertainty set in (1) does not depend on $x$, the Slater condition can be omitted. Therefore, our convergence analysis can be seen as an extension of the proposition in [10].

3. Less Conservative Model and a Modified Algorithm

In this section, we consider a special case of (1) and provide a less-conservative model.
In the case when $\Xi=\{\xi^1,\ldots,\xi^N\}$ and the ambiguity set is
$$\mathcal{P}(x):=\left\{P\in\mathscr{P}:\ \mathbb{E}_P[\xi-\mu_0]^T\,\Sigma_0^{-1}\,\mathbb{E}_P[\xi-\mu_0]\le\gamma_1,\ \ \mathbb{E}_P\big[(\xi-\mu_0)(\xi-\mu_0)^T\big]-\gamma_2\Sigma_0\preceq 0\right\},$$
where $\gamma_1$ and $\gamma_2$ are nonnegative constants, $\mu_0\in\mathbb{R}^k$ and $\Sigma_0\in\mathcal{S}^{k\times k}$ is positive semidefinite, model (1) becomes the following problem:
$$\begin{array}{cl}\displaystyle\min_{x\in X}\ \max_{(p_1,\ldots,p_N)\in\mathbb{R}^N} & \mathbb{E}_P[f(x,\xi)]\\[3pt] \text{s.t.} & \displaystyle\sum_{j=1}^N p_j\,g_1(\xi^j)\preceq 0,\quad \sum_{j=1}^N p_j\,g_2(\xi^j)\preceq 0,\\[3pt] & p_j\ge 0,\ j=1,\ldots,N,\quad \displaystyle\sum_{j=1}^N p_j=1,\end{array} \qquad\qquad (18)$$
where
$$g_1(\xi)=-\begin{pmatrix}\Sigma_0 & \mu_0-\xi\\ (\mu_0-\xi)^T & \gamma_1\end{pmatrix}\quad\text{and}\quad g_2(\xi)=(\xi-\mu_0)(\xi-\mu_0)^T-\gamma_2\Sigma_0.$$
This model has been investigated in [2]. As shown in [2], the constraints in (18) require that the mean of $\xi$ lies in an ellipsoid of size $\gamma_1$ centered at the estimate $\mu_0$, and that the centered second-moment matrix of $\xi$ lies in a positive semidefinite cone defined by a matrix inequality.
However, under the constraints of (18), not every $\xi^j$ lies in the ellipsoid of size $\gamma_1$ centered at the estimate $\mu_0$. In practice, we may only be interested in the scenarios $\xi^j$ that lie in the ellipsoid and may wish to omit the ones outside it. Consequently, we propose a less-conservative DRO model, namely
$$\begin{array}{cl}\displaystyle\min_{x\in X}\ \max_{(p_1,\ldots,p_N)\in\mathbb{R}^N} & \mathbb{E}_P[f(x,\xi)]\\[3pt] \text{s.t.} & p_j\,g_1(\xi^j)\preceq 0,\quad p_j\,g_2(\xi^j)\preceq 0,\\[3pt] & p_j\ge 0,\ j=1,\ldots,N,\quad \displaystyle\sum_{j=1}^N p_j=1.\end{array} \qquad\qquad (19)$$
In the above model, if $\xi^j$ does not lie in the ellipsoid of size $\gamma_1$ centered at the estimate $\mu_0$, or does not satisfy the matrix inequality $g_2(\xi^j)\preceq 0$, then the corresponding constraints force $p_j=0$, so that scenario is effectively removed. Moreover, we can choose $\gamma_1$ and $\gamma_2$ such that the feasible set of the inner problem is nonempty; for example, for the first constraint, we may let $\gamma_1=\max\{(\xi^j-\mu_0)^T\Sigma_0^{-1}(\xi^j-\mu_0): j=1,2,\ldots,N\}$. Compared with model (18), model (19) is less conservative, since the feasible set of its inner maximization problem is smaller.
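The scenario-wise constraints of (19) can be checked directly: a scenario ξ^j can carry positive probability only if it lies in the γ1-ellipsoid around μ0 and satisfies g_2(ξ^j) ⪯ 0. The sketch below (Python/NumPy, with hypothetical data) computes the choice of γ1 suggested above and flags the admissible scenarios; the matrix inequality is checked through the largest eigenvalue.

```python
import numpy as np

def admissible_scenarios(xis, mu0, Sigma0, gamma2):
    """Return gamma1 = max_j (xi_j - mu0)' Sigma0^{-1} (xi_j - mu0) and, for each scenario,
    whether it satisfies both the ellipsoid condition and g_2(xi_j) <= 0."""
    Sigma0_inv = np.linalg.inv(Sigma0)
    d = np.array([(xi - mu0) @ Sigma0_inv @ (xi - mu0) for xi in xis])
    gamma1 = d.max()                 # makes the ellipsoid constraint feasible for every scenario
    in_ellipsoid = d <= gamma1 + 1e-12
    psd_ok = np.array([
        np.linalg.eigvalsh(np.outer(xi - mu0, xi - mu0) - gamma2 * Sigma0).max() <= 1e-12
        for xi in xis
    ])
    return gamma1, in_ellipsoid & psd_ok

# Hypothetical two-dimensional scenarios.
rng = np.random.default_rng(0)
xis = rng.lognormal(mean=2.0, sigma=0.3, size=(20, 2))
mu0, Sigma0 = xis.mean(axis=0), np.cov(xis, rowvar=False)
gamma1, keep = admissible_scenarios(xis, mu0, Sigma0, gamma2=1.1)
```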
Let Q be a set of probability distributions defined as
$$\mathcal{Q}=\left\{(p_1,\ldots,p_N)\in\mathbb{R}^N:\ p_j\,g_i(\xi^j)\preceq 0,\ \ \sum_{j=1}^N p_j=1,\ \ p_j\ge 0,\ j=1,\ldots,N,\ i=1,2\right\}. \qquad\qquad (20)$$
Next, we give a modified alternating iteration algorithm for (19); it is stated in Table 2.
The algorithm in Table 2 is based on the algorithm of Pflug and Wozabal [10] for solving a distributionally robust investment problem and the cutting-plane algorithm of Kelley [12] for solving convex optimization problems. A similar algorithm was used by Xu et al. [5] to solve a different DRO model, where the proof of convergence was omitted. In the following, we provide a convergence analysis of the modified alternating iteration algorithm based on Theorem 1.
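Since every cut in Step 2 of Table 2 is affine in (x, t), the outer problem of the modified algorithm reduces to a linear program whenever X is a box. The following sketch (Python/SciPy; the paper itself solves its subproblems with SDPT3 in Matlab, so this is only an illustration with hypothetical functions) assembles and solves that outer step for a collection of stored weight vectors, with all cuts linearized at the current iterate x_k as in Table 2.

```python
import numpy as np
from scipy.optimize import linprog

def outer_lp(xk, stored_P, xis, f, grad_f, box):
    """Step 2 of Table 2: min_{x in X, t} t subject to one cut per stored weight vector P,
       sum_j p_j [ f(x_k, xi_j) + grad_f(x_k, xi_j)' (x - x_k) ] <= t."""
    xk = np.asarray(xk, dtype=float)
    n = len(xk)
    fk = np.array([f(xk, xi) for xi in xis])               # f(x_k, xi_j), shape (N,)
    gk = np.array([grad_f(xk, xi) for xi in xis])          # gradients, shape (N, n)
    A_ub, b_ub = [], []
    for p in stored_P:
        p = np.asarray(p, dtype=float)
        a = p @ gk                                         # aggregated gradient of the cut
        A_ub.append(np.concatenate([a, [-1.0]]))           # rearranged as a'x - t <= a'x_k - p'f_k
        b_ub.append(a @ xk - p @ fk)
    c = np.concatenate([np.zeros(n), [1.0]])               # minimize the epigraph variable t
    bounds = list(box) + [(None, None)]                    # box for x, free t
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[:n], res.x[n]

# Usage inside the loop of Table 2 (hypothetical data):
# x_next, t_next = outer_lp(x_k, stored_P, scenarios, f, grad_f, box=[(0.0, 10.0)] * len(x_k))
```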
Theorem 2.
Let $\{x_n\}$ be a sequence generated by the Algorithm in Table 2 and let $x_0$ be a limit point of this sequence. If, for each $\xi\in\Xi$, $f(\cdot,\xi)$ is continuously differentiable and convex on $X$, then $x_0$ is an optimal solution of problem (19).
Proof. 
The proof is similar to the proof of Theorem 1. Since $\{\mathcal{Q}_n\}$ is an increasing sequence of sets and $\mathcal{Q}$ is a compact set, we have $\lim_{n\to\infty}\mathcal{Q}_n=\mathrm{cl}\left[\bigcup_{n=1}^{\infty}\mathcal{Q}_n\right]:=\mathcal{Q}_+$.
Let $\hat S_n(x)$ and $\hat v_n(x)$ denote the optimal solution set and the optimal value of
$$\sup_{(p_1,\ldots,p_N)\in\mathcal{Q}_n}\ \sum_{j=1}^N p_j\big[f(x_{n-1},\xi^j)+\nabla_x f(x_{n-1},\xi^j)^T(x-x_{n-1})\big],$$
respectively, and let $\hat S(x)$ and $\hat v(x)$ denote the optimal solution set and the optimal value of
$$\sup_{(p_1,\ldots,p_N)\in\mathcal{Q}_+}\ \sum_{j=1}^N p_j\, f(x,\xi^j),$$
respectively. Then we have
$$\hat v_n(x_n)\le\hat v_n(x)\quad\text{for any } x\in X.$$
Let $(p_1^n,\ldots,p_N^n)\in\hat S_n(x_n)$. By the compactness of $\mathcal{Q}_+$, $\{(p_1^n,\ldots,p_N^n)\}$ has cluster points. Assume that $(p_1^*,\ldots,p_N^*)$ is a cluster point of $\{(p_1^n,\ldots,p_N^n)\}$; then there exists a subsequence $\{n_k\}\subseteq\{n\}$ such that $(p_1^{n_k},\ldots,p_N^{n_k})$ converges to $(p_1^*,\ldots,p_N^*)$ as $k\to\infty$ and $(p_1^*,\ldots,p_N^*)\in\mathcal{Q}_+$. Then, by the convexity of $f(\cdot,\xi)$, we have
$$\hat v_{n_k}(x_{n_k})=\sum_{j=1}^N p_j^{n_k}\big[f(x_{n_k-1},\xi^j)+\nabla_x f(x_{n_k-1},\xi^j)^T(x_{n_k}-x_{n_k-1})\big]\ \le\ \sum_{j=1}^N p_j^{n_k} f(x_{n_k},\xi^j)\ \to\ \sum_{j=1}^N p_j^{*}\, f(x_0,\xi^j)\ \le\ \hat v(x_0)$$
as $k\to\infty$. Hence, we have
$$\limsup_{n\to\infty}\hat v_n(x_n)\le\hat v(x_0). \qquad\qquad (24)$$
On the other hand, for $(p_1^*,\ldots,p_N^*)\in\hat S(x_0)$ we have $(p_1^*,\ldots,p_N^*)\in\mathcal{Q}_+$, which means that there exist $(p_1^n,\ldots,p_N^n)\in\mathcal{Q}_n$ such that
$$(p_1^n,\ldots,p_N^n)\ \to\ (p_1^*,\ldots,p_N^*)$$
as $n\to\infty$. Therefore, we have
$$\hat v_n(x_n)\ \ge\ \sum_{j=1}^N p_j^{n}\big[f(x_{n-1},\xi^j)+\nabla_x f(x_{n-1},\xi^j)^T(x_n-x_{n-1})\big]\ \to\ \sum_{j=1}^N p_j^{*}\, f(x_0,\xi^j)=\hat v(x_0) \qquad\qquad (25)$$
as $n\to\infty$. Combining (24) and (25), we obtain
$$\lim_{n\to\infty}\hat v_n(x_n)=\hat v(x_0).$$
The rest of the proof follows the proof of Theorem 1. □
Remark 2.
Notice that the Slater condition is not used in this proof: since the ambiguity set here does not depend on $x$, the Slater condition can be omitted.

4. Numerical Tests

In this section, we discuss the numerical performance of the proposed alternating iteration algorithm for solving (18) and (19). We do so by applying the algorithm to a newsvendor problem [4] and providing a comparative analysis of the numerical results.
Suppose that a news vendor trades in $j=1,\ldots,n$ products and has to decide the order quantity $x_j$ of product $j$ to meet the uncertain demand $\xi_j$. Before knowing the demand $\xi_j$, the news vendor orders $x_j$ units of product $j$ at the wholesale price $c_j>0$. Once the demand $\xi_j$ is known, the quantity $\min\{x_j,\xi_j\}$ is sold at the retail price $v_j$. Any unsold stock $(x_j-\xi_j)_+$ is cleared at the salvage price $h_j$, and any unsatisfied demand $(\xi_j-x_j)_+$ is lost. The total loss of the news vendor can be described as a function of the order decision $x:=(x_1,\ldots,x_n)$:
$$L(x,\xi)=c^T x-v^T\min(x,\xi)-h^T(x-\xi)_+=(c-v)^T x+(v-h)^T(x-\xi)_+ ,$$
where the minimum and non-negative-part operators are applied componentwise. We study a risk-averse version of the newsvendor problem through the following two models:
$$(H_1)\qquad \min_{x\in X}\ \sup_{P\in\mathcal{P}}\ \mathbb{E}_P[U(L(x,\xi))],$$
and
$$(H_2)\qquad \min_{x\in X}\ \sup_{P\in\mathcal{Q}}\ \mathbb{E}_P[U(L(x,\xi))],$$
where $U(w):=e^{w/10}$ is an exponential utility function,
$$\mathcal{P}=\left\{(p_1,\ldots,p_N)\in\mathbb{R}^N:\ \sum_{j=1}^N p_j\,g_i(\xi^j)\preceq 0,\ \ \sum_{j=1}^N p_j=1,\ \ p_j\ge 0,\ j=1,\ldots,N,\ i=1,2\right\},$$
and $\mathcal{Q}$ is defined as in (20). Notice that, for the newsvendor problem, problems (18) and (19) are exactly $(H_1)$ and $(H_2)$, respectively.
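For reference, the loss L(x, ξ) and the exponential utility that define (H1) and (H2) can be evaluated directly; the snippet below (Python/NumPy, illustration only) mirrors the formula for L(x, ξ) given above and evaluates the inner objective for a fixed discrete weight vector.

```python
import numpy as np

def newsvendor_loss(x, xi, c, v, h):
    """L(x, xi) = (c - v)'x + (v - h)'(x - xi)_+, with componentwise operations."""
    return (c - v) @ x + (v - h) @ np.maximum(x - xi, 0.0)

def utility(w):
    """Exponential utility U(w) = exp(w / 10) applied to the loss."""
    return np.exp(w / 10.0)

def inner_objective(x, p, scenarios, c, v, h):
    """E_P[U(L(x, xi))] for a discrete distribution p over the demand scenarios."""
    return sum(pj * utility(newsvendor_loss(x, xi, c, v, h))
               for pj, xi in zip(p, scenarios))
```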
The data are generated as follows: for the $j$-th product, the wholesale, retail and salvage prices are $c_j=0.1(5+j-1)$, $v_j=0.15(5+j-1)$ and $h_j=0.05(5+j-1)$, respectively; the product demand vector $\xi$ follows a multivariate log-normal distribution with mean $\mu=(\mu_1,\ldots,\mu_n)$, $\mu_j=2$, $j=1,\ldots,n$. In the execution of the algorithm, we use the ambiguity set $\mathcal{Q}$ in (20) with $\gamma_1=0.1$ and $\gamma_2=1.1$; the estimates $\mu_0$ and $\Sigma_0$ of the mean and covariance matrix are computed from computer-generated samples. The experiments are carried out in Matlab 2016 on a Dell notebook with the Windows 7 operating system and an Intel Core i5 processor. The SDP subproblems in the algorithms are solved by the Matlab solver SDPT3-4.0 [13].
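A sketch of the data generation just described is given below (Python/NumPy instead of the Matlab setup used in the paper; the sample size N and the dispersion of the log-normal are assumptions, since the text only fixes the mean parameter).

```python
import numpy as np

def generate_data(n, N, seed=0):
    """Prices and demand scenarios for the newsvendor test problem."""
    j = np.arange(1, n + 1)
    c = 0.10 * (5 + j - 1)                 # wholesale prices
    v = 0.15 * (5 + j - 1)                 # retail prices
    h = 0.05 * (5 + j - 1)                 # salvage prices
    rng = np.random.default_rng(seed)
    # Multivariate log-normal demand with mean parameter mu_j = 2 (dispersion assumed).
    xi = np.exp(rng.multivariate_normal(mean=np.full(n, 2.0), cov=0.25 * np.eye(n), size=N))
    mu0 = xi.mean(axis=0)                  # moment estimates used in the ambiguity set
    Sigma0 = np.cov(xi, rowvar=False)
    return c, v, h, xi, mu0, Sigma0

c, v, h, scenarios, mu0, Sigma0 = generate_data(n=4, N=50)
```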
The computational results are shown in Table 3 and Table 4 and in Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5. In Table 3 and Table 4, we report the average CPU time (Time (s)), the number of iterations (Iter) and the optimal value (Optimal Value) of each test problem for different sample sizes.
From Table 3 and Table 4 and Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5, we can see that both $(H_1)$ and $(H_2)$ can be solved by the alternating iteration algorithm. We also see from Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5 that the number of iterations and the time needed to solve $(H_1)$ are, on the whole, larger than those needed to solve $(H_2)$. Moreover, the optimal values of $(H_2)$ are smaller than those of $(H_1)$. Since a DRO model is usually used to provide an upper bound for an uncertain optimization problem, the smaller the optimal value of the DRO model, the less conservative the model is. Therefore, $(H_2)$ is a less conservative DRO model. However, according to Figure 3, $(H_1)$ is more robust than $(H_2)$, because the curve for $(H_1)$ is more stable.
The numerical results show that, in order to obtain a conservative estimate of the total loss in the newsvendor problem, solving the DRO model $(H_2)$ by the alternating iteration algorithm usually performs better than solving the DRO model $(H_1)$. However, in our observations, when we focus only on robustness, the DRO model $(H_1)$ may be the better choice. The source codes are available at https://pan.baidu.com/s/1dSmMUynZqi5LzWgn6aUUoQ?pwd=xn44 (accessed on 25 January 2022).

5. Conclusions

In this paper, we carry out convergence analysis for an alternating iteration algorithm for a distributionally robust optimization problem in which the ambiguity set depends on the decision variables. Convergence of the alternating iteration algorithm is obtained under a Slater-type condition, which can be seen as an extension of the result in [10]. When the objective function is convex, a modified alternating iteration algorithm is proposed for obtaining a less-conservative solution of the DRO model, and its convergence analysis is established. Finally, we discuss the numerical performance of the proposed alternating iteration algorithm for obtaining a conservative estimate of the total loss in the newsvendor problem. A similar analysis can be undertaken when the ambiguity set in the DRO model is constructed in other ways, such as through the Kullback–Leibler divergence [14] or the Wasserstein metric [15,16]. We leave these for future research as they are beyond the focus of this paper.

Author Contributions

Conceptualization, S.L. and J.Z.; methodology, S.L.; software, N.S.; validation, J.Z., S.L. and N.S.; formal analysis, S.L.; investigation, J.Z.; writing—original draft preparation, S.L.; writing—review and editing, J.Z.; visualization, N.S.; supervision, J.Z.; project administration, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China under Project Grant Nos. 12171219 and 61877032, the Liaoning Revitalization Talents Program No. XLYC2007113, Scientific Research Fund of Liaoning Provincial Education Department under Project No. LJKZ0961.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Scarf, H. A min-max solution of an inventory problem. In Studies in the Mathematical Theory of Inventory and Production; Arrow, K.J., Karlin, S., Scarf, H.E., Eds.; Stanford University Press: Stanford, CA, USA, 1958; pp. 201–209.
2. Delage, E.; Ye, Y. Distributionally robust optimization under moment uncertainty with application to data-driven problems. Oper. Res. 2010, 58, 595–612.
3. Goh, J.; Sim, M. Distributionally robust optimization and its tractable approximations. Oper. Res. 2010, 58, 902–917.
4. Wiesemann, W.; Kuhn, D.; Sim, M. Distributionally robust convex optimization. Oper. Res. 2014, 62, 1358–1376.
5. Xu, H.; Liu, Y.C.; Sun, H.L. Distributionally robust optimization with matrix moment constraints: Lagrange duality and cutting plane methods. Math. Program. 2018, 169, 489–529.
6. Shapiro, A. On duality theory of conic linear problems. In Semi-Infinite Programming; Goberna, M.A., López, M.A., Eds.; Springer: Boston, MA, USA, 2001; pp. 135–165.
7. Zhang, J.; Xu, H.; Zhang, L. Quantitative stability analysis for distributionally robust optimization with moment constraints. SIAM J. Optim. 2016, 26, 1855–1882.
8. Royset, J.O.; Wets, R.J.-B. Variational theory for optimization under stochastic ambiguity. SIAM J. Optim. 2017, 27, 1118–1149.
9. Luo, F.; Mehrotra, S. Distributionally robust optimization with decision dependent ambiguity sets. Optim. Lett. 2020, 14, 2565–2594.
10. Pflug, G.C.; Wozabal, D. Ambiguity in portfolio selection. Quant. Financ. 2007, 7, 435–442.
11. Athreya, K.B.; Lahiri, S.N. Measure Theory and Probability Theory; Springer: New York, NY, USA, 2006.
12. Kelley, J.E. The cutting-plane method for solving convex programs. SIAM J. Appl. Math. 1960, 8, 703–712.
13. Toh, K.C.; Todd, M.J.; Tütüncü, R.H. SDPT3—A Matlab software package for semidefinite programming. Optim. Methods Softw. 1999, 11, 545–581.
14. Hu, Z.; Hong, L.J. Kullback–Leibler Divergence Constrained Distributionally Robust Optimization. 2012. Available online: http://www.optimization-online.org/DB_HTML/2012/11/3677.html (accessed on 1 November 2012).
15. Esfahani, P.M.; Kuhn, D. Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations. Math. Program. 2018, 171, 115–166.
16. Zhao, C.; Guan, Y. Data-driven risk-averse stochastic optimization with Wasserstein metric. Oper. Res. Lett. 2018, 46, 262–267.
Figure 1. Comparative analysis of $(H_1)$ and $(H_2)$ on Time (s).
Figure 2. Comparative analysis of $(H_1)$ and $(H_2)$ on Iter.
Figure 3. Comparative analysis of $(H_1)$ and $(H_2)$ on Optimal Value.
Figure 4. Comparative analysis of Optimal Value from $(H_1)$.
Figure 5. Comparative analysis of Optimal Value from $(H_2)$.
Table 1. The Alternating Iteration Algorithm.
1. Set $k=0$ and $C_0=\{\hat P\}$, where $\hat P\in C$ satisfies $\{x\in X:\mathbb{E}_{\hat P}[\Psi(x,\xi)]\in K\}\neq\emptyset$.
2. Solve the outer problem
$$\min_{x,t}\ t\quad \text{s.t.}\quad \mathbb{E}_P[f(x,\xi(\omega))]\le t\ \ \forall P\in\mathcal{P}_k(x),\qquad x\in X,$$
and obtain the solution $(x_k,t_k)$, where $\mathcal{P}_k(x)=\{P\in C_k:\mathbb{E}_P[\Psi(x,\xi)]\in K\}$.
3. Solve the inner problem
$$\max_{P}\ \mathbb{E}_P[f(x_k,\xi)]\quad \text{s.t.}\quad \mathbb{E}_P[\Psi(x_k,\xi)]\in K,\ \ P\in C,$$
and obtain the solution $\hat P_k$ and the optimal value $v_k$.
4. Let $C_{k+1}=C_k\cup\{\hat P_k\}$.
5. If $C_{k+1}=C_k$ or $v_k\le t_k$, then a solution of (1) has been found and the algorithm stops. Otherwise, set $k=k+1$ and go to Step 2.
Table 2. The Modified Alternating Iteration Algorithm.
1. Let $P^0=(p_1^0,\ldots,p_N^0)\in\mathcal{Q}$, $\mathcal{Q}_0:=\{P^0\}$ and $x_0\in X$. Set $k=0$.
2. Solve the outer minimization problem
$$\min_{x,t}\ t\quad \text{s.t.}\quad x\in X,\quad \sum_{j=1}^N p_j\big[f(x_k,\xi^j)+\nabla_x f(x_k,\xi^j)^T(x-x_k)\big]\le t\ \ \text{for each } P=(p_1,\ldots,p_N)\in\mathcal{Q}_k,$$
and obtain the solution $(x_{k+1},t_{k+1})$.
3. Solve the inner maximization problem
$$\max_{(p_1,\ldots,p_N)\in\mathbb{R}^N}\ \sum_{j=1}^N p_j\, f(x_{k+1},\xi^j)\quad \text{s.t.}\quad p_j\,g_1(\xi^j)\preceq 0,\ \ p_j\,g_2(\xi^j)\preceq 0,\ \ p_j\ge 0,\ j=1,\ldots,N,\ \ \sum_{j=1}^N p_j=1,$$
and obtain the solution $P^{k+1}$ and the optimal value $v_{k+1}$.
4. Let $\mathcal{Q}_{k+1}=\mathcal{Q}_k\cup\{P^{k+1}\}$. If $\mathcal{Q}_{k+1}=\mathcal{Q}_k$ or $v_{k+1}\le t_{k+1}$, then stop; otherwise, set $k:=k+1$ and go to Step 2.
Table 3. The performance of $(H_1)$.

n     Time (s)      Iter    Optimal Value
2     31.234226     48      0.9669
4     44.774935     66      0.9663
6     52.961924     72      0.9616
8     52.583319     69      0.9578
10    56.231204     74      0.9659
12    65.792619     85      0.9629
Table 4. The performance of $(H_2)$.

n     Time (s)      Iter    Optimal Value
2     28.848998     48      0.9483
4     32.741311     50      0.7441
6     38.559070     57      0.7603
8     41.873316     57      0.8045
10    61.811460     76      0.8577
12    63.149645     75      0.8433