Article

A Risk-Aversion Approach for the Multiobjective Stochastic Programming Problem

1 HUM-LOG Research Group, Instituto de Matemática Interdisciplinar (IMI), Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, Plaza de las Ciencias 3, 28040 Madrid, Spain
2 Mathematical Research Institute (IMUS), University of Seville, 41004 Sevilla, Spain
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(11), 2026; https://doi.org/10.3390/math8112026
Submission received: 10 October 2020 / Revised: 5 November 2020 / Accepted: 8 November 2020 / Published: 13 November 2020
(This article belongs to the Special Issue Multi-Criteria Optimization Models and Applications)

Abstract: Multiobjective stochastic programming is a field well suited to tackling problems that arise in many areas (energy, finance, and emergency management, among others), since uncertainty and multiple objectives are usually present in such problems. A new solution concept is proposed in this work, especially designed for risk-averse solutions. The proposed concept combines the notions of conditional value-at-risk and ordered weighted averaging operator to find solutions protected against risks due to uncertainty and to under-achievement of criteria. A small example is presented in order to illustrate the concept in small discrete feasible spaces. A linear programming model is also introduced to obtain the solution in continuous spaces. Finally, computational experiments are performed by applying the obtained linear programming model to the multiobjective stochastic knapsack problem, gaining insight into the behaviour of the new solution concept.

1. Introduction

Decision making is never easy, yet we often have to make decisions. Emergencies and disaster management are fields in which many difficulties often arise, such as high uncertainty and multiple conflicting objectives. Risk-averse decisions are usually sought to overcome such difficulties. Risk aversion is the attitude whereby we prefer to reduce uncertainty rather than gamble on extreme outcomes (positive or negative).
Risk aversion, although typically studied in problems with uncertainty, can also be considered when making decisions with multiple criteria. For instance, in the field of disaster management, solutions that are sufficiently good for all criteria are usually preferred to others that perform exceptionally well for some criteria, but inadequately for the others.
Multicriteria decision making (MCDM) is a field worthy of consideration when studying real-world problems. This situation, in which multiple conflicting objectives have to be optimized, has led to the definition of different solution concepts and methodologies. A specific methodology should be applied depending on the problem and the type of solution considered. A key concept in MCDM is the notion of efficiency, which reflects the intuition that, for a solution to be acceptable, no other solution can exist that improves upon it in every objective.
Uncertainty is another feature present in the studied problems, in which risk-averse decisions will be preferred. The most common ways of dealing with uncertainty are stochastic programming and robust optimization, in which fuzzy optimization is also included [1]. Stochastic programming is the most widely used technique when there are historical data or information from which to infer a probability distribution. Moreover, discrete distributions are usually employed, the different values being called scenarios. The concepts of value-at-risk (VaR) and conditional value-at-risk (CVaR) are widely used for quantifying risk. They are typically defined for loss distributions in finance, where the right tail of the distribution is of interest.
Consider now the following problem, in which multiple objectives to be minimized and uncertainty are present simultaneously:
$$\min_{x \in X} \left( f_1(x, \omega), \ldots, f_K(x, \omega) \right)$$
The above problem is typically called the multiobjective stochastic programming (MSP) problem, especially when $\omega$, the source of uncertainty, has a known probability distribution.
In this paper, we introduce a new solution concept in multiobjective stochastic programming based on risk-averse preferences. Such a concept is complemented with a mathematical programming model in order to compute it efficiently, and computational experiments are performed to assess its strengths.
The remainder of this paper is organized as follows. Section 2 presents a literature review of multicriteria decision-making and uncertain optimization. Section 3 includes the definition of a novel solution concept for MSP problems and studies its properties. In Section 3.2, such a solution concept is illustrated with a basic example when the decision space is finite and small.
Section 4 shows how to obtain such a solution with a linear programming model. An application to the multicriteria knapsack problem is developed in Section 5, and general conclusions of the research are drawn in Section 6.

2. Literature Review

2.1. Multicriteria Decision-Making and Optimization under Uncertainty

MCDM techniques have recently been used for solving real world problems as varied as: disaster management [2,3], engineering [4], finance [5,6], forest planning [7], healthcare [8], location of waste facilities [9], police districting [10], route planning [11], train scheduling [12], or urban planning [13,14].
An important concept used throughout this paper is that of efficiency. The notation used is the one given in [15]:
Definition 1
(Efficiency, [15]). Let $f_1(x), \ldots, f_K(x)$ be objective functions to be minimized, and let $X$ be the feasible set. A feasible solution $\hat{x} \in X$ is called:
  • Weakly efficient if there is no $x \in X$, $x \neq \hat{x}$, such that $f(x) < f(\hat{x})$, i.e., $f_k(x) < f_k(\hat{x})$ for all $k = 1, \ldots, K$.
  • Efficient or Pareto optimal if there is no $x \in X$ such that $f_k(x) \leq f_k(\hat{x})$ for all $k = 1, \ldots, K$ and $f_i(x) < f_i(\hat{x})$ for some $i \in \{1, \ldots, K\}$.
  • Strictly efficient if there is no $x \in X$, $x \neq \hat{x}$, such that $f(x) \leq f(\hat{x})$.
The different approaches for dealing with uncertainty do not respond to the desires of the modeller; instead, they reflect the nature of the uncertainty. If the uncertainty comes with an underlying known or estimated probability distribution, then stochastic programming is used. For an introduction to stochastic programming, the reader is referred to [16]. On the other hand, if uncertainty comes from a lack of precision or semantic uncertainty, then robust optimization is used. Robust optimization does not assume a known (or existing) distribution [17,18,19]. A recent review of robust optimization can be found in [20].
Stochastic programming seeks the optimization of a characteristic value of a random variable, usually its average. However, in risk-averse contexts the usage of value-at-risk and conditional value-at-risk is common for quantifying risk (see, for instance, [21,22,23,24,25]).
Definition 2
(CVaR, [26]). Given a distribution function $F_X(x)$ and $\beta \in [0, 1]$, the β-CVaR is the conditional expected value over $\{x : F_X(x) \geq \beta\}$.

2.2. Multiobjective Stochastic Programming

Multiobjective stochastic programming refers to models in which several criteria and stochastic uncertainty are present simultaneously. Reference [27] develops the PROTRADE method, where utility functions are defined to aggregate the objectives into a single-objective stochastic problem. The resulting problem is solved with an interactive method, where the decision-maker defines an expected solution and a feasibility probability. Reference [28] reduces the stochasticity by adding some suitable measures of the random objectives to the list of objectives, such as the mean, variance, or the probability of being above/below a threshold. The resulting deterministic multiobjective problem is solved by aggregating the objectives, but it could be solved via other techniques.
Reference [29] compares the stochastic approach with the multiobjective approach when using different techniques. The stochastic approach transforms the MSP into a single-objective stochastic problem, while the multiobjective approach first reduces the stochasticity, transforming the MSP into a deterministic multiobjective problem. They highlight that “the multiobjective approach forgets the possible existence of stochastic dependencies between objectives”. Reference [30] studies stochastic goal programming, where the deviation of the objective functions from goals set beforehand to stochastic values is minimized.
In [31], a chance-constrained compromise approach is proposed, with an example presented in [32]. In [33], the INTEREST method, an interactive reference point method, is proposed. The decision-maker gives reference levels $u_i$ and probabilities $\beta_i$, hoping to achieve a solution $x^*$ such that $P\left( f_i(x^*) \leq u_i \right) \geq \beta_i$. If this is infeasible, the decision-maker should either increase the reference levels or decrease the probabilities of achievement. Reference [34] reviews different solution methods for the MSP problem, categorizing them as following the stochastic approach or the multiobjective approach. Reference [35] surveys methods for MSP problems that do not reduce the multiple objectives before the analysis of the problem, acknowledging the difficulty of risk-averse decision-making. More recently, in [36], different ordering relations for multicriteria problems with uncertainty are presented, building upon existing notions of robustness.
Some fields where MSP models have been developed are: forest management [37], multiple response optimization [38], energy generation [39,40], energy exchange [41], capacity investment [42], disaster management [43,44], portfolio optimization [45], and cash management [46], among others.

3. Methodology

The concept of CVaR allows aggregating several scenarios by just looking at what happens in the worst ones. The ordered weighted averaging (OWA) operators are defined in [47], and independently in the field of locational analysis [48,49] under the name of ordered median function. These concepts will allow us to aggregate different criteria by looking at the least desirable ones, as a risk-aversion measure.
Definition 3
(OWA, [47]). Given $a_1, \ldots, a_n \in \mathbb{R}$, the ordered weighted averaging (OWA) operator with weights $\lambda_1, \ldots, \lambda_n$ is defined as:
$$OWA(a_1, \ldots, a_n) = \sum_i \lambda_i \, a_{(i)}$$
where $a_{(1)}, \ldots, a_{(n)}$ is the vector $a_1, \ldots, a_n$ ordered from largest to smallest.
Remark 1.
For certain weights, the OWA represents a known quantity:
  • If $\lambda_i = \frac{1}{n}$ for all $i$, the resulting OWA is the average of $a$.
  • If $\lambda_1 = 1$ and $\lambda_j = 0$ for $j > 1$, the OWA is the maximum of $a$.
  • If $\lambda_n = 1$ and $\lambda_j = 0$ for $j < n$, the OWA is the minimum of $a$.
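As a quick illustration (our own sketch, not part of the original paper), Definition 3 and the special cases of Remark 1 can be written in a few lines of Python; the function name `owa` and the sample data are ours:

```python
def owa(a, lam):
    """Ordered weighted average (Definition 3): the weights lam are applied
    to the values of a sorted from largest to smallest."""
    a_sorted = sorted(a, reverse=True)
    return sum(l * v for l, v in zip(lam, a_sorted))

a = [3.0, 1.0, 2.0]
n = len(a)
avg = owa(a, [1 / n] * n)   # lambda_i = 1/n -> average of a
mx = owa(a, [1, 0, 0])      # lambda_1 = 1   -> maximum of a
mn = owa(a, [0, 0, 1])      # lambda_n = 1   -> minimum of a
```

With the sample vector above, the three calls recover the average, the maximum, and the minimum, as Remark 1 states.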
Reference [50] later studies how to assign weights for an OWA when criteria have different importances.
Definition 4
(OWA with importances, [50]). Given $a_1, \ldots, a_n \in \mathbb{R}$ with importances $u_1, \ldots, u_n$ such that $\sum_i u_i = 1$, the weights $\lambda_j$ for the OWA can be calculated with $f$, the weight generating function, in the following manner:
1. Sort vector $a$ such that $a_{(1)} \geq a_{(2)} \geq \cdots \geq a_{(n)}$.
2. With $(\cdot)$ the order induced by $a$, define $T_j = \sum_{k=1}^{j} u_{(k)}$ (with $T_0 = 0$).
3. Let $f$ be a function such that $f: [0, 1] \to [0, 1]$ and $f(0) = 0$, $f(1) = 1$. This function is called the weight generating function.
4. Obtain the weights as $\lambda_j = f(T_j) - f(T_{j-1})$.
Example 1 (of Definition 4).
Consider the following weight generating function, for a given $r \in (0, 1]$:
$$f(x) = \begin{cases} \dfrac{x}{r} & \text{if } x < r \\ 1 & \text{if } x \geq r \end{cases} \tag{1}$$
Let $(\cdot)$ be the order such that $a_{(1)} \geq \cdots \geq a_{(n)}$, let $u_{(j)}$ be the importance associated with $a_{(j)}$, and let $T_j = \sum_{k=1}^{j} u_{(k)}$. We shall now see how the weights are obtained from $f$. Let $j^*$ be such that $T_{j^*-1} < r \leq T_{j^*}$.
  • $\lambda_1 = f(T_1) = f(u_{(1)}) = \frac{u_{(1)}}{r}$, assuming $u_{(1)} < r$
  • $\lambda_2 = f(T_2) - f(T_1) = \frac{u_{(1)} + u_{(2)}}{r} - \frac{u_{(1)}}{r} = \frac{u_{(2)}}{r}$, assuming $u_{(1)} + u_{(2)} < r$
  • $\lambda_{j^*} = f(T_{j^*}) - f(T_{j^*-1}) = 1 - \frac{u_{(1)} + u_{(2)} + \cdots + u_{(j^*-1)}}{r}$, since $T_{j^*} \geq r$
  • $\lambda_{j^*+1} = f(T_{j^*+1}) - f(T_{j^*}) = 1 - 1 = 0$
  • $\lambda_n = f(T_n) - f(T_{n-1}) = 1 - 1 = 0$
Consequently, the OWA of $a_1, \ldots, a_n$ with importances $u_1, \ldots, u_n$ is:
$$OWA = \frac{u_{(1)}}{r} a_{(1)} + \frac{u_{(2)}}{r} a_{(2)} + \cdots + \left( 1 - \frac{u_{(1)} + u_{(2)} + \cdots + u_{(j^*-1)}}{r} \right) a_{(j^*)} = \frac{u_{(1)} a_{(1)} + u_{(2)} a_{(2)} + \cdots + \left( r - u_{(1)} - u_{(2)} - \cdots - u_{(j^*-1)} \right) a_{(j^*)}}{r}$$
That is, the OWA characterized by the weight generating function given in (1) is the average of the worst $a_j$, weighted by their importances, with the total importance adding up to $r$. The values of $\lambda$ reflect the preferences of the decision-maker. The parameter $r$ leads to incorporating the different attributes, from worst to best, until a threshold is reached.
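The four steps of Definition 4, with the threshold-type weight generating function of Example 1, can be sketched as follows (our own minimal implementation; the function names and the sample importances are hypothetical):

```python
def owa_weights(u_sorted, f):
    """Definition 4: weights lambda_j = f(T_j) - f(T_{j-1}), where the
    importances u_sorted are already in the order induced by a."""
    lam, T, prev = [], 0.0, 0.0
    for u in u_sorted:
        T += u                     # cumulative importance T_j
        lam.append(f(T) - prev)
        prev = f(T)
    return lam

def f_threshold(r):
    """Weight generating function of Example 1: x/r below r, 1 from r on."""
    return lambda x: x / r if x < r else 1.0

# Hypothetical importances, sorted in the order induced by a, with r = 0.5
lam = owa_weights([0.2, 0.1, 0.2, 0.25, 0.25], f_threshold(0.5))
# lam is approximately [0.4, 0.2, 0.4, 0.0, 0.0]: only the worst attributes,
# up to accumulated importance r, receive positive weight
```

Note that the weights always sum to one (since $f(1) = 1$), and every attribute past the threshold $j^*$ gets weight zero, exactly as derived above.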
The starting point of this paper is the recurrent idea of representing ordered weighted or ordered median operators using k-sums. k-sums (or k-centra in the location analysis literature) are sums of the k largest terms of a vector [51]. The use of k-sums to represent ordered median objectives can be traced back at least to [52]. More recent references are, for instance, [53,54,55,56]. This last reference introduces a normalized version of the k-centrum, named β-average, which will be used in our paper.
Throughout the remainder of the paper, consider functions $f_{kj}(x)$ to be minimized within a feasible set $X$, with $k = 1, \ldots, K$ indexing $K$ different objectives with importances $w_k$, and $j = 1, \ldots, J$ indexing $J$ different scenarios with probabilities $\pi_j$.
Definition 5
(β-average, $g_k^\beta(x)$, [56]). Given $\beta \in (0, 1]$, for each criterion $k$ one can define $g_k^\beta(x)$, which measures the average of $f_k$ over the worst scenarios among $f_{k1}(x), \ldots, f_{kJ}(x)$, with accumulated probability equal to β.
Remark 2
([56]). Given a value β, if the sum of the probabilities of the worst scenarios is exactly β, then the β-average is exactly the $(1-\beta)$-CVaR.
Example 2.
Consider a point $x$, a fixed criterion $k$, and five different scenarios with given probabilities $\pi_j$ and values of $f_{kj}$. Table 1 shows the β-averages for different values of β, in which the scenarios have been ordered from largest value of $f$ to smallest.
  • For $\beta = 0.2$, scenario $j = 1$ alone covers the worst scenarios with probability $0.2$, and hence $g_k^\beta(x) = \frac{0.2 \times 10}{0.2} = 10$.
  • When β equals $0.3$, it is necessary to also include scenario 2, obtaining a β-average of $\frac{0.2 \times 10 + 0.1 \times 7}{0.3} = 9$.
  • Finally, if $\beta = 0.5$, scenario 3 needs to be added as well, but only with the probability needed to reach $0.5$: $g_k^\beta(x) = \frac{0.2 \times 10 + 0.1 \times 7 + 0.2 \times 4}{0.5} = 7$.
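The β-average of Example 2 can be sketched as below (our own illustration; the probabilities and values of the two scenarios not shown in the text are hypothetical and do not affect the results for $\beta \leq 0.5$):

```python
def beta_average(values, probs, beta):
    """Definition 5: average of the worst (largest) values, drawing
    probability mass from each scenario until a total of beta is used."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    remaining, acc = beta, 0.0
    for i in order:
        take = min(probs[i], remaining)   # only the mass still needed
        acc += take * values[i]
        remaining -= take
        if remaining <= 1e-12:
            break
    return acc / beta

f_vals = [10, 7, 4, 2, 1]            # f_kj(x), ordered worst to best
p = [0.2, 0.1, 0.2, 0.25, 0.25]      # scenario probabilities
# beta_average(f_vals, p, 0.2) is approximately 10
# beta_average(f_vals, p, 0.3) is approximately 9
# beta_average(f_vals, p, 0.5) is approximately 7
```

The three calls reproduce the three bullet points of Example 2.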
When using the β-average, the functions $f_{kj}(x)$ are transformed into $g_k^\beta(x)$, a collection of K functions no longer depending on the scenario. An OWA will now be defined, via its weight generating function, which will reduce the K β-averages into a scalar function.
Definition 6 (r-OWA, $O_r(x)$).
Given $x_1, \ldots, x_K \in \mathbb{R}$ with importances $w_1, \ldots, w_K$ such that $\sum_{k=1}^K w_k = 1$, and $r \in (0, 1]$, the function $O_r(x)$ is defined as the OWA with the following weight generating function:
$$f(x) = \begin{cases} \dfrac{x}{r} & \text{if } x < r \\ 1 & \text{if } x \geq r \end{cases}$$
Remark 3.
The definition of $O_r(x)$ mirrors that of the β-average (Definition 5), but in a context with importances rather than probabilities. Example 3 shows the similarities between the two approaches.
Example 3.
Consider a point $x$ and let $g_k(x)$ be the evaluation of $x$ under five different criteria with importances $w_k$. Table 2 shows the r-OWAs for different values of $r$, in which the criteria have been ordered from largest value of $g_k(x)$ to smallest. Consider the case $r = 0.5$:
1. As the $g_k(x)$ are already ordered from largest to smallest, the values of $T_k$ are:
$$T_1 = 0.2, \quad T_2 = 0.2 + 0.1 = 0.3, \quad T_3 = 0.6, \quad T_4 = 0.85, \quad T_5 = 1$$
2. The values of $T_k$ under $f$:
$$f(T_1) = \frac{0.2}{0.5}, \quad f(T_2) = \frac{0.3}{0.5}, \quad f(T_3) = f(T_4) = f(T_5) = 1$$
3. The weights of the OWA:
$$\lambda_1 = \frac{0.2}{0.5}, \quad \lambda_2 = \frac{0.3 - 0.2}{0.5} = \frac{0.1}{0.5}, \quad \lambda_3 = 1 - \frac{0.3}{0.5} = \frac{0.2}{0.5}, \quad \lambda_4 = \lambda_5 = 0$$
4. Consequently, the r-OWA is:
$$r\text{-}OWA = \frac{0.2\, g_{(1)}(x) + 0.1\, g_{(2)}(x) + 0.2\, g_{(3)}(x)}{0.5} = \frac{0.2 \times 10 + 0.1 \times 7 + 0.2 \times 4}{0.5} = 7$$
Remark 4.
Given $x_1, \ldots, x_K$ and their associated importances $w_1, \ldots, w_K$, the weights of the r-OWA are $\lambda_k = \frac{\tilde{\lambda}_k}{r}$, with $\tilde{\lambda}_k$ determined by:
$$O_r(x_1, \ldots, x_K) = \max_{\tilde{\lambda}_1, \ldots, \tilde{\lambda}_K} \left\{ \frac{\tilde{\lambda}_1 x_1 + \cdots + \tilde{\lambda}_K x_K}{r} \;:\; 0 \leq \tilde{\lambda}_k \leq w_k, \; \sum_k \tilde{\lambda}_k = r \right\}$$
Given $r, \beta \in (0, 1]$ and $x \in X$, let us introduce the function $h_r^\beta(x)$ as the r-OWA of the β-averages. That is:
$$h_r^\beta(x) = O_r\left( g_1^\beta(x), \ldots, g_K^\beta(x) \right)$$
Remark 5.
If the importance of all the criteria is the same ($w_k = \frac{1}{K}$ for all $k$) and $r = \frac{n}{K}$ with $n \in \{1, \ldots, K\}$, then $h_r^\beta(x)$ is the average of the $n$ worst β-averages. Recall that this is called the n-centrum [57].
Definition 7 (Dominance).
Let $x$ and $y$ be feasible solutions ($x, y \in X$) and $r, \beta \in (0, 1]$. Then $x$ dominates $y$ ($x \succsim y$) if $h_r^\beta(x) \leq h_r^\beta(y)$, where $h_r^\beta(x)$ is the r-OWA of the β-averages.
Definition 7 induces a domination relationship with the following properties:
Reflexivity
   Given $x$, $h_r^\beta(x) \leq h_r^\beta(x)$, and then $x \succsim x$, so ≿ is reflexive.
Transitivity
   Given $x \succsim y$ and $y \succsim z$, we have $h_r^\beta(x) \leq h_r^\beta(y)$ and $h_r^\beta(y) \leq h_r^\beta(z)$, and then $h_r^\beta(x) \leq h_r^\beta(z)$, which leads to $x \succsim z$; we conclude that ≿ is transitive.
Antisymmetry
   Given $x \succsim y$ and $y \succsim x$, we have $h_r^\beta(x) \leq h_r^\beta(y)$ and $h_r^\beta(y) \leq h_r^\beta(x)$, but from $h_r^\beta(x) = h_r^\beta(y)$ it cannot be guaranteed that $x = y$; hence, ≿ is not antisymmetric.
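Composing the two operators gives $h_r^\beta$, and with it the dominance check of Definition 7. The sketch below is our own minimal implementation; it exploits the fact that the β-average and the r-OWA share the same "capped worst-case average" mechanics, applied first over scenarios and then over criteria:

```python
def capped_worst_average(values, masses, cap):
    """Average of the largest values, drawing mass (probability or
    importance) from each entry until a total of `cap` is used."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    remaining, acc = cap, 0.0
    for i in order:
        take = min(masses[i], remaining)
        acc += take * values[i]
        remaining -= take
    return acc / cap

def h(f, probs, imps, beta, r):
    """h_r^beta(x): r-OWA of the beta-averages, with f[k][j] the value of
    criterion k in scenario j at the point x under study."""
    g = [capped_worst_average(fk, probs, beta) for fk in f]  # beta-averages
    return capped_worst_average(g, imps, r)                  # r-OWA

def dominates(x_f, y_f, probs, imps, beta, r):
    """Definition 7: x dominates y iff h(x) <= h(y)."""
    return h(x_f, probs, imps, beta, r) <= h(y_f, probs, imps, beta, r)
```

As a check, with three equally important criteria, $r = 2/3$, and β-averages $(0.8, 0.4, 0.65)$ or $(0.8, 0.45, 0.65)$ (the values used later in Example 4), `capped_worst_average` over the criteria returns $0.725$ in both cases, so each alternative dominates the other, illustrating why ≿ is not antisymmetric.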

3.1. Idea of Solution and Dominance Properties

Consider the multiobjective stochastic programming problem:
$$\min_{x \in X} \left( f_1(x, \omega), \ldots, f_K(x, \omega) \right)$$
The previously defined concepts of β-average and r-OWA transform the MSP problem into a deterministic multiple-objective problem, and then into a deterministic single-objective problem:
$$MSP \longrightarrow MOP \longrightarrow LP\,(MIP)$$
$$f_{kj}(x) \;\xrightarrow{\;\beta\text{-average}\;}\; g_k^\beta(x) \;\xrightarrow{\;r\text{-OWA}\;}\; h_r^\beta(x)$$
  • For every $x \in X$ there are functions $f_{kj}$ to be minimized, depending on the scenario $j$ and the criterion $k$.
  • The problem is transformed into a deterministic one with multiple objectives (MOP) using the β-average concept.
  • When computing the r-OWA, each $x \in X$ is assigned a scalar. The problem consists of finding the $x$ that minimizes $h_r^\beta(x)$.
The solution procedure falls into what is usually called a scalarization approach. When obtaining a minimizer of $h_r^\beta(x)$, it is also desired that the optimal solution is efficient for the associated MOP problem:
$$\min_{x \in X} \left( g_1^\beta(x), \ldots, g_K^\beta(x) \right)$$
Proposition 1.
Given $x^*$ a minimum of $h_r^\beta(x)$, the following statements hold:
1. $x^*$ is not necessarily efficient for the MOP problem.
2. $x^*$ is weakly efficient for the MOP problem.
3. If $x^*$ is the only minimum of $h_r^\beta(x)$, then $x^*$ is efficient.
4. Given $x^*$ not efficient, an alternative $y^*$ can be found in a second phase such that $y^*$ is efficient and $h_r^\beta(x^*) = h_r^\beta(y^*)$.
These properties are known when using scalarization techniques [15]. Hence, only an example of the first statement will be shown.
Example 4 ($x^*$ is not necessarily efficient).
Consider the example displayed in Table 3, in which there are only two feasible solutions, two equiprobable scenarios ($\pi_1 = \pi_2 = \frac{1}{2}$), and three equally important criteria ($w_1 = w_2 = w_3 = \frac{1}{3}$), and the values $\beta = \frac{1}{2}$ and $r = \frac{2}{3}$ are taken.
The β-averages are $(0.8, 0.4, 0.65)$ for the first alternative and $(0.8, 0.45, 0.65)$ for the second. When computing the function $h_r^\beta$, both alternatives have an objective value of $0.725$. Consequently, even though the second alternative is an optimal solution of $h_r^\beta$, it is not an efficient solution of the MOP problem, as its β-averages are dominated by those of the first alternative.
The transformation of the problem from a multiple-objective one to a single-objective one is done using weights. These weights correspond to a subjective scale introduced by the expert, representing the importance of the criteria considered in the problem as accurately as possible.

3.2. An Illustrative Example

The proposed solution concept will now be applied, first to a discrete (and small) case. When the solution space is discrete, and all feasible solutions can be explicitly enumerated, the steps are as follows:
Step 0
   Normalize all objective functions $f_{kj}(x)$.
Step 1
   Set values for $\beta, r \in (0, 1]$.
Step 2
   For every $x \in X$ and every criterion $k$, define $g_k^\beta(x)$ as the average of the worst scenarios for criterion $k$, with probabilities adding up to β.
Step 3
   Define $h_r^\beta(x)$ as the average of the worst $g_k^\beta(x)$ values, with importances adding up to $r$.
Step 4
   Search for the $x \in X$ minimizing $h_r^\beta(x)$.
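For a finite decision space, the steps above amount to a direct enumeration. The sketch below is our own (the data in the test is made up); the helper `worst_avg` plays the role of both the β-average (Step 2) and the r-OWA (Step 3):

```python
def worst_avg(values, masses, cap):
    """Average of the largest values, using at most `cap` total mass."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    remaining, acc = cap, 0.0
    for i in order:
        take = min(masses[i], remaining)
        acc += take * values[i]
        remaining -= take
    return acc / cap

def best_alternative(f, probs, imps, beta, r):
    """f[x][k][j]: normalized value of alternative x, criterion k, scenario j.
    Returns the index of the alternative minimizing h_r^beta (Step 4)."""
    def h(fx):
        g = [worst_avg(fk, probs, beta) for fk in fx]  # Step 2: beta-averages
        return worst_avg(g, imps, r)                   # Step 3: r-OWA
    return min(range(len(f)), key=lambda x: h(f[x]))
```

With two alternatives, two equiprobable scenarios, two equally important criteria, and $\beta = r = 0.5$ (so only the worst scenario and the worst criterion count), the risk-averse alternative is the one whose worst criterion-scenario value is smallest.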
Assume a decision space with only four alternatives, evaluated under five different scenarios and six criteria. For each of these alternatives, the value of the functions $f_{kj}(x)$ to be minimized can be computed. Table 4 shows the values of $f$, evaluated at the feasible point $x_1$, for each of the scenarios and criteria considered.
The first step is calculating the β -averages. Let us assume a value of β = 0.3 :
  • For the first criterion, the worst scenario is $j_5$, which has probability $0.1$. The second worst is $j_4$, with probability $0.25$. As the sum of those probabilities exceeds the fixed β, only a probability of $0.2$ from $j_4$ is used when computing the β-average:
$$g_1^\beta(x_1) = \frac{0.1 \times 0.86 + 0.2 \times 0.76}{0.3} = 0.793$$
  • $g_2^\beta(x_1) = (0.2 \times 0.65 + 0.1 \times 0.44)/0.3 = 0.580$
  • $g_3^\beta(x_1) = (0.3 \times 0.90)/0.3 = 0.900$
  • $g_4^\beta(x_1) = 0.833$, $g_5^\beta(x_1) = 0.930$, $g_6^\beta(x_1) = 0.728$
The last step is calculating the function $h_r^\beta(x)$, that is, the r-OWA of the β-averages. Table 5 calculates the r-OWA, and also shows the previously calculated β-averages, when the value $r = 0.17$ is taken.
The values of the functions for the other alternatives, as well as their β-averages and r-OWAs, are shown in Table A1, Table A2 and Table A3, starting on Page 21. Table 6 summarizes the results, showing all of the β-averages and r-OWAs, which determines that the optimal alternative for the given values of β and r is Alternative 1.
Variations on β and r yield very different results. Figure 1a shows which of the four alternatives has the lowest h value, depending on the values of β and r.
Figure 1b shows the optimal objective value when varying the parameters β and r. It can be appreciated how h decreases when β and r increase. This is due to the fact that the original $f_{kj}$ functions are to be minimized and, the larger the parameters β and r, the more favourable scenarios/criteria will take part in the computation of $h_r^\beta(x)$, hence decreasing its optimal value.
The solution concept that is defined for MSP problems can be applied to numerous fields, but it is especially relevant for situations in which risk-aversion is strictly preferred, such as the selection of socially responsible portfolios [58] or disaster management problems.

4. Computing the Minimum: Continuous Case

A solution concept was proposed in Definition 7. When the functions $f_{kj}(x)$ to be minimized are given, a new function $h_r^\beta(x)$ to be minimized is defined, with parameters β and r, such that $h_r^\beta(x)$ is the r-OWA of the β-averages. If the decision space is sufficiently small, the procedure shown in the above example obtains such a solution.
In this section, a mathematical programming model is developed in order to obtain the minimum of $h_r^\beta(x)$, which allows one to obtain the proposed solution for larger decision spaces, including continuous ones.

Mathematical Programming Model

Given $k$ and $x \in X$, we have the vector $\left( f_{k1}(x), \ldots, f_{kJ}(x) \right)$. Let $f_{k(1)}(x), \ldots, f_{k(J)}(x)$ be the ordered vector, such that $f_{k(j_1)}(x) \geq f_{k(j_2)}(x)$ when $j_1 \leq j_2$.
Given $\beta \in (0, 1]$, let $\hat{j}$ be the ordered scenario such that:
$$\sum_{j=1}^{\hat{j}} \pi_{(j)} \geq \beta, \qquad \sum_{j=1}^{\hat{j}-1} \pi_{(j)} < \beta$$
Alternatively:
$$f_{k(1)}(x) \geq f_{k(2)}(x) \geq \cdots \geq f_{k(\hat{j})}(x) \geq f_{k(\hat{j}+1)}(x) \geq \cdots \geq f_{k(J)}(x)$$
$$\pi_{(1)} + \pi_{(2)} + \cdots + \pi_{(\hat{j}-1)} < \beta \leq \pi_{(1)} + \cdots + \pi_{(\hat{j})} \leq \cdots \leq \pi_{(1)} + \cdots + \pi_{(J)} = 1$$
Additionally, let:
$$\hat{\pi}_{(j)} = \begin{cases} \pi_{(j)} & \text{if } j < \hat{j} \\ \beta - \sum_{j'=1}^{\hat{j}-1} \pi_{(j')} & \text{if } j = \hat{j} \\ 0 & \text{otherwise} \end{cases}$$
The definition of $\hat{\pi}_{(\hat{j})}$ is made in such a way that $\sum_j \hat{\pi}_{(j)} = \beta$. In this way, the average of the β worst values can be computed as $\frac{1}{\beta} \sum_{j=1}^{J} \hat{\pi}_{(j)} f_{k(j)}(x)$, which coincides with the definition of the β-average (Definition 5). This computation can be written as the following optimization problem:
$$\begin{aligned} \max_{\tilde{u}_j} \quad & \frac{1}{\beta} \sum_{j=1}^{J} \tilde{u}_j \, f_{kj}(x) \\ \text{s.t.} \quad & \sum_{j=1}^{J} \tilde{u}_j = \beta \\ & 0 \leq \tilde{u}_j \leq \pi_j \qquad j = 1, \ldots, J \end{aligned} \tag{2}$$
A more natural approach is to consider $u_j = \frac{\tilde{u}_j}{\beta}$. These $u_j$ represent the proportion in which scenario $j$ takes part in the aggregated β-average. Introducing this change, the model becomes:
$$\begin{aligned} \max_{u_j} \quad & \sum_{j=1}^{J} u_j \, f_{kj}(x) \\ \text{s.t.} \quad & \sum_{j=1}^{J} u_j = 1 \\ & 0 \leq u_j \leq \frac{\pi_j}{\beta} \qquad j = 1, \ldots, J \end{aligned}$$
The dual formulation is:
$$\begin{aligned} \min_{z, y_j} \quad & z + \sum_{j=1}^{J} \frac{\pi_j}{\beta} y_j \\ \text{s.t.} \quad & z + y_j \geq f_{kj}(x) \qquad j = 1, \ldots, J \\ & z \text{ free}, \; y_j \geq 0 \end{aligned}$$
Hence, finding the $x \in X$ which minimizes the average of the worst β scenarios for a given $k$ amounts to:
$$\min_{x \in X} \left( \max_{\tilde{u}_j} \left\{ \frac{1}{\beta} \sum_{j=1}^{J} \tilde{u}_j \, f_{kj}(x) \;:\; \sum_{j=1}^{J} \tilde{u}_j = \beta, \; 0 \leq \tilde{u}_j \leq \pi_j \; \forall j \right\} \right) \tag{3}$$
Or, alternatively:
$$\min_{x \in X} \left( \min_{z, y_j} \left\{ z + \sum_{j=1}^{J} \frac{\pi_j}{\beta} y_j \;:\; z + y_j \geq f_{kj}(x) \; \forall j, \; z \text{ free}, \; y_j \geq 0 \right\} \right)$$
Which is equivalent to model (4):
$$\begin{aligned} \min_{z, y_j, x} \quad & z + \sum_{j=1}^{J} \frac{\pi_j}{\beta} y_j & \text{(4a)} \\ \text{s.t.} \quad & z + y_j \geq f_{kj}(x) \qquad j = 1, \ldots, J & \text{(4b)} \\ & z \text{ free}, \; y_j \geq 0 \qquad j = 1, \ldots, J & \text{(4c)} \\ & x \in X & \text{(4d)} \end{aligned}$$
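For a fixed $x$, the dual LP above (the building block of model (4)) is an ordinary linear program. The sketch below (our own illustration, not part of the paper) solves it with `scipy.optimize.linprog` using the data of Example 2, and the optimum matches the β-average computed directly:

```python
import numpy as np
from scipy.optimize import linprog

def beta_average_lp(f, pi, beta):
    """Solve  min z + sum_j (pi_j/beta) y_j  s.t.  z + y_j >= f_j, y_j >= 0,
    z free.  Variables ordered as (z, y_1, ..., y_J)."""
    J = len(f)
    c = np.concatenate(([1.0], np.asarray(pi, dtype=float) / beta))
    # Rewrite z + y_j >= f_j as  -z - y_j <= -f_j  for A_ub @ vars <= b_ub
    A_ub = np.hstack([-np.ones((J, 1)), -np.eye(J)])
    b_ub = -np.asarray(f, dtype=float)
    bounds = [(None, None)] + [(0, None)] * J   # z free, y_j >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun

f = [10, 7, 4, 2, 1]
pi = [0.2, 0.1, 0.2, 0.25, 0.25]
# beta_average_lp(f, pi, 0.5) is approximately 7, as in Example 2
```

This is the classical Rockafellar-Uryasev linearization of CVaR-type quantities; plugging the constraint $x \in X$ back in yields model (4).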
Remark 6.
Models (3) and (4) are equivalent since, for any $x \in X$ chosen in (4), the values $z$ and $y_j$ will become as small as permitted by constraint (4b), as this improves the objective function (4a). Consequently, for every $x$, its β-average is computed appropriately and, thus, (4) obtains the $x \in X$ with the smallest β-average, as desired in (3).
For every $k \in \{1, \ldots, K\}$, thanks to problem (2), the function $g_k^\beta(x)$ can be defined, which measures, for each $x \in X$, the β-average for that criterion:
$$g_k^\beta(x) \equiv \min_{z_k, y_{kj}} \left\{ z_k + \sum_{j=1}^{J} \frac{\pi_j}{\beta} y_{kj} \;:\; z_k + y_{kj} \geq f_{kj}(x) \; \forall j, \; z_k \text{ free}, \; y_{kj} \geq 0 \; \forall j \right\} \tag{5}$$
The already known approach for single-criterion problems ends here. The next step is finding a “good” solution for all $k$. That is:
$$\min_{x \in X} \left( g_1^\beta(x), \ldots, g_K^\beta(x) \right)$$
Given $r \in (0, 1]$, the r-OWA of the β-averages will now be computed (in accordance with the definition given in Section 3). That is, the solution of the following problem is sought:
$$\max_{\tilde{t}_k} \left\{ \frac{1}{r} \sum_k \tilde{t}_k \, g_k^\beta(x) \;:\; \sum_k \tilde{t}_k = r, \; 0 \leq \tilde{t}_k \leq w_k \quad k = 1, \ldots, K \right\}$$
Or equivalently:
$$\max_{t_k} \left\{ \sum_k t_k \, g_k^\beta(x) \;:\; \sum_k t_k = 1, \; 0 \leq t_k \leq \frac{w_k}{r} \quad k = 1, \ldots, K \right\}$$
Its dual formulation is:
$$\min_{z, v_k} \left\{ z + \sum_k \frac{w_k}{r} v_k \;:\; z + v_k \geq g_k^\beta(x) \; \forall k, \; z \text{ free}, \; v_k \geq 0 \quad k = 1, \ldots, K \right\}$$
Replacing the value of $g_k^\beta(x)$ given in (5), model (6) is obtained:
$$\begin{aligned} \min_{z, v_k} \quad & z + \sum_k \frac{w_k}{r} v_k & \text{(6a)} \\ \text{s.t.} \quad & z + v_k \geq \min_{z_k, y_{kj}} \left\{ z_k + \sum_{j=1}^{J} \frac{\pi_j}{\beta} y_{kj} \;:\; z_k + y_{kj} \geq f_{kj}(x) \; \forall j, \; y_{kj} \geq 0 \right\} \quad \forall k & \text{(6b)} \\ & z \text{ free}, \; v_k \geq 0 \quad \forall k & \text{(6c)} \end{aligned}$$
Model (6) calculates, for a given $x \in X$, the r-OWA of its β-averages, which coincides with the function $h_r^\beta(x)$ defined in Section 3. This problem is not explicit, as it contains nested optimization problems in the constraints. For that reason, we propose a single-level alternative for fixed $x \in X$.
Consider the following linear programming model:
$$\begin{aligned} \min_{z, v_k, z_k, y_{kj}} \quad & z + \sum_k \frac{w_k}{r} v_k & \text{(7a)} \\ \text{s.t.} \quad & z + v_k \geq z_k + \sum_{j=1}^{J} \frac{\pi_j}{\beta} y_{kj} \quad \forall k & \text{(7b)} \\ & z_k + y_{kj} \geq f_{kj}(x) \quad \forall k, j & \text{(7c)} \\ & y_{kj} \geq 0 \quad \forall k, j & \text{(7d)} \\ & z_k \text{ free}, \; v_k \geq 0 \quad \forall k & \text{(7e)} \\ & z \text{ free} & \text{(7f)} \end{aligned}$$
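For a fixed $x$, model (7) can be assembled directly as a flat LP. The sketch below is our own illustration (variable layout, function name, and data are hypothetical): it builds (7a)-(7f) with `scipy.optimize.linprog` and, for the data chosen, returns the r-OWA of the β-averages computed by hand, $0.725$:

```python
import numpy as np
from scipy.optimize import linprog

def h_lp(f, pi, w, beta, r):
    """Model (7) for a fixed x, with f[k][j] = f_kj(x).
    Variable order: z | v_1..v_K | z_1..z_K | y_11..y_KJ."""
    K, J = len(f), len(f[0])
    n = 1 + 2 * K + K * J
    def v(k): return 1 + k
    def zk(k): return 1 + K + k
    def y(k, j): return 1 + 2 * K + k * J + j
    c = np.zeros(n)
    c[0] = 1.0                      # z in objective (7a)
    for k in range(K):
        c[v(k)] = w[k] / r          # (w_k / r) v_k in objective (7a)
    A, b = [], []
    for k in range(K):
        # (7b):  -z - v_k + z_k + sum_j (pi_j/beta) y_kj <= 0
        row = np.zeros(n); row[0] = -1; row[v(k)] = -1; row[zk(k)] = 1
        for j in range(J):
            row[y(k, j)] = pi[j] / beta
        A.append(row); b.append(0.0)
        # (7c):  -z_k - y_kj <= -f_kj
        for j in range(J):
            row = np.zeros(n); row[zk(k)] = -1; row[y(k, j)] = -1
            A.append(row); b.append(-f[k][j])
    bounds = ([(None, None)] + [(0, None)] * K    # z free (7f), v_k >= 0 (7e)
              + [(None, None)] * K                # z_k free (7e)
              + [(0, None)] * (K * J))            # y_kj >= 0 (7d)
    return linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds).fun

# Two equiprobable scenarios, three equally important criteria,
# beta = 1/2, r = 2/3; the beta-averages are (0.8, 0.45, 0.65)
f = [[0.8, 0.7], [0.45, 0.1], [0.65, 0.6]]
```

For this instance, $h_r^\beta$ is the average of the two worst β-averages, $(0.8 + 0.65)/2 = 0.725$, and the LP recovers that value.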
Proposition 2.
The transformation from model (6) to model (7) is valid, since their optimal solutions and objective values coincide.
Proof. 
Let $(z^*, v_k^*, z_k^*, y_{kj}^*)$ be the optimal solution of model (7). $(z^*, v_k^*)$ is feasible for model (6); it will be shown that it is also optimal for that model. Assume there exists $(z', v_k')$ feasible for model (6) with:
$$z' + \sum_k \frac{w_k}{r} v_k' < z^* + \sum_k \frac{w_k}{r} v_k^*$$
This, together with constraint (7b), implies that there exists $k_0$ such that:
$$z' + v_{k_0}' < z_{k_0}^* + \sum_{j=1}^{J} \frac{\pi_j}{\beta} y_{k_0 j}^*$$
since otherwise $(z', v_k', z_k^*, y_{kj}^*)$ would be optimal for model (7). As $z_{k_0}^*$ and $y_{k_0 j}^*$ are feasible for model (7), they are also feasible for the problem on the right-hand side of constraint (6b) and, thus, $z'$ and $v_{k_0}'$ violate constraint (6b). □
Proposition 2 showed that the optimal solutions of models (6) and (7) coincide. Proposition 3 goes further, showing the connection between their feasible sets.
Proposition 3.
The feasible set of model (6) is a projection of the feasible set of model (7).
Proof. 
  • For each feasible solution $(z, v_k)$ of model (6), there is at least one feasible solution of model (7) with the same values $(z, v_k)$, and hence the same objective value.
    Let $(z^1, v_k^1)$ be a feasible solution of model (6), and $(z_k^*, y_{kj}^*)$ the optimal solution, for each $k$, of the problem minimizing $g_k^\beta(x)$ (right-hand side of constraint (6b)). Because constraints (7b), (7c), (7d), and (7e) are then satisfied, $(z^1, v_k^1, z_k^*, y_{kj}^*)$ is a feasible solution of model (7).
  • For each feasible solution $(z, v_k, z_k, y_{kj})$ of model (7), $(z, v_k)$ is a feasible solution of model (6), again with the same objective value. Let $(z^2, v_k^2, z_k^2, y_{kj}^2)$ be a feasible solution of model (7). Since constraints (7b), (7c), and (7d) hold, $(z_k^2, y_{kj}^2)$ is feasible for the problem on the right-hand side of constraint (6b) and therefore its objective value is greater than or equal to the minimum of that problem, verifying:
$$z^2 + v_k^2 \geq z_k^2 + \sum_{j=1}^{J} \frac{\pi_j}{\beta} y_{kj}^2 \geq \min_{z_k, y_{kj}} \left\{ z_k + \sum_{j=1}^{J} \frac{\pi_j}{\beta} y_{kj} \right\}$$
    and so $(z^2, v_k^2)$ is feasible for model (6). □
Finally, after proving the validity of model (7), it is possible to let $x \in X$ be free, with the purpose of finding the solution minimizing $h_r^\beta(x)$:
$$\begin{aligned} \min_{z, v_k, z_k, y_{kj}, x} \quad & z + \sum_k \frac{w_k}{r} v_k \\ \text{s.t.} \quad & z + v_k \geq z_k + \sum_{j=1}^{J} \frac{\pi_j}{\beta} y_{kj} \quad \forall k \\ & z_k + y_{kj} \geq f_{kj}(x) \quad \forall k, j \\ & y_{kj} \geq 0 \quad \forall k, j \\ & z_k \text{ free}, \; v_k \geq 0 \quad \forall k \\ & z \text{ free} \\ & x \in X \end{aligned}$$

5. Application to the Knapsack Problem

The multiobjective stochastic knapsack problem is used in order to illustrate the usefulness of the previously defined concept.
Definition 8 (Multiobjective stochastic knapsack problem).
Let $I$ be a collection of objects with weights $v_i$, which can be selected as members of a knapsack with maximum weight $V$. There is a set of scenarios $J$, each with probability $\pi_j$, and a set of criteria $K$, with importances $w_k$. For every scenario-criterion pair, there is a benefit associated with selecting object $i$, denoted by $b_{kj}^i$. Which objects should be selected in order to maximize the benefit?
The above problem differs from the well-known knapsack problem in that there are stochasticity and multiple objectives to be maximized.
The following MSP model can be used to analyze the problem. Note that, to preserve the sense of the optimization (minimization), rather than maximizing the benefits of the carried objects, the value of the objects not chosen will be minimized.
$$\begin{aligned} \min_{x_i} \quad & f_{kj}(x) := \sum_i (1 - x_i)\, b_{kj}^i \quad \forall k, j \\ \text{s.t.} \quad & \sum_i v_i x_i \leq V \\ & x_i \in \{0, 1\} \quad \forall i \end{aligned} \tag{8}$$
When using the methodology developed in the previous sections, problem (8) is transformed into the following mixed-integer linear programming model:
$$
\begin{aligned}
\min_{z,\,v_k,\,z_k,\,y_{kj},\,x}\quad & z + \sum_{k} \frac{w_k}{r}\, v_k \\
\text{s.t.}\quad & z + v_k \ge z_k + \sum_{j=1}^{J} \frac{\pi_j}{\beta}\, y_{kj} \quad \forall k \\
& z_k + y_{kj} \ge \sum_{i} (1 - x_i)\, b_{kji} \quad \forall k, j \\
& \sum_{i} v_i\, x_i \le V \\
& y_{kj} \ge 0 \quad \forall k, j; \qquad x_i \in \{0, 1\} \quad \forall i \\
& z_k \text{ free},\ v_k \ge 0 \quad \forall k; \qquad z \text{ free}
\end{aligned}
$$
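For small instances, the optimal selection of model (MSP) can be cross-checked without a MILP solver, by evaluating h_r^β(x) directly over all feasible 0/1 selections. The sketch below is a minimal pure-Python illustration, not the paper's GAMS implementation; the function and argument names are ours, and it assumes the r-OWA over criteria is computed with the same worst-mass averaging formula as the β-average over scenarios.

```python
from itertools import product

def cvar(values, probs, level):
    """Probability-weighted average of the worst (largest) outcomes
    carrying total mass `level`: the beta-average / r-OWA building block."""
    pairs = sorted(zip(values, probs), reverse=True)  # worst outcomes first
    mass, acc = 0.0, 0.0
    for value, prob in pairs:
        take = min(prob, level - mass)  # partial weight on the boundary outcome
        acc, mass = acc + take * value, mass + take
        if mass >= level - 1e-12:
            break
    return acc / level

def solve_msp_bruteforce(b, weights, V, w, pi, r, beta):
    """Enumerate feasible selections x and return the one minimizing
    h_r^beta(x), where f_kj(x) = sum_i (1 - x_i) * b[k][j][i]."""
    n, K, J = len(weights), len(w), len(pi)
    best_x, best_h = None, float("inf")
    for x in product((0, 1), repeat=n):
        if sum(wi * xi for wi, xi in zip(weights, x)) > V:
            continue  # knapsack capacity violated
        # beta-average over scenarios, one value per criterion
        g = [cvar([sum((1 - x[i]) * b[k][j][i] for i in range(n))
                   for j in range(J)], pi, beta) for k in range(K)]
        h = cvar(g, w, r)  # r-OWA over criteria
        if h < best_h:
            best_x, best_h = x, h
    return best_x, best_h
```

With a single scenario and criterion and r = β = 1, the model collapses to the deterministic knapsack, which makes the sketch easy to sanity-check by hand.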
Given r , β ( 0 , 1 ] , model (MSP) obtains the x * minimizing the r-OWA of the β -averages. In order to illustrate the benefits of using model (MSP), a naive way of solving problem (8) is considered:
$$
\begin{aligned}
\min_{x}\quad & \sum_{k,j} w_k\, \pi_j \sum_{i} (1 - x_i)\, b_{kji} \\
\text{s.t.}\quad & \sum_{i} v_i\, x_i \le V \\
& x_i \in \{0, 1\} \quad \forall i
\end{aligned}
$$
Hence, model (MIP) minimizes the weighted average of the $f_{kj}$, using the importances of the criteria and the probabilities of the scenarios. Clearly, in "average" criteria-scenarios, $x^*_{MIP}$, the optimal solution of model (MIP), outperforms $x^*_{MSP}$, the optimal solution of model (MSP). Conversely, $x^*_{MSP}$ will improve on $x^*_{MIP}$ in unfavourable criteria-scenarios, as expected of a risk-averse solution.

5.1. Computational Experiments

The following sections present computational experiments for different values of r and β and different numbers of objects, scenarios, and criteria. The capacity of the knapsack, V, is set to 1 in every instance. Algorithm 1 shows how random instances are created, given a number of objects, scenarios, and criteria.
Algorithm 1 Generating random data, with U(a, b) the uniform distribution on [a, b]
1: function randomInstance(|I|, |J|, |K|)
2:     p ← U(0.25, 0.75)    ▹ proportion of objects that can fit on average
3:     W ← 1/(p·|I|)    ▹ average weight of each object
4:     for i ∈ I do
5:         w_i ← U(0.5W, 1.5W)    ▹ weight of each object
6:         for (j, k) ∈ J × K do
7:             b_kji ← U(0, 1)    ▹ value of each object
8:         end for
9:     end for
10: end function
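A direct Python transcription of Algorithm 1 might look as follows. The function name and the dictionary layout chosen for the benefits $b_{kji}$ are ours, not the paper's; the optional `rng` argument simply makes the instances reproducible.

```python
import random

def random_instance(n_objects, n_scenarios, n_criteria, rng=random):
    """Sketch of Algorithm 1: a random knapsack instance with V = 1."""
    p = rng.uniform(0.25, 0.75)        # proportion of objects that fit on average
    W = 1.0 / (p * n_objects)          # average weight of each object
    weights = [rng.uniform(0.5 * W, 1.5 * W) for _ in range(n_objects)]
    benefits = {(k, j, i): rng.uniform(0.0, 1.0)   # b_kji ~ U(0, 1)
                for k in range(n_criteria)
                for j in range(n_scenarios)
                for i in range(n_objects)}
    return weights, benefits
```

Since every weight lies in [0.5W, 1.5W], no object weighs more than three times any other, and on average about p·|I| objects fit into the unit-capacity knapsack.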
For each of the solved instances, the following values are reported:
  • $t_{MSP}$, $t_{MIP}$: solution times in seconds of models (MSP) and (MIP). From them, the following value is calculated:
$$ \Delta_{time} := \frac{t_{MSP}}{t_{MIP}} \qquad (\textit{time penalty factor}) $$
    $\Delta_{time}$, the time penalty factor, indicates how much more computing time solving model (MSP) requires compared with model (MIP).
  • z MSP * , z MIP * : optimal values of the models.
  • $f_{MSP}(x^*_{MIP})$, $f_{MIP}(x^*_{MSP})$: the objective value of $x^*_{MIP}$ in model (MSP), and vice versa.
  • To grasp the difference between the MSP and the naive approach, the following will be calculated:
$$ \Delta_{avg} := 100\, \frac{f_{MIP}(x^*_{MSP}) - z^*_{MIP}}{z^*_{MIP}} \qquad (\textit{deteriorating rate}) $$
$$ \Delta_{tail} := 100\, \frac{f_{MSP}(x^*_{MIP}) - z^*_{MSP}}{f_{MSP}(x^*_{MIP})} \qquad (\textit{improvement rate}) $$
    These quantities reflect the effect of making decision $x^*_{MSP}$ instead of $x^*_{MIP}$. Large values of $\Delta_{avg}$ indicate high penalties for making decision $x^*_{MSP}$ instead of $x^*_{MIP}$ in average scenarios-criteria. Similarly, the larger $\Delta_{tail}$, the higher the benefit obtained from making decision $x^*_{MSP}$ in tail events. They are called, respectively, the deteriorating rate and the improvement rate.
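As a worked illustration of the two rates (the helper names below are ours): an MSP solution whose average objective is 10.5 against an MIP optimum of 10 deteriorates the average by 5%, while an MIP solution whose risk-averse objective is 10 against an MSP optimum of 8 means a 20% improvement in the tail.

```python
def deteriorating_rate(f_mip_of_xmsp, z_mip):
    # Delta_avg: % loss in the average objective from choosing x*_MSP
    return 100.0 * (f_mip_of_xmsp - z_mip) / z_mip

def improvement_rate(f_msp_of_xmip, z_msp):
    # Delta_tail: % gain in the risk-averse objective from choosing x*_MSP
    return 100.0 * (f_msp_of_xmip - z_msp) / f_msp_of_xmip
```

Note the asymmetry in the denominators, taken from the definitions above: the deteriorating rate is relative to the MIP optimum, while the improvement rate is relative to the MIP solution's (worse) MSP objective.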
The models are solved in GAMS 26.1.0 with the solver IBM ILOG CPLEX 12.8.0.0, on a personal computer with an Intel Core i7 processor and 16 GB of RAM.
  • Experiment 1
The first experiment consists of a full factorial design, in which the values of | I | , | J | , | K | , r , β range over the following sets:
  • | I | { 50 , 100 , 200 }
  • | J | { 5 , 25 , 100 }
  • | K | { 3 , 6 , 9 }
  • r { 0.33 , 0.5 , 0.67 }
  • β { 0.05 , 0.1 , 0.5 }
For each tuple ( | I | , | J | , | K | ) a random instance is generated using Algorithm 1, which is then solved for every pair ( r , β ) . All criteria and scenarios are given the same importances and probabilities; that is, $w_k = 1/|K|$, $\pi_j = 1/|J|$. The time limit was set to two hours per instance, within which all but three of the $3^5 = 243$ configurations were solved to optimality.
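The factorial design can be enumerated directly; the short sketch below (variable names are ours) builds the Cartesian product of the five parameter sets and confirms the count of configurations.

```python
from itertools import product

sizes_I = [50, 100, 200]
sizes_J = [5, 25, 100]
sizes_K = [3, 6, 9]
rs      = [0.33, 0.5, 0.67]
betas   = [0.05, 0.1, 0.5]

# one tuple per (|I|, |J|, |K|, r, beta) configuration: 3^5 = 243 in total
configs = list(product(sizes_I, sizes_J, sizes_K, rs, betas))
```

In the experiment, one random instance is generated per ( | I | , | J | , | K | ) triple (27 instances) and each is solved for all nine ( r , β ) pairs.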
  • Experiment 2
For the next experiment, 100 random instances are created, keeping the values of | I | , | J | , | K | , r , β constant and equal to the median values of the previous experiment; that is, | I | = 100 , | J | = 25 , | K | = 6 , r = 0.5 , β = 0.1 . All criteria and scenarios are given the same importances and probabilities. All 100 instances were solved to optimality.

5.2. Results

  • Experiment 1
Table A4 (in Appendix A) shows, for each of the 243 instances, the solution times of the MSP and MIP models, and the deteriorating and improvement rates of using the MSP solution instead of the MIP solution (measured as deviation from the MIP solution).
Table 7 shows the objective values, by scenario and criterion, of the first instance of the experiment. This instance contains 50 objects, five scenarios, and three criteria, with the parameters r and β set to 0.33 and 0.05, respectively. The results show that the MSP solution (Table 7a) is more balanced across every scenario and criterion, with a worst value of 13.71. On the other hand, the MIP solution (Table 7b) attains larger (worse) values on some scenarios and criteria.
Table 8 shows the correlations of the times and rates with the parameters of the instance. It can be seen that the MSP solution has a higher impact when fewer scenarios are considered. In addition, the MSP solution times decrease when β increases, that is, when more scenarios are included in the β-average computation.
This observation is confirmed by Table 9, in which it can be seen that the median time penalty factor (how much longer it takes to solve the MSP model than the MIP model) is much smaller when β = 0.5 than when β = 0.05 .
The solution times of the MSP model are alarmingly high for some instances, because the admissible integrality gap has been set to zero. If this requirement is relaxed, all 243 instances reach an integrality gap smaller than 5% in about three seconds, 2% in about five seconds, and 1% in about 88 seconds.
Table 10 groups instances by r and β , and shows the deteriorating and improvement rates. It can be seen that the improvement rate (in the tail) is generally higher than the deteriorating rate (in the average), especially in cases with small r and β .
This claim is also supported by Figure 2, where each of the 243 instances is plotted according to its values of Δ avg and Δ tail , grouped by the values of ( r , β ) . Almost all instances lie above the line Δ avg = Δ tail , which shows that the MSP solution improves more in the tail than it loses in average situations. In addition, the largest improvements in the tail occur in instances with β = 0.05 (one of the usual values taken for CVaR), and especially with the smallest values of r. As r and β grow, the differences between the MIP and MSP solutions shrink.
Finally, Figure 3 shows the values of f k j ( x ) , where x = x MIP * (blue squares) and x = x MSP * (orange circles), for a single instance: 200 objects, 100 scenarios, three criteria, r = 0.33 , β = 0.05 . The values of f k j ( x ) are plotted for each criterion, with the scenarios sorted from most to least favourable. As expected, x MIP * performs better than x MSP * in average criteria-scenarios, on the central part of the plots; but x MSP * is better in unfavourable situations, those with higher values of f k j ( x ) . This is especially visible for the second criterion, where three scenarios leave the objective values of x MIP * out of control.
  • Experiment 2
Table A5 (in Appendix A) contains the results for each of the 100 instances, all of them with constant parameters | I | = 100 , | J | = 25 , | K | = 6 , r = 0.5 , β = 0.1 .
Table 11 contains a summary of the results, where it can again be seen that the improvements in the tail outweigh the losses in average situations. Although single instances may take a long time to solve, the median MSP solution time (3.74 s) is certainly satisfactory. It is worth mentioning that the models were implemented without providing any extra bounds or known cuts that could reduce solution times.

6. Conclusions

In this paper, a new concept of solution has been proposed for Multiobjective Stochastic Programming problems, exploiting risk-aversion. The proposed concept combines the notions of conditional value-at-risk and ordered weighted averaging operator to find solutions protected against risks due to the uncertainty and under-achievement of criteria. Thus, this concept can be particularly useful in real-life situations, where there exists a great concern with respect to unfavourable situations, such as emergency management or portfolio optimization.
The solution concept is supported by an efficient way to compute it by a Mathematical Programming problem. This model is linear, provided that the underlying problem can be linearly representable. Numerical experiments have been conducted for validating this approach, solving a multiobjective stochastic knapsack problem.
The research has shown that the improvements in the tail (unfavourable situations) are consistently higher than the losses in average situations, especially when small values of the parameters β and r are chosen. These differences, although clearly noticeable, are not as large as one might expect, possibly due to the randomness of the data. It is reasonable to assume that, in actual real-life problems, some choices are more conservative across every scenario and criterion, and thus preferable for risk-averse attitudes.
The results have shown that there is a clear increase in computational time as compared with risk-neutral methods; however, this is arguably an acceptable price to pay for being risk-averse. Furthermore, this could also be due to the random nature of the data. Nevertheless, it was also shown that allowing even rather small integrality gaps (1%) leads to a drastic improvement in computing times.

Author Contributions

Conceptualization, J.P. and B.V.; methodology, J.L., J.P. and B.V.; software, J.L.; formal analysis, J.L., J.P. and B.V.; writing—original draft preparation, J.L.; writing—review and editing, J.L., J.P. and B.V.; visualization, J.L.; project administration, J.P. and B.V.; funding acquisition, B.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the UCM-Santander grant CT27/16-CT28/16, the Government of Spain grants MTM2016-74983-C02-01 and PID2019-108679RB-I00 (LOG4D), H2020 grant MSCA-RISE 691161 (GEO-SAFE), and Fundación BBVA 2019 grant Complex networks meet data science.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MCDM: Multicriteria decision making
VaR: Value-at-risk
CVaR: Conditional value-at-risk
MSP: Multiobjective stochastic programming
OWA: Ordered weighted averaging
MOP: Multiple objective problem

Appendix A

Table A1. Values of alternative 2 by scenario (j) and criteria (k).
Criteria
w 1 = 0.20 w 2 = 0.10 w 3 = 0.20 w 4 = 0.25 w 5 = 0.15 w 6 = 0.10
k 1 k 2 k 3 k 4 k 5 k 6
scenarios π_1 = 0.15  j_1  0.40 0.58 0.39 0.45 0.54 0.18
π_2 = 0.20  j_2  0.68 0.74 0.70 0.15 0.54 0.72
π_3 = 0.30  j_3  0.93 0.52 0.23 0.82 0.21 0.03
π_4 = 0.25  j_4  0.37 0.85 0.07 0.42 0.52 0.22
π_5 = 0.10  j_5  0.92 0.13 0.71 0.39 0.90 0.87
β-average, β = 0.30  0.930 0.832 0.703 0.820 0.660 0.770
r-OWA, r = 0.17  0.930
Table A2. Values of alternative 3 by scenario (j) and criteria (k).
Criteria
w 1 = 0.20 w 2 = 0.10 w 3 = 0.20 w 4 = 0.25 w 5 = 0.15 w 6 = 0.10
k 1 k 2 k 3 k 4 k 5 k 6
scenarios π_1 = 0.15  j_1  0.80 0.90 0.61 0.28 0.94 0.09
π_2 = 0.20  j_2  0.29 0.48 0.26 0.23 0.21 0.07
π_3 = 0.30  j_3  0.73 0.65 0.32 0.56 0.95 0.65
π_4 = 0.25  j_4  0.58 0.39 0.21 0.66 0.70 0.93
π_5 = 0.10  j_5  0.73 0.22 0.33 0.31 0.32 0.38
β-average, β = 0.30  0.765 0.775 0.468 0.643 0.950 0.883
r-OWA, r = 0.17  0.943
Table A3. Values of alternative 4 by scenario (j) and criteria (k).
Criteria
w 1 = 0.20 w 2 = 0.10 w 3 = 0.20 w 4 = 0.25 w 5 = 0.15 w 6 = 0.10
k 1 k 2 k 3 k 4 k 5 k 6
scenarios π_1 = 0.15  j_1  0.30 0.52 0.12 0.68 0.46 0.73
π_2 = 0.20  j_2  1.00 0.57 0.46 0.82 0.90 0.72
π_3 = 0.30  j_3  0.18 0.76 0.30 0.34 0.54 0.99
π_4 = 0.25  j_4  0.53 0.21 0.13 0.12 0.66 0.86
π_5 = 0.10  j_5  0.98 0.46 0.50 0.29 0.27 0.40
β-average, β = 0.30  0.993 0.760 0.473 0.773 0.820 0.990
r-OWA, r = 0.17  0.993
Table A4. All instances of first experiment. The three instances with 200 objects, 100 scenarios, 6 criteria and β = 0.05 did not reach the optimal solution in 2 h. The integrality gaps of the solution shown are 0.31 % , 0.24 % and 0.19 % for r = 0.33 , 0.5 and 0.67 respectively.
β:  0.05 | 0.1 | 0.5
r:  0.33, 0.5, 0.67 (within each β block)
|I| |J| |K|, then for each (β, r) combination the four columns t_MSP, t_MIP, Δ_avg, Δ_tail
50530.120.133.758.010.150.123.758.010.180.143.514.540.140.143.756.590.120.123.756.590.200.173.513.790.120.123.755.840.120.123.755.840.130.123.163.64
60.220.113.795.070.250.113.795.070.180.111.033.010.300.133.794.010.250.133.794.010.230.151.482.100.230.163.773.580.230.143.773.580.180.171.481.47
90.280.141.766.670.360.141.766.670.200.151.583.260.220.142.025.660.210.142.025.660.220.141.242.100.200.152.024.390.220.132.024.390.250.151.201.37
2530.760.182.475.370.660.172.472.850.260.172.001.550.640.182.475.080.620.162.472.510.250.151.000.970.600.172.474.930.510.161.792.520.260.141.000.69
61.200.161.875.770.910.291.963.700.490.180.791.811.150.181.874.930.740.181.143.510.410.180.701.550.930.161.174.330.780.181.173.450.380.160.711.22
90.690.160.614.210.780.160.432.780.570.160.670.920.520.150.613.900.960.160.442.140.610.160.670.650.690.150.613.251.020.180.391.750.470.180.510.68
10031.150.150.072.430.780.150.072.150.440.140.070.161.070.140.072.020.830.190.071.740.340.140.070.161.140.150.071.810.850.160.071.530.360.140.070.14
62.510.200.772.244.450.220.781.193.520.200.230.472.620.250.771.995.090.180.311.105.450.200.190.292.700.220.771.383.310.190.470.635.590.190.230.27
94.060.180.030.411.470.170.030.301.070.160.000.002.750.170.030.291.120.160.030.301.110.160.000.002.060.190.030.321.100.160.030.181.160.170.000.00
100531.240.263.817.631.290.203.817.630.370.222.084.470.720.183.815.220.660.183.815.220.320.271.562.330.740.193.384.021.130.233.384.020.280.200.631.36
68.680.225.687.128.690.195.687.120.430.174.462.920.650.204.305.641.060.194.305.640.350.170.811.281.180.223.934.200.640.183.934.200.280.180.530.88
93.310.182.193.143.260.202.193.140.670.180.972.901.040.202.192.860.960.202.192.860.230.140.641.881.020.170.922.310.900.170.922.310.240.180.411.18
25310.650.172.966.523.390.182.074.000.290.150.481.737.090.182.965.461.830.192.963.670.300.140.480.953.460.192.194.981.300.162.963.500.340.160.410.59
632.120.202.784.479.180.190.783.530.440.180.521.3126.530.182.593.063.640.220.612.470.320.150.260.7912.770.170.602.620.900.170.502.000.410.170.260.65
98.580.180.725.521.900.170.973.490.420.180.240.826.320.190.724.751.240.160.973.060.510.200.240.441.600.191.123.830.880.191.122.530.590.170.500.23
100351.230.222.211.121.670.210.271.360.820.180.090.2522.700.212.211.161.220.190.341.050.810.180.050.1718.750.162.211.170.840.220.340.920.750.180.050.13
648.250.180.762.5631.870.170.622.0562.140.150.420.7024.730.180.712.4827.080.180.621.8142.180.190.280.5520.260.170.602.1022.090.200.751.487.790.200.170.50
92.160.190.371.483.340.180.290.771.840.170.180.411.800.170.341.252.870.200.280.692.220.190.260.221.670.180.341.142.770.190.200.633.090.160.080.13
20053146.240.231.613.81140.120.201.613.817.710.231.301.61151.220.211.613.30135.090.241.613.304.600.211.301.5883.440.221.103.0689.200.211.103.064.210.221.301.55
688.700.191.082.5089.690.191.082.505.140.170.720.8396.440.191.081.9191.660.181.081.912.930.180.910.5839.260.180.941.6832.920.180.941.680.700.170.580.44
9468.370.153.739.18484.890.143.739.1829.460.161.744.99304.040.163.696.24305.900.163.696.242.710.161.743.54110.030.153.384.92107.340.143.384.920.910.171.202.28
2535629.580.332.757.844765.420.242.245.334.860.240.811.405430.900.252.756.733394.560.242.755.055.320.280.811.226896.050.252.756.152546.430.212.754.915.660.340.811.13
62886.130.191.674.36146.480.171.932.770.570.170.190.791651.910.211.674.2015.060.221.932.290.710.210.190.8193.660.191.463.6419.360.191.932.020.550.180.120.40
91235.120.322.052.59342.320.220.961.221.990.210.220.26404.700.291.902.1228.090.210.880.760.820.200.130.1799.730.211.991.582.230.220.390.580.870.200.060.08
1003703.050.232.114.15373.650.222.032.701.420.220.470.63731.090.222.112.92157.290.202.031.961.110.220.540.47596.880.272.062.29349.780.302.131.583.220.290.530.44
67222.950.220.603.431814.250.180.482.0822.110.210.130.447217.640.140.472.57916.420.210.481.757.040.240.280.277216.940.150.472.11656.480.200.371.417.890.220.240.20
93321.230.340.070.2816.400.200.020.182.330.170.080.08198.140.190.070.3214.840.210.020.132.310.210.080.0547.160.230.080.339.770.210.010.122.630.200.060.04
Table A5. All instances of second experiment. | I | = 100 , | J | = 25 , | K | = 6 , r = 0.5 , β = 0.1 .
t MSP t MIP Δ avg Δ tail t MSP t MIP Δ avg Δ tail
31.150.231.533.202.150.162.242.09
1.920.211.666.1720.090.161.806.14
8.750.240.523.077.180.162.131.93
28.060.235.082.861.020.163.033.61
1.360.301.001.803.580.241.816.12
3.670.202.272.503.640.191.193.07
2.000.202.512.03128.690.233.272.98
192.110.162.618.230.890.181.450.93
0.940.200.432.231.620.231.853.56
0.800.181.642.554.190.222.101.97
16.400.192.232.452.160.190.161.46
1.210.182.821.501.460.242.482.00
1.790.200.722.770.690.201.792.54
21.780.214.504.6120.730.202.263.50
1.350.190.690.861.860.241.772.63
31.110.190.983.2114.920.171.998.57
8.440.191.823.810.780.200.851.92
1.750.210.880.9210.480.232.502.29
1.940.212.182.651.630.242.082.29
0.980.200.873.2710.780.180.341.80
27.720.222.035.2038.800.201.964.69
14.720.153.340.9919.740.240.652.20
0.670.240.812.691.370.302.822.92
3.540.202.642.756.280.192.022.08
6.370.212.796.3522.270.341.913.13
1.860.230.932.091.690.202.212.42
1.540.202.003.4527.770.190.763.28
40.160.172.063.442.000.212.571.93
7.230.213.173.172.610.182.143.33
5.770.172.841.9840.930.181.534.61
2.100.191.393.0018.840.160.894.37
404.700.191.502.8211.260.163.984.76
24.260.184.813.4214.410.181.825.87
0.760.201.283.8812.140.162.752.75
0.640.200.871.3912.580.171.423.46
0.970.231.772.190.840.180.412.15
0.530.181.952.045.200.193.802.02
7.240.222.211.6828.160.154.683.56
0.870.250.711.4239.100.163.473.59
8.510.202.484.0619.220.163.783.13
13.060.204.442.800.560.200.633.22
59.780.205.674.910.680.171.922.44
67.500.192.963.020.700.171.861.40
3.800.170.791.2015.200.170.782.66
3.250.202.241.570.880.172.312.13
5.230.161.144.910.590.151.781.68
0.710.170.893.011.080.202.213.14
4.190.173.092.321.140.170.762.48
3.530.181.376.331.580.191.453.14
19.990.143.485.0513.480.181.715.41

Figure 1. Results from the illustrative example. (a) Optimal alternative for some values of r and β , where each of the four alternatives is colour-coded. (b) Optimal values of function h r β ( x ) for some values of r and β .
Figure 2. Values Δ avg and Δ tail for each of the 243 instances, grouped by values of ( r , β ) .
Figure 3. Single instance with 100 scenarios and three criteria. For each k, sorted values of f k j ( x ) , where x = x MIP * in blue squares and x = x MSP * in orange circles.
Table 1. Small example of β -average for different values of β .
Scenario  β
1 2 3 4 5  0.2 0.3 0.5
π_j:  0.2 0.1 0.3 0.25 0.15
f_kj(x):  10 7 4 3 2 | β-average: 10 9 7
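The β-average columns of Table 1 can be reproduced by averaging the worst scenarios that jointly carry a probability mass of β, splitting the probability of the boundary scenario when needed. A small Python check (assuming, as in the table, that the scenarios are already listed from worst, largest f, to best):

```python
# scenario outcomes and probabilities from Table 1, worst outcome first
f  = [10, 7, 4, 3, 2]
pi = [0.2, 0.1, 0.3, 0.25, 0.15]

def beta_avg(beta):
    """beta-average: mean of the worst outcomes carrying mass beta."""
    mass, acc = 0.0, 0.0
    for value, prob in zip(f, pi):       # already sorted by decreasing value
        take = min(prob, beta - mass)    # partial weight on the boundary scenario
        acc, mass = acc + take * value, mass + take
        if mass >= beta:
            break
    return acc / beta

# reproduces the last three columns of Table 1: beta = 0.2, 0.3, 0.5
print([beta_avg(b) for b in (0.2, 0.3, 0.5)])  # ≈ [10, 9, 7]
```

For instance, for β = 0.3 the worst 0.3 of probability mass is scenario 1 (mass 0.2, value 10) plus scenario 2 (mass 0.1, value 7), giving (0.2·10 + 0.1·7)/0.3 = 9.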
Table 2. Small example of r-OWA for different values of r.
Criterion  r
1 2 3 4 5  0.2 0.3 0.5
w_k:  0.2 0.1 0.3 0.25 0.15
g_k(x):  10 7 4 3 2 | r-OWA: 10 9 7
Table 3. Values of two alternatives for each scenario j and criterion k, together with their β -averages ( β = 1 2 ) and r-OWAs ( r = 2 3 ).
(a) Alternative 1(b) Alternative 2
k 1 k 2 k 3 k 1 k 2 k 3
j 1 0.800.400.30 j 1 0.700.450.65
j 2 0.600.200.65 j 2 0.800.300.50
β -average0.800.400.65 β -average0.800.450.65
r-OWA0.725r-OWA0.725
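Table 3's two aggregation stages can be checked numerically: first a β-average per criterion over scenarios, then an r-OWA over the resulting criterion values. A sketch assuming equal scenario probabilities and equal criterion weights (neither is stated in the table); the helper `tail_average` computes a weighted mean of the best values whose total weight is the given fraction:

```python
def tail_average(values, weights, beta):
    """Weighted mean of the best values whose total weight is beta."""
    pairs = sorted(zip(values, weights), key=lambda p: -p[0])  # best first
    mass, acc = 0.0, 0.0
    for v, w in pairs:
        take = min(w, beta - mass)
        acc += take * v
        mass += take
        if mass >= beta:
            break
    return acc / beta

# Table 3 data: rows = scenarios j_1, j_2; columns = criteria k_1..k_3
alts = {
    1: [[0.80, 0.40, 0.30], [0.60, 0.20, 0.65]],
    2: [[0.70, 0.45, 0.65], [0.80, 0.30, 0.50]],
}
pi = [0.5, 0.5]      # assumed equal scenario probabilities
w = [1/3, 1/3, 1/3]  # assumed equal criterion weights
results = {}
for a, rows in alts.items():
    g = [tail_average([rows[j][k] for j in range(2)], pi, 0.5) for k in range(3)]
    results[a] = (g, tail_average(g, w, 2/3))
    print(a, [round(v, 2) for v in g], round(results[a][1], 3))
```

Under these assumptions both alternatives reach the same r-OWA of 0.725, matching the table: the r-OWA with r = 2/3 averages the two best β-averages, and those coincide (0.80 and 0.65) for both alternatives.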
Table 4. Values of alternative 1 by scenario (j) and criterion (k).

                          k_1       k_2       k_3       k_4       k_5       k_6
                        w_1=0.20  w_2=0.10  w_3=0.20  w_4=0.25  w_5=0.15  w_6=0.10
π_1=0.15   j_1            0.51      0.27      0.39      0.45      0.75      0.76
π_2=0.20   j_2            0.58      0.65      0.47      0.26      0.90      0.24
π_3=0.30   j_3            0.48      0.44      0.90      0.50      0.93      0.65
π_4=0.25   j_4            0.76      0.18      0.01      0.90      0.56      0.02
π_5=0.10   j_5            0.86      0.36      0.21      0.28      0.63      0.72
Table 5. Values of alternative 1 by scenario (j) and criterion (k), together with the β-averages (β = 0.30) per criterion and the resulting r-OWA (r = 0.17).

                          k_1       k_2       k_3       k_4       k_5       k_6
                        w_1=0.20  w_2=0.10  w_3=0.20  w_4=0.25  w_5=0.15  w_6=0.10
π_1=0.15   j_1            0.51      0.27      0.39      0.45      0.75      0.76
π_2=0.20   j_2            0.58      0.65      0.47      0.26      0.90      0.24
π_3=0.30   j_3            0.48      0.44      0.90      0.50      0.93      0.65
π_4=0.25   j_4            0.76      0.18      0.01      0.90      0.56      0.02
π_5=0.10   j_5            0.86      0.36      0.21      0.28      0.63      0.72
β-average (β = 0.30)      0.793     0.580     0.900     0.833     0.930     0.728
r-OWA (r = 0.17)          0.927
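The β-average row and the r-OWA of Table 5 follow from the raw values of Table 4 by the same two-stage tail averaging. A sketch; the function name `tail_average` is ours, and we read the displayed r = 0.17 as the exact value 1/6, which reproduces the reported 0.927:

```python
def tail_average(values, weights, beta):
    """Weighted mean of the best values whose total weight is beta."""
    pairs = sorted(zip(values, weights), key=lambda p: -p[0])  # best first
    mass, acc = 0.0, 0.0
    for v, w in pairs:
        take = min(w, beta - mass)
        acc += take * v
        mass += take
        if mass >= beta:
            break
    return acc / beta

# Table 4: rows = scenarios j_1..j_5, columns = criteria k_1..k_6
F = [[0.51, 0.27, 0.39, 0.45, 0.75, 0.76],
     [0.58, 0.65, 0.47, 0.26, 0.90, 0.24],
     [0.48, 0.44, 0.90, 0.50, 0.93, 0.65],
     [0.76, 0.18, 0.01, 0.90, 0.56, 0.02],
     [0.86, 0.36, 0.21, 0.28, 0.63, 0.72]]
pi = [0.15, 0.20, 0.30, 0.25, 0.10]       # scenario probabilities
w = [0.20, 0.10, 0.20, 0.25, 0.15, 0.10]  # criterion weights

# Stage 1: beta-average per criterion; Stage 2: r-OWA across criteria.
g = [tail_average([F[j][k] for j in range(5)], pi, 0.30) for k in range(6)]
h = tail_average(g, w, 1/6)  # assumed: displayed r = 0.17 is 1/6 rounded
print([round(v, 3) for v in g], round(h, 3))
```

For k_1, for instance, the best 0.30 of probability mass comes from j_5 (0.86, mass 0.10) and j_4 (0.76, mass 0.20), so g_1 = (0.10·0.86 + 0.20·0.76)/0.30 = 0.793.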
Table 6. β-averages and r-OWAs for each of the four feasible alternatives of the example.

                 β-averages                                                    r-OWA
                 g_1^β(x)  g_2^β(x)  g_3^β(x)  g_4^β(x)  g_5^β(x)  g_6^β(x)   h_r^β(x)
Alternative 1     0.793     0.580     0.900     0.833     0.930     0.728      0.927
Alternative 2     0.930     0.832     0.703     0.820     0.660     0.770      0.930
Alternative 3     0.765     0.775     0.468     0.643     0.950     0.883      0.943
Alternative 4     0.993     0.760     0.473     0.773     0.820     0.990      0.993
Table 7. Objective values by scenario–criterion of the solutions obtained with the multiobjective stochastic programming (MSP) and MIP models, for the first instance of the first experiment.

(a) MSP solution                     (b) MIP solution
        k_1     k_2     k_3                  k_1     k_2     k_3
j_1    11.97   11.26   11.00         j_1     9.65   10.75    9.71
j_2    11.96    9.92   13.71         j_2    11.19   10.17   14.90
j_3    13.48   13.51   10.92         j_3    14.05   13.55    9.40
j_4    13.62   13.13   13.71         j_4    14.12   13.00   13.35
j_5    12.94   11.35   13.47         j_5    13.64   10.33   11.42
Table 8. Correlations.

             |I|     |J|     |K|      r       β
t_MSP       0.34    0.09   −0.11   −0.05   −0.19
t_MIP       0.51    0.18   −0.14   −0.03   −0.07
Δ_time      0.31    0.11   −0.08   −0.02   −0.18
Δ_avg      −0.05   −0.57   −0.28   −0.09   −0.36
Δ_tail     −0.07   −0.56   −0.18   −0.21   −0.50
Table 9. MSP runtimes and increases as compared to MIP runtimes, grouped by β.

        t_MSP                                          Δ_time
β       Min    Mean     Median   Max       Std        Min    Mean      Median   Max        Std
0.05    0.12   659.49    6.32    7222.95   1787.07    0.94   3188.96   32.77    50473.04   9472.55
0.10    0.12   212.47    2.23    4765.42    728.49    0.98   1002.35   11.09    20192.48   3245.85
0.50    0.13     3.49    0.67      62.14      9.05    1.06     19.14    3.75      414.29     55.51
Table 10. Values of Δ_avg and Δ_tail, grouped by r and β.

              Δ_avg                                Δ_tail
r      β      Min    Mean   Median   Max    Std    Min    Mean   Median   Max    Std
0.33   0.05   0.03   1.94   1.87     5.68   1.42   0.28   4.37   4.21     9.18   2.43
       0.10   0.02   1.70   1.61     5.68   1.44   0.18   3.54   2.85     9.18   2.42
       0.50   0.00   0.93   0.52     4.46   1.08   0.00   1.57   0.92     4.99   1.46
0.50   0.05   0.03   1.87   1.90     4.30   1.30   0.29   3.58   3.30     6.73   1.89
       0.10   0.02   1.65   1.14     4.30   1.37   0.13   2.87   2.47     6.59   1.86
       0.50   0.00   0.72   0.54     3.51   0.75   0.00   1.07   0.79     3.79   1.01
0.67   0.05   0.03   1.64   1.17     3.93   1.24   0.32   3.04   3.06     6.15   1.62
       0.10   0.01   1.50   1.10     3.93   1.31   0.12   2.43   2.02     5.84   1.58
       0.50   0.00   0.60   0.50     3.16   0.66   0.00   0.80   0.59     3.64   0.81
Table 11. Summary of experiment 2.

         t_MSP    t_MIP    Δ_time     Δ_avg   Δ_tail
mean     16.98    0.20       91.31    2.03    3.09
std      46.57    0.03      254.68    1.12    1.49
min       0.53    0.14        2.81    0.16    0.86
25%       1.37    0.17        6.73    1.18    2.09
50%       3.74    0.19       19.72    1.93    2.81
75%      15.50    0.21       86.19    2.52    3.51
max     404.70    0.34     2175.82    5.67    8.57
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

León, J.; Puerto, J.; Vitoriano, B. A Risk-Aversion Approach for the Multiobjective Stochastic Programming Problem. Mathematics 2020, 8, 2026. https://doi.org/10.3390/math8112026
