Set-Valued T-Translative Functions and Their Applications in Finance

Abstract: A theory is developed for set-valued functions that are translative with respect to a linear operator. It is shown that such functions cover a wide range of applications, from projections in Hilbert spaces and set-valued quantiles for vector-valued random variables to scalar or set-valued risk measures in finance with defaultable or nondefaultable securities. Primal, dual, and scalar representation results are given, among them an infimal convolution representation, which is not so well known even in the scalar case. Along the way, new concepts of set-valued lower/upper expectations are introduced, and dual representation results are formulated using such expectations. An extension to random sets is discussed at the end. The principal methodology is the application of the complete lattice framework of set optimization.


Introduction
Set-valued risk measures for multivariate financial positions [1][2][3][4] make sense if there is more than one asset eligible for risk-compensating deposits or as an accounting unit. A typical situation is Kabanov's model of a multicurrency market with frictions [5]. In such market models, objects such as superhedging prices turn into sets of superhedging portfolios since the operation of taking the infimum or supremum can no longer be performed in a meaningful way if applied to sets in IR^d with respect to the order relations generated, e.g., by solvency cones. This makes it extremely difficult to find multidimensional analogs to intervals of arbitrage-free prices or good deal bounds [6] or to apply the utility indifference pricing method.
Moreover, dual representation theorems such as the superhedging theorems in [5,7] involve vector-valued consistent price processes as dual variables instead of martingale measures and their densities. One may ask if this is an ad hoc construction or can be embedded into a general duality theory.
"Set-valuedness" provides an elegant way to overcome such difficulties and arrives at concepts and formulas that are very close to the scalar case. More specifically, infima and suprema can be understood in complete lattices of sets that basically share all order-related properties with the extended real line except the totalness of the order relation. It should be noted that such set-valuedness is also intrinsic in common one-dimensional constructions. For example, sets of lower and upper quantiles are intervals of the form {a} + IR + and {b} − IR + , as are the set of superhedging and subhedging cash amounts in market models with a single numéraire.
In this paper, a general framework for set-valued translative functions and their representation by scalar families is provided, and it is shown that it does not only cover set-valued risk measures, but also set-valued lower and upper expectations, projections, aggregation mappings, and set-valued quantiles for multivariate positions. Dualization procedures in complete lattices of sets are shown to produce the "right" dual variables.
The concepts and definitions are more general than the corresponding ones in the literature; some are even new, such as set-valued lower and upper expectations, and they cover, for example, also the case of multiple defaultable securities as eligible assets for risk compensation [8] (the case with a single defaultable security was already discussed, e.g., in [9]). A number of already existing applications such as set-valued systemic risk measures [10] and conditional risk measures [11,12] were intentionally left out since their inclusion would have required several more pages of technical preparation. However, an unusual interpretation of conditional expectations as adjoint operators in Section 4.2 below indicates directions for the extension of the theory proposed in this paper to the dynamic case.

Complete Lattices of Sets
Let Z be a nontrivial, real linear space. The power set of Z (including the empty set ∅) is denoted by P(Z). The usual elementwise (Minkowski) sum of two nonempty sets A, B ⊆ Z is defined by

A + B = {a + b | a ∈ A, b ∈ B}

and extended by A + ∅ = ∅ + A = ∅ for A ∈ P(Z). A convex cone is a nonempty set C ⊆ Z satisfying sC ⊆ C for all s > 0 and C + C ⊆ C, where sA = {sz | z ∈ A} for a nonempty set A ∈ P(Z). Moreover, for nonempty sets A ⊆ Z, the operation −A = {−a | a ∈ A} is also used, as well as −∅ = ∅. Let C ⊆ Z be a convex cone with 0 ∈ C. Such a cone generates a vector preorder, i.e., a reflexive, transitive relation, denoted by ≤_C, which is compatible with the algebraic operations on Z via

z_1 ≤_C z_2 ⇔ z_2 − z_1 ∈ C.

The cone C is also called the positivity cone of ≤_C since it can be recovered by C = {z ∈ Z | 0 ≤_C z}.
The set

P(Z, C) = {D ⊆ Z | D = D + C},

along with its obvious twin P(Z, −C), serves as the basic image set of set-valued functions in this paper. It is known [13] that (P(Z, C), ⊇) and (P(Z, −C), ⊆) are complete lattices. The result is stated for the sake of future reference.

Proposition 1. The pair (P(Z, C), ⊇) is a complete lattice; for a nonempty subset A ⊆ P(Z, C), infimum and supremum are given by

inf A = ⋃_{D ∈ A} D,  sup A = ⋂_{D ∈ A} D.  (1)

Likewise, (P(Z, −C), ⊆) is a complete lattice with

inf A = ⋂_{D ∈ A} D,  sup A = ⋃_{D ∈ A} D,  (2)

respectively.

Remark 1.
The situation is completely symmetric: just the roles of the infimum and supremum are swapped. Moreover, ∅ is the top element in (P(Z, C), ⊇) and the bottom element in (P(Z, −C), ⊆) if one understands that ⊇ in P(Z, C) corresponds to ≤ for real numbers, whereas ⊆ in P(Z, −C) corresponds to ≤ in IR. This is motivated by the interpretation of C as the positivity cone in Z with respect to the preorder ≤_C on Z: for A, B ∈ P(Z, C), A ⊇ B means that for each b ∈ B, there is a ∈ A with a ≤_C b, whereas both sets are directed upwards with respect to ≤_C (clearly, ⊇ can be considered as "≤" in this case). Similarly, for A, B ∈ P(Z, −C), A ⊆ B means that for each a ∈ A, there is b ∈ B with a ≤_C b, whereas both sets are directed downwards with respect to ≤_C. Such order relations, with the same understanding of ⊇ as "≤" for sets that are directed upwards and ⊆ as "≤" for sets that are directed downwards, are actually the basis for extending the ≤-relation from the rational numbers to the real numbers via upper and lower Dedekind cuts, i.e., sets of the form [a, +∞) and (−∞, b], respectively, for rational numbers a, b. Within an economic context, such order relations with the same interpretation were used in [14], for example. More details and references can be found in ([13], Sec. 2.2) and [15].
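A finite toy model (the grid size and the generating points are ours) illustrates how union and intersection act as lattice infimum and supremum in (P(Z, C), ⊇):

```python
from itertools import product

# Discretized stand-in for P(Z, C): "upper sets" on a finite grid in Z = IR^2,
# ordered by the cone C = IR^2_+ (componentwise >=). A set D with D = D + C
# is represented by the grid points it contains.
GRID = [(i, j) for i, j in product(range(5), repeat=2)]

def upper_set(generators):
    """({g} + C for g in generators), intersected with the finite grid."""
    return {p for p in GRID for g in generators
            if p[0] >= g[0] and p[1] >= g[1]}

A = upper_set({(1, 3), (3, 1)})
B = upper_set({(2, 2)})

inf_AB = A | B   # lattice infimum in (P(Z, C), ⊇): the union
sup_AB = A & B   # lattice supremum in (P(Z, C), ⊇): the intersection

# both results are again upper sets, i.e., elements of the lattice:
assert inf_AB == upper_set({(1, 3), (3, 1), (2, 2)})
assert sup_AB == upper_set({(2, 3), (3, 2)})
```

Note that the supremum of A and B is strictly smaller than both upper sets generated by single points, mirroring the fact that suprema of sets of reals need not be attained at a given element.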
Several lattices are used in the sequel, which consist of subsets of P(Z, C), P(Z, −C). We shall denote

C(Z, C) = {D ∈ P(Z, C) | D = co D},  F(Z, C) = {D ∈ P(Z, C) | D = cl D},  G(Z, C) = {D ∈ P(Z, C) | D = cl co D},

with the twins C(Z, −C), F(Z, −C), G(Z, −C) defined analogously, where co A denotes the convex hull of a set A ⊆ Z, cl A its closure (whenever Z carries a topology), and D − C = D + (−C).
As before, (C(Z, C), ⊇), as well as (C(Z, −C), ⊆) are complete lattices. The "intersection" formulas for the supremum/infimum coincide with the ones in (1) and (2), respectively, whereas the "union" formulas have to be adapted: in C(Z, C), for example, one has

inf A = co ⋃_{D ∈ A} D  (3)

for nonempty A ⊆ C(Z, C). The "union" formulas for F and G are those in (3) where the closure replaces the convex hull in F and the closure of the convex hull is taken in G.

Further Notation
Throughout the paper, the following notation is used:
• If X is a locally convex space, a linear space X* together with a duality pairing (a bilinear functional from X × X* → IR that separates points in X and X*) is considered, which turns (X, X*) into a dual pair of linear spaces (see ([16], Definition 5.90)). The duality pairing for x ∈ X, x* ∈ X* is written as x*(x). The topology on X is assumed to be consistent with the dual pair, i.e., X* can be identified with the topological dual of X (the linear space of all continuous linear functionals on X). Such a topology is always separated (also known as Hausdorff; see ([16], Lemma 5.97)). A topology on X* will not be considered in this paper;
• If X is a separated locally convex space and D ⊆ X a closed convex set, the set rec D = {y ∈ X | D + {y} ⊆ D} ⊆ X is a closed convex cone, the recession cone of D. It is the largest (with respect to inclusion) closed convex cone B ⊆ X satisfying D + B ⊆ D. If (X, X*) is a dual pair of locally convex spaces, the function σ_D : X* → IR ∪ {±∞} defined by σ_D(x*) = sup_{x ∈ D} x*(x) is the usual support function of D. Its domain, namely dom σ_D = {x* ∈ X* | σ_D(x*) < +∞} ⊆ X*, is a convex cone, which is called the barrier cone of D and denoted by barr D;
• If (X, X*) is a dual pair of locally convex spaces and B ⊆ X is a cone, the set

B^+ = {x* ∈ X* | ∀x ∈ B : x*(x) ≥ 0}

is called the (positive) dual cone of B;
• (Ω, F) denotes a measurable space and L^0_d := L^0_d(Ω, F) the linear space of random variables over (Ω, F), which take values in IR^d; the case d = 1 is denoted by L^0 := L^0(Ω, F);
• (Ω, F, P) denotes a probability space and L^p_d := L^p_d(Ω, F, P) for p = 0 or p ∈ [1, ∞) the linear space of equivalence classes with respect to the P-a.s. equality of p-integrable random variables with values in IR^d, and L^∞_d := L^∞_d(Ω, F, P) is the corresponding linear space of essentially bounded random variables with values in IR^d; L^p_d is a Banach space for p ∈ [1, ∞]; the case d = 1 is denoted by L^p;
• 1I denotes the random variable which takes the value of 1 for all ω ∈ Ω in L^0 and P-almost surely in L^p for p = 0 or p ∈ [1, ∞], respectively. Whenever a topology on L^p_d is needed and p ∈ [1, ∞), the usual Banach space topology does the job, whereas if p = ∞, the weak* topology is considered.
Finally, we point out a slight abuse of notation due to the traditional use of capital letters such as X as a (topological or locally convex etc.) linear space in a functional analytic framework and as a random variable in a stochastic/finance setting. We trust the reader can always make the distinction.

The General Model and Primal Representations
In the first part of this section, it is assumed that X and Z are nontrivial, real linear spaces and C ⊆ Z is a convex cone with 0 ∈ C.

Definition 1. Let T : Z → X be an injective linear operator. A function f : X → P(Z) is called translative with respect to T (or just T-translative) if

∀x ∈ X, ∀z ∈ Z : f(x + Tz) = f(x) + {z}.  (4)

An outlook to financial applications is as follows: Let Z = IR^m with a set {h^1, . . . , h^m} ⊆ X of m linearly independent elements and T : IR^m → X defined by Tz = ∑_{k=1}^m z_k h^k (below, this is referred to as the "standard example" for T). If X = L^0_d, which comprises the terminal payoffs of contingent claims, and the elements h^1, . . . , h^m are eligible for risk compensation or as hedging instruments, then T models the corresponding strategies, i.e., Tz is the payoff of a portfolio constructed of eligible instruments (see, e.g., [8,17]). With d = m = 1 and Tz = z1I, one easily recovers cash additive risk measures [18] (see [19] for a modern exposition) as extended real-valued (−T)-translative functions.
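The standard example with d = m = 1 can be made concrete on a finite Ω; the following sketch (the scenario values and the worst-case choice are ours) checks the (−T)-translativity of a cash additive risk measure numerically:

```python
import numpy as np

# Discrete sketch of the standard example with d = m = 1 and Tz = z*1I on a
# three-point Omega. The worst-case risk measure R(x) = max(-x) is the left
# endpoint of the half-line {R(x)} + IR_+ in P(IR, IR_+).
ONE = np.ones(3)                       # the payoff 1I

def R(x):
    return float(np.max(-x))           # endpoint of the set-valued worst-case risk

x = np.array([1.0, -2.0, 0.5])
for s in (-1.5, 0.0, 2.0):
    # (-T)-translativity: R(x + s*1I) = R(x) - s, i.e., adding cash reduces risk
    assert np.isclose(R(x + s * ONE), R(x) - s)
```

Adding s units of the eligible asset 1I shifts the half-line of acceptable deposits by −s, which is exactly the translativity (4) applied with −T.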
Translative functions can be represented via their zero sublevel sets defined by

A_f = {x ∈ X | 0 ∈ f(x)}.

Vice versa, a function f_A : X → P(Z) can be assigned to a set A ⊆ X by means of

f_A(x) = {z ∈ Z | x − Tz ∈ A}.

The following feature will play a crucial role.

Definition 2. A set A ⊆ X is called translative with respect to T and C (or just (T, C)-translative) if A + {−Tz} ⊆ A for all z ∈ C.

Many properties of f can equivalently be characterized by the properties of A_f, and vice versa. The basic primal representation result for T-translative functions reads as follows.

Proposition 2.
(a) Let f : X → P(Z) be a T-translative function. Then, f = f_{A_f}. If f additionally maps into P(Z, C), then A_f is (T, C)-translative.
(b) Let A ⊆ X be an arbitrary set. Then, the function f_A : X → P(Z) is T-translative, and A_{f_A} = A. If A is additionally (T, C)-translative, then f_A maps into P(Z, C).

Proof. (a) From the T-translativity of f, we obtain for all x ∈ X:

f_{A_f}(x) = {z ∈ Z | x − Tz ∈ A_f} = {z ∈ Z | 0 ∈ f(x − Tz)} = {z ∈ Z | 0 ∈ f(x) − {z}} = {z ∈ Z | z ∈ f(x)} = f(x).

Now, assume that f additionally maps into P(Z, C). Then, for each x ∈ A_f and each z ∈ C, one has z = 0 + z ∈ f(x) + C = f(x), hence 0 ∈ f(x) − {z} = f(x − Tz), i.e., x − Tz ∈ A_f, which is the (T, C)-translativity of A_f.
(b) Take x ∈ X and z ∈ Z. Then:

f_A(x + Tz) = {z' ∈ Z | x + Tz − Tz' ∈ A} = {z + z'' | z'' ∈ Z, x − Tz'' ∈ A} = f_A(x) + {z}.

Moreover, one has:

A_{f_A} = {x ∈ X | 0 ∈ f_A(x)} = {x ∈ X | x ∈ A} = A.

If A is (T, C)-translative, x ∈ X, z' ∈ f_A(x), and z ∈ C, then:

x − T(z' + z) = (x − Tz') − Tz ∈ A + {−Tz} ⊆ A

since A + {−Tz} ⊆ A by the (T, C)-translativity of A; thus, z' + z ∈ f_A(x), and f_A maps into P(Z, C). This completes the proof of the proposition.

Remark 2.
Of course, Proposition 2 has a P(Z, −C)-valued analog, as do many such results in the following; one can just replace C by −C and change the order, if necessary. This will not be stated below, but is used frequently. In the sequel, only the (a)-parts of the correspondence results will be stated, but they can always be complemented by the corresponding (b)-parts, whose proofs follow by a simple application of the (a)-part as in the proof of the preceding proposition.
The next result provides a representation of a T-translative f as an infimal convolution. For this purpose, the function J_T : X → P(Z, C) given by

J_T(x) = {T^{-1}x} + C if x ∈ TZ, and J_T(x) = ∅ otherwise

is introduced. Note that J_T is well defined since T is assumed to be injective. Moreover, it can straightforwardly be checked that J_T is T-translative (as well as its P(Z, −C)-valued analog).
Proposition 3. A function f : X → P(Z, C) is T-translative if, and only if, there is a function g : X → P(Z, C) such that f = g □ J_T, where the infimal convolution is understood in (P(Z, C), ⊇), i.e.,

(g □ J_T)(x) = ⋃_{x_1 + x_2 = x} (g(x_1) + J_T(x_2)).

Proof. For one direction, one just shows that g □ J_T indeed is T-translative and maps into P(Z, C). For the other direction, observe that the equation f = f_{A_f} can be written as

f = I_{A_f} □ J_T,  (8)

where I_A is the set-valued indicator function of A: I_A(x) = C if x ∈ A, and I_A(x) = ∅ otherwise.

While the representation via zero sublevel sets (also known as acceptance sets in risk measure theory) is standard and well known, the representation as an infimal convolution as in Proposition 3 is not so well known, even in the scalar case. Compare ([21], Def. 4.5), where a scalar version of J_T was used to define the numéraire-invariant (and monotone) hull of an extended real-valued function. It was used as a fundamental tool for constructing risk measures and their dual representations in [22]. Note the perfect analog of the set-valued infimal convolution to the scalar one thanks to the complete lattice property.

The representation in (8) can be understood as splitting the T-translative function f into a part related to the T-translativity and a part related to its zero sublevel set.
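In the scalar special case Z = IR, C = IR_+, Tz = z1I, the splitting (8) reduces to a familiar formula; the following derivation is a sketch under these sign conventions (the notation δ_A, j_T is ours):

```latex
% scalar indicator and embedding functionals (our notation):
%   \delta_A(x) = 0 if x \in A, +\infty otherwise,
%   j_T(x) = s if x = s\mathbf{1}, and +\infty if x \notin T\mathbb{R}.
% The infimal convolution representation (8) then reads
\rho(x) = (\delta_A \,\square\, j_T)(x)
        = \inf_{x_1 + x_2 = x}\big(\delta_A(x_1) + j_T(x_2)\big)
        = \inf\{\, s \in \mathbb{R} \mid x - s\mathbf{1} \in A \,\},
% which is the classical representation of a translative scalar function
% by its acceptance set A.
```

The lattice infimum (a union of half-lines) collapses here to the ordinary infimum over IR, which is why the infimal convolution structure is easy to overlook in the scalar case.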

Remark 3.
The T-translativity property readily implies

∀z ∈ Z : f(Tz) = f(0) + {z}.

Thus, f behaves as a linear function plus a constant on the linear subspace TZ ⊆ X. If f is sublinear or superlinear (see below), then f(0) is a convex cone and f is of the type "point plus cone" on TZ, i.e., basically a vector-valued function.
Clearly, if f maps into P (Z, C), then − f maps into P (Z, −C), and vice versa.

Further Correspondences
A few concepts related to complete lattice-valued functions are needed for the subsequent results. The following sets are associated with a function f : X → P(Z), namely its graph and its domain, defined by

graph f = {(x, z) ∈ X × Z | z ∈ f(x)}  and  dom f = {x ∈ X | f(x) ≠ ∅},

respectively. The condition dom f ≠ ∅ means that f is not everywhere equal to the top element if f maps into (P(Z, C), ⊇), and likewise, it is not everywhere equal to the bottom element if f maps into (P(Z, −C), ⊆). A function f : X → P(Z) is called proper if dom f ≠ ∅ and f(x) ≠ Z for all x ∈ X. This definition can be used for P(Z, C)-valued, as well as for P(Z, −C)-valued functions.
Definition 3. A function f : X → P(Z, C) is called:
(a1) Convex if graph f is a convex subset of X × Z;
(b1) Positively homogeneous if graph f is a cone in X × Z;
(c1) Subadditive if graph f + graph f ⊆ graph f;
(d1) Sublinear if graph f is a convex cone in X × Z;
(e1) Monotone with respect to ≤_B (or just B-monotone) if x_1 ≤_B x_2 implies f(x_1) ⊇ f(x_2).
A function g : X → P(Z, −C) is called:
(a2) Concave if graph g is a convex subset of X × Z;
(b2) Positively homogeneous if graph g is a cone in X × Z;
(c2) Superadditive if graph g + graph g ⊆ graph g;
(d2) Superlinear if graph g is a convex cone in X × Z;
(e2) Monotone with respect to ≤_B (or just B-monotone) if x_1 ≤_B x_2 implies g(x_1) ⊆ g(x_2).
In this definition, B ⊆ X is a convex cone with 0 ∈ B.

Remark 5.
Three comments about convexity-related properties are in order.
First, if f is convex, then f(x) is a convex subset of Z for all x ∈ X, i.e., a convex function with values in P(Z, C) automatically maps into C(Z, C). Similarly, a concave function with values in P(Z, −C) maps into C(Z, −C). Moreover, if f is positively homogeneous, then f(0) is a cone, thus a convex cone whenever f is sublinear or superlinear (see Remark 3).
Secondly, the convexity of a P(Z, C)-valued function f is equivalent to Jensen's inequality

∀x, y ∈ X, ∀s ∈ (0, 1) : f(sx + (1 − s)y) ⊇ s f(x) + (1 − s) f(y),

i.e., ⊇ corresponds to ≤ for real numbers (see the interpretation in Remark 1). Similarly, the concavity of a P(Z, −C)-valued function g is equivalent to

∀x, y ∈ X, ∀s ∈ (0, 1) : s g(x) + (1 − s) g(y) ⊆ g(sx + (1 − s)y),

i.e., ⊆ corresponds to ≤ for real numbers in this case (again, see Remark 1). Likewise, the subadditivity of f is equivalent to f(x + y) ⊇ f(x) + f(y) for all x, y ∈ X, with a parallel condition for superadditive g.
Thirdly, one should note that the graph of a P(IR, IR_+)-valued function corresponds to the epigraph and the graph of a P(IR, −IR_+)-valued function to the hypograph of an extended real-valued function. This again confirms that the items (a1), (a2) (as well as (c1), (c2), (d1), (d2)) of Definition 3 are perfectly consistent with, and true generalizations of, the scalar concepts. One may compare [23]: the set-valued upper and lower expectations therein do not share most of these features.

Remark 6.
With Remark 4 in view, one may note that f 1 is B-monotone if, and only if, f 4 is B-monotone as well, and f 2 and f 3 are (−B)-monotone.
The above definitions have the great advantage that most known scalar results remain valid. For example, a positively homogeneous P(Z, C)-valued function is convex if, and only if, it is subadditive, and a function f : X → P(Z, C) is convex if, and only if, the function −f : X → P(Z, −C) is concave. Compare ([13], Sec. 4) for such relationships and more references.
If Z is a topological space, the following concept can be introduced.

Definition 4.
A set A ⊆ X is called T-directionally closed if for any net (z_λ)_{λ∈Λ} in Z converging to 0 ∈ Z and any x ∈ X, the relation x + Tz_λ ∈ A for all λ ∈ Λ implies x ∈ A. If the topology on Z is first-countable, then nets can be replaced by sequences.
One should note that only the topology on Z enters this definition, not a (potential) one on X. Of course, if A is closed with respect to a topology on X, then it is T-directionally closed for each continuous T.
If, in addition, X is a topological space, the following concepts can be considered.
A function f : X → P(Z) is called closed if graph f is a closed subset of X × Z with respect to the product topology.

Remark 7.
Two comments about the closedness properties are in order. First, a closed P (Z, C)-valued function has closed values and closed sublevel sets, i.e., it automatically maps into F (Z, C). If such a function is additionally convex, it maps into G(Z, C). Similarly, a closed P (Z, −C)-valued function maps into F (Z, −C) and into G(Z, −C) if it is additionally concave.
Secondly, a closed function f : X → F(Z, C) can be characterized by a lattice limit formula of the type

f(x) = sup_{U ∈ N_X} inf_{u ∈ U} f(x + u),

where N_X is a neighborhood base of 0 ∈ X and inf/sup are understood in (F(Z, C), ⊇). This and similar relationships are due to [24]. This also shows the perfect analogy of the lattice constructions to the scalar case. Of course, parallel remarks apply to closed functions mapping into F(Z, −C) and lattice-upper-semicontinuity.

Proposition 4 lists, among others, the following correspondences for a T-translative f : X → P(Z, C):
(g) f(x) = ∅ for all x ∈ X if, and only if, A_f = ∅;

(j) If Z is a topological linear space, then f is closed-valued if, and only if, A_f is T-directionally closed;
(k) If X, Z are topological linear spaces and T is continuous, then f is closed if, and only if, A f is closed.

Proof.
We shall indicate the proofs for (a), (j), and (k); the remaining parts are straightforward and therefore omitted. Part (a) is a direct consequence of the translativity property (4). For (j), one passes from a net in a value f(x) to a net in A_f via (4) and uses the T-directional closedness of A_f; conversely, the T-directional closedness of A_f follows from the closedness of the values by the same correspondence. For (k), take a net in graph f converging to some (x, z); by (4), it generates a net in A_f, whose limit belongs to A_f by the closedness of A_f and the continuity of T. Consequently, again by the T-translativity, (x, z) ∈ graph f. As usual, Proposition 4 has a twin for P(Z, −C)-valued T-translative functions.

Dual Representation
Let X, Z be two locally convex, real topological linear spaces with topological duals X * , Z * and C ⊆ Z a closed convex cone. No interior point or pointedness assumption is made, which means that C = {0} or a half-space is possible. By C + , we denote the positive dual cone of C, i.e., C + = {z * ∈ Z * | ∀z ∈ C : z * (z) ≥ 0}. The first goal is a dual representation of closed convex T-translative functions. It will be a consequence of the Fenchel-Moreau (biconjugation) theorem for set-valued functions as first established in [25]. Here, the version in ( [13], Theorem 5.8) is used.
For x* ∈ X* and z* ∈ C^+\{0}, consider the function S_{(x*,z*)} : X → G(Z, C) defined by

S_{(x*,z*)}(x) = {z ∈ Z | x*(x) ≤ z*(z)}.

It is half-space-valued since its values are superlevel sets of the continuous linear function z*. These functions are additive and positively homogeneous; see ([25], Proposition 6), ([13], Proposition 5.1). Such functions are called collinear. The abbreviation S^+ refers below to collinear functions with coupled dual variables z* = T*x*. Let f : X → G(Z, C) be a proper, closed, convex function. The Fenchel-Moreau theorem ([13], Theorem 5.8) states that it equals its biconjugate f**, which is defined via

f*(x*, z*) = sup_{x ∈ X} [S_{(x*,z*)}(x) − f(x)],  f**(x) = sup_{(x*,z*)} [S_{(x*,z*)}(x) − f*(x*, z*)],

the supremum in the second formula running over x* ∈ X*, z* ∈ C^+\{0}. The difference of sets appearing here can be understood as an inf-residuation in the complete lattice G(Z, C) with respect to the addition ([13], Section 2.3), and it replaces the usual difference in vector spaces.
This type of set-valued conjugation shares most properties with the corresponding one for extended real-valued functions. In particular:

• The conjugate of an infimal convolution of two functions is the sum of the two conjugates (see ([25], Lemma 2), and ([26], Theorem 2.3.1 (ix)) for the scalar case);
• The conjugate of the set-valued indicator function I_A is

(I_A)*(x*, z*) = {z ∈ Z | σ_A(x*) ≤ z*(z)},  (10)

which can be considered as a G(Z, C)-valued version of the ordinary support function of A; this fact will be useful later on.
This sets the stage for an application of the representation (8) to determine the conjugate of a T-translative function f: it is the sum of the two conjugates of I_{A_f} and J_T. The conjugate of J_T can readily be computed (see already ([13], Section 5.5)): it equals the top element ∅ unless the dual variables are coupled via z* = T*x* ∈ C^+\{0}, in which case J_T*(x*, T*x*) = {z ∈ Z | 0 ≤ (T*x*)(z)}. Thus, for f = I_{A_f} □ J_T, one obtains:

f*(x*, z*) = {z ∈ Z | σ_{A_f}(x*) ≤ (T*x*)(z)} if z* = T*x* ∈ C^+\{0}, and f*(x*, z*) = ∅ otherwise.

This translates to the following dual representation result.
Theorem 1. If f : X → G(Z, C) is a proper, closed, convex T-translative function, then

f(x) = ⋂_{x* ∈ X*, T*x* ∈ C^+\{0}} {z ∈ Z | x*(x − Tz) ≤ σ_{A_f}(x*)}.  (12)

If f is additionally sublinear, then A_f is a closed convex cone and (12) simplifies to

f(x) = ⋂_{x* ∈ barr A_f, T*x* ∈ C^+\{0}} {z ∈ Z | x*(x − Tz) ≤ 0}.  (13)

Proof. This follows from the Fenchel-Moreau theorem ([13], Theorem 5.8), the representation of f as an infimal convolution (8), and the form of the conjugates of I_{A_f} and J_T. If f is sublinear, then A_f is a closed convex cone, so that σ_{A_f} only takes the values 0 (on barr A_f) and +∞; the sets in (12) corresponding to x* with σ_{A_f}(x*) = +∞ equal Z; thus, these x* can be dropped from the intersection in (12).
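In the scalar sublinear case, the intersection of half-lines in a dual representation collapses to a supremum of expectations over probability measures; the following is a discrete numerical sketch for the worst-case risk measure (the scenario values and the grid approximation of the simplex are ours):

```python
import numpy as np

# Scalar sublinear case on a finite Omega: the worst-case risk measure
# rho(x) = max(-x) has the dual representation rho(x) = sup_q E_q[-x],
# the sup running over probability vectors q (a discrete stand-in for the
# intersection formula of the sublinear case).
x = np.array([0.5, -1.0, 2.0])
rho_primal = float(np.max(-x))

# approximate the sup over the probability simplex by a fine grid
best = -np.inf
for q1 in np.linspace(0.0, 1.0, 101):
    for q2 in np.linspace(0.0, 1.0 - q1, 101):
        q = np.array([q1, q2, 1.0 - q1 - q2])
        best = max(best, float(q @ (-x)))

assert abs(rho_primal - best) < 1e-9   # sup attained at a vertex of the simplex
```

The supremum is attained at a Dirac measure on the worst scenario, a degenerate instance of the fact that only dual variables compatible with the coupling condition contribute to the representation.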
Recall the definition of the recession cone rec D = {y ∈ X | D + {y} ⊆ D} of a closed convex set D ⊆ X, as well as the one of the barrier cone barr D = dom σ_D from Section 2.2. The recession cone is the negative dual of the barrier cone, i.e., rec D = −(barr D)^+; in particular, x* ∈ barr D implies x*(y) ≤ 0 for all y ∈ rec D. However, sometimes, it is useful to involve the latter condition.
As many other results, Theorem 1 has a twin for G(Z, −C)-valued concave functions.

Remark 8.
(1) It is easy to verify that, for T*x* ∈ C^+\{0} and r ∈ IR, the function x ↦ {z ∈ Z | x*(x − Tz) ≤ r} is itself closed, convex, and T-translative. Thus, only T-translative functions of type S^+ enter (12). In particular, a sublinear T-translative function is the pointwise supremum of such functions according to (13);
(2) The condition z* = T*x* ∈ C^+\{0} coupling the two dual variables transforms into a time consistency condition for financial market models, which was pointed out in ([3], Sections 4 and 5.4) (see Example 4 and Sections 4.5 and 4.7 below). Thus, the general framework for set-valued T-translative functions and the set-valued duality theory do exactly what they are supposed to do: produce the right dual variables and appropriate dual descriptions. More details on time consistency conditions within the framework of conditional risk measures can be found in [11,12];
(3) Formula (12) can be given a more traditional form via the following calculation. Using (10) and the definition of S^+, one obtains

{z ∈ Z | x*(x − Tz) ≤ σ_{A_f}(x*)} = {z ∈ Z | x*(x) − σ_{A_f}(x*) ≤ (T*x*)(z)},

a half-space in Z with normal T*x*. Thus, the knowledge of the (scalar) support function σ_{A_f} for x* with T*x* ∈ C^+\{0} is enough for (12).

Set-Valued T-Translative Functions-Examples
In this section, a list of examples is provided starting with general versions and moving to more concrete applications in finance and statistics. Among other things, it is shown that projections on linear subspaces in Hilbert spaces are special instances of T-translative functions, and a new proposal for lower and upper expectations for multivariate random variables is given.

Aggregation Maps
Given a linear subspace M ⊆ X and a set D ⊆ X, the aggregation map L_{D,M} : X → P(M) is defined by

L_{D,M}(x) = ({x} − D) ∩ M.

It is translative with respect to the embedding operator T : M → X, i.e., L_{D,M}(x + z) = L_{D,M}(x) + {z} for all x ∈ X, z ∈ M. Finally, L_{D,M} is convex if, and only if, D is convex.
Proof. Take z ∈ M, x ∈ X. Then:

L_{D,M}(x + z) = ({x + z} − D) ∩ M = (({x} − D) ∩ M) + {z} = L_{D,M}(x) + {z}

since M + {z} = M. The last claim follows from Proposition 4 (a).

Remark 9.
Aggregation maps can be seen as just another way of writing T-translative functions: Let Z be another linear space, T : Z → X an injective linear operator, and A ⊆ X. Set M = TZ. Then, y ∈ ({x} − A) ∩ TZ if, and only if, there is z ∈ f_A(x) such that y = Tz. Hence

L_{A,M}(x) = {Tz | z ∈ f_A(x)},

i.e., the aggregation map L_{A,M} can be understood as a superposition of the T-translative function f_A and T. On the other hand, the inverse T^{-1} : TZ → Z exists since T is injective, and one can write

f_A(x) = {T^{-1}y | y ∈ L_{A,TZ}(x)},

i.e., f_A can be seen as the superposition of the aggregation map L_{A,TZ} and T^{-1}.
A (linear) scalarization very often can be identified with the projection onto a one-dimensional subspace: a standard example is the expected value of a random variable in L^2, which is the projection of it onto the one-dimensional subspace of constants; the expected value satisfies the translativity property E[X + s1I] = E[X] + s. Clearly, such a projection comes with a loss of information, and it therefore makes sense to ask for a similar procedure with a more than one-dimensional subspace as the image space: this would still "simplify" the original object, but retain more information. A decision maker then has to decide "how much scalarization" is needed. It is worth noting that a "total" scalarization leads to a total order, whereas a more general aggregation map may preserve some non-totalness of the original order relation in X, which can be a desirable feature.
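The projection interpretation of the expectation can be checked on a finite Ω; a minimal numpy sketch (sample size and seed are ours):

```python
import numpy as np

# Finite-Omega sketch of L^2 with uniform P: the expectation is the orthogonal
# projection onto the one-dimensional subspace of constants, and it is
# translative: E[X + s*1I] = E[X] + s.
n = 8
one = np.ones(n)
rng = np.random.default_rng(0)
X = rng.normal(size=n)

def E(Y):
    return float(Y @ one / n)          # projection coefficient onto span{1I}

assert np.isclose(E(X + 3.0 * one), E(X) + 3.0)   # translativity
assert np.isclose((X - E(X) * one) @ one, 0.0)    # X - E[X]*1I is orthogonal to constants
```

The orthogonality of the residual X − E[X]1I to the constants is exactly the projection property that the aggregation maps of this section generalize to higher-dimensional subspaces M.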
The remaining part of this section deals with the dual representation of aggregation maps. Therefore, it is now assumed that X is a separated locally convex space that forms a dual pair with its topological dual X*, as in Section 3. For the embedding operator T : M → X, the element T*x* ∈ M* is the restriction of the continuous linear function x* ∈ X* to the linear subspace M since one has

(T*x*)(z) = x*(Tz) = x*(z) for all z ∈ M.

The set M^⊥ = {x* ∈ X* | ∀z ∈ M : x*(z) = 0} is also called the annihilator of M in X* (see ([16], Definition 5.106)).
Thus, if D is closed and convex and L_{D,M} is proper, Theorem 1 applies with T the embedding operator, and the general pairs of dual variables (x*, T*x*) reduce to x* alone: by (10) and Remark 8, (3), each set in the intersection (12) is determined by the restriction of x* to M and the support function σ_D. Note that, in contrast to the general case, only one dual variable x* appears in these formulas due to the fact that T is a simple embedding operator.
If D is a closed convex cone, then L_{D,M} becomes sublinear, and σ_D only takes the values 0 and +∞, so that (13) applies. In financial terms, the set

(A − {X}) ∩ M = {Z ∈ M | X + Z ∈ A}

is the collection of all eligible payoffs Z, which make the overall position X + Z acceptable. If a (linear) pricing functional π : M → IR is given, then the function

ρ_{A,M,π}(X) = inf{π(Z) | Z ∈ M, X + Z ∈ A}

was defined as an extended real-valued risk measure on V in ([17], p. 590), ([8], Formula (5)). This model already appeared in [28] and also in [29]. It has intimate relationships with good deal pricing, as outlined, e.g., in [29].

Projections and Conditional Expectations
Let X be a Hilbert space, M ⊆ X a closed subspace, and M^⊥ the orthogonal complement of M in X. Then, with D = M^⊥ (with the notation from the previous example),

L_{M^⊥,M}(x) = ({x} − M^⊥) ∩ M = {Proj_M(x)},

where Proj_M denotes the orthogonal projection onto M. Since X and M are Hilbert spaces, continuous linear functionals over them can be identified with elements of X or M itself. Since A_{L_{M^⊥,M}} = M^⊥ and σ_{M^⊥} vanishes precisely on M, the dual representation reduces to an intersection of half-spaces in M indexed by x* ∈ M. Since M is a linear subspace, z ∈ L_{M^⊥,M}(x) satisfies x*(x) = x*(z) for all x* ∈ M. This is the dual characterization of the projection z = Proj_M(x), and it also confirms that this projection is unique.
Conditional expectations are also discussed in this section since they can be seen, at least on L^2-spaces, as a special type of projection. These ideas, combined with the procedures discussed in Section 4.5, form the building block for dual representation results for conditional risk measures as given in [11,12].
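The projection point of view on conditional expectations can also be checked on a finite Ω; the partition and sample values below are ours:

```python
import numpy as np

# Finite-Omega sketch: Omega = {0,...,5} with uniform P, and the sigma-algebra G
# generated by the partition {0,1,2} | {3,4,5}. The conditional expectation
# E[.|G] is the orthogonal projection in L^2 onto the G-measurable vectors
# (those that are constant on each partition block).
blocks = [[0, 1, 2], [3, 4, 5]]

def cond_exp(x):
    y = np.empty_like(x, dtype=float)
    for b in blocks:
        y[b] = np.mean(x[b])          # average over each partition block
    return y

x = np.array([1.0, 2.0, 6.0, 0.0, 3.0, 3.0])
y = cond_exp(x)

# projection property: x - E[x|G] is orthogonal to every G-measurable vector
for g in (np.ones(6), np.array([1.0, 1.0, 1.0, -2.0, -2.0, -2.0])):
    assert abs((x - y) @ g) < 1e-12
# tower property: E[E[x|G]] = E[x]
assert np.isclose(np.mean(y), np.mean(x))
```

Averaging over the partition blocks is precisely the orthogonal projection onto the subspace of G-measurable vectors, which is the L^2 picture behind the adjoint-operator interpretation mentioned in the Introduction.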

Standard Example and Scalar Translative Functions
The following special case occurs very often in applications: Z = IR^m, IR^m_+ ⊆ C ⊂ IR^m, {h^1, . . . , h^m} ⊆ X a set of m linearly independent elements, and T : IR^m → X defined by Tz = ∑_{k=1}^m z_k h^k.
In some financial applications with X = L^p_d, the elements h^1, . . . , h^m can be assets eligible for hedging or risk compensation. This standard example also covers the case of several, potentially defaultable securities as eligible assets. Such a case for m = 1 was discussed in [9,20]. The case m > 1 for risk-free eligible assets was treated in [2,3].

Example 3.
A special case is the following one, which is well studied and has many applications, not only in finance. Let m = 1, let h^1 = h ∈ X\{0} be a fixed, nonzero element of X, and let M = {th | t ∈ IR} be the one-dimensional subspace generated by h. Then

L_{D,M}(x) = ({x} − D) ∩ M = {sh | s ∈ IR, x − sh ∈ D}.

Assume D − IR_+h ⊆ D, i.e., D is (T, C)-translative for T : IR → X with Ts = sh and C = IR_+. Then, sh ∈ {x} − D and t ≥ 0 imply (s + t)h ∈ {x} − D, so that L_{D,M}(x) is ∅, all of M, or a ray of the form [τ_{D,h}(x), ∞)h or (τ_{D,h}(x), ∞)h, where

τ_{D,h}(x) = inf{s ∈ IR | x − sh ∈ D}

is an extended real-valued translative function:

∀x ∈ X, ∀s ∈ IR : τ_{D,h}(x + sh) = τ_{D,h}(x) + s.
Such functions have a vast number of applications in vector optimization and multicriteria decision making, idempotent analysis, statistics, finance, and economics (see, e.g., [30]). In fact, the authors realized already in 2001/2002 the intimate relationship between translative scalarization functionals used in vector optimization [31] and risk measures as defined in [18,32].
Note that the case (τ_{D,h}(x), ∞)h above is excluded if D is assumed to be T-directionally closed for the simple linear operator T : IR → X defined by Ts = sh for s ∈ IR. In this case, D is also said to be h-directionally closed; this concept was introduced by Schrage [33] and one of the authors [30]. If a set D is not h-directionally closed, one can add all points x ∈ X for which there is a sequence {s_n}_{n∈IN} in IR with lim_{n→∞} s_n = 0 such that x + s_n h ∈ D for all n ∈ IN, and obtain a set dcl_h D, which is called the h-directional closure of D. It was shown in [30,33] that this indeed is a closure operation, which is weaker (i.e., dcl_h D is smaller in general) than the algebraic closure of D.
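For a concrete D and h, the scalarization τ_{D,h} admits a closed form; the following sketch (the particular choices of D and h are ours) verifies the translativity along h numerically:

```python
import numpy as np

# Sketch of the translative scalarization tau_{D,h} in X = IR^2 for the
# particular choice D = -IR^2_+ (componentwise nonpositive vectors) and
# h = (1,1): tau(x) = inf{ s in IR | x - s*h in D } = max(x_1, x_2).
h = np.ones(2)

def tau(x):
    return float(np.max(x))            # closed form for this D and h

x = np.array([0.3, -1.2])
for s in (-1.0, 0.5, 2.0):
    assert np.isclose(tau(x + s * h), tau(x) + s)   # translativity along h
```

For this choice of D and h, τ_{D,h} is the classical Gerstewitz/Tammer-type scalarization used in vector optimization, which is the link to risk measures mentioned above.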

Subhedging and Superhedging Sets
Let X ∈ L^p_d be a contingent claim and C ⊆ L^0_d the set of portfolios in the market, which are available at terminal time by trading in the market starting from the zero portfolio at initial time. The sets

SupH(X) = {z ∈ IR^d | z1I − X ∈ C}  and  SubH(X) = {z ∈ IR^d | X − z1I ∈ C}

are the collections of superhedging and subhedging portfolios (at initial time). The interpretation is that z ∈ SupH(X) means to sell X at initial time, obtain z, trade in the market with initial portfolio z, and become solvent at terminal time; similarly, z ∈ SubH(X) means to buy X at initial time by delivering z, trade in the market with zero initial portfolio, and become solvent at terminal time.
If 0 ∈ SupH(X), then X is a terminal payoff, which can be achieved by trading in the market starting with the zero portfolio, while 0 ∈ SubH(X) means that X is a solvent position at terminal time, i.e., A_SupH = −C and A_SubH = C.
Consider, for example, a one-period market model given by a closed convex cone IR^d_+ ⊆ C_0 ⊆ IR^d and a random closed convex cone C_1 with IR^d_+ ⊆ C_1(ω) for all ω ∈ Ω (the solvency cones at initial and terminal time, respectively). Then, C is a convex cone, and if it is closed (e.g., under some no-arbitrage-type condition), SupH is a closed sublinear T-translative function, which has a dual representation of the form (13) in which the pair (Y, v) of dual variables corresponds to a consistent price process as used in [5,7] for the one-period market model (C_0, C_1). More general versions are of course possible; see already ([3], Section 5.4).
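In the frictionless one-dimensional case, the superhedging set is a half-line whose endpoint can be computed explicitly; the following one-period binomial sketch (all market data are ours) illustrates this:

```python
import numpy as np

# One-period binomial sketch for d = 1: SupH(X) = [z*, oo), where z* is the
# minimal initial capital z such that z + theta*(S1 - S0) >= X in both states
# for some hedge theta. Market data: S0 = 1, terminal prices (u, d), zero
# interest rate.
u, d, S0 = 2.0, 0.5, 1.0
X = np.array([3.0, 0.0])                 # claim payoff in states (up, down)

def sup_hedge_endpoint(X):
    # the market is complete, so the claim can be replicated exactly:
    # solve z + theta*(u - S0) = X_u and z + theta*(d - S0) = X_d
    theta = (X[0] - X[1]) / (u - d)
    return float(X[0] - theta * (u - S0))

# the endpoint coincides with the risk-neutral (martingale measure) price:
q = (S0 - d) / (u - d)
assert np.isclose(sup_hedge_endpoint(X), q * X[0] + (1 - q) * X[1])
```

In this complete, frictionless market the sup over martingale measures degenerates to a single price; with transaction costs (cones C_0, C_1), the endpoint is instead a supremum over consistent price processes, which is the content of the dual representation above.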

Set-Valued Risk Measures
The following definition is slightly more general than the ones given in the literature cited below.
Such risk measures were introduced and studied in [2,3] with the main motivation that risk can be compensated for by several eligible assets (and not just cash in a fixed currency, which produces the usual "cash-additivity" interpretation of (R1)). The same motivation appeared in [4], which did not use the complete lattice framework, but was actually the first to define set-valued risk measures.
The presence of the operator T makes it possible to include arbitrary eligible assets along the lines of the arguments given in [8,29]: for example, eligible payoffs H^i = b^i 1I with b^i ∈ IR^d, i = 1, . . . , m, linearly independent, where the case of unit vectors b^i = e^i was the one considered in [4].
Finally, convexity and sublinearity requirements as given in Definition 3 lead to convex and sublinear (traditionally called "coherent") set-valued risk measures.
Dual representation results can be obtained as follows. Assume p ∈ [1, ∞] (compare Section 2.2 for the topologies on the spaces L^p_d). First, the monotonicity (R2) is equivalent to A_R + (L^p_d)_+ ⊆ A_R (see Proposition 4 (i)) and hence implies Y ∈ −(L^q_d)_+ for the dual variables (see the discussion after the proof of Theorem 1). Next, Corollary 2 applies after collecting the eligible payoffs H^1, . . . , H^m as the columns of a d × m-matrix H, so that Tz = Hz. Since R is (−T)-translative, the dual representation Formula (12) in Theorem 1 asks for T*Y ∈ C^+\{0}. In order to prepare a scenario-based version, the sign of the dual variable Y is switched, which is also in accordance with an interpretation as consistent price processes later on.
This set is a convex cone, and since T*Y = E[H^T Y], one has: The dual representation Formula (12) in Theorem 1 yields: for a proper closed convex risk measure R : L^p_d → G(IR^m, C). In a final step, this formula will be transformed into a scenario-based version, i.e., a formula with probability measures as dual variables, as in [18] (for scalar risk measures) and in [2,3] (for set-valued ones).
Take Y ∈ (L^q_d)_+, and define: Then, V_i is the density of a probability measure Q_i, i.e., V_i = dQ_i/dP, for i = 1, . . . , d. One can now write Y = diag(w) dQ/dP. This gives: Defining: The following lemma provides the one-to-one correspondence between pairs (Y, v) and (Q, w).
Then, there is a vector probability measure Q whose components are absolutely continuous with respect to P and a w ∈ IR^d_+ such that E_Q[H]w ∈ C^+\{0} and: Let Q be a vector probability measure whose components are absolutely continuous with respect to P and w ∈ IR^d_+.
Remark 10.
The set: clearly is a generalization of the upper w-expectation in Section 4.7 below and shares most of its properties; its values are subsets of IR^m with m ≤ d. A more thorough discussion of this new concept will be performed elsewhere.
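The passage from a dual variable Y to a pair (Q, w) above can be illustrated on a finite probability space. The following is a minimal sketch (the data are hypothetical), assuming w_i = E[Y_i] > 0 for all i, so that each component of Y can be normalized to a probability density:

```python
# Finite probability space: Omega = {0, 1, 2} with probabilities p.
# Y is given componentwise as a d x n table (d components, n scenarios),
# with all entries >= 0.
p = [0.5, 0.3, 0.2]
Y = [[1.0, 2.0, 4.0],    # Y_1
     [3.0, 1.0, 0.5]]    # Y_2

# w_i = E[Y_i]; the density of Q_i with respect to P is V_i = Y_i / w_i.
w = [sum(pk * yk for pk, yk in zip(p, Yi)) for Yi in Y]
Q = [[pk * yk / wi for pk, yk in zip(p, Yi)] for Yi, wi in zip(Y, w)]

# Each Q_i is a probability measure ...
for Qi in Q:
    assert abs(sum(Qi) - 1.0) < 1e-12

# ... and Y = diag(w) dQ/dP is recovered pointwise.
for i in range(len(Y)):
    for k in range(len(p)):
        assert abs(w[i] * (Q[i][k] / p[k]) - Y[i][k]) < 1e-12
```

The assertions verify that each Q_i is indeed a probability measure and that the original Y is recovered from (Q, w).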
The scenario-based dual representation reads as follows. Defining: one obtains the following dual representation result.
In particular, the penalty function can be chosen as: If R is additionally sublinear, then: The choice b_j = e_j, the j-th unit vector, leads to (a more general version of) situations already considered in [4]. Special Case II: If m = d and H_j = e_j 1I for j = 1, . . . , d, then B becomes the d × d identity matrix. Moreover, in this case, E^+_{H,Q,w}(X) = {z ∈ IR^m | w^T E_Q[X] ≤ w^T z}, which is the upper (Q, w)-expectation defined in Section 4.7 below. Conversely, 0 ∈ R_C(X) implies: ∃Z ∈ L^p_m : X − TZ ∈ C, 0 ∈ R(Z), which immediately produces X ∈ TA_R + C.
It can happen that R_C(0) = IR^m even if R is finite at 0 ∈ L^p_m. A different and useful representation of R_C, in particular with dual representations in view, can be obtained as follows. Define the function R_T : L^p_d → P(IR^m, IR^m_+) by R_T(X) = R(Z) if X = TZ for some Z ∈ L^p_m and R_T(X) = ∅ otherwise, as well as I_C(X) = IR^m_+ if X ∈ C and I_C(X) = ∅ if X ∉ C. The function I_C is the set-valued indicator function of C (see [25]).

Proposition 7.
The m-liquidation risk measure R_C is the infimal convolution of R_T and I_C, i.e., R_C(X) = inf_{X_1 + X_2 = X} [R_T(X_1) ⊕ I_C(X_2)].
Proof. By definition, for R_T(X_1) ⊕ I_C(X_2) to be nonempty, one must have X_1 = TZ for some Z ∈ L^p_m and X_2 = X − X_1 = X − TZ ∈ C, and in this case: which is the definition of R_C.
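The infimal convolution structure can be made tangible in the scalar case m = 1 with T the identity: take (hypothetically) f(x) = x² in the role of R_T and the indicator of the acceptance cone C = [0, ∞) in the role of I_C; then (f □ I_C)(x) = inf_{x_1 + x_2 = x} [f(x_1) + I_C(x_2)] has the closed form min(x, 0)². A grid-based numerical check:

```python
INF = float("inf")

def f(x):            # a smooth "cost" in the role of R_T (hypothetical)
    return x * x

def g(x):            # indicator of the acceptance cone C = [0, oo)
    return 0.0 if x >= 0 else INF

def infconv(f, g, x, grid):
    # (f [] g)(x) = inf over splittings x = x1 + x2 of f(x1) + g(x2);
    # here the splitting variable x2 = u runs over a finite grid.
    return min(f(x - u) + g(u) for u in grid)

grid = [k / 100.0 for k in range(-500, 501)]   # u in [-5, 5], step 0.01
for x in [-2.0, -0.5, 0.0, 1.5]:
    expected = min(x, 0.0) ** 2                # closed form of f [] g
    assert abs(infconv(f, g, x, grid) - expected) < 1e-9
```

The grid values were chosen so that the exact minimizer lies on the grid; in general, a grid-based infimal convolution is only accurate up to the grid resolution.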

Remark 12.
(1) The case of m risk-free assets (e.g., m currency accounts) can be modeled by Sz = ∑_{i=1}^m z_i b_i 1I with m linearly independent vectors b_i ∈ IR^d. In particular, the unit vectors b_i = e_i, i = 1, . . . , m, can be used, as, for example, already in [4]. An ad hoc procedure of this type was also used in [4]; (2) A special case for C is a one-period market model with proportional transaction costs in the form C = C_0 1I + C_1, where C_0 ⊆ IR^d and C_1 ⊆ L^p_d are closed convex sets, which comprise the solvent positions at initial and terminal time, respectively (see [34] for this type of market model and [5,7] for the case when C_0 and C_1 are polyhedral convex cones, i.e., conical market models). Since in this case: it is possible to further decompose the representation (16). This idea is due to C. Ararat and was used in [35]; (3) Finally, if m = 1 and Sz = zb1I with z ∈ IR and b ∈ IR^d, then the case of scalar risk measures for multivariate positions is recovered.
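For item (2), the conical case can be sketched numerically. In a two-currency Kabanov-type model with proportional transaction cost λ, the solvency cone is generated by e_1, e_2, (1+λ)e_1 − e_2, and (1+λ)e_2 − e_1; a position is solvent if, and only if, it is a nonnegative combination of these generators. In IR², membership can be tested by solving 2 × 2 systems over pairs of generators (the value of λ and the test positions below are hypothetical):

```python
lam = 0.05                      # proportional transaction cost (assumed)
G = [(1.0, 0.0), (0.0, 1.0),
     (1.0 + lam, -1.0), (-1.0, 1.0 + lam)]   # generators of the solvency cone

def in_cone(x, gens, eps=1e-12):
    # By Caratheodory's theorem for cones, x lies in a 2-D polyhedral cone
    # iff it is a nonnegative combination of some pair of generators.
    for i in range(len(gens)):
        for j in range(i + 1, len(gens)):
            (a, c), (b, d) = gens[i], gens[j]
            det = a * d - b * c
            if abs(det) < eps:
                continue
            s = (x[0] * d - b * x[1]) / det    # coefficient of gens[i]
            t = (a * x[1] - c * x[0]) / det    # coefficient of gens[j]
            if s >= -eps and t >= -eps:
                return True
    return False

assert in_cone((1.0, 1.0), G)          # long in both currencies: solvent
assert in_cone((1.1, -1.0), G)         # 1.1 units cover a debt of 1 after costs
assert not in_cone((1.0, -1.0), G)     # 1.0 unit does not (exchange rate 1.05)
```

The same pairwise test applies to any finitely generated cone in IR²; in higher dimensions, a linear program would be needed.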

Set-Valued Lower and Upper Expectations
In the book ([36], p. 254ff), Huber discussed the notion of lower and upper expectations as infima and suprema of expected values over sets of probability measures (along with lower and upper probabilities). His proof of a "dual representation" became a blueprint for similar results for coherent risk measures [18].
How do we generalize these notions to the multivariate situation? A new proposal is as follows. Let Q be a set of d-dimensional vector probability measures on a measurable space (Ω, F) such that E_Q[X] exists for all Q ∈ Q. Moreover, let C ⊆ IR^d be a nontrivial (i.e., C ∉ {{0}, IR^d}) closed convex cone with positive dual cone C^+ = {v ∈ IR^d | ∀z ∈ C : v^T z ≥ 0}. For w ∈ C^+\{0} and Q ∈ Q, define the sets: which we call the upper and lower (Q, w)-expectation of X, respectively. Set: Straightforwardly, one obtains for all (Q, w) ∈ Q × C^+\{0}: The set-valued function E^+ is called the upper expectation of X, whereas E^− is called the lower expectation with respect to Q. Of course, "upper" and "lower" refer to the order generated by the cone C (see Remark 1). It is not difficult to verify that E^+ maps into G(IR^d, C) and is sublinear with respect to ⊇, whereas E^− maps into G(IR^d, −C) and is superlinear with respect to ⊆. Moreover, E^+(X) = −E^−(−X), which corresponds to ([36], p. 254, (2.3)). Finally, E^+(−X) satisfies the requirements of a (sublinear) risk measure in Section 4.5, and its definition can be considered as its dual representation; this is stated in the following result.
Proposition 8.
(a) E^+ maps into G(IR^d, C), is T-translative for Tz = z1I, and is sublinear and K-monotone with respect to each random closed convex cone K such that E_Q[X] ∈ C for all Q ∈ Q whenever X ∈ K (pointwise); (b) E^− maps into G(IR^d, −C), is T-translative for Tz = z1I, and is superlinear and K-monotone with respect to each random closed convex cone K such that E_Q[X] ∈ C for all Q ∈ Q whenever X ∈ K (pointwise).
Proof. (a) Everything is a direct consequence of the definition and the assumption. In particular, if X_2 − X_1 ∈ K, then E_Q[X_2] − E_Q[X_1] ∈ C for all Q ∈ Q, and hence E^+_{Q,w}(X_1) ⊇ E^+_{Q,w}(X_2) for each (Q, w) ∈ Q × C^+\{0}. Taking the intersections over (Q, w) ∈ Q × C^+\{0} on both sides of the inclusion, one obtains E^+(X_1) ⊇ E^+(X_2); (b) Completely parallel.
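On a finite probability space with a finite set Q and finitely many directions w spanning C^+, membership in E^+(X) and E^−(X) reduces to finitely many scalar inequalities, assuming E^+_{Q,w}(X) = {z | w^T E_Q[X] ≤ w^T z} and E^−_{Q,w}(X) = {z | w^T z ≤ w^T E_Q[X]} as above. A minimal sketch for d = 2 and C = IR²_+ (all data hypothetical), which also checks the relation E^+(X) = −E^−(−X) at one point:

```python
# Two scenario measures on Omega = {0, 1, 2}; X is an IR^2-valued position.
Qset = [[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]]
X = [(1.0, -2.0), (0.0, 1.0), (3.0, 2.0)]
W = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # directions spanning C+ = IR^2_+

def EQ(Q, X):
    return tuple(sum(q * x[i] for q, x in zip(Q, X)) for i in range(2))

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def in_upper(z, X):
    # z in E^+(X)  iff  w.EQ[X] <= w.z for every pair (Q, w)
    return all(dot(w, EQ(Q, X)) <= dot(w, z) + 1e-12 for Q in Qset for w in W)

def in_lower(z, X):
    # z in E^-(X)  iff  w.z <= w.EQ[X] for every pair (Q, w)
    return all(dot(w, z) <= dot(w, EQ(Q, X)) + 1e-12 for Q in Qset for w in W)

# E^+(X) contains the componentwise maximum of the expectations ...
zs = [EQ(Q, X) for Q in Qset]
zmax = tuple(max(z[i] for z in zs) for i in range(2))
assert in_upper(zmax, X)
# ... and E^+(X) = -E^-(-X): z in E^+(X) iff -z in E^-(-X).
negX = [(-x[0], -x[1]) for x in X]
assert in_lower(tuple(-c for c in zmax), negX)
```

For C = IR²_+, scanning the extreme directions of C^+ already decides membership; for a general cone, a spanning set of directions of C^+ would be needed.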

Remark 13.
(1) The monotonicity assumption can be understood as a time consistency condition: if X is non-negative at terminal time (i.e., X ∈ K), then E_Q[X] is non-negative at initial time (i.e., E_Q[X] ∈ C). Since there are many options for preorders in dimension d ≥ 2 (rather than essentially only one in dimension 1), "non-negative" has to be understood with respect to preorders generated by convex cones; (2) The definitions of E^+, E^− are essentially different from the ones in [23] and, in contrast to the latter, maintain many features of the scalar versions. For example, a relation such as E^+(X) = −E^−(−X) is not true for the concepts from [23]; (3) While in this section, a direct approach is given, Section 4.5 above already includes and motivates a further generalization of upper/lower expectations to the case E^+(X), E^−(X) ⊆ IR^m with m ≤ d. Such a distinction is of course not necessary for univariate positions, but it can be vital if IR^d-valued positions are subject to an evaluation in fewer than d eligible instruments, e.g., currencies.
Clearly, the definition of E^+(X) can be seen as a dual representation. A primal version is obtained by looking at: where the bipolar theorem for the closed convex cone C is used. One obtains E^+ = f_{A_{E^+}} from Proposition 2 (a), where T : IR^d → L^0_d is defined by Tz = z1I (see Proposition 8). In particular, one obtains: One may note that this corresponds to the (trivial) E[X] = inf{r ∈ IR | E[X] ≤ r} for d = 1, C = IR_+, Q = {P}. Thus, upper expectations are meaningful substitutes for the (vector) expectation in the multivariate case since there is no reasonable way to take the infimum in the preordered vector space (IR^d, ≤_C) (and very often, it does not even exist). The lower expectation corresponds to the univariate E[X] = sup{r ∈ IR | r ≤ E[X]}.

Set-Valued Quantiles
The notation from Section 2.2 is used. Moreover, let C ⊆ IR^d be a closed convex cone, C^+ its positive dual cone, and X ∈ L^0_d. The function F_{X,C} : IR^d → [0, 1] defined by: F_{X,C}(z) = inf_{w ∈ C^+\{0}} P(w^T X ≤ w^T z) is called the lower cone distribution function of X (with respect to the cone C). If d = 1 and C = IR_+, then F_{X,IR_+} is just the (usual) cumulative distribution function of X, but for d > 1, it is different from the joint distribution function even if C = IR^d_+ (see [37]). The upper level set Q_{X,C}(p) = {z ∈ IR^d | F_{X,C}(z) ≥ p} is the lower p-quantile of X for p ∈ [0, 1]; it can easily be shown that Q_{X,C}(p) ∈ G(IR^d, C) (see ([37], Prop. 4) and also [38]). For a fixed p ∈ [0, 1], the function X → Q_{X,C}(p) is T-translative for Tz = z1I.
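Assuming the definition from [37], F_{X,C}(z) = inf_{w ∈ C^+\{0}} P(w^T X ≤ w^T z), the lower cone distribution function can be estimated from a sample by scanning finitely many directions w. A minimal sketch for d = 2 and C = IR²_+ (the sample and the grid of directions are hypothetical):

```python
import math

# Empirical sample of X with values in IR^2 (hypothetical data).
sample = [(0.1, 0.9), (0.8, 0.2), (-0.5, 0.4), (0.3, -0.1), (1.2, 1.1)]

def F_lower(z, sample, n_dirs=181):
    # F_{X,C}(z) = inf over w in C+\{0} of P(w.X <= w.z); for C = IR^2_+,
    # C+ = IR^2_+, so it suffices to scan unit directions in the first
    # quadrant (the function is positively homogeneous in w).
    best = 1.0
    for k in range(n_dirs):
        theta = 0.5 * math.pi * k / (n_dirs - 1)
        w = (math.cos(theta), math.sin(theta))
        p = sum(1 for x in sample
                if w[0] * x[0] + w[1] * x[1] <= w[0] * z[0] + w[1] * z[1])
        best = min(best, p / len(sample))
    return best

# F_{X,C} is monotone with respect to the order generated by C:
z = (0.8, 0.9)
assert F_lower((z[0] + 1.0, z[1] + 1.0), sample) >= F_lower(z, sample)
```

With a finite direction grid, the scan yields an upper bound for the true infimum; refining the grid tightens the estimate.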

Scalar (T, z * )-Translative Functions
First, a type of extended real-valued function is introduced, which will provide scalarizing functions for set-valued T-translative functions. Special cases of such functions were considered in [2,39], but the first reference in a finance context seems to be ( [17], Formula (CR), p. 590).
Let X, Z be topological linear spaces with topological duals X * , Z * .

Definition 7.
Let z* ∈ Z*\{0} and T : Z → X be an injective linear operator. An extended real-valued function ϕ : X → IR ∪ {±∞} is called (T, z*)-translative if: ∀x ∈ X, ∀z ∈ Z : ϕ(x + Tz) = ϕ(x) + z*(z). (17) If ϕ(0) = 0, then ϕ coincides with the continuous linear function z* on the image TZ of T; in applications such as monetary risk measure theory, TZ is a one-dimensional subspace, and (17) is the well-known "cash-additivity" property (see [19]). In ([17], Def. 3.4), this property for T = id_Z and Z a linear subspace of X was called translation invariance with respect to (Z, −z*). In ([39], Prop. 4.2), it was called "translative along M", where M ⊆ X corresponds to TZ in Definition 7. In the standard example, Z = IR^m and Tz = ∑_{k=1}^m z_k h_k for fixed h_1, . . . , h_m ∈ X. An element z* ∈ Z* can then be identified with a w ∈ IR^m such that z*(z) = w^T z for all z ∈ IR^m, and (17) becomes: ∀x ∈ X, ∀z ∈ IR^m : ϕ(x + ∑_{k=1}^m z_k h_k) = ϕ(x) + w^T z. If m = 1, h_1 = h ∈ X\{0}, and w = 1, (17) specializes further to: ∀x ∈ X, ∀s ∈ IR : ϕ(x + sh) = ϕ(x) + s, (18) and such functions are called translative in direction h or just h-translative. In this case, T : IR → X with Ts = sh.
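Property (18) can be checked numerically for a simple h-translative function: on X = IR³, the worst-case functional ϕ(x) = max_i x_i is translative in direction h = (1, 1, 1), while translativity in the sense of (18) fails for a non-constant direction. A minimal sketch:

```python
# phi(x) = max_i x_i is translative in direction h = (1, 1, 1):
# phi(x + s*h) = phi(x) + s for every s in IR, which is (18).
def phi(x):
    return max(x)

x = (0.3, -1.2, 0.7)
h = (1.0, 1.0, 1.0)
for s in (-2.0, -0.5, 0.0, 1.0, 3.7):
    shifted = tuple(xi + s * hi for xi, hi in zip(x, h))
    assert abs(phi(shifted) - (phi(x) + s)) < 1e-12

# For a non-constant direction, (18) fails in general:
h2 = (2.0, 1.0, 1.0)
bad = tuple(xi + 1.0 * hi for xi, hi in zip(x, h2))
assert abs(phi(bad) - (phi(x) + 1.0)) > 0.1
```

The choice of ϕ and of the test points is of course illustrative only; any h-translative function behaves in the same way along its direction h.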
The next goal is to prove a primal representation result, that is, to reconstruct ϕ from L_ϕ(0). The next result prepares the ground.

Remark 14.
To every function ϕ : X → IR ∪ {±∞}, one can assign a function f : X → F(IR, IR_+) by: f(x) = {r ∈ IR | ϕ(x) ≤ r}. Obviously, epi ϕ = graph f and L_ϕ(0) = A_f, and the function ϕ can be recovered from f by: ϕ(x) = inf f(x). Since epi ϕ = graph f, the function ϕ is convex (closed) if, and only if, f is convex (closed). Moreover, ϕ is translative in direction h ∈ X if, and only if, f is T-translative, where T : IR → X is defined by Ts = sh.
Proposition 10.
Let ϕ : X → IR ∪ {±∞} be translative in direction h. Then: Moreover, the set L_ϕ(0) is h-directionally closed (for a definition, see the last part of Example 3) and satisfies L_ϕ(0) − IR_+h = L_ϕ(0).
Moreover, with f as in Remark 14, we obtain from Proposition 2 (a):
Moreover, the conditions (b) and (c) are then satisfied for each z̄ ∈ E(z*, 1), and τ_{L,Tz̄} does not depend on the particular choice of z̄ in E(z*, 1).
Proof. Take x ∈ X and z ∈ Z, and set t = z*(z). Then, z − tz̄ ∈ ker z* and: Hence: Applying this inequality to x + Tz instead of x and −z instead of z, we obtain: and altogether, (17) follows.
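The reconstruction of an h-translative ϕ from its zero sublevel set, ϕ(x) = inf{s ∈ IR | x − sh ∈ L_ϕ(0)}, can be verified numerically for the hypothetical choice ϕ(x) = max_i x_i on IR² with h = (1, 1), for which L_ϕ(0) = {x | max_i x_i ≤ 0}:

```python
def phi(x):                     # the hypothetical h-translative function
    return max(x)

def in_L0(x):                   # x in L_phi(0)  iff  phi(x) <= 0
    return max(x) <= 0.0

h = (1.0, 1.0)

def phi_from_L0(x, grid):
    # primal reconstruction: phi(x) = inf { s in IR | x - s*h in L_phi(0) }
    return min(s for s in grid
               if in_L0(tuple(xi - s * hi for xi, hi in zip(x, h))))

grid = [k / 100.0 for k in range(-500, 501)]   # s in [-5, 5], step 0.01
for x in [(0.25, -1.0), (-0.75, -2.0), (3.0, 2.5)]:
    assert abs(phi_from_L0(x, grid) - phi(x)) < 1e-9
```

The test points were chosen so that the exact infimum lies on the grid; in general, the grid-based infimum is accurate only up to the grid resolution.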
In the previous proposition, one obtains the same functional for each z̄ ∈ E(z*, 1). This raises the question of when two functions of the form (22) coincide. An answer for the special case when X is a linear space of random variables was given in ([40], Proposition 1-a) (the sublinear case) and ([9], Proposition 5.1) (general risk measures). Note, however, that the following proposition does not necessarily cover lower semicontinuous functions since only the directional closure is involved, not the topological closure (compare Example 3 and its references for this type of closure).
Proof. One has x ∈ dcl_h(L − IR_+h) if, and only if, there are sequences {s_n}_{n∈IN} and {r_n}_{n∈IN} in IR with r_n ≥ 0 and x + (s_n + r_n)h ∈ L for all n ∈ IN and lim_{n→∞} s_n = 0, which in turn is equivalent to: Hence, dcl_h(L − IR_+h) = L_{τ_{L,h}}(0), and in the same way, dcl_k(M − IR_+k) = L_{τ_{M,k}}(0). Now, assume τ_{L,h} = τ_{M,k}. Then, L_{τ_{L,h}}(0) = L_{τ_{M,k}}(0), and consequently, dcl_h(L − IR_+h) = dcl_k(M − IR_+k).
If t ∈ IR, then: The same argument with reversed roles of (L, h) and (M, k) produces the converse inequality, so τ_{L,h} = τ_{M,k}.
Finally, a dual representation result for (T, z*)-translative functions is established by applying the method from Section 3.3.
Proof. In the light of the previous discussion, it suffices to note that, under the assumptions, ϕ = ϕ * * holds true, and in the definition of the Fenchel-Moreau biconjugate: one can restrict x * to those satisfying (T * x * )(z) = 1 since otherwise +∞ is subtracted under the supremum, which certainly does not contribute to it. The same argument applies to x * ∈ dom σ L ϕ (0) = barr L ϕ (0).
Such extensions were introduced in [24] and called "canonical extensions" of f ; special cases are the inf-and sup-translations used in [43] to reduce solutions of set optimization problems from "sets" to "points" (in X).
Using inf-and sup-extensions, one can extend T-translative functions from X to P (X). The feasibility of these extensions depends on a feature of the image lattice: a complete lattice (L, ≤) with an addition + is said to be inf-additive and sup-additive, respectively, if: ∀D, E ⊆ L : inf(D + E) = inf D + inf E and sup(D + E) = sup D + sup E where the addition of sets is the usual elementwise addition with the extension D + ∅ = ∅ + D = ∅. The relevance of such properties was pointed out in [44], as well as the fact that, in general, the lattice (P (Z, C), ⊇) is inf-additive, but not sup-additive, whereas it is vice versa for (P (Z, −C), ⊆). It was shown in [43] that inf-/sup-additivity is equivalent to the existence of a residuation operation, which can be considered as a generalized inverse addition (see Section 3.3).
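For intuition, consider the degenerate instance G(IR, IR_+): its nontrivial elements are the half-lines [a, ∞), which can be represented by their endpoints, so that ⊇ corresponds to ≤ on the endpoints, the lattice infimum to the minimum, and ⊕ to ordinary addition. A minimal sketch of inf-additivity (the failure of sup-additivity mentioned above requires d ≥ 2 and is not visible in this one-dimensional case):

```python
# Elements of G(IR, IR_+) are half-lines [a, oo); represent each by its
# endpoint a.  Then D ⊇ E corresponds to a_D <= a_E, the infimum of a
# family is the minimum of the endpoints, and Minkowski addition is +.
D = [1.0, 3.0, -2.0]            # the family {[1, oo), [3, oo), [-2, oo)}
E = [0.5, 4.0]                  # the family {[0.5, oo), [4, oo)}

# Elementwise Minkowski sums of the two families:
DE = [a + b for a in D for b in E]

# inf-additivity: inf(D + E) = inf D + inf E.
assert min(DE) == min(D) + min(E)
```

The identity min over sums equals sum of minima is exactly the inf-additivity of (P(Z, C), ⊇) in this endpoint representation.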
The following result works in the setup of Section 3.1. Proof. Directly from (4) and the inf-additivity of (P (Z, C), ⊇) and the sup-additivity of (P (Z, −C), ⊆), respectively.
Note that the claim is not true in general for f and g. Using these concepts, one can extend translative functions from vector spaces to their power sets, in particular to sets of random variables if the vector space is a subspace of L^0_d. If such a set of random variables happens to be the set of selectors of a random set, one also obtains an extension of a translative function to random sets. In ([23], Section 4.1), such a procedure was called the minimal extension of a nonlinear expectation provided the lattice is G(Z, C), although it was used without any reference to the complete lattice framework. Of course, not all random sets have (characterizing) sets of selectors (cf. [45]), but under some assumptions (e.g., for p-integrable, bounded random compact sets), nonlinear expectations of random sets can indeed be characterized by inf-extensions of functions defined on random vectors. An example is ([45], Proposition 2.2.38), whereas ([45], Formula (2.2.28)) is the definition of the "selection superlinear expectation" as an inf-extension over a set of selectors with the underlying lattice G(IR^d, −IR^d_+).
In order to define convexity and monotonicity type properties for functions on P(X), one needs an algebraic structure, usually on a smaller set, which is more appropriate for applications. Let X be a topological linear space, B ⊆ X a closed convex cone, and G(X, B) the set of all closed convex subsets of X, which are stable under the addition of B (see Section 2.1). The addition on G(X, B) is D ⊕ E = cl{x + y | x ∈ D, y ∈ E}, the closure of the usual Minkowski addition, with the convention D ⊕ ∅ = ∅ ⊕ E = ∅ for D, E ∈ G(X, B). Multiples are defined by sD = {sx | x ∈ D} for s > 0 and 0D = B for all D ∈ G(X, B). With these operations, G(X, B) becomes a conlinear space; compare ([13], Section 2.3) for more details and references. Of course, one could also use P(X, B) or C(X, B) as starting points.
Note that the case B = {0} is not excluded; on the contrary, it will play an important role. The set G(X) := G(X, {0}) is the set of all closed convex subsets of X. Note also that G(X, B) ⊆ G(X); hence, each function F : G(X) → P (Z, C) also provides a function F : G(X, B) → P (Z, C) by restriction. Proof. The first part is obvious, and the second part follows from Proposition 15.
The question if the restriction of the inf-/sup-extension of a set-valued function coincides with the original one does not have such a straightforward answer. If F : G(X, B) → P (Z, C) is a function, then the inf-extension of its restriction to X is: