Article

Set-Valued T-Translative Functions and Their Applications in Finance

by Andreas H. Hamel 1,*,† and Frank Heyde 2,†
1 Faculty of Economics and Management, Free University of Bozen-Bolzano, I-39031 Bruneck-Brunico, Italy
2 Faculty of Mathematics and Computer Science, Freiberg University of Mining and Technology, 09596 Freiberg, Germany
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(18), 2270; https://doi.org/10.3390/math9182270
Submission received: 19 July 2021 / Revised: 4 September 2021 / Accepted: 10 September 2021 / Published: 15 September 2021
(This article belongs to the Special Issue Applied Mathematical Methods in Financial Risk Management)

Abstract:
A theory is developed for set-valued functions that are translative with respect to a linear operator. It is shown that such functions cover a wide range of applications, from projections in Hilbert spaces and set-valued quantiles for vector-valued random variables to scalar or set-valued risk measures in finance with defaultable or nondefaultable securities. Primal, dual, and scalar representation results are given, among them an infimal convolution representation, which is not so well known even in the scalar case. Along the way, new concepts of set-valued lower/upper expectations are introduced, and dual representation results are formulated using such expectations. An extension to random sets is discussed at the end. The principal methodology consists of applying the complete lattice framework of set optimization.

1. Introduction

Set-valued risk measures for multivariate financial positions [1,2,3,4] make sense if there is more than one asset eligible for risk-compensating deposits or as an accounting unit. A typical situation is Kabanov’s model of a multicurrency market with frictions [5]. In such market models, objects such as superhedging prices turn into sets of superhedging portfolios, since the operation of taking the infimum or supremum can no longer be performed in a meaningful way if applied to sets in $\mathbb{R}^d$ with respect to the order relations generated, e.g., by solvency cones. This makes it extremely difficult to find multidimensional analogs of intervals of arbitrage-free prices or good deal bounds [6] or to apply the utility indifference pricing method.
Moreover, dual representation theorems such as the superhedging theorems in [5,7] involve vector-valued consistent price processes as dual variables instead of martingale measures and their densities. One may ask if this is an ad hoc construction or can be embedded into a general duality theory.
“Set-valuedness” provides an elegant way to overcome such difficulties and to arrive at concepts and formulas that are very close to the scalar case. More specifically, infima and suprema can be understood in complete lattices of sets that share essentially all order-related properties with the extended real line except the totality of the order relation. It should be noted that such set-valuedness is also intrinsic in common one-dimensional constructions. For example, sets of lower and upper quantiles are intervals of the form $a + \mathbb{R}_+$ and $b - \mathbb{R}_+$, as are the sets of superhedging and subhedging cash amounts in market models with a single numéraire.
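To make the one-dimensional remark concrete, here is a small illustrative sketch (ours, not from the paper): for a finite sample, the set of lower p-quantiles $\{z : P(X \leq z) \geq p\}$ is a half-line $a + \mathbb{R}_+$ and the set of upper p-quantiles $\{z : P(X \geq z) \geq 1 - p\}$ is a half-line $b - \mathbb{R}_+$; the helper functions below, named for this illustration only, compute the endpoints a and b.

```python
def lower_quantile_endpoint(sample, p):
    """Endpoint a of the lower p-quantile set {z : P(X <= z) >= p} = [a, +inf)."""
    xs = sorted(sample)
    n = len(xs)
    for x in xs:
        if sum(v <= x for v in xs) / n >= p:
            return x
    return xs[-1]

def upper_quantile_endpoint(sample, p):
    """Endpoint b of the upper p-quantile set {z : P(X >= z) >= 1 - p} = (-inf, b]."""
    xs = sorted(sample, reverse=True)
    n = len(xs)
    for x in xs:
        if sum(v >= x for v in xs) / n >= 1 - p:
            return x
    return xs[-1]

# For the sample {1, 2, 3, 4} and p = 1/2, the set of medians is [2, 3]:
assert lower_quantile_endpoint([1, 2, 3, 4], 0.5) == 2
assert upper_quantile_endpoint([1, 2, 3, 4], 0.5) == 3
```

Both quantile sets are half-lines rather than single numbers: the set-valued point of view is already present in dimension one.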
In this paper, a general framework for set-valued translative functions and their representation by scalar families is provided, and it is shown that it does not only cover set-valued risk measures, but also set-valued lower and upper expectations, projections, aggregation mappings, and set-valued quantiles for multivariate positions. Dualization procedures in complete lattices of sets are shown to produce the “right” dual variables.
The concepts and definitions are more general than the corresponding ones in the literature; some are even new, such as set-valued lower and upper expectations, and they cover, for example, also the case of multiple defaultable securities as eligible assets for risk compensation [8] (the case of a single defaultable security was already discussed, e.g., in [9]). A number of already existing applications, such as set-valued systemic risk measures [10] and conditional risk measures [11,12], were intentionally left out since their inclusion would have required several more pages of technical preparation. However, an unusual interpretation of conditional expectations as adjoint operators in Section 4.2 below indicates directions for the extension of the theory proposed in this paper to the dynamic case.

2. Preliminaries

2.1. Complete Lattices of Sets

Let Z be a nontrivial, real linear space. The power set of Z (including the empty set ∅) is denoted by $\mathcal{P}(Z)$. The usual elementwise (Minkowski) sum of two nonempty sets $A, B \subseteq Z$ is defined by:
$$A + B = \{a + b \mid a \in A, b \in B\}$$
and extended by $A + \emptyset = \emptyset + A = \emptyset$ for $A \in \mathcal{P}(Z)$. A convex cone is a nonempty set $C \subseteq Z$ satisfying $sC \subseteq C$ for all $s > 0$ and $C + C \subseteq C$, where $sA = \{sz \mid z \in A\}$ for a nonempty set $A \in \mathcal{P}(Z)$. Moreover, for nonempty sets $A \subseteq Z$, the operation $-A = \{-a \mid a \in A\}$ is also used, as well as $-\emptyset = \emptyset$.
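For readers who prefer to experiment, the Minkowski sum and its empty-set convention can be sketched for finite subsets of $\mathbb{R}$ as follows (an illustration only; the function name is ours):

```python
def minkowski_sum(A, B):
    """Elementwise sum A + B = {a + b | a in A, b in B};
    by convention, A + {} = {} + A = {}."""
    if not A or not B:
        return set()
    return {a + b for a in A for b in B}

assert minkowski_sum({0, 1}, {10, 20}) == {10, 11, 20, 21}
assert minkowski_sum({0, 1}, set()) == set()  # the empty-set convention
```

The convention $A + \emptyset = \emptyset$ makes ∅ absorbing for the addition, which is what turns it into the top (respectively, bottom) element of the lattices introduced below.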
Let $C \subseteq Z$ be a convex cone with $0 \in C$. Such a cone generates a vector preorder, i.e., a reflexive, transitive relation denoted by $\leq_C$, which is compatible with the algebraic operations on Z via:
$$y \leq_C z \iff z - y \in C.$$
The cone C is also called the positivity cone of $\leq_C$ since it can be recovered by $C = \{z \in Z \mid 0 \leq_C z\}$.
The set:
$$\mathcal{P}(Z, C) = \{D \subseteq Z \mid D = D + C\}$$
along with its obvious twin $\mathcal{P}(Z, -C)$ serves as the basic image set of set-valued functions in this paper. It is known [13] that $(\mathcal{P}(Z, C), \supseteq)$ and $(\mathcal{P}(Z, -C), \subseteq)$ are complete lattices. The result is stated for the sake of future reference.
Proposition 1.
The pair $(\mathcal{P}(Z, C), \supseteq)$ is a complete lattice in which the infimum and the supremum of a set $\mathcal{A} \subseteq \mathcal{P}(Z, C)$ are:
$$\inf \mathcal{A} = \bigcup_{A \in \mathcal{A}} A \quad \text{and} \quad \sup \mathcal{A} = \bigcap_{A \in \mathcal{A}} A, \quad (1)$$
respectively. The pair $(\mathcal{P}(Z, -C), \subseteq)$ is a complete lattice in which the infimum and the supremum of a set $\mathcal{B} \subseteq \mathcal{P}(Z, -C)$ are:
$$\inf \mathcal{B} = \bigcap_{B \in \mathcal{B}} B \quad \text{and} \quad \sup \mathcal{B} = \bigcup_{B \in \mathcal{B}} B, \quad (2)$$
respectively.
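A hedged finite sketch of Proposition 1 (our illustration): take $Z = \mathbb{R}$ and $C = \mathbb{R}_+$, so that the nontrivial elements of $\mathcal{P}(Z, C)$ are half-lines $[a, \infty)$; restricted to an integer grid, the union/intersection formulas can be checked directly.

```python
GRID = range(-10, 11)

def half_line(a):
    """[a, inf) intersected with the grid; an element of P(R, R_+) in miniature."""
    return frozenset(x for x in GRID if x >= a)

family = [half_line(a) for a in (-2, 0, 3)]
inf_family = frozenset.union(*family)         # infimum in (P(Z,C), ⊇) is the union
sup_family = frozenset.intersection(*family)  # supremum is the intersection

assert inf_family == half_line(-2)  # union of [a, inf) over a is [min a, inf)
assert sup_family == half_line(3)   # intersection is [max a, inf)
```

The endpoints behave exactly like real numbers under min/max, which is the point of reading ⊇ as "≤" in this lattice.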
Remark 1.
The situation is completely symmetric: just the roles of the infimum and supremum are swapped. Moreover, ∅ is the top element in $(\mathcal{P}(Z, C), \supseteq)$ and the bottom element in $(\mathcal{P}(Z, -C), \subseteq)$ if one understands that ⊇ in $\mathcal{P}(Z, C)$ corresponds to ≤ for real numbers, whereas ⊆ in $\mathcal{P}(Z, -C)$ corresponds to ≤ in $\mathbb{R}$. This is motivated by the interpretation of C as the positivity cone in Z with respect to the preorder $\leq_C$ on Z: for $A, B \in \mathcal{P}(Z, C)$, $A \supseteq B$ means that for each $b \in B$, there is $a \in A$ with $a \leq_C b$, whereas both sets are directed upwards with respect to $\leq_C$ (clearly, ⊇ can be considered as “≤” in this case). Similarly, for $A, B \in \mathcal{P}(Z, -C)$, $A \subseteq B$ means that for each $a \in A$, there is $b \in B$ with $a \leq_C b$, whereas both sets are directed downwards with respect to $\leq_C$. Such order relations with the same understanding of ⊇ as “≤” for sets that are directed upwards and ⊆ as “≤” for sets that are directed downwards are actually the basis for extending the ≤-relation from the rational numbers to the real numbers via upper and lower Dedekind cuts, i.e., sets of the form $[a, +\infty)$ and $(-\infty, b]$, respectively, for rational numbers a, b. Within an economic context, such order relations with the same interpretation were used in [14], for example. More details and references can be found in ([13], Sec. 2.2) and [15].
Several lattices are used in the sequel whose elements form subsets of $\mathcal{P}(Z, C)$ and $\mathcal{P}(Z, -C)$. We set:
$$\mathcal{C}(Z, C) = \{D \subseteq Z \mid D = \operatorname{co}(D + C)\} \quad \text{and} \quad \mathcal{C}(Z, -C) = \{D \subseteq Z \mid D = \operatorname{co}(D - C)\},$$
where $\operatorname{co} A$ denotes the convex hull of a set $A \subseteq Z$ and $D - C = D + (-C)$.
As before, $(\mathcal{C}(Z, C), \supseteq)$, as well as $(\mathcal{C}(Z, -C), \subseteq)$, are complete lattices. The “intersection” formulas for the supremum/infimum coincide with the ones in (1) and (2), respectively, and one has:
$$\inf \mathcal{A} = \operatorname{co} \bigcup_{A \in \mathcal{A}} A \quad \text{and} \quad \sup \mathcal{B} = \operatorname{co} \bigcup_{B \in \mathcal{B}} B \quad (3)$$
for $\mathcal{A} \subseteq \mathcal{C}(Z, C)$, $\mathcal{B} \subseteq \mathcal{C}(Z, -C)$.
Furthermore, if Z is a topological linear space, one can define:
$$\mathcal{F}(Z, C) = \{D \subseteq Z \mid D = \operatorname{cl}(D + C)\} \quad \text{and} \quad \mathcal{F}(Z, -C) = \{D \subseteq Z \mid D = \operatorname{cl}(D - C)\},$$
$$\mathcal{G}(Z, C) = \{D \subseteq Z \mid D = \operatorname{cl}\operatorname{co}(D + C)\} \quad \text{and} \quad \mathcal{G}(Z, -C) = \{D \subseteq Z \mid D = \operatorname{cl}\operatorname{co}(D - C)\},$$
where $\operatorname{cl} A$ stands for the topological closure of $A \subseteq Z$.
Again, $(\mathcal{F}(Z, C), \supseteq)$, $(\mathcal{G}(Z, C), \supseteq)$, as well as $(\mathcal{F}(Z, -C), \subseteq)$, $(\mathcal{G}(Z, -C), \subseteq)$ are complete lattices with the same “intersection” formulas as in (1) and (2). The “union” formulas are those in (3), where the closure replaces the convex hull in $\mathcal{F}$ and the closure of the convex hull is taken in $\mathcal{G}$.

2.2. Further Notation

Throughout the paper, the following notation is used:
  • If X is a locally convex space, a linear space $X^*$ together with a duality pairing (a bilinear functional from $X \times X^*$ to $\mathbb{R}$ that separates the points of X and $X^*$) is considered, which turns $(X, X^*)$ into a dual pair of linear spaces (see ([16], Definition 5.90)). The duality pairing for $x \in X$, $x^* \in X^*$ is written as $x^*(x)$. The topology on X is assumed to be consistent with the dual pair, i.e., $X^*$ can be identified with the topological dual of X (the linear space of all continuous linear functionals on X). Such a topology is always separated (also known as Hausdorff; see ([16], Lemma 5.97)). A topology on $X^*$ will not be considered in this paper;
  • If X is a separated locally convex space and $D \subseteq X$ a closed convex set, the set $\operatorname{rec} D = \{y \in X \mid D + \{y\} \subseteq D\} \subseteq X$ is a closed convex cone, the recession cone of D. It is the largest (with respect to inclusion) closed convex cone B satisfying $D + B \subseteq D$. If $(X, X^*)$ is a dual pair of locally convex spaces, the function $\sigma_D: X^* \to \mathbb{R} \cup \{\pm\infty\}$ defined by $\sigma_D(x^*) = \sup_{x \in D} x^*(x)$ is the usual support function of D. Its domain, namely $\operatorname{dom} \sigma_D = \{x^* \in X^* \mid \sigma_D(x^*) < +\infty\} \subseteq X^*$, is a convex cone, which is called the barrier cone of D and denoted by $\operatorname{barr} D$;
  • If $(X, X^*)$ is a dual pair of locally convex spaces and $B \subseteq X$ is a cone, the set:
$$B^+ = \{x^* \in X^* \mid \forall x \in B: x^*(x) \geq 0\}$$
    is called the (positive) dual cone of B;
  • $(\Omega, \mathcal{F})$ denotes a measurable space and $L_d^0 := L_d^0(\Omega, \mathcal{F})$ the linear space of random variables over $(\Omega, \mathcal{F})$ which take values in $\mathbb{R}^d$; the case $d = 1$ is denoted by $L^0 := L^0(\Omega, \mathcal{F})$;
  • $(\Omega, \mathcal{F}, P)$ denotes a probability space and $L_d^p := L_d^p(\Omega, \mathcal{F}, P)$ for $p = 0$ or $p \in [1, \infty)$ the linear space of equivalence classes with respect to P-a.s. equality of p-integrable random variables with values in $\mathbb{R}^d$, and $L_d^\infty := L_d^\infty(\Omega, \mathcal{F}, P)$ is the corresponding linear space of essentially bounded random variables with values in $\mathbb{R}^d$; $L_d^p$ is a Banach space for $p \in [1, \infty]$; the case $d = 1$ is denoted by $L^p$;
  • 𝟙 denotes the random variable which takes the value 1 for all $\omega \in \Omega$ in $L^0$ and P-almost surely in $L^p$ for $p = 0$ or $p \in [1, \infty]$, respectively;
  • The pair $(L_d^p, L_d^q)$ with $p \in (1, \infty)$ and $p^{-1} + q^{-1} = 1$, or $p = 1$, $q = \infty$, or $p = \infty$, $q = 1$, is a dual pair with respect to the duality pairing given by $E[X^\top Y]$ for $X \in L_d^p$, $Y \in L_d^q$. Topologies on $L_d^p$ that are consistent with the dual pair $(L_d^p, L_d^q)$ are considered. If $p \in [1, \infty)$, the usual Banach space topology does the job, whereas if $p = \infty$, the weak* topology is considered.
Finally, we point out a slight abuse of notation due to the traditional use of capital letters such as X as a (topological or locally convex etc.) linear space in a functional analytic framework and as a random variable in a stochastic/finance setting. We trust the reader can always make the distinction.

3. Set-Valued T-Translative Functions—Theory

3.1. The General Model and Primal Representations

In the first part of this section, it is assumed that X and Z are nontrivial, real linear spaces and C Z is a convex cone with 0 C .
Definition 1.
Let $T: Z \to X$ be an injective linear operator. A function $f: X \to \mathcal{P}(Z)$ is called translative with respect to T (or just T-translative) if:
$$\forall x \in X, \forall z \in Z: \quad f(x + Tz) = f(x) + \{z\}. \quad (4)$$
An outlook to financial applications is as follows: Let $Z = \mathbb{R}^m$ with a set $h_1, \ldots, h_m \in X$ of m linearly independent elements and $T: \mathbb{R}^m \to X$ defined by $Tz = \sum_{k=1}^m z_k h_k$ (below, this is referred to as the “standard example” for T). If $X = L_d^0$, which comprises the terminal payoffs of contingent claims, and the elements $h_1, \ldots, h_m$ are eligible for risk compensation or as hedging instruments, then T models the corresponding strategies, i.e., Tz is the payoff of a portfolio constructed of eligible instruments (see, e.g., [8,17]). With $d = m = 1$ and $Tz = z\mathbb{1}$, one easily recovers cash additive risk measures [18] (see [19] for a modern exposition) as extended real-valued $(-T)$-translative functions.
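As a minimal numerical sketch of the scalar case just mentioned (our example, not from the paper): the worst-case risk measure $\rho(x) = -\min(x)$ over finitely many scenarios is cash additive, $\rho(x + m\mathbb{1}) = \rho(x) - m$, which is exactly translativity with respect to $-T$ for $Tz = z\mathbb{1}$.

```python
def rho(x):
    """Worst-case risk measure over finitely many scenarios: rho(x) = -min(x)."""
    return -min(x)

x = [3.0, -1.0, 2.0]
m = 0.5
# Cash additivity: adding the sure payoff m*1 reduces the risk by m,
# i.e., rho(x + T m) = rho(x) - m with T z = z*1.
assert rho([xi + m for xi in x]) == rho(x) - m
```

The same check works for any cash additive risk measure; the worst-case measure is chosen only because it needs no probabilistic input.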
Translative functions can be represented via their zero sublevel sets defined by:
$$A_f = \{x \in X \mid 0 \in f(x)\}. \quad (5)$$
Vice versa, a function $f_A: X \to \mathcal{P}(Z)$ can be assigned to a set $A \subseteq X$ by means of:
$$f_A(x) = \{z \in Z \mid x - Tz \in A\}. \quad (6)$$
The following feature will play a crucial role.
Definition 2.
A set $A \subseteq X$ is called $(T, C)$-translative if:
$$\forall z \in C: \quad A + \{-Tz\} \subseteq A.$$
Many properties of f can equivalently be characterized by the properties of A f , and vice versa. The basic primal representation result for T-translative functions reads as follows.
Proposition 2.
(a) Let f : X P ( Z ) be a T-translative functions function. Then, f = f A f . If f additionally maps into P ( Z , C ) , then A f is ( T , C ) -translative.
(b) Let A X be an arbitrary set. Then, the function f A : X P ( Z ) is T-translative functions and A = A f A . Moreover, if A is ( T , C ) -translative, then f A maps into P ( Z , C ) .
Proof .
(a) From the T-translativity of f, we obtain for all $x \in X$:
$$f_{A_f}(x) = \{z \in Z \mid x - Tz \in A_f\} = \{z \in Z \mid 0 \in f(x - Tz)\} = \{z \in Z \mid 0 \in f(x) + \{-z\}\} = f(x).$$
Now, assume that f additionally maps into $\mathcal{P}(Z, C)$. Then, for each $x \in A_f$, $z \in C$, we obtain $0 \in f(x) = f(x - Tz) + \{z\} \subseteq f(x - Tz) + C = f(x - Tz)$; hence, $x - Tz \in A_f$.
(b) Take $x \in X$ and $z \in Z$. Then:
$$f_A(x + Tz) = \{y \in Z \mid x + Tz - Ty \in A\} = \{y \in Z \mid x - T(y - z) \in A\} = f_A(x) + \{z\},$$
so $f_A$ is T-translative. Moreover, one has:
$$A_{f_A} = \{x \in X \mid 0 \in f_A(x)\} = \{x \in X \mid 0 \in \{z \in Z \mid x - Tz \in A\}\} = A.$$
If A is $(T, C)$-translative, $x \in X$ and $z \in C$, then:
$$f_A(x) + \{z\} = f_A(x + Tz) = \{y \in Z \mid x + Tz - Ty \in A\} = \{y \in Z \mid x - Ty \in A + \{-Tz\}\} \subseteq f_A(x)$$
since $A + \{-Tz\} \subseteq A$ by the $(T, C)$-translativity of A; thus, $f_A$ maps into $\mathcal{P}(Z, C)$. This completes the proof of the proposition. □
Remark 2.
Of course, Proposition 2 has a $\mathcal{P}(Z, -C)$-valued analog, as do many such results in the following; one can just replace C by $-C$ and change the order relation, if necessary. This will not be stated below, but is used frequently.
Corollary 1.
(a) Let $f: X \to \mathcal{P}(Z)$ be T-translative. Then, f maps into $\mathcal{P}(Z, C)$ if, and only if, $A_f$ is $(T, C)$-translative.
(b) Let $A \subseteq X$ be a set. Then, $f_A$ maps into $\mathcal{P}(Z, C)$ if, and only if, A is $(T, C)$-translative.
Proof .
(a) If f maps into P ( Z , C ) , then A f is ( T , C ) -translative by Proposition 2 (a). Vice versa, if A f is ( T , C ) -translative, then Proposition 2 (b) yields that f A f maps into P ( Z , C ) , and hence, also f does, since f A f = f by Proposition 2 (a).
(b) f A is T-translative by construction, so one can apply Part (a) to it and obtains the result since A = A f A by Proposition 2 (b). □
Proposition 2 and Corollary 1 are the prototypes of the correspondence results for translative functions and their zero sublevel sets. In the (a)-part (as in (5)), a translative function is considered as the primary object, whereas in the (b)-part (as in (6)), a set $A \subseteq X$ is the departure point. Compare the discussions in ([20], Sec. 1), ([8], Sec. 1) within a financial context. From a mathematical point of view, both approaches are completely equivalent as far as T-translative functions mapping into $\mathcal{P}(Z, C)$ and $(T, C)$-translative sets are concerned.
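A hedged finite sketch of the correspondence $f = f_{A_f}$ (our toy model, not from the paper): take $X = \mathbb{R}^2$ (two scenarios), $Z = \mathbb{R}$, $Tz = z \cdot (1, 1)$, and the T-translative function $f(x) = [\max(x), \infty)$, represented on an integer grid. The code checks $f = f_{A_f}$ with $A_f = \{x : 0 \in f(x)\}$ and $f_A(x) = \{z : x - Tz \in A\}$.

```python
ZGRID = range(-10, 11)  # finite stand-in for Z = R

def f(x):
    """f(x) = [max(x), inf): a T-translative function for T z = z*(1, 1)."""
    return frozenset(z for z in ZGRID if z >= max(x))

def in_A_f(x):
    """x ∈ A_f  iff  0 ∈ f(x)."""
    return 0 in f(x)

def f_from_A(x):
    """f_A(x) = {z : x - T z ∈ A} for A = A_f."""
    return frozenset(z for z in ZGRID if in_A_f(tuple(xi - z for xi in x)))

for x in [(-3, 2), (0, 0), (4, -5)]:
    assert f(x) == f_from_A(x)  # Proposition 2(a): f = f_{A_f}
```

The zero sublevel set $A_f = \{x : \max(x) \leq 0\}$ plays the role of an acceptance set, and f is fully recovered from it.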
In the sequel, only the (a)-parts of the correspondence results will be stated, but they can always be complemented by the corresponding (b)-parts whose proofs follow by a simple application of the (a)-part as in the proof of the preceding corollary.
The next result provides a representation of a T-translative f as an infimal convolution. For this purpose, the function $J_T: X \to \mathcal{P}(Z, C)$ given by:
$$J_T(x) = \begin{cases} \{z\} + C & : x = Tz \\ \emptyset & : \text{otherwise} \end{cases} \quad (7)$$
is introduced. Note that $J_T$ is well defined since T is assumed to be injective. Moreover, it can straightforwardly be checked that $J_T$ is T-translative (as well as its $\mathcal{P}(Z, -C)$-valued analog).
Proposition 3.
A function $f: X \to \mathcal{P}(Z, C)$ is T-translative if, and only if, there is a function $g: X \to \mathcal{P}(Z, C)$ such that:
$$\forall x \in X: \quad f(x) = (g \mathbin{\square} J_T)(x) = \inf\{g(x_1) + J_T(x_2) \mid x_1 + x_2 = x\}. \quad (8)$$
Proof. 
For one direction, one just shows that $g \mathbin{\square} J_T$ indeed is T-translative and maps into $\mathcal{P}(Z, C)$. For the other direction, observe that the equation $f = f_{A_f}$ can be written as:
$$\forall x \in X: \quad f(x) = f_{A_f}(x) = \inf\{I_{A_f}(x_1) + J_T(x_2) \mid x_1 + x_2 = x\} = (I_{A_f} \mathbin{\square} J_T)(x),$$
where $I_A$ is the set-valued indicator function of A: $I_A(x) = C$ if $x \in A$ and $I_A(x) = \emptyset$ if $x \notin A$. □
While the representation via zero sublevel sets (also known as acceptance sets in risk measure theory) is standard and well known, the representation as an infimal convolution as in Proposition 3 is not so well known, even in the scalar case. Compare ([21], Def. 4.5), where a scalar version of J T was used to define the numéraire-invariant (and monotone) hull of an extended real-valued function. It was used as a fundamental tool for constructing risk measures and their dual representations in [22]. Note the perfect analog of the set-valued infimal convolution to the scalar one thanks to the complete lattice property.
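In the toy model used above (two scenarios, $Tz = z \cdot (1, 1)$, $A = \{x : \max(x) \leq 0\}$, $C = \mathbb{R}_+$ on a grid), the infimal convolution of Proposition 3 can be checked directly. Recall that the infimum in $(\mathcal{P}(Z, C), \supseteq)$ is the union, and that $J_T$ is ∅ off $T(Z)$, so only splittings $x = (x - Tz) + Tz$ contribute. This is an illustration under our own naming, not code from the paper:

```python
ZGRID = range(-10, 11)
C = frozenset(z for z in ZGRID if z >= 0)  # C = R_+ on the grid

def in_A(x):
    """Acceptance-type set A = {x : max(x) <= 0}."""
    return max(x) <= 0

def inf_convolution(x):
    """(I_A □ J_T)(x) = union over z with x - Tz ∈ A of ({z} + C), T z = z*(1, 1)."""
    pieces = [frozenset(z + c for c in C if z + c in ZGRID)
              for z in ZGRID if in_A(tuple(xi - z for xi in x))]
    return frozenset().union(*pieces)

# agrees with the T-translative function f(x) = [max(x), inf):
assert inf_convolution((-3, 2)) == frozenset(z for z in ZGRID if z >= 2)
```

The splitting into $I_A$ and $J_T$ is exactly the decomposition of f into its acceptance-set part and its translativity part described after the proposition.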
The representation in (8) can be understood as splitting the T-translative function f into a part related to the T-translativity and a part related to its zero sublevel set.
Remark 3.
The T-translativity property readily implies:
$$\forall z \in Z: \quad f(Tz) = f(0) + \{z\}.$$
Thus, f behaves as a linear function plus a constant on the linear subspace T Z X . If f is sublinear or superlinear (see below), then f ( 0 ) is a convex cone and f is of the type “point plus cone” on T Z , i.e., basically a vector-valued function.
If $f: X \to \mathcal{P}(Z)$ is a function, $-f: X \to \mathcal{P}(Z)$ is defined by $(-f)(x) = -f(x)$. Clearly, if f maps into $\mathcal{P}(Z, C)$, then $-f$ maps into $\mathcal{P}(Z, -C)$, and vice versa.
Remark 4.
Let $f_1: X \to \mathcal{P}(Z, C)$ be a T-translative function. The functions $f_2, f_3, f_4$ defined by:
$$f_2(x) = f_1(-x), \quad f_3(x) = -f_1(x), \quad f_4(x) = -f_1(-x)$$
are variants of $f_1$. The following two facts are straightforward:
(1) $f_1, f_2$ map into $\mathcal{P}(Z, C)$, while $f_3, f_4$ map into $\mathcal{P}(Z, -C)$;
(2) $f_1$ and $f_4$ are T-translative, while $f_2$ (as well as $f_3$) satisfies:
$$\forall x \in X, \forall z \in Z: \quad f_2(x + Tz) = f_2(x) + \{-z\}.$$
Thus, $f_2, f_3$ are $(-T)$-translative, which means that the previous and following representation results apply to them with obvious adaptations.

3.2. Further Correspondences

A few concepts related to complete lattice-valued functions are needed for the subsequent results. The following sets are associated with a function $f: X \to \mathcal{P}(Z)$, namely its graph and its domain, defined by:
$$\operatorname{graph} f = \{(x, z) \in X \times Z \mid z \in f(x)\} \quad \text{and} \quad \operatorname{dom} f = \{x \in X \mid f(x) \neq \emptyset\},$$
respectively. The condition $\operatorname{dom} f \neq \emptyset$ means that f is not everywhere equal to the top element if f maps into $(\mathcal{P}(Z, C), \supseteq)$, and likewise, it is not everywhere equal to the bottom element if f maps into $(\mathcal{P}(Z, -C), \subseteq)$. A function $f: X \to \mathcal{P}(Z)$ is called proper if $\operatorname{dom} f \neq \emptyset$ and $f(x) \neq Z$ for all $x \in X$. This definition can be used for $\mathcal{P}(Z, C)$-valued, as well as for $\mathcal{P}(Z, -C)$-valued functions.
Definition 3.
A function $f: X \to \mathcal{P}(Z, C)$ is called:
(a1) Convex if graph f is a convex subset of $X \times Z$;
(b1) Positively homogeneous if graph f is a cone in $X \times Z$;
(c1) Subadditive if $\operatorname{graph} f + \operatorname{graph} f \subseteq \operatorname{graph} f$;
(d1) Sublinear if graph f is a convex cone in $X \times Z$;
(e1) Monotone with respect to $\leq_B$ (or just B-monotone) if $x_1 \leq_B x_2$ implies $f(x_1) \supseteq f(x_2)$.
A function $g: X \to \mathcal{P}(Z, -C)$ is called:
(a2) Concave if graph g is a convex subset of $X \times Z$;
(b2) Positively homogeneous if graph g is a cone in $X \times Z$;
(c2) Superadditive if $\operatorname{graph} g + \operatorname{graph} g \subseteq \operatorname{graph} g$;
(d2) Superlinear if graph g is a convex cone in $X \times Z$;
(e2) Monotone with respect to $\leq_B$ (or just B-monotone) if $x_1 \leq_B x_2$ implies $g(x_1) \subseteq g(x_2)$.
In this definition, $B \subseteq X$ is a convex cone with $0 \in B$.
Remark 5.
Three comments about convexity-related properties are in order.
First, if f is convex, then f(x) is a convex subset of Z for all $x \in X$, i.e., a convex function with values in $\mathcal{P}(Z, C)$ automatically maps into $\mathcal{C}(Z, C)$. Similarly, a concave function with values in $\mathcal{P}(Z, -C)$ maps into $\mathcal{C}(Z, -C)$. Moreover, if f is positively homogeneous, then f(0) is a cone, thus a convex cone whenever f is sublinear or superlinear (see Remark 3).
Secondly, the convexity of a $\mathcal{P}(Z, C)$-valued function f is equivalent to Jensen’s inequality:
$$\forall x, y \in X, \forall s \in (0, 1): \quad f(sx + (1 - s)y) \supseteq s f(x) + (1 - s) f(y),$$
i.e., ⊇ corresponds to ≤ for real numbers (see the interpretation in Remark 1). Similarly, the concavity of a $\mathcal{P}(Z, -C)$-valued function g is equivalent to:
$$\forall x, y \in X, \forall s \in (0, 1): \quad s g(x) + (1 - s) g(y) \subseteq g(sx + (1 - s)y),$$
i.e., ⊆ corresponds to ≤ for real numbers in this case (again, see Remark 1). Likewise, the subadditivity of $f: X \to \mathcal{P}(Z, C)$ is equivalent to $f(x + y) \supseteq f(x) + f(y)$ for all $x, y \in X$, with a parallel condition for superadditive g.
Thirdly, one should note that the graph of a $\mathcal{P}(\mathbb{R}, \mathbb{R}_+)$-valued function corresponds to the epigraph and the graph of a $\mathcal{P}(\mathbb{R}, -\mathbb{R}_+)$-valued function to the hypograph of an extended real-valued function. This again confirms that the notions in Definition 3 (a1), (a2) (as well as (c1), (c2), (d1), (d2)) are perfectly consistent with the scalar ones and are true generalizations of them. One may compare [23]: the set-valued upper and lower expectations therein do not share most of these features.
Remark 6.
With Remark 4 in view, one may note that $f_1$ is B-monotone if, and only if, $f_4$ is B-monotone as well, and that $f_2$ and $f_3$ are $(-B)$-monotone.
The above definitions have the great advantage that most known scalar results remain valid. For example, a positively homogeneous $\mathcal{P}(Z, C)$-valued function is convex if, and only if, it is subadditive, and a function $f: X \to \mathcal{P}(Z, C)$ is convex if, and only if, the function $-f: X \to \mathcal{P}(Z, -C)$ is concave. Compare ([13], Sec. 4) for such relationships and more references.
If Z is a topological space, the following concept can be introduced.
Definition 4.
A set $A \subseteq X$ is called T-directionally closed if for any net $(z_\lambda)_{\lambda \in \Lambda}$ converging to $0 \in Z$ and any $x \in X$,
$$(\forall \lambda \in \Lambda: x + Tz_\lambda \in A) \quad \text{implies} \quad x \in A.$$
If the topology on Z is first-countable, then nets can be replaced by sequences.
One should note that only the topology on Z enters this definition, not a (potential) one on X. Of course, if A is closed with respect to a topology on X, then it is T-directionally closed for each continuous T.
If, in addition, X is a topological space, the following concepts can be considered.
Definition 5.
A function f : X P ( Z ) is called:
(a) Closed-valued if f ( x ) is a closed set for all x X ;
(b) Level-closed if the sublevel set $\{x \in X \mid z \in f(x)\}$ is closed for each $z \in Z$;
(c) Closed if graph f is a closed subset of X × Z with respect to the product topology.
Remark 7.
Two comments about the closedness properties are in order.
First, a closed $\mathcal{P}(Z, C)$-valued function has closed values and closed sublevel sets, i.e., it automatically maps into $\mathcal{F}(Z, C)$. If such a function is additionally convex, it maps into $\mathcal{G}(Z, C)$. Similarly, a closed $\mathcal{P}(Z, -C)$-valued function maps into $\mathcal{F}(Z, -C)$, and into $\mathcal{G}(Z, -C)$ if it is additionally concave.
Secondly, a function $f: X \to \mathcal{F}(Z, C)$ is closed if, and only if, it is lattice-lower-semicontinuous, i.e.,
$$f(\bar{x}) \supseteq \liminf_{x \to \bar{x}} f(x) = \sup_{U \in \mathcal{N}_X} \inf_{x \in \bar{x} + U} f(x),$$
where $\mathcal{N}_X$ is a neighborhood base of $0 \in X$ and inf/sup are understood in $(\mathcal{F}(Z, C), \supseteq)$. This and similar relationships are due to [24]. This also shows the perfect analogy of the lattice constructions to the scalar case. Of course, parallel remarks apply to closed functions mapping into $\mathcal{F}(Z, -C)$ and lattice-upper-semicontinuity.
Proposition 4.
Let $f: X \to \mathcal{P}(Z, C)$ be T-translative. Then:
(a) f is convex if, and only if, $A_f$ is convex;
(b) f is subadditive if, and only if, $A_f$ is closed under addition;
(c) f is positively homogeneous if, and only if, $A_f$ is a cone;
(d) f is sublinear if, and only if, $A_f$ is a convex cone;
(e) $f(0) \neq \emptyset$ if, and only if, $TZ \cap A_f \neq \emptyset$;
(f) $f(0) \neq Z$ if, and only if, $TZ \cap (X \setminus A_f) \neq \emptyset$;
(g) $f(x) \neq \emptyset$ for all $x \in X$ if, and only if, $X = A_f + TZ$;
(h) $f(x) \neq Z$ for all $x \in X$ if, and only if, $X = (X \setminus A_f) + TZ$;
(i) f is B-monotone if, and only if, $A_f - B \subseteq A_f$;
(j) If Z is a topological linear space, then f is closed-valued if, and only if, $A_f$ is T-directionally closed;
(k) If X, Z are topological linear spaces and T is continuous, then f is closed if, and only if, $A_f$ is closed.
Proof. 
We shall indicate the proofs for (a), (j), and (k). The remaining parts are straightforward and therefore omitted:
(a) Assume that f is convex. Take $x_1, x_2 \in A_f$. Then, $(x_1, 0), (x_2, 0) \in \operatorname{graph} f$, and by convexity, $s(x_1, 0) + (1 - s)(x_2, 0) \in \operatorname{graph} f$ for all $s \in [0, 1]$. This gives $sx_1 + (1 - s)x_2 \in A_f$ for all $s \in [0, 1]$; hence, $A_f$ is convex.
Conversely, if $A_f$ is convex and $(x_1, z_1), (x_2, z_2) \in \operatorname{graph} f$, then:
$$0 \in f(x_1) + \{-z_1\} = f(x_1 - Tz_1), \quad 0 \in f(x_2) + \{-z_2\} = f(x_2 - Tz_2)$$
by (4); hence, $x_1 - Tz_1, x_2 - Tz_2 \in A_f$. The convexity of $A_f$ gives $sx_1 + (1 - s)x_2 - T(sz_1 + (1 - s)z_2) \in A_f$ for all $s \in [0, 1]$, and again, the translativity property (4) yields $sz_1 + (1 - s)z_2 \in f(sx_1 + (1 - s)x_2)$, i.e., $s(x_1, z_1) + (1 - s)(x_2, z_2) \in \operatorname{graph} f$;
(j) First, let f be closed-valued, and take $x \in X$, $z_\lambda \to 0$ with $x + Tz_\lambda \in A_f$ for all $\lambda \in \Lambda$. Then, $0 \in f(x + Tz_\lambda) = f(x) + \{z_\lambda\}$; hence, $-z_\lambda \in f(x)$, and by the closedness, $0 \in f(x)$, which is $x \in A_f$.
Conversely, let $A_f$ be T-directionally closed, and take $z_\lambda \to z \in Z$ with $z_\lambda \in f(x)$ for all $\lambda \in \Lambda$. Then, the T-translativity of f implies $x - Tz_\lambda = x - Tz + T(z - z_\lambda) \in A_f$; hence, $x - Tz \in A_f$ since $A_f$ is T-directionally closed and $z - z_\lambda \to 0$. This implies $0 \in f(x - Tz) = f(x) + \{-z\}$, so $z \in f(x)$, as desired;
(k) A closed function $f: X \to \mathcal{P}(Z, C)$ has closed values and closed sublevel sets (see Remark 7), so in particular, $A_f$ is closed. Conversely, if $A_f$ is closed and $(x_\lambda, z_\lambda) \in \operatorname{graph} f$ with $(x_\lambda, z_\lambda) \to (x, z)$, then the T-translativity of f yields $0 \in f(x_\lambda - Tz_\lambda)$; hence, $x_\lambda - Tz_\lambda \in A_f$, which implies $x - Tz \in A_f$ by the closedness of $A_f$ and the continuity of T. Consequently, again by the T-translativity, $(x, z) \in \operatorname{graph} f$. □
As usual, Proposition 4 has a twin for $\mathcal{P}(Z, -C)$-valued T-translative functions.

3.3. Dual Representation

Let X, Z be two locally convex, real topological linear spaces with topological duals $X^*$, $Z^*$ and $C \subseteq Z$ a closed convex cone. No interior point or pointedness assumption is made, which means that $C = \{0\}$ or a half-space is possible. By $C^+$, we denote the positive dual cone of C, i.e., $C^+ = \{z^* \in Z^* \mid \forall z \in C: z^*(z) \geq 0\}$. The first goal is a dual representation of closed convex T-translative functions. It will be a consequence of the Fenchel–Moreau (biconjugation) theorem for set-valued functions as first established in [25]. Here, the version in ([13], Theorem 5.8) is used.
The function $S^+_{(x^*, z^*)}: X \to \mathcal{G}(Z, C)$ for $z^* \in C^+ \setminus \{0\}$ and $x^* \in X^*$ is defined by:
$$S^+_{(x^*, z^*)}(x) = \{z \in Z \mid x^*(x) \leq z^*(z)\}.$$
It is half-space-valued since its values are superlevel sets of the continuous linear functional $z^*$. These functions are additive and positively homogeneous; see ([25], Proposition 6), ([13], Proposition 5.1). Such functions are called conlinear. The abbreviation $H^+(z^*) = S^+_{(x^*, z^*)}(0) = \{z \in Z \mid 0 \leq z^*(z)\}$ is also used in the following.
Let $f: X \to \mathcal{G}(Z, C)$ be a proper, closed, convex function. The Fenchel–Moreau theorem ([13], Theorem 5.8) states that it equals its biconjugate $f^{**}$, which is defined via:
$$f^*(x^*, z^*) = \sup_{x \in X} \left(S^+_{(x^*, z^*)}(x) \ominus f(x)\right) = \bigcap_{x \in X} \left(S^+_{(x^*, z^*)}(x) \ominus f(x)\right)$$
and:
$$f^{**}(x) = \sup_{x^* \in X^*,\, z^* \in C^+ \setminus \{0\}} \left(S^+_{(x^*, z^*)}(x) \ominus f^*(x^*, z^*)\right) = \bigcap_{x^* \in X^*,\, z^* \in C^+ \setminus \{0\}} \left(S^+_{(x^*, z^*)}(x) \ominus f^*(x^*, z^*)\right).$$
The operation ⊖ is defined by $A \ominus B = \{z \in Z \mid B + \{z\} \subseteq A\}$. It can be understood as an inf-residuation in the complete lattice $\mathcal{G}(Z, C)$ with respect to the addition $A \oplus B = \operatorname{cl}\{a + b \mid a \in A, b \in B\}$ (see ([13], Section 2.3)), and it replaces the usual difference in vector spaces.
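A hedged one-dimensional sketch of the inf-residuation (our illustration, not from the paper): in $\mathcal{G}(\mathbb{R}, \mathbb{R}_+)$ the nontrivial elements are half-lines $[a, \infty)$, and $A \ominus B = \{z : B + \{z\} \subseteq A\}$ comes out as $[a - b, \infty)$ for $A = [a, \infty)$, $B = [b, \infty)$, which is checked below on a finite grid.

```python
GRID = range(-20, 21)

def half_line(a):
    """[a, inf) intersected with the grid."""
    return frozenset(x for x in GRID if x >= a)

def residuate(A, B):
    """A ⊖ B = {z : B + {z} ⊆ A}, with the sum checked inside the grid."""
    return frozenset(z for z in GRID
                     if all(b + z in A for b in B if b + z in GRID))

assert residuate(half_line(3), half_line(1)) == half_line(2)  # [3,∞) ⊖ [1,∞) = [2,∞)
```

On endpoints, ⊖ acts as the ordinary difference $a - b$: this is the sense in which the inf-residuation replaces subtraction in the set-valued calculus.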
This type of set-valued conjugation shares most properties with the corresponding one for extended real-valued functions. In particular:
  • The conjugate of an infimal convolution of two functions is the sum of the two conjugates (see ([25], Lemma 2), and ([26], Theorem 2.3.1 (ix)) for the scalar case);
  • The conjugate of the set-valued indicator function $I_A(x) = C$ if $x \in A$ and $I_A(x) = \emptyset$ if $x \notin A$ for $A \subseteq X$ is the function (see ([25], Section 4.4 (A)), and ([26], p. 79) for the scalar case):
$$(I_A)^*(x^*, z^*) = \bigcap_{x \in A} S^+_{(x^*, z^*)}(x) = \bigcap_{x \in A} \{z \in Z \mid x^*(x) \leq z^*(z)\} = \{z \in Z \mid \sigma_A(x^*) \leq z^*(z)\}, \quad (10)$$
which can be considered as a $\mathcal{G}(Z, C)$-valued version of the ordinary support function $\sigma_A(x^*) = \sup_{x \in A} x^*(x)$. Note that $(I_A)^*(x^*, z^*) = \emptyset$ for $x^* \notin \operatorname{dom} \sigma_A = \{y^* \in X^* \mid \sigma_A(y^*) < +\infty\}$; this fact will be useful later on.
This sets the stage for an application of the representation (8) to determine the conjugate of a T-translative function f: it is the sum of the two conjugates of $I_{A_f}$ and $J_T$. The conjugate of $J_T$ can readily be computed as (see already ([13], Section 5.5); terms with $x \notin T(Z)$ contribute Z to the intersection and can be dropped):
$$J_T^*(x^*, z^*) = \bigcap_{x \in X} \left(S^+_{(x^*, z^*)}(x) \ominus J_T(x)\right) = \bigcap_{u \in Z} \left(S^+_{(x^*, z^*)}(Tu) \ominus (\{u\} + C)\right) = \bigcap_{u \in Z} \{z \in Z \mid \{z + u\} + C \subseteq S^+_{(x^*, z^*)}(Tu)\} = \{z \in Z \mid \forall u \in Z: z^*(z + u) \geq x^*(Tu)\} = \{z \in Z \mid z^*(z) \geq \sup_{u \in Z}(T^*x^* - z^*)(u)\} = \begin{cases} H^+(z^*) & : z^* = T^*x^* \\ \emptyset & : z^* \neq T^*x^* \end{cases}$$
Thus, for $f = I_{A_f} \mathbin{\square} J_T$, one obtains:
$$f^*(x^*, z^*) = (I_{A_f})^*(x^*, z^*) + J_T^*(x^*, z^*) = \begin{cases} \bigcap_{x \in A_f} S^+_{(x^*, z^*)}(x) & : z^* = T^*x^* \\ \emptyset & : z^* \neq T^*x^* \end{cases}$$
This translates to the following dual representation result.
Theorem 1.
If $f: X \to \mathcal{G}(Z, C)$ is a proper, closed, convex T-translative function, then:
$$\forall x \in X: \quad f(x) = \bigcap_{x^* \in \operatorname{dom} \sigma_{A_f},\, T^*x^* \in C^+ \setminus \{0\}} \left(S^+_{(x^*, T^*x^*)}(x) \ominus (I_{A_f})^*(x^*, T^*x^*)\right). \quad (12)$$
If f is additionally sublinear, then $A_f$ is a closed convex cone and (12) simplifies to:
$$\forall x \in X: \quad f(x) = \bigcap_{x^* \in A_f^-,\, T^*x^* \in C^+ \setminus \{0\}} S^+_{(x^*, T^*x^*)}(x), \quad (13)$$
where $A_f^- = -A_f^+ = \{x^* \in X^* \mid \forall x \in A_f: x^*(x) \leq 0\}$.
Proof. 
This follows from the Fenchel–Moreau theorem ([13], Theorem 5.8), the representation (8) of f as an infimal convolution, and the form of the conjugates of $I_{A_f}$, $J_T$. The restriction $x^* \in \operatorname{dom} \sigma_{A_f}$ is possible since $(I_A)^*(x^*, z^*) = \emptyset$ for $x^* \notin \operatorname{dom} \sigma_A$ and:
$$S^+_{(x^*, T^*x^*)}(x) \ominus \emptyset = \{z \in Z \mid \emptyset + \{z\} \subseteq S^+_{(x^*, T^*x^*)}(x)\} = Z;$$
thus, these $x^*$ can be dropped from the intersection in (12).
Note also that if $A_f$ is a closed convex cone, then $(I_{A_f})^*(x^*, z^*) = H^+(z^*)$ if $x^* \in A_f^-$ and $(I_{A_f})^*(x^*, z^*) = \emptyset$ if $x^* \notin A_f^-$. Moreover, $S^+_{(x^*, z^*)}(x) \ominus H^+(z^*) = S^+_{(x^*, z^*)}(x)$. □
Recall the definition of the recession cone $\operatorname{rec} D = \{y \in X \mid D + \{y\} \subseteq D\}$ of a closed convex set $D \subseteq X$, as well as the one of the barrier cone $\operatorname{barr} D = \operatorname{dom} \sigma_D$ from Section 2.2. The recession cone is the negative dual of the barrier cone, i.e., $\operatorname{rec} D = (\operatorname{barr} D)^- = -(\operatorname{barr} D)^+ = \{x \in X \mid \forall x^* \in \operatorname{barr} D: x^*(x) \leq 0\}$ (see ([27], Ch. 1, Sec. 5)).
If f is B-monotone with a convex cone $B \subseteq X$, then $-B \subseteq \operatorname{rec} A_f$ by Proposition 4 (i); hence, $\operatorname{dom} \sigma_{A_f} \subseteq \operatorname{cl}(\operatorname{barr} A_f) = (\operatorname{rec} A_f)^- \subseteq B^+$. Thus, the condition $x^* \in \operatorname{dom} \sigma_{A_f}$ already covers $x^* \in B^+$. However, sometimes, it is useful to involve the latter condition.
As with many other results, Theorem 1 has a twin for $\mathcal{G}(Z,C)$-valued concave functions.
Remark 8.
(1) One has:
$$S^+_{(x^*,T^*x^*)}(x) = \left\{ z \in Z \mid x^*(x) \leq (T^*x^*)(z) \right\} = \left\{ z \in Z \mid x^*(x) \leq x^*(Tz) \right\},$$
and it is easy to verify that the function $x \mapsto S^+_{(x^*,T^*x^*)}(x)$ is T-translative. Thus, only T-translative functions of type $S^+$ enter (12). In particular, a sublinear T-translative function is the pointwise supremum of such functions according to (13);
(2) The condition $z^* = T^*x^* \in C^+\setminus\{0\}$ coupling the two dual variables transforms into a time consistency condition for financial market models, which was pointed out in ([3], Sections 4 and 5.4) (see Example 4 and Sections 4.5 and 4.7 below). Thus, the general framework for set-valued T-translative functions and the set-valued duality theory do exactly what they are supposed to do: produce the right dual variables and appropriate dual descriptions. More details on time consistency conditions within the framework of conditional risk measures can be found in [11,12];
(3) Formula (12) can be given a more traditional form via the following calculation. Using (10) and the definition of S + , one obtains:
$$S^+_{(x^*,T^*x^*)}(x) \ominus I_{A_f}^*(x^*,T^*x^*) = \left\{ y \in Z \mid I_{A_f}^*(x^*,T^*x^*) + \{y\} \subseteq S^+_{(x^*,T^*x^*)}(x) \right\} = \left\{ y \in Z \mid \{z \in Z \mid \sigma_{A_f}(x^*) \leq (T^*x^*)(z)\} + \{y\} \subseteq \{z \in Z \mid x^*(x) \leq x^*(Tz)\} \right\} = \left\{ y \in Z \mid \{z+y \in Z \mid \sigma_{A_f}(x^*) \leq (T^*x^*)(z+y-y)\} \subseteq \{z \in Z \mid x^*(x) \leq x^*(Tz)\} \right\} = \left\{ y \in Z \mid \{z \in Z \mid \sigma_{A_f}(x^*) \leq (T^*x^*)(z-y)\} \subseteq \{z \in Z \mid x^*(x) \leq x^*(Tz)\} \right\} = \left\{ y \in Z \mid \{z \in Z \mid \sigma_{A_f}(x^*) + x^*(Ty) \leq x^*(Tz)\} \subseteq \{z \in Z \mid x^*(x) \leq x^*(Tz)\} \right\} = \left\{ y \in Z \mid x^*(x) \leq \sigma_{A_f}(x^*) + x^*(Ty) \right\}.$$
Thus, the knowledge of the (scalar) support function σ A f for x * with T * x * C + \ { 0 } is enough for (12).
Corollary 2.
Let $Z = \mathbb{R}^m$ and T be given by $Tz = \sum_{k=1}^m z_k h_k$ (the "standard example"). Then:
$$(T^*x^*)(z) = x^*\Big( \sum_{i=1}^m z_i h_i \Big) = \sum_{i=1}^m z_i x^*(h_i);$$
thus, $T^*x^*$ can be identified with $(x^*(h_1), \ldots, x^*(h_m))^\top \in \mathbb{R}^m$, and (12) becomes:
$$f(x) = \bigcap_{\substack{x^* \in \operatorname{dom} \sigma_{A_f} \\ T^*x^* \in C^+\setminus\{0\}}} \left\{ z \in Z \mid x^*(x) \leq \sigma_{A_f}(x^*) + \sum_{i=1}^m z_i x^*(h_i) \right\}.$$
Another special case deserves attention. Assume that there is $\bar{z} \in C \setminus \{0\}$ satisfying $z^*(\bar{z}) > 0$ for all $z^* \in C^+\setminus\{0\}$. In this case, $E_C(\bar{z},1) = \{z^* \in C^+ \mid z^*(\bar{z}) = 1\}$ is a base of the cone $C^+$, i.e., every element $z^* \in C^+\setminus\{0\}$ has a unique representation $z^* = s b^*$ with $s > 0$ and $b^* \in E_C(\bar{z},1)$. In this case, intersections such as those in (12) and (13) only need to run over $E_C(\bar{z},1)$ instead of over $C^+\setminus\{0\}$.
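For intuition, the base construction can be checked numerically in a minimal sketch (assumptions, not taken from the text: $C = \mathbb{R}^3_+$, so $C^+ = \mathbb{R}^3_+$ as well, and $\bar{z} = (1,1,1)^\top$; then $E_C(\bar{z},1)$ is the unit simplex):

```python
import numpy as np

# Assumption: C = R^3_+, hence C^+ = R^3_+; choose zbar = (1,1,1)^T.
# Then z*(zbar) = sum(z*) > 0 for every z* in C^+ \ {0}, and
# E_C(zbar, 1) = {b* in C^+ : b*^T zbar = 1} is the unit simplex.
zbar = np.ones(3)

def base_decomposition(zstar):
    """Decompose z* in C^+ \\ {0} uniquely as z* = s * b*, s > 0, b* in the base."""
    s = zstar @ zbar           # s = z*(zbar) > 0
    return s, zstar / s        # b* lies on the simplex: b*^T zbar = 1

zstar = np.array([2.0, 0.0, 6.0])
s, bstar = base_decomposition(zstar)
assert s > 0 and np.isclose(bstar @ zbar, 1.0)
assert np.allclose(s * bstar, zstar)   # the representation reconstructs z*
```

The uniqueness is immediate here: s is forced to be $z^*(\bar{z})$, which pins down $b^*$.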

4. Set-Valued T-Translative Functions—Examples

In this section, a list of examples is provided starting with general versions and moving to more concrete applications in finance and statistics. Among other things, it is shown that projections on linear subspaces in Hilbert spaces are special instances of T-translative functions, and a new proposal for lower and upper expectations for multivariate random variables is given.

4.1. Aggregation Maps

Let X be a linear space, $M \subseteq X$ a linear subspace, and $D \subseteq X$ a nonempty set. The function $L_{D,M} : X \to \mathcal{P}(M)$ defined by:
$$L_{D,M}(x) = (\{x\} - D) \cap M$$
is called the aggregation map with respect to D.
Proposition 5.
The aggregation map $L_{D,M}$ is translative with respect to the identity operator $\operatorname{id}_M : M \to X$ on X restricted to M. Moreover, if $B \subseteq X$ is a convex cone with $0 \in B$ satisfying $D - B \subseteq D$, then $L_{D,M}$ maps into $\mathcal{P}(M,C)$ with $C = B \cap M$, and it holds:
$$A_{L_{D,M}} = \left\{ x \in X \mid 0 \in (\{x\} - D) \cap M \right\} = D.$$
Finally, L D , M is convex if, and only if, D is convex.
Proof. 
Take $z \in M$, $x \in X$. Then:
$$L_{D,M}(x + \operatorname{id}_M(z)) = (\{x+z\} - D) \cap M = \{z\} + \left( (\{x\} - D) \cap M \right).$$
Take $z \in L_{D,M}(x)$ and $c \in C = B \cap M \subseteq M$. Then, $z + c \in M$, as well as:
$$z + c \in (\{x\} - D) + B = \{x\} - (D - B) \subseteq \{x\} - D;$$
hence, $z + c \in (\{x\} - D) \cap M = L_{D,M}(x)$. The formula for $A_{L_{D,M}}$ is obvious.
The last claim follows from Proposition 4 (a). □
Remark 9.
Aggregation maps can be seen as just another way of writing T-translative functions: Let Z be another linear space, $T : Z \to X$ an injective linear operator, and $A \subseteq X$. Set $M = TZ$. Then, $y \in (\{x\} - A) \cap TZ$ if, and only if, there is $z \in f_A(x)$ such that $y = Tz$. Hence:
$$L_{A,M}(x) = (\{x\} - A) \cap M = \left\{ y \in X \mid y = Tz,\ z \in f_A(x) \right\},$$
i.e., the aggregation map $L_{A,M}$ can be understood as a superposition of the T-translative function $f_A$ and T defined by $(T \circ f_A)(x) = \{Tz \mid z \in f_A(x)\}$.
On the other hand, the inverse $T^{-1} : TZ \to Z$ exists since T is injective, and one can write:
$$f_A(x) = \left\{ T^{-1}y \mid y \in L_{A,TZ}(x) = (\{x\} - A) \cap TZ \right\},$$
i.e., $f_A$ can be seen as the superposition of the aggregation map $L_{A,TZ}$ and $T^{-1}$.
A (linear) scalarization can very often be identified with the projection onto a one-dimensional subspace: a standard example is the expected value of a random variable in $L^2$, which is its projection onto the one-dimensional subspace of constants; the expected value satisfies the translativity property $E[X + s\mathbb{1}] = E[X] + s$. Clearly, such a projection comes with a loss of information, and it therefore makes sense to ask for a similar procedure with a more than one-dimensional subspace as the image space: this would still "simplify" the original object, but retain more information. A decision maker then has to decide "how much scalarization" is needed. It is worth noting that a "total" scalarization leads to a total order, whereas a more general aggregation map may preserve some non-totality of the original order relation in X, which can be a desirable feature.
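The one-dimensional prototype — the expectation as a translative projection onto the constants — can be illustrated with samples (a sketch; the sample size, seed, and shift are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=10_000)          # a random variable, represented by samples
s = 2.5

# Translativity of the expectation: E[X + s*1] = E[X] + s.
assert np.isclose(np.mean(X + s), np.mean(X) + s)

# The same map seen as the L^2 projection onto the constants:
# Proj_const(X) = <X, 1>/<1, 1> * 1, whose coefficient is the (sample) mean.
ones = np.ones_like(X)
proj_coeff = (X @ ones) / (ones @ ones)
assert np.isclose(proj_coeff, np.mean(X))
```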
The remaining part of this section deals with the dual representation of aggregation maps. To this end, it is now assumed that X is a separated locally convex space that forms a dual pair with its topological dual $X^*$, as in Section 3.3. Let $M \subseteq X$ be a closed subspace supplied with the topology induced by X. Then, the embedding operator $T : M \to X$ is continuous. By Proposition 4 (a), (k), $L_{D,M} : X \to \mathcal{P}(M)$ is closed and convex if, and only if, the set:
$$A_{L_{D,M}} = \left\{ x \in X \mid 0 \in (\{x\} - D) \cap M \right\} = D$$
is closed and convex. In this case, the recession cone $\operatorname{rec} D$ is a closed convex cone with $D - (-\operatorname{rec} D) \subseteq D$, i.e., $L_{D,M}$ maps into $\mathcal{G}(M,C)$ with $C = (-\operatorname{rec} D) \cap M$ by Proposition 2 (b).
For the embedding operator $T : M \to X$, the element $T^*x^* \in M^*$ is the restriction of the continuous linear function $x^* \in X^*$ to the linear subspace M since one has:
$$\forall x^* \in X^*,\ z \in M \colon \quad (T^*x^*)(z) = x^*(Tz) = x^*(z).$$
Moreover, if $x^* \in \operatorname{dom} \sigma_D = \operatorname{barr} D \subseteq (\operatorname{rec} D)^-$, then $(T^*x^*)(z) = x^*(z) \geq 0$ for all $z \in C = (-\operatorname{rec} D) \cap M$; hence, $T^*x^* \in C^+$. Finally, $T^*x^* \neq 0$ holds if, and only if, $x^* \notin \ker T^* = \{x^* \in X^* \mid \forall z \in M \colon x^*(z) = 0\} =: M^\perp$. The set $M^\perp$ is also called the annihilator of M in $X^*$ (see ([16], 5.106 Definition)).
Thus, if D is closed and convex and $L_{D,M}$ is proper, one has:
$$\forall x \in X \colon \quad L_{D,M}(x) = \bigcap_{x^* \in (\operatorname{dom} \sigma_D) \setminus M^\perp} \left( S^+_{(x^*,T^*x^*)}(x) \ominus I_D^*(x^*,T^*x^*) \right).$$
Moreover, one has from (10):
$$I_D^*(x^*,T^*x^*) = \left\{ z \in M \mid \sigma_D(x^*) \leq x^*(z) \right\},$$
and by Remark 8, (3),
$$S^+_{(x^*,T^*x^*)}(x) \ominus I_D^*(x^*,T^*x^*) = \left\{ y \in M \mid x^*(x) \leq x^*(y) + \sigma_D(x^*) \right\}.$$
Altogether, this gives the dual representation:
$$\forall x \in X \colon \quad L_{D,M}(x) = \bigcap_{x^* \in (\operatorname{barr} D) \setminus M^\perp} \left\{ z \in M \mid x^*(x) - \sigma_D(x^*) \leq x^*(z) \right\}.$$
Note that, in contrast to the general case, only one dual variable x * appears in these formulas due to the fact that T is a simple embedding operator.
If D is a closed convex cone, then $L_{D,M}$ becomes sublinear, and one has $\sigma_D(x^*) = 0$ for $x^* \in D^- = -D^+$ and $\sigma_D(x^*) = +\infty$ for $x^* \notin D^-$; hence:
$$\forall x \in X \colon \quad L_{D,M}(x) = \bigcap_{x^* \in D^- \setminus M^\perp} \left\{ z \in M \mid x^*(x) \leq x^*(z) \right\} = \bigcap_{x^* \in D^- \setminus M^\perp} S^+_{(x^*,x^*)}(x);$$
thus, $L_{D,M}$ is the supremum of collinear functions, which are the set-valued replacements of scalar linear functions.
Example 1.
This example is another special case of an aggregation map. It was used in [8] as a base model for risk measures with multiple eligible assets. In this model, a vector subspace $V \subseteq L^0_d$ is considered together with another subspace $M \subseteq V$ whose elements are interpreted as payoffs of trading strategies eligible for risk compensation. If $A \subseteq V$ is a set of financial positions acceptable for a financial institution, the set:
$$L_{A,M}(X) = \left\{ Z \in M \mid X + Z \in A \right\} = \left( \{-X\} + A \right) \cap M$$
is the collection of all eligible payoffs Z which make the overall position $X + Z$ acceptable. If a (linear) pricing functional $\pi : M \to \mathbb{R}$ is given, then the function:
$$\varrho_{A,M,\pi}(X) = \inf\left\{ \pi(Z) \mid X + Z \in A \right\}$$
was defined as an extended real-valued risk measure on V in ([17], p. 590), ([8], Formula (5)). This model already appeared in [28] and also in [29]. It has intimate relationships with good deal pricing, as outlined, e.g., in [29].
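A toy version of the scalar risk measure $\varrho_{A,M,\pi}$ on a finite scenario space may illustrate the definition (assumptions, not taken from the text: eligible payoffs are cash positions $z\mathbb{1}$ priced by $\pi(z\mathbb{1}) = z$, and A consists of the scenario-wise non-negative positions):

```python
import numpy as np

# Toy sketch of rho_{A,M,pi}(X) = inf{ pi(Z) : X + Z in A } on a finite
# scenario space. Assumptions: M = cash positions z*1 with pi(z*1) = z,
# A = {positions that are >= 0 in every scenario}.
def rho(X):
    # X + z*1 >= 0 in every scenario  <=>  z >= -min(X),
    # so the cheapest eligible payoff costs -min(X): the worst-case loss.
    return -np.min(X)

X = np.array([-3.0, 1.0, 4.0])        # payoffs in three scenarios
assert rho(X) == 3.0                  # 3 units of cash make X acceptable
assert rho(X + 1.0) == rho(X) - 1.0   # cash-translativity
```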

4.2. Projections and Conditional Expectations

Let X be a Hilbert space, $M \subseteq X$ a closed subspace, and $M^\perp$ the orthogonal complement of M in X. Then, with $D = M^\perp$ (with the notation from the previous example):
$$L_{M^\perp,M}(x) = \left\{ \operatorname{Proj}_M(x) \right\},$$
i.e., $L_{M^\perp,M}(x)$ contains a single element, which is the projection $x_M$ of x onto M. Indeed, if $z \in L_{M^\perp,M}(x)$, then:
$$z \in M \quad \text{as well as} \quad z \in \{x\} - M^\perp = \{x_M + x_{M^\perp}\} - M^\perp \subseteq \{x_M\} + M^\perp,$$
which implies $z - x_M \in M \cap M^\perp$; hence, $z = x_M$. In this case, $D - B \subseteq D$ means $M^\perp - B \subseteq M^\perp$; hence, $B \subseteq M^\perp$, and $C = B \cap M = \{0\}$ is the only possibility; thus, $L_{M^\perp,M}$ maps into $\mathcal{G}(M, \{0\})$, and one has $A_{L_{M^\perp,M}} = M^\perp$. This example also shows that the case $C = \{0\}$ can be useful and should not be ruled out.
Since X and M are Hilbert spaces, continuous linear functionals over them can be identified with elements of X or M itself. The relevant collinear functions are:
$$S^+_{(x^*,z^*)}, \qquad x^* \in X,\ z^* \in M.$$
Since $A_{L_{M^\perp,M}} = M^\perp$, it follows:
$$I_{M^\perp}^*(x^*,z^*) = \sup_{x \in M^\perp} S^+_{(x^*,z^*)}(x) = \bigcap_{x \in M^\perp} \left\{ z \in M \mid x^*(x) \leq z^*(z) \right\} = \left\{ z \in M \mid \forall x \in M^\perp \colon x^*(x) \leq z^*(z) \right\}$$
for $x^* \in X$, $z^* \in M$. Since $M^\perp$ is a linear subspace, this yields $I_{M^\perp}^*(x^*,z^*) = \emptyset$ for $x^* \notin M$; hence, also, $L_{M^\perp,M}^*(x^*,z^*) = \emptyset$ for $x^* \notin M$, and $I_{M^\perp}^*(x^*,z^*) = H^+(z^*)$ for $x^* \in M$.
Since T is the embedding of M into X, $T^*$ becomes the projection from X onto M. Consequently, if $x^* \in M$, we find $J_T^*(x^*,z^*) = H^+(x^*)$ for $z^* = T^*x^* = x^*$ and $J_T^*(x^*,z^*) = \emptyset$ whenever $z^* \neq x^*$. Therefore,
$$\forall x^* \in M \colon \quad L_{M^\perp,M}^*(x^*,x^*) = H^+(x^*)$$
and $L_{M^\perp,M}^*(x^*,z^*) = \emptyset$ if $x^* \neq z^*$. Theorem 1 now produces:
$$\forall x \in X \colon \quad L_{M^\perp,M}(x) = \bigcap_{x^* \in M \setminus \{0\}} \left( S^+_{(x^*,x^*)}(x) \ominus H^+(x^*) \right) = \bigcap_{x^* \in M \setminus \{0\}} S^+_{(x^*,x^*)}(x) = \left\{ z \in M \mid \forall x^* \in M \colon x^*(x) \leq x^*(z) \right\}.$$
Since M is a linear subspace, $z \in L_{M^\perp,M}(x)$ satisfies $x^*(x) = x^*(z)$ for all $x^* \in M$. This is the dual characterization of the projection $z = \operatorname{Proj}_M(x)$, and it also confirms that this projection is unique.
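The dual characterization of the projection can be verified numerically (a sketch with a randomly chosen subspace $M \subseteq \mathbb{R}^5$; the matrix B spanning M and the vector x are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 2))              # columns span the subspace M in R^5
x = rng.normal(size=5)

P = B @ np.linalg.solve(B.T @ B, B.T)    # orthogonal projector onto M
z = P @ x                                # z = Proj_M(x)

# Dual characterization: x*(x) = x*(z) for every functional identified with
# an element of M (testing on the spanning vectors suffices, by linearity).
assert np.allclose(B.T @ x, B.T @ z)
assert np.allclose(P @ z, z)             # z indeed lies in M
```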
Conditional expectations are also discussed in this section since they can be seen, at least in the L 2 -space, as a special type of projections.
Example 2.
Let $(\Omega, \mathcal{A}, P)$ be a probability space and $\mathcal{B}$ a sub-sigma-algebra of $\mathcal{A}$. Let $1 \leq p < +\infty$. Then, $L^p_d(\Omega, \mathcal{B}, P)$ is a closed linear subspace of $L^p_d(\Omega, \mathcal{A}, P)$. In this example, the abbreviations $L^p_d(\mathcal{A})$, $L^p_d(\mathcal{B})$ are used. Let $T : L^p_d(\mathcal{B}) \to L^p_d(\mathcal{A})$ be the embedding, which is a bounded linear operator. Then:
  • The (componentwise) conditional expectation $E[\cdot \mid \mathcal{B}] : L^p_d(\mathcal{A}) \to L^p_d(\mathcal{B})$ is a T-translative linear operator due to linearity and $E[Z \mid \mathcal{B}] = Z$ for $Z \in L^p_d(\mathcal{B})$; if $p = 2$, it is a special case of the projection operator discussed above;
  • In fact, the conditional expectation considered as a function from $L^q_d(\mathcal{A})$ to $L^q_d(\mathcal{B})$ with $p^{-1} + q^{-1} = 1$, or $q = \infty$ for $p = 1$, is the adjoint of the embedding T of $L^p_d(\mathcal{B})$ into $L^p_d(\mathcal{A})$, i.e., $T^* = E[\cdot \mid \mathcal{B}] : L^q_d(\mathcal{A}) \to L^q_d(\mathcal{B})$ by definition; one has:
    $$E[Y^\top (TZ)] = E[E[Y \mid \mathcal{B}]^\top Z]$$
    for all $Y \in L^q_d(\mathcal{A})$, $Z \in L^p_d(\mathcal{B})$ according to the definition of the conditional expectation.
In such a situation, functions of the type $S^+_{(x^*,T^*x^*)}$ (see Remark 8) have the following form:
$$S^+_{(Y, E[Y \mid \mathcal{B}])}(X) = \left\{ Z \in L^p_d(\mathcal{B}) \mid E[Y^\top X] \leq E[E[Y \mid \mathcal{B}]^\top Z] \right\},$$
so they produce half-spaces in $L^p_d(\mathcal{B})$ as images, with normal $E[Y \mid \mathcal{B}]$.
These ideas, combined with the procedures discussed in Section 4.5, form the building block for dual representation results for conditional risk measures as given in [11,12].

4.3. Standard Example and Scalar Translative Functions

The following special case occurs very often in applications: $Z = \mathbb{R}^m$, $\mathbb{R}^m_+ \subseteq C \subsetneq \mathbb{R}^m$, $h_1, \ldots, h_m \in X$ a set of m linearly independent elements, and $T : \mathbb{R}^m \to X$ defined by $Tz = \sum_{k=1}^m z_k h_k$.
In some financial applications with X = L d p , the elements h 1 , , h m can be assets eligible for hedging or risk compensation. This standard example also covers the case of several, potentially defaultable securities as eligible assets. Such a case for m = 1 was discussed in [9,20]. The case m > 1 for risk-free eligible assets was treated in [2,3].
Example 3.
A special case is the following one, which is well studied and has many applications, not only in finance. Let $m = 1$ and $h_1 = h \in X \setminus \{0\}$ be a fixed, nonzero element of X and $M = \{th \mid t \in \mathbb{R}\}$ the one-dimensional subspace generated by h. Then:
$$L_{D,M}(x) = \left\{ sh \mid sh \in \{x\} - D,\ s \in \mathbb{R} \right\} = \left\{ sh \mid x - sh \in D,\ s \in \mathbb{R} \right\}.$$
Assume $D - \mathbb{R}_+ h \subseteq D$, i.e., D is $(T,C)$-translative for $T : \mathbb{R} \to X$ with $Ts = sh$ and $C = \mathbb{R}_+$. Then, $sh \in \{x\} - D$ and $t \geq 0$ imply:
$$(s+t)h \in \{x\} + th - D \subseteq \{x\} - D;$$
hence, $L_{D,M}(x)$ is either empty, equal to M, or of the form $(\tau_{D,h}(x), +\infty) \cdot h$ or $[\tau_{D,h}(x), +\infty) \cdot h$, where:
$$\tau_{D,h}(x) = \inf\left\{ s \in \mathbb{R} \mid x - sh \in D \right\}.$$
Setting $\tau_{D,h}(x) = +\infty$ if $L_{D,M}(x) = \emptyset$ and $\tau_{D,h}(x) = -\infty$ if $L_{D,M}(x) = M$, one obtains an h-translative extended real-valued function, i.e.,
$$\forall x \in X,\ s \in \mathbb{R} \colon \quad \tau_{D,h}(x + sh) = \tau_{D,h}(x) + s.$$
Such functions have a vast number of applications in vector optimization and multicriteria decision making, idempotent analysis, statistics, finance, and economics (see, e.g., [30]). In fact, the authors realized already in 2001/2002 the intimate relationship between translative scalarization functionals used in vector optimization [31] and risk measures as defined in [18,32].
Note that the case $(\tau_{D,h}(x), +\infty) \cdot h$ above is excluded if D is assumed to be T-directionally closed for the simple linear operator $T : \mathbb{R} \to X$ defined by $Ts = sh$ for $s \in \mathbb{R}$. In this case, D is also said to be h-directionally closed: this concept was introduced by Schrage [33] and one of the authors [30]. If a set D is not h-directionally closed, one can add all points $x \in X$ for which there is a sequence $(s_n)_{n \in \mathbb{N}}$ in $\mathbb{R}$ with $\lim_{n \to \infty} s_n = 0$ such that $x + s_n h \in D$ for all $n \in \mathbb{N}$, and one obtains a set $\operatorname{dcl}_h D$, which is called the h-directional closure of D. It was shown in [30,33] that this indeed is a closure operation, which is weaker (i.e., it produces a smaller set in general) than the algebraic closure of D.
In particular, if $D = \{x \in X \mid x^*(x) \leq 0\}$ for some $x^* \in X^*$ with $x^*(h) = 1$, then:
$$\tau_{D,h}(x) = \inf\left\{ s \in \mathbb{R} \mid x^*(x - sh) \leq 0 \right\} = x^*(x).$$
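This half-space special case is easy to test numerically (a sketch; the concrete $x^*$, h, and x below are arbitrary choices with $x^*(h) = 1$):

```python
import numpy as np

# D = {x : x*(x) <= 0}, a halfspace in R^3, and h with x*(h) = 1 (assumed data).
xstar = np.array([0.5, 0.25, 0.25])
h = np.array([1.0, 1.0, 1.0])            # x*(h) = 1
assert np.isclose(xstar @ h, 1.0)

def tau(x):
    # x - s*h in D  <=>  x*(x) - s <= 0  <=>  s >= x*(x),
    # so tau_{D,h}(x) = x*(x).
    return xstar @ x

x = np.array([2.0, -1.0, 4.0])
s = 3.7
# h-translativity: tau(x + s*h) = tau(x) + s.
assert np.isclose(tau(x + s * h), tau(x) + s)
# x - tau(x)*h lies on the boundary of D:
assert xstar @ (x - tau(x) * h) <= 1e-12
```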

4.4. Subhedging and Superhedging Sets

Let $X \in L^p_d$ be a contingent claim and $C \subseteq L^0_d$ the set of portfolios in the market that are available at terminal time by trading in the market, starting from the zero portfolio at initial time. The sets:
$$SupH(X) = \left\{ z \in \mathbb{R}^d \mid -X + z\mathbb{1} \in C \right\}, \qquad SubH(X) = \left\{ z \in \mathbb{R}^d \mid X - z\mathbb{1} \in C \right\}$$
are the collections of superhedging and subhedging portfolios (at initial time). The interpretation is that $z \in SupH(X)$ means to sell X at initial time, obtain z, trade in the market with initial portfolio z, and become solvent at terminal time; similarly, $z \in SubH(X)$ means to buy X at initial time by delivering z, trade in the market with zero initial portfolio, and become solvent at terminal time.
Assume $C_0 \subseteq \mathbb{R}^d$ is a closed convex cone including the initial portfolios, which are available in the market at zero cost, i.e., $C_0$ is the solvency cone at initial time, and it satisfies $C_0\mathbb{1} \subseteq C$. Then, $SupH(X) + C_0 \subseteq SupH(X)$ and $SubH(X) + (-C_0) \subseteq SubH(X)$, i.e., the functions SupH and SubH map $L^0_d$ into $\mathcal{P}(\mathbb{R}^d, C_0)$ and $\mathcal{P}(\mathbb{R}^d, -C_0)$, respectively. Moreover, $SubH(X) = -SupH(-X)$, and both SupH and SubH satisfy (4) with $Tz = z\mathbb{1}$.
If $0 \in SupH(X)$, then $-X$ is a terminal payoff, which can be achieved by trading in the market starting with the zero portfolio, while $0 \in SubH(X)$ means that X itself is such a payoff, i.e., $A_{SupH} = -C$, $A_{SubH} = C$.
Dual representation results for functions such as S u p H are known as super-replication theorems in the literature. Examples are ([5], Theorem 3.1), ([7], Theorem 4.1), and ([34], Theorem 4.4). Such theorems are special instances of set-valued duality results, which was pointed out in ([3], Sec. 5.4).
Example 4.
The function SupH is a T-translative function with $Tz = z\mathbb{1}$ and $A_{SupH} = -C$. Let:
$$C = \left\{ Z \in L^1_d \mid Z \in C_0\mathbb{1} + C_1 \ P\text{-a.s.} \right\}$$
with a closed convex cone $\mathbb{R}^d_+ \subseteq C_0 \subsetneq \mathbb{R}^d$ and a random closed convex cone $C_1$ with $\mathbb{R}^d_+ \subseteq C_1(\omega)$ for all $\omega \in \Omega$. Then, C is a convex cone, and if it is closed (e.g., under some no-arbitrage-type condition), SupH is a closed sublinear T-translative function, which has the dual representation (see (13)):
$$SupH(X) = \bigcap_{\substack{Y \in C^+ \\ v = E[Y] \in C_0^+\setminus\{0\}}} S^+_{(Y,v)}(X) = \bigcap_{(Y,v) \in \mathcal{Y}_{SupH}} \left\{ z \in \mathbb{R}^d \mid E[Y^\top X] \leq v^\top z \right\},$$
where $\mathcal{Y}_{SupH} = \left\{ (Y,v) \in L^\infty_d \times \mathbb{R}^d \mid Y \in C_1^+ \ P\text{-a.s.},\ v = E[Y] \in C_0^+ \right\}$. The pair $(Y,v)$ corresponds to a consistent price process as used in [5,7] for the one-period market model $(C_0, C_1)$. More general versions are of course possible; see already ([3], Section 5.4).

4.5. Set-Valued Risk Measures

The following definition is slightly more general than the ones given in the literature cited below.
Definition 6.
Let $p \in \{0\} \cup [1, \infty]$ and $T : \mathbb{R}^m \to L^p_d$ be a linear operator. A function $R : L^p_d \to \mathcal{P}(\mathbb{R}^m, C)$ is called a (set-valued) risk measure if:
(R0) It is finite at zero: $R(0) \notin \{\mathbb{R}^m, \emptyset\}$;
(R1) It is $(-T)$-translative: $R(X + Tz) = R(X) - \{z\}$ for $X \in L^p_d$, $z \in \mathbb{R}^m$;
(R2) It is $(L^p_d)_+$-monotone: $X_2 - X_1 \in (L^p_d)_+$ implies $R(X_2) \supseteq R(X_1)$.
Such risk measures were introduced and studied in [2,3] with the main motivation that risk can be compensated for by several eligible assets (and not just cash in a fixed currency, which produces the usual “cash-additivity” interpretation of (R1)). The same motivation appeared in [4], which did not use the complete lattice framework, but was actually the first to define set-valued risk measures.
The presence of the operator T makes it possible to include arbitrary eligible assets in the line of the arguments given in [8,29] (for scalar risk measures). If $H_1, \ldots, H_m \in L^p_d$ are eligible, then $Tz = \sum_{i=1}^m z_i H_i$. The traditional case is $H_i = b_i\mathbb{1}$ with $b_i \in \mathbb{R}^d$, $i = 1, \ldots, m$, linearly independent, where the case of the unit vectors $b_i = e_i$ was the one considered in [4].
Finally, convexity and sublinearity requirements as given in Definition 3 lead to convex and sublinear (traditionally called “coherent”) set-valued risk measures.
Dual representation results can be obtained as follows. Assume $p \in [1, \infty]$ (compare Section 2.2 for dual $L^p_d$-spaces). Let $T : \mathbb{R}^m \to L^p_d$ be given by $Tz = \sum_{i=1}^m z_i H_i$, where $H_1, \ldots, H_m \in L^p_d$ are (potentially defaultable) securities eligible for risk compensation or risk hedging.
First, the monotonicity (R2) is equivalent to $A_R + (L^p_d)_+ \subseteq A_R$ (see Proposition 4 (i)) and hence implies $Y \in -(L^q_d)_+$ for dual variables $Y \in \operatorname{barr} A_R$ (see the discussion after the proof of Theorem 1).
Next, Corollary 2 gives:
$$(T^*Y)(z) = E\Big[ \sum_{i=1}^m z_i Y^\top H_i \Big] = \sum_{i=1}^m z_i E[Y^\top H_i].$$
Defining a $d \times m$ matrix by:
$$H = (H_1, \ldots, H_m) = \begin{pmatrix} H_1^1 & \cdots & H_1^m \\ \vdots & & \vdots \\ H_d^1 & \cdots & H_d^m \end{pmatrix},$$
one obtains $(T^*Y)(z) = E[H^\top Y]^\top z$; thus, $T^*Y$ can be identified with $E[H^\top Y] \in \mathbb{R}^m$. Since R is $(-T)$-translative, the dual representation Formula (12) in Theorem 1 asks for:
$$-T^*Y = -E[H^\top Y] \in C^+ \setminus \{0\}.$$
In order to prepare a scenario-based version, the sign of the dual variable Y is switched, which is also in accordance with an interpretation as consistent price processes later on. Define $\mathcal{Y}_R(H,C) = \left\{ (Y,v) \in (-\operatorname{barr} A_R) \times (C^+\setminus\{0\}) \mid v = E[H^\top Y] \right\}$. This set is a convex cone, and since $T^*Y = E[H^\top Y]$, one has:
$$\mathcal{Y}_R(H,C) \subseteq \operatorname{graph} T^* = \left\{ (Y,v) \in L^q_d \times \mathbb{R}^m \mid v = T^*Y \right\}.$$
The dual representation Formula (12) in Theorem 1 yields:
$$R(X) = \bigcap_{\substack{-Y \in \operatorname{barr} A_R \\ v = E[H^\top Y] \in C^+\setminus\{0\}}} \left( S^+_{(-Y,v)}(X) \ominus I_{A_R}^*(-Y,v) \right) = \bigcap_{(Y,v) \in \mathcal{Y}_R(H,C)} \left( S^+_{(-Y,v)}(X) \ominus I_{A_R}^*(-Y,v) \right)$$
for a proper closed convex risk measure $R : L^p_d \to \mathcal{G}(\mathbb{R}^m, C)$.
In a final step, this formula will be transferred into a scenario-based version, i.e., a formula with probability measures as dual variables as in [18] (for scalar risk measures) and in [2,3] (for set-valued ones).
Take $Y \in (L^q_d)_+$, and define $w_i = E[Y_i]$, as well as $V_i = \frac{1}{w_i} Y_i$ for $w_i > 0$ and $V_i \in (L^q)_+$ with $E[V_i] = 1$ arbitrary if $w_i = 0$, $i = 1, \ldots, d$. Then, $V_i$ is the density function of a probability measure $Q_i$ with $V_i = \frac{dQ_i}{dP}$ for $i = 1, \ldots, d$. One can now write $Y = \operatorname{diag}(w)\frac{dQ}{dP}$.
This gives:
$$E[Y^\top X] = E\left[ \left( \operatorname{diag}(w)\tfrac{dQ}{dP} \right)^{\!\top} X \right] = w^\top E_Q[X],$$
where $E_Q[X] = \left( E_{Q_1}[X_1], \ldots, E_{Q_d}[X_d] \right)^\top \in \mathbb{R}^d$. On the other hand,
$$E[H^\top Y] = E\left[ H^\top \operatorname{diag}(w)\tfrac{dQ}{dP} \right] = \begin{pmatrix} w^\top E_Q[H_1] \\ \vdots \\ w^\top E_Q[H_m] \end{pmatrix} \in \mathbb{R}^m.$$
Defining:
$$E_Q[H] = \left( E_Q[H_1], \ldots, E_Q[H_m] \right) = \begin{pmatrix} E_{Q_1}[H_1^1] & \cdots & E_{Q_1}[H_1^m] \\ \vdots & & \vdots \\ E_{Q_d}[H_d^1] & \cdots & E_{Q_d}[H_d^m] \end{pmatrix} \in \mathbb{R}^{d \times m},$$
one obtains $E[H^\top Y] = E_Q[H]^\top w$. The following lemma provides the one-to-one correspondence between the pairs $(Y,v)$ and $(Q,w)$.
Lemma 1.
(a) Let $Y \in (L^q_d)_+$ and $v = E[H^\top Y] \in C^+\setminus\{0\}$. Then, there are a vector probability measure Q whose components are absolutely continuous with respect to P and a $w \in \mathbb{R}^d_+$ such that $E_Q[H]^\top w \in C^+\setminus\{0\}$ and:
$$\forall X \in L^p_d \colon \quad S^+_{(Y,v)}(X) = \left\{ z \in \mathbb{R}^m \mid E[Y^\top X] \leq v^\top z \right\} = \left\{ z \in \mathbb{R}^m \mid w^\top E_Q[X] \leq w^\top E_Q[H] z \right\} =: E^+_{H,Q,w}(X).$$
(b) Let Q be a vector probability measure whose components are absolutely continuous with respect to P and $w \in \mathbb{R}^d_+$ such that $E_Q[H]^\top w \in C^+\setminus\{0\}$. Then, there are $Y \in (L^q_d)_+$ and $v \in C^+\setminus\{0\}$ with $v = E[H^\top Y]$ such that $S^+_{(Y,v)}(X) = E^+_{H,Q,w}(X)$ for all $X \in L^p_d$.
Proof .
(a) See the above discussion. (b) Set $Y = \operatorname{diag}(w)\frac{dQ}{dP}$ and $v = E[H^\top Y]$. □
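The correspondence of Lemma 1 can be checked on a finite probability space (a sketch; the dimensions, the reference measure P, and the sampled data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 6, 3                               # 6 scenarios, d = 3 components
P = np.full(n, 1.0 / n)                   # reference measure P
Y = rng.uniform(0.5, 2.0, size=(n, d))    # Y in (L^q_d)_+, componentwise > 0
X = rng.normal(size=(n, d))               # a position X in L^p_d

# Decomposition Y = diag(w) dQ/dP from the text:
w = P @ Y                                 # w_i = E[Y_i]
dQdP = Y / w                              # V_i = Y_i / w_i with E[V_i] = 1
assert np.allclose(P @ dQdP, 1.0)         # each V_i is a probability density

# E[Y^T X] = w^T E_Q[X], where E_Q[X]_i = E_{Q_i}[X_i]:
lhs = np.sum(P[:, None] * Y * X)
E_Q_X = np.sum(P[:, None] * dQdP * X, axis=0)
assert np.isclose(lhs, w @ E_Q_X)
```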
Remark 10.
The set:
$$E^+_{H,Q,w}(X) := \left\{ z \in \mathbb{R}^m \mid w^\top E_Q[X] \leq w^\top E_Q[H] z \right\}$$
clearly is a generalization of the upper w-expectation in Section 4.7 below and shares most of its properties; its values are subsets of $\mathbb{R}^m$ with $m \leq d$. A more thorough discussion of this new concept will be performed elsewhere.
The scenario-based dual representation reads as follows. Defining:
$$\mathcal{W}_R(H,C) = \left\{ (Q,w) \in \mathcal{M}^d_1(P) \times \mathbb{R}^d_+ \mid \operatorname{diag}(w)\tfrac{dQ}{dP} \in -\operatorname{barr} A_R,\ E_Q[H]^\top w \in C^+\setminus\{0\} \right\},$$
one obtains the following dual representation result.
Corollary 3.
If $R : L^p_d \to \mathcal{G}(\mathbb{R}^m, C)$ is a proper closed convex risk measure, then there is a penalty function $\alpha_R : \mathcal{W}_R(H,C) \to \mathcal{G}(\mathbb{R}^m, C)$ such that:
$$\forall X \in L^p_d \colon \quad R(X) = \bigcap_{(Q,w) \in \mathcal{W}_R(H,C)} \left( E^+_{H,Q,w}(-X) \ominus \alpha_R(Q,w) \right).$$
In particular, the penalty function can be chosen as:
$$\alpha_{R,\min}(Q,w) = \bigcap_{Z \in A_R} E^+_{H,Q,w}(-Z).$$
If R is additionally sublinear, then:
$$\forall X \in L^p_d \colon \quad R(X) = \bigcap_{(Q,w) \in \mathcal{W}_R(H,C)} E^+_{H,Q,w}(-X).$$
Remark 11.
Special Case I: If $H_j = b_j\mathbb{1}$ with $b_j \in \mathbb{R}^d$, $j = 1, \ldots, m$, then $H = B\mathbb{1}$ with a matrix $B \in \mathbb{R}^{d \times m}$, and $E_Q[H] = B$, $v = B^\top E[Y]$. The choice $b_j = e_j$, the j-th unit vector, leads to (a more general version of) situations already considered in [4].
Special Case II: If $m = d$ and $H_j = e_j\mathbb{1}$ for $j = 1, \ldots, d$, then B becomes the $d \times d$ identity matrix. Moreover, in this case, $E^+_{H,Q,w}(X) = \left\{ z \in \mathbb{R}^d \mid w^\top E_Q[X] \leq w^\top z \right\}$, which is the upper $(Q,w)$-expectation defined in Section 4.7 below.

4.6. Liquidation Risk Measures

The theory of scalar risk measures in the spirit of [18,32] includes the implicit assumption that multi-asset positions and portfolios are evaluated in monetary units (or units of a risk-free numéraire). Thus, the risk evaluation takes place at the level of (univariate) monetary positions, not at the level of actual (multivariate) portfolios. Moreover, even the single assets in a portfolio are usually represented by a monetary value instead of accounted for in “physical units” ([5]). Thus, an at least virtual liquidation usually precedes the risk evaluation. The following model makes this procedure transparent and, at the same time, allows for liquidation into more than one asset, which are not assumed to be risk-free. The risk evaluation then takes place in terms of m cash-like assets, which could indeed be thought of as currencies.
Let $S : \mathbb{R}^m \to L^p_m$ be an injective linear operator and $R : L^p_m \to \mathcal{P}(\mathbb{R}^m, \mathbb{R}^m_+)$ a $(-S)$-translative risk measure, i.e., R satisfies (R0), (R1), (R2) in Definition 6 with T replaced by S. Moreover, let $T : L^p_m \to L^p_d$ be an injective linear operator and $\mathcal{C} \subseteq L^p_d$ a nonempty set. Typically, $\mathcal{C}$ is generated by the solvency sets of a (convex or conic) market model. The (set-valued) function $f_{\mathcal{C}} : L^p_d \to \mathcal{P}(L^p_m)$ defined by (see (6)):
$$f_{\mathcal{C}}(X) = \left\{ Z \in L^p_m \mid X - TZ \in \mathcal{C} \right\}$$
is called the liquidation map associated with $\mathcal{C}$ and T, which of course is T-translative:
$$\forall Z \in L^p_m,\ X \in L^p_d \colon \quad f_{\mathcal{C}}(X + TZ) = f_{\mathcal{C}}(X) + \{Z\}.$$
The function $R_{\mathcal{C}} : L^p_d \to \mathcal{P}(\mathbb{R}^m, \mathbb{R}^m_+)$ defined by:
$$R_{\mathcal{C}}(X) = \inf\left\{ R(Z) \mid Z \in f_{\mathcal{C}}(X) \right\} = \bigcup\left\{ R(Z) \mid Z \in L^p_m,\ X - TZ \in \mathcal{C} \right\}$$
is called the m-liquidation risk measure with respect to $\mathcal{C}$. Versions with values in $\mathcal{F}(\mathbb{R}^m, \mathbb{R}^m_+)$ and $\mathcal{G}(\mathbb{R}^m, \mathbb{R}^m_+)$, respectively, are obtained by taking the corresponding infima.
Proposition 6.
The m-liquidation risk measure $R_{\mathcal{C}}$ satisfies (R1) with respect to $T \circ S$. It satisfies (R2) whenever $\mathcal{C} + (L^p_d)_+ \subseteq \mathcal{C}$, and one has $R_{\mathcal{C}}(0) \neq \emptyset$ whenever $T(\operatorname{dom} R) \cap (-\mathcal{C}) \neq \emptyset$. Its acceptance set is:
$$A_{\mathcal{C}} = \left\{ X \in L^p_d \mid 0 \in R_{\mathcal{C}}(X) \right\} = T A_R + \mathcal{C}.$$
Proof .
(R1) Indeed, one has for $X \in L^p_d$, $z \in \mathbb{R}^m$:
$$R_{\mathcal{C}}(X + (T \circ S)(z)) = \bigcup\left\{ R(Z) \mid Z \in L^p_m,\ X + (T \circ S)(z) - TZ \in \mathcal{C} \right\} = \bigcup\left\{ R(Z - Sz) - \{z\} \mid Z \in L^p_m,\ X - T(Z - Sz) \in \mathcal{C} \right\} = \bigcup\left\{ R(Z - Sz) \mid Z - Sz \in L^p_m,\ X - T(Z - Sz) \in \mathcal{C} \right\} - \{z\} = R_{\mathcal{C}}(X) - \{z\}.$$
(R2) The assumption on $\mathcal{C}$ implies $\{X_1\} - \mathcal{C} \subseteq \{X_2\} - \mathcal{C}$ whenever $X_2 - X_1 \in (L^p_d)_+$. In this case,
$$R_{\mathcal{C}}(X_2) = \bigcup\left\{ R(Z) \mid Z \in L^p_m,\ TZ \in \{X_2\} - \mathcal{C} \right\} \supseteq \bigcup\left\{ R(Z) \mid Z \in L^p_m,\ TZ \in \{X_1\} - \mathcal{C} \right\} = R_{\mathcal{C}}(X_1).$$
The next claim directly follows from the definition of R C ( 0 ) and the assumption.
Finally, $Y \in A_R$ and $X \in \mathcal{C}$ imply:
$$R_{\mathcal{C}}(TY + X) = \bigcup\left\{ R(Z) \mid Z \in L^p_m,\ TY + X - TZ \in \mathcal{C} \right\} \supseteq R(Y) \ni 0,$$
and conversely, $0 \in R_{\mathcal{C}}(X)$ implies:
$$\exists Z \in L^p_m \colon \quad X - TZ \in \mathcal{C},\ 0 \in R(Z),$$
which immediately produces $X \in T A_R + \mathcal{C}$. □
It can happen that $R_{\mathcal{C}}(0) = \mathbb{R}^m$ even if R is finite at $0 \in L^p_m$.
A different and useful representation of $R_{\mathcal{C}}$, in particular with dual representations in view, can be obtained as follows. Define the functions $R_T : L^p_d \to \mathcal{P}(\mathbb{R}^m, \mathbb{R}^m_+)$ and $I_{\mathcal{C}} : L^p_d \to \mathcal{P}(\mathbb{R}^m, \mathbb{R}^m_+)$ by:
$$R_T(X) = \begin{cases} R(Z) & : \text{if } X = TZ \text{ for some } Z \in L^p_m \\ \emptyset & : \text{otherwise} \end{cases} \qquad \text{as well as} \qquad I_{\mathcal{C}}(X) = \begin{cases} \mathbb{R}^m_+ & : X \in \mathcal{C} \\ \emptyset & : X \notin \mathcal{C} \end{cases}$$
The function $I_{\mathcal{C}}$ is the set-valued indicator function of $\mathcal{C}$ (see [25]).
Proposition 7.
The m-liquidation risk measure $R_{\mathcal{C}}$ is the infimal convolution of $R_T$ and $I_{\mathcal{C}}$, i.e.,
$$\forall X \in L^p_d \colon \quad R_{\mathcal{C}}(X) = (R_T \,\square\, I_{\mathcal{C}})(X).$$
Proof. 
By definition,
$$(R_T \,\square\, I_{\mathcal{C}})(X) = \inf\left\{ R_T(X_1) + I_{\mathcal{C}}(X_2) \mid X_1 + X_2 = X \right\}.$$
For $R_T(X_1) + I_{\mathcal{C}}(X_2)$ to be nonempty, one must have $X_1 = TZ$ for some $Z \in L^p_m$ and $X_2 = X - X_1 = X - TZ \in \mathcal{C}$, and in this case:
$$(R_T \,\square\, I_{\mathcal{C}})(X) = \inf\left\{ R(Z) \mid Z \in L^p_m,\ X - TZ \in \mathcal{C} \right\},$$
which is the definition of R C . □
Remark 12.
(1) The case of m risk-free assets (e.g., m currency accounts) can be modeled by $Sz = \sum_{i=1}^m z_i b_i\mathbb{1}$ with m linearly independent vectors $b_i \in \mathbb{R}^m$. In particular, the unit vectors $b_i = e_i$, $i = 1, \ldots, m$, can be used as, for example, already in [4], in which case the first m components of a portfolio vector in $\mathbb{R}^d$ represent the currencies eligible for risk compensation. The operator $T : L^p_m \to L^p_d$ just adds $d - m$ zero components to $Z \in L^p_m$, i.e.,
$$TZ = \left( Z_1, \ldots, Z_m, 0, \ldots, 0 \right)^\top.$$
An ad hoc procedure of this type was also used in [4];
(2) A special case for $\mathcal{C}$ is a one-period market model with proportional transaction costs in the form $\mathcal{C} = C_0\mathbb{1} + C_1$, where $C_0 \subseteq \mathbb{R}^d$, $C_1 \subseteq L^p_d$ are closed convex sets, which comprise the solvent positions at initial and terminal time, respectively (see [34] for this type of market model and [5,7] for the case when $C_0$ and $C_1$ are polyhedral convex cones, i.e., conical market models). Since in this case:
$$I_{\mathcal{C}} = I_{C_0\mathbb{1}} \,\square\, I_{C_1},$$
it is possible to further decompose the representation (16). This idea is due to C. Ararat and was used in [35];
(3) Finally, if $m = 1$ and $Sz = zb\mathbb{1}$ with $z \in \mathbb{R}$ and $b \in \mathbb{R}^d$, then the case of scalar risk measures for multivariate positions can be recovered.

4.7. Set-Valued Lower and Upper Expectations

In the book ([36], p. 254ff), Huber discussed the notion of lower and upper expectations as infima and suprema of expected values over sets of probability measures (along with lower and upper probabilities). His proof of a "dual representation" became a blueprint for similar results for coherent risk measures [18].
How can these notions be generalized to the multivariate situation? A new proposal is as follows. Let $\mathcal{Q}$ be a set of d-dimensional vector probability measures on a measurable space $(\Omega, \mathcal{F})$ and $X \in L^0_d$ such that $E_Q[X] = \left( E_{Q_1}[X_1], \ldots, E_{Q_d}[X_d] \right)^\top$ exists for all $Q \in \mathcal{Q}$. Moreover, let $C \subseteq \mathbb{R}^d$ be a nontrivial (i.e., $C \notin \{\{0\}, \mathbb{R}^d\}$) closed convex cone with positive dual cone $C^+ = \{v \in \mathbb{R}^d \mid \forall z \in C \colon v^\top z \geq 0\}$. For $w \in C^+\setminus\{0\}$, $Q \in \mathcal{Q}$, define the sets:
$$E^+_{Q,w}(X) = \left\{ z \in \mathbb{R}^d \mid w^\top E_Q[X] \leq w^\top z \right\}, \qquad E^-_{Q,w}(X) = \left\{ z \in \mathbb{R}^d \mid w^\top z \leq w^\top E_Q[X] \right\},$$
which we call the upper and lower $(Q,w)$-expectation of X, respectively. Set:
$$E^+(X) = \bigcap_{w \in C^+\setminus\{0\},\, Q \in \mathcal{Q}} E^+_{Q,w}(X) \quad \text{and} \quad E^-(X) = \bigcap_{w \in C^+\setminus\{0\},\, Q \in \mathcal{Q}} E^-_{Q,w}(X).$$
Straightforwardly, one obtains for all $(Q,w) \in \mathcal{Q} \times (C^+\setminus\{0\})$:
$$E^+_{Q,w}(X) = E_Q[X] + H^+(w) \quad \text{and} \quad E^-_{Q,w}(X) = E_Q[X] - H^+(w),$$
where $H^+(w) = \{z \in \mathbb{R}^d \mid w^\top z \geq 0\}$.
The set-valued function $E^+$ is called the upper expectation of X, whereas $E^-$ is called the lower expectation with respect to $\mathcal{Q}$. Of course, "upper" and "lower" refer to the order generated by the cone C (see Remark 1). It is not difficult to verify that $E^+$ maps into $\mathcal{G}(\mathbb{R}^d, C)$ and is sublinear with respect to ⊇, whereas $E^-$ maps into $\mathcal{G}(\mathbb{R}^d, -C)$ and is superlinear with respect to ⊆. Moreover, $E^+(-X) = -E^-(X)$, which corresponds to ([36], p. 254, (2.3)). Finally, $E^+(X)$ satisfies the requirements of a (sublinear) risk measure in Section 4.5, and its definition can be considered as its dual representation; this is stated in the following result.
Proposition 8.
(a) $E^+$ maps into $\mathcal{G}(\mathbb{R}^d, C)$, is T-translative for $Tz = z\mathbb{1}$, and is sublinear and K-monotone with respect to each random closed convex cone K such that $E_Q[X] \in C$ for all $Q \in \mathcal{Q}$ whenever $X \in K$ (pointwise);
(b) $E^-$ maps into $\mathcal{G}(\mathbb{R}^d, -C)$, is T-translative for $Tz = z\mathbb{1}$, and is superlinear and K-monotone with respect to each random closed convex cone K such that $E_Q[X] \in C$ for all $Q \in \mathcal{Q}$ whenever $X \in K$ (pointwise).
Proof .
(a) Everything is a direct consequence of the definition and the assumption. In particular, if $X_2 - X_1 \in K$, then $E_Q[X_2 - X_1] \in C$; hence, $E_Q[X_1] + C \supseteq E_Q[X_2] + C$ and:
$$E^+_{Q,w}(X_1) = E_Q[X_1] + H^+(w) \supseteq E_Q[X_2] + H^+(w) = E^+_{Q,w}(X_2),$$
since $C + H^+(w) = H^+(w)$ for all $w \in C^+$. Taking the intersections over $(Q,w) \in \mathcal{Q} \times (C^+\setminus\{0\})$ on both sides of the inclusion, one obtains $E^+(X_1) \supseteq E^+(X_2)$;
(b) Completely parallel. □
Remark 13.
(1) The monotonicity assumption can be understood as a time consistency condition: if X is non-negative at terminal time (i.e., X K ), then E Q [ X ] is non-negative at initial time (i.e., E Q [ X ] C ). Since there are many options for preorders in dimensions 2 (rather than essentially only one in Dimension 1), “non-negative” has to be understood with respect to preorders generated by convex cones;
(2) The definitions of E + , E are essentially different from the ones in [23] and, in contrast to the latter, maintain many features of the scalar versions. For example, a relation such as E + ( X ) = E ( X ) is not true for the concepts from [23];
(3) While a direct approach is given in this section, Section 4.5 above already includes and motivates a further generalization of upper/lower expectations to the case E + ( X ) , E ( X ) ⊆ I R m with m ≤ d . Such a distinction is of course not necessary for univariate positions, but it can be vital if I R d -valued positions are subject to an evaluation in fewer than d eligible instruments, e.g., currencies.
Clearly, the definition of E + ( X ) can be seen as a dual representation. A primal version is obtained by looking at:
A E + = { X ∈ L d 0 ∣ 0 ∈ E + ( X ) } = { X ∈ L d 0 ∣ ∀ w ∈ C + \ { 0 } , ∀ Q ∈ Q : 0 ∈ E Q , w + ( X ) } = { X ∈ L d 0 ∣ ∀ w ∈ C + \ { 0 } , ∀ Q ∈ Q : w E Q [ X ] ≤ 0 } = { X ∈ L d 0 ∣ ∀ Q ∈ Q : E Q [ X ] ∈ − C }
where the bipolar theorem for the closed convex cone C is used. One obtains E + = f A E + with Proposition 2 (a), where T : I R d L d 0 is defined by T z = z 𝟙 (see Proposition 8).
In particular, one obtains:
E + ( X ) = f A E + ( X ) = { z ∈ I R d ∣ X − z 𝟙 ∈ A E + } = { z ∈ I R d ∣ ∀ Q ∈ Q : E Q [ X − z 𝟙 ] ∈ − C } = { z ∈ I R d ∣ ∀ Q ∈ Q : z ∈ E Q [ X ] + C } = ⋂ Q ∈ Q ( E Q [ X ] + C ) .
One may note that this corresponds to the (trivial) E [ X ] = inf { r ∈ I R ∣ E [ X ] ≤ r } for d = 1 , C = I R + , Q = { P } . Thus, upper expectations are meaningful substitutes for the (vector) expectation in the multivariate case since there is no reasonable way to use the infimum in the preordered vector space ( I R d , C ) (and very often, it does not even exist). The lower expectation corresponds to the univariate E [ X ] = sup { r ∈ I R ∣ r ≤ E [ X ] } .
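For a finite scenario space and a finite family Q , the last expression is easy to evaluate when C = I R + d : the intersection of the translated orthants E Q [ X ] + I R + d is again a translated orthant, based at the componentwise maximum of the expectations. A minimal numerical sketch (all data illustrative, not from the text):

```python
# Upper expectation E^+(X) = ∩_{Q∈Q} (E_Q[X] + C) for C = R_+^d on a finite Ω:
# the intersection of the shifted orthants E_Q[X] + R_+^d is again a shifted
# orthant, based at the componentwise maximum of the expectations.

def expectation(probs, payoffs):
    # E_Q[X] on a finite scenario space, computed componentwise
    d = len(payoffs[0])
    return tuple(sum(p * x[i] for p, x in zip(probs, payoffs)) for i in range(d))

def upper_expectation(Q_family, payoffs):
    # base point of ∩_Q (E_Q[X] + R_+^d) = componentwise max of the E_Q[X]
    exps = [expectation(q, payoffs) for q in Q_family]
    return tuple(max(e[i] for e in exps) for i in range(len(exps[0])))

# X: R^2-valued random variable on Ω = {ω1, ω2}; two test measures (toy data)
payoffs = [(1.0, 4.0), (3.0, 0.0)]
Q_family = [(0.5, 0.5), (0.9, 0.1)]

print(upper_expectation(Q_family, payoffs))
# E_Q1[X] = (2.0, 2.0), E_Q2[X] ≈ (1.2, 3.6); base point: (2.0, 3.6)
```

For d = 1 this reduces to the halfline [ sup Q ∈ Q E Q [ X ] , ∞ ) , matching the scalar correspondence above.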

4.8. Set-Valued Quantiles

The notation from Section 2.2 is used. Moreover, let C ⊆ I R d be a closed convex cone, C + its positive dual cone, and X ∈ L d 0 . The function F X , C : I R d → [ 0 , 1 ] defined by:
F X , C ( z ) = inf { P ( { ω ∈ Ω ∣ w X ( ω ) ≤ w z } ) ∣ w ∈ C + \ { 0 } }
is called the lower cone distribution function of X (with respect to the cone C). If d = 1 and C = I R + , then F X , I R + is just the (usual) cumulative distribution function of X, but for d > 1 , it is different from the joint distribution function even if C = I R + d (see [37]).
The upper level set Q X , C ( p ) = { z ∈ I R d ∣ F X , C ( z ) ≥ p } is the lower p-quantile of X for p ∈ [ 0 , 1 ] ; it can easily be shown that Q X , C ( p ) ∈ G ( I R d , C ) (see ([37], Prop. 4) and also [38]). For a fixed p ∈ [ 0 , 1 ] , the function X ↦ Q X , C ( p ) is T-translative for T z = z 𝟙 .
The corresponding value-at-risk for multivariate positions is defined via:
V a R α , C ( X ) = − Q X , C ( 1 − α )
for α ∈ ( 0 , 1 ) , which is a ( − T ) -translative (with T z = z 𝟙 ), positively homogeneous risk measure as defined in Section 4.5. Compare ([37], Section 6).
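For a sample of an I R 2 -valued X and C = I R + 2 (so that C + = I R + 2 ), the lower cone distribution function can be approximated numerically: the probability P ( w X ≤ w z ) only depends on the direction of w, so the infimum over C + \ { 0 } can be taken over a grid of directions in the unit simplex. A sketch with illustrative, equally weighted sample points:

```python
# Lower cone distribution function F_{X,C}(z) = inf_{w ∈ C+\{0}} P(w·X <= w·z)
# for C = C+ = R_+^2. Since the probability only depends on the direction of w,
# the infimum is approximated over a grid of w in the unit simplex.
# The sample carries equal empirical probabilities (illustrative data).

sample = [(0.0, 0.0), (1.0, 2.0), (2.0, 1.0), (3.0, 3.0)]

def F(z, n_dirs=101):
    vals = []
    for k in range(n_dirs):
        t = k / (n_dirs - 1)
        w = (t, 1.0 - t)                      # w runs through C+ \ {0}
        count = sum(1 for x in sample
                    if w[0] * x[0] + w[1] * x[1] <= w[0] * z[0] + w[1] * z[1])
        vals.append(count / len(sample))
    return min(vals)

def in_quantile(z, p):
    # membership in the lower p-quantile Q_{X,C}(p) = {z | F_{X,C}(z) >= p}
    return F(z) >= p

print(F((3.0, 3.0)))                  # 1.0: every sample point is dominated
print(in_quantile((1.0, 2.0), 0.5))   # True: F((1,2)) = 0.5
```

The same membership test underlies the multivariate value-at-risk, since it is a reflected quantile set.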

5. Scalar Translative Functions

5.1. Scalar ( T , z * ) -Translative Functions

First, a type of extended real-valued function is introduced, which will provide scalarizing functions for set-valued T-translative functions. Special cases of such functions were considered in [2,39], but the first reference in a finance context seems to be ([17], Formula (CR), p. 590).
Let X , Z be topological linear spaces with topological duals X * , Z * .
Definition 7.
Let z * ∈ Z * \ { 0 } and T : Z → X be an injective linear operator. An extended real-valued function φ : X → I R ¯ = I R ∪ { ± ∞ } is called ( T , z * ) -translative if:
x X , z Z : φ ( x + T z ) = φ ( x ) + z * ( z ) .
If φ ( 0 ) = 0 , then φ coincides with the continuous linear function z * on the image T Z of T; in applications such as monetary risk measure theory, T Z is a one-dimensional subspace and (17) is the well-known “cash-additivity” property (see [19]). In ([17], Def. 3.4), this property for T = i d Z and Z a linear subspace of X was called translation invariance with respect to ( Z , z * ) . In ([39], Prop. 4.2), it was called “translative along M”, where M X corresponds to T Z in Definition 7.
Example 5.
Let h 1 , … , h m ∈ X be a set of m linearly independent elements of X, Z = I R m and:
T z = ∑ k = 1 m z k h k
(the standard example). An element z * Z * can be identified with a w I R m such that z * ( z ) = w z for all z I R m . Then, (17) becomes:
x ∈ X , z ∈ I R m : φ ( x + ∑ k = 1 m z k h k ) = φ ( x ) + w z .
If m = 1 and h 1 = h X \ { 0 } , w = 1 , (17) specializes further to:
x X , s I R : φ ( x + s h ) = φ ( x ) + s ,
and such functions are called translative in direction h or just h-translative. In this case, T : I R X with T s = s h .
As usual, ker z * = { z ∈ Z ∣ z * ( z ) = 0 } denotes the kernel of z * ∈ Z * . For r ∈ I R , denote:
E ( z * , r ) = { z ∈ Z ∣ z * ( z ) = r } ;
thus, ker z * = E ( z * , 0 ) . The next result relates the sublevel sets of φ with its zero sublevel set, where the sublevel set of φ at level r ∈ I R is defined as:
L φ ( r ) = { x ∈ X ∣ φ ( x ) ≤ r } .
Proposition 9.
If φ : X I R ¯ is ( T , z * ) -translative, then:
(a) r I R : L φ ( r ) + T ker z * = L φ ( r ) ;
(b) r I R : L φ ( r ) = T E ( z * , r ) + L φ ( 0 ) .
Proof .
(a) This is immediate from (17). (b) First, assume φ ( x ) ≤ r , and take z ∈ E ( z * , r ) . Then:
0 ≥ φ ( x ) − r = φ ( x ) − z * ( z ) = φ ( x − T z ) ;
hence, x − T z ∈ L φ ( 0 ) , so we have “⊆”. Conversely, take z ∈ E ( z * , r ) , x ∈ L φ ( 0 ) . Then:
φ ( x + T z ) = φ ( x ) + z * ( z ) = φ ( x ) + r ≤ r ,
so x + T z ∈ L φ ( r ) . □
For the case of the standard example with m = 1 , (a) of Proposition 9 is trivial and (b) becomes L φ ( r ) = { r h } + L φ ( 0 ) , which explains the label “directionally translative”: the sublevel sets are translates of the zero sublevel set in direction h. Note that in this case, z * = 1 and ker z * = { 0 } .
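Both directional translativity and the translation property of the sublevel sets can be checked numerically in a toy instance: take X = I R 2 , the halfspace L = { x ∣ x 1 + x 2 ≤ 0 } as the zero sublevel set, and h = ( 1 , 1 ) (all choices illustrative), so that τ L , h has a closed form:

```python
# φ(x) = τ_{L,h}(x) = inf{ s ∈ R | x − s·h ∈ L } for the halfspace
# L = { x ∈ R^2 : x1 + x2 <= 0 } and direction h = (1, 1) (illustrative choices).
# Here x − s·h ∈ L  ⟺  s >= (x1 + x2)/2, so the infimum has a closed form.

h = (1.0, 1.0)

def phi(x):
    # inf{ s | (x1 - s) + (x2 - s) <= 0 }
    return (x[0] + x[1]) / 2.0

# h-translativity: φ(x + s·h) = φ(x) + s
x, s = (0.3, -1.7), 2.5
shifted = (x[0] + s * h[0], x[1] + s * h[1])
print(abs(phi(shifted) - phi(x) - s) < 1e-12)   # True

# Proposition 9 (b) with m = 1: L_φ(r) = {r·h} + L_φ(0), i.e.
# y ∈ L_φ(r) exactly when y − r·h lies in the zero sublevel set
r, y = 1.0, (0.2, 0.8)                            # φ(y) = 0.5 <= r
y0 = (y[0] - r * h[0], y[1] - r * h[1])
print(phi(y0) <= 0.0)                             # True
```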
The next goal is to prove a primal representation result, that is, to reconstruct φ from L φ ( 0 ) . The following remark prepares the ground.
Remark 14.
To every function φ : X I R ¯ , one can assign a function f : X F ( I R , I R + ) by:
f ( x ) = I R if φ ( x ) = − ∞ , [ φ ( x ) , + ∞ ) if φ ( x ) ∈ I R , ∅ if φ ( x ) = + ∞ .
Obviously, epi φ = graph f , L φ ( 0 ) = A f , and the function φ can be recovered from f by:
φ ( x ) = inf { s ∈ I R ∣ s ∈ f ( x ) } .
Since epi φ = graph f , the function φ is convex (closed) if, and only if, f is convex (closed). Moreover, φ is translative in direction h ∈ X if, and only if, f is T-translative, where T : I R → X is defined by T s = s h .
Proposition 10.
Let φ : X I R ¯ be translative in direction h. Then:
x ∈ X : φ ( x ) = τ L φ ( 0 ) , h ( x ) = inf { s ∈ I R ∣ x − s h ∈ L φ ( 0 ) } .
Moreover, the set L φ ( 0 ) is h-directionally closed (for a definition, see the last part of Example 3) and satisfies L φ ( 0 ) − I R + h = L φ ( 0 ) .
Proof. 
The properties of L φ ( 0 ) follow directly from Remark 14, Proposition 4 (j), and Proposition 2 (a).
Moreover, with f as in Remark 14, we obtain from Proposition 2 (a):
φ ( x ) = inf { s ∈ I R ∣ s ∈ f ( x ) } = inf { s ∈ I R ∣ s ∈ f A f ( x ) } = inf { s ∈ I R ∣ x − s h ∈ A f = L φ ( 0 ) } . □
Corollary 4.
If φ : X → I R ¯ is ( T , z * ) -translative, then it is translative in direction T z ¯ for all z ¯ ∈ E ( z * , 1 ) . In particular,
x ∈ X : φ ( x ) = inf { s ∈ I R ∣ x − s T z ¯ ∈ L φ ( 0 ) } ,
and the right-hand side is independent of the choice of z ¯ ∈ E ( z * , 1 ) . Moreover, the set L φ ( 0 ) is T z ¯ -directionally closed and satisfies L φ ( 0 ) − I R + T z ¯ = L φ ( 0 ) for each z ¯ ∈ E ( z * , 1 ) .
Proof. 
From (17), T z ¯ -translativity is immediate. Hence, all statements follow directly from Proposition 10. □
Proposition 11.
If L ⊆ X , z * ∈ Z * \ { 0 } and z ¯ ∈ E ( z * , 1 ) satisfy:
(a) L + T ker z * = L ;
(b) L − I R + T z ¯ = L ;
(c) L is T z ¯ -directionally closed;
then the function:
τ L , T z ¯ ( x ) = inf { s ∈ I R ∣ x − s T z ¯ ∈ L }
is ( T , z * ) -translative and satisfies L τ L , T z ¯ ( 0 ) = L .
Moreover, the conditions (b) and (c) are then satisfied for each z ¯ E ( z * , 1 ) , and τ L , T z ¯ does not depend on the particular choice of z ¯ in E ( z * , 1 ) .
Proof. 
Take x ∈ X , z ∈ Z , and set t = z * ( z ) . Then, z − t z ¯ ∈ ker z * and:
τ L , T z ¯ ( x + T z ) = inf { s ∈ I R ∣ x + T z − s T z ¯ ∈ L } = inf { s ∈ I R ∣ x − ( s − t ) T z ¯ ∈ L + T ( t z ¯ − z ) } = inf { s − t ∣ s ∈ I R , x − ( s − t ) T z ¯ ∈ L + T ( t z ¯ − z ) } + t ≥ inf { r ∈ I R ∣ x − r T z ¯ ∈ L } + t
since L + T ( t z ¯ − z ) ⊆ L by (a). Hence:
τ L , T z ¯ ( x + T z ) ≥ τ L , T z ¯ ( x ) + z * ( z ) .
Applying this inequality to x + T z instead of x and − z instead of z, we obtain:
τ L , T z ¯ ( x ) = τ L , T z ¯ ( x + T z − T z ) ≥ τ L , T z ¯ ( x + T z ) − z * ( z ) ,
and altogether, (17) follows.
Obviously, x ∈ L implies 0 ∈ { s ∈ I R ∣ x − s T z ¯ ∈ L } ; hence, τ L , T z ¯ ( x ) ≤ 0 . Thus, L ⊆ L τ L , T z ¯ ( 0 ) . On the other hand, let x ∈ L τ L , T z ¯ ( 0 ) , i.e.,
τ L , T z ¯ ( x ) = inf { s ∈ I R ∣ x − s T z ¯ ∈ L } ≤ 0 .
If τ L , T z ¯ ( x ) = 0 , then there is a sequence ( s n ) n ∈ I N in I R converging to 0 with x − s n T z ¯ ∈ L for all n ∈ I N . Since L is T z ¯ -directionally closed, x ∈ L follows. If τ L , T z ¯ ( x ) < 0 , then there is s < 0 such that x − s T z ¯ ∈ L ; hence, x ∈ L + s T z ¯ ⊆ L due to (b). Thus, we also showed L τ L , T z ¯ ( 0 ) ⊆ L .
Now, assume (b) is satisfied, and take z ^ ∈ E ( z * , 1 ) . Then, z ¯ − z ^ ∈ ker z * ; hence, by (a), (b):
x ∈ L , s ≥ 0 : x − s T z ^ = ( x − s T z ¯ ) + s T ( z ¯ − z ^ ) ∈ L + T ker z * = L .
A similar argument works for (c). □
In the previous proposition, one obtains the same functional for each z ¯ E ( z * , 1 ) . This raises the question when two functions of the form (22) coincide. An answer for the special case when X is a linear space of random variables was given in ([40], Proposition 1-a) (the sublinear case) and ([9], Proposition 5.1) (general risk measures). Note, however, that the following proposition also does not necessarily cover lower semicontinuous functions since only the directional closure is involved, not the topological closure (compare Example 3 and its references for this type of closure).
Proposition 12.
Let L , M ⊆ X and h , k ∈ X \ { 0 } . Then, τ L , h = τ M , k if, and only if,
dcl h ( L − I R + h ) = dcl k ( M − I R + k ) and dcl h ( L − I R + h ) + span { h − k } = dcl h ( L − I R + h ) .
Proof. 
One has x ∈ dcl h ( L − I R + h ) if, and only if, there are sequences ( s n ) n ∈ I N and ( r n ) n ∈ I N in I R with r n ≥ 0 and x + ( s n + r n ) h ∈ L for all n ∈ I N , and lim n → ∞ s n = 0 , which in turn is equivalent to:
τ L , h ( x ) = inf { t ∈ I R ∣ x − t h ∈ L } ≤ 0 .
Hence, dcl h ( L − I R + h ) = L τ L , h ( 0 ) , and in the same way, dcl k ( M − I R + k ) = L τ M , k ( 0 ) .
Now, assume τ L , h = τ M , k . Then, L τ L , h ( 0 ) = L τ M , k ( 0 ) , and consequently, dcl h ( L − I R + h ) = dcl k ( M − I R + k ) .
If t ∈ I R , then:
τ L , h ( x + t ( h − k ) ) = τ L , h ( x − t k ) + t = τ M , k ( x − t k ) + t = τ M , k ( x ) .
Hence, τ L , h ( x ) = τ M , k ( x ) ≤ 0 implies τ L , h ( x + t ( h − k ) ) ≤ 0 , which proves L τ L , h ( 0 ) + span { h − k } ⊆ L τ L , h ( 0 ) , i.e., dcl h ( L − I R + h ) + span { h − k } ⊆ dcl h ( L − I R + h ) . The converse inclusion is obvious.
Conversely, dcl h ( L − I R + h ) = dcl k ( M − I R + k ) implies L τ L , h ( 0 ) = L τ M , k ( 0 ) . Moreover, for all x ∈ X and all t ∈ I R ,
x − t h = x − t k − t ( h − k ) ;
hence, { t ∈ I R ∣ x − t k ∈ L τ M , k ( 0 ) = L τ L , h ( 0 ) } ⊆ { t ∈ I R ∣ x − t h ∈ L τ L , h ( 0 ) } since by assumption L τ L , h ( 0 ) + span { h − k } = L τ L , h ( 0 ) . This shows τ L τ L , h ( 0 ) , h ( x ) ≤ τ L τ M , k ( 0 ) , k ( x ) for all x ∈ X . The same argument with reversed roles of ( L , h ) and ( M , k ) produces the converse inequality, so τ L τ L , h ( 0 ) , h ( x ) = τ L τ M , k ( 0 ) , k ( x ) for all x ∈ X . Since τ L , h is translative in direction h and τ M , k is translative in direction k, Proposition 10 provides τ L , h ( x ) = τ L τ L , h ( 0 ) , h ( x ) = τ L τ M , k ( 0 ) , k ( x ) = τ M , k ( x ) for all x ∈ X . □
Finally, a dual representation result for ( T , z * ) -translative functions is established by applying the method from Section 3.3.
Take a function φ : X → I R ∪ { + ∞ } , which is proper, lower semicontinuous, convex, and ( T , z * ) -translative for some z * ∈ Z * \ { 0 } . Then, L φ ( 0 ) is a closed convex set and φ = τ L φ ( 0 ) , T z ¯ for each z ¯ ∈ E ( z * , 1 ) by Proposition 10. Thus, φ is T z ¯ -translative and can be written as:
x ∈ X : φ ( x ) = τ L φ ( 0 ) , T z ¯ ( x ) = ( I L φ ( 0 ) □ α T z ¯ ) ( x )
where I A ( x ) = 0 for x ∈ A and I A ( x ) = + ∞ for x ∉ A is the scalar indicator function in the sense of convex/variational analysis, □ denotes the infimal convolution, and α h : X → I R ∪ { + ∞ } , defined by:
α h ( x ) = s if x = s h , and + ∞ otherwise ,
is an h-translative function with h ∈ X \ { 0 } . The conjugate of φ is the sum of the conjugates of I L φ ( 0 ) and α T z ¯ (see ([26], Theorem 2.3.1 (ix))). The former is the support function σ L φ ( 0 ) , while the latter can be computed as:
α T z ¯ * ( x * ) = sup s ∈ I R ( s x * ( T z ¯ ) − s ) = 0 if ( T * x * ) ( z ¯ ) = 1 , and + ∞ otherwise .
The result is stated in the following theorem.
Theorem 2.
Let φ : X → I R ∪ { + ∞ } be a proper, lower semicontinuous, convex, and ( T , z * ) -translative function with z * ∈ Z * \ { 0 } and z ¯ ∈ E ( z * , 1 ) . Then:
x ∈ X : φ ( x ) = sup { x * ( x ) − σ L φ ( 0 ) ( x * ) ∣ x * ∈ barr L φ ( 0 ) , ( T * x * ) ( z ¯ ) = 1 } .
Proof. 
In the light of the previous discussion, it suffices to note that, under the assumptions, φ = φ * * holds true, and in the definition of the Fenchel–Moreau biconjugate:
φ * * ( x ) = sup x * ∈ X * { x * ( x ) − φ * ( x * ) } = sup x * ∈ X * { x * ( x ) − σ L φ ( 0 ) ( x * ) − α T z ¯ * ( x * ) }
one can restrict x * to those satisfying ( T * x * ) ( z ¯ ) = 1 since otherwise + ∞ is subtracted under the supremum, which certainly does not contribute to it. The same argument applies to x * ∈ dom σ L φ ( 0 ) = barr L φ ( 0 ) . □

5.2. Scalar Representation of T-Translative Functions

In this section, scalar representations of set-valued T-translative functions are discussed. The following concepts and results are recalled. One may compare ([13], Section 4.2) for a more thorough survey of (convex) scalarization results. If f : X → G ( Z , C ) and g : X → G ( Z , C ) are functions and z * ∈ C + \ { 0 } , the functions:
ψ f , z * ( x ) = inf z ∈ f ( x ) z * ( z ) and ψ g , z * ( x ) = sup z ∈ g ( x ) z * ( z )
can be used to represent f and g, respectively, as follows:
x ∈ X : f ( x ) = ⋂ z * ∈ C + \ { 0 } { z ∈ Z ∣ ψ f , z * ( x ) ≤ z * ( z ) } ,
x ∈ X : g ( x ) = ⋂ z * ∈ C + \ { 0 } { z ∈ Z ∣ z * ( z ) ≤ ψ g , z * ( x ) } .
Such results directly follow from the support function representation of the closed convex sets f ( x ) and g ( x ) , respectively. A set-valued convex analysis based on such representations via families of scalar functions can be found in [41] and a more general approach in [42]. The properties of such representations within a finance context were also discussed in [39].
It is assumed in the following that Z is a nontrivial, locally convex, separated topological linear space with topological dual Z * ; moreover, ( Z , Z * ) is considered to be a dual pair with the respective topologies.
Theorem 3.
(a) Let f : X → G ( Z , C ) be T-translative. Then, ψ f , z * : X → I R ¯ is ( T , z * ) -translative for each z * ∈ C + \ { 0 } and (24) holds true.
(b) Let Γ ⊆ Z * \ { 0 } and ( ψ z * ) z * ∈ Γ be a family of functions such that ψ z * is ( T , z * ) -translative for all z * ∈ Γ . Then, the function f Γ : X → P ( Z ) defined by:
f Γ ( x ) = ⋂ z * ∈ Γ { z ∈ Z ∣ ψ z * ( x ) ≤ z * ( z ) }
maps into G ( Z , C ) with C = ( cone Γ ) + = { z ∈ Z ∣ ∀ z * ∈ Γ : 0 ≤ z * ( z ) } and is T-translative.
Proof .
(a) is straightforward from (4) and the definition of ψ f , z * . (b) follows from (17) and the fact that ⋂ i ∈ I ( z + A i ) = z + ⋂ i ∈ I A i for any collection ( A i ) i ∈ I of sets with A i ⊆ Z for all i ∈ I . □
Of course, a parallel result for G ( Z , C ) -valued functions g involving the functions ψ g , z * can be established. Theorem 3 (a), in connection with (24), gives a scalar representation of T-translative functions. Thus, the theorem can be understood, for example, as a generalization of ([39], Lem. 4.4, Thm. 4.5).
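For Z = I R 2 , C = I R + 2 , and the sample function f ( x ) = { x } + C (an illustrative choice, not from the text), the scalarizations are ψ f , z * ( x ) = z * x for z * ∈ C + , and (24) says that f ( x ) is recovered as the intersection of the halfspaces { z ∣ ψ f , z * ( x ) ≤ z * ( z ) } ; a grid of directions z * already yields a membership test:

```python
# Scalar representation (24): f(x) = ∩_{z* ∈ C+\{0}} { z ∈ Z | ψ_{f,z*}(x) <= z*(z) }
# for the sample function f(x) = {x} + R_+^2 on Z = R^2, where ψ_{f,z*}(x) = z*·x.
# Membership of z in f(x) is tested against a grid of directions z* in C+.

def psi(zstar, x):
    # ψ_{f,z*}(x) = inf{ z*·z : z ∈ {x} + R_+^2 } = z*·x for z* >= 0
    return zstar[0] * x[0] + zstar[1] * x[1]

def in_f(z, x, n_dirs=101):
    for k in range(n_dirs):
        t = k / (n_dirs - 1)
        zstar = (t, 1.0 - t)
        if zstar[0] * z[0] + zstar[1] * z[1] < psi(zstar, x) - 1e-12:
            return False          # a separating direction was found
    return True

x = (1.0, -2.0)
print(in_f((1.5, -2.0), x))   # True: (1.5, -2.0) ∈ x + R_+^2
print(in_f((0.5, 5.0), x))    # False: z* = (1, 0) separates
```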

6. Extension to Set Functions and Random Sets

The complete-lattice framework can also be used to extend set-valued T-translative functions to functions on (random) sets. The crucial concept is as follows.
Definition 8.
Let X be a nonempty set, ( L , ≤ ) be a complete lattice, and f : X → L a function. The functions f Δ : P ( X ) → L and f ∇ : P ( X ) → L defined by:
f Δ ( D ) = inf x ∈ D f ( x ) and f ∇ ( D ) = sup x ∈ D f ( x )
are called the inf-extension and sup-extension of f, respectively.
Such extensions were introduced in [24] and called “canonical extensions” of f; special cases are the inf- and sup-translations used in [43] to reduce solutions of set optimization problems from “sets” to “points” (in X).
Using inf- and sup-extensions, one can extend T-translative functions from X to P ( X ) . The feasibility of these extensions depends on a feature of the image lattice: a complete lattice ( L , ≤ ) with an addition + is said to be inf-additive and sup-additive, respectively, if:
D , E ⊆ L : inf ( D + E ) = inf D + inf E and sup ( D + E ) = sup D + sup E
where the addition of sets is the usual elementwise addition with the extension D + ∅ = ∅ + D = ∅ . The relevance of such properties was pointed out in [44], as well as the fact that, in general, the lattice ( P ( Z , C ) , ⊇ ) is inf-additive, but not sup-additive, whereas it is vice versa for ( P ( Z , C ) , ⊆ ) . It was shown in [43] that inf-/sup-additivity is equivalent to the existence of a residuation operation, which can be considered as a generalized inverse addition (see Section 3.3).
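In the simplest instance Z = I R , C = I R + , the relevant elements of G ( I R , I R + ) are the halflines [ a , ∞ ) , the infimum with respect to ⊇ is the union, and inf-additivity can be checked by hand. A toy sketch, encoding each halfline by its left endpoint (an encoding chosen for this illustration, not notation from the text):

```python
# Inf-extension in the complete lattice (G(R, R_+), ⊇): the elements are the
# halflines [a, ∞) (plus ∅ and R), encoded here by the left endpoint a. The
# lattice infimum of a family of halflines is the closure of their union,
# i.e. [min a_i, ∞). For f(x) = [x, ∞), the inf-extension is f^Δ(D) = [inf D, ∞).

def f(x):
    return x  # the halfline [x, ∞), encoded by its endpoint

def f_inf_extension(D):
    # lattice infimum w.r.t. ⊇ = union of the halflines = [min, ∞)
    return min(f(x) for x in D)

D = {1.0, 3.0, -0.5}
E = {2.0, 4.0}
print(f_inf_extension(D))   # -0.5, i.e. f^Δ(D) = [-0.5, ∞)

# inf-additivity: inf(D + E) = inf D + inf E for the elementwise sum
D_plus_E = {d + e for d in D for e in E}
print(f_inf_extension(D_plus_E) == f_inf_extension(D) + f_inf_extension(E))  # True
```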
The following result works in the setup of Section 3.1.
Proposition 13.
If f : X P ( Z , C ) and g : X P ( Z , C ) are T-translative functions, then:
D ∈ P ( X ) , z ∈ Z : f Δ ( D + { T z } ) = f Δ ( D ) + { z } , D ∈ P ( X ) , z ∈ Z : g ∇ ( D + { T z } ) = g ∇ ( D ) + { z } ,
respectively.
Proof. 
Directly from (4), the inf-additivity of ( P ( Z , C ) , ⊇ ) , and the sup-additivity of ( P ( Z , C ) , ⊆ ) , respectively. □
Note that the claim is not true in general for f ∇ and g Δ . Using these concepts, one can extend translative functions from vector spaces to their power sets, in particular to sets of random variables if the vector space is a subspace of L d 0 . If such a set of random variables happens to be the set of selectors of a random set, one also obtains an extension of a translative function to random sets. In ([23], Section 4.1), such a procedure was called the minimal extension of a nonlinear expectation provided the lattice is G ( Z , C ) , although used without any reference to the complete lattice framework. Of course, not all random sets have (characterizing) sets of selectors (cf. [45]), but under some assumptions (e.g., for p-integrable, bounded random compact sets), nonlinear expectations of random sets can indeed be characterized by inf-extensions of functions defined on random vectors. An example is ([45], Proposition 2.2.38), whereas ([45], Formula (2.2.28)) is the definition of the “selection superlinear expectation” as an inf-extension over a set of selectors with the underlying lattice G ( I R d , I R + d ) .
In order to define convexity and monotonicity type properties for functions on P ( X ) , one needs an algebraic structure—usually on a smaller set, which is more appropriate for applications. Let X be a topological linear space, B ⊆ X a closed convex cone, and G ( X , B ) the set of all closed convex subsets of X, which are stable under the addition of B (see Section 2.1). The addition on G ( X , B ) is D ⊕ E = cl { x + y ∣ x ∈ D , y ∈ E } , the closure of the usual Minkowski addition, with the convention D ⊕ ∅ = ∅ ⊕ E = ∅ for D , E ∈ G ( X , B ) . Multiples are defined by s D = { s x ∣ x ∈ D } for s > 0 and 0 D = B for all D ∈ G ( X , B ) . With these operations, G ( X , B ) becomes a collinear space: compare ([13], Section 2.3) for more details and references. Of course, one could also use P ( X , B ) or C ( X , B ) as departing points.
Note that the case B = { 0 } is not excluded; on the contrary, it will play an important role. The set G ( X ) : = G ( X , { 0 } ) is the set of all closed convex subsets of X. Note also that G ( X , B ) G ( X ) ; hence, each function F : G ( X ) P ( Z , C ) also provides a function F : G ( X , B ) P ( Z , C ) by restriction.
Definition 9.
A function F : G ( X , B ) P ( Z , C ) is called convex if:
D , E ∈ G ( X , B ) , s ∈ ( 0 , 1 ) : F ( s D ⊕ ( 1 − s ) E ) ⊇ s F ( D ) + ( 1 − s ) F ( E ) ,
a function G : G ( X , B ) → P ( Z , C ) is called concave if:
D , E ∈ G ( X , B ) , s ∈ ( 0 , 1 ) : s G ( D ) + ( 1 − s ) G ( E ) ⊆ G ( s D ⊕ ( 1 − s ) E ) .
Proposition 14.
(a) f : X C ( Z , C ) is convex if, and only if, f Δ : G ( X ) C ( Z , C ) is convex. In particular, if f is convex, then the restriction of f Δ to G ( X , B ) is convex as well;
(b) Let F : G ( X , B ) C ( Z , C ) be convex. Then, the restriction F | X : X C ( Z , C ) defined by F | X ( x ) = F ( { x } + B ) is convex.
Proof .
(a) The if-part relies on the inf-additivity of ( C ( Z , C ) , ) , and the only if-part follows from choosing D and E as singletons in Definition 9 and the fact that f Δ ( x ) = f ( x ) ;
(b) This part is a direct consequence of Definition 9 and Remark 5. □
Corollary 5.
Let f : X → C ( Z , C ) be a T-translative function. Then, f Δ : G ( X ) → C ( Z , C ) is convex if, and only if, A f is a convex set.
Proposition 14 tells us that convexity is a property that is inherited in both directions. For T-translative functions, this is expressed in the corollary.
Next, we examine the relationship of the B-monotonicity of a function f : X P ( Z , C ) and its inf-extension.
Definition 10.
Let S be a subset of P ( X ) . A function F : S P ( Z , C ) is called B-monotone if:
D + B ⊇ E implies F ( D ) ⊇ F ( E ) ,
for all D , E S .
A function F : P ( X ) P ( Z , C ) is called B-monotone on S if the restriction of F to S is B-monotone.
A function G : S P ( Z , C ) is called B-monotone if:
D ⊆ E + B implies G ( D ) ⊆ G ( E )
for D , E ∈ S . A function G : P ( X ) → P ( Z , C ) is called B-monotone on S if the restriction of G to S is B-monotone.
The above definition can be understood as monotonicity with respect to set relations; see [13] for more details and references. The following results are formulated for P ( Z , C ) -valued functions, but they have the usual twins for P ( Z , C ) -valued ones.
Remark 15.
A function F : P ( X , B ) → P ( Z , C ) is B-monotone if, and only if,
D ⊇ E implies F ( D ) ⊇ F ( E ) ,
for D , E ∈ P ( X , B ) . Hence, f Δ is B-monotone on P ( X , B ) by definition for every function f : X → P ( Z , C ) .
Proposition 15.
(a) The following three statements are equivalent:
1. 
f : X P ( Z , C ) is B-monotone;
2. 
f Δ ( D + B ) = f Δ ( D ) for every D P ( X ) ;
3. 
f Δ : P ( X ) P ( Z , C ) is B-monotone;
(b) Let F : G ( X , B ) P ( Z , C ) be B-monotone. Then, the restriction F | ( X , B ) : X P ( Z , C ) defined by F | ( X , B ) ( x ) = F ( { x } + B ) is B-monotone.
Proof .
(a) The implication (iii) ⇒ (i) follows by choosing D and E as singletons in Definition 10. For the implication (ii) ⇒ (iii), note that D + B ⊇ E implies f Δ ( D ) = f Δ ( D + B ) ⊇ f Δ ( E ) . For the implication (i) ⇒ (ii), one has f Δ ( D + B ) ⊇ f Δ ( D ) by definition since D + B ⊇ D . Moreover, the B-monotonicity of f implies:
f Δ ( D ) = inf d ∈ D f ( d ) ⊇ inf d ∈ D inf b ∈ B f ( d + b ) = inf x ∈ D + B f ( x ) = f Δ ( D + B ) ;
(b) Straightforward. □
Corollary 6.
Let f : X → P ( Z , C ) be a T-translative function. Then, f Δ : P ( X ) → P ( Z , C ) is B-monotone if, and only if, A f − B ⊆ A f .
One may ask for the relationships of a set-valued function on X and the restriction of its inf-/sup-extension. Here is the result for the inf-extension of B-monotone P ( Z , C ) -valued functions. A parallel result holds true for the sup-extension of P ( Z , C ) -valued functions.
Proposition 16.
Let f : X P ( Z , C ) be an arbitrary function. Then,
f ( x ) = ( f Δ ) | ( X , { 0 } ) ( x ) = f Δ ( { x } )
for all x ∈ X . If f is B-monotone, then so is ( f Δ ) | ( X , B ) , and f ( x ) = ( f Δ ) | ( X , B ) ( x ) = f Δ ( { x } + B ) for all x ∈ X .
Proof. 
The first part is obvious, and the second part follows from Proposition 15. □
The question if the restriction of the inf-/sup-extension of a set-valued function coincides with the original one does not have such a straightforward answer. If F : G ( X , B ) P ( Z , C ) is a function, then the inf-extension of its restriction to X is:
( F | ( X , B ) ) Δ ( D ) = inf x ∈ D F | ( X , B ) ( x ) = inf x ∈ D F ( { x } + B ) .
This is the original function only if:
F ( D ) = inf x ∈ D F ( { x } + B )
which is not true in general: Consider the function F : G ( I R 2 , { 0 } ) → P ( I R 2 , I R + 2 ) , which assigns to a set D its vector infimum with respect to I R + 2 plus I R + 2 if this infimum exists, and I R 2 if not. Consider the set D = { x ∈ I R 2 ∣ x 1 ≥ 0 , x 2 ≥ 0 , x 1 + x 2 ≥ 1 } . Then, F ( D ) = { 0 } + I R + 2 ∈ G ( I R 2 , I R + 2 ) , F ( { x } ) = x + I R + 2 for x ∈ D and inf x ∈ D F ( { x } ) = D (this infimum now is in ( P ( I R 2 , I R + 2 ) , ⊇ ) ).
Thus, Equation (26) is a requirement to a function. It is of course satisfied if, and only if, F is the inf-extension of a function f : X P ( Z , C ) , but it remains a challenge to find more specific assumptions.
Note that F ( D ) ⊇ inf x ∈ D F ( { x } + B ) holds if F is B-monotone.
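The counterexample above can be verified mechanically: for D = { x ∈ I R 2 ∣ x ≥ 0 , x 1 + x 2 ≥ 1 } , the set F ( D ) = I R + 2 contains points, such as ( 0.2 , 0.2 ) , that dominate no element of D. A small membership test (the sets are encoded by their defining inequalities):

```python
# Numerical check of the counterexample: for D = {x ∈ R^2 | x >= 0, x1 + x2 >= 1},
# F(D) = (vector infimum of D) + R_+^2 = R_+^2 differs from
# inf_{x∈D} F({x}) = ∪_{x∈D} (x + R_+^2) = D.

def in_F_of_D(z):
    # F(D) = {(0,0)} + R_+^2, since (0,0) is the vector infimum of D
    return z[0] >= 0.0 and z[1] >= 0.0

def in_union(z):
    # z ∈ ∪_{x∈D} (x + R_+^2)  ⟺  z dominates some x ∈ D  ⟺  z ∈ D
    return z[0] >= 0.0 and z[1] >= 0.0 and z[0] + z[1] >= 1.0

z = (0.2, 0.2)
print(in_F_of_D(z), in_union(z))   # True False: the two sets differ at z
```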
Next, we use the above concepts to give extensions of T-translative functions to random sets.
Example 6.
Let X be a random closed set, which is p-integrable (see [45], Def. 1.1.1’ and p. 227), i.e., a function X : Ω → P ( I R d ) . A selector of X is a random variable X : Ω → I R d with X ( ω ) ∈ X ( ω ) for P-almost all ω ∈ Ω . The set L p ( X ) comprises all p-integrable selectors of X (see [23,45]). By the definition of p-integrability, this set is nonempty.
The question is how to extend a T-translative function on L d p to such random sets. Let f : L d p → G ( I R d , C ) be a T-translative function. The function f Δ : P ( L d p ) → G ( I R d , C ) defined by:
f Δ ( X ) = inf X ∈ L p ( X ) f ( X )
is called the selection random set extension of f. Similarly, if g : L d p → G ( I R d , C ) is T-translative, then:
g ∇ ( X ) = sup X ∈ L p ( X ) g ( X )
is its selection random set extension. Of course, these concepts only make sense if the random sets have reasonable sets of selectors, e.g., for p-integrable closed convex random sets as in ([45], p. 304ff). Proposition 13 ensures that both extensions are T-translative, and Proposition 14 does the same with respect to convexity/concavity.
This example shows that there are indeed techniques of set optimization theory that admit handling “functions whose arguments belong to a nonlinear space” (quote from ([23], p. 8)). In this case, as in the given reference, the “nonlinear space” is a complete lattice of sets that carries the algebraic structure of a collinear space (see [13]). Another application of inf-extensions to I R d -valued random variables and their random set extensions can be found in [38].
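For a finite Ω , the selection random set extension can be enumerated directly: every selector picks one point per scenario, and f Δ ( X ) is generated by the finitely many selector expectations. A sketch for f ( X ) = E [ X ] + I R + 2 (the two-point probability space and all data are illustrative):

```python
# Selection random set extension on a finite Ω = {ω1, ω2} with P = (1/2, 1/2):
# X(ω) is a finite set of points in R^2, a selector picks one point per scenario,
# and for f(X) = E[X] + R_+^2 (T-translative with T z = z·1) the inf-extension
# f^Δ(X) is generated by the finitely many selector expectations E[X].

from itertools import product

def selector_expectations(random_set, probs):
    # E[X] for every selector X of the finite random set (d = 2 here)
    return [tuple(sum(p * pt[i] for p, pt in zip(probs, choice)) for i in (0, 1))
            for choice in product(*random_set)]

probs = (0.5, 0.5)
random_set = [[(0.0, 0.0), (2.0, 0.0)],    # X(ω1), illustrative data
              [(0.0, 2.0), (1.0, 1.0)]]    # X(ω2)

exps = selector_expectations(random_set, probs)
print(sorted(exps))   # [(0.0, 1.0), (0.5, 0.5), (1.0, 1.0), (1.5, 0.5)]

# T-translativity of the extension (Proposition 13): shifting X by the constant
# z·1 shifts every generating expectation by z
z = (1.0, -1.0)
shifted = [[(p[0] + z[0], p[1] + z[1]) for p in omega] for omega in random_set]
print(sorted(selector_expectations(shifted, probs)) ==
      sorted((e[0] + z[0], e[1] + z[1]) for e in exps))   # True
```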

Author Contributions

All authors contributed equally to conceptualization, methodology, writing procedures. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cascos, I.; Molchanov, I. Multivariate risks and depth-trimmed regions. Financ. Stoch. 2007, 11, 373–397. [Google Scholar] [CrossRef]
  2. Hamel, A.H.; Heyde, F. Duality for set-valued measures of risk. SIAM J. Financ. Math. 2010, 1, 66–95. [Google Scholar] [CrossRef]
  3. Hamel, A.H.; Heyde, F.; Rudloff, B. Set-valued risk measures for conical market models. Math. Financ. Econ. 2011, 5, 1–28. [Google Scholar] [CrossRef] [Green Version]
  4. Jouini, E.; Meddeb, M.; Touzi, N. Vector-valued coherent risk measures. Financ. Stoch. 2004, 8, 531–552. [Google Scholar] [CrossRef]
  5. Kabanov, Y.M. Hedging and liquidation under transaction costs in currency markets. Financ. Stoch. 1999, 3, 237–248. [Google Scholar] [CrossRef]
  6. Jaschke, S.; Küchler, U. Coherent risk measures and good-deal bounds. Financ. Stoch. 2001, 5, 181–200. [Google Scholar] [CrossRef]
  7. Schachermayer, W. The fundamental theorem of asset pricing under proportional transaction costs in finite discrete time. Math. Financ. 2004, 14, 19–48. [Google Scholar] [CrossRef] [Green Version]
  8. Farkas, W.; Koch-Medina, P.; Munari, C. Measuring risk with multiple eligible assets. Math. Financ. Econ. 2015, 9, 3–27. [Google Scholar] [CrossRef] [Green Version]
  9. Farkas, W.; Koch-Medina, P.; Munari, C. Capital requirements with defaultable securities. Insur. Math. Econ. 2014, 55, 58–67. [Google Scholar] [CrossRef] [Green Version]
  10. Feinstein, Z.; Rudloff, B.; Weber, S. Measures of systemic risk. SIAM J. Financ. Math. 2017, 8, 672–708. [Google Scholar] [CrossRef]
  11. Feinstein, Z.; Rudloff, B. Time consistency of dynamic risk measures in markets with transaction costs. Quant. Financ. 2013, 13, 1473–1489. [Google Scholar] [CrossRef]
  12. Feinstein, Z.; Rudloff, B. Multi-portfolio time consistency for set-valued convex and coherent risk measures. Financ. Stoch. 2015, 19, 67–107. [Google Scholar] [CrossRef] [Green Version]
  13. Hamel, A.H.; Heyde, F.; Löhne, A.; Rudloff, B.; Schrage, C. Set optimization—A rather short introduction. In Set Optimization and Applications—The State of the Art. From Set Relations to Set-Valued Risk Measures; Hamel, A.H., Heyde, F., Löhne, A., Rudloff, B., Schrage, C., Eds.; Springer: Berlin, Germany, 2015; pp. 65–141. [Google Scholar]
  14. Kreps, D.M. A representation theorem for “preference for flexibility”. Econometrica 1979, 47, 565–577. [Google Scholar] [CrossRef]
  15. Hamel, A.H.; Löhne, A. Choosing sets: Preface to the special issue on set optimization and applications. Math. Meth. Oper. Res. 2020, 91, 1–4. [Google Scholar] [CrossRef] [Green Version]
  16. Aliprantis, C.D.; Border, K.C. Infinite Dimensional Analysis, 3rd ed.; Springer Publishers: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  17. Frittelli, M.; Scandolo, G. Risk measures and capital requirements for processes. Math. Financ. 2006, 16, 589–612. [Google Scholar] [CrossRef] [Green Version]
  18. Artzner, P.; Delbaen, F.; Eber, J.M.; Heath, D. Coherent measures of risk. Math. Financ. 1999, 9, 203–228. [Google Scholar] [CrossRef]
  19. Föllmer, H.; Schied, A. Stochastic Finance: An Introduction in Discrete Time, 3rd ed.; Walter de Gruyter: Berlin, Germany; New York, NY, USA, 2011. [Google Scholar]
  20. Farkas, W.; Koch-Medina, P.; Munari, C. Beyond cash-additive risk measures: When changing the numéraire fails. Financ. Stoch. 2014, 18, 145–173. [Google Scholar] [CrossRef] [Green Version]
  21. Filipović, D.; Kupper, M. Monotone and cash-invariant convex functions and hulls. Insur. Math. Econ. 2007, 41, 1–16. [Google Scholar] [CrossRef]
  22. Hamel, A.H. Monetary measures of risk. arXiv 2018, arXiv:1812.04354. [Google Scholar]
  23. Molchanov, I.; Mühlemann, A. Nonlinear expectations of random sets. Financ. Stoch. 2021, 25, 5–41. [Google Scholar] [CrossRef]
  24. Heyde, F.; Löhne, A. Solution concepts for vector optimization problems: A fresh look at an old story. Optimization 2011, 60, 1421–1440. [Google Scholar] [CrossRef]
  25. Hamel, A.H. A Duality theory for set-valued functions I: Fenchel conjugation theory. Set-Valued Var. Anal. 2009, 17, 153–182. [Google Scholar] [CrossRef]
  26. Zălinescu, C. Convex Analysis in General Vector Spaces; World Scientific: Singapore, 2002. [Google Scholar]
  27. Aubin, J.-P.; Ekeland, I. Applied Nonlinear Analysis; John Wiley & Sons: New York, NY, USA, 1984. [Google Scholar]
  28. Scandolo, G. Models of capital requirements in static and dynamic settings. Econ. Notes 2004, 33, 415–435. [Google Scholar] [CrossRef]
  29. Baes, M.; Koch-Medina, P.; Munari, C. Existence, uniqueness, and stability of optimal payoffs of eligible assets. Math. Financ. 2020, 30, 128–166. [Google Scholar] [CrossRef] [Green Version]
  30. Hamel, A.H. Translative Sets and Functions and Their Applications to Risk Measure Theory and Nonlinear Separation; IMPA Report D21; IMPA: Rio de Janeiro, Brazil, 2006. [Google Scholar]
  31. Gerth, C.; Weidner, P. Nonconvex separation theorems and some applications in vector optimization. J. Optim. Theory Appl. 1990, 67, 297–320. [Google Scholar] [CrossRef]
  32. Föllmer, H.; Schied, A. Convex measures of risk and trading constraints. Financ. Stoch. 2002, 6, 429–447. [Google Scholar] [CrossRef]
  33. Schrage, C. Algebraische Trennungsaussagen (Algebraic Separation Theorems). Diploma Thesis, Martin Luther University Halle-Wittenberg, Halle, Germany, 2005. [Google Scholar]
  34. Pennanen, T.; Penner, I. Hedging of claims with physical delivery under convex transaction costs. SIAM J. Financ. Math. 2010, 1, 158–178. [Google Scholar] [CrossRef] [Green Version]
  35. Ararat, C.; Hamel, A.H.; Rudloff, B. Set-valued shortfall and divergence risk measures. Int. J. Theory Appl. Financ. 2017, 20, 1750026. [Google Scholar] [CrossRef] [Green Version]
  36. Huber, P.J. Robust Statistics; John Wiley & Sons: New York, NY, USA, 1981. [Google Scholar]
  37. Hamel, A.H.; Kostner, D. Cone distribution functions and quantiles for multivariate random variables. J. Multivar. Anal. 2018, 167, 97–113. [Google Scholar] [CrossRef] [Green Version]
  38. Ararat, C.; Hamel, A.H. Lower cone distribution functions and set-valued quantiles form Galois connections. Theory Probab. Appl. 2020, 65, 179–190. [Google Scholar] [CrossRef]
  39. Munari, C. Multi-utility representation of incomplete preferences induced by set-valued risk measures. Financ. Stoch. 2021, 25, 77–99. [Google Scholar] [CrossRef]
  40. Artzner, P.; Delbaen, F.; Koch-Medina, P. Risk measures and efficient use of capital. Astin Bull. 2009, 39, 101–116. [Google Scholar] [CrossRef] [Green Version]
  41. Schrage, C. Scalar representation and conjugation of set-valued functions. Optimization 2015, 64, 197–223. [Google Scholar] [CrossRef] [Green Version]
  42. Crespi, G.P.; Hamel, A.H.; Rocca, M.; Schrage, C. Set relations via families of scalar functions and approximate solutions in set optimization. Math. Oper. Res. 2021, 46, 361–381. [Google Scholar] [CrossRef]
  43. Hamel, A.H.; Schrage, C. Directional derivatives and subdifferentials of set-valued convex functions. Pac. J. Optim. 2014, 10, 667–687. [Google Scholar]
  44. Löhne, A. Vector Optimization with Infimum and Supremum; Springer Publishers: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  45. Molchanov, I. Theory of Random Sets, 2nd ed.; Springer Publishers: London, UK, 2017. [Google Scholar]