Article

Optimal Insurance for a Minimal Expected Retention: The Case of an Ambiguity-Seeking Insurer

1 Département de Sciences Économiques, Université de Montréal, C.P. 6128, succursale Centre-ville, Montréal, QC H3C 3J7, Canada
2 Imperial College London, South Kensington Campus, London SW7 2AZ, UK
* Author to whom correspondence should be addressed.
Received: 22 December 2015 / Revised: 7 March 2016 / Accepted: 9 March 2016 / Published: 21 March 2016

Abstract:
In the classical expected utility framework, a problem of optimal insurance design with a premium constraint is equivalent to a problem of optimal insurance design with a minimum expected retention constraint. When the insurer has ambiguous beliefs represented by a non-additive probability measure, as in Schmeidler, this equivalence no longer holds. Recently, Amarante, Ghossoub and Phelps examined the problem of optimal insurance design with a premium constraint when the insurer has ambiguous beliefs. In particular, they showed that when the insurer is ambiguity-seeking, with a concave distortion of the insured’s probability measure, then the optimal indemnity schedule is a state-contingent deductible schedule, in which the deductible depends on the state of the world only through the insurer’s distortion function. In this paper, we examine the problem of optimal insurance design with a minimum expected retention constraint, in the case where the insurer is ambiguity-seeking. We obtain the aforementioned result of Amarante, Ghossoub and Phelps and the classical result of Arrow as special cases.

1. Introduction

In the classical problem of optimal insurance design, it is well known since the work of Arrow [1] that when the insured is a risk-averse expected utility (EU) maximizer and the insurer is a risk-neutral EU maximizer, the indemnity schedule that maximizes the insured’s expected utility of terminal wealth subject to a premium constraint is a deductible indemnity of the form $Y^* = \max(0, X - d)$, where $X$ is the loss random variable and $d \geq 0$ is a given constant deductible.
For a given indemnity schedule Y, the premium constraint is a constraint of the form:
$\Pi \geq (1 + \rho) \int Y \, dP$
where $\Pi \geq 0$ is the premium paid and $\rho \geq 0$ is a loading factor. It requires that the premium received by the insurer be enough to cover the expected indemnity payments, loaded up for expenses and profits. By letting $R_0 = \int X \, dP - \frac{\Pi}{1 + \rho}$, the premium constraint can be written as:
$\int (X - Y) \, dP \geq R_0 \quad (1)$
The quantity $X - Y$ is a random variable that represents the portion of the loss that is actually suffered (or retained) by the insured after the insurance indemnification payment, and it is called the retention random variable. The constraint given in Equation (1) is often referred to as a minimum expected retention constraint. It requires that the indemnity schedule $Y$ be such that the associated retention random variable has a minimum pre-specified expectation under the insurer’s beliefs. Since a premium constraint and a minimum expected retention constraint are equivalent in the classical EU case, the optimal indemnity schedule with a minimum expected retention constraint is also a deductible indemnity schedule, with a constant positive deductible level.
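This equivalence is easy to verify numerically. The following sketch simulates a uniform loss and checks that, for any candidate indemnity, the premium constraint and the corresponding minimum expected retention constraint accept or reject it together; the premium, loading factor and deductible are illustrative values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=100_000)   # simulated loss scenarios (hypothetical)
Pi, rho = 6.0, 0.2                          # premium and loading factor (hypothetical)
d = 4.0                                     # a candidate constant deductible
Y = np.maximum(X - d, 0.0)                  # deductible indemnity schedule

# Premium constraint: Pi >= (1 + rho) * E[Y]
premium_ok = Pi >= (1.0 + rho) * Y.mean()

# Equivalent minimum expected retention constraint:
# E[X - Y] >= R0 with R0 = E[X] - Pi / (1 + rho)
R0 = X.mean() - Pi / (1.0 + rho)
retention_ok = (X - Y).mean() >= R0 - 1e-12  # tolerance for floating-point error

# The two constraints are algebraically equivalent, so they always agree.
assert premium_ok == retention_ok
```

The equivalence is just the rearrangement $\Pi \geq (1+\rho)\,\mathbb{E}[Y] \iff \mathbb{E}[X] - \mathbb{E}[Y] \geq \mathbb{E}[X] - \Pi/(1+\rho)$.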
Both the premium constraint and the minimum expected retention constraint are a simple restatement of the risk-neutral EU-maximizing insurer’s participation constraint. The insurer and the insured share the same probabilistic beliefs about the realizations of the loss random variable, and hence, Arrow’s [1] classical result is a pure risk-sharing result. The applicability of this result is limited, nonetheless. Indeed, the classical theory of optimal insurance design due to Arrow is based on the classical theory of choice under uncertainty [2,3,4], which follows the Bayesian paradigm, whereby the uncertainty that a decision maker faces in a given decision problem is described by a probability measure over a space of contingencies or states of the world. In other words, the economic agent has a clear probabilistic assessment of the underlying uncertainty that he faces.
However, the seminal work of Knight [5] suggested that there might be situations where the information available to a decision maker is too coarse for him or her to be able to formulate an additive probability measure over the list of contingencies. These occurrences are typically referred to as situations of decision under Knightian uncertainty, or ambiguity. Yet, this did not penetrate the mainstream theory of choice until Ellsberg’s [6] famous thought experiments, which can be seen as an indication of people’s aversion to unknown unknowns, or vagueness in beliefs about likelihoods. There is now a substantial body of empirical evidence for the pervasiveness of ambiguity in situations of choice under uncertainty, and we refer to Camerer [7] for a still timely review.
Largely motivated by the Ellsberg paradox, modern decision theory has developed several models of decision under ambiguity that rationalize behaviours, such as the ones depicted by Ellsberg [6]. For example, in Schmeidler [8], ambiguity is represented by a non-additive subjective “probability” measure, called a capacity, and preferences are aggregated using an integral defined with respect to capacities: the Choquet integral (we review capacities and Choquet integration in Section 3.1 and in Appendix A). Schmeidler’s [8] seminal work and his model of decision under ambiguity, which came to be known as Choquet expected utility (CEU), can be seen as the starting point of decision theoretic investigations of models of choice under ambiguity.
After Schmeidler’s work, many axiomatic models of decision under ambiguity were introduced. In Gilboa and Schmeidler [9], ambiguity is described by a collection of additive priors (a set of probability measures), and preferences are aggregated using the minimum value of the usual (Lebesgue) integral over this collection. Ghirardato, Maccheroni and Marinacci [10] propose a general model of decision under ambiguity that includes that of [9]. Amarante [11] introduces a model of decision under ambiguity that includes the aforementioned ones. We refer to the recent survey of Gilboa and Marinacci [12] for more on this topic, including other models of decision under ambiguity and applications of these models to several problems in economic theory.
In particular, Amarante [11] shows that most models of decision under ambiguity can be represented as models where the objects of choice are evaluated by a Choquet integral with respect to some capacity. That is, Choquet integration is a wide enough aggregation concept for preferences that it can encompass most models of decision under ambiguity and, in particular, the most popular ones. As a result, we focus in this paper on Choquet integration as an aggregation concept for decision-making under ambiguity, and we consider the CEU model of Schmeidler [8] as a model of decision-making under ambiguity.

1.1. Ambiguity in Optimal Insurance Design

Empirical evidence suggests that ambiguity, rather than risk, is prevalent in insurance pricing and underwriting and that often, insurers tend to exhibit more ambiguity than the insured individuals (e.g., [13]). Motivated by these findings, Amarante, Ghossoub and Phelps (AGP) [14] re-examined the classical insurance demand problem of Arrow [1] in a setting where the insurer has ambiguous beliefs (in the sense of Schmeidler [8]) about the realizations of the insurable loss, whereas the insured is an EU maximizer. Specifically, they examined the problem of a risk-averse EU-maximizing insured who wishes to choose the indemnity schedule that maximizes his or her expected utility of terminal wealth, subject to a premium constraint based on the insurer’s ambiguous beliefs and that takes the form:
$\Pi - (1 + \rho) \int Y \, d\nu \geq 0 \quad (2)$
where integration is in the sense of Choquet (Section 3.1 and Appendix A). This premium constraint is a restatement of the insurer’s participation constraint. In particular, AGP [14] showed that when the insurer is ambiguity-seeking, with a concave distortion of the insured’s probability measure, then the optimal indemnity schedule is a state-contingent deductible schedule, in which the deductible depends on the state of the world only through the insurer’s distortion function. Arrow’s result then obtains as a special case when the insurer does not distort probabilities.
Because of the properties of the Choquet integral, the premium constraint in Equation (2) cannot be re-written as a minimum expected retention constraint. Therefore, in the case where the insurer is a Choquet-expected-utility maximizer as in Schmeidler [8], a problem of insurance design given a premium constraint is not equivalent to a problem of insurance design given a minimum expected retention constraint, where the expectation is in the sense of Choquet; that is, it is an ambiguous (or non-additive) expectation.

1.2. Related Literature

There is a growing literature devoted to the study of ambiguity in insurance design and, more broadly, in situations of contracting or risk sharing. For example, Alary et al. [15] examine a problem of optimal insurance design in which the insured is ambiguity-averse in the sense of Klibanoff, Marinacci and Mukerji [16]. The authors assume that, conditional on a non-zero loss occurring, the loss severity distribution is not ambiguous; however, the probability that a loss occurs is ambiguous. The authors then show that, in this situation, the optimal indemnity is a straight deductible. Gollier [17] also examines a problem of optimal insurance design in which the insured is ambiguity-averse in the sense of Klibanoff, Marinacci and Mukerji [16], and he shows that the optimal indemnity schedule contains a disappearing deductible when the insured’s collection of (additive) priors can be ordered according to the monotone-likelihood ratio property.
Jeleva [18] considers an insurance model in which the insured behaves according to Schmeidler’s CEU model. She assumes that the optimal insurance contract is of the co-insurance type and then looks for the optimal co-insurance factor. Young [19] and Bernard et al. [20] examine a problem of optimal insurance design in which the insured is a rank-dependent expected utility maximizer [21,22]. Doherty and Eeckhoudt [23] study the optimal level of deductible under Yaari’s dual theory [22]. Karni [24] considers an insurance model in which the preferences of the insured can accommodate ambiguous beliefs, but they satisfy certain differentiability criteria. He shows that a deductible indemnity schedule is optimal in that case. Machina [25] considers a similar setting where the preferences of the insured have a non-EU representation that satisfies certain differentiability criteria, and he examines the optimal level of co-insurance and the optimal level of deductible. Schlesinger [26] examines the optimal co-insurance level in a situation where the preferences of the insured are not necessarily EU preferences, but they are risk-averse in the sense of disliking mean-preserving increases in risk.
The papers mentioned above all assume that the ambiguity is on the side of the insured. Motivated by empirical evidence suggesting that insurers tend to exhibit more ambiguity than the insured individuals (e.g., [13]), AGP [14] study the problem of optimal insurance design in a setting where the insurer has ambiguous beliefs about the realizations of the insurable loss (and behaves according to Schmeidler’s CEU model), whereas the insured does not.
Carlier et al. [27] examine ambiguity on both sides: the insurer’s and the insured’s. Specifically, they consider the case in which both parties’ beliefs are epsilon-contaminations of a given prior, and they show that the optimal indemnity contains a deductible for high values of the loss. Anwar and Zheng [28] also examine ambiguity on both sides, but they consider a model with only two states of the world: a no-loss state and a loss state. As such, this is of limited scope, since the shape of an optimal indemnity schedule cannot be determined in a two-state world.
Carlier and Dana [29,30,31] and Chateauneuf et al. [32] examine more general problems of risk sharing and contracting that are relevant to the insurance problem considered here. However, they do not give a full analytical characterization of the optimal risk sharing rule, which is one of the main goals of the present paper in the context of an insurance design problem.
Finally, we would like to mention the recent work of Balbas et al. [33], who consider ambiguity on the side of both the insurer and the insured. The authors assume that both parties behave according to the maxmin expected utility (MEU) model of Gilboa and Schmeidler [9]; that is, their ambiguity is described by a collection of additive priors (a set of probability measures), and preferences are aggregated using the minimum value of the usual (Lebesgue) integral over this collection of priors. The authors do not consider the retention random variable itself as a decision variable, but rather its sensitivity (i.e., mathematical derivative) with respect to total claims. This is meant to prevent some potential moral hazard issues. The authors then show that the optimal indemnity schedule is often such that the sensitivity of the retention random variable to the total claims saturates the problem’s constraints (a bang-bang solution).

1.3. This Paper’s Contribution

This paper is meant to supplement the literature and provide a framework for scholars or practitioners who are used to considering retention as a decision variable, but would like to examine the effect of ambiguity. Specifically, we examine the problem of optimal insurance design for a given minimum expected retention level, when the insurer is ambiguity-seeking in the sense of Schmeidler [8]. We focus on the case of an ambiguity-seeking rather than an ambiguity-averse insurer, since recent work suggests that situations of ambiguity-seeking are empirically relevant (e.g., [34,35,36]). We examine two cases:
(1)
We first examine the problem of insurance design when the insurer’s beliefs are a concave distortion of a probability measure that differs from the insured’s probability measure. We show that there is an event A to which the insurer assigns full probability and on which the optimal indemnity schedule is a state-contingent deductible schedule, with a state-contingent deductible that is a function of the insurer’s distortion function and of a likelihood ratio between the two parties’ beliefs. On the complement of the event A, the optimal indemnity is full insurance. However, the insurer assigns zero probability to the complement of A.
(2)
As a special case of the above, we examine the problem of insurance design when the insurer’s beliefs are a concave distortion of the insured’s probability measure. We show that in this case, the result of AGP [14] mentioned above still holds: the optimal indemnity schedule is a state-contingent deductible schedule, in which the deductible depends on the state of the world only through the insurer’s distortion function. Arrow’s result then obtains as a special case when the insurer does not distort probabilities.

1.4. Outline

The rest of this paper is organized as follows. In Section 2, we review the classical optimal insurance design problem. In Section 3, we examine the problem of optimal insurance design given a minimum expected retention constraint, in the case of an insurer who is ambiguity-seeking in the sense of Schmeidler [8]. Proofs and related analysis are given in the Appendices.

2. Optimal Insurance: The Classical Case

2.1. Setup and Preliminaries

Consider a collection $S$ of states of the world, and endow $S$ with a σ-algebra $\mathcal{F}$ of events. Let $B(\mathcal{F})$ denote the linear space of all bounded, $\mathbb{R}$-valued and $\mathcal{F}$-measurable functions on $(S, \mathcal{F})$. Let $B^+(\mathcal{F})$ denote the collection of all $\mathbb{R}^+$-valued elements of $B(\mathcal{F})$. Any $f \in B(\mathcal{F})$ is bounded, and we define its sup norm by $\|f\|_{sup} := \sup\{|f(s)| : s \in S\} < +\infty$.
In the classical problem of optimal insurance design given a premium constraint, a risk-averse EU-maximizing individual with initial wealth $W_0 > 0$ faces an insurable random loss, against which he or she seeks insurance. This random loss is represented by a bounded, nonnegative random variable $X$ on the state space. That is, the random loss is a given element of $B^+(\mathcal{F})$ with closed range $X(S) = [0, M]$, where $M := \|X\|_{sup} < +\infty$.
Denote by $\Sigma$ the σ-algebra $\sigma\{X\}$ of subsets of $S$ generated by $X$. Then, for any $Y \in B(\Sigma)$, there exists a Borel-measurable map $I : \mathbb{R} \to \mathbb{R}$ such that $Y = I \circ X$ ([37], Theorem 4.41). Let $B^+(\Sigma)$ denote the collection of nonnegative elements of $B(\Sigma)$, and let $P$ be a probability measure on $(S, \Sigma)$. We will make the following assumption throughout.
Assumption 1. 
The random loss $X$ is a continuous random variable on the probability space $(S, \Sigma, P)$. That is, the Borel probability measure $P \circ X^{-1}$ is nonatomic.
The individual seeking insurance coverage against this random loss $X$ has the possibility of purchasing an insurance indemnity schedule $Y = I \circ X$, which pays the amount $I(X(s)) \geq 0$ in the state of the world $s \in S$. That is, $Y \in B^+(\Sigma)$, and we can hence identify $B^+(\Sigma)$ with the collection of all possible indemnity schedules. The price of this insurance indemnity schedule is called the insurance premium, and it is denoted by $\Pi \geq 0$. The premium is determined by the insurer, based on the insurer’s beliefs about the realizations of $X$ (and hence of $Y$).

2.2. The Insurance Design Problem

In the classical insurance model of Arrow [1], both the insurer and the insured are EU maximizers, having the same non-ambiguous beliefs about the realizations of X. The insurer is assumed to be a risk-neutral EU maximizer, with a linear utility function v, which we assume to be the identity function. This is without loss of generality, as utility functions are defined up to a positive linear transformation. The insured is risk-averse, having a concave increasing utility function u.
After purchasing an indemnity schedule $Y = I \circ X$, the insured’s wealth is the random variable $W$ defined by $W(s) = W_0 - \Pi - X(s) + Y(s)$, in each state of the world $s \in S$. The insured’s problem is that of choosing an indemnity schedule that maximizes his or her expected utility of wealth, for a given premium $\Pi$. Specifically, the insured’s problem is that of choosing $Y$ in $B^+(\Sigma)$ so as to maximize:
$\int u(W_0 - \Pi - X + Y) \, dP$
subject to the classical constraint that the indemnity function is nonnegative and does not exceed the loss, that is, $0 \leq Y \leq X$, and subject to a premium constraint of the form:
$\Pi \geq (1 + \rho) \int Y \, dP \quad (3)$
where $\rho \geq 0$ is a given loading factor.
Arrow’s [1] classical result states that in this case, the optimal insurance indemnity schedule is a deductible insurance schedule:
Theorem 1 (Arrow). 
The optimal indemnity schedule is a deductible schedule given by:
$Y^* = \max(0, X - d)$
where $d \geq 0$ is a constant such that $\int Y^* \, dP = \frac{\Pi}{1 + \rho}$.
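The constant deductible in Theorem 1 can be found numerically: since $\int \max(0, X - d)\,dP$ is continuous and decreasing in $d$, the calibration equation $\int Y^* \, dP = \Pi/(1+\rho)$ can be solved by bisection. A sketch with a simulated uniform loss and hypothetical values of $\Pi$ and $\rho$:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, size=200_000)   # simulated loss scenarios (hypothetical)
Pi, rho = 3.0, 0.25                         # premium and loading (hypothetical)
target = Pi / (1.0 + rho)                   # required expected indemnity E[Y*]

def expected_indemnity(d):
    """E[max(X - d, 0)] under the empirical distribution of X."""
    return np.maximum(X - d, 0.0).mean()

# expected_indemnity is continuous and decreasing in d: bisect for the deductible.
lo, hi = 0.0, X.max()
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if expected_indemnity(mid) > target:
        lo = mid
    else:
        hi = mid
d = 0.5 * (lo + hi)
```

For a Uniform(0, 10) loss the equation has the closed form $(10 - d)^2/20 = \Pi/(1+\rho)$, so the empirical solution should land near $10 - \sqrt{48} \approx 3.07$.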

2.3. Premium Constraint vs. Minimal Expected Retention Constraint

In this classical framework, it is easy to see how the premium constraint given in Equation (3) is equivalent to a constraint of the form:
$\int (X - Y) \, dP \geq R_0 \quad (4)$
for $R_0 = \int X \, dP - \frac{\Pi}{1 + \rho}$. For a given indemnity schedule $Y \in B^+(\Sigma)$, the random variable
$R = X - Y$
is the amount of loss that is retained by the insured individual and is called the retention random variable. The constraint given in Equation (4) is called a minimum retention constraint.
Therefore, the classical problem of demand for insurance is equivalent to the following problem of optimal insurance design with a minimal expected retention:
For a given $R_0 \geq 0$:
$\sup_{Y \in B(\Sigma)} \int u(W_0 - \Pi - X + Y) \, dP \quad \text{s.t.} \quad 0 \leq Y \leq X, \quad \int (X - Y) \, dP \geq R_0 \quad (5)$
By the monotonicity of $u$, if $R_0 = 0$, then full insurance is optimal; that is, $Y^* = X$ is an optimal solution for Problem (5). When $R_0 > 0$, Theorem 1 implies that the optimal solution is a deductible indemnity schedule with a constant, positive deductible.
One should note that the equivalence between a premium constraint and a minimum expected retention constraint holds only under the expected value premium principle, i.e., under expected utility theory. Insurance pricing, risk measurement and premium principles under non-expected-utility models or models of ambiguity have recently been examined in the literature (see, e.g., Pichler [38] and Ghossoub [39]). Under such non-linear premium principles, the equivalence between a premium constraint and a minimum retention constraint does not hold.

3. The Case of an Ambiguity-Seeking Insurer

3.1. Preliminaries: Capacities, Choquet Integration and the CEU Model

Definition 1. 
A (normalized) capacity on a measurable space $(S, \Sigma)$ is a set function $\upsilon : \Sigma \to [0, 1]$ such that:
(1)
$\upsilon(\emptyset) = 0$;
(2)
$\upsilon(S) = 1$; and,
(3)
$\upsilon$ is monotone: for any $A, B \in \Sigma$, $A \subseteq B \Rightarrow \upsilon(A) \leq \upsilon(B)$.
The capacity $\upsilon$ is said to be:
  • supermodular (or convex) if $\upsilon(A \cup B) + \upsilon(A \cap B) \geq \upsilon(A) + \upsilon(B)$, for all $A, B \in \Sigma$; and,
  • submodular (or concave) if $\upsilon(A \cup B) + \upsilon(A \cap B) \leq \upsilon(A) + \upsilon(B)$, for all $A, B \in \Sigma$.
For instance, if $(S, \Sigma, P)$ is a probability space and $T : [0, 1] \to [0, 1]$ is an increasing function such that $T(0) = 0$ and $T(1) = 1$, then the set function $\upsilon := T \circ P$ is a capacity on $(S, \Sigma)$, called a distorted probability measure. The function $T$ is usually called a probability distortion. If, moreover, the distortion function $T$ is convex (respectively, concave), then the capacity $\upsilon = T \circ P$ is supermodular (respectively, submodular) ([40], Ex. 2.1).
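The submodularity of a concavely distorted probability can be verified by brute force on a small finite space. The sketch below uses a uniform measure on four states and the (hypothetical, purely illustrative) concave distortion $T(p) = \sqrt{p}$:

```python
from itertools import combinations
import math

# Finite state space with the uniform probability measure P.
S = range(4)

def P(A):
    return len(A) / len(S)

def T(p):
    """A concave distortion: T(p) = sqrt(p), with T(0) = 0 and T(1) = 1."""
    return math.sqrt(p)

def nu(A):
    """The distorted capacity nu = T o P."""
    return T(P(A))

# Check submodularity: nu(A | B) + nu(A & B) <= nu(A) + nu(B) for all events.
subsets = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]
for A in subsets:
    for B in subsets:
        assert nu(A | B) + nu(A & B) <= nu(A) + nu(B) + 1e-12
```

The assertion never fires, consistent with the cited result that a concave distortion of an additive measure is submodular.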
Definition 2. 
Let $\upsilon_1$ be a supermodular capacity and $\upsilon_2$ a submodular capacity on $(S, \Sigma)$.
  • The core of $\upsilon_1$, denoted by $core(\upsilon_1)$, is the collection of all probability measures $Q$ on $(S, \Sigma)$ such that $Q(A) \geq \upsilon_1(A)$, for all $A \in \Sigma$.
  • The anti-core of $\upsilon_2$, denoted by $acore(\upsilon_2)$, is the collection of all probability measures $Q$ on $(S, \Sigma)$ such that $Q(A) \leq \upsilon_2(A)$, for all $A \in \Sigma$.
Both $core(\upsilon_1)$ and $acore(\upsilon_2)$ are weak*-compact and convex collections of probability measures on the space $(S, \Sigma)$.
Definition 3. 
Let $\upsilon$ be a capacity on $(S, \Sigma)$. The Choquet integral of $Y \in B(\Sigma)$ with respect to $\upsilon$ is defined by:
$\int Y \, d\upsilon := \int_0^{+\infty} \upsilon(\{s \in S : Y(s) \geq t\}) \, dt + \int_{-\infty}^0 \left[\upsilon(\{s \in S : Y(s) \geq t\}) - 1\right] dt$
where the integrals are taken in the sense of Riemann.
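On a finite state space and for a nonnegative $Y$, the definition reduces to the layer-cake sum $\int Y \, d\upsilon = \sum_i (y_i - y_{i-1}) \, \upsilon(\{Y \geq y_i\})$ over the sorted distinct values $0 = y_0 < y_1 < \dots$ of $Y$. A sketch, with a hypothetical square-root distortion of the uniform measure as the capacity:

```python
import math

# Finite state space; nu is a concave distortion of the uniform measure,
# hence a submodular capacity.
states = [0, 1, 2, 3]
Y = {0: 1.0, 1: 4.0, 2: 2.0, 3: 4.0}      # a nonnegative random variable

def nu(A):
    return math.sqrt(len(A) / len(states))

def choquet(Y, nu):
    """Choquet integral of a nonnegative Y on a finite space: the sum of
    (y_i - y_{i-1}) * nu({Y >= y_i}) over sorted distinct values, y_0 = 0."""
    total, prev = 0.0, 0.0
    for y in sorted(set(Y.values())):
        level = frozenset(s for s in Y if Y[s] >= y)
        total += (y - prev) * nu(level)
        prev = y
    return total

result = choquet(Y, nu)
```

Here the submodular capacity overweights the upper level sets, so `result` exceeds the ordinary expectation $(1+4+2+4)/4 = 2.75$ under the undistorted uniform measure.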
The Choquet integral with respect to a (countably additive) measure is the usual Lebesgue integral with respect to that measure ([41], p. 59). By a classical result of Schmeidler [42], we can represent Choquet integrals with respect to a supermodular (respectively, submodular) capacity as a lower (respectively, upper) envelope of Lebesgue integrals:
Proposition 1 
(Schmeidler [42]). Let $\upsilon$ be a capacity on $(S, \Sigma)$, and let $Y \in B(\Sigma)$:
  • If $\upsilon$ is supermodular, then $\int Y \, d\upsilon = \min\left\{\int Y \, dP : P \in core(\upsilon)\right\}$;
  • If $\upsilon$ is submodular, then $\int Y \, d\upsilon = \max\left\{\int Y \, dP : P \in acore(\upsilon)\right\}$.
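The upper-envelope representation can be checked by hand on a two-state space. With the (hypothetical) submodular capacity $\upsilon(\{0\}) = \upsilon(\{1\}) = \sqrt{1/2}$, the anti-core consists of measures $Q$ with $Q(\{1\}) \in [1 - \sqrt{1/2}, \sqrt{1/2}]$, and the expectation is linear in $Q(\{1\})$, so its maximum is attained at an extreme point:

```python
import math

# Two states {0, 1}; nu = T o P with T(p) = sqrt(p) and P uniform: submodular.
r = math.sqrt(0.5)               # nu({0}) = nu({1}) = sqrt(1/2)

def choquet(a, b):
    """Choquet integral of Y with Y(0) = a, Y(1) = b, for this capacity:
    min(a, b) * nu(S) + (max(a, b) - min(a, b)) * nu({argmax})."""
    lo, hi = min(a, b), max(a, b)
    return lo + (hi - lo) * r

# Anti-core: {Q : Q(A) <= nu(A) for all A}, i.e. Q({1}) in [1 - r, r].
# Schmeidler: the Choquet integral equals the max expectation over the anti-core.
a, b = 1.0, 3.0
envelope = max(q * b + (1 - q) * a for q in [1 - r, r])  # max at an extreme point
assert abs(choquet(a, b) - envelope) < 1e-12
```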
Definition 4. 
Two functions $Y_1, Y_2 \in B(\Sigma)$ are said to be comonotonic if:
$\left[Y_1(s) - Y_1(s')\right]\left[Y_2(s) - Y_2(s')\right] \geq 0, \text{ for all } s, s' \in S$
For instance, any $Y \in B(\Sigma)$ is comonotonic with any constant $c \in \mathbb{R}$. Moreover, if $Y_1, Y_2 \in B(\Sigma)$ and if $Y_2$ is of the form $Y_2 = I \circ Y_1$, for some Borel-measurable function $I$, then $Y_2$ is comonotonic with $Y_1$ if and only if the function $I$ is nondecreasing. Appendix A provides some additional information about capacities and Choquet integration.
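The definition can be tested directly on a finite state space. The sketch below (with arbitrary illustrative values) confirms that a deductible indemnity, being a nondecreasing function of the loss, is comonotonic with it, while a decreasing transform is not:

```python
def comonotonic(Y1, Y2):
    """Check [Y1(s) - Y1(s')] * [Y2(s) - Y2(s')] >= 0 for all state pairs;
    Y1 and Y2 are dicts over the same finite state space."""
    return all((Y1[s] - Y1[t]) * (Y2[s] - Y2[t]) >= 0
               for s in Y1 for t in Y1)

X  = {0: 1.0, 1: 3.0, 2: 5.0}                        # loss values by state
Y1 = {s: max(x - 2.0, 0.0) for s, x in X.items()}    # I(x) = max(x - 2, 0): nondecreasing
Y2 = {s: 5.0 - x for s, x in X.items()}              # decreasing in the loss

assert comonotonic(X, Y1)
assert not comonotonic(X, Y2)
```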
In the CEU model of Schmeidler [8], the ambiguous beliefs of a decision maker are represented by a non-additive probability measure (a capacity) on the state space, and preferences are aggregated using the concept of Choquet integration.
The decision maker’s preferences over alternatives induce a unique non-additive probability measure $\upsilon$ over the state space and a utility function $u$, such that an alternative $Y$ is evaluated by $\int u(Y) \, d\upsilon$, where integration is in the sense of Choquet. Moreover, in the CEU model, ambiguity aversion is equivalent to convexity (supermodularity) of the capacity, and ambiguity seeking is equivalent to concavity (submodularity) of the capacity.

3.2. The Insurance Design Problem

Here, we assume that the insurer has ambiguous beliefs about the realizations of the loss random variable X. His or her ambiguous beliefs are represented by a non-additive probability measure (a capacity) on the state space, as in the CEU model of Schmeidler [8].
In this case, an indemnity schedule $Y$ will be deemed acceptable by the insurer if the associated retention $R = X - Y$ yields a non-additive expectation of at least $R_0 > 0$ under the insurer’s ambiguous beliefs. In other words, the minimum retention constraint in this context is given by:
$\int (X - Y) \, d\upsilon \geq R_0 \quad (6)$
where integration is in the sense of Choquet. It is important to note that since the Choquet integral is in general not additive but only positively homogeneous (Proposition A1), the constraint given in Equation (6) cannot be written as a premium constraint of the form $\int Y \, d\upsilon \leq \frac{\Pi}{1 + \rho}$ for some $\rho > 0$. This is a fundamental point of departure from the classical case.
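The failure of additivity is easy to exhibit numerically: for a submodular capacity, the Choquet expectation of the retention $X - Y$ generally differs from the difference of the Choquet expectations of $X$ and $Y$, so the constraint cannot be rearranged as in the classical case. A small sketch on three states, with a hypothetical square-root distortion of the uniform measure:

```python
import math

states = [0, 1, 2]

def nu(A):
    """Submodular capacity: concave distortion of the uniform measure."""
    return math.sqrt(len(A) / len(states))

def choquet(Z):
    """Layer-cake Choquet integral of a nonnegative Z on the finite space."""
    total, prev = 0.0, 0.0
    for v in sorted(set(Z.values())):
        total += (v - prev) * nu(frozenset(s for s in Z if Z[s] >= v))
        prev = v
    return total

X = {0: 0.0, 1: 2.0, 2: 4.0}                 # loss (illustrative values)
Y = {0: 0.0, 1: 0.0, 2: 3.0}                 # an indemnity with 0 <= Y <= X
R = {s: X[s] - Y[s] for s in states}         # retention X - Y

lhs = choquet(R)                 # Choquet expectation of the retention
rhs = choquet(X) - choquet(Y)    # what a premium-style rearrangement would need
assert abs(lhs - rhs) > 1e-6     # the Choquet integral is not additive here
```

For a submodular capacity the Choquet integral is subadditive, so `lhs >= rhs` always, with equality only in special (e.g., comonotonic) cases.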
Recently, AGP [14] studied the problem of optimal insurance design given a premium constraint, when the insurer has ambiguous beliefs in the sense of the CEU model of Schmeidler [8]. Specifically, they study the problem:
$\sup_{Y \in B(\Sigma)} \int u(W_0 - \Pi - X + Y) \, dP \quad \text{s.t.} \quad 0 \leq Y \leq X, \quad \int Y \, d\upsilon \leq \frac{\Pi}{1 + \rho} \quad (7)$
The constraint $\int Y \, d\upsilon \leq \frac{\Pi}{1 + \rho}$ is a restatement of the insurer’s participation constraint:
$W_0^{ins} + \Pi - (1 + \rho) \int Y \, d\upsilon \geq W_0^{ins}$
where $W_0^{ins}$ is the insurer’s initial wealth.
Here, for a given $R_0 \geq 0$, the insured’s problem is the following:
$\sup_{Y \in B(\Sigma)} \int u(W_0 - \Pi - X + Y) \, dP \quad \text{s.t.} \quad 0 \leq Y \leq X, \quad \int (X - Y) \, d\upsilon \geq R_0 \quad (8)$
As mentioned above, due to the properties of the Choquet integral, Problems (7) and (8) are not equivalent.
Assumption 2. 
The insured’s utility function $u$ satisfies the following properties:
(1)
$u(0) = 0$;
(2)
$u$ is strictly increasing and strictly concave;
(3)
$u$ is continuously differentiable; and,
(4)
the first derivative satisfies $u'(0) = +\infty$ and $\lim_{x \to +\infty} u'(x) = 0$.
Thus, in particular, we assume that the insured is risk-averse. We also make the assumption that the insured is well diversified, so that the particular exposure to X is sufficiently small, with respect to the total wealth of the insured:
Assumption 3. 
$X \leq W_0 - \Pi$, $P$-a.s. That is, $P(\{s \in S : X(s) > W_0 - \Pi\}) = 0$.
In this paper, we consider the case of an insurer who is ambiguity-seeking in the sense of Schmeidler [8]. That is, the capacity $\upsilon$ is submodular (concave; Definition 1). Specifically, we consider the case where the insurer’s capacity $\upsilon$ is of the form $\upsilon = T \circ Q$, for some probability measure $Q \neq P$ on $(S, \Sigma)$ and some function $T : [0, 1] \to [0, 1]$ that is increasing, concave and continuous, with $T(0) = 0$ and $T(1) = 1$. Then, $T \circ Q$ is a submodular capacity on $(S, \Sigma)$.
Assumption 4. 
We assume that $\upsilon = T \circ Q$, where:
(1)
$Q$ is a probability measure on $(S, \Sigma)$ such that $Q \circ X^{-1}$ is nonatomic;
(2)
$T : [0, 1] \to [0, 1]$ is increasing, concave and twice differentiable; and,
(3)
$T(0) = 0$ and $T(1) = 1$.
Henceforth, we examine the following problem, for a given $R_0 \geq 0$:
$\sup_{Y \in B(\Sigma)} \int u(W_0 - \Pi - X + Y) \, dP \quad \text{s.t.} \quad 0 \leq Y \leq X, \quad \int (X - Y) \, d(T \circ Q) \geq R_0 \quad (9)$

3.3. A Characterization of the Optimal Indemnity Schedule

For each $Z \in B^+(\Sigma)$, let $F_Z(t) = Q(\{s \in S : Z(s) \leq t\})$ denote the distribution function of $Z$ with respect to the probability measure $Q$, and let $F_Z^{-1}(t)$ be the left-continuous inverse of the distribution function $F_Z$ (that is, the quantile function of $Z$), defined by:
$F_Z^{-1}(t) = \inf\left\{z \in \mathbb{R}^+ : F_Z(z) \geq t\right\}, \quad \forall t \in [0, 1]$
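On an empirical distribution, this left-continuous inverse takes a simple closed form: it is the smallest order statistic whose empirical cdf value reaches $t$. A sketch (the sample values are arbitrary):

```python
import numpy as np

def left_inverse(sample, t):
    """Empirical left-continuous inverse: inf{z : F(z) >= t}, where F is the
    empirical distribution function of a nonnegative sample, for 0 < t <= 1."""
    z = np.sort(np.asarray(sample))
    n = len(z)
    # F(z[k]) = (k + 1) / n for the k-th order statistic, so the infimum is
    # the smallest order statistic whose cdf value reaches t.
    k = max(int(np.ceil(t * n)) - 1, 0)
    return z[k]

losses = [0.0, 1.0, 2.0, 5.0]
median = left_inverse(losses, 0.5)    # inf{z : F(z) >= 1/2}
```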
Definition 5. 
Denote by $AQuant$ the collection of all quantile functions $f$ of the form $F^{-1}$, where $F$ is the distribution function of some $Z \in B^+(\Sigma)$ such that $0 \leq Z \leq X$.
That is, $AQuant$ is the collection of all quantile functions $f$ that satisfy the following properties:
(1)
$f(z) \leq F_X^{-1}(z)$, for each $0 < z < 1$;
(2)
$f(z) \geq 0$, for each $0 < z < 1$.
Denoting by $Quant = \{f : (0, 1) \to \mathbb{R} \mid f \text{ is nondecreasing and left-continuous}\}$ the collection of all quantile functions, we can then write $AQuant$ as follows:
$AQuant = \left\{f \in Quant : 0 \leq f(z) \leq F_X^{-1}(z), \text{ for each } 0 < z < 1\right\}$
By Lebesgue’s decomposition theorem ([37], Th. 10.61), there exists a unique pair $(P_{ac}, P_s)$ of (nonnegative) finite measures on $(S, \Sigma)$ such that:
  • $P = P_{ac} + P_s$;
  • $P_{ac} \ll Q$ ($P_{ac}$ is absolutely continuous with respect to $Q$); and,
  • $P_s \perp Q$ ($P_s$ and $Q$ are mutually singular).
That is, for all $B \in \Sigma$, $P_{ac}(B) = 0$ whenever $Q(B) = 0$. Moreover, there exists some $A \in \Sigma$ such that $Q(S \setminus A) = P_s(A) = 0$, which then implies that $P_{ac}(S \setminus A) = 0$ and $Q(A) = 1$. Note also that for all $Z \in B(\Sigma)$, $\int Z \, dP = \int_A Z \, dP_{ac} + \int_{S \setminus A} Z \, dP_s$. Furthermore, by the Radon–Nikodým theorem ([43], Th. 4.2.2), there exists a $Q$-a.s. unique $\Sigma$-measurable and $Q$-integrable function $h : S \to [0, +\infty)$ such that $P_{ac}(C) = \int_C h \, dQ$, for all $C \in \Sigma$. Hence, for all $Z \in B(\Sigma)$, $\int Z \, dP = \int_A Z h \, dQ + \int_{S \setminus A} Z \, dP_s$. Furthermore, since $P_{ac}(S \setminus A) = 0$, it follows that $\int_{S \setminus A} Z \, dP_s = \int_{S \setminus A} Z \, dP$. Thus, for all $Z \in B(\Sigma)$, $\int Z \, dP = \int_A Z h \, dQ + \int_{S \setminus A} Z \, dP$.
Moreover, since $h : S \to [0, +\infty)$ is $\Sigma$-measurable and $Q$-integrable, there exists a Borel-measurable and $Q \circ X^{-1}$-integrable map $\phi : X(S) \to [0, +\infty)$ such that $h = \frac{dP_{ac}}{dQ} = \phi \circ X$. We will also make the following assumption, which can be interpreted as a kind of monotone likelihood ratio property.
Assumption 5. 
The $\Sigma$-measurable function $h = \phi \circ X = \frac{dP_{ac}}{dQ}$ is comonotonic with $X$, i.e., $\phi$ is nondecreasing.
Since $Q \circ X^{-1}$ is nonatomic (by Assumption 4), it follows that $F_X(X)$ has a uniform distribution over $(0, 1)$ ([44], Lemma A.21); that is, $Q(\{s \in S : F_X(X(s)) \leq t\}) = t$ for each $t \in (0, 1)$. Letting $U := F_X(X)$, it follows that $U$ is a random variable on the probability space $(S, \Sigma, Q)$ with a uniform distribution on $(0, 1)$. Consider the following quantile problem:
$\sup_f \int u\left(W_0 - \Pi - f(U)\right) \phi\left(F_X^{-1}(U)\right) dQ \quad \text{s.t.} \quad f \in AQuant, \quad \int T'(1 - U) f(U) \, dQ = R_0 \quad (12)$
The following theorem characterizes the solution of the insured’s problem (Problem (9)) in terms of the solution of the relatively easier quantile problem given in Problem (12), provided the previous assumptions hold. The proof is given in Appendix C.
Theorem 2. 
Under the previous assumptions, if f * is optimal for Problem (12), then the function:
$Y^* = \left(X - f^*(U)\right) \mathbf{1}_A + X \mathbf{1}_{S \setminus A}$
is optimal for Problem (9).
In particular, $Y^* = X - f^*(U)$, $Q$-a.s. That is, the set $E$ of states of the world $s$ such that $Y^*(s) \neq X(s) - f^*(U(s))$ has probability zero under the probability measure $Q$ (and hence, $\upsilon(E) = T \circ Q(E) = 0$). The optimal indemnity schedule will be seen by the insurer to be almost surely equal to the function $X - f^*(U)$.
Another immediate implication of Theorem 2 is that the collection of states of the world in which the optimal indemnity schedule is a full insurance rule is a set of states to which the insurer assigns a zero likelihood. On the set of all other states of the world, the optimal indemnity schedule deviates from the full insurance rule by the function $f^*(U)$.
Under the following two assumptions, it is possible to fully characterize the shape of an optimal insurance contract. This is done in Corollary 1.
Assumption 6. 
The $\Sigma$-measurable function $h = \phi \circ X = \frac{dP_{ac}}{dQ}$ is such that $\phi$ is left-continuous.
Assumption 7. 
The function $t \mapsto \frac{T'(1 - t)}{\phi\left(F_X^{-1}(t)\right)}$, defined on $t \in (0, 1) \setminus \{t : \phi(F_X^{-1}(t)) = 0\}$, is nondecreasing.
Assumption 7 is also a monotone likelihood ratio type assumption. Similar assumptions have been used in Jin and Zhou [45] in their study of portfolio choice under prospect theory, in He and Zhou [46] in their study of a portfolio choice problem under Yaari’s [22] dual theory of choice, in Jin and Zhou [47] in their study of greed and leverage within a portfolio choice problem under prospect theory and in Carlier and Dana [48] in their study of the demand for contingent claims under rank-dependent expected utility [21]. Furthermore, note that when both the insured and the insurer are Bayesian decision makers, that is, when the insurer does not distort probabilities, then Assumption 7 and Assumption 5 are equivalent.
When the previous assumptions hold, we can give an explicit characterization of an optimal indemnity schedule, as follows.
Corollary 1 (The Shape of an Optimal Indemnity Schedule). 
Under the previous assumptions, the function $Y^*$ defined below is an optimal solution for Problem (9):
$Y^* = \min\Big(X, \max\big(0, X - d_{T,\phi}\big)\Big)\mathbf{1}_A + X\mathbf{1}_{S \setminus A}$
where:
$d_{T,\phi} = W_0 - \Pi - (u')^{-1}\left(\dfrac{\lambda^* \, T'(1-U)}{\phi(X)}\right)$
$U$ is a random variable on the probability space $(S, \Sigma, Q)$ with a uniform distribution on $(0,1)$, and $\lambda^* \geq 0$ is chosen so that:
$\displaystyle\int T'(1-U)\,\max\Big(0, \min\Big(X, \, W_0 - \Pi - (u')^{-1}\big(\tfrac{\lambda^* T'(1-U)}{\phi(X)}\big)\Big)\Big)\,dQ = R_0$
The proof of Corollary 1 is given in Appendix D. Note that if Assumption 5 holds, then Assumption 6 is a weak assumption5.
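To make the characterization concrete, the following is a minimal numerical sketch (the code and all parameter values are illustrative assumptions, not part of the paper's analysis): it takes log utility $u(w) = \ln w$, a loss $X$ uniform on $[0, M]$, the special case $Q = P$ (so $\phi \equiv 1$), and the concave distortion $T(x) = 2x - x^2$, and locates the multiplier $\lambda^*$ by bisection so that the distorted expected retention equals $R_0$:

```python
# Illustrative only: u(w) = ln(w), X ~ Uniform[0, M], Q = P (phi == 1),
# T(x) = 2x - x^2 (concave, T(0) = 0, T(1) = 1). All numbers are assumptions.
W0, PI, M, R0 = 10.0, 1.0, 8.0, 2.0

T_prime = lambda x: 2.0 - 2.0 * x        # T'(x)
u_prime_inv = lambda y: 1.0 / y          # (u')^{-1}(y) = 1/y for u = ln
Fx_inv = lambda t: M * t                 # quantile function of X

def retained(t, lam):
    """Clipped retention quantile: max(0, min(F_X^{-1}(t), f*_lam(t)))."""
    f = W0 - PI - u_prime_inv(lam * T_prime(1.0 - t))  # f*_lam(t), phi == 1
    return max(0.0, min(Fx_inv(t), f))

def psi(lam, n=4000):
    """psi(lam) = integral over (0,1) of T'(1-t) * retained(t, lam), midpoint rule."""
    return sum(T_prime(1.0 - (i + 0.5) / n) * retained((i + 0.5) / n, lam)
               for i in range(n)) / n

# psi is continuous and nondecreasing in lambda, with psi near 0 for small lambda
# and psi approaching the distorted expectation of X for large lambda, so a
# bisection locates lambda* with psi(lambda*) = R0.
lo, hi = 1e-9, 1e6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if psi(mid) < R0 else (lo, mid)
lam_star = 0.5 * (lo + hi)

# State-contingent deductible at quantile level t (where it is positive):
d_T = lambda t: W0 - PI - u_prime_inv(lam_star * T_prime(1.0 - t))
```

Because $T'(1-t)$ varies with $t$, the computed deductible is genuinely state-contingent; with $T$ the identity it would collapse to a constant, in line with the discussion of Arrow's result below.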

3.4. A Special Case

Finally, we consider the special case where $Q = P$, in which the insurer's submodular capacity $\upsilon$ is a concave distortion of the insured's probability measure $P$, of the form $\upsilon = T \circ P$. In this case, the insured's problem becomes:
$\displaystyle\sup_{Y \in B(\Sigma)} \int u\big(W_0 - \Pi - X + Y\big)\,dP \quad \text{s.t.} \quad 0 \leq Y \leq X, \quad \int (X - Y)\,d(T \circ P) \geq R_0$
where the distortion function $T : [0,1] \to [0,1]$ is increasing, concave, twice differentiable and satisfies $T(0) = 0$ and $T(1) = 1$.
Since $P = Q$, we have:
  • $P = P_{ac} = Q$;
  • $h = \phi(X) = \frac{dP_{ac}}{dQ}$ is the constant function equal to one. Hence, Assumptions 5 and 6 trivially hold;
  • $A = S$, and hence, $S \setminus A = \emptyset$;
  • $U = F_X(X)$ is a random variable on the probability space $(S, \Sigma, P)$ with a uniform distribution on $(0,1)$.
Since, moreover, the random variable $V := 1 - U$ is also a random variable on the probability space $(S, \Sigma, P)$ with a uniform distribution on $(0,1)$, Corollary 1 then becomes:
Corollary 2. 
If Assumptions 2, 3 and 7 hold and if $P \circ X^{-1}$ is nonatomic, then the function $Y^*$ defined below is an optimal solution for Problem (13):
$Y^* = \min\Big(X, \max\big(0, X - d_T\big)\Big)$
where:
$d_T = W_0 - \Pi - (u')^{-1}\big(\lambda^* \, T'(V)\big)$
$V$ is a random variable on the probability space $(S, \Sigma, P)$ with a uniform distribution on $(0,1)$, and $\lambda^* \geq 0$ is chosen so that:
$\displaystyle\int T'(V)\,\max\Big(0, \min\Big(X, \, W_0 - \Pi - (u')^{-1}\big(\lambda^* T'(V)\big)\Big)\Big)\,dP = R_0$
Corollary 2 states that when the insurer's ambiguous beliefs are a concave distortion of the insured's non-ambiguous beliefs, then an indemnity schedule that solves the optimal insurance design problem given a minimum retention constraint is a state-contingent deductible schedule, with a state-contingent deductible given by:
$d_T = W_0 - \Pi - (u')^{-1}\big(\lambda^* \, T'(V)\big)$
for an appropriately-chosen $\lambda^* \geq 0$. This result is identical to Theorem 5.4 of AGP [14], which states that when the insurer's ambiguous beliefs are a concave distortion of the insured's non-ambiguous beliefs, then an indemnity schedule that solves the optimal insurance design problem given a premium constraint is a state-contingent deductible schedule, with a state-contingent deductible given by:
$d_T = W_0 - \Pi - (u')^{-1}\big(\lambda^* \, T'(V)\big)$
for an appropriately-chosen $\lambda^* \geq 0$.
Hence, when the insurer’s ambiguous beliefs are a concave distortion of the insured’s non-ambiguous beliefs, a problem of optimal insurance design with a minimum retention constraint is equivalent to a problem of optimal insurance design with a premium constraint. The optimal indemnity schedule is given by a state-contingent deductible schedule6, in which the state-contingent deductible d T depends on the state of the world only through the insurer’s distortion function T. Clearly, when the insurer does not distort probabilities, so that the function T is the identity function, one recovers Arrow’s result as a special case.

Acknowledgments

We thank Daniel Bauer, Enrico Biffis, Glenn Harrison, Stefan Trautmann, George Zanjani, two anonymous reviewers and the audiences at Georgia State University and the 2014 CEAR/MRIC Behavioural Insurance Workshop for their comments and suggestions.

Author Contributions

Both authors have contributed equally to all aspects of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. More on Capacities and Choquet Integration

Definition A1. A capacity υ on $(S, \Sigma)$ is continuous from above (respectively below) if for any sequence $\{A_n\}_{n \geq 1} \subseteq \Sigma$, such that $A_{n+1} \subseteq A_n$ (respectively $A_{n+1} \supseteq A_n$) for each n, it holds that:
$\lim_{n \to +\infty} \upsilon(A_n) = \upsilon\Big(\bigcap_{n=1}^{+\infty} A_n\Big) \quad \Big(\text{respectively } \lim_{n \to +\infty} \upsilon(A_n) = \upsilon\Big(\bigcup_{n=1}^{+\infty} A_n\Big)\Big)$
A capacity that is continuous both from above and below is said to be continuous.
For instance, if υ is a distorted probability measure of the form $T \circ P$, where T is a continuous function, then υ is a continuous capacity.
The Choquet integral with respect to a (countably additive) measure is the usual Lebesgue integral with respect to that measure ([41] p. 59). Unlike the Lebesgue integral, the Choquet integral is not an additive operator on $B(\Sigma)$. However, the Choquet integral is additive over comonotonic functions.
Proposition A1. 
Let υ be a capacity on $(S, \Sigma)$.
(1) 
If $\phi_1, \phi_2 \in B(\Sigma)$, then in general, $\int (\phi_1 + \phi_2)\,d\upsilon \neq \int \phi_1\,d\upsilon + \int \phi_2\,d\upsilon$.
(2) 
If $\phi_1, \phi_2 \in B(\Sigma)$ are comonotonic, then $\int (\phi_1 + \phi_2)\,d\upsilon = \int \phi_1\,d\upsilon + \int \phi_2\,d\upsilon$.
(3) 
If $\phi_1, \phi_2 \in B(\Sigma)$ are such that $\phi_1 \leq \phi_2$, then $\int \phi_1\,d\upsilon \leq \int \phi_2\,d\upsilon$.
(4) 
For all $\phi \in B(\Sigma)$ and all $c \geq 0$, $\int c\,\phi\,d\upsilon = c \int \phi\,d\upsilon$.
(5) 
If υ is submodular, then for any $\phi_1, \phi_2 \in B(\Sigma)$, $\int (\phi_1 + \phi_2)\,d\upsilon \leq \int \phi_1\,d\upsilon + \int \phi_2\,d\upsilon$.
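These properties can be checked numerically on a small finite state space. The sketch below is our own illustration (not from the paper): it computes the Choquet integral of a nonnegative function via the layer-cake formula $\int \varphi\,d\upsilon = \int_0^{\infty} \upsilon(\{\varphi \geq t\})\,dt$, taking $\upsilon = T \circ Q$ with the concave (hence submodular) distortion $T(x) = \sqrt{x}$ and $Q$ uniform on four states:

```python
import math

N_STATES = 4

def nu(A):
    """Capacity nu = T o Q with T(x) = sqrt(x) (concave) and Q uniform."""
    return math.sqrt(len(A) / N_STATES)

def choquet(phi):
    """Choquet integral of a nonnegative phi (list over states) w.r.t. nu:
    sum over ascending levels x_k of (x_k - x_{k-1}) * nu({phi >= x_k})."""
    total, prev = 0.0, 0.0
    for x in sorted(set(phi)):
        if x > 0:
            total += (x - prev) * nu({s for s, v in enumerate(phi) if v >= x})
            prev = x
    return total

phi1 = [0.0, 1.0, 2.0, 3.0]
phi2 = [0.0, 2.0, 4.0, 6.0]     # comonotonic with phi1
phi3 = [3.0, 2.0, 1.0, 0.0]     # anti-comonotonic with phi1
add = lambda f, g: [a + b for a, b in zip(f, g)]
```

On these data, additivity holds for the comonotonic pair (phi1, phi2), while the non-comonotonic pair (phi1, phi3) exhibits the strict subadditivity allowed by Property (5).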

Appendix B. Rearrangements and Supermodularity

Here, the idea of an equimeasurable rearrangement of a random variable with respect to another random variable is discussed. All proofs, additional results and references to the literature may be found in Ghossoub [51,52].

Appendix B.1. The Nondecreasing Rearrangement

Consider the setting of Section 2.1, and let ζ be the probability law of X, defined by $\zeta(B) := P \circ X^{-1}(B) = P\big(\{s \in S : X(s) \in B\}\big)$ for any Borel subset B of $\mathbb{R}$.
Definition A2. 
For any Borel-measurable map $I : [0, M] \to \mathbb{R}$, define the distribution function of I as the map $\zeta_I : \mathbb{R} \to [0, 1]$ defined by:
$\zeta_I(t) := \zeta\big(\{x \in [0, M] : I(x) \leq t\}\big)$
Then, $\zeta_I$ is a nondecreasing right-continuous function.
Definition A3. 
Let $I : [0, M] \to [0, M]$ be any Borel-measurable map, and define the function $\tilde{I} : [0, M] \to \mathbb{R}$ by:
$\tilde{I}(t) := \inf\big\{z \in \mathbb{R}^+ \,\big|\, \zeta_I(z) \geq \zeta\big([0, t]\big)\big\}$
The following proposition gives some useful properties of the map I ˜ defined above.
Proposition A2. 
Let $I : [0, M] \to [0, M]$ be any Borel-measurable map, and let $\tilde{I} : [0, M] \to \mathbb{R}$ be defined as in Equation (A2). Then, the following hold:
(1) 
$\tilde{I}$ is left-continuous, nondecreasing and Borel-measurable;
(2) 
$\tilde{I}(0) = 0$ and $\tilde{I}(M) \leq M$. Therefore, $\tilde{I}\big([0, M]\big) \subseteq [0, M]$;
(3) 
If $I_1, I_2 : [0, M] \to [0, M]$ are such that $I_1 \leq I_2$, ζ-a.s., then $\tilde{I}_1 \leq \tilde{I}_2$;
(4) 
$\tilde{I}$ is ζ-equimeasurable with I, in the sense that for any Borel set B,
$\zeta\big(\{t \in [0, M] : I(t) \in B\}\big) = \zeta\big(\{t \in [0, M] : \tilde{I}(t) \in B\}\big)$
(5) 
If $\bar{I} : [0, M] \to \mathbb{R}^+$ is another nondecreasing, Borel-measurable map which is ζ-equimeasurable with I, then $\bar{I} = \tilde{I}$, ζ-a.s.
$\tilde{I}$ is called the nondecreasing ζ-rearrangement of I. Now, define $Y := I(X)$ and $\tilde{Y} := \tilde{I}(X)$. Then:
(1)
$Y, \tilde{Y} \in B^+(\Sigma)$, since I and $\tilde{I}$ are Borel-measurable mappings of $[0, M]$ into itself;
(2)
$\tilde{Y}$ is a nondecreasing function of X:
$X(s) \leq X(s')$ implies $\tilde{Y}(s) \leq \tilde{Y}(s')$, for all $s, s' \in S$; and
(3)
Y and $\tilde{Y}$ have the same distribution under P (i.e., they are P-equimeasurable):
$P\big(\{s \in S : Y(s) \leq \alpha\}\big) = P\big(\{s \in S : \tilde{Y}(s) \leq \alpha\}\big)$, for any $\alpha \in [0, M]$.
Call $\tilde{Y}$ a nondecreasing P-rearrangement of Y with respect to X, and denote it by $\tilde{Y}_P$. Then, $\tilde{Y}_P$ is P-a.s. unique. Note also that if $Y_1$ and $Y_2$ are P-equimeasurable, then for any Borel-measurable function ψ, $\psi(Y_1)$ is P-integrable if and only if $\psi(Y_2)$ is P-integrable, in which case we have $\int \psi(Y_1)\,dP = \int \psi(Y_2)\,dP$.
Lemma A1. 
Fix $Y \in B^+(\Sigma)$, and let $\tilde{Y}_P$ denote the nondecreasing P-rearrangement of Y with respect to X. If $0 \leq Y \leq X$, P-a.s., then $0 \leq \tilde{Y}_P \leq X$.
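On a finite state space with uniform probability and distinct values of X, the nondecreasing rearrangement admits a very concrete description, which the following sketch implements (an illustration under those simplifying assumptions, not the paper's general construction): sort the values of Y and reassign them to the states in order of increasing X.

```python
def nondecreasing_rearrangement(X, Y):
    """Assign the sorted values of Y to the states ranked by increasing X.
    On a finite uniform space with distinct X-values, this produces a function
    equimeasurable with Y and comonotonic with X."""
    order = sorted(range(len(X)), key=lambda s: X[s])   # states by increasing X
    y_sorted = sorted(Y)
    Yt = [0.0] * len(X)
    for rank, s in enumerate(order):
        Yt[s] = y_sorted[rank]
    return Yt

X = [5.0, 1.0, 3.0, 2.0, 4.0]
Y = [2.0, 0.5, 3.0, 1.0, 0.0]   # satisfies 0 <= Y <= X state by state
Yt = nondecreasing_rearrangement(X, Y)
```

The output has the same distribution as Y, is a nondecreasing function of X, and, as Lemma A1 asserts, still satisfies $0 \leq \tilde{Y} \leq X$ state by state.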

Appendix B.2. Supermodularity

Definition A4. 
A function $L : \mathbb{R}^2 \to \mathbb{R}$ is supermodular if for any $x_1, x_2, y_1, y_2 \in \mathbb{R}$ with $x_1 \leq x_2$ and $y_1 \leq y_2$, one has:
$L(x_2, y_2) + L(x_1, y_1) \geq L(x_1, y_2) + L(x_2, y_1)$
A function $L : \mathbb{R}^2 \to \mathbb{R}$ is called strictly supermodular if for any $x_1, x_2, y_1, y_2 \in \mathbb{R}$ with $x_1 < x_2$ and $y_1 < y_2$, one has:
$L(x_2, y_2) + L(x_1, y_1) > L(x_1, y_2) + L(x_2, y_1)$
Lemma A2. 
A function $L : \mathbb{R}^2 \to \mathbb{R}$ is supermodular (respectively strictly supermodular) if and only if the function $\eta(y) := L(x + h, y) - L(x, y)$ is nondecreasing (respectively increasing) on $\mathbb{R}$, for any $x \in \mathbb{R}$ and $h \geq 0$ (respectively $h > 0$).
Example A1. If $g : \mathbb{R} \to \mathbb{R}$ is concave and $a \in \mathbb{R}$, then the function $L_1 : \mathbb{R}^2 \to \mathbb{R}$ defined by $L_1(x, y) = g(a - x + y)$ is supermodular. If, moreover, g is strictly concave, then $L_1$ is strictly supermodular.
Lemma A3 (Hardy–Littlewood). 
Fix $Y \in B^+(\Sigma)$, and let $\tilde{Y}_P$ denote the nondecreasing P-rearrangement of Y with respect to X. If L is supermodular, then (assuming integrability) we have:
$\int L(X, Y)\,dP \leq \int L(X, \tilde{Y}_P)\,dP$
Moreover, if L is strictly supermodular, then equality holds if and only if $Y = \tilde{Y}_P$, P-a.s.
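The Hardy–Littlewood inequality can be checked directly on a small discrete example (uniform P on five states; the numbers are our own illustration), using the strictly supermodular function $L(x, y) = -(a - x + y)^2$ from Example A1 (g strictly concave):

```python
def rearranged(X, Y):
    """Nondecreasing rearrangement of Y w.r.t. X on a finite uniform space with
    distinct X-values: sorted Y-values assigned in order of increasing X."""
    order = sorted(range(len(X)), key=lambda s: X[s])
    y_sorted = sorted(Y)
    Yt = [0.0] * len(X)
    for rank, s in enumerate(order):
        Yt[s] = y_sorted[rank]
    return Yt

a = 6.0
L = lambda x, y: -((a - x + y) ** 2)   # supermodular: g(z) = -z^2 is concave

X = [5.0, 1.0, 3.0, 2.0, 4.0]
Y = [2.0, 0.5, 3.0, 1.0, 0.0]
Yt = rearranged(X, Y)

mean = lambda v: sum(v) / len(v)
lhs = mean([L(x, y) for x, y in zip(X, Y)])    # expectation of L(X, Y)
rhs = mean([L(x, y) for x, y in zip(X, Yt)])   # expectation of L(X, Y-tilde)
```

As the lemma predicts, the rearranged pair attains a weakly larger expectation.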

Appendix C. Proof of Theorem 2

Denote by $\mathcal{F}_{SB}$ the feasibility set for Problem (9):
$\mathcal{F}_{SB} = \big\{Y \in B(\Sigma) : 0 \leq Y \leq X \text{ and } \int (X - Y)\,d\upsilon \geq R_0\big\}$
Let $\mathcal{F}_{SB}^{=}$ be defined as:
$\mathcal{F}_{SB}^{=} = \big\{Y \in B(\Sigma) : 0 \leq Y \leq X \text{ and } \int (X - Y)\,d\upsilon = R_0\big\}$
To rule out trivial situations, we will make the following assumption:
Assumption A1. 
$\mathcal{F}_{SB}^{=} \neq \emptyset$.
Assumption A1 then implies that $\mathcal{F}_{SB} \neq \emptyset$. By monotonicity of the Choquet integral, the non-emptiness of $\mathcal{F}_{SB}$ implies that:
$\int X\,d\upsilon \geq R_0$

Appendix C.1. “Splitting”

Recall that by Lebesgue's decomposition theorem ([37], Th. 10.61), there exists a unique pair $(P_{ac}, P_s)$ of (nonnegative) finite measures on $(S, \Sigma)$, such that $P = P_{ac} + P_s$, $P_{ac} \ll Q$ and $P_s \perp Q$. That is, for all $B \in \Sigma$ with $Q(B) = 0$, we have $P_{ac}(B) = 0$, and there is some $A \in \Sigma$, such that $Q(S \setminus A) = P_s(A) = 0$. It then also follows that $P_{ac}(S \setminus A) = 0$ and $Q(A) = 1$. In the following, the Σ-measurable set A on which Q is concentrated is assumed to be fixed all throughout. Consider now the following two problems:
For a given $\beta \geq R_0$, $\quad \sup_{Y \in B(\Sigma)} \int_A u\big(W_0 - \Pi - X + Y\big)\,dP \quad \text{s.t.} \quad 0 \leq Y \leq X, \quad \int (X - Y)\,d(T \circ Q) = \beta$
and:
$\sup_{Y \in B(\Sigma)} \int_{S \setminus A} u\big(W_0 - \Pi - X + Y\big)\,dP \quad \text{s.t.} \quad 0 \leq Y \mathbf{1}_{S \setminus A} \leq X \mathbf{1}_{S \setminus A}, \quad \int_{S \setminus A} (X - Y)\,d(T \circ Q) = 0$
Remark A1. 
Since the function u is continuous (Assumption 2), it is bounded on any closed and bounded subset of R . Therefore, since the range of X is closed and bounded, the supremum of each of the above two problems is finite when their feasibility sets are non-empty. Now, the function X is feasible for Problem (A9), and so, Problem (A9) has a non-empty feasibility set.
Definition A5. 
For a given $\beta \geq R_0$, let $\Theta_{A, \beta}$ be the feasibility set of Problem (A8) with parameter β. That is,
$\Theta_{A, \beta} := \big\{Y \in B^+(\Sigma) : 0 \leq Y \leq X, \ \int (X - Y)\,d(T \circ Q) = \beta\big\}$
Denote by Γ the collection of all β for which the feasibility set $\Theta_{A, \beta}$ is non-empty:
Definition A6. 
Let $\Gamma := \big\{\beta \geq R_0 : \Theta_{A, \beta} \neq \emptyset\big\}$.
Lemma A4. 
$\Gamma \neq \emptyset$.
Proof. 
By Assumption A1, $R_0 \in \Gamma$. ☐
Lemma A5. 
X is optimal for Problem (A9).
Proof. 
The feasibility of X for Problem (A9) is clear. To show optimality, let Y be any feasible solution for Problem (A9). Then, for each $s \in S \setminus A$, $Y(s) \leq X(s)$. Therefore, since u is increasing, we have $u\big(W_0 - \Pi - X(s) + Y(s)\big) \leq u\big(W_0 - \Pi - X(s) + X(s)\big) = u(W_0 - \Pi)$, for each $s \in S \setminus A$. Thus,
$\int_{S \setminus A} u\big(W_0 - \Pi - X + Y\big)\,dP \leq \int_{S \setminus A} u\big(W_0 - \Pi - X + X\big)\,dP = u(W_0 - \Pi)\,P(S \setminus A)$
Remark A2. 
Since $Q(S \setminus A) = 0$ and $T(0) = 0$, it follows that $T \circ Q(S \setminus A) = 0$, and so, $\int \mathbf{1}_{S \setminus A}\,d(T \circ Q) = T \circ Q(S \setminus A) = 0$, by Proposition A1. Therefore, for any $Z \in B^+(\Sigma)$, it follows from the monotonicity and positive homogeneity of the Choquet integral (Proposition A1) that:
$0 \leq \int_{S \setminus A} Z\,d(T \circ Q) = \int Z \mathbf{1}_{S \setminus A}\,d(T \circ Q) \leq \int \|Z\|_s \mathbf{1}_{S \setminus A}\,d(T \circ Q) = \|Z\|_s \int \mathbf{1}_{S \setminus A}\,d(T \circ Q) = 0$
and so, $\int_{S \setminus A} Z\,d(T \circ Q) = 0$. Consequently, it follows from Proposition A1 that for any $Z \in B^+(\Sigma)$,
$\int Z\,d(T \circ Q) \leq \int Z \mathbf{1}_A\,d(T \circ Q) = \int_A Z\,d(T \circ Q)$
Now, consider the following problem:
$\sup_{\beta \in \Gamma} \big\{F_A^*(\beta)\big\}$, where $F_A^*(\beta)$ is the supremum of Problem (A8), for a fixed $\beta \in \Gamma$
Lemma A6. 
If $\beta^*$ is optimal for Problem (A10) and if $Y_1^*$ is optimal for Problem (A8) with parameter $\beta^*$, then $Y^* := Y_1^* \mathbf{1}_A + X \mathbf{1}_{S \setminus A}$ is optimal for Problem (9).
Proof. 
By the feasibility of $Y_1^*$ for Problem (A8) with parameter $\beta^*$, we have $0 \leq Y_1^* \leq X$ and $\int (X - Y_1^*)\,d(T \circ Q) = \beta^*$. Therefore, $0 \leq Y^* \leq X$, and:
$\int (X - Y^*)\,d(T \circ Q) = \int \big((X - Y_1^*)\mathbf{1}_A + (X - X)\mathbf{1}_{S \setminus A}\big)\,d(T \circ Q) = \int_A (X - Y_1^*)\,d(T \circ Q) \geq \int (X - Y_1^*)\,d(T \circ Q) = \beta^* \geq R_0$
where the inequality $\int_A (X - Y_1^*)\,d(T \circ Q) \geq \int (X - Y_1^*)\,d(T \circ Q)$ follows from the same argument as in Remark A2. Hence, $Y^*$ is feasible for Problem (9). To show the optimality of $Y^*$ for Problem (9), let $\bar{Y}$ be any other feasible function for Problem (9), and define α by $\alpha = \int (X - \bar{Y})\,d(T \circ Q)$. Then, $\alpha \geq R_0$, and so, $\bar{Y}$ is feasible for Problem (A8) with parameter α; and α is feasible for Problem (A10). Hence:
$F_A^*(\alpha) \geq \int_A u\big(W_0 - \Pi - X + \bar{Y}\big)\,dP$
Now, since $\beta^*$ is optimal for Problem (A10), it follows that $F_A^*(\beta^*) \geq F_A^*(\alpha)$. Moreover, $\bar{Y}$ is feasible for Problem (A9) (since $0 \leq \bar{Y} \leq X$, and so, $\int_{S \setminus A} (X - \bar{Y})\,d(T \circ Q) = 0$ by Remark A2). Thus,
$F_A^*(\beta^*) + u(W_0 - \Pi)\,P(S \setminus A) \geq F_A^*(\alpha) + u(W_0 - \Pi)\,P(S \setminus A) \geq \int_A u\big(W_0 - \Pi - X + \bar{Y}\big)\,dP + u(W_0 - \Pi)\,P(S \setminus A) \geq \int_A u\big(W_0 - \Pi - X + \bar{Y}\big)\,dP + \int_{S \setminus A} u\big(W_0 - \Pi - X + \bar{Y}\big)\,dP = \int u\big(W_0 - \Pi - X + \bar{Y}\big)\,dP$
However, $F_A^*(\beta^*) = \int_A u\big(W_0 - \Pi - X + Y_1^*\big)\,dP$. Therefore,
$\int u\big(W_0 - \Pi - X + Y^*\big)\,dP = F_A^*(\beta^*) + u(W_0 - \Pi)\,P(S \setminus A) \geq \int u\big(W_0 - \Pi - X + \bar{Y}\big)\,dP.$
Hence, Y * is optimal for Problem (9). ☐
Lemma A7. 
R 0 is optimal for Problem (A10).
Proof. 
The feasibility of $R_0$ for Problem (A10) follows from Assumption A1. The optimality of $R_0$ for Problem (A10) follows from the monotonicity of the utility function u appearing in Problem (A8) and of the Choquet integral, which imply that the function $\beta \mapsto F_A^*(\beta)$ is nonincreasing. ☐
By Lemma A7, we can restrict ourselves to solving Problem (A8) with parameter β = R 0 . That is, Lemmata A6 and A7 imply the following result.
Lemma A8. 
Consider the problem:
$\sup_{Y \in B(\Sigma)} \int_A u\big(W_0 - \Pi - X + Y\big)\,dP \quad \text{s.t.} \quad 0 \leq Y \leq X, \quad \int (X - Y)\,d(T \circ Q) = R_0$
If $Y_1^*$ is optimal for Problem (A11), then $Y^* := Y_1^* \mathbf{1}_A + X \mathbf{1}_{S \setminus A}$ is optimal for Problem (9).

Appendix C.2. Solving Problem (A11)

Recall that for all $Z \in B(\Sigma)$, $\int Z\,dP = \int_A Z h\,dQ + \int_{S \setminus A} Z\,dP$, where $h = \frac{dP_{ac}}{dQ}$ is the Radon–Nikodým derivative of $P_{ac}$ with respect to Q. Moreover, by definition of the set $A \in \Sigma$, we have $Q(S \setminus A) = P_s(A) = 0$. Therefore, $\int_A Z h\,dQ = \int Z h\,dQ$, for each $Z \in B(\Sigma)$. Hence, we can rewrite Problem (A11) (recalling that $h = \phi(X)$) as the following problem:
$\sup_{Y \in B(\Sigma)} \int u\big(W_0 - \Pi - X + Y\big)\,\phi(X)\,dQ \quad \text{s.t.} \quad 0 \leq Y \leq X, \quad \int (X - Y)\,d(T \circ Q) = R_0$
We then obtain the following result immediately:
Lemma A9. 
If $Y_1^*$ is optimal for Problem (A12), then it is optimal for Problem (A11), and hence, the function $Y^* := Y_1^* \mathbf{1}_A + X \mathbf{1}_{S \setminus A}$ is optimal for Problem (9).
Now, consider the following problem:
$\sup_{Z \in B(\Sigma)} \int u\big(W_0 - \Pi - Z\big)\,\phi(X)\,dQ \quad \text{s.t.} \quad 0 \leq Z \leq X, \quad \int Z\,d(T \circ Q) = R_0 = \int_0^{+\infty} T\big(Q(\{s \in S : Z(s) \geq t\})\big)\,dt$
Lemma A10. 
If $Z^*$ is optimal for Problem (A13), then $Y^* := X - Z^*$ is optimal for Problem (A12). Consequently, the function $(X - Z^*)\mathbf{1}_A + X \mathbf{1}_{S \setminus A}$ is optimal for Problem (9).
Proof. 
Suppose that $Z^*$ is optimal for Problem (A13), and define $Y^* := X - Z^*$. Then, $Y^* \in B(\Sigma)$. Moreover, since $0 \leq Z^* \leq X$, it follows that $0 \leq Y^* \leq X$. Now,
$\int (X - Y^*)\,d(T \circ Q) = \int \big(X - (X - Z^*)\big)\,d(T \circ Q) = \int Z^*\,d(T \circ Q) = R_0$
and so, $Y^*$ is feasible for Problem (A12). To show optimality of $Y^*$ for Problem (A12), suppose, by way of contradiction, that $\bar{Y} \neq Y^*$ is feasible for Problem (A12) and:
$\int u\big(W_0 - \Pi - X + \bar{Y}\big)\,h\,dQ > \int u\big(W_0 - \Pi - X + Y^*\big)\,h\,dQ$
that is, with $\bar{Z} := X - \bar{Y}$, we have:
$\int u\big(W_0 - \Pi - \bar{Z}\big)\,h\,dQ > \int u\big(W_0 - \Pi - Z^*\big)\,h\,dQ$
Now, since $0 \leq \bar{Y} \leq X$ and $\int (X - \bar{Y})\,d(T \circ Q) = R_0$, $\bar{Z}$ is feasible for Problem (A13), hence contradicting the optimality of $Z^*$ for Problem (A13). Thus, $Y^* := X - Z^*$ is optimal for Problem (A12). The rest follows from Lemma A9. ☐
Definition A7. 
If $Z_1, Z_2 \in B^+(\Sigma)$ are feasible for Problem (A13), we will say that $Z_2$ is an improvement of $Z_1$ (or is improving) when the following hold:
(1)
$\int u\big(W_0 - \Pi - Z_2\big)\,h\,dQ \geq \int u\big(W_0 - \Pi - Z_1\big)\,h\,dQ$; and,
(2)
$\int Z_2\,d(T \circ Q) = \int Z_1\,d(T \circ Q)$.
The next result shows that for any feasible claim for Problem (A13), there is another feasible claim for Problem (A13), which is comonotonic with X and improving.
Lemma A11. 
If Z is feasible for Problem (A13) and if $\tilde{Z}$ is the nondecreasing Q-rearrangement of Z with respect to X, then $\tilde{Z}$ is feasible for Problem (A13), comonotonic with X and an improvement of Z.
Proof. 
Let Z be feasible for Problem (A13), and note that by Assumption 5, the map $\xi(X, Z) := u\big(W_0 - \Pi - Z\big)\,\phi(X)$ is supermodular (see Example A1). Let $\tilde{Z}$ denote the nondecreasing Q-rearrangement of Z with respect to X. Then, by Lemma A1 and by equimeasurability of Z and $\tilde{Z}$, the function $\tilde{Z}$ is feasible for Problem (A13). Furthermore, by Lemma A3 and by the supermodularity of $\xi(X, Z)$, it follows that $\tilde{Z}$ is an improvement of Z. ☐

Appendix C.3. Quantile Reformulation

Let $Z \in B^+(\Sigma)$ be feasible for Problem (A13); let $F_Z(t) = Q\big(\{s \in S : Z(s) \leq t\}\big)$ denote the distribution function of Z with respect to the probability measure Q; and let $F_X(t) = Q\big(\{s \in S : X(s) \leq t\}\big)$ denote the distribution function of X with respect to the probability measure Q. Let $F_Z^{-1}(t)$ be the left-continuous inverse of the distribution function $F_Z$ (that is, the quantile function of Z), defined by:
$F_Z^{-1}(t) = \inf\big\{z \in \mathbb{R}^+ : F_Z(z) \geq t\big\}, \quad t \in [0, 1]$
Let $\tilde{Z}$ denote the nondecreasing Q-rearrangement of Z with respect to X. Since $Z \in B^+(\Sigma)$, it can be written as $\psi(X)$ for some nonnegative, Borel-measurable and bounded map ψ on $X(S)$. Moreover, since $0 \leq Z \leq X$, ψ is a mapping of $[0, M]$ into $[0, M]$. Let $\zeta := Q \circ X^{-1}$ be the image measure of Q under X. By Assumption 4 (1), ζ is nonatomic. We can then define the mapping $\tilde{\psi} : [0, M] \to [0, M]$ as in Appendix B (see Equation (A2)) to be the nondecreasing ζ-rearrangement of ψ, that is,
$\tilde{\psi}(t) := \inf\big\{z \in \mathbb{R}^+ : \zeta\big(\{x \in [0, M] : \psi(x) \leq z\}\big) \geq \zeta\big([0, t]\big)\big\}$
Then, as in Appendix B, $\tilde{Z} = \tilde{\psi}(X)$. Therefore, for each $s_0 \in S$,
$\tilde{Z}(s_0) = \tilde{\psi}\big(X(s_0)\big) = \inf\big\{z \in \mathbb{R}^+ : \zeta\big(\{x \in [0, M] : \psi(x) \leq z\}\big) \geq \zeta\big([0, X(s_0)]\big)\big\}$
However, for each $s_0 \in S$,
$\zeta\big([0, X(s_0)]\big) = Q \circ X^{-1}\big([0, X(s_0)]\big) = F_X\big(X(s_0)\big)$
Moreover,
$\zeta\big(\{x \in [0, M] : \psi(x) \leq z\}\big) = Q \circ X^{-1}\big(\{x \in [0, M] : \psi(x) \leq z\}\big) = Q\big(\{s \in S : \psi(X(s)) \leq z\}\big) = F_Z(z)$
Consequently, for each $s_0 \in S$,
$\tilde{Z}(s_0) = \inf\big\{z \in \mathbb{R}^+ : F_Z(z) \geq F_X\big(X(s_0)\big)\big\} = F_Z^{-1}\big(F_X(X(s_0))\big)$
That is,
$\tilde{Z} = F_Z^{-1}\big(F_X(X)\big) \qquad \text{(A14)}$
where $F_Z^{-1}$ is the left-continuous inverse of $F_Z$, as defined in Equation (10).
Hence, by Lemma A11 and Equation (A14), we can restrict ourselves to finding a solution to Problem (A13) of the form $F^{-1}\big(F_X(X)\big)$, where F is the distribution function of a function $Z \in B^+(\Sigma)$, such that $0 \leq Z \leq X$ and $\int Z\,d(T \circ Q) = R_0$. Moreover, since X is a nondecreasing function of X and Q-equimeasurable with X, it follows from the Q-a.s. uniqueness of the equimeasurable nondecreasing Q-rearrangement (see Appendix B) that $X = F_X^{-1}\big(F_X(X)\big)$, Q-a.s. (see also [44], Lemma A.21). Thus, for any $Z \in B^+(\Sigma)$,
$\int u\Big(W_0 - \Pi - F_Z^{-1}\big(F_X(X)\big)\Big)\,\phi\Big(F_X^{-1}\big(F_X(X)\big)\Big)\,dQ = \int u\big(W_0 - \Pi - \tilde{Z}\big)\,\phi(X)\,dQ \geq \int u\big(W_0 - \Pi - Z\big)\,\phi(X)\,dQ$
where the inequality follows from the proof of Lemma A11. Moreover, since $\zeta = Q \circ X^{-1}$ is nonatomic (by Assumption 4), it follows that $F_X(X)$ has a uniform distribution over $(0, 1)$ ([44], Lemma A.21), that is, $Q\big(\{s \in S : F_X(X)(s) \leq t\}\big) = t$ for each $t \in (0, 1)$. Finally, letting $U := F_X(X)$, we have:
$\int F^{-1}(U)\,d(T \circ Q) = \int_0^{+\infty} T\big(Q(\{s \in S : F^{-1}(U)(s) \geq t\})\big)\,dt = \int_0^{+\infty} T\big(Q(\{s \in S : F^{-1}(U)(s) > t\})\big)\,dt = \int_0^{+\infty} T\big(1 - F(t)\big)\,dt = \int_0^1 T'(1 - t)\,F^{-1}(t)\,dt = \int T'(1 - U)\,F^{-1}(U)\,dQ$
where the third and last equalities above follow from the fact that U has a uniform distribution over 0 , 1 and where the second-to-last equality follows from a standard argument7.
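The chain of equalities can also be verified numerically. The sketch below uses the illustrative choices $T(x) = 2x - x^2$ and quantile function $F^{-1}(t) = t^2$ (our assumptions, not the paper's), and compares the Choquet integral $\int_0^{\infty} T(1 - F(t))\,dt$ with the quantile-side expression $\int_0^1 T'(1 - t)\,F^{-1}(t)\,dt$ by midpoint quadrature:

```python
T = lambda x: 2.0 * x - x * x          # concave distortion, T(0) = 0, T(1) = 1
T_prime = lambda x: 2.0 - 2.0 * x
q = lambda t: t * t                    # quantile function F^{-1}(t), nondecreasing
q_inv = lambda z: z ** 0.5             # its inverse, so Q{F^{-1}(U) > z} = 1 - q_inv(z)

n = 20000
mid = [(i + 0.5) / n for i in range(n)]
# Choquet integral of F^{-1}(U) w.r.t. T o Q: integral over z in (0, q(1)) of T(1 - q_inv(z))
lhs = sum(T(1.0 - q_inv(z)) for z in mid) / n
# Quantile-side expression: integral over t in (0, 1) of T'(1 - t) * F^{-1}(t)
rhs = sum(T_prime(1.0 - t) * q(t) for t in mid) / n
```

With these choices both sides evaluate to 1/2, confirming the identity on this example.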
Now, recall from Definition 5 that $\mathcal{A}_{Quant}$, given in Equation (11), is the collection of all admissible quantile functions, that is, the collection of all functions f of the form $F^{-1}$, where F is the distribution function of a function $Z \in B^+(\Sigma)$ such that $0 \leq Z \leq X$, and consider the following problem:
$\sup_{f} \int u\big(W_0 - \Pi - f(U)\big)\,\phi\big(F_X^{-1}(U)\big)\,dQ \quad \text{s.t.} \quad f \in \mathcal{A}_{Quant}, \quad \int T'(1 - U)\,f(U)\,dQ = R_0$
Lemma A12. 
If $f^*$ is optimal for Problem (A15), then the function $f^*(U)$ is optimal for Problem (A13), where $U := F_X(X)$. Moreover, $X - f^*(U)$ is optimal for Problem (A12). Consequently, the function $\big(X - f^*(U)\big)\mathbf{1}_A + X \mathbf{1}_{S \setminus A}$ is optimal for Problem (9).
Proof. 
Suppose that $f^* \in \mathcal{A}_{Quant}$ is optimal for Problem (A15), and let $Z^* \in B^+(\Sigma)$ be a corresponding function. That is, $f^*$ is the quantile function of $Z^*$ and $0 \leq Z^* \leq X$. Let $\tilde{Z}^* := f^*(U)$. Then, $\tilde{Z}^*$ is the equimeasurable nondecreasing Q-rearrangement of $Z^*$ with respect to X, and so, $0 \leq \tilde{Z}^* \leq X$ by Lemma A1. Moreover,
$R_0 = \int T'(1 - U)\,f^*(U)\,dQ = \int f^*(U)\,d(T \circ Q) = \int \tilde{Z}^*\,d(T \circ Q) = \int_0^{+\infty} T\big(Q(\{s \in S : \tilde{Z}^*(s) \geq t\})\big)\,dt = \int_0^{+\infty} T\big(Q(\{s \in S : Z^*(s) \geq t\})\big)\,dt = \int Z^*\,d(T \circ Q)$
where the second-to-last equality follows from the Q-equimeasurability of $Z^*$ and $\tilde{Z}^*$. Therefore, $\tilde{Z}^* = f^*(U)$ is feasible for Problem (A13). To show optimality, let Z be any feasible solution for Problem (A13), and let F be the distribution function of Z. Then, by Lemma A11, the function $\tilde{Z} := F^{-1}(U)$ is feasible for Problem (A13), comonotonic with X and an improvement of Z. Moreover, $\tilde{Z}$ also has F as a distribution function. To show the optimality of $\tilde{Z}^* = f^*(U)$ for Problem (A13), it remains to show that:
$\int u\big(W_0 - \Pi - \tilde{Z}^*\big)\,\phi(X)\,dQ \geq \int u\big(W_0 - \Pi - \tilde{Z}\big)\,\phi(X)\,dQ$
Now, let $f := F^{-1}$, so that $\tilde{Z} = f(U)$. Since $\tilde{Z}$ is feasible for Problem (A13), we have:
$R_0 = \int \tilde{Z}\,d(T \circ Q) = \int F^{-1}(U)\,d(T \circ Q) = \int_0^1 T'(1 - t)\,F^{-1}(t)\,dt = \int T'(1 - U)\,f(U)\,dQ$
Hence, f is feasible for Problem (A15). Since $f^*$ is optimal for Problem (A15), we have:
$\int u\big(W_0 - \Pi - f^*(U)\big)\,\phi\big(F_X^{-1}(U)\big)\,dQ \geq \int u\big(W_0 - \Pi - f(U)\big)\,\phi\big(F_X^{-1}(U)\big)\,dQ$
Moreover, since $X = F_X^{-1}(U)$, Q-a.s., we have:
$\int u\big(W_0 - \Pi - \tilde{Z}^*\big)\,\phi(X)\,dQ \geq \int u\big(W_0 - \Pi - \tilde{Z}\big)\,\phi(X)\,dQ$
Therefore, $\tilde{Z}^* = f^*(U)$ is optimal for Problem (A13). Hence, by Lemma A10, $Y^* := X - \tilde{Z}^* = X - f^*(U)$ is optimal for Problem (A12). Finally, by Lemma A10, the function $\big(X - f^*(U)\big)\mathbf{1}_A + X \mathbf{1}_{S \setminus A}$ is optimal for Problem (9). ☐
Lemma A12 completes the proof of Theorem 2.

Appendix D. Proof of Corollary 1

Corollary A1. 
Under the assumptions of Corollary 1, an optimal solution $Y^*$ for Problem (9) takes the following form:
$Y^* = \Big(X - \max\big(0, \min\big(F_X^{-1}(U),\, f_{\lambda^*}^{*}(U)\big)\big)\Big)\mathbf{1}_A + X \mathbf{1}_{S \setminus A}$
where for each $t \in (0, 1) \setminus \big\{t : \phi\big(F_X^{-1}(t)\big) = 0\big\}$,
$f_{\lambda^*}^{*}(t) = W_0 - \Pi - (u')^{-1}\left(\dfrac{\lambda^* \, T'(1 - t)}{\phi\big(F_X^{-1}(t)\big)}\right)$
and $\lambda^*$ is chosen so that:
$\displaystyle\int_0^1 T'(1 - t)\,\max\big(0, \min\big(F_X^{-1}(t),\, f_{\lambda^*}^{*}(t)\big)\big)\,dt = R_0$
Proof. 
Recall from Equation (11) that
$\mathcal{A}_{Quant} = \big\{f \in Quant : 0 \leq f(z) \leq F_X^{-1}(z), \text{ for each } 0 < z < 1\big\}$
where $Quant = \big\{f : (0, 1) \to \mathbb{R} \,\big|\, f \text{ is nondecreasing and left-continuous}\big\}$. Define the collection K of functions on $(0, 1)$ as follows:
$K = \big\{f : (0, 1) \to \mathbb{R} \,\big|\, 0 \leq f(z) \leq F_X^{-1}(z), \text{ for each } 0 < z < 1\big\}$
Then, $\mathcal{A}_{Quant} = Quant \cap K$. Consider the following problem.
$\sup_{f} \int_0^1 u\big(W_0 - \Pi - f(t)\big)\,\phi\big(F_X^{-1}(t)\big)\,dt \quad \text{s.t.} \quad f \in \mathcal{A}_{Quant}, \quad \int_0^1 T'(1 - t)\,f(t)\,dt = R_0$
Lemma A13. 
If $f^* \in \mathcal{A}_{Quant}$ satisfies the following:
(1) 
$\int_0^1 T'(1 - t)\,f^*(t)\,dt = R_0$;
(2) 
There exists $\lambda \geq 0$, such that for all $t \in (0, 1) \setminus \big\{t : \phi\big(F_X^{-1}(t)\big) = 0\big\}$,
$f^*(t) = \underset{0 \leq y \leq F_X^{-1}(t)}{\arg\max}\ \Big[u\big(W_0 - \Pi - y\big)\,\phi\big(F_X^{-1}(t)\big) + \lambda\,T'(1 - t)\,y\Big]$
Then, $f^*$ solves Problem (A17).
Proof. 
Suppose that $f^* \in \mathcal{A}_{Quant}$ satisfies Conditions (1) and (2) above. Then, in particular, $f^*$ is feasible for Problem (A17). To show the optimality of $f^*$ for Problem (A17), let f be any other feasible solution for Problem (A17). Then, for all $t \in (0, 1) \setminus \big\{t : \phi\big(F_X^{-1}(t)\big) = 0\big\}$,
$u\big(W_0 - \Pi - f^*(t)\big)\,\phi\big(F_X^{-1}(t)\big) + \lambda\,T'(1 - t)\,f^*(t) \geq u\big(W_0 - \Pi - f(t)\big)\,\phi\big(F_X^{-1}(t)\big) + \lambda\,T'(1 - t)\,f(t)$
That is, $\Big[u\big(W_0 - \Pi - f^*(t)\big) - u\big(W_0 - \Pi - f(t)\big)\Big]\,\phi\big(F_X^{-1}(t)\big) \geq \lambda\,T'(1 - t)\,\big(f(t) - f^*(t)\big)$. Integrating yields $V(f^*) - V(f) \geq \lambda\,(R_0 - R_0) = 0$, that is, $V(f^*) \geq V(f)$, as required. ☐
Hence, in view of Lemma A13, in order to find a solution for Problem (A17), one can start by solving the problem:
$\max_{0 \leq f_\lambda(t) \leq F_X^{-1}(t)}\ \Big[u\big(W_0 - \Pi - f_\lambda(t)\big)\,\phi\big(F_X^{-1}(t)\big) + \lambda\,T'(1 - t)\,f_\lambda(t)\Big]$
for a given $\lambda \geq 0$ and a fixed $t \in (0, 1) \setminus \big\{t : \phi\big(F_X^{-1}(t)\big) = 0\big\}$.
Consider first the following problem:
$\max_{f_\lambda(t)}\ \Big[u\big(W_0 - \Pi - f_\lambda(t)\big)\,\phi\big(F_X^{-1}(t)\big) + \lambda\,T'(1 - t)\,f_\lambda(t)\Big]$
for a given $\lambda \geq 0$ and a fixed $t \in (0, 1) \setminus \big\{t : \phi\big(F_X^{-1}(t)\big) = 0\big\}$.
By concavity of the utility function u, in order to solve Problem (A19), it suffices to solve the first-order condition:
$u'\big(W_0 - \Pi - f_\lambda^*(t)\big)\,\phi\big(F_X^{-1}(t)\big) - \lambda\,T'(1 - t) = 0$
which gives
$f_\lambda^*(t) = W_0 - \Pi - (u')^{-1}\left(\dfrac{\lambda\,T'(1 - t)}{\phi\big(F_X^{-1}(t)\big)}\right)$
Then, the function $f_\lambda^*(t)$ solves Problem (A19), for a given $\lambda \geq 0$ and a fixed $t \in (0, 1) \setminus \big\{t : \phi\big(F_X^{-1}(t)\big) = 0\big\}$.
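The closed-form maximizer can be sanity-checked against a brute-force grid search over the concave pointwise objective $u(W_0 - \Pi - y)\,\phi(F_X^{-1}(t)) + \lambda T'(1-t)\,y$, whose first-order condition is $u'(W_0 - \Pi - y)\,\phi(F_X^{-1}(t)) = \lambda T'(1-t)$. All numbers below are illustrative assumptions (log utility, a fixed t encoded through the values of $\phi(F_X^{-1}(t))$ and $T'(1-t)$), not values from the paper:

```python
import math

W0, PI = 10.0, 1.0
phi_t, lam, Tp = 0.8, 0.5, 1.2        # assumed phi(F_X^{-1}(t)), lambda, T'(1-t)

u = math.log                          # u(w) = ln(w): strictly concave
u_prime_inv = lambda y: 1.0 / y       # (u')^{-1}(y) = 1/y

# Closed form from the first-order condition u'(W0 - PI - y) * phi = lam * T'(1-t):
y_star = W0 - PI - u_prime_inv(lam * Tp / phi_t)

# Brute-force check of the concave pointwise objective on a fine grid:
g = lambda y: u(W0 - PI - y) * phi_t + lam * Tp * y
grid = [i * 1e-4 for i in range(int((W0 - PI) / 1e-4))]   # y in [0, W0 - PI)
y_grid = max(grid, key=g)
```

Since the objective is strictly concave in y, the grid maximizer lies within one grid step of the closed-form solution.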
By Assumption 7, the function $t \mapsto \frac{T'(1 - t)}{\phi(F_X^{-1}(t))}$ is nondecreasing. By Assumption 2, the function u is strictly concave and continuously differentiable. Hence, the function $u'$ is both continuous and strictly decreasing. This then implies that $(u')^{-1}$ is continuous and strictly decreasing, by the inverse function theorem ([50] pp. 221–223). Therefore, the function $f_\lambda^*(t)$ in Equation (A20) is nondecreasing ($\lambda \geq 0$). Moreover, by Assumption 4, the function $T'$ is continuous. Therefore, Assumption 6 then implies that $f_\lambda^*(t)$ is left-continuous.
Define the function $f_\lambda^{**}$ by:
$f_\lambda^{**}(t) = \max\big(0, \min\big(F_X^{-1}(t),\, f_\lambda^*(t)\big)\big)$
Then, $f_\lambda^{**} \in K$. Moreover, since both $F_X^{-1}$ and $f_\lambda^*$ are nondecreasing and left-continuous functions, it follows that $f_\lambda^{**}$ is nondecreasing and left-continuous. Consequently, $f_\lambda^{**} \in \mathcal{A}_{Quant}$. Finally, it is easily seen that $f_\lambda^{**}$ solves Problem (A18) for the given λ. Now, if $\lambda^*$ is chosen so that $\int_0^1 T'(1 - t)\,f_{\lambda^*}^{**}(t)\,dt = R_0$, then by Lemma A13, $f_{\lambda^*}^{**}$ is optimal for Problem (A17).
Hence, in view of Theorem 2 and Lemma A13, to conclude the proof of Corollary A1, it remains to show that there exists a $\lambda^* \geq 0$, such that $\int_0^1 T'(1 - t)\,f_{\lambda^*}^{**}(t)\,dt = R_0$. This is given by Lemma A14 below.
Lemma A14. 
Let ψ be the function of the parameter $\lambda \geq 0$ defined by $\psi(\lambda) := \int_0^1 T'(1 - t)\,f_\lambda^{**}(t)\,dt$. Then, there exists a $\lambda^* \geq 0$, such that $\psi(\lambda^*) = R_0$.
Proof. 
First, note that ψ is a continuous and nondecreasing function of λ, where the continuity of ψ is a consequence of Lebesgue's dominated convergence theorem ([37], Theorem 11.21). Indeed, since X is bounded and since $F_X^{-1}$ is nondecreasing, it follows that for each $t \in (0, 1)$,
$\min\big(F_X^{-1}(t),\, f_\lambda^*(t)\big) \leq F_X^{-1}(t) \leq F_X^{-1}(1) \leq M = \|X\|_s < +\infty$
Moreover, since T is concave and increasing, $T'$ is nonincreasing and nonnegative, and so, for each $t \in (0, 1)$, $0 \leq T'(1 - t) \leq T'(0)$. However, $T'(0) < +\infty$, by Assumption 4 (since $T'$ is differentiable, it is continuous and, therefore, bounded on every closed and bounded subset of $\mathbb{R}$). Hence, for each $t \in (0, 1)$,
$\min\big(F_X^{-1}(t),\, f_\lambda^*(t)\big)\,T'(1 - t) \leq F_X^{-1}(1)\,T'(0) \leq \|X\|_s\,T'(0) < +\infty$
Furthermore, $\psi(0) = 0$ (by Assumption 2), and:
$\lim_{\lambda \to \infty} \psi(\lambda) = \int_0^1 T'(1 - t)\,\min\big(F_X^{-1}(t),\, W_0 - \Pi\big)\,dt = \int_0^{F_X(W_0 - \Pi)} T'(1 - t)\,F_X^{-1}(t)\,dt + \big(W_0 - \Pi\big)\int_{F_X(W_0 - \Pi)}^{1} T'(1 - t)\,dt$
However, by Assumption 3, we have $F_X(W_0 - \Pi) = 1$. This then implies that:
$\lim_{\lambda \to \infty} \psi(\lambda) = \int_0^1 T'(1 - t)\,F_X^{-1}(t)\,dt = \int X\,d(T \circ Q)$
Now, for any $Y \in B^+(\Sigma)$, which is feasible for Problem (A8), one has:
(i)
$0 \leq Y \leq X$; and,
(ii)
$\int (X - Y)\,d(T \circ Q) = R_0$
Hence, $0 \leq X - Y \leq X$, and so, by the monotonicity of the Choquet integral (Proposition A1), it follows that $R_0 = \int (X - Y)\,d(T \circ Q) \leq \int X\,d(T \circ Q)$. Consequently,
$0 = \psi(0) \leq R_0 \leq \int X\,d(T \circ Q) = \lim_{\lambda \to \infty} \psi(\lambda)$
Hence, by the intermediate value theorem ([50], Theorem 4.23), there exists some $\lambda^* \geq 0$, such that $\psi(\lambda^*) = R_0$. ☐
By Theorem 2 and Lemmata A13 and A14, this concludes the proof of Corollary A1. ☐
Now, since the function $f_{\lambda^*}^{**}(t) = \max\big(0, \min\big(F_X^{-1}(t),\, f_{\lambda^*}^{*}(t)\big)\big)$ is optimal for Problem (A17), it is optimal for Problem (A15). Therefore, by Lemma A12, the function:
$f_{\lambda^*}^{**}(U) = \max\big(0, \min\big(F_X^{-1}(U),\, f_{\lambda^*}^{*}(U)\big)\big)$
is optimal for Problem (A13), and the function:
$\Big(X - \max\big(0, \min\big(F_X^{-1}(U),\, f_{\lambda^*}^{*}(U)\big)\big)\Big)\mathbf{1}_A + X \mathbf{1}_{S \setminus A}$
is optimal for Problem (9). However, since $Q \circ X^{-1}$ is nonatomic (by Assumption 4), it follows that $U = F_X(X)$ has a uniform distribution over $(0, 1)$ ([44], Lemma A.21), and that $X = F_X^{-1}\big(F_X(X)\big)$, Q-a.s. Therefore,
$f_{\lambda^*}^{**}(U) = \max\big(0, \min\big(F_X^{-1}(U),\, f_{\lambda^*}^{*}(U)\big)\big) = \max\Big(0, \min\Big(F_X^{-1}(U),\, W_0 - \Pi - (u')^{-1}\big(\tfrac{\lambda^* T'(1 - U)}{\phi(F_X^{-1}(U))}\big)\Big)\Big) = \max\Big(0, \min\Big(X,\, W_0 - \Pi - (u')^{-1}\big(\tfrac{\lambda^* T'(1 - U)}{\phi(X)}\big)\Big)\Big), \quad Q\text{-a.s.}$
This Q-a.s. equality implies that:
$\max\Big(0, \min\Big(X,\, W_0 - \Pi - (u')^{-1}\big(\tfrac{\lambda^* T'(1 - U)}{\phi(X)}\big)\Big)\Big)$
is also optimal for Problem (A13). Lemma A12 then implies that the function:
$\Big(X - \max\Big(0, \min\Big(X,\, W_0 - \Pi - (u')^{-1}\big(\tfrac{\lambda^* T'(1 - U)}{\phi(X)}\big)\Big)\Big)\Big)\mathbf{1}_A + X \mathbf{1}_{S \setminus A}$
is optimal for Problem (9).
Finally, it can be easily verified that:
$X - \max\Big(0, \min\Big(X,\, W_0 - \Pi - (u')^{-1}\big(\tfrac{\lambda^* T'(1 - U)}{\phi(X)}\big)\Big)\Big) = \min\Big(X,\, \max\Big(0,\, X - \big(W_0 - \Pi - (u')^{-1}\big(\tfrac{\lambda^* T'(1 - U)}{\phi(X)}\big)\big)\Big)\Big)$
The rest follows from Corollary A1, noting that since $X = F_X^{-1}(U)$, Q-a.s., one has:
$R_0 = \int_0^1 T'(1 - t)\,\max\big(0, \min\big(F_X^{-1}(t),\, f_{\lambda^*}^{*}(t)\big)\big)\,dt = \int T'(1 - U)\,\max\big(0, \min\big(F_X^{-1}(U),\, f_{\lambda^*}^{*}(U)\big)\big)\,dQ = \int T'(1 - U)\,\max\Big(0, \min\Big(X,\, W_0 - \Pi - (u')^{-1}\big(\tfrac{\lambda^* T'(1 - U)}{\phi(X)}\big)\Big)\Big)\,dQ$
This concludes the proof of Corollary 1.

References

  1. K. Arrow. Essays in the Theory of Risk-Bearing. Chicago, IL, USA: Markham Publishing Company, 1971. [Google Scholar]
  2. B. De Finetti. “La Prévision: Ses Lois Logiques, Ses Sources Subjectives.” Ann. l’Inst. Henri Poincaré 7 (1937): 1–68. [Google Scholar]
  3. L. Savage. The Foundations of Statistics, 2nd ed. New York, NY, USA: Dover Publications, 1972. [Google Scholar]
  4. J. Von Neumann, and O. Morgenstern. Theory of Games and Economic Behavior. Princeton, NJ, USA: Princeton University Press, 1944. [Google Scholar]
  5. F. Knight. Risk, Uncertainty, and Profit. Boston, MA, USA; New York, NY, USA: Houghton Mifflin, 1921. [Google Scholar]
  6. D. Ellsberg. “Risk, Ambiguity, and the Savage Axioms.” Q. J. Econ. 75 (1961): 643–669. [Google Scholar] [CrossRef]
  7. C. Camerer. “Individual Decision Making.” In Handbook of Experimental Economics. Edited by J.H. Kagel and A.E. Roth. Princeton, NJ, USA: Princeton University Press, 1995. [Google Scholar]
  8. D. Schmeidler. “Subjective Probability and Expected Utility without Additivity.” Econometrica 57 (1989): 571–587. [Google Scholar] [CrossRef]
  9. I. Gilboa, and D. Schmeidler. “Maxmin Expected Utility with a Non-Unique Prior.” J. Math. Econ. 18 (1989): 141–153. [Google Scholar] [CrossRef]
  10. P. Ghirardato, F. Maccheroni, and M. Marinacci. “Differentiating Ambiguity and Ambiguity Attitude.” J. Econ. Theory 118 (2004): 133–173. [Google Scholar] [CrossRef]
  11. M. Amarante. “Foundations of Neo-Bayesian Statistics.” J. Econ. Theory 144 (2009): 2146–2173. [Google Scholar] [CrossRef]
  12. I. Gilboa, and M. Marinacci. “Ambiguity and the Bayesian Paradigm.” In Advances in Economics and Econometrics: Theory and Applications, Tenth World Congress of the Econometric Society. Edited by D. Acemoglu, M. Arellano and E. Dekel. New York, NY, USA: Cambridge University Press, 2013. [Google Scholar]
  13. R. Hogarth, and H. Kunreuther. “Risk, Ambiguity, and Insurance.” J. Risk Uncertain. 2 (1989): 5–35. [Google Scholar] [CrossRef]
  14. M. Amarante, M. Ghossoub, and E. Phelps. “Ambiguity on the Insurer’s Side: The Demand for Insurance.” J. Math. Econ. 58 (2015): 61–78. [Google Scholar] [CrossRef]
  15. D. Alary, C. Gollier, and N. Treich. “The Effect of Ambiguity Aversion on Insurance and Self-Protection.” Econ. J. 123 (2013): 1188–1202. [Google Scholar] [CrossRef]
  16. P. Klibanoff, M. Marinacci, and S. Mukerji. “A Smooth Model of Decision Making under Ambiguity.” Econometrica 73 (2005): 1849–1892. [Google Scholar] [CrossRef]
  17. C. Gollier. “Optimal Insurance Design of Ambiguous Risks.” Econ. Theory 57 (2014): 555–576. [Google Scholar] [CrossRef]
  18. M. Jeleva. “Background Risk, Demand for Insurance, and Choquet Expected Utility Preferences.” GENEVA Pap. Risk Insu.-Theory 25 (2000): 7–28. [Google Scholar] [CrossRef]
  19. V. Young. “Optimal Insurance under Wang’s Premium Principle.” Insur. Math. Econ. 25 (1999): 109–122. [Google Scholar] [CrossRef]
  20. C. Bernard, X. He, J. Yan, and X. Zhou. “Optimal Insurance Design under Rank-Dependent Expected Utility.” Math. Financ. 25 (2015): 154–186. [Google Scholar] [CrossRef]
  21. J. Quiggin. “A Theory of Anticipated Utility.” J. Econ. Behav. 3 (1982): 323–343. [Google Scholar] [CrossRef]
  22. M. Yaari. “The Dual Theory of Choice under Risk.” Econometrica 55 (1987): 95–115. [Google Scholar] [CrossRef]
  23. N. Doherty, and L. Eeckhoudt. “Optimal Insurance without Expected Utility: The Dual Theory and the Linearity of Insurance Contracts.” J. Risk Uncertain. 10 (1995): 157–179. [Google Scholar] [CrossRef]
  24. E. Karni. “Optimal Insurance: A Nonexpected Utility Analysis.” In Contributions to Insurance Economics. Edited by G. Dionne. Boston, MA, USA: Kluwer Academic Publishers, 1992. [Google Scholar]
  25. M. Machina. “Non-Expected Utility and the Robustness of the Classical Insurance Paradigm.” GENEVA Pap. Risk Insur.-Theory 20 (1995): 9–50. [Google Scholar] [CrossRef]
  26. H. Schlesinger. “Insurance Demand without the Expected-Utility Paradigm.” J. Risk Insur. 64 (1997): 19–39. [Google Scholar] [CrossRef]
  27. G. Carlier, R. Dana, and N. Shahidi. “Efficient Insurance Contracts under Epsilon-Contaminated Utilities.” GENEVA Pap. Risk Insur.-Theory 28 (2003): 59–71. [Google Scholar] [CrossRef]
  28. S. Anwar, and M. Zheng. “Competitive Insurance Market in the Presence of Ambiguity.” Insur. Math. Econ. 50 (2012): 79–84. [Google Scholar] [CrossRef]
  29. G. Carlier, and R. Dana. Insurance Contracts with Deductibles and Upper Limits. Paris, France: Universite Paris Dauphine, 2002, Preprint, Ceremade. [Google Scholar]
  30. G. Carlier, and R. Dana. “Core of Convex Distortions of a Probability.” J. Econ. Theory 113 (2003): 199–222. [Google Scholar] [CrossRef]
  31. G. Carlier, and R. Dana. “Two-persons Efficient Risk-sharing and Equilibria for Concave Law-invariant Utilities.” Econ. Theory 36 (2008): 189–223. [Google Scholar] [CrossRef]
  32. A. Chateauneuf, R. Dana, and J. Tallon. “Optimal Risk-sharing Rules and Equilibria with Choquet-expected-utility.” J. Math. Econ. 34 (2000): 191–214. [Google Scholar] [CrossRef]
  33. A. Balbás, B. Balbás, R. Balbás, and A. Heras. “Optimal Reinsurance under Risk and Uncertainty.” Insur. Math. Econ. 60 (2015): 61–74. [Google Scholar] [CrossRef]
  34. E. Ert, and S. Trautmann. “Sampling Experience Reverses Preferences for Ambiguity.” J. Risk Uncertain. 49 (2014): 31–42. [Google Scholar] [CrossRef]
  35. M. Kocher, A. Lahno, and S. Trautmann. Ambiguity Aversion is the Exception. Munich, Germany: CESifo Group, 2015, Preprint, CESifo Working Paper Series No. 5261. [Google Scholar]
  36. S. Trautmann, and G. van de Kuilen. “Ambiguity Attitudes.” In The Wiley Blackwell Handbook of Judgment and Decision Making. Edited by G. Keren and G. Wu. Oxford, UK: Wiley-Blackwell, 2016. [Google Scholar]
  37. C. Aliprantis, and K. Border. Infinite Dimensional Analysis, 3rd ed. Heidelberg, Germany: Springer-Verlag, 2006. [Google Scholar]
  38. A. Pichler. “Insurance Pricing under Ambiguity.” Eur. Actuar. J. 4 (2014): 335–364. [Google Scholar] [CrossRef]
  39. M. Ghossoub. “Vigilant Measures of Risk and the Demand for Contingent Claims.” Insur. Math. Econ. 61 (2015): 27–35. [Google Scholar] [CrossRef]
40. D. Denneberg. Non-Additive Measure and Integral. Dordrecht, The Netherlands: Kluwer Academic Publishers, 1994. [Google Scholar]
41. M. Marinacci, and L. Montrucchio. “Introduction to the Mathematics of Ambiguity.” In Uncertainty in Economic Theory: Essays in Honor of David Schmeidler’s 65th Birthday. Edited by I. Gilboa. London, UK: Routledge, 2004, pp. 46–107. [Google Scholar]
  42. D. Schmeidler. “Integral Representation without Additivity.” Proc. Am. Math. Soc. 97 (1986): 255–261. [Google Scholar] [CrossRef]
43. D. Cohn. Measure Theory. Boston, MA, USA: Birkhäuser, 1980. [Google Scholar]
  44. H. Föllmer, and A. Schied. Stochastic Finance: An Introduction in Discrete Time, 3rd ed. Berlin, Germany: Walter de Gruyter, 2011. [Google Scholar]
45. H. Jin, and X. Zhou. “Behavioral Portfolio Selection in Continuous Time.” Math. Financ. 18 (2008): 385–426. [Google Scholar] [CrossRef]
  46. X. He, and X. Zhou. “Portfolio Choice via Quantiles.” Math. Financ. 21 (2011): 203–231. [Google Scholar] [CrossRef]
  47. H. Jin, and X. Zhou. “Greed, Leverage, and Potential Losses: A Prospect Theory Perspective.” Math. Financ. 23 (2013): 122–142. [Google Scholar] [CrossRef]
  48. G. Carlier, and R. Dana. “Optimal Demand for Contingent Claims when Agents Have Law Invariant Utilities.” Math. Financ. 21 (2011): 169–201. [Google Scholar] [CrossRef]
  49. R. Dudley. Real Analysis and Probability. New York, NY, USA: Cambridge University Press, 2002. [Google Scholar]
  50. W. Rudin. Principles of Mathematical Analysis, 3rd ed. New York, NY, USA: McGraw-Hill Book Company, 1976. [Google Scholar]
  51. M. Ghossoub. “Equimeasurable Rearrangements with Capacities.” Math. Oper. Res. 40 (2015): 429–445. [Google Scholar] [CrossRef]
  52. M. Ghossoub. “Contracting under Heterogeneous Beliefs.” Ph.D. Thesis, Department of Statistics & Actuarial Science, University of Waterloo, Waterloo, ON, Canada, May 2011. [Google Scholar]
  • 1.We thank one of the reviewers for bringing this to our attention.
  • 2.This is a standard assumption, and it holds in many instances, such as when it is assumed that a probability density function for X exists.
• 3. A finite nonnegative measure η on a measurable space (Ω, 𝒜) is said to be nonatomic if for any A ∈ 𝒜 with η(A) > 0, there is some B ∈ 𝒜 such that B ⊂ A and 0 < η(B) < η(A).
• 4. Indeed, suppose that T(t) = t, the identity function. Then T′ = 1, and so T′(1 − t)/ϕ(F_X^{−1}(t)) = 1/ϕ(F_X^{−1}(t)). Moreover, the function t ↦ F_X^{−1}(t) is nondecreasing, since F_X^{−1} is a quantile function. Therefore, the function t ↦ T′(1 − t)/ϕ(F_X^{−1}(t)) is nondecreasing if and only if the function ϕ is nonincreasing.
• 5. Any monotone function is Borel-measurable and, hence, “almost continuous”, in view of Lusin’s Theorem ([49] Theorem 7.5.2). Furthermore, any monotone function is continuous almost everywhere with respect to the Lebesgue measure.
• 6. Note that this variable deductible is anti-comonotonic with the loss X, since T is concave, u is increasing and concave (and hence, (u′)^{−1} is decreasing by the inverse function theorem ([50] pp. 221–223)), X = F_X^{−1}(V), P-a.s., and F_X is a nondecreasing function. However, the indemnity schedule is comonotonic with X.
• 7. See, e.g., Denneberg [40], Proposition 1.4 on p. 8 and the discussion on pp. 61–62. See also [45], p. 418; [46], pp. 210 and 213; or [30], p. 207.
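The monotonicity equivalence stated in footnote 4 can be illustrated numerically. The sketch below is not from the paper: it assumes an Exp(1) loss (so F_X^{−1}(t) = −ln(1 − t)) and two hypothetical choices of ϕ, one nonincreasing and one increasing, and checks that with T(t) = t the function t ↦ 1/ϕ(F_X^{−1}(t)) is nondecreasing exactly in the first case.

```python
import math

def F_X_inv(t):
    # Quantile function of an Exp(1) loss (an assumed example distribution).
    return -math.log(1.0 - t)

def phi_nonincreasing(x):
    return math.exp(-x)   # a nonincreasing phi (hypothetical choice)

def phi_increasing(x):
    return 1.0 + x        # an increasing phi, for contrast (hypothetical choice)

ts = [0.1 * k for k in range(1, 10)]

# With T(t) = t, T'(1 - t)/phi(F_X^{-1}(t)) reduces to 1/phi(F_X^{-1}(t)).
g1 = [1.0 / phi_nonincreasing(F_X_inv(t)) for t in ts]
g2 = [1.0 / phi_increasing(F_X_inv(t)) for t in ts]

# Nondecreasing when phi is nonincreasing; not nondecreasing otherwise.
assert all(a <= b for a, b in zip(g1, g1[1:]))
assert not all(a <= b for a, b in zip(g2, g2[1:]))
```

Since F_X^{−1} is nondecreasing, composing it with a nonincreasing ϕ and inverting restores monotonicity in t, which is exactly the "if and only if" direction of the footnote.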

Share and Cite

MDPI and ACS Style

Amarante, M.; Ghossoub, M. Optimal Insurance for a Minimal Expected Retention: The Case of an Ambiguity-Seeking Insurer. Risks 2016, 4, 8. https://doi.org/10.3390/risks4010008


