Pricing with Variance Gamma Information

In the information-based pricing framework of Brody, Hughston and Macrina, the market filtration $\{ \mathcal F_t\}_{t\geq 0}$ is generated by an information process $\{ \xi_t\}_{t\geq0}$ defined in such a way that at some fixed time $T$ an $\mathcal F_T$-measurable random variable $X_T$ is "revealed". A cash flow $H_T$ is taken to depend on the market factor $X_T$, and one considers the valuation of a financial asset that delivers $H_T$ at $T$. The value $S_t$ of the asset at any time $t\in[0,T)$ is the discounted conditional expectation of $H_T$ with respect to $\mathcal F_t$, where the expectation is taken under the risk-neutral measure and the interest rate is constant. Then $S_{T^-} = H_T$, and $S_t = 0$ for $t\geq T$. In the general situation one has a countable number of cash flows, and each cash flow can depend on a vector of market factors, each associated with an information process. In the present work, we construct a new class of models for the market filtration based on the variance-gamma process. The information process is obtained by subordinating a particular type of Brownian random bridge with a gamma process. The filtration is taken to be generated by the information process together with the gamma bridge associated with the gamma subordinator. We show that the resulting extended information process has the Markov property and hence can be used to price a variety of different financial assets, several examples of which are discussed in detail.


I. INTRODUCTION
The theory of information-based asset pricing put forward by Brody, Hughston & Macrina [3,4,17] is concerned with the determination of the price processes of financial assets from first principles. The simplest version of the model is as follows. We fix a probability space $(\Omega, \mathcal F, \mathbb P)$. An asset delivers a single random cash flow $H_T$ at some specified time $T > 0$, where time 0 denotes the present. The cash flow is a function of a random variable $X_T$, which we can think of as a "market factor" that is in some sense revealed at time $T$. In the general situation there will be many factors and many cash flows, but for the present we assume that there is a single factor $X_T : \Omega \to \mathbb R$ such that the sole cash flow at time $T$ is given by $H_T = h(X_T)$ for some Borel function $h : \mathbb R \to \mathbb R^+$. For simplicity we assume that interest rates are constant and that $\mathbb P$ is the risk-neutral measure. We require that $H_T$ should be integrable. Under these assumptions, the value of the asset at time 0 is given by
$$ S_0 = \mathrm e^{-rT}\, \mathbb E[H_T], $$
where $\mathbb E$ denotes expectation under $\mathbb P$ and $r$ is the short rate. Since the single "dividend" is paid at time $T$, the value of the asset at any time $t \geq 0$ is of the form
$$ S_t = \mathbb 1\{t < T\}\, \mathrm e^{-r(T-t)}\, \mathbb E[H_T \mid \mathcal F_t], $$
where $\{\mathcal F_t\}_{t\geq 0}$ is the market filtration. The task now is to model the filtration, and this will be done explicitly. The idea is that the filtration should contain partial or "noisy" information about the market factor $X_T$, and hence about the impending cash flow, in such a way that $X_T$ is $\mathcal F_T$-measurable. This can be achieved by allowing $\{\mathcal F_t\}$ to be generated by a so-called information process $\{\xi_t\}_{t\geq 0}$ having the property that for each $t \geq T$ the random variable $\xi_t$ is measurable with respect to the $\sigma$-algebra generated by $X_T$.
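As a quick illustration of the discounted-expectation valuation described above, the following sketch estimates the time-0 price by Monte Carlo. The payoff function and the law of the market factor below are hypothetical choices made only for the illustration; they are not taken from the text.

```python
import numpy as np

# Monte Carlo sketch of the time-0 valuation S_0 = exp(-r*T) * E[h(X_T)].
# The payoff h and the law of X_T are illustrative assumptions.
rng = np.random.default_rng(seed=1)

r, T = 0.02, 1.0                            # constant short rate and payout date
X_T = rng.normal(0.0, 1.0, size=1_000_000)  # hypothetical market factor
h = lambda x: np.maximum(x, 0.0)            # hypothetical payoff function

S_0 = np.exp(-r * T) * h(X_T).mean()
print(S_0)  # close to exp(-r*T)/sqrt(2*pi) for this particular choice
```

The same structure applies for any integrable payoff: only the sampler for $X_T$ and the function `h` change.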
In previous work on information-based asset pricing, models have been constructed using Brownian bridge information processes [3, 4, 6-8, 11, 15, 17, 22], gamma bridge information processes [5], Lévy random bridge information processes [12-14], and Markov bridge information processes [18]. In what follows we present a new model for the market filtration, based on the variance-gamma process. The idea is to create a two-parameter family of information processes associated with the random market factor $X_T$. One of the parameters is the information flow-rate $\sigma$. The other is an intrinsic parameter $m$ associated with the variance-gamma process. In the limit as $m$ tends to infinity, the variance-gamma information process reduces to the type of Brownian bridge information process considered by Brody, Hughston & Macrina [3,4,17]. The plan of the paper is as follows. In Section II we recall properties of the gamma process, introducing the so-called scale parameter $\kappa > 0$ and shape parameter $m > 0$. A standard gamma subordinator is defined to be a gamma process with $\kappa = 1/m$; the mean at time $t$ of a standard gamma subordinator is $t$. In Theorem 1 we prove that an increase in the shape parameter $m$ results in a transfer of weight from the Lévy measure of any interval $[c, d]$ in the space of jump sizes to the Lévy measure of any interval $[a, b]$ such that $b - a = d - c$ and $c > a$. Thus, roughly speaking, an increase in $m$ results in an increase in the rate at which small jumps occur relative to the rate at which large jumps occur. In Section III we recall properties of the variance-gamma process and the gamma bridge, and in Definition 1 we introduce the so-called normalized variance-gamma bridge. In Lemmas 1, 2, 3, and 4 we work out various properties of the normalized variance-gamma bridge. Then in Theorem 2 we show that the normalized variance-gamma bridge and the associated gamma bridge are jointly Markov, a property that turns out to be useful in what follows.
In Section IV, at Definition 2, we introduce the so-called variance-gamma information process. The information process carries noisy information about the value of a market factor $X_T$ that will be revealed to the market at time $T$, where the noise is represented by the normalized variance-gamma bridge. In Lemma 4 we present a formula that relates the values of the information process at different times, and by use of it we establish in Theorem 3 that the information process and the associated gamma bridge are jointly Markov. In Section V, we consider a market in which the filtration is generated by a variance-gamma information process along with the associated gamma bridge. Then in Theorem 4 we present a general formula for the price process of a financial asset that at time $T$ pays a single dividend given by a function $h(X_T)$ of the market factor. In particular, the a priori distribution of the market factor can be quite arbitrary, specified by a probability measure $F_{X_T}(\mathrm dx)$ on $\mathbb R$, the only requirement being that $h(X_T)$ should be integrable. Finally, in Section VI we present a number of examples, based on various choices of the distribution of the market factor and various choices of the payoff function, the results being summarized in Propositions 1, 2, 3, and 4. We conclude with a few comments on calibration.

II. GAMMA SUBORDINATORS
We begin with some remarks about the gamma process. Let us as usual write $\mathbb R^+$ for the non-negative real numbers. Let $\kappa$ and $m$ be strictly positive constants. A continuous random variable $G : \Omega \to \mathbb R^+$ on a probability space $(\Omega, \mathcal F, \mathbb P)$ will be said to have a gamma distribution with scale parameter $\kappa$ and shape parameter $m$ if
$$ \mathbb P[G \in \mathrm dx] = \mathbb 1\{x > 0\}\, \frac{x^{m-1}\, \mathrm e^{-x/\kappa}}{\kappa^m\, \Gamma(m)}\, \mathrm dx, $$
where $\Gamma(m)$ denotes the gamma function. There exists a two-parameter family of gamma processes of the form $\Gamma : \Omega \times \mathbb R^+ \to \mathbb R^+$ on $(\Omega, \mathcal F, \mathbb P)$. By a gamma process with scale parameter $\kappa$ and shape parameter $m$ we mean a Lévy process $\{\Gamma_t\}_{t\geq 0}$ such that for each $t > 0$ the random variable $\Gamma_t$ is gamma distributed with scale parameter $\kappa$ and shape parameter $mt$. If we write $(a)_0 = 1$ and $(a)_k = a(a+1)(a+2)\cdots(a+k-1)$ for the so-called Pochhammer symbol, we find that $\mathbb E[\Gamma_t^n] = \kappa^n (mt)_n$. It follows that $\mathbb E[\Gamma_t] = \mu t$ and $\mathrm{Var}[\Gamma_t] = \nu^2 t$, where $\mu = \kappa m$ and $\nu^2 = \kappa^2 m$, or equivalently $m = \mu^2/\nu^2$ and $\kappa = \nu^2/\mu$. The Lévy exponent for such a process is given for $\alpha < 1/\kappa$ by
$$ \psi(\alpha) = -m \log(1 - \kappa \alpha), $$
and for the corresponding Lévy measure we have
$$ \rho(\mathrm dx) = \mathbb 1\{x > 0\}\, m\, x^{-1}\, \mathrm e^{-x/\kappa}\, \mathrm dx. $$
One can then check that the Lévy-Khinchine relation holds for an appropriate choice of p (see, for example, [16], Lemma 1.7). By a standard gamma subordinator we mean a gamma process $\{\gamma_t\}_{t\geq 0}$ for which $\kappa = 1/m$. This implies that $\mathbb E[\gamma_t] = t$ and $\mathrm{Var}[\gamma_t] = m^{-1} t$. The standard gamma subordinators thus constitute a one-parameter family of processes labelled by $m$. An interpretation of the parameter $m$ is given by the following.
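The parametrization above can be checked by simulation. The following minimal sketch assumes only that, for a standard gamma subordinator, $\gamma_t$ is gamma distributed with shape $mt$ and scale $1/m$, and confirms the stated mean and variance.

```python
import numpy as np

# A standard gamma subordinator has scale kappa = 1/m, so gamma_t has
# shape m*t and scale 1/m; then E[gamma_t] = t and Var[gamma_t] = t/m.
rng = np.random.default_rng(seed=2)

m, t, n = 4.0, 2.5, 1_000_000
gamma_t = rng.gamma(shape=m * t, scale=1.0 / m, size=n)

print(gamma_t.mean())  # close to t = 2.5
print(gamma_t.var())   # close to t/m = 0.625
```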
Theorem 1. Let $\rho$ denote the Lévy measure of a gamma process with shape parameter $m$, and let $[a,b]$ and $[c,d]$ be subintervals of $\mathbb R^+$ of equal length, so that $b - a = d - c$, with $c > a$. Then the ratio $\rho([a,b])/\rho([c,d])$ is (i) strictly greater than one, and (ii) strictly increasing as a function of $m$.
Proof. We begin by establishing (i). Let $\delta = c - a > 0$ and note that the integrand in the expression for the Lévy measure is a strictly decreasing function of the variable of integration. This allows one to conclude that the mass assigned to $[a,b]$ strictly exceeds that assigned to the translated interval $[c,d]$. We proceed to establish (ii). A calculation shows that the derivative of the ratio with respect to $m$ can be expressed in terms of the exponential integral $E_1(z)$, which is defined for $z > 0$ by
$$ E_1(z) = \int_z^\infty t^{-1}\, \mathrm e^{-t}\, \mathrm dt; $$
see [1], Section 5.1.1. Computing the derivative of the quotient, one finds that its sign agrees with that of a difference of products of exponential integrals, and an estimate of the latter shows that this difference is strictly positive. It follows that the ratio is strictly increasing as a function of the parameter $m$, and that completes the proof.
We see that the effect of an increase in the value of m is to transfer weight from the Lévy measure of any interval [c, d] ⊂ R + to any earlier (possibly overlapping) interval [a, b] ⊂ R + of the same length. The Lévy measure of any such interval is the rate of arrival of jumps for which the jump size lies in the given interval.
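Theorem 1 can be illustrated numerically. The sketch below integrates the Lévy density $m\, x^{-1} \mathrm e^{-mx}$ of a standard gamma subordinator over two equal-length intervals and checks that the mass ratio exceeds one and grows with $m$; the interval endpoints are arbitrary illustrative values.

```python
import numpy as np

# Numerical illustration of Theorem 1.  For a standard gamma subordinator the
# Levy measure has density m * exp(-m*x) / x on (0, infinity).  For intervals
# of equal length with c > a, the mass ratio should exceed 1 and grow with m.
def levy_mass(lo, hi, m, n=20_001):
    x = np.linspace(lo, hi, n)
    f = m * np.exp(-m * x) / x
    # trapezoidal rule on a uniform grid
    return float((f[:-1] + f[1:]).sum() * 0.5 * (x[1] - x[0]))

a, b, c, d = 0.5, 1.0, 1.5, 2.0   # b - a = d - c, with c > a
ms = [0.5, 1.0, 2.0, 4.0]
ratios = [levy_mass(a, b, m) / levy_mass(c, d, m) for m in ms]
print(ratios)  # every ratio > 1, and the sequence increases with m
```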

III. NORMALIZED VARIANCE-GAMMA BRIDGE
Let us fix a standard Brownian motion $\{W_t\}_{t\geq 0}$ on $(\Omega, \mathcal F, \mathbb P)$ and an independent standard gamma subordinator $\{\gamma_t\}_{t\geq 0}$ with parameter $m$. By a standard variance-gamma process with parameter $m$ we mean a time-changed Brownian motion $\{V_t\}_{t\geq 0}$ of the form
$$ V_t = W_{\gamma_t}. $$
It is straightforward to check that $\{V_t\}$ is itself a Lévy process, with Lévy exponent
$$ \psi_V(\alpha) = -m \log\!\left(1 - \frac{\alpha^2}{2m}\right). $$
Properties of the variance-gamma process, and financial models based on it, have been investigated extensively in [9,19-21] and many other works. The other object we require going forward is the gamma bridge [5,10,23]. Let $\{\gamma_t\}$ be a standard gamma subordinator with parameter $m$. For fixed $T > 0$, the process $\{\gamma_{tT}\}$ defined by
$$ \gamma_{tT} = \frac{\gamma_t}{\gamma_T} $$
for $0 \leq t \leq T$ and $\gamma_{tT} = 1$ for $t > T$ will be called a standard gamma bridge, with parameter $m$, over the interval $[0,T]$. One can check that $\gamma_{tT}$ has a beta distribution with parameters $mt$ and $m(T-t)$. In particular, one finds that its density is given by
$$ \mathbb P[\gamma_{tT} \in \mathrm dy] = \mathbb 1\{0 < y < 1\}\, \frac{y^{mt-1}\,(1-y)^{m(T-t)-1}}{\mathrm B(mt,\, m(T-t))}\, \mathrm dy, $$
where $\mathrm B(\cdot,\cdot)$ denotes the beta function. It follows then by use of the integral formula for the moments of the beta distribution that
$$ \mathbb E[\gamma_{tT}^{\,n}] = \frac{(mt)_n}{(mT)_n}, $$
and hence, in particular, $\mathbb E[\gamma_{tT}] = t/T$ and $\mathbb E[\gamma_{tT}^2] = t(mt+1)/(T(mT+1))$, and therefore
$$ \mathrm{Var}[\gamma_{tT}] = \frac{t(T-t)}{T^2(mT+1)}. $$
One observes, in particular, that the expectation of $\gamma_{tT}$ does not depend on $m$, whereas the variance of $\gamma_{tT}$ decreases as $m$ increases.
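The beta-distribution moments of the gamma bridge quoted above can be verified by simulation. The sketch below constructs the bridge from two independent gamma increments of a standard subordinator and checks the stated mean and variance.

```python
import numpy as np

# Simulate the gamma bridge gamma_tT = gamma_t / gamma_T for a standard gamma
# subordinator and check the moments quoted in the text:
#   E[gamma_tT] = t/T,   Var[gamma_tT] = t*(T-t) / (T**2 * (m*T + 1)).
rng = np.random.default_rng(seed=3)

m, t, T, n = 2.0, 1.0, 4.0, 1_000_000
g_t = rng.gamma(shape=m * t, scale=1.0 / m, size=n)              # gamma_t
g_T = g_t + rng.gamma(shape=m * (T - t), scale=1.0 / m, size=n)  # add independent increment
bridge = g_t / g_T                                               # Beta(m*t, m*(T-t))

print(bridge.mean())  # close to t/T = 0.25
print(bridge.var())   # close to t*(T-t)/(T**2*(m*T+1)) ~ 0.0208
```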
Definition 1. The process $\{\Gamma_{tT}\}$ defined by
$$ \Gamma_{tT} = \frac{1}{\sqrt{\gamma_T}} \left( W_{\gamma_t} - \gamma_{tT}\, W_{\gamma_T} \right) $$
for $0 \leq t \leq T$ and $\Gamma_{tT} = 0$ for $t > T$ will be called a normalized variance-gamma bridge.
We proceed to work out various properties of this process. Recall that the gamma process and the associated gamma bridge have the property that $\gamma_{st}$ and $\gamma_u$ are independent for $0 \leq s \leq t \leq u$ and $t > 0$. It follows that $\gamma_{st}$ and $\gamma_{uv}$ are independent for $0 \leq s \leq t \leq u \leq v$ and $t > 0$. Furthermore, we have:

Lemma 1. For $0 \leq t \leq T$ and $x \in \mathbb R$, the conditional distribution of $\Gamma_{tT}$ given the time change is Gaussian, with
$$ \mathbb P\big[\Gamma_{tT} \leq x \mid \gamma_{tT}, \gamma_T\big] = N\!\left( \frac{x}{\sqrt{\gamma_{tT}(1-\gamma_{tT})}} \right). $$

Proof. Using the tower property, we find that the conditional distribution in question reduces to that of a centred Gaussian random variable with variance $\gamma_{tT}(1-\gamma_{tT})$, where the last step follows from the independence of $\gamma_{st}$ and $\gamma_u$, and $N(\cdot)$ denotes the standard normal distribution function.
As an immediate consequence, we also have that the conditional law of $\Gamma_{tT}$ given $\gamma_{tT}$ alone is the same, since it depends on the time change only through $\gamma_{tT}$. Further, we have: Proof. We recall that the Brownian bridge $\{\beta_{tT}\}_{0\leq t\leq T}$, defined by $\beta_{tT} = W_t - (t/T) W_T$, is Gaussian with mean zero and variance $t(T-t)/T$. Using the tower property we find the stated identity, and that concludes the proof.
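The conditional Gaussian structure of the normalized variance-gamma bridge can be checked by simulation. The definition used below, $\Gamma_{tT} = (W_{\gamma_t} - \gamma_{tT} W_{\gamma_T})/\sqrt{\gamma_T}$, is a reconstruction from context and should be treated as an assumption; under it, conditionally on the time change, $\Gamma_{tT}$ is Gaussian with mean 0 and variance $\gamma_{tT}(1-\gamma_{tT})$.

```python
import numpy as np

# Sketch of the normalized variance-gamma bridge, assuming the (reconstructed)
# definition Gamma_tT = (W_{gamma_t} - gamma_tT * W_{gamma_T}) / sqrt(gamma_T).
rng = np.random.default_rng(seed=4)

m, t, T, n = 2.0, 1.0, 4.0, 1_000_000
g_t = rng.gamma(m * t, 1.0 / m, size=n)
g_T = g_t + rng.gamma(m * (T - t), 1.0 / m, size=n)
bridge = g_t / g_T

# Conditionally on the time change, W_{gamma_t} and W_{gamma_T} - W_{gamma_t}
# are independent centred Gaussians with variances gamma_t and gamma_T - gamma_t.
W_gt = rng.normal(0.0, np.sqrt(g_t))
W_gT = W_gt + rng.normal(0.0, np.sqrt(g_T - g_t))
Gamma_tT = (W_gt - bridge * W_gT) / np.sqrt(g_T)

print(Gamma_tT.mean())                                   # close to 0
print(Gamma_tT.var() - (bridge * (1 - bridge)).mean())   # close to 0
```

The second printed quantity compares the unconditional variance of $\Gamma_{tT}$ with the average of the conditional variances $\gamma_{tT}(1-\gamma_{tT})$, which should agree by the tower property.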
This follows from a straightforward calculation. Then we obtain:

Theorem 2. The processes $\{\Gamma_{tT}\}_{0\leq t\leq T}$ and $\{\gamma_{tT}\}_{0\leq t\leq T}$ are jointly Markov.
Proof. To establish the Markov property it suffices to show that for any bounded measurable function φ : R × R → R, any n ∈ N, and any We present the proof for n = 2. Thus we need to show that We remark that as a consequence of Lemma 4 we have Therefore, it suffices to show that Let us write for the joint density of the random variables Γ tT , γ tT , Γ t 1 T , γ t 1 T , Γ t 2 t 1 , γ t 2 t 1 . Then for the conditional density of the Γ tT and γ tT given Thus, Similarly, We shall show that Writing for the joint distribution function, we see that where the last step follows by virtue of Lemma 3. Thus we have where the next to last step follows by virtue of Lemma 2. Similarly, and hence Thus we deduce that and and the theorem follows.

IV. VARIANCE GAMMA INFORMATION
Fix $T > 0$ and let $\{\Gamma_{tT}\}$ be a normalized variance-gamma bridge, as defined by (30). Let $\{\gamma_{tT}\}$ be the associated gamma bridge defined by (23). Let $X_T$ be a random variable, and assume that $X_T$, $\{\gamma_t\}_{t\geq 0}$ and $\{W_t\}_{t\geq 0}$ are independent. Then we are led to the following:

Definition 2. By a variance-gamma information process carrying the market factor $X_T$ we mean a process $\{\xi_t\}_{t\geq 0}$ that takes the form
$$ \xi_t = \sigma X_T\, \gamma_{tT} + \Gamma_{tT} $$
for $0 \leq t \leq T$ and $\xi_t = \sigma X_T$ for $t > T$, where $\sigma$ is a positive constant. Note that at $t = T$ one has $\gamma_{TT} = 1$ and $\Gamma_{TT} = 0$, so the two expressions agree.
The market filtration is assumed to be the standard augmented filtration generated jointly by $\{\xi_t\}$ and $\{\gamma_{tT}\}$. An elementary calculation then gives a relation between the values of the information process at different times, and we are led to the following result, required for the valuation of assets:

Theorem 3. The processes $\{\xi_t\}_{0\leq t\leq T}$ and $\{\gamma_{tT}\}_{0\leq t\leq T}$ are jointly Markov.

Proof. It suffices to show that for any $n \in \mathbb N$ and any times $0 < t_1 < t_2 < \cdots < t_n < t$, the conditional expectation of any bounded measurable function of $(\xi_t, \gamma_{tT})$ given the values of the pair at the earlier times depends on those values only through the most recent of them. We present the proof for $n = 2$. By Lemma 5, we can express $\xi_t$ in terms of its values at the earlier times together with independent quantities, and we then invoke Lemma 2, Lemma 3, and Theorem 2 to conclude. The generalization to $n > 2$ is straightforward.
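The information process can be simulated directly. The form $\xi_t = \sigma X_T \gamma_{tT} + \Gamma_{tT}$ used below is a reconstruction from context and should be treated as an assumption; under it, since $\Gamma_{tT}$ has conditional mean zero and $\mathbb E[\gamma_{tT}] = t/T$, one expects $\mathbb E[\xi_t \mid X_T] = \sigma (t/T) X_T$, which the sketch checks by regression. The law of the market factor is a hypothetical choice.

```python
import numpy as np

# Sketch of the variance-gamma information process, assuming the form
# xi_t = sigma * X_T * gamma_tT + Gamma_tT (reconstructed from context).
rng = np.random.default_rng(seed=5)

m, t, T, sigma, n = 2.0, 3.0, 4.0, 0.8, 1_000_000
g_t = rng.gamma(m * t, 1.0 / m, size=n)
g_T = g_t + rng.gamma(m * (T - t), 1.0 / m, size=n)
bridge = g_t / g_T
W_gt = rng.normal(0.0, np.sqrt(g_t))
W_gT = W_gt + rng.normal(0.0, np.sqrt(g_T - g_t))
Gamma_tT = (W_gt - bridge * W_gT) / np.sqrt(g_T)  # normalized VG bridge (noise)

X_T = rng.normal(0.0, 1.0, size=n)                # hypothetical market factor
xi_t = sigma * X_T * bridge + Gamma_tT            # information process at time t

slope = (xi_t * X_T).mean() / (X_T * X_T).mean()  # regression of xi_t on X_T
print(slope)  # close to sigma * t / T = 0.6
```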

V. INFORMATION BASED PRICING
Now we are in a position to consider the valuation of a financial asset in the setting just discussed. One recalls that $\mathbb P$ is understood to be the risk-neutral measure and that the interest rate is constant. The payoff of the asset at time $T$ is taken to be an integrable random variable of the form $h(X_T)$ for some Borel function $h$, where $X_T$ is the market factor revealed at $T$. The filtration is generated jointly by the variance-gamma information process $\{\xi_t\}$ and the associated gamma bridge $\{\gamma_{tT}\}$. The value of the asset at time $t \in [0, T)$ is then given by the general expression (2), which on account of Theorem 3 reduces in the present context to
$$ S_t = \mathrm e^{-r(T-t)}\, \mathbb E\big[\, h(X_T) \mid \xi_t, \gamma_{tT}\, \big]. $$
Our goal now is to work out this expectation explicitly. Let us write $F_{X_T}$ for the a priori distribution function of $X_T$. Thus $F_{X_T} : x \in \mathbb R \mapsto F_{X_T}(x) \in [0,1]$, where
$$ F_{X_T}(x) = \mathbb P[X_T \leq x]. $$
Occasionally, it will be convenient typographically to write $F^{(x)}_{X_T}$ in place of $F_{X_T}(x)$, and similarly for other distribution functions. To proceed with the calculation of the conditional expectation of $h(X_T)$, we require the following:

Lemma 6. Let $X$ be a random variable with distribution $\{F_X(x)\}_{x\in\mathbb R}$ and let $Y$ be a continuous random variable with distribution $\{F_Y(y)\}_{y\in\mathbb R}$ and density $\{f_Y(y)\}_{y\in\mathbb R}$. Then for all $y \in \mathbb R$ for which $f_Y(y) \neq 0$ we have
$$ F_{X|Y=y}(\mathrm dx) = \frac{f_{Y|X=x}(y)\, F_X(\mathrm dx)}{\int_{\mathbb R} f_{Y|X=x}(y)\, F_X(\mathrm dx)}. \qquad (59) $$

Proof. For any two random variables $X$ and $Y$, the conditional expectation $\mathbb E[\, 1_{\{X \in A\}} \mid Y\,]$ can be written as a Borel-measurable function of $Y$. Then for each $y \in \mathbb R$ we define the conditional law accordingly. By symmetry, the same construction applies with the roles of $X$ and $Y$ interchanged. Now consider the measure $F_{X|Y=y}(\mathrm dx)$ on $(\mathbb R, \mathcal B)$ defined for each $y \in \mathbb R$ by setting $F_{X|Y=y}(A) = \mathbb E[\, 1_{\{X \in A\}} \mid Y = y\,]$ for any $A \in \mathcal B$. Then $F_{X|Y=y}(\mathrm dx)$ is absolutely continuous with respect to $F_X(\mathrm dx)$. In particular, suppose that $F_X(B) = 0$ for some $B \in \mathcal B$. Then $\mathbb E[\, 1_{\{X \in B\}}\, ] = 0$, and hence $\mathbb E[\, 1_{\{X \in B\}} \mid Y\, ] = 0$ almost surely, and therefore $\mathbb E[\, 1_{\{X \in B\}} \mid Y = y\, ] = 0$. Thus $F_{X|Y=y}(B)$ vanishes for any $B \in \mathcal B$ for which $F_X(B)$ vanishes.
It follows by the Radon-Nikodym theorem that for each $y \in \mathbb R$ there exists a density $\{g_y(x)\}_{x\in\mathbb R}$ such that
$$ F_{X|Y=y}(\mathrm dx) = g_y(x)\, F_X(\mathrm dx). \qquad (66) $$
Note that $\{g_y(x)\}$ is determined uniquely apart from its values on $F_X$-null sets. Inserting (66) into (64) and applying Fubini's theorem, one finds that $\{F^{(y)}_{Y|X=x}\}_{x\in\mathbb R}$ is determined uniquely apart from its values on $F_X$-null sets, and that for each value of $x$ the conditional distribution function $\{F^{(y)}_{Y|X=x}\}_{y\in\mathbb R}$ is absolutely continuous and admits a density $\{f^{(y)}_{Y|X=x}\}_{y\in\mathbb R}$, related to $g_y(x)$ by $f^{(y)}_{Y|X=x} = g_y(x)\, f_Y(y)$. The desired result (59) then follows from (66) and (71), and that concludes the proof.
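Lemma 6 is in effect the Bayes formula for a general prior and a continuous observation. The sketch below implements a discrete-prior version of it: given prior weights $p_i$ on points $x_i$ and a conditional density $f_{Y|X=x}(y)$, the posterior expectation of $h(X)$ given $Y = y$ is a likelihood-weighted average. The two-point prior and Gaussian likelihood are hypothetical choices used only to exercise the formula.

```python
import numpy as np

# Discrete-prior version of Lemma 6:
#   E[h(X) | Y = y] = sum_i h(x_i) f(y|x_i) p_i / sum_i f(y|x_i) p_i.
def posterior_expectation(h, xs, ps, lik, y):
    w = np.array([lik(y, x) * p for x, p in zip(xs, ps)])  # posterior weights
    return float(np.dot([h(x) for x in xs], w) / w.sum())

# Hypothetical two-point prior with a unit-variance Gaussian likelihood.
lik = lambda y, x: np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2 * np.pi)
est = posterior_expectation(h=lambda x: x, xs=[0.0, 1.0], ps=[0.5, 0.5],
                            lik=lik, y=1.0)
print(est)  # = 1/(1 + exp(-1/2)), about 0.622
```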
Armed with Lemma 6, we are in a position to work out the conditional expectation that determines the asset price, and we obtain the following:

Theorem 4. The variance-gamma information-based price of a financial asset with payoff $h(X_T)$ at time $T$ is given for $t < T$ by
$$ S_t = \mathrm e^{-r(T-t)}\, \frac{\displaystyle\int_{-\infty}^{\infty} h(x) \exp\!\left( \frac{\sigma x\, \xi_t - \tfrac12 \sigma^2 x^2 \gamma_{tT}}{1 - \gamma_{tT}} \right) F_{X_T}(\mathrm dx)}{\displaystyle\int_{-\infty}^{\infty} \exp\!\left( \frac{\sigma x\, \xi_t - \tfrac12 \sigma^2 x^2 \gamma_{tT}}{1 - \gamma_{tT}} \right) F_{X_T}(\mathrm dx)}. \qquad (73) $$

Proof. To calculate the conditional expectation of $h(X_T)$, we observe that by the tower property
$$ \mathbb E\big[\, h(X_T) \mid \xi_t, \gamma_{tT}\, \big] = \mathbb E\Big[\, \mathbb E\big[\, h(X_T) \mid \xi_t, \gamma_{tT}, \gamma_T\, \big] \,\Big|\, \xi_t, \gamma_{tT} \Big], \qquad (74) $$
where the inner expectation takes the form
$$ \mathbb E\big[\, h(X_T) \mid \xi_t, \gamma_{tT}, \gamma_T\, \big] = \int_{-\infty}^{\infty} h(x)\, F_{X_T \mid \xi_t, \gamma_{tT}, \gamma_T}(\mathrm dx). $$
Here, by Lemma 6, the conditional distribution is obtained by weighting the a priori distribution $F_{X_T}(\mathrm dx)$ with the conditional density of $\xi_t$ given $X_T = x$, $\gamma_{tT}$ and $\gamma_T$, which is Gaussian with mean $\sigma x \gamma_{tT}$ and variance $\gamma_{tT}(1 - \gamma_{tT})$. Therefore the inner expectation is given by the ratio on the right-hand side of (73), which depends only on $\xi_t$ and $\gamma_{tT}$. It follows immediately from (74) that the same holds for the outer expectation, which translates into equation (73), and that concludes the proof.
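For a discrete prior, the pricing expectation reduces to a finite likelihood-weighted sum, which is easy to sketch in code. The Gaussian likelihood used below, with mean $\sigma x \gamma_{tT}$ and variance $\gamma_{tT}(1-\gamma_{tT})$, is a form reconstructed from the lemmas and should be treated as an assumption; all numerical inputs are hypothetical.

```python
import numpy as np

# Sketch of an information-based price for a discrete prior, assuming that,
# conditionally on X_T = x and gamma_tT, the information xi_t is Gaussian
# with mean sigma*x*gamma_tT and variance gamma_tT*(1 - gamma_tT).
def vg_price(h, xs, ps, xi, gam, sigma, r, tau):
    xs, ps = np.asarray(xs, float), np.asarray(ps, float)
    var = gam * (1.0 - gam)
    w = ps * np.exp(-0.5 * (xi - sigma * xs * gam) ** 2 / var)  # posterior weights
    return np.exp(-r * tau) * float(np.dot(h(xs), w) / w.sum())

# Binary credit-risky bond: X_T in {0, 1} with hypothetical inputs.
S_t = vg_price(h=lambda x: x, xs=[0.0, 1.0], ps=[0.2, 0.8],
               xi=0.9, gam=0.5, sigma=1.0, r=0.02, tau=0.5)
print(S_t)  # about 0.927 for these inputs
```

As the bridge variable `gam` approaches 1 the posterior weights collapse onto the outcome consistent with the observed `xi`, mirroring the limiting behaviour discussed in the examples that follow.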

VI. EXAMPLES
In conclusion, we present examples of variance-gamma information pricing for specific choices of (a) the payoff function h : R → R + and (b) the distribution of the market factor X T .
Example 1. We begin with the simplest case, which is that of a unit-principal credit-risky bond without recovery. We set $h(x) = x$, with $\mathbb P(X_T = 0) = p_0$ and $\mathbb P(X_T = 1) = p_1$, where $p_0 + p_1 = 1$. Thus, we have
$$ F_{X_T}(\mathrm dx) = p_0\, \delta_0(\mathrm dx) + p_1\, \delta_1(\mathrm dx), $$
where $\delta_a(\mathrm dx)$ denotes the Dirac measure concentrated at the point $a$, and we are led to the following:

Proposition 1. The variance-gamma information-based price of a unit-principal credit-risky discount bond with no recovery is given by
$$ S_t = \mathrm e^{-r(T-t)}\, \frac{p_1 \exp\!\left( \dfrac{\sigma \xi_t - \tfrac12 \sigma^2 \gamma_{tT}}{1 - \gamma_{tT}} \right)}{p_0 + p_1 \exp\!\left( \dfrac{\sigma \xi_t - \tfrac12 \sigma^2 \gamma_{tT}}{1 - \gamma_{tT}} \right)}. $$

Now let $\omega \in \Omega$ denote the outcome of chance. By use of equation (52) one can check rather directly that if $X_T(\omega) = 1$, then $\lim_{t\to T} S_t = 1$, whereas if $X_T(\omega) = 0$, then $\lim_{t\to T} S_t = 0$; the claimed limiting behaviour of the asset price follows by inspection of the behaviour of the likelihood ratio as $\gamma_{tT} \to 1$.

Example 2. As a somewhat more sophisticated version of the previous example, we consider the case of a defaultable bond with random recovery. We shall work out the case where $h(x) = x$ and the market factor $X_T$ takes the value $c$ with probability $p_1$, while with probability $p_0$ it is uniformly distributed over the interval $[a, b]$, where $0 \leq a < b \leq c$. Thus, for the probability measure of $X_T$ we have
$$ F_{X_T}(\mathrm dx) = p_0\, \frac{1}{b-a}\, \mathbb 1\{a \leq x \leq b\}\, \mathrm dx + p_1\, \delta_c(\mathrm dx). $$
The bond price at time $t$ is then obtained by working out the ratio of integrals appearing in Theorem 4, and it should be evident that one can obtain a closed-form solution. To work this out in detail, it will be convenient to have an expression for the incomplete first moment of a normally-distributed random variable with mean $\mu$ and variance $\nu^2$. Thus we set
$$ f(x, \mu, \nu) = \frac{1}{\nu\sqrt{2\pi}} \exp\!\left( -\frac{(x-\mu)^2}{2\nu^2} \right), $$
and for convenience we also set
$$ N_0(y, \mu, \nu) = \int_{-\infty}^{y} f(x, \mu, \nu)\, \mathrm dx, \qquad N_1(y, \mu, \nu) = \int_{-\infty}^{y} x\, f(x, \mu, \nu)\, \mathrm dx. $$
Then we have
$$ N_1(y, \mu, \nu) = \mu\, N_0(y, \mu, \nu) - \nu^2 f(y, \mu, \nu), $$
and of course
$$ N_0(y, \mu, \nu) = N\!\left( \frac{y - \mu}{\nu} \right), $$
where $N(\cdot)$ is the standard normal distribution function. We also set
$$ \nu^2 = \frac{1 - \gamma_{tT}}{\sigma^2 \gamma_{tT}}, \qquad \mu = \frac{\nu^2 \sigma \xi_t}{1 - \gamma_{tT}}. $$
Finally, we obtain:

Proposition 2. The variance-gamma information-based price of a defaultable discount bond with a uniformly-distributed fraction of the principal paid on recovery is given by
$$ S_t = \mathrm e^{-r(T-t)}\, \frac{p_0\, (b-a)^{-1} \big[ N_1(b, \mu, \nu) - N_1(a, \mu, \nu) \big] + p_1\, c\, Q_t}{p_0\, (b-a)^{-1} \big[ N_0(b, \mu, \nu) - N_0(a, \mu, \nu) \big] + p_1\, Q_t}, $$
where
$$ Q_t = f(0, \mu, \nu)\, \exp\!\left( \frac{\sigma c\, \xi_t - \tfrac12 \sigma^2 c^2 \gamma_{tT}}{1 - \gamma_{tT}} \right). $$

Example 3. Next we consider the case when the payoff of an asset at time $T$ is log-normally distributed.
This will hold if $h(x) = \mathrm e^x$ and $X_T \sim \mathrm{Normal}(\mu, \nu^2)$. It will be convenient to look at the slightly more general payoff obtained by setting $h(x) = \mathrm e^{qx}$ with $q \in \mathbb R$. If we recall the identity
$$ \int_{-\infty}^{\infty} \mathrm e^{-A x^2 + B x}\, \mathrm dx = \sqrt{\pi/A}\; \mathrm e^{B^2/4A}, $$
which holds for $A > 0$ and $B \in \mathbb R$, a calculation gives the conditional expectation of $\mathrm e^{q X_T}$ in the form $\exp\!\big( q \hat\mu_t + \tfrac12 q^2 \hat\nu_t^2 \big)$, where
$$ \hat\nu_t^2 = \frac{\nu^2}{1 + \nu^2 \sigma^2 \gamma_{tT} (1-\gamma_{tT})^{-1}}, \qquad \hat\mu_t = \frac{\mu + \nu^2 \sigma \xi_t (1-\gamma_{tT})^{-1}}{1 + \nu^2 \sigma^2 \gamma_{tT} (1-\gamma_{tT})^{-1}}. $$
For $q = 1$, the price is thus given in accordance with Theorem 4 by
$$ S_t = \mathrm e^{-r(T-t)} \exp\!\left( \hat\mu_t + \tfrac12 \hat\nu_t^2 \right), $$
and a calculation leads to the following:

Proposition 3. The variance-gamma information-based price of a financial asset with a log-normally distributed payoff such that $\log(S_{T^-}) \sim \mathrm{Normal}(\mu, \nu^2)$ is given for $t \in (0, T)$ by $S_t = \mathrm e^{-r(T-t)} \exp\!\big( \hat\mu_t + \tfrac12 \hat\nu_t^2 \big)$.

More generally, one can consider the case of a so-called power-payoff derivative, for which the payoff at $T$ is $(S_{T^-})^q$, where $S_{T^-} = \lim_{t\to T} S_t$ is the payoff of the asset priced above in Proposition 3. See [2] for aspects of the theory of power-payoff derivatives. In the present case, if we write $C_t$ for the value of the power-payoff derivative at time $t$, we find that
$$ C_t = \mathrm e^{rt}\, C_0 \exp\!\left( q (\hat\mu_t - \mu) + \tfrac12 q^2 (\hat\nu_t^2 - \nu^2) \right), \quad \text{where} \quad C_0 = \mathrm e^{-rT} \exp\!\left( q\mu + \tfrac12 q^2 \nu^2 \right). $$

Example 4. Next we consider the case where the payoff is exponentially distributed. We let $X_T \sim \exp(\lambda)$, so $\mathbb P[X_T \in \mathrm dx] = \mathbb 1\{x > 0\}\, \lambda\, \mathrm e^{-\lambda x}\, \mathrm dx$, and take $h(x) = x$. A calculation shows that
$$ \int_0^\infty x \exp\!\left( -\lambda x + \frac{\sigma \xi_t x - \tfrac12 \sigma^2 x^2 \gamma_{tT}}{1 - \gamma_{tT}} \right) \mathrm dx = \frac{\mu - N_1(0, \mu, \nu)}{f(0, \mu, \nu)}, $$
where we set
$$ \nu^2 = \frac{1 - \gamma_{tT}}{\sigma^2 \gamma_{tT}}, \qquad \mu = \nu^2 \left( \frac{\sigma \xi_t}{1 - \gamma_{tT}} - \lambda \right), $$
and
$$ \int_0^\infty \exp\!\left( -\lambda x + \frac{\sigma \xi_t x - \tfrac12 \sigma^2 x^2 \gamma_{tT}}{1 - \gamma_{tT}} \right) \mathrm dx = \frac{1 - N_0(0, \mu, \nu)}{f(0, \mu, \nu)}. $$
As a consequence we obtain:

Proposition 4. The variance-gamma information-based price of a financial asset with an exponentially distributed payoff is given by
$$ S_t = \mathrm e^{-r(T-t)}\, \frac{\mu - N_1(0, \mu, \nu)}{1 - N_0(0, \mu, \nu)}, $$
where $\mu$ and $\nu$ are as defined above.

In conclusion, we remark that the variance-gamma information model introduced in this paper can be calibrated to market data as follows. The distribution of the random variable $X_T$ can be inferred by observing the current prices of derivatives whose payoffs are indexed by a strike $K \in \mathbb R$. The information flow-rate parameter $\sigma$ and the underlying shape parameter $m$ can then be inferred from option prices.