Article

Density Reconstructions with Errors in the Data

Erika Gomes-Gonçalves 1, Henryk Gzyl 2 and Silvia Mayoral 1
1
Business Administration, Univ. Carlos III de Madrid, Calle Madrid 126, 28093 Getafe, Spain
2
Centro de Finanzas, IESA, Ave. Iesa San Bernardino, Caracas 1010, Venezuela
*
Author to whom correspondence should be addressed.
Entropy 2014, 16(6), 3257-3272; https://doi.org/10.3390/e16063257
Submission received: 2 May 2014 / Revised: 3 June 2014 / Accepted: 9 June 2014 / Published: 12 June 2014
(This article belongs to the Special Issue Maximum Entropy and Its Application)

Abstract

The maximum entropy method was originally proposed as a variational technique to determine probability densities from the knowledge of a few expected values. The applications of the method beyond its original role in statistical physics are manifold. An interesting feature of the method is its potential to incorporate errors in the data. Here, we examine two possible ways of doing that. The two approaches have different intuitive interpretations, and one of them allows for error estimation. Our motivating example comes from the field of risk analysis, but the statement of the problem might as well come from any branch of the applied sciences. We apply the methodology to the determination of a probability density from a few values of its numerically-determined Laplace transform. This problem can be mapped onto the determination of a probability density on [0, 1] from the knowledge of a few of its fractional moments, known only up to measurement errors stemming from insufficient data.

1. Introduction

An important problem in many applications of probability is the determination of the probability density of a positive random variable when the information available consists of an observed sample. The variable may be, for example, an exit time, a reaction time, accumulated losses or accumulated damage. A standard technique, related to a variety of branches of analysis, consists of the use of the Laplace transform. Sometimes, however, the technique may fail because the transform cannot be determined analytically, as in the case of the lognormal variable. In this regard, see the efforts in [1] to determine the Fourier-Laplace transform of the lognormal variable. One is then led to search for techniques to invert a Laplace transform from a few of its values determined numerically. That is also the reason why we chose a sample from the lognormal as data to test the methods that we propose.

To state our problem: we are interested in a method to obtain a probability density fS(s) from the knowledge of the values of the Laplace transform:

$$E\left[e^{-\alpha_i S}\right] = \mu(\alpha_i), \qquad i = 1, \dots, K. \tag{1}$$

To be specific, the positive random variable S may denote the severity of some kind of losses accumulated during a given time interval, and the density fS(s) is the object that we are after. Due to the importance of this problem for the insurance industry, quite a large amount of effort has been devoted to finding systematic ways of computing fS(s) from the knowledge of the ingredients of some model relating S to more basic quantities, like the frequency of losses and the individual severities. See [2], for example, for a relatively recent update on methods to deal with that problem.

We should mention at the outset that if the Laplace transform E[e^{-αS}] were known as a function on the positive real axis, a variety of methods to determine fS would be available. Among them is the use of maximum entropy techniques, which bypass the need to extend the Laplace transform into a complex half-plane. The standard maximum entropy (SME, for short) method to solve this problem is simple to implement. See [3] for a comparative study of methods (including maximum entropy) that can be used to determine fS when E[e^{-αS}] can be computed analytically. There, we showed that with eight fractional moments (corresponding to eight values of the Laplace transform), we could obtain quite accurate inversions. That is the reason why we consider eight moments in this paper.

However, in many cases, E[e^{-αS}] has to be estimated from observed values s_1, …, s_N of S; that is, the only knowledge that may be available to us is the total loss in a given period. It is at this point that errors come in, because in order to determine μ(α), we have to average over the random sample. If we were instead determining μ(α) by means of some experimental procedure, then an error would come in through the measurement process.

Thus, the problem that we want to address can now be restated as:

$$\text{Find } F_S \text{ such that } E\left[e^{-\alpha_i S}\right] = \int_0^\infty e^{-\alpha_i s}\, dF_S(s) \in C_i, \qquad i = 1, \dots, K, \tag{2}$$

where Ci is some interval enclosing the true value of μ(αi) for i = 1, …, K. These intervals are related to the uncertainty (error) in the data. For us, they will be the statistical confidence intervals; but since we are using them not for the statistical estimation of a mean but as a measure of some experimental error, we adjust the width of the interval to our convenience.

To transform the problem into a fractional moment problem, note that, since S is positive, we may think of Y = e^{-S} as a variable on [0, 1], whose density fY(y) we want to infer from the knowledge of intervals in which its fractional moments fall, that is, from:

$$\int_0^1 y^{\alpha_i} f_Y(y)\, dy \in C_i, \qquad i = 1, \dots, K, \tag{3}$$

where Ci now denotes an interval around the true but unknown moment μ(αi) of fY.

The SME method has been used for a long time to deal with problems like Equation (2). See [4] for a rigorous proof of the basic existence and representation results and for applications in statistical mechanics. See also [5,6] for different rigorous proofs of these results, and [7] for an interesting collection of applications in a large variety of fields.

However, possible extensions of the maximum entropy method to handle errors in the data do not seem to have received much attention, despite their potential applicability. Two such extensions, set in the framework of the method of maximum entropy in the mean as applied to linear inverse problems with convex constraints, were explored in [8]. Here, we want to provide alternative ways of dealing with errors in the data when solving Equation (3), within the framework of the SME method and without bringing in the method of maximum entropy in the mean as in [8]. The difference between the two methods that we analyze lies in the fact that one of them provides us with an estimator of the additive error.

The remainder of the paper is organized as follows. In the next section, we present the two extensions of the SME, and in the third section, we apply them to obtain the probability density fS(s) from the knowledge of the intervals in which the fractional moments of Y = e^{-S} fall. We point out one feature of our simulations at this stage: we consider both a relatively small sample and a larger one. The exact probability density from which the data is sampled is used as a benchmark against which the output of the procedures is compared. We should mention that the methods presented here are a direct alternative to those based on the method of maximum entropy in the mean. For an application of that technique to the determination of a risk measure from the knowledge of mispriced risks, see [9].

2. The Maxentropic Approaches

As mentioned in the Introduction, in each subsection below, we consider a different way of extending the SME. In the first, we present the extension of the method of maximum entropy to include errors in the data, while in the second, we present a version that allows for the estimation of the additive error in the data.

2.1. Extension of the Standard Maxent Approach without Error Estimation

Here, we present an extension of the variational method originally proposed by Jaynes in [10], based on an idea put forward in [11], to solve the (inverse) problem consisting of finding a probability density fY(y) (on [0, 1] in this case) satisfying the following integral constraints:

$$\int_0^1 y^{\alpha_k} f_Y(y)\, dy \in C_k \qquad \text{for } k = 0, \dots, M, \tag{4}$$

where the interval Ck = [ak, bk], around the true but unknown μY(αk), is determined from the statistical analysis of the data for each of the moments. For k = 0 only, we set C0 = {1}, since for α0 = 0, we have μ0 = 1; this takes care of the natural normalization requirement on fY(y). To state the extension, denote by D the class of probability densities g(y) satisfying Equation (4). This class is convex. On it, define the entropy $S(g) = -\int_0^1 g(y)\ln(g(y))\, dy$, whenever the integral is finite (and S(g) = −∞, if not). Now, to solve Equation (4), we extend Jaynes' method ([10]) as follows. The problem now is:

$$\text{Find } g^* = \arg\sup\{ S(g) \mid g \in \mathcal{D} \}. \tag{5}$$

To dig further into this problem, let us introduce some notation. Let g denote any density, and denote by μg(α) the vector of α-moments of g. Set C = C0 × … × CM. Additionally, for c ∈ C, denote by D(c) the collection of densities having μg(α) = c. With these notations, and following the proposal in [10], we carry out the maximization sequentially and restate the previous problem as:

$$\text{Find } g^* = \arg\sup\left\{ \sup\{ S(g) \mid g \in \mathcal{D}(c) \} \;\middle|\; c \in C \right\}. \tag{6}$$

The idea behind the proposal is clear: first, solve a maximum entropy problem for each c ∈ C to determine a g_c^*; then, maximize over c ∈ C to determine the c^* such that g_{c^*}^* yields the maximum entropy S(g_{c^*}^*) over all possible moments in the confidence set.

Invoking the standard argument, we know that when the inner problem has a solution, it is of the type:

$$g_c^*(y) = \exp\left( -\sum_{k=0}^M \lambda_k^* y^{\alpha_k} \right)$$

in which the number of moments M appears explicitly. It is customary to write $e^{-\lambda_0^*} = Z(\lambda^*)^{-1}$, where $\lambda^* = (\lambda_1^*, \dots, \lambda_M^*)$ is an M-dimensional vector. Recall, as well, that the normalization factor is given by:

$$Z(\lambda) = \int_0^1 e^{-\sum_{k=1}^M \lambda_k y^{\alpha_k}}\, dy.$$

With this notation, the generic form of the solution looks like:

$$g_c^*(y) = \frac{1}{Z(\lambda^*)}\, e^{-\sum_{k=1}^M \lambda_k^* y^{\alpha_k}} = e^{-\sum_{k=0}^M \lambda_k^* y^{\alpha_k}}.$$

To complete the description, it remains to specify how the vector λ* can be found. For that, one has to minimize the dual entropy:

$$\Sigma(\lambda, c) = \ln Z(\lambda) + \langle \lambda, c \rangle,$$

where ⟨a, b⟩ denotes the standard Euclidean scalar product. It is also a standard result that:

$$\sup\{ S(g) \mid g \in \mathcal{D}(c) \} = \inf\{ \Sigma(\lambda, c) \mid \lambda \in \mathbb{R}^M \}.$$

With this, the double optimization process can be restated as:

$$\sup\left\{ \sup\{ S(g) \mid g \in \mathcal{D}(c) \} \mid c \in C \right\} = \sup\left\{ \inf\{ \Sigma(\lambda, c) \mid \lambda \in \mathbb{R}^M \} \mid c \in C \right\}.$$

Additionally, invoking the standard minimax argument, we can restate it as:

$$\sup\left\{ \sup\{ S(g) \mid g \in \mathcal{D}(c) \} \mid c \in C \right\} = \inf\left\{ \sup\{ \Sigma(\lambda, c) \mid c \in C \} \mid \lambda \in \mathbb{R}^M \right\}.$$

Now, due to the special form of Σ(λ, c), it suffices to compute sup{⟨λ, c⟩ | c ∈ C}. For that, we make use of the simple-to-verify fact that sup{⟨λ, y⟩ | y ∈ [−1, 1]^M} = ‖λ‖₁. Consider the affine mapping T(c) = Dc + h, where D is diagonal with elements 2/(b_k − a_k) and h_k = −(a_k + b_k)/(b_k − a_k). This maps [a_1, b_1] × … × [a_M, b_M] bijectively onto [−1, 1]^M. With this, it is easy to see that:

$$\delta_C^*(\lambda) \equiv \sup\{ \langle \lambda, c \rangle \mid c \in C \} = \left\| D^{-1}\lambda \right\|_1 - \langle D^{-1}\lambda, h \rangle.$$

Explicitly,

$$\left\| D^{-1}\lambda \right\|_1 = \sum_{i=1}^M \frac{(b_i - a_i)\,|\lambda_i|}{2} \qquad \text{and} \qquad \langle D^{-1}\lambda, h \rangle = -\sum_{i=1}^M \frac{(b_i + a_i)\,\lambda_i}{2}.$$

As a first step towards a solution, we have the following simple result:

Lemma 1. With the notations introduced above, set:

$$\Sigma(\lambda) = \ln(Z(\lambda)) + \delta_C^*(\lambda).$$

Then, Σ(λ) is strictly convex in λ.

Observe that ∂δ_C^*(λ)/∂λ_i is defined except at λ_i = 0, where δ_C^* is sub-differentiable (see [12]). Actually:

$$\frac{\partial \delta_C^*(\lambda)}{\partial \lambda_k} = b_k \ \text{ when } \lambda_k > 0, \qquad \frac{\partial \delta_C^*(\lambda)}{\partial \lambda_k} = a_k \ \text{ when } \lambda_k < 0, \qquad \frac{\partial \delta_C^*(\lambda)}{\partial \lambda_k} \in (a_k, b_k) \ \text{ when } \lambda_k = 0. \tag{11}$$

To close this subsection, we have:

Theorem 1. Suppose that the infimum λ* of Σ(λ) is reached in the interior of the set {λ ∈ ℝ^M | Z(λ) < ∞}. Then, the solution to the maximum entropy problem (6) is:

$$g^*(y) = \frac{1}{Z(\lambda^*)}\, e^{-\sum_{k=1}^M \lambda_k^* y^{\alpha_k}} = e^{-\sum_{k=0}^M \lambda_k^* y^{\alpha_k}}.$$

Due to the computation above, it is clear that:

$$\int_0^1 y^{\alpha_k} g^*(y)\, dy \in [a_k, b_k], \qquad \text{for } k = 1, \dots, M.$$

Comment: This is a rather curious result. Intuitively, at a minimum, we have ∇_λ Σ(λ) = 0; then, according to Equation (11), if all λ_i ≠ 0, the maxentropic density g*(y) has moments equal to one of the end points of the confidence intervals. If all λ_i = 0, the density g*(y) is uniform, and the reconstructed moments $\int_0^1 y^{\alpha_k} g^*(y)\, dy$ may lie anywhere in [a_k, b_k].
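To make the procedure concrete, the following is a minimal numerical sketch of this first method, not the authors' code: the quadrature grid, the choice of optimizer and the names (log_Z, support_C, solve_method_1) are our assumptions. Since Σ(λ) is non-smooth at λ_i = 0, a derivative-free minimizer is used rather than the (sub)gradient of Equation (11).

```python
# Sketch of Section 2.1: minimize Sigma(lambda) = ln Z(lambda) + delta*_C(lambda).
# Assumed inputs: interval endpoints a, b (arrays of length 8), as in Table 1.
import numpy as np
from scipy.optimize import minimize

alphas = 1.5 / np.arange(1, 9)       # alpha_i = 1.5/i, as in Section 3
y = np.linspace(0.0, 1.0, 2001)      # quadrature grid on [0, 1]

def log_Z(lam):
    # Z(lambda) = int_0^1 exp(-sum_k lambda_k y^{alpha_k}) dy, by the trapezoid rule
    return np.log(np.trapz(np.exp(-(y[:, None] ** alphas) @ lam), y))

def support_C(lam, a, b):
    # delta*_C(lambda) = sum_k [ (b_k - a_k)|lambda_k| + (b_k + a_k) lambda_k ] / 2
    return np.sum(((b - a) * np.abs(lam) + (b + a) * lam) / 2)

def solve_method_1(a, b):
    sigma = lambda lam: log_Z(lam) + support_C(lam, a, b)
    res = minimize(sigma, np.zeros(len(alphas)), method="Powell",
                   options={"xtol": 1e-10, "maxiter": 100000})
    lam = res.x
    g = np.exp(-(y[:, None] ** alphas) @ lam)
    return lam, g / np.trapz(g, y)   # maxentropic density g*(y) on [0, 1]
```

The returned density can then be mapped back to a density for S by the change of variables described in the Appendix.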

2.2. Extension of the Standard Maxent Approach with Error Estimation

In this section, we present an extension of the method of maximum entropy that allows us to estimate the errors in the measurement and to circumvent the concluding comments of the previous section, that is, to obtain estimated moments different from the end points of the confidence intervals. Instead of supposing that fY(y) satisfies Equation (4), we proceed differently. To restate the problem, consider the following argument. If we suppose that the measurement error in the determination of the k-th moment lies within the interval Ck = [ak, bk], then the unknown estimate of the measurement error can be written as p_k a_k + (1 − p_k) b_k for appropriate, but unknown, weights p_k, q_k = 1 − p_k. We propose to extend the original problem to (if the extension is not clear, see the Appendix):

$$\int_0^1 y^{\alpha_k} f_Y(y)\, dy + p_k a_k + (1 - p_k)\, b_k = \mu_k \qquad \text{for } k = 1, \dots, M. \tag{13}$$

The idea behind the proposal is clear. In order to obtain the μ_k, either experimentally or numerically, we average over a collection of observations (simulations), and the errors enter each average additively. Thus, the observed value of μ_k consists of the true (unknown) moment, which is to be determined, plus an error term that has to be estimated as well. This time, we search for a density fY(y) with y ∈ [0, 1] and numbers 0 < p_k < 1 (k = 1, …, M), such that Equation (13) holds. This computational simplification is possible when the support of the distribution of errors is bounded. To compress the notation, we write the probability distributions concentrated on {a_k, b_k} as $q_k(dz) = p_k\, \delta_{a_k}(dz) + (1 - p_k)\, \delta_{b_k}(dz)$, so the probability that we are after is a mixture of continuous and discrete distributions. To determine it, we define, on the appropriate space of product probabilities (see the Appendix), the entropy:

$$S(g, p) = -\int_0^1 g(y)\ln(g(y))\, dy - \sum_{k=1}^M \left( p_k \ln p_k + (1 - p_k)\ln(1 - p_k) \right). \tag{14}$$

Needless to say, 0 < pk < 1, for k = 1, …, M. With all of these notations, our problem becomes:

$$\text{Find a probability density } g^*(y) \text{ and numbers } 0 < p_k < 1 \text{ satisfying constraints (13) and } \int_0^1 g(y)\, dy = 1. \tag{15}$$

The usual variational argument of [10], or the more rigorous proofs in [4–6], yields:

$$g^*(y) = \frac{e^{-\sum_{k=1}^M \lambda_k^* y^{\alpha_k}}}{Z(\lambda^*)}, \qquad p_k = \frac{e^{-a_k \lambda_k^*}}{e^{-a_k \lambda_k^*} + e^{-b_k \lambda_k^*}}. \tag{16}$$

Here, the normalization factor Z(λ) is as above. This time, the vector λ* of Lagrange multipliers is found by minimizing the dual entropy:

$$\Sigma(\lambda) = \ln Z(\lambda) + \sum_{k=1}^M \ln\left( e^{-a_k \lambda_k} + e^{-b_k \lambda_k} \right) + \langle \lambda, \mu \rangle. \tag{17}$$

Once λ* is found, the estimator of the measurement error is, as implicit in Equation (13), given by:

$$\epsilon_k = \frac{a_k\, e^{-a_k \lambda_k^*} + b_k\, e^{-b_k \lambda_k^*}}{e^{-a_k \lambda_k^*} + e^{-b_k \lambda_k^*}}.$$

Notice that, although the formal expression for g*(y) is the same as in the first method, the result is different, because λ* is found by minimizing a different functional.

So as not to interrupt the flow of ideas, we compile the basic model behind the results that we just presented, as well as some simple but necessary computations, in the Appendix.
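A corresponding sketch for this second method follows, reusing alphas, y and log_Z from the previous sketch; again, the names and the optimizer choice are our assumptions. Since this dual is smooth, a quasi-Newton method applies.

```python
# Sketch of Section 2.2: minimize the dual entropy of Equation (17) and recover
# the endpoint weights p_k of Equation (16) and the error estimates epsilon_k.
import numpy as np
from scipy.optimize import minimize

def sigma_2(lam, a, b, mu):
    # Sigma(lambda) = ln Z + sum_k ln(e^{-a_k lam_k} + e^{-b_k lam_k}) + <lambda, mu>
    return log_Z(lam) + np.sum(np.log(np.exp(-a * lam) + np.exp(-b * lam))) + lam @ mu

def solve_method_2(a, b, mu):
    res = minimize(sigma_2, np.zeros(len(alphas)), args=(a, b, mu), method="BFGS")
    lam = res.x
    g = np.exp(-(y[:, None] ** alphas) @ lam)
    p = np.exp(-a * lam) / (np.exp(-a * lam) + np.exp(-b * lam))  # weights p_k
    eps = p * a + (1 - p) * b                                     # estimated errors
    return lam, g / np.trapz(g, y), p, eps
```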

3. Numerical Implementations

Here, we suppose that the random variable of interest follows a lognormal distribution with known parameters μ = 1 and σ = 0.1. Even though this is a simple example, it is rather representative of the type of distributions appearing in applications. As said, our data consist of simulated data, and we shall consider two data sets of different sizes.

To produce the two examples, we proceed as follows:

(1)

Simulate two samples {s_1, …, s_N} of sizes N = 200 and N = 1000 from the lognormal distribution.

(2)

For each sample, we compute its α-moments and their confidence intervals using the standard statistical definitions. That is, we compute:

$$\bar{\mu}_i = \frac{1}{N} \sum_{k=1}^N e^{-s_k \alpha_i}, \qquad \alpha_i = 1.5/i, \quad i = 1, \dots, 8,$$
and the confidence intervals as specified below.

(3)

After obtaining each maxentropic density, we use standard statistical tests to measure the quality of the density reconstruction procedure.

Table 1 shows the error intervals, which we take to be the 10% confidence intervals for the mean, obtained using the standard definition, that is, as $\left( \bar{\mu}_i - z_{0.05}\, sd_i/\sqrt{N},\ \bar{\mu}_i + z_{0.05}\, sd_i/\sqrt{N} \right)$, where $sd_i$ is the sample standard deviation and $\bar{\mu}_i$ the sample mean of $Y^{\alpha_i}$, for the simulated samples of sizes 200 and 1000. A sketch of this step is given below.
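A sketch of this data-preparation step, under our reading of the text; the seed is arbitrary and the quantile z_{0.05} is obtained from scipy.stats.norm:

```python
# Sketch of steps (1)-(2): sample S ~ lognormal(mu=1, sigma=0.1), form the
# fractional moments of Y = e^{-S} and the z-based intervals of Table 1.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)                 # arbitrary seed
N = 1000                                       # or N = 200 for the smaller sample
s = rng.lognormal(mean=1.0, sigma=0.1, size=N)

alphas = 1.5 / np.arange(1, 9)                 # alpha_i = 1.5/i
Ya = np.exp(-np.outer(s, alphas))              # Y^{alpha_i} = e^{-alpha_i s}
mu_bar = Ya.mean(axis=0)                       # sample fractional moments
half = norm.ppf(0.95) * Ya.std(axis=0, ddof=1) / np.sqrt(N)
a, b = mu_bar - half, mu_bar + half            # error intervals [a_i, b_i]
```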

In Table 2, we list the moments of S for the two sample sizes. As mentioned before, the error intervals are the only inputs for the maxentropic method without error estimation, whereas the moments are needed for both the SME and the maxentropic method with error estimation. Recall, as well, that the second method forces the estimation error in the i-th moment to lie in the corresponding error interval, which is centered at the observed sample moment.

3.1. Reconstruction without Error Estimation

We now present the results of implementing the first approach, namely the density reconstruction without estimating the additive noise.

The two panels of Figure 1 display the real (true) density from which the data was sampled, the histogram that was obtained, the density reconstructed according to the SME method, and the density obtained according to the first extension of the SME (labeled SMEE). The left panel contains the results obtained for a sample of size 200, whereas the right panel shows the reconstruction obtained for a sample of size 1000.

Even though the moments of the density obtained with the SME method coincide with the empirical moments, the moments of the density obtained by the first reconstruction method do not need to coincide with the sample moments. They only need to fall within (or at the boundary of) the error interval, which is centered at the sample moments. In Table 3, we list the moments that those densities determine for each sample size.

Let us now run some simple quality-of-reconstruction tests. An experimentalist would be most interested in the comparison of the histogram (real data) to the reconstructed density. However, as we have the real density from which the data was sampled, we can perform three comparisons. We compute the actual L1 and L2 distances between the densities, and those between the densities and the histograms. Admittedly, the latter is a bin-dependent computation and not a truly good measure of the quality of reconstruction, but we carry it out as a consistency test.

In Table 4, we display the results of the computations. The distances between the continuous densities are computed using standard numerical integrators and the distances between the empirical densities and the continuous densities according to:

$$L_1 = \sum_{k=0}^{M-1} \int_{b_k}^{b_{k+1}} \left| f(s) - f_e(s) \right| ds + \int_{b_M}^{\infty} \left| f(s) \right| ds,$$

$$L_2 = \left( \sum_{k=0}^{M-1} \int_{b_k}^{b_{k+1}} \left( f(s) - f_e(s) \right)^2 ds + \int_{b_M}^{\infty} f(s)^2\, ds \right)^{1/2},$$

where the b_k are the positions of the bin edges, which enter as limits of integration. Note that the distances between the true density and the maxentropic reconstructions are much smaller than those between the maxentropic (or the true) density and the histogram, and that the distances of the true density and of the maxentropic densities to the histogram are similar. Thus, the reconstruction methods are performing well.
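As an illustration, the bin-wise distances above may be evaluated as follows; the function name, the truncation point s_max for the tail integral and the grid size are our choices, and the empirical density is treated as piecewise constant on the bins.

```python
# Sketch of the L1/L2 distances between a density f and a histogram given by
# its bin edges and (density-normalized) heights; beyond the last edge f_e = 0.
import numpy as np

def lp_distances(f, edges, heights, s_max=10.0, pts=4001):
    s = np.linspace(edges[0], s_max, pts)      # s_max truncates the tail integral
    fe = np.zeros_like(s)
    for lo, hi, h in zip(edges[:-1], edges[1:], heights):
        fe[(s >= lo) & (s < hi)] = h           # piecewise-constant empirical density
    diff = f(s) - fe
    return np.trapz(np.abs(diff), s), np.sqrt(np.trapz(diff ** 2, s))
```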

Another measure of the quality of reconstruction is given by the L1 and L2 distances between cumulative distribution functions. A simple way to compute them, which accommodates the histogram as well, is given by:

$$MAE = \frac{1}{N} \sum_{j=1}^N \left| F(s_j) - F_N(s_j) \right|, \qquad RMSE = \sqrt{ \frac{1}{N} \sum_{j=1}^N \left( F(s_j) - F_N(s_j) \right)^2 },$$

where, without loss of generality, we may take the s_j to be the ordered values of the sample. To distinguish them from the standard distances, MAE stands for "mean average error" and RMSE for "root mean square error". The results are displayed in Table 5.
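A sketch of this computation, with the empirical CDF evaluated at the ordered sample values; the usage comment compares against the true lognormal CDF of Section 3 in scipy's parametrization (s = σ, scale = e^μ):

```python
# MAE and RMSE between a fitted CDF F and the empirical CDF F_N of the sample.
import numpy as np
from scipy.stats import lognorm

def mae_rmse(F, sample):
    s = np.sort(sample)
    FN = np.arange(1, len(s) + 1) / len(s)     # empirical CDF at the ordered values
    diff = F(s) - FN
    return np.mean(np.abs(diff)), np.sqrt(np.mean(diff ** 2))

# e.g., against the true density of Section 3:
# mae, rmse = mae_rmse(lognorm(s=0.1, scale=np.exp(1.0)).cdf, sample)
```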

It is intuitively clear and confirmed in the results displayed in the table that the larger the size of the sample, the better the estimation results for both methods, that is, the SME and SMEE.

In Table 6, we can see details of the convergence of the methods according to the sample size. Clearly, the SMEE method requires fewer iterations and less machine time, but it stops at a larger value of the gradient norm.

To close, let us consider Table 3 once more. We commented after Theorem 1 that when the multipliers are non-zero, the reconstructed moments are the end points of the confidence intervals. This is not exactly borne out by the results in that table: at the minimum, the norm of the gradient is ∼10^−4 and not exactly zero, and this tiny error explains the slight differences. Moreover, since the first method yields the boundary points of the confidence intervals as reconstructed moments, the corresponding maxentropic density is expected to differ more from the true one than a density reconstructed from the center of the confidence interval.

3.2. Density Reconstruction with Error Estimation

Recall that this time, we are provided with moments measured (estimated) with error and that we have prior knowledge about the range of the error about the moment. We are now interested in determining a density, the moments that it determines and an estimate of the additive measurement error.

In each panel of Figure 2, we again display four plots: along with the true (real) density and the histogram generated by sampling from it, we show the maxentropic density determined by the original sample moments (labeled SME) and the maxentropic density provided by the second procedure (labeled SMEE), for each error interval and each sample size. Visually, the SMEE densities are closer to the SME density this time.

This time, the optimal Lagrange multipliers λ_i^*, i = 1, …, 8, determine both the maximum entropy density and the weights of the endpoints of the confidence intervals used in the estimation of the noise. With the multipliers, one obtains the maximum entropy density, from which estimates of the true moments can be computed as $\hat{\mu}_k = \int_0^1 y^{\alpha_k} f_Y^*(y)\, dy$. The values obtained for each type of confidence interval and for each sample size are presented in Table 7.

Table 8 displays the estimated errors, as well as the corresponding weights of the endpoints of the error intervals. Keep in mind that the estimated error in the determination of each moment and the estimated moment itself add up to the measured moment.

To measure the quality of the reconstructions, we again compute the L1 and L2 distances between densities, as well as the distances between distribution functions. These are displayed in Tables 9 and 10 below. Again, the distances between densities and histograms depend on the bin sizes. For a sample size of 1000, the results of the SME and SMEE methods nearly coincide and are closer to the lognormal curve than the histogram (simulated data).

To finish, consider Table 11, which shows the details of the convergence of the SMEE versus the SME in the second case. The two leftmost columns compare the SMEE against the SME for a sample of size 200, whereas the two rightmost columns compare the performance for a sample of size 1000. All things considered, it seems that the second method, that is, the simultaneous determination of the density and the measurement errors, has the better performance.

4. Concluding Remarks

We presented and compared two possible ways of incorporating errors in the data into the usual maximum entropy formalism to determine probability densities. They correspond to two different demands about the resulting output of the method.

The first method goes as follows: for each c in the constraint space C, one solves a maximum entropy problem to obtain a density g*(c) having c as a constraint. Then, one varies c to find a c* such that the entropy S(g*(c*)) is maximal among all the S(g*(c)). Standard duality theory is invoked along the way to obtain g*(c*) without having to actually determine c*.

The second method uses both the moments and a range for the error as input data. In the first case, we simply obtain a density, whereas in the other, we obtain both a density, as well as an estimate of the error.

Recall that the only input needed for the first method is a constraint range, and that, to get an idea of the performance of both procedures, we compared them with the SME supplied with moments estimated from a large sample. We shall explore a more realistic case (a small data set) elsewhere. In the example that we considered, both methods provide satisfactory results. The minimization procedure converges more slowly, and the norm of the gradient (which is a measure of the reconstruction error) is much larger, for the first of them. This is probably due to the fact that the Hessian matrix has very small eigenvalues near the minimum, making the function too flat there. Actually, both Hessians have very small determinants, but the one corresponding to the first method is smaller by a factor of about 10^−20.

This is reflected in the reconstruction error, as measured by the size of the gradient of the dual entropy Σ(λ) at the minimum, and is the price paid for the flexibility of the method. Even though the L1 and L2 differences between the empirical density and the maxentropic densities are small for both procedures, the second method yields a better fit and, in addition, an estimate of the measurement error.

To finish, we mention that in [13], we present a detailed application of the techniques developed above to a problem of relevance in risk management.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

In the first subsection, we complete the basic modeling process missing from Section 2.2; we then collect some elementary computations useful for the minimization procedures.

A. Further Mathematical Details

To put the error estimation procedure into a model, consider the space Ω := [0, 1] × C, equipped with the obvious σ-algebra of Borel subsets, and the reference measure $Q_0(dy, d\xi) = dy \otimes \prod_{i=1}^M \left( \delta_{a_i}(d\xi_i) + \delta_{b_i}(d\xi_i) \right)$, where δ_a denotes the unit point mass at a. On Ω, we define the "generalized" moments:

$$\phi_i : \Omega \to \mathbb{R}, \qquad \phi_i(y, \xi) = y^{\alpha_i} + \xi_i,$$

where ξ_i is the i-th coordinate of ξ. Furthermore, when P ≪ Q_0, there exist a density f(y) and numbers p_k, for k = 1, …, M, such that $P(dy, d\xi) = f(y)\, dy \otimes \prod_{i=1}^M \left( p_i\, \delta_{a_i}(d\xi_i) + (1 - p_i)\, \delta_{b_i}(d\xi_i) \right)$. The maximum entropy problem can now be stated as:

$$\text{Find } P \ll Q_0 \text{ maximizing } S_{Q_0}(P), \text{ given by Equation (14), subject to } E_P[\phi_i] = \mu_i, \quad i = 1, \dots, M. \tag{19}$$

Now, the procedure is standard, and the result is stated in the following easy-to-prove, but important, theorem:

Theorem 2. Suppose that the infimum λ* of Σ(λ) given by Equation (17) is reached in the interior of the set {λ ∈ ℝ^M | Z(λ) < ∞}. Then, the solution to the maximum entropy problem Equation (19) is given by Equation (16).

Proof. All it takes is to differentiate (17) with respect to each λ_i, equate the derivatives to zero, and read off the desired conclusion.

Comment: This is a rather diluted version of the general result presented in [4] or [10], but is enough to keep us going.

In the next two subsections, we provide the explicit computation of the gradient of Σ(λ) for those who use gradient-based methods for its minimization.

B. Derivative of Σ(λ) when Reconstructing with Data in a Confidence Interval

Having determined the confidence interval [a_i, b_i] for each i = 1, …, 8, the next step is to minimize Σ(λ). For the first method, in case of need, here is the derivative; it invokes Equation (11):

$$\frac{\partial \Sigma(\lambda)}{\partial \lambda_i} = -\int_0^1 y^{\alpha_i}\, \frac{e^{-\sum_{k=1}^8 \lambda_k y^{\alpha_k}}}{Z(\lambda)}\, dy + k_i,$$

where k_i = b_i if λ_i > 0 and k_i = a_i if λ_i < 0; in the rare case that λ_i = 0, choose k_i uniformly at random in (a_i, b_i). Additionally, as above, $Z(\lambda) = \int_0^1 e^{-\sum_{i=1}^8 \lambda_i y^{\alpha_i}}\, dy$.

Once the minimizer λ* has been found, the maxentropic density (in the transformed variable) is:

$$g^*(y) = \frac{e^{-\sum_{i=1}^8 \lambda_i^* y^{\alpha_i}}}{Z(\lambda^*)},$$

after which the change of variables f*(s) = e^{-s} g*(e^{-s}) recovers the density of the original variable S.
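In code, assuming y and g hold the grid and the maxentropic density from the sketches of Section 2, this change of variables may be written as:

```python
# Recover f_S from g* on [0, 1] via f_S(s) = e^{-s} g*(e^{-s}).
import numpy as np
from scipy.interpolate import interp1d

g_interp = interp1d(y, g, bounds_error=False, fill_value=0.0)

def f_S(s):
    return np.exp(-s) * g_interp(np.exp(-s))
```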

C. Derivative of Σ(λ) when Reconstructing with Error Estimation

This time, the derivative of Σ(λ) is a bit different:

$$\frac{\partial \Sigma(\lambda)}{\partial \lambda_i} = -\int_0^1 y^{\alpha_i}\, \frac{e^{-\sum_{k=1}^8 \lambda_k y^{\alpha_k}}}{Z(\lambda)}\, dy - \frac{a_i\, e^{-a_i \lambda_i} + b_i\, e^{-b_i \lambda_i}}{e^{-a_i \lambda_i} + e^{-b_i \lambda_i}} + \mu_i.$$

Once the minimizing λ* has been found, the routine is the same as above. That is, use Equation (16) to obtain the density and plot it along with the results obtained in the previous sections.

References

1. Leipnik, R.B. On lognormal random variables: I. The characteristic function. J. Aust. Math. Soc. Ser. B 1991, 32, 327–347.
2. Panjer, H. Operational Risk: Modeling and Analytics; John Wiley & Sons: New York, NY, USA, 2006.
3. Gzyl, H.; Novi-Inverardi, P.L.; Tagliani, A. A comparison of numerical approaches to determine the severity of losses. J. Oper. Risk 2013, 8, 3–15.
4. Cherny, A.; Maslov, V. On minimization and maximization of entropy functionals in various disciplines. Theory Probab. Appl. 2003, 17, 447–464.
5. Csiszar, I. I-divergence geometry of probability distributions and minimization problems. Ann. Probab. 1975, 3, 148–158.
6. Csiszar, I. Generalized I-projection and a conditional limit theorem. Ann. Probab. 1984, 12, 768–793.
7. Kapur, J.N. Maximum Entropy Models in Science and Engineering; Wiley Interscience: New York, NY, USA, 1996.
8. Gzyl, H.; Velásquez, Y. Linear Inverse Problems: The Maximum Entropy Connection; World Scientific: Singapore, 2011.
9. Gzyl, H.; Mayoral, S. A method for determining risk aversion functions from uncertain market prices of risk. Insur. Math. Econ. 2010, 47, 84–89.
10. Jaynes, E.T. Information theory and statistical physics. Phys. Rev. 1957, 106, 620–630.
11. Gamboa, F. Minimisation de l'information de Kullback et minimisation de l'entropie sous une contrainte quadratique [Minimization of the Kullback information and minimization of the entropy under a quadratic constraint]. C. R. Acad. Sci. Paris, Sér. I Math. 1988, 306, 425–427. (In French)
12. Borwein, J.M.; Lewis, A.S. Convex Analysis and Nonlinear Optimization; Springer-Verlag: New York, NY, USA, 2000.
13. Gomes, E. (Univ. Carlos III de Madrid, Getafe, Spain); Gzyl, H. (IESA, Caracas, Venezuela); Mayoral, S. (Univ. Carlos III de Madrid, Getafe, Spain). A maxentropic approach to determine operational risk losses. In preparation, 2014.
Figure 1. Histograms and true and maxentropic densities for different sample sizes. (a) Results with a sample of size 200; (b) results with a sample of size 1000.
Figure 2. Density of the individual losses obtained by SME and SME with errors (SMEE) for different sample sizes. (a) Results for a sample size of 200; (b) results for a sample size of 1000.
Table 1. Error intervals for S for sample sizes of 200 and 1000.

μ_i    Sample size 200      Sample size 1000
μ_1    [0.0175, 0.0177]     [0.0181, 0.0182]
μ_2    [0.1302, 0.1307]     [0.1318, 0.1321]
μ_3    [0.2559, 0.2565]     [0.2578, 0.2581]
μ_4    [0.3592, 0.3599]     [0.3611, 0.3614]
μ_5    [0.4405, 0.4412]     [0.4423, 0.4426]
μ_6    [0.5048, 0.5054]     [0.5065, 0.5068]
μ_7    [0.5565, 0.5570]     [0.5580, 0.5583]
μ_8    [0.5987, 0.5992]     [0.6001, 0.6004]
Table 2. Moments of S for different sample sizes.

Size    μ_1      μ_2      μ_3      μ_4      μ_5      μ_6      μ_7      μ_8
200     0.0176   0.1304   0.2562   0.3596   0.4409   0.5051   0.5568   0.5990
1000    0.0181   0.1319   0.2579   0.3612   0.4424   0.5066   0.5581   0.6002
Table 3. Moments of the maxentropic densities reconstructed according to the first procedure.

Size    μ_1      μ_2      μ_3      μ_4      μ_5      μ_6      μ_7      μ_8
200     0.0176   0.1302   0.2559   0.3592   0.4405   0.5048   0.5565   0.5987
1000    0.0181   0.1319   0.2579   0.3612   0.4424   0.5066   0.5581   0.6002
Table 4. L1 and L2 distances between densities and between histograms and densities. SMEE, extended standard maximum entropy.

             Histogram vs. True density   Histogram vs. Maxent   True density vs. Maxent
Approach     L1-norm   L2-norm            L1-norm   L2-norm      L1-norm   L2-norm
SMEE-200     0.1599    0.1855             0.1504    0.1753       0.0527    0.0583
SME-200      0.1599    0.1855             0.1449    0.1727       0.0668    0.0761
SMEE-1000    0.1042    0.1077             0.1052    0.1158       0.0619    0.0577
SME-1000     0.1042    0.1077             0.0973    0.1044       0.0307    0.0289
Table 5. The MAE and RMSE values between the reconstructed densities, the original histogram and the densities.

             Histogram vs. Real density   Histogram vs. Maxent   Real density vs. Maxent
Approach     MAE       RMSE               MAE       RMSE         MAE       RMSE
SMEE-200     0.0158    0.0056             0.0104    0.0029       0.0106    0.0027
SME-200      0.0158    0.0056             0.0089    0.0021       0.0105    0.0027
SMEE-1000    0.0064    0.0018             0.0072    0.0024       0.0104    0.0044
SME-1000     0.0064    0.0018             0.0043    0.0009       0.0053    0.0011
Table 6. Convergence of the method of Section 2.1 (SMEE) and of the SME for different sample sizes.

Approach        SMEE (200)     SME (200)      SMEE (1000)    SME (1000)
time            1.79 min       4.33 min       1.40 min       3.25 min
iterations      693            1380           512            1223
min gradient    1.06 × 10^−4   1.86 × 10^−7   8.81 × 10^−5   1.72 × 10^−8
Table 7. Moments determined by the second procedure for samples of 200 and 1000.

Size    μ_1      μ_2      μ_3      μ_4      μ_5      μ_6      μ_7      μ_8
200     0.0176   0.1304   0.2561   0.3595   0.4408   0.5051   0.5567   0.5989
1000    0.0181   0.1319   0.2579   0.3612   0.4424   0.5066   0.5581   0.6002
Table 8. Weights and estimated errors (columns correspond to k = 1, …, 8).

Size 200:
p_k   0.5143         0.5003         0.4985        0.4979         0.4975        0.4973        0.4972        0.4970
ε_k   −1.6 × 10^−6   −1.4 × 10^−7   8.1 × 10^−7   1.2 × 10^−6    1.4 × 10^−6   1.4 × 10^−6   1.5 × 10^−6   1.4 × 10^−6

Size 1000:
p_k   0.5016         0.5071         0.5036        0.5008         0.4987        0.4972        0.4960        0.4951
ε_k   −1 × 10^−7     −1.5 × 10^−6   −1 × 10^−6    −2.4 × 10^−7   3.6 × 10^−7   7.8 × 10^−7   1 × 10^−6     1.2 × 10^−6
Table 9. L1 and L2 distances between the reconstructed densities, the original histogram and the densities.

             Histogram vs. Real density   Histogram vs. Maxent   Real density vs. Maxent
Approach     L1-norm   L2-norm            L1-norm   L2-norm      L1-norm   L2-norm
SMEE-200     0.1599    0.1855             0.1474    0.1748       0.0761    0.0804
SME-200      0.1599    0.1855             0.1449    0.1727       0.0668    0.0761
SMEE-1000    0.1042    0.1077             0.0977    0.1051       0.0325    0.0301
SME-1000     0.1042    0.1077             0.0973    0.1044       0.0307    0.0289
Table 10. MAE and RMSE distances between the reconstructed densities, the original histogram and the densities.

             Histogram vs. Real density   Histogram vs. Maxent   Real density vs. Maxent
Approach     MAE       RMSE               MAE       RMSE         MAE       RMSE
SMEE-200     0.0158    0.0056             0.0094    0.0023       0.0134    0.0037
SME-200      0.0158    0.0056             0.0089    0.0021       0.0105    0.0027
SMEE-1000    0.0064    0.0018             0.0043    0.0009       0.0057    0.0013
SME-1000     0.0064    0.0018             0.0043    0.0009       0.0053    0.0011
Table 11. Convergence of the methods used for different sample sizes.

Approach        SMEE (200)      SME (200)      SMEE (1000)     SME (1000)
time            37.48 s         4.33 min       18.95 s         3.25 min
iterations      220             1380           112             1223
min gradient    1.07 × 10^−10   1.86 × 10^−7   5.82 × 10^−10   1.72 × 10^−8
