
A Universal, Simple, Circular Statistics-Based Estimator of α for Symmetric Stable Family

by 1,2,3 and 4,*
1 Applied Statistics Unit, Indian Statistical Institute, Kolkata 700108, India
2 Department of Population Health Sciences, MCG, Augusta University, Augusta, GA 30912, USA
3 Department of Statistics, Middle East Technical University, 06800 Ankara, Turkey
4 Department of Statistics, Midnapore College (Autonomous), Midnapore 721101, India
* Author to whom correspondence should be addressed.
J. Risk Financial Manag. 2019, 12(4), 171; https://doi.org/10.3390/jrfm12040171
Received: 15 October 2019 / Revised: 14 November 2019 / Accepted: 18 November 2019 / Published: 23 November 2019
(This article belongs to the Special Issue Financial Statistics and Data Analytics)

Abstract

The aim of this article is to obtain a simple and efficient estimator of the index parameter of the symmetric stable distribution that holds universally, i.e., over the entire range of the parameter. We appeal to directional statistics and the classical result on wrapping of a distribution to obtain the wrapped stable family of distributions. The estimator obtained performs better than the existing estimators in the literature in terms of both consistency and efficiency. The estimator is applied to model some real-life financial datasets. A mixture of normal and Cauchy distributions is compared with the stable family of distributions when the estimate of the parameter α lies between 1 and 2. A similar approach can be adopted when α (or its estimate) belongs to (0.5, 1); in this case, one may compare with a mixture of Laplace and Cauchy distributions. A new measure of goodness of fit is proposed for the above family of distributions.
Keywords: index parameter; estimation; wrapped stable; Hill estimator; characteristic function-based estimator; asymptotic; efficiency

1. Introduction

Our motivation in this paper is to obtain a universal and efficient estimator of the tail index parameter α of the symmetric stable distribution (explained in Section 2; see Nolan (2003)). This is achieved by appealing to methods available in circular statistics. We recall that there exist two popular estimators of α in the literature. The Hill estimator, proposed by Hill (1975), uses a linear function of the order statistics; however, it can be used to estimate α only over [1, 2]. Furthermore, it is “extremely sensitive” to the choice of k (explained in Section 6), even for other values of α. Hill (1975) and Dufour and Kurz-Kim (2010) pointed out other drawbacks of the Hill estimator. The other estimator, proposed by Anderson and Arnold (1993), is based on the characteristic function approach. However, this estimator cannot be obtained in closed form and has to be computed numerically. Furthermore, neither its asymptotic distribution nor its variance and bias are available in the literature.
Our approach in this paper appeals to circular statistics and is based on the method of trigonometric moments as in SenGupta (1996) and later also discussed in Jammalamadaka and SenGupta (2001). This stems from the very useful result which presents a closed analytical form of the density of a wrapped (circular) stable distribution obtained by wrapping the corresponding stable distribution which need not have any closed form analytic representation for arbitrary α . This result shows that α is preserved as the same parameter even after the wrapping. Furthermore, this paper presents a goodness of fit test based on the wrapped probability density function, which may be used as a necessary condition to ascertain the fit of the stable distribution. We exploit this approach with the real life examples. This estimator has a simple and elegant closed form expression. It is asymptotically normally distributed with mean α and variance available in a closed analytical form. Furthermore, from extensive simulations under parameter configurations encountered in financial data, it is exhibited that this new estimator outperforms both the estimators mentioned above almost uniformly in the entire comparable support of α . In Section 2, the probability density function of the wrapped stable distribution and some associated notations are introduced. The moment estimator of the index parameter is also defined in this section. Section 3 presents the derivation of the asymptotic distribution of the moment estimator defined in Section 2. In Section 4, an improved estimator of the index parameter is obtained. Section 5 shows the derivation of the asymptotic distribution of the improved estimator using the multivariate delta method. In addition, the asymptotic variance is computed for various values of the parameters through simulation. 
In Section 6, the performance of the improved estimator is compared with those of the Hill estimator and the characteristic function-based estimator through their root mean square errors, obtained by simulation. In Section 7, the procedure for the various computations is presented. In Section 8, the proposed estimator is applied to some real-life data; we also conclude there with remarks on the performance of the various estimators and some comments on future scope. Finally, the tables showing the various computations and the figures on the applications to data are given in Appendix A, Appendix B and Appendix C.

2. The Trigonometric Moment Estimator

The regular symmetric stable distribution is defined through its characteristic function given by
$$\varphi(t) = \exp\left(it\mu - |\sigma t|^{\alpha}\right)$$
where μ is the location parameter, σ is the scale parameter (which we take as 1), and α is the index or shape parameter of the distribution. Here, without loss of generality, we take μ = 0.
From the stable distribution, we can obtain the wrapped stable distribution (the process of wrapping is explained in Jammalamadaka and SenGupta (2001)). Suppose $\theta_1, \theta_2, \ldots, \theta_m$ is a random sample of size m drawn from the wrapped stable distribution (given in Jammalamadaka and SenGupta (2001)), whose probability density function is given by
$$f(\theta; \rho, \alpha, \mu) = \frac{1}{2\pi}\left[1 + 2\sum_{p=1}^{\infty} \rho^{p^{\alpha}} \cos p(\theta - \mu)\right], \quad 0 < \rho \le 1,\; 0 < \alpha \le 2,\; 0 \le \mu < 2\pi \quad (1)$$
It is known in general from Jammalamadaka and SenGupta (2001) that the characteristic function of θ at the integer p is defined as
$$\psi_\theta(p) = E[\exp(ip(\theta - \mu))] = \alpha_p + i\beta_p$$
where $\alpha_p = E\cos p(\theta - \mu)$ and $\beta_p = E\sin p(\theta - \mu)$.
Furthermore, from Jammalamadaka and SenGupta (2001), it is known that, for the p.d.f. given by Equation (1),
$$\psi_\theta(p) = \rho^{p^{\alpha}} \quad (2)$$
Hence, $E\cos p(\theta - \mu) = \rho^{p^{\alpha}}$ and $E\sin p(\theta - \mu) = 0$.
We define
$$\bar{C}_1 = \frac{1}{m}\sum_{i=1}^{m}\cos\theta_i, \quad \bar{C}_2 = \frac{1}{m}\sum_{i=1}^{m}\cos 2\theta_i, \quad \bar{S}_1 = \frac{1}{m}\sum_{i=1}^{m}\sin\theta_i, \quad \bar{S}_2 = \frac{1}{m}\sum_{i=1}^{m}\sin 2\theta_i$$
Then, we note that $\bar{R}_1 = \sqrt{\bar{C}_1^2 + \bar{S}_1^2}$ and $\bar{R}_2 = \sqrt{\bar{C}_2^2 + \bar{S}_2^2}$.
By the method of trigonometric moments estimation, equating $\bar{R}_1$ and $\bar{R}_2$ to the corresponding functions of the theoretical trigonometric moments, we get the estimator of the index parameter α as (see SenGupta (1996)):
$$\hat{\alpha} = \frac{1}{\ln 2}\,\ln\!\left(\frac{\ln \bar{R}_2}{\ln \bar{R}_1}\right)$$
Equivalently, $\bar{R}_j = \frac{1}{m}\sum_{i=1}^{m}\cos j(\theta_i - \bar{\theta})$, $j = 1, 2$, where $\bar{\theta} = \arctan(\bar{S}_1/\bar{C}_1)$ is the mean direction. Note that $\bar{R}_1 \equiv \bar{R}$.
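As a quick illustration, the estimator above is a few lines of code. The sketch below (function names are our own) draws a wrapped Cauchy sample, i.e., a wrapped stable sample with α = 1, and recovers α from the first two mean resultant lengths:

```python
import numpy as np

def alpha_hat(theta):
    """Trigonometric moment estimator of the index parameter alpha
    from a wrapped sample theta (in radians)."""
    # First and second trigonometric moments
    C1, S1 = np.mean(np.cos(theta)), np.mean(np.sin(theta))
    C2, S2 = np.mean(np.cos(2 * theta)), np.mean(np.sin(2 * theta))
    R1 = np.hypot(C1, S1)  # first mean resultant length
    R2 = np.hypot(C2, S2)  # second mean resultant length
    return np.log(np.log(R2) / np.log(R1)) / np.log(2)

# Wrapping a standard Cauchy sample gives a wrapped stable sample with alpha = 1
rng = np.random.default_rng(42)
theta = np.mod(rng.standard_cauchy(200_000), 2 * np.pi)
print(alpha_hat(theta))  # close to 1
```

Note that the estimator needs only sample means of sines and cosines, which is what makes it so inexpensive compared with the numerical alternatives discussed later.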
We consider two special cases.

2.1. Special Case 1: μ = 0 , σ = 1

We now consider the case treated by Anderson and Arnold (1993), specifically μ = 0 and σ = 1, and hence the concentration parameter is ρ = exp(−1), as both parameters are known. This case may arise when one has historical data or prior information on the scale parameter. In such a case, the probability density function reduces to
$$f(\theta; \alpha) = \frac{1}{2\pi}\left[1 + 2\sum_{p=1}^{\infty} e^{-p^{\alpha}} \cos p\theta\right], \quad 0 < \alpha \le 2$$
In addition, by the method of trigonometric moments estimation, the estimator of the index parameter α is given by
$$\hat{\alpha}_1 = \frac{\ln(-\ln \bar{C}_2)}{\ln 2}$$

2.2. Special Case 2: μ = 0 , σ Unknown

Next, we consider the more general case when μ = 0 and σ is unknown; the concentration parameter is then estimated by $\hat{\rho} = \bar{R}_1$. This case is especially useful in many real-life applications; for example, for price changes in financial data, μ = 0 is a standard assumption. In such a case, the probability density function reduces to
$$f(\theta; \alpha) = \frac{1}{2\pi}\left[1 + 2\sum_{p=1}^{\infty} \rho^{p^{\alpha}} \cos p\theta\right], \quad 0 < \alpha \le 2$$
In addition, by the method of trigonometric moments estimation, the estimator of the index parameter α is given by
$$\hat{\alpha}_2 = \frac{1}{\ln 2}\,\ln\!\left(\frac{\ln \bar{C}_2}{\ln \bar{C}_1}\right)$$
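Both special-case estimators are equally direct to compute. A sketch (our own naming), exercised on a standard Cauchy sample (α = 1, σ = 1), a case to which both formulas apply:

```python
import numpy as np

def alpha1_hat(theta):
    """Special case 1 (mu = 0, sigma = 1, so rho = exp(-1)):
    estimate alpha from the second cosine moment alone."""
    C2 = np.mean(np.cos(2 * theta))
    return np.log(-np.log(C2)) / np.log(2)

def alpha2_hat(theta):
    """Special case 2 (mu = 0, sigma unknown):
    estimate alpha from the first two cosine moments."""
    C1, C2 = np.mean(np.cos(theta)), np.mean(np.cos(2 * theta))
    return np.log(np.log(C2) / np.log(C1)) / np.log(2)

rng = np.random.default_rng(7)
theta = np.mod(rng.standard_cauchy(200_000), 2 * np.pi)
print(alpha1_hat(theta), alpha2_hat(theta))  # both close to 1
```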
As is also seen in Anderson and Arnold (1993), for financial data the log-ratio transformation makes the location parameter of the transformed variable zero. Hence, the case μ ≠ 0 was not considered by Anderson and Arnold (1993), nor, accordingly, by us for the comparison made in this paper.

3. Derivation of the Asymptotic Distribution of the Moment Estimator

Lemma 1.
$$\sqrt{m}\,(T_m - \mu) \xrightarrow{L} N_4(0, \Sigma)$$
where $T_m = (\bar{C}_1, \bar{C}_2, \bar{S}_1, \bar{S}_2)'$, μ is the mean vector given by
$$\mu = \left(\rho\cos\mu_0,\; \rho^{2^{\alpha}}\cos 2\mu_0,\; \rho\sin\mu_0,\; \rho^{2^{\alpha}}\sin 2\mu_0\right)'$$
and Σ is the dispersion matrix given by
$$\Sigma = \begin{pmatrix} A & B & C & D \\ B & E & F & G \\ C & F & H & I \\ D & G & I & J \end{pmatrix}$$
where
$$A = \frac{\rho^{2^{\alpha}}\cos 2\mu_0 + 1 - 2\rho^2\cos^2\mu_0}{2}, \qquad B = \frac{\rho\cos\mu_0 + \rho^{3^{\alpha}}\cos 3\mu_0 - 2\rho^{2^{\alpha}+1}\cos\mu_0\cos 2\mu_0}{2}$$
$$C = \frac{\rho^{2^{\alpha}}\sin 2\mu_0 - 2\rho^2\cos\mu_0\sin\mu_0}{2}, \qquad D = \frac{\rho^{3^{\alpha}}\sin 3\mu_0 + \rho\sin\mu_0 - 2\rho^{2^{\alpha}+1}\cos\mu_0\sin 2\mu_0}{2}$$
$$E = \frac{\rho^{4^{\alpha}}\cos 4\mu_0 + 1 - 2(\rho^{2^{\alpha}})^2\cos^2 2\mu_0}{2}, \qquad F = \frac{\rho^{3^{\alpha}}\sin 3\mu_0 - \rho\sin\mu_0 - 2\rho^{2^{\alpha}+1}\cos 2\mu_0\sin\mu_0}{2}$$
$$G = \frac{\rho^{4^{\alpha}}\sin 4\mu_0 - 2(\rho^{2^{\alpha}})^2\cos 2\mu_0\sin 2\mu_0}{2}, \qquad H = \frac{1 - \rho^{2^{\alpha}}\cos 2\mu_0 - 2\rho^2\sin^2\mu_0}{2}$$
$$I = \frac{\rho\cos\mu_0 - \rho^{3^{\alpha}}\cos 3\mu_0 - 2\rho^{2^{\alpha}+1}\sin\mu_0\sin 2\mu_0}{2}, \qquad J = \frac{1 - \rho^{4^{\alpha}}\cos 4\mu_0 - 2(\rho^{2^{\alpha}})^2\sin^2 2\mu_0}{2}$$
Proof.
The derivations for the proof are given in Appendix A. Hence, assuming a large sample size, the central limit theorem (Feller (1971)) gives $(\bar{C}_1, \bar{C}_2, \bar{S}_1, \bar{S}_2)' \xrightarrow{L} N_4(\mu, \Sigma/m)$, where μ and Σ are as defined above. □
Theorem 1.
$$\sqrt{m}\,(\hat{\alpha} - \alpha) \xrightarrow{L} N(0, \gamma'\Sigma\gamma)$$
where
$$\gamma = \frac{1}{\ln 2}\left(-\frac{\cos\mu_0}{\rho\ln\rho},\; \frac{\cos 2\mu_0}{\rho^{2^{\alpha}}\ln\rho^{2^{\alpha}}},\; -\frac{\sin\mu_0}{\rho\ln\rho},\; \frac{\sin 2\mu_0}{\rho^{2^{\alpha}}\ln\rho^{2^{\alpha}}}\right)'$$
and
$$\gamma'\Sigma\gamma = \frac{1}{(\ln 2)^2}\left[\frac{1 + \rho^{2^{\alpha}} - 2\rho^2}{2(\rho\ln\rho)^2} + \frac{1 + \rho^{4^{\alpha}} - 2(\rho^{2^{\alpha}})^2}{2(\rho^{2^{\alpha}}\ln\rho^{2^{\alpha}})^2} + \frac{2\rho^{2^{\alpha}+1} - \rho - \rho^{3^{\alpha}}}{(\rho\ln\rho)(\rho^{2^{\alpha}}\ln\rho^{2^{\alpha}})}\right]$$
Proof.
We know from Lemma 1 that $\sqrt{m}(T_m - \mu) \xrightarrow{L} N_4(0, \Sigma)$. Therefore, by the delta method (given in Casella and Berger (2002)), we get $\sqrt{m}(\hat{\alpha} - \alpha) \xrightarrow{L} N(0, \gamma'\Sigma\gamma)$, where
$$g(T_m) = \hat{\alpha} = \frac{1}{\ln 2}\ln\!\left(\frac{\ln\bar{R}_2}{\ln\bar{R}_1}\right) = \frac{1}{\ln 2}\ln\!\left(\frac{\ln\sqrt{\bar{C}_2^2 + \bar{S}_2^2}}{\ln\sqrt{\bar{C}_1^2 + \bar{S}_1^2}}\right)$$
and
$$\gamma = \left(\frac{\partial g}{\partial\bar{C}_1}, \frac{\partial g}{\partial\bar{C}_2}, \frac{\partial g}{\partial\bar{S}_1}, \frac{\partial g}{\partial\bar{S}_2}\right)'\Bigg|_{\mu} = \frac{1}{\ln 2}\left(-\frac{\bar{C}_1}{\bar{R}_1^2\ln\bar{R}_1},\; \frac{\bar{C}_2}{\bar{R}_2^2\ln\bar{R}_2},\; -\frac{\bar{S}_1}{\bar{R}_1^2\ln\bar{R}_1},\; \frac{\bar{S}_2}{\bar{R}_2^2\ln\bar{R}_2}\right)'\Bigg|_{\mu}$$
which, evaluated at μ, is the vector stated above. Expanding the quadratic form $\gamma'\Sigma\gamma$ and simplifying with the usual trigonometric identities and formulae, all terms involving $\mu_0$ cancel, yielding the closed form stated in the theorem. □
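As a numerical sanity check on the closed form above, the sketch below (our own code) evaluates γ′Σγ for the wrapped Cauchy case (α = 1, μ₀ = 0, σ = 1, so ρ = e⁻¹) and compares it with m times the Monte Carlo variance of α̂; the two should be of comparable magnitude:

```python
import numpy as np

def asy_var(alpha, rho):
    """Closed-form gamma' Sigma gamma (mu_0 = 0 case) from Theorem 1."""
    r2 = rho ** (2 ** alpha)
    l1 = rho * np.log(rho)                 # rho ln(rho)
    l2 = r2 * (2 ** alpha) * np.log(rho)   # rho^(2^alpha) ln(rho^(2^alpha))
    t1 = (1 + r2 - 2 * rho ** 2) / (2 * l1 ** 2)
    t2 = (1 + rho ** (4 ** alpha) - 2 * r2 ** 2) / (2 * l2 ** 2)
    t3 = (2 * rho ** (2 ** alpha + 1) - rho - rho ** (3 ** alpha)) / (l1 * l2)
    return (t1 + t2 + t3) / np.log(2) ** 2

def alpha_hat(theta):
    R1 = np.hypot(np.mean(np.cos(theta)), np.mean(np.sin(theta)))
    R2 = np.hypot(np.mean(np.cos(2 * theta)), np.mean(np.sin(2 * theta)))
    return np.log(np.log(R2) / np.log(R1)) / np.log(2)

rng = np.random.default_rng(1)
m, reps = 2000, 1000
est = [alpha_hat(np.mod(rng.standard_cauchy(m), 2 * np.pi)) for _ in range(reps)]
print(asy_var(1.0, np.exp(-1)))  # about 13.94
print(m * np.var(est))           # Monte Carlo counterpart, comparable magnitude
```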
Lemma 2.
$$\sqrt{m}\,(T_m - \mu) \xrightarrow{L} N(0, \sigma^2)$$
where $T_m = \bar{C}_2$, the mean is $\mu = \rho^{2^{\alpha}}$, and the dispersion is
$$\sigma^2 = \frac{\rho^{4^{\alpha}} + 1 - 2(\rho^{2^{\alpha}})^2}{2}$$
Proof.
The derivations for the proof are given in Appendix B. Hence, assuming a large sample size, the central limit theorem (Feller (1971)) gives $\bar{C}_2 \xrightarrow{L} N(\mu, \sigma^2/m)$, where $\mu = \rho^{2^{\alpha}}$ and $\sigma^2 = m\,V(\bar{C}_2)$ is as given above. □
Theorem 2.
$$\sqrt{m}\,(\hat{\alpha}_1 - \alpha) \xrightarrow{L} N\!\left(0, \sigma^2\left[g'(\mu)\right]^2\right)$$
where (recalling that ρ = e⁻¹ here, so that $\ln\rho^{2^{\alpha}} = -2^{\alpha}$)
$$g'(\mu) = -\frac{1}{(\ln 2)\,2^{\alpha}\,\rho^{2^{\alpha}}} \quad\text{and}\quad \sigma^2\left[g'(\mu)\right]^2 = \frac{\rho^{4^{\alpha}} + 1 - 2(\rho^{2^{\alpha}})^2}{2(\ln 2)^2\,4^{\alpha}\,(\rho^{2^{\alpha}})^2}$$
Proof.
We know from Lemma 2 that $\sqrt{m}(T_m - \mu) \to N(0, \sigma^2)$ in distribution. Therefore, by the delta method (given in Casella and Berger (2002)), we get $\sqrt{m}(\hat{\alpha}_1 - \alpha) \xrightarrow{L} N(0, \sigma^2[g'(\mu)]^2)$, where
$$g(T_m) = \frac{\ln(-\ln\bar{C}_2)}{\ln 2}, \qquad g'(\bar{C}_2) = \frac{1}{(\ln 2)\,\bar{C}_2\ln\bar{C}_2}\,\Bigg|_{\mu} = -\frac{1}{(\ln 2)\,2^{\alpha}\,\rho^{2^{\alpha}}}$$
□
Lemma 3.
$$\sqrt{m}\,(T_m - \mu) \xrightarrow{L} N_2(0, \Sigma)$$
where $T_m = (\bar{C}_1, \bar{C}_2)'$, the mean vector is $\mu = (\rho, \rho^{2^{\alpha}})'$, and the dispersion matrix is
$$\Sigma = \begin{pmatrix} A & B \\ B & C \end{pmatrix}, \quad A = \frac{\rho^{2^{\alpha}} + 1 - 2\rho^2}{2}, \quad B = \frac{\rho + \rho^{3^{\alpha}} - 2\rho^{2^{\alpha}+1}}{2}, \quad C = \frac{\rho^{4^{\alpha}} + 1 - 2(\rho^{2^{\alpha}})^2}{2}$$
Proof.
The derivations for the proof are given in Appendix C. Hence, assuming a large sample size, the central limit theorem (Feller (1971)) gives $(\bar{C}_1, \bar{C}_2)' \xrightarrow{L} N_2(\mu, \Sigma/m)$, where μ and Σ are as defined above. □
Theorem 3.
$$\sqrt{m}\,(\hat{\alpha}_2 - \alpha) \xrightarrow{L} N(0, \gamma_1'\Sigma\gamma_1)$$
where
$$\gamma_1 = \frac{1}{\ln 2}\left(-\frac{1}{\rho\ln\rho},\; \frac{1}{\rho^{2^{\alpha}}\ln\rho^{2^{\alpha}}}\right)'$$
and
$$\gamma_1'\Sigma\gamma_1 = \frac{1}{(\ln 2)^2}\left[\frac{1 + \rho^{2^{\alpha}} - 2\rho^2}{2(\rho\ln\rho)^2} + \frac{1 + \rho^{4^{\alpha}} - 2(\rho^{2^{\alpha}})^2}{2(\rho^{2^{\alpha}}\ln\rho^{2^{\alpha}})^2} + \frac{2\rho^{2^{\alpha}+1} - \rho - \rho^{3^{\alpha}}}{(\rho\ln\rho)(\rho^{2^{\alpha}}\ln\rho^{2^{\alpha}})}\right]$$
Proof.
We know from Lemma 3 that $\sqrt{m}(T_m - \mu) \xrightarrow{L} N_2(0, \Sigma)$. Therefore, by the delta method (given in Casella and Berger (2002)), we get $\sqrt{m}(\hat{\alpha}_2 - \alpha) \xrightarrow{L} N(0, \gamma_1'\Sigma\gamma_1)$, where $g(T_m) = \frac{1}{\ln 2}\ln\left(\frac{\ln\bar{C}_2}{\ln\bar{C}_1}\right)$ and
$$\gamma_1 = \left(\frac{\partial g}{\partial\bar{C}_1}, \frac{\partial g}{\partial\bar{C}_2}\right)'\Bigg|_{\mu} = \frac{1}{\ln 2}\left(-\frac{1}{\bar{C}_1\ln\bar{C}_1},\; \frac{1}{\bar{C}_2\ln\bar{C}_2}\right)'\Bigg|_{\mu} = \frac{1}{\ln 2}\left(-\frac{1}{\rho\ln\rho},\; \frac{1}{\rho^{2^{\alpha}}\ln\rho^{2^{\alpha}}}\right)'$$
Expanding $\gamma_1'\Sigma\gamma_1$ gives the stated closed form. □
The above theorems imply that the estimators are consistent. Hence, in large samples, the performance of the estimators is reasonably good. Now, assuming the sample size is large, say 100, we calculate the asymptotic variance $\gamma'\Sigma\gamma/100$ of $g(T_m) = \hat{\alpha}$ for different values of α ranging over (0, 2] and different values of ρ ranging over (0, 1) in Table 1.

4. Improvement Over the Moment Estimator

The moment estimator need not always remain in the support of the true parameter α, that is, (0, 2]. Hence, the moment estimators proposed above need not be proper estimators of α. A modified estimator free from this defect is given by
$$\hat{\alpha}^* = \begin{cases} \hat{\alpha} & \text{if } 0 < \hat{\alpha} < 2 \\ 2 & \text{if } \hat{\alpha} \ge 2 \end{cases}$$
(since the support of α excludes non-positive values).
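In code the modification is a one-line clipping rule (the function name is ours):

```python
def modified_truncated(a_hat):
    """Clip a moment estimate back onto the support (0, 2]:
    values at or above 2 are mapped to the boundary point 2,
    values inside (0, 2) pass through unchanged."""
    return 2.0 if a_hat >= 2 else a_hat

print(modified_truncated(2.37), modified_truncated(1.3))  # 2.0 1.3
```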
Thus, the distribution of $\hat{\alpha}^*$ is mixed, consisting of an absolutely continuous part and one atom. Writing $s = \sqrt{\gamma'\Sigma\gamma/m}$ and letting f denote the density of $\hat{\alpha} \sim N(\alpha, s^2)$, the continuous part of $\hat{\alpha}^*$ on (0, 2) has density $f(\hat{\alpha}^*)$ and carries total mass
$$P[0 < \hat{\alpha} < 2] = \Phi\!\left(\frac{2 - \alpha}{s}\right) - \Phi\!\left(\frac{-\alpha}{s}\right)$$
while the atom at 2 has mass
$$P[\hat{\alpha}^* = 2] = P[\hat{\alpha} \ge 2] = 1 - \Phi\!\left(\frac{2 - \alpha}{s}\right)$$

4.1. Special Case 1: μ = 0 , σ = 1

Similar modifications can be made for the estimator α 1 ^ . Let it be denoted by α 1 * ^ .

4.2. Special Case 2: μ = 0 , σ Unknown

Similar modifications can be made for the estimator α 2 ^ . Let it be denoted by α 2 * ^ .

5. Derivation of the Asymptotic Distribution of the Modified Truncated Estimators

Now, using the asymptotic normal distribution of $\hat{\alpha}$, we can derive analogous results for the modified truncated estimator of the index parameter α.
The mean of $\hat{\alpha}^*$ is given by
$$E(\hat{\alpha}^*) = 0 \cdot P(\hat{\alpha} < 0) + \int_0^2 \hat{\alpha}\, f(\hat{\alpha})\, d\hat{\alpha} + 2 \cdot P(\hat{\alpha} > 2)$$
where $\sqrt{m}(\hat{\alpha} - \alpha) \to N(0, \gamma'\Sigma\gamma)$ asymptotically (as noted above) and f is the probability density function of $\hat{\alpha}$. Equivalently, $\tau = \frac{\hat{\alpha} - \alpha}{\sqrt{\gamma'\Sigma\gamma/m}} \to N(0, 1)$ asymptotically. Let $\phi(\tau)$ and $\Phi(\tau)$ denote the p.d.f. and c.d.f. of τ, respectively, let $\sigma = \sqrt{\gamma'\Sigma\gamma/m}$, and set $a^* = \frac{0 - \alpha}{\sigma}$ and $b^* = \frac{2 - \alpha}{\sigma}$. Then, with a = 0 and b = 2, we get
$$E(\hat{\alpha}^*) = a\,P(\tau < a^*) + \int_{a^*}^{b^*} (\tau\sigma + \alpha)\,\phi(\tau)\, d\tau + b\,P(\tau > b^*)$$
$$E(\hat{\alpha}^*) = \sigma\{\phi(a^*) - \phi(b^*)\} + \alpha\left[\Phi(b^*) - \Phi(a^*)\right] + 2\left[1 - \Phi(b^*)\right] \to \alpha$$
since $[\Phi(b^*) - \Phi(a^*)] \to 1$, $b[1 - \Phi(b^*)] \to 0$, and $\sigma \to 0$ as $m \to \infty$. Similarly,
$$E(\hat{\alpha}^{*2}) = 0^2 \cdot P(\hat{\alpha} < 0) + \int_0^2 \hat{\alpha}^2 f(\hat{\alpha})\, d\hat{\alpha} + 4 \cdot P(\hat{\alpha} > 2)$$
$$E(\hat{\alpha}^{*2}) = \sigma^2\{a^*\phi(a^*) - b^*\phi(b^*) + \Phi(b^*) - \Phi(a^*)\} + \alpha^2\{\Phi(b^*) - \Phi(a^*)\} + 2\alpha\sigma\{\phi(a^*) - \phi(b^*)\}$$
where the remaining term vanishes since $b^2[1 - \Phi(b^*)] \to 0$ as $m \to \infty$.
The asymptotic variance of α * ^ is given by
V ( α * ^ ) = E ( α * ^ 2 ) [ E ( α * ^ ) ] 2
Similarly, writing $s_1 = \sigma\,|g'(\mu)|/\sqrt{m}$ for the asymptotic standard deviation of $\hat{\alpha}_1$, with $a = \frac{0 - \alpha}{s_1}$ and $b = \frac{2 - \alpha}{s_1}$, the mean of $\hat{\alpha}_1^*$ is given by
$$E(\hat{\alpha}_1^*) = s_1\{\phi(a) - \phi(b)\} + \alpha\left[\Phi(b) - \Phi(a)\right], \quad \text{since } b[1 - \Phi(b)] \to 0 \text{ as } m \to \infty$$
$$E(\hat{\alpha}_1^{*2}) = s_1^2\{a\phi(a) - b\phi(b) + \Phi(b) - \Phi(a)\} + \alpha^2\{\Phi(b) - \Phi(a)\} + 2\alpha s_1\{\phi(a) - \phi(b)\}, \quad \text{since } b^2[1 - \Phi(b)] \to 0$$
The asymptotic variance of $\hat{\alpha}_1^*$ is given by
$$V(\hat{\alpha}_1^*) = E(\hat{\alpha}_1^{*2}) - [E(\hat{\alpha}_1^*)]^2$$
Likewise, writing $s_2 = \sqrt{\gamma_1'\Sigma\gamma_1/m}$, with $a = \frac{0 - \alpha}{s_2}$ and $b = \frac{2 - \alpha}{s_2}$, the mean of $\hat{\alpha}_2^*$ is given by
$$E(\hat{\alpha}_2^*) = s_2\{\phi(a) - \phi(b)\} + \alpha\left[\Phi(b) - \Phi(a)\right], \quad \text{since } b[1 - \Phi(b)] \to 0 \text{ as } m \to \infty$$
$$E(\hat{\alpha}_2^{*2}) = s_2^2\{a\phi(a) - b\phi(b) + \Phi(b) - \Phi(a)\} + \alpha^2\{\Phi(b) - \Phi(a)\} + 2\alpha s_2\{\phi(a) - \phi(b)\}, \quad \text{since } b^2[1 - \Phi(b)] \to 0$$
The asymptotic variance of $\hat{\alpha}_2^*$ is given by
$$V(\hat{\alpha}_2^*) = E(\hat{\alpha}_2^{*2}) - [E(\hat{\alpha}_2^*)]^2$$
Thus, the following theorem is established.
Theorem 4.
$$(\hat{\alpha}^* - \alpha) \xrightarrow{L} N(0, V(\hat{\alpha}^*))$$
where $V(\hat{\alpha}^*)$ is as derived above.
Now, assuming the sample size m is large, say 100, the asymptotic variances of the modified truncated estimator α * ^ for different values of α and different values of ρ (ranging from 0 to 1) are displayed in Table 2.

6. Comparison of the Proposed Estimator With the Hill Estimator and the Characteristic Function Based Estimator

Next, we want to compare the performance of this modified truncated estimator with that of a popular estimator known as the Hill estimator (Dufour and Kurz-Kim (2010); Hill (1975)), which is a simple non-parametric estimator based on order statistics. Given a sample of n observations $X_1, X_2, \ldots, X_n$, the Hill estimator is defined as
$$\hat{\alpha}_H = \left[\frac{1}{k}\sum_{j=1}^{k}\ln X_{n+1-j:n} - \ln X_{n-k:n}\right]^{-1}$$
with standard error
$$SD(\hat{\alpha}_H) = \frac{k\,\hat{\alpha}_H}{(k-1)\sqrt{k-2}}$$
where k is the number of observations which lie in the tail of the distribution of interest and is to be chosen optimally, depending on the sample size n and the tail thickness α, as k = k(n, α), and $X_{j:n}$ denotes the j-th order statistic of the sample of size n.
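For concreteness, a minimal sketch of the Hill estimator (the function name is ours), checked on an exact Pareto(α) sample, whose tail index it should recover:

```python
import numpy as np

def hill(x, k):
    """Hill estimator of alpha from the k largest order statistics
    of a positive sample x."""
    xs = np.sort(x)              # ascending order statistics
    tail = np.log(xs[-k:])       # ln X_{n+1-j:n}, j = 1, ..., k
    return 1.0 / (np.mean(tail) - np.log(xs[-k - 1]))

# Exact Pareto with tail index 1.5: P(X > x) = x**-1.5 for x >= 1
rng = np.random.default_rng(3)
x = 1 + rng.pareto(1.5, size=100_000)
print(hill(x, k=1000))  # close to 1.5
```

Note the sensitivity to k mentioned earlier: rerunning with much smaller or larger k moves the estimate noticeably.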
The asymptotic normality of the Hill estimator is provided by Goldie and Richard (1987) as
$$\sqrt{k}\,(\hat{\alpha}_H^{-1} - \alpha^{-1}) \xrightarrow{L} N(0, \alpha^{-2}) \quad (3)$$
Lemma 4.
$$(\hat{\alpha}_H - \alpha) \xrightarrow{L} N\!\left(0, \frac{\alpha^2}{k}\right)$$
Proof.
Taking $g(\hat{\alpha}_H^{-1}) = 1/\hat{\alpha}_H^{-1} = \hat{\alpha}_H$ (since g′ exists and is non-zero valued) and using Equation (3), the delta method gives
$$(\hat{\alpha}_H^{-1} - \alpha^{-1}) \xrightarrow{L} N\!\left(0, \frac{\alpha^{-2}}{k}\right) \implies (\hat{\alpha}_H - \alpha) \xrightarrow{L} N\!\left(0, [g'(\alpha^{-1})]^2\,\frac{\alpha^{-2}}{k}\right) \implies \hat{\alpha}_H \xrightarrow{L} N\!\left(\alpha, \frac{\alpha^2}{k}\right)$$
since $g'(\alpha^{-1}) = -\alpha^2$. □
We need this result for comparing the performances of the estimators for α .
In addition, we make a comparison of the performance of the modified truncated estimator $\hat{\alpha}_1^*$ with that of the characteristic function-based estimator (Anderson and Arnold (1993)), which is obtained by minimization of the objective function (for μ = 0 and σ = 1) given by
$$\hat{I}_s(\alpha) = \sum_{i=1}^{n} w_i\left(\hat{\eta}(z_i) - \exp(-|z_i|^{\alpha})\right)^2$$
The performance of the modified truncated estimator $\hat{\alpha}_2^*$ is compared with that of the characteristic function-based estimator obtained by minimization of the objective function (for μ = 0 and σ unknown) given by
$$\hat{I}_s(\alpha) = \sum_{i=1}^{n} w_i\left(\hat{\eta}(z_i) - \exp(-|\sigma z_i|^{\alpha})\right)^2$$
where
$$\hat{\eta}(t) = \frac{1}{n}\sum_{j}\cos(t x_j)$$
$x_1, x_2, \ldots, x_n$ are realizations from the symmetric stable(α) distribution, $z_i$ is the i-th zero of the m-th degree Hermite polynomial $H_m(z)$, and
$$w_i = \frac{2^{m-1}\, m!\,\sqrt{\pi}}{\left(m\, H_{m-1}(z_i)\right)^2}$$
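The minimization has no closed form, so a numerical search is needed. The sketch below (our own code) uses NumPy's Gauss–Hermite nodes and weights as the $z_i$ and $w_i$ — an assumption about the quadrature intended above — and a simple grid search over α for the μ = 0, σ = 1 case:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cf_estimate(x, m=20):
    """Characteristic function-based estimate of alpha (mu = 0,
    sigma = 1): minimize the weighted ECF objective over a grid."""
    z, w = hermgauss(m)  # Hermite zeros z_i and associated weights w_i
    ecf = np.array([np.mean(np.cos(zi * x)) for zi in z])  # eta-hat(z_i)
    grid = np.linspace(0.05, 2.0, 400)
    obj = [np.sum(w * (ecf - np.exp(-np.abs(z) ** a)) ** 2) for a in grid]
    return grid[int(np.argmin(obj))]

# Standard Cauchy data: alpha = 1, sigma = 1
rng = np.random.default_rng(11)
x = rng.standard_cauchy(50_000)
print(cf_estimate(x))  # close to 1
```

The grid search stands in for whatever optimizer Anderson and Arnold actually used; it makes the contrast with the closed-form moment estimator plain.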
It is to be noted that, when the estimate of α is below 1, we do not know any explicit form of the probability density function. However, when the estimate lies between 1 and 2, i.e., $1 < \hat{\alpha}^* < 2$, we may compare the fit with the stable family by modeling a mixture of normal and Cauchy distributions and then using the method proposed in Anderson and Arnold (1993) with the objective function
$$\sum_{i=1}^{n} w_i\left(\hat{\eta}(z_i) - \psi_{NC}(z_i)\right)^2$$
where $\hat{\eta}(t)$ is the same as defined above, with the realizations taken from the mixture distribution, and $\psi_{NC}$ denotes the corresponding theoretical characteristic function given by
$$\psi_{NC}(t) = p\exp(-\sigma_1^2 t^2/2) + (1 - p)\exp(-\sigma_2|t|)$$
where p denotes the mixture proportion, and $\sigma_1$ and $\sigma_2$ are the scale parameters of the normal and Cauchy distributions, respectively (the location parameters are taken as zero, for the reason mentioned above). Finally, a measure of goodness of fit is proposed as:
Index of Objective function (I.O.) = Objective function + Number of parameters estimated
The distribution for which the I.O. is minimum gives the best fit to the data.
The modified truncated estimator based on the moment estimator is free of the location parameter, since it is defined in terms of $\bar{R}_j = \frac{1}{m}\sum_{i=1}^{m}\cos j(\theta_i - \bar{\theta})$, $j = 1, 2$, that is, in terms of the quantity $(\theta_i - \bar{\theta})$, which is centered with respect to the mean direction $\bar{\theta}$; it is, however, not free of the nuisance parameter, namely the concentration parameter ρ. The Hill estimator is scale invariant, since it is defined in terms of logs of ratios, but it is not location invariant. Therefore, centering needs to be done in order to achieve location invariance.

7. Computational Studies

The analytical variance of the untruncated moment estimator was compared with that of the modified truncated estimator, as presented in Table 1, for values of α < 1 , which is more applicable in practical situations for volatile data.
The comparison of the performances of the two estimators is shown in Table 2. The parameter configurations were chosen as given by Hill (1975) and Dufour and Kurz-Kim (2010). The simulation is presented in Table 2 for the values of α = 1.01, 1.25, 1.5, 1.75, and 1.9 each with sample size n = 100, 250, 500, 1000, 2000, 5000, and 10,000 and for different values of ρ = 0.2, 0.4, 0.6, and 0.8 when skewness parameter β = 0, location parameter μ = 0, and scale parameter σ = ( ln ( ρ ) ) ( 1 / α ) , i.e., concentration parameter ρ = e σ α . For each combination of α and n, 10,000 replications were performed. In this simulation, the sample was relocated by three different relocations, viz. true mean = 0, estimated sample mean, and estimated sample median, and comparison of the root mean square errors (RMSEs) was made.
Next, Table 3 compares the performance of the modified truncated estimator $\hat{\alpha}_2^*$ with that of the characteristic function-based estimator. The simulation is presented for the values of α = 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, and 2.0, each with sample size n = 20, 30, 40, and 50, and the values of σ were taken as 3, 5, and 10. For each combination of α and n, 10,000 replications were performed.
The asymptotic variance of the characteristic function-based estimator, unlike that of the modified truncated estimator, is not available in any closed analytical form. We are thus unable to present the Asymptotic Relative Efficiency (ARE) of these estimators of α analytically. Instead, we compared these through their MSEs based on extensive simulations over all reasonable small, moderate, and large sample sizes.

8. Applications

8.1. Inference on the Gold Price Data (In US Dollars) (1980–2013)

Gold price data, say $x_t$, were collected per ounce in US dollars over the years 1980–2013. These were transformed as $z_t = 100(\ln(x_t) - \ln(x_{t-1}))$, then “wrapped” to obtain $\theta_t = z_t \bmod 2\pi$, and finally centered as $\hat{\theta} = (\theta_t - \bar{\theta}) \bmod 2\pi$, where $\bar{\theta}$ denotes the mean direction of the $\theta_t$ and $\hat{\theta}$ denotes the variable thetamod as used in the graphs. The Durbin–Watson test performed on the log-ratio transformed data gives no evidence of autocorrelation. The test statistic of Watson's goodness of fit test (Jammalamadaka and SenGupta (2001)) for the wrapped stable distribution was obtained as 0.01632691, with a corresponding p-value of 0.9970284, which is greater than 0.05, indicating that the wrapped stable distribution fits the transformed gold price data (in US dollars). The modified truncated estimate $\hat{\alpha}_1^*$ is 0.3752206, while the estimate by the characteristic function method is 0.401409. The value of the objective function using the characteristic function estimate is 2.218941, while that using our modified truncated estimate is 2.411018.
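The transformation chain used for all the price series in this section can be sketched as follows (names are illustrative):

```python
import numpy as np

def wrap_and_center(prices):
    """Log-ratio returns -> wrap to the circle -> center at the
    mean direction, as done for the price series in this section."""
    z = 100 * np.diff(np.log(prices))   # z_t = 100(ln x_t - ln x_{t-1})
    theta = np.mod(z, 2 * np.pi)        # theta_t = z_t mod 2*pi
    C, S = np.mean(np.cos(theta)), np.mean(np.sin(theta))
    mean_dir = np.arctan2(S, C)         # mean direction, correct quadrant
    return np.mod(theta - mean_dir, 2 * np.pi)

# Synthetic stand-in for a price series (not the gold data)
rng = np.random.default_rng(5)
prices = np.exp(np.cumsum(rng.normal(0, 0.05, size=200)))
angles = wrap_and_center(prices)
# angles lie on [0, 2*pi) and have mean direction zero by construction
```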

8.2. Inference on the Silver Price Data (In US Dollars) (1980–2013)

Data on the price of silver in US dollars, collected per ounce over the same time period, underwent the same transformation. The Durbin–Watson test performed on the log-ratio transformed data gives no evidence of autocorrelation. Watson's goodness of fit test for the wrapped stable distribution was also performed; the value of the statistic was obtained as 0.02530653, and the corresponding p-value is 0.9639666, which is greater than 0.05, indicating that the wrapped stable distribution also fits the transformed silver price data (in US dollars). The modified truncated estimate of the index parameter α is 0.4112475, while the estimate by the characteristic function method is 0.644846. The value of the objective function using the characteristic function estimate is 2.234203, while that using our modified truncated estimate is 2.234432.

8.3. Inference on the Silver Price Data (In INR) (1970–2011)

Data on the price of silver in INR were also collected, per 10 grams, over the years 1970–2011. The p-value for the Durbin–Watson test performed on the log-ratio transformed data is 0.3437, giving no evidence of autocorrelation. Watson's goodness of fit test was also performed on the transformed data; the value of the statistic was obtained as 0.03382334, and the corresponding p-value is 0.8919965, which is greater than 0.05, indicating that the wrapped stable distribution also fits the silver price data (in INR). The estimate $\hat{\alpha}_1^*$ is 1.142171, which is the same as the characteristic function estimate. The value of the objective function using the characteristic function estimate is 2.813234, while that using our modified truncated estimate is 2.665166. Since the estimate of α lies between 1 and 2, a mixture of normal and Cauchy distributions is used, as in Anderson and Arnold (1993), to estimate the respective parameters. The initial value of the scale parameter $\sigma_1$ of the normal distribution is taken as the sample standard deviation, and that of the Cauchy distribution, $\sigma_2$, is taken as the sample quartile deviation. In addition, different initial values of the mixing parameter p yield the same estimates of the parameters, viz. $\hat{p} = 0.165$, $\hat{\sigma}_1 = 14.38486$, and $\hat{\sigma}_2 = 0.077$, and the value of the objective function was found to be 0.9308165. Then, the value of the I.O. using the modified truncated estimate (assuming a stable distribution) is 4.665166 (2.665166 + 2), using the characteristic function estimate (assuming a stable distribution) is 4.813234 (2.813234 + 2), and using the characteristic function estimate (assuming a mixture of normal and Cauchy distributions) is 3.9308165 (0.9308165 + 3). Thus, by the I.O. measure, the mixture of normal and Cauchy distributions gives the best fit to the data. The maximum likelihood estimate of α assuming the wrapped stable distribution is 1.1421361.
Akaike’s information criterion (AIC) value assuming wrapped stable distribution is 153.5426 and that assuming a mixture of normal and Cauchy distribution is 201.4.

8.4. Inference on the Box and Jenkins Stock Price Data

Series B Box and Jenkins (IBM) common stock closing price data, obtained from Box et al. (2016), were transformed in the same way as the preceding series. The Durbin–Watson test performed on the log-ratio transformed data gives no evidence of autocorrelation. Watson's test statistic for the goodness of fit test was obtained as 0.0554223, and the corresponding p-value as 0.6442058, which is greater than 0.05, indicating that the wrapped stable distribution fits the stock price data. The estimates of the index parameter α and the concentration parameter ρ obtained by the modified truncation method are 1.102854 and 0.4335457, respectively.

9. Findings and Concluding Remarks

It can be observed from Table 1 that the asymptotic variance of the untruncated estimator is reduced for the corresponding truncated estimator, indicating the efficiency of the truncated estimator.
It can also be noted from Table 2 that, for α = 1.01, the RMSE of the modified truncated estimator is less than that of the Hill estimator when the sample is relocated by three different relocations, viz. true mean = 0, sample mean, and sample median, for higher values of the concentration parameter ρ = 0.5, 0.6, 0.8, and 0.9 for sample sizes n = 100, 250, 500, and 1000 and for ρ = 0.3, 0.4, 0.6, 0.8, and 0.9 for sample sizes n = 2000, 5000, and 10,000. Furthermore, it can be observed that, for α = 1.25, 1.5, 1.75 and 1.9, the RMSE of the modified truncated is less than that of the Hill estimator for different relocations for ρ = 0.6, 0.7, 0.8, and 0.9 for smaller sample size and even for ρ = 0.5 for larger sample size. This clearly indicates the efficiency of the modified truncated estimator over the Hill estimator for higher values of the concentration parameter ρ .
It can be observed in Table 3 that the RMSE of the modified truncated estimator is less than that of the characteristic function-based estimator for almost all values of α corresponding to all values of σ .
The Hill estimator (Dufour and Kurz-Kim (2010)) is defined for 1 ≤ α ≤ 2, whereas the modified truncated estimator is defined for the whole range 0 < α ≤ 2. In addition, the overall performance of the modified truncated estimator is quite good, in terms of both efficiency and consistency, relative to the Hill estimator and the characteristic function-based estimator.
Thus, we have established an estimator of the index parameter α whose range agrees with the entire parameter space (0, 2]. It can be observed from the above real-life data applications that the modified truncated estimate is quite close to the characteristic function-based estimate. In addition, it is simpler and computationally easier to obtain than the estimator defined in Anderson and Arnold (1993). Thus, it may be considered a better estimator.
Again, when the estimate of α lies strictly between the index values of two well-known stable laws, one may attempt to model the data by a mixture of those two distributions, that is, a mixture of Cauchy (α = 1) and normal (α = 2) distributions when $1 < \hat{\alpha} < 2$, or a mixture of double exponential (α = 1/2) and Cauchy (α = 1) distributions when $1/2 < \hat{\alpha} < 1$. The fit is then compared with that of the stable family of distributions.
We could have used the usual technique of non-linear optimization, as in Salimi et al. (2018), for estimation, but it is computationally demanding, and the (statistical) consistency of the estimators obtained by such methods is unknown. In contrast, our proposed trigonometric moment and modified truncated estimators are much simpler and computationally cheaper, possess useful consistency properties, and, as proved above, have asymptotic distributions that can be presented in simple and elegant forms.
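The simplicity of the trigonometric moment approach is easy to see in the symmetric case centred at zero: the relations E(C̄1) = ρ and E(C̄2) = ρ^(2^α) (Appendix A with μ0 = 0) give the closed form α̂ = log2(log C̄2 / log C̄1). The sketch below is illustrative only; it uses the wrapped Cauchy case α = 1 (where ρ = e^(−σ)) because it can be simulated with numpy alone, and the helper name `alpha_moment_estimate` is our own.

```python
import numpy as np


def alpha_moment_estimate(theta):
    """Trigonometric moment estimator of alpha from wrapped angles
    (symmetric case, mu0 = 0): alpha = log2(log C2bar / log C1bar)."""
    c1 = np.mean(np.cos(theta))          # C1bar, estimates rho
    c2 = np.mean(np.cos(2.0 * theta))    # C2bar, estimates rho**(2**alpha)
    if not (0.0 < c2 < c1 < 1.0):
        raise ValueError("sample trigonometric moments outside valid range")
    return np.log(np.log(c2) / np.log(c1)) / np.log(2.0)


rng = np.random.default_rng(42)
sigma = 0.5                  # scale; rho = exp(-sigma) when alpha = 1
m = 200_000

# Cauchy(0, sigma) via inverse CDF, then wrap onto the circle.
x = sigma * np.tan(np.pi * (rng.random(m) - 0.5))
theta = np.mod(x, 2.0 * np.pi)

alpha_hat = alpha_moment_estimate(theta)   # should be close to 1
```

No numerical optimization is involved: two sample means and two logarithms suffice, which is what makes the estimator computationally trivial.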

Author Contributions

Problem formulation, formal analyses and data curation, A.S.; formal and numerical data analyses, M.R.

Acknowledgments

The research of the second author of the paper was funded by the Senior Research Fellowship from the University Grants Commission, Government of India. She is also thankful to the Indian Statistical Institute and the University of Calcutta for providing the necessary facilities.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Lemma 1.
Putting p = 1 in Equation (2) and using the expansion of the characteristic function of θ, we get (henceforth, E(X) and V(X) denote the expectation and variance of a random variable X, respectively, as usual)
\[ E[\cos\theta] = \rho\cos\mu_0 \;\Rightarrow\; E(\bar{C}_1) = E\left[\frac{1}{m}\sum_{i=1}^{m}\cos\theta_i\right] = \rho\cos\mu_0. \]
Again, putting p = 2 in Equation (2) and using the expansion of the characteristic function of θ, we get
\[ E[\cos 2\theta] = \rho^{2^{\alpha}}\cos 2\mu_0 \;\Rightarrow\; E(\bar{C}_2) = E\left[\frac{1}{m}\sum_{i=1}^{m}\cos 2\theta_i\right] = \rho^{2^{\alpha}}\cos 2\mu_0. \]
In addition, Equation (A2) implies that
\[ E[\cos^2\theta] = \frac{\rho^{2^{\alpha}}\cos 2\mu_0 + 1}{2} \;\Rightarrow\; V(\cos\theta) = E[\cos^2\theta] - E^2[\cos\theta] = \frac{\rho^{2^{\alpha}}\cos 2\mu_0 + 1 - 2\rho^{2}\cos^2\mu_0}{2}. \]
Hence,
\[ V(\bar{C}_1) = V\left[\frac{1}{m}\sum_{i=1}^{m}\cos\theta_i\right] = \frac{\rho^{2^{\alpha}}\cos 2\mu_0 + 1 - 2\rho^{2}\cos^2\mu_0}{2m}. \]
Now, putting p = 4 in Equation (2) and using the expansion of the characteristic function of θ, we get
\[ E[\cos 4\theta] = \rho^{4^{\alpha}}\cos 4\mu_0 \;\Rightarrow\; E[\cos^2 2\theta] = \frac{\rho^{4^{\alpha}}\cos 4\mu_0 + 1}{2}. \]
Hence,
\[ V(\cos 2\theta) = E[\cos^2 2\theta] - E^2[\cos 2\theta] = \frac{\rho^{4^{\alpha}}\cos 4\mu_0 + 1 - 2\left(\rho^{2^{\alpha}}\right)^2\cos^2 2\mu_0}{2}. \]
Therefore,
\[ V(\bar{C}_2) = V\left[\frac{1}{m}\sum_{i=1}^{m}\cos 2\theta_i\right] = \frac{\rho^{4^{\alpha}}\cos 4\mu_0 + 1 - 2\left(\rho^{2^{\alpha}}\right)^2\cos^2 2\mu_0}{2m}. \]
Now, putting p = 1 in Equation (2) and using the expansion of the characteristic function of θ, we get
\[ E[\sin\theta] = \rho\sin\mu_0 \;\Rightarrow\; E(\bar{S}_1) = E\left[\frac{1}{m}\sum_{i=1}^{m}\sin\theta_i\right] = \rho\sin\mu_0. \]
Again, putting p = 2 in Equation (2) and using the expansion of the characteristic function of θ, we get
\[ E[\sin 2\theta] = \rho^{2^{\alpha}}\sin 2\mu_0 \;\Rightarrow\; E(\bar{S}_2) = E\left[\frac{1}{m}\sum_{i=1}^{m}\sin 2\theta_i\right] = \rho^{2^{\alpha}}\sin 2\mu_0. \]
Now, using Equation (A2),
\[ E[\sin^2\theta] = \frac{1 - \rho^{2^{\alpha}}\cos 2\mu_0}{2}. \]
Hence,
\[ V(\sin\theta) = E[\sin^2\theta] - E^2[\sin\theta] = \frac{1 - \rho^{2^{\alpha}}\cos 2\mu_0 - 2\rho^{2}\sin^2\mu_0}{2}. \]
Therefore,
\[ V(\bar{S}_1) = V\left[\frac{1}{m}\sum_{i=1}^{m}\sin\theta_i\right] = \frac{1 - \rho^{2^{\alpha}}\cos 2\mu_0 - 2\rho^{2}\sin^2\mu_0}{2m}. \]
Now, using Equation (A5),
\[ E[\sin^2 2\theta] = \frac{1 - \rho^{4^{\alpha}}\cos 4\mu_0}{2}. \]
Hence,
\[ V(\sin 2\theta) = E[\sin^2 2\theta] - E^2[\sin 2\theta] = \frac{1 - \rho^{4^{\alpha}}\cos 4\mu_0 - 2\left(\rho^{2^{\alpha}}\right)^2\sin^2 2\mu_0}{2}. \]
Therefore,
\[ V(\bar{S}_2) = V\left[\frac{1}{m}\sum_{i=1}^{m}\sin 2\theta_i\right] = \frac{1 - \rho^{4^{\alpha}}\cos 4\mu_0 - 2\left(\rho^{2^{\alpha}}\right)^2\sin^2 2\mu_0}{2m}. \]
Now, using Equations (A1), (A7) and (A8), together with the identity \( \cos\theta\sin\theta = \tfrac{1}{2}\sin 2\theta \),
\[ \mathrm{Cov}(\cos\theta, \sin\theta) = E[\cos\theta\sin\theta] - E[\cos\theta]\,E[\sin\theta] = \frac{\rho^{2^{\alpha}}\sin 2\mu_0}{2} - \rho\cos\mu_0\cdot\rho\sin\mu_0. \]
Since the θ_i are independent,
\[ E\left[\sum_{i=1}^{m}\cos\theta_i\sum_{i=1}^{m}\sin\theta_i\right] = E\left[\sum_{i=1}^{m}\cos\theta_i\sin\theta_i + \sum_{i=1}^{m}\sum_{j\neq i}\cos\theta_i\sin\theta_j\right]. \]
Thus,
\[ \mathrm{Cov}(\bar{C}_1, \bar{S}_1) = \mathrm{Cov}\left(\frac{1}{m}\sum_{i=1}^{m}\cos\theta_i,\; \frac{1}{m}\sum_{i=1}^{m}\sin\theta_i\right) = \frac{\rho^{2^{\alpha}}\sin 2\mu_0 - 2\rho^{2}\cos\mu_0\sin\mu_0}{2m}. \]
Putting p = 3 in Equation (2) and using the expansion of the characteristic function of θ, we get
\[ E[\sin 3\theta] = \rho^{3^{\alpha}}\sin 3\mu_0. \]
Now,
\[ \mathrm{Cov}(\bar{C}_1, \bar{S}_2) = \mathrm{Cov}\left(\frac{1}{m}\sum_{i=1}^{m}\cos\theta_i,\; \frac{1}{m}\sum_{i=1}^{m}\sin 2\theta_i\right), \qquad \mathrm{Cov}(\cos\theta, \sin 2\theta) = E[\cos\theta\sin 2\theta] - E[\cos\theta]\,E[\sin 2\theta]. \]
Now, using Equations (A7) and (A9),
\[ E[\cos\theta\sin 2\theta] = E\left[\frac{\sin 3\theta + \sin\theta}{2}\right] = \frac{\rho^{3^{\alpha}}\sin 3\mu_0 + \rho\sin\mu_0}{2}. \]
Thus, using Equations (A1) and (A8),
\[ \mathrm{Cov}(\cos\theta, \sin 2\theta) = \frac{\rho^{3^{\alpha}}\sin 3\mu_0 + \rho\sin\mu_0}{2} - \rho\cos\mu_0\cdot\rho^{2^{\alpha}}\sin 2\mu_0. \]
Hence,
\[ \mathrm{Cov}(\bar{C}_1, \bar{S}_2) = \frac{\rho^{3^{\alpha}}\sin 3\mu_0 + \rho\sin\mu_0 - 2\rho^{2^{\alpha}+1}\cos\mu_0\sin 2\mu_0}{2m}. \]
Similarly, it can be shown that
\[ \mathrm{Cov}(\bar{C}_1, \bar{C}_2) = \frac{\rho\cos\mu_0 + \rho^{3^{\alpha}}\cos 3\mu_0 - 2\rho^{2^{\alpha}+1}\cos\mu_0\cos 2\mu_0}{2m}, \]
\[ \mathrm{Cov}(\bar{C}_2, \bar{S}_1) = \frac{\rho^{3^{\alpha}}\sin 3\mu_0 - \rho\sin\mu_0 - 2\rho^{2^{\alpha}+1}\cos 2\mu_0\sin\mu_0}{2m}, \]
\[ \mathrm{Cov}(\bar{C}_2, \bar{S}_2) = \frac{\rho^{4^{\alpha}}\sin 4\mu_0 - 2\left(\rho^{2^{\alpha}}\right)^2\cos 2\mu_0\sin 2\mu_0}{2m}, \]
\[ \mathrm{Cov}(\bar{S}_1, \bar{S}_2) = \frac{\rho\cos\mu_0 - \rho^{3^{\alpha}}\cos 3\mu_0 - 2\rho^{2^{\alpha}+1}\sin\mu_0\sin 2\mu_0}{2m}. \]
 □
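The first- and second-moment identities of Lemma 1 admit a quick Monte Carlo sanity check in the one case that needs no stable-law sampler: the wrapped Cauchy (α = 1, so ρ = e^(−σ) and 2^α = 2). The sketch below is our own illustration, not part of the paper; it checks E(C̄1) = ρ cos μ0 and the formula for V(C̄1) by simulating many replicate samples.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma, mu0, m, reps = 0.5, 0.8, 400, 4000
rho = np.exp(-sigma)          # concentration parameter when alpha = 1

# reps independent samples of size m from a wrapped Cauchy(mu0, sigma):
# Cauchy draws via inverse CDF, shifted by mu0, wrapped onto [0, 2*pi).
u = rng.random((reps, m))
theta = np.mod(mu0 + sigma * np.tan(np.pi * (u - 0.5)), 2.0 * np.pi)

c1bar = np.cos(theta).mean(axis=1)   # C1bar for each replicate
mean_c1 = c1bar.mean()               # should approximate rho * cos(mu0)
var_c1 = c1bar.var()                 # should approximate Lemma 1's V(C1bar)

# Lemma 1's variance formula with 2**alpha = 2:
v_theory = (rho**2 * np.cos(2 * mu0) + 1
            - 2 * rho**2 * np.cos(mu0)**2) / (2 * m)
```

With these settings the empirical mean and variance of C̄1 agree closely with ρ cos μ0 and the Lemma 1 formula, respectively.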

Appendix B

Proof of Lemma 2.
This proof follows simply by putting μ0 = 0 into Equations (A2) and (A6) in the proof of Lemma 1. □
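For reference, setting μ0 = 0 in the moments derived in the proof of Lemma 1 (so that all cosine factors of μ0 equal one and all sine factors vanish) reduces, for example, the mean and variance of \( \bar{C}_1 \) and \( \bar{C}_2 \) to
\[ E(\bar{C}_1) = \rho, \qquad V(\bar{C}_1) = \frac{\rho^{2^{\alpha}} + 1 - 2\rho^{2}}{2m}, \]
\[ E(\bar{C}_2) = \rho^{2^{\alpha}}, \qquad V(\bar{C}_2) = \frac{\rho^{4^{\alpha}} + 1 - 2\rho^{2^{\alpha+1}}}{2m}, \]
using \( \left(\rho^{2^{\alpha}}\right)^2 = \rho^{2^{\alpha+1}} \).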

Appendix C

Proof of Lemma 3.
This proof follows simply by putting μ0 = 0 in Equations (A1), (A2), (A3), (A6) and (A10) in the proof of Lemma 1. □
Figure A1. Histogram of wrapped log-ratio transformed gold price data (in US dollars) with wrapped stable density.
Figure A2. Histogram of wrapped log-ratio transformed silver price data (in US dollars) with wrapped stable density.
Figure A3. Histogram of wrapped log-ratio transformed gold price data (in INR) with wrapped stable density.
Figure A4. Histogram of wrapped log-ratio transformed silver price data (in INR) with wrapped stable density.
Figure A5. Histogram of wrapped log-ratio transformed Box and Jenkins data with wrapped stable density.

References

1. Anderson, Dale N., and Barry C. Arnold. 1993. Linnik Distributions and Processes. Journal of Applied Probability 30: 330–40.
2. Box, George Edward Pelham, Gwilym M. Jenkins, Gregory C. Reinsel, and Greta M. Ljung. 2016. Time Series Analysis: Forecasting and Control. Hoboken: John Wiley and Sons.
3. Casella, George, and Roger L. Berger. 2002. Statistical Inference. Boston: Thomson Learning.
4. Dufour, Jean-Marie, and Jeong-Ryeol Kurz-Kim. 2010. Exact inference and optimal invariant estimation for the stability parameter of symmetric α-stable distributions. Journal of Empirical Finance 17: 180–94.
5. Feller, William. 1971. An Introduction to Probability Theory and Its Applications, 3rd ed. New York: Wiley.
6. Goldie, Charles M., and Richard L. Smith. 1987. Slow variation with remainder: A survey of the theory and its applications. Quarterly Journal of Mathematics, Oxford 38: 45–71.
7. Hill, Bruce M. 1975. A simple general approach to inference about the tail of a distribution. The Annals of Statistics 3: 1163–74.
8. Jammalamadaka, S. Rao, and Ashis SenGupta. 2001. Topics in Circular Statistics. Hackensack: World Scientific Publishers.
9. Nolan, John P. 2003. Modeling Financial Data with Stable Distributions. In Handbook of Heavy Tailed Distributions in Finance. Amsterdam: Elsevier, pp. 105–30.
10. Salimi, Mehdi, N. M. A. Nik Long, Somayeh Sharifi, and Bruno Antonio Pansera. 2018. A multi-point iterative method for solving nonlinear equations with optimal order of convergence. Japan Journal of Industrial and Applied Mathematics 35: 497–509.
11. SenGupta, Ashis. 1996. Analysis of Directional Data in Agricultural Research using DDSTAP. Invited paper. Paper presented at Indian Society of Agricultural Statistics Golden Jubilee Conference, New Delhi, India, December 19–21; pp. 43–59.
Table 1. Asymptotic variances of the moment estimator α̂ and the modified truncated estimator α̂*.

α     ρ     V(α̂)    V(α̂*)
0.2   0.2   0.179   0.097
0.2   0.4   0.093   0.058
0.2   0.6   0.084   0.053
0.2   0.8   0.118   0.070
0.4   0.2   0.211   0.148
0.4   0.4   0.094   0.079
0.4   0.6   0.079   0.069
0.4   0.8   0.107   0.088
0.6   0.2   0.270   0.209
0.6   0.4   0.098   0.093
0.6   0.6   0.074   0.073
0.6   0.8   0.096   0.092
0.8   0.2   0.384   0.284
0.8   0.4   0.105   0.103
0.8   0.6   0.071   0.070
0.8   0.8   0.086   0.085
1.0   0.2   0.626   0.377
1.0   0.4   0.118   0.117
1.0   0.6   0.067   0.067
1.0   0.8   0.075   0.075
Table 2. Comparison of RMSEs of the modified truncated estimator (RMSE1) and the Hill estimator under relocations by the true mean = 0 (RMSE2), the sample mean (RMSE3), and the sample median (RMSE4).

α      ρ     n       RMSE1   k*     RMSE2   RMSE3   RMSE4
1.01   0.2   100     0.483   12     0.486   0.529   0.514
1.01   0.3   100     0.468   12     0.414   0.429   0.423
1.01   0.4   100     0.412   12     0.408   0.409   0.411
1.01   0.5   100     0.320   12     0.428   0.415   0.432
1.01   0.6   100     0.277   12     0.438   0.409   0.441
1.01   0.7   100     0.272   12     0.395   0.380   0.404
1.01   0.8   100     0.299   12     0.418   0.414   0.427
1.01   0.9   100     0.403   12     0.419   0.465   0.424
1.01   0.2   250     0.395   22     0.254   0.255   0.254
1.01   0.3   250     0.353   22     0.258   0.261   0.258
1.01   0.4   250     0.242   22     0.253   0.255   0.254
1.01   0.5   250     0.189   22     0.251   0.252   0.253
1.01   0.6   250     0.168   22     0.255   0.250   0.256
1.01   0.7   250     0.165   22     0.259   0.252   0.260
1.01   0.8   250     0.179   22     0.247   0.240   0.248
1.01   0.9   250     0.238   22     0.256   0.256   0.257
1.01   0.2   500     0.360   37     0.181   0.181   0.181
1.01   0.3   500     0.251   37     0.180   0.181   0.180
1.01   0.4   500     0.161   37     0.178   0.179   0.179
1.01   0.5   500     0.131   37     0.180   0.181   0.180
1.01   0.6   500     0.118   37     0.176   0.176   0.177
1.01   0.7   500     0.115   37     0.181   0.179   0.181
1.01   0.8   500     0.125   37     0.180   0.176   0.180
1.01   0.9   500     0.162   37     0.183   0.181   0.183
1.01   0.2   1000    0.295   64     0.131   0.131   0.131
1.01   0.3   1000    0.161   64     0.132   0.132   0.132
1.01   0.4   1000    0.113   64     0.132   0.132   0.132
1.01   0.5   1000    0.092   64     0.132   0.133   0.132
1.01   0.6   1000    0.081   64     0.131   0.131   0.131
1.01   0.7   1000    0.080   64     0.131   0.131   0.131
1.01   0.8   1000    0.086   64     0.133   0.131   0.134
1.01   0.9   1000    0.110   64     0.131   0.129   0.131
1.01   0.2   2000    0.220   83     0.114   0.114   0.114
1.01   0.3   2000    0.110   83     0.117   0.117   0.117
1.01   0.4   2000    0.078   83     0.116   0.116   0.116
1.01   0.5   2000    0.064   83     0.115   0.115   0.115
1.01   0.6   2000    0.058   83     0.114   0.114   0.114
1.01   0.7   2000    0.057   83     0.115   0.115   0.115
1.01   0.8   2000    0.062   83     0.114   0.114   0.114
1.01   0.9   2000    0.078   83     0.116   0.115   0.116
1.01   0.2   5000    0.125   193    0.074   0.074   0.074
1.01   0.3   5000    0.068   193    0.073   0.073   0.073
1.01   0.4   5000    0.049   193    0.073   0.073   0.073
1.01   0.5   5000    0.040   193    0.073   0.073   0.073
1.01   0.6   5000    0.037   193    0.073   0.073   0.073
1.01   0.7   5000    0.036   193    0.073   0.074   0.073
1.01   0.8   5000    0.039   193    0.074   0.074   0.074
1.01   0.9   5000    0.049   193    0.073   0.072   0.073
1.01   0.2   10000   0.083   282    0.060   0.060   0.060
1.01   0.3   10000   0.047   282    0.061   0.061   0.061
1.01   0.4   10000   0.035   282    0.061   0.061   0.061
1.01   0.5   10000   0.029   282    0.061   0.061   0.061
1.01   0.6   10000   0.026   282    0.061   0.061   0.061
1.01   0.7   10000   0.026   282    0.062   0.062   0.062
1.01   0.8   10000   0.027   282    0.061   0.061   0.061
1.01   0.9   10000   0.034   282    0.061   0.061   0.061
1.25   0.2   100     0.549   18     0.360   0.390   0.368
1.25   0.3   100     0.450   18     0.364   0.352   0.377
1.25   0.4   100     0.398   18     0.357   0.321   0.364
1.25   0.5   100     0.333   18     0.362   0.319   0.366
1.25   0.6   100     0.269   18     0.358   0.325   0.368
1.25   0.7   100     0.252   18     0.362   0.342   0.370
1.25   0.8   100     0.264   18     0.363   0.362   0.370
1.25   0.9   100     0.346   18     0.376   0.425   0.380
1.25   0.2   250     0.413   42     0.202   0.213   0.206
1.25   0.3   250     0.355   42     0.205   0.202   0.208
1.25   0.4   250     0.282   42     0.203   0.194   0.208
1.25   0.5   250     0.201   42     0.199   0.189   0.205
1.25   0.6   250     0.163   42     0.207   0.193   0.210
1.25   0.7   250     0.154   42     0.201   0.191   0.203
1.25   0.8   250     0.161   42     0.203   0.201   0.207
1.25   0.9   250     0.207   42     0.205   0.219   0.208
1.25   0.2   500     0.337   82     0.140   0.148   0.142
1.25   0.3   500     0.290   82     0.140   0.139   0.142
1.25   0.4   500     0.192   82     0.141   0.135   0.143
1.25   0.5   500     0.135   82     0.141   0.134   0.144
1.25   0.6   500     0.115   82     0.141   0.134   0.143
1.25   0.7   500     0.106   82     0.140   0.134   0.142
1.25   0.8   500     0.112   82     0.139   0.137   0.141
1.25   0.9   500     0.143   82     0.140   0.147   0.143
1.25   0.2   1000    0.296   159    0.099   0.105   0.101
1.25   0.3   1000    0.222   159    0.101   0.101   0.102
1.25   0.4   1000    0.128   159    0.099   0.097   0.101
1.25   0.5   1000    0.095   159    0.099   0.095   0.100
1.25   0.6   1000    0.079   159    0.098   0.093   0.100
1.25   0.7   1000    0.075   159    0.100   0.096   0.101
1.25   0.8   1000    0.079   159    0.098   0.097   0.100
1.25   0.9   1000    0.100   159    0.100   0.104   0.102
1.25   0.2   2000    0.300   314    0.314   0.316   0.313
1.25   0.3   2000    0.219   314    0.315   0.314   0.313
1.25   0.4   2000    0.088   314    0.070   0.068   0.071
1.25   0.5   2000    0.067   314    0.071   0.068   0.072
1.25   0.6   2000    0.056   314    0.070   0.066   0.071
1.25   0.7   2000    0.053   314    0.070   0.067   0.071
1.25   0.8   2000    0.055   314    0.069   0.068   0.071
1.25   0.9   2000    0.070   314    0.070   0.072   0.071
1.25   0.2   5000    0.206   314    0.044   0.047   0.045
1.25   0.3   5000    0.087   776    0.045   0.045   0.045
1.25   0.4   5000    0.055   776    0.044   0.043   0.045
1.25   0.5   5000    0.042   776    0.044   0.043   0.045
1.25   0.6   5000    0.035   776    0.045   0.043   0.045
1.25   0.7   5000    0.034   776    0.045   0.043   0.045
1.25   0.8   5000    0.036   776    0.045   0.044   0.045
1.25   0.9   5000    0.044   776    0.045   0.046   0.045
1.25   0.2   10000   0.141   1547   0.032   0.034   0.032
1.25   0.3   10000   0.061   1547   0.032   0.032   0.032
1.25   0.4   10000   0.039   1547   0.031   0.030   0.031
1.25   0.5   10000   0.030   1547   0.031   0.030   0.032
1.25   0.6   10000   0.025   1547   0.031   0.030   0.032
1.25   0.7   10000   0.024   1547   0.031   0.030   0.032
1.25   0.8   10000   0.025   1547   0.031   0.031   0.032
1.25   0.9   10000   0.031   1547   0.031   0.032   0.032
1.5    0.2   100     0.702   21     0.413   0.435   0.408
1.5    0.3   100     0.461   21     0.393   0.341   0.394
1.5    0.4   100     0.370   21     0.404   0.332   0.396
1.5    0.5   100     0.311   21     0.382   0.326   0.378
1.5    0.6   100     0.259   21     0.402   0.342   0.393
1.5    0.7   100     0.226   21     0.386   0.350   0.385
1.5    0.8   100     0.227   21     0.398   0.374   0.390
1.5    0.9   100     0.278   21     0.379   0.393   0.376
1.5    0.2   250     0.499   51     0.222   0.228   0.221
1.5    0.3   250     0.343   51     0.223   0.203   0.221
1.5    0.4   250     0.283   51     0.221   0.196   0.220
1.5    0.5   250     0.213   51     0.221   0.196   0.220
1.5    0.6   250     0.161   51     0.222   0.198   0.219
1.5    0.7   250     0.139   51     0.221   0.201   0.219
1.5    0.8   250     0.138   51     0.219   0.208   0.219
1.5    0.9   250     0.171   51     0.219   0.223   0.218
1.5    0.2   500     0.388   101    0.151   0.155   0.152
1.5    0.3   500     0.285   101    0.152   0.140   0.151
1.5    0.4   500     0.226   101    0.152   0.137   0.152
1.5    0.5   500     0.153   101    0.155   0.139   0.156
1.5    0.6   500     0.113   101    0.150   0.136   0.151
1.5    0.7   500     0.098   101    0.152   0.140   0.151
1.5    0.8   500     0.097   101    0.152   0.147   0.153
1.5    0.9   500     0.121   101    0.153   0.156   0.152
1.5    0.2   1000    0.311   201    0.105   0.106   0.105
1.5    0.3   1000    0.245   201    0.106   0.099   0.106
1.5    0.4   1000    0.166   201    0.105   0.096   0.105
1.5    0.5   1000    0.105   201    0.106   0.095   0.105
1.5    0.6   1000    0.079   201    0.107   0.096   0.106
1.5    0.7   1000    0.069   201    0.106   0.099   0.106
1.5    0.8   1000    0.068   201    0.106   0.101   0.106
1.5    0.9   1000    0.084   201    0.106   0.109   0.107
1.5    0.2   2000    0.261   400    0.075   0.076   0.075
1.5    0.3   2000    0.204   400    0.075   0.070   0.075
1.5    0.4   2000    0.113   400    0.074   0.068   0.074
1.5    0.5   2000    0.072   400    0.075   0.068   0.075
1.5    0.6   2000    0.056   400    0.074   0.068   0.075
1.5    0.7   2000    0.048   400    0.073   0.068   0.074
1.5    0.8   2000    0.048   400    0.074   0.071   0.075
1.5    0.9   2000    0.059   400    0.075   0.076   0.075
1.5    0.2   5000    0.222   995    0.047   0.048   0.047
1.5    0.3   5000    0.133   995    0.047   0.044   0.047
1.5    0.4   5000    0.069   995    0.047   0.043   0.048
1.5    0.5   5000    0.046   995    0.047   0.042   0.047
1.5    0.6   5000    0.035   995    0.047   0.043   0.047
1.5    0.7   5000    0.031   995    0.047   0.044   0.047
1.5    0.8   5000    0.030   995    0.047   0.045   0.047
1.5    0.9   5000    0.037   995    0.047   0.048   0.047
1.5    0.2   10000   0.201   1991   0.033   0.034   0.034
1.5    0.3   10000   0.089   1991   0.033   0.031   0.033
1.5    0.4   10000   0.048   1991   0.033   0.030   0.033
1.5    0.5   10000   0.031   1991   0.033   0.030   0.033
1.5    0.6   10000   0.025   1991   0.033   0.030   0.033
1.5    0.7   10000   0.021   1991   0.033   0.031   0.033
1.5    0.8   10000   0.022   1991   0.033   0.032   0.033
1.5    0.9   10000   0.026   1991   0.033   0.033   0.033
1.75   0.2   100     0.890   22     0.469   0.478   0.443
1.75   0.3   100     0.590   22     0.471   0.388   0.448
1.75   0.4   100     0.378   22     0.489   0.378   0.457
1.75   0.5   100     0.276   22     0.479   0.378   0.441
1.75   0.6   100     0.222   22     0.493   0.399   0.446
1.75   0.7   100     0.183   22     0.470   0.408   0.449
1.75   0.8   100     0.169   22     0.491   0.430   0.463
1.75   0.9   100     0.201   22     0.466   0.439   0.442
1.75   0.2   250     0.652   54     0.264   0.253   0.252
1.75   0.3   250     0.400   54     0.265   0.225   0.255
1.75   0.4   250     0.266   54     0.263   0.219   0.252
1.75   0.5   250     0.201   54     0.260   0.219   0.249
1.75   0.6   250     0.153   54     0.262   0.226   0.251
1.75   0.7   250     0.120   54     0.258   0.229   0.250
1.75   0.8   250     0.108   54     0.264   0.239   0.251
1.75   0.9   250     0.125   54     0.263   0.250   0.251
1.75   0.2   500     0.520   107    0.181   0.173   0.173
1.75   0.3   500     0.308   107    0.181   0.154   0.173
1.75   0.4   500     0.213   107    0.179   0.152   0.172
1.75   0.5   500     0.159   107    0.180   0.152   0.172
1.75   0.6   500     0.114   107    0.180   0.156   0.174
1.75   0.7   500     0.084   107    0.179   0.157   0.171
1.75   0.8   500     0.077   107    0.179   0.164   0.173
1.75   0.9   500     0.088   107    0.180   0.171   0.171
1.75   0.2   1000    0.404   214    0.123   0.119   0.119
1.75   0.3   1000    0.242   214    0.123   0.107   0.119
1.75   0.4   1000    0.172   214    0.122   0.104   0.118
1.75   0.5   1000    0.118   214    0.125   0.107   0.120
1.75   0.6   1000    0.080   214    0.124   0.108   0.119
1.75   0.7   1000    0.060   214    0.122   0.109   0.118
1.75   0.8   1000    0.054   214    0.123   0.112   0.118
1.75   0.9   1000    0.062   214    0.123   0.118   0.118
1.75   0.2   2000    0.324   428    0.088   0.083   0.084
1.75   0.3   2000    0.199   428    0.087   0.077   0.084
1.75   0.4   2000    0.141   428    0.085   0.073   0.082
1.75   0.5   2000    0.086   428    0.086   0.074   0.082
1.75   0.6   2000    0.057   428    0.087   0.076   0.083
1.75   0.7   2000    0.043   428    0.086   0.077   0.083
1.75   0.8   2000    0.038   428    0.087   0.079   0.083
1.75   0.9   2000    0.045   428    0.087   0.083   0.084
1.75   0.2   5000    0.244   1070   0.054   0.052   0.052
1.75   0.3   5000    0.159   1070   0.055   0.047   0.053
1.75   0.4   5000    0.094   1070   0.054   0.046   0.052
1.75   0.5   5000    0.054   1070   0.055   0.047   0.053
1.75   0.6   5000    0.035   1070   0.054   0.047   0.052
1.75   0.7   5000    0.027   1070   0.054   0.048   0.052
1.75   0.8   5000    0.024   1070   0.054   0.050   0.052
1.75   0.9   5000    0.028   1070   0.055   0.052   0.053
1.75   0.2   10000   0.199   2139   0.038   0.037   0.037
1.75   0.3   10000   0.133   2139   0.039   0.034   0.037
1.75   0.4   10000   0.067   2139   0.039   0.033   0.037
1.75   0.5   10000   0.038   2139   0.038   0.033   0.037
1.75   0.6   10000   0.025   2139   0.038   0.034   0.037
1.75   0.7   10000   0.019   2139   0.038   0.034   0.037
1.75   0.8   10000   0.017   2139   0.038   0.035   0.037
1.75   0.9   10000   0.020   2139   0.038   0.037   0.037
1.9    0.2   100     1.038   22     0.568   0.542   0.504
1.9    0.3   100     0.672   22     0.563   0.432   0.507
1.9    0.4   100     0.428   22     0.531   0.416   0.488
1.9    0.5   100     0.274   22     0.549   0.430   0.498
1.9    0.6   100     0.189   22     0.562   0.449   0.510
1.9    0.7   100     0.139   22     0.586   0.460   0.492
1.9    0.8   100     0.114   22     0.585   0.493   0.512
1.9    0.9   100     0.125   22     0.566   0.502   0.511
1.9    0.2   250     0.761   55     0.287   0.267   0.264
1.9    0.3   250     0.462   55     0.287   0.232   0.264
1.9    0.4   250     0.280   55     0.292   0.232   0.268
1.9    0.5   250     0.179   55     0.287   0.233   0.264
1.9    0.6   250     0.127   55     0.288   0.238   0.264
1.9    0.7   250     0.092   55     0.292   0.247   0.268
1.9    0.8   250     0.077   55     0.288   0.253   0.265
1.9    0.9   250     0.082   55     0.288   0.262   0.268
1.9    0.2   500     0.601   110    0.193   0.179   0.177
1.9    0.3   500     0.350   110    0.196   0.162   0.182
1.9    0.4   500     0.213   110    0.197   0.162   0.184
1.9    0.5   500     0.137   110    0.195   0.162   0.181
1.9    0.6   500     0.095   110    0.192   0.163   0.180
1.9    0.7   500     0.070   110    0.193   0.168   0.181
1.9    0.8   500     0.056   110    0.193   0.172   0.180
1.9    0.9   500     0.058   110    0.192   0.176   0.180
1.9    0.2   1000    0.487   220    0.133   0.123   0.125
1.9    0.3   1000    0.272   220    0.134   0.113   0.126
1.9    0.4   1000    0.161   220    0.133   0.110   0.124
1.9    0.5   1000    0.105   220    0.134   0.112   0.125
1.9    0.6   1000    0.073   220    0.136   0.115   0.126
1.9    0.7   1000    0.052   220    0.134   0.117   0.126
1.9    0.8   1000    0.041   220    0.135   0.119   0.124
1.9    0.9   1000    0.042   220    0.136   0.124   0.128
1.9    0.2   2000    0.399   438    0.095   0.087   0.088
1.9    0.3   2000    0.212   438    0.094   0.079   0.088
1.9    0.4   2000    0.126   438    0.095   0.078   0.088
1.9    0.5   2000    0.082   438    0.093   0.078   0.087
1.9    0.6   2000    0.055   438    0.094   0.080   0.087
1.9    0.7   2000    0.038   438    0.094   0.081   0.087
1.9    0.8   2000    0.029   438    0.093   0.083   0.087
1.9    0.9   2000    0.030   438    0.095   0.086   0.088
1.9    0.2   5000    0.303   1093   0.059   0.054   0.055
1.9    0.3   5000    0.153   1093   0.059   0.050   0.056
1.9    0.4   5000    0.091   1093   0.059   0.049   0.055
1.9    0.5   5000    0.058   1093   0.059   0.049   0.055
1.9    0.6   5000    0.037   1093   0.059   0.051   0.056
1.9    0.7   5000    0.024   1093   0.059   0.051   0.056
1.9    0.8   5000    0.018   1093   0.060   0.053   0.056
1.9    0.9   5000    0.019   1093   0.059   0.054   0.055
1.9    0.2   10000   0.245   2187   0.041   0.038   0.039
1.9    0.3   10000   0.123   2187   0.042   0.035   0.040
1.9    0.4   10000   0.072   2187   0.042   0.035   0.039
1.9    0.5   10000   0.043   2187   0.041   0.035   0.039
1.9    0.6   10000   0.025   2187   0.041   0.035   0.039
1.9    0.7   10000   0.017   2187   0.041   0.036   0.039
1.9    0.8   10000   0.013   2187   0.042   0.038   0.040
1.9    0.9   10000   0.013   2187   0.041   0.038   0.039

* The value of k is obtained by linear interpolation from Dufour and Kurz-Kim (2010).
Table 3. Comparison of the RMSEs of the modified truncated estimator α̂3 (RMSE3) and the characteristic function-based estimator (RMSE4) when μ = 0 and σ is unknown.

α     σ     n    RMSE3   RMSE4
0.2   3.0   20   0.514   1.477
0.4   3.0   20   0.495   1.293
0.6   3.0   20   0.421   1.134
0.8   3.0   20   0.401   1.012
1.0   3.0   20   0.446   0.912
1.2   3.0   20   0.510   0.823
1.4   3.0   20   0.588   0.757
1.6   3.0   20   0.680   0.733
1.8   3.0   20   0.763   0.746
2.0   3.0   20   0.851   0.798
0.2   5.0   20   0.512   1.424
0.4   5.0   20   0.421   1.245
0.6   5.0   20   0.346   1.110
0.8   5.0   20   0.354   0.989
1.0   5.0   20   0.411   0.882
1.2   5.0   20   0.497   0.776
1.4   5.0   20   0.572   0.687
1.6   5.0   20   0.668   0.635
1.8   5.0   20   0.763   0.625
2.0   5.0   20   0.859   0.623
0.2   3.0   30   0.471   1.486
0.4   3.0   30   0.468   1.299
0.6   3.0   30   0.416   1.127
0.8   3.0   30   0.407   1.006
1.0   3.0   30   0.453   0.895
1.2   3.0   30   0.521   0.817
1.4   3.0   30   0.605   0.753
1.6   3.0   30   0.686   0.734
1.8   3.0   30   0.767   0.748
2.0   3.0   30   0.859   0.803
0.2   5.0   30   0.476   1.433
0.4   5.0   30   0.419   1.234
0.6   5.0   30   0.339   1.103
0.8   5.0   30   0.354   0.987
1.0   5.0   30   0.422   0.885
1.2   5.0   30   0.494   0.782
1.4   5.0   30   0.583   0.709
1.6   5.0   30   0.685   0.658
1.8   5.0   30   0.770   0.669
2.0   5.0   30   0.874   0.673
0.2   3.0   40   0.426   1.494
0.4   3.0   40   0.467   1.300
0.6   3.0   40   0.418   1.123
0.8   3.0   40   0.414   0.996
1.0   3.0   40   0.462   0.891
1.2   3.0   40   0.519   0.806
1.4   3.0   40   0.595   0.750
1.6   3.0   40   0.689   0.724
1.8   3.0   40   0.784   0.757
2.0   3.0   40   0.887   0.807
0.2   5.0   40   0.444   1.439
0.4   5.0   40   0.412   1.242
0.6   5.0   40   0.338   1.100
0.8   5.0   40   0.354   0.989
1.0   5.0   40   0.422   0.880
1.2   5.0   40   0.487   0.784
1.4   5.0   40   0.584   0.720
1.6   5.0   40   0.680   0.674
1.8   5.0   40   0.769   0.692
2.0   5.0   40   0.881   0.711
0.2   3.0   50   0.393   1.500
0.4   3.0   50   0.447   1.292
0.6   3.0   50   0.411   1.117
0.8   3.0   50   0.414   0.989
1.0   3.0   50   0.466   0.885
1.2   3.0   50   0.530   0.805
1.4   3.0   50   0.612   0.737
1.6   3.0   50   0.698   0.719
1.8   3.0   50   0.778   0.751
2.0   3.0   50   0.870   0.828
0.2   5.0   50   0.411   1.451
0.4   5.0   50   0.402   1.235
0.6   5.0   50   0.344   1.098
0.8   5.0   50   0.357   0.983
1.0   5.0   50   0.415   0.879
1.2   5.0   50   0.502   0.788
1.4   5.0   50   0.598   0.716
1.6   5.0   50   0.677   0.691
1.8   5.0   50   0.782   0.703
2.0   5.0   50   0.858   0.729