Article

Improved Confidence Intervals for Expectiles

by Spiridon Penev 1,*,† and Yoshihiko Maesono 2,†
1 Department of Statistics, The University of New South Wales Sydney, Kensington, NSW 2052, Australia
2 Department of Mathematics, Chuo University, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2025, 13(3), 510; https://doi.org/10.3390/math13030510
Submission received: 14 December 2024 / Revised: 25 January 2025 / Accepted: 31 January 2025 / Published: 4 February 2025

Abstract: Expectiles were introduced to statistics around 40 years ago, but have recently gained renewed interest due to their relevance in financial risk management. In particular, the 2007–2009 global financial crisis highlighted the need for more robust risk evaluation tools, leading to the adoption of inference methods for expectiles. While first-order asymptotic inference results for expectiles are well established, higher-order asymptotic results remain underdeveloped. This study aims to fill that gap by deriving higher-order asymptotic results for expectiles, ultimately improving the accuracy of confidence intervals. The paper outlines the derivation of the Edgeworth expansion for both the standardized and studentized versions of the kernel-based estimator of the expectile, using large deviation results on U-statistics. The expansion is then inverted to construct more precise confidence intervals for the expectile. These theoretical results were applied to moderate sample sizes ranging from 20 to 200. To demonstrate the advantages of this methodology, an example from risk management is presented. The enhanced confidence intervals consistently outperformed those based on the first-order normal approximation. The methodology introduced in this paper can also be extended to other contexts.

1. Introduction

Quantile-based inference has long been a topic of interest in risk evaluation in the financial industry and other contexts. Notable applications include the following: the widely used risk measure value at risk (VaR), which corresponds directly to a quantile; coherent risk measures, which are often derived from quantile transformations; and quantile regression, which has been employed as a tool for making portfolio investment decisions.
For a continuous random variable $X$ with a cumulative distribution function (CDF) $F(x)$, density function $f(x)$, and $E|X| < \infty$, the $p$-th quantile is defined as
$$Q(p) = \inf\{x : F(x) \ge p\}.$$
Given a sample $X_1, X_2, \ldots, X_n$ from $F$, the simplest estimator of $Q(p)$ is the sample quantile. Under mild conditions, it is asymptotically normal, but its asymptotic variance $\sigma^2 = p(1-p)/f^2(Q(p))$ is large, particularly in the tails. Hence, it behaves poorly for small sample sizes, and alternative estimators are needed. An obvious choice is the kernel quantile estimator
$$\hat{Q}_{p,h_n} = \frac{1}{h_n}\int_0^1 F_n^{-1}(x)\,K\!\left(\frac{x-p}{h_n}\right)dx,$$
where $F_n^{-1}(x)$ is the inverse of the empirical distribution function, $K(\cdot)$ is a suitably chosen kernel, and $h_n$ is a bandwidth. Conditions on the bandwidth and the kernel must be imposed to ensure consistency, asymptotic normality, and the higher-order accuracy of $\hat{Q}_{p,h_n}$.
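To make the construction concrete, here is a minimal R sketch of $\hat{Q}_{p,h_n}$ as a weighted sum of order statistics; the Gaussian kernel and the fixed bandwidth are our illustrative choices, not prescriptions from the text.

```r
# Kernel quantile estimator: weighted average of order statistics.
# Weights are exact integrals of the (Gaussian) kernel over ((i-1)/n, i/n].
kernel_quantile <- function(x, p, h = 0.1) {
  xs <- sort(x)                                   # order statistics
  n  <- length(xs)
  v  <- pnorm(((1:n) / n - p) / h) -              # v_{i,n}
        pnorm(((0:(n - 1)) / n - p) / h)
  sum(v * xs) / sum(v)                            # renormalized weighted sum
}

set.seed(1)
x <- rexp(100)
kernel_quantile(x, p = 0.9)                       # compare: quantile(x, 0.9)
```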
Quantile-based inference has also been of significant interest to the authors of this paper. We derived in [1] a higher-order expansion for the standardized kernel quantile estimator, thus extending the long-standing flagship results obtained in [2,3]. Our expansion was non-trivial, because the influence of the bandwidth made the balance between bias and variance delicate.
In [4], we derived an Edgeworth expansion for the studentized version of the kernel quantile estimator, where the variance of the estimator was estimated using the jackknife method. This result is particularly important for practical applications, as the variance is rarely known in real-world scenarios. By inverting the Edgeworth expansion, we achieved a uniform improvement in coverage accuracy compared to the inversion of the asymptotically normal approximation. Our results are applicable in improving the inference for quantiles when the sample sizes are small-to-moderate. This situation often occurs in practice. For example, if monthly loss data are used in risk analysis, then the accumulated data for 10 years amounts to 120 observations. Another example is that of the daily data gathered from the stock market, which has approximately 250 active trading days each year.
The global financial crisis (GFC) from mid-2007 to early 2009 had a profound effect on many banks around the world. There were numerous reasons for the crisis; however, in mathematical terms, one important reason was that the banks’ risk estimates based on the widespread measure value at risk (VaR) happened to be very inaccurate. The key shortcomings of VaR are that it assumes normal market conditions, tends to ignore the tail risk, and shows a tendency to reduce risk estimates during calm periods (encouraging leverage) and increase them during volatile periods (forcing deleveraging). These limitations were seriously scrutinized after the GFC, with bank regulators proposing alternative risk measures. As a result, the seminal paper of Newey and Powell [5] introducing expectiles was rediscovered, and many advantageous properties of expectiles were noted and extended.
The coherency requirement for a risk measure in finance was first formulated in the seminal paper by [6] and has been widely used since then. We note that VaR is not a coherent risk measure, mainly because it does not satisfy the subadditivity property. The average value at risk (AVaR) does satisfy the subadditivity property and turns out to be a coherent risk measure. It was considered around 2009 as an alternative risk measure by the Basel Committee of Banking Supervision. Meanwhile, academic research on the properties of risk measures continued. The elicitability property was pointed out as another essential requirement in [7]. The latter property is an important requirement associated with effective backtesting. It then turned out that expectiles “have it all”, as they are simultaneously coherent and elicitable. Moreover, it has been shown (for example, in [8]) that expectiles are the only law-invariant risk measures that are simultaneously coherent and elicitable. Furthermore, ref. [9] noted that another desirable property, the isotonicity with respect to the usual stochastic order, also holds.
Due to the above properties, expectiles became widely adopted in risk management after the GFC. The asymptotic properties of expectile regression estimators and test statistics were investigated more deeply following the paper by [5]. Asymptotic properties of the sample expectiles, such as uniform consistency and asymptotic normality, were shown under weak assumptions, and a central limit theorem for the expectile process was proven in [10]. Expectile estimators for heavy-tailed distributions were also investigated in the same paper. Under strict stationarity assumptions, the authors in [11] showed several first-order asymptotic properties, such as consistency, asymptotic normality, and qualitative robustness. In [12], the authors compared the merits of estimators in the quantile regression and expectile regression models.
Less attention has been paid in the literature to confidence interval construction, with the only exception, to our knowledge, being the paper by [13]. Again, these intervals are constructed based on first-order asymptotics using asymptotic normality.
From a practical point of view, ref. [14] is a review paper which discusses known properties of expectiles and their financial meaning, and which presents real-data examples. The paper also refines some of the results in [15]. Another similar review discussing the regulatory implementation of expectiles is [16].
While first-order asymptotic inference results for expectiles are well established, developing higher-order asymptotics is more challenging.
It was natural for us, therefore, to turn to expectiles and to propose methods for improved inference about expectiles for small-to-moderate sample sizes. The methodology to achieve this goal is to derive the higher-order Edgeworth expansion for both the standardized and studentized versions of the kernel-based estimator of the expectile. By inverting the expansion, we can construct improved confidence intervals for the expectile. This article suggests this methodology and illustrates its effectiveness. The article is organized as follows. In Section 2, we introduce some notations, definitions, and auxiliary statements needed in the subsequent sections. Section 3 presents our main results about the Edgeworth expansion of the asymptotic distribution of the estimator. This section is subdivided into three subsections. The first subsection deals with the standardized kernel-based expectile estimator. The second subsection discusses the related results for the studentized kernel-based expectile estimator. Whilst the first subsection is mainly of theoretical interest, the results of the second subsection can be directly applied to derive more accurate confidence intervals for the expectile of the population for small-to-moderate samples. The third subsection discusses a Cornish–Fisher-type approximation for the quantile of the kernel-based expectile estimator. The application for accurate confidence interval construction presents the main purpose of our methodology. Its efficiency is illustrated numerically in Section 4. Section 5 summarizes our findings. The technical results and proofs of the main statements of the paper are postponed to Appendix A.

2. Materials and Methods

From a methodological standpoint, $\hat{Q}_{p,h_n}$ is an L-estimator: it can be written as a weighted sum of the order statistics $X_{(i)}$, $i = 1, 2, \ldots, n$:
$$\hat{Q}_{p,h_n} = \sum_{i=1}^{n} v_{i,n}X_{(i)}, \qquad v_{i,n} = \frac{1}{h_n}\int_{(i-1)/n}^{i/n} K\!\left(\frac{x-p}{h_n}\right)dx.$$
Consider a random variable $X \in L^2(\Omega, \mathcal{F}, P)$. Using the notations $x_+ = \max(x, 0)$, $x_- = \max(-x, 0)$, Newey and Powell introduced in [5] the expectile $e_\tau$ as the minimizer of the asymmetric quadratic loss,
$$e_\tau(X) = \operatorname*{argmin}_{y\in\mathbb{R}}\left[\tau\,\|(X-y)_+\|_2^2 + (1-\tau)\,\|(X-y)_-\|_2^2\right],$$
and we realize that its empirical variant can also be represented as an L-statistic. When $\tau = 1/2$, we obtain $e_{1/2}(X) = E(X)$, which means that the expectiles can be interpreted as an asymmetric generalization of the mean. In addition, it has been shown in several papers (see, for example, [14]) that the so-called expectile-VaR ($EVaR_\tau = -e_\tau(X)$) is a coherent risk measure when $\tau \in (0, 1/2]$, as it satisfies the four coherency axioms from [6]. Any value of $\tau \in (0,1)$ can be used in the definition of the expectile but, for the reason mentioned in the previous sentence, we will assume that $\tau \in (0, 1/2]$ in the theoretical developments of this paper.
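As a quick illustration of this definition, the sketch below computes a sample expectile in R by directly minimizing the empirical asymmetric quadratic loss; the use of optimize() is our choice for the example and is not the kernel estimator studied in this paper.

```r
# Sample expectile via the asymmetric least-squares criterion.
asym_loss <- function(y, x, tau) {
  mean(tau * pmax(x - y, 0)^2 + (1 - tau) * pmax(y - x, 0)^2)
}
sample_expectile <- function(x, tau) {
  # the expectile always lies between min(x) and max(x)
  optimize(asym_loss, range(x), x = x, tau = tau)$minimum
}

set.seed(1)
x <- rnorm(1000)
sample_expectile(x, 0.5)   # tau = 1/2 recovers the sample mean (about 0)
```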
The asymptotic properties of L-statistics are usually discussed by first decomposing them into a U-statistic plus a small-order remainder term and then applying the asymptotic theory of the U-statistic. Initially, the L-statistic is written as $\int_0^1 F_n^{-1}(u)J(u)\,du$, where $F_n$ is the empirical distribution function and the score function $J(u)$ does not involve $n$. For the presentation (2), however, such a decomposition is impossible, as the “score function” becomes a delta function in the limit. Therefore, a novel dedicated approach is needed. In the case of quantiles, details about such an approach are given in [4]. Our current paper shows how the issue can also be resolved in the case of expectiles. The main tools in our derivations are some large deviation results on U-statistics from [17] and standard asymptotic results from [18].
Remark 1. 
We mention, in passing, that in the paper by [19] it is shown that the expectiles of a distribution $F$ are in a one-to-one correspondence with the quantiles of another distribution $G$ that is related to $F$ by an explicit formula. It was tempting to use this relation to utilize our results from [4] for constructing confidence intervals for the expectiles of $F$. We examined this option and realized that the correspondence is quite complicated, involving functionals of $F$ that need to be estimated from the data. Our conclusion was that proceeding this way was not an option for constructing precise confidence intervals for the expectiles. Hence, we work directly with the definition of the expectile of $F$ from the very beginning.
We start with some initial notations, definitions, and auxiliary statements.
We consider a sample $X_1, \ldots, X_n$ of $n$ independently and identically distributed random variables with density and cumulative distribution functions $f$ and $F$, respectively.
Let us define
$$I_\tau(x, y) = \tau(y-x)I(y \ge x) - (1-\tau)(x-y)I(y < x),$$
where $I(\cdot)$ is an indicator function. Looking at the original definition (3), we can define the (true theoretical) expectile $y^*$ as a solution to the equation
$$E[I_\tau(y^*, X_1)] = 0.$$
Using the relation $(y-x)_+ = (x-y)_-$, we realize that the defining Equation (4) leads to the same solution as the defining Equation (2) in [14] or the defining Equation (2) in [10].
As discussed in [10], the $\tau$-expectile $y^*$ satisfies the equation
$$\tau\int_{y^*}^{\infty}\{1-F(s)\}\,ds - (1-\tau)\int_{-\infty}^{y^*}F(s)\,ds = 0.$$
Using integration by parts, we have the following proposition:
Proposition 1. 
Assume that $E|X_1| < \infty$. Then, we have
$$\tau(\mu - y^*) - (1-2\tau)F^{[2]}(y^*) = 0,$$
where $\mu = E(X_1)$ and $F^{[2]}(x) = \int_{-\infty}^{x} F(s)\,ds$.
Thus, an estimator of the expectile is given by a solution of the equation
$$\tau(\bar{X} - \tilde{y}) - (1-2\tau)\int_{-\infty}^{\tilde{y}} F_n(s)\,ds = 0,$$
where $\bar{X}$ is the sample mean and $F_n(\cdot)$ is the empirical distribution function.
Holzmann and Klar in [10] showed the uniform consistency and asymptotic normality of y ˜ . In this paper, we study the higher-order asymptotic properties of the expectile estimator. To study the higher-order asymptotics, we use a kernel-type estimator F ^ ( · ) of the distribution function F ( · ) , instead of F n ( · ) .
Let us define kernel estimators of the density and distribution function:
$$\hat{f}(x) = \frac{1}{nh}\sum_{i=1}^{n}K\!\left(\frac{x-X_i}{h}\right), \qquad \hat{F}(x) = \frac{1}{n}\sum_{i=1}^{n}W\!\left(\frac{x-X_i}{h}\right),$$
where $K(\cdot)$ is a kernel function and $W(\cdot)$ is an integral of $K(\cdot)$. We assume that
$$(a1)\ K(s) \ge 0, \quad \int K(s)\,ds = 1, \quad K(-s) = K(s),$$
and $W(x) = \int_{-\infty}^{x} K(s)\,ds$. Here, $h$ is a bandwidth with $h \to 0$ and $nh \to \infty$. Hereafter, we assume that the bandwidth satisfies $h = O(n^{-1/4}(\log n)^{-1})$.
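For illustration, here is a minimal R sketch of $\hat{f}$ and $\hat{F}$ with the Epanechnikov kernel used later in the paper ($W$ is its exact antiderivative); the specific kernel is an assumption for this example.

```r
# Kernel density and distribution estimators f-hat and F-hat.
K_epa <- function(u) 0.75 * (1 - u^2) * (abs(u) <= 1)
W_epa <- function(t) ifelse(t <= -1, 0,
                     ifelse(t >=  1, 1, 0.75 * (t - t^3 / 3) + 0.5))
f_hat <- function(s, x, h) mean(K_epa((s - x) / h)) / h
F_hat <- function(s, x, h) mean(W_epa((s - x) / h))
```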
As in the case of quantile estimation, we are using a kernel-smoothed estimator of the cumulative distribution function in the construction of the expectile estimator. The reason for switching to the kernel-smoothed version of the empirical distribution function in the definition of our expectile estimator is that only for this version is it possible to show the validity of the Edgeworth expansion. As discussed in detail in [4], if we use a kernel estimator with an informed choice of bandwidth and a suitable kernel, then the resulting expectile estimator can be easily studentized; the Edgeworth expansion up to order $o(n^{-1/2})$ for the studentized version will be derived, and the theoretical quantities involved in the expansion can be shown to be easily estimated. In addition, a Cornish–Fisher inversion can be used to construct confidence intervals for the expectile (which is the main goal of the inference in this paper). The resulting confidence intervals are more precise than the intervals obtained via inversion of the normal approximation and can be used to improve the coverage accuracy for moderate sample sizes.
Hence, from now on we will discuss the higher-order asymptotic properties of the estimator $\hat{y}$ of $y^*$ that satisfies
$$\tau\int_{\hat{y}}^{\infty}\{1-\hat{F}(s)\}\,ds - (1-\tau)\int_{-\infty}^{\hat{y}}\hat{F}(s)\,ds = 0.$$
Let us define
$$A(t) = \int_{-\infty}^{t}W(s)\,ds, \qquad \hat{F}^{[2]}(s) = \frac{1}{n}\sum_{i=1}^{n}hA\!\left(\frac{s-X_i}{h}\right).$$
Then, similarly to Proposition 1, we have the following Proposition:
Proposition 2. 
If the kernel function satisfies condition $(a1)$, the kernel expectile estimator $\hat{y}$ is given by the solution of the equation
$$\tau(\bar{X} - \hat{y}) - (1-2\tau)\hat{F}^{[2]}(\hat{y}) = 0.$$
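Since the left-hand side of this equation is strictly decreasing in $\hat{y}$, the root can be found numerically; the paper's implementation uses uniroot in R (see Appendix A), and the sketch below follows that route with the Epanechnikov kernel, whose integrated functions $W$ and $A$ are exact piecewise polynomials.

```r
# Kernel expectile estimator y-hat from Proposition 2.
A_epa <- function(t) ifelse(t <= -1, 0,
                     ifelse(t >=  1, t,
                     3 / 8 * t^2 - t^4 / 16 + t / 2 + 3 / 16))

kernel_expectile <- function(x, tau,
                             h = length(x)^(-1/4) / log(length(x))) {
  F2_hat <- function(s) mean(h * A_epa((s - x) / h))      # F-hat^[2](s)
  g <- function(y) tau * (mean(x) - y) - (1 - 2 * tau) * F2_hat(y)
  uniroot(g, c(min(x) - h, max(x) + h))$root              # g is decreasing
}

set.seed(1)
x <- rexp(100)
kernel_expectile(x, tau = 0.3)
```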
For our further discussion, we define the following quantities:
Definition 1. 
$$\begin{aligned}
u_1(X_i) &= -(1-2\tau)\left\{hA\!\left(\frac{y^*-X_i}{h}\right) - F^{[2]}(y^*)\right\} + \tau(X_i - \mu), \\
b_{1n} &= -(1-2\tau)\,E\left[hA\!\left(\frac{y^*-X_i}{h}\right) - F^{[2]}(y^*)\right], \\
\tilde{u}_1(X_i) &= u_1(X_i) - b_{1n}, \qquad \xi_n^2 = E(\tilde{u}_1^2(X_1)), \\
b_{2n} &= E\left[W\!\left(\frac{y^*-X_i}{h}\right)\right] - F(y^*), \qquad \tilde{W}(X_i) = W\!\left(\frac{y^*-X_i}{h}\right) - F(y^*) - b_{2n}, \\
b_{3n} &= E\left[\frac{1}{h}K\!\left(\frac{y^*-X_i}{h}\right)\right] - f(y^*), \qquad \tilde{K}(X_i) = \frac{1}{h}K\!\left(\frac{y^*-X_i}{h}\right) - f(y^*) - b_{3n}, \\
C &= \tau + (1-2\tau)F(y^*), \qquad \tilde{C} = \tau + (1-2\tau)\hat{F}(y^*), \qquad \hat{C} = \tau + (1-2\tau)\hat{F}(\hat{y}).
\end{aligned}$$
Note that $E[\tilde{u}_1(X_i)] = E[\tilde{W}(X_i)] = E[\tilde{K}(X_i)] = 0$ holds and that $b_{1n}$, $b_{2n}$, and $b_{3n}$ denote the biases of the kernel estimators of $F^{[2]}(y^*)$, $F(y^*)$, and $f(y^*)$.
Using Equations (6) and (7), we have
$$\tau(\bar{X}-\mu) - \tau(\hat{y}-y^*) - (1-2\tau)\left\{\hat{F}^{[2]}(\hat{y}) - F^{[2]}(y^*)\right\} = 0.$$
Since we intend to discuss the Edgeworth expansion with a residual term $o(n^{-1/2})$, we will obtain the asymptotic representation with a residual term $o_L(n^{-1/2})$, where
$$P\left(|o_L(n^{-1/2})| \ge n^{-1/2}\varepsilon_n\right) = o(n^{-1/2})$$
as $\varepsilon_n \to 0$. When we obtain the Edgeworth expansion up to the order $n^{-1/2}$, it follows from Esseen's smoothing lemma that we can ignore the terms of order $o_L(n^{-1/2})$.
Similarly to the $o_L(n^{-1/2})$ notation, we will also be using the $o_l(n^{-1/2})$ notation, defined by
$$P\left(|o_l(n^{-1/2})| \ge n^{-1/2}(\log n)^{-1}\right) = o(n^{-1/2}).$$
Note that we can also ignore the $o_l(n^{-1/2})$ terms when we discuss the Edgeworth expansion with residual term $o(n^{-1/2})$.

3. Results

3.1. Edgeworth Expansion for the Standardized Expectile

In this subsection, we will obtain an asymptotic representation of the standardized expectile $\sqrt{n}\,\frac{C}{\xi_n}(\hat{y}-y^*)$ and its Edgeworth expansion. This expansion is mainly of theoretical interest, as the constant $C$ and the normalizing quantity $\xi_n$ depend on parameters of the unknown population distribution. Later, in Section 3.2, we will formulate the related but more practicable expansions of the studentized expectile. They do not depend on unknown population parameters and can also be inverted to deliver accurate asymptotic confidence intervals for the expectile.
We assume that the following conditions hold:
$$(a2)\ C \ne 0, \qquad (a3)\ E|X_1-\mu|^{5+\delta} < \infty.$$
Theorem 1. 
We assume that conditions $(a1)$, $(a2)$, and $(a3)$ hold. Furthermore, we assume that $K(\cdot)$, the derivatives $f'(\cdot)$ and $K'(\cdot)$ are bounded, and that $\int s^4K(s)\,ds < \infty$. Then:
(1)
For the standardized expectile estimator, we have
$$\sqrt{n}\,\frac{C}{\xi_n}(\hat{y}-y^*) = n^{1/2}\frac{b_{1n}}{\xi_n} + n^{-1/2}\frac{\nu_B}{\xi_n} + n^{-1/2}\sum_{i=1}^{n}\frac{\tilde{u}_1(X_i)}{\xi_n} + n^{-3/2}\sum_{1\le i<j\le n}\frac{\nu_2(X_i,X_j)}{\xi_n} + o_L(n^{-1/2}),$$
where
$$\begin{aligned}
\nu_B &= -(1-2\tau)\left\{\frac{f(y^*)}{2C^2}E[\tilde{u}_1^2(X_1)] + \frac{1}{C}E[\tilde{W}(X_1)\tilde{u}_1(X_1)]\right\}, \\
\nu_2(X_i,X_j) &= -(1-2\tau)\left\{\frac{f(y^*)}{C^2}\tilde{u}_1(X_i)\tilde{u}_1(X_j) + \frac{1}{C}\big[\tilde{W}(X_i)\tilde{u}_1(X_j) + \tilde{W}(X_j)\tilde{u}_1(X_i)\big]\right\}.
\end{aligned}$$
(2)
The asymptotic variance of $\hat{y}$ is equal to $\frac{1}{C^2}\mathrm{Var}[\tilde{u}(X_1)]$, where
$$\mathrm{Var}[\tilde{u}(X_1)] = (2-4\tau)F^{[3]}(y^*) + \tau^2(y^*-\mu)^2 + \tau^2\sigma^2 + O(h),$$
$\sigma^2 = V(X_1)$, and
$$F^{[3]}(x) = \int_{-\infty}^{x} F^{[2]}(s)\,ds.$$
(3)
The Edgeworth expansion is given by
$$P\left(\sqrt{n}\,\frac{C}{\xi_n}(\hat{y}-y^*) \le x\right) = Q\left(x - n^{1/2}\frac{b_{1n}}{\xi_n}\right) + o(n^{-1/2}),$$
where
$$\kappa = \frac{1}{\xi_n^3}E[\tilde{u}_1^3(X_1)] + \frac{3}{\xi_n^3}E[\tilde{u}_1(X_1)\tilde{u}_1(X_2)\nu_2(X_1,X_2)]$$
and
$$Q(x) = \Phi(x) - \phi(x)\,n^{-1/2}\left\{\frac{\nu_B}{\xi_n} + \frac{\kappa(x^2-1)}{6}\right\}.$$
Remark 2. 
It follows from the results of Holzmann and Klar in [10] that
$$\begin{aligned}
E[I_\tau^2(y^*, X_1)] &= E\big[\tau^2(X_1-y^*)^2 I(X_1 \ge y^*) + (1-\tau)^2(y^*-X_1)^2 I(X_1 < y^*)\big] \\
&= \tau^2\int_{y^*}^{\infty}(s-y^*)^2 f(s)\,ds + (1-\tau)^2\int_{-\infty}^{y^*}(y^*-s)^2 f(s)\,ds \\
&= \tau^2\int_{-\infty}^{\infty}(s-y^*)^2 f(s)\,ds + (1-2\tau)\int_{-\infty}^{y^*}(y^*-s)^2 f(s)\,ds \\
&= \tau^2\int_{-\infty}^{\infty}(s-\mu+\mu-y^*)^2 f(s)\,ds + (1-2\tau)\left\{\Big[(y^*-s)^2 F(s)\Big]_{-\infty}^{y^*} + 2\int_{-\infty}^{y^*}(y^*-s)F(s)\,ds\right\} \\
&= \tau^2\sigma^2 + \tau^2(\mu-y^*)^2 + (1-2\tau)\left\{\Big[2(y^*-s)F^{[2]}(s)\Big]_{-\infty}^{y^*} + 2\int_{-\infty}^{y^*}F^{[2]}(s)\,ds\right\} \\
&= \tau^2\sigma^2 + \tau^2(\mu-y^*)^2 + (2-4\tau)F^{[3]}(y^*).
\end{aligned}$$
Compared with the result of Corollary 4 on p. 2359 in [10], we realized that, as expected, the first-order approximations of the asymptotic variances of the kernel-smoothed expectile estimator in our paper and of the empirical distribution-based estimator discussed in [10] coincide.
It is easy to see that
$$\frac{3}{\xi_n^3}E[\tilde{u}_1(X_1)\tilde{u}_1(X_2)\nu_2(X_1,X_2)] = -\frac{3(1-2\tau)}{\xi_n^3}\left\{\frac{\xi_n^4 f(y^*)}{C^2} + \frac{\xi_n^2}{C}E[\tilde{W}(X_1)\tilde{u}_1(X_1)]\right\} = -3(1-2\tau)\left\{\frac{\xi_n f(y^*)}{C^2} + \frac{1}{C\xi_n}E[\tilde{W}(X_1)\tilde{u}_1(X_1)]\right\}$$
and this relation can be used to write down the formula for κ in (9) in an alternative way.
The proof of the above Theorem will be presented in Appendix A. It relies heavily on some large deviations results for the U-statistics that we summarized in Lemma A1 (whose formulations and proof are also postponed to Appendix A). Using the results of the latter Lemma, we can obtain the evaluation of the order of the asymptotic approximations and expansions of the differences ( y ^ y * ) , ( y ^ y * ) 2 , ( y ^ y * ) 3 (also presented in Lemma A2 in Appendix A). Combining the evaluations from Lemma A2 essentially guarantees the asymptotic representation in the standardized case given in Theorem 1.

3.2. Edgeworth Expansion for the Studentized Expectile

As pointed out in many papers, the Edgeworth expansion of the studentized estimator is more important than that of the standardized estimator. In addition, it is the studentized version that can be inverted in practical settings to deliver confidence intervals for the unknown expectile. To obtain the asymptotic representation of the studentized estimator, we need to construct suitable estimators $\hat{C}$ and $\hat{\xi}_n$. $\hat{C}$ is given in Definition 1 above. Next, we proceed to obtain an estimator of $E[u_1^2(X_1)]$. Analogously to the ordinary estimation of the population variance, putting
$$\hat{u}_1(X_i) = -(1-2\tau)\,hA\!\left(\frac{\hat{y}-X_i}{h}\right) + \tau X_i,$$
we can obtain a consistent estimator:
$$\hat{\xi}_n^2 = \frac{1}{n-1}\sum_{i=1}^{n}\hat{u}_1^2(X_i) - \frac{n}{n-1}\left(\frac{1}{n}\sum_{i=1}^{n}\hat{u}_1(X_i)\right)^2.$$
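A small R sketch of this variance estimator, under our reconstruction of the sign conventions in $\hat{u}_1$ (the constant terms of $u_1$ drop out of the variance); A_epa is the integrated-kernel function from the earlier sketch.

```r
# Variance estimator xi-hat_n^2 based on u-hat_1(X_i).
xi2_hat <- function(x, y_hat, tau, h) {
  u1 <- -(1 - 2 * tau) * h * A_epa((y_hat - x) / h) + tau * x
  n  <- length(x)
  sum(u1^2) / (n - 1) - n / (n - 1) * mean(u1)^2
}
```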
Let us now analyze the following studentized expectile estimator:
$$\sqrt{n}\,\frac{\hat{C}}{\hat{\xi}_n}(\hat{y}-y^*).$$
We now introduce our next set of notations:
Definition 2. 
Let us define
$$\begin{aligned}
\zeta(X_i) &= \frac{1}{2\xi_n^3}\{\tilde{u}_1^2(X_i) - \xi_n^2\}, \qquad B_n = \frac{1-2\tau}{2C^2}f(y^*)\,\xi_n - E[\tilde{u}_1(X_1)\zeta(X_1)], \\
u_1^*(X_i) &= \frac{\tilde{u}_1(X_i)}{\xi_n}, \qquad u_2^*(X_i, X_j) = \frac{1-2\tau}{C^2\xi_n}f(y^*)\,\tilde{u}_1(X_i)\tilde{u}_1(X_j) - \tilde{u}_1(X_i)\zeta(X_j) - \tilde{u}_1(X_j)\zeta(X_i).
\end{aligned}$$
Similarly to the standardized expectile estimator, we can derive the asymptotic representation of the studentized estimator in the next Theorem:
Theorem 2. 
Under the same assumptions as in Theorem 1:
(1)
For the studentized expectile estimator, we obtain
$$n^{1/2}\frac{\hat{C}(\hat{y}-y^*)}{\hat{\xi}_n} = n^{1/2}\frac{b_{1n}}{\xi_n} + n^{-1/2}B_n + n^{-1/2}\sum_{i=1}^{n}u_1^*(X_i) + n^{-3/2}\sum_{1\le i<j\le n}u_2^*(X_i,X_j) + o_l(n^{-1/2}).$$
(2)
We have the Edgeworth expansion with residual term $o(n^{-1/2})$:
$$P\left(\sqrt{n}\,\frac{\hat{C}(\hat{y}-y^*)}{\hat{\xi}_n} \le x\right) = Q_S\left(x - n^{1/2}\frac{b_{1n}}{\xi_n}\right) + o(n^{-1/2}),$$
where
$$\kappa_S = E[\{u_1^*(X_1)\}^3] + 3E[u_1^*(X_1)u_1^*(X_2)u_2^*(X_1,X_2)]$$
and
$$Q_S(x) = \Phi(x) - n^{-1/2}\phi(x)\left\{B_n + \frac{\kappa_S(x^2-1)}{6}\right\}.$$
The proof of Theorem 2 is presented in Appendix A.

3.3. Cornish–Fisher-Type Approximation of the α-Quantile of the Studentized Expectile

Here, we will obtain an approximation of the $\alpha$-quantile $e_\alpha$ of the studentized expectile, where
$$\alpha = P\left(\sqrt{n}\,\frac{\hat{C}(\hat{y}-y^*)}{\hat{\xi}_n} \le e_\alpha\right) = Q_S\left(e_\alpha - n^{1/2}\frac{b_{1n}}{\xi_n}\right) + o(n^{-1/2}).$$
Let us define
$$e_\alpha^* = e_\alpha - n^{1/2}\frac{b_{1n}}{\xi_n}.$$
Then, expanding around the $\alpha$-quantile $z_\alpha$ of $N(0,1)$, we have
$$e_\alpha^* = z_\alpha + n^{-1/2}\,\frac{\phi(z_\alpha)\{B_n + P_S(z_\alpha)\}}{Q_S'(z_\alpha)} + O(n^{-1}) = z_\alpha + n^{-1/2}\{B_n + P_S(z_\alpha)\} + O(n^{-1}),$$
where $P_S(x) = \kappa_S(x^2-1)/6$ and $\kappa_S = \frac{3(1-2\tau)f(y^*)\xi_n}{C^2} - \frac{2E[\tilde{u}_1^3(X_1)]}{\xi_n^3}$. For the $\alpha$-quantile $e_\alpha$, we have
$$e_\alpha = z_\alpha + n^{1/2}\frac{b_{1n}}{\xi_n} + n^{-1/2}\{B_n + P_S(z_\alpha)\} + O(n^{-1}).$$
Since
$$b_{1n} = -\frac{h^2(1-2\tau)f(y^*)}{2}\int u^2 K(u)\,du + O(h^4),$$
we have an estimator of $b_{1n}$:
$$\hat{b}_{1n} = -\frac{h^2(1-2\tau)\hat{f}(\hat{y})}{2}\int u^2 K(u)\,du.$$
It is easy to see that
$$B_n = \frac{1-2\tau}{2C^2}f(y^*)\xi_n - \frac{E[\{\tilde{u}_1(X_1)\}^3]}{2\xi_n^3}.$$
Thus, we have estimators of $B_n$ and $\kappa_S$ as follows:
$$\hat{B}_n = \frac{1-2\tau}{2\hat{C}^2}\hat{f}(\hat{y})\hat{\xi}_n - \frac{\hat{\mu}_3}{2\hat{\xi}_n^3}, \qquad \hat{\kappa}_S = \frac{3(1-2\tau)}{\hat{C}^2}\hat{f}(\hat{y})\hat{\xi}_n - \frac{2\hat{\mu}_3}{\hat{\xi}_n^3},$$
where
$$\hat{\mu}_3 = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{u}(X_i) - \frac{1}{n}\sum_{j=1}^{n}\hat{u}(X_j)\right)^3.$$
Therefore, we have an estimator of the $\alpha$-quantile $e_\alpha$:
$$\hat{e}_\alpha = z_\alpha + n^{1/2}\frac{\hat{b}_{1n}}{\hat{\xi}_n} + n^{-1/2}\left\{\hat{B}_n + \frac{\hat{\kappa}_S(z_\alpha^2-1)}{6}\right\}.$$
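A sketch of this estimate in R, assembling $\hat{B}_n$, $\hat{\kappa}_S$, and $\hat{b}_{1n}$ from the displayed formulas (the signs follow our reconstruction of those displays); all inputs are the data-driven quantities defined earlier, and 0.2 is the Epanechnikov value of $\int u^2K(u)\,du$ used later in the paper.

```r
# Cornish–Fisher estimate of the alpha-quantile of the studentized expectile.
e_alpha_hat <- function(alpha, n, tau, f_hat, C_hat, xi_hat, mu3_hat, h) {
  B_hat  <- (1 - 2 * tau) / (2 * C_hat^2) * f_hat * xi_hat -
            mu3_hat / (2 * xi_hat^3)
  kS_hat <- 3 * (1 - 2 * tau) / C_hat^2 * f_hat * xi_hat -
            2 * mu3_hat / xi_hat^3
  b1_hat <- -h^2 * (1 - 2 * tau) * f_hat / 2 * 0.2
  z <- qnorm(alpha)
  z + sqrt(n) * b1_hat / xi_hat + (B_hat + kS_hat * (z^2 - 1) / 6) / sqrt(n)
}
```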

4. Discussion

Given that the main application domain of expectiles has been risk management, we also want to illustrate the application of our methodology in this area. As discussed in Section 2, $EVaR_\tau = -e_\tau(X)$ is a coherent risk measure when $\tau \in (0, 1/2]$. It is easy to check (or compare p. 46 of [15]) that $e_{1-\tilde{\tau}}(-X) = -e_{\tilde{\tau}}(X)$ holds. In addition, most interest in risk management is in the tails. If the random variable of interest $X$ represents an outcome, then $-X$ represents a loss, and one would be interested in losses in the tail. To illustrate the effectiveness of our approach for constructing improved confidence intervals, we need to compare simulation outcomes with the population distribution for which the true expectile is known precisely. Such examples are relatively scarce in the literature. Some suitable exceptions are discussed in [14]. One of these exceptions is the exponential distribution, which we chose for our illustrations below.
Setting $Z = -X$ to be standard exponentially distributed, we have for the values of $\tilde{\tau}$ the relation
$$e_{\tilde{\tau}}(Z) = 1 + W\!\left(\frac{2\tilde{\tau}-1}{(1-\tilde{\tau})e}\right),$$
where $W(\cdot)$ is the Lambert W function defined implicitly by means of the equation
$$W(z)\exp(W(z)) = z$$
(and we note that $W(x) \sim \log(x)$ as $x \to \infty$). Formula (13) for the expectile of the exponential distribution is derived on page 495 in [14].
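In R, the true expectile of the standard exponential can be evaluated directly from Formula (13); the paper reports using the function lambertW0 from the lamW package for this purpose.

```r
# True expectile of the standard exponential via the Lambert W function.
library(lamW)
exp_expectile <- function(tau) {
  1 + lambertW0((2 * tau - 1) / ((1 - tau) * exp(1)))
}
exp_expectile(0.5)   # equals 1, the mean of the standard exponential
```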
We have used a symmetric kernel $K(x)$, compactly supported on $(-1, 1)$. Such a kernel is said to be of order $m$ if its $m$-th derivative satisfies $K^{(m)} \in \mathrm{Lip}(\beta)$ for some $\beta > 0$ and
$$\int_{-1}^{1}K(x)\,dx = 1, \qquad \int_{-1}^{1}x^i K(x)\,dx = 0,\ i = 1, 2, \ldots, m-1, \qquad \int_{-1}^{1}x^m K(x)\,dx \ne 0.$$
In our numerical experiments, we used the classical second-order Epanechnikov kernel,
$$K(x) = \frac{3}{4}(1-x^2)\,I(|x| \le 1).$$
With it, the factor in the definition of the estimator $\hat{b}_{1n}$ becomes $\int u^2K(u)\,du = 0.2$.
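This constant is easy to verify numerically:

```r
# Check the Epanechnikov kernel constant used in b-hat_{1n}.
K_epa <- function(u) 0.75 * (1 - u^2) * (abs(u) <= 1)
integrate(function(u) u^2 * K_epa(u), -1, 1)$value   # 0.2
```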
There are at least two ways to produce accurate confidence intervals at level $(1-\alpha)$ for the expectile when exploiting the Edgeworth expansion of its studentized version. One approach (we call it the CF method) is based on using the estimated values $\hat{e}_{\alpha/2}$ and $\hat{e}_{1-(\alpha/2)}$ obtained from Formula (12). Then, the left- and right-hand endpoints of the $(1-\alpha)\times 100\%$ confidence interval for $EVaR = -e_\tau(X)$ at a given $\tau$ are obtained as
$$\left(\hat{y} - \hat{e}_{1-\frac{\alpha}{2}}\,\hat{\xi}_n/(\hat{C}\sqrt{n}),\ \ \hat{y} - \hat{e}_{\frac{\alpha}{2}}\,\hat{\xi}_n/(\hat{C}\sqrt{n})\right).$$
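In code, the CF interval is a one-liner once the estimated quantiles are available (a sketch; e_lo and e_hi denote $\hat{e}_{\alpha/2}$ and $\hat{e}_{1-\alpha/2}$):

```r
# CF-method confidence interval (15) for the expectile.
cf_interval <- function(y_hat, e_lo, e_hi, xi_hat, C_hat, n) {
  c(y_hat - e_hi * xi_hat / (C_hat * sqrt(n)),
    y_hat - e_lo * xi_hat / (C_hat * sqrt(n)))
}
```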
Another approach (we call it numerical inversion) is to use numerical root-finding procedures to solve the two equations
$$\hat{Q}_S\left(\eta_1 - n^{1/2}\frac{\hat{b}_{1n}}{\hat{\xi}_n}\right) = \frac{\alpha}{2}, \qquad \hat{Q}_S\left(\eta_2 - n^{1/2}\frac{\hat{b}_{1n}}{\hat{\xi}_n}\right) = 1 - \frac{\alpha}{2},$$
and construct the confidence interval as $(\eta_1, \eta_2)$. Here,
$$\hat{Q}_S(x) = \Phi(x) - n^{-1/2}\phi(x)\left\{\hat{B}_n + \frac{\hat{\kappa}_S(x^2-1)}{6}\right\}.$$
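A sketch of this inversion in R; the bracketing interval (-10, 10) for uniroot is an assumption that comfortably contains the standardized quantiles in our setting.

```r
# Numerical inversion of the estimated Edgeworth expansion Q-hat_S.
QS_hat <- function(x, n, B_hat, kS_hat) {
  pnorm(x) - dnorm(x) * (B_hat + kS_hat * (x^2 - 1) / 6) / sqrt(n)
}
invert_QS <- function(prob, n, b1_hat, xi_hat, B_hat, kS_hat) {
  uniroot(function(eta) {
    QS_hat(eta - sqrt(n) * b1_hat / xi_hat, n, B_hat, kS_hat) - prob
  }, c(-10, 10))$root
}
# eta1 <- invert_QS(alpha / 2, ...); eta2 <- invert_QS(1 - alpha / 2, ...)
```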
These two methods should be asymptotically equivalent, but they deliver different intervals for small sample sizes, with the numerical inversion delivering significantly better results in terms of closeness to the nominal coverage level.
We provide some details about the numerical implementation at the end of Appendix A.
With the standardized version, we achieved a better approximation and more precise coverage probabilities across very low sample sizes, such as $n = 10, 12, 15$, or 20, across a range of $\tau$ values, such as $\tau = 0.1, 0.2, 0.3, 0.4$, and a range of values of $\alpha$, such as $\alpha = 0.1, 0.05, 0.01$, for the $(1-\alpha)100\%$ confidence intervals. The approximations were extremely accurate for such small sample sizes. We have not reproduced all of these results here, as our main goal was to investigate the practically more relevant studentized case. We only include one graph (Figure 1), where the case $n = 12$, $\tau = 0.2$ is demonstrated graphically. The "true" cdf of the estimator used for comparison was obtained via the empirical cdf based on 50,000 simulations from standard exponentially distributed data of size $n = 12$. The resulting confidence intervals at a nominal 90% level had actual coverage of 0.90 for the Edgeworth and 0.9014 for the normal approximation. At nominal 95%, they were 0.9490 and 0.9453, respectively. At nominal 99%, they were 0.9858 for Edgeworth versus 0.9804 for the normal approximation.
In the practically relevant studentized case, we were unable to obtain such good results for sample sizes as low as the ones from the standardized case. This is, of course, to be expected, as in this case there was a need to estimate the ξ n and the C quantities using the data. The moderate sample sizes at which the CF and numerical inversion methods deliver significantly more precise results depend, of course, on the distribution of C itself. For the exponential distribution, these turn out to be in the range of 20, 50, 100, 150 to about 200. For larger sample sizes, all three methods—the normal theory-based confidence intervals, the ones obtained by the numerical inversion and the CF-based intervals—become very accurate but the discrepancy between their accuracy becomes negligibly small and, for that reason, we do not report it here.
Before presenting thorough numerical simulations, we include one illustrative graph (Figure 2) for the case n = 50 , τ = 0.3 where the studentized case is demonstrated. The graph demonstrates the virtually uniform improvement when the Edgeworth approximation is used instead of the simple normal approximation. A comparison was made with the “true” cdf (obtained via the empirical cdf based on 50,000 simulations from standard exponentially distributed data of size n = 50 ). We found that at 50,000 replications a stabilization occurred and that further increase of the replications seemed unnecessary. The resulting confidence intervals at a nominal 90 % level had actual coverage of 0.877 for the numerical inversion of the Edgeworth, with 0.869 for the normal approximation. At  95 % nominal level, the actual coverage was 0.921 for the numerical inversion of the Edgeworth versus 0.917 for the normal approximation.
Next, we include Table 1 and Table 2 showing the effects of applying our methodology for constructing confidence intervals for the expectiles. The moderate samples included in the comparison were chosen as 20, 50, 100, 150, and 200. The two tables illustrate the results for two common confidence levels used in practice ( 90 % for Table 1 and 95 % for Table 2). The best performer in each row is in bold font. Examination of Table 1 and Table 2 shows that the “true” coverage probabilities approached the nominal probabilities when the sample size increased. As expected, the discrepancies in accuracy between the different approximations also decreased when the sample size n increased. As the confidence intervals were based on asymptotic arguments, this demonstrates the consistency of our procedure. For the chosen levels of confidence, our new confidence intervals virtually always outperformed the ones based on the normal approximation. There appears to be a downward bias in the coverage probabilities across the tables for both normal and Edgeworth-based methods. This bias grows smaller as n increases above 200. We observe that the value of τ also influenced the bias, with smaller values of τ impacting the bias more significantly. This was expected, as these values of τ lead to expectiles that are further in the right tail of the distribution of the loss variable X and, hence, are more difficult to estimate. As is known, the Edgeworth approximation’s strength is in the central part of the distribution. However, at small values of τ we focus on the tail, where it does not necessarily improve over the normal approximation.

5. Conclusions

Edgeworth expansions are expected to deliver better approximations to the standardized and studentized versions of estimators of parameters of interest. The inversion of these expansions can be applied for constructing more accurate confidence intervals for these parameters when sample sizes are small or moderate. To justify the validity of these expansions, one needs to switch to using a kernel-smoothed version of the empirical distribution function in the definition of the estimator. We applied the technique for estimating the expectile, which has recently found fruitful applications in risk management. We illustrated the advantages of our procedure on simulations that utilized the exponential distribution as an example. We chose this distribution because the expectile is known precisely (not only approximately) for it. However, we stress that our procedure is fully nonparametric and can be applied for any distribution, as long as the conditions of Theorem 2 are satisfied.
Furthermore, we focused on the applications of inference about expectiles in financial data. However, this is by no means the only domain of application. As discussed in Remark 1 above, the expectiles of a distribution $F$ are in a one-to-one correspondence with the quantiles of another distribution $G$. Wherever quantile-based inference is of interest, expectile-based inference can be an alternative. In particular, a referee has suggested applications in constructing nonparametric control charts. The extension of our methodology to expectile regression settings is particularly important for applications. These applications could be a topic for further research.

Author Contributions

Conceptualization, Y.M. and S.P.; methodology, Y.M. and S.P.; software, S.P. and Y.M.; validation, S.P. and Y.M.; formal analysis, Y.M. and S.P.; writing—original draft preparation, S.P. and Y.M.; writing—review and editing, S.P. and Y.M.; visualization, S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were analyzed in this study. Simulated data were created using the R programming language.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Proposition 1. 
Using integration by parts, we have
$$\begin{aligned}
0 &= \tau\int_{y^*}^{\infty}\{1-F(s)\}\,ds - (1-\tau)\int_{-\infty}^{y^*}F(s)\,ds \\
&= \tau\Big[s\{1-F(s)\}\Big]_{y^*}^{\infty} + \tau\int_{y^*}^{\infty} s f(s)\,ds - (1-\tau)F^{[2]}(y^*) \\
&= -\tau y^*\{1-F(y^*)\} + \tau\left\{\int_{-\infty}^{\infty} s f(s)\,ds - \int_{-\infty}^{y^*} s f(s)\,ds\right\} - (1-\tau)F^{[2]}(y^*) \\
&= -\tau y^* + \tau y^* F(y^*) + \tau\mu - \tau\left\{\Big[sF(s)\Big]_{-\infty}^{y^*} - \int_{-\infty}^{y^*}F(s)\,ds\right\} - (1-\tau)F^{[2]}(y^*) \\
&= \tau(\mu - y^*) - (1-2\tau)F^{[2]}(y^*). \qquad\square
\end{aligned}$$
Proof of Proposition 2. 
Since the kernel function satisfies $\int sK(s)\,ds = 0$, we have
$$\int_{-\infty}^{\infty} s\hat{f}(s)\,ds = \frac{1}{n}\sum_{i=1}^{n}\int \frac{s}{h}K\!\left(\frac{s-X_i}{h}\right)ds = \frac{1}{n}\sum_{i=1}^{n}\int (th+X_i)K(t)\,dt = \frac{1}{n}\sum_{i=1}^{n}X_i = \bar{X}.$$
Similarly to the derivation in Proposition 1, we obtain
$$\begin{aligned}
0 &= \tau\int_{\hat{y}}^{\infty}\{1-\hat{F}(s)\}\,ds - (1-\tau)\int_{-\infty}^{\hat{y}}\hat{F}(s)\,ds \\
&= \tau\Big[s\{1-\hat{F}(s)\}\Big]_{\hat{y}}^{\infty} + \tau\int_{\hat{y}}^{\infty} s\hat{f}(s)\,ds - (1-\tau)\hat{F}^{[2]}(\hat{y}) \\
&= -\tau\hat{y}\{1-\hat{F}(\hat{y})\} + \tau\left\{\int_{-\infty}^{\infty} s\hat{f}(s)\,ds - \int_{-\infty}^{\hat{y}} s\hat{f}(s)\,ds\right\} - (1-\tau)\hat{F}^{[2]}(\hat{y}).
\end{aligned}$$
It is easy to see that
$$\int_{-\infty}^{\hat{y}} s\hat{f}(s)\,ds = \Big[s\hat{F}(s)\Big]_{-\infty}^{\hat{y}} - \int_{-\infty}^{\hat{y}}\hat{F}(s)\,ds = \hat{y}\hat{F}(\hat{y}) - \hat{F}^{[2]}(\hat{y}).$$
Thus, we have
$$\tau(\bar{X}-\hat{y}) - (1-2\tau)\hat{F}^{[2]}(\hat{y}) = 0.$$
□  
First, we note the moment conditions that help us to obtain the asymptotic representations of the statistics.
Lemma A1. 
Under the conditions of Theorem 1, for some $\delta > 0$,
$$E\big|\tilde{K}(X_1)\big|^{3+\delta} < \infty, \qquad E\big|\tilde{W}(X_1)\big|^{3+\delta} < \infty,$$
and
$$E\big|\tilde{u}(X_1)\big|^{3+\delta} < \infty.$$
Proof. 
Since $K(\cdot)$ is bounded and $W(\cdot)$ is a cumulative distribution function, we have the first and second inequalities. For $\tilde{u}(\cdot)$, it is sufficient to prove
$$E\left|hA\!\left(\frac{y^*-X_1}{h}\right)\right|^{3+\delta} < \infty.$$
From the definition, we have
$$A(t) = \int_{-\infty}^{t}W(s)\,ds = \Big[sW(s)\Big]_{-\infty}^{t} - \int_{-\infty}^{t}sK(s)\,ds.$$
Since $\int s^2K(s)\,ds < \infty$, we have
$$\lim_{s\to-\infty}sW(s) = \lim_{s\to-\infty}\frac{W(s)}{1/s} = \lim_{s\to-\infty}\frac{K(s)}{-1/s^2} = \lim_{s\to-\infty}\{-s^2K(s)\} = 0.$$
From condition (a3) and $|W(s)| \le 1$, we have
$$h^{3+\delta}\,E\left|\frac{y^*-X_1}{h}\,W\!\left(\frac{y^*-X_1}{h}\right)\right|^{3+\delta} \le E|y^*-X_1|^{3+\delta} < \infty.$$
Furthermore, for a constant $\beta > 0$, we have $\left|\int_{-\infty}^{t}sK(s)\,ds\right| \le \beta$. Thus, we obtain the desired result.    □
Let us define the order evaluation $o_M(n^{-1/2})$ via
$$E\big|o_M(n^{-1/2})\big|^{r} = O\!\left(n^{-1/2-r/2-\delta}\right)$$
for some r > 0 . As stated in our discussion in Section 2, when we consider an Edgeworth expansion with residual term o ( n 1 / 2 ) , we can ignore expressions of order o l ( n 1 / 2 ) and o L ( n 1 / 2 ) . Now, we observe that if a certain residual term R n satisfies R n = o M ( n 1 / 2 ) then it is o l ( n 1 / 2 ) .
Furthermore, let us define a U-statistic,
$$U_n = \binom{n}{r}^{-1}\sum_{1\le i_1 < i_2 < \cdots < i_r \le n} g(X_{i_1}, X_{i_2}, \ldots, X_{i_r}),$$
where $g(x_1, x_2, \ldots, x_r)$ is symmetric in its arguments. For $U_n$, using large deviation theory, we have the following lemma:
Lemma A2. 
If $E(g^2) < \infty$, we can obtain the following evaluations:
(1) It follows from Malevich and Abdalimov's results [17] that
$$P\left(\frac{|U_n - E(U_n)|}{\sqrt{V(U_n)}} \ge \log n\right) = o(n^{-1/2}).$$
(2) For $h = n^{-1/4}(\log n)^{-1}$, we have
$$P\left(h^2\,\frac{|U_n - E(U_n)|}{\sqrt{V(U_n)}} \ge n^{-1/2}(\log n)^{-1}\right) = o(n^{-1/2}),$$
and then
$$h^2\,\frac{U_n - E(U_n)}{\sqrt{V(U_n)}} = o_l(n^{-1/2}).$$
(3) Let $\beta(\cdot)$ and $\gamma(\cdot)$ be functions satisfying $E[\beta(X_1)] = E[\gamma(X_1)] = 0$, $E[\beta^2(X_1)] = O(1)$, and $E[\gamma^2(X_1)] = O(1)$. Then, we have
$$n^{-3/2}\sum_{i=1}^{n}\sum_{j=1}^{n}\beta(X_i)\gamma(X_j) = n^{-1/2}E[\beta(X_1)\gamma(X_1)] + n^{-3/2}\sum_{1\le i<j\le n}\{\beta(X_i)\gamma(X_j) + \beta(X_j)\gamma(X_i)\} + o_M(n^{-1/2}).$$
(4) For $U_n$ and $o_M(n^{-1/2})$, we have
$$P\left(\left|o_M(n^{-1/2})\,\frac{U_n - E(U_n)}{\sqrt{V(U_n)}}\right| \ge n^{-1/2}(\log n)^{-1}\right) = o(n^{-1/2}),$$
and then
$$o_M(n^{-1/2})\,\frac{U_n - E(U_n)}{\sqrt{V(U_n)}} = o_l(n^{-1/2}).$$
Proof. 
(1) The equation directly follows from [17].
(2) Since
$$n^{-1/2}h^{-2}(\log n)^{-1} = n^{2d-1/2}(\log n)^{-1}(\log n)^{2} = \log n$$
for $h = n^{-d}(\log n)^{-1}$ with $d = 1/4$, we have the desired result.
(3) Since $E[\beta^2(X_1)] = O(1)$ and $E[\gamma^2(X_1)] = O(1)$, we have
$$E\left\{n^{-3/2}\sum_{i=1}^{n}\big(\beta(X_i)\gamma(X_i) - E[\beta(X_1)\gamma(X_1)]\big)\right\}^2 = O(n^{-3})\,n\,E[\beta^2(X_1)]E[\gamma^2(X_1)] = O(n^{-3}\cdot n) = O(n^{-1/2-2/2-1/2}),$$
and then
$$n^{-3/2}\sum_{i=1}^{n}\{\beta(X_i)\gamma(X_i) - E[\beta(X_1)\gamma(X_1)]\} = o_l(n^{-1/2}).$$
(4) It is easy to see that
$$\begin{aligned}
&P\left(\left|o_M(n^{-1/2})\,\frac{U_n - E(U_n)}{\sqrt{V(U_n)}}\right| \ge n^{-1/2}(\log n)^{-1}\right) \\
&\quad= P\left(\left|o_M(n^{-1/2})\,\frac{U_n - E(U_n)}{\sqrt{V(U_n)}}\right| \ge n^{-1/2}(\log n)^{-1},\ \frac{|U_n - E(U_n)|}{\sqrt{V(U_n)}} \ge \log n\right) \\
&\qquad+ P\left(\left|o_M(n^{-1/2})\,\frac{U_n - E(U_n)}{\sqrt{V(U_n)}}\right| \ge n^{-1/2}(\log n)^{-1},\ \frac{|U_n - E(U_n)|}{\sqrt{V(U_n)}} < \log n\right) \\
&\quad\le P\left(\frac{|U_n - E(U_n)|}{\sqrt{V(U_n)}} \ge \log n\right) + P\left(|o_M(n^{-1/2})| \ge n^{-1/2}(\log n)^{-2}\right) = o(n^{-1/2}).
\end{aligned}$$
Thus, we have the desired result.    □
Lemma A3. 
For $0 < \tau < 1$, under the assumptions of Theorem 1 we have the following approximations:
(1)
$$(\hat{y} - y^*)^3 = n^{-1/2}o_M(n^{-1/2}).$$
(2)
$$\tilde{C}(\hat{y} - y^*) = \frac{1}{n}\sum_{i=1}^{n}u_1(X_i) - \frac{1-2\tau}{2}(\hat{y} - y^*)^2\hat{f}(y^*) + n^{-1/2}o_M(n^{-1/2}).$$
(3)
$$(\tilde{C} - C)(\hat{y} - y^*) = \frac{1-2\tau}{C}\left[n^{-1}E[\tilde{W}(X_1)\tilde{u}_1(X_1)] + n^{-2}\sum_{1\le i<j\le n}\{\tilde{W}(X_i)\tilde{u}_1(X_j) + \tilde{W}(X_j)\tilde{u}_1(X_i)\}\right] + n^{-1/2}o_l(n^{-1/2}).$$
(4)
$$\hat{f}(y^*)(\hat{y} - y^*)^2 = \frac{2f(y^*)}{n^2C^2}\sum_{1\le i<j\le n}\tilde{u}_1(X_i)\tilde{u}_1(X_j) + \frac{f(y^*)}{nC^2}E[\tilde{u}_1^2(X_1)] + n^{-1/2}o_l(n^{-1/2}).$$
Proof. 
(1) It follows from Equation (8) that
$$\tau(\bar{X} - \mu) - [\tau + (1-2\tau)\hat{F}(y')](\hat{y} - y^*) = 0$$
and
$$\tau^3(\bar{X} - \mu)^3 = [\tau + (1-2\tau)\hat{F}(y')]^3(\hat{y} - y^*)^3,$$
where $y' \in [\hat{y}, y^*]$ or $[y^*, \hat{y}]$. For fixed $\tau$ ($0 < \tau < 1$) and $0 \le x \le 1$, we can show that
$$\min\{\tau, 1-\tau\} \le \tau + (1-2\tau)x \le \max\{\tau, 1-\tau\}.$$
Thus, $(\hat{y} - y^*)^3$ and $(\bar{X} - \mu)^3$ converge to 0 with the same stochastic order. It follows from the moment evaluation of U-statistics that $E|\bar{X} - \mu|^r = O(n^{-r/2})$ ($r \ge 2$). Then, we obtain
$$E\big|n^{1/2}(\bar{X} - \mu)^3\big|^{4/3} = n^{2/3}E|\bar{X} - \mu|^4 = O(n^{-4/3}) = O(n^{-1/2-2/3-1/6}).$$
Thus, we have the desired result.
(2) Using the Taylor expansion, we have
$$\hat{F}^{[2]}(\hat{y}) - \hat{F}^{[2]}(y^*) = \hat{F}(y^*)(\hat{y}-y^*) + \frac{1}{2}\hat{f}(y^*)(\hat{y}-y^*)^2 + \frac{1}{6}\hat{f}'(y^*)(\hat{y}-y^*)^3 + \frac{1}{24}\hat{f}^{(2)}(y^*)(\hat{y}-y^*)^4 + \frac{1}{120}\hat{f}^{(3)}(y')(\hat{y}-y^*)^5,$$
where $y'$ is between $\hat{y}$ and $y^*$. Since $K^{(3)}(\cdot)$ is bounded, we have
$$n^{1/2}\left|\hat{f}^{(3)}(y')(\bar{X}-\mu)^5\right| = n^{1/2}\left|\frac{1}{nh^4}\sum_{i=1}^{n}K^{(3)}\!\left(\frac{y'-X_i}{h}\right)(\bar{X}-\mu)^5\right| \le \frac{M}{h^4}\,n^{1/2}|\bar{X}-\mu|^5$$
for some constant $M > 0$. Then, it follows from the moment evaluation that
$$E\left[\frac{M}{h^4}\,n^{1/2}|\bar{X}-\mu|^5\right]^{1+\delta/5} = O\!\left(n^{3/2+3\delta/10}\,n^{-(5+\delta)/2}(\log n)^{4+4\delta/5}\right) = O\!\left(n^{-1/2-(1+\delta/5)/2-\delta/10}(\log n)^{4+4\delta/5}\right).$$
Thus, we have
$$\frac{1}{120}\hat{f}^{(3)}(y')(\hat{y}-y^*)^5 = n^{-1/2}o_M(n^{-1/2}).$$
In the same way as the evaluation of the asymptotic mean squared error of the kernel density estimator, we can show that
$$E[\hat{f}'(y^*)] = f'(y^*) + O(h^2) \quad\text{and}\quad E\left|\frac{1}{h^2}K'\!\left(\frac{y^*-X_1}{h}\right)\right|^{k} = O(h^{-2k+1}).$$
Thus, for some constant $d > 0$,
$$E\big(\hat{f}'(y^*) - f'(y^*)\big)^4 \le d\left[E\left(\hat{f}'(y^*) - E\left[\frac{1}{h^2}K'\!\left(\frac{y^*-X_1}{h}\right)\right]\right)^4 + \left(E\left[\frac{1}{h^2}K'\!\left(\frac{y^*-X_1}{h}\right)\right] - f'(y^*)\right)^4\right] = O\!\left(n^{-2}h^{-7} + h^8\right).$$
From Hölder's inequality, we obtain
$$E\left|\big(\hat{f}'(y^*) - f'(y^*)\big)(\hat{y}-y^*)^3\right|^{4/3} \le M\left\{E\big(\hat{f}'(y^*) - f'(y^*)\big)^4\right\}^{1/4}\left\{E|\bar{X}-\mu|^4\right\}^{3/4} = O(n^{-3/2}) = o(n^{-1/2-2/3-1/3}).$$
Therefore, we have
$$\frac{1}{6}\hat{f}'(y^*)(\hat{y}-y^*)^3 = n^{-1/2}o_M(n^{-1/2}).$$
Similarly, we can show that
$$\frac{1}{24}\hat{f}^{(2)}(y^*)(\hat{y}-y^*)^4 = n^{-1/2}o_M(n^{-1/2}).$$
Thus, we have
$$\tilde{C}(\hat{y}-y^*) = [\tau + (1-2\tau)\hat{F}(y^*)](\hat{y}-y^*) = \frac{1}{n}\sum_{i=1}^{n}u_1(X_i) - \frac{1-2\tau}{2}(\hat{y}-y^*)^2\hat{f}(y^*) + n^{-1/2}o_M(n^{-1/2}).$$
Substituting the above equation into Equation (8), we have the desired result.
(3) From the definition, we can show that
$$\tilde{C} - C = (1-2\tau)\{\hat{F}(y^*) - F(y^*)\} = (1-2\tau)\frac{1}{n}\sum_{i=1}^{n}\tilde{W}(X_i) + (1-2\tau)b_{2n}, \qquad C(\hat{y}-y^*) = \tilde{C}(\hat{y}-y^*) - (\tilde{C}-C)(\hat{y}-y^*),$$
and
$$\hat{y} - y^* = \frac{1}{nC}\sum_{i=1}^{n}\tilde{u}_1(X_i) + o_l(n^{-1/2}).$$
It follows from the equation in Lemma A2 that
$$b_{2n}\,\frac{1}{nC}\sum_{i=1}^{n}\tilde{u}_1(X_i) = n^{-1/2}o_l(n^{-1/2}).$$
Then, we can show that
$$\begin{aligned}
(\tilde{C}-C)(\hat{y}-y^*) &= \left\{(1-2\tau)\frac{1}{n}\sum_{i=1}^{n}\tilde{W}(X_i) + (1-2\tau)b_{2n}\right\}\left\{\frac{1}{nC}\sum_{i=1}^{n}\tilde{u}_1(X_i) + o_l(n^{-1/2})\right\} \\
&= \frac{1-2\tau}{C}\left[n^{-1}E[\tilde{W}(X_1)\tilde{u}_1(X_1)] + n^{-2}\sum_{1\le i<j\le n}\{\tilde{W}(X_i)\tilde{u}_1(X_j) + \tilde{W}(X_j)\tilde{u}_1(X_i)\}\right] + n^{-1/2}o_l(n^{-1/2}).
\end{aligned}$$
(4) For the kernel-type estimators, we have
$$E[\tilde{K}^2(X_1)] = O(h^{-1}), \qquad E[\tilde{W}^2(X_1)] = O(1), \qquad E[\tilde{u}_1^2(X_1)] = O(1).$$
First, we consider $(1-2\tau)(\hat{y}-y^*)^2\{\hat{f}(y^*) - f(y^*)\}$. Let us evaluate the following terms:
$$n^{-5/2}\sum_{1\le i<j<k\le n}\tilde{u}_1(X_i)\tilde{u}_1(X_j)\tilde{K}(X_k), \qquad n^{-3/2}b_{1n}\sum_{1\le i<j\le n}\tilde{u}_1(X_i)\tilde{K}(X_j), \qquad n^{-3/2}b_{3n}\sum_{1\le i<j\le n}\tilde{u}_1(X_i)\tilde{u}_1(X_j).$$
Using the moment evaluations for U-statistics, we can show that
$$E\left[n^{-5/2}\sum_{1\le i<j<k\le n}\tilde{u}_1(X_i)\tilde{u}_1(X_j)\tilde{K}(X_k)\right]^2 \le O(n^{-5})\,n^3\big(E[\tilde{u}_1^2(X_1)]\big)^2E[\tilde{K}^2(X_1)] = O(n^{-2})O(h^{-1}) = O(n^{-1/2-1-(1/2-d)}).$$
Similarly, we can show that
$$E\left[n^{-3/2}b_{3n}\sum_{1\le i<j\le n}\tilde{u}_1(X_i)\tilde{u}_1(X_j)\right]^2 = O(n^{-3}h^4n^2) = O(n^{-1-4d}) = O(n^{-1/2-1-4(d-1/8)})$$
and
$$E\left[n^{-3/2}b_{1n}\sum_{1\le i<j\le n}\tilde{u}_1(X_i)\tilde{K}(X_j)\right]^2 = O(n^{-3}h^4n^2h^{-1}) = O(n^{-1-3d}) = O(n^{-1/2-1-3(d-1/6)}),$$
where $h = n^{-d}(\log n)^{-1}$ with $d = 1/4$.
Furthermore, it is easy to see that
$$n^{-3/2}\sum_{i=1}^{n}\{\tilde{u}_1^2(X_i) - E[\tilde{u}_1^2(X_i)]\} = o_l(n^{-1/2}),$$
and then
$$n^{-3/2}\sum_{i=1}^{n}\tilde{u}_1^2(X_i) = n^{-1/2}E[\tilde{u}_1^2(X_i)] + o_l(n^{-1/2}).$$
It follows from Equation (4) in the standardized version that we have
$$\hat{y} - y^* = \frac{1}{nC}\sum_{i=1}^{n}\tilde{u}_1(X_i) + o_l(n^{-1/2}).$$
Using the large deviation results in Lemma A2, we have
$$(\hat{y}-y^*)^2 = \frac{2}{n^2C^2}\sum_{1\le i<j\le n}\tilde{u}_1(X_i)\tilde{u}_1(X_j) + \frac{1}{nC^2}E[\tilde{u}_1^2(X_1)] + n^{-1/2}o_l(n^{-1/2}).$$
Then, we can show that
$$\hat{f}(y^*)(\hat{y}-y^*)^2 = \frac{2f(y^*)}{n^2C^2}\sum_{1\le i<j\le n}\tilde{u}_1(X_i)\tilde{u}_1(X_j) + \frac{f(y^*)}{nC^2}E[\tilde{u}_1^2(X_1)] + n^{-1/2}o_l(n^{-1/2}).$$
   □
Proof of Theorem 1. 
(1) Using Lemma A3, we can easily obtain the asymptotic representation of the standardized expectile $\sqrt{n}\,\frac{C}{\xi_n}(\hat{y}-y^*)$.
(2) Using the Taylor expansion, we can easily obtain
$$\begin{aligned}
E\left[hA\!\left(\frac{y^*-X_1}{h}\right)\right] &= \int hA\!\left(\frac{y^*-s}{h}\right)f(s)\,ds = \left[hA\!\left(\frac{y^*-s}{h}\right)F(s)\right]_{-\infty}^{\infty} + \int W\!\left(\frac{y^*-s}{h}\right)F(s)\,ds \\
&= \left[W\!\left(\frac{y^*-s}{h}\right)F^{[2]}(s)\right]_{-\infty}^{\infty} + \int\frac{1}{h}K\!\left(\frac{y^*-s}{h}\right)F^{[2]}(s)\,ds \\
&= \int K(u)F^{[2]}(y^*-hu)\,du = F^{[2]}(y^*) + O(h).
\end{aligned}$$
Similarly, we obtain
$$\begin{aligned}
E\left[\left\{hA\!\left(\frac{y^*-X_1}{h}\right)\right\}^2\right] &= \int\left\{hA\!\left(\frac{y^*-s}{h}\right)\right\}^2 f(s)\,ds \\
&= \left[\left\{hA\!\left(\frac{y^*-s}{h}\right)\right\}^2 F(s)\right]_{-\infty}^{\infty} + \int 2hA\!\left(\frac{y^*-s}{h}\right)W\!\left(\frac{y^*-s}{h}\right)F(s)\,ds \\
&= \left[2hA\!\left(\frac{y^*-s}{h}\right)W\!\left(\frac{y^*-s}{h}\right)F^{[2]}(s)\right]_{-\infty}^{\infty} + \int 2\left\{W^2\!\left(\frac{y^*-s}{h}\right) + A\!\left(\frac{y^*-s}{h}\right)K\!\left(\frac{y^*-s}{h}\right)\right\}F^{[2]}(s)\,ds \\
&= \int 2W^2\!\left(\frac{y^*-s}{h}\right)F^{[2]}(s)\,ds + \int 2A\!\left(\frac{y^*-s}{h}\right)K\!\left(\frac{y^*-s}{h}\right)F^{[2]}(s)\,ds.
\end{aligned}$$
For the first term, we can show that
$$\int 2W^2\!\left(\frac{y^*-s}{h}\right)F^{[2]}(s)\,ds = \left[2W^2\!\left(\frac{y^*-s}{h}\right)F^{[3]}(s)\right]_{-\infty}^{\infty} + \frac{4}{h}\int W\!\left(\frac{y^*-s}{h}\right)K\!\left(\frac{y^*-s}{h}\right)F^{[3]}(s)\,ds = 4\int W(u)K(u)F^{[3]}(y^*-hu)\,du = 4F^{[3]}(y^*)\int W(u)K(u)\,du + O(h) = 2F^{[3]}(y^*) + O(h),$$
where
$$F^{[3]}(x) = \int_{-\infty}^{x}F^{[2]}(u)\,du.$$
For the second term, we have
$$\int 2A\!\left(\frac{y^*-s}{h}\right)K\!\left(\frac{y^*-s}{h}\right)F^{[2]}(s)\,ds = 2h\int A(u)K(u)F^{[2]}(y^*-hu)\,du = O(h).$$
Then, we can obtain
$$\mathrm{Var}\left[(1-2\tau)\,hA\!\left(\frac{y^*-X_1}{h}\right)\right] = (1-2\tau)^2\left[2F^{[3]}(y^*) - \{F^{[2]}(y^*)\}^2\right] + O(h).$$
Similarly, we can obtain the covariance. Since
$$\begin{aligned}
E\left[hA\!\left(\frac{y^*-X_1}{h}\right)X_1\right] &= \int hA\!\left(\frac{y^*-s}{h}\right)sf(s)\,ds \\
&= \left[hA\!\left(\frac{y^*-s}{h}\right)\{sF(s) - F^{[2]}(s)\}\right]_{-\infty}^{\infty} + \int W\!\left(\frac{y^*-s}{h}\right)\{sF(s) - F^{[2]}(s)\}\,ds \\
&= \left[W\!\left(\frac{y^*-s}{h}\right)\{sF^{[2]}(s) - 2F^{[3]}(s)\}\right]_{-\infty}^{\infty} + \frac{1}{h}\int K\!\left(\frac{y^*-s}{h}\right)\{sF^{[2]}(s) - 2F^{[3]}(s)\}\,ds \\
&= \int K(u)\{(y^*-hu)F^{[2]}(y^*-hu) - 2F^{[3]}(y^*-hu)\}\,du = y^*F^{[2]}(y^*) - 2F^{[3]}(y^*) + O(h),
\end{aligned}$$
we then have
$$\mathrm{Cov}\left(hA\!\left(\frac{y^*-X_1}{h}\right),\,X_1-\mu\right) = E\left[hA\!\left(\frac{y^*-X_1}{h}\right)X_1\right] - \mu\,E\left[hA\!\left(\frac{y^*-X_1}{h}\right)\right] = (y^*-\mu)F^{[2]}(y^*) - 2F^{[3]}(y^*) + O(h).$$
Combining the above calculations, we can obtain
$$\begin{aligned}
\mathrm{Var}[u(X_1)] &= (1-2\tau)^2\left[2F^{[3]}(y^*) - \{F^{[2]}(y^*)\}^2 + 2hF^{[2]}(y^*)\left\{\int A(u)K(u)\,du - 2\int uW(u)K(u)\,du\right\}\right] \\
&\quad - 2(1-2\tau)\tau\left[(y^*-\mu)F^{[2]}(y^*) - 2F^{[3]}(y^*)\right] + \tau^2\sigma^2 + O(h) \\
&= F^{[3]}(y^*)\left\{2(1-2\tau)^2 + 4(1-2\tau)\tau\right\} - (1-2\tau)F^{[2]}(y^*)\left\{(1-2\tau)F^{[2]}(y^*) + \tau(y^*-\mu)\right\} \\
&\quad - (1-2\tau)F^{[2]}(y^*)\,\tau(y^*-\mu) + \tau^2\sigma^2 + O(h).
\end{aligned}$$
It follows from Equation (6) that
$$(1-2\tau)F^{[2]}(y^*) + \tau(y^*-\mu) = 0 \quad\text{and}\quad -(1-2\tau)F^{[2]}(y^*)\,\tau(y^*-\mu) = \tau^2(y^*-\mu)^2.$$
Thus, we obtain
$$\mathrm{Var}[u(X_1)] = (2-4\tau)F^{[3]}(y^*) + \tau^2(y^*-\mu)^2 + \tau^2\sigma^2 + O(h).$$
(3) Using the Edgeworth expansion for the asymptotic U-statistics (see [4]), we can easily obtain the Edgeworth expansion.   □
Proof of Theorem 2. 
From the definition, we obtain
$$\hat{C}(\hat{y}-y^*) = (\hat{C}-\tilde{C})(\hat{y}-y^*) + \tilde{C}(\hat{y}-y^*).$$
Using the Taylor expansion and the previous approximations, we obtain
$$(\hat{C}-\tilde{C})(\hat{y}-y^*) = (1-2\tau)\{\hat{F}(\hat{y}) - \hat{F}(y^*)\}(\hat{y}-y^*) = (1-2\tau)\hat{f}(y^*)(\hat{y}-y^*)^2 + n^{-1/2}o_l(n^{-1/2}).$$
Thus, we have
$$\hat{C}(\hat{y}-y^*) = \frac{1}{n}\sum_{i=1}^{n}u_1(X_i) + \frac{1-2\tau}{2}(\hat{y}-y^*)^2\hat{f}(y^*) + n^{-1/2}o_l(n^{-1/2}).$$
It follows from (2) in Lemma A3 that
$$\hat{C}(\hat{y}-y^*) = b_{1n} + \frac{1-2\tau}{2nC^2}f(y^*)E[\tilde{u}_1^2(X_1)] + \frac{1}{n}\sum_{i=1}^{n}\tilde{u}_1(X_i) + \frac{1-2\tau}{n^2C^2}f(y^*)\sum_{1\le i<j\le n}\tilde{u}_1(X_i)\tilde{u}_1(X_j) + n^{-1/2}o_l(n^{-1/2}).$$
It is easy to see that
$$\xi_n^2 = E[\tilde{u}_1^2(X_1)] = E[\{u_1(X_1) - b_{1n}\}^2] = E[u_1^2(X_1)] + O(h^4).$$
It follows from the theory of U-statistics for the sample variance and the Taylor expansion that
$$\hat{\xi}_n^2 = \xi_n^2 + \frac{1}{n}\sum_{i=1}^{n}\{\tilde{u}_1^2(X_i) - \xi_n^2\} + o_l(n^{-1/2}).$$
Furthermore, using the Taylor expansion, we obtain
$$\hat{\xi}_n^{-1} = \xi_n^{-1} - \frac{1}{2n\xi_n^3}\sum_{i=1}^{n}\{\tilde{u}_1^2(X_i) - \xi_n^2\} + o_l(n^{-1/2}).$$
Similarly to the proof of Lemma 2 in Maesono and Penev [4], we can obtain the asymptotic representation of the studentized expectile.
Using the same argument as in Maesono and Penev [4], it is easy to obtain the Edgeworth expansion for the studentized expectile estimator.   □
  • Details about the numerical implementation 
We used the R programming language. The kernel was the Epanechnikov kernel (14). The bandwidth was set to h = n 1 / 4 ( log n ) 1 . For performance comparisons, we needed the true expectile of the exponential distribution (13). This was calculated by using the R function lambertW0 from the R package lamW. The kernel functions W ( . ) and A ( . ) were calculated analytically using simple piece-wise polynomial integration starting with the Epanechnikov kernel. We describe the main steps in the implementation of the practically relevant studentized confidence intervals (the steps for the standardized intervals are similar):
  • Enter the n data and the values of τ and α .
  • Write simple R functions to calculate $\hat{f}(\cdot)$, $\hat{F}(\cdot)$, and $\hat{F}^{[2]}(\cdot)$, using $h$, $W(\cdot)$, and $A(\cdot)$.
  • Solve the Equation (7) to find the estimated expectile y ^ , using the uniroot function in R.
  • Find C ^ from Definition 1.
  • Find the estimator ξ ^ n 2 .
  • Find μ ^ 3 .
  • Find B ^ n , using C ^ , f ^ , y ^ , μ ^ 3 , and ξ ^ n .
  • Find κ ^ S , using C ^ , f ^ , y ^ , μ ^ 3 , and ξ ^ n .
  • Find $\hat{b}_{1n}$, using $h$, $\hat{f}(\hat{y})$, and $\int u^2K(u)\,du = 0.2$.
  • Find the estimator of e ^ α in (12).
  • Substitute in the Formula (15) to obtain the confidence interval by the CF method.
  • Use the numerical root-finder uniroot to find the confidence interval by the numerical inversion method (16).

References

  1. Maesono, Y.; Penev, S. Edgeworth Expansion for the Kernel Quantile Estimator. Ann. Inst. Stat. Math. 2011, 63, 617–644. [Google Scholar] [CrossRef]
  2. Falk, M. Relative deficiency of kernel type estimators of quantiles. Ann. Stat. 1984, 12, 261–268. [Google Scholar] [CrossRef]
  3. Falk, M. Asymptotic normality of the kernel quantile estimator. Ann. Stat. 1985, 13, 428–433. [Google Scholar] [CrossRef]
  4. Maesono, Y.; Penev, S. Improved confidence intervals for quantiles. Ann. Inst. Stat. Math. 2013, 65, 167–189. [Google Scholar] [CrossRef]
  5. Newey, W.; Powell, J. Asymmetric least squares estimation and testing. Econometrica 1987, 55, 819–847. [Google Scholar] [CrossRef]
  6. Artzner, P.; Delbaen, F.; Eber, J.; Heath, D. Coherent measures of risk. Math. Financ. 1999, 9, 203–228. [Google Scholar] [CrossRef]
  7. Gneiting, T. Making and Evaluating Point Forecasts. J. Am. Stat. Assocation 2011, 106, 746–762. [Google Scholar] [CrossRef]
  8. Ziegel, J. Coherence and elicitability. Math. Financ. 2016, 26, 901–918. [Google Scholar] [CrossRef]
  9. Bellini, F. Isotonicity properties of generalized quantiles. Stat. Probab. Lett. 2012, 82, 2017–2024. [Google Scholar] [CrossRef]
  10. Holzmann, H.; Klar, B. Expectile asymptotics. Electron. J. Stat. 2016, 10, 2355–2371. [Google Scholar] [CrossRef]
  11. Krätschmer, V.; Zähle, H. Statistical Inference for Expectile-based Risk Measures. Scand. J. Stat. 2017, 44, 425–454. [Google Scholar] [CrossRef]
  12. Waltrup, L.S.; Sobotka, F.; Kneib, T.; Kauermann, G. Expectile and quantile regression—David and Goliath? Stat. Model. 2015, 15, 433–456. [Google Scholar] [CrossRef]
  13. Sobotka, F.; Kauermann, G.; Waltrup, L.S.; Kneib, T. On confidence intervals for semiparametric expectile regression. Stat. Comput. 2013, 23, 135–148. [Google Scholar] [CrossRef]
  14. Bellini, F.; Di Bernardino, E. Risk management with expectiles. Eur. J. Financ. 2017, 23, 487–506. [Google Scholar] [CrossRef]
  15. Bellini, F.; Klar, B.; Müller, A.; Rosazza Gianin, E. Generalized quantiles as risk measures. Insur. Math. Econ. 2014, 54, 41–48. [Google Scholar] [CrossRef]
  16. Chen, J.M. On Exactitude in Financial Regulation: Value-at-Risk, Expected Shortfall, and Expectiles. Risks 2018, 6, 61. [Google Scholar] [CrossRef]
  17. Malevich, T.L.; Abdalimov, B. Large Deviation Probabilities for U-Statistics. Theory Probab. Appl. 1979, 24, 215–220. [Google Scholar] [CrossRef]
  18. Van der Vaart, A.W. Asymptotic Statistics; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  19. Jones, M.C. Expectiles and M-quantiles are quantiles. Stat. Probab. Lett. 1994, 20, 149–153. [Google Scholar] [CrossRef]
Figure 1. True cdf, normal, and Edgeworth approximation for the standardized estimator with exponential data.
Figure 2. True cdf, normal, and Edgeworth approximation for the studentized estimator with exponential data.
Table 1. Symmetric confidence intervals for the expectile of the standard exponential distribution. Nominal coverage 90%; the best performer in each row (bold in the original) is marked here with an asterisk.

Sample Size   τ     Normal      Numerical Inversion   CF Method
20            0.5   0.85630     0.87444 *             0.85730
20            0.4   0.84648     0.86409 *             0.83894
20            0.3   0.83056     0.84846 *             0.81052
20            0.2   0.80674     0.82362 *             0.76108
20            0.1   0.75308     0.77348 *             0.66668
50            0.5   0.88008     0.89212 *             0.87974
50            0.4   0.87504     0.88542 *             0.86940
50            0.3   0.86872     0.87670 *             0.85496
50            0.2   0.85734     0.86368 *             0.82832
50            0.1   0.82998     0.83878 *             0.76416
100           0.5   0.89244     0.90010 *             0.89138
100           0.4   0.88938     0.89648 *             0.88616
100           0.3   0.88588     0.89196 *             0.87762
100           0.2   0.87834     0.88486 *             0.86204
100           0.1   0.86184     0.86668 *             0.82268
150           0.5   0.89558     0.90240 *             0.89476
150           0.4   0.89284     0.90126 *             0.89164
150           0.3   0.89070     0.89698 *             0.88544
150           0.2   0.88532     0.89032 *             0.87424
150           0.1   0.87374     0.87888 *             0.84714
200           0.5   0.89714     0.90322 *             0.89666
200           0.4   0.89522     0.90150 *             0.89374
200           0.3   0.89330     0.89952 *             0.88916
200           0.2   0.88714     0.89406 *             0.88008
200           0.1   0.88094     0.88540 *             0.86120
Table 2. Symmetric confidence intervals for the expectile of the standard exponential distribution. Nominal coverage 95%; the best performer in each row (bold in the original) is marked here with an asterisk.

Sample Size   τ     Normal      Numerical Inversion   CF Method
20            0.5   0.90454     0.92016 *             0.91170
20            0.4   0.89414     0.90844 *             0.89408
20            0.3   0.87992     0.89130 *             0.86164
20            0.2   0.85692     0.86550 *             0.80174
20            0.1   0.80530     0.81386 *             0.69500
50            0.5   0.93000     0.93760 *             0.93170
50            0.4   0.92404     0.93012 *             0.92260
50            0.3   0.91726     0.92100 *             0.90848
50            0.2   0.90608     0.90632 *             0.87640
50            0.1   0.87604     0.87832 *             0.79596
100           0.5   0.94104     0.94588 *             0.94194
100           0.4   0.93896     0.94170 *             0.93718
100           0.3   0.93500     0.93612 *             0.92922
100           0.2   0.92642     0.92652 *             0.91332
100           0.1   0.90554     0.90666 *             0.86584
150           0.5   0.94426     0.94874 *             0.94580
150           0.4   0.94230     0.94580 *             0.94266
150           0.3   0.93858     0.94200 *             0.93744
150           0.2   0.93386     0.93430 *             0.92570
150           0.1   0.92224 *   0.91924               0.89484
200           0.5   0.94588     0.95002 *             0.94688
200           0.4   0.94478     0.94690 *             0.94372
200           0.3   0.94282     0.94286 *             0.93926
200           0.2   0.93688     0.93704 *             0.93140
200           0.1   0.92776 *   0.92606               0.90954