Article

k-Nearest Neighbors Estimator for Functional Asymmetry Shortfall Regression

1 Department of Mathematics, College of Science, King Khalid University, Abha 62529, Saudi Arabia
2 Department of Mathematical Sciences, College of Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(7), 928; https://doi.org/10.3390/sym16070928
Submission received: 1 June 2024 / Revised: 1 July 2024 / Accepted: 5 July 2024 / Published: 19 July 2024
(This article belongs to the Section Mathematics)

Abstract

This paper deals with the problem of financial risk management using a new expected shortfall regression. The latter is based on the expectile model for the financial risk threshold. Unlike the VaR model, the expectile threshold is constructed through an asymmetric least squares loss function. We construct an estimator of this new model using the k-nearest neighbors (kNN) smoothing approach. The mathematical properties of the constructed estimator are stated through the establishment of its pointwise complete convergence. Additionally, we prove that the constructed estimator is uniformly consistent over the number of nearest neighbors (UCNN). Such asymptotic results constitute good mathematical support for the proposed financial risk process. We then examine the ease of implementation of this process using artificial and real data. Our empirical analysis confirms the superiority of the kNN approach over the kernel method, as well as the superiority of the expectile over the quantile in financial risk analysis.

1. Introduction

Defining an accurate financial risk metric is a challenging issue for financial institutions. Usually, the value at risk (VaR) is the standard risk metric for financial risk management; the VaR model was approved by the Basel Committee in 1996 and 2006. However, financial operators have recognized the limitations and weaknesses of this risk metric through the successive financial crises of the last decade. The primary weakness of the VaR model in financial risk management is its insensitivity to extreme values. Consequently, in 2014, the Basel Committee proposed enhancing financial risk surveillance with the expected shortfall (ES) function. This function examines the expected loss when a specific threshold is exceeded. Generally, the threshold is defined through the VaR level. The novelty of this paper is to define the ES function using an alternative risk threshold, namely the expectile regression.
The shortfall risk model was investigated by [1]. Motivated by its coherency property, the ES function has been widely developed over the last decade. A comparison between the VaR and ES models was carried out by [1], who stated that the VaR is inaccurate when the profit or loss is not Gaussian; in this context, the ES model is a more accurate financial risk metric than the VaR function. From a statistical point of view, the ES model can be treated in different manners, such as parametric, semi-parametric, or distribution-free approaches. For an overview of the parametric approach, we refer to [2,3,4]. The present paper considers the nonparametric strategy. At this stage, we point out that the first study in nonparametric modeling was introduced by [5], who estimated the ES model by the kernel method. The same estimator was considered by [6], who stated its asymptotic normality. Alternatively, another estimator using the Bahadur representation was constructed by [7]. The literature concerning the nonparametric estimation of the ES is limited when the data are functional. To the best of our knowledge, only two works have treated the functional ES model using the nonparametric regression structure. The first results were developed by [8], where the financial time series is modeled under the strong mixing assumption. The authors of [9] used a weak correlation assumption to model the financial time series and proved the complete consistency of the functional kernel estimator of the ES function under quasi-associated auto-correlation.
The second component of our contribution concerns the expectile model, introduced by [10]. It can be considered an alternative to the VaR function. Moreover, it corrects the main drawback of the quantile, namely its insensitivity to outliers: the expectile metric is very sensitive to outliers. In financial risk analysis, the expectile has been developed by [11,12,13,14]. It should be noted that the expectile function has also been used for other statistical problems, including outlier analysis (see [15]) and heteroscedasticity detection (see [16,17]). The expectile regression for vectorial statistics was studied by [18], whose authors developed a semi-parametric estimation of the expectile. Concerning the functional expectile model, we point out that the first result was stated by [19], who established the asymptotic convergence rate of the kernel estimator of the functional expectile regression. We refer to [20] for the parametric version of the functional expectile regression, where the asymptotic convergence rate of an estimator constructed from the reproducing kernel Hilbert space structure was established. For more recent advances and results in functional regression data analysis, we refer to [21,22,23,24,25].
The third component of this paper is the kNN smoothing approach. It is an attractive approach for many applied statistics problems, such as classification, clustering, and prediction. The kNN estimation approach was popularized by the contribution of [26], which can be considered a pioneering work in nonparametric estimation by the kNN method. Driven by its diversified applications, the kNN estimation algorithm was introduced into functional data analysis by [27], who proved the almost complete pointwise convergence of the kNN estimator of the functional regression. This result was stated under the independence condition. We refer to [28] for the uniform convergence of the kNN estimator of the functional regression, where the convergence rate was established using the entropy property. For more recent advances and references on the functional kNN method, we cite [21,29,30].
In this paper, we aim to estimate the expectile shortfall regression using the kNN smoothing approach. The principal motivations for this estimation methodology are as follows: (1) financial data are usually not Gaussian, and the parametric approach fails to fit their random movements; (2) the functional approach exploits the high frequency of financial data by treating them as continuous curves; (3) the kNN approach exploits the functional structure of the data by considering a varying local bandwidth adapted to the functional curves; this feature allows one to update the estimator and identify the financial risk systematically; (4) the last motivation is the possibility of remedying the problem of outlier insensitivity by using the expectile instead of the VaR. The mathematical support of this contribution is highlighted by establishing the almost complete convergence of the constructed estimator. Additionally, we provide the convergence rate of the UCNN consistency of the constructed estimator. It should be noted that this last result has great importance in practice; in particular, it can be used to resolve practical issues such as the choice of the best number of neighbors. We then complement our theoretical development by examining the applicability as well as the efficiency of the kNN estimator of the expectile shortfall regression. More precisely, we examine the feasibility of the estimator using artificial and real financial data.
This paper is structured as follows: In Section 2, we present the risk metric function and its kNN estimator. In Section 3, we state the pointwise convergence of the constructed estimator. The UCNN consistency is stated in Section 4. Section 5 is dedicated to discussing the computability of the estimator on simulated and real-data applications. Section 6 concludes the paper. Finally, the proofs of the auxiliary results are given in Section 7.

2. KNN Estimator of Expectile Shortfall Regression

Let $(A_1, B_1), \dots, (A_n, B_n)$ be $n$ independent random pairs valued in $\mathcal{F} \times \mathbb{R}$, with the same distribution as $(A, B)$. The functional space $\mathcal{F}$ is a semi-metric space equipped with a semi-metric $d$. For our expected shortfall regression analysis, we assume that $A$ is the functional explanatory variable and $B$ is the real response variable. The conventional ES-regression is usually defined, for $a \in \mathcal{F}$, by
$$ESR_p(a) = \mathbb{E}\left[ B \mid B > CVaR_p(a),\ A = a \right],$$
where $CVaR_p(\cdot)$ is the conditional value at risk. In this paper, instead of $CVaR_p$, we define the ES-regression using the conditional expectile of $B$ given $A = a$. The latter is denoted by $CEA_p(\cdot)$ and is defined by
$$CEA_p(a) = \mathbb{E}\left[ B \mid B > EXR_p(a),\ A = a \right],$$
where $EXR_p(\cdot)$ is the expectile regression of $B$ given $A = a$, defined by
$$EXR_p(a) = \arg\min_{t \in \mathbb{R}} \; \mathbb{E}\left[ p\,(B-t)^2\,\mathbb{1}_{\{(B-t) > 0\}} \mid A = a \right] + \mathbb{E}\left[ (1-p)\,(B-t)^2\,\mathbb{1}_{\{(B-t) \le 0\}} \mid A = a \right],$$
where $\mathbb{1}_{\mathcal{A}}$ is the indicator function of the set $\mathcal{A}$. Of course, replacing $CVaR_p$ by $EXR_p$ enables one to overcome the quantile's lack of sensitivity to extreme values. This characteristic is very important in practice because catastrophic losses occur in the extremes. The second feature of our contribution is the use of the kNN estimation approach. This latter feature is based on determining the smoothing parameter as
$$A_{n,k} = \min\left\{ h_n \in \mathbb{R}^+ : \sum_{i=1}^n \mathbb{1}_{B(a, h_n)}(A_i) = k \right\},$$
where $B(a, h_n)$ is the ball of center $a$ and radius $h_n > 0$, defined by
$$B(a, h_n) = \left\{ a' \in \mathcal{F} : d(a, a') \le h_n \right\}.$$
So, the kNN estimator of the EXES-regression is
$$\widehat{CEA}_p(a) = \frac{\sum_{i=1}^n F\left(A_{n,k}^{-1}\, d(a, A_i)\right) B_i\, \mathbb{1}_{\{B_i > \widehat{EXR}_p(a)\}}}{\sum_{i=1}^n F\left(A_{n,k}^{-1}\, d(a, A_i)\right)},$$
where $F(\cdot)$ is a known measurable function and $\widehat{EXR}_p$ is the kNN estimator of $EXR_p$. The latter is defined as the solution of
$$\widetilde{G}\left(\widehat{EXR}_p(a); a\right) = \frac{p}{1-p},$$
with
$$\widetilde{G}(t; a) = \frac{\sum_{i=1}^n F_{ni}(a)\,(B_i - t)\,\mathbb{1}_{\{(B_i - t) \le 0\}}}{\sum_{i=1}^n F_{ni}(a)\,(B_i - t)\,\mathbb{1}_{\{(B_i - t) > 0\}}}, \quad \text{for } t \in \mathbb{R},$$
where
$$F_{ni}(a) = \frac{F_i(a)}{\sum_{j=1}^n F_j(a)} \quad \text{and} \quad F_i(a) = F\left(A_{n,k}^{-1}\, d(a, A_i)\right).$$
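To fix ideas, the estimator above can be sketched numerically. The following Python code is a minimal illustration (not the authors' implementation): curves are discretized on a common grid, the semi-metric $d$ is the discretized $L_2$ distance, the kernel $F$ is uniform, and the expectile is obtained by the usual fixed-point iteration for asymmetric least squares. All function and variable names are ours.

```python
import numpy as np

def knn_expectile_shortfall(A, B, a, k, p, n_iter=100):
    """Sketch of the kNN expectile shortfall estimator.

    A : (n, d) array of discretized curves, B : (n,) responses,
    a : (d,) query curve, k : number of neighbours, p : expectile level.
    Uniform kernel on [0, 1] (an admissible choice under (P5))."""
    d = np.linalg.norm(A - a, axis=1)          # semi-metric d(a, A_i)
    h = np.sort(d)[k - 1]                      # A_{n,k}: distance to the k-th neighbour
    w = (d <= h).astype(float)                 # kernel weights F(A_{n,k}^{-1} d(a, A_i))
    # expectile EXR_p(a): fixed-point iteration for asymmetric least squares
    t = np.average(B, weights=w)
    for _ in range(n_iter):
        tau = np.where(B > t, p, 1.0 - p)      # asymmetric weights
        t_new = np.sum(w * tau * B) / np.sum(w * tau)
        if abs(t_new - t) < 1e-10:
            break
        t = t_new
    # shortfall: kernel-weighted mean of the B_i beyond the expectile threshold
    above = w * (B > t)
    return np.sum(above * B) / np.sum(above) if np.sum(above) > 0 else t
```

With $p$ close to 1, the returned value is the weighted average of the neighboring responses exceeding the estimated expectile threshold, i.e., a plug-in version of $\widehat{CEA}_p(a)$.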

3. Pointwise Convergence

Before establishing the asymptotic properties of the estimator $\widehat{CEA}_p$, we introduce some notation and assumptions. Throughout, $C_a$ and $C'_a$ denote strictly positive generic constants, and $N_a$ is a given neighborhood of $a$. Furthermore, for all $t \in \mathbb{R}$, we define $CE(t, a) = \mathbb{E}\left[ B\, \mathbb{1}_{\{B > t\}} \mid A = a \right]$. Now, to formulate our main result, we will use the hypotheses listed below:
(P1)
$\mathbb{P}(A \in B(a, r)) = \varphi(a, r) > 0$, where $B(a, h) = \{ x \in \mathcal{F} : d(x, a) < h \}$.
(P2)
There exist an invertible non-negative function $\phi(\cdot)$, a bounded and positive function $L(\cdot)$, and a function $\zeta_0(u)$ such that
(i)
$\phi(\epsilon)$ tends to zero as $\epsilon$ goes to zero and $\frac{\varphi(a, \epsilon)}{\phi(\epsilon)} - L(a) = O(\epsilon^{\alpha})$, as $\epsilon \to 0$, for a certain $\alpha > 0$;
(ii)
for all $u > 0$, $\lim_{\epsilon \to 0} \frac{\phi(u \epsilon)}{\phi(\epsilon)} = \zeta_0(u)$.
(P3)
$\exists\, \delta > 0$, $\forall (t_1, t_2) \in [EXR_p(a) - \delta,\, EXR_p(a) + \delta]^2$, $\forall (a_1, a_2) \in N_a^2$,
$$\left| CE(t_1, a_1) - CE(t_2, a_2) \right| \le C_a \left( d^{b}(a_1, a_2) + |t_1 - t_2| \right), \quad b > 0.$$
(P4)
For all $m > 2$,
$$\mathbb{E}\left[ |B|^m \mid A = a \right] \le C_{m,a} < \infty, \quad a.s.$$
(P5)
The kernel function $F(\cdot)$ is supported on $(0, 1)$ and satisfies
$$C\, \mathbb{1}_{[0,1]}(t) < F(t) < C'\, \mathbb{1}_{[0,1]}(t).$$
(P6)
The number of neighbors $k$ satisfies
$$\phi^{-1}\left(\frac{k}{n}\right) \to 0 \quad \text{and} \quad \frac{\ln n}{k} \to 0.$$
Comments on the hypotheses.
All the considered assumptions are classical in functional data analysis, notably for the kNN smoothing approach; they were used in a similar study (see, for instance, [30]). Assumptions (P1) and (P2) relate the functional variable to the probability structure. As discussed in the last paragraph of the introduction, the nonparametric path is motivated by the fact that the distribution of financial movements is unknown in practice. Assumption (P4) concerns the conditional moment integrability of the interest variable $B$; such a condition is commonly used in regression analysis. Observe that the upper bound in (P4) is not uniform, but depends strongly on the order of the moment $m$ and the location point $a$. This assumption is used to apply the Bernstein inequality, where the constant $C_{m,a}$ should be bounded by $C_1 \frac{m!}{2} C_2^{m-2}$ ($C_1$ and $C_2$ being constants independent of $m$). The assumption on the kernel function is given in condition (P5); such a technical assumption is used to make the convergence rate of the estimator precise.
We now obtain the following result.
Theorem 1. 
Under assumptions (P1)–(P6), we have
$$\widehat{CEA}_p(a) - CEA_p(a) = O\left( \left( \phi^{-1}\left(\frac{k}{n}\right) \right)^{b} \right) + O\left( \sqrt{\frac{\ln n}{k}} \right) \quad almost\ completely.$$
Proof of Theorem 1. 
For $t \in \mathbb{R}$, we define
$$\widehat{CE}(t, a) = \frac{\sum_{i=1}^n F\left(h^{-1} d(a, A_i)\right) B_i\, \mathbb{1}_{\{B_i > t\}}}{\sum_{i=1}^n F\left(h^{-1} d(a, A_i)\right)}.$$
Then,
$$\widehat{CE}\left(\widehat{EXR}_p(a), a\right) = \widehat{CEA}_p(a) \quad \text{and} \quad CE\left(EXR_p(a), a\right) = CEA_p(a).$$
So
$$\widehat{CEA}_p(a) - CEA_p(a) = \left[ \widehat{CE}\left(\widehat{EXR}_p(a), a\right) - CE\left(\widehat{EXR}_p(a), a\right) \right] + \left[ CE\left(\widehat{EXR}_p(a), a\right) - CE\left(EXR_p(a), a\right) \right].$$
Therefore,
$$\left| \widehat{CEA}_p(a) - CEA_p(a) \right| \le \sup_{t \in [EXR_p(a) - \delta,\, EXR_p(a) + \delta]} \left| \widehat{CE}(t, a) - CE(t, a) \right| + C \left| \widehat{EXR}_p(a) - EXR_p(a) \right|.$$
It suffices to prove the following lemmas. □
Lemma 1. 
Under the assumptions of Theorem 1, we have
$$\sup_{t \in [EXR_p(a) - \delta,\, EXR_p(a) + \delta]} \left| \widehat{CE}(t, a) - CE(t, a) \right| = O\left( \left( \phi^{-1}\left(\frac{k}{n}\right) \right)^{b} \right) + O\left( \sqrt{\frac{\ln n}{k}} \right) \quad almost\ completely,$$
and
Lemma 2. 
Under the assumptions of Theorem 1, we have
$$\left| \widehat{EXR}_p(a) - EXR_p(a) \right| = O\left( \left( \phi^{-1}\left(\frac{k}{n}\right) \right)^{b} \right) + O\left( \sqrt{\frac{\ln n}{k}} \right) \quad almost\ completely.$$
The proof of both required results is based on the kNN smoothing technique summarized in the following lemma.
Lemma 3 
(see [28]). Let $(Z_i, Z'_i)_{i=1,\dots,n}$ be a sequence of independent random variables identically distributed as $(Z, Z')$, valued in $\mathcal{F} \times \mathbb{R}$. Let
$$C_n(t) = \frac{\sum_{i=1}^n Z'_i\, L\left(t, (z, Z_i)\right)}{\sum_{i=1}^n L\left(t, (z, Z_i)\right)},$$
where $L(\cdot, \cdot)$ is a measurable function from $\mathbb{R}^+ \times (\mathcal{F} \times \mathcal{F})$ to $\mathbb{R}^+$. Consider $(A_n)_{n \ge 1}$ a sequence of real random variables, $(V_n)_{n \in \mathbb{N}}$ a decreasing positive sequence with $\lim_n V_n = 0$, and $C(\cdot): \mathcal{F} \to \mathbb{R}$ a non-random function. Suppose that, for every increasing sequence $\zeta_n \in (0, 1)$ with limit 1 (and $\zeta_n - 1 = O(V_n)$), there exist two sequences of real random variables $(A_n^-(\zeta))_{n \ge 1}$ and $(A_n^+(\zeta))_{n \ge 1}$ such that
C1. 
$\forall n \ge 1$, $A_n^-(\zeta) \le A_n^+(\zeta)$ and
$$\mathbb{1}_{\{A_n^-(\zeta) \le A_n \le A_n^+(\zeta)\}} \longrightarrow 1 \quad almost\ completely;$$
C2. 
$$\frac{\sum_{i=1}^n L\left(A_n^-(\zeta), (z, Z_i)\right)}{\sum_{i=1}^n L\left(A_n^+(\zeta), (z, Z_i)\right)} - \zeta = O_{a.co.}(V_n);$$
C3. 
$$C_n\left(A_n^-(\zeta)\right) - C(z) = O_{a.co.}(V_n) \quad \text{and} \quad C_n\left(A_n^+(\zeta)\right) - C(z) = O_{a.co.}(V_n).$$
Then, we have
$$C_n(A_n) - C(z) = O_{a.co.}(V_n).$$

4. UCNN Convergence

We aim to establish the almost complete consistency of $\widehat{CEA}_p(a)$, uniformly over the number of neighbors $k \in (k_{1,n}, k_{2,n})$. To do so, we denote by $C$ and $C'$ some strictly positive generic constants. In order to state the theorem, we will need the following assumptions.
U1
The class of functions
$$\mathcal{F}_K = \left\{ \cdot \mapsto F\left(\gamma^{-1} d(x, \cdot)\right),\ \gamma > 0 \right\}$$
is a pointwise measurable class such that
$$\sup_{Q} \int_0^1 \sqrt{1 + \ln N\left(\epsilon \|G\|_{Q,2},\ \mathcal{F}_K\right)}\, d\epsilon < \infty,$$
where the supremum is taken over all probability measures $Q$ on the space $\mathcal{F}$ with $Q(G^2) < \infty$, $G$ being the envelope function of the class $\mathcal{F}_K$. Here, $N(\epsilon, \mathcal{F}_K)$ is the number of open balls of radius $\epsilon$ necessary to cover the class of functions $\mathcal{F}_K$, the balls being constructed using the $L_2(Q)$-metric.
U2
The kernel $F$ is supported within $(-1/2, 1/2)$ and has a continuous first derivative, such that
$$0 < C\, \mathbb{1}_{(-1/2,\, 1/2)}(\cdot) \le K(\cdot) \le C'\, \mathbb{1}_{(-1/2,\, 1/2)}(\cdot),$$
$$K(1/2)\, \zeta_0(1/2) - \int_{-1/2}^{1/2} K'(s)\, \zeta_0(s)\, ds > 0,$$
$$\text{and} \quad \frac{1}{4}\, K(1/2)\, \zeta_0(1/2) - \int_{-1/2}^{1/2} \left( s^2 K(s) \right)'\, \zeta_0(s)\, ds > 0,$$
where $\mathbb{1}_A$ is the indicator function of the set $A$.
U3
The sequences $(k_{1,n})$ and $(k_{2,n})$ verify
$$\phi^{-1}\left(\frac{k_{2,n}}{n}\right) \to 0, \quad \frac{\ln n}{k_{1,n}} \to 0, \quad \text{and} \quad n\, \phi^{-1}\left(\frac{k_{1,n}}{n}\right) \to \infty.$$
Then, the following theorem gives the UCNN consistency of $\widehat{CEA}_p$.
Theorem 2. 
Under assumptions (P1)–(P4) and (U1)–(U3), we have
$$\sup_{k_{1,n} \le k \le k_{2,n}} \left| \widehat{CEA}_p(a) - CEA_p(a) \right| = O\left( \left( \phi^{-1}\left(\frac{k_{2,n}}{n}\right) \right)^{b} \right) + O_{a.co.}\left( \sqrt{\frac{\ln n}{k_{1,n}}} \right).$$
Proof of Theorem 2. 
Similarly to Theorem 1, the claimed result is a consequence of the following two lemmas.
Lemma 4. 
Under the assumptions of Theorem 2, we have
$$\sup_{k_{1,n} \le k \le k_{2,n}}\ \sup_{t \in [EXR_p(a) - \delta,\, EXR_p(a) + \delta]} \left| \widehat{CE}(t, a) - CE(t, a) \right| = O\left( \left( \phi^{-1}\left(\frac{k_{2,n}}{n}\right) \right)^{b} \right) + O_{a.co.}\left( \sqrt{\frac{\ln n}{k_{1,n}}} \right),$$
and
Lemma 5 
([11]). Under the assumptions of Theorem 2, we have
$$\sup_{k_{1,n} \le k \le k_{2,n}} \left| \widehat{EXR}_p(a) - EXR_p(a) \right| = O\left( \left( \phi^{-1}\left(\frac{k_{2,n}}{n}\right) \right)^{b} \right) + O_{a.co.}\left( \sqrt{\frac{\ln n}{k_{1,n}}} \right). \quad \square$$

5. Empirical Analysis

In this section, we discuss the practical use of the risk metric studied in the present work. The section is divided into three parts. In the first part, we propose an approach to choose the best number of neighbors; the selection of this number is of primary importance for the practical use of this financial model. The second part is devoted to evaluating the behavior of the estimator on artificial data. In the last part, we examine the constructed model on real financial data from the Dow Jones stock market.

5.1. Smoothing Parameter Selection: Cross-Validation

Generally, the optimal number $k$ is obtained by optimizing some criterion such as
$$k^{opt} = \arg\min_k \mathcal{L}(A, B, k),$$
where $\mathcal{L}$ is a given loss function fixed according to the employed selection algorithm. In particular, for the expectile regression, many selection approaches exist; for instance, different cross-validation rules have been used, as can be seen in [19]. For instance, we can use
$$k_{CV}^{opt} = \arg\min_k \sum_{i=1}^n \left( B_i - \widehat{EXR}_{0.5}(A_i) \right)^2,$$
or, more generally,
$$k^{opt} = \arg\min_k \sum_{i=1}^n \rho\left( B_i - \widehat{EXR}_p(A_i) \right),$$
where $\rho$ is the scoring function defining $EXR_p$. In practice, these selectors provide an efficient estimator. In this context, the UCNN convergence ensures the consistency of the estimator $\widehat{CEA}_p^{opt}(a)$ associated with $k^{opt}$. Therefore, we deduce the following corollary.
Corollary 1. 
If $k^{opt} \in (k_{1,n}, k_{2,n})$ and the conditions of Theorem 2 hold, then we have
$$\left| \widehat{CEA}_p^{opt}(a) - CEA_p(a) \right| = O\left( \left( \phi^{-1}\left(\frac{k_{2,n}}{n}\right) \right)^{b} \right) + O_{a.co.}\left( \sqrt{\frac{\log n}{k_{1,n}}} \right).$$
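As an illustration of the cross-validation rule above, the following Python sketch selects $k$ by minimizing the squared-error criterion at the median expectile ($p = 0.5$). The grid, the discretized $L_2$ metric, and the uniform kernel are our own illustrative choices, not prescriptions from the theory.

```python
import numpy as np

def knn_expectile(A, B, a, k, p, n_iter=100):
    """kNN expectile regression EXR_p(a) via asymmetric least squares (sketch)."""
    d = np.linalg.norm(A - a, axis=1)
    w = (d <= np.sort(d)[k - 1]).astype(float)   # uniform kernel on the k nearest
    t = np.average(B, weights=w)
    for _ in range(n_iter):
        tau = np.where(B > t, p, 1.0 - p)        # asymmetric weights
        t_new = np.sum(w * tau * B) / np.sum(w * tau)
        if abs(t_new - t) < 1e-10:
            break
        t = t_new
    return t

def select_k_cv(A, B, grid=range(5, 41, 5), p=0.5):
    """Pick k minimizing the squared-error selection criterion over the grid."""
    def score(k):
        preds = np.array([knn_expectile(A, B, A[i], k, p) for i in range(len(B))])
        return np.sum((B - preds) ** 2)
    return min(grid, key=score)
```

Note that the criterion, as written in the text, is evaluated in-sample; a leave-one-out variant would drop the $i$-th pair when predicting at $A_i$.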

5.2. Simulated Data

The first part is devoted to examining the performance of the ES-expectile function using artificial observations. We compute this estimator for independent functional data. More precisely, we compare the proposed model to the same estimator obtained with a standard smoothing parameter. Additionally, we compare our estimator to the standard expected shortfall based on the percentile (quantile) regression. For this empirical study, we generate the functional variable $A_i(t)$ defined, for any $t \in [0, 1]$, by
$$A_i(t) = 3\, W_i \sin(2\pi t) + \eta_i\, t,$$
where $W_i$ and $\eta_i$ are two real random variables. In order to cover more general cases, we consider two examples of $(W_i, \eta_i)$. In the first example, we assume that $W_i \sim N(0, 0.5)$ and $\eta_i \sim N(0, 1)$, while in the second example, we generate $W_i$ from $Lognormal(0, 0.5)$ and $\eta_i$ from $Lognormal(0, 1)$. In both cases, the obtained functional variables are relatively smooth, allowing one to choose the spline $L_2$-metric (see [31]). A sample of the first covariate curves is plotted in Figure 1.
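The simulation design above can be reproduced as follows. This is a sketch under stated assumptions: we read the second parameter of the normal law as a variance and the second parameter of the lognormal law as its σ parameter, which are our interpretations of the notation.

```python
import numpy as np

def simulate_curves(n, m=100, lognormal=False, seed=0):
    """Generate the simulated covariates A_i(t) = 3 W_i sin(2*pi*t) + eta_i * t
    on an m-point grid of [0, 1]; the two (W_i, eta_i) cases follow Section 5.2."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, m)
    if lognormal:
        # assumption: Lognormal(0, s) read as log-scale mean 0, sigma s
        W = rng.lognormal(mean=0.0, sigma=0.5, size=n)
        eta = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    else:
        # assumption: N(0, 0.5) read as mean 0, variance 0.5
        W = rng.normal(0.0, np.sqrt(0.5), size=n)
        eta = rng.normal(0.0, 1.0, size=n)
    return 3.0 * W[:, None] * np.sin(2.0 * np.pi * t)[None, :] + eta[:, None] * t[None, :]
```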
We assume that the functional regressor represents a continuous trajectory of a financial asset, and the interest variable $B$ represents a future characteristic of this trajectory. More precisely, for all $i$, we assume that $B_i = A_{i+1}(0)$. Recall that the principal aim of this computational part is to conduct a comparison between the kNN ES-expectile regression $\widehat{CEA}_p$ and the standard expected shortfall based on the $VaR_p$ regression associated with the percentile regression
$$VaR_p(a) = \inf\left\{ z \in \mathbb{R} : F(z \mid a) \ge p \right\},$$
where $F(\cdot \mid a)$ is the conditional cumulative distribution function of $B$ given $A = a$. The latter is estimated using the kNN estimator
$$\widehat{F}(z \mid a) = \frac{\sum_{i=1}^n \mathbb{1}_{\{B_i < z\}}\, F\left(\frac{d(a, A_i)}{A_{n,k}}\right)}{\sum_{i=1}^n F\left(\frac{d(a, A_i)}{A_{n,k}}\right)}.$$
Thereafter, we estimate the $VaR_p$ function by
$$\widehat{VaR}_p(a) = \inf\left\{ z \in \mathbb{R} : \widehat{F}(z \mid a) \ge p \right\}.$$
Recall that, in this case, the expected shortfall regression is expressed by
$$\widetilde{CEA}_p(a) = \frac{\sum_{i=1}^n F\left(A_{n,k}^{-1}\, d(a, A_i)\right) B_i\, \mathbb{1}_{\{B_i > \widehat{VaR}_p(a)\}}}{\sum_{i=1}^n F\left(A_{n,k}^{-1}\, d(a, A_i)\right)}.$$
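For comparison, the kNN conditional CDF and the plug-in VaR threshold can be sketched as follows (uniform kernel, discretized $L_2$ distance, and a "≤" convention in the empirical CDF; illustrative code, not the authors' implementation):

```python
import numpy as np

def knn_var(A, B, a, k, p):
    """kNN estimate of the conditional CDF F(z|a) on the observed responses,
    then VaR_p(a) = inf{z : F^(z|a) >= p} (sketch with a uniform kernel)."""
    d = np.linalg.norm(A - a, axis=1)
    w = (d <= np.sort(d)[k - 1]).astype(float)   # weights F(d(a, A_i)/A_{n,k})
    z_grid = np.sort(B)                          # candidate thresholds
    cdf = np.array([np.sum(w * (B <= z)) / np.sum(w) for z in z_grid])
    return z_grid[np.argmax(cdf >= p)]           # first z with F^(z|a) >= p
```

Feeding this threshold into the weighted average of exceeding responses yields the VaR-based shortfall $\widetilde{CEA}_p(a)$ of Equation (8).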
So, we aim to compare the three estimators $\widehat{CEA}_p$ (see Equation (2)), $\widetilde{CEA}_p$ (see Equation (8)), and $\overline{CEA}_p$ (obtained by replacing $A_{n,k}$ in Equation (2) by a standard bandwidth $h_n$). All these estimators are calculated using the $\beta$-kernel and the spline $L_2$-metric, and the smoothing parameter ($h$ or $k$) is selected by the cross-validation rule (6). In the kNN estimation, we select the best $k$ by
$$k^{opt} = \arg\min_{k \in \{5, 10, 15, 20, \dots, 40\}} \sum_{i=1}^n \left( B_i - \widehat{EXR}_{0.5}(A_i) \right)^2,$$
and for the standard bandwidth, we select $h$ by
$$h_{CV}^{opt}(a) = \arg\min_{h \in H_n(a)} \sum_{i=1}^n \left( B_i - \widehat{EXR}_{0.5}(A_i) \right)^2,$$
where $H_n(a)$ is the set of positive reals $h(a)$ such that the ball centered at $a$ with radius $h(a)$ contains exactly $k$ neighbors of $a$. We compare this selection procedure to an arbitrary choice. Specifically, we compute the three optimal estimators $\widehat{CEA}_p^{opt}$, $\widetilde{CEA}_p^{opt}$, and $\overline{CEA}_p^{opt}$, and the arbitrary ones $\widehat{CEA}_p^{arb}$, $\widetilde{CEA}_p^{arb}$, and $\overline{CEA}_p^{arb}$. The efficiency of the estimation approaches is examined using the backtesting measures defined by
$$Mse = \frac{1}{n} \sum_{i=1}^n \left( B_i - \widehat{\vartheta}_p(A_i) \right)^2 \mathbb{1}_{\{B_i > \widehat{EXR}_p(A_i)\}} \quad \text{and} \quad Msp = \frac{1}{n} \sum_{i=1}^n \left( B_i - \widehat{\vartheta}_p(A_i) \right)^2 \mathbb{1}_{\{B_i > \widehat{VaR}_p(A_i)\}}.$$
These errors are evaluated for various values of $p = 0.9, 0.5, 0.1, 0.05$, and $0.01$. The obtained results are given in Table 1 and Table 2.
Clearly, the behavior of the three estimators is strongly impacted by the choice of the smoothing parameter. However, we observe that the kNN approach is more appropriate than the standard one. Moreover, the expected shortfall based on the expectile is more accurate than the expected shortfall based on the VaR threshold. The behavior of the estimators is also affected by the definition of the regressors, that is, by the distribution of $(W_i, \eta_i)$ (normal or lognormal). In particular, the estimators $\widehat{CEA}_p$ and $\overline{CEA}_p$ are more sensitive to this aspect than the estimator $\widetilde{CEA}_p$: the variability of the Mse and Msp is larger for $\widehat{CEA}_p$ and $\overline{CEA}_p$ than for $\widetilde{CEA}_p$. This sensitivity confirms the importance of the expectile regression as a financial risk model.
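The backtesting errors above reduce to a few lines of code. The helper below is an illustrative sketch in which the shortfall predictions $\widehat{\vartheta}_p(A_i)$ and the threshold predictions (expectile or VaR) are supplied as arrays; the function name is ours.

```python
import numpy as np

def backtest_mse(B, shortfall_pred, threshold_pred):
    """Mean squared deviation of the responses from the predicted shortfall,
    restricted to observations exceeding the predicted risk threshold
    (the Mse/Msp backtesting measures of Section 5.2)."""
    exceed = B > threshold_pred                 # indicator 1{B_i > threshold(A_i)}
    return np.mean((B - shortfall_pred) ** 2 * exceed)
```

Passing the expectile threshold yields Mse, and passing the VaR threshold yields Msp.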

5.3. Real Data Application

This last part is devoted to the applicability of our model to real data. More precisely, we examine the efficiency of the ES-expectile model on financial data associated with real-time stock prices of the Liberty Energy company, a major energy-industry service provider across North America. Using these data, we compare our financial metric to its competitors. In this financial data analysis, we study the high price $P(t)$ of this company during October 2024, observed at a five-minute frequency. The parent data contain more than 2700 values. The process $P(t)$ is displayed in Figure 2. The data are available at https://stooq.com/db/l (accessed on 24 April 2024).
To ensure stability, we proceeded with a differencing step: we constructed the functional data from the log-return process $R(t) = \log(P(t+1)) - \log(P(t))$. The transformed data are given in Figure 3.
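The differencing step, and the subsequent cutting of $R(t)$ into 30-point regressor curves described next, can be sketched as follows (function names are ours):

```python
import numpy as np

def log_returns(P):
    """Differencing step of Section 5.3: R(t) = log(P(t+1)) - log(P(t))."""
    P = np.asarray(P, dtype=float)
    return np.diff(np.log(P))

def cut_into_curves(R, width=30):
    """Cut R(t) into consecutive pieces of `width` points (the regressors A_i);
    the response B_i is the value immediately after each piece."""
    n = (len(R) - 1) // width
    A = np.array([R[i * width:(i + 1) * width] for i in range(n)])
    B = np.array([R[(i + 1) * width] for i in range(n)])
    return A, B
```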
We explore the functional path of the considered data by cutting the process $R(t)$ into pieces of 30 points. These pieces represent the functional regressors $A_i$. Furthermore, we use the same strategy as for the simulated data: we choose $A_i$ as the curve of $(R(t))_{t \in [s-30,\, s[}$ and $B_i = R(s)$. Now, to ensure the independence structure assumed in this work, we select distanced observations. Specifically, from the 2790 observations, we choose 90 equidistant, approximately independent values, from which we construct our learning sample $(A_i, B_i)_{i=1,\dots,90}$. Thus, we compare the three estimators $\overline{CEA}_p$, $\widehat{CEA}_p$, and $\widetilde{CEA}_p$ on the real data $(A_i, B_i)_{i=1,\dots,90}$. These estimators are computed using the same algorithm as for the simulated data: we use the same kernel and select the smoothing parameters $k$ and $h$ by the rule (6). We use the $L_2$ metric obtained by the PCA-metric; we refer to Ferraty and Vieu [31] for more details on the mathematical formulation of this metric. The comparison results are given in Figure 4, Figure 5 and Figure 6, where we plot the true values of the 670 testing observations $(B_i)_{i=1,\dots,670}$ (black line) versus the estimators $\widehat{CEA}_p(A_i)$ and $\widetilde{CEA}_p(A_i)$ (red line) for $p = 0.1$.
Once again, the comparison confirms the superiority of the kNN ES-expectile regression over the standard ES-expectile and the kNN ES-quantile models. This superiority is confirmed by computing the Mse error (9) of the three models: we obtain 0.0274 for the kNN ES-expectile, 0.108 for the standard ES-expectile, and 0.201 for the kNN ES-quantile. For a deeper examination of the behavior of the three estimators as financial risk models, we use the backtesting measure based on the coverage test developed by Bayer and Dimitriadis [32]. Specifically, we apply the version called the one-sided intercept expected shortfall regression backtest, obtained using the routine esr_backtest from the R package esback with α = 0.05. We compute the p-values for 70 observations randomly chosen from the above 670 testing observations. The average of the obtained values confirms the first statement, namely that the kNN ES-expectile regression is more adequate than the standard ES-expectile and the kNN ES-quantile models. Specifically, the average of the p-values is 0.035 for the kNN ES-expectile, against 0.067 for the standard ES-expectile and 0.058 for the kNN ES-quantile.

6. Conclusions and Prospects

In the present work, we developed a nonparametric estimation of the ES-expectile regression. We constructed an estimator using kNN smoothing. This study covers two principal aspects of financial data analysis. In the theoretical part, we establish the almost complete convergence of the constructed estimator; moreover, to ensure the applicability of the constructed estimator, we also determine the convergence rate of its UCNN consistency. Of course, this theoretical analysis provides good mathematical support for the practical use of the newly developed risk metric. We point out that the obtained asymptotic results are established under standard conditions and with a precise convergence rate. In particular, all the assumed conditions are related to the functional structure of the regressors and the nonparametric path of the model. On the other hand, we observe that the estimator is very easy to apply and gives better results compared to the other financial risk metrics. In addition, our contribution leaves many open questions. For instance, the first natural prospect is the treatment of the dependent case, which allows one to control the movement of the stock exchange in its natural path, that is, the functional time series case. The second future work is the establishment of the asymptotic distribution of our new estimator, in both the independent and dependent cases. A third prospect concerns the single-index structure case, which would permit improving the convergence rate of the estimator. Furthermore, we could also treat the partially linear model case or the parametric case.

7. The Demonstration of the Intermediate Results

The proofs of the intermediate results are gathered in this section.
Proof of Lemma 1. 
It suffices to apply Lemma 3 for A n ( ζ ) = ϕ 1 ( ζ k n ) , A n + = ϕ 1 ( k ζ n ) and L ( t , ( z , Z i ) ) = F t 1 d ( z , Z i ) Since the choice of A n , A n + and L are the same as in [27], Conditions (C1 and C2) are satisfied. So, all that remains is checking condition (C3). Indeed, by a simple decomposition, for t I R ,
C E ^ ( t , s ) C E ^ ( t , s ) = 1 C E ^ D ( s ) [ ( C E ^ N ( t , s ) I E C E ^ N ( t , s ) )
( C E ^ ( t , s ) ) I E C E ^ N ( t , s ) ) ] C E ^ N ( t , s ) C E ^ D ( s ) [ C E ^ D ( s ) I E C E ^ D ( s ) ]
where
C E ^ N ( t , s ) = 1 I E F h 1 d ( s , A 1 ) i = 1 n F h 1 d ( s , A i ) B i 1 B i > t
and C E ^ D ( s ) = 1 I E F h 1 d ( s , A 1 ) i = 1 n F h 1 d ( s , A i ) .
and h = A n , or a = A n + Therefore, (C3) is a consequence of
C E ^ D ( s ) I E C E ^ D ( s ) = O ln n n ϕ ( h ) 1 / 2 a . c o .
sup t [ E X R p ( s ) δ , E X R p ( s ) + δ ] C E ^ N ( t , s ) I E C E ^ N ( t , s ) = O ln n n ϕ ( h ) 1 / 2 , a . c o .
and
sup t [ E X R p ( a ) δ , E X R p ( a ) + δ ] C E ( t , s ) I E C E ^ N ( t , s ) = O h b .
Because the proof of the three required results is based on similar analytical arguments in FDA, we only focus on the second results, namely,
sup t [ E X R p ( a ) δ , E X R p ( a ) + δ ] C E ^ N ( t , s ) I E C E ^ N ( t , s ) = O ln n n ϕ ( h ) 1 / 2 , a . c o .
To do that, we write
[ E X R p ( a ) δ , E X R p ( a ) + δ ] j = 1 l n ] B j d n , y j + d n [
with d n = O 1 n and l n = O n . Since I E [ C E ^ N ( · , s ) ] and C E ^ N ( · , s ) are increasing functions. Thus, ∀ 1 j l n
I E C E ^ N ( y j d n , s ) sup t ] y j d n , y j + d n [ I E C E ^ N ( t , s ) I E C E ^ N ( y j + d n , s )
C E ^ N ( t , s ) ( y j d n , s ) sup t ] y j d n , y j + d n [ C E ^ N ( t , s ) C E ^ N ( y j + d n , t ) .
Now, by (P2), we obtain
sup t [ E X R p ( a ) δ , E X R p ( a ) + δ ] C E ^ N ( t , s ) I E C E ^ N ( t , s )
max 1 j l n max z { y j d n , y j + d n } C E ^ N ( z , s ) I E C E ^ N ( z , s ) + C d n .
As
d n = n 1 / 2 = o ln n n ϕ ( h ) .
We treat
max 1 j l n max z { y j d n , y j + d n } C E ^ N ( z , s ) I E C E ^ N ( z , s ) = O ln n n ϕ ( h ) , a . c o .
For this, we write for any η > 0
I P max 1 j l n max z { y j d n , y j + d n } C E ^ N ( z , s ) I E C E ^ N ( z , s ) > η ln n n ϕ ( h ) 2 l n max 1 j l n max z { y j d n , y j + d n } I P C E ^ N ( z , s ) I E C E ^ N ( z , s ) > η ln n n ϕ ( h ) .
Now, we evaluate
I P C E ^ N ( z , s ) I E C E ^ N ( z , s ) > η ln n n ϕ ( h ) .
Indeed, let
F ˜ i = 1 I E [ F h 1 d ( s , A 1 ) ] F h 1 d ( s , A i ) B i 1 I B i z I E F h 1 d ( s , A i ) B i 1 I B 1 z .
We write
ε > 0 , I P | C E ^ N ( z , s ) I E C E ^ N ( z , s ) | > ε = I P 1 n i = 1 n F ˜ i > ε
Since
E | F ˜ i ( a ) | m = O ( ϕ ( h ) ) m + 1 .
we can apply the inequality of Bernstein with a n 2 = ϕ ( h ) 1 , to obtain, for all τ > 0 ,
P | C E ^ N ( z , s ) I E C E ^ N ( z , s ) | > τ ln n n ϕ ( h ) = P | i = 1 n F ˜ i ( a ) | > n τ ln n n ϕ ( h ) 2 exp 1 2 τ 2 ln n 1 + τ ln n n ϕ ( h ) ,
Thus
n 1 P | C E ^ N ( z , s ) I E C E ^ N ( z , s ) | > τ ln n n ϕ ( h ) 2 n 1 exp C τ 2 ln n .
Thereby, for τ > 1 C gives
C E ^ N ( z , s ) I E C E ^ N ( z , s ) = O a . c o . ln n n ϕ ( h ) .
 □
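Bernstein's exponential inequality, which drives the summability argument above, is easy to check numerically. The sketch below is an illustration of ours, not part of the paper's procedure: it compares the empirical tail probability of a sum of bounded, centered i.i.d. variables against the classical Bernstein bound $2\exp\!\big(-t^2/(2(n\sigma^2+Mt/3))\big)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M, trials = 500, 1.0, 20_000

# i.i.d. centered, bounded variables: uniform on [-M, M]
X = rng.uniform(-M, M, size=(trials, n))
S = X.sum(axis=1)            # the sums whose tail we bound
sigma2 = M**2 / 3            # variance of U(-M, M)

for t in (20.0, 40.0, 60.0):
    emp = (np.abs(S) > t).mean()                              # empirical tail
    bnd = 2 * np.exp(-t**2 / (2 * (n * sigma2 + M * t / 3)))  # Bernstein bound
    assert emp <= bnd  # the bound dominates the empirical tail
```

When the summands are only controlled in moments, as for $\widetilde{F}_i$ above, the same scheme applies with the moment condition replacing the uniform bound.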
Proof of Lemma 2. 
Let
$$z_n=\Big(\phi^{-1}\big(\tfrac{k}{n}\big)\Big)^{b}+\sqrt{\frac{\ln n}{k}}.$$
Similarly to [19], we have
$$\sum_{n}\mathbb{P}\Big(\big|\widehat{EXR}_p(a)-EXR_p(a)\big|>z_n\Big)\le\sum_{n}\mathbb{P}\left(\sup_{t\in[EXR_p(a)-\delta,\,EXR_p(a)+\delta]}\big|\widetilde{G}(t,a)-G(t,a)\big|\ge C z_n\right)<\infty,$$
where
$$G(t,a):=\frac{G_1(t,a)}{G_2(t,a)},$$
with
$$G_1(t,a)=\mathbb{E}\big[(B-t)\,\mathbb{1}_{\{(B-t)\le 0\}}\mid A=a\big],\qquad G_2(t,a)=\mathbb{E}\big[(B-t)\,\mathbb{1}_{\{(B-t)>0\}}\mid A=a\big].$$
Therefore, it suffices to prove that
$$\sup_{t\in[EXR_p(a)-\delta,\,EXR_p(a)+\delta]}\big|\widetilde{G}_h(t,a)-G(t,a)\big|=O_{a.co.}(z_n).$$
Similarly to the previous lemma, we write
$$\widetilde{G}_h(t,a)-G(t,a)=\frac{\widetilde{G}_1(t,a)}{\widetilde{G}_2(t,a)}-\frac{G_1(t,a)}{G_2(t,a)}=\frac{1}{\widetilde{G}_2(t,a)}\Big(\widetilde{G}_1(t,a)-G_1(t,a)\Big)+\frac{G(t,a)}{\widetilde{G}_2(t,a)}\Big(G_2(t,a)-\widetilde{G}_2(t,a)\Big),$$
where
$$\widetilde{G}_1(t,a)=\frac{1}{n\,\mathbb{E}[F_1]}\sum_{i=1}^{n}F_i\,(B_i-t)\,\mathbb{1}_{\{(B_i-t)\le 0\}}$$
and
$$\widetilde{G}_2(t,a)=\frac{1}{n\,\mathbb{E}[F_1]}\sum_{i=1}^{n}F_i\,(B_i-t)\,\mathbb{1}_{\{(B_i-t)>0\}}.$$
We prove
$$\sup_{t\in[EXR_p(a)-\delta,\,EXR_p(a)+\delta]}\big|\widetilde{G}_1(t,a)-\mathbb{E}\big[\widetilde{G}_1(t,a)\big]\big|=O_{a.co.}(z_n)$$
and
$$\sup_{t\in[EXR_p(a)-\delta,\,EXR_p(a)+\delta]}\big|\widetilde{G}_2(t,a)-\mathbb{E}\big[\widetilde{G}_2(t,a)\big]\big|=O_{a.co.}(z_n).$$
The rest of the proof follows the same arguments as Lemma 1, with $\widehat{CE}(t,a)$ replaced by $\widetilde{G}_1(t,a)$ or $\widetilde{G}_2(t,a)$. □
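The functions $G_1$ and $G_2$ encode the standard asymmetric least squares first-order condition $p\,G_2(t,a)+(1-p)\,G_1(t,a)=0$, whose root is the expectile. As a sanity check of this characterization on a plain i.i.d. sample (a minimal sketch of ours, not the paper's functional estimator), the sample expectile can be computed by bisection on the empirical version of that condition:

```python
import numpy as np

def expectile(b: np.ndarray, p: float, tol: float = 1e-10) -> float:
    """Sample expectile of order p, via bisection on the empirical
    first-order condition psi(t) = p*sum((b-t)_+) + (1-p)*sum((b-t)_-),
    which is strictly decreasing in t and vanishes at the expectile."""
    def psi(t: float) -> float:
        d = b - t
        return p * d[d > 0].sum() + (1 - p) * d[d <= 0].sum()

    lo, hi = float(b.min()), float(b.max())   # psi(lo) > 0 > psi(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
b = rng.normal(size=10_000)
assert abs(expectile(b, 0.5) - b.mean()) < 1e-6   # p = 1/2 recovers the mean
assert expectile(b, 0.9) > expectile(b, 0.1)      # expectiles increase in p
```

Bisection is a deliberately simple choice here: $\psi$ is monotone and continuous, so no smoothness assumptions beyond those already used in the lemmas are needed.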
Proof of Lemma 4. 
Let $\alpha\in\,]0,1[$; thus, we write
$$\mathbb{P}\left(\sup_{k_{1,n}\le k\le k_{2,n}}\ \sup_{t\in[EXR_p(a)-\delta,\,EXR_p(a)+\delta]}\big|\widehat{CE}(t,a)-CE(t,a)\big|\ge z_n\right)$$
$$\le\ \mathbb{P}\left(\sup_{k_{1,n}\le k\le k_{2,n}}\ \sup_{t\in[EXR_p(a)-\delta,\,EXR_p(a)+\delta]}\big|\widehat{CE}(t,a)-CE(t,a)\big|\,\mathbb{1}_{\left\{\phi^{-1}\left(\frac{\alpha k_{1,n}}{n}\right)\le h_k\le\phi^{-1}\left(\frac{k_{2,n}}{n\alpha}\right)\right\}}\ge\frac{z_n}{2}\right)$$
$$+\ \mathbb{P}\left(h_k\notin\left[\phi^{-1}\!\left(\frac{\alpha k_{1,n}}{n}\right),\ \phi^{-1}\!\left(\frac{k_{2,n}}{n\alpha}\right)\right]\right).$$
It is shown in [33] that
$$\sum_{n}\ \sum_{k=k_{1,n}}^{k_{2,n}}\mathbb{P}\left(h_k<\phi^{-1}\!\left(\frac{\alpha k_{1,n}}{n}\right)\right)<\infty\qquad\text{and}\qquad\sum_{n}\ \sum_{k=k_{1,n}}^{k_{2,n}}\mathbb{P}\left(h_k>\phi^{-1}\!\left(\frac{k_{2,n}}{n\alpha}\right)\right)<\infty.$$
So, all that is left to prove is
$$\sup_{\phi^{-1}\left(\frac{\alpha k_{1,n}}{n}\right)\le h\le\phi^{-1}\left(\frac{k_{2,n}}{n\alpha}\right)}\ \sup_{t\in[EXR_p(a)-\delta,\,EXR_p(a)+\delta]}\big|\widetilde{CE}(t,a)-CE(t,a)\big|=O_{a.co.}(z_n),$$
where $\widetilde{CE}(t,a)$ is obtained by replacing $h$ with $A_{n,k}$ in $\widehat{CE}(t,a)$. For this aim, we use the following decomposition:
$$\widetilde{CE}(t,a)-CE(t,a)=\widetilde{B}(t,a)+\frac{\widetilde{D}(t,a)}{\widetilde{CE}_D(a)}+\frac{\widetilde{Q}(t,a)}{\widetilde{CE}_D(a)},$$
where
$$\widetilde{Q}(t,a)=\big(\widetilde{CE}_N(t,a)-\mathbb{E}[\widetilde{CE}_N(t,a)]\big)-CE(t,a)\big(\widetilde{CE}_D(a)-\mathbb{E}[\widetilde{CE}_D(a)]\big),$$
$$\widetilde{B}(t,a)=\frac{\mathbb{E}[\widetilde{CE}_N(t,a)]}{\mathbb{E}[\widetilde{CE}_D(a)]}-CE(t,a)\qquad\text{and}\qquad\widetilde{D}(t,a)=\widetilde{B}(t,a)\big(\mathbb{E}[\widetilde{CE}_D(a)]-\widetilde{CE}_D(a)\big),$$
with
$$\widetilde{CE}_N(t,a)=\frac{1}{n\,\phi(h)}\sum_{i=1}^{n}F\big(h^{-1}d(a,A_i)\big)\,B_i\,\mathbb{1}_{\{B_i>t\}}$$
and
$$\widetilde{CE}_D(a)=\frac{1}{n\,\phi(h)}\sum_{i=1}^{n}F\big(h^{-1}d(a,A_i)\big).$$
Thus, we split the proof of Lemma 4 into
$$\sup_{a_n\le h\le b_n}\big|\widetilde{CE}_D(a)-\mathbb{E}[\widetilde{CE}_D(a)]\big|=O_{a.co.}\left(\sqrt{\frac{\ln n}{n\,\phi(a_n)}}\right)$$
and
$$\sup_{a_n\le h\le b_n}\ \sup_{t\in[EXR_p(a)-\delta,\,EXR_p(a)+\delta]}\big|\widetilde{CE}_N(t,a)-\mathbb{E}[\widetilde{CE}_N(t,a)]\big|=O_{a.co.}\left(\sqrt{\frac{\ln n}{n\,\phi(a_n)}}\right),$$
where $a_n=\phi^{-1}\!\left(\frac{\alpha k_{1,n}}{n}\right)$ and $b_n=\phi^{-1}\!\left(\frac{k_{2,n}}{n\alpha}\right)$.
We concentrate on the second convergence; the first one can be deduced with the same tools. Indeed, we write
$$[EXR_p(a)-\delta,\,EXR_p(a)+\delta]\subset\bigcup_{j=1}^{d_n}\big(t_j-l_n,\,t_j+l_n\big),$$
with $l_n=n^{-1/2}$ and $d_n=O(n^{1/2})$.
Next, since the functions $\mathbb{E}[\widetilde{CE}_N(\cdot,a)]$ and $\widetilde{CE}_N(\cdot,a)$ are monotone, we obtain, for $1\le j\le d_n$,
$$\mathbb{E}\big[\widetilde{CE}_N(t_j-l_n,a)\big]\le\sup_{t\in(t_j-l_n,\,t_j+l_n)}\mathbb{E}\big[\widetilde{CE}_N(t,a)\big]\le\mathbb{E}\big[\widetilde{CE}_N(t_j+l_n,a)\big],$$
$$\widetilde{CE}_N(t_j-l_n,a)\le\sup_{t\in(t_j-l_n,\,t_j+l_n)}\widetilde{CE}_N(t,a)\le\widetilde{CE}_N(t_j+l_n,a).$$
Thus,
$$\sup_{t\in[EXR_p(a)-\delta,\,EXR_p(a)+\delta]}\big|\widetilde{CE}_N(t,a)-\mathbb{E}[\widetilde{CE}_N(t,a)]\big|\le\max_{1\le j\le d_n}\ \max_{z\in\{t_j-l_n,\,t_j+l_n\}}\big|\widetilde{CE}_N(z,a)-\mathbb{E}[\widetilde{CE}_N(z,a)]\big|+2Cl_n.$$
Observe that $l_n=o\!\left(\left(\frac{\ln n}{n\,\phi(a_n)}\right)^{1/2}\right)$. Now, it suffices to show that
$$\sup_{a_n\le h\le b_n}\ \max_{1\le j\le d_n}\ \max_{z\in\{t_j-l_n,\,t_j+l_n\}}\big|\widetilde{CE}_N(z,a)-\mathbb{E}[\widetilde{CE}_N(z,a)]\big|=O\!\left(\left(\frac{\ln n}{n\,\phi(a_n)}\right)^{1/2}\right),\quad a.co.$$
For this, we write
$$\mathbb{P}\left(\sup_{a_n\le h\le b_n}\ \max_{1\le j\le d_n}\ \max_{z\in\{t_j-l_n,\,t_j+l_n\}}\big|\widetilde{CE}_N(z,a)-\mathbb{E}[\widetilde{CE}_N(z,a)]\big|>\eta\sqrt{\frac{\ln n}{n\,\phi(a_n)}}\right)\le 2\,d_n\,\max_{1\le j\le d_n}\ \max_{z\in\{t_j-l_n,\,t_j+l_n\}}\mathbb{P}\left(\sup_{a_n\le h\le b_n}\big|\widetilde{CE}_N(z,a)-\mathbb{E}[\widetilde{CE}_N(z,a)]\big|>\eta\sqrt{\frac{\ln n}{n\,\phi(a_n)}}\right).$$
We evaluate the quantity
$$\mathbb{P}\left(\sup_{a_n\le h\le b_n}\big|\widetilde{CE}_N(z,a)-\mathbb{E}[\widetilde{CE}_N(z,a)]\big|>\eta\sqrt{\frac{\ln n}{n\,\phi(a_n)}}\right),\qquad\text{for all }z=t_j\pm l_n,\ 1\le j\le d_n.$$
The proof of the latter is based on Bernstein's inequality for empirical processes, as in [33]. The empirical process is
$$\alpha_n=\frac{1}{n}\sum_{i=1}^{n}\Big(F_i\,B_i\,\mathbb{1}_{\{B_i>t\}}-\mathbb{E}\big[F_i\,B_i\,\mathbb{1}_{\{B_i>t\}}\big]\Big),$$
where $F_i=F\big(h^{-1}d(a,A_i)\big)$. Thereafter, we obtain
$$\mathbb{P}\left(\sup_{a_n\le h\le b_n}\sqrt{\frac{n\,\phi(h)}{\ln n}}\ \big|\widetilde{CE}_N(z,a)-\mathbb{E}[\widetilde{CE}_N(z,a)]\big|\ge\eta_0\right)\le\ln(n)\,n^{-C\eta_0^{2}}.$$
Consequently, an adequate choice of $\eta_0$ enables us to deduce (18). □
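To make the objects $\widetilde{CE}_N$ and $\widetilde{CE}_D$ concrete, the following is a minimal numerical sketch of the ratio-type shortfall estimator with a kNN-driven bandwidth. It is an illustration of ours under simplifying assumptions (curves discretized to vectors, Euclidean distance, a quadratic kernel supported on $[0,1]$); the function names are hypothetical, not the paper's.

```python
import numpy as np

def kernel_shortfall(A, B, a, t, h):
    """Ratio estimator CE_N(t,a)/CE_D(a): the common factor 1/(n*phi(h))
    cancels, so plain kernel sums suffice."""
    dist = np.linalg.norm(A - a, axis=1) / h
    F = np.where(dist <= 1.0, 1.0 - dist**2, 0.0)  # quadratic kernel on [0, 1]
    num = np.sum(F * B * (B > t))   # CE_N: kernel-weighted B_i over {B_i > t}
    den = np.sum(F)                 # CE_D: kernel mass around the curve a
    return num / den if den > 0 else np.nan

def knn_shortfall(A, B, a, t, k):
    """kNN version: the bandwidth is the distance to the k-th nearest curve."""
    dist = np.sort(np.linalg.norm(A - a, axis=1))
    h_k = dist[k - 1] + 1e-12       # tiny offset keeps the k-th weight positive
    return kernel_shortfall(A, B, a, t, h_k)

rng = np.random.default_rng(2)
A = rng.normal(size=(200, 10))                  # 200 curves on a 10-point grid
B = A.mean(axis=1) + 0.1 * rng.normal(size=200)
est = knn_shortfall(A, B, np.zeros(10), t=-10.0, k=25)
assert B.min() <= est <= B.max()   # with all B_i > t, est is a weighted mean
```

The kNN rule makes the bandwidth locally adaptive: $h_k$ shrinks where the design curves are dense and widens where they are sparse, which is the practical advantage confirmed in the empirical study.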

Author Contributions

The authors contributed approximately equally to this work. Conceptualization, M.B.A.; methodology, M.B.A.; software, M.B.A.; validation, F.A.A.; formal analysis, F.A.A.; investigation, Z.K.; resources, Z.K.; data curation, Z.K.; writing—original draft preparation, A.L.; writing—review and editing, Z.K. and A.L.; visualization, A.L.; supervision, A.L.; project administration, A.L.; funding acquisition, A.L. All authors have read and agreed to the final version of the manuscript.

Funding

The authors thank and extend their appreciation to the funders of this project: (1) Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R515), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. (2) The Deanship of Scientific Research at King Khalid University through the Research Groups Program under grant number R.G.P. 1/128/45.

Data Availability Statement

The data used in this study are available through the link https://fred.stlouisfed.org/series/DJIA (accessed on 24 April 2024).

Acknowledgments

The authors are indebted to the editorial board members and the three referees for their very generous comments and suggestions on the first version of our article which helped us to improve content, presentation, and layout of the manuscript. The authors also thank and extend their appreciation to the funders of this project: (1) Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R515), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. (2) The Deanship of Scientific Research at King Khalid University through the Research Groups Program under grant number R.G.P. 1/128/45.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Artzner, P.; Delbaen, F.; Eber, J.M.; Heath, D. Coherent measures of risk. Math. Financ. 1999, 9, 203–228. [Google Scholar] [CrossRef]
  2. Righi, M.B.; Ceretta, P.S. A comparison of expected shortfall estimation models. J. Econ. Bus. 2015, 78, 14–47. [Google Scholar] [CrossRef]
  3. Moutanabbir, K.; Bouaddi, M. A new non-parametric estimation of the expected shortfall for dependent financial losses. J. Stat. Plan. Inference 2024, 232, 106151. [Google Scholar] [CrossRef]
  4. Lazar, E.; Pan, J.; Wang, S. On the estimation of Value-at-Risk and Expected Shortfall at extreme levels. J. Commod. Mark. 2024, 3, 100391. [Google Scholar] [CrossRef]
  5. Scaillet, O. Nonparametric estimation and sensitivity analysis of expected shortfall. Math. Financ. Int. J. Math. Stat. Financ. Econ. 2004, 14, 115–129. [Google Scholar] [CrossRef]
  6. Cai, Z.; Wang, X. Nonparametric estimation of conditional VaR and expected shortfall. J. Econom. 2008, 147, 120–130. [Google Scholar] [CrossRef]
  7. Wu, Y.; Yu, W.; Balakrishnan, N.; Wang, X. Nonparametric estimation of expected shortfall via Bahadur-type representation and Berry–Esséen bounds. J. Stat. Comput. Simul. 2022, 92, 544–566. [Google Scholar] [CrossRef]
  8. Ferraty, F.; Quintela-Del-Río, A. Conditional VAR and expected shortfall: A new functional approach. Econom. Rev. 2016, 35, 263–292. [Google Scholar] [CrossRef]
  9. Ait-Hennani, L.; Kaid, Z.; Laksaci, A.; Rachdi, M. Nonparametric estimation of the expected shortfall regression for quasi-associated functional data. Mathematics 2022, 10, 4508. [Google Scholar] [CrossRef]
  10. Newey, W.K.; Powell, J.L. Asymmetric least squares estimation and testing. Econom. J. Econom. Soc. 1987, 55, 819–847. [Google Scholar] [CrossRef]
  11. Waltrup, L.S.; Sobotka, F.; Kneib, T.; Kauermann, G. Expectile and quantile regression—David and Goliath? Stat. Model. 2015, 15, 433–456. [Google Scholar] [CrossRef]
  12. Bellini, F.; Di Bernardino, E. Risk management with expectiles. Eur. J. Financ. 2017, 23, 487–506. [Google Scholar] [CrossRef]
  13. Farooq, M.; Steinwart, I. Learning rates for kernel-based expectile regression. Mach. Learn. 2019, 108, 203–227. [Google Scholar] [CrossRef]
  14. Bellini, F.; Negri, I.; Pyatkova, M. Backtesting VaR and expectiles with realized scores. Stat. Methods Appl. 2019, 28, 119–142. [Google Scholar] [CrossRef]
  15. Chakroborty, S.; Iyer, R.; Trindade, A.A. On the use of the M-quantiles for outlier detection in multivariate data. arXiv 2024, arXiv:2401.01628. [Google Scholar]
  16. Gu, Y.; Zou, H. High-dimensional generalizations of asymmetric least squares regression and their applications. Ann. Stat. 2016, 44, 2661–2694. [Google Scholar] [CrossRef]
  17. Zhao, J.; Chen, Y.; Zhang, Y. Expectile regression for analyzing heteroscedasticity in high dimension. Stat. Probab. Lett. 2018, 137, 304–311. [Google Scholar] [CrossRef]
  18. Kneib, T. Beyond mean regression. Stat. Model. 2013, 13, 275–303. [Google Scholar] [CrossRef]
  19. Mohammedi, M.; Bouzebda, S.; Laksaci, A. The consistency and asymptotic normality of the kernel type expectile regression estimator for functional data. J. Multivar. Anal. 2021, 181, 104673. [Google Scholar] [CrossRef]
  20. Girard, S.; Stupfler, G.; Usseglio-Carleve, A. Functional estimation of extreme conditional expectiles. Econom. Stat. 2022, 21, 131–158. [Google Scholar] [CrossRef]
  21. Almanjahie, I.M.; Bouzebda, S.; Kaid, Z.; Laksaci, A. The local linear functional kNN estimator of the conditional expectile: Uniform consistency in number of neighbors. Metrika 2024, 1–29. [Google Scholar] [CrossRef]
  22. Aneiros, G.; Cao, R.; Fraiman, R.; Genest, C.; Vieu, P. Recent advances in functional data analysis and high-dimensional statistics. J. Multivar. Anal. 2019, 170, 3–9. [Google Scholar] [CrossRef]
  23. Goia, A.; Vieu, P. An introduction to recent advances in high/infinite dimensional statistics. J. Multivar. Anal. 2016, 146, 1–6. [Google Scholar] [CrossRef]
  24. Yu, D.; Pietrosanu, M.; Mizera, I.; Jiang, B.; Kong, L.; Tu, W. Functional Linear Partial Quantile Regression with Guaranteed Convergence for Neuroimaging Data Analysis. Stat. Biosci. 2024, 1–17. [Google Scholar] [CrossRef]
  25. Di Bernardino, E.; Laloe, T.; Pakzad, C. Estimation of extreme multivariate expectiles with functional covariates. J. Multivar. Anal. 2024, 202, 105292. [Google Scholar] [CrossRef]
  26. Collomb, G.; Härdle, W.; Hassani, S. A note on prediction via conditional mode estimation. J. Statist. Plann. Inference 1987, 15, 227–236. [Google Scholar] [CrossRef]
  27. Burba, F.; Ferraty, F.; Vieu, P. k-nearest neighbor method in functional nonparametric regression. J. Nonparametr. Statist. 2009, 21, 453–469. [Google Scholar] [CrossRef]
  28. Kudraszow, N.; Vieu, P. Uniform consistency of kNN regressors for functional variables. Statist. Probab. Lett. 2013, 83, 1863–1870. [Google Scholar] [CrossRef]
  29. Bouzebda, S.; Nezzal, A. Uniform consistency and uniform in number of neighbors consistency for nonparametric regression estimates and conditional U-statistics involving functional data. Jpn. J. Stat. Data Sci. 2022, 5, 431–533. [Google Scholar] [CrossRef]
  30. Bouzebda, S.; Mohammedi, M.; Laksaci, A. The k-nearest neighbors method in single index regression model for functional quasi-associated time series data. Rev. Mat. Complut. 2022, 36, 361–391. [Google Scholar] [CrossRef]
  31. Ferraty, F.; Vieu, P. Nonparametric Functional Data Analysis; Springer Series in Statistics; Springer: New York, NY, USA, 2006. [Google Scholar]
  32. Bayer, S.; Dimitriadis, T. Regression-Based Expected Shortfall Backtesting. J. Financ. Econom. 2022, 20, 437–471. [Google Scholar] [CrossRef]
  33. Kara-Zaïtri, L.; Laksaci, A.; Rachdi, M.; Vieu, P. Data-driven kNN estimation for various problems involving functional data. J. Multivariate Anal. 2017, 153, 176–188. [Google Scholar] [CrossRef]
Figure 1. Some explanatory curves of the sample $A_i$.
Figure 2. The high price $P(t)$.
Figure 3. The process $R(t)$.
Figure 4. kNN expectile expected shortfall.
Figure 5. kNN VaR expected shortfall.
Figure 6. Standard expectile expected shortfall.
Table 1. Comparison of MSE-error.

| Case | Estimator | p = 0.9 | p = 0.5 | p = 0.1 | p = 0.05 | p = 0.01 |
|---|---|---|---|---|---|---|
| Normal distribution | $\widehat{CEA_p}^{opt}$ | 0.06 | 0.04 | 0.03 | 0.098 | 0.096 |
| | $\widetilde{CEA_p}^{opt}$ | 0.091 | 0.092 | 0.098 | 0.094 | 0.07 |
| | $\overline{CEA_p}^{opt}$ | 0.12 | 0.13 | 0.17 | 0.14 | 0.19 |
| | $\widehat{CEA_p}^{arb}$ | 0.18 | 0.17 | 0.15 | 0.11 | 0.10 |
| | $\widetilde{CEA_p}^{arb}$ | 0.23 | 0.39 | 0.22 | 0.27 | 0.37 |
| | $\overline{CEA_p}^{arb}$ | 0.32 | 0.37 | 0.35 | 0.33 | 0.39 |
| Log-normal distribution | $\widehat{CEA_p}^{opt}$ | 0.02 | 0.01 | 0.007 | 0.0042 | 0.006 |
| | $\widetilde{CEA_p}^{opt}$ | 0.089 | 0.091 | 0.098 | 0.090 | 0.06 |
| | $\overline{CEA_p}^{opt}$ | 0.091 | 0.097 | 0.065 | 0.078 | 0.081 |
| | $\widehat{CEA_p}^{arb}$ | 0.11 | 0.10 | 0.098 | 0.087 | 0.086 |
| | $\widetilde{CEA_p}^{arb}$ | 0.15 | 0.19 | 0.16 | 0.12 | 0.19 |
| | $\overline{CEA_p}^{arb}$ | 0.13 | 0.17 | 0.18 | 0.15 | 0.15 |
Table 2. Comparison of MSP-error.

| Case | Estimator | p = 0.9 | p = 0.5 | p = 0.1 | p = 0.05 | p = 0.01 |
|---|---|---|---|---|---|---|
| Normal distribution | $\widehat{CEA_p}^{opt}$ | 0.11 | 0.12 | 0.13 | 0.14 | 0.099 |
| | $\widetilde{CEA_p}^{opt}$ | 0.091 | 0.094 | 0.099 | 0.089 | 0.107 |
| | $\overline{CEA_p}^{opt}$ | 0.18 | 0.20 | 0.27 | 0.32 | 0.39 |
| | $\widehat{CEA_p}^{arb}$ | 0.46 | 0.49 | 0.32 | 0.38 | 0.29 |
| | $\widetilde{CEA_p}^{arb}$ | 0.23 | 0.22 | 0.24 | 0.27 | 0.24 |
| | $\overline{CEA_p}^{arb}$ | 0.42 | 0.39 | 0.41 | 0.40 | 0.36 |
| Log-normal distribution | $\widehat{CEA_p}^{opt}$ | 0.096 | 0.082 | 0.088 | 0.065 | 0.074 |
| | $\widetilde{CEA_p}^{opt}$ | 0.088 | 0.089 | 0.095 | 0.091 | 0.06 |
| | $\overline{CEA_p}^{opt}$ | 0.12 | 0.13 | 0.17 | 0.14 | 0.19 |
| | $\widehat{CEA_p}^{arb}$ | 0.093 | 0.089 | 0.092 | 0.087 | 0.088 |
| | $\widetilde{CEA_p}^{arb}$ | 0.22 | 0.34 | 0.17 | 0.22 | 0.34 |
| | $\overline{CEA_p}^{arb}$ | 0.13 | 0.24 | 0.27 | 0.19 | 0.16 |

Share and Cite

Alamari, M.B.; Almulhim, F.A.; Kaid, Z.; Laksaci, A. k-Nearest Neighbors Estimator for Functional Asymmetry Shortfall Regression. Symmetry 2024, 16, 928. https://doi.org/10.3390/sym16070928

