Article

On the M-Estimator under Third Moment Condition

1 School of Business, Shandong University, Weihai 264209, China
2 Institute for Financial Studies, Shandong University, Jinan 250100, China
3 School of Economics, Shandong University, Jinan 250100, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(10), 1713; https://doi.org/10.3390/math10101713
Submission received: 7 April 2022 / Revised: 30 April 2022 / Accepted: 7 May 2022 / Published: 17 May 2022
(This article belongs to the Special Issue Limit Theorems of Probability Theory)

Abstract: Estimating the expected value of a random variable by data-driven methods is one of the most fundamental problems in statistics. In this study, we present an extension of Olivier Catoni's classical M-estimator of the empirical mean, which focuses on heavy-tailed data by imposing more precise inequalities on the exponential moments of Catoni's estimator. We show that our estimator behaves better than Catoni's both in theory and in practice. Its performance is illustrated on simulated and real data.
MSC:
60E15; 62F35

1. Introduction

In this study, we focus on estimating the mean $m=\mathbb{E}X$ of a real random variable X, supposing that $X_1,\dots,X_n$ are independent and identically distributed draws from X. It is well known that the empirical mean $\hat{m}_n=n^{-1}\sum_{i=1}^{n}X_i$ is the most popular estimator of m, and its theoretical properties have been thoroughly studied [1].
However, recent works have concentrated more on the performance of estimators when the distribution is heavy-tailed (the second or fourth moment of the distribution does not exist), a situation that is becoming more and more common in many research fields (see, e.g., Embrechts, Klüppelberg, and Mikosch [2]). When the data have a heavy tail, traditional methods such as the empirical mean perform poorly, and appropriate robust estimators are required, which drives related research on M-estimators (generalizations of the Maximum Likelihood estimator) for correcting outliers (Huber [3]).
There has been a renewed interest in the area of robust statistics over the last several decades. Nemirovsky and Yudin [4], Hsu and Sabato [5], and Jerrum et al. [6] proposed various forms of the median-of-means (MOM) estimator to handle data in different situations. It divides the data into several groups of equal size, calculates the empirical mean within each group, and finally takes the median of these empirical means as the MOM estimator, which reduces the impact of heavy-tailed data. Tukey and McLaughlin [7] and Huber and Ronchetti [8] tried to improve the performance of the empirical mean by a truncation of X (the truncated mean), which removes the $\gamma n$ largest and $\gamma n$ smallest values of the sample, for a parameter $\gamma\in(0,1)$, and averages the remaining values to improve robustness. Catoni [9] and Audibert and Catoni [10] studied the properties of M-estimation for regression problems. Relevant works on robust techniques in various fields are summarized in Bartlett and Mendelson [11], Maronna et al. [12], and Bubeck et al. [13].
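Since the paper contains no code, a small illustration may help fix ideas. The following Python sketch implements the two classical robust baselines just described, the median-of-means and the truncated (trimmed) mean; the function names and block-splitting details are our own choices, not part of the cited works:

```python
import numpy as np

def median_of_means(x, k):
    """Split the sample into k equal-sized blocks, average within each
    block, and return the median of the k block means."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // k) * k           # drop the remainder so blocks are equal
    blocks = x[:n].reshape(k, -1)
    return float(np.median(blocks.mean(axis=1)))

def truncated_mean(x, gamma):
    """Remove the gamma*n largest and gamma*n smallest observations,
    then average what remains (trimmed mean)."""
    x = np.sort(np.asarray(x, dtype=float))
    g = int(gamma * len(x))
    return float(x.mean()) if g == 0 else float(x[g:-g].mean())
```

With a single gross outlier, both estimators stay near the bulk of the data; for example, `truncated_mean([0, 1, 2, 3, 100], 0.2)` returns 2.0.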
Recently, Catoni [14] modified the empirical mean into a new robust estimator. It is easy to observe that the empirical mean is the solution of the following estimating equation:
$$\sum_{i=1}^{n}\left(X_i-\mu\right)=0. \qquad (1)$$
If we change the form of Equation (1) to
$$\sum_{i=1}^{n}\phi\big(\alpha\left(X_i-\mu\right)\big)=0. \qquad (2)$$
The solution of (2) is called Catoni's mean estimator, where $\phi:\mathbb{R}\to\mathbb{R}$ is a non-decreasing differentiable truncation function such that, for any $x\in\mathbb{R}$, $-\log\left(1-x+x^{2}/2\right)\leq\phi(x)\leq\log\left(1+x+x^{2}/2\right)$, and $\alpha$ is a parameter that ensures the existence of the estimator. We denote Catoni's mean estimator by $\tilde{m}_{n,\alpha}$. The main purpose of the truncation function is to make $\phi(x)$ grow more slowly than x, so that the effect of outliers due to heavy tails in X is diminished. Although $\phi(x)$ is not the derivative of an explicit error function, it can still be considered an influence function in robust theory.
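As an illustration (our own code, not code from Catoni [14]), the estimator can be computed numerically: $\phi$ is monotone, so the estimating equation (2) has a unique root that bisection finds reliably. The function names and the bisection tolerance below are our own choices:

```python
import numpy as np

def phi(x):
    """Widest admissible truncation function: the upper envelope
    log(1 + x + x^2/2) for x >= 0 and -log(1 - x + x^2/2) for x < 0."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = np.log1p(x[pos] + x[pos] ** 2 / 2)
    out[~pos] = -np.log1p(-x[~pos] + x[~pos] ** 2 / 2)
    return out

def catoni_mean(x, alpha, tol=1e-10):
    """Unique root mu of sum_i phi(alpha*(X_i - mu)) = 0, found by bisection;
    the left-hand side is non-increasing in mu for alpha > 0."""
    x = np.asarray(x, dtype=float)
    f = lambda mu: phi(alpha * (x - mu)).sum()
    lo, hi = x.min() - 1.0, x.max() + 1.0   # f(lo) > 0 > f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

On symmetric data the root coincides with the sample mean; on heavy-tailed data it pulls large observations in, because $\phi$ grows only logarithmically.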
By the mild assumption that the variance $v=\mathbb{E}(X-m)^{2}$ of the distribution exists, and choosing the parameter α to optimize the bounds, Catoni [14] obtained the following performance of $\tilde{m}_{n,\alpha}$.
Theorem 1.
Let $X_1,\dots,X_n$ be independent, identically distributed random variables drawn from X. We assume that the mean m and variance v of X exist. For any $x\in\mathbb{R}^{+}$ and any positive integer n such that $n>2x$, Catoni's mean estimator $\tilde{m}_{n,\alpha}$ with parameter $\alpha=\sqrt{\dfrac{2x}{nv\left(1+\frac{2x}{n-2x}\right)}}$ satisfies
$$\mathbb{P}\left(\left|\tilde{m}_{n,\alpha}-m\right|\geq\sqrt{\frac{2vx}{n-2x}}\right)\leq 2e^{-x}. \qquad (3)$$
Moreover, if we choose α to be independent of x, namely $\alpha=\sqrt{\dfrac{2}{nv}}$, and assume $n>2(1+x)$, then
$$\mathbb{P}\left(\left|\tilde{m}_{n,\alpha}-m\right|\geq\frac{1+x}{1-(1+x)/n}\sqrt{\frac{v}{2n}}\right)\leq 2e^{-x}. \qquad (4)$$
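To make Theorem 1 concrete, the following sketch evaluates the parameter α and the deviation bound of (3). The algebraic forms are our reading of the reconstructed statement, so treat the helper as illustrative rather than authoritative:

```python
import numpy as np

def theorem1_alpha_and_bound(n, v, x):
    """alpha and deviation bound of Theorem 1, as reconstructed here:
    alpha = sqrt(2x / (n v (1 + 2x/(n - 2x)))), bound = sqrt(2 v x / (n - 2x))."""
    assert n > 2 * x, "Theorem 1 requires n > 2x"
    alpha = np.sqrt(2 * x / (n * v * (1 + 2 * x / (n - 2 * x))))
    bound = np.sqrt(2 * v * x / (n - 2 * x))
    return alpha, bound
```

For n = 100, v = 1, x = 3, this gives a deviation bound of about 0.25, holding with probability at least $1-2e^{-3}$; as expected, the bound shrinks as n grows.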
The method of Catoni [14] has been widely promoted as a robust estimator by Brownlees, Joly, and Lugosi [15], Minsker [16], and Wang et al. [17]. We point out here that the parameter α is the solution of the equation in which the derivative of the deviation of Catoni's estimator with respect to α equals 0. When $v=0$, the deviation of Catoni's estimator is 0, and no specific α is needed. This also holds for Theorem 2.
The main contribution of this article is to improve Catoni's estimator under a third moment condition; we name the result the third-moment Catoni estimator. Starting from the adjustment of the truncation function, denoted by $\psi(x)$ in our work, as Figure 1 shows, the influence function with the third moment stays closer to the true value than Catoni's original one. We obtain a more precise upper bound on the exponential moment, which leads to a better error bound.
Simultaneously, our estimator performs better for samples drawn from the t-distribution, which is common in many fields of research (see Jones and Faddy [18]). As a special case of a heavy-tailed distribution, the t-distribution (with enough degrees of freedom) has a finite third moment, which satisfies our assumptions about the distribution. We present the superiority of our estimator in a Monte Carlo simulation. We also show the performance of the proposed estimator under a skewed normal distribution to evaluate its adaptability to other distributions.
The rest of the article is organized as follows. In Section 2, we introduce the main result on the third-moment Catoni estimator. A Monte Carlo simulation is provided in Section 3 to compare the performance of the proposed estimator with Catoni's estimator for the t-distribution. Section 4 examines the performance of the proposed estimator on real data.

2. Main Result

Let $\{X_i\}_{i=1}^{n}$ denote an i.i.d. sample drawn from the distribution of X. Let m, v, and s be the mean, variance, and third central moment of X, respectively; that is, $\mathbb{E}(X)=m$, $\mathbb{E}(X-m)^{2}=v$, and $\mathbb{E}(X-m)^{3}=s$.
The influence function $\psi(x)$ used here should be wider than Catoni's original function in order to obtain a more accurate exponential moment. In this study, we assume that
$$\psi(x)=\begin{cases}\log\left(1+x+\dfrac{x^{2}}{2}+\dfrac{x^{3}}{6}\right), & x\geq 0,\\[6pt] -\log\left(1-x+\dfrac{x^{2}}{2}-\dfrac{x^{3}}{6}\right), & x<0.\end{cases} \qquad (5)$$
Our mean estimator $\hat{m}_{n,\alpha}$ is the unique solution of $R_{n,\alpha}(\mu)=0$, where
$$R_{n,\alpha}(\mu)=\sum_{i=1}^{n}\psi\big(\alpha\left(X_i-\mu\right)\big). \qquad (6)$$
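A minimal numerical sketch of the proposed estimator may be useful (our own code, not the authors'): since $\psi$ is monotone, $R_{n,\alpha}(\mu)=0$ can be solved by a sign-tracking bisection that works for either sign of α:

```python
import numpy as np

def psi(x):
    """Third-order influence function of Equation (5)."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = np.log(1 + x[pos] + x[pos] ** 2 / 2 + x[pos] ** 3 / 6)
    out[~pos] = -np.log(1 - x[~pos] + x[~pos] ** 2 / 2 - x[~pos] ** 3 / 6)
    return out

def third_moment_catoni_mean(x, alpha, tol=1e-12):
    """Unique root of R_{n,alpha}(mu) = sum_i psi(alpha*(X_i - mu)) = 0.
    R is monotone in mu (direction depends on the sign of alpha), so we
    track the sign of f on the lo side of the root while bisecting."""
    x = np.asarray(x, dtype=float)
    f = lambda mu: psi(alpha * (x - mu)).sum()
    lo, hi = x.min() - 1.0, x.max() + 1.0
    side = np.sign(f(lo))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.sign(f(mid)) == side else (lo, mid)
    return 0.5 * (lo + hi)
```

Because $\psi$ is odd, the root coincides with the center of any symmetric sample, which gives a quick sanity check of the implementation.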
Next, we present our main result, which bounds $\left|\hat{m}_{n,\alpha}-m\right|$ for an appropriate choice of the negative parameter α:
Theorem 2.
Let $X_1,\dots,X_n$ be independent, identically distributed random variables with finite mean m, variance v, and third central moment s. For any $x>0$, the deviation of the estimator from the mean m satisfies
$$\mathbb{P}\left(\left|\hat{m}_{n,\alpha}-m\right|\geq 2\left(\sqrt[3]{\frac{q}{2}+\sqrt{\Delta}}+\sqrt[3]{\frac{q}{2}-\sqrt{\Delta}}\right)\right)\leq 2e^{-x}, \qquad (7)$$
where
$$\Delta=\left(\frac{q}{2}\right)^{2}+\left(\frac{p}{3}\right)^{3},\qquad p=\frac{3+3v\alpha^{2}}{\alpha^{2}},\qquad q=\frac{n\alpha^{3}s+6x-4n}{n\alpha^{3}}.$$
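For illustration, the quantities p, q, Δ and the resulting deviation bound can be evaluated numerically. The sign conventions below (in particular α < 0 and the orientation of the cube roots) follow our reconstruction of the formulas and should be treated as assumptions; α must still be tuned to make the bound tight:

```python
import numpy as np

def theorem2_bound(n, x, v, s, alpha):
    """Deviation bound of Theorem 2 as reconstructed here: p, q, Delta
    follow the displayed definitions, and the bound is
    2 * (cbrt(q/2 + sqrt(Delta)) + cbrt(q/2 - sqrt(Delta)))."""
    p = (3 + 3 * v * alpha ** 2) / alpha ** 2
    q = (n * alpha ** 3 * s + 6 * x - 4 * n) / (n * alpha ** 3)
    delta = (q / 2) ** 2 + (p / 3) ** 3  # > 0 since p > 0 (one-real-root case)
    r = np.sqrt(delta)
    # np.cbrt takes real cube roots of negative arguments, as needed here
    return 2 * (np.cbrt(q / 2 + r) + np.cbrt(q / 2 - r))
```

The discriminant is always positive because $p>0$, so the cube roots are real and the bound is well defined for any nonzero α.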
Under some technical assumptions stated in the following corollary, we have the following upper bound with an exponential tail:
Corollary 1.
Let $X_1,\dots,X_n$ be independent, identically distributed random variables with finite mean m, variance v, and third central moment s. For any $x>0$, assume that $n>\frac{3}{2}(1+x)$ and $-\sqrt{\frac{4n^{3}v^{3}}{729}}\leq s\leq\sqrt{\frac{4n^{3}v^{3}}{729}}$; then
$$\mathbb{P}\left(\left|\hat{m}_{n,\alpha}-m\right|\geq(1+x)\sqrt{\frac{v}{n}}\right)\leq 2e^{-x}. \qquad (8)$$
Remark 1.
It is obvious that, with the assumption that n is a positive integer satisfying $n>\frac{3}{2}(1+x)$ and $-\sqrt{\frac{4n^{3}v^{3}}{729}}\leq s\leq\sqrt{\frac{4n^{3}v^{3}}{729}}$,
$$\frac{1+x}{1-(1+x)/n}\sqrt{\frac{v}{2n}}\geq(1+x)\sqrt{\frac{v}{n}}. \qquad (9)$$
By assuming that $\alpha<0$, we obtain a better estimator bias than (4) in Catoni's result.
Remark 2.
When the sample is small, our result remains valid provided s is small. Consider the following example. Let $X_1,\dots,X_n$ be independent, identically distributed random variables drawn from X, with mean $m=0.01$, variance $v=1$, $x=1$, and $n=4$. Whenever $-\sqrt{\frac{4n^{3}v^{3}}{729}}\leq s\leq\sqrt{\frac{4n^{3}v^{3}}{729}}$, e.g., $s=0.2$, the assumption is satisfied, and we have
$$\mathbb{P}\left(\left|\hat{m}_{n,\alpha}-m\right|\geq 1\right)\leq 2e^{-1}.$$
For the convenience of the proof, we first present the following lemma (the Cardano formula); refer to Høyrup [19] for more details.
Lemma 1.
For any general cubic equation of the form $x^{3}+px+q=0$, one of the roots over the field of real numbers has the form
$$x=\sqrt[3]{-\frac{q}{2}+\sqrt{\Delta}}+\sqrt[3]{-\frac{q}{2}-\sqrt{\Delta}},$$
where the discriminant is
$$\Delta=\frac{q^{2}}{4}+\frac{p^{3}}{27}.$$
When $\Delta>0$, the cubic equation has one real root; when $\Delta\leq 0$, it has three real roots.
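Lemma 1 is straightforward to check numerically; the following sketch (our own helper) returns the real root when Δ > 0 and verifies it on a cubic with a known root:

```python
import numpy as np

def cardano_real_root(p, q):
    """The real root of x^3 + p*x + q = 0 given by Cardano's formula,
    valid in the one-real-root case Delta = q^2/4 + p^3/27 > 0."""
    delta = q ** 2 / 4 + p ** 3 / 27
    assert delta > 0, "the closed form below assumes Delta > 0"
    r = np.sqrt(delta)
    # np.cbrt takes real cube roots of negative numbers, which is needed here
    return np.cbrt(-q / 2 + r) + np.cbrt(-q / 2 - r)
```

For example, $x^{3}+x-2=0$ has the root $x=1$, and `cardano_real_root(1.0, -2.0)` recovers it to machine precision.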
Proof of Theorem 2.
Due to inequality (5) for $\psi(x)$, we have the following exponential moment inequality for $R_{n,\alpha}(\mu)$, for all $\mu\in\mathbb{R}$:
$$\mathbb{E}e^{R_{n,\alpha}(\mu)}\leq\left(\mathbb{E}\left[1+\alpha(X-\mu)+\frac{\alpha^{2}(X-\mu)^{2}}{2}+\frac{\alpha^{3}(X-\mu)^{3}}{6}\right]\right)^{n}=\left(1+\alpha\,\mathbb{E}(X-\mu)+\frac{\alpha^{2}\,\mathbb{E}(X-\mu)^{2}}{2}+\frac{\alpha^{3}\,\mathbb{E}(X-\mu)^{3}}{6}\right)^{n}, \qquad (10)$$
With a brief calculation, we have $\mathbb{E}(X-\mu)^{2}=v+(m-\mu)^{2}$ and $\mathbb{E}(X-\mu)^{3}=(m-\mu)^{3}+3v(m-\mu)+s$; so, inequality (10) can be bounded (using $1+u\leq e^{u}$) by the following term:
$$\exp\left(n\alpha(m-\mu)+\frac{n\alpha^{2}\left(v+(m-\mu)^{2}\right)}{2}+\frac{n\alpha^{3}}{6}\left((m-\mu)^{3}+3v(m-\mu)+s\right)\right). \qquad (11)$$
Similarly,
$$\begin{aligned}\mathbb{E}e^{-R_{n,\alpha}(\mu)}&\leq\left(\mathbb{E}\left[1-\alpha(X-\mu)+\frac{\alpha^{2}(X-\mu)^{2}}{2}-\frac{\alpha^{3}(X-\mu)^{3}}{6}\right]\right)^{n}\\&=\left(1-\alpha(m-\mu)+\frac{\alpha^{2}\left(v+(m-\mu)^{2}\right)}{2}-\frac{\alpha^{3}}{6}\left((m-\mu)^{3}+3v(m-\mu)+s\right)\right)^{n}\\&\leq\exp\left(-n\alpha(m-\mu)+\frac{n\alpha^{2}\left(v+(m-\mu)^{2}\right)}{2}-\frac{n\alpha^{3}}{6}\left((m-\mu)^{3}+3v(m-\mu)+s\right)\right).\end{aligned} \qquad (12)$$
Let
$$A_{1}=n\alpha(m-\mu)+\frac{n\alpha^{2}\left(v+(m-\mu)^{2}\right)}{2}+\frac{n\alpha^{3}}{6}\left((m-\mu)^{3}+3v(m-\mu)+s\right),$$
$$A_{2}=-n\alpha(m-\mu)+\frac{n\alpha^{2}\left(v+(m-\mu)^{2}\right)}{2}-\frac{n\alpha^{3}}{6}\left((m-\mu)^{3}+3v(m-\mu)+s\right),$$
whenever $X_i$ has a finite third moment s. We can obtain from the Markov inequality that, for any $\mu\in\mathbb{R}$ and $x\in\mathbb{R}^{+}$,
$$\mathbb{P}\left(R_{n,\alpha}(\mu)\geq A_{1}+x\right)=\mathbb{P}\left(e^{R_{n,\alpha}(\mu)}\geq e^{A_{1}+x}\right)\leq\mathbb{E}e^{R_{n,\alpha}(\mu)}\big/e^{A_{1}+x}\leq e^{-x}.$$
In the same way, we have
$$\mathbb{P}\left(R_{n,\alpha}(\mu)\leq-A_{2}-x\right)\leq e^{-x}.$$
Then, as shown in Figure 2, we can control the estimator $\hat{m}_{n,\alpha}$ by the roots of the following cubic equations:
$$C_{+}(\mu)=A_{1}+x=0,\qquad C_{-}(\mu)=-A_{2}-x=0. \qquad (13)$$
The equations in (13) can be regarded as cubic equations in $m-\mu$. To solve (13), we first convert them into standard-form (depressed) cubic equations by letting $y_{i}=m-\mu+\frac{1}{\alpha}$ $(i=1,2)$, and then we obtain the following equations:
$$y_{1}^{3}+\frac{3+3v\alpha^{2}}{\alpha^{2}}\,y_{1}+\frac{n\alpha^{3}s+6x-4n}{n\alpha^{3}}=0,\qquad y_{2}^{3}+\frac{3+3v\alpha^{2}}{\alpha^{2}}\,y_{2}-\frac{n\alpha^{3}s+6x-4n}{n\alpha^{3}}=0. \qquad (14)$$
For any $\alpha\neq 0$, according to Lemma 1, since $(3+3v\alpha^{2})/\alpha^{2}$ is always positive, Δ is always greater than 0. In this case, each equation has one real root and two complex roots, which means we can control $\hat{m}_{n,\alpha}$ by the roots of (13) as follows:
$$\mu_{+}=m+\frac{1}{\alpha}+\sqrt[3]{\frac{q}{2}-\sqrt{\Delta}}+\sqrt[3]{\frac{q}{2}+\sqrt{\Delta}},$$
$$\mu_{-}=m+\frac{1}{\alpha}-\sqrt[3]{\frac{q}{2}+\sqrt{\Delta}}-\sqrt[3]{\frac{q}{2}-\sqrt{\Delta}}, \qquad (15)$$
where Δ, p, and q are the same as above. We can obtain from the formulas above that, with probability at least $1-e^{-x}$, $R_{n,\alpha}(\mu_{+})<0$, which implies $\hat{m}_{n,\alpha}<\mu_{+}$, since $R_{n,\alpha}(\mu)$ is a non-increasing function. Similarly, $\hat{m}_{n,\alpha}>\mu_{-}$ with probability at least $1-e^{-x}$. Then, by choosing the parameter α, we can derive the performance of the estimator $\hat{m}_{n,\alpha}$ for the bias of the mean m. That is, with probability at least $1-2e^{-x}$, we have
$$\mu_{-}<\hat{m}_{n,\alpha}<\mu_{+}.$$
The proof of Theorem 2 is completed. □
Proof of Corollary 1. 
In fact, the quantity on the right-hand side of (7) can be bounded as follows, without restricting the sign of s:
$$2\left(\sqrt[3]{\frac{q}{2}+\sqrt{\Delta}}+\sqrt[3]{\frac{q}{2}-\sqrt{\Delta}}\right)<\sqrt[3]{4\cdot\frac{n\alpha^{3}s+6x-4n}{2n\alpha^{3}}}=\sqrt[3]{4\left(-\frac{2}{\alpha^{3}}+\frac{3x}{n\alpha^{3}}+\frac{s}{2}\right)}. \qquad (16)$$
With the assumption $n>\frac{3}{2}(1+x)$, which is weaker than Catoni's, (16) can be bounded by
$$\sqrt[3]{4\left(\frac{2}{\alpha^{3}}-\frac{2}{\alpha^{3}}+\frac{s}{2}\right)}=\sqrt[3]{\frac{4s}{2}}. \qquad (17)$$
Moreover, assuming that $-\sqrt{\frac{4n^{3}v^{3}}{729}}\leq s\leq\sqrt{\frac{4n^{3}v^{3}}{729}}$, we can obtain that (17) is bounded by $(1+x)\sqrt{\frac{v}{n}}$; then, (8) holds. □

3. Simulation

In this section, we consider the performance of the estimator for the t-distribution through a Monte Carlo simulation. We focus on the performance of the estimator in $L_{1}$ regression: our data are simulated from a linear model with t-distributed errors and regressed with the proposed estimator, and we measure the loss of the regression by minimization of the $L_{1}$ norm.
The details of the simulation are as follows. We consider n independent, identically distributed pairs of real random variables $(X_{1},Y_{1}),(X_{2},Y_{2}),\dots,(X_{n},Y_{n})$, where the $X_{i}$ take values in $\mathbb{R}^{3}$ and the $Y_{i}$ in $\mathbb{R}$; the explanatory variables $X_{i}$ are drawn from a multivariate normal distribution with mean 0 and the three-dimensional identity matrix as covariance. The response variable $Y_{i}$ is generated as follows:
$$Y_{i}=X_{i}^{T}\theta+\epsilon_{i}, \qquad (18)$$
where the parameter vector θ is set to $(0.25,0.25,0.50)$, and $\epsilon_{i}$ is an error term drawn from a Student t-distribution. Our main goal is to estimate the parameter θ by minimizing the $L_{1}$ risk
$$\mathbb{E}\left|Y-X^{T}\theta\right|,$$
and then we define the $L_{1}$ estimator $\hat{\theta}_{1}$, the classical Catoni mean estimator $\hat{\theta}_{2}$, and the third-moment Catoni estimator $\hat{\theta}_{3}$ as follows:
$$\hat{\theta}_{1}=\arg\min_{\theta}\hat{R}_{1}(\theta)=\arg\min_{\theta}\frac{1}{n}\sum_{i=1}^{n}\left|Y_{i}-X_{i}^{T}\theta\right|,\qquad\hat{\theta}_{2}=\arg\min_{\theta}\hat{R}_{2}(\theta),\qquad\hat{\theta}_{3}=\arg\min_{\theta}\hat{R}_{3}(\theta),$$
where $\hat{R}_{2}(\theta)$ and $\hat{R}_{3}(\theta)$ are, respectively, the roots μ of
$$\frac{1}{n\alpha}\sum_{i=1}^{n}\phi\left(\alpha\left(\left|Y_{i}-X_{i}^{T}\theta\right|-\mu\right)\right)=0,\qquad\frac{1}{n\alpha}\sum_{i=1}^{n}\psi\left(\alpha\left(\left|Y_{i}-X_{i}^{T}\theta\right|-\mu\right)\right)=0;$$
here, $\phi(x)$ is the widest choice defined in Catoni's result with parameter $\alpha=1$, the same as in Brownlees's work, and $\psi(x)$ is set as above, also with $\alpha=1$. The measures of the performance of the estimators are as follows:
$$R\left(\hat{\theta}_{k}\right)-R(\theta)=\mathbb{E}\left|Y-X^{T}\hat{\theta}_{k}\right|-\mathbb{E}\left|Y-X^{T}\theta\right|,\qquad k=1,2,3.$$
The simulation experiments are repeated with sample sizes ranging from 50 to 1000 and with degrees of freedom of the t-distribution ranging from 1 to 7. Each configuration is replicated 1000 times, and for each replication, we evaluate the fitted regression on a fresh sample $(X_{1},Y_{1}),(X_{2},Y_{2}),\dots,(X_{m},Y_{m})$ that is i.i.d. with the sample $(X_{1},Y_{1}),(X_{2},Y_{2}),\dots,(X_{n},Y_{n})$. We use the following quantity, called the excess risk, to evaluate the performance of the regression:
$$\tilde{R}\left(\hat{\theta}_{k}\right)=\frac{1}{m}\sum_{i=1}^{m}\left(Y_{i}-Z_{i}^{T}\hat{\theta}_{k}\right)^{2},\qquad k=1,2,3,$$
where $Z_{i}$ denotes the explanatory variables of the evaluation sample.
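The simulation step above can be sketched end-to-end in Python. This is our own illustrative code: the helpers `psi`, `catoni_risk`, and `fit` are ours, and a derivative-free optimizer stands in for whatever solver the authors used. It generates one replication of model (18) with t-distributed noise and fits the $L_1$ and third-moment Catoni regressions:

```python
import numpy as np
from scipy.optimize import brentq, minimize

rng = np.random.default_rng(0)

def psi(x):
    """Third-order influence function from Equation (5)."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = np.log(1 + x[pos] + x[pos] ** 2 / 2 + x[pos] ** 3 / 6)
    out[~pos] = -np.log(1 - x[~pos] + x[~pos] ** 2 / 2 - x[~pos] ** 3 / 6)
    return out

def catoni_risk(resid, alpha=1.0):
    """Robust surrogate for the mean absolute residual: the root mu of
    (1/(n*alpha)) * sum_i psi(alpha * (|r_i| - mu)) = 0."""
    a = np.abs(resid)
    f = lambda mu: psi(alpha * (a - mu)).sum()
    return brentq(f, a.min() - 1.0, a.max() + 1.0)  # f changes sign on this bracket

def fit(X, Y, risk):
    """Minimize the given residual risk over theta; Nelder-Mead is used
    because the objective is nonsmooth in theta."""
    obj = lambda th: risk(Y - X @ th)
    return minimize(obj, x0=np.zeros(X.shape[1]), method="Nelder-Mead").x

# One replication of the design in the text: X ~ N(0, I_3), t-distributed noise.
theta = np.array([0.25, 0.25, 0.50])
n, dof = 200, 3
X = rng.standard_normal((n, 3))
Y = X @ theta + rng.standard_t(dof, size=n)

theta_l1 = fit(X, Y, lambda r: np.abs(r).mean())   # plain L1 regression
theta_c3 = fit(X, Y, catoni_risk)                  # third-moment Catoni risk
```

Repeating this over a grid of sample sizes and degrees of freedom, and averaging the squared prediction error on a fresh sample, reproduces the structure of the excess-risk study reported below.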
Figure 3 displays the excess risk of the three estimators when n = 500 and the degrees of freedom of the t-distribution range from 1 to 7; the proposed estimator performs better than the other estimators, which indicates more stability against outliers.
The results of the Monte Carlo simulation, including the performance of the estimators for different n, are presented in Table 1. We also compare the performance of the proposed estimator with the other estimators under various risk measures in Table 2, where the sample size is n = 500 and the degrees of freedom are d = 1. Here, $L_{1}$ represents the general $L_{1}$ regression; C and C3 denote the original Catoni estimator and our third-moment Catoni estimator, respectively; and ER, RB, and SMSE represent the excess risk, the relative bias $\left|\bar{\hat{\theta}}_{2}-\theta_{2}\right|/\theta_{2}$ (with $\bar{\hat{\theta}}=\frac{1}{1000}\sum_{j=1}^{1000}\hat{\theta}^{(j)}$), and the square root of the mean square error $\mathrm{MSE}=\frac{1}{1000}\sum_{j=1}^{1000}\left[\hat{\theta}_{2}^{(j)}-\theta_{2}\right]^{2}$.
We can see from the table that when the distribution has a heavy tail, our estimator performs better in most cases than the other two estimators, and the excess risk decreases as the sample size increases. At the same time, as the degrees of freedom of the t-distribution rise, the tail of the t-distribution becomes thinner and closer to that of the normal distribution, and the excess-risk performance of all procedures improves significantly; additionally, the proposed estimator also performs well under the other risk measures.
We also examine the performance of the third-moment Catoni estimator under a skewed normal distribution in Table 3; the model still follows (18), where ϵ follows a skewed normal distribution with shape parameter equal to 1, 3, and 5 (the rows s = 1, 3, 5 of Table 3), with all other settings unchanged. We can conclude from the table that the bias of the improved estimator is still smaller than that of the original one. However, the deviation of the estimator does not change significantly as the shape parameter changes. We suppose this results from the tail behavior of the skew normal distribution: the existence of its fourth moment conflicts with the usual assumption that the fourth moment of a heavy-tailed distribution does not exist. At the same time, neither Catoni's estimator nor ours performed better than the estimator obtained by $L_{1}$ regression.

4. Empirical Analysis

In this section, we use the proposed procedure to study the dataset "tumor cell resistance to death," an artificial dataset consisting of two different types of tumor cells, A and B; the experiment records their resistance to different doses of experimental drugs. The explanatory variable $X_{i}$ is the dose of the drug, and the response variable $Y_{i}$ is the score representing resistance to death, ranging from 0 to 4. These data are available in the R lqr package; Galarza et al. [20] studied them with quantile regression methods.
In Figure 4, Figure 5, Figure 6 and Figure 7, we display the QQplots and log-QQplots of the scores for cell A and cell B. It can be seen that the distribution of the scores of both cells lacks normality, whereas the log-scores are much closer to normal. In addition, the boxplot and the bee colony diagram in Figure 8 and Figure 9 show that both cell A and cell B have heavy tails, which leads us to the following regression model:
$$\log(Y_{i})=\beta_{0}+\beta_{1}X_{i},$$
where $Y_{i}$ and $X_{i}$ are defined as before. Our focus is estimating the parameters $\beta_{0}$ and $\beta_{1}$ via the solution of the following equation:
$$\hat{r}_{\beta}(u)=\frac{1}{n\alpha}\sum_{i=1}^{n}\psi\left(\alpha\left(\left|\log(Y_{i})-\beta_{0}-X_{i}\beta_{1}\right|-u\right)\right)=0.$$
Let $\hat{R}_{C}(\beta)$ denote the solution of $\hat{r}_{\beta}(u)=0$; then, the Catoni regression estimator of $\beta_{0}$ and $\beta_{1}$ has the following form:
$$\arg\min_{\beta_{0},\beta_{1}}\hat{R}_{C}(\beta).$$
Moreover, we compare the proposed estimator with the classical OLS estimator in Figure 10 and Figure 11. The residual plots are shown in Figure 12, Figure 13, Figure 14 and Figure 15, from which we can conclude that the residuals of the third-moment Catoni regression are distributed more uniformly. Furthermore, the Mean Squared Errors of the third-moment Catoni regression and OLS regression are 0.1120 and 0.1255 for cell A, and 0.2268 and 0.2335 for cell B, respectively, which indicates that the proposed method yields a better regression.

5. Discussion

Estimating the mean of random variables is a classical issue in statistics [1] and has been well studied in classical settings; however, heavy-tailed distributions have been discovered in many research fields, and their presence poses an important challenge. When the data have heavy tails, traditional estimators such as the empirical mean usually perform poorly. Therefore, finding an appropriate robust procedure is a well-known problem that has aroused great interest. Catoni proposed a new estimator based on reconstructing the structure of the empirical mean, which has excellent theoretical properties regarding the bias.
Catoni's estimator relies on the existence of the variance v of the random variable; it is therefore an interesting question whether the estimator can perform better under stronger moment conditions. In this study, we assumed that the third moment s of the data exists and obtained a more accurate upper bound on the exponential moment, which yields an estimator with a smaller bias. To a certain extent, this assumption reduces robustness to outliers, but the effect is minimal for heavy-tailed distributions (whose fourth moment does not exist). In future work, we have the following goals. First, we believe that our method can be applied, as an improved mean estimator, to any relevant model as long as the third moment of the distribution exists, since it has good theoretical properties and wide applicability. Second, it would be interesting to compare the bias bound of the proposed estimator with the minimax bound. Finally, the estimation of the variance in regression models is very important in statistical inference. The deviation of the estimator from the true value given in our main theoretical results can be regarded as a confidence interval based on a known variance; therefore, the proposed estimator is not suitable for estimating the variance itself, but it is an interesting question how a proper variance estimator would affect the bias of our estimator. We will consider variance estimation under heavy-tailed distributions in later work.

Author Contributions

Conceptualization, Y.C.; methodology, Y.C. and R.L.; investigation, Y.C. and R.L.; software, S.S.; writing, Y.C. and R.L. Y.C. designed the framework of this study and substantively revised it; R.L. and Y.C. performed the methodology and simulation; S.S. implemented the empirical analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No. 72073082).

Data Availability Statement

The dataset for the empirical analysis can be derived from the following resource available in CRAN, https://cran.r-project.org/web/packages/lqr/index.html, accessed on 12 February 2022.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lugosi, G.; Mendelson, S. Mean estimation and regression under heavy-tailed distributions: A survey. Found. Comput. Math. 2019, 19, 1145–1190.
  2. Embrechts, P.; Klüppelberg, C.; Mikosch, T. Modelling Extremal Events for Insurance and Finance; Springer: Berlin/Heidelberg, Germany, 1997.
  3. Huber, P. Robust estimation of a location parameter. Ann. Math. Stat. 1964, 35, 73–101.
  4. Nemirovsky, A.; Yudin, D. Problem Complexity and Method Efficiency in Optimization; Wiley: New York, NY, USA, 1983.
  5. Hsu, D.; Sabato, S. Loss minimization and parameter estimation with heavy tails. J. Mach. Learn. Res. 2016, 17, 1–40.
  6. Jerrum, M.; Valiant, L.; Vazirani, V. Random generation of combinatorial structures from a uniform distribution. Theoret. Comput. Sci. 1986, 43, 169–188.
  7. Tukey, J.; McLaughlin, D. Less vulnerable confidence and significance procedures for location based on a single sample: Trimming/Winsorization. I. Sankhyā Ser. A 1963, 25, 331–352.
  8. Huber, P.; Ronchetti, E. Robust Statistics; Wiley: New York, NY, USA, 2009.
  9. Catoni, O. Statistical Learning Theory and Stochastic Optimization; Springer: Berlin/Heidelberg, Germany, 2004.
  10. Audibert, J.; Catoni, O. Robust linear least squares regression. Ann. Stat. 2011, 39, 2766–2794.
  11. Bartlett, P.; Mendelson, S. Empirical minimization. Probab. Theory Relat. Fields 2006, 311–334.
  12. Maronna, R.A.; Martin, D.R.; Yohai, V.J. Robust Statistics: Theory and Methods; Wiley: New York, NY, USA, 2006.
  13. Bubeck, S.; Cesa-Bianchi, N.; Lugosi, G. Bandits with heavy tail. IEEE Trans. Inform. Theory 2013, 7711–7717.
  14. Catoni, O. Challenging the empirical mean and empirical variance: A deviation study. Ann. Inst. Henri Poincaré Probab. Stat. 2012, 48, 1148–1185.
  15. Brownlees, C.; Joly, E.; Lugosi, G. Empirical risk minimization for heavy-tailed losses. Ann. Stat. 2015, 43, 2507–2536.
  16. Minsker, S. Geometric median and robust estimation in Banach spaces. Bernoulli 2015, 21, 2308–2335.
  17. Wang, Z.; Liu, H.; Zhang, T. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. Ann. Stat. 2014, 42, 2164–2201.
  18. Jones, M.C.; Faddy, M.J. A skew extension of the t-distribution, with applications. J. R. Stat. Soc. Ser. B Stat. Methodol. 2003, 65, 159–174.
  19. Høyrup, J. The Babylonian Cellar Text BM 85200 + VAT 6599; Birkhäuser: Basel, Switzerland, 1992; pp. 315–358.
  20. Galarza, C.; Zhang, P.; Lachos, V. Logistic quantile regression for bounded outcomes using a family of heavy-tailed distributions. Sankhya B 2021, 83, S325–S349.
Figure 1. Different choices of influence function.
Figure 2. Representation of $\hat{R}_{f}(\mu)$ and the cubic equations $C_{+}(\mu)$ and $C_{-}(\mu)$.
Figure 3. Excess risk varies with degrees of freedom.
Figure 4. QQplot for cell A.
Figure 5. QQplot for cell B.
Figure 6. log-QQplot for cell A.
Figure 7. log-QQplot for cell B.
Figure 8. Boxplot about the log-scores for the two types of cells.
Figure 9. The bee colony diagram about the log-scores for the two types of cells.
Figure 10. Regression for cell A.
Figure 11. Regression for cell B.
Figure 12. OLS regression residual plot for cell A.
Figure 13. Third-moment Catoni regression residual plot for cell A.
Figure 14. OLS regression residual plot for cell B.
Figure 15. Third-moment Catoni regression residual plot for cell B.
Table 1. The excess risk of the L1, Catoni, and third-moment Catoni regression estimators for different degrees of freedom d and sample sizes n.

            n = 50   n = 100   n = 250   n = 500   n = 1000
d = 1  L1     8.79      5.91      5.75      4.15      3.67
       C      7.53      4.63      4.90      4.07      3.49
       C3     7.46      4.06      4.84      4.06      3.38
d = 3  L1     1.51      1.34      1.22      1.15      1.14
       C      1.38      1.27      1.20      1.15      1.12
       C3     1.27      1.21      1.14      1.10      1.08
d = 5  L1     1.09      1.11      1.08      1.08      1.07
       C      1.08      1.13      1.09      1.10      1.08
       C3     1.06      1.08      1.03      1.04      1.04
d = 7  L1     1.08      1.02      1.05      1.01      1.00
       C      1.05      0.94      1.03      1.00      0.98
       C3     0.97      0.94      0.90      0.85      0.86
Table 2. Comparison of the performance between the proposed estimator and the other estimators under various risk measures (n = 500, d = 1).

          L1       C        C3
ER        4.1561   4.0747   4.0628
RB        0.0398   0.0385   0.0383
SMSE      0.0970   0.0952   0.0947
Table 3. The excess risk of the L1, Catoni, and third-moment Catoni regression estimators under a skewed normal distribution with shape parameter s.

            n = 50   n = 100   n = 250   n = 500   n = 1000
s = 1  L1    0.847     0.829     0.820     0.807     0.784
       C     0.865     0.844     0.825     0.785     0.779
       C3    0.859     0.837     0.823     0.789     0.781
s = 3  L1    0.857     0.833     0.809     0.819     0.798
       C     0.861     0.842     0.829     0.835     0.828
       C3    0.861     0.843     0.827     0.835     0.824
s = 5  L1    0.831     0.825     0.812     0.792     0.782
       C     0.856     0.855     0.850     0.839     0.828
       C3    0.855     0.855     0.848     0.827     0.822
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Luo, R.; Chen, Y.; Song, S. On the M-Estimator under Third Moment Condition. Mathematics 2022, 10, 1713. https://doi.org/10.3390/math10101713
