Article

Testing Symmetry of Unknown Densities via Smoothing with the Generalized Gamma Kernels

1
Faculty of Economics, Setsunan University, 17-8 Ikeda Nakamachi, Neyagawa, Osaka 572-8508, Japan
2
Research Institute of Capital Formation, Development Bank of Japan, 9-7, Otemachi 1-chome, Chiyoda-ku, Tokyo 100-8178, Japan
3
Waseda Institute of Political Economy, Waseda University, 6-1 Nishiwaseda 1-chome, Shinjuku-ku, Tokyo 169-8050, Japan
4
Japan Economic Research Institute, 2-1, Otemachi 2-chome, Chiyoda-ku, Tokyo 100-0004, Japan
*
Author to whom correspondence should be addressed.
Econometrics 2016, 4(2), 28; https://doi.org/10.3390/econometrics4020028
Submission received: 29 December 2015 / Revised: 10 May 2016 / Accepted: 30 May 2016 / Published: 17 June 2016

Abstract:
This paper improves a kernel-smoothed test of symmetry by combining it with a new class of asymmetric kernels called the generalized gamma kernels. It is demonstrated that the improved test statistic has a normal limit under the null of symmetry and is consistent under the alternative. A test-oriented smoothing parameter selection method is also proposed to implement the test. Monte Carlo simulations indicate superior finite-sample performance of the test statistic. It is worth emphasizing that this performance is grounded in the first-order normal limit and a small number of observations, despite the nonparametric convergence rate and the sample-splitting procedure of the test.

1. Introduction

Symmetry and conditional symmetry play a key role in numerous fields of economics and finance. Economists often focus on asymmetry of price adjustments (Bacon [1]), innovations in asset markets (Campbell and Hentschel [2]) or policy shocks (Clarida and Gertler [3]). In addition, the mean-variance analysis in finance is consistent with investors’ portfolio decision making if and only if asset returns are elliptically distributed (e.g., Chamberlain [4]; Owen and Rabinovitch [5]; Appendix B in Chapter 4 of Ingersoll [6]). Moreover, conditional symmetry of the distribution of the disturbance is often a key regularity condition for regression analysis. In particular, convergence properties of adaptive estimation and robust regression estimation are typically explored under this condition. For the former, Bickel [7] and Newey [8] demonstrate that conditional symmetry of the disturbance distribution, in the contexts of linear regression and moment-condition models, respectively, suffices for adaptive estimators to attain their efficiency bounds. For the latter, Carroll and Welsh [9] warn of invalid inference based on robust regression estimation when the regression disturbance is asymmetrically distributed. Indeed, symmetry of the disturbance distribution is often a key assumption for consistency of parameter estimators in certain versions of robust regression estimation (e.g., Lee [10,11]; Zinde-Walsh [12]; Bondell and Stefanski [13]). Based on their simulation studies, Baldauf and Santos Silva [14] also argue that lack of conditional symmetry in the disturbance distribution may lead to inconsistency of parameter estimates obtained via robust regression estimation.
In view of the importance of symmetry, a number of tests for symmetry and conditional symmetry have been proposed. The tests can be classified into kernel and non-kernel methods. Examples of the former include Fan and Gencay [15], Ahmad and Li [16], Zheng [17], Diks and Tong [18], and Fan and Ullah [19]. The latter class comprises tests based on: (i) sample moments (Randles et al. [20]; Godfrey and Orme [21]; Bai and Ng [22]; Premaratne and Bera [23]); (ii) regression percentiles (Newey and Powell [24]); (iii) martingale transformations (Bai and Ng [25]); (iv) empirical processes (Delgado and Escanciano [26]; Chen and Tripathi [27]); and (v) Neyman’s smooth test (Fang et al. [28]). Our focus is on the test by Fernandes, Mendes and Scaillet [29] (abbreviated as “FMS” hereafter). While this test can be viewed as a kernel-smoothed one, it has a unique feature. When a probability density function (“pdf”) is symmetric about zero, its shapes on the positive and negative sides must be mirror images of each other. Then, after estimating the pdfs on the positive and negative sides separately using the positive observations and the absolute values of the negative observations, respectively, FMS examine whether symmetry holds by gauging the closeness between the two density estimates. For this reason, we call the test the split-sample symmetry test (“SSST”) hereafter. One of the features of the SSST is that it relies on asymmetric kernels with support on $[0, \infty)$, such as the gamma (“G”) kernel by Chen [30]. Asymmetric kernel estimators are nonnegative and boundary bias-free, and achieve the optimal convergence rate (in the mean integrated squared error sense) within the class of nonnegative kernel estimators. It is also reported (e.g., p. 597 of Gospodinov and Hirukawa [31]; p. 651 of FMS) that asymmetric kernel-based estimation and inference possess nice finite-sample properties. Although the split-sample approach is expected to result in efficiency loss, the SSST attains the same convergence rate as the smoothed symmetry tests using symmetric kernels do. Furthermore, unlike these tests, the SSST does not require continuity of density derivatives at the origin.
The aim of this paper is to ameliorate the SSST further by combining it with the generalized gamma (“GG”) kernels, a new class of asymmetric kernels with support on $[0, \infty)$ that has been proposed recently by Hirukawa and Sakudo [32]. Our particular focus is on two special cases of the GG kernels, namely, the modified gamma (“MG”) and Nakagami-m (“NM”) kernels. While superior finite-sample performance of the MG kernel has been reported in the literature, the NM kernel is also anticipated to have an advantage when applied to the SSST. It is known that finite-sample performance of a kernel density estimator depends on proximity in shape between the underlying density and the kernel chosen. As shown in Section 2, the NM kernel collapses to a half-normal pdf when smoothing is made at the origin, and the shape of this density is likely to be close to the positive side of single-peaked symmetric distributions. We also pay particular attention to smoothing parameter selection. While existing articles on asymmetric kernel-smoothed tests (e.g., Fernandes and Grammig [33]; FMS) simply borrow choice methods based on optimality for density estimation, we tailor the idea of test-oriented smoothing parameter selection by Kulasekera and Wang [34,35] to the SSST.
The SSST with the GG kernels plugged in preserves all appealing properties documented in FMS. First, the SSST has a normal limit under the null of symmetry and it is also consistent under the alternative. Hence, unlike the tests by Delgado and Escanciano [26] and Chen and Tripathi [27], no simulated critical values are required. Second, Monte Carlo simulations indicate superior finite-sample performance of the SSST smoothed by the GG kernels. The performance is confirmed even when the entire sample size is 50, despite a nonparametric convergence rate and a sample-splitting procedure. Remarkably, the superior performance is based simply on first-order asymptotic results, and thus the assistance of bootstrapping appears to be unnecessary, unlike most of the smoothed tests employing fixed, symmetric kernels. This result complements previous findings on asymmetric kernel-smoothed tests by Fernandes and Grammig [33] and FMS.
The remainder of this paper is organized as follows. In Section 2 a brief review of a family of the GG kernels is provided. Section 3 proposes symmetry and conditional symmetry tests based on the GG kernels. Their limiting null distributions and power properties are also explored. As an important practical problem, Section 4 discusses the smoothing parameter selection. Our particular focus is on the choice method for power optimality. Section 5 conducts Monte Carlo simulations to investigate finite-sample properties of the test statistics. Section 6 summarizes the main results of the paper. Proofs are provided in the Appendix.
This paper adopts the following notational conventions: $\Gamma(a) = \int_0^\infty y^{a-1}\exp(-y)\,dy$ $(a > 0)$ is the gamma function; $\mathbf{1}\{\cdot\}$ signifies an indicator function; $\lfloor\cdot\rfloor$ denotes the integer part; $\|A\| = \{\mathrm{tr}(A'A)\}^{1/2}$ is the Euclidean norm of matrix A; and $c > 0$ denotes a generic constant, the quantity of which varies from statement to statement. The expression “$X \stackrel{d}{=} Y$” reads “a random variable X obeys the distribution Y.” The expression “$X_n \sim Y_n$” is used whenever $X_n/Y_n \to 1$ as $n \to \infty$. Lastly, in order to describe different asymptotic properties of an asymmetric kernel estimator across positions of the design point $x > 0$ relative to the smoothing parameter $b > 0$ that shrinks toward zero, we denote by “interior x” and “boundary x” a design point x that satisfies $x/b \to \infty$ and $x/b \to \kappa$ for some $0 < \kappa < \infty$ as $b \to 0$, respectively.

2. Family of the GG Kernels: A Brief Review

Before proceeding, we provide a concise review on a family of the GG kernels. The family constitutes a new class of asymmetric kernels, and it consists of a specific functional form and a set of common conditions, as in Definition 1 below. The name “GG kernels” comes from the fact that the pdf of a GG distribution by Stacy [36] is chosen as the functional form. A major advantage of the family is that for each asymmetric kernel generated from this class, asymptotic properties of the kernel estimators (e.g., density and regression estimators) can be delivered by manipulating the conditions directly, as with symmetric kernels.
Definition 1. 
(Hirukawa and Sakudo [32], Definition 1) Let $(\alpha, \beta, \gamma) = (\alpha_b(x), \beta_b(x), \gamma_b(x)) \in \mathbb{R}_+^3$ be a continuous function of the design point x and the smoothing parameter b. For such $(\alpha, \beta, \gamma)$, consider the pdf of $GG(\alpha, \beta\Gamma(\alpha/\gamma)/\Gamma((\alpha+1)/\gamma), \gamma)$, i.e.,
$$K_{GG}(u; x, b) = \frac{\gamma\, u^{\alpha-1} \exp\left\{-\left(\frac{u}{\beta\Gamma(\alpha/\gamma)/\Gamma((\alpha+1)/\gamma)}\right)^{\gamma}\right\}}{\left\{\beta\Gamma(\alpha/\gamma)/\Gamma((\alpha+1)/\gamma)\right\}^{\alpha}\,\Gamma(\alpha/\gamma)}\,\mathbf{1}\{u \ge 0\}. \quad (1)$$
This pdf is said to be a family of the GG kernels if it satisfies each of the following conditions:
Condition 1. 
$\beta = x$ for $x \ge C_1 b$ and $\beta = \varphi_b(x)$ for $x \in [0, C_1 b)$, where $0 < C_1 < \infty$ is some constant, the function $\varphi_b(x)$ satisfies $C_2 b \le \varphi_b(x) \le C_3 b$ for some constants $0 < C_2 \le C_3 < \infty$, and the connection between $x$ and $\varphi_b(x)$ at $x = C_1 b$ is smooth.
Condition 2. 
$\alpha, \gamma \ge 1$, and for $x \in [0, C_1 b)$, α satisfies $1 \le \alpha \le C_4$ for some constant $1 \le C_4 < \infty$. Moreover, connections of α and γ at $x = C_1 b$, if any, are smooth.
Condition 3. 
$M_b(x) := \Gamma(\alpha/\gamma)\,\Gamma((\alpha+2)/\gamma)/\left\{\Gamma((\alpha+1)/\gamma)\right\}^2 = 1 + (C_5/x)\,b + o(b)$ for $x \ge C_1 b$ and $M_b(x) = O(1)$ for $x \in [0, C_1 b)$, for some constant $0 < C_5 < \infty$.
Condition 4. 
$H_b(x) := \Gamma(\alpha/\gamma)\,\Gamma(2\alpha/\gamma) / \left\{2^{1/\gamma}\,\Gamma((\alpha+1)/\gamma)\,\Gamma((2\alpha-1)/\gamma)\right\} = 1 + o(1)$ for interior x and $H_b(x) = O(1)$ for boundary x.
Condition 5. 
$A_{b,\nu}(x) := \gamma^{\nu-1}\,\Gamma((\alpha+1)/\gamma)^{\nu-1}\,\Gamma((\nu(\alpha-1)+1)/\gamma) / \left\{\beta^{\nu-1}\,\nu^{(\nu(\alpha-1)+1)/\gamma}\,\Gamma(\alpha/\gamma)^{2\nu-1}\right\} \sim V_I(\nu)\,(xb)^{(1-\nu)/2}$ for interior x and $A_{b,\nu}(x) \sim V_B(\nu)\,b^{1-\nu}$ for boundary x, $\nu \in \mathbb{R}_+$, where constants $0 < V_I(\nu), V_B(\nu) < \infty$ depend only on ν.
The family embraces the following two special cases1. Putting
$$(\alpha, \beta) = \begin{cases} \left(\dfrac{x}{b},\; x\right) & \text{for } x \ge 2b \\ \left(\dfrac{1}{4}\left(\dfrac{x}{b}\right)^2 + 1,\; \dfrac{x^2}{4b} + b\right) & \text{for } x \in [0, 2b) \end{cases}$$
and γ = 1 in (1) generates the MG kernel
$$K_{MG}(u; x, b) = \frac{u^{\alpha-1}\exp\left\{-u/(\beta/\alpha)\right\}}{(\beta/\alpha)^{\alpha}\,\Gamma(\alpha)}\,\mathbf{1}\{u \ge 0\}.$$
It can be seen that this is equivalent to the kernel proposed by Chen [30] by recognizing that $\alpha = \rho_b(x)$ on p. 473 of Chen [30] and $\beta/\alpha = b$. The same $(\alpha, \beta)$ and $\gamma = 2$ also yield the NM kernel
$$K_{NM}(u; x, b) = \frac{2\, u^{\alpha-1}\exp\left\{-\left(\frac{u}{\beta\Gamma(\alpha/2)/\Gamma((\alpha+1)/2)}\right)^{2}\right\}}{\left\{\beta\Gamma(\alpha/2)/\Gamma((\alpha+1)/2)\right\}^{\alpha}\,\Gamma(\alpha/2)}\,\mathbf{1}\{u \ge 0\}.$$
The GG kernels are designed to inherit all appealing properties that the MG kernel possesses. We conclude this section by reviewing these properties. The first two properties are basic ones. First, by construction, the GG kernels are free of boundary bias and always generate nonnegative density estimates everywhere. Second, the shape of each GG kernel varies according to the position at which smoothing is made; in other words, the amount of smoothing changes in a locally adaptive manner. To illustrate this property, Figure 1 plots the shapes of the MG and NM kernels at four different design points ($x = 0.0, 0.5, 1.0, 2.0$) at which the smoothing is performed. For reference, the G kernel is also drawn in each panel. When smoothing is made at the origin (Panel (A)), the NM kernel collapses to a half-normal pdf, whereas the others reduce to exponential pdfs. As the design point moves away from the boundary (Panels (B–D)), the shape of each kernel becomes flatter and closer to symmetric. We should stress that Figure 1 is drawn with the value of the smoothing parameter fixed at $b = 0.2$. Unlike variable bandwidth methods for fixed, symmetric kernels (e.g., Abramson [37]), adaptive smoothing of these kernels can be achieved by a single smoothing parameter, which makes them much more appealing in empirical work.
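As an illustration, the two special cases can be coded directly from their closed forms. The sketch below (function names are ours) uses the $(\alpha, \beta)$ parameterization displayed above for both kernels:

```python
import math

def gg_shape_scale(x, b):
    """(alpha, beta) shared by the MG and NM kernels: (x/b, x) away from the
    boundary, with a smooth quadratic patch on [0, 2b)."""
    if x >= 2 * b:
        return x / b, x
    return 0.25 * (x / b) ** 2 + 1.0, x ** 2 / (4 * b) + b

def k_mg(u, x, b):
    """MG kernel: gamma pdf with shape alpha and scale beta/alpha (gamma = 1)."""
    a, beta = gg_shape_scale(x, b)
    s = beta / a
    if u <= 0:
        return 0.0
    return u ** (a - 1) * math.exp(-u / s) / (s ** a * math.gamma(a))

def k_nm(u, x, b):
    """NM kernel: GG pdf with gamma = 2 and scale beta*Gamma(a/2)/Gamma((a+1)/2)."""
    a, beta = gg_shape_scale(x, b)
    s = beta * math.gamma(a / 2) / math.gamma((a + 1) / 2)
    if u <= 0:
        return 0.0
    return 2 * u ** (a - 1) * math.exp(-((u / s) ** 2)) / (s ** a * math.gamma(a / 2))
```

At $x = 0$ (so $\alpha = 1$, $\beta = b$), `k_nm` reduces to a half-normal pdf and `k_mg` to an exponential pdf, matching Panel (A) of Figure 1; for interior x, the mean of `k_mg` equals the design point x.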
The remaining three properties concern density estimates using the GG kernels. Third, when best implemented, each GG density estimator attains Stone’s [38] optimal convergence rate in the mean integrated squared error within the class of nonnegative kernel density estimators. Fourth, the leading bias of each GG density estimator contains only the second-order derivative of the true density over the interior region, unlike many other asymmetric kernels including the G kernel. Fifth, the variance of each GG estimator tends to decrease as the design point moves away from the boundary. This property is particularly advantageous for estimating distributions that have long tails with sparse data.

3. Tests for Symmetry and Conditional Symmetry Smoothed by the GG Kernels

3.1. SSST as a Special Case of Two-Sample Goodness-of-Fit Tests

This section proposes to combine the SSST with the GG kernels, explores asymptotic properties of the test statistic, and finally expands the scope of the test to testing the null of conditional symmetry. The SSST can be characterized as a special case of the two-sample tests for equality of two unknown densities investigated by Anderson, Hall and Titterington [39]. Suppose that we are interested in testing symmetry of the distribution of a random variable $U \in \mathbb{R}$. Without loss of generality, we hypothesize that the distribution is symmetric about zero. If U has a pdf, then under the null, its shapes on the positive and negative sides of the entire real line $\mathbb{R}$ must be mirror images of each other. Let f and g be the pdfs to the right and left of the origin, respectively. Then, we would like to test the null hypothesis
$H_0: f(u) = g(u)$ for almost all $u \in \mathbb{R}_+$
against the alternative
$H_1: f(u) \ne g(u)$ on a set of positive measure in $\mathbb{R}_+$.
Accordingly, a natural test statistic should be built on the integrated squared error (“ISE”)
$$I = \int_0^\infty \left\{f(u) - g(u)\right\}^2 du = \int_0^\infty \left\{f(u) - g(u)\right\} dF(u) - \int_0^\infty \left\{f(u) - g(u)\right\} dG(u),$$
where F and G are cumulative distribution functions corresponding to f and g, respectively.
The name of the SSST comes from the way a sample analogue to I is constructed. A random sample of N observations $\{U_i\}_{i=1}^N$ is split into two sub-samples, namely, $\{X_i\}_{i=1}^{n_1} := \{U_i : U_i \ge 0\}_{i=1}^{n_1}$ and $\{Y_i\}_{i=1}^{n_2} := \{-U_i : U_i < 0\}_{i=1}^{n_2}$, where $N = n_1 + n_2$. Given the sub-samples, f and g can be estimated using a GG kernel with smoothing parameter b as
$$\hat{f}(u) = \frac{1}{n_1}\sum_{i=1}^{n_1} K_{GG}(X_i; u, b) \quad \text{and} \quad \hat{g}(u) = \frac{1}{n_2}\sum_{i=1}^{n_2} K_{GG}(Y_i; u, b),$$
respectively2. Similarly, $(F, G)$ is replaced with the corresponding empirical measures $(F_{n_1}, G_{n_2})$. In addition, because $n_1 \approx n_2$ under $H_0$, without loss of generality and for ease of exposition, we assume that N is even and that $n := n_1 = n_2 = N/2$. Using the shorthand notation $K_{XY} = K_{GG}(Y; X, b)$ finally yields the sample analog to I as
$$\bar{I}_n = \frac{1}{n}\sum_{i=1}^{n}\left\{\hat{f}(X_i) + \hat{g}(Y_i) - \hat{g}(X_i) - \hat{f}(Y_i)\right\} = \sum_{i=1}^{n}\frac{1}{n^2}\left(K_{X_iX_i} + K_{Y_iY_i} - K_{Y_iX_i} - K_{X_iY_i}\right) + \sum_{j=1}^{n}\sum_{i=1,\, i\ne j}^{n}\frac{1}{n^2}\left(K_{X_jX_i} + K_{Y_jY_i} - K_{Y_jX_i} - K_{X_jY_i}\right) = I_{1n} + I_n.$$
Although we could use $\bar{I}_n$ itself as the test statistic, the probability limit of $I_{1n}$ plays the role of a non-vanishing centering term in the asymptotic null distribution. Because this term is likely to cause size distortions in finite samples, we focus only on $I_n$ to construct the test statistic. Now $I_n$ can be rewritten as
$$I_n := \sum_{1 \le i < j \le n}\Phi_n(Z_i, Z_j) := \sum_{1 \le i < j \le n}\frac{1}{n^2}\left\{\phi_n(Z_i, Z_j) + \phi_n(Z_j, Z_i)\right\},$$
where $Z_i := (X_i, Y_i)$ and $\phi_n(Z_i, Z_j) := K_{X_jX_i} + K_{Y_jY_i} - K_{Y_jX_i} - K_{X_jY_i}$. Observe that $\Phi_n(Z_i, Z_j)$ is symmetric in $Z_i$ and $Z_j$ and that $E[\Phi_n(Z_i, Z_j) \mid Z_i] = 0$ almost surely under $H_0$. It follows that $I_n$ is a degenerate U-statistic, and thus we may apply a martingale central limit theorem (e.g., Theorem 1 of Hall [40]; Theorem 4.7.3 of Koroljuk and Borovskich [41]).
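To make the construction concrete, here is a minimal sketch (function names are ours; the MG kernel stands in for $K_{GG}$) of the cross-term statistic $I_n$. Note that when the two sub-samples coincide, the four kernel terms cancel pairwise, so the statistic is exactly zero:

```python
import math

def k_mg(u, x, b):
    """MG kernel (gamma pdf with shape alpha, scale beta/alpha); see Section 2."""
    a, beta = (x / b, x) if x >= 2 * b else (0.25 * (x / b) ** 2 + 1.0, x ** 2 / (4 * b) + b)
    s = beta / a
    return 0.0 if u <= 0 else u ** (a - 1) * math.exp(-u / s) / (s ** a * math.gamma(a))

def i_n(xs, ys, b, kern=k_mg):
    """I_n: off-diagonal part of the sample ISE analogue (equal sizes n1 = n2 = n).
    In the paper's shorthand K_{AB} = K_GG(B; A, b), i.e., kern(data, design, b)."""
    n = len(xs)
    assert len(ys) == n, "equal sub-sample sizes assumed"
    total = 0.0
    for j in range(n):
        for i in range(n):
            if i != j:  # diagonal terms belong to I_1n and are dropped
                total += (kern(xs[i], xs[j], b) + kern(ys[i], ys[j], b)
                          - kern(xs[i], ys[j], b) - kern(ys[i], xs[j], b))
    return total / n ** 2
```

Under symmetry, `i_n` concentrates near zero, which is why the limiting behavior is driven by the degenerate U-statistic structure rather than a nonzero center.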
Before describing the asymptotic properties of $I_n$, we make two remarks. First, applying the idea of two-sample goodness-of-fit tests to the symmetry test is not new. Ahmad and Li [16] and Fan and Ullah [19] have also studied the symmetry test based on the closeness of two density estimates measured by the ISE. They estimate densities using two samples, namely, the original entire sample $\{X_i\}_{i=1}^N := \{U_i\}_{i=1}^N$ and the one obtained by flipping the sign of each observation, $\{Y_i\}_{i=1}^N := \{-U_i\}_{i=1}^N$, in our notation. Because each of X and Y has support on $(-\infty, \infty)$ by construction, a standard symmetric kernel is employed for density estimation, unlike in the SSST. Second, if X and Y are taken from two different distributions with support on $[0, \infty)$, then $I_n$ can be viewed as a pure two-sample goodness-of-fit test. It can be immediately applied to testing for equality of two unknown distributions of nonnegative economic and financial variables such as incomes, wages, short-term interest rates, and insurance claims.
To present the convergence properties of I n , we make the following assumptions.
Assumption 1. 
Two random samples $\{X_i\}_{i=1}^{n_1}$ and $\{Y_i\}_{i=1}^{n_2}$ are drawn independently from univariate distributions that have pdfs f and g with support on $[0, \infty)$, respectively.
Assumption 2. 
f and g are twice continuously differentiable on $[0, \infty)$, and $E[X\{f''(X)\}^2]$, $E[X^2 f''(X) g''(X)]$, $E[Y^2 f''(Y) g''(Y)]$, $E[Y\{g''(Y)\}^2] < \infty$.
Assumption 3. 
The smoothing parameter $b = b_n$ satisfies $b + (nb)^{-1} \to 0$ as $n \to \infty$.
Assumption 4. 
Let $(X_1, X_2)$ and $(Y_1, Y_2)$ be two independent copies of X and Y, respectively. Then, the following hold:
(a)
$E[K_{X_2X_1}K_{Y_2X_1}] \sim E[f(X)g(X)]$; and $E[K_{Y_2Y_1}K_{X_2Y_1}] \sim E[f(Y)g(Y)]$.
(b)
$E[K_{X_2X_1}K_{X_1X_2}] \sim b^{-1/2}V_I(2)E[X^{-1/2}f(X)]$; $E[K_{X_2Y_1}K_{Y_1X_2}] \sim b^{-1/2}V_I(2)E[X^{-1/2}g(X)]$; $E[K_{Y_2X_1}K_{X_1Y_2}] \sim b^{-1/2}V_I(2)E[Y^{-1/2}f(Y)]$; and $E[K_{Y_2Y_1}K_{Y_1Y_2}] \sim b^{-1/2}V_I(2)E[Y^{-1/2}g(Y)]$, where $V_I(2)$ is a kernel-specific constant given in Condition 5 of Definition 1.
Assumptions 1–3 are standard in the literature on asymmetric kernel smoothing. On the other hand, Assumption 4 has a different flavor. Convergence results on $I_n$ are built on several different moment approximations. While Definition 1 implies the statements in Lemma A2 in the Appendix, it is unclear whether the definition alone admits such approximations as in Assumption 4. The difficulty comes from the fact that, unlike for symmetric kernels, the roles of design points and data points are not exchangeable in asymmetric kernels. What makes the problem more complicated is that the functional forms of $(\alpha, \beta, \gamma) = (\alpha_b(x), \beta_b(x), \gamma_b(x))$ in the GG kernels are not fully specified in Definition 1. Considering that not all GG kernels may admit the moment approximations (a) and (b), we choose to make an extra assumption. Note that the MG and NM kernels fulfill Assumption 4, as documented in the next lemma.
Lemma 1. 
If Assumptions 1–3 hold, then each of the MG and NM kernels satisfies Assumption 4.
The theorem below delivers the convergence properties of I n and provides a consistent estimator of its asymptotic variance.
Theorem 1. 
Suppose that Assumptions 1–4 hold and that $n_1 = n_2 = n$.
(i) 
Under $H_0$, $nb^{1/4} I_n \stackrel{d}{\to} N(0, \sigma^2)$ as $n \to \infty$, where
$$\sigma^2 = 2V_I(2)\, E\left[X^{-1/2}\left\{f(X) + g(X)\right\} + Y^{-1/2}\left\{f(Y) + g(Y)\right\}\right],$$
which reduces to $\sigma^2 = 8V_I(2)\, E[X^{-1/2} f(X)]$ under $H_0$, and $V_I(2)$ is a kernel-specific constant given in Condition 5 of Definition 1.
(ii) 
A consistent estimator of $\sigma^2$ is given by
$$\hat{\sigma}^2 = 2V_I(2)\,\frac{1}{n}\sum_{i=1}^{n}\left[X_i^{-1/2}\left\{\hat{f}(X_i) + \hat{g}(X_i)\right\} + Y_i^{-1/2}\left\{\hat{f}(Y_i) + \hat{g}(Y_i)\right\}\right].$$
We make a few remarks. First, it follows from Lemma 1 and Theorem 1 that the MG and NM kernels can be safely employed for the SSST, where the values of $V_I(2)$ for these kernels are $(V_{I,MG}(2), V_{I,NM}(2)) = (1/(2\sqrt{\pi}), 1/\sqrt{2\pi})$. It also follows from Proposition 1 of FMS and Theorem 1 that the limiting null distributions of $nb^{1/4} I_n$ using the G and MG kernels coincide, as expected. Second, while a form similar to the asymptotic variance $\sigma^2$ can be found in Proposition 1 of FMS, $\sigma^2$ takes a more general form. Accordingly, the variance estimator $\hat{\sigma}^2$ is consistent under both $H_0$ and $H_1$. Third, it can be inferred from Theorem 1 that the test statistic becomes $T_n := nb^{1/4} I_n / \hat{\sigma}$. As a consequence, the SSST is a one-sided test that rejects $H_0$ in favor of $H_1$ if $T_n > z_\alpha$, where $z_\alpha$ is the upper α-percentile of $N(0, 1)$.
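Putting the pieces together, a minimal sketch of the feasible statistic follows (function names are ours; the MG kernel stands in for $K_{GG}$, so $V_I(2) = 1/(2\sqrt{\pi})$):

```python
import math

V_I2_MG = 1.0 / (2.0 * math.sqrt(math.pi))  # V_I(2) for the MG kernel

def k_mg(u, x, b):
    """MG kernel: gamma pdf with shape alpha and scale beta/alpha (Section 2)."""
    a, beta = (x / b, x) if x >= 2 * b else (0.25 * (x / b) ** 2 + 1.0, x ** 2 / (4 * b) + b)
    s = beta / a
    return 0.0 if u <= 0 else u ** (a - 1) * math.exp(-u / s) / (s ** a * math.gamma(a))

def t_n(xs, ys, b):
    """One-sided SSST statistic T_n = n b^{1/4} I_n / sigma_hat (n1 = n2 = n)."""
    n = len(xs)
    fhat = lambda u: sum(k_mg(xi, u, b) for xi in xs) / n
    ghat = lambda u: sum(k_mg(yi, u, b) for yi in ys) / n
    # I_n: off-diagonal (i != j) part of the ISE analogue
    i_n = sum(k_mg(xs[i], xs[j], b) + k_mg(ys[i], ys[j], b)
              - k_mg(xs[i], ys[j], b) - k_mg(ys[i], xs[j], b)
              for j in range(n) for i in range(n) if i != j) / n ** 2
    # consistent estimator of sigma^2 as in Theorem 1(ii)
    s2 = (2 * V_I2_MG / n) * sum(x ** -0.5 * (fhat(x) + ghat(x))
                                 + y ** -0.5 * (fhat(y) + ghat(y))
                                 for x, y in zip(xs, ys))
    return n * b ** 0.25 * i_n / math.sqrt(s2)
```

By construction `t_n(sample, sample, b)` is exactly zero; in applications the statistic is compared with the upper α-percentile $z_\alpha$ of $N(0, 1)$.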
The next proposition refers to consistency of the SSST. Observe that the power approaches one for local alternatives with convergence rates no faster than $nb^{1/4}$, as well as for fixed alternatives.
Proposition 1. 
If Assumptions 1–4 hold, then under $H_1$, $\Pr(T_n > B_n) \to 1$ as $n \to \infty$ for any non-stochastic sequence $B_n$ satisfying $B_n = o(nb^{1/4})$.

3.2. SSST When Two Sub-Samples Have Unequal Sample Sizes

The convergence results in the previous section rely on the assumption that the sample sizes of the two sub-samples $\{X_i\}_{i=1}^{n_1}$ and $\{Y_i\}_{i=1}^{n_2}$ are the same, i.e., $n_1 = n_2$ has been maintained so far. In reality, $n_1 \ne n_2$ is often the case, in particular when the entire sample size $N = n_1 + n_2$ is odd or when $H_1$ is true.
Handling this case requires more tedious calculation. When $n_1 \ne n_2$, $I_n$ can be rewritten as
$$I_{n_1,n_2} = \sum_{j=1}^{n_1}\sum_{i=1,\, i\ne j}^{n_1}\frac{1}{n_1^2}\, K_{X_jX_i} + \sum_{j=1}^{n_2}\sum_{i=1,\, i\ne j}^{n_2}\frac{1}{n_2^2}\, K_{Y_jY_i} - \sum_{j=1}^{n_2}\sum_{i=1,\, i\ne j}^{n_1}\frac{1}{n_1 n_2}\, K_{Y_jX_i} - \sum_{j=1}^{n_1}\sum_{i=1,\, i\ne j}^{n_2}\frac{1}{n_1 n_2}\, K_{X_jY_i}. \quad (3)$$
Following Fan and Ullah [19], we deliver convergence results under the assumption that the two sample sizes $n_1$ and $n_2$ diverge at the same rate. The asymptotic variance of $n_1 b^{1/4} I_{n_1,n_2}$ and its consistent estimator are also provided. Because the essential arguments are the same as those for Theorem 1 and Proposition 1, we omit the proofs of Theorem 2 and Proposition 2 and simply state the results. Observe that when $n_1 = n_2 = n$, these results collapse to Theorem 1 and Proposition 1, respectively.
Theorem 2. 
Suppose that Assumptions 1–4 hold and that $n_1/n_2 \to \lambda$ for some constant $\lambda \in (0, \infty)$.
(i) 
Under $H_0$, $n_1 b^{1/4} I_{n_1,n_2} \stackrel{d}{\to} N(0, \sigma_\lambda^2)$ as $n_1 \to \infty$, where
$$\sigma_\lambda^2 = 2V_I(2)\left\{E[X^{-1/2}f(X)] + \lambda E[X^{-1/2}g(X)] + \lambda E[Y^{-1/2}f(Y)] + \lambda^2 E[Y^{-1/2}g(Y)]\right\},$$
which reduces to $\sigma_\lambda^2 = 2(1+\lambda)^2 V_I(2)\, E[X^{-1/2}f(X)]$ under $H_0$.
(ii) 
A consistent estimator of $\sigma_\lambda^2$ is given by
$$\hat{\sigma}_\lambda^2 = 2V_I(2)\left[\frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{-1/2}\hat{f}(X_i) + \frac{n_1}{n_2}\cdot\frac{1}{n_1}\sum_{i=1}^{n_1} X_i^{-1/2}\hat{g}(X_i) + \frac{n_1}{n_2}\cdot\frac{1}{n_2}\sum_{i=1}^{n_2} Y_i^{-1/2}\hat{f}(Y_i) + \left(\frac{n_1}{n_2}\right)^{2}\frac{1}{n_2}\sum_{i=1}^{n_2} Y_i^{-1/2}\hat{g}(Y_i)\right]. \quad (4)$$
Proposition 2. 
If Assumptions 1–4 hold and $n_1/n_2 \to \lambda \in (0, \infty)$, then under $H_1$, $\Pr(T_{n_1,n_2} > B_{n_1}) := \Pr(n_1 b^{1/4} I_{n_1,n_2}/\hat{\sigma}_\lambda > B_{n_1}) \to 1$ as $n_1 \to \infty$ for any non-stochastic sequence $B_{n_1}$ satisfying $B_{n_1} = o(n_1 b^{1/4})$.
The next corollary is a natural consequence of Theorem 2 and comes from the fact that under $H_0$, $n_1 \approx n_2$ (i.e., $\lambda = 1$) holds. Because N could be odd in this context, n should read $n = \lfloor N/2 \rfloor$.
Corollary 1. 
If Assumptions 1–4 and $n_1/n_2 \to 1$ hold, then $n_1, n_2 \sim n = \lfloor N/2 \rfloor$, so that $n_1 b^{1/4} I_{n_1,n_2} = n b^{1/4} I_n + o_p(1)$.

3.3. Extension to a Test for Conditional Symmetry

So far we have maintained the assumption that the random variable U is observable and has a distribution that is symmetric about zero. However, U is often unobservable, or the axis of symmetry is not zero. The former is typical when we are interested in symmetry of the distribution of the disturbance conditional on regressors in regression analysis. In this scenario, the test is conducted after U is replaced with the residual. For the latter, the test should be based on location-adjusted observations, i.e., transformed observations with an estimate of the axis of symmetry (e.g., the sample mean or the sample median) subtracted from U. These aspects motivate us to generalize the SSST to testing for conditional symmetry.
Following FMS, we consider testing for symmetry in the conditional distribution of $V_1 \mid V_2$ with $(V_1, V_2) \in \mathbb{R} \times \mathbb{R}^d$ within a semiparametric framework. Specifically, for a parameter space $\Theta_1$ and a function $\xi_1: \mathbb{R}^d \times \Theta_1 \to \mathbb{R}$, it suffices to check whether the conditional distribution of $V_1 \mid V_2$ is symmetric about $\xi_1(V_2; \theta_1^0)$ for some $\theta_1^0 \in \Theta_1$. Observe that this is equivalent to testing whether there is $\theta_1^0 \in \Theta_1$ such that the conditional distribution of $V \mid V_2 := V_1 - \xi_1(V_2; \theta_1^0) \mid V_2$ is symmetric about zero.
However, implementing this type of testing strategy requires estimating the conditional pdf of $V \mid V_2$ nonparametrically. This is cumbersome, considering the curse of dimensionality in $V_2$ and the choice of another smoothing parameter. Instead, as in Zheng [17], Bai and Ng [25] and Delgado and Escanciano [26], we assume that there are a parameter space $\Theta_2$ and a function $\xi_2: \mathbb{R}^{d+1} \times \Theta_2 \to \mathbb{R}$ that can attain symmetry of the marginal distribution of $U := \xi_2(V_1, V_2; \theta_2^0)$ about zero for some $\theta_2^0 \in \Theta_2$. Given the dependence of V (and thus U) on $\Theta_1$, we can finally rewrite our testing scheme as one that tests, for a suitable parameter space Θ and a function $\xi: \mathbb{R}^{d+1} \times \Theta \to \mathbb{R}$, symmetry of the marginal distribution of $U = \xi(V_1, V_2; \theta^0)$ about zero for some $\theta^0 \in \Theta$.
Accordingly, the conditional symmetry test proceeds in two steps. First, we estimate $\xi(\cdot, \cdot; \theta^0)$ given N observations $\{(V_{1i}, V_{2i})\}_{i=1}^N$ and denote a consistent estimator of $(\xi, \theta^0)$ by $(\hat{\xi}, \hat{\theta})$. Second, the test is conducted using $\{\hat{U}_i\}_{i=1}^N := \{\hat{\xi}(V_{1i}, V_{2i}; \hat{\theta})\}_{i=1}^N$. As before, the entire sample is split into two sub-samples $\{\hat{X}_i\}_{i=1}^{n_1} := \{\hat{U}_i : \hat{U}_i \ge 0\}_{i=1}^{n_1}$ and $\{\hat{Y}_i\}_{i=1}^{n_2} := \{-\hat{U}_i : \hat{U}_i < 0\}_{i=1}^{n_2}$. Then, the test statistics, namely $I_n(\hat{\xi}, \hat{\theta})$ and $I_{n_1,n_2}(\hat{\xi}, \hat{\theta})$ for equal ($n_1 = n_2 = n$) and unequal ($n_1 \ne n_2$) sample sizes, can be obtained by replacing $(X, Y) = (X(\xi, \theta^0), Y(\xi, \theta^0))$ in $I_n(\xi, \theta^0)$ and $I_{n_1,n_2}(\xi, \theta^0)$ with $(\hat{X}, \hat{Y})$, respectively.
Our remaining task is to demonstrate that there is no asymptotic cost from replacing $(\xi, \theta^0)$ in the test statistics with its estimator $(\hat{\xi}, \hat{\theta})$, as long as $(\hat{\xi}, \hat{\theta}) \stackrel{p}{\to} (\xi, \theta^0)$ at a suitable rate of convergence. To control the convergence rate, we make Assumption 5 below. Observe that it allows for nonparametric rates of convergence; see Hansen [42], for instance, for uniform convergence rates of kernel estimators.
Assumption 5. 
$N^r(\hat{\theta} - \theta^0) \stackrel{p}{\to} 0$ and $N^r(\hat{\xi} - \xi) \stackrel{p}{\to} 0$ uniformly over $\mathbb{R}^{d+1} \times \Theta$ for some $r \in (0, 1/2]$.
Theorem 3 below provides combinations of the shrinking rate q for b and the convergence rate r for $(\hat{\xi}, \hat{\theta})$ that establish the first-order asymptotic equivalence between $nb^{1/4} I_n(\xi, \theta^0)$ ($n_1 b^{1/4} I_{n_1,n_2}(\xi, \theta^0)$) and $nb^{1/4} I_n(\hat{\xi}, \hat{\theta})$ ($n_1 b^{1/4} I_{n_1,n_2}(\hat{\xi}, \hat{\theta})$) when the two sub-samples have equal (unequal) sample sizes.
Theorem 3. 
If Assumptions 1–5 hold, then under $H_0$,
$$nb^{1/4} I_n(\hat{\xi}, \hat{\theta}) = nb^{1/4} I_n(\xi, \theta^0) + o_p(1)$$
as $n \to \infty$ when $n_1 = n_2 = n$, and
$$n_1 b^{1/4} I_{n_1,n_2}(\hat{\xi}, \hat{\theta}) = n_1 b^{1/4} I_{n_1,n_2}(\xi, \theta^0) + o_p(1)$$
as $n_1 \to \infty$ when $n_1/n_2 \to \lambda \in (0, \infty)$, provided that $(q, r)$ belongs to the set $\{(q, r): r > -5q/4 + 1,\ r > q/2,\ r \le 1/2\}$.
The set given in the theorem can be expressed as the triangular region formed by the corners $(2/5, 1/2)$, $(4/7, 2/7)$ and $(1, 1/2)$ on the q–r plane. The theorem also indicates that we must employ the sub-optimal smoothing parameter $b = o(n^{-2/5})$, or undersmooth the observations, to avoid the additional cost of estimating $(\xi, \theta^0)$, as is the case with other kernel-smoothed tests. Moreover, FMS set $b = o(n^{-4/9})$ and obtain the lower bound of r as 4/9. Indeed, the set provided in Theorem 3 overlaps the one derived by FMS, $\{(q, r): q > 4/9,\ r > 4/9\}$.
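The corner coordinates can be verified by intersecting the three boundary lines pairwise; a quick check in exact arithmetic (helper names are ours):

```python
from fractions import Fraction as F

def intersect(a1, c1, a2, c2):
    """Intersection of the lines r = a1*q + c1 and r = a2*q + c2 on the q-r plane."""
    q = (c2 - c1) / (a1 - a2)
    return q, a1 * q + c1

# boundary lines of {(q, r): r > -5q/4 + 1, r > q/2, r <= 1/2}
l1 = (F(-5, 4), F(1))   # r = -5q/4 + 1
l2 = (F(1, 2), F(0))    # r = q/2
l3 = (F(0), F(1, 2))    # r = 1/2

corners = [intersect(*l1, *l3), intersect(*l1, *l2), intersect(*l2, *l3)]
```

The three intersections are $(2/5, 1/2)$, $(4/7, 2/7)$ and $(1, 1/2)$, as stated above.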

4. Smoothing Parameter Selection

How to choose the value of the smoothing parameter b is an important practical problem. Nonetheless, it appears that the issue has not been well addressed in the literature on testing problems using asymmetric kernels. While Fernandes and Grammig [33] adopt a method inspired by Silverman’s [43] rule of thumb, FMS adjust the value chosen via cross-validation. Both methods choose the smoothing parameter value from the viewpoint of optimality for density estimation. Such choices cannot be justified in theory or in practice, because estimation-optimal values may not be equally optimal for testing purposes. In contrast, there are a few works on test-oriented smoothing parameter selection. For the test of equality of two unknown regression curves, Kulasekera and Wang [34,35] analytically explore the idea of choosing the smoothing parameter value that maximizes power while preserving size. Gao and Gijbels [44] combine this idea with the Edgeworth expansion for a bootstrap specification test of parametric regression models.
Below we tailor the procedure of Kulasekera and Wang [35] to the SSST. For a realistic setup, the case of $n_1 \ne n_2$ is exclusively considered. Their basic idea comes from sub-sampling. Without loss of generality, assume that $\{X_i\}_{i=1}^{n_1}$ and $\{Y_i\}_{i=1}^{n_2}$ are ordered samples. Then, the entire sample $\{\{X_i\}_{i=1}^{n_1}, \{Y_i\}_{i=1}^{n_2}\}$ can be split into M sub-samples, where $M = M_{n_1}$ is a non-stochastic sequence that satisfies $1/M + M/n_1 \to 0$ as $n_1 \to \infty$. Given such M and $(k_1, k_2) := (\lfloor n_1/M \rfloor, \lfloor n_2/M \rfloor)$, the mth sub-sample is defined as $\{\{X_{m+(i-1)M}\}_{i=1}^{k_1}, \{Y_{m+(i-1)M}\}_{i=1}^{k_2}\}$, $m = 1, \ldots, M$. This sub-sample yields the analogues to (3) and (4) as
$$I_{k_1,k_2}^{(m)} = \sum_{j=1}^{k_1}\sum_{i=1,\, i\ne j}^{k_1}\frac{1}{k_1^2}\, K_{X_{m+(j-1)M}X_{m+(i-1)M}} + \sum_{j=1}^{k_2}\sum_{i=1,\, i\ne j}^{k_2}\frac{1}{k_2^2}\, K_{Y_{m+(j-1)M}Y_{m+(i-1)M}} - \sum_{j=1}^{k_2}\sum_{i=1,\, i\ne j}^{k_1}\frac{1}{k_1 k_2}\, K_{Y_{m+(j-1)M}X_{m+(i-1)M}} - \sum_{j=1}^{k_1}\sum_{i=1,\, i\ne j}^{k_2}\frac{1}{k_1 k_2}\, K_{X_{m+(j-1)M}Y_{m+(i-1)M}}$$
and
$$\hat{\sigma}_\lambda^{2(m)} = 2V_I(2)\left[\frac{1}{k_1}\sum_{i=1}^{k_1} X_{m+(i-1)M}^{-1/2}\hat{f}^{(m)}(X_{m+(i-1)M}) + \frac{k_1}{k_2}\cdot\frac{1}{k_1}\sum_{i=1}^{k_1} X_{m+(i-1)M}^{-1/2}\hat{g}^{(m)}(X_{m+(i-1)M}) + \frac{k_1}{k_2}\cdot\frac{1}{k_2}\sum_{i=1}^{k_2} Y_{m+(i-1)M}^{-1/2}\hat{f}^{(m)}(Y_{m+(i-1)M}) + \left(\frac{k_1}{k_2}\right)^{2}\frac{1}{k_2}\sum_{i=1}^{k_2} Y_{m+(i-1)M}^{-1/2}\hat{g}^{(m)}(Y_{m+(i-1)M})\right],$$
where
$$\hat{f}^{(m)}(u) = \frac{1}{k_1}\sum_{i=1}^{k_1} K_{GG}(X_{m+(i-1)M}; u, b) \quad \text{and} \quad \hat{g}^{(m)}(u) = \frac{1}{k_2}\sum_{i=1}^{k_2} K_{GG}(Y_{m+(i-1)M}; u, b).$$
It follows that the test statistic using the mth sub-sample becomes
$$T_{k_1,k_2}^{(m)} := \frac{k_1 b^{1/4}\, I_{k_1,k_2}^{(m)}}{\hat{\sigma}_\lambda^{(m)}}, \quad m = 1, \ldots, M.$$
Also denote the set of admissible values for $b = b_{n_1}$ by $H_{n_1} := [\underline{B}\, n_1^{-q}, \bar{B}\, n_1^{-q}]$ for some prespecified exponent $q \in (0, 1)$ and two constants $0 < \underline{B} < \bar{B} < \infty$. Moreover, let
$$\hat{\pi}_M(b_{k_1}) := \frac{1}{M}\sum_{m=1}^{M}\mathbf{1}\left\{T_{k_1,k_2}^{(m)} > c_m(\alpha)\right\},$$
where $c_m(\alpha)$ is the critical value for the size-α test using the mth sub-sample. We pick the power-maximizing $\hat{b}_{k_1} = \hat{B}\, k_1^{-q} = \arg\max_{b_{k_1} \in H_{k_1}} \hat{\pi}_M(b_{k_1})$, and the smoothing parameter value $\hat{b}_{n_1} := \hat{B}\, n_1^{-q}$ follows.
The behavior of $\hat{\pi}_M(b_{k_1})$ can be examined by considering the local alternative
$$H_1: f(u) = g(u) + \frac{h(u)}{\sqrt{n_1 b^{1/4}}},$$
where $h(u)$ satisfies $\int_0^\infty h(u)\, du = 0$ and $I_h := \int_0^\infty h^2(u)\, du \in (0, \infty)$. Also let $\pi(b_{n_1}) := \Pr(T_{n_1,n_2} > c(\alpha))$, where $c(\alpha)$ is the critical value for the size-α test using the entire sample. For such $\pi(b_{n_1})$, define $b_{n_1}^* := B^* n_1^{-q} = \arg\max_{b_{n_1} \in H_{n_1}} \pi(b_{n_1})$. Then, $\hat{b}_{n_1}$ is optimal in the sense of Proposition 3. The proof is omitted, because it is a minor modification of the one for Theorem 2.1 of Kulasekera and Wang [35]; indeed, it can be established by recognizing that $T_{n_1,n_2} \stackrel{d}{\to} N(I_h/\sigma_\lambda, 1)$ under $H_1$, as in Proposition 3 of Fernandes and Grammig [33].
Proposition 3. 
If Assumptions 1–4, $1/M + M/n_1 \to 0$ and $n_1/n_2 \to \lambda \in (0, \infty)$ hold, then $B^*/\hat{B} \overset{p}{\to} 1$ as $n_1 \to \infty$.
We conclude this section by stating how to obtain $\hat{b}_{n_1}$ in practice. Step 1 reflects that $M$ should be divergent but smaller than both $n_1$ and $n_2$ in finite samples. Step 3 follows from the implementation methods in Kulasekera and Wang [34,35]. Finally, Step 4 allows for the possibility that there is more than one maximizer of $\hat{\pi}_M(b_{k_1})$.
Step 1: Choose some $\delta \in (0, 1)$ and specify $M = \min\left(\lfloor n_1^{\delta} \rfloor, \lfloor n_2^{\delta} \rfloor\right)$.
Step 2: Make $M$ sub-samples of sizes $(k_1, k_2) = (\lfloor n_1/M \rfloor, \lfloor n_2/M \rfloor)$.
Step 3: Pick two constants $0 < \underline{H} < \bar{H} < 1$ and define $H_{k_1} = [\underline{H}, \bar{H}]$.
Step 4: Set $c_m(\alpha) \equiv z_{\alpha}$ and find $\hat{b}_{k_1} = \inf\left\{\arg\max_{b_{k_1} \in H_{k_1}} \hat{\pi}_M(b_{k_1})\right\}$ by a grid search.
Step 5: Obtain $\hat{B} = \hat{b}_{k_1} k_1^{q}$ and calculate $\hat{b}_{n_1} = \hat{B} n_1^{-q}$.
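To make Steps 1–5 concrete, the following Python sketch implements the grid search under the stated choices ($c_m(\alpha) \equiv z_\alpha$, and the $m$th sub-sample formed from every $M$th observation of the ordered data). The function `subsample_stat`, which should return the sub-sample statistic $T_{k_1,k_2}^{(m)}$, is a user-supplied placeholder; all names here are ours rather than the paper's.

```python
import numpy as np

def select_bandwidth(x, y, subsample_stat, q=4/9, delta=0.5,
                     grid=np.linspace(0.01, 0.64, 64), z_alpha=1.645):
    """Sketch of the power-oriented choice of b (Steps 1-5).

    `subsample_stat(xs, ys, b)` stands in for the sub-sample test
    statistic T^(m); it is a placeholder, not defined in the paper.
    """
    x, y = np.sort(x), np.sort(y)              # WLOG the samples are ordered
    n1, n2 = len(x), len(y)
    M = int(min(n1 ** delta, n2 ** delta))     # Step 1
    k1, k2 = n1 // M, n2 // M                  # Step 2
    best_b, best_power = None, -1.0
    for b in grid:                             # Steps 3-4: grid search over H
        rejections = sum(
            subsample_stat(x[m : m + k1 * M : M], y[m : m + k2 * M : M], b)
            > z_alpha                          # critical value fixed at z_alpha
            for m in range(M)                  # m-th sub-sample
        )
        power = rejections / M                 # estimated power pi_hat_M(b)
        if power > best_power:                 # first maximizer = inf of argmax
            best_power, best_b = power, b
    B_hat = best_b * k1 ** q                   # Step 5: rescale from k1 to n1
    return B_hat * n1 ** (-q)
```

With any toy statistic plugged in for `subsample_stat`, the routine returns a strictly positive $\hat{b}_{n_1}$; in applications, the actual sub-sample SSST statistic would be supplied instead.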

5. Finite-Sample Performance

5.1. Setup

It is widely recognized that asymptotic results on kernel-smoothed tests do not carry over well to their finite-sample distributions, because the terms omitted from the first-order asymptotics of the test statistics are highly sensitive to the smoothing parameter values in finite samples. On the other hand, Fernandes and Grammig [33] and FMS report superior finite-sample properties of asymmetric kernel-smoothed tests. To see which perspective dominates, this section investigates the finite-sample performance of the test statistic for the SSST via Monte Carlo simulations.
To make a direct comparison with the results by FMS, we specialize to the conditional symmetry test using the same linear regression model $y = \beta_0 + \beta_1 x + u$ as used in FMS. The data are generated in the following manner. First, the regressor $x$ is drawn from $N(0, 1)$. Second, the disturbance $u$, which is independent of $x$, is drawn from one of the eight zero-mean distributions given in Table 1. Distributions labeled "S" (symmetric) and "A" (asymmetric) are used to investigate size and power properties of the test statistic, respectively. All the distributions are popular choices in the literature; the generalized lambda distribution ("GLD") by Ramberg and Schmeiser [45], in particular, is known to nest a wide variety of symmetric and asymmetric distributions. Finally, the dependent variable $y$ is generated by setting $\beta_0 = \beta_1 = 1$.
We are interested in testing symmetry of the conditional distribution of $y$ given $x$. For this purpose, the SSST is applied to the least-squares residuals $\hat{u}_i := y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i$ from the sample $\{(y_i, x_i)\}_{i=1}^{N}$, where $(\hat{\beta}_0, \hat{\beta}_1)$ are least-squares estimates of $(\beta_0, \beta_1)$. Finite-sample size and power properties of the test statistic $T_{n_1,n_2}$ for two sub-samples with unequal sample sizes are examined against nominal 5% and 10% levels. The MG and NM kernels (denoted "$T_{n_1,n_2}$-MG" and "$T_{n_1,n_2}$-NM", respectively) are employed as examples of the GG kernels.
Finite-sample properties of $T_{n_1,n_2}$-MG and $T_{n_1,n_2}$-NM are evaluated in comparison with other versions of the SSST. First, two versions of FMS's original test statistic, built on a statistic equivalent to our $I_n$ with the G kernel, are considered. "FMS-G-O" is FMS's truly original statistic, whereas "FMS-G-AltVar" is the one with the variance estimator replaced by $\hat{\sigma}_{\lambda}^2$ given in Theorem 2. Second, $T_{n_1,n_2}$ using the G kernel (denoted "$T_{n_1,n_2}$-G") is also calculated. Notice that FMS-G-AltVar and $T_{n_1,n_2}$-G take exactly the same form; the only difference is the method of choosing the smoothing parameter $b$, which will be discussed shortly. The effects of changing the variance estimator, the method of choosing $b$, and the kernel can therefore be examined by comparing FMS-G-O with FMS-G-AltVar, FMS-G-AltVar with $T_{n_1,n_2}$-G, and $T_{n_1,n_2}$-G with $T_{n_1,n_2}$-MG or $T_{n_1,n_2}$-NM, respectively.
The smoothing parameter $b$ for FMS-G-O and FMS-G-AltVar is determined by adjusting the value chosen by a cross-validation criterion; see p. 657 of FMS for details. On the other hand, the values of $b$ for $T_{n_1,n_2}$-G, $T_{n_1,n_2}$-MG and $T_{n_1,n_2}$-NM are selected by the power-optimality criterion in the previous section. Implementation details are as follows: (i) all critical values in $\hat{\pi}_M(b_{k_1})$ are set at $z_{0.05} = 1.645$; (ii) the shrinking rate of $b$ is set at $q = 4/9$ because of the $N^{1/2}$-consistency of least-squares estimates and Theorem 3; (iii) three different values are considered for $\delta$, namely, $\delta \in \{0.3, 0.5, 0.7\}$; and (iv) the interval for $b_{k_1}$ is set equal to $H_{k_1} = [0.01, 0.64]$. The sample size is $N \in \{50, 100, 200\}$, and 1000 replications are drawn for each combination of the sample size $N$ and the distribution of $u$.
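As an illustration of this design (our sketch, not the authors' code), one Monte Carlo replication can be generated as follows; only three of the eight disturbance laws from Table 1 are included, and the function name is hypothetical.

```python
import numpy as np

def simulate_residuals(n, dist, rng):
    """One Monte Carlo draw: generate y = 1 + x + u and return OLS residuals.

    `dist` selects the disturbance law; only a subset of the designs
    in Table 1 is sketched here.
    """
    x = rng.standard_normal(n)                      # regressor ~ N(0,1)
    if dist == "S1":                                # standard normal
        u = rng.standard_normal(n)
    elif dist == "S3":                              # standard Laplace
        u = rng.laplace(0.0, 1.0, size=n)
    elif dist == "A2":                              # chi^2_3 - 3, mean zero
        u = rng.chisquare(3, size=n) - 3.0
    else:
        raise ValueError(dist)
    y = 1.0 + 1.0 * x + u                           # beta0 = beta1 = 1
    X = np.column_stack([np.ones(n), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta_hat                         # least-squares residuals
```

The SSST is then applied to these residuals in each replication, with rejection frequencies tabulated over 1000 draws.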

5.2. Simulation Results

Table 2 presents finite-sample rejection frequencies of each test statistic against nominal 5% and 10% levels across 1000 Monte Carlo samples. Critical values are simply based on the first-order normal limit, i.e., 1.645 and 1.280 correspond to the 5% and 10% levels, respectively.
Panel (A) reports size properties. At first glance, we can see that the results for FMS-G-O are close to those reported in Table 3 of FMS: it tends to over-reject the null slightly relative to the nominal size. Comparing FMS-G-O with FMS-G-AltVar reveals that replacing the variance formula is likely to decrease the rejection frequencies. Changing the choice method of $b$ further reduces the rejection frequencies, and $T_{n_1,n_2}$-G tends to result in mild under-rejection of the null. The effects of alternative kernel choices are mixed. While $T_{n_1,n_2}$-G and $T_{n_1,n_2}$-MG have similar size properties, $T_{n_1,n_2}$-NM looks more conservative in the sense that its rejection frequencies are slightly smaller. The impact of varying $\delta$ is minor at best. A concern is that all test statistics exhibit size distortions for S4. However, that distribution is platykurtic and has sharp boundaries at $\pm 1$; a platykurtic distribution is the exception rather than the rule in economics and finance, and a distribution with a compact support violates Assumption 1. In sum, all test statistics exhibit good size properties, even though their convergence rates are nonparametric, effective sample sizes are (roughly) half of the entire sample size $N$, and no size-correction devices such as bootstrapping are used.
Panel (B) refers to power properties. We can immediately see that the rejection frequencies of each test statistic approach one as the sample size $N$ grows, which confirms consistency of the SSST. There is substantial improvement in power as the sample size increases from $N = 50$ to 100, and most rejection frequencies are nearly one for $N$ as small as 200. On closer inspection, it is hard to judge whether changing the variance formula from FMS-G-O to FMS-G-AltVar affects power properties favorably or adversely. However, once the smoothing parameter value is chosen via the power-optimality criterion, power properties improve in general. Power properties of $T_{n_1,n_2}$-G and $T_{n_1,n_2}$-MG again look alike, whereas $T_{n_1,n_2}$-NM appears to be more powerful than these two. Because power tends to decrease with $\delta$, it could be safe to choose $\delta = 0.3$ from the viewpoint of power maximization. Indeed, for $N = 200$ and $\delta = 0.3$, each of $T_{n_1,n_2}$-G, $T_{n_1,n_2}$-MG and $T_{n_1,n_2}$-NM exhibits better power properties than FMS-G-O and FMS-G-AltVar.
For convenience, Panel (B) also presents size-adjusted powers, where the best-case scenario (i.e., $\delta = 0.3$) is considered for $T_{n_1,n_2}$-G, $T_{n_1,n_2}$-MG and $T_{n_1,n_2}$-NM. These three test statistics again outperform FMS's original statistics in terms of size-adjusted power, and $T_{n_1,n_2}$-NM appears to have the best power properties among the three. All in all, the Monte Carlo results indicate superior size and power properties of the SSST with the GG kernels plugged in.

6. Conclusions

The SSST developed by FMS is built on the idea of gauging the closeness between the right and left sides of the axis of symmetry of an unknown pdf. To implement the test, we split the entire sample into two sub-samples and estimate both sides of the pdf nonparametrically using asymmetric kernels with support on $[0, \infty)$. This paper has improved the SSST by combining it with the newly proposed GG kernels. The test statistic can be interpreted as a standardized version of a degenerate U-statistic. We deliver convergence properties of the test statistic and provide the asymptotic variance formulae for the cases of two sub-samples with equal and unequal sample sizes separately. It is demonstrated that the SSST smoothed by the GG kernels has a normal limit under the null of symmetry and is consistent under the alternative. As a part of the implementation method, we also propose to select the smoothing parameter via a power-optimality criterion. Monte Carlo simulations indicate that the GG kernel-smoothed SSST with the power-maximized smoothing parameter value plugged in enjoys superior finite-sample properties. It should be stressed that the good performance of the SSST is grounded on the first-order normal limit and a small number of observations, despite its nonparametric convergence rate and sample-splitting procedure.

Appendix A

Appendix A.1. Proof of Lemma 1

Because the proof for the MG kernel is basically the same as those for Lemmata 1(e) and 2 of FMS, we prove the case of the NM kernel. Among all statements, we concentrate on demonstrating that
$$E\left[K_{X_2}(X_1)\, K_{Y_2}(X_1)\right] \to E\left[f(X)\, g(X)\right], \tag{A1}$$
and
$$E\left[K_{X_2}(Y_1)\, K_{Y_1}(X_2)\right] \sim b^{-1/2}\, V_I^2\, E\left[X^{-1/2} g(X)\right]. \tag{A2}$$
All the remaining statements can be shown in the same manner. To approximate the gamma function, we frequently refer to the following well-known formulae:
  • Stirling's formula ("SF"):
    $$\Gamma(z+1) = \sqrt{2\pi}\, z^{z+1/2} e^{-z}\left\{1 + \frac{1}{12z} + \frac{1}{288 z^2} + O\left(z^{-3}\right)\right\} \quad \text{as } z \to \infty.$$
  • Legendre's duplication formula ("LDF"):
    $$\Gamma(z)\, \Gamma\left(z + \frac{1}{2}\right) = \frac{\sqrt{\pi}}{2^{2z-1}}\, \Gamma(2z) \quad \text{for } z > 0.$$
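Both formulae are easy to confirm numerically with Python's `math.gamma`; the helper names below are ours, and the tolerance for SF reflects its $O(z^{-3})$ truncation error.

```python
import math

def stirling(z):
    """Stirling series for Gamma(z+1), truncated after the z^-2 term (SF)."""
    return (math.sqrt(2 * math.pi) * z ** (z + 0.5) * math.exp(-z)
            * (1 + 1 / (12 * z) + 1 / (288 * z ** 2)))

def duplication_rhs(z):
    """Right-hand side of Legendre's duplication formula (LDF)."""
    return math.sqrt(math.pi) / 2 ** (2 * z - 1) * math.gamma(2 * z)

# SF: the relative error of the truncated series shrinks like z^-3
assert abs(stirling(20.0) / math.gamma(21.0) - 1) < 1e-6
# LDF: an exact identity for z > 0, up to floating-point rounding
assert abs(math.gamma(3.7) * math.gamma(4.2) - duplication_rhs(3.7)) < 1e-8
```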
In addition, proofs of the above statements require the following lemma. Its proof is virtually the same as those for Lemmata A.1 and A.2 of Fernandes and Monteiro [46], and thus it is omitted.
Lemma A1. 
For a constant $D > 0$ and two numbers $x, y > 0$,
$$\exp\left\{-\frac{(y-x)^2}{Dx}\right\} \leq \left(\frac{x}{y}\right)^{(y-x)/D} \leq \exp\left\{-\frac{(y-x)^2}{Dx} + \frac{(y-x)^3}{2Dx^2}\right\} \tag{A3}$$
if $x \leq y$, and
$$\exp\left\{-\frac{(y-x)^2}{Dx} + \frac{(y-x)^3}{2Dy^2}\right\} \leq \left(\frac{x}{y}\right)^{(y-x)/D} \leq \exp\left\{-\frac{(y-x)^2}{Dx}\right\} \tag{A4}$$
if $y < x$.
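As a quick numerical sanity check (ours, not part of the original argument), the $x \leq y$ branch of Lemma A1 can be verified on a small grid:

```python
import math

def middle(x, y, D):
    """(x/y)^((y-x)/D), the ratio bounded in Lemma A1."""
    return (x / y) ** ((y - x) / D)

def lower_bound(x, y, D):
    return math.exp(-((y - x) ** 2) / (D * x))

def upper_bound(x, y, D):
    return math.exp(-((y - x) ** 2) / (D * x) + (y - x) ** 3 / (2 * D * x ** 2))

# spot-check the x <= y branch on a grid of (x, y, D) values
for x in (0.5, 1.0, 2.0):
    for y in (x, x + 0.3, x + 1.0):
        for D in (0.5, 2.0):
            m = middle(x, y, D)
            assert lower_bound(x, y, D) <= m <= upper_bound(x, y, D)
```

For instance, at $(x, y, D) = (1, 2, 0.5)$ the middle quantity equals $0.5^2 = 0.25$, squeezed between $e^{-2} \approx 0.135$ and $e^{-1} \approx 0.368$.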
Proof of (A1). 
We apply the trimming argument as on p. 476 of Chen [30]. For some $\epsilon \in (0, 1/2)$,
$$E\left[K_{X_2}(X_1)\, K_{Y_2}(X_1)\right] = \int_{b^{1-\epsilon}}^{\infty}\int_{b^{1-\epsilon}}^{\infty}\left\{\int_0^{\infty} L_b(x_1; x, y)\, f(x_1)\, dx_1\right\} g(y)\, dy\, f(x)\, dx + O\left(b^{1-\epsilon}\right),$$
where $L_b(u; x, y) := K_{NM}(u; x, b)\, K_{NM}(u; y, b)$ for interior $x, y$. Then, the proof takes a multi-step approach including the following steps:
Step 1: approximating $J(x, y) := \int_0^{\infty} L_b(u; x, y)\, f(u)\, du$.
Step 2: approximating $J := \int_{b^{1-\epsilon}}^{\infty}\int_{b^{1-\epsilon}}^{\infty} J(x, y)\, g(y)\, dy\, f(x)\, dx$.
Step 1:  Define
$$P_z(b) := \frac{z\, \Gamma\left(\frac{z}{2b}\right)}{\Gamma\left(\frac{z}{2b} + \frac{1}{2}\right)} \tag{A5}$$
for $z = x, y, x+y$. Then,
$$L_b(u; x, y) = \frac{2\left\{\frac{P_x(b) P_y(b)}{[P_x^2(b) + P_y^2(b)]^{1/2}}\right\}^{\frac{x+y}{b}-1} \Gamma\left(\frac{x+y}{2b} - \frac{1}{2}\right)}{P_x^{x/b}(b)\, \Gamma\left(\frac{x}{2b}\right)\, P_y^{y/b}(b)\, \Gamma\left(\frac{y}{2b}\right)} \times \frac{2\, u^{\frac{x+y}{b}-2} \exp\left\{-u^2 \Big/ \left(\frac{P_x(b) P_y(b)}{[P_x^2(b) + P_y^2(b)]^{1/2}}\right)^2\right\}}{\left\{\frac{P_x(b) P_y(b)}{[P_x^2(b) + P_y^2(b)]^{1/2}}\right\}^{\frac{x+y}{b}-1} \Gamma\left(\frac{x+y}{2b} - \frac{1}{2}\right)}, \tag{A6}$$
where the first term is denoted as $B_b(x, y)$, and the second term can be viewed as the pdf of $GG\left(\frac{x+y}{b} - 1, \frac{P_x(b) P_y(b)}{[P_x^2(b) + P_y^2(b)]^{1/2}}, 2\right)$. Moreover, $B_b(x, y)$ can be further rewritten as
$$B_1(b)\, B_2(b)\, B_3(b) := \frac{2\, \Gamma\left(\frac{x+y}{2b} - \frac{1}{2}\right)}{\Gamma\left(\frac{x}{2b}\right)\Gamma\left(\frac{y}{2b}\right)} \cdot \frac{[P_x^2(b) + P_y^2(b)]^{1/2}}{P_x(b)\, P_y(b)} \cdot \frac{P_x^{y/b}(b)\, P_y^{x/b}(b)}{[P_x^2(b) + P_y^2(b)]^{\frac{x+y}{2b}}}, \tag{A7}$$
and an approximation to each of $B_1(b)$, $B_2(b)$ and $B_3(b)$ is provided separately.
By LDF, $B_1(b)$ becomes
$$B_1(b) = \frac{b\sqrt{\pi}}{2^{\frac{x+y}{b}-3}\, (x+y-b)} \cdot \frac{\Gamma\left(\frac{x+y}{b}\right)}{\Gamma\left(\frac{x}{2b}\right)\Gamma\left(\frac{y}{2b}\right)\Gamma\left(\frac{x+y}{2b}\right)}.$$
Then, by SF, an approximation to $B_1(b)$ is given by
$$B_1(b) = \sqrt{\frac{2}{\pi}}\, \frac{\sqrt{xy}}{x+y}\left(\frac{x+y}{x}\right)^{\frac{x}{2b}}\left(\frac{x+y}{y}\right)^{\frac{y}{2b}}\{1 + o(1)\}. \tag{A8}$$
Next, it follows from LDF and SF that
$$P_z(b) = \frac{2^{z/b - 1}}{\sqrt{\pi}} \cdot \frac{z\, \Gamma^2\left(\frac{z}{2b}\right)}{\Gamma\left(\frac{z}{b}\right)} = (2bz)^{1/2}\left\{1 + \frac{b}{4z} + O\left(b^2\right)\right\}. \tag{A9}$$
Hence,
$$B_2^2(b) = \frac{1}{P_x^2(b)} + \frac{1}{P_y^2(b)} = b^{-1}\, \frac{x+y}{2xy}\, \{1 + o(1)\},$$
and thus
$$B_2(b) = b^{-1/2}\sqrt{\frac{x+y}{2xy}}\, \{1 + o(1)\}. \tag{A10}$$
Furthermore, (A9) also implies that
$$P_x^{y/b}(b) = (2bx)^{\frac{y}{2b}} \exp\left(\frac{y}{4x}\right)\{1 + o(1)\}; \quad P_y^{x/b}(b) = (2by)^{\frac{x}{2b}} \exp\left(\frac{x}{4y}\right)\{1 + o(1)\}; \quad \text{and} \quad \left[P_x^2(b) + P_y^2(b)\right]^{\frac{x+y}{2b}} = \left\{2b(x+y)\right\}^{\frac{x+y}{2b}} \exp\left(\frac{1}{2}\right)\{1 + o(1)\}.$$
Then,
$$B_3(b) = x^{\frac{y}{2b}}\, y^{\frac{x}{2b}}\, (x+y)^{-\frac{x+y}{2b}} \exp\left\{\frac{1}{4}\left(\frac{y}{x} - 1\right)\right\} \exp\left\{\frac{1}{4}\left(\frac{x}{y} - 1\right)\right\}\{1 + o(1)\}. \tag{A11}$$
Substituting (A8), (A10) and (A11) into (A7) finally yields
$$B_b(x, y) := b^{-1/2}\, \tilde{B}_b(x, y)\left(\frac{x}{y}\right)^{\frac{y-x}{2b}}\{1 + o(1)\},$$
where
$$\tilde{B}_b(x, y) = \frac{1}{\sqrt{\pi(x+y)}} \exp\left\{\frac{1}{4}\left(\frac{y}{x} - 1\right)\right\} \exp\left\{\frac{1}{4}\left(\frac{x}{y} - 1\right)\right\}.$$
Then, for a random variable $\zeta_x \overset{d}{=} GG\left(\frac{x+y}{b} - 1, \frac{P_x(b) P_y(b)}{[P_x^2(b) + P_y^2(b)]^{1/2}}, 2\right)$,
$$J(x, y) = \int_0^{\infty} L_b(u; x, y)\, f(u)\, du = b^{-1/2}\, \tilde{B}_b(x, y)\left(\frac{x}{y}\right)^{\frac{y-x}{2b}} E\left[f(\zeta_x)\right]\{1 + o(1)\}.$$
By the property of GG random variables, (A5), (A9), and (A10),
$$E[\zeta_x] = B_2^{-1}(b)\, \frac{\Gamma\left(\frac{x+y}{2b}\right)}{\Gamma\left(\frac{x+y}{2b} - \frac{1}{2}\right)} = \sqrt{xy}\, \{1 + O(b)\}.$$
In the end, a first-order Taylor expansion of $f(\zeta_x)$ around $\zeta_x = \sqrt{xy}$ gives
$$J(x, y) = b^{-1/2}\, \tilde{B}_b(x, y)\, f\left(\sqrt{xy}\right)\left(\frac{x}{y}\right)^{\frac{y-x}{2b}}\{1 + o(1)\},$$
which completes Step 1.
Step 2:  For some $t \in (0, 1)$, we split the interval for $y$ into four subintervals as follows:
$$J = \int_{b^{1-\epsilon}}^{\infty}\left\{\int_{b^{1-\epsilon}}^{(1-t)x} + \int_{(1-t)x}^{x} + \int_{x}^{(1+t)x} + \int_{(1+t)x}^{\infty}\right\} J(x, y)\, g(y)\, dy\, f(x)\, dx =: J_1 + J_2 + J_3 + J_4 \ \text{(say)}.$$
Also denote $h(x, y) := \tilde{B}_b(x, y)\, f(\sqrt{xy})\, g(y)$. Then, by (A4) and the change of variable $v := (y-x)/\sqrt{2bx}$,
$$J_1 \leq \int_{b^{1-\epsilon}}^{\infty}\left[\int_0^{(1-t)x} b^{-1/2}\, h(x, y) \exp\left\{-\frac{(y-x)^2}{2bx}\right\}\{1 + o(1)\}\, dy\right] f(x)\, dx = \int_{b^{1-\epsilon}}^{\infty} \sqrt{2x}\left[\int_{-\sqrt{x/(2b)}}^{-t\sqrt{x/(2b)}} h\left(x, x + v\sqrt{2bx}\right) e^{-v^2}\{1 + o(1)\}\, dv\right] f(x)\, dx \to 0$$
as $b \to 0$.
Next, it follows from (A4) that
$$J_2 \geq \int_{b^{1-\epsilon}}^{\infty}\left[\int_{(1-t)x}^{x} b^{-1/2}\, h(x, y) \exp\left\{-\frac{(y-x)^2}{2bx} + \frac{(y-x)^3}{4by^2}\right\}\{1 + o(1)\}\, dy\right] f(x)\, dx \geq \int_{b^{1-\epsilon}}^{\infty}\left[\int_{(1-t)x}^{x} b^{-1/2}\, h(x, y) \exp\left\{-\frac{(y-x)^2}{2bx}(1 + \tau_1)\right\}\{1 + o(1)\}\, dy\right] f(x)\, dx,$$
where $\tau_1 := t(2-t)/\{2(1-t)^2\}$. By the change of variable $w := (y-x)\sqrt{(1+\tau_1)/(2bx)}$, the right-hand side becomes
$$\int_{b^{1-\epsilon}}^{\infty} \sqrt{\frac{2x}{1+\tau_1}}\left[\int_{-t\sqrt{(1+\tau_1)x/(2b)}}^{0} h\left(x, x + w\sqrt{\frac{2bx}{1+\tau_1}}\right) e^{-w^2}\{1 + o(1)\}\, dw\right] f(x)\, dx.$$
Because $\int_{-\infty}^{0} e^{-v^2}\, dv = \sqrt{\pi}/2$ and $h(x, x) = f(x) g(x)/\sqrt{2\pi x}$, we have
$$\liminf_{b \to 0} J_2 \geq \frac{1}{2}\sqrt{\frac{1}{1+\tau_1}} \int_0^{\infty} f(x) g(x)\, dF(x) \to \frac{1}{2} E\left[f(X) g(X)\right]$$
by letting $t$ shrink toward zero. On the other hand, again by (A4) and the change of variable $v = (y-x)/\sqrt{2bx}$,
$$J_2 \leq \int_{b^{1-\epsilon}}^{\infty}\left[\int_{(1-t)x}^{x} b^{-1/2}\, h(x, y) \exp\left\{-\frac{(y-x)^2}{2bx}\right\}\{1 + o(1)\}\, dy\right] f(x)\, dx = \int_{b^{1-\epsilon}}^{\infty} \sqrt{2x}\left[\int_{-t\sqrt{x/(2b)}}^{0} h\left(x, x + v\sqrt{2bx}\right) e^{-v^2}\{1 + o(1)\}\, dv\right] f(x)\, dx,$$
so that
$$\limsup_{b \to 0} J_2 \leq \frac{1}{2} \int_0^{\infty} f(x) g(x)\, dF(x) = \frac{1}{2} E\left[f(X) g(X)\right].$$
Hence, we can conclude that $J_2 \to (1/2) E[f(X) g(X)]$.
It can also be demonstrated that $J_3 \to (1/2) E[f(X) g(X)]$ and $J_4 \to 0$ with the assistance of (A3). Therefore, $J \to E[f(X) g(X)]$, and thus (A1) is established.   □
Proof of (A2).  Again, for some $\epsilon \in (0, 1/2)$,
$$E\left[K_{X_2}(Y_1)\, K_{Y_1}(X_2)\right] = \int_{b^{1-\epsilon}}^{\infty}\int_{b^{1-\epsilon}}^{\infty} \Lambda_b(x, y)\, g(y)\, dy\, f(x)\, dx + O\left(b^{-\epsilon}\right),$$
where $\Lambda_b(x, y) := K_{NM}(y; x, b)\, K_{NM}(x; y, b)$ for interior $x, y$ and the order of the remainder term is $O(b^{-\epsilon}) = o(b^{-1/2})$ by construction. Observe that
$$\Lambda_b(x, y) = \frac{4\, y^{x/b - 1}\, x^{y/b - 1}}{P_x^{x/b}(b)\, P_y^{y/b}(b)\, \Gamma\left(\frac{x}{2b}\right)\Gamma\left(\frac{y}{2b}\right)} \exp\left\{-\frac{y^2}{P_x^2(b)}\right\} \exp\left\{-\frac{x^2}{P_y^2(b)}\right\}. \tag{A12}$$
It follows from (A9) that
$$P_z^{z/b}(b) = (2bz)^{\frac{z}{2b}} \exp\left(\frac{1}{4}\right)\{1 + o(1)\} \tag{A13}$$
for $z = x, y$. Similarly,
$$\exp\left\{-\frac{y^2}{P_x^2(b)}\right\} = \exp\left\{-\frac{y^2}{2bx}\right\} \exp\left\{\frac{1}{4}\left(\frac{y}{x}\right)^2\right\}\{1 + o(1)\}, \tag{A14}$$
and
$$\exp\left\{-\frac{x^2}{P_y^2(b)}\right\} = \exp\left\{-\frac{x^2}{2by}\right\} \exp\left\{\frac{1}{4}\left(\frac{x}{y}\right)^2\right\}\{1 + o(1)\}. \tag{A15}$$
Substituting (A13)–(A15) into (A12) and using SF, we have
$$\Lambda_b(x, y) = \frac{b^{-1}}{\pi\sqrt{xy}}\left(\frac{x}{y}\right)^{\frac{y-x}{b}} \exp\left\{-\frac{(y+x)(y-x)^2}{2bxy}\right\} \times \exp\left[\frac{1}{4}\left\{\left(\frac{y}{x}\right)^2 - 1\right\}\right] \exp\left[\frac{1}{4}\left\{\left(\frac{x}{y}\right)^2 - 1\right\}\right]\{1 + o(1)\}.$$
As before, for some $t \in (0, 1)$, consider
$$\Lambda = \int_{b^{1-\epsilon}}^{\infty}\left\{\int_{b^{1-\epsilon}}^{(1-t)x} + \int_{(1-t)x}^{x} + \int_{x}^{(1+t)x} + \int_{(1+t)x}^{\infty}\right\} \Lambda_b(x, y)\, g(y)\, dy\, f(x)\, dx =: \Lambda_1 + \Lambda_2 + \Lambda_3 + \Lambda_4 \ \text{(say)}.$$
It follows from (A4) that
$$\Lambda_1 \leq \int_{b^{1-\epsilon}}^{\infty}\left[\int_0^{(1-t)x} \frac{b^{-1}}{\pi\sqrt{xy}} \exp\left\{-\frac{(y-x)^2}{bx} - \frac{(y+x)(y-x)^2}{2bxy}\right\} \times \exp\left[\frac{1}{4}\left\{\left(\frac{y}{x}\right)^2 - 1\right\}\right] \exp\left[\frac{1}{4}\left\{\left(\frac{x}{y}\right)^2 - 1\right\}\right]\{1 + o(1)\}\, g(y)\, dy\right] f(x)\, dx \leq \int_{b^{1-\epsilon}}^{\infty}\left[\int_0^{(1-t)x} \frac{b^{-1}}{\pi\sqrt{xy}} \exp\left\{-\tau_2\, \frac{(y-x)^2}{2bx}\right\} \times \exp\left[\frac{1}{4}\left\{\left(\frac{y}{x}\right)^2 - 1\right\}\right] \exp\left[\frac{1}{4}\left\{\left(\frac{x}{y}\right)^2 - 1\right\}\right]\{1 + o(1)\}\, g(y)\, dy\right] f(x)\, dx,$$
where $\tau_2 := (4-3t)/(1-t)$. Then, by the change of variable $\eta := (y-x)\sqrt{\tau_2/(2bx)}$,
$$b^{1/2}\Lambda_1 \leq \int_{b^{1-\epsilon}}^{\infty} \frac{1}{\pi}\sqrt{\frac{2}{\tau_2}}\left[\int_{-\sqrt{\tau_2 x/(2b)}}^{-t\sqrt{\tau_2 x/(2b)}} \frac{e^{-\eta^2}}{\sqrt{x + \eta\sqrt{2bx/\tau_2}}} \exp\left[\frac{1}{4}\left\{\left(\frac{x + \eta\sqrt{2bx/\tau_2}}{x}\right)^2 - 1\right\}\right] \exp\left[\frac{1}{4}\left\{\left(\frac{x}{x + \eta\sqrt{2bx/\tau_2}}\right)^2 - 1\right\}\right]\{1 + o(1)\}\, g\left(x + \eta\sqrt{\frac{2bx}{\tau_2}}\right) d\eta\right] f(x)\, dx \to 0,$$
or $\Lambda_1 = o(b^{-1/2})$.
Next, (A4) implies that
$$\Lambda_2 \geq \int_{b^{1-\epsilon}}^{\infty}\left[\int_{(1-t)x}^{x} \frac{b^{-1}}{\pi\sqrt{xy}} \exp\left\{-\frac{(y-x)^2}{bx} + \frac{(y-x)^3}{2by^2} - \frac{(y+x)(y-x)^2}{2bxy}\right\} \times \exp\left[\frac{1}{4}\left\{\left(\frac{y}{x}\right)^2 - 1\right\}\right] \exp\left[\frac{1}{4}\left\{\left(\frac{x}{y}\right)^2 - 1\right\}\right]\{1 + o(1)\}\, g(y)\, dy\right] f(x)\, dx \geq \int_{b^{1-\epsilon}}^{\infty}\left[\int_{(1-t)x}^{x} \frac{b^{-1}}{\pi\sqrt{xy}} \exp\left\{-\frac{(y-x)^2}{2bx}(3 + \tau_3)\right\} \times \exp\left[\frac{1}{4}\left\{\left(\frac{y}{x}\right)^2 - 1\right\}\right] \exp\left[\frac{1}{4}\left\{\left(\frac{x}{y}\right)^2 - 1\right\}\right]\{1 + o(1)\}\, g(y)\, dy\right] f(x)\, dx,$$
where $\tau_3 := (1-t)^{-2}$. By the change of variable $\mu := (y-x)\sqrt{(3+\tau_3)/(2bx)}$, the right-hand side becomes
$$\int_{b^{1-\epsilon}}^{\infty} \frac{b^{-1/2}}{\pi}\sqrt{\frac{2}{3+\tau_3}}\left[\int_{-t\sqrt{(3+\tau_3)x/(2b)}}^{0} \frac{e^{-\mu^2}}{\sqrt{x + \mu\sqrt{2bx/(3+\tau_3)}}} \exp\left[\frac{1}{4}\left\{\left(\frac{x + \mu\sqrt{2bx/(3+\tau_3)}}{x}\right)^2 - 1\right\}\right] \exp\left[\frac{1}{4}\left\{\left(\frac{x}{x + \mu\sqrt{2bx/(3+\tau_3)}}\right)^2 - 1\right\}\right]\{1 + o(1)\}\, g\left(x + \mu\sqrt{\frac{2bx}{3+\tau_3}}\right) d\mu\right] f(x)\, dx,$$
so that
$$\liminf_{b \to 0} b^{1/2}\Lambda_2 \geq \frac{1}{2\sqrt{\pi}}\sqrt{\frac{2}{3+\tau_3}} \int_0^{\infty} x^{-1/2} g(x)\, dF(x) \to \frac{1}{2} V_{I,NM}^2\, E\left[X^{-1/2} g(X)\right]$$
by letting $t$ shrink toward zero, where $V_{I,NM}^2 := 1/\sqrt{2\pi}$. Notice that we may safely assume that $E[X^{-1/2} g(X)] < \infty$: Assumption 2 ensures that $f$ and $g$ are bounded, and thus it must be the case that $x^{-1/2} f(x) g(x) \leq c x^{-1/2}$ in the vicinity of the origin. On the other hand, (A4) also yields
$$\Lambda_2 \leq \int_{b^{1-\epsilon}}^{\infty}\left[\int_{(1-t)x}^{x} \frac{b^{-1}}{\pi\sqrt{xy}} \exp\left\{-\frac{(y-x)^2}{bx} - \frac{(y+x)(y-x)^2}{2bxy}\right\} \times \exp\left[\frac{1}{4}\left\{\left(\frac{y}{x}\right)^2 - 1\right\}\right] \exp\left[\frac{1}{4}\left\{\left(\frac{x}{y}\right)^2 - 1\right\}\right]\{1 + o(1)\}\, g(y)\, dy\right] f(x)\, dx \leq \int_{b^{1-\epsilon}}^{\infty}\left[\int_{(1-t)x}^{x} \frac{b^{-1}}{\pi\sqrt{xy}} \exp\left\{-\frac{2(y-x)^2}{bx}\right\} \times \exp\left[\frac{1}{4}\left\{\left(\frac{y}{x}\right)^2 - 1\right\}\right] \exp\left[\frac{1}{4}\left\{\left(\frac{x}{y}\right)^2 - 1\right\}\right]\{1 + o(1)\}\, g(y)\, dy\right] f(x)\, dx.$$
By the change of variable $\omega := (y-x)\sqrt{2/(bx)}$,
$$\Lambda_2 \leq \int_{b^{1-\epsilon}}^{\infty} \frac{b^{-1/2}}{\sqrt{2}\,\pi}\left[\int_{-t\sqrt{2x/b}}^{0} \frac{e^{-\omega^2}}{\sqrt{x + \omega\sqrt{bx/2}}} \exp\left[\frac{1}{4}\left\{\left(\frac{x + \omega\sqrt{bx/2}}{x}\right)^2 - 1\right\}\right] \exp\left[\frac{1}{4}\left\{\left(\frac{x}{x + \omega\sqrt{bx/2}}\right)^2 - 1\right\}\right]\{1 + o(1)\}\, g\left(x + \omega\sqrt{\frac{bx}{2}}\right) d\omega\right] f(x)\, dx,$$
and thus
$$\limsup_{b \to 0} b^{1/2}\Lambda_2 \leq \frac{1}{2\sqrt{2\pi}} \int_0^{\infty} x^{-1/2} g(x)\, dF(x) = \frac{1}{2} V_{I,NM}^2\, E\left[X^{-1/2} g(X)\right].$$
Hence, we can conclude that $\Lambda_2 \sim (b^{-1/2}/2)\, V_{I,NM}^2\, E[X^{-1/2} g(X)]$.
It also follows from (A3) that $\Lambda_3 \sim (b^{-1/2}/2)\, V_{I,NM}^2\, E[X^{-1/2} g(X)]$ and $\Lambda_4 = o(b^{-1/2})$. Therefore, $\Lambda \sim b^{-1/2}\, V_{I,NM}^2\, E[X^{-1/2} g(X)]$, and thus (A2) is also established.   □

Appendix A.2. Proof of Theorem 1

Because (ii) is obvious given that (i) is true, we concentrate only on (i). The proof strategy for (i) largely follows the one for Theorem 1.1 of Fernandes and Monteiro [46]. The proof of (i) also requires the three lemmata below.
Lemma A2. 
Let $(X_1, X_2)$ and $(Y_1, Y_2)$ be two independent copies of $X$ and $Y$, respectively. Then, under Assumptions 1–3, the following hold:
(a) 
$E[K_{X_2}^2(X_1)] \sim b^{-1/2} V_I^2 E[X^{-1/2} f(X)]$; $E[K_{X_2}^2(Y_1)] \sim b^{-1/2} V_I^2 E[X^{-1/2} g(X)]$;
$E[K_{Y_2}^2(X_1)] \sim b^{-1/2} V_I^2 E[Y^{-1/2} f(Y)]$; and $E[K_{Y_2}^2(Y_1)] \sim b^{-1/2} V_I^2 E[Y^{-1/2} g(Y)]$, where $V_I^2$ is given in Condition 5 of Definition 1.
(b) 
$E[K_{X_2}(X_1)\, K_{Y_2}(Y_1)] \to E[f(X)]\, E[g(Y)]$; $E[K_{X_2}(Y_1)\, K_{Y_2}(X_1)] \to E[g(X)]\, E[f(Y)]$;
$E[K_{X_2}(Y_1)\, K_{X_1}(Y_2)] \to E^2[g(X)]$; and $E[K_{Y_2}(X_1)\, K_{Y_1}(X_2)] \to E^2[f(Y)]$.
(c) 
$E[K_{X_2}(X_1)\, K_{X_2}(Y_1)] \to E[f(X)\, g(X)]$; and $E[K_{Y_2}(Y_1)\, K_{Y_2}(X_1)] \to E[g(Y)\, f(Y)]$.
(d) 
$E[K_{X_2}(X_1)\, K_{X_1}(Y_2)] \to E[f(X)\, g(X)]$; $E[K_{Y_1}(X_2)\, K_{X_2}(X_1)] \to E[f^2(Y)]$;
$E[K_{X_1}(Y_2)\, K_{Y_2}(Y_1)] \to E[g^2(X)]$; and $E[K_{Y_2}(Y_1)\, K_{Y_1}(X_2)] \to E[f(Y)\, g(Y)]$.
Lemma A3. 
If Assumptions 1–4 and $n_1 = n_2 = n$ hold, then
$$E\left[\Phi_n^2(Z_1, Z_2)\right] \sim \frac{4 V_I^2}{n^4 b^{1/2}}\, E\left[X^{-1/2}\{f(X) + g(X)\} + Y^{-1/2}\{f(Y) + g(Y)\}\right].$$
Lemma A4. 
If Assumptions 1–4 and $n_1 = n_2 = n$ hold, then $E[\Phi_n^{2k}(Z_1, Z_2)] < \infty$ and
$$\frac{E\left[\Upsilon_n^k(Z_1, Z_2)\right] + n^{1-k}\, E\left[\Phi_n^{2k}(Z_1, Z_2)\right]}{E^k\left[\Phi_n^2(Z_1, Z_2)\right]} \to 0$$
for some $k \in (1, 3/2)$, where $\Upsilon_n(x, y) := E\left[\Phi_n(Z_1, x)\, \Phi_n(y, Z_1)\right]$.

Appendix A.2.1. Proof of Lemma A2

The variance approximation in Theorem 1 of Hirukawa and Sakudo [32] and the trimming argument on p. 476 of Chen [30] yield (a). On the other hand, the bias approximation in Theorem 1 of Hirukawa and Sakudo [32] is applied to (b)–(d). As a consequence, (b) can be established by recognizing that $E[K_{X_2}(X_1)\, K_{Y_2}(Y_1)] = E[K_{X_2}(X_1)]\, E[K_{Y_2}(Y_1)]$, for instance. Moreover, (c) and (d) follow from the proofs for (d) and (f) in Lemma A1 of FMS. □

Appendix A.2.2. Proof of Lemma A3

Because $\phi_n(Z_1, Z_2) \overset{d}{=} \phi_n(Z_2, Z_1)$, we have
$$E\left[\Phi_n^2(Z_1, Z_2)\right] = \frac{2}{n^4}\left\{E\left[\phi_n^2(Z_1, Z_2)\right] + E\left[\phi_n(Z_1, Z_2)\, \phi_n(Z_2, Z_1)\right]\right\}.$$
With the assistance of Assumption 4 and Lemma A2, we can pick out the leading terms of $E[\phi_n^2(Z_1, Z_2)]$ and $E[\phi_n(Z_1, Z_2)\, \phi_n(Z_2, Z_1)]$ as:
$$E\left[\phi_n^2(Z_1, Z_2)\right] = E\left[K_{X_2}^2(X_1)\right] + E\left[K_{X_2}^2(Y_1)\right] + E\left[K_{Y_2}^2(X_1)\right] + E\left[K_{Y_2}^2(Y_1)\right] + O(1) = b^{-1/2} V_I^2\, E\left[X^{-1/2}\{f(X) + g(X)\} + Y^{-1/2}\{f(Y) + g(Y)\}\right] + o\left(b^{-1/2}\right); \ \text{and}$$
$$E\left[\phi_n(Z_1, Z_2)\, \phi_n(Z_2, Z_1)\right] = E\left[K_{X_2}(X_1)\, K_{X_1}(X_2)\right] + E\left[K_{X_2}(Y_1)\, K_{Y_1}(X_2)\right] + E\left[K_{Y_2}(X_1)\, K_{X_1}(Y_2)\right] + E\left[K_{Y_2}(Y_1)\, K_{Y_1}(Y_2)\right] + O(1) = b^{-1/2} V_I^2\, E\left[X^{-1/2}\{f(X) + g(X)\} + Y^{-1/2}\{f(Y) + g(Y)\}\right] + o\left(b^{-1/2}\right).$$
The result immediately follows. □

Appendix A.2.3. Proof of Lemma A4

It follows from Lemma A3 that
$$E^k\left[\Phi_n^2(Z_1, Z_2)\right] = O\left(n^{-4k} b^{-k/2}\right). \tag{A16}$$
Next, by Jensen’s and C r -inequalities,
$$E\left[\Upsilon_n^k(Z_1, Z_2)\right] = E_{Z_1, Z_2}\left[\left|E_{Z_3}\left\{\Phi_n(Z_3, Z_1)\, \Phi_n(Z_2, Z_3)\right\}\right|^k\right] \leq E_{Z_1, Z_2} E_{Z_3}\left[\left|\Phi_n(Z_3, Z_1)\, \Phi_n(Z_2, Z_3)\right|^k\right] \leq n^{-4k} E_{Z_1, Z_2} E_{Z_3}\left[\left|\phi_n(Z_3, Z_1)\phi_n(Z_2, Z_3) + \phi_n(Z_3, Z_1)\phi_n(Z_3, Z_2) + \phi_n(Z_1, Z_3)\phi_n(Z_2, Z_3) + \phi_n(Z_1, Z_3)\phi_n(Z_3, Z_2)\right|^k\right] \leq n^{-4k}\, 4^{k-1}\left\{E\left[\left|\phi_n(Z_3, Z_1)\phi_n(Z_2, Z_3)\right|^k\right] + E\left[\left|\phi_n(Z_3, Z_1)\phi_n(Z_3, Z_2)\right|^k\right] + E\left[\left|\phi_n(Z_1, Z_3)\phi_n(Z_2, Z_3)\right|^k\right] + E\left[\left|\phi_n(Z_1, Z_3)\phi_n(Z_3, Z_2)\right|^k\right]\right\} =: n^{-4k}\, 4^{k-1}\left(\Upsilon_1 + \Upsilon_2 + \Upsilon_3 + \Upsilon_4\right) \ \text{(say)}.$$
Furthermore, applying the $C_r$-inequality repeatedly yields
$$\Upsilon_1 \leq 2^{4(k-1)} \cdot 8\left\{E\left[K_{X_1}^k(X_3)\, K_{X_3}^k(X_2)\right] + E^2\left[K_{X_1}^k(X_3)\right]\right\}$$
under $H_0$. Essentially the same arguments as in the proofs of Lemmata 1 and A2 establish that $E[K_{X_1}^k(X_3)\, K_{X_3}^k(X_2)]$ is bounded by $c\, b^{1-k} \int_0^{\infty} x^{1-k} f^3(x)\, dx$. It follows from $k < 3/2$ that $x^{1-k} f^3(x) \leq c x^{-1/2}$ in the neighborhood of the origin, and thus $\int_0^{\infty} x^{1-k} f^3(x)\, dx < \infty$ holds. Hence, $E[K_{X_1}^k(X_3)\, K_{X_3}^k(X_2)] \leq O(b^{1-k})$. Similarly, $E^2[K_{X_1}^k(X_3)] \leq O\left(\left(b^{(1-k)/2}\right)^2\right) = O(b^{1-k})$, and thus $\Upsilon_1 \leq O(b^{1-k})$. It can also be shown that each of $\Upsilon_2$, $\Upsilon_3$ and $\Upsilon_4$ is bounded by $O(b^{1-k})$. As a result,
$$E\left[\Upsilon_n^k(Z_1, Z_2)\right] \leq O\left(n^{-4k} b^{1-k}\right). \tag{A17}$$
Using the $C_r$-inequality and $\phi_n(Z_1, Z_2) \overset{d}{=} \phi_n(Z_2, Z_1)$, we also have
$$E\left[\Phi_n^{2k}(Z_1, Z_2)\right] \leq n^{-4k} E\left[\left|\phi_n(Z_1, Z_2) + \phi_n(Z_2, Z_1)\right|^{2k}\right] \leq n^{-4k}\, 2^{2k-1}\left\{E\left[\left|\phi_n(Z_1, Z_2)\right|^{2k}\right] + E\left[\left|\phi_n(Z_2, Z_1)\right|^{2k}\right]\right\} = n^{-4k}\, 2^{2k}\, E\left[\left|\phi_n(Z_1, Z_2)\right|^{2k}\right].$$
Again, by the $C_r$-inequality,
$$E\left[\left|\phi_n(Z_1, Z_2)\right|^{2k}\right] \leq 4 \cdot 2^{2(2k-1)}\, E\left[K_{X_2}^{2k}(X_1)\right] \leq c\, b^{\frac{1-2k}{2}} \int_0^{\infty} x^{\frac{1-2k}{2}} f^2(x)\, dx,$$
where $x^{(1-2k)/2} f^2(x) \leq c x^{-1+\varepsilon}$ for some $\varepsilon \in (0, 1/2)$ as $x \to 0$, so that $\int_0^{\infty} x^{(1-2k)/2} f^2(x)\, dx < \infty$ is ensured.
Therefore,
$$E\left[\Phi_n^{2k}(Z_1, Z_2)\right] \leq O\left(n^{-4k} b^{\frac{1-2k}{2}}\right) = o(1), \tag{A18}$$
and thus $E[\Phi_n^{2k}(Z_1, Z_2)] < \infty$ is demonstrated.
In the end, by (A16)–(A18),
$$\frac{E\left[\Upsilon_n^k(Z_1, Z_2)\right]}{E^k\left[\Phi_n^2(Z_1, Z_2)\right]} = O\left(b^{1-k/2}\right) \to 0, \quad \text{and} \quad \frac{n^{1-k}\, E\left[\Phi_n^{2k}(Z_1, Z_2)\right]}{E^k\left[\Phi_n^2(Z_1, Z_2)\right]} = O\left(\left(n b^{1/2}\right)^{1-k}\right) \to 0,$$
as long as $1 < k < 3/2$. This completes the proof. □

Appendix A.2.4. Proof of Theorem 1

It follows from Lemma A4 that a martingale central limit theorem for a degenerate U-statistic (Theorem 4.7.3 of Koroljuk and Borovskich [41], to be precise) applies. Moreover, by Lemma A3, the asymptotic variance of the normal limit becomes
$$\sigma^2 = \lim_{n \to \infty} n^2 b^{1/2}\, \mathrm{Var}(I_n) = \lim_{n \to \infty} n^2 b^{1/2}\, \frac{n(n-1)}{2}\, E\left[\Phi_n^2(Z_1, Z_2)\right] = 2 V_I^2\, E\left[X^{-1/2}\{f(X) + g(X)\} + Y^{-1/2}\{f(Y) + g(Y)\}\right] = 8 V_I^2\, E\left[X^{-1/2} f(X)\right] \quad \text{under } H_0. \qquad \square$$

Appendix A.3. Proof of Proposition 1

The proof closely follows the one for Theorem 2.2 of Fan and Ullah [19]. Under $H_1$, $E[I_n] = \int_0^{\infty}\{f(u) - g(u)\}^2\, du + O(b) = I + O(b)$. Moreover, $\mathrm{Var}(I_n) = O(n^{-2} b^{-1/2})$ and $\hat{\sigma}^2 \overset{p}{\to} \sigma^2$, regardless of whether $H_0$ or $H_1$ is true. Therefore, $I_n = I + O(b) + O_p(n^{-1} b^{-1/4}) \overset{p}{\to} I > 0$, and thus $n b^{1/4} I_n/\hat{\sigma}$ is a divergent stochastic sequence with an expansion rate of $n b^{1/4}$. The result immediately follows. □

Appendix A.4. Proof of Theorem 3

For brevity, we focus only on the case of equal sample sizes in the two sub-samples. The proof largely follows the one for Proposition 5 of FMS. FMS consider the Taylor expansion
$$I_n(\hat{\xi}, \hat{\theta}) - I_n(\xi, \theta_0) = \Delta_1(\xi, \theta_0)(\hat{\xi} - \xi) + \Delta_2(\xi, \theta_0)(\hat{\theta} - \theta_0) + R_n,$$
where $\Delta_1(\xi, \theta_0)$ and $\Delta_2(\xi, \theta_0)$ are partial derivatives of $I_n$ with respect to the first and second arguments evaluated at $(\xi, \theta_0)$, respectively, and $R_n$ is the remainder term of a smaller order. The only difference between their proof and ours is that we derive the range of $(q, r)$ within which
$$n b^{1/4}\left\{\Delta_1(\xi, \theta_0)(\hat{\xi} - \xi) + \Delta_2(\xi, \theta_0)(\hat{\theta} - \theta_0)\right\} = o_p(1)$$
is the case. Because each of $\Delta_1$ and $\Delta_2$ is $O(b) + O_p(n^{-1} b^{-3/4})$, the left-hand side is bounded by
$$O\left(n^{1-r} b^{5/4}\right) + O_p\left(n^{-r} b^{-1/2}\right) = O\left(n^{1-r-5q/4}\right) + O_p\left(n^{-r+q/2}\right).$$
This becomes $o_p(1)$ if $(q, r)$ satisfy $r > -5q/4 + 1$, $r > q/2$ and $r \leq 1/2$. □

Acknowledgments

We would like to thank the editor Kerry Patterson, four anonymous referees, Yohei Yamamoto, and the participants of seminars at Hitotsubashi University and the Development Bank of Japan for their constructive comments and suggestions. We are also grateful to Marcelo Fernandes, Eduardo Mendes and Olivier Scaillet for providing us with the computer codes used for the Monte Carlo simulations in Fernandes, Mendes and Scaillet [29]. This research was supported, in part, by a grant from the Japan Society for the Promotion of Science (grant number 15K03405). The views expressed herein are those of the authors and do not necessarily reflect those of the Development Bank of Japan.

Author Contributions

The authors contributed equally to the paper as a whole.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. R.W. Bacon. “Rockets and feathers: The asymmetric speed of adjustment of UK retail gasoline prices to cost changes.” Energy Econ. 13 (1991): 211–218. [Google Scholar] [CrossRef]
  2. J.Y. Campbell, and L. Hentschel. “No news is good news: An asymmetric model of changing volatility in stock returns.” J. Financ. Econ. 31 (1992): 281–318. [Google Scholar] [CrossRef]
  3. R. Clarida, and M. Gertler. “How the Bundesbank conducts monetary policy.” In Reducing Inflation: Motivation and Strategy. Edited by C.D. Romer and D.H. Romer. Chicago, IL, USA: University of Chicago Press, 1997, pp. 363–412. [Google Scholar]
  4. G. Chamberlain. “A characterization of the distributions that imply mean-variance utility functions.” J. Econ. Theory 29 (1983): 185–201. [Google Scholar] [CrossRef]
  5. J. Owen, and R. Rabinovitch. “On the class of elliptical distributions and their applications to the theory of portfolio choice.” J. Finance 38 (1983): 745–752. [Google Scholar] [CrossRef]
  6. J.E. Ingersoll Jr. Theory of Financial Decision Making. Savage, MD, USA: Rowman & Littlefield, 1987. [Google Scholar]
  7. P.J. Bickel. “On adaptive estimation.” Ann. Stat. 10 (1982): 647–671. [Google Scholar] [CrossRef]
  8. W.K. Newey. “Adaptive estimation of regression models via moment restrictions.” J. Econom. 38 (1988): 301–339. [Google Scholar] [CrossRef]
  9. R.J. Carroll, and A.H. Welsh. “A note on asymmetry and robustness in linear regression.” Am. Stat. 42 (1988): 285–287. [Google Scholar]
  10. M.-J. Lee. “Mode regression.” J. Econom. 42 (1989): 337–349. [Google Scholar] [CrossRef]
  11. M.-J. Lee. “Quadratic mode regression.” J. Econom. 57 (1993): 1–19. [Google Scholar] [CrossRef]
  12. V. Zinde-Walsh. “Asymptotic theory for some high breakdown point estimators.” Econom. Theory 18 (2002): 1172–1196. [Google Scholar] [CrossRef]
  13. H.D. Bondell, and L.A. Stefanski. “Efficient robust regression via two-stage generalized empirical likelihood.” J. Am. Stat. Assoc. 108 (2013): 644–655. [Google Scholar] [CrossRef] [PubMed]
  14. M. Baldauf, and J.M.C. Santos Silva. “On the use of robust regression in econometrics.” Econ. Lett. 114 (2012): 124–127. [Google Scholar] [CrossRef]
  15. Y. Fan, and R. Gencay. “A consistent nonparametric test of symmetry in linear regression models.” J. Am. Stat. Assoc. 90 (1995): 551–557. [Google Scholar] [CrossRef]
  16. I.A. Ahmad, and Q. Li. “Testing symmetry of unknown density functions by kernel method.” J. Nonparametr. Stat. 7 (1997): 279–293. [Google Scholar] [CrossRef]
  17. J.X. Zheng. “Consistent specification testing for conditional symmetry.” Econom. Theory 14 (1998): 139–149. [Google Scholar] [CrossRef]
  18. C. Diks, and H. Tong. “A test for symmetries of multivariate probability distributions.” Biometrika 86 (1999): 605–614. [Google Scholar] [CrossRef]
  19. Y. Fan, and A. Ullah. “On goodness-of-fit tests for weakly dependent processes using kernel method.” J. Nonparametr. Stat. 11 (1999): 337–360. [Google Scholar] [CrossRef]
  20. R.H. Randles, M.A. Fligner, G.E. Policello II, and D.A. Wolfe. “An asymptotically distribution-free test for symmetry versus asymmetry.” J. Am. Stat. Assoc. 75 (1980): 168–172. [Google Scholar] [CrossRef]
  21. L.G. Godfrey, and C.D. Orme. “Testing for skewness of regression disturbances.” Econ. Lett. 37 (1991): 31–34. [Google Scholar] [CrossRef]
  22. J. Bai, and S. Ng. “Tests for skewness, kurtosis, and normality for time series data.” J. Bus. Econ. Stat. 23 (2005): 49–58. [Google Scholar] [CrossRef]
  23. G. Premaratne, and A. Bera. “A test for symmetry with leptokurtic financial data.” J. Financ. Econom. 3 (2005): 169–187. [Google Scholar] [CrossRef]
  24. W.K. Newey, and J.L. Powell. “Asymmetric least squares estimation and testing.” Econometrica 55 (1987): 819–847. [Google Scholar] [CrossRef]
  25. J. Bai, and S. Ng. “A consistent test for conditional symmetry in time series models.” J. Econom. 103 (2001): 225–258. [Google Scholar] [CrossRef]
  26. M.A. Delgado, and J.C. Escanciano. “Nonparametric tests for conditional symmetry in dynamic models.” J. Econom. 141 (2007): 652–682. [Google Scholar] [CrossRef]
  27. T. Chen, and G. Tripathi. “Testing conditional symmetry without smoothing.” J. Nonparametr. Stat. 25 (2013): 273–313. [Google Scholar] [CrossRef]
  28. Y. Fang, Q. Li, X. Wu, and D. Zhang. “A data-driven test of symmetry.” J. Econom. 188 (2015): 490–501. [Google Scholar] [CrossRef]
  29. M. Fernandes, E.F. Mendes, and O. Scaillet. “Testing for symmetry and conditional symmetry using asymmetric kernels.” Ann. Inst. Stat. Math. 67 (2015): 649–671. [Google Scholar] [CrossRef]
30. S.X. Chen. “Probability density function estimation using gamma kernels.” Ann. Inst. Stat. Math. 52 (2000): 471–480.
31. N. Gospodinov, and M. Hirukawa. “Nonparametric estimation of scalar diffusion models of interest rates using asymmetric kernels.” J. Empir. Finance 19 (2012): 595–609.
32. M. Hirukawa, and M. Sakudo. “Family of the generalised gamma kernels: A generator of asymmetric kernels for nonnegative data.” J. Nonparametr. Stat. 27 (2015): 41–63.
33. M. Fernandes, and J. Grammig. “Nonparametric specification tests for conditional duration models.” J. Econom. 127 (2005): 35–68.
34. K.B. Kulasekera, and J. Wang. “Smoothing parameter selection for power optimality in testing of regression curves.” J. Am. Stat. Assoc. 92 (1997): 500–511.
35. K.B. Kulasekera, and J. Wang. “Bandwidth selection for power optimality in a test of equality of regression curves.” Stat. Probab. Lett. 37 (1998): 287–293.
36. E.W. Stacy. “A generalization of the gamma distribution.” Ann. Math. Stat. 33 (1962): 1187–1192.
37. I.S. Abramson. “On bandwidth variation in kernel estimates—A square root law.” Ann. Stat. 10 (1982): 1217–1223.
38. C.J. Stone. “Optimal rates of convergence for nonparametric estimators.” Ann. Stat. 8 (1980): 1348–1360.
39. N.H. Anderson, P. Hall, and D.M. Titterington. “Two-sample test statistics for measuring discrepancies between two multivariate probability density functions using kernel-based density estimates.” J. Multivar. Anal. 50 (1994): 41–54.
40. P. Hall. “Central limit theorem for integrated square error of multivariate nonparametric density estimators.” J. Multivar. Anal. 14 (1984): 1–16.
41. V.S. Koroljuk, and Y.V. Borovskich. Theory of U-Statistics. Dordrecht, The Netherlands: Kluwer Academic Publishers, 1994.
42. B.E. Hansen. “Uniform convergence rates for kernel estimation with dependent data.” Econom. Theory 24 (2008): 726–748.
43. B.W. Silverman. Density Estimation for Statistics and Data Analysis. London, UK: Chapman & Hall, 1986.
44. J. Gao, and I. Gijbels. “Bandwidth selection in nonparametric kernel testing.” J. Am. Stat. Assoc. 103 (2008): 1584–1594.
45. J.S. Ramberg, and B.W. Schmeiser. “An approximate method for generating asymmetric random variables.” Commun. ACM 17 (1974): 78–82.
46. M. Fernandes, and P.K. Monteiro. “Central limit theorem for asymmetric kernel functionals.” Ann. Inst. Stat. Math. 57 (2005): 425–442.
Figure 1. Shapes of the GG Kernels When b = 0.2.
Table 1. Distributions of the Disturbance u in the Simulation Study.

| | Distribution | Skewness | Kurtosis |
|---|---|---|---|
| S1 | N(0, 1) | 0.00 | 3.00 |
| S2 | t_10 | 0.00 | 4.00 |
| S3 | DE(0, 1), or standard Laplace | 0.00 | 6.00 |
| S4 | U(−1, 1), or GLD with (λ1, λ2, λ3, λ4) = (0, 1, 1, 1) | 0.00 | 1.80 |
| A1 | LN(0, 1) − exp(1/2) | 6.18 | 113.94 |
| A2 | χ²_3 − 3 | 1.63 | 7.00 |
| A3 | GLD with (λ1, λ2, λ3, λ4) = (12.601, −0.00980045, −0.11, −0.0001) | −2.92 | 19.52 |
| A4 | GLD with (λ1, λ2, λ3, λ4) = (−9.7726, −0.0151878, −0.001, −0.13) | 3.16 | 23.75 |

Note: “S” and “A” stand for symmetric and asymmetric distributions, respectively. “GLD” denotes the generalized lambda distribution of Ramberg and Schmeiser [45]. The distribution is defined in terms of the inverse of the cumulative distribution function, F^{−1}(u) = λ1 + [u^{λ3} − (1 − u)^{λ4}]/λ2 for u ∈ [0, 1].
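Because the GLD is defined only through its quantile function, drawing from it reduces to inversion sampling: plug U(0, 1) variates into F^{−1}. A minimal sketch (the function name `gld_quantile` is ours, not from the paper) implements the Ramberg–Schmeiser quantile function and checks that the S4 parameter vector (0, 1, 1, 1) collapses to the uniform distribution on (−1, 1):

```python
import numpy as np

def gld_quantile(u, lam1, lam2, lam3, lam4):
    """Ramberg-Schmeiser GLD quantile function:
    F^{-1}(u) = lam1 + (u**lam3 - (1 - u)**lam4) / lam2."""
    u = np.asarray(u, dtype=float)
    return lam1 + (u**lam3 - (1.0 - u)**lam4) / lam2

# S4 from Table 1: (0, 1, 1, 1) gives F^{-1}(u) = 2u - 1, i.e., U(-1, 1),
# so inversion sampling is just this transformation of uniform draws.
rng = np.random.default_rng(0)
x = gld_quantile(rng.uniform(size=100_000), 0.0, 1.0, 1.0, 1.0)
print(x.min(), x.max())  # all draws fall in [-1, 1]
```

The asymmetric designs A1–A4 are obtained the same way by swapping in the parameter vectors listed in Table 1.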
Table 2. Size and Power of the SSST.

(A) Size (%)

| N | Test | δ | S1, 5% | S1, 10% | S2, 5% | S2, 10% | S3, 5% | S3, 10% | S4, 5% | S4, 10% |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | FMS-G-O | – | 4.8 | 9.4 | 4.9 | 9.5 | 6.0 | 10.8 | 7.4 | 12.6 |
| | FMS-G-AltVar | – | 4.7 | 9.3 | 4.4 | 8.9 | 5.8 | 10.9 | 6.9 | 12.6 |
| | T_{n1,n2}-G | 0.3 | 3.5 | 7.5 | 3.1 | 7.0 | 4.5 | 8.7 | 5.2 | 9.6 |
| | | 0.5 | 3.8 | 6.8 | 3.0 | 7.3 | 4.4 | 7.9 | 5.4 | 10.2 |
| | | 0.7 | 3.4 | 6.7 | 3.4 | 7.3 | 4.5 | 8.9 | 5.4 | 10.5 |
| | T_{n1,n2}-MG | 0.3 | 3.6 | 7.7 | 3.9 | 7.5 | 4.9 | 9.5 | 5.3 | 9.7 |
| | | 0.5 | 4.0 | 7.2 | 3.7 | 7.2 | 4.9 | 8.7 | 5.4 | 10.2 |
| | | 0.7 | 3.3 | 6.8 | 3.8 | 7.3 | 4.7 | 9.1 | 5.3 | 10.7 |
| | T_{n1,n2}-NM | 0.3 | 3.4 | 7.3 | 3.1 | 6.7 | 4.3 | 8.3 | 5.2 | 9.3 |
| | | 0.5 | 3.9 | 6.7 | 3.1 | 6.4 | 4.2 | 7.9 | 5.2 | 9.8 |
| | | 0.7 | 3.3 | 7.0 | 3.1 | 6.4 | 4.4 | 8.1 | 5.2 | 10.3 |
| 100 | FMS-G-O | – | 5.2 | 9.8 | 6.7 | 10.8 | 5.6 | 10.1 | 7.4 | 12.5 |
| | FMS-G-AltVar | – | 4.9 | 8.9 | 6.4 | 10.6 | 5.3 | 9.9 | 6.6 | 12.6 |
| | T_{n1,n2}-G | 0.3 | 4.0 | 7.0 | 6.1 | 9.3 | 5.1 | 8.8 | 7.3 | 11.9 |
| | | 0.5 | 3.6 | 7.3 | 5.8 | 9.1 | 5.2 | 8.9 | 7.1 | 11.4 |
| | | 0.7 | 3.8 | 7.6 | 5.9 | 9.5 | 4.8 | 9.3 | 6.6 | 11.8 |
| | T_{n1,n2}-MG | 0.3 | 4.5 | 7.4 | 6.2 | 9.5 | 5.8 | 9.6 | 7.3 | 12.1 |
| | | 0.5 | 3.7 | 7.7 | 6.1 | 9.6 | 5.7 | 9.0 | 7.2 | 11.6 |
| | | 0.7 | 3.8 | 8.1 | 5.8 | 9.6 | 5.1 | 9.4 | 6.7 | 12.1 |
| | T_{n1,n2}-NM | 0.3 | 4.2 | 6.8 | 5.3 | 8.9 | 4.3 | 8.4 | 7.1 | 12.4 |
| | | 0.5 | 4.0 | 6.6 | 5.3 | 8.7 | 4.9 | 8.7 | 7.2 | 11.6 |
| | | 0.7 | 4.2 | 6.8 | 5.3 | 8.8 | 4.9 | 8.4 | 7.3 | 11.7 |
| 200 | FMS-G-O | – | 4.1 | 7.3 | 6.0 | 9.3 | 5.8 | 8.9 | 8.9 | 14.1 |
| | FMS-G-AltVar | – | 4.1 | 7.0 | 6.1 | 8.9 | 5.4 | 9.3 | 8.7 | 15.1 |
| | T_{n1,n2}-G | 0.3 | 3.3 | 6.3 | 4.9 | 8.0 | 4.9 | 8.7 | 8.9 | 12.9 |
| | | 0.5 | 3.2 | 6.4 | 4.9 | 8.6 | 5.3 | 8.7 | 8.7 | 13.3 |
| | | 0.7 | 3.5 | 6.8 | 5.2 | 9.2 | 5.4 | 8.9 | 8.4 | 13.9 |
| | T_{n1,n2}-MG | 0.3 | 3.5 | 6.7 | 5.5 | 8.4 | 5.5 | 9.4 | 9.0 | 13.4 |
| | | 0.5 | 3.5 | 6.8 | 5.2 | 9.0 | 5.7 | 8.8 | 8.7 | 13.6 |
| | | 0.7 | 3.5 | 7.0 | 5.5 | 9.8 | 5.7 | 9.1 | 8.6 | 13.7 |
| | T_{n1,n2}-NM | 0.3 | 3.5 | 6.0 | 4.0 | 7.5 | 4.4 | 8.5 | 8.7 | 13.3 |
| | | 0.5 | 3.6 | 5.9 | 4.4 | 7.6 | 4.4 | 8.4 | 8.7 | 13.3 |
| | | 0.7 | 3.6 | 5.8 | 4.6 | 7.4 | 4.5 | 8.4 | 8.8 | 13.1 |

(B) Power (%)

| N | Test | δ | A1, 5% | A1, 10% | A2, 5% | A2, 10% | A3, 5% | A3, 10% | A4, 5% | A4, 10% |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | FMS-G-O | – | 42.0 | 51.6 | 21.4 | 30.3 | 26.3 | 37.2 | 28.8 | 40.8 |
| | | | [43.7] | [52.4] | [22.7] | [31.0] | [27.6] | [38.1] | [30.5] | [42.0] |
| | FMS-G-AltVar | – | 24.9 | 37.0 | 13.3 | 22.4 | 39.8 | 52.2 | 43.1 | 55.3 |
| | | | [27.5] | [39.0] | [13.9] | [24.1] | [41.5] | [53.6] | [44.9] | [57.4] |
| | T_{n1,n2}-G | 0.3 | 31.3 | 41.5 | 17.3 | 24.7 | 30.8 | 41.3 | 33.8 | 44.3 |
| | | | [35.4] | [46.6] | [20.8] | [29.5] | [35.1] | [46.3] | [38.7] | [50.8] |
| | | 0.5 | 29.5 | 39.7 | 16.0 | 23.8 | 29.4 | 39.1 | 31.4 | 42.9 |
| | | 0.7 | 27.8 | 38.7 | 14.5 | 22.6 | 28.0 | 36.8 | 30.2 | 40.6 |
| | T_{n1,n2}-MG | 0.3 | 32.1 | 42.2 | 17.9 | 25.8 | 31.6 | 41.8 | 35.5 | 45.5 |
| | | | [35.2] | [45.7] | [20.9] | [29.3] | [35.0] | [45.8] | [39.1] | [49.5] |
| | | 0.5 | 30.1 | 40.8 | 16.5 | 24.5 | 30.2 | 40.3 | 32.2 | 43.1 |
| | | 0.7 | 28.0 | 38.9 | 15.1 | 23.0 | 28.5 | 37.5 | 30.8 | 41.1 |
| | T_{n1,n2}-NM | 0.3 | 33.8 | 42.5 | 18.2 | 25.2 | 32.9 | 42.5 | 35.2 | 46.5 |
| | | | [36.6] | [48.8] | [20.7] | [30.4] | [36.8] | [48.5] | [39.2] | [53.5] |
| | | 0.5 | 33.5 | 42.2 | 17.7 | 24.9 | 32.4 | 42.2 | 34.6 | 45.8 |
| | | 0.7 | 33.0 | 41.4 | 17.4 | 24.2 | 32.2 | 41.6 | 34.5 | 45.7 |
| 100 | FMS-G-O | – | 73.0 | 81.5 | 41.4 | 51.4 | 59.1 | 71.2 | 64.2 | 74.6 |
| | | | [72.7] | [81.8] | [40.7] | [52.3] | [58.7] | [71.8] | [63.8] | [74.6] |
| | FMS-G-AltVar | – | 56.4 | 70.2 | 30.3 | 43.4 | 73.7 | 80.7 | 77.2 | 83.7 |
| | | | [58.1] | [71.6] | [32.2] | [44.4] | [74.4] | [81.8] | [78.1] | [84.4] |
| | T_{n1,n2}-G | 0.3 | 72.3 | 80.5 | 37.9 | 48.6 | 70.3 | 78.4 | 74.1 | 80.5 |
| | | | [75.4] | [84.4] | [40.2] | [54.9] | [72.8] | [82.3] | [76.1] | [84.1] |
| | | 0.5 | 67.1 | 77.2 | 33.9 | 45.0 | 65.2 | 75.7 | 69.5 | 78.2 |
| | | 0.7 | 62.9 | 73.6 | 31.9 | 42.5 | 61.7 | 72.3 | 65.2 | 75.8 |
| | T_{n1,n2}-MG | 0.3 | 73.1 | 80.2 | 38.3 | 49.3 | 70.5 | 78.3 | 73.6 | 81.0 |
| | | | [74.8] | [83.6] | [40.9] | [53.5] | [72.4] | [80.7] | [75.4] | [83.3] |
| | | 0.5 | 67.4 | 77.5 | 34.6 | 45.4 | 65.6 | 75.4 | 69.5 | 77.9 |
| | | 0.7 | 62.9 | 73.5 | 32.0 | 42.3 | 62.2 | 72.5 | 65.3 | 75.2 |
| | T_{n1,n2}-NM | 0.3 | 76.8 | 84.0 | 41.7 | 51.9 | 75.5 | 82.1 | 76.8 | 84.0 |
| | | | [79.6] | [87.2] | [44.8] | [58.0] | [77.6] | [85.0] | [79.7] | [86.9] |
| | | 0.5 | 77.0 | 84.0 | 40.1 | 51.0 | 75.1 | 82.0 | 75.7 | 83.0 |
| | | 0.7 | 76.4 | 83.7 | 39.6 | 50.1 | 74.9 | 81.9 | 75.6 | 82.9 |
| 200 | FMS-G-O | – | 97.4 | 98.3 | 71.5 | 80.7 | 95.6 | 98.1 | 97.1 | 97.8 |
| | | | [97.8] | [98.6] | [75.0] | [84.2] | [96.9] | [98.6] | [97.4] | [98.4] |
| | FMS-G-AltVar | – | 93.4 | 96.3 | 60.4 | 72.8 | 98.7 | 99.0 | 98.4 | 99.1 |
| | | | [95.2] | [97.5] | [69.1] | [78.5] | [98.8] | [99.2] | [99.0] | [99.2] |
| | T_{n1,n2}-G | 0.3 | 97.7 | 99.1 | 77.0 | 84.8 | 98.6 | 99.1 | 98.6 | 99.2 |
| | | | [98.7] | [99.4] | [82.7] | [90.3] | [98.8] | [99.4] | [98.9] | [99.2] |
| | | 0.5 | 97.2 | 98.1 | 71.3 | 80.9 | 97.7 | 98.9 | 97.9 | 98.7 |
| | | 0.7 | 96.4 | 97.6 | 65.4 | 75.8 | 96.7 | 98.8 | 97.2 | 98.3 |
| | T_{n1,n2}-MG | 0.3 | 97.9 | 99.1 | 76.2 | 85.3 | 98.6 | 99.1 | 98.5 | 99.1 |
| | | | [98.6] | [99.4] | [81.8] | [89.8] | [98.7] | [99.4] | [98.7] | [99.2] |
| | | 0.5 | 97.3 | 98.1 | 71.4 | 80.3 | 97.6 | 98.9 | 97.9 | 98.6 |
| | | 0.7 | 96.5 | 97.6 | 65.5 | 75.9 | 96.7 | 98.7 | 97.2 | 98.3 |
| | T_{n1,n2}-NM | 0.3 | 98.9 | 99.0 | 84.8 | 91.1 | 98.8 | 99.2 | 98.6 | 98.9 |
| | | | [99.0] | [99.2] | [91.1] | [95.8] | [99.4] | [99.7] | [99.5] | [99.7] |
| | | 0.5 | 96.8 | 97.0 | 81.2 | 88.2 | 98.3 | 98.6 | 98.4 | 98.6 |
| | | 0.7 | 95.6 | 95.7 | 80.9 | 88.3 | 98.3 | 98.6 | 98.4 | 98.6 |

Note: Numbers in brackets are size-adjusted powers.
1. Hirukawa and Sakudo [32] present the Weibull kernel as yet another special case. However, it has not been confirmed that this kernel satisfies Lemma 1, and thus the Weibull kernel is not investigated here.
2. It is possible to use different asymmetric kernels and/or different smoothing parameters to estimate f and g. For convenience, however, we employ the same asymmetric kernel function and a single smoothing parameter for both.
3. Although the GLDs corresponding to A3 and A4 are used in Zheng [17] and FMS, they turn out to have non-zero means. We therefore adjust the values of λ1 and λ2, with skewness and kurtosis maintained, so that the resulting distributions have zero means.
