
Algorithms 2011, 4(2), 87-114; https://doi.org/10.3390/a4020087

Article
Goodness-of-Fit Tests For Elliptical and Independent Copulas through Projection Pursuit
Université Pierre et Marie Curie, Laboratoire de Statistique Théorique et Appliquée, 175 rue du Chevaleret, 75013 Paris, France
Received: 15 March 2011; in revised form: 5 April 2011 / Accepted: 8 April 2011 / Published: 26 April 2011

## Abstract

Two goodness-of-fit tests for copulas are investigated. The first deals with elliptical copulas and the second with independent copulas. These tests result from the extension of the projection pursuit methodology that we introduce in the present article. This method enables us to determine on which axis system these copulas lie, as well as the exact value of these copulas in the basis formed by the previously determined axes, irrespective of their expression in the canonical basis. Simulations are presented, as well as an application to real datasets.
Keywords:
copulas; goodness-of-fit; projection pursuit; elliptical distributions

## 1. Introduction

The need to describe the dependency between two or more random variables gave rise to the concept of copulas. We consider a joint cumulative distribution function (cdf) F on ℝd with marginal cdfs F1, F2, …, Fd. A copula C is a function such that F = C(F1, F2, …, Fd). Sklar was the first to lay the foundations of this theory. Several parametric families of copulas have been defined, namely elliptical, Archimedean, periodic copulas, etc., see Joe and Nelsen as well as Appendix A for an overview of these families. Finding criteria to determine the best copula for a given problem can only be achieved through a goodness-of-fit (GOF) approach. So far, several GOF copula approaches have been proposed in the literature, e.g., Carriere, Genest and Rémillard, Fermanian, Genest, Quessy and Rémillard, Michiels and De Schepper, Genest, Favre, Béliveau and Jacques, Mesfioui, Quessy and Toupin, Genest, Rémillard and Beaudoin, Berg, Bücher and Dette, among others. However, the field is still at an embryonic stage, which explains the current shortage of recommendations. For univariate distributions, the GOF assessment can be performed using, for instance, the well-known Kolmogorov test. In the multivariate field, there are fewer alternatives. A simple way to build GOF approaches for multivariate random variables is to consider multi-dimensional chi-square approaches, as in, for example, Broniatowski. However, these approaches present feasibility issues for high-dimensional problems due to the curse of dimensionality. In order to solve this, we recall some facts from the theory of projection pursuit.

The objective of projection pursuit is to generate one or several projections providing as much information as possible about the structure of the dataset, regardless of its size. Once a structure has been isolated, the corresponding data are transformed through a Gaussianization. Through a recursive approach, this process is iterated to find another structure in the remaining data, until no further structure can be evidenced in the data left at the end. Friedman and Huber are among the first authors to have introduced this type of approach for evidencing structures. They each describe, with many examples, how to evidence such a structure and consequently how to estimate the density of such data, each through two different methodologies. Their work is based on maximizing the Kullback-Leibler divergence. In the present article, we introduce a new projection pursuit methodology based on the minimisation of any ϕ-divergence greater than the L1-distance (ϕ-PP). We will show that this algorithm presents the extra advantage of being robust and fast from a numerical standpoint. Its key rationale lies in the fact that it allows us not only to carry out GOF tests for elliptical and independent copulas but also to determine the axis system upon which these very copulas are based. The exact expression of these copulas in the basis constituted by these axes can therefore be derived.

This paper is organised as follows: Section 2 contains preliminary definitions and properties. In Section 3, we present the ϕ-projection pursuit algorithm in detail. In Section 4, we present our first results. In Section 5, we introduce our tests. In Section 6, we provide three simulations pertaining to the two major situations described herein and we study a real case.

## 2. Basic Theory

#### 2.1. An Introduction to Copulas

In this section, we recall the concept of copula. We will also define the family of elliptical copulas through a brief reminder of elliptical distributions—see Appendix A for an overview of other families.

#### Sklar's Theorem

First, let us define a copula in ℝd.

#### Definition 2.1

A d-dimensional copula is a joint cumulative distribution function C defined on [0, 1]d, with uniform margins.

The following theorem explains to what extent a copula describes the dependency between two or more random variables.

#### Theorem 2.1 (Sklar )

Let F be a joint multivariate distribution with margins F1, …, Fd. Then there exists a copula C such that

$F ( x 1 , … , x d ) = C ( F 1 ( x 1 ) , … , F d ( x d ) )$

If marginal cumulative distributions are continuous, then the copula is unique. Otherwise, the copula is unique on the range of values of the marginal cumulative distributions.

#### Remark 2.1

First, for any copula C and any ui in [0, 1], 1 ≤ i ≤ d, we have

$W ( u 1 , … , u d ) = max { 1 − d + ∑ i = 1 d u i , 0 } ≤ C ( u 1 , … , u d ) ≤ min j ∈ { 1 , … , d } u j = M ( u 1 , … , u d )$
where W and M are called the Fréchet-Hoeffding copula bounds and are themselves copulas.

We set the independent copula Π as $Π ( u 1 , … , u d ) = Π i = 1 d u i$, for any ui in [0, 1], 1 ≤ i ≤ d.

Moreover, we define the density of a copula as the density associated with the cdf C, which we will denote by c:

#### Definition 2.2

Whenever it exists, the density of C is defined by $c ( u 1 , … , u d ) = ∂ d ∂ u 1 … ∂ u d C ( u 1 , … , u d )$, for any ui in [0, 1], 1 ≤ i ≤ d.

Finally, let us present several examples of copulas (see also Appendix A to find an overview).

#### Example 2.1

The Gaussian copula Cρ (in ℝ2):

Defining Ψρ as the standard bivariate normal cumulative distribution function with correlation coefficient ρ, the Gaussian copula function is

$C ρ ( u , v ) = Ψ ρ ( Ψ − 1 ( u ) , Ψ − 1 ( v ) )$
where u, v ∈ [0, 1] and where Ψ is the standard normal cumulative distribution function.
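For illustration, Cρ can be evaluated numerically (a sketch assuming scipy; for ρ = 0, Cρ reduces to the independent copula, so C0(0.5, 0.5) = 0.25):

```python
# Evaluate the bivariate Gaussian copula
#   C_rho(u, v) = Psi_rho(Psi^{-1}(u), Psi^{-1}(v))
# with scipy; the values of u, v and rho below are arbitrary.
from scipy.stats import norm, multivariate_normal

def gaussian_copula(u, v, rho):
    x, y = norm.ppf(u), norm.ppf(v)  # Psi^{-1}(u), Psi^{-1}(v)
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([x, y])

print(round(gaussian_copula(0.5, 0.5, 0.0), 4))  # -> 0.25
```

For ρ = 0.5 the closed form C(0.5, 0.5) = 1/4 + arcsin(ρ)/(2π) = 1/3 provides another check.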

The Student copula Cρ (in ℝ2):

Defining Tρ,k as the standard bivariate Student cumulative distribution function with correlation coefficient ρ and k degrees of freedom, the Student copula function is

$C ρ ( u , v ) = T ρ , k ( T k − 1 ( u ) , T k − 1 ( v ) )$
where u, v ∈ [0, 1] and where Tk is the standard Student cumulative distribution function.

The Elliptical copula:

Similarly as above, elliptical copulas are the copulas of elliptical distributions (an overview is provided in Appendix A).

#### 2.2. Brief Introduction to the ϕ-Projection Pursuit Methodology (ϕ-PP)

Let us first introduce the concept of ϕ-divergence.

#### The Concept of ϕ-Divergence

Let φ be a strictly convex function defined by $φ : ℝ + ¯ → ℝ + ¯$ and such that φ(1) = 0. We define the ϕ-divergence between Q and P, where P and Q are two probability distributions over a space Ω such that Q is absolutely continuous with respect to P, by

$D ϕ ( Q , P ) = ∫ φ ( d Q d P ) d P$
or $D ϕ ( q , p ) = ∫ φ ( q ( x ) p ( x ) ) p ( x ) d x$, if P and Q have densities p and q respectively.

Throughout this article, we will also assume that φ(0) < ∞, that φ′ is continuous and that this divergence is greater than the L1 distance—see also Appendix B page 109.
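As a concrete instance, taking φ(x) = (x − 1)²/2 yields a χ²-type divergence (half the Pearson χ², the divergence used in the simulations of Section 6). The defining integral can be evaluated directly for two discrete densities (a minimal sketch; the densities below are arbitrary illustrations):

```python
# Evaluate D_phi(q, p) = sum_x phi(q(x)/p(x)) p(x) for two discrete
# densities, with phi(x) = (x - 1)^2 / 2 (chi-square case): phi is
# strictly convex and phi(1) = 0, as required by the definition.

def phi(x):
    return 0.5 * (x - 1.0) ** 2

def phi_divergence(q, p):
    return sum(phi(qi / pi) * pi for qi, pi in zip(q, p))

p = [0.25, 0.25, 0.25, 0.25]
q = [0.10, 0.20, 0.30, 0.40]
print(phi_divergence(q, p))  # positive, since q != p
print(phi_divergence(p, p))  # -> 0.0, since phi(1) = 0
```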

#### Functioning of the Algorithm

Let f be a density on ℝd. We consider an instrumental density g with the same mean and variance as f. We first test whether Dϕ(g, f) = 0; should this be the case, then f = g and the algorithm stops. Otherwise, the first step of our algorithm consists in defining a vector a1 and a density g(1) by

$a 1 = arg inf a ∈ ℝ ∗ d D ϕ ( g f a g a , f ) and g ( 1 ) = g f a 1 g a 1$
where $ℝ ∗ d$ is the set of non-null vectors of ℝd and fa (resp. ga) stands for the density of a⊤X (resp. a⊤Y) when f (resp. g) is the density of X (resp. Y).

In our second step, we replace g with g(1) and we repeat the first step, and so on. By iterating this process, we end up obtaining a sequence (a1, a2, …) of vectors in $ℝ ∗ d$ and a sequence of densities g(i).

#### Remark 2.2

First, to obtain an approximation of f, we stop our algorithm when the divergence equals zero, i.e., we stop when Dϕ(g(j), f) = 0, since this implies g(j) = f, with j ≤ d, or when our algorithm reaches the dth iteration, i.e., we approximate f with g(d).

Second, we get Dϕ(g(0), f) ≥ Dϕ(g(1), f) ≥ … ≥ 0 with g(0) = g.

Finally, the specific form of the relationship (2.2) implies that we deal with M-estimation. We can therefore state that our method is robust—see Section 6, Yohai, Toma as well as Huber.

The main steps of the present algorithm have been summarized in Table 1.

At present, let us study the following example:

#### Example 2.2

Let f be a density defined on ℝ3 by f(x1, x2, x3) = n(x1, x2)h(x3), with n being a bi-dimensional Gaussian density, and h being a non-Gaussian density. Let us also consider g, a Gaussian density with the same mean and variance as f.

Since g(x1, x2/x3) = n(x1, x2), we have $D ϕ ( g f 3 g 3 , f ) = D ϕ ( n . f 3 , f ) = D ϕ ( f , f ) = 0$ as f3 = h, i.e., the function $a ↦ D ϕ ( g f a g a , f )$ reaches zero for e3 = (0, 0, 1)′, where f3 and g3 are the third marginal densities of f and g respectively. We therefore obtain g(x1, x2/x3) = f(x1, x2/x3).

To recapitulate our method: if Dϕ(g, f) = 0, we derive f from the relationship f = g; whenever there exists a sequence (ai)i=1,…,j, j < d, of vectors in $ℝ ∗ d$ defining g(j) and such that Dϕ(g(j), f) = 0, then $f ( . / a i ⊤ x , 1 ≤ i ≤ j ) = g ( . / a i ⊤ x , 1 ≤ i ≤ j )$, i.e., f coincides with g on the complement of the vector subspace generated by the family {ai}i=1,…,j—see also Section 3 for a more detailed explanation.

In the remainder of our study of the algorithm, after having clarified the choice of g, we will consider the statistical solution to the representation problem, assuming that f is unknown and that X1, X2, …, Xm are i.i.d. with density f. We will provide asymptotic results pertaining to the family of optimizing vectors ak,m—which we will define more precisely below—as m goes to infinity. Our results also prove that the empirical representation scheme converges towards the theoretical one.

## 3. The Algorithm

#### 3.1. The Model

Let f be a density on ℝd. We assume there exist d non-null linearly independent vectors aj, 1 ≤ j ≤ d, of ℝd, such that

$f ( x ) = n ( a j + 1 ⊤ x , … , a d ⊤ x ) h ( a 1 ⊤ x , … , a j ⊤ x )$
with j < d, with n being an elliptical density on ℝd−j and with h being a density on ℝj which does not belong to the same family as n. Let X = (X1, …, Xd) be a vector with density f.

We define g as an elliptical distribution with the same mean and variance as f.

For simplicity, let us assume that the family {aj}1≤j≤d is the canonical basis of ℝd:

The very definition of f implies that (Xj+1, …, Xd) is independent from (X1, …, Xj). Hence, the density of (Xj+1, …, Xd) given (X1, …, Xj) is n.

Let us assume that Dϕ(g(j), f) = 0, for some j ≤ d. We then get $\frac{f(x)}{f_{a_1} f_{a_2} \cdots f_{a_j}} = \frac{g(x)}{g_{a_1}^{(1-1)} g_{a_2}^{(2-1)} \cdots g_{a_j}^{(j-1)}}$, since, by induction, we have $g^{(j)}(x) = g(x) \frac{f_{a_1}}{g_{a_1}^{(1-1)}} \frac{f_{a_2}}{g_{a_2}^{(2-1)}} \cdots \frac{f_{a_j}}{g_{a_j}^{(j-1)}}$.

Consequently, lemma C.1 and the fact that the conditional densities with elliptical distributions are also elliptical, as well as the above relationship, lead us to infer that $n ( a j + 1 ⊤ x , . , a d ⊤ x ) = f ( . / a i ⊤ x , 1 ≤ i ≤ j ) = g ( . / a i ⊤ x , 1 ≤ i ≤ j )$. In other words, f coincides with g on the complement of the vector subspace generated by the family {ai}i=1,…,j.

Now, if the family {aj}1≤j≤d is no longer the canonical basis of ℝd, then this family is still a basis of ℝd. Hence, lemma C.2 implies that

$f ( . / a 1 ⊤ x , … , a j ⊤ x ) = n ( a j + 1 ⊤ x , … , a d ⊤ x ) = g ( . / a 1 ⊤ x , … , a j ⊤ x )$
which is equivalent to Dϕ(g(j), f) = 0, since by induction $g ( j ) = g f a 1 g a 1 ( 1 − 1 ) f a 2 g a 2 ( 2 − 1 ) … f a j g a j ( j − 1 )$.

The end of our algorithm implies that f coincides with g on the complement of the vector subspace generated by the family {ai}i=1,…,j. Therefore, the nullity of the ϕ-divergence provides us with information on the density structure.

In summary, the following proposition clarifies our choice of g, which depends on the family of distributions one wants to find in f:

#### Proposition 3.1

With the above notations, Dϕ(g(j), f) = 0 is equivalent to

$g ( . / a 1 ⊤ x , … , a j ⊤ x ) = f ( . / a 1 ⊤ x , … , a j ⊤ x )$

More generally, the above proposition leads to the following definition of the co-vectors and of the co-support of f.

#### Definition 3.1

Let f be a density on ℝd. We define the co-vectors of f as the sequence of vectors a1, …, aj which solves the problem Dϕ(g(j), f) = 0 where g is an elliptical distribution with the same mean and variance as f. We define the co-support of f as the vector space generated by the vectors a1, …, aj.

#### Remark 3.1

Any (ai) family defining f as in (3.1) is an orthogonal basis of ℝd—see lemma C.3.

#### 3.2. Stochastic Outline of Our Algorithm

Let X1, X2, …, Xm (resp. Y1, Y2, …, Ym) be a sequence of m independent random vectors with the same density f (resp. g). As customary in nonparametric ϕ-divergence optimizations, all estimates of f and fa, as well as all uses of Monte Carlo methods, are performed using subsamples X1, X2, …, Xn and Y1, Y2, …, Yn—extracted respectively from X1, X2, …, Xm and Y1, Y2, …, Ym—since the estimates are bounded below by some positive deterministic sequence θm—see Appendix D.

Let ℙn be the empirical measure based on the subsample X1, X2, …, Xn. Let fn (resp. fa,n, for any a in $ℝ ∗ d$) be the kernel estimate of f (resp. fa), which is built from X1, X2, …, Xn (resp. a⊤X1, a⊤X2, …, a⊤Xn).

As defined in Section 2.2, we consider the following sequences (ak)k≥1 and (g(k))k≥1 such that

$a k is a non null vector of ℝ d defined by a k = arg min a ∈ ℝ ∗ d D ϕ ( g ( k − 1 ) f a g a ( k − 1 ) , f ) g ( k ) is the density defined by g ( k ) = g ( k − 1 ) f a k g a k ( k − 1 ) with g ( 0 ) = g$

The stochastic version of the algorithm uses fn and $g n ( 0 ) = g$ instead of f and g(0) = g—since g is known. Thus, at the first step, we build the vector ǎ1, which minimizes the ϕ-divergence between fn and $g f a , n g a$ and which estimates a1. First, since proposition D.1 and lemma C.4 show how the infimum of the criterion (or index)

$D ˇ ϕ ( g f a , n g a , f n ) = 1 n ∑ i = 1 n φ ( g ( X i ) f a , n ( a ⊤ X i ) f n ( X i ) g a ( a ⊤ X i ) )$
is reached, we are then able to minimize the ϕ-divergence between fn and $g f a , n g a$. Second, defining ǎ1 as the argument of this minimization, proposition 4.3 implies that this vector tends to a1. Finally, we define the density $g ˇ n ( 1 )$ as $g ˇ n ( 1 ) = g f a ˇ 1 , n g a ˇ 1$, which estimates g(1) through theorem 4.1.
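For concreteness, the empirical index above can be evaluated for a fixed direction a with kernel density estimates, in the χ² case φ(x) = (x − 1)²/2 used in Section 6 (a toy sketch: the data, the sample size and the use of scipy's `gaussian_kde` are our assumptions, and both the optimisation over a and the truncation of Appendix D are omitted):

```python
# Empirical chi-square index along a fixed direction a, with kernel
# estimates f_n of f and f_{a,n} of f_a, and a Gaussian instrumental g.
import numpy as np
from scipy.stats import gaussian_kde, multivariate_normal

rng = np.random.default_rng(0)
n = 200

# Toy f: Gaussian along the first axis, exponential along the second.
X = np.column_stack([rng.normal(size=n), rng.exponential(scale=0.5, size=n)])

# Instrumental g: Gaussian with the same mean and covariance as f.
mean, cov = X.mean(axis=0), np.cov(X.T)
g = multivariate_normal(mean=mean, cov=cov)

f_n = gaussian_kde(X.T)  # kernel estimate of f

def phi(x):
    return 0.5 * (x - 1.0) ** 2

def empirical_index(a):
    """(1/n) sum_i phi( g(X_i) f_{a,n}(a'X_i) / (f_n(X_i) g_a(a'X_i)) )."""
    proj = X @ a
    f_a_n = gaussian_kde(proj)           # kernel estimate of f_a
    mu_a, var_a = mean @ a, a @ cov @ a  # g_a: a projection of g is Gaussian
    g_a = np.exp(-0.5 * (proj - mu_a) ** 2 / var_a) / np.sqrt(2 * np.pi * var_a)
    return phi(g.pdf(X) * f_a_n(proj) / (f_n(X.T) * g_a)).mean()

# One expects a smaller index along e2, which carries the non-Gaussian
# (exponential) structure, than along e1.
print(empirical_index(np.array([0.0, 1.0])), empirical_index(np.array([1.0, 0.0])))
```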

Now, from the second step onwards, and as defined in Section 2.2, the density g(1) is unknown. Consequently, once again, the samples have to be truncated.

All estimates of f and fa (resp. g(1) and $g a ( 1 )$) are performed using a subsample X1, X2, …, Xn (resp. $Y 1 ( 1 ) , Y 2 ( 1 ) , … , Y n ( 1 )$) extracted from X1, X2, …, Xm (resp. $Y 1 ( 1 ) , Y 2 ( 1 ) , … , Y m ( 1 )$, a sequence of m independent random vectors with the same density g(1)) such that the estimates are bounded below by some positive deterministic sequence θm—see Appendix D.

Let ℙn be the empirical measure of the subsample X1, X2,…,Xn. Let fn (resp. $g n ( 1 ) , f a , n , g a , n ( 1 )$ for any a in $ℝ ∗ d$) be the kernel estimate of f (resp. g(1) and fa as well as $g a ( 1 )$) which is built from X1, X2,…,Xn (resp. $Y 1 ( 1 ) , Y 2 ( 1 ) , … , Y n ( 1 )$ and aX1, aX2,…,aXn as well as $a ⊤ Y 1 ( 1 ) , a ⊤ Y 2 ( 1 ) , … , a ⊤ Y n ( 1 )$).

The stochastic version of the algorithm uses fn and $g n ( 1 )$ instead of f and g(1). Thus, we build the vector ǎ2, which minimizes the ϕ-divergence between fn and $g n ( 1 ) f a , n g a , n ( 1 )$—since g(1) and $g a ( 1 )$ are unknown—and which estimates a2. First, since proposition D.1 and lemma C.4 show how the infimum of the criterion (or index)

$D ˇ ϕ ( g n ( 1 ) f a , n g a , n ( 1 ) , f n ) = 1 n ∑ i = 1 n φ ( g n ( 1 ) ( X i ) f a , n ( a ⊤ X i ) f n ( X i ) g a , n ( 1 ) ( a ⊤ X i ) )$
is reached, we are then able to minimize the ϕ-divergence between fn and $g n ( 1 ) f a , n g a , n ( 1 )$. Second, defining ǎ2 as the argument of this minimization, proposition 4.3 implies that this vector tends to a2. Finally, we define the density $g ˇ n ( 2 )$ as $g ˇ n ( 2 ) = g n ( 1 ) f a ˇ 2 , n g a ˇ 2 , n ( 1 )$, which estimates g(2) through theorem 4.1.

Iterating this process, we end up with a sequence (ǎ1, ǎ2, …) of vectors in $ℝ ∗ d$ estimating the co-vectors of f and a sequence of densities $( g ˇ n ( k ) ) k$ such that $g ˇ n ( k )$ estimates g(k) through theorem 4.1.

Let us now summarize the main steps of the stochastic implementation of our algorithm (the dual representation of the estimators will be further detailed in Table 2 below).

## 4. Results

#### 4.1. Hypotheses on f

In this paragraph, we define the set of hypotheses on f which could possibly be of use in our work. Discussion on several of these hypotheses can be found in Appendix E.

In the remainder of this section, for legibility reasons, we write g for g(k−1). Let

$Θ = ℝ d , Θ D ϕ = { b ∈ Θ | ∫ φ ∗ ( φ ′ ( g ( x ) f ( x ) f b ( b ⊤ x ) g b ( b ⊤ x ) ) ) d P < ∞ } M ( b , a , x ) = ∫ φ ′ ( g ( x ) f ( x ) f b ( b ⊤ x ) g b ( b ⊤ x ) ) g ( x ) f a ( a ⊤ x ) g a ( a ⊤ x ) d x − φ ∗ ( φ ′ ( g ( x ) f ( x ) f b ( b ⊤ x ) g b ( b ⊤ x ) ) ) ℙ n M ( b , a ) = ∫ M ( b , a , x ) d ℙ n , P M ( b , a ) = ∫ M ( b , a , x ) d P$
where P is the probability measure presenting f as density.

Similarly to chapter V of Van der Vaart, let us define:

• (A1) : For all ε > 0, there is η > 0 such that for all c ∈ ΘDϕ verifying ‖c − ak‖ ≥ ε, we have PM(c, a) − η > PM(ak, a), with a ∈ Θ.

• (A2) : ∃ Z < 0, n0 > 0 such that (n ≥ n0 ⇒ supa∈Θ supc∈{ΘDϕ}c ℙnM(c, a) < Z)

• (A3) : There exists V, a neighbourhood of ak, and H, a positive function, such that, for all cV, we have |M(c, ak, x)| ≤ H(x) (P-a.s.) with PH < ∞,

• (A4) : There exists V, a neighbourhood of ak, such that for all ε > 0, there exists η > 0 such that for all c ∈ V and a ∈ Θ verifying ‖a − ak‖ ≥ ε, we have PM(c, ak) < PM(c, a) − η.

Putting $I a k = ∂ 2 ∂ a 2 D ϕ ( g f a k g a k , f )$, let us now consider four new hypotheses:

• (A5) : $P ‖ ∂ ∂ b M ( a k , a k ) ‖ 2$ and $P ‖ ∂ ∂ a M ( a k , a k ) ‖ 2$ are finite and the expressions $P ∂ 2 ∂ b i ∂ b j M ( a k , a k )$ and Iak exist and are invertible.

• (A6) : There exists k such that PM(ak, ak) = 0.

• (A7) : (VarP(M(ak, ak)))1/2 exists and is invertible.

• (A0) : f and g are assumed to be positive and bounded and such that K(g, f) ≥ ∫ |f(x) − g(x)|dx where K is the Kullback-Leibler divergence.

#### Estimation of the First Co-Vector of f

Let $ℛ$ be the class of all positive functions r defined on ℝ and such that g(x)r(a⊤x) is a density on ℝd for all a belonging to $ℝ ∗ d$. The following proposition shows that there exists a vector a such that $f a g a$ minimizes Dϕ(gr, f) in r:

#### Proposition 4.1

There exists a vector a belonging to $ℝ ∗ d$ such that

$arg min r ∈ ℛ D ϕ ( g r , f ) = f a g a and r ( a ⊤ x ) = f a ( a ⊤ x ) g a ( a ⊤ x )$

Following Broniatowski , let us introduce the estimate of $D ϕ ( g f a , n g a , f n )$, through $D ˇ ϕ ( g f a , n g a , f n ) = sup b ∈ Θ ∫ M ( b , a , x ) d ℙ n ( x )$.

#### Proposition 4.2

Let ǎ be such that $a ˇ : = arg inf a ∈ ℝ ∗ d D ˇ ϕ ( g f a , n g a , f n )$.

Then, ǎ is a strongly convergent estimate of a, as defined in proposition 4.1.

Let us also introduce the following sequences (ǎk)k≥1 and $( g ˇ n ( k ) ) k ≥ 1$, for any given n—see Section 3.2—such that

ǎk is an estimate of ak as defined in proposition 4.2 with $g ˇ n ( k − 1 )$ instead of g, $g ˇ n ( k )$ is defined by $g ˇ n ( 0 ) = g$, $g ˇ n ( k ) ( x ) = g ˇ n ( k − 1 ) ( x ) f a ˇ k , n ( a ˇ k ⊤ x ) [ g ˇ ( k − 1 ) ] a ˇ k , n ( a ˇ k ⊤ x )$, i.e., $g ˇ n ( k ) ( x ) = g ( x ) Π j = 1 k f a ˇ j , n ( a ˇ j ⊤ x ) [ g ˇ ( j − 1 ) ] a ˇ j , n ( a ˇ j ⊤ x )$.

We also note that $g ˇ n ( k )$ is a density.

Convergence Study at the kth Step of the Algorithm:

In this paragraph, we show that the sequence (ǎk)n converges towards ak and that the sequence $( g ˇ n ( k ) ) n$ converges towards g(k).

Let čn(a) = arg supc∈Θ ℙnM(c, a), with a ∈ Θ, and γ̌n = arg infa∈Θ supc∈Θ ℙnM(c, a). We state

#### Proposition 4.3

supa∈Θ‖čn(a) − ak‖ converges toward 0 and γ̌n converges toward ak, a.s.

Finally, the following theorem shows that $g ˇ n ( k )$ converges almost everywhere towards g(k):

#### Theorem 4.1

It holds $g ˇ n ( k ) → n g ( k )$ a.s.

#### Testing of the Criteria

In this paragraph, through a test of our criterion, namely $a ↦ D ϕ ( g ˇ n ( k ) f a , n [ g ˇ ( k ) ] a , n , f n )$, we build a stopping rule for this procedure. First, the next theorem enables us to derive the law of our criterion:

#### Theorem 4.2

For a fixed k, we have $\sqrt{n}\,(\operatorname{Var}_P(M(\check{c}_n(\check{\gamma}_n), \check{\gamma}_n)))^{-1/2}\,(\mathbb{P}_n M(\check{c}_n(\check{\gamma}_n), \check{\gamma}_n) - \mathbb{P}_n M(a_k, a_k)) \xrightarrow{\text{Law}} \mathcal{N}(0, I)$, where k represents the kth step of our algorithm and where I is the identity matrix in ℝd.

Note that k is fixed in theorem 4.2 since γ̌n = arg infa∈Θ supc∈Θ ℙnM(c, a), where M is a known function of k—see Section 4.1. Thus, in the case when $D ϕ ( g ( k − 1 ) f a k g a k ( k − 1 ) , f ) = 0$, we obtain

#### Corollary 4.1

We have $\sqrt{n}\,(\operatorname{Var}_P(M(\check{c}_n(\check{\gamma}_n), \check{\gamma}_n)))^{-1/2}\,\mathbb{P}_n M(\check{c}_n(\check{\gamma}_n), \check{\gamma}_n) \xrightarrow{\text{Law}} \mathcal{N}(0, I)$.

Hence, we propose the test of the null hypothesis

$( H 0 ) : D ϕ ( g ( k − 1 ) f a k g a k ( k − 1 ) , f ) = 0$ versus the alternative $( H 1 ) : D ϕ ( g ( k − 1 ) f a k g a k ( k − 1 ) , f ) ≠ 0$.

Based on this result, we stop the algorithm; then, defining ak as the last vector generated, we derive from corollary 4.1 an α-level confidence ellipsoid around ak, namely $ℰ k = { b ∈ ℝ d ; √n ( Var P ( M ( b , b ) ) ) − 1 / 2 ℙ n M ( b , b ) ≤ q α N ( 0 , 1 ) }$, where $q α N ( 0 , 1 )$ is the α-quantile of the standard normal distribution and where ℙn is the empirical measure arising from a realization of the sequences (X1, …, Xn) and (Y1, …, Yn).
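The membership check behind this stopping rule is a one-line comparison against the normal quantile (a sketch with scalar placeholders: the values of ℙnM(b, b) and its variance below are hypothetical, and scipy supplies the quantile):

```python
# Decide whether a candidate b lies in the confidence ellipsoid
#   E_k = { b : sqrt(n) * (Var_P M(b,b))^{-1/2} * P_n M(b,b) <= q_alpha }.
from math import sqrt
from scipy.stats import norm

def in_confidence_region(pn_M, var_M, n, alpha=0.90):
    stat = sqrt(n) * pn_M / sqrt(var_M)
    return bool(stat <= norm.ppf(alpha))

# Under (H0) the statistic is approximately standard normal, so a small
# empirical criterion keeps b inside the region.
print(in_confidence_region(pn_M=0.01, var_M=1.0, n=50))  # True  (0.0707 <= 1.2816)
print(in_confidence_region(pn_M=0.50, var_M=1.0, n=50))  # False (3.5355 >  1.2816)
```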

Consequently, the following corollary provides us with a confidence region for the above test:

#### Corollary 4.2

$ℰ$k is a confidence region for the test of the null hypothesis (H0) versus (H1).

## 5. Goodness-of-Fit Tests

#### 5.1. The Basic Idea

Let f be a density defined on ℝ2. Let us also consider g, a known elliptical density with the same mean and variance as f. Let us also assume that the family (ai) is the canonical basis of ℝ2 and that Dϕ(g(2), f) = 0.

Hence, since lemma C.1 page 110 implies that $g a j ( j − 1 ) = g a j$ if j ≤ d, we then have $g ( 2 ) ( x ) = g ( x ) f 1 g 1 f 2 g 2 ( 1 ) = g ( x ) f 1 g 1 f 2 g 2$. Moreover, we get g(2) = f, as derived from property B.1 page 110.

Consequently, $f = g ( x ) f 1 g 1 f 2 g 2$, i.e., $f f 1 f 2 = g g 1 g 2$, and then $∂ 2 ∂ x ∂ y C f = ∂ 2 ∂ x ∂ y C g$ where Cf (resp. Cg) is the copula of f (resp. g).

More generally, if f is defined on ℝd, then the family (ai) is once again free (see lemma C.5), i.e., the family (ai) is once again a basis of ℝd. The relationship Dϕ(g(d), f) = 0 therefore implies that g(d) = f, i.e., for any x ∈ ℝd, $f ( x ) = g ( d ) ( x ) = g ( x ) Π k = 1 d f a k ( a k ⊤ x ) [ g ( k − 1 ) ] a k ( a k ⊤ x ) = g ( x ) Π k = 1 d f a k ( a k ⊤ x ) g a k ( a k ⊤ x )$ since lemma C.1 page 110 implies that $g a k ( k − 1 ) = g a k$ if k ≤ d. In other words, for any x ∈ ℝd, it holds

$g ( x ) Π k = 1 d g a k ( a k ⊤ x ) = f ( x ) Π k = 1 d f a k ( a k ⊤ x )$

Finally, putting A = (a1, …, ad) and defining the vector y (resp. the density f̃, the copula C̃f of f̃, the density g̃, the copula C̃g of g̃) as the expression of the vector x (resp. the density f, the copula Cf of f, the density g, the copula Cg of g) in the basis A, the following proposition shows that the density associated with the copula of f equals the density associated with the copula of g in the basis A:

#### Proposition 5.1

With the above notations, should a sequence (ai)i=1,…,d of non-null vectors in $ℝ ∗ d$ defining g(d) and such that Dϕ(g(d), f) = 0 exist, then $∂ d ∂ y 1 … ∂ y d C ∼ f = ∂ d ∂ y 1 … ∂ y d C ∼ g$.

#### 5.2. With the Elliptical Copula

Let f be an unknown density defined on ℝd. The objective of the present section is to determine whether the copula of f is elliptical. We thus define an instrumental elliptical density g with the same mean and variance as f, and we follow the procedure of Section 3.2. As explained in Section 5.1, we infer from proposition 5.1 that the copula of f equals the copula of g when Dϕ(g(d), f) = 0, i.e., when ad is the last vector generated from the algorithm and when (ai) is the canonical basis of ℝd. Thus, in order to verify this assertion, corollary 4.1 page 96 provides us with an α-level confidence ellipsoid around this vector, namely

$ℰ d = { b ∈ ℝ d ; √n ( Var P ( M ( b , b ) ) ) − 1 / 2 ℙ n M ( b , b ) ≤ q α N ( 0 , 1 ) }$
where $q α N ( 0 , 1 )$ is the α-quantile of the standard normal distribution, where ℙn is the empirical measure arising from a realization of the sequences (X1, …, Xn) and (Y1, …, Yn)—see Appendix D—and where M is a known function of d, fn and $g n ( d − 1 )$—see Section 4.1.

Consequently, keeping the notations introduced in Section 5.1, we perform a statistical test of the null hypothesis

$( H 0 ) : ∂ d ∂ x 1 … ∂ x d C f = ∂ d ∂ x 1 … ∂ x d C g versus ( H 1 ) : ∂ d ∂ x 1 … ∂ x d C f ≠ ∂ d ∂ x 1 … ∂ x d C g$

Since, under (H0), we have Dϕ(g(d), f) = 0, then the following theorem provides us with a confidence region for this test.

#### Theorem 5.1

The set $ℰ$d is a confidence region for the test of the null hypothesis (H0) versus the alternative (H1).

#### Remark 5.1

1/If Dϕ(g(k), f) = 0, for k < d, then we reiterate the algorithm until g(d) is created in order to obtain a relationship for the copula of f.

2/If the ai do not constitute the canonical basis, then keeping the notations introduced in Section 5.1, our algorithm meets the test:

$( H 0 ) : ∂ d ∂ y 1 … ∂ y d C ∼ f = ∂ d ∂ y 1 … ∂ y d C ∼ g versus ( H 1 ) : ∂ d ∂ y 1 … ∂ y d C ∼ f ≠ ∂ d ∂ y 1 … ∂ y d C ∼ g$
Thus, our method makes it possible to determine whether the copula of f equals the copula of g in the (a1, …, ad) basis.

#### 5.3. With the Independent Copulas

Let f be a density on ℝd and let X be a random vector with f as density. The objective of this section is to determine whether f is the product of its margins, i.e., whether the copula of f is the independent copula. Let g be an instrumental product of univariate Gaussian densities—with diag(Var(X1), …, Var(Xd)) as covariance matrix and with the same mean as f. As explained in Section 5.2, we follow the procedure described in Section 3.2, i.e., proposition 5.1 implies that the copula of f is the independent copula when Dϕ(g(d), f) = 0. We then perform a statistical test of the null hypothesis:

$( H 0 ) : f = Π i = 1 d f i versus the alternative ( H 1 ) : f ≠ Π i = 1 d f i$

Since, under (H0), we have Dϕ(g(d), f) = 0, the following theorem provides us with a confidence region for our test.

#### Theorem 5.2

Keeping the notations of Section 5.2, the set $ℰ$d is a confidence region for the test of the null hypothesis (H0) versus the alternative (H1).

#### Remark 5.2

(1) As explained in Section 5.2, if Dϕ(g(k), f) = 0, for k < d, we reiterate the algorithm until g(d) is created in order to derive a relationship for the copula of f.

(2) If the ai do not constitute the canonical basis, then keeping the notations of Section 5.1, our algorithm meets the test:

$( H 0 ) : f = Π i = 1 d f a i versus the alternative ( H 1 ) : f ≠ Π i = 1 d f a i$

Thus, our method enables us to determine whether the copula of f is the independent copula in the (a1, …, ad) basis.
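To illustrate the (H0) of this section, the joint kernel estimate can be compared with the product of the marginal kernel estimates through a χ²-type index (a sketch of the idea only: the data and the use of scipy's `gaussian_kde` are our assumptions, and the estimation of the vectors ai from Section 3.2 is omitted):

```python
# Empirical check that the copula of f is the independent copula:
# compare a joint kernel estimate of f with the product of its marginal
# kernel estimates at the sample points.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
n = 300

# Independent components, so the copula of f is the independent copula.
X = np.column_stack([rng.normal(size=n), rng.exponential(size=n)])

f_n = gaussian_kde(X.T)                                    # joint estimate
f1_n, f2_n = gaussian_kde(X[:, 0]), gaussian_kde(X[:, 1])  # marginal estimates

# chi-square index between the product of the margins and f_n.
ratio = f1_n(X[:, 0]) * f2_n(X[:, 1]) / f_n(X.T)
index = np.mean(0.5 * (ratio - 1.0) ** 2)
print(index)  # typically close to 0 for independent components
```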

#### 5.4. Study of the Subsequence (g(k′)) Defined by Dϕ(g(k′), f) = 0 for Any k′

Let Q be the set of non-negative integers defined by $Q = { k i ′ ; k 1 ′ = 1 , k q ′ = d , k i ′ < k i + 1 ′ }$, where q, with q ≤ d, is its cardinal. In the present section, our goal is to study the subsequence (g(k′)) of the sequence (g(k))k=1,…,d defined by Dϕ(g(k′), f) = 0 for any k′ belonging to Q.

First, we have:

• Dϕ(g(d), f) = 0 ⇔ g(d) = f, through property B.1

• $⇔ g ( x ) Π k = 1 d g a k ( a k ⊤ x ) = f ( x ) Π k = 1 d f a k ( a k ⊤ x )$, as explained in Section 5.2

• $⇔ g ∼ ( y ) Π k = 1 d g ∼ k ( y k ) = f ∼ ( y ) Π k = 1 d f ∼ k ( y k )$, which amounts to the previous relationship written in the A = (a1, …, ad) basis with the notations introduced in Section 5.2.

Moreover, defining $k ∼ i ′$ as the integer preceding $k i ′$ in {1, …, d}, with i > 1, and as explained in Section 3.1, the relationship Dϕ(g(k′), f) = 0 implies that

$f ∼ ( y i , … , y k ∼ i + 1 ′ / y i , … , y k ∼ i ′ , y k ∼ i + 1 ′ , … , y d ) = f ∼ i , i + 1 ( y i , … , y k ∼ i + 1 ′ )$
where $f ∼ i , i + 1$ is the density of the vector $( a i ⊤ X , … , a k ∼ i + 1 ′ ⊤ X )$ in the A = (a1,…,ad) basis. Consequently, $f ∼ ( y ) = f ∼ 1 , 2 ( y 1 , … y k ∼ 2 ′ ) ⋅ f ∼ 2 , 3 ( y k 2 ′ , … , y k ∼ 3 ′ ) … f ∼ q − 1 , d ( y k q − 1 ′ , … , y k ∼ d ′ )$.

Hence, we can infer that

$f ∼ ( y ) Π k = 1 d f ∼ k ( y k ) = f ∼ 1 , 2 ( y i , … , y k ∼ 2 ′ ) Π k = 1 k ∼ 2 ′ f ∼ k ( y k ) . f ∼ 2 , 3 ( y k ∼ 2 ′ , … , y k ∼ 3 ′ ) Π k = k 2 ′ k ∼ 3 ′ f ∼ k ( y k ) … f ∼ q − 1 , d ( y k ∼ q − 1 ′ , … , y k ∼ d ′ ) Π k = k ∼ q − 1 ′ d f ∼ k ( y k )$

The following theorem explicitly describes the form of the copula of f in the A = (a1, …, ad) basis:

#### Theorem 5.3

Defining $C ∼ f i , j$ as the copula of $f ∼ i , j$ and keeping the notations introduced in Sections 5.1 and 5.4, it holds

$∂ d ∂ y 1 … ∂ y d C ∼ f = ∂ k ∼ 2 ′ ∂ y 1 … ∂ y k ∼ 2 ′ C ∼ f 1 , 2 . ∂ k ∼ 3 ′ − k 2 ′ + 1 ∂ y k 2 ′ … ∂ y k ∼ 3 ′ C ∼ f 2 , 3 … ∂ d − k q − 1 ′ + 1 ∂ y k q − 1 ′ … ∂ y d C ∼ f q − 1 , d$

#### Remark 5.3

If there exists i such that i < d and $k i ′ = k ∼ i + 1 ′$, then the notation $f ∼ i , i + 1 ( y k i ′ , … , y k ∼ i + 1 ′ )$ means $f ∼ k i ′ ( y k i ′ )$. Thus, if, for any k, we have Dϕ(g(k), f) = 0, then, for any i < d, we have $k i ′ = k ∼ i + 1 ′$, i.e., we have $f ∼ = Π k = 1 d f ∼ k ( y k )$, where $f ∼ k$ is the kth marginal density of $f ∼$.

At present, using relationship 5.2 and remark 5.3, the following corollary shows that the copula density of f equals 1 in the {a1, …, ad} basis when, for any k, Dϕ(g(k), f) = 0:

#### Corollary 5.1

In the case where, for any k, Dϕ(g(k), f) = 0, it holds:

$∂ d ∂ y 1 … ∂ y d C ∼ f = 1$

## 6. Simulations

Let us examine three simulations and an application to real datasets. The first simulation studies the elliptical copula and the second the independent copula. In each simulation, our program aims at creating a sequence of densities (g(j)), j = 1, …, d, such that g(0) = g, g(j) = g(j−1)faj/[g(j−1)]aj and Dϕ(g(d), f) = 0, where Dϕ is a divergence—see Appendix B for its definition—and $a j = arg inf b D ϕ ( g ( j − 1 ) f b / g b ( j − 1 ) , f )$, for all j = 1, …, d. We then perform the tests introduced in theorems 5.1 and 5.2. Finally, the third simulation compares the optimisations obtained when we execute the process with, each time, a new ϕ-divergence.

#### Simulation 6.1

We work in dimension d = 2 and use the χ2 divergence to perform our optimisations. Let us consider a sample of n = 50 values of a random variable X with a density f defined by:

$f(x) = c_\rho(F_{\text{Gumbel}}(x_1), F_{\text{Exponential}}(x_2))\cdot \text{Gumbel}(x_1)\cdot \text{Exponential}(x_2)$
where $c_\rho$ is the Gaussian copula density with correlation coefficient ρ = 0.5, where the Gumbel distribution parameters are −1 and 1, and where the exponential density parameter is 2.

Let us then generate a Gaussian random variable Y with a density—which we will call g—having the same mean and variance as f.
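As an illustration, the sampling scheme of this simulation can be sketched as follows; this is not the paper's code, and the scipy parameterisations (Gumbel with loc = −1 and scale = 1, exponential with rate 2, i.e. scale = 1/2) are our reading of the stated parameters.

```python
import numpy as np
from scipy import stats

# A sketch of the data generation in Simulation 6.1: sample from
# f(x) = c_rho(F_Gumbel(x1), F_Exp(x2)) * Gumbel(x1) * Exp(x2)
# through the Gaussian copula with rho = 0.5.
rng = np.random.default_rng(0)
n, rho = 50, 0.5

# Step 1: bivariate normal with correlation rho, mapped to coupled uniforms.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)                 # columns are U(0,1), coupled by c_rho

# Step 2: inverse-cdf transform to the target margins.
x1 = stats.gumbel_r.ppf(u[:, 0], loc=-1.0, scale=1.0)
x2 = stats.expon.ppf(u[:, 1], scale=0.5)
X = np.column_stack([x1, x2])         # sample from f

# Step 3: instrumental Gaussian g with the same mean and covariance as f,
# here simply estimated from the sample.
Y = rng.multivariate_normal(X.mean(axis=0), np.cov(X.T), size=n)
```

The instrumental sample Y plays the role of g in the optimisation that follows.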

We theoretically obtain k = 2 and (a1, a2) = ((1, 0), (0, 1)).

To get this result, we perform the following test:

$(H_0): (a_1, a_2) = ((1,0),(0,1)) \quad\text{versus}\quad (H_1): (a_1, a_2) \neq ((1,0),(0,1))$

Then, theorem 5.1 enables us to verify (H0) through the following confidence ellipsoid at level α = 0.9

$\mathcal{E}_2 = \{b \in \mathbb{R}^2;\ (\mathrm{Var}_P(M(b,b)))^{-1/2}\,\mathbb{P}_n M(b,b) \le q_\alpha^{N(0,1)}/\sqrt{n} \simeq 0.2533/7.0710 = 0.03582\}$
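As a quick arithmetic check of the bound above (the quantile value 0.2533 and n = 50 are taken from the text):

```python
import math

# Numerical threshold of the confidence ellipsoid: q_alpha^{N(0,1)} / sqrt(n),
# with the quoted quantile value q = 0.2533 and n = 50.
q, n = 0.2533, 50
threshold = q / math.sqrt(n)
assert abs(threshold - 0.03582) < 1e-5
```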

Results of this optimisation can be found in Table 3 and Figure 1.

Therefore, we can conclude that H0 is verified.

#### Simulation 6.2

We work in dimension d = 2 and use the χ2 divergence to perform our optimisations.

Let us consider a sample of n = 50 values of a random variable X with a density f defined by

$f ( x ) = Gumbel ( x 1 ) . Exponential ( x 2 )$
where the Gumbel distribution parameters are −1 and 1 and the exponential density parameter is 2.

Let g be an instrumental product of univariate Gaussian densities with covariance matrix diag(Var(X1), …, Var(Xd)) and with the same mean as f.

We theoretically obtain k = 2 and (a1, a2) = ((1, 0), (0, 1)). To get this result, we perform the following test:

$(H_0): (a_1, a_2) = ((1,0),(0,1)) \quad\text{versus}\quad (H_1): (a_1, a_2) \neq ((1,0),(0,1))$

Then, theorem 5.2 enables us to verify (H0) through the following confidence ellipsoid at level α = 0.9

$\mathcal{E}_2 = \{b \in \mathbb{R}^2;\ (\mathrm{Var}_P(M(b,b)))^{-1/2}\,\mathbb{P}_n M(b,b) \le q_\alpha^{N(0,1)}/\sqrt{n} \simeq 0.03582203\}$

Results of this optimisation can be found in Table 4 and Figure 2.

Therefore, we can conclude that $f = \prod_{i=1}^{d} f_i$.

#### Simulation 6.3

(On the choice of a ϕ-divergence). In this paragraph, we run our algorithm several times: first with several ϕ-divergences (see Appendix B for their definitions and notations), then with a sensitivity analysis varying the number n of simulated variables, and finally with outliers introduced.

At present, we consider a sample of n values of a random variable X with a density f defined by f(x) = Laplace(x1) · Gumbel(x2),

where the Gumbel distribution parameters are (1, 2) and the Laplace distribution parameters are 4 and 3. In theory, we get a1 = (0, 1) and a2 = (1, 0). Then, following the procedure of the first simulation, we get

| n = 50 | Outliers = 0 | Time | Outliers = 2 | Time |
|---|---|---|---|---|
| Relative Entropy | (0.10, 0.83) (1.13, 0.11) | 30 mn | (0.1, 0.8) (0.80, 0.024) | 43 mn |
| χ2-divergence | (0, 0.8) (1.021, 0.09) | 22 mn | (0.12, 0.79) (0.867, −0.104) | 31 mn |
| Hellinger distance | (0.1, 0.9) (0.91, 0.15) | 35 mn | (0.1, 0.85) (0.81, 0.14) | 46 mn |

| n = 100 | Outliers = 0 | Time | Outliers = 5 | Time |
|---|---|---|---|---|
| Relative Entropy | (0.09, 0.89) (1.102, 0.089) | 50 mn | (0.1, 0.88) (1.15, 0.144) | 60 mn |
| χ2-divergence | (0, 0.9) (0.97, −0.1) | 43 mn | (−0.1, 0.9) (0.87, 0.201) | 52 mn |
| Hellinger distance | (0.1, 0.91) (0.93, −0.11) | 57 mn | (−0.05, 1.1) (0.79, 0.122) | 62 mn |

| n = 500 | Outliers = 0 | Time | Outliers = 25 | Time |
|---|---|---|---|---|
| Relative Entropy | (0, 1.07) (1.1, −0.05) | 107 mn | (0.13, 0.75) (0.79, 0.122) | 121 mn |
| χ2-divergence | (0, 0.95) (1.12, −0.02) | 91 mn | (0.15, 0.814) (0.922, 0.147) | 103 mn |
| Hellinger distance | (−0.01, 0.95) (1.01, −0.073) | 100 mn | (−0.17, 1.3) (0.973, 0.206) | 126 mn |

#### Remark 6.1

• We have worked with a computer presenting the following characteristics:

- Processor: Mobile AMD 3000+,

- Memory (RAM): 512 MB DDR,

- Operating system: Windows XP.

• Our method, which uses the χ2 as ϕ-divergence, is faster than, and performs at least as well as, the methods based on the other divergences.

This results from the fact that the projection index (or criterion) of the χ2 is a second-degree polynomial: it is consequently easier and faster to evaluate. Moreover, these simulations illustrate the robustness of our method.
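For reference, the three φ functions being compared are defined in Appendix B; the short sketch below spells them out and illustrates the point above, namely that the χ2 choice is a plain second-degree polynomial:

```python
import numpy as np

# The phi functions behind the three divergences compared above (Appendix B).
# chi^2 is a quadratic in x, which is why its projection index is the
# cheapest of the three to evaluate and to minimise.
def phi_kl(x):         # Kullback-Leibler (relative entropy)
    return x * np.log(x) - x + 1

def phi_hellinger(x):  # Hellinger distance
    return 2 * (np.sqrt(x) - 1) ** 2

def phi_chi2(x):       # chi-square: a second-degree polynomial
    return 0.5 * (x - 1) ** 2

grid = np.linspace(0.1, 3.0, 30)
for phi in (phi_kl, phi_hellinger, phi_chi2):
    assert abs(phi(1.0)) < 1e-12    # phi(1) = 0 for every divergence
    assert (phi(grid) >= 0).all()   # convex, non-negative, minimum at 1
```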

#### 6.1. Application to Real Datasets

Let us for instance study the moves in the stock prices of Renault and Peugeot from January 4, 2010 to July 25, 2010. We thus gather n = 140 data points from these stock prices; see Table 7 and Table 8 below.

Let us also consider X1 (resp. X2), the random variable defining the stock price of Renault (resp. Peugeot). We will assume—as is commonly done in mathematical finance—that the stock market abides by the classical hypotheses of the Black-Scholes model—see Black and Scholes.

Consequently, X1 and X2 each follow a log-normal distribution.

Let f be the density of the vector (ln(X1), ln(X2)); let us now apply our algorithm to f with the Kullback-Leibler divergence as ϕ-divergence. Let us then generate a Gaussian random variable Y with a density—which we will call g—having the same mean and variance as f.

We first assume that there exists a vector a such that $D_\phi(g\frac{f_a}{g_a}, f) = 0$.

In order to verify this hypothesis, our reasoning will be the same as in Simulation 6.1. Indeed, we assume that this vector is a co-factor of f. Consequently, corollary 4.2 enables us to estimate a through the following confidence ellipsoid at level α = 0.9

$\mathcal{E}_1 = \{b \in \mathbb{R}^2;\ (\mathrm{Var}_P(M(b,b)))^{-1/2}\,\mathbb{P}_n M(b,b) \le q_\alpha^{N(0,1)}/\sqrt{n} \simeq 0.2533/\sqrt{140} = 0.02140776\}$.

Numerical results of the first projection are summarized in Table 5.

Therefore, our first hypothesis is confirmed.

However, our goal is to study the copula of (ln(X1), ln(X2)). Then, as explained in Section 5.4, we formulate another hypothesis assuming that there exists a vector a such that $D_\phi(g^{(1)}\frac{f_a}{g_a^{(1)}}, f) = 0$.

In order to verify this hypothesis, we use the same reasoning as above. Indeed, we assume that this vector is a co-factor of f. Consequently, corollary 4.2 enables us to estimate a through the following confidence ellipsoid at level α = 0.9: $\mathcal{E}_2 = \{b \in \mathbb{R}^2;\ (\mathrm{Var}_P(M(b,b)))^{-1/2}\,\mathbb{P}_n M(b,b) \le q_\alpha^{N(0,1)}/\sqrt{n} \simeq 0.2533/\sqrt{140} = 0.02140776\}$. Numerical results of the second projection are summarized in Table 6.

Therefore, our second hypothesis is confirmed.

In conclusion, as explained in corollary 5.1, the copula density of f is equal to 1 in the {a1, a2} basis.

This result is illustrated in Figures 3, 4 and 5.

#### 6.2. Critique of the Simulations

In the case where f is unknown, we can never be sure to have reached the minimum of the ϕ-divergence: the simulated annealing method has been used to solve our optimisation problem, and it is only as the number of random jumps tends to infinity that the probability of reaching the minimum tends to 1. We also note that no theory exists on the optimal number of jumps to implement, as this number depends on the specificities of each particular problem.
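A minimal, self-contained sketch of the simulated annealing scheme discussed above, run on a deliberately simple one-dimensional toy criterion (a stand-in for $a \mapsto D_\phi(g\frac{f_a}{g_a}, f)$, not the paper's actual optimisation code):

```python
import math
import random

# Simulated annealing with random jumps over directions a = (cos t, sin t),
# scored by a toy criterion whose true minimum is at t = 0 (mod pi).
def criterion(t):
    return math.sin(t) ** 2

random.seed(1)
t = best = 1.3                        # arbitrary starting direction
temp = 1.0
for _ in range(2000):                 # finitely many random jumps
    cand = t + random.gauss(0.0, 0.3)
    delta = criterion(cand) - criterion(t)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        t = cand                      # accept the jump (possibly uphill)
        if criterion(t) < criterion(best):
            best = t
    temp *= 0.995                     # cooling schedule

# With a finite number of jumps the global minimum is only reached with
# probability < 1, as noted above; the best visited point can only improve
# on the starting one.
assert criterion(best) <= criterion(1.3)
```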

Moreover, we choose $50^{-4/(4+d)}$ for the AMISE of the two simulations. This choice leads us to simulate 50 random variables—see Scott, page 151—none of which have been discarded to obtain the truncated sample.

This has also been the case in our application to real datasets.

Finally, the shape of the copula in the case of real datasets in the {a1, a2} basis is also noteworthy.

Figure 4 shows that the curve reaches a quite wide plateau around 1, whereas Figure 5 shows that this plateau prevails on almost the entire [0, 1]2 set. We can therefore conclude that the theoretical analysis is indeed confirmed by the above simulation.

#### 6.3. Conclusions

Projection pursuit is useful in evidencing characteristic structures as well as one-dimensional projections and their associated distributions in multivariate data. This article demonstrates the efficiency of the ϕ-projection pursuit methodology for goodness-of-fit tests for copulas. Indeed, the robustness and convergence results that we achieved convincingly fulfilled our expectations regarding the methodology used.

Figure 1. Graph of the estimate of (x1, x2) ↦ cρ(FGumbel(x1), FExponential(x2)).
Figure 2. Graph of the independent copula estimate.
Figure 3. Graph of the copula of (ln(X1), ln(X2)) in the canonical basis.
Figure 4. Graph of the copula of (ln(X1), ln(X2)) in the {a1, a2} basis.
Figure 5. Graph of the copula of (ln(X1), ln(X2)) in the {a1, a2} basis—other view.
Table 1. Proposal.
0. We define g, a density with same mean and variance as f, and we set g(0) = g.

i − 1. We perform the goodness-of-fit test Dϕ(g(i−1), f) = 0:

• Should this test be passed, we derive f from $f = g\prod_{i=1}^{j}\frac{f_{a_i}}{g_{a_i}^{(i-1)}}$ and the algorithm stops.

• Should this test not be verified, and should we look to approximate f when we get to the dth iteration of the algorithm, we derive f from $f = g\prod_{i=1}^{d}\frac{f_{a_i}}{g_{a_i}^{(i-1)}}$. Otherwise, let us define a vector ai and a density g(i) by $a_i = \arg\inf_{a\in\mathbb{R}_*^d} D_\phi(g^{(i-1)}\frac{f_a}{g_a^{(i-1)}}, f)$ and $g^{(i)} = g^{(i-1)}\frac{f_{a_i}}{g_{a_i}^{(i-1)}}$.

i. Then we replace g(i−1) with g(i) and go back to i − 1.
Table 2. Stochastic outline of the algorithm.
0. We define g, a density with same mean and variance as f, and we set g(0) = g.

i − 1. Given $\check g_n^{(i-1)}$, find $\check a_i$ such that the index is minimized, where $f_{a,n}$ is a marginal density estimate based on a⊤X1, a⊤X2, …, a⊤Xn, and where $\check g_{a,n}^{(i-1)}$ is a density estimate based on the projection onto a of a Monte Carlo random sample from $\check g_n^{(i-1)}$. And we set $\check g_n^{(i)} = \check g_n^{(i-1)}\frac{f_{\check a_i,n}}{\check g_{\check a_i,n}^{(i-1)}}$.

i. Then we replace $\check g_n^{(i-1)}$ with $\check g_n^{(i)}$ and go back to i − 1 until the criterion reaches the stopping rule of this procedure (see below).
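The direction search in step i − 1 can be sketched with off-the-shelf kernel estimators; everything below (sample sizes, the angular grid, scipy's default bandwidths) is an illustrative assumption rather than the paper's exact procedure.

```python
import numpy as np
from scipy import stats

# Kernel-based sketch of the direction search, with the chi-square divergence.
rng = np.random.default_rng(0)
X = np.column_stack([rng.gumbel(-1, 1, 300), rng.exponential(0.5, 300)])  # ~ f
Y = rng.multivariate_normal(X.mean(0), np.cov(X.T), 300)                  # ~ g

f_hat = stats.gaussian_kde(X.T)       # kernel estimate of f
g_hat = stats.gaussian_kde(Y.T)       # kernel estimate of g

def chi2_index(a):
    """Empirical D_chi2(g f_a/g_a, f) = E_f[phi(ratio)], phi(x) = (x-1)^2/2."""
    fa = stats.gaussian_kde(X @ a)    # marginal estimate f_{a,n}
    ga = stats.gaussian_kde(Y @ a)    # marginal estimate g_{a,n}
    ratio = g_hat(X.T) * fa(X @ a) / (ga(X @ a) * f_hat(X.T))
    return 0.5 * np.mean((ratio - 1.0) ** 2)

# Scan candidate unit directions and keep the minimiser of the index.
angles = np.linspace(0.0, np.pi, 25, endpoint=False)
scores = [chi2_index(np.array([np.cos(t), np.sin(t)])) for t in angles]
best = angles[int(np.argmin(scores))]
a1 = np.array([np.cos(best), np.sin(best)])   # candidate first direction
```

In the paper the minimisation is carried out by simulated annealing rather than a fixed grid; the grid is used here only to keep the sketch short.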
Table 3. Simulation 1: Numerical results of the optimisation.
| Our Algorithm | |
|---|---|
| Projection study 0: | minimum: 0.445199 |
| | at point: (1.0171, 0.0055) |
| | P-value: 0.94579 |
| Test: | H1 : a1 ∉ $ℰ$1 : True |
| Projection study 1: | minimum: 0.009628 |
| | at point: (0.0048, 0.9197) |
| | P-value: 0.99801 |
| Test: | H0 : a2 ∈ $ℰ$2 : True |
| χ2(kernel estimate of g(2), g(2)) | 3.57809 |
Table 4. Simulation 2: Numerical results of the optimisation.
| Our Algorithm | |
|---|---|
| Projection study 0: | minimum: 0.057833 |
| | at point: (0.9890, 0.1009) |
| | P-value: 0.955651 |
| Test: | H1 : a1 ∉ $ℰ$1 : True |
| Projection study 1: | minimum: 0.02611 |
| | at point: (−0.1105, 0.9290) |
| | P-value: 0.921101 |
| Test: | H0 : a2 ∈ $ℰ$2 : True |
| χ2(kernel estimate of g(2), g(2)) | 1.25945 |
Table 5. Numerical results: First projection.
| Our Algorithm | |
|---|---|
| Projection study 0: | minimum: 0.02087685 |
| | at point: a1 = (19.1, −12.3) |
| | P-value: 0.748765 |
| Test: | H0 : a1 ∈ $ℰ$1 : True |
| K(kernel estimate of g(1), g(1)) | 4.3428735 |
Table 6. Numerical results: Second projection.
| Our Algorithm | |
|---|---|
| Projection study 1: | minimum: 0.0198753 |
| | at point: a2 = (8.1, 3.9) |
| | P-value: 0.8743401 |
| Test: | H0 : a2 ∈ $ℰ$2 : True |
| K(kernel estimate of g(2), g(2)) | 4.38475324 |
Table 7. Stock prices of Renault and Peugeot.
| Date | Renault | Peugeot | Date | Renault | Peugeot | Date | Renault | Peugeot |
|---|---|---|---|---|---|---|---|---|
| 23/07/10 | 34.9 | 24.2 | 22/07/10 | 34.26 | 24.01 | 21/07/10 | 33.15 | 23.3 |
| 20/07/10 | 32.69 | 22.78 | 19/07/10 | 33.24 | 23.36 | 16/07/10 | 33.92 | 23.77 |
| 15/07/10 | 34.44 | 23.71 | 14/07/10 | 35.08 | 24.36 | 13/07/10 | 35.28 | 24.37 |
| 12/07/10 | 33.84 | 23.16 | 09/07/10 | 33.46 | 23.13 | 08/07/10 | 33.08 | 22.65 |
| 07/07/10 | 32.15 | 22.19 | 06/07/10 | 31.12 | 21.56 | 05/07/10 | 30.02 | 20.81 |
| 02/07/10 | 30.17 | 20.85 | 01/07/10 | 29.56 | 20.05 | 30/06/10 | 30.78 | 21.07 |
| 29/06/10 | 30.55 | 20.97 | 28/06/10 | 32.34 | 22.3 | 25/06/10 | 31.35 | 21.68 |
| 24/06/10 | 32.29 | 22.25 | 23/06/10 | 33.58 | 22.47 | 22/06/10 | 33.84 | 22.77 |
| 21/06/10 | 34.06 | 23.25 | 18/06/10 | 32.89 | 22.7 | 17/06/10 | 32.08 | 22.31 |
| 16/06/10 | 31.87 | 21.92 | 15/06/10 | 32.03 | 22.12 | 14/06/10 | 31.45 | 22.2 |
| 11/06/10 | 30.62 | 21.42 | 10/06/10 | 30.42 | 20.93 | 09/06/10 | 29.27 | 20.34 |
| 08/06/10 | 28.48 | 19.73 | 07/06/10 | 28.92 | 20.15 | 04/06/10 | 29.19 | 20.27 |
| 03/06/10 | 30.35 | 20.46 | 02/06/10 | 29.33 | 19.53 | 01/06/10 | 28.87 | 19.45 |
| 31/05/10 | 29.39 | 19.54 | 28/05/10 | 29.16 | 19.55 | 27/05/10 | 29.18 | 19.81 |
| 26/05/10 | 27.5 | 18.5 | 25/05/10 | 26.76 | 18.08 | 24/05/10 | 28.75 | 18.81 |
| 21/05/10 | 28.78 | 18.82 | 20/05/10 | 28.53 | 18.84 | 19/05/10 | 29.49 | 19.25 |
| 18/05/10 | 30.95 | 19.76 | 17/05/10 | 30.92 | 19.35 | 14/05/10 | 31.35 | 19.34 |
| 13/05/10 | 33.65 | 20.76 | 12/05/10 | 33.63 | 20.52 | 11/05/10 | 33.38 | 20.34 |
| 10/05/10 | 33.28 | 20.3 | 07/05/10 | 31 | 19.24 | 06/05/10 | 32.4 | 20.22 |
| 05/05/10 | 32.95 | 20.45 | 04/05/10 | 33.3 | 21.03 | 03/05/10 | 35.58 | 22.63 |
| 30/04/10 | 35.41 | 22.45 | 29/04/10 | 35.53 | 22.36 | 28/04/10 | 34.75 | 22.33 |
Table 8. Stock prices of Renault and Peugeot.
| Date | Renault | Peugeot | Date | Renault | Peugeot | Date | Renault | Peugeot |
|---|---|---|---|---|---|---|---|---|
| 27/04/10 | 36.2 | 22.9 | 26/04/10 | 37.65 | 23.73 | 23/04/10 | 36.72 | 23.5 |
| 22/04/10 | 34.36 | 22.72 | 21/04/10 | 35.01 | 22.86 | 20/04/10 | 35.62 | 22.88 |
| 19/04/10 | 34.08 | 21.77 | 16/04/10 | 34.46 | 21.71 | 15/04/10 | 35.16 | 22.22 |
| 14/04/10 | 35.1 | 22.22 | 13/04/10 | 35.28 | 22.45 | 12/04/10 | 35.17 | 21.85 |
| 09/04/10 | 35.76 | 21.9 | 08/04/10 | 35.67 | 21.67 | 07/04/10 | 36.5 | 21.89 |
| 06/04/10 | 36.87 | 22 | 01/04/10 | 35.5 | 21.97 | 31/03/10 | 34.7 | 21.8 |
| 30/03/10 | 34.8 | 22.24 | 29/03/10 | 35.7 | 22.73 | 26/03/10 | 35.54 | 22.58 |
| 25/03/10 | 35.53 | 22.73 | 24/03/10 | 33.8 | 21.82 | 23/03/10 | 34.1 | 21.58 |
| 22/03/10 | 33.73 | 21.64 | 19/03/10 | 34.12 | 21.68 | 18/03/10 | 34.44 | 21.75 |
| 17/03/10 | 34.68 | 21.98 | 16/03/10 | 34.33 | 21.88 | 15/03/10 | 33.57 | 21.53 |
| 12/03/10 | 33.9 | 21.86 | 11/03/10 | 33.27 | 21.58 | 10/03/10 | 33.12 | 21.47 |
| 09/03/10 | 32.69 | 21.54 | 08/03/10 | 32.99 | 21.66 | 05/03/10 | 32.89 | 21.85 |
| 04/03/10 | 31.64 | 21.26 | 03/03/10 | 31.65 | 20.7 | 02/03/10 | 31.05 | 20.2 |
| 01/03/10 | 30.26 | 19.54 | 26/02/10 | 30.2 | 19.39 | 25/02/10 | 29.42 | 18.98 |
| 24/02/10 | 30.9 | 19.49 | 23/02/10 | 30.54 | 19.74 | 22/02/10 | 31.89 | 20.06 |
| 19/02/10 | 32.29 | 20.67 | 18/02/10 | 32.26 | 20.41 | 17/02/10 | 31.69 | 20.31 |
| 16/02/10 | 31.08 | 19.8 | 15/02/10 | 30.25 | 19.66 | 12/02/10 | 29.56 | 19.57 |
| 11/02/10 | 31 | 20.4 | 10/02/10 | 32.78 | 21.21 | 09/02/10 | 33.31 | 22.31 |
| 08/02/10 | 32.63 | 21.95 | 05/02/10 | 32.15 | 22.33 | 04/02/10 | 33.72 | 22.86 |
| 03/02/10 | 35.32 | 23.93 | 02/02/10 | 35.29 | 23.8 | 01/02/10 | 35.31 | 24.05 |
| 29/01/10 | 34.26 | 23.64 | 28/01/10 | 33.94 | 23.31 | 27/01/10 | 33.85 | 23.88 |
| 26/01/10 | 34.97 | 24.86 | 25/01/10 | 35.06 | 24.35 | 22/01/10 | 35.7 | 24.95 |
| 21/01/10 | 36.1 | 25 | 20/01/10 | 36.92 | 25.35 | 19/01/10 | 38.4 | 25.81 |
| 18/01/10 | 39.28 | 25.95 | 15/01/10 | 38.6 | 25.7 | 14/01/10 | 39.56 | 26.67 |
| 13/01/10 | 39.49 | 26.13 | 12/01/10 | 38.36 | 25.98 | 11/01/10 | 39.21 | 26.65 |
| 08/01/10 | 39.38 | 26.5 | 07/01/10 | 39.69 | 26.7 | 06/01/10 | 39.25 | 26.32 |
| 05/01/10 | 38.31 | 24.74 | 04/01/10 | 38.2 | 24.52 | | | |

## Appendix

All the proofs of this article have been gathered in the Technical Report.

## A. On the Different Families of Copula

There exist many copula families. Let us here present the most important among them.

#### The Gaussian Copula

The Gaussian copula can be used in several fields. For example, many credit models are built from this copula, which also presents the property of making extreme values (minimal or maximal) asymptotically independent; see Joe for more details. In ℝ2, for example, it is derived from the bivariate normal distribution and from Sklar's theorem. Defining Ψρ as the standard bivariate normal cumulative distribution function with correlation ρ, the Gaussian copula function is Cρ(u, v) = Ψρ(Ψ−1(u), Ψ−1(v)), where u, v ∈ [0, 1] and where Ψ is the standard normal cumulative distribution function. The copula density function is then:

$c_\rho(u,v) = \frac{\psi_{X,Y,\rho}(\Psi^{-1}(u), \Psi^{-1}(v))}{\psi(\Psi^{-1}(u))\,\psi(\Psi^{-1}(v))}$
where $\psi_{X,Y,\rho}(x,y) = \frac{1}{2\pi\sqrt{1-\rho^2}}\exp\left(-\frac{1}{2(1-\rho^2)}[x^2+y^2-2\rho xy]\right)$ is the density function of the standard bivariate Gaussian with Pearson product-moment correlation coefficient ρ, and where ψ is the standard normal density. This definition can obviously be extended to ℝd.
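The density cρ can be written out directly; the short sketch below implements the formula above and checks two of its consequences (at ρ = 0 it is identically 1, and at u = v = 0.5 it collapses to 1/√(1 − ρ2)):

```python
import math
from statistics import NormalDist

nd = NormalDist()  # standard normal: cdf Psi and its inverse

def psi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def psi2(x, y, rho):
    """Standard bivariate normal density with correlation rho."""
    c = 1.0 / (2 * math.pi * math.sqrt(1 - rho * rho))
    return c * math.exp(-(x * x + y * y - 2 * rho * x * y) / (2 * (1 - rho * rho)))

def c_rho(u, v, rho):
    """Gaussian copula density, following the displayed formula."""
    x, y = nd.inv_cdf(u), nd.inv_cdf(v)
    return psi2(x, y, rho) / (psi(x) * psi(y))

# rho = 0 gives the independent copula, with density identically 1 ...
assert abs(c_rho(0.3, 0.8, 0.0) - 1.0) < 1e-9
# ... and at (0.5, 0.5) the formula collapses to 1/sqrt(1 - rho^2).
assert abs(c_rho(0.5, 0.5, 0.5) - 1.0 / math.sqrt(0.75)) < 1e-9
```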

#### The Elliptical Copula

Let us begin by defining the class of elliptical distributions and its properties—see also Cambanis, Landsman:

#### Definition A.1

X is said to abide by a multivariate elliptical distribution, denoted XEd(μ, Σ, ξd), if X has the following density, for any x in ℝd:

$f_X(x) = \frac{\alpha_d}{|\Sigma|^{1/2}}\,\xi_d\!\left(\frac{1}{2}(x-\mu)'\Sigma^{-1}(x-\mu)\right)$
where Σ is a d × d positive-definite matrix and where μ is a d-column vector,

where ξd is referred as the “density generator”,

where αd is a normalisation constant, such that $\alpha_d = \frac{\Gamma(d/2)}{(2\pi)^{d/2}}\left(\int_0^\infty x^{d/2-1}\xi_d(x)\,dx\right)^{-1}$,

with $\int_0^\infty x^{d/2-1}\xi_d(x)\,dx < \infty$.
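As a numerical sanity check of the normalisation, take the Gaussian generator ξd(x) = e−x (the sign of the exponent is forced by the 1/2 quadratic form in the definition): the integral is then Γ(d/2), so αd reduces to (2π)−d/2 and fX becomes the usual multivariate normal density. The sketch below verifies this with a quadrature:

```python
import math
from scipy.integrate import quad

# alpha_d from Definition A.1, computed by quadrature for a given generator.
def alpha(d, xi):
    integral, _ = quad(lambda x: x ** (d / 2 - 1) * xi(x), 0.0, math.inf)
    return math.gamma(d / 2) / ((2 * math.pi) ** (d / 2) * integral)

# For xi_d(x) = exp(-x), the integral is Gamma(d/2), hence
# alpha_d = (2*pi)^(-d/2) for every dimension d.
for d in (2, 3, 4):
    assert abs(alpha(d, lambda x: math.exp(-x)) - (2 * math.pi) ** (-d / 2)) < 1e-8
```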

#### Property A.1

(1) For any X ∼ Ed(μ, Σ, ξd), for any m × d matrix A with rank m ≤ d, and for any m-dimensional vector b, we have AX + b ∼ Em(Aμ + b, AΣA′, ξm).

Therefore, any marginal density of a multivariate elliptical distribution is elliptical, i.e., $X = (X_1, X_2, \ldots, X_d) \sim E_d(\mu, \Sigma, \xi_d) \Rightarrow X_i \sim E_1(\mu_i, \sigma_i^2, \xi_1)$, 1 ≤ i ≤ d, with $f_{X_i}(x) = \frac{\alpha_1}{\sigma_i}\,\xi_1\!\left(\frac{1}{2}\left(\frac{x-\mu_i}{\sigma_i}\right)^2\right)$. (2) Corollary 5 of Cambanis states that conditional densities of elliptical distributions are also elliptical. Indeed, if X = (X1, X2)′ ∼ Ed(μ, Σ, ξd), with X1 (resp. X2) of size d1 < d (resp. d2 < d), then X1/(X2 = a) ∼ Ed1(μ′, Σ′, ξd1) with $\mu' = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(a - \mu_2)$ and $\Sigma' = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$, where μ = (μ1, μ2) and Σ = (Σij)1≤i,j≤2.

#### Remark A.1

Landsman shows that multivariate Gaussian distributions derive from ξd(x) = e−x and that if X = (X1, …, Xd) has an elliptical density such that its marginals verify E(Xi) < ∞ and $E(X_i^2) < \infty$ for 1 ≤ i ≤ d, then μ is the mean of X and Σ is a multiple of the covariance matrix of X. Consequently, from now on, we will assume this is indeed the case.

#### Definition A.2

Let t be an elliptical density on ℝk and let q be an elliptical density on ℝk′. The densities t and q are said to belong to the same family of elliptical densities if their respective generating densities ξk and ξk′ belong to a common given family of densities.

#### Example A.1

Consider two Gaussian densities 𝒩(0, 1) and 𝒩((0, 0), Id2). They are said to belong to the same elliptical family, as they both present x ↦ e−x as generating density.

Finally, let us introduce the definition of an elliptical copula which generalizes the above overview of the Gaussian copula:

#### Definition A.3

Elliptical copulas are the copulas of elliptical distributions.

#### A.2. Archimedean Copulas

These copulas exhibit a simple form as well as properties such as associativity. They also present a variety of dependence structures. They can generally be defined under the following form

$A(u_1, u_2, \ldots, u_n) = \xi^{-1}\left(\sum_{i=1}^{n}\xi(F_i(u_i))\right)$
where (u1, u2, …, un) ∈ [0, 1]n and where ξ is known as a “generator function”. This generator must be at least d − 2 times continuously differentiable, must have a decreasing and convex (d − 2)th derivative, and must satisfy ξ(1) = 0.

Let us now present several examples:

• Clayton copula:

The Clayton copula is an asymmetric Archimedean copula, displaying greater dependence in the negative tail than in the positive tail. Let us define X (resp. Y) as the random variable having F (resp. G) as cumulative distribution function (CDF). Assuming that the vector (X, Y) has a Clayton copula, then this copula is given by:

$A(x, y) = \left(F(x)^{-\theta} + G(y)^{-\theta} - 1\right)^{-1/\theta}$

And its generator is:

$\xi(x) = x^{-\theta} - 1$

In the limit θ → 0, the random variables are independent.

• Gumbel copula:

The Gumbel copula (Gumbel–Hougaard copula) is an asymmetric Archimedean copula, presenting greater dependence in the positive tail than in the negative tail. Its generator is:

$\xi(x) = (-\ln(x))^\alpha$

• Frank copula:

The Frank copula is a symmetric Archimedean copula whose generator is:

$\xi(x) = -\ln\left(\frac{e^{-\alpha x} - 1}{e^{-\alpha} - 1}\right)$
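The Archimedean construction above can be made concrete; the sketch below instantiates A = ξ−1(ξ(u) + ξ(v)) with a Clayton generator written as ξ(x) = x−θ − 1 (one common convention, which may differ from the text's parameterisation by a sign or scaling), and checks the boundary condition every copula must satisfy:

```python
# Archimedean copula built from a generator and its inverse, instantiated
# with the Clayton generator xi(x) = x**(-theta) - 1, whose inverse is
# xi_inv(t) = (1 + t)**(-1/theta).
def clayton(u, v, theta):
    xi = lambda x: x ** (-theta) - 1.0
    xi_inv = lambda t: (1.0 + t) ** (-1.0 / theta)
    return xi_inv(xi(u) + xi(v))

# Boundary condition of a copula: C(u, 1) = u.
for u in (0.2, 0.5, 0.9):
    assert abs(clayton(u, 1.0, 2.0) - u) < 1e-12

# Positive theta induces positive dependence: C(u, v) >= u * v.
assert clayton(0.3, 0.4, 2.0) >= 0.3 * 0.4
```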

#### A.3. Periodic Copula

In 2005, Alfonsi and Brigo derived a new way of generating copulas based on periodic functions. Defining h as a 1-periodic non-negative function that integrates to 1 over [0, 1], and A as a double primitive of h, then both

$A(u + v) - A(u) - A(v) \quad\text{and}\quad -A(u - v) + A(u) + A(-v)$
are copula functions, the second one not necessarily being exchangeable.

## B. ϕ-Divergence

Let us call ha the density of a⊤Z when h is the density of Z. Let φ be a strictly convex function defined from $\overline{\mathbb{R}^+}$ to $\overline{\mathbb{R}^+}$ and such that φ(1) = 0.

#### Definition B.1

We define a ϕ-divergence of P from Q, where P and Q are two probability distributions over a space Ω such that Q is absolutely continuous with respect to P, by

$D ϕ ( Q , P ) = ∫ φ ( d Q d P ) d P$
The above expression (B.1) is also valid if P and Q are both dominated by the same probability measure.

The most commonly used divergences (Kullback-Leibler, Hellinger or χ2) belong to the Cressie-Read family (see Cressie and Read, Csiszár, and the books of Liese and Vajda, Pardo and Zografos). They are each defined by a specific φ. Indeed,

- with the Kullback-Leibler divergence, we associate φ(x) = K(x) = x ln(x) − x + 1;

- with the Hellinger distance, we associate $\varphi(x) = H(x) = 2(\sqrt{x} - 1)^2$;

- with the χ2 distance, we associate $\varphi(x) = \chi^2(x) = \frac{1}{2}(x - 1)^2$;

- more generally, with power divergences, we associate $\varphi(x) = \frac{x^\gamma - \gamma x + \gamma - 1}{\gamma(\gamma - 1)}$, where γ ∈ ℝ \ {0, 1};

- and, finally, with the L1 norm, which is also a divergence, we associate φ(x) = |x − 1|.
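On a finite space, the integral of Definition B.1 becomes a sum, which makes the φ's above easy to compare; a small sketch (the distributions p and q below are arbitrary illustrative choices):

```python
import math

# Discrete version of Definition B.1: D_phi(Q, P) = sum_i p_i * phi(q_i / p_i),
# with the classical phi functions listed above.
kl = lambda x: x * math.log(x) - x + 1            # Kullback-Leibler
hellinger = lambda x: 2 * (math.sqrt(x) - 1) ** 2  # Hellinger
chi2 = lambda x: 0.5 * (x - 1) ** 2                # chi-square

def d_phi(q, p, phi):
    return sum(pi * phi(qi / pi) for qi, pi in zip(q, p))

p = [0.25, 0.25, 0.5]   # illustrative distribution P
q = [0.1, 0.4, 0.5]     # illustrative distribution Q

for phi in (kl, hellinger, chi2):
    assert d_phi(p, p, phi) < 1e-12    # Property B.1: D_phi(P, P) = 0
    assert d_phi(q, p, phi) > 0.0      # and D_phi(Q, P) > 0 when Q != P
```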

Let us now expose some well-known properties of divergences.

#### Property B.1

We have Dϕ(P, Q) = 0 ⇔ P = Q.

#### Property B.2

The divergence function Q ↦ Dϕ(Q, P) is convex and lower semi-continuous both for the topology that makes all applications of the form Q ↦ ∫ f dQ continuous (where f is bounded and continuous) and for the topology of uniform convergence.

Finally, we will also use the following property, derived from the first part of corollary (1.29), page 19, of Liese and Vajda,

#### Property B.3

If T : (X, A) → (Y, B) is measurable and if Dϕ(P, Q) < ∞, then Dϕ(P, Q) ≥ Dϕ(PT−1, QT−1), with equality being reached when T is sufficient for (P, Q).
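Property B.3 can be illustrated with the simplest possible T, a map that merges states of a finite space (an illustrative example, not taken from the text): coarsening the space can only decrease the divergence.

```python
import math

# Data-processing illustration with the Kullback-Leibler divergence.
kl = lambda x: x * math.log(x) - x + 1
d_kl = lambda q, p: sum(pi * kl(qi / pi) for qi, pi in zip(q, p))

p = [0.1, 0.3, 0.2, 0.4]          # P on four states
q = [0.25, 0.25, 0.25, 0.25]      # Q on four states
# T merges states: {0, 1} -> 0 and {2, 3} -> 1.
p_T = [p[0] + p[1], p[2] + p[3]]  # image measure P T^-1
q_T = [q[0] + q[1], q[2] + q[3]]  # image measure Q T^-1

# Coarsening can only decrease the divergence.
assert d_kl(q_T, p_T) <= d_kl(q, p) + 1e-12
```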

## C. Miscellaneous

#### Lemma C.1

For any p ≤ d, we have $g_{a_p}^{(p-1)} = g_{a_p}$.

#### Lemma C.2

We have $g(\cdot/a_1^\top x, \ldots, a_j^\top x) = n(a_{j+1}^\top x, \ldots, a_d^\top x) = f(\cdot/a_1^\top x, \ldots, a_j^\top x)$.

#### Lemma C.3

Should there exist a family (ai)i=1,…,d such that $f(x) = n(a_{j+1}^\top x, \ldots, a_d^\top x)\,h(a_1^\top x, \ldots, a_j^\top x)$, with j < d and with f, n and h being densities, then this family is an orthogonal basis of ℝd.

#### Lemma C.4

$\inf_{a\in\mathbb{R}_*^d} D_\phi(g\frac{f_a}{g_a}, f)$ is reached when the ϕ-divergence is greater than the L1 distance as well as the L2 distance.

#### Lemma C.5

Whenever there exists p, p ≤ d, such that Dϕ(g(p), f) = 0, the family (ai)i=1,…,p is linearly independent and orthogonal.

#### Lemma C.6

For any continuous density f, we have $y_m = \sup_x |f_m(x) - f(x)| = O_P(m^{-2/(4+d)})$.

## D. Study of the Sample

Let X1, X2, …, Xm be a sequence of independent random vectors with the same density f, and let Y1, Y2, …, Ym be a sequence of independent random vectors with the same density g. Then the kernel estimators fm, gm, fa,m and ga,m of f, g, fa and ga, for all $a \in \mathbb{R}_*^d$, converge almost surely and uniformly, since we assume that the bandwidth hm of these estimators meets the following conditions (see Bosq):

$(\mathcal{H}yp):\ h_m \searrow 0,\quad mh_m \nearrow \infty,\quad mh_m/L(h_m^{-1}) \to \infty \quad\text{and}\quad L(h_m^{-1})/LLm \to \infty,$
with L(u) = ln(u ∨ e).

Let us consider

$B_1(n,a) = \frac{1}{n}\sum_{i=1}^{n}\varphi'\!\left\{\frac{f_{a,n}(a^\top Y_i)}{g_{a,n}(a^\top Y_i)}\frac{g_n(Y_i)}{f_n(Y_i)}\right\}\frac{f_{a,n}(a^\top Y_i)}{g_{a,n}(a^\top Y_i)} \quad\text{and}\quad B_2(n,a) = \frac{1}{n}\sum_{i=1}^{n}\varphi^*\!\left\{\varphi'\!\left\{\frac{f_{a,n}(a^\top X_i)}{g_{a,n}(a^\top X_i)}\frac{g_n(X_i)}{f_n(X_i)}\right\}\right\}$

Our objective is to estimate the minimum of $D_\phi(g\frac{f_a}{g_a}, f)$. To achieve this, the samples have to be truncated:

Let us now consider a positive sequence θm such that θm → 0; $y_m/\theta_m^2 \to 0$, where ym is the almost sure convergence rate of the kernel density estimator—$y_m = O_P(m^{-2/(4+d)})$, see Lemma C.6; $y_m^{(1)}/\theta_m^2 \to 0$, where $y_m^{(1)}$ is defined by

$\left|\varphi\!\left(\frac{g_m(x)}{f_m(x)}\frac{f_{b,m}(b^\top x)}{g_{b,m}(b^\top x)}\right) - \varphi\!\left(\frac{g(x)}{f(x)}\frac{f_b(b^\top x)}{g_b(b^\top x)}\right)\right| \le y_m^{(1)}$
for all b in $\mathbb{R}_*^d$ and all x in ℝd; and finally $y_m^{(2)}/\theta_m^2 \to 0$, where $y_m^{(2)}$ is defined by
$\left|\varphi'\!\left(\frac{g_m(x)}{f_m(x)}\frac{f_{b,m}(b^\top x)}{g_{b,m}(b^\top x)}\right) - \varphi'\!\left(\frac{g(x)}{f(x)}\frac{f_b(b^\top x)}{g_b(b^\top x)}\right)\right| \le y_m^{(2)}$
for all b in $\mathbb{R}_*^d$ and all x in ℝd.

We then build fm, gm and gb,m from the starting sample and select the vectors Xi and Yi such that fm(Xi) ≥ θm and gb,m(b⊤Yi) ≥ θm, for all i and for all $b \in \mathbb{R}_*^d$.

The vectors meeting these conditions will be called X1, X2, …, Xn and Y1, Y2, …, Yn.
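The truncation step can be sketched as follows; the sample, dimension and threshold below are illustrative choices (Remark D.1 gives the theoretical rate θm = m−ν for the Kullback-Leibler case):

```python
import numpy as np
from scipy import stats

# Truncation sketch: build the kernel estimate f_m from the starting sample
# and keep only the points where it exceeds the threshold. A standard
# Gaussian sample stands in for f; theta_m = 0.05 is purely illustrative.
rng = np.random.default_rng(0)
m, d = 500, 2
X = rng.standard_normal((m, d))      # starting sample X_1, ..., X_m

f_m = stats.gaussian_kde(X.T)        # kernel estimator f_m of f
theta_m = 0.05                       # illustrative truncation level

keep = f_m(X.T) >= theta_m           # truncation rule f_m(X_i) >= theta_m
X_trunc = X[keep]                    # the retained sample X_1, ..., X_n
```

In the paper the same rule is applied simultaneously to the Yi through gb,m(b⊤Yi) ≥ θm.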

Consequently, the next proposition provides us with the condition required to derive our estimates:

#### Proposition D.1

Using the notations introduced in Broniatowski and in Section 4.1, it holds $\lim_{n\to\infty}\sup_{a\in\mathbb{R}_*^d}\left|(B_1(n,a) - B_2(n,a)) - D_\phi(g\frac{f_a}{g_a}, f)\right| = 0$.

#### Remark D.1

With the Kullback-Leibler divergence, we can take for θm the expression mν, with $0 < ν < 1 4 + d$.

## E. Hypotheses' Discussion

Not all hypotheses will be used simultaneously.

Hypotheses (A1) and (A4) lead us to assume we deal with a saddle point: being used to demonstrate the convergence of čn(a) and γk towards ak, they make it easier to use the dual form of the divergence. Moreover, since our criterion $a \mapsto D_\phi(g\frac{f_a}{g_a}, f)$ is differentiable on $\mathbb{R}_*^d$ and continuously differentiable on ℝd, these hypotheses are easily met. However, if other discontinuities exist, at which the criterion cannot be extended by continuity, then the above hypotheses would be very difficult to verify, even in very favorable cases.

As shown by the subsection below for the relative entropy, hypothesis (A2) generally holds.

Hypotheses (A5) and (A7) are classical hypotheses from which a limit distribution for the criterion can be derived. Yet these hypotheses are difficult to obtain when the criterion $a \mapsto D_\phi(g\frac{f_a}{g_a}, f)$ admits discontinuities—close to the co-vectors of f—at which it cannot be continuously differentiable.

Hypothesis (A6) thus enables us to create a stopping rule for the process, since this hypothesis is equivalent to the nullity of the application $a \mapsto D_\phi(g\frac{f_a}{g_a}, f)$ at ak.

Hypothesis (A0) constitutes an alternative to the starting hypothesis according to which the divergence should be greater than the L1 distance. Although weaker, this hypothesis still requires that, for all i, we have K(g(i), f) ≥ ∫ |f(x) − g(i)(x)|dx at each iteration of the algorithm.

#### E.1. Discussion of (A2)

Let us work with the Kullback-Leibler divergence and with g and a1.

For all $b \in \mathbb{R}_*^d$, we have $\int \varphi^*\!\left(\varphi'\!\left(\frac{g(x)f_b(b^\top x)}{f(x)g_b(b^\top x)}\right)\right)f(x)\,dx = \int\left(\frac{g(x)f_b(b^\top x)}{f(x)g_b(b^\top x)} - 1\right)f(x)\,dx = 0$, since, for any b in $\mathbb{R}_*^d$, the function $x \mapsto g(x)\frac{f_b(b^\top x)}{g_b(b^\top x)}$ is a density. The complement of ΘDϕ in $\mathbb{R}_*^d$ is ∅, and then the supremum looked for in ℝ̄ is −∞. We can therefore conclude. It is interesting to note that we obtain the same verification with f, g(k−1) and ak.

#### E.2. Discussion of (A3)

This hypothesis consists in the following assumptions:

(0) We work with the Kullback-Leibler divergence,

(1) We have $f(\cdot/a_1^\top x) = g(\cdot/a_1^\top x)$, i.e., $K(g\frac{f_{a_1}}{g_{a_1}}, f) = 0$—we could also derive the same proof with f, g(k−1) and ak.

#### Preliminary (A)

We show that $A = \{(c,x) \in \mathbb{R}_*^d\setminus\{a_1\}\times\mathbb{R}^d;\ \frac{f_{a_1}(a_1^\top x)}{g_{a_1}(a_1^\top x)} > \frac{f_c(c^\top x)}{g_c(c^\top x)},\ g(x)\frac{f_c(c^\top x)}{g_c(c^\top x)} > f(x)\} = \emptyset$ through a reductio ad absurdum, i.e., we assume A ≠ ∅.

Thus, our hypothesis enables us to derive

$f(x) = f(\cdot/a_1^\top x)\,f_{a_1}(a_1^\top x) = g(\cdot/a_1^\top x)\,f_{a_1}(a_1^\top x) > g(\cdot/c^\top x)\,f_c(c^\top x) > f(x)$
since $\frac{f_{a_1}(a_1^\top x)}{g_{a_1}(a_1^\top x)} \ge \frac{f_c(c^\top x)}{g_c(c^\top x)}$ implies $g(\cdot/a_1^\top x)\,f_{a_1}(a_1^\top x) = g(x)\frac{f_{a_1}(a_1^\top x)}{g_{a_1}(a_1^\top x)} \ge g(x)\frac{f_c(c^\top x)}{g_c(c^\top x)} = g(\cdot/c^\top x)\,f_c(c^\top x)$, i.e., f(x) > f(x), which is absurd. We can thus conclude.

#### Preliminary (B)

We show that $B = \{(c,x) \in \mathbb{R}_*^d\setminus\{a_1\}\times\mathbb{R}^d;\ \frac{f_{a_1}(a_1^\top x)}{g_{a_1}(a_1^\top x)} < \frac{f_c(c^\top x)}{g_c(c^\top x)},\ g(x)\frac{f_c(c^\top x)}{g_c(c^\top x)} < f(x)\} = \emptyset$ through a reductio ad absurdum, i.e., we assume B ≠ ∅.

Thus, our hypothesis enables us to derive

$f(x) = f(\cdot/a_1^\top x)\,f_{a_1}(a_1^\top x) = g(\cdot/a_1^\top x)\,f_{a_1}(a_1^\top x) < g(\cdot/c^\top x)\,f_c(c^\top x) < f(x)$

We can consequently conclude as above.

Let us now verify (A3):

We have $PM(c,a_1) - PM(c,a) = \int \ln\!\left(\frac{g(x)f_c(c^\top x)}{g_c(c^\top x)f(x)}\right)\left\{\frac{f_{a_1}(a_1^\top x)}{g_{a_1}(a_1^\top x)} - \frac{f_c(c^\top x)}{g_c(c^\top x)}\right\}g(x)\,dx$. Moreover, the logarithm is negative on $\{x \in \mathbb{R}_*^d;\ \frac{g(x)f_c(c^\top x)}{g_c(c^\top x)f(x)} < 1\}$ and positive on $\{x \in \mathbb{R}_*^d;\ \frac{g(x)f_c(c^\top x)}{g_c(c^\top x)f(x)} \ge 1\}$.

Thus, the preliminary studies (A) and (B) show that $\ln\!\left(\frac{g(x)f_c(c^\top x)}{g_c(c^\top x)f(x)}\right)$ and $\left\{\frac{f_{a_1}(a_1^\top x)}{g_{a_1}(a_1^\top x)} - \frac{f_c(c^\top x)}{g_c(c^\top x)}\right\}$ always present a negative product. We can therefore conclude, since (c, a) ↦ PM(c, a1) − PM(c, a) is not null for all c and for all a with a ≠ a1.

## References

1. Sklar, A. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris 1959, 8, 229–231. [Google Scholar]
2. Joe, H. Multivariate Models and Dependence Concepts. Monographs on Statistics and Applied Probability, 1st ed.; Chapman and Hall/CRC: London, UK, 1997. [Google Scholar]
3. Nelsen, R.B. An introduction to Copulas. Springer Series in Statistics, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
4. Carriere, J.F. A large sample test for one-parameter families of Copulas. Comm. Stat. Theor. Meth. 1994, 23, 1311–1317. [Google Scholar]
5. Genest, C.; Rémillard, B. Tests of independence and randomness based on the empirical Copula process. Test 2004, 13, 335–370. [Google Scholar]
6. Fermanian, J.D. Goodness of fit tests for copulas. J. Multivariate Anal. 2005, 95, 119–152. [Google Scholar]
7. Genest, C.; Quessy, J.F.; Rémillard, B. Goodness-of-fit procedures for copula models based on the probability integral transformation. Scand. J. Stat. 2006, 33, 337–366. [Google Scholar]
8. Michiels, F.; De Schepper, A. A Copula Test Space Model—How to Avoid the Wrong Copula Choice. Kybernetika 2008, 44, 864–878. [Google Scholar]
9. Genest, C.; Favre, A.-C.; Béliveau, J.; Jacques, C. Metaelliptical copulas and their use in frequency analysis of multivariate hydrological data. Water Resour. Res. 2009, 43, W09401:1–W09401:12. [Google Scholar]
10. Mesfioui, M.; Quessy, J.F.; Toupin, M.H. On a new goodness-of-fit process for families of copulas. La Revue Canadienne de Statistique 2009, 37, 80–101. [Google Scholar]
11. Genest, C.; Rémillard, B.; Beaudoin, D. Goodness-of-fit tests for copulas: A review and a power study. Insurance: Math. Econ. 2009, 44, 199–213. [Google Scholar]
12. Berg, D. Copula goodness-of-fit testing: An overview and power comparison. Eur. J. Finance 2009, 15, 675–701. [Google Scholar]
13. Bücher, A.; Dette, H. Some comments on goodness-of-fit tests for the parametric form of the copula based on L2-distances. J. Multivar. Anal. 2010, 101, 749–763. [Google Scholar]
14. Broniatowski, M.; Leorato, S. An estimation method for the Neyman chi-square divergence with application to test of hypotheses. J. Multivar. Anal. 2006, 97, 1409–1436. [Google Scholar]
15. Friedman, J.H.; Stuetzle, W.; Schroeder, A. Projection pursuit density estimation. J. Am. Statist. Assoc. 1984, 79, 599–608. [Google Scholar]
16. Huber, P.J. Projection pursuit. Ann. Stat. 1985, 13, 435–525. [Google Scholar]
17. Cambanis, S.; Huang, S.; Simons, G. On the theory of elliptically contoured distributions. J. Multivar. Anal. 1981, 11, 368–385. [Google Scholar]
18. Landsman, Z.M.; Valdez, E.A. Tail conditional expectations for elliptical distributions. N. Am. Actuar. J. 2003, 7, 55–71. [Google Scholar]
19. Yohai, V.J. Optimal robust estimates using the Kullback-Leibler divergence. Stat. Probab. Lett. 2008, 78, 1811–1816. [Google Scholar]
20. Toma, A. Optimal robust M-estimators using divergences. Stat. Probab. Lett. 2009, 79, 1–5. [Google Scholar]
21. Huber, P.J. Robust Statistics; Wiley: New York, NY, USA, 1981; (republished in paperback, 2004). [Google Scholar]
22. van der Vaart, A.W. Asymptotic Statistics; Cambridge Series in Statistical and Probabilistic Mathematics; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
23. Scott, D.W. Multivariate Density Estimation: Theory, Practice, and Visualization; Wiley Series in Probability and Mathematical Statistics; John Wiley and Sons, Inc.: New York, NY, USA, 1992. [Google Scholar]
24. Touboul, J. Goodness-of-fit Tests For Elliptical And Independent Copulas Through Projection Pursuit. arXiv. Statistics Theory 2011. arXiv: 1103.0498. [Google Scholar]
25. Alfonsi, A.; Brigo, D. New families of copulas based on periodic functions. Commun. Stat. Theor. Meth. 2005, 34, 1437–1447. [Google Scholar]
26. Cressie, N.; Read, T.R.C. Multinomial goodness-of-fit tests. J. R. Stat. Soc. Series B 1984, 46, 440–464. [Google Scholar]
27. Csiszár, I. On topology properties of f-divergences. Studia Sci. Math. Hungar. 1967, 2, 329–339. [Google Scholar]
28. Liese, F.; Vajda, I. Convex Statistical Distances; BSB B. G. Teubner Verlagsgesellschaft: Leipzig, Germany, 1987. [Google Scholar]
29. Pardo, L. Statistical Inference Based on Divergence Measures. Statistics: Textbooks and Monographs; Chapman & Hall/CRC: Boca Raton, FL, USA, 2006. [Google Scholar]
30. Zografos, K.; Ferentinos, K.; Papaioannou, T. φ-divergence statistics: sampling properties and multinomial goodness of fit and divergence tests. Commun. Stat. Theor. Meth. 1990, 19, 1785–1802. [Google Scholar]
31. Azé, D. Eléments d'Analyse Convexe et Variationnelle; Ellipses: Paris, France, 1997. [Google Scholar]
32. Bosq, D.; Lecoutre, J.P. Théorie de l'Estimation Fonctionnelle; Economica: Paris, France, 1999. [Google Scholar]
33. Broniatowski, M.; Keziou, A. Parametric estimation and tests through divergences and the duality technique. J. Multivar. Anal. 2009, 100, 16–36. [Google Scholar]
34. Black, F.; Scholes, M. The pricing of options and corporate liabilities. J. Polit. Econ. 1973, 81, 637–654. [Google Scholar]
MSC Classification: 62H05; 62H15; 62H40; 62G15