
*Algorithms*
**2011**,
*4*(2),
87-114;
doi:10.3390/a4020087

## Abstract

Two goodness-of-fit tests for copulas are investigated: the first deals with elliptical copulas and the second with independent copulas. These tests result from the expansion of the projection pursuit methodology that we introduce in the present article. This method enables us to determine on which axis system these copulas lie, as well as the exact value of these copulas in the basis formed by the axes previously determined, irrespective of their value in their canonical basis. Simulations are presented, as well as an application to real datasets.

## 1. Introduction

The need to describe the dependency between two or more random variables triggered the concept of copulas. We consider a joint cumulative distribution function (cdf) F on ℝ^{d} with margins F_{1}, F_{2}, …, F_{d}. A copula C is a function such that F = C(F_{1}, F_{2}, …, F_{d}). Sklar [1] was the first to lay the foundations of this theory. Several parametric families of copulas have been defined, namely elliptical, Archimedean and periodic copulas, among others; see Joe [2] and Nelsen [3], as well as Appendix A, for an overview of these families. Finding criteria to determine the best copula for a given problem can only be achieved through a goodness-of-fit (GOF) approach. So far, several GOF copula approaches have been proposed in the literature, e.g., Carriere [4], Genest and Rémillard [5], Fermanian [6], Genest, Quessy and Rémillard [7], Michiels and De Schepper [8], Genest, Favre, Béliveau and Jacques [9], Mesfioui, Quessy and Toupin [10], Genest, Rémillard and Beaudoin [11], Berg [12], Bücher and Dette [13], among others. However, the field is still at an embryonic stage, which explains the current shortage of recommendations. For univariate distributions, the GOF assessment can be performed using, for instance, the well-known Kolmogorov test. In the multivariate setting, there are fewer alternatives. A simple way to build GOF approaches for multivariate random variables is to consider multi-dimensional chi-square approaches, as in, for example, Broniatowski [14]. However, these approaches present feasibility issues for high dimensional problems because of the curse of dimensionality. In order to solve this, we recall some facts from the theory of projection pursuit.

The objective of projection pursuit is to generate one or several projections providing as much information as possible about the structure of the dataset, regardless of its size. Once a structure has been isolated, the corresponding data are transformed through a Gaussianization. Through a recursive approach, this process is iterated to find another structure in the remaining data, until no further structure can be evidenced in the data left at the end. Friedman [15] and Huber [16] count among the first authors who introduced this type of approach for evidencing structures. They each describe, with many examples, how to evidence such a structure and, consequently, how to estimate the density of such data, through two different methodologies. Their work is based on maximizing the Kullback-Leibler divergence. In the present article, we introduce a new projection pursuit methodology based on the minimisation of any ϕ-divergence greater than the L^{1}-distance (ϕ-PP). We will show that this algorithm presents the extra advantage of being robust and numerically fast. Its key rationale lies in the fact that it allows us not only to carry out GOF tests for elliptical and independent copulas but also to determine the axis system upon which these very copulas are based. The exact expression of these copulas in the basis constituted by these axes can therefore be derived.

This paper is organised as follows: Section 2 contains preliminary definitions and properties. In Section 3, we present in detail the ϕ-projection pursuit algorithm. In Section 4, we present our first results. In Section 5, we introduce our tests. In Section 6, we provide three simulations pertaining to the two major situations described herein, and we study a real case.

## 2. Basic Theory

#### 2.1. An Introduction to Copulas

In this section, we recall the concept of copula. We will also define the family of elliptical copulas through a brief reminder of elliptical distributions—see Appendix A for an overview of other families.

#### Sklar's Theorem

First, let us define a copula in ℝ^{d}.

#### Definition 2.1

A d-dimensional copula is a joint cumulative distribution function C defined on [0, 1]^{d}, with uniform margins.

The following theorem explains to what extent a copula describes the dependency between two or more random variables.

#### Theorem 2.1 (Sklar [1])

Let F be a joint multivariate distribution with margins F_{1}, …, F_{d}; then, there exists a copula C such that
$F({x}_{1},\dots ,{x}_{d})=C({F}_{1}({x}_{1}),\dots ,{F}_{d}({x}_{d}))$, for all (x_{1}, …, x_{d}) in ℝ^{d}.

If marginal cumulative distributions are continuous, then the copula is unique. Otherwise, the copula is unique on the range of values of the marginal cumulative distributions.

#### Remark 2.1

First, for any copula C and any u_{i} in [0, 1], 1 ≤ i ≤ d, we have

We set the independent copula Π as
$\mathrm{\Pi}({u}_{1},\dots ,{u}_{d})={\mathrm{\Pi}}_{i=1}^{d}{u}_{i}$, for any u_{i} in [0, 1], 1 ≤ i ≤ d.

Moreover, we define the density of a copula as the density associated with the cdf C, which we will denote by c:

#### Definition 2.2

Whenever it exists, the density of C is defined by
$c({u}_{1},\dots ,{u}_{d})=\frac{{\partial}^{d}}{\partial {u}_{1}\dots \partial {u}_{d}}C({u}_{1},\dots ,{u}_{d})$, for any u_{i} in [0, 1], 1 ≤ i ≤ d.
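As a quick numerical illustration of definition 2.2 (a sketch in Python; the helper `copula_density_fd` is ours, not part of the article), the mixed second partial derivative of the independent copula Π(u, v) = uv recovers its density, which is identically 1 on [0, 1]^{2}:

```python
import numpy as np

def copula_density_fd(C, u, v, h=1e-4):
    """Mixed second partial d^2 C / (du dv), by central differences."""
    return (C(u + h, v + h) - C(u + h, v - h)
            - C(u - h, v + h) + C(u - h, v - h)) / (4.0 * h * h)

# The independent copula Pi(u, v) = u * v has density c = 1 everywhere.
print(copula_density_fd(lambda u, v: u * v, 0.3, 0.7))  # -> 1.0 (up to rounding)
```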

Finally, let us present several examples of copulas (see also Appendix A to find an overview).

#### Example 2.1

The Gaussian copula C_{ρ} (in ℝ^{2}):

Defining Ψ_{ρ} as the standard bivariate normal cumulative distribution function with correlation ρ, the Gaussian copula function is
${C}_{\rho}(u,v)={\mathrm{\Psi}}_{\rho}({\mathrm{\Psi}}^{-1}(u),{\mathrm{\Psi}}^{-1}(v))$, where u, v ∈ [0, 1] and where Ψ is the standard normal cumulative distribution function.

The Student copula C_{ρ} (in ℝ^{2}):

Defining T_{ρ,k} as the standard bivariate Student cumulative distribution function with ρ as the correlation coefficient and with k as the degrees of freedom of the distribution, the Student copula function is
${C}_{\rho}(u,v)={T}_{\rho ,k}({t}_{k}^{-1}(u),{t}_{k}^{-1}(v))$, where t_{k} is the standard Student cumulative distribution function.

The elliptical copula:

Similarly to the above, elliptical copulas are the copulas of elliptical distributions (an overview is provided in Appendix A).
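To make example 2.1 concrete, here is a small Python sketch (our own illustration, not taken from the article) that samples from the Gaussian copula C_{ρ} by pushing a bivariate normal sample through the standard normal cdf, and checks one value of C_{ρ} against the closed form Ψ_{ρ}(0, 0) = 1/4 + arcsin(ρ)/(2π):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]

# If (X, Y) is standard bivariate normal with correlation rho, then
# (U, V) = (Psi(X), Psi(Y)) is distributed according to the copula C_rho.
xy = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
uv = stats.norm.cdf(xy)

emp = np.mean((uv[:, 0] <= 0.5) & (uv[:, 1] <= 0.5))   # estimates C_rho(1/2, 1/2)
theo = 0.25 + np.arcsin(rho) / (2.0 * np.pi)           # = Psi_rho(0, 0)
print(f"empirical: {emp:.4f}  closed form: {theo:.4f}")
```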

#### 2.2. Brief Introduction to the ϕ-Projection Pursuit Methodology (ϕ-PP)

Let us first introduce the concept of ϕ-divergence.

#### The Concept of ϕ-Divergence

Let φ be a strictly convex function defined by $\phi :\overline{\mathbb{R}+}\to \overline{\mathbb{R}+}$, and such that φ(1) = 0. We define a ϕ-divergence of P from Q, where P and Q are two probability distributions over a space Ω such that Q is absolutely continuous with respect to P, by
$${D}_{\varphi}(Q,P)={\int}_{\mathrm{\Omega}}\phi \left(\frac{dQ}{dP}\right)\phantom{\rule{0.2em}{0ex}}dP$$

Throughout this article, we will also assume that φ(0) < ∞, that φ′ is continuous and that this divergence is greater than the L^{1} distance—see also Appendix B.
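The following Python sketch (our own illustration; the grid-based integration and function names are assumptions, not the article's code) evaluates D_{ϕ}(Q, P) = ∫ φ(dQ/dP) dP for positive densities given on a grid, using the φ functions listed in Appendix B:

```python
import numpy as np

PHI = {
    "kullback":  lambda x: x * np.log(x) - x + 1.0,
    "hellinger": lambda x: 2.0 * (np.sqrt(x) - 1.0) ** 2,
    "chi2":      lambda x: 0.5 * (x - 1.0) ** 2,
    "L1":        lambda x: np.abs(x - 1.0),
}

def phi_divergence(q, p, dx, kind="chi2", eps=1e-12):
    """D_phi(Q, P) = int phi(dQ/dP) dP, rectangle rule on a common grid
    (q and p are positive density values on that grid)."""
    return float(np.sum(PHI[kind](q / np.maximum(p, eps)) * p) * dx)

# Sanity check (property B.1): the divergence of a density from itself is 0.
x = np.linspace(-8.0, 8.0, 2001)
p = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
print(phi_divergence(p, p, x[1] - x[0], kind="kullback"))  # -> 0.0
```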

#### Functioning of the Algorithm

Let f be a density on ℝ^{d}. We consider an instrumental density g with the same mean and variance as f. We start with performing the D_{ϕ}(g, f) = 0 test; should this test turn out to be positive, then f = g and the algorithm stops; otherwise, the first step of our algorithm consists in defining a vector a_{1} and a density g^{(1)} by
${a}_{1}=\mathit{\text{arg}}{inf}_{a\in {\mathbb{R}}_{\ast}^{d}}{D}_{\varphi}\left(g\frac{{f}_{a}}{{g}_{a}},f\right),\phantom{\rule{1em}{0ex}}{g}^{(1)}=g\frac{{f}_{{a}_{1}}}{{g}_{{a}_{1}}}$ (2.2)
where a belongs to ${\mathbb{R}}_{\ast}^{d}$ and f_{a} (resp. g_{a}) stands for the density of a^{⊤}X (resp. a^{⊤}Y) when f (resp. g) is the density of X (resp. Y).

In our second step, we replace g with g^{(1)} and we repeat the first step, and so on. By iterating this process, we end up obtaining a sequence (a_{1}, a_{2}, …) of vectors in
${\mathbb{R}}_{\ast}^{d}$ and a sequence of densities g^{(i)}.

#### Remark 2.2

First, to obtain an approximation of f, we stop our algorithm when the divergence equals zero, i.e., we stop when D_{ϕ}(g^{(j)}, f) = 0 since it implies g^{(j)} = f with j ≤ d, or when our algorithm reaches the d^{th} iteration, i.e., we approximate f with g^{(d)}.

Second, we get D_{ϕ}(g^{(0)}, f) ≥ D_{ϕ}(g^{(1)}, f) ≥ ⋯ ≥ 0, with g^{(0)} = g.

Finally, the specific form of relationship (2.2) implies that we deal with M-estimation. We can therefore state that our method is robust—see Section 6, Yohai [19], Toma [20], as well as Huber [21].

The main steps of the present algorithm have been summarized in Table 1.

At present, let us study the following example:

#### Example 2.2

Let f be a density defined on ℝ^{3} by f(x_{1}, x_{2}, x_{3}) = n(x_{1}, x_{2})h(x_{3}), with n being a bi-dimensional Gaussian density, and h being a non-Gaussian density. Let us also consider g, a Gaussian density with the same mean and variance as f.

Since g(x_{1}, x_{2}/x_{3}) = n(x_{1}, x_{2}), we have
${D}_{\varphi}(g\frac{{f}_{3}}{{g}_{3}},f)={D}_{\varphi}(n.{f}_{3},f)={D}_{\varphi}(f,f)=0$ as f_{3} = h, i.e., the function
$a\mapsto {D}_{\varphi}(g\frac{{f}_{a}}{{g}_{a}},f)$ reaches zero for e_{3} = (0, 0, 1)′, where f_{3} and g_{3} are the third marginal densities of f and g respectively. We therefore obtain g(x_{1}, x_{2}/x_{3}) = f(x_{1}, x_{2}/x_{3}).

To recapitulate our method: if D_{ϕ}(g, f) = 0, we derive f from the relationship f = g; whenever a sequence (a_{i})_{i=1,…,j}, j < d, of vectors in
${\mathbb{R}}_{\ast}^{d}$ defining g^{(j)} and such that D_{ϕ}(g^{(j)}, f) = 0 exists, then
$f(./{a}_{i}^{\top}x,1\le i\le j)=g(./{a}_{i}^{\top}x,1\le i\le j)$ i.e., f coincides with g on the complement of the vector subspace generated by the family {a_{i}}_{i=1,…,j}—see also Section 3 for a more detailed explanation.

In the remainder of our study of the algorithm, after having clarified the choice of g, we will consider the statistical solution to the representation problem, assuming that f is unknown and that X_{1}, X_{2}, …, X_{m} are i.i.d. with density f. We will provide asymptotic results pertaining to the family of optimizing vectors a_{k,m}—which we will define more precisely below—as m goes to infinity. Our results also prove that the empirical representation scheme converges towards the theoretical one.
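Before turning to the formal model, here is a rough numerical sketch of the first step of the algorithm in ℝ^{2} (our own illustration, under simplifying assumptions: χ^{2} divergence, plug-in kernel estimates, and a plain grid over directions instead of the optimisation and sample truncation used in the article):

```python
import numpy as np
from scipy import stats

def pp_first_step(X, n_angles=180, eps=1e-12):
    """Estimate a_1 = arg inf_a D_phi(g f_a / g_a, f) for 2-d data X ~ f,
    with g the Gaussian density sharing the mean and covariance of X."""
    g = stats.multivariate_normal(X.mean(axis=0), np.cov(X.T))
    f_kde = stats.gaussian_kde(X.T)                  # kernel estimate of f
    fX, gX = f_kde(X.T), g.pdf(X)                    # f and g at the sample
    best_a, best_div = None, np.inf
    for t in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        a = np.array([np.cos(t), np.sin(t)])
        proj = X @ a
        fa = stats.gaussian_kde(proj)(proj)          # f_a at the points a'X_i
        ga = stats.norm(proj.mean(), proj.std()).pdf(proj)  # g_a is Gaussian
        ratio = gX * fa / np.maximum(ga * fX, eps)   # (g f_a / g_a) / f
        div = np.mean(0.5 * (ratio - 1.0) ** 2)      # chi^2 plug-in estimate
        if div < best_div:
            best_a, best_div = a, div
    return best_a, best_div
```

Since D_{ϕ}(Q, P) = E_{P}[φ(dQ/dP)], averaging φ of the plug-in density ratio over the X sample estimates the projection index; a minimum close to zero signals that f and g f_{a_{1}}/g_{a_{1}} agree, as in example 2.2.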

## 3. The Algorithm

#### 3.1. The Model

Let f be a density on ℝ^{d}. We assume there exist d non-null linearly independent vectors a_{j}, with 1 ≤ j ≤ d, of ℝ^{d}, such that
$f(x)=n({a}_{j+1}^{\top}x,\dots ,{a}_{d}^{\top}x)h({a}_{1}^{\top}x,\dots ,{a}_{j}^{\top}x)$ (3.1)
with n being an elliptical density on ℝ^{d−j} and with h being a density on ℝ^{j} which does not belong to the same family as n. Let X = (X_{1}, …, X_{d}) be a vector with f as density.

We define g as an elliptical distribution with the same mean and variance as f.

For simplicity, let us assume that the family {a_{j}}_{1≤}_{j}_{≤}_{d} is the canonical basis of ℝ^{d}:

The very definition of f implies that (X_{j+1}, …, X_{d}) is independent of (X_{1}, …, X_{j}). Hence, the density of (X_{j+1}, …, X_{d}) given (X_{1}, …, X_{j}) is n.

Let us assume that D_{ϕ}(g^{(j)}, f) = 0, for some j ≤ d. We then get
$\frac{f(x)}{{f}_{{a}_{1}}{f}_{{a}_{2}}\dots {f}_{{a}_{j}}}=\frac{g(x)}{{g}_{{a}_{1}}^{(1-1)}{g}_{{a}_{2}}^{(2-1)}\dots {g}_{{a}_{j}}^{(j-1)}}$, since, by induction, we have
${g}^{(j)}(x)=g(x)\frac{{f}_{{a}_{1}}}{{g}_{{a}_{1}}^{(1-1)}}\frac{{f}_{{a}_{2}}}{{g}_{{a}_{2}}^{(2-1)}}\dots \frac{{f}_{{a}_{j}}}{{g}_{{a}_{j}}^{(j-1)}}$.

Consequently, lemma C.1, the fact that conditional densities of elliptical distributions are also elliptical, and the above relationship lead us to infer that
$n({a}_{j+1}^{\top}x,\dots ,{a}_{d}^{\top}x)=f(./{a}_{i}^{\top}x,1\le i\le j)=g(./{a}_{i}^{\top}x,1\le i\le j)$. In other words, f coincides with g on the complement of the vector subspace generated by the family {a_{i}}_{i=1,…,j}.

Now, if the family {a_{j}}_{1≤j≤d} is no longer the canonical basis of ℝ^{d}, then this family is still a basis of ℝ^{d}. Hence, lemma C.2 implies that D_{ϕ}(g^{(j)}, f) = 0, since by induction ${g}^{(j)}=g\frac{{f}_{{a}_{1}}}{{g}_{{a}_{1}}^{(1-1)}}\frac{{f}_{{a}_{2}}}{{g}_{{a}_{2}}^{(2-1)}}\dots \frac{{f}_{{a}_{j}}}{{g}_{{a}_{j}}^{(j-1)}}$.

The end of our algorithm implies that f coincides with g on the complement of the vector subspace generated by the family {a_{i}}_{i=1,…,j}. Therefore, the nullity of the ϕ-divergence provides us with information on the density structure.

In summary, the following proposition clarifies our choice of g, which depends on the family of distributions one wants to find in f:

#### Proposition 3.1

With the above notations, D_{ϕ}(g^{(j)}, f) = 0 is equivalent to
$f(./{a}_{i}^{\top}x,1\le i\le j)=g(./{a}_{i}^{\top}x,1\le i\le j)$.

More generally, the above proposition defines the co-support of f as the vector space generated by the vectors a_{1}, …, a_{j}.

#### Definition 3.1

Let f be a density on ℝ^{d}. We define the co-vectors of f as the sequence of vectors a_{1}, …, a_{j} which solves the problem D_{ϕ}(g^{(j)}, f) = 0 where g is an elliptical distribution with the same mean and variance as f. We define the co-support of f as the vector space generated by the vectors a_{1}, …, a_{j}.

#### Remark 3.1

Any (a_{i}) family defining f as in (3.1) is an orthogonal basis of ℝ^{d}—see lemma C.3.

#### 3.2. Stochastic Outline of Our Algorithm

Let X_{1}, X_{2}, …, X_{m} (resp. Y_{1}, Y_{2}, …, Y_{m}) be a sequence of m independent random vectors with the same density f (resp. g). As customary in nonparametric ϕ-divergence optimizations, all estimates of f and f_{a}, as well as all uses of Monte Carlo methods, are performed using subsamples X_{1}, X_{2}, …, X_{n} and Y_{1}, Y_{2}, …, Y_{n}—extracted respectively from X_{1}, X_{2}, …, X_{m} and Y_{1}, Y_{2}, …, Y_{m}—since the estimates are bounded below by some positive deterministic sequence θ_{m}—see Appendix D.

Let ℙ_{n} be the empirical measure based on the subsample X_{1}, X_{2}, …, X_{n}. Let f_{n} (resp. f_{a,n}, for any a in
${\mathbb{R}}_{\ast}^{d}$) be the kernel estimate of f (resp. f_{a}), which is built from X_{1}, X_{2}, …, X_{n} (resp. a^{⊤}X_{1}, a^{⊤}X_{2}, …, a^{⊤}X_{n}).

As defined in Section 2.2, we consider the sequences (a_{k})_{k≥1} and (g^{(k)})_{k≥1} such that
${a}_{k}=\mathit{\text{arg}}{inf}_{a\in {\mathbb{R}}_{\ast}^{d}}{D}_{\varphi}\left({g}^{(k-1)}\frac{{f}_{a}}{{g}_{a}^{(k-1)}},f\right)\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}{g}^{(k)}={g}^{(k-1)}\frac{{f}_{{a}_{k}}}{{g}_{{a}_{k}}^{(k-1)}}.$

The stochastic setting up of the algorithm uses f_{n} and
${g}_{n}^{(0)}=g$ instead of f and g^{(0)} = g—since g is known. Thus, at the first step, we build the vector ǎ_{1}, which minimizes the ϕ-divergence between f_{n} and
$g\frac{{f}_{a,n}}{{g}_{a}}$ and which estimates a_{1}. First, proposition D.1 and lemma C.4 show how the infimum of the criterion (or index) is reached for the divergence between f_{n} and $g\frac{{f}_{a,n}}{{g}_{a}}$. Second, defining ǎ_{1} as the argument of this minimization, proposition 4.3 infers that this vector tends to a_{1}. Finally, we define the density ${\stackrel{\u02c7}{g}}_{n}^{(1)}$ as ${\stackrel{\u02c7}{g}}_{n}^{(1)}=g\frac{{f}_{{\stackrel{\u02c7}{a}}_{1},n}}{{g}_{{\stackrel{\u02c7}{a}}_{1}}}$, which estimates g^{(1)} through theorem 4.1.

Now, from the second step onwards, and as defined in Section 2.2, the density g^{(2−1)} is unknown. Consequently, once again, the samples have to be truncated.

All estimates of f and f_{a} (resp. g^{(1)} and
${g}_{a}^{(1)}$) are being performed using a subsample X_{1}, X_{2},…,X_{n} (resp.
${Y}_{1}^{(1)},{Y}_{2}^{(1)},\dots ,{Y}_{n}^{(1)}$) extracted from X_{1}, X_{2},…,X_{m} (resp.
${Y}_{1}^{(1)},{Y}_{2}^{(1)},\dots ,{Y}_{m}^{(1)}$, which is a sequence of m independent random vectors with same density g^{(1)}) such that the estimates are bounded below by some positive deterministic sequence θ_{m}—see Appendix D.

Let ℙ_{n} be the empirical measure of the subsample X_{1}, X_{2},…,X_{n}. Let f_{n} (resp.
${g}_{n}^{(1)},{f}_{a,n},{g}_{a,n}^{(1)}$ for any a in
${\mathbb{R}}_{\ast}^{d}$) be the kernel estimate of f (resp. g^{(1)} and f_{a} as well as
${g}_{a}^{(1)}$) which is built from X_{1}, X_{2},…,X_{n} (resp.
${Y}_{1}^{(1)},{Y}_{2}^{(1)},\dots ,{Y}_{n}^{(1)}$ and a^{⊤}X_{1}, a^{⊤}X_{2},…,a^{⊤}X_{n} as well as
${a}^{\top}{Y}_{1}^{(1)},{a}^{\top}{Y}_{2}^{(1)},\dots ,{a}^{\top}{Y}_{n}^{(1)}$).

The stochastic setting up of the algorithm uses f_{n} and
${g}_{n}^{(1)}$ instead of f and g^{(1)}. Thus, we build the vector ǎ_{2}, which minimizes the ϕ-divergence between f_{n} and
${g}_{n}^{(1)}\frac{{f}_{a,n}}{{g}_{a,n}^{(1)}}$, since g^{(1)} and
${g}_{a}^{(1)}$ are unknown—and which estimates a_{2}. First, proposition D.1 and lemma C.4 show how the infimum of the criterion (or index) is reached for the divergence between f_{n} and ${g}_{n}^{(1)}\frac{{f}_{a,n}}{{g}_{a,n}^{(1)}}$. Second, defining ǎ_{2} as the argument of this minimization, proposition 4.3 infers that this vector tends to a_{2}. Finally, we define the density ${\stackrel{\u02c7}{g}}_{n}^{(2)}$ as ${\stackrel{\u02c7}{g}}_{n}^{(2)}={g}_{n}^{(1)}\frac{{f}_{{\stackrel{\u02c7}{a}}_{2},n}}{{g}_{{\stackrel{\u02c7}{a}}_{2},n}^{(1)}}$, which estimates g^{(2)} through theorem 4.1.

Iterating this process, we end up obtaining a sequence (ǎ_{1}, ǎ_{2}, …) of vectors in
${\mathbb{R}}_{\ast}^{d}$ estimating the co-vectors of f and a sequence of densities
${({\stackrel{\u02c7}{g}}_{n}^{(k)})}_{k}$ such that
${\stackrel{\u02c7}{g}}_{n}^{(k)}$ estimates g^{(k)} through theorem 4.1.

Let us now summarize the main steps of the stochastic implementation of our algorithm (the dual representation of the estimators will be further detailed in Table 2 below).

## 4. Results

#### 4.1. Hypotheses on f

In this paragraph, we define the set of hypotheses on f which could possibly be of use in our work. Discussion on several of these hypotheses can be found in Appendix E.

In the remainder of this section, for legibility reasons, we replace g with g^{(k−1)}. **P** denotes the probability measure admitting f as density.

Similarly as in chapter V of Van der Vaart [22], let us define:

(A1): For all ε > 0, there is η > 0 such that for all c ∈ Θ^{D_{ϕ}} verifying ‖c − a_{k}‖ ≥ ε, we have **P**M(c, a) − η > **P**M(a_{k}, a), with a ∈ Θ.

(A2): There exist Z < 0 and n_{0} > 0 such that n ≥ n_{0} implies sup_{a∈Θ} sup_{c∈(Θ^{D_{ϕ}})^{c}} ℙ_{n}M(c, a) < Z.

(A3): There exist V, a neighbourhood of a_{k}, and a positive function H such that, for all c ∈ V, we have |M(c, a_{k}, x)| ≤ H(x) (**P**-a.s.) with **P**H < ∞.

(A4): There exists V, a neighbourhood of a_{k}, such that for all ε > 0, there exists η > 0 such that for all c ∈ V and all a ∈ Θ verifying ‖a − a_{k}‖ ≥ ε, we have **P**M(c, a_{k}) < **P**M(c, a) − η.

Putting ${I}_{{a}_{k}}=\frac{{\partial}^{2}}{\partial {a}^{2}}{D}_{\varphi}(g\frac{{f}_{{a}_{k}}}{{g}_{{a}_{k}}},f)$, let us now consider four new hypotheses:

(A5): $\mathbf{\text{P}}{\Vert \frac{\partial}{\partial b}M({a}_{k},{a}_{k})\Vert}^{2}$ and $\mathbf{\text{P}}{\Vert \frac{\partial}{\partial a}M({a}_{k},{a}_{k})\Vert}^{2}$ are finite, and the expressions $\mathbf{\text{P}}\frac{{\partial}^{2}}{\partial {b}_{i}\partial {b}_{j}}M({a}_{k},{a}_{k})$ and I_{a_{k}} exist and are invertible.

(A6): There exists k such that **P**M(a_{k}, a_{k}) = 0.

(A7): (Var_{**P**}(M(a_{k}, a_{k})))^{1/2} exists and is invertible.

(A0): f and g are assumed to be positive and bounded and such that K(g, f) ≥ ∫ |f(x) − g(x)|dx, where K is the Kullback-Leibler divergence.

#### Estimation of the First Co-Vector of f

Let $\mathcal{R}$ be the class of all positive functions r defined on ℝ and such that g(x)r(a^{⊤}x) is a density on ℝ^{d} for all a belonging to
${\mathbb{R}}_{\ast}^{d}$. The following proposition shows that there exists a vector a such that
$\frac{{f}_{a}}{{g}_{a}}$ minimizes D_{ϕ}(gr, f) in r:

#### Proposition 4.1

There exists a vector a belonging to ${\mathbb{R}}_{\ast}^{d}$ such that
$\frac{{f}_{a}}{{g}_{a}}=\mathit{\text{arg}}{inf}_{r\in \mathcal{R}}{D}_{\varphi}(gr,f).$

Following Broniatowski [33], let us introduce the estimate of ${D}_{\varphi}(g\frac{{f}_{a,n}}{{g}_{a}},{f}_{n})$, through ${\stackrel{\u02c7}{D}}_{\varphi}(g\frac{{f}_{a,n}}{{g}_{a}},{f}_{n})={sup}_{b\in \mathrm{\Theta}}\int M(b,a,x)d{\mathbb{P}}_{n}(x)$.

#### Proposition 4.2

Let ǎ be such that $\stackrel{\u02c7}{a}:=\mathit{\text{arg}}{inf}_{a\in {\mathbb{R}}_{\ast}^{d}}{\stackrel{\u02c7}{D}}_{\varphi}\left(g\frac{{f}_{a,n}}{{g}_{a}},{f}_{n}\right)$.

Then, ǎ is a strongly convergent estimate of a, as defined in proposition 4.1.

Let us also introduce the following sequences (ǎ_{k})_{k}_{≥1} and
${({\stackrel{\u02c7}{g}}_{n}^{(k)})}_{k\ge 1}$, for any given n—see Section 3.2—such that

ǎ_{k} is an estimate of a_{k} as defined in proposition 4.2 with
${\stackrel{\u02c7}{g}}_{n}^{(k-1)}$ instead of g,
${\stackrel{\u02c7}{g}}_{n}^{(k)}$ is defined by
${\stackrel{\u02c7}{g}}_{n}^{(0)}=g$,
${\stackrel{\u02c7}{g}}_{n}^{(k)}(x)={\stackrel{\u02c7}{g}}_{n}^{(k-1)}(x)\frac{{f}_{{\stackrel{\u02c7}{a}}_{k},n}({\stackrel{\u02c7}{a}}_{k}^{\top}x)}{{[{\stackrel{\u02c7}{g}}^{(k-1)}]}_{{\stackrel{\u02c7}{a}}_{k},n}({\stackrel{\u02c7}{a}}_{k}^{\top}x)}$, i.e.,
${\stackrel{\u02c7}{g}}_{n}^{(k)}(x)=g(x){\mathrm{\Pi}}_{j=1}^{k}\frac{{f}_{{\stackrel{\u02c7}{a}}_{j},n}({\stackrel{\u02c7}{a}}_{j}^{\top}x)}{{[{\stackrel{\u02c7}{g}}^{(j-1)}]}_{{\stackrel{\u02c7}{a}}_{j},n}({\stackrel{\u02c7}{a}}_{j}^{\top}x)}$.

We also note that ${\stackrel{\u02c7}{g}}_{n}^{(k)}$ is a density.

Convergence Study at the k^{th} Step of the Algorithm:

In this paragraph, we show that the sequence (ǎ_{k})_{n} converges towards a_{k} and that the sequence
${({\stackrel{\u02c7}{g}}_{n}^{(k)})}_{n}$ converges towards g^{(k)}.

Let č_{n}(a)= arg sup_{c}_{∈Θ} ℙ_{n}M(c, a), with a ∈ Θ, and γ̌_{n} = arg inf_{a}_{∈Θ} sup_{c}_{∈Θ} ℙ_{n}M(c, a). We state

#### Proposition 4.3

sup_{a∈Θ} ‖č_{n}(a) − a_{k}‖ converges toward 0 and γ̌_{n} converges toward a_{k}, a.s.

Finally, the following theorem shows that
${\stackrel{\u02c7}{g}}_{n}^{(k)}$ converges almost everywhere towards g^{(k)}:

#### Theorem 4.1

It holds ${\stackrel{\u02c7}{g}}_{n}^{(k)}{\to}_{n}{g}^{(k)}$ a.s.

#### Testing of the Criteria

In this paragraph, through a test of our criterion, namely $a\mapsto {D}_{\varphi}({\stackrel{\u02c7}{g}}_{n}^{(k)}\frac{{f}_{a,n}}{{[{\stackrel{\u02c7}{g}}^{(k)}]}_{a,n}},{f}_{n})$, we build a stopping rule for this procedure. First, the next theorem enables us to derive the law of our criterion:

#### Theorem 4.2

For a fixed k, we have
$\sqrt{n}{({\mathit{\text{Var}}}_{\mathbf{\text{P}}}(M({\stackrel{\u02c7}{c}}_{n}({\stackrel{\u02c7}{\gamma}}_{n}),{\stackrel{\u02c7}{\gamma}}_{n})))}^{-1/2}({\mathbb{P}}_{n}M({\stackrel{\u02c7}{c}}_{n}({\stackrel{\u02c7}{\gamma}}_{n}),{\stackrel{\u02c7}{\gamma}}_{n})-{\mathbb{P}}_{n}M({a}_{k},{a}_{k}))\underrightarrow{\mathcal{L}aw}\mathcal{N}(0,I)$, where k represents the k^{th} step of our algorithm and where I is the identity matrix in ℝ^{d}.

Note that k is fixed in theorem 4.2 since γ̌_{n} = arg inf _{a}_{∈Θ} sup_{c}_{∈Θ} ℙ_{n}M(c, a) where M is a known function of k—see Section 4.1. Thus, in the case when
${D}_{\varphi}({g}^{(k-1)}\frac{{f}_{{a}_{k}}}{{g}_{{a}_{k}}^{(k-1)}},f)=0$, we obtain

#### Corollary 4.1

We have $\sqrt{n}{({\mathit{\text{Var}}}_{\mathbf{\text{P}}}(M({\stackrel{\u02c7}{c}}_{n}({\stackrel{\u02c7}{\gamma}}_{n}),{\stackrel{\u02c7}{\gamma}}_{n})))}^{-1/2}{\mathbb{P}}_{n}M({\stackrel{\u02c7}{c}}_{n}({\stackrel{\u02c7}{\gamma}}_{n}),{\stackrel{\u02c7}{\gamma}}_{n})\underrightarrow{\mathcal{L}aw}\mathcal{N}(0,I)$.

Hence, we propose the test of the null hypothesis

$({H}_{0}):{D}_{\varphi}({g}^{(k-1)}\frac{{f}_{{a}_{k}}}{{g}_{{a}_{k}}^{(k-1)}},f)=0$ versus the alternative $({H}_{1}):{D}_{\varphi}({g}^{(k-1)}\frac{{f}_{{a}_{k}}}{{g}_{{a}_{k}}^{(k-1)}},f)\ne 0$.

Based on this result, we stop the algorithm; then, defining a_{k} as the last vector generated, we derive from corollary 4.1 an α-level confidence ellipsoid around a_{k}, namely
${\mathcal{E}}_{k}=\{b\in {\mathbb{R}}^{d};\sqrt{n}{({\mathit{\text{Var}}}_{\mathbf{\text{P}}}(M(b,b)))}^{-1/2}{\mathbb{P}}_{n}M(b,b)\le {q}_{\alpha}^{\mathcal{N}(0,1)}\}$, where
${q}_{\alpha}^{\mathcal{N}(0,1)}$ is the α-level quantile of the standard normal distribution and where ℙ_{n} is the empirical measure arising from a realization of the sequences (X_{1}, …, X_{n}) and (Y_{1}, …, Y_{n}).

Consequently, the following corollary provides us with a confidence region for the above test:

#### Corollary 4.2

$\mathcal{E}$_{k} is a confidence region for the test of the null hypothesis (H_{0}) versus (H_{1}).
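A minimal sketch of the resulting stopping rule, in its scalar version (our own pseudo-implementation; `M_values` stands for a sample M(b, b, X_i) of the dual function M, which the article obtains from the chosen divergence):

```python
import numpy as np
from scipy import stats

def in_confidence_ellipsoid(M_values, alpha=0.9):
    """Check whether sqrt(n) * Var(M)^(-1/2) * P_n M <= q_alpha, i.e.,
    whether the candidate vector b lies in the ellipsoid of corollary 4.2."""
    n = len(M_values)
    stat = np.sqrt(n) * np.mean(M_values) / np.std(M_values, ddof=1)
    return stat <= stats.norm.ppf(alpha)  # q_alpha of N(0, 1)
```

If the last generated vector passes this check, we accept (H_{0}) and stop; otherwise, the algorithm proceeds to the next projection.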

## 5. Goodness-of-Fit Tests

#### 5.1. The Basic Idea

Let f be a density defined on ℝ^{2}. Let us also consider g, a known elliptical density with the same mean and variance as f. Let us also assume that the family (a_{i}) is the canonical basis of ℝ^{2} and that D_{ϕ}(g^{(2)}, f) = 0.

Hence, since lemma C.1 implies that
${g}_{{a}_{j}}^{(j-1)}={g}_{{a}_{j}}$ if j ≤ d, we then have
${g}^{(2)}(x)=g(x)\frac{{f}_{1}}{{g}_{1}}\frac{{f}_{2}}{{g}_{2}^{(1)}}=g(x)\frac{{f}_{1}}{{g}_{1}}\frac{{f}_{2}}{{g}_{2}}$. Moreover, we get g^{(2)} = f, as derived from property B.1.

Consequently,
$f(x)=g(x)\frac{{f}_{1}}{{g}_{1}}\frac{{f}_{2}}{{g}_{2}}$, i.e.,
$\frac{f}{{f}_{1}{f}_{2}}=\frac{g}{{g}_{1}{g}_{2}}$, and then
$\frac{{\partial}^{2}}{\partial x\partial y}{C}_{f}=\frac{{\partial}^{2}}{\partial x\partial y}{C}_{g}$ where C_{f} (resp. C_{g}) is the copula of f (resp. g).

More generally, if f is defined on ℝ^{d}, then the family (a_{i}) is once again free, i.e., linearly independent (see lemma C.5), hence once again a basis of ℝ^{d}. The relationship D_{ϕ}(g^{(d)}, f) = 0 therefore implies that g^{(d)} = f, i.e., for any x ∈ ℝ^{d},
$f(x)={g}^{(d)}(x)=g(x){\mathrm{\Pi}}_{k=1}^{d}\frac{{f}_{{a}_{k}}({a}_{k}^{\top}x)}{{[{g}^{(k-1)}]}_{{a}_{k}}({a}_{k}^{\top}x)}=g(x){\mathrm{\Pi}}_{k=1}^{d}\frac{{f}_{{a}_{k}}({a}_{k}^{\top}x)}{{g}_{{a}_{k}}({a}_{k}^{\top}x)}$ since lemma C.1 implies that
${g}_{{a}_{k}}^{(k-1)}={g}_{{a}_{k}}$ if k ≤ d. In other words, for any x ∈ ℝ^{d}, it holds
$\frac{f(x)}{{\mathrm{\Pi}}_{k=1}^{d}{f}_{{a}_{k}}({a}_{k}^{\top}x)}=\frac{g(x)}{{\mathrm{\Pi}}_{k=1}^{d}{g}_{{a}_{k}}({a}_{k}^{\top}x)}.$

Finally, putting A = (a_{1}, …, a_{d}) and defining the vector y (resp. density f̃, copula C̃_{f} of f̃, density g̃, copula C̃_{g} of g̃) as the expression of the vector x (resp. density f, copula C_{f} of f, density g, copula C_{g} of g) in basis A, the following proposition shows that the density associated with the copula of f equals the density associated with the copula of g in basis A:

#### Proposition 5.1

With the above notations, should a sequence (a_{i})_{i=1,…,d} of non-null vectors in
${\mathbb{R}}_{\ast}^{d}$ defining g^{(d)} and such that D_{ϕ}(g^{(d)}, f) = 0 exist, then
$\frac{{\partial}^{d}}{\partial {y}_{1}\dots \partial {y}_{d}}{\stackrel{\sim}{C}}_{f}=\frac{{\partial}^{d}}{\partial {y}_{1}\dots \partial {y}_{d}}{\stackrel{\sim}{C}}_{g}$.

#### 5.2. With the Elliptical Copula

Let f be an unknown density defined on ℝ^{d}. The objective of the present section is to determine whether the copula of f is elliptical. We thus define an instrumental elliptical density g with the same mean and variance as f, and we follow the procedure of Section 3.2. As explained in Section 5.1, we infer from proposition 5.1 that the copula of f equals the copula of g when D_{ϕ}(g^{(d)}, f) = 0, i.e., when a_{d} is the last vector generated by the algorithm and when (a_{i}) is the canonical basis of ℝ^{d}. Thus, in order to verify this assertion, corollary 4.1 provides us with an α-level confidence ellipsoid around this vector, namely
${\mathcal{E}}_{d}=\{b\in {\mathbb{R}}^{d};\sqrt{n}{({\mathit{\text{Var}}}_{\mathbf{\text{P}}}(M(b,b)))}^{-1/2}{\mathbb{P}}_{n}M(b,b)\le {q}_{\alpha}^{\mathcal{N}(0,1)}\}$, where ℙ_{n} is the empirical measure arising from a realization of the sequences (X_{1}, …, X_{n}) and (Y_{1}, …, Y_{n})—see Appendix D—and where M is a known function of d, f_{n} and ${g}_{n}^{(d-1)}$—see Section 4.1.

Consequently, keeping the notations introduced in Section 5.1, we perform a statistical test of the null hypothesis (H_{0}): the copula of f is the copula of g, versus the alternative (H_{1}): the copula of f is not the copula of g.

Since, under (H_{0}), we have D_{ϕ}(g^{(d)}, f) = 0, then the following theorem provides us with a confidence region for this test.

#### Theorem 5.1

The set $\mathcal{E}$_{d} is a confidence region for the test of the null hypothesis (H_{0}) versus the alternative (H_{1}).

#### Remark 5.1

1/ If D_{ϕ}(g^{(k)}, f) = 0 for some k < d, then we reiterate the algorithm until g^{(d)} is created, in order to obtain a relationship for the copula of f.

2/ If the a_{i} do not constitute the canonical basis, then, keeping the notations introduced in Section 5.1, our algorithm meets the same test stated in basis A. Thus, our method enables us to determine whether the copula of f is elliptical in the (a_{1}, …, a_{d}) basis.

#### 5.3. With the Independent Copulas

Let f be a density on ℝ^{d} and let X be a random vector with f as density. The objective of this section is to determine whether f is the product of its margins, i.e., whether the copula of f is the independent copula. Let g be an instrumental product of univariate Gaussian densities—with diag(Var(X_{1}), …, Var(X_{d})) as covariance matrix and with the same mean as f. As explained in Section 5.2, we follow the procedure described in Section 3.2, i.e., proposition 5.1 implies that the copula of f is the independent copula when D_{ϕ}(g^{(d)}, f) = 0. We then perform a statistical test of the null hypothesis (H_{0}): the copula of f is the independent copula, versus the alternative (H_{1}): the copula of f is not the independent copula.

Since, under (H_{0}), we have D_{ϕ}(g^{(d)}, f) = 0, the following theorem provides us with a confidence region for our test.

#### Theorem 5.2

Keeping the notations of Section 5.2, the set $\mathcal{E}$_{d} is a confidence region for the test of the null hypothesis (H_{0}) versus the alternative (H_{1}).

#### Remark 5.2

(1) As explained in Section 5.2, if D_{ϕ}(g^{(k)}, f) = 0, for k < d, we reiterate the algorithm until g^{(d)} is created in order to derive a relationship for the copula of f.

(2) If the a_{i} do not constitute the canonical basis, then, keeping the notations of Section 5.1, our algorithm meets the same test stated in basis A.

Thus, our method enables us to determine if the copula of f is the independent copula in the (a_{1}, …, a_{d}) basis.
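For instance, the instrumental density g of this section can be built as follows (a Python sketch under our own naming; the article only specifies g as the product of univariate Gaussians with the mean of f and diag(Var(X_{1}), …, Var(X_{d})) as covariance matrix):

```python
import numpy as np
from scipy import stats

def instrumental_g(X):
    """Product of univariate Gaussian densities fitted to each margin of X,
    i.e., a Gaussian with the mean of f and a diagonal covariance matrix."""
    mu = X.mean(axis=0)
    sd = X.std(axis=0, ddof=1)
    return lambda x: np.prod(stats.norm.pdf(x, loc=mu, scale=sd), axis=-1)

# Under (H_0), running the phi-PP algorithm with this g should end with
# D_phi(g^(d), f) = 0 in some basis (a_1, ..., a_d).
```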

#### 5.4. Study of the Subsequence (g^{(k′)}) Defined by D_{ϕ}(g^{(k′)}, f) = 0 for Any k′

Let $\mathcal{Q}$ be the set of non-negative integers defined by
$\mathcal{Q}=\{{k}_{i}^{\prime};{k}_{1}^{\prime}=1,{k}_{q}^{\prime}=d,{k}_{i}^{\prime}<{k}_{i+1}^{\prime}\}$, where q—such that q ≤ d—is its cardinal. In the present section, our goal is to study the subsequence (g^{(k′)}) of the sequence (g^{(k)})_{k=1,…,d} defined by D_{ϕ}(g^{(k′)}, f) = 0 for any k′ belonging to $\mathcal{Q}$.

First, we have:

$${D}_{\varphi}({g}^{(d)},f)=0\iff {g}^{(d)}=f,\phantom{\rule{0.5em}{0ex}}\text{through property B.1,}$$

$$\iff \frac{g(x)}{{\mathrm{\Pi}}_{k=1}^{d}{g}_{{a}_{k}}({a}_{k}^{\top}x)}=\frac{f(x)}{{\mathrm{\Pi}}_{k=1}^{d}{f}_{{a}_{k}}({a}_{k}^{\top}x)},\phantom{\rule{0.5em}{0ex}}\text{as explained in Section 5.2,}$$

$$\iff \frac{\stackrel{\sim}{g}(y)}{{\mathrm{\Pi}}_{k=1}^{d}{\stackrel{\sim}{g}}_{k}({y}_{k})}=\frac{\stackrel{\sim}{f}(y)}{{\mathrm{\Pi}}_{k=1}^{d}{\stackrel{\sim}{f}}_{k}({y}_{k})},$$

which amounts to the previous relationship written in the A = (a_{1}, …, a_{d}) basis with the notations introduced in Section 5.2.

Moreover, defining
${\stackrel{\sim}{k}}_{i}^{\prime}$ as the integer preceding
${k}_{i}^{\prime}$ in the set {1, …, d}, with i > 1, and as explained in Section 3.1, the relationship D_{ϕ}(g^{(k′)}, f) = 0 implies that f̃ factorizes into blocks f̃_{i,i+1}, where f̃_{i,i+1} is the density of the vector $({a}_{i}^{\top}X,\dots ,{a}_{{\stackrel{\sim}{k}}_{i+1}^{\prime}}^{\top}X)$ in the A = (a_{1}, …, a_{d}) basis. Consequently, $\stackrel{\sim}{f}(y)={\stackrel{\sim}{f}}_{1,2}({y}_{1},\dots ,{y}_{{\stackrel{\sim}{k}}_{2}^{\prime}})\cdot {\stackrel{\sim}{f}}_{2,3}({y}_{{k}_{2}^{\prime}},\dots ,{y}_{{\stackrel{\sim}{k}}_{3}^{\prime}})\dots {\stackrel{\sim}{f}}_{q-1,d}({y}_{{k}_{q-1}^{\prime}},\dots ,{y}_{{\stackrel{\sim}{k}}_{d}^{\prime}})$.

Hence, we can infer that

The following theorem explicitly describes the form of the f copula in the A = (a_{1}, …, a_{d}) basis:

#### Theorem 5.3

Defining C̃_{fi,j} as the copula of f̃_{i,j} and keeping the notations introduced in Sections 5.1 and 5.4, it holds

#### Remark 5.3

If there exists i such that i < d and
${k}_{i}^{\prime}={\stackrel{\sim}{k}}_{i+1}^{\prime}$, then the notation
${\stackrel{\sim}{f}}_{i,i+1}({y}_{{k}_{i}^{\prime}},\dots ,{y}_{{\stackrel{\sim}{k}}_{i+1}^{\prime}})$ means
${\stackrel{\sim}{f}}_{{k}_{i}^{\prime}}({y}_{{k}_{i}^{\prime}})$. Thus, if, for any k, we have D_{ϕ}(g^{(k)}, f) = 0, then, for any i < d, we have
${k}_{i}^{\prime}={\stackrel{\sim}{k}}_{i+1}^{\prime}$, i.e., we have
$\stackrel{\sim}{f}={\mathrm{\Pi}}_{k=1}^{d}{\stackrel{\sim}{f}}_{k}({y}_{k})$, where f̃_{k} is the k^{th} marginal density of f̃.

At present, using relationship 5.2 and remark 5.3, the following corollary shows that the copula density of f is equal to 1 in the {a_{1}, …, a_{d}} basis when, for any k, D_{ϕ}(g^{(k)}, f) = 0:

#### Corollary 5.1

In the case where, for any k, D_{ϕ}(g^{(k)}, f) = 0, it holds:
$\frac{{\partial}^{d}}{\partial {y}_{1}\dots \partial {y}_{d}}{\stackrel{\sim}{C}}_{f}({y}_{1},\dots ,{y}_{d})=1.$

## 6. Simulations

Let us examine three simulations and an application to real datasets. The first simulation studies the elliptical copula and the second studies the independent copula. In each simulation, our program aims at creating a sequence of densities (g^{(j)}), j = 1, …, d, such that g^{(0)} = g, g^{(j)} = g^{(j−1)}f_{a_{j}}/[g^{(j−1)}]_{a_{j}} and D_{ϕ}(g^{(d)}, f) = 0, where D_{ϕ} is a divergence—see Appendix B for its definition—and
${a}_{j}=\mathit{\text{arg}}{inf}_{b}{D}_{\varphi}({g}^{(j-1)}{f}_{b}/{g}_{b}^{(j-1)},f)$, for all j = 1, …, d. We therefore perform the tests introduced in theorems 5.1 and 5.2. Finally, the third simulation compares the optimisations obtained when we execute the process with, each time, a new ϕ-divergence.

#### Simulation 6.1

We are in dimension 2(=d), and we use the χ^{2} divergence to perform our optimisations. Let us consider a sample of 50(=n) values of a random variable X with a density f defined by:

Let us then generate a Gaussian random variable Y with a density—which we will call g—presenting the same mean and variance as f.

We theoretically obtain k = 2 and (a_{1}, a_{2}) = ((1, 0), (0, 1)).

To get this result, we perform the following test:

Then, theorem 5.1 enables us to verify (H_{0}) by the following 0.9(=α) level confidence ellipsoid

Results of this optimisation can be found in Table 3 and Figure 1.

Therefore, we can conclude that (H_{0}) is verified.

#### Simulation 6.2

We are in dimension 2(=d), and we use the χ^{2} divergence to perform our optimisations.

Let us consider a sample of 50(=n) values of a random variable X with a density f defined by

Let g be an instrumental product of univariate Gaussian densities with diag(Var(X_{1}), …, Var(X_{d})) as covariance matrix and with the same mean as f.

We theoretically obtain k = 2 and (a_{1}, a_{2}) = ((1, 0), (0, 1)). To get this result, we perform the following test:

Then, theorem 5.2 enables us to verify (H_{0}) by the following 0.9(=α) level confidence ellipsoid

Results of this optimisation can be found in Table 4 and Figure 2.

Therefore, we can conclude that $f={\mathrm{\Pi}}_{i=1}^{d}{f}_{i}$.

#### Simulation 6.3

(On the choice of a ϕ-divergence). In this paragraph, we run our algorithm several times. We first use several ϕ-divergences (see Appendix B for their definitions and notations). We then perform a sensitivity analysis by varying the number n of simulated variables. Finally, we introduce outliers.

At present, we consider a sample of n values of a random variable X with a density f defined by f(x) = Laplace(x_{1})·Gumbel(x_{2}),

where the Gumbel distribution parameters are (1, 2) and where the Laplace distribution parameters are 4 and 3. In theory, we get a_{1} = (0, 1) and a_{2} = (1, 0). Then, following the procedure of the first simulation, we get

n = 50 | Outliers = 0 | Time | Outliers = 2 | Time |
---|---|---|---|---|
Relative Entropy | (0.10, 0.83) (1.13, 0.11) | 30 mn | (0.1, 0.8) (0.80, 0.024) | 43 mn |
χ^{2}-divergence | (0, 0.8) (1.021, 0.09) | 22 mn | (0.12, 0.79) (0.867, −0.104) | 31 mn |
Hellinger distance | (0.1, 0.9) (0.91, 0.15) | 35 mn | (0.1, 0.85) (0.81, 0.14) | 46 mn |

n = 100 | Outliers = 0 | Time | Outliers = 5 | Time |
---|---|---|---|---|
Relative Entropy | (0.09, 0.89) (1.102, 0.089) | 50 mn | (0.1, 0.88) (1.15, 0.144) | 60 mn |
χ^{2}-divergence | (0, 0.9) (0.97, −0.1) | 43 mn | (−0.1, 0.9) (0.87, 0.201) | 52 mn |
Hellinger distance | (0.1, 0.91) (0.93, −0.11) | 57 mn | (−0.05, 1.1) (0.79, 0.122) | 62 mn |

n = 500 | Outliers = 0 | Time | Outliers = 25 | Time |
---|---|---|---|---|
Relative Entropy | (0, 1.07) (1.1, −0.05) | 107 mn | (0.13, 0.75) (0.79, 0.122) | 121 mn |
χ^{2}-divergence | (0, 0.95) (1.12, −0.02) | 91 mn | (0.15, 0.814) (0.922, 0.147) | 103 mn |
Hellinger distance | (−0.01, 0.95) (1.01, −0.073) | 100 mn | (−0.17, 1.3) (0.973, 0.206) | 126 mn |

#### Remark 6.1

We have worked with a computer with the following characteristics:

- Processor: Mobile AMD 3000+
- Memory: 512 MB DDR RAM
- Operating system: Windows XP

Our method, which uses the χ^{2} as ϕ-divergence, is faster, and its performance is as good as, if not better than, that of any other divergence.

This results from the fact that the projection index (or criterion) of the χ^{2} is a second degree polynomial. It is consequently easier and faster to assess. Moreover, these simulations illustrate the robustness of our method.

#### 6.1. Application to Real Datasets

Let us, for instance, study the moves in the stock prices of Renault and Peugeot from January 4, 2010 to July 25, 2010. We thus gather 140(=n) data points from these stock prices; see Table 7 and Table 8 below.

Let us also consider X_{1} (resp. X_{2}) the random variable defining the stock price of Renault (resp. Peugeot). We will assume—as it is commonly done in mathematical finance—that the stock market abides by the classical hypotheses of the Black-Scholes model—see Black and Scholes [34].

Consequently, X_{1} and X_{2} each follow a log-normal distribution.

Let f be the density of the vector (ln(X_{1}), ln(X_{2})). Let us now apply our algorithm to f with the Kullback-Leibler divergence as ϕ-divergence. Let us then generate a Gaussian random variable Y with a density—which we will call g—presenting the same mean and variance as f.

We first assume that there exists a vector a such that ${D}_{\varphi}(g\frac{{f}_{a}}{{g}_{a}},f)=0$.

In order to verify this hypothesis, our reasoning will be the same as in Simulation 6.1. Indeed, we assume that this vector is a co-vector of f. Consequently, corollary 4.2 enables us to estimate a by the following 0.9(=α) level confidence ellipsoid ${\mathcal{E}}_{1}=\{b\in {\mathbb{R}}^{2};{({\mathit{\text{Var}}}_{\mathbf{\text{P}}}(M(b,b)))}^{-1/2}{\mathbb{P}}_{n}M(b,b)\le {q}_{\alpha}^{\mathcal{N}(0,1)}/\sqrt{n}\simeq 0.2533/\sqrt{140}=0.02140776\}$.

Numerical results of the first projection are summarized in Table 5.

Therefore, our first hypothesis is confirmed.

However, our goal is to study the copula of (ln(X_{1}), ln(X_{2})). Then, as explained in Section 5.4, we formulate another hypothesis assuming that there exists a vector a such that
${D}_{\varphi}({g}^{(1)}\frac{{f}_{a}}{{g}_{a}^{(1)}},f)=0$.

In order to verify this hypothesis, we use the same reasoning as above. Indeed, we assume that this vector is a co-vector of f. Consequently, corollary 4.2 enables us to estimate a by the following 0.9(=α) level confidence ellipsoid ${\mathcal{E}}_{2}=\{b\in {\mathbb{R}}^{2};{({\mathit{\text{Var}}}_{\mathbf{\text{P}}}(M(b,b)))}^{-1/2}{\mathbb{P}}_{n}M(b,b)\le {q}_{\alpha}^{\mathcal{N}(0,1)}/\sqrt{n}\simeq 0.2533/\sqrt{140}=0.02140776\}$. Numerical results of the second projection are summarized in Table 6.

Therefore, our second hypothesis is confirmed.

In conclusion, as explained in corollary 5.1, the copula of f is equal to 1 in the {a_{1}, a_{2}} basis.

#### 6.2. Critique of the Simulations

In the case where f is unknown, we can never be sure to have reached the minimum of the ϕ-divergence: the simulated annealing method has been used to solve our optimisation problem, and it is therefore only when the number of random jumps tends, in theory, towards infinity that the probability of getting the minimum tends to 1. We also note that no theory on the optimal number of jumps to implement exists, as this number depends on the specificities of each particular problem.
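For reference, a bare-bones version of such a simulated annealing search over directions in ℝ^{2} might look as follows (our own sketch; the cooling schedule, jump size and `index` function are assumptions, not the article's implementation):

```python
import numpy as np

def anneal_direction(index, n_jumps=2000, temp0=1.0, seed=0):
    """Minimise a projection index over unit directions in R^2 by
    Metropolis-type random jumps with a logarithmic cooling schedule."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0.0, np.pi)
    val = index(t)
    best_t, best_val = t, val
    for k in range(1, n_jumps + 1):
        temp = temp0 / np.log(k + 1.0)
        t_new = (t + rng.normal(0.0, 0.3)) % np.pi
        val_new = index(t_new)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if val_new < val or rng.uniform() < np.exp(-(val_new - val) / temp):
            t, val = t_new, val_new
            if val < best_val:
                best_t, best_val = t, val
    return np.array([np.cos(best_t), np.sin(best_t)]), best_val
```

As noted above, only the limit of infinitely many jumps guarantees that the global minimum is reached with probability tending to 1.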

Moreover, we choose ${50}^{-\frac{4}{4+d}}$ for the AMISE of the two simulations. This choice leads us to simulate 50 random variables—see Scott [23]—none of which have been discarded to obtain the truncated sample.

This has also been the case in our application to real datasets.

Finally, the shape of the copula in the case of real datasets in the {a_{1}, a_{2}} basis is also noteworthy.

Figure 4 shows that the curve reaches a quite wide plateau around 1, whereas Figure 5 shows that this plateau prevails on almost the entire [0, 1]^{2} set. We can therefore conclude that the theoretical analysis is indeed confirmed by the above simulation.

#### 6.3. Conclusions

Projection pursuit is useful in evidencing characteristic structures as well as one-dimensional projections and their associated distributions in multivariate data. This article clearly demonstrates the efficiency of the ϕ-projection pursuit methodology for goodness-of-fit tests for copulas. Indeed, the robustness as well as the convergence results that we achieved convincingly fulfilled our expectations regarding the methodology used.

**Figure 1.** Graph of the estimate of (x_{1}, x_{2}) ↦ c_{ρ}(F_{Gumbel}(x_{1}), F_{Exponential}(x_{2})).

**Table 1.** The main steps of our algorithm.

Step | Description |
---|---|
0 | We define g, a density with same mean and variance as f, and we set g^{(0)} = g. |
i − 1 | We perform the goodness-of-fit test D_{ϕ}(g^{(i−1)}, f) = 0. Should this test be passed, we derive f from $f=g{\mathrm{\Pi}}_{i=1}^{j}\frac{{f}_{{a}_{i}}}{{g}_{{a}_{i}}^{(i-1)}}$ and the algorithm stops. Should this test not be verified, and should we look to approximate f when we get to the d^{th} iteration of the algorithm, we derive f from $f=g{\mathrm{\Pi}}_{i=1}^{d}\frac{{f}_{{a}_{i}}}{{g}_{{a}_{i}}^{(i-1)}}$. Otherwise, we define a vector a_{i} and a density g^{(i)} by ${a}_{i}=\mathit{\text{arg}}{inf}_{a\in {\mathbb{R}}_{\ast}^{d}}{D}_{\varphi}\left({g}^{(i-1)}\frac{{f}_{a}}{{g}_{a}^{(i-1)}},f\right)$ and ${g}^{(i)}={g}^{(i-1)}\frac{{f}_{{a}_{i}}}{{g}_{{a}_{i}}^{(i-1)}}$. |
i | We replace g^{(i−1)} with g^{(i)} and go back to step i − 1. |

**Table 2.** Stochastic implementation of our algorithm.

Step | Description |
---|---|
0 | We define g, a density with same mean and variance as f, and we set ${\stackrel{\u02c7}{g}}_{n}^{(0)}=g$. |
i − 1 | Given ${\stackrel{\u02c7}{g}}_{n}^{(i-1)}$, find ǎ_{i} such that the index ${\stackrel{\u02c7}{D}}_{\varphi}\left({\stackrel{\u02c7}{g}}_{n}^{(i-1)}\frac{{f}_{a,n}}{{\stackrel{\u02c7}{g}}_{a,n}^{(i-1)}},{f}_{n}\right)$ is minimized, where f_{a,n} is a marginal density estimate based on a^{⊤}X_{1}, a^{⊤}X_{2}, …, a^{⊤}X_{n}, and where ${\stackrel{\u02c7}{g}}_{a,n}^{(i-1)}$ is a density estimate based on the projection onto a of a Monte Carlo random sample from ${\stackrel{\u02c7}{g}}_{n}^{(i-1)}$. We then set ${\stackrel{\u02c7}{g}}_{n}^{(i)}={\stackrel{\u02c7}{g}}_{n}^{(i-1)}\frac{{f}_{{\stackrel{\u02c7}{a}}_{i},n}}{{\stackrel{\u02c7}{g}}_{{\stackrel{\u02c7}{a}}_{i},n}^{(i-1)}}$. |
i | We replace ${\stackrel{\u02c7}{g}}_{n}^{(i-1)}$ with ${\stackrel{\u02c7}{g}}_{n}^{(i)}$ and go back to step i − 1, until the criterion reaches the stopping rule of this procedure (see below). |

**Table 3.** Results of the optimisation in Simulation 6.1.

Our Algorithm | |
---|---|
Projection Study 0: | minimum: 0.445199 |
 | at point: (1.0171, 0.0055) |
 | P-Value: 0.94579 |
Test: | H_{1} : a_{1} ∉ $\mathcal{E}$_{1} : True |
Projection Study 1: | minimum: 0.009628 |
 | at point: (0.0048, 0.9197) |
 | P-Value: 0.99801 |
Test: | H_{0} : a_{2} ∈ $\mathcal{E}$_{2} : True |
χ^{2}(Kernel Estimation of g^{(2)}, g^{(2)}) | 3.57809 |

**Table 4.** Results of the optimisation in Simulation 6.2.

Our Algorithm | |
---|---|
Projection Study 0: | minimum: 0.057833 |
 | at point: (0.9890, 0.1009) |
 | P-Value: 0.955651 |
Test: | H_{1} : a_{1} ∉ $\mathcal{E}$_{1} : True |
Projection Study 1: | minimum: 0.02611 |
 | at point: (−0.1105, 0.9290) |
 | P-Value: 0.921101 |
Test: | H_{0} : a_{2} ∈ $\mathcal{E}$_{2} : True |
χ^{2}(Kernel Estimation of g^{(2)}, g^{(2)}) | 1.25945 |

**Table 5.** Numerical results of the first projection (application to real datasets).

Our Algorithm | |
---|---|
Projection Study 0: | minimum: 0.02087685 |
 | at point: a_{1} = (19.1, −12.3) |
 | P-Value: 0.748765 |
Test: | H_{0} : a_{1} ∈ $\mathcal{E}$_{1} : True |
K(Kernel Estimation of g^{(1)}, g^{(1)}) | 4.3428735 |

**Table 6.** Numerical results of the second projection (application to real datasets).

Our Algorithm | |
---|---|
Projection Study 1: | minimum: 0.0198753 |
 | at point: a_{2} = (8.1, 3.9) |
 | P-Value: 0.8743401 |
Test: | H_{0} : a_{2} ∈ $\mathcal{E}$_{2} : True |
K(Kernel Estimation of g^{(2)}, g^{(2)}) | 4.38475324 |

**Table 7.** Stock prices of Renault and Peugeot.

Date | Renault | Peugeot | Date | Renault | Peugeot | Date | Renault | Peugeot |
---|---|---|---|---|---|---|---|---|
23/07/10 | 34.9 | 24.2 | 22/07/10 | 34.26 | 24.01 | 21/07/10 | 33.15 | 23.3 |
20/07/10 | 32.69 | 22.78 | 19/07/10 | 33.24 | 23.36 | 16/07/10 | 33.92 | 23.77 |
15/07/10 | 34.44 | 23.71 | 14/07/10 | 35.08 | 24.36 | 13/07/10 | 35.28 | 24.37 |
12/07/10 | 33.84 | 23.16 | 09/07/10 | 33.46 | 23.13 | 08/07/10 | 33.08 | 22.65 |
07/07/10 | 32.15 | 22.19 | 06/07/10 | 31.12 | 21.56 | 05/07/10 | 30.02 | 20.81 |
02/07/10 | 30.17 | 20.85 | 01/07/10 | 29.56 | 20.05 | 30/06/10 | 30.78 | 21.07 |
29/06/10 | 30.55 | 20.97 | 28/06/10 | 32.34 | 22.3 | 25/06/10 | 31.35 | 21.68 |
24/06/10 | 32.29 | 22.25 | 23/06/10 | 33.58 | 22.47 | 22/06/10 | 33.84 | 22.77 |
21/06/10 | 34.06 | 23.25 | 18/06/10 | 32.89 | 22.7 | 17/06/10 | 32.08 | 22.31 |
16/06/10 | 31.87 | 21.92 | 15/06/10 | 32.03 | 22.12 | 14/06/10 | 31.45 | 22.2 |
11/06/10 | 30.62 | 21.42 | 10/06/10 | 30.42 | 20.93 | 09/06/10 | 29.27 | 20.34 |
08/06/10 | 28.48 | 19.73 | 07/06/10 | 28.92 | 20.15 | 04/06/10 | 29.19 | 20.27 |
03/06/10 | 30.35 | 20.46 | 02/06/10 | 29.33 | 19.53 | 01/06/10 | 28.87 | 19.45 |
31/05/10 | 29.39 | 19.54 | 28/05/10 | 29.16 | 19.55 | 27/05/10 | 29.18 | 19.81 |
26/05/10 | 27.5 | 18.5 | 25/05/10 | 26.76 | 18.08 | 24/05/10 | 28.75 | 18.81 |
21/05/10 | 28.78 | 18.82 | 20/05/10 | 28.53 | 18.84 | 19/05/10 | 29.49 | 19.25 |
18/05/10 | 30.95 | 19.76 | 17/05/10 | 30.92 | 19.35 | 14/05/10 | 31.35 | 19.34 |
13/05/10 | 33.65 | 20.76 | 12/05/10 | 33.63 | 20.52 | 11/05/10 | 33.38 | 20.34 |
10/05/10 | 33.28 | 20.3 | 07/05/10 | 31 | 19.24 | 06/05/10 | 32.4 | 20.22 |
05/05/10 | 32.95 | 20.45 | 04/05/10 | 33.3 | 21.03 | 03/05/10 | 35.58 | 22.63 |
30/04/10 | 35.41 | 22.45 | 29/04/10 | 35.53 | 22.36 | 28/04/10 | 34.75 | 22.33 |

**Table 8.** Stock prices of Renault and Peugeot (continued).

Date | Renault | Peugeot | Date | Renault | Peugeot | Date | Renault | Peugeot |
---|---|---|---|---|---|---|---|---|
27/04/10 | 36.2 | 22.9 | 26/04/10 | 37.65 | 23.73 | 23/04/10 | 36.72 | 23.5 |
22/04/10 | 34.36 | 22.72 | 21/04/10 | 35.01 | 22.86 | 20/04/10 | 35.62 | 22.88 |
19/04/10 | 34.08 | 21.77 | 16/04/10 | 34.46 | 21.71 | 15/04/10 | 35.16 | 22.22 |
14/04/10 | 35.1 | 22.22 | 13/04/10 | 35.28 | 22.45 | 12/04/10 | 35.17 | 21.85 |
09/04/10 | 35.76 | 21.9 | 08/04/10 | 35.67 | 21.67 | 07/04/10 | 36.5 | 21.89 |
06/04/10 | 36.87 | 22 | 01/04/10 | 35.5 | 21.97 | 31/03/10 | 34.7 | 21.8 |
30/03/10 | 34.8 | 22.24 | 29/03/10 | 35.7 | 22.73 | 26/03/10 | 35.54 | 22.58 |
25/03/10 | 35.53 | 22.73 | 24/03/10 | 33.8 | 21.82 | 23/03/10 | 34.1 | 21.58 |
22/03/10 | 33.73 | 21.64 | 19/03/10 | 34.12 | 21.68 | 18/03/10 | 34.44 | 21.75 |
17/03/10 | 34.68 | 21.98 | 16/03/10 | 34.33 | 21.88 | 15/03/10 | 33.57 | 21.53 |
12/03/10 | 33.9 | 21.86 | 11/03/10 | 33.27 | 21.58 | 10/03/10 | 33.12 | 21.47 |
09/03/10 | 32.69 | 21.54 | 08/03/10 | 32.99 | 21.66 | 05/03/10 | 32.89 | 21.85 |
04/03/10 | 31.64 | 21.26 | 03/03/10 | 31.65 | 20.7 | 02/03/10 | 31.05 | 20.2 |
01/03/10 | 30.26 | 19.54 | 26/02/10 | 30.2 | 19.39 | 25/02/10 | 29.42 | 18.98 |
24/02/10 | 30.9 | 19.49 | 23/02/10 | 30.54 | 19.74 | 22/02/10 | 31.89 | 20.06 |
19/02/10 | 32.29 | 20.67 | 18/02/10 | 32.26 | 20.41 | 17/02/10 | 31.69 | 20.31 |
16/02/10 | 31.08 | 19.8 | 15/02/10 | 30.25 | 19.66 | 12/02/10 | 29.56 | 19.57 |
11/02/10 | 31 | 20.4 | 10/02/10 | 32.78 | 21.21 | 09/02/10 | 33.31 | 22.31 |
08/02/10 | 32.63 | 21.95 | 05/02/10 | 32.15 | 22.33 | 04/02/10 | 33.72 | 22.86 |
03/02/10 | 35.32 | 23.93 | 02/02/10 | 35.29 | 23.8 | 01/02/10 | 35.31 | 24.05 |
29/01/10 | 34.26 | 23.64 | 28/01/10 | 33.94 | 23.31 | 27/01/10 | 33.85 | 23.88 |
26/01/10 | 34.97 | 24.86 | 25/01/10 | 35.06 | 24.35 | 22/01/10 | 35.7 | 24.95 |
21/01/10 | 36.1 | 25 | 20/01/10 | 36.92 | 25.35 | 19/01/10 | 38.4 | 25.81 |
18/01/10 | 39.28 | 25.95 | 15/01/10 | 38.6 | 25.7 | 14/01/10 | 39.56 | 26.67 |
13/01/10 | 39.49 | 26.13 | 12/01/10 | 38.36 | 25.98 | 11/01/10 | 39.21 | 26.65 |
08/01/10 | 39.38 | 26.5 | 07/01/10 | 39.69 | 26.7 | 06/01/10 | 39.25 | 26.32 |
05/01/10 | 38.31 | 24.74 | 04/01/10 | 38.2 | 24.52 | | | |

## Appendix

All the proofs of this article have been gathered in the Technical Report [24].

## A. On the Different Families of Copula

There exist many copula families. Let us present here the most important among them.

#### A.1. Elliptical Copulas

#### The Gaussian Copula

The Gaussian copula can be used in several fields. For example, many credit models are built from this copula, which also presents the property of making extreme values (minimal or maximal) asymptotically independent; see Joe [2] for more details. In ℝ^{2}, it is derived from the bivariate normal distribution and from Sklar's theorem. Defining Ψ_{ρ} as the standard bivariate normal cumulative distribution function with ρ correlation, the Gaussian copula function is C_{ρ}(u, v) = Ψ_{ρ}(Ψ^{−1}(u), Ψ^{−1}(v)), where u, v ∈ [0, 1] and where Ψ is the standard normal cumulative distribution function. Then, with R denoting the correlation matrix, the copula density function is:

$${c}_{R}({u}_{1},\dots ,{u}_{d})=|R{|}^{-1/2}\mathit{\text{exp}}\left(-\frac{1}{2}{\zeta}^{\top}({R}^{-1}-{I}_{d})\zeta \right),\phantom{\rule{1em}{0ex}}\zeta =({\mathrm{\Psi}}^{-1}({u}_{1}),\dots ,{\mathrm{\Psi}}^{-1}({u}_{d}){)}^{\top},$$

for any (u_{1}, …, u_{d}) in [0, 1]^{d}.

#### The Elliptical Copula

Let us begin with defining the class of elliptical distributions and its properties—see also Cambanis [17], Landsman [18]:

#### Definition A.1

X is said to abide by a multivariate elliptical distribution, denoted X ∼ E_{d}(μ, Σ, ξ_{d}), if X has the following density, for any x in ℝ^{d}:

$${f}_{X}(x)=\frac{{\alpha}_{d}}{\sqrt{|\mathrm{\Sigma}|}}{\xi}_{d}\left(\frac{1}{2}(x-\mu {)}^{\top}{\mathrm{\Sigma}}^{-1}(x-\mu )\right)$$

where ξ_{d} is referred to as the "density generator" and where α_{d} is a normalisation constant, such that
${\alpha}_{d}=\frac{\mathrm{\Gamma}(d/2)}{{(2\pi )}^{d/2}}{\left({\int}_{0}^{\infty}{x}^{d/2-1}{\xi}_{d}(x)dx\right)}^{-1}$,
with ${\int}_{0}^{\infty}{x}^{d/2-1}{\xi}_{d}(x)dx<\infty $.
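Under the reconstruction above, such a density can be evaluated as in the following sketch (our own illustration; `alpha_d` must be the normalisation constant matching the generator `xi_d`):

```python
import numpy as np

def elliptical_pdf(x, mu, Sigma, xi_d, alpha_d):
    """Density of E_d(mu, Sigma, xi_d): alpha_d / sqrt(|Sigma|) *
    xi_d( (x - mu)' Sigma^{-1} (x - mu) / 2 )."""
    diff = np.asarray(x) - np.asarray(mu)
    quad = 0.5 * diff @ np.linalg.solve(Sigma, diff)
    return alpha_d / np.sqrt(np.linalg.det(Sigma)) * xi_d(quad)

# Gaussian case (remark A.1): xi_d(t) = exp(-t) and alpha_d = (2*pi)**(-d/2).
d = 2
print(elliptical_pdf(np.zeros(d), np.zeros(d), np.eye(d),
                     lambda t: np.exp(-t), (2.0 * np.pi) ** (-d / 2)))
# -> 1 / (2*pi) ~ 0.1592, the standard bivariate normal density at the origin.
```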

#### Property A.1

(1) For any X ∼ E_{d}(μ, Σ, ξ_{d}), any m × d matrix A with rank m ≤ d, and any m-dimensional vector b, we have AX + b ∼ E_{m}(Aμ + b, AΣA′, ξ_{m}).

Therefore, any marginal density of a multivariate elliptical distribution is elliptical, i.e.,
$X=({X}_{1},{X}_{2},\dots ,{X}_{d})\sim {E}_{d}(\mu ,\mathrm{\Sigma},{\xi}_{d})\Rightarrow {X}_{i}\sim {E}_{1}({\mu}_{i},{\sigma}_{i}^{2},{\xi}_{1})$, 1 ≤ i ≤ d, with
${f}_{{X}_{i}}(x)=\frac{{\alpha}_{1}}{{\sigma}_{i}}{\xi}_{1}\left(\frac{1}{2}{\left(\frac{x-{\mu}_{i}}{{\sigma}_{i}}\right)}^{2}\right)$.

(2) Corollary 5 of Cambanis [17] states that conditional densities of elliptical distributions are also elliptical. Indeed, if X = (X_{1}, X_{2})′ ∼ E_{d}(μ, Σ, ξ_{d}), with X_{1} (resp. X_{2}) of size d_{1} < d (resp. d_{2} < d), then X_{1}/(X_{2} = a) ∼ E_{d_{1}}(μ′, Σ′, ξ_{d_{1}}) with
$\mu \prime ={\mathrm{\mu}}_{1}+{\mathrm{\Sigma}}_{12}{\mathrm{\Sigma}}_{22}^{-1}(a-{\mu}_{2})$ and
$\mathrm{\Sigma}\prime ={\mathrm{\Sigma}}_{11}-{\mathrm{\Sigma}}_{12}{\mathrm{\Sigma}}_{22}^{-1}{\mathrm{\Sigma}}_{21}$, with μ = (μ_{1}, μ_{2}) and Σ = (Σ_{ij})_{1≤i,j≤2}.

#### Remark A.1

Landsman [18] shows that multivariate Gaussian distributions derive from ξ_{d}(x) = e^{−x} and that if X = (X_{1}, …, X_{d}) has an elliptical density such that its marginals verify E(X_{i}) < ∞ and
$E({X}_{i}^{2})<\infty $ for 1 ≤ i ≤ d, then μ is the mean of X and Σ is a multiple of the covariance matrix of X. Consequently, from now on, we will assume this is indeed the case.

#### Definition A.2

Let t be an elliptical density on ℝ^{k} and let q be an elliptical density on ℝ^{k′}. The elliptical densities t and q are said to belong to the same family of elliptical densities, if their generating densities are ξ_{k} and ξ_{k′} respectively, which belong to a common given family of densities.

#### Example A.1

Consider two Gaussian densities $\mathcal{N}(0,1)$ and $\mathcal{N}((0,0),I{d}_{2})$. They are said to belong to the same elliptical family, as they both present x ↦ e^{−x} as generating density.

Finally, let us introduce the definition of an elliptical copula which generalizes the above overview of the Gaussian copula:

#### Definition A.3

Elliptical copulas are the copulas of elliptical distributions.

#### A.2. Archimedean Copulas

These copulas exhibit a simple form as well as properties such as associativity. They also present a variety of dependence structures. They can generally be defined under the following form

$$C({u}_{1},\dots ,{u}_{n})={\xi}^{-1}(\xi ({u}_{1})+\dots +\xi ({u}_{n}))$$

where (u_{1}, u_{2}, …, u_{n}) ∈ [0, 1]^{n} and where ξ is known as a "generator function". This ξ function must be at least d − 2 times continuously differentiable, must have a decreasing and convex (d − 2)^{th} derivative, and must be such that ξ(1) = 0.

Let us now present several examples:

Clayton copula:

The Clayton copula is an asymmetric Archimedean copula, displaying greater dependency in the negative tail than in the positive tail. Let us define X (resp. Y) as the random variable having F (resp. G) as cumulative distribution function (CDF). Assuming that the vector (X, Y) has a Clayton copula, then this copula is given by:

$$\mathcal{A}(x,y)={(F{(x)}^{\theta}+G{(y)}^{\theta}-1)}^{1/\theta}$$

and its generator is:

$$\xi (x)={x}^{\theta}-1$$

In the limit θ → 0, the random variables are independent.
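Under this parameterisation, the Clayton copula can be evaluated directly from its generator (a quick sketch of ours; here θ < 0 yields the usual positively dependent Clayton family):

```python
def clayton(u, v, theta):
    """Clayton copula A(u, v) = (u**theta + v**theta - 1)**(1/theta),
    i.e., xi^{-1}(xi(u) + xi(v)) with xi(x) = x**theta - 1."""
    return (u ** theta + v ** theta - 1.0) ** (1.0 / theta)

# ~0.378 > 0.25 = Pi(0.5, 0.5): positive dependence at theta = -2.
print(clayton(0.5, 0.5, -2.0))
```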

Gumbel copula:

The Gumbel copula (Gumbel-Hougard copula) is an asymmetric Archimedean copula, presenting greater dependency in the positive tail than in the negative tail. Its generator is:

$$\xi (x)={(-ln(x))}^{\alpha}$$

Frank copula:

The Frank copula is a symmetric Archimedean copula, with generator:

$$\xi (x)=ln\left(\frac{{e}^{\alpha x}-1}{{e}^{\alpha}-1}\right)$$

#### A.3. Periodic Copula

In 2005, Alfonsi and Brigo [25] derived a new way of generating copulas based on periodic functions: defining h as a 1-periodic non-negative function that integrates to 1 over [0, 1], and considering a double primitive of h, they show how copulas can be built from this double primitive.

## B. ϕ-Divergence

Let us call h_{a} the density of a^{⊤}Z if h is the density of Z. Let φ be a strictly convex function defined by
$\phi :\overline{\mathbb{R}+}\to \overline{\mathbb{R}+}$, and such that φ(1) = 0.

#### Definition B.1

We define a ϕ-divergence of P from Q, where P and Q are two probability distributions over a space Ω such that Q is absolutely continuous with respect to P, by

$${D}_{\varphi}(Q,P)={\int}_{\mathrm{\Omega}}\phi \left(\frac{dQ}{dP}\right)\phantom{\rule{0.2em}{0ex}}dP$$

The most commonly used distances (Kullback-Leibler, Hellinger or χ^{2}) belong to the Cressie-Read family (see Cressie and Read [26], Csiszár [27], and the books of Liese and Vajda [28], Pardo [29] and Zografos [30]). They are defined by a specific φ. Indeed,

- with the Kullback-Leibler divergence, we associate φ(x) = K(x) = x ln(x) − x + 1;
- with the Hellinger distance, we associate $\phi (x)=H(x)=2{(\sqrt{x}-1)}^{2}$;
- with the χ^{2} distance, we associate $\phi (x)={\chi}^{2}(x)=\frac{1}{2}{(x-1)}^{2}$;
- more generally, with power divergences, we associate $\phi (x)=\frac{{x}^{\gamma}-\gamma x+\gamma -1}{\gamma (\gamma -1)}$, where γ ∈ ℝ \ {0, 1};
- and, finally, with the L^{1} norm, which is also a divergence, we associate φ(x) = |x − 1|.

Let us now state some well-known properties of divergences.

#### Property B.1

We have D_{ϕ}(P, Q) = 0 ⇔ P = Q.

#### Property B.2

The divergence function Q ↦ D_{ϕ}(Q, P) is convex and lower semi-continuous for the topology that makes all the mappings of the form Q ↦ ∫ f dQ continuous (where f is bounded and continuous), as well as lower semi-continuous for the topology of uniform convergence.

Finally, we will also use the following property, derived from the first part of corollary (1.29) page 19 of Liese and Vajda [28]:

#### Property B.3

If T : (X, A) → (Y, B) is measurable and if D_{ϕ}(P, Q) < ∞, then D_{ϕ}(P, Q) ≥ D_{ϕ}(PT^{−1}, QT^{−1}) with equality being reached when T is surjective for (P, Q).

## C. Miscellaneous

#### Lemma C.1

For any p ≤ d, we have ${g}_{{a}_{p}}^{(p-1)}={g}_{{a}_{p}}$.

#### Lemma C.2

We have $g(./{a}_{1}^{\top}x,\dots ,{a}_{j}^{\top}x)=n({a}_{j+1}^{\top}x,\dots ,{a}_{d}^{\top}x)=f(./{a}_{1}^{\top}x,\dots ,{a}_{j}^{\top}x)$.

#### Lemma C.3

Should there exist a family (a_{i})_{i=1…d} such that
$f(x)=n({a}_{j+1}^{\top}x,\dots ,{a}_{d}^{\top}x)h({a}_{1}^{\top}x,\dots ,{a}_{j}^{\top}x)$, with j < d, with f, n and h being densities, then this family is an orthogonal basis of ℝ^{d}.

#### Lemma C.4

${inf}_{a\in {\mathbb{R}}_{\ast}^{d}}{D}_{\varphi}(g\frac{{f}_{a}}{{g}_{a}},f)$ is reached when the ϕ-divergence is greater than the L^{1} distance as well as the L^{2} distance.

#### Lemma C.5

Whenever there exists p, p ≤ d, such that D_{ϕ}(g^{(p)}, f) = 0, then the family of (a_{i})_{i=1,…,p} is free and is orthogonal.

#### Lemma C.6

For any continuous density f, we have ${y}_{m}={\mathit{\text{sup}}}_{x}|{f}_{m}(x)-f(x)|={O}_{\mathbf{\text{P}}}({m}^{-\frac{2}{4+d}})$.

## D. Study of the Sample

Let X_{1}, X_{2}, …, X_{m} be a sequence of independent random vectors with the same density f. Let Y_{1}, Y_{2}, …, Y_{m} be a sequence of independent random vectors with the same density g. Then, the kernel estimators f_{m}, g_{m}, f_{a,m} and g_{a,m} of f, g, f_{a} and g_{a}, for all
$a\in {\mathbb{R}}_{\ast}^{d}$, almost surely and uniformly converge, since we assume that the bandwidth h_{m} of these estimators meets the following conditions (see Bosq [32]):

Our objective is to estimate the minimum of ${D}_{\varphi}(g\frac{{f}_{a}}{{g}_{a}},f)$. To achieve this, samples have to be truncated:

Let us now consider a positive sequence θ_{m} such that θ_{m} → 0, ${y}_{m}/{\theta}_{m}^{2}\to 0$—where y_{m} is the almost sure convergence rate of the kernel density estimator,
${y}_{m}={O}_{\mathbf{\text{P}}}({m}^{-\frac{2}{4+d}})$, see lemma C.6—${y}_{m}^{(1)}/{\theta}_{m}^{2}\to 0$—where
${y}_{m}^{(1)}$ is the corresponding almost sure convergence rate for the estimates f_{a,m}, uniformly in $a\in {\mathbb{R}}_{\ast}^{d}$—and, finally, $\frac{{y}_{m}^{(2)}}{{\theta}_{m}^{2}}\to 0$—where ${y}_{m}^{(2)}$ is the corresponding almost sure convergence rate for the estimates g_{a,m}, uniformly in $a\in {\mathbb{R}}_{\ast}^{d}$.

We then derive f_{m}, g_{m} and g_{b,m} from the starting sample and we keep only the vectors X_{i} and Y_{i} such that f_{m}(X_{i}) ≥ θ_{m} and g_{b,m}(b^{⊤}Y_{i}) ≥ θ_{m}, for all i and for all
$b\in {\mathbb{R}}_{\ast}^{d}$.

The vectors meeting these conditions will be denoted X_{1}, X_{2}, …, X_{n} and Y_{1}, Y_{2}, …, Y_{n}.
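This selection step can be sketched as follows in Python (purely illustrative assumptions on our part: d = 2, Gaussian samples for f and g, scipy's gaussian_kde as a stand-in for the paper's kernel estimators, ν chosen as in Remark D.1 below, an arbitrary scale factor 0.05 on θ_{m}, and a single direction b, whereas the paper requires the condition for all b in ℝ_{*}^{d}):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
d, m = 2, 2000
nu = 0.5 / (4 + d)                     # 0 < nu < 1/(4+d), cf. Remark D.1
theta_m = 0.05 * m ** (-nu)            # threshold (0.05: illustrative scaling)

X = rng.standard_normal((m, d))        # sample with density f
Y = 1.5 * rng.standard_normal((m, d))  # sample with density g

f_m = gaussian_kde(X.T)                # kernel estimator of f
b = np.array([1.0, 0.0])               # one direction b in R^d_*
g_b_m = gaussian_kde(Y @ b)            # kernel estimator of the margin g_b

# Keep only the vectors whose estimated density exceeds the threshold.
X_n = X[f_m(X.T) >= theta_m]
Y_n = Y[g_b_m(Y @ b) >= theta_m]
print(f"kept {len(X_n)}/{m} of the X_i and {len(Y_n)}/{m} of the Y_i")
```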

Consequently, the next proposition provides us with the condition required to derive our estimates:

#### Proposition D.1

Using the notations introduced in Broniatowski and Keziou [33] and in Section 4.1, it holds that ${lim}_{n\to \infty}{sup}_{a\in {\mathbb{R}}_{\ast}^{d}}|({B}_{1}(n,a)-{B}_{2}(n,a))-{D}_{\varphi}(g\frac{{f}_{a}}{{g}_{a}},f)|=0$.

#### Remark D.1

With the Kullback-Leibler divergence, we can take θ_{m} = m^{−ν}, with
$0<\nu <\frac{1}{4+d}$.

## E. Discussion of the Hypotheses

Not all hypotheses will be used simultaneously.

Hypotheses (A1) and (A4) lead us to assume that we are dealing with a saddle point: being used to demonstrate the convergence of č_{n}(a) and γ_{k} towards a_{k}, they make it easier to use the dual form of the divergence. Moreover, since our criterion
$a\mapsto {D}_{\varphi}(g\frac{{f}_{a}}{{g}_{a}},f)$ is differentiable on
${\mathbb{R}}_{\ast}^{d}$ and continuously differentiable on ℝ^{d}, these hypotheses are easily satisfied. However, should there exist other discontinuities, at which the criterion cannot be extended by continuity, these hypotheses would be very difficult to verify, even in very favorable cases.

As shown in the subsection below for the relative entropy, hypothesis (A2) generally holds.

Hypotheses (A5) and (A7) are classical hypotheses from which a limit distribution for the criterion can be derived. Yet these hypotheses are difficult to verify when the criterion $a\mapsto {D}_{\varphi}(g\frac{{f}_{a}}{{g}_{a}},f)$ admits discontinuities (close to the co-vectors of f) at which it cannot be continuously differentiable.

Hypothesis (A6) thus enables us to define a stopping rule for the process, since this hypothesis is equivalent to the vanishing of the mapping
${D}_{\varphi}(g\frac{{f}_{a}}{{g}_{a}},f)$ at a_{k}.

Hypothesis (A0) constitutes an alternative to the starting hypothesis according to which the divergence should be greater than the L^{1} distance. Although weaker, this hypothesis still requires that, at each iteration i of the algorithm, we have K(g^{(i)}, f) ≥ ∫ |f(x) − g^{(i)}(x)|dx.
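This requirement can be checked numerically for given densities. It typically holds for well-separated densities but genuinely fails for close ones, since by Pinsker's inequality K is only guaranteed to dominate half the squared L^{1} distance; this is why it has to be assumed. Below, a quick numerical check (our own sketch; the Gaussian pair is an arbitrary choice):

```python
import numpy as np

grid = np.linspace(-12.0, 12.0, 4801)
dx = grid[1] - grid[0]
norm = lambda x, mu: np.exp(-(x - mu) ** 2 / 2) / np.sqrt(2 * np.pi)
f, g = norm(grid, 0.0), norm(grid, 3.0)   # f and a candidate g^(i)

K = np.sum(f * np.log(f / g)) * dx        # K(g^(i), f) = integral of f ln(f/g), here 4.5
L1 = np.sum(np.abs(f - g)) * dx           # integral of |f - g^(i)|, here about 1.73
print(f"K = {K:.3f} >= L1 = {L1:.3f}: {K >= L1}")
```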

#### E.1. Discussion of (A2)

Let us work with the Kullback-Leibler divergence and with g and a_{1}.

For all
$b\in {\mathbb{R}}_{\ast}^{d}$, we have
$\int {\phi}^{\ast}({\phi}^{\prime}(\frac{g(x){f}_{b}({b}^{\top}x)}{f(x){g}_{b}({b}^{\top}x)}))\phantom{\rule{0.2em}{0ex}}f(x)dx=\int (\frac{g(x){f}_{b}({b}^{\top}x)}{f(x){g}_{b}({b}^{\top}x)}-1)\phantom{\rule{0.2em}{0ex}}f(x)dx=0$, since, for any b in
${\mathbb{R}}_{\ast}^{d}$, the function
$x\mapsto g(x)\frac{{f}_{b}({b}^{\top}x)}{{g}_{b}({b}^{\top}x)}$ is a density. The complement of Θ^{Dϕ} in
${\mathbb{R}}_{\ast}^{d}$ is thus ∅, and the supremum sought in ℝ̅ is −∞. We can therefore conclude. It is interesting to note that we obtain the same verification with f, g^{(k−1)} and a_{k}.
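The first equality above rests on the explicit form of the convex conjugate of the Kullback-Leibler generator, which we spell out for convenience: since φ(x) = x ln(x) − x + 1, we have ${\phi}^{\prime}(x)=ln(x)$ and ${\phi}^{\ast}(u)={sup}_{x>0}\{ux-\phi (x)\}={e}^{u}-1$, so that ${\phi}^{\ast}({\phi}^{\prime}(x))={e}^{ln(x)}-1=x-1$.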

#### E.2. Discussion of (A3)

This hypothesis consists of the following assumptions:

(0) we work with the Kullback-Leibler divergence;

(1) we have
$f(./{a}_{1}^{\top}x)=g(./{a}_{1}^{\top}x)$, i.e.,
$K\left(g\frac{{f}_{{a}_{1}}}{{g}_{{a}_{1}}},f\right)=0$; we could also derive the same proof with f, g^{(k−1)} and a_{k}.

#### Preliminary (A)

We show that $A=\{(c,x)\in {\mathbb{R}}_{\ast}^{d}\backslash \{{a}_{1}\}\times {\mathbb{R}}^{d};\frac{{f}_{{a}_{1}}({a}_{1}^{\top}x)}{{g}_{{a}_{1}}({a}_{1}^{\top}x)}>\frac{{f}_{c}({c}^{\top}x)}{{g}_{c}({c}^{\top}x)},g(x)\frac{{f}_{c}({c}^{\top}x)}{{g}_{c}({c}^{\top}x)}>f(x)\}=\varnothing $ by reductio ad absurdum, i.e., by assuming A ≠ ∅.

Thus, our hypothesis enables us to derive a contradiction.

#### Preliminary (B)

We show that $B=\{(c,x)\in {\mathbb{R}}_{\ast}^{d}\backslash \{{a}_{1}\}\times {\mathbb{R}}^{d};\frac{{f}_{{a}_{1}}({a}_{1}^{\top}x)}{{g}_{{a}_{1}}({a}_{1}^{\top}x)}<\frac{{f}_{c}({c}^{\top}x)}{{g}_{c}({c}^{\top}x)},g(x)\frac{{f}_{c}({c}^{\top}x)}{{g}_{c}({c}^{\top}x)}<f(x)\}=\varnothing $ by reductio ad absurdum, i.e., by assuming B ≠ ∅.

Thus, our hypothesis enables us to derive a contradiction.

We can consequently conclude as above.

Let us now verify (A3):

We have $PM(c,{a}_{1})-PM(c,a)=\int ln(\frac{g(x){f}_{c}({c}^{\top}x)}{{g}_{c}({c}^{\top}x)f(x)})\{\frac{{f}_{{a}_{1}}({a}_{1}^{\top}x)}{{g}_{{a}_{1}}({a}_{1}^{\top}x)}-\frac{{f}_{c}({c}^{\top}x)}{{g}_{c}({c}^{\top}x)}\}g(x)dx$. Moreover, the logarithm ln is negative on $\{x\in {\mathbb{R}}_{\ast}^{d};\frac{g(x){f}_{c}({c}^{\top}x)}{{g}_{c}({c}^{\top}x)f(x)}<1\}$ and non-negative on $\{x\in {\mathbb{R}}_{\ast}^{d};\frac{g(x){f}_{c}({c}^{\top}x)}{{g}_{c}({c}^{\top}x)f(x)}\ge 1\}$.

Thus, the preliminary studies (A) and (B) show that
$ln(\frac{g(x){f}_{c}({c}^{\top}x)}{{g}_{c}({c}^{\top}x)f(x)})$ and
$\{\frac{{f}_{{a}_{1}}({a}_{1}^{\top}x)}{{g}_{{a}_{1}}({a}_{1}^{\top}x)}-\frac{{f}_{c}({c}^{\top}x)}{{g}_{c}({c}^{\top}x)}\}$ always have a nonpositive product. We can therefore conclude, since (c, a) ↦ PM(c, a_{1}) − PM(c, a) is non-null for all c and all a with a ≠ a_{1}.

## References

- Sklar, A. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Stat. Univ. Paris **1959**, 8, 229–231.
- Joe, H. Multivariate Models and Dependence Concepts; Monographs on Statistics and Applied Probability, 1st ed.; Chapman and Hall/CRC: London, UK, 1997.
- Nelsen, R.B. An Introduction to Copulas; Springer Series in Statistics, 2nd ed.; Springer: New York, NY, USA, 2006.
- Carriere, J.F. A large sample test for one-parameter families of copulas. Comm. Stat. Theor. Meth. **1994**, 23, 1311–1317.
- Genest, C.; Rémillard, B. Tests of independence and randomness based on the empirical copula process. Test **2004**, 13, 335–370.
- Fermanian, J.D. Goodness-of-fit tests for copulas. J. Multivar. Anal. **2005**, 95, 119–152.
- Genest, C.; Quessy, J.F.; Rémillard, B. Goodness-of-fit procedures for copula models based on the probability integral transformation. Scand. J. Stat. **2006**, 33, 337–366.
- Michiels, F.; De Schepper, A. A Copula Test Space Model—How to Avoid the Wrong Copula Choice. Kybernetika **2008**, 44, 864–878.
- Genest, C.; Favre, A.-C.; Béliveau, J.; Jacques, C. Metaelliptical copulas and their use in frequency analysis of multivariate hydrological data. Water Resour. Res. **2007**, 43, W09401:1–W09401:12.
- Mesfioui, M.; Quessy, J.F.; Toupin, M.H. On a new goodness-of-fit process for families of copulas. La Revue Canadienne de Statistique **2009**, 37, 80–101.
- Genest, C.; Rémillard, B.; Beaudoin, D. Goodness-of-fit tests for copulas: A review and a power study. Insur. Math. Econ. **2009**, 44, 199–213.
- Berg, D. Copula goodness-of-fit testing: An overview and power comparison. Eur. J. Finance **2009**, 15, 675–701.
- Bücher, A.; Dette, H. Some comments on goodness-of-fit tests for the parametric form of the copula based on L^{2}-distances. J. Multivar. Anal. **2010**, 101, 749–763.
- Broniatowski, M.; Leorato, S. An estimation method for the Neyman chi-square divergence with application to test of hypotheses. J. Multivar. Anal. **2006**, 97, 1409–1436.
- Friedman, J.H.; Stuetzle, W.; Schroeder, A. Projection pursuit density estimation. J. Am. Stat. Assoc. **1984**, 79, 599–608.
- Huber, P.J. Projection pursuit. Ann. Stat. **1985**, 13, 435–525.
- Cambanis, S.; Huang, S.; Simons, G. On the theory of elliptically contoured distributions. J. Multivar. Anal. **1981**, 11, 368–385.
- Landsman, Z.M.; Valdez, E.A. Tail conditional expectations for elliptical distributions. N. Am. Actuar. J. **2003**, 7, 55–71.
- Yohai, V.J. Optimal robust estimates using the Kullback-Leibler divergence. Stat. Probab. Lett. **2008**, 78, 1811–1816.
- Toma, A. Optimal robust M-estimators using divergences. Stat. Probab. Lett. **2009**, 79, 1–5.
- Huber, P.J. Robust Statistics; Wiley: New York, NY, USA, 1981; republished in paperback, 2004.
- van der Vaart, A.W. Asymptotic Statistics; Cambridge Series in Statistical and Probabilistic Mathematics; Cambridge University Press: Cambridge, UK, 1998.
- Scott, D.W. Multivariate Density Estimation: Theory, Practice, and Visualization; Wiley Series in Probability and Mathematical Statistics; Wiley-Interscience: New York, NY, USA, 1992.
- Touboul, J. Goodness-of-fit tests for elliptical and independent copulas through projection pursuit. arXiv **2011**, arXiv:1103.0498.
- Alfonsi, A.; Brigo, D. New families of copulas based on periodic functions. Commun. Stat. Theor. Meth. **2005**, 34, 1437–1447.
- Cressie, N.; Read, T.R.C. Multinomial goodness-of-fit tests. J. R. Stat. Soc. Ser. B **1984**, 46, 440–464.
- Csiszár, I. On topology properties of f-divergences. Studia Sci. Math. Hungar. **1967**, 2, 329–339.
- Liese, F.; Vajda, I. Convex Statistical Distances; BSB B. G. Teubner Verlagsgesellschaft: Leipzig, Germany, 1987.
- Pardo, L. Statistical Inference Based on Divergence Measures; Statistics: Textbooks and Monographs; Chapman & Hall/CRC: Boca Raton, FL, USA, 2006.
- Zografos, K.; Ferentinos, K.; Papaioannou, T. φ-divergence statistics: Sampling properties and multinomial goodness of fit and divergence tests. Commun. Stat. Theor. Meth. **1990**, 19, 1785–1802.
- Azé, D. Eléments d'Analyse Convexe et Variationnelle; Ellipses: Paris, France, 1997.
- Bosq, D.; Lecoutre, J.P. Théorie de l'Estimation Fonctionnelle; Economica: Paris, France, 1999.
- Broniatowski, M.; Keziou, A. Parametric estimation and tests through divergences and the duality technique. J. Multivar. Anal. **2009**, 100, 16–36.
- Black, F.; Scholes, M. The pricing of options and corporate liabilities. J. Polit. Econ. **1973**, 81, 637–654.

**MSC Classification:** 62H05; 62H15; 62H40; 62G15

© 2011 by the author; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).