Bayesian Bandwidths in Semiparametric Modelling for Nonnegative Orthant Data with Diagnostics

Multivariate nonnegative orthant data are real vectors bounded to the left by the null vector, and they can be continuous, discrete or mixed. We first review the recent relative variability indexes for multivariate nonnegative continuous and count distributions. As a prelude, two comparable distributions having the same mean vector are classified through under-, equi- and over-variability with respect to a reference distribution. Multivariate associated kernel estimators are then reviewed, with new proposals that can accommodate any nonnegative orthant dataset. We focus on bandwidth matrix selection by adaptive and local Bayesian methods for semicontinuous and counting supports, respectively. We finally introduce a flexible semiparametric approach for estimating all these distributions on nonnegative supports. The corresponding estimator is directed by a given parametric part and a nonparametric part, the latter being a weight function to be estimated through multivariate associated kernels. A diagnostic model is also discussed for making an appropriate choice between the parametric, semiparametric and nonparametric approaches. Retaining the pure nonparametric approach signals that the parametric part used in the modelling is unsuitable. Multivariate real data examples in the semicontinuous setup, such as reliability data, are gradually considered to illustrate the proposed approach. Concluding remarks are made with a view to extensions to other multiple functions.


Introduction
The d-variate nonnegative orthant data on T+d ⊆ [0, ∞)^d are real d-vectors bounded to the left by the null vector 0_d, and they can be continuous, discrete (e.g., count, categorical) or mixed. For simplicity, we here assume either T+d = [0, ∞)^d for the semicontinuous setup or T+d = N^d := {0, 1, 2, . . .}^d for the counting one; we thus omit the categorical and mixed setups, which can involve a mix of discrete and continuous data (e.g., [53]) or other time scales (see, e.g., [36]). Modelling such datasets of T+d needs nonnegative orthant distributions, which are generally not easy to handle in practical data analysis. The baseline parametric distribution (e.g., [20,34]) for the analysis of nonnegative continuous data is the exponential distribution (e.g., in reliability) and that of count data is the Poisson one. However, their intrinsic assumptions on the first two moments are often not realistic for many applications. The nonparametric approach of associated kernels, which is adaptable to any support T+d of a probability density or mass function (pdmf), has been widely studied in recent years. We can refer to [7,8,12,13,29,43,51,52,61,62] for general results and more specific developments on associated kernel orthant distributions using classical cross-validation and Bayesian methods to select bandwidth matrices. Thus, the natural question of a flexible semiparametric modelling now arises for all these multivariate orthant datasets.
Indeed, we first need a review of the recent relative variability indexes for multivariate semicontinuous ([31]) and count ([25]) distributions. The infinite number and complexity of multivariate parametric distributions require the study of different indexes for comparison and discrimination between them. Simple classifications of two comparable distributions are done through under-, equi- and over-variability with respect to a reference distribution. We refer to [57] and references therein for univariate categorical data, which does not yet have its multivariate version. We then survey multivariate associated kernels that can accommodate any nonnegative orthant dataset. The most useful families shall be pointed out, mainly as products of univariate associated kernels, including properties and constructions. Also, we shall focus on bandwidth matrix selection by Bayesian methods. Finally, we introduce a flexible semiparametric approach for estimating multivariate nonnegative orthant distributions. Following [15] for classical kernels, the corresponding estimator shall be directed by a given parametric part and a nonparametric part, the latter being a weight function to be estimated through multivariate associated kernels. But what is the meaning of a diagnostic model for making an appropriate choice between the parametric, semiparametric and nonparametric approaches in this multivariate framework? Such a discussion highlights practical improvements over standard nonparametric methods for multivariate semicontinuous datasets, through the use of a reasonable parametric-start description. See, for instance, [27,33,48] for univariate count datasets.
In this paper, the main goal is to introduce a family of semiparametric estimators with multivariate associated kernels for both semicontinuous and count data. They are meant to be flexible compromises between a grueling parametric approach and a fuzzy nonparametric one. The rest of the paper is organized as follows. Section 2 presents a brief review of the relative variability indexes for multivariate nonnegative orthant distributions, distinguishing dispersion for counting and variation for semicontinuous distributions. Section 3 displays a short panoply of multivariate associated kernels which are useful for semicontinuous and for count datasets. Properties are reviewed with new proposals, including both appropriate Bayesian methods of bandwidth selection. In Section 4, we introduce the semiparametric kernel estimators with d-variate parametric start. We also investigate the corresponding diagnostic model. Section 5 is devoted to numerical illustrations, especially for uni- and multivariate semicontinuous datasets. In Section 6, we make some final remarks on extensions to other multiple functions, such as regression. Finally, appendices are provided for technical proofs and illustrations.

Relative Variability Indexes for Orthant Distributions
We use the following notations: √varX := (√varX_1, . . . , √varX_d)^⊤ is the elementwise square root of the variance vector of X; diag√varX = diag_d(√varX_j) is the d × d diagonal matrix with diagonal entries √varX_j and 0 elsewhere; and covX = (cov(X_i, X_j))_{i,j∈{1,...,d}} denotes the covariance matrix of X, which is a d × d symmetric matrix with entries cov(X_i, X_j) such that cov(X_i, X_i) = varX_i is the variance of X_i. Then, one has

covX = (diag√varX) ρ_X (diag√varX),     (2.1)

where ρ_X = ρ(X) is the correlation matrix of X; see, e.g., [21]. It is noteworthy that there are a great many multivariate distributions with exponential (resp. Poisson) margins. Therefore, we denote a generic d-variate exponential distribution by E_d(µ, ρ), given a specific positive mean vector µ^{−1} := (µ_1^{−1}, . . . , µ_d^{−1}) and correlation matrix ρ = (ρ_{ij})_{i,j∈{1,...,d}}. Similarly, a generic d-variate Poisson distribution is given by P_d(µ, ρ), with positive mean vector µ := (µ_1, . . . , µ_d) and correlation matrix ρ. See, e.g., Appendix 7.1 for more extensive exponential and Poisson models with possible behaviours in the negative correlation setup. The uncorrelated or independent d-variate exponential and Poisson distributions will be written as E_d(µ) and P_d(µ), respectively, for ρ = I_d the d × d identity matrix. Their respective d-variate probability density function (pdf) and probability mass function (pmf) are products of d univariate ones.
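The decomposition (2.1) can be checked numerically. Below is a minimal sketch in Python; the simulated trivariate Poisson sample and all variable names are illustrative only.

```python
import numpy as np

# Numerical check of (2.1): covX = (diag sqrt(varX)) rho_X (diag sqrt(varX)).
# The trivariate Poisson sample below is purely illustrative.
rng = np.random.default_rng(0)
X = rng.poisson(lam=[2.0, 5.0, 3.0], size=(1000, 3))

cov = np.cov(X, rowvar=False)        # covX
rho = np.corrcoef(X, rowvar=False)   # rho_X
D = np.diag(np.sqrt(np.diag(cov)))   # diag sqrt(varX)

assert np.allclose(cov, D @ rho @ D)
```

The identity holds exactly for the empirical moments, since the sample correlation matrix is by construction the sample covariance rescaled by the marginal standard deviations.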
According to [31] and following the recent univariate unification of the well-known (Fisher) dispersion and (Jørgensen) variation indexes by Touré et al. [55], the relative variability index of d-variate nonnegative orthant distributions can be written as follows. Let X and Y be two random vectors on the same support T+d ⊆ [0, ∞)^d and assume m := EX = EY, Σ_X := covX and V F_Y(m) := cov(Y) fixed; then the relative variability index of X with respect to Y is defined as the positive quantity in (2.2), where "tr(·)" stands for the trace operator. The expression (2.2) of RWI does not appear to be very easy to handle in this general formulation on T+d ⊆ [0, ∞)^d, even for the empirical version and its interpretations. We now detail both multivariate cases of counting and of semicontinuous data. Their corresponding empirical versions are given in [25,31].

Relative Dispersion Indexes for Count Distributions
The matrix in (2.2) is also of rank 1 and has only one positive eigenvalue, denoted by GDI(X) and called the generalized dispersion index of X compared to Y ∼ P_d(EX) with EY = EX = m ([25]); it can be written as

GDI(X) = (√EX)^⊤ (covX)(√EX) / (EX)^⊤(EX).     (2.3)

For d = 1, GDI(X_1) = varX_1/EX_1 = DI(X_1) is the (Fisher) dispersion index with respect to the Poisson distribution. To derive this interpretation of GDI, we successively decompose the denominator of (2.3) as

(EX)^⊤(EX) = (√EX)^⊤ (diag EX)(√EX),     (2.4)

i.e., the variability of the uncorrelated Poisson distribution P_d(EX), and the numerator of (2.3), by using also (2.1), as the full variability (√EX)^⊤(covX)(√EX) of X. Thus, GDI(X) makes it possible to compare the full variability of X (in the numerator) with its expected uncorrelated Poissonian variability (in the denominator), which depends only on EX. In other words, the count random vector X is over- (equi- and under-dispersed) with respect to P_d(EX) if GDI(X) > 1 (GDI(X) = 1 and GDI(X) < 1, respectively). This is the multivariate generalization of the well-known (univariate) Fisher dispersion index, due to [25]. See, e.g., [4,25] for illustrative examples. Also, we can modify GDI(X) into MDI(X), the marginal dispersion index, by replacing covX in (2.3) with diag√varX to obtain dispersion information coming only from the margins of X.
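An empirical GDI is straightforward to compute by plugging sample moments into the index. The sketch below assumes the reconstructed form GDI(X) = (√m)^⊤ Σ (√m)/(m^⊤ m) with m = EX; the function name and simulated sample are illustrative only.

```python
import numpy as np

def gdi(sample):
    """Empirical generalized dispersion index, assuming the form
    GDI(X) = (sqrt m)' covX (sqrt m) / (m' m), with m = EX.
    For d = 1 it reduces to the Fisher index var/mean."""
    m = sample.mean(axis=0)
    cov = np.atleast_2d(np.cov(sample, rowvar=False))
    s = np.sqrt(m)
    return float(s @ cov @ s / (m @ m))

rng = np.random.default_rng(1)
poisson = rng.poisson(lam=[2.0, 4.0], size=(20000, 2))
# An uncorrelated Poisson sample should be roughly equi-dispersed: gdi close to 1.
```

Replacing the Poisson generator with, say, a negative binomial one would push the index above 1 (over-dispersion), in line with the interpretation given above.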
More generally, for two count random vectors X and Y on the same support T+d ⊆ N^d with EX = EY and GDI(Y) > 0, the relative dispersion index is defined by RDI(X; Y) := GDI(X)/GDI(Y); i.e., the over- (equi- and under-dispersion) of X compared to Y is realized if GDI(X) > GDI(Y) (GDI(X) = GDI(Y) and GDI(X) < GDI(Y), respectively). Obviously, GDI is the particular case of RDI with P_d as reference, and RDI allows any general reference in place of P_d. Consequently, many properties of GDI are easily extended to RDI.

Relative Variation Indexes for Semicontinuous Distributions
The generalized variation index of X compared to Y ∼ E_d(EX), with EY = EX, is defined ([31]) by

GVI(X) = (EX)^⊤ (covX)(EX) / [(EX)^⊤(EX)]^2;     (2.6)

i.e., X is over- (equi- and under-varied) with respect to E_d(EX) if GVI(X) > 1 (GVI(X) = 1 and GVI(X) < 1, respectively); see [31]. Remark that when d = 1, GVI(X_1) = varX_1/(EX_1)^2 = VI(X_1) is the univariate (Jørgensen) variation index recently introduced by Abid et al. [2]. From (2.4), and using again (2.1) to rewrite the numerator of (2.6), GVI(X) can be interpreted as the ratio of the full variability of X to its expected uncorrelated exponential E_d(EX) variability, which depends only on EX. Similarly to MDI(X), we can define MVI(X) from GVI(X). See [31] for properties, numerous examples and numerical illustrations.
The relative variation index is defined, for two semicontinuous random vectors X and Y on the same support T+d ⊆ [0, ∞)^d with EX = EY and GVI(Y) > 0, by RVI(X; Y) := GVI(X)/GVI(Y); i.e., the over- (equi- and under-variation) of X compared to Y is carried out if GVI(X) > GVI(Y) (GVI(X) = GVI(Y) and GVI(X) < GVI(Y), respectively). Of course, RVI generalizes GVI for multivariate semicontinuous distributions. For instance, one refers to [31] for more details on its discriminating power in multivariate parametric models from the first two moments.
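Empirical versions of GVI and RVI can be sketched in the same way as for GDI. The code below assumes the reconstructed form GVI(X) = m^⊤ Σ m/(m^⊤ m)^2 with m = EX, so that the univariate case reduces to var/mean^2; names and data are illustrative only.

```python
import numpy as np

def gvi(sample):
    """Empirical generalized variation index, assuming the form
    GVI(X) = m' covX m / (m' m)^2 with m = EX; for d = 1 it reduces
    to the (Jorgensen) variation index var/mean^2."""
    m = sample.mean(axis=0)
    cov = np.atleast_2d(np.cov(sample, rowvar=False))
    return float(m @ cov @ m / (m @ m) ** 2)

def rvi(sample_x, sample_y):
    """Relative variation index of X with respect to Y: GVI(X)/GVI(Y)."""
    return gvi(sample_x) / gvi(sample_y)

rng = np.random.default_rng(2)
expo = rng.exponential(scale=1.5, size=(20000, 1))
# A univariate exponential sample is equi-varied: gvi close to 1.
```

Taking `rvi` of two samples with the same mean vector then reproduces the over-/equi-/under-variation comparison stated above.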

Multivariate Orthant Associated Kernels
Nonparametric techniques through associated kernels represent an alternative approach for multivariate orthant data. Let X_1, . . . , X_n be independent and identically distributed (iid) nonnegative orthant d-variate random vectors with an unknown joint pdmf f on T+d ⊆ [0, ∞)^d, for d ≥ 1. Then the multivariate associated kernel estimator f_n of f is expressed as

f_n(x) = (1/n) Σ_{i=1}^n K_{x,H}(X_i),  x ∈ T+d,     (3.1)

where H is a given d × d bandwidth matrix (i.e., symmetric and positive definite) such that H ≡ H_n → 0_d (the d × d null matrix) as n → ∞, and K_{x,H}(·) is a multivariate (orthant) associated kernel, parameterized by x and H; see, e.g., [29]. More precisely, we have the following refined definition.
Definition 3.1. Let T+d be the support of the pdmf to be estimated, x ∈ T+d a target vector and H a bandwidth matrix. A parameterized pdmf K_{x,H}(·) on support S_{x,H} ⊆ T+d is called a "multivariate orthant associated kernel" if the following conditions are satisfied:

x ∈ S_{x,H}  and  lim_{H→0_d} E(Z_{x,H}) = x,

where Z_{x,H} denotes the random vector with pdmf K_{x,H}. This definition exists in the univariate count case of [26,33] and encompasses the multivariate one by [29]. The choice of an orthant associated kernel satisfying lim_{H→0_d} Cov(Z_{x,H}) = 0_d assures the convergence of its corresponding estimator, which is then said to be of second order. Otherwise, the convergence of the corresponding estimator is only guaranteed for bandwidth entries in (0, 1), a right neighbourhood of 0, and the estimator is said to be a first-order consistent smoother; see, e.g., [26] for discrete kernels. In general, d-under-dispersed count associated kernels are appropriate for both small and moderate sample sizes; see, e.g., [26] for univariate cases. As for the selection of the bandwidth matrix H, it is very crucial because it controls the degree of smoothing and the form of orientation of the kernel. As a matter of fact, a simplification can be obtained by considering a diagonal matrix H = diag_d(h_j). Since it is challenging to obtain a full multivariate orthant distribution K_{x,H}(·) for building a smoother, several authors suggest the product of univariate orthant associated kernels,

K_{x,H}(·) = ∏_{j=1}^d K_{x_j,h_j}(·),     (3.2)

where K_{x_j,h_j}, j = 1, . . . , d, belong either to the same family or to different families of univariate orthant associated kernels. The two subsections below shall be devoted to summaries of discrete and semicontinuous univariate associated kernels.
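To fix ideas on (3.1) with the product form (3.2), here is a minimal Python sketch of a product associated-kernel estimator. The univariate ingredient is an Aitchison-Aitken type categorical kernel, whose form here is an assumption for illustration; the toy data are hypothetical.

```python
import math

def aa_kernel(x, h, u, c):
    """Aitchison-Aitken type univariate discrete kernel on {0, ..., c-1}:
    mass 1 - h at the target x and h/(c - 1) on every other category
    (assumed form, for illustration only)."""
    return 1.0 - h if u == x else h / (c - 1)

def product_kde(x, data, h, c):
    """(3.1) with the product form (3.2) and a diagonal bandwidth:
    f_n(x) = n^{-1} sum_i prod_j K_{x_j, h_j}(X_{ij})."""
    return sum(
        math.prod(aa_kernel(xj, hj, uj, cj)
                  for xj, hj, uj, cj in zip(x, h, obs, c))
        for obs in data
    ) / len(data)

data = [(0, 1), (0, 1), (1, 0), (0, 0)]   # toy bivariate categorical sample
est = product_kde((0, 1), data, h=(0.0, 0.0), c=(2, 2))
# With null bandwidths the kernel is a point mass at the target, so the
# estimate is the empirical frequency of (0, 1), i.e. 0.5 here.
```

As the bandwidths grow away from 0, mass leaks to neighbouring categories and the estimate is smoothed, which is exactly the role of H described above.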
Before showing some main properties of the associated kernel estimator (3.1), let us recall that the family of d-variate classical (symmetric) kernels K on S_d ⊆ R^d (e.g., [47,49,60]) can also be presented as (classical) associated kernels. Indeed, from (3.1) and writing, for instance,

K_{x,H}(·) = (det H)^{−1/2} K(H^{−1/2}(x − ·)),

where "det" is the determinant operator, one has the corresponding support S_{x,H}. In general, one uses the classical (associated) kernels for smoothing continuous data or a pdf having support T_d = R^d.
The purely nonparametric estimator (3.1) with a multivariate associated kernel, f_n of f, is generally defined up to a normalizing constant C_n. Several simulation studies (e.g., [29, Table 3.1]) have shown that C_n = C_n(K, H) (depending on samples, associated kernels and bandwidths) is approximately 1. Without loss of generality, one here assumes C_n = 1, as for all classical (associated) kernel estimators of a pdf. The following proposition finally characterizes the mean behaviour and variability of f_n through its integrated bias and integrated variance, respectively. In what follows, let us denote by ν the reference measure (Lebesgue or counting) on the nonnegative orthant set T+d and also on any set T_d ⊆ R^d.
Then, for all n ≥ 1:

Proof. Let n ≥ 1. A direct calculation of E(C_n) leads to the first result, because f is a pdmf on T_d. The second result, on var(C_n), is trivial.
The following general result is easily deduced from Proposition 3.2. To our knowledge, it appears to be new and interesting in the framework of pdmf (associated) kernel estimators. Corollary 3.3. If C_n = 1 for all n ≥ 1, then the stated identities hold. In particular, Corollary 3.3 holds for all classical (associated) kernel estimators. The two following properties of the corresponding orthant multivariate associated kernels shall be needed subsequently.
(K1) The second moment of K_{x,H} exists. (K2) There exists a largest real number r = r(K_{x,H}) > 0 and 0 < c(x) < ∞ such that the stated bound holds. In fact, (K1) is a necessary condition for smoothers to have a finite variance, and (K2) can be deduced from the continuous univariate cases (e.g., [24]) as well as from the discrete ones (e.g., [26]).
We now establish both general asymptotic behaviours of the pointwise bias and variance of the nonparametric estimator (3.1) on the nonnegative orthant set T + d ; its proof is given in Appendix 7.2. For that, we need the following assumptions by endowing T + d with the Euclidean norm || · || and the associated inner product ·, · such that a, b = a b.
(a1) The unknown pdmf f is a bounded function, twice differentiable or twice finite-difference differentiable in T+d; ∇f(x) and Hf(x) denote respectively the gradient vector (in the continuous or discrete sense, respectively) and the corresponding Hessian matrix of the function f at x.
Note that (a2) is obviously a consequence of (K2).
For d = 1 and according to the proof of Proposition 3.4, one can easily expand E f_n(x), where f^{(k)} denotes the kth derivative or finite difference of the pdmf f, under the existence of the centered moment of order k ≥ 2 of Z_{x,h}.
Denote by M the set of positive definite [diagonal] matrices [from (3.2), resp.] and let π be a given suitable prior distribution on M. Under the squared error loss function, the Bayes estimator of H is the mean of the posterior distribution. The local Bayesian bandwidth at the target x ∈ T+d then takes the form (3.6), and the adaptive Bayesian bandwidth for each observation X_i ∈ T+d is given by (3.7). Note that the (global) cross-validation bandwidth matrix H_CV and the global Bayesian one H_B are obtained, respectively, as the global counterparts of (3.7).
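The local selector (3.6) can be sketched on a discretized prior: treating the estimate f_h(x) as the likelihood of h at the target x, the selected bandwidth is the posterior mean. The univariate gamma kernel used as ingredient anticipates Section 5.1; the grid, the flat prior and the simulated data are illustrative assumptions.

```python
import math, random

def gamma_kernel(x, h, u):
    """Gamma associated kernel of [9]: the Gamma(1 + x/h, h) pdf at u."""
    a = 1.0 + x / h
    if u <= 0.0:
        return 0.0
    return math.exp((a - 1.0) * math.log(u) - u / h - math.lgamma(a) - a * math.log(h))

def fhat(x, data, h):
    """Univariate associated-kernel estimate (3.1) at the target x."""
    return sum(gamma_kernel(x, h, u) for u in data) / len(data)

def local_bayes_bandwidth(x, data, grid, prior):
    """Sketch of the local Bayesian selector (3.6) on a discrete grid:
    posterior pi(h | x, data) taken proportional to fhat_h(x) * prior(h),
    with the selected bandwidth being the posterior mean."""
    w = [fhat(x, data, h) * p for h, p in zip(grid, prior)]
    z = sum(w)
    return sum(h * wi for h, wi in zip(grid, w)) / z

random.seed(0)
data = [random.expovariate(1.0) for _ in range(500)]
grid = [0.05 * k for k in range(1, 21)]    # candidate bandwidths
prior = [1.0 / len(grid)] * len(grid)      # flat prior on the grid
h_x = local_bayes_bandwidth(1.0, data, grid, prior)
```

The closed forms derived later (Proposition 5.1) replace this grid approximation by exact posterior computations under inverse gamma priors.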

Discrete Associated Kernels
We only present three main and useful families of univariate discrete associated kernels for (3.2), satisfying (K1) and (K2).
Example 3.5 (categorical). It has been introduced in the multivariate setup by Aitchison and Aitken [3] and investigated as a discrete associated kernel, symmetric about the target x, by [26] in the univariate case; see [8] for a Bayesian approach in the multivariate setup. Note here that its normalizing constant is always C_n = 1. Example 3.6 (symmetric count). For fixed m ∈ N and T+1 ⊆ Z, the symmetric count triangular kernel is expressed as displayed, for h sufficiently small.
It was first proposed by Kokonendji et al. [28] and then completed in [32] with an asymmetric version solving the boundary bias problem in count kernel estimation.
Example 3.7 (standard count). For T+1 ⊆ N, the standard binomial kernel is defined as displayed. Here B(x, h) tends to x/(x + 1) ∈ [0, 1) as h → 0, and the new Definition 3.1 holds. This first-order and under-dispersed binomial kernel was introduced in [26] and is very useful for smoothing count distributions with small or moderate sample sizes; see, e.g., [7,8,23] for Bayesian approaches and some references therein. In addition, we have the standard Poisson kernel, where K^Poisson_{x,h} follows the equi-dispersed Poisson distribution with mean x + h and support S_x := N. Recently, Huang et al. [17] introduced the Conway-Maxwell-Poisson kernel by exploiting its under-dispersed part and its second-order consistency, which can be improved via the mode-dispersion approach of [37]; see also [16, Section 2.4].
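The standard binomial kernel is easy to check against Definition 3.1. The sketch below assumes its usual form, the Binomial(x + 1, (x + h)/(x + 1)) pmf, so that the total mass is 1 and E(Z_{x,h}) = x + h, which tends to the target x as h → 0.

```python
import math

def binom_kernel(x, h, u):
    """Standard binomial kernel (assumed form): the Binomial(x + 1, p) pmf
    at u, with success probability p = (x + h)/(x + 1)."""
    n, p = x + 1, (x + h) / (x + 1)
    if not 0 <= u <= n:
        return 0.0
    return math.comb(n, u) * p**u * (1 - p) ** (n - u)

x, h = 4, 0.1
pmf = [binom_kernel(x, h, u) for u in range(x + 2)]
mass = sum(pmf)                             # total mass: 1
mean = sum(u * p for u, p in enumerate(pmf))  # E(Z_{x,h}) = x + h = 4.1
```

The under-dispersion claimed above also follows from the binomial variance (x + 1)p(1 − p) = (x + h)(1 − h)/(x + 1), which is smaller than the mean x + h.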

Semicontinuous Associated Kernels
We now point out eight main and useful families of univariate semicontinuous associated kernels for (3.2), satisfying (K1) and (K2). These are the gamma (G) of [9] (see also [14]), the inverse gamma (Ig) (see also [42]) and log-normal 2 (LN2) by [37], the inverse Gaussian (IG) and reciprocal inverse Gaussian by [46] (see also [18]), the log-normal 1 (LN1) and Birnbaum-Saunders by [19] (see also [38,41]), and the Weibull (W) of [45] (see also [41]). It is noteworthy that the link between LN2 of [37] and LN1 of [19] is through changing (x, h) to (x exp(h^2), 2 log(1 + h)). Several other semicontinuous kernels could be constructed by using the mode-dispersion technique of [37] from any semicontinuous distribution which is unimodal and has a dispersion parameter. Recently, one has the scaled inverse chi-squared kernel of [11]. Table 4.1 summarizes these eight semicontinuous univariate associated kernels with their ingredients of Definition 3.1 and order of preference (O.) obtained graphically. In fact, the heuristic classification (O.) is done through the behaviour of the shape and scale of the associated kernel around the target x, at the edge as well as inside the support; see Figure 4.1 for the edge and Figure 4.2 for the inside. Among these eight kernels, we thus recommend the first five univariate associated kernels of Table 4.1 for smoothing semicontinuous data. This approach could be refined for a given dataset; see, e.g., [35] for cumulative functions.
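As a small illustration of the recommended gamma kernel in action, the sketch below smooths a simulated semicontinuous sample; the kernel form used, the Gamma(1 + x/h, h) pdf of [9], matches the one recalled in Section 5.1, and the sample and bandwidth are illustrative assumptions.

```python
import math, random

def gamma_kernel(x, h, u):
    """Gamma associated kernel of [9]: the Gamma(1 + x/h, h) pdf at u."""
    a = 1.0 + x / h
    if u <= 0.0:
        return 0.0
    return math.exp((a - 1.0) * math.log(u) - u / h - math.lgamma(a) - a * math.log(h))

def gamma_kde(x, data, h):
    """Univariate gamma-kernel estimate f_n(x) = n^{-1} sum_i G_{x,h}(X_i)."""
    return sum(gamma_kernel(x, h, u) for u in data) / len(data)

random.seed(1)
data = [random.expovariate(1.0) for _ in range(5000)]
est = gamma_kde(1.0, data, h=0.1)
# For exp(1) data, est should be close to the true density e^{-1} at x = 1.
```

The kernel's support [0, ∞) matches the data's, which is precisely the boundary-bias advantage of associated kernels over classical symmetric ones.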

Semiparametric Kernel Estimation with d-Variate Parametric Start
We investigate the semiparametric orthant kernel approach, a compromise between the pure parametric and the nonparametric methods. This concept was proposed by Hjort and Glad [15] for continuous data, treated by Kokonendji et al. [27] for discrete univariate data and, recently, studied by Kokonendji et al. [33] with an application to radiation biodosimetry. Without loss of generality, we here assume that any d-variate pdmf f can be formulated (e.g., [30] for d = 1) as

f(x) = p_d(x; θ) w(x; θ),  x ∈ T+d,     (4.1)

where p_d(·; θ) is the non-singular parametric part according to a reference d-variate distribution with corresponding unknown parameters θ = (θ_1, . . . , θ_k)^⊤, and w(·; θ) := f(·)/p_d(·; θ) is the unknown orthant weight function part, to be estimated with a multivariate orthant associated kernel. The weight function at each point can be considered as a local multiplicative correction factor aimed at accommodating any pointwise departure from the reference d-variate distribution. However, one cannot take the best-fitting parametric model as the start distribution in this semiparametric approach, because the corresponding departure is then close to zero and the weight function becomes a pure noise, which is inappropriate to smooth by an associated kernel, especially in the continuous cases.
Let X_1, . . . , X_n be iid nonnegative orthant d-variate random vectors with unknown pdmf f. The semiparametric estimator of (4.1) with (3.2) is expressed as follows:

f_n(x) = p_d(x; θ_n) w_n(x; θ_n),  x ∈ T+d,     (4.2)

where θ_n is the estimated parameter of θ. From (4.2), we then deduce the nonparametric orthant associated kernel estimate of the weight function,

w_n(x; θ_n) = (1/n) Σ_{i=1}^n K_{x,H}(X_i)/p_d(X_i; θ_n),     (4.3)

which depends on θ_n. One can observe that Proposition 3.2 also holds for f_n(·) = p_d(·; θ_n) w_n(·; θ_n). However, we have to prove below the analogue of Proposition 3.4.
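A one-dimensional sketch of (4.2)-(4.3) with an exponential E_1(µ) start and a gamma associated kernel is given below; all names, the bandwidth and the simulated data are illustrative, while the start density µe^{−µt} and its maximum likelihood estimate are standard.

```python
import math, random

def gamma_kernel(x, h, u):
    """Gamma associated kernel: the Gamma(1 + x/h, h) pdf at u."""
    a = 1.0 + x / h
    if u <= 0.0:
        return 0.0
    return math.exp((a - 1.0) * math.log(u) - u / h - math.lgamma(a) - a * math.log(h))

def semiparam_estimate(x, data, h, mu):
    """Sketch of (4.2) in d = 1 with an exponential E_1(mu) start:
    f_n(x) = p(x; mu) * n^{-1} sum_i K_{x,h}(X_i) / p(X_i; mu),
    where p(t; mu) = mu * exp(-mu * t)."""
    p = lambda t: mu * math.exp(-mu * t)
    w = sum(gamma_kernel(x, h, u) / p(u) for u in data) / len(data)
    return p(x) * w

random.seed(2)
data = [random.expovariate(1.0) for _ in range(5000)]
mu_hat = len(data) / sum(data)           # MLE of the exponential start
est = semiparam_estimate(1.0, data, h=0.2, mu=mu_hat)
# With a well-specified start the weight is ~1, so est ~ e^{-1} at x = 1.
```

When the start is well specified, the kernel only has to smooth a nearly flat weight, which is the bias advantage over the purely nonparametric estimator (3.1).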

Known d-Variate Parametric Model
Let p_d(·; θ_0) be a fixed orthant distribution in (4.1) with θ_0 known. Writing f(x) = p_d(x; θ_0) w(x), we estimate the nonparametric weight function w by w_n(x) = n^{−1} Σ_{i=1}^n K_{x,H}(X_i)/p_d(X_i; θ_0) with an orthant associated kernel method, resulting in the estimator

f_n(x) = p_d(x; θ_0) w_n(x).     (4.4)

The following proposition is proven in Appendix 7.2.
It is expected that the bias here is quite different from that of (3.3).

Unknown d-Variate Parametric Model
Let us now consider the more realistic and practical semiparametric estimator f_n(·) = p_d(·; θ_n) w_n(·; θ_n), presented in (4.2), of f(·) = p_d(·; θ)w(·; θ) in (4.1), where the parametric estimator θ_n of θ can be obtained by the maximum likelihood method; see [15] for quite a general estimator of θ. In fact, if the d-variate parametric model p_d(·; θ) is misspecified, then θ_n converges in probability to the pseudo-true value θ_0 minimizing the Kullback-Leibler divergence of p_d(·; θ) from f (see, e.g., [58]).
Write p_0(·) := p_d(·; θ_0) for this best d-variate parametric approximant; this p_0(·) is not explicitly expressible like the one in (4.4). According to [15] (see also [27]), we can represent the proposed estimator f_n(·) = p_d(·; θ_n) w_n(·; θ_n) in (4.2) accordingly. Thus, the following result provides approximate bias and variance. We omit its proof since it is analogous to that of Proposition 4.1.
Once again, the bias is different from that of (3.3). Thus, the proposed semiparametric estimator f_n in (4.2) of f may or may not outperform the traditional nonparametric one f_n in (3.1). The following subsection provides a practical solution.

Model Diagnostics
The estimated weight function w_n(·; θ_n) given in (4.3) provides useful information for model diagnostics. The d-variate weight function w(·) equals one if the d-variate parametric start model p_d(·; θ_0) is indeed the true pdmf. Hjort and Glad [15] proposed to check this adequacy by examining a plot of the weight function for various potential models, with pointwise confidence bands, to see whether or not w(x) = 1 is reasonable. See also [27,33] for univariate count setups.
In fact, without technical details, we here use the model diagnostics for verifying the adequacy of the model by examining a plot of x → w_n(x; θ_n) for all x = X_i, i = 1, . . . , n, with a pointwise confidence band of ±1.96 for large n; that is, to see how far away it is from zero. More precisely, for instance, the diagnostic percent W_n of (4.6) points to the pure nonparametric approach when it is < 5%, to the semiparametric one when it belongs to [5%, 95%], and to the full parametric model when it is > 95%. It is noteworthy that the retention of the pure nonparametric approach signals the inconvenience of the parametric part considered in this approach; hence, the orthant dataset is left free.
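A crude stand-in for the diagnostic percent W_n can be sketched as the share of sample points whose estimated weight w_n(X_i; θ_n) stays within a nominal band around 1; the exact statistic (4.6) is not reproduced here, and the band width, kernel and data below are illustrative assumptions.

```python
import math, random

def gamma_kernel(x, h, u):
    """Gamma associated kernel: the Gamma(1 + x/h, h) pdf at u."""
    a = 1.0 + x / h
    if u <= 0.0:
        return 0.0
    return math.exp((a - 1.0) * math.log(u) - u / h - math.lgamma(a) - a * math.log(h))

def weight_hat(x, data, h, mu):
    """w_n(x) = n^{-1} sum_i K_{x,h}(X_i)/p(X_i; mu), exponential start p."""
    return sum(gamma_kernel(x, h, u) / (mu * math.exp(-mu * u)) for u in data) / len(data)

def diagnostic_share(data, h, mu, band=0.5):
    """Illustrative proxy for the diagnostic percent W_n of (4.6): the share
    of observations whose estimated weight lies inside 1 +/- band."""
    inside = sum(1 for xi in data if abs(weight_hat(xi, data, h, mu) - 1.0) <= band)
    return inside / len(data)

random.seed(3)
data = [random.expovariate(1.0) for _ in range(800)]
mu_hat = len(data) / sum(data)
share = diagnostic_share(data, h=0.3, mu=mu_hat)
# With a well-specified exponential start, most weights should sit near 1,
# so the share is high, pointing away from the pure nonparametric approach.
```

Conversely, a badly chosen start would push most weights far from 1 and drive the share down, mirroring the <5% nonparametric verdict discussed above.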

Semicontinuous Examples of Application with Discussions
For a practical implementation of our approach, we propose to use the popular multiple gamma kernels of (3.2), selecting the adaptive Bayesian procedure of [52] to smooth w_n(x; θ_n). Hence, we shall gradually consider d-variate semicontinuous cases with d = 1, 2, 3 for real datasets. All computations and graphics have been done with the R software [44].

Adaptive Bayesian Bandwidth Selection for Multiple Gamma Kernels
From Table 4.1, the function G_{x,h}(·) is the gamma kernel [9] given on the support S_{x,h} = [0, ∞) by

G_{x,h}(u) = u^{x/h} exp(−u/h) / [Γ(1 + x/h) h^{1+x/h}] 1_{[0,∞)}(u),

where 1_E denotes the indicator function of any given event E. This gamma kernel G_{x,h}(·) is the pdf of the gamma distribution G(1 + x/h, h) with shape parameter 1 + x/h and scale parameter h. The multiple gamma kernel from (3.2) is written as K_{x,H}(·) = ∏_{j=1}^d G_{x_j,h_j}(·) with H = diag_d(h_j). For applying (3.6) and (3.7) in the framework of the semiparametric estimator f_n in (4.2), we assume that each component h_{iℓ} = h_{iℓ}(n), ℓ = 1, . . . , d, of H_i has the univariate inverse gamma prior Ig(α, β_ℓ) distribution, with the same shape parameter α > 0 and, possibly, different scale parameters β_ℓ > 0 such that β = (β_1, . . . , β_d)^⊤. We here recall that the pdf of Ig(α, β_ℓ) with α, β_ℓ > 0 is defined by

Ig_{α,β_ℓ}(z) = [β_ℓ^α / Γ(α)] z^{−α−1} exp(−β_ℓ/z) 1_{(0,∞)}(z).     (5.1)

The mean and the variance of the prior distribution (5.1) for each component h_{iℓ} of the vector H_i are given by β_ℓ/(α − 1) for α > 1, and β_ℓ^2/[(α − 1)^2(α − 2)] for α > 2, respectively. Note that, for fixed β_ℓ > 0, ℓ = 1, . . . , d, if α → ∞ then the distribution of the bandwidth vector H_i is concentrated around the null vector 0.
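The prior moments quoted above are simple to encode; the sketch below checks that, for a fixed scale, the Ig(α, β_ℓ) prior collapses towards 0 as α grows, which is the concentration remark made above (function names are illustrative).

```python
def invgamma_mean(alpha, beta):
    """Mean of Ig(alpha, beta): beta / (alpha - 1), for alpha > 1."""
    return beta / (alpha - 1.0)

def invgamma_var(alpha, beta):
    """Variance of Ig(alpha, beta): beta^2 / ((alpha - 1)^2 (alpha - 2)),
    for alpha > 2."""
    return beta ** 2 / ((alpha - 1.0) ** 2 * (alpha - 2.0))

# For a fixed scale, both the prior mean and variance vanish as alpha grows,
# so the bandwidth prior concentrates around the null vector, as remarked.
means = [invgamma_mean(a, 1.0) for a in (3.0, 30.0, 300.0)]
variances = [invgamma_var(a, 1.0) for a in (3.0, 30.0, 300.0)]
```

In practice this gives a handle on how informative the prior is: large α pins the bandwidths near 0, while small α leaves them diffuse.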
From those considerations, the closed form of the posterior density and the Bayesian estimator of H_i are given in the following proposition, which is proven in Appendix 7.2: (i) the posterior density is a weighted sum of inverse gamma densities; (ii) under the quadratic loss function, the Bayesian estimator of h_{im} follows, for m = 1, 2, . . . , d, with the previous notations A_{ij}(α, β), B_{ijm}(β_m), C_{jk}(α, β_k) and D_i(α, β).

Semicontinuous Datasets
The numerical illustrations shall be done through the dataset of Table 5.1, recently used in [52] for the nonparametric approach and only in the trivariate setup as semicontinuous data. It concerns three measurements (with n = 42) on drinking water pumps installed in the Sahel. The first variable X_1 represents the failure times (in months); it was also recently used by Touré et al. [55]. The second variable X_2 refers to the distance (in kilometers) between each water pump and the repair center in the Sahel, while the third one, X_3, stands for the average volume (in m^3) of water per day (Table 5.1). Hence, each X_j, (X_j, X_k) and (X_1, X_2, X_3) is over-dispersed compared to the corresponding uncorrelated Poisson distribution. But only (X_1, X_3) (resp. X_1) can be considered as a bivariate equi-variation (resp. univariate over-variation) with respect to the corresponding uncorrelated exponential distribution; all other X_j, (X_j, X_k) and (X_1, X_2, X_3) are under-varied. In fact, we computed the dispersion indexes only out of curiosity, since all values in Table 5.1 are positive integers; we henceforth omit the counting point of view in the remainder of the analysis. Thus, we gradually investigate semiparametric approaches for the three univariate, three bivariate and single trivariate datasets from (X_1, X_2, X_3) of Table 5.1.

Univariate Examples
For each univariate semicontinuous dataset X_j, j = 1, 2, 3, we have already computed the GVI in Table 5.2, which belongs to (0.01, 1.95). This allows us to consider our flexible semiparametric estimation f_n,j with an exponential E_1(µ_j) start in (4.2), using the adaptive Bayesian bandwidth in the gamma kernel of Proposition 5.1. Hence, we deduce the corresponding diagnostic percent W_n,j from (4.6) for deciding on an appropriate approach. In addition, we first present the univariate nonparametric estimation f_n,j with the adaptive Bayesian bandwidth in the gamma kernel of [50], and then propose another parametric estimation of X_j by the standard gamma model with shape (a_j) and scale (b_j) parameters. Table 5.3 transcribes the maximum likelihood parameter estimates of the exponential and gamma models, with the diagnostic percents, from Table 5.1. Figure 5.1 exhibits the histogram, f_n,j, f_n,j, exponential, gamma and diagnostic W_n,j graphs for each univariate dataset X_j. One can observe differences with the naked eye between the two kernel estimates, although they are very near and have the same pace. The diagnostic W_n,j graphics lead to the semiparametric approach for X_2 and to full parametric models for X_3, and slightly also for X_1. Thus, we have suggested the two-parameter gamma model for improving the starting exponential model; see, e.g., [30, Table 2] for alternative parametric models.

Bivariate and Trivariate Examples
For the sake of flexibility and efficiency, we here analyse our proposed semiparametric estimation f_n with an uncorrelated exponential start in (4.2), using the adaptive Bayesian bandwidth in the gamma kernel of Proposition 5.1. This concerns all bivariate and trivariate datasets from Table 5.1. Table 5.4 reports the main numerical results of the corresponding correlations, MVI, parameter estimates and, finally, the diagnostic percent W_n from (4.6); we intentionally omit some graphical representations in three or four dimensions. However, Figure 5.2 displays some representative projections of W_n. From Table 5.4, the cross empirical correlations are close to 0 and all MVI are smaller than 1, which allows considering uncorrelated exponential start-models. The maximum likelihood method is also used for estimating the parameters µ_j, giving the same results as in Table 5.3. Thus, the obtained numerical values of W_n indicate semiparametric approaches for all bivariate datasets and the purely nonparametric method for the trivariate one; see [52] for more details on this nonparametric analysis. This progressive semiparametric analysis of the trivariate dataset of Table 5.1 shows the necessity of a suitable choice of the parametric start-models, which may take into account the correlation structure. Hence, the retention of the pure nonparametric approach signals the inconvenience of the parametric part used in the modelling. Note that we could consider the Marshall-Olkin exponential distributions with nonnegative correlations as start-models; but they are singular. See Appendix 7.1 for a brief review.

Concluding Remarks
In this paper, we have presented a flexible semiparametric approach for multivariate nonnegative orthant distributions. We have first recalled the multivariate variability indexes GVI, MVI, RVI, GDI, MDI and RDI from RWI, as a prelude to the second-order discrimination between these parametric distributions. We have then reviewed the nonparametric estimators through multivariate associated kernels and provided new proposals; e.g., Proposition 3.2 and Corollary 3.3. Both effective adaptive and local Bayesian selectors of bandwidth matrices are suggested for semicontinuous and counting data, respectively.
All these previous ingredients were finally used to develop the semiparametric modelling for multivariate nonnegative orthant distributions. Numerical illustrations have been simply done for univariate and multivariate semicontinuous datasets with the uncorrelated exponential start-models, after examining GVI and MVI. The adaptive Bayesian bandwidth selection (3.6) in the multiple gamma kernel (Proposition 5.1) was required here for applications. Finally, the diagnostic models have played a very interesting role in pointing to the appropriate approach, even if it means improving it later.
In the meantime, Kokonendji et al. [23] proposed an in-depth practical analysis of multivariate count datasets starting from multivariate (un)correlated Poisson models after reviewing GDI and RDI. They have also established an equivalent of our Proposition 5.1 for the local Bayesian bandwidth selection (3.5) by using the multiple binomial kernel from Example 3.7. As one of the many perspectives, one could consider the categorial setup with a local Bayesian version of the multivariate associated kernel of Aitchison and Aitken [3] from Example 3.5 of the univariate case.
At this stage of analysis, all the main foundations are now available for working in the multivariate setup, such as variability indexes, associated kernels, Bayesian selectors and model diagnostics. We just have to adapt them to each situation encountered. For instance, we have the semiparametric regression modelling; see, e.g., Abdous et al. [1] devoted to count explanatory variables, and [48]. Also, an opportunity will be opened for hazard rate functions (e.g., [45]). Other functional setups, such as categorial and mixed ones, can now be considered with objectivity and feasibility in the near future.

On Broader d-Variate Parametric Models and the Marshall-Olkin Exponential
According to Cuenin et al. [10], taking p ∈ {1, 2} in their multivariate Tweedie models with flexible dependence structure, another way to define the d-variate Poisson and exponential distributions is given by P_d(Λ) and E_d(Λ), respectively. The d × d symmetric variation matrix Λ = (λ_ij)_{i,j∈{1,...,d}} is such that λ_ij = λ_ji ≥ 0, the mean of the corresponding marginal distribution is λ_ii > 0, and the nonnegative correlation terms satisfy ρ(X_i, X_j) ∈ (0, R(i, j)) with R(i, j) = (λ_ii/λ_jj)(1 − λ_ii^{-1} Σ_{ℓ≠i,j} λ_iℓ) ∈ (0, 1). (7.1) Their constructions are perfectly defined, having d(d + 1)/2 parameters as in P_d(µ, ρ) and E_d(µ, ρ). Moreover, we attain the exact bounds of the correlation terms in (7.1). Cuenin et al. [10] have pointed out the construction and simulation of the negative correlation structure from the positive one of (7.1) by considering the inversion method.
The negativity of a correlation component is crucial for the phenomenon of under-variability in a bivariate/multivariate positive orthant model. Figure 7.1 (right) plots a limit shape of any bivariate positive orthant distribution with very strong negative correlation (in red), which is not the diagonal line of the upper bound (+1) of positive correlation (in blue); see, e.g., [10] for details on both bivariate orthant (i.e., continuous and count) models. Conversely, Figure 7.1 (left) represents the classic lower (−1) and upper (+1) bounds of correlations on R² or a finite support. The d-variate exponential X = (X_1, ..., X_d) ∼ E_d(µ, µ_0) of Marshall and Olkin [39] is built as follows. Let Y_1, ..., Y_d and Z be independent univariate exponential random variables with parameters µ_1 > 0, ..., µ_d > 0 and µ_0 ≥ 0, respectively. Then, by setting X_j := min(Y_j, Z) for j = 1, ..., d, one has E X_j = 1/(µ_j + µ_0) = (var X_j)^{1/2} and cov(X_j, X_ℓ) = µ_0/{(µ_j + µ_0)(µ_ℓ + µ_0)(µ_j + µ_ℓ + µ_0)} for all j ≠ ℓ. Each correlation ρ(X_j, X_ℓ) = µ_0/(µ_j + µ_ℓ + µ_0) lies in [0, 1] if and only if µ_0 ≥ 0. Its pdf can be derived from its survival (or reliability) function; it is not absolutely continuous with respect to the Lebesgue measure on T_+^d and has singularities corresponding to the cases where two or more of the x_j's are equal. Karlis [22] has proposed a maximum likelihood estimation of the parameters via an EM algorithm. Finally, Kokonendji et al. [31] have calculated the corresponding variability indexes. Hence, the Marshall-Olkin exponential model X ∼ E_d(µ, µ_0) is always under-varied with respect to the MVI and over- or equi-varied with respect to the GVI. If µ_0 = 0, then E_d(µ, µ_0) reduces to the uncorrelated E_d(µ) with GVI(Y) = 1. However, the assumption of nonnegative correlations between components is sometimes unrealistic for some analyses. Proof 2 (Proof of Proposition 3.4).
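The Marshall-Olkin construction is easy to check by simulation. The sketch below (parameter values are illustrative) draws from the bivariate E_2(µ, µ_0) via X_j = min(Y_j, Z) and compares the empirical moments with E X_j = 1/(µ_j + µ_0) and ρ(X_1, X_2) = µ_0/(µ_1 + µ_2 + µ_0); the frequency of exact ties X_1 = X_2 also illustrates the singular part of the distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
mu = np.array([1.0, 2.0])  # marginal-specific rates mu_1, mu_2 (illustrative)
mu0 = 1.5                  # common-shock rate mu_0 driving the correlation
n = 200_000

# Marshall-Olkin construction: X_j = min(Y_j, Z) with a shared shock Z
Y = rng.exponential(1.0 / mu, size=(n, 2))
Z = rng.exponential(1.0 / mu0, size=(n, 1))
X = np.minimum(Y, Z)

emp_corr = np.corrcoef(X.T)[0, 1]
theo_corr = mu0 / (mu[0] + mu[1] + mu0)  # = 1/3 for these values
emp_mean = X.mean(axis=0)
theo_mean = 1.0 / (mu + mu0)             # each X_j ~ Exp(mu_j + mu_0)

# singular part: both components equal the common shock Z with
# probability mu_0/(mu_1 + mu_2 + mu_0), so exact ties have positive mass
tie_frac = np.mean(X[:, 0] == X[:, 1])
```

The positive tie frequency makes concrete why this model has no density with respect to the Lebesgue measure on the diagonal.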
From Definition 3.1, we get (see also [29] for more details) the expression (7.2) of E f_n(x) in terms of Z_{x,H_n}, the random variable whose pdf is K_{x,H_n}. Next, using (7.2), a Taylor expansion of the function f(·) at Z_{x,H_n} around E Z_{x,H_n} yields (7.3), where o(1) is uniform in a neighborhood of x. Therefore, taking the expectation on both sides of (7.3) and then substituting the result into (7.2), we get the announced bias term. About the variance term, f being bounded, we have E K_{x,H_n}(X_1) = O(1). It follows that var f_n(x) = n^{-1} var K_{x,H_n}(X_1)
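The bias mechanism in this Taylor-expansion argument can be made concrete with a specific associated kernel (an illustrative choice, not the general K_{x,H_n} of the proof): for the univariate gamma kernel of Chen (1999), the smoothing random variable is Z_{x,h} ∼ Gamma(x/h + 1, h), so E Z_{x,h} = x + h and var Z_{x,h} = (x + h)h, both converging to the target behaviour (E Z_{x,h} → x, var Z_{x,h} → 0) as h → 0.

```python
import numpy as np

def z_moments(x, h, rng, n=400_000):
    """Monte Carlo mean and variance of Z_{x,h} ~ Gamma(shape=x/h + 1, scale=h),
    the random variable whose pdf is the gamma associated kernel K_{x,h}."""
    z = rng.gamma(shape=x / h + 1.0, scale=h, size=n)
    return z.mean(), z.var()

rng = np.random.default_rng(1)
x = 2.0
for h in (0.5, 0.1, 0.02):
    m, v = z_moments(x, h, rng)
    # theory: E Z_{x,h} = x + h (the O(h) bias term), var Z_{x,h} = (x + h)*h
```

Shrinking h thus trades the O(h) bias E Z_{x,h} − x against the growing variance of the kernel weights, which is exactly the balance the bandwidth selectors of Section 3 manage.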