Article

Some Distributional Properties of the Matrix-Variate Generalized Gamma Model

1
Department of Mathematics and Statistics, McGill University, Montreal, QC H3A 2K6, Canada
2
Department of Statistical and Actuarial Sciences, Western University, London, ON N6A 5B7, Canada
*
Author to whom correspondence should be addressed.
Axioms 2026, 15(3), 238; https://doi.org/10.3390/axioms15030238
Submission received: 15 December 2025 / Revised: 13 February 2026 / Accepted: 17 March 2026 / Published: 23 March 2026
(This article belongs to the Special Issue New Perspectives in Mathematical Statistics, 2nd Edition)

Abstract

This paper employs Jacobians of matrix transformations to derive the density function of a matrix-variate generalized gamma distribution, together with its normalizing constant. By applying the inverse Mellin transform, explicit expressions for the density functions of the determinant and the trace are obtained in terms of generalized hypergeometric functions. The characteristic function and the first two moments follow from an associated density generator. Both the real and complex cases are treated, and several important special cases are identified. A simulation study reveals that the proposed model provides a more accurate fit than other distributions that are also defined on the cone of positive definite matrices. Moreover, it is shown to exhibit superior performance when applied to two empirical data sets. Applications involving the modeling of scatter matrices arising in financial studies, biostatistics, and reliability analysis are also discussed.

1. Introduction

The motivation for this work stems from the need for flexible and analytically tractable matrix-variate distributions capable of modeling scatter matrices that may deviate from classical Wishart behavior, a challenge that is encountered in fields such as neuroimaging, seismology, and climatology. This paper introduces a matrix-variate generalized gamma distribution that broadens the modeling capacity of existing families. Its novel contributions include the derivation of the full density function via Jacobians of matrix transformations and explicit representation of several associated statistical functions. The framework unifies important special cases, accommodates both the real and complex settings, and demonstrates that the generalized gamma model consistently attains higher modeling accuracy than competing distributions.
A submodel of the proposed distribution is the matrix-variate Wishart–Kotz distribution as defined, for instance, in the first part of Theorem 3 of [1], whose density function is specified in Equation (3). Its multivariate version, the so-called Kotz-type distribution, which was proposed in [2], is discussed in [3], where several applications arising in ecology, discriminant analysis, mathematical finance, repeated measurements, shape theory, and signal processing are pointed out. Ref. [4] made use of the log-likelihood of a Kotz-type model to estimate a certain intraclass correlation. The density function of the ratio of bivariate Kotz-type vectors is derived in [5], where the result is applied to transformed stock prices.
Ref. [6] expressed a matrix-variate Kotz-type distribution as an elliptically contoured distribution and derived some related statistical properties. As mentioned in Section 3.2.2 of [7], the Wishart–Kotz model is suitable for fitting high-resolution imaging data, which generally exhibit heavy tails. Ref. [8] discusses the estimation of the covariance parameter of the Kotz–Wishart distributions. The latter is, of course, a generalization of the well-known Wishart distribution, which continues to elicit much interest. For instance, the recent paper [9] introduces a novel method for constructing random matrices that follow a Wishart distribution, expressing dependent elements as algebraic functions of independent random variables having defined densities.
As complex variables are increasingly utilized in engineering and the physical sciences, many of the results will be derived in both the real and complex domains. For instance, a complex matrix-variate Kotz-type distribution is utilized in [10] as a classifier for multilook polarimetric synthetic aperture radar data.
This paper is organized as follows. Section 2 makes use of Jacobians of matrix transformations to derive the density function of a matrix-variate generalized gamma distribution from a symmetric Kotz-type model, yielding an explicit expression for its normalizing constant. This construction introduces a shape parameter whose role is examined in detail. Both the real and complex cases are considered. Section 3 develops several distributional properties, including the characteristic function of the matrix-variate random variable, its first two moments, and the exact density functions of the trace and the determinant. These results rely primarily on the inverse Mellin transform and on key properties of elliptical Wishart distributions. While Section 4 outlines a number of applications, Section 5 presents a simulation study as well as data modeling numerical examples. Section 6 offers some concluding remarks. The reader is referred to Appendix A for the notation being used.

2. The Matrix-Variate Generalized Gamma Density Function

The derivation of the matrix-variate generalized gamma distribution is most efficiently carried out through a sequence of structured matrix transformations, each accompanied by its corresponding Jacobian. Beginning with the Kotz-type distribution defined on a $p \times q$ rectangular matrix $X$, we proceed to obtain the distribution of $U = XX'$, then introduce an additional shape parameter to obtain $U_\gamma$, and finally apply a scaling transformation to arrive at $W = \Delta U_\gamma$. This stepwise approach is conceptually transparent and economical: each density follows directly from the previous one by a simple transformation of variables and the application of Jacobian results, which are stated as lemmas. The real and complex cases proceed in parallel, and the resulting densities (3)–(8) naturally reflect this progression. The transformation chain can be represented as follows:
$$X \;\longrightarrow\; U = XX' \;\longrightarrow\; U_\gamma \;\longrightarrow\; W = \Delta\,U_\gamma.$$
This sequence applies equally in the complex setting, with tildes marking the analogous complex-valued random matrices. Letting $U_\gamma = YY'$, the density function of $Y$ is also determined, along with its complex counterpart.
Let $X$ be a $p \times q$ real matrix ($p \le q$) of rank $p$. The matrix-variate Kotz-type distribution with parameters $\alpha > 0$, $\delta > 0$, and $\eta > -pq/2$ has density
$$g_X(X) = \frac{\delta\,\Gamma\!\left(\frac{pq}{2}\right)\alpha^{\frac{1}{\delta}\left(\frac{pq}{2}+\eta\right)}}{\pi^{pq/2}\,\Gamma\!\left[\frac{1}{\delta}\left(\frac{pq}{2}+\eta\right)\right]}\,\big[\mathrm{tr}(XX')\big]^{\eta}\,e^{-\alpha\left[\mathrm{tr}(XX')\right]^{\delta}}, \tag{1}$$
as specified for instance in [11].
The corresponding complex density, which can be determined from [12], is
$$g_{\tilde X}(\tilde X) = \frac{\delta\,\Gamma(pq)\,\alpha^{\frac{1}{\delta}(pq+\eta)}}{\pi^{pq}\,\Gamma\!\left[\frac{1}{\delta}(pq+\eta)\right]}\,\big[\mathrm{tr}(\tilde X\tilde X^{*})\big]^{\eta}\,e^{-\alpha\left[\mathrm{tr}(\tilde X\tilde X^{*})\right]^{\delta}}, \tag{2}$$
whenever $\alpha > 0$, $\delta > 0$ and $\eta > -pq$, with the $p \times q$ ($p \le q$) complex matrix $\tilde X$ being of rank $p$.
The density functions of
$$U = XX' \quad\text{and}\quad \tilde U = \tilde X\tilde X^{*}$$
will be determined by applying the next two results, which hold in the real and complex domains, respectively. All the lemmas stated in this section are derived in [13]. The subsequent results rely on the matrix-variate transformation-of-variables technique, as detailed for instance in [14] for both the real and complex cases.
Lemma 1.
Let $X$ be a $p \times q$, $p \le q$, full-rank real matrix whose $pq$ elements $x_{ij}$ are all distinct, and let $U = XX'$ be the corresponding $p \times p$ positive definite symmetric matrix. Then,
$$\mathrm{d}X = \frac{\pi^{\frac{pq}{2}}}{\Gamma_p\!\left(\frac{q}{2}\right)}\,|U|^{\frac{q}{2}-\frac{p+1}{2}}\,\mathrm{d}U,$$
where $\Gamma_p(a) = \pi^{\,p(p-1)/4}\prod_{j=1}^{p}\Gamma\big(a-(j-1)/2\big)$, $\Re(a) > (p-1)/2$, is referred to as the real matrix-variate gamma function, with $\Re(a)$ denoting the real part of $a$.
Lemma 2.
Let the $p \times q$ complex matrix $\tilde X$, comprising $pq$ distinct elements $\tilde x_{ij}$, with $p \le q$, be of rank $p$. Let $\tilde U = \tilde X\tilde X^{*}$ be the corresponding $p \times p$ Hermitian positive definite matrix, $\tilde X^{*}$ denoting the conjugate transpose of $\tilde X$. Then,
$$\mathrm{d}\tilde X = \frac{\pi^{pq}}{\tilde\Gamma_p(q)}\,|\tilde U|^{\,q-p}\,\mathrm{d}\tilde U,$$
where $\tilde\Gamma_p(a) = \pi^{\,p(p-1)/2}\prod_{j=1}^{p}\Gamma\big(a-(j-1)\big)$, $\Re(a) > p-1$, denotes the complex matrix-variate gamma function.
The $p \times p$ real matrix random variable $U = XX'$ has a (type I) Wishart–Kotz (W–K) distribution, as defined in Díaz-García and Gutiérrez-Jáimez (2010) [1]. On applying the transformation-of-variables technique, that is, expressing the density function of $X$ as specified in (1) in terms of $U$ and then multiplying it by the Jacobian provided in Lemma 1, one obtains the following density function for $U$:
$$\begin{aligned} h_U(U) &= \frac{\pi^{\frac{pq}{2}}}{\Gamma_p\!\left(\frac{q}{2}\right)}\,|U|^{\frac{q}{2}-\frac{p+1}{2}}\;\frac{\delta\,\Gamma\!\left(\frac{pq}{2}\right)\alpha^{\frac{1}{\delta}\left(\frac{pq}{2}+\eta\right)}}{\pi^{\frac{pq}{2}}\,\Gamma\!\left[\frac{1}{\delta}\left(\frac{pq}{2}+\eta\right)\right]}\,[\mathrm{tr}(U)]^{\eta}\,e^{-\alpha[\mathrm{tr}(U)]^{\delta}} \\ &= \frac{\delta\,\Gamma\!\left(\frac{pq}{2}\right)\alpha^{\frac{1}{\delta}\left(\frac{pq}{2}+\eta\right)}}{\Gamma_p\!\left(\frac{q}{2}\right)\Gamma\!\left[\frac{1}{\delta}\left(\frac{pq}{2}+\eta\right)\right]}\,|U|^{\frac{q}{2}-\frac{p+1}{2}}\,[\mathrm{tr}(U)]^{\eta}\,e^{-\alpha[\mathrm{tr}(U)]^{\delta}}, \end{aligned}\tag{3}$$
for $\alpha > 0$, $\delta > 0$ and $\eta > -pq/2$.
Parenthetically, the derivation of this density function proceeds in the same way as in the univariate setting: consider, for instance, a density $\lambda(x^2)$, $x > 0$ (analogously, the density (1) can be expressed as $\lambda(\mathrm{tr}(XX'))$), and the transformation $u = x^2$ ($U = XX'$ in the matrix-variate case); denoting by $J$ the derivative (Jacobian) of the inverse transformation, one obtains, in the univariate case, the density of $u$ as $h(u) = |J|\,\lambda(u)$, $u > 0$, and, in the matrix-variate case, $h_U(U) = |J|\,\lambda(\mathrm{tr}(U))$, $U > 0$, which is precisely the density given in (3) on noting that, in this case, $\lambda(\mathrm{tr}(U)) = g_X(X)$ wherein $XX'$ is expressed as $U$ and that, in view of Lemma 1, $J = \frac{\pi^{pq/2}}{\Gamma_p(q/2)}\,|U|^{\frac{q}{2}-\frac{p+1}{2}}$.
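As a minimal numerical sketch of this univariate change of variables (using a half-normal density in place of a Kotz-type one, purely for illustration): with $\lambda(t) = 2(2\pi)^{-1/2}e^{-t/2}$ for $x > 0$, the transformation $u = x^2$ with Jacobian $J = \frac{1}{2}u^{-1/2}$ yields $h(u) = |J|\lambda(u)$, which is the chi-square density on one degree of freedom.

```python
import numpy as np
from scipy import stats, integrate

# Half-normal density on x > 0, written in the form lambda(x^2)
lam = lambda t: 2.0 * np.exp(-t / 2.0) / np.sqrt(2.0 * np.pi)

# Change of variables u = x^2: inverse x = sqrt(u), Jacobian J = (1/2) u^{-1/2}
J = lambda u: 1.0 / (2.0 * np.sqrt(u))
h = lambda u: np.abs(J(u)) * lam(u)          # h(u) = |J| * lambda(u)

u = np.linspace(0.1, 10.0, 50)
# The transformed density coincides with the chi-square(1) density
assert np.allclose(h(u), stats.chi2(df=1).pdf(u))

# ... and it integrates to one over (0, infinity)
total, _ = integrate.quad(h, 0, np.inf)
assert abs(total - 1.0) < 1e-8
```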
Similarly, the complex counterpart of this density function is that associated with the p × p complex-valued matrix random variable U ˜ = X ˜ X ˜ . It is given by
$$h_{\tilde U}(\tilde U) = \frac{\delta\,\Gamma(pq)\,\alpha^{\frac{1}{\delta}(pq+\eta)}}{\tilde\Gamma_p(q)\,\Gamma\!\left[\frac{1}{\delta}(pq+\eta)\right]}\,|\tilde U|^{\,q-p}\,[\mathrm{tr}(\tilde U)]^{\eta}\,e^{-\alpha[\mathrm{tr}(\tilde U)]^{\delta}}, \tag{4}$$
whenever $\alpha > 0$, $\delta > 0$ and $\eta > -pq$.
It should be noted that the determinants appearing in the real and complex Wishart–Kotz density functions (3) and (4) arise solely from the Jacobians of the transformations.
A generalized gamma family is then obtained by replacing the shape parameter
$$\frac{q}{2} \;\text{ with }\; \gamma+\frac{q}{2} \qquad\text{and}\qquad q \;\text{ with }\; \gamma+q$$
in the density functions of $U$ and $\tilde U$, respectively specified in (3) and (4), which yields
$$h_{U_\gamma}(U_\gamma) = \frac{\delta\,\Gamma\!\left(p\gamma+\frac{pq}{2}\right)\alpha^{\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)}}{\Gamma_p\!\left(\gamma+\frac{q}{2}\right)\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)\right]}\,|U_\gamma|^{\gamma+\frac{q}{2}-\frac{p+1}{2}}\,[\mathrm{tr}(U_\gamma)]^{\eta}\,e^{-\alpha[\mathrm{tr}(U_\gamma)]^{\delta}}, \tag{5}$$
for $\alpha > 0$, $\delta > 0$, $\gamma > -\frac{q}{2}+\frac{p-1}{2}$, and $\eta > -p\left(\gamma+\frac{q}{2}\right)$, and
$$h_{\tilde U_\gamma}(\tilde U_\gamma) = \frac{\delta\,\Gamma\!\left(p\gamma+pq\right)\alpha^{\frac{1}{\delta}\left(p\gamma+pq+\eta\right)}}{\tilde\Gamma_p(\gamma+q)\,\Gamma\!\left[\frac{1}{\delta}(p\gamma+pq+\eta)\right]}\,|\tilde U_\gamma|^{\,\gamma+q-p}\,[\mathrm{tr}(\tilde U_\gamma)]^{\eta}\,e^{-\alpha[\mathrm{tr}(\tilde U_\gamma)]^{\delta}}, \tag{6}$$
whenever $\alpha > 0$, $\delta > 0$, $\gamma > -q+p-1$, and $\eta > -p(\gamma+q)$.
The density function specified in Equation (5) shall henceforth be referred to as that of a (real unscaled) matrix-variate generalized gamma ( MGG ) distribution. The introduction of the parameter γ can be justified as follows. In the Wishart–Kotz densities (3) and (4), the exponent of the determinant term is fixed by the geometry of the transformation X U = X X . By replacing q / 2 with γ + q / 2 , and q with γ + q , we free this exponent and allow it to vary continuously through γ . This produces a generalized gamma-type determinant factor, which controls the behavior of the density near the boundary of the cone of positive definite matrices. Thus, γ acts as a shape parameter governing how much mass the distribution places on matrices with small or large determinants. It increases model flexibility by allowing heavier or lighter tails in the space of positive definite matrices. Furthermore, it should be noted that, in the univariate setting, the generalized gamma family is obtained by introducing an additional shape parameter that modifies the power of the argument. The substitutions above play exactly the same role in the matrix setting: γ adjusts the determinant term in the same way the univariate shape parameter adjusts the power of the scalar variable. Consequently, γ is the key ingredient that lifts the Wishart–Kotz distribution to a matrix-variate generalized gamma (MGG) distribution, and the former is readily recovered from the latter by setting γ = 0 .
Next, on letting
$$U_\gamma = YY' \quad\text{and}\quad \tilde U_\gamma = \tilde Y\tilde Y^{*},$$
and applying Lemmas 1 and 2, one obtains the following density functions of Y R p × q (the set of p × q real matrices) and Y ˜ C p × q (the set of p × q complex matrices) which are utilized in the next section:
$$f_Y(Y) = \frac{\Gamma_p\!\left(\frac{q}{2}\right)}{\pi^{\frac{pq}{2}}}\;\frac{\delta\,\Gamma\!\left(p\gamma+\frac{pq}{2}\right)\alpha^{\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)}}{\Gamma_p\!\left(\gamma+\frac{q}{2}\right)\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)\right]}\,|YY'|^{\gamma}\,[\mathrm{tr}(YY')]^{\eta}\,e^{-\alpha[\mathrm{tr}(YY')]^{\delta}} \tag{7}$$
(derived from (5), where $U_\gamma$ is replaced with $YY'$ and the result is multiplied by $J^{-1} = \frac{\Gamma_p\left(\frac{q}{2}\right)}{\pi^{pq/2}}\,|U|^{-\frac{q}{2}+\frac{p+1}{2}}$, which follows from the transformation-of-variables technique) and, similarly,
$$f_{\tilde Y}(\tilde Y) = \frac{\tilde\Gamma_p(q)}{\pi^{pq}}\;\frac{\delta\,\Gamma(p\gamma+pq)\,\alpha^{\frac{1}{\delta}(p\gamma+pq+\eta)}}{\tilde\Gamma_p(\gamma+q)\,\Gamma\!\left[\frac{1}{\delta}(p\gamma+pq+\eta)\right]}\,|\tilde Y\tilde Y^{*}|^{\gamma}\,[\mathrm{tr}(\tilde Y\tilde Y^{*})]^{\eta}\,e^{-\alpha[\mathrm{tr}(\tilde Y\tilde Y^{*})]^{\delta}}. \tag{8}$$
Now, let the real-valued matrix-variate random variable W = Δ U γ , where the density function of U γ is given in (5) and Δ is a scaling matrix, and let the complex-valued matrix-variate random variable W ˜ = Δ U ˜ γ , where the density function of U ˜ γ is given in (6).
We appeal to the following lemmas to secure the density functions of the matrix-variate generalized gamma distributions of W and W ˜ in the real and complex domains.
Lemma 3.
When the $p \times p$ matrix $U$ is symmetric, the Jacobian of the transformation $W = A^{1/2}UA^{1/2}$, where $A$ is a $p \times p$ positive definite symmetric matrix, is given by $\mathrm{d}W = |A|^{\frac{p+1}{2}}\,\mathrm{d}U$, so that the Jacobian of the inverse transformation $U = A^{-1/2}WA^{-1/2}$ is such that $\mathrm{d}U = |A|^{-\frac{p+1}{2}}\,\mathrm{d}W$.
Lemma 4.
In the complex domain, on letting $\tilde W = A^{1/2}\tilde U A^{1/2}$, the Jacobian of the inverse transformation $\tilde U = A^{-1/2}\tilde W A^{-1/2}$ is such that $\mathrm{d}\tilde U = |A|^{-p}\,\mathrm{d}\tilde W$.
On letting
$$W = \Delta^{1/2}U_\gamma\Delta^{1/2} \quad\text{and}\quad \tilde W = \Delta^{1/2}\tilde U_\gamma\Delta^{1/2}$$
with $\Delta > 0$, or equivalently
$$W = \Delta U_\gamma \quad\text{and}\quad \tilde W = \Delta\tilde U_\gamma,$$
since
$$|\Delta^{-1/2}W\Delta^{-1/2}| = |\Delta|^{-1}|W| = |\Delta^{-1}W| \quad\text{and}\quad \mathrm{tr}\big(\Delta^{-1/2}W\Delta^{-1/2}\big) = \mathrm{tr}\big(\Delta^{-1}W\big),$$
one obtains the real scaled matrix-variate generalized gamma density
$$\psi_W(W) = \frac{1}{|\Delta|^{\gamma+\frac{q}{2}}}\;\frac{\delta\,\alpha^{\frac{1}{\delta}\left[p\left(\gamma+\frac{q}{2}\right)+\eta\right]}\,\Gamma\!\left[p\!\left(\gamma+\frac{q}{2}\right)\right]}{\Gamma\!\left[\frac{1}{\delta}\!\left(p\left(\gamma+\frac{q}{2}\right)+\eta\right)\right]\,\Gamma_p\!\left(\gamma+\frac{q}{2}\right)}\;|W|^{\frac{2\gamma+q-p-1}{2}}\,\big[\mathrm{tr}(\Delta^{-1}W)\big]^{\eta}\,e^{-\alpha\left[\mathrm{tr}(\Delta^{-1}W)\right]^{\delta}} \tag{9}$$
for $\alpha > 0$, $\delta > 0$ and $\eta > -p(\gamma+q/2)$ (multiplying the density (5), where $U_\gamma = \Delta^{-1}W$, by the Jacobian given in Lemma 3, that is, $|\Delta|^{-\frac{p+1}{2}}$, and simplifying), and the complex counterpart, that is, the density associated with the $p \times p$ complex-valued positive definite matrix random variable $\tilde W$, is given by
$$\psi_{\tilde W}(\tilde W) = \frac{1}{|\Delta|^{\gamma+q}}\;\frac{\delta\,\alpha^{\frac{1}{\delta}\left[p(\gamma+q)+\eta\right]}\,\Gamma\!\left[p(\gamma+q)\right]}{\Gamma\!\left[\frac{1}{\delta}\big(p(\gamma+q)+\eta\big)\right]\,\tilde\Gamma_p(\gamma+q)}\;|\tilde W|^{\,\gamma+q-p}\,\big[\mathrm{tr}(\Delta^{-1}\tilde W)\big]^{\eta}\,e^{-\alpha\left[\mathrm{tr}(\Delta^{-1}\tilde W)\right]^{\delta}}, \tag{10}$$
whenever $\alpha > 0$, $\delta > 0$ and $\eta > -p(\gamma+q)$.
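The determinant and trace identities invoked in the derivation above are easily checked numerically; the following sketch uses arbitrary (hypothetical) positive definite matrices $\Delta$ and $W$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4
# Arbitrary symmetric positive definite Delta and W (hypothetical values)
A = rng.standard_normal((p, p)); Delta = A @ A.T + p * np.eye(p)
B = rng.standard_normal((p, p)); W = B @ B.T + p * np.eye(p)

# Symmetric inverse square root of Delta via its eigendecomposition
vals, vecs = np.linalg.eigh(Delta)
Dhalf_inv = vecs @ np.diag(vals ** -0.5) @ vecs.T

M = Dhalf_inv @ W @ Dhalf_inv            # Delta^{-1/2} W Delta^{-1/2}
# |Delta^{-1/2} W Delta^{-1/2}| = |Delta^{-1} W| and the traces agree
assert np.isclose(np.linalg.det(M), np.linalg.det(np.linalg.inv(Delta) @ W))
assert np.isclose(np.trace(M), np.trace(np.linalg.inv(Delta) @ W))
```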
The scaled matrix-variate generalized gamma model whose density is given in Equation (9) can be interpreted as a generalized Wishart–Kotz distribution and is suitable for modeling random covariance matrices in settings where the classical Wishart assumptions are too restrictive. Specific applications are pointed out in Section 4.
The derivations above demonstrate that the scaled matrix-variate generalized gamma distribution arises naturally from a sequence of simple and well-structured transformations. The resulting family of distributions is therefore both mathematically coherent and practically flexible, providing a unified framework that encompasses the Wishart, Wishart–Kotz, matrix-variate gamma, and their scaled counterparts as special cases.

3. The Distribution of Statistical Functions of the Matrix-Variate Generalized Gamma Distribution

Each subsection examines a specific statistical function of the matrix-variate generalized gamma ( MGG ) distribution. In the following subsection, for example, we derive the moments of the trace of this matrix random variable and subsequently obtain the exact density of the trace by applying the inverse Mellin transform to its complex moments. The derivations and distributional results presented in this section constitute original contributions to the literature.

3.1. The Distribution of the Trace

Let $Y$ be a $p \times q$, $p \le q$, matrix of rank $p$ in the real domain, whose density function is given in Equation (7). Then,
$$\int_Y |YY'|^{\gamma}\,[\mathrm{tr}(YY')]^{\eta}\,e^{-\alpha[\mathrm{tr}(YY')]^{\delta}}\,\mathrm{d}Y = \frac{\pi^{\frac{pq}{2}}\,\Gamma_p\!\left(\gamma+\frac{q}{2}\right)\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)\right]}{\Gamma_p\!\left(\frac{q}{2}\right)\,\delta\,\Gamma\!\left(p\gamma+\frac{pq}{2}\right)\,\alpha^{\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)}}. \tag{11}$$
Let us determine the h-th moment of [ tr ( Y Y ) ] for an arbitrary h. From the right-hand side of (11), one has
$$\begin{aligned} E\big[\mathrm{tr}(YY')\big]^{h} &= \frac{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta+h\right)\right]}{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)\right]}\;\frac{1}{\alpha^{h/\delta}} \\ &= \frac{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta-1\right)+\frac{s}{\delta}\right]}{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)\right]}\;\alpha^{\frac{1}{\delta}}\,\alpha^{-\frac{s}{\delta}}, \qquad h = s-1, \\ &= C\,\Gamma\!\left(\beta+\frac{s}{\delta}\right)\alpha^{-\frac{s}{\delta}}, \qquad \beta = \frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta-1\right),\quad C = \frac{\alpha^{\frac{1}{\delta}}}{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)\right]}, \end{aligned}\tag{12}$$
for $p\gamma+\frac{pq}{2}+\eta-1 > 0$, or $\gamma > 0$, $\eta > 0$, and $\Re\!\left(\beta+\frac{s}{\delta}\right) > 0$, where $\Re(\cdot)$ denotes the real part of $(\cdot)$. Clearly, this $h$-th moment can be similarly obtained from the density function of $U_\gamma = YY'$, which is given in Equation (3) with $\gamma+\frac{q}{2}$ substituted for $\frac{q}{2}$.
Let u = tr ( Y Y ) and let the density, coming from (12) through the inverse Mellin transform, be denoted by g ( u ) . Then,
$$g(u) = C\,\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Gamma\!\left(\beta+\frac{s}{\delta}\right)\left(\alpha^{\frac{1}{\delta}}\,u\right)^{-s}\mathrm{d}s,$$
where $i = \sqrt{-1}$ and $c$, the abscissa of the contour of integration, is any real number greater than $-\delta\beta$. Letting $s_1 = \frac{s}{\delta}$, so that $s = \delta s_1$ and $\mathrm{d}s = \delta\,\mathrm{d}s_1$, one has
$$g(u) = \frac{C\,\delta}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Gamma(\beta+s_1)\,\big(\alpha\,u^{\delta}\big)^{-s_1}\,\mathrm{d}s_1.$$
The poles of $\Gamma(\beta+s_1)$ are at $s_1 = -\beta-\nu$, $\nu = 0, 1, \ldots$, and the sum of the residues is
$$\big(\alpha u^{\delta}\big)^{\beta}\sum_{\nu=0}^{\infty}\frac{(-1)^{\nu}}{\nu!}\big(\alpha u^{\delta}\big)^{\nu} = \big(\alpha u^{\delta}\big)^{\beta}e^{-\alpha u^{\delta}} = \alpha^{\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta-1\right)}\,u^{\,p\gamma+\frac{pq}{2}+\eta-1}\,e^{-\alpha u^{\delta}}.$$
Therefore, the density g ( u ) is given by
$$g(u)\,\mathrm{d}u = \frac{\delta\,\alpha^{\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)}}{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)\right]}\;u^{\,p\gamma+\frac{pq}{2}+\eta-1}\,e^{-\alpha u^{\delta}}\,\mathrm{d}u \tag{14}$$
for γ > 0 and η > 0 .
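Since (14) is a univariate generalized gamma density, it can be simulated by a power transformation of an ordinary gamma variable: if $G \sim \mathrm{Gamma}(a/\delta, 1)$ with $a = p\gamma + pq/2 + \eta$, then $u = (G/\alpha)^{1/\delta}$ has the density (14). A minimal Monte Carlo sketch with hypothetical parameter values:

```python
import numpy as np
from scipy.special import gamma as Gamma

rng = np.random.default_rng(1)
# Hypothetical parameter values
p, q, gam, eta, alpha, delta = 3, 5, 0.7, 0.4, 1.3, 1.5
a = p * gam + p * q / 2 + eta            # power of u in the density (14)

# If G ~ Gamma(a/delta, 1), then u = (G/alpha)**(1/delta) has density (14)
G = rng.gamma(shape=a / delta, scale=1.0, size=2_000_000)
u = (G / alpha) ** (1.0 / delta)

# First moment from the Mellin-transform formula: E[u] =
# alpha^{-1/delta} Gamma((a+1)/delta) / Gamma(a/delta)
mean_exact = alpha ** (-1.0 / delta) * Gamma((a + 1.0) / delta) / Gamma(a / delta)
assert abs(u.mean() - mean_exact) / mean_exact < 0.01
```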
Note 1.
This function $g(u)$ is also the density of $u = r^2$, where $r$ denotes the radius in the polar-coordinate representation of the domain. We also observe that $u = \mathrm{tr}(YY')$ is equal to the sum of the squares of all the elements of $Y$, as well as to the sum of the eigenvalues of the $p \times p$ real positive definite matrix $Z = YY'$.
The complex case is now addressed. Let $\tilde Y$ be a $p \times q$, $p \le q$, matrix of rank $p$ in the complex domain, whose density function is given in Equation (8). Then,
$$\int_{\tilde Y} |\tilde Y\tilde Y^{*}|^{\gamma}\,[\mathrm{tr}(\tilde Y\tilde Y^{*})]^{\eta}\,e^{-\alpha[\mathrm{tr}(\tilde Y\tilde Y^{*})]^{\delta}}\,\mathrm{d}\tilde Y = \frac{\pi^{pq}\,\tilde\Gamma_p(\gamma+q)\,\Gamma\!\left[\frac{1}{\delta}(p\gamma+pq+\eta)\right]}{\tilde\Gamma_p(q)\,\delta\,\Gamma(p\gamma+pq)\,\alpha^{\frac{1}{\delta}(p\gamma+pq+\eta)}}. \tag{15}$$
On comparing (11) and (15), we note that the only difference in the right-hand sides is that $\frac{q}{2}$ in the real case is replaced by $q$ in the complex case. Hence, the density of $\tilde u = \mathrm{tr}(\tilde Y\tilde Y^{*})$, denoted by $\tilde g(\tilde u)$, is the following, observing that $\tilde u$ is real in this case because $\tilde Y\tilde Y^{*}$ is Hermitian positive definite, $\tilde Y^{*}$ denoting the conjugate transpose of $\tilde Y$:
$$\tilde g(\tilde u)\,\mathrm{d}\tilde u = \frac{\delta\,\alpha^{\frac{1}{\delta}(p\gamma+pq+\eta)}}{\Gamma\!\left[\frac{1}{\delta}(p\gamma+pq+\eta)\right]}\;\tilde u^{\,p\gamma+pq+\eta-1}\,e^{-\alpha\tilde u^{\delta}}\,\mathrm{d}\tilde u \tag{16}$$
for η > 0 and γ > 0 .

3.2. The Distribution of the Determinant

Let $u_1 = |YY'|$, the determinant of $YY'$. The $h$-th moment of $u_1$ is obtained by once again making use of identity (11):
$$\begin{aligned} E[u_1^{\,h}] &= \frac{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta+ph\right)\right]}{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)\right]}\;\frac{\Gamma\!\left(p\gamma+\frac{pq}{2}\right)}{\Gamma\!\left(p\gamma+\frac{pq}{2}+ph\right)}\;\frac{\Gamma_p\!\left(\gamma+\frac{q}{2}+h\right)}{\Gamma_p\!\left(\gamma+\frac{q}{2}\right)}\;\alpha^{-\frac{ph}{\delta}} \\ &= C_1\,\frac{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma-p+\frac{pq}{2}+\eta\right)+\frac{ps}{\delta}\right]}{\Gamma\!\left(p\gamma+\frac{pq}{2}-p+ps\right)}\;\prod_{j=1}^{p}\Gamma\!\left(\gamma+\frac{q}{2}-\frac{j-1}{2}-1+s\right)\,\alpha^{-\frac{ps}{\delta}}, \qquad h = s-1, \\ C_1 &= \frac{\Gamma\!\left(p\gamma+\frac{pq}{2}\right)\alpha^{\frac{p}{\delta}}}{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)\right]}\left[\prod_{j=1}^{p}\Gamma\!\left(\gamma+\frac{q}{2}-\frac{j-1}{2}\right)\right]^{-1}. \end{aligned}\tag{17}$$
Then, by applying the inverse Mellin transform technique, the density of u 1 , denoted by g 1 ( u 1 ) , is obtained as follows:
$$g_1(u_1) = C_1\,\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma-p+\frac{pq}{2}+\eta\right)+\frac{ps}{\delta}\right]}{\Gamma\!\left(p\gamma+\frac{pq}{2}-p+ps\right)}\left\{\prod_{j=1}^{p}\Gamma\!\left(\gamma+\frac{q}{2}-\frac{j-1}{2}-1+s\right)\right\}\left(\alpha^{\frac{p}{\delta}}u_1\right)^{-s}\mathrm{d}s = C_1\,H^{\,p+1,\,0}_{\,1,\,p+1}\!\left[\alpha^{\frac{p}{\delta}}u_1\;\middle|\;\begin{matrix}\left(p\gamma+\frac{pq}{2}-p,\;p\right)\\ \left(\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta-p\right),\;\frac{p}{\delta}\right),\;\left(\gamma+\frac{q}{2}-1-\frac{j-1}{2},\,1\right),\; j=1,\ldots,p\end{matrix}\right],\qquad 0\le u_1<\infty, \tag{18}$$
where $H(\cdot)$ is Fox's H-function (a generalization of Meijer's G-function), as defined for instance in [15], which provides finite-form representations thereof, discusses its efficient evaluation, and includes computational modules.
By applying the multiplication formula for gamma functions,
$$\Gamma(mz) = (2\pi)^{\frac{1-m}{2}}\,m^{\,mz-\frac{1}{2}}\prod_{j=1}^{m}\Gamma\!\left(z+\frac{j-1}{m}\right),\qquad m = 1, 2, \ldots,$$
one can expand all the gamma functions included in the $h$-th moment of $|YY'| = u_1$ given in (17). Thus, the gamma ratios can be expressed as follows:
$$\frac{\Gamma\!\left[\frac{p}{\delta}\left(\gamma+\frac{q}{2}+\frac{\eta}{p}+h\right)\right]}{\Gamma\!\left[\frac{p}{\delta}\left(\gamma+\frac{q}{2}+\frac{\eta}{p}\right)\right]} = \prod_{j=1}^{p}\frac{\Gamma\!\left[\frac{1}{\delta}\left(\gamma+\frac{q}{2}+\frac{\eta}{p}+h\right)+\frac{j-1}{p}\right]}{\Gamma\!\left[\frac{1}{\delta}\left(\gamma+\frac{q}{2}+\frac{\eta}{p}\right)+\frac{j-1}{p}\right]}\;p^{\frac{ph}{\delta}}$$
and
$$\frac{\Gamma\!\left[p\left(\gamma+\frac{q}{2}\right)\right]}{\Gamma\!\left[p\left(\gamma+\frac{q}{2}+h\right)\right]} = \prod_{j=1}^{p}\frac{\Gamma\!\left(\gamma+\frac{q}{2}+\frac{j-1}{p}\right)}{\Gamma\!\left(\gamma+\frac{q}{2}+h+\frac{j-1}{p}\right)}\;p^{-ph}.$$
The exponent of $p$ arising from all the factors is then such that
$$p^{\frac{ph}{\delta}-ph}\,\alpha^{-\frac{ph}{\delta}} = p^{\,p-\frac{p}{\delta}}\,\alpha^{\frac{p}{\delta}}\;p^{\left(\frac{p}{\delta}-p\right)s}\,\alpha^{-\frac{ps}{\delta}},\qquad h = s-1.$$
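The multiplication formula invoked above is easily verified numerically; the following sketch checks it against direct evaluation of the gamma function for a few values of $m$ and $z$:

```python
import numpy as np
from scipy.special import gamma

def gauss_multiplication(z, m):
    """Right-hand side of the multiplication formula for Gamma(m*z)."""
    return ((2 * np.pi) ** ((1 - m) / 2) * m ** (m * z - 0.5)
            * np.prod([gamma(z + j / m) for j in range(m)]))

# Compare with a direct evaluation of Gamma(m*z)
for m in (2, 3, 5):
    for z in (0.8, 1.7, 2.4):
        assert np.isclose(gamma(m * z), gauss_multiplication(z, m))
```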
Now, on collecting all factors, we have
$$E\big[\,|YY'|\,\big]^{s-1} = C_1\;p^{\,p-\frac{p}{\delta}}\,\alpha^{\frac{p}{\delta}}\prod_{j=1}^{p}\left\{\frac{\Gamma\!\left[\frac{1}{\delta}\left(\gamma+\frac{q}{2}+\frac{\eta}{p}-1\right)+\frac{j-1}{p}+\frac{s}{\delta}\right]}{\Gamma\!\left(\gamma+\frac{q}{2}-1+\frac{j-1}{p}+s\right)}\right\}\;\prod_{j=1}^{p}\Gamma\!\left(\gamma+\frac{q}{2}-\frac{j-1}{2}-1+s\right)\left[p^{\frac{p}{\delta}-p}\,\alpha^{-\frac{p}{\delta}}\right]^{s}$$
where
$$C_1 = \prod_{j=1}^{p}\frac{\Gamma\!\left(\gamma+\frac{q}{2}+\frac{j-1}{p}\right)}{\Gamma\!\left[\frac{1}{\delta}\left(\gamma+\frac{q}{2}+\frac{\eta}{p}\right)+\frac{j-1}{p}\right]}\left[\prod_{j=1}^{p}\Gamma\!\left(\gamma+\frac{q}{2}-\frac{j-1}{2}\right)\right]^{-1}.$$
Then, by taking the inverse Mellin transform, one obtains the density of $u_1$, which is given by
$$g_1(u_1) = C_1\,p^{\,p-\frac{p}{\delta}}\,\alpha^{\frac{p}{\delta}}\;H^{\,2p,\,0}_{\,p,\,2p}\!\left[\,p^{\,p-\frac{p}{\delta}}\,\alpha^{\frac{p}{\delta}}\,u_1\;\middle|\;\begin{matrix}\left(\beta_{3j},\,1\right),\; j=1,\ldots,p\\ \left(\beta_{1j},\,\frac{1}{\delta}\right),\;\left(\beta_{2j},\,1\right),\; j=1,\ldots,p\end{matrix}\right] \tag{21}$$
where
$$\beta_{1j} = \frac{1}{\delta}\left(\gamma+\frac{q}{2}+\frac{\eta}{p}-1\right)+\frac{j-1}{p},\qquad \beta_{2j} = \gamma+\frac{q}{2}-\frac{j-1}{2}-1,\qquad \beta_{3j} = \gamma+\frac{q}{2}-1+\frac{j-1}{p}. \tag{22}$$
When $\delta = 1$, the H-function in (21) reduces to a G-function as follows:
$$g_1(u_1) = C_1\,\alpha^{p}\;G^{\,2p,\,0}_{\,p,\,2p}\!\left[\alpha^{p}\,u_1\;\middle|\;\begin{matrix}\beta_{3j},\; j=1,\ldots,p\\ \beta_{1j},\,\beta_{2j},\; j=1,\ldots,p\end{matrix}\right],\qquad 0\le u_1<\infty,$$
where the parameters are the same as those specified in (22) with $\delta = 1$. The gamma product in the integrand of the contour integral in (18) is then
$$\prod_{j=1}^{p}\Gamma\!\left(\gamma+\frac{q}{2}+\frac{\eta}{p}+\frac{j-1}{p}-1+s\right)\Gamma\!\left(\gamma+\frac{q}{2}-\frac{j-1}{2}-1+s\right)\left[\Gamma\!\left(\gamma+\frac{q}{2}+\frac{j-1}{p}-1+s\right)\right]^{-1}.$$
Note 2.
The discussion of the moments and densities arising in the complex case parallels that provided for the real case. The only difference is that the parameter q 2 originating from the Jacobian in the real case is replaced by q in the complex case, with the other parameters remaining the same.

3.3. The Joint Distribution of the Trace and the Determinant

We will derive the joint density of the trace and the determinant by first taking the product moment that follows and then by inverting it to determine the joint density function:
$$\begin{aligned} E\Big[|YY'|^{\,s_1-1}\,\big[\mathrm{tr}(YY')\big]^{\,s_2-1}\Big] &= C_2\;\frac{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+ps_1-p+\eta+s_2-1\right)\right]\;\Gamma_p\!\left(\gamma+s_1-1+\frac{q}{2}\right)}{\Gamma\!\left(p\gamma+\frac{pq}{2}+ps_1-p\right)}\;\alpha^{-\frac{1}{\delta}\left(ps_1+s_2-p-1\right)}, \\ C_2 &= \frac{\Gamma\!\left(p\gamma+\frac{pq}{2}\right)}{\Gamma\!\left[\frac{1}{\delta}\left(p\gamma+\frac{pq}{2}+\eta\right)\right]\,\Gamma_p\!\left(\gamma+\frac{q}{2}\right)}. \end{aligned}\tag{23}$$
On denoting the joint moment expression appearing in Equation (23) by ϕ ( s 1 , s 2 ) and inverting it, one obtains the following representation of the joint density of u = tr ( Y Y ) and u 1 = | Y Y | :
$$g_2(u, u_1) = \frac{1}{(2\pi i)^2}\int_{c_1-i\infty}^{c_1+i\infty}\int_{c_2-i\infty}^{c_2+i\infty}\phi(s_1, s_2)\,u^{-s_2}\,u_1^{-s_1}\,\mathrm{d}s_1\,\mathrm{d}s_2. \tag{25}$$
For $s_1 = s_2 = s$, one has the density of $u_2 = |YY'|\,\mathrm{tr}(YY')$ from Equation (25); for $s_1 = s$ and $s_2 = 2s$, one obtains the density of $u_3 = |YY'|\,[\mathrm{tr}(YY')]^{2}$ from Equation (25); for $s_2 = s$, $s_1 = 2s$, one has the density of $u_4 = \mathrm{tr}(YY')\,|YY'|^{2}$, also from Equation (25).

3.4. The Moment Generating Function for the Case δ = 1 and η = 0

Let W 0 denote the matrix-variate generalized gamma random variable W whose density function is specified in Equation (9), where η is set equal to zero and δ is set equal to one, and let C 0 ( Δ 1 ) represent the normalizing constant of the density of W 0 , that is,
$$C_0(\Delta^{-1}) = |\Delta^{-1}|^{\,\gamma+\frac{q}{2}}\,\alpha^{\,p\left(\gamma+\frac{q}{2}\right)}\,\frac{\Gamma\!\left[p\!\left(\gamma+\frac{q}{2}\right)\right]}{\Gamma\!\left[p\!\left(\gamma+\frac{q}{2}\right)\right]\Gamma_p\!\left(\gamma+\frac{q}{2}\right)} = \frac{|\Delta^{-1}|^{\,\gamma+\frac{q}{2}}\,\alpha^{\,p\left(\gamma+\frac{q}{2}\right)}}{\Gamma_p\!\left(\gamma+\frac{q}{2}\right)}.$$
Then, the moment-generating function of W 0 evaluated at a p × p real symmetric matrix T whose non-diagonal elements are weighted by 1 2 , is
$$\begin{aligned} M_{W_0}(T) = E\big(e^{\mathrm{tr}(TW_0)}\big) &= \int_{W_0>0} \frac{|\Delta^{-1}|^{\,\gamma+\frac{q}{2}}\,\alpha^{\,p\left(\gamma+\frac{q}{2}\right)}}{\Gamma_p\!\left(\gamma+\frac{q}{2}\right)}\,|W_0|^{\frac{2\gamma+q-p-1}{2}}\,e^{-\alpha\,\mathrm{tr}\left[\left(\Delta^{-1}-\frac{T}{\alpha}\right)W_0\right]}\,\mathrm{d}W_0 \\ &= \frac{C_0(\Delta^{-1})}{C_0\!\left(\Delta^{-1}-\frac{T}{\alpha}\right)} = \frac{\left|\Delta^{-1}-\frac{T}{\alpha}\right|^{-\left(\gamma+\frac{q}{2}\right)}}{\left|\Delta^{-1}\right|^{-\left(\gamma+\frac{q}{2}\right)}} = \left|I-\frac{T\Delta}{\alpha}\right|^{-\left(\gamma+\frac{q}{2}\right)}, \end{aligned}$$
the characteristic function of $W_0$ being $\left|I-\frac{i\,T\Delta}{\alpha}\right|^{-\left(\gamma+\frac{q}{2}\right)}$. On letting $\gamma = 0$, $\alpha = 1/2$ and $q = n$, one obtains the characteristic function of a Wishart distribution on $n$ degrees of freedom whose covariance parameter is $\Delta$, namely $|I-2iT\Delta|^{-n/2}$.
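For $p = 1$, $\delta = 1$ and $\eta = 0$, the density (9) reduces to a gamma density with shape $\gamma + q/2$ and scale $\Delta/\alpha$, and the moment generating function above reduces to $(1 - t\Delta/\alpha)^{-(\gamma+q/2)}$. A Monte Carlo sketch with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical parameters; p = 1, so the matrix-variate density is a gamma density
gam, q, alpha, Delta, t = 0.5, 4.0, 2.0, 1.5, 0.3   # requires t*Delta/alpha < 1

shape = gam + q / 2.0                                # gamma + q/2
W0 = rng.gamma(shape=shape, scale=Delta / alpha, size=2_000_000)

# |I - T*Delta/alpha|^{-(gamma+q/2)} in the scalar case
mgf_exact = (1.0 - t * Delta / alpha) ** (-shape)
mgf_mc = np.exp(t * W0).mean()                       # Monte Carlo estimate of E[e^{tW}]
assert abs(mgf_mc - mgf_exact) / mgf_exact < 0.02
```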
Since, in the complex case, the normalizing constant is a function of $|\Delta|^{-(\gamma+q)}$ instead of $|\Delta|^{-\left(\gamma+\frac{q}{2}\right)}$, the derivation of the moment generating function of the complex matrix-variate generalized gamma random variable $\tilde W_0$ for the case $\delta = 1$ and $\eta = 0$ is nearly identical. In the complex domain, one has
$$M_{\tilde W_0}(T) = \left|I-\frac{T\Delta}{\alpha}\right|^{-(\gamma+q)},$$
the characteristic function of $\tilde W_0$ being $\left|I-\frac{i\,T\Delta}{\alpha}\right|^{-(\gamma+q)}$.

3.5. The First Two Moments

Since, according to Note 1, the density function of $u$, the square of the $L_2$ (Frobenius) norm of $Y$, is that specified in Equation (14), it follows that the $k$-th moment of $u$, denoted by $\mu_k$, is
$$\mu_k = \alpha^{-\frac{k}{\delta}}\;\frac{\Gamma\!\left(\frac{qp}{2\delta}+\frac{p\gamma+\eta+k}{\delta}\right)}{\Gamma\!\left(\frac{qp}{2\delta}+\frac{p\gamma+\eta}{\delta}\right)}. \tag{29}$$
Let $W_1$ denote a matrix-variate random variable whose density function is given in Equation (9) with $\gamma$ set equal to zero. On substituting the Kotz density generator,
$$h_{np}(t) = \frac{\beta\,\pi^{-\frac{np}{2}}\,R^{\frac{np+2(\alpha-1)}{2\beta}}\,\Gamma\!\left(\frac{np}{2}\right)}{\Gamma\!\left(\frac{np}{2\beta}+\frac{\alpha-1}{\beta}\right)}\;t^{\,\alpha-1}\,e^{-R\,t^{\beta}}, \tag{30}$$
as specified in Section 4.4.4 of [16], in that paper's Equation (21), one obtains the density function of $W_1$ with the following notational changes: $S \to W_1$, $n \to q$, $\Sigma \to \Delta$, $\alpha \to \eta+1$, $\beta \to \delta$, and $R \to \alpha$ in $\psi_W(W)$ as given in Equation (9). Accordingly, this distribution belongs to the class of elliptical Wishart distributions, and it follows from Propositions 3 and 4 of [16] that
$$E[W_1] = \frac{\mu_1^{*}}{p}\,\Delta$$
and
$$\mathrm{var}[W_1] = \frac{\mu_2^{*}}{p\,(qp+2)}\big(I_{p^2}+K_{p,p}\big)(\Delta\otimes\Delta) + \left[\frac{q\,\mu_2^{*}}{p\,(qp+2)}-\frac{(\mu_1^{*})^{2}}{p^{2}}\right]\mathrm{vec}(\Delta)\,\big(\mathrm{vec}(\Delta)\big)'$$
where the asterisk indicates that $\gamma = 0$ in the moment expressions given in Equation (29), $\mathrm{vec}(\cdot)$ is the operator that transforms a matrix into a vector by means of column-wise concatenation, $\otimes$ denotes the Kronecker product, $K_{p,p}$ is the $p^2 \times p^2$ commutation matrix, which transforms the vectorized form of a $p \times p$ matrix into the vectorized form of its transpose, and $\mathrm{var}[X] = E\big[\mathrm{vec}(X)(\mathrm{vec}(X))'\big] - \mathrm{vec}(E[X])\big(\mathrm{vec}(E[X])\big)'$.
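The column-wise vec operator and the commutation matrix $K_{p,p}$ appearing in the variance expression can be illustrated as follows (a minimal sketch; the construction of $K_{p,p}$ from unit entries is standard):

```python
import numpy as np

def commutation_matrix(p):
    """K_{p,p}: maps vec(A) to vec(A') for any p x p matrix A."""
    K = np.zeros((p * p, p * p))
    for i in range(p):
        for j in range(p):
            # Entry A[j, i] of the transpose comes from entry A[i, j]
            K[i * p + j, j * p + i] = 1.0
    return K

p = 3
rng = np.random.default_rng(3)
A = rng.standard_normal((p, p))

vec = lambda M: M.flatten(order="F")     # column-wise stacking
K = commutation_matrix(p)
assert np.allclose(K @ vec(A), vec(A.T)) # K_{p,p} vec(A) = vec(A')
```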

3.6. The Characteristic Function of W 1

According to Equation (23) of [16], the characteristic function of $W_1$, as defined in the previous section, can be expressed as follows:
$$\Psi_{W_1}(Z) = \pi^{\frac{qp}{2}}\sum_{k=0}^{\infty}\frac{i^{k}}{k!}\;\frac{\int_0^{\infty} h_{qp}(r)\,r^{\frac{qp}{2}+k-1}\,\mathrm{d}r}{\Gamma\!\left(\frac{qp}{2}+k\right)}\sum_{|\kappa|=k}(q/2)_{\kappa}\,C_{\kappa}(Z\Delta)$$
where $Z$ is a $p \times p$ symmetric real matrix whose non-diagonal elements are weighted by $\frac{1}{2}$, $h_{qp}(\cdot)$ is as defined in Equation (30), $C_{\kappa}(\cdot)$ represents the zonal polynomial corresponding to the ordered partition $\kappa = (k_1, k_2, \ldots, k_p)$, $k_1+k_2+\cdots+k_p = k$, $k_1 \ge k_2 \ge \cdots \ge k_p$, which is indicated by $|\kappa| = k$ under the summation sign, and the partitioned shifted factorial is $(a)_{\kappa} = \prod_{j=1}^{p}\big(a-(j-1)/2\big)_{k_j}$, with the Pochhammer symbol $(b)_{k_j} = b(b+1)\cdots(b+k_j-1)$, $(b)_0 = 1$. The reader is referred, for instance, to [17] for an account of the construction of zonal polynomials and their principal properties. Thus,
$$\Psi_{W_1}(Z) = \frac{\Gamma\!\left(\frac{qp}{2}\right)}{\Gamma\!\left(\frac{qp}{2\delta}+\frac{\eta}{\delta}\right)}\sum_{k=0}^{\infty}\left[\frac{i^{k}}{k!}\;\frac{\alpha^{-k/\delta}\,\Gamma\!\left[\left(\frac{qp}{2}+\eta+k\right)/\delta\right]}{\Gamma\!\left(\frac{qp}{2}+k\right)}\sum_{|\kappa|=k}(q/2)_{\kappa}\,C_{\kappa}(Z\Delta)\right].$$

4. Illustrative Applications

The real matrix-variate generalized gamma MGG random variable—whose density function is specified in Equation (9) in the scaled case and Equation (5) in the unscaled case—defines a probability distribution on the cone of symmetric positive definite matrices. This distribution is naturally suited to the modeling of covariance (or scatter) matrices whose behavior may deviate from that implied by the classical Wishart model.
We consider two types of data sets for which the MGG distribution provides an appropriate modeling framework, and we also outline additional contexts in which its application is likely to be beneficial.

4.1. Sample Covariance Matrices of Financial Returns

Let $X_t \in \mathbb{R}^{p}$ denote $p$-dimensional intraday or daily log-return vectors for a set of $p$ assets (e.g., sector indices or large-cap stocks), observed over many non-overlapping windows of length $q$ (e.g., rolling $q$-day windows).
For each window $w$, the sample covariance matrix
$$W_w = \frac{1}{q}\sum_{t\in w}\big(X_t-\bar X_w\big)\big(X_t-\bar X_w\big)'$$
is computed. This yields a collection $\{W_w\}$ of observed $p \times p$ positive definite matrices. Empirically, the distribution of these covariance matrices is often heavier-tailed than the Gaussian Wishart model suggests. The determinant and trace may display extra variability, and the distribution of $\mathrm{tr}(\Delta^{-1}W_w)$ can exhibit heavier or lighter tails depending on market conditions.
The MGG model (with appropriate choices of γ , δ , η , Δ ) allows one to flexibly capture such features: δ 1 and η 0 modify the radial (trace) tail behavior, while γ adjusts the determinant component, giving a more realistic fit to empirical distributions of realized covariance matrices than the standard Wishart or the Wishart–Kotz ( W K ) distribution—whose density function is given in Equation (3)—due to the presence of the additional shape parameter γ .
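The construction of the collection $\{W_w\}$ can be sketched as follows, using simulated rather than real returns (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated daily log-returns for p assets over T days (hypothetical data)
p, T, q = 5, 1000, 20                      # q-day non-overlapping windows
X = rng.standard_normal((T, p)) * 0.01

covs = []
for start in range(0, T - q + 1, q):
    Xw = X[start:start + q]                # one window of q return vectors
    Xbar = Xw.mean(axis=0)
    W = (Xw - Xbar).T @ (Xw - Xbar) / q    # sample covariance matrix W_w
    covs.append(W)

# Each W_w is symmetric and (with q > p and data in general position) positive definite
for W in covs:
    assert np.allclose(W, W.T)
    assert np.all(np.linalg.eigvalsh(W) > 0)
print(len(covs), "covariance matrices of shape", covs[0].shape)
```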

4.2. Longitudinal Multivariate Biomedical Measurements

A second context where the MGG model can be useful is longitudinal multivariate biomedical measurements, such as subject-level covariance matrices of repeated blood pressure, heart rate, and respiration measurements in an intensive-care setting.
Suppose that, for each patient $i = 1, \ldots, N$, we observe $q$ repeated three-dimensional measurements
$$X_{it} = (BP_{it},\,HR_{it},\,RR_{it})',\qquad t = 1, \ldots, q,$$
within a fixed monitoring window.
For patient $i$, we form the sample covariance matrix
$$W_i = \frac{1}{q}\sum_{t=1}^{q}\big(X_{it}-\bar X_i\big)\big(X_{it}-\bar X_i\big)'.$$
The collection $\{W_i\}_{i=1}^{N}$ then consists of $3 \times 3$ symmetric positive definite matrices.
In such physiological data, there is typically substantial heterogeneity across patients, which results in heavy tails in the distribution of $\mathrm{tr}(\Delta^{-1}W_i)$ and in skewed or overdispersed behavior of $|W_i|$ compared with a classical Wishart model. The MGG distribution naturally accommodates non-Wishart determinant behavior through $\gamma$, allowing heavier or lighter tails in $|W|$ than under the standard Wishart or Wishart–Kotz distributions, as well as flexible radial (trace) tails via $(\delta, \eta)$: for instance, $\delta < 1$ yields heavier-than-exponential tails in $\mathrm{tr}(\Delta^{-1}W)$, as observed in the case of unstable measurements.
Accordingly, the proposed model is well positioned to yield a superior fit relative to submodels by capturing the determinant and trace departures arising from heterogeneous covariance matrices. In both examples—high-frequency financial covariance matrices and patient-level physiological covariance matrices—the empirical behavior of the random matrices typically departs substantially from the assumptions implicit in the Wishart–Kotz family.

4.3. Additional Applications

Beyond the examples previously discussed, there are several contexts in which modeling the distribution of covariance matrices is useful. These include multivariate Bayesian analysis, diffusion tensor imaging in neuroimaging, high-dimensional covariance estimation, multiple-input multiple-output channel modeling in wireless communication systems, random effects modeling, studies involving multivariate environmental or climatological measurements across spatial locations, and multivariate quality control and reliability studies.
In each of these settings, the covariance matrix itself is a random quantity whose distribution carries meaningful structural information, and generalized gamma-type matrix-variate models constitute a flexible and analytically tractable choice.

5. Simulation Results and Empirical Data Modeling

The simulation study conducted here reveals that the MGG distribution provides improved goodness-of-fit relative to non-nested competitors also defined on the space of symmetric positive definite matrices. This advantage is further supported by its performance on two empirical data sets, where it again emerges as the best-fitting model.
We begin by recalling a standard result—see, for instance, [18]—that clarifies why submodels are excluded from our model comparisons. Let M₁ and M₂ denote two statistical models with parameter spaces Θ₁ ⊆ ℝ^{d₁} and Θ₂ ⊆ ℝ^{d₂}, where d₂ ≥ d₁, and assume that M₁ is a submodel of M₂ in the sense that Θ₁ ⊆ Θ₂ and that both models yield identical likelihood values for all θ ∈ Θ₁. Letting ℓ₁ and ℓ₂ denote the corresponding log-likelihoods, and θ̂₁ and θ̂₂ their respective maximum likelihood estimators, the result states that the maximized log-likelihood under the larger model cannot be smaller than that under the submodel, that is, ℓ₂(θ̂₂) ≥ ℓ₁(θ̂₁).
Now, noting that the Wishart–Kotz (WK) model is the submodel of the MGG model obtained when γ = 0, that the Wishart (W) distribution is the submodel of the Wishart–Kotz model obtained when α = 1/2, δ = 1 and η = 0, and that the matrix-variate gamma distribution as defined in [19] is the particular case of the MGG distribution in which α = δ = 1 and η = 0, and assuming that the regularity conditions required for valid maximum likelihood estimation hold, for any given sample the log-likelihood functions will, for example, obey the following inequalities:
ℓ_MGG(θ̂₂) ≥ ℓ_WK(θ̂₁) ≥ ℓ_W,
where θ̂₂ ≡ (α̂₂, δ̂₂, η̂₂, γ̂₂) and θ̂₁ ≡ (α̂₁, δ̂₁, η̂₁).
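The nesting inequality can be illustrated in a simpler univariate setting. In scipy, `gengamma(a, c)` reduces to the gamma distribution when its second shape parameter c equals 1, so evaluating the larger model at the fitted submodel parameters reproduces the submodel's maximized log-likelihood exactly, which the larger model's MLE can only match or exceed. This is purely an illustration of the principle; the paper's comparison concerns matrix-variate models:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = stats.gamma.rvs(a=2.0, scale=1.5, size=500, random_state=rng)

# fit the submodel (gamma) by maximum likelihood, with location fixed at 0
a_hat, loc_hat, scale_hat = stats.gamma.fit(x, floc=0)
ll_gamma = stats.gamma.logpdf(x, a_hat, loc=loc_hat, scale=scale_hat).sum()

# evaluate the larger model (generalized gamma) at the nested point c = 1:
# gengamma(a, c=1) coincides with gamma(a), so the log-likelihoods agree,
# and the generalized-gamma MLE can therefore only do at least as well
ll_gg_at_gamma = stats.gengamma.logpdf(x, a_hat, 1.0, loc=loc_hat, scale=scale_hat).sum()

assert abs(ll_gg_at_gamma - ll_gamma) < 1e-8
```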

5.1. Simulation Experiment

A simulation study was conducted. Several candidate distributions—including the Wishart, Wishart–Kotz, and matrix-variate gamma distributions—were excluded because they arise as submodels of the matrix-variate generalized gamma (MGG) distribution and consequently cannot provide an independent basis for assessing model accuracy or comparative superiority. This leaves only a small number of suitable distributions defined on the cone of positive definite matrices, notably the inverse Wishart (IW), the matrix-variate Beta type II (MB II), and the matrix-variate generalized inverse Gaussian (MGIG) distributions. We note that the matrix-variate F distribution is merely a reparameterization of the Beta type II distribution, while the Beta type I distribution is inappropriate in this context because its density is defined only for positive definite matrices W satisfying I − W > O.
Below we list the density functions of the models to be compared with the MGG distribution.
  • Inverse Wishart distribution:
    For S ∈ S_p^{++}, the set of p × p positive definite matrices, and ν > p − 1, the density function is
    f_IW(S) = [2^{νp/2} Γ_p(ν/2)]^{−1} |S|^{−(ν+p+1)/2} exp{−½ tr(S^{−1})},
    where Γ_p(·) denotes the multivariate gamma function.
  • Matrix-variate generalized inverse Gaussian distribution:
    For S ∈ S_p^{++} and λ > 0,
    f_MGIG(S) = c_p(λ)^{−1} |S|^{λ−(p+1)/2} exp{−½ tr(S + S^{−1})},
    where c_p(λ) = K_λ^{(p)}(I_p, I_p) and K_λ^{(p)}(·,·) denotes the matrix-argument modified Bessel function of the second kind.
  • Matrix-variate Beta type II distribution:
    For S ∈ S_p^{++} and parameters a, b > (p − 1)/2,
    f_MBII(S) = B_p(a, b)^{−1} |S|^{a−(p+1)/2} |I_p + S|^{−(a+b)},
    where B_p(·,·) denotes the multivariate beta function.
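As a quick numerical sanity check on the inverse Wishart density listed above, the expression can be compared against scipy's `invwishart`, which uses the same parameterization with a scale matrix Ψ; the density above corresponds to Ψ = I_p. (The use of scipy here is illustrative and not part of the original study.)

```python
import numpy as np
from scipy.stats import invwishart
from scipy.special import multigammaln  # log of the multivariate gamma function

p, nu = 3, 6.0
rng = np.random.default_rng(2)
S = invwishart.rvs(df=nu, scale=np.eye(p), random_state=rng)

# log of the density above:
#   -(nu*p/2) log 2 - log Gamma_p(nu/2) - ((nu+p+1)/2) log|S| - tr(S^{-1})/2
sign, logdet = np.linalg.slogdet(S)
log_f = (-(nu * p / 2) * np.log(2.0) - multigammaln(nu / 2, p)
         - (nu + p + 1) / 2 * logdet - 0.5 * np.trace(np.linalg.inv(S)))

assert np.isclose(log_f, invwishart.logpdf(S, df=nu, scale=np.eye(p)))
```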
The adequacy of each of the four models can be assessed by evaluating the maximized log-likelihood ℓ(θ̂), the AIC statistic −2ℓ(θ̂) + 2k, or the BIC statistic −2ℓ(θ̂) + k log N, where θ̂ denotes the vector of MLEs for a given model, and k and N respectively denote the number of model parameters and the sample size. By accounting for model complexity, the BIC statistic enables comparisons that remain meaningful across competing models. Accordingly, we report only the per-observation BIC values, that is, the BIC statistics divided by the sample size N, since the remaining criteria can readily be recovered from them, if required.
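A minimal sketch of the per-observation BIC computation; the model names and log-likelihood values below are hypothetical:

```python
import numpy as np

def bic_per_obs(loglik, k, N):
    """Per-observation BIC: (-2*loglik + k*log(N)) / N."""
    return (-2.0 * loglik + k * np.log(N)) / N

# hypothetical maximized log-likelihoods and parameter counts, same N = 500 sample
candidates = {"model_A": (-1500.0, 4), "model_B": (-1495.0, 7)}
scores = {m: bic_per_obs(ll, k, 500) for m, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)   # the smaller BIC indicates the better fit
```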
A collection of N matrices A_i of dimension p × q was initially generated, with all entries drawn independently from the standard normal distribution. (All computations were carried out using a common random seed.) For each A_i, the corresponding p × p matrix
W_i = A_i A_i′,  i = 1, …, N,
was formed and utilized in its standardized form to compute the maximum likelihood estimates of the parameters under the MGG, MB II, MGIG, and IW models. The log-likelihood function of each of the four models was evaluated at the corresponding MLEs, and the per-observation BIC values were computed for several combinations of p and q, with N = 500 and 1000. These values are reported in Table 1.
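The data-generation step can be sketched as follows. The paper does not spell out the standardization; a common choice, assumed in this sketch, is W̄^{−1/2} W_i W̄^{−1/2} with W̄ the sample mean of the W_i:

```python
import numpy as np

rng = np.random.default_rng(3)   # a common seed, as in the study
p, q, N = 3, 5, 500

# W_i = A_i A_i' with A_i having iid standard normal entries
A = rng.standard_normal((N, p, q))
W = A @ np.transpose(A, (0, 2, 1))

# assumed standardization: Wbar^{-1/2} W_i Wbar^{-1/2}, via the spectral
# decomposition of the (symmetric positive definite) sample mean Wbar
vals, vecs = np.linalg.eigh(W.mean(axis=0))
Wbar_inv_sqrt = (vecs / np.sqrt(vals)) @ vecs.T
W_std = Wbar_inv_sqrt @ W @ Wbar_inv_sqrt

# under this standardization the matrices average to the identity
assert np.allclose(W_std.mean(axis=0), np.eye(p), atol=1e-8)
```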
For every (p, q, N) configuration considered, the MGG model attains a smaller BIC value than that of each of the three competing models, indicating that it provides the best overall fit among the models examined.
The simulations were then repeated 1000 times for a moderate sample size of N = 50 . Only the two most relevant competing models—the MGG and MB II distributions—were retained for comparison, since the remaining two candidates exhibit mean BIC values so distant from those of the proposed model that overlap is virtually impossible.
For the four combinations of p and q considered, we determined the proportion of simulation replications in which the BIC value obtained under the MGG model was smaller than that obtained under the MB II model. The proportions so obtained are as follows:
p = 3, q = 5: 97.5%;   p = 3, q = 9: 87.6%;   p = 5, q = 10: 100%;   p = 5, q = 15: 100%.
Taken together, these outcomes underscore the modeling effectiveness of the MGG distribution and leave little room for doubt regarding its comparative performance.

5.2. Modeling Empirical Covariance Matrices

Two data sets are analyzed in this study. The first consists of a sequence of 25 empirical 3 × 3 covariance matrices constructed from 100-day rolling-window log-returns of the S&P 500, FTSE 100, and Nikkei 225 equity indices, using daily observations available from January 2018 onward. The second data set is the classical Iris data, from which a collection of 20 empirical covariance matrices was obtained.

5.2.1. Rolling Covariance Matrices from Global Equity Returns

The dataset consists of a sequence of empirical 3 × 3 covariance matrices constructed from real-world financial data. The underlying observations are daily log-returns of three major equity indices: the S&P 500 (US), FTSE 100 (UK), and Nikkei 225 (Japan). Adjusted daily closing prices for all indices were obtained from the permanent public archive of Yahoo Finance: https://finance.yahoo.com. For each index k, the daily log-return is defined as R_t^{(k)} = log P_t^{(k)} − log P_{t−1}^{(k)}, where P_t^{(k)} denotes the adjusted closing price on day t. Using these returns, a sequence of empirical covariance matrices is estimated via a rolling window of N = 100 trading days. For window index m, the empirical covariance matrix is
Σ̂_m = (1/(N − 1)) Σ_{t=m}^{m+N−1} (R_t − R̄_m)(R_t − R̄_m)′,
where R_t = (R_t^{(1)}, R_t^{(2)}, R_t^{(3)})′ and R̄_m = (1/N) Σ_{t=m}^{m+N−1} R_t.
The raw price series starts on 1 January 2018 and spans roughly 500 trading days. From this dataset, M = 25 rolling windows are constructed, indexed by m = 1, …, 25. Window m uses days t = m, …, m + 99, so all 25 covariance matrices describe the same joint return process over sliding 100-day windows during early 2018. The resulting dataset is the collection Σ̂_1, Σ̂_2, …, Σ̂_25 of symmetric positive definite 3 × 3 matrices. The standardized covariance matrices, that is, Σ̄̂^{−1/2} Σ̂_i Σ̄̂^{−1/2}, i = 1, …, 25, where Σ̄̂ denotes the average of the Σ̂_i's, are utilized in the calculations.
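The rolling-window construction can be sketched as follows; the simulated return series here is a stand-in for the actual Yahoo Finance log-returns:

```python
import numpy as np

rng = np.random.default_rng(4)

# stand-in for the three daily log-return series (the study uses real index data)
T, window, M = 500, 100, 25
R = rng.multivariate_normal(np.zeros(3), 1e-4 * np.eye(3), size=T)

def rolling_covs(R, window, M):
    """Sigma_hat_m over windows t = m, ..., m + window - 1 (unbiased, divisor N - 1)."""
    return np.stack([np.cov(R[m:m + window].T, ddof=1) for m in range(M)])

Sigma = rolling_covs(R, window, M)   # M symmetric positive definite 3x3 matrices
```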
The per-observation BIC values are reported in Table 2 for the four models under consideration. Once again, the MGG distribution provides the best fit.

5.2.2. Covariance Matrices Associated with the Iris Data Set

The covariance matrices analyzed in this study are derived from the classical Iris data set, which is hosted by the UCI Machine Learning Repository. This data set is freely available from https://archive.ics.uci.edu/ml/datasets/iris (accessed on 20 January 2026).
For the purposes of this study, the first 140 of the 150 available observations on three morphological measurements—sepal width (cm), petal length (cm), and petal width (cm)—were retained. These 140 observation vectors were partitioned into 20 consecutive blocks of dimension 3 × 7 . For each block, a 3 × 3 sample covariance matrix was computed using the usual unbiased estimator. The resulting collection of N = 20 standardized empirical covariance matrices constitutes the input set { W i } i = 1 20 for the matrix–variate modeling.
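The block construction can be sketched as follows; the array X is simulated stand-in data of the same shape, whereas the study uses the actual UCI Iris measurements:

```python
import numpy as np

rng = np.random.default_rng(5)
# stand-in for the 140 retained rows (sepal width, petal length, petal width, in cm);
# the real values come from the UCI repository file cited above
X = rng.normal(loc=[3.0, 3.8, 1.2], scale=[0.4, 1.8, 0.8], size=(140, 3))

# 20 consecutive blocks of 7 observations; unbiased (ddof=1) covariance per block
blocks = X.reshape(20, 7, 3)
W = np.stack([np.cov(b.T, ddof=1) for b in blocks])

assert W.shape == (20, 3, 3)
```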
The competing models considered are MGG , MB II , MGIG , and IW . Table 3 indicates that, yet again, the MGG model yields the best fit.

6. Conclusions

The derivation of the matrix-variate generalized gamma ( MGG ) density and its normalizing constant, in both real and complex settings, relies on appropriate Jacobians of matrix transformations and introduces an additional parameter that enhances the model’s flexibility. Equations (3)–(10) arise naturally within the transformation sequence
X → U = XX′ → U^γ → W = Δ U^γ,
corresponding to the Kotz-type model on X, the Wishart–Kotz distribution on U, the generalized gamma distribution on U^γ, and its scaled version on W, respectively. U^γ may therefore be regarded as a generalized Wishart–Kotz matrix random variable.
The shape parameter γ modifies the exponent of the determinant term | U | and thereby governs the behavior of the density near the boundary of the cone of positive definite matrices. Larger values of γ shift probability mass away from nearly singular matrices, whereas smaller values allow heavier tails. Setting γ = 0 recovers the Wishart–Kotz distribution, while γ > 0 yields a richer family that parallels the extension from the gamma distribution to the univariate generalized gamma. This additional flexibility is particularly valuable for modeling high-dimensional or non-Gaussian matrix-valued data, where classical Wishart assumptions may prove inadequate. Importantly, the introduction of γ preserves analytical tractability, as it enters the transformation chain through a simple modification of the determinant exponent.
We derived explicit forms for a range of statistical functions of the MGG distribution and its submodels, notably the characteristic functions as well as the density functions and moments of the traces and determinants. Contexts in which the model is especially useful—for example, in applications involving random covariance matrices that deviate from Wishart-type behavior, such as those arising in high-frequency financial covariance estimation or longitudinal multivariate biomedical measurements—are pointed out.
A simulation study demonstrates that the MGG distribution provides a more accurate fit than other distributions on symmetric matrices that are not its submodels. It also exhibits superior performance when applied to empirical data.
All computations were performed using the symbolic software package Mathematica (Version 14.3); the corresponding code is available from the second author upon request.
The matrix-variate generalized gamma distribution developed in this paper enriches the toolkit for modeling the randomness inherent in multivariate phenomena, and further applications beyond those already identified are likely to emerge.

Author Contributions

Conceptualization, A.M.M. and S.B.P.; methodology, A.M.M. and S.B.P.; software, S.B.P.; validation, A.M.M. and S.B.P.; formal analysis, A.M.M. and S.B.P.; investigation, A.M.M. and S.B.P.; resources, A.M.M. and S.B.P.; writing—original draft preparation, A.M.M. and S.B.P.; writing—review and editing, A.M.M. and S.B.P.; project administration, A.M.M. and S.B.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets can be readily retrieved from the sources specified in the paper.

Acknowledgments

We wish to express our sincere appreciation to the Guest Editor as well as three referees for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The notation adopted throughout this paper is as follows. Scalar variables, whether mathematical or random, are denoted by lowercase letters; vector and matrix variables are denoted by capital letters; and variables in the complex domain are indicated by a tilde. All the matrices appearing in this paper are p × p real positive definite or Hermitian positive definite unless stated otherwise. X > O means that the p × p real symmetric matrix X is positive definite, and X̃ > O means that the p × p matrix X̃ in the complex domain is Hermitian, that is, X̃ = X̃*, where X̃* denotes the conjugate transpose of X̃, and that X̃ is positive definite. O < A < X < B indicates that the p × p real positive definite matrices are such that A > O, B > O, X > O, X − A > O, and B − X > O. ∫_X f(X) dX represents a real-valued scalar function f(X) being integrated over the domain of X, where dX stands for the wedge product of the differentials of all distinct elements of X.
If X = (x_ij) is a real p × q matrix, the x_ij's being distinct real scalar variables, then dX = dx_11 ∧ dx_12 ∧ ⋯ ∧ dx_pq, or dX = ∧_{i=1}^{p} ∧_{j=1}^{q} dx_ij. If X = X′, that is, X is a real symmetric matrix of dimension p × p, then dX = ∧_{i≤j} dx_ij = ∧_{i≥j} dx_ij, which involves only p(p + 1)/2 differential elements dx_ij. When taking the wedge product, the elements x_ij may be taken in any convenient order to start with.
If X̃ = X₁ + i X₂, where X₁ and X₂ are real p × q matrices and i = √(−1), then dX̃ is defined as dX̃ = dX₁ ∧ dX₂. ∫_{A<X̃<B} f(X̃) dX̃ represents the real-valued scalar function f of the complex matrix argument X̃ being integrated over all p × p matrices X̃ such that A > O, X̃ > O, B > O, X̃ − A > O, B − X̃ > O (all Hermitian positive definite), where A and B are constant matrices in the sense that they are free of the elements of X̃. The corresponding integral in the real case is denoted by ∫_{A<X<B} f(X) dX = ∫_A^B f(X) dX, with A > O, X > O, X − A > O, B > O, B − X > O, where A and B are constant matrices, all the matrices being of dimension p × p.

References

  1. Díaz-García, J.A.; Gutiérrez, R.J. On Wishart distribution. arXiv 2010, arXiv:1010.1799.
  2. Kotz, S. Multivariate distributions at a cross road. In Statistical Distributions in Scientific Work; Patil, G.P., Kotz, S., Ord, J.K., Eds.; Springer: Dordrecht, The Netherlands, 1975; Volume 1, pp. 247–270.
  3. Nadarajah, S. The Kotz-type distribution with applications. Statistics 2003, 37, 341–358.
  4. Helu, A.; Naik, N. Estimation of interclass correlation via a Kotz-type distribution. Comput. Stat. Data Anal. 2006, 51, 1523–1534.
  5. Nadarajah, S. The Kotz-type ratio distribution. Statistics 2012, 46, 167–174.
  6. Sarr, A. On a Kotz-Wishart distribution: Multivariate Varma transform. arXiv 2014, arXiv:1404.4441.
  7. Deng, X.; López-Martínez, C.; Chen, J.; Han, P. Statistical Modeling of Polarimetric SAR Data: A Survey and Challenges. Remote Sens. 2017, 9, 348.
  8. Ayadi, I.; Bouchard, F.; Pascal, F. Elliptical Wishart distribution: Maximum likelihood estimator from information geometry. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023.
  9. Nikolova, A.; Ignatov, T.; Dyakov, B. Random correlation matrices and one decomposition of the Wishart distribution. Int. J. Appl. Math. 2024, 37, 247–264.
  10. Kersten, P.R.; Anfinsen, S.N.; Doulgeris, A.P. The Wishart-Kotz classifier for multilook polarimetric SAR data. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012.
  11. Gupta, A.K.; Varga, T. Elliptically Contoured Models in Statistics; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1993.
  12. Díaz-García, J.A.; Gutiérrez, R.J. Compound and scale mixture of matricvariate and matrix variate Kotz-type distributions. J. Korean Stat. Soc. 2010, 39, 75–82.
  13. Mathai, A.M. Jacobians of Matrix Transformations and Functions of Matrix Argument; World Scientific Publishing: New York, NY, USA, 1997.
  14. Gupta, A.K.; Nagar, D.K. Matrix Variate Distributions; Chapman & Hall/CRC: Boca Raton, FL, USA, 2000.
  15. Coelho, C.A.; Arnold, B.C. Finite Form Representations for Meijer G and Fox H Functions; Springer: Cham, Switzerland, 2019.
  16. Ayadi, I.; Bouchard, F.; Pascal, F. On elliptical and inverse elliptical Wishart distributions: Review, new results, and applications. arXiv 2024, arXiv:2404.17468v1.
  17. Mathai, A.M.; Provost, S.B.; Hayakawa, T. Bilinear Forms and Zonal Polynomials; Springer: New York, NY, USA, 1995.
  18. Bickel, P.J.; Doksum, K.A. Mathematical Statistics: Basic Ideas and Selected Topics, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2015; Volume I.
  19. Mathai, A.M.; Provost, S.B.; Haubold, H.J. Multivariate Statistical Analysis in the Real and Complex Domains; Springer: Cham, Switzerland, 2022.
Table 1. Per-observation BIC values for the four models—simulated data.

p   q    N      MGG      MB II     MGIG      IW
3   5    500    6.6420   7.06189   14.7248   12.7122
3   5    1000   6.6386   7.0604    14.4412   12.3115
3   9    500    4.5487   4.7700    7.1698    10.4030
3   9    1000   4.5200   4.7566    7.1538    10.3984
5   10   500    7.2363   8.1533    11.4637   22.2131
5   10   1000   7.2210   8.0797    11.3972   22.1384
5   15   500    2.4675   3.0490    9.4454    26.1350
5   15   1000   2.5754   3.1459    9.4182    26.0022
Table 2. Per-observation BIC values for the four models—equity index data.

MGG        MB II      MGIG      IW
−10.9495   −9.2671    10.0741   13.442
Table 3. BIC values for the four models—Iris data.

MGG        MB II      MGIG      IW
−218.991   −185.343   201.483   268.839