Article

Multivariate and Matrix-Variate Logistic Models in the Real and Complex Domains

Department of Mathematics and Statistics, McGill University, Montreal, QC H3A 0G4, Canada
Stats 2024, 7(2), 445-461; https://doi.org/10.3390/stats7020027
Submission received: 16 March 2024 / Revised: 21 April 2024 / Accepted: 1 May 2024 / Published: 11 May 2024

Abstract

Several extensions of the basic scalar-variable logistic density to the multivariate and matrix-variate cases, in the real and complex domains, are given, where the extended forms lead to extended zeta functions. Several cases of multivariate and matrix-variate Bayesian procedures, in the real and complex domains, are also given. It is pointed out that Gaussian- and Wishart-based matrix-variate distributions in the complex domain have a wide range of applications in multi-look data from radar and sonar. It is hoped that the distributions derived in this paper will be highly useful in such applications in physics, engineering, statistics, and communication problems because, in the real scalar case, a logistic model is seen to be more appropriate than a Gaussian model in many industrial applications. Hence, logistic-based multivariate and matrix-variate distributions, especially in the complex domain, are expected to perform better where Gaussian- and Wishart-based distributions are currently used.

1. Introduction

We will introduce an extended logistic model through the exponentiation of a real scalar type-2 beta density. For $\alpha > 0$, $\beta > 0$ and for the real scalar variable $t$, consider the density
$$f(t) = c\,t^{\alpha-1}(1+t)^{-(\alpha+\beta)},\qquad c = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)},\qquad 0 \le t < \infty,$$
and $f(t) = 0$ elsewhere. Usually, the parameters in a statistical density are real, and hence we take real parameters throughout the paper. The above function $f(t)$ is integrable even if the parameters are in the complex domain; in this case, the conditions will be $\Re(\alpha) > 0$, $\Re(\beta) > 0$, where $\Re(\cdot)$ denotes the real part of $(\cdot)$. Consider an exponentiation; namely, let $t = e^{-x}$, so that $0 \le t < \infty \Rightarrow -\infty < x < \infty$, and
$$f(t)\,dt = c\,t^{\alpha-1}(1+t)^{-(\alpha+\beta)}\,dt = c\,\frac{e^{-\alpha x}}{(1+e^{-x})^{\alpha+\beta}}\,dx,$$
and then the density of x, denoted by h ( x ) , is the following:
$$h(x) = c\,\frac{e^{-\alpha x}}{(1+e^{-x})^{\alpha+\beta}},\qquad \alpha > 0,\ \beta > 0,\ -\infty < x < \infty. \qquad (2)$$
Note that
$$\frac{e^{-\alpha x}}{(1+e^{-x})^{\alpha+\beta}} = \frac{e^{\beta x}}{(1+e^{x})^{\alpha+\beta}},\qquad -\infty < x < \infty,$$
and when $\alpha = \beta = 1$, $h(x)$ reduces to
$$\frac{e^{-x}}{(1+e^{-x})^{2}} = \frac{e^{x}}{(1+e^{x})^{2}},\qquad -\infty < x < \infty,$$
which is the standard logistic density. There is a vast amount of literature on this standard logistic model. The importance of this model arises from the fact that it has thicker tails than the standard Gaussian density, so large deviations carry higher probabilities than under the standard Gaussian.
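As a quick numerical sanity check of this derivation (an illustrative sketch, not part of the paper), one can sample from the type-2 beta (beta-prime) density above with $\alpha = \beta = 1$ and confirm that $x = -\ln t$ follows the standard logistic law; SciPy's betaprime and logistic distributions are used here.

```python
# Illustrative check: if t has the type-2 beta (beta-prime) density with
# alpha = beta = 1, then x = -ln(t) should follow the standard logistic
# density e^{-x} / (1 + e^{-x})^2 derived above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = stats.betaprime.rvs(1, 1, size=200_000, random_state=rng)
x = -np.log(t)

# Kolmogorov-Smirnov test against the standard logistic distribution;
# a large p-value is consistent with the transformation argument.
ks = stats.kstest(x, stats.logistic.cdf)
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")
```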
The following notation will be used in this paper. Real scalar ($1\times1$ matrix) variables will be denoted by lowercase letters such as $x, y$, whether the variables are random or mathematical variables. Capital letters such as $X, Y$ will be used to denote vector ($p\times1$ or $1\times p$, $p>1$) or matrix variables, whether mathematical or random, and whether square or rectangular. Scalar constants will be denoted by $a, b$, etc., and vector/matrix constants by $A, B$, etc. A tilde will be placed over a variable in the complex domain, whether mathematical or random, such as $\tilde{x}, \tilde{y}, \tilde{X}, \tilde{Y}$; no tilde will be used on constants. This general rule will not be followed when symbols or Greek letters are used; these will be explained as needed. For an $m\times n$ matrix $X = (x_{jk})$, where the $x_{jk}$'s are real scalar variables, the wedge product of differentials will be denoted as $dX = \wedge_{j,k}\,dx_{jk} = dx_{11}\wedge\cdots\wedge dx_{mn}$. For two real scalar variables $x$ and $y$, the wedge product of differentials is defined as $dx\wedge dy = -dy\wedge dx$, so that $dx\wedge dx = 0$ and $dy\wedge dy = 0$. When the matrix variable $\tilde{Y}$ is in the complex domain, one can write $\tilde{Y} = Y_1 + iY_2$, where $i = \sqrt{-1}$ and $Y_1, Y_2$ are real $m\times n$ matrices; then $d\tilde{Y}$ is defined as $d\tilde{Y} = dY_1\wedge dY_2$. The transpose of a matrix $Y$ will be denoted by $Y'$, and the conjugate transpose by $\tilde{Y}^{*}$. When $\tilde{Y}$ is $m\times m$ and $\tilde{Y} = \tilde{Y}^{*}$, then $\tilde{Y}$ is Hermitian; in this case, $\tilde{Y} = Y_1 + iY_2 = \tilde{Y}^{*}$ implies $Y_1' = Y_1$ and $Y_2' = -Y_2$, that is, $Y_1$ is real symmetric and $Y_2$ is real skew-symmetric. The determinant of a square matrix $A$ will be denoted as $|A|$ or $\det(A)$, and the absolute value of the determinant of $A$, when $A$ is in the complex domain, as $|\det(A)| = \sqrt{\det(AA^{*})} = \sqrt{a_1^2+a_2^2}$ for $|A| = a_1 + ia_2$, $i = \sqrt{-1}$, where $a_1, a_2$ are real scalar quantities. When a real symmetric matrix $B$ is positive definite, this will be denoted as $B > O$. If the $m\times m$ matrix $\tilde{X}$ satisfies $\tilde{X} = \tilde{X}^{*} > O$, then $\tilde{X}$ is Hermitian positive definite. Other notations will be explained wherever they occur.
Our aim here is to look into various extensions of the generalized logistic density in (2). First, let us consider a multivariate extension. Consider the $m\times1$ real vector $X$, $X' = [x_1,\ldots,x_m]$, where the $x_j$, $j = 1,\ldots,m$, are functionally independent (distinct) real scalar variables, so that $X'X = x_1^2+\cdots+x_m^2$. If $\tilde{X}$ is in the complex domain, then $\tilde{X}^{*}\tilde{X} = |\tilde{x}_1|^2+\cdots+|\tilde{x}_m|^2 = (x_{11}^2+x_{12}^2)+\cdots+(x_{m1}^2+x_{m2}^2)$, where $\tilde{x}_j = x_{j1}+ix_{j2}$, $i = \sqrt{-1}$, and $x_{j1}, x_{j2}$, $j = 1,\ldots,m$, are real. Then,
$$e^{-X'X} = e^{-(x_1^2+\cdots+x_m^2)} = e^{-x_1^2}\ \text{for}\ m=1;\qquad e^{-\tilde{X}^{*}\tilde{X}} = e^{-[(x_{11}^2+x_{12}^2)+\cdots+(x_{m1}^2+x_{m2}^2)]} = e^{-(x_{11}^2+x_{12}^2)}\ \text{for}\ m=1.$$
Extensions of the logistic model will be considered by taking the argument X as a vector or a matrix in the real and complex domains. In order to derive such models, we will require Jacobians of matrix transformations in three situations, and these will be listed here as lemmas without proofs. For the proofs and other details, see Ref. [1].
Lemma 1. 
Let the $m\times n$, $m\le n$, matrix $X$ of rank $m$ be in the real domain with $mn$ distinct elements $x_{ij}$. Let the $m\times m$ matrix $S = XX'$, which is real positive definite. Then, going through a transformation involving a lower triangular matrix with positive diagonal elements and a semi-orthonormal matrix, and after integrating out the differential element corresponding to the semi-orthonormal matrix, we will have the following connection between $dX$ and $dS$; see the details in Ref. [1]:
$$dX = \frac{\pi^{\frac{mn}{2}}}{\Gamma_m(\frac{n}{2})}\,|S|^{\frac{n}{2}-\frac{m+1}{2}}\,dS,$$
where, for example, Γ m ( α ) is the real matrix-variate gamma function given by
$$\Gamma_m(\alpha) = \pi^{\frac{m(m-1)}{4}}\,\Gamma(\alpha)\,\Gamma\big(\alpha-\tfrac{1}{2}\big)\cdots\Gamma\big(\alpha-\tfrac{m-1}{2}\big),\quad \Re(\alpha) > \tfrac{m-1}{2},$$
$$\phantom{\Gamma_m(\alpha)} = \int_{X>O}|X|^{\alpha-\frac{m+1}{2}}\,e^{-\mathrm{tr}(X)}\,dX,\quad \Re(\alpha) > \tfrac{m-1}{2},$$
where $\mathrm{tr}(\cdot)$ means the trace of the square matrix $(\cdot)$. We call $\Gamma_m(\alpha)$ the real matrix-variate gamma because it is associated with the real matrix-variate gamma integral; it is also known by different names in the literature. Similarly, $\tilde{\Gamma}_m(\alpha)$ is called the complex matrix-variate gamma because it is associated with the complex matrix-variate gamma integral. When the $m\times n$, $m\le n$, matrix $\tilde{X}$ of rank $m$, with distinct elements, is in the complex domain, and letting $\tilde{S} = \tilde{X}\tilde{X}^{*}$, which is $m\times m$ and Hermitian positive definite, then, going through a transformation involving a lower triangular matrix with real and positive diagonal elements and a semi-unitary matrix, and then integrating out the differential element corresponding to the semi-unitary matrix, we can establish the following connection between $d\tilde{X}$ and $d\tilde{S}$:
$$d\tilde{X} = \frac{\pi^{mn}}{\tilde{\Gamma}_m(n)}\,|\det(\tilde{S})|^{\,n-m}\,d\tilde{S},$$
where, for example, Γ ˜ m ( α ) is the complex matrix-variate gamma function given by
$$\tilde{\Gamma}_m(\alpha) = \pi^{\frac{m(m-1)}{2}}\,\Gamma(\alpha)\,\Gamma(\alpha-1)\cdots\Gamma(\alpha-m+1),\quad \Re(\alpha) > m-1,$$
$$\phantom{\tilde{\Gamma}_m(\alpha)} = \int_{\tilde{X}>O}|\det(\tilde{X})|^{\alpha-m}\,e^{-\mathrm{tr}(\tilde{X})}\,d\tilde{X},\quad \Re(\alpha) > m-1.$$
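As an aside (an illustrative check, not part of the paper), the product formula for $\Gamma_m(\alpha)$ can be verified against scipy.special.multigammaln, which implements the logarithm of the same multivariate gamma function:

```python
# The real matrix-variate gamma of Lemma 1 versus SciPy's implementation.
import numpy as np
from scipy.special import gammaln, multigammaln

def log_gamma_m(alpha: float, m: int) -> float:
    # log Gamma_m(alpha) = (m(m-1)/4) log(pi) + sum_{j=0}^{m-1} log Gamma(alpha - j/2)
    return (m * (m - 1) / 4) * np.log(np.pi) + sum(
        gammaln(alpha - j / 2) for j in range(m))

alpha, m = 4.7, 3                 # requires alpha > (m-1)/2
print(log_gamma_m(alpha, m))      # explicit product formula
print(multigammaln(alpha, m))     # SciPy's value -- should match
```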
This paper is organized as follows. An extended zeta function is introduced in Section 2 and then several extended logistic models are discussed, which will all result in the extended zeta function. Section 3 deals with some Bayesian-type models, which will also result in extended zeta functions. Section 4 provides some concluding remarks.

2. Several Matrix-Variate Extensions of the Logistic Model

Since the logistic model in (2) is versatile, covering both symmetric ($\alpha = \beta$) and asymmetric situations, we will consider extensions of the model in (2). When we wish to study properties of a given model or density, the procedure usually follows the pattern of the evaluation of the normalizing constant. Hence, the most important aspect in the construction of a model is the evaluation of its normalizing constant, and our aim will therefore be the evaluation of the normalizing constant in each extended logistic model. Moreover, it will be seen that when integrals are evaluated for the proposed extended logistic models, the results lead to extended forms of generalized zeta functions introduced by the author. The basic zeta function and the generalized (Hurwitz) zeta function of order $\rho$, available in the literature, are the following convergent series, denoted by $\zeta(\rho)$ and $\zeta(\rho,\alpha)$, respectively:
$$\zeta(\rho) = \sum_{k=1}^{\infty}\frac{1}{k^{\rho}},\qquad \zeta(\rho,\alpha) = \sum_{k=0}^{\infty}\frac{1}{(\alpha+k)^{\rho}},\qquad \rho > 1,\ \alpha \ne 0,-1,-2,\ldots$$
If $\alpha$ is a positive integer $n$, then the generalized zeta function can be written in terms of the ordinary zeta function: $\zeta(\rho,n) = \zeta(\rho) - \sum_{k=1}^{n-1}k^{-\rho}$.
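Both series are available in standard libraries; for instance (an illustrative aside, not part of the paper), mpmath's zeta implements the Hurwitz form $\zeta(\rho,\alpha)$ directly:

```python
# The basic and generalized (Hurwitz) zeta functions via mpmath, plus a
# crude partial-sum check of the defining series.
import mpmath as mp

rho, alpha = 3.0, 2.5
print(mp.zeta(rho))           # zeta(rho)        = sum_{k>=1} k^{-rho}
print(mp.zeta(rho, alpha))    # zeta(rho, alpha) = sum_{k>=0} (alpha+k)^{-rho}
print(sum((alpha + k) ** -rho for k in range(100_000)))  # partial sum

# For a positive integer alpha = n: zeta(rho, n) = zeta(rho) - sum_{k<n} k^{-rho}
print(mp.zeta(rho, 3), mp.zeta(rho) - 1 - 2 ** -rho)
```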

2.1. Extended Zeta Function

Let $\alpha_j \ne 0,-1,-2,\ldots$, $j = 1,\ldots,r$, and $b_j \ne 0,-1,-2,\ldots$, $j = 1,\ldots,q$. The following convergent series is defined as the extended zeta function:
$$\zeta_{p,q}^{\,r}(x) = \zeta[\{(m_1,\alpha_1),\ldots,(m_r,\alpha_r)\} : a_1,\ldots,a_p;\ b_1,\ldots,b_q;\ x] = \sum_{k=0}^{\infty}\Big[\prod_{j=1}^{r}\frac{1}{(\alpha_j+k)^{m_j}}\Big]\,\frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\,\frac{x^k}{k!},$$
where, for example, $(a)_k = a(a+1)\cdots(a+k-1)$, $a \ne 0$, $(a)_0 = 1$, is the Pochhammer symbol. The series converges when $\sum_{j=1}^{r} m_j > 1$, with $p \le q$, or with $p = q+1$ and $|x| \le 1$ (at $|x| = 1$, convergence is ensured by $\sum_j m_j > 1$).
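Since every normalizing constant below is expressed through this series, a direct truncation is often the simplest way to evaluate it numerically. The following is a minimal sketch (the function name and truncation level are our own, not from the paper):

```python
# Truncated evaluation of the extended zeta function
#   zeta[{(m_1,alpha_1),...,(m_r,alpha_r)} : a_1,...,a_p ; b_1,...,b_q ; x].
def extended_zeta(pairs, num=(), den=(), x=1.0, terms=400):
    """pairs = [(m_1, alpha_1), ..., (m_r, alpha_r)]; truncated series."""
    total, coeff = 0.0, 1.0   # coeff = [prod (a_i)_k / prod (b_j)_k] x^k / k!
    for k in range(terms):
        term = coeff
        for m_j, alpha_j in pairs:
            term /= (alpha_j + k) ** m_j
        total += term
        for a in num:            # (a)_{k+1} = (a)_k (a+k)
            coeff *= a + k
        for b in den:
            coeff /= b + k
        coeff *= x / (k + 1)     # x^{k+1}/(k+1)! = (x^k/k!) * x/(k+1)
    return total

# With a single numerator parameter a_1 = 1 and x = 1, (1)_k / k! = 1 and the
# series reduces to the Hurwitz zeta zeta(3, 2.5); convergence at |x| = 1 is
# slow, so increase `terms` if more accuracy is needed.
print(extended_zeta([(3.0, 2.5)], num=(1.0,)))
```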
First, we will consider a m × 1 vector-variate random variable in the real and complex domains. Then, we will consider a general case of m × n matrix-variate random variable in the real and complex domains. Associated with these random variables, we will construct a number of densities that can be taken as extended logistic densities to the vector/matrix-variate cases. The model and the evaluation of the corresponding normalizing constant will be stated as a theorem.
Theorem 1. 
Let $X$ be an $m\times1$ real vector and consider the following density, where $c_1$ is the normalizing constant. Then, for the model
$$f_1(X)\,dX = c_1\,[X'X]^{\gamma}\,\frac{e^{-\alpha\,X'X}}{(1+a\,e^{-X'X})^{\alpha+\beta}}\,dX,\qquad \alpha > 0,\ \beta > 0,\ 0 \le a \le 1,\ \gamma \ge 0, \qquad (5)$$
the normalizing constant c 1 is given by the following:
$$c_1 = \frac{\Gamma(\frac{m}{2})}{\pi^{\frac{m}{2}}\,\Gamma(\gamma+\frac{m}{2})\,\zeta[\{(\gamma+\frac{m}{2},\alpha)\} : \alpha+\beta;\ ;\ -a]}.$$
Proof. 
Since $X'X = x_1^2+\cdots+x_m^2 > 0$ for $X' = [x_1,\ldots,x_m] \ne O$ and $0 \le a \le 1$, we have $0 \le a\,e^{-X'X} < 1$, and hence a binomial expansion is valid for the denominator in (5). That is,
$$(1+a\,e^{-X'X})^{-(\alpha+\beta)} = \sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,e^{-k\,X'X}. \qquad (7)$$
Let $y = X'X$. Note that $X'$ is $1\times m$, and then Lemma 1 gives
$$y = X'X \Rightarrow dX = \frac{\pi^{\frac{m}{2}}}{\Gamma(\frac{m}{2})}\,y^{\frac{m}{2}-1}\,dy, \qquad (8)$$
and, from (5), (7) and (8),
$$\int_X f_1(X)\,dX = c_1\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,\frac{\pi^{\frac{m}{2}}}{\Gamma(\frac{m}{2})}\int_{0}^{\infty}y^{\gamma+\frac{m}{2}-1}\,e^{-(\alpha+k)y}\,dy = c_1\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,\frac{\pi^{\frac{m}{2}}}{\Gamma(\frac{m}{2})}\,\frac{\Gamma(\gamma+\frac{m}{2})}{(\alpha+k)^{\gamma+\frac{m}{2}}} = c_1\,\frac{\pi^{\frac{m}{2}}}{\Gamma(\frac{m}{2})}\,\Gamma\big(\gamma+\tfrac{m}{2}\big)\,\zeta\big[\{(\gamma+\tfrac{m}{2},\alpha)\} : \alpha+\beta;\ ;\ -a\big]$$
for $\gamma+\frac{m}{2} > 1$. The integral in the first line above is evaluated by using a real scalar gamma integral. Hence, when $f_1(X)$ is a density of $X$, then
$$c_1^{-1} = \frac{\pi^{\frac{m}{2}}}{\Gamma(\frac{m}{2})}\,\Gamma\big(\gamma+\tfrac{m}{2}\big)\,\zeta\big[\{(\gamma+\tfrac{m}{2},\alpha)\} : \alpha+\beta;\ ;\ -a\big],\qquad \gamma+\tfrac{m}{2} > 1.$$
This establishes the theorem.
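As an illustrative numerical check of Theorem 1 (a sketch under our own parameter choices, not part of the paper), the polar reduction above says that the $y$-integral must reproduce $\Gamma(\gamma+\frac{m}{2})$ times the extended zeta series; both sides are computed below.

```python
# Check: integral_0^inf y^{g-1} e^{-alpha y} (1 + a e^{-y})^{-(alpha+beta)} dy
#        = Gamma(g) * sum_{k>=0} (alpha+beta)_k (-a)^k / k! * (alpha+k)^{-g},
# with g = gamma + m/2, which is the identity behind c_1 in Theorem 1.
import math
from scipy.integrate import quad

alpha, beta, a, gamma_, m = 1.5, 2.0, 0.7, 0.8, 3
g = gamma_ + m / 2

lhs, _ = quad(lambda y: y ** (g - 1) * math.exp(-alpha * y)
              * (1 + a * math.exp(-y)) ** (-(alpha + beta)), 0, math.inf)

rhs, coeff = 0.0, 1.0           # coeff = (alpha+beta)_k (-a)^k / k!
for k in range(200):
    rhs += coeff * (alpha + k) ** (-g)
    coeff *= (alpha + beta + k) * (-a) / (k + 1)
rhs *= math.gamma(g)

print(lhs, rhs)                 # agreement to quadrature accuracy
```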
In order to avoid using too many numbers for the functions and equations, we will use the following simplified notation to denote results in the complex domain by appending a letter “c” to the function number. For example, the density in the complex domain, corresponding to f 1 ( X ) , will be denoted by f 1 c ( X ˜ ) . With these notations, we will list the counterpart of Theorem 1 for the complex domain. Let X ˜ be a m × 1 vector in the complex domain with distinct complex scalar elements. Let the normalizing constant in the complex domain, corresponding to c 1 in the real domain, be denoted by c ˜ 1 . Then, we have the theorem in the complex domain, corresponding to Theorem 1, denoted by Theorem 2, as the following.
Theorem 2. 
Let $\tilde{X}$ be an $m\times1$ vector random variable in the complex domain with distinct complex scalar elements. Let $\tilde{X}^{*}$ denote the conjugate transpose of $\tilde{X}$. Consider the model
$$f_{1c}(\tilde{X}) = \tilde{c}_1\,[\tilde{X}^{*}\tilde{X}]^{\gamma}\,\frac{e^{-\alpha\,\tilde{X}^{*}\tilde{X}}}{(1+a\,e^{-\tilde{X}^{*}\tilde{X}})^{\alpha+\beta}}.$$
Then, the normalizing constant c ˜ 1 is given by the following:
$$\tilde{c}_1 = \frac{\Gamma(m)}{\pi^{m}\,\Gamma(\gamma+m)\,\zeta[\{(\gamma+m,\alpha)\} : \alpha+\beta;\ ;\ -a]} \qquad (11)$$
for $\alpha > 0$, $\beta > 0$, $0 \le a \le 1$, $\gamma+m > 1$.
Proof. 
Note that $\tilde{X}^{*}\tilde{X} = |\tilde{x}_1|^2+\cdots+|\tilde{x}_m|^2 = (x_{11}^2+x_{12}^2)+\cdots+(x_{m1}^2+x_{m2}^2)$, where $\tilde{x}_j = x_{j1}+ix_{j2}$, $i = \sqrt{-1}$, and $x_{j1}, x_{j2}$ are real scalar variables. Then, there are $2m$ real scalar variables in $\tilde{X}^{*}\tilde{X}$, compared with $m$ real scalar variables in the real case $X'X$. Hence, from Lemma 1 in the complex case, we have $d\tilde{X} = \frac{\pi^{m}}{\Gamma(m)}\,y^{m-1}\,dy$ for $y = \tilde{X}^{*}\tilde{X}$, which is again real. Then, following the derivation in the real case, we have $\tilde{c}_1$ as given in (11). □
Note 2.1. Ref. [2] deals with the analysis of Polarimetric Synthetic Aperture Radar (PolSAR) multi-look return signal data. The return signal cross-section has two components, called speckle and texture: speckle is the noise-like contaminant of the cross-section, and texture is the cross-section variable itself. The analysis is frequently done under the assumption that the speckle part has a complex Gaussian distribution. It is found that non-Gaussian models give better representations in certain uneven regions, such as urban areas, forests, and sea surfaces; see, for example, Refs. [3,4,5,6]. Various types of multivariate distributions in the complex domain, associated with the complex Gaussian and complex Wishart, are used in the study of the speckle part (the noise-like appearance in PolSAR images), and real positive scalar and positive definite matrix-variate distributions are used for the texture part (the spatial variation in the radar cross-section). In industrial applications, a logistic model is often preferred to a standard Gaussian model because, even though the graphs of the two look alike, the tail is thicker in the logistic case. Hence, multivariate and matrix-variate models extending the real scalar logistic model may qualify as more appropriate models in many areas where logistic models are preferred, and analysts of single-look and multi-look sonar and radar data are likely to find extended logistic models better for their data compared with Gaussian- and Wishart-based models. These are the motivating factors for introducing extended logistic models, especially in the form of multivariate and matrix-variate distributions in the complex domain. Moreover, the models in Theorems 1 and 2 can be generalized by replacing $X'X$ in the real case by $(X-\mu)'A(X-\mu)$, $\mu = E[X]$, $A > O$, where $E[(\cdot)]$ denotes the expected value of $(\cdot)$ and $A > O$ is an $m\times m$ real positive definite constant matrix; in this case, the only change is that the normalizing constant is multiplied by $|A|^{\frac{1}{2}}$. In the complex case, replace $\tilde{X}^{*}\tilde{X}$ by $(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})$, where $\tilde{\mu} = E[\tilde{X}]$ and $A = A^{*} > O$ is an $m\times m$ constant Hermitian positive definite matrix; in this case, the only change is that the normalizing constant is multiplied by $|\det(A)|$, the absolute value of the determinant of $A$.
Note that the $h$-th moments of $X'X$ and $\tilde{X}^{*}\tilde{X}$, for arbitrary $h$, are available from the respective normalizing constants in the extended logistic models by replacing $\gamma$ with $\gamma+h$ and then taking the ratio of the respective normalizing constants, as shown below.
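For instance, in the setting of Theorem 1, this ratio of normalizing constants gives (stated here for illustration, for $h$ such that $\gamma+h+\frac{m}{2} > 1$):

$$E[(X'X)^h] = \frac{c_1(\gamma)}{c_1(\gamma+h)} = \frac{\Gamma(\gamma+h+\frac{m}{2})\;\zeta[\{(\gamma+h+\frac{m}{2},\alpha)\} : \alpha+\beta;\ ;\ -a]}{\Gamma(\gamma+\frac{m}{2})\;\zeta[\{(\gamma+\frac{m}{2},\alpha)\} : \alpha+\beta;\ ;\ -a]}.$$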
Now, we consider a slightly more general model. Again, let $X$ be an $m\times1$ real vector random variable. In Theorem 1, replace $X'X$ in the exponent by $(X'X)^{\delta}$, $\delta > 0$, i.e., $X'X$ raised to an arbitrary power $\delta > 0$. Let us call the resulting model $f_2(X)$, and go through the same steps as in the proof of Theorem 1; that is, $y = X'X$, etc. Now set $z = y^{\delta} \Rightarrow dy = \frac{1}{\delta}z^{\frac{1}{\delta}-1}dz$. Then we have the following final model, which is stated as a theorem.
Theorem 3. 
Let $X$ be an $m\times1$ real vector with distinct real scalar variables as elements. For $\delta > 0$, consider replacing $X'X$ in the exponent in Theorem 1 by $(X'X)^{\delta}$. Then, we have the following model, denoted by $f_2(X)$:
$$f_2(X) = c_2\,[X'X]^{\gamma}\,\frac{e^{-\alpha(X'X)^{\delta}}}{(1+a\,e^{-(X'X)^{\delta}})^{\alpha+\beta}},$$
where
$$c_2 = \frac{\delta\,\Gamma(\frac{m}{2})}{\pi^{\frac{m}{2}}\,\Gamma(\frac{1}{\delta}(\gamma+\frac{m}{2}))\,\zeta[\{(\frac{1}{\delta}(\gamma+\frac{m}{2}),\alpha)\} : \alpha+\beta;\ ;\ -a]}$$
for $\alpha > 0$, $\beta > 0$, $0 \le a \le 1$, $\gamma+\frac{m}{2} > \delta$, $\delta > 0$, where $X$ is $m\times1$ in the real domain.
A more general form is obtained by replacing $X'X$ by $[(X-\mu)'A(X-\mu)]^{\delta}$, $\delta > 0$, $\mu = E[X]$, $A > O$; then the only change is that the normalizing constant $c_2$ is multiplied by $|A|^{\frac{1}{2}}$. Now, consider the corresponding replacement of $\tilde{X}^{*}\tilde{X}$ by $(\tilde{X}^{*}\tilde{X})^{\delta}$, $\delta > 0$. Then, proceeding as in the real case, we have the corresponding resulting model in the complex domain, denoted by $f_{2c}(\tilde{X})$ and given as the following.
Theorem 4. 
Let $\tilde{X}$ be an $m\times1$ complex vector random variable with distinct scalar complex variables as elements. For $\delta > 0$, consider the replacement of $\tilde{X}^{*}\tilde{X}$ in Theorem 2 by $(\tilde{X}^{*}\tilde{X})^{\delta}$. Let the resulting model be denoted by $f_{2c}(\tilde{X})$. Then,
$$f_{2c}(\tilde{X}) = \tilde{c}_2\,[\tilde{X}^{*}\tilde{X}]^{\gamma}\,\frac{e^{-\alpha(\tilde{X}^{*}\tilde{X})^{\delta}}}{(1+a\,e^{-(\tilde{X}^{*}\tilde{X})^{\delta}})^{\alpha+\beta}},$$
where
$$\tilde{c}_2 = \frac{\delta\,\Gamma(m)}{\pi^{m}\,\Gamma(\frac{1}{\delta}(\gamma+m))\,\zeta[\{(\frac{1}{\delta}(\gamma+m),\alpha)\} : \alpha+\beta;\ ;\ -a]}$$
for $\alpha > 0$, $\beta > 0$, $0 \le a \le 1$, $\gamma+m > \delta$, $\delta > 0$, where $\tilde{X}$ is $m\times1$ in the complex domain.
A more general situation is to replace $\tilde{X}^{*}\tilde{X}$ by $[(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})]^{\delta}$, $\tilde{\mu} = E[\tilde{X}]$, $A = A^{*} > O$, i.e., $A$ a Hermitian positive definite constant matrix. As explained before, the only change is that the normalizing constant $\tilde{c}_2$ is multiplied by $|\det(A)|$, the absolute value of the determinant of $A$.
Now, we consider the more general case of a rectangular matrix-variate random variable in the real domain. Let $X$ be an $m\times n$, $m\le n$, matrix of rank $m$ in the real domain with $mn$ distinct real scalar variables as elements. Let $S = XX'$; then $S > O$ is $m\times m$ and positive definite, and, from the real version of Lemma 1, $dX = \frac{\pi^{mn/2}}{\Gamma_m(n/2)}|S|^{\frac{n}{2}-\frac{m+1}{2}}\,dS$. In the corresponding complex case, let $\tilde{X}$ be $m\times n$, $m\le n$, and of rank $m$; let $\tilde{S} = \tilde{X}\tilde{X}^{*}$, $\tilde{S} = \tilde{S}^{*} > O$; then $d\tilde{X} = \frac{\pi^{mn}}{\tilde{\Gamma}_m(n)}|\det(\tilde{S})|^{n-m}\,d\tilde{S}$. Proceeding as in the derivations of Theorems 1 and 2, we have the following theorems.
Theorem 5. 
Let $X$ be an $m\times n$, $m\le n$, matrix of rank $m$ with $mn$ distinct real scalar variables as elements. For $\mathrm{tr}(\cdot)$ denoting the trace of $(\cdot)$, consider the following model:
$$f_3(X) = c_3\,|XX'|^{\gamma}\,\frac{e^{-\alpha\,\mathrm{tr}(XX')}}{(1+a\,e^{-\mathrm{tr}(XX')})^{\alpha+\beta}}.$$
Then, the normalizing constant c 3 is given by the following:
$$c_3 = \frac{\Gamma_m(\frac{n}{2})}{\pi^{\frac{mn}{2}}\,\Gamma_m(\gamma+\frac{n}{2})\,\zeta[\{(m(\gamma+\frac{n}{2}),\alpha)\} : \alpha+\beta;\ ;\ -a]} \qquad (14)$$
for $\alpha > 0$, $\beta > 0$, $0 \le a \le 1$, $\gamma+\frac{n}{2} > \frac{m-1}{2}$, $m(\gamma+\frac{n}{2}) > 1$, $\frac{n}{2} > \frac{m-1}{2}$.
The connection between $dX$ and $dS$, $S = XX'$, is given in Lemma 1. Then, following steps parallel to the derivation in Theorem 1, the result in (14) follows. This model can be extended by replacing $XX'$ with $A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}$, where $A > O$ is $m\times m$ and $B > O$ is $n\times n$, real constant positive definite matrices, $M = E[X]$, and $A^{\frac{1}{2}}$ denotes the positive definite square root of $A > O$. The only change is that the normalizing constant $c_3$ is multiplied by $|A|^{\frac{n}{2}}|B|^{\frac{m}{2}}$; this is available from Lemma 2, given below. For the corresponding complex case, we have already given the connection between $d\tilde{X}$ and $d\tilde{S}$, $\tilde{S} = \tilde{X}\tilde{X}^{*}$. The remaining procedure is parallel to the derivation in Theorem 2, and hence the result is given here without proof [1].
Lemma 2. 
Consider an $m\times n$ matrix $X = (x_{ij})$, where the $x_{ij}$'s are functionally independent (distinct) real scalar variables, and let $A$ be an $m\times m$ and $B$ an $n\times n$ nonsingular constant matrix. Then,
$$Y = AXB,\ |A| \ne 0,\ |B| \ne 0 \Rightarrow dY = |A|^{n}\,|B|^{m}\,dX.$$
When the m × n matrix X ˜ is in the complex domain and when A and B are m × m and n × n nonsingular constant matrices in the real or complex domain, then
$$\tilde{Y} = A\tilde{X}B,\ |A| \ne 0,\ |B| \ne 0 \Rightarrow d\tilde{Y} = |\det(A)|^{2n}\,|\det(B)|^{2m}\,d\tilde{X} = |\det(AA^{*})|^{n}\,|\det(B^{*}B)|^{m}\,d\tilde{X},$$
where | det ( · ) | denotes the absolute value of the determinant of ( · ) .
Theorem 6. 
Let $\tilde{X}$ be an $m\times n$, $m\le n$, matrix of rank $m$ in the complex domain with $mn$ distinct complex scalar variables as elements. Consider the following model:
$$f_{3c}(\tilde{X}) = \tilde{c}_3\,|\det(\tilde{X}\tilde{X}^{*})|^{\gamma}\,\frac{e^{-\alpha\,\mathrm{tr}(\tilde{X}\tilde{X}^{*})}}{(1+a\,e^{-\mathrm{tr}(\tilde{X}\tilde{X}^{*})})^{\alpha+\beta}}.$$
Then, the normalizing constant c ˜ 3 is given by the following:
$$\tilde{c}_3 = \frac{\tilde{\Gamma}_m(n)}{\pi^{mn}\,\tilde{\Gamma}_m(\gamma+n)\,\zeta[\{(m(\gamma+n),\alpha)\} : \alpha+\beta;\ ;\ -a]}$$
for $\alpha > 0$, $\beta > 0$, $0 \le a \le 1$, $\gamma+n > m-1$, $m(\gamma+n) > 1$, $n > m-1$.
An extension of the model is available by replacing $\tilde{X}\tilde{X}^{*}$ by $A^{\frac{1}{2}}(\tilde{X}-\tilde{M})B(\tilde{X}-\tilde{M})^{*}A^{\frac{1}{2}}$, $\tilde{M} = E[\tilde{X}]$, where $A = A^{*} > O$ is $m\times m$ and $B = B^{*} > O$ is $n\times n$, constant Hermitian positive definite matrices, and $A^{\frac{1}{2}}$ denotes the Hermitian positive definite square root of $A = A^{*} > O$. The only change is that the normalizing constant $\tilde{c}_3$ is multiplied by $|\det(A)|^{n}|\det(B)|^{m}$, from Lemma 2.
Again, we start with a real $m\times n$, $m\le n$, matrix of rank $m$ with $mn$ distinct real scalar variables as elements. Then, $\mathrm{tr}(XX')$ is the sum of squares of the $mn$ real scalar variables in $X$ because, for any matrix $C = (c_{jk})$ in the real domain, $\mathrm{tr}(CC') = \mathrm{tr}(C'C) = \sum_{j,k}c_{jk}^2$. In the complex domain, the corresponding result is $\mathrm{tr}(\tilde{C}\tilde{C}^{*}) = \mathrm{tr}(\tilde{C}^{*}\tilde{C}) = \sum_{j,k}|\tilde{c}_{jk}|^2$, where $\tilde{c}_{jk} = c_{jk1}+ic_{jk2}$, $i = \sqrt{-1}$, $c_{jk1}, c_{jk2}$ are real scalar quantities, and $|\tilde{c}_{jk}|^2 = c_{jk1}^2+c_{jk2}^2$. Thus, in the complex case, there will be twice the number of real variables compared with the corresponding real case. In the real case, the sum of squares of $mn$ quantities can be taken as coming from a form $Z'Z$, where $Z$ is an $mn\times1$ real vector. Then, one can apply Lemma 1 to this real vector; if $y = \mathrm{tr}(XX')$, the connections between $dX$ and $dy$ in the real case and between $d\tilde{X}$ and $dy$ in the complex domain are the following:
$$dX = \frac{\pi^{\frac{mn}{2}}}{\Gamma(\frac{mn}{2})}\,y^{\frac{mn}{2}-1}\,dy;\qquad d\tilde{X} = \frac{\pi^{mn}}{\Gamma(mn)}\,y^{mn-1}\,dy,\quad y = \mathrm{tr}(\tilde{X}\tilde{X}^{*}).$$
Now, proceeding as in the derivation of Theorem 1, we have the following results, which are stated as theorems.
Theorem 7. 
Let $X$ be an $m\times n$, $m\le n$, matrix of rank $m$ in the real domain with $mn$ distinct real scalar variables as elements. For $\delta > 0$, consider the model
$$f_4(X) = c_4\,[\mathrm{tr}(XX')]^{\eta}\,\frac{e^{-\alpha[\mathrm{tr}(XX')]^{\delta}}}{(1+a\,e^{-[\mathrm{tr}(XX')]^{\delta}})^{\alpha+\beta}}.$$
Then, the normalizing constant c 4 is the following:
$$c_4 = \frac{\delta\,\Gamma(\frac{mn}{2})}{\pi^{\frac{mn}{2}}\,\Gamma(\frac{1}{\delta}(\eta+\frac{mn}{2}))\,\zeta[\{(\frac{1}{\delta}(\eta+\frac{mn}{2}),\alpha)\} : \alpha+\beta;\ ;\ -a]}$$
for $\alpha > 0$, $\beta > 0$, $\delta > 0$, $0 \le a \le 1$, $\eta+\frac{mn}{2} > 0$, $\eta+\frac{mn}{2} > \delta$.
A generalization can be made by replacing $XX'$ by $A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}$ with $A > O$, $B > O$, as explained in connection with Theorem 5; hence, further details are omitted. In the corresponding complex case, we have the following result.
Theorem 8. 
Let $\tilde{X}$ be an $m\times n$, $m\le n$, matrix in the complex domain of rank $m$ with $mn$ distinct complex scalar variables as elements. Let $y = \mathrm{tr}(\tilde{X}\tilde{X}^{*})$; the connection between $d\tilde{X}$ and $dy$ is explained above. Consider the model
$$f_{4c}(\tilde{X}) = \tilde{c}_4\,[\mathrm{tr}(\tilde{X}\tilde{X}^{*})]^{\eta}\,\frac{e^{-\alpha[\mathrm{tr}(\tilde{X}\tilde{X}^{*})]^{\delta}}}{(1+a\,e^{-[\mathrm{tr}(\tilde{X}\tilde{X}^{*})]^{\delta}})^{\alpha+\beta}}.$$
Then, the normalizing constant c ˜ 4 is given by the following:
$$\tilde{c}_4 = \frac{\delta\,\Gamma(mn)}{\pi^{mn}\,\Gamma(\frac{1}{\delta}(\eta+mn))\,\zeta[\{(\frac{1}{\delta}(\eta+mn),\alpha)\} : \alpha+\beta;\ ;\ -a]}$$
for $\alpha > 0$, $\beta > 0$, $\delta > 0$, $\eta+mn > \delta$, $0 \le a \le 1$.
A generalization is available by replacing $\tilde{X}\tilde{X}^{*}$ by $A^{\frac{1}{2}}(\tilde{X}-\tilde{M})B(\tilde{X}-\tilde{M})^{*}A^{\frac{1}{2}}$, $A > O$, $B > O$, $\tilde{M} = E[\tilde{X}]$, as explained before.
Next, we consider the case where $X$ is $m\times m$ real positive definite, $X > O$, and the corresponding case in the complex domain where the $m\times m$ matrix $\tilde{X}$ is Hermitian positive definite, $\tilde{X} = \tilde{X}^{*} > O$. In the real case, we consider the following model, which is stated as a theorem.
Theorem 9. 
Let X be m × m real positive definite, X > O . Then, for the model
$$f_5(X) = c_5\,|X|^{\gamma-\frac{m+1}{2}}\,\frac{e^{-\alpha\,\mathrm{tr}(X)}}{(1+a\,e^{-\mathrm{tr}(X)})^{\alpha+\beta}},$$
the normalizing constant c 5 is given by
$$c_5^{-1} = \Gamma_m(\gamma)\,\zeta[\{(m\gamma,\alpha)\} : \alpha+\beta;\ ;\ -a]$$
for $m\gamma > 1$, $\gamma > \frac{m-1}{2}$, $\alpha > 0$, $\beta > 0$, $0 \le a \le 1$.
Proof. 
Since $0 \le a\,e^{-\mathrm{tr}(X)} < 1$, we expand
$$(1+a\,e^{-\mathrm{tr}(X)})^{-(\alpha+\beta)} = \sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,e^{-k\,\mathrm{tr}(X)}.$$
Now, the integral to be evaluated can be obtained by using a real matrix-variate gamma integral of Lemma 1. That is,
$$\int_{X>O}|X|^{\gamma-\frac{m+1}{2}}\,e^{-(\alpha+k)\,\mathrm{tr}(X)}\,dX = \frac{\Gamma_m(\gamma)}{(\alpha+k)^{m\gamma}},\qquad \gamma > \frac{m-1}{2}.$$
Then, the summation over k gives the following:
$$\Gamma_m(\gamma)\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,(\alpha+k)^{-m\gamma} = \Gamma_m(\gamma)\,\zeta[\{(m\gamma,\alpha)\} : \alpha+\beta;\ ;\ -a],$$
for $m\gamma > 1$. This gives the normalizing constant as stated above, which establishes the result. □
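An illustrative Monte Carlo check of Theorem 9 (our own sketch; the Wishart proposal and parameter values are assumptions, not from the paper): a Wishart density with df $= 2\gamma$ and scale $I/(2\alpha)$ is proportional to $|S|^{\gamma-\frac{m+1}{2}}e^{-\alpha\,\mathrm{tr}(S)}$, so the normalization of $f_5$ is equivalent to $E[(1+a\,e^{-\mathrm{tr}\,S})^{-(\alpha+\beta)}] = \alpha^{m\gamma}$ times the zeta series above.

```python
# Monte Carlo verification sketch of Theorem 9 for m = 2.
import numpy as np
from scipy.stats import wishart

m, gamma_, alpha, beta, a = 2, 2.5, 1.2, 1.8, 0.6

rng = np.random.default_rng(1)
S = wishart.rvs(df=2 * gamma_, scale=np.eye(m) / (2 * alpha),
                size=200_000, random_state=rng)
tr = np.trace(S, axis1=1, axis2=2)
mc = np.mean((1 + a * np.exp(-tr)) ** (-(alpha + beta)))

series, coeff = 0.0, 1.0        # coeff = (alpha+beta)_k (-a)^k / k!
for k in range(200):
    series += coeff * (alpha + k) ** (-m * gamma_)
    coeff *= (alpha + beta + k) * (-a) / (k + 1)

print(mc, alpha ** (m * gamma_) * series)   # agree to Monte Carlo error
```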
In the corresponding complex domain, the steps are parallel and we integrate out X ˜ > O by using a gamma integral in the complex domain as given in Lemma 1. Hence, we state the theorem in the complex domain without proof.
Theorem 10. 
Let X ˜ be a m × m matrix in the complex domain and let X ˜ be Hermitian positive definite, i.e., X ˜ = X ˜ * > O . Then, for the following model
$$f_{5c}(\tilde{X}) = \tilde{c}_5\,|\det(\tilde{X})|^{\gamma-m}\,\frac{e^{-\alpha\,\mathrm{tr}(\tilde{X})}}{(1+a\,e^{-\mathrm{tr}(\tilde{X})})^{\alpha+\beta}},$$
the normalizing constant c ˜ 5 is the following:
$$\tilde{c}_5^{-1} = \tilde{\Gamma}_m(\gamma)\,\zeta[\{(m\gamma,\alpha)\} : \alpha+\beta;\ ;\ -a]$$
for $\alpha > 0$, $\beta > 0$, $0 \le a \le 1$, $\gamma > m-1$, $m\gamma > 1$.
So far, we have considered generalized matrix-variate logistic models involving a rectangular matrix where either the exponential trace has an arbitrary power and a power of the trace enters the model as a product factor, or the product factor is a determinant but the exponential trace has power one. The only remaining situation is that in which a rectangular matrix is involved, the exponential trace has an arbitrary power and, at the same time, a factor containing a determinant and another factor containing a trace are both present in the model. This is the most general form of a generalized matrix-variate logistic model. We will consider this situation next, for which we need a recent result from Ref. [7], stated here as a lemma.
Lemma 3. 
Let $X$ be an $m\times n$, $m\le n$, matrix in the real domain of rank $m$ with $mn$ distinct real scalar variables as elements. Then,
$$\int_X |XX'|^{\gamma}\,[\mathrm{tr}(XX')]^{\eta}\,e^{-\alpha[\mathrm{tr}(XX')]^{\delta}}\,dX = \frac{\pi^{\frac{mn}{2}}}{\Gamma_m(\frac{n}{2})}\,\frac{\Gamma_m(\gamma+\frac{n}{2})\,\Gamma[\frac{1}{\delta}(m(\gamma+\frac{n}{2})+\eta)]}{\delta\,\Gamma(m(\gamma+\frac{n}{2}))}\,\alpha^{-\frac{1}{\delta}(m(\gamma+\frac{n}{2})+\eta)}$$
for $\Re(\gamma) > \frac{m-1}{2}-\frac{n}{2}$, $\Re(\eta) > 0$, $\delta > 0$, $\alpha > 0$, $m\le n$. Let $\tilde{X}$ be an $m\times n$, $m\le n$, matrix in the complex domain of rank $m$ with $mn$ distinct complex scalar variables as elements. Then,
$$\int_{\tilde{X}} |\det(\tilde{X}\tilde{X}^{*})|^{\gamma}\,[\mathrm{tr}(\tilde{X}\tilde{X}^{*})]^{\eta}\,e^{-\alpha[\mathrm{tr}(\tilde{X}\tilde{X}^{*})]^{\delta}}\,d\tilde{X} = \frac{\pi^{mn}}{\tilde{\Gamma}_m(n)}\,\frac{\tilde{\Gamma}_m(\gamma+n)\,\Gamma[\frac{1}{\delta}(m(\gamma+n)+\eta)]}{\delta\,\Gamma(m(\gamma+n))}\,\alpha^{-\frac{1}{\delta}(m(\gamma+n)+\eta)}$$
for $\Re(\gamma) > m-1-n$, $\Re(\eta) > 0$, $\delta > 0$, $\alpha > 0$, $m\le n$.
The proofs of the results in Lemma 3 are very lengthy, and hence only the statements are given here, without derivation. The model in Lemma 3 in the real case, excluding the determinant factor, is often known in the literature as Kotz's model; some authors (Ref. [8]) also call the model in the real case including the determinant factor Kotz's model. Unfortunately, the normalizing constant traditionally used in such a model (Ref. [8]) is found to be incorrect; the correct normalizing constant, the detailed steps in the derivation, and the correct integration procedure are given in Ref. [7]. The most general rectangular matrix-variate logistic density will be stated next as a theorem, where we take all parameters to be real.
Theorem 11. 
Let $X$ be an $m\times n$, $m\le n$, matrix in the real domain of rank $m$ with $mn$ distinct real scalar variables as elements. Then, for the model
$$f_6(X) = c_6\,|XX'|^{\gamma}\,[\mathrm{tr}(XX')]^{\eta}\,\frac{e^{-\alpha[\mathrm{tr}(XX')]^{\delta}}}{(1+a\,e^{-[\mathrm{tr}(XX')]^{\delta}})^{\alpha+\beta}},$$
the normalizing constant c 6 is given by
$$c_6^{-1} = \frac{\pi^{\frac{mn}{2}}\,\Gamma[\frac{1}{\delta}(m(\gamma+\frac{n}{2})+\eta)]\,\Gamma_m(\gamma+\frac{n}{2})}{\delta\,\Gamma_m(\frac{n}{2})\,\Gamma(m(\gamma+\frac{n}{2}))}\,\zeta\big[\{(\tfrac{1}{\delta}(m(\gamma+\tfrac{n}{2})+\eta),\alpha)\} : \alpha+\beta;\ ;\ -a\big]$$
for $\gamma > \frac{m-1}{2}-\frac{n}{2}$, $\eta > 0$, $\delta > 0$, $\alpha > 0$, $\beta > 0$, $0 \le a \le 1$, $m\le n$, $m(\gamma+\frac{n}{2})+\eta > \delta$.
The above result follows directly from Lemma 3 in the real case. We state the corresponding result in the complex domain. Again, the result follows from the complex version of Lemma 3 and hence the theorem is stated without proof.
Theorem 12. 
Let $\tilde{X}$ be an $m\times n$, $m\le n$, matrix in the complex domain of rank $m$ with $mn$ distinct complex scalar variables as elements. Then, for the following model
$$f_{6c}(\tilde{X}) = \tilde{c}_6\,|\det(\tilde{X}\tilde{X}^{*})|^{\gamma}\,[\mathrm{tr}(\tilde{X}\tilde{X}^{*})]^{\eta}\,\frac{e^{-\alpha[\mathrm{tr}(\tilde{X}\tilde{X}^{*})]^{\delta}}}{(1+a\,e^{-[\mathrm{tr}(\tilde{X}\tilde{X}^{*})]^{\delta}})^{\alpha+\beta}},$$
the normalizing constant c ˜ 6 is given by the following:
$$\tilde{c}_6^{-1} = \frac{\pi^{mn}\,\tilde{\Gamma}_m(\gamma+n)\,\Gamma[\frac{1}{\delta}(m(\gamma+n)+\eta)]}{\delta\,\tilde{\Gamma}_m(n)\,\Gamma(m(\gamma+n))}\,\zeta\big[\{(\tfrac{1}{\delta}(m(\gamma+n)+\eta),\alpha)\} : \alpha+\beta;\ ;\ -a\big]$$
for $\gamma > m-1-n$, $\delta > 0$, $\alpha > 0$, $\beta > 0$, $\eta > 0$, $0 \le a \le 1$, $m\le n$, $m(\gamma+n)+\eta > \delta$.
Special cases of Theorems 11 and 12 are the cases $\delta = 1$; $\eta = 0$ with general $\gamma, \delta$; and $\gamma = 0$ with general $\eta, \delta$. It can be seen that the case $\gamma \ne 0$, $\delta \ne 1$ is the most difficult one in which to evaluate the normalizing constant; for the other situations, several other techniques are available. The generalization of the above model in the real case is that in which $XX'$ is replaced by $A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}$, where $M = E[X]$, $A > O$ is $m\times m$ and $B > O$ is $n\times n$, constant positive definite matrices, and $A^{\frac{1}{2}}$ is the positive definite square root of $A > O$. In the complex domain, one can replace $\tilde{X}\tilde{X}^{*}$ by $A^{\frac{1}{2}}(\tilde{X}-\tilde{M})B(\tilde{X}-\tilde{M})^{*}A^{\frac{1}{2}}$, where $\tilde{M} = E[\tilde{X}]$ and $A = A^{*} > O$, $B = B^{*} > O$ are $m\times m$ and $n\times n$ constant Hermitian positive definite matrices; $A^{\frac{1}{2}}$ is the Hermitian positive definite square root of $A$. In these general cases, particular cases of interest in the real domain are the following: $M = O$; $M = O, A = I$; $M = O, B = I$; $M = O, A = I, B = I$; $A = I$; $B = I$. One can consider the corresponding special cases in the complex domain. In what we have done so far, the numerator and denominator exponential traces carry the same $\delta$. If we take different parameters, $\delta_1$ for the numerator and $\delta_2$ for the denominator with $\delta_1 \ne \delta_2$, then the problem becomes very difficult. In the scalar case, the integral to be evaluated is of the form
$$\int_0^{\infty} x^{\gamma}\,e^{-a\,x^{\delta_1}-b\,x^{\delta_2}}\,dx,\qquad a > 0,\ b > 0.$$
When $\delta_1 > 0$, $\delta_2 < 0$, or vice versa, the problem is a generalization of a Bessel integral (Ref. [9]), or Krätzel integral in applied analysis, or a reaction-rate probability integral in nuclear reaction-rate theory. This can be handled through the Mellin convolution of a product and then taking the inverse Mellin transform, which will usually lead to an H-function for general parameters. When $\delta_1 > 0$, $\delta_2 > 0$, the situation can be handled through the Mellin convolution of a ratio and then taking the inverse Mellin transform, which will again lead to an H-function for general parameters. In the matrix-variate case, when $\delta_1 > 0, \delta_2 < 0$ or $\delta_1 < 0, \delta_2 > 0$, one can interpret the integral as the M-convolution of a product, or as the density of a symmetric product of positive definite or Hermitian positive definite matrices. A unique inversion of the M-convolution of a product is not available, and hence the explicit form of the density of a symmetric product of matrices is not available in the general situation. Similarly, when $\delta_1 > 0$, $\delta_2 > 0$, one can interpret it as the M-convolution of a ratio, or as the density of a symmetric ratio of positive definite matrix-variate random variables; again, the explicit form of the density is not available for general parameters. These are all open problems. For M-convolutions and symmetric products and ratios of matrices, see Ref. [7]. A numerical illustration of the scalar integral is sketched below.
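Numerically, the scalar integral above poses no difficulty even though its closed form requires an H-function; an illustrative quadrature sketch (parameter values are ours) for the Krätzel case $\delta_1 > 0$, $\delta_2 < 0$:

```python
# Quadrature evaluation of int_0^inf x^gamma exp(-a x^d1 - b x^d2) dx
# for d1 > 0 and d2 < 0 (generalized Bessel / Kraetzel integral).
import math
from scipy.integrate import quad

gamma_, a, b, d1, d2 = 2.0, 1.0, 0.5, 2.0, -1.0

def integrand(x: float) -> float:
    # the integrand vanishes extremely fast as x -> 0+ because of e^{-b/x}
    return x ** gamma_ * math.exp(-a * x ** d1 - b * x ** d2) if x > 1e-12 else 0.0

val, err = quad(integrand, 0, math.inf)
print(val, err)
```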

3. Some Bayesian Models

In a Bayesian procedure, we have a conditional distribution to start with. This conditional distribution may be represented by a random variable $X$ when another variable or parameter $Y$ is given: at given values of $Y$, the distribution of $X$ is available. If the density of $X$ is available, then it will be of the form $f(X|Y)$, the density of $X$ at a given $Y$, or the conditional density of $X$ given $Y$. Then, $Y$ may have a prior distribution of its own, with density denoted by $g(Y)$, and the joint density of $X$ and $Y$ is $f(X|Y)g(Y)$. The unconditional density of $X$, denoted by $f_x(X)$, is given by $f_x(X) = \int_Y f(X|Y)\,g(Y)\,dY$. Hence, the conditional density of $Y$ at a given $X$ is $g(Y|X) = f(X|Y)g(Y)/f_x(X)$ wherever the ratio is defined. The aim of Bayesian analysis is to study the properties of $Y$ through $g(Y|X)$, and in this process the most important step is to compute the unconditional density $f_x(X)$. In the above formulation, $X$ may be a scalar, vector, or matrix in the real or complex domain, and so may $Y$. Our aim here is to consider a few situations involving vector/matrix variables in the real and complex domains and to work out some Bayesian procedures. Since the evaluation of the unconditional density $f_x(X)$ is the most important step in this Bayesian process, we will introduce each model by computing the unconditional density $f_x(X)$ and end the discussion of that model there. Ref. [10] states, with illustrations, that Quantum Mechanics is nothing but the Bayesian analysis of Hermitian matrices in a Hilbert space. Hence, all the results obtained here have relevance in Quantum Physics and other related problems, in addition to their importance in statistical distribution theory and Bayesian analysis. A toy scalar illustration of the general recipe is sketched below.
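The following is an assumption-laden sketch, not from the paper: with a Gaussian conditional density whose precision $y$ carries a gamma prior, the unconditional density $f_x(x) = \int f(x|y)g(y)\,dy$ computed by quadrature reproduces the well-known Student-t mixture result.

```python
# f(x|y): Gaussian with precision y;  g(y): gamma prior with shape nu/2 and
# scale 2/nu;  f_x(x) = int f(x|y) g(y) dy equals a Student-t with nu d.f.
import numpy as np
from scipy import stats
from scipy.integrate import quad

nu = 5.0

def f_x(x: float) -> float:
    # unconditional density: integrate the conditional Gaussian against the prior
    integrand = lambda y: (stats.norm.pdf(x, scale=y ** -0.5)
                           * stats.gamma.pdf(y, a=nu / 2, scale=2 / nu))
    val, _ = quad(integrand, 0, np.inf)
    return val

for x in (0.0, 1.0, 2.5):
    print(x, f_x(x), stats.t.pdf(x, df=nu))   # the two columns agree
```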

3.1. Bayesian Model 1 in the Real Domain

We will start with an illustrative example. Let $X = (x_{jk})$ be an $m\times n$, $m\le n$, matrix in the real domain of rank $m$ with $mn$ distinct real scalar variables as elements. Let $X$ have a rectangular matrix-variate gamma density, denoted by $f(X)$, where
$$f(X) = c\,|AXBX'|^{\gamma}\,e^{-\mathrm{tr}(AXBX')},\qquad A > O,\ B > O, \qquad (22)$$
for $-\infty < x_{jk} < \infty$ for all $j$ and $k$. Here, $A$ is an $m\times m$ and $B$ an $n\times n$ real constant positive definite matrix. Observe that when $\gamma = 0$, one has the real matrix-variate Gaussian density in (22); for writing it in the usual format, replace $A$ by $\frac{1}{2}A$. Moreover, observe that one can write the matrices in a symmetric format: $\mathrm{tr}(AXBX') = \mathrm{tr}(A^{\frac{1}{2}}XBX'A^{\frac{1}{2}})$ and $|AXBX'| = |A^{\frac{1}{2}}XBX'A^{\frac{1}{2}}|$, where $A^{\frac{1}{2}}$ is the positive definite square root of $A > O$. If a location parameter is to be included, then replace $X$ by $X-M$, $M = E[X]$, in $\mathrm{tr}(AXBX')$. Here, $A$ and $B$ are scaling matrices. The normalizing constant $c$ can be evaluated by the following procedure. Let $U = A^{\frac{1}{2}}XB^{\frac{1}{2}} \Rightarrow dX = |A|^{-\frac{n}{2}}|B|^{-\frac{m}{2}}\,dU$ from Lemma 2. Then, let $V = UU' \Rightarrow dU = \frac{\pi^{mn/2}}{\Gamma_m(n/2)}|V|^{\frac{n}{2}-\frac{m+1}{2}}\,dV$ from Lemma 1. Then, integrate out $V$ by using the real matrix-variate gamma integral given in Lemma 1 to obtain the final result for $c$:
$$c = |A|^{\frac{n}{2}}\,|B|^{\frac{m}{2}}\,\frac{\Gamma_m(\frac{n}{2})}{\pi^{\frac{mn}{2}}\,\Gamma_m(\gamma+\frac{n}{2})},$$
for $\Re(\gamma)+\frac{n}{2} > \frac{m-1}{2}$. If $f(X)$ is the density of $X$ at a given $A$, then we may write $f(X)$ as $f(X|A)$, in the standard notation for a conditional density. One can also take (22) as a scaling model or scale mixture. For example, consider an $m\times n$, $m\le n$, matrix $Y$ in the real domain of rank $m$ having the density in (22) with $A = I_m$, the identity matrix; consider a scaled $Y$ of the form $X = A^{\frac{1}{2}}Y$, where $A > O$ is an $m\times m$ real positive definite matrix. Then, the density of $X$ is as in (22), with a slight change in the normalizing constant. Note that (22) also contains the usual real multivariate Gaussian density: take $\gamma = 0$ and $m = 1$; then $A$ is a positive scalar constant, and taking it as $\frac{1}{2}$, (22) becomes an $n$-variate Gaussian density with covariance matrix $B^{-1}$; from here, one can obtain the scalar-variable Gaussian as well. Hence, our discussion covers the scalar, vector, and matrix cases in the real domain. Let us take (22) as the conditional density $f(X|A)$, and let $A > O$ have a prior density, denoted by $g(A)$, where
$$g(A) = c_1\,|A|^{\rho-\frac{m+1}{2}}\,\frac{e^{-\alpha\,\mathrm{tr}(A)}}{(1+a\,e^{-\mathrm{tr}(A)})^{\alpha+\beta}}$$
for $\alpha > 0$, $\beta > 0$, $0 \le a \le 1$, $\Re(\rho) > \frac{m-1}{2}$, where the normalizing constant $c_1$ can be evaluated by using the procedure in Section 2 and can be written in terms of an extended zeta function. That is,
$$c_1^{-1} = \Gamma_m(\rho)\,\zeta[\{(m\rho,\alpha)\} : \alpha+\beta;\ ;\ -a],\qquad \Re(\rho) > \frac{m-1}{2},\ m\,\Re(\rho) > 1. \qquad (25)$$
The unconditional density of $X$, denoted by $f_x(X)$, is the following, where $c'$ denotes $c$ with the factor $|A|^{\frac{n}{2}}$ removed:
$$f_x(X) = \int_{A>O} f(X|A)\,g(A)\,dA = c'c_1\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,|XBX'|^{\gamma}\int_{A>O}|A|^{\rho+\gamma+\frac{n}{2}-\frac{m+1}{2}}\,e^{-\mathrm{tr}[A((\alpha+k)I+XBX')]}\,dA = c'c_1\,\Gamma_m\big(\rho+\gamma+\tfrac{n}{2}\big)\,|XBX'|^{\gamma}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,\big|(\alpha+k)I+XBX'\big|^{-(\rho+\gamma+\frac{n}{2})}$$
for $\rho+\gamma+\frac{n}{2} > \max\{1,\frac{m-1}{2}\}$, $\alpha > 0$, $\beta > 0$, $\Re(\rho) > \frac{m-1}{2}$, $0 \le a \le 1$. Let $\lambda_1,\ldots,\lambda_m$ be the eigenvalues of $XBX'$. Then,
$$\big|(\alpha+k)I+XBX'\big|^{-(\rho+\gamma+\frac{n}{2})} = \prod_{j=1}^{m}(\alpha+\lambda_j+k)^{-(\rho+\gamma+\frac{n}{2})}.$$
Then, we can write $f_x(X)$ in terms of an extended zeta function as follows, for $\delta = \rho+\gamma+\frac{n}{2}$:
$$f_x(X) = c'c_1\,\Gamma_m(\delta)\,|XBX'|^{\gamma}\,\zeta[\{(\delta,\alpha+\lambda_1),\ldots,(\delta,\alpha+\lambda_m)\} : \alpha+\beta;\ ;\ -a]. \qquad (26)$$
Equation (26) is our Bayesian model 1 in the real domain.

3.2. Bayesian Model 1 in the Complex Domain

Let $\tilde{X}$ be an $m\times n$, $m\le n$, matrix in the complex domain of rank $m$ with $mn$ distinct complex scalar variables as elements. Let the conditional density of $\tilde{X}$, given $A > O$, be of the form
$$f(\tilde{X}|A) = \tilde{c}\,|\det(A\tilde{X}B\tilde{X}^{*})|^{\gamma}\,e^{-\mathrm{tr}(A\tilde{X}B\tilde{X}^{*})}$$
for $A = A^{*} > O$, $B = B^{*} > O$, $\Re(\gamma) > m-1-n$. Then, proceeding as in the real case, we note that the normalizing constant $\tilde{c}$ is given by the following:
$$\tilde{c} = |\det(A)|^{n}\,|\det(B)|^{m}\,\frac{\tilde{\Gamma}_m(n)}{\pi^{mn}\,\tilde{\Gamma}_m(\gamma+n)}$$
for $n > m-1$, $\Re(\gamma) > m-1-n$. Let the prior density of $A$, denoted by $\tilde{g}(A)$, be the following:
$$\tilde{g}(A) = \tilde{c}_1\,|\det(A)|^{\rho-m}\,\frac{e^{-\alpha\,\mathrm{tr}(A)}}{(1+a\,e^{-\mathrm{tr}(A)})^{\alpha+\beta}}$$
for $\Re(\rho) > m-1$, $0 \le a \le 1$, $\alpha > 0$, $\beta > 0$. Then, proceeding parallel to the derivation in the real case, one has
$$\tilde{c}_1^{-1} = \tilde{\Gamma}_m(\rho)\,\zeta[\{(m\rho,\alpha)\} : \alpha+\beta;\ ;\ -a]$$
for $\Re(\rho) > m-1$, $m\,\Re(\rho) > 1$. Then, the unconditional density of $\tilde{X}$, denoted by $f_{\tilde{x}}(\tilde{X})$ and obtained by steps parallel to those in the real case, is the following, where $\tilde{c}'$ denotes $\tilde{c}$ with the factor $|\det(A)|^{n}$ removed and $\delta = \rho+\gamma+n$:
$$f_{\tilde{x}}(\tilde{X}) = \tilde{c}'\tilde{c}_1\,\tilde{\Gamma}_m(\rho+\gamma+n)\,|\det(\tilde{X}B\tilde{X}^{*})|^{\gamma}\;\zeta[\{(\delta,\alpha+\lambda_1),\ldots,(\delta,\alpha+\lambda_m)\} : \alpha+\beta;\ ;\ -a]$$
for $\alpha > 0$, $\beta > 0$, $0 \le a \le 1$, $\rho+\gamma+n > \max\{1,m-1\}$, $n > m-1$, $\Re(\rho) > m-1$, $\Re(\gamma) > m-1-n$, where $\lambda_1,\ldots,\lambda_m$ are the eigenvalues of $\tilde{X}B\tilde{X}^{*}$, which are real and positive since $\tilde{X}B\tilde{X}^{*}$ is Hermitian positive definite. This is Bayesian model 1 in the complex domain. Note that in $g(A)$ and $\tilde{g}(A)$, one can replace $\mathrm{tr}(A)$ by $\mathrm{tr}(CA)$, where, in the real domain, $C > O$ is a real constant positive definite matrix and, in the complex domain, $C = C^{*} > O$ is a constant Hermitian positive definite matrix. The results can be worked out through steps parallel to the ones already used; in this situation, we need one more result, on the Jacobians of a symmetric matrix going to a symmetric matrix in the real case and of a Hermitian matrix going to a Hermitian matrix in the complex domain. These results are available from Ref. [1].

3.3. Bayesian Model 2 in the Real Domain

Since the multivariate Gaussian density is the most popular density, we will give one example involving a multivariate normal. Let $X$ be an $m\times1$ real vector, $X' = [x_1,\ldots,x_m]$, with the $x_j$'s being distinct real scalar variables. Let the density of $X$, at a given $A > O$, be the real multivariate normal density
$$f(X|A) = \frac{|A|^{\frac{1}{2}}}{(2\pi)^{\frac{m}{2}}}\,e^{-\frac{1}{2}\mathrm{tr}(AXX')},\qquad A > O,\ X' = [x_1,\ldots,x_m],$$
for $-\infty < x_j < \infty$, $j = 1,\ldots,m$. Let the prior density for $A$ be the same $g(A)$ as in Bayesian model 1 in the real domain. Then, the unconditional density of $X$, again denoted by $f_x(X)$, is the following:
$$f_x(X) = \frac{c_1}{(2\pi)^{\frac{m}{2}}}\int_{A>O}|A|^{\rho+\frac{1}{2}-\frac{m+1}{2}}\,e^{-\frac{1}{2}\mathrm{tr}(AXX')}\,\frac{e^{-\alpha\,\mathrm{tr}(A)}}{(1+a\,e^{-\mathrm{tr}(A)})^{\alpha+\beta}}\,dA = \frac{c_1}{(2\pi)^{\frac{m}{2}}}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\int_{A>O}|A|^{\rho+\frac{1}{2}-\frac{m+1}{2}}\,e^{-(\alpha+k)\mathrm{tr}(A)-\frac{1}{2}\mathrm{tr}(AXX')}\,dA = \frac{c_1\,\Gamma_m(\rho+\frac{1}{2})}{(2\pi)^{\frac{m}{2}}}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,\Big|(\alpha+k)I+\tfrac{1}{2}XX'\Big|^{-(\rho+\frac{1}{2})}.$$
By using the determinants of partitioned matrices,
$$\Big|(\alpha+k)I+\tfrac{1}{2}XX'\Big| = \begin{vmatrix} 1 & X' \\ -\tfrac{1}{2}X & (\alpha+k)I \end{vmatrix} = (\alpha+k)^{m}\Big[1+\frac{X'X}{2(\alpha+k)}\Big] = (\alpha+k)^{m-1}\Big[\alpha+\tfrac{1}{2}X'X+k\Big].$$
Then, $f_x(X)$ is the following, observing that there are two factors containing $k$, one with exponent $-(\rho+\frac{1}{2})(m-1)$ and the other with exponent $-(\rho+\frac{1}{2})$:
$$f_x(X) = \frac{c_1\,\Gamma_m(\rho+\frac{1}{2})}{(2\pi)^{\frac{m}{2}}}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k(-a)^k}{k!}\,(\alpha+k)^{-(\rho+\frac{1}{2})(m-1)}\,\big(\alpha+\tfrac{1}{2}X'X+k\big)^{-(\rho+\frac{1}{2})} = \frac{c_1\,\Gamma_m(\rho+\frac{1}{2})}{(2\pi)^{\frac{m}{2}}}\;\zeta\big[\{((\rho+\tfrac{1}{2})(m-1),\alpha),\ ((\rho+\tfrac{1}{2}),\alpha+\tfrac{1}{2}X'X)\} : \alpha+\beta;\ ;\ -a\big]$$
for $\Re(\rho) > \frac{m-1}{2}$, $\Re(\rho) > -\frac{1}{2}$, $(m-1)(\rho+\frac{1}{2}) > 1$, $\alpha > 0$, $\beta > 0$, $0 \le a \le 1$, where $c_1$ is given in (25), namely
$$c_1^{-1} = \Gamma_m(\rho)\,\zeta[\{(m\rho,\alpha)\} : \alpha+\beta;\ ;\ -a],\qquad \Re(\rho) > \frac{m-1}{2}.$$
This is Bayesian model 2 in the real domain.
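A quick numerical check of the partitioned-determinant identity used above (an illustrative sketch, not part of the paper):

```python
# |(alpha+k) I + (1/2) X X'| = (alpha+k)^{m-1} (alpha + X'X/2 + k)
import numpy as np

rng = np.random.default_rng(2)
m, alpha, k = 4, 1.3, 2
X = rng.standard_normal((m, 1))

lhs = np.linalg.det((alpha + k) * np.eye(m) + 0.5 * (X @ X.T))
rhs = (alpha + k) ** (m - 1) * (alpha + 0.5 * (X * X).sum() + k)
print(lhs, rhs)   # identical up to floating-point rounding
```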

3.4. Bayesian Model 2 in the Complex Domain

In the complex domain, let X ˜ be m × 1 with the conditional density
$$f(\tilde{X}|A) = \frac{|\det(A)|}{\pi^{m}}\,e^{-\tilde{X}^{*}A\tilde{X}},$$
and let the prior density for $A$ have the same structure as before, except that the multiplicative factor containing the determinant is $|\det(A)|^{\rho-m}$. Hence, the normalizing constant in the prior density for $A$, denoted by $\tilde{c}_1$, is the following:
$$\tilde{c}_1^{-1} = \tilde{\Gamma}_m(\rho)\,\zeta[\{(m\rho,\alpha)\} : \alpha+\beta;\ ;\ -a].$$
With these changes, the unconditional density of X ˜ , denoted by f x ˜ ( X ˜ ) , is the following:
$$f_{\tilde{x}}(\tilde{X}) = \frac{1}{\pi^{m}}\,\frac{\tilde{\Gamma}_m(\rho+1)}{\tilde{\Gamma}_m(\rho)}\;\frac{\zeta[\{((\rho+1)(m-1),\alpha),\ ((\rho+1),\alpha+\tilde{X}^{*}\tilde{X})\} : \alpha+\beta;\ ;\ -a]}{\zeta[\{(m\rho,\alpha)\} : \alpha+\beta;\ ;\ -a]}$$
for $\Re(\rho) > m-1$, $m\,\Re(\rho) > 1$. This is Bayesian model 2 in the complex domain. Here also, one can replace $\mathrm{tr}(A)$ by $\mathrm{tr}(CA)$, where $C > O$ is a constant real positive definite matrix in the real domain and $C = C^{*} > O$ is a constant Hermitian positive definite matrix in the complex domain.
When $a = 0$ in the prior density for $A$, one has the structure of a matrix-variate gamma density as the prior; hence, this case is also covered in the discussions of models 1 and 2 in the real and complex domains. In the multivariate Gaussian case for the conditional density, if the covariance matrix itself has a prior density, then our procedure leads to an integral of the following type in the real domain:
$$\int_{A>O}|A|^{\gamma}\,e^{-a\,\mathrm{tr}(A)-b\,\mathrm{tr}(A^{-1})}\,dA, \qquad (36)$$
where $a > 0$, $b > 0$. This integral is the matrix-variate version of the basic Bessel or Krätzel integral, and it is difficult to evaluate. If powers are involved, such as $[\mathrm{tr}(A)]^{\delta}$ and $[\mathrm{tr}(A^{-1})]^{\rho}$, then the problem becomes very difficult, even when $\delta = \rho$. What we have considered are models for the prior distributions that can be regarded as matrix-variate extensions of scalar-variable logistic models. One can consider other matrix-variate distributions compatible with the conditional density and work out the resulting models or unconditional densities, both in the real and complex domains. With the above illustrative models, we will end the discussion.

4. Concluding Remarks

In this paper, we have introduced a number of matrix-variate models that can be considered as matrix-variate extensions of the basic scalar-variable logistic models. Matrix-variate analogues, both in the real and complex domains, are considered. A few Bayesian situations are also discussed, where the prior distributions for the conditioned variable can be looked upon as extensions of the logistic model. One can consider other matrix-variate distributions for the conditional distributions and compatible prior distributions for the conditioned matrix variable. One challenging problem is to evaluate matrix-variate versions of Bessel integrals, Krätzel integrals, reaction-rate probability integrals, integrals involving an inverse Gaussian density in the matrix-variate case, etc., where an example is given in (36). These are open problems. The most general models are given in Theorems 11 and 12; by using these models, one can also look into Bayesian structures. For example, consider the evaluation of the normalizing constant $c$ in the following model, where $X$ is an $m\times n$, $m\le n$, matrix of rank $m$ with $mn$ distinct real scalar variables as elements:
$$c^{-1} = \int_X [\mathrm{tr}(XX')]^{\eta}\,e^{-a[\mathrm{tr}(XX')]^{\delta}-b[\mathrm{tr}(XX')]^{-\rho}}\,dX,\qquad a > 0,\ b > 0,\ \delta > 0,\ \rho > 0.$$
Let y = tr ( X X ) . Then, from Lemma 1, we have
$$c^{-1} = \frac{\pi^{\frac{mn}{2}}}{\Gamma(\frac{mn}{2})}\int_{0}^{\infty} y^{\eta+\frac{mn}{2}-1}\,e^{-a\,y^{\delta}-b\,y^{-\rho}}\,dy. \qquad (37)$$
The integral in (37) can be evaluated by identifying it with the density of a product $u = x_1x_2$, where $x_1 > 0$ and $x_2 > 0$ are statistically independently distributed real scalar random variables with density functions $f_1(x_1)$ and $f_2(x_2)$, respectively. Let $g(u)$ be the density of $u$. Then,
$$g(u) = \int_{v}\frac{1}{v}\,f_1\Big(\frac{u}{v}\Big)\,f_2(v)\,dv. \qquad (38)$$
One can take $f_1(x_1) = c_1e^{-x_1^{\rho}}$ and $f_2(x_2) = c_2\,x_2^{\eta+\frac{mn}{2}}\,e^{-a\,x_2^{\delta}}$, with $u = b^{1/\rho}$, where $c_1$ and $c_2$ are normalizing constants. Now, one can identify (38) with (37). Then, from the property $E[u^{s-1}] = E[x_1^{s-1}]\,E[x_2^{s-1}]$, which holds due to statistical independence, $E[u^{s-1}]$ is available from the corresponding moments of $x_1$ and $x_2$. Now, by inverting $E[u^{s-1}]$ using an inverse Mellin transform, one has the explicit form of $c^{-1}$; details may be seen in [7]. A Monte Carlo illustration of the moment factorization is sketched below.
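The following Monte Carlo sketch (parameter values are ours; $\theta$ plays the role of $\eta+\frac{mn}{2}$) checks the factorization $E[u^{s-1}] = E[x_1^{s-1}]E[x_2^{s-1}]$ against closed-form gamma moments of the two assumed densities: $x_1 = G_1^{1/\rho}$ with $G_1 \sim \mathrm{Gamma}(1/\rho)$ has density proportional to $e^{-x_1^{\rho}}$, and $x_2 = (G_2/a)^{1/\delta}$ with $G_2 \sim \mathrm{Gamma}((\theta+1)/\delta)$ has density proportional to $x_2^{\theta}e^{-a\,x_2^{\delta}}$.

```python
# Monte Carlo check of E[u^{s-1}] = E[x1^{s-1}] E[x2^{s-1}] for u = x1 x2.
import numpy as np
from scipy.special import gamma as G

rng = np.random.default_rng(3)
rho, delta, theta, a, s = 1.5, 2.0, 2.0, 0.8, 1.7

x1 = rng.gamma(1 / rho, size=500_000) ** (1 / rho)
x2 = (rng.gamma((theta + 1) / delta, size=500_000) / a) ** (1 / delta)
mc = np.mean((x1 * x2) ** (s - 1))

# closed-form Mellin moments: E[x1^{s-1}] = Gamma(s/rho) / Gamma(1/rho),
# E[x2^{s-1}] = a^{-(s-1)/delta} Gamma((theta+s)/delta) / Gamma((theta+1)/delta)
exact = (G(s / rho) / G(1 / rho)) * a ** (-(s - 1) / delta) \
        * G((theta + s) / delta) / G((theta + 1) / delta)
print(mc, exact)   # agree to Monte Carlo error
```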
Instead of $-\rho$ in the exponent, one can have $+\rho$; that is, the exponential part may be of the form $e^{-a\,y^{\delta}-b\,y^{\rho}}$. Such an integral can be evaluated by identifying it with the density of a ratio $u_1 = x_1/x_2$: with $v = x_2$, so that $x_1 = u_1v$, the Jacobian part is $v$, and $g(u_1) = \int_v v\,f_1(u_1v)\,f_2(v)\,dv$. Then, by the same procedure as above, one has $E[u_1^{s-1}]$; from here, by inversion, one has the corresponding explicit form for the density of $u_1$.
Now, consider the evaluation of an integral of the type in (37) coming from the corresponding matrix-variate case. Let $X_1 > O$ and $X_2 > O$ be $p\times p$ independently distributed real positive definite matrix-variate random variables. Let $U = X_2^{\frac{1}{2}}X_1X_2^{\frac{1}{2}}$ be the symmetric product, with $V = X_2$; then the Jacobian part is $|V|^{-\frac{p+1}{2}}$; see [1]. The density of $U$, again denoted by $g(U)$, is then of the form
$$g(U) = \int_{V}|V|^{-\frac{p+1}{2}}\,f_1\big(V^{-\frac{1}{2}}UV^{-\frac{1}{2}}\big)\,f_2(V)\,dV.$$
If X 1 > O and X 2 > O are matrix-variate generalized gamma distributed, then we will need to evaluate an integral of the following form to obtain g ( U ) :
$$\int_{V}|V|^{\eta}\,e^{-a[\mathrm{tr}(V)]^{\delta}-b[\mathrm{tr}(V^{-\frac{1}{2}}UV^{-\frac{1}{2}})]^{\rho}}\,dV \qquad (40)$$
for some $\eta$, $a > 0$, $b > 0$, $\delta > 0$, $\rho > 0$. Note that $[\mathrm{tr}(AB)]^{\rho} \ne \mathrm{tr}(A^{\rho}B^{\rho})$ even if $\rho > 1$ is a positive integer. When $A$ and $B$ commute, one may be able to evaluate the integral in (40) in some situations, but commutativity is not a valid assumption in the above problem. The evaluation of the integral in (40) is a challenging problem in the matrix-variate case, both in the real and complex domains. If we consider a symmetric ratio $U_1 = X_2^{\frac{1}{2}}X_1^{-1}X_2^{\frac{1}{2}}$, corresponding to the scalar-case ratio $x_2/x_1$, then we also end up with an integral similar to (40). Such symmetric products and symmetric ratios have practical relevance, and they are also connected to fractional integrals, as illustrated in [7].

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflicts of interest. This work did not receive funding from any funding agency or external source. This manuscript is not being considered for publication by any other journal. The author has no competing interests to declare that are relevant to the content of this article.

References

  1. Mathai, A.M. Jacobians of Matrix Transformations and Functions of Matrix Argument; World Scientific Publishing: New York, NY, USA, 1997. [Google Scholar]
  2. Deng, X. Texture Analysis and Physical Interpretation of Polarimetric SAR Data. Ph.D. Thesis, Universitat Politecnica de Catalunya, Barcelona, Spain, 2016. [Google Scholar]
  3. Bombrun, L.; Beaulieu, J.M. Fisher distribution for texture modeling of Polarimetric SAR data. IEEE Geosci. Remote Sens. Lett. 2008, 5, 512–516. [Google Scholar] [CrossRef]
  4. Yueh, S.H.; Kong, J.A.; Jao, J.K.; Shin, R.T.; Novak, L.M. K-distribution and Polarimetric terrain radar clutter. J. Electromagn. Waves Appl. 1989, 3, 747–768. [Google Scholar] [CrossRef]
  5. Frery, A.C.; Muller, H.J.; Yanasse, C.C.F.; Sant’Anna, S.J.S. A model for extremely heterogeneous clutter. IEEE Trans. Geosci. Remote Sens. 1997, 35, 648–659. [Google Scholar] [CrossRef]
  6. Jakeman, E.; Pusey, P. Significance of K-distribution in scattering experiments. Phys. Rev. Lett. 1978, 40, 546–550. [Google Scholar] [CrossRef]
  7. Mathai, A.M.; Provost, S.B.; Haubold, H.J. Multivariate Statistical Analysis in the Real and Complex Domains; Springer Nature: Cham, Switzerland, 2022. [Google Scholar]
  8. Díaz-Garcia, J.A.; Gutiérrez-Jáimez, R. Compound and scale mixture of matricvariate and matrix variate Kotz-type distributions. J. Korean Stat. Soc. 2010, 39, 75–82. [Google Scholar] [CrossRef]
  9. Jeffrey, A.; Zwillinger, D. Table of Integrals, Series, and Products, 7th ed.; Academic Press: Boston, MA, USA, 2007. [Google Scholar]
  10. Benavoli, A.; Facchini, A.; Zaffalon, M. Quantum Mechanics: The Bayesian theory generalized to the space of Hermitian matrices. arXiv 2016, arXiv:1605.08177v4. [Google Scholar] [CrossRef]