Abstract
Several extensions of the basic real scalar logistic density to the multivariate and matrix-variate cases, in the real and complex domains, are given, where the extended forms lead to extended zeta functions. Several multivariate and matrix-variate Bayesian procedures, in the real and complex domains, are also given. It is pointed out that Gaussian- and Wishart-based matrix-variate distributions in the complex domain have a range of applications in multi-look data from radar and sonar. It is hoped that the distributions derived in this paper will be highly useful in such applications in physics, engineering, statistics and communication problems because, in the real scalar case, a logistic model is seen to be more appropriate than a Gaussian model in many industrial applications. Hence, logistic-based multivariate and matrix-variate distributions, especially in the complex domain, are expected to perform better where Gaussian- and Wishart-based distributions are currently used.
Keywords:
multivariate distributions; matrix-variate case; real and complex domains; model building; Bayesian analysis; extended zeta functions
MSC:
62E10; 62E15; 62F15; 26B15; 30E20; 15B52; 15B57
1. Introduction
We will introduce an extended logistic model through the exponentiation of a real scalar type-2 beta density. For parameters in the admissible range and for the real scalar variable t, consider the density
and zero elsewhere. Usually, the parameters in a statistical density are real, and hence we take real parameters throughout the paper. The above function is integrable even if the parameters are in the complex domain; in that case, the conditions become positivity of the real parts of the parameters, where ℜ(·) denotes the real part of (·). Consider an exponentiation; namely, let t be an exponential function of a new real scalar variable x,
and then the density of x, denoted by , is the following:
Note that
and when both parameters are equal to 1, the density reduces to
which is the standard logistic density. There is a vast amount of literature on this standard logistic model. The importance of this model arises from the fact that it has a thicker tail compared to that of the standard Gaussian density; thereby, large deviations have higher probabilities compared to the standard Gaussian.
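As a quick numerical illustration of the thicker tail (an aside, not part of the original derivation), one can compare right-tail probabilities P(X > t) under the standard logistic and standard Gaussian densities; the snippet below uses SciPy's standard parameterizations of the two distributions.

```python
# Compare right-tail probabilities of the standard logistic and the standard
# Gaussian; the logistic tail 1/(1 + e^t) dominates the Gaussian tail.
from scipy.stats import logistic, norm

for t in (2.0, 3.0, 4.0):
    p_log = logistic.sf(t)   # standard logistic tail, equals 1/(1 + e^t)
    p_gau = norm.sf(t)       # standard Gaussian tail
    print(f"t={t}: logistic {p_log:.5f}, Gaussian {p_gau:.6f}, ratio {p_log/p_gau:.1f}")
```

At t = 3, for example, the logistic tail probability is roughly 35 times the Gaussian one, which is the "large deviations have higher probabilities" point made above.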
The following notation will be used in this paper. Real scalar (1 × 1 matrix) variables will be denoted by lowercase letters such as x, y, whether the variables are random or mathematical variables. Capital letters such as X, Y will be used to denote vector/matrix variables, whether mathematical or random and whether the matrices are square or rectangular. Scalar constants will be denoted by a, b, etc., and vector/matrix constants by A, B, etc. A tilde will be placed over a variable to denote a variable in the complex domain, whether mathematical or random, such as x̃, X̃; no tilde will be used on constants. The above general rule will not be followed when symbols or Greek letters are used; these will be explained as needed. For a matrix X = (x_{jk}), where the x_{jk}'s are real scalar variables, the wedge product of differentials will be denoted as dX = ∧_{j,k} dx_{jk}. For two real scalar variables x and y, the wedge product of differentials is defined as dx ∧ dy = −dy ∧ dx, so that dx ∧ dx = 0. When the matrix variable is in the complex domain, one can write X̃ = X₁ + iX₂, i = √(−1), where X₁ and X₂ are real matrices; then, dX̃ will be defined as dX̃ = dX₁ ∧ dX₂. The transpose of a matrix Y will be denoted by Y′. The complex conjugate transpose will be denoted by Ỹ*. When X̃ is such that X̃ = X̃*, then X̃ is Hermitian and, in this case, X₁′ = X₁ and X₂′ = −X₂; that is, X₁ is real symmetric and X₂ is real skew symmetric. The determinant of a square matrix A will be denoted as det(A) or as |A|, and the absolute value of the determinant of A, when A is in the complex domain, will be denoted as |det(A)|; if det(A) = a + ib, then |det(A)| = +√(a² + b²), where a and b are real scalar quantities. When a real symmetric matrix B is positive definite, it will be denoted as B > O. If the matrix B̃ = B̃* > O, then B̃ is Hermitian positive definite. Other notations will be explained wherever they occur.
Our aim here is to look into various extensions of the generalized logistic density in (2). First, let us consider a multivariate extension. Consider the real vector X whose elements are functionally independent (distinct) real scalar variables. If the vector is in the complex domain, then the real and imaginary parts of its elements are real scalar variables.
Extensions of the logistic model will be considered by taking the argument X as a vector or a matrix in the real and complex domains. In order to derive such models, we will require Jacobians of matrix transformations in three situations, and these will be listed here as lemmas without proofs. For the proofs and other details, see Ref. [1].
Lemma 1.
Let the matrix X of rank m be in the real domain with distinct elements. Let S = XX′, which is real positive definite. Then, going through a transformation involving a lower triangular matrix with positive diagonal elements and a semi-orthonormal matrix, and after integrating out the differential element corresponding to the semi-orthonormal matrix, we have the following connection between dX and dS; see the details in Ref. [1]:
where, for example, Γ_m(α) is the real matrix-variate gamma function given by
where tr(·) denotes the trace of a square matrix. We call Γ_m(α) the real matrix-variate gamma because it is associated with the real matrix-variate gamma integral; it is also known by different names in the literature. Similarly, its complex counterpart is called the complex matrix-variate gamma because it is associated with the complex matrix-variate gamma integral. When the matrix X̃ of rank m, with distinct elements, is in the complex domain, let S̃ = X̃X̃*, which is Hermitian positive definite. Then, going through a transformation involving a lower triangular matrix with real and positive diagonal elements and a semi-unitary matrix, and then integrating out the differential element corresponding to the semi-unitary matrix, we can establish the following connection between dX̃ and dS̃:
where, for example, Γ̃_m(α) is the complex matrix-variate gamma function given by
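As an illustrative aside (not from the paper), the real matrix-variate gamma function admits the standard product representation Γ_p(α) = π^{p(p−1)/4} ∏_{j=1}^{p} Γ(α − (j−1)/2), which can be checked numerically against SciPy's implementation of the same function in log form.

```python
# Check the product formula for the real matrix-variate (multivariate) gamma
# function against scipy.special.multigammaln; parameter values are arbitrary
# choices satisfying alpha > (p - 1)/2.
import math
from scipy.special import gammaln, multigammaln

def log_matrix_gamma(alpha, p):
    # log Gamma_p(alpha) via the product of ordinary gamma functions
    out = (p * (p - 1) / 4.0) * math.log(math.pi)
    for j in range(1, p + 1):
        out += gammaln(alpha - (j - 1) / 2.0)
    return out

for p, alpha in [(2, 3.0), (3, 4.5), (4, 6.0)]:
    assert abs(log_matrix_gamma(alpha, p) - multigammaln(alpha, p)) < 1e-10
print("product formula agrees with scipy.special.multigammaln")
```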
This paper is organized as follows. An extended zeta function is introduced in Sections 2 and 3, and then several extended logistic models are discussed, all of which result in the extended zeta function. Sections 4–8 deal with some Bayesian-type models, which also result in extended zeta functions. Section 9 provides some concluding remarks.
2. Several Matrix-Variate Extensions of the Logistic Model
Since the logistic model in (2) is versatile and covers both symmetric and non-symmetric situations, we will consider extensions of the model in (2). When studying the properties of a given model or density, the procedure follows the same pattern as the evaluation of the normalizing constant; hence, the most important step in the construction of a model is the evaluation of its normalizing constant. Therefore, our aim will be the evaluation of the normalizing constant in each extended logistic model. Moreover, it will be seen that, when integrals are evaluated on the proposed extended logistic models, the results lead to some extended forms of the generalized zeta function introduced by the author. The basic zeta function and the generalized zeta function, available in the literature, are the following convergent series:
If the second parameter is a positive integer, then the generalized zeta function can be written in terms of the ordinary zeta function.
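As a numerical aside (not from the paper), the convergence of the generalized (Hurwitz) zeta series ζ(s, a) = Σ_{k=0}^∞ (k + a)^{−s}, valid for ℜ(s) > 1, can be checked against SciPy; the ordinary zeta function is the case a = 1.

```python
# Partial sums of the generalized (Hurwitz) zeta series compared with
# scipy.special.zeta(s, a); s and a values here are illustrative.
from scipy.special import zeta

def zeta_partial(s, a, n_terms=200_000):
    return sum((k + a) ** (-s) for k in range(n_terms))

assert abs(zeta_partial(2.0, 1.0) - zeta(2.0, 1.0)) < 1e-4   # zeta(2) = pi^2/6
assert abs(zeta_partial(3.0, 2.5) - zeta(3.0, 2.5)) < 1e-9
print("partial sums agree with scipy.special.zeta")
```

Note that the truncation error of the partial sum behaves like the tail integral, of order N^{1−s} for N terms, which is why the s = 2 check needs a looser tolerance.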
3. Extended Zeta Function
Let and . The following convergent series is defined as the extended zeta function:
where (a)_k denotes the Pochhammer symbol, (a)_k = a(a + 1)⋯(a + k − 1), with (a)_0 = 1.
First, we will consider a vector-variate random variable in the real and complex domains. Then, we will consider a general case of matrix-variate random variable in the real and complex domains. Associated with these random variables, we will construct a number of densities that can be taken as extended logistic densities to the vector/matrix-variate cases. The model and the evaluation of the corresponding normalizing constant will be stated as a theorem.
Theorem 1.
Let X be a real vector and consider the following density, where c is the normalizing constant. Then, for the model
the normalizing constant is given by the following:
Proof.
Since the relevant exponential factor lies strictly between 0 and 1, a binomial expansion is valid for the denominator in (5). That is,
Let u = X′X; note that u is a real scalar, and then Lemma 1 gives
and, from (5), (7), (8),
for the parameters in the stated range. The integral in the first line above is evaluated by using a real scalar variable gamma integral. Hence, when the above function is a density of X, then
This establishes the theorem. □
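The binomial-series step used in this proof can be sketched numerically (an illustrative check, not part of the proof): for |z| < 1, (1 + z)^{−γ} = Σ_{k≥0} ((γ)_k / k!) (−z)^k, where (γ)_k is the Pochhammer symbol.

```python
# Verify the binomial series (1 + z)^(-gamma) = sum_k ((gamma)_k / k!) (-z)^k
# for |z| < 1.  Terms are built recursively to avoid overflow; the z and gamma
# values below are arbitrary illustrations.
def binomial_series(z, gamma, n_terms=400):
    total, term = 0.0, 1.0           # term holds ((gamma)_k / k!) (-z)^k
    for k in range(n_terms):
        total += term
        term *= (gamma + k) / (k + 1) * (-z)
    return total

for z, g in [(0.3, 2.5), (-0.5, 1.0), (0.7, 4.0)]:
    assert abs(binomial_series(z, g) - (1.0 + z) ** (-g)) < 1e-10
print("binomial series matches (1 + z)^(-gamma)")
```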
In order to avoid using too many numbers for the functions and equations, we will use the following simplified notation to denote results in the complex domain by appending a letter “c” to the function number. For example, the density in the complex domain, corresponding to , will be denoted by . With these notations, we will list the counterpart of Theorem 1 for the complex domain. Let be a vector in the complex domain with distinct complex scalar elements. Let the normalizing constant in the complex domain, corresponding to in the real domain, be denoted by . Then, we have the theorem in the complex domain, corresponding to Theorem 1, denoted by Theorem 2, as the following.
Theorem 2.
Let be a vector random variable in the complex domain with distinct complex scalar elements. Let denote the conjugate transpose of . Consider the model
Then, the normalizing constant is given by the following:
for .
Proof.
Note that each complex element splits into a real part and an imaginary part, which are real scalar variables; hence, there are 2m real scalar variables in the complex case compared to m real scalar variables in the real case. Hence, from Lemma 1 in the complex case, we have the corresponding connection, the quadratic form being again real. Then, following through the derivation in the real case, we obtain the normalizing constant as given in (11). □
Note 2.1. Ref. [2] deals with the analysis of Polarimetric Synthetic Aperture Radar (PolSAR) multi-look return signal data. The return signal cross-section has two components, called speckle and texture. Speckle is the dust-like contaminant on the cross-section and texture is the cross-section variable itself. The analysis is frequently done under the assumption that the speckle part has a complex Gaussian distribution. It is found that non-Gaussian models give better representations in certain uneven regions, such as urban areas, forests, and sea surfaces; see, for example, Refs. [3,4,5,6]. Various types of multivariate distributions in the complex domain, associated with the complex Gaussian and complex Wishart, are used in the study of the speckle (noise-like appearance in PolSAR images) part, and real positive scalar and positive definite matrix-variate distributions are used for the texture (spatial variation in the radar cross-section) part. In industrial applications, a logistic model is preferred to a standard Gaussian model because, even though the graphs of the two look alike, the tail is thicker in the logistic case than in the standard Gaussian case. Hence, multivariate and matrix-variate models that extend the logistic model in the real scalar case may qualify as more appropriate models in many areas, in particular areas where logistic models are preferred. Analysts of single-look and multi-look sonar and radar data are likely to find extended logistic models better for their data than Gaussian- and Wishart-based models. These are the motivating factors in introducing extended logistic models, especially in the form of multivariate and matrix-variate distributions, in the complex domain. Moreover, the models in Theorems 1 and 2 can be generalized by replacing the quadratic form in the real case by (X − E(X))′A(X − E(X)), where E(X) denotes the expected value of X and A is a real positive definite constant matrix.
In this case, the only change is that the normalizing constant will be multiplied by |A|^{1/2}. In the complex case, replace the Hermitian form by (X̃ − E(X̃))*A(X̃ − E(X̃)), where A is a constant Hermitian positive definite matrix. In this case, the only change is that the normalizing constant will be multiplied by |det(A)|, the absolute value of the determinant of A.
Note that the h-th moments of the quadratic forms in the real and complex cases, for an arbitrary h, are available from the respective normalizing constants in the extended logistic models by shifting the relevant parameter by h and then taking the ratio of the respective normalizing constants.
Now, we consider a slightly more general model. Again, let X be a real vector random variable. In Theorem 1, give the quadratic form X′X an arbitrary positive exponent. Then, go through the same steps as in the proof of Theorem 1, again setting u = X′X and expanding the denominator. We will then have the following final model, which is stated as a theorem.
Theorem 3.
Let X be a real vector with distinct real scalar variables as elements. Consider replacing the quadratic form X′X in the exponent in Theorem 1 by an arbitrary positive power of X′X. Then, we have the following model:
where
for X in the real domain.
A more general form is obtained by replacing X′X by X′AX for a constant real positive definite matrix A; then, the only change is that the normalizing constant will be multiplied by |A|^{1/2}. Now, consider the corresponding replacement in the complex domain. Then, proceeding as in the real case, we have the corresponding resulting model in the complex domain, given as the following.
Theorem 4.
Let X̃ be a complex vector random variable with distinct scalar complex variables as elements. Consider the corresponding replacement of the Hermitian form in Theorem 2 by an arbitrary positive power of it. Then,
where
for X̃ in the complex domain.
A more general situation is obtained by replacing the Hermitian form by a weighted form with a constant Hermitian positive definite matrix A. As explained before, the only change is that the normalizing constant will be multiplied by |det(A)|, the absolute value of the determinant of A.
Now, we consider the more general case of a rectangular matrix-variate random variable in the real domain. Let X be a matrix of rank m in the real domain with distinct real scalar variables as elements. Let S = XX′; then, S is positive definite, and the connection between dX and dS is available from the real version of Lemma 1. In the corresponding complex case, let X̃ be of rank m and let S̃ = X̃X̃*; then, S̃ is Hermitian positive definite. Then, proceeding as in the derivations of Theorems 1 and 2, we have the following theorems.
Theorem 5.
Let X be a matrix of rank m with distinct real scalar variables as elements. With tr(·) denoting the trace, consider the following model:
Then, the normalizing constant is given by the following:
for .
The connection between dX and dS is given in Lemma 1. Then, following steps parallel to the derivation in Theorem 1, the result in (14) follows. This model can be extended by replacing XX′ with A^{1/2}XX′A^{1/2}, where A is a real constant positive definite matrix and A^{1/2} denotes the positive definite square root of A. The only change is that the normalizing constant will be multiplied by the appropriate power of |A|, which is available from Lemma 2 given below. For the corresponding complex case, we have already given the connection between dX̃ and dS̃. Then, the remaining procedure is parallel to the derivation in Theorem 2 and hence the result is given here without proof [1].
Lemma 2.
Consider an m × n matrix X = (x_{jk}), where the x_{jk}'s are functionally independent (distinct) real scalar variables, and let A, m × m, and B, n × n, be nonsingular constant matrices. Then, for Y = AXB, dY = |A|^{n} |B|^{m} dX.
When the m × n matrix X̃ is in the complex domain and when A and B are m × m and n × n nonsingular constant matrices in the real or complex domain, then, for Ỹ = AX̃B, dỸ = |det(A)|^{2n} |det(B)|^{2m} dX̃,
where |det(·)| denotes the absolute value of the determinant.
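The real-case Jacobian in Lemma 2 can be verified numerically (an illustrative check with arbitrary matrix sizes): for Y = AXB with X of order p × q, vec(Y) = (B′ ⊗ A) vec(X), so the Jacobian determinant is det(A)^q det(B)^p.

```python
# Numerical check of the Lemma 2 Jacobian via the Kronecker-product
# representation of the linear map X -> A X B on vec(X).
import numpy as np

rng = np.random.default_rng(0)
p, q = 3, 2
A = rng.standard_normal((p, p))
B = rng.standard_normal((q, q))

J = np.kron(B.T, A)                          # the linear map acting on vec(X)
lhs = np.linalg.det(J)
rhs = np.linalg.det(A) ** q * np.linalg.det(B) ** p
assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(rhs))
print("det(Jacobian) =", lhs)
```

Taking absolute values of the determinants gives the wedge-product factor stated in the lemma.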
Theorem 6.
Let be a matrix of rank m in the complex domain with distinct complex scalar variables as elements. Consider the following model:
Then, the normalizing constant is given by the following:
for .
An extension of the model is available by introducing a constant Hermitian positive definite matrix A, with A^{1/2} denoting the Hermitian positive definite square root of A. The only change is that the normalizing constant will be multiplied by the appropriate power of |det(A)|, which is available from Lemma 2.
Again, we start with a real matrix X of rank m with distinct real scalar variables as elements. Then, tr(XX′) is the sum of squares of all the real scalar variables in X because, for any real matrix X = (x_{jk}), tr(XX′) = Σ_{j,k} x_{jk}². In the complex domain, the corresponding result is tr(X̃X̃*) = Σ_{j,k} |x̃_{jk}|², where, writing x̃_{jk} = x_{jk1} + i x_{jk2} with x_{jk1} and x_{jk2} real, |x̃_{jk}|² = x_{jk1}² + x_{jk2}². Thus, in the complex case, there will be twice the number of real variables compared to the corresponding real case. In the real case, the sum of squares can be taken as coming from a form Z′Z, where Z is a real vector. Then, one can apply Lemma 1 to this real vector, and, with u = tr(XX′), the connection between dZ and du in the real case, and the corresponding connection in the complex domain, is the following:
Now, proceeding as in the derivation of Theorem 1, we have the following results, which are stated as theorems.
Theorem 7.
Let X be a matrix of rank m in the real domain with distinct real scalar variables as elements. For , consider the model
Then, the normalizing constant is the following:
for .
A generalization can be made by replacing by with , as explained in connection with Theorem 5. Hence, further details are omitted. In the corresponding complex case, we have the following result.
Theorem 8.
Let be a matrix in the complex domain of rank m and with distinct complex scalar variables as elements. Let . The connection between and is explained above. Consider the model
Then, the normalizing constant is given by the following:
for .
A generalization is available by replacing by , as explained before.
Next, we consider a case where X is real positive definite, X > O, and the corresponding case in the complex domain where the matrix X̃ is Hermitian positive definite, X̃ = X̃* > O. In the real case, we consider the following model, which is stated as a theorem.
Theorem 9.
Let X be real positive definite, X > O. Then, for the model
the normalizing constant is given by
for .
Proof.
Since the exponential factor lies strictly between 0 and 1, we expand
Now, the integral to be evaluated can be obtained by using a real matrix-variate gamma integral of Lemma 1. That is,
Then, the summation over k gives the following:
for the parameters in the stated convergence range. This makes the normalizing constant as given above, which establishes the result. □
In the corresponding complex domain, the steps are parallel and we integrate out by using a gamma integral in the complex domain as given in Lemma 1. Hence, we state the theorem in the complex domain without proof.
Theorem 10.
Let X̃ be a matrix in the complex domain and let X̃ be Hermitian positive definite, i.e., X̃ = X̃* > O. Then, for the following model
the normalizing constant is the following:
for .
So far, we have considered generalized matrix-variate logistic models involving a rectangular matrix in which either the exponential trace has an arbitrary power with a trace raised to a power entering as a product factor, or the product factor is a determinant while the exponential trace has power one. The only remaining situation is that in which a rectangular matrix is involved, the exponential trace has an arbitrary power, and, at the same time, both a determinant factor and a trace factor are present in the model. This is the most general form of a generalized matrix-variate logistic model. We will consider this situation next, for which we need a recent result from Ref. [7], which is stated here as a lemma.
Lemma 3.
Let X be a matrix in the real domain of rank m with distinct real scalar variables as elements. Then,
for . Let be a matrix in the complex domain of rank m with distinct complex scalar variables as elements. Then,
for .
The proofs of the above results in Lemma 3 are very lengthy, and hence only the results are given here, without derivation. The model in Lemma 3 in the real case, excluding the determinant factor, is often known in the literature as Kotz’s model. Some authors (Ref. [8]) also call the model in the real case including the determinant factor Kotz’s model. Unfortunately, the normalizing constant traditionally used in such a model (Ref. [8]) is found to be incorrect. The correct normalizing constant, the detailed steps of the derivation, and the correct integration procedure are given in Ref. [7]. The most general rectangular matrix-variate logistic density will be stated next as a theorem, where we take all parameters to be real.
Theorem 11.
Let X be a matrix in the real domain of rank m and with distinct real scalar variables as elements. Then, for the model
the normalizing constant is given by
for
The above result follows directly from Lemma 3 in the real case. We state the corresponding result in the complex domain. Again, the result follows from the complex version of Lemma 3 and hence the theorem is stated without proof.
Theorem 12.
Let be a matrix in the complex domain of rank m and with distinct complex scalar variables as elements. Then, for the following model
the normalizing constant is given by the following:
for
Special cases of Theorems 11 and 12 are the cases ; and general ; and general . It can be seen that the case is the most difficult one in which to evaluate the normalizing constant. For the other situations, there are several other techniques available. The generalization of the above model in the real case is that in which is replaced by , where and is a and is a constant positive definite matrix. is the positive definite square root of . In the complex domain, one can replace by , where , and are and constant Hermitian positive definite matrices. is the Hermitian positive definite square root of the Hermitian positive definite matrix . In these general cases, particular cases of interest are the following in the real domain: . One can consider the corresponding special cases in the complex domain. In what we have taken, we have the same for the numerator and denominator exponential traces. If we take different parameters for the numerator and for the denominator, , then the problem becomes very difficult. In the scalar case, the integral to be evaluated is of the form
In one parameter configuration, the problem is a generalization of a Bessel integral (Ref. [9]) or Krätzel integral in applied analysis, or a reaction-rate probability integral in nuclear reaction rate theory; this can be handled through the Mellin convolution of a product, followed by an inverse Mellin transform, which will usually lead to an H-function for general parameters. In the other configuration, the situation can be handled through the Mellin convolution of a ratio, followed by an inverse Mellin transform, which will again lead to an H-function for general parameters. In the matrix-variate case, the first configuration can be interpreted as the M-convolution of a product, that is, as the density of a symmetric product of positive definite or Hermitian positive definite matrices. A unique inversion of the M-convolution of a product is not available, and hence the explicit form of the density of a symmetric product of matrices is not available in the general situation. Similarly, the second configuration can be interpreted as the M-convolution of a ratio, that is, as the density of a symmetric ratio of positive definite matrix-variate random variables; again, the explicit form of the density is not available for general parameters. These are all open problems. For M-convolutions and symmetric products and symmetric ratios of matrices, see Ref. [7].
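In the scalar case, such Bessel/Krätzel-type integrals can at least be evaluated numerically; as an illustrative aside (parameters chosen arbitrarily), the snippet below checks one standard closed form, ∫₀^∞ x^{ν−1} exp(−a x − b/x) dx = 2 (b/a)^{ν/2} K_ν(2√(ab)), against direct quadrature.

```python
# Numerical evaluation of a scalar Kraetzel-type integral versus its known
# closed form in terms of the modified Bessel function K_nu.
import math
from scipy.integrate import quad
from scipy.special import kv   # modified Bessel function of the second kind

a, b, nu = 1.5, 2.0, 0.75
numeric, _ = quad(lambda x: x ** (nu - 1) * math.exp(-a * x - b / x), 0, math.inf)
closed = 2.0 * (b / a) ** (nu / 2) * kv(nu, 2.0 * math.sqrt(a * b))
assert abs(numeric - closed) < 1e-7
print("Kraetzel-type integral:", numeric)
```

No such closed form is available in the matrix-variate case, which is exactly the open problem described above.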
4. Some Bayesian Models
In a Bayesian procedure, we start with a conditional distribution: the distribution of a variable X when another variable or parameter Y is given. If the density of X is available, it will be of the form f(x|y), the density of X at a given Y, that is, the conditional density of X given Y. The variable Y may have a prior distribution of its own, with density g(y). Then, the joint density of X and Y is f(x|y)g(y), and the unconditional density of X, denoted by f(x), is given by f(x) = ∫ f(x|y)g(y)dy. Hence, the conditional density of Y at a given X is g(y|x) = f(x|y)g(y)/f(x), wherever the ratio is defined. The aim of Bayesian analysis is to study the properties of Y through g(y|x). In this process, the most important step is to compute the unconditional density f(x). In the above formulation, X may be a scalar/vector/matrix in the real or complex domain, and so may Y. Our aim here is to consider a few situations involving vector/matrix variables in the real and complex domains and the corresponding Bayesian procedures. Since the evaluation of the unconditional density is the most important step in this Bayesian process, we will introduce each model by computing the unconditional density and then end the discussion of that model. Ref. [10] states, with illustrations, that Quantum Mechanics is nothing but the Bayesian analysis of Hermitian matrices in a Hilbert space. Hence, all the results that we obtain here have relevance in Quantum Physics and related problems, in addition to their importance in statistical distribution theory and Bayesian analysis.
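The marginalization step f(x) = ∫ f(x|y)g(y)dy can be sketched in the simplest scalar setting (an illustration with arbitrary hyperparameters, not one of the paper's models): a Gaussian conditional density with a gamma prior on the precision yields a Student-t unconditional density.

```python
# Scalar sketch of Bayesian marginalization: X | tau ~ N(0, 1/tau) with
# tau ~ Gamma(shape=alpha, rate=beta) gives a Student-t marginal with
# 2*alpha degrees of freedom and scale sqrt(beta/alpha).
import math
from scipy.integrate import quad
from scipy.stats import gamma, norm, t

alpha, beta = 3.0, 2.0      # illustrative shape and rate for the prior

def unconditional(x):
    integrand = lambda tau: norm.pdf(x, scale=1.0 / math.sqrt(tau)) \
                            * gamma.pdf(tau, a=alpha, scale=1.0 / beta)
    return quad(integrand, 0, math.inf)[0]

for x in (-1.0, 0.0, 2.0):
    closed = t.pdf(x, df=2 * alpha, scale=math.sqrt(beta / alpha))
    assert abs(unconditional(x) - closed) < 1e-7
print("numerical marginal matches the Student-t closed form")
```

The matrix-variate models below follow the same pattern, with the scalar integral replaced by integrals over positive definite or Hermitian positive definite matrices.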
5. Bayesian Model 1 in the Real Domain
We will start with an illustrative example. Let be a matrix in the real domain of rank m with distinct real scalar variables as elements. Let X have a rectangular matrix-variate gamma density, denoted by , where
Here, A and B are real constant positive definite matrices. Observe that, for a particular choice of the exponent parameter, one has the real matrix-variate Gaussian density in (22); for writing it in the usual format, replace A by (1/2)A. Moreover, observe that one can write the matrices in a symmetric format, A^{1/2}XBX′A^{1/2}, where A^{1/2} is the positive definite square root of A. If a location parameter is to be included, then replace X by X minus a constant location matrix. Here, A and B act as scaling matrices. The normalizing constant c can be evaluated by the following procedure: first remove A and B through the transformation of Lemma 2, then pass to V by Lemma 1, and finally integrate out V by using the real matrix-variate gamma integral given in Lemma 1 to obtain the final result for c as the following:
If (22) is the density of X at a given A, then we may write it as f(X|A), in the standard notation for a conditional density. One can take (22) as a scaling model or scale mixture also. For example, consider a matrix Y in the real domain of rank m having the density in (22) with B equal to the identity matrix; then, a scaled Y, the scaling matrix being real positive definite, has the density in (22) with a slight change in the normalizing constant. Note that (22) also contains the usual real multivariate Gaussian density: take X as a q × 1 vector; then, A will be a positive scalar constant and (22) is a q-variate Gaussian density with the corresponding covariance matrix, and from here one can obtain the scalar variable Gaussian also. Hence, our discussion covers scalar/vector/matrix cases in the real domain. Let us take (22) as the conditional density f(X|A). Let A have a prior density, denoted by g(A), where
where the normalizing constant can be evaluated by using the procedure in Section 2, and one can write it in terms of an extended zeta function. That is,
The unconditional density of X, denoted by , is the following, where is c with removed:
for Let be the eigenvalues of . Then,
Then, we can write in terms of an extended zeta function as follows for :
Equation (26) is our Bayesian model 1 in the real domain.
6. Bayesian Model 1 in the Complex Domain
Let be a matrix in the complex domain of rank m and with distinct complex scalar variables as elements. Let the conditional density of , given , be of the form
for . Then, proceeding as in the real case, we note that the normalizing constant is given by the following:
for . Let the marginal density of A, denoted by , be the following:
for . Then, proceeding parallel to the derivation in the real case, one has
for , Then, the unconditional density of , denoted by , is the following, using steps parallel to those in the real case, where is with removed and :
for , where are the eigenvalues of , which will be real and positive since is Hermitian positive definite. This is Bayesian model 1 in the complex domain. Note that in and , one can replace by , where, in the real domain, is a real constant positive definite matrix, and, in the complex domain, is a Hermitian positive definite constant matrix. The results can be calculated by going through steps parallel to the ones already used. In this situation, we need one more result on the Jacobians of a symmetric matrix going to a symmetric matrix in the real case and a Hermitian matrix going to a Hermitian matrix in the complex domain. These results are available from Ref. [1].
7. Bayesian Model 2 in the Real Domain
Since the multivariate Gaussian density is the most popular density, we will give one example involving a multivariate normal. Let X be a real vector whose elements are distinct real scalar variables. Let the density of X, at a given A, be a real multivariate normal density, as in the following:
Let the prior density for A be the same as in Bayesian model 1 in the real domain. Then, the unconditional density of X is the following:
By using the determinants of partitioned matrices,
Then, the unconditional density is the following, observing that there are two factors containing k, with two different exponents:
for , where is given in (25), which is
This is Bayesian model 2 in the real domain.
8. Bayesian Model 2 in the Complex Domain
In the complex domain, let be with the conditional density
and in the prior density for A, the structure remains the same except that the multiplicative factor containing the determinant is . Hence, the normalizing constant in the prior density for A, denoted by , is the following:
With these changes, the unconditional density of , denoted by , is the following:
for . This is Bayesian model 2 in the complex domain. Here, one can also replace by , where is a constant real positive definite matrix in the real domain and is a Hermitian positive definite constant matrix in the complex domain.
For a particular choice of the parameters in the prior density for A, one has the structure of the matrix-variate gamma density as the prior. Hence, this case is also covered in the discussion of models 1 and 2 in the real and complex domains. In the multivariate Gaussian case for the conditional density, if the covariance matrix itself has a prior density, then our procedure will lead to an integral of the following type in the real domain:
This integral is the matrix-variate version of the basic Bessel integral or basic Krätzel integral and is difficult to evaluate; if additional powers are involved, the problem becomes very difficult. What we have considered are models for the prior distributions that can be considered as matrix-variate extensions of scalar variable logistic models. One can consider other matrix-variate distributions compatible with the conditional density and derive the resulting models or resulting unconditional densities, both in the real and complex domains. With the above illustrative models, we will end the discussion.
9. Concluding Remarks
In this paper, we have introduced a number of matrix-variate models that can be considered as matrix-variate extensions of the basic scalar variable logistic models. Matrix-variate analogues, both in the real and complex domains, are considered. A few Bayesian situations are also discussed, where the prior distributions for the conditioned variable can be looked upon as extensions of the logistic model. One can consider other matrix-variate distributions for the conditional distributions and compatible prior distributions for the conditioned matrix variable. One challenging problem is to evaluate matrix-variate versions of Bessel integrals, Krätzel integrals, reaction rate probability integrals, integrals involving an inverse Gaussian density in the matrix-variate case, etc., where an example is given in (36). These are open problems. The most general models are given in Theorems 11 and 12. By using these models, one can also look into Bayesian structures. For example, consider the evaluation of the normalizing constant c in the following model, where X is a matrix of rank m with distinct real scalar variables as elements:
Let . Then, from Lemma 1, we have
The integral can be evaluated by identifying it with the density of a product u = t₁t₂, where t₁ and t₂ are statistically independently distributed real scalar random variables with density functions f₁(t₁) and f₂(t₂), respectively. Let g(u) be the density of u. Then,
One can take and with , where and are normalizing constants. Now, one can identify with . Then, from the property , which holds due to statistical independence, is available from the corresponding moments in and . Now, by inverting using an inverse Mellin transform, one obtains the explicit form of ; details may be seen in [7].
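The moment factorization that underlies this Mellin-transform argument is easy to check numerically. The sketch below is a minimal Monte Carlo illustration with arbitrary illustrative shape and scale parameters (not the parameters of the models above): for independent positive random variables, the (s−1)th moment of the product factors into the product of the individual (s−1)th moments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Independent positive random variables standing in for t1 and t2;
# a power of a gamma variable is a generalized gamma variable.
# Shapes and scales here are illustrative choices only.
t1 = rng.gamma(shape=2.5, scale=1.0, size=n) ** 0.5
t2 = rng.gamma(shape=3.0, scale=2.0, size=n) ** 0.5
u = t1 * t2

# For independent t1, t2: E[u^(s-1)] = E[t1^(s-1)] * E[t2^(s-1)]
s = 2.0
lhs = np.mean(u ** (s - 1))
rhs = np.mean(t1 ** (s - 1)) * np.mean(t2 ** (s - 1))
print(abs(lhs - rhs) / lhs)  # small relative error, shrinking as n grows
```

Matching the sample moments of u to the product of the individual moments for all s, and then inverting, is exactly the inverse Mellin transform step described above.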
Instead of a in the exponent, one can have a or the exponential part may be of the form . Equation can be evaluated by identifying it with the density of a ratio so that and the Jacobian part is v. Then, with the same procedure as above, one has ; from here, by inversion, one has the corresponding explicit form for the density of .
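The same factorization applies in the ratio case: for a ratio of independent positive random variables, the (s−1)th moment of the ratio equals the (s−1)th moment of the numerator times the (1−s)th moment of the denominator. A minimal Monte Carlo check, again with illustrative gamma parameters (chosen so that the required negative moment of the denominator exists):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Illustrative parameters; denominator shape > 1 so E[t2^(-1)] is finite.
t1 = rng.gamma(shape=2.5, scale=1.0, size=n)
t2 = rng.gamma(shape=3.0, scale=2.0, size=n)
u = t1 / t2

# For independent t1, t2: E[u^(s-1)] = E[t1^(s-1)] * E[t2^(1-s)]
s = 2.0
lhs = np.mean(u ** (s - 1))
rhs = np.mean(t1 ** (s - 1)) * np.mean(t2 ** (1 - s))
print(abs(lhs - rhs) / lhs)  # small relative error
```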
Now, consider the evaluation of an integral of the type coming from the corresponding matrix-variate case. Let and be independently distributed real positive definite matrix-variate random variables. Let be the symmetric product, with . Then, the Jacobian part is ; see [1]. Then, the density of U, again denoted by , will be of the form
If and are matrix-variate generalized gamma distributed, then we will need to evaluate an integral of the following form to obtain :
for some . Note that even if is a positive integer. When A and B commute, one may be able to evaluate the integral in in some situations, but commutativity is not a valid assumption in the above problem. The evaluation of the integral in is a challenging problem in the matrix-variate case, in both the real and complex domains. If we consider a symmetric ratio , corresponding to the scalar-case ratio , we will also end up with an integral similar to that in . Such symmetric products and symmetric ratios have practical relevance and are also connected to fractional integrals, as illustrated in [7].
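The non-commutativity issue mentioned above is easy to see numerically. The following sketch, using randomly generated symmetric positive definite matrices purely for illustration, shows that the symmetric product of two such matrices remains symmetric positive definite, whereas the plain matrix product is generally not even symmetric when the factors do not commute:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4

def random_spd(p):
    # Random symmetric positive definite p x p matrix.
    m = rng.standard_normal((p, p))
    return m @ m.T + p * np.eye(p)

def spd_sqrt(a):
    # Symmetric positive definite square root via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(w)) @ v.T

A, B = random_spd(p), random_spd(p)
A_half = spd_sqrt(A)
U = A_half @ B @ A_half  # symmetric product of A and B

print(np.allclose(U, U.T))                 # True: U is symmetric
print(np.all(np.linalg.eigvalsh(U) > 0))   # True: U is positive definite
print(np.allclose(A @ B, (A @ B).T))       # False: AB is not symmetric here
```

The symmetric product is a congruence transformation of B, which is why positive definiteness is preserved regardless of whether A and B commute.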
Funding
This research received no external funding.
Data Availability Statement
Not applicable.
Conflicts of Interest
The author declares no conflicts of interest. This work did not receive funding from any funding agency or external source. This manuscript is not being considered for publication by any other journal. The author has no competing interests to declare that are relevant to the content of this article.
References
- Mathai, A.M. Jacobians of Matrix Transformations and Functions of Matrix Argument; World Scientific Publishing: New York, NY, USA, 1997.
- Deng, X. Texture Analysis and Physical Interpretation of Polarimetric SAR Data. Ph.D. Thesis, Universitat Politecnica de Catalunya, Barcelona, Spain, 2016.
- Bombrun, L.; Beaulieu, J.M. Fisher distribution for texture modeling of Polarimetric SAR data. IEEE Geosci. Remote Sens. Lett. 2008, 5, 512–516.
- Yueh, S.H.; Kong, J.A.; Jao, J.K.; Shin, R.T.; Novak, L.M. K-distribution and Polarimetric terrain radar clutter. J. Electromagn. Wave Appl. 1989, 3, 747–768.
- Frery, A.C.; Muller, H.J.; Yanasse, C.C.F.; Sant'Anna, S.J.S. A model for extremely heterogeneous clutter. IEEE Trans. Geosci. Remote Sens. 1997, 35, 648–659.
- Jakeman, E.; Pusey, P. Significance of K distributions in scattering experiments. Phys. Rev. Lett. 1978, 40, 546–550.
- Mathai, A.M.; Provost, S.B.; Haubold, H.J. Multivariate Statistical Analysis in the Real and Complex Domains; Springer Nature: Cham, Switzerland, 2022.
- Díaz-García, J.A.; Gutiérrez-Jáimez, R. Compound and scale mixture of matricvariate and matrix variate Kotz-type distributions. J. Korean Stat. Soc. 2010, 39, 75–82.
- Jeffrey, A.; Zwillinger, D. Table of Integrals, Series, and Products, 7th ed.; Academic Press: Boston, MA, USA, 2007.
- Benavoli, A.; Facchini, A.; Zaffalon, M. Quantum Mechanics: The Bayesian theory generalized to the space of Hermitian matrices. arXiv 2016, arXiv:1605.08177v4.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).