Entropy Optimization, Maxwell–Boltzmann, and Rayleigh Distributions

In physics, communication theory, engineering, statistics, and other areas, one of the methods of deriving distributions is the optimization of an appropriate measure of entropy under relevant constraints. In this paper, it is shown that by optimizing a measure of entropy introduced by the second author, one can derive densities of univariate, multivariate, and matrix-variate distributions in the real, as well as complex, domain. Several such scalar, multivariate, and matrix-variate distributions are derived. These include multivariate and matrix-variate Maxwell–Boltzmann and Rayleigh densities in the real and complex domains, multivariate Student-t, Cauchy, matrix-variate type-1 beta, type-2 beta, and gamma densities and their generalizations.


Introduction
The following notation will be used in this paper. Real scalar variables, whether mathematical variables or random variables, will be denoted by lower-case letters, such as x, y, etc.; real vector/matrix variables, mathematical and random, will be denoted by capital letters, such as X, Y, etc. Complex variables will be written with a tilde, such as x̃, ỹ, X̃, Ỹ, etc. Scalar constants will be denoted by a, b, etc., and vector/matrix constants by A, B, etc. No tilde will be used on constants. If A = (a_ij) is a p × p matrix, then its determinant will be denoted by |A| or det(A), whether the elements a_ij are real or complex. The transpose of A is written as A′ and the complex conjugate transpose as A*. The absolute value of the determinant will be written as |det(A)| = [det(AA*)]^{1/2}. For example, if det(A) = a + ib, i = √(−1), with a and b real scalars, then the absolute value is |det(A)| = √(a² + b²). If X = (x_ij) is a p × q real matrix, then the wedge product of the differentials dx_ij is written as dX = ∧_{i=1}^p ∧_{j=1}^q dx_ij, where, for two real scalar variables x and y with differentials dx and dy, the wedge product is defined as dx ∧ dy = −dy ∧ dx, so that dx ∧ dx = 0 and dy ∧ dy = 0. If X̃ in the complex domain is a p × q matrix, then we can write X̃ = X_1 + iX_2, i = √(−1), with X_1 and X_2 real; then, we define dX̃ = dX_1 ∧ dX_2. If f(X) is a real-valued scalar function of X, where X may be a real scalar variable x, a complex scalar variable x̃, a real vector/matrix variable X, or a complex vector/matrix variable X̃, such that f(X) ≥ 0 for all X and ∫_X f(X)dX = 1, then f(X) will be called a statistical density.
In many disciplines, especially in physics, communication theory, engineering, and statistics, one popular method of deriving statistical distributions is the optimization of an appropriate measure of entropy under appropriate constraints. For a real scalar random variable x with density f(x), [1] introduced a measure of entropy or a measure of uncertainty:

S(f) = −c ∫_x f(x) ln f(x) dx, (1)

where c is a constant. The corresponding measure for the discrete case is −c ∑_{j=1}^k p_j ln p_j, p_j > 0, j = 1, ..., k, p_1 + ... + p_k = 1, where (p_1, ..., p_k) is a discrete probability law. By optimizing S(f), several authors have derived exponential, Gaussian, and other distributions under constraints in terms of moments of x. For example, the constraint E[x] = fixed over all functional forms f, meaning that the first moment is given, where E(·) denotes the expected value of (·), produces an exponential density. If E[x] and E[x²] are fixed, meaning that the first two moments are fixed, then one obtains a Gaussian density, etc. The basic entropy measure in (1) has been generalized by various authors. One such generalized entropy is the Havrda-Charvát entropy [2], which for a real scalar variable x with density f(x) is given by

H_α(f) = (∫_x [f(x)]^α dx − 1)/(2^{1−α} − 1), α ≠ 1. (2)

The original H_α(f) is for the discrete case, and the corresponding continuous case is given in (2). Various properties, characterizations, and applications of the Shannon entropy and various α-generalized entropies were discussed by [3]. A modified version of (2) was introduced by Tsallis [4], and it is known in the literature as Tsallis' entropy:

T_q(f) = (1 − ∫_x [f(x)]^q dx)/(q − 1), q ≠ 1. (3)

Observe that when α → 1 in (2) and q → 1 in (3), both of these generalized entropies in the real scalar case reduce to the Shannon entropy of (1). Tsallis developed the whole area of non-extensive statistical mechanics by deriving Tsallis' statistics through the optimization of (3) under the constraint that the first moment is fixed in the associated escort density, f_q(x) = [f(x)]^q / ∫_x [f(x)]^q dx.
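As a quick numerical illustration (ours, not part of the original derivations), the following sketch evaluates the Shannon and Tsallis entropies for the exponential density f(x) = e^{−x}, x ≥ 0, for which S(f) = 1 when c = 1, and checks that Tsallis' entropy approaches this value as q → 1.

```python
import math

# Illustrative numerical check (not from the paper): for the exponential
# density f(x) = exp(-x), x >= 0, the Shannon entropy with c = 1 is
#   S(f) = -int f ln f dx = int x exp(-x) dx = 1,
# and Tsallis' entropy T_q(f) = (1 - int f^q dx)/(q - 1) = 1/q -> 1 as q -> 1.

def trapezoid(g, lo, hi, n=100000):
    """Composite trapezoidal rule for the integral of g over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.5 * (g(lo) + g(hi))
    for i in range(1, n):
        total += g(lo + i * h)
    return total * h

f = lambda x: math.exp(-x)

def shannon(density, lo=0.0, hi=40.0):
    return trapezoid(lambda x: -density(x) * math.log(density(x)), lo, hi)

def tsallis(density, q, lo=0.0, hi=40.0):
    return (1.0 - trapezoid(lambda x: density(x) ** q, lo, hi)) / (q - 1.0)

S = shannon(f)           # close to the exact value 1
T = tsallis(f, 1.001)    # close to 1/1.001, approaching the Shannon value
```

The upper limit 40 truncates a tail of order e^{−40}, which is negligible at this accuracy.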

Hundreds of papers have been published on Tsallis' statistics.
In early 2000, the second author introduced a generalized entropy of the following form:

M_α(f) = (∫_X [f(X)]^{(η + a − α)/η} dX − 1)/(α − a), α ≠ a, η > 0, (4)

where f(X) is a statistical density, f(X) ≥ 0, ∫_X f(X)dX = 1, where X may be a real scalar x, a complex scalar x̃, a real vector/matrix X, or a complex vector/matrix X̃, a is a fixed real scalar anchoring point, α is a real scalar parameter, and η > 0 is a real scalar constant so that the deviation of α from a is measured in η units. In the real scalar case, we can see that when α → a, (4) goes to the Shannon entropy in (1) with c = 1/η. Therefore, for vector/matrix variables in the real and complex domains, (4) provides a generalization of the Shannon entropy. If (3) is optimized under the constraint that the first moment E[x] in f(x) is fixed, then it does not lead directly to Tsallis' statistics; one must optimize (3) under the restriction that the first moment in the escort density mentioned above is fixed. Then, one obtains Tsallis' statistics. If (4) is used, then one can derive various real and complex scalar, vector, and matrix-variate distributions directly from f(X) by imposing moment-like restrictions in f(X). A particular case of (4) for a = 1, η = 1, introduced by the second author, was applied by [5] in time-series analysis, fractional calculus, and other areas. The researchers in [6] used a particular case of (4) for record values and ordered random variables and derived some properties, including characterization theorems. The authors of [7] discussed the analytical properties of the classical Mittag-Leffler function as derived from the solution of the simplest fractional differential equation governing relaxation processes. The authors of [8] studied the complexity of the ultraslow diffusion process using both the classical Shannon entropy and its general case with the inverse Mittag-Leffler function in conjunction with the structural derivative.
In the present article, the term "entropy" is used as a mathematical measure of uncertainty or information characterized by some basic axioms, as illustrated by [3]. Thus, it is a functional resulting from a set of axioms, that is, a function that can be interpreted in terms of a statistical density in the continuous case and in terms of multinomial probabilities in the discrete case. A general discussion of "entropy" is not attempted here because, as per von Neumann, "whoever uses the term 'entropy' in a discussion always wins since no one knows what entropy really is, so in a debate, one always has the advantage". An overview of the various entropic functional forms used so far in the literature is available in [9], along with their historical backgrounds and an account of the number of citations of these various functional forms. Hence, no detailed discussion of various entropic functional forms is attempted in the present paper. The concept of entropy is applied in general physics, information theory, chaos theory, time series, computer science, data mining, statistics, engineering, mathematical linguistics, stochastic processes, etc. An account of the entropic universe was given by [10], along with answers to the following questions: how different concepts of entropy arose, what the mathematical definition of each entropy is, how the entropies are related to each other, which entropy is appropriate in which areas of application, and what their impacts on the scientific community are. Hence, the present article does not attempt to repeat the answers to these questions. The present paper is about one entropy measure on a real scalar variable, its generalizations to vector/matrix variables in the real and complex domains, and an illustration of how this entropy can be optimized under various constraints to derive various statistical densities in the scalar, vector, and matrix variables in the real and complex domains.
Because the entropy measure considered in the present article does not contain derivatives, the method of the calculus of variations is used for optimization, so that the resulting Euler equations are simple. Mathematical variables and random variables are treated in the same way so that the double notation usually used for random variables is avoided. In order to avoid having too many symbols and the resulting confusion, scalar variables are denoted by lower-case letters and vector/matrix variables by capital letters so that the presentation is concise, consistent, and reader-friendly.

Entropy as an Expected Value
Shannon entropy S(f) can be looked upon as the expected value of −c ln f(x). In Mathai's entropy (4), one can write the numerator as

∫_X [f(X)]^{(η + a − α)/η} dX − 1 = ∫_X f(X){[f(X)]^{(a−α)/η} − 1} dX = E{[f(X)]^{(a−α)/η} − 1}.

Then, (4) is the following expected value:

M_α(f) = E[([f(X)]^{(a−α)/η} − 1)/(α − a)].

The quantity inside the expected value operator goes to −(1/η) ln f(X) when α → a, which is the same as the Shannon case for c = 1/η. Therefore, the quantity inside the expectation operator is an approximation to −(1/η) ln f(X).
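The α → a behavior can be seen in closed form for a simple density. The sketch below (our illustration, using the form of Mathai's entropy assumed in this rewrite) applies M_α(f) = (∫ [f(x)]^{(η+a−α)/η} dx − 1)/(α − a) to f(x) = e^{−x}, x ≥ 0, where ∫ f^β dx = 1/β with β = (η + a − α)/η, so M_α(f) = 1/(ηβ) → 1/η as α → a, matching the Shannon entropy with c = 1/η.

```python
# Sketch (our illustration): Mathai's entropy, in the form assumed here,
#   M_alpha(f) = ( int f(x)^((eta + a - alpha)/eta) dx - 1 ) / (alpha - a),
# evaluated in closed form for f(x) = exp(-x), x >= 0.

def mathai_entropy_exponential(alpha, a=1.0, eta=2.0):
    beta = (eta + a - alpha) / eta   # exponent (eta + a - alpha)/eta
    integral = 1.0 / beta            # int_0^inf exp(-beta x) dx = 1/beta
    return (integral - 1.0) / (alpha - a)

near = mathai_entropy_exponential(1.0001)   # alpha close to a = 1
# Shannon value with c = 1/eta: (1/eta) * int x exp(-x) dx = 1/eta = 0.5
far = mathai_entropy_exponential(0.5)       # exact value 1/(eta*beta) = 0.4
```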

Optimization of Mathai's Entropy for the Real Scalar Case
Let x be a real scalar variable and let f(x) be a density function, that is, f(x) ≥ 0 for all x and ∫_x f(x)dx = 1. Consider the optimization of (4) under the following moment-like constraints:

E[x^{γ(a−α)/η}] = fixed and E[x^{γ(a−α)/η + δ}] = fixed, δ > 0, (5)

over all possible densities f(x). Then, if we use the calculus of variations for the optimization of (4), the Euler equation is the following:

(∂/∂f)[f^{(η + a − α)/η} − λ_1 x^{γ(a−α)/η} f + λ_2 x^{γ(a−α)/η + δ} f] = 0,

where λ_1 and λ_2 are Lagrangian multipliers and λ_2/λ_1 is taken as b(a − α), b > 0, for convenience. Solving the Euler equation for f, we obtain the following three models:

f_1(x) = c_1 x^γ [1 − b(a − α)x^δ]^{η/(a−α)}, for α < a, 1 − b(a − α)x^δ > 0, (6)
f_2(x) = c_2 x^γ [1 + b(α − a)x^δ]^{−η/(α−a)}, for α > a, (7)
f_3(x) = c_3 x^γ e^{−bηx^δ}, for α → a, (8)

for x ≥ 0, b > 0, η > 0, where c_1, c_2, c_3 are the normalizing constants. Observe that all three functions f_i(x), i = 1, 2, 3 can be reached through the pathway parameter α. From f_1(x), one can go to f_2(x) and f_3(x).
Similarly, from f_2(x), one can obtain f_1(x) and f_3(x). Hence, f_1(x) or f_2(x) is Mathai's pathway model for the real scalar positive variable x, as a mathematical model or as a statistical model. For γ = 2, δ = 2, x ≥ 0, the model f_3(x) is a Maxwell-Boltzmann density, and for γ = 1/2, δ = 2, x ≥ 0, f_3(x) is the Rayleigh density for the real scalar positive variable case. If a location parameter is desired, then x is replaced by x − m in all of the above models, where m is the relocation parameter. For γ = 0, δ = 1, η = 1, a = 1, α = q, f_i(x), i = 1, 2, 3 gives Tsallis' statistics of non-extensive statistical mechanics; see [4]. Hundreds of articles have been published on Tsallis' statistics. For δ = 1, η = 1, a = 1, f_2(x) and f_3(x) (but not f_1(x)) provide the superstatistics of statistical mechanics. Several articles have been published on superstatistics.
Fermi-Dirac and Bose-Einstein densities are also available from the same procedure. In this case, the second factor x^δ in the constraint is replaced by e^{cx}, c > 0, x ≥ 0, and the Lagrangian multipliers are taken as −λ_1 and −λ_2 so that the second factor in Equation (6) becomes (λ_1 + λ_2 e^{cx})^{−η/(α−a)} for α > a, with λ_1 + λ_2 e^{cx} > 0, to create a density function. Now, take γ = 0, η = 1, α − a = 1. Then, for λ_1 = 1, λ_2 = e^d for some constant d, this gives the Fermi-Dirac density, and for λ_1 = −1 and λ_2 = e^d, this gives the Bose-Einstein density. If f_3(x) is the exponential model (γ = 0, δ = 1) or the Gaussian model (γ = 0, δ = 2) and is the stable or ideal situation in a physical system, then f_1(x) and f_2(x) provide the unstable or chaotic neighborhoods, and through the pathway parameter α, one can model the stable situation, the unstable neighborhoods, and the transitional stages in a data-analysis situation. This is the pathway idea of Mathai.
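The pathway transition can be checked pointwise. The sketch below (ours, using the pathway kernels in the form assumed in this rewrite, with b = η = 1, γ = 2, δ = 2) verifies numerically that both the α < a and α > a kernels approach the Maxwell-Boltzmann-type kernel x² e^{−x²} as α → a, since (1 ∓ εt)^{±1/ε} → e^{−t}.

```python
import math

# Sketch (our illustration): pathway kernels with b = eta = 1, gamma = 2,
# delta = 2.  As eps = |alpha - a| -> 0, both kernels approach
# k3(x) = x^2 exp(-x^2), the Maxwell-Boltzmann-type limit.

def k1(x, eps):
    """Pathway kernel for alpha < a, with eps = a - alpha."""
    base = 1.0 - eps * x * x
    return x * x * base ** (1.0 / eps) if base > 0 else 0.0

def k2(x, eps):
    """Pathway kernel for alpha > a, with eps = alpha - a."""
    return x * x * (1.0 + eps * x * x) ** (-1.0 / eps)

def k3(x):
    """Limiting kernel as alpha -> a."""
    return x * x * math.exp(-x * x)

x0, eps = 1.3, 1e-5
gap1 = abs(k1(x0, eps) - k3(x0))   # both gaps shrink to 0 with eps
gap2 = abs(k2(x0, eps) - k3(x0))
```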

Constraints in Terms of the Ellipsoid of Concentration in the Real p-Variate Case
Let X be a p × 1 real vector with distinct real scalar variables x_j as elements, X′ = (x_1, ..., x_p), where a prime denotes the transpose. Let µ be a p × 1 location vector. Let the covariance matrix of X be Σ = Cov(X) = E[(X − µ)(X − µ)′]; then, Σ = Σ′, and let Σ > O (real positive definite). Then, the square of the Euclidean distance of X from the point of location µ is (X − µ)′(X − µ), and the generalized distance of X from µ is

u = (X − µ)′Σ^{−1}(X − µ),

known as the ellipsoid of concentration. The probability content of this ellipsoid of concentration is an important quantity in statistical analysis. Let us consider constraints in terms of moments of the ellipsoid of concentration u. Consider the following constraints:

E[u^{γ(a−α)/η}] = fixed and E[u^{γ(a−α)/η + δ}] = fixed, δ > 0,

over all possible densities f(X), where X is a p × 1 vector random variable. Then, optimizing Mathai's entropy in (4) over all possible densities f(X) and proceeding as in Section 2, we have the following three densities:

f_1(X) = C_1 u^γ [1 − b(a − α)u^δ]^{η/(a−α)}, for α < a, 1 − b(a − α)u^δ > 0, (9)
f_2(X) = C_2 u^γ [1 + b(α − a)u^δ]^{−η/(α−a)}, for α > a, (10)
f_3(X) = C_3 u^γ e^{−bηu^δ}, for α → a, (11)

for b > 0, η > 0. For α → a, the models in both (9) and (10) go to the model in (11), and C_1, C_2, C_3 are the normalizing constants. These normalizing constants can be evaluated, and further properties of the models can be studied, with the help of the following results from [11]:

Lemma 1. Let X = (x_ij) be a p × q real matrix with distinct real scalar variables x_ij as elements. Let A be a p × p and B a q × q constant nonsingular matrix. Then,

Y = AXB, |A| ≠ 0, |B| ≠ 0 ⇒ dY = |A|^q |B|^p dX. (12)

For the proof of this result, as well as for other similar results, see [11]. We will state one more result from [11] here without proof.

Lemma 2. Let X be a real p × q, p ≤ q, rank p matrix with distinct real scalar variables as elements. Let S = XX′, so that S is p × p symmetric and positive definite. Then, after integrating out over the Stiefel manifold,

dX = (π^{pq/2}/Γ_p(q/2)) |S|^{(q/2) − (p+1)/2} dS, (13)

where, for example, Γ_p(α) is the real matrix-variate gamma given by

Γ_p(α) = π^{p(p−1)/4} Γ(α)Γ(α − 1/2) ⋯ Γ(α − (p−1)/2), ℜ(α) > (p−1)/2, (14)

where ℜ(·) indicates the real part of (·), S > O is p × p real positive definite, and tr(·) indicates the trace of (·).
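The Jacobian in Lemma 1 can be verified numerically. The following sketch (ours) uses the standard identity vec(AXB) = (B′ ⊗ A)vec(X) under column stacking, together with det(B′ ⊗ A) = det(A)^q det(B)^p, which is exactly the factor |A|^q |B|^p in the lemma.

```python
import numpy as np

# Numerical sketch (ours) of the Jacobian in Lemma 1: for Y = AXB with
# A (p x p) and B (q x q) nonsingular, the linear map vec(X) -> vec(AXB)
# has matrix B' kron A under column stacking, and
#   det(B' kron A) = det(A)^q * det(B)^p,
# matching dY = |A|^q |B|^p dX.

rng = np.random.default_rng(0)
p, q = 3, 4
A = rng.standard_normal((p, p))
B = rng.standard_normal((q, q))
X = rng.standard_normal((p, q))

jac = np.kron(B.T, A)                  # matrix of the map vec(X) -> vec(AXB)
vec_ok = np.allclose((A @ X @ B).flatten(order='F'),
                     jac @ X.flatten(order='F'))   # vec identity check
lhs = np.linalg.det(jac)
rhs = np.linalg.det(A) ** q * np.linalg.det(B) ** p
```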

Evaluation of the Normalizing Constants
Consider, for α < a,

1/C_1 = ∫_X u^γ [1 − b(a − α)u^δ]^{η/(a−α)} dX = |Σ|^{1/2} (π^{p/2}/Γ(p/2)) ∫_0^∞ s^{γ + p/2 − 1}[1 − b(a − α)s^δ]^{η/(a−α)} ds,

where Y = Σ^{−1/2}(X − µ), s = Y′Y, and the integrand is nonzero only for 1 − b(a − α)s^δ > 0. The last step is obtained by integrating out s by using a real type-1 beta integral. Hence, for α < a, the normalizing constant is

C_1 = (δ Γ(p/2) [b(a − α)]^{(γ + p/2)/δ} / (|Σ|^{1/2} π^{p/2})) Γ((γ + p/2)/δ + η/(a−α) + 1) / [Γ((γ + p/2)/δ) Γ(η/(a−α) + 1)].

In a similar manner, by integrating out s by using a real type-2 beta integral, we have the normalizing constant C_2 for α > a as the following:

C_2 = (δ Γ(p/2) [b(α − a)]^{(γ + p/2)/δ} / (|Σ|^{1/2} π^{p/2})) Γ(η/(α−a)) / [Γ((γ + p/2)/δ) Γ(η/(α−a) − (γ + p/2)/δ)], for η/(α−a) − (γ + p/2)/δ > 0,

and for α → a, we have

C_3 = δ Γ(p/2) (bη)^{(γ + p/2)/δ} / [|Σ|^{1/2} π^{p/2} Γ((γ + p/2)/δ)].

Observe that the model in (9) is a multivariate generalized real type-1 beta model, (10) is a multivariate generalized real type-2 beta model, and (11) is a multivariate generalized real gamma model. For δ = 2, γ = 2, (11) is also a real multivariate Maxwell-Boltzmann model, and for δ = 2, γ = 1/2, (11) is a real multivariate Rayleigh model. The corresponding densities for Y = Σ^{−1/2}(X − µ) can be called the standard real multivariate Maxwell-Boltzmann and Rayleigh densities, respectively. If the Maxwell-Boltzmann and Rayleigh densities are the stable distributions in a physical system, then the unstable or chaotic neighborhoods are available from (9) and (10), and all of the situations, the stable situation, the unstable neighborhoods, and the transitional stages, can be reached through the pathway parameter α. For γ = 0, the model in (9) is very useful in real multivariate reliability analysis; see [12,13]. The model in (10) for γ = 0 corresponds to a multivariate version of the Student-t, Cauchy, multivariate F, and related distributions; see [14].
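The gamma-integral step behind the α → a normalizing constant can be checked numerically. The sketch below (ours, with the radial-integral form assumed in this rewrite and |Σ| = 1) compares a direct numerical evaluation of the s-integral ∫_0^∞ s^{γ + p/2 − 1} e^{−bη s^δ} ds with its closed form (1/δ)(bη)^{−(γ+p/2)/δ} Γ((γ + p/2)/δ).

```python
import math

# Sketch (ours): the radial s-integral appearing in 1/C_3 for the
# alpha -> a model, checked against the gamma-integral closed form.

def s_integral_numeric(gamma, delta, beta, p, hi=30.0, n=200000):
    """Right-Riemann evaluation of int_0^hi s^(gamma+p/2-1) exp(-beta s^delta) ds."""
    h = hi / n
    total = 0.0
    for i in range(1, n + 1):        # the endpoint s = 0 contributes 0 here
        s = i * h
        total += s ** (gamma + p / 2.0 - 1.0) * math.exp(-beta * s ** delta)
    return total * h

def s_integral_closed(gamma, delta, beta, p):
    r = (gamma + p / 2.0) / delta
    return beta ** (-r) * math.gamma(r) / delta

# gamma = 1, delta = 2, b*eta = 1, p = 2: integrand s*exp(-s^2), integral 1/2
num = s_integral_numeric(gamma=1.0, delta=2.0, beta=1.0, p=2)
closed = s_integral_closed(gamma=1.0, delta=2.0, beta=1.0, p=2)
```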
From the normalizing constants C_1, C_2, C_3, one can also obtain the h-th moment of the ellipsoid of concentration u for an arbitrary h. That is, for α < a,

E[u^h] = [b(a − α)]^{−h/δ} Γ((γ + h + p/2)/δ) Γ((γ + p/2)/δ + η/(a−α) + 1) / [Γ((γ + p/2)/δ) Γ((γ + h + p/2)/δ + η/(a−α) + 1)] (iv)

for (γ + h + p/2) > 0, with analogous expressions for α > a and α → a. The density coming from (iv) is an H-function. For the theory and applications of the H-function, see [15].
To summarize the distributional results: for α < a, b(a − α)u^δ is distributed as a real scalar type-1 beta random variable with the parameters ((γ + p/2)/δ, η/(a−α) + 1); for α > a, b(α − a)u^δ is a real scalar type-2 beta random variable with the parameters ((γ + p/2)/δ, η/(α−a) − (γ + p/2)/δ); and for α → a, bηu^δ is a real scalar gamma random variable with the parameters ((γ + p/2)/δ, 1).
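A familiar special case makes the gamma-limit statement concrete. In the Gaussian case (γ = 0, δ = 1, bη = 1/2), the ellipsoid of concentration u = (X − µ)′Σ^{−1}(X − µ) is chi-square with p degrees of freedom, so E[u] = p and E[u²] = p(p + 2). The Monte Carlo sketch below (ours) checks this for p = 3.

```python
import numpy as np

# Sketch (ours): Gaussian special case of the alpha -> a model.  For
# X ~ N_p(mu, Sigma), u = (X - mu)' Sigma^(-1) (X - mu) is chi-square
# with p degrees of freedom, in line with the statement above that a
# scaled power of u follows a gamma law.

rng = np.random.default_rng(42)
p, n = 3, 200000
mu = np.array([1.0, -2.0, 0.5])
L = np.array([[1.0, 0.0, 0.0],          # Cholesky factor, Sigma = L L'
              [0.5, 1.2, 0.0],
              [-0.3, 0.4, 0.9]])
Sigma = L @ L.T
X = mu + rng.standard_normal((n, p)) @ L.T   # rows are N_p(mu, Sigma) samples

D = X - mu
u = np.einsum('ij,jk,ik->i', D, np.linalg.inv(Sigma), D)

mean_u = u.mean()          # close to p = 3
mean_u2 = (u ** 2).mean()  # close to p(p + 2) = 15
```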

Note 1.
We can relax the condition δ > 0. Note that the models in (10) and (11) are also valid for δ < 0, and by defining the support appropriately, we can relax the condition δ > 0 in (9) as well.
Note 2. Consider a function g((X − µ)′Σ^{−1}(X − µ)) with g(r²) ≥ 0 for a real scalar variable r and ∫_r g(r²)dr < ∞. Consider the optimization of Mathai's entropy in (4) over all possible densities f(X) and under the constraint

E[g((X − µ)′Σ^{−1}(X − µ))] = fixed,

where the expectation is taken in f(X); then, we end up with an elliptically contoured distribution for f(X), and the corresponding density for Y = Σ^{−1/2}(X − µ) is a spherically symmetric distribution, that is, one invariant under orthonormal transformations or under rotations of the coordinate axes.

Real Matrix-Variate Case
Let X = (x_ij) be a real p × q, p ≤ q, rank p matrix with distinct real scalar variables x_ij as elements. Let A > O be a p × p constant positive definite matrix and let B > O be a q × q constant positive definite matrix. Let u = tr(A^{1/2}XBX′A^{1/2}). This u is an important quantity in the statistical literature. Hence, we will impose restrictions in terms of moments of u. Consider the optimization of Mathai's entropy in (4) over all densities f(X), where X is a p × q matrix as defined above, subject to the constraints

E[u^{γ(a−α)/η}] = fixed and E[u^{γ(a−α)/η + δ}] = fixed, δ > 0,

over all possible densities f(X). Then, proceeding as in Section 3, we end up with the following densities, where we use the same notations f_i(X), C_i, i = 1, 2, 3 in order to avoid having too many symbols. For α < a,

f_1(X) = C_1 u^γ [1 − b(a − α)u^δ]^{η/(a−α)}, 1 − b(a − α)u^δ > 0; (20)

for α > a,

f_2(X) = C_2 u^γ [1 + b(α − a)u^δ]^{−η/(α−a)}; (21)

and for α → a,

f_3(X) = C_3 u^γ e^{−bηu^δ}. (22)

For evaluating the normalizing constants, we use the following transformations: Y = A^{1/2}XB^{1/2} ⇒ dY = |A|^{q/2}|B|^{p/2}dX by Lemma 1, and s = tr(YY′) ⇒ dY = (π^{pq/2}/Γ(pq/2)) s^{pq/2 − 1} ds. Then, for α < a, we evaluate the s-integral by using a real scalar type-1 beta integral; for α > a, we evaluate the s-integral by using a real scalar type-2 beta integral; for α → a, the s-integral is evaluated by using a real scalar gamma integral. Then, the normalizing constants are the following:

C_1 = (δ |A|^{q/2}|B|^{p/2} Γ(pq/2) [b(a − α)]^{(γ + pq/2)/δ} / π^{pq/2}) Γ((γ + pq/2)/δ + η/(a−α) + 1) / [Γ((γ + pq/2)/δ) Γ(η/(a−α) + 1)], (23)
C_2 = (δ |A|^{q/2}|B|^{p/2} Γ(pq/2) [b(α − a)]^{(γ + pq/2)/δ} / π^{pq/2}) Γ(η/(α−a)) / [Γ((γ + pq/2)/δ) Γ(η/(α−a) − (γ + pq/2)/δ)], (24)
C_3 = δ |A|^{q/2}|B|^{p/2} Γ(pq/2) (bη)^{(γ + pq/2)/δ} / [π^{pq/2} Γ((γ + pq/2)/δ)], (25)

where, in (23), the conditions are α < a, b > 0, η > 0, 1 − b(a − α)u^δ > 0, and, in (24), α > a and η/(α−a) − (γ + pq/2)/δ > 0. Observe that (20), (21), and (22) are generalized forms of real matrix-variate type-1 beta, type-2 beta, and gamma models, respectively, and, in (22), bηu^δ is real scalar gamma distributed with the parameters ((γ + pq/2)/δ, 1).

Note 3.
If a location parameter p × q matrix M is to be introduced, then replace X with X − M everywhere. If q ≤ p and X is of rank q, then one can consider v = tr(B^{1/2}X′AXB^{1/2}). Then, parallel results hold for all of the results in Section 4 by interchanging A with B and p with q.

Constraints in Terms of Determinants
Let X = (x_ij) be a p × q, p ≤ q, rank p matrix with distinct elements x_ij. Let A > O be a p × p and B > O a q × q constant positive definite matrix. Consider the optimization of Mathai's entropy (4), under a moment-like constraint on the determinant |A^{1/2}XBX′A^{1/2}|, over all real p × q, p ≤ q, rank p matrix-variate densities f(X). Then, following the same procedure as in the above cases, we end up with the density

f_1(X) = C_1 |I − b(a − α)A^{1/2}XBX′A^{1/2}|^{η/(a−α)}, for α < a, (26)

where b > 0, η > 0, I − b(a − α)A^{1/2}XBX′A^{1/2} > O, and a is a fixed scalar constant. In order to avoid having too many symbols, we will use the same notations f_i(X), C_i, i = 1, 2, 3 in this section. For α > a, the model in (26) changes into

f_2(X) = C_2 |I + b(α − a)A^{1/2}XBX′A^{1/2}|^{−η/(α−a)}, (27)

and, for α → a, both go to

f_3(X) = C_3 e^{−bη tr(A^{1/2}XBX′A^{1/2})}. (28)

The transition of (26) and (27) to (28) can be seen from the following properties. Let λ_1, ..., λ_p be the eigenvalues of A^{1/2}XBX′A^{1/2}. Then,

|I − b(a − α)A^{1/2}XBX′A^{1/2}|^{η/(a−α)} = ∏_{j=1}^p [1 − b(a − α)λ_j]^{η/(a−α)}.

However, from the definition of the mathematical constant e, we have

lim_{α→a} [1 − b(a − α)λ_j]^{η/(a−α)} = e^{−bηλ_j}.

Then, the product gives the sum of the eigenvalues, or the trace, in the exponent, and hence the result. The normalizing constants C_i, i = 1, 2, 3 can be evaluated by using the following transformations: Y = A^{1/2}XB^{1/2} ⇒ dY = |A|^{q/2}|B|^{p/2}dX by using Lemma 1, and S = YY′ ⇒ dY = (π^{pq/2}/Γ_p(q/2)) |S|^{(q/2) − (p+1)/2} dS by using Lemma 2. Then, evaluating the S-integral by using a real matrix-variate type-1 beta integral for α < a, a real matrix-variate type-2 beta integral for α > a, or a real matrix-variate gamma integral for α → a, we obtain the results, where, for example, Γ_p(α) is the real matrix-variate gamma defined earlier in (14).

Modification of the Constraint in Terms of a Determinant
Let us consider the matrices X, A, B as in Section 5. Consider the optimization of (4) under a modified determinant constraint, with the moments of |A^{1/2}XBX′A^{1/2}| entering through the exponent γ(a − α)/η, for α < a, over all possible densities f(X). Then, proceeding as in the previous cases, we end up with the following densities:

f_1(X) = C_1 |A^{1/2}XBX′A^{1/2}|^γ |I − b(a − α)A^{1/2}XBX′A^{1/2}|^{η/(a−α)}, for α < a, (29)
f_2(X) = C_2 |A^{1/2}XBX′A^{1/2}|^γ |I + b(α − a)A^{1/2}XBX′A^{1/2}|^{−η/(α−a)}, for α > a, (30)
f_3(X) = C_3 |A^{1/2}XBX′A^{1/2}|^γ e^{−bη tr(A^{1/2}XBX′A^{1/2})}, for α → a. (31)

Observe that, as in the previous cases, all three models are available through the pathway parameter α from either f_1(X) or f_2(X). If f_3(X) is the stable situation in a physical system, then the unstable neighborhoods are given by f_1(X) and f_2(X); these stable and unstable stages and the transitional stages can be reached through α. For γ = 1, the model in (31) can be taken as the real rectangular matrix-variate Maxwell-Boltzmann density, and for γ = 1/2, it is the real rectangular matrix-variate Rayleigh density. The corresponding densities of Y = A^{1/2}XB^{1/2} can be taken as the standard matrix-variate Maxwell-Boltzmann and Rayleigh densities. The corresponding densities for S = YY′ can be taken as the isotropic or spherically symmetric matrix-variate Maxwell-Boltzmann and Rayleigh densities. The normalizing constants can be evaluated by using the transformations in Section 5 and then evaluating the S-integral by using real matrix-variate type-1 beta, type-2 beta, and gamma integrals. The final expressions are the following:

C_1 = (|A|^{q/2}|B|^{p/2} Γ_p(q/2) [b(a − α)]^{p(γ + q/2)} / π^{pq/2}) Γ_p(γ + q/2 + η/(a−α) + (p+1)/2) / [Γ_p(γ + q/2) Γ_p(η/(a−α) + (p+1)/2)], (32)
C_2 = (|A|^{q/2}|B|^{p/2} Γ_p(q/2) [b(α − a)]^{p(γ + q/2)} / π^{pq/2}) Γ_p(η/(α−a)) / [Γ_p(γ + q/2) Γ_p(η/(α−a) − γ − q/2)], (33)
C_3 = |A|^{q/2}|B|^{p/2} Γ_p(q/2) (bη)^{p(γ + q/2)} / [π^{pq/2} Γ_p(γ + q/2)], (34)

where, in (32), α < a, in (33), α > a and η/(α−a) − (γ + q/2) > (p−1)/2, and, in (32)-(34), γ + q/2 > (p−1)/2, b > 0, η > 0.

Arbitrary Moments
Let u = |A^{1/2}XBX′A^{1/2}|. If the h-th moment of this determinant u for an arbitrary h is needed, then this moment can be written down by looking at the normalizing constants in (32)-(34). For α < a, the h-th moment is the following:

E[u^h] = [b(a − α)]^{−ph} Γ_p(γ + q/2 + h) Γ_p(γ + q/2 + η/(a−α) + (p+1)/2) / [Γ_p(γ + q/2) Γ_p(γ + q/2 + h + η/(a−α) + (p+1)/2)] (35)

for (h + γ + q/2) > (p−1)/2. Expanding the Γ_p's by using (14), we see that (35) is the h-th moment of a product u_1 ⋯ u_p of independently distributed real scalar type-1 beta random variables, with u_j having the parameters (γ + q/2 − (j−1)/2, (p+1)/2 + η/(a−α)), j = 1, ..., p, so that we have the following structural representation for α < a:

[b(a − α)]^p u ∼ u_1 u_2 ⋯ u_p,

where both sides have the same distribution. Similarly, for α > a, we have the following:

E[u^h] = [b(α − a)]^{−ph} Γ_p(γ + q/2 + h) Γ_p(η/(α−a) − γ − q/2 − h) / [Γ_p(γ + q/2) Γ_p(η/(α−a) − γ − q/2)], (36)

so that [b(α − a)]^p u ∼ v_1 v_2 ⋯ v_p, where v_1, ..., v_p are independently distributed real scalar type-2 beta variables, with v_j having the parameters (γ + q/2 − (j−1)/2, η/(α−a) − γ − q/2 − (j−1)/2), j = 1, ..., p. For α → a, we have the following from (34):

E[u^h] = (bη)^{−ph} Γ_p(γ + q/2 + h) / Γ_p(γ + q/2), (37)

for (h + γ + q/2) > (p−1)/2, so that (bη)^p u ∼ w_1 w_2 ⋯ w_p, where w_1, ..., w_p are independently distributed real scalar gamma variables, with w_j having the parameters (γ + q/2 − (j−1)/2, 1), j = 1, ..., p.
Note 5. Note that u = |A^{1/2}XBX′A^{1/2}| = |YY′| for Y = A^{1/2}XB^{1/2}. Let the rows of Y be Y_1, ..., Y_p, where Y_j is a 1 × q real vector. Then, Y_j can be considered to be a point in a q-dimensional Euclidean space. We have p ≤ q such points. These points (vectors) are linearly independent because we have assumed that the matrix is of rank p. Taking the points in the order Y_1, ..., Y_p, these points (vectors) create a convex hull, and in this hull, a parallelotope is determined; the volume content of this parallelotope is the determinant |YY′|^{1/2}. Hence, the distribution of this determinant, as well as its moments, is important in stochastic geometry, in geometrical probabilities, and in related areas of image processing, pattern recognition, etc. The scaling constants b(a − α) in (35), b(α − a) in (36), and bη in (37) can be taken as unities for convenience. Then, the points Y_1, ..., Y_p are type-1 beta distributed in (35), type-2 beta distributed in (36), and gamma distributed in (37). In general, Y_1, ..., Y_p have pathway distributions, or these are pathway-distributed random points in q-space.

Note 6. If q ≤ p and the matrix X is of rank q, then we may consider |B^{1/2}X′AXB^{1/2}| instead of |A^{1/2}XBX′A^{1/2}|, where one is a q × q matrix and the other is a p × p matrix.
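The product-type structural representation can be illustrated in the Gaussian limit. For a p × q matrix Y of iid standard normal entries, the Bartlett decomposition gives |YY′| as a product of p independent chi-square variables with q, q − 1, ..., q − p + 1 degrees of freedom (a special case of the gamma-product representation discussed above), so E[|YY′|] = q(q − 1)⋯(q − p + 1). The Monte Carlo sketch below (ours) checks this for p = 2, q = 3.

```python
import numpy as np

# Sketch (ours): for Y (p x q) with iid N(0,1) entries, the Bartlett
# decomposition gives |Y Y'| = chi2_q * chi2_(q-1) * ... * chi2_(q-p+1)
# with independent factors, so E|Y Y'| = q(q-1)...(q-p+1).

rng = np.random.default_rng(7)
p, q, n = 2, 3, 200000

Ys = rng.standard_normal((n, p, q))
M = Ys @ np.swapaxes(Ys, 1, 2)        # n Gram matrices Y Y' of size p x p
dets = np.linalg.det(M)

mc_mean = dets.mean()
exact = 1.0
for j in range(p):
    exact *= (q - j)                  # E[chi2_k] = k; here exact = 3*2 = 6
```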

Complex Case
For a matrix A, its transpose will be written as A′ and its complex conjugate transpose as A*. If A = A*, then A is called Hermitian. Any complex matrix A can be written as A = A_1 + iA_2, i = √(−1), with A_1 and A_2 real. When A is Hermitian, A_1 = A_1′ and A_2 = −A_2′, that is, A_1 is real symmetric and A_2 is real skew-symmetric. If A is p × p Hermitian positive definite, then we write A = A* > O (Hermitian positive definite). The determinant of A is written as |A| or det(A), and the absolute value of the determinant will be written as |det(A)| = {[det(A)][det(A*)]}^{1/2} = [det(AA*)]^{1/2}. Variables in the complex domain will be written with a tilde, such as X̃. In order to optimize Mathai's entropy (4) over densities f(X̃) in the complex domain, we need some results on Jacobians. These will be given here as lemmas without proofs. For the proofs and for other related results in the complex domain, see [11].

Lemma 3. Let X̃ = (x̃_ij) be p × q with distinct complex scalar elements x̃_ij. Let A and B be p × p and q × q nonsingular constant matrices, respectively, real or complex. Then,

Ỹ = AX̃B ⇒ dỸ = |det(A)|^{2q} |det(B)|^{2p} dX̃, (38)

where |det(·)| denotes the absolute value of the determinant.

Lemma 4. Let X̃ = (x̃_ij) be a p × q, p ≤ q, rank p matrix with distinct complex elements x̃_ij. Let S̃ = X̃X̃*, which is p × p Hermitian positive definite. Then, after integrating over the Stiefel manifold,

dX̃ = (π^{pq}/Γ̃_p(q)) |det(S̃)|^{q−p} dS̃, (39)

where Γ̃_p(α) is a complex matrix-variate gamma given by

Γ̃_p(α) = π^{p(p−1)/2} Γ(α)Γ(α − 1) ⋯ Γ(α − p + 1), ℜ(α) > p − 1. (40)

Optimization in the Complex Domain
As a first problem, let X̃ be a p × 1 vector variable in the complex domain with distinct scalar complex elements. Let Σ > O be a p × p Hermitian positive definite constant matrix. Consider the Hermitian form u = (X̃ − µ)*Σ^{−1}(X̃ − µ), where µ is a p × 1 constant vector. This can be taken as the ellipsoid of concentration in a 2p-dimensional Euclidean space or as the ellipsoid of concentration in the p-dimensional complex domain. This ellipsoid is an important quantity in statistical analysis, as well as in various other situations. When X̃ is a vector random variable in the complex domain with mean value E[X̃] = µ and covariance matrix Σ = Cov(X̃) = E[(X̃ − µ)(X̃ − µ)*], then u is the generalized distance of X̃ from the point of location of its expected value µ. Hence, we will optimize Mathai's entropy in (4) under moment-like constraints on u. Consider the following constraints:

E[u^{γ(a−α)/η}] = fixed and E[u^{γ(a−α)/η + δ}] = fixed, (41)

over all possible densities f(X̃), where a, α, η > 0, δ > 0, γ > 0 are real scalar constants, a is a fixed anchoring point, and α is a real parameter. For α < a, proceeding as in the real case, we end up with the following density:

f_1(X̃) = c_1 u^γ [1 − b(a − α)u^δ]^{η/(a−α)}, 1 − b(a − α)u^δ > 0, (42)

where c_1 is the normalizing constant. In order to avoid having too many symbols, we use the same notations as in the real case, with variables written with a tilde and constants without a tilde. For α > a, we have the following density:

f_2(X̃) = c_2 u^γ [1 + b(α − a)u^δ]^{−η/(α−a)}, (43)

and, for α → a, both (42) and (43) go to

f_3(X̃) = c_3 u^γ e^{−bηu^δ}, (44)

for b > 0, η > 0. For evaluating the normalizing constants c_1, c_2, c_3, we use the following transformations: Ỹ = Σ^{−1/2}(X̃ − µ) ⇒ dỸ = |det(Σ)|^{−1}dX̃ by using Lemma 3, where Σ^{−1/2} is the Hermitian positive definite square root of the Hermitian positive definite Σ^{−1}; and s = Ỹ*Ỹ ⇒ dỸ = (π^p/Γ(p)) s^{p−1} ds by using Lemma 4, where Ỹ* is 1 × p and s is 1 × 1. Then, we evaluate the s-integral by using a real scalar type-1 beta integral for α < a, a real scalar type-2 beta integral for α > a, and a real scalar gamma integral for α → a.
Then, we have the following results:

c_1 = δ Γ(p) [b(a − α)]^{(γ+p)/δ} Γ((γ+p)/δ + η/(a−α) + 1) / [|det(Σ)| π^p Γ((γ+p)/δ) Γ(η/(a−α) + 1)], (45)
c_2 = δ Γ(p) [b(α − a)]^{(γ+p)/δ} Γ(η/(α−a)) / [|det(Σ)| π^p Γ((γ+p)/δ) Γ(η/(α−a) − (γ+p)/δ)], (46)
c_3 = δ Γ(p) (bη)^{(γ+p)/δ} / [|det(Σ)| π^p Γ((γ+p)/δ)], (47)

for b > 0, η > 0, γ > 0, δ > 0, and, in addition, in (46), η/(α−a) − (γ+p)/δ > 0. Observe that, through the pathway parameter α, one can reach all three densities f_j(X̃), j = 1, 2, 3, and hence f_1(X̃) or f_2(X̃) is the pathway model in the complex domain for the p × 1 vector random variable X̃. In model-building situations, if f_3(X̃) is the stable model, then the unstable neighborhoods are given by f_1(X̃) and f_2(X̃), and the transitional stages are also reached through α.
For γ = 1, δ = 1, one can consider f_3(X̃) in (44) as a multivariate Maxwell-Boltzmann density in the complex domain. For γ = 1/2, δ = 1, one can take (44) as a multivariate Rayleigh density in the complex domain. For p = 1, we have the scalar variable Maxwell-Boltzmann and Rayleigh densities in the complex domain from (44). The corresponding real cases may be seen in [12,13]. Observe that (43) and (44) also hold for δ < 0, but for δ < 0, the support must be redefined in (42). Hence, a form of the multivariate Maxwell-Boltzmann and Rayleigh densities can be defined for δ < 0 as well. In the complex domain, these densities are defined over the whole complex space. In the complex scalar case, if one has to confine attention to the sector ℜ(x̃ − µ) > 0, then we multiply the corresponding (44) by 1/2 for p = 1, so that one can consider, for example, a time variable that is real positive for the real part and a phase variable for the complex part. Note that in the Rayleigh case for p = 1, u = (x̃ − µ)*(x̃ − µ) = |x̃ − µ|², where |·| is the absolute value. For γ = 0, (42) gives a very good model for multivariate reliability analysis in the complex domain. Reliability analysis in the complex domain does not seem to have been discussed in the literature.
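The scalar complex Rayleigh case can be illustrated by simulation. The sketch below (ours, with an assumed parameterization) samples a scalar complex Gaussian variable x̃ = x_1 + ix_2 with x_1, x_2 iid N(0, 1/2), so that E[x̃x̃*] = 1; the modulus r = |x̃| then has the Rayleigh density 2r e^{−r²}, r ≥ 0, with mean √π/2.

```python
import math
import random

# Sketch (ours, assumed parameterization): for x = x1 + i*x2 with
# x1, x2 iid N(0, 1/2), the modulus r = |x| = sqrt(x1^2 + x2^2) is
# Rayleigh distributed with density 2 r exp(-r^2), r >= 0, and
# E[r] = sqrt(pi)/2 ~ 0.8862.

random.seed(123)
n = 200000
sd = math.sqrt(0.5)
mean_r = sum(
    math.hypot(random.gauss(0.0, sd), random.gauss(0.0, sd))
    for _ in range(n)
) / n
```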

Optimization with a Trace Constraint
Let X̃ be a p × q, p ≤ q, rank p matrix with distinct complex scalar variables as elements. Let A > O and B > O be p × p and q × q Hermitian positive definite constant matrices, respectively. Let u = tr(A^{1/2}X̃BX̃*A^{1/2}), and consider the constraints

E[u^{γ(a−α)/η}] = fixed and E[u^{γ(a−α)/η + δ}] = fixed

over all possible densities f(X̃), where X̃ is p × q, p ≤ q, and of rank p. Then, proceeding as in the real case, we end up with the following densities. For α < a,

f_1(X̃) = C_1 u^γ [1 − b(a − α)u^δ]^{η/(a−α)}, 1 − b(a − α)u^δ > 0; (48)

for α > a,

f_2(X̃) = C_2 u^γ [1 + b(α − a)u^δ]^{−η/(α−a)}; (49)

and, for α → a,

f_3(X̃) = C_3 u^γ e^{−bηu^δ}, (50)

and the normalizing constants are the following:

C_1 = δ |det(A)|^q |det(B)|^p Γ(pq) [b(a − α)]^{(γ+pq)/δ} Γ((γ+pq)/δ + η/(a−α) + 1) / [π^{pq} Γ((γ+pq)/δ) Γ(η/(a−α) + 1)], (51)
C_2 = δ |det(A)|^q |det(B)|^p Γ(pq) [b(α − a)]^{(γ+pq)/δ} Γ(η/(α−a)) / [π^{pq} Γ((γ+pq)/δ) Γ(η/(α−a) − (γ+pq)/δ)], (52)
C_3 = δ |det(A)|^q |det(B)|^p Γ(pq) (bη)^{(γ+pq)/δ} / [π^{pq} Γ((γ+pq)/δ)], (53)

with η/(α−a) − (γ+pq)/δ > 0 in (52). Note that (50) can be considered as a multivariate version of the complex Maxwell-Boltzmann and Rayleigh densities for (γ = 1, δ = 1) and (γ = 1/2, δ = 1), respectively. If q ≤ p and X̃ is of rank q, then we can take u = tr(B^{1/2}X̃*AX̃B^{1/2}) and proceed, as in the p ≤ q case, with p and q interchanged and A and B interchanged. We obtain results parallel to the ones above for the case of p ≤ q.

Arbitrary Moments
As in the real case, we can consider the h-th moment of the absolute value of the determinant, |det(A^{1/2}X̃BX̃*A^{1/2})|^h, for an arbitrary h. The results are parallel to those in the real case, with the real matrix-variate gamma Γ_p(·) replaced by the complex matrix-variate gamma Γ̃_p(·), and the structural representations are then in terms of independently distributed complex scalar type-1 beta, type-2 beta, and gamma variables.

Concluding Remarks
In this paper, it is shown that a large number of statistical densities belonging to the pathway family [16] of densities in the scalar, vector, and matrix-variate cases in the real and complex domains can be obtained by optimizing a certain entropy measure. The calculus of variations technique was used for the optimization. The notations were simplified and made consistent in order to avoid having too many symbols for denoting different types of variables. Mathematical variables and random variables are treated in the same way to avoid the double notation usually used for random variables and the resulting confusion.