Extended k-Gamma and k-Beta Functions of Matrix Arguments

Abstract: Various k-special functions, such as the k-gamma function, the k-beta function, and the k-hypergeometric functions, have been introduced and investigated. Recently, the k-gamma function of a matrix argument and the k-beta function of matrix arguments have been presented and studied. In this paper, we aim to introduce an extended k-gamma function of a matrix argument and an extended k-beta function of matrix arguments and to investigate some of their properties, such as functional relations, an inequality, an integral formula, and integral representations. An application of the extended k-beta function of matrix arguments to statistics is also considered.


Introduction and Preliminaries
Special functions of matrix arguments have appeared in statistics, theoretical physics, the theory of group representations, and number theory (see, e.g., [1][2][3]). Special functions of a matrix argument were investigated in the study of spherical functions on certain symmetric spaces and of multivariate analysis in statistics (see [4]). Certain properties of the gamma function of a matrix argument, the beta function of matrix arguments, and hypergeometric functions of matrix arguments have been studied (see, e.g., [5][6][7][8][9]). Various k-special functions such as the k-gamma function, the k-beta function, and k-hypergeometric functions have been introduced and investigated (see, e.g., [10][11][12][13][14][15]). Some k-special functions of matrix arguments, such as the k-gamma function of a matrix argument, the k-beta function of matrix arguments, and k-hypergeometric functions of matrix arguments, have been introduced and investigated (see [16,17]).
In this paper, we extend the k-gamma and k-beta functions of matrix arguments and investigate some properties of the extended functions.
In what follows, we shall denote by Z, N, R, R⁺, and C the sets of integers, positive integers, real numbers, positive real numbers, and complex numbers, respectively. Also put N₀ := N ∪ {0}, Z₀⁻ := Z \ N, and R₀⁺ := R⁺ ∪ {0}. For r ∈ N, let C^{r×r} denote the set of all r × r matrices whose entries are in C. Let σ(A) denote the spectrum (the set of all eigenvalues) of A ∈ C^{r×r}. For A ∈ C^{r×r}, let α(A) := max{ℜ(z) | z ∈ σ(A)} and β(A) := min{ℜ(z) | z ∈ σ(A)}. Let A be a matrix in C^{r×r} such that

ℜ(z) > 0 for all z ∈ σ(A). (1)
For A ∈ C^{r×r}, its 2-norm is denoted by ‖A‖ := max_{y ≠ 0} ‖Ay‖₂ / ‖y‖₂, where, for any vector y ∈ C^r, ‖y‖₂ = (y^H y)^{1/2} is the Euclidean norm of y. Here y^H denotes the conjugate transpose of y.
If R is a matrix in C^{s×s} which satisfies ℜ(w) > 0 for all w ∈ σ(R), then Γ(R) is well defined as follows:

Γ(R) = ∫₀^∞ e^{−t} t^{R−I} dt, t^{R−I} := exp((R − I) ln t), (2)

where I is the identity matrix of order s (see, e.g., [6][7][8][18]). Also, throughout this paper, let I denote the identity matrix of the order required by the context. If a(w) and b(w) are analytic functions of the complex variable w, defined on an open set Θ of the complex plane, and R is a matrix in C^{s×s} such that σ(R) ⊂ Θ, one finds from the properties of the matrix functional calculus that a(R) b(R) = b(R) a(R) (see [19] (p. 550)). Hence, if S ∈ C^{s×s} is another matrix with σ(S) ⊂ Θ such that RS = SR, then a(R) b(S) = b(S) a(R) (see [6,7]). Since the reciprocal gamma function Γ⁻¹(w) = 1/Γ(w) is an entire function of the complex variable w, for any R in C^{s×s}, the Riesz–Dunford functional calculus shows that the image Γ⁻¹(R) of Γ⁻¹(w) acting on R is a well-defined matrix (see [19] (Chapter 7)). Moreover, if T is a matrix in C^{s×s} such that

T + nI is invertible for every n ∈ N₀, (3)

then Γ(T) is invertible, its inverse coincides with Γ⁻¹(T), and

Γ⁻¹(T) = T(T + I) · · · (T + (n − 1)I) Γ⁻¹(T + nI)  (n ∈ N₀) (4)

(see [20] (p. 253)). Under condition (3), (4) can be written in the form

Γ(T + nI) = T(T + I) · · · (T + (n − 1)I) Γ(T)  (n ∈ N₀). (5)

The Pochhammer symbol (or shifted factorial) (λ)_ν is defined (for λ, ν ∈ C) by

(λ)_ν := Γ(λ + ν)/Γ(λ)  (λ + ν ∈ C \ Z₀⁻), (6)

it being accepted conventionally that (0)₀ = 1. Now, one applies the matrix functional calculus to this function to find that, for any matrix R in C^{s×s},

(R)_n = R(R + I) · · · (R + (n − 1)I)  (n ∈ N), (R)₀ = I. (7)

Also, in view of (5), (7) can be expressed in terms of the gamma function of the matrix argument:

(R)_n = Γ(R + nI) Γ⁻¹(R)  (n ∈ N₀). (8)

Jódar and Cortés [6] (Theorem 1) proved the following limit expression of the gamma function of a matrix argument (cf. [21] (p. 2, Equation (6))):

Γ(P) = lim_{n→∞} (n − 1)! [(P)_n]⁻¹ n^P, (9)

where P ∈ C^{r×r} satisfies (1).
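As a quick numerical sanity check of the functional-calculus facts above, the scalar identity Γ(z + 1) = z Γ(z) lifts to Γ(R + I) = R Γ(R) for any matrix R whose eigenvalues have positive real parts, since both sides are values of one analytic function at R. The sketch below (an illustration, not part of the paper's development) evaluates Γ on a 2 × 2 matrix with SciPy's Schur–Parlett routine `scipy.linalg.funm`; the particular matrix R is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import funm
from scipy.special import gamma

# A diagonalizable 2x2 matrix whose eigenvalues (2.5 and 1.5) have
# positive real parts, so Gamma(R) is well defined.
R = np.array([[2.5, 1.0],
              [0.0, 1.5]])
I = np.eye(2)

G  = funm(R, gamma)        # Gamma(R) via the matrix functional calculus
G1 = funm(R + I, gamma)    # Gamma(R + I)

# The scalar identity Gamma(z+1) = z*Gamma(z) lifts to matrices:
# Gamma(R + I) = R @ Gamma(R).
ok_recurrence = np.allclose(G1, R @ G, rtol=1e-6)
print(ok_recurrence)
```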
If f(P) is well defined and S is an invertible matrix in C^{r×r}, then [22] (p. 541)

f(S P S⁻¹) = S f(P) S⁻¹. (10)

Using the Schur decomposition of P ∈ C^{r×r}, it follows [22] (pp. 336, 556) that

‖e^{tP}‖ ≤ e^{t α(P)} Σ_{j=0}^{r−1} (‖P‖ r^{1/2} t)^j / j!  (t ≥ 0). (11)

Let x ∈ C, k ∈ R⁺, and n ∈ N. Then the k-Pochhammer symbol (x)_{n,k} and the k-gamma function Γ_k are defined by (see [10,11])

(x)_{n,k} := x(x + k)(x + 2k) · · · (x + (n − 1)k) (12)

and

Γ_k(x) := lim_{n→∞} n! k^n (nk)^{x/k − 1} / (x)_{n,k}. (13)

The Eulerian integral representation of the k-gamma function Γ_k in (13) is given by (see [10,11])

Γ_k(x) = ∫₀^∞ t^{x−1} exp(−t^k/k) dt  (ℜ(x) > 0). (14)

The k-gamma function Γ_k satisfies the following fundamental functional relation:

Γ_k(x + k) = x Γ_k(x). (15)

The k-Pochhammer symbol (x)_{n,k} in (12) can be expressed in terms of the k-gamma function Γ_k as follows:

(x)_{n,k} = Γ_k(x + nk) / Γ_k(x). (16)

The beta function B(α, β) is defined by (see, e.g., [21] (p. 8, Equation (43)))

B(α, β) = ∫₀^1 t^{α−1} (1 − t)^{β−1} dt  (ℜ(α) > 0, ℜ(β) > 0). (17)

Let P, Q be matrices in C^{r×r} satisfying (1). Then the beta function B(P, Q) of matrix arguments is well defined as follows (see [6]):

B(P, Q) = ∫₀^1 t^{P−I} (1 − t)^{Q−I} dt. (18)

Further, if P, Q are diagonalizable matrices in C^{r×r} such that PQ = QP, then

B(P, Q) = Γ(P) Γ(Q) Γ⁻¹(P + Q). (19)

By application of the matrix functional calculus, the k-Pochhammer symbol of a matrix P in C^{r×r} is defined as (see [16])

(P)_{n,k} = P(P + kI)(P + 2kI) · · · (P + (n − 1)kI)  (n ∈ N) (20)

and (P)_{0,k} = I (k ∈ R⁺). The limit expression and the integral form of the k-gamma function of a matrix P in C^{r×r} satisfying (1) are given as follows (see [16]):

Γ_k(P) = lim_{n→∞} n! k^n (nk)^{P/k − I} [(P)_{n,k}]⁻¹ (21)

and

Γ_k(P) = ∫₀^∞ t^{P−I} exp(−t^k/k) dt. (22)

If P is a matrix in C^{r×r} such that P + nkI is an invertible matrix for every n ∈ N₀ and k ∈ R⁺, then Γ_k(P) is invertible and its inverse coincides with Γ_k⁻¹(P); one finds (see [16])

(P)_{n,k} = Γ_k(P + nkI) Γ_k⁻¹(P). (23)

The k-beta function of matrix arguments is defined by (see [16])

B_k(P, Q) = (1/k) ∫₀^1 t^{P/k − I} (1 − t)^{Q/k − I} dt, (24)

where P and Q are matrices in C^{r×r} satisfying (1). If P and Q are diagonalizable matrices in C^{r×r} which satisfy (1) and are such that PQ = QP, one gets (see [16])

B_k(P, Q) = Γ_k(P) Γ_k(Q) Γ_k⁻¹(P + Q). (25)
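The scalar relations (14)–(16) and the scalar case of (24)–(25) can be checked numerically. The sketch below assumes the standard Díaz–Pariguan closed form Γ_k(x) = k^{x/k − 1} Γ(x/k), which follows from (14) by the substitution u = t^k/k; the sample values of k, x, y are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, beta

def gamma_k(x, k):
    """Closed form Gamma_k(x) = k**(x/k - 1) * Gamma(x/k)."""
    return k**(x / k - 1.0) * gamma(x / k)

k, x, y = 2.0, 3.0, 1.4

# Eulerian integral (14): Gamma_k(x) = int_0^inf t**(x-1) exp(-t**k/k) dt
val, _ = quad(lambda t: t**(x - 1) * np.exp(-t**k / k), 0, np.inf)
ok_integral = np.isclose(val, gamma_k(x, k))

# Fundamental relation (15): Gamma_k(x + k) = x * Gamma_k(x)
ok_relation = np.isclose(gamma_k(x + k, k), x * gamma_k(x, k))

# Scalar k-beta: B_k(x, y) = (1/k) B(x/k, y/k) = Gamma_k(x)Gamma_k(y)/Gamma_k(x+y)
lhs = beta(x / k, y / k) / k
rhs = gamma_k(x, k) * gamma_k(y, k) / gamma_k(x + y, k)
ok_beta = np.isclose(lhs, rhs)
print(ok_integral, ok_relation, ok_beta)
```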

Extensions of the Gamma Functions and the Beta Functions of Matrix Arguments
Chaudhry and Zubair [23] introduced the following extension of the gamma function (see also [24]):

Γ_b(x) = ∫₀^∞ t^{x−1} exp(−t − b/t) dt  (ℜ(x) > 0, b ∈ R₀⁺). (26)

Chaudhry et al. [25] provided the following extension of the beta function (see also [24]):

B(x, y; b) = ∫₀^1 t^{x−1} (1 − t)^{y−1} exp(−b/(t(1 − t))) dt  (ℜ(x) > 0, ℜ(y) > 0, b ∈ R₀⁺). (27)

Here, based on (26) and (27), we introduce extensions of the gamma function of a matrix argument (2), the beta function of matrix arguments (18), the k-gamma function of a matrix argument (22), and the k-beta function of matrix arguments (24), which are given in the following definition.

Definition 1. (i) Let P be a matrix in C^{r×r} satisfying (1). Then an extension of the gamma function of a matrix argument (2) is defined by

Γ_b(P) = ∫₀^∞ t^{P−I} exp(−t − b/t) dt. (28)

(ii) Let P, Q be matrices in C^{r×r} satisfying (1). Then an extension of the beta function of matrix arguments (18) is defined by

B(P, Q; b) = ∫₀^1 t^{P−I} (1 − t)^{Q−I} exp(−b/(t(1 − t))) dt. (29)

(iii) Let P be a matrix in C^{r×r} satisfying (1). Then an extension of the k-gamma function of a matrix argument (22) is defined by

(iv) Let P and Q be matrices in C^{r×r} satisfying (1). Then an extension of the k-beta function of matrix arguments (24) is defined by

Remark 1. (a) Since the additional exponential factor in each integrand is bounded by 1 for all b ∈ R₀⁺ and k ∈ R⁺, as in [6,15], it is found that the four extended functions in Definition 1 are well defined.
(b) If the matrices P, Q ∈ C 1×1 ≡ C, then the four extended functions in Definition 1 reduce to functions of scalar arguments.
(c) Clearly, setting b = 0 in Definition 1 recovers the corresponding unextended functions: Γ_0(P) = Γ(P), B(P, Q; 0) = B(P, Q), Γ_{k,0}(P) = Γ_k(P), and B_{k,0}(P, Q) = B_k(P, Q).
(d) The usefulness of the extended gamma function (26) and the extended beta function (27) in diverse engineering and physical problems, and their relevant connections with some other special functions, are pointed out in [23,25].

Some Properties of the Extended k-Gamma Functions of Matrix Arguments
In this section, we investigate certain properties of the extended gamma functions of matrix arguments introduced in Section 2.

Theorem 1. Let k ∈ R⁺ and b ∈ R₀⁺. Also let P be a matrix in C^{r×r} such that P, P − I, P + (k − 1)I and P − (k + 1)I satisfy (1). Then

Proof of Theorem 1. From (30), let

Then

We find from (30) that

By using integration by parts,

Combining the two identities (33) and (34), we obtain (32).
Setting b = 0 in the result in Theorem 1, we get an identity for the k-gamma function of a matrix argument in the following corollary.

Corollary 1. Let k ∈ R⁺. Also let P be a matrix in C^{r×r} such that P, P − I and P + (k − 1)I satisfy (1). Then

Theorem 2. Let 1 < p < ∞ with 1/p + 1/q = 1. Also let k ∈ R⁺ and b ∈ R₀⁺. Further let P, Q be matrices in C^{r×r} satisfying (1). Then

Proof of Theorem 2. From (30), we have

By using the Hölder inequality, we get

Choosing P, Q to be matrices in C^{1×1} such that P = (x) ≡ x and Q = (y) ≡ y with x, y ∈ R⁺, we find the inequality (39). The inequality (40) is a special case of (39) when k = 1 and b = 0 and is well known (see, e.g., [21] (p. 105)).
Corollary 2. Let 1 < p < ∞ with 1/p + 1/q = 1. Also let x, y, k ∈ R⁺ and b ∈ R₀⁺. Then

and

It is noted that the inequalities (39) and (40) show that Γ_{k,b}(x) and Γ(x) are log-convex on (0, ∞).
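For the scalar case, the log-convexity claimed after (39) and (40) can be probed numerically. The sketch below checks the b = 0 case, i.e., log-convexity of Γ_k on (0, ∞), on a grid of midpoints, using the closed form Γ_k(x) = k^{x/k − 1} Γ(x/k) (an assumption imported from the scalar theory rather than from the matrix development above).

```python
import numpy as np
from scipy.special import gammaln

def log_gamma_k(x, k):
    # log Gamma_k(x) via the closed form Gamma_k(x) = k**(x/k - 1) * Gamma(x/k)
    return (x / k - 1.0) * np.log(k) + gammaln(x / k)

k = 1.5
xs = np.linspace(0.2, 8.0, 60)
X, Y = np.meshgrid(xs, xs)

# Log-convexity: log Gamma_k((x+y)/2) <= (log Gamma_k(x) + log Gamma_k(y)) / 2
midpoint  = log_gamma_k((X + Y) / 2.0, k)
chord_avg = (log_gamma_k(X, k) + log_gamma_k(Y, k)) / 2.0
ok_logconvex = bool(np.all(midpoint <= chord_avg + 1e-12))
print(ok_logconvex)
```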

Theorem 3.
Let b, k ∈ R⁺. Also let P be a matrix in C^{r×r} such that P and −P satisfy (1). Then the following reflection formula holds:

Proof of Theorem 3. Setting t = b/τ in (30), we have
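In the scalar prototype (26) (a 1 × 1 matrix argument with k = 1), the substitution t = b/τ used in the proof gives the reflection formula Γ_b(x) = b^x Γ_b(−x); this case can also be cross-checked against the known Macdonald-function representation Γ_b(x) = 2 b^{x/2} K_x(2√b). The sketch below verifies both numerically; the values of b and x are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def gamma_ext(x, b):
    """Chaudhry-Zubair extended gamma (scalar prototype of (26)):
    Gamma_b(x) = int_0^inf t**(x-1) exp(-t - b/t) dt, b > 0."""
    val, _ = quad(lambda t: t**(x - 1) * np.exp(-t - b / t), 0, np.inf)
    return val

b, x = 0.5, 1.3

# Reflection: substituting t = b/tau gives Gamma_b(x) = b**x * Gamma_b(-x)
ok_reflection = np.isclose(gamma_ext(x, b), b**x * gamma_ext(-x, b), rtol=1e-6)

# Cross-check against the closed form 2 * b**(x/2) * K_x(2*sqrt(b))
ok_bessel = np.isclose(gamma_ext(x, b),
                       2 * b**(x / 2) * kv(x, 2 * np.sqrt(b)), rtol=1e-6)
print(ok_reflection, ok_bessel)
```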

Some Properties of the Extended k-Beta Functions of Matrix Arguments
In this section, we investigate certain properties of the extended beta functions of matrix arguments introduced in Section 2.

Theorem 4.
Let k ∈ R⁺. Also let P, Q, R be matrices in C^{r×r} satisfying (1). Then

In addition, if P + R and Q + R are diagonalizable matrices satisfying (1) such that P, Q, R are mutually commutative, then

Proof of Theorem 4. Multiplying both sides of (31) by b^{R−I}, integrating the resulting identity with respect to the variable b from 0 to ∞, and letting

we find

which, upon using (22) and (24), yields (42).
Also, we may use Fubini's theorem (see, e.g., [26] (p. 65)) to interchange the order of integration in the first equality of (45), which, upon using the same substitution of the integration variable as in (44), leads to the last equality of (45). Similarly, (43) can be obtained; the proof is left to the interested reader.

Corollary 3.
Let k ∈ R⁺. Also let P, Q be matrices in C^{r×r} satisfying (1). Then

Proof of Corollary 3. From (22), we have

where O is the zero matrix in C^{r×r}. Note that

Using (50) in (49), we get

Setting R = I in (42) and using (47), we obtain (48).
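For commuting diagonalizable matrices, the b = 0 relation (25) reduces, on a common eigenbasis, to the scalar identity B_k(x, y) = Γ_k(x) Γ_k(y)/Γ_k(x + y). The sketch below checks the integral (24) against (25) for a pair of diagonal matrices (a deliberately simple commuting family); the chosen eigenvalues and k are arbitrary, and the closed form for Γ_k is the standard scalar one.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def gamma_k(x, k):
    """Closed form Gamma_k(x) = k**(x/k - 1) * Gamma(x/k)."""
    return k**(x / k - 1.0) * gamma(x / k)

def beta_k(x, y, k):
    """B_k(x, y) = (1/k) int_0^1 t**(x/k - 1) (1-t)**(y/k - 1) dt."""
    val, _ = quad(lambda t: t**(x / k - 1) * (1 - t)**(y / k - 1), 0, 1)
    return val / k

k = 2.0
# Commuting diagonalizable matrices satisfying (1): take them diagonal.
p = np.array([1.2, 2.0])   # eigenvalues of P
q = np.array([0.7, 1.5])   # eigenvalues of Q

# Evaluate (24) eigenvalue-wise and compare with (25) on the same basis.
Bk  = np.diag([beta_k(pi, qi, k) for pi, qi in zip(p, q)])
rhs = np.diag(gamma_k(p, k) * gamma_k(q, k) / gamma_k(p + q, k))

ok_matrix_identity = np.allclose(Bk, rhs, rtol=1e-6)
print(ok_matrix_identity)
```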

Theorem 5.
Let k ∈ R + and b ∈ R + 0 . Also let P, Q be matrices in C r×r such that P, Q, P + kI and Q + kI satisfy (1). Then B k,b (P, Q + kI) + B k,b (P + kI, Q) = B k,b (P, Q).
Proof of Theorem 5. Let L be the left member of (52). It follows from (31) that

Note that, for 0 < t < 1, t^I = Σ_{j=0}^∞ ((ln t)^j / j!) I = exp(ln t) I = t I.
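The contiguous relation in Theorem 5 rests only on the splitting t + (1 − t) = 1 of the integrand, so it survives any b-dependent exponential factor. The sketch below verifies the scalar k = 1 prototype, using the extended beta function (27); the parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def beta_ext(x, y, b):
    """Extended beta (scalar prototype of (27)):
    B(x, y; b) = int_0^1 t**(x-1) (1-t)**(y-1) exp(-b/(t(1-t))) dt."""
    val, _ = quad(lambda t: t**(x - 1) * (1 - t)**(y - 1)
                  * np.exp(-b / (t * (1 - t))), 0, 1)
    return val

x, y, b = 1.4, 2.3, 0.25

# t**(x-1)(1-t)**y + t**x (1-t)**(y-1) = t**(x-1)(1-t)**(y-1) since t+(1-t)=1,
# and the exponential factor is untouched, so the relation holds for any b:
lhs = beta_ext(x, y + 1, b) + beta_ext(x + 1, y, b)
ok_contiguous = np.isclose(lhs, beta_ext(x, y, b), rtol=1e-8)
print(ok_contiguous)
```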

Theorem 6.
Let k ∈ R⁺ and b ∈ R₀⁺. Also let P, Q be matrices in C^{r×r} satisfying (1). Then

and

Proof of Theorem 6. Setting t = cos²θ in (31) yields (55). Let t = u² in (30). Replace P by Q in (30) and set t = v² in the resulting identity. By multiplying the two integrals, we have

Let u = (r cos θ)^{1/k} and v = (r sin θ)^{1/k} in (57). Then the associated Jacobian, denoted by J, is given by

Hence we have

Finally, replacing the inner integral in (58) by (57), we obtain (56).
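The trigonometric representation obtained from t = cos²θ can be verified numerically in the scalar k = 1 prototype (27), for which the substitution turns t(1 − t) into cos²θ sin²θ and dt into −2 cos θ sin θ dθ. Parameter values below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def beta_ext(x, y, b):
    """Extended beta (scalar prototype of (27))."""
    val, _ = quad(lambda t: t**(x - 1) * (1 - t)**(y - 1)
                  * np.exp(-b / (t * (1 - t))), 0, 1)
    return val

def beta_ext_trig(x, y, b):
    """Same integral after t = cos(theta)**2, over (0, pi/2)."""
    f = lambda th: (np.cos(th)**(2 * x - 1) * np.sin(th)**(2 * y - 1)
                    * np.exp(-b / (np.cos(th)**2 * np.sin(th)**2)))
    val, _ = quad(f, 0, np.pi / 2)
    return 2 * val

x, y, b = 1.7, 2.2, 0.3
ok_trig = np.isclose(beta_ext(x, y, b), beta_ext_trig(x, y, b), rtol=1e-8)
print(ok_trig)
```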

Theorem 7.
Let k ∈ R⁺ and b ∈ R₀⁺. Also let P, Q be matrices in C^{r×r} satisfying (1). Then

Proof of Theorem 7. From (30), we find

Change variables by setting

The associated Jacobian is

Then we obtain (59).
Setting b = 0 in (56) or (59) and considering (c) in Remark 1, we obtain a relation involving the k-gamma and k-beta functions of matrix arguments (cf. (25)), which is given in the following corollary (see [16] (Theorem 3.3)).

An Application
As with the gamma and beta functions and their generalizations (see, e.g., [4,6,11,15,16,[23][24][25],27,28]), the extended k-gamma and k-beta functions of matrix arguments admit many potential applications; here, one application of the extended k-beta function of matrix arguments to statistics is considered.
In what follows, we assume that P, Q are matrices in C^{r×r} that satisfy (1), and b, k ∈ R⁺. We define the extended k-beta distribution of matrix arguments by

The incomplete extended k-beta function of matrix arguments B_{x,k}(P, Q; b) is given by

Then the cumulative distribution function of (62) can be expressed as

and, obviously, F_k(1) = 1. A random variable X with probability density function given by (62) is said to have the extended k-beta distribution of matrix arguments with shape parameters P, Q ∈ C^{r×r} and b, k ∈ R⁺. If R is any matrix in C^{r×r} that satisfies (1), then one finds (see, e.g., [25,29])

The particular case of (65) when R = I ∈ C^{r×r} (the identity matrix) gives the mean of the distribution:

Also, the variance of the distribution is

Further, the moment generating function of the distribution is

Remark 2. The incomplete extended k-beta function of matrix arguments B_{x,k}(P, Q; b) in (63) reduces to several relatively simple incomplete extended beta functions (see, e.g., [25,30] (p. 900, Entry 8.39)). Likewise, all the other identities here may reduce to many corresponding relatively simple and similar ones (see, e.g., [11,15,16,25,27,28,31]).
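In the scalar reduction with b = 0, the density (62) is proportional to t^{p/k − 1}(1 − t)^{q/k − 1} on (0, 1), and the moments can be checked in closed form: the mean is B_k(p + k, q)/B_k(p, q) = p/(p + q) and the variance works out to k p q/((p + q)²(p + q + k)). These scalar reductions are derived here only as an illustration consistent with (65). The sketch below verifies them numerically; the values of k, p, q are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

k, p, q = 2.0, 1.6, 2.4

# b = 0 scalar reduction of (62): density on (0, 1) proportional to
# t**(p/k - 1) * (1 - t)**(q/k - 1); the normalizer is k * B_k(p, q),
# where B_k(p, q) = (1/k) * B(p/k, q/k).
norm = beta(p / k, q / k)            # = k * B_k(p, q)
pdf  = lambda t: t**(p / k - 1) * (1 - t)**(q / k - 1) / norm

total,  _ = quad(pdf, 0, 1)
mean,   _ = quad(lambda t: t * pdf(t), 0, 1)
second, _ = quad(lambda t: t * t * pdf(t), 0, 1)
var = second - mean**2

ok_total = np.isclose(total, 1.0)
ok_mean  = np.isclose(mean, p / (p + q))
ok_var   = np.isclose(var, k * p * q / ((p + q)**2 * (p + q + k)))
print(ok_total, ok_mean, ok_var)
```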

Conclusions
An extended k-gamma function of a matrix argument and an extended k-beta function of matrix arguments were introduced. Some properties of these extended functions, such as functional relations, an inequality, an integral formula, and integral representations, were investigated. As noted above, special functions of matrix arguments have appeared in and been applied to a diversity of research subjects such as statistics, theoretical physics, the theory of group representations, and number theory. In this connection, the newly introduced functions of matrix arguments, together with the properties presented here and others yet to be investigated, are expected to find applications in various research subjects, including those mentioned above. For example, in addition to Section 5, the extended gamma and beta functions may be employed in the multivariate statistical analysis of damage in tribo-fatigue and mechanothermodynamic systems (see, e.g., [32][33][34][35][36][37]). Applications to other research subjects and the investigation of further properties of these newly introduced functions are left to the authors and interested researchers for future study.
Author Contributions: Writing-original draft, G.S.K., P.A. and J.C.; Writing-review and editing, G.S.K., P.A. and J.C. All authors contributed equally to this research work. All authors have read and agreed to the published version of the manuscript.