Certain Matrix Riemann–Liouville Fractional Integrals Associated with Functions Involving Generalized Bessel Matrix Polynomials

Abstract: Fractional integrals involving special functions and polynomials have significant importance and applications in diverse areas of science, for example, statistics, applied mathematics, physics, and engineering. In this paper, we introduce a slightly modified matrix Riemann–Liouville fractional integral and investigate it for products of certain elementary functions and generalized Bessel matrix polynomials. We also evaluate this matrix Riemann–Liouville fractional integral for a matrix version of the Jacobi polynomials. Furthermore, we point out that matrix Riemann–Liouville fractional integrals associated with a variety of other functions and polynomials can be derived, and we pose these as problems for further investigation.

Many formulas for integral transforms of orthogonal matrix polynomials have been provided. However, formulas for fractional integral transforms of those polynomials are little known and hard to trace in the literature. This motivates us to investigate Riemann-Liouville fractional integral transforms of functions involving generalized Bessel matrix polynomials. In this study, we introduce certain matrix Riemann-Liouville fractional integrals (23) and evaluate them for generalized Bessel matrix polynomials (21) combined with certain elementary matrix functions, exponential functions, and logarithmic functions. We also evaluate these matrix Riemann-Liouville fractional integrals for a matrix version of the Jacobi polynomials (42). Furthermore, we point out that matrix Riemann-Liouville fractional integrals of a variety of other matrix functions and matrix polynomials can be derived, and we pose these as problems for further investigation.

Some Definitions and Notations
In this section, for later use, we recall some definitions and notations whose more detailed accounts and applications may be found in [53][54][55][56]. We also introduce a slightly modified matrix version of the Riemann-Liouville fractional integrals (see (23)).
Here and in the following, let C, R^+, N, and Z_0^- denote the sets of complex numbers, positive real numbers, positive integers, and non-positive integers, respectively, and let N_0 := N ∪ {0}. In addition, let C^{s×s} be the vector space of all square matrices of order s ∈ N whose entries are in C. For T ∈ C^{s×s}, let σ(T) denote the set of all eigenvalues of T, called the spectrum of T. Furthermore, for T ∈ C^{s×s}, let

μ(T) := max{ℜ(ξ) : ξ ∈ σ(T)} and μ̃(T) := min{ℜ(ξ) : ξ ∈ σ(T)},

which implies μ̃(T) = −μ(−T). Here, μ(T) is called the spectral abscissa of T, and the matrix T is said to be positive stable if μ̃(T) > 0. For A ∈ C^{s×s}, its 2-norm is given by

‖A‖ := sup_{y ≠ 0} ‖Ay‖_2 / ‖y‖_2,

where, for any vector y ∈ C^s, ‖y‖_2 = (y^H y)^{1/2} is the Euclidean norm of y. Here, y^H denotes the conjugate transpose of y.
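The spectral abscissa and the positive-stability test can be sketched numerically. The helper names below (`spectral_abscissa`, `is_positive_stable`) and the sample matrix are our own illustration, not from the paper.

```python
import numpy as np

def spectral_abscissa(T):
    """mu(T): the largest real part among the eigenvalues of T."""
    return max(ev.real for ev in np.linalg.eigvals(T))

def is_positive_stable(T):
    """T is positive stable iff every eigenvalue of T has positive real part."""
    return min(ev.real for ev in np.linalg.eigvals(T)) > 0

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # triangular, so sigma(T) = {2, 3}
print(spectral_abscissa(T))         # ≈ 3.0
print(is_positive_stable(T))        # True

# The min and max real parts are related by mu_tilde(T) = -mu(-T):
mu_tilde = min(ev.real for ev in np.linalg.eigvals(T))
print(np.isclose(mu_tilde, -spectral_abscissa(-T)))  # True
```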
If f (z) and g(z) are analytic functions of the complex variable z, which are defined in an open set Ω of the complex plane and R is a matrix in C s×s such that σ(R) ⊂ Ω, one finds from the properties of the matrix functional calculus that f (R) g(R) = g(R) f (R) (see, e.g., [53] p. 558). Thus, if S in C s×s is another matrix with σ(S) ⊂ Ω, such that RS = SR, then f (R)g(S) = g(S) f (R) (see, e.g., [57,58]).
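The commutation property f(R) g(S) = g(S) f(R) for commuting R and S can be checked numerically. The matrices and the particular choices f = exp and g = sin below are our own illustration under the stated hypotheses.

```python
import numpy as np
from scipy.linalg import expm, funm

R = np.array([[1.0, 2.0],
              [0.0, 3.0]])
S = R @ R                    # S is a polynomial in R, hence RS = SR

f_R = expm(R)                # f(z) = exp(z) evaluated at R
g_S = funm(S, np.sin)        # g(z) = sin(z) evaluated at S

# Since RS = SR, the matrix functional calculus gives f(R) g(S) = g(S) f(R):
print(np.allclose(f_R @ g_S, g_S @ f_R))  # True
```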
The Gamma function Γ(z) is defined by (see, e.g., [59] Section 1.1)

Γ(z) = ∫_0^∞ e^{−t} t^{z−1} dt  (ℜ(z) > 0).

The ψ-function (or digamma function) is defined by the logarithmic derivative of the Gamma function (see, e.g., [59]):

ψ(z) = Γ′(z) / Γ(z).

The Pochhammer symbol (λ)_ν is defined (for λ, ν ∈ C), in terms of the Gamma function Γ, by (see [59] pp. 2, 5)

(λ)_ν := Γ(λ + ν) / Γ(λ),

where it is accepted conventionally that (0)_0 := 1. If R is a positive stable matrix in C^{s×s}, then the Gamma matrix function Γ(R) is well defined as follows (see, e.g., [57,58,60,61]):

Γ(R) = ∫_0^∞ e^{−t} t^{R−I} dt,  t^{R−I} := exp((R − I) ln t).

Here and elsewhere, let I and 0 denote the identity and zero matrices of a square matrix of any order, respectively. Since the reciprocal Gamma function Γ^{−1}(z) = 1/Γ(z) is an entire function of the complex variable z, for any R in C^{s×s}, the Riesz-Dunford functional calculus reveals that the image of Γ^{−1}(z) acting on R, denoted by Γ^{−1}(R), is a well-defined matrix (see [53], Chapter 7). Moreover, if T is a matrix in C^{s×s} such that T + nI is invertible for every integer n ∈ N_0, then Γ(T) is invertible, its inverse coincides with Γ^{−1}(T), and (see, e.g., [62] p. 253)

(T)_n = Γ(T + nI) Γ^{−1}(T)  (n ∈ N_0),

where the matrix Pochhammer symbol is given by (T)_0 := I and (T)_n := T(T + I) ⋯ (T + (n − 1)I) (n ∈ N). Jódar and Cortés [57] in their Theorem 1 proved the following limit expression for the Gamma function of a matrix argument (cf. [59] p. 2, Equation (6)):

Γ(R) = lim_{n→∞} (n − 1)! [(R)_n]^{−1} n^R,

where R ∈ C^{s×s} is positive stable.
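For a diagonalizable positive stable matrix, Γ(R) can be evaluated numerically through the matrix functional calculus; SciPy's `funm` (Schur-Parlett) applied to the scalar Gamma function serves as a minimal sketch. The sample matrix is our own, and the check below uses the recurrence Γ(z + 1) = z Γ(z), which lifts to Γ(R + I) = R Γ(R) since both sides are functions of the same matrix R.

```python
import numpy as np
from scipy.linalg import funm
from scipy.special import gamma

# A positive stable (triangular, hence easy to inspect) matrix:
R = np.array([[2.5, 0.5],
              [0.0, 4.0]])

# Gamma(R) via the matrix functional calculus:
gamma_R = funm(R, gamma)

# The scalar recurrence Gamma(z + 1) = z Gamma(z) lifts to matrices:
lhs = funm(R + np.eye(2), gamma)
print(np.allclose(lhs, R @ gamma_R))  # True

# The eigenvalues of Gamma(R) are Gamma evaluated at the eigenvalues of R:
print(np.allclose(np.sort(np.linalg.eigvals(gamma_R).real),
                  [gamma(2.5), gamma(4.0)]))  # True
```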
If R is a diagonalizable matrix in C^{s×s}, so that R = T D T^{−1} for an invertible matrix T in C^{s×s} and a diagonal matrix D = diag(λ_1, …, λ_s), then ([63] p. 541)

f(R) = T diag(f(λ_1), …, f(λ_s)) T^{−1}.

A corresponding representation for general R ∈ C^{s×s} follows from the Schur decomposition of R [63]. If R is a positive stable matrix in C^{s×s} which satisfies (7), the digamma matrix function is given by

ψ(R) = Γ′(R) Γ^{−1}(R),

where Γ′(z) (z ∈ C \ Z_0^−) is the derivative of the Gamma function in (3). The beta function B(α, β) is defined by (see, e.g., [59] p. 8, Equation (43))

B(α, β) = ∫_0^1 t^{α−1} (1 − t)^{β−1} dt  (ℜ(α) > 0, ℜ(β) > 0).

Let R, T be positive stable matrices in C^{s×s}. Then, the beta matrix function B(R, T) is well defined as follows (see, e.g., [57]):

B(R, T) = ∫_0^1 t^{R−I} (1 − t)^{T−I} dt.

Further, if R, T are diagonalizable matrices in C^{s×s} such that RT = TR, then

B(R, T) = Γ(R) Γ(T) Γ^{−1}(R + T).

Let p, q ∈ N_0. In addition, let (T)_p and (R)_q be the arrays of p commutative matrices T_1, T_2, …, T_p and q commutative matrices R_1, R_2, …, R_q in C^{s×s}, respectively, such that R_j + ℓI are invertible for 1 ≤ j ≤ q and all ℓ ∈ N_0. Then, the generalized hypergeometric matrix function _pF_q((T)_p; (R)_q; z) (z ∈ C) is defined by (see, e.g., [37,58,64])

_pF_q((T)_p; (R)_q; z) = Σ_{n=0}^∞ (T_1)_n ⋯ (T_p)_n [(R_1)_n]^{−1} ⋯ [(R_q)_n]^{−1} z^n / n!.

In particular, the hypergeometric matrix function _2F_1(A, B; C; z) ≡ F(A, B; C; z) is defined, for matrices A, B, C in C^{s×s} such that C + ℓI are invertible for all ℓ ∈ N_0, by the case p = 2, q = 1.
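The identity B(R, T) = Γ(R) Γ(T) Γ^{−1}(R + T) for commuting diagonalizable matrices can be checked against the Euler integral directly. The matrices below are our own illustration (T = R + I commutes with R by construction), and the integral is evaluated with Gauss-Legendre quadrature, interpreting t^{R−I} as expm((R − I) ln t).

```python
import numpy as np
from scipy.linalg import expm, funm, inv
from scipy.special import gamma

R = np.array([[2.0, 1.0],
              [0.0, 3.0]])
T = R + np.eye(2)          # commutes with R by construction
I2 = np.eye(2)

# Beta matrix function via its Euler integral,
# B(R, T) = integral_0^1 t^{R-I} (1-t)^{T-I} dt,
# with Gauss-Legendre quadrature mapped from (-1, 1) to (0, 1):
nodes, weights = np.polynomial.legendre.leggauss(200)
t = 0.5 * (nodes + 1.0)
w = 0.5 * weights
B_int = sum(wi * expm((R - I2) * np.log(ti)) @ expm((T - I2) * np.log(1 - ti))
            for ti, wi in zip(t, w))

# For commuting diagonalizable R, T: B(R, T) = Gamma(R) Gamma(T) Gamma^{-1}(R + T).
B_gamma = funm(R, gamma) @ funm(T, gamma) @ inv(funm(R + T, gamma))
print(np.allclose(B_int, B_gamma, atol=1e-8))  # True
```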
Let T and R be matrices in C^{s×s} (s ∈ N) such that R + ℓI are invertible for all ℓ ∈ N_0. Then, for each n ∈ N_0, the nth generalized Bessel matrix polynomial Y_n(T, R; z) is defined by (21) (see, e.g., [37,65]). Note that, when s = 1, the nth generalized Bessel matrix polynomial Y_n(T, R; z) reduces to the scalar generalized Bessel polynomials (2).
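For the scalar case, a minimal sketch of the generalized Bessel polynomials in the Krall-Frink form is given below, assuming the standard terminating-series representation y_n(x; a, b) = 2F0(−n, n + a − 1; —; −x/b); the helper names are our own.

```python
from math import factorial

def poch(x, k):
    """Pochhammer symbol (x)_k = x (x + 1) ... (x + k - 1), with (x)_0 = 1."""
    result = 1.0
    for j in range(k):
        result *= x + j
    return result

def bessel_poly(n, a, b, x):
    """Krall-Frink generalized Bessel polynomial
    y_n(x; a, b) = sum_{k=0}^{n} [(-n)_k (n + a - 1)_k / k!] (-x / b)^k."""
    return sum(poch(-n, k) * poch(n + a - 1, k) / factorial(k) * (-x / b) ** k
               for k in range(n + 1))

# For a = b = 2 these reduce to the ordinary Bessel polynomials:
print(bessel_poly(1, 2, 2, 1.0))  # y_1(x) = x + 1, so y_1(1) = 2
print(bessel_poly(2, 2, 2, 1.0))  # y_2(x) = 3x^2 + 3x + 1, so y_2(1) = 7
```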
The matrix Riemann-Liouville fractional integral of order ν adopted in this paper is

(I^ν f)(ξ) := (1 / Γ(ν)) ∫_0^ξ (ξ − t)^{ν−1} f(t) dt  (ℜ(ν) > 0, ξ > 0),  (23)

where f(t) is a function of t and some square matrices such that this integral converges.
For example, let A be a positive stable matrix in C^{s×s}; then, the Riemann-Liouville fractional integrals with matrix parameters of order ν are given by (24). It is noted in passing that (24) is a very slightly modified version of the equation in ([40] Equation (4.3), Definition 4.1; see also [19,38,39]).
The following three lemmas, the first and second of which are easily derivable from (18) and (24), respectively, are required in the subsequent section.

Lemma 1.
(See [19,38-40].) Let A be a positive stable matrix in C^{s×s}. Then, the Riemann-Liouville fractional integral of t^{A−I} of order ν is given by

I^ν { t^{A−I} }(ξ) = Γ(A) Γ^{−1}(A + νI) ξ^{A+νI−I}  (ξ > 0, ℜ(ν) > 0).

Lemma 2. Let σ ∈ C, ξ > 0, and ℜ(ν) > 0. Additionally, let A be a positive stable matrix in C^{s×s} such that A + (ν + ℓ)I are invertible for all ℓ ∈ N_0. Then,

Lemma 3. Let ℜ(ν) > 0, ξ > 0, and n ∈ N. Additionally, let A be a positive stable matrix in C^{s×s} such that A + ℓI and A + (ν + ℓ)I are invertible for all ℓ ∈ N_0.
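The scalar (s = 1) case of Lemma 1, namely I^ν { t^{a−1} }(ξ) = Γ(a)/Γ(a + ν) · ξ^{a+ν−1}, is a classical Riemann-Liouville identity and can be verified numerically. The helper `rl_integral` and the sample parameters are our own illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def rl_integral(f, nu, xi):
    """Riemann-Liouville fractional integral of order nu:
    (I^nu f)(xi) = (1 / Gamma(nu)) * integral_0^xi (xi - t)^(nu - 1) f(t) dt."""
    val, _ = quad(lambda t: (xi - t) ** (nu - 1) * f(t), 0.0, xi)
    return val / gamma(nu)

# Scalar (s = 1) case of Lemma 1:
# I^nu { t^(a-1) }(xi) = Gamma(a) / Gamma(a + nu) * xi^(a + nu - 1).
a, nu, xi = 2.5, 1.5, 2.0
numeric = rl_integral(lambda t: t ** (a - 1.0), nu, xi)
closed = gamma(a) / gamma(a + nu) * xi ** (a + nu - 1.0)
print(np.isclose(numeric, closed))  # True
```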

Main Results
We evaluate the Riemann-Liouville fractional integrals with matrix parameters of certain functions involving the generalized Bessel matrix polynomials in (21) in the following theorems.

Theorem 1.
Let z ∈ C, ℜ(ν) > 0, ξ > 0, n ∈ N_0, and s ∈ N. Additionally, let T and R be matrices in C^{s×s} such that R + ℓI are invertible for all ℓ ∈ N_0 and µ( (2 Then,

Proof. From (21) and (22), we find (30). Using (25) to evaluate the integral in (30), we obtain (31). Applying the following identity, provided kI − A are invertible for all k ∈ N_0, to (31), we obtain (32), which, in terms of (21), leads to the desired identity (29).

Theorem 2.
Let z ∈ C, ℜ(ν) > 0, ξ > 0, n ∈ N_0, and s ∈ N. Additionally, let T and R be matrices in C^{s×s} such that R + ℓI are invertible for all ℓ ∈ N_0, and let S be a positive stable matrix in C^{s×s} such that ℓI − S are invertible for all ℓ ∈ N. Further, let

Then,

Proof. The proof here runs in parallel with that of Theorem 1. The details are omitted.

Theorem 3.
Let z ∈ C, ℜ(ν) > 0, ξ > 0, n ∈ N_0, and s ∈ N. Additionally, let T and R be matrices in C^{s×s} such that R + ℓI are invertible for all ℓ ∈ N_0, and let S be a positive stable matrix in C^{s×s} such that S + (ν + ℓ)I are invertible for all ℓ ∈ N_0. Further, let

Then,

Proof. The proof here runs along the lines of that of Theorem 1. The details are omitted.
Theorem 4. Let z, σ ∈ C, ℜ(ν) > 0, ξ > 0, n ∈ N_0, and s ∈ N. Additionally, let T and R be matrices in C^{s×s} such that R + ℓI are invertible for all ℓ ∈ N_0, and let S be a positive stable matrix in C^{s×s} such that S + (ν + ℓ)I are invertible for all ℓ ∈ N_0. Further, let f_4(t) = t^{S−I} e^{−σt} Y_n(T, tR; z).

Theorem 5.
Let z ∈ C, ℜ(ν) > 0, ξ > 0, n ∈ N_0, and s ∈ N. Additionally, let T and R be matrices in C^{s×s} such that R + ℓI are invertible for all ℓ ∈ N_0, and let S be a positive stable matrix in C^{s×s} such that S + ℓI and S + (ν + ℓ)I are invertible for all ℓ ∈ N_0. Further, let

Then,

Proof. Making particular use of (27), the proof here runs in parallel with that of Theorem 1.
The details are omitted.
The Jacobi polynomials P_n^{(α,β)}(x) may be defined by (see, e.g., [68] p. 254)

P_n^{(α,β)}(x) = ((α + 1)_n / n!) _2F_1(−n, n + α + β + 1; α + 1; (1 − x)/2).

A matrix version of the Jacobi polynomials P_n^{(α,β)}(z) (see, e.g., [68] p. 254) may be defined by

P_n^{(A,B)}(z) = ((A + I)_n / n!) _2F_1(−nI, A + B + (n + 1)I; A + I; (1 − z)/2),  (42)

where n ∈ N_0, z ∈ C, and A, B ∈ C^{s×s} such that A + ℓI are invertible for all ℓ ∈ N_0. We present the Riemann-Liouville fractional integral with matrix parameters of order ν of a function involving the matrix version of the Jacobi polynomials in (42) in the following theorem.

Theorem 6. Let ξ > 0, ℜ(ν) > 0, n ∈ N_0, s ∈ N, and z ∈ C. Additionally, let A, B ∈ C^{s×s} be such that A is positive stable, and A + ℓI and A + (ν + ℓ)I are invertible for all ℓ ∈ N_0. Then, (43) holds.

Proof. Using (42) in (23), we obtain (44). Employing (25) and (11) in (44), we obtain (45), which, in terms of (19), yields the identity (43).
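For the scalar case, the terminating 2F1 representation of the Jacobi polynomials (Rainville's form) can be checked against SciPy's `jacobi`. The helper names and sample parameters below are our own illustration.

```python
import numpy as np
from math import factorial
from scipy.special import jacobi

def poch(x, k):
    """Pochhammer symbol (x)_k, with (x)_0 = 1."""
    r = 1.0
    for j in range(k):
        r *= x + j
    return r

def jacobi_2f1(n, alpha, beta, x):
    """Classical Jacobi polynomial via its terminating 2F1 representation:
    P_n^(a,b)(x) = ((a+1)_n / n!) * 2F1(-n, n+a+b+1; a+1; (1-x)/2)."""
    pref = poch(alpha + 1, n) / factorial(n)
    z = (1.0 - x) / 2.0
    s = sum(poch(-n, k) * poch(n + alpha + beta + 1, k)
            / (poch(alpha + 1, k) * factorial(k)) * z ** k
            for k in range(n + 1))
    return pref * s

n, alpha, beta, x = 3, 0.5, 1.5, 0.3
print(np.isclose(jacobi_2f1(n, alpha, beta, x), jacobi(n, alpha, beta)(x)))  # True
```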

Concluding Remarks
In this paper, we introduced a matrix Riemann-Liouville fractional integral (23) as a slightly modified version of a known matrix Riemann-Liouville fractional integral ([40] Equation (4.3)). Then, we provided matrix Riemann-Liouville fractional integrals of generalized Bessel matrix polynomials together with certain elementary matrix functions, exponential functions, and logarithmic functions, which are given in Theorems 1-5. We also evaluated this matrix Riemann-Liouville fractional integral for a matrix version of the Jacobi polynomials (42) in Theorem 6. It is obvious that the results presented here, which involve certain matrices in C^{s×s}, reduce to the corresponding scalar results when s = 1. In particular, the identity (43) may be specialized to produce corresponding results associated with, for example, Legendre, Zernike, ultraspherical (or, equivalently, Gegenbauer), and Chebyshev polynomials (see, e.g., [67-69]).
We also tried to give a differential equation having a (non-scalar) matrix version of the Jacobi polynomials as its solution; however, this turned out not to be easy with the software at our disposal. Instead, we refer the reader to a paper which deals with the general Jacobi matrix method for solving some nonlinear ordinary differential equations (see [70]).
For matrix versions of Gamma functions, Beta functions, and other special functions which differ from those considered in this paper, the interested reader may refer to [71].
In fact, a remarkably large number of Riemann-Liouville fractional integral transforms (or formulas) involving a variety of functions and polynomials have been presented (see, e.g., [67] pp. 185-212). In this context, we conclude this paper by posing the following problem for further investigation: give matrix versions of the known Riemann-Liouville fractional integral transforms (or formulas) involving a variety of functions and polynomials. For example, recall the nth Laguerre matrix polynomial L_n^{(A,λ)}(t) given by (see [64] Equation (10))

L_n^{(A,λ)}(t) = Σ_{k=0}^n ((−1)^k λ^k / (k! (n − k)!)) (A + I)_n [(A + I)_k]^{−1} t^k,  (46)

where ℜ(λ) > 0 and A ∈ C^{s×s} (s ∈ N) is such that A + ℓI are invertible for all ℓ ∈ N_0. As in Theorem 6, we find a corresponding fractional integral formula, where ξ > 0, ℜ(ν) > 0, n ∈ N_0, and the restrictions in (46) are assumed.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Acknowledgments:
The authors are very grateful to the anonymous referees for their constructive and encouraging comments which improved this paper.

Conflicts of Interest:
The authors declare no conflicts of interest.