Article

Some Relations on the rRs(P,Q,z) Matrix Function

1 Department of Mathematics, Faculty of Science, Assiut University, Assiut 71516, Egypt
2 Department of Mathematics, Al-Aqsa University-Gaza, Gaza Strip 79779, Palestine
3 Engineering School, DEIM, Largo dell'Universita, Tuscia University, 01100 Viterbo, Italy
4 Department of Mathematics and Informatics, Azerbaijan University, Hajibeyli Str., Baku AZ1007, Azerbaijan
* Author to whom correspondence should be addressed.
Axioms 2023, 12(9), 817; https://doi.org/10.3390/axioms12090817
Submission received: 29 May 2023 / Revised: 8 August 2023 / Accepted: 21 August 2023 / Published: 25 August 2023
(This article belongs to the Special Issue Mathematical Models and Simulations)

Abstract: In this paper, we derive some classical and fractional properties of the ${}_rR_s$ matrix function by using the Hilfer fractional operator. The theory of special matrix functions studies the matrix analogs of classical special functions, such as the gamma, beta, and Gauss hypergeometric matrix functions. We will also show the relationship with other generalized special matrix functions in the context of the Konhauser and Laguerre matrix polynomials.

1. Introduction

Matrix functions are an important mathematical tool, not only within mathematics itself, but also in fundamental disciplines such as physics, engineering, and the applied sciences. Special matrix functions are used in a variety of fields, including statistics [1,2], probability theory, physics, engineering [3,4], and Lie theory [2]. In particular, Jódar and Cortés [5,6], at the beginning of this century, initiated the investigation of the matrix analogs of the gamma, beta, and Gauss hypergeometric functions, thus laying the foundation of the theory of special matrix functions. Indeed, in [7], it is shown that the Gauss hypergeometric matrix function is the analytic solution of the hypergeometric matrix differential equation. Dwivedi and Sahai extended the study of special matrix functions of one variable to n variables [8,9]. In [10], this topic is discussed in detail in an extended work on the Appell matrix functions. The matrix analogs of the Appell functions and of the Lauricella functions of several variables were studied in [10,11].
Polynomials of one or more variables are introduced and investigated from a matrix perspective in [12,13,14]. Cetinkaya [15] introduced and studied the incomplete second Appell hypergeometric functions together with their properties.
Jódar and Cortés [6] determined the region of convergence and an integral representation of the Gauss hypergeometric matrix function with matrix parameters, denoted by ${}_2F_1(A,B;C;z)$. The generalized hypergeometric matrix function, abbreviated ${}_pF_q$, is a natural generalization of the Gauss hypergeometric matrix function [16].
In particular, the hypergeometric matrix function plays a fundamental role in the solution of numerous problems in mathematical physics, engineering, and mathematical sciences [17,18].
The multidisciplinary applications of fractional order calculus have dominated recent advances in the subject. Without a doubt, fractional calculus has emerged as an exciting new mathematical approach to solving problems in engineering, mathematics, physics models, and many other fields of science (see, for example, [19,20,21]).
Because of their utility and applications in a variety of research fields, the fractional integrals associated with special matrix functions and orthogonal matrix polynomials have been recently receiving attention (see, for example, [22,23,24,25,26,27,28]).
The main goal of this paper is to investigate the analytical and fractional integral properties of the ${}_rR_s$ matrix function. This function combines the generalized Mittag–Leffler function [29,30,31] and the generalized hypergeometric function; it is useful in many topics of mathematical analysis, fractional calculus, and statistics (see, e.g., [32,33,34,35,36]), as well as in the field of free-electron laser equations [19,37] and fractional kinetic equations [38].
In this paper, we discuss the convergence of the matrix function ${}_rR_s$, as well as its analytic properties (order and type), certain integral representations, and applications. The organization of this paper is as follows. Section 1 introduces the theory of matrix functions and includes some preliminary notes and definitions. In Section 2, we use the ratio test together with the perturbation lemma [39] to prove the convergence of the matrix function ${}_rR_s$. Section 3 presents a new theorem (Theorem 2) that obtains the analytic properties (order and type) of the ${}_rR_s$ matrix function via Stirling's formula for the logarithm of the gamma function. Section 4 establishes, in new theorems, some contiguous relations, differential properties, matrix recurrence relations, and the matrix differential equation of the ${}_rR_s$ function. Section 5 discusses some integral representations of the ${}_rR_s$ matrix function, including a generalized integral representation (see Theorem 8) and special cases such as Euler-type integrals, the Laplace transform, and the Riemann–Liouville fractional operators applied to the ${}_rR_s$ matrix function. In the final section, we discuss the fundamental properties of the ${}_rR_s$ matrix function, as well as certain special cases, such as the Laguerre and Konhauser matrix polynomials, the Mittag–Leffler matrix function, and the generalized Wright matrix function.

Preliminary Remarks

Throughout this paper, for a matrix A in $\mathbb{C}^{N\times N}$, its spectrum $\sigma(A)$ denotes the set of all eigenvalues of A. The two-norm of A is denoted by $\|A\|_2$ and is defined by (see [5,6])
$$\|A\|_2=\sup_{x\neq 0}\frac{\|Ax\|_2}{\|x\|_2},$$
where, for a vector x in $\mathbb{C}^{N}$, $\|x\|_2=(x^{T}x)^{1/2}$ is the Euclidean norm of x. Let us denote the real numbers $M(A)$ and $m(A)$ as follows:
$$M(A)=\max\{\mathrm{Re}(z):z\in\sigma(A)\};\qquad m(A)=\min\{\mathrm{Re}(z):z\in\sigma(A)\}. \qquad (1)$$
If f ( z ) and g ( z ) are holomorphic functions of the complex variable z, as defined in an open set Ω of the complex plane, and A and B are matrices in C N × N with σ ( A ) Ω and σ ( B ) Ω , such that A B = B A , then it follows from the matrix functional calculus properties in [5,6] that
f ( A ) g ( B ) = g ( B ) f ( A ) .
Throughout this study, a matrix polynomial of degree $\ell$ in x means an expression of the form
$$P(x)=A_{\ell}x^{\ell}+A_{\ell-1}x^{\ell-1}+\cdots+A_{1}x+A_{0},$$
where x is a real or complex variable and $A_j$, $0\le j\le \ell$, are complex matrices in $\mathbb{C}^{N\times N}$ with $A_{\ell}\neq \mathbf{0}$, where $\mathbf{0}$ is the null matrix in $\mathbb{C}^{N\times N}$.
We recall that the reciprocal gamma function, denoted by $\Gamma^{-1}(z)=1/\Gamma(z)$, is an entire function of the complex variable z, and thus $\Gamma^{-1}(A)$ is a well-defined matrix for any matrix A in $\mathbb{C}^{N\times N}$. The gamma matrix function satisfies the fundamental relation
$$\Gamma(A+I)=A\,\Gamma(A). \qquad (2)$$
In addition, if A is a matrix such that
$$A+kI \text{ is an invertible matrix for all integers } k\ge 0,$$
where I is the identity matrix in $\mathbb{C}^{N\times N}$, then, from [5], it follows that
$$(A)_k=A(A+I)\cdots(A+(k-1)I)=\Gamma(A+kI)\,\Gamma^{-1}(A),\quad k\ge 1;\qquad (A)_0=I. \qquad (3)$$
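Since matrix functional calculus acts entrywise on diagonal matrices, relation (3) can be sanity-checked numerically in the diagonal case with nothing more than the scalar gamma function. A minimal sketch (the helper names are illustrative, not from the paper):

```python
import math

def poch_product(a, k):
    """(a)_k = a(a+1)...(a+k-1), the defining product in (3)."""
    result = 1.0
    for m in range(k):
        result *= a + m
    return result

def poch_gamma(a, k):
    """(a)_k via the gamma-quotient form Gamma(a+k) * Gamma(a)^(-1)."""
    return math.gamma(a + k) / math.gamma(a)

# For A = diag(a_1,...,a_N), (A)_k = diag((a_1)_k,...,(a_N)_k), so checking
# each eigenvalue separately checks the matrix identity for diagonal A.
for a in (0.5, 1.0, 2.7):
    for k in (0, 1, 2, 5):
        assert abs(poch_product(a, k) - poch_gamma(a, k)) <= 1e-9 * abs(poch_gamma(a, k))
```

For non-diagonal (diagonalizable) A, the same check would go through the eigendecomposition, since both sides are functions of A in the sense of the matrix functional calculus.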
If k is large enough that $k>\|B\|$, then we will also need the following relation, which appears in Jódar and Cortés [6,7]:
$$\big\|(B+kI)^{-1}\big\|\le\frac{1}{k-\|B\|},\qquad k>\|B\|.$$
If $A(\ell,n)$ and $B(\ell,n)$ are matrices in $\mathbb{C}^{N\times N}$ for $n\ge 0$ and $\ell\ge 0$, then it follows, in a manner analogous to the proof of Lemma 11 in [5], that
$$\sum_{n=0}^{\infty}\sum_{\ell=0}^{\infty}A(\ell,n)=\sum_{n=0}^{\infty}\sum_{\ell=0}^{[n/2]}A(\ell,n-2\ell),\qquad \sum_{n=0}^{\infty}\sum_{\ell=0}^{\infty}B(\ell,n)=\sum_{n=0}^{\infty}\sum_{\ell=0}^{n}B(\ell,n-\ell). \qquad (5)$$
According to (5), we can write
$$\sum_{n=0}^{\infty}\sum_{\ell=0}^{[n/2]}A(\ell,n)=\sum_{n=0}^{\infty}\sum_{\ell=0}^{\infty}A(\ell,n+2\ell),\qquad \sum_{n=0}^{\infty}\sum_{\ell=0}^{n}B(\ell,n)=\sum_{n=0}^{\infty}\sum_{\ell=0}^{\infty}B(\ell,n+\ell). \qquad (6)$$
The Gauss hypergeometric matrix function ${}_2F_1(A,B;C;z)$ is given by
$$ {}_2F_1(A,B;C;z)=\sum_{k=0}^{\infty}\frac{(A)_k(B)_k[(C)_k]^{-1}}{k!}\,z^{k}, \qquad (7)$$
for matrices A, B, and C in $\mathbb{C}^{N\times N}$ such that $C+kI$ is an invertible matrix for all integers $k\ge 0$ and for $|z|<1$. Jódar and Cortés [6,7] observed that this series is absolutely convergent for $|z|=1$ when
$$m(C)>M(A)+M(B),$$
where $m(Q)$ and $M(Q)$ are as in (1) for any matrix Q in $\mathbb{C}^{N\times N}$.
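In the scalar (1×1) case, the Gauss series above can be summed directly, and the elementary identity ${}_2F_1(a,b;b;z)=(1-z)^{-a}$ gives an independent check. A small sketch (truncation length and helper names are our own choices):

```python
def hyp2f1(a, b, c, z, terms=80):
    """Truncated Gauss series sum_k (a)_k (b)_k / ((c)_k k!) z^k, valid for |z| < 1.

    Terms are built iteratively from the ratio (a+k)(b+k) / ((c+k)(k+1)) * z,
    which avoids overflowing gamma evaluations for large k."""
    term, total = 1.0, 1.0
    for k in range(terms - 1):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
        total += term
    return total

# With c = b, the series collapses to the binomial series (1 - z)^(-a).
a, b, z = 0.7, 1.3, 0.4
assert abs(hyp2f1(a, b, b, z) - (1.0 - z) ** (-a)) < 1e-10
```

The same iterative-ratio trick is how one would evaluate the matrix series in practice, replacing the scalar ratio by the matrix product $(A+kI)(B+kI)[(C+kI)]^{-1}\frac{z}{k+1}$.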
Definition 1.
For finite positive integers p and q, the generalized hypergeometric matrix function is defined as (see [16])
$$ {}_pF_q(A_1,A_2,\ldots,A_p;B_1,B_2,\ldots,B_q;z)=\sum_{k=0}^{\infty}\frac{z^{k}}{k!}(A_1)_k(A_2)_k\cdots(A_p)_k[(B_1)_k]^{-1}[(B_2)_k]^{-1}\cdots[(B_q)_k]^{-1}=\sum_{k=0}^{\infty}\frac{z^{k}}{k!}\prod_{i=1}^{p}(A_i)_k\Big[\prod_{j=1}^{q}(B_j)_k\Big]^{-1}, \qquad (8)$$
where $A_i$, $1\le i\le p$, and $B_j$, $1\le j\le q$, are matrices in $\mathbb{C}^{N\times N}$ such that
$$B_j+kI \text{ are invertible matrices for all integers } k\ge 0.$$
1.
If $p\le q$, then the power series (8) converges for all finite z.
2.
If $p>q+1$, then the power series (8) diverges for all $z\neq 0$.
3.
If $p=q+1$, then the power series (8) converges for $|z|<1$ and diverges for $|z|>1$.
4.
If $p=q+1$, then the power series (8) is absolutely convergent for $|z|=1$ when
$$\sum_{j=1}^{q}m(B_j)>\sum_{i=1}^{p}M(A_i).$$
5.
If $p=q+1$, then the power series (8) is conditionally convergent for $|z|=1$ when
$$\sum_{i=1}^{p}M(A_i)-1<\sum_{j=1}^{q}m(B_j)\le\sum_{i=1}^{p}M(A_i).$$
6.
If $p=q+1$, then the power series (8) diverges for $|z|=1$ when
$$\sum_{j=1}^{q}m(B_j)\le\sum_{i=1}^{p}M(A_i)-1,$$
where $M(A_i)$ and $m(B_j)$ are as defined in (1).

2. Definition and Convergence Conditions for the ${}_rR_s(P,Q,z)$ Matrix Function

This section discusses the convergence properties of the ${}_rR_s$ matrix function.
Definition 2.
Let us suppose that P, Q, $A_i$, $1\le i\le r$, and $B_j$, $1\le j\le s$, are positive stable matrices in $\mathbb{C}^{N\times N}$ (that is, $\mathrm{Re}(z)>0$ for every eigenvalue z of each of them) such that
$$B_j+kI \text{ are invertible matrices for all integers } k\ge 0, \qquad (13)$$
where r and s are finite positive integers. The matrix function ${}_rR_s(P,Q,z)$ is then defined as
$$ {}_rR_s(A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;P,Q;z)=\sum_{k=0}^{\infty}\frac{z^{k}}{k!}(A_1)_k(A_2)_k\cdots(A_r)_k[(B_1)_k]^{-1}[(B_2)_k]^{-1}\cdots[(B_s)_k]^{-1}\Gamma^{-1}(kP+Q)=\sum_{k=0}^{\infty}\frac{z^{k}}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q)=\sum_{k=0}^{\infty}W_k, \qquad (14)$$
where $W_k=\dfrac{z^{k}}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q)$.
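Definition (14) can be prototyped in the scalar (1×1) case, where every matrix reduces to a number. The sketch below (function and parameter names are ours) truncates the series; with $r=s=0$ and $P=Q=1$ it becomes $\sum_k z^k/(k!)^2$, whose value at $z=1$ is the Bessel-type constant $I_0(2)\approx 2.2795853$:

```python
import math

def rRs(z, A=(), B=(), p=1.0, q=1.0, terms=40):
    """Truncated scalar (1x1) specialization of definition (14):
    sum_k z^k / k! * prod_i (A_i)_k / prod_j (B_j)_k * 1 / Gamma(k*p + q)."""
    total = 0.0
    for k in range(terms):
        num = 1.0
        for a in A:
            num *= math.gamma(a + k) / math.gamma(a)   # (a)_k
        den = 1.0
        for b in B:
            den *= math.gamma(b + k) / math.gamma(b)   # (b)_k
        total += z ** k / math.factorial(k) * num / den / math.gamma(k * p + q)
    return total

# r = s = 0, p = q = 1: the series is sum_k z^k/(k!)^2.
assert abs(rRs(1.0) - 2.2795853) < 1e-6
```

The extra factor $\Gamma^{-1}(kP+Q)$, absent from ${}_pF_q$, is what gives the function its Mittag–Leffler character and makes it entire for $r\le s+1$.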
We will now investigate the convergence of ${}_rR_s(P,Q,z)$. Writing $W_k=U_k z^{k}$, the Cauchy–Hadamard formula gives
$$\frac{1}{R}=\limsup_{k\to\infty}\big\|U_k\big\|^{1/k}=\limsup_{k\to\infty}\Bigg\|\frac{\prod_{i=1}^{r}(A_i)_k\big[\prod_{j=1}^{s}(B_j)_k\big]^{-1}\Gamma^{-1}(kP+Q)}{k!}\Bigg\|^{1/k}.$$
Writing $(A_i)_k=\Gamma(A_i+kI)\Gamma^{-1}(A_i)$ and $(B_j)_k=\Gamma(B_j+kI)\Gamma^{-1}(B_j)$, and applying Stirling's approximation $\Gamma(A)\approx\sqrt{2\pi}\,e^{-A}A^{A-\frac{1}{2}I}$ to each gamma matrix function together with $k!\approx\sqrt{2\pi}\,e^{-k}k^{k+\frac{1}{2}}$, we obtain
$$\frac{1}{R}=\limsup_{k\to\infty}\Bigg\|\prod_{i=1}^{r}\sqrt{2\pi}\,e^{-(A_i+kI)}(A_i+kI)^{A_i+kI-\frac{1}{2}I}\Big(\prod_{j=1}^{s}\sqrt{2\pi}\,e^{-(B_j+kI)}(B_j+kI)^{B_j+kI-\frac{1}{2}I}\Big)^{-1}\Big(\sqrt{2\pi}\,e^{-(kP+Q)}(kP+Q)^{kP+Q-\frac{1}{2}I}\Big)^{-1}\prod_{i=1}^{r}\Gamma^{-1}(A_i)\prod_{j=1}^{s}\Gamma(B_j)\,\big(\sqrt{2\pi}\,e^{-k}k^{k+\frac{1}{2}}\big)^{-1}\Bigg\|^{1/k}$$
$$\le e^{\|P+I\|}\limsup_{k\to\infty}\Bigg\|\prod_{i=1}^{r}(A_i+kI)^{A_i+kI-\frac{1}{2}I}\prod_{j=1}^{s}(B_j+kI)^{-B_j-kI+\frac{1}{2}I}\,(kP+Q)^{-kP-Q+\frac{1}{2}I}\,k^{-k-\frac{1}{2}}\Bigg\|^{1/k}.$$
The last limit shows the following:
  • If $r\le s+1$, then the power series in (14) converges for all finite z.
  • If $r=s+2$, then the power series in (14) converges for $|z|<1$ and diverges for $|z|>1$.
  • If $r>s+2$, then the power series in (14) diverges for $z\neq 0$.
The ${}_rR_s(P,Q,z)$ matrix function defined above also appears in [40]; the proof of convergence given in this paper, however, uses a different method, based on the perturbation lemma [39] and the ratio test.
As an analog of Theorem 3 in [6], we can state the following:
Theorem 1.
1.
If $r=s+2$, then the power series in (14) is absolutely convergent on the circle $|z|=1$ when
$$\sum_{j=1}^{s}m(B_j)-\sum_{i=1}^{r}M(A_i)>0.$$
2.
If $r=s+2$, then the power series (14) is conditionally convergent for $|z|=1$ when
$$\sum_{i=1}^{r}M(A_i)-1<\sum_{j=1}^{s}m(B_j)\le\sum_{i=1}^{r}M(A_i).$$
3.
If $r=s+2$, then the power series (14) diverges for $|z|=1$ when
$$\sum_{j=1}^{s}m(B_j)\le\sum_{i=1}^{r}M(A_i)-1,$$
where $M(A_i)$, $1\le i\le r$, and $m(B_j)$, $1\le j\le s$, are defined in (1).
Thus, ${}_rR_s$ is an entire function of z whenever $r\le s+1$ and $P+I$ is positive stable.
Remark 1.
Let $A_i$, $1\le i\le r$, and $B_j$, $1\le j\le s$, be matrices in $\mathbb{C}^{N\times N}$ that satisfy (13) and that commute with one another. Setting $P=Q=A_1=I$ in (14), and noting that $(I)_k=k!\,I$ and $\Gamma^{-1}(kI+I)=\frac{1}{k!}I$, the function reduces to
$$ {}_rR_s(I,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;I,I;z)=\sum_{k=0}^{\infty}\frac{z^{k}}{k!}\prod_{i=2}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}={}_{r-1}F_s(A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;z),$$
where ${}_{r-1}F_s$ is the generalized hypergeometric matrix function detailed in (8).
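Remark 1 can be checked numerically in the scalar case: with $P=Q=A_1=1$, the factor $(1)_k\,\Gamma^{-1}(k+1)$ cancels the extra gamma, and the series collapses to ${}_{r-1}F_s$. A sketch, taking $r=2$, $s=1$ so the reduction target is the confluent function ${}_1F_1$ (the truncated evaluators below are ours):

```python
import math

def rRs(z, A=(), B=(), p=1.0, q=1.0, terms=40):
    """Truncated scalar version of (14)."""
    total = 0.0
    for k in range(terms):
        num = 1.0
        for a in A:
            num *= math.gamma(a + k) / math.gamma(a)
        den = 1.0
        for b in B:
            den *= math.gamma(b + k) / math.gamma(b)
        total += z ** k / math.factorial(k) * num / den / math.gamma(k * p + q)
    return total

def hyp1f1(a, b, z, terms=40):
    """Truncated confluent hypergeometric series 1F1(a; b; z)."""
    term, total = 1.0, 1.0
    for k in range(terms - 1):
        term *= (a + k) / ((b + k) * (k + 1)) * z
        total += term
    return total

# P = Q = A_1 = 1 reduces 2R1 to 1F1.
a2, b1, z = 1.7, 2.4, 0.8
assert abs(rRs(z, A=(1.0, a2), B=(b1,)) - hyp1f1(a2, b1, z)) < 1e-12
```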

3. Order and Type of the ${}_rR_s(P,Q,z)$ Matrix Function

In this section, we obtain the analytic properties (order and type) of the ${}_rR_s$ matrix function.
Theorem 2.
Let $A_i$, $1\le i\le r$, $B_j$, $1\le j\le s$, P, and Q be matrices in $\mathbb{C}^{N\times N}$ that satisfy (13) and that commute with one another. Then, the ${}_rR_s$ matrix function is an entire function of the variable z of order $\rho=\big\|(P+I)^{-1}\big\|$ and type $\tau=\big\|(P+I)\,P^{-P(P+I)^{-1}}\big\|$.
Proof. 
In applying Stirling's formula for the gamma matrix function, we obtain
$$\Gamma(A)\approx\sqrt{2\pi}\,e^{-A}A^{A-\frac{1}{2}I},$$
which recovers Stirling's formula
$$k!\approx\sqrt{2\pi k}\,\Big(\frac{k}{e}\Big)^{k},$$
and which uses the asymptotic expansion
$$\ln\Gamma(A)\approx\frac{1}{2}\ln(2\pi)I-A+\Big(A-\frac{1}{2}I\Big)\ln(A).$$
To evaluate the order, we apply Stirling's asymptotic formula for large k to the logarithm of the gamma function $\Gamma(k+1)$:
$$\rho({}_rR_s)=\limsup_{k\to\infty}\frac{k\ln(k)}{\ln\big\|U_k^{-1}\big\|}=\limsup_{k\to\infty}\frac{k\ln(k)}{\ln\Big\|k!\prod_{j=1}^{s}(B_j)_k\,\Gamma(kP+Q)\big[\prod_{i=1}^{r}(A_i)_k\big]^{-1}\Big\|}=\limsup_{k\to\infty}\frac{k\ln(k)}{\ln\Big\|k!\prod_{j=1}^{s}\Gamma(B_j+kI)\Gamma^{-1}(B_j)\,\Gamma(kP+Q)\prod_{i=1}^{r}\Gamma^{-1}(A_i+kI)\Gamma(A_i)\Big\|}=\limsup_{k\to\infty}\frac{1}{\Psi_k}=\big\|(P+I)^{-1}\big\|,$$
where
$$\Psi_k=\frac{1}{k\ln(k)}\Big[\ln\Gamma(k+1)+\sum_{j=1}^{s}\big(\ln\Gamma(B_j+kI)-\ln\Gamma(B_j)\big)-\sum_{i=1}^{r}\big(\ln\Gamma(A_i+kI)-\ln\Gamma(A_i)\big)+\ln\Gamma(kP+Q)\Big],$$
and each logarithm is expanded by the asymptotic expansion of $\ln\Gamma$ given above. Thus, we obtain the order $\rho=\big\|(P+I)^{-1}\big\|$.
We obtain the asymptotic estimates for $\Gamma(kP+Q)$ and $\Gamma(k+1)$ by repeatedly applying the asymptotic formula for the gamma function:
$$\tau({}_rR_s)=\frac{1}{e\rho}\limsup_{k\to\infty}k\,\big\|U_k\big\|^{\rho/k}=\frac{1}{e\rho}\limsup_{k\to\infty}k\,\Bigg\|\frac{\prod_{i=1}^{r}(A_i)_k\big[\prod_{j=1}^{s}(B_j)_k\big]^{-1}\Gamma^{-1}(kP+Q)}{k!}\Bigg\|^{\rho/k}.$$
Substituting the Stirling approximations used above, the contributions of the $A_i$ and $B_j$ factors tend to one, and the dominant behavior gives
$$\tau({}_rR_s)=\frac{1}{e\rho}\,e^{\|(P+I)\rho\|}\limsup_{k\to\infty}k\,\Big\|(kP+Q)^{(-kP-Q+\frac{1}{2}I)\rho/k}\,k^{(-k-\frac{1}{2})\rho/k}\Big\|=\big\|(P+I)\,P^{-P(P+I)^{-1}}\big\|.$$
Finally, we arrive at the type $\tau=\big\|(P+I)\,P^{-P(P+I)^{-1}}\big\|$. $\Box$
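In the simplest scalar case $r=s=0$, the coefficients are $U_k=1/(k!\,\Gamma(kp+q))$ and the theorem's order reduces to $\rho=1/(1+p)$, which can be estimated numerically from the classical formula $\rho=\limsup_k k\ln k/\ln(1/|U_k|)$. The convergence carries an $O(1/\ln k)$ correction, so the sketch below (parameters and helper names are our own) extrapolates that correction away before comparing:

```python
import math

def log_inv_coeff(k, p, q):
    """ln(1/|U_k|) for U_k = 1/(k! * Gamma(k*p + q))  (scalar case, r = s = 0)."""
    return math.lgamma(k + 1) + math.lgamma(k * p + q)

def order_estimate(p, q, k1=10**3, k2=10**6):
    """Estimate 1/rho = lim ln(1/|U_k|)/(k ln k) by extrapolating in 1/ln k,
    using the model y(k) = (1+p) + c/ln k at two sample points."""
    y1 = log_inv_coeff(k1, p, q) / (k1 * math.log(k1))
    y2 = log_inv_coeff(k2, p, q) / (k2 * math.log(k2))
    L1, L2 = math.log(k1), math.log(k2)
    return 1.0 / ((y2 * L2 - y1 * L1) / (L2 - L1))

p, q = 0.5, 1.2
assert abs(order_estimate(p, q) - 1.0 / (1.0 + p)) < 1e-2
```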

4. Contiguous Function Relations

The contiguous function relations and the differential property of the ${}_rR_s$ matrix function are established in this section.
Assume that the matrices $A_i$ ($i=1,2,\ldots,r$) and $B_j$ ($j=1,2,\ldots,s$) commute with one another and have no integer eigenvalues, so that the inverses appearing below exist. The relation $A_i(A_i+I)_k=(A_i+kI)(A_i)_k$, when combined with the definition of the matrix contiguous function relations, yields the following formulas:
$$ {}_rR_s(A_1+)=\sum_{k=0}^{\infty}\frac{z^{k}}{k!}(A_1+I)_k(A_2)_k\cdots(A_r)_k[(B_1)_k]^{-1}[(B_2)_k]^{-1}\cdots[(B_s)_k]^{-1}\Gamma^{-1}(kP+Q)=\sum_{k=0}^{\infty}(A_1+kI)(A_1)^{-1}W_k(z).$$
Similarly, we obtain
$$ {}_rR_s(A_i+)=(A_i)^{-1}\sum_{k=0}^{\infty}(A_i+kI)\,W_k(z),\qquad {}_rR_s(A_i-)=(A_i-I)\sum_{k=0}^{\infty}(A_i+(k-1)I)^{-1}\,W_k(z),$$
$$ {}_rR_s(B_j+)=B_j\sum_{k=0}^{\infty}(B_j+kI)^{-1}\,W_k(z),\qquad {}_rR_s(B_j-)=(B_j-I)^{-1}\sum_{k=0}^{\infty}(B_j+(k-1)I)\,W_k(z).$$
For all integers $n\ge 1$, we deduce that:
$$ {}_rR_s(A_i+nI)=\prod_{k=1}^{n}(A_i+(k-1)I)^{-1}\sum_{\ell=0}^{\infty}\prod_{k=1}^{n}(A_i+(\ell+k-1)I)\,W_{\ell}(z),\qquad {}_rR_s(A_i-nI)=\prod_{k=1}^{n}(A_i-kI)\sum_{\ell=0}^{\infty}\prod_{k=1}^{n}(A_i+(\ell-k)I)^{-1}\,W_{\ell}(z),$$
$$ {}_rR_s(B_j+nI)=\prod_{k=1}^{n}(B_j+(k-1)I)\sum_{\ell=0}^{\infty}\prod_{k=1}^{n}(B_j+(\ell+k-1)I)^{-1}\,W_{\ell}(z),\qquad {}_rR_s(B_j-nI)=\prod_{k=1}^{n}(B_j-kI)^{-1}\sum_{\ell=0}^{\infty}\prod_{k=1}^{n}(B_j+(\ell-k)I)\,W_{\ell}(z). \qquad (25)$$
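Each of these relations can be tested term by term in the scalar (1×1) case; for instance, the ${}_rR_s(B_j+)$ relation reads $R(b+1)=b\sum_k W_k/(b+k)$ there. A sketch with illustrative parameter values:

```python
import math

def W(k, z, A, B, p, q):
    """k-th term W_k of the scalar series (14)."""
    num = 1.0
    for a in A:
        num *= math.gamma(a + k) / math.gamma(a)
    den = 1.0
    for b in B:
        den *= math.gamma(b + k) / math.gamma(b)
    return z ** k / math.factorial(k) * num / den / math.gamma(k * p + q)

a, b, p, q, z, terms = 1.3, 2.1, 1.0, 1.5, 0.7, 40
# Scalar form of rRs(B_j+) = B_j * sum_k (B_j + kI)^(-1) W_k:
lhs = sum(W(k, z, (a,), (b + 1.0,), p, q) for k in range(terms))
rhs = b * sum(W(k, z, (a,), (b,), p, q) / (b + k) for k in range(terms))
assert abs(lhs - rhs) < 1e-12
```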
Remark 2.
If we apply the above results in (25), we obtain the contiguous relations for the generalized hypergeometric matrix function [16].
Theorem 3.
Let $A_i$, $B_j$, P, and Q be commutative matrices in $\mathbb{C}^{N\times N}$ that satisfy condition (13). Then, the following recursion formula holds true for ${}_rR_s$:
$$ {}_rR_s=(\theta P+Q)\,{}_rR_s(Q+I), \qquad (26)$$
where θ = z d d z .
Proof. 
Starting with the right-hand side, and using $\Gamma^{-1}(kP+Q+I)=(kP+Q)^{-1}\Gamma^{-1}(kP+Q)$, we have
$$Q\,{}_rR_s(Q+I)+zP\frac{d}{dz}\,{}_rR_s(Q+I)=Q\,{}_rR_s(Q+I)+zP\sum_{k=1}^{\infty}\frac{z^{k-1}}{(k-1)!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q+I)$$
$$=Q\,{}_rR_s(Q+I)+\sum_{k=0}^{\infty}(kP+Q)\frac{z^{k}}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}(kP+Q)^{-1}\Gamma^{-1}(kP+Q)-Q\sum_{k=0}^{\infty}\frac{z^{k}}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q+I)$$
$$=\sum_{k=0}^{\infty}\frac{z^{k}}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q)={}_rR_s. \;\;\Box$$
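In the scalar case, relation (26) reads $R_q(z)=q\,R_{q+1}(z)+p\,z\,R'_{q+1}(z)$, which follows term by term from $\Gamma(kp+q+1)=(kp+q)\Gamma(kp+q)$. A numerical sketch with one upper parameter and no lower ones (all values illustrative):

```python
import math

def coeff(k, a, p, q):
    """Coefficient of z^k in the scalar series: (a)_k / (k! Gamma(k*p + q))."""
    return math.gamma(a + k) / math.gamma(a) / math.factorial(k) / math.gamma(k * p + q)

def R(z, q, p=0.8, a=1.4, terms=40):
    return sum(coeff(k, a, p, q) * z ** k for k in range(terms))

def dR(z, q, p=0.8, a=1.4, terms=40):
    """Term-by-term derivative of the truncated series."""
    return sum(k * coeff(k, a, p, q) * z ** (k - 1) for k in range(1, terms))

p, q, z = 0.8, 1.1, 0.9
lhs = R(z, q, p)
rhs = q * R(z, q + 1.0, p) + p * z * dR(z, q + 1.0, p)
assert abs(lhs - rhs) < 1e-12
```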
Remark 3.
For further specific values of the parameters in (26), we obtain the contiguous relations for the generalized hypergeometric matrix function [16].
Theorem 4.
The ${}_rR_s$ matrix function has the following differential property:
$$\Big(\frac{d}{dz}\Big)^{\kappa}\Big[z^{Q-I}\,{}_rR_s(A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;P,Q;cz^{P})\Big]=z^{Q-(\kappa+1)I}\,{}_rR_s(A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;P,Q-\kappa I;cz^{P}). \qquad (27)$$
Proof. 
By differentiating term by term under the sign of summation in (14), we obtain the result (27). □
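For $\kappa=1$ in the scalar case, (27) says $\frac{d}{dz}\big[z^{q-1}R_q(cz^p)\big]=z^{q-2}R_{q-1}(cz^p)$, which amounts to $(kp+q-1)\Gamma^{-1}(kp+q)=\Gamma^{-1}(kp+q-1)$ in every term. Differentiating the truncated series term by term avoids finite-difference noise; a sketch with illustrative values:

```python
import math

def coeff(k, p, q, c):
    """Coefficient of z^(kp+q-1) in z^(q-1) R_q(c z^p), scalar case, r = s = 0."""
    return c ** k / (math.factorial(k) * math.gamma(k * p + q))

p, q, c, z, terms = 1.0, 2.5, 0.6, 0.9, 40
# d/dz of sum_k coeff * z^(kp+q-1), term by term:
lhs = sum(coeff(k, p, q, c) * (k * p + q - 1.0) * z ** (k * p + q - 2.0)
          for k in range(terms))
# z^(q-2) * R_{q-1}(c z^p):
rhs = sum(c ** k / (math.factorial(k) * math.gamma(k * p + q - 1.0)) * z ** (k * p + q - 2.0)
          for k in range(terms))
assert abs(lhs - rhs) < 1e-12
```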
Theorem 5.
Let $A_i$, $1\le i\le r$, $B_j$, $1\le j\le s$, P, and Q be matrices in $\mathbb{C}^{N\times N}$ that satisfy (13) and that commute with one another. Then, the following recurrence matrix relation for the ${}_rR_s$ matrix function holds true:
$$\theta\prod_{j=1}^{s}\big(\theta I+B_j-I\big)\,{}_rR_s-z\prod_{i=1}^{r}\big(\theta I+A_i\big)\,{}_rR_s(Q+P)=\mathbf{0}, \qquad (28)$$
where 0 is the null matrix in C N × N .
Proof. 
Consider the differential operator $\theta=z\frac{d}{dz}$, for which $\theta z^{k}=kz^{k}$. Since the matrices commute with one another, and since $\big(B_j+(k-1)I\big)[(B_j)_k]^{-1}=[(B_j)_{k-1}]^{-1}$, we have
$$\theta\prod_{j=1}^{s}\big(\theta I+B_j-I\big)\,{}_rR_s=\sum_{k=1}^{\infty}\frac{k\,z^{k}}{k!}\prod_{j=1}^{s}\big(kI+B_j-I\big)\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q)=\sum_{k=1}^{\infty}\frac{z^{k}}{(k-1)!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_{k-1}\Big]^{-1}\Gamma^{-1}(kP+Q).$$
When k is replaced by $k+1$, we have
$$\theta\prod_{j=1}^{s}\big(\theta I+B_j-I\big)\,{}_rR_s=\sum_{k=0}^{\infty}\frac{z^{k+1}}{k!}\prod_{i=1}^{r}(A_i)_{k+1}\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q+P)=z\prod_{i=1}^{r}\big(\theta I+A_i\big)\,{}_rR_s(Q+P). \;\;\Box$$
Theorem 6.
Let $A_i$, $1\le i\le r$, $B_j$, $1\le j\le s$, P, and Q be commutative matrices in $\mathbb{C}^{N\times N}$ that satisfy condition (13). Then, the ${}_rR_s$ matrix function satisfies the matrix differential equation
$$ {}_rR_s(P,Q+(\mu+1)I,z)-{}_rR_s(P,Q+(\mu+2)I,z)=z^{2}P^{2}\frac{d^{2}}{dz^{2}}\,{}_rR_s(P,Q+(\mu+3)I,z)+zP\big(P+2I+2(Q+\mu I)\big)\frac{d}{dz}\,{}_rR_s(P,Q+(\mu+3)I,z)+(Q+\mu I)(Q+(\mu+2)I)\,{}_rR_s(P,Q+(\mu+3)I,z). \qquad (30)$$
Proof. 
In using the fundamental relation of the gamma matrix function, $\Gamma(A+I)=A\,\Gamma(A)$ in (2), we have
$$ {}_rR_s(P,Q+(\mu+1)I,z)=\sum_{k=0}^{\infty}\frac{z^{k}}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}(kP+Q+\mu I)^{-1}\Gamma^{-1}(kP+Q+\mu I).$$
Similarly, using the identity $X^{-1}(X+I)^{-1}=X^{-1}-(X+I)^{-1}$ with $X=kP+Q+\mu I$, we find
$$ {}_rR_s(P,Q+(\mu+2)I,z)=\sum_{k=0}^{\infty}\Big((kP+Q+\mu I)^{-1}-(kP+Q+(\mu+1)I)^{-1}\Big)\frac{z^{k}}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q+\mu I)$$
$$={}_rR_s(P,Q+(\mu+1)I,z)-\sum_{k=0}^{\infty}(kP+Q+(\mu+1)I)^{-1}\frac{z^{k}}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q+\mu I). \qquad (31)$$
Next, we denote the last term of (31) by L, which can be written as follows:
$$L=\sum_{k=0}^{\infty}(kP+Q+(\mu+1)I)^{-1}\frac{z^{k}}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q+\mu I)={}_rR_s(P,Q+(\mu+1)I,z)-{}_rR_s(P,Q+(\mu+2)I,z). \qquad (32)$$
Since $(kP+Q+(\mu+1)I)^{-1}\Gamma^{-1}(kP+Q+\mu I)=(kP+Q+\mu I)\big(kP+Q+(\mu+2)I\big)\Gamma^{-1}(kP+Q+(\mu+3)I)$, the sum L can be expressed, with the abbreviation $C_k=\prod_{i=1}^{r}(A_i)_k\big[\prod_{j=1}^{s}(B_j)_k\big]^{-1}\Gamma^{-1}(kP+Q+(\mu+3)I)$, as
$$L=\sum_{k=0}^{\infty}\frac{z^{k}}{k!}(kP+Q+\mu I)\,C_k+\sum_{k=0}^{\infty}\frac{z^{k}}{k!}(kP+Q+\mu I)(kP+Q+(\mu+1)I)\,C_k$$
$$=P\sum_{k=0}^{\infty}\frac{k\,z^{k}}{k!}C_k+(Q+\mu I)\sum_{k=0}^{\infty}\frac{z^{k}}{k!}C_k+P^{2}\sum_{k=0}^{\infty}\frac{k^{2}z^{k}}{k!}C_k+\big(2Q+(2\mu+1)I\big)P\sum_{k=0}^{\infty}\frac{k\,z^{k}}{k!}C_k+(Q+\mu I)(Q+(\mu+1)I)\sum_{k=0}^{\infty}\frac{z^{k}}{k!}C_k. \qquad (33)$$
On evaluating each term on the R.H.S. of Equation (33), we have
$$\frac{d^{2}}{dz^{2}}\Big(z^{2}\,{}_rR_s(P,Q+(\mu+3)I,z)\Big)=\sum_{k=0}^{\infty}(k+1)(k+2)\frac{z^{k}}{k!}C_k, \qquad (34)$$
or, equivalently,
$$z^{2}\frac{d^{2}}{dz^{2}}\,{}_rR_s(P,Q+(\mu+3)I,z)+4z\frac{d}{dz}\,{}_rR_s(P,Q+(\mu+3)I,z)=\sum_{k=0}^{\infty}k^{2}\frac{z^{k}}{k!}C_k+3\sum_{k=0}^{\infty}k\frac{z^{k}}{k!}C_k.$$
Similarly, we have
$$\frac{d}{dz}\Big(z\,{}_rR_s(P,Q+(\mu+3)I,z)\Big)=\sum_{k=0}^{\infty}(k+1)\frac{z^{k}}{k!}C_k, \qquad (35)$$
or, equivalently,
$$z\frac{d}{dz}\,{}_rR_s(P,Q+(\mu+3)I,z)+{}_rR_s(P,Q+(\mu+3)I,z)=\sum_{k=0}^{\infty}(k+1)\frac{z^{k}}{k!}C_k,\qquad\text{so that}\qquad \sum_{k=0}^{\infty}k\frac{z^{k}}{k!}C_k=z\frac{d}{dz}\,{}_rR_s(P,Q+(\mu+3)I,z).$$
Therefore, from (34) and (35), we obtain
$$\sum_{k=0}^{\infty}k^{2}\frac{z^{k}}{k!}C_k=z^{2}\frac{d^{2}}{dz^{2}}\,{}_rR_s(P,Q+(\mu+3)I,z)+z\frac{d}{dz}\,{}_rR_s(P,Q+(\mu+3)I,z). \qquad (36)$$
By taking into account (33), (34), and (36), we have
$$L=P^{2}z^{2}\frac{d^{2}}{dz^{2}}\,{}_rR_s(P,Q+(\mu+3)I,z)+z\Big(P^{2}+P+\big(2Q+(2\mu+1)I\big)P\Big)\frac{d}{dz}\,{}_rR_s(P,Q+(\mu+3)I,z)+\Big(Q+\mu I+(Q+\mu I)(Q+(\mu+1)I)\Big)\,{}_rR_s(P,Q+(\mu+3)I,z). \qquad (37)$$
By substituting (37) into (32), and noting that $P^{2}+P+\big(2Q+(2\mu+1)I\big)P=P\big(P+2I+2(Q+\mu I)\big)$ and $Q+\mu I+(Q+\mu I)(Q+(\mu+1)I)=(Q+\mu I)(Q+(\mu+2)I)$, we obtain the desired result. $\Box$
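The matrix differential equation of Theorem 6 can be verified numerically in the scalar case with $r=s=0$, computing all derivatives term by term from the truncated series (all parameter values below are illustrative):

```python
import math

def c(k, p, Q):
    """Scalar series coefficient 1 / (k! Gamma(k*p + Q)) for r = s = 0."""
    return 1.0 / (math.factorial(k) * math.gamma(k * p + Q))

def R(z, p, Q, terms=40):
    return sum(c(k, p, Q) * z ** k for k in range(terms))

def dR(z, p, Q, terms=40):
    return sum(k * c(k, p, Q) * z ** (k - 1) for k in range(1, terms))

def d2R(z, p, Q, terms=40):
    return sum(k * (k - 1) * c(k, p, Q) * z ** (k - 2) for k in range(2, terms))

p, q, mu, z = 0.9, 1.3, 0.4, 0.8
lhs = R(z, p, q + mu + 1.0) - R(z, p, q + mu + 2.0)
Q3 = q + mu + 3.0
rhs = (z ** 2 * p ** 2 * d2R(z, p, Q3)
       + z * p * (p + 2.0 + 2.0 * (q + mu)) * dR(z, p, Q3)
       + (q + mu) * (q + mu + 2.0) * R(z, p, Q3))
assert abs(lhs - rhs) < 1e-12
```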

5. Integrals Involving the ${}_rR_s$ Matrix Function

Here, we establish integral representations and a differential property of the ${}_rR_s$ matrix function, including integrals that relate it to other well-known special functions and to fractional calculus operators.
The integral representations of the Gauss hypergeometric matrix function in [6] can be extended to yield the following result:
Theorem 7.
Let $A_i$, $1\le i\le r$, and $B_j$, $1\le j\le s$, be matrices in $\mathbb{C}^{N\times N}$ such that $B_j+kI$ are invertible matrices for all integers $k\ge 0$. Suppose that $A_i$, $B_j$, and $B_j-A_i$ are positive stable matrices. If $r\le s+2$ and $|z|<1$, then we have
$$ {}_rR_s(A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;P,Q;z)=\Gamma^{-1}(A_i)\,\Gamma^{-1}(B_j-A_i)\,\Gamma(B_j)\int_{0}^{1}t^{A_i-I}(1-t)^{B_j-A_i-I}\;{}_{r-1}R_{s-1}(A_1,\ldots,A_{i-1},A_{i+1},\ldots,A_r;B_1,\ldots,B_{j-1},B_{j+1},\ldots,B_s;P,Q;zt)\,dt. \qquad (38)$$
Proof. 
By the definition of the Pochhammer matrix symbol (3), and by using the integral definition of the beta matrix function (which applies since $B_j-A_i$ is positive stable), we obtain
$$(A_i)_k\big[(B_j)_k\big]^{-1}=\Gamma^{-1}(A_i)\,\Gamma^{-1}(B_j-A_i)\,\Gamma(B_j)\int_{0}^{1}t^{A_i+(k-1)I}(1-t)^{B_j-A_i-I}\,dt,$$
where $A_iB_j=B_jA_i$. Inserting this into (14) and interchanging summation and integration, we have
$$ {}_rR_s(A_1,\ldots,A_r;B_1,\ldots,B_s;P,Q;z)=\Gamma^{-1}(A_i)\,\Gamma^{-1}(B_j-A_i)\,\Gamma(B_j)\int_{0}^{1}t^{A_i-I}(1-t)^{B_j-A_i-I}\sum_{k=0}^{\infty}\frac{(zt)^{k}}{k!}\prod_{\substack{i'=1\\ i'\neq i}}^{r}(A_{i'})_k\Big[\prod_{\substack{j'=1\\ j'\neq j}}^{s}(B_{j'})_k\Big]^{-1}\Gamma^{-1}(kP+Q)\,dt$$
$$=\Gamma^{-1}(A_i)\,\Gamma^{-1}(B_j-A_i)\,\Gamma(B_j)\int_{0}^{1}t^{A_i-I}(1-t)^{B_j-A_i-I}\;{}_{r-1}R_{s-1}(A_1,\ldots,A_{i-1},A_{i+1},\ldots,A_r;B_1,\ldots,B_{j-1},B_{j+1},\ldots,B_s;P,Q;zt)\,dt. \;\;\Box$$
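The key step, the Pochhammer quotient written as a beta-type integral, can be verified by quadrature in the scalar case. A sketch using a composite Simpson rule on [0, 1] (the step count and parameters are our choices):

```python
import math

def simpson(f, lo, hi, n=20000):
    """Composite Simpson rule (n even)."""
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += (4.0 if i % 2 else 2.0) * f(lo + i * h)
    return acc * h / 3.0

a, b, k = 1.2, 2.7, 3   # scalar stand-ins for A_i, B_j, with b - a > 0
poch = lambda x: math.gamma(x + k) / math.gamma(x)
lhs = poch(a) / poch(b)                               # (a)_k / (b)_k
rhs = (math.gamma(b) / (math.gamma(a) * math.gamma(b - a))
       * simpson(lambda t: t ** (a + k - 1.0) * (1.0 - t) ** (b - a - 1.0), 0.0, 1.0))
assert abs(lhs - rhs) < 1e-5
```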
Remark 4.
If $A_1=P=Q=I$ in (38), we obtain the corresponding results for the generalized hypergeometric matrix functions [16].
Theorem 8.
The following integral representation holds true:
$$\int_{0}^{1}t^{Q+\mu I}\,{}_rR_s(A_1,\ldots,A_r;B_1,\ldots,B_s;P,Q+\mu I;t^{P})\,dt={}_rR_s(A_1,\ldots,A_r;B_1,\ldots,B_s;P,Q+(\mu+1)I;1)-{}_rR_s(A_1,\ldots,A_r;B_1,\ldots,B_s;P,Q+(\mu+2)I;1). \qquad (39)$$
Proof. 
By putting $z=1$ in (31), we obtain
$$ {}_rR_s(P,Q+(\mu+2)I;1)={}_rR_s(P,Q+(\mu+1)I;1)-\sum_{k=0}^{\infty}\frac{1}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}(kP+Q+(\mu+1)I)^{-1}\Gamma^{-1}(kP+Q+\mu I). \qquad (40)$$
One can observe that
$$z^{Q+\mu I}\,{}_rR_s(P,Q+\mu I;z^{P})=\sum_{k=0}^{\infty}\frac{z^{kP+Q+\mu I}}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q+\mu I).$$
On integrating both sides with respect to z, this yields
$$\int_{0}^{z}t^{Q+\mu I}\,{}_rR_s(P,Q+\mu I;t^{P})\,dt=\sum_{k=0}^{\infty}\frac{1}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q+\mu I)\int_{0}^{z}t^{kP+Q+\mu I}\,dt=\sum_{k=0}^{\infty}\frac{1}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q+\mu I)\,(kP+Q+(\mu+1)I)^{-1}z^{kP+Q+(\mu+1)I}. \qquad (41)$$
By putting $z=1$ in (41), we obtain
$$\int_{0}^{1}t^{Q+\mu I}\,{}_rR_s(P,Q+\mu I;t^{P})\,dt=\sum_{k=0}^{\infty}\frac{1}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j=1}^{s}(B_j)_k\Big]^{-1}\Gamma^{-1}(kP+Q+\mu I)\,(kP+Q+(\mu+1)I)^{-1}. \qquad (42)$$
Comparing (40) and (42), one obtains Equation (39). $\Box$
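In the scalar case with $r=s=0$, both sides of (39) reduce to explicit series, and the left side can be computed independently by quadrature. A sketch (Simpson rule; all parameter values are illustrative):

```python
import math

def R(w, p, Q, terms=30):
    """Scalar rRs with r = s = 0: sum_k w^k / (k! Gamma(k*p + Q))."""
    return sum(w ** k / (math.factorial(k) * math.gamma(k * p + Q)) for k in range(terms))

def simpson(f, lo, hi, n=4000):
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += (4.0 if i % 2 else 2.0) * f(lo + i * h)
    return acc * h / 3.0

p, q, mu = 1.0, 1.0, 0.3
lhs = simpson(lambda t: t ** (q + mu) * R(t ** p, p, q + mu), 0.0, 1.0)
rhs = R(1.0, p, q + mu + 1.0) - R(1.0, p, q + mu + 2.0)
assert abs(lhs - rhs) < 1e-6
```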
Theorem 9.
The ${}_rR_s$ matrix function has the following integral representation:
$$ {}_rR_s(A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;P,Q;z)=\Gamma^{-1}(A_1)\int_{0}^{\infty}t^{A_1-I}e^{-t}\;{}_{r-1}R_s(A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;P,Q;zt)\,dt. \qquad (43)$$
Proof. 
When using the definition of the gamma matrix function,
$$\Gamma(A_1+kI)=\int_{0}^{\infty}e^{-t}\,t^{A_1+kI-I}\,dt,$$
together with $(A_1)_k=\Gamma(A_1+kI)\,\Gamma^{-1}(A_1)$ and term-by-term integration, we obtain (43). $\Box$
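In the scalar case, (43) with one upper parameter $a=A_1$ reads $R_1(z)=\Gamma^{-1}(a)\int_0^{\infty}t^{a-1}e^{-t}R_0(zt)\,dt$, where $R_0$ has no upper parameters; the integral simply regenerates $(a)_k$ from $\Gamma(a+k)$. A quadrature sketch (the truncation of the infinite range and all parameter values are our choices):

```python
import math

def R0(w, p, q, terms=40):
    return sum(w ** k / (math.factorial(k) * math.gamma(k * p + q)) for k in range(terms))

def R1(z, a, p, q, terms=40):
    return sum(math.gamma(a + k) / math.gamma(a) * z ** k
               / (math.factorial(k) * math.gamma(k * p + q)) for k in range(terms))

def simpson(f, lo, hi, n=6000):
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += (4.0 if i % 2 else 2.0) * f(lo + i * h)
    return acc * h / 3.0

a, p, q, z = 2.2, 1.0, 1.4, 0.5
integral = simpson(lambda t: t ** (a - 1.0) * math.exp(-t) * R0(z * t, p, q), 0.0, 60.0)
assert abs(R1(z, a, p, q) - integral / math.gamma(a)) < 1e-6
```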
Theorem 10.
The ${}_rR_s$ matrix function satisfies the following representation:
$$\Gamma(\Phi)\;{}_{r+1}R_s(\Phi,A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;z)=\sqrt{2\pi}\,F\big[e^{\varphi u}\exp(-e^{u})\;{}_rR_s(A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;z e^{u});\tau\big],$$
where $\Phi=\varphi+i\tau$, $\varphi>0$, $r\le s+1$, and $F(\Phi,\tau)$ denotes the Fourier transform ([41])
$$F(\Phi,\tau)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-iu\tau}\,\Phi(u)\,du,\qquad \tau\in\mathbb{R}.$$
Proof. 
By substituting $t=e^{u}$ in (43), we can easily obtain the Fourier transform representation of the ${}_rR_s$ matrix function. $\Box$
Theorem 11.
The Euler-type integral representation of the ${}_rR_s$ matrix function is given by
$$ {}_{r+\kappa}R_{s+\kappa}\big(A_1,A_2,\ldots,A_r,\Delta(P;\kappa);B_1,B_2,\ldots,B_s,\Delta(P+Q;\kappa);P,Q;cz^{\kappa}\big)=z^{I-P-Q}\,\Gamma^{-1}(P)\,\Gamma(P+Q)\,\Gamma^{-1}(Q)\int_{0}^{z}t^{P-I}(z-t)^{Q-I}\;{}_rR_s\big(A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;P,Q;ct^{\kappa}\big)\,dt,$$
where κ is a positive integer and $\Delta(P;\kappa)$ is the array of parameters
$$\Delta(P;\kappa)=\frac{P}{\kappa},\ \frac{P+I}{\kappa},\ \frac{P+2I}{\kappa},\ \ldots,\ \frac{P+(\kappa-1)I}{\kappa}.$$
Proof. 
By putting $t=zu$, $dt=z\,du$, and using the beta matrix function, we obtain
$$\int_{0}^{z}t^{P+(k\kappa-1)I}(z-t)^{Q-I}\,dt=z^{P+Q+(k\kappa-1)I}\int_{0}^{1}u^{P+(k\kappa-1)I}(1-u)^{Q-I}\,du=z^{P+Q+(k\kappa-1)I}\,\Gamma(P)\Gamma(Q)\Gamma^{-1}(P+Q)\,(P)_{k\kappa}\big[(P+Q)_{k\kappa}\big]^{-1}.$$
Inserting this into the series on the right-hand side and applying the multiplication formula $(P)_{k\kappa}=\kappa^{k\kappa}\prod_{m=0}^{\kappa-1}\big(\frac{P+mI}{\kappa}\big)_k$ (and likewise for $(P+Q)_{k\kappa}$, so that the powers of κ cancel) completes the proof. $\Box$
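The proof rests on the Gauss multiplication formula for the Pochhammer symbol, which in the scalar case reads $(p)_{\kappa k}=\kappa^{\kappa k}\prod_{m=0}^{\kappa-1}\big(\frac{p+m}{\kappa}\big)_k$ and is easy to verify numerically:

```python
import math

def poch(a, n):
    return math.gamma(a + n) / math.gamma(a)

p_, kappa, k = 1.7, 3, 4
lhs = poch(p_, kappa * k)                 # (p)_{kappa * k}
rhs = float(kappa ** (kappa * k))
for m in range(kappa):
    rhs *= poch((p_ + m) / kappa, k)
assert abs(lhs - rhs) < 1e-9 * lhs
```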
Theorem 12.
A further Euler-type integral representation of the ${}_rR_s$ matrix function is given by
$$ {}_{r+\kappa+\imath}R_{s+\kappa+\imath}\Big(A_1,A_2,\ldots,A_r,\Delta(P;\kappa),\Delta(Q;\imath);B_1,B_2,\ldots,B_s,\Delta(P+Q;\kappa+\imath);P,Q;\frac{c\,\kappa^{\kappa}\imath^{\imath}}{(\kappa+\imath)^{\kappa+\imath}}\Big)=\Gamma^{-1}(P)\,\Gamma(P+Q)\,\Gamma^{-1}(Q)\int_{0}^{1}t^{P-I}(1-t)^{Q-I}\;{}_rR_s\big(A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;P,Q;c\,t^{\kappa}(1-t)^{\imath}\big)\,dt. \qquad (48)$$
Proof. 
When using the beta matrix function, we obtain
$$\int_{0}^{1}t^{P+(k\kappa-1)I}(1-t)^{Q+(k\imath-1)I}\,dt=\Gamma(P)\Gamma(Q)\Gamma^{-1}(P+Q)\,(P)_{k\kappa}(Q)_{k\imath}\big[(P+Q)_{k(\kappa+\imath)}\big]^{-1}. \qquad (49)$$
When using the above Equation (49), we obtain (48). $\Box$
Theorem 13.
The Laplace transform of the ${}_rR_s$ matrix function is determined by
$$L\big[t^{Q-I}\,{}_rR_s(A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;P,Q;zt^{P});s\big]=\int_{0}^{\infty}t^{Q-I}e^{-st}\,{}_rR_s(A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;P,Q;zt^{P})\,dt=s^{-Q}\;{}_rF_s(A_1,A_2,\ldots,A_r;B_1,B_2,\ldots,B_s;zs^{-P}), \qquad (50)$$
where $L[f(t);s]$ is the Laplace transform
$$L[f(t);s]=\int_{0}^{\infty}e^{-st}f(t)\,dt=F(s),\qquad s\in\mathbb{C}.$$
Proof. 
When using Euler’s integral, we have
L [ t P + Q I ; s ] = 0 e s t t P + Q I d t = Γ ( P + Q ) s P + Q ,
where min R e ( P + Q ) , R e ( s ) > 0 , R e ( s ) = 0 , or 0 < R e ( P + Q ) < 1 .
When using the above Equation (51), this yields the right-hand side of (50). □
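With $r=s=0$ and $p=1$, the right-hand side of (50) becomes $s^{-q}\,{}_0F_0(z/s)=s^{-q}e^{z/s}$, which gives a closed form to test the quadrature against. A sketch (range truncation and all values are our choices):

```python
import math

def R(w, q, terms=40):
    """Scalar rRs with r = s = 0 and p = 1: sum_k w^k / (k! Gamma(k + q)),
    accumulated iteratively via Gamma(k+1+q) = (k+q) Gamma(k+q)."""
    term = 1.0 / math.gamma(q)
    total = term
    for k in range(terms - 1):
        term *= w / ((k + 1.0) * (k + q))
        total += term
    return total

def simpson(f, lo, hi, n=20000):
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += (4.0 if i % 2 else 2.0) * f(lo + i * h)
    return acc * h / 3.0

q, s, z = 1.5, 2.0, 0.8
lhs = simpson(lambda t: t ** (q - 1.0) * math.exp(-s * t) * R(z * t, q), 0.0, 50.0)
rhs = s ** (-q) * math.exp(z / s)   # s^(-q) * 0F0(z/s)
assert abs(lhs - rhs) < 1e-4
```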
Theorem 14.
The following integral formula holds:
$$\int_{0}^{x}(x-t)^{Q-I}\,{}_rR_s\big(A_1,\ldots,A_r;B_1,\ldots,B_s;P,Q;z(x-t)^{P}\big)\;t^{Q'-I}\,{}_rR_s\big(A_1',\ldots,A_r';B_1',\ldots,B_s';P,Q';zt^{P}\big)\,dt=x^{Q+Q'-I}\,{}_rR_s\big(A_1+A_1',A_2+A_2',\ldots,A_r+A_r';B_1+B_1',B_2+B_2',\ldots,B_s+B_s';P,Q+Q';zx^{P}\big). \qquad (52)$$
Proof. 
On employing the convolution theorem of the Laplace transform, we obtain
$$L\Big[\int_{0}^{x}\Psi(x-\tau)\,\Omega(\tau)\,d\tau;s\Big]=L[\Psi(x);s]\cdot L[\Omega(x);s]. \qquad (53)$$
When using (53) together with (50), we obtain
$$L\Big[\int_{0}^{x}(x-t)^{Q-I}\,{}_rR_s\big(A_1,\ldots,A_r;B_1,\ldots,B_s;P,Q;z(x-t)^{P}\big)\,t^{Q'-I}\,{}_rR_s\big(A_1',\ldots,A_r';B_1',\ldots,B_s';P,Q';zt^{P}\big)\,dt;s\Big]=L\big[x^{Q-I}\,{}_rR_s(A_1,\ldots;P,Q;zx^{P});s\big]\cdot L\big[x^{Q'-I}\,{}_rR_s(A_1',\ldots;P,Q';zx^{P});s\big]$$
$$=\sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\frac{z^{k}}{k!}\prod_{i=1}^{r}(A_i)_k\Big[\prod_{j'=1}^{s}(B_{j'})_k\Big]^{-1}\frac{z^{j}}{j!}\prod_{i=1}^{r}(A_i')_j\Big[\prod_{j'=1}^{s}(B_{j'}')_j\Big]^{-1}s^{-(k+j)P-Q-Q'}=\sum_{k=0}^{\infty}\sum_{j=0}^{k}\frac{z^{k}}{(k-j)!\,j!}\prod_{i=1}^{r}(A_i)_{k-j}\Big[\prod_{j'=1}^{s}(B_{j'})_{k-j}\Big]^{-1}\prod_{i=1}^{r}(A_i')_j\Big[\prod_{j'=1}^{s}(B_{j'}')_j\Big]^{-1}s^{-kP-Q-Q'}=\sum_{k=0}^{\infty}\frac{z^{k}}{k!}\prod_{i=1}^{r}(A_i+A_i')_k\Big[\prod_{j'=1}^{s}(B_{j'}+B_{j'}')_k\Big]^{-1}s^{-kP-Q-Q'}. \qquad (54)$$
When using (51), we find that
$$L^{-1}\big(s^{-kP-Q-Q'}\big)=x^{kP+Q+Q'-I}\,\Gamma^{-1}(kP+Q+Q'). \qquad (55)$$
Applying the inverse Laplace transform term by term to the right-hand side of (54) and using (55), we obtain
$$x^{Q+Q'-I}\,{}_rR_s\big(A_1+A_1',A_2+A_2',\ldots,A_r+A_r';B_1+B_1',B_2+B_2',\ldots,B_s+B_s';P,Q+Q';zx^{P}\big). \;\;\Box$$
Theorem 15.
For x > a , the following relations hold true:
I a + α [ ( z a ) Q I r R s ( A 1 , A 2 , , A r ; B 1 , B 2 , , B s ; P , Q ; c ( z a ) P ) ] = ( x a ) Q + ( α 1 ) I r R s ( A 1 , A 2 , , A r ; B 1 , B 2 , , B s ; P , Q + α I ; c ( x a ) P ) ,
where I a + α is the right-sided Riemann–Liouville (R–L) fractional integral operator ([42,43])
( I a + α f ) ( x ) = 1 Γ ( α ) a x ( x t ) α 1 f ( t ) d t , x > a ,
and
D a + α [ ( z a ) Q I r R s ( A 1 , A 2 , , A r ; B 1 , B 2 , , B s ; P , Q ; c ( z a ) P ) ] = ( x a ) Q ( α + 1 ) I r R s ( A 1 , A 2 , , A r ; B 1 , B 2 , , B s ; P , Q α I ; c ( x a ) P ) ,
where D a + α is the right-hand-sided Riemann–Liouville (R–L) fractional derivative operator of order α
( D a + α f ) ( x ) = ( d d x ) n ( I a + n α f ) ( x ) ,
and
D a + α , β [ ( z a ) Q I r R s ( A 1 , A 2 , , A r ; B 1 , B 2 , , B s ; P , Q ; c ( z a ) P ) ] = ( x a ) Q ( α + 1 ) I r R s ( A 1 , A 2 , , A r ; B 1 , B 2 , , B s ; P , Q α I ; c ( x a ) P ) ,
where D a + α , β is the right-hand-sided Riemann–Liouville (R–L) fractional derivative operator of order α,
$$\big(D_{a+}^{\alpha,\beta}f\big)(x)=\Big(I_{a+}^{\beta(1-\alpha)}\frac{d}{dx}\big(I_{a+}^{(1-\beta)(1-\alpha)}f\big)\Big)(x),\qquad \alpha\in(0,1],\ \beta\in[0,1].$$
Proof. 
Using the relation
$$I_{a+}^{\alpha}\big[(z-a)^{\ell P+Q-I}\big]=\Gamma(\ell P+Q)\,\Gamma^{-1}(\ell P+Q+\alpha I)\,(x-a)^{\ell P+Q+(\alpha-1)I},\qquad x>a,$$
we obtain
$$
\begin{aligned}
I_{a+}^{\alpha}\Big[(z-a)^{Q-I}\,{}_rR_s\big(A_1,A_2,\dots,A_r;B_1,B_2,\dots,B_s;P,Q;c(z-a)^{P}\big)\Big]
&=\sum_{\ell=0}^{\infty}\frac{c^{\ell}}{\ell!}\prod_{i=1}^{r}(A_i)_{\ell}\Big[\prod_{j=1}^{s}(B_j)_{\ell}\Big]^{-1}\Gamma^{-1}(\ell P+Q)\,I_{a+}^{\alpha}\big[(z-a)^{\ell P+Q-I}\big]\\
&=\sum_{\ell=0}^{\infty}\frac{c^{\ell}}{\ell!}\prod_{i=1}^{r}(A_i)_{\ell}\Big[\prod_{j=1}^{s}(B_j)_{\ell}\Big]^{-1}\Gamma^{-1}(\ell P+Q+\alpha I)\,(x-a)^{\ell P+Q+(\alpha-1)I}\\
&=(x-a)^{Q+(\alpha-1)I}\,{}_rR_s\big(A_1,A_2,\dots,A_r;B_1,B_2,\dots,B_s;P,Q+\alpha I;c(x-a)^{P}\big),
\end{aligned}
$$
which is the right-hand side of (56).
To prove assertion (57), we use the relations
$$I_{a+}^{n-\alpha}\big[(z-a)^{\ell P+Q-I}\big]=\Gamma(\ell P+Q)\,\Gamma^{-1}\big(\ell P+Q+(n-\alpha)I\big)\,(x-a)^{\ell P+Q+(n-\alpha-1)I},\qquad x>a,$$
and
$$D^{n}\big[(z-a)^{\ell P+Q+(n-\alpha-1)I}\big]=\Gamma\big(\ell P+Q+(n-\alpha)I\big)\,\Gamma^{-1}(\ell P+Q-\alpha I)\,(x-a)^{\ell P+Q-(\alpha+1)I},\qquad x>a,$$
which give
$$
\begin{aligned}
D_{a+}^{\alpha}\Big[(z-a)^{Q-I}\,{}_rR_s\big(A_1,A_2,\dots,A_r;B_1,B_2,\dots,B_s;P,Q;c(z-a)^{P}\big)\Big]
&=\Big(\frac{d}{dx}\Big)^{n}I_{a+}^{n-\alpha}\Big[(z-a)^{Q-I}\,{}_rR_s\big(A_1,A_2,\dots,A_r;B_1,B_2,\dots,B_s;P,Q;c(z-a)^{P}\big)\Big]\\
&=\Big(\frac{d}{dx}\Big)^{n}\Big[(x-a)^{Q+(n-\alpha-1)I}\,{}_rR_s\big(A_1,A_2,\dots,A_r;B_1,B_2,\dots,B_s;P,Q+(n-\alpha)I;c(x-a)^{P}\big)\Big]\\
&=(x-a)^{Q-(\alpha+1)I}\,{}_rR_s\big(A_1,A_2,\dots,A_r;B_1,B_2,\dots,B_s;P,Q-\alpha I;c(x-a)^{P}\big).
\end{aligned}
$$
By applying the Hilfer fractional derivative operator $D_{a+}^{\alpha,\beta}$ of order α and type β, we obtain
$$I_{a+}^{(1-\beta)(1-\alpha)}\big[(z-a)^{\ell P+Q-I}\big]=\Gamma(\ell P+Q)\,\Gamma^{-1}\big(\ell P+Q+(1-\beta)(1-\alpha)I\big)\,(x-a)^{\ell P+Q+((1-\beta)(1-\alpha)-1)I},$$
$$D\big[(x-a)^{\ell P+Q+((1-\beta)(1-\alpha)-1)I}\big]=\big(\ell P+Q+((1-\beta)(1-\alpha)-1)I\big)\,(x-a)^{\ell P+Q+((1-\beta)(1-\alpha)-2)I},$$
$$I_{a+}^{\beta(1-\alpha)}\big[(z-a)^{\ell P+Q+((1-\beta)(1-\alpha)-2)I}\big]=\Gamma\big(\ell P+Q+((1-\beta)(1-\alpha)-1)I\big)\,\Gamma^{-1}\big(\ell P+Q+((1-\beta)(1-\alpha)-1)I+\beta(1-\alpha)I\big)\,(x-a)^{\ell P+Q+((1-\beta)(1-\alpha)-2)I+\beta(1-\alpha)I},$$
and
$$D_{a+}^{\alpha,\beta}\big[(z-a)^{\ell P+Q-I}\big]=\Gamma(\ell P+Q)\,\Gamma^{-1}(\ell P+Q-\alpha I)\,(x-a)^{\ell P+Q-(\alpha+1)I}.$$
Thus, we obtain
$$
\begin{aligned}
D_{a+}^{\alpha,\beta}\Big[(z-a)^{Q-I}\,{}_rR_s\big(A_1,A_2,\dots,A_r;B_1,B_2,\dots,B_s;P,Q;c(z-a)^{P}\big)\Big]
&=I_{a+}^{\beta(1-\alpha)}\frac{d}{dx}\,I_{a+}^{(1-\beta)(1-\alpha)}\Big[(z-a)^{Q-I}\,{}_rR_s\big(A_1,A_2,\dots,A_r;B_1,B_2,\dots,B_s;P,Q;c(z-a)^{P}\big)\Big]\\
&=I_{a+}^{\beta(1-\alpha)}\frac{d}{dx}\Big[(z-a)^{Q+((1-\beta)(1-\alpha)-1)I}\,{}_rR_s\big(A_1,A_2,\dots,A_r;B_1,B_2,\dots,B_s;P,Q+(1-\beta)(1-\alpha)I;c(z-a)^{P}\big)\Big]\\
&=(x-a)^{Q-(\alpha+1)I}\,{}_rR_s\big(A_1,A_2,\dots,A_r;B_1,B_2,\dots,B_s;P,Q-\alpha I;c(x-a)^{P}\big).\qquad\square
\end{aligned}
$$
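Both ingredients of Theorem 15, the Riemann–Liouville power rule and the three-step Hilfer composition, can be sanity-checked in the scalar (1×1) case. The sketch below assumes SciPy is available, and all parameter values are illustrative:

```python
import math
from scipy.integrate import quad   # assumes SciPy is available

# 1) Riemann-Liouville power rule behind (58), scalar case:
#    I_{a+}^alpha (t-a)^{mu-1} = Gamma(mu)/Gamma(mu+alpha) (x-a)^{mu+alpha-1}.
def rl_integral(f, a, x, alpha):
    """Left-sided Riemann-Liouville fractional integral (I_{a+}^alpha f)(x)."""
    val, _ = quad(lambda t: (x - t)**(alpha - 1) * f(t), a, x)
    return val / math.gamma(alpha)

a, x, alpha, mu = 0.0, 1.5, 1.5, 2.4        # illustrative values only
num = rl_integral(lambda t: (t - a)**(mu - 1), a, x, alpha)
exact = math.gamma(mu) / math.gamma(mu + alpha) * (x - a)**(mu + alpha - 1)
assert abs(num - exact) < 1e-8

# 2) Hilfer composition I^{beta(1-alpha)} d/dx I^{(1-beta)(1-alpha)} applied to
#    (t-a)^{mu-1}, tracked analytically via the power rule: the result
#    Gamma(mu)/Gamma(mu-alpha) (x-a)^{mu-alpha-1} is independent of the type beta.
def hilfer_power(mu, alpha, beta):
    """Return (coefficient, exponent) of D^{alpha,beta} (t-a)^{mu-1}."""
    g1, g2 = (1 - beta) * (1 - alpha), beta * (1 - alpha)
    c = math.gamma(mu) / math.gamma(mu + g1); p = mu + g1 - 1   # inner integral
    c *= p; p -= 1                                              # d/dx
    c *= math.gamma(p + 1) / math.gamma(p + 1 + g2); p += g2    # outer integral
    return c, p

mu2, alpha2 = 2.4, 0.7
for beta in (0.0, 0.3, 1.0):
    c, p = hilfer_power(mu2, alpha2, beta)
    assert abs(c - math.gamma(mu2) / math.gamma(mu2 - alpha2)) < 1e-12
    assert abs(p - (mu2 - alpha2 - 1)) < 1e-12
```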

6. Some Special Cases and Applications

In this section, we establish Euler-type integral representations of the r R s matrix function and discuss several special cases, including connections with the Konhauser and Laguerre matrix polynomials.
Theorem 16.
For $|z|<1$ and $\mathrm{Re}(B)>\mathrm{Re}(A)>0$, the ${}_{r+1}R_r$ matrix function satisfies the following Euler-type integral representation:
$${}_{r+1}R_r\big(E,\Delta(A,r);\Delta(B,r);P,Q;z\big)=\Gamma(B)\,\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\int_{0}^{1}t^{A-I}(1-t)^{B-A-I}\,E_{P,Q,E}(zt^{r})\,dt,$$
where E P , Q , E ( z ) is a three-parametric Mittag–Leffler matrix function [40].
Proof. 
For convenience, let ${}_{r+1}R_r$ denote the left-hand side of (66); then
$${}_{r+1}R_r\big(E,\Delta(A,r);\Delta(B,r);P,Q;z\big)=\sum_{\ell=0}^{\infty}\frac{z^{\ell}}{\ell!}\,(E)_{\ell}\,\Big(\tfrac{1}{r}A\Big)_{\ell}\Big(\tfrac{1}{r}(A+I)\Big)_{\ell}\cdots\Big(\tfrac{1}{r}(A+(r-1)I)\Big)_{\ell}\Big[\Big(\tfrac{1}{r}B\Big)_{\ell}\Big]^{-1}\Big[\Big(\tfrac{1}{r}(B+I)\Big)_{\ell}\Big]^{-1}\cdots\Big[\Big(\tfrac{1}{r}(B+(r-1)I)\Big)_{\ell}\Big]^{-1}\Gamma^{-1}(\ell P+Q).$$
When using the relation [16], we obtain
$$(A)_{r\ell}=r^{r\ell}\prod_{i=1}^{r}\Big(\frac{A+(i-1)I}{r}\Big)_{\ell},\qquad \ell=0,1,2,\dots,$$
where r is a positive integer.
Thus, (67) becomes
$${}_{r+1}R_r\big(E,\Delta(A,r);\Delta(B,r);P,Q;z\big)=\sum_{\ell=0}^{\infty}\frac{z^{\ell}}{\ell!}\,(E)_{\ell}\,(A)_{r\ell}\big[(B)_{r\ell}\big]^{-1}\Gamma^{-1}(\ell P+Q),$$
and we find
$$(A)_{r\ell}\big[(B)_{r\ell}\big]^{-1}=\Gamma(B)\,\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\,\mathcal{B}\big(A+r\ell I,\,B-A\big).$$
When using (69) and (70), we arrive at
$$
\begin{aligned}
{}_{r+1}R_r\big(E,\Delta(A,r);\Delta(B,r);P,Q;z\big)
&=\sum_{\ell=0}^{\infty}\frac{z^{\ell}}{\ell!}\,(E)_{\ell}\,(A)_{r\ell}\big[(B)_{r\ell}\big]^{-1}\Gamma^{-1}(\ell P+Q)\\
&=\Gamma(B)\,\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\sum_{\ell=0}^{\infty}\frac{z^{\ell}}{\ell!}\,(E)_{\ell}\,\Gamma^{-1}(\ell P+Q)\int_{0}^{1}t^{A+(r\ell-1)I}(1-t)^{B-A-I}\,dt\\
&=\Gamma(B)\,\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\int_{0}^{1}t^{A-I}(1-t)^{B-A-I}\,E_{P,Q,E}(zt^{r})\,dt.\qquad\square
\end{aligned}
$$
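In the scalar (1×1) case with r = 1, the Euler-type integral representation above can be checked numerically against the defining series. The sketch below assumes SciPy is available, and all parameter values are illustrative:

```python
import math
from scipy.integrate import quad   # assumes SciPy is available

# Scalar (1x1, r = 1) check of the Euler-type integral: compare the series
#   sum_l z^l/l! (E)_l (A)_l/(B)_l / Gamma(lP+Q)
# with
#   Gamma(B)/(Gamma(A)Gamma(B-A)) int_0^1 t^{A-1}(1-t)^{B-A-1} E_{P,Q,E}(zt) dt,
# where E_{P,Q,E}(w) = sum_l (E)_l w^l/(l! Gamma(lP+Q)).  Values illustrative.
E, A, B, P, Q, z = 1.2, 0.8, 2.5, 1.3, 0.9, 0.4

def poch(a, l):                       # Pochhammer symbol (a)_l
    return math.gamma(a + l) / math.gamma(a)

def ml3(w, terms=50):                 # three-parametric Mittag-Leffler (scalar)
    return sum(poch(E, l) * w**l / (math.factorial(l) * math.gamma(l * P + Q))
               for l in range(terms))

series = sum(z**l / math.factorial(l) * poch(E, l) * poch(A, l) / poch(B, l)
             / math.gamma(l * P + Q) for l in range(50))
integral, _ = quad(lambda t: t**(A - 1) * (1 - t)**(B - A - 1) * ml3(z * t), 0, 1)
integral *= math.gamma(B) / (math.gamma(A) * math.gamma(B - A))
assert abs(series - integral) < 1e-7
```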
Theorem 17.
For any matrix E in C N × N , the following integral representation holds true:
$${}_{r+1}R_r\big(E,\Delta(A,r);\Delta(B,r);I,Q;z\big)=\Gamma(B)\,\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\,\Gamma^{-1}(Q)\int_{0}^{1}t^{A-I}(1-t)^{B-A-I}\,{}_1F_1(E;Q;zt^{r})\,dt.$$
Proof. 
For P = I in (66), the three-parametric Mittag–Leffler matrix function $E_{I,Q,E}(zt^{r})$ reduces to $\Gamma^{-1}(Q)\,{}_1F_1(E;Q;zt^{r})$, where ${}_1F_1$ is the confluent hypergeometric matrix function. Thus, we obtain (71). □
Theorem 18.
The ${}_{r+1}R_r$ matrix function satisfies the following Euler-type integral representation:
$${}_{r+1}R_r\big(-nI,\Delta(A,r);\Delta(B,r);kI,Q;z\big)=\Gamma(B)\,\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\,\Gamma(n+1)\,\Gamma^{-1}(nkI+Q)\int_{0}^{1}t^{A-I}(1-t)^{B-A-I}\,Z_{n}^{Q-I}(zt^{r};k)\,dt,$$
where $n,k\in\mathbb{N}$ and $Z_{n}^{Q-I}(z;k)$ are the Konhauser matrix polynomials [16,44,45,46,47,48] of degree $n$ in $z^{k}$.
Proof. 
Setting $E=-nI$ and $P=kI$, we find that (66) reduces to
$${}_{r+1}R_r\big(-nI,\Delta(A,r);\Delta(B,r);kI,Q;z\big)=\Gamma(B)\,\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\int_{0}^{1}t^{A-I}(1-t)^{B-A-I}\,E_{kI,Q,-nI}(zt^{r})\,dt.$$
When using the result defined in [16,45], this leads to the right-hand side of (72). □
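In the scalar case, the reduction used in this proof can be verified directly: for E = −n and P = k the three-parametric Mittag–Leffler series terminates and becomes a Konhauser polynomial. The sketch below adopts the classical scalar Konhauser convention $Z_n^{a}(x;k)=\frac{\Gamma(kn+a+1)}{n!}\sum_{j=0}^{n}(-1)^j\binom{n}{j}\frac{x^{kj}}{\Gamma(kj+a+1)}$ as an assumption, with illustrative values:

```python
import math

# Scalar sketch of the reduction: for E = -n and P = k,
#   E_{k,Q,-n}(x^k) = n!/Gamma(kn+Q) * Z_n^{Q-1}(x; k),
# since (-n)_l = (-1)^l C(n,l) l! makes the series stop at l = n.
n, k, Q, x = 3, 2, 1.4, 0.7          # illustrative values only

# terminating three-parametric Mittag-Leffler series E_{k,Q,-n}(x^k)
ml3 = sum((-1)**l * math.comb(n, l) * (x**k)**l / math.gamma(k * l + Q)
          for l in range(n + 1))

# scalar Konhauser polynomial Z_n^{Q-1}(x; k) in the convention stated above
konhauser = math.gamma(k * n + Q) / math.factorial(n) * sum(
    (-1)**j * math.comb(n, j) * x**(k * j) / math.gamma(k * j + Q)
    for j in range(n + 1))

assert abs(ml3 - math.factorial(n) / math.gamma(k * n + Q) * konhauser) < 1e-12
```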
Yet another such integral representation is obtained in a straightforward manner, as follows.
Theorem 19.
For $n\in\mathbb{N}$, the following integral representation holds:
$${}_{r+1}R_r\big(-nI,\Delta(A,r);\Delta(B,r);I,Q;z\big)=\Gamma(B)\,\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\,\Gamma(n+1)\,\Gamma^{-1}(Q+nI)\int_{0}^{1}t^{A-I}(1-t)^{B-A-I}\,L_{n}^{Q-I}(zt^{r})\,dt,$$
where $L_{n}^{Q-I}(z)$ is a Laguerre matrix polynomial [14].
Theorem 20.
The r + 1 R r matrix function satisfies the following result:
$${}_{r+1}R_r\big(E,\Delta(A,r);\Delta(B,r);P,Q;z\big)=\Gamma(B)\,\Gamma^{-1}(A)\sum_{\ell=0}^{\infty}\frac{(-1)^{\ell}}{\ell!}\,\Gamma^{-1}(B-A-\ell I)\,(A+\ell I)^{-1}\;{}_{r+1}R_r\big(E,\Delta(A+\ell I,r);\Delta(A+(\ell+1)I,r);P,Q;z\big).$$
Proof. 
From (66), with ${}_{r+1}R_r$ denoting the left-hand side of (74), we obtain
$$
\begin{aligned}
{}_{r+1}R_r\big(E,\Delta(A,r);\Delta(B,r);P,Q;z\big)
&=\Gamma(B)\,\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\int_{0}^{1}t^{A-I}(1-t)^{B-A-I}\,E_{P,Q,E}(zt^{r})\,dt\\
&=\Gamma(B)\,\Gamma^{-1}(A)\sum_{\ell=0}^{\infty}\frac{(-1)^{\ell}}{\ell!}\,\Gamma^{-1}(B-A-\ell I)\sum_{k=0}^{\infty}\frac{1}{k!}\,(E)_{k}\,z^{k}\,\Gamma^{-1}(kP+Q)\int_{0}^{1}t^{A+(\ell+rk-1)I}\,dt\\
&=\Gamma(B)\,\Gamma^{-1}(A)\sum_{\ell=0}^{\infty}\frac{(-1)^{\ell}}{\ell!}\,\Gamma^{-1}(B-A-\ell I)\sum_{k=0}^{\infty}\frac{1}{k!}\,(E)_{k}\,z^{k}\,\Gamma^{-1}(kP+Q)\,\big(A+(\ell+rk)I\big)^{-1}\\
&=\Gamma(B)\,\Gamma^{-1}(A)\sum_{\ell=0}^{\infty}\frac{(-1)^{\ell}}{\ell!}\,(A+\ell I)^{-1}\,\Gamma^{-1}(B-A-\ell I)\sum_{k=0}^{\infty}\frac{1}{k!}\,(E)_{k}\,(A+\ell I)_{rk}\big[(A+(\ell+1)I)_{rk}\big]^{-1}\Gamma^{-1}(kP+Q)\,z^{k}\\
&=\Gamma(B)\,\Gamma^{-1}(A)\sum_{\ell=0}^{\infty}\frac{(-1)^{\ell}}{\ell!}\,(A+\ell I)^{-1}\,\Gamma^{-1}(B-A-\ell I)\;{}_{r+1}R_r\big(E,\Delta(A+\ell I,r);\Delta(A+(\ell+1)I,r);P,Q;z\big).\qquad\square
\end{aligned}
$$
Corollary 1.
For | z | < 1 , the 2 R 1 matrix function is given by
$${}_2R_1(A,I;B;P,I;z)=\Gamma(B)\,\Gamma^{-1}(A)\,{}_2\Psi_2(A,I;B,P;z).$$
Proof. 
From (38), we obtain
$$
\begin{aligned}
{}_2R_1(A,I;B;P,I;z)&=\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\,\Gamma(B)\int_{0}^{1}t^{A-I}(1-t)^{B-A-I}\,{}_1R_0(I;-;P,I;zt)\,dt\\
&=\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\,\Gamma(B)\int_{0}^{1}t^{A-I}(1-t)^{B-A-I}\sum_{\ell=0}^{\infty}\Gamma^{-1}(\ell P+I)\,(zt)^{\ell}\,dt\\
&=\Gamma^{-1}(A)\,\Gamma^{-1}(B-A)\,\Gamma(B)\int_{0}^{1}t^{A-I}(1-t)^{B-A-I}\,E_{P}(zt)\,dt,
\end{aligned}
$$
where E P ( z t ) is a Mittag–Leffler matrix function.
By using the relation between the Mittag–Leffler matrix function E P ( z t ) and the generalized Wright matrix function 2 Ψ 2 [45], we find
$$\int_{0}^{1}t^{A-I}(1-t)^{B-A-I}\,E_{P}(zt)\,dt=\Gamma(B-A)\,{}_2\Psi_2(A,I;B,P;z),$$
where ${}_2\Psi_2$ is a special case of the generalized Wright matrix function ${}_r\Psi_s$ in [22]. This completes the proof. □
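Reading ${}_2\Psi_2(A,I;B,P;z)$ in the scalar (1×1) case as the Wright series with parameter pairs $(A,1),(1,1);(B,1),(1,P)$ — an assumption about the shorthand — the Mittag–Leffler/Wright relation above can be checked numerically. The sketch assumes SciPy is available, with illustrative values:

```python
import math
from scipy.integrate import quad   # assumes SciPy is available

# Scalar (1x1) check: int_0^1 t^{A-1}(1-t)^{B-A-1} E_P(zt) dt
#                     = Gamma(B-A) * 2Psi2[(A,1),(1,1);(B,1),(1,P); z].
A, B, P, z = 0.7, 2.2, 1.4, 0.5    # illustrative values only

def ml(w, terms=50):               # classical Mittag-Leffler E_P(w)
    return sum(w**l / math.gamma(P * l + 1) for l in range(terms))

lhs, _ = quad(lambda t: t**(A - 1) * (1 - t)**(B - A - 1) * ml(z * t), 0, 1)
wright = sum(math.gamma(A + l) * math.gamma(1 + l)
             / (math.gamma(B + l) * math.gamma(1 + P * l))
             * z**l / math.factorial(l) for l in range(50))
assert abs(lhs - math.gamma(B - A) * wright) < 1e-7
```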

7. Concluding Remarks

In this paper, we were motivated to obtain a recurrence relation for the r R s matrix function and to use it to derive an integral representation. The results presented here appear to be new in the literature. We established the convergence properties of the r R s matrix function together with its analytic properties (type and order), its contiguous function relations, and its differential property. We also obtained the contiguous relations for the generalized hypergeometric matrix function; extended integral representations and the differential property of the r R s matrix function, with integrals relating it to other well-known special functions of fractional calculus; a transform method with an application to the Mittag–Leffler matrix function; an Euler-type integral representation; and some special cases of these integral representations. Since several of the results involving generalizations and extensions of hypergeometric matrix functions have the potential to play important roles in the theory of the special matrix functions of mathematical physics, applied mathematics, engineering, probability theory, and the statistical sciences, it would be interesting, and possible, to develop this study further. In this context, our main results, as well as some particular cases, can be applied theoretically, practically, and from numerical and algorithmic points of view.
Building on this article, a variety of directions can be pursued, such as the representation of the matrix R-function via the Fourier transform, the distributional representation of the r R s matrix function, and Euler-type integral matrix representations of the generalized r R s matrix function (developed here in some special cases from the perspectives of the Konhauser and Laguerre matrix polynomials). One can also study applications in probability theory and groundwater pumping modeling via the pathway integral representation and the pathway transformation of the r R s matrix function, as well as the solution of fractional matrix differential equations involving the Hilfer derivative operator (which is a composition of the Riemann–Liouville fractional integral and derivative). The conclusions of this work are thus diverse and important, and it will be intriguing, and possible, to expand on them in future research.

Author Contributions

Writing—original draft, A.S., G.S.K. and C.C.; Writing—review & editing, A.S., G.S.K. and C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this paper are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Constantine, A.G.; Muirhead, R.J. Partial differential equations for hypergeometric functions of two argument matrices. J. Multivar. Anal. 1972, 3, 332–338. [Google Scholar] [CrossRef]
  2. James, A.T. Special Functions of Matrix and Single Argument in Statistics in Theory and Application of Special Functions; Academic Press: New York, NY, USA, 1975. [Google Scholar]
  3. Mathai, A.M. A Handbook of Generalized Special Functions for Statistical and Physical Sciences; Oxford University Press: Oxford, UK, 1993. [Google Scholar]
  4. Mathai, A.M.; Haubold, H.J. An Introduction to Fractional Calculus; Nova Science Publishers: New York, NY, USA, 2017. [Google Scholar]
  5. Jódar, L.; Cortés, J.C. Some properties of Gamma and Beta matrix functions. Appl. Math. Lett. 1998, 11, 89–93. [Google Scholar] [CrossRef]
  6. Jódar, L.; Cortés, J.C. On the hypergeometric matrix function. J. Comput. Appl. Math. 1998, 99, 205–217. [Google Scholar] [CrossRef]
  7. Jódar, L.; Cortés, J.C. Closed form general solution of the hypergeometric matrix differential equation. Math. Comput. Model. 2000, 32, 1017–1028. [Google Scholar] [CrossRef]
  8. Dwivedi, R.; Sahai, V. On the hypergeometric matrix functions of two variables. Linear Multilinear Algebra 2018, 66, 1819–1837. [Google Scholar] [CrossRef]
  9. Dwivedi, R.; Sahai, V. A note on the Appell matrix functions. Quaest. Math. 2020, 43, 321–334. [Google Scholar] [CrossRef]
  10. Abdullah, A.; Bayram, C.; Sahin, R. On the matrix versions of Appell hypergeometric functions. Quaest. Math. 2014, 37, 31–38. [Google Scholar]
  11. Liu, H. Some generating relations for extended Appell’s and Lauricella’s hypergeometric functions. Rocky Mt. J. Math. 2014, 44, 1987–2007. [Google Scholar] [CrossRef]
  12. Bayram, C.; Rabia, A. Multivariable matrix generalization of Gould-Hopper polynomials. Miskolc Math. Notes 2015, 16, 79–89. [Google Scholar]
  13. Defez, E.; Jódar, L.; Law, A. Jacobi matrix differential equation, polynomial solutions, and their properties. Comput. Math. Appl. 2004, 48, 789–803. [Google Scholar] [CrossRef]
  14. Jódar, L.; Sastre, J. On Laguerre matrix polynomials. Util. Math. 1998, 53, 37–48. [Google Scholar]
  15. Cetinkaya, A. The incomplete second Appell hypergeometric functions. Appl. Math. Comput. 2013, 219, 8332–8337. [Google Scholar] [CrossRef]
  16. Shehata, A. Some relations on Konhauser matrix polynomials. Miskolc Math. Notes 2016, 17, 605–633. [Google Scholar] [CrossRef]
  17. Duran, A.J.; Van Assche, W. Orthogonal matrix polynomials and higher order recurrence relations. Linear Algebra Appl. 1995, 219, 261–280. [Google Scholar] [CrossRef]
  18. Geronimo, J.S. Scattering theory and matrix orthogonal polynomials on the real line. Circ. Syst. Signal Process. 1982, 1, 471–495. [Google Scholar] [CrossRef]
  19. Abbas, M.I. Nonlinear Alangana-Baleanu fractional differential equations involving the Mittag–Leffler integral operator. Mem. Differ. Equ. Math. Phys. 2021, 82, 1–13. [Google Scholar]
  20. Shiri, B.; Baleanu, D. System of fractional differential algebraic equations with applications. Chaos Solitons Fractals 2019, 120, 203–212. [Google Scholar] [CrossRef]
  21. Zhang, X. The non-uniqueness of solution for initial value problem of impulsive differential equations involving higher order Katugampola fractional derivative. Adv. Differ. Equ. 2020, 2020, 85. [Google Scholar] [CrossRef]
  22. Bakhet, A.; Jiao, Y.; He, F. On the Wright hypergeometric matrix functions and their fractional calculus. Integral Transform. Spec. Funct. 2019, 30, 138–156. [Google Scholar]
  23. Duan, J.; Chen, L. Solution of fractional differential equation systems and computation of matrix Mittag—Leffler functions. Symmetry 2018, 10, 503. [Google Scholar] [CrossRef]
  24. Eltayeb, H.; Kiliçman, A.; Agarwal, R.P. On integral transforms and matrix functions. Abstr. Appl. Anal. 2011, 2011, 207930. [Google Scholar] [CrossRef]
  25. Kargin, L.; Kurt, V. Chebyshev-type matrix polynomials and integral transforms. Hacet. J. Math. Stat. 2015, 44, 341–350. [Google Scholar] [CrossRef]
  26. Khammash, G.S.; Agarwal, P.; Choi, J. Extended k-Gamma and k-Beta functions of matrix arguments. Mathematics 2020, 8, 1715. [Google Scholar] [CrossRef]
  27. Shehata, A. On Lommel Matrix Polynomials. Symmetry 2021, 13, 2335. [Google Scholar] [CrossRef]
  28. Shehata, A.; Subuhi, K. On Bessel-Maitland matrix function. Mathematica 2015, 57, 90–103. [Google Scholar]
  29. Salim, T.O. Some properties relating to the generalized Mittag–Leffler function. Adv. Appl. Math. Anal. 2009, 4, 21–30. [Google Scholar]
  30. Sharma, K. Application of fractional calculus operators to related areas. Gen. Math. Notes 2011, 7, 33–40. [Google Scholar]
  31. Shukla, A.K.; Prajapati, J.C. On a generalization of Mittag–Leffler function and its properties. J. Math. Anal. Appl. 2007, 336, 797–811. [Google Scholar] [CrossRef]
  32. Bose, R.C. Early History of Multivariate Statistical Analysis. In Analysis IV; Krishnaiah, P.R., Ed.; North-Holland: Amsterdam, The Netherlands, 1977; pp. 3–22. [Google Scholar]
  33. Jain, S.; Cattani, C.; Agarwal, P. Fractional Hypergeometric Functions. Symmetry 2022, 14, 714. [Google Scholar] [CrossRef]
  34. Pham-Gia, T.; Thanh, D. Hypergeometric Functions: From One Scalar Variable to Several Matrix Arguments, in Statistics and Beyond. Open J. Stat. 2016, 6, 951–994. [Google Scholar] [CrossRef]
  35. Saigo, M. On generalized fractional calculus operators. In Proceedings of the Recent Advances in Applied Mathematics, Kuwait City, Kuwait, 4–7 May 1996; Kuwait University Press: Kuwait City, Kuwait, 1996; pp. 441–450. [Google Scholar]
  36. Srivastava, H.M.; Agarwal, P. Certain Fractional Integral Operators and the Generalized Incomplete Hypergeometric Functions. Appl. Appl. Math. 2013, 8, 333–345. [Google Scholar]
  37. Boyadjiev, L.; Dobner, H.J. Fractional free electron laser equations. Integral Transform. Spec. Funct. 2001, 11, 113–136. [Google Scholar] [CrossRef]
  38. Tassaddiq, A.; Srivastava, R. New results involving the generalized Krätzel function with application to the fractional kinetic equations. Mathematics 2023, 11, 1060. [Google Scholar] [CrossRef]
  39. Dunford, N.; Schwartz, J. Linear Operators, Part I; Interscience: New York, NY, USA, 1957. [Google Scholar]
  40. Sanjhira, R.; Dwivedi, R. On the matrix function pRq(A,B;z) and its fractional calculus properties. Commun. Math. 2023, 31, 43–56. [Google Scholar]
  41. Folland, G.B. Fourier Analysis and Its Applications; The Wadsworth and Brooks/Cole Mathematics Series; Thomson Brooks/Cole: Belmont, CA, USA, 1992. [Google Scholar]
  42. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; North-Holland Mathematics Studies Elsevier Science B.V.: Amsterdam, The Netherlands, 2006; Volume 204. [Google Scholar]
  43. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives; Gordon and Breach Science Publishers: Yverdon, Switzerland, 1993. [Google Scholar]
  44. Erkus-Duman, E.; Cekim, B. New generating functions for the Konhauser matrix polynomials. Commun. Fac. Sci. Univ. Ank. Ser. A1 Math. Stat. 2014, 63, 35–41. [Google Scholar] [CrossRef]
  45. Sanjhira, R.; Nathwani, B.V.; Dave, B.I. Generalized Mittag-Leffer matrix function and associated matrix polynomials. J. Indian Math. Soc. 2019, 86, 161–178. [Google Scholar]
  46. Sanjhira, R.; Dave, B.I. Generalized Konhauser matrix polynomial and its properties. Math. Stud. 2018, 87, 109–120. [Google Scholar]
  47. Shehata, A. A note on Konhauser matrix polynomials. Palestine J. Math. 2020, 9, 549–556. [Google Scholar]
  48. Varma, S.; Cekim, B.; Tasdelen, F. On Konhauser matrix polynomials. Ars Comb. 2011, 100, 193–204. [Google Scholar]

Share and Cite

MDPI and ACS Style

Shehata, A.; Khammash, G.S.; Cattani, C. Some Relations on the rRs(P,Q,z) Matrix Function. Axioms 2023, 12, 817. https://doi.org/10.3390/axioms12090817

