Article

On Lommel Matrix Polynomials

1 Department of Mathematics, College of Science and Arts, Qassim University, Unaizah 56264, Saudi Arabia
2 Department of Mathematics, Faculty of Science, Assiut University, Assiut 71516, Egypt
Symmetry 2021, 13(12), 2335; https://doi.org/10.3390/sym13122335
Submission received: 16 October 2021 / Revised: 4 November 2021 / Accepted: 25 November 2021 / Published: 6 December 2021

Abstract

The main aim of this paper is to introduce a new class of Lommel matrix polynomials with the help of the hypergeometric matrix function within complex analysis. We derive several properties, such as entireness, order, type, matrix recurrence relations, a differential equation and integral representations, for the Lommel matrix polynomials, and discuss various special cases. Finally, we establish the entireness, order, type, explicit representations and several properties of the modified Lommel matrix polynomials. Several illustrative examples of our general results are also constructed.

1. Introduction

Eugen von Lommel introduced the Lommel polynomial $R_{m,\nu}(z)$, a polynomial of degree $m$ in $1/z$ defined for $m = 0, 1, 2, \ldots$ and arbitrary $\nu$, in [1,2,3], and Watson treated these polynomials within the theory of Bessel functions in [4]. The study of special matrix polynomials and orthogonal matrix polynomials is important due to their applications in certain areas of statistics, physics, engineering, Lie group theory, group representation theory and differential equations. Recently, significant results from the classical theory of orthogonal polynomials and special functions have been extended to orthogonal matrix polynomials and special matrix functions, and applications continue to appear in the literature (see, for example, [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]). In [23,24,25], Mathai et al. studied some special functions of matrix argument. In [26], Nisar et al. introduced the modified Hermite matrix polynomials. In [27,28], Aydi et al. established some formulas for quadruple hypergeometric functions. In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose, and a skew-symmetric (antimetric or antisymmetric) matrix is a square matrix whose transpose equals its negative. Symmetric matrices appear naturally in a variety of important applications, such as statistical analysis, control theory and optimization. Classical orthogonal polynomials are solutions of differential equations; Lommel matrix polynomials are an illustrative example of symmetric polynomials, and the symmetric type of Lommel matrix polynomials is in general of physical importance.
The motivation for this work is to extend Shehata's recent paper on Lommel matrix functions [29] and to prove new properties of Lommel matrix polynomials (LMPs). The outline of the paper is as follows: Section 2 studies a generalization of the hypergeometric matrix function and proves new properties of it. Section 3 gives the definition of Lommel matrix polynomials (LMPs) together with matrix recurrence relations; we also give a second-order matrix differential equation satisfied by the Lommel matrix polynomials and present integral representations for them. Furthermore, the results of Section 2 and Section 3 are used in Section 4 and Section 5 to investigate the behavior of modified Lommel matrix polynomials (MLMPs). Finally, we give some concluding remarks in Section 6.

Preliminaries

In this subsection, we summarize basic facts, lemmas, notations and definitions of matrix functional calculus.
Throughout this paper, the identity matrix and the null (zero) matrix in $\mathbb{C}^{N\times N}$ will be denoted by $I$ and $\mathbf{0}$, respectively. If $Q$ is a matrix in $\mathbb{C}^{N\times N}$, the space of all complex square matrices of common order $N$, its spectrum $\sigma(Q)$ denotes the set of all eigenvalues of $Q$. The two-norm $\|Q\|$ is defined as
$$\|Q\| = \sup_{x\neq 0}\frac{\|Qx\|_2}{\|x\|_2},$$
where $\|x\|_2 = (x^{T}x)^{1/2}$ is the Euclidean norm of the vector $x$.
Theorem 1
(Dunford and Schwartz [30]). If $\Psi(z)$ and $\Omega(z)$ are holomorphic functions of the complex variable $z$, defined in an open set $\Phi$ of the complex plane, then
$$\Psi(A)\,\Omega(Q) = \Omega(Q)\,\Psi(A),$$
where $A$ and $Q$ are commuting matrices in $\mathbb{C}^{N\times N}$ with $\sigma(A)\subset\Phi$ and $\sigma(Q)\subset\Phi$, such that $AQ = QA$.
Definition 1
(Jódar and Cortés [31]). For $Q$ in $\mathbb{C}^{N\times N}$, we say that $Q$ is a positive stable matrix if
$$\mathrm{Re}(\mu) > 0 \quad \text{for all } \mu\in\sigma(Q).$$
Definition 2
(Jódar and Cortés [31]). Let $Q$ be a positive stable matrix in $\mathbb{C}^{N\times N}$; then the Gamma matrix function $\Gamma(Q)$ is defined by
$$\Gamma(Q) = \int_{0}^{\infty} e^{-t}\,t^{Q-I}\,dt;\qquad t^{Q-I} = \exp\big((Q-I)\ln t\big).$$
Definition 3
(Jódar and Sastre [12]). If $Q$ is a matrix in $\mathbb{C}^{N\times N}$ such that
$$Q + rI \ \text{is an invertible matrix for all integers}\ r\geq 0,$$
then $\Gamma(Q)$ is an invertible matrix in $\mathbb{C}^{N\times N}$ and the matrix analogue of the Pochhammer symbol (shifted factorial) is defined by
$$(Q)_r = Q(Q+I)(Q+2I)\cdots\big(Q+(r-1)I\big) = \Gamma(Q+rI)\,\Gamma^{-1}(Q),\ r\geq 1;\qquad (Q)_0 = I.$$
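As a numerical sanity check of Definition 3 (not taken from the paper), consider a diagonal matrix $Q = \mathrm{diag}(q_1,\ldots,q_N)$: every matrix function acts entrywise on the diagonal, so the identity $(Q)_r = \Gamma(Q+rI)\,\Gamma^{-1}(Q)$ can be verified with the scalar gamma function. The diagonal entries below are illustrative values, not from the source.

```python
import math

# For a DIAGONAL matrix Q = diag(q1, ..., qN), matrix functions act entrywise,
# so (Q)_r = Gamma(Q + rI) Gamma^{-1}(Q) reduces to a scalar identity.

def pochhammer_product(q, r):
    """(q)_r = q (q+1) ... (q+r-1), computed as a plain product."""
    result = 1.0
    for j in range(r):
        result *= q + j
    return result

def pochhammer_gamma(q, r):
    """(q)_r = Gamma(q + r) / Gamma(q)."""
    return math.gamma(q + r) / math.gamma(q)

# Diagonal of a hypothetical positive stable matrix Q = diag(0.7, 1.3, 2.5).
diagonal = [0.7, 1.3, 2.5]
r = 3
for q in diagonal:
    assert abs(pochhammer_product(q, r) - pochhammer_gamma(q, r)) < 1e-9
```

The agreement follows from the functional equation $\Gamma(q+1) = q\,\Gamma(q)$ applied $r$ times.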
Fact 1
(Jódar and Cortés [32]). Let us denote by $M(Q)$ and $m(Q)$, for $Q\in\mathbb{C}^{N\times N}$, the real numbers
$$M(Q) = \max\{\mathrm{Re}(z): z\in\sigma(Q)\}\quad\text{and}\quad m(Q) = \min\{\mathrm{Re}(z): z\in\sigma(Q)\}.$$
Notation 1
(Jódar and Cortés [33]). If $Q$ is a matrix in $\mathbb{C}^{N\times N}$, then it follows that
$$\big\|e^{tQ}\big\| \leq e^{t\,M(Q)} \sum_{r=0}^{N-1} \frac{\big(\|Q\|\,N^{\frac12}\,t\big)^r}{r!};\quad t\geq 0,$$
and considering that $m^{Q} = e^{Q\ln m}$, one gets
$$\big\|m^{Q}\big\| \leq m^{M(Q)} \sum_{r=0}^{N-1} \frac{\big(\|Q\|\,N^{\frac12}\,\ln m\big)^r}{r!};\quad m\geq 1.$$
Definition 4
(Jódar and Cortés [32,33]). The hypergeometric matrix function $_2F_1$ is defined by
$${}_2F_1(A,P;Q;z) = \sum_{r=0}^{\infty}\frac{z^r}{r!}\,(A)_r\,(P)_r\,[(Q)_r]^{-1},$$
where $A$, $P$ and $Q$ are matrices in $\mathbb{C}^{N\times N}$ such that $Q+rI$ is an invertible matrix for every integer $r\geq 0$.
Definition 5.
Let us take $Q$ a matrix in $\mathbb{C}^{N\times N}$ such that
$$\nu\ \text{is not a negative integer for every}\ \nu\in\sigma(Q);$$
then the Bessel matrix function (BMF) $J_Q(z)$ of the first kind of order $Q$ is defined in [16,34,35] as follows:
$$J_Q(z) = \sum_{s=0}^{\infty}\frac{(-1)^s}{s!}\,\Gamma^{-1}\big(Q+(s+1)I\big)\left(\frac{z}{2}\right)^{Q+2sI} = \left(\frac{z}{2}\right)^{Q}\Gamma^{-1}(Q+I)\;{}_0F_1\!\left(-;\,Q+I;\,-\frac{z^2}{4}\right);\quad |z|<\infty,\ |\arg(z)|<\pi.$$
Theorem 2
(Jódar and Cortés [31]). Let $Q$ be a positive stable matrix satisfying $\mathrm{Re}(\nu)>0$ for every eigenvalue $\nu\in\sigma(Q)$ and let $r\geq 1$ be an integer; then we have
$$\Gamma(Q) = \lim_{r\to\infty}\,(r-1)!\,[(Q)_r]^{-1}\,r^{Q},$$
where ( Q ) r is defined by (4).
Definition 6
(Jódar and Cortés [31]). Let $A$ and $Q$ be positive stable matrices in $\mathbb{C}^{N\times N}$; then the Beta matrix function $B(A,Q)$ is defined by
$$B(A,Q) = \int_{0}^{1} t^{A-I}\,(1-t)^{Q-I}\,dt.$$
Lemma 1.
If $A$, $Q$ and $A+Q$ are positive stable matrices in $\mathbb{C}^{N\times N}$ satisfying $AQ = QA$, and $A+rI$, $Q+rI$ and $A+Q+rI$ are invertible matrices for all integers $r\geq 0$ [31], then we have
$$B(A,Q) = \Gamma(A)\,\Gamma(Q)\,\Gamma^{-1}(A+Q).$$
Lemma 2
(Defez and Jódar [36]). For $r\geq 0$, $s\geq 0$ and $\Omega(s,r)$ a matrix in $\mathbb{C}^{N\times N}$, the following relation is satisfied:
$$\sum_{r=0}^{\infty}\sum_{s=0}^{\infty}\Omega(s,r) = \sum_{r=0}^{\infty}\sum_{s=0}^{r}\Omega(s,r-s).$$
Corollary 1
(Batahan [37]; Defez and Jódar [38]). Let $A$ and $Q$ be matrices in $\mathbb{C}^{N\times N}$ such that $A$, $Q$ and $Q-A$ are positive stable matrices with $AQ = QA$ and $Q+rI$ is an invertible matrix for every integer $r\geq 0$. Then, for $r$ a non-negative integer, the following holds:
$${}_2F_1(-rI,\,A;\,Q;\,1) = (Q-A)_r\,[(Q)_r]^{-1}.$$

2. Hypergeometric Matrix Function 2 F 3 : Definition and Properties

In this section, we define the hypergeometric matrix function $_2F_3$ under certain conditions. The radius of convergence, order, type, matrix differential equation and transformation formulas of the hypergeometric matrix function $_2F_3$ are given.
Definition 7.
Let us define the hypergeometric matrix function $_2F_3$ in the form
$${}_2F_3 = {}_2F_3(A_1,A_2;\,Q_1,Q_2,Q_3;\,z) = \sum_{s=0}^{\infty}\frac{z^s}{s!}\,(A_1)_s\,(A_2)_s\,[(Q_1)_s]^{-1}[(Q_2)_s]^{-1}[(Q_3)_s]^{-1} = \sum_{s=0}^{\infty} z^s\,U_s,$$
where $A_1$, $A_2$, $Q_1$, $Q_2$ and $Q_3$ are commutative matrices in $\mathbb{C}^{N\times N}$ such that
$Q_1+sI$, $Q_2+sI$ and $Q_3+sI$ are invertible matrices for each integer $s\geq 0$.
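As a scalar ($1\times 1$) illustration of Definition 7 (a numerical sketch, not from the paper), the series below evaluates $_2F_3$ by partial sums; when $A_1 = Q_1$ and $A_2 = Q_2$, the Pochhammer factors cancel and the series collapses to a $_0F_1$ series in the remaining parameter $Q_3$, which gives a convenient self-consistency check. The parameter values used are hypothetical.

```python
# Scalar partial-sum evaluators for the 2F3 and 0F1 series.

def hyp2f3(a1, a2, b1, b2, b3, z, terms=60):
    """Partial sum of the 2F3 hypergeometric series (scalar parameters)."""
    total, term = 0.0, 1.0
    for s in range(terms):
        total += term
        # Ratio of consecutive terms: U_{s+1}/U_s.
        term *= (a1 + s) * (a2 + s) * z / ((b1 + s) * (b2 + s) * (b3 + s) * (s + 1))
    return total

def hyp0f1(b, z, terms=60):
    """Partial sum of the 0F1 series."""
    total, term = 0.0, 1.0
    for s in range(terms):
        total += term
        term *= z / ((b + s) * (s + 1))
    return total

# Parameter cancellation: 2F3(a, b; a, b, c; z) = 0F1(-; c; z).
assert abs(hyp2f3(1.5, 2.25, 1.5, 2.25, 3.0, -4.0) - hyp0f1(3.0, -4.0)) < 1e-12
```

Because the series has three denominator Pochhammer factors against two in the numerator, the terms decay factorially and the partial sums converge for every $z$, in line with Theorem 3 below.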
For the radius of convergence, using the relations in [39,40,41] together with (11), we have
$$\frac{1}{R} = \limsup_{s\to\infty}\big\|U_s\big\|^{\frac1s} = \limsup_{s\to\infty}\left\|\frac{(A_1)_s\,(A_2)_s\,[(Q_1)_s]^{-1}[(Q_2)_s]^{-1}[(Q_3)_s]^{-1}}{s!}\right\|^{\frac1s}.$$
Rewriting each matrix Pochhammer symbol by means of Theorem 2, $(B)_s \approx (s-1)!\,s^{B}\,\Gamma^{-1}(B)$ for large $s$, we obtain
$$\frac{1}{R} = \limsup_{s\to\infty}\left\|\Gamma^{-1}(A_1)\Gamma^{-1}(A_2)\Gamma(Q_1)\Gamma(Q_2)\Gamma(Q_3)\,s^{A_1}s^{A_2}s^{-Q_1}s^{-Q_2}s^{-Q_3}\,\frac{1}{(s-1)!\,s!}\right\|^{\frac1s} \leq \limsup_{s\to\infty}\left(\frac{\|s^{A_1}\|\,\|s^{A_2}\|\,\|s^{-Q_1}\|\,\|s^{-Q_2}\|\,\|s^{-Q_3}\|}{(s-1)!\,s!}\right)^{\frac1s},$$
since the constant Gamma factors do not affect the $\tfrac1s$-th root.
Substituting the bounds (5)–(7) into (18), we write
$$\frac{1}{R} \leq \limsup_{s\to\infty}\left\{\frac{s^{\,M(A_1)+M(A_2)-m(Q_1)-m(Q_2)-m(Q_3)}}{(s-1)!\,s!}\;\prod_{B\in\{A_1,\,A_2,\,Q_1,\,Q_2,\,Q_3\}}\;\sum_{r=0}^{N-1}\frac{\big(\|B\|\,N^{\frac12}\,\ln s\big)^r}{r!}\right\}^{\frac1s}.$$
Using the estimate (valid for $s\geq e$, so that $\ln s\geq 1$)
$$\sum_{r=0}^{N-1}\frac{\big(\|A_1\|\,N^{\frac12}\,\ln s\big)^r}{r!} \leq (\ln s)^{N-1}\sum_{r=0}^{N-1}\frac{\big(\|A_1\|\,N^{\frac12}\big)^r}{r!} \leq (\ln s)^{N-1}\,e^{\|A_1\|\,N^{1/2}},$$
and likewise for the remaining matrices,
we get
$$\frac{1}{R} \leq \limsup_{s\to\infty}\left\{\frac{s^{\,M(A_1)+M(A_2)-m(Q_1)-m(Q_2)-m(Q_3)}\,(\ln s)^{5(N-1)}\,e^{(\|A_1\|+\|A_2\|+\|Q_1\|+\|Q_2\|+\|Q_3\|)N^{1/2}}}{\sqrt{2\pi(s-1)}\left(\frac{s-1}{e}\right)^{s-1}\,\sqrt{2\pi s}\left(\frac{s}{e}\right)^{s}}\right\}^{\frac1s} = 0,$$
by Stirling's formula for the factorials.
Summarizing, the result has been proven.
Theorem 3.
The hypergeometric matrix function 2 F 3 is an entire function of z.
Theorem 4.
The hypergeometric matrix function 2 F 3 is an entire function of order 1 2 and type zero.
Proof. 
If
$$f(z) = \sum_{k=0}^{\infty} a_k\,z^k$$
is an entire function [39,42,43], then the order and type of $f$ are given by
$$\rho(f) = \limsup_{k\to\infty}\frac{k\ln k}{\ln\big(1/|a_k|\big)}$$
and
$$\tau = \frac{1}{e\rho}\limsup_{k\to\infty}\,k\,|a_k|^{\frac{\rho}{k}}.$$
Now, we calculate the order of the function $_2F_3$ as follows:
$$\rho({}_2F_3) = \limsup_{s\to\infty}\frac{s\ln s}{\ln\big(1/\|U_s\|\big)} = \limsup_{s\to\infty}\frac{s\ln s}{\ln\big\|s!\,(Q_1)_s(Q_2)_s(Q_3)_s\,[(A_1)_s]^{-1}[(A_2)_s]^{-1}\big\|} = \limsup_{s\to\infty}\frac{s\ln s}{\ln\big(s!\,\|\Psi\|\big)},$$
where
$$\Psi = \Gamma(A_1)\Gamma(A_2)\,\Gamma(Q_1+sI)\Gamma(Q_2+sI)\Gamma(Q_3+sI)\,\Gamma^{-1}(A_1+sI)\Gamma^{-1}(A_2+sI)\,\Gamma^{-1}(Q_1)\Gamma^{-1}(Q_2)\Gamma^{-1}(Q_3).$$
Applying Stirling's formula to $s!$ and to each Gamma factor of $\Psi$, every term of $\ln\big(s!\,\|\Psi\|\big)/(s\ln s)$ tends either to $0$ or to $1$: the factor $s!$ contributes $1$, each of the three factors $\Gamma(Q_i+sI)$ contributes $+1$, and each of the two factors $\Gamma^{-1}(A_i+sI)$ contributes $-1$, while all remaining terms vanish. Hence the denominator tends to $1+3-2 = 2$, and
$$\rho({}_2F_3) = \frac12.$$
Further, we calculate the type of the function $_2F_3$ as follows:
$$\tau({}_2F_3) = \frac{1}{e\rho}\limsup_{s\to\infty}\,s\,\|U_s\|^{\frac{\rho}{s}} = \frac{1}{e\rho}\limsup_{s\to\infty}\,s\left\|\frac{(A_1)_s\,(A_2)_s\,[(Q_1)_s]^{-1}[(Q_2)_s]^{-1}[(Q_3)_s]^{-1}}{s!}\right\|^{\frac{\rho}{s}} = \frac{1}{e\rho}\limsup_{s\to\infty}\,s\left\|\frac{\Omega}{s!}\right\|^{\frac{\rho}{s}},$$
where
$$\Omega = \Gamma(A_1+sI)\Gamma(A_2+sI)\,\Gamma(Q_1)\Gamma(Q_2)\Gamma(Q_3)\,\Gamma^{-1}(A_1)\Gamma^{-1}(A_2)\,\Gamma^{-1}(Q_1+sI)\Gamma^{-1}(Q_2+sI)\Gamma^{-1}(Q_3+sI).$$
Applying Stirling's approximation $\Gamma(B+sI)\approx\sqrt{2\pi}\;e^{-(B+sI)}\,(B+sI)^{B+sI-\frac12 I}$ to each factor, the factorial decay coming from $[(Q_1)_s(Q_2)_s(Q_3)_s]^{-1}$ and $1/s!$ dominates the factorial growth of $(A_1)_s(A_2)_s$, so that $s\,\|U_s\|^{\rho/s}\to 0$. Hence
$$\tau({}_2F_3) = 0.$$
Next, using the operator $\theta = z\frac{d}{dz}$, which has the useful property $\theta z^k = k z^k$, we obtain
$$\theta(\theta I+Q_1-I)(\theta I+Q_2-I)(\theta I+Q_3-I)\,{}_2F_3 = \sum_{s=1}^{\infty}\frac{s\,z^s}{s!}\,(sI+Q_1-I)(sI+Q_2-I)(sI+Q_3-I)\,(A_1)_s(A_2)_s\,[(Q_1)_s]^{-1}[(Q_2)_s]^{-1}[(Q_3)_s]^{-1} = \sum_{s=1}^{\infty}\frac{z^s}{(s-1)!}\,(A_1)_s(A_2)_s\,[(Q_1)_{s-1}]^{-1}[(Q_2)_{s-1}]^{-1}[(Q_3)_{s-1}]^{-1}.$$
Replacing $s$ by $s+1$, we have
$$\theta(\theta I+Q_1-I)(\theta I+Q_2-I)(\theta I+Q_3-I)\,{}_2F_3 = \sum_{s=0}^{\infty}\frac{z^{s+1}}{s!}\,(A_1)_{s+1}(A_2)_{s+1}\,[(Q_1)_s]^{-1}[(Q_2)_s]^{-1}[(Q_3)_s]^{-1} = z(\theta I+A_1)(\theta I+A_2)\,{}_2F_3.$$
This result is summarized below.
Theorem 5.
The function $_2F_3$ is a solution of the matrix differential equation
$$\big[\theta(\theta I+Q_1-I)(\theta I+Q_2-I)(\theta I+Q_3-I) - z(\theta I+A_1)(\theta I+A_2)\big]\,{}_2F_3 = 0.$$
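In the scalar ($1\times 1$) case, the operator identity of Theorem 5 is equivalent to a two-term recurrence for the Taylor coefficients $c_s$ of the $_2F_3$ series: comparing the coefficient of $z^{s+1}$ on both sides gives $c_{s+1}(s+1)(s+b_1)(s+b_2)(s+b_3) = c_s(s+a_1)(s+a_2)$. The following sketch (not from the paper; parameter values are hypothetical) checks this numerically.

```python
import math

# Scalar check: writing 2F3 = sum c_s z^s, Theorem 5 is equivalent to
#   c_{s+1} (s+1)(s+b1)(s+b2)(s+b3) = c_s (s+a1)(s+a2).

def poch(q, r):
    """Scalar Pochhammer symbol (q)_r = Gamma(q+r)/Gamma(q)."""
    return math.gamma(q + r) / math.gamma(q)

a1, a2, b1, b2, b3 = 0.5, 1.25, 2.0, 3.5, 0.75

def c(s):
    """s-th Taylor coefficient of the scalar 2F3 series."""
    return (poch(a1, s) * poch(a2, s)
            / (poch(b1, s) * poch(b2, s) * poch(b3, s) * math.factorial(s)))

for s in range(10):
    lhs = c(s + 1) * (s + 1) * (s + b1) * (s + b2) * (s + b3)
    rhs = c(s) * (s + a1) * (s + a2)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

The recurrence is exactly the statement that $\theta$ (which multiplies the $z^s$ term by $s$) balances the shift produced by the factor $z$ on the right-hand side.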
Here, we establish various transformation formulae for the hypergeometric matrix function $_2F_3$.
Theorem 6.
Let $A$ and $Q$ be matrices in $\mathbb{C}^{N\times N}$, where $I-A-sI$, $Q$ and $A+Q+(s-1)I$ are positive stable matrices and $Q+sI$ is an invertible matrix for every integer $s\geq 0$, and $AQ = QA$; then
$${}_2F_1\big(-sI,\;I-A-sI;\;Q;\;1\big) = (A+Q-I)_{2s}\,[(Q)_s]^{-1}\,[(A+Q-I)_s]^{-1}.$$
Proof. 
From (15), replacing $A$ by $I-A-sI$, we have
$${}_2F_1\big(-sI,\;I-A-sI;\;Q;\;1\big) = \big(Q+A+(s-1)I\big)_s\,[(Q)_s]^{-1} = \Gamma(Q)\,\Gamma\big(A+Q+(2s-1)I\big)\,\Gamma^{-1}(Q+sI)\,\Gamma^{-1}\big(A+Q+(s-1)I\big) = \Gamma\big(A+Q+(2s-1)I\big)\,\Gamma^{-1}(A+Q-I)\;\Gamma(A+Q-I)\,\Gamma^{-1}\big(A+Q+(s-1)I\big)\;\Gamma(Q)\,\Gamma^{-1}(Q+sI).$$
Indeed, by (4) we can rewrite the factors as
$$\Gamma\big(A+Q+(2s-1)I\big)\,\Gamma^{-1}(A+Q-I) = (A+Q-I)_{2s},\qquad \Gamma(A+Q-I)\,\Gamma^{-1}\big(A+Q+(s-1)I\big) = [(A+Q-I)_s]^{-1},\qquad \Gamma(Q)\,\Gamma^{-1}(Q+sI) = [(Q)_s]^{-1}.$$
From (26) and (27), we obtain (25). □
Theorem 7.
If $A$ and $Q$ are commutative matrices in $\mathbb{C}^{N\times N}$, then
$${}_0F_1(-;A;z)\;{}_0F_1(-;Q;z) = {}_2F_3\big(\tfrac12(A+Q),\;\tfrac12(A+Q-I);\;A,\,Q,\,A+Q-I;\;4z\big),$$
where $I-A-mI$, $Q$ and $A+Q+(m-1)I$ are positive stable matrices for every integer $m\geq 0$, and $A+sI$, $Q+sI$ and $A+Q+(s-1)I$ are invertible matrices for every integer $s\geq 0$.
Proof. 
From (14) and (15), we have
$${}_0F_1(-;A;z)\;{}_0F_1(-;Q;z) = \sum_{m,s=0}^{\infty}[(A)_m]^{-1}[(Q)_s]^{-1}\frac{z^{m+s}}{m!\,s!} = \sum_{m=0}^{\infty}\sum_{s=0}^{m}[(A)_{m-s}]^{-1}[(Q)_s]^{-1}\frac{z^{m}}{s!\,(m-s)!} = \sum_{m=0}^{\infty}\sum_{s=0}^{m}\frac{(I-A-mI)_s\,[(Q)_s]^{-1}\,(-mI)_s}{s!}\,[(A)_m]^{-1}\frac{z^m}{m!} = \sum_{m=0}^{\infty}{}_2F_1\big(-mI,\;I-A-mI;\;Q;\;1\big)\,[(A)_m]^{-1}\frac{z^m}{m!} = \sum_{m=0}^{\infty}(A+Q-I)_{2m}\,[(Q)_m]^{-1}[(A+Q-I)_m]^{-1}[(A)_m]^{-1}\frac{z^m}{m!} = \sum_{m=0}^{\infty}2^{2m}\big(\tfrac12(A+Q-I)\big)_m\big(\tfrac12(A+Q)\big)_m\,[(A)_m]^{-1}[(Q)_m]^{-1}[(A+Q-I)_m]^{-1}\frac{z^m}{m!} = {}_2F_3\big(\tfrac12(A+Q),\;\tfrac12(A+Q-I);\;A,\,Q,\,A+Q-I;\;4z\big).$$
This completes the proof. □
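The product formula of Theorem 7 can be checked numerically in the scalar ($1\times 1$) case, where all matrices commute trivially. The sketch below (not from the paper; parameter values are hypothetical) uses one generic partial-sum evaluator for any $_pF_q$ series with scalar parameters.

```python
# Scalar numerical check of Theorem 7:
#   0F1(-; a; z) 0F1(-; b; z) = 2F3((a+b)/2, (a+b-1)/2; a, b, a+b-1; 4z).

def hyp_series(num, den, z, terms=80):
    """Generic partial sum of a pFq series with scalar parameters."""
    total, term = 0.0, 1.0
    for s in range(terms):
        total += term
        factor = z / (s + 1)
        for a in num:
            factor *= a + s
        for b in den:
            factor /= b + s
        term *= factor
    return total

a, b, z = 1.75, 2.5, 0.3
lhs = hyp_series([], [a], z) * hyp_series([], [b], z)
rhs = hyp_series([(a + b) / 2, (a + b - 1) / 2], [a, b, a + b - 1], 4 * z)
assert abs(lhs - rhs) < 1e-10
```

At first order the identity is visible directly: the coefficient of $z$ on the right is $4\cdot\frac{a+b}{2}\cdot\frac{a+b-1}{2}\big/\big(ab(a+b-1)\big) = \frac1a+\frac1b$, matching the product of the two $_0F_1$ series.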
Theorem 8.
Let $A$ and $Q$ be matrices in $\mathbb{C}^{N\times N}$ such that $-A-sI$, $Q+I$ and $A+Q+(s+1)I$ are positive stable matrices for every integer $s\geq 0$, $A+(s+1)I$, $Q+(s+1)I$ and $A+Q+(s+1)I$ are invertible matrices for every integer $s\geq 0$ and $AQ = QA$, and let $J_A(z)$ and $J_Q(z)$ be two BMFs of the complex variable $z$. Then the product of the two BMFs has the following property:
$$J_A(z)\,J_Q(z) = \left(\frac{z}{2}\right)^{A+Q}\Gamma^{-1}(A+I)\,\Gamma^{-1}(Q+I)\;{}_2F_3\big(\tfrac12(A+Q)+I,\;\tfrac12(A+Q+I);\;A+I,\,Q+I,\,A+Q+I;\;-z^2\big).$$
Proof. 
Similar to (28), we can easily prove the formula (29). □
Corollary 2.
Let $A$ be a matrix in $\mathbb{C}^{N\times N}$ such that $-A-sI$, $A+I$ and $2A+(s+1)I$ are positive stable matrices for every integer $s\geq 0$, and $A+(s+1)I$ and $2A+(s+1)I$ are invertible matrices for every integer $s\geq 0$; then
$$J_A^2(z) = \left(\frac{z}{2}\right)^{2A}\big(\Gamma^{-1}(A+I)\big)^2\;{}_1F_2\big(A+\tfrac12 I;\;A+I,\,2A+I;\;-z^2\big).$$
Proof. 
Taking A = Q in (29), we obtain (30). □

3. On Lommel’s Matrix Polynomials

Here we define Lommel matrix polynomials (LMPs) and derive matrix recurrence relations, differential equations and integral representations for these matrix polynomials.
Definition 8.
Let us consider the Lommel matrix polynomials (LMPs)
$$R_{A,Q}(z) = \Gamma(A+Q)\,\Gamma^{-1}(Q)\left(\frac{2}{z}\right)^{A}{}_2F_3\big(\tfrac12(I-A),\,-\tfrac12 A;\;Q,\,-A,\,I-A-Q;\;-z^2\big),\quad z\neq 0,$$
where $A$ and $Q$ are matrices in $\mathbb{C}^{N\times N}$ satisfying the condition
$Q$, $I+A-sI$ and $I-A-Q+sI$ are positive stable matrices for each integer $s\geq 0$; $Q+sI$, $sI-A$ and $I-A-Q+sI$ are invertible matrices for each integer $s\geq 0$; and $AQ = QA$.
Throughout the current section consider that the matrices A and Q are commutative matrices in × and satisfy condition (32).
Theorem 9.
The polynomial $z^{A}R_{A,Q}(z)$ is an entire function of order $\tfrac12$ and type zero.
Explicitly, the first few polynomials follow in succession from formula (31):
$$R_{-2I,Q}(z) = -I,\quad R_{-I,Q}(z) = 0,\quad R_{0,Q}(z) = I,\quad R_{I,Q}(z) = \frac{2}{z}\,Q,\quad R_{2I,Q}(z) = \frac{4}{z^2}\,Q(Q+I) - I,\quad z\neq 0.$$
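In the scalar ($1\times 1$) case $A = m$, $Q = \nu$, these explicit polynomials can be regenerated from the classical three-term recurrence $R_{m+1,\nu}(z) = \frac{2(m+\nu)}{z}R_{m,\nu}(z) - R_{m-1,\nu}(z)$, starting from $R_{-1,\nu} = 0$ and $R_{0,\nu} = 1$. The following sketch (a numerical illustration, not from the paper) checks this with exact rational arithmetic.

```python
from fractions import Fraction

# Classical (scalar) Lommel polynomial R_{m,nu}(z) by forward recurrence:
#   R_{m+1,nu}(z) = (2(m+nu)/z) R_{m,nu}(z) - R_{m-1,nu}(z),
# with R_{-1,nu} = 0 and R_{0,nu} = 1.

def lommel_R(m, nu, z):
    prev, curr = 0, 1            # R_{-1,nu},  R_{0,nu}
    for k in range(m):
        prev, curr = curr, Fraction(2) * (nu + k) / z * curr - prev
    return curr

nu, z = Fraction(3), Fraction(5)
assert lommel_R(1, nu, z) == 2 * nu / z                       # R_1 = 2 nu / z
assert lommel_R(2, nu, z) == 4 * nu * (nu + 1) / z**2 - 1     # R_2 = 4 nu(nu+1)/z^2 - 1
```

Exact `Fraction` arithmetic makes the comparison an identity rather than a floating-point approximation.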
Corollary 3.
If $I-Q-A$, $A$, $-A-2I$ and $2I-Q$ are commutative matrices in $\mathbb{C}^{N\times N}$ satisfying (32), we have the formulas
$$R_{A,Q}(-z) = e^{A\ln(-1)}\,R_{A,Q}(z),$$
$$R_{A,Q}(z) = e^{A\ln(-1)}\,R_{A,\,I-Q-A}(z),$$
and
$$R_{A,Q}(z) = e^{(A-I)\ln(-1)}\,R_{-A-2I,\,2I-Q}(z).$$
Proof. 
Using (31), we get (33). In the same manner, we can easily prove formulas (34) and (35). □
Next, let us give the connection between LMPs and BMFs.
Corollary 4.
Let $rA$ and $Q+I$ be matrices in $\mathbb{C}^{N\times N}$ satisfying (32), and let $\Gamma(rA+Q+I)$ be an invertible matrix in $\mathbb{C}^{N\times N}$. Then the LMPs and BMFs are connected by
$$\lim_{r\to\infty}\left(\frac{z}{2}\right)^{rA+Q} R_{rA,\,Q+I}(z)\,\Gamma^{-1}(rA+Q+I) = J_Q(z).$$
Proof. 
From (31), we have
$$\left(\frac{z}{2}\right)^{rA+Q} R_{rA,\,Q+I}(z)\,\Gamma^{-1}(rA+Q+I) = \sum_{k\geq 0}\frac{(-1)^k}{k!}\left(\frac{z}{2}\right)^{Q+2kI}\Gamma^{-1}\big(Q+(k+1)I\big)\;\Gamma(rA-kI+I)\,\Gamma(rA+Q-kI+I)\,\Gamma^{-1}(rA-2kI+I)\,\Gamma^{-1}(rA+Q+I).$$
Now, we can write
$$\theta = \Gamma(rA-kI+I)\,\Gamma(rA+Q-kI+I)\,\Gamma^{-1}(rA-2kI+I)\,\Gamma^{-1}(rA+Q+I),$$
so that
$$\theta = (rA-kI)(rA-kI-I)\cdots(rA-2kI+I)\,\big[(rA+Q)(rA+Q-I)\cdots(rA+Q-kI+I)\big]^{-1}.$$
Hence,
$$\|\theta\| < 1,$$
and
$$\lim_{r\to\infty}\theta = I.$$
Since
$$\sum_{k\geq 0}\frac{(-1)^k}{k!}\left(\frac{z}{2}\right)^{Q+2kI}\Gamma^{-1}\big(Q+(k+1)I\big)$$
is absolutely convergent, it follows that
$$\lim_{r\to\infty}\left(\frac{z}{2}\right)^{rA+Q} R_{rA,\,Q+I}(z)\,\Gamma^{-1}(rA+Q+I) = \sum_{k=0}^{\infty}\frac{(-1)^k}{k!}\left(\frac{z}{2}\right)^{Q+2kI}\Gamma^{-1}\big(Q+(k+1)I\big) = J_Q(z).\;\square$$
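The classical ($1\times 1$) instance of this limit is Hurwitz's formula $\lim_{m\to\infty}\left(\frac z2\right)^{m+\nu} R_{m,\nu+1}(z)\big/\Gamma(m+\nu+1) = J_\nu(z)$. The sketch below (a numerical illustration under that scalar specialization, not from the paper) checks it for $\nu = 0$; note the convergence is slow, roughly $O(1/m)$, so only a loose tolerance is asserted.

```python
import math

# Scalar Hurwitz limit:  (z/2)^(m+nu) R_{m,nu+1}(z) / Gamma(m+nu+1) -> J_nu(z).

def lommel_R(m, nu, z):
    """Classical Lommel polynomial R_{m,nu}(z) by forward recurrence."""
    prev, curr = 0.0, 1.0        # R_{-1,nu},  R_{0,nu}
    for k in range(m):
        prev, curr = curr, 2.0 * (nu + k) / z * curr - prev
    return curr

def bessel_J0(z, terms=30):
    """J_0(z) from its power series."""
    return sum((-1) ** k * (z / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

nu, z, m = 0, 1.0, 100
approx = (z / 2) ** (m + nu) * lommel_R(m, nu + 1, z) / math.gamma(m + nu + 1)
assert abs(approx - bessel_J0(z)) < 0.01   # O(1/m) convergence
```

Forward recurrence is numerically stable here because $R_{m,\nu}$ is the dominant solution of the three-term recurrence.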
Theorem 10.
The LMPs are a solution of the Lommel matrix differential equation
$$\big[(\theta I+A)(\theta I+2Q+A-2I)(\theta I-2Q-A)(\theta I-A-2I) + 4z^2\,\theta(\theta+1)I\big]\,R_{A,Q}(z) = 0.$$
Proof. 
Using (24) and (31), the proof is done. □
Corollary 5.
The LMPs and the Laguerre matrix polynomials $L_n^{(A,\lambda)}(z)$ satisfy the following connection:
$$L_n^{(A,\nu)}(z) = 2^{n}\,\Gamma(A+I)\,R_{nI,\,A+I}\!\left(\frac{1}{\nu z}\right)\Gamma^{-1}(nI+A+I),\quad \nu z\neq 0.$$
Proof. 
From [12], we recall the definition of the Laguerre matrix polynomials $L_m^{(E,\nu)}(z)$:
$$L_m^{(E,\nu)}(z) = \sum_{r=0}^{m}\frac{(-1)^r\,(E+I)_m\,[(E+I)_r]^{-1}\,(\nu z)^r}{r!\,(m-r)!},$$
where $E$ is a matrix in $\mathbb{C}^{N\times N}$ satisfying $-r\notin\sigma(E)$ for every integer $r>0$ and $\nu$ is a complex number with $\mathrm{Re}(\nu)>0$. From (31) and (39), we obtain (38). □
Theorem 11.
If $A+I$, $A-I$, $Q+I$ and $Q-I$ are matrices in $\mathbb{C}^{N\times N}$ satisfying the condition (32), the LMPs $R_{A,Q}(z)$ satisfy the following pure matrix recurrence relations:
$$R_{A-I,Q+I}(z) + R_{A+I,Q-I}(z) = \frac{2}{z}\,(Q-I)\,R_{A,Q}(z),\quad z\neq 0,$$
$$R_{A-I,Q}(z) + R_{A+I,Q}(z) = \frac{2}{z}\,(A+Q)\,R_{A,Q}(z),\quad z\neq 0,$$
and
$$R_{A-I,Q}(z) + R_{A+I,Q}(z) - R_{A-I,Q+I}(z) - R_{A+I,Q-I}(z) = \frac{2}{z}\,(A+I)\,R_{A,Q}(z),\quad z\neq 0.$$
Proof. 
From (31), writing out the series for $R_{A-I,Q+I}(z)$ and $R_{A+I,Q-I}(z)$ and simplifying the matrix Pochhammer symbols term by term, we get
$$R_{A-I,Q+I}(z) + R_{A+I,Q-I}(z) = \frac{2}{z}\,(Q-I)\;\Gamma(A+Q)\,\Gamma^{-1}(Q)\left(\frac{2}{z}\right)^{A}\sum_{k=0}^{\infty}\frac{(-1)^k z^{2k}}{k!}\big(\tfrac12(I-A)\big)_k\big(-\tfrac12 A\big)_k\,[(Q)_k]^{-1}[(-A)_k]^{-1}[(I-A-Q)_k]^{-1} = \frac{2}{z}\,(Q-I)\,R_{A,Q}(z).$$
For the proof of (41), in the same way,
$$R_{A-I,Q}(z) + R_{A+I,Q}(z) = \frac{2}{z}\,(A+Q)\;\Gamma(A+Q)\,\Gamma^{-1}(Q)\left(\frac{2}{z}\right)^{A}\sum_{k=0}^{\infty}\frac{(-1)^k z^{2k}}{k!}\big(\tfrac12(I-A)\big)_k\big(-\tfrac12 A\big)_k\,[(Q)_k]^{-1}[(-A)_k]^{-1}[(I-A-Q)_k]^{-1} = \frac{2}{z}\,(A+Q)\,R_{A,Q}(z).$$
By combining (40) and (41), we obtain (42). □
Theorem 12.
If $A+I$, $A-I$, $Q+I$ and $Q-I$ are matrices in $\mathbb{C}^{N\times N}$ satisfying the condition (32), we obtain the following matrix differential relations:
$$R'_{A,Q}(z) = \frac{1}{z}\,(A+2I)\,R_{A,Q}(z) + R_{A+I,Q-I}(z) - R_{A+I,Q}(z),\quad z\neq 0,$$
$$R'_{A,Q}(z) = -\frac{1}{z}\,A\,R_{A,Q}(z) + R_{A-I,Q}(z) - R_{A-I,Q+I}(z),\quad z\neq 0,$$
$$R'_{A,Q}(z) = \frac{1}{z}\,(A+2Q)\,R_{A,Q}(z) - R_{A-I,Q+I}(z) - R_{A+I,Q}(z),\quad z\neq 0$$
and
$$R'_{A,Q}(z) = -\frac{1}{z}\,(A+2Q-2I)\,R_{A,Q}(z) + R_{A+I,Q-I}(z) + R_{A-I,Q}(z),\quad z\neq 0.$$
Proof. 
Taking the derivative of both sides of (31) with respect to $z$, shifting the summation index in the resulting sum, and recognizing the series that appear as those of $R_{A+I,Q-I}(z)$ and $R_{A+I,Q}(z)$, we arrive at
$$R'_{A,Q}(z) = \frac{1}{z}\,(A+2I)\,R_{A,Q}(z) + R_{A+I,Q-I}(z) - R_{A+I,Q}(z).$$
By using (40)–(42), we obtain (44)–(46). Thus the proof is completed. □
Now, we obtain a class of new integral representations involving Lommel matrix polynomials.
Theorem 13.
The LMPs R A , Q ( z ) satisfy the following integral representations:
$$R_{A,Q}(z) = \Gamma(A+Q)\left(\frac{2}{z}\right)^{A}\Gamma^{-1}\!\big(\tfrac12(I-A)\big)\,\Gamma^{-1}\!\big(Q-\tfrac12(I-A)\big)\int_0^1 t^{-\frac12(I+A)}(1-t)^{Q+\frac12 A-\frac32 I}\;{}_1F_2\big(-\tfrac12 A;\;-A,\,I-A-Q;\;-z^2 t\big)\,dt$$
$$= \Gamma(A+Q)\,\Gamma(-A)\left(\frac{2}{z}\right)^{A}\Gamma^{-1}(Q)\,\Gamma^{-1}\!\big(\tfrac12(I-A)\big)\,\Gamma^{-1}\!\big(-\tfrac12(I+A)\big)\int_0^1 t^{-\frac12(I+A)}(1-t)^{-\frac12(A+3I)}\;{}_1F_2\big(-\tfrac12 A;\;Q,\,I-A-Q;\;-z^2 t\big)\,dt$$
$$= \Gamma(A+Q)\,\Gamma(I-A-Q)\left(\frac{2}{z}\right)^{A}\Gamma^{-1}(Q)\,\Gamma^{-1}\!\big(\tfrac12(I-A)\big)\,\Gamma^{-1}\!\big(\tfrac12(I-A)-Q\big)\int_0^1 t^{-\frac12(I+A)}(1-t)^{-(\frac12(I+A)+Q)}\;{}_1F_2\big(-\tfrac12 A;\;Q,\,-A;\;-z^2 t\big)\,dt,$$
where $\Gamma\big(\tfrac12(I-A)\big)$, $\Gamma\big(Q-\tfrac12(I-A)\big)$, $\Gamma(Q)$, $\Gamma\big(-\tfrac12(I+A)\big)$ and $\Gamma\big(\tfrac12(I-A)-Q\big)$ are invertible matrices, and
$$R_{A,Q}(z) = \Gamma(A+Q)\left(\frac{2}{z}\right)^{A}\Gamma^{-1}\!\big(-\tfrac12 A\big)\,\Gamma^{-1}\!\big(Q+\tfrac12 A\big)\int_0^1 t^{-\frac12 A-I}(1-t)^{Q+\frac12 A-I}\;{}_1F_2\big(\tfrac12(I-A);\;-A,\,I-A-Q;\;-z^2 t\big)\,dt$$
$$= \Gamma(A+Q)\,\Gamma(-A)\left(\frac{2}{z}\right)^{A}\Gamma^{-1}(Q)\,\Gamma^{-1}\!\big(-\tfrac12 A\big)\,\Gamma^{-1}\!\big(-\tfrac12 A\big)\int_0^1 t^{-\frac12 A-I}(1-t)^{-\frac12 A-I}\;{}_1F_2\big(\tfrac12(I-A);\;Q,\,I-A-Q;\;-z^2 t\big)\,dt$$
$$= \Gamma(A+Q)\,\Gamma(I-A-Q)\left(\frac{2}{z}\right)^{A}\Gamma^{-1}(Q)\,\Gamma^{-1}\!\big(-\tfrac12 A\big)\,\Gamma^{-1}\!\big(I-Q-\tfrac12 A\big)\int_0^1 t^{-\frac12 A-I}(1-t)^{-Q-\frac12 A}\;{}_1F_2\big(\tfrac12(I-A);\;Q,\,-A;\;-z^2 t\big)\,dt,$$
where $\Gamma\big(-\tfrac12 A\big)$, $\Gamma\big(Q+\tfrac12 A\big)$, $\Gamma(Q)$ and $\Gamma\big(I-Q-\tfrac12 A\big)$ are invertible matrices.
Proof. 
By using (12), (13) and (31), we obtain (47) and (48). □

4. Modified Lommel Matrix Polynomials h A , Q ( z )

Throughout this section, suppose that $A$ and $Q$ are commutative matrices in $\mathbb{C}^{N\times N}$ satisfying (32). We define the modified Lommel matrix polynomials (MLMPs) and discuss various properties of these polynomials.
Definition 9.
Let $A$ and $Q$ be commutative matrices in $\mathbb{C}^{N\times N}$ satisfying the condition (32); then we define the modified Lommel matrix polynomials $h_{A,Q}(z)$ by
$$h_{A,Q}(z) = R_{A,Q}\!\left(\frac{1}{z}\right) = \Gamma(A+Q)\,\Gamma^{-1}(Q)\,(2z)^{A}\;{}_2F_3\!\left(-\tfrac12 A,\,\tfrac12(I-A);\;Q,\,-A,\,I-Q-A;\;-\frac{1}{z^2}\right),\quad z\neq 0.$$
Theorem 14.
For the MLMPs $h_{A,Q}(z)$, the following pure matrix recurrence relation holds:
$$h_{A,Q}(z) = 2z\,(A+Q-I)\,h_{A-I,Q}(z) - h_{A-2I,Q}(z),$$
where $A-I$, $A-2I$ and $Q-I$ are commutative matrices in $\mathbb{C}^{N\times N}$ satisfying (32).
Proof. 
The proof of the theorem is very similar to that of Theorem 11. □
With the help of the explicit representation (49), we obtain for the MLMPs $h_{A,Q}(z)$
$$h_{-I,Q}(z) = 0,\quad h_{0,Q}(z) = I,\quad h_{I,Q}(z) = 2zQ,\quad h_{2I,Q}(z) = Q(Q+I)(2z)^2 - I,\quad h_{3I,Q}(z) = Q(Q+I)(Q+2I)(2z)^3 - 2(Q+I)(2z).$$
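As a scalar ($1\times 1$) check (not from the paper), the recurrence of Theorem 14, $h_{m,\nu}(z) = 2z(m+\nu-1)h_{m-1,\nu}(z) - h_{m-2,\nu}(z)$, regenerates the explicit list above exactly, again using rational arithmetic.

```python
from fractions import Fraction

# Scalar modified Lommel polynomial h_{m,nu}(z) by the recurrence of
# Theorem 14, starting from h_{-1,nu} = 0 and h_{0,nu} = 1.

def modified_lommel_h(m, nu, z):
    prev, curr = 0, 1            # h_{-1,nu},  h_{0,nu}
    for k in range(1, m + 1):
        prev, curr = curr, 2 * z * (k + nu - 1) * curr - prev
    return curr

nu, z = Fraction(2), Fraction(3)
assert modified_lommel_h(1, nu, z) == 2 * z * nu
assert modified_lommel_h(2, nu, z) == nu * (nu + 1) * (2 * z) ** 2 - 1
assert modified_lommel_h(3, nu, z) == (nu * (nu + 1) * (nu + 2) * (2 * z) ** 3
                                       - 2 * (nu + 1) * (2 * z))
```

Since $h_{A,Q}(z) = R_{A,Q}(1/z)$, this is the recurrence (41) rewritten in the variable $1/z$.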
Corollary 6.
The MLMPs $h_{A,Q}(z)$ and the Bessel matrix functions satisfy the following connection:
$$\lim_{r\to\infty}(2z)^{I-A-rQ}\,h_{A,\,rQ}(z)\,\Gamma^{-1}(A+rQ) = J_{A-I}\!\left(\frac{1}{z}\right),\quad z\neq 0,$$
where $\Gamma(A+rQ)$ is an invertible matrix in $\mathbb{C}^{N\times N}$.
Proof. 
The proof of the corollary is very similar to Corollary 4. □
Corollary 7.
For modified Lommel matrix polynomials, we have
$$h_{A,Q}(-z) = e^{A\ln(-1)}\,h_{A,Q}(z).$$
Proof. 
Using (49), we get the proof of the corollary. □
Theorem 15.
The following modified Lommel matrix differential equation holds for the MLMPs $h_{A,Q}(z)$:
$$\big[z^2(\theta I+A)(\theta I+2Q+A-2I)(\theta I-2Q-A)(\theta I-A-2I) + 4\,\theta(\theta+1)I\big]\,h_{A,Q}(z) = 0.$$
Proof. 
Putting $A_1 = -\tfrac12 A$, $A_2 = \tfrac12(I-A)$, $Q_1 = Q$, $Q_2 = -A$, $Q_3 = I-Q-A$ and replacing $z$ by $-\tfrac{1}{z^2}$ from (49) into (24), we get (54). □

5. Modified Lommel Matrix Polynomials f A , Q ( z )

Throughout this section, suppose that $A$ and $Q+I$ are commutative matrices in $\mathbb{C}^{N\times N}$ satisfying (32). We define the modified Lommel matrix polynomials $f_{A,Q}(z)$ and discuss several results proved for these polynomials.
Definition 10.
Let $A$ and $Q+I$ be commutative matrices in $\mathbb{C}^{N\times N}$ satisfying (32). Then, we define the modified Lommel matrix polynomials $f_{A,Q}(z)$ by the equation
$$f_{A,Q}(z) = z^{\frac12 A}\,R_{A,\,Q+I}\big(2\sqrt{z}\,\big) = \sum_{k=0}^{\infty}\frac{(-1)^k}{k!}\,\Gamma(A-kI+I)\,\Gamma(Q+A-kI+I)\,\Gamma^{-1}(A-2kI+I)\,\Gamma^{-1}\big(Q+(k+1)I\big)\,z^{k} = \Gamma(A+Q+I)\,\Gamma^{-1}(Q+I)\;{}_2F_3\big(\tfrac12(I-A),\,-\tfrac12 A;\;Q+I,\,-A,\,-Q-A;\;-4z\big),$$
so that the Lommel matrix polynomials can be recovered as
$$R_{A,\,Q+I}(z) = \left(\tfrac12 z\right)^{-A} f_{A,Q}\!\left(\tfrac14 z^2\right).$$
Theorem 16.
The function $z^{\frac12 A}f_{A,Q}(z)$ is an entire function of order $\tfrac12$ and type zero.
Theorem 17.
For the MLMPs $f_{A,Q}(z)$, the following matrix recurrence relations hold:
$$f_{A+I,Q}(z) = (A+Q+I)\,f_{A,Q}(z) - z\,f_{A-I,Q}(z),$$
$$f_{A+I,Q-I}(z) = Q\,f_{A,Q}(z) - z\,f_{A-I,Q+I}(z),$$
$$z^{I-Q}\,\frac{d}{dz}\big[z^{Q}\,f_{A,Q}(z)\big] = z\,f_{A-I,Q}(z) + f_{A+I,Q-I}(z),\quad z\neq 0$$
and
$$z^{A+2I}\,\frac{d}{dz}\big[z^{-A-I}\,f_{A,Q}(z)\big] = f_{A+I,Q-I}(z) - f_{A+I,Q}(z),$$
where $A-I$, $A+I$, $Q+I$ and $Q+2I$ are matrices in $\mathbb{C}^{N\times N}$ satisfying (32).
Proof. 
With the help of (55) and using a similar technique, we can easily obtain (57)–(60). □
Theorem 18.
For the MLMPs $f_{A,Q}(z)$, the following pure matrix recurrence relation holds:
$$(A+Q)\,f_{A+2I,Q}(z) = (A+Q+I)\big[(A+Q)(A+Q+2I) - 2zI\big]f_{A,Q}(z) - (A+Q+2I)\,z^2\,f_{A-2I,Q}(z),$$
where $A-2I$ and $A+2I$ are matrices in $\mathbb{C}^{N\times N}$ satisfying (32).
Theorem 19.
For the matrix polynomials $f_{A,Q}(z)$, we have the following matrix differential equation:
$$\big[(\theta I+\tfrac12 A)(\theta I+\tfrac12 A+Q)(\theta I-\tfrac12 A-I)(\theta I-\tfrac12 A-Q-I) + 4z\,\theta(\theta I+\tfrac12 I)\big]\,f_{A,Q}(z) = 0.$$
Proof. 
Using (19) and (55), the proof is done. □

6. Concluding Remarks

To conclude our present study, we have investigated the radius of convergence, order, type, matrix differential equation and transformation formulas of the hypergeometric matrix function $_2F_3$. Furthermore, we have derived matrix recurrence relations, differential equations and integral representations for the Lommel matrix polynomials (LMPs) $R_{A,Q}(z)$. Moreover, we have established and proved several properties of the modified Lommel matrix polynomials (MLMPs) $h_{A,Q}(z)$ and $f_{A,Q}(z)$. The results of this work are new and noteworthy, and it would be interesting to develop this study further in the future.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this paper are available upon request.

Acknowledgments

The researcher would like to thank the Deanship of Scientific Research, Qassim University, for funding the publication of this project. The author also wishes to thank the referees for their suggestions, valuable remarks and comments, which improved the presentation of the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Von Lommel, E.C.J. Zur Theorie der Bessel'schen Functionen II (Jan. 1871). Math. Ann. 1871, 4, 103–116. [Google Scholar] [CrossRef]
  2. Von Lommel, E.C.J. Ueber eine mit den Bessel'schen Functionen verwandte Function (Aug. 1875). Math. Ann. 1876, 9, 425–444. [Google Scholar] [CrossRef]
  3. Von Lommel, E.C.J. Zur Theorie der Bessel'schen Funktionen IV (Oct. 1879). Math. Ann. 1880, 16, 183–208. [Google Scholar]
  4. Watson, G.N. A Treatise on the Theory of Bessel Functions, 2nd ed.; Cambridge University Press: London, UK, 1948; p. 294. [Google Scholar]
  5. Altin, A.; Çekim, B. Some miscellaneous properties for Gegenbauer matrix polynomials. Util. Math. 2013, 92, 377–387. [Google Scholar]
  6. Çekim, B.; Altin, A. Matrix analogues of some properties for Bessel matrix functions. J. Math. Sci. Univ. Tokyo 2015, 22, 519–530. [Google Scholar]
  7. Çekim, B.; Altin, A.; Aktas, R. Some new results for Jacobi matrix polynomials. Filomat 2013, 27, 713–719. [Google Scholar] [CrossRef]
  8. Çekim, B.; Erkuş-Duman, E. Integral representations for Bessel matrix functions. Gazi Univ. J. Sci. 2014, 27, 663–667. [Google Scholar]
  9. Chak, A.M. A generalization of Lommel polynomials. Duke Math. J. 1958, 25, 73–82. [Google Scholar] [CrossRef]
  10. Dickinson, D. On Lommel and Bessel polynomials. Proc. Am. Math. Soc. 1954, 5, 946–956. [Google Scholar] [CrossRef]
  11. Higueras, I.; Garcia-Celayeta, B. Logarithmic norms for matrix pencils. SIAM J. Matrix Anal. Appl. 1999, 20, 646–666. [Google Scholar] [CrossRef]
  12. Jódar, L.; Sastre, J. On Laguerre matrix polynomials. Util. Math. 1998, 53, 37–48. [Google Scholar]
  13. Kargin, L.; Veli Kurt, V. Chebyshev-type matrix polynomials and integral transforms. Hacet. J. Math. Stat. 2015, 44, 341–350. [Google Scholar] [CrossRef] [Green Version]
  14. Magnus, W.; Oberhettinger, F.; Soni, R.P. Formulas and Theorems for the Special Functions of Mathematical Physics, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 1966. [Google Scholar]
  15. Rao, C.R.; Mitra, S.K. Generalized Inverses of Matrices and its Applications; Wiley: New York, NY, USA, 1971. [Google Scholar]
  16. Sastre, J.; Jódar, L. Asymptotics of the modified Bessel and incomplete Gamma matrix functions. Appl. Math. Lett. 2003, 16, 815–820. [Google Scholar] [CrossRef] [Green Version]
  17. Shehata, A. Some relations on Humbert matrix polynomials. Math. Bohem. 2016, 141, 407–429. [Google Scholar] [CrossRef] [Green Version]
  18. Shehata, A. Some relations on the generalized Bessel matrix polynomials. Asian J. Math. Comput. Res. 2017, 17, 1–25. [Google Scholar]
  19. Shehata, A. A note on Two-variable Lommel matrix functions. Asian-Eur. J. Math. (AEJM) 2018, 11, 1850041. [Google Scholar] [CrossRef]
  20. Shehata, A. Some relations on generalized Rice’s matrix polynomials. Int. J. Appl. Appl. Math. 2017, 12, 367–391. [Google Scholar]
  21. Shehata, A. Some properties associated with the Bessel matrix functions. Konuralp J. Math. (KJM) 2017, 5, 24–35. [Google Scholar]
  22. Tasdelen, F.; Aktas, R.; Çekim, B. On a multivariable extension of Jacobi matrix polynomials. Comput. Math. Appl. 2011, 61, 2412–2423. [Google Scholar] [CrossRef] [Green Version]
  23. Mathai, A.M. Some results on functions of matrix argument. Math. Nachr. 1978, 84, 171–177. [Google Scholar] [CrossRef]
  24. Mathai, A.M. Special Functions of Matrix Arguments-III. Proc. Nat. Acad. Sci. India Sect. A. 1995, 65, 367–393. [Google Scholar]
  25. Mathai, A.M.; Pederzoli, G. Some transformations for functions of matrix arguments. Indian J. Pure Appl. Math. 1996, 27, 277–284. [Google Scholar]
  26. Singh, V.; Khan, M.A.; Khan, A.H.; Nisar, K.S. A note on modified Hermite matrix polynomials. J. Math. Comput. Sci. 2021, 22, 333–346. [Google Scholar] [CrossRef]
  27. Hasanov, A.; Younis, J.; Aydi, H. On decomposition formulas related to the Gaussian hypergeometric functions in three variables. J. Funct. Spaces 2021, 2021, 5538583. [Google Scholar] [CrossRef]
  28. Younis, J.A.; Aydi, H.; Verma, A. Some formulas for new quadruple hypergeometric functions. J. Math. 2021, 2021, 5596299. [Google Scholar] [CrossRef]
  29. Shehata, A. Lommel matrix functions. Iran. J. Math. Sci. Inf. 2020, 15, 61–79. [Google Scholar]
  30. Dunford, N.; Schwartz, J.T. Linear Operators, Part I, General Theory; Interscience Publishers, Inc.: New York, NY, USA, 1958. [Google Scholar]
  31. Jódar, L.; Cortés, J.C. Some properties of Gamma and Beta matrix functions. Appl. Math. Lett. 1998, 11, 89–93. [Google Scholar] [CrossRef] [Green Version]
  32. Jódar, L.; Cortés, J.C. On the hypergeometric matrix function. J. Comput. Appl. Math. 1998, 99, 205–217. [Google Scholar] [CrossRef] [Green Version]
  33. Jódar, L.; Cortés, J.C. Closed form general solution of the hypergeometric matrix differential equation. Math. Comput. Model. 2000, 32, 1017–1028. [Google Scholar] [CrossRef]
  34. Jódar, L.; Company, R.; Navarro, E. Solving explicitly the Bessel matrix differential equation, without increasing problem dimension. Congr. Numer. 1993, 92, 261–276. [Google Scholar]
  35. Jódar, L.; Company, R.; Navarro, E. Bessel matrix functions: Explicit solution of coupled Bessel Type equations. Util. Math. 1994, 46, 129–141. [Google Scholar]
  36. Defez, E.; Jódar, L. Some applications of the Hermite matrix polynomials series expansions. J. Comput. Appl. Math. 1998, 99, 105–117. [Google Scholar] [CrossRef] [Green Version]
  37. Batahan, R.S. Generalized form of Hermite matrix polynomials via the hypergeometric matrix function. Adv. Linear Algebra Matrix Theory 2014, 4, 134–141. [Google Scholar] [CrossRef] [Green Version]
  38. Defez, E.; Jódar, L. Chebyshev matrix polynomials and second order matrix differential equations. Util. Math. 2002, 61, 107–123. [Google Scholar]
  39. Copson, E.T. An Introduction to the Theory of Functions of a Complex Variable; Oxford University Press: London, UK, 1970. [Google Scholar]
  40. Sayyed, K.A.M. Basic Sets of Polynomials of two Complex Variables and Convergence Properties. Ph.D. Thesis, Assiut University, Asyut, Egypt, 1975. [Google Scholar]
  41. Sayyed, K.A.M.; Metwally, M.S.; Mohammed, M.T. Certain hypergeometric matrix function. Sci. Math. Jpn. 2009, 69, 315–321. [Google Scholar]
  42. Boas, R.P. Entire Functions; Academic Press Inc.: New York, NY, USA, 1954. [Google Scholar]
  43. Levin, B.Y.; Lyubarskii, Y.; Sodin, M.; Tkachenko, V. Lectures on Entire Functions; American Mathematical Society: Providence, RI, USA, 1996. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
