On Lommel Matrix Polynomials

The main aim of this paper is to introduce a new class of Lommel matrix polynomials with the help of the hypergeometric matrix function within complex analysis. We show that these polynomials define an entire function and derive their order, type, matrix recurrence relations, differential equation and integral representations, and we discuss various special cases. Finally, we establish the entire-function character, order, type, explicit representation and several properties of the modified Lommel matrix polynomials. Several illustrative examples of our general results are also constructed.


Introduction
Eugen von Lommel introduced the Lommel polynomial R_{m,ν}(z), a polynomial of degree m in 1/z, for m = 0, 1, 2, . . . and arbitrary ν [1-3], and Watson treated these polynomials within the theory of Bessel functions [4]. The study of special matrix polynomials and orthogonal matrix polynomials is important due to their applications in certain areas of statistics, physics, engineering, Lie group theory, group representation theory and differential equations. Recently, significant results from the classical theory of orthogonal polynomials and special functions have been extended to orthogonal matrix polynomials and special matrix functions, and applications have continued to appear in the literature (see, for example, [5-22]). In [23-25], Mathai et al. studied special functions of matrix arguments. In [26], Nisar et al. introduced the modified Hermite matrix polynomials. In [27,28], Aydi et al. established some formulas for quadruple hypergeometric functions. In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose, and a skew-symmetric (antimetric or antisymmetric) matrix is a square matrix whose transpose equals its negative. Symmetric matrices appear naturally in a variety of important applications, such as statistical analysis, control theory and optimization. Classical orthogonal polynomials are solutions of differential equations, and Lommel matrix polynomials provide an illustrative example of symmetric polynomials; the symmetric type of Lommel matrix polynomials is in general of physical importance.
The motivation for this work is to extend Shehata's recent paper on Lommel matrix functions [29] and to prove new properties of Lommel matrix polynomials (LMPs). The outline of this paper is as follows: Section 2 studies a generalization of the hypergeometric matrix function and proves new interesting properties. Section 3 provides the definition of Lommel matrix polynomials (LMPs) and gives matrix recurrence relations for them; we also give a matrix differential equation of the second order which is satisfied by Lommel matrix polynomials. We first recall the background material needed below.

Theorem 1 (Dunford and Schwartz [30]). If Ψ(z) and Ω(z) are holomorphic functions of a complex variable z defined on an open set Φ of the complex plane, and A is a matrix in C^{N×N} with σ(A) ⊂ Φ, then Ψ(A)Ω(A) = Ω(A)Ψ(A).

Definition 1 (Jódar and Cortés [31]). For Q in C^{N×N}, we say that Q is a positive stable matrix if Re(μ) > 0 for every eigenvalue μ ∈ σ(Q).

Definition 2 (Jódar and Cortés [31]). Let Q be a positive stable matrix in C^{N×N}; then the Gamma matrix function Γ(Q) is defined by

Γ(Q) = ∫_0^∞ e^{-t} t^{Q-I} dt,  t^{Q-I} = e^{(Q-I) ln t}.

Definition 3 (Jódar and Sastre [12]). If Q is a matrix in C^{N×N} such that Q + rI is an invertible matrix for all integers r ≥ 0, then Γ(Q) is an invertible matrix in C^{N×N} and the matrix analogue of the Pochhammer symbol (shifted factorial) is defined by

(Q)_r = Q(Q + I) ⋯ (Q + (r - 1)I) = Γ(Q + rI) Γ^{-1}(Q), r ≥ 1;  (Q)_0 = I.

Fact 1 (Jódar and Cortés [32]). For Q ∈ C^{N×N}, denote the real numbers M(Q) and m(Q) as follows:

M(Q) = max{Re(ν) : ν ∈ σ(Q)},  m(Q) = min{Re(ν) : ν ∈ σ(Q)}.

Notation 1 (Jódar and Cortés [33]). If Q is a matrix in C^{N×N}, then it follows that

‖e^{tQ}‖ ≤ e^{t M(Q)} Σ_{k=0}^{N-1} (‖Q‖ t √N)^k / k!,  t ≥ 0,

and, considering that m^Q = e^{Q ln m}, one gets

‖m^Q‖ ≤ m^{M(Q)} Σ_{k=0}^{N-1} (‖Q‖ √N ln m)^k / k!,  m ≥ 1.

Definition 4 (Jódar and Cortés [32,33]). The hypergeometric matrix function 2F1 is defined by

2F1(A, P; Q; z) = Σ_{r=0}^∞ (A)_r (P)_r [(Q)_r]^{-1} z^r / r!,

where A, P and Q are matrices in C^{N×N} such that Q + rI is an invertible matrix for every integer r ≥ 0.
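As a concrete illustration of the matrix Pochhammer symbol of Definition 3, a minimal Python sketch follows; the helper names (mat_mul, pochhammer, etc.) are our own, matrices are plain nested lists rather than a linear-algebra library, and the sanity check uses the scalar (1×1) case, where (Q)_r = Γ(Q + rI)Γ^{-1}(Q) reduces to Γ(q + r)/Γ(q).

```python
from math import gamma

def mat_mul(X, Y):
    """Naive matrix product of two square nested-list matrices."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(X, Y):
    n = len(X)
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def scal(c, X):
    return [[c * x for x in row] for row in X]

def pochhammer(Q, r):
    """(Q)_r = Q (Q + I) ... (Q + (r-1)I), with (Q)_0 = I."""
    n = len(Q)
    P = identity(n)
    for k in range(r):
        P = mat_mul(P, mat_add(Q, scal(k, identity(n))))
    return P

# scalar sanity check: (q)_r = Gamma(q + r) / Gamma(q)
q, r = 2.5, 4
print(pochhammer([[q]], r)[0][0], gamma(q + r) / gamma(q))
```

For a genuine matrix example one would pass a commuting family of N×N matrices; the product above preserves the order of the factors, so no commutativity is silently assumed.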

Definition 5.
Let Q be a matrix in C^{N×N} such that ν is not a negative integer for every ν ∈ σ(Q); then the Bessel matrix function (BMF) J_Q(z) of the first kind of order Q was defined in [16,34,35] as follows:

J_Q(z) = Σ_{k=0}^∞ ((-1)^k / k!) Γ^{-1}(Q + (k + 1)I) (z/2)^{Q+2kI}.

Theorem 2 (Jódar and Cortés [31]). Let Q be a positive stable matrix satisfying Re(ν) > 0 for every eigenvalue ν ∈ σ(Q) and let r ≥ 1 be an integer; then we have

Γ(Q) = lim_{r→∞} (r - 1)! [(Q)_r]^{-1} r^Q,

where (Q)_r is defined by (4).
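In the 1×1 case, Definition 5 reduces to the classical Bessel series. The sketch below (the name bessel_j is ours) sums that scalar series directly and checks it against the well-known closed form J_{1/2}(z) = √(2/(πz)) sin z.

```python
from math import gamma, factorial, sin, pi, sqrt

def bessel_j(nu, z, terms=40):
    """Scalar Bessel series: J_nu(z) = sum_k (-1)^k / (k! Gamma(nu+k+1)) (z/2)^(nu+2k)."""
    return sum((-1) ** k / (factorial(k) * gamma(nu + k + 1)) * (z / 2.0) ** (nu + 2 * k)
               for k in range(terms))

# half-integer check: J_{1/2}(z) = sqrt(2 / (pi z)) sin z
z = 1.3
print(bessel_j(0.5, z), sqrt(2.0 / (pi * z)) * sin(z))
```

Forty terms are far more than needed here: the terms decay factorially, so the partial sum agrees with the closed form to machine precision.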
Definition 6 (Jódar and Cortés [31]). Let A and Q be positive stable matrices in C^{N×N}; then the Beta matrix function B(A, Q) is defined by

B(A, Q) = ∫_0^1 t^{A-I} (1 - t)^{Q-I} dt.

Lemma 1 ([31]). If A, Q and A + Q are positive stable matrices in C^{N×N} satisfying AQ = QA, and A + rI, Q + rI and A + Q + rI are invertible matrices for all integers r ≥ 0, then we have

B(A, Q) = Γ(A) Γ(Q) Γ^{-1}(A + Q).

Lemma 2 (Defez and Jódar [36]). For r ≥ 0, s ≥ 0 and Ω(s, r) a matrix in C^{N×N}, the following relation is satisfied:

Σ_{r=0}^∞ Σ_{s=0}^∞ Ω(s, r) = Σ_{r=0}^∞ Σ_{s=0}^{[r/2]} Ω(s, r - 2s).

Corollary 1 (Batahan [37]; Defez and Jódar [38]). Let A and Q be matrices in C^{N×N} such that A, Q and Q - A are positive stable matrices with AQ = QA and Q + rI an invertible matrix for every integer r ≥ 0. Then, for r a non-negative integer, the following holds.
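Lemma 1 can be checked numerically in the scalar (1×1) case, where it reduces to the classical identity B(a, b) = Γ(a)Γ(b)/Γ(a + b). The quadrature routine below (beta_integral is our own name) uses composite Simpson's rule, so parameters a, b ≥ 1 are assumed to keep the integrand bounded at the endpoints.

```python
from math import gamma

def beta_integral(a, b, n=2000):
    """Composite Simpson approximation of B(a,b) = int_0^1 t^(a-1) (1-t)^(b-1) dt.

    Assumes a, b >= 1 so the integrand is bounded; n must be even."""
    h = 1.0 / n
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    s = f(0.0) + f(1.0)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3

# scalar check of Lemma 1: B(a,b) = Gamma(a) Gamma(b) / Gamma(a+b)
a, b = 2.0, 3.0
print(beta_integral(a, b), gamma(a) * gamma(b) / gamma(a + b))
```

For a = 2, b = 3 the integrand is a cubic, for which Simpson's rule is exact up to rounding, so the two printed values coincide to near machine precision (both equal 1/12).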

Hypergeometric Matrix Function 2 F 3 : Definition and Properties
In this section, we define the hypergeometric matrix function 2F3 under certain conditions, and give its radius of convergence, order, type, matrix differential equation and transformation formulae.

Definition 7.
Let us define the hypergeometric matrix function 2F3 in the form

2F3(A_1, A_2; Q_1, Q_2, Q_3; z) = Σ_{s=0}^∞ (A_1)_s (A_2)_s [(Q_1)_s]^{-1} [(Q_2)_s]^{-1} [(Q_3)_s]^{-1} z^s / s!,

where A_1, A_2, Q_1, Q_2 and Q_3 are commuting matrices in C^{N×N} such that Q_1 + sI, Q_2 + sI and Q_3 + sI are invertible matrices for each integer s ≥ 0.
For the radius of convergence R, with the help of the relation in [39-41] and (11), we have

1/R = lim sup_{s→∞} ‖(A_1)_s (A_2)_s [(Q_1)_s]^{-1} [(Q_2)_s]^{-1} [(Q_3)_s]^{-1} / s!‖^{1/s}.

From (5)-(7) and (18), this upper limit equals zero, so R = ∞. Summarizing, the result has been proven: 2F3 is an entire function.

Proof. If f(z) = Σ_{s=0}^∞ c_s z^s is an entire function [39,42,43], then the order ρ and type τ of f are given by

ρ = lim sup_{s→∞} (s ln s) / (- ln ‖c_s‖)  and  τ = (1/(eρ)) lim sup_{s→∞} s ‖c_s‖^{ρ/s}.

Applying the first formula to the coefficients of 2F3, we calculate the order ρ = 1/2. Further, applying the second formula with ρ = 1/2 yields the type of 2F3.

Next, using the operator θ = z d/dz, which has the useful property θ z^k = k z^k, we obtain

θ(θI + Q_1 - I)(θI + Q_2 - I)(θI + Q_3 - I) 2F3 = Σ_{s=1}^∞ (A_1)_s (A_2)_s [(Q_1)_{s-1}]^{-1} [(Q_2)_{s-1}]^{-1} [(Q_3)_{s-1}]^{-1} z^s / (s - 1)!.

Replacing s by s + 1, we have

θ(θI + Q_1 - I)(θI + Q_2 - I)(θI + Q_3 - I) 2F3 = z (θI + A_1)(θI + A_2) 2F3.

This result is summarized below.
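The series of Definition 7 can be evaluated term by term. In the scalar (1×1) case, the following sketch (the generic function name pFq is ours) checks the cancellation 2F3(a, b; a, b, c; z) = 0F1(; c; z), which follows directly from the series because (a)_s (b)_s cancels against the matching denominator factors.

```python
def pFq(a_list, b_list, z, terms=60):
    """Scalar partial sum of the generalized hypergeometric series
    sum_s (a1)_s ... (ap)_s / ((b1)_s ... (bq)_s * s!) z^s,
    built by the term-ratio recursion term_{s+1}/term_s = prod(a+s)/prod(b+s) * z/(s+1)."""
    total, term = 0.0, 1.0
    for s in range(terms):
        total += term
        num = 1.0
        for a in a_list:
            num *= a + s
        den = 1.0
        for b in b_list:
            den *= b + s
        term *= num / den * z / (s + 1)
    return total

# cancellation check: 2F3(a, b; a, b, c; z) reduces to 0F1(; c; z)
print(pFq([1.5, 2.0], [1.5, 2.0, 3.0], 0.7), pFq([], [3.0], 0.7))
```

Since p < q + 1 here, the term ratio tends to zero and the series is entire, consistent with the radius-of-convergence computation above; sixty terms give full double precision for moderate z.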
Theorem 5. The function 2F3 is a solution of the matrix differential equation

[θ(θI + Q_1 - I)(θI + Q_2 - I)(θI + Q_3 - I) - z(θI + A_1)(θI + A_2)] W = 0,  θ = z d/dz.

Here, we establish various transformation formulae for the hypergeometric matrix function 2F3.

Theorem 6. Let A and Q be matrices in C^{N×N}, where I - A - sI, Q and A + Q + (s - 1)I are positive stable matrices, Q + sI is an invertible matrix for every integer s ≥ 0 and AQ = QA; then (25) holds.

Proof. From (15), taking A → I - A - sI, we have (26). Indeed, by (4) we get (27). From (26) and (27), we obtain (25), where I - A - mI, Q and A + Q + (m - 1)I are positive stable matrices for every integer m ≥ 0 and A + sI, Q + sI and A + Q + (s - 1)I are invertible matrices for every integer s ≥ 0.
Proof. From (14) and (15), we have the result. This finishes the proof.

On Lommel's Matrix Polynomials
Here we define Lommel matrix polynomials (LMPs) and derive matrix recurrence relations, differential equations and integral representations for these matrix polynomials.

Definition 8. Let us consider the Lommel's matrix polynomials (LMPs)
where A and Q are matrices in C^{N×N} satisfying the conditions that Q, I + A - sI and I - A - Q + sI are positive stable matrices for each integer s ≥ 0, Q + sI, sI - A and I - A - Q + sI are invertible matrices for each integer s ≥ 0, and AQ = QA.
Throughout the current section, assume that the matrices A and Q are commuting matrices in C^{N×N} that satisfy condition (32).
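In the scalar (1×1) case, LMPs reduce to the classical Lommel polynomials R_{m,ν}(z). The sketch below (function names are ours) cross-checks the classical three-term recurrence R_{m+1,ν}(z) = (2(ν + m)/z) R_{m,ν}(z) - R_{m-1,ν}(z) against Watson's explicit finite sum for R_{m,ν}(z).

```python
from math import gamma, factorial

def lommel_rec(m, nu, z):
    """R_{m,nu}(z) via the three-term recurrence,
    starting from R_0 = 1 and R_1 = 2 nu / z."""
    r_prev, r = 1.0, 2.0 * nu / z
    if m == 0:
        return r_prev
    for k in range(1, m):
        r_prev, r = r, 2.0 * (nu + k) / z * r - r_prev
    return r

def lommel_sum(m, nu, z):
    """Watson's explicit finite sum:
    R_{m,nu}(z) = sum_s (-1)^s (m-s)! Gamma(nu+m-s)
                  / (s! (m-2s)! Gamma(nu+s)) * (2/z)^(m-2s)."""
    total = 0.0
    for s in range(m // 2 + 1):
        total += ((-1) ** s * factorial(m - s)
                  / (factorial(s) * factorial(m - 2 * s))
                  * gamma(nu + m - s) / gamma(nu + s)
                  * (2.0 / z) ** (m - 2 * s))
    return total

# the two representations agree, e.g. R_{5, 3/2}(2) = 177.84375
print(lommel_rec(5, 1.5, 2.0), lommel_sum(5, 1.5, 2.0))
```

The recurrence is the numerically natural way to generate the whole sequence R_{0,ν}, …, R_{m,ν}, while the finite sum makes the polynomial structure in 1/z explicit.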
Next, let us give the connection between LMPs and BMFs.
where E is a matrix in C^{N×N} satisfying -r ∉ σ(E) for every integer r > 0 and ν is a complex number with Re(ν) > 0. From (31) and (39), we obtain (38).

Data Availability Statement:
The data that support the findings of this paper are available upon request.