Article

Finite Orthogonal M Matrix Polynomials

by
Esra Güldoğan Lekesiz
Department of Mathematics, Faculty of Arts and Sciences, Çankaya University, Ankara 06790, Türkiye
Symmetry 2025, 17(7), 996; https://doi.org/10.3390/sym17070996
Submission received: 28 May 2025 / Revised: 15 June 2025 / Accepted: 20 June 2025 / Published: 24 June 2025
(This article belongs to the Section Mathematics)

Abstract

In this study, we construct a finite set of orthogonal matrix polynomials for the first time, along with their finite orthogonality, matrix differential equation, Rodrigues formula, several recurrence relations including a three-term relation, forward and backward shift operators, generating function, integral representations, and their relation with the Jacobi matrix polynomials. Thus, the concept of "finite", which is used to impose parametric constraints on orthogonal polynomials, is transferred to the theory of matrix polynomials for the first time in the literature. Moreover, when the matrix order is 1, this family reduces to the scalar finite orthogonal M polynomials, thereby providing a matrix generalization of the finite orthogonal M polynomials in one variable.

1. Introduction

Matrix polynomials are powerful tools that facilitate mathematical analysis and play an important role in applied fields such as physics, engineering, and economics, where they are used to model and solve systems. They are highly important and useful in both theoretical mathematics and applied fields. This significance stems from several factors. Key areas of application include Linear Algebra, Linear Differential Equations, Control Theory [1], System Theory [2], Numerical Computations, Spectral Theory [3], Eigenvalue Problems, Graph Theory, and Network Analysis.
Stability analysis, system response, and transfer functions are often expressed using matrix polynomials. For example, matrix polynomials are fundamental components in methods such as Taylor series, Padé approximation, or Krylov subspaces for computing the exponential of a matrix. They are also used to simplify and analyze functional expressions of matrices through the Cayley–Hamilton Theorem and play a key role in topics such as diagonalization.
Matrix polynomials and functions have been studied by many mathematicians over the years [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. Some new types and families of matrix polynomials have been introduced in [3,5,6,7,9,12,14,15,16,17,19,20,21,25,26,27]. In [8], pseudo-orthogonality for matrix polynomials and the importance of a nonsingular leading coefficient matrix are discussed. In [10], motivated by the idea that any polynomial system satisfying a three-term relation is orthogonal, Durán showed that certain 2 N + 1 term recurrence relations can be written in terms of orthonormal matrix polynomials. In [11], a quadrature formula and some basic properties of the zeroes of a sequence of orthogonal matrix polynomials were studied. In [12], the authors defined orthogonal matrix polynomials with respect to a right matrix moment functional. In addition, some related properties have been discussed in [4,18,22,23,24,27]. In 1996 [23], Sinap and Van Assche proposed that scalar orthogonal polynomials with Sobolev inner products involving a finite number of derivatives can be studied using matrix orthogonal polynomials on the real line. Furthermore, in [28], the authors proved that every real multivariate polynomial has a symmetric determinantal representation. These representations allow polynomials to be expressed in terms of matrices and are used in applications such as semidefinite programming. The symmetry or asymmetry of matrix polynomials typically depends on the structure of the matrix and the way the polynomial is defined. In general, matrix polynomials are asymmetric, since a matrix polynomial is symmetric if and only if the matrix itself is symmetric. However, symmetric matrix polynomials can also be constructed from asymmetric matrices.
Recently, Fourier series expansions [29], interpolation [2], quadrature [18,22], group representation theory [30], medical imaging [31], splines [32], scattering theory [13], and statistics constitute the majority of the application areas for matrix polynomials.
In general, matrix polynomials can be studied in much the same way as scalar polynomials. However, matrices generally do not satisfy the commutativity property; in other words, $GH \neq HG$ in general, where $G$ and $H$ are two matrices in $\mathbb{C}^{m\times m}$. When examining this case, it is important to be able to reduce matrix generalizations to the appropriate scalar cases, where the relevant matrices are of order 1.
The aim of this study is to derive a finite orthogonal matrix polynomial family under suitable conditions for the first time in the literature and to establish its relationships with known polynomials. In general, there are two types of orthogonality for polynomials: infinite orthogonality and finite orthogonality, with the latter being subject to certain parametric constraints. In the infinite case, the nonnegative integer $n$ (the degree of the polynomial) is unrestricted and can increase indefinitely, whereas in the finite case, constraints must be imposed on $n$. Some parametric restrictions must be introduced in order to obtain such finite orthogonality for matrix polynomials, which will be defined in Section 3. In this way, the concept of finite-type orthogonality is transferred to matrix polynomial theory for the first time in this work. In order to achieve our goal, we draw inspiration from the finite orthogonal polynomials $M_n^{(\mu,\lambda)}(y)$ described below.
In the scalar case, the polynomials $M_n^{(\mu,\lambda)}(y)$ form a finite orthogonal polynomial set for $\lambda > -1$ and $\mu > 2\max\{n\} + 1$ [33]. This set is defined by the following Rodrigues formula

$$M_n^{(\mu,\lambda)}(y) = (-1)^n\,(1+y)^{\mu+\lambda}\,y^{-\lambda}\,\frac{d^n}{dy^n}\!\left(y^{\,n+\lambda}\,(1+y)^{\,n-\mu-\lambda}\right). \qquad (1)$$
From (1), the explicit formula

$$M_n^{(\mu,\lambda)}(y) = (-1)^n\,\Gamma(\lambda+n+1)\sum_{l=0}^{n}\binom{n}{l}\,(n+1-\mu)_l\,\frac{y^{l}}{l!\;\Gamma(l+\lambda+1)}$$
is derived, and it is shown that it satisfies the differential equation

$$y(1+y)\,M_n''(y) + \big((2-\mu)\,y + 1 + \lambda\big)\,M_n'(y) + n\,(\mu-1-n)\,M_n(y) = 0. \qquad (2)$$
Then, the orthogonality relation

$$\int_{0}^{\infty} M_n^{(\mu,\lambda)}(y)\,M_s^{(\mu,\lambda)}(y)\;y^{\lambda}\,(1+y)^{-(\mu+\lambda)}\,dy = \frac{n!\,\Gamma(\mu-n)\,\Gamma(\lambda+n+1)}{(\mu-2n-1)\,\Gamma(\mu+\lambda-n)}\,\delta_{n,s}, \qquad \delta_{n,s}=\begin{cases}1, & n=s,\\ 0, & n\neq s,\end{cases}$$
is obtained with the help of the self-adjoint form of (2), so that the three-term relation

$$(\mu-2n-1)\Big[(\mu-2n-2)(\mu-2n)\,y + 2n(n+1) - \mu(\lambda+2n+1)\Big]M_n^{(\mu,\lambda)}(y) + n\,(2n+2-\mu)(\lambda+\mu-n)(\lambda+n)\,M_{n-1}^{(\mu,\lambda)}(y) = (\mu-n-1)(\mu-2n)\,M_{n+1}^{(\mu,\lambda)}(y)$$

can be introduced.
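As a quick, non-authoritative illustration, the sketch below evaluates the scalar polynomials $M_n^{(\mu,\lambda)}(y)$ through the explicit formula above and checks the finite orthogonality by numerical quadrature; the parameter values ($\mu = 15$, $\lambda = 1/2$) and the helper names are our own illustrative choices, not part of the paper.

```python
# Minimal sketch (not from the paper): scalar M polynomials via the explicit
# formula, plus a quadrature check of the finite orthogonality relation.
import numpy as np
from math import comb, factorial, gamma
from scipy.integrate import quad

def poch(a, k):
    """Scalar Pochhammer symbol (a)_k = a (a+1) ... (a+k-1)."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def M(n, mu, lam, y):
    """M_n^{(mu,lam)}(y) from the explicit formula derived from (1)."""
    s = sum(comb(n, l) * poch(n + 1 - mu, l) * y ** l
            / (factorial(l) * gamma(l + lam + 1)) for l in range(n + 1))
    return (-1) ** n * gamma(lam + n + 1) * s

mu, lam = 15.0, 0.5                      # need lam > -1 and mu > 2*max{n} + 1
w = lambda y: y ** lam * (1 + y) ** (-(mu + lam))

val, _ = quad(lambda y: M(2, mu, lam, y) * M(3, mu, lam, y) * w(y), 0, np.inf)
print(val)                               # ~ 0 (orthogonality for n != s)

n = 2
norm, _ = quad(lambda y: M(n, mu, lam, y) ** 2 * w(y), 0, np.inf)
closed = (factorial(n) * gamma(mu - n) * gamma(lam + n + 1)
          / ((mu - 2 * n - 1) * gamma(mu + lam - n)))
print(norm, closed)                      # agree up to quadrature error
```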
In particular, under a suitable linear change in variable and reparametrization, it is seen that this family is closely related to the classical Jacobi polynomials (see Corollary 4 for the matrix analogue).
In this paper, Section 2 presents some basic definitions and theorems from the theory of matrix polynomials. The main results, including the definition of finite orthogonal M matrix polynomials, are given in Section 3. Matrix differential equations, the Rodrigues formula, the three-term recurrence relation, several recurrence relations, forward and backward shift operators, the generating function, and integral representations are also derived. Moreover, the Conclusion presents a connection between the Jacobi matrix polynomials and the finite orthogonal M matrix polynomials.
Due to the importance of special matrix polynomial forms in various fields of mathematics, physics, and engineering, establishing a relationship between the known Jacobi matrix polynomials and this novel finite class of matrix polynomials is highly significant. While the real-life applications of Jacobi matrix polynomials are primarily in technical and mathematical domains, methods based on these structures also provide solutions to many practical problems. Some key areas of application include Numerical Analysis and Differential Equations, Quantum Mechanics [34,35], Vibration Analysis [36] and Mechanical Systems, as well as Data Science and Machine Learning.
Jacobi matrix polynomials are related to families of orthogonal polynomials such as the Legendre, Chebyshev, and Jacobi polynomials. These polynomials are especially useful in numerical solution methods. Gauss–Jacobi integration schemes, based on Jacobi polynomials, offer high accuracy for numerical integration in spectral methods (e.g., fluid mechanics and heat transfer). They are also widely used in integral calculations (e.g., Gauss quadrature). Real-life applications include weather forecasting (via the Navier–Stokes equations), heat distribution modeling (e.g., in engine blocks), and electromagnetic field simulations (e.g., antenna design). In quantum mechanics, Jacobi matrix polynomials, which are especially useful for the matrix representation of Hamiltonian operators, are often used in eigenvalue problems. The properties of these matrices are used to find the energies of physical systems. Real-life applications [34,35] include quantum dot and quantum wire simulations and the modeling of nanoscale devices. Jacobi matrices are used in engineering problems, particularly for determining vibration modes and frequencies, such as in the analysis of mass-spring systems. Examples include the analysis of suspension systems in the automotive industry, the modeling of aircraft wing vibrations, and the dynamic analysis of bridges. Although Jacobi matrices rarely appear directly in data science problems, the mathematical foundations of many algorithms rely on such matrix structures. They are particularly useful in eigenvalue analysis, dimensionality reduction, and kernel methods. Consequently, Jacobi matrix polynomials are not used directly as end products, but they play a critical role in technical fields such as engineering, numerical computation, and physics. Real-life problems are addressed through mathematical models that incorporate these polynomials and matrix structures.

2. Some Basic Notations and Properties

Let $\Upsilon(G)$ denote the set of all eigenvalues of a matrix $G$ in $\mathbb{C}^{m\times m}$.
Definition 1.
For $0\le j\le n$, any matrix polynomial $K_n(y)$ is defined as

$$K_n(y) = G_n\,y^{n} + G_{n-1}\,y^{n-1} + \cdots + G_1\,y + G_0,$$

where $y$ is a real variable and the coefficients $G_j$ are members of $\mathbb{C}^{m\times m}$, the space of complex matrices of order $m$.
Lemma 1
([37]). If $f(z)$ and $g(z)$ are holomorphic functions in an open set $\Omega$ of the complex plane, and if $G$ is a matrix in $\mathbb{C}^{m\times m}$ for which $\Upsilon(G)\subset\Omega$, then

$$f(G)\,g(G) = g(G)\,f(G).$$

Therefore, if $GH = HG$, and $H\in\mathbb{C}^{m\times m}$ is a matrix such that $\Upsilon(H)\subset\Omega$, then

$$f(G)\,g(H) = g(H)\,f(G).$$
Definition 2.
If $\operatorname{Re}(\beta) > 0$ for every $\beta\in\Upsilon(G)$, where $G\in\mathbb{C}^{m\times m}$, then the matrix $G$ is called a positive stable matrix.
Definition 3.
For $G\in\mathbb{C}^{m\times m}$, the matrix version of the Pochhammer symbol is defined by

$$(G)_p = G\,(G+I)\,(G+2I)\cdots\big(G+(p-1)I\big), \quad p\ge1, \qquad (3)$$

where $I\in\mathbb{C}^{m\times m}$ is the identity matrix and $(G)_0 \equiv I$.
Remark 1.
If $G = -jI$ for some $j = 1,2,\ldots$, then $(G)_p = \theta$ (the zero matrix) for $p > j$.
Definition 4.
For $G,H\in\mathbb{C}^{m\times m}$ with $G = -(H+rI)$,

$$\big({-}(H+rI)\big)_p = (-1)^p\,\big(H+(r-p+1)I\big)_p, \quad p\ge0. \qquad (4)$$
Definition 5.
The Gamma matrix function is defined by

$$\Gamma(G) = \int_0^{\infty} t^{\,G-I}\,e^{-t}\,dt, \qquad t^{\,G-I} = \exp\big((G-I)\ln t\big), \qquad (5)$$

such that $G$ is a positive stable matrix. If the matrices $G+pI$ are invertible for every integer $p\ge0$ and $G\in\mathbb{C}^{m\times m}$, then the equation

$$(G)_p = \Gamma(G+pI)\,\Gamma^{-1}(G), \quad p\ge1, \qquad (6)$$

is satisfied by considering (3) for matrices given by (4) [38].
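To make Definition 5 and relation (6) concrete, here is a minimal numerical sketch of our own (not part of the paper): the matrix Gamma function is evaluated through the Riesz–Dunford functional calculus (scipy's `funm` applies a scalar function on the spectrum), and $(G)_p = \Gamma(G+pI)\,\Gamma^{-1}(G)$ is verified for an illustrative positive stable matrix.

```python
# Sketch: check (G)_p = Gamma(G + pI) Gamma^{-1}(G) numerically.
# The matrix G below is an arbitrary positive stable example.
import numpy as np
from scipy.linalg import funm, inv
from scipy.special import gamma

def poch_m(G, p):
    """Matrix Pochhammer symbol (G)_p = G (G+I) ... (G+(p-1)I), (G)_0 = I."""
    out = np.eye(G.shape[0])
    for i in range(p):
        out = out @ (G + i * np.eye(G.shape[0]))
    return out

Gamma_m = lambda G: funm(G, gamma)   # matrix Gamma via functional calculus

G = np.array([[2.5, 0.3],
              [0.1, 3.0]])           # eigenvalues with positive real part
p = 3
print(np.allclose(poch_m(G, p), Gamma_m(G + p * np.eye(2)) @ inv(Gamma_m(G))))
```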
Lemma 2.
For an arbitrary matrix $G\in\mathbb{C}^{m\times m}$,

$$D^{p}\,t^{\,G+rI} = \big(G+(r-p+1)I\big)_p\,t^{\,G+(r-p)I} = \Gamma\big(G+(r+1)I\big)\,\Gamma^{-1}\big(G+(r-p+1)I\big)\,t^{\,G+(r-p)I}, \quad p = 0,1,2,\ldots,$$

where $D = \frac{d}{dt}$, in light of Equation (6).
Definition 6.
The Beta matrix function is defined as

$$B(G,H) = \int_0^{1} y^{\,G-I}\,(1-y)^{\,H-I}\,dy, \qquad (7)$$

where $G,H\in\mathbb{C}^{m\times m}$ are positive stable matrices [38].
Theorem 1.
If the matrices $G,H\in\mathbb{C}^{m\times m}$ are commutative, such that $G$, $H$, and $G+H$ are positive stable matrices, then the equality

$$B(G,H) = \Gamma(G)\,\Gamma(H)\,\Gamma^{-1}(G+H)$$

holds [16].
Lemma 3.
Let the matrices $G,H\in\mathbb{C}^{m\times m}$ satisfy the conditions

$$\operatorname{Re}(z) > 2\max\{n\}+1, \quad z\in\Upsilon(G), \qquad \operatorname{Re}(w) > -1, \quad w\in\Upsilon(H).$$

From (7),

$$\int_0^{\infty} y^{H}\,(1+y)^{-(G+H)}\,dy = B\big(G-I,\,H+I\big) = \Gamma(G-I)\,\Gamma(H+I)\,\Gamma^{-1}(G+H).$$
Lemma 4
([39]). For $G\in\mathbb{C}^{m\times m}$, let $\psi(G) = \max\{\operatorname{Re}(z);\ z\in\Upsilon(G)\}$. Then

$$\big\|e^{Gt}\big\| \le e^{t\,\psi(G)}\sum_{p=0}^{r-1}\frac{\big(\|N\|\,t\big)^{p}}{p!}, \quad t\ge0, \qquad (8)$$

where $S^{H}GS = L+N$ is a Schur decomposition of $G$, with $L$ diagonal, $N$ strictly upper triangular, and $r$ the order of the matrix.
Lemma 5
([40]). The reciprocal scalar Gamma function, $\Gamma^{-1}(z) = 1/\Gamma(z)$, is an entire function of the complex variable $z$. Thus, for any $G\in\mathbb{C}^{m\times m}$, the Riesz–Dunford functional calculus [37] shows that $\Gamma^{-1}(G)$ is well defined and is, indeed, the inverse of $\Gamma(G)$. Hence, if $G\in\mathbb{C}^{m\times m}$ is such that $G+nI$ is invertible for every integer $n\ge0$, then $(G)_n = \Gamma(G+nI)\,\Gamma^{-1}(G)$.
Lemma 6.
If $G+nI$, $H+nI$, and $G+H+nI$ are all invertible for every integer $n\ge0$, and $G,H\in\mathbb{C}^{m\times m}$ satisfy $GH = HG$, then $B(G,H) = \Gamma(G)\,\Gamma(H)\,\Gamma^{-1}(G+H)$, where $B(G,H)$ denotes the Beta matrix function [16].
Lemma 7.
If $C_3+nI$ is invertible for every integer $n\ge0$ and $C_1,C_2,C_3\in\mathbb{C}^{m\times m}$, then the hypergeometric matrix function $F(C_1,C_2;C_3;z)$ is defined as follows:

$$F(C_1,C_2;C_3;z) = \sum_{n\ge0}\frac{(C_1)_n\,(C_2)_n\,\big[(C_3)_n\big]^{-1}}{n!}\,z^{n}. \qquad (9)$$

It converges for $|z| < 1$ [16].
Lemma 8
([41]). For $n = 0,1,2,\ldots$, let $C_1,C_2,C_3\in\mathbb{C}^{m\times m}$ be constant matrices such that $C_3C_2 = C_2C_3$, $C_1C_3 = C_3C_1$, and $C_3+nI$ is invertible. Then, the solution of the hypergeometric matrix differential equation

$$z(1-z)\,\frac{d^{2}E(z)}{dz^{2}} - z\,C_1\,\frac{dE(z)}{dz} + \frac{dE(z)}{dz}\,\big(C_3 - z(C_2+I)\big) - C_1C_2\,E(z) = \theta, \quad 0<z<1, \qquad (10)$$

is

$$E(z) = E_1(z)\,G + E_2(z)\,H$$

for

$$E_1(z) = F(C_1,C_2;C_3;z)$$

and

$$E_2(z) = z^{\,I-C_3}\,F\big(C_1-C_3+I,\;C_2-C_3+I;\;2I-C_3;\;z\big),$$

where $F$ is the hypergeometric matrix function (9) and $G,H\in\mathbb{C}^{m\times m}$ are arbitrary constant matrices.
Remark 2.
Under the conditions $C_3C_2 = C_2C_3$, $C_1C_3 = C_3C_1$, and $C_3+nI$ invertible, the hypergeometric matrix function (9) is also a solution of (10).

3. Finite Orthogonal M Matrix Polynomials

In this section, the set of finite orthogonal M matrix polynomials is introduced. The corresponding matrix differential equation, finite orthogonality relation, several recurrence relations, Rodrigues formula, and generating functions for the family are also derived.
Finite orthogonal M matrix polynomials possess properties that naturally generalize those of the scalar finite orthogonal polynomials M and are conveniently constructed for matrix calculus.
Definition 7.
For any integer $n\ge0$, the $n$th M matrix polynomial $M_n^{(Q,R)}(y)$ is defined by

$$M_n^{(Q,R)}(y) = \sum_{p=0}^{n}(-1)^{n+p}\,\binom{n}{p}\,\Gamma(Q-nI)\,\Gamma^{-1}\big(Q-(n+p)I\big)\,\Gamma\big(R+(n+1)I\big)\,\Gamma^{-1}\big(R+(p+1)I\big)\,y^{p}, \qquad (11)$$

so that $Q,R\in\mathbb{C}^{m\times m}$ are parameter matrices whose eigenvalues $z$ and $w$ satisfy the spectral conditions $\operatorname{Re}(z) > 2\max\{n\}+1$, $z\in\Upsilon(Q)$, $\operatorname{Re}(w) > -1$, $w\in\Upsilon(R)$, and $QR = RQ$.
Remark 3.
Note that

$$M_n^{(Q,R)}(y) = F\big((n+1)I-Q,\,-nI;\,I+R;\,-y\big)\times(-1)^n\,\Gamma\big(R+(n+1)I\big)\,\Gamma^{-1}(R+I), \qquad (12)$$

and that in the scalar case $m = 1$, taking $Q = \mu$ and $R = \lambda$ with $\mu > 2\max\{n\}+1$ and $\lambda > -1$, the $n$th polynomial $M_n^{(Q,R)}(y)$ coincides with the scalar finite polynomial $M_n^{(\mu,\lambda)}(y)$ [33]. Here, $F(C_1,C_2;C_3;z)$ denotes the hypergeometric matrix function given by (9).
Proof. 
In definition (11), following from (4), (5), and (6), we obtain

$$M_n^{(Q,R)}(y) = \sum_{p=0}^{n}(-1)^{n+p}\,\binom{n}{p}\,\Gamma(Q-nI)\,\Gamma^{-1}\big(Q-(n+p)I\big)\,\Gamma\big(R+(n+1)I\big)\,\Gamma^{-1}\big(R+(p+1)I\big)\,y^{p}$$

$$= (-1)^n\sum_{p=0}^{n}\frac{(-nI)_p}{p!}\,\big(Q-(n+p)I\big)_p\,(R+I)_n\,\big[(R+I)_p\big]^{-1}\,y^{p}$$

$$= (-1)^n\,(R+I)_n\sum_{p=0}^{n}\frac{(-nI)_p\,\big((n+1)I-Q\big)_p\,\big[(R+I)_p\big]^{-1}}{p!}\,(-y)^{p}$$

$$= (-1)^n\,\Gamma\big(R+(n+1)I\big)\,\Gamma^{-1}(R+I)\,F\big((n+1)I-Q,\,-nI;\,I+R;\,-y\big),$$

where we used $(-1)^{n+p}\binom{n}{p} = (-1)^n\frac{(-n)_p}{p!}$, $\Gamma(Q-nI)\,\Gamma^{-1}\big(Q-(n+p)I\big) = \big(Q-(n+p)I\big)_p = (-1)^p\big((n+1)I-Q\big)_p$, and $\Gamma\big(R+(n+1)I\big)\,\Gamma^{-1}\big(R+(p+1)I\big) = (R+I)_n\big[(R+I)_p\big]^{-1}$. □
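Since $(-nI)_p = \theta$ for $p > n$, the hypergeometric form (12) is a finite sum of matrix Pochhammer products, so $M_n^{(Q,R)}(y)$ can be evaluated without any matrix Gamma function. The following is a minimal sketch of such an evaluation, assuming $QR = RQ$; the function names are ours.

```python
# Sketch: evaluate M_n^{(Q,R)}(y) = (-1)^n (R+I)_n F((n+1)I - Q, -nI; I+R; -y)
# as the finite sum (12); assumes the parameter matrices commute.
import math
import numpy as np

def poch_m(G, p):
    """Matrix Pochhammer symbol (G)_p = G (G+I) ... (G+(p-1)I), (G)_0 = I."""
    out = np.eye(G.shape[0])
    for i in range(p):
        out = out @ (G + i * np.eye(G.shape[0]))
    return out

def M_matrix(n, Q, R, y):
    m = Q.shape[0]
    I = np.eye(m)
    total = np.zeros((m, m))
    for p in range(n + 1):    # the series terminates: (-nI)_p = 0 for p > n
        total = total + (poch_m(-n * I, p) @ poch_m((n + 1) * I - Q, p)
                         @ np.linalg.inv(poch_m(R + I, p))
                         ) * (-y) ** p / math.factorial(p)
    return (-1) ** n * poch_m(R + I, n) @ total

# Scalar reduction (m = 1): M_1^{(mu,lam)}(y) = (mu - 2) y - (lam + 1)
print(M_matrix(1, np.array([[15.0]]), np.array([[0.5]]), 2.0))   # [[24.5]]
```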
Theorem 2.
For each integer $n\ge0$, the finite M matrix polynomials satisfy the matrix differential equation

$$y(1+y)\,M''(y) + \big((2I-Q)\,y + R + I\big)\,M'(y) + n\,\big(Q-(n+1)I\big)\,M(y) = \theta \qquad (13)$$

for $0<y<\infty$.
Proof. 
Let $C_3C_2 = C_2C_3$, and $C_3+nI$ be invertible for $n = 0,1,2,\ldots$. The hypergeometric matrix function (9) is a solution to the matrix differential Equation (10) for $0<z<1$ [41]. By replacing $z = -y$, $C_1 = (n+1)I-Q$, $C_2 = -nI$, and $C_3 = R+I$,

$$F\big((n+1)I-Q,\,-nI;\,I+R;\,-y\big) = (-1)^n\,\Gamma(I+R)\,\Gamma^{-1}\big((n+1)I+R\big)\,M_n^{(Q,R)}(y), \qquad (14)$$

from Remark 3. Introduce the notation

$$E(z) = F\big((n+1)I-Q,\,-nI;\,I+R;\,z\big), \quad z = -y.$$

Applying the chain rule in (14),

$$\frac{dE}{dz} = (-1)^{n+1}\,\Gamma(I+R)\,\Gamma^{-1}\big((n+1)I+R\big)\,\frac{d}{dy}M_n^{(Q,R)}(y) \qquad (15)$$

and

$$\frac{d^{2}E}{dz^{2}} = (-1)^{n+2}\,\Gamma(I+R)\,\Gamma^{-1}\big((n+1)I+R\big)\,\frac{d^{2}}{dy^{2}}M_n^{(Q,R)}(y). \qquad (16)$$

Taking into account that $C_3 - z(C_2+I) = R + \big(1+(n-1)z\big)I$ and that this term commutes with $\Gamma(R+I)\,\Gamma^{-1}\big(R+(n+1)I\big)$, substituting (14), (15) and (16) in (10) and postmultiplying by

$$(-1)^n\,\Gamma\big(R+(n+1)I\big)\,\Gamma^{-1}(R+I)$$

yields

$$y(y+1)\,\frac{d^{2}}{dy^{2}}M_n^{(Q,R)}(y) + \big((2I-Q)\,y + R + I\big)\,\frac{d}{dy}M_n^{(Q,R)}(y) + n\,\big(Q-(n+1)I\big)\,M_n^{(Q,R)}(y) = \theta. \qquad (17)$$

Thus, $M_n^{(Q,R)}(y)$ satisfies (13) in $0<y<\infty$. □
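A quick finite-difference sanity check of (13) is sketched below for commuting diagonal parameter matrices (illustrative values); it reuses `M_matrix()` from the sketch after Remark 3.

```python
# Sketch: residual of the matrix ODE (13) via central finite differences.
import numpy as np
# assumes M_matrix() from the sketch after Remark 3 is in scope

Q = np.diag([12.0, 14.0])        # Re(z) > 2*max{n} + 1 for the degrees used
R = np.diag([0.5, 1.0])
I = np.eye(2)
n, y, h = 3, 0.7, 1e-5

M0 = M_matrix(n, Q, R, y)
M1 = (M_matrix(n, Q, R, y + h) - M_matrix(n, Q, R, y - h)) / (2 * h)
M2 = (M_matrix(n, Q, R, y + h) - 2 * M0 + M_matrix(n, Q, R, y - h)) / h ** 2

residual = (y * (1 + y) * M2 + ((2 * I - Q) * y + R + I) @ M1
            + n * (Q - (n + 1) * I) @ M0)
print(np.max(np.abs(residual)))  # ~ 0 up to finite-difference error
```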
Remark 4.
Taking $R = \lambda$, $\lambda > -1$ and $Q = \mu$, $\mu > 2\max\{n\}+1$ in (13) gives the differential Equation (2) for the scalar finite orthogonal M polynomials [33].
Corollary 1.
For $n\ge0$, $M_n^{(Q,R)}(y)$ is a solution of the differential equation

$$\frac{d}{dy}\Big[y^{\,R+I}\,(1+y)^{\,I-Q-R}\,M'(y)\Big] + n\,\big(Q-(n+1)I\big)\,y^{R}\,(1+y)^{-(Q+R)}\,M(y) = \theta \qquad (18)$$

over $0<y<\infty$.
Proof. 
After multiplying (13) by $y^{R}\,(1+y)^{-(Q+R)}$, rearranging yields (18) for $0<y<\infty$. □
Definition 8.
A matrix $Q\in\mathbb{C}^{m\times m}$ is called positive stable [29] if $\operatorname{Re}(z) > 0$ for every $z\in\Upsilon(Q)$.
Lemma 9
([6]). Let $C_2$ be positive stable, $C_2C_3 = C_3C_2$, and let $C_3-C_2+pI$ and $C_3+pI$ be invertible for $p = 0,1,2,\ldots$, where $C_2,C_3\in\mathbb{C}^{m\times m}$. Then

$$F(-nI,\,C_2;\,C_3;\,y) = (1-y)^{n}\,F\Big(-nI,\;C_3-C_2;\;C_3;\;\frac{y}{y-1}\Big), \qquad (19)$$

or equivalently

$$F(-nI,\,C_3-C_2;\,C_3;\,y) = (1-y)^{n}\,F\Big(-nI,\;C_2;\;C_3;\;\frac{y}{y-1}\Big) \qquad (20)$$

for $|y| < 1$.
The finite M matrix polynomials (12) may also be expressed in terms of the hypergeometric matrix function $F\big(-nI,\,Q+R-nI;\,R+I;\,\cdot\,\big)$. From (19), definition (12) can be written as follows

$$M_n^{(Q,R)}(y) = (-1)^n\,(1+y)^{n}\,\Gamma\big(R+(n+1)I\big)\,\Gamma^{-1}(R+I)\,F\Big(-nI,\;Q+R-nI;\;I+R;\;\frac{y}{1+y}\Big),$$

where $Q+R-nI$ is positive stable. Thus,

$$M_n^{(Q,R)}(y) = (-1)^n\,\Gamma\big(R+(n+1)I\big)\,\Gamma^{-1}(R+I)\sum_{p=0}^{n}(-1)^p\,\binom{n}{p}\,\big(Q+R-nI\big)_p\,\big[(R+I)_p\big]^{-1}\,y^{p}\,(1+y)^{n-p}$$

$$= (-1)^n\,y^{-R}\,(1+y)^{Q+R}\sum_{p=0}^{n}(-1)^p\,\binom{n}{p}\,\big(Q+R-nI\big)_p\,(R+I)_n\,\big[(R+I)_p\big]^{-1}\,y^{\,pI+R}\,(1+y)^{\,(n-p)I-Q-R}$$

holds by using

$$\binom{s}{p} = \begin{cases}\dfrac{s!}{p!\,(s-p)!}, & 0\le p\le s,\\[2pt] 0, & p>s.\end{cases}$$

Since

$$D^{p}\,(y+1)^{\,nI-Q-R} = (-1)^p\,\big(Q+R-nI\big)_p\,(1+y)^{\,(n-p)I-Q-R}$$

and

$$D^{n-p}\,y^{\,R+nI} = (R+I)_n\,\big[(R+I)_p\big]^{-1}\,y^{\,pI+R},$$

we get

$$M_n^{(Q,R)}(y) = (-1)^n\,y^{-R}\,(1+y)^{Q+R}\sum_{p=0}^{n}\binom{n}{p}\,D^{p}\big[(y+1)^{\,nI-Q-R}\big]\,D^{n-p}\big[y^{\,R+nI}\big].$$
Applying the Leibniz rule for the differentiation of a product, the following theorem can be given for the Rodrigues formula of the polynomials $M_n^{(Q,R)}(y)$.
Theorem 3.
Let $Q$ and $R$ satisfy $\operatorname{Re}(z) > 2\max\{n\}+1$, $z\in\Upsilon(Q)$, $\operatorname{Re}(w) > -1$, $w\in\Upsilon(R)$, and $QR = RQ$, where $Q,R\in\mathbb{C}^{m\times m}$. Then, the finite M matrix polynomials (11) can be written in the form

$$M_n^{(Q,R)}(y) = (-1)^n\,y^{-R}\,(1+y)^{Q+R}\,D^{n}\Big[y^{\,R+nI}\,(y+1)^{\,nI-Q-R}\Big]$$

for $n = 0,1,2,\ldots$.
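In the scalar case $m = 1$, the Rodrigues formula of Theorem 3 can be checked symbolically against the explicit formula from Section 1; the sketch below does this for the illustrative values $\mu = 12$, $\lambda = 1/2$, $n = 3$.

```python
# Symbolic sketch (scalar case m = 1): Rodrigues formula vs. explicit sum.
import sympy as sp

y = sp.symbols('y', positive=True)
mu, lam, n = sp.Integer(12), sp.Rational(1, 2), 3

rodrigues = ((-1) ** n * y ** (-lam) * (1 + y) ** (mu + lam)
             * sp.diff(y ** (n + lam) * (1 + y) ** (n - mu - lam), y, n))

explicit = (-1) ** n * sp.gamma(lam + n + 1) * sum(
    sp.binomial(n, l) * sp.rf(n + 1 - mu, l) * y ** l
    / (sp.factorial(l) * sp.gamma(l + lam + 1)) for l in range(n + 1))

print(sp.simplify(rodrigues - explicit))   # 0
```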
Now, for the finite M matrix polynomials $M_n^{(Q,R)}(y)$ with parameter matrices $Q$ and $R$ such that $\operatorname{Re}(z) > 2\max\{n\}+1$, $z\in\Upsilon(Q)$, $\operatorname{Re}(w) > -1$, $w\in\Upsilon(R)$, and $QR = RQ$, the finite orthogonality will be obtained on the interval $0\le y<\infty$ with the weight function $E(y) = y^{R}\,(1+y)^{-(Q+R)}$. For this purpose, we will make use of the self-adjoint form (18) and investigate the behavior of $y^{\,I+R}\,(1+y)^{\,I-(Q+R)}$ at the endpoints of the interval.
Lemma 10.
$$\lim_{y\to0^{+}} y^{\,R+I}\,(1+y)^{\,I-R-Q}\,D(y) = \theta$$

and

$$\lim_{y\to\infty} y^{\,R+I}\,(1+y)^{\,I-R-Q}\,D(y) = \theta,$$

where $Q,R\in\mathbb{C}^{m\times m}$ satisfy the spectral conditions in Lemma 3, and $D(y)$ is an arbitrary matrix polynomial.
Proof. 
First, we consider the case $y\to0^{+}$.

Suppose that $D(y)$ is continuous and bounded on the closure of $\chi$, where $\chi$ is an open bounded neighbourhood of $y = 0$. Then

$$\|D(y)\| \le K_1, \quad y\in\chi,\ K_1\in\mathbb{R}^{+}.$$

We know that if

$$\psi(R) = \max\{\operatorname{Re}(z) : z\in\Upsilon(R)\},$$

and $S^{H}RS = L+N$ is the Schur decomposition of an arbitrary matrix $R\in\mathbb{C}^{m\times m}$, then from (8)

$$\big\|e^{Rt}\big\| \le e^{t\,\psi(R)}\sum_{p=0}^{r-1}\frac{\big(\|N\|\,t\big)^{p}}{p!},$$

where $t\ge0$. Therefore,

$$\big\|y^{\,R+I}\big\| \le y^{\,\psi(R+I)}\sum_{p=0}^{r-1}\frac{\big(\|N\|\,|\log y|\big)^{p}}{p!},$$

where $S^{H}(R+I)S = L+N$ is the Schur decomposition of $R+I$. On the other hand, $R$ satisfies $\operatorname{Re}(z) > -1$, $z\in\Upsilon(R)$. So, $R+I$ satisfies the condition $\operatorname{Re}(v) > 0$, $v\in\Upsilon(R+I)$, and thus $\psi(R+I) > 0$.

On the one hand,

$$\lim_{t\to0^{+}} t^{\eta}\,|\log t|^{\,j} = 0, \quad \eta>0,$$

for $0\le j\le r-1$, and hence,

$$\lim_{y\to0^{+}} y^{\,\psi(R+I)}\,|\log y|^{\,j} = 0, \quad j = 0,1,\ldots,r-1.$$

On the other hand, the inequality

$$0 \le \big\|y^{\,R+I}\big\| \le y^{\,\psi(R+I)}\sum_{p=0}^{r-1}\frac{\big(\|N\|\,|\log y|\big)^{p}}{p!} \qquad (21)$$

is already satisfied and, since the upper bound in (21) approaches zero as $y\to0^{+}$, we obtain

$$\lim_{y\to0^{+}} y^{\,R+I} = \theta.$$

For the factor $(1+y)^{\,I-Q-R}$,

$$\lim_{y\to0^{+}} (1+y)^{\,I-Q-R} = I,$$

and $(1+y)^{\,I-Q-R}$ is bounded on $\chi$.

In conclusion,

$$0 \le \big\|D(y)\,(1+y)^{\,I-Q-R}\,y^{\,R+I}\big\| \le \|D(y)\|\,\big\|(1+y)^{\,I-Q-R}\big\|\,\big\|y^{\,R+I}\big\| \le K_1\,\big\|(1+y)^{\,I-Q-R}\big\|\,\big\|y^{\,R+I}\big\|,$$

so

$$\lim_{y\to0^{+}} y^{\,R+I}\,(1+y)^{\,I-Q-R}\,D(y) = \theta.$$
Secondly, we evaluate the case $y\to\infty$.

For the second limit on the right-hand side of

$$\lim_{y\to\infty} y^{\,R+I}\,(1+y)^{\,I-Q-R} = \exp\Big((R+I)\,\log\lim_{y\to\infty}\frac{y}{y+1}\Big)\times\exp\Big((2I-Q)\,\log\lim_{y\to\infty}(1+y)\Big),$$

we can say that it is zero if and only if $2I-Q$ is Hurwitz (i.e., the real part of every eigenvalue of $2I-Q$ is negative), which holds here because $\operatorname{Re}(z) > 2\max\{n\}+1$ for $z\in\Upsilon(Q)$. To see why this is true, the Spectral Mapping Theorem can be applied, which says that the eigenvalues of

$$\exp\Big((2I-Q)\,\log\lim_{y\to\infty}(1+y)\Big)$$

are of the form $\exp\big(\gamma\,\log(1+y)\big)$, where $\gamma$ is any eigenvalue of $2I-Q$. □
When the spectral conditions in Lemma 3 are satisfied, the self-adjoint form (18) can be expressed as

$$\frac{d}{dy}\Big[(y+y^{2})\,y^{R}\,(1+y)^{-(Q+R)}\,M'(y)\Big] + n\,\big(Q-(n+1)I\big)\,y^{R}\,(1+y)^{-(Q+R)}\,M(y) = \theta, \qquad (22)$$

where $0<y<\infty$, because $M_n^{(Q,R)}(y)$ is a solution of (18).
Multiplying (22) by $M_s^{(Q,R)}(y)$ gives

$$M_s^{(Q,R)}(y)\,\frac{d}{dy}\Big[(y+y^{2})\,y^{R}\,(1+y)^{-(Q+R)}\,\frac{d}{dy}M_n^{(Q,R)}(y)\Big] + n\,\big(Q-(n+1)I\big)\,y^{R}\,(1+y)^{-(Q+R)}\,M_n^{(Q,R)}(y)\,M_s^{(Q,R)}(y) = \theta \qquad (23)$$

and, interchanging $n$ and $s$,

$$M_n^{(Q,R)}(y)\,\frac{d}{dy}\Big[(y+y^{2})\,y^{R}\,(1+y)^{-(Q+R)}\,\frac{d}{dy}M_s^{(Q,R)}(y)\Big] + s\,\big(Q-(s+1)I\big)\,y^{R}\,(1+y)^{-(Q+R)}\,M_s^{(Q,R)}(y)\,M_n^{(Q,R)}(y) = \theta. \qquad (24)$$

Subtracting (24) from (23), and using the commutativity of the finite orthogonal M matrix polynomials,

$$M_s^{(Q,R)}(y)\,\frac{d}{dy}\Big[(y+y^{2})\,y^{R}\,(1+y)^{-(Q+R)}\,\frac{d}{dy}M_n^{(Q,R)}(y)\Big] - M_n^{(Q,R)}(y)\,\frac{d}{dy}\Big[(y+y^{2})\,y^{R}\,(1+y)^{-(Q+R)}\,\frac{d}{dy}M_s^{(Q,R)}(y)\Big] + (n-s)\,\big(Q-(n+s+1)I\big)\,y^{R}\,(1+y)^{-(Q+R)}\,M_n^{(Q,R)}(y)\,M_s^{(Q,R)}(y) = \theta, \quad 0<y<\infty. \qquad (25)$$
We know that

$$\frac{d}{dy}\Big[(y+y^{2})\,y^{R}\,(1+y)^{-(Q+R)}\Big(M_s^{(Q,R)}(y)\,\frac{d}{dy}M_n^{(Q,R)}(y) - M_n^{(Q,R)}(y)\,\frac{d}{dy}M_s^{(Q,R)}(y)\Big)\Big] = M_s^{(Q,R)}(y)\,\frac{d}{dy}\Big[(y+y^{2})\,y^{R}\,(1+y)^{-(Q+R)}\,\frac{d}{dy}M_n^{(Q,R)}(y)\Big] - M_n^{(Q,R)}(y)\,\frac{d}{dy}\Big[(y+y^{2})\,y^{R}\,(1+y)^{-(Q+R)}\,\frac{d}{dy}M_s^{(Q,R)}(y)\Big] \qquad (26)$$

can be easily verified, and thus (25) can be read as

$$\frac{d}{dy}\Big[(y+y^{2})\,y^{R}\,(1+y)^{-(Q+R)}\Big(M_s^{(Q,R)}(y)\,\frac{d}{dy}M_n^{(Q,R)}(y) - M_n^{(Q,R)}(y)\,\frac{d}{dy}M_s^{(Q,R)}(y)\Big)\Big] + (n-s)\,\big(Q-(n+s+1)I\big)\,y^{R}\,(1+y)^{-(Q+R)}\,M_n^{(Q,R)}(y)\,M_s^{(Q,R)}(y) = \theta. \qquad (27)$$

Now, defining $U(y) = M_s^{(Q,R)}(y)\,\frac{d}{dy}M_n^{(Q,R)}(y) - M_n^{(Q,R)}(y)\,\frac{d}{dy}M_s^{(Q,R)}(y)$ in (27), we integrate (27) over $(0,\infty)$. So,

$$(n-s)\,\big(Q-(n+s+1)I\big)\int_0^{\infty} y^{R}\,(1+y)^{-(Q+R)}\,M_n^{(Q,R)}(y)\,M_s^{(Q,R)}(y)\,dy = \lim_{y\to0^{+}}(y+y^{2})\,y^{R}\,(1+y)^{-(Q+R)}\,U(y) - \lim_{y\to\infty}(y+y^{2})\,y^{R}\,(1+y)^{-(Q+R)}\,U(y). \qquad (28)$$
With the application of Lemma 10, (28) implies that

$$\int_0^{\infty} y^{R}\,(1+y)^{-(Q+R)}\,M_n^{(Q,R)}(y)\,M_s^{(Q,R)}(y)\,dy = \theta,$$

since $n\neq s$ and $Q-(n+s+1)I$ is invertible.

For the final stage in the derivation of orthogonality, it is necessary to confirm that

$$\int_0^{\infty} y^{R}\,(1+y)^{-(Q+R)}\,\big[M_n^{(Q,R)}(y)\big]^{2}\,dy$$

is invertible for $n = 0,1,2,\ldots$.
$$\int_0^{\infty}\big[M_n^{(Q,R)}(y)\big]^{2}\,y^{R}\,(1+y)^{-(Q+R)}\,dy = \int_0^{\infty} M_n^{(Q,R)}(y)\,(-1)^n\,D^{n}\Big[y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\Big]\,dy \qquad (29)$$

holds by using the Rodrigues formula for $M_n^{(Q,R)}(y)$.
By Lemma 10,

$$\lim_{y\to\infty} D^{n-1}\Big[y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\Big]\,M_n^{(Q,R)}(y) = \theta,$$

$$\lim_{y\to0^{+}} D^{n-1}\Big[y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\Big]\,M_n^{(Q,R)}(y) = \theta,$$

$$\lim_{y\to\infty} D^{n-p}\Big[y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\Big]\,\frac{d^{p-1}}{dy^{p-1}}M_n^{(Q,R)}(y) = \theta$$

and

$$\lim_{y\to0^{+}} D^{n-p}\Big[y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\Big]\,\frac{d^{p-1}}{dy^{p-1}}M_n^{(Q,R)}(y) = \theta$$

for $1\le p\le n$.
With the help of the above limits, applying integration by parts to the integral on the right-hand side of (29) gives

$$(-1)^n\int_0^{\infty} M_n^{(Q,R)}(y)\,D^{n}\Big[y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\Big]\,dy = (-1)^n\,\Big[M_n^{(Q,R)}(y)\,D^{n-1}\big[y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\big]\Big]_{0}^{\infty} - (-1)^n\int_0^{\infty} D^{n-1}\big[y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\big]\,\frac{d}{dy}M_n^{(Q,R)}(y)\,dy = (-1)^{n+1}\int_0^{\infty} D^{n-1}\big[y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\big]\,\frac{d}{dy}M_n^{(Q,R)}(y)\,dy.$$
If we repeat this process $n-1$ more times, it produces

$$(-1)^n\int_0^{\infty} M_n^{(Q,R)}(y)\,D^{n}\Big[y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\Big]\,dy = \int_0^{\infty}\frac{d^{n}}{dy^{n}}M_n^{(Q,R)}(y)\;y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\,dy = n!\,\Gamma(Q-nI)\,\Gamma^{-1}(Q-2nI)\int_0^{\infty} y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\,dy \qquad (30)$$

by using the above limits each time.
Here, by Lemma 3, it can be shown that

$$\int_0^{\infty} y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\,dy = B\big(R+(n+1)I,\;Q-(2n+1)I\big) = \Gamma\big(R+(n+1)I\big)\,\Gamma\big(Q-(2n+1)I\big)\,\Gamma^{-1}\big(Q+R-nI\big) \qquad (31)$$

under the spectral conditions in Lemma 3.
Then, substituting (31) in (30) gives

$$(-1)^n\int_0^{\infty} M_n^{(Q,R)}(y)\,D^{n}\Big[y^{\,R+nI}\,(1+y)^{\,nI-Q-R}\Big]\,dy = n!\,\Gamma(Q-nI)\,\Gamma\big(R+(n+1)I\big)\,\Gamma^{-1}\big(Q+R-nI\big)\,\big(Q-(2n+1)I\big)^{-1},$$

and thus, $\int_0^{\infty} y^{R}\,(1+y)^{-(Q+R)}\,\big[M_n^{(Q,R)}(y)\big]^{2}\,dy$ is nonsingular.
Therefore, we obtain the result in the following Theorem.
Theorem 4.
Let $Q,R\in\mathbb{C}^{m\times m}$ satisfy the following spectral conditions

$$\operatorname{Re}(z) > 2\max\{n\}+1,\ z\in\Upsilon(Q), \quad\text{and}\quad \operatorname{Re}(w) > -1,\ w\in\Upsilon(R). \qquad (32)$$

For $n,s = 0,1,2,\ldots$, the orthogonality of the finite orthogonal M matrix polynomials $M_n^{(Q,R)}(y)$ is given by

$$\int_0^{\infty} y^{R}\,(1+y)^{-(Q+R)}\,M_n^{(Q,R)}(y)\,M_s^{(Q,R)}(y)\,dy = \begin{cases}\theta, & n\neq s,\\[2pt] n!\,\Gamma(Q-nI)\,\Gamma\big(R+(n+1)I\big)\,\Gamma^{-1}\big(Q+R-nI\big)\,\big(Q-(2n+1)I\big)^{-1}, & n = s.\end{cases}$$
Corollary 2.
The set of finite M matrix polynomials is orthogonal with respect to the matrix weight $E(y,Q,R) = y^{R}\,(1+y)^{-(Q+R)}$ over $(0,\infty)$.
Remark 5.
In the scalar case of the orthogonality relation, the choices $Q = \mu$, $\mu > 2n+1$, and $R = \lambda$, $\lambda > -1$ give the finite orthogonality for the scalar finite orthogonal M polynomials $M_n^{(\mu,\lambda)}(y)$.
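The sketch below checks Theorem 4 numerically for commuting diagonal parameter matrices, where every matrix involved is diagonal and the quadrature can be done entrywise; the values are illustrative, and `M_matrix()` is the helper from the sketch after Remark 3.

```python
# Sketch: entrywise quadrature check of the matrix orthogonality (Theorem 4).
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import gamma
# assumes M_matrix() from the sketch after Remark 3 is in scope

Q = np.diag([12.0, 14.0])
R = np.diag([0.5, 1.0])

def gram(n, s):
    vals = []
    for i in range(2):
        f = lambda y, i=i: (y ** R[i, i] * (1 + y) ** (-(Q[i, i] + R[i, i]))
                            * (M_matrix(n, Q, R, y) @ M_matrix(s, Q, R, y))[i, i])
        vals.append(quad(f, 0, np.inf)[0])
    return np.diag(vals)

print(np.round(gram(1, 2), 8))           # ~ zero matrix (n != s)
n = 2
closed = (factorial(n) * gamma(np.diag(Q) - n) * gamma(np.diag(R) + n + 1)
          / (gamma(np.diag(Q) + np.diag(R) - n) * (np.diag(Q) - 2 * n - 1)))
print(np.diag(gram(n, n)), closed)       # norms match the closed form
```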
Theorem 5.
The finite M matrix polynomials $M_n^{(Q,R)}(y)$ satisfy the following recurrence relation:

$$\frac{d}{dy}M_n^{(Q,R)}(y) = n\,\big(Q-(n+1)I\big)\,M_{n-1}^{(Q-2I,\,R+I)}(y). \qquad (33)$$
Corollary 3.
More generally, the recurrence relation

$$\frac{d^{p}}{dy^{p}}M_n^{(Q,R)}(y) = \big((n-p+1)I\big)_p\,\big(Q-(n+p)I\big)_p\,M_{n-p}^{(Q-2pI,\,R+pI)}(y), \quad 0\le p\le n, \qquad (34)$$

holds for the finite matrix polynomials $M_n^{(Q,R)}(y)$.
Remark 6.
In particular,

$$\frac{d^{n}}{dy^{n}}M_n^{(Q,R)}(y) = n!\,\Gamma(Q-nI)\,\Gamma^{-1}(Q-2nI).$$
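A finite-difference spot check of (33), with illustrative diagonal matrices, reusing `M_matrix()` from the sketch after Remark 3:

```python
# Sketch: derivative relation (33), d/dy M_n = n (Q - (n+1)I) M_{n-1}^{(Q-2I, R+I)}.
import numpy as np
# assumes M_matrix() from the sketch after Remark 3 is in scope

Q = np.diag([12.0, 14.0]); R = np.diag([0.5, 1.0]); I = np.eye(2)
n, y, h = 3, 0.6, 1e-6
lhs = (M_matrix(n, Q, R, y + h) - M_matrix(n, Q, R, y - h)) / (2 * h)
rhs = n * (Q - (n + 1) * I) @ M_matrix(n - 1, Q - 2 * I, R + I, y)
print(np.allclose(lhs, rhs, atol=1e-5))   # True
```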
Theorem 6.
The polynomials $M_n^{(Q,R)}(y)$ have the following forward shift operator

$$\Big(Qy - R - y(1+y)\,\frac{d}{dy}\Big)M_n^{(Q,R)}(y) = M_{n+1}^{(Q+2I,\,R-I)}(y).$$
Proof. 
Considering (33) and (37) results in the proof. □
Theorem 7.
The polynomials $M_n^{(Q,R)}(y)$ have the following backward shift operators

$$\Big((y+1)\,\frac{d}{dy} - n\Big)M_n^{(Q,R)}(y) = n\,\big(Q+R-nI\big)\,M_{n-1}^{(Q-I,\,R+I)}(y) \qquad (35)$$

and

$$\Big(y\,\frac{d}{dy} - n\Big)M_n^{(Q,R)}(y) = n\,\big(R+nI\big)\,M_{n-1}^{(Q-I,\,R)}(y). \qquad (36)$$
Proof. 
For the proofs of (35) and (36), we take the derivative of the finite matrix polynomials $M_n^{(Q,R)}(y)$ and make the necessary rearrangements. □
Theorem 8.
The finite M matrix polynomials have the recurrence relations

$$M_{n+1}^{(Q,R)}(y) = \big[(Q-2I)\,y - (R+I)\big]\,M_n^{(Q-2I,\,R+I)}(y) - n\,\big(Q-(n+3)I\big)\,y(1+y)\,M_{n-1}^{(Q-4I,\,R+2I)}(y), \qquad (37)$$

$$M_{n+1}^{(Q,R)}(y) = \big(Q-(2n+2)I\big)\,y\,M_n^{(Q-I,\,R+I)}(y) - \big(R+(n+1)I\big)\,M_n^{(Q,R)}(y) \qquad (38)$$

and

$$\frac{d}{dy}M_n^{(Q,R)}(y) = n\,\big(Q+R-nI\big)\,M_{n-1}^{(Q-I,\,R+I)}(y) - n\,\big(R+nI\big)\,M_{n-1}^{(Q-I,\,R)}(y). \qquad (39)$$
Proof. 
Using (34) in the matrix differential Equation (13), we obtain (37).

On the other hand, we prove (38) by taking $C_1\to -nI$, $C_2\to (n+1)I-Q$, $C_3\to R+I$ and $z\to -y$ in the equality [42]

$$F(C_1,C_2;C_3;z) = F(C_1-I,\,C_2+I;\,C_3;\,z) + z\,C_3^{-1}\,\big(C_2-C_1+I\big)\,F(C_1,\,C_2+I;\,C_3+I;\,z),$$

where $F(C_1,C_2;C_3;z)$ is the hypergeometric matrix function for $C_1,C_2,C_3\in\mathbb{C}^{m\times m}$, $C_1$ and $C_2$ are commutative, and $C_1-I$, $C_2+kI$, $C_3+kI$ are invertible for all integers $k\ge0$.
As a consequence of (35) and (36), we have (39). □
Now, we obtain the three-term recurrence relation from the orthogonality of the finite M matrix polynomials defined in (11), where the leading coefficient of $M_n^{(Q,R)}(y)$ is an invertible matrix for $n\ge1$, and the parameter matrices $Q$ and $R$ are assumed to satisfy the spectral conditions (32).
By Theorem 2.2 in [19], any matrix polynomial of degree $n$ can be represented uniquely in the form

$$T(y) = \sum_{p=0}^{n} A_p\,M_p^{(Q,R)}(y), \quad A_p\in\mathbb{C}^{m\times m}. \qquad (40)$$
So, using the orthogonality relation for the finite matrix polynomials $M_n^{(Q,R)}(y)$ and (40), if $J(y)$ is a matrix polynomial of degree strictly less than $n$, then

$$\int_0^{\infty} J(y)\,y^{R}\,(1+y)^{-(R+Q)}\,M_n^{(Q,R)}(y)\,dy = \theta.$$
It can be said that $y\,M_n^{(Q,R)}(y)$ is a matrix polynomial of degree $n+1$, for $n\ge0$. Thus, by (40),

$$y\,M_n^{(Q,R)}(y) = \sum_{p=0}^{n+1} A_p\,M_p^{(Q,R)}(y)$$

for some coefficients $A_p$ in $\mathbb{C}^{m\times m}$.
Using the orthogonality relation of $M_n^{(Q,R)}(y)$, for $p = 0,1,2,\ldots,n+1$,

$$\int_0^{\infty} y\,M_n^{(Q,R)}(y)\,y^{R}\,(1+y)^{-(Q+R)}\,M_p^{(Q,R)}(y)\,dy = A_p\,H_p,$$

where $H_p$ denotes the invertible norm matrix given by Theorem 4 for $n = s = p$, so the coefficient matrices $A_p$ can be determined. Consequently,

$$\int_0^{\infty} y\,M_n^{(Q,R)}(y)\,y^{R}\,(1+y)^{-(Q+R)}\,M_p^{(Q,R)}(y)\,dy = \theta, \quad p+1<n,$$
so that the three-term recurrence relation

$$y\,M_n^{(Q,R)}(y) = A_{n-1}\,M_{n-1}^{(Q,R)}(y) + A_n\,M_n^{(Q,R)}(y) + A_{n+1}\,M_{n+1}^{(Q,R)}(y) \qquad (41)$$

holds for the finite M matrix polynomials $M_n^{(Q,R)}(y)$.
Comparing the powers of $y$ on both sides of (41), the recurrence coefficient matrices $A_{n-1}$, $A_n$, and $A_{n+1}$ are obtained as follows:

$$A_{n+1} = \big(Q-(n+1)I\big)\,\Big[\big(Q-(2n+2)I\big)_2\Big]^{-1}, \qquad A_n = \Big[Q\,(R+nI) + (n+1)\,\big(Q-2nI\big)\Big]\,\big(Q-(2n+2)I\big)^{-1}\,\big(Q-2nI\big)^{-1}, \qquad A_{n-1} = n\,\big(Q+R-nI\big)\,\big(R+nI\big)\,\Big[\big(Q-(2n+1)I\big)_2\Big]^{-1}. \qquad (42)$$
These results give us the following theorem.
Theorem 9.
Let $Q$ and $R$ in $\mathbb{C}^{m\times m}$ satisfy the spectral conditions (32). Then, the finite orthogonal M matrix polynomials $\{M_n^{(Q,R)}(y)\}_{n=0}^{\infty}$ defined in (11) satisfy the three-term matrix recurrence relation

$$M_{-1}^{(Q,R)}(y) = \theta, \qquad M_{0}^{(Q,R)}(y) = I, \qquad A_{n+1}\,M_{n+1}^{(Q,R)}(y) = \big(y\,I - A_n\big)\,M_n^{(Q,R)}(y) - A_{n-1}\,M_{n-1}^{(Q,R)}(y), \qquad (43)$$

in which $A_{n-1}$, $A_n$, and $A_{n+1}$ in $\mathbb{C}^{m\times m}$ are given by (42). In (43), $A_{n+1}$ is nonsingular for $n\ge0$.
Remark 7.
When the matrix order $m$ is 1 in (42) and (43), the relation (41) reduces to the three-term relation for the scalar case in [33].
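The recurrence (43) with the coefficients (42) can be spot-checked numerically; the sketch below does so at $n = 2$ for illustrative diagonal matrices, reusing `poch_m()` and `M_matrix()` from the sketch after Remark 3.

```python
# Sketch: three-term recurrence (43) with the coefficient matrices (42).
import numpy as np
# assumes poch_m() and M_matrix() from the sketch after Remark 3 are in scope

Q = np.diag([12.0, 14.0]); R = np.diag([0.5, 1.0]); I = np.eye(2)
n, y = 2, 0.8

A_next = (Q - (n + 1) * I) @ np.linalg.inv(poch_m(Q - (2 * n + 2) * I, 2))
A_same = ((Q @ (R + n * I) + (n + 1) * (Q - 2 * n * I))
          @ np.linalg.inv(Q - (2 * n + 2) * I) @ np.linalg.inv(Q - 2 * n * I))
A_prev = (n * (Q + R - n * I) @ (R + n * I)
          @ np.linalg.inv(poch_m(Q - (2 * n + 1) * I, 2)))

lhs = A_next @ M_matrix(n + 1, Q, R, y)
rhs = (y * I - A_same) @ M_matrix(n, Q, R, y) - A_prev @ M_matrix(n - 1, Q, R, y)
print(np.allclose(lhs, rhs))   # True
```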
Finally, we can give the following result, the generating function.
Theorem 10.
The finite orthogonal M matrix polynomials have the generating function

$$\sum_{n=0}^{\infty}(-1)^n\,(I-Q)_n\,\big[(R+I)_n\big]^{-1}\,M_n^{(Q,R)}(y)\,\frac{t^{n}}{n!} = (1-t)^{Q-I}\,F\Big(\frac{I-Q}{2},\;\frac{2I-Q}{2};\;R+I;\;\frac{4ty}{(1-t)^{2}}\Big), \qquad (44)$$

where the matrices $Q$ and $R$ satisfy the spectral conditions (32).
Proof. 
In (44), using definition (11) and the equalities

$$\Gamma\big(R+(n+1)I\big)\,\Gamma^{-1}\big(R+(p+1)I\big) = (R+I)_n\,\big[(R+I)_p\big]^{-1}$$

and

$$\Gamma\big(Q-(n+p)I\big) = (-1)^p\,\Gamma(Q-nI)\,\Big[\big((n+1)I-Q\big)_p\Big]^{-1},$$

we get

$$\sum_{n=0}^{\infty}(-1)^n\,(I-Q)_n\,\big[(R+I)_n\big]^{-1}\,M_n^{(Q,R)}(y)\,\frac{t^{n}}{n!} = \sum_{n=0}^{\infty}\sum_{p=0}^{n}(I-Q)_n\,\big((n+1)I-Q\big)_p\,\big[(R+I)_p\big]^{-1}\,\frac{y^{p}\,t^{n}}{(n-p)!\,p!}. \qquad (45)$$

If the summation index is shifted ($n\to n+p$) and it is taken into account that

$$(I-Q)_n\,\big((n+1)I-Q\big)_p = (I-Q)_{n+p}$$

and

$$(I-Q)_{2p}\,\big((2p+1)I-Q\big)_n = (I-Q)_{n+2p}$$

for $0\le p\le n$, then (45) becomes

$$\sum_{n=0}^{\infty}(-1)^n\,(I-Q)_n\,\big[(R+I)_n\big]^{-1}\,M_n^{(Q,R)}(y)\,\frac{t^{n}}{n!} = \sum_{n=0}^{\infty}\sum_{p=0}^{\infty}(I-Q)_{n+2p}\,\big[(R+I)_p\big]^{-1}\,\frac{y^{p}\,t^{n+p}}{n!\,p!} = (1-t)^{Q-I}\sum_{p=0}^{\infty}\frac{(I-Q)_{2p}}{p!}\,\big[(R+I)_p\big]^{-1}\,\Big(\frac{ty}{(1-t)^{2}}\Big)^{p}.$$

Finally, with the help of the duplication formula $(G)_{2p} = 2^{2p}\big(\tfrac{G}{2}\big)_p\big(\tfrac{G+I}{2}\big)_p$ and the necessary arrangements, we reach (44). □
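In the scalar case, (44) can be spot-checked directly: with the integer choice $\mu = 9$ the left-hand series terminates, since $(1-\mu)_n = 0$ for $n > \mu - 1$, and the right-hand ${}_2F_1$ also terminates. The sketch below (illustrative values; `poch()` and `M()` from the sketch in Section 1) compares both sides.

```python
# Sketch: scalar spot check of the generating function (44).
from math import factorial
from scipy.special import hyp2f1
# assumes poch() and M() from the sketch in Section 1 are in scope

mu, lam, t, y = 9.0, 0.5, 0.2, 0.7
lhs = sum((-1) ** n * poch(1 - mu, n) / poch(lam + 1, n)
          * M(n, mu, lam, y) * t ** n / factorial(n) for n in range(9))
rhs = (1 - t) ** (mu - 1) * hyp2f1((1 - mu) / 2, (2 - mu) / 2, lam + 1,
                                   4 * t * y / (1 - t) ** 2)
print(lhs, rhs)   # agree up to rounding
```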
Now, in order to obtain integral representations for the finite orthogonal M matrix polynomials, we recall the following theorem.
Theorem 11
([42]). Let $C_1$, $C_2$, $C_3$, and $T$ be matrices in $\mathbb{C}^{m\times m}$, such that $\operatorname{Re}(z) > 0$ for $z\in\Upsilon(C_3)$, $\operatorname{Re}(w) > 0$ for $w\in\Upsilon(T)$, $C_3+T+kI$ is invertible for $k\in\mathbb{N}$, and these matrices are commutative. Then

$$y^{\,C_3+T-I}\,F(C_1,C_2;C_3+T;y) = \Gamma(C_3+T)\,\Gamma^{-1}(C_3)\,\Gamma^{-1}(T)\int_0^{y}(y-x)^{\,T-I}\,x^{\,C_3-I}\,F(C_1,C_2;C_3;x)\,dx.$$
In the light of the theorem above, we can give the following results.
Theorem 12.
Let $Q,R,T\in\mathbb{C}^{m\times m}$ be matrices such that the corresponding eigenvalues satisfy the conditions $\operatorname{Re}(z) > 2\max\{n\}+1$, $z\in\Upsilon(Q)$, $\operatorname{Re}(w) > -1$, $w\in\Upsilon(R)$, and $\operatorname{Re}(u) > 0$, $u\in\Upsilon(T)$. Also, let the matrices $Q$, $R$, and $T$ be commutative. The following integral representations are satisfied by the finite orthogonal M matrix polynomials $M_n^{(Q,R)}(y)$:

$$y^{\,R+T}\,M_n^{(Q,\,R+T)}(y)\,\big[M_n^{(Q,\,R+T)}(0)\big]^{-1} = \Gamma(R+T+I)\,\Gamma^{-1}(R+I)\,\Gamma^{-1}(T)\int_0^{y}(y-t)^{\,T-I}\,t^{\,R}\,M_n^{(Q,R)}(t)\,\big[M_n^{(Q,R)}(0)\big]^{-1}\,dt \qquad (46)$$

and

$$y^{\,R+T}\,(1+y)^{-(R+(n+1)I)}\,M_n^{(Q-T,\,R+T)}(y)\,\big[M_n^{(Q-T,\,R+T)}(0)\big]^{-1} = \Gamma(R+T+I)\,\Gamma^{-1}(R+I)\,\Gamma^{-1}(T)\int_0^{y}(y-t)^{\,T-I}\,t^{\,R}\,(1+t)^{-(R+T+(n+1)I)}\,M_n^{(Q,R)}(t)\,\big[M_n^{(Q,R)}(0)\big]^{-1}\,dt. \qquad (47)$$
Proof. 
Taking $C_1\to -nI$, $C_2\to (n+1)I-Q$, $C_3\to R+I$ and $y\to -y$, $x\to -t$ in Theorem 11, together with (12), completes the proof of (46). To prove (47), we substitute $C_1\to -nI$, $C_2\to Q+R-nI$, $C_3\to R+I$ and $y\to \frac{y}{1+y}$, $x\to \frac{t}{1+t}$, together with (20). □
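A scalar ($m = 1$) spot check of the integral representation (46), with illustrative parameters and the helper `M()` from the sketch in Section 1:

```python
# Sketch: scalar check of (46); Gamma factors reduce to scalar Gammas.
from math import gamma
from scipy.integrate import quad
# assumes M() from the sketch in Section 1 is in scope

q, r, tau, n, yv = 12.0, 0.5, 1.5, 2, 0.9
lhs = yv ** (r + tau) * M(n, q, r + tau, yv) / M(n, q, r + tau, 0)
integrand = lambda t: ((yv - t) ** (tau - 1) * t ** r
                       * M(n, q, r, t) / M(n, q, r, 0))
rhs = gamma(r + tau + 1) / (gamma(r + 1) * gamma(tau)) * quad(integrand, 0, yv)[0]
print(lhs, rhs)   # agree up to quadrature error
```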

4. Conclusions

For the first time, we introduce a finite set of orthogonal matrix polynomials in this paper. The structure defined by (11) leads to several important properties of the finite M matrix polynomials $M_n^{(Q,R)}(y)$. First, it is shown that the polynomials $M_n^{(Q,R)}(y)$ satisfy the second-order differential Equation (13). A Rodrigues formula for the polynomials $M_n^{(Q,R)}(y)$ is then derived with the help of the commutativity property $QR = RQ$. We construct the finite orthogonality in the sense of Theorem 4 and subsequently introduce some matrix recurrence relations, including a three-term recurrence relation, for the finite M matrix polynomials.
Similar to the scalar case, we establish a relation between these finite orthogonal matrix polynomials and the Jacobi matrix polynomials in the following corollary.
Corollary 4.
Using the explicit representation (12), the relationship

$$M_n^{\left(-(A+B),\,A\right)}\Big(\frac{y-1}{2}\Big) = (-1)^n\,F\Big((n+1)I+B+A,\;-nI;\;I+A;\;\frac{1-y}{2}\Big)\,\Gamma\big((n+1)I+A\big)\,\Gamma^{-1}(I+A) = n!\,P_n^{(B,A)}(y) = (-1)^n\,n!\,P_n^{(A,B)}(-y)$$

holds, where $P_n^{(A,B)}(y)$ is the Jacobi matrix polynomial defined in [6].
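In the scalar case $m = 1$, the first equality of Corollary 4 can be spot-checked with scipy's ${}_2F_1$; the parameter values below are illustrative, and `M()` is the scalar helper from the sketch in Section 1.

```python
# Sketch: scalar check of the first equality in Corollary 4.
from scipy.special import hyp2f1, gamma
# assumes M() from the sketch in Section 1 is in scope

a, b, n, yv = 0.5, 1.5, 3, 0.4
lhs = M(n, -(a + b), a, (yv - 1) / 2)
rhs = ((-1) ** n * hyp2f1(n + 1 + a + b, -n, 1 + a, (1 - yv) / 2)
       * gamma(n + 1 + a) / gamma(1 + a))
print(lhs, rhs)   # agree up to rounding
```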
Considering the usage areas and real-life applications of Jacobi matrix polynomials, the connection between the finite matrix polynomials constructed in this study and the Jacobi matrix polynomials becomes important.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares that they have no competing interests.

References

  1. Šebek, M.; Hromčík, M. Polynomial design methods. Int. J. Robust Nonlinear Control 2007, 17, 679–681.
  2. Antsaklis, P.J.; Gao, Z. Polynomial and rational matrix interpolation: Theory and control applications. Int. J. Control 1993, 58, 349–404.
  3. Mirzoev, K.A.; Konechnaya, N.N.; Safonova, T.A.; Tagirova, R.N. Generalized Jacobi Matrices and Spectral Analysis of Differential Operators with Polynomial Coefficients. J. Math. Sci. 2021, 252, 213–224.
  4. Altın, A.; Çekim, B. Generating matrix functions for Chebyshev matrix polynomials of the second kind. Hacet. J. Math. Stat. 2012, 41, 25–32.
  5. Çekim, B. New kinds of matrix polynomials. Miskolc Math. Notes 2013, 14, 817–826.
  6. Defez, E.; Jódar, L.; Law, A. Jacobi matrix differential equation, polynomial solutions and their properties. Comput. Math. Appl. 2004, 48, 789–803.
  7. Defez, E.; Jódar, L. Chebyshev matrix polynomials and second order matrix differential equations. Util. Math. 2002, 61, 107–123.
  8. Defez, E.; Jódar, L.; Law, A.; Ponsoda, E. Three-term recurrences and matrix orthogonal polynomials. Util. Math. 2000, 57, 129–146.
  9. Duran, A.J. On orthogonal polynomials with respect to a positive definite matrix of measures. Can. J. Math. 1995, 47, 88–112.
  10. Duran, A.J.; Van Assche, W. Orthogonal matrix polynomials and higher order recurrence relations. Linear Algebra Appl. 1995, 219, 261–280.
  11. Duran, A.J.; López-Rodriguez, P. Orthogonal matrix polynomials: Zeros and Blumenthal's theorem. J. Approx. Theory 1996, 84, 96–118.
  12. Draux, A.; Jokung-Nguena, O. Orthogonal polynomials in a non-commutative algebra. Non-normal case. IMACS Ann. Comput. Appl. Maths. 1991, 9, 237–242.
  13. Geronimo, J.S. Scattering theory and matrix orthogonal polynomials on the real line. Circuit Syst. Signal Process. 1982, 1, 471–494.
  14. Jódar, L.; Company, R.; Navarro, E. Laguerre matrix polynomials and system of second-order differential equations. Appl. Numer. Math. 1994, 15, 53–63.
  15. Jódar, L.; Company, R.; Ponsoda, E. Orthogonal matrix polynomials and systems of second order differential equations. Differ. Equ. Dyn. Syst. 1996, 3, 269–288.
  16. Jódar, L.; Cortés, J.C. On the hypergeometric matrix function. J. Comput. Appl. Math. 1998, 99, 205–217.
  17. Jódar, L.; Company, R. Hermite matrix polynomials and second order matrix differential equations. J. Approx. Theory Appl. 1996, 12, 20–30.
  18. Jódar, L.; Defez, E.; Ponsoda, E. Matrix quadrature and orthogonal matrix polynomials. Congr. Numer. 1995, 106, 141–153.
  19. Jódar, L.; Defez, E.; Ponsoda, E. Orthogonal matrix polynomials with respect to linear matrix moment functionals: Theory and applications. J. Approx. Theory Appl. 1996, 12, 96–115.
  20. Sastre, J.; Defez, E.; Jódar, L. Laguerre matrix polynomial series expansion: Theory and computer applications. Math. Comput. Model. 2006, 44, 1025–1043.
  21. Sayyed, K.A.M.; Metwally, M.S.; Batahan, R.S. Gegenbauer matrix polynomials and second order matrix differential equations. Divulg. Mat. 2004, 12, 101–115.
  22. Sinap, A.; Van Assche, W. Polynomial interpolation and Gaussian quadrature for matrix valued functions. Linear Algebra Appl. 1994, 207, 71–114.
  23. Sinap, A.; Van Assche, W. Orthogonal matrix polynomials and applications. J. Comput. Appl. Math. 1996, 66, 27–52.
  24. Sri Lakshmi, V.; Srimannarayana, N.; Satyanarayana, B.; Chakradhar Rao, M.V.; Radha Madhavi, M.; Pavan Kumar, D.K. Jacobi matrix polynomial and its integral results. Commun. Appl. Nonlinear Anal. 2025, 32, 253–262.
  25. Sri Lakshmi, V.; Srimannarayana, N.; Satyanarayana, B.; Radha Madhavi, M.; Ramesh, D. On modified Jacobi matrix polynomials. Int. J. Adv. Sci. Technol. 2020, 29, 924–932.
  26. Taşdelen, F.; Çekim, B.; Aktaş, R. On a multivariable extension of Jacobi matrix polynomials. Comput. Math. Appl. 2011, 61, 2412–2423.
  27. Varma, S. Some Extensions of Orthogonal Polynomials. Ph.D. Thesis, Ankara University, Ankara, Turkey, 2013.
  28. Stefan, A.; Welters, A. A short proof of the symmetric determinantal representation of polynomials. Linear Algebra Appl. 2021, 627, 80–93.
  29. Defez, E.; Jódar, L. Some applications of the Hermite matrix polynomials series expansions. J. Comput. Appl. Math. 1998, 99, 105–117.
  30. James, A.T. Special functions of matrix and single argument in statistics. In Theory and Applications of Special Functions; Askey, R.A., Ed.; Academic Press: Cambridge, MA, USA, 1975; pp. 497–520.
  31. Defez, E.; Hervás, A.; Law, A.; Villanueva-Oller, J.; Villanueva, R.J. Progressive transmission of images: PC-based computations, using orthogonal matrix polynomials. Math. Comput. Model. 2000, 32, 1125–1140.
  32. Defez, E.; Law, A.; Villanueva-Oller, J.; Villanueva, R.J. Matrix cubic splines for progressive 3D imaging. J. Math. Imaging Vis. 2002, 17, 41–53.
  33. Masjed-Jamei, M. Three finite classes of hypergeometric orthogonal polynomials and their application in functions approximation. Integr. Trans. Spec. Funct. 2002, 13, 169–190.
  34. Wood, J.D.; Tougaw, D. Matrix Multiplication Using Quantum-Dot Cellular Automata to Implement Conventional Microelectronics. IEEE Trans. Nanotechnol. 2011, 10, 1036–1042.
  35. Illera, S.; Garcia-Castello, N.; Prades, J.D.; Cirera, A. A transfer Hamiltonian approach for an arbitrary quantum dot array in the self-consistent field regime. J. Appl. Phys. 2012, 112, 093701.
  36. Hiramoto, K.; Grigoriadis, K.M. Integrated design of structural and control systems with a homotopy like iterative method. Int. J. Control 2006, 79, 1062–1073.
  37. Dunford, N.; Schwartz, J. Linear Operators; Interscience: New York, NY, USA, 1963; Volume I.
  38. Jódar, L.; Cortés, J.C. Some properties of Gamma and Beta matrix functions. Appl. Math. Lett. 1998, 11, 89–93.
  39. Golub, G.; Van Loan, C.F. Matrix Computations; Johns Hopkins University Press: Baltimore, MD, USA, 1995.
  40. Hille, E. Lectures on Ordinary Differential Equations; Addison-Wesley: New York, NY, USA, 1969.
  41. Jódar, L.; Cortés, J.C. Closed form solution of the hypergeometric matrix differential equation. Math. Comput. Model. 2000, 32, 1017–1028.
  42. Çekim, B.; Altın, A.; Aktaş, R. Some new results for Jacobi matrix polynomials. Filomat 2013, 27, 713–719.