1. Introduction
Matrix polynomials are powerful tools that facilitate mathematical analysis and play an important role in applied fields such as physics, engineering, and economics, where they are used to model and solve systems. Their significance in both theoretical and applied mathematics stems from several factors. Key areas of application include linear algebra, linear differential equations, control theory [1], system theory [2], numerical computation, spectral theory [3], eigenvalue problems, graph theory, and network analysis.
Stability analysis, system response, and transfer functions are often expressed using matrix polynomials. For example, matrix polynomials are fundamental components in methods such as Taylor series, Padé approximation, or Krylov subspaces for computing the exponential of a matrix. They are also used to simplify and analyze functional expressions of matrices through the Cayley–Hamilton Theorem and play a key role in topics such as diagonalization.
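As a small illustration of the Cayley–Hamilton Theorem mentioned above, the following Python sketch (using NumPy; the matrix A is an arbitrary example, not one taken from this paper) evaluates a matrix in its own characteristic polynomial and recovers the zero matrix numerically.

```python
import numpy as np

# An arbitrary 3x3 example matrix (hypothetical, for illustration only).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

# Coefficients of the characteristic polynomial det(xI - A),
# highest degree first: x^3 + c1 x^2 + c2 x + c3.
coeffs = np.poly(A)

# Evaluate the matrix polynomial p(A) by Horner's rule.
p_of_A = np.zeros_like(A)
for c in coeffs:
    p_of_A = p_of_A @ A + c * np.eye(3)

# Cayley-Hamilton: p(A) is (numerically) the zero matrix.
print(np.linalg.norm(p_of_A))  # ~1e-13
```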
Matrix polynomials and functions have been studied by many mathematicians over the years [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. Some new types and families of matrix polynomials have been introduced in [3,5,6,7,9,12,14,15,16,17,19,20,21,25,26,27].
In [8], pseudo-orthogonality for matrix polynomials and the importance of a nonsingular leading coefficient matrix are discussed. In [10], motivated by the idea that any polynomial system satisfying a three-term relation is orthogonal, Durán showed that certain higher-order recurrence relations can be written in terms of orthonormal matrix polynomials. In [11], a quadrature formula and some basic properties of the zeros of a sequence of orthogonal matrix polynomials were studied. In [12], the authors defined orthogonal matrix polynomials with respect to a right matrix moment functional. In addition, some related properties have been discussed in [4,18,22,23,24,27]. In 1996, Sinap and Van Assche [23] proposed that scalar orthogonal polynomials with Sobolev inner products involving a finite number of derivatives can be studied using matrix orthogonal polynomials on the real line. Furthermore, in [28], the authors proved that every real multivariate polynomial has a symmetric determinantal representation. Such representations allow polynomials to be expressed in terms of matrices and are used in applications such as semidefinite programming. The symmetry or asymmetry of a matrix polynomial typically depends on the structure of the matrix and the way the polynomial is defined. In general, matrix polynomials are asymmetric, since a matrix polynomial is symmetric if and only if the matrix itself is symmetric. However, symmetric matrix polynomials can also be constructed from asymmetric matrices.
Recently, Fourier series expansions [29], interpolation [2], quadrature [18,22], group representation theory [30], medical imaging [31], splines [32], scattering theory [13], and statistics have constituted the main application areas of matrix polynomials.
In general, matrix polynomials can be studied in much the same way as scalar polynomials. However, matrices do not, in general, commute: $GH \neq HG$ for arbitrary matrices $G$ and $H$ in $\mathbb{C}^{m \times m}$. When examining this case, it is important to be able to reduce matrix generalizations to the appropriate scalar cases, where the relevant matrices are of order 1.
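A minimal NumPy check of this non-commutativity (with two arbitrary matrices, not taken from the paper):

```python
import numpy as np

G = np.array([[1.0, 2.0],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0],
              [3.0, 1.0]])

# In general GH != HG, so identities for scalar polynomials that
# rely on commutativity do not carry over verbatim to matrices.
print(np.allclose(G @ H, H @ G))  # False
```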
The aim of this study is to derive a finite orthogonal matrix polynomial family under suitable conditions for the first time in the literature and to establish its relationships with known polynomials. In general, there are two types of orthogonality for polynomials: infinite orthogonality and finite orthogonality, with the latter being subject to certain parametric constraints. In the infinite case, the nonnegative integer n (the degree of the polynomial) is unrestricted and can increase indefinitely, whereas in the finite case, constraints must be imposed on n. Some parametric restrictions must be introduced in order to obtain such finite orthogonality for matrix polynomials, which will be defined in Section 3. In this way, the concept of finite-type orthogonality is transferred to matrix polynomial theory for the first time in this work. In order to achieve our goal, we draw inspiration from the finite orthogonal polynomials $M_n^{(p,q)}(x)$.
In the scalar case, the polynomials $M_n^{(p,q)}(x)$ form a finite orthogonal polynomial set for $n \le N < (p-1)/2$ and $q > -1$ [33]. This set is defined by the Rodrigues formula (1). From (1), the explicit formula is derived, and it is shown that the polynomials satisfy the differential equation (2). Then, the orthogonality relation is obtained with the help of the self-adjoint form of (2), so that a three-term recurrence relation can be introduced.
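To see why the degree must be restricted in the finite case, one can inspect the moments of the weight commonly associated with the scalar M class in [33], which we take here to be $w(x) = x^{q}(1+x)^{-(p+q)}$ on $(0,\infty)$ (our reading of [33]; the exact normalization is an assumption). The k-th moment exists only when $k < p-1$, which is what forces $n \le N < (p-1)/2$. A short SymPy sketch with concrete parameters:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
p, q = 7, 1  # concrete example parameters (hypothetical choice)

w = x**q * (1 + x)**(-(p + q))  # assumed scalar M-class weight from [33]

# The k-th moment converges iff the tail exponent k - p < -1, i.e. k < p - 1 = 6.
for k in range(5, 8):
    m = sp.integrate(x**k * w, (x, 0, sp.oo))
    print(k, m)
# k = 5 converges; k = 6 and 7 diverge.  Orthogonality of x^s against x^t
# thus needs s + t < p - 1, so only degrees n <= N < (p-1)/2 are admissible.
```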
In particular, under a suitable change of variable and parameters, this family is seen to be closely related to the classical Jacobi polynomials.
In this paper, Section 2 presents some basic definitions and theorems from the theory of matrix polynomials. The main results, including the definition of the finite orthogonal M matrix polynomials, are given in Section 3. Matrix differential equations, the Rodrigues formula, the three-term recurrence relation, several recurrence relations, forward and backward shift operators, the generating function, and integral representations are also derived. Moreover, the Conclusion presents a connection between the Jacobi matrix polynomials and the finite orthogonal M matrix polynomials.
Due to the importance of special matrix polynomial forms in various fields of mathematics, physics, and engineering, establishing a relationship between the known Jacobi matrix polynomials and this novel finite class of matrix polynomials is highly significant. While the real-life applications of Jacobi matrix polynomials lie primarily in technical and mathematical domains, methods based on these structures also provide solutions to many practical problems. Key areas of application include numerical analysis and differential equations, quantum mechanics [34,35], vibration analysis [36] and mechanical systems, as well as data science and machine learning.
Jacobi matrix polynomials are related to families of orthogonal polynomials such as the Legendre, Chebyshev, and Jacobi polynomials. These polynomials are especially useful in numerical solution methods. Gauss–Jacobi integration schemes, based on Jacobi polynomials, offer high accuracy for numerical integration in spectral methods (e.g., fluid mechanics and heat transfer) and are widely used in integral calculations (e.g., Gauss quadrature). Real-life applications include weather forecasting (via the Navier–Stokes equations), heat distribution modeling (e.g., in engine blocks), and electromagnetic field simulations (e.g., antenna design). In quantum mechanics, Jacobi matrix polynomials, which are especially useful for representing Hamiltonian operators in matrix form, often arise in eigenvalue problems; the properties of these matrices are used to find the energies of physical systems. Real-life applications here [34,35] include quantum dot and quantum wire simulations and the modeling of nanoscale devices. Jacobi matrices are also used in engineering problems, particularly for determining vibration modes and frequencies, as in the analysis of mass-spring systems; examples include the analysis of suspension systems in the automotive industry, the modeling of aircraft wing vibrations, and the dynamic analysis of bridges. Although Jacobi matrices rarely appear directly in data science problems, the mathematical foundations of many algorithms rely on such matrix structures; they are particularly useful in eigenvalue analysis, dimensionality reduction, and kernel methods. Consequently, Jacobi matrix polynomials are not used directly as end products, but they play a critical role in technical fields such as engineering, numerical computation, and physics. Real-life problems are addressed through mathematical models that incorporate these polynomials and matrix structures.
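As a concrete instance of the Gauss–Jacobi schemes mentioned above, the following SciPy sketch integrates a smooth function against the Jacobi weight $(1-x)^{\alpha}(1+x)^{\beta}$ on $[-1,1]$; the integrand and parameter values are arbitrary illustrations, not quantities from this paper.

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

alpha, beta = 1.0, 0.5  # arbitrary Jacobi parameters (> -1)
nodes, weights = roots_jacobi(8, alpha, beta)  # 8-point Gauss-Jacobi rule

f = np.cos  # arbitrary smooth integrand

# Gauss-Jacobi approximation of int_{-1}^{1} (1-x)^a (1+x)^b f(x) dx.
approx = np.dot(weights, f(nodes))

# Reference value by adaptive quadrature, for comparison.
ref, _ = quad(lambda x: (1 - x)**alpha * (1 + x)**beta * np.cos(x), -1, 1)
print(approx, ref)  # agree to many digits for smooth f
```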
3. Finite Orthogonal Matrix Polynomials
In this section, the set of finite orthogonal M matrix polynomials is introduced. The corresponding matrix differential equation, finite orthogonality relation, several recurrence relations, Rodrigues formula, and generating functions for the family are also derived.
Finite orthogonal M matrix polynomials possess properties that naturally generalize those of the scalar finite orthogonal M polynomials and are well suited to matrix calculus.
Definition 7. For any integer $n \geq 0$, the M matrix polynomial is defined by (11), where Q and R are parameter matrices whose eigenvalues z and t all satisfy the stated spectral conditions.

Remark 3. Note that the polynomial can equivalently be written as (12), and that in the scalar case $m = 1$, with suitable choices of the scalar parameters, it coincides with the scalar finite M polynomial [33]. Here, (9) denotes the hypergeometric matrix function.

Proof. In definition (11), following from (4), (5), and (6), we obtain (12). □
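Since Definition 7 is built on the hypergeometric matrix function (9), a minimal sketch may help fix ideas. Following the usual construction in the matrix polynomial literature, with matrix Pochhammer symbols $(A)_k = A(A+I)\cdots(A+(k-1)I)$ and $(A)_0 = I$, the series $\sum_{k \ge 0} (A)_k (B)_k [(C)_k]^{-1} x^k / k!$ terminates when $A = -nI$. The code below is a generic illustration of that series, not the paper's exact definition (which involves the parameter matrices Q and R under conditions we restate only loosely); the matrices B and C are hypothetical.

```python
import numpy as np
from math import factorial

def poch(A, k):
    """Matrix Pochhammer symbol (A)_k = A (A+I) ... (A+(k-1)I)."""
    m = A.shape[0]
    P = np.eye(m)
    for j in range(k):
        P = P @ (A + j * np.eye(m))
    return P

def hyp2f1_matrix(A, B, C, x, terms=50):
    """Truncated hypergeometric matrix series sum_k (A)_k (B)_k (C)_k^{-1} x^k / k!.
    Assumes (C)_k is invertible for all k and that the series terminates
    or converges within `terms` terms."""
    m = A.shape[0]
    S = np.zeros((m, m))
    for k in range(terms):
        S += poch(A, k) @ poch(B, k) @ np.linalg.inv(poch(C, k)) * x**k / factorial(k)
    return S

# With A = -n I the series terminates, yielding a matrix polynomial in x.
n, m = 3, 2
A = -n * np.eye(m)
B = np.array([[2.0, 0.5], [0.0, 3.0]])   # hypothetical parameter matrix
C = np.array([[4.0, 1.0], [0.0, 5.0]])   # hypothetical, with (C)_k invertible
print(hyp2f1_matrix(A, B, C, 0.3, terms=n + 1))
```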
Theorem 2. For each integer $n$, the finite M matrix polynomials satisfy the matrix differential equation (13).

Proof. Assume the relevant parameter matrices are invertible for the required indices. The hypergeometric matrix function (9) is a solution of the matrix differential equation (10) [41]. Making the replacements from Remark 3, introduce the notation (14). Applying the chain rule in (14) gives (15) and (16). Taking into account that the factor arising from the substitution commutes with the remaining terms, substituting (14), (15), and (16) in (10) and postmultiplying by the appropriate matrix yields the required identity. Thus, the finite M matrix polynomial satisfies (13). □
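The proof mirrors the scalar situation, where $y = {}_2F_1(a,b;c;x)$ solves the hypergeometric equation $x(1-x)y'' + (c-(a+b+1)x)y' - ab\,y = 0$, of which (10) is the matrix analogue. A quick SymPy confirmation of the scalar identity, via a truncated series with symbolic parameters (independent of this paper's notation):

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
N = 8  # truncation order for the series check

# Truncated 2F1 series: sum_k (a)_k (b)_k / (c)_k * x^k / k!
y = sum(sp.rf(a, k) * sp.rf(b, k) / sp.rf(c, k) * x**k / sp.factorial(k)
        for k in range(N))

lhs = x*(1 - x)*sp.diff(y, x, 2) + (c - (a + b + 1)*x)*sp.diff(y, x) - a*b*y

# All coefficients up to x^{N-2} must vanish; only the truncation tail survives.
poly = sp.expand(lhs)
print([sp.simplify(poly.coeff(x, k)) for k in range(N - 1)])  # all zeros
```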
Remark 4. Taking scalar parameters with $m = 1$ in (13) gives the differential equation for the scalar finite orthogonal M polynomials [33].

Corollary 1. The finite M matrix polynomial is a solution of the differential equation (18).

Proof. Multiplying (13) by a suitable factor and rearranging yields (18). □
Definition 8. A matrix $A$ is called positively stable [29] if $\mathrm{Re}(\lambda) > 0$ for every eigenvalue $\lambda$ of $A$.

Lemma 9 ([6]). Let the matrix in question be positively stable and let the indicated matrices be invertible for the required indices. Then (19) holds, or equivalently (20).

The finite M matrix polynomials (12) may be defined in terms of the hypergeometric matrix function. From (19), definition (12) can be rewritten so that the matrix involved is positively stable. Combining the resulting identities and applying the Leibniz rule for the differentiation of a product, the following theorem gives the Rodrigues formula for these polynomials.
Theorem 3. Let Q and R satisfy the stated spectral conditions. Then, the finite M matrix polynomials (11) can be written in the Rodrigues form given below.

Now, for the finite M matrix polynomials determined by parameter matrices Q and R satisfying these spectral conditions, the finite orthogonality will be obtained on the interval $(0, \infty)$ with respect to the associated weight function. For this purpose, we will make use of the self-adjoint form (18) and investigate the behavior of the boundary terms at the endpoints of the interval.
Lemma 10. The boundary limits below vanish, where Q and R satisfy the spectral conditions in Lemma 3, and the factor involved is an arbitrary matrix polynomial.

Proof. First, we consider the limit at the origin. Suppose that the polynomial factor is continuous and bounded on the closure of an open bounded neighbourhood of the origin. We use the fact that if $A = UTU^{H}$ is the Schur decomposition of an arbitrary matrix $A$, then from (8) a norm bound of the form (21) holds. On the other hand, R satisfies the spectral conditions, so the shifted exponent matrix satisfies the corresponding conditions as well. Since the upper bound in (21) approaches zero at the origin, the first limit vanishes, while the remaining factor stays bounded on the neighbourhood in question.

Secondly, we evaluate the limit at infinity. The second limit on the right-hand side is zero if and only if the exponent matrix is Hurwitz (i.e., the real part of every eigenvalue is negative). To see why this is true, the Spectral Mapping Theorem can be applied: the eigenvalues of $x^{A} = e^{A \ln x}$ are of the form $x^{\lambda}$, where $\lambda$ is any eigenvalue of $A$. □
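The role of the Hurwitz condition can be checked numerically: writing $x^{A} = e^{A \ln x}$, the norm of $x^{A}$ decays as $x \to \infty$ exactly when every eigenvalue of $A$ has negative real part. A small SciPy sketch with an arbitrary Hurwitz matrix (not a matrix from this paper):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 5.0],
              [0.0, -2.0]])  # Hurwitz: eigenvalues -1 and -2

for x in [10.0, 100.0, 1000.0]:
    xA = expm(np.log(x) * A)  # matrix power x^A = exp(A ln x)
    print(x, np.linalg.norm(xA))
# The norms tend to 0: by the Spectral Mapping Theorem the eigenvalues of
# x^A are x^lambda, and |x^lambda| = x^{Re(lambda)} -> 0 when Re(lambda) < 0.
```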
When the spectral conditions in Lemma 3 are satisfied, the self-adjoint form (18) can be expressed as (22), because the finite M matrix polynomial is a solution of (18).
Multiplying (22) by the appropriate factor gives (23) and, interchanging the indices s and n, (24). Subtracting (24) from (23) and using the commutativity of the finite orthogonal M matrix polynomials yields (25). Since the auxiliary identity (26) can easily be verified, (25) can be read as (27). Now, after introducing the notation in (27), we integrate (27) over the interval of orthogonality. With an application of Lemma 10, (28) implies that the integral vanishes for $s \neq n$, since the corresponding scalar factor is nonzero and the matrix involved is invertible.
For the final stage in the derivation of orthogonality, it is necessary to confirm that the norm matrix is invertible for each admissible n. Identity (29) holds by using the Rodrigues formula. With the help of the above limits, applying integration by parts to the integral on the right-hand side of (29), and repeating this process the required number of times while using the above limits at each step, produces (30). Moreover, through repeated integration by parts, (31) can be shown to hold under the spectral conditions in Lemma 3. Then, substituting (31) in (30) shows that the norm matrix is nonsingular. Therefore, we obtain the result in the following theorem.
Theorem 4. Let Q and R satisfy the spectral conditions (32). For admissible indices, the orthogonality of the finite orthogonal M matrix polynomials is given by the relation below.

Corollary 2. The set of finite M matrix polynomials is orthogonal with respect to the associated matrix weight over $(0, \infty)$.
Remark 5. In the scalar case of the orthogonality relation, suitable choices of the parameters recover the finite orthogonality of the scalar finite orthogonal M polynomials [33].
Theorem 5. The finite M matrix polynomials satisfy the recurrence relation (33).

Corollary 3. More generally, the recurrence relation (34) holds for the finite M matrix polynomials.

Theorem 6. The polynomials have the forward shift operator given below.

Proof. Combining (33) and (37) gives the proof. □
Theorem 7. The polynomials have the backward shift operators (35) and (36).

Proof. For the proofs of (35) and (36), we differentiate the finite M matrix polynomials and make the necessary rearrangements. □
Theorem 8. The finite M matrix polynomials have the recurrence relations (37) and (38).

Proof. Using (34) in the matrix differential equation (13), we obtain (37). On the other hand, we prove (38) by making suitable substitutions in the equality from [42], in which the hypergeometric matrix function appears with commuting parameter matrices that are invertible for all relevant integers. As a consequence of (35) and (36), we have (39). □
Now, we obtain the three-term recurrence relation from the orthogonality of the finite M matrix polynomials defined in (11), where the leading coefficient of each polynomial is an invertible matrix and the parameter matrices Q and R are assumed to satisfy the spectral conditions (32).

By Theorem 2.2 in [19], any matrix polynomial of degree n can be represented uniquely in the form (40). So, using the orthogonality relation for the finite M matrix polynomials together with (40), the corresponding integral vanishes whenever the other factor is a matrix polynomial of degree strictly less than n. Multiplying by y produces a matrix polynomial of degree n + 1, so by (40) it can be expanded with coefficient matrices in $\mathbb{C}^{m \times m}$.
Using the orthogonality relation, this expansion reduces to three terms, whose coefficient matrices can be determined by Theorem 4. Consequently, the three-term recurrence relation (41) holds for the finite M matrix polynomials.
Comparing the powers of y on both sides of (41), the recurrence coefficient matrices are obtained as in (42). These results give us the following theorem.
Theorem 9. Let Q and R be in $\mathbb{C}^{m \times m}$ and satisfy the spectral conditions (32). Then, the finite orthogonal M matrix polynomials defined in (11) satisfy the three-term matrix recurrence relation (43), in which the coefficient matrices in $\mathbb{C}^{m \times m}$ are given by (42). In (43), the leading coefficient matrix is nonsingular for each admissible n.

Remark 7. When the matrix order m is 1 in (42) and (43), the relation (41) reduces to the three-term relation for the scalar case in [33].
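Since the specific coefficient matrices of (42) are not reproduced here, the following sketch only illustrates the general mechanism behind Theorem 9: evaluating a matrix polynomial sequence from a three-term recurrence of the generic shape $P_{n+1}(y) = (A_n y + B_n) P_n(y) + C_n P_{n-1}(y)$. The recurrence shape and all coefficient matrices below are hypothetical placeholders, not the relation (43) itself.

```python
import numpy as np

m, y, N = 2, 0.7, 4  # matrix order, evaluation point, highest degree

# Placeholder coefficient matrices A_n, B_n, C_n (hypothetical values).
A = [np.eye(m) * (n + 1) for n in range(N)]
B = [np.array([[0.0, 1.0], [0.0, 0.0]]) for _ in range(N)]
C = [-0.5 * np.eye(m) for _ in range(N)]

# P_0 = I, P_1 = A_0 y + B_0 (a common normalization).
P = [np.eye(m), A[0] * y + B[0]]
for n in range(1, N):
    P.append((A[n] * y + B[n]) @ P[n] + C[n] @ P[n - 1])

print(P[N])  # value of the degree-N matrix polynomial at y
```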
Finally, we can give the following result, called the generating function.

Theorem 10. The finite orthogonal M matrix polynomials have the generating function (44), where the matrices Q and R satisfy the spectral conditions (32).

Proof. In (44), using definition (11) and the stated equalities, we get (45). Taking the indicated identity into account, we obtain the intermediate simplifications valid for the relevant indices, and (45) takes a closed form. Finally, with the help of the last identity and the necessary rearrangements, we reach (44). □
Now, in order to obtain integral representations for the finite orthogonal M matrix polynomials, we recall the following theorem.
Theorem 11 ([42]). Let T and the accompanying parameter matrices in $\mathbb{C}^{m \times m}$ satisfy the stated spectral and invertibility conditions and commute with one another. Then the integral representation below holds.

In the light of the theorem above, we can give the following results.
Theorem 12. Let the parameter matrices be such that their corresponding eigenvalues satisfy the stated spectral conditions, and let them commute with T. Then the integral representations (46) and (47) hold for the finite orthogonal M matrix polynomials.

Proof. Making the appropriate substitutions in (12) completes the proof of (46). To prove (47), we make the analogous substitutions in (20). □
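Theorem 11 is a matrix analogue of the classical Euler integral representation ${}_2F_1(a,b;c;x) = \frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)} \int_0^1 t^{b-1}(1-t)^{c-b-1}(1-xt)^{-a}\,dt$, valid for $\mathrm{Re}(c) > \mathrm{Re}(b) > 0$. A quick numerical check of the scalar identity (with arbitrary parameter values, independent of the matrix parameters above):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1, gamma

a, b, c, x = 1.5, 2.0, 4.5, 0.3  # arbitrary parameters with Re(c) > Re(b) > 0

# Euler integral for 2F1(a, b; c; x).
integral, _ = quad(lambda t: t**(b - 1) * (1 - t)**(c - b - 1) * (1 - x*t)**(-a), 0, 1)
euler = gamma(c) / (gamma(b) * gamma(c - b)) * integral

print(euler, hyp2f1(a, b, c, x))  # the two values agree
```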