Abstract
In this study, we construct, for the first time, a finite set of orthogonal matrix polynomials and establish their finite orthogonality, matrix differential equation, Rodrigues formula, several recurrence relations (including the three-term relation), forward and backward shift operators, generating functions, integral representations, and their relation to the Jacobi matrix polynomials. In this way, the concept of “finite” orthogonality, which imposes parametric constraints on orthogonal polynomials, is transferred to the theory of matrix polynomials for the first time in the literature. Moreover, when the matrices are of order 1, this family reduces to the finite orthogonal M polynomials in the scalar case, thereby providing a matrix generalization of the finite orthogonal M polynomials in one variable.
1. Introduction
Matrix polynomials are powerful tools that facilitate mathematical analysis and play an important role in applied fields such as physics, engineering, and economics, where they are used to model and solve systems. They are highly important and useful in both theoretical mathematics and applied fields. This significance stems from several factors. Key areas of application include Linear Algebra, Linear Differential Equations, Control Theory [1], System Theory [2], Numerical Computations, Spectral Theory [3], Eigenvalue Problems, Graph Theory, and Network Analysis.
Stability analysis, system response, and transfer functions are often expressed using matrix polynomials. For example, matrix polynomials are fundamental components in methods such as Taylor series, Padé approximation, or Krylov subspaces for computing the exponential of a matrix. They are also used to simplify and analyze functional expressions of matrices through the Cayley–Hamilton Theorem and play a key role in topics such as diagonalization.
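For instance, the truncated Taylor series alone already gives a naive approximation of the matrix exponential (a minimal illustrative sketch only; production codes use scaling-and-squaring with Padé approximants, and the function name here is ours):

```python
import numpy as np

def expm_taylor(A, terms=30):
    """Approximate exp(A) by the truncated Taylor series sum_k A^k / k!."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ A / k          # A^k / k! built incrementally
        result = result + term
    return result
```

For a nilpotent matrix the series terminates and the approximation is exact after finitely many terms.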
Matrix polynomials and functions have been studied by many mathematicians over the years [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. Some new types and families of matrix polynomials have been introduced in [3,5,6,7,9,12,14,15,16,17,19,20,21,25,26,27]. In [8], pseudo-orthogonality for matrix polynomials and the importance of a nonsingular leading coefficient matrix are discussed. In [10], motivated by the idea that any polynomial system satisfying a suitable three-term relation is orthogonal, Durán and Van Assche showed that certain higher-order recurrence relations can be written in terms of orthonormal matrix polynomials. In [11], a quadrature formula and some basic properties of the zeros of a sequence of orthogonal matrix polynomials were studied. In [12], the authors defined orthogonal matrix polynomials with respect to a right matrix moment functional. In addition, some related properties have been discussed in [4,18,22,23,24,27]. In 1996 [23], Sinap and Van Assche proposed that scalar orthogonal polynomials with Sobolev inner products involving a finite number of derivatives can be studied using matrix orthogonal polynomials on the real line. Furthermore, in [28], the authors proved that every real multivariate polynomial has a symmetric determinantal representation. These representations allow polynomials to be expressed in terms of matrices and are used in applications such as semidefinite programming. The symmetry or asymmetry of matrix polynomials typically depends on the structure of the matrix and the way the polynomial is defined. In general, matrix polynomials are asymmetric, since a matrix polynomial is symmetric if and only if the matrix itself is symmetric. However, symmetric matrix polynomials can also be constructed from asymmetric matrices.
Recently, Fourier series expansions [29], interpolation [2], quadrature [18,22], group representation theory [30], medical imaging [31], splines [32], scattering theory [13], and statistics constitute the majority of the application areas for matrix polynomials.
In general, matrix polynomials can be studied in much the same way as scalar polynomials. However, matrices generally do not satisfy the commutativity property; that is, GH ≠ HG in general for two square matrices G and H of the same order. When examining this case, it is important to be able to reduce matrix generalizations to the appropriate scalar cases, in which the relevant matrices are of order 1.
The aim of this study is to derive finite orthogonal matrix polynomials under suitable conditions for the first time in the literature and to establish their relationships with known polynomials. In general, there are two types of orthogonality for polynomials: infinite orthogonality and finite orthogonality, the latter being subject to certain parametric constraints. In the infinite case, the nonnegative integer n (the degree of the polynomial) is unrestricted and can increase indefinitely, whereas in the finite case, constraints must be imposed on n. Some parametric restrictions must be introduced in order to obtain such finite orthogonality for matrix polynomials, and these will be specified in Section 3. In this way, the concept of finite-type orthogonality is transferred to matrix polynomial theory for the first time in this work. To achieve our goal, we draw inspiration from the following finite orthogonal polynomials.
In the scalar case, the polynomials form a finite orthogonal polynomial set under suitable restrictions on the parameters [33]. This set is defined by the following Rodrigues formula
From (1), the explicit formula
is derived, and it is shown that it satisfies the differential equation
Then, the orthogonality
is obtained with the help of the self-adjoint form of (2), so that a three-term relation
can be introduced.
In particular, it is seen that this family is closely related to the classical Jacobi polynomials under a suitable transformation.
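For reference, the scalar finite class M of [33] is commonly written as follows (a restatement from the literature; the parameters $p$, $q$ and the normalization conventions should be checked against [33]):

$$
M_n^{(p,q)}(x) = (-1)^n\, x^{-q}(1+x)^{p+q}\,\frac{d^n}{dx^n}\!\left[x^{\,n+q}(1+x)^{\,n-(p+q)}\right],
$$

which satisfies the second-order differential equation

$$
x(1+x)\,y'' + \big[(q+1) + (2-p)\,x\big]\,y' - n(n+1-p)\,y = 0,
$$

and the finite orthogonality

$$
\int_0^{\infty} x^{q}(1+x)^{-(p+q)}\, M_m^{(p,q)}(x)\, M_n^{(p,q)}(x)\, dx = 0 \qquad (m \neq n),
$$

valid for $q > -1$ and $m, n \leq N < \frac{p-1}{2}$.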
In this paper, Section 2 presents some basic definitions and theorems from the theory of matrix polynomials. The main results, including the definition of finite orthogonal M matrix polynomials, are given in Section 3. Matrix differential equations, the Rodrigues formula, the three-term recurrence relation, several recurrence relations, forward and backward shift operators, the generating function, and integral representations are also derived. Moreover, the Conclusion presents a connection between the Jacobi matrix polynomials and the finite orthogonal M matrix polynomials.
Due to the importance of special matrix polynomial forms in various fields of mathematics, physics, and engineering, establishing a relationship between the known Jacobi matrix polynomials and this novel finite class of matrix polynomials is highly significant. While the real-life applications of Jacobi matrix polynomials are primarily in technical and mathematical domains, methods based on these structures also provide solutions to many practical problems. Some key areas of application include Numerical Analysis and Differential Equations, Quantum Mechanics [34,35], Vibration Analysis [36] and Mechanical Systems, as well as Data Science and Machine Learning.
Jacobi matrix polynomials are related to families of orthogonal polynomials such as Legendre, Chebyshev, and Jacobi polynomials. These polynomials are especially useful in numerical solution methods. Gauss–Jacobi integration schemes, based on Jacobi polynomials, offer high accuracy for numerical integration in spectral methods (e.g., fluid mechanics and heat transfer). They are also widely used in integral calculations (e.g., Gauss quadrature). Real-life applications include weather forecasting (via Navier–Stokes equations), heat distribution modeling (e.g., in engine blocks), and electromagnetic field simulations (e.g., antenna design). In quantum mechanics, Jacobi matrix polynomials, which are especially useful for the matrixization of Hamiltonian operators, are often used in eigenvalue problems. The properties of these matrices are used to find the energies of physical systems. At this point, real-life applications [34,35] can be provided as quantum dot and quantum wire simulations and for the modeling of nanoscale devices. Jacobi matrices are used in engineering problems, particularly for determining the vibration modes and frequencies, such as the analysis of mass-spring systems. Examples include the analysis of suspension systems in the automotive industry, modeling of aircraft wing vibrations, and dynamic analysis of bridges. Although Jacobi matrices rarely appear directly in data science problems, the mathematical foundations of many algorithms rely on such matrix structures. They are particularly useful in eigenvalue analysis, dimensionality reduction, and kernel methods. Consequently, Jacobi matrix polynomials are not directly used as end products, but they play a critical role in technical fields such as engineering, numerical computation, and physics. Real-life problems are addressed through mathematical models that incorporate these polynomials and matrix structures.
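As a small illustration of the Gauss–Jacobi quadrature schemes mentioned above (an illustrative sketch, not part of the present paper's construction), SciPy's `roots_jacobi` returns the nodes and weights for the weight $(1-x)^\alpha(1+x)^\beta$ on $[-1,1]$:

```python
import numpy as np
from scipy.special import roots_jacobi

def gauss_jacobi_integrate(f, n, alpha, beta):
    """Integrate f(x) * (1-x)^alpha * (1+x)^beta over [-1, 1]
    with an n-point Gauss-Jacobi rule (exact for polynomials of
    degree up to 2n - 1)."""
    nodes, weights = roots_jacobi(n, alpha, beta)
    return np.dot(weights, f(nodes))
```

With `alpha = beta = 0` the rule reduces to Gauss–Legendre quadrature, so integrating `x**2` over $[-1,1]$ recovers $2/3$ exactly.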
2. Some Basic Notations and Properties
Let σ(G) denote the set of all eigenvalues (the spectrum) of any square matrix G.
Definition 1.
For , any matrix polynomial is defined as
where y is a real variable, and the coefficients are members of the space of real or complex square matrices of order m.
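As an aside, a matrix polynomial in the sense of Definition 1 can be evaluated by Horner's scheme; the sketch below is purely illustrative (the function name and setup are ours, not the paper's):

```python
import numpy as np

def eval_matrix_polynomial(coeffs, y):
    """Evaluate P(y) = A_0 + A_1*y + ... + A_n*y^n by Horner's scheme,
    where coeffs = [A_0, ..., A_n] are m x m arrays and y is a real scalar."""
    result = np.zeros_like(coeffs[-1], dtype=float)
    for A in reversed(coeffs):
        result = result * y + A   # scalar variable, matrix coefficients
    return result
```

For example, with coefficients I and 2I at y = 3 this evaluates to I + 6I = 7I.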
Lemma 1
([37]). If and are holomorphic functions in an open set Ω of the complex plane, and if G is a matrix in for which , then
Therefore, if , and is a matrix, such that , then
Definition 2.
If every eigenvalue of a matrix G has a positive real part, then the matrix G is called a positive stable matrix.
Definition 3.
For , the matrix version of the Pochhammer symbol is defined by
where is the identity matrix, and .
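The matrix Pochhammer symbol is commonly taken as $(G)_0 = I$ and $(G)_n = G(G+I)\cdots(G+(n-1)I)$; assuming this convention, a direct computation can be sketched as:

```python
import numpy as np

def matrix_pochhammer(G, n):
    """(G)_n = G (G + I) (G + 2I) ... (G + (n-1)I), with (G)_0 = I."""
    m = G.shape[0]
    result = np.eye(m)
    for k in range(n):
        result = result @ (G + k * np.eye(m))
    return result
```

In the 1 x 1 case this reduces to the scalar Pochhammer symbol, e.g. (3)_2 = 3 * 4 = 12.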
Remark 1.
If for , then for .
Definition 4.
For and ,
Definition 5.
The definition of the Gamma matrix function is given by
where G is a positive stable matrix. If the relevant matrices are invertible for the indicated indices, then the following equation holds
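Assuming the standard definition $\Gamma(G)=\int_0^\infty e^{-t}\,t^{G-I}\,dt$ with $t^{G-I}=\exp\big((G-I)\ln t\big)$, the integral can be approximated numerically. The brute-force sketch below is only illustrative; in practice one would diagonalize G and apply the scalar Gamma function to the eigenvalues:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

def gamma_matrix(G, upper=80.0):
    """Gamma(G) = int_0^inf exp(-t) t^(G - I) dt, computed entrywise with
    scipy quad; t^(G - I) is evaluated as expm((G - I) * log(t))."""
    m = G.shape[0]
    I = np.eye(m)

    def integrand(t, i, j):
        return np.exp(-t) * expm((G - I) * np.log(t))[i, j]

    out = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            out[i, j], _ = quad(integrand, 1e-12, upper, args=(i, j), limit=200)
    return out
```

For the 1 x 1 matrix G = [[3]] this recovers the scalar value Gamma(3) = 2.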
Lemma 2.
Definition 6.
The Beta matrix function is defined as
where are positive stable matrices [38].
Theorem 1.
If the matrices are commutative, such that G,H, and are positive stable matrices, then the equality
exists [16].
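Theorem 1 can be checked numerically in the simplest commuting case, namely diagonal matrices, where each matrix Gamma reduces to the scalar Gamma function on the diagonal (an illustrative sketch only; the helper name is ours):

```python
import numpy as np
from scipy.special import gamma

def beta_matrix_diag(G, H):
    """B(G, H) = Gamma(G) Gamma(H) Gamma(G + H)^{-1} for commuting
    positive stable matrices; evaluated here for diagonal G and H,
    where each matrix Gamma is the scalar Gamma on the diagonal."""
    gG = np.diag(gamma(np.diag(G)))
    gH = np.diag(gamma(np.diag(H)))
    gGH = np.diag(gamma(np.diag(G + H)))
    return gG @ gH @ np.linalg.inv(gGH)
```

In the 1 x 1 case this reproduces the scalar identity B(2, 3) = Γ(2)Γ(3)/Γ(5) = 1/12.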
Lemma 3.
Lemma 4
([39]). For , let . Then
Lemma 5
([40]). The reciprocal scalar Gamma function, , is an entire function of the complex variable z. Thus, for any , the Riesz–Dunford functional calculus [37] shows that is well defined and is, indeed, the inverse of . Hence, if is such that is invertible for , , then .
Lemma 6.
If , , and are all invertible and for and , , then , where denotes the Beta matrix function [16].
Lemma 7.
If is invertible for and , , , then the definition of the hypergeometric matrix function is as follows
It converges for [16].
Lemma 8
([41]). For , let be constant matrices, so that , , and is invertible. Then, the solution of the hypergeometric matrix differential equation
is
for
and
where F is the hypergeometric matrix function (9) and are arbitrary constant matrices.
3. Finite Orthogonal Matrix Polynomials
In this section, the set of finite orthogonal M matrix polynomials is introduced. The corresponding matrix differential equation, finite orthogonality relation, several recurrence relations, Rodrigues formula, and generating functions for the family are also derived.
Finite orthogonal M matrix polynomials possess properties that naturally generalize those of the scalar finite orthogonal M polynomials and are conveniently constructed for matrix calculus.
Definition 7.
For any integer , the M matrix polynomial is defined by
where Q and R are parameter matrices whose eigenvalues z and t all satisfy the spectral conditions , , , , and .
Remark 3.
Note that
and that, in the scalar case, with the appropriate choices of the parameters, the polynomial coincides with the scalar finite M polynomial [33]. Here, denotes the hypergeometric matrix function given by (9).
Theorem 2.
For each integer , the finite M matrix polynomials satisfy the matrix differential equation
for
Proof.
Let , and be invertible for . The hypergeometric matrix function (9) is a solution to the matrix differential Equation (10) for [41]. By replacing , , , and ,
from Remark 3. Introduce the notation
Applying the chain rule in (14),
and
Taking into account that and that this term commutes with , substituting (14), (15) and (16) in (10) and postmultiplying by
yields
Thus, satisfies (13) in . □
Remark 4.
Taking , and , in (13) gives the differential equation for the scalar finite M orthogonal polynomials [33].
Corollary 1.
For , is a solution for the differential equation
over .
Definition 8.
A matrix is called positively stable [29] if , .
Lemma 9
([6]). Let be positively stable, , and and are invertible for and . Then
or equivalently
for .
The finite M matrix polynomials (12) may be defined in terms of the hypergeometric matrix function . From (19), definition (12) can be written as follows
where is positively stable. Thus,
holds by using
Since
and
we get
Applying the Leibniz rule for the differentiation of a product, the following theorem gives the Rodrigues formula for the polynomials .
Theorem 3.
Let Q and R satisfy that , , and , , and , where . Then, the finite M matrix polynomials (11) can be written in the form
for .
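In the scalar case, the Rodrigues construction can be checked numerically. The sketch below assumes the scalar Rodrigues form from [33], with parameters p, q and weight $x^q(1+x)^{-(p+q)}$ on $(0,\infty)$ (our notational assumptions), and verifies the orthogonality of $M_1$ and $M_2$; the matrix case is analogous under the conditions of Theorem 3:

```python
import numpy as np
import sympy as sp
from scipy.integrate import quad

x = sp.symbols('x')

def finite_M(n, p, q):
    """Scalar finite M polynomial via the Rodrigues formula of [33]:
    M_n = (-1)^n x^{-q} (1+x)^{p+q} d^n/dx^n [ x^{n+q} (1+x)^{n-p-q} ]."""
    expr = (-1)**n * x**(-q) * (1 + x)**(p + q) * sp.diff(x**(n + q) * (1 + x)**(n - p - q), x, n)
    return sp.lambdify(x, sp.expand(sp.simplify(expr)), 'numpy')

p, q = 10, 1                      # finite orthogonality needs m, n <= N < (p - 1)/2
M1, M2 = finite_M(1, p, q), finite_M(2, p, q)
w = lambda t: t**q * (1 + t)**(-(p + q))
inner, _ = quad(lambda t: w(t) * M1(t) * M2(t), 0, np.inf)
```

The computed inner product of M_1 and M_2 against the weight is zero up to quadrature error.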
Now, for the finite M matrix polynomials determined by the parameter matrices Q and R, such that , , and , , and , the finite orthogonality will be obtained on the interval with the weight function . For this purpose, we make use of the self-adjoint form (18) and investigate the behavior of at the endpoints of the interval.
Lemma 10.
and
where satisfy the spectral conditions in Lemma 3, and is an arbitrary matrix polynomial.
Proof.
First, we approach the case .
Suppose that is continuous and bounded on the closure of , such that is an open bounded neighbourhood of . Then
We use the fact that if
and is the Schur decomposition for any arbitrary matrix , then from (8)
where . Therefore,
such that is the Schur decomposition of . On the other hand, R satisfies , . So, satisfies the conditions , , and thus, .
On the one hand,
for , and hence,
On the other hand, the inequality
is already satisfied and, since the upper bound of (21) approaches zero as , we obtain
For the factor ,
and is bounded on .
In conclusion,
then
Secondly, we evaluate the case .
For the second limit on the right-hand side of
we can say that it is zero if and only if the matrix in the exponent is Hurwitz (i.e., the real part of every eigenvalue of is negative). To see why this is true, the Spectral Mapping Theorem can be applied, which says that the eigenvalues of
are of the form , where is any eigenvalue of . □
When the spectral conditions in Lemma 3 are satisfied, the self-adjoint form (18) can be expressed as
where , because is a solution of (18).
Subtracting (24) from (23), and using the commutativity of the finite orthogonal M matrix polynomials,
Applying Lemma 10, (28) implies that
since and is invertible.
For the final stage in the derivation of orthogonality, it is necessary to confirm that
is invertible for .
holds by using the Rodrigues’ formula for
By Lemma 10,
With the help of the above limits, applying integration by parts to the integral on the right-hand side of (29) gives
If we repeat this process successively, it produces that
by using the above limits each time.
Here, through repeated integration by parts, it can be shown that
under the spectral conditions in Lemma 3.
Therefore, we obtain the result in the following Theorem.
Theorem 4.
Let satisfy the following spectral conditions
For , the orthogonality relation of the finite orthogonal M matrix polynomials is given by
Corollary 2.
The set of finite M matrix polynomials is orthogonal with respect to the matrix weight over .
Remark 5.
In the scalar case of the orthogonality relation, the choices , and , give the finite orthogonality for the scalar finite orthogonal M polynomials .
Theorem 5.
The finite M matrix polynomials satisfy the following recurrence relation:
Corollary 3.
More generally, the recurrence relation
holds for the finite matrix polynomials .
Remark 6.
In particular,
Theorem 6.
The polynomials have the following forward shift operator
Theorem 7.
The polynomials have the following backward shift operators
and
Proof.
Theorem 8.
The finite M matrix polynomials have the recurrence relations
and
Proof.
On the other hand, we prove (38) taking , , and in the equality [42]
where is the hypergeometric matrix function for , and are commutative, and are invertible for all integer .
Now, we obtain the three-term recurrence relation from the orthogonality of the finite M matrix polynomials defined in (11), where the leading coefficient of is an invertible matrix, for , and the parameter matrices Q and R are assumed to satisfy the spectral conditions (32).
By Theorem 2.2 in [19], any matrix polynomial of degree n can be represented uniquely in the form
So, using the orthogonality relation for the finite matrix polynomials and (40), if is a matrix polynomial of degree strictly less than n, then
It can be said that becomes a matrix polynomial with degree , for . Thus, by (40),
for some coefficients in .
Using the orthogonality relation of , and for ,
where the coefficient matrices can be determined by Theorem 4. Consequently,
so that the three-term recurrence relation
holds for the finite M matrix polynomials .
Comparing the powers of y in both sides of (41), the recurrence coefficient matrices , , and are obtained as follows:
These results give us the following theorem.
Theorem 9.
Remark 7.
Finally, we can give the following result, called the generating function.
Theorem 10.
The finite orthogonal M matrix polynomials have the generating function
where matrices Q and R satisfy the spectral conditions (32).
Proof.
Now, in order to obtain integral representations for the finite orthogonal M matrix polynomials, we recall the following theorem.
Theorem 11
([42]). Let and T be matrices in , such that for , for , is invertible for , and these matrices are commutative. Then
In the light of the theorem above, we can give the following results.
Theorem 12.
Let be matrices, such that the corresponding eigenvalues satisfy the conditions , , and . Also, let these matrices and T be commutative. The following integral representations are satisfied for the finite orthogonal M matrix polynomials :
and
4. Conclusions
For the first time, we have introduced a finite set of orthogonal matrix polynomials in this paper. The structure defined by (11) results in several important applications of the finite M matrix polynomials . First, it is shown that the polynomials satisfy the second-order differential Equation (13). A Rodrigues formula for the polynomials is then derived under an additional commutativity assumption. We construct the finite orthogonality in the sense of Theorem 4 and subsequently introduce several matrix recurrence relations, including a three-term recurrence relation, for the finite M matrix polynomials.
Similar to the scalar case, we establish a relation between these finite orthogonal matrix polynomials and the Jacobi matrix polynomials in the following corollary.
Corollary 4.
Using the explicit representation (12), the relationship
holds, where is the Jacobi matrix polynomial defined in [6].
Considering the usage areas and real-life applications of Jacobi matrix polynomials, the connection between the finite matrix polynomials produced in this study and Jacobi matrices becomes important.
Funding
This research received no external funding.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The author declares that they have no competing interests.
References
- Šebek, M.; Hromčík, M. Polynomial design methods. Int. J. Robust Nonlinear Control 2007, 17, 679–681.
- Antsaklis, P.J.; Gao, Z. Polynomial and rational matrix interpolation: Theory and control applications. Int. J. Control 1993, 58, 349–404.
- Mirzoev, K.A.; Konechnaya, N.N.; Safonova, T.A.; Tagirova, R.N. Generalized Jacobi Matrices and Spectral Analysis of Differential Operators with Polynomial Coefficients. J. Math. Sci. 2021, 252, 213–224.
- Altın, A.; Çekim, B. Generating matrix functions for Chebyshev matrix polynomials of the second kind. Hacet. J. Math. Stat. 2012, 41, 25–32.
- Çekim, B. New kinds of matrix polynomials. Miskolc Math. Notes 2013, 14, 817–826.
- Defez, E.; Jódar, L.; Law, A. Jacobi matrix differential equation, polynomial solutions and their properties. Comput. Math. Appl. 2004, 48, 789–803.
- Defez, E.; Jódar, L. Chebyshev matrix polynomials and second order matrix differential equations. Util. Math. 2002, 61, 107–123.
- Defez, E.; Jódar, L.; Law, A.; Ponsoda, E. Three-term recurrences and matrix orthogonal polynomials. Util. Math. 2000, 57, 129–146.
- Duran, A.J. On orthogonal polynomials with respect to a positive definite matrix of measures. Can. J. Math. 1995, 47, 88–112.
- Duran, A.J.; Van Assche, W. Orthogonal matrix polynomials and higher order recurrence relations. Linear Algebra Appl. 1995, 219, 261–280.
- Duran, A.J.; López-Rodriguez, P. Orthogonal matrix polynomials: Zeros and Blumenthal’s theorem. J. Approx. Theory 1996, 84, 96–118.
- Draux, A.; Jokung-Nguena, O. Orthogonal polynomials in a non-commutative algebra. Non-normal case. IMACS Ann. Comput. Appl. Maths. 1991, 9, 237–242.
- Geronimo, J.S. Scattering theory and matrix orthogonal polynomials on the real line. Circuit Syst. Signal Process. 1982, 1, 471–494.
- Jódar, L.; Company, R.; Navarro, E. Laguerre matrix polynomials and system of second-order differential equations. Appl. Numer. Math. 1994, 15, 53–63.
- Jódar, L.; Company, R.; Ponsoda, E. Orthogonal matrix polynomials and systems of second order differential equations. Differ. Equ. Dyn. Syst. 1996, 3, 269–288.
- Jódar, L.; Cortés, J.C. On the hypergeometric matrix function. J. Comput. Appl. Math. 1998, 99, 205–217.
- Jódar, L.; Company, R. Hermite matrix polynomials and second order matrix differential equations. J. Approx. Theory Appl. 1996, 12, 20–30.
- Jódar, L.; Defez, E.; Ponsoda, E. Matrix quadrature and orthogonal matrix polynomials. Congr. Numer. 1995, 106, 141–153.
- Jódar, L.; Defez, E.; Ponsoda, E. Orthogonal matrix polynomials with respect to linear matrix moment functionals: Theory and applications. J. Approx. Theory Appl. 1996, 12, 96–115.
- Sastre, J.; Defez, E.; Jodar, L. Laguerre matrix polynomial series expansion: Theory and computer applications. Math. Comput. Model. 2006, 44, 1025–1043.
- Sayyed, K.A.M.; Metwally, M.S.; Batahan, R.S. Gegenbauer matrix polynomials and second order matrix differential equations. Divulg. Mat. 2004, 12, 101–115.
- Sinap, A.; Van Assche, W. Polynomial interpolation and Gaussian quadrature for matrix valued functions. Linear Algebra Appl. 1994, 207, 71–114.
- Sinap, A.; Van Assche, W. Orthogonal matrix polynomials and applications. J. Comput. Appl. Math. 1996, 66, 27–52.
- Sri Lakshmi, V.; Srimannarayana, N.; Satyanarayana, B.; Chakradhar Rao, M.V.; Radha Madhavi, M.; Pavan Kumar, D.K. Jacobi matrix polynomial and its integral results. Commun. Appl. Nonlinear Anal. 2025, 32, 253–262.
- Sri Lakshmi, V.; Srimannarayana, N.; Satyanarayana, B.; Radha Madhavi, M.; Ramesh, D. On modified Jacobi matrix polynomials. Int. J. Adv. Sci. Technol. 2020, 29, 924–932.
- Taşdelen, F.; Çekim, B.; Aktaş, R. On a multivariable extension of Jacobi matrix polynomials. Comput. Math. Appl. 2011, 61, 2412–2423.
- Varma, S. Some Extensions of Orthogonal Polynomials. Ph.D. Thesis, Ankara University, Ankara, Turkey, 2013.
- Stefan, A.; Welters, A. A short proof of the symmetric determinantal representation of polynomials. Linear Algebra Appl. 2021, 627, 80–93.
- Defez, E.; Jódar, L. Some applications of the Hermite matrix polynomials series expansions. J. Comput. Appl. Math. 1998, 99, 105–117.
- James, A.T. Special functions of matrix and single argument in statistics. In Theory and Applications of Special Functions; Askey, R.A., Ed.; Academic Press: Cambridge, MA, USA, 1975; pp. 497–520.
- Defez, E.; Hervás, A.; Law, A.; Villanueva-Oller, J.; Villanueva, R.J. Progressive transmission of images: PC-based computations, using orthogonal matrix polynomials. Math. Comput. Model. 2000, 32, 1125–1140.
- Defez, E.; Law, A.; Villanueva-Oller, J.; Villanueva, R.J. Matrix cubic splines for progressive 3D imaging. J. Math. Imaging Vis. 2002, 17, 41–53.
- Masjed-Jamei, M. Three finite classes of hypergeometric orthogonal polynomials and their application in functions approximation. Integr. Trans. Spec. Funct. 2002, 13, 169–190.
- Wood, J.D.; Tougaw, D. Matrix Multiplication Using Quantum-Dot Cellular Automata to Implement Conventional Microelectronics. IEEE Trans. Nanotechnol. 2011, 10, 1036–1042.
- Illera, S.; Garcia-Castello, N.; Prades, J.D.; Cirera, A. A transfer Hamiltonian approach for an arbitrary quantum dot array in the self-consistent field regime. J. Appl. Phys. 2012, 112, 093701.
- Hiramoto, K.; Grigoriadis, K.M. Integrated design of structural and control systems with a homotopy like iterative method. Int. J. Control. 2006, 79, 1062–1073.
- Dunford, N.; Schwartz, J. Linear Operators; Interscience: New York, NY, USA, 1963; Volume I.
- Jódar, L.; Cortés, J.C. Some properties of Gamma and Beta matrix functions. Appl. Math. Lett. 1998, 11, 89–93.
- Golub, G.; Van Loan, C.F. Matrix Computations; Johns Hopkins University Press: Baltimore, MD, USA, 1995.
- Hille, E. Lectures on Ordinary Differential Equations; Addison-Wesley: New York, NY, USA, 1969.
- Jódar, L.; Cortes, J.C. Closed form solution of the hypergeometric matrix differential equation. Math. Comput. Model. 2000, 32, 1017–1028.
- Çekim, B.; Altın, A.; Aktaş, R. Some new results for Jacobi matrix polynomials. Filomat 2013, 27, 713–719.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).