Article

A Novel Recursive Algorithm for Inverting Matrix Polynomials via a Generalized Leverrier–Faddeev Scheme: Application to FEM Modeling of Wing Vibrations in a 4th-Generation Fighter Aircraft

1
Institute of Aeronautics and Space Studies (IASS), Aeronautical Sciences Laboratory, University of Blida, Blida 09000, Algeria
2
Department of Electrical and Computer Engineering, ZEP Campus, University of Western Macedonia, Kozani, 50100 Kozani, Greece
3
Institute of Electrical and Electronic Engineering, Boumerdes 35000, Algeria
4
Institut für Mathematik, Technische Universität Berlin, Straße des 17. Juni 136, 10623 Berlin, Germany
*
Authors to whom correspondence should be addressed.
Mathematics 2025, 13(13), 2101; https://doi.org/10.3390/math13132101
Submission received: 16 May 2025 / Revised: 18 June 2025 / Accepted: 23 June 2025 / Published: 26 June 2025

Abstract

This paper introduces a novel recursive algorithm for inverting matrix polynomials, developed as a generalized extension of the classical Leverrier–Faddeev scheme. The approach is motivated by the need for scalable and efficient inversion techniques in applications such as system analysis, control, and FEM-based structural modeling, where matrix polynomials naturally arise. The proposed algorithm is fully numerical, recursive, and division-free, making it suitable for large-scale computation. Validation is performed through a finite element simulation of the transverse vibration of a fighter aircraft wing. Results confirm the method’s accuracy, robustness, and computational efficiency in computing characteristic polynomials and adjugate-related forms, supporting its potential for broader application in control, structural analysis, and future extensions to structured or nonlinear matrix systems.

1. Introduction

The study of matrix inversion A^{-1} ∈ C^{n×n} and matrix polynomial inversion A^{-1}(λ) ∈ C^{n×n}[λ] has been a central topic in computational linear algebra, motivating extensive research over several decades. Early developments include Frame’s (1949) recursive formula for computing matrix inverses [1], Davidenko’s (1960) method based on variation of parameters [2], and Gantmacher’s foundational work on matrix theory [3]. Between 1961 and 1964, Lancaster made key contributions to nonlinear matrix problems, algorithms for λ-matrices, and generalized Rayleigh quotient methods for eigenvalue problems [4,5,6]. Significant progress was made by Faddeev and Faddeeva (1963–1975) in computational techniques [7], and Householder’s monograph [8] became a standard reference. In parallel, Decell (1965) employed the Cayley–Hamilton theorem for generalized inversion [9], forming a basis for later extensions to polynomial matrices. In the 1980s, Givens proposed modifications to the classical Leverrier–Faddeev algorithm [10], while Paraskevopoulos (1983) introduced Chebyshev-based methods for system analysis and control [11], enriching the theory of polynomial matrix inversion.
Barnett, in 1989, provided a new proof and important extensions of the Leverrier–Faddeev algorithm, introducing computational schemes for matrix polynomials of degree two, A(λ) = λ²I_n + λA₁ + A₂ [12]. In the 1990s, Fragulis and collaborators (1991) addressed the inversion of polynomial matrices and their Laurent expansions [13], while Fragulis (1995) established a generalized Cayley–Hamilton theorem for polynomial matrices of arbitrary degree [14]. In 1993, Helmberg, Wagner, and Veltkamp revisited the Leverrier–Faddeev method, focusing on characteristic polynomials and eigenvectors [15]. Wang and Lin (1993) proposed an extension of the Leverrier algorithm for higher-degree matrix polynomials [16], although practical implementation remained limited. Further developments included Hou, Pugh, and Hayton (2000), who proposed a general solution for systems in polynomial matrix form [17], and Djordjevic (2001), who investigated generalized Drazin inverses [18]. Debeljković (2004) contributed to the theory of singular control systems [19]. Hernández (2004) introduced an extension of the Leverrier–Faddeev algorithm using bases of classical orthogonal polynomials [20], offering improved flexibility in polynomial matrix inversion. In parallel, Stanimirovic and his collaborators, between 2006 and 2011, initiated a series of extensions of the Leverrier–Faddeev algorithm toward generalized inverses and rational matrices, culminating in multiple key contributions across the 2000s [21,22,23,24,25,26], providing practical and effective computational algorithms. Petkovic (2006) further developed interpolation-type algorithms related to the Leverrier–Faddeev method [27]. Recent efforts expanded the theory toward modern applications. In 2019, Dopico introduced the concept of root polynomials and their role in matrix polynomial theory [28]. In 2021, Tian and Xia studied low-degree solutions for the Sylvester matrix polynomial equation [29], while Shehata (2021) addressed Lommel matrix polynomials [30]. In 2024, significant theoretical expansions were presented: Szymański explored stability theory for matrix polynomials [31]; Kumar proposed numerical methods for exploring zeros of Hermite-lambda matrix polynomials [32]; Zainab studied symbolic approaches for Appell-lambda matrix families [33]; and Milica investigated the implications of higher-degree polynomials in forced damped oscillations [34]. Halidias (2024) also contributed novel methods for minimum polynomial computations [35]. Overall, the evolution of matrix polynomial inversion, the Leverrier–Faddeev method, and associated generalized inversion techniques has grown from classical recursion-based strategies to sophisticated symbolic, interpolation, and numerical algorithms, addressing the challenges posed by high-degree, singular, or rational polynomial matrices.
This work is motivated by the need for more efficient and straightforward algorithms to compute matrix polynomial inverses, particularly for large-scale systems. Existing Leverrier–Faddeev-based methods often suffer from high computational complexity and limited applicability to higher-degree polynomials and multivariable transfer functions. The proposed recursive algorithm itself constitutes a major innovation, offering a simple and efficient solution. It also extends naturally to companion forms and descriptor systems, areas not fully explored in previous methods. Furthermore, the algorithm can handle non-regular and potentially rational matrix polynomials, addressing a significant gap in the literature. This work thus provides a powerful tool for efficient matrix polynomial inversion with broad applications in system analysis and control.
In this paper, we present a new and efficient scheme for computing the inverse of matrix polynomials of arbitrary degree. The paper is organized as follows: Section 2 introduces the necessary preliminaries. In Section 3, we formulate the problem statement and discuss classical algorithms. Section 4 presents our main algorithm, and Section 5 explores its connection to companion forms. Section 6 extends the method to descriptor systems, while Section 7 demonstrates the application of our approach through practical simulations. Finally, Section 8 concludes this paper.

2. Preliminaries on Matrix Functions

Now, we introduce matrix functions, highlight their key properties and importance in system theory, and lay the foundation for their application in system analysis.

2.1. Matrix Functions by the Sylvester Formula

The term “function of a matrix” can have several different meanings. Here, we are interested in a definition that takes a scalar function f and a matrix A ∈ C^{n×n} and specifies f(A) to be a matrix of the same dimensions as A; it does so in a way that provides a useful generalization of the function of a scalar variable f(z), z ∈ C. Other interpretations of f(A) are functions mapping C^{n×n} to C^{m×m} that do not stem from a scalar function. Examples include matrix polynomials A(λ) ∈ C^{n×n}[λ] with matrix coefficients, the matrix transpose, the adjugate (or adjoint) matrix, compound matrices comprising minors of a given matrix, and factors from matrix factorizations (as a special case, the polar factors of a matrix). Another example is elementwise operations on matrices, for example sin(A) = (sin(a_ij)). In addition, there are functions that produce a scalar result, such as the trace, the determinant, the spectral radius, the condition number κ(A), and one particular generalization of the hypergeometric function to matrix arguments. The Sylvester formula is an identity that provides matrix function evaluation using the projectors G_i (or Z_ij), the minimal polynomial, and Lagrange interpolation techniques. In this section, we explore how it is applied.
Theorem 1.
(Functions of non-diagonalizable matrices) Let A ∈ C^{n×n} have spectrum σ(A) = {λ₁, λ₂, …, λ_s}, and let k_i = index(λ_i). A function f : C → C is said to be defined (or to exist) at A when f(λ_i), f^{(1)}(λ_i), …, f^{(k_i−1)}(λ_i) exist for each λ_i ∈ σ(A). The value of f(A) is:

$$f(A) = \sum_{i=1}^{s}\sum_{j=0}^{k_i-1} f^{(j)}(\lambda_i)\,\frac{(A-\lambda_i I)^j G_i}{j!} = \sum_{i=1}^{s}\sum_{j=0}^{k_i-1} f^{(j)}(\lambda_i)\, Z_{ij}$$
Proof
Let σ(A) = {λ₁, λ₂, …, λ_s} ⊂ C be the set of s distinct eigenvalues of a diagonalizable matrix A, and let f be a function defined on the spectrum of A. Let the polynomial p(λ) = Σ_{k=0}^{n} c_kλ^k = Π_{i=1}^{s}(λ−λ_i)^{m_i} be the characteristic polynomial of A. The minimal polynomial of A is given by ψ(λ) = Π_{i=1}^{s}(λ−λ_i)^{g_i} with 1 ≤ g_i ≤ m_i, but because of the diagonalizability of A we have g_i = 1; hence ψ(λ) = Π_{i=1}^{s}(λ−λ_i), which is of degree s. Let us now prove that there exists a unique polynomial r(λ) = Σ_{k=0}^{s−1}α_kλ^k of degree s−1 such that f(λ_i) = r(λ_i) = Σ_{k=0}^{s−1}α_kλ_i^k for i = 1, 2, …, s. To prove this claim, we write the last formula in matrix form:

$$\begin{pmatrix} 1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{s-1}\\ 1 & \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{s-1}\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & \lambda_s & \lambda_s^2 & \cdots & \lambda_s^{s-1} \end{pmatrix}\begin{pmatrix}\alpha_0\\ \alpha_1\\ \vdots\\ \alpha_{s-1}\end{pmatrix} = \begin{pmatrix}f(\lambda_1)\\ f(\lambda_2)\\ \vdots\\ f(\lambda_s)\end{pmatrix} \qquad\Longleftrightarrow\qquad V\alpha = f$$

This system has a unique solution if and only if the determinant of the Vandermonde matrix V does not vanish, det(V(λ₁, …, λ_s)) ≠ 0. It is not difficult to verify that det(V(λ₁, …, λ_s)) = Π_{1≤i<j≤s}(λ_j − λ_i). Hence, since the points are pairwise distinct, the determinant does not vanish. Computing the solution of the system, and writing V^{-1} = (ℓ_{kj}), we get

$$\alpha = V^{-1}f \;\Longrightarrow\; \begin{pmatrix}\alpha_0\\ \alpha_1\\ \vdots\\ \alpha_{s-1}\end{pmatrix} = \begin{pmatrix}\ell_{11} & \ell_{12} & \cdots & \ell_{1s}\\ \ell_{21} & \ell_{22} & \cdots & \ell_{2s}\\ \vdots & \vdots & & \vdots\\ \ell_{s1} & \ell_{s2} & \cdots & \ell_{ss}\end{pmatrix}\begin{pmatrix}f(\lambda_1)\\ \vdots\\ f(\lambda_s)\end{pmatrix} \;\Longrightarrow\; \alpha_{k-1} = \sum_{i=1}^{s}\ell_{ki}\,f(\lambda_i)$$

Substituting α_{j−1} into r(λ), we obtain

$$r(\lambda) = \sum_{j=1}^{s}\alpha_{j-1}\lambda^{j-1} = \sum_{j=1}^{s}\sum_{i=1}^{s}\ell_{ji}\,f(\lambda_i)\,\lambda^{j-1} = \sum_{i=1}^{s}f(\lambda_i)\Big(\sum_{j=1}^{s}\ell_{ji}\,\lambda^{j-1}\Big) = \sum_{i=1}^{s}f(\lambda_i)\,L_i(\lambda)$$

Note that the columns of the inverse of the Vandermonde matrix V are exactly the coefficient vectors of the Lagrange interpolation polynomials. According to the Lagrange interpolation method, we have

$$r(\lambda) = \sum_{i=1}^{s}f(\lambda_i)L_i(\lambda) \qquad\text{with}\qquad L_i(\lambda) = \sum_{j=1}^{s}\ell_{ji}\,\lambda^{j-1}$$

whereas Cramer’s rule gives

$$r(\lambda) = \sum_{i=1}^{s}f(\lambda_i)L_i(\lambda) = \sum_{i=1}^{s}f(\lambda_i)\,\frac{\det[V(\lambda_1,\dots,\lambda_{i-1},\lambda,\lambda_{i+1},\dots,\lambda_s)]}{\det[V(\lambda_1,\dots,\lambda_s)]} = \sum_{i=1}^{s}f(\lambda_i)\prod_{\substack{j=1\\ j\ne i}}^{s}\frac{\lambda-\lambda_j}{\lambda_i-\lambda_j}$$

Therefore, from the Lagrange interpolation polynomials, we get the Sylvester formula

$$f(A) = r(A) = \sum_{i=1}^{s}f(\lambda_i)\prod_{\substack{j=1\\ j\ne i}}^{s}\frac{A-\lambda_jI}{\lambda_i-\lambda_j} = \sum_{i=1}^{s}f(\lambda_i)\,G_i, \qquad\text{with}\qquad G_i = \prod_{\substack{j=1\\ j\ne i}}^{s}\frac{A-\lambda_jI}{\lambda_i-\lambda_j}$$

In the case of non-diagonalizable matrices, index(λ_i) = k_i ≥ 1, and the previous result generalizes to (see [36])

$$f(A) = \sum_{i=1}^{s}\sum_{j=0}^{k_i-1}f^{(j)}(\lambda_i)\,\frac{(A-\lambda_iI)^jG_i}{j!} = \sum_{i=1}^{s}\sum_{j=0}^{k_i-1}f^{(j)}(\lambda_i)\,Z_{ij} \qquad\text{where}\quad Z_{ij} = \frac{(A-\lambda_iI)^jG_i}{j!}$$

The Z_ij are often called the component matrices or the constituent matrices. □
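To make the formula tangible, the following MATLAB sketch (ours, not part of the paper) evaluates f(A) = Σ f(λ_i)G_i for a matrix with distinct eigenvalues, building the projectors G_i from the product formula above; the matrix and the choice f = exp are illustrative only.

% Hedged sketch: Sylvester's formula for a diagonalizable matrix.
A   = [2 -2; 1 -1];                 % distinct eigenvalues 0 and 1
lam = eig(A);  s = numel(lam);
fA  = zeros(size(A));
for i = 1:s
    Gi = eye(size(A));              % G_i = prod_{j~=i} (A - lam_j I)/(lam_i - lam_j)
    for j = [1:i-1, i+1:s]
        Gi = Gi*(A - lam(j)*eye(size(A)))/(lam(i) - lam(j));
    end
    fA = fA + exp(lam(i))*Gi;       % f = exp, defined on the spectrum
end
disp(norm(fA - expm(A)))            % ~ 1e-15: matches the built-in matrix exponential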

2.2. The Matrix Cauchy Integral Formula

The matrix Cauchy integral formula is an elegant result from complex analysis (and functional calculus), stating that if f : C → C is analytic in and on a simple closed contour Γ ⊂ C with positive (counterclockwise) orientation, and if λ is interior to Γ, then

$$f(\lambda) = \frac{1}{2\pi j}\oint_{\Gamma}\frac{f(z)}{z-\lambda}\,dz \qquad\text{and}\qquad f^{(k)}(\lambda) = \frac{k!}{2\pi j}\oint_{\Gamma}\frac{f(z)}{(z-\lambda)^{k+1}}\,dz$$

These formulas produce analogous representations of matrix functions. Suppose that A ∈ C^{n×n} with σ(A) = {λ₁, λ₂, …, λ_s}, and let index(λ_i) = k_i. For a complex variable μ, the resolvent of A ∈ C^{n×n} is defined to be the matrix R_A(μ) = (μI − A)^{-1}. If we assume that μ ∉ σ(A) and define the complex scalar function r(z) = (μ − z)^{-1}, then the evaluation of r(z) at A is given by r(A) = R_A(μ), so the spectral resolution theorem can be used to write (partial fraction expansion of R_A(μ))

$$r(A) = (\mu I - A)^{-1} = \sum_{i=1}^{s}\sum_{k=0}^{k_i-1}\frac{r^{(k)}(\lambda_i)}{k!}(A-\lambda_iI)^kG_i = \sum_{i=1}^{s}\sum_{k=0}^{k_i-1}\frac{(A-\lambda_iI)^kG_i}{(\mu-\lambda_i)^{k+1}}$$

If σ(A) is in the interior of a simple closed contour Γ, and if the contour integral of a matrix is defined by entrywise integration, then complex integration gives

$$\frac{1}{2\pi j}\oint_{\Gamma}f(\mu)R_A(\mu)\,d\mu = \frac{1}{2\pi j}\oint_{\Gamma}f(\mu)(\mu I-A)^{-1}d\mu = \sum_{i=1}^{s}\sum_{k=0}^{k_i-1}\Big\{\frac{1}{2\pi j}\oint_{\Gamma}\frac{f(\mu)}{(\mu-\lambda_i)^{k+1}}d\mu\Big\}(A-\lambda_iI)^kG_i = \sum_{i=1}^{s}\sum_{k=0}^{k_i-1}\frac{f^{(k)}(\lambda_i)}{k!}(A-\lambda_iI)^kG_i$$

According to the spectral resolution theorem, we deduce that whenever f is analytic in and on a simple closed contour Γ containing σ(A) in its interior, then

$$f(A) = \frac{1}{2\pi j}\oint_{\Gamma}f(\mu)(\mu I-A)^{-1}\,d\mu$$

Furthermore, if Γ_i is a simple closed contour enclosing λ_i but excluding all other eigenvalues of A, then the i-th spectral projector can be obtained as follows. Taking f(z) = 1 for all z ∈ C,

$$f(A) = \frac{1}{2\pi j}\oint_{\Gamma}(\mu I-A)^{-1}d\mu = \frac{1}{2\pi j}\Big[\sum_{i=1}^{s}\oint_{\Gamma_i}(\mu I-A)^{-1}d\mu\Big] = I$$

On the other hand, we know that I = G₁ + G₂ + ⋯ + G_s; that is,

$$G_i = \frac{1}{2\pi j}\oint_{\Gamma_i}(\mu I-A)^{-1}d\mu = \oint_{\Gamma_i}\frac{R_A(\mu)}{2\pi j}\,d\mu$$

Other types of functions we consider are those mapping C to C^{p×m}, such as the matrix transfer function H(λ) = B(λI − A)^{-1}C, for B ∈ C^{p×n}, A ∈ C^{n×n}, and C ∈ C^{n×m} (as discussed by Oskar Jakub Szymański [31], Lucian Milica, and Nicholas Higham [34,35,36]). An alternative reformulation of H(λ) produces the matrix fraction description H(λ) = N(λ)D^{-1}(λ); in this context, and as a special case of this work, we are interested in the computation of such matrix functions.

2.3. The Role of the Resolvent

The quantity A(λ) = λI − A is called a first-order matrix polynomial or the linearized form (some authors refer to this as a pencil linearization). The inverse of this quantity is a fundamental object known as the resolvent matrix R_A(λ) = (λI − A)^{-1}, as mentioned before. The resolvent is a central object in spectral theory: it reveals the existence of eigenvalues, indicates where eigenvalues can fall, and shows how sensitive these eigenvalues are to perturbations. As the inverse of the matrix (λI − A), the resolvent has as its entries rational functions of λ. The resolvent fails to exist at any λ ∈ σ(A). To better appreciate its nature, consider a simple example:

$$A = \begin{pmatrix}2 & -2\\ 1 & -1\end{pmatrix} \qquad R_A(\lambda) = (\lambda I - A)^{-1} = \begin{pmatrix}\lambda-2 & 2\\ -1 & \lambda+1\end{pmatrix}^{-1} = \frac{1}{\lambda(\lambda-1)}\begin{pmatrix}\lambda+1 & -2\\ 1 & \lambda-2\end{pmatrix}$$

Being comprised of rational functions, the resolvent has a complex derivative at any point λ ∈ C that is not in σ(A); hence, on any open set in C not containing an eigenvalue, the resolvent is an analytic function. Suppose that A ∈ C^{n×n} with σ(A) = {λ₁, λ₂, …, λ_s} and index(λ_i) = k_i = 1. Then

$$R_A(\lambda) = (\lambda I - A)^{-1} = \sum_{i=1}^{s}\sum_{k=0}^{k_i-1}\frac{(A-\lambda_iI)^kG_i}{(\lambda-\lambda_i)^{k+1}} = \sum_{i=1}^{s}(\lambda-\lambda_i)^{-1}G_i$$

Moreover, the eigenvalues of (λI − A)^{-1} are (λ − λ_i)^{-1} because

$$R_A(\lambda) = (\lambda I - A)^{-1} = \Big[\lambda\sum_{i=1}^{s}G_i - \sum_{i=1}^{s}\lambda_iG_i\Big]^{-1} = \Big[\sum_{i=1}^{s}(\lambda-\lambda_i)G_i\Big]^{-1} = \sum_{i=1}^{s}(\lambda-\lambda_i)^{-1}G_i$$
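As a quick numerical check (ours, using the 2×2 example above), the resolvent equals the projector expansion Σ(λ−λ_i)^{-1}G_i:

% Hedged MATLAB check of the partial-fraction form of the resolvent.
A  = [2 -2; 1 -1];  I2 = eye(2);       % eigenvalues 0 and 1
G1 = (A - 1*I2)/(0 - 1);               % projector for lambda_1 = 0
G2 = (A - 0*I2)/(1 - 0);               % projector for lambda_2 = 1
z  = 2 + 3j;                           % any point outside the spectrum
R  = inv(z*I2 - A);
disp(norm(R - (G1/(z-0) + G2/(z-1))))  % ~ 0, confirming the expansion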
Theorem 2.
The resolvent R_A(λ) of A ∈ C^{n×n} is a rational function of λ with poles at the points of the spectrum of A. Moreover, each λ_i ∈ σ(A) is a pole of R_A(λ) of algebraic multiplicity m_i (Σ_{i=1}^{s} m_i = n); that is,

$$R_A(\lambda) = (\lambda I - A)^{-1} = \frac{1}{p(\lambda)}\sum_{j}\sum_{i}\gamma_{ij}\,\lambda^iA^j \qquad\text{with}\qquad p(\lambda) = \prod_{i=1}^{s}(\lambda-\lambda_i)^{m_i}$$
Proof: 
Evidently p(λ) − p(μ) = [Σ_jΣ_i γ_{ij}λ^iμ^j](λ − μ) for some numbers γ_{ij}. Using the matrix polynomial definition (as in the Cayley–Hamilton theorem), formally substituting A for μ in the last relation implies

$$\big(p(\lambda)I - p(A)\big)(\lambda I - A)^{-1} = \sum_{j}\sum_{i}\gamma_{ij}\,\lambda^iA^j \qquad\Longrightarrow\qquad R_A(\lambda) = \frac{1}{p(\lambda)}\Big[\sum_{j}\sum_{i}\gamma_{ij}\,\lambda^iA^j\Big]$$

since, by the Cayley–Hamilton theorem, p(A) = 0. □

Properties of matrix resolvents:

$$R_A(\lambda) = \frac{1}{p(\lambda)}\sum_{j}\Big(\sum_{i}\gamma_{ij}\lambda^i\Big)A^j; \qquad R_A(\lambda) = \sum_{i=1}^{s}\sum_{k=0}^{k_i-1}\frac{(A-\lambda_iI)^kG_i}{(\lambda-\lambda_i)^{k+1}}; \qquad \frac{1}{2\pi j}\oint_{\Gamma}R_A(\mu)\,d\mu = I$$

$$R_A(\lambda) - R_A(\mu) = (\mu-\lambda)R_A(\lambda)R_A(\mu); \qquad \frac{d}{d\lambda}R_A(\lambda) = -R_A^2(\lambda); \qquad \frac{d^k}{d\lambda^k}R_A(\lambda) = k!\,(-1)^kR_A^{k+1}(\lambda)$$
For more information on matrices and polynomials, we refer the reader to the works of Gantmacher (1960) and Householder (1975) [3,8]. In the next subsection, we see how important the derivative of the determinant is in matrix theory.

2.4. Derivative of Determinant and Traces

The derivative of the determinant is usually derived from Jacobi’s formula. This is a fundamental result in matrix calculus that establishes a relationship between the derivative of the determinant of a matrix A and the trace of its adjugate matrix times its derivative. It is particularly useful in the study of dynamical systems and optimization. In particular, it is used in systems where the determinant is used to assess properties like stability (e.g., eigenvalue behavior) and to study Lyapunov functions. In optimization, it is used in problems that involve determinant-based constraints. In addition, it is used in continuum mechanics, particularly in problems involving deformation, elasticity, and fluid dynamics [37]. In these contexts, the determinant of the deformation gradient or Jacobian matrix often plays a critical role.
Theorem 3.
Let A(t) be a differentiable square matrix of size n×n, depending on a parameter t. Jacobi’s first formula for the derivative of the determinant states that

$$\frac{d}{dt}\det(A(t)) = \sum_{i=1}^{n}\det\!\Big((I-e_ie_i^T)A(t) + e_ie_i^T\frac{dA(t)}{dt}\Big) = \sum_{i=1}^{n}\det(D_i)$$

where the matrix D_i is identical to A except that the entries in the i-th row are replaced by their derivatives (the derivative of a vector being the vector of the derivatives of its elements).
Proof:
Let us start from the well-known identity det(I + xy^T) = 1 + y^Tx; combining this with trace(xy^T) = y^Tx, we get det(I + xy^T) = 1 + trace(xy^T), in other words det(I + M) = 1 + trace(M) with M = xy^T. Notice that this last formula is correct only for rank-one matrices, but we can generalize it to the arbitrary case (rank(M) > 1) by using a Taylor series (i.e., a small perturbation): det(I + hM) = 1 + h·trace(M) + O(h²), or trace(M) = lim_{h→0}[det(I + hM) − 1]/h. The key observation is that near the identity the determinant behaves like the trace; more precisely, d[det(I + sM)]/ds |_{s=0} = trace(M). Combining det(I + hM) = 1 + h·trace(M) + O(h²) with det(AB) = det(A)det(B), and supposing that A and B are of the same dimension with A invertible, we get det(A + hB) = det(A)det(I + hA^{-1}B) = det(A)(1 + h·trace(A^{-1}B)) + O(h²). Now, if we substitute M(t) = A^{-1}(t)[dA(t)/dt], then we have

$$\operatorname{trace}\!\Big(A^{-1}(t)\frac{dA}{dt}\Big) = \lim_{h\to 0}\frac{1}{h}\Big\{\det\!\Big(I + hA^{-1}(t)\frac{dA}{dt}\Big) - 1\Big\} = \lim_{h\to 0}\frac{1}{h}\Big\{\det\!\Big(I + hA^{-1}(t)\Big[\frac{1}{h}\big(A(t+h)-A(t)\big)\Big]\Big) - 1\Big\} = \lim_{h\to 0}\frac{1}{h}\Big\{\det\!\big(A^{-1}(t)A(t+h)\big) - 1\Big\} = \det(A^{-1})\lim_{h\to 0}\frac{1}{h}\Big\{\det\!\big(A(t+h)\big) - \det\!\big(A(t)\big)\Big\} = \det(A^{-1})\,\frac{d}{dt}\det(A(t))$$
This means that
$$\frac{d}{dt}\det(A(t)) = \det(A(t))\cdot\operatorname{trace}\!\Big[A^{-1}(t)\frac{dA}{dt}\Big] \qquad\Longleftrightarrow\qquad \frac{d}{dt}\det(A(t)) = \operatorname{trace}\!\Big[\operatorname{adj}(A(t))\frac{dA}{dt}\Big]$$

This last result can also be written in the following useful form:

$$\frac{d}{dt}\det(A(t)) = \operatorname{trace}\!\Big(\operatorname{adj}(A(t))\frac{dA}{dt}\Big) \qquad\Longleftrightarrow\qquad \frac{d}{dt}\log_e\!\big[\det(A(t))\big] = \operatorname{trace}\!\Big(A^{-1}(t)\frac{dA}{dt}\Big)$$

It is sometimes necessary to compute the derivative of a determinant whose entries are differentiable functions (e.g., in continuum mechanics). It is therefore useful to give a practical method of computation, and here we derive a more powerful way of doing that. Since trace(AB) = Σ_{i=1}^{n} e_i^T A B e_i, we can write

$$\frac{d}{dt}\det(A(t)) = \operatorname{trace}\!\Big(\operatorname{adj}(A(t))\frac{dA}{dt}\Big) = \sum_{i=1}^{n}e_i^T\frac{dA}{dt}\operatorname{adj}(A(t))\,e_i = \sum_{i=1}^{n}\Big(e_i^T\frac{dA}{dt}A^{-1}(t)\,e_i\Big)\det(A(t)) = \sum_{i=1}^{n}\det\!\Big(I + e_i\Big\{e_i^T\frac{dA}{dt}A^{-1}(t) - e_i^T\Big\}\Big)\det(A(t)) = \sum_{i=1}^{n}\det\!\Big(A(t) + e_ie_i^T\frac{dA}{dt} - e_ie_i^TA(t)\Big)$$

Finally, if we define D_i = (I − e_ie_i^T)A(t) + e_ie_i^T[dA(t)/dt], then we arrive at

$$\frac{d}{dt}\det(A(t)) = \sum_{i=1}^{n}\det\!\Big[(I-e_ie_i^T)A(t) + e_ie_i^T\frac{dA(t)}{dt}\Big] \qquad\text{or}\qquad \frac{d}{dt}\det(A(t)) = \sum_{i=1}^{n}\det(D_i)$$
which proves Jacobi’s formula. □
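As a sanity check, the following MATLAB fragment (ours; the family A(t) is an arbitrary assumption) compares a central finite difference of det(A(t)) with trace(adj(A)·dA/dt):

% Hedged numerical verification of Jacobi's formula.
t   = 0.7;  h = 1e-6;
At  = @(t) [2+t, sin(t); t^2, 4+cos(t)];   % a differentiable 2x2 family
dAt = @(t) [1, cos(t); 2*t, -sin(t)];      % its elementwise derivative
lhs = (det(At(t+h)) - det(At(t-h)))/(2*h); % numerical d/dt det(A(t))
A   = At(t);
rhs = trace(det(A)*inv(A)*dAt(t));         % trace(adj(A) dA/dt), adj(A) = det(A) inv(A)
disp(abs(lhs - rhs))                        % agreement up to O(h^2)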

2.5. Roots of Matrix Polynomials in the Complex Plane

In the theory of complex analysis, it is well known that if a complex function f(z) is analytic in a region D and does not vanish identically, then the function f′(λ)/f(λ) is called the logarithmic derivative of f(λ). The isolated singularities of the logarithmic derivative occur at the isolated singularities of f(λ) and, in addition, at the zeros of f(λ). The principle of the argument results from an application of the residue theorem to the logarithmic derivative. The contour integral of this logarithmic derivative f′(λ)/f(λ) is equal to the difference between the number of zeros and poles of a complex rational function f(λ); this is known as Cauchy’s argument principle (as discussed by Peter Henrici, see [38,39]). Specifically, if a complex rational function f(z) is meromorphic inside and on some closed contour C, and has no zeros or poles on C, then

$$Z - P = \frac{1}{2\pi i}\oint_{C}\frac{f'(\lambda)}{f(\lambda)}\,d\lambda$$

where Z and P indicate, respectively, the number of zeros and poles of the function f(z) inside the contour C, with each pole and zero counted with its multiplicity. The argument principle requires the contour C to be simple, that is, without self-intersections, and traversed counter-clockwise. If the complex function f(z) is analytic rather than rational, then the number of zeros inside the contour C is given by

$$Z = \frac{1}{2\pi i}\oint_{C}\frac{f'(\lambda)}{f(\lambda)}\,d\lambda$$
Now, by means of matrix theory, we can extend this result to the matrix polynomial case.
Theorem 4. 
The number of latent roots of the regular matrix polynomial A(λ) in the domain D enclosed by a contour C is given by

$$Z = \frac{1}{2\pi i}\oint_{C}\operatorname{trace}\big(A^{-1}(\lambda)A'(\lambda)\big)\,d\lambda \qquad\text{with}\qquad A'(\lambda) = \frac{d}{d\lambda}A(\lambda)$$
Proof: 
Let us put Δ(λ) = det(A(λ)) and let c_{ij} be the cofactor of the entry a_{ij}(λ) in Δ(λ), so that c_i^T = e_i^T Adj(A(λ)) = [c_{i1} c_{i2} ⋯ c_{im}], i = 1, 2, …, m. From

$$\det(A(\lambda))\,I = \operatorname{Adj}(A(\lambda))A(\lambda) \;\Longrightarrow\; e_i^T\operatorname{Adj}(A(\lambda))A(\lambda) = e_i^T\det(A(\lambda)) \;\Longrightarrow\; c_i^TA(\lambda) = \Delta(\lambda)\,e_i^T$$

where e_i has a one for its i-th element and zeros elsewhere. We also have

$$\Delta'(\lambda) = \frac{d}{d\lambda}\Delta(\lambda) = \sum_{i=1}^{m}\Delta_i(\lambda)$$

where Δ_i(λ) is the determinant whose i-th column is A_i′(λ) (the derivative of the i-th column of A(λ)) and whose remaining columns are those of A(λ). Expanding Δ_i(λ) by the i-th column, we have Δ_i(λ) = c_i^TA_i′(λ). Now, A(λ)A^{-1}(λ) = I implies that, provided Δ(λ) ≠ 0, A′(λ) = A(λ)(A^{-1}(λ)A′(λ)), and hence A_i′(λ) = A(λ){A^{-1}(λ)A′(λ)}_i, where {·}_i denotes the i-th column. This leads to

$$\Delta_i(\lambda) = c_i^TA(\lambda)\{A^{-1}(\lambda)A'(\lambda)\}_i = \Delta(\lambda)\,e_i^T\{A^{-1}(\lambda)A'(\lambda)\}_i$$

We then find that

$$\Delta'(\lambda) = \sum_{i=1}^{m}\Delta_i(\lambda) = \Delta(\lambda)\sum_{i=1}^{m}e_i^T\{A^{-1}(\lambda)A'(\lambda)\}_i = \Delta(\lambda)\operatorname{trace}\big(A^{-1}(\lambda)A'(\lambda)\big)$$

Equivalently, by the determinant derivative properties,

$$\frac{\Delta'(\lambda)}{\Delta(\lambda)} = \operatorname{trace}\big(A^{-1}(\lambda)A'(\lambda)\big) \qquad\Longleftrightarrow\qquad \frac{d}{d\lambda}\det(A(\lambda)) = \operatorname{trace}\Big(\operatorname{Adj}(A(\lambda))\frac{dA(\lambda)}{d\lambda}\Big)$$

Finally, since Δ(λ) = det(A(λ)) is analytic in any domain of the complex plane, the number of its roots inside a closed contour is

$$Z = \frac{1}{2\pi i}\oint_{C}\frac{\Delta'(\lambda)}{\Delta(\lambda)}\,d\lambda = \frac{1}{2\pi i}\oint_{C}\operatorname{trace}\big(A^{-1}(\lambda)A'(\lambda)\big)\,d\lambda \qquad\square$$
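The theorem can be tested numerically. The sketch below (our assumed example, with plain trapezoidal quadrature on a circle) counts the latent roots of a 2×2 matrix polynomial inside a contour:

% Hedged MATLAB sketch: latent-root counting via Theorem 4.
A  = @(z) [z^2+1, z; 0, z^2-4];   % det A = (z^2+1)(z^2-4); latent roots +-1i, +-2
dA = @(z) [2*z, 1; 0, 2*z];
r = 1.5;  N = 2000;  th = 2*pi*(0:N-1)/N;
z  = r*exp(1i*th);  dz = 1i*r*exp(1i*th);  % circle of radius 1.5 and dz/dtheta
g  = arrayfun(@(k) trace(A(z(k))\dA(z(k))), 1:N);
Z  = sum(g.*dz)*(2*pi/N)/(2i*pi);
disp(real(Z))                              % ~ 2: only +-1i lie inside the circle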

3. Problem Statement and Proposition

The mathematical modeling of physical systems is universally represented by differential equations of the form [40,41,42,43,44] ẋ(t) = f(x(t), u(t), t), y(t) = h(x(t), u(t), t) (i.e., matrix differential systems), where t is time, f(·) ∈ R^n and h(·) ∈ R^p are nonlinear functions (system dynamics and output functions), x(t) ∈ R^n represents the state vector, u(t) ∈ R^m represents the input, and y(t) ∈ R^p represents the output. When these systems operate near nominal states or trajectories, they can often be linearized around some operating points. If the system is time-invariant, then the linearization simplifies the model into a Linear Time-Invariant (LTI) system Eẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t). For LTI systems, the input–output relationship is compactly expressed in the frequency domain using matrix fraction descriptions (MFDs):

1. Left MFD: y(λ) = [D_L^{-1}(λ)N_L(λ)]·u(λ) with D_L(λ) ∈ R^{p×p}[λ], N_L(λ) ∈ R^{p×m}[λ]
2. Right MFD: y(λ) = [N_R(λ)D_R^{-1}(λ)]·u(λ) with D_R(λ) ∈ R^{m×m}[λ], N_R(λ) ∈ R^{p×m}[λ]

where H(λ) = [D_L^{-1}(λ)N_L(λ)] = [N_R(λ)D_R^{-1}(λ)] ∈ R^{p×m}(λ) is the transfer function and the matrices N_L(λ), D_L(λ), N_R(λ), and D_R(λ) are matrix polynomials in the variable λ. The inverses of the matrix polynomials, D_L^{-1}(λ) or D_R^{-1}(λ), are critical for computing H(λ).
The computation of the inverses (D_L^{-1}(λ) or D_R^{-1}(λ)) is a nontrivial task due to the complexities of matrix polynomial arithmetic over the field of complex numbers [4,5,6,41]. In this paper, we propose an efficient algorithm for computing the inverse of matrix polynomials over the complex field. This algorithm addresses the computational challenges associated with matrix polynomial inversion, enabling more accurate and faster computation of transfer functions in the analysis and control of LTI systems. Key features of the proposed algorithm:
  • Applicability to (λI − A)^{-1} and to complex matrix polynomials (i.e., of higher orders).
  • Applicability to (λE − A)^{-1}, even for singular (defective) matrix polynomials.
  • Enhanced computational efficiency compared to existing methods.
  • Applicability to both left   MFD and right   MFD formulations.
  • Verification through theoretical analysis and numerical examples.
The Leverrier–Faddeev algorithm provides a well-established and solid framework for the inversion of matrices A ∈ R^{n×n}. Building upon this, researchers have introduced new algorithms for inverting expressions R(λ) = (λI − A)^{-1} ∈ R^{n×n}(λ), called resolvents, which arise in control and dynamic systems. Additionally, others have focused on the inversion of 2nd-degree matrix polynomials, D(λ) = Mλ² + Cλ + K ∈ R^{n×n}[λ], commonly encountered in mechanical and structural dynamics. Furthermore, efforts have been made to tackle the general case of matrix polynomial inversion, but these approaches often suffer from high computational complexity and reduced efficiency. Here, we recall some existing algorithms:

[The existing algorithms are listed in the original publication as an image.]

It can be shown that the complexity of the Leverrier–Faddeev algorithm for constant matrices is O(n^{β+1}), where β lies in the interval [2, 2.37286) [45]. The total time required for the algorithm of Predrag S. Stanimirović is O(n⁵ × r²), where r = deg(A(s)A^H(s)) [21,22,23,24,25]. The problems with this algorithm are its extensive memory usage, longer computational time, and its reliance on symbolic programming (e.g., Mathematica) for resolution. It also requires the computation of the rank and index of polynomial matrices and the estimation of the degrees of polynomials, all of which contribute to its overall complexity. In contrast, our proposed method for matrix polynomial inversion is both simple and highly efficient. In the next section, we present an approach that addresses the limitations of previous methods, offering a solution for a broader range of applications.

4. The Proposed Generalized Leverrier-Faddeev Algorithm

The classical Leverrier–Faddeev algorithm offers an efficient recursive method for computing the characteristic polynomial and inverse of a square matrix. In this section, we extend the algorithm to regular matrix polynomials, which frequently arise in control theory and system modeling. Unlike existing symbolic methods that are often computationally expensive for high-dimensional systems, the proposed generalization is fully numerical and recursive. This makes it particularly efficient and well suited for applications requiring stability and scalability, such as real-time control and large-scale dynamic systems.
Theorem 5.
Let A(λ) = Σ_{i=0}^{ℓ} A_iλ^{ℓ−i} ∈ R^{m×m}[λ] be a regular matrix polynomial with A₀ = I, and define its characteristic polynomial Δ(λ) = det(A(λ)) = Σ_{i=0}^{n} α_iλ^{n−i}, where n = mℓ and α₀ = 1. Then, the inverse A^{-1}(λ) is given by A^{-1}(λ) = N(λ)/Δ(λ), where the numerator is N(λ) = Σ_{i=0}^{n−ℓ} N_{i+1}λ^{n−ℓ−i}. The coefficients α₁, …, α_n and the matrices N_i ∈ R^{m×m} are computed recursively as follows: N₁ = I, α₁ = trace(A₁N₁), and

$$N_{k+1} = \alpha_kI_m - \sum_{i=1}^{k}A_iN_{k-i+1}; \qquad \alpha_{k+1} = \frac{1}{k+1}\operatorname{trace}\Big[\sum_{i=1}^{k+1}i\,A_iN_{k-i+2}\Big]; \qquad k = 1,\dots,n-1$$

under the conventions A_k = 0 for k > ℓ and N_{k+1} = 0 for k > n − ℓ.
Proof: 
Given an ℓ-th degree regular matrix polynomial A(λ) = λ^ℓI_m + Σ_{i=1}^{ℓ}A_iλ^{ℓ−i} with A_i ∈ R^{m×m}, and its characteristic polynomial Δ(λ) = λ^n + Σ_{i=1}^{n}α_iλ^{n−i} of degree n = mℓ, if we assume that A^{-1}(λ) = N(λ)/Δ(λ), then the problem is to find the matrix polynomial N(λ) = Σ_{i=0}^{n−ℓ}N_{i+1}λ^{n−ℓ−i}. We can write

$$A^{-1}(\lambda) = \frac{N(\lambda)}{\Delta(\lambda)} \qquad\Longleftrightarrow\qquad A(\lambda)N(\lambda) = \Delta(\lambda)\,I_m$$

where N_i ∈ R^{m×m}, α_i ∈ R, and n = mℓ.
Expanding this equation and equating coefficients, we obtain N₁ = I and

$$N_{k+1} = \begin{cases}\alpha_kI_m - \sum_{i=1}^{k}A_iN_{k+1-i}, & k = 1, 2, \dots, \ell-1\\[4pt] \alpha_kI_m - \sum_{i=1}^{\ell}A_iN_{k+1-i}, & k = \ell, \dots, n-\ell\\[4pt] 0, & k = n-\ell+1, \dots, n\end{cases}$$

It should be noted that if the coefficients α₁, …, α_n of the characteristic polynomial Δ(λ) were known, the last equation would already constitute an algorithm for computing the matrices N_i. In this theorem, however, we propose a recursive algorithm that computes N_i and α_i in parallel, even though the coefficients α_i are not known in advance. Following Davidenko and Peter Lancaster in [2,4,5,6] and using Jacobi’s formula, we write Δ′(λ)/Δ(λ) = trace(A^{-1}(λ)A′(λ)) = trace(A′(λ)A^{-1}(λ)), so

$$\lambda\,\frac{\Delta'(\lambda)}{\Delta(\lambda)} = \operatorname{trace}\big(\lambda A^{-1}(\lambda)A'(\lambda)\big)$$

Moreover, n = ℓ·trace(I_m) = trace(ℓ A^{-1}(λ)A(λ)); multiplying by Δ(λ), we obtain nΔ(λ) = trace(ℓ N(λ)A(λ)). Combining the two relations gives λΔ′(λ) − nΔ(λ) = trace(N(λ){λA′(λ) − ℓA(λ)}), so

$$\lambda\Delta'(\lambda) - n\Delta(\lambda) = \operatorname{trace}\big(N(\lambda)B(\lambda)\big) = \operatorname{trace}\big(B(\lambda)N(\lambda)\big)$$

where B(λ) = λA′(λ) − ℓA(λ) = −Σ_{i=0}^{ℓ} i A_iλ^{ℓ−i} and λΔ′(λ) − nΔ(λ) = −Σ_{i=0}^{n} i α_iλ^{n−i}. Expanding the equation and equating identical powers of λ, we obtain

$$\alpha_{k+1} = \frac{1}{k+1}\operatorname{trace}\Big[\sum_{i=1}^{k+1}i\,A_iN_{k-i+2}\Big] \qquad\text{with}\qquad k = 1,\dots,n-1 \quad\text{and}\quad \alpha_1 = \operatorname{trace}(A_1N_1)$$

This recursive relation yields both the coefficients α_k of the characteristic polynomial and the matrices N_k defining the inverse A^{-1}(λ). The construction terminates at k = n, ensuring that A(λ)N(λ) = Δ(λ)I_m holds identically. This completes the proof. □
The above derivation is summarized in Algorithm 1.
Algorithm 1: (Generalized Leverrier–Faddeev Algorithm)
[Algorithm 1 is presented as an image in the original publication.]
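To make the recursion concrete, the following MATLAB sketch is our transcription of Algorithm 1 from Theorem 5; the function name and the packing of the coefficients A₀, …, A_ℓ into a three-dimensional array are our own illustrative choices, not the paper’s listing.

% Hedged sketch of the generalized Leverrier-Faddeev recursion (Theorem 5).
% AA(:,:,i+1) holds A_i, with AA(:,:,1) = A_0 = I. Returns alpha(k+1) = alpha_k
% and NN(:,:,k) = N_k (numerator blocks, highest power of lambda first).
function [alpha, NN] = gen_leverrier_faddeev(AA)
    [m, ~, Lp1] = size(AA);  L = Lp1 - 1;  n = m*L;
    alpha = zeros(1, n+1);  alpha(1) = 1;            % alpha_0 = 1
    NN = zeros(m, m, n-L+1);  NN(:,:,1) = eye(m);    % N_1 = I
    alpha(2) = trace(AA(:,:,2)*NN(:,:,1));           % alpha_1 = tr(A_1 N_1)
    for k = 1:n-1
        if k <= n-L                                  % N_{k+1} = 0 for k > n-L
            S = alpha(k+1)*eye(m);                   % alpha_k I_m
            for i = 1:min(k, L)
                S = S - AA(:,:,i+1)*NN(:,:,k-i+1);   % - A_i N_{k-i+1}
            end
            NN(:,:,k+1) = S;
        end
        T = zeros(m);                                % sum_i i A_i N_{k-i+2}
        for i = 1:min(k+1, L)
            if k-i+2 <= n-L+1
                T = T + i*AA(:,:,i+1)*NN(:,:,k-i+2);
            end
        end
        alpha(k+2) = trace(T)/(k+1);                 % alpha_{k+1}
    end
end

With the coefficients of Example 1 below, calling [alpha, NN] = gen_leverrier_faddeev(cat(3, eye(3), A1, A2, A3)) reproduces the displayed recursion step by step.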
The classical Leverrier–Faddeev algorithm primarily deals with constant matrices. Our algorithm handles matrix polynomials A(λ) = Σ_{i=0}^{ℓ}A_iλ^{ℓ−i}, which is a significant generalization. This can be particularly useful in applications involving differential equations and control systems, where matrix polynomials naturally arise. By handling A_k = 0 for k > ℓ, the algorithm dynamically adjusts for degrees higher than the polynomial order. Similarly, setting N_{k+1} = 0 for k > n − ℓ ensures numerical reliability and aligns with the reduced numerator degree n − ℓ. The recursive nature of N_{k+1} and α_{k+1} closely follows the pattern of updating coefficients and matrices in symbolic computation. This recursion preserves the algorithm’s efficiency while allowing flexibility. The calculation of α_{k+1} using traces is a hallmark of Leverrier-type algorithms. Its use in a generalized polynomial context highlights the algorithm’s potential to compute determinants and adjugates in more complex scenarios. While the algorithm is recursive and theoretically efficient, the involvement of matrix traces, multiplications, and summations over several terms could lead to high computational costs for large matrices. Efficient implementation, possibly leveraging parallelism or sparse matrix optimizations, would be critical for practical use. The algorithm’s ability to compute Δ(λ) (the characteristic polynomial) and N(λ) (related to the adjugate or inverse, depending on Δ(λ)) makes it a powerful tool for systems analysis, eigenvalue computation, and control theory. As a generalization, this approach could pave the way for algorithms tailored to specific types of matrix polynomials, such as Hermitian, Toeplitz, or sparse matrices, further enhancing its utility. Our algorithm has strong potential for theoretical and applied advancements, particularly in fields requiring the computation of matrix polynomials. As an application, consider the following example.
Example 1: 
Given a monic matrix polynomial A(λ) = I₃λ³ + A₁λ² + A₂λ + A₃ with A_i ∈ R^{3×3}, the above algorithm yields

$$A^{-1}(\lambda) = \frac{N(\lambda)}{\Delta(\lambda)} = \frac{N_1\lambda^6 + N_2\lambda^5 + N_3\lambda^4 + N_4\lambda^3 + N_5\lambda^2 + N_6\lambda + N_7}{\alpha_0\lambda^9 + \alpha_1\lambda^8 + \alpha_2\lambda^7 + \alpha_3\lambda^6 + \alpha_4\lambda^5 + \alpha_5\lambda^4 + \alpha_6\lambda^3 + \alpha_7\lambda^2 + \alpha_8\lambda + \alpha_9}$$

where ℓ = 3, m = 3, n = 9.

$$\begin{aligned} N_1 &= I, & \alpha_1 &= \operatorname{trace}(A_1N_1)\\ N_2 &= \alpha_1I - A_1N_1, & \alpha_2 &= \tfrac{1}{2}\operatorname{trace}(A_1N_2 + 2A_2N_1)\\ N_3 &= \alpha_2I - (A_1N_2 + A_2N_1), & \alpha_3 &= \tfrac{1}{3}\operatorname{trace}(A_1N_3 + 2A_2N_2 + 3A_3N_1)\\ N_4 &= \alpha_3I - (A_1N_3 + A_2N_2 + A_3N_1), & \alpha_4 &= \tfrac{1}{4}\operatorname{trace}(A_1N_4 + 2A_2N_3 + 3A_3N_2)\\ N_5 &= \alpha_4I - (A_1N_4 + A_2N_3 + A_3N_2), & \alpha_5 &= \tfrac{1}{5}\operatorname{trace}(A_1N_5 + 2A_2N_4 + 3A_3N_3)\\ N_6 &= \alpha_5I - (A_1N_5 + A_2N_4 + A_3N_3), & \alpha_6 &= \tfrac{1}{6}\operatorname{trace}(A_1N_6 + 2A_2N_5 + 3A_3N_4)\\ N_7 &= \alpha_6I - (A_1N_6 + A_2N_5 + A_3N_4), & \alpha_7 &= \tfrac{1}{7}\operatorname{trace}(A_1N_7 + 2A_2N_6 + 3A_3N_5)\\ & & \alpha_8 &= \tfrac{1}{8}\operatorname{trace}(2A_2N_7 + 3A_3N_6)\\ & & \alpha_9 &= \tfrac{1}{9}\operatorname{trace}(3A_3N_7)\end{aligned}$$
As a numerical application consider the following coefficient matrices
A₁ = [10.3834 7.9702 7.3731; 0.3884 14.2775 4.0121; 1.5983 7.0882 2.3391];  A₂ = [31.1427 25.0780 28.6260; 0.5948 43.9776 14.7398; 7.6854 23.1468 1.6814];  A₃ = [34.8866 1.9819 25.9976; 3.2351 26.7417 12.1195; 14.4444 0.2493 3.6046]
Applying the algorithm results in the following coefficient matrices for the inverse
N₂ = [16.6166 7.9702 7.3731; 0.3884 12.7225 4.0121; 1.5983 7.0882 24.6609];
N₃ = [104.1316 95.9827 101.9174; 7.9160 65.5337 53.5358; 27.7524 84.0073 220.2736];
N₄ = [299.346 416.846 540.859; 58.367 189.088 274.614; 181.258 360.000 948.424];
N₅ = [365.23 773.18 1368.96; 195.77 359.87 673.87; 550.16 666.03 2105.41];
N₆ = [80.717 521.835 1634.103; 298.468 442.373 783.585; 765.723 468.273 2287.089];
N₇ = [93.372 13.625 719.241; 163.397 249.768 338.704; 385.461 37.324 939.339];
α₀ = 1; α₁ = 27; α₂ = 316.5; α₃ = 2110.5; α₄ = 8805; α₅ = 23786; α₆ = 41496; α₇ = 44951; α₈ = 27343; α₉ = 7087.5.
As a further extension, we consider the connection to the Block companion form for matrix polynomials.

5. Connection to the Block Companion Forms

In multivariable control systems, it is well known that the transfer matrix H(λ) can be expressed either in state space or by a matrix fraction description. In order to obtain the inverse of A(λ), assume that we are dealing with a multiple-input multiple-output (MIMO) system whose numerator is the identity, that is,

$$H(\lambda) = B(\lambda)\big(A(\lambda)\big)^{-1} = \big(A(\lambda)\big)^{-1} \qquad\text{where}\qquad B(\lambda) = I$$

On the other hand, the rational matrix function H(λ) can be written as

$$H(\lambda) = \big(A(\lambda)\big)^{-1} = C_c(\lambda I_n - A_c)^{-1}B_c$$

where

$$C_c = \begin{pmatrix}I_m & 0 & \cdots & 0\end{pmatrix},\qquad A_c = \begin{pmatrix}0 & I_m & 0 & \cdots & 0\\ 0 & 0 & I_m & \cdots & 0\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & I_m\\ -A_\ell & -A_{\ell-1} & -A_{\ell-2} & \cdots & -A_1\end{pmatrix},\qquad B_c = \begin{pmatrix}0\\ \vdots\\ 0\\ I_m\end{pmatrix}$$
Theorem 6. 
Let A(λ) = Σ_{i=0}^{ℓ}A_iλ^{ℓ−i} ∈ R^{m×m}[λ] be a regular matrix polynomial with A₀ = I, and define its characteristic polynomial Δ(λ) = det(λI − A_c) = Σ_{i=0}^{n}α_iλ^{n−i}, where n = mℓ and α₀ = 1. Let the triple of matrices (A_c, B_c, C_c) be the companion form of A(λ). Then the inverse A^{-1}(λ) is given by A^{-1}(λ) = N(λ)/Δ(λ), where N(λ) = Σ_{i=1}^{n}N_iλ^{n−i}. The coefficients α₁, …, α_n and matrices N_i ∈ R^{m×m} are computed recursively as follows:

$$\alpha_k = -\frac{1}{k}\operatorname{trace}\Big[\sum_{i=0}^{k-1}\alpha_iA_c^{k-i}\Big]; \qquad N_k = C_c\Big[\sum_{i=0}^{k-1}\alpha_iA_c^{k-i-1}\Big]B_c; \qquad k = 1,\dots,n$$
Proof: 
Let us define

$$\big(A(\lambda)\big)^{-1} = \frac{N(\lambda)}{\Delta(\lambda)} \qquad\text{and}\qquad (\lambda I - A_c)^{-1} = \frac{\operatorname{Adj}(\lambda I - A_c)}{\det(\lambda I - A_c)} = \frac{R(\lambda)}{\Delta(\lambda)}$$

with

$$R(\lambda) = \sum_{i=1}^{n}R_i\lambda^{n-i},\qquad N(\lambda) = \sum_{i=1}^{n}N_i\lambda^{n-i},\qquad \Delta(\lambda) = \sum_{i=0}^{n}\alpha_i\lambda^{n-i},\qquad \alpha_0 = 1,\qquad N_i = 0\ \text{for}\ i < \ell$$

Then, the following results are obtained:

$$A^{-1}(\lambda) = C_c(\lambda I - A_c)^{-1}B_c = \frac{C_cR(\lambda)B_c}{\Delta(\lambda)} = \frac{\sum_{i=1}^{n}(C_cR_iB_c)\lambda^{n-i}}{\Delta(\lambda)} \;\Longrightarrow\; N_i = C_cR_iB_c$$

From the usual Leverrier–Faddeev algorithm, starting from R₁ = I, we have

$$\begin{cases}R_{i+1} = \alpha_iI + A_cR_i & \text{for } 1 \le i \le n-1\\ 0 = \alpha_nI + A_cR_n & \text{for } i = n\end{cases} \qquad\text{and}\qquad \begin{cases}\alpha_0 = 1\\ \alpha_i = -\dfrac{1}{i}\operatorname{trace}(A_cR_i)\end{cases}$$

Back-substitution and recursive evaluation of these formulas result in

$$\alpha_k = -\frac{1}{k}\operatorname{trace}\Big(\sum_{i=0}^{k-1}\alpha_iA_c^{k-i}\Big); \qquad N_k = C_c\Big(\sum_{i=0}^{k-1}\alpha_iA_c^{k-i-1}\Big)B_c$$
This recursive relation yields both the coefficients α k of the characteristic polynomial and the matrices N k defining the inverse A 1 ( λ ) so it completes the proof. □
The above developments are summarized in Algorithm 2.
Algorithm 2: Generalized Leverrier–Faddeev Algorithm by Companion Forms
[Algorithm 2 is presented as an image in the original publication.]
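A compact MATLAB transcription of Algorithm 2 (ours; the interface mirrors the sketch after Algorithm 1) builds the block companion triple of a monic A(λ) and runs the classical recursion on A_c:

% Hedged sketch of the companion-form variant (Theorem 6), monic A(lambda).
function [alpha, NN] = glf_companion(AA)     % AA(:,:,i+1) = A_i, AA(:,:,1) = I
    [m, ~, Lp1] = size(AA);  L = Lp1 - 1;  n = m*L;
    Ac = [zeros(m*(L-1), m), eye(m*(L-1)); zeros(m, m*L)];
    for i = 1:L
        Ac(end-m+1:end, (i-1)*m+1:i*m) = -AA(:,:,L-i+2);  % bottom row: -A_L ... -A_1
    end
    Bc = [zeros(m*(L-1), m); eye(m)];  Cc = [eye(m), zeros(m, m*(L-1))];
    alpha = zeros(1, n+1);  alpha(1) = 1;    % alpha(k+1) stores alpha_k
    NN = zeros(m, m, n);
    R  = eye(m*L);                           % R_1 = I
    for k = 1:n
        NN(:,:,k)  = Cc*R*Bc;                % N_k = C_c R_k B_c
        alpha(k+1) = -trace(Ac*R)/k;         % alpha_k = -(1/k) tr(A_c R_k)
        R = Ac*R + alpha(k+1)*eye(m*L);      % R_{k+1} = A_c R_k + alpha_k I
    end
end

The price of this variant is forming the n×n companion matrix; the recursion itself is the unmodified constant-matrix Leverrier–Faddeev scheme.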
Example 2: 
Given a matrix polynomial A(λ) = A₀λ³ + A₁λ² + A₂λ + A₃ with A_i ∈ R^{3×3}, using the above algorithm we get

$$A^{-1}(\lambda) = \frac{N(\lambda)}{\Delta(\lambda)} = \frac{N_1\lambda^8 + N_2\lambda^7 + N_3\lambda^6 + N_4\lambda^5 + N_5\lambda^4 + N_6\lambda^3 + N_7\lambda^2 + N_8\lambda + N_9}{\alpha_0\lambda^9 + \alpha_1\lambda^8 + \alpha_2\lambda^7 + \alpha_3\lambda^6 + \alpha_4\lambda^5 + \alpha_5\lambda^4 + \alpha_6\lambda^3 + \alpha_7\lambda^2 + \alpha_8\lambda + \alpha_9}$$

where ℓ = 3, m = 3, and n = 9, with ℓ the degree of A(λ), m the size of the A_i, and n = mℓ.
Consider the following coefficient matrices
A₁ = [26.6527 3.3590 14.0226; 15.2183 11.1736 12.2758; 25.3291 3.9304 10.8264];  A₂ = [140.295 32.414 98.168; 100.753 49.619 86.468; 166.468 41.315 114.010];  A₃ = [170.645 69.366 153.679; 131.606 77.383 135.327; 215.960 92.055 195.753]
The result of the second algorithm is N₁ = N₂ = 0, N₃ = I, and
N₄ = [0.3473 3.3590 14.0226; 15.2183 15.8264 12.2758; 25.3291 3.9304 37.8264];
N₅ = [137.111 51.163 213.617; 246.930 92.912 200.253; 389.671 60.991 436.604];
N₆ = [1082.44 300.66 1258.01; 1539.45 238.14 1255.40; 2308.07 364.44 2306.40];
N₇ = [3447.80 846.59 3564.41; 4582.17 202.80 3757.73; 6552.32 1042.16 6167.00];
N₈ = [4984.78 1132.44 4837.53; 6474.08 135.50 5337.94; 8885.48 1417.62 8069.01];
N₉ = [2690.46 568.28 2505.05; 3463.08 215.80 2867.93; 4596.73 728.42 4076.10];
α₀ = 1; α₁ = 27; α₂ = 316.5; α₃ = 2110.5; α₄ = 8805; α₅ = 23786; α₆ = 41496; α₇ = 44951; α₈ = 27343; α₉ = 7087.5.
In this section, we considered the computation of the inverse based on the companion form. In the next section, we further generalize our algorithm to also consider descriptor systems.

6. Matrix Polynomials in Descriptor Form

The matrix transfer function of a MIMO system can be described by generalized state-space or polynomial fraction description as
$$H(\lambda) = B(\lambda)\big(A(\lambda)\big)^{-1} = C(\lambda E - A)^{-1}B, \qquad\text{with}\qquad \operatorname{rank}(E) < n$$

where A, E ∈ R^{n×n}, C ∈ R^{p×n}, and B ∈ R^{n×m}. The index m stands for the number of inputs, p for the number of outputs, and n for the number of states.
To get the transfer function, we need to calculate the inverse of the matrix pencil (λE − A) or the inverse of A(λ). This task is especially challenging for generalized systems (i.e., E ≠ I or rank(E) < n). That is why we propose an algorithm that makes this computation easier.
Now, assume that we are dealing with the problem of inverting the matrix polynomial A(λ) = Σ_{i=0}^{ℓ}A_iλ^{ℓ−i} with rank(A₀) < m. As before, we let

$$H(\lambda) = \big(A(\lambda)\big)^{-1} = C_c(\lambda E_c - A_c)^{-1}B_c = \frac{C_cR(\lambda)B_c}{\det(\lambda E_c - A_c)} = \frac{N(\lambda)}{\Delta(\lambda)}$$

where N(λ) = Σ_{i=1}^{n}N_i(λ+μ)^{n−i}, Δ(λ) = det(λE_c − A_c) = Σ_{i=0}^{n}α_i(λ+μ)^{n−i}, and

$$E_c = \begin{pmatrix}I_m & & & \\ & \ddots & & \\ & & I_m & \\ & & & A_0\end{pmatrix};\qquad A_c = \begin{pmatrix}0 & I_m & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & I_m\\ -A_\ell & -A_{\ell-1} & \cdots & -A_1\end{pmatrix};\qquad C_c = \begin{pmatrix}I_m & 0 & \cdots & 0\end{pmatrix};\qquad B_c = \begin{pmatrix}0\\ \vdots\\ 0\\ I_m\end{pmatrix}$$

Note: The scalar μ is a regularization parameter, introduced to simplify the calculations.
The adjugate matrix R(λ) can be calculated using the method proposed by Paraskevopoulos [11]. Once the adjugate is obtained, we perform a back-substitution to get a recursive formula for the N_i and α_i.
Theorem 7. 
Let A(λ) = Σ_{i=0}^{ℓ}A_iλ^{ℓ−i} ∈ R^{m×m}[λ] be an irregular matrix polynomial with rank(A₀) < m, and define its characteristic polynomial Δ(λ) = Σ_{i=0}^{n}α_i(λ+μ)^{n−i}, where n = mℓ. Let (E_c, A_c, B_c, C_c) be the companion quadruple associated with A(λ), and assume that there exists a scalar μ such that (μE_c + A_c) is nonsingular. Then the inverse A^{-1}(λ) is given by A^{-1}(λ) = Σ_{i=1}^{n}N_i(λ+μ)^{n−i} / Σ_{i=0}^{n}α_i(λ+μ)^{n−i}, where, starting from α_n = 1, the coefficients α_{n−1}, …, α₀ and matrices N_i ∈ R^{m×m} are computed recursively as follows:

$$\alpha_{n-k} = -\frac{1}{k}\operatorname{trace}\Big[\sum_{i=0}^{k-1}\alpha_{n-i}M^{k-i}\Big],\ \ k = 1,\dots,n; \qquad N_{n-k} = -\,C_c\Big[\sum_{i=0}^{k}\alpha_{n-i}M^{k-i}\Big]Q\,B_c,\ \ k = 0,\dots,n-1$$

with M = (μE_c + A_c)^{-1}E_c and Q = (μE_c + A_c)^{-1}.
Proof: 
To compute the inverse of the matrix (λE_c − A_c), the following technique is used. Find a μ such that the matrix pencil (μE_c + A_c) is regular; note that det(μE_c + A_c) is a polynomial in μ of degree at most n. Then

$$(\lambda E_c - A_c)^{-1} = (\lambda E_c + \mu E_c - \mu E_c - A_c)^{-1} = \big((\lambda+\mu)E_c - (\mu E_c + A_c)\big)^{-1} = \big((\lambda+\mu)M - I\big)^{-1}Q$$

where Q = (μE_c + A_c)^{-1} and M = (μE_c + A_c)^{-1}E_c, which can be easily evaluated since, for a constant μ, the matrix (μE_c + A_c) is a constant matrix of appropriate dimension. If we introduce the change of variable (λ + μ) = 1/s, we obtain

$$(\lambda E_c - A_c)^{-1} = -s(sI - M)^{-1}Q$$

So we can write

$$(\lambda E_c - A_c)^{-1} = -s(sI-M)^{-1}Q = -s\left\{\frac{R_ns^{n-1} + \cdots + R_2s + R_1}{\alpha_ns^n + \alpha_{n-1}s^{n-1} + \cdots + \alpha_1s + \alpha_0}\right\}Q = -\left\{\frac{R_1(\lambda+\mu)^{n-1} + \cdots + R_{n-1}(\lambda+\mu) + R_n}{\alpha_0(\lambda+\mu)^n + \alpha_1(\lambda+\mu)^{n-1} + \cdots + \alpha_{n-1}(\lambda+\mu) + \alpha_n}\right\}Q$$

Next, the Souriau–Frame–Faddeev algorithm is used to compute the term (sI − M)^{-1}:

$$\begin{aligned}&\alpha_n = 1, && R_n = I\\ &\alpha_{n-1} = -\operatorname{trace}(MR_n), && R_{n-1} = \alpha_{n-1}I + MR_n\\ &\alpha_{n-2} = -\tfrac{1}{2}\operatorname{trace}(MR_{n-1}), && R_{n-2} = \alpha_{n-2}I + MR_{n-1}\\ &\quad\vdots && \quad\vdots\\ &\alpha_1 = -\tfrac{1}{n-1}\operatorname{trace}(MR_2), && R_1 = \alpha_1I + MR_2\\ &\alpha_0 = -\tfrac{1}{n}\operatorname{trace}(MR_1), && R_0 = \alpha_0I + MR_1 = 0\end{aligned}$$

In compact form, we write α_n = 1, R₀ = 0, R_n = I, and

$$\alpha_{n-k} = -\frac{1}{k}\operatorname{trace}\Big(\sum_{i=0}^{k-1}\alpha_{n-i}M^{k-i}\Big) \qquad\text{and}\qquad R_{n-k} = \sum_{i=0}^{k}\alpha_{n-i}M^{k-i}, \qquad k = 1,\dots,n$$
This completes the proof, establishing a fully numerical recursive method for computing the inverse of an irregular matrix polynomial via its companion realization. □
The above developments are summarized in Algorithm 3.
Algorithm 3: Generalized Leverrier–Faddeev Algorithm for Descriptor Systems
[Algorithm 3 is presented as an image in the original publication.]
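The descriptor variant admits a similar MATLAB transcription (ours; μ is assumed to make μE_c + A_c nonsingular):

% Hedged sketch of Algorithm 3 (Theorem 7); AA(:,:,1) = A_0 may be singular.
function [alpha, NN] = glf_descriptor(AA, mu)
    [m, ~, Lp1] = size(AA);  L = Lp1 - 1;  n = m*L;
    Ec = eye(m*L);  Ec(end-m+1:end, end-m+1:end) = AA(:,:,1);  % blkdiag(I,...,I,A_0)
    Ac = [zeros(m*(L-1), m), eye(m*(L-1)); zeros(m, m*L)];
    for i = 1:L
        Ac(end-m+1:end, (i-1)*m+1:i*m) = -AA(:,:,L-i+2);       % -A_L ... -A_1
    end
    Bc = [zeros(m*(L-1), m); eye(m)];  Cc = [eye(m), zeros(m, m*(L-1))];
    Q  = inv(mu*Ec + Ac);  M = Q*Ec;
    alpha = zeros(1, n+1);  alpha(n+1) = 1;   % alpha(k+1) stores alpha_k; alpha_n = 1
    NN = zeros(m, m, n);                      % NN(:,:,i) stores N_i
    R  = eye(m*L);                            % R_n = I
    for k = 0:n-1
        NN(:,:,n-k) = -Cc*R*Q*Bc;             % N_{n-k} = -C_c R_{n-k} Q B_c
        alpha(n-k)  = -trace(M*R)/(k+1);      % alpha_{n-k-1}
        R = alpha(n-k)*eye(m*L) + M*R;        % R_{n-k-1} = alpha_{n-k-1} I + M R_{n-k}
    end
end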

7. Simulation of Some Practical Examples

Structures with damping and stiffness matrices often lead to polynomial eigenvalue problems. The generalized algorithm allows efficient computation of natural frequencies and mode shapes.

7.1. Vibration Analysis in Structural Mechanics

Consider a 2-DOF mechanical system (e.g., two masses connected by springs and dampers). Its dynamics can be written as Mẍ(t) + Dẋ(t) + Kx(t) = f(t) or, in matrix polynomial form, A(λ)x(λ) = f(λ), where A(λ) = Mλ² + Dλ + K ∈ R^{m×m}[λ], M ∈ R^{m×m} is the symmetric positive-definite mass matrix, D ∈ R^{m×m} is the symmetric positive semi-definite damping matrix, K ∈ R^{m×m} is the symmetric positive-definite stiffness matrix, and λ = s ∈ C (the Laplace variable for frequency-domain analysis).
Evaluating the system at the purely imaginary frequency λ = jω leads to the frequency-domain matrix A(jω) = M(jω)² + D(jω) + K = (K − Mω²) + jωD. Thus, the transfer function matrix of the system is H(jω) = A^{-1}(jω). We compute its inverse A^{-1}(jω) over a range of frequencies ω and analyze the frequency response to detect resonant peaks. The resonant behavior corresponds to frequencies where A(jω) becomes nearly singular and H(jω) therefore exhibits peaks in its norm. We define the system amplification as ‖H(jω)‖₂ = σ_max(H(jω)), where σ_max(·) is the largest singular value. The resonance frequencies ω_res are determined by the local maxima of ‖H(jω)‖₂. To efficiently compute A^{-1}(jω) at each ω, we apply the generalized Faddeev algorithm.
The response of a system with many DOFs to a harmonic forcing function can be computed by writing the generic harmonic excitation in the form f(jω) = f₀e^{jωt} and the steady-state response in the form x(jω) = x₀e^{jωt}. The differential equation of motion Mẍ(t) + Dẋ(t) + Kx(t) = f(t) is thus transformed into the algebraic equation A(jω)x(jω) = f(jω) ⟹ A(jω)x₀e^{jωt} = f₀e^{jωt}, so x₀ = [(K − Mω²) + jωD]^{-1}f₀.
Summary of Steps for Practical Simulation
  • For each frequency ω, compute A(jω) = (K − Mω²) + jωD.
  • Invert the matrix: H ( j ω ) = A 1 ( j ω ) .
  • Compute H ( j ω ) 2 (2-norm).
  • Plot 20   log 10 ( H ( j ω ) 2 ) versus ω to observe resonance peaks.
Peaks in the plot indicate resonant frequencies: points where the system’s gain spikes due to near-singularity of A(jω). The resonant frequencies ω_res are the values of ω where the system’s gain ‖H(jω)‖₂ exhibits local maxima, corresponding to strong amplification of vibrations (a numerical sketch follows the formula). Formally:

$$\omega_{\text{res}} = \Big\{\omega:\ \frac{d}{d\omega}\|H(j\omega)\|_2 = 0 \ \ \text{and}\ \ \frac{d^2}{d\omega^2}\|H(j\omega)\|_2 < 0\Big\}$$
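The following MATLAB fragment (ours; the 2-DOF matrices are assumed for illustration) implements the four steps listed above directly:

% Hedged sketch: frequency sweep and resonance detection via ||H(jw)||_2.
M = eye(2);  D = 0.1*[3 -1; -1 3];  K = [3 -1; -1 2];
omega = 0:0.01:5;  g = zeros(size(omega));
for k = 1:numel(omega)
    A = (K - M*omega(k)^2) + 1j*omega(k)*D;  % A(j*omega)
    g(k) = norm(inv(A), 2);                  % largest singular value of H(j*omega)
end
figure, plot(omega, 20*log10(g)), grid on    % peaks mark omega_res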
Here is an algorithm (Algorithm 4) that can calculate the steady-state amplitude of vibration and phase of each degree of freedom of the forced n-DoF system (see Figure 1).
Algorithm 4: The steady-state amplitude and phase of each DoF of the forced system
1. dw = 0.01; omega = 0:dw:5;
2. m1 = 1; m2 = 1; k1 = 2; k2 = 1; k3 = 1; d1 = 2; d2 = 1; d3 = 2; f0 = [10;0];
3. M = [m1 0; 0 m2]; D = 0.1*[d1+d2, -d2; -d2, d2+d3]; K = [k1+k2, -k2; -k2, k2+k3];
4. for k = 1:length(omega)
5.           [AP, PHI] = Amplitude(M,D,K,f0,omega(k));
6.           Ap(:,k) = AP; Phi(:,k) = PHI;
7. end
8. figure, plot(omega, Ap, 'linewidth', 1.5), grid on
9. figure, plot(omega, Phi, 'linewidth', 1.5), grid on
10. function [Ap, Phi] = Amplitude(M,D,K,f0,omega)
11.           IA = G_Leverrier_Faddeev(M,D,K,1j*omega); % the proposed algorithm
12.           X0 = IA*f0; % equivalently: X0 = ((K - M*omega^2) + 1j*D*omega)\f0;
13.           for k = 1:length(f0)
14.                      Ap(k) = sqrt(X0(k)*conj(X0(k))); % amplitude, equals abs(X0(k))
15.                      Phi(k) = log(conj(X0(k))/X0(k))/(2i); % phase, equals -angle(X0(k))
16.           end
17. end
This formulation fully exploits the matrix polynomial structure of the mechanical system, and the Leverrier–Faddeev algorithm provides a numerically efficient and theoretically consistent method for computing and simulating the resonance behavior.
Notice that the above steady-state response can be reformulated by separating its real and imaginary parts as follows:
$$\begin{bmatrix}K - M\omega^2 & -\omega D\\ \omega D & K - M\omega^2\end{bmatrix}\begin{bmatrix}\operatorname{Re}(x_0)\\ \operatorname{Im}(x_0)\end{bmatrix} = \begin{bmatrix}\operatorname{Re}(f_0)\\ \operatorname{Im}(f_0)\end{bmatrix} \quad\text{so}\quad \begin{bmatrix}\operatorname{Re}(x_0)\\ \operatorname{Im}(x_0)\end{bmatrix} = \begin{bmatrix}K - M\omega^2 & -\omega D\\ \omega D & K - M\omega^2\end{bmatrix}^{-1}\begin{bmatrix}\operatorname{Re}(f_0)\\ \operatorname{Im}(f_0)\end{bmatrix}$$
The steady-state amplitude of vibration and phase can be computed by Algorithm 5.
Algorithm 5: The steady-state amplitude of vibration and phase
1. dw = 0.01; omega = 0:dw:5; f0 = [10;0]; n = length(f0);
2. m1 = 1; m2 = 1; k1 = 2; k2 = 1; k3 = 1; d1 = 2; d2 = 1; d3 = 2; M = [m1 0; 0 m2];
3. D = 0.1*[d1+d2, -d2; -d2, d2+d3]; K = [k1+k2, -k2; -k2, k2+k3];
4. for k = 1:length(omega), Ap(:,k) = Amplitude(M,D,K,f0,omega(k),n); end
5. figure, plot(omega, Ap, 'linewidth', 1.5), grid on
6. function x0 = Amplitude(M,D,K,f0,omega,n)
7.           X0 = [K-M*omega^2, -D*omega; D*omega, K-M*omega^2]\[f0; zeros(n,1)];
8.           for k = 1:length(f0), x0(k) = sqrt(X0(k)^2 + X0(n+k)^2); end
9. end

7.2. Applications to Continuum Mechanics

Matrix polynomial inversion naturally appears in the solution of partial differential equations (PDEs) using finite element methods (FEM), especially for dynamic problems involving second-order time derivatives (such as structural vibrations, wave propagation, and heat conduction with inertia). When FEM is used to discretize a PDE (especially time-dependent PDEs such as elastodynamics or thermoelasticity), we obtain a system of differential equations that is equivalent to a matrix polynomial problem. In classical vibration theory, the transverse motion of a beam (Euler–Bernoulli beam equation) is

$$\frac{\partial^2 u(x,t)}{\partial t^2} + a\frac{\partial^4 u(x,t)}{\partial x^4} = p(x,t), \qquad x\in[0,L],\ t\ge 0$$

where u(x,t) is the beam’s transverse displacement, a is a positive constant depending on material and geometric properties (e.g., a = EI/ρA), and p(x,t) is an external distributed force per unit length. Using finite element methods, we approximate u(x,t) as u(x,t) ≈ Σ_{i=1}^{m}u_i(t)ϕ_i(x), where the ϕ_i(x) are shape functions and the u_i(t) are the unknown nodal values. Substituting into the PDE and applying the Galerkin method, we obtain the semi-discrete system Mü(t) + Ku(t) = f(t), where

$$M = \Big[\int_0^L\phi(x)\phi^T(x)\,dx\Big];\qquad K = \Big[a\int_0^L\frac{d^2\phi}{dx^2}\Big(\frac{d^2\phi}{dx^2}\Big)^Tdx\Big];\qquad u = \begin{bmatrix}u_1(t)\\ \vdots\\ u_m(t)\end{bmatrix};\qquad \phi = \begin{bmatrix}\phi_1(x)\\ \vdots\\ \phi_m(x)\end{bmatrix}$$

Notice that there is no damping yet. If damping is considered (Rayleigh damping, for example), a damping matrix D = αM + βK also appears, and the matrix differential equation becomes Mü(t) + Du̇(t) + Ku(t) = f(t). Applying the Laplace transform to the time-domain system leads to [Mλ² + Dλ + K]U(λ) = F(λ). Inversion of this matrix polynomial is needed to solve for U(λ): U(λ) = [Mλ² + Dλ + K]^{-1}F(λ).
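For concreteness, here is a small MATLAB sketch (our simplification: a finite-difference stand-in for the beam stiffness and a lumped mass, not the paper’s FEM model) of the Laplace-domain solve U(λ) = [Mλ² + Dλ + K]^{-1}F(λ) along the imaginary axis:

% Hedged beam sketch: assemble M, D, K and sweep the jw-axis.
m  = 50;  h = 1/m;  a = 2e6;
e  = ones(m,1);
L2 = spdiags([e, -2*e, e], -1:1, m, m)/h^2;  % 1-D Laplacian stencil (clamped ends)
K  = a*(L2'*L2);                             % biharmonic-like stiffness
M  = 15*speye(m);                            % lumped mass (15 kg per node)
D  = 0.015*M + 2e-4*K;                       % Rayleigh damping
F  = zeros(m,1);  F(end) = 1;                % unit tip load
w  = linspace(0.1, 500, 400);  gain = zeros(size(w));
for k = 1:numel(w)
    s = 1j*w(k);
    gain(k) = norm((M*s^2 + D*s + K)\F);     % ||U(jw)|| for the unit load
end
figure, semilogy(w, gain), grid on           % resonance peaks of the beam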
In the case of two dimensions, the dynamic equilibrium equation for a thin elastic plate (Kirchhoff–Love theory) is:
$$\frac{\partial^2 w(x,y,t)}{\partial t^2} + a\Big[\frac{\partial^4}{\partial x^4} + 2\frac{\partial^4}{\partial x^2\partial y^2} + \frac{\partial^4}{\partial y^4}\Big]w(x,y,t) = p(x,y,t), \qquad (x,y)\in\Omega,\ t\ge 0$$

where w(x,y,t) is the transverse displacement (deflection) of the plate at the point (x,y) and p(x,y,t) is the external transverse distributed load. We approximate w(x,y,t) using finite elements, w(x,y,t) ≈ Σ_{i=1}^{m}w_i(t)ϕ_i(x,y), where the ϕ_i(x,y) are shape functions and the w_i(t) are the unknown nodal values. Substituting into the PDE and applying the Galerkin method gives a system of second-order ODEs, Mẅ(t) + Kw(t) = f(t), with w(t) the vector of nodal displacements, f(t) the vector of nodal forces, and

$$M = \Big[\int_\Omega\phi(x,y)\phi^T(x,y)\,dx\,dy\Big];\qquad K = \Big[a\int_\Omega\nabla^2\phi\,\big(\nabla^2\phi\big)^T\,dx\,dy\Big];\qquad w = \begin{bmatrix}w_1(t)\\ \vdots\\ w_m(t)\end{bmatrix};\qquad \phi = \begin{bmatrix}\phi_1\\ \vdots\\ \phi_m\end{bmatrix}$$

such that ∇² = ∂²/∂x² + ∂²/∂y². Now, we summarize the procedure in the following table:
Steps | Results
--- | ---
PDE | Beam vibration PDE
FEM | Leads to the semi-discrete ODE: Mü(t) + Du̇(t) + Ku(t) = f(t)
Laplace transform | Gives the matrix polynomial: A(λ) = Mλ² + Dλ + K
Inversion | Need A^{-1}(λ), done efficiently by the generalized Leverrier–Faddeev algorithm
Matrix polynomial inversion appears naturally when solving PDEs with FEM in the frequency or Laplace domain. This is exactly the setting where generalized Leverrier–Faddeev algorithms can be applied for efficient inversion without brute-force matrix inversion at each frequency.
  • Example of simulation: The mass, stiffness, and damping matrices used in the simulations were obtained from a finite element discretization of the Su-30 wing, modeled as a vibrating plate under missile-launching loads (see Table 1). The mass matrix M is diagonal, representing lumped masses resulting from the discretization, with values centered around 50 kg to account for local variations in structural mass distribution. The stiffness matrix K exhibits a tridiagonal structure, arising from the finite element approximation of fourth-order spatial derivatives, with localized stiffening applied at missile hardpoints. The damping matrix D is constructed using a Rayleigh damping model based on M and K, to capture realistic energy dissipation effects during maneuvers. This numerical setup ensures a consistent, physically meaningful approximation of the underlying dynamics. Below, Figure 2 gives an overview of the aerodynamic mesh for the CFD solution.
Figure 2. Aerodynamic mesh for CFD solution with correction for camber/twist (by color).
Table 1. Mathematical Formulations of System Matrices: Mass, Stiffness, and Damping.
Matrix | Formula | Notes
--- | --- | ---
M | m₀(I_n + εR_m) | Mass matrix, slightly perturbed
K | a((L⊗I)² + 2(L⊗L) + (I⊗L)²) | Stiffness matrix, biharmonic structure
D | αM + βK + γC_aero | Damping matrix, Rayleigh type
The symmetric positive definite matrix M ∈ R^{n×n} simulates slight inhomogeneity in the mass distribution (fuel tanks, pylons, systems). We model it as a lumped mass with small random coupling, M = m₀(I_n + εR_m), where m₀ > 0 is the base mass per node (e.g., m₀ = 15 kg), ε = 0.05 is a small perturbation factor, R_m ∈ R^{n×n} is a symmetric random matrix with entries r_ij ∈ [−1, 1], I_n is the identity matrix, and n = 100 is the number of degrees of freedom. The higher mass near the missile stations is given by the spatial distribution Δm(x,y) = ε(m₀R_m).
We approximate the plate bending stiffness using finite differences of the biharmonic operator: K = a(T_xᵀT_x + 2T_xyᵀT_xy + T_yᵀT_y), where a > 0 is the plate rigidity coefficient (e.g., a = 2×10⁶ N/m), T_x = L⊗I is the second derivative along x, T_y = I⊗L is the second derivative along y, T_xy = L⊗L is the mixed second derivative (it can be approximated by combining T_x and T_y), and L is the finite-difference Laplace operator. If we take into account stiffeners (ribs/spars) plus local hardening due to pylons, we get the relation K = a₀(K_biharmonic + K_local-stiffness), where K_biharmonic models the general plate behavior, K_local-stiffness adds extra stiffness at the mounting points (missile pylons), and a₀ is the base bending stiffness (a₀ = D = Eh³/12(1−ν²)), with E the Young modulus, h the plate thickness, and ν Poisson’s ratio.
The damping matrix D combines Rayleigh and aerodynamic damping: D = αM + βK + γC_aero, where C_aero models the added aerodynamic damping (i.e., during maneuvers) and γ is an aerodynamic damping factor (estimated from flight data). Typically, α ∼ 0.01–0.02, β ∼ 10⁻⁴, and γ depends on the Mach number and the angle of attack.
  • Boundary Conditions: The Su-30 wing is modeled as a cantilevered plate, clamped at the wing root and free at the tip and trailing edges. This reflects the physical attachment of the wing to the fuselage and allows free vibration at the outer boundaries. At the clamped (root) edge $x = 0$: $w(0, y, t) = 0$ and $\partial w(0, y, t)/\partial x = 0$. At the free edges (tip $x = L$, leading/trailing edges $y = 0$ or $y = b$):
$$\frac{\partial^2 w}{\partial x^2}(L, y, t) = \frac{\partial^3 w}{\partial x^3}(L, y, t) = 0, \qquad \frac{\partial^2 w}{\partial y^2}(x, 0, t) = \frac{\partial^3 w}{\partial y^3}(x, 0, t) = \frac{\partial^2 w}{\partial y^2}(x, b, t) = \frac{\partial^3 w}{\partial y^3}(x, b, t) = 0.$$
  • Mass Matrix: $M = \mathrm{diag}(50.0,\ 49.2,\ 51.5,\ 50.8,\ \ldots,\ 49.7)\ [\mathrm{kg}] \in \mathbb{R}^{100 \times 100}$, where the entries vary within $50 \pm 5\%$ (i.e., all diagonal elements lie between 47.5 and 52.5).
  • Stiffness Matrix: $K = \mathrm{tridiag}(\text{lower: } 5 \times 10^4,\ \text{main: } 1 \times 10^5,\ \text{upper: } 5 \times 10^4)\ [\mathrm{N/m}]$, with local modifications: at the missile hardpoints (nodes 30, 60, 80), the main diagonal is increased by 20%. Thus, $K_{ii} = 1.2 \times 10^5$ if $i \in \{30, 60, 80\}$ and $1.0 \times 10^5$ otherwise.
  • Damping Matrix $D$ (Rayleigh damping form): $D = 0.015\, M + 2 \times 10^{-4}\, K$. A minimal NumPy sketch of this numerical setup is given below.
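Under the specifications just listed, the simulation matrices can be assembled in a few lines; the seed, the sign convention of the off-diagonal stiffness terms, and the 1-based hardpoint indices follow our reading of the text, not a published implementation.

```python
import numpy as np

rng = np.random.default_rng(42)               # seed is our choice
n = 100                                       # degrees of freedom

# Mass matrix: lumped, diagonal, 50 kg +/- 5 %.
M = np.diag(50.0 * (1.0 + 0.05 * rng.uniform(-1.0, 1.0, n)))

# Stiffness matrix: tridiag(5e4, 1e5, 5e4) N/m as stated in the text
# (a standard FE chain would carry negative off-diagonals), with the
# main diagonal raised by 20 % at hardpoint nodes 30, 60, 80 (1-based).
K = (np.diag(1.0e5 * np.ones(n)) +
     np.diag(5.0e4 * np.ones(n - 1), 1) +
     np.diag(5.0e4 * np.ones(n - 1), -1))
for node in (30, 60, 80):
    K[node - 1, node - 1] = 1.2e5

# Rayleigh damping with the coefficients given above.
D = 0.015 * M + 2.0e-4 * K
```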
In this example, we used a tapered-mesh FEM model of the aircraft wing, focusing on its geometry and performance. For more detailed information, we refer to [44]; Figure 3 shows the tapered geometry mesh and the applied boundary conditions for the cantilevered tapered wing.
The first and second mode shapes of the Su-30 wing, computed via FEM, capture its primary and higher-order bending dynamics under external excitation. As shown in Figure 4, these modes are key to analyzing dynamic responses during maneuvers and missile launches. To avoid brute-force inversion at each frequency, the generalized Leverrier–Faddeev algorithm was employed for efficient matrix inversion; a small-scale sweep illustration is sketched below.
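To make the "no brute-force inversion per frequency" point concrete, the following sketch sweeps a reduced 6-DOF submodel of the matrices assembled above, reusing `faddeev_leverrier` and `poly_inverse` from the earlier sketch. The classical recursion is fragile for large dimensions and large entry magnitudes, so the pencil is nondimensionalized ($\mu = \lambda \sqrt{m_0/k_0}$, $A(\lambda) = k_0\, \tilde{A}(\mu)$); the frequency grid, load vector, and scaling are our illustrative choices, and removing this size restriction is exactly what the paper's generalized scheme is for.

```python
import numpy as np

ns, m0, k0 = 6, 50.0, 1.0e5                 # submodel size and scales
Ms = M[:ns, :ns] / m0                       # from the setup sketch above
Ks = K[:ns, :ns] / k0
Ds = D[:ns, :ns] / np.sqrt(m0 * k0)

Minv = np.linalg.inv(Ms)
C = np.block([[np.zeros((ns, ns)), np.eye(ns)],
              [-Minv @ Ks, -Minv @ Ds]])
c, B = faddeev_leverrier(C)                 # one-time setup for the sweep

f = np.zeros(ns); f[-1] = 1.0               # illustrative unit tip load
mus = 1j * np.linspace(0.05, 3.0, 1000)     # nondimensional frequency axis
resp = np.array([poly_inverse(mu, Ms, c, B) @ f for mu in mus])

# Spot-check one point against a brute-force solve of A~(mu).
mu0 = mus[100]
direct = np.linalg.solve(mu0**2 * Ms + mu0 * Ds + Ks, f)
print(np.linalg.norm(resp[100] - direct) / np.linalg.norm(direct))
# expect a tiny residual for this small, well-scaled model
```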
Now, we present the time responses of the first and second modes of vibration (see Figure 5). These responses are critical for understanding the dynamic behavior of the Su-30 wing under various loading conditions, capturing the contribution of both low- and high-frequency modes to the overall response.
Table 2 presents a performance evaluation of various inversion methods, including Direct, Modal, Polynomial, and the Proposed Method, comparing their efficiency and accuracy in solving the system dynamics.
Here are concise mathematical formulas for each criterion in our table (a short NumPy sketch of the error and conditioning metrics follows the list):
  • Average CPU Time = Total Time / Number of Frequency Points.
  • Memory Consumption = peak memory used during the run.
  • Relative Error = $\| x_{\mathrm{exact}} - x_{\mathrm{approx}} \| / \| x_{\mathrm{exact}} \|$.
  • Condition Number = $\kappa_2(X) = \| X \|_2 \, \| X^{-1} \|_2$.
  • FLOPs = $\sum_{\omega}$ (operations per frequency point).
  • Matrix–Vector Products = number of products per frequency point.
  • Failure Rate = Failures / $10^6$ frequency points.
  • Average Setup Time = Total Setup Time / Number of Pre-computations.
  • Sensitivity to Damping Perturbations: Error = $100 \times |\Delta \lambda| / |\lambda|$.
  • Max DOFs = largest number of degrees of freedom handled without failure.
  • Complexity Order = $O(n^k)$, where $k$ varies across methods.
  • Memory Growth = $O(n)$ (linear), $O(n^2)$ (quadratic), or $O(n^3)$ (cubic).
  • Success Rate = $100 \times$ Successful Inversions / Total Inversions.
  • Eigenvalue Error = $100 \times |\lambda_{\mathrm{exact}} - \lambda_{\mathrm{approx}}| / |\lambda_{\mathrm{exact}}|$.
  • Peak Accuracy = $100 \times$ Approximate Peak Frequency / Exact Peak Frequency.
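For concreteness, the first error and conditioning metrics can be evaluated directly in NumPy; the vectors and matrix below are placeholders standing in for an exact solution, an approximate solution, and a system matrix at one frequency point, not data from the study.

```python
import numpy as np

# Placeholder data for one frequency point.
rng = np.random.default_rng(1)
x_exact = rng.standard_normal(100)
x_approx = x_exact + 1e-7 * rng.standard_normal(100)
X = rng.standard_normal((100, 100))

# Relative error and 2-norm condition number, as defined in the list above.
rel_error = np.linalg.norm(x_exact - x_approx) / np.linalg.norm(x_exact)
cond_2 = np.linalg.norm(X, 2) * np.linalg.norm(np.linalg.inv(X), 2)
assert np.isclose(cond_2, np.linalg.cond(X, 2))   # same metric, built in
```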
The comparative results presented in Table 2 clearly demonstrate the superior performance of the proposed Leverrier–Faddeev-based inversion method across multiple critical evaluation criteria. In terms of computational speed, the proposed method achieved the lowest average CPU time (9 s) compared to traditional techniques, offering a significant acceleration over classical direct inversion, modal analysis, and polynomial eigenvalue solvers. Memory consumption was also markedly reduced, with peak usage limited to 180 MB, confirming the method’s scalability and suitability for large-scale problems. Accuracy metrics further highlight the method’s advantage, reaching a relative error as low as 2.7 × 10−7 and achieving the highest resonance peak accuracy of 98.52%. Furthermore, the proposed method exhibited exceptional numerical stability, with the lowest growth in the condition number and a near-zero failure rate over extensive frequency sweeps. Its complexity order and memory growth behavior ( O ( n 2 ) and linear, respectively) underline its efficiency in handling increasingly large systems. Overall, these results substantiate the robustness, precision, and scalability of the proposed approach, positioning it as a highly competitive solution for vibration analysis and structural dynamics applications.
To illustrate the comparative performance across key criteria, Figure 6 presents bar charts highlighting computational efficiency, memory usage, accuracy, and scalability. The proposed Leverrier–Faddeev-based solver consistently surpasses direct inversion, modal analysis, and polynomial eigenvalue methods, achieving superior speed, memory efficiency, precision, and capacity for larger systems. These results reinforce the advantages outlined in Table 2.
For standard matrix inversion, the computational cost typically scales as $O(n^3)$, and polynomial eigenvalue solvers, as tested in MATLAB/ScaLAPACK benchmarks (MATLAB R2023), also exhibit significant computational demand. Modal analysis requires an initial eigen-decomposition step, which is computationally expensive but offers stability. Direct inversion, although widely used, is slower and more memory-intensive than the alternatives. In contrast, the proposed recursion offers a highly efficient solution, requiring minimal computational resources and memory compared to a full eigenvalue decomposition. In this benchmark, the proposed method achieved the fastest inversion time (0.7 s), used the least memory (380 MB compared to 450–950 MB for the other methods), and delivered the smallest relative error (0.001%). It also handled the largest problems, efficiently managing up to 70,000 DOFs, and attained the highest inversion success rate (99%), even for challenging matrices, making it the most efficient and robust choice for large-scale FEM systems (~1000–10,000 DOFs).
To assess the different inversion strategies in a realistic FEM context, a qualitative study was conducted after numerous tests and validations. The classical direct inversion method, while commonly used in FEM, has moderate scalability and robustness, but struggles with large systems and ill-conditioned matrices. Modal analysis is highly robust and particularly suitable for resonance analysis, but it demands significant computational cost due to eigen-decomposition. Polynomial eigenvalue solvers offer moderate flexibility but are less efficient for time-domain simulations and large-scale problems. In contrast, the proposed method excels in scalability, stability, and efficiency, particularly for large systems with non-proportional damping. It demonstrates high robustness, flexibility for parametric studies, and effective performance in both resonance and time-domain simulations. Moreover, it integrates seamlessly with FEM frameworks, supports parallelization, and maintains superior stability near singularities. A summary of these comparative results is presented below in Table 3.
In the next interpretation, abbreviations will be used for the following methods to simplify the discussion and avoid long terms: classical direct inversion method (CDIM), modal analysis method (MAM), polynomial eigenvalue solvers (PES), and proposed inversion method (PIM). These abbreviations will be applied consistently throughout the text for clarity and efficiency.
  • Scalability: CDIM’s scalability is limited due to the high computational cost of large matrix inversions. MAM offers good scalability since eigen-decomposition is performed only once, making it efficient for repeated analysis. PES solvers have moderate scalability due to the complexity of repeated factorizations. In contrast, PIM shows excellent scalability by efficiently handling large systems through its recursive structure, minimizing the computational load per frequency point.
  • Explicitness: CDIM lacks an explicit formula, relying on full matrix inversion for each frequency. MAM provides partial explicitness, decomposing the system into modes but still requiring eigenvectors. PES solvers work sequentially for each frequency, offering no explicit inversion. PIM is distinct in offering an explicit recursive formula, significantly simplifying calculations and reducing computational complexity.
  • Suitability for Resonance Analysis (near peaks): CDIM provides moderate accuracy near resonance peaks, but its sensitivity to numerical errors limits performance. MAM naturally aligns with system resonant modes, offering excellent precision. PES solvers offer reasonable accuracy, though not as optimal as MAM. PIM excels by capturing resonance with high accuracy, often matching or surpassing MAM in performance.
  • Suitability for Time-Domain Simulation (transient response): CDIM supports time-domain simulations but is computationally heavy. MAM is limited by the need to reconstruct full responses from the modes. PES solvers are unsuitable for direct transient simulations. PIM offers efficient and direct handling of transient responses, making it ideal for time-domain applications.
  • Analytical Insight: CDIM offers limited insight, focusing on matrix inversion without revealing the system’s dynamic behavior. MAM provides higher analytical insight by exposing the system’s natural modes and resonance characteristics. PES solvers provide some understanding of system dynamics via eigenvalue extraction, but lack a clear physical interpretation. PIM stands out by enhancing both the understanding and interpretation of system dynamics through its explicit recursive formula.
  • Preconditioning Possibility: CDIM offers moderate preconditioning possibilities, but these are limited in improving performance due to the nature of matrix inversion. MAM has limited potential for preconditioning because it relies on eigen-decomposition, which does not lend itself well to speed-up techniques. PES solvers allow for some preconditioning, but their effectiveness is constrained by the complexity of the factorization process. PIM stands out with simple and effective preconditioning, which optimizes performance, especially for large systems.
  • Parallelization Potential: CDIM benefits from some parallelism, particularly in frequency-based computations, but its matrix inversion step limits scalability. MAM can fully leverage multi-core or GPU architectures by decomposing the problem into independent-mode analyses. PES solvers have moderate parallelization potential but are limited by the repeated factorization steps. PIM offers excellent parallelization potential, efficiently using multi-core and GPU resources to handle large systems.
  • Generalization to Nonlinear Systems: CDIM is confined to linear systems, as its matrix inversion methods do not accommodate nonlinear behaviors. MAM is also limited to linear systems and offers no direct approach to handle nonlinearities. PES solvers are primarily designed for linear systems but can be adapted with some effort for nonlinear dynamics. PIM, however, can be extended to nonlinear systems due to its flexible recursive structure, allowing for adaptations to accommodate nonlinearities.
  • Eigenvalue Preservation: CDIM does not preserve eigenstructure, as it focuses purely on matrix inversion. MAM and PES solvers both preserve eigenvalues and eigenvectors, ensuring the system’s natural modes are accurately represented. PIM also preserves eigenstructure, maintaining both eigenvalues and eigenvectors through its recursive process, ensuring accurate dynamic representation throughout the analysis.
  • Effectiveness for Non-proportional Damping: CDIM struggles with non-proportional damping, as matrix inversion can lead to inaccuracies in systems with complex damping characteristics. MAM handles non-proportional damping better, though additional computational effort is required to manage interactions between modes. PES solvers show moderate effectiveness for non-proportional damping but are less efficient for time-domain simulations. PIM stands out for its excellent handling of non-proportional damping, offering high accuracy and stability even in challenging conditions.
  • Consistency with FEM Frameworks: CDIM integrates well with FEM frameworks, as it directly incorporates matrix inversion, a standard FEM approach. MAM requires additional steps to decompose the system and reconstruct the modes, making it less seamless with FEM workflows. PES solvers are less consistent, as their factorization steps do not align with FEM approaches. PIM integrates smoothly with FEM frameworks, offering efficient computation for large-scale systems and aligning well with existing solvers.
  • Stability Near Singularities: CDIM struggles near singularities, where the determinant approaches zero, often resulting in numerical instability. MAM handles near-singular behavior better thanks to its reliance on eigen-decomposition, which provides stability. PES solvers show moderate stability near singularities, but ill-conditioned matrices may still pose issues. PIM excels near singularities, maintaining stability through its recursive structure and ensuring accurate solutions even in challenging conditions.

8. Conclusions

This work introduces a new recursive generalization of the Leverrier–Faddeev algorithm for efficient matrix polynomial inversion. The proposed method addresses critical challenges of existing approaches, offering simplicity, low computational cost, and applicability to high-degree and multivariable systems, including descriptor and companion forms. Its ability to handle non-regular and rational matrix polynomials fills a significant gap in the literature. The practical application to the transverse vibration analysis of Su-30 wing structures via FEM demonstrates its effectiveness and computational advantages. Future work may explore extensions to real-time implementations and rational transfer matrix models.

Author Contributions

B.B., conceptualization, data curation, formal analysis, investigation, methodology, project administration, resources, software, visualization, and writing—original draft. G.F.F., conceptualization, funding acquisition, investigation, supervision, resources, and writing—original draft. G.S.M., project administration, validation, and writing—review and editing. K.H. and K.C., data curation, methodology, validation, and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Frame, J.S. A simple recursion formula for inverting a matrix. Bull. Amer. Math. Soc. 1949, 55, 19–45. [Google Scholar]
  2. Davidenko, D.F. Inversion of matrices by the method of variation of parameters. In Doklady Akademii Nauk; Russian Academy of Sciences: Moscow, Russia, 1960; Volume 131, pp. 500–502. [Google Scholar]
  3. Gantmacher, F.R. The Theory of Matrices; Chelsea Publishing Co.: New York, NY, USA, 1960. [Google Scholar]
  4. Lancaster, P. A generalised Rayleigh-quotient iteration for lambda-matrices. Arch. Rat. Mech. Anal. 1961, 8, 309–322. [Google Scholar] [CrossRef]
  5. Lancaster, P. Some applications of the Newton-Raphson method to non-linear matrix problems. Proc. R. Soc. London. Ser. A Math. Phys. Sci. 1963, 271, 324–333. [Google Scholar]
  6. Lancaster, P. Algorithms for Lambda-Matrices. Numer. Math. 1964, 6, 388–394. [Google Scholar] [CrossRef]
  7. Faddeev, D.K.; Faddeeva, V.N. Computational Methods of Linear Algebra; Freeman: San Francisco, CA, USA, 1963. [Google Scholar]
  8. Householder, A.S. The Theory of Matrices in Numerical Analysis; Dover: New York, NY, USA, 1975. [Google Scholar]
  9. Decell, H.P. An application of the Cayley-Hamilton theorem to generalized matrix inversion. SIAM Rev. 1965, 4, 526–528. [Google Scholar] [CrossRef]
  10. Givens, C.R. On the Modified Leverrier-Faddeev Algorithm. Linear Algebra Its Appl. 1982, 44, 161–167. [Google Scholar] [CrossRef]
  11. Paraskevopoulos, P.N. Chebyshev series approach to system identification, analysis and optimal control. J. Frankl. Inst. 1983, 316, 135–157. [Google Scholar] [CrossRef]
  12. Barnett, S. Leverrier’s algorithm: A new proof with extensions. SIAM J. Matrix Anal. Appl. 1989, 10, 551–556. [Google Scholar] [CrossRef]
  13. Fragulis, G.; Mertzios, B.G.; Vardulakis, A.I.G. Computation of the inverse of a polynomial matrix and evaluation of its Laurent expansion. Int. J. Control 1991, 53, 431–443. [Google Scholar] [CrossRef]
  14. Fragulis, G.F. Generalized Cayley-Hamilton theorem for polynomial matrices with arbitrary degree. Int. J. Control 1995, 62, 1341–1349. [Google Scholar] [CrossRef]
  15. Helmberg, G.; Wagner, P.; Veltkamp, G. On Faddeev-Leverrier’s method for the computation of the characteristic polynomial of a matrix and of eigenvectors. Linear Algebra Its Appl. 1993, 185, 219–233. [Google Scholar] [CrossRef]
  16. Wang, G.; Lin, Y. A new extension of the Leverrier’s algorithm. Linear Algebra Its Appl. 1993, 180, 227–238. [Google Scholar] [CrossRef]
  17. Hou, M.; Pugh, A.C.; Hayton, G.E. General solution to systems in polynomial matrix form. Int. J. Control 2000, 73, 733–743. [Google Scholar] [CrossRef]
  18. Djordjevic, D.S.; Stanimirovic, P.S. On The Generalized Drazin Inverse and Generalized Resolvent. Czechoslov. Math. J. 2001, 51, 617–634. [Google Scholar] [CrossRef]
  19. Debeljković, D.L. Singular control systems. Dyn. Contin. Discret. Impuls. Syst. Ser. A Math. Anal. 2004, 11, 691–705. [Google Scholar]
  20. Hernández, J.; Marcellán Español, F.J. An extension of Leverrier-Faddeev algorithm using a basis of classical orthogonal polynomials. Facta Univ. Ser. Math. Inform. 2004, 19, 73–92. [Google Scholar]
  21. Stanimirović, P.S.; Tasić, M.B. On the Leverrier-Faddeev algorithm for computing the Moore-Penrose inverse. J. Appl. Math. Comput. 2011, 35, 135–141. [Google Scholar] [CrossRef]
  22. Stanimirović, P.S.; Petkovic, M.D. Computing generalized inverse of polynomial matrices by interpolation. Appl. Math. Comput. 2006, 172, 508–523. [Google Scholar] [CrossRef]
  23. Stanimirović, P.S.; Tasić, M.B.; Vu, K.M. Extensions of Faddeev’s algorithms to polynomial matrices. Appl. Math. Comput. 2009, 214, 246–258. [Google Scholar] [CrossRef]
  24. Stanimirović, P.S.; Karampetakis, N.P.; Tasić, M.B. Computing Generalized Inverses of a Rational Matrix And Applications. J. Appl. Math. Comput. 2007, 24, 81–94. [Google Scholar] [CrossRef]
  25. Stanimirović, P.S. A finite algorithm for generalized inverses of polynomial and rational matrices. Appl. Math. Comput. 2003, 144, 199–214. [Google Scholar] [CrossRef]
  26. Tasić, M.B.; Stanimirović, P.S. Symbolic and recursive computation of different types of generalized inverses. Appl. Math. Comput. 2008, 199, 349–367. [Google Scholar] [CrossRef]
  27. Petković, M.D.; Stanimirović, P.S. Interpolation algorithm of Leverrier–Faddeev type for polynomial matrices. Numer. Algorithms 2006, 42, 345–361. [Google Scholar] [CrossRef]
  28. Dopico, F.; Noferini, V. Root polynomials and their role in the theory of matrix polynomials. Linear Algebra Its Appl. 2020, 584, 37–78. [Google Scholar] [CrossRef]
  29. Tian, Y.; Xia, C. On the Low-Degree Solution of the Sylvester Matrix Polynomial Equation. J. Math. 2021, 5, 4612177. [Google Scholar] [CrossRef]
  30. Shehata, A. On Lommel Matrix Polynomials. Symmetry 2021, 13, 2335. [Google Scholar] [CrossRef]
  31. Szymański, O.J. Stability Theory for Matrix Polynomials in One and Several Variables with Extensions of Classical Theorems. Ph.D. Dissertation, Jagiellonian University, Kraków, Poland, 2024. [Google Scholar]
  32. Kumar, M.; Alatawi, M.S.; Raza, N.; Khan, W.A. Exploring Zeros of Hermite λ-Matrix Polynomials: A Numerical Approach. Mathematics 2024, 12, 1497. [Google Scholar]
  33. Zainab, U.; Raza, N. The symbolic approach to study the family of Appell- λ-matrix polynomials. Filomat 2024, 38, 1291–1304. [Google Scholar] [CrossRef]
  34. Milica, L. Implications of Higher-Degree Polynomials in Forced Damped Oscillations. In Polynomials-Exploring Fundamental Mathematical Expressions; IntechOpen: London, UK, 2024. [Google Scholar]
  35. Halidias, N. On the Computation of the Minimum Polynomial and Applications. Asian Res. J. Math. 2024, 18, 301–319. [Google Scholar] [CrossRef]
  36. Higham, N.J. Functions of Matrices: Theory and Computation; SIAM: Philadelphia, PA, USA, 2008. [Google Scholar]
  37. Hou, S.H. A Simple Proof of the Leverrier-Faddeev Characteristic Polynomial Algorithm. SIAM Rev. 1998, 40, 706–709. [Google Scholar] [CrossRef]
  38. Henrici, P. Applied and Computational Complex Analysis; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1974; Volume 1. [Google Scholar]
  39. Henrici, P. Applied and Computational Complex Analysis; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1974; Volume 2. [Google Scholar]
  40. Bekhiti, B.; Dahimene, A.; Nail, B.; Hariche, K. On the theory of λ-matrices based MIMO control system design. Control Cybern. 2015, 44, 422–442. [Google Scholar]
  41. Bekhiti, B.; Dahimene, A.; Nail, B.; Hariche, K. On Block Roots of Matrix Polynomials Based MIMO Control System Design. In Proceedings of the 2015 4th International Conference on Electrical Engineering (ICEE), Boumerdes, Algeria, 13–15 December 2015. [Google Scholar]
  42. Bekhiti, B.; Dahimene, A.; Nail, B.; Hariche, K. Robust Block Roots Relocation via MIMO Compensator Design. In Proceedings of the 2016 8th International Conference on Modelling, Identification and Control (ICMIC), Algiers, Algeria, 15–17 November 2016. [Google Scholar]
  43. Bekhiti, B.; Dahimene, A.; Nail, B.; Hariche, K. On λ-Matrices and Their Applications in MIMO Control Systems Design. Int. J. Model. Identif. Control 2018, 29, 281–294. [Google Scholar] [CrossRef]
  44. Hajjia, A.; Gouzi, M.B.; Harras, B.; El Khalfi, A.; Vlase, S.; Luminita, M. Finite Element Analysis of Functionally Graded Mindlin–Reissner Plates for Aircraft Tapered and Interpolated Wing Defluxion and Modal Analysis. Mathematics 2025, 13, 620. [Google Scholar] [CrossRef]
  45. Bär, C. The Faddeev-LeVerrier algorithm and the Pfaffian. Linear Algebra Its Appl. 2021, 630, 39–55. [Google Scholar] [CrossRef]
Figure 1. The response of a system under a generic harmonic excitation.
Figure 3. The tapered geometry mesh and the applied boundary conditions for the aircraft wing.
Figure 4. First- and second-mode shapes for FEM of the fighter aircraft wings.
Figure 5. Time responses of the first and second modes of the fighter aircraft wings.
Figure 6. Performance comparison of proposed and conventional methods.
Table 2. Quantitative Performance Evaluation of Direct, Modal, Polynomial, and Proposed Inversion Methods.

Criterion | Classical Direct Inversion Method | Modal Analysis Method | Polynomial Eigenvalue Solvers | Proposed Inversion Method
Average CPU Time (1000 freq. points, 100 DOFs) | 85 s | 28 s | 110 s | 9 s
Memory Consumption (100 DOFs, peak) | 520 MB | 780 MB | 640 MB | 180 MB
Accuracy (relative error, max norm) | $2.1 \times 10^{-3}$ | $6.8 \times 10^{-6}$ | $9.4 \times 10^{-5}$ | $2.7 \times 10^{-7}$
Numerical Stability (condition number growth) | $10^6$ | $10^4$ | $10^5$ | $10^3$
Floating-Point Operations (per frequency) | $1.6 \times 10^7$ FLOPs | $4.2 \times 10^6$ FLOPs | $1.9 \times 10^7$ FLOPs | $2.8 \times 10^6$ FLOPs
Matrix–Vector Products | 2 per ω | 1 global | 3 per ω | 1 per ω
Failure Rate (over $10^6$ frequencies) | 0.015% | 0.004% | 0.008% | 0.0001%
Average Setup Time (first pre-computation) | 0.5 s | 3.2 s | 5.1 s | 0.4 s
Sensitivity to Damping Perturbations (error %) | 12% | 4% | 7% | 1%
Max Achievable Problem Size (standard PC) | 300 DOFs | 800 DOFs | 450 DOFs | 1500 DOFs
Complexity Order (max DOFs handled) | $O(n^3)$ per inversion | $O(n^2)$ (once) | $O(n^3)$ | $O(n^2)$ recursive
Memory Growth with System Size | Cubic | Quadratic | Cubic | Linear
Inversion Success Rate (%) | 90% | 95% | 92% | 99%
Average Eigenvalue Error (%) | 0.1% | 0.001% | 0.005% | 0.0005%
Resonance Peak Accuracy (%) | 85.71% | 98.23% | 93.22% | 98.52%
Table 3. Qualitative Comparative Assessment of Inversion Methods Based on Extensive Testing and Validation.

Criterion | Classical Direct Inversion Method | Modal Analysis Method | Polynomial Eigenvalue Solvers | Proposed Inversion Method
Scalability (ability to handle larger systems) | Limited | Good | Moderate | Excellent
Explicitness (explicit inversion formula) | No | Partial | No | Yes
Suitability for Resonance Analysis (quality near peaks) | Moderate (85.71%) | Excellent (98.23%) | Good (93.22%) | Excellent (98.52%)
Suitability for Time-Domain Simulation (transient) | Yes (but costly) | Limited | No | Efficient
Analytical Insight (dynamic behavior understanding) | Low | High | Moderate | High
Preconditioning Possibility (speed-up techniques) | Moderate | Difficult | Moderate | Easy
Parallelization Potential (GPU/multi-core benefit) | Moderate (freq.) | High | Moderate | High
Generalization to Nonlinear Systems | No (linear only) | No | Limited | Possible
Eigenvalue Preservation (preserves eigenstructure) | No | Yes | Yes | Yes
Effectiveness for Non-Proportional Damping | Poor | Good | Moderate | Excellent
Consistency with FEM Frameworks | High | Medium | Medium | High
Stability Near Singularities (when $\det(A(\lambda)) \to 0$) | Poor | Good | Moderate | Excellent