Article

Computation of Minimal Polynomials and Multivector Inverses in Non-Degenerate Clifford Algebras

by Dimiter Prodanov 1,2
1
PAML-LN, Institute for Information and Communication Technologies (IICT), Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
2
Neuroelectronics Research Flanders, IMEC, 3001 Leuven, Belgium
Mathematics 2025, 13(7), 1106; https://doi.org/10.3390/math13071106
Submission received: 13 February 2025 / Revised: 20 March 2025 / Accepted: 24 March 2025 / Published: 27 March 2025
(This article belongs to the Special Issue Geometric Methods in Contemporary Engineering)

Abstract:
Clifford algebras are an active area of mathematical research having numerous applications in mathematical physics and computer graphics, among many others. This paper demonstrates algorithms for the computation of characteristic polynomials, inverses, and minimal polynomials of general multivectors residing in a non-degenerate Clifford algebra of an arbitrary dimension. The characteristic polynomial and inverse computation are achieved by a translation of the classical Faddeev–LeVerrier–Souriau (FVS) algorithm in the language of Clifford algebra. The demonstrated algorithms are implemented in the Clifford package of the open source computer algebra system Maxima. Symbolic and numerical examples residing in different Clifford algebras are presented.

1. Introduction

Clifford algebras provide natural generalizations of complex, dual, and split-complex (or hyperbolic) numbers into the concept of Clifford numbers, i.e., general multivectors. The power of Clifford, or geometric, algebra lies in its ability to represent geometric operations in a concise and elegant manner. The development of Clifford algebras was based on the insights of Hamilton, Grassmann, and Clifford from the 19th century. After a hiatus lasting many decades, Clifford geometric algebra experienced a renaissance with the advent of contemporary computer algebra systems. There are multiple actively researched applications in computer-aided design (CAD), computer vision and robotics, protein folding, neural networks, modern differential geometry, genetics, and mathematical physics. Readers are directed towards Hitzer et al. for a recent survey on such applications [1].
Clifford algebras are implemented in a variety of general-purpose computer languages and computational platforms. There are many implementations of the major computer algebra systems, such as the package CLIFFORD for Maple [2], the Clifford Multivector Toolbox for Matlab [3], the Clifford package (v 2.5.4) for Maxima [4], and domain-specific applications—i.e., Ganja.js for JavaScript, GaLua for Lua (http://spencerparkin.github.io/GALua/), Galgebra for Python (https://galgebra.readthedocs.io/), Grassmann for Julia (https://grassmann.crucialflow.com/), and Mathematica—the Geometric Algebra Mathematica package by Acus and Dargys (https://github.com/ArturasAcus/GeometricAlgebra).
As its main contribution, this article demonstrates an algorithm for multivector inversion, based on the Faddeev–LeVerrier–Souriau (FVS) algorithm. The algorithm is implemented in the computer algebra system Maxima (version 5.37 or greater) using the Clifford package [4,5]. The multivector FVS algorithm was presented in a preliminary form at the Computer Graphics International conference (CGI 2023), Shanghai, 28 August–1 September 2023 [6]. Compared to the initial presentation, the current paper introduces the apparatus of minimal polynomials and the related concept of a multivector rank as central computational tools.
Unlike the original FVS algorithm, which computes the characteristic polynomial and has a fixed number of steps, the present Clifford FVS algorithm involves only Clifford multiplications and subtractions of scalar parts and has a variable number of steps, depending on the spanning subspace of the multivector. The correctness of the algorithm is proven using an algorithmic, constructive representation of a multivector in matrix algebra over the reals, but it by no means depends on such a representation. The present FVS algorithm is in fact a proof certificate for the existence of an inverse. To the best of the present author’s knowledge, the FVS algorithm has not been used systematically to exhibit multivector inverses.
This paper is organized as follows. Section 2 introduces the notation. Section 3 discusses the indicial representation of the algebra. Section 4 exhibits a real matrix representation of the algebra. Section 5 discusses the multivector inverse and derives the FVS multivector algorithm. Section 6 introduces the notion of the rank of a multivector. Section 7 demonstrates the algorithm. Section 10 concludes the paper.

2. Notation and Preliminaries

2.1. Notation

$C\ell_n$ denotes a Clifford algebra of order n but with an unspecified signature. Clifford multiplication is denoted by a simple juxtaposition of symbols. Algebra generators will be indexed by Latin letters. Multi-indices will be considered as index lists and not as sets and will be denoted with capital letters. The operation of taking the k-grade part of an expression will be denoted by $\langle \cdot \rangle_k$ and, in particular, the scalar part will be denoted by $\langle \cdot \rangle_0$. The symmetric set difference is denoted by $\Delta$. Matrices will be indicated with bold capital letters, while matrix entries will be indicated by lowercase letters. The degree of the polynomial P will be denoted as $\deg P$.

2.2. General Definitions

Definition 1. 
Let $p, q \geq 0$ with $n := p + q$ be given integers and let $C\ell_{p,q} := C\ell_{p,q}[e_1, \ldots, e_n]$ denote the non-degenerate real Clifford algebra with signature $(p, q)$ and ordered sequence of generators $e_1, \ldots, e_n$. That is, $C\ell_{p,q}$ is a real associative algebra with unit 1 generated freely by its elements $e_i$, which satisfy the relations
$$e_i^2 = 1 \ (i \leq p), \qquad e_i^2 = -1 \ (i > p), \qquad e_i e_j = -e_j e_i \ (i \neq j).$$
It will be assumed that there is an ordering relation $\prec$ such that, for two natural numbers, $i < j \implies e_i \prec e_j$. The extended basis set of the algebra will be defined as the ordered power set $\mathcal{B} := P(E)$ of all generators $E = \{e_1, \ldots, e_n\}$ and their irreducible products.
Definition 2 
(Scalar product). The scalar product of the blades A and B will be denoted as
$$A * B := \langle A B \rangle_0$$
and extended by linearity to the entire algebra.
A multivector will be written as $A = a_0 + \sum_{k=1}^{r} \langle A \rangle_k = a_0 + \sum_J a_J e_J$, where J is a multi-index such that $e_J \in \mathcal{B}$. In other words, J is a subset of the power set of the first n natural numbers, $P(n)$. The maximal grade of A will be denoted by $\mathrm{gr}[A]$. The pseudoscalar will be denoted by I.
Definition 3 
(Span of a multivector). The span of a multivector A, written as the set $\mathrm{span}[A]$, is defined as the minimal ordered set of generators $\mathrm{span}[A] := \{e_i\}$ for which
$$(A - \langle A \rangle_0) * e_J \neq 0, \qquad e_J \in \mathcal{B}$$
holds true.
It is clear that $\mathrm{span}[A] \subseteq E$, while $\mathrm{span}[A] = E$ only for a full-grade, general multivector.
Definition 4. 
The term general multivector will denote a multivector for which span [ A ] = E .
Definition 5 
(Sparsity property). A (square) matrix has the sparsity property if it has exactly one non-zero element per column and exactly one non-zero element per row. We will call such a matrix sparse.
Here, it is useful to recall the definition of a permutation matrix, which is a square binary matrix that has exactly one entry of 1 in each row and each column, with all other entries being 0. Therefore, a sparse matrix in the sense of the above definition generalizes the notion of a permutation matrix.
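As a quick illustration, the sparsity property and its closure under matrix products (Lemma 1 below) can be checked numerically. The following is a minimal Python sketch for illustration only; the paper's own implementation is in Maxima:

```python
import numpy as np

def is_sparse(M: np.ndarray) -> bool:
    """Sparsity property of Definition 5: exactly one non-zero
    entry in every row and in every column."""
    return (bool((np.count_nonzero(M, axis=1) == 1).all())
            and bool((np.count_nonzero(M, axis=0) == 1).all()))

# A signed, scaled permutation matrix is sparse in this sense,
# generalizing the 0/1 permutation matrix.
P = np.array([[0, -1, 0],
              [1,  0, 0],
              [0,  0, 2]])

assert is_sparse(P)
assert is_sparse(P @ P)                       # products stay sparse
assert not is_sparse(np.array([[1, 1],
                               [0, 1]]))      # two entries in row 0
```

The identity and every generator image constructed later in the paper fall into this class.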

2.3. Automorphisms

Consider a general multivector M. Most authors define two (principal) automorphisms: inversion,
$$\hat{M} := \sum_{k=0}^{n} (-1)^k \langle M \rangle_k,$$
and grade reversion,
$$\tilde{M} := \sum_{k=0}^{n} (-1)^{k(k-1)/2} \langle M \rangle_k.$$
These can be further composed into Clifford conjugation:
$$\bar{M} := \hat{\tilde{M}} = \tilde{\hat{M}} = \sum_{k=0}^{n} (-1)^{k(k+1)/2} \langle M \rangle_k$$
Readers are also directed to the works of [7,8,9] for more details. Another less-used automorphism is the Hitzer–Sangwine involution [10]:
$$h_J(M) := \sum_{k=0}^{n} (-1)^{[k \in J]} \langle M \rangle_k$$
for the multi-index J, where $[\cdot]$ denotes the Iverson bracket; this involution has no standard notation.
Lastly, the inverse (or Hermitian) automorphism is defined as follows [11]. Suppose that M is represented as the sum of blades $M = \sum_{k=0}^{n} c_k B_k$. Then,
$$M^{\#} := \sum_{k=0}^{n} c_k B_k^{-1}$$
by linearity. This automorphism also lacks standard notation. The term “Hermitian” originates from the use of complex-valued Clifford algebras.

2.4. Minimal Polynomials

Definition 6 
(Minimal polynomial). The minimal polynomial of a multivector A is the monic polynomial μ : C n C n of minimal degree m, such that
μ ( A ) = k = 0 m c k A k = 0 , c m = 1
for a given multivector A.
Evaluation of μ in the Clifford algebra or the complex numbers will be assumed depending on the context of discussion. Furthermore, we have the following result:
Proposition 1. 
The minimal polynomial μ is unique for a given multivector A C p , q .
Proof. 
The proof is given in [12]. Suppose that f(x) and g(x) are two monic polynomials of minimal degree m such that f(A) = g(A) = 0; then, h(x) = f(x) − g(x) is a polynomial of degree at most m − 1 such that h(A) = 0. Unless h = 0, this contradicts the minimality of f and g. Therefore, h = 0.    □
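The distinction between a minimal polynomial and a merely annihilating one can be made concrete with a small computer algebra check. The sketch below uses Python/SymPy for illustration (the paper works in Maxima), on a matrix whose characteristic polynomial strictly exceeds its minimal polynomial in degree:

```python
import sympy as sp

x = sp.symbols('x')

# Diagonal matrix with a repeated eigenvalue: the characteristic
# polynomial is (x-2)**2 * (x-3), while the monic annihilating
# polynomial of least degree is (x-2)*(x-3).
J = sp.diag(2, 2, 3)

p = J.charpoly(x).as_expr()            # degree-3 characteristic polynomial
mu = sp.expand((x - 2) * (x - 3))      # unique minimal polynomial

# mu annihilates J ...
assert (J - 2*sp.eye(3)) * (J - 3*sp.eye(3)) == sp.zeros(3, 3)
# ... and mu divides the characteristic polynomial
assert sp.rem(p, mu, x) == 0
```

No monic polynomial of degree 1 annihilates J, so the quadratic above is the minimal polynomial, in line with Proposition 1.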

3. Indicial or Sparse Representation

Definition 7 
(Indicial map). Define the indicial map $\iota_e$, acting on symbols by concatenation (i.e., of a set), such that
$$\iota_e : \mathbb{N}^n \to \mathcal{B}, \qquad \iota_e : g \mapsto e_g,$$
where g is set-valued, and let the convention $\iota_e : \varnothing \mapsto 1 \in C\ell$ hold.
Definition 8. 
Define the argument map arg, acting on symbol compositions, as
$$\arg : f(g) \mapsto g.$$
Let $\arg f = \varnothing$ for the atomic symbol f.
These definitions allow for stating a very general result about Clifford algebra representations.
Theorem 1 
(Indicial representation). For generators e s , e t C p , q , r , such that s t , we have the following diagram:
[Diagram i001: the indicial representation, relating the actions of $\iota_e$ and arg on products of generators]
Proof. 
The right–left $\iota$ action follows from the construction of $C\ell_{p,q,r}$. The left–right argument action is trivial. We observe that $\arg f = \varnothing$. Trivially, $\{s\} \,\Delta\, \varnothing = \varnothing \,\Delta\, \{s\} = \{s\}$. Let us suppose that $s = t$. We notice that $\{s\} \,\Delta\, \{t\} = \{t\} \,\Delta\, \{s\} = \varnothing$. Let us suppose that $s \neq t$. We notice that $\{s\} \,\Delta\, \{t\} = \{s, t\}$ and $\{t\} \,\Delta\, \{s\} = \{t, s\}$.    □
This theorem is used for the reduction of products of blades in the Clifford package as shown by the author of [4].

4. Clifford Algebra Real Matrix Representation Map

Definition 9 
(Scalar product table). Define the diagonal scalar product matrix as
$$\mathbf{G} := \{\sigma_{IJ} = e_I * e_J \mid e_I, e_J \in \mathcal{B}\}$$
At present, we will focus on non-degenerate Clifford algebras; therefore, the non-zero elements of $\mathbf{G}$ are valued in the set $\{-1, 1\}$.
Lemma 1 
(Sparsity lemma). If the matrices $\mathbf{A}$ and $\mathbf{B}$ are sparse, then so is $\mathbf{C} = \mathbf{A}\mathbf{B}$. Moreover,
$$c_{ij} = (0;\ a_{iq} b_{qj})$$
(no summation!) for some index q; that is, every entry of $\mathbf{C}$ is either zero or a single product $a_{iq} b_{qj}$.
Proof. 
The proof is given in [6] and will not be repeated.    □
Lemma 2 
(Multiplication Matrix Structure). For the disjoint multi-index sets $S \neq T$, the following implications hold for the elements of $\mathbf{M}$:
[implication diagram i002]
so that $m_{\lambda\lambda'} = m_{\lambda\mu} \sigma_\mu m_{\mu\lambda'}$ for some index $\mu$.
Proof. 
The proof is given in [6] but will be repeated for completeness of the discussion. Suppose that the ordering of elements is given in the construction of $C\ell_{p,q,r}$. To simplify the presentation, without loss of generality, suppose that $e_s$ and $e_t$ are some generators. By the properties of $\mathbf{M}$, there exists an index $\lambda' > \lambda$ such that $e_M e_{L'} = m_{\mu\lambda'} e_t$, $L' \,\Delta\, M = T$ for $L' \neq L$. Choose M s.t. $L \prec M \prec L'$. Then, for $L \prec M \prec L'$ and $S \neq T$,
$$e_M e_L = m_{\mu\lambda} e_s, \ L \,\Delta\, M = S, \qquad e_L e_M = m_{\lambda\mu} e_s, \qquad e_M e_{L'} = m_{\mu\lambda'} e_t, \ L' \,\Delta\, M = T$$
Suppose that $e_s e_t = e_{st}$, $s \neq t$, so that $\{s\} \,\Delta\, \{t\} = S \,\Delta\, T = S \cup T$. Multiply together the diagonal nodes in the matrix as follows:
$$e_L e_M \, e_M e_{L'} = \sigma_\mu \, e_L e_{L'} = m_{\lambda\mu} m_{\mu\lambda'} e_{st}$$
Therefore, $s \in L \,\Delta\, M$ and $t \in L' \,\Delta\, M$. We observe that there is at least one element (the algebra unity) with the desired property $\sigma_\mu \neq 0$.
Further, we observe that there exists a unique index $\lambda'$ such that $m_{\lambda\lambda'} \mapsto e_{st}$. Since $\lambda$ is fixed, this implies that $L = L' \iff \lambda = \lambda'$. Therefore,
$$e_L e_{L'} = m_{\lambda\lambda'} e_{st}, \qquad L \,\Delta\, L' = \{s, t\},$$
which implies the identity $m_{\lambda\lambda'} e_{st} = m_{\lambda\mu} \sigma_\mu m_{\mu\lambda'} e_{st}$. For higher-graded elements $e_S$ and $e_T$, we should write $e_{S \Delta T}$ instead of $e_{st}$.    □
Proposition 2. 
Consider the multiplication table M . All elements m k j are different for a fixed row k. All elements m i q are different for a fixed column q.
Proof. 
The proof is given in [6].    □
Proposition 3. 
For e s E , the matrix A s = C s ( M ) is sparse.
Proof. 
The proof is given in [6].    □
Proposition 4. 
For generator elements e s and e t , E s E t + E t E s = 0 .
Proof. 
The proof is given in [6]. Consider the basis elements $e_s$ and $e_t$. By linearity and the homomorphism property of the $\pi$ map (Theorem 2), $e_s e_t + e_t e_s = 0$ implies $\pi(e_s e_t) + \pi(e_t e_s) = 0$. Therefore, for two vector elements, $\mathbf{E}_s \mathbf{E}_t + \mathbf{E}_t \mathbf{E}_s = 0$.    □
Proposition 5. 
E s E s = σ s I
Proof. 
The proof is given in [6]. Consider the matrix $\mathbf{W} = \mathbf{G} \mathbf{A}_s \mathbf{G} \mathbf{A}_s$. Then, $w_{\mu\nu} = \sum_\lambda \sigma_\mu \sigma_\lambda a_{\mu\lambda} a_{\lambda\nu}$ element-wise. By Lemma 1, $\mathbf{W}$ is sparse, so that $w_{\mu\nu} = (0;\ \sigma_\mu \sigma_q a_{\mu q} a_{q\nu})$.
From the structure of $\mathbf{M}$, for the entries containing the element $e_S$, we have the equivalence
$$e_M e_Q = a^s_{\mu q} e_S, \quad S = M \,\Delta\, Q, \qquad e_Q e_M = a^s_{q\mu} e_S.$$
After multiplication of the equations, we obtain $e_M e_Q \, e_Q e_M = a^s_{\mu q} e_S \, a^s_{q\mu} e_S$, which simplifies to the first fundamental identity:
$$\sigma_q \sigma_\mu = a^s_{\mu q} a^s_{q\mu} \sigma_s$$
We observe that if $\sigma_\mu = 0$ or $\sigma_q = 0$, the result follows trivially. In this case, we also have $\sigma_s = 0$. Therefore, let us suppose that $\sigma_s \sigma_q \sigma_\mu \neq 0$. We multiply both sides by $\sigma_s \sigma_q \sigma_\mu$ to obtain $\sigma_s = \sigma_q \sigma_\mu a^s_{\mu q} a^s_{q\mu}$. However, the RHS is a diagonal element of $\mathbf{W}$; therefore, by the sparsity, it is the only non-zero element for a given row/column, so that $\mathbf{W} = \mathbf{E}_s^2 = \sigma_s \mathbf{I}$.    □
Definition 10 
(Clifford coefficient map). Define the linear map, acting element-wise, $C_a : C\ell_n \to \mathbb{R}$ by the action $C_a(a x + b) = x$ for $x \in \mathbb{R}$, $a, b \in \mathcal{B}$.
Define the Clifford coefficient map indexed by $e_S$ as $\mathbf{A}_S := C_S(\mathbf{M})$, where $\mathbf{M}$ is the multiplication table of the extended basis, $\mathbf{M} = \{e_M e_N \mid e_M, e_N \in \mathcal{B}\}$, and $\mathbf{A}_S$ denotes the action of the map.
In non-degenerate algebras, the coefficient map can be represented by the formula
$$\mathbf{A}_S(\mathbf{M}) = \langle \mathbf{M} \, e_S^{\#} \rangle_0$$
using the inverse automorphism.
Definition 11 
(Canonical matrix map). Define the map $\pi : \mathcal{B} \to \mathrm{Mat}_{\mathbb{R}}(2^n \times 2^n)$, $n = p + q + r$, as
$$\pi : e_S \mapsto \mathbf{E}_s := \mathbf{G} \mathbf{A}_s$$
where s is the ordinal of $e_S \in \mathcal{B}$ and $\mathbf{A}_S$ is computed as in Definition 10.
The Maxima code implementing the π -map is given in Listing A1.
Proposition 6. 
The π-map is linear.
The proposition follows from the linearity of the coefficient map and matrix multiplication with a scalar.
Theorem 2 
(Semigroup property). Let e s and e t be generators of C p , q , r . Then, the following statements hold:
1. 
The map π is a homomorphism with respect to the Clifford product (i.e., π distributes over the Clifford products): π ( e s e t ) = π ( e s ) π ( e t ) .
2. 
The set of all matrices E s forms a multiplicative semigroup.
Proof. 
The proof is given in [6] but will be repeated for completeness of the discussion. Let $\mathbf{E}_s = \pi(e_s)$, $\mathbf{E}_t = \pi(e_t)$, $\mathbf{E}_{st} = \pi(e_s e_t)$. We specialize the result of Lemma 2 for $S = \{s\}$ and $T = \{t\}$ and observe that $m_{\lambda\lambda'} e_{st} = m_{\lambda\mu} \sigma_\mu m_{\mu\lambda'} e_{st}$ for $\lambda, \lambda', \mu \leq n$ and $\sigma_\lambda m_{\lambda\lambda'} = \sigma_\lambda m_{\lambda\mu} \sigma_\mu m_{\mu\lambda'}$. In summary, the map $\pi$ acts on $C\ell_{p,q}$ according to the following diagram:
[diagram i003: the action of $\pi$ on products of generators]
Therefore, E s t = E s E t . Moreover, we observe that π ( e s e t ) = E s t = E s E t = π ( e s ) π ( e t ) .
For the semi-group property, observe that since π is linear, it is invertible. Since π distributes over the Clifford product, its inverse π 1 distributes over matrix multiplication:
$$\pi^{-1}(\mathbf{E}_s \mathbf{E}_t) = \pi^{-1}(\mathbf{E}_{st}) = e_{st} = e_s e_t = \pi^{-1}(\mathbf{E}_s) \, \pi^{-1}(\mathbf{E}_t)$$
However, C p , q is closed by construction; therefore, the set E s is closed under matrix multiplication.    □
Proposition 7. 
Let $L := [\,l_i \mid l_i \in \mathcal{B}\,]$ be a column vector and $R_s$ be the first row of $\mathbf{E}_s$. Then, $\pi^{-1} : \mathbf{E}_s \mapsto R_s L$.
Proof. 
We observe that by Proposition 3, the only non-zero element in the first row of E s is σ 1 m 1 s = 1 . Therefore, R s L = e s .    □
Theorem 3 
(Complete Real Matrix Representation). Define the map g : A G A as matrix multiplication with G . Then, for a fixed multi-index s, π = C s g = g C s . Further, π is an isomorphism inducing a Clifford algebra representation in the real matrix algebra according to the following diagram:
[diagram i004: the complete real matrix representation induced by $\pi$]
Proof. 
The proof is given in [6] but will be repeated for completeness of the discussion. The π -map is a linear isomorphism. The set E s forms a multiplicative group, which is a subset of the matrix algebra Mat R ( N × N ) , N = 2 n . Let π ( e s ) = E s and π ( e t ) = E t . It is claimed that
  • $\mathbf{E}_s \mathbf{E}_t \neq 0$ by the Sparsity Lemma 1.
  • $\mathbf{E}_s \mathbf{E}_t = -\mathbf{E}_t \mathbf{E}_s$ by Proposition 4.
  • E s E s = σ s I by Proposition 5.
Therefore, the set $\{\mathbf{E}_S\}_{S \in \{1\} \cup P(n)}$ is an image of the extended basis $\mathcal{B}$. Here, $P(n)$ denotes the power set of the indices of the algebra generators.    □
What is useful about the above representation is the relationship between the trace of the multivector matrix and the scalar part of the pre-image:
$$\mathrm{tr}\, \mathbf{A} = 2^n \langle A \rangle_0$$
for the image π ( A ) = A of a general multivector element A. This will be used further in the proof of the FVS algorithm.
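For a hand-checkable instance of this trace relation, take n = 1 and $C\ell_{0,1} \cong \mathbb{C}$, whose regular representation sends $A = a + b\,e_1$ to a 2 × 2 real matrix. The following Python sketch is illustrative only (the paper's $\pi$-map is implemented in Maxima, Listing A1):

```python
import numpy as np

def pi(a: float, b: float) -> np.ndarray:
    """Regular real matrix representation of A = a + b*e1 in Cl(0,1),
    where e1**2 = -1 (the algebra is isomorphic to the complex numbers)."""
    return np.array([[a, -b],
                     [b,  a]])

A, B = pi(3, 4), pi(1, 2)

# homomorphism: (3 + 4 e1)(1 + 2 e1) = (3 - 8) + (6 + 4) e1
assert np.allclose(A @ B, pi(-5, 10))
# e1**2 = -1 in the representation
assert np.allclose(pi(0, 1) @ pi(0, 1), -np.eye(2))
# trace relation: tr(pi(A)) = 2**n * <A>_0, with n = 1
assert np.trace(A) == 2 * 3
```

The scalar part of the pre-image is thus read off directly from the trace of the representing matrix.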
Remark 1. 
The above construction works if instead of the entire algebra C p , q we restrict a multivector to a sub-algebra of a smaller grade max gr [ A ] = r . In this case, we form grade-restricted multiplication matrices G r and M r .

Characteristic and Minimal Polynomials of a Multivector

Let us first introduce the notion of a multivector characteristic polynomial and contrast it with the previously introduced definition of a minimal polynomial. The distinction between the characteristic and minimal polynomials will become clear from the subsequent results. The coefficients of these polynomials will be assumed to be real numbers, although this is not strictly necessary, as discussed in [12].
Definition 12 
(Characteristic polynomial). The characteristic polynomial p A of the multivector A is the pre-image of the characteristic polynomial P A ( x ) : = det ( x I A ) of its matrix representation by the map π.
From the properties of the π map, it is clear that
$$\pi^{-1} : P_{\mathbf{A}}(\mathbf{A}) = 0 \mapsto p_A(A) = 0$$
so that the above definition is consistent with the usual notion of a characteristic polynomial. Therefore, the notion of an eigenvalue $\lambda$ of a multivector can also be defined according to its usual meaning—that is, a member of the list of real or complex numbers $\{\lambda_i\}$, such that the equation
$$p_A(A) = \prod_{i=1}^{2^n} (A - \lambda_i) = 0$$
holds true for the multivector A.
Remark 2. 
Defined in this way, the characteristic polynomial is related to the real matrix representation, while the minimal polynomial is representation-independent. Therefore, it is expected that a given matrix algorithm can be translated in a one-to-one manner to a Clifford algebra algorithm via the characteristic polynomial.
Theorem 4. 
Under the mapping π, the polynomial μ is the minimal polynomial of the complete real matrix representation of the generic multivector A.
Proof. 
Consider the generic multivector A and fix the value of the coefficients of its minimal polynomial μ . Then, by the properties of the π map, we can compose the following diagram:
Mathematics 13 01106 i005
   □
As an illustration, consider the special case of the algebra pseudoscalar I.
Proposition 8. 
Suppose that A is a blade. Then, it has a quadratic minimal polynomial of the form
$$\mu(x) = x^2 \pm 1$$
In particular, if $A = I$, then $\mu(x) = x^2 - \sigma_I$.
Proof. 
Consider $C\ell_{p,q}$, with $p + q = n$. Without loss of generality, consider the pseudoscalar $I_n$. The square of $I_n$ can be evaluated as
$$I_n^2 = (-1)^q \, (-1)^{\frac{n(n-1)}{2}} = \sigma_I$$
This follows by reduction, considering that
$$I_n^2 = e_n^2 \, (-1)^{n-1} I_{n-1}^2, \qquad I_1^2 = e_1^2,$$
so that the product accumulates the sign $(-1)^{(n-1) + (n-2) + \cdots + 1} = (-1)^{n(n-1)/2}$. Therefore, the minimal polynomial is $\mu(x) = x^2 - \sigma_I$. By virtue of the same argument, all blades have quadratic minimal polynomials.    □
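The sign $\sigma_I$ derived above is easy to tabulate. A small Python helper for illustration (not part of the Clifford package):

```python
def pseudoscalar_square(p: int, q: int) -> int:
    """sigma_I = I_n**2 in Cl(p,q): (-1)**q from the negative-square
    generators, times (-1)**(n*(n-1)/2) from reordering, n = p + q."""
    n = p + q
    return (-1) ** q * (-1) ** (n * (n - 1) // 2)

# Well-known special cases:
assert pseudoscalar_square(0, 1) == -1   # Cl(0,1) ~ C: I = e1, e1**2 = -1
assert pseudoscalar_square(2, 0) == -1   # e12**2 = -1
assert pseudoscalar_square(1, 1) == +1   # split signature
assert pseudoscalar_square(3, 0) == -1
assert pseudoscalar_square(4, 0) == +1
```

In each case, the minimal polynomial of the pseudoscalar is $x^2 - \sigma_I$, in agreement with Proposition 8.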
Lemma 3. 
Suppose that $p_A(x)$ and $\mu(x)$ are the characteristic and minimal polynomials of the multivector A, respectively, and furthermore, that $\mu(0) \neq 0$. Then,
  • $\mu$ divides $p_A$: $\mu \mid p_A$;
  • $p_A$ and $\mu$ share the same roots;
  • Finally, $p_A$ can be written as
$$p_A(x) = g(x) \sum_{k=1}^{n = N/m} a_k \mu^k(x), \qquad a_n = 1,$$
where $\deg[p_A] = N$, $\deg[g] = N - nm$, and $g(x)$ is monic.
Proof. 
Suppose that $p_A$ is of degree N and is divided by $\mu$ (of degree m) as
$$p_A(x) = \mu(x) g(x) + r(x),$$
where $g(x)$ is a polynomial of degree $N - m$ and $r(x)$ is the remainder polynomial of maximal degree $k < m$ (by the definition of $\mu$). Then, evaluating at $x = A$ gives $0 = r(A)$. Therefore, $r = 0$, since $\mu$ by hypothesis is the minimal polynomial.
Consider the matrix representation of A: $\mathbf{A} = \pi(A)$. Suppose that $\lambda \neq 0$ is a root of $P_{\mathbf{A}}$ with multiplicity 1. Then, there exists a non-null eigenvector v such that $\mathbf{A} v = \lambda v$. Furthermore, by associativity, for any natural number m, we have $\mathbf{A}^m v = \lambda^m v$. Hence,
μ ( A ) v = μ ( λ ) I v = 0
by Theorem 4. Therefore, by the above diagram, P A , and hence p A and  μ , share the same roots.
Now, suppose that λ is a root of μ with multiplicity 1. To establish the validity of Equation (15), we write it first as
$$p_A(x) = g(x) \sum_{k=0}^{n = N/m} a_k \mu^k(x) + r(x),$$
where r is the remainder term. However, we have already established that $r = 0$. Then, we use a result on the condition under which one polynomial is a polynomial of another one, as stated in (Proposition 1, [13]); since $p_A$ and $\mu$ share the same roots as established above, they do fulfill the technical condition. Furthermore, we observe that $a_n = 1$, since $p_A$ is monic. To determine n, we observe that the coefficients can be determined by the analytical formula
$$a_k = \frac{1}{k!} \left[ \frac{1}{\mu'(x)} \frac{\partial}{\partial x} \right]^k \frac{p_A(x)}{g(x)} \Bigg|_{x = \lambda}$$
The series terminates for $k > n$. The proof of Equation (16) follows by induction, observing that for a differentiable function $\mu$ we have
$$\frac{\partial}{\partial \mu} \frac{p_A(x)}{g(x)} = \frac{dx}{d\mu} \frac{\partial}{\partial x} \frac{p_A(x)}{g(x)} = \frac{1}{\mu'(x)} \frac{\partial}{\partial x} \frac{p_A(x)}{g(x)},$$
while also $\mu'(\lambda) \neq 0$.    □
To optimize the inverse calculation, the following needs to be considered. In the first place, whenever $p_A(x) = \mu(x)^n$, one can determine the coefficients of $\mu$ by equating equal powers on both sides of the equation. The exponent n in the formula can be determined by the polynomial greatest common divisor (GCD) algorithm. This is supported “out of the box” by computer algebra systems, such as Maxima.
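The GCD step can be sketched in a few lines of Python/SymPy (an illustration; the paper relies on Maxima's built-in polynomial GCD): when $p_A = \mu^n$ and $\mu$ is square-free, the square-free part $p_A / \gcd(p_A, p_A')$ recovers $\mu$, and n follows from the degrees.

```python
import sympy as sp

v = sp.symbols('v')

# Emulate the structure p_A = mu**n with a square-free quadratic mu
mu_true = v**2 - 2*v + 5
p = sp.expand(mu_true**2)

g = sp.gcd(p, sp.diff(p, v))     # gcd(p, p') = mu**(n-1)
mu = sp.cancel(p / g)            # square-free part recovers mu
n = sp.degree(p, v) // sp.degree(mu, v)

assert sp.expand(mu - mu_true) == 0
assert n == 2
```

The same computation fails to recover $\mu$ when $p_A$ is not a pure power of it, which is exactly the general-case caveat discussed below.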
On the other hand, if $\mu(0) = 0$, then $\mu(x) = x\, h(x)$, where $h(x)$ corresponds to a zero divisor, since $\det A = 0$ in that case. Suppose that $h(0) \neq 0$. In such a case, we proceed as follows. Write $p_A$ as
$$p_A(x) = g(x) \sum_{k=0}^{n = N/m} a_k h^k(x)$$
Then, by the chain rule, we obtain
$$\frac{\partial}{\partial h} p_A = \frac{dx}{dh} \frac{\partial}{\partial x} p_A = \frac{1}{h'(x)} \frac{\partial}{\partial x} p_A$$
Therefore,
$$a_k = \frac{1}{k!} \left[ \frac{1}{h'(x)} \frac{\partial}{\partial x} \right]^k \frac{p_A(x)}{g(x)} \Bigg|_{x = \lambda},$$
which can be shown by induction, in a similar way as above. The above discussion is also valid for the case whenever $\mu(x) = x^q h(x)$ for some natural number q. In such a case, h is computed in a corresponding manner as $h(x) = \mu(x)/x^q$, so we only need to determine the multiplicity of the root $x = 0$ by successive differentiation.
From this discussion, it is apparent that, in the general case, the minimal polynomial cannot be determined solely from the characteristic one.
Based on the concept of the minimal polynomial, the determinant of a multivector can be defined as follows.
Definition 13. 
Consider a multivector A having a minimal polynomial μ. The determinant is defined as
$$\det A := \mu(0)$$
This definition ensures the uniqueness of the determinant, based on Proposition 1, and its independence of a particular representation. Furthermore, it agrees with the usual definition based on outermorphisms [14]. Indeed, consider a set of linearly independent vectors $V = \{v_j\}$ such that $v_i \wedge I_s = 0$ for a certain blade $I_s$. Form the outer product
$$W(V) := \bigwedge_{j=1} v_j$$
Then, $W(V) = \alpha I_s$ for some numerical factor $\alpha$. Therefore, $W * I_s^{-1} = \alpha$. Furthermore, the map W is linear by the linearity of the outer product and can trivially be extended to an outermorphism as
$$W(V) := \bigwedge_{j=1} W(v_j), \qquad W(v_j) = v_j$$
Therefore, we can identify $\det W = W * I_s^{-1}$. On the other hand, $W^2 = \alpha^2 I_s^2 = \pm \alpha^2$, so the minimal polynomial is $\mu(x) = x^2 \mp \alpha^2$.

5. Multivector Inverses and the FVS Algorithm

5.1. Low-Dimensional Formulas for the Inverse

The following formulas for the inverse element have been shown to hold [10]: For n = 1, 2,
$$M^{-1} = \frac{\bar{M}}{M \bar{M}}$$
For n = 3,
$$M^{-1} = \frac{\bar{M} \hat{M} \tilde{M}}{M \bar{M} \hat{M} \tilde{M}}$$
For n = 4,
$$M^{-1} = \frac{\bar{M} \, h_{3,4}(M \bar{M})}{M \bar{M} \, h_{3,4}(M \bar{M})}$$
For n = 5,
$$M^{-1} = \frac{\bar{M} \hat{M} \tilde{M} \, h_{1,4}(M \bar{M} \hat{M} \tilde{M})}{M \bar{M} \hat{M} \tilde{M} \, h_{1,4}(M \bar{M} \hat{M} \tilde{M})}$$
Other, albeit equivalent, formulas have been derived by different authors [15,16,17].
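The n ≤ 2 formula can be verified numerically in $C\ell_{2,0}$ with component arithmetic over the basis $(1, e_1, e_2, e_{12})$. The Python sketch below is an illustration under these stated basis conventions (not the paper's Maxima code); it uses the fact that $M\bar{M}$ is a pure scalar for n ≤ 2:

```python
import numpy as np

def gp(a, b):
    """Geometric product in Cl(2,0) on components (1, e1, e2, e12),
    with e1**2 = e2**2 = 1 and e12 = e1 e2."""
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return np.array([
        a1*b1 + a2*b2 + a3*b3 - a4*b4,
        a1*b2 + a2*b1 - a3*b4 + a4*b3,
        a1*b3 + a3*b1 + a2*b4 - a4*b2,
        a1*b4 + a4*b1 + a2*b3 - a3*b2,
    ])

def conj(m):
    """Clifford conjugation: sign (-1)**(k*(k+1)/2) on the grade-k part."""
    return np.array([m[0], -m[1], -m[2], -m[3]])

M = np.array([2.0, 0.5, -1.0, 3.0])
MMbar = gp(M, conj(M))

assert np.allclose(MMbar[1:], 0)     # M Mbar is purely scalar for n <= 2
Minv = conj(M) / MMbar[0]            # M^{-1} = Mbar / (M Mbar)
assert np.allclose(gp(M, Minv), [1, 0, 0, 0])
```

For n ≥ 3 the product $M\bar{M}$ is no longer guaranteed to be scalar, which is why the higher-dimensional formulas above require the extra involutions.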

5.2. The FVS Multivector Inversion Algorithm

Multivector inverses can be computed using the matrix representation and the characteristic polynomial.
The matrix inverse is given as $\mathbf{A}^{-1} = \operatorname{adj} \mathbf{A} / \det \mathbf{A}$, where $\det \mathbf{A}$ is the determinant and adj denotes the adjunct. The formula is not practical, because it requires the computation of $n^2 + 1$ determinants. By the Cayley–Hamilton theorem, the inverse of $\mathbf{A}$ is a polynomial in $\mathbf{A}$, which can be computed during the last step of the FVS algorithm [18]. This algorithm has a direct representation in terms of Clifford multiplications, as follows.
Theorem 5 
(Reduced-grade FVS algorithm). Suppose that $A \in C\ell_{p,q}$ is a multivector of span dimension s, such that $A \in \mathrm{span}[e_1, \ldots, e_s]$. The Clifford inverse, if it exists, can be computed in $k = 2^{\lceil s/2 \rceil}$ Clifford multiplication steps from the iteration
$$m_0 = 1, \qquad t_j = \frac{k}{j} \langle A\, m_{j-1} \rangle_0, \qquad m_j = A\, m_{j-1} - t_j, \qquad j = 1, \ldots, k,$$
until the step where $m_k = 0$, so that
$$A^{-1} = m_{k-1} / t_k.$$
There exists a polynomial of degree k,
$$\chi_A(\lambda) = \lambda^k + c_1 \lambda^{k-1} + \cdots + c_{k-1} \lambda + c_k, \qquad c_j = -t_j,$$
such that $\chi_A(A) = 0$. This polynomial will be called the reduced characteristic polynomial.
Proof. 
The proof is given in [6] and follows from the homomorphism property of the $\pi$ map. We recall the following statement of the FVS algorithm:
$$p_A(\lambda) = \det(\lambda \mathbf{I}_n - \mathbf{A}) = \lambda^n + c_1 \lambda^{n-1} + \cdots + c_{n-1} \lambda + c_n, \qquad n = \dim(\mathbf{A}),$$
where
$$\mathbf{M}_0 = \mathbf{I}_n, \qquad t_j = \frac{1}{j} \mathrm{tr}(\mathbf{A} \mathbf{M}_{j-1}), \qquad \mathbf{M}_j = \mathbf{A} \mathbf{M}_{j-1} - t_j \mathbf{I}_n, \qquad c_j = -t_j, \qquad j = 1, \ldots, n.$$
The matrix inverse can be computed from the last step of the algorithm as A 1 = M n 1 / t n under the obvious restriction t n 0 .
Therefore, at the kth step of the algorithm, application of $\pi^{-1}$ leads to
$$\pi^{-1} : \mathbf{M}_k = \mathbf{A} \mathbf{M}_{k-1} - t_k \mathbf{I} \ \mapsto \ m_k = A\, m_{k-1} - t_k.$$
Furthermore, $\mathrm{tr}[\mathbf{A} \mathbf{M}_{k-1}] = 2^n \langle A\, m_{k-1} \rangle_0$ by Equation (11). Moreover, the FVS algorithm terminates with $\mathbf{M}_n = \mathbf{0}_n$, which corresponds to the limiting case $n = 2^{p+q}$ whenever A contains all grades. Here, $\mathbf{0}_n$ denotes the square zero matrix of dimension n.
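The matrix-side iteration just recalled can be sketched in a few lines of Python (an illustration of the classical algorithm; the Clifford-side version is what the package implements in Maxima):

```python
import numpy as np

def fvs(A: np.ndarray):
    """Faddeev-LeVerrier-Souriau iteration: M_0 = I,
    t_k = tr(A M_{k-1})/k, M_k = A M_{k-1} - t_k I.
    Returns t_1..t_n and the inverse M_{n-1}/t_n (None if t_n = 0)."""
    n = A.shape[0]
    M = np.eye(n)
    traces = []
    for k in range(1, n + 1):
        AM = A @ M
        t = np.trace(AM) / k
        traces.append(t)
        M_prev, M = M, AM - t * np.eye(n)
    assert np.allclose(M, 0)        # M_n = 0 by Cayley-Hamilton
    return traces, (M_prev / traces[-1]
                    if abs(traces[-1]) > 1e-12 else None)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
traces, Ainv = fvs(A)
# det(lambda*I - A) = l**2 - 4l + 3, i.e. c_1 = -t_1 = -4, c_2 = -t_2 = 3
assert np.allclose(traces, [4.0, -3.0])
assert np.allclose(Ainv, np.linalg.inv(A))
```

Only multiplications and traces are used, which is exactly the structure that survives translation into the Clifford setting.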
On the other hand, examining the matrix representations of different Clifford algebras, Acus and Dargys [19] make the observation that, according to the Bott periodicity, the number of steps can be reduced to $2^{\lceil n/2 \rceil}$. This can be proven as follows. Consider the isomorphisms $C\ell_{p,q} \cong C\ell^{+}_{p,q+1} \cong C\ell_{q+1,p-1}$. Then, if a property holds for an algebra of dimension $n = p + q$, it will hold also for the algebra of dimension $n - 2$. Therefore, suppose that for n even the characteristic polynomial is square-free: $p_A(v) \neq q(v)^2$ for any polynomial q. We proceed by reduction.
For n = 2 in $C\ell_{2,0}$ and $A = a_1 + e_1 a_2 + e_2 a_3 + e_{12} a_4$, we compute
$$p_A(v) = \left( a_1^2 - a_2^2 - a_3^2 + a_4^2 - 2 a_1 v + v^2 \right)^2,$$
and a similar result holds also for the other signatures of $C\ell_2$ and can be obtained by direct computation using the Clifford package. Therefore, we have a contradiction, the reduced polynomial is of degree $k = 2^{n/2}$, and the number of steps can be reduced accordingly. In the same way, suppose that n is odd and the characteristic polynomial is square-free.
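This quadratic can be checked against the faithful 2 × 2 representation of $C\ell_{2,0}$; the Python sketch below uses arbitrary test coefficients for illustration (the 4 × 4 regular representation would yield the square of the same quadratic):

```python
import numpy as np

# Faithful 2x2 representation of Cl(2,0): e1**2 = e2**2 = I, e1 e2 = -e2 e1
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])
e12 = e1 @ e2

a1, a2, a3, a4 = 2.0, 0.5, -1.0, 3.0
A = a1 * np.eye(2) + a2 * e1 + a3 * e2 + a4 * e12

# quadratic factor: v**2 - 2*a1*v + (a1**2 - a2**2 - a3**2 + a4**2)
c0 = a1**2 - a2**2 - a3**2 + a4**2
assert np.allclose(A @ A - 2*a1*A + c0*np.eye(2), 0)   # Cayley-Hamilton
assert np.isclose(np.trace(A), 2*a1)
assert np.isclose(np.linalg.det(A), c0)
```

The trace and determinant of the 2 × 2 image match the linear and constant coefficients of the quadratic factor, confirming its role as the reduced characteristic polynomial.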
However, for n = 3 in $C\ell_{3,0}$ and $A = a_1 + e_1 a_2 + e_2 a_3 + e_3 a_4 + a_5 e_{12} + a_6 e_{13} + a_7 e_{23} + a_8 e_{123}$, it is established that $p_A$ factorizes as $p_A(v) = q(v)^2$ for the polynomial
$$\begin{aligned} q(v) = {} & \left( a_1^2 - a_2^2 - a_3^2 - a_4^2 + a_5^2 + a_6^2 + a_7^2 - a_8^2 + 2 i (a_3 a_6 - a_4 a_5 - a_2 a_7 + a_1 a_8) - 2 (a_1 + i a_8) v + v^2 \right) \\ & \times \left( a_1^2 - a_2^2 - a_3^2 - a_4^2 + a_5^2 + a_6^2 + a_7^2 - a_8^2 + 2 i (a_4 a_5 - a_3 a_6 + a_2 a_7 - a_1 a_8) - 2 (a_1 - i a_8) v + v^2 \right). \end{aligned}$$
The above polynomial is factored over $\mathbb{C}$ due to space limitations. Similar results hold also for the other signatures of $C\ell_3$ and can be obtained by direct computation using the Clifford package. Therefore, we have a contradiction, and the reduced polynomial is of degree $k = 2^{(n+1)/2}$. Therefore, overall, one can reduce the number of steps to $k = 2^{\lceil n/2 \rceil}$.
As a second case, let $E_s = \mathrm{span}[A]$ be the set of all generators represented in A, and s their count. We compute the restricted multiplication tables $\mathbf{M}(E_s)$ and $\mathbf{G}(E_s)$, respectively, and form the restricted map $\pi_s$. Then,
$$\pi_s(A A^{-1}) = \pi_s(A) \, \pi_s(A^{-1}) = \mathbf{A} \mathbf{A}^{-1} = \mathbf{I}_n, \qquad n = 2^s.$$
Therefore, the FVS algorithm terminates in $k = 2^s$ steps. Observe that $\pi^{-1} : \mathbf{A} \mathbf{M}_k \mapsto A\, m_k$. Therefore, $\mathrm{tr}[\mathbf{A} \mathbf{M}_k]$ will map to $2^s \langle A\, m_k \rangle_0$ by Equation (11). Now, suppose that $t_k \neq 0$; then, for the last step of the algorithm, we obtain
$$A\, m_{k-1} - t_k = 0 \implies A\, m_{k-1} / t_k = 1 \implies A^{-1} = m_{k-1} / t_k.$$
Therefore, by the argument of the previous case, the number of steps can be reduced to $k = 2^{\lceil s/2 \rceil}$.    □
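The reduced iteration of Theorem 5 can be traced end-to-end in $C\ell_{2,0}$, where s = 2 and k = 2. The Python sketch below is an illustration with hand-coded product rules for the basis $(1, e_1, e_2, e_{12})$, not the Maxima implementation; it recovers the inverse of a full multivector in two Clifford multiplications:

```python
import numpy as np

def gp(a, b):
    """Geometric product in Cl(2,0) on components (1, e1, e2, e12)."""
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return np.array([
        a1*b1 + a2*b2 + a3*b3 - a4*b4,
        a1*b2 + a2*b1 - a3*b4 + a4*b3,
        a1*b3 + a3*b1 + a2*b4 - a4*b2,
        a1*b4 + a4*b1 + a2*b3 - a3*b2,
    ])

def fvs_inverse(A, k=2):
    """Reduced FVS iteration, k = 2**ceil(s/2) = 2 for s = 2:
    m_j = A m_{j-1} - t_j with t_j = (k/j) <A m_{j-1}>_0."""
    m = np.array([1.0, 0.0, 0.0, 0.0])          # m_0 = 1
    for j in range(1, k + 1):
        Am = gp(A, m)
        t = (k / j) * Am[0]                      # scalar part only
        m_prev, m = m, Am - np.array([t, 0.0, 0.0, 0.0])
    assert np.allclose(m, 0)                     # terminates with m_k = 0
    return m_prev / t                            # A^{-1} = m_{k-1}/t_k

A = np.array([2.0, 0.5, -1.0, 3.0])
Ainv = fvs_inverse(A)
assert np.allclose(gp(A, Ainv), [1, 0, 0, 0])
assert np.allclose(gp(Ainv, A), [1, 0, 0, 0])
```

Note that only Clifford multiplications and extractions of the scalar part are needed, exactly as the theorem states.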
Corollary 1. 
$\chi_A$ is the minimal polynomial of the generic multivector A of span s. The maximal degree of $\mu$ is $m = 2^{\lceil s/2 \rceil}$. The algebra signature uniquely determines $\chi_A$.
Proof. 
The first statement follows from Theorem 4. The second statement follows from Proposition 1.    □
Corollary 2. 
The inverse A 1 does not exist if det A = 0 .
Proof. 
The inverse does not exist if c k = 0 . By Corollary 1, χ A is the minimal polynomial and χ A ( 0 ) = c k .    □
One could define the multivector adjunct as follows:
Definition 14. 
The adjunct of the generic multivector A is defined from the minimal polynomial as
adj A : = k = 1 m c k A k 1
 where c k are the coefficients of the minimal polynomial.
Proposition 9. 
Suppose that χ A is minimal. Then, the adjunct of a multivector A can be computed as
$$\operatorname{adj} A = m_{k-1}, \qquad k = 2^{\lceil s/2 \rceil}$$
Furthermore, A adj A = det A .
Proof. 
The first part of the statement follows from Theorem 5, considering that $m_k = 0$ in the last step, which corresponds to the minimal polynomial. For the second part of the statement, observe that
$$A \operatorname{adj} A = -\sum_{k=1}^{m} c_k A^k = \mu(0) - \mu(A) = \det A.$$
   □
Remark 3. 
To avoid possible confusion, the name “reduced characteristic polynomial” will be kept for the minimal polynomial of the algebra (i.e., of the generic multivector of grade n).
Based on Theorem 5, we can tabulate the number of steps necessary for the determinant computation in view of the algebra dimension (Table 1). The table can be extended in an obvious way to higher-dimensional Clifford algebras. However, here it is truncated to n = 8, considering the Bott periodicity.

6. Multivector Rank

The notion of a minimal polynomial allows one to define the rank of a multivector in a related way:
Definition 15 
(Rank of a multivector). The rank of the multivector A, denoted r(A), is the degree of its minimal polynomial, $\deg \mu$.
The above proof demonstrates that the degree of the minimal polynomial determines the number of steps (i.e., Clifford multiplications) in the computation. If the degree of the minimal polynomial is smaller than the degree of the characteristic polynomial, some optimization of the algorithm is possible, but then we have to determine the minimal polynomial of a specific, possibly sparser multivector. To achieve this, one could use the following result.
Proposition 10. 
The determinant of a multivector A (and hence its inverse, if it exists) can be computed in no fewer than $r(A)$ steps.
From the above definition, we can conclude that the rank of a multivector is a measure of its complexity. For instance, scalars are of rank 1, while vectors and blades are of rank 2, etc. One can expect that the multivector rank will also play a role in other algorithms.
Proposition 11. 
A non-scalar multivector A is of even rank.
Proof. 
Scalars $a$ obey $\mu(v) = v - a$, so we exclude them. Suppose, then, that there exists a multivector A of odd rank $r(A) = 2s + 1$ in $C\ell_n$ for $s > 0$. We proceed by reduction as before. The property should hold for $C\ell_{n-2}$. For $n = 1$, the minimal polynomial is $\mu(v) = v^2 \pm 1$, depending on the signature of the algebra, by Proposition 8. Therefore, we have a contradiction. For $n = 2$, we have general quadratic polynomials, as per Examples 1–3. Therefore, again we have a contradiction. Hence, generic multivectors are of even rank.    □
Denote the inverse of the blade $e_J$ by $e^J \equiv (e_J)^{\#}$. Then, conveniently, $\langle e^J e_{J'} \rangle = \delta^J_{J'}$, where $\delta^J_{J'}$ is the Kronecker symbol.
Proposition 12 
(Rank algorithm). Consider a multivector A having span $E_s = \operatorname{span}[A] = \{e_1, \ldots, e_s\}$ of s dimensions. Define the Krylov exponent sequence as the set
$$K := \{1, A, A^2, \ldots, A^k\}, \qquad k = 2^{\lceil s/2 \rceil}$$
(which can also be thought of as a co-vector of multivectors). Define the trial polynomial
$$c(A) := \sum_{i=0}^{k} c_i A^i \equiv C \cdot K$$
Populate the simultaneous equation system $L := \{\langle c(A)\, e_J^{\#} \rangle = 0,\ e_J \in P(E_s)\}$. Let $\mathbf{U}$ be the coefficient matrix of L with respect to C. If $k > m$, then $\operatorname{rank}(\mathbf{U}) \leq m$. If $k = m$, then
$$m = 2\left\lceil \operatorname{rank}(\mathbf{U})/2 \right\rceil$$
and the vector C spans the coefficients of $\mu_A$.
Proof. 
Suppose that $E_s = \operatorname{span}[A] = \{e_1, \ldots, e_s\}$ is of s dimensions. Consider the list of scalar products
$$\langle c(A)\, e_J^{\#} \rangle = \sum_{k=0}^{n} c_k \langle A^k e_J^{\#} \rangle = 0, \qquad e_J \in P(E_s)$$
enumerated by the multi-index J. They result in a simultaneous system of equations for the components of the coefficient vector C. The notation can be expanded to formulate the equations
$$\sum_{k=0}^{n} \sum_{I} c_k a_I^{(k)} \delta_{IJ} = \sum_{k=0}^{n} c_k a_J^{(k)} = 0$$
where $a_J^{(k)}$ denotes the components of the exponentiated multivector. In matrix form, the above system can be represented as $C\,\mathbf{U} = 0$, where $C := \{c_k\}$ is the coefficient vector, $\mathbf{U} := \{u_{ik}\}$, $u_{ik} = a_I^{(k)}$ with $\#(I) = i$, is the coefficient matrix, and $\#(I)$ denotes the enumeration of the multi-index J.
If $k < m$, then only the null vector $C = 0$ will solve the system, so we suppose that $k \geq m$. Observe that if a multivector power does not contain the blade $e_J$, this results in the trivial identity $0 = 0$ and, hence, in a null row in $\mathbf{U}$. Therefore, the above system of equations is underdetermined, and $\det \mathbf{U} = 0$ must hold for a non-trivial solution to exist. Therefore, $\operatorname{rank}(\mathbf{U}) \leq m$.
Finally, if $k = m$, we still obtain an underdetermined system, but $m = 2\lceil \operatorname{rank}(\mathbf{U})/2 \rceil$ by Proposition 11. Then, the null space of $\mathbf{U}$ is one-dimensional and consists of the unnormalized coefficients of the minimal polynomial.    □
The Maxima code implementing the algorithm is presented in Listing A3.
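On a matrix representation, the same idea can be mimicked by stacking vectorized Krylov powers and detecting the first linear dependence. A small illustrative Python helper (names and tolerances are my own choices, not the paper's code):

```python
import numpy as np

def minimal_polynomial(A, tol=1e-8):
    """Return ascending coefficients [c0, ..., 1] of the monic minimal
    polynomial of a square matrix A, found as the first linear dependence
    among the vectorized Krylov powers I, A, A^2, ..."""
    n = A.shape[0]
    powers = [np.eye(n).ravel()]
    P = np.eye(n)
    for k in range(1, n + 1):
        P = P @ A
        powers.append(P.ravel())
        # try to express A^k through the lower powers
        K = np.stack(powers[:k], axis=1)          # columns I .. A^{k-1}
        c, *_ = np.linalg.lstsq(K, -powers[k], rcond=None)
        if np.linalg.norm(K @ c + powers[k]) < tol:
            return np.append(c, 1.0)              # monic, degree k
    raise RuntimeError("unreachable: char poly annihilates A")

A = np.diag([2.0, 2.0, 3.0])
mu = minimal_polynomial(A)   # (v-2)(v-3) = 6 - 5v + v^2
```

The degree of the returned polynomial is then the rank $r(A)$ in the sense of Definition 15 (here 2, smaller than the matrix dimension).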
Remark 4. 
If the matrix rank is determined by direct computation, the result stated in Proposition 12 may not be practical. On the other hand, the computation of the set K can be parallelized, which can lead to time savings. Moreover, there may be more economical algorithms for determining the rank of A.

An Application to Multivector Exponentiation

A particular application of the presented algorithm is the exponentiation of multivectors. The multivector exponent is defined by the infinite series
$$e^{Mt} := \sum_{k=0}^{\infty} \frac{t^k M^k}{k!}$$
where t is a scalar and M a multivector. The Laplace transform action is, therefore,
$$\mathcal{L}\left[ e^{Mt} \right](s) = (s - M)^{-1}$$
Therefore, the exponent can be calculated as the line integral
$$e^{Mt} = \frac{1}{2\pi i} \int_{\mathrm{Br}} e^{st} (s - M)^{-1}\, ds$$
along the Bromwich contour. The integral can be evaluated from the Residue Theorem as
$$e^{Mt} = \sum_{i=1}^{k} \operatorname{Res}\left( e^{st} (s - M)^{-1},\ s = s_i \right),$$
where k is the degree of the minimal polynomial $\mu(s - M)$ and the $s_i$ are its roots. For rational coefficients, in some cases, the denominator can be decomposed into partial fractions. This amounts to finding the factorization of the characteristic polynomial. For clarity of the discussion, let $P_{s-M}(v) = \mu(s - M)$. The roots of the equation $P_{s-M}(v) = 0$ in the s variable should be evaluated at $v = 0$ to yield the poles $s_i$. This amounts to computing the determinant and finding its roots in the Laplace variable:
$$\det(s - M) = 0.$$
In practice, this can be achieved by numerical global root-finding algorithms.
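When the minimal polynomial of $s - M$ has simple roots, the residue sum reduces to a Lagrange-type interpolation over the poles. A numeric sketch under that assumption (helper names are illustrative):

```python
import numpy as np

def expm_via_poles(A, roots, t):
    """Evaluate exp(t*A) as the residue sum over the simple poles s_i of
    (s - A)^{-1}: sum_i exp(s_i t) * prod_{j != i} (A - s_j) / (s_i - s_j)."""
    n = A.shape[0]
    E = np.zeros((n, n), dtype=complex)
    for i, si in enumerate(roots):
        R = np.eye(n, dtype=complex)        # residue projector at s_i
        for j, sj in enumerate(roots):
            if j != i:
                R = R @ (A - sj * np.eye(n)) / (si - sj)
        E += np.exp(si * t) * R
    return E.real

# rotation generator: minimal polynomial v^2 + 1, poles +/- i
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
E = expm_via_poles(A, [1j, -1j], 0.7)
# expected: [[cos 0.7, sin 0.7], [-sin 0.7, cos 0.7]]
```

For the rotation generator this recovers $e^{At} = \cos t\, I + \sin t\, A$, exactly the structure of the residue formula above.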

7. Methods and Implementation

The algorithms were implemented in Maxima, based on the Clifford package, which was first utilized in [4]. The present version of the package is 2.5 and is available for download from a Zenodo repository [5]. The function fadlevicg2cp simultaneously computes the inverse (if it exists) and the characteristic polynomial $p_A(v)$ of a multivector A (Appendix A). The function minpoly2 simultaneously computes the rank of the multivector and its minimal polynomial.
Experiments were performed on a Dell® 64-bit Microsoft Windows 10 Enterprise machine with the following configuration: Intel® Core™ i5-8350U CPU @ 1.70 GHz (1.90 GHz turbo) and 16 GB RAM. The computations were performed using the Clifford package version 2.5 on Maxima version 5.46.0 running on Steel Bank Common Lisp version 2.2.2.

8. Symbolical Experiments

Example 1. 
For $C\ell_{2,0}$ and a multivector $A = a_1 + a_2 e_1 + a_3 e_2 + a_4 e_{12}$, the reduced grade algorithm produces
$$t_1 = 2a_1, \qquad m_1 = a_1 + a_2 e_1 + a_3 e_2 + a_4 e_{12},$$
resulting in $A^{-1} = (a_1 - a_2 e_1 - a_3 e_2 - a_4 e_{12})/(a_1^2 - a_2^2 - a_3^2 + a_4^2)$, and the reduced characteristic polynomial is $\chi_A(v) = a_1^2 - a_2^2 - a_3^2 + a_4^2 - 2a_1 v + v^2$.
Example 2. 
For $C\ell_{1,1}$ and a multivector $A = a_1 + a_2 e_1 + a_3 e_2 + a_4 e_{12}$, the reduced grade algorithm produces
$$t_1 = 2a_1, \qquad m_1 = a_1 + a_2 e_1 + a_3 e_2 + a_4 e_{12},$$
resulting in $A^{-1} = (-a_1 + a_2 e_1 + a_3 e_2 + a_4 e_{12})/(-a_1^2 + a_2^2 - a_3^2 + a_4^2)$, and the reduced characteristic polynomial is $\chi_A(v) = a_1^2 - a_2^2 + a_3^2 - a_4^2 - 2a_1 v + v^2$.
Example 3. 
For $C\ell_{0,2}$ and a multivector $A = a_1 + a_2 e_1 + a_3 e_2 + a_4 e_{12}$, the reduced grade algorithm produces
$$t_1 = 2a_1, \qquad m_1 = a_1 + a_2 e_1 + a_3 e_2 + a_4 e_{12},$$
resulting in $A^{-1} = (a_1 - a_2 e_1 - a_3 e_2 - a_4 e_{12})/(a_1^2 + a_2^2 + a_3^2 + a_4^2)$, and the reduced characteristic polynomial is $\chi_A(v) = a_1^2 + a_2^2 + a_3^2 + a_4^2 - 2a_1 v + v^2$.
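The closed forms of Examples 1–3 are easy to check numerically. The sketch below implements the geometric product of $C\ell_{p,q}$ with the common bitmask encoding of basis blades (an illustrative helper, not the paper's Maxima code) and verifies the $C\ell_{2,0}$ inverse of Example 1 for one non-degenerate choice of coefficients:

```python
def blade_mul(a, b, sig):
    """Multiply basis blades given as bitmasks; sig[i] = +1/-1 is e_{i+1}^2."""
    s = 1
    x = a >> 1
    while x:                       # reordering sign from counting transpositions
        s *= (-1) ** bin(x & b).count("1")
        x >>= 1
    i, common = 0, a & b
    while common:                  # contract repeated generators via the signature
        if common & 1:
            s *= sig[i]
        common >>= 1
        i += 1
    return s, a ^ b

def gp(u, v, sig):
    """Geometric product of multivectors stored as {bitmask: coefficient}."""
    out = {}
    for a, ca in u.items():
        for b, cb in v.items():
            s, m = blade_mul(a, b, sig)
            out[m] = out.get(m, 0.0) + s * ca * cb
    return out

sig = [1, 1]                                         # Cl(2,0)
a1, a2, a3, a4 = 3.0, 1.0, 2.0, 0.5
A    = {0b00: a1, 0b01: a2, 0b10: a3, 0b11: a4}
det  = a1**2 - a2**2 - a3**2 + a4**2
Ainv = {0b00: a1/det, 0b01: -a2/det, 0b10: -a3/det, 0b11: -a4/det}
P = gp(A, Ainv, sig)    # should be the scalar 1
```

The product P collapses to the scalar 1, confirming the closed form whenever the quadratic denominator does not vanish.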
These computations are practically instantaneous on the testing hardware configuration.
Example 4. 
The real matrix representation of a generic multivector A in the quaternion algebra C 0 , 2 is
$$A = \begin{pmatrix} a_1 & -a_2 & -a_3 & -a_4 \\ a_2 & a_1 & -a_4 & a_3 \\ a_3 & a_4 & a_1 & -a_2 \\ a_4 & -a_3 & a_2 & a_1 \end{pmatrix}$$
Suppose that we wish to determine if a particular Hadamard 4 × 4 matrix (discussed, for example, in [20])
A q = q 1 q 2 q 3 q 4 q 2 q 1 q 4 q 3 q 3 q 4 q 1 q 2 q 4 q 3 q 2 q 1
encodes the same algebra. One could proceed as follows. Applying the matrix FVS algorithm to compute the characteristic polynomial of A q , we obtain
$$\chi_{A_q}(v) = \left( q_1^2 + q_2^2 + q_3^2 + q_4^2 - 2 q_1 v + v^2 \right)^2$$
On the other hand, the reduced multivector FVS algorithm applied to A results in the reduced polynomial
χ A ( v ) = a 1 2 + a 2 2 + a 3 2 + a 4 2 2 a 1 v + v 2
Therefore, by simple inspection, one can conclude that A q encodes the same Clifford (i.e., quaternion) algebra, which was also recognized in [20] using the full multiplication table of the matrix algebra. The identification using the multiplication table requires 16 matrix multiplications if the maximal matrix representation is used as was the case in [20], while the multivector FVS algorithm requires only 2 multivector multiplications, as shown above.
In three dimensions, the general element is of the form
a 1 + e 1 a 2 + e 2 a 3 + e 3 a 4 + a 5 e 1 e 2 + a 6 e 1 e 3 + a 7 e 2 e 3 + a 8 e 1 e 2 e 3
Then we have the following maximal real matrix representations:
Example 5 
(Algebra of Physical Space C 3 , 0 ).
A = a 1 a 2 a 3 a 4 a 5 a 6 a 7 a 8 a 2 a 1 a 5 a 6 a 3 a 4 a 8 a 7 a 3 a 5 a 1 a 7 a 2 a 8 a 4 a 6 a 4 a 6 a 7 a 1 a 8 a 2 a 3 a 5 a 5 a 3 a 2 a 8 a 1 a 7 a 6 a 4 a 6 a 4 a 8 a 2 a 7 a 1 a 5 a 3 a 7 a 8 a 4 a 3 a 6 a 5 a 1 a 2 a 8 a 7 a 6 a 5 a 4 a 3 a 2 a 1
Example 6 
(Clifford algebra C 2 , 1 ).
A = a 1 a 2 a 3 a 4 a 5 a 6 a 7 a 8 a 2 a 1 a 5 a 6 a 3 a 4 a 8 a 7 a 3 a 5 a 1 a 7 a 2 a 8 a 4 a 6 a 4 a 6 a 7 a 1 a 8 a 2 a 3 a 5 a 5 a 3 a 2 a 8 a 1 a 7 a 6 a 4 a 6 a 4 a 8 a 2 a 7 a 1 a 5 a 3 a 7 a 8 a 4 a 3 a 6 a 5 a 1 a 2 a 8 a 7 a 6 a 5 a 4 a 3 a 2 a 1
Example 7 
(Clifford algebra C 1 , 2 ).
A = a 1 a 2 a 3 a 4 a 5 a 6 a 7 a 8 a 2 a 1 a 5 a 6 a 3 a 4 a 8 a 7 a 3 a 5 a 1 a 7 a 2 a 8 a 4 a 6 a 4 a 6 a 7 a 1 a 8 a 2 a 3 a 5 a 5 a 3 a 2 a 8 a 1 a 7 a 6 a 4 a 6 a 4 a 8 a 2 a 7 a 1 a 5 a 3 a 7 a 8 a 4 a 3 a 6 a 5 a 1 a 2 a 8 a 7 a 6 a 5 a 4 a 3 a 2 a 1
Example 8 
(Clifford algebra C 0 , 3 ).
A = a 1 a 2 a 3 a 4 a 5 a 6 a 7 a 8 a 2 a 1 a 5 a 6 a 3 a 4 a 8 a 7 a 3 a 5 a 1 a 7 a 2 a 8 a 4 a 6 a 4 a 6 a 7 a 1 a 8 a 2 a 3 a 5 a 5 a 3 a 2 a 8 a 1 a 7 a 6 a 4 a 6 a 4 a 8 a 2 a 7 a 1 a 5 a 3 a 7 a 8 a 4 a 3 a 6 a 5 a 1 a 2 a 8 a 7 a 6 a 5 a 4 a 3 a 2 a 1
Example 9. 
Consider C 3 , 0 . Let
A = a 1 + e 1 a 2 + e 2 a 3 + e 3 a 4 + a 5 e 12 + a 6 e 13 + a 7 e 23 + a 8 e 123
Then the application of the reduced FVS algorithm yields
A 1 = S + V + B V + Q Δ
where the determinant is given by
Δ = a 1 4 2 a 1 2 a 2 2 + a 2 4 2 a 1 2 a 3 2 + 2 a 2 2 a 3 2 + a 3 4 2 a 1 2 a 4 2 + 2 a 2 2 a 4 2 + 2 a 3 2 a 4 2 + a 4 4 + 2 a 1 2 a 5 2 2 a 2 2 a 5 2 2 a 3 2 a 5 2 + 2 a 4 2 a 5 2 + a 5 4 8 a 3 a 4 a 5 a 6 + 2 a 1 2 a 6 2 2 a 2 2 a 6 2 + 2 a 3 2 a 6 2 2 a 4 2 a 6 2 + 2 a 5 2 a 6 2 + a 6 4 + 8 a 2 a 4 a 5 a 7 8 a 2 a 3 a 6 a 7 + 2 a 1 2 a 7 2 + 2 a 2 2 a 7 2 2 a 3 2 a 7 2 2 a 4 2 a 7 2 + 2 a 5 2 a 7 2 + 2 a 6 2 a 7 2 + a 7 4 8 a 1 a 4 a 5 a 8 + 8 a 1 a 3 a 6 a 8 8 a 1 a 2 a 7 a 8 + 2 a 1 2 a 8 2 + 2 a 2 2 a 8 2 + 2 a 3 2 a 8 2 + 2 a 4 2 a 8 2 2 a 5 2 a 8 2 2 a 6 2 a 8 2 2 a 7 2 a 8 2 + a 8 4
and 
S = a 1 3 a 1 a 2 2 a 1 a 3 2 a 1 a 4 2 + a 1 a 5 2 + a 1 a 6 2 + a 1 a 7 2 2 a 4 a 5 a 8 + 2 a 3 a 6 a 8 2 a 2 a 7 a 8 + a 1 a 8 2
While the vector part is given by
V = e 1 e 2 e 3 a 1 2 a 2 + a 2 3 + a 2 a 3 2 + a 2 a 4 2 a 2 a 5 2 a 2 a 6 2 + 2 a 4 a 5 a 7 2 a 3 a 6 a 7 + a 2 a 7 2 2 a 1 a 7 a 8 + a 2 a 8 2 a 1 2 a 3 + a 2 2 a 3 + a 3 3 + a 3 a 4 2 a 3 a 5 2 2 a 4 a 5 a 6 + a 3 a 6 2 2 a 2 a 6 a 7 a 3 a 7 2 + 2 a 1 a 6 a 8 + a 3 a 8 2 a 1 2 a 4 + a 2 2 a 4 + a 3 2 a 4 + a 4 3 + a 4 a 5 2 2 a 3 a 5 a 6 a 4 a 6 2 + 2 a 2 a 5 a 7 a 4 a 7 2 2 a 1 a 5 a 8 + a 4 a 8 2
the bi-vector part is given by
B V = e 12 e 23 e 13 a 1 2 a 5 + a 2 2 a 5 + a 3 2 a 5 a 4 2 a 5 a 5 3 + 2 a 3 a 4 a 6 a 5 a 6 2 2 a 2 a 4 a 7 a 5 a 7 2 + 2 a 1 a 4 a 8 + a 5 a 8 2 2 a 3 a 4 a 5 a 1 2 a 6 + a 2 2 a 6 a 3 2 a 6 + a 4 2 a 6 a 5 2 a 6 a 6 3 + 2 a 2 a 3 a 7 a 6 a 7 2 2 a 1 a 3 a 8 + a 6 a 8 2 2 a 2 a 4 a 5 + 2 a 2 a 3 a 6 a 1 2 a 7 a 2 2 a 7 + a 3 2 a 7 + a 4 2 a 7 a 5 2 a 7 a 6 2 a 7 a 7 3 + 2 a 1 a 2 a 8 + a 7 a 8 2
and the pseudoscalar part by
Q = I 2 a 1 a 4 a 5 2 a 1 a 3 a 6 + 2 a 1 a 2 a 7 a 1 2 a 8 a 2 2 a 8 a 3 2 a 8 a 4 2 a 8 + a 5 2 a 8 + a 6 2 a 8 + a 7 2 a 8 a 8 3
The inverse exists if the determinant Δ 0 .
Up to sign permutations, the above results hold also for C 2 , 1 , C 1 , 2 , and  C 0 , 3 but are not given in view of space limitations.
Example 10. 
Consider C 3 , 0 and the multivector A = 4 / 3 e 1 + 2 / 3 I . Then the Laplace transform is
C = s + 4 / 3 e 1 2 / 3 I 1
The rank of the multivector C is 4 as computed from the matrix
U = 1 s 1 3 4 + 3 s 2 4 s + s 3 1 81 112 + 648 s 2 + 81 s 4 0 4 3 8 3 s 1 27 16 + 108 s 2 1 27 64 s + 144 s 3 0 0 16 9 16 3 s 1 27 128 + 288 s 2 0 2 3 4 3 s 1 27 88 + 54 s 2 1 27 352 s + 72 s 3
The minimal polynomial is
81 μ ( v ) = ( 400 216 s 2 + 81 s 4 + ( 432 s 324 s 3 ) v + ( 216 + 486 s 2 ) v 2 324 s v 3 + 81 v 4 )
Then C can be explicitly computed in four steps to yield
C = 3 80 e 1 + 40 I 36 s 48 s e 23 36 s 2 e 1 + 18 s 2 I + 27 s 3 20 24 s + 9 s 2 20 + 24 s + 9 s 2
The partial fraction decomposition is
C = 1 2 12 12 e 1 6 I + 6 e 23 + 9 + 9 e 1 s 20 24 s + 9 s 2 + 1 2 12 + 12 e 1 + 6 I + 6 e 23 + 9 + 9 e 1 s 20 + 24 s + 9 s 2
The poles are located at
s 1 , 2 = 2 3 ( 2 ± i ) , s 3 , 4 = 2 3 ( 2 ± i )
Therefore, the exponent is
e A t = 1 2 1 + e 1 cos 2 t 3 + I + e 23 sin 2 t 3 e 4 t 3 + 1 2 1 e 1 cos 2 t 3 + I e 23 sin 2 t 3 e 4 t 3

9. Numerical Experiments

Note that the trivial last steps will be omitted. To demonstrate the utility of the FVS algorithm, we will now follow some high-dimensional numerical examples. Examples for higher-dimensional algebras are not particularly instructive as they result in very long expressions. These can nevertheless be useful for hard-coding formulas in particular niche applications.
Example 11. 
In C 2 , 2 , let A = 1 + e 1 + e 134 2 e 23 . Let B = e 134 , C = e 123
t 1 = 4 , m 1 = 1 + e 1 + B 2 e 23 t 2 = 2 , m 2 = 1 2 e 1 4 C 2 B + 4 e 23 + 2 e 34 t 3 = 12 , m 3 = 9 + 3 e 1 + 4 C B + 2 e 23 2 e 34
so that
A 1 = 1 + e 1 + 4 3 C 1 3 B + 2 3 e 23 2 3 e 34
The reduced characteristic polynomial is χ A ( v ) = 3 + 12 v 2 v 2 4 v 3 + v 4 and is also minimal.
Example 12. 
Let us compute a rational example in C 2 , 5 . To avoid cumbersome expressions, let A = 1 2 e 15 + 5 e 134 = 1 2 B + 5 C , where B : = e 15 and C : = e 134 .
Then span [ A ] = { e 1 , e 3 , e 4 , e 5 } and for the maximal representation we have k = 2 4 = 16 steps:
t 1 = 16 , m 1 = 15 + 5 C 2 B ; t 2 = 288 , m 2 = 252 70 C + 28 B ; t 3 = 2912 , m 3 = 2366 + 1190 C 476 B ; t 4 = 29456 , m 4 = 22092 10640 C + 4256 B ; t 5 = 213696 , m 5 = 146916 + 99820 C 39928 B ; t 6 = 1509760 , m 6 = 943600 634760 C + 253904 B ; t 7 = 8250496 , m 7 = 4640904 + 4083240 C 1633296 B ; t 8 = 43581024 , m 8 = 21790512 19121280 C + 7648512 B ; t 9 = 181510912 , m 9 = 79411024 + 89831280 C 35932512 B ; t 10 = 730723840 , m 10 = 274021440 307223840 C + 122889536 B ; t 11 = 2275435008 , m 11 = 711073440 + 1062883360 C 425153344 B ; t 12 = 6900244736 , m 12 = 1725061184 2492483840 C + 996993536 B ; t 13 = 15007376384 , m 13 = 2813883072 + 6132822080 C 2453128832 B ; t 14 = 32653412352 , m 14 = 4081676544 7936593280 C + 3174637312 B ; t 15 = 39909726208 , m 15 = 2494357888 + 12471789440 C 4988715776 B .
Therefore,
A 1 = 1 5 C + 2 B / 22
and χ A ( v ) = ( 22 2 v + v 2 ) 8 . The evaluation takes 0.0469 s using 12.029 MB memory on Maxima. On the other hand, the reduced algorithm will run in k = 2 4 / 2 = 4 steps:
t 1 = 4 , m 1 = 1 + 5 C 2 B ; t 2 = 48 , m 2 = 24 10 C + 4 B ; t 3 = 88 , m 3 = 66 + 110 C 44 B ;
and χ A ( v ) = 484 88 v + 48 v 2 4 v 3 + v 4 = ( 22 2 v + v 2 ) 2 . Here, the evaluation takes 0.0156 s using 2.512 MB memory on Maxima. Note that in this case, det A = A A = 22 . Therefore, A 1 = A / 22 .
Example 13. 
This example was presented in [21]. Consider the algebra C 5 , 0 and define the multivector A = 1 + 2 e 1 + 3 e 23 + 4 e 2345 . The reduced algorithm will run in k = 2 3 = 8 steps. Let B = e 2345 , C = e 123 and D = e 145 . The calculation proceeds as
t 1 = 8 , m 1 = 1 + 2 e 1 + 3 e 23 + 4 B ; t 2 = 16 , m 2 = 4 12 e 1 + 12 C + 16 I 18 e 23 24 B 24 e 45 ; t 3 = 208 , m 3 = 78 8 e 1 60 C 80 I 144 D + 66 e 23 112 B + 120 e 45 ; t 4 = 1064 , m 4 = 532 + 112 e 1 + 624 C 768 I + 576 D 144 e 23 + 608 B 96 e 45 ; t 5 = 5792 , m 5 = 3620 3768 e 1 1632 C + 2624 I + 192 D + 3084 e 23 + 912 B 192 e 45 ; t 6 = 20416 , m 6 = 15312 + 7280 e 1 7536 C 10048 I 1536 D 5928 e 23 3104 B 14880 e 45 ; t 7 = 28608 , m 7 = 25032 96 e 1 + 8592 C + 8256 I + 28992 D + 53832 e 23 47424 B + 15072 e 45
The inverse is
A 1 = 1 14790 149 4 e 1 + 358 e 123 + 344 e 12345 + 1208 e 145 + 2243 e 23 1976 e 2345 + 628 e 45
and the reduced characteristic polynomial is
χ A ( v ) = 354960 28608 v + 20416 v 2 5792 v 3 + 1064 v 4 + 208 v 5 16 v 6 8 v 7 + v 8
which is also minimal. The reduced rank matrix is
U = 1 1 12 34 276 1604 46608 303176 2918256 0 2 4 56 208 4168 27056 403744 2717312 0 3 6 162 624 6228 31176 74232 34944 0 0 24 72 1056 4800 29664 141792 585984 0 0 12 36 1104 5280 32112 151536 583296 0 0 0 144 576 6720 34560 81984 32256 0 4 8 16 32 4496 27232 396224 2660608 0 0 16 48 128 960 46784 313152 2966528
which is of rank 8.
Example 14. 
This example was presented in [19]. In  C 4 , 0 , define A : = 1 + e 1 + 3 e 23 e 24 . Then the inverse computes in n = 4 steps:
t 1 = 4 , m 1 = 1 + e 1 + 3 e 23 e 24 ; t 2 = 24 , m 2 = 12 2 e 1 + 6 e 123 2 e 124 6 e 23 + 2 e 24 ; t 3 = 40 , m 3 = 30 10 e 1 6 e 123 + 2 e 124 + 36 e 23 12 e 24 ; t 4 = 140 , m 4 = 0 ,
resulting in
A 1 = 1 70 5 + 5 e 1 + 3 e 123 e 124 18 e 23 + 6 e 24
The characteristic polynomial is
χ A ( v ) = 140 40 v + 24 v 2 4 v 3 + v 4 = ( 10 + v 2 ) ( 14 4 v + v 2 )
which is also minimal. Therefore, the rank of the multivector is 4.
The inverse can be computed alternatively in the following way. Let
B : = A A ^ = 10 + 6 e 23 2 e 24 , B = 10 6 e 23 + 2 e 24 , Δ = B B = 140
Therefore, the inverse is given by the formula
A 1 = A ^ ( A A ^ ) Δ
in accordance with the above result. This is an alternative to Equation (22).
Example 15. 
Consider C 5 , 2 and let A : = 1 e 2 + I . The full-grade algorithm takes 128 steps and will not be illustrated due to space limitations. The reduced-grade algorithm can be illustrated as follows. Let C = e 134567 . Then
t 1 = 16 , m 1 = 1 e 2 + I ; t 2 = 120 , m 2 = 15 + 14 e 2 14 I + 2 C ; t 3 = 560 , m 3 = 105 89 e 2 + 93 I 26 C ; t 4 = 1836 , m 4 = 459 + 340 e 2 388 I + 156 C ; t 5 = 4560 , m 5 = 1425 881 e 2 + 1145 I 572 C ; t 6 = 9064 , m 6 = 3399 + 1682 e 2 2562 I + 1454 C ; t 7 = 14960 , m 7 = 6545 2529 e 2 + 4557 I 2790 C ; t 8 = 20886 , m 8 = 10443 + 3096 e 2 6648 I + 4296 C ; t 9 = 24880 , m 9 = 13995 3051 e 2 + 8091 I 5448 C ; t 10 = 25480 , m 10 = 15925 + 2386 e 2 8242 I + 5694 C ; t 11 = 22416 , m 11 = 15411 1475 e 2 + 7007 I 4934 C ; t 12 = 16716 , m 12 = 12537 + 596 e 2 4932 I + 3548 C ; t 13 = 10480 , m 13 = 8515 35 e 2 + 2795 I 1980 C ; t 14 = 5400 , m 14 = 4725 50 e 2 1150 I + 850 C ; t 15 = 2000 , m 15 = 1875 + 125 e 2 + 375 I 250 C ,
resulting in A 1 = ( 1 e 2 3 I + 2 C ) / 5 . The reduced characteristic polynomial can factorize as
χ A ( v ) = ( 5 4 v + 6 v 2 4 v 3 + v 4 ) 2 = ( 1 + v 2 ) 4 ( 5 4 v + v 2 ) 4
This is an indication that the rank of the multivector is lower, as will be demonstrated below.
Example 16. 
We use the same data A = 1 e 2 + I in C 5 , 2 to compute the rank according to Proposition 12. The reduced (with zero rows removed) rank matrix is
U T = 1 0 0 0 1 1 0 0 1 2 2 2 1 1 6 5 3 4 12 12 19 19 20 21 59 58 22 22 139 139 14 15 263 264 168 168 359 359 600 599 119 118 1558 1558 1321 1321 3234 3235 5877 5876 5148 5148 16901 16901 4420 4419 38221 38222 8062 8062 68381 68381 54346 54345 82417 82416 177072 177072
which is of rank 4. Direct computation of the inverse results in
t 1 = 4 , m 1 = 1 e 2 + I ; t 2 = 6 , m 2 = 3 + 2 e 2 2 I + 2 C ; t 3 = 4 , m 3 = 3 + e 2 + 3 I 2 C
where, as before, C = e 134567 and A 1 = 1 e 2 + 2 C 3 I / 5 . The determinant det A can be computed by the sequence of operations B = A A ^ = 1 2 I , followed by det A = B B = 5 . This allows for writing the simple formula
A 1 = A ^ ( A A ^ ) / 5
Example 17. 
We use A : = 1 e 2 + e 3 + e 13456 residing in C 5 , 2 to compute the rank according to Proposition 12. The span is a six-dimensional vector space— span [ A ] = { e 1 , e 2 , e 3 , e 4 , e 5 , e 6 } . The reduced (with zero rows removed) rank matrix is
U = 1 1 2 4 4 4 40 160 496 0 1 2 4 8 12 8 32 192 0 0 2 2 0 12 56 184 512 0 0 2 6 16 40 88 168 256 0 0 0 2 8 24 64 152 320 0 1 2 6 16 36 72 120 128
which is of rank 5.
The minimal polynomial is computed as μ ( v ) = 4 + 4 v 2 4 v 3 + v 4 and the inverse is
A 1 = 1 2 e 3 + e 12456 e 13456 e 1456
In this case, the determinant can be computed by the sequence of steps
B = A A , det A = B B ^ = 4
Therefore, the inverse can be computed by the formula
A 1 = A ( A A ) ^ / 4
in an obvious manner.

10. Discussion

Computation of inverses of multivectors has drawn continuous attention in the literature as the problem has only been gradually solved [10,21,22,23]. In order to compute the inverse of a multivector, previous contributions used series of automorphisms of special types discussed in Section 2.3. This allows one to write basis-free formulas with increasing complexity.
From Table 1, it can be concluded that the low-dimensional formulas reported in the literature are optimal in terms of the number of Clifford multiplications. It is also apparent that looking for specific formulas of the general inverse element for higher-dimensional Clifford algebras would offer little immediate insight.
The maximal matrix algebra construction exhibited in the present paper allows for systematic translation of matrix-based algorithms to Clifford algebra simultaneously allowing for their direct verification. For example, future work could focus on proving the FVS algorithm entirely in the language of Clifford algebra in line with [19]. Another possible application is deriving formulas for exponents of multivectors.
The advantage of the multivector FVS algorithm is its simplicity of implementation. This can be beneficial for purely numerical applications as it involves only Clifford multiplications followed by taking scalar parts of multivectors, which can be encoded as the first member of an array. The Clifford multiplication computation can be reduced to O ( N log N ) operations, since it involves the sorting of a joined list of algebra generators. On the other hand, the FVS algorithm does not ensure optimality of the computation but nevertheless provides a certificate of existence of an inverse. Therefore, optimized algorithms can be introduced for particular applications, i.e.,  Space–Time Algebra C 1 , 4 , Projective Geometric Algebra C 3 , 0 , 1 , Conformal Geometric Algebra C 4 , 1 , etc. As a side product, the algorithm can compute the characteristic polynomial of a general multivector and, hence, also its determinant without any resort to a matrix representation. This could be used, for example, for the computation of a multivector resolvent or some other analytical functions.
One of the main applications of the present algorithms could be envisioned in Finite Element Modelling where a geometric algebra approach would improve the efficiency and accuracy of calculations by providing a more compact representation of vectors, tensors, and geometric operations. This can lead to faster and more accurate simulations of elastic deformations.

Funding

The present work was funded by the European Union’s Horizon Europe program under grant agreement VIBraTE, 101086815.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The author declares that the data supporting the findings of this study are available within the paper and the Zenodo data repository. The studied examples can be downloaded from the Zenodo repository and it includes the file climatrep.mac, which implements different instances of the FVS algorithm [24].

Conflicts of Interest

The author declares no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
FVSFaddeev–LeVerrier–Souriau

Appendix A. Program Code

The Clifford package can be downloaded from a Zenodo repository [5].
Listing A1. Matrix representation code.
Mathematics 13 01106 i0a1a
Mathematics 13 01106 i0a1b
Listing A2. FVS algorithm implementation in Maxima based on the Clifford package.
Mathematics 13 01106 i0a2
Listing A3. Minimal polynomial computation.
Mathematics 13 01106 i0a3a
Mathematics 13 01106 i0a3b

References

  1. Hitzer, E.; Kamarianakis, M.; Papagiannakis, G.; Vasik, P. Survey of new applications of geometric algebra. Math. Methods Appl. Sci. 2024, 47, 11368–11384. [Google Scholar] [CrossRef]
  2. Ablamowicz, R.; Fauser, B. Mathematics of CLIFFORD—A Maple package for Clifford and Grassmann algebras. Adv. Appl. Clifford Algebr. 2002, 15, 157–181. [Google Scholar]
  3. Sangwine, S.J.; Hitzer, E. Clifford Multivector Toolbox (for MATLAB). Adv. Appl. Clifford Algebr. 2016, 27, 539–558. [Google Scholar] [CrossRef]
  4. Prodanov, D.; Toth, V.T. Sparse Representations of Clifford and Tensor Algebras in Maxima. Adv. Appl. Clifford Algebras 2016, 27, 539–558. [Google Scholar] [CrossRef]
  5. Prodanov, D. Clifford Maxima Package v 2.5.4. Available online: https://zenodo.org/records/8205828 (accessed on 15 January 2025).
  6. Prodanov, D. Algorithmic Computation of Multivector Inverses and Characteristic Polynomials in Non-degenerate Clifford Algebras. In Advances in Computer Graphics; Springer Nature: Cham, Switzerland, 2023; pp. 379–390. [Google Scholar] [CrossRef]
  7. Hestenes, D.; Sobczyk, G. Clifford Algebra to Geometric Calculus; Springer: Berlin/Heidelberg, Germany, 1984; p. 336. [Google Scholar]
  8. Ablamowicz, R.; Parra, J.; Lounesto, P. Clifford Algebras with Numeric and Symbolic Computations; Birkhäuser: Basel, Switzerland, 1996; p. 340. [Google Scholar]
  9. Renaud, P. Clifford Algebra Lecture Notes on Applications in Physics; HAL Open Science: Lyon, France, 2020; p. 253. [Google Scholar]
  10. Hitzer, E.; Sangwine, S. Multivector and multivector matrix inverses in real Clifford algebras. Appl. Math. Comput. 2017, 311, 375–389. [Google Scholar] [CrossRef]
  11. Shirokov, D.S. Clifford algebras and their applications to Lie groups and spinors. In Proceedings of the Nineteenth International Conference on Geometry, Integrability and Quantization, Varna, Bulgaria, 2–7 June 2017; Volume 19, pp. 11–53. [Google Scholar] [CrossRef]
  12. Garibaldi, S. The Characteristic Polynomial and Determinant Are Not Ad Hoc Constructions. Am. Math. Mon. 2004, 111, 761. [Google Scholar] [CrossRef]
  13. Rickards, J. When Is a Polynomial a Composition of Other Polynomials? Am. Math. Mon. 2011, 118, 358. [Google Scholar] [CrossRef]
  14. MacDonald, A. Linear and Geometric Algebra; CreateSpace Independent Publishing Platform: Scotts Valley, CA, USA, 2010; p. 223. [Google Scholar]
  15. Lundholm, D. Geometric (Clifford) Algebra and Its Applications. Master’s Thesis, KTH, Stockholm, Sweden, 2006. [Google Scholar] [CrossRef]
  16. Shirokov, D. Concepts of trace, determinant and inverse of Clifford algebra elements. Prog. Anal. 2011, 1, 187–194. [Google Scholar]
  17. Dadbeh, P. Inverse and Determinant in 0 to 5 Dimensional Clifford Algebra. arXiv 2011, arXiv:1104.0067. [Google Scholar] [CrossRef]
  18. Faddeev, D.K.; Sominskij, I.S. Sbornik Zadatch po Vyshej Algebre; Nauka: Moscow/Leningrad, Russia, 1949. [Google Scholar]
  19. Acus, A.; Dargys, A. The characteristic polynomial in calculation of exponential and elementary functions in Clifford algebras. Math. Methods Appl. Sci. 2022. [Google Scholar] [CrossRef]
  20. Petoukhov, S.V. Symmetries of the genetic code, hypercomplex numbers and genetic matrices with internal complementarities. Symmetry Cult. Sci. 2012, 23, 225–448. [Google Scholar]
  21. Acus, A.; Dargys, A. The Inverse of a Multivector: Beyond the Threshold p + q = 5. Adv. Appl. Clifford Algebr. 2018, 28, 65. [Google Scholar] [CrossRef]
  22. Hitzer, E.; Sangwine, S.J. Construction of Multivector Inverse for Clifford Algebras Over 2m + 1 – Dimensional Vector Spaces from Multivector Inverse for Clifford Algebras Over 2m-Dimensional Vector Spaces. Adv. Appl. Clifford Algebr. 2019, 29, 29. [Google Scholar] [CrossRef]
  23. Shirokov, D.S. On computing the determinant, other characteristic polynomial coefficients, and inverse in Clifford algebras of arbitrary dimension. Comp. Appl. Math. 2021, 40, 173. [Google Scholar] [CrossRef]
  24. Prodanov, D. Examples for CGI2023. Available online: https://zenodo.org/records/10327066 (accessed on 15 January 2025).
Table 1. Number of steps of reduced-grade FVS algorithm.
Table 1. Number of steps of reduced-grade FVS algorithm.
(sub)-space dimensions12345678
maximal number of steps 2 s / 2 2244881616
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Prodanov, D. Computation of Minimal Polynomials and Multivector Inverses in Non-Degenerate Clifford Algebras. Mathematics 2025, 13, 1106. https://doi.org/10.3390/math13071106

AMA Style

Prodanov D. Computation of Minimal Polynomials and Multivector Inverses in Non-Degenerate Clifford Algebras. Mathematics. 2025; 13(7):1106. https://doi.org/10.3390/math13071106

Chicago/Turabian Style

Prodanov, Dimiter. 2025. "Computation of Minimal Polynomials and Multivector Inverses in Non-Degenerate Clifford Algebras" Mathematics 13, no. 7: 1106. https://doi.org/10.3390/math13071106

APA Style

Prodanov, D. (2025). Computation of Minimal Polynomials and Multivector Inverses in Non-Degenerate Clifford Algebras. Mathematics, 13(7), 1106. https://doi.org/10.3390/math13071106

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop