1. Introduction
Clifford algebras provide natural generalizations of complex, dual, and split-complex (or hyperbolic) numbers into the concept of Clifford numbers, i.e., general multivectors. The power of Clifford, or geometric, algebra lies in its ability to represent geometric operations in a concise and elegant manner. The development of Clifford algebras was based on the insights of Hamilton, Grassmann, and Clifford from the 19th century. After a hiatus lasting many decades, Clifford geometric algebra experienced a renaissance with the advent of contemporary computer algebra systems. There are multiple actively researched applications in computer-aided design (CAD), computer vision and robotics, protein folding, neural networks, modern differential geometry, genetics, and mathematical physics. Readers are directed to Hitzer et al. for a recent survey of such applications [1].
As its main contribution, this article demonstrates an algorithm for multivector inversion based on the Faddeev–LeVerrier–Souriau (FVS) algorithm. The algorithm is implemented in the computer algebra system Maxima (version 5.37 or greater) using the Clifford package [4,5]. The multivector FVS algorithm was presented in a preliminary form at the Computer Graphics International conference, CGI 2023, Shanghai, 28 August–1 September 2023 [6]. Compared to the initial presentation, the current paper introduces the apparatus of minimal polynomials and the related concept of multivector rank as central computational tools.
Unlike the original FVS algorithm, which computes the characteristic polynomial and has a fixed number of steps, the present Clifford FVS algorithm involves only Clifford multiplications and subtractions of scalar parts and has a variable number of steps, depending on the spanning subspace of the multivector. The correctness of the algorithm is proven using an algorithmic, constructive representation of a multivector in matrix algebra over the reals, but it by no means depends on such a representation. The present FVS algorithm is in fact a proof certificate for the existence of an inverse. To the best of the present author’s knowledge, the FVS algorithm has not been used systematically to exhibit multivector inverses.
This paper is organized as follows. Section 2 introduces the notation. Section 3 discusses the indicial representation of the algebra. Section 4 exhibits a real matrix representation of the algebra. Section 5 discusses the multivector inverse and derives the FVS multivector algorithm. Section 6 introduces the notion of rank of a multivector. Section 7 demonstrates the algorithm. Section 10 concludes the paper with a discussion.
2. Notation and Preliminaries
2.1. Notation
denotes a Clifford algebra of order n but with an unspecified signature. Clifford multiplication is denoted by a simple juxtaposition of symbols. Algebra generators will be indexed by Latin letters. Multi-indices will be considered as index lists and not as sets and will be denoted with capital letters. The operation of taking the k-grade part of an expression will be denoted by 〈.〉k and, in particular, the scalar part will be denoted by 〈.〉0. The symmetric set difference is denoted by Δ. Matrices will be indicated with bold capital letters, while matrix entries will be indicated by lowercase letters. The degree of the polynomial P will be denoted as .
2.2. General Definitions
Definition 1. Let with be given integers and let denote the non-degenerate real Clifford algebra with signature and ordered sequence of generators . That is, is a real associative algebra with unit 1 generated freely by its elements which satisfy the relations It will be assumed that there is an ordering relation ≺, such that for two natural numbers, . The extended basis set of the algebra will be defined as the ordered power set of all generators and their irreducible products.
Definition 2 (Scalar product). The scalar product of the blades A and B will be denoted by ∗ as
and extended by linearity to the entire algebra. A multivector will be written as , where J is a multi-index such that . In other words, J is a subset of the power set of the first n natural numbers . The maximal grade of A will be denoted by . The pseudoscalar will be denoted by I.
Definition 3 (Span of a multivector). The span of a multivector A, written as the set , is defined as the minimal ordered set of generators for which
holds true. It is clear that , while only for a full-grade, general multivector.
Definition 4. The term general multivector will denote a multivector for which .
Definition 5 (Sparsity property). A (square) matrix has the sparsity property if it has exactly one non-zero element per column and exactly one non-zero element per row. We call such a matrix sparse.
Here, it is useful to recall the definition of a permutation matrix, which is a square binary matrix that has exactly one entry of 1 in each row and each column, with all other entries being 0. Therefore, a sparse matrix in the sense of the above definition generalizes the notion of a permutation matrix.
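As an illustrative check (a Python sketch, not part of the Clifford package; the helper name `is_sparse` is ours), the sparsity property of Definition 5 can be tested by counting non-zero entries per row and per column:

```python
# Check the "sparsity property" of Definition 5: exactly one non-zero
# entry in every row and every column of a square matrix.
# A permutation matrix is the special case where that entry is 1.

def is_sparse(matrix):
    """Return True if the square matrix has exactly one
    non-zero element per row and per column."""
    n = len(matrix)
    rows_ok = all(sum(1 for x in row if x != 0) == 1 for row in matrix)
    cols_ok = all(sum(1 for i in range(n) if matrix[i][j] != 0) == 1
                  for j in range(n))
    return rows_ok and cols_ok

# A signed permutation matrix (entries in {-1, 0, 1}) is sparse:
P = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]
print(is_sparse(P))                 # True
print(is_sparse([[1, 1], [0, 1]]))  # False: first row has two non-zeros
```

In this sense a sparse matrix with entries in {−1, 0, 1} is exactly a signed permutation matrix.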
2.3. Automorphisms
Consider a general multivector M. Most authors define two (principal) automorphisms: inversion,
and grade reversion,
These can be further composed into Clifford conjugation:
Readers are also directed to the works of [7,8,9] for more details. Another less-used automorphism is the Hitzer–Sangwine involution [10]:
for the multi-index J, which has no standard notation.
Lastly, the inverse (or Hermitian) automorphism is defined as follows [11]. Suppose that M is represented as the sum of blades . Then,
by linearity. This also lacks a standard notation. The term “Hermitian” originates from the use of complex-valued Clifford algebras.
2.4. Minimal Polynomials
Definition 6 (Minimal polynomial). The minimal polynomial of a multivector A is the monic polynomial of minimal degree m, such that
for a given multivector A. Evaluation of in the Clifford algebra or in the complex numbers will be assumed, depending on the context of the discussion. Furthermore, we have the following result:
Proposition 1. The minimal polynomial μ is unique for a given multivector .
Proof. The proof is given in [12]. Suppose that f(x) and g(x) are two monic polynomials of minimal degree m such that f(y) = g(y) = 0; then, h(x) = f(x) − g(x) is a polynomial of degree at most m − 1 such that h(y) = 0. Unless h = 0, this contradicts the minimality of m. Therefore, h = 0 and f = g. □
3. Indicial or Sparse Representation
Definition 7 (Indicial map). Define the indicial map , acting on symbols by concatenation (i.e., of a set), such that
where g is set-valued, and let the convention .

Definition 8. Define the argument map acting on symbol compositions as
Let for the atomic symbol f. These definitions allow for stating a very general result about Clifford algebra representations.
Theorem 1 (Indicial representation). For generators , such that , we have the following diagram:
Proof. The right–left action follows from the construction of . The left–right argument action is trivial. We observe that . Trivially, . Let us suppose that . We notice that . Let us suppose that . We notice that and . □
This theorem is used for the reduction of products of blades in the Clifford package, as shown by the author of [4].
4. Clifford Algebra Real Matrix Representation Map
Definition 9 (Scalar product table). Define the diagonal scalar product matrix as
At present, we will focus on non-degenerate Clifford algebras; therefore, the non-zero elements of are valued in the set .
Lemma 1 (Sparsity lemma). If the matrices and are sparse, then so is . Moreover,
(no summation!) for some index q.

Proof. The proof is given in [6] and will not be repeated. □
Lemma 2 (Multiplication Matrix Structure). For the multi-index disjoint sets , the following implications hold for the elements of :
so that for some index μ.
Proof. The proof is given in [6] but will be repeated for completeness of the discussion. Suppose that the ordering of elements is given in the construction of . To simplify the presentation, without loss of generality, suppose that and are some generators. By the properties of , there exists an index such that , for . Choose M such that . Then, for and ,

Suppose that . Multiply together the diagonal nodes in the matrix as follows:

Therefore, and . We observe that there is at least one element (the algebra unity) with the desired property . Further, we observe that there exists a unique index such that . Since is fixed, this implies that . Therefore,
which implies the identity . For higher-graded elements and , we should write instead of . □
Proposition 2. Consider the multiplication table . All elements are different for a fixed row k. All elements are different for a fixed column q.
Proof. The proof is given in [6]. □
Proposition 3. For , the matrix is sparse.
Proof. The proof is given in [6]. □
Proposition 4. For generator elements and , .
Proof. The proof is given in [6]. Consider the basis elements and . By linearity and the homomorphism property of the map (Theorem 2), we have . Therefore, for two vector elements, . □
Proposition 5.
Proof. The proof is given in [6]. Consider the matrix . Then, element-wise. By Lemma 1, is sparse so that . From the structure of for the entries containing the element , we have the equivalence

After multiplication of the equations, we obtain , which simplifies to the first fundamental identity:

We observe that if or , the result follows trivially. In this case, we also have that . Therefore, let us suppose that . We multiply both sides by to obtain . However, the RHS is a diagonal element of ; therefore, by sparsity, it is the only non-zero element in a given row/column, so that . □
Definition 10 (Clifford coefficient map). Define the linear map acting element-wise by the action for .
Define the Clifford coefficient map indexed by as , where is the multiplication table of the extended basis and is the action of the map.
In non-degenerate algebras, the coefficient map can be represented by the formula
using the inverse automorphism.
Definition 11 (Canonical matrix map). Define the map as
where s is the ordinal of and is computed as in Definition 10. The Maxima code implementing the -map is given in Listing A1.
Proposition 6. The π-map is linear.
The proposition follows from the linearity of the coefficient map and of matrix multiplication by a scalar.
Theorem 2 (Semigroup property). Let and be generators of . Then, the following statements hold:
- 1.
The map π is a homomorphism with respect to the Clifford product (i.e., π distributes over the Clifford products): .
- 2.
The set of all matrices forms a multiplicative semigroup.
Proof. The proof is given in [6] but will be repeated for completeness of the discussion. Let . We specialize the result of Lemma 2 for and and observe that for and . In summary, the map acts on according to the following diagram:

Therefore, . Moreover, we observe that .

For the semigroup property, observe that since is linear, it is invertible. Since distributes over the Clifford product, its inverse distributes over matrix multiplication:
However, is closed by construction; therefore, the set is closed under matrix multiplication. □
Proposition 7. Let be a column vector and be the first row of . Then, .
Proof. We observe that by Proposition 3, the only non-zero element in the first row of is . Therefore, . □
Theorem 3 (Complete Real Matrix Representation). Define the map as matrix multiplication with . Then, for a fixed multi-index s, . Further, π is an isomorphism inducing a Clifford algebra representation in the real matrix algebra according to the following diagram:
Proof. The proof is given in [6] but will be repeated for completeness of the discussion. The -map is a linear isomorphism. The set forms a multiplicative group, which is a subset of the matrix algebra . Let and . It is claimed that: by the Sparsity Lemma 1; by Proposition 4; and by Proposition 5. Therefore, the set is an image of the extended basis . Here, denotes the power set of the indices of the algebra generators. □
What is useful about the above representation is the relationship between the trace of the multivector matrix and the scalar part of the pre-image:
for the image of a general multivector element A. This will be used further in the proof of the FVS algorithm.
Remark 1. The above construction works if instead of the entire algebra we restrict a multivector to a sub-algebra of a smaller grade . In this case, we form grade-restricted multiplication matrices and .
Characteristic and Minimal Polynomials of Multivector
Let us first introduce the notion of a multivector characteristic polynomial and contrast it with the previously introduced definition of a minimal polynomial. The distinction between the characteristic and minimal polynomials will become clear from the subsequent results. The coefficients of these polynomials will be assumed to be real numbers, although this is not strictly necessary, as discussed in [12].
Definition 12 (Characteristic polynomial). The characteristic polynomial of the multivector A is the pre-image of the characteristic polynomial of its matrix representation by the map π.
From the properties of the map, it is clear that
so that the above definition is consistent with the usual notion of a characteristic polynomial. Therefore, the notion of an eigenvalue of a multivector can also be defined according to its usual meaning: a member of the list of real or complex numbers , such that the equation holds true for the multivector A.
Remark 2. Defined in this way, the characteristic polynomial is related to the real matrix representation, while the minimal polynomial is representation-independent. Therefore, it is expected that a given matrix algorithm can be translated in a one-to-one manner to a Clifford algebra algorithm via the characteristic polynomial.
Theorem 4. Under the mapping π, the polynomial μ is the minimal polynomial of the complete real matrix representation of the generic multivector A.
Proof. Consider the generic multivector A and fix the value of the coefficients of its minimal polynomial . Then, by the properties of the map, we can compose the following diagram:
□
As an illustration, consider the special case of the algebra pseudoscalar I.
Proposition 8. Suppose that A is a blade. Then, it has a quadratic minimal polynomial of the form
In particular, if , then .

Proof. Consider , with . Without loss of generality, consider the pseudoscalar . The square of can be evaluated as
This follows by reduction, considering that and there are odd powers in the product. Therefore, the minimal polynomial is . By virtue of the same argument, all blades have quadratic minimal polynomials. □
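Proposition 8 can be checked on a small concrete case. In the sketch below (illustrative Python, not from the paper; the matrix J is one possible real representation of a unit bivector such as e1e2), the blade's matrix squares to −I, so its minimal polynomial is the quadratic x² + 1:

```python
# Checking Proposition 8 on a concrete blade: a unit bivector (e.g.
# e1e2 in Cl(2,0)) can be represented by the real matrix J below.
# Since J*J = -I, its minimal polynomial is x^2 + 1, i.e. quadratic.

def matmul(a, b):
    """Plain-Python product of two square matrices."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

J = [[0, -1],
     [1,  0]]

J2 = matmul(J, J)
print(J2)  # [[-1, 0], [0, -1]], i.e. -I, hence mu(x) = x^2 + 1
```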
Lemma 3. Suppose that and are the characteristic and minimal polynomials of the multivector A, respectively, and furthermore, that . Then,
Proof. Suppose that is of degree N and is divided by (of degree m) as
where is a polynomial of degree and is the remainder polynomial of maximal degree (by the definition of ). Then, we evaluate A at any of its roots to obtain . Therefore, , since by hypothesis is the minimal polynomial.

Consider the matrix representation of A: . Suppose that is a root of with multiplicity 1. Then, there exists a non-null eigenvector v such that . Furthermore, by associativity, for any natural number m, we have . Hence, by Theorem 4. Therefore, by the above diagram, , and hence and share the same roots.

Now, suppose that is a root of with multiplicity 1. To establish the validity of Equation (15), we write it first as
where r is the remainder term. However, we have already established that . Then, we use a result on the condition under which one polynomial is a polynomial function of another, as stated in (Proposition 1, [13]); since and share the same roots as established above, they fulfill the technical condition. Furthermore, we observe that , since is monic. To determine n, we observe that the coefficients can be determined by the analytical formula
The series terminates for . The proof of Equation (16) follows by induction, observing that for a differentiable function we have that , while also . □
To optimize the inverse calculation, the following needs to be considered. In the first place, for the case when , one could determine the coefficients of by equating equal powers on both sides of the equation. The exponent n in the formula can be determined by the polynomial greatest common divisor (GCD) algorithm. This is supported “out of the box” by CASs, such as Maxima.
On the other hand, if , then , where corresponds to a zero divisor, since in that case. Suppose that . In such a case, we proceed as follows. Write as
Then, by the chain rule, we obtain
Therefore,
which can be shown by induction, in a similar way as above. The above discussion is also valid for the case when for some natural number q. In such a case, h is computed in a corresponding manner as , so we only need to determine the multiplicity of the root by successive differentiation.
From this discussion, it is apparent that, in the general case, the minimal polynomial cannot be determined solely from the characteristic one.
Based on the concept of the minimal polynomial, the determinant of a multivector can be defined as follows.
Definition 13. Consider a multivector A having a minimal polynomial μ. The determinant is defined as
This definition ensures the uniqueness of the determinant, based on Proposition 1, and its independence of a particular representation. Furthermore, it agrees with the usual definition based on outermorphisms [14]. Indeed, consider a set of linearly independent vectors such that for a certain blade . Form the outer product
Then, for some numerical factor . Therefore, . Furthermore, the map W is linear by the linearity of the outer product and can trivially be extended to an outermorphism as
Therefore, we can identify . On the other hand, , so the minimal polynomial is .
5. Multivector Inverses and the FVS Algorithm
5.1. Low-Dimensional Formulas for the Inverse
The following formulas for the inverse element have been shown to hold [10]: For n = 1, 2,
For ,
For ,
For ,
Other, albeit equivalent, formulas have been derived by different authors [15,16,17].
5.2. The FVS Multivector Inversion Algorithm
Multivector inverses can be computed using the matrix representation and the characteristic polynomial. The matrix inverse is given as , where is the determinant and adj denotes the adjunct. The formula is not practical, because it requires the computation of determinants. By the Cayley–Hamilton Theorem, the inverse of A is a polynomial in A, which can be computed during the last step of the FVS algorithm [18]. This algorithm has a direct representation in terms of Clifford multiplications as follows.
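As a point of reference, the classical matrix form of the FVS iteration can be sketched in a few lines of numpy (an illustrative Python sketch of the standard algorithm; the paper's actual implementation is the Maxima function fadlevicg2cp, and the function name `fvs` here is ours):

```python
import numpy as np

def fvs(A):
    """Faddeev-LeVerrier-Souriau iteration for a real n x n matrix.
    Returns the characteristic-polynomial coefficients [1, c1, ..., cn]
    of p(l) = l^n + c1 l^{n-1} + ... + cn, det(A), and A^{-1}
    (None when A is singular)."""
    n = A.shape[0]
    M = A.astype(float)          # M_1 = A
    coeffs = [1.0]
    B = np.eye(n)                # B_0 = I
    for k in range(1, n + 1):
        c = -float(np.trace(M)) / k      # c_k = -tr(M_k)/k
        coeffs.append(c)
        B_prev, B = B, M + c * np.eye(n)  # B_k = M_k + c_k I
        M = A @ B                         # M_{k+1} = A B_k
    det = (-1) ** n * coeffs[-1]
    # Last step: A B_{n-1} = M_n = -c_n I, hence A^{-1} = -B_{n-1}/c_n
    inv = -B_prev / coeffs[-1] if coeffs[-1] != 0 else None
    return coeffs, det, inv

A = np.array([[1.0, 2.0], [3.0, 4.0]])
coeffs, det, inv = fvs(A)
print(coeffs)   # [1.0, -5.0, -2.0], i.e. p(l) = l^2 - 5l - 2
print(det)      # -2.0
```

The Clifford version of Theorem 5 below replaces the matrix products with Clifford products and the traces with scalar parts.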
Theorem 5 (Reduced-grade FVS algorithm). Suppose that is a multivector of span dimension s, such that . The Clifford inverse, if it exists, can be computed in Clifford multiplication steps from the iteration
until the step where , so that
There exists a polynomial of A of maximal grade k
such that . This polynomial will be called the reduced characteristic polynomial.

Proof. The proof is given in [6]. The proof follows from the homomorphism property of the map. We recall the following statement of the FVS algorithm:
where
The matrix inverse can be computed from the last step of the algorithm as under the obvious restriction .
Therefore, the kth step of the application of leads to
Furthermore, by Equation (11). Moreover, the FVS algorithm terminates with , which corresponds to the limiting case wherever A contains all grades. Here, denotes the square zero matrix of dimension n.
On the other hand, examining the matrix representations of different Clifford algebras, Acus and Dargys [19] make the observation that, according to the Bott periodicity, the number of steps can be reduced to . This can be proven as follows. Consider the isomorphism . Then, if a property holds for an algebra of dimension , it will also hold for the algebra of dimension . Therefore, suppose that for n even the characteristic polynomial is square-free: for some polynomial. We proceed by reduction. For in and , we compute
and a similar result holds also for the other signatures of and can be obtained by direct computation using the Clifford package. Therefore, we have a contradiction, the reduced polynomial is of degree , and the number of steps can be reduced accordingly. In the same way, suppose that n is odd and the characteristic polynomial is square-free. However, for in and , it is established that factorizes as for the polynomial
The above polynomial is factored in due to space limitations. Similar results hold also for the other signatures of and can be obtained by direct computation using the Clifford package. Therefore, we have a contradiction and the reduced polynomial is of degree . Therefore, overall, one can reduce the number of steps to .
As a second case, let be the set of all generators represented in A, and s their count. We compute the restricted multiplication tables and , respectively, and form the restricted map . Then,
Therefore, the FVS algorithm terminates in steps. Observe that . Therefore, will map to by Equation (11). Now, suppose that ; then, for the last step of the algorithm, we obtain
Therefore, by the argument of the previous case, the number of steps can be reduced to . □
Corollary 1. is the minimal polynomial of the generic multivector A of span s. The maximal grade of μ is . The algebra signature uniquely determines .
Proof. The first statement follows from Theorem 4. The second statement follows from Proposition 1. □
Corollary 2. The inverse does not exist if .
Proof. The inverse does not exist if . By Corollary 1, is the minimal polynomial and . □
One could define the multivector adjunct as follows:
Definition 14. The adjunct of the generic multivector A is defined from the minimal polynomial as
where are the coefficients of the minimal polynomial.

Proposition 9. Suppose that is minimal. Then, the adjunct of a multivector A can be computed as
Furthermore, .

Proof. The first part of the statement follows from Theorem 5, considering that in the last step, which corresponds to the minimal polynomial. For the second part of the statement, observe that
□
Remark 3. To avoid possible confusion, the name “reduced characteristic polynomial” will be kept for the minimal polynomial of the algebra (i.e., of the generic multivector of grade n).
Based on Theorem 5, we can tabulate the number of steps necessary for the determinant computation (Table 1) in view of the algebra dimension. The table can be extended in an obvious way to higher-dimensional Clifford algebras. However, here it is truncated to , considering the Bott periodicity.
6. Multivector Rank
The notion of a minimal polynomial allows one to define the rank of a multivector in a related way:
Definition 15 (Rank of a multivector). The rank of the multivector A, denoted , is the degree of its minimal polynomial .
The above proof demonstrates that the degree of the minimal polynomial determines the number of steps (i.e., Clifford multiplications) in the computation. If the degree of the minimal polynomial is smaller than the degree of the characteristic polynomial, some optimization of the algorithm is possible, but then we have to determine the minimal polynomial of a specific, possibly sparser multivector. To achieve this, one could use the following result.
Proposition 10. The determinant of a multivector A (and hence its inverse, if it exists) can be computed in at least steps.
From the above definition, we can conclude that the rank of a multivector is a measure of its complexity. For instance, the scalars are of rank 1, vectors and blades are of rank 2, etc. One could expect that the multivector rank will also play a role in other algorithms.
Proposition 11. A non-scalar multivector A is of even rank.
Proof. Scalars obey , so we exclude them. Suppose, then, that there exists a multivector A of odd rank in for . We proceed by reduction as before. The property should hold for . For , the minimal polynomial is , depending on the signature of the algebra by Proposition 8. Therefore, we have a contradiction. For , we have general quadratic polynomials as per Examples 1–3. Therefore, again we have a contradiction. Hence, generic multivectors are of even rank. □
Denote the inverse of the blade as . Then, conveniently, , where is the Kronecker symbol.
Proposition 12 (Rank algorithm). Consider a multivector A having a span of s dimensions. Define the Krylov exponent sequence as the set
(which can also be thought of as a co-vector of multivectors). Define the trial polynomial
Populate the simultaneous equation system as . Let be the coefficient matrix of L with respect to . If , then . If , then
and the vector spans the coefficients of .

Proof. Suppose that is of s dimensions. Consider the list of scalar products enumerated by the multi-index J. They result in a simultaneous system of equations for the components of the coefficient vector C. The notation can be expanded to formulate the equations
where denotes the components of the exponentiated multivector. In matrix form, the above system can be represented as , where is the coefficient vector, is the coefficient matrix, and denotes the enumeration of the multi-index J.

If , then only the null vector solves the system, so we suppose that . Observe that if a multivector power does not contain the blade , this results in the trivial identity and, hence, in a null row in . Therefore, the above system of equations is underdetermined, and must hold for a non-trivial solution to exist. Therefore, .

Finally, if , we still obtain an underdetermined system, but by Proposition 11. Then, the null space of is one-dimensional and consists of the unnormalized coefficients of the minimal polynomial. □
The Maxima code implementing the algorithm is presented in Listing A3.
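The idea behind the rank algorithm can be sketched on the matrix representation: find the smallest m for which the Krylov sequence I, A, ..., A^m becomes linearly dependent, and read off the minimal-polynomial coefficients from the dependence. This is an illustrative numpy sketch (the helper `minimal_poly` is ours, not the package function minpoly2):

```python
import numpy as np

def minimal_poly(A, tol=1e-9):
    """Return (m, coeffs) where m = rank in the sense of Definition 15
    (degree of the minimal polynomial) and coeffs = [1, b_{m-1}, ..., b_0]
    are the monic minimal-polynomial coefficients of the matrix A.
    m is found as the smallest power expressible in lower powers of A."""
    n = A.shape[0]
    P = np.eye(n)
    cols = [P.ravel()]                   # vectorized powers I, A, A^2, ...
    for m in range(1, n + 1):
        P = P @ A                        # P = A^m
        L = np.stack(cols, axis=1)       # n^2 x m coefficient matrix
        # least-squares attempt: c_0 I + ... + c_{m-1} A^{m-1} = A^m
        c, *_ = np.linalg.lstsq(L, P.ravel(), rcond=None)
        if np.linalg.norm(L @ c - P.ravel()) < tol:
            # monic polynomial x^m - c_{m-1} x^{m-1} - ... - c_0
            return m, np.concatenate(([1.0], -c[::-1]))
        cols.append(P.ravel())
    # unreachable: Cayley-Hamilton guarantees dependence by m = n
    raise RuntimeError("no dependence found")

# diag(1, 1, 2) has minimal polynomial (x - 1)(x - 2) = x^2 - 3x + 2,
# of degree 2 < 3 = degree of the characteristic polynomial
rank, mu = minimal_poly(np.diag([1.0, 1.0, 2.0]))
print(rank, mu)   # 2 [ 1. -3.  2.]
```

As in Remark 4, the powers can be computed in parallel; the sketch recomputes the least-squares fit at each step only for clarity.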
Remark 4. If the matrix rank is determined by direct computation, the result stated in Proposition 12 may not be practical. On the other hand, the computation of the set can be parallelized, which can lead to time savings. Second, there may be more economical algorithms for determining the rank of A.
An Application to Multivector Exponentiation
A particular application of the presented algorithm is the exponentiation of multivectors. The multivector exponent is defined by the infinite series
where t is a scalar and M is a multivector. The Laplace transform action is, therefore,
Therefore, the exponent can be calculated as the line integral
along the Bromwich contour. The integral can be evaluated from the Residue Theorem as
where k is the degree of the minimal polynomial and gives its roots. For rational coefficients, in some cases, the denominator can be decomposed into partial fractions. This amounts to finding the factorization of the characteristic polynomial. For clarity of the discussion, let . The roots of the equation in the s variable should be evaluated at to yield the poles . This amounts to computing the determinant and finding its roots in the Laplace variable.
In practice, this can be achieved by numerical global root-finding algorithms.
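When the minimal polynomial has simple roots, the residue evaluation above collapses to a Lagrange-type interpolation formula, exp(tA) = Σᵢ exp(λᵢt) Πⱼ≠ᵢ (A − λⱼI)/(λᵢ − λⱼ). A hedged numpy sketch on a matrix stand-in for the multivector (the function name and the example matrix are ours), cross-checked against the Taylor series of the exponent:

```python
import numpy as np

def exp_via_residues(A, t, roots):
    """exp(t*A) from the residue formula, assuming the minimal
    polynomial of A has only the simple roots listed in `roots`."""
    n = A.shape[0]
    E = np.zeros((n, n))
    for i, li in enumerate(roots):
        term = np.exp(li * t) * np.eye(n)
        for j, lj in enumerate(roots):
            if j != i:
                term = term @ (A - lj * np.eye(n)) / (li - lj)
        E += term
    return E

# Example: minimal polynomial s^2 + 3s + 2 = (s + 1)(s + 2)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
E = exp_via_residues(A, 0.5, [-1.0, -2.0])

# Cross-check with the truncated Taylor series of exp(tA)
S, term = np.eye(2), np.eye(2)
for k in range(1, 30):
    term = term @ (0.5 * A) / k
    S += term
print(np.allclose(E, S))  # True
```

Repeated roots require the derivative terms of the Residue Theorem, in line with the multiplicity discussion of Section 4.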
7. Methods and Implementation
Algorithms were implemented in Maxima, based on the Clifford package, which was first utilized in [4]. The present version of the package is 2.5 and is available for download from a Zenodo repository [5]. The function fadlevicg2cp simultaneously computes the inverse (if it exists) and the characteristic polynomial of a multivector A (Appendix A). The function minpoly2 simultaneously computes the rank of the multivector and its minimal polynomial.
Experiments were performed on a Dell® 64-bit Microsoft Windows 10 Enterprise machine with the following configuration: Intel® Core™ i5-8350U CPU @ 1.70 GHz (boost 1.90 GHz) and 16 GB RAM. The computations were performed using the Clifford package version 2.5 on Maxima version 5.46.0 using Steel Bank Common Lisp version 2.2.2.
8. Symbolical Experiments
Example 1. For and a multivector , the reduced-grade algorithm produces
resulting in , and the reduced characteristic polynomial is .

Example 2. For and a multivector , the reduced-grade algorithm produces
resulting in , and the reduced characteristic polynomial is .

Example 3. For and a multivector , the reduced-grade algorithm produces
resulting in , and the reduced characteristic polynomial is .

Bespoke computations are practically instantaneous on the testing hardware configuration.
Example 4. The real matrix representation of a generic multivector A in the quaternion algebra is
Suppose that we wish to determine whether a particular Hadamard 4 × 4 matrix (discussed, for example, in [20])
encodes the same algebra. One could proceed as follows. Applying the matrix FVS algorithm to compute the characteristic polynomial of , we obtain
On the other hand, the reduced multivector FVS algorithm applied to A results in the reduced polynomial
Therefore, by simple inspection, one can conclude that encodes the same Clifford (i.e., quaternion) algebra, which was also recognized in [20] using the full multiplication table of the matrix algebra. The identification using the multiplication table requires 16 matrix multiplications if the maximal matrix representation is used, as was the case in [20], while the multivector FVS algorithm requires only 2 multivector multiplications, as shown above. In three dimensions, the general element is of the form
Then we have the following maximal real matrix representations:
Example 5 (Algebra of Physical Space ).
Example 6 (Clifford algebra ).
Example 7 (Clifford algebra ).
Example 8 (Clifford algebra ).
Example 9. Consider . Let
Then the application of the reduced FVS algorithm yields
where the determinant is given by
and
While the vector part is given by
the bi-vector part is given by
and the pseudoscalar part by
The inverse exists if the determinant . Up to sign permutations, the above results hold also for , , and but are not given in view of space limitations.
Example 10. Consider and the multivector . Then the Laplace transform is
The rank of the multivector C is 4, as computed from the matrix
The minimal polynomial is
Then C can be explicitly computed in four steps to yield
The partial fraction decomposition is
The poles are located at
Therefore, the exponent is

9. Numerical Experiments
To demonstrate the utility of the FVS algorithm, we now follow some high-dimensional numerical examples; the trivial last steps will be omitted. Examples for higher-dimensional algebras are not particularly instructive, as they result in very long expressions. These can nevertheless be useful for hard-coding formulas in particular niche applications.
Example 11. In , let . Let , so that
The reduced characteristic polynomial is and is also minimal.

Example 12. Let us compute a rational example in . To avoid cumbersome expressions, let , where and .
Then and for the maximal representation we have steps:
Therefore,
and . The evaluation takes 0.0469 s using 12.029 MB of memory in Maxima. On the other hand, the reduced algorithm runs in steps:
and . Here, the evaluation takes 0.0156 s using 2.512 MB of memory in Maxima. Note that in this case, . Therefore, .

Example 13. This example was presented in [21]. Consider the algebra and define the multivector . The reduced algorithm runs in steps. Let , and . The calculation proceeds as
The inverse is
and the reduced characteristic polynomial is
which is also minimal. The reduced rank matrix is
which is of rank 8.

Example 14. This example was presented in [19]. In , define . Then the inverse computes in steps:
resulting in
The characteristic polynomial is
which is also minimal. Therefore, the rank of the multivector is 4. The inverse can be computed alternatively in the following way. Let
Therefore, the inverse is given by the formula
in accordance with the above result. This is an alternative to Equation (22).

Example 15. Consider and let . The full-grade algorithm takes 128 steps and will not be illustrated due to space limitations. The reduced-grade algorithm can be illustrated as follows. Let . Then
resulting in . The reduced characteristic polynomial can factorize as
This is an indication that the rank of the multivector is lower, as will be demonstrated below.

Example 16. We use the same data in to compute the rank according to Proposition 12. The reduced (with zero rows removed) rank matrix is
which is of rank 4. Direct computation of the inverse results in
where, as before, and . The determinant can be computed by the sequence of operations , followed by . This allows for writing the simple formula

Example 17. We use residing in to compute the rank according to Proposition 12. The span is a six-dimensional vector space: . The reduced (with zero rows removed) rank matrix is
which is of rank 5.
The minimal polynomial is computed as and the inverse is
In this case, the determinant can be computed by the sequence of steps
Therefore, the inverse can be computed by the formula
in an obvious manner.

10. Discussion
Computation of inverses of multivectors has drawn continuous attention in the literature, as the problem has only been gradually solved [10,21,22,23]. In order to compute the inverse of a multivector, previous contributions used series of automorphisms of the special types discussed in Section 2.3. This allows one to write basis-free formulas of increasing complexity.
From Table 1, it can be concluded that the low-dimensional formulas reported in the literature are optimal in terms of the number of Clifford multiplications. It is also apparent that looking for specific formulas of the general inverse element for higher-dimensional Clifford algebras would offer little immediate insight.
The maximal matrix algebra construction exhibited in the present paper allows for the systematic translation of matrix-based algorithms to Clifford algebra, simultaneously allowing for their direct verification. For example, future work could focus on proving the FVS algorithm entirely in the language of Clifford algebra, in line with [19]. Another possible application is deriving formulas for exponents of multivectors.
The advantage of the multivector FVS algorithm is its simplicity of implementation. This can be beneficial for purely numerical applications, as it involves only Clifford multiplications followed by taking the scalar parts of multivectors, which can be encoded as the first member of an array. The Clifford multiplication computation can be reduced to operations, since it involves the sorting of a joined list of algebra generators. On the other hand, the FVS algorithm does not ensure optimality of the computation but nevertheless provides a certificate of existence of an inverse. Therefore, optimized algorithms can be introduced for particular applications, e.g., Space–Time Algebra , Projective Geometric Algebra , Conformal Geometric Algebra , etc. As a by-product, the algorithm can compute the characteristic polynomial of a general multivector and, hence, also its determinant, without any resort to a matrix representation. This could be used, for example, for the computation of a multivector resolvent or some other analytical functions.
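The sorting-based reduction of the Clifford product mentioned above can be sketched as follows (illustrative Python; the helper `blade_product` is ours, not the package's reduction routine): the product of two basis blades reduces to a sign, obtained by counting the transpositions that sort the joined index list, times the blade on the symmetric difference of the index sets, with repeated generators contracted through the signature.

```python
# Product of two basis blades e_I * e_J in a non-degenerate diagonal
# signature: sort the joined index list (each swap of distinct
# generators flips the sign), then contract repeated generators
# using sig[i] = e_i^2 (+1 or -1).

def blade_product(I, J, sig):
    """Multiply basis blades e_I and e_J (strictly increasing index
    tuples). Returns (sign, index tuple of the resulting blade)."""
    idx = list(I) + list(J)
    sign = 1
    # bubble-sort the joined list, flipping the sign per transposition
    for a in range(len(idx)):
        for b in range(len(idx) - 1 - a):
            if idx[b] > idx[b + 1]:
                idx[b], idx[b + 1] = idx[b + 1], idx[b]
                sign = -sign
    # adjacent equal indices contract to their signature square
    out, k = [], 0
    while k < len(idx):
        if k + 1 < len(idx) and idx[k] == idx[k + 1]:
            sign *= sig[idx[k]]
            k += 2
        else:
            out.append(idx[k])
            k += 1
    return sign, tuple(out)

sig = {1: 1, 2: 1, 3: 1}                  # Cl(3,0)
print(blade_product((1, 2), (2,), sig))   # (1, (1,)):  e1e2 * e2 = e1
print(blade_product((2,), (1,), sig))     # (-1, (1, 2)): e2e1 = -e1e2
```

With an efficient sort, this is the O(n log n)-type reduction alluded to in the text; the bubble sort here is chosen only to make the transposition count explicit.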
One of the main applications of the present algorithms could be envisioned in Finite Element Modelling, where a geometric algebra approach would improve the efficiency and accuracy of calculations by providing a more compact representation of vectors, tensors, and geometric operations. This can lead to faster and more accurate simulations of elastic deformations.