# Closed-Form Expressions for the Matrix Exponential

## Abstract


**PACS:** 02.30.Tb; 42.25.Ja; 03.65.Fd

## 1. Introduction

Consider the Hamiltonian $H = k\,\boldsymbol{\sigma}\cdot\mathbf{B}$ that rules the dynamics of a spin-1/2 particle subjected to a magnetic field $\mathbf{B}$. Here, $\boldsymbol{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$ denotes the Pauli spin operator and $k$ is a parameter that provides the above expression with appropriate units. The upsurge of research in several areas of physics, most notably in quantum optics, involving two-level systems has made a Hamiltonian of the above type quite ubiquitous. Indeed, the dynamics of any two-level system is ruled by a Hamiltonian that can be written in such a form. Hence, one often requires an explicit, closed-form expression for quantities such as $\exp(i\alpha\,\mathbf{n}\cdot\boldsymbol{\sigma})$, where $\mathbf{n}$ is a unit vector. This closed-form expression can be obtained as a generalization of Euler's formula $e^{i\alpha} = \cos\alpha + i\sin\alpha$. It reads

$$\exp(i\alpha\,\mathbf{n}\cdot\boldsymbol{\sigma}) = I\cos\alpha + i\,\mathbf{n}\cdot\boldsymbol{\sigma}\,\sin\alpha. \qquad (1)$$

To derive Equation (1), one first writes the power series $\exp A = \sum_k A^k/k!$ for the case $A = i\alpha\,\mathbf{n}\cdot\boldsymbol{\sigma}$. Next, one invokes the following relationship:

$$\sigma_i\sigma_j = \delta_{ij}I + i\epsilon_{ijk}\sigma_k \qquad (2)$$

(summation over repeated indices being understood). Equation (2) implies that $(\mathbf{n}\cdot\boldsymbol{\sigma})^{2m} = I$, and hence $(\mathbf{n}\cdot\boldsymbol{\sigma})^{2m+1} = \mathbf{n}\cdot\boldsymbol{\sigma}$. This allows one to split the power series of $\exp(i\alpha\,\mathbf{n}\cdot\boldsymbol{\sigma})$ in two parts, one constituted by even and the other by odd powers of $i\alpha\,\mathbf{n}\cdot\boldsymbol{\sigma}$:

$$\exp(i\alpha\,\mathbf{n}\cdot\boldsymbol{\sigma}) = I\sum_{m=0}^{\infty}\frac{(-1)^m\alpha^{2m}}{(2m)!} + i\,\mathbf{n}\cdot\boldsymbol{\sigma}\sum_{m=0}^{\infty}\frac{(-1)^m\alpha^{2m+1}}{(2m+1)!} = I\cos\alpha + i\,\mathbf{n}\cdot\boldsymbol{\sigma}\,\sin\alpha.$$

The success of this derivation owes to the special properties of

$\mathbf{n}\cdot\boldsymbol{\sigma}$, as well as to our ability to "re-summate" the series expansion so as to obtain a closed-form expression. There are several other cases [6] in which a relation similar to Equation (1) follows as a consequence of generalizing some properties of the group SU(2) and its algebra to the case SU(N), with N > 2. Central to these generalizations and to their associated techniques are both the Cayley–Hamilton theorem and the closure of the Lie algebra su(N) under commutation and anti-commutation of its elements [6]. The Cayley–Hamilton theorem states that any $n \times n$ matrix $A$ satisfies its own characteristic equation $p(A) = 0$, where $p(\lambda) = \det(\lambda I - A) = \lambda^n + c_{n-1}\lambda^{n-1} + \dots + c_1\lambda + c_0$ is its characteristic polynomial. As a consequence, any power $A^k$, with $k \geq n$, can be written in terms of the matrices $I = A^0, A, \dots, A^{n-1}$. Thus, any infinite series, such as the one corresponding to $\exp A$, may be rewritten in terms of the $n$ powers $A^0, A, \dots, A^{n-1}$. By exploiting this fact one can recover Equation (1). Reciprocally, given $A$, one can construct a matrix $B$ that satisfies $\exp B = A$, as shown by Dattoli, Mari and Torre [2]. These authors used essentially the same tools as we do here and presented some of the results that we will show below, though leaving them in implicit form. They belong to a group that has extensively dealt with our subject matter and beyond it [7], applying the present techniques to cases of current interest [8]. A somewhat different approach was followed by Leonard [9], who related the Cayley–Hamilton theorem to the solution of ordinary differential equations in order to obtain closed expressions for the matrix exponential. This technique applies to all $n \times n$ matrices, including those that are not diagonalizable. Untidt and Nielsen [10] used it when addressing the groups SU(2), SU(3) and SU(4). However, especially for SU(2), Leonard's approach seems unnecessarily involved: there is a trade-off between the wide applicability of a method and its tailoring to a special case. When dealing with diagonalizable matrices, the present approach may prove more useful, since it exploits not only the Cayley–Hamilton theorem but also the diagonalizability of the matrices involved. As a result, we obtain a straightforward way to arrive at closed-form expressions for the matrix exponential. There are certainly other approaches that are either more general [9,11] or better suited to specific cases [12–16], but the present method is especially useful for physical applications.

## 2. Closed Form of the Matrix Exponential via the Solution of Differential Equations

Consider the linear system of differential equations

$$\frac{d\mathbf{x}}{dt} = A\mathbf{x}, \qquad (6)$$

with $\mathbf{x} = (x_1, \dots, x_n)^T$ and $A$ a constant, $n \times n$ matrix. The matrix exponential appears in the solution of Equation (6), when we write it as $\mathbf{x}(t) = e^{At}\mathbf{x}(0)$. By successive derivation of this exponential we obtain $D^k e^{At} = A^k e^{At}$, where $D \equiv d/dt$. Hence, $p(D)e^{At} \equiv (D^n + c_{n-1}D^{n-1} + \dots + c_1 D + c_0)e^{At} = p(A)e^{At} = 0$, on account of $p(A) = 0$, i.e., the Cayley–Hamilton theorem. Now, as already noted, this implies that $e^{At}$ can be expressed in terms of $A^0, A, \dots, A^{n-1}$. Let us consider the matrix $M(t) := \sum_{k=0}^{n-1} y_k(t)A^k$, with the $y_k(t)$ being $n$ independent solutions of the differential equation $p(D)y(t) = 0$. That is, the $y_k(t)$ solve this equation for $n$ different initial conditions that will be conveniently chosen. We have thus that $p(D)M(t) = \sum_{k=0}^{n-1} p(D)y_k(t)A^k = 0$. Our goal is to choose the $y_k(t)$ so that $e^{At} = M(t)$. To this end, we note that $D^k e^{At}|_{t=0} = A^k e^{At}|_{t=0} = A^k$. That is, $e^{At}$ solves $p(D)\Phi(t) = 0$ with the initial conditions $\Phi(0) = A^0, \dots, D^{n-1}\Phi(0) = A^{n-1}$. It is then clear that we must take the following initial conditions: $D^j y_k(0) = \delta_k^j$, with $j, k \in \{0, \dots, n-1\}$. In such a case, $e^{At}$ and $M(t)$ satisfy both the same differential equation and the same initial conditions. Hence, $e^{At} = M(t)$.

The $y_k(t)$ can be obtained by standard methods. To each root $\lambda$ of $p(\lambda) = 0$ with multiplicity $m$ there corresponds a solution of the form $(a_0 + a_1 t + \dots + a_{m-1}t^{m-1})e^{\lambda t}$, the $a_k$ being fixed by the initial conditions. As already said, this method applies even when the matrix $A$ is not diagonalizable. However, when the eigenvalue problem for $A$ is solvable, another approach can be more convenient. We present such an approach in what follows.

## 3. Closed Form of the Matrix Exponential via the Solution of Algebraic Equations

Assume now that $A$ has eigenvectors $|a_k\rangle$ that satisfy $A|a_k\rangle = a_k|a_k\rangle$ and span the Hilbert space on which $A$ acts. Thus, the identity operator can be written as $I = \sum_k |a_k\rangle\langle a_k|$. One can also write $A = A \cdot I = \sum_k a_k|a_k\rangle\langle a_k|$. Moreover, $A^m = \sum_k a_k^m|a_k\rangle\langle a_k|$, from which it follows that any analytic function $F$ of $A$ can be written as

$$F(A) = \sum_k F(a_k)\,|a_k\rangle\langle a_k|.$$

Consider, as a first example, the matrix $\mathbf{n}\cdot\boldsymbol{\sigma}$, with $\mathbf{n}$ a unit vector. This matrix has the eigenvalues $\pm 1$ and the corresponding eigenvectors $|\mathbf{n}_\pm\rangle$. That is, $\mathbf{n}\cdot\boldsymbol{\sigma}|\mathbf{n}_\pm\rangle = \pm|\mathbf{n}_\pm\rangle$. We need no more than this to get Equation (1). Indeed, from $\mathbf{n}\cdot\boldsymbol{\sigma} = |\mathbf{n}_+\rangle\langle\mathbf{n}_+| - |\mathbf{n}_-\rangle\langle\mathbf{n}_-|$ and $I = |\mathbf{n}_+\rangle\langle\mathbf{n}_+| + |\mathbf{n}_-\rangle\langle\mathbf{n}_-|$, it follows that $|\mathbf{n}_\pm\rangle\langle\mathbf{n}_\pm| = (I \pm \mathbf{n}\cdot\boldsymbol{\sigma})/2$. Next, we consider $F(A) = \exp A = \sum_k \exp(a_k)|a_k\rangle\langle a_k|$, with $A = i\alpha\,\mathbf{n}\cdot\boldsymbol{\sigma}$. The operator $i\alpha\,\mathbf{n}\cdot\boldsymbol{\sigma}$ has eigenvectors $|\mathbf{n}_\pm\rangle$ and eigenvalues $\pm i\alpha$. Thus,

$$\exp(i\alpha\,\mathbf{n}\cdot\boldsymbol{\sigma}) = e^{i\alpha}\,\frac{I + \mathbf{n}\cdot\boldsymbol{\sigma}}{2} + e^{-i\alpha}\,\frac{I - \mathbf{n}\cdot\boldsymbol{\sigma}}{2} = I\cos\alpha + i\,\mathbf{n}\cdot\boldsymbol{\sigma}\,\sin\alpha,$$

which reproduces Equation (1). It is a matter of convenience whether one chooses to express $\exp(i\alpha\,\mathbf{n}\cdot\boldsymbol{\sigma})$ in terms of the projectors $|\mathbf{n}_\pm\rangle\langle\mathbf{n}_\pm|$, or in terms of $I$ and $\mathbf{n}\cdot\boldsymbol{\sigma}$.

As is well known, $\exp(i\alpha\,\mathbf{n}\cdot\boldsymbol{\sigma})$ is a rotation operator acting on spinor space. It is also an element of the group SU(2), whose generators can be taken as $X_i = i\sigma_i/2$, $i = 1, 2, 3$. They satisfy the commutation relations $[X_i, X_j] = \epsilon_{ijk}X_k$ that characterize the rotation algebra. The rotation operator can also act on three-dimensional vectors $\mathbf{r}$. In this case, one often uses the following formula, which gives the rotated vector $\mathbf{r}'$ in terms of the rotation angle $\theta$ and the unit vector $\mathbf{n}$ that defines the rotation axis:

$$\mathbf{r}' = \mathbf{r}\cos\theta + \mathbf{n}(\mathbf{n}\cdot\mathbf{r})(1 - \cos\theta) + (\mathbf{n}\times\mathbf{r})\sin\theta. \qquad (11)$$

To make contact with the foregoing approach, one introduces generators $X_i$ for three-dimensional space, which can be read off from the next formula, Equation (12). The rotation matrix is then obtained as $\exp(\theta\,\mathbf{n}\cdot\mathbf{X})$, with

$$M := \mathbf{n}\cdot\mathbf{X} = \begin{pmatrix} 0 & -n_z & n_y \\ n_z & 0 & -n_x \\ -n_y & n_x & 0 \end{pmatrix}. \qquad (12)$$

The matrix $M$ has the eigenvalues $0$ and $\pm i$, with eigenvectors $|\mathbf{n}_0\rangle$ and $|\mathbf{n}_\pm\rangle$, respectively. Similarly to the spin case, we have now

$$\exp(\theta M) = |\mathbf{n}_0\rangle\langle\mathbf{n}_0| + e^{i\theta}|\mathbf{n}_+\rangle\langle\mathbf{n}_+| + e^{-i\theta}|\mathbf{n}_-\rangle\langle\mathbf{n}_-|.$$

We can express the projectors $|\mathbf{n}_k\rangle\langle\mathbf{n}_k|$, $k = \pm, 0$, in terms of $I$ and $M$. The required relation is obtained by squaring $M$: from $M = i|\mathbf{n}_+\rangle\langle\mathbf{n}_+| - i|\mathbf{n}_-\rangle\langle\mathbf{n}_-|$ one gets $M^2 = -|\mathbf{n}_+\rangle\langle\mathbf{n}_+| - |\mathbf{n}_-\rangle\langle\mathbf{n}_-|$, so that $|\mathbf{n}_\pm\rangle\langle\mathbf{n}_\pm| = (\mp iM - M^2)/2$, and $|\mathbf{n}_0\rangle\langle\mathbf{n}_0| = I + M^2$. Thus, we have

$$\exp(\theta M) = I + M\sin\theta + M^2(1 - \cos\theta). \qquad (17)$$

Writing $\mathbf{r} = (x, y, z)^T$, we easily see that $M\mathbf{r} = \mathbf{n}\times\mathbf{r}$ and $M^2\mathbf{r} = \mathbf{n}\times(\mathbf{n}\times\mathbf{r}) = \mathbf{n}(\mathbf{n}\cdot\mathbf{r}) - \mathbf{r}$. Thus, on account of Equation (17), $\mathbf{r}' = \exp(\theta M)\mathbf{r}$ reads the same as Equation (11).

The procedure generalizes to $N \times N$ matrices. Once the eigenvalues $a_k$ of $A$ (which we assume nondegenerate) have been determined, we can write the $N$ equations $A^0 = I = \sum_k |a_k\rangle\langle a_k|$, $A = \sum_k a_k|a_k\rangle\langle a_k|$, $A^2 = \sum_{k=1}^N a_k^2|a_k\rangle\langle a_k|, \dots, A^{N-1} = \sum_{k=1}^N a_k^{N-1}|a_k\rangle\langle a_k|$, from which it is possible to obtain the $N$ projectors $|a_k\rangle\langle a_k|$ in terms of $I, A, A^2, \dots, A^{N-1}$. To this end, we must solve the system (18) constituted by the above $N$ equations. Having the projectors $|a_k\rangle\langle a_k|$ in terms of $I, A, \dots, A^{N-1}$, we can express any analytic function of $A$ in terms of these $N$ powers of $A$, in particular $\exp A = \sum_{k=1}^N \exp(a_k)|a_k\rangle\langle a_k|$. For the case $N = 4$, for instance, this can be carried out explicitly, as follows.

The system (18) is a linear one, with a Vandermonde coefficient matrix $V$, $V_{k,j} = a_j^{k-1}$: writing $|a_j\rangle\langle a_j| \to w_j$, with $j = 1, \dots, N$, and $A^k \to q_{k+1}$, with $k = 0, \dots, N-1$, it reads $q_i = \sum_j V_{i,j}\,w_j$. The solution of this system is given by $w_j = \sum_{k=1}^N U_{j,k}\,q_k$, with $U = V^{-1}$, the inverse of the Vandermonde matrix. This matrix inverse can be calculated as follows [18]. Let us define a polynomial $P_j(x)$ of degree $N - 1$ as

$$P_j(x) = \prod_{i \neq j}\frac{x - a_i}{a_j - a_i} \equiv \sum_{k=1}^N U_{j,k}\,x^{k-1}. \qquad (23)$$

The coefficients $U_{j,k}$ of the last equality follow from expanding the preceding expression and collecting equal powers of $x$. These $U_{j,k}$ are the components of $V^{-1}$. Indeed, setting $x = a_i$ and observing that $P_j(a_i) = \delta_{ji} = \sum_{k=1}^N U_{j,k}a_i^{k-1} = (UV)_{j,i}$, we see that $U$ is the inverse of the Vandermonde matrix. The projectors $|a_j\rangle\langle a_j|$ in Equation (18) can thus be obtained by replacing $x \to A$ in Equation (23). We get in this way the explicit solution

$$|a_j\rangle\langle a_j| = P_j(A) = \prod_{i \neq j}\frac{A - a_i I}{a_j - a_i}. \qquad (24)$$

Degenerate cases can be handled as well. Consider, for instance, a $4 \times 4$ matrix $A$ with eigenvalues $\lambda_1$ and $\lambda_2$, which are two-fold degenerate. We can group the projectors belonging to each eigenvalue, thereby obtaining

$$\exp A = e^{\lambda_1}\,\frac{A - \lambda_2 I}{\lambda_1 - \lambda_2} + e^{\lambda_2}\,\frac{A - \lambda_1 I}{\lambda_2 - \lambda_1}. \qquad (29)$$

## 4. Examples

#### 4.1. The Foldy–Wouthuysen Transformation

The Foldy–Wouthuysen transformation acts on a bispinor $\psi = (\psi_1, \psi_2, \psi_3, \psi_4)^T$ that solves the Dirac equation $i\hbar\,\partial\psi/\partial t = H\psi$, where $H = -i\hbar c\,\boldsymbol{\alpha}\cdot\nabla + \beta mc^2$. Here, $\beta$ and $\boldsymbol{\alpha} = (\alpha_x, \alpha_y, \alpha_z)$ are the $4 \times 4$ Dirac matrices, written in terms of $2 \times 2$ blocks as

$$\beta = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}, \qquad \boldsymbol{\alpha} = \begin{pmatrix} 0 & \boldsymbol{\sigma} \\ \boldsymbol{\sigma} & 0 \end{pmatrix}.$$

The transformation is implemented by the operator $\exp M$, with $M = \theta\beta\boldsymbol{\alpha}\cdot\mathbf{p}/2 = (\theta|\mathbf{p}|/2)\,\beta\boldsymbol{\alpha}\cdot\mathbf{n}$, where $\mathbf{n} = \mathbf{p}/|\mathbf{p}|$. The eigenvalues of the $4 \times 4$ matrix $\beta\boldsymbol{\alpha}\cdot\mathbf{n}$ are $\pm i$, each being two-fold degenerate. This follows from noting that

$$\beta\boldsymbol{\alpha}\cdot\mathbf{n} = \begin{pmatrix} 0 & \boldsymbol{\sigma}\cdot\mathbf{n} \\ -\boldsymbol{\sigma}\cdot\mathbf{n} & 0 \end{pmatrix}.$$

Because $(\boldsymbol{\sigma}\cdot\mathbf{n})^2 = 1$, this matrix satisfies the characteristic equation $\lambda^2 + 1 = 0$; its eigenvalues are thus $\pm i$. The eigenvalues of $M = \theta\beta\boldsymbol{\alpha}\cdot\mathbf{p}/2$ are then $\lambda_{1,2} = \pm i\theta|\mathbf{p}|/2$. Replacing these values in Equation (29) we obtain

$$\exp M = \cos\!\left(\frac{\theta|\mathbf{p}|}{2}\right) + \beta\boldsymbol{\alpha}\cdot\mathbf{n}\,\sin\!\left(\frac{\theta|\mathbf{p}|}{2}\right).$$

The standard, textbook derivation of this result [19] requires instead expanding $\exp M$ in a power series and exploiting the algebraic properties of $\boldsymbol{\alpha}$ and $\beta$ in order to group together odd and even powers of $\theta$. This finally leads to the same closed-form expression that we have arrived at after a few steps.

#### 4.2. Lorentz-Type Equations of Motion

Lorentz-type equations rule the motion of some vector quantity $\mathbf{S}$. These equations often contain terms of the form $\boldsymbol{\Omega}\times$. An example of this is the ubiquitous equation

$$\frac{\partial\mathbf{S}}{\partial t} = \boldsymbol{\Omega}\times\mathbf{S}, \qquad (35)$$

which can be formally solved in terms of the operator $\widehat{\Omega} := \boldsymbol{\Omega}\times$. The solution for the case $\partial\boldsymbol{\Omega}/\partial t = \mathbf{0}$, for instance, was obtained [20] by expanding $\exp(t\widehat{\Omega})$ as an infinite series and using the cyclical properties of the vector product in order to get $\mathbf{S}(t)$ in closed form. This form is nothing but Equation (11) with the replacements $\mathbf{r}' \to \mathbf{S}(t)$, $\mathbf{r} \to \mathbf{S}(0)$ and $\theta \to \Omega t$, where $\Omega := |\boldsymbol{\Omega}|$. We obtained Equation (11) without expanding the exponential and without using any cyclic properties. Our solution follows from writing Equation (35) in matrix form, i.e.,

$$\frac{\partial\mathbf{S}}{\partial t} = \Omega M\mathbf{S}, \qquad (36)$$

with $M = \mathbf{n}\cdot\mathbf{X}$ and $\mathbf{n} = \boldsymbol{\Omega}/\Omega$. The solution $\mathbf{S}(t) = \exp(M\Omega t)\mathbf{S}(0)$ is then easily written in closed form by applying the CH-method, as in Equation (11). The advantages of this method show up even more sharply when dealing with some extensions of Equation (36). Consider, e.g., the non-homogeneous version of Equation (35):

$$\frac{\partial\mathbf{S}}{\partial t} = \boldsymbol{\Omega}\times\mathbf{S} + \mathbf{N}(t). \qquad (37)$$

Such equations appear, e.g., when the fields involved derive from the potentials $\phi = -\mathbf{E}\cdot\mathbf{r}$ and $\mathbf{A} = \mathbf{B}\times\mathbf{r}/2$, respectively [20]. The solution of Equation (37) is easily obtained by acting on both sides with the "integrating (operator-valued) factor" $\exp(-\Omega Mt)$. One then readily obtains, for the initial condition $\mathbf{S}(0) = \mathbf{S}_0$,

$$\mathbf{S}(t) = e^{\Omega Mt}\,\mathbf{S}_0 + e^{\Omega Mt}\int_0^t e^{-\Omega Ms}\,\mathbf{N}(s)\,ds. \qquad (38)$$

For a constant $\mathbf{N}$, the integral in Equation (38) is then trivial. An equivalent solution is given in [20], but written in terms of the evolution operator $\widehat{U}(t) = \exp(t\widehat{\Omega})$ and its inverse. Inverse operators repeatedly appear within such a framework [20] and are often calculated with the help of the Laplace transform identity $\widehat{A}^{-1} = \int_0^\infty \exp(-s\widehat{A})\,ds$. Depending on $\widehat{A}$, this could be not such a straightforward task as it might appear at first sight. Now, while vector notation gives us additional physical insight, vector calculus can rapidly turn into a messy business. Our strategy is therefore to avoid vector calculus and instead rely on the CH-method as much as possible. Only at the end do we write down our results, if we wish, in terms of vector products and the like. That is, we use Equations (13)–(17) systematically, in particular Equation (16) when we need to handle $\exp(\theta M)$, e.g., within integrals. The simplification comes about from our working with the eigenbasis of $\exp(\theta M)$, i.e., with the eigenbasis of $M$. Writing down the final results in three-vector notation amounts to expressing these results in the basis in which $M$ was originally defined, cf. Equation (12). Let us denote this basis by $\{|\mathbf{x}\rangle, |\mathbf{y}\rangle, |\mathbf{z}\rangle\}$. The eigenvectors $|\mathbf{n}_\pm\rangle$ and $|\mathbf{n}_0\rangle$ of $M$ are easily obtained from those of $X_3$, cf. Equation (12). The eigenvectors of $X_3$ are, in turn, analogous to those of Pauli's $\sigma_y$, namely $|\pm\rangle = (|\mathbf{x}\rangle \mp i|\mathbf{y}\rangle)/\sqrt{2}$, plus a third eigenvector that is orthogonal to the former ones, namely $|0\rangle = |\mathbf{z}\rangle$. In order to obtain the eigenvectors of $\mathbf{n}\cdot\mathbf{X}$, with $\mathbf{n} = (\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta)$, we apply the rotation $\exp(\phi X_3)\exp(\theta X_2)$ to the eigenvectors $|\pm\rangle$ and $|0\rangle$, thereby getting $|\mathbf{n}_\pm\rangle$ and $|\mathbf{n}_0\rangle$, respectively. All these calculations are easily performed using the CH-method.

Once we have $|\mathbf{n}_\pm\rangle$ and $|\mathbf{n}_0\rangle$, we also have the transformation matrix $T$ that brings $M$ into diagonal form: $T^{-1}MT = M_D = \mathrm{diag}(-i, 0, i)$. Indeed, $T$'s columns are just $|\mathbf{n}_-\rangle$, $|\mathbf{n}_0\rangle$ and $|\mathbf{n}_+\rangle$. After we have carried out all calculations in the eigenbasis of $M$, by applying $T$ we can express the final result in the basis $\{|\mathbf{x}\rangle, |\mathbf{y}\rangle, |\mathbf{z}\rangle\}$, thereby obtaining the desired expressions in three-vector notation. Let us illustrate this procedure by addressing the evolution equation

$$\frac{\partial\mathbf{S}}{\partial t} = \boldsymbol{\Omega}\times\mathbf{S} + \lambda\,\boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{S}) \equiv A\mathbf{S}, \qquad A = \Omega M + \lambda\Omega^2M^2, \qquad (39)$$

whose solution is $\mathbf{S}(t) = \exp(At)\mathbf{S}_0$. The eigenbasis of $A$ is the same as that of $M$. We have thus

$$\exp(At) = |\mathbf{n}_0\rangle\langle\mathbf{n}_0| + e^{(i\Omega - \lambda\Omega^2)t}|\mathbf{n}_+\rangle\langle\mathbf{n}_+| + e^{(-i\Omega - \lambda\Omega^2)t}|\mathbf{n}_-\rangle\langle\mathbf{n}_-|. \qquad (41)$$

The projectors $|\mathbf{n}_k\rangle\langle\mathbf{n}_k|$ can be written in terms of the powers of $A$ by solving the system

$$I = \sum_k |\mathbf{n}_k\rangle\langle\mathbf{n}_k|, \qquad (42)$$
$$A = \sum_k a_k|\mathbf{n}_k\rangle\langle\mathbf{n}_k|, \qquad (43)$$
$$A^2 = \sum_k a_k^2|\mathbf{n}_k\rangle\langle\mathbf{n}_k|, \qquad (44)$$

with $a_0 = 0$ and $a_\pm = \pm i\Omega - \lambda\Omega^2$. Noting that $A = \Omega M + \lambda(\Omega M)^2$ and $A^2 = -2\lambda\Omega^3M + (1 - \lambda^2\Omega^2)(\Omega M)^2$, and replacing the solution of the system (42)–(44) in Equation (41), we get

$$\mathbf{S}(t) = \left[I + M^2 + e^{-\lambda\Omega^2t}\left(M\sin\Omega t - M^2\cos\Omega t\right)\right]\mathbf{S}_0. \qquad (45)$$

It remains to express $\mathbf{S}(t) = \exp(At)\mathbf{S}_0$ in the original basis $\{|\mathbf{x}\rangle, |\mathbf{y}\rangle, |\mathbf{z}\rangle\}$, something that in this case amounts to writing $M\mathbf{S}_0 = \mathbf{n}\times\mathbf{S}_0$ and $M^2\mathbf{S}_0 = \mathbf{n}(\mathbf{n}\cdot\mathbf{S}_0) - \mathbf{S}_0$. Equation (39) was also addressed in [20], but making use of the operator method. The solution was given in terms of a series expansion for the evolution operator. In order to write this solution in closed form, it is necessary to introduce sin- and cos-like functions [20]. These functions are defined as infinite series involving two-variable Hermite polynomials. The final expression reads like Equation (11), but with sin and cos replaced by the aforementioned functions containing two-variable Hermite polynomials. Now, one can hardly unravel from such an expression the physical features that characterize the system's dynamics. On the other hand, a solution given as in Equation (45) clearly displays such dynamics, in particular the damping effect stemming from the $\lambda$-term in Equation (39), for $\lambda > 0$. Indeed, Equation (45) clearly shows that the state vector $\mathbf{S}(t) = \exp(At)\mathbf{S}_0$ asymptotically aligns with $\boldsymbol{\Omega}$ while performing a damped Larmor precession about the latter.

The case $\partial\boldsymbol{\Omega}/\partial t \neq 0$ is more involved and generally requires resorting to Dyson-like series expansions, e.g., time-ordered exponential integrations. While this subject lies beyond the scope of the present work, it should be mentioned that the CH-method can be advantageously applied in this context as well. For instance, time-ordered exponential integrations involving operators of the form $A + B(t)$ do require the evaluation of $\exp A$. Likewise, disentangling techniques make repeated use of matrix exponentials of single operators [21]. In all these cases, the CH-method offers a possible shortcut.

#### 4.3. The Jaynes–Cummings Hamiltonian

The Jaynes–Cummings Hamiltonian can be written as $H = H_0 + V$, with $H_0 = \hbar\omega_0\sigma_z/2 + \hbar\omega a^\dagger a$. The interaction Hamiltonian $V = \hbar g(a^\dagger\sigma_- + a\sigma_+)$ couples the states $|b, n+1\rangle$ and $|a, n\rangle$ alone. Hence, $H$ can be split into a sum: $H = \sum_n H_n$, with each $H_n$ acting on the subspace $\mathrm{Span}\{|a, n\rangle, |b, n+1\rangle\}$. Within such a subspace, $H_n$ is represented by the $2 \times 2$ matrix

$$H_n = \hbar\begin{pmatrix} \omega(n + 1/2) + \delta/2 & g\sqrt{n+1} \\ g\sqrt{n+1} & \omega(n + 1/2) - \delta/2 \end{pmatrix}, \qquad (46)$$

where $\delta := \omega_0 - \omega$ is the detuning.

One way to obtain the evolution operator $U = \exp(-iHt/\hbar)$ is to write $H = H_1 + H_2$, with $H_1 = \hbar\omega(a^\dagger a + \sigma_z/2)$ and $H_2 = \hbar\delta\sigma_z/2 + \hbar g(a^\dagger\sigma_- + a\sigma_+)$. Because $[H_1, H_2] = 0$, the evolution operator can be factored as $U = U_1U_2 = \exp(-iH_1t/\hbar)\exp(-iH_2t/\hbar)$. The first factor is diagonal in $\mathrm{Span}\{|a, n\rangle, |b, n+1\rangle\}$. The second factor can be expanded in a Taylor series. As it turns out, one can obtain closed-form expressions for the even and the odd powers of the expansion. Thus, a closed form for $U_2$ can be obtained as well. As can be seen, this method depends on the realization that Equation (46) can be written in a special form, which renders it possible to factorize $U$.

The CH-method requires no such realization. We simply decompose $H = \sum_n H_n$, with $[H_n, H_m] = 0$, and write $U = \prod_n U_n = \prod_n \exp(-iH_nt/\hbar)$. Generally, a $2 \times 2$ Hamiltonian $H$ has eigenvalues of the form $E_\pm = \hbar(\lambda_0 \pm \lambda)$. We have thus

$$\exp(-iHt/\hbar) = e^{-i\lambda_0 t}\left[\cos(\lambda t)\,I - i\sin(\lambda t)\,\frac{H/\hbar - \lambda_0 I}{\lambda}\right]. \qquad (47)$$

In the present case, $H_n$ has eigenvalues $E_n^\pm = \hbar\omega(n + 1/2) \pm \hbar\sqrt{\delta^2/4 + g^2(n+1)} \equiv \hbar\omega(n + 1/2) \pm \hbar R_n$, so that $\lambda_0 = \omega(n + 1/2)$ and $\lambda = R_n$. Replacing $H_n$ and these eigenvalues in Equation (47) we get a closed-form expression for each $U_n$, and hence for $U$. The result can be cast in operator form by introducing $\widehat{\phi} := g^2a^\dagger a + \delta^2/4$. Proceeding in this way with the operators that enter the matrix elements of $U$ and observing that $\sin(R_nt)R_n^{-1}\sqrt{n+1} = \langle n|\sin\!\big(t\sqrt{\widehat{\phi} + g^2}\big)\big(\sqrt{\widehat{\phi} + g^2}\big)^{-1}a|n+1\rangle$, etc., we readily obtain the evolution operator in closed, operator form.

#### 4.4. Bispinors and Lorentz Transformations

Under a Lorentz transformation $x'^\mu = \Lambda^\mu{}_\nu x^\nu$, the metric of Minkowski space is left invariant. Here, $\eta^{\mu\nu}$ represents the metric tensor of Minkowski space ($\eta^{00} = -\eta^{11} = -\eta^{22} = -\eta^{33} = 1$, $\eta^{\mu\nu} = 0$ for $\mu \neq \nu$). A bispinor $\psi(x)$ transforms according to [19]

$$\psi'(x') = S(\Lambda)\psi(x),$$

with $S(\Lambda) = \exp B$, where the exponent $B$ is a linear combination of the group generators, fixed by quantities $V^{\mu\nu} = -V^{\nu\mu}$ that are the components of an antisymmetric tensor; the latter has thus six independent components, corresponding to the six parameters defining a Lorentz transformation. The quantities $\gamma_\mu = \eta_{\mu\nu}\gamma^\nu$ satisfy $\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu = 2\eta^{\mu\nu}$. The quantities $\gamma_\mu\gamma_\nu$ ($\mu \neq \nu$) are the generators of the Lorentz group. $S(\Lambda)$ is not a unitary transformation, but satisfies $S^{-1} = \gamma^0S^\dagger\gamma^0$.

The generators can be grouped into two sets, $p_i := \gamma_0\gamma_i$ and $q_i := \frac{1}{2}\epsilon_{ijk}\gamma_j\gamma_k$; we call the $p_i$ Pauli generators and the $q_i$ quaternion generators. The pseudoscalar $\gamma_5 := \gamma_0\gamma_1\gamma_2\gamma_3$ satisfies $\gamma_5^2 = -1$, $\gamma_5\gamma_\mu = -\gamma_\mu\gamma_5$, so that it commutes with each generator of the Lorentz group: $\gamma_5\gamma_\mu\gamma_\nu = \gamma_\mu\gamma_\nu\gamma_5$.

Quantities of the form $\alpha + \beta\gamma_5$ ($\alpha, \beta \in \mathbb{R}$) behave like complex numbers upon multiplication with the $p_i$ and $q_i$. We denote the subspace spanned by such quantities as the complex-like subspace, and set $\mathbf{i} \equiv \gamma_5$. Noting that $\mathbf{i}p_i = q_i$ and $\mathbf{i}q_i = -p_i$, the following multiplication rules are easily derived:

$$p_ip_j = \delta_{ij} - \epsilon_{ijk}\,\mathbf{i}\,p_k, \qquad q_iq_j = -\delta_{ij} + \epsilon_{ijk}\,q_k, \qquad p_iq_j = \mathbf{i}\,\delta_{ij} + \epsilon_{ijk}\,p_k.$$

These rules justify our calling the $p_i$ Pauli generators. Noting that they reproduce, up to signs, the Pauli algebra, we are led to the realization $\mathbf{i} \to i$, $p_k \to -\sigma_k$, with $i$ being the imaginary unit and $\sigma_k$ the Pauli matrices. These matrices, as is well known, satisfy $[\sigma_i, \sigma_j] = 2i\epsilon_{ijk}\sigma_k$ and the anticommutation relations $\sigma_i\sigma_j + \sigma_j\sigma_i = 2\delta_{ij}$, which follow from $\sigma_i\sigma_j = \delta_{ij}I + i\epsilon_{ijk}\sigma_k$.

The exponent $B$ in $S(\Lambda) = \exp B$ can be written in terms of the $p_i$ and $q_i$:

$$B = \boldsymbol{\alpha}\cdot\mathbf{p} + \boldsymbol{\beta}\cdot\mathbf{q},$$

with $\alpha^i = -V^{0i}/4$ and $\beta^k\epsilon_{ijk} = -V^{ij}/4$. We can write $B$ in terms of the Pauli generators alone:

$$B = (\boldsymbol{\alpha} + \mathbf{i}\,\boldsymbol{\beta})\cdot\mathbf{p}.$$

Using the realization $\mathbf{i} \leftrightarrow i$, $p_k \leftrightarrow -\sigma_k$, we could derive the expression for $S(\Lambda) = \exp B$ by splitting the series expansion into even and odd powers of $B$, and noting that

$$B^2 = z^2, \qquad z^2 := \boldsymbol{\alpha}^2 - \boldsymbol{\beta}^2 + 2\,\mathbf{i}\,\boldsymbol{\alpha}\cdot\boldsymbol{\beta},$$

where $\boldsymbol{\alpha}^2 \equiv \boldsymbol{\alpha}\cdot\boldsymbol{\alpha}$, $\boldsymbol{\beta}^2 \equiv \boldsymbol{\beta}\cdot\boldsymbol{\beta}$, and $\boldsymbol{\alpha}\cdot\boldsymbol{\beta} \equiv \sum_{i=1}^3\alpha^i\beta^i$. We have then that $B^3 = z^2B$, $B^4 = z^4$, $B^5 = z^4B, \dots$ This allows us to write

$$\exp B = \cosh z + \frac{\sinh z}{z}\,B.$$

Alternatively, we can apply the CH-method directly to $\exp(-\mathbf{f}\cdot\boldsymbol{\sigma})$, with $\mathbf{f} = \boldsymbol{\alpha} + i\boldsymbol{\beta} \in \mathbb{C}^3$. The matrix $\mathbf{f}\cdot\boldsymbol{\sigma}$ has the (complex) eigenvalues $\lambda_\pm = \pm z$, with $z = \sqrt{\mathbf{f}\cdot\mathbf{f}}$. Writing $|\mathbf{f}_\pm\rangle$ for the corresponding eigenvectors, i.e., $\mathbf{f}\cdot\boldsymbol{\sigma}|\mathbf{f}_\pm\rangle = \lambda_\pm|\mathbf{f}_\pm\rangle$, we have that

$$|\mathbf{f}_\pm\rangle\langle\mathbf{f}_\pm| = \frac{1}{2}\left(I \pm \frac{\mathbf{f}\cdot\boldsymbol{\sigma}}{z}\right).$$

Applying $\exp A = \sum_n \exp(a_n)|a_n\rangle\langle a_n|$ to the case $A = -\mathbf{f}\cdot\boldsymbol{\sigma}$, whose eigenvectors are $|\mathbf{f}_\pm\rangle$ with eigenvalues $\exp(\mp z)$, we get

$$\exp(-\mathbf{f}\cdot\boldsymbol{\sigma}) = I\cosh z - \sinh z\,\frac{\mathbf{f}\cdot\boldsymbol{\sigma}}{z}.$$

We have thus obtained closed-form expressions for $\exp(-\mathbf{f}\cdot\boldsymbol{\sigma})$, with $\mathbf{f} = \boldsymbol{\alpha} + i\boldsymbol{\beta} \in \mathbb{C}^3$, i.e., for the elements of SL(2, ℂ), the universal covering group of the Lorentz group. It is interesting to note that the elements of SL(2, ℂ) are related to those of SU(2) by extending the parameters $\boldsymbol{\alpha}$ entering $\exp(i\boldsymbol{\alpha}\cdot\boldsymbol{\sigma}) \in$ SU(2) from the real to the complex domain: $i\boldsymbol{\alpha} \to \boldsymbol{\alpha} + i\boldsymbol{\beta}$. Standard calculations that are carried out with SU(2) elements can be carried out similarly with SL(2, ℂ) elements [15]. A possible realization of SU(2) transformations occurs in optics, by acting on the polarization of light with the help of birefringent elements (waveplates). If we also employ dichroic elements like polarizers, which absorb part of the light, then it is possible to implement SL(2, ℂ) transformations as well. In this way, one can simulate Lorentz transformations in the optical laboratory [23]. The above formalism is of great help when designing the corresponding experimental setup.

## 5. Conclusions

## Conflicts of Interest

## Acknowledgments

## References

- Gantmacher, F.R. The Theory of Matrices; Chelsea Publishing Company: New York, NY, USA, 1960; p. 83. [Google Scholar]
- Dattoli, G.; Mari, C.; Torre, A. A simplified version of the Cayley-Hamilton theorem and exponential forms of the 2 × 2 and 3 × 3 matrices. Il Nuovo Cimento
**1998**, 180, 61–68. [Google Scholar] - Cohen-Tannoudji, C.; Diu, B.; Laloë, F. Quantum Mechanics; John Wiley & Sons: New York, NY, USA, 1977; pp. 983–989. [Google Scholar]
- Sakurai, J.J. Modern Quantum Mechanics; Addison-Wesley: New York, NY, USA, 1980; pp. 163–168. [Google Scholar]
- Greiner, W.; Müller, B. Quantum Mechanics, Symmetries; Springer: New York, NY, USA, 1989; p. 68. [Google Scholar]
- Weigert, S. Baker-Campbell-Hausdorff relation for special unitary groups SU(N). J. Phys. A
**1997**, 30, 8739–8749. [Google Scholar] - Dattoli, G.; Ottaviani, P.L.; Torre, A.; Vásquez, L. Evolution operator equations: Integration with algebraic and finite-difference methods. Applications to physical problems in classical and quantum mechanics and quantum field theory. Riv. Nuovo Cimento
**1997**, 20, 1–133. [Google Scholar] - Dattoli, G.; Zhukovsky, K. Quark flavour mixing and the exponential form of the Kobayashi–Maskawa matrix. Eur. Phys. J. C
**2007**, 50, 817–821. [Google Scholar] - Leonard, I. The matrix exponential. SIAM Rev.
**1996**, 38, 507–512. [Google Scholar] - Untidt, T.S.; Nielsen, N.C. Closed solution to the Baker-Campbell-Hausdorff problem: Exact effective Hamiltonian theory for analysis of nuclear-magnetic-resonance experiments. Phys. Rev. E
**2002**, 65. [Google Scholar] [CrossRef] - Moore, G. Orthogonal polynomial expansions for the matrix exponential. Linear Algebra Appl.
**2011**, 435, 537–559. [Google Scholar] - Ding, F. Computation of matrix exponentials of special matrices. Appl. Math. Comput.
**2013**, 223, 311–326. [Google Scholar] - Koch, C.T.; Spence, J.C.H. A useful expansion of the exponential of the sum of two non-commuting matrices, one of which is diagonal. J. Phys. A Math. Gen.
**2003**, 36, 803–816. [Google Scholar] - Ramakrishna, V.; Zhou, H. On the exponential of matrices in su(4). J. Phys. A Math. Gen.
**2006**, 39, 3021–3034. [Google Scholar] - Tudor, T. On the single-exponential closed form of the product of two exponential operators. J. Phys. A Math. Theor.
**2007**, 40, 14803–14810. [Google Scholar] - Siminovitch, D.; Untidt, T.S.; Nielsen, N.C. Exact effective Hamiltonian theory. II. Polynomial expansion of matrix functions and entangled unitary exponential operators. J. Chem. Phys.
**2004**, 120, 51–66. [Google Scholar] - Goldstein, H. Classical Mechanics, 2nd ed.; Addison-Wesley: New York, NY, USA, 1980; pp. 164–174. [Google Scholar]
- Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. Numerical Recipes in FORTRAN, The Art of Scientific Computing, 2nd ed.; Cambridge University Press: Cambridge, UK, 1992; pp. 83–84. [Google Scholar]
- Bjorken, J.D.; Drell, S.D. Relativistic Quantum Mechanics; McGraw-Hill: New York, NY, USA, 1965. [Google Scholar]
- Babusci, D.; Dattoli, G.; Sabia, E. Operational methods and Lorentz-type equations of motion. J. Phys. Math.
**2011**, 3, 1–17. [Google Scholar] - Puri, R.R. Mathematical Methods of Quantum Optics; Springer: New York, NY, USA, 2001; pp. 8–53. [Google Scholar]
- Meystre, P.; Sargent, M. Elements of Quantum Optics, 2nd ed.; Springer: Berlin, Germany, 1999; pp. 372–373. [Google Scholar]
- Kim, Y.S.; Noz, M.E. Symmetries shared by the Poincaré group and the Poincaré sphere. Symmetry
**2013**, 5, 233–252. [Google Scholar]

© 2014 by the author; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license ( http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

De Zela, F.
Closed-Form Expressions for the Matrix Exponential. *Symmetry* **2014**, *6*, 329-344.
https://doi.org/10.3390/sym6020329
