Article

A Unifying Framework for Perturbative Exponential Factorizations

1 Departament de Matemàtiques and IMAC, Universitat Jaume I, 12071 Castellón, Spain
2 Departament de Física Teòrica, Universitat de València and Institute for Integrative Systems Biology (I2SysBio), 46100 Burjassot, Spain
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(6), 637; https://doi.org/10.3390/math9060637
Submission received: 25 January 2021 / Revised: 5 March 2021 / Accepted: 13 March 2021 / Published: 17 March 2021

Abstract:
We propose a framework where Fer and Wilcox expansions for the solution of differential equations are derived from two particular choices for the initial transformation that seeds the product expansion. In this scheme, intermediate expansions can also be envisaged. Recurrence formulas are developed. A new lower bound for the convergence of the Wilcox expansion is provided, as well as some applications of the results. In particular, two examples are worked out up to a high order of approximation to illustrate the behavior of the Wilcox expansion.

1. Introduction

Linear differential equations of the form
\[ \dot{x} \equiv \frac{dx}{dt} = A(t)\,x, \qquad x(0) = x_0 \in \mathbb{C}^d, \tag{1} \]
with $A(t)$ a $d \times d$ matrix whose entries are integrable functions of t, are ever-present in many branches of science, the fundamental evolution equation of Quantum Mechanics, the Schrödinger equation, being a particular case. In consequence, solving Equation (1) is of the greatest importance. In spite of their apparent simplicity, however, such equations are seldom solvable in terms of elementary functions, and so different procedures have been proposed over the years to render approximate solutions. These are especially useful in the analytical treatment of perturbative problems, such as those arising in the time evolution of quantum systems [1], control theory, or problems where time-ordered products are involved [2]. Among them, exponential perturbative expansions have received a great deal of attention, due to some remarkable properties they possess. In particular, if Equation (1) is defined in a Lie group, the approximations they furnish also evolve in the same Lie group. As a consequence, important qualitative properties of the exact solution are also preserved by the approximations. Thus, if Equation (1) represents the time-dependent Schrödinger equation, then the approximate evolution operator is still unitary, and as a consequence, the total sum of (approximate) transition probabilities is unity, no matter where the expansion is truncated. There are, in fact, many physical problems (in non-linear mechanics, optical spectroscopy, magnetic resonance, etc.) involving periodic fast-oscillating external fields that are also modeled by Equation (1), with $A(t)$ periodic. In that case, specially tailored expansions incorporating the well-known Floquet theorem [3], such as the average Hamiltonian theory [4] and the Floquet–Magnus expansion [5,6], have also been proposed.
When dealing with the general problem (1), one of the most widely used exponential approximations corresponds to the Magnus expansion [7]
\[ x(t) = e^{\Omega(t)}\, x_0, \tag{2} \]
where $\Omega$ is an infinite series
\[ \Omega(t) = \sum_{k=1}^{\infty} \Omega_k(t), \tag{3} \]
whose terms are linear combinations of time-ordered integrals of nested commutators of A evaluated at different times (see [8] for a review, including applications to several physical and mathematical problems). What is more interesting for our purposes here is that this expansion can be related to a coordinate transformation $x \mapsto X$ rendering the original system (1) into the trivial equation
\[ \frac{dX}{dt} = 0, \tag{4} \]
with the static solution $X(t) = X(0) = x_0$, and that the transformation is given precisely by $x(t) = \exp(\Omega(t))\, X(t)$ [9].
In contrast to the Magnus expansion, the Floquet–Magnus expansion obtains the solution with two exponential transformations when $A(t)$ is periodic, whereas other exponential perturbative expansions are based on infinite product factorizations of $x(t)$,
\[ x(t) = e^{\Omega_1(t)}\, e^{\Omega_2(t)} \cdots e^{\Omega_n(t)} \cdots x(0), \tag{5} \]
such as those proposed by Fer and Wilcox. In fact, as pointed out in [10], both expansions have a curious history, which is worth describing. It was Fer who proposed the expansion that bears his name in [11], although he never applied it to solve any specific problem. Bellman reviewed this paper in the Mathematical Reviews (MR0104009) and even proposed the expansion as an exercise in [12]. Nevertheless, Wilcox identified it in [1] with an alternative factorization, Equation (5), which was indeed a different and new type of expansion. From then on, their historical trajectories move apart. Thus, the Fer expansion was rediscovered by Iserles [13] as a tool for the numerical integration of linear differential equations and later on used in Quantum Mechanics [14] and solid-state nuclear magnetic resonance [15], but also as a Lie-group integrator [16,17,18]. On the other hand, the Wilcox expansion has been rediscovered several times in the literature, in particular in [19] in the context of nonlinear control systems, and in [20] as a general tool for approximating the time evolution operator in Quantum Mechanics.
The first goal in this work is to recast both infinite product expansions within a unifying framework. This is done by considering, instead of just one exponential transformation, as in the case of the Magnus expansion, a sequence of such transformations, $e^{\Omega_1(t)}, \ldots, e^{\Omega_k(t)}, \ldots$, chosen to satisfy certain requirements. To be more specific, suppose one replaces $A(t)$ in Equation (1) by $\lambda A(t)$, where $\lambda > 0$ is a parameter. Then, if the transformations are chosen so that each $\Omega_k(t)$ is proportional to $\lambda^k$, we recover the Wilcox expansion, whereas we end up with the Fer expansion when each $\Omega_k(t)$ is an infinite series in $\lambda$ whose first term is proportional to $\lambda^{2^{k-1}}$.
We also show that further alternative prescriptions yield new factorizations. This additional degree of freedom can indeed be used to better accommodate the features of the matrix $A(t)$, as the Floquet–Magnus expansion does when $A(t)$ is periodic.
One might then consider this sequence of linear transformations as a generalization of the concept of picture in Quantum Mechanics when Equation (1) refers to the Schrödinger equation.
Our second goal consists of obtaining, on the basis of this framework, new results concerning the Wilcox expansion. Thus, we develop a recursive procedure to obtain every order of approximation in terms of nested commutators, as well as a bound for the convergence radius. We also establish a formal connection of the Wilcox expansion with the Zassenhaus formula [7,21].
Finally, we address the important problem of expanding the exponential $\exp(A + \varepsilon B)$ for $\varepsilon > 0$ and two generic non-commuting operators A and B, and present two applications of the results obtained.

2. A Sequence of Transformations: The General Case

Given the initial value problem (1), let us consider a linear change of variables $x \mapsto X_1$ of the form
\[ x(t) = e^{\Omega_1(t)}\, X_1(t), \qquad \Omega_1(0) = 0, \tag{6} \]
transforming the original system into
\[ \frac{dX_1}{dt} = B_1(t)\, X_1. \tag{7} \]
For the time being, the generator $\Omega_1(t)$ of the transformation is not specified. Then, $B_1(t)$ can be expressed in terms of $A(t)$ and $\Omega_1(t)$ as follows. First, by inserting Equation (6) into Equation (1), and taking Equation (7) into account, one obtains
\[ \frac{d}{dt} \exp(\Omega_1) = A(t) \exp(\Omega_1) - \exp(\Omega_1)\, B_1(t), \tag{8} \]
whence
\[ B_1(t) = e^{-\Omega_1} A(t)\, e^{\Omega_1} - e^{-\Omega_1} \frac{d}{dt} e^{\Omega_1}. \tag{9} \]
The derivative of the matrix exponential can be written as [8]
\[ \frac{d}{dt} \exp(\Omega_1(t)) = d\exp_{\Omega_1(t)}(\dot{\Omega}_1(t)) \exp(\Omega_1), \tag{10} \]
where the symbol $d\exp_{\Omega}(C)$ stands for the (everywhere convergent) power series
\[ d\exp_{\Omega}(C) = \sum_{k=0}^{\infty} \frac{1}{(k+1)!}\, \operatorname{ad}_{\Omega}^{k}(C) \equiv \frac{\exp(\operatorname{ad}_{\Omega}) - I}{\operatorname{ad}_{\Omega}}(C). \tag{11} \]
Here $\operatorname{ad}_{\Omega}^{0} C = C$, $\operatorname{ad}_{\Omega}^{k} C = [\Omega, \operatorname{ad}_{\Omega}^{k-1} C]$, and $[\Omega, C]$ denotes the usual commutator. Therefore,
\[ B_1(t) = e^{-\Omega_1} \bigl( A(t) - d\exp_{\Omega_1(t)}(\dot{\Omega}_1(t)) \bigr)\, e^{\Omega_1} \equiv e^{-\operatorname{ad}_{\Omega_1}} (B_0 - G_1), \tag{12} \]
where
\[ B_0(t) \equiv A(t), \qquad G_1(t) \equiv d\exp_{\Omega_1(t)}(\dot{\Omega}_1(t)), \tag{13} \]
and
\[ e^{-\operatorname{ad}_{\Omega_1}} F = \sum_{k \ge 0} \frac{(-1)^k}{k!}\, \operatorname{ad}_{\Omega_1}^{k} F = e^{-\Omega_1}\, F\, e^{\Omega_1}. \tag{14} \]
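As a quick sanity check of the last identity, the following self-contained sketch (the matrices Omega and F below are arbitrary small examples, not taken from the text) compares the truncated series $\sum_k \frac{(-1)^k}{k!}\operatorname{ad}_{\Omega}^k F$ with the conjugation $e^{-\Omega} F e^{\Omega}$:

```python
# Numerical sketch of the adjoint identity: the series sum_k (-1)^k/k! ad_Omega^k F
# should reproduce e^{-Omega} F e^{Omega}.  Omega and F are arbitrary 2x2 examples.

def mmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mscale(c, A):
    return [[c * x for x in row] for row in A]

def mexp(A, terms=40):
    # Taylor series of the matrix exponential (adequate for small-norm matrices)
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in R]
    for k in range(1, terms):
        T = mscale(1.0 / k, mmul(T, A))
        R = madd(R, T)
    return R

def comm(A, B):
    return madd(mmul(A, B), mscale(-1.0, mmul(B, A)))

Omega = [[0.0, 0.3], [0.2, -0.1]]
F = [[0.5, -0.4], [0.1, 0.2]]

# Left-hand side: truncated series sum_k (-1)^k / k! * ad_Omega^k F
lhs = [[0.0, 0.0], [0.0, 0.0]]
term, fact = F, 1.0
for k in range(40):
    lhs = madd(lhs, mscale((-1.0) ** k / fact, term))
    term = comm(Omega, term)   # ad_Omega^{k+1} F
    fact *= k + 1              # (k+1)!

# Right-hand side: conjugation e^{-Omega} F e^{Omega}
rhs = mmul(mexp(mscale(-1.0, Omega)), mmul(F, mexp(Omega)))

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
assert err < 1e-10
```

Both sides agree to roundoff, since the adjoint series converges rapidly for small-norm generators.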
Of course, nothing prevents us from repeating the whole procedure above and introducing a second transformation to Equation (7) of the form
\[ X_1(t) = e^{\Omega_2(t)}\, X_2(t), \qquad \Omega_2(0) = 0, \tag{15} \]
so that the new variables verify
\[ \frac{dX_2}{dt} = B_2(t)\, X_2. \tag{16} \]
In general, for the n-th such linear transformation
\[ X_{n-1}(t) = e^{\Omega_n(t)}\, X_n(t), \qquad \Omega_n(0) = 0, \tag{17} \]
with
\[ \frac{dX_n}{dt} = B_n(t)\, X_n, \tag{18} \]
one has
\[ B_n(t) = e^{-\operatorname{ad}_{\Omega_n}} (B_{n-1} - G_n), \qquad \text{with} \quad G_n(t) = d\exp_{\Omega_n(t)}(\dot{\Omega}_n(t)), \tag{19} \]
so that the solution of Equation (1) is expressed as
\[ x(t) = e^{\Omega_1(t)}\, e^{\Omega_2(t)} \cdots e^{\Omega_n(t)}\, X_n(t). \tag{20} \]
Alternatively, we can write $B_n(t)$ in Equation (19) as follows. Since it is also true that [8]
\[ d\exp_{\Omega_n(t)}(\dot{\Omega}_n(t)) = \Bigl( \frac{d}{dt} e^{\Omega_n} \Bigr) e^{-\Omega_n} = \int_0^1 e^{x \Omega_n}\, \dot{\Omega}_n\, e^{(1-x)\Omega_n}\, dx\; e^{-\Omega_n} = \int_0^1 e^{x \Omega_n}\, \dot{\Omega}_n\, e^{-x \Omega_n}\, dx, \tag{21} \]
we have
\[ B_n = e^{-\Omega_n} B_{n-1}\, e^{\Omega_n} - \int_0^1 e^{-u \Omega_n}\, \dot{\Omega}_n\, e^{u \Omega_n}\, du = \sum_{k \ge 0} \frac{(-1)^k}{(k+1)!} \bigl( (k+1)\, \operatorname{ad}_{\Omega_n}^{k} B_{n-1} - \operatorname{ad}_{\Omega_n}^{k} \dot{\Omega}_n \bigr), \qquad n \ge 1. \tag{22} \]
The important point is, of course, how to choose $B_n$ or, alternatively, $\Omega_n$, i.e., the specific requirements each transformation has to satisfy in order to be useful to approximately solve Equation (1). There are obviously many possibilities, and in the following we analyze two of them, leading to two different and well-known exponential perturbative factorizations mentioned in the Introduction, namely the Wilcox [1] and Fer [11] expansions.

3. Wilcox Expansion

3.1. Recurrences

Let us introduce the (dummy) parameter λ in Equation (1) and replace A with λ A . This is helpful when collecting coefficients, and at the end we can always take λ = 1 .
Since the solution of Equation (1) when A is constant, or more generally when $A(t_1) A(t_2) = A(t_2) A(t_1)$ for all $t_1, t_2$, is $x(t) = \exp\bigl( \int_0^t A(u)\, du \bigr)\, x_0$, it makes sense to take the generator for the first transformation as
\[ \Omega_1(t) = \int_0^t B_0(u)\, du = \lambda \int_0^t A(u)\, du \equiv \lambda\, W_1(t). \tag{23} \]
Then, according to Equations (12) and (13), we have
\[ C_1 \equiv B_0 - G_1 = \sum_{k \ge 1} \lambda^k\, (b_{0,k} - g_{1,k}), \tag{24} \]
where
\[ b_{0,1} = A(t), \quad b_{0,l} = 0 \;\; (l > 1); \qquad g_{1,1} = \dot{W}_1, \quad g_{1,l} = \frac{1}{l!}\, \operatorname{ad}_{W_1}^{l-1} \dot{W}_1 \;\; (l > 1). \tag{25} \]
With the choice $\dot{W}_1 = A(t)$, it turns out that $B_1$ is a power series in $\lambda$ starting with $\lambda^2$,
\[ B_1(t) = e^{-\operatorname{ad}_{\Omega_1}} (B_0 - G_1) = e^{-\lambda \operatorname{ad}_{W_1}}\, C_1 = \sum_{l \ge 2} \lambda^l\, b_{1,l}, \tag{26} \]
where
\[ b_{1,l} = \sum_{k=0}^{l-2} \frac{(-1)^k}{k!}\, \operatorname{ad}_{W_1}^{k} \bigl( b_{0,l-k} - g_{1,l-k} \bigr). \tag{27} \]
We can analogously choose the second transformation proportional to $\lambda^2$, i.e., $\Omega_2(t) \equiv \lambda^2 W_2(t)$ for a given $W_2$ to be determined. Then, a straightforward calculation shows that
\[ C_2 \equiv B_1 - G_2 = \sum_{l \ge 2} \lambda^l\, (b_{1,l} - g_{2,l}), \tag{28} \]
with
\[ g_{2,2} = \dot{W}_2, \qquad g_{2,2l} = \frac{1}{l!}\, \operatorname{ad}_{W_2}^{l-1} \dot{W}_2, \qquad g_{2,r} = 0 \;\; (r \neq 2l). \tag{29} \]
The generator $W_2$ is then obtained by imposing that $b_{1,2} - g_{2,2} = 0$ in Equation (28), i.e.,
\[ g_{2,2} = \dot{W}_2 = b_{1,2} = b_{0,2} - g_{1,2} = -\tfrac{1}{2}\, \operatorname{ad}_{W_1} \dot{W}_1. \tag{30} \]
In this way, $B_2(t)$ is a power series in $\lambda$, starting with $\lambda^3$,
\[ B_2(t) = e^{-\lambda^2 \operatorname{ad}_{W_2}}\, C_2 = \sum_{l \ge 3} \lambda^l\, b_{2,l}, \tag{31} \]
with
\[ b_{2,l} = \sum_{k=0}^{[(l-1)/2]-1} \frac{(-1)^k}{k!}\, \operatorname{ad}_{W_2}^{k} \bigl( b_{1,l-2k} - g_{2,l-2k} \bigr), \tag{32} \]
where $[\,\cdot\,]$ stands for the integer part of the argument. In general, the n-th transformation $\Omega_n(t) \equiv \lambda^n W_n(t)$ is determined in such a way that the power series of $B_n(t)$ starts with $\lambda^{n+1}$. This can be done as follows: from $B_{n-1} = \sum_{l \ge n} \lambda^l\, b_{n-1,l}$, we compute
\[ C_n \equiv B_{n-1} - G_n = \sum_{r=n}^{\infty} \lambda^r\, c_{n,r} = \sum_{r=n}^{\infty} \lambda^r\, (b_{n-1,r} - g_{n,r}), \tag{33} \]
with
\[ g_{n,r} = \begin{cases} \dfrac{1}{l!}\, \operatorname{ad}_{W_n}^{l-1} \dot{W}_n, & r = n\,l, \\[1mm] 0, & r \neq n\,l, \end{cases} \qquad l = 1, 2, \ldots \tag{34} \]
Then, $W_n$ is obtained by taking $b_{n-1,n} - g_{n,n} = 0$, i.e.,
\[ \dot{W}_n = b_{n-1,n}, \tag{35} \]
and, finally, $B_n$ is determined as
\[ B_n = e^{-\lambda^n \operatorname{ad}_{W_n}}\, C_n = \sum_{l=n+1}^{\infty} \lambda^l\, b_{n,l}, \tag{36} \]
with
\[ b_{n,l} = \sum_{k=0}^{[\frac{l-1}{n}]-1} \frac{(-1)^k}{k!}\, \operatorname{ad}_{W_n}^{k}\, c_{n,l-nk}. \tag{37} \]
Notice that, in view of Equations (34) and (37), Equation (35) simplifies to
\[ \dot{W}_n = b_{n-2,n} \qquad \text{for } n \ge 3. \tag{38} \]
The solution of Equation (1) is expressed, after n such transformations, as
\[ x(t) = e^{\lambda W_1(t)}\, e^{\lambda^2 W_2(t)} \cdots e^{\lambda^n W_n(t)}\, X_n(t). \tag{39} \]
An approximation to the exact solution containing all the dependence up to $O(\lambda^n)$ is obtained by taking $X_n = x_0$ in Equation (39).
In spite of the truncation, the factorization $e^{\lambda W_1(t)}\, e^{\lambda^2 W_2(t)} \cdots e^{\lambda^n W_n(t)}$ still shares relevant qualitative properties with the exact evolution operator, such as orthogonality, unitarity, etc.
Recursion (33)–(38) allows one to construct any generator $W_k$ of the expansion in terms of $W_1, \ldots, W_{k-1}$. For illustration, we next collect the first terms:
\[ \begin{aligned} \dot{W}_1 &= A(t), \qquad \dot{W}_2 = -\tfrac{1}{2}\, \operatorname{ad}_{W_1} \dot{W}_1, \qquad \dot{W}_3 = \tfrac{1}{3}\, \operatorname{ad}_{W_1}^{2} \dot{W}_1, \\ \dot{W}_4 &= -\tfrac{1}{8}\, \operatorname{ad}_{W_1}^{3} \dot{W}_1 - \tfrac{1}{2}\, \operatorname{ad}_{W_2} \dot{W}_2, \qquad \dot{W}_5 = \tfrac{1}{30}\, \operatorname{ad}_{W_1}^{4} \dot{W}_1 - \operatorname{ad}_{W_2} \dot{W}_3. \end{aligned} \tag{40} \]
This is the way the Wilcox expansion is built up.
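To make the first steps of the recursion concrete, here is a hedged, self-contained numerical sketch (the matrices A0, A1, the interval, and all tolerances are arbitrary illustrative choices). For $A(t) = A_0 + t A_1$ one has $W_1(t) = t A_0 + \tfrac{t^2}{2} A_1$ and, from $\dot W_2 = -\tfrac12 \operatorname{ad}_{W_1} A$, the closed form $W_2(t) = -\tfrac{t^3}{12}[A_0, A_1]$; appending the factor $e^{W_2}$ should visibly reduce the error of the truncated product against a reference solution of $\dot U = A(t) U$:

```python
# Wilcox product vs. a reference ODE solution for A(t) = A0 + t*A1 (lambda = 1).
# A0, A1, the interval and the tolerances are arbitrary illustrative choices.

def mmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mscale(c, A):
    return [[c * x for x in row] for row in A]

def mexp(A, terms=40):
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in R]
    for k in range(1, terms):
        T = mscale(1.0 / k, mmul(T, A))
        R = madd(R, T)
    return R

def comm(A, B):
    return madd(mmul(A, B), mscale(-1.0, mmul(B, A)))

A0 = [[0.0, 0.4], [0.0, 0.0]]
A1 = [[0.0, 0.0], [0.3, 0.0]]

def A(t):
    return madd(A0, mscale(t, A1))

def reference(T, steps=2000):
    # classical RK4 for the fundamental matrix of U' = A(t) U, U(0) = I
    U, h = [[1.0, 0.0], [0.0, 1.0]], T / steps
    for m in range(steps):
        t = m * h
        k1 = mmul(A(t), U)
        k2 = mmul(A(t + h / 2), madd(U, mscale(h / 2, k1)))
        k3 = mmul(A(t + h / 2), madd(U, mscale(h / 2, k2)))
        k4 = mmul(A(t + h), madd(U, mscale(h, k3)))
        incr = madd(madd(k1, mscale(2.0, k2)), madd(mscale(2.0, k3), k4))
        U = madd(U, mscale(h / 6, incr))
    return U

T = 1.0
W1 = madd(mscale(T, A0), mscale(T ** 2 / 2, A1))     # W1 = int_0^T A
W2 = mscale(-T ** 3 / 12, comm(A0, A1))              # from W2' = -1/2 [W1, A]

Uref = reference(T)
dist = lambda P, Q: max(abs(P[i][j] - Q[i][j]) for i in range(2) for j in range(2))
err1 = dist(mexp(W1), Uref)                      # e^{W1} alone
err2 = dist(mmul(mexp(W1), mexp(W2)), Uref)      # e^{W1} e^{W2}
assert err2 < err1 and err2 < 1e-2
```

The remaining error after including $e^{W_2}$ is dominated by the neglected $W_3$, one order higher in the commutator grading.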

3.2. Explicit Expressions for $W_n(t)$

Although the recursive procedure (33)–(38) turns out to be computationally very efficient for constructing the exponents $W_n(t)$ for a given $A(t)$ in practice, it is clear that much insight into the expansion can be gained if an explicit expression for any $W_n$ can be constructed, thus generalizing the treatment originally carried out by Wilcox up to n = 4 [1].
Such an expression could be obtained, in principle, by working out the recurrence (33)–(38), but a more direct approach consists of comparing the Dyson perturbation series of U(t) [22] for the associated initial value problem
\[ \dot{U} = \lambda A(t)\, U, \qquad U(0) = I, \tag{41} \]
i.e.,
\[ U(t) = I + \sum_{k=1}^{\infty} \lambda^k P_k(t), \qquad \text{with} \quad P_k(t) = \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{k-1}} dt_k\; A(t_1) A(t_2) \cdots A(t_k), \tag{42} \]
with the expansion in λ of the factorization
\[ U(t) = e^{\lambda W_1(t)}\, e^{\lambda^2 W_2(t)} \cdots e^{\lambda^n W_n(t)} \cdots. \tag{43} \]
Thus, for the first terms, one has
\[ \begin{aligned} P_1 &= W_1, \qquad P_2 = W_2 + \tfrac{1}{2} W_1^2, \qquad P_3 = W_3 + W_1 W_2 + \tfrac{1}{3!} W_1^3, \\ P_4 &= W_4 + W_1 W_3 + \tfrac{1}{2} W_1^2 W_2 + \tfrac{1}{2} W_2^2 + \tfrac{1}{4!} W_1^4. \end{aligned} \tag{44} \]
In general, we can write
\[ P_n = \sum_{p(n)} \frac{1}{r_k!}\; W_{i_1} W_{i_2} \cdots W_{i_k}, \qquad i_1 \le i_2 \le \cdots \le i_k, \tag{45} \]
where the sum extends over all partitions p(n) of the integer n. We recall that a partition of the integer n is a k-tuple $(i_1, i_2, \ldots, i_k)$ such that $i_1 + i_2 + \cdots + i_k = n$, with the ordering $i_1 \le i_2 \le \cdots \le i_k$. Thus, the seven partitions of n = 5 with the chosen ordering are $(5)$, $(1,4)$, $(2,3)$, $(1,1,3)$, $(1,2,2)$, $(1,1,1,2)$ and $(1,1,1,1,1)$. In Equation (45), $r_k$ is the number of repeated indices in the partition considered.
By working out Equation (45), one can invert the relations and express $W_n$ in terms of $P_1, \ldots, P_n$ for any $n \ge 1$. Thus, one obtains
\[ \begin{aligned} W_1 &= P_1, \qquad W_2 = P_2 - \tfrac{1}{2} P_1^2, \qquad W_3 = P_3 - P_1 P_2 + \tfrac{1}{3} P_1^3, \\ W_4 &= P_4 - P_1 P_3 + \tfrac{3}{4} P_1^2 P_2 + \tfrac{1}{4} P_2 P_1^2 - \tfrac{1}{2} P_2^2 - \tfrac{1}{4} P_1^4. \end{aligned} \tag{46} \]
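The inversion just displayed can be verified mechanically. The following self-contained sketch (the tiny noncommutative-polynomial helper is ours, not from the paper) grades each symbol $P_k$ with degree k, truncates at degree three, and checks that $e^{W_1} e^{W_2} e^{W_3} = I + P_1 + P_2 + P_3$ exactly at that order:

```python
# Check W1 = P1, W2 = P2 - P1^2/2, W3 = P3 - P1*P2 + P1^3/3 by expanding
# e^{W1} e^{W2} e^{W3} as a noncommutative polynomial truncated at degree 3,
# where the symbol P_k carries degree k.  Exact rational arithmetic throughout.
from fractions import Fraction

def deg(word):                 # word = tuple of k-indices, e.g. (1, 2) ~ P1*P2
    return sum(word)

def p_add(a, b):
    r = dict(a)
    for w, c in b.items():
        r[w] = r.get(w, Fraction(0)) + c
        if r[w] == 0:
            del r[w]
    return r

def p_scale(c, a):
    return {w: c * v for w, v in a.items()}

def p_mul(a, b, maxdeg):
    r = {}
    for wa, ca in a.items():
        for wb, cb in b.items():
            w = wa + wb        # concatenation: the product is noncommutative
            if deg(w) <= maxdeg:
                r[w] = r.get(w, Fraction(0)) + ca * cb
    return {w: c for w, c in r.items() if c != 0}

def p_exp(a, maxdeg):
    r, term = {(): Fraction(1)}, {(): Fraction(1)}
    for k in range(1, maxdeg + 1):
        term = p_scale(Fraction(1, k), p_mul(term, a, maxdeg))
        if not term:
            break
        r = p_add(r, term)
    return r

P1, P2, P3 = {(1,): Fraction(1)}, {(2,): Fraction(1)}, {(3,): Fraction(1)}
W1 = P1
W2 = p_add(P2, p_scale(Fraction(-1, 2), p_mul(P1, P1, 3)))
W3 = p_add(p_add(P3, p_scale(Fraction(-1), p_mul(P1, P2, 3))),
           p_scale(Fraction(1, 3), p_mul(p_mul(P1, P1, 3), P1, 3)))

U = p_mul(p_mul(p_exp(W1, 3), p_exp(W2, 3), 3), p_exp(W3, 3), 3)
assert U == {(): Fraction(1), (1,): Fraction(1), (2,): Fraction(1), (3,): Fraction(1)}
```

All cross terms such as $P_1 P_2$ and $P_1^3$ cancel exactly, which is the content of the inversion.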
Notice that $W_n$, $n \ge 2$, is expressed in terms of products of iterated integrals $P_{i_1} \cdots P_{i_j}$. Interestingly, it is possible to express these products as proper time-ordered integrals by using a procedure developed in [23]. If we denote
\[ A(i_1 i_2 \cdots i_n) \equiv \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n\; A(t_{i_1}) A(t_{i_2}) \cdots A(t_{i_n}), \tag{47} \]
so that $P_n(t) = A(12 \cdots n)$, then
\[ \begin{aligned} W_2 &= A(12) - \tfrac{1}{2}\, A(1) \cdot A(1), \\ W_3 &= A(123) - A(1) \cdot A(12) + \tfrac{1}{3}\, A(1) \cdot A(1) \cdot A(1), \end{aligned} \tag{48} \]
etc. Taking into account Fubini's theorem,
\[ \int_0^{\alpha} dy \int_y^{\alpha} f(x,y)\, dx = \int_0^{\alpha} dx \int_0^{x} f(x,y)\, dy, \tag{49} \]
it is clear that $A(1) \cdot A(1) = A(12) + A(21)$, and thus
\[ W_2 = A(12) - \tfrac{1}{2} \bigl( A(12) + A(21) \bigr) = \tfrac{1}{2} \bigl( A(12) - A(21) \bigr). \tag{50} \]
We can proceed analogously with the following products,
\[ \begin{aligned} A(1) \cdot A(12) &= A(123) + A(213) + A(312), \\ A(1) \cdot A(1) \cdot A(1) &= A(123) + A(132) + A(213) + A(231) + A(312) + A(321), \end{aligned} \tag{51} \]
so that
\[ W_3 = \tfrac{1}{3} A(123) + \tfrac{1}{3} A(132) - \tfrac{2}{3} A(213) + \tfrac{1}{3} A(231) - \tfrac{2}{3} A(312) + \tfrac{1}{3} A(321). \tag{52} \]
Carrying this argument to any order, we can expand all the products of integrals appearing in $W_n$. As a result, each product is replaced by the sum of all possible permutations of time ordering consistent with the time ordering within the factors of the product [24].
At this point, it is illustrative to consider some examples in detail. Thus, the product $A(1) \cdot A(12)$ gives the sum of all permutations of three elements such that the second index is less than the third one. With respect to $A(1) \cdot A(1) \cdot A(1)$, since there is no special ordering, all possible permutations have to be taken into account. Finally, for the product $P_1 P_3$ appearing in $W_4$, one has
\[ P_1 P_3 = A(1) \cdot A(123) = A(4123) + A(3124) + A(2134) + A(1234). \tag{53} \]
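This bookkeeping can also be checked numerically. In the sketch below (the matrices A0, A1, the interval, and the midpoint grid are arbitrary choices of ours), the discretized iterated integrals satisfy $A(1)\cdot A(1) = A(12) + A(21)$ identically, and $\tfrac12\bigl(A(12) - A(21)\bigr)$ reproduces the closed form $W_2 = -\tfrac{T^3}{12}[A_0, A_1]$ for $A(t) = A_0 + t A_1$:

```python
# Discrete check of the product rule for iterated integrals with A(t) = A0 + t*A1.
# On a midpoint grid with half weight on the diagonal, A(1)*A(1) = A(12) + A(21)
# holds exactly, mirroring the Fubini/interleaving argument in the text.

def mmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mscale(c, A):
    return [[c * x for x in row] for row in A]

def comm(A, B):
    return madd(mmul(A, B), mscale(-1.0, mmul(B, A)))

A0 = [[0.0, 0.4], [0.0, 0.0]]
A1 = [[0.0, 0.0], [0.3, 0.0]]
T, N = 1.0, 200
h = T / N
ts = [(i + 0.5) * h for i in range(N)]
As = [madd(A0, mscale(t, A1)) for t in ts]

Z = [[0.0, 0.0], [0.0, 0.0]]
A_1, A_12, A_21 = Z, Z, Z        # A(1), A(12), A(21)
for i in range(N):
    A_1 = madd(A_1, mscale(h, As[i]))
    for j in range(i + 1):
        w = h * h * (0.5 if j == i else 1.0)   # half weight on the diagonal
        A_12 = madd(A_12, mscale(w, mmul(As[i], As[j])))
        A_21 = madd(A_21, mscale(w, mmul(As[j], As[i])))

dist = lambda P, Q: max(abs(P[i][j] - Q[i][j]) for i in range(2) for j in range(2))
# (i) discrete shuffle identity, exact up to roundoff:
assert dist(mmul(A_1, A_1), madd(A_12, A_21)) < 1e-12
# (ii) W2 = (A(12) - A(21))/2 matches -(T^3/12)[A0, A1] up to quadrature error:
W2_quad = mscale(0.5, madd(A_12, mscale(-1.0, A_21)))
W2_exact = mscale(-T ** 3 / 12, comm(A0, A1))
assert dist(W2_quad, W2_exact) < 1e-3
```

The first assertion is exact by construction (every ordered pair of grid points lands in exactly one of the two discretized simplices), which is precisely the interleaving argument used in the text.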
Proceeding in a similar way, one can show that any product of iterated integrals can be expressed as a sum of iterated integrals. This property is, in fact, related to a much deeper characterization of the group of permutations [23]. If $\mathcal{SSym}$ denotes the graded $\mathbb{Q}$-vector space with the fundamental basis given by the disjoint union of the symmetric groups $S_n$ for all $n \ge 0$, then it is possible to define a product $*$ of permutations and a coproduct $\delta$ in $\mathcal{SSym}$, so that there is a one-to-one correspondence between iterated integrals and permutations,
\[ A(\sigma) \cdot A(\tau) = A(\sigma * \tau). \tag{54} \]
The product ∗ was introduced in [25], and, together with the coproduct δ , endows S S y m m with a structure of Hopf algebra [26], the so-called Malvenuto–Reutenauer Hopf algebra of permutations [27].
In sum, the general structure of the Wilcox expansion terms therefore reads as
\[ W_n = \sum_{k=1}^{n!} \omega_k^{(n)}\, A(\sigma_k), \tag{55} \]
where the summation extends over all the n! permutations $\sigma_k \in S_n$ of $\{1, 2, \ldots, n\}$. The weights $\omega_k^{(n)}$ are rational numbers that can be determined algorithmically for any n, although a general closed expression for them is not obvious, in contrast with the Magnus expansion, for which such a closed formula exists [28]. This can then be considered an open problem.
Moreover, if one is interested in obtaining a compact expression for $W_n$ in terms of independent nested commutators of $A(t)$, as is done in [1] up to n = 4, one can use the class of bases proposed by Dragt and Forest in [24] for the Lie algebra generated by the operators $A(t_1), A(t_2), \ldots, A(t_n)$. The same procedure as carried out in [23] for the Magnus expansion can be applied here, so that one obtains the general formula
\[ W_n(t) = \sum_{\tau_k} c_{\tau_k}^{(n)} \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n\; [A(t_{\tau(2)}), [A(t_{\tau(3)}), \ldots [A(t_{\tau(n)}), A(t_1)] \cdots ]]. \tag{56} \]
Here, the sum extends over the $(n-1)!$ permutations $\tau_k$ of the elements $\{2, 3, \ldots, n\}$, and $c_{\tau_k}^{(n)}$ is a rational number that depends on the particular permutation. For a given permutation, say $\tau_k$, its value coincides with the prefactor in Equation (55) of the particular term $A(\sigma_k)$ corresponding to the permutation $\sigma_k \in S_n$ such that
\[ \{\sigma_k(1), \sigma_k(2), \ldots, \sigma_k(n-1), \sigma_k(n)\} = \{\tau_k(2), \tau_k(3), \ldots, \tau_k(n), 1\}. \tag{57} \]
Thus, if we denote
\[ A[i_1 i_2 \cdots i_n] \equiv \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n\; [A(t_{i_1}), [A(t_{i_2}), \ldots [A(t_{i_{n-1}}), A(t_{i_n})] \cdots ]], \tag{58} \]
we obtain for the first terms
\[ \begin{aligned} W_2 ={}& -\tfrac{1}{2} A[21], \qquad W_3 = \tfrac{1}{3} A[231] + \tfrac{1}{3} A[321], \\ W_4 ={}& -\tfrac{1}{4} A[3241] - \tfrac{1}{4} A[4231] - \tfrac{1}{4} A[4321], \\ W_5 ={}& -\tfrac{2}{15} A[23451] - \tfrac{2}{15} A[23541] - \tfrac{2}{15} A[24351] - \tfrac{2}{15} A[24531] - \tfrac{2}{15} A[25341] - \tfrac{2}{15} A[25431] \\ & + \tfrac{1}{5} A[32451] + \tfrac{1}{5} A[32541] - \tfrac{2}{15} A[34251] - \tfrac{2}{15} A[34521] - \tfrac{2}{15} A[35241] - \tfrac{2}{15} A[35421] \\ & + \tfrac{1}{5} A[42351] + \tfrac{1}{5} A[42531] + \tfrac{1}{5} A[43251] + \tfrac{1}{5} A[43521] - \tfrac{2}{15} A[45231] - \tfrac{2}{15} A[45321] \\ & + \tfrac{1}{5} A[52341] + \tfrac{1}{5} A[52431] + \tfrac{1}{5} A[53241] + \tfrac{1}{5} A[53421] + \tfrac{1}{5} A[54231] + \tfrac{1}{5} A[54321]. \end{aligned} \tag{59} \]
In sum, the general structure of the Wilcox expansion terms via commutators uses the same weights as in Equation (55) and reads
\[ W_n = \mathop{{\sum}'}_{k=1}^{(n-1)!} \omega_k^{(n)}\, A[\sigma_k], \tag{60} \]
where the primed sum requires the rightmost element of the permutation $\sigma_k$ to be kept invariant (as in Equation (56)). This element may be chosen at will and, whatever that value, the permutations are built up with the remaining $n-1$ elements. Different, but equivalent, expressions for $W_n$ in terms of commutators are obtained depending on the value fixed at the rightmost position. We stress once again that, although only the first terms have been collected here for simplicity, the whole procedure is algorithmic in nature and has been implemented in a computer algebra system, thus allowing one to evaluate $W_n(t)$ explicitly for any n [29]. Note that $W_n(t)$ involves a linear combination of $(n-1)!$ iterated integrals.

3.3. Convergence of Wilcox Expansion

Recursion (33)–(38) is also very useful to provide estimates for the radius of convergence of the Wilcox expansion when Equation (1) is defined in a Banach algebra $\mathcal{A}$, i.e., an algebra that is also a complete normed linear space with a sub-multiplicative norm,
\[ \| X Y \| \le \| X \|\, \| Y \|. \tag{61} \]
If this is the case, then $\| \operatorname{ad}_X Y \| \le 2 \| X \|\, \| Y \|$ and, in general, $\| \operatorname{ad}_X^n Y \| \le 2^n \| X \|^n \| Y \|$.
As shown in [30,31], if the series
\[ M(\lambda; t) = \sum_{j=1}^{\infty} \lambda^j\, \| W_j(t) \| \tag{62} \]
has a certain radius of convergence $r_c$ for a given t, then, for $|\lambda| \le r < r_c$, the sequence of functions
\[ \Psi_n \equiv e^{\lambda W_1(t)}\, e^{\lambda^2 W_2(t)} \cdots e^{\lambda^n W_n(t)} \tag{63} \]
converges uniformly on any compact subset of the ball $B(0, r_c)$. Thus, studying the convergence of the Wilcox expansion reduces to the analysis of the series $M(\lambda; t)$ and, in particular, of its radius of convergence $r_c$.
Let $k(t)$ be a function such that $\| A(t) \| \le k(t)$, and denote $K(t) = \int_0^t k(s)\, ds$. Then, clearly,
\[ \| b_{0,1} \| \le k(t); \qquad b_{0,l} = 0, \quad l = 2, 3, \ldots \tag{64} \]
and
\[ \| W_1 \| \le K(t), \qquad \| W_2 \| \le \tfrac{1}{2} K^2(t). \tag{65} \]
In general, the following bounds can be established by induction
\[ \begin{aligned} \| g_{n,r}(t) \| &\le \beta_{n,r}\, K^{r-1}(t)\, k(t), \\ \| b_{n,l}(t) \| &\le \alpha_{n,l}\, K^{l-1}(t)\, k(t), \qquad n = 1, 2, \ldots; \;\; l > n, \\ \| W_n(t) \| &\le c_n\, K^n(t), \end{aligned} \tag{66} \]
where
\[ \begin{aligned} &\alpha_{0,1} = 1; \qquad \alpha_{0,l} = 0, \quad l > 1, \\ &\alpha_{n,l} = \sum_{j=0}^{[\frac{l-1}{n}]-1} \frac{1}{j!}\, 2^j c_n^j \bigl( \alpha_{n-1,l-nj} + \beta_{n,l-nj} \bigr), \qquad n \ge 1, \; l > n, \\ &\beta_{n,r} = \begin{cases} \dfrac{1}{l!}\, 2^{l-1}\, n\, c_n^l, & r = n\,l, \\[1mm] 0, & r \neq n\,l, \end{cases} \\ &c_1 = 1, \qquad c_2 = \tfrac{1}{2}, \qquad c_n = \tfrac{1}{n}\, \alpha_{n-2,n}, \quad n > 2. \end{aligned} \tag{67} \]
It is clear that if the series $\sum_{j \ge 1} c_j K^j(t)$ converges, so does $M(\lambda = 1; t)$. Therefore, a sufficient condition for convergence of the Wilcox expansion is obtained by imposing
\[ \lim_{n \to \infty} \frac{c_{n+1} K^{n+1}(t)}{c_n K^n(t)} = K(t) \lim_{n \to \infty} D_n < 1, \tag{68} \]
where
\[ D_n \equiv \frac{n}{n+1}\, \frac{\alpha_{n-1,n+1}}{\alpha_{n-2,n}}. \tag{69} \]
We have computed this quantity up to n = 2000 and then extrapolated to the limit $1/n \to 0$. This yields $D_n \to D = 1.51868$, as seen in Figure 1, and thus the convergence of the Wilcox expansion is ensured at least for values of the time t such that
\[ \int_0^t \| A(s) \|\, ds \equiv K(t) < \xi_W = \frac{1}{D} \approx 0.65846. \tag{70} \]
This type of extrapolation has also been used to estimate the convergence radius of the Magnus expansion [32]. Although the estimate is not completely analytic, the same type of computation has provided accurate results in other settings. In particular, for the Magnus expansion, such an estimate fully agrees with a purely theoretically deduced bound [8,10,32].
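The bound recursion above is straightforward to implement. The following sketch uses exact rational arithmetic; the truncation order N = 40 is an arbitrary small choice of ours, whereas the paper goes up to n = 2000 before extrapolating in 1/n, so finite-n ratios only slowly approach the quoted limit:

```python
# Implements the bound recursion for alpha_{n,l}, beta_{n,r}, c_n and evaluates
# the ratio D_n = (n/(n+1)) * alpha_{n-1,n+1} / alpha_{n-2,n}.  N is an arbitrary
# truncation; the extrapolated limit reported in the text is D ~ 1.51868.
from fractions import Fraction
from math import factorial

N = 40
L = N + 2                                  # largest l index needed
alpha = {(0, 1): Fraction(1)}              # alpha_{0,1} = 1; other alpha_{0,l} = 0
c = {1: Fraction(1), 2: Fraction(1, 2)}

def beta(n, r):
    if r % n:                              # nonzero only when r = n*l, l >= 1
        return Fraction(0)
    l = r // n
    return Fraction(2 ** (l - 1) * n, factorial(l)) * c[n] ** l

for n in range(1, N):
    if n > 2:
        c[n] = alpha[(n - 2, n)] / n       # c_n = alpha_{n-2,n} / n
    for l in range(n + 1, L):
        s = Fraction(0)
        for j in range((l - 1) // n):      # j = 0, ..., [(l-1)/n] - 1
            w = Fraction(2 ** j, factorial(j)) * c[n] ** j
            s += w * (alpha.get((n - 1, l - n * j), Fraction(0)) + beta(n, l - n * j))
        alpha[(n, l)] = s

D = {n: Fraction(n, n + 1) * alpha[(n - 1, n + 1)] / alpha[(n - 2, n)]
     for n in range(3, N)}
assert 1.0 < float(D[N - 1]) < 1.6
```

With these conventions, $D_3 = 225/192 \approx 1.17$, and the ratios grow slowly toward the extrapolated value.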

4. Fer–Like Expansions

4.1. Standard Fer Expansion

In forming the Wilcox expansion, the first transformation is chosen in such a way that $\dot{\Omega}_1 = B_0(t)$, whereas $\dot{\Omega}_n \neq B_{n-1}$ for $n \ge 2$. It makes sense, then, to analyze what happens if we impose this condition at each step of the procedure,
\[ \dot{\Omega}_n(t) = B_{n-1}(t) \qquad \text{or, equivalently,} \qquad \Omega_n(t) = \int_0^t B_{n-1}(u)\, du, \tag{71} \]
for all $n \ge 1$. In that way, expression (22) for $B_n$ clearly simplifies to
\[ B_n(t) = \sum_{k \ge 1} (-1)^k\, \frac{k}{(k+1)!}\, \operatorname{ad}_{\Omega_n}^{k} B_{n-1}, \qquad n \ge 1. \tag{72} \]
In doing so, we recover precisely the Fer expansion; see [10,11]. Again, after n transformations, we get
\[ x(t) = e^{\Omega_1(t)}\, e^{\Omega_2(t)} \cdots e^{\Omega_n(t)}\, X_n(t), \tag{73} \]
so that, if we impose $X_n(t) = x_0$, we are left with another approximation to the exact solution. Notice that this approximation clearly differs from the previous Wilcox expansion for $n \ge 2$, as can be seen by analyzing the dependence on $\lambda$ of each transformation. Whereas $\Omega_n = \lambda^n W_n(t)$ for the Wilcox expansion, now $\Omega_n$ contains terms of order $\lambda^{2^{n-1}}$ and higher. This can be easily shown by induction: $\Omega_1$ is proportional to $\lambda$, so that $B_1$, according to Equation (72), contains terms of order $\lambda^2$ (coming from $[\Omega_1, B_0]$) and higher. In general, $B_{n-2}$ and $\Omega_{n-1}$ contain terms of order $\lambda^{2^{n-2}}$ and higher, so the first term in the series (72) for $B_{n-1}$, i.e., the commutator $[\Omega_{n-1}, B_{n-2}]$, produces a term of order $\lambda^{2^{n-2}} \cdot \lambda^{2^{n-2}} = \lambda^{2^{n-1}}$ in $\Omega_n$.
Alternatively, expressing Equation (72) as
\[ B_n(t) = \int_0^1 dx \int_0^x du\; e^{-(1-u)\Omega_n}\, [B_{n-1}, \Omega_n]\, e^{(1-u)\Omega_n} \tag{74} \]
and taking norms, it is then possible to show that the Fer expansion converges for values of t such that [10]
\[ \int_0^t \| A(s) \|\, ds < 0.8604065. \tag{75} \]

4.2. Intermediate Fer-Like Expansions

Notice that the $\lambda$-power series of $\Omega_n$ in the Fer expansion contains infinitely many terms, starting with $\lambda^{2^{n-1}}$, but the corresponding truncated factorization obtained from Equation (73) by taking $X_n(t) = x_0$ is correct only up to terms of order $\lambda^{2^n - 1}$. One might then consider yet another sequence of transformations so that each $\Omega_k$ contains only the relevant terms leading to a correct approximation up to this order. Of course, both factorizations would be different, but nevertheless they would produce the correct power series up to order $\lambda^{2^n - 1}$. The corresponding factorization can be properly called a modified Fer expansion.
Our starting point is, once again, Equation (22). Clearly, the first transformation is the same as in Fer (and Wilcox), i.e.,
\[ \dot{\Omega}_1 = B_0 = \lambda A(t), \tag{76} \]
and thus
\[ B_1 = \sum_{k \ge 1} (-1)^k\, \frac{k}{(k+1)!}\, \operatorname{ad}_{\Omega_1}^{k} B_0 = O(\lambda^2), \tag{77} \]
where the rightmost term points out the lowest $\lambda$ contribution in the sum.
Next, to reproduce the same dependence on $\lambda$ as the Fer expansion, we need to enforce that $B_2 = O(\lambda^4)$, and the question is how to choose $\Omega_2$ guaranteeing this feature. An analysis of Equation (22) with n = 2 reveals that this is achieved by taking $\dot{\Omega}_2$ as the sum of the terms in $B_1$ in Equation (77) contributing to $\lambda^2$ and $\lambda^3$, i.e.,
\[ \dot{\Omega}_2 = -\tfrac{1}{2}\, \operatorname{ad}_{\Omega_1} B_0 + \tfrac{1}{3}\, \operatorname{ad}_{\Omega_1}^{2} B_0, \tag{78} \]
since the next term appearing in the expression of $B_2$ involves the computation of $\operatorname{ad}_{\Omega_1}^{3} B_0 = O(\lambda^4)$. Thus,
\[ B_2 = \sum_{k \ge 1} \frac{(-1)^k}{(k+1)!} \bigl( (k+1)\, \operatorname{ad}_{\Omega_2}^{k} B_1 - \operatorname{ad}_{\Omega_2}^{k} \dot{\Omega}_2 \bigr) = O(\lambda^4). \tag{79} \]
Likewise, $\Omega_3$ is to be designed so that
\[ B_3 = \sum_{k \ge 0} \frac{(-1)^k}{(k+1)!} \bigl( (k+1)\, \operatorname{ad}_{\Omega_3}^{k} B_2 - \operatorname{ad}_{\Omega_3}^{k} \dot{\Omega}_3 \bigr) = O(\lambda^8), \tag{80} \]
and this is guaranteed by taking $\Omega_3$ as the sum of all the terms in $B_2$ contributing to powers from $\lambda^4$ up to $\lambda^7$. From Equation (79) it is clear that
\[ B_2 = \underbrace{-\tfrac{1}{2} \bigl( 2\, \operatorname{ad}_{\Omega_2} B_1 - \operatorname{ad}_{\Omega_2} \dot{\Omega}_2 \bigr)}_{O(\lambda^4)} + \underbrace{\tfrac{1}{3!} \bigl( 3\, \operatorname{ad}_{\Omega_2}^{2} B_1 - \operatorname{ad}_{\Omega_2}^{2} \dot{\Omega}_2 \bigr)}_{O(\lambda^6)} + O(\lambda^8), \tag{81} \]
where only the relevant terms in the expansion of $B_1$ have to be taken into account. In this way, we can take
\[ \dot{\Omega}_3 = -\operatorname{ad}_{\Omega_2} B_1^{[4]} + \tfrac{1}{2}\, \operatorname{ad}_{\Omega_2} \dot{\Omega}_2 + \tfrac{1}{2}\, \operatorname{ad}_{\Omega_2}^{2} B_1^{[2]} - \tfrac{1}{6}\, \operatorname{ad}_{\Omega_2}^{2} \dot{\Omega}_2, \tag{82} \]
with
\[ B_1^{[j]} \equiv \sum_{k=1}^{j} (-1)^k\, \frac{k}{(k+1)!}\, \operatorname{ad}_{\Omega_1}^{k} B_0. \tag{83} \]
Notice that, since the second term of $\Omega_2$ in Equation (78) is $O(\lambda^3)$, the expression (82) does contain some contributions in $\lambda^8$ and $\lambda^9$ that in principle could be removed. We prefer, however, to keep them in order to have a more compact expression.
For this modified Fer expansion, $\Omega_n$ is chosen in general so that $\dot{\Omega}_n$ is precisely the sum of all the terms of $B_{n-1}$ of powers from $\lambda^{2^{n-1}}$ up to $\lambda^{2^n - 1}$, after appropriately truncating the series of $B_{n-2}, \ldots, B_1$.
Other possibilities for choosing $B_k$ at the successive stages clearly exist and, according to the particular choice, different intermediate Fer-like expansions result. In practice, one of those combinations of commutators could be more easily computed for a specific problem.

5. Applications

5.1. Wilcox Expansion as the Continuous Analogue of Zassenhaus Formula

The Zassenhaus formula may be considered as the dual of the Baker–Campbell–Hausdorff (BCH) formula [33] in the sense that it relates the exponential of the sum of two non-commuting operators X and Y with an infinite product of exponentials of these operators and their nested commutators. More specifically,
\[ e^{X+Y} = e^{X}\, e^{Y} \prod_{n=2}^{\infty} e^{C_n(X,Y)} = e^{X}\, e^{Y}\, e^{C_2(X,Y)}\, e^{C_3(X,Y)} \cdots e^{C_k(X,Y)} \cdots, \tag{84} \]
where $C_k(X,Y)$ is a homogeneous Lie polynomial in X and Y of degree k [1,7,21,34,35]. A very efficient procedure to generate all the terms in Equation (84) is presented in [21]; it allows one to construct $C_n$ up to a prescribed value of n directly, in terms of the minimum number of independent commutators involving n operators X and Y.
In view of the formal similarity between Equations (39) and (84), the Wilcox expansion has also been described as the "continuous analogue of the Zassenhaus formula" [10], just as the Magnus expansion is sometimes called the continuous version of the BCH formula. To substantiate this claim, we next reproduce the Zassenhaus Formula (84) by applying the procedure of Section 3 to a particular initial value problem, namely, the abstract equation
\[ \dot{U} = \lambda (X + Y)\, U, \qquad U(0) = I, \tag{85} \]
where X and Y are two non-commuting constant operators.
The formal solution is, of course, $U(t) = e^{t\lambda(X+Y)}$, but we can also solve Equation (85) by first integrating $\dot{U}_0 = \lambda X U_0$ and factorizing $U(t)$ as $U(t) = U_0 U_I = e^{t\lambda X} U_I$, where $U_I$ obeys the equation
\[ \dot{U}_I = \lambda\, e^{-t\lambda X}\, Y\, e^{t\lambda X}\, U_I \equiv A_\lambda(t)\, U_I, \tag{86} \]
and finally applying to Equation (86) the sequence of transformations leading to the Wilcox expansion. Notice, however, that now the coefficient matrix $A_\lambda(t)$ is an infinite series in $\lambda$,
\[ A_\lambda(t) = \lambda\, e^{-t\lambda \operatorname{ad}_X} Y = \sum_{j \ge 0} \frac{(-1)^j}{j!}\, t^j \lambda^{j+1}\, \operatorname{ad}_X^{j} Y, \tag{87} \]
so that, when applying the recursion (33)–(38), $\dot{\Omega}_1$ is no longer $B_0 \equiv A_\lambda(t)$, but the term in $A_\lambda(t)$ which is proportional to $\lambda$. In other words,
\[ \dot{W}_1 = Y, \qquad \text{and thus} \qquad W_1(t) = t\, Y. \tag{88} \]
After some computation, one arrives at
\[ b_{0,l} = \frac{(-1)^{l-1}}{(l-1)!}\, t^{l-1}\, \operatorname{ad}_X^{l-1} Y. \tag{89} \]
Since $\dot{W}_1 = b_{0,1} = Y$, then clearly $g_{1,l} = 0$ for all $l > 1$ and
\[ b_{1,l} = \sum_{k=0}^{l-2} \frac{(-1)^k}{k!}\, \operatorname{ad}_{W_1}^{k}\, b_{0,l-k}. \tag{90} \]
By imposing $b_{1,2} = g_{2,2} = \dot{W}_2$, we get
\[ \dot{W}_2 = -t\, \operatorname{ad}_X Y, \qquad \text{so that} \qquad W_2(t) = -\tfrac{1}{2}\, t^2\, \operatorname{ad}_X Y. \tag{91} \]
In general, $g_{n,n} = \dot{W}_n$, $g_{n,r} = 0$ when $r \neq n$, and the recurrence (33)–(38) now reads
\[ \begin{aligned} \dot{W}_1 &= b_{0,1}, \\ b_{n,l} &= \sum_{k=0}^{[\frac{l-1}{n}]-1} \frac{(-1)^k}{k!}\, \operatorname{ad}_{W_n}^{k}\, b_{n-1,l-nk}, \qquad n = 1, 2, \ldots \\ \dot{W}_n &= b_{n-2,n}, \qquad n \ge 2, \end{aligned} \tag{92} \]
together with Equation (89). Working out this recursion, we obtain for the first terms
\[ \begin{aligned} W_3(t) &= \tfrac{1}{6}\, t^3\, \operatorname{ad}_X^{2} Y + \tfrac{1}{3}\, t^3\, \operatorname{ad}_Y \operatorname{ad}_X Y, \\ W_4(t) &= -\tfrac{1}{24}\, t^4\, \operatorname{ad}_X^{3} Y - \tfrac{1}{8}\, t^4\, \operatorname{ad}_Y \operatorname{ad}_X^{2} Y - \tfrac{1}{8}\, t^4\, \operatorname{ad}_Y^{2} \operatorname{ad}_X Y, \\ W_5(t) &= \tfrac{1}{120}\, t^5\, \operatorname{ad}_X^{4} Y + \tfrac{1}{30}\, t^5\, \operatorname{ad}_Y \operatorname{ad}_X^{3} Y + \tfrac{1}{20}\, t^5\, \operatorname{ad}_Y^{2} \operatorname{ad}_X^{2} Y + \tfrac{1}{30}\, t^5\, \operatorname{ad}_Y^{3} \operatorname{ad}_X Y \\ &\quad + \tfrac{1}{20}\, t^5\, \operatorname{ad}_{[X,Y]} \operatorname{ad}_X^{2} Y + \tfrac{1}{10}\, t^5\, \operatorname{ad}_{[X,Y]} \operatorname{ad}_Y \operatorname{ad}_X Y. \end{aligned} \tag{93} \]
One can see that this procedure agrees with the algorithm presented in [21] for every term $W_n$, $n \ge 1$, in
\[ U(t) = e^{t\lambda(X+Y)} = e^{t\lambda X}\, e^{\lambda W_1(t)}\, e^{\lambda^2 W_2(t)}\, e^{\lambda^3 W_3(t)} \cdots. \tag{94} \]
The Zassenhaus formula is recovered by taking t = 1, i.e., $C_n(X,Y) = W_n(t=1)$.
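As a hedged numerical illustration (the matrices X and Y below are arbitrary small examples of ours), one can check that each successive Zassenhaus factor reduces the residual of the product approximation to $e^{X+Y}$:

```python
# Check the first Zassenhaus terms derived above: C2 = -1/2 [X,Y] and
# C3 = 1/6 [X,[X,Y]] + 1/3 [Y,[X,Y]].  Each extra factor should shrink the
# residual of e^X e^Y e^{C2} e^{C3} against e^{X+Y}.  X, Y are arbitrary.

def mmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mscale(c, A):
    return [[c * x for x in row] for row in A]

def mexp(A, terms=40):
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in R]
    for k in range(1, terms):
        T = mscale(1.0 / k, mmul(T, A))
        R = madd(R, T)
    return R

def comm(A, B):
    return madd(mmul(A, B), mscale(-1.0, mmul(B, A)))

X = [[0.1, 0.2], [0.0, -0.1]]
Y = [[0.0, 0.0], [0.15, 0.05]]

target = mexp(madd(X, Y))
C2 = mscale(-0.5, comm(X, Y))
C3 = madd(mscale(1.0 / 6, comm(X, comm(X, Y))),
          mscale(1.0 / 3, comm(Y, comm(X, Y))))

dist = lambda P, Q: max(abs(P[i][j] - Q[i][j]) for i in range(2) for j in range(2))
prod = mmul(mexp(X), mexp(Y))
err1 = dist(prod, target)                 # e^X e^Y
prod = mmul(prod, mexp(C2))
err2 = dist(prod, target)                 # ... including e^{C2}
prod = mmul(prod, mexp(C3))
err3 = dist(prod, target)                 # ... including e^{C3}
assert err3 < err2 < err1 and err3 < 1e-2
```

The residuals shrink roughly like the size of the first neglected term $C_{n+1}$, as expected from the grading of the Lie polynomials.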

5.2. Expanding the Exponential exp(A + εB)

Bellman, in his classic book [36], states that "one of the great challenges of modern physics is that of obtaining useful approximate relations for $e^{(A + \varepsilon B)t}$ in the case where $AB \neq BA$". One such approximation was proposed, and left as an exercise, in ([12], p. 175). Assuming that $e^{A + \varepsilon B}$ can be written in the form
\[ e^{A + \varepsilon B} = e^{A}\, e^{\varepsilon C_1}\, e^{\varepsilon^2 C_2}\, e^{\varepsilon^3 C_3} \cdots, \tag{95} \]
Bellman proposed to determine the first three terms $C_1$, $C_2$, $C_3$, and pointed out that, contrary to other expansions, the product expansion (95) is unitary if A and B are skew-Hermitian.
It turns out that the Wilcox expansion can be used to provide explicit expressions of $C_n$ for any two indeterminates A and B, as we will see in the sequel.
Before proceeding, it is important to remark that this problem differs from the Zassenhaus formula in the sense that the expansion parameter affects only one of the operators in the exponential. The solution goes as follows. We write
$$ U(t) \equiv e^{t(A+\varepsilon B)} = e^{tA}\, V, $$
and solve the differential equation satisfied by V
$$ \frac{dV}{dt} = \varepsilon\, e^{-tA} B\, e^{tA}\, V \equiv \varepsilon\, \tilde{B}(t)\, V, \qquad V(0) = I, $$
with the Wilcox expansion, so that
$$ V(t) = e^{\varepsilon W_1(t)}\, e^{\varepsilon^2 W_2(t)}\, e^{\varepsilon^3 W_3(t)} \cdots \qquad (98) $$
The operators C i in Equation (95) are then obtained by taking t = 1 .
The successive terms W j ( t ) in Equation (98) can be determined either by the recursion (33)–(38) or the explicit expression (56). For the first term, we get
$$ W_1(t) = \int_0^t \tilde{B}(s)\, ds = \int_0^t e^{-s\,\mathrm{ad}_A}\, B\, ds = \sum_{k=0}^{\infty} \frac{(-1)^k\, t^{k+1}}{(k+1)!}\,\mathrm{ad}_A^k B, $$
and so the expression for C 1 is given by
$$ C_1 = \frac{1 - e^{-\mathrm{ad}_A}}{\mathrm{ad}_A}\, B = B - \frac{1}{2}[A,B] + \frac{1}{3!}[A,[A,B]] - \frac{1}{4!}[A,[A,[A,B]]] + \cdots $$
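The series for $C_1$ is easy to verify numerically. In this sketch (ours; it assumes NumPy, truncates the $C_1$ series at 20 nested commutators, and uses a Taylor-series matrix exponential), the one-factor product $e^{A} e^{\varepsilon C_1}$ is compared with $e^{A+\varepsilon B}$; the residual should scale as $\varepsilon^2$, so halving $\varepsilon$ should divide it by roughly four:

```python
import math
import numpy as np

def expm(M, terms=40):
    """Taylor-series matrix exponential, adequate for moderate norms."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(1)
A = 0.5 * rng.standard_normal((3, 3))
B = 0.5 * rng.standard_normal((3, 3))

# C1 = sum_{k>=0} (-1)^k / (k+1)!  ad_A^k B   (truncated)
C1 = np.zeros((3, 3))
term = B.copy()
for k in range(20):
    C1 += (-1) ** k / math.factorial(k + 1) * term
    term = A @ term - term @ A   # next nested commutator ad_A^{k+1} B

errs = []
for eps in (0.1, 0.05):
    exact = expm(A + eps * B)
    errs.append(np.linalg.norm(exact - expm(A) @ expm(eps * C1)))

# O(eps^2) residual: the error ratio for eps -> eps/2 should be close to 4
ratio = errs[0] / errs[1]
assert errs[1] < errs[0]
assert 2.5 < ratio < 6.0
```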
Although it is possible in principle to construct explicit expressions for W 2 , W 3 , etc., it is perhaps more convenient to apply the recursion (33)–(38) for each particular application.

5.3. Illustrative Examples

We next particularize the Bellman problem (95) to matrices for which closed expressions for $C_1, C_2, \ldots$ can be obtained. The idea is to illustrate the behaviour of the product expansion by computing explicitly high-order terms with matrices in the su(2) and so(3) Lie algebras.

5.3.1. Matrices A and B in su(2)

In the first example, we choose $A = i a \sigma_z$ and $B = i \sigma_x$, where $i = \sqrt{-1}$, $a$ is a real parameter and
$$ \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $$
are the Pauli matrices. This instance is borrowed from Quantum Mechanics, where $\exp[i(a\sigma_z + \varepsilon\sigma_x)]$ is a matrix that transforms spin-$\tfrac{1}{2}$ wave functions in a Hilbert space.
Using the scalar product notation $\mathbf{v}\cdot\boldsymbol{\sigma} = v_x\sigma_x + v_y\sigma_y + v_z\sigma_z$ for a linear combination of Pauli matrices, the matrix exponential reads
$$ \exp(i\,\mathbf{v}\cdot\boldsymbol{\sigma}) = \cos v\; I + i\,\frac{\sin v}{v}\,\mathbf{v}\cdot\boldsymbol{\sigma}, \qquad v \equiv |\mathbf{v}|. $$
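This closed form follows from $(\mathbf{v}\cdot\boldsymbol{\sigma})^2 = v^2 I$ and can be checked directly. The snippet below (ours, assuming NumPy; the vector $\mathbf{v}$ is an arbitrary test value) compares the formula with a Taylor-series matrix exponential:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expm(M, terms=40):
    """Taylor-series matrix exponential for small 2x2 matrices."""
    out, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

v = np.array([0.3, -0.7, 0.5])          # arbitrary test vector
vdots = v[0] * sx + v[1] * sy + v[2] * sz
vnorm = np.linalg.norm(v)

closed = np.cos(vnorm) * np.eye(2) + 1j * np.sin(vnorm) / vnorm * vdots
assert np.allclose(expm(1j * vdots), closed)
```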
In the sequel, we work out the expansion
$$ e^{i(a\sigma_z + \varepsilon\sigma_x)} = e^{i a \sigma_z}\, e^{i\varepsilon W_1}\, e^{i\varepsilon^2 W_2}\, e^{i\varepsilon^3 W_3} \cdots \qquad (103) $$
up to order eleven in $\varepsilon$ and analyze the increasing accuracy of the product expansion as more terms are included. The lhs of Equation (103) may be thought of as a transformation involving $\sigma_x$ and $\sigma_z$. In turn, the rhs is a pure $\sigma_z$ transformation, $\exp(ia\sigma_z)$, followed by an infinite succession of transformations $\exp(i\varepsilon^k W_k)$, whose effect should decrease with $k$. The truncated product expansion is expected to be accurate as long as $\varepsilon \ll a$.
In Table 1 we write down the first five contributions for a generic t (expressions for k > 5 are too involved to be collected here). Wilcox–Bellman’s Formula (103) corresponds then to t = 1 . All the terms have been obtained with the recurrences of Section 3, starting from
$$ \dot{W}_1(t) = e^{-iat\sigma_z}\, \sigma_x\, e^{iat\sigma_z} = C\,\sigma_x + S\,\sigma_y, $$
where $C \equiv \cos(2at)$ and $S \equiv \sin(2at)$.
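The starting point of the recurrence can be reproduced with a short computation (ours; NumPy assumed, and the parameter values are arbitrary test choices). It checks the conjugation identity for $\dot{W}_1$ and recovers the $k = 1$ entry of Table 1 by trapezoidal quadrature of $\dot{W}_1$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

a, t = 1.3, 0.7   # arbitrary test values

def conj_sx(s):
    # e^{-i a s sigma_z} sigma_x e^{+i a s sigma_z}; sigma_z is diagonal,
    # so its exponential is diag(e^{-ias}, e^{+ias})
    U = np.diag([np.exp(-1j * a * s), np.exp(1j * a * s)])
    return U @ sx @ U.conj().T

C, S = np.cos(2 * a * t), np.sin(2 * a * t)
assert np.allclose(conj_sx(t), C * sx + S * sy)

# W_1(t) = integral of the conjugated matrix, via the trapezoidal rule
ss = np.linspace(0.0, t, 2001)
vals = [conj_sx(s) for s in ss]
W1 = sum(0.5 * (vals[i] + vals[i + 1]) for i in range(len(ss) - 1)) * (ss[1] - ss[0])

W1_closed = (S * sx + (1 - C) * sy) / (2 * a)   # Table 1, k = 1
assert np.allclose(W1, W1_closed, atol=1e-6)
```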
The formulas in Table 1 show that ε / a may be considered as an effective expansion parameter. In Figure 2, we illustrate, for a = 1 , the accuracy of the Wilcox–Bellman product expansion in the example at hand as a function of ε . We plot the squared modulus of the non-diagonal matrix element, say | U 1 , 2 | 2 , of Equation (103) for every analytic approximation up to order eleven in ε , as well as the exact result. Even orders do not contribute in this test, because W 2 k is always proportional to σ z , and therefore exp ( i ε 2 k W 2 k ) is a diagonal matrix.
As regards convergence of the product expansion, the lower bound of Equation (70) leads to
$$ \int_0^1 \left\| e^{-iat\sigma_z}\, \varepsilon\sigma_x\, e^{iat\sigma_z} \right\| dt = \varepsilon < 0.658. $$
In turn, the behaviour of the curves in Figure 2 indicates that the convergence of the product expansion extends well beyond that lower bound for this particular example.
Finally, Figure 3 shows the logarithm of the absolute error of the approximations given by the curves in Figure 2.

5.3.2. Matrices A and B in so(3)

The second example refers to the matrix that describes a rotation in three dimensions defined by the vector $\mathbf{a} = \alpha\,\hat{a}$. Here, $\alpha$ stands for the rotation angle around the axis given by the unit vector $\hat{a}$. A generic 3D rotation matrix can be written as $\exp(\mathbf{a}\cdot\boldsymbol{\rho})$, where the components of $\boldsymbol{\rho}$ are the three fundamental rotation matrices
$$ \rho_x = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad \rho_y = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \qquad \rho_z = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. $$
We study the particular case $\mathbf{a}\cdot\boldsymbol{\rho} = \alpha\left(\cos\theta\,\rho_z + \sin\theta\,\rho_x\right)$, and compare the rotation of angle $\alpha$ around the unit vector $(\sin\theta, 0, \cos\theta)$,
$$ e^{\alpha(\cos\theta\,\rho_z + \sin\theta\,\rho_x)}, \qquad (108) $$
with the sequence of transformations
$$ e^{\alpha\cos\theta\,\rho_z}\, e^{\alpha\sin\theta\, W_1}\, e^{(\alpha\sin\theta)^2 W_2}\, e^{(\alpha\sin\theta)^3 W_3} \cdots $$
In other words, the question we address is how the pure z-axis rotation in Equation (108), exp ( α cos θ ρ z ) , has to be corrected by an infinite composition of further rotations to reproduce the one defined by a · ρ . When α sin θ is small enough, the approach is expected to converge, since the expansion convergence lower bound reads, in this case, | α sin θ | < 0.658 .
Here, the accuracy of the product expansion will depend on both the rotation angle α and the relative orientation of the rotation axis, determined by the angle θ . This is illustrated in Figure 4 and Figure 5 for the first five orders of approximation, with α = π / 4 , π / 2 , 3 π / 4 and π .
In order to test the expansion, we have computed the matrix trace of the successive approximants and compared them with the exact result
$$ \mathrm{tr}\, \exp\!\left[\alpha\left(\cos\theta\,\rho_z + \sin\theta\,\rho_x\right)\right] = 1 + 2\cos\alpha. $$
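This is just the standard trace of a rotation matrix, since the exponent generates a rotation of angle $\alpha$ about a unit axis. A quick numerical confirmation (ours; NumPy assumed, with arbitrary test angles and a Taylor-series matrix exponential):

```python
import numpy as np

rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
rx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)

def expm(M, terms=40):
    """Taylor-series matrix exponential, adequate for moderate norms."""
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

for alpha in (0.5, np.pi / 2, 2.0):
    for theta in (0.3, 1.0, 2.2):
        R = expm(alpha * (np.cos(theta) * rz + np.sin(theta) * rx))
        # the trace depends only on the rotation angle, not on the axis
        assert np.isclose(np.trace(R), 1 + 2 * np.cos(alpha))
```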
The first two approximants are simple enough to be written down:
$$ \mathrm{tr}\left(e^{\alpha\cos\theta\,\rho_z}\, e^{\alpha\sin\theta\,W_1}\right) = \cos(\alpha\cos\theta) + \left[1+\cos(\alpha\cos\theta)\right]\cos\varphi_1, $$
$$ \mathrm{tr}\left(e^{\alpha\cos\theta\,\rho_z}\, e^{\alpha\sin\theta\,W_1}\, e^{(\alpha\sin\theta)^2 W_2}\right) = \cos\varphi_1 + \left(1+\cos\varphi_1\right)\cos\!\left(\alpha\cos\theta - \tfrac{1}{2}\tan^2\theta\left(\sin(\alpha\cos\theta) - \alpha\cos\theta\right)\right), $$
where $\varphi_1 \equiv 2\tan\theta\,\sin\tfrac{\alpha\cos\theta}{2}$.
Interestingly, the third approximation order is worse than the second one in all four cases. In the case of α = π , the fourth order is better than the fifth one.
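The $\theta$-dependence of the accuracy can also be probed without any closed forms: compute $W_1 = \int_0^1 e^{-t\alpha\cos\theta\,\rho_z}\,\rho_x\, e^{t\alpha\cos\theta\,\rho_z}\,dt$ by quadrature and compare the trace of the first approximant with the exact value $1 + 2\cos\alpha$. A sketch of this check (ours; NumPy assumed, trapezoidal rule with a few hundred nodes, Taylor-series matrix exponential):

```python
import numpy as np

rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
rx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)

def expm(M, terms=40):
    """Taylor-series matrix exponential, adequate for moderate norms."""
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def first_order_trace_error(alpha, theta, n=400):
    c = alpha * np.cos(theta)
    ts = np.linspace(0.0, 1.0, n + 1)
    # W_1 = integral of the conjugated generator (trapezoidal rule)
    f = [expm(-t * c * rz) @ rx @ expm(t * c * rz) for t in ts]
    W1 = sum(0.5 * (f[i] + f[i + 1]) for i in range(n)) / n
    approx = expm(c * rz) @ expm(alpha * np.sin(theta) * W1)
    return abs(np.trace(approx) - (1 + 2 * np.cos(alpha)))

# Accuracy degrades as the effective parameter alpha*sin(theta) grows
assert first_order_trace_error(np.pi / 2, 0.2) < first_order_trace_error(np.pi / 2, 1.0)
```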

6. Conclusions

When a linear system of differential equations, defined by the coefficient matrix $\lambda A(t)$, is transformed under $\exp\!\big(\lambda \int_0^t A(s)\,ds\big)$, the coefficient matrix in the new representation becomes an infinite power series in $\lambda$, say $\tilde{A}(t) = \sum_{k \geq 1} \lambda^k a_k(t)$. That is the first step of all matrix exponential methods to approximate the time-evolution operator. In the framework that we have introduced, it is the first move in a sequence of exponential transformations that change the linear system from one representation to another, with the goal that the dynamics becomes less and less relevant. Choosing the transformation $\exp\!\big(\lambda \int_0^t \tilde{A}(x)\,dx\big)$ as the second move and iterating this procedure afterwards yields the Fer expansion. Choosing instead the transformation $\exp\!\big(\lambda \int_0^t a_1(x)\,dx\big)$, i.e., the one built from the leading term of the new coefficient matrix, yields the Wilcox expansion. The framework also allows for intermediate expansions, taking $\exp\!\big(\sum_{k=1}^n \lambda^k \int_0^t a_k(x)\,dx\big)$ as initiator, as well as jumping between schemes, in accordance with the particular requirements of the problem at hand.
We have seen that the theory of linear transformations (or changes of picture, in the language of Quantum Mechanics) provides a unified framework to deal with many different exponential perturbative expansions. Whereas a single linear transformation reducing the dynamics to the trivial Equation (4), or to a system with a constant coefficient matrix, renders the Magnus [9] or the Floquet–Magnus [37] expansion, respectively, a sequence of such transformations with different choices of the new matrices leads to the Wilcox and Fer factorizations. From this perspective, other factorizations are possible depending on the particular problem at hand: one only has to select the successive transformations appropriately.
In the case of the Wilcox expansion, we have provided an efficient recursive procedure to compute its terms. In addition, we have developed a method to build up an explicit expression for any $W_n$ in terms of commutators. This is possible by using tools similar to those employed for the Magnus expansion, namely by relating products of iterated integrals with the structure of the Hopf algebra of permutations, and by using special bases of nested commutators. A sufficient condition for the convergence of the expansion has also been obtained.
We have presented some examples of application of the results on the Wilcox expansion. Firstly, we have shown how to obtain the Zassenhaus formula from the Wilcox expansion, which, in turn, may be interpreted as its continuous analogue. Secondly, we have pointed out that the Wilcox expansion solves the problem of expanding the exponential $\exp(A + \varepsilon B)$ when $A$ and $B$ are non-commuting operators; we refer to this as the Wilcox–Bellman expansion. Two practical cases have been analyzed in this respect up to high order. Interestingly, in one of them the convergence seems not to be uniform. For convenience, the interested reader can find in [29] a Mathematica code generating general explicit expressions and recurrences for the Wilcox expansion.
While a full assessment of the Wilcox expansion in comparison with the Fer expansion is not the main purpose of this work, we can still mention some of their most distinctive features. Both types of expansion construct the solution of Equation (1) as an infinite exponential factorization but, in Wilcox, the exponent of each factor is proportional to successive powers of the expansion parameter $\lambda$, whereas, in Fer, each exponent contains an infinite sum of powers of $\lambda$. This means that, when truncated after a given number of transformations, say $n$, the Wilcox expansion differs from the exact solution at order $\lambda^{n+1}$. In other words, each term $W_k(t)$ in the Wilcox expansion collects the effect of the perturbation at order $k$. The Fer expansion, when truncated after $n$ transformations, provides a much more accurate approximation. This is true, of course, if the infinite sums involved in each transformation are computed exactly, an almost impossible task unless the time dependence of $A(t)$ is simple enough. By contrast, explicit expressions are available for each exponent $W_k(t)$ in the Wilcox expansion for a generic $A(t)$ and, by using the same techniques as in the Magnus expansion, appropriate approximations of the iterated integrals can be constructed if necessary. As the examples collected here and in some other studies [20] show, the Wilcox expansion can provide accurate results after only a few such transformations.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

The first three authors have been funded by Ministerio de Ciencia e Innovación (Spain) through projects MTM2016-77660-P and PID2019-104927GB-C21 (AEI/FEDER, UE), and by Universitat Jaume I (grants UJI-B2019-17 and GACUJI/2020/05). The work of J.A.O. has been partially supported by the Spanish MINECO (grant numbers AYA2016-81065-C2-2 and PID2019-109592GB-100).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the data that support the findings of this study are available from the corresponding author upon request. Codes generating explicit expressions and recurrences for the Wilcox expansion are openly available in http://www.gicas.uji.es/Research/Wilcox.html (accessed on 16 March 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wilcox, R. Exponential operators and parameter differentiation in quantum physics. J. Math. Phys. 1967, 8, 962–982. [Google Scholar] [CrossRef]
  2. Oteo, J.; Ros, J. From time-ordered products to Magnus expansion. J. Math. Phys. 2000, 41, 3268–3277. [Google Scholar] [CrossRef]
  3. Hale, J. Ordinary Differential Equations; Krieger Publishing Company: Malabar, FL, USA, 1980. [Google Scholar]
  4. Maricq, M.M. Application of average Hamiltonian theory to the NMR of solids. Phys. Rev. B 1982, 25, 6622–6632. [Google Scholar] [CrossRef]
  5. Casas, F.; Oteo, J.A.; Ros, J. Floquet theory: Exponential perturbative treatment. J. Phys. A Math. Gen. 2001, 34, 3379–3388. [Google Scholar] [CrossRef]
  6. Mananga, E.; Charpentier, T. On the Floquet–Magnus expansion: Applications in solid-state nuclear magnetic resonance and physics. Phys. Rep. 2016, 609, 1–50. [Google Scholar] [CrossRef]
  7. Magnus, W. On the Exponential Solution of Differential Equations for a Linear Operator. Comm. Pure Appl. Math. 1954, VII, 649–673. [Google Scholar] [CrossRef]
  8. Blanes, S.; Casas, F.; Oteo, J.; Ros, J. The Magnus expansion and some of its applications. Phys. Rep. 2009, 470, 151–238. [Google Scholar] [CrossRef] [Green Version]
  9. Casas, F.; Chartier, P.; Murua, A. Continuous changes of variables and the Magnus expansion. J. Phys. Commun. 2019, 3, 095014. [Google Scholar] [CrossRef]
  10. Blanes, S.; Casas, F.; Oteo, J.; Ros, J. Magnus and Fer expansions for matrix differential equations: The convergence problem. J. Phys. A Math. Gen. 1998, 22, 259–268. [Google Scholar] [CrossRef] [Green Version]
  11. Fer, F. Résolution de l’equation matricielle U˙= pU par produit infini d’exponentielles matricielles. Bull. Classe Sci. Acad. R. Bel. 1958, 44, 818–829. [Google Scholar]
  12. Bellman, R. Introduction to Matrix Analysis, 2nd ed.; McGraw-Hill: New York, NY, USA, 1970. [Google Scholar]
  13. Iserles, A. Solving Linear Ordinary Differential Equations by Exponentials of Iterated Commutators. Numer. Math. 1984, 45, 183–199. [Google Scholar] [CrossRef]
  14. Klarsfeld, S.; Oteo, J. Exponential infinite-product representations of the time displacement operator. J. Phys. A Math. Gen. 1989, 22, 2687–2694. [Google Scholar] [CrossRef]
  15. Mananga, E. On the Fer expansion: Applications in solid-state nuclear magnetic resonance and physics. Phys. Rep. 2016, 608, 1–41. [Google Scholar] [CrossRef]
  16. Casas, F. Fer’s Factorization as a Symplectic Integrator. Numer. Math. 1996, 74, 283–303. [Google Scholar] [CrossRef]
  17. Zanna, A. Collocation and Relaxed Collocation for the Fer and the Magnus Expansions. SIAM J. Numer. Anal. 1999, 36, 1145–1182. [Google Scholar] [CrossRef] [Green Version]
  18. Iserles, A.; Munthe-Kaas, H.; Nørsett, S.; Zanna, A. Lie-Group Methods. Acta Numer. 2000, 9, 215–365. [Google Scholar] [CrossRef] [Green Version]
  19. Huillet, T.; Monin, A.; Salut, G. Lie algebraic canonical representation in nonlinear control systems. Math. Syst. Theory 1987, 20, 193–213. [Google Scholar] [CrossRef]
  20. Zagury, N.; Aragão, A.; Casanova, J.; Solano, E. Unitary expansion of the time evolution operator. Phys. Rev. A 2010, 82, 042110. [Google Scholar] [CrossRef] [Green Version]
  21. Casas, F.; Murua, A.; Nadinic, M. Efficient computation of the Zassenhaus formula. Comput. Phys. Commun. 2012, 183, 2386–2391. [Google Scholar] [CrossRef] [Green Version]
  22. Galindo, A.; Pascual, P. Quantum Mechanics; Springer: Berlin/Heidelberg, Germany, 1990. [Google Scholar]
  23. Arnal, A.; Casas, F.; Chiralt, C. A general formula for the Magnus expansion in terms of iterated integrals of right-nested commutators. J. Phys. Commun. 2018, 2, 035024. [Google Scholar] [CrossRef]
  24. Dragt, A.; Forest, E. Computation of nonlinear behavior of Hamiltonian systems using Lie algebraic methods. J. Math. Phys. 1983, 24, 2734–2744. [Google Scholar] [CrossRef]
  25. Agrachev, A.; Gamkrelidze, R. The shuffle product and symmetric groups. In Differential Equations, Dynamical Systems, and Control Science; Elworthy, K., Everitt, W., Lee, E., Eds.; Marcel Dekker: New York, NY, USA, 1994; pp. 365–382. [Google Scholar]
  26. Hazewinkel, M.; Gubareni, N.; Kirichenko, V. Algebras, Rings and Modules. Lie Algebras and Hopf Algebras; AMS: Providence, RI, USA, 2010. [Google Scholar]
  27. Malvenuto, C.; Reutenauer, C. Duality between quasi-symmetric functions and the Solomon descent algebra. J. Algebra 1995, 177, 967–982. [Google Scholar] [CrossRef] [Green Version]
  28. Strichartz, R.S. The Campbell–Baker–Hausdorff–Dynkin Formula and Solutions of Differential Equations. J. Funct. Anal. 1987, 72, 320–345. [Google Scholar] [CrossRef] [Green Version]
  29. Geometric Integration Research Group. 2021. Available online: http://www.gicas.uji.es/Research/Wilcox.html (accessed on 18 February 2021).
  30. Bayen, F. On the convergence of the Zassenhaus formula. Lett. Math. Phys. 1979, 3, 161–167. [Google Scholar] [CrossRef]
  31. Arnal, A.; Casas, F.; Chiralt, C. On the structure and convergence of the symmetric Zassenhaus formula. Comput. Phys. Commun. 2017, 217, 58–65. [Google Scholar] [CrossRef] [Green Version]
  32. Moan, P.; Oteo, J. Convergence of the exponential Lie series. J. Math. Phys. 2001, 42, 501–508. [Google Scholar] [CrossRef]
  33. Casas, F.; Murua, A. An efficient algorithm for computing the Baker–Campbell–Hausdorff series and some of its applications. J. Math. Phys. 2009, 50, 033513. [Google Scholar] [CrossRef] [Green Version]
  34. Suzuki, M. On the convergence of exponential operators—The Zassenhaus formula, BCH formula and systematic approximants. Commun. Math. Phys. 1977, 57, 193–200. [Google Scholar] [CrossRef]
  35. Weyrauch, M.; Scholz, D. Computing the Baker–Campbell–Hausdorff series and the Zassenhaus product. Comput. Phys. Commun. 2009, 180, 1558–1565. [Google Scholar] [CrossRef]
  36. Bellman, R. Perturbation Techniques in Mathematics, Engineering & Physics; Dover Publications: Mineola, NY, USA, 1972. [Google Scholar]
  37. Arnal, A.; Casas, F.; Chiralt, C. Exponential perturbative expansions and coordinate transformations. Math. Comput. Appl. 2020, 25, 50. [Google Scholar] [CrossRef]
Figure 1. D n as a function of 1 / n , and linear extrapolation (red line).
Figure 2. Accuracy of the Wilcox–Bellman product expansion up to order eleven as a function of the ratio ε/a, with a = 1. The quantity plotted is the squared modulus of the non-diagonal element of the matrix. The vertical grey line stands for the convergence lower bound ε = 0.658.
Figure 3. Absolute error of the approximations given by curves in Figure 2 with a = 1 . The vertical grey line is located at the value of the convergence lower bound ε = 0.658 .
Figure 4. Error in the approximations to the matrix trace as a function of the rotation angle θ , for two values α = π / 2 and π / 4 .
Figure 5. Error in the approximations to the matrix trace as a function of the rotation angle θ , for two values α = 3 π / 4 and π .
Table 1. First five orders in Bellman problem for generic t. The operators C k in Equation (95) are obtained by taking t = 1 , i.e., C k = i W k ( 1 ) . We have defined S sin ( 2 a t ) and C cos ( 2 a t ) .
$$
\begin{aligned}
W_1(t) &= \frac{1}{2a}\left[ S\,\sigma_x + (1-C)\,\sigma_y \right] \\
W_2(t) &= \frac{1}{4a^2}\,(2at - S)\,\sigma_z \\
W_3(t) &= \frac{1}{12a^3}\left\{ \left[6at + (C-4)S\right]\sigma_x - (1-C)^2\,\sigma_y \right\} \\
W_4(t) &= \frac{1}{16a^4}\left[6at + (C-4)S\right]\sigma_z \\
W_5(t) &= \frac{1}{240a^5}\left[ 56S - (4C+7)SC - 10at\,(7 + 4C - 2C^2) \right]\sigma_x \\
&\quad + \frac{1}{240a^5}\left[ 4C^3 - 7C^2 - 28C + 31 + 2at\,(2SC - 4S + 3at) \right]\sigma_y
\end{aligned}
$$
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Arnal, A.; Casas, F.; Chiralt, C.; Oteo, J.A. A Unifying Framework for Perturbative Exponential Factorizations. Mathematics 2021, 9, 637. https://doi.org/10.3390/math9060637

