A unifying framework for perturbative exponential factorizations

We propose a framework in which the Fer and Wilcox expansions for the solution of linear differential equations are derived from two particular choices for the initial transformation that seeds the product expansion. In this scheme intermediate expansions can also be envisaged. Recurrence formulas are developed. A new lower bound for the convergence of the Wilcox expansion is provided, as well as some applications of the results. In particular, two examples are worked out up to high order of approximation to illustrate the behavior of the Wilcox expansion.


Introduction
Linear differential equations of the form
$$\frac{dY}{dt} = A(t)\, Y, \qquad Y(0) = Y_0, \qquad (1)$$
with $A(t)$ an $n \times n$ matrix whose entries are integrable functions of $t$, are ever-present in many branches of science, the fundamental evolution equation of Quantum Mechanics, the Schrödinger equation, being a particular case. In consequence, solving Eq. (1) is of the greatest importance. In spite of its apparent simplicity, however, it is seldom solvable in terms of elementary functions, and so different procedures have been proposed over the years to render approximate solutions. These are especially useful in the analytical treatment of perturbative problems, such as those arising in the time evolution of quantum systems [1], control theory, or problems where time-ordered products are involved [2]. Among them, exponential perturbative expansions have received a great deal of attention, due to some remarkable properties they possess. In particular, if Eq. (1) is defined in a Lie group, the approximations they furnish also evolve in the same Lie group. As a consequence, important qualitative properties of the exact solution are also preserved by the approximations. Thus, if Eq. (1) represents the time-dependent Schrödinger equation, then the approximate evolution operator is still unitary, and consequently the total sum of (approximate) transition probabilities is unity, no matter where the expansion is truncated. There are in fact many physical problems (in non-linear mechanics, optical spectroscopy, magnetic resonance, etc.) involving periodic fast-oscillating external fields that are also modeled by Eq. (1).
When dealing with the general problem (1), one of the most widely used exponential approximations corresponds to the Magnus expansion [7], where the solution is written as $Y(t) = \exp(\Omega(t))\,Y_0$ and $\Omega$ is an infinite series whose terms are linear combinations of time-ordered integrals of nested commutators of $A$ evaluated at different times (see [8] for a review, including applications to several physical and mathematical problems). What is more interesting for our purposes here is that this expansion can be related with a coordinate transformation $Y \mapsto Z$ rendering the original system (1) into the trivial equation
$$\frac{dZ}{dt} = 0, \qquad (4)$$
with the static solution $Z(t) = Z(0) = Y_0$, and that the transformation is given precisely by $Y(t) = \exp(\Omega(t))\,Z(t)$ [9]. In contrast to the Magnus expansion, the Floquet-Magnus expansion gets the solution with two exponential transformations when $A(t)$ is periodic, whereas other exponential perturbative expansions are based on infinite product factorizations of $Y(t)$,
$$Y(t) = e^{\Omega_1(t)}\, e^{\Omega_2(t)} \cdots e^{\Omega_n(t)} \cdots\, Y(0), \qquad (5)$$
such as those proposed by Fer and Wilcox. In fact, as pointed out in [10], both expansions have a curious history worth sketching. It was Fer who proposed the expansion that bears his name in [11], although he never applied it to solve any specific problem. Bellman reviewed this paper in the Mathematical Reviews, and even proposed the expansion as an exercise in [12]. Nevertheless, Wilcox identified it in [1] with an alternative factorization, Eq. (5), which was indeed a different and new type of expansion. From then on, their historical trajectories move apart. Thus, the Fer expansion was rediscovered by Iserles [13] as a tool for the numerical integration of linear differential equations and later on used in quantum mechanics [14] and solid-state nuclear magnetic resonance [15], but also as a Lie-group integrator [16,17,18]. On the other hand, the Wilcox expansion has been rediscovered several times in the literature, in particular in [19] in the context of nonlinear control systems, and in [20] as a general tool for approximating the time evolution operator in quantum mechanics. The first goal of this work is to recast both infinite product expansions within a unifying framework. This is done by considering, instead of just one exponential transformation as in the case of the Magnus expansion, a sequence of such transformations, $e^{\Omega_1(t)}, \ldots, e^{\Omega_n(t)}, \ldots$, chosen to satisfy certain requirements. To be more specific, suppose one replaces $A(t)$ in Eq. (1) by $\varepsilon A(t)$, where $\varepsilon > 0$ is a parameter. Then, if the transformations are chosen so that each $\Omega_n(t)$ is proportional to $\varepsilon^n$, we recover the Wilcox expansion, whereas we end up with the Fer expansion when each $\Omega_n(t)$ is an infinite series in $\varepsilon$ whose first term is proportional to $\varepsilon^{2^{n-1}}$.
We also show that further alternative choices yield new factorizations. This additional degree of freedom can indeed be used to better deal with specific features of the matrix $A(t)$, as in the Floquet-Magnus case when $A(t)$ is periodic.
One might then consider this sequence of linear transformations as a generalization of the concept of picture in Quantum Mechanics when Eq. (1) refers to the Schrödinger equation.
Our second goal consists in obtaining, on the basis of this framework, new results concerning the Wilcox expansion. Thus, we develop a recursive procedure to obtain every order of approximation in terms of nested commutators, as well as a bound on the convergence radius. We also establish a formal connection of the Wilcox expansion with the Zassenhaus formula [7,21].
Eventually, the important problem of expanding the exponential $\exp(A + \varepsilon B)$ for $\varepsilon > 0$ and $A$, $B$ two generic non-commuting operators will be addressed, and two applications of the results will be given.

A sequence of transformations: the general case
Given the initial value problem (1), let us consider a linear change of variables $Y \mapsto Y_1$ of the form
$$Y(t) = e^{\Omega_1(t)}\, Y_1(t), \qquad (6)$$
transforming the original system into
$$\frac{dY_1}{dt} = A_1(t)\, Y_1, \qquad Y_1(0) = Y_0. \qquad (7)$$
For the time being, the generator $\Omega_1(t)$ of the transformation is not specified. Then $A_1(t)$ can be expressed in terms of $A(t)$ and $\Omega_1(t)$ as follows. First, by inserting Eq. (6) into Eq. (1), and taking Eq. (7) into account, one can obtain
$$\Big( \frac{d}{dt}\, e^{\Omega_1} \Big)\, Y_1 + e^{\Omega_1}\, A_1\, Y_1 = A\, e^{\Omega_1}\, Y_1,$$
whence
$$A_1 = e^{-\Omega_1}\, A\, e^{\Omega_1} - e^{-\Omega_1}\, \frac{d}{dt}\, e^{\Omega_1}.$$
The derivative of the matrix exponential can be written as [8]
$$\frac{d}{dt}\, e^{\Omega(t)} = d\exp_{\Omega(t)}(\dot\Omega(t))\; e^{\Omega(t)},$$
where the symbol $d\exp_{\Omega}(\cdot)$ stands for the (everywhere convergent) power series
$$d\exp_{\Omega}(C) = \sum_{k=0}^{\infty} \frac{1}{(k+1)!}\, \mathrm{ad}_{\Omega}^{k}(C).$$
Here $\mathrm{ad}_{\Omega}^{0} C = C$, $\mathrm{ad}_{\Omega}^{k} C = [\Omega, \mathrm{ad}_{\Omega}^{k-1} C]$, and $[\Omega, C] = \Omega C - C\Omega$ denotes the usual commutator. Therefore
$$A_1 = e^{-\mathrm{ad}_{\Omega_1}}\, A - d\exp_{-\Omega_1}(\dot\Omega_1), \qquad (12)$$
where
$$e^{-\mathrm{ad}_{\Omega}}\, A = e^{-\Omega}\, A\, e^{\Omega} = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!}\, \mathrm{ad}_{\Omega}^{k}\, A \qquad (13)$$
and $d\exp_{-\Omega}(\dot\Omega) = e^{-\Omega}\, d\exp_{\Omega}(\dot\Omega)\, e^{\Omega}$. Of course, nothing prevents us from repeating the whole procedure above and introducing a second transformation in Eq. (7) of the form $Y_1(t) = e^{\Omega_2(t)}\, Y_2(t)$, so that the new variables verify $dY_2/dt = A_2(t)\, Y_2$, $Y_2(0) = Y_0$. In general, for the $n$-th such linear transformation
$$Y_{n-1}(t) = e^{\Omega_n(t)}\, Y_n(t),$$
with $Y_0(t) \equiv Y(t)$, $A_0(t) \equiv A(t)$, one has
$$\frac{dY_n}{dt} = A_n(t)\, Y_n, \qquad A_n = e^{-\Omega_n}\, A_{n-1}\, e^{\Omega_n} - e^{-\Omega_n}\, \frac{d}{dt}\, e^{\Omega_n}, \qquad (19)$$
so that the solution of equation (1) is expressed as
$$Y(t) = e^{\Omega_1(t)}\, e^{\Omega_2(t)} \cdots e^{\Omega_n(t)}\, Y_n(t).$$
Alternatively, we can write $A_n(t)$ in Eq. (19) as follows. Since it is also true that [8]
$$\frac{d}{dt}\, e^{\Omega_n(t)} = e^{\Omega_n(t)}\; d\exp_{-\Omega_n(t)}(\dot\Omega_n(t)),$$
we have
$$A_n = e^{-\mathrm{ad}_{\Omega_n}}\, A_{n-1} - d\exp_{-\Omega_n}(\dot\Omega_n). \qquad (22)$$
The important point is of course how to choose $A_n$, or alternatively $\Omega_n$, i.e., the specific requirements each transformation has to satisfy in order to be useful to approximately solve Eq. (1). There are obviously many possibilities, and in the following we analyze two of them, leading to two different and well known exponential perturbation factorizations mentioned in the Introduction, namely the Wilcox [1] and Fer [11] expansions.
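The relation (22) is easy to test numerically. The following sketch (Python with NumPy/SciPy; the generator $\Omega_1(t) = tX + \frac{t^2}{2}Y$ and the coefficient matrix $A(t) = P + tQ$ are arbitrary choices of ours, made only for illustration) checks by finite differences that $Y_1(t) = e^{-\Omega_1(t)}\,Y(t)$ indeed satisfies $\dot Y_1 = A_1 Y_1$ with $A_1$ given by Eq. (22):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)
P, Q, X, Y2 = 0.5 * rng.normal(size=(4, 3, 3))

A   = lambda t: P + t * Q                # coefficient matrix of Eq. (1)
Om  = lambda t: t * X + 0.5 * t**2 * Y2  # an arbitrary generator Omega_1(t)
dOm = lambda t: X + t * Y2               # its time derivative

def dexp_minus(O, C, kmax=25):
    # dexp_{-Omega}(C) = sum_{k>=0} (-1)^k/(k+1)! ad_Omega^k C
    out, term, fact = np.zeros_like(C), C.copy(), 1.0
    for k in range(kmax):
        out += ((-1) ** k / fact) * term
        term = O @ term - term @ O       # next power of ad_Omega
        fact *= k + 2                    # updates (k+1)! for the next term
    return out

def A1(t):                               # Eq. (22) with n = 1
    return expm(-Om(t)) @ A(t) @ expm(Om(t)) - dexp_minus(Om(t), dOm(t))

def Y1(t):                               # transformed variable e^{-Omega_1} Y
    sol = solve_ivp(lambda s, y: (A(s) @ y.reshape(3, 3)).ravel(),
                    (0, t), np.eye(3).ravel(), rtol=1e-12, atol=1e-12)
    return expm(-Om(t)) @ sol.y[:, -1].reshape(3, 3)

t0, h = 0.7, 1e-5
lhs = (Y1(t0 + h) - Y1(t0 - h)) / (2 * h)   # dY1/dt by central differences
rhs = A1(t0) @ Y1(t0)
print(np.abs(lhs - rhs).max())              # ~1e-7 or smaller: Y1' = A1 Y1
```

The residual is dominated by the finite-difference and ODE-solver tolerances, confirming the transformed equation.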

Recurrences
Let us introduce the (dummy) parameter $\varepsilon$ in Eq. (1) and replace $A$ with $\varepsilon A$. This is helpful when collecting coefficients, and at the end we can always take $\varepsilon = 1$.
Since the solution of Eq. (1) when $A$ is constant, or more generally when $A(t_1)A(t_2) = A(t_2)A(t_1)$ for all $t_1 \neq t_2$, is $Y(t) = \exp\big(\int_0^t A(s)\,ds\big)\,Y_0$, it makes sense to take the generator for the first transformation as
$$\Omega_1(t) = \varepsilon\, W_1(t), \qquad \dot W_1 = A(t), \qquad W_1(0) = O.$$
Then, according to Eqs. (12)-(13), we have
$$A_1(t) = \sum_{m \ge 1} \varepsilon^m A_{1,m}(t), \qquad A_{1,m} = \frac{(-1)^{m-1}}{(m-1)!}\, \mathrm{ad}_{W_1}^{m-1} A_{0,1} - \frac{(-1)^{m-1}}{m!}\, \mathrm{ad}_{W_1}^{m-1}\, \dot W_1,$$
where $A_{0,1} = A(t)$, $A_{0,m} = 0$, $m > 1$. With the choice $\dot W_1 = A(t)$, it turns out that $A_1$ is a power series in $\varepsilon$ starting with $\varepsilon^2$,
$$A_1(t) = \sum_{m \ge 2} \varepsilon^m A_{1,m}(t), \qquad \text{where} \qquad A_{1,m} = (-1)^{m-1}\, \frac{m-1}{m!}\, \mathrm{ad}_{W_1}^{m-1} A.$$
We can analogously choose the second transformation proportional to $\varepsilon^2$, i.e., $\Omega_2(t) \equiv \varepsilon^2 W_2(t)$ for a given $W_2$ to be determined. Then a straightforward calculation shows that
$$A_2(t) = \sum_{m \ge 2} \varepsilon^m A_{2,m}(t), \qquad \text{with} \qquad A_{2,2} = A_{1,2} - \dot W_2, \quad A_{2,3} = A_{1,3}, \;\ldots \qquad (28)$$
The generator $W_2$ is then obtained by imposing that $A_{1,2} - \dot W_2 = 0$ in Eq. (28), i.e.,
$$\dot W_2 = A_{1,2} = \frac12\, [A, W_1], \qquad W_2(0) = O.$$
In this way $A_2(t)$ is a power series in $\varepsilon$ starting with $\varepsilon^3$, with
$$A_{2,m} = \sum_{k=0}^{[m/2]-1} \frac{(-1)^k}{k!}\, \mathrm{ad}_{W_2}^{k}\, A_{1,m-2k} \;-\; \delta_m^{(2)}\, \frac{(-1)^{m/2-1}}{(m/2)!}\, \mathrm{ad}_{W_2}^{m/2-1}\, \dot W_2,$$
where $[\cdot]$ stands for the integer part of the argument and $\delta_m^{(2)} = 1$ if $m$ is even, $0$ otherwise. In general, the $n$-th transformation $\Omega_n(t) \equiv \varepsilon^n W_n(t)$ is determined in such a way that the power series of $A_n(t)$ starts with $\varepsilon^{n+1}$. This can be done as follows: one starts from
$$A_n = e^{-\varepsilon^n\, \mathrm{ad}_{W_n}}\, A_{n-1} - \varepsilon^n\, d\exp_{-\varepsilon^n W_n}(\dot W_n), \qquad (33)$$
with
$$A_{n-1}(t) = \sum_{m \ge n} \varepsilon^m A_{n-1,m}(t). \qquad (34)$$
Then, $W_n$ is obtained by taking $A_{n-1,n} - \dot W_n = 0$, i.e.,
$$\dot W_n = A_{n-1,n}, \qquad W_n(0) = O, \qquad (35)$$
and finally $A_n$ is determined as
$$A_n(t) = \sum_{m \ge n+1} \varepsilon^m A_{n,m}(t), \qquad (36)$$
with
$$A_{n,m} = \sum_{k=0}^{[m/n]-1} \frac{(-1)^k}{k!}\, \mathrm{ad}_{W_n}^{k}\, A_{n-1,m-kn} \;-\; \delta_m^{(n)}\, \frac{(-1)^{m/n-1}}{(m/n)!}\, \mathrm{ad}_{W_n}^{m/n-1}\, \dot W_n, \qquad (37)$$
where now $\delta_m^{(n)} = 1$ if $n$ divides $m$, $0$ otherwise. Notice that, in view of Eq. (34) and Eq. (37), equation (35) simplifies to
$$\dot W_n = A_{[n/2],\,n}. \qquad (38)$$
The solution of Eq. (1) is expressed, after $N$ such transformations, as
$$Y(t) = e^{\varepsilon W_1(t)}\, e^{\varepsilon^2 W_2(t)} \cdots e^{\varepsilon^N W_N(t)}\; Y_N(t). \qquad (39)$$
An approximation to the exact solution containing all the dependence up to $\mathcal{O}(\varepsilon^N)$ is obtained by taking $Y_N = Y_0$ in Eq. (39).
Recursion (33)-(38) allows one to construct any generator $W_n$ of the expansion in terms of $W_1, \ldots, W_{n-1}$. For illustration, we next collect the first terms:
$$W_1(t) = \int_0^t A(s)\, ds, \qquad W_2(t) = \frac12 \int_0^t [A(s), W_1(s)]\, ds, \qquad W_3(t) = \frac13 \int_0^t [W_1(s), [W_1(s), A(s)]]\, ds.$$
This is the way the Wilcox expansion is built up.
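As a consistency check of these first terms, one can verify numerically that truncating the product after $W_3$ leaves a remainder of order $\varepsilon^4$. A minimal sketch with NumPy/SciPy, for the arbitrary test matrix $A(t) = X + tY$ (our own choice, not taken from the text):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(4)
X, Y2 = rng.normal(size=(2, 3, 3))
A   = lambda t: X + t * Y2
W1  = lambda t: t * X + 0.5 * t**2 * Y2        # W1 = int_0^t A  (closed form)
com = lambda U, V: U @ V - V @ U

s = np.linspace(0.0, 1.0, 2001)                # quadrature grid on [0, 1]
W2 = np.trapz([0.5 * com(A(u), W1(u)) for u in s], s, axis=0)
W3 = np.trapz([com(W1(u), com(W1(u), A(u))) / 3.0 for u in s], s, axis=0)

def exact(eps):                                # reference solution of Y' = eps A Y
    sol = solve_ivp(lambda u, y: (eps * A(u) @ y.reshape(3, 3)).ravel(),
                    (0, 1.0), np.eye(3).ravel(), rtol=1e-12, atol=1e-12)
    return sol.y[:, -1].reshape(3, 3)

for eps in (0.2, 0.1, 0.05):
    approx = expm(eps * W1(1.0)) @ expm(eps**2 * W2) @ expm(eps**3 * W3)
    print(eps, np.linalg.norm(approx - exact(eps)))  # shrinks ~16x per halving
```

The error decreases by a factor of about $2^4 = 16$ each time $\varepsilon$ is halved, as expected for an $\mathcal{O}(\varepsilon^4)$ remainder.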

Explicit expressions for $W_n(t)$
Although the recursive procedure (33)-(38) turns out to be very efficient in practice to construct the exponents $W_n(t)$ for a given $A(t)$, it is clear that much insight about the expansion can be gained if an explicit expression for any $W_n$ can be constructed, thus generalizing the treatment done originally by Wilcox up to $n = 4$ [1]. Such an expression could be obtained, in principle, by working out the recurrence (33)-(38), but a more direct approach consists in comparing the Dyson perturbation series of $Y(t)$ [22] in the associated initial value problem, i.e.,
$$Y(t) = \Big( I + \sum_{n \ge 1} \varepsilon^n P_n(t) \Big)\, Y_0, \qquad P_n(t) = \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n\; A(t_1)\, A(t_2) \cdots A(t_n), \qquad (42)$$
with the expansion in $\varepsilon$ of the factorization (39). Thus, for the first terms one has
$$P_1 = W_1, \qquad P_2 = W_2 + \frac12\, W_1^2, \qquad P_3 = W_3 + W_1 W_2 + \frac16\, W_1^3.$$
In general, we can write
$$P_n(t) = \sum_{\substack{j_1 + 2 j_2 + \cdots + n j_n = n \\ j_k \ge 0}} \frac{W_1^{j_1}}{j_1!}\, \frac{W_2^{j_2}}{j_2!} \cdots \frac{W_n^{j_n}}{j_n!}, \qquad (45)$$
where the sum is extended over the total number of partitions $p(n)$ of the integer $n$.
By working out equation (45), one can invert the relations and express $W_n$ in terms of $P_1, \ldots, P_n$ for any $n \ge 1$. Thus, one obtains
$$W_1 = P_1, \qquad W_2 = P_2 - \frac12\, P_1^2, \qquad W_3 = P_3 - P_1 P_2 + \frac13\, P_1^3, \;\ldots$$
Interestingly, it is possible to express these products as proper time-ordered integrals by using a procedure developed in [23]. If we denote
$$T(\sigma(1)\,\sigma(2) \cdots \sigma(n)) = \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n\; A(t_{\sigma(1)})\, A(t_{\sigma(2)}) \cdots A(t_{\sigma(n)}),$$
so that $P_n(t) = T(12\ldots n)$, then, taking into account Fubini's theorem, it is clear that $T(1)\cdot T(1) = T(12) + T(21)$, and thus
$$W_2 = T(12) - \frac12\, \big( T(12) + T(21) \big) = \frac12\, T(12) - \frac12\, T(21).$$
We can proceed analogously with the following products, $T(1)\cdot T(12) = T(123) + T(213) + T(312)$ and $T(1)\cdot T(1)\cdot T(1) = \sum_{\sigma \in S_3} T(\sigma)$, so that
$$W_3 = \frac13\, \big( T(123) + T(132) + T(231) + T(321) \big) - \frac23\, \big( T(213) + T(312) \big).$$
Carrying out this argument to any order, we can expand all the products of integrals appearing in $W_n$. As a result, each product is replaced by the sum of all possible permutations of time ordering consistent with the time ordering in the factors of this product [24].
At this point, it is illustrative to consider in detail some examples. Thus, the product $T(1) \cdot T(12)$ gives the sum of all permutations of three elements such that the second index is less than the third one. With respect to $T(1)\cdot T(1)\cdot T(1)$, since there is no special ordering, all possible permutations have to be taken into account. Finally, for the product $P_1 P_3$ appearing in $W_4$ one has
$$T(1) \cdot T(123) = T(1234) + T(2134) + T(3124) + T(4123).$$
Proceeding in a similar way, one can show that any product of iterated integrals can be expressed as a sum of iterated integrals. This property is in fact related with a much deeper characterization of the group of permutations [23]. If $\mathcal{S}$ denotes the graded $\mathbb{Q}$-vector space with fundamental basis given by the disjoint union of the symmetric groups $S_n$ for all $n \ge 0$, then it is possible to define a product $*$ of permutations and a coproduct in $\mathcal{S}$ so that there exists a one-to-one correspondence between iterated integrals and permutations. The product $*$ was introduced in [25], and together with the coproduct it endows $\mathcal{S}$ with the structure of a Hopf algebra [26], the so-called Malvenuto-Reutenauer Hopf algebra of permutations [27].
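The identities quoted above, e.g. $T(1)\cdot T(12) = T(123) + T(213) + T(312)$, can be checked numerically with a matrix-valued $A(t)$. The sketch below (our own illustration, with the arbitrary choice $A(t) = X + tY$) approximates each iterated integral by a Riemann sum, so the two sides agree up to the $\mathcal{O}(1/N)$ discretization error:

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y2 = rng.normal(size=(2, 2, 2))
A = lambda t: X + t * Y2                    # a simple non-commuting family

N, tmax = 60, 1.0
h = tmax / N
Ag = np.array([A((i + 0.5) * h) for i in range(N)])  # A on a midpoint grid

def T(word):
    # Riemann sum for T(w1 w2 w3) = int_{t1>t2>t3} A(t_{w1}) A(t_{w2}) A(t_{w3})
    S = np.zeros((2, 2))
    for i1 in range(N):
        for i2 in range(i1):
            for i3 in range(i2):
                idx = (i1, i2, i3)          # grid indices for t1 > t2 > t3
                S = S + Ag[idx[word[0]-1]] @ Ag[idx[word[1]-1]] @ Ag[idx[word[2]-1]]
    return S * h**3

T1  = Ag.sum(axis=0) * h                                            # T(1)
T12 = sum(Ag[i] @ Ag[j] for i in range(N) for j in range(i)) * h**2  # T(12)

lhs = T1 @ T12
rhs = T((1, 2, 3)) + T((2, 1, 3)) + T((3, 1, 2))
print(np.abs(lhs - rhs).max())   # small: only O(1/N) discretization error left
```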
In sum, the general structure of the Wilcox expansion terms reads
$$W_n(t) = \sum_{\sigma \in S_n} w_n^{(\sigma)}\; T(\sigma), \qquad (55)$$
where the summation extends over all the $n!$ permutations $\sigma \in S_n$ of $\{1, 2, \ldots, n\}$. The weights $w_n^{(\sigma)}$ are rational numbers that can be determined algorithmically for any $n$, although it does not seem obvious to find a general expression for them, in contrast with the Magnus expansion, for which such a closed formula exists [28]. This can then be considered as an open problem.
Moreover, if one is interested in getting a compact expression for $W_n$ in terms of independent nested commutators of $A(t)$, as is done in [1] up to $n = 4$, one can use the class of bases proposed by Dragt & Forest in [24] for the Lie algebra generated by the operators $A(t_1), A(t_2), \ldots, A(t_n)$. The same procedure carried out in [23] for the Magnus expansion can be applied here, so that one gets a general formula of the form
$$W_n(t) = \sum_{\sigma}{}^{\prime}\; w_n^{(\sigma)} \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n\; [A(t_{\sigma(2)}), [A(t_{\sigma(3)}), [\,\ldots, [A(t_{\sigma(n)}), A(t_1)]\ldots]]]. \qquad (56)$$
Here the sum extends over the $(n-1)!$ permutations $\sigma$ of the elements $\{2, 3, \ldots, n\}$ and $w_n^{(\sigma)}$ is a rational number that depends on the particular permutation. For a given permutation $\sigma$, its value coincides with the prefactor in Eq. (55) of the particular term $T(\sigma_n)$ corresponding to the permutation $\sigma_n = (\sigma(2), \ldots, \sigma(n), 1) \in S_n$. In sum, the general structure of the Wilcox expansion terms via commutators uses the same weights as in Eq. (55), where the primed sum requires the rightmost element in the permutation to be invariant (as in Eq. (56)). This element may be chosen at will and, whatever that value is, the permutations are built up with the remaining $n-1$ elements. Different, but equivalent, expressions for $W_n$ in terms of commutators are obtained depending on the element fixed at the rightmost position. We stress once again that, although only the first terms have been collected here for simplicity, the whole procedure is algorithmic in nature and has been implemented in a computer algebra system, allowing one to evaluate explicitly $W_n(t)$ for any $n$ [29]. Notice that $W_n(t)$ involves a linear combination of $(n-1)!$ iterated integrals.

Convergence of the Wilcox expansion
Recursion (33)-(38) is also very useful to provide estimates for the radius of convergence of the Wilcox expansion when Eq. (1) is defined in a Banach algebra $\mathcal{B}$, i.e., an algebra that is also a complete normed linear space with a sub-multiplicative norm, $\|XY\| \le \|X\|\, \|Y\|$. If this is the case, then $\|\mathrm{ad}_X\, Z\| \le 2\, \|X\|\, \|Z\|$ and, in general, $\|\mathrm{ad}_X^k\, Z\| \le 2^k\, \|X\|^k\, \|Z\|$. As shown in [30,31], if the series
$$f(\varepsilon; t) = \sum_{n \ge 1} \|W_n(t)\|\, \varepsilon^n$$
has a certain radius of convergence $\rho_t$ for a given $t$, then, for $0 < r < \rho_t$, the sequence of functions
$$Y^{[N]}(t) = e^{\varepsilon W_1(t)}\, e^{\varepsilon^2 W_2(t)} \cdots e^{\varepsilon^N W_N(t)}\, Y_0$$
converges uniformly on any compact subset of the ball $B(0, \rho_t)$. Thus, studying the convergence of the Wilcox expansion reduces to analyzing the series $f(\varepsilon; t)$ and in particular its radius of convergence $\rho_t$.
Let $k(t)$ be a function such that $\|A(t)\| \le k(t)$ and denote
$$K(t) = \int_0^t k(s)\, ds.$$
In general, bounds of the form
$$\|W_n(t)\| \le c_n\, K^n(t)$$
can be established by induction, where the positive constants $c_n$ are generated by a recurrence obtained directly from (33)-(38) together with the estimate $\|\mathrm{ad}_X^k\, Z\| \le 2^k \|X\|^k \|Z\|$. It is clear that if the series $\sum_{n \ge 1} c_n\, \varepsilon^n K^n(t)$ converges, so does $f(\varepsilon = 1; t)$. Therefore, a sufficient condition for convergence of the Wilcox expansion is obtained by imposing
$$K(t) < \frac{1}{r_\infty}, \qquad \text{where} \qquad r_\infty = \lim_{n \to \infty} r_n, \qquad r_n \equiv c_n^{1/n}.$$
We have computed this quantity up to $n = 2000$ and then extrapolated to the limit $(1/n) \to 0$. Then $r_n \to r_\infty = 1.51868$, as seen in Figure 1, and thus the convergence of the Wilcox expansion is ensured at least for values of time $t$ such that
$$K(t) = \int_0^t k(s)\, ds \le 0.6584\ldots = \frac{1}{r_\infty}.$$
This type of extrapolation has also been used to estimate the convergence radius of the Magnus expansion [32]. Although the estimate is not completely analytic, the same type of computation has provided accurate results in other settings. In particular, for the Magnus expansion such an estimate fully agrees with a purely theoretically deduced bound [10,8,32].
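The extrapolation step itself is elementary: fit $r_n$ against $1/n$ with a straight line and read off the intercept at $1/n \to 0$. In the sketch below the array r is only a placeholder with the right limit, inserted to make the fragment runnable; in an actual computation its entries must come from the recurrence for the $c_n$:

```python
import numpy as np

# Placeholder sequence: NOT the values of the paper, only an illustration
# of the fitting/extrapolation step.
n = np.arange(100, 2001, 100)
r = 1.51868 - 0.9 / n                       # synthetic data with the right limit

slope, intercept = np.polyfit(1.0 / n, r, deg=1)   # linear fit in 1/n
print(intercept, 1.0 / intercept)           # -> ~1.51868 and ~0.6585
```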
Figure 1: $r_n$ as a function of $1/n$, and linear extrapolation (red line).

Fer-like expansions

Standard Fer expansion
In forming the Wilcox expansion the first transformation is chosen in such a way that $\dot\Omega_1 = A_0 \equiv \varepsilon A(t)$, whereas $\dot\Omega_n \neq A_{n-1}$ for $n \ge 2$. It makes sense, then, to analyze what happens if we impose this condition, $\dot\Omega_n = A_{n-1}$, at each step of the procedure for all $n \ge 1$. In that way, expression (22) for $A_n$ clearly simplifies to
$$A_n(t) = \sum_{k \ge 1} \frac{(-1)^k\, k}{(k+1)!}\, \mathrm{ad}_{\Omega_n}^{k}\, A_{n-1}(t), \qquad \Omega_n(t) = \int_0^t A_{n-1}(s)\, ds. \qquad (72)$$
In doing so we recover precisely the Fer expansion, see [11,10]. Again, after $N$ transformations, we get
$$Y(t) = e^{\Omega_1(t)}\, e^{\Omega_2(t)} \cdots e^{\Omega_N(t)}\, Y_N(t), \qquad (73)$$
so that if we impose $Y_N(t) = Y_0$ we are left with another approximation to the exact solution. Notice that this approximation clearly differs from the previous Wilcox expansion for $n \ge 2$, as can be seen by analyzing the dependence on $\varepsilon$ of each transformation. Whereas $\Omega_n = \varepsilon^n W_n(t)$ for the Wilcox expansion, now $\Omega_n$ contains terms of order $\varepsilon^{2^{n-1}}$ and higher. This can be easily shown by induction: $\Omega_1$ is proportional to $\varepsilon$, so that $A_1$, according to Eq. (72), contains terms of order $\varepsilon^2$ (coming from $[\Omega_1, A_0]$) and higher. In general, $A_{n-2}$ and $\Omega_{n-1}$ contain terms of order $\varepsilon^{2^{n-2}}$ and higher, so the first term in the series (72) for $A_{n-1}$, i.e., the commutator $[\Omega_{n-1}, A_{n-2}]$, produces a term of order $\varepsilon^{2^{n-1}}$.
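This superconvergence can be observed numerically. The sketch below (our own illustration; $A(t) = X + tY$ with random matrices) builds $\Omega_2 = \int_0^t A_1$ by summing the series (72) up to $k = 14$ and integrating with the trapezoidal rule, and shows that the error of $e^{\Omega_1}e^{\Omega_2}$ decays as $\varepsilon^4$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
X, Y2 = rng.normal(size=(2, 3, 3))
A  = lambda t: X + t * Y2
W1 = lambda t: t * X + 0.5 * t**2 * Y2      # int_0^t A, closed form

def A1(t, eps, kmax=14):
    # Fer series (72): A1 = sum_{k>=1} (-1)^k k/(k+1)! ad_{Omega_1}^k A0
    Om, ad = eps * W1(t), eps * A(t)
    out, fact = np.zeros((3, 3)), 1.0
    for k in range(1, kmax + 1):
        ad = Om @ ad - ad @ Om              # ad_{Omega_1}^k A0
        fact *= k + 1                       # (k+1)!
        out += (-1) ** k * k / fact * ad
    return out

def fer2(eps, t=1.0, nq=401):
    s = np.linspace(0, t, nq)
    Om2 = np.trapz([A1(u, eps) for u in s], s, axis=0)  # Omega_2 = int_0^t A1
    return expm(eps * W1(t)) @ expm(Om2)

def exact(eps, t=1.0):
    sol = solve_ivp(lambda u, y: (eps * A(u) @ y.reshape(3, 3)).ravel(),
                    (0, t), np.eye(3).ravel(), rtol=1e-12, atol=1e-12)
    return sol.y[:, -1].reshape(3, 3)

for eps in (0.2, 0.1, 0.05):
    print(eps, np.linalg.norm(fer2(eps) - exact(eps)))  # O(eps^4) decay
```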

Intermediate Fer-like expansions
Notice that the $\varepsilon$-power series of $\Omega_n$ in the Fer expansion contains infinitely many terms, starting with $\varepsilon^{2^{n-1}}$, but the corresponding truncated factorization obtained from Eq. (73) by taking $Y_N(t) = Y_0$ is correct only up to terms of order $\varepsilon^{2^N - 1}$. One might then consider yet another sequence of transformations so that each $\Omega_n$ contains only those relevant terms leading to a correct approximation up to this order. Of course, both factorizations would be different, but nevertheless they would produce the correct power series up to order $\varepsilon^{2^N - 1}$. The corresponding factorization can be properly called a modified Fer expansion.
Our starting point is once again equation (22). Clearly, the first transformation is the same as in Fer (and Wilcox), i.e.,
$$\Omega_1(t) = \varepsilon\, W_1(t) = \varepsilon \int_0^t A(s)\, ds,$$
and thus
$$A_1(t) = \sum_{m \ge 2} \varepsilon^m A_{1,m}(t) = \mathcal{O}(\varepsilon^2), \qquad (77)$$
where the rightmost term points out the lowest $\varepsilon$ contribution in the sum. Next, to reproduce the same dependence on $\varepsilon$ as the Fer expansion, we need to enforce that $A_2 = \mathcal{O}(\varepsilon^4)$, and the question is how to choose $\Omega_2$ guaranteeing this feature. An analysis of Eq. (22) with $n = 2$ reveals that this is achieved by taking $\dot\Omega_2$ as the sum of the terms in $A_1$ in Eq. (77) contributing to $\varepsilon^2$ and $\varepsilon^3$, i.e.,
$$\Omega_2(t) = \int_0^t \big( \varepsilon^2 A_{1,2}(s) + \varepsilon^3 A_{1,3}(s) \big)\, ds, \qquad (78)$$
since the next term appearing in the expression of $A_2$ involves the computation of $\mathrm{ad}_{W_1}^3 A$. With this choice,
$$A_2(t) = \sum_{m \ge 4} \varepsilon^m A_{2,m}(t) = \mathcal{O}(\varepsilon^4). \qquad (79)$$
Likewise, $\Omega_3$ is to be designed so that $A_3 = \mathcal{O}(\varepsilon^8)$, and this is guaranteed by taking $\Omega_3$ as the sum of all the terms in $A_2$ contributing to powers from $\varepsilon^4$ up to $\varepsilon^7$. From Eq. (79) it is clear that only the relevant terms in the expansion of $A_1$ have to be taken into account. In this way, we can take
$$\Omega_3(t) = \int_0^t \sum_{m=4}^{7} \varepsilon^m A_{2,m}(s)\, ds, \qquad (82)$$
with the coefficients $A_{2,m}$ obtained from Eq. (22). Notice, however, that since the second term of $\Omega_2$ in Eq. (78) is $\mathcal{O}(\varepsilon^3)$, the expression (82) does contain some contributions in $\varepsilon^8$ and $\varepsilon^9$ that in principle could be removed. We prefer, however, to maintain them just to have a more compact expression. For this modified Fer expansion, $\Omega_n$ is chosen in general so that $\dot\Omega_n$ is precisely the sum of all terms of $A_{n-1}$ containing powers from $\varepsilon^{2^{n-1}}$ up to $\varepsilon^{2^n - 1}$, after truncating appropriately the series of $A_{n-2}, \ldots, A_1$.
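A quick numerical experiment confirms that the truncated generator (78) suffices to reach $\mathcal{O}(\varepsilon^4)$ accuracy with only two factors. Again $A(t) = X + tY$ is an arbitrary test case of ours, and $A_{1,2} = \frac12[A, W_1]$, $A_{1,3} = \frac13[W_1,[W_1,A]]$ are the coefficients obtained in Section 3:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(7)
X, Y2 = rng.normal(size=(2, 3, 3))
A   = lambda t: X + t * Y2
W1  = lambda t: t * X + 0.5 * t**2 * Y2
com = lambda U, V: U @ V - V @ U

def Om2(eps, t=1.0, nq=2001):
    # Eq. (78): Omega_2 = int_0^t (eps^2 A_{1,2} + eps^3 A_{1,3})
    s = np.linspace(0, t, nq)
    f = [eps**2 * 0.5 * com(A(u), W1(u))
         + eps**3 / 3.0 * com(W1(u), com(W1(u), A(u))) for u in s]
    return np.trapz(f, s, axis=0)

def exact(eps, t=1.0):
    sol = solve_ivp(lambda u, y: (eps * A(u) @ y.reshape(3, 3)).ravel(),
                    (0, t), np.eye(3).ravel(), rtol=1e-12, atol=1e-12)
    return sol.y[:, -1].reshape(3, 3)

for eps in (0.2, 0.1, 0.05):
    approx = expm(eps * W1(1.0)) @ expm(Om2(eps))
    print(eps, np.linalg.norm(approx - exact(eps)))   # two factors, O(eps^4)
```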
Other possibilities for choosing $\Omega_n$ at the successive stages clearly exist and, according to the particular choice, different intermediate Fer-like expansions result. In practice, one of those combinations of commutators could be more easily computed for a specific problem.

Wilcox expansion as the continuous analogue of the Zassenhaus formula
The Zassenhaus formula may be considered as the dual of the Baker-Campbell-Hausdorff (BCH) formula [33] in the sense that it relates the exponential of the sum of two non-commuting operators $X$ and $Y$ with an infinite product of exponentials of these operators and their nested commutators. More specifically,
$$e^{X+Y} = e^{X}\, e^{Y} \prod_{n=2}^{\infty} e^{C_n(X,Y)} = e^{X}\, e^{Y}\, e^{C_2(X,Y)}\, e^{C_3(X,Y)} \cdots e^{C_n(X,Y)} \cdots, \qquad (84)$$
where $C_n(X,Y)$ is a homogeneous Lie polynomial in $X$ and $Y$ of degree $n$ [21,7,34,35,1]. A very efficient procedure to generate all the terms in Eq. (84) is presented in [21]; it allows one to construct $C_n$ up to a prescribed value of $n$ directly in terms of the minimum number of independent commutators involving $n$ operators $X$ and $Y$.
In view of the formal similarity between Eq. (39) and Eq. (84), the Wilcox expansion has also been described as the "continuous analogue of the Zassenhaus formula" [10], just as the Magnus expansion is sometimes called the continuous version of the BCH formula. To substantiate this claim, we next reproduce the Zassenhaus formula (84) by applying the procedure of Section 3 to a particular initial value problem, namely the abstract equation
$$\frac{dU}{dt} = \varepsilon\, (X + Y)\, U, \qquad U(0) = I, \qquad (85)$$
where $X$ and $Y$ are two non-commuting constant operators. The formal solution is of course $U(t) = e^{\varepsilon t (X+Y)}$, but we can also solve Eq. (85) by first integrating $\dot U_0 = \varepsilon X U_0$ and factorizing $U(t)$ as $U(t) = U_0\, U_I = e^{\varepsilon t X}\, U_I$, where $U_I$ obeys the equation
$$\frac{dU_I}{dt} = \varepsilon\, e^{-\varepsilon t X}\, Y\, e^{\varepsilon t X}\; U_I, \qquad (86)$$
and finally apply to Eq. (86) the sequence of transformations leading to the Wilcox expansion. Notice, however, that now the coefficient matrix $A_0(t) \equiv \varepsilon\, e^{-\varepsilon t X}\, Y\, e^{\varepsilon t X}$ is an infinite series in $\varepsilon$, so that, when applying the recursion (33)-(38), $\Omega_1$ is no longer $\int_0^t A_0(s)\,ds$, but $\varepsilon$ times the integral of the term in $A_0(t)$ which is proportional to $\varepsilon$. In other words, $\dot W_1 = Y$, and thus $W_1(t) = t\,Y$. After some computation, one arrives at
$$W_2(t) = -\frac{t^2}{2}\, [X, Y].$$
Since $\dot W_1 = A_{0,1} = Y$, then clearly $\mathrm{ad}_{W_1}^{k}\, \dot W_1 = 0$ for all $k \ge 1$, and $d\exp_{-\varepsilon W_1}(\dot W_1) = \dot W_1$. In general, each $W_n(t)$ turns out to be $t^n$ times a constant operator, so that $\mathrm{ad}_{W_n}\, \dot W_n = 0$ and the recurrence (33)-(38) now reads
$$\dot W_n = A_{n-1,n}, \qquad A_n = e^{-\varepsilon^n\, \mathrm{ad}_{W_n}}\, A_{n-1} - \varepsilon^n\, \dot W_n.$$
One can see that this procedure agrees with the algorithm presented in [21] for every term $W_n$, $n \ge 1$, in
$$U(t) = e^{\varepsilon t (X+Y)} = e^{\varepsilon t X}\, e^{\varepsilon W_1(t)}\, e^{\varepsilon^2 W_2(t)}\, e^{\varepsilon^3 W_3(t)} \cdots. \qquad (94)$$
The Zassenhaus formula is recovered by taking $t = 1$, i.e., $C_n(X, Y) = W_n(t = 1)$.
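Since $C_2 = -\frac12[X,Y]$ and $C_3 = \frac13[Y,[X,Y]] + \frac16[X,[X,Y]]$ are the classical low-order Zassenhaus terms, the agreement can be verified numerically. In the sketch below (random $3\times3$ matrices, our choice) the residual after the $e^{\varepsilon^3 C_3}$ factor decays as $\varepsilon^4$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
X, Y2 = rng.normal(size=(2, 3, 3))
com = lambda U, V: U @ V - V @ U

C2 = -0.5 * com(X, Y2)                                   # Zassenhaus C_2
C3 = com(Y2, com(X, Y2)) / 3.0 + com(X, com(X, Y2)) / 6.0  # Zassenhaus C_3

for eps in (0.2, 0.1, 0.05):
    exact = expm(eps * (X + Y2))
    prod = (expm(eps * X) @ expm(eps * Y2)
            @ expm(eps**2 * C2) @ expm(eps**3 * C3))
    print(eps, np.abs(exact - prod).max())   # residual ~ eps^4: ~16x per halving
```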

Expanding the exponential $\exp(A + \varepsilon B)$
Bellman, in his classic book [36], states that "one of the great challenges of modern physics is that of obtaining useful approximate relations for $e^{A+\varepsilon B}$ in the case where $AB \neq BA$". One such approximation was proposed and left undisclosed in [12, p. 175]. Assuming that $e^{A+\varepsilon B}$ can be written in the form
$$e^{A + \varepsilon B} = e^{A}\, e^{\varepsilon D_1}\, e^{\varepsilon^2 D_2}\, e^{\varepsilon^3 D_3} \cdots, \qquad (95)$$
Bellman proposed to determine the first three terms $D_1, D_2, D_3$, and pointed out that, contrary to other expansions, the product expansion (95) is unitary if $A$ and $B$ are skew-Hermitian.
It turns out that the Wilcox expansion can be used to provide explicit expressions for the $D_n$ for any two indeterminates $A$ and $B$, as we show in the sequel.
Before proceeding, it is important to remark that this problem differs from the Zassenhaus formula in the sense that the expansion parameter affects only one of the operators in the exponential. The solution goes as follows. We write
$$U(t) = e^{t(A + \varepsilon B)} = e^{tA}\, U_I(t)$$
and solve the differential equation satisfied by $U_I(t)$,
$$\frac{dU_I}{dt} = \varepsilon\, e^{-tA}\, B\, e^{tA}\; U_I, \qquad U_I(0) = I,$$
with the Wilcox expansion, so that
$$U_I(t) = e^{\varepsilon W_1(t)}\, e^{\varepsilon^2 W_2(t)}\, e^{\varepsilon^3 W_3(t)} \cdots. \qquad (98)$$
The operators $D_n$ in Eq. (95) are then obtained by taking $t = 1$.
The successive terms $W_n(t)$ in Eq. (98) can be determined either by the recursion (33)-(38) or by the explicit expression (56). For the first term we get
$$W_1(t) = \int_0^t e^{-sA}\, B\, e^{sA}\, ds = \int_0^t e^{-s\,\mathrm{ad}_A}\, B\; ds,$$
and so the expression for $D_1$ is given by
$$D_1 = W_1(1) = \frac{1 - e^{-\mathrm{ad}_A}}{\mathrm{ad}_A}\, B.$$
Although it is possible in principle to construct explicit expressions for $D_2$, $D_3$, etc., it is perhaps more convenient to apply the recursion (33)-(38) in each particular application.
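A direct numerical test of this first-order formula: compute $D_1$ by quadrature of $\int_0^1 e^{-sA}Be^{sA}\,ds$ and check that $e^{A}e^{\varepsilon D_1}$ reproduces $e^{A+\varepsilon B}$ up to $\mathcal{O}(\varepsilon^2)$. A sketch with random matrices and the midpoint rule, not production code:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
A, B = rng.normal(size=(2, 3, 3))

n = 1000                                     # midpoint quadrature on [0, 1]
s = (np.arange(n) + 0.5) / n
D1 = sum(expm(-u * A) @ B @ expm(u * A) for u in s) / n

for eps in (0.2, 0.1, 0.05):
    err = np.abs(expm(A + eps * B) - expm(A) @ expm(eps * D1)).max()
    print(eps, err)                          # decreases ~4x per halving: O(eps^2)
```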

Illustrative examples
We next particularize the Bellman problem (95) to matrices for which closed expressions for $D_1, D_2, \ldots$ can be obtained. The idea is to illustrate the behaviour of the product expansion by computing explicitly high-order terms with matrices in the SU(2) and SO(3) Lie algebras.
Matrices $A$ and $B$ in SU(2). In the first example we choose $A = i\alpha\sigma_z$ and $B = i\sigma_x$, where $i = \sqrt{-1}$, $\alpha$ is a real parameter and
$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
are the Pauli matrices. This instance is borrowed from Quantum Mechanics, where $\exp[i(\alpha\sigma_z + \varepsilon\sigma_x)]$ is a matrix that transforms the spin-$\frac12$ wave function in a Hilbert space. Using the scalar-vector notation to write down a linear combination of Pauli matrices, $\vec v \cdot \vec\sigma = v_x \sigma_x + v_y \sigma_y + v_z \sigma_z$, the matrix exponential reads
$$e^{i\, \vec v \cdot \vec\sigma} = \cos(v)\, I + i\, \frac{\sin(v)}{v}\; \vec v \cdot \vec\sigma, \qquad v \equiv |\vec v|.$$
In the sequel we work out the expansion
$$e^{i(\alpha\sigma_z + \varepsilon\sigma_x)} = e^{i\alpha\sigma_z}\, e^{\varepsilon D_1}\, e^{\varepsilon^2 D_2}\, e^{\varepsilon^3 D_3} \cdots \qquad (103)$$
up to order eleven in $\varepsilon$ and analyze the increasing accuracy of the product expansion as more terms are considered. The lhs in Eq. (103) may be thought of as a transformation involving $\sigma_z$ and $\sigma_x$. In turn, the rhs is a pure $\sigma_z$ transformation, i.e. $\exp(i\alpha\sigma_z)$, followed by an infinite succession of transformations, $\exp(\varepsilon^n D_n)$, whose effect should decrease with $n$. The truncated product expansion is expected to be accurate as long as $\varepsilon \ll \alpha$.
In Table 1 we write down explicitly the first five generators $W_n(t)$ for a generic $t$ (expressions for $n > 5$ are too involved to be collected here). The Wilcox-Bellman formula (103) then corresponds to $t = 1$. All the terms have been obtained with the recurrences of Section 3, starting from
$$W_1(t) = \frac{i}{2\alpha}\, \big[ s\, \sigma_x + (1 - c)\, \sigma_y \big],$$
where $c \equiv \cos(2\alpha t)$ and $s \equiv \sin(2\alpha t)$.
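The expression for $W_1$ is easy to validate with SciPy: with $D_1 = W_1(1)$ as above, the one-term product $e^{i\alpha\sigma_z}e^{\varepsilon D_1}$ should approach $e^{i(\alpha\sigma_z + \varepsilon\sigma_x)}$ with an $\mathcal{O}(\varepsilon^2)$ error. A minimal sketch, where $\alpha = 1$ is an arbitrary choice:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

alpha = 1.0
c, s = np.cos(2 * alpha), np.sin(2 * alpha)         # c, s evaluated at t = 1
D1 = 1j / (2 * alpha) * (s * sx + (1 - c) * sy)     # W1(1) from the text

for eps in (0.2, 0.1, 0.05):
    exact  = expm(1j * (alpha * sz + eps * sx))
    approx = expm(1j * alpha * sz) @ expm(eps * D1)
    print(eps, np.abs(exact - approx).max())        # error ~ eps^2
```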
The formulas in Table 1 show that $\varepsilon/\alpha$ may be considered as an effective expansion parameter. In Figure 2 we illustrate, for $\alpha = 1$, the accuracy of the Wilcox-Bellman product expansion in the example at hand as a function of $\varepsilon$. We plot the squared modulus of the non-diagonal matrix element, say $|U_{1,2}|^2$, of Eq. (103) for every analytic approximation up to order eleven in $\varepsilon$, as well as the exact result. Even orders do not contribute in this test because $D_{2k}$ is always proportional to $\sigma_z$, and therefore $\exp(\varepsilon^{2k} D_{2k})$ is a diagonal matrix. Since here $\|\varepsilon\, e^{-tA} B\, e^{tA}\| = \varepsilon$, the sufficient condition for convergence obtained above reads $\varepsilon < 0.658$. In turn, the behaviour of the curves in Figure 2 points out that convergence of the product expansion extends well beyond that lower bound for this particular example. Finally, Figure 3 shows the logarithm of the absolute error of the approximations given by the curves in Figure 2.
Matrices $A$ and $B$ in SO(3). The second example refers to the matrix that describes a rotation in three dimensions defined by the vector $\vec\phi = \phi\, \hat u$. Here $\phi$ stands for the rotation angle around the axis given by the unit vector $\hat u$. A generic 3D rotation matrix can be written as $\exp(\vec\phi \cdot \vec J)$, where the components of $\vec J$ are the three fundamental rotation generators
$$J_x = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad J_y = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \qquad J_z = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
We study the particular case
$$\vec\phi \cdot \vec J = \phi\, \big( \cos\beta\; J_z + \sin\beta\; J_x \big),$$
and compare the rotation of angle $\phi$ around the unit vector $(\sin\beta, 0, \cos\beta)$ with the sequence of transformations
$$e^{\phi\cos\beta\, J_z}\; e^{\phi\sin\beta\, D_1}\; e^{(\phi\sin\beta)^2 D_2}\; e^{(\phi\sin\beta)^3 D_3} \cdots \qquad (108)$$
In other words, the question we address is how the pure $z$-axis rotation in Eq. (108), $\exp(\phi\cos\beta\, J_z)$, has to be corrected by an infinite composition of further rotations so as to reproduce the one defined by $\vec\phi \cdot \vec J$. When $\phi\sin\beta$ is small enough the approach is expected to converge, since the expansion convergence lower bound reads in this case $|\phi\sin\beta| < 0.658$.
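In this setting the first generator can be obtained in closed form: with $a = \phi\cos\beta$, one finds $D_1 = \frac{\sin a}{a}\,J_x + \frac{\cos a - 1}{a}\,J_y$, the SO(3) analogue of the first entry of Table 1. The sketch below (our own check) verifies the trace identity used in the next paragraph and shows how this single correction factor already removes the first-order error:

```python
import numpy as np
from scipy.linalg import expm

Jx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float)
Jy = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], float)
Jz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], float)

phi = 0.9
for beta in (np.pi / 8, np.pi / 4, 3 * np.pi / 8):
    a, e = phi * np.cos(beta), phi * np.sin(beta)
    R = expm(a * Jz + e * Jx)                        # exact rotation exp(phi.J)
    print(np.trace(R) - (1 + 2 * np.cos(phi)))       # ~0: trace identity
    D1 = np.sin(a) / a * Jx + (np.cos(a) - 1) / a * Jy
    err0 = np.abs(R - expm(a * Jz)).max()            # no correction
    err1 = np.abs(R - expm(a * Jz) @ expm(e * D1)).max()  # one D1 factor
    print(round(beta, 3), err0, err1)                # D1 removes the O(e) error
```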
Here the accuracy of the product expansion depends on both the rotation angle $\phi$ and the relative orientation of the rotation axis, determined by the angle $\beta$. This is illustrated in Figures 4 and 5 for the first five orders of approximation, with $\beta = \pi/4$, $\pi/2$, $3\pi/4$ and $\pi$.
In order to test the expansion, we have computed the matrix trace of the successive approximants and compared it with the exact result,
$$\mathrm{tr}\; e^{\vec\phi \cdot \vec J} = 1 + 2\cos\phi.$$
The first two approximants are simple enough to be written down in closed form. Interestingly, the third approximation order is worse than the second one in all four cases. Also, in the case $\beta = \pi$ the fourth order is better than the fifth one.

Conclusions
When a linear system of differential equations, defined by the coefficient matrix $\varepsilon A(t)$, is transformed under $\exp\big(\varepsilon \int_0^t A(t')\, dt'\big)$, the coefficient matrix in the new representation becomes an infinite power series in $\varepsilon$, say $\tilde A(t) = \sum_{n>1} \varepsilon^n \tilde A_n(t)$. That is the first step of all matrix exponential methods to approximate the time-evolution operator. In the framework that we have introduced, it is the first move in a sequence of exponential transformations that change the linear system from one representation to another, with the goal that the dynamics becomes less and less relevant. Choosing the transformation $\exp\big(\int_0^t \tilde A(s)\, ds\big)$ as the second move and iterating this procedure afterwards yields the Fer expansion. Instead, choosing the transformation $\exp\big(\varepsilon^2 \int_0^t \tilde A_2(s)\, ds\big)$, i.e., the one generated by the leading term of the new coefficient matrix, opens up the Wilcox expansion. The framework allows intermediate expansions, taking $\exp\big(\sum_{n=2}^{N} \varepsilon^n \int_0^t \tilde A_n(s)\, ds\big)$ as initiator, as well as jumping between schemes, according to the particular requirements of the problem at hand.
We have seen that the theory of linear transformations (or changes of picture in the language of Quantum Mechanics) provides a unified framework to deal with many different exponential perturbative expansions: whereas only one linear transformation, reducing the dynamics to the trivial equation (4) or to a system with a constant coefficient matrix, renders the Magnus [9] and the Floquet-Magnus [37] expansions, respectively, a sequence of such transformations with different choices of the new matrices leads to the Wilcox and Fer factorizations. From this perspective, other factorizations are possible depending on the particular problem at hand: one only has to select appropriately the successive transformations.
In the case of the Wilcox expansion we have provided an efficient recursive procedure to compute it. In addition, we have developed a method to build up an explicit expression for any $W_n$ in terms of commutators. This is possible by using similar tools as in the case of the Magnus expansion, namely by relating products of iterated integrals with the structure of the Hopf algebra of permutations, and by using special bases of nested commutators. A sufficient condition for convergence of the expansion has also been obtained.
We have presented some application examples of the results about the Wilcox expansion. Firstly, we have shown how to obtain the Zassenhaus formula from the Wilcox expansion which, in turn, may be interpreted as its continuous analogue. Secondly, we have pointed out that the Wilcox expansion solves the problem of expanding the exponential $\exp(A + \varepsilon B)$ when $A$ and $B$ are non-commuting operators. We refer to it as the Wilcox-Bellman expansion. Two practical cases in this respect have been analyzed up to high order. Interestingly, in one of them the convergence seems not to be uniform. For convenience, the interested reader can find in [29] a Mathematica code generating general explicit expressions and recurrences for the Wilcox expansion.
While a full assessment of the Wilcox expansion in comparison with the Fer expansion is not the main purpose of this work, we can still mention here some of their most distinctive features. Both types of expansions construct the solution of Eq. (1) as an infinite exponential factorization, but in Wilcox the exponent of each factor is proportional to successive powers of the expansion parameter $\varepsilon$, whereas in Fer each exponent contains an infinite sum of powers of $\varepsilon$. This means that, when truncated after a given number of transformations, say $N$, the Wilcox expansion differs from the exact solution in the power $\varepsilon^{N+1}$. In other words, each term $W_n(t)$ in the Wilcox expansion collects the effect of the perturbation at order $\varepsilon^n$. On the other hand, the Fer expansion, when truncated after $N$ transformations, provides a much more accurate approximation. This is true, of course, if the infinite sums involved in each transformation are exactly computed, an almost impossible task unless the time dependence of $A(t)$ is simple enough. By contrast, we have explicit expressions for each exponent $W_n(t)$ in the Wilcox expansion for a generic $A(t)$ and, by using the same techniques as in the Magnus expansion, we can construct appropriate approximations to the iterated integrals if necessary. As the examples collected here and some other studies show [20], the Wilcox expansion can provide accurate results after only a few such transformations.

Figure 2: Accuracy of the Wilcox-Bellman product expansion up to order eleven as a function of the ratio $\varepsilon/\alpha$, with $\alpha = 1$. The quantity plotted is the squared modulus of the non-diagonal element of the matrix. The vertical grey line stands for the convergence lower bound $\varepsilon = 0.658$.

Figure 3: Absolute error of the approximations given by the curves in Figure 2, with $\alpha = 1$. The vertical grey line is located at the value of the convergence lower bound $\varepsilon = 0.658$.

Figure 4: Error in the approximations to the matrix trace as a function of the rotation angle $\phi$, for the two values $\beta = \pi/2$ and $\pi/4$.

Figure 5: Error in the approximations to the matrix trace as a function of the rotation angle $\phi$, for the two values $\beta = 3\pi/4$ and $\pi$.

Table 1: First five generators $W_n(t)$ in the Bellman problem for generic $t$. The operators $D_n$ in Eq. (95) are obtained by taking $t = 1$, i.e., $D_n = W_n(1)$. We have defined $s \equiv \sin(2\alpha t)$ and $c \equiv \cos(2\alpha t)$.