Exponential perturbative expansions and coordinate transformations

We propose a unified approach to different exponential perturbation techniques used in the treatment of time-dependent quantum mechanical problems, namely the Magnus expansion, the Floquet--Magnus expansion for periodic systems, the quantum averaging technique and the Lie--Deprit perturbative algorithms. Even standard perturbation theory fits in this framework. The approach is based on carrying out an appropriate change of coordinates (or picture) in each case, and can be formulated for any time-dependent linear system of ordinary differential equations. All the procedures (except standard perturbation theory) lead to approximate solutions that preserve unitarity by construction when applied to the time-dependent Schr\"odinger equation.


Introduction
Linear differential equations of the form

dU/dt = A(t) U,  U(0) = I,  (1)

are ubiquitous in many branches of physics, chemistry and mathematics. Here U is a real or complex n × n matrix, and A(t) is a sufficiently smooth matrix to ensure the existence of solutions. Perhaps the most important example corresponds to the Schrödinger equation for the evolution operator in quantum systems with a time-dependent Hamiltonian H(t), in which case A(t) = −iH(t)/ℏ. Particular cases include spin dynamics in magnetic resonance (Nuclear Magnetic Resonance, NMR; Electron Paramagnetic Resonance, EPR; Dynamic Nuclear Polarization, DNP; etc.) [3,1,2], electron–atom collisions in atomic physics, pressure broadening of rotational spectra in molecular physics, control of chemical reactions with driving induced by laser beams, etc. When the time dependence of the Hamiltonian is periodic, as occurs for instance in periodically driven quantum systems and in atomic quantum gases in periodically driven optical lattices [4,5], the Floquet theorem [6] relates U(t) with a constant Hamiltonian. More specifically, it implies that the evolution operator factorizes as U(t) = P(t) exp(tF), with P(t) a periodic time-dependent matrix and F a constant matrix. This theorem has been widely used in problems of solid state physics, in NMR, in the quantum simulation of systems with time-independent Hamiltonians by periodically driven quantum systems, etc. [7]. Average Hamiltonian Theory is also closely related to this result, and the effective Hamiltonian is an important tool in the description of the system [9,8].
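The Floquet factorization U(t) = P(t) exp(tF) can be checked numerically: integrating U′ = A(t)U over one period gives the monodromy matrix U(T), a matrix logarithm yields F, and P(t) = U(t) e^{−tF} then turns out to be T-periodic. The sketch below uses a hypothetical two-level coefficient matrix A(t), not an example from the text; `propagate` is an ad-hoc RK4 integrator introduced only for illustration.

```python
import numpy as np
from scipy.linalg import expm, logm

# Periodic coefficient matrix A(t) = -i H(t), H(t) = sigma_z + 0.5*cos(t)*sigma_x,
# with period T = 2*pi (a hypothetical two-level example).
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
A = lambda t: -1j * (sz + 0.5 * np.cos(t) * sx)

def propagate(t0, t1, n=2000):
    """Integrate U' = A(t) U with RK4 steps, U(t0) = I."""
    U = np.eye(2, dtype=complex)
    h = (t1 - t0) / n
    for k in range(n):
        t = t0 + k * h
        k1 = A(t) @ U
        k2 = A(t + h/2) @ (U + h/2 * k1)
        k3 = A(t + h/2) @ (U + h/2 * k2)
        k4 = A(t + h) @ (U + h * k3)
        U = U + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return U

T = 2 * np.pi
UT = propagate(0, T)                # monodromy matrix U(T)
F = logm(UT) / T                    # constant matrix in U(t) = P(t) exp(tF)
# P(t) = U(t) exp(-tF) is then T-periodic: P(2T) should equal P(0) = I.
U2T = propagate(0, 2 * T)
P2T = U2T @ expm(-2 * T * F)
print(np.allclose(P2T, np.eye(2), atol=1e-6))  # True: P is periodic
```

Note that any branch of the matrix logarithm works here, since exp(T·F) reproduces U(T) exactly by construction.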
In general, equation (1) cannot be solved in closed form, and so different approaches have been proposed over the years to obtain approximations, both analytic and numerical. Among the former, we can mention standard perturbation theory, average Hamiltonian theory, and the Magnus expansion [5,8,10]. Concerning the latter, different numerical algorithms have been applied to obtain solutions on specific time intervals [11,12,13].
In this work we concentrate on different techniques providing analytic approximations to the solution of (1) that also share some of its most salient qualitative features. In particular, if (1) represents the Schrödinger equation, it is well known that U(t) is unitary for all t, and this guarantees that its elements represent probabilities of transition between the different states of the system. It happens, however, that not every approximation scheme (whether analytic or numerical) renders unitary matrices, and thus the physical description such schemes provide is unreliable, especially for long integration times.
The Magnus expansion [14] presents the remarkable feature that it allows one to express the solution as the exponential of a series, U(t) = exp(Ω(t)), so that even when the series Ω(t) = Σ_{n≥1} Ω_n(t) is truncated, the corresponding approximation is still unitary when applied to the Schrödinger equation. More generally, if (1) is defined in a Lie group 𝒢, then it provides an approximate solution also belonging to 𝒢. Moreover, it has been used to construct efficient numerical integration algorithms preserving this feature [15,12].
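This unitarity-under-truncation property is easy to see in practice: for A(t) = −iH(t) with H(t) Hermitian, even the first Magnus term Ω₁(t) = ∫₀ᵗ A(s) ds is skew-Hermitian, so exp(Ω₁) is exactly unitary however crude the approximation may be. A minimal sketch with a hypothetical two-level H(s), not taken from the text:

```python
import numpy as np
from scipy.linalg import expm

# First Magnus term Omega_1(t) = int_0^t A(s) ds for A(s) = -i H(s):
# even this lowest-order truncation yields an exactly unitary approximation.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda s: sz + np.cos(s) * sx      # Hermitian for every s

t = 1.3
s = np.linspace(0, t, 2001)
vals = np.array([H(si) for si in s])
# trapezoidal quadrature of the integral, entrywise
Omega1 = -1j * ((vals[:-1] + vals[1:]) / 2).sum(axis=0) * (s[1] - s[0])
U1 = expm(Omega1)
print(np.allclose(U1.conj().T @ U1, np.eye(2)))  # True: unitary by construction
```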
When A(t) depends periodically on time with period T, the Magnus expansion does not explicitly provide the structure of the solution ensured by the Floquet theorem, i.e., U(t) = P(t) e^{tF} with P(t) periodic and F a constant matrix. Nevertheless, it can be conveniently generalized to cover this situation as well, in such a way that the matrix P(t) is expressed as the exponential of a series of periodic terms. The resulting approach (the so-called Floquet–Magnus expansion [10]) has been used in recent years in a variety of physical problems [7,4,16,5].
Very often, the coefficient matrix in (1) is of the form A(t) = A₀ + εA₁(t), where A₀ is constant, A₁(t + T) = A₁(t) and ε > 0 is a (small) parameter. In other words, one is dealing with a time-dependent perturbation of a solvable problem defined by A₀. In that case, several perturbative procedures exist to construct U(t) as a power series in ε, either directly (by applying standard perturbation techniques in the interaction picture defined by A₀) or by taking into account the structure ensured by the Floquet theorem and constructing both matrices P(t) and F as power series [9]. Of course, if P(t) = exp(Λ(t)) and Λ(t) is also constructed as a power series in ε, then the qualitative properties of the solution are inherited by the approximations (in particular, they are unitary in quantum evolution problems) [17].
An alternative way of viewing the Floquet theorem is to interpret the matrix P(t), provided it is invertible, as a time-periodic transformation to a new set of coordinates in which the evolution equation has a constant coefficient matrix F, so that exp(tF) is the exact solution in the new coordinates. In the language of Quantum Mechanics, this corresponds to a change of picture. This interpretation leads to the important mathematical notion of a reducible system: according to Lyapunov, the general system (1) is called reducible if there exists a matrix L(t) which, together with its inverse, is bounded on 0 ≤ t < +∞ and such that the system obtained from (1) by applying the linear transformation defined by L(t) has constant coefficients [18]. In this sense, if A(t) is a periodic matrix, then (1) is reducible by means of a periodic matrix. The situation is not so clear, however, when A(t) is quasi-periodic or almost periodic: although several general results exist ensuring reducibility [20,21,19], there are also examples of irreducible systems [22].
In this work we pursue and generalize this interpretation to show that all the above-mentioned exponential perturbative treatments can be considered as particular instances of a generic change of variables applied to the original differential equation. The idea of making a coordinate change to analyze a problem arises, of course, in many application settings, ranging from canonical (or symplectic) transformations in classical mechanics (based either on generating functions expressed in terms of mixed, old and new, variables [23], or on the Lie-algebraic setting [25,24]) to changes of picture and unitary transformations in quantum mechanics. What we intend here is to show that several widely used perturbative expansions in quantum mechanics can indeed be derived from the same basic principle, using different variations of a unique ansatz based on a generic linear transformation of coordinates. We believe this interpretation sheds new light on the different expansions and, moreover, allows one to elaborate a unique procedure for analyzing a given problem and to compare easily how the expansions behave in practice.
It is important to remark that all the procedures considered here (with the obvious exception of standard perturbation theory) preserve by construction the unitarity of the solution when (1) refers to the Schrödinger equation. More generally, the approximations obtained evolve in the same matrix Lie group as the exact solution of the differential equation (1).

Coordinate transformations and linear systems
To begin with, let us consider the most general situation of a linear differential equation

x′(t) = A(t) x(t),  x(0) = x₀,  (2)

with A(t) an n × n matrix whose entries are integrable functions of t. Notice that U(t) in (1) can be considered as the fundamental matrix of (2). We analyze a change of variables x ⟼ y transforming the original system (2) into

y′(t) = B(t) y(t),  (3)

where the matrix B adopts some desirable form. Since we are interested in preserving qualitative properties of (2), we impose an additional requirement on the transformation, namely that it be of the form

x(t) = exp(Ω(t)) y(t),  Ω(0) = O,  (4)

so that y(0) = x(0). Thus, in particular, if (2) is the Schrödinger equation with Hamiltonian H(t) = iℏA(t), Ω(t) is skew-Hermitian.
It is clear that the transformation (4) is completely determined once the generator Ω(t) is obtained. An evolution equation for Ω is derived by introducing (4) into (2) and taking into account (3):

(d/dt e^{Ω(t)}) y + e^{Ω(t)} B y = A e^{Ω(t)} y.  (5)

The derivative of a matrix exponential can be written as [26]

d/dt e^{Ω(t)} = dexp_Ω(Ω′(t)) e^{Ω(t)},  (6)

where the symbol dexp_Ω stands for the (everywhere convergent) power series

dexp_Ω(C) = Σ_{k≥0} 1/(k+1)! ad_Ω^k(C).  (7)

Here ad_Ω^0 C = C, ad_Ω^k C = [Ω, ad_Ω^{k−1} C], and [Ω, C] = ΩC − CΩ denotes the usual commutator. By inserting (6) into (5) one gets

dexp_Ω(Ω′) = A − e^{Ω} B e^{−Ω},  (8)

where e^{Ω} B e^{−Ω} = e^{ad_Ω}(B). If we invert the dexp_Ω operator given by (7), we get from (8) the formal identity

Ω′ = dexp_Ω^{−1}(A − e^{u}(B)),  (9)

where u ≡ ad_Ω. A more convenient way of expressing this identity is obtained by recalling that

dexp_Ω^{−1}(C) = Σ_{k≥0} (B_k/k!) ad_Ω^k(C),

where B_k are the Bernoulli numbers, so that (9) reads

Ω′ = Σ_{k≥0} (B_k/k!) ad_Ω^k (A − (−1)^k B).  (10)

More generally, we can assume that both A(t) and B(t) are power series in some appropriate parameter ε,

A(t) = Σ_{n≥0} ε^n A_n(t),  B(t) = Σ_{n≥0} ε^n B_n(t),  (11)

and thus the generator Ω will also be a power series,

Ω(t) = Σ_{n≥1} ε^n Ω_n(t).  (12)

The successive Ω_n(t) can be determined by inserting (11) and (12) into equation (10) and equating terms of the same power in ε. Thus, one obtains the recursive procedure

Ω_n′ = A_n − B_n + W_n,  Ω_n(0) = O,  (13)

where W_n collects the nested-commutator terms of (10) involving only the previously determined Ω_1, …, Ω_{n−1} together with A_m, B_m, m < n.  (14)

Of course, although the change of variables x ⟼ y is completely general, it is only worth considering if equation (3) is easier to solve than (2). In the following we analyze several ways of choosing B fulfilling this basic requirement, and show how in this way we are able to recover different exponential perturbative expansions.
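The derivative formula (6) underlying this construction can be verified numerically by truncating the dexp series (7) and comparing with a finite difference of the matrix exponential. The sketch below uses random non-commuting matrices; the truncation order and step size are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def ad(X, Y):
    """Commutator [X, Y]."""
    return X @ Y - Y @ X

def dexp(Omega, C, terms=30):
    """Truncation of dexp_Omega(C) = sum_{k>=0} ad_Omega^k(C) / (k+1)!."""
    out = np.zeros_like(C)
    acc = C.copy()
    fact = 1.0
    for k in range(terms):
        fact *= (k + 1)              # fact = (k+1)!
        out = out + acc / fact
        acc = ad(Omega, acc)
    return out

# Verify (d/dt) exp(Omega(t)) = dexp_Omega(Omega'(t)) exp(Omega(t)) on a
# random Omega(t) = t X + t^2 Y with non-commuting X, Y.
rng = np.random.default_rng(0)
X, Y = 0.5 * rng.normal(size=(2, 3, 3))
t, h = 0.7, 1e-6
Om = lambda u: u * X + u**2 * Y
dOm = X + 2 * t * Y                  # Omega'(t)
lhs = (expm(Om(t + h)) - expm(Om(t - h))) / (2 * h)   # central difference
rhs = dexp(Om(t), dOm) @ expm(Om(t))
print(np.max(np.abs(lhs - rhs)))     # limited only by finite-difference error
```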

Magnus expansion
The simplest choice one can imagine is taking B = O or, in other words, looking for a linear transformation rendering the original system (2) into

y′ = O,  (15)

with trivial solution y(t) = y(0) = x₀. A sufficient condition for the reducibility of equation (2) to (15) is given in [18]. If this is the case, from (4), x(t) = exp(Ω(t)) x₀, where Ω(t) is determined from (10) with B = O, i.e.,

Ω′ = Σ_{k≥0} (B_k/k!) ad_Ω^k(A),  Ω(0) = O.  (17)

This, of course, corresponds to the well-known Magnus expansion for the solution x(t) of (2) [14,26]. The terms Ω_n(t) are then determined by the recursion (14) by taking B_n = O. If we take A_0 = 0 and A_1(t) = A(t) in (11), we get the familiar recursive procedure [26], whence the successive terms Ω_n are obtained by integration. An explicit expression for Ω_n(t) involving only independent nested commutators can be obtained by working out this recurrence and using the class of bases proposed by Dragt & Forest [27] for the Lie algebra generated by the operators A_1(t_1), …, A_1(t_n). Specifically, one has an expression [28] for Ω_n as a sum over permutations σ of {1, 2, …, n−1}, where d_σ corresponds to the number of descents of σ. We recall that σ has a descent at position i whenever σ(i) > σ(i+1). Notice that the argument in the last term is fixed to t_n, and one considers all permutations of t_{σ(1)}, t_{σ(2)}, …, t_{σ(n−1)}. Moreover, the series (12) converges in this case for every t such that

∫₀ᵗ ‖A(s)‖₂ ds < π,  (19)

and the sum Ω(t) satisfies exp(Ω(t)) = U(t) [26]. Here ‖·‖₂ denotes the spectral norm, characterized as ‖A‖₂ = max{√λ : λ is an eigenvalue of A†A}. The Magnus expansion has a long history as a tool to approximate solutions in a wide spectrum of fields in Physics and Chemistry, from atomic and molecular physics to Nuclear Magnetic Resonance to Quantum Electrodynamics (see [26] and references therein). In computational mathematics it has also been used to construct efficient algorithms for the numerical integration of differential equations within the wider field of geometric numerical integration [15,13,12]. Recently it has also been used in the treatment of dissipative driven two-level systems [29].
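A minimal sketch of the first two Magnus terms, Ω₁(t) = ∫₀ᵗ A(s) ds and Ω₂(t) = ½∫₀ᵗ ds₁ ∫₀^{s₁} ds₂ [A(s₁), A(s₂)], for a hypothetical two-level A(t) (not one of the paper's examples): including Ω₂ visibly reduces the error with respect to a brute-force reference solution.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2x2 example: A(t) = -i (sigma_z + cos(t) sigma_x).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = lambda t: -1j * (sz + np.cos(t) * sx)

t, n = 0.8, 400
s = np.linspace(0, t, n + 1)
h = s[1] - s[0]
As = np.array([A(si) for si in s])

# Omega_1(t) = int_0^t A(s) ds   (trapezoid rule)
Om1 = ((As[:-1] + As[1:]) / 2).sum(axis=0) * h

# Omega_2(t) = (1/2) int_0^t ds1 int_0^{s1} ds2 [A(s1), A(s2)]  (Riemann sums)
Om2 = np.zeros((2, 2), dtype=complex)
for i in range(n + 1):
    inner = As[:i + 1].sum(axis=0) * h          # int_0^{s_i} A(s2) ds2
    Om2 += (As[i] @ inner - inner @ As[i]) * h  # [A(s_i), inner]
Om2 *= 0.5

# Reference solution via many small midpoint-exponential steps
U = np.eye(2, dtype=complex)
m = 4000
for k in range(m):
    U = expm(A((k + 0.5) * t / m) * (t / m)) @ U

err1 = np.linalg.norm(expm(Om1) - U)
err2 = np.linalg.norm(expm(Om1 + Om2) - U)
print(err1, err2)   # the second error is noticeably smaller than the first
```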

Floquet-Magnus expansion
The Magnus expansion can in principle be applied for any particular time dependence in A(t), as long as the integrals in Ω_n(t) can be computed or conveniently approximated. When A(t) in (2) is periodic in t with period T, however, other changes of variables may be more suitable. According to Floquet's theorem, the original system is reducible to a system with a constant coefficient matrix F, whose eigenvalues (the so-called characteristic exponents) determine the asymptotic stability of the solution x(t). In addition, the linear transformation is periodic with the same period T [6].
In our general framework, then, it makes sense to determine a change of variables x = exp(Ω(t)) y(t) in such a way that B in (3) is constant, so that y(t) = exp(tF) x₀ and Ω(t) is periodic. Proceeding as before, if we take A_0 = 0 and A_1(t) = A(t) in (11), the procedure (13)–(14) simplifies to

Ω_1′ = A − F_1,  (20)

and

Ω_n′ = W_n − F_n,  n ≥ 2.  (21)

Notice that, in general, W_n depends on the previously computed Ω_j, F_j, j = 1, …, n − 1, so that equations (21) can be solved recursively as follows. First we determine F_1 and F_n by taking the average of A and W_n, respectively, over one period T, and then compute the integrals

Ω_1(t) = ∫₀ᵗ (A(s) − F_1) ds,  Ω_n(t) = ∫₀ᵗ (W_n(s) − F_n) ds,

respectively, thus ensuring that Ω_n is periodic with the same period T. This results in the well-known Floquet–Magnus expansion for the solution of (2), originally introduced in [10] and subsequently applied in different areas [4,16,7,5].
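The first-order step of this procedure is easy to sketch: F₁ is the period average of A, and Ω₁(t) = ∫₀ᵗ (A(s) − F₁) ds vanishes at multiples of T, hence is periodic. The example below uses a hypothetical fast-driven two-level A(t); the quadratures and step counts are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# First-order Floquet-Magnus for a T-periodic A(t): F_1 is the period average
# of A, and Omega_1(t) = int_0^t (A(s) - F_1) ds is T-periodic by construction.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
w = 8.0
T = 2 * np.pi / w
A = lambda t: -1j * (sz + np.cos(w * t) * sx)

n = 2000
s = np.linspace(0, T, n + 1)
As = np.array([A(si) for si in s])
h = s[1] - s[0]
F1 = ((As[:-1] + As[1:]) / 2).sum(axis=0) * h / T   # period average of A

def Omega1(t):                                      # int_0^t (A(s) - F1) ds
    m = 1000
    u = np.linspace(0, t, m + 1)
    vals = np.array([A(ui) - F1 for ui in u])
    return ((vals[:-1] + vals[1:]) / 2).sum(axis=0) * (u[1] - u[0])

print(np.linalg.norm(Omega1(T)))   # ~0: Omega_1 vanishes at multiples of T

# U(t) ~ exp(Omega_1(t)) exp(t F_1); compare with brute-force propagation
t = 3.3 * T
U = np.eye(2, dtype=complex)
m = 20000
for k in range(m):
    U = expm(A((k + 0.5) * t / m) * (t / m)) @ U
Ufm = expm(Omega1(t)) @ expm(t * F1)
print(np.linalg.norm(Ufm - U))     # small for fast driving (large w)
```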
In the context of periodic quantum mechanical problems, H_ef ≡ iℏF is called the effective Hamiltonian of the problem. This expansion presents the great advantage that, in addition to preserving unitarity like the Magnus expansion, it also allows one to determine the stability of the system by analyzing the eigenvalues of F. As shown in [10], the resulting series for Ω(t) is absolutely convergent at least when ∫₀^T ‖A(s)‖₂ ds is sufficiently small. The procedure can be easily generalized to quasi-periodic problems [30]. We recall that A(t) is quasi-periodic in t with frequencies ω_1, …, ω_r if A(t) = Ã(θ_1, …, θ_r), where Ã is 2π-periodic with respect to θ_1, …, θ_r and θ_j = ω_j t for j = 1, …, r. In that case A(t) can be expanded in a Fourier series with exponentials e^{i(k,ω)t}, where (k, ω) = k_1ω_1 + ⋯ + k_rω_r. In this case W_n is also quasi-periodic (by induction), and we take F_n as the limiting mean value of W_n(t), which is independent of the particular value of t. In consequence, Ω_n is also quasi-periodic with the same basic frequencies as W_n.

Perturbed problems
There are many problems that can be formulated as (2) with A(t) depending on a perturbation parameter ε, A(t) = A₀ + Σ_{n≥1} ε^n A_n(t), where A₀ is a constant matrix and the A_n(t) are periodic or quasi-periodic functions of t.
Here again the issue of reducibility has received much attention over the years, the problem consisting in determining both the linear transformation P(t) and the constant matrix F. In this section we consider several possible ways to proceed, depending on the final matrix B one is aiming for.

Removing the perturbation
One obvious approach is to take B = A₀. In other words, we try to construct a transformation rendering the original system into another one in which the perturbation is removed [17]. The transformation fulfilling this requirement is therefore

x(t) = exp(Ω(t, ε)) e^{tA₀} x₀,

where we have written explicitly the dependence on ε, the parameter of the perturbation. The recurrence for determining the generators Ω_n is obtained from (13)–(14), with solution verifying Ω_n(0) = O given by direct integration. As a matter of fact, this scheme can be related to the usual perturbative treatment of equation (2). To keep the formalism simple, let us assume that A(t) in (11) is A(t) = A₀ + εA₁(t) and work directly with the power series of the transformation,

P(t, ε) = exp(Ω(t, ε)) = I + Σ_{n≥1} ε^n P_n(t).

Then, by inserting x(t) = P(t, ε) e^{tA₀} x₀ into equation (2), one determines the differential equation satisfied by P as Ṗ + P A₀ = A P, whence the successive terms P_n(t) verify

Ṗ_n = A₀ P_n − P_n A₀ + A₁ P_{n−1},  P_0 ≡ I.

The solution, as with equation (24), is given by iterated integrals of Ã₁(t) ≡ e^{−tA₀} A₁(t) e^{tA₀}. Therefore, taking into account the explicit expression (25) for P_n(t), the solution of (2) recovers the solution provided by standard perturbation theory in the picture defined by A₀.
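The interaction-picture iteration just described can be sketched numerically: writing x(t) = e^{tA₀} y(t), the iterated integrals Q₁(t) = ∫₀ᵗ Ã₁(s) ds and Q₂(t) = ∫₀ᵗ Ã₁(u) Q₁(u) du reproduce standard perturbation theory up to O(ε³). The example below (a hypothetical two-level A₀, A₁, not from the text) also illustrates the drawback of this scheme: the truncated polynomial series is not unitary.

```python
import numpy as np
from scipy.linalg import expm

# Interaction picture: with A(t) = A0 + eps*A1(t), write x(t) = e^{t A0} y(t);
# then y' = eps * A1t(t) y with A1t(t) = e^{-t A0} A1(t) e^{t A0}, and the
# iterated (Dyson-type) integrals give y = I + eps*Q1 + eps^2*Q2 + O(eps^3).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A0 = -1j * sz
A1 = lambda t: -1j * np.cos(t) * sx
eps, t = 0.1, 1.0

n = 800
s = np.linspace(0, t, n + 1)
h = s[1] - s[0]
At = np.array([expm(-si * A0) @ A1(si) @ expm(si * A0) for si in s])

# Q1(s_i) = int_0^{s_i} A1t ; Q2(s_i) = int_0^{s_i} A1t(u) Q1(u) du (trapezoid)
Q1 = np.zeros_like(At)
Q2 = np.zeros_like(At)
for i in range(1, n + 1):
    Q1[i] = Q1[i-1] + (At[i-1] + At[i]) / 2 * h
    Q2[i] = Q2[i-1] + (At[i-1] @ Q1[i-1] + At[i] @ Q1[i]) / 2 * h

approx = expm(t * A0) @ (np.eye(2) + eps * Q1[-1] + eps**2 * Q2[-1])

# brute-force reference
U = np.eye(2, dtype=complex)
m = 8000
for k in range(m):
    tk = (k + 0.5) * t / m
    U = expm((A0 + eps * A1(tk)) * (t / m)) @ U
print(np.linalg.norm(approx - U))   # O(eps^3) residual
```

Note that `approx`, unlike the exponential schemes, fails to be exactly unitary (the violation appears at order ε³ here).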

Lie-Deprit perturbative algorithm
The previous treatment has one important drawback (in addition to the lack of preservation of unitarity when one deals with the Schrödinger equation and the expansion is truncated): when A(t) is a periodic or quasi-periodic function of time, the secular terms are not removed and, as a result, the time interval of validity of the resulting approximations is very small indeed. For this reason it is worth considering, within the general framework (13)–(14), the quasi-periodic case and looking for a quasi-periodic transformation leading to a constant coefficient matrix. This problem has been addressed a number of times in the literature (see e.g. [32,21] and references therein). In particular, in [33] a perturbative algorithm is presented for constructing this transformation as the exponential of a quasi-periodic matrix. Here we show how the results of [33,17] can be reproduced by the generic scheme proposed here in a more direct way, in the sense that the terms Ω_n are determined at once from (13)–(14).
It is clear that the problem reduces to solving (13)–(14), where now B_n = F_n is constant, and the goal is to determine the constant terms F_n and to construct Ω_n(t) as a quasi-periodic function with the same basic frequencies as A_n for n ≥ 1.
As shown in the Appendix, the solution verifying all these requirements is given in terms of the antiderivatives of the corresponding quasi-periodic terms. Finally, the solution of (2) is written as

x(t) ≈ exp(Ω(t, ε)) e^{tF(ε)} x₀,

where Ω(t, ε) and F(ε) are appropriate truncations of the corresponding series.

Quantum averaging
In the context of time-dependent quantum mechanical systems, the averaging method has been used to construct quantum analogues of classical perturbative treatments [34,35]. Essentially, the basic approach in that setting is to transform the original Hamiltonian of the problem by a unitary transformation so that finding the time evolution generated by the transformed Hamiltonian is easier than for the original one. The idea has also been applied in the perturbative treatment of pulse-driven quantum problems [37,36,38]. This approach to quantum averaging fits into our general framework of Section 2 by taking B₀ = A₀ and requiring that the terms B_n(t) in (11) verify in addition

Ḃ_n = [A₀, B_n].  (27)

Clearly, the solution of (27) verifies B_n(t) = e^{t ad_{A₀}} B_n(0) = e^{tA₀} B_n(0) e^{−tA₀}, and thus B(t) = e^{tA₀} B(0) e^{−tA₀}, which leads to e^{−tA₀} y(t) = e^{t(B(0) − A₀)} y(0) and finally

y(t) = e^{tA₀} e^{t(B(0) − A₀)} y(0).
In other words, condition (27) guarantees that equation (3) can be solved in closed form, and the dynamics of (2) is obtained once the generator Ω(t) is determined, even when B depends explicitly on time. As shown in [35], the corresponding generators can be determined recursively; in particular, Λ₁ = A₁ and Λ_n = A_n + W_n − B_n for n ≥ 2.
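The closed-form solution y(t) = e^{tA₀} e^{t(B(0)−A₀)} y(0) is straightforward to verify numerically for any B(t) of the form e^{tA₀} B(0) e^{−tA₀}; the matrices below are random and purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

# If B(t) = e^{t A0} B(0) e^{-t A0}, the system y' = B(t) y has the closed-form
# solution y(t) = e^{t A0} e^{t (B(0) - A0)} y(0), even though B depends on t.
rng = np.random.default_rng(1)
A0 = rng.normal(size=(3, 3))
B0 = rng.normal(size=(3, 3))
B = lambda t: expm(t * A0) @ B0 @ expm(-t * A0)

t = 0.5
y0 = rng.normal(size=3)
closed = expm(t * A0) @ expm(t * (B0 - A0)) @ y0

# RK4 integration of y' = B(t) y for comparison
y = y0.copy()
n = 4000
h = t / n
for k in range(n):
    tk = k * h
    k1 = B(tk) @ y
    k2 = B(tk + h/2) @ (y + h/2 * k1)
    k3 = B(tk + h/2) @ (y + h/2 * k2)
    k4 = B(tk + h) @ (y + h * k3)
    y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
print(np.linalg.norm(y - closed))   # agrees to high accuracy
```

The derivation is one line: z = e^{−tA₀} y satisfies z′ = (B(0) − A₀) z, a constant-coefficient system.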

Illustrative examples
In previous sections we have reviewed several perturbative expansions aimed at obtaining analytic approximations to the solution of (1), and shown how they can be derived from the same basic principle, namely a transformation of coordinates. This allows one, in particular, to design a single computational procedure to deal with a particular problem defined by A(t) and to take the most appropriate variant in each case. To illustrate the technique we next consider two examples describing the dynamics of simple time-dependent quantum systems, although the same treatment can of course be applied to other, more involved problems.

The three-lambda system
The so-called driven three-lambda system describes an atomic three-level system with two ground states |1⟩, |2⟩ of the same energy E₁ that are coupled to an excited state |3⟩ with energy E₃ via a time-dependent laser field. In the interaction picture the Hamiltonian is given by H(t) = f(t)|3⟩(⟨1| + ⟨2|) + Hermitian conjugate [eq. (28)], where f(t) is usually a periodic function. The corresponding Schrödinger equation, iℏ U̇(t) = H(t) U(t), is then a particular case of equation (2) with A(t) = −iH(t)/ℏ, and one is typically interested in obtaining the induced probability of transition between states |1⟩ and |2⟩, namely P₁→₂(t) = |⟨2|U(t)|1⟩|². We follow [30] and take a periodic driving f(t) with ε = 1 and frequency ω ≈ 7.65. Starting from the initial condition U(0) = I we compute the approximate solution until the final time t_f = 400, i.e., for 487 periods, with the Floquet–Magnus (FM) expansion (20)–(21) up to n = 3 and n = 4 terms, and determine the corresponding transition probability. In this way we get Figure 1, where the result achieved by the effective Hamiltonian, U(t) = e^{tF} with F = −iH_ef/ℏ, is also depicted. We see that for this value of ω, considering more terms in the series for both Ω(t) and F leads to better results, and that working only with the effective Hamiltonian already gives a good approximation. The reference solution is computed with the DSolve function of Mathematica. To get a more quantitative view, in Figure 2 (bottom) we also show the error in the transition probability (in logarithmic scale) for the same values of ω, but now including n = 3 and n = 7 terms in the FM expansion, as well as the result obtained with just the effective Hamiltonian in this last case. As is clear from the figure, taking into account more terms in the FM expansion improves the approximations, and this improvement is more remarkable for larger values of ω. Here the approximate solution is obtained as U(t) = exp(Ω(t)) exp(tF), and the two parts of U encode different behaviors of the system. Thus, exp(Ω(t)) describes fast fluctuations around the envelope evolution provided by exp(−itH_ef/ℏ), but these fluctuations are not always negligible, especially if the driving frequency is not too large. In any case, it is worth noticing that taking only the effective Hamiltonian into consideration renders less accurate results than working with the full approximation.
Next we take f(t) in (28) to be a quasi-periodic function, given by (30) with ε = 1 and ω = 12, and compute approximations using the Floquet–Magnus and Magnus expansions with n = 2 and n = 3 terms. The corresponding errors in the transition probability are shown in Figure 3. We see that in this case including more terms in the expansions does not significantly improve the accuracy of the approximations. This is an indication of the lack of convergence of the expansions for these values of the parameters. In fact, it is straightforward to bound ‖A(t)‖₂ for the values of the parameters considered, so that the convergence of the Magnus expansion is ensured for t < 1.6117 according to the estimate (19). To illustrate the behavior in this range of times, in Figure 4 we collect the results obtained by taking up to n = 6 terms in the expansion. It is worth noticing the systematic decrease in the error when more terms are included in the expansion. A similar pattern is observed for the Floquet–Magnus expansion, but for much smaller times (the convergence is only assured for t < 0.074).
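A brute-force computation of the transition probability for a system of this type can serve as reference when testing the expansions. The sketch below builds the three-lambda Hamiltonian with a stand-in periodic driving f(t) = cos(ωt) (the specific f and parameters of [30] are not reproduced here) and propagates with midpoint exponential steps, which keeps the numerical evolution unitary:

```python
import numpy as np
from scipy.linalg import expm

# Three-lambda Hamiltonian H(t) = f(t)|3>(<1|+<2|) + h.c., with a stand-in
# periodic driving f(t) = cos(w t) (hypothetical parameters, hbar = 1).
w = 7.65
f = lambda t: np.cos(w * t)

def H(t):
    Hm = np.zeros((3, 3), dtype=complex)
    Hm[2, 0] = Hm[2, 1] = f(t)          # |3><1| and |3><2| couplings
    return Hm + Hm.conj().T

t_f, m = 10.0, 5000
U = np.eye(3, dtype=complex)
for k in range(m):
    tk = (k + 0.5) * t_f / m
    U = expm(-1j * H(tk) * (t_f / m)) @ U   # unitary midpoint steps

p12 = abs(U[1, 0])**2                   # transition probability |<2|U(t)|1>|^2
print(p12, np.allclose(U.conj().T @ U, np.eye(3)))
```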

Bloch-Siegert dynamics
This is an example of a periodically driven two-level quantum system described by the time-dependent Hamiltonian H(t) = H₀ + εH₁(t), with H₀ = (ω₀/2)σ₃ and H₁(t) = cos(ωt)σ₁, where the σ_j are Pauli matrices. The coupling parameter ε is the amplitude of a driving radio-frequency field and ω is its frequency. With this simple example we can illustrate the different treatments previously considered, both perturbative and non-perturbative, and also analyze how the results vary with the perturbation parameter ε.
As in [39], we consider the resonant case ω = ω₀ = 1 and two different values of the perturbation, ε = 0.2 and ε = 1. One could apply the Magnus expansion (17) directly to the matrix A(t) = −iH(t)/ℏ. In that case, however, the convergence of the series, as given by (19), is only ensured for times t ∈ [0, t_f] with ∫₀^{t_f} ‖A(s)‖₂ ds < π. Thus, in particular, if ε = 0.2, then t_f ≈ 6.056, whereas for ε = 1 it reduces to t_f ≈ 3.608. We cannot expect, then, accurate results for larger time intervals, and this is indeed what is observed in practice. Something analogous happens for the Floquet–Magnus expansion, with the difference that the interval of convergence is even smaller. It makes sense, in consequence, to apply both procedures in the interaction picture, i.e., to the matrix Ã₁(t) = e^{−tA₀} A₁(t) e^{tA₀}, and this is what we have done in our computations by applying the procedure of Section 4. We also compare with the Lie–Deprit (LD) perturbation algorithm and with the formalism removing the perturbation, i.e., such that B = A₀. As in the previous example, we compute |u₁₂(t)|², i.e., the probability of transition between the two levels, obtained with each procedure with n = 3 terms. The corresponding results are shown in Figure 5 as a function of time, where we also include the exact result for comparison. Left diagrams are obtained with FM (in the interaction picture), whereas the right diagrams depict the results obtained by LD and also when B is taken as A₀. Notice that top and bottom panels correspond to ε = 0.2 and ε = 1, respectively. We have also depicted the result obtained with just the effective Hamiltonian in the case of the Floquet–Magnus expansion. It can be observed that both the LD perturbative algorithm and the one obtained by removing the perturbation provide accurate results only when ε takes small values, in contrast with Floquet–Magnus in the interaction picture, which has an enlarged validity domain in ε. Notice that also in this case the effective Hamiltonian is not enough to get a precise description of the dynamics when ε increases. We should remark that all schemes render unitary approximations by construction. For comparison, in Figure 6 we collect the results achieved by the standard Magnus expansion in the interaction picture for this same problem and values of the perturbation parameter: ε = 0.2 (left) and ε = 1 (right). Here again, increasing the value of ε leads to a loss of accuracy for long times. If the Magnus expansion is applied to the matrix A(t) itself, then accurate results are only obtained for times in the convergence domain.
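The convergence times quoted above follow from the estimate (19): t_f is fixed by ∫₀^{t_f} ‖A(s)‖₂ ds = π, with ‖A(s)‖₂ = [(ω₀/2)² + ε² cos²(ωs)]^{1/2} for a Hamiltonian of the form H(t) = (ω₀/2)σ₃ + ε cos(ωt)σ₁ (assumed here, with ℏ = 1). A short numerical check reproduces both values:

```python
import numpy as np

# Locate the Magnus convergence bound  int_0^{tf} ||A(s)||_2 ds = pi  for
# A(t) = -i H(t), H(t) = (w0/2) sigma_3 + eps*cos(w t) sigma_1, w0 = w = 1.
def tf_bound(eps, w0=1.0, w=1.0):
    norm = lambda s: np.sqrt((w0 / 2)**2 + (eps * np.cos(w * s))**2)  # ||A(s)||_2
    t, acc, h = 0.0, 0.0, 1e-4
    while acc < np.pi:
        acc += norm(t + h / 2) * h      # midpoint-rule accumulation
        t += h
    return t

print(tf_bound(0.2), tf_bound(1.0))     # close to the quoted 6.056 and 3.608
```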
To analyze how incorporating an increasing number of terms in the Floquet–Magnus expansion improves the approximation, even for large values of ε, we show in Figure 7 the results for the transition probability for ε = 1.5 obtained with n = 3, 5, 7, 9 terms (left to right, top to bottom) as a function of time. Notice that even for this large value of the perturbation parameter, we still have a correct description of the dynamics when a sufficiently large number of terms is taken into account. To get a more quantitative description of the different approximations, in Figure 8 we depict the error in the transition probability committed by the Floquet–Magnus expansion (left) and Lie–Deprit (right) with n = 3, 5, 7, 9 terms each for ε = 0.5. Notice that taking into account more terms leads to a reduction in the error, and that the LD approximation clearly deteriorates with time, whereas FM still provides excellent results even for large time intervals.

Concluding remarks
The perturbative treatment of linear systems of time-dependent differential equations has a long history in physics and mathematics. In physics, in particular, it plays a fundamental role in the study of the evolution of quantum systems. Among the different procedures, the so-called exponential perturbation algorithms have the relevant property of preserving the unitary character of the evolution. More generally, the approximations they provide belong to the same Lie group as the exact solution of the problem. Archetypal examples of exponential perturbation theories are the Magnus expansion, since its inception in the 1950s, and, more recently, the Floquet–Magnus expansion, several quantum averaging procedures and a generalization of the well-known Hori–Deprit perturbation theory of classical mechanics. Each of these algorithms has been derived independently, and it is not always easy to establish connections and common points between them.
The present paper tries to bridge this gap by showing that all of them can in fact be seen as the result of linear changes of coordinates expressed as the exponential of a certain series whose terms can be obtained recursively. In addition to the Magnus and Floquet–Magnus expansions, other techniques can also be incorporated into our general framework, including the quantum averaging method and the Lie–Deprit perturbative algorithm. In the process we have also considered a novel approach, namely an exponential transformation rendering the original system into another one in which the perturbation is removed. The resulting approximations preserve whatever qualitative features the exact solution may possess (unitarity, orthogonality, symplecticness, etc.). Even standard perturbation theory in the interaction picture is recovered in this setting when the exponential defining the transformation is truncated.
Within this same framework one might of course consider other exponential transformations, which would automatically lead to new perturbation formalisms particularly adapted to the problem at hand. For instance, we could choose B(t) as a diagonal time-dependent matrix, so that equation (3) is easy to solve, and construct the corresponding generator Ω(t).

Figure 2 :
Figure 2: Top: transition probability for the three-lambda system with f(t) = e^{iωt} obtained with the Floquet–Magnus (blue line) and Magnus (red line) expansions, both with n = 3 terms. The left diagram is for ω = 6, the right one for ω = 12. Bottom: error (in logarithmic scale) as a function of time for Floquet–Magnus with n = 3 and n = 7 terms, and the effective solution with n = 7 terms (purple).

Figure 3 :
Figure 3: Error (in logarithmic scale) in the transition probability for the quasi-periodic case (30) as a function of time for FM (left) and Magnus (right) with n = 2 and n = 3 terms in the expansion. In both cases ω = 12 in (30).

Figure 4 :
Figure 4: Detail of the right panel of Figure 3 in the interval t ∈ [0, 30]. We have included up to n = 6 terms in the Magnus series.

Figure 5 :
Figure 5: Transition probability obtained with FM (left, blue line) and LD (right, green line) with n = 3 terms in comparison with the exact result for the Bloch–Siegert Hamiltonian with ε = 0.2 (top) and ε = 1 (bottom). The black line corresponds to the exact solution, and the orange one to the result achieved with the effective Hamiltonian. On the right panels the results achieved by the algorithm removing the perturbation (B = A₀) are also depicted.

Figure 6 :
Figure 6: Transition probability obtained with the standard Magnus expansion with n = 3 terms (red line) in comparison with the exact result (black line) for the Bloch–Siegert Hamiltonian with ε = 0.2 (left) and ε = 1 (right).

Figure 7 :
Figure 7: Transition probability obtained with FM (solid blue line) with n = 3, 5, 7, 9 terms (left to right, top to bottom) in comparison with the exact result (black line) for the Bloch–Siegert Hamiltonian with ε = 1.5.

Figure 8 :
Figure 8: Error (in logarithmic scale) in the transition probability committed by FM (left) and LD (right) with n = 3, 5, 7, 9 terms each for the Bloch–Siegert Hamiltonian with ε = 0.5. Notice the different time interval in each panel.