Connection between Inverse Engineering and Optimal Control in Shortcuts to Adiabaticity

We consider fast high-fidelity quantum control using a shortcut-to-adiabaticity (STA) technique and optimal control theory (OCT). Three specific examples, namely the expansion of cold atoms from a harmonic trap, atomic transport in a moving harmonic trap, and spin dynamics in the presence of dissipation, are detailed explicitly. Using OCT as a qualitative guide, we demonstrate how STA protocols designed by the inverse engineering method can approach, with very high precision, optimal solutions built around physical constraints, through a proper choice of the interpolation function and with a very reduced number of adjustable parameters.


Introduction
The last ten years have witnessed the huge development of "shortcuts to adiabaticity" (STA), with wide applications ranging from atomic, molecular, and optical (AMO) physics to quantum information transfer and processing [1,2]. The concept of STA was originally proposed to speed up adiabatic processes in quantum control. Nowadays, STA has become a versatile toolbox for controlling dynamics and transformations in quantum physics [1,2], statistical physics [3,4], integrated optics [5], and classical physics [6-9]. In this context, the most popular STA techniques are fast-forward scaling [10,11], counterdiabatic driving [12,13] (or transitionless quantum driving [14-17]), invariant-based inverse engineering [18], and their variants. These three techniques can be shown to be mathematically equivalent [19,20]. However, the diversity of the designs of shortcut protocols, or their combination, may be required for a realistic experimental implementation [21]. Furthermore, some counterdiabatic Hamiltonians turn out to be unfeasible [22], and some systems cannot be treated by means of invariant-based engineering.
The STA method provides a useful toolbox for fast and robust quantum control, with applications in a wide variety of quantum platforms such as cold atoms [23,24], NV center spins [25,26], including their use as quantum sensors [27], trapped ions [28], and superconducting qubits [29-32], to name a few.
Such controls also have a clear added value with respect to quantum optimal control in quantum information processing and quantum computing [33], in terms of analytical tools, numerical tools, and combinations of the two. Numerical optimal control methods such as the gradient ascent pulse engineering (GRAPE) algorithm work to some extent as a black box: the dynamics and the structure of the control field are not easily predictable [34]. STA techniques, based on a clear physical picture, deliver a more easily understandable framework, but mostly address problems of low complexity. However, these techniques have recently been combined with deep machine learning for more involved physical problems [35-38].
Interestingly, shortcut protocols can be readily engineered to accommodate various physical constraints [39] or to mitigate environmental noise. In this respect, the combination of the inverse engineering (IE) method and optimal control theory (OCT) has been particularly fruitful [40-48]. Most STA techniques provide solutions that are robust against a small variation of the duration of the parameter engineering. In Ref. [49], it is shown how OCT solutions can be adapted to accommodate extra boundary conditions that ensure a similar robustness. Alternatively, the STA technique of Ref. [50] provides an explicit solution for linear control problems fulfilling the Kalman criterion [51].
In this article, we compare systematically the IE method with the results of OCT on three specific examples that can be addressed analytically in both formalisms: the expansion of cold atoms from a harmonic trap, atomic transport in a moving harmonic trap, and spin dynamics in the presence of dissipation. Our aim is to provide a pedagogical introduction to, and comparison between, a simple, if not the simplest, shortcut-to-adiabaticity technique, namely the direct inverse engineering of the equation of motion of the dynamical variables, and optimal control theory. STA techniques are built around boundary conditions, while OCT involves the minimization of a cost function. To facilitate the comparison, we therefore discuss how inverse-engineered (IE) solutions can be modified in order to minimize a cost function and mimic OCT solutions. Similarly to the variational method in quantum mechanics, and as illustrated in the following, the family of functions over which the minimization is performed plays a crucial role. In the following, we also show how a simple ansatz having just a few tunable parameters can approach very precisely the optimal solution obtained for a given physical constraint.
Fast Atomic Cooling in Time-Dependent Harmonic Traps

In this section, we address the problem of fast atomic cooling in a time-dependent harmonic trap [18]. We derive the time dependence of the trap frequency both by an inverse engineering procedure applied to an Ermakov equation and by using OCT. We subsequently compare the two types of solutions. Interestingly, the tunability inherent to the inverse engineering method provides the required flexibility to shape the inverse-engineered trajectories so as to minimize a cost function. We show how such solutions can be simply adapted to get results very close to the optimal solutions for a time-averaged energy cost function [18,40,56].

Model, Hamiltonian, and the Inverse Engineering Approach
More specifically, we consider in the following the fast decompression of a one-dimensional (1D) harmonic potential from an initial angular frequency ω(0) = ω_0 to the final target one ω(t_f) = ω_f, with ω_f < ω_0. The problem amounts to finding the time-dependent solution of the Schrödinger equation that ensures the transformation from the ground state of the initial trap to the ground state of the final trap in a finite amount of time t_f. For this purpose, we look for a scaling solution of the form ψ(x, t) = e^{−β(t)} e^{−α(t)x²} f(x/b(t), t). The first factor accounts for the normalization, the second factor for the evolution of the phase (we show below that α is purely imaginary), and the last one for the desired scaling dynamics. By plugging such an ansatz into the Schrödinger equation, we find how the different parameters are related. By introducing the renormalized time t̃(t) = ∫_0^t dt′/b²(t′) and for the choice α = (−im/2ℏ) ḃ/b and β = (ln b)/2, the effective wave function Ψ(ρ, t̃) = f(ρ, t) obeys a time-independent Schrödinger equation, provided that the scaling parameter b(t) satisfies the following Ermakov equation:

b̈ + ω²(t) b = ω_0²/b³. (4)

Interestingly, this latter equation is amenable to a set of linear equations. Indeed, it is the equation of an effective 2D oscillator in polar coordinates; the ω_0²/b³ term is nothing but the centrifugal barrier, which acts as a repulsive force that prohibits access to a zero value of b. Alternatively, the very same result can be obtained by using the Lewis-Riesenfeld dynamical invariant [18]. The ground state wave function (5) in such a time-dependent harmonic trap follows accordingly, where N accounts for the normalization and a_0 = (ℏ/(mω_0))^{1/2}.
The self-consistent boundary conditions for a smooth continuous interpolation function are [18]:

b(0) = 1, ḃ(0) = 0, b̈(0) = 0, b(t_f) = γ = (ω_0/ω_f)^{1/2}, ḃ(t_f) = 0, b̈(t_f) = 0. (6)

As a simple example, one can choose for the scaling factor b(t) the fifth-order polynomial ansatz that fulfills the above six boundary conditions [18]:

b(t) = 1 + (γ − 1)(10s³ − 15s⁴ + 6s⁵), with s = t/t_f. (7)

In view of the comparison with optimal protocols, we calculate hereafter the mean energy associated with the ground state wave function (5) [56], E(t) = K(t) + E_p(t), the sum of the kinetic energy K(t) = ℏ(ḃ² + ω_0²/b²)/(4ω_0) and the potential energy E_p(t) = ℏω²(t)b²/(4ω_0); the mean energies obey the virial theorem. For any b(t) trajectory that fulfills the boundary conditions, one can infer ω(t) from Equation (4) and calculate the mean energies explicitly. Using the available freedom to shape the scaling factor b(t), the inverse-engineered solutions can be tuned so as to minimize the time-averaged energy, as discussed below.
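
As a minimal numerical sketch, the polynomial ansatz (7) can be inverted through the Ermakov Equation (4) to obtain the trap-frequency profile. The trap frequencies and duration below are assumed example values, not taken from the text:

```python
import numpy as np

# Assumed example parameters: initial/final trap frequencies and duration.
w0, wf, tf = 2*np.pi*250.0, 2*np.pi*25.0, 6e-3
gamma = np.sqrt(w0/wf)           # target scaling factor b(tf), Eq. (6)
s = np.linspace(0.0, 1.0, 2001)  # reduced time s = t/tf

# Fifth-order polynomial ansatz (7) fulfilling the six boundary conditions (6).
b = 1 + (gamma - 1)*(10*s**3 - 15*s**4 + 6*s**5)
d2b = (gamma - 1)*(60*s - 180*s**2 + 120*s**3)/tf**2

# Invert the Ermakov equation  b'' + w(t)^2 b = w0^2/b^3  for the control w(t)^2.
w2 = w0**2/b**4 - d2b/b

print(w2[0]/w0**2, w2[-1]/wf**2)  # both ratios are 1: boundary frequencies recovered
```

Since b̈ vanishes at both ends, the inferred ω²(t) automatically connects ω_0² to ω_f² continuously.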

Optimal Control Theory
STA protocols such as IE are built around boundary conditions; we have provided an example using a polynomial interpolation. OCT offers an alternative way to find a path between two states, but it is built around a cost function. We propose hereafter to apply OCT to the Ermakov Equation (4).
For this purpose, we recast Equation (4) into a set of first-order nonlinear coupled equations, ẋ = f(x(t), u), by defining the components x_1 = b(t) and x_2 = ḃ/ω_0, and introducing the (scalar) control function u(t) = ω²(t)/ω_0²:

ẋ_1 = ω_0 x_2,
ẋ_2 = ω_0 (1/x_1³ − u x_1).

In the following, we work out two OCT solutions, associated with the minimization of the final time and of the mean energy, respectively. As a result of the nonlinear character of this set of Hamiltonian equations, the Pontryagin maximum principle only gives a necessary condition for an extremum.

Time-Optimal Solution
The so-called time-optimal solution amounts to minimizing the cost function J = t_f = ∫_0^{t_f} dt with the boundary conditions (6), which translate into x_1(0) = 1, x_2(0) = 0, x_1(t_f) = γ, and x_2(t_f) = 0. We furthermore impose the constraint |u| ≤ 1 [40,57]. We stress that we allow the control parameter to be either positive or negative. When it is negative, the curvature of the harmonic confinement is reversed; atoms are then transiently expelled, which provides a means to accelerate the desired transformation.
To minimize the cost function (11), we apply the Pontryagin maximum principle, which states that there exists a nonzero, continuous vector p with components (p_0, p_1, p_2) fulfilling Hamilton's equations [40,41]: ẋ = ∂H_c/∂p and ṗ = −∂H_c/∂x. With the cost function J, the control Hamiltonian reads H_c = p_0 + p_1 ω_0 x_2 + p_2 ω_0 (1/x_1³ − u x_1), where p_0 is a nonzero normalization constant, and p_1 and p_2 are generalized Lagrange multipliers. The Pontryagin maximum principle states that, at any instant 0 ≤ t ≤ t_f, the value of the control function u maximizes H_c. As H_c is linear in the control function u and since x_1 > 0, the sign of the factor in front of u, proportional to −p_2 x_1, is fully determined by the sign of −p_2. This latter parameter plays the role of a switching function for "bang-bang" type control, as discussed in the literature [40-42,57]. The fact that the Hamilton equations are nonlinear allows for multiple bang-bang solutions [40]. We consider in the following the simplest solution with an analytical expression; this "bang-bang" solution has a single intermediate switching time t_1 (see Figure 1). With such a control function, we infer the value of the scaling factor b(t) from the Ermakov equation and find the solution for "bang-bang" control that fulfills the boundary conditions (6). It is worth noticing that the Ermakov equation implies that the quantity x_2² + u x_1² + x_1^{−2} = c is constant while u is held constant. The value of the constant c is fixed by the initial conditions for 0 < t < t_1 and by the final conditions for t_1 < t < t_f. Using the continuity of b(t) and ḃ(t) at t_1, guaranteed by the second derivative in the Ermakov equation, we find the explicit expressions for the switching and final times [40,57]. As the time t_1 shall remain real, we deduce from Equation (15) that ω_2 ≥ ω_0/γ > ω_f. The last inequality is naturally satisfied because of the cooling constraint ω_0 > ω_f. The first inequality requires ω_0/γ ≤ ω_2 ≤ ω_0.
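
The conservation law quoted above is easy to verify numerically. A minimal sketch, in units where ω_0 = 1 and with an assumed single "bang" value of the control at its bound:

```python
import numpy as np
from scipy.integrate import solve_ivp

w0 = 1.0   # work in units where the initial trap frequency is 1 (assumption)

def rhs(t, x, u):
    # x1 = b, x2 = b'/w0; u = w(t)^2/w0^2 is held constant during a "bang"
    x1, x2 = x
    return [w0*x2, w0*(1.0/x1**3 - u*x1)]

u1 = -1.0  # inverted trap: the curvature is reversed and atoms are expelled
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0], args=(u1,), rtol=1e-11, atol=1e-12)
x1, x2 = sol.y
c = x2**2 + u1*x1**2 + 1.0/x1**2   # conserved while u is constant
print(np.ptp(c))                   # spread of c along the trajectory: ~ 0
```

The constant c changes only at the switching times, which is what fixes t_1 and t_f in the bang-bang construction.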
In Figure 2, we plot the normalized final time s_f = t_f ω_0 as a function of ω_1/ω_0 and ω_2/ω_0 over their accessible domains. We conclude that the shortest normalized final time s_f is obtained for the largest ω_1 and ω_2; with the choice ω_2 = ω_1 = ω_0, we obtain the shortest time. The lowest bound for ω_2, namely ω_2 = ω_0/γ, provides the upper bound for the final time, for which the first period of time reduces to t_1 = 0, so that only two jumps are needed.
In this latter range of parameters, the scaling factor takes a closed form in terms of the reduced time τ = t/t_f. In Figure 3a, we plot an example of the time evolution of b(t).
The solution that corresponds to the upper bound for the final time also provides the minimum time-averaged energy. Using Equation (8), we calculate this latter quantity, which scales with ε = ℏω_0/4. In Figure 3b, we plot this time-averaged energy as a function of the normalized final time s_f, obtained from the time-optimal control solution. It is worth noticing that ω_f and t_f are then not independent, since s_f = πγ/2 = π(ω_0/ω_f)^{1/2}/2.

Time-Averaged Energy Minimization
In this section, we consider the optimal control solution associated with the minimization of the time-averaged energy with an unbounded control [56]. The lower bound for the time-averaged potential (total) energy in Equation (8) given in Refs. [56,58] is reached for the solution

b(τ) = [(B² − s_f²)τ² + 2Bτ + 1]^{1/2}, with B = −1 + (s_f² + γ²)^{1/2}.

In Figure 4, we plot this lower bound for the optimized time-averaged energy as a blue dashed line, together with the fifth-order polynomial IE solution (7) (black dotted line) and the polynomial IE solution with two free parameters optimized for a given normalized final time (stars; see Table 1). Parameters: ω_f² = ω_0²/5.

Comparison between IE and OCT
In the previous subsections, we have reviewed the main lines of the IE and OCT protocols that ensure fast frictionless decompression in a harmonic trap whose strength can be time-engineered. As already discussed, there is a lot of freedom in designing inverse-engineered protocols, since the only requirements concern the boundary conditions. However, the mean energy cost of such protocols is relevant, since a real potential always exhibits some anharmonicity when the potential energy becomes too large. In what follows, we propose to design IE protocols having a minimal mean potential energy, and we show how the optimal results can be readily approached.
The IE solution exhibited in Equation (7) relies on a fifth-order polynomial that fulfills the six boundary conditions. In Figure 4, we plot the corresponding mean potential energy E_p(s_f) using a black dotted line; it turns out to be quite far from the optimal solution (blue dashed line).
To reduce E_p(s_f), we remove the constraints on ḃ and b̈ at the initial and final times, since they are, strictly speaking, neither necessary nor fulfilled by the optimal solution. We also enlarge the parameter space for b(τ) using a third-order polynomial ansatz b(τ) = ∑_{n=0}^{3} a_n τ^n so as to keep some free parameters. The two boundary conditions yield a_0 = 1 and a_1 = −1 − a_2 − a_3 + γ. For different normalized final times s_f, we can therefore minimize the time-averaged energy with respect to the two parameters a_2 and a_3. In Table 1, we provide the optimal values of a_2 and a_3 that minimize the mean potential energy for the three cases s_f = 1.1, s_f = πγ/2, and s_f = 4.

Table 1. Optimal values of the free parameters a_2 and a_3 in the third-order polynomial ansatz for the IE protocol that minimize the time-averaged energy.

The results are represented as stars in Figure 4. They nearly coincide with the result of optimal control theory. This is confirmed by plotting the scaling functions for both protocols (see Figure 5, with the parameters of Table 1 and ω_f² = ω_0²/5). We conclude that IE trajectories inspired by the OCT solutions can be readily designed to approach the exact OCT solutions with impressive accuracy.
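
This two-parameter minimization can be sketched numerically. The example below assumes, as in the figures, ω_f² = ω_0²/5 and picks s_f = 4; the optimizer and grid are implementation choices, and the Table 1 values are not asserted here:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import simpson

gamma = 5**0.25   # b(tf) = (w0/wf)^(1/2) for wf^2 = w0^2/5 (assumed, as in figures)
sf = 4.0          # normalized final time sf = w0*tf
tau = np.linspace(0.0, 1.0, 4001)

def mean_Ep(a):
    # time-averaged E_p in units of eps = hbar*w0/4, for the cubic ansatz b(tau)
    a2, a3 = a
    a1 = gamma - 1.0 - a2 - a3          # from b(0) = 1 and b(1) = gamma
    b = 1.0 + a1*tau + a2*tau**2 + a3*tau**3
    d2b = 2*a2 + 6*a3*tau
    # Ermakov: w^2 b^2 = w0^2/b^2 - b''*b, written in reduced units
    return simpson(1.0/b**2 - d2b*b/sf**2, x=tau)

res = minimize(mean_Ep, x0=[0.0, 0.0], method='Nelder-Mead')
print(res.x, res.fun)   # optimized (a2, a3) and the minimized averaged energy
```

The same routine, run for s_f = 1.1 and s_f = πγ/2, produces the other entries of the table.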

Fast Transport of Atoms in Moving Harmonic Traps
STA techniques have also been applied to the high-fidelity fast quantum transport of neutral atoms [59] and charged ions [60,61] using a moving trap. Such developments have a wide range of applications, from quantum information processing [62,63] to atomic fountain clocks, atom chip manipulation [64-66], and atom interferometry [67]. In recent closely related works, optimal trajectories that minimize the excitation in ion shuttling in the presence of stochastic noise have been designed by combining invariant-based inverse engineering, perturbation theory, and optimal control [68,69].
In this section, we address the problem of the fast transport of a single atom based on a moving 1D harmonic potential. The particle is supposed to be initially in the ground state and shall remain in the ground state at the final time. We follow the same kind of presentation as previously: we first design inverse-engineered protocols, we then derive the OCT protocols for time [43] and mean-energy optimization [45], and eventually compare both approaches.

Classical and Quantum Inverse-Engineered Solutions
The time-dependent Hamiltonian for atomic transport in a moving harmonic trap reads

H = p²/(2m) + (1/2) m ω_0² [x − x_0(t)]², (22)

where ω_0 is the constant trap angular frequency and x_0(t) the time-dependent position of the trap center. This problem amounts to finding the appropriate driving of this harmonic oscillator. The exact mapping between the classical and quantum solutions enables one to solve the classical problem and obtain a solution that is also valid quantum mechanically [50].
The time evolution of the coordinate x(t) of a classical particle under the time-dependent Hamiltonian (22) is given by

ẍ = −ω_0² [x − x_0(t)]. (23)

A smooth perfect transport, i.e., a transport without any residual oscillations at the final time, can be obtained by inverse engineering by imposing the six boundary conditions

x(0) = 0, ẋ(0) = 0, ẍ(0) = 0, x(t_f) = d, ẋ(t_f) = 0, ẍ(t_f) = 0, (24)

where d is the transport distance. Any interpolation function x(t) that fulfills these boundary conditions provides a possible solution to our problem. For instance, one can use the fifth-order polynomial interpolation function

x(t) = d [10(t/t_f)³ − 15(t/t_f)⁴ + 6(t/t_f)⁵]. (25)

Once x(t) is known, the trajectory of the trap center x_0(t) can be directly inferred from Equation (23). A similar result can be derived quantum mechanically using the properties of dynamical invariants [43,62]. In view of the optimization performed later on, it is worth working out the instantaneous average potential energy, whose first term accounts for the zero-point energy contribution and whose second term, E_p(t) = (1/2) m ω_0² [x(t) − x_0(t)]², is the instantaneous potential energy of the effective classical particle. The time-averaged potential energy is defined as the average of E_p(t) over the transport duration t_f.
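
A minimal sketch of this inverse engineering step; the frequency, distance, and duration are assumed example values:

```python
import numpy as np

# Assumed example parameters: transport of distance d in time tf, trap frequency w0.
w0, d, tf = 2*np.pi*1.0e3, 1.0e-3, 5e-3
t = np.linspace(0.0, tf, 2001)
s = t/tf

# Fifth-order interpolation (25) fulfilling the six boundary conditions (24):
x = d*(10*s**3 - 15*s**4 + 6*s**5)
acc = d*(60*s - 180*s**2 + 120*s**3)/tf**2

# Trap-center trajectory inferred from the equation of motion (23):
# x'' = -w0^2 (x - x0)  =>  x0 = x + x''/w0^2
x0 = x + acc/w0**2

print(x0[0], x0[-1] - d)   # the trap starts at 0 and ends exactly at d
```

Because ẍ vanishes at both ends, x_0(t) joins the particle trajectory smoothly, with no jumps.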

Optimal Control Theory
To recast this problem as an optimal control problem, we define the variables x_1(t) = x(t) and x_2(t) = ẋ(t), and the control function u(t) = x(t) − x_0(t); the control function corresponds to the relative position of the effective particle with respect to the trap center. The equation of motion (23) for the effective particle can be encapsulated in the following set of linearly coupled first-order differential equations, ẋ = f[x(t), u]:

ẋ_1 = x_2,
ẋ_2 = −ω_0² u.

Interestingly, for this linear system, the solution deduced from the Pontryagin formalism provides the unique control u that minimizes the cost function.

Time Minimization
In this section, we solve the time-optimal problem with an upper bound on the relative displacement, |u| ≤ δ. The cost function to minimize is the final time, J = t_f = ∫_0^{t_f} dt. The corresponding Pontryagin Hamiltonian reads H_c = p_0 + p_1 x_2 − ω_0² p_2 u, where the Lagrange multipliers p_1 and p_2 fulfill ṗ_1 = 0 and ṗ_2 = −p_1. We deduce p_1 = c_1 and p_2 = −c_1 t + c_2, where c_1 and c_2 are constants to be determined. The Hamiltonian H_c is a linear function of the bounded control u(t); as a result, the sign of p_2 sets the sign of u(t) that maximizes H_c. The parameter p_2 being a linear function of time, its sign can change at most once. Taking into account the initial and final boundary conditions and the upper bound on |u(t)|, the appropriate control sequence is a (three-jump) "bang-bang" control. With such a control function, the time-optimal solution of Equation (23) compatible with the boundary conditions (24) follows, and the driving of the trap bottom is then given by x_0(t) = ẍ(t)/ω_0² + x(t). By imposing continuity of x(t) and ẋ(t) at t_1, one gets the explicit expressions for the switching and final times. According to Equation (27), the time-averaged potential energy E_p for this constrained protocol can then be evaluated.
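
A small sketch of this bang-bang transport, assuming the symmetric case where the two bangs have equal magnitude δ and the switch occurs at t_1 = t_f/2 (the parameter values are assumed, and the final-time formula is derived within this sketch, not copied from the text):

```python
import numpy as np

# Assumed example parameters; |u| <= delta bounds the relative displacement.
w0, d, delta = 2*np.pi*1.0e3, 1.0e-3, 2.0e-4
tf = (2.0/w0)*np.sqrt(d/delta)   # final time fixed by d = w0^2*delta*(tf/2)^2
t1 = tf/2.0

# Piecewise-constant acceleration x'' = -w0^2 u (u = -delta, then +delta)
# integrates in closed form:
t = np.linspace(0.0, tf, 2001)
a = w0**2*delta
x = np.where(t < t1, 0.5*a*t**2, d - 0.5*a*(tf - t)**2)
v = np.where(t < t1, a*t, a*(tf - t))

print(x[-1] - d, v[-1])   # the target is reached at rest
```

Shrinking δ lengthens t_f as δ^{-1/2}, the usual trade-off between speed and displacement bound.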

Mean Potential Energy Minimization
In this section, we work out the energy-optimal protocol. We provide a solution that minimizes the time-averaged potential energy for a given transport time t_f and distance d, with an unbounded control. According to the definition of the potential energy, E_p = (1/2) m ω_0² (x − x_0)², the cost function for this problem is J = ∫_0^{t_f} (1/2) m ω_0² u²(t) dt, and the Pontryagin Hamiltonian follows accordingly. The Hamilton equations give two costate equations similar to those derived in the previous section. Choosing the normalization constant p_0 = −1/m, the optimal problem amounts to maximizing the quantity −u²/2 − p_2 u.
For convenience, we consider the unbounded case, which sets the lowest bound for the time-averaged potential energy E_p. The quantity −u²/2 − p_2 u is maximal for u = −p_2; since p_2 is a linear function of time, so is the optimal control. This expression for the control function, combined with Equation (23) and the boundary conditions (24), enables one to determine the optimal trajectory of the center of mass, from which we infer the trap center trajectory x_0(t) using Equation (23) with the initial and final boundary conditions x_0(0) = 0 and x_0(t_f) = d. In Figure 6, we plot the OCT center-of-mass and trap-bottom trajectories for some specific values using blue dashed lines. It is worth noticing that, according to our optimal solution, the trap center has to undergo two sudden jumps, at the initial and final times. With such an optimization performed for an unbounded control function, we obtain the lowest time-averaged potential energy; in Figure 7, we plot this minimal time-averaged potential energy as a function of the final time t_f as a blue dashed line.
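
As a consistency check: a linear control u(t) makes the center of mass follow a cubic polynomial in time. The sketch below uses assumed unit parameters, and the closed form quoted in the comment is derived within this sketch rather than copied from Equation (39):

```python
import numpy as np
from scipy.integrate import simpson

# Assumed unit parameters. The cubic trajectory satisfies x(0)=0, x(tf)=d and
# xdot(0)=xdot(tf)=0, as required when the control u = x - x0 is linear in t.
m, w0, d, tf = 1.0, 1.0, 1.0, 1.0
t = np.linspace(0.0, tf, 4001)

x = d*t**2*(3*tf - 2*t)/tf**3         # cubic center-of-mass trajectory
u = -d*(6*tf - 12*t)/(w0**2*tf**3)    # u = -x''/w0^2, linear in t, nonzero at t=0, tf

Ep = simpson(0.5*m*w0**2*u**2, x=t)/tf   # time-averaged classical potential energy
print(Ep, 6*m*d**2/(w0**2*tf**4))        # both equal 6 m d^2/(w0^2 tf^4) here
```

The nonzero values of u at t = 0 and t = t_f are precisely the two sudden jumps of the trap center mentioned above.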

Comparison between IE and OCT
In this section, we use the freedom in the interpolation function that enters the IE solutions to approach the solution of optimal control theory associated with the minimization of the time-averaged potential energy.

IE with Polynomial Ansatzes
For this purpose, we first enlarge the parameter space of the polynomial ansatz that fulfills the boundary conditions (24) and search for the optimal values of the coefficients that minimize the time-averaged potential energy.
To satisfy the six boundary conditions (24), the minimal order of the polynomial interpolation function is five (see Equation (25)). In Figure 6, we plot the corresponding center-of-mass trajectory x(t)/d and trap-center trajectory x_0(t)/d as a function of t/t_f using red solid lines.
The corresponding time-averaged potential energy is E_p(P5) = 1.42 E_p(OCT), significantly larger than the minimal potential energy given by Equation (39). It is represented as a black solid line in Figure 7.
In order to further reduce the time-averaged potential energy, we enlarge the parameter space while keeping the six boundary conditions satisfied. We search for a solution of the form x(t) = d ∑_{n=0}^{7} a_n (t/t_f)^n. By applying the boundary conditions (24), we obtain a_0 = a_1 = a_2 = 0, a_5 = 21 − 6a_3 − 3a_4, a_6 = −35 + 8a_3 + 3a_4, and a_7 = 15 − 3a_3 − a_4. The time-averaged potential energy can then be worked out explicitly, and its minimization yields a_3 = 21 and a_4 = −70. The corresponding curve as a function of the final time is represented in Figure 7 as a purple solid line; it provides a clear improvement with respect to the fifth-order polynomial solution. A priori, it is possible to improve the optimization further using a higher-order polynomial ansatz. For instance, using a well-optimized 19th-order polynomial, we have found E_p(P19) = 1.018 E_p(OCT). In Figure 8, we have plotted the corresponding time-dependent trajectories x(t) and x_0(t). However, increasing the polynomial order requires a minimization over an increasing number of parameters, which is somewhat cumbersome. In the following subsection, we propose another type of interpolating function, inspired by the OCT solution and yielding astonishing results.
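
The quoted minimization can be reproduced directly: since the averaged energy is a quadratic form in (a_3, a_4), the normal equations give the minimizer in one step. A sketch in reduced units (d = t_f = 1):

```python
import numpy as np
from scipy.integrate import simpson

# With a0 = a1 = a2 = 0 and a5, a6, a7 eliminated by the boundary conditions,
# x''(tau)/d is linear in the two free parameters a3 and a4:
tau = np.linspace(0.0, 1.0, 4001)
g3 = 6*tau - 120*tau**3 + 240*tau**4 - 126*tau**5   # coefficient of a3 in x''
g4 = 12*tau**2 - 60*tau**3 + 90*tau**4 - 42*tau**5  # coefficient of a4 in x''
h  = 420*tau**3 - 1050*tau**4 + 630*tau**5          # parameter-independent part

# E_p is proportional to the integral of u^2 = (x''/w0^2)^2, a quadratic form
# in (a3, a4): solve the normal equations for the minimizer.
ip = lambda f, g: simpson(f*g, x=tau)
M = np.array([[ip(g3, g3), ip(g3, g4)], [ip(g3, g4), ip(g4, g4)]])
a3, a4 = np.linalg.solve(M, [-ip(g3, h), -ip(g4, h)])
print(round(a3), round(a4))    # -> 21 -70, as quoted in the text

f = a3*g3 + a4*g4 + h          # optimal x''(tau)/d within the polynomial family
print(ip(f, f)/12.0)           # ratio to the value for the cubic OCT trajectory
```

The last line compares the classical integral of u² with that of the unbounded OCT trajectory (whose value is 12 in these units), quantifying how close the seventh-order family gets.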

IE with Hyperbolic Ansatz
In this subsection, we apply the IE approach using a hyperbolic-function ansatz with two free parameters a_1 and a_2, where a_2 > 1 avoids any singularity. Interestingly, the choice of the parameter a_2 enables one to mimic a jump at the initial and final times. This class of solutions, allowing an initial and final offset and having the same symmetry as the optimal function, provides a very efficient family for the optimization. The freedom provided by the two parameters a_1 and a_2 enables one to reduce the time-averaged potential energy while satisfying the two boundary conditions x(0) = 0 and x(t_f) = d. Such an optimization gives a_1 = 1.2 and a_2 = 1.25. The corresponding trajectories x(t) and x_0(t) are plotted in Figure 9, and the mean potential energy is represented in Figure 10 with red points. This ansatz provides a solution that nearly coincides with the exact solution. For this transport problem, we have thus shown how the freedom in the interpolation ansatz enables one to accommodate extra constraints, such as the mean energy, whilst fulfilling the boundary conditions. The choice of the ansatz has a strong impact. One could naively think that a very high-order polynomial would always provide a successful strategy. However, we have shown on this example that the convergence with the degree of the polynomial may be quite slow, and that the investigation of other shapes with a few adjustable parameters can easily outperform the polynomial interpolation for a given constraint.

Spin Dynamics in the Presence of Dissipation
In contrast with the previous sections, we address in the following an example dealing with the control of internal degrees of freedom. Optimal control provides a powerful tool to solve time-optimal and energy-optimal problems in quantum two-level and three-level systems [70-73]. Such results can be directly extended to two uncoupled [72] and coupled [74] spins with a similar approach. Using numerical optimal control algorithms, robust optimal controls can also be designed that account for inhomogeneous broadening and/or dissipation [71,75-77]. Inverse engineering techniques have also been used for the fast and robust control of a single spin [78] and of two interacting spins [78,79] in the presence of dissipation [80]. Systematic errors or perturbations induced by parameter fluctuations, dephasing noise, or bit flips can be further suppressed using IE and OCT in atomic population transfer [46-48] and spin flips [79].
Strictly speaking, the presence of dissipation rules out the possibility of an adiabatic evolution. However, inverse engineering can still be applied. In the following, we consider the control of a spin 1/2, S = (S_x, S_y, S_z), through the appropriate design of the time-varying magnetic field components B = (B_x, B_y, B_z) for the desired boundary conditions. More precisely, we address the dissipative evolution of this spin in the presence of a strong transverse relaxation rate R > 0. As is commonly the case in NMR, the longitudinal relaxation rate is supposed to be negligible compared to the transverse one and is neglected here [73]. Under those assumptions, the spin components obey the Bloch equations. Following Ref. [73], we recast the Bloch equations using spherical coordinates. For this purpose, we introduce the angles θ(t) and φ(t) such that S = (r sin θ cos φ, r sin θ sin φ, r cos θ), where r = (S_x² + S_y² + S_z²)^{1/2} denotes the length of the spin. It is convenient to decompose the transverse magnetic field B_⊥ = (B_x, B_y) into two orthogonal components, B_⊥ = (B, B_c), defined with respect to the transverse spin projection S_⊥ (see Figure 11). The Bloch equations can then be rewritten in terms of the variables a = ln r, tan θ = (S_x² + S_y²)^{1/2}/S_z, and tan φ = S_y/S_x, and of the normalized time t′ = Rt:

ȧ = −sin²θ, (45)
θ̇ = B − sin θ cos θ, (46)

where the dot denotes the derivative with respect to t′ and B is expressed in units of R. To ensure a spin rotation from the initial spin-up state to a given final target state, we use the boundary conditions

a(0) = 0, θ(0) = 0, a(t_f) = a_f, θ(t_f) = θ_f. (48)

It is worth emphasizing that choosing the final spin length a_f and orientation θ_f for a given final time t_f may have no solution for finite resources: if the driving by the magnetic field is not sufficiently strong, the dissipation sets an upper limit on the final spin length.
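
The spherical reduction can be checked numerically in the planar case. The sketch below assumes the torque convention Ṡ = B × S − R(S_x, S_y, 0) with the field along y, which reproduces Equations (45) and (46); the drive B(t′) is an arbitrary test function:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Normalized time t' = Rt, B in units of R (assumed convention); motion in x-z plane.
B = lambda t: 1.5*np.sin(0.5*np.pi*t)   # arbitrary smooth test drive

def cartesian(t, S):
    # Bloch equations with transverse relaxation only, field along y
    Sx, Sy, Sz = S
    return [B(t)*Sz - Sx, -Sy, -B(t)*Sx]

def spherical(t, y):
    # Equations (45) and (46): a' = -sin^2(theta), theta' = B - sin(theta)cos(theta)
    a, th = y
    return [-np.sin(th)**2, B(t) - np.sin(th)*np.cos(th)]

Sf = solve_ivp(cartesian, (0.0, 1.0), [0.0, 0.0, 1.0],
               rtol=1e-10, atol=1e-12).y[:, -1]
af, thf = solve_ivp(spherical, (0.0, 1.0), [0.0, 0.0],
                    rtol=1e-10, atol=1e-12).y[:, -1]
print(abs(np.exp(af) - np.hypot(Sf[0], Sf[2])),
      abs(thf - np.arctan2(Sf[0], Sf[2])))   # both differences ~ 0
```

The shrinking of r = e^a at the rate sin²θ, independent of the drive, is what limits the reachable final spin length.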
The field component B_c only affects the spin rotation about the z-axis, while the angle θ accounts for the partial or total spin flip. To minimize the energy cost, the trajectory length shall be minimal; this latter condition sets the value of B_c to zero, which means φ = constant. Basically, the IE technique then amounts to fixing the function θ(t) in accordance with the boundary conditions (48) and inferring the external magnetic field B(t) from Equation (46).

Energy Minimization by OCT
We consider here a given spin manipulation from (a(0) = 0, θ(0) = 0) to (a_f, θ_f) with the minimum magnetic field amplitude. For this purpose, we aim at minimizing the cost function

E = ∫_0^{t_f} B²(t) dt. (49)

Let us first recast this problem as a control problem involving a set of coupled first-order equations. By defining the state variables x_1 = a and x_2 = θ and the control function u(t) = B(t), the system of Equations (45) and (46) takes the form ẋ = f(x(t), u):

ẋ_1 = −sin²x_2, (50)
ẋ_2 = u − sin x_2 cos x_2, (51)

and the cost function is

J = ∫_0^{t_f} u²(t) dt. (52)

The corresponding Pontryagin Hamiltonian reads

H_c = −u²/2 − p_1 sin²x_2 + p_2 (u − sin x_2 cos x_2), (53)

where p_1 and p_2 are the Lagrange multipliers fulfilling ṗ = −∂H_c/∂x, i.e., ṗ_1 = 0 and ṗ_2 = p_1 sin(2x_2) + p_2 cos(2x_2). The Pontryagin maximum principle states, for an unbounded control u, that ∂H_c/∂u = 0, i.e., u = p_2. In the absence of a terminal cost, the optimal solution of this optimization between fixed initial and final states, but without a fixed final time, obeys the extra condition H_c[p(t), x(t), u(t)] = 0. From Equation (51), we deduce

ẋ_2 = sin x_2 (2p_1 + cos²x_2)^{1/2}. (55)

By combining Equations (55) and (50), we find dx_1 = −[sin x_2/(2p_1 + cos²x_2)^{1/2}] dx_2. After integration, this relation gives the trajectory x_1(x_2). The (constant) value of p_1 is deduced self-consistently from the boundary conditions. The final time provided by OCT for an arbitrary target r_f is determined by

t_f = ∫_0^{θ_f} dθ/[sin θ (2p_1 + cos²θ)^{1/2}]. (57)

We note that the dissipation has an influence on the final time.

Case I: Reaching the Horizontal Plane of the Bloch Sphere
In this subsection, we consider the transfer of the spin from the quantization axis to the horizontal plane. The boundary conditions are thus θ(0) = 0, r(0) = 1, and θ(t_f) = θ_f = π/2. This choice sets the value of the constant p_1. To address a specific example, we consider in the following the final value r(t_f) = r_f = e^{−2}. The final time obtained from Equation (57) suffers from a logarithmic divergence. To cure this problem, we shift the initial and final angles by a small quantity ε ≪ 1, taking ε = 10^{−3}. For this specific example, the cost function associated with this optimal solution (see Equation (52)) is E^(OCT)_{π/2}. For comparison with the inverse engineering method, we propose, for the very same t_f, a second-order polynomial ansatz for θ(t) that fulfills the boundary conditions and has a single free parameter a_1. The corresponding cost function E^(P2) is minimal for a_1 = −0.119582. However, our simple polynomial ansatz provides an upper bound on the reachable values of r_f. This point is illustrated in Figure 12a, where we plot the energy as a function of the logarithm of the final radius a_f for different values of the free parameter a_1. For this example, the reachable range of values of r_f is [0.055; 0.476]; as a result, a target such as r_f = 0.6 turns out to be out of reach. It is worth noticing that this limit is intimately related to the choice of the ansatz. For instance, we can choose the third-order polynomial ansatz θ(t) = a_1 t + a_2 t² + a_3 t³, where t_f is determined as previously (t_f = 3.6357955 for r_f = 0.6) and the coefficients a_0 = 0 and a_2 = −(a_1 t_f + a_3 t_f³ − θ_f)/t_f² are dictated by the boundary conditions (48). The extra freedom provided by the a_3 coefficient enables one to (1) reach the target and (2) minimize the cost function. With the values a_3 = 0.1 and a_1 = 0.15713222, the cost function E^(P3) is quite close to the optimal value: E^(P3) = 1.03 E^(OCT)_{π/2}.
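
The third-order ansatz above can be checked numerically; a sketch in the normalized units of the text (t′ = Rt, B in units of R):

```python
import numpy as np
from scipy.integrate import simpson

# Third-order ansatz with the values quoted in the text, theta(tf) = pi/2.
tf, a1, a3, thf = 3.6357955, 0.15713222, 0.1, np.pi/2
a2 = -(a1*tf + a3*tf**3 - thf)/tf**2   # fixed by the boundary condition on theta
t = np.linspace(0.0, tf, 8001)
th = a1*t + a2*t**2 + a3*t**3

# Inverse engineering: Equation (46) gives the drive, B = theta' + sin(theta)cos(theta),
# and Equation (45) the spin length, ln r(tf) = -integral of sin^2(theta).
Bdrive = (a1 + 2*a2*t + 3*a3*t**2) + np.sin(th)*np.cos(th)
rf = np.exp(-simpson(np.sin(th)**2, x=t))
print(th[-1], rf)   # theta_f = pi/2 and r_f ≈ 0.6: the target is reached
```

The same inversion, applied to any admissible θ(t), yields the corresponding magnetic field profile, which is how the curves of Figure 13 are generated.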
In Figure 12b, we plot the energy as a function of the free parameter a_1 for a_3 = 0.1. This curve defines a new interval of reachable r_f: [0.043; 0.608]. The variable θ(t) and the corresponding magnetic field B(t) obtained from the latter IE method are depicted in Figure 13, and the associated spin trajectory on the Bloch sphere, along with the spin components, in Figure 14. Our results can a priori be improved further using an optimization over an even higher-order polynomial.

Case II: Spin Flip
In this section, we consider a spin flip (θ_f = π), for which the constant p_1 takes a different value. With the same notations as previously, the final time is computed from Equation (57) (we use r_f = 0.6 in the following). The cost function associated with the optimal solution (see Equation (52)) is E^(OCT). This optimal solution is plotted as a blue line in Figure 15. It exhibits a smooth variation of θ(t) at the initial and final times and a symmetry about t_f/2. This suggests adding the following extra conditions to the polynomial ansatz for θ(t) for the inverse-engineered solution:

θ(0) = 0, θ(t_f/2) = π/2, θ(t_f) = π, θ̇(0) = θ̇(t_f) = 0, and θ̈(0) = θ̈(t_f) = 0. (64)

We have used a ninth-order polynomial to accommodate the seven boundary conditions listed above; an extra parameter is fixed by the final target radius r_f, and the remaining two free parameters are used to minimize the energy. Knowing θ(t), we infer the magnetic field to be applied to drive the spin in accordance with our boundary conditions. As explicitly shown in Figure 15, we find a bell shape for the magnetic field B(t) associated with this θ(t). However, the curves remain relatively far from the optimal result: we find E^(P9) = 1.13 E^(OCT). The ripples of the polynomial ansatz increase the energy and are difficult to remove by increasing the polynomial order. The convergence towards the optimal solution with the polynomial order is therefore once again slow.
Alternatively, the shape obtained from OCT suggests an ansatz built on a smooth bell-shaped profile, parametrized by two free coefficients a_1 and a_5. Minimizing the energy, we find E = 1.007 E^(OCT) with a_5 = 1.1 and a_1 = 3.104678. The comparison of this solution with its optimal counterpart confirms the proximity between the two approaches (see Figure 16).

Conclusions
In summary, we have investigated different implementations of the inverse engineering method and compared them with solutions deduced from OCT for a given cost function. We have addressed in this manner the fast atomic cooling in a harmonic trap, the atomic transport with a moving harmonic trap, and the spin control in the presence of dissipation. We have shown how the freedom in the ansatz inherent to inverse engineering techniques provides enough tunability to minimize a cost function while fulfilling the boundary conditions. We have systematically found classes of functions with a few adjustable parameters that approach the optimal control result with a relative excess of energy below one percent. Inverse-engineered solutions are usually searched for as continuous analytical functions, which is a priori an asset for their practical use. However, we have also exhibited the possibility of designing inverse-engineered trajectories having initial and final jumps that mimic the optimal control solution, yielding solutions that are nearly indistinguishable from their optimal counterparts.