Shortcut-to-Adiabaticity-Like Techniques for Parameter Estimation in Quantum Metrology

Quantum metrology makes use of quantum mechanics to improve precision measurements and measurement sensitivities. It is usually formulated for time-independent Hamiltonians, but time-dependent Hamiltonians may offer advantages, such as a T⁴ dependence of the Fisher information, which cannot be reached with a time-independent Hamiltonian. In "Optimal adaptive control for quantum metrology with time-dependent Hamiltonians" (Nature Communications 8, 2017), Shengshi Pang and Andrew N. Jordan put forward a shortcut-to-adiabaticity (STA)-like method, specifically an approach formally similar to the "counterdiabatic approach", adding a control term to the original Hamiltonian to reach the upper bound of the Fisher information. We revisit this work from the point of view of STA to set out the relations and differences between STA-like methods in metrology and ordinary STA. This analysis paves the way for the application of other STA-like techniques in parameter estimation. In particular, we explore the use of physical unitary transformations to propose alternative time-dependent Hamiltonians which may be easier to implement in the laboratory.


Introduction
Quantum metrology aims at high-resolution, highly sensitive measurements of parameters using the advantages provided by quantum states and dynamics. Most of the research in this field has focused on time-independent Hamiltonians, but time-dependent Hamiltonians may beat the precision limits found for time-independent ones [1][2][3][4][5] when estimating some parameter g in the Hamiltonian.
The Cramér-Rao bound states that the mean squared deviation δ²g for an unbiased estimation is bounded as δ²g ≥ 1/(N I_g), where N is a measure of the amount of data and I_g is the Fisher information.
In a quantum scenario, the information about a parameter g in the Hamiltonian H_g is "stored" in the quantum states of the system |ψ_g(t)⟩, whose evolution depends on g, and the Fisher information for a final time T measures the distinguishability (distance) between |ψ_g(T)⟩ and |ψ_{g+δg}(T)⟩. The "maximum" Fisher information for g with respect to all possible quantum measurements is [6,7] I_g^(Q) = 4(⟨∂_g ψ_g(T)|∂_g ψ_g(T)⟩ − |⟨ψ_g(T)|∂_g ψ_g(T)⟩|²), which is also named the "quantum Fisher information". For a given Hamiltonian H_g, the quantum Fisher information has an upper bound, I_g^(Q) ≤ [∫_0^T (μ_max(t) − μ_min(t)) dt]², where μ_max(t) and μ_min(t) are the (instantaneous) maximum and minimum eigenvalues of ∂_g H_g(t). To actually implement this upper bound with particular states and measurements, the dynamics should follow some specific path, along an equal superposition of the corresponding eigenvectors. Reaching the upper bound of the Fisher information may require Hamiltonian control [1], i.e., adding an extra term H_c(t) to the original Hamiltonian of the system H_g(t) to implement the necessary dynamics. This methodology, based on driving the system along preselected "rails" (states), is formally quite similar to the one proposed in shortcut-to-adiabaticity (STA) methods [8], specifically in the "counterdiabatic" (CD) approach [9][10][11][12][13][14]. In the CD approach, an auxiliary Hamiltonian H_cd(t) is also added to some reference Hamiltonian H_0(t) to drive the system along eigenstates of H_0(t). We shall revisit the main concepts and results in [1] in Sections 2 and 3, and analyze in detail the relations and differences between "actual STA" and the STA-like method used in metrology; see Table 1 for an overview. A recurring topic within the counterdiabatic approach is that H_cd is often difficult to implement in practice [8,15,16].
This problem has led to a number of approximations, variational approaches, or methods based on unitary transformations that, properly adjusted, could be applicable in metrology as well. In Section 4, we explore in particular the use of alternative Hamiltonians to H_g + H_c via unitary transformations. In the final discussion, we shall comment on prospects for applying other STA-like approaches. Table 1. Main differences between the use of shortcut-to-adiabaticity (STA)-like methodology in metrology, following the work in [1], and ordinary applications of STA (counterdiabatic approach).

Optimal Adaptive Control for Quantum Metrology with Time-Dependent Hamiltonians
Our first objective is to summarize and comment on the work in [1] to set out and understand the relations and differences between the STA-like approach applied there and ordinary STA. The analysis should be useful for a practitioner of STA methods less acquainted with quantum metrology, as well as for quantum metrologists not aware of the rich toolbox of STA techniques. In the following, ħ = 1.

Quantum Fisher Information
The quantum Fisher information in Equation (2) can be rewritten as (proportional to) a variance computed for the initial state |ψ_0⟩, I_g^(Q) = 4(⟨ψ_0|h_g²(T)|ψ_0⟩ − ⟨ψ_0|h_g(T)|ψ_0⟩²), where h_g(T) = i U_g†(0 → T) ∂_g U_g(0 → T) and U_g(0 → T) is the unitary evolution operator from 0 to time T for the Hamiltonian H_g(t). Being a variance, an "optimal" value of the quantum Fisher information, with respect to all possible initial states, is I_g^(op)(T) = [τ_max(T) − τ_min(T)]², where τ_max(T) and τ_min(T) are the maximal and minimal eigenvalues of h_g(T), respectively. The optimal state |ψ_0⟩ is an equal superposition of the eigenvectors of h_g(T) with maximal and minimal eigenvalues, and the calculation of I_g^(op)(T) requires diagonalizing h_g(T). A mathematical upper bound for this optimal value may be found by rewriting h_g(T) in integral form [1], h_g(T) = ∫_0^T U_g†(0 → t) ∂_g H_g(t) U_g(0 → t) dt. The variance would be maximized by maximizing the contribution of each instant t by means of a hypothetical dynamical state that were at all times an equal superposition of the eigenvectors of ∂_g H_g(t) with maximal and minimal instantaneous eigenvalues μ_max(t), μ_min(t), giving I_g^(op)(T) ≤ [∫_0^T (μ_max(t) − μ_min(t)) dt]². We have termed this upper bound "mathematical" because the eigenvectors |ψ_min(0)⟩ and |ψ_max(0)⟩ of ∂_g H_g(0), with eigenvalues μ_min(0) and μ_max(0), will not in general be driven by H_g(t) along corresponding eigenvectors |ψ_min(t)⟩ and |ψ_max(t)⟩ with eigenvalues μ_min(t) and μ_max(t), i.e., in general the optimal value does not reach (saturate) the upper bound. It is important to distinguish the quantum Fisher information in (4) (which is "maximal" with respect to measurements, for a given state), the "optimal" quantum Fisher information (6) (with respect to measurements and states), and the "upper bound" (8). The optimal value can be calculated in principle from the Hamiltonian H_g(t) alone, but to implement it in an actual estimation protocol we would need specific states and measurements.
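These definitions can be checked numerically on a minimal example. The sketch below (our own illustration, not taken from [1]) uses the simple time-independent Hamiltonian H_g = (g/2)σ_z: the quantum Fisher information computed from the evolved state by a finite difference in g coincides with the optimal value [τ_max(T) − τ_min(T)]² obtained by diagonalizing h_g(T), both equal to T² for the equal-superposition initial state.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

def U_of_g(g, T):
    # evolution operator for the time-independent H_g = (g/2) sigma_z
    return np.diag(np.exp(-1j * 0.5 * g * T * np.array([1.0, -1.0])))

def qfi(g, T, psi0, eps=1e-6):
    # I = 4(<dpsi|dpsi> - |<psi|dpsi>|^2), derivative of the state by central difference
    dpsi = (U_of_g(g + eps, T) - U_of_g(g - eps, T)) @ psi0 / (2 * eps)
    psi = U_of_g(g, T) @ psi0
    return 4 * (np.vdot(dpsi, dpsi) - abs(np.vdot(psi, dpsi))**2).real

def optimal_qfi(g, T, eps=1e-6):
    # (tau_max - tau_min)^2 from h_g(T) = i U_g^dag dU_g/dg
    dU = (U_of_g(g + eps, T) - U_of_g(g - eps, T)) / (2 * eps)
    h = 1j * U_of_g(g, T).conj().T @ dU
    tau = np.linalg.eigvalsh(h)
    return (tau.max() - tau.min())**2

psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)   # equal superposition
print(qfi(1.0, 2.0, psi0), optimal_qfi(1.0, 2.0))     # -> both close to T^2 = 4
```

For this Hamiltonian h_g(T) = (T/2)σ_z, so the eigenvalue spread is T and the optimal value T² is already reached by the equal superposition of σ_z eigenstates, without any control term.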
Similarly, the upper bound depends formally only on H_g(t) (more specifically on its derivative ∂_g H_g(t)), but its realization also needs careful state and measurement selection, as well as extra control terms, as we shall see. The terms "maximal", "optimal", and "upper bound" applied to the Fisher information represent here an ordered hierarchy, but they could be confusing and are subject to specific defining conditions; they have to be put in context. It is not easy to keep an entirely consistent terminology; for example, an "optimal value" or an "upper bound" are of course also maximal values in some sense. Moreover, Pang and Jordan refer to the process that achieves the upper bound (8) by adding Hamiltonian control as "optimal" [1]. Even the concept of an "upper bound" is a relative one, as it depends on the chosen H_g. In summary, dealing with this somewhat entangled parlance needs careful reading. Equation (7) gives the clue to physically realize the upper bound. Again, the initial state must be an equal combination of the maximal and minimal eigenvectors of ∂_g H_g(0), and their time evolution should keep them as instantaneous eigenvectors of ∂_g H_g(t) until t = T. The solution proposed in [1] to achieve this guided dynamics is to add a control term H_c to the Hamiltonian H_g so that the states are driven by the total Hamiltonian in the same way the instantaneous eigenstates |ψ_k(t)⟩ of ∂_g H_g(t) change with time. Here is where the core similarity with STA (counterdiabatic) methods lies. Both in counterdiabatic methods and in the parameter estimation strategy set out in [1], new terms are added to some reference Hamiltonian so that the system is guided along predetermined paths. The proposed form of the control term is [1] H_c(t) = ∑_k f_k(t)|ψ_k(t)⟩⟨ψ_k(t)| + i ∑_k |∂_t ψ_k(t)⟩⟨ψ_k(t)| − H_g(t), where the f_k(t) are in principle arbitrary functions of time that could be chosen for convenience, and the |ψ_k(t)⟩ are the instantaneous eigenvectors of ∂_g H_g(t).
Rewriting H_c as H_c(t) = H_cd(t) − H_g(t), where H_cd(t) = ∑_k [f_k(t)|ψ_k(t)⟩⟨ψ_k(t)| + i|∂_t ψ_k(t)⟩⟨ψ_k(t)|], the total Hamiltonian becomes H_tot(t) = H_g(t) + H_c(t) = H_cd(t). Equation (13) may be found by imposing a unitary evolution operator of the form U(0 → t) = ∑_k e^{−iθ_k(t)}|ψ_k(t)⟩⟨ψ_k(0)|, which drives the dynamics along the eigenstates of ∂_g H_g(t) up to phase factors e^{−iθ_k(t)}. The corresponding Hamiltonian must be iU̇U†, which gives exactly the right-hand side of Equation (13) with θ_k(t) = ∫_0^t f_k(t′) dt′. In STA applications, H_cd(t) is called the counterdiabatic Hamiltonian [8] because, in that context, it avoids diabatic transitions among eigenstates of H_0(t). H_cd(t) drives the system along the states {|ψ_k(t)⟩} both in STA applications and in metrology. There are, however, some important differences: (i) In STA, the states |ψ_k(t)⟩ are eigenstates of a reference Hamiltonian H_0(t) (which plays a role similar to that of H_g(t) as the Hamiltonian whose dynamics we want to transform by adding new terms), while in metrology they are eigenstates of ∂_g H_g(t).
(ii) In STA, H_0(t) is by construction diagonal in the basis {|ψ_k(t)⟩}. In metrology, H_g(t) is in general not diagonal in this basis.
(iii) The functions f_k(t) can be chosen to simplify the Hamiltonian. They do not produce transitions among the {|ψ_k(t)⟩}; they just accumulate a phase factor e^{−iθ_k(t)} for each |ψ_k(t)⟩. In STA, we may apply this freedom to drive the system along the desired paths with H_tot(t) = H_0(t) + H_cd(t), which is in fact the most common form, instead of H_tot(t) = H_cd(t). By contrast, in metrology we could not in general use H_g(t) + H_cd(t), because the addition of H_g(t) does more than just changing phases: it produces transitions. That is why in metrology H_tot(t) is just H_cd(t), Equation (14), at least as a starting point, because a reformulation is in fact needed; see Equation (18) below and the related discussion.
(iv) In metrology, the denomination "counterdiabatic" for H_cd is, strictly speaking, an abuse of language, as in that context H_cd precludes transitions among eigenstates of ∂_g H_g, not transitions among adiabatic states. Nevertheless, the formal expressions are identical, so we shall keep the same terminology and notation.
(v) The emphasis in STA is on fast processes, whereas in the STA-like approach used in metrology speed might be taken into account but it is not necessarily the main goal. Instead, the emphasis is on a precise parameter estimation.
Let us now come back to metrology. To recap, the addition of H_c to H_g would guarantee the state following needed in principle to reach the upper bound, but that is not enough; there are two very important points to take into account: (a) Formally, H_c as written above, see Equation (11), depends on g, whose exact value is unknown. A way out is to set H_c for an approximate value g_c.
(b) To get the upper bound of the Fisher information, in addition to following the state dynamics, the eigenvalues of ∂ g H tot should be the "right ones", i.e., those of ∂ g H g . This point is possibly not fully explicit in [1] but it is quite crucial, as the eigenvalues of ∂ g H tot for H tot = H cd (g) are in general not the right ones.
The way out of these two points is to reformulate the control Hamiltonian in (11) at g = g_c, H_c(t) = ∑_k f_k(t)|ψ_k(t)⟩⟨ψ_k(t)| + i ∑_k |∂_t ψ_k(t)⟩⟨ψ_k(t)| − H_{g_c}(t), where |ψ_k(t)⟩ is now the kth eigenstate of ∂_g H_g(t) with g = g_c. In this context, the subscript g = g_c does not mean that the value of g_c is exactly equal to the unknown g. Rather, it means that g_c should be written instead of the unknown g.
Instead of Equation (14), the total Hamiltonian is thus reformulated as H_tot(t) = H_g(t) + H_c(t) or, taking into account Equation (12), H_tot(t) = H_g(t) − H_{g_c}(t) + H_cd(t)|_{g=g_c}. This is finally the structure used in Reference [1] to approach the upper bound of the Fisher information.
The first term provides the right maximal and minimal eigenvalues of ∂_g H_g(t), since now ∂_g H_tot(t) = ∂_g H_g(t), whereas the whole sum (≈ H_cd(g), but not exactly) essentially drives the two corresponding eigenstates as dynamical solutions of the full Hamiltonian. This structure implies the need for an "adaptive scheme", i.e., a guess value g_c is taken as a starting point to produce a better estimate g_c′, and so on. Convergence towards g is not guaranteed in arbitrary circumstances, but in specific examples the iterations do converge and convergence criteria may be found [1]. This motivates a further difference between ordinary STA and the STA-like approach: (vi) The STA-like approach in metrology is adaptive; it proceeds by iteration to find, via measurements, the value g. In ordinary STA there is no such scheme; nothing plays the role of the successive values g_c, g_c′, g_c″, ... There are iterative approaches, such as the superadiabatic iterations [8], but their aim and formal structure do not match closely the described adaptive scheme. Nevertheless, superadiabatic iterations may be the basis for other parameter estimation schemes, as sketched in the final discussion.
For the total Hamiltonian (18), Equation (7) is reformulated as h_g(T) = ∫_0^T U†(0 → t) ∂_g H_g(t) U(0 → t) dt, where ∂_g H_tot = ∂_g H_g by construction, and the unitary evolution operators U(0 → t) and U†(0 → t) now correspond to the evolution driven by the total Hamiltonian H_tot. That is, the reformulated h_g(T) depends both on g and g_c. A further remark on notation: we stay essentially faithful to the compact notation in [1] to facilitate comparison, but compactness comes at a price, as we use some symbols, for example H_tot or h_g, for different things; contrast in particular (14) and the reformulation (18). A more precise but heavier notation would likely be cumbersome for the reader. We assume that the context should make the right interpretation clear. Note also that in the practical applications of the adaptive method in Reference [1] only the reformulated expressions for H_tot and h_g are used. In cases where doubts could arise, we will specify the equation number.
The eigenstates with the maximum and minimum eigenvalues of ∂_g H_g|_{g=g_c} will be denoted as |ψ_max(t)⟩ and |ψ_min(t)⟩, respectively. With the initial state |ψ_0⟩ = (|ψ_max(0)⟩ + |ψ_min(0)⟩)/√2, the maximal Fisher information (2) reaches, to zeroth order in the deviation δg = g − g_c, the upper bound (8).
To attain this upper bound of the quantum Fisher information in practice, the observable O of Equation (21) can be measured at time T. Keeping dominant orders in δg = g − g_c, g can be found from the estimator ⟨O⟩. The variance of the estimate is the inverse of the upper bound of the Fisher information.

Estimation of Field Amplitude and Rotation Frequency
Pang and Jordan [1] apply the above methodology to a qubit in a uniformly rotating magnetic field B(t) = B[cos(ωt) e_x + sin(ωt) e_z], where e_x and e_z are unit vectors along the x and z directions, respectively, to estimate the amplitude B and the rotation frequency ω. Here, we shall focus on ω, as it leads to the most interesting results.

The Hamiltonian that represents the interaction between the qubit and the field is H_ω(t) = −B[cos(ωt) σ_x + sin(ωt) σ_z], in terms of Pauli matrices. We may as well consider a reinterpretation of this Hamiltonian as the semiclassical interaction for a two-level system in a properly set laser or microwave field, but let us formally keep the notation for a magnetic field. An interesting exercise is to compute the Fisher information for ω (now the parameter g), with and without Hamiltonian control. The derivative of H_ω is ∂_ω H_ω(t) = −Bt[−sin(ωt) σ_x + cos(ωt) σ_z], with time-dependent eigenvalues μ_max,min = ±Bt. Using this result in Equation (8), the upper bound of the Fisher information is I_ω ≤ [∫_0^T 2Bt dt]² = B²T⁴. This result is nontrivial because the gap between Hamiltonian eigenvalues is not increased. Otherwise, if the Hamiltonian is set to increase rapidly with time, arbitrarily high powers of T or exponential growth may be found [1]. Note also that the maximum power of T that can be achieved with a time-independent Hamiltonian is T². Without Hamiltonian control, the optimal quantum Fisher information (6) is instead given by Equation (29).
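The T⁴ upper bound can be verified numerically. The sketch below (our own check, with the sign convention for H_ω assumed here; the bound is insensitive to the overall sign) diagonalizes ∂_ωH_ω(t) at each instant and integrates the eigenvalue gap μ_max(t) − μ_min(t) = 2Bt:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dH_domega(t, B, omega):
    # derivative of H_omega(t) = -B[cos(omega t) sx + sin(omega t) sz] w.r.t. omega
    return -B * t * (-np.sin(omega * t) * sx + np.cos(omega * t) * sz)

def upper_bound(B, omega, T, n=2000):
    # I <= ( int_0^T [mu_max(t) - mu_min(t)] dt )^2, midpoint rule
    dt = T / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        E = np.linalg.eigvalsh(dH_domega(t, B, omega))
        total += (E.max() - E.min()) * dt
    return total**2

print(upper_bound(1.0, 2.0, 3.0))   # -> B^2 T^4 = 81
```

Since the gap grows linearly in t, the integral is BT² and the bound B²T⁴, independently of ω.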

Estimation of the Rotation Frequency with Hamiltonian Control
If we assume f_k(t) = 0, (13) becomes H_cd = −(ω/2)σ_y. Note that H_cd is here a time-independent Hamiltonian, with an upper bound ∼T² for the Fisher information. This illustrates the general statement made before that H_cd drives along the "right" eigenvectors of ∂_g H_g but does not necessarily provide the right eigenvalues, since ∂_g H_cd ≠ ∂_g H_g.
As explained in Section 2, the way out is to reformulate H_c(t) at the estimated value ω_c and set H_tot(t) = H_ω(t) − H_{ω_c}(t) − (ω_c/2)σ_y. To compute the optimal Fisher information, the corresponding h_ω(T) in Equation (19) is diagonalized to find its eigenvalues. Since ω_c is assumed close to ω, h_ω(T) is expanded around ω_c = ω in powers of δω = ω_c − ω, Equations (32) and (33) (notice that this notation in [1] is not consistent with δg = g − g_c), with eigenvalues given by Equation (34). Therefore, substituting the results given by Equation (34) into Equation (6), the optimal Fisher information for (31), Equation (35), becomes the upper bound B²T⁴ plus corrections in δω, where higher-order terms in δω have been neglected. Conditions for convergence are analyzed in [1].
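A numerical illustration of this reformulation (our own sketch, with the structure H_tot = H_ω − H_{ω_c} + H_cd(ω_c) and the sign conventions assumed here): setting ω_c = ω and accumulating h_ω(T) in its integral form along the evolution driven by H_tot, the squared eigenvalue spread reproduces the upper bound B²T⁴.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

B, omega, T, n = 1.0, 2.0, 3.0, 4000
omega_c = omega                      # control set exactly at the true frequency

def H_field(t, w):
    return -B * (np.cos(w * t) * sx + np.sin(w * t) * sz)

def dH_domega(t, w):
    return -B * t * (-np.sin(w * t) * sx + np.cos(w * t) * sz)

def step(H, dt):
    # exp(-i H dt) for Hermitian H, via eigendecomposition
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * dt)) @ V.conj().T

# H_tot = H_omega - H_{omega_c} - (omega_c/2) sy; h(T) = int U^dag (dH/domega) U dt
dt = T / n
U = np.eye(2, dtype=complex)
h = np.zeros((2, 2), dtype=complex)
for k in range(n):
    t = (k + 0.5) * dt
    h += U.conj().T @ dH_domega(t, omega) @ U * dt
    Htot = H_field(t, omega) - H_field(t, omega_c) - 0.5 * omega_c * sy
    U = step(Htot, dt) @ U

E = np.linalg.eigvalsh(h)
print((E.max() - E.min())**2)        # -> close to B^2 T^4 = 81
```

At ω_c = ω the field terms cancel, H_tot reduces to the time-independent −(ω/2)σ_y, and the rotated ∂_ωH_ω becomes proportional to a fixed σ_z, so the hypothetical equal-superposition dynamics of Equation (7) is realized exactly.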

Alternative Driving via Physical Unitary Transformations
In ordinary applications of STA based on the counterdiabatic approach, H_cd often involves different operators from those in the reference Hamiltonian H_0. These extra operators may be hard or even impossible to generate in the laboratory. In the application of STA-like methods to metrology, the same difficulties may arise with the control Hamiltonian H_c. Specifically, for the system and Hamiltonian studied in Section 3, the control Hamiltonian includes a σ_y term whose implementation could be quite challenging in some systems [17]. This really depends on the particular realization of the two-level system, but here we shall assume, as a basic exercise, that σ_y is a term that we want to avoid. In STA applications, it is sometimes possible to change the structure of the total Hamiltonian, avoiding undesired terms, by means of "physical" unitary transformations [8,18,19]. We shall explore this approach in the context of parameter estimation. Specifically, our generic goal is to modify the total Hamiltonian H_tot (see Equation (18)) so that we get rid of the problematic terms. In the example of the previous section, we will modify Equation (31) to get rid of the σ_y term without losing the T⁴ dependence of the Fisher information.
Given a Hamiltonian H(t) that drives the general state |ψ(t)⟩, the unitarily transformed state |ψ′(t)⟩ = G†(t)|ψ(t)⟩ obeys a Schrödinger equation with the alternative Hamiltonian H′(t) = G†(t)H(t)G(t) + iĠ†(t)G(t), where the dot stands for the time derivative. H′(t) is in general not just the unitary transform of H(t) when G(t) depends on time. Notice also that, although these expressions are formally the same as those that define an interaction picture, here the alternative Hamiltonians H′(t) and H(t) represent different physical drivings, just as |ψ(t)⟩ and |ψ′(t)⟩ represent different dynamical states. In the context of STA methods, the transformation indeed provides an alternative shortcut to the one represented by H if we set G(t_b) = 1 and Ġ(t_b) = 0 at the boundary times t_b = 0, T, in order to guarantee |ψ′(t_b)⟩ = |ψ(t_b)⟩ and H′(t_b) = H(t_b). That is, with these boundary conditions the wavefunctions and the Hamiltonians coincide at the boundary times. In ordinary STA, these boundary conditions may be relaxed in some cases [19]. Moreover, in metrology they may be relaxed as well, as we shall see.
Let us now examine the operator h′_g = iU′_g†∂_gU′_g corresponding to H′, where U′_g(0 → t) = G†(t)U_g(0 → t)G(0). The parameter g is unknown, so we assume that the unitary transformation G does not depend on it. Then h′_g(T) = G†(0)h_g(T)G(0), which equals h_g(T) whenever G(0) = 1, and in any case has the same eigenvalues. Similarly, h′²_g(T) = G†(0)h²_g(T)G(0), so H and H′ will have the same maximal Fisher information (four times the variance of h_g, see Equation (4)) for the same initial state. The optimal and upper-bound Fisher information depend only on the spectrum of h_g(T), so they are also unaffected. In this context, there is no need in principle for the transformation operator G to satisfy the boundary conditions in Equations (39) and (40). However, it may be convenient to satisfy Equation (39), so that the wavefunctions |ψ(t)⟩ and |ψ′(t)⟩ coincide at both initial and final times. In particular, this would allow us to use the same observable O in Equation (21) as an estimator for g.
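The invariance argument can be tested numerically. In the sketch below (an illustration under the convention |ψ′⟩ = G†|ψ⟩ used here; the model driving and the rate a are arbitrary choices), the eigenvalue spread of h_g(T), and hence the optimal Fisher information, coincides for H and for the transformed H′:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def step(H, dt):
    # exp(-i H dt) for Hermitian H, via eigendecomposition
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T

def propagate(H_of_t, T, n=2000):
    # time-ordered evolution operator U(0 -> T), midpoint rule
    dt = T / n
    U = np.eye(2, dtype=complex)
    for k in range(n):
        U = step(H_of_t((k + 0.5) * dt), dt) @ U
    return U

def spread(U_of_g, g, eps=1e-5):
    # eigenvalue spread of h_g(T) = i U^dag dU/dg (central difference in g)
    dU = (U_of_g(g + eps) - U_of_g(g - eps)) / (2 * eps)
    h = 1j * U_of_g(g).conj().T @ dU
    E = np.linalg.eigvalsh(h)
    return E.max() - E.min()

a, T = 0.7, 2.0                      # transformation rate and final time (arbitrary)

def H(g):
    # some g-dependent driving (illustrative choice)
    return lambda t: 0.5 * g * (np.cos(t) * sx + np.sin(t) * sz)

def Gm(t):
    # g-independent transformation G(t) = exp(i a t sigma_y / 2), G(0) = 1
    return np.cos(0.5 * a * t) * np.eye(2) + 1j * np.sin(0.5 * a * t) * sy

def Hp(g):
    # transformed driving H' = G^dag H G + i Gdot^dag G = G^dag H G + (a/2) sy
    return lambda t: Gm(t).conj().T @ H(g)(t) @ Gm(t) + 0.5 * a * sy

s  = spread(lambda g: propagate(H(g), T), 1.3)
sp = spread(lambda g: propagate(Hp(g), T), 1.3)
print(abs(s - sp))   # -> ~0: the optimal Fisher information is unchanged
```

Here G(T) ≠ 1 and Ġ does not vanish at the boundaries, illustrating that the spectrum of h_g(T), and with it the optimal and upper-bound Fisher information, survives without the STA boundary conditions.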
In the example of Section 3, we want to transform the Hamiltonian (31) to get rid of σ_y. We really need the full Hamiltonian (31) as the starting point. If we instead used a pure H_cd = −(ω/2)σ_y, as in Equation (14), only a T² dependence of the Fisher information could be reached, since this Hamiltonian is time-independent.
When the Hamiltonian is a linear combination of generators of some Lie algebra, the unitary transformation G may be constructed by exponentiating elements of the algebra and imposing the vanishing of the unwanted terms [19]. In our example, the generators of the algebra are the Pauli matrices, so we will choose unitary transformations of the form G(t) = e^{iα(t)σ_i}, where α(t) is a given time-dependent real function and σ_i can be any of the Pauli matrices {σ_x, σ_y, σ_z}. Taking into account the term to be cancelled, we choose σ_i = σ_y in Equation (45). To cancel the σ_y term of the alternative Hamiltonian, α(t) is chosen proportional to ω_c t, so that the iĠ†G contribution compensates the σ_y control term in (31). The resulting H′(t) has the same structure (generators) as the reference Hamiltonian (26) but different time-dependent coefficients. Rewriting (49) in terms of oscillating fields, it can be seen that the realization of H′(t) is possible assuming that fields oscillating with ω (a "carrier" signal with a precise frequency to be determined) and ω_c (a test signal with an accurately known frequency) can be implemented and combined. Alternatively, we may think of a setting where the difference between the two frequencies, ω − ω_c, can be controlled accurately even if the carrier frequency is unknown. The alternative, feasible Hamiltonian will therefore keep the T⁴ scaling of the Fisher information for a given evolution time T and, consequently, the estimation of ω will be the same as the one achieved with the untransformed Hamiltonian. Specifically, an explicit perturbative calculation in orders of δω reproduces the result in Equation (35), in agreement with the general proof given above that the unitary transformation does not change the Fisher information.
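As a sanity check of the cancellation (our own sketch, with the transformation convention and sign choices assumed in this section), the σ_y component of H′(t) can be evaluated numerically and vanishes at all times:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

B, omega, omega_c = 1.0, 2.0, 1.9    # illustrative values, omega_c close to omega

def H_tot(t):
    # reformulated total Hamiltonian with the control set at omega_c
    Hw  = -B * (np.cos(omega * t) * sx + np.sin(omega * t) * sz)
    Hwc = -B * (np.cos(omega_c * t) * sx + np.sin(omega_c * t) * sz)
    return Hw - Hwc - 0.5 * omega_c * sy

def G(t):
    # G(t) = exp(i omega_c t sigma_y / 2)
    a = 0.5 * omega_c * t
    return np.cos(a) * np.eye(2) + 1j * np.sin(a) * sy

def H_prime(t, eps=1e-6):
    # H' = G^dag H_tot G + i (dG/dt)^dag G
    dGdag = (G(t + eps).conj().T - G(t - eps).conj().T) / (2 * eps)
    return G(t).conj().T @ H_tot(t) @ G(t) + 1j * dGdag @ G(t)

for t in [0.0, 0.4, 1.1]:
    print(abs(np.trace(sy @ H_prime(t)) / 2))   # -> ~0: no sigma_y term left
```

The conjugation by G rotates the σ_x, σ_z content of H_tot among themselves while leaving its σ_y coefficient untouched, so the only σ_y piece is the control term, which is exactly removed by iĠ†G.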
As for the observable O in Equation (21), it may be used as an estimator provided that G(0) = G(T) = 1; then the final states for the drivings by H and H′ are identical if the initial states are the same. Equation (48) implies that G(0) = 1. Noting that G(t) is periodic in time, G(T) = 1 as desired for final times T equal to a multiple of the period.

Discussion
The seminal work of Pang and Jordan [1] demonstrates that time-dependent Hamiltonians allow for better parameter estimation than time-independent ones. Specifically, the time dependence of the Fisher information can be given by higher powers of time without increasing the Hamiltonian intensity. In the example of a qubit in a rotating magnetic field, the optimal Fisher information for the rotation frequency of the field can reach a T⁴ dependence, surpassing the T² limit of time-independent Hamiltonians. In practice, it is generally necessary to add a control Hamiltonian to reach the upper bound of the Fisher information. Pang and Jordan propose a control Hamiltonian to reach the upper bound using an STA-like adaptive approach.
We have discussed similarities and differences between actual STA methods and the STA-like method in Reference [1]. This analysis sets the ground to apply other STA techniques in metrology. We have explored here one of them: physical (rather than formal) unitary transformations. We have first proven that for these transformations the Fisher information does not change. Then, a "proof of principle" application is worked out for the frequency measurement in the single qubit model: assuming that a σ y type of term is not easy to implement, as it happens, e.g., in the experimental setting in [17], we find, by a unitary transformation, alternative Hamiltonians leading to the upper bound of the Fisher information without a σ y term.
As for further possibilities, we sketch here some ideas to be developed in more detail elsewhere. The counterdiabatic approach may be regarded as the zeroth iteration of an STA-generating scheme based on superadiabatic iterations [8,12,20,21]. In zeroth order, a given (Schrödinger picture) Hamiltonian H_0(t) is diagonalized with a basis {|n_0(t)⟩} to set an interaction picture (IP) based on the unitary transformation A_0 = ∑_n |n_0(t)⟩⟨n_0(0)|, with IP Hamiltonian H_1 = A_0†(H_0 − K_0)A_0, where K_0 = iȦ_0A_0†. If, in the IP, the coupling term is cancelled by adding its negative, A_0†K_0A_0, the dynamics unfolds without transitions. Back in the Schrödinger picture (SP), this amounts to driving the system aided by a counterdiabatic term, with the modified Hamiltonian H_0 + K_0, where K_0 = H_cd = H_cd^(0). The added superindex (0) denotes that higher-order iterations may be worked out by repeating the same process, starting, in the first iteration, with H_1 instead of H_0. This first "superadiabatic" iteration generates a different coupling term that may be canceled by its negative, as in the CD method. Of course, further iterations could be implemented.
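As an illustration of the zeroth iteration, the sketch below (a standard textbook example, not taken from [1]) computes K_0 through the gauge-invariant matrix-element formula K_0 = i∑_{m≠n}|m⟩⟨m|∂_tH_0|n⟩⟨n|/(E_n − E_m) for a Landau-Zener-type H_0, and compares it with the known analytic counterdiabatic term (θ̇/2)σ_y with mixing angle θ = arctan(Ω/Δ):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H0(t):
    # Landau-Zener-type reference Hamiltonian (illustrative choice)
    Delta, Omega = t, 1.0            # linear sweep, constant coupling
    return 0.5 * (Delta * sz + Omega * sx)

def Hcd(t, eps=1e-6):
    # K_0 = i sum_{m != n} |m><m| dH0/dt |n><n| / (E_n - E_m)
    dH = (H0(t + eps) - H0(t - eps)) / (2 * eps)
    E, V = np.linalg.eigh(H0(t))
    K = np.zeros((2, 2), dtype=complex)
    for m in range(2):
        for n in range(2):
            if m != n:
                Pm = np.outer(V[:, m], V[:, m].conj())
                Pn = np.outer(V[:, n], V[:, n].conj())
                K += 1j * Pm @ dH @ Pn / (E[n] - E[m])
    return K

# analytic counterdiabatic term for this model: (1/2) d/dt[arctan(Omega/Delta)] sy
t = 0.3
theta_dot = -1.0 / (t**2 + 1.0)      # d/dt arctan(1/t) = -1/(1 + t^2)
print(np.allclose(Hcd(t), 0.5 * theta_dot * sy, atol=1e-5))   # -> True
```

The matrix-element form avoids the arbitrary phases of numerically obtained eigenvectors, which is convenient when iterating the scheme numerically for the higher superadiabatic orders.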

Appendix A. Physical vs. Formal Transformations
In this Appendix, we compare the "interaction picture" transformation in Supplementary Note 3 of [1] and the transformation in Section 4. While in both cases similar unitary operators, exp(−iωtσ_y/2) and exp(iω_ctσ_y/2), are applied, the aim, results, and physical content of the transformations are not the same. In [1], the two Hamiltonians involved are (26) and −Bσ_x + ωσ_y/2. (A1) Formally, (26) may be regarded as an interaction picture Hamiltonian of the "Schrödinger" Hamiltonian (A1) if the interaction picture wavefunction is defined in terms of the Schrödinger picture one as ψ_IP = exp(iωtσ_y/2)ψ_S. As in ordinary applications of interaction pictures, the physics is the same in both pictures; they are just different representations of the same thing, and the aim of the transformation is to get a simple expression for the evolution operator driven by (26), making use of the time-independent structure of (A1). In these equations only the exact value ω appears, and no control Hamiltonian has been added. In Section 4, the starting Hamiltonian is instead (31), which is transformed via (37) using G = exp(iω_ctσ_y/2) into (49). The two Hamiltonians involved, (31) and (49), are now different from those in Reference [1], (26) and (A1). Moreover, the distinction between ω and ω_c plays a fundamental role, and the control Hamiltonian is added in (31). In Section 4, the two related Hamiltonians represent different physical settings and drive different dynamics. The transformation is now made to change the physics; it is not just a convenient, formal change of representation.