Stochastic Schrödinger Equations and Conditional States: A General Non-Markovian Quantum Electron Transport Simulator for THz Electronics

A prominent tool to study the dynamics of open quantum systems is the reduced density matrix. Yet, approaching open quantum systems by means of state vectors has well-known computational advantages. In this respect, the physical meaning of the so-called conditional states in Markovian and non-Markovian scenarios has been a topic of recent debate in the construction of stochastic Schrödinger equations. We shed light on this discussion by acknowledging the Bohmian conditional wavefunction (linked to the corresponding Bohmian trajectory) as the proper mathematical object to represent, in terms of state vectors, an arbitrary subset of degrees of freedom. As an example of the practical utility of these states, we present a time-dependent quantum Monte Carlo algorithm to describe electron transport in open quantum systems under general (Markovian or non-Markovian) conditions. By making the most of trajectory-based and wavefunction methods, the resulting simulation technique extends to the quantum regime the computational capabilities that the Monte Carlo solution of the Boltzmann transport equation offers for semi-classical electron devices.


Introduction
Thanks to its accuracy and versatility, the Monte Carlo solution of the Boltzmann transport equation has been, for decades, the preferred computational tool to predict the DC, AC, transient, and noise performance of semi-classical electron devices [1]. In the past decade, however, due to the miniaturization of electronic devices (with active regions approaching the de Broglie wavelength of the transport electrons), a majority of the device modeling community has migrated from semi-classical to fully quantum simulation tools, marking the onset of a revolution in semiconductor device simulation. Today, a number of quantum electron transport simulators are available to the scientific community [2][3][4]. The amount of information that these simulators can provide, however, is mainly restricted to the stationary regime, and therefore their predictive capabilities are still far from those of the traditional Monte Carlo solution of the semi-classical Boltzmann transport equation [1]. This limitation will pose a serious problem in the near future, as electron devices are foreseen to operate in the terahertz (THz) regime. At these frequencies, the discrete nature of electrons in the active region is expected to generate unavoidable fluctuations of the current that could interfere with the correct operation of such devices in both analog and digital applications [5].
A formally correct approach to electron transport beyond the quasi-stationary regime lies in the description of the active region of an electron device as an open quantum system [6,7]. As such, one can then borrow any state-of-the-art mathematical tool developed to study open quantum systems [8,9]. A preferred technique has been the stochastic Schrödinger equation (SSE) approach [10][11][12][13][14][15][16][17]. Instead of directly solving equations of motion for the reduced density matrix, the SSE approach exploits the state-vector nature of the so-called conditional states to alleviate part of the computational burden (and to ensure a completely positive map by construction [18]). Even if this technique always allows one to reconstruct the full density matrix, whether dynamical information can be directly extracted from such conditional states in non-Markovian scenarios has recently been debated in the literature [19,20]. This debate is very relevant to us, as we are interested in computing not only one-time expectation values (i.e., DC performance) but also dynamical properties (i.e., AC, transient, and noise), such as multi-time correlation functions, at THz frequencies. At these frequencies, the environment correlations are expected to decay on a time scale comparable to the time scale relevant for the system evolution [21]. Furthermore, the displacement current becomes important at very high frequencies, and a self-consistent solution of the Maxwell equations and the Schrödinger equation is necessary [21,22].
Some light on how to utilize the SSE technique to access dynamical information without the need of reconstructing the reduced density matrix has already been shed by Wiseman and Gambetta by acknowledging the Bohmian conditional wavefunction as the proper mathematical tool to describe general open quantum systems in non-Markovian scenarios [23,24]. In this work we reinforce this idea by showing that the Bohmian conditional wavefunction, together with the corresponding Bohmian trajectory, provides an exact decomposition and recasting of the unitary time evolution of a closed quantum system. This recasting yields a set of coupled, non-Hermitian equations of motion that describe the evolution of arbitrary subsets of degrees of freedom on a formally exact level. Furthermore, since the measurement process is defined as a routine interaction between subsystems in Bohmian mechanics, conditional states can be used to describe either the measured or the unmeasured dynamics of an open quantum system. As an example of the practical utility of conditional wavefunctions, we present here a Monte Carlo simulation scheme to describe quantum electron transport in open systems that is valid in both Markovian and non-Markovian regimes and that guarantees a dynamical map that preserves complete positivity [25][26][27][28][29].
This paper is structured as follows. In Section 2 we provide a brief account of the SSE approach and of how nanoscale electron devices can be understood as open quantum systems. Section 3 focuses on the physical interpretation of the conditional states (i.e., system states conditioned on a particular value of the environment) in the contexts of the orthodox and Bohmian quantum mechanical theories. Section 4 provides an overall perspective on the points raised in the previous sections and puts the conditional wavefunction concept into practice to build a general-purpose electron transport simulator, called BITLLES, beyond the steady-state (Markovian) regime. As an example of the use of conditional states, numerical simulations of the THz current in a graphene electron device are presented in Section 5. Final comments and conclusions can be found in Section 6.

Electron Devices as Open Quantum Systems
In this section we introduce the SSE approach to open quantum systems and discuss how it can be used to reconstruct the reduced density matrix. We then explain how a nanoscale electron device can be understood as an open quantum system and how the SSE approach can be applied to predict its performance.

Open Quantum Systems
As usual, we start with a closed quantum system (see Figure 1a). This system is represented by a pure state, |Ψ(t)⟩, which evolves unitarily according to the time-dependent Schrödinger equation:

iħ d/dt |Ψ(t)⟩ = Ĥ |Ψ(t)⟩. (1)

Finding a solution to Equation (1) is out of reach for most practical scenarios due to the large number of degrees of freedom involved. Therefore, it is common practice to partition the system into two subsets of degrees of freedom, viz., open system and environment [6]. The open system can be described by a reduced density matrix:

ρ̂_sys(t) = Tr_env[|Ψ(t)⟩⟨Ψ(t)|], (2)

where Tr_env denotes the trace over the environment degrees of freedom. The reduced density matrix ρ̂_sys can be shown to obey, in the most general circumstances, a non-Markovian master equation [30,31]:

d/dt ρ̂_sys(t) = −(i/ħ)[Ĥ_int(t), ρ̂_sys(t)] + ∫₀ᵗ ds K̂(t, s) ρ̂_sys(s), (3)

where Ĥ_int(t) is a system Hamiltonian operator in some interaction picture and K̂(t, s) is the "memory kernel" superoperator, which acts on the reduced state and represents how the environment affects the system. If the solution to Equation (3) is known, then the expectation value of any observable Â of the system can be evaluated as:

⟨Â⟩(t) = Tr_sys[Â ρ̂_sys(t)]. (4)

Unfortunately, solving Equation (3) is not an easy task. The effect of K̂(t, s) on the reduced state cannot be explicitly evaluated in general circumstances. Moreover, even if the explicit form of K̂(t, s) is known, the solution of Equation (3) is very demanding, as the size of the density matrix ρ̂_sys(t) scales very poorly with the number of degrees of freedom of the open system. Finally, if one aims at computing multi-time correlation functions, then it is necessary to incorporate the effect (backaction) of the successive measurements on the evolution of the reduced density matrix, which is, in general non-Markovian regimes, a very complicated task from both the practical and conceptual points of view.
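The partial trace in Equation (2) is straightforward to illustrate numerically. The following toy sketch (hypothetical dimensions, independent of the BITLLES code discussed later) builds a random system-environment pure state and traces out the environment; the resulting reduced state is Hermitian, positive semidefinite, and of unit trace, as any valid density matrix must be:

```python
import numpy as np

def reduced_density_matrix(psi, dim_sys, dim_env):
    """Trace out the environment from a bipartite pure state |Psi>.

    psi is a flat vector of length dim_sys * dim_env, ordered so that
    psi.reshape(dim_sys, dim_env)[i, j] = (<i| tensor <j|)|Psi>, with |i>
    a system basis state and |j> an environment basis state.
    """
    c = psi.reshape(dim_sys, dim_env)   # coefficient matrix c_{ij}
    return c @ c.conj().T               # rho_sys = Tr_env |Psi><Psi|

# Toy example: a 3-level "system" entangled with a 5-level "environment".
rng = np.random.default_rng(0)
psi = rng.normal(size=15) + 1j * rng.normal(size=15)
psi /= np.linalg.norm(psi)              # normalize the global pure state

rho = reduced_density_matrix(psi, 3, 5)
```

Even though the global state is pure, `rho` is in general mixed, which is precisely why master equations such as Equation (3) are needed for its evolution.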

Stochastic Schrödinger Equations
A breakthrough in the computation of the reduced density matrix in Equation (2) came from the advent of the SSE approach [32]. The main advantage of the SSE approach is that the unknown to be evaluated is a state vector (of size N_sys) rather than a matrix (of size N_sys²), and thus there is an important reduction of the associated computational cost. In addition, it provides equations of motion that, by construction, ensure a completely positive map [18], so that the SSE approach guarantees that the density matrix always yields a positive probability density, a requirement that is not generally satisfied by other approaches based on directly solving Equation (3) [33].
The central mathematical object in the SSE approach to open quantum systems is the conditional state of the system:

|ψ_q(t)⟩ = ⟨q|Ψ(t)⟩ / √P(q, t), (5)

where P(q, t) = ⟨Ψ(t)| (|q⟩⟨q| ⊗ Î_sys) |Ψ(t)⟩ and |q⟩ are the eigenstates of the so-called unraveling observable Q̂ belonging to the Hilbert space of the environment. To simplify the discussion, and unless indicated otherwise, q represents the collection of degrees of freedom of the environment. Using the eigenstates |q⟩ as a basis for the environment degrees of freedom, it is then easy to rewrite the full state |Ψ(t)⟩ as:

|Ψ(t)⟩ = ∫ dq √P(q, t) |q⟩ ⊗ |ψ_q(t)⟩, (6)

which can be simply understood as a Schmidt decomposition of a bipartite (open system plus environment) state. Thus, a complete set of conditional states can always be used to reproduce the reduced density matrix at any time as:

ρ̂_sys(t) = ∫ dq P(q, t) |ψ_q(t)⟩⟨ψ_q(t)|. (7)

Let us note that no specific (Markovian or non-Markovian) assumption was required to write Equation (7). In fact, the above definition of the reduced density matrix simply responds to the global unitary evolution in Equation (1), which (as depicted in Figure 1a) does not include the effect of any measuring apparatus.
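The decomposition in Equation (7) can be verified numerically: the conditional states ⟨q|Ψ(t)⟩, weighted by P(q, t), reassemble exactly the partial-trace reduced density matrix of Equation (2). A minimal sketch with a discrete environment basis and hypothetical toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
dim_sys, dim_env = 4, 6

# Random normalized bipartite state, c[i, q] = (<i| tensor <q|)|Psi>.
c = rng.normal(size=(dim_sys, dim_env)) + 1j * rng.normal(size=(dim_sys, dim_env))
c /= np.linalg.norm(c)

# P(q) and the normalized conditional states |psi_q>, as in Equation (5).
P = np.sum(np.abs(c) ** 2, axis=0)       # probability of environment outcome q
psi_q = c / np.sqrt(P)[None, :]          # columns are the conditional states

# Equation (7): rho_sys = sum_q P(q) |psi_q><psi_q|.
rho_from_conditional = sum(P[q] * np.outer(psi_q[:, q], psi_q[:, q].conj())
                           for q in range(dim_env))

# Direct partial trace over the environment, Equation (2).
rho_direct = c @ c.conj().T
```

The two constructions agree to machine precision, with no Markovian assumption anywhere, mirroring the remark below Equation (7).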

Nanoscale Electron Devices as Open Quantum Systems
At first sight, one could be inclined to say that a nanoscale electron device perfectly fits the above definition of an open quantum system. The open system would then be the device's active region, and the environment (including the contacts, the cables, the ammeter, etc.) the so-called reservoirs or contacts (see Figure 1a). In addition, the observable of interest Â in Equation (4) would most probably be the current operator Î. As long as we are interested only in single-time expectation values, i.e., static or stationary properties, this picture (and the picture in Figure 1a) is perfectly valid. Therefore, the SSE approach introduced in Equations (5)-(7) can be easily adopted to simulate electron devices and hence to predict their static performance. However, if one aims at computing dynamical properties such as time-correlation functions, e.g., ⟨I(t + τ)I(t)⟩, then a valid question is whether such an expectation value is expected to be measurable in the laboratory. If so, what would then be the effect of the measurement of I at time t on the measurement of I at a later time t + τ? Figure 1b schematically depicts this question by drawing explicitly the measuring apparatus (or meter). As is well known, the action of measuring in quantum mechanics is not innocuous. It is quite the opposite: in many relevant situations, extracting information from a system at time t has a non-negligible effect on the subsequent evolution of the system and hence also on what is measured at a later time t + τ. Therefore, as soon as we are concerned with dynamic information (i.e., time-correlation functions), we need to ask ourselves whether an approach to open quantum systems such as the SSE approach can be of any help. In the next section we will answer this question and understand whether the conditional states |ψ_q(t)⟩ defined in Equation (5) include the backaction of the measuring apparatus depicted in Figure 1b.

Interpretation of Conditional States in Open Quantum Systems
The conditional states in Equation (5) were first interpreted as a simple numerical tool [32], that is, exploiting the result in Equation (7) as a numerical recipe to evaluate any expectation value of interest. This interpretation is linked to the assumption that the operator Â in Equation (4) is the physically relevant operator (associated with a real measuring apparatus), while the operator Q̂ associated with the definition of the conditional state in Equation (5) is only a mathematical object with no attached physical reality, i.e., it merely represents a basis. In more recent times, however, it has been generally accepted that the conditional states in Equation (5) can be interpreted as the states of the system conditioned on a type of sequential (sometimes referred to as continuous) measurement [34] of the operator Q̂ of the environment (now representing a physical measuring apparatus that substitutes the no longer needed operator Â) [6,12,35]. From a practical point of view, this latter interpretation is very attractive, as it would allow linking the conditional states |ψ_q(t)⟩ at different times and thus computing time-correlation functions without the need of introducing the measuring apparatus or of reconstructing the full density matrix. Whether or not this latter interpretation is physically sound in general circumstances is the focus of our discussion in the next subsections.

The Orthodox Interpretation of Conditional States
Let us start by discussing, within orthodox quantum mechanics, the physical meaning of the conditional states that appear in Equation (5). When the full closed system follows the unitary evolution of Figure 1a, the conditional state |ψ_q(t)⟩ can be understood as the (renormalized) state that the system is left in after projectively measuring the property Q of the environment (with outcome q). This can be easily seen by noting that the superposition in Equation (6) is, after a projective measurement of Q, reduced (or collapsed) to the product state |Ψ_q(t)⟩ = √P(q, t) |q⟩ ⊗ |ψ_q(t)⟩.
It is important to notice that the conditional state |ψ_q′(t′)⟩ at a later time, t′ > t, can be equivalently defined as the state of the system when the superposition in Equation (6) is measured at time t′ and yields the outcome q′. This interpretation, however, is only valid if no previous measurement (in particular at t) has been performed, as depicted in Figure 2a. Otherwise, the collapse of the wavefunction at time t, yielding the state √P(q, t) |q⟩ ⊗ |ψ_q(t)⟩, should be taken into account in the future evolution of the system, which would not be the same as if the measurement had not been performed at the previous time. Therefore, the equation of motion of the conditional states, as defined in Equation (5), cannot be, in general, the result of a sequential measurement protocol such as the one depicted in Figure 1b or Figure 2b. This conclusion seems obvious if one recalls that our starting point was Figure 1a, where there is no measurement.

Orthodox Conditional States in Markovian Scenarios
Even if the conditional states solution of the SSE cannot generally be interpreted as the result of a sequential measurement, such an interpretation has proven to be very useful in practice for scenarios that fulfill some specific type of Markovian conditions. We are aware that there is still some controversy on how to properly define Markovianity in the quantum regime (see, e.g., Ref. [18]), so our goal here is only to acknowledge the existence of some regimes (i.e., particular observation time intervals) of interest where the measurement of the environment has no observable effects. In this regime, Figure 1a,b as well as Figure 2a,b can be thought to be equivalent.
Figure 2. Panel (a): The states of the system conditioned on a particular value of the environment at time t, |ψ_q(t)⟩, can be given a physical meaning only if no measurement has been performed at a previous time t′ < t. This approach can always be used to reconstruct the correct reduced density matrix of the system at any time, but cannot be used to link in time the conditional states for non-Markovian scenarios. Panel (b): Schematic representation of a sequential measurement. The wavefunction of the system plus environment is measured sequentially. In this picture, the link between the states of the full system plus environment at different times is physically motivated.
In our pragmatic definition of Markovianity, the entanglement between system and environment decays on a time scale t_D that is much smaller than the observation time interval τ, i.e., t_D ≪ τ. In this regime, the environment itself can be thought of as a type of measuring apparatus (as appears in generalized quantum measurement theory [36]) that keeps the open system in a pure state after the measurement. The evolution of the open system can then be described by an SSE in which the stochastic variable q_t (sampled from the distribution P(q_t, t)) is directly the output of a sequential measurement of the environment. The stochastic trajectory of the conditioned system state generated by the (Markovian) SSE is often referred to as a quantum trajectory [6,12,35] and can be used, for example, to evaluate time-correlation functions of the environment as:

⟨Q(t)Q(t + τ)⟩ = ∫ dq_t ∫ dq_{t+τ} q_t q_{t+τ} P(q_t, t) P(q_{t+τ}, t + τ). (9)

Let us emphasize that the stochastic variables q_t and q_{t+τ} in Equation (9) are sampled, separately, from the probability distributions P(q_t, t) and P(q_{t+τ}, t + τ). Therefore, as we have schematically depicted in Figure 3, no matter how the trajectories {q_t} are connected in time, one always obtains the correct time-correlation function ⟨Q(t)Q(t + τ)⟩. It is important to realize that we started our discussion on the physical meaning of the Markovian SSE with an open system whose environment is not being measured (see Figures 1a and 2a). Noticeably, we have ended up discussing an environment that is being measured at every time interval τ (see Figure 2b). How is that possible? The reason is that measuring the environment at time t does not affect the system conditional states at a later time t + τ when the built-in correlations in the environment due to the measurement at time t decay in a time interval t_D much smaller than the time interval between measurements τ. In other words, Figure 1a,b as well as Figure 2a,b are not distinguishable when t_D ≪ τ.
In this sense, the Markovian regime has some similarities with a classical system, where it is accepted that information can be extracted without perturbation.
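The sampling prescription of Equation (9) is easy to demonstrate with a toy Monte Carlo experiment: when the two-time statistics factorize, drawing q_t and q_{t+τ} independently from their marginals (exactly as in Figure 3) reproduces the correlation function. All distributions and outcome values below are hypothetical illustrations:

```python
import numpy as np

rng = np.random.default_rng(2)
q_vals = np.array([-1.0, 0.0, 1.0])       # hypothetical discrete outcomes of Q

# Hypothetical marginal distributions P(q_t, t) and P(q_{t+tau}, t+tau).
P1 = np.array([0.2, 0.5, 0.3])
P2 = np.array([0.1, 0.3, 0.6])

# Markovian (factorized) correlation: <Q(t)Q(t+tau)> = sum q q' P1(q) P2(q').
exact = (q_vals @ P1) * (q_vals @ P2)

# Monte Carlo: sample each time independently, then average the product.
n = 200_000
s1 = rng.choice(q_vals, size=n, p=P1)
s2 = rng.choice(q_vals, size=n, p=P2)
mc = np.mean(s1 * s2)
```

Any pairing of the independently drawn samples gives the same average, which is the content of the statement that the connection between the {q_t} does not matter in the Markovian regime.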

Orthodox Conditional States in Non-Markovian Scenarios
For nanoscale devices operating at very high (THz) frequencies, the relevant dynamics, and hence the observation time interval τ, are both below the picosecond time scale, and the previous assumption of Markovianity, i.e., t_D ≪ τ, starts to break down. Under the condition t_D ∼ τ, non-Markovian SSEs have been proposed which provide an alternative procedure for solving for the conditional state |ψ_q(t)⟩ [17,33,[37][38][39][40][41]. However, non-Markovian SSEs constructed from Equation (5), unlike the Markovian SSEs, suffer from interpretation issues [17]. In the non-Markovian regime, the perturbation of the environment due to the quantum backaction of a measurement at time t would not be washed out in the time lapse τ ∼ t_D, and hence the joint probability distribution would not become separable, i.e., P(q_t, q_{t+τ}) ≠ P(q_t)P(q_{t+τ}). As a direct consequence, connecting in time the different solutions q_t and q_{t+τ} (sampled independently from the probability distributions P(q_t, t) and P(q_{t+τ}, t + τ) as in Figure 3) to make a trajectory "would be a fiction" [17,19,20]. Here, the word "fiction" means that the time-correlations computed in Equation (9) are wrong, i.e., the expectation value in Equation (9) would simply differ from the experimental result.
According to d'Espagnat, the above discussion can be rephrased in terms of the so-called proper and improper mixtures [42]. Following d'Espagnat's arguments, the reduced density matrix in Equation (7) is an improper mixture because it has been constructed by tracing out the degrees of freedom of the environment. On the contrary, a proper mixture is a density matrix constructed to simultaneously describe several experiments where a closed system is described, in each experiment, by a different pure state. Due to our ignorance, we do not know which pure state corresponds to which experiment, so we only know the probabilities of finding a given pure state. D'Espagnat argues that the ignorance interpretation of the proper density matrix cannot be applied to the improper density matrix discussed here (see Appendix A). To understand why, under a Markovian regime, open systems can be described by pure states (using a proper mixture), we recall that Markovianity implies conditions on the observation time.

The Bohmian Interpretation of Conditional States
Thus, under non-Markovian (i.e., the most general) conditions, the conditional states |ψ_q(t)⟩ can be used to reconstruct the reduced density matrix as in Equation (7), but cannot be used to provide further information on their own. This interpretation problem is rooted in the fact that orthodox quantum mechanics only grants reality to objects whose properties (such as q) are being directly measured. However, as explained in the previous subsection, it is precisely the introduction of the measurement of q (without including the pertinent backaction on the system evolution) that prevents the conditional states |ψ_q(t)⟩ of the non-Markovian SSE from being connected in time for t_D ∼ τ. In this context, a valid question regarding the interpretation of |ψ_q(t)⟩ is whether or not we can obtain information about, e.g., the observable Q without perturbing the state of the system. The answer given by orthodox quantum mechanics is crystal clear: this is not possible (except under Markovian conditions) because information requires a measurement, and the measurement induces a perturbation. Notice, however, that the assumption that only measured properties are real is not something forced on us by experimental facts, but a deliberate choice of the orthodox quantum theory. Therefore, we here turn to a nonorthodox approach: the Bohmian interpretation of quantum mechanics [43][44][45][46][47][48].
A fundamental aspect of the Bohmian theory is that the reality (of the properties) of quantum objects does not depend on the measurement. That is, the values of some observables, e.g., the positions of the particles of the environment, exist independently of the measurement. If q is the collective degree of freedom of the positions of the particles of the environment and x is the collective degree of freedom of the positions of the particles of the system, then the Bohmian theory defines an experiment in the laboratory by means of two basic elements: (i) the wavefunction ⟨q, x|Ψ(t)⟩ = Ψ(x, q, t) and (ii) an ensemble of trajectories Q^i(t), X^i(t) of the environment and of the system. We use a superindex i to denote that, each time an experiment is repeated with the same preparation for the wavefunction Ψ(x, q, t), the initial positions of the environment and system particles can be different. They are selected according to the probability distribution |Ψ(X^i, Q^i, 0)|² [44]. The equation of motion for the wavefunction Ψ(x, q, t) is the time-dependent Schrödinger equation in Equation (1), while the equations of motion for the environment and system trajectories Q^i(t), X^i(t) are obtained by time-integrating the velocity fields v_q(x, q, t) = J_q(x, q, t)/|Ψ(x, q, t)|² and v_x(x, q, t) = J_x(x, q, t)/|Ψ(x, q, t)|², respectively. Here, J_q(x, q, t) and J_x(x, q, t) are the standard current densities of the environment and the system, respectively. We highlight the (nonlocal) dependence of the Bohmian velocities of the particles of the environment on the particles of the system, and vice versa. This reflects the entanglement between environment and system at the level of the Bohmian trajectories. According to the continuity equation,

∂|Ψ(x, q, t)|²/∂t + ∇_x · J_x(x, q, t) + ∇_q · J_q(x, q, t) = 0, (10)

the ensemble of trajectories {X^i(t), Q^i(t)} can be used to reproduce the probability distribution |Ψ(x, q, t)|² at any time.
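The trajectory construction can be illustrated on the one textbook case where everything is known in closed form: a free one-dimensional Gaussian wavepacket (ħ = m = 1). Its Bohmian velocity field v = J/|Ψ|² reduces to v(x, t) = v₀ + (x − v₀t)·t/(τ² + t²), with τ = 2σ₀² the spreading time scale, and the exact trajectories are X(t) = v₀t + X(0)·σ(t)/σ₀ with σ(t) = σ₀√(1 + (t/τ)²). The sketch below (a standard pedagogical example, not the device simulator) integrates the velocity field numerically and recovers the analytic trajectory:

```python
import numpy as np

hbar = m = 1.0
sigma0 = 1.0          # initial wavepacket width
v0 = 0.5              # group velocity of the packet
tau = 2.0 * m * sigma0**2 / hbar   # spreading time scale

def sigma(t):
    """Width of the freely spreading Gaussian wavepacket."""
    return sigma0 * np.sqrt(1.0 + (t / tau) ** 2)

def velocity(x, t):
    """Bohmian velocity field v = J/|Psi|^2 of the free Gaussian packet."""
    return v0 + (x - v0 * t) * t / (tau**2 + t**2)

def integrate_trajectory(x0, t_final, dt=1e-4):
    """Euler time-integration of dX/dt = v(X, t)."""
    x, t = x0, 0.0
    while t < t_final:
        x += velocity(x, t) * dt
        t += dt
    return x

x0, t_final = 0.7, 3.0
x_numeric = integrate_trajectory(x0, t_final)
x_exact = v0 * t_final + x0 * sigma(t_final) / sigma0
```

Trajectories that start distributed according to |Ψ(x, 0)|² remain distributed according to |Ψ(x, t)|² at all later times, which is the numerical face of the continuity-equation statement above.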
Thus, by construction, the computation of ensemble values from the orthodox and Bohmian theories is fully equivalent at the empirical level. From the full wavefunction ⟨x, q|Ψ(t)⟩ = Ψ(x, q, t) (solution of Equation (1)) and the trajectories Q^i(t), X^i(t), one can then easily construct the Bohmian conditional wavefunctions of the system and of the environment, respectively, i.e., ψ_{Q^i(t)}(x, t) = Ψ(x, Q^i(t), t) and ψ_{X^i(t)}(q, t) = Ψ(X^i(t), q, t). Notice that this Bohmian definition of conditional states does not require one to specify whether the system is measured or not, because the ontological nature of the trajectories {Q(t), X(t)} does not depend on the measurement. Consequently, the conditional wavefunctions ψ_{Q^i(t)}(x, t), with the corresponding Bohmian trajectories, contain all the required information to evaluate dynamical properties of the system, no matter whether Markovian or non-Markovian conditions are being considered. This can be seen by noticing that the velocity of the trajectory X^i(t), given by v_x(X^i(t), Q^i(t), t), can be equivalently computed either from the full wavefunction Ψ(x, q, t) or from the conditional wavefunction ψ_{Q^i(t)}(x, t). Thus, in a particular experiment i and for a given time t, the dynamics of the Bohmian trajectory X^i(t) can be computed from the conditional wavefunction alone. The Bohmian conditional wavefunction of the system can now be connected to the orthodox conditional state in Equation (5) by imposing Q^i(t) ≡ q_t. Then one can readily write:

ψ_{Q^i(t)}(x, t) = √P(q_t, t) ⟨x|ψ_{q_t}(t)⟩. (11)

At first sight, one could think that the difference between the Bohmian and orthodox conditional states is just a simple renormalization constant √P(q_t, t) (see Appendix B for a more detailed explanation of the role of this renormalization constant). However, the identity in Equation (11) has to be satisfied at any time t, which implies that the following identity should prevail:

Q^i(t) ≡ q_t for all t. (12)

We emphasize the importance of Equation (12) in ensuring the accomplishment of Equation (11). If we consider another experiment Q^j(t) ≡ q′_t, we have to define another conditional state |ψ_{q′_t}(t)⟩.
It can happen that, at a particular time t ≡ t₁, both conditional states become identical, i.e., |ψ_{q_{t₁}}(t₁)⟩ = |ψ_{q′_{t₁}}(t₁)⟩. However, this does not imply that both conditional wavefunctions can be used interchangeably in the computation of time-correlations. This is because every Bohmian trajectory has a fundamental role in describing the history of the Bohmian conditional state for one particular experiment. Therefore, the trajectory Q^i(t) uniquely describes the evolution of the conditional wavefunction |ψ_{q_t}(t)⟩ for one experiment (labeled by the index i in the Bohmian language), in the same way as the trajectory Q^j(t) and the conditional wavefunction |ψ_{q′_t}(t)⟩ describe the experiment labeled by j. As we said, |ψ_{q_{t₁}}(t₁)⟩ and |ψ_{q′_{t₁}}(t₁)⟩ are the same orthodox conditional states, but they do not necessarily represent the same Bohmian conditional wavefunction. This subtle difference explains why the conditional states of an SSE cannot be connected in time and used to study the time-correlations of a non-Markovian open system, whereas the same can be done through the Bohmian conditional states without any ambiguity. The mathematical definitions of the measurement process in Bohmian mechanics and in orthodox quantum mechanics differ substantially [44]. In the orthodox theory, a collapse (or reduction) law, different from the Schrödinger equation, is necessary to describe the measurement process [45]. Contrarily, in Bohmian mechanics the measurement is treated as any other interaction, as far as the degrees of freedom of the measuring apparatus are taken into account [44]. Therefore, while in the orthodox theory the conditional states |ψ_{q_t}(t)⟩ cannot be understood without the perturbation of the full wavefunction Ψ(x, q, t), in Bohmian mechanics the states |ψ_{q_t}(t)⟩ do have a physical meaning even when the full wavefunction Ψ(x, q, t) is unaffected by the measurement of the environment [23].
Interestingly, this introduces the possibility of defining what we call "unmeasured (Bohmian) conditional states" when it is assumed that there is no measurement, or that the measurement of q_t at time t has a negligible influence on the subsequent evolution of the conditional state.
Importantly, the Bohmian conditional states and the corresponding Bohmian trajectories can be used not only to reconstruct the reduced density matrix in Equation (7) at any time; the environment trajectories {Q(t)} also allow us to correctly predict any dynamic property of interest, including time-correlation functions, e.g.,

⟨Q(t)Q(t + τ)⟩ = lim_{M→∞} (1/M) Σ_{i=1}^{M} Q^i(t) Q^i(t + τ), (13)

where M is the number of experiments (Bohmian trajectories) considered in the ensemble. As shown in Figure 4, the evaluation of Equation (13) and of any other dynamic property when t_D ∼ τ can be done only by connecting the (Bohmian) trajectories at different times in accordance with the continuity equation in Equation (10). This is in contrast with the evaluation of the dynamics in the Markovian regime, where any position of the environment at time t₁ can be connected to another position of the environment at time t₂ (see Figure 3) and hence we can still obtain the correct ⟨Q(t)Q(t + τ)⟩. This very relevant point was first explained by Gambetta and Wiseman [23,24].
Figure 4. Only environment positions connected in accordance with the continuity equation in Equation (10) can be linked in time to form a trajectory (shown as connected red circles). Dashed lines represent connections that do not follow the continuity equation and hence cannot be used to evaluate any dynamic property.
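The ensemble average of Equation (13) can be put to work on the same analytic example used above: a free Gaussian wavepacket at rest (ħ = m = 1, v₀ = 0), whose Bohmian trajectories are X^i(t) = X^i(0)·σ(t)/σ₀. Sampling the initial positions from |Ψ(x, 0)|², here taken as a Gaussian of standard deviation σ₀ (a convention assumed for this toy sketch), and averaging products along the same trajectory gives the two-time correlation ⟨X(t₁)X(t₂)⟩ = σ(t₁)σ(t₂):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma0 = 1.0
tau = 2.0 * sigma0**2          # hbar = m = 1

def sigma(t):
    """Width of the freely spreading Gaussian wavepacket."""
    return sigma0 * np.sqrt(1.0 + (t / tau) ** 2)

# One Bohmian trajectory per experiment i: X_i(t) = X_i(0) * sigma(t)/sigma0.
M = 200_000
x0 = rng.normal(0.0, sigma0, size=M)   # initial positions ~ |Psi(x,0)|^2

t1, t2 = 1.0, 3.0
x_t1 = x0 * sigma(t1) / sigma0
x_t2 = x0 * sigma(t2) / sigma0

# Equation (13): pair the two times along the SAME trajectory, then average.
corr_trajectories = np.mean(x_t1 * x_t2)
corr_exact = sigma(t1) * sigma(t2)
```

The key point, mirroring Figure 4, is that `x_t1[i]` and `x_t2[i]` belong to the same experiment i; shuffling one array relative to the other (the "fiction" of independent sampling) would destroy the correlation.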
Although the Bohmian theory can also provide measured properties of the system that coincide with the orthodox results in Figure 2b, let us emphasize once more the merit of the unmeasured properties provided by the Bohmian theory, which has remained largely unnoticed in the literature. As already explained, in the orthodox theory, measuring a particular value of the environment property q at time t cannot be conceived without the accompanying perturbation of the wavefunction Ψ(x, q, t). Under non-Markovian conditions, it is precisely this perturbation that prevents the conditional states of the system |ψ_{q_t}(t)⟩ from being connected in time to form a trajectory. Contrarily, in Bohmian mechanics, the existence of the environment trajectories {Q(t)}, even in the absence of any measurement, allows the possibility of connecting in time the conditional states |ψ_{q_t}(t)⟩ even when t_D ∼ τ.
Note that in the Bohmian framework, where the measurement apparatus is simply represented by an additional number of degrees of freedom interacting with the system (i.e., without requiring any additional collapse law), a discussion about measured and unmeasured properties of quantum systems is pertinent [49]. At a practical level, the measurement of many classical systems implies non-negligible perturbations. In particular, electronic devices at high frequencies are paradigmatic examples where such perturbations occur. It is well known that the experimental setup (e.g., a coaxial cable) connecting the electronic device to the meter induces dramatic perturbations in high-frequency measurements. An important task for device engineers is to determine what part of the measured signal is due to the intrinsic behaviour of the electron device and what part is due to the rest of the experimental setup. When trying to predict the "intrinsic" behaviour of electronic devices, the coaxial cables are modelled by "parasitic" capacitors or inductors to account for their "spurious" effect. The measurement of the whole experimental setup is even repeated twice, with and without the "intrinsic" device under test (DUT), to subtract the results and determine experimentally the "intrinsic" properties of the electronic device alone. Such "intrinsic" properties of the electronic devices are what we define in this manuscript as the unmeasured properties of quantum systems.

Bohmian Conditional Wavefunction Approach to Quantum Electron Transport
The different notions of reality invoked by the orthodox quantum theory and Bohmian mechanics lead to practical differences in the abilities that these theories can offer to provide information about quantum dynamics. Specifically, we have shown that, contrarily to orthodox quantum mechanics, Bohmian mechanics allows one to physically interpret (i.e., link in time) the conditional states of the SSE approach in general non-Markovian scenarios. The reason is that whereas in the Bohmian theory the reality of the current is independent of any measurement, the orthodox theory gives reality to the electrical current only when it is being measured (this is the so-called eigenstate-eigenvalue link). From the practical point of view, this has a remarkable consequence. In the Bohmian approach the total current can be defined in terms of the dynamics of the electrons' (Bohmian) trajectories without the need to define a measurement operator. As will be shown in this section, the possibility of computing the total current at high frequencies without specifying the measurement operator is certainly a great advantage of the Bohmian approach over the orthodox one [44]. In particular, one can then avoid cumbersome questions like: is the measurement operator of the electrical current strong or weak? If weak, how weak? How often does such an operator act on the system? Every picosecond, every femtosecond? At high frequencies, how do we introduce the contribution of the displacement current into the electrical current operator?
In this section we provide a brief summary of the path that the authors of this work followed for developing an electron transport simulator based on the use of Bohmian conditional states. The resulting computational tool is called BITLLES [28,29,[50][51][52][53][54][55][56]. Let us start by considering an arbitrary quantum system. The whole system, including the open system, the environment, and the measuring apparatus, is described by a Hilbert space H that can be decomposed as H = H_x ⊗ H_q, where H_x is the Hilbert space of the open system and H_q the Hilbert space of the environment. If needed, the Hamiltonian Ĥ_q can also include the degrees of freedom of the measuring apparatus, as explained in Section 3.2. We define x = {x_1, x_2, ..., x_n} as the degrees of freedom of the n electrons in the open system, while q collectively defines the degrees of freedom of the environment (and possibly the measuring apparatus). The open system plus environment Hamiltonian can then be written as:

Ĥ = Ĥ_x + Ĥ_q + V̂,

where Ĥ_x is the Hamiltonian of the system, Ĥ_q is the Hamiltonian of the environment (including the apparatus if required), and V̂ is the interaction Hamiltonian between the system and the environment. We note at this point that the number of electrons n in the open system can change in time, and so the size of the Hilbert spaces H_x and H_q can depend on time too.
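The decomposition above can be made concrete in a small finite-dimensional toy model. The sketch below (with arbitrary dimensions and random Hermitian matrices as stand-ins for Ĥ_x, Ĥ_q, and V̂ — all of them assumptions for illustration) shows how the three terms are lifted to the composite Hilbert space H_x ⊗ H_q:

```python
import numpy as np

# Toy illustration of the decomposition H = H_x + H_q + V on the composite
# Hilbert space H_x (x) H_q.  Dimensions and matrices are arbitrary choices.
rng = np.random.default_rng(4)

def random_hermitian(d):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (A + A.conj().T) / 2

dx_dim, dq_dim = 3, 5                      # system and environment dimensions
Hx = random_hermitian(dx_dim)              # system Hamiltonian
Hq = random_hermitian(dq_dim)              # environment (+ apparatus) Hamiltonian
V = random_hermitian(dx_dim * dq_dim)      # system-environment interaction

# Lift the local terms to the full space with identities, then add V:
H = np.kron(Hx, np.eye(dq_dim)) + np.kron(np.eye(dx_dim), Hq) + V
# H acts on the full 15-dimensional space and is Hermitian by construction
```

Note that a time-dependent number of electrons n(t), as mentioned above, would make `dx_dim` itself time-dependent; the static construction here only illustrates the tensor-product structure.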
The equation of motion for the Bohmian conditional states ⟨x|ψ_{q_t}(t)⟩ = ψ̃_{q_t}(x, t) in the position representation of the system can be derived by projecting the many-body (system-environment) Schrödinger equation onto a particular trajectory of the environment q_t ≡ Q(t), i.e., [26,57]:

iħ ∂ψ̃_{q_t}(x, t)/∂t = [ −(ħ²/2m) ∇²_x + U(x, t) + V(x, q_t, t) + A(x, q_t, t) + i B(x, q_t, t) ] ψ̃_{q_t}(x, t).    (15)

Equation (15) can be rewritten as:

iħ ∂ψ̃_{q_t}(x, t)/∂t = [ −(ħ²/2m) ∇²_x + Ũ(x, q_t, t) ] ψ̃_{q_t}(x, t),    (16)

where

Ũ(x, q_t, t) = U(x, t) + V(x, q_t, t) + A(x, q_t, t) + i B(x, q_t, t).    (17)

In Equation (17), U(x, t) is an external potential acting only on the system degrees of freedom, V(x, q_t, t) is the Coulomb potential between particles of the system and the environment evaluated at a given trajectory of the environment, and A(x, q_t, t) = −(ħ²/2m) [∇²_q Ψ(x, q, t)/Ψ(x, q, t)]|_{q=q_t} and B(x, q_t, t) = ħ [∇_q Ψ(x, q, t)/Ψ(x, q, t)]|_{q=q_t} q̇_t (with q̇_t = dq_t/dt) are responsible for mediating the so-called kinetic and advective correlations between system and environment [26,57]. Equation (16) is non-linear and describes a non-unitary evolution. In summary, Bohmian conditional states can be used to exactly decompose the unitary time-evolution of a closed quantum system in terms of a set of coupled, non-Hermitian, equations of motion [26,[57][58][59]. An approximate solution of Equation (16) can always be achieved by making an educated guess for the terms A and B according to the problem at hand. Specifically, in the BITLLES simulator the first and second terms in Equation (17) are evaluated through the solution of the Poisson equation [29]. The third and fourth terms are modeled by a proper injection model [60] as well as proper boundary conditions [56,61] that include the correlations between the active region and the reservoirs. Electron-phonon decoherence effects can also be effectively included in Equation (16) [25].
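To illustrate how an equation of the form of Equation (16) can be integrated in practice, the following minimal 1D sketch propagates a conditional wavefunction with the split-operator method. The potentials and the correlation terms A and B below are placeholder "educated guesses" (the exact terms would require the full system-environment wavefunction), so this is a toy illustration of the scheme, not the BITLLES implementation:

```python
import numpy as np

# Minimal 1D sketch: split-operator propagation of a Bohmian conditional
# wavefunction under a complex effective potential U_eff = U + V + A + iB.
# A and B are placeholder guesses; all potentials are toy assumptions.
hbar, m = 1.0, 1.0
N, L_box = 512, 40.0
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dt = 0.01

def U_ext(x):            # external potential acting on the system alone
    return 0.5 * x**2

def V_coul(x, q_t):      # softened Coulomb term evaluated at trajectory q_t
    return 0.1 / np.sqrt((x - q_t)**2 + 1.0)

def A_term(x, q_t):      # placeholder kinetic-correlation guess
    return np.zeros_like(x)

def B_term(x, q_t):      # placeholder advective-correlation guess
    return np.zeros_like(x)

def step(psi, q_t):
    """One split-operator step of the (non-unitary) conditional equation."""
    Ueff = U_ext(x) + V_coul(x, q_t) + A_term(x, q_t) + 1j * B_term(x, q_t)
    psi = np.exp(-1j * Ueff * dt / (2 * hbar)) * psi            # half potential kick
    psi = np.fft.ifft(np.exp(-1j * hbar * k**2 * dt / (2 * m)) * np.fft.fft(psi))
    psi = np.exp(-1j * Ueff * dt / (2 * hbar)) * psi            # half potential kick
    norm = np.sqrt(np.sum(np.abs(psi)**2) * dx)                 # renormalize, since
    return psi / norm                                           # evolution is non-unitary

psi = np.exp(-(x - 2.0)**2).astype(complex)    # initial Gaussian conditional state
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
for n in range(100):
    q_t = 5.0 * np.cos(0.1 * n * dt)           # a given environment trajectory
    psi = step(psi, q_t)
```

The explicit renormalization after each step reflects the non-Hermitian character of the effective potential when B ≠ 0.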
In an electron device, the number of electrons contributing to the electrical current are mainly those in the active region of the device. This number fluctuates as electrons enter and leave the active region. This creation and destruction of electrons leads to an abrupt change in the degrees of freedom of the many-body wavefunction, which cannot be treated with a Schrödinger-like equation for ψ̃_{q_t}(x, t) with a fixed number of degrees of freedom. In the Bohmian conditional approach, this problem can be circumvented by decomposing the system conditional wavefunction ψ̃_{q_t}(x, t) into a set of conditional wavefunctions, one for each electron. More specifically, for each electron x_i, we define a single-particle conditional wavefunction ψ̃̃_{q_t}(x_i, X̄_i(t), t), where X̄_i(t) = {X_1(t), ..., X_{i−1}(t), X_{i+1}(t), ..., X_n(t)} are the Bohmian positions of all electrons in the active region except x_i, and the second tilde denotes the single-electron conditional decomposition that we have considered on top of the conditional decomposition of the system-environment wavefunction. The set of equations of motion of the resulting n(t) single-electron conditional wavefunctions inside the active region can be written, in analogy with Equation (16), as:

iħ ∂ψ̃̃_{q_t}(x_i, X̄_i(t), t)/∂t = [ −(ħ²/2m) ∇²_{x_i} + Ũ(x_i, X̄_i(t), q_t, t) ] ψ̃̃_{q_t}(x_i, X̄_i(t), t),  for i = 1, ..., n(t).
That is, the first conditional process is over the environment degrees of freedom and the second conditional process is over the rest of electrons within the (open) system. We remind here that, as shown in Figure 2b, the active region of an electron device (acting as the open system) is connected to the ammeter (acting as the measuring apparatus) by a macroscopic cable (representing the environment). The electrical current provided by the ammeter is then the relevant observable that we are interested in. Thus, the evaluation of the electrical current seems to require keeping track of all the degrees of freedom, i.e., of the system and the environment, which is of course a formidable computational task (see (d) in Table 1). At THz frequencies, however, the electrical current is not only the particle current but also the displacement current. It is well known that the total current, defined as the particle current plus the displacement current, is a divergence-less vector [21,22]. Consequently, the total current evaluated at the end of the active region is equal to the total current evaluated at the cables. So the variable of the environment associated with the total current, q_t ≡ I(t), can be equivalently computed at the borders of the open system. The reader is referred to Ref. [62] for a discussion of how I(t) can be defined in terms of Bohmian trajectories with the help of a quantum version of the Ramo-Shockley-Pellegrini theorem [63]. In particular, it can be shown that the total (particle plus displacement) current in a two-terminal device can be written as [63]:

I(t) = (e/L) Σ_{i=1}^{n(t)} v_{x_i}(X_i(t), X̄_i(t), t),    (20)

where L is the distance between the two (metallic) contacts, e is the electron charge, and v_{x_i}(X_i(t), X̄_i(t), t) is the Bohmian velocity of the i-th electron inside the active region. Let us note that I(t) is the electrical current given by the ammeter (although computed from the electrons inside the open system).
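Equation (20) translates into a one-line numerical rule: sum the Bohmian velocities of the electrons currently inside the active region and scale by e/L. A minimal sketch, with invented example velocities:

```python
import numpy as np

# Sketch of Eq. (20): total (particle + displacement) current of a
# two-terminal device from the Bohmian velocities of the n(t) electrons
# inside the active region.  The velocities below are made-up examples.
E_CHARGE = 1.602e-19      # electron charge [C]

def total_current(velocities, L):
    """I(t) = (e / L) * sum_i v_{x_i}; velocities in m/s, L in m."""
    return (E_CHARGE / L) * np.sum(velocities)

# Example: 5 electrons in a 40 nm active region moving at ~1e6 m/s
v = np.array([1.0e6, 0.9e6, 1.1e6, -0.5e6, 1.0e6])   # one reflected electron
I = total_current(v, L=40e-9)                         # a few microamperes
```

Because the sum runs only over electrons presently inside the active region, the current changes abruptly whenever an electron is injected into or leaves the device, which is the microscopic origin of the fluctuations discussed below.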
Since the cable has macroscopic dimensions, it can be shown that the measured current at the cables is just equal to the unmeasured current (taking into account only the simulation of electrons inside the active region) plus a source of (nearly white) noise which is only relevant at very high frequencies [62]. The basic argument is that the (non-simulated) electrons in the metallic cables have a very short screening time. In other words, the electric field generated by an electron in the cable decreases spatially very rapidly due to the presence of many other mobile charge carriers in the cable that screen it out. Thus, the contribution of this outer electron to the displacement current at the border of the active region is negligible [64]. Summarizing, for the computation of the current at THz frequencies, the degrees of freedom of the environment can be neglected without any appreciable deviation from the correct current value [62]. This introduces an enormous computational simplification, as shown in (e) of Table 1. That is, for the specific scenarios that we are interested in, the Bohmian conditional wavefunction approach has the same computational cost as the orthodox SSE approach (see Table 1). Yet, in contrast to the orthodox conditional states, which can be used only to evaluate the dynamics of quantum systems in the Markovian regime, the Bohmian conditional states provide direct information on the dynamics of both Markovian and non-Markovian systems.

Numerical Results
In this section we present numerical results obtained with the BITLLES simulator (see Section 4) that demonstrate the ability of the Bohmian conditional wavefunction approach to provide dynamical information for both Markovian and non-Markovian scenarios. We simulate a two-terminal electron device whose active region is a graphene sheet connected to the outside by two (ohmic) contacts. Graphene is a 2D material that has recently attracted a lot of attention because of its high electron mobility. It is a gapless material with linear energy bands, which differ from the parabolic energy bands of traditional semiconductors. In graphene, the conduction and valence bands coincide at an energy point known as the Dirac point. Thus, the dynamics of electrons is no longer governed by an (effective mass) Schrödinger equation but by the Dirac equation, allowing transport from the valence to the conduction band (and vice versa) through Klein tunneling. A Bohmian conditional bispinor (instead of a conditional scalar wavefunction) is used to describe electrons inside the device. The change from a wavefunction to a bispinor does not imply any conceptual difficulty but just a mere increment of the computational cost. More details can be found in Appendix C.
In particular, we want to simulate electron transport in graphene at very high frequencies (THz) taking into account the electromagnetic environment of the electron device. Typically, nanoscale devices are small enough to assume that, even at THz frequencies, the electric field is much more relevant than the magnetic field. Therefore, only the Gauss law (the first of Maxwell's equations) is enforced to be fulfilled in a self-consistent way (i.e., taking into account the actual charge distribution in the active region). However, the environment of nanoscale devices is commonly a metallic element of macroscopic dimensions. There, the magnetic and electric fields both become relevant, acting as active (detecting or emitting) THz antennas. For the typical electromagnetic modes propagating in the metals, the magnetic and electric fields are translated into the language of currents and voltages, and the whole antenna is modeled as part of an electric circuit. In this work, the graphene device interacts with an environment that is modeled by a resistor (R) and a capacitor (C) connected in series through ideal cables (see the schematic plots in Figure 5a-c).
The active region of the graphene device is simulated with the Bohmian conditional wavefunction approach explained in the previous section, while the RC circuit is simulated using a time-dependent finite-difference method. We consider the system plus environment to be in equilibrium. Specifically, the self-consistent procedure to get the current is as follows: an initial (at time t = 0) zero voltage is applied at the source (V_S(0) = 0) and drain (V_D(0) = 0) contacts of the graphene active region. At room temperature this situation yields a non-zero current from Equation (20) (i.e., I(0) ≠ 0) because of thermal noise. Such current I(0) enters the RC circuit and leads to a new voltage V_S(dt) ≠ 0 at the next time step dt (where dt represents the time step that defines the interaction between the RC circuit and the quantum device, which was set to dt = 0.5 fs). The new source V_S(dt) ≠ 0 and fixed drain V_D(dt) = 0 voltages now lead to a new value of the current I(dt) in Equation (20), which is different from zero not only because of thermal noise but also because there is now a net bias (V_D(dt) − V_S(dt) ≠ 0). This new current I(dt) is used (in the RC circuit) to get a new V_S(2dt), which is introduced back into the device to obtain I(2dt), and so on. Importantly, as the system and environment are in equilibrium, the expectation value of I(t) is zero at any time, i.e., ⟨I(t)⟩ = 0 ∀t.
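The self-consistent loop described above can be sketched as follows. Here the quantum device is replaced by a hypothetical stub (a linear resistance `R_dev` plus white thermal noise, both invented parameters) standing in for the Bohmian evaluation of Equation (20), and the series RC update is a simple explicit Euler rule; only the coupling scheme is the point:

```python
import numpy as np

# Toy sketch of the self-consistent device <-> RC-circuit coupling loop.
# The quantum device is a *hypothetical* stub (linear resistor + white
# thermal noise); R_dev and I_noise are assumed illustration values.
rng = np.random.default_rng(0)
dt = 0.5e-15                 # coupling time step (0.5 fs)
R, C = 187.0, 2.6e-17        # series environment resistor and capacitor
R_dev = 2.0e3                # assumed intrinsic device resistance [Ohm]
I_noise = 1.0e-6             # assumed thermal-noise amplitude [A]

def device_current(V_S, V_D):
    """Stub for the quantum simulation: returns I(t) given the contacts."""
    return (V_D - V_S) / R_dev + I_noise * rng.standard_normal()

V_S, V_D, Q = 0.0, 0.0, 0.0  # equilibrium start, uncharged capacitor
currents = []
for _ in range(4000):        # 2 ps of simulated time
    I = device_current(V_S, V_D)      # (i) device step at fixed contacts
    Q += I * dt                       # (ii) current charges the capacitor
    V_S = R * I + Q / C               # (iii) new source voltage fed back
    currents.append(I)

# In equilibrium the average current tends to zero: <I(t)> ~ 0
I_mean = np.mean(currents)
```

The negative feedback is visible in step (iii): a run of positive current charges the capacitor, raising V_S and hence opposing further positive current, exactly the attenuation mechanism discussed below for finite C.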
We consider three different environments (with different values of the capacitance). In Figure 5a we plot the total (particle plus displacement) electrical current at the end of the active region when R = 0 and C = ∞. The same information is shown in Figure 5b,c for two different values of the capacitance, C = 2.6 × 10^−17 F and C = 1.3 × 10^−17 F. In all cases the value of the resistance is R = 187 Ω, and we take the current I(t) to be positive when it flows from drain to source.
The effect of the RC circuit is, mainly, to attenuate the current fluctuations, which originate from thermal noise. This can be seen by comparing Figure 5a with Figure 5b,c. The smaller the capacitance, the smaller the current fluctuations. This can be explained as follows: when the net current is positive, the capacitor at the source starts to charge and so the voltage at the source increases, trying to counteract the initially positive current. Therefore, the smaller the capacitance, the faster the RC circuit reacts to a charge imbalance. In Figure 6 we plot the total (particle plus displacement) current-current correlations as a function of the observation time τ for the three scenarios in Figure 5. Correlations at very small observation times provide information on the variance of the current, which, as explained above, is reduced as the value of the capacitance is decreased. Numerical simulations (not shown here) show that the role of the resistor R is less evident because the active region itself has a much larger associated resistance (than R = 187 Ω). Numerically, the distinction between Markovian and non-Markovian dynamics boils down to the comparison of the time correlations defined in Equations (9) and (13). Since there is no net bias applied to the graphene device (i.e., it is in equilibrium), an ensemble average of the current (over an infinite set of trajectories like the one depicted in Figure 5) yields ⟨I(t)⟩ = 0 ∀t. Time correlation functions computed as in Equation (9) are thus zero by construction, i.e., ⟨I(t)⟩⟨I(t + τ)⟩ = 0 ∀t, τ. Therefore, the non-Markovian dynamics occurring at very high frequencies (below the ps time-scale, as Figure 6 expressly shows) fixes the correlation time of the environment at t_D ∼ ps. Although all three values of the capacitance C in Figure 6 yield the same order of magnitude for t_D ∼ ps, it also seems true that the smaller the value of the capacitance, the smaller t_D.
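For completeness, the current-current correlation ⟨I(t)I(t + τ)⟩ can be estimated from a simulated current record as follows, assuming stationarity so that a time average replaces the ensemble average; the synthetic exponentially correlated signal below is an assumption that merely mimics a finite environment correlation time t_D:

```python
import numpy as np

# Sketch: estimate the current-current correlation <I(t) I(t+tau)> from a
# single (stationary) current record by a time average over t.
def current_autocorrelation(I, max_lag):
    """C(tau) = <I(t) I(t+tau)> estimated by a sliding time average."""
    I = np.asarray(I, dtype=float)
    return np.array([np.mean(I[: len(I) - lag] * I[lag:])
                     for lag in range(max_lag)])

# Synthetic example: AR(1) noise with correlation time ~20 steps,
# standing in for a current record with finite t_D as in Figure 6.
rng = np.random.default_rng(1)
alpha = np.exp(-1.0 / 20.0)
I = np.empty(100_000)
I[0] = 0.0
for n in range(1, len(I)):
    I[n] = alpha * I[n - 1] + rng.standard_normal()

C = current_autocorrelation(I, max_lag=100)
# C[0] is the variance; C decays towards zero for tau >> t_D (Markovian limit)
```

The decay of C(τ) towards zero at large τ is exactly the Markovian limit discussed in the text, while the non-zero values at short τ are the signature of non-Markovian correlations.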
Current-current correlations shown in Figure 6 can be better understood by assessing the transit time of electrons. For a velocity of roughly 10^6 m/s inside an active region of length L = 40 nm, the transit time is roughly τ_T = L/v_x = 0.04 ps. Positive correlations correspond to transmitted electrons traveling from drain to source (as well as electrons traversing the device from source to drain). While transiting inside the active region during 0 < t < τ_T, such electrons always provide a positive (or negative) current, as seen in expression (20). In other words, if we have a positive current at time t because electrons are traveling from drain to source, we can also expect a positive current at times t' satisfying t < t' < t + τ_T. The negative correlations belong to electrons that are being reflected. They enter the active region with a positive (negative) velocity and, after some time τ_R inside the device, they are reflected and have negative (positive) velocities until they leave the device, after spending roughly 2τ_R in the active region. Thus, during the interval τ_R < t < 2τ_R, which will be different for each electron depending on the time at which it is reflected, we can expect negative correlations. Interestingly, during the 4 ps simulation the number of Bohmian trajectories reflected is double in the black (C = ∞) simulation compared with the red one (C = 1.3 × 10^−17 F). This can be explained in a similar way as the reduction of the current fluctuations. The fluctuations of the electrical current also imply fluctuations of the charge inside the active region, which are translated (through the Gauss law) into fluctuations of the potential profile. Thus, the noisier the current, the noisier the internal potential profile. This implies a larger probability of being reflected through the Klein tunneling phenomenon.
Therefore, if one aims at describing the dynamics of nanoscale devices with a time-resolution τ that is comparable to (or goes beyond) the electron transit time τ_T, a non-Markovian approach is necessary. This is so because the total current I(t) (which has contributions from the displacement and the particle currents) shows correlations at times that are smaller than the electron transit time.

Figure 6 (correlations for the three scenarios of Figure 5): the zero is indicated by a dashed line to show the tendency of the total current, understood as a property of the environment, to vanish at long times τ. Zero autocorrelation implies independence between I(t) and I(t + τ), which is typical of Markovian scenarios. This is not true for the short τ considered here, which are representative of the non-Markovian dynamics.

Conclusions and Final Remarks
Theoretical approaches to open quantum systems that rely on the manipulation of state vectors instead of a reduced density matrix have well-known computational advantages. Two major benefits are the substantial reduction of the dimensionality of the involved mathematical objects and the preservation of complete positivity [18]. However, substituting density matrices by state vectors also constitutes an attempt to achieve a more detailed description of the dynamics of open quantum systems [6,19]. It is well recognized, for example, that the continuous measurement of an open quantum system with associated Markovian dynamics can be described by means of a SSE (see Table 2, O4). The conditional state solution to such an equation over some time interval can be linked to a "quantum trajectory" [12,19] of one property of the environment. Thus, the conditional state can be interpreted as the state of the open system evolving while its environment is under continuous monitoring. This is true in general for Markovian systems, no matter whether or not the environment is actually being measured (i.e., it is valid for both Figure 1a,b). This fact is of great importance for designing and experimentally implementing feedback control in open quantum systems [35]. If this interpretation could also be applied to non-Markovian SSEs [33,37], then this would be very significant for quantum technologies, especially in condensed matter environments (e.g., electron devices), which are typically non-Markovian [6].

Table 2. Validity of Bohmian vs. orthodox conditional states to provide dynamic information on open quantum systems, depending on the relation between the environment decoherence time t_D and the observation period τ. Here (un)measured refers to unmeasured and measured indistinctly.

Unfortunately, for non-Markovian conditions, the above interpretation is only possible in the rather exotic scenario where the environment is being continuously monitored and the system is strongly coupled to it. As no correlation between the system and the environment can build up, the evolved system is kept in a pure state. This is the well-known quantum Zeno regime [65,66], under which conditional states can be trivially used to describe the frozen properties of the system (see Table 2, O1). Without the explicit consideration of the measurement process (as in Figure 1a), however, the postulates of the orthodox theory restrict the amount of dynamical information that can be extracted from state vectors (see Table 2, O2). Under the most general conditions, for τ > 0 and non-Markovian dynamics, while conditional states can be used to reconstruct the reduced density matrix, they cannot be used to evaluate time-correlations (see Table 2, O3) [20,23]. This is true not only when the environment is being measured (as in Figure 1b), but also when it is not measured (as in Figure 1a). Therefore, we turned to a nonorthodox approach: the Bohmian interpretation of quantum mechanics. The basic element of the Bohmian theory (as in other quantum theories without observers) is that the intrinsic properties of quantum systems do not depend on whether the system is being measured or not. Such an ontological change is, nevertheless, fully compatible with the predictions of orthodox quantum mechanics, because a measurement-independent reality of quantum objects is not in contradiction with non-local and contextual quantum phenomena. Yet, the ontological nature of the trajectories in Bohmian mechanics introduces the possibility of evaluating dynamic properties in terms of conditional wavefunctions for Markovian and non-Markovian dynamics, no matter whether the environment is actually being measured or not (see Table 2, B1-B4 and Figure 7a,b).
In summary, the Bohmian conditional states lend themselves as a rigorous theoretical tool to evaluate static and dynamic properties of open quantum systems in terms of state vectors, without the need of reconstructing a reduced density matrix. Formally, the price to be paid is that, for developing a SSE-like approach based on Bohmian mechanics, one needs to evaluate both the trajectories of the environment and of the system (see (d) in Table 1). Nonetheless, we have seen that this additional computational cost can be substantially reduced in practical situations. For THz electron devices (see Section 5), for example, we showed that by invoking current and charge conservation one can easily get rid of the evaluation of the environment trajectories. This substantially reduces the computational cost associated with the Bohmian conditional wavefunction approach (as shown in (e) of Table 1). Let us also notice that here we have always assumed that the positions of the environment are the variables on which the states of the system are conditioned. However, it can be shown that a mathematical equivalent of the SSEs with state vectors conditioned on other "beables" of the environment (different from the positions) is also possible. It requires using a generalized modal interpretation of quantum phenomena, instead of the Bohmian theory. A review of the modal interpretation can be found in [67,68]. As an example of the practical utility of the Bohmian conditional states, we have introduced a time-dependent quantum Monte Carlo algorithm, called BITLLES, to describe electron transport in open quantum systems. We have simulated a graphene electron device coupled to an RC circuit and computed its current-current correlations up to the THz regime, where non-Markovian effects are relevant. The resulting simulation technique allows one to describe not only a device's DC and AC characteristics but also its noise and fluctuations.
Therefore, BITLLES extends to the quantum regime the computational capabilities that the Monte Carlo solution of the Boltzmann transport equation has been offering for decades for semi-classical devices.

•
The "proper" mixture is simply a mixture of different pure states of a closed system. We define such pure states as |ψ_q⟩ with q = 1, ..., N. We know that each of these states represents the closed system in one of the repeated experiments, but we ignore which state corresponds to each experiment. We only know the probability P(q) that one experiment is represented by the pure state |ψ_q⟩. Then, if we are interested in computing some ensemble value of the system over all experiments, von Neumann introduced the mixture ρ = ∫ P(q) |ψ_q⟩⟨ψ_q| dq. It is important to notice that we are discussing here human ignorance (not quantum uncertainty). The system is always in a well-defined state (for all physical computations), but we (the humans) ignore what the state is in each experiment.
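A discrete numerical version of von Neumann's "proper" mixture is straightforward: weight the projectors onto the pure states by the ignorance probabilities P(q). The states and weights below are toy examples:

```python
import numpy as np

# Discrete sketch of the "proper" mixture rho = sum_q P(q) |psi_q><psi_q|.
# The pure states and the probabilities P(q) are arbitrary toy choices.
dim = 4
rng = np.random.default_rng(2)

def random_pure_state(d):
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    return v / np.linalg.norm(v)

P = np.array([0.5, 0.3, 0.2])                     # ignorance probabilities P(q)
states = [random_pure_state(dim) for _ in P]

rho = sum(p * np.outer(psi, psi.conj()) for p, psi in zip(P, states))

# rho is Hermitian with unit trace, and mixed: purity Tr[rho^2] < 1
purity = np.trace(rho @ rho).real
```

The purity being strictly below one quantifies the human ignorance discussed above: each experiment is in one definite pure state, yet the ensemble object is mixed.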

•
The "improper" mixture refers to the density matrix that results from a trace reduction of a pure state (or statistical operator) of a whole system that includes the system and the environment. The reduced density matrix of the system alone is given by tracing out the degrees of freedom of the environment, giving the result in Equation (7), which is mathematically (but not physically) equivalent to the result of the "proper" mixture constructed from our "ignorance" of which state represents the system. D'Espagnat claims that the ignorance interpretation of the "proper" mixture cannot be given to the "improper" mixture. D'Espagnat's argument is as follows. Let us assume a pure global system (including the open system and the environment) described by Equation (1). Then, if we accept that the physical state of the system is given by |ψ_q(t)⟩, we have to accept that the system-plus-environment is in the physical state |q⟩ ⊗ |ψ_q(t)⟩ with probability P(q). The ignorance interpretation would then erroneously conclude that the global system is in a mixed state, not in a pure state as assumed in Equation (1). The error is assuming that the system is in a well-defined state that we (the humans) merely ignore. This is simply not true. D'Espagnat's result shows that a conditional state cannot be a description of an open system carrying all the static and dynamic information that we can get from the open system.
In addition, let us notice that the conclusion of D'Espagnat applies to any open quantum system, without distinguishing between Markovian and non-Markovian scenarios. There is, however, no contradiction between D'Espagnat's conclusion and the attempt of the SSE to use pure states to describe the static and dynamic properties of Markovian open quantum systems. Both are right. D'Espagnat's discussion is a formal (fundamental) discussion about conditional states, while the discussion about Markovian scenarios is a practical discussion about a simplifying approximation when extracting information from the system at large τ.
Finally, let us notice that D'Espagnat's conclusions do not apply to Bohmian mechanics, because the Bohmian definition of a quantum system involves a wavefunction plus trajectories. The conditional state ψ̃_{Q_i(t)}(x, t) = Ψ(x, Q_i(t), t), together with the environment and system trajectories Q_i(t) and X_i(t), contains all the (static and dynamic) information of the open system in this i-th experiment. An ensemble over all experiments prepared with the same global wavefunction Ψ(x, q, t) requires an ensemble of different environment and system trajectories Q_i(t) and X_i(t) for i = 1, 2, ..., M with M → ∞.
where we have used the quantum equilibrium condition, i.e., that the environment trajectories Q_i(t) are distributed according to |Ψ(x, q, t)|² when M → ∞. Now, we multiply and divide by Ψ*(x, Q_i(t), t) to arrive at Equation (A5), where P_i = 1/M can be interpreted as the probability associated to each i = 1, 2, ..., M experiment and we have defined the (normalized) conditional states ψ̃_i(x, t) ∝ Ψ(x, Q_i(t), t). Once we arrive at Equation (A5), one can be tempted to define a type of Bohmian reduced density matrix in terms of the conditional wavefunctions for the i = 1, 2, ..., M experiments as:

ρ̃(x, x', t) = Σ_{i=1}^{M} P_i ψ̃_i(x, t) ψ̃*_i(x', t),    (A7)

where we have arbitrarily eliminated the role of the trajectories. However, strictly speaking, Equation (A1) is not equal to Equation (A7). If we include all i = 1, 2, ..., M experiments in the computation of (A7), there are trajectories Q_i(t) and Q_j(t) that at the particular time t are represented by the same conditional wavefunction ψ̃_i(x, t) = ψ̃_j(x, t) whenever Q_i(t) = Q_j(t). Such an over-summation due to the repetition of the same trajectories is not present in (A1).
To simplify the subsequent discussion, let us assume that q is one degree of freedom in a 1D space. Let us cut such 1D space into small intervals of length ∆q. Each interval is defined as j∆q < q < (j + 1)∆q and is labelled by the index j. Then, we can define G_j(t) as the number of positions Q_i(t) that are inside the j-th interval at time t:

G_j(t) = Σ_{i=1}^{M} 𝟙[ j∆q < Q_i(t) < (j + 1)∆q ],

where 𝟙[·] equals one when its argument is true and zero otherwise. With this definition, assuming that ∆q is so small that all Q_i(t) inside the interval, together with the corresponding Bohmian conditional wavefunctions Ψ(x, Q_i(t), t), system positions X_i(t), and probabilities P_i, are almost equivalent, and given by q_j, Ψ(x, q_j, t), x_j and P_j respectively, we can change the sum over i = 1, ..., M experiments into a sum over j = ..., −1, 0, 1, ... spatial intervals to rewrite Equation (A7) as:

ρ̃(x, x', t) = Σ_j N_j(t) Ψ(x, q_j, t) Ψ*(x', q_j, t),

where N_j(t) = G_j(t) P_j / [Ψ*(x_j, q_j, t) Ψ(x_j, q_j, t)]. So, finally, a proper normalization of the Bohmian conditional states allows us to arrive at Equation (A1) from Equation (A7). Such normalization has already been discussed in Equation (11) in the text. The moral of the mathematical developments of this appendix is that open systems are more naturally described in terms of a density matrix than in terms of conditional states when using the orthodox theory, while the contrary happens when using the Bohmian theory. Because of the additional variables of the Bohmian theory, the conditional states are a natural Bohmian tool to describe open systems.
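The bookkeeping of this appendix, i.e., sampling environment trajectories according to quantum equilibrium, counting occupations G_j, and properly normalizing the conditional slices, can be checked numerically on a toy two-degree-of-freedom Gaussian state. The grids and the entangled state below are arbitrary assumptions for illustration:

```python
import numpy as np

# Numerical check: reconstruct the reduced density matrix from an ensemble
# of normalized Bohmian conditional states, weighted by the occupations
# G_j of each environment value.  The toy global state is an assumption.
rng = np.random.default_rng(3)
Nx, Nq, M = 32, 200, 200_000
x = np.linspace(-4, 4, Nx); dx = x[1] - x[0]
q = np.linspace(-4, 4, Nq); dq = q[1] - q[0]
X, Qg = np.meshgrid(x, q, indexing="ij")
Psi = np.exp(-X**2 - (Qg - 0.5 * X)**2)                 # entangled toy state
Psi /= np.sqrt(np.sum(np.abs(Psi)**2) * dx * dq)        # global normalization

rho_exact = (Psi * dq) @ Psi.conj().T                   # rho(x,x') = int dq Psi Psi*

W = np.sum(np.abs(Psi)**2, axis=0) * dx                 # marginal of the environment
p = W * dq / np.sum(W * dq)                             # quantum equilibrium weights
j_samples = rng.choice(Nq, size=M, p=p)                 # M "experiments" Q_i(t)
G = np.bincount(j_samples, minlength=Nq)                # occupations G_j

rho_B = np.zeros((Nx, Nx))
for j in np.nonzero(G)[0]:
    psi_j = Psi[:, j] / np.sqrt(np.sum(np.abs(Psi[:, j])**2) * dx)  # normalized slice
    rho_B += (G[j] / M) * np.outer(psi_j, psi_j.conj())             # weight by G_j/M

# rho_B converges to rho_exact as M -> infinity
err = np.max(np.abs(rho_B - rho_exact))
```

The trace of the reconstructed matrix is exactly one by construction (each normalized slice contributes its occupation fraction), while the entrywise error shrinks with the number of sampled experiments, which is the statement that the properly normalized conditional-state sum reproduces Equation (A1).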