1. Introduction
Supersymmetry (SUSY) is a theory that aims to unify the strong, the weak and the electromagnetic interactions. Technically, SUSY assigns a partner (superpartner) to every particle, the spin of which differs by one-half from the spin of its supersymmetric counterpart. For an introduction to SUSY the reader may refer to the book [1]. At first sight, it is therefore somewhat surprising that the SUSY formalism has become very popular in the context of nonrelativistic Quantum Mechanics, a theory that does not take the spin of particles into account. The reason for SUSY's popularity lies in its computational aspects: while the theoretical framework of SUSY simplifies considerably, the practical assignment of a superpartner to a given system (represented by its Hamiltonian) becomes important. This is so because nonrelativistic Quantum Mechanics admits solvable models, the most famous of which are associated with potentials like the harmonic oscillator or the Coulomb potential. Now, to each such solvable model, the quantum-mechanical SUSY formalism generates a new solvable model, namely the superpartner. In other words, if we have a solution of a Schrödinger equation for a certain potential, then by means of SUSY we obtain a solution of a Schrödinger equation with a different potential (also called the superpartner of the initial potential). Quantum-mechanical superpartners are related to each other in many interesting ways, especially in the stationary case. As examples let us mention that superpartners share their energy spectra (isospectrality), and their Green's functions are interrelated by a very simple trace formula [2, 3]. Note that there are many exhaustive reviews on the SUSY formalism for the stationary Schrödinger equation; as examples let us mention [4, 5] and [6]. Furthermore, in the references of the latter reviews some recent applications of SUSY to the stationary case can be found. In the present work we will focus on the time-dependent situation; the corresponding SUSY formalism was introduced in [7]. As is well known, there are even fewer solvable cases of the time-dependent Schrödinger equation (TDSE) than of its stationary counterpart, such that SUSY is one of the very few methods to obtain explicit solutions. It should be pointed out that the mapping that relates solutions of SUSY superpartners to each other is known as the Darboux transformation. This transformation was introduced in a purely mathematical context [8], and only later was it found to be equivalent to the mapping that interrelates SUSY superpartners. Darboux transformations exist not only in the context of Schrödinger equations, but have been established for many linear and nonlinear equations [9]. Thus, the Darboux transformation can exist independently of SUSY, but within the quantum-mechanical SUSY framework, the Darboux transformation and the SUSY transformation (the mapping between superpartners) coincide. This is true not only for the TDSE, but for a generalized linear version of it, which we will focus on in the present review. For details on the Darboux transformation for generalized TDSEs consult [10]. Our generalized TDSE comprises all known linear special cases (as an example let us mention the TDSE for a position-dependent mass), and the SUSY formalism that we will derive here reduces correctly to each special case. Before turning to the generalized TDSE, in section 2 we give a brief review of the conventional SUSY formalism for the TDSE.
Section 3 is devoted to the generalized TDSE and the corresponding generalized SUSY formalism. Afterwards we derive a condition for the superpartner potentials to be real-valued (reality condition), as is required in the majority of physical applications, and we verify that our reality condition reduces correctly to the well-known case if our generalized TDSE coincides with a conventional TDSE (section 4). For selected particular cases of the generalized TDSE we then state the corresponding SUSY data (SUSY transformation, explicit form of the superpartners, reality condition) in section 5. We apply our generalized SUSY formalism to a concrete example in section 6 in order to illustrate how a superpartner of a given TDSE can be obtained. Section 7 is devoted to an extension of the SUSY transformation to equations different from the TDSE. In particular, we first introduce a concept to generalize the SUSY transformation, and then apply this concept to the Fokker-Planck equation and to the nonhomogeneous Burgers equation. For more details than are contained in this review, the reader may refer to the references given above (section 2), to our recent papers [10, 11] and references therein (sections 3, 4 and 5), and to the paper [12] (section 7).
2. Conventional SUSY formalism
In this section we give a brief review of the standard SUSY formalism, as it applies to the TDSE. For details and more information the reader may refer to [13] and references therein.
Preliminaries and the matrix TDSE. Let us start by considering two TDSEs in atomic units (
), that is,
where the symbol ∂ denotes the partial derivative, the functions
,
stand for the respective solutions and the Hamiltonians
,
are given by
for potentials
and
. Let us now write the TDSEs (
1) and (
2) in matrix form:
If we define a matrix Hamiltonian
H via
diag
, together with a matrix solution
, then our equation (
4) takes the following form:
The components of the vector will turn out to contain the solutions that belong to a supersymmetric pair of Hamiltonians.
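Written out explicitly, the matrix TDSE takes the following shape; the labels ψ1, ψ2, H1, H2, V1, V2 for the solutions, Hamiltonians and potentials of the two TDSEs (1) and (2) are placeholders chosen here for illustration, and we assume the frequently used convention in which each Hamiltonian reads Hj = -∂x² + Vj (some authors include a factor one-half in front of the second derivative):

```latex
i\,\partial_t \Psi = H\,\Psi, \qquad
H = \mathrm{diag}\,(H_1, H_2)
  = \begin{pmatrix} -\partial_x^2 + V_1 & 0 \\ 0 & -\partial_x^2 + V_2 \end{pmatrix},
\qquad
\Psi = \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix}.
```

Since the matrix Hamiltonian is diagonal, the two components of this equation decouple into the TDSEs (1) and (2), as stated above.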
The supercharges. As in the stationary case [
4], our goal is to construct a superalgebra with three generators, two of which are called supercharge operators or simply supercharges. These are mutually adjoint matrix operators of the following form
where
L and its adjoint
are linear operators, the purpose of which will be explained below. The supercharges act on two-component solutions of the matrix TDSE (
5), in particular we have for
that
It is immediate to see that the first component
of
has been taken into the second component, and the operator
L has been applied to it. The supercharge
adjoint to
Q reverses the above process (
7):
Next, let us understand the purpose of introducing the operators L and .
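In explicit matrix form (again with placeholder labels ψ1, ψ2 for the two components, chosen here for illustration), the supercharges (6) and their actions (7) and (8) can be summarized as:

```latex
Q = \begin{pmatrix} 0 & 0 \\ L & 0 \end{pmatrix}, \qquad
Q^\dagger = \begin{pmatrix} 0 & L^\dagger \\ 0 & 0 \end{pmatrix}, \qquad
Q \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix}
  = \begin{pmatrix} 0 \\ L\,\psi_1 \end{pmatrix}, \qquad
Q^\dagger \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix}
  = \begin{pmatrix} L^\dagger \psi_2 \\ 0 \end{pmatrix}.
```

This makes visible that Q moves the first component into the second slot while applying L, and Q† reverses the process by means of L†.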
The intertwining relation and its adjoint. Note that
is a solution of our matrix TDSE (
5), that is, its first and second component
and
solve the TDSEs (
1) and (
2), respectively. Now, we want this property to be preserved after application of the supercharges, i.e.,
and
are required to be solutions of the TDSEs (
2) and (
1), respectively. Consequently,
L must be an operator that converts solutions of the first TDSE (
1) into solutions of the second TDSE (
2), and its adjoint
must convert solutions of the second TDSE (
2) into solutions of the first TDSE (
1). Let us first consider the operator
L, which we will determine from the following equation:
This operator equation is called intertwining relation, as it intertwines the two TDSEs (
1), (
2) by means of the operator
L. This is why
L is often called an intertwiner. In order to understand how the intertwining relation works, let us assume that we apply both sides of it to a solution
of the first TDSE (
1). Consequently, the right hand side of (
9) vanishes, since we assumed
L to be linear, implying
. But if the right hand side of equation (
9) is zero, so must be its left hand side, which means that
is a solution of the second TDSE (
2). In order to find the operator
L from our intertwining relation (
9), assume it to be a linear, first-order differential operator of the form
where the coefficients
and
are to be determined. After inserting (
10) and the Hamiltonians (
3) into the intertwining relation (
9), we expand the latter and require the coefficients of the respective derivative operators to be the same on both sides. We do not give the calculation here, since it is a special case of a calculation that will be done in full detail for the generalized TDSE. After having evaluated our intertwining relation using (
10), we obtain the following results on the
and
that appear in (
10): we have
, that is,
does not depend on the spatial variable. Furthermore, the function
is given by
, where
u is a solution of the first TDSE (
1). If these two conditions are satisfied, then the operator
L as given in (
10) becomes
Clearly, application to the solution
of the first TDSE (
1) gives
in the form
If in addition
and
u are linearly independent, then
is a nontrivial solution of the second TDSE (
2), where the potential
is under the following constraint:
where the prime denotes the derivative with respect to
t, since
does not depend on
x. Let us point out here that in the notation
the index refers to a derivative of the logarithm and not only to a derivative of its argument. Now, let us make a remark on the solution
u of equation (
1) that appears in (
12). If we want to apply the operator
L to a solution
of our first TDSE (
1), we must provide this solution
Furthermore, we must provide another solution
u of the same TDSE (
1) in order to determine the operator
L. Therefore, the function
u is often referred to as auxiliary function or auxiliary solution of the TDSE (
1), and throughout the remainder of this review we will adopt that terminology. Let us further point out that in the stationary case the function
is often referred to as superpotential. Roughly speaking, this is due to the fact that the stationary Hamiltonians factorize as products of the operators
L and
. Since this is not true in the present, time-dependent case, we will not use the term superpotential here. Now, the characterization of
L as given in (
12) is complete and it remains to find its adjoint
, which can be done as follows: suppose that the differential operators
,
, are self-adjoint, and take the adjoint of our intertwining relation (
9):
This intertwining relation will be used to find
, which is therefore called intertwiner, just as its adjoint
L. If we assume
to be a linear, first-order differential operator, then we find after substitution into (
14) and evaluation that
Application to a solution
of the second TDSE (
2) gives then
is given by
where the function
is a solution of the second TDSE (
2) and
does not depend on the spatial variable. Then, if
and
u are linearly independent, the function
is a nontrivial solution of the first TDSE (
1), the potential
of which is constrained as
Hence, the characterization of is complete and we can continue with the construction of our superalgebra.
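As a concrete check of the transformation just characterized, the following SymPy sketch verifies the Darboux transformation (12) and the potential constraint (13) for a free-particle example. It assumes the convention i∂tψ = -ψxx + Vψ and takes the purely time-dependent function appearing in (11) to be constant, so that the constraint reduces to V2 = V1 - 2(log u)xx; both choices are assumptions made for illustration, not a statement of the general formulas.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
k, q = sp.symbols('k q', positive=True)

# Auxiliary solution u of the first TDSE  i u_t = -u_xx + V1 u  with V1 = 0:
u = sp.cosh(k*x) * sp.exp(sp.I * k**2 * t)
assert sp.simplify(sp.I*sp.diff(u, t) + sp.diff(u, x, 2)) == 0

# A second, linearly independent solution of the same TDSE (a plane wave):
psi = sp.exp(sp.I*q*x - sp.I*q**2*t)

# Darboux transformation: phi = psi_x - (u_x/u) psi, and partner potential
# V2 = V1 - 2 (log u)_xx  (here the reflectionless well -2 k^2 sech^2(kx)):
phi = sp.diff(psi, x) - sp.diff(u, x)/u * psi
V2 = sp.simplify(-2*sp.diff(sp.log(u), x, 2))

# Verify that phi solves the partner TDSE  i phi_t = -phi_xx + V2 phi:
residual = sp.I*sp.diff(phi, t) + sp.diff(phi, x, 2) - V2*phi
assert sp.simplify(residual) == 0
```

Note that the transformed solution phi is nontrivial precisely because psi and u are linearly independent, in agreement with the discussion above.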
Construction of the superalgebra. We will need to have another generator besides the supercharges, which we obtain as follows. Consider the operators
and
, defined by
We will now see that
is a symmetry operator of the first TDSE (
1). To this end, we evaluate the following commutator:
Now we substitute the intertwining relations (
9) and (
14) in the first and the second term of (
16), respectively. This gives
In a similar way one proves that
The vanishing commutators (
17) and (
18) imply that
and
are symmetry operators of the TDSEs (
1) and (
2), respectively. Consequently,
diag
is a symmetry operator of our matrix TDSE (
5). We are now ready to show that the supercharges
and the symmetry operator
S generate a superalgebra. To this end, we need to evaluate a couple of commutators and anticommutators [
13], denoted by [·,·] and {·,·}, respectively.
This follows from the fact that the supercharge matrices (
6) are nilpotent. Next, we have
In the same fashion one shows that
As in the stationary case [
4], our results (
19)-(
23) imply that the operators
and
S are the generators of the simplest superalgebra. If the solutions and potentials of our TDSEs (
1) and (
2) are related by means of (
12) and (
13), respectively, then the corresponding Hamiltonians
and
are called supersymmetric partners. This term is also applied to their respective potentials: one says that
and
are supersymmetric partners. The relation (
12) was first introduced in [
8] and is known as the Darboux transformation; the corresponding operator (
11) is called Darboux operator.
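The algebraic structure of the relations (19)-(23) can be illustrated with symbolic 2×2 matrices. In the sketch below, L and Ld are noncommutative placeholders standing for the intertwiner L and its adjoint; this is a purely symbolic illustration of the matrix algebra, not a representation of the differential operators themselves.

```python
import sympy as sp

# Noncommutative placeholders for the intertwiner L and its adjoint:
L, Ld = sp.symbols('L Ld', commutative=False)

Q  = sp.Matrix([[0, 0], [L, 0]])   # supercharge, cf. (6)
Qd = sp.Matrix([[0, Ld], [0, 0]])  # its adjoint

# Nilpotency: Q^2 = (Q^dagger)^2 = 0
assert (Q*Q).is_zero_matrix and (Qd*Qd).is_zero_matrix

# The anticommutator closes on the symmetry operator S = diag(L^dagger L, L L^dagger):
S = Q*Qd + Qd*Q
assert S == sp.Matrix([[Ld*L, 0], [0, L*Ld]])

# In this purely symbolic representation the commutator [S, Q] vanishes
# identically; for the differential operators this requires the
# intertwining relations (9) and (14):
assert (S*Q - Q*S).is_zero_matrix
```

This reproduces the statement that Q, its adjoint and S generate the simplest superalgebra.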
3. Generalized SUSY formalism
In this section we develop the SUSY formalism for a generalized form of the TDSE, where we summarize results from [
11]. In principle we follow the steps that were taken in the previous section, but this time the intertwining relation and its solution will be studied in detail.
The generalized TDSE. Let us consider the following equation, which we will call generalized TDSE:
Here
,
, denote arbitrary coefficient functions,
is the potential, and
stands for the solution. In order to set up the SUSY formalism for equation (
24), we first rewrite it in a different form. To this end, let us set
introducing arbitrary functions
,
and
. After insertion of the settings (
25) into TDSE (
24), the latter obtains the following form:
Note that the settings (
25) do not reduce the number of free parameters; the equation is just written in a different form without imposing any restriction.
Generalized matrix TDSE and supercharges. Now let us consider another generalized TDSE, which we will relate to its counterpart (
26):
where
is the potential and
stands for the solution. As in the previous section we will develop a generalized SUSY formalism that relates the TDSEs (
26) and (
27). The corresponding Hamiltonians we define as
Using these Hamiltonians, the two generalized TDSEs join as components of the following matrix equation:
This equation has exactly the same form as its conventional counterpart (
4). If we define a matrix Hamiltonian
H via
diag
, together with a matrix solution
, then our equation (
29) takes the following form:
Next, we define the supercharge operator and its adjoint as in the conventional case (
6):
for two operators
L and
that are to be determined. It is clear that these supercharges have the properties (
7) and (
8). The difference between the present and the conventional case lies in the form of the operators
L and
.
The intertwining relation for L. In order to determine
L, we require it to convert solutions of the first TDSE (
26) into solutions of the second TDSE (
27), and its adjoint
must convert solutions of the second TDSE (
27) into solutions of the first TDSE (
26). The intertwining relation involving the operator
L is given by (
9), where the Hamiltonians are taken from (
28):
At this point it is necessary to expand the intertwining relation in order to get conditions for the sought operator
L. Let us assume that
L is given in the form (
10), substitution of which in combination with (
28) renders (
32) in the form
We will now expand both sides of this intertwining relation and find the coefficients of the derivative operators. The intertwining relation can only be fulfilled if the coefficients of a derivative operator are the same on both sides, which gives conditions on the coefficients. Let us first evaluate the left hand side of the latter intertwining relation:
Next, we process the right hand side of (
33) in the same way:
Again, the intertwining relation (
33) can only hold if its two sides (
34) and (
35) are the same. It is easy to see that the terms associated with the derivatives
,
and
are already equal on both sides and therefore cancel in the intertwining relation. Since there are more terms in the coefficients that cancel in the same way, let us now recombine (
34) and (
35) after simplification, that is, without equal terms that appear on both sides.
As mentioned before, we now collect the coefficients of each derivative operator on both sides of the latter intertwining relation and require the coefficients to be the same.
Resolution of the intertwining relation. Since there are only three different derivative operators left in our intertwining relation (
36), namely,
,
and the multiplication (derivative of order zero), we obtain three equations. These equations have the following form:
We will now solve this system of equations with respect to the coefficients
and
in our operator
L, recall its form as given in (
10). Since we are dealing with three equations, we will need a third function as a variable, which we take to be the potential
of the TDSE (
27). In order to solve the above three equations, we start with (
37) and determine
:
where
is an arbitrary constant of integration. It remains to solve equations (
38) and (
39) by determining
and the potential
, which will be done by elimination of the potential difference. In order to do so, we first need to write equations (
38) and (
39) in a slightly different form:
Now we multiply the first and the second of these equations by
and
, respectively, such that the right hand sides of these equations become the same. Consequently, the left hand sides must also be the same, and we can equate them to each other. This results in the following equation:
We will now solve this equation with respect to
. To this end, we will introduce a new function
K defined by
. Before we substitute this function into (
41), we first evaluate the following expressions, which we will need in the substitution:
where we have used the explicit form (
40) of
. We use this explicit form and the derivatives (
42) in order to rewrite equation (
41), where we substitute
by
. After simplification we arrive at the following equation:
We see that our former equation (
41), which depends on
and
, has been converted to an equation that depends on
K only. Unfortunately, we cannot solve (
43), as it is an equation of Riccati type for
K, which is not integrable in a general case like ours [
14]. Still, for practical reasons it makes sense to linearize (
43) by means of the following setting:
introducing a new function
. Assuming that
u is twice continuously differentiable, implying
, we substitute (
44) in (
43) and get after simplification the following equation for the function
u:
Clearly, this equation holds if the expression in square brackets does not depend on
x. We integrate on both sides and multiply by
u:
where
is a purely time-dependent constant of integration. Equation (
46) is identical to the initial equation (
26) for
. However, setting
C to zero is not a restriction, since solutions to (
46) with
and
differ from each other only by a purely time-dependent factor, which cancels out in (
44). Thus, our equation (
46) can be taken in the form
The function
u is called auxiliary solution of the TDSE (
26), as it is needed for determining the function
that appears as a coefficient in the operator
L. Once a solution
u of (
47) is known, then the function
K can be found from (
44), which in turn determines the sought coefficient
by means of
. Taking into account the explicit form (
40) of
, we obtain
Thus, with the coefficients
and
we have determined the sought operator
L, as given in (
10), completely.
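The mechanism behind the linearizing substitution (44) is the classical logarithmic-derivative trick: for K = u_x/u the nonlinear combination K_x + K² collapses to u_xx/u, so a Riccati-type equation for K becomes a linear second-order equation for u. The following minimal SymPy check of this identity (written in a single variable for illustration) confirms the step:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)

# Logarithmic derivative K = u'/u linearizes the Riccati combination:
# K' + K^2 equals u''/u identically.
K = sp.diff(u, x) / u
identity = sp.diff(K, x) + K**2 - sp.diff(u, x, 2)/u
assert sp.simplify(identity) == 0
```

This is why, once an auxiliary solution u of (47) is available, the coefficient of the operator L follows directly from K.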
Potential difference and the operator L. Before we state the operator
L in its explicit form, let us find the potential
by solving (
38):
This can be specified more by inserting the explicit form of
and
, as given in (
40) and (
48), respectively. We obtain
The operator
L is given explicitly by
Finally, the solution
of the second TDSE (
27) can now be given using the latter form of
L:
Hence, if
is a solution of the first TDSE (
26), then
is a solution of the second TDSE (
27), provided the potential
is related to the potential
via (
50). Let us briefly verify that our expressions for the operator
L and the potential
reduce correctly to their conventional counterparts that are given in (
12) and (
13), respectively. To this end, we observe that in the conventional case we have
. On substituting this setting into (
52), we recover immediately the correct expression (
12), if we take
. As for the potential
, plugging
into its explicit form (
50), we obtain
which coincides with the desired expression (
13) for
.
The adjoint operator . The next task is to find the operator
in the same way as it was just done for
L. The intertwining relation to be used is given by the adjoint of (
32):
where the Hamiltonians are of generalized form (
28) and we assumed that the operators
,
, are self-adjoint. The calculation scheme for finding
is the same as for
L and consists in expanding the two sides of the intertwining relation, collecting the respective coefficients of the derivative operators, and requiring them to be the same on both sides of the intertwining relation. Afterwards, the resulting conditions for
have to be resolved. Since the calculations for finding
are similar to, and just as tedious as, those for its counterpart
L, we do not present the whole scheme in detailed form. Instead, we state the result, which is the explicit form of the operator
:
Let
and
u be linearly independent solutions of the second TDSE (
27), then the function
, given by
is a solution of the first TDSE (
26), provided the potential
is given by
This completes the characterization of the operator .
Construction of the superalgebra. Since the operators
L and
in the generalized case are now determined, at the same time the supercharges
Q and
, as given in (
31), are determined. As in the coventional case we construct the superalgebra by adding one more generator besides the supercharges, which will be constructed from the following operators
and
:
This is the same definition as for the conventional case (
15). We observe that
and
are symmetry operators for the TDSEs (
26) and (
27), respectively, such that
diag
provides a symmetry operator for the matrix TDSE (
30). This can be proved exactly as in the conventional case, see the calculations (
16)-(
18). Furthermore, the results (
19)-(
23) transfer to the present, generalized case without change of notation:
This implies that
Q,
and
S generate the simplest superalgebra. As in the conventional case, if the solutions and potentials of our TDSEs (
26) and (
27) are related by means of (
52) and (
50), respectively, then the corresponding Hamiltonians
and
, as given in (
28), are called supersymmetric partners (the same can be said about the potentials
and
). The operator (
51) is called generalized Darboux operator, and its application (
52) is called generalized Darboux transformation [
10].
4. Reality condition
Throughout this section we continue summarizing results from [
11]. In general, the potential
and its supersymmetric partner
, as given in (
50), are allowed to be complex-valued. In fact, even if one of the potentials is real, its supersymmetric partner can still turn out to be non-real. This is sometimes not desirable, as in many applications one is interested in real-valued potentials only. In this section we review a condition on the potential
in the second TDSE (
27) to be real-valued, provided both the potential
of the first TDSE (
26), and the parameters
f,
h are real. This condition is called reality condition, and it is usually fulfilled by choosing the arbitrary function
N in (
50) accordingly. Since
N does not depend on the spatial variable, the reality condition is not guaranteed to have a solution. As a byproduct of our reality condition, we obtain the corresponding condition for the conventional case after setting
in the final result. Now, before we start considering the reality condition, we first rewrite the function
N in a form that will prove convenient for our purposes. Observe that
N can be complex, so let us first find its real and imaginary parts. Write
N in polar form as
where the real-valued functions
and
denote the absolute value and the argument of
N, respectively. We obtain
Here the prime stands for the derivative; note that
and
depend on
t only. We are now ready to extract the imaginary part of the potential
, as given in (
50). After substitution of (
55) we obtain the following result for the imaginary part of
:
If this expression is zero, then the potential
must be real. After regrouping terms, requiring (
56) to be zero, and solving with respect to the logarithm containing
we arrive at the following condition:
It becomes clear that this equation does not necessarily have a solution for
, since its right hand side can depend on both
x and
t, while the left hand side depends only on
t. Furthermore, we do not have any free parameters left except for
. Let us rewrite our condition (
57):
We will now express the imaginary part on the right hand side of the latter equation in a standard way:
note that the asterisk denotes complex conjugation. We incorporate the latter change into our condition (
58) and continue its simplification:
This is the condition for the potential (
50) of the second TDSE (
27) to be real-valued. It has a solution for
if the right hand side does not depend on the spatial variable. If this is so, then we can solve (
59) for the function
, giving
Let us point out that this is in general not a solution, since the right hand side of (
60) can depend on
x, while the left hand side cannot. Now let us assume the reality condition (
59) to be fulfilled; we will then determine the corresponding form of the potential
. Substitution of (
59) or (
60) into the potential as given in (
50), gives its real part:
As desired, this expression contains only real-valued terms. Finally, let us verify how the reality condition (
59) and the potential of the second TDSE (
27) reduce in the conventional case. There we have
, which we plug into our reality condition (
59):
This allows for a solution if the right hand side does not depend on
x, that is, if
which coincides with well known results [
13]. Next, we insert
into the potential
, the explicit form of which is given in (
50):
This is precisely the known form of the partner potential for the conventional TDSE [
13], if we set the arbitrary phase
to zero.
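As an illustration of the reality condition in the conventional case, the following SymPy sketch (an assumed example, using the convention i∂tψ = -ψxx + Vψ) takes an auxiliary solution whose modulus carries all the x-dependence and whose phase is purely time-dependent; the resulting partner potential is then manifestly real. For contrast, an x-dependent phase is shown to produce a complex partner potential.

```python
import sympy as sp

x, t, k = sp.symbols('x t k', real=True)

# Auxiliary solution of the free TDSE i u_t = -u_xx: real modulus,
# x-independent phase, so the reality condition is satisfied trivially.
u = sp.cosh(k*x) * sp.exp(sp.I * k**2 * t)
assert sp.simplify(sp.I*sp.diff(u, t) + sp.diff(u, x, 2)) == 0

# Conventional partner potential V2 = -2 (log u)_xx (V1 = 0):
V2 = sp.simplify(-2*sp.diff(sp.log(u), x, 2))
assert sp.simplify(sp.im(V2)) == 0   # real-valued partner potential

# If instead the phase of the auxiliary function depends on x,
# the naive partner potential acquires an imaginary part:
u_bad = sp.cosh(k*x) * sp.exp(sp.I * k * x**2)
V2_bad = sp.simplify(-2*sp.diff(sp.log(u_bad), x, 2))
assert sp.simplify(sp.im(V2_bad)) != 0
```

This matches the qualitative statement above: reality of the superpartner hinges on how the phase of the auxiliary solution depends on the spatial variable.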