A Regularised Total Least Squares Approach for 1D Inverse Scattering

We study the inverse scattering problem for a Schrödinger operator related to a static wave operator with variable velocity, using the GLM (Gelfand-Levitan-Marchenko) integral equation. We assume that the scattering data are contaminated by noise, and we derive a stability estimate for the error of the solution of the GLM integral equation by showing the invertibility of the GLM operator between suitable function spaces. To regularise the problem, we formulate a variational total least squares problem, and we show that, under certain regularity assumptions, the optimisation problem admits minimisers. Finally, we numerically compute the regularised solution of the GLM equation using the total least squares method in a discrete setting.


Introduction
In many scientific, medical and industrial problems, one has to retrieve unknown coefficients of a governing differential equation (PDE) from (partial) measurements of its solution. This way, properties of materials can be studied in a medium that we do not have direct physical access to. In geophysics, for example, a well-known problem is estimating the elastic parameters of the subsurface from surface measurements. The governing PDE is a wave equation, and the measurements consist of a trace of its solution on the boundary of the domain. See, for example, [1] for an overview.
In particular, we focus on the inverse problem for the 1D static wave/Helmholtz equation
$$-\frac{\mathrm{d}}{\mathrm{d}y}\left(c^2(y)\,\frac{\mathrm{d}}{\mathrm{d}y}v(k,y)\right) = k^2\, v(k,y), \quad y \in \mathbb{R}, \qquad (1)$$
with $v = v_i + v_s$ and (asymptotic) boundary conditions
$$\lim_{y\to\pm\infty} \frac{\mathrm{d}v_s(k,y)}{\mathrm{d}y} \pm i k\, v_s(k,y) = 0.$$
We let $v_i(k,y) = e^{-iky}$, which corresponds to an incoming plane wave from the left. The measurements at $y = 0$ are recorded for $t \in [0, T]$. The goal is now to retrieve $c$ from these measurements.
Various methods for solving the inverse coefficient problem for the wave equation have been developed. A well-known method is full waveform inversion, which poses the inverse problem as a PDE-constrained optimisation problem [2,3]. Other variational formulations for the inverse problem have been proposed as well; see, for example, [4,5]. We refer to such methods as indirect, as they are based on an implicit non-linear relation between data and coefficients that needs to be solved iteratively. On the other hand, the inverse problem can be solved using a direct method. Here, an explicit formula leads to the exact solution of the inverse problem (for noiseless data). A classical direct method is given by the Gelfand-Levitan-Marchenko (GLM) integral equation [6][7][8]. This method has its roots in inverse scattering theory and has recently attracted renewed attention [9][10][11][12]. In [13], for example, a GLM-like approach for wavefield redatuming was proposed. Here, boundary measurements are used to estimate the wavefield in the entire domain. Subsequently, such a wavefield can be used to estimate medium parameters by solving the Lippmann-Schwinger integral equation [14].
One advantage of the indirect (variational) methods over the direct ones is that they can better handle situations in which the data are noisy. In direct methods, noise in the data is likely to be amplified. To counter such instabilities, one typically adds regularisation. In particular, the GLM approach with noisy data calls for a total least squares (TLS) approach, which was studied numerically in [15]. Other regularised approaches for similar integral equations in seismic imaging are discussed in [16].
In this paper, we revisit the classical GLM approach for the 1D inverse medium problem and consider in particular the infinite dimensional case with noisy measurements. To regularise the problem, we formulate a regularised (TLS) approach for it. To solve the resulting variational problem, we use an alternating iterative method [17]. Some numerical examples complete the paper.
Our main contributions are as follows: • We extend the stability estimates that can be found in [18] to the classical GLM integral equation. • We show that the variational TLS formulation of the GLM method admits minimisers.
This paper is organised as follows. In Section 2, we state the forward scattering problem and review some classical results from scattering theory. We also review basic properties of the GLM integral operator. In Section 3, we present our new findings, namely a stability estimate for the GLM inversion given access to noisy scattering data. We then study the variational total least squares problem of reconstructing the solution of the inverse problem from noisy scattering data, show its well-posedness, and discuss its analytical limitations. In Section 4, we describe the numerical implementation of the proposed total least squares regularisation method. In Section 5, we present a number of numerical examples and conclude the paper with a discussion.

Preliminaries
This section summarises mostly known and well-established results in 1D scattering theory. In Section 2.1, we use the travel time coordinate transform to derive the equivalence of the static wave equation with variable velocity to the Schrödinger equation and we formulate the forward scattering problem. In Section 2.2, we repeat the classical procedure of using the Jost solutions to construct the solution of the forward scattering problem. In Section 2.3, we briefly discuss the derivation of the GLM integral equation and we review some basic properties of the GLM integral operator.

Formulation of the Forward Problem
It has been well established (see, for example, [7,8]) that the inverse problem for the 1D static wave/Helmholtz equation may equivalently be stated in terms of the 1D Schrödinger equation with boundary conditions, where $u(k,\cdot) = u_i(k,\cdot) + u_s(k,\cdot)$ (6) with $u_i(k,x) = e^{-ikx}$, $x \in \mathbb{R}$. The quantities $v$, $u$, $q$, and $\eta$ are related via the travel time coordinate transform (see Appendix A). We assume that the velocity $c > 0$ is bounded and an element of $C^1(\mathbb{R})$, that $c'$ is bounded and has compact support, and that $c'' \in L^2(\mathbb{R})$ is a bounded function. Therefore, $q$ is bounded and compactly supported, since $\eta''$ is bounded and compactly supported. We refer to Appendix A for a discussion of the $L^2$-validity of the calculations that lead from the static wave equation to the Schrödinger equation. Furthermore, we note that we seek the solution of the differential equation in $H^2_{\mathrm{loc}}(\mathbb{R})$, since the scattering potential is in general discontinuous, and thus we cannot necessarily obtain a solution of $C^2$-regularity.
A key result that we will need later on is the absence of bound states.

Theorem 1. Let $S = -\frac{\mathrm{d}^2}{\mathrm{d}x^2} + q$ be the Schrödinger operator, where $q$ is given by relation (8). Then the discrete spectrum of $S$ is empty.
We include the proof in Appendix A. To show Theorem 1, we use the positivity of the static wave operator and its equivalence to the Schrödinger operator. We use this positivity argument thanks to a conversation with, and a hint from, Vassilis Papanicolaou [19]. The absence of bound states for this particular Schrödinger operator can also be derived using physical arguments; see [8].

Classical Results from Scattering Theory
It is well known that the Schrödinger differential equation can be reduced to Schrödinger integral equations associated with the asymptotic conditions at $\pm\infty$.
Such Volterra-type integral equations can be derived using variation of constants, and we refer to [20] for a discussion of the existence and uniqueness of their solutions. The functions $f_\pm(\pm k, \cdot)$ are called the Jost solutions, and the solution of the forward scattering problem can be decomposed as a sum of these functions, where the functions $T$, $R$ are called the transmission and reflection coefficients, respectively. As functions of the wavenumber $k$, they satisfy
$$T(k) = 1 + \frac{1}{2ik} \int_{\mathbb{R}} u(k,y)\, q(y)\, e^{iky}\, \mathrm{d}y, \quad k \in \mathbb{R} \setminus \{0\},$$
and the conservation of energy
$$|T(k)|^2 + |R(k)|^2 = 1.$$
The scattering theory for the Schrödinger equation is a classical mathematical subject that dates back to the 1960s. We refer, for example, to [20][21][22] and the references therein for an introduction and an extensive analysis of the quantum scattering problem.

The Inverse Scattering Problem and the Gelfand-Levitan-Marchenko Inversion Method
The inverse scattering problem is now to retrieve the scattering potential, q, from the reflection coefficient R. The GLM integral equation is the key for solving this inverse scattering problem. In this section, we review the classical inverse scattering problem of the determination of the scattering potential from scattering data, using the GLM integral equation. In Section 2.3.2, we study the integral operator defined by the GLM equation in order to derive properties that will help us to construct an inequality for the error of the solution of the GLM equation as we show in Section 3.1.

Derivation of the GLM Equation
The key ingredient for deriving the GLM integral equation is the scattering identity (12). For fixed $x$, the map $\mathbb{C}_{\mathrm{Im}>0} \ni k \mapsto f_+(k,x)e^{-ikx} - 1$ is an element of the Hardy class $H^2_+$. Using the Paley-Wiener theorem, we obtain that $f_+(\cdot, x)$ admits a representation in terms of a kernel $B(x,\cdot) \in L^2(0,\infty)$. Taking the Fourier transform of relation (12) gives the classical GLM integral equation. For more details on the application of the Paley-Wiener theorem to the Jost function $f_+(\cdot,\cdot)$, we refer to [20]. Below, we state the GLM integral equation. For a detailed proof, we refer again to [20], which gives a very detailed exposition of the quantum scattering problem on the line using analytical methods.

Theorem 2. Let $x \in \mathbb{R}$. Then the function $B(x,\cdot)$ satisfies the GLM integral equation (17), where the scattering data $K = K_c + K_d$ are determined by the reflection coefficient and the discrete spectrum: $(-\rho_n)_{n=1}^N$ are the eigenvalues of the Schrödinger operator $S = -\frac{\mathrm{d}^2}{\mathrm{d}x^2} + q$, and $p_n = \|f_n\|_{L^2(\mathbb{R})}^{-1}$, with $f_n$ the eigenfunctions corresponding to these eigenvalues.

Remark 1.
Due to the absence of bound states in our setting, we do not have to consider K d and will denote the measured scattering data by K, with the understanding that it is directly related to R through a Fourier transform.
The inversion procedure is as follows. First, the scattering data $K$ are collected by measuring the response at the boundary (i.e., at $x = 0$). Then, the GLM integral Equation (17) is solved. After the GLM kernel is recovered, the scattering potential $q$ can be found using relation (16).
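For orientation, the classical formulas implementing these three steps have the following shape (a sketch following the classical theory in [20]; signs, constants, and the argument convention vary between references, and the paper's numbered relations (16) and (17) are not reproduced verbatim here):

```latex
% (i) data kernel from the reflection coefficient
K(t) = \frac{1}{2\pi} \int_{\mathbb{R}} R(k)\, e^{ikt}\, \mathrm{d}k ,
% (ii) GLM integral equation for the kernel B(x, \cdot)
B(x,t) + K(x+t) + \int_{0}^{\infty} B(x,s)\, K(x+t+s)\, \mathrm{d}s = 0,
  \qquad t > 0 ,
% (iii) potential from the kernel (constant depends on the convention)
q(x) = -\,2\,\frac{\mathrm{d}}{\mathrm{d}x} B(x,0) .
```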

Analysing the GLM Operator
In this subsection, we study the GLM integral operator and review some of its properties. In particular, we use these properties in Section 3.1 to derive an upper bound for the error of the noisy inverse problem.

Since we assume that the scattering potential $q$ is compactly supported, we obtain that, for every fixed $x \in \mathbb{R}$, the solution $B(x,\cdot)$ of the GLM equation is compactly supported. This is justified by a decay estimate for $B(x,t)$, for $x \in \mathbb{R}$ and $t > 0$; see [20]. Consequently, the domain of integration in the GLM integral equation can be reduced to a finite interval. For a fixed potential $q$, we assume that the interval of integration is $(0, T_x)$, where $T_x$ depends on the fixed value of $x \in \mathbb{R}$. In addition, since we are interested in reconstructing $B(\cdot,\cdot)$ for the values of $x$ where the scattering potential is supported, it is reasonable to consider the following. Since the set $\{T_x : x \in \mathrm{supp}(q)\}$ is bounded from above, we denote by $T$ its supremum. We assume without loss of generality that $(0, T) \supset \mathrm{supp}(q)$, and we define the space $Y$ accordingly. We also define the set $B(Y) = \{L : Y \to Y \text{ bounded and linear}\}$. Additionally, for fixed $x \in (0, T)$ and $f \in Y$, we define the operator $A_x$. Since we fix $K \in L^2(\mathbb{R})$, we suppress the dependence on $K$ in the notation. By $\chi_\omega(\cdot)$ we denote the characteristic function, which is $1$ on $\omega$ and $0$ on $\mathbb{R} \setminus \omega$. Since the reflection coefficient $R \in L^2(\mathbb{R})$, and thus $K \in L^2(\mathbb{R})$ (see [20]), we find the following.
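The definitions elided in the extraction above presumably take the following form (an assumption, consistent with the finite-interval reduction just described and with the discrete version in Section 4):

```latex
Y = L^2(0,T), \qquad
B(Y) = \{\, L : Y \to Y \ \text{bounded and linear} \,\},
\\[4pt]
(A_x f)(t) = \int_0^T K(x + t + s)\, f(s)\, \mathrm{d}s,
  \qquad f \in Y, \ t \in (0,T).
```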
Lemma 1. The operator $A_x \in B(Y)$ is compact and self-adjoint.

Lemma 2.
The numbers λ = ±1 are not eigenvalues of A x .
Considering the previous lemmas, the following result follows.

Proposition 1.
The operator I + A x ∈ B(Y ) is invertible and its inverse is given by the Neumann series expansion in B(Y ).

Main Results
So far, we have summarised mainly known results for the scattering problem of our study. In the following subsection, we provide the reader with a new result regarding the stability of the reconstruction of the GLM kernel from noisy scattering data. In Section 3.2, we show the existence of minimisers for the variational total least squares regularisation of the GLM inversion.

Stability Estimates
Assuming now that there is an error $\varepsilon \in L^2(\mathbb{R})$ in the measurements of the scattering data $K$ (due to noise, measurement errors, etc.), we are then dealing with the following perturbed problem,
where we let $B_x(\cdot) = B(x,\cdot)$ for ease of notation. We then want to bound $\|\tilde B_x - B_x\|_{L^2(\mathbb{R})}$ in terms of $\|\varepsilon\|_{L^2(\mathbb{R})}$. A similar upper bound for the error for a similar GLM equation is given in [18], but not in $L^2(\mathbb{R})$. In addition, we refer to [23] for a discussion of a stability estimate for the Marchenko inversion in which the bounds are on the scattering potential. In the application of recovering the scattering potential from scattering data, a pointwise estimate for the error is sufficient in view of relation (16). We denote the perturbed data and the corresponding perturbed operator by $\tilde K$ and $\tilde A_x$, respectively. Assuming further that the error $\varepsilon$ is real valued, we obtain, as before, that $\tilde A_x$ is a compact and self-adjoint operator. We then find the following result.

Lemma 3.
Let the previous assumptions hold. Then the operator induced by the error $\varepsilon$ is bounded in operator norm by a constant (depending on $T$) times $\|\varepsilon\|_{L^2(\mathbb{R})}$.

Proof. Let $f \in Y$; the bound follows from the Cauchy-Schwarz inequality. □

With this, we are ready to present the error bound.

Theorem 3.
Under the previous assumptions, we obtain the following error bound.

Proof. We subtract (26) from (17) and apply Lemma 3 together with Proposition 1. □

The previous stability estimate gives an upper bound for the error of the solution of the GLM equation, proportional to the $L^2$-norm of the error $\varepsilon$ in the measurements. However, we cannot rule out the case where the operator norm of $A_x$ is close to $1$. In general, the operator norm of $A_x$ is determined by $K$; which potentials produce scattering data that push the operator norm close to $1$ remains to be investigated.
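Schematically, the argument behind the estimate runs as follows (a sketch with illustrative constants; the precise inequality is the one stated in the theorem):

```latex
% Subtract the exact and perturbed GLM equations:
(I + \tilde A_x)(\tilde B_x - B_x)
  = -\,\chi_{(0,T)}\,\varepsilon(x + \cdot) \;-\; (\tilde A_x - A_x)\, B_x ,
% bound the operator perturbation by Cauchy--Schwarz (cf. Lemma 3),
\| \tilde A_x - A_x \| \;\le\; \sqrt{T}\, \|\varepsilon\|_{L^2(\mathbb{R})},
% and invert I + \tilde A_x (Proposition 1) to obtain
\|\tilde B_x - B_x\|_{L^2}
  \;\le\; \big\| (I + \tilde A_x)^{-1} \big\|
      \left( 1 + \sqrt{T}\, \| B_x \|_{L^2} \right) \|\varepsilon\|_{L^2(\mathbb{R})} .
```

The bound deteriorates as the operator norm of $\tilde A_x$ approaches $1$, which is exactly the caveat noted above.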

Variational Regularisation
In this section, we define and show well-posedness for the variational total least squares regularisation problem of determining the kernel $B$ from inexact scattering data. Similar work on this subject was done in the finite-dimensional setting in [15], where discrete scattering data were considered and a data-analytic approach was followed to study the total least squares problem for regularising the GLM equation. In our approach, we fill the theoretical gap by showing well-posedness of the total least squares regularisation of the GLM inversion in the infinite-dimensional setting. Now, for a set $\Omega \subset \mathbb{R}^N$, $N = 1, 2$, and a generic function space $\Psi(\Omega) = \{f : \Omega \to \mathbb{C}\}$, we define the extension operator as the map that extends a function by $0$ outside $\Omega$. (This is a bounded operator if, for example, $\Psi(\Omega) = L^2(0,T)$.) Then, we consider the usual Lebesgue space with its inner product, and we define the operator $\Theta$.

Lemma 4. $\Theta$ is a bounded linear operator.
Proof. Linearity is easy to show. For boundedness, let $f \in L^2((0,T)^2)$; the estimate follows by a direct computation. □

Remark 2. We need $\Theta$ in order to have a well-defined convolution-type relation in $G$. Compare this with the setting of the previous section: by using $\Theta$, we avoid the use of the space $Y$ altogether.

Now, for a function $g \in L^2(\mathbb{R})$, we define, for almost all $x \in (0,T)$, the operator $S$.

Lemma 5. $S : L^2(\mathbb{R}) \to L^2((0,T), L^2(\mathbb{R}))$ is linear and bounded.
Now, let $K \in L^2(\mathbb{R})$ be given, and let $\alpha, \beta > 0$. We define the total least squares functional $\phi_{\tilde K}$. We assume that we have access to inexact scattering data $\tilde K \in L^2(\mathbb{R})$. We now define the solution of the inverse problem of finding the kernel $B$ (in a region where the potential is supported) from $\tilde K$.
Definition 1. Let $U$, $V$ be bounded, convex, and closed in the respective topologies, and let $(\tilde B, \tilde e)$ minimise $\phi_{\tilde K}$ over $U \times V$. Then, we call $\tilde B$ a regularised total least squares solution of the GLM integral equation.

Remark 3.
Ideally, we would like to find a perturbation $e$ that almost cancels out the noise $\varepsilon$ contained in $\tilde K$.
We state some auxiliary results needed for showing well-posedness of the variational inverse problem (47).

Lemma 6. Let $K \in L^2(\mathbb{R})$, and let $(f_n)_n$ and $(e_n)_n$ be strongly convergent sequences with $e_n \to e$ in $L^2(0, 3T)$. (49)

Proof. For almost all $x \in (0,T)$, $t \in \mathbb{R}$, we consider the difference of the corresponding terms. Using the triangle inequality and working similarly to Proposition 2, we obtain the claimed convergence; since $\|e_n\|_{L^2(\mathbb{R})}$ is bounded as $n \to \infty$, we conclude the result. □

Now, using the above auxiliary results, we find the following well-posedness result.
we can find a minimising sequence $(f_n, e_n) \subset U \times V$. Since $U$, $V$ are bounded in their respective spaces, these two sequences are bounded. By reflexivity (see [24], pages 67-68), there exist weak limits $f$, $e$ such that, passing to subsequences with the same indexing, $f_n \rightharpoonup f$ and $e_n \rightharpoonup e$ in $\sigma(H^1(0,3T), H^{-1}(0,3T))$.
Since $U$, $V$ are strongly closed and convex subsets of reflexive spaces, they are also weakly closed (see [24], page 60); therefore, $f \in U$ and $e \in V$. By the compact embeddings $H^1((0,T)^2) \hookrightarrow\hookrightarrow L^2((0,T)^2)$ and $H^1_0(0,3T) \hookrightarrow\hookrightarrow L^2(0,3T)$, we can conclude the strong convergences $f_n \to f$ in $L^2((0,T)^2)$ and $e_n \to e$ in $L^2(0, 3T)$.
Using the above lemmas, we obtain the existence of a minimiser; see [25].

Remark 5.
Regarding the choice of $H = H^1((0,T)^2)$ and of $H^1_0(0, 3T)$ as the space of perturbations: we choose these particular spaces for the GLM kernels (in the total least squares sense) and the perturbations because of the compact embedding properties above. Otherwise (working, for example, with $L^2((0,T)^2)$ and $L^2(0,3T)$), we cannot pass to further strongly convergent subsequences in the proof of existence of minimisers of our variational inverse problem and conclude the existence result. Similar work on this subject was done in [26]. However, the assumptions made in that paper are too strong for our application. In particular, the authors considered the existence of minimisers for a general class of total least squares problems. Assuming that the inverse problem is described by a bilinear operator that maps weakly convergent sequences to strongly convergent ones, they show existence. However, this weak-to-strong continuity of the forward operator is a very strong assumption, and in general it does not hold. To see this, it suffices to pick a weakly convergent sequence of the form $(B_n, 0)_n \subset L^2((0,\tau)^2) \times L^2(\mathbb{R})$ and observe that $(G(B_n, 0))_n$ is not necessarily norm convergent. Keep also in mind that $G$ is not bilinear. In addition, the convolution-type relation between $K$ and $B$ must be studied carefully with respect to convolution and weak convergence.
To sum up our approach, we pick the spaces of interest so that we have a compact embedding property. This way, we do not need to make any assumptions on G.

Remark 6.
Regarding the reasonability of the $H^1$-regularity assumption for the GLM kernels: even though a GLM kernel naturally has $L^2$-regularity, at least in the box of interest, we know that it satisfies a Goursat-type hyperbolic PDE (see [20]). So either we study the regularity of solutions of this PDE, or we view our existence-of-minimisers result as a relaxed version of the problem of seeking kernels (and perturbations) with $L^2$-regularity.

Remark 7.
Finally, note that the optimisation problem above may admit multiple minimisers, since the TLS functional $\phi_{\tilde K}$ is not convex.

Numerical Implementation
In this section, we show the discrete form of the GLM equation and its numerical implementation. We also implement numerically the total least squares regularisation method of the GLM equation, using noisy scattering data.

Discretisation of the GLM Equation
We discretise the quantities $K$ and $B$ on a regular grid of samples $t_i = i \cdot \Delta t$. We denote the discrete scattering data by $\mathbf{k} \in \mathbb{R}^n$ and the discrete GLM kernel by $\mathbf{B} \in \mathbb{R}^{(m+1)\times(m+1)}$. The discrete counterpart of the GLM equation is then given by
$$B_{ij} + k_{i+j} + \Delta t \sum_{s=0}^{m} B_{is}\, k_{i+j+s} = 0, \quad i, j = 0, 1, \ldots, m.$$
We assume that $n = 3m + 1$ so that these relations are well defined. The discrete GLM equation can be expressed more compactly using the maps $G : \mathbb{R}^{(m+1)\times(m+1)} \times \mathbb{R}^n \to \mathbb{R}^{(m+1)^2}$ and $S : \mathbb{R}^n \to \mathbb{R}^{(m+1)^2}$ as $G(\mathbf{B}, \mathbf{k}) = -S(\mathbf{k})$. For fixed $\mathbf{k}$, this system of equations decouples into $m + 1$ independent systems of equations, one for each $i = 0, \ldots, m$. For fixed $\mathbf{B}$, the system of equations is linear in $\mathbf{k}$.
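The decoupled solve for fixed scattering data can be sketched as follows (a minimal sketch; the index convention $n = 3m + 1$ and the rectangle-rule weight $\Delta t$ are assumptions consistent with the discrete equation above):

```python
import numpy as np


def solve_glm(k, m, dt):
    """Solve the discretised GLM equation row by row.

    k  : array of length 3*m + 1, samples of the scattering data.
    Row i of the returned (m+1) x (m+1) matrix B solves
        B[i, j] + k[i + j] + dt * sum_s B[i, s] * k[i + j + s] = 0.
    """
    B = np.zeros((m + 1, m + 1))
    idx = np.arange(m + 1)
    # Index grids for the kernel matrix M[j, s] = k[i + j + s]
    j, s = np.meshgrid(idx, idx, indexing="ij")
    for i in range(m + 1):
        A = np.eye(m + 1) + dt * k[i + j + s]  # system matrix I + dt*M
        rhs = -k[i + idx]                      # right-hand side -k_{i+j}
        B[i] = np.linalg.solve(A, rhs)
    return B
```

Each row is a dense $(m+1)\times(m+1)$ system; a direct solve is used here for clarity, whereas the paper's experiments use LSQR on the regularised versions.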

Numerical Regularisation
Having discretised the GLM equation, we can now define the numerical regularisation strategies. The Tikhonov-regularised problem (LS) reads as a penalised least-squares problem, where $L$ represents a finite-difference approximation of the second derivative. Due to the special form of the equations for fixed $\mathbf{k}$, this problem separates into $m + 1$ independent least-squares problems for the columns of $\mathbf{B}$. These problems can be readily solved using standard iterative methods, such as LSQR.
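A per-row Tikhonov solve can be sketched as follows (a sketch via the normal equations rather than LSQR, which is what the paper actually uses; the second-difference matrix `L` and the row-wise splitting follow the discrete equation above):

```python
import numpy as np


def second_difference(m):
    """Finite-difference approximation of the second derivative (interior rows)."""
    L = np.zeros((m - 1, m + 1))
    for r in range(m - 1):
        L[r, r:r + 3] = [1.0, -2.0, 1.0]
    return L


def solve_glm_tikhonov(k, m, dt, alpha):
    """Row-wise Tikhonov-regularised GLM solve:
    minimise ||(I + dt*M_i) b + k_i||^2 + alpha * ||L b||^2 for each row i.
    """
    L = second_difference(m)
    LtL = L.T @ L
    B = np.zeros((m + 1, m + 1))
    idx = np.arange(m + 1)
    j, s = np.meshgrid(idx, idx, indexing="ij")
    for i in range(m + 1):
        A = np.eye(m + 1) + dt * k[i + j + s]
        rhs = -k[i + idx]
        # Normal equations of the penalised least-squares problem
        B[i] = np.linalg.solve(A.T @ A + alpha * LtL, A.T @ rhs)
    return B
```

For large grids one would replace the normal-equations solve by damped LSQR on the stacked system, as described in the text.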
The total least-squares (TLS) functional in the discrete setting is given by a data-misfit term in $(\mathbf{B}, \mathbf{k} + \mathbf{e})$ plus the two penalty terms weighted by $\alpha$ and $\beta$. To find a minimiser, we apply an alternating minimisation algorithm, as proposed in [17]: for $k = 0, 1, \ldots$, we alternately minimise over $\mathbf{B}$ with $\mathbf{e}$ fixed and over $\mathbf{e}$ with $\mathbf{B}$ fixed.
As explained in the previous section, both steps involve a quadratic problem that can easily be solved using an iterative method like LSQR. The convergence of this alternating approach is guaranteed by the bi-convex nature of the discrete TLS functional [17].
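The alternating scheme can be sketched as follows (a sketch with direct solves in place of LSQR; the discrete functional, index convention, and $\Delta t$ weight are the assumptions stated for the discretisation above):

```python
import numpy as np


def tls_alternating(k, m, dt, alpha, beta, n_iter=10):
    """Alternating minimisation for the discrete TLS functional
        phi(B, e) = ||G(B, k+e) + S(k+e)||^2 + alpha*||L B||^2 + beta*||e||^2.
    B-step: row-wise Tikhonov solve with data k + e fixed.
    e-step: the residual is affine in e, so a linear least-squares solve.
    """
    n = 3 * m + 1
    e = np.zeros(n)
    idx = np.arange(m + 1)
    jj, ss = np.meshgrid(idx, idx, indexing="ij")
    L = np.diff(np.eye(m + 1), n=2, axis=0)  # second-difference matrix
    LtL = L.T @ L
    B = np.zeros((m + 1, m + 1))
    for _ in range(n_iter):
        kd = k + e
        # --- B-step: Tikhonov-regularised solve per row ---
        for i in range(m + 1):
            A = np.eye(m + 1) + dt * kd[i + jj + ss]
            B[i] = np.linalg.solve(A.T @ A + alpha * LtL, A.T @ (-kd[i + idx]))
        # --- e-step: residual(e) = r0 + J e, assemble r0 and the Jacobian J ---
        J = np.zeros(((m + 1) ** 2, n))
        r0 = np.zeros((m + 1) ** 2)
        for i in range(m + 1):
            for j in range(m + 1):
                row = i * (m + 1) + j
                r0[row] = B[i, j] + k[i + j] + dt * B[i] @ k[i + j + idx]
                J[row, i + j] += 1.0
                J[row, i + j + idx] += dt * B[i]
        e = np.linalg.solve(J.T @ J + beta * np.eye(n), -J.T @ r0)
    return B, e
```

Both steps are quadratic in the active variable, which is exactly the bi-convex structure exploited by the convergence theory of [17].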
Having solved either of the regularised problems for $\mathbf{B}$, and in view of relation (16), we can compute the scattering potential from the reconstructed kernel by extracting the first column of $\mathbf{B}$ and using a finite-difference approximation to compute the derivative.
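A sketch of this last step follows; relation (16) is not reproduced in this extraction, so the classical Marchenko form $q(x) = -2\,\mathrm{d}B(x,0)/\mathrm{d}x$ is assumed, with the convention-dependent constant exposed as a parameter:

```python
import numpy as np


def potential_from_kernel(B, dt, scale=-2.0):
    """Recover q on the grid from the first column of B.

    Assumes relation (16) has the classical Marchenko form
    q(x) = -2 * d/dx B(x, 0); `scale` holds the convention-dependent
    constant, which may differ in the paper.
    """
    b0 = B[:, 0]                        # samples of B(x_i, 0)
    return scale * np.gradient(b0, dt)  # central differences in x
```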

Choice of Regularisation Parameters
Both regularised formulations (LS and TLS) include regularisation parameter(s) that need to be estimated. In particular, for the TLS method, we need to estimate two parameters, $\alpha$ and $\beta$. In practice, though, we expect that $\beta$ does not play a significant role, as the problem for $\mathbf{e}$ is overdetermined: $G(\mathbf{B}, \mathbf{k}+\mathbf{e}) = -S(\mathbf{k}+\mathbf{e})$ defines $(m+1)^2$ equations in $3m+1$ unknowns. Moreover, the corresponding system matrix consists of an identity matrix plus a small perturbation (cf. (69)), so the system is unlikely to be ill-posed. Thus, we pick a (small) reference value $\bar\beta$ for $\beta$ and focus on estimating the remaining parameter, $\alpha$.
Ideally, we would pick $\alpha$ to minimise the reconstruction error (74), where $\mathbf{B}$ is the unregularised solution corresponding to noiseless data $\mathbf{k}$ (i.e., $G(\mathbf{B}, \mathbf{k}) = -S(\mathbf{k})$), and $\mathbf{B}_\alpha$ denotes the optimal solution of either the LS or the TLS method corresponding to noisy data, $\tilde{\mathbf{k}} = \mathbf{k} + \boldsymbol\varepsilon$. For the sake of completeness, we mention below a number of commonly used methods for choosing regularisation parameters and how these could potentially be applied to the problem of estimating $\alpha$. A posteriori parameter selection methods aim to achieve this by using only knowledge of the data and the noise level. A well-known method in this class is the discrepancy principle. The particular nature of our problem (involving a product of $\mathbf{B}$ and $\mathbf{k}$) makes it difficult to apply such rules, however, as they would require an estimate of the residual at the optimal solution. To see why, note that the residual for (LS) is given by $\|G(\mathbf{B}, \boldsymbol\varepsilon) + S(\boldsymbol\varepsilon)\|^2$. The discrepancy principle then finds an $\alpha$ matching this level, but this would require knowledge of the true kernel. For the total least squares approach, we could instead use the estimated error $\mathbf{e}_\alpha$ and find $\alpha$ such that the residual matches its size. Heuristic methods such as the L-curve method could also be applied. However, it is not clear how well they would perform on problems of this nature; even for classical ill-posed linear inverse problems, such heuristic methods are not convergent [27]. Despite this theoretical shortcoming, such methods are often applied in practice with success [28].
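The oracle rule (74) amounts to a grid search over candidate values of alpha (a sketch; the `solve` callable stands in for either the LS or the TLS solver, and the reference kernel comes from noiseless data):

```python
import numpy as np


def pick_alpha(alphas, solve, B_ref):
    """Oracle choice of alpha (cf. (74)): minimise ||B_alpha - B_ref||_F.

    alphas : iterable of candidate regularisation parameters.
    solve  : callable alpha -> reconstructed kernel B_alpha (from noisy data).
    B_ref  : reference kernel computed from noiseless data.
    Used only for a best-case comparison, not as a practical selection rule.
    """
    errs = [np.linalg.norm(solve(a) - B_ref) for a in alphas]
    return alphas[int(np.argmin(errs))], errs
```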

Numerical Examples
In this subsection, we present a couple of numerical examples comparing the regularised approaches (LS and TLS) outlined above. The least-squares (sub-) problems are solved using LSQR. Unless stated otherwise, we use 10 iterations of the alternating method and 10 iterations of LSQR for the subproblems. The scattering potential is obtained by numerically differentiating the reconstructed kernel, as in (16).
We find that the TLS method is not sensitive to the choice of $\beta$ (as argued in the previous section). We therefore use a fixed value of $\beta = 1 \cdot 10^{-16}$ for all experiments. The remaining regularisation parameter $\alpha$ for each method (LS and TLS) is obtained via (74). Although this requires knowledge of the noiseless data in order to compute $\mathbf{B}$, it allows us to make a fair best-case comparison between the methods.
The reconstruction quality of the methods is measured by the relative L 2 -error between the reconstructed kernel and the reference solution B.
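The noise model and error metric used in the experiments can be sketched as follows (a minimal sketch; the Frobenius norm is used as the discrete counterpart of the relative $L^2$-error):

```python
import numpy as np


def add_noise(k, sigma, rng):
    """Add i.i.d. zero-mean Gaussian noise with variance sigma^2 to the data."""
    return k + sigma * rng.standard_normal(k.shape)


def rel_error(B_rec, B_ref):
    """Relative L2 (Frobenius) error between reconstruction and reference."""
    return np.linalg.norm(B_rec - B_ref) / np.linalg.norm(B_ref)
```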

Example 1: The Plasma Wave-Equation with a Smooth Potential
In this first numerical experiment, we consider the case where the scattering data are generated directly by the plasma wave equation. The measured scattering data and the scattering potential are shown in Figure 1. We apply the methods described in the previous subsections; that is, we solve the GLM integral equation to find the GLM kernel and then the scattering potential. Figure 2 shows the solution of the GLM equation (matrix $\mathbf{B}$) and the comparison between the true and the recovered potential. For such a smooth potential, the generated scattering data lead to a good reconstruction. As discussed previously, the presence of noise in the scattering data is expected to affect the reconstruction of both the GLM kernel and the potential. To test this, we add i.i.d. normally distributed random noise to the data with mean zero and variance $\sigma^2$. Reconstructions using the unregularised, LS, and TLS approaches for $\sigma = 1 \cdot 10^{-3}$ are shown in Figure 3. The results for various noise levels are summarised in Table 1. In all cases, the TLS approach is superior and requires less regularisation (a smaller value of $\alpha$).

Conclusions and Discussion
We revisited some classical results from inverse scattering to solve the 1D inverse coefficient problem for the wave equation. In particular, we considered the GLM method with noisy data and proposed a regularised total least squares formulation in the infinite dimensional setting. We contributed an error bound for the unregularised GLM approach and have shown existence of minimisers for the variational formulation of the TLS approach. Numerical results illustrate the approach, showing that the TLS approach gives superior results as compared to conventional Tikhonov regularisation.
Results from inverse scattering, in particular GLM-like approaches, have recently received renewed attention in the geophysical literature. Noisy data are a significant source of error in these methods, and various discrete regularisation schemes have been proposed to address this issue. While these methods have been shown to work well in practice, a careful analysis of the infinite-dimensional problem had not been carried out. We believe it is important to study this problem because it yields new insight into the behaviour of practical approaches as they are pushed to include higher-frequency data (and hence finer discretisations). Ultimately, these insights may lead to adaptive methods. Moreover, the 1D problem analysed here serves as a model problem for many practical problems in 2D and 3D, and the insights may inspire new approaches there as well.

Funding: This work was supported by the Utrecht Consortium for Subsurface Imaging (UCSI).

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Acknowledgments: Author 1 (Andreas Tataris) would like to thank Vassilis G. Papanicolaou for the discussion and suggestions.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Absence of Bound States Auxiliary Results
In this part of the paper, we include some auxiliary results needed for the proof of Theorem 1. We also provide the proof of Theorem 1 in the end of the Appendix A.
Lemma A1. Assume that $c, c'$ are bounded, $c \in C^1(\mathbb{R})$, $c > 0$. Then the travel time coordinate transform $\mathbb{R} \ni y \mapsto x \in \mathbb{R}$, with $c(y)\,\mathrm{d}x(y) = \mathrm{d}y$, $y \in \mathbb{R}$, (A1) is well defined.

Proof. Since the new variable $x$ is defined as a function of the independent variable $y$ and the integrand in (A2) is strictly positive, we immediately get that $x$ is injective. In addition, $x'$ and $x''$ are bounded functions since $c$ and $c'$ are bounded. Since $x$ is a continuous function and the integrand is continuously differentiable, we get that $x \in C^2(\mathbb{R})$. Now, let us study the inverse of $x$. By the inverse function theorem, the inverse $x^{-1}$ exists and is an element of $C^1(\mathbb{R})$, with derivative bounded by $M$, the upper bound of $c$; its second derivative is also bounded, since $c'$ is assumed to be bounded.
Therefore, since c is bounded from above and from below, is also bounded from above and below.
We can view $c(y)$ as a function of $x$: since our differential relation is $c(y)\,\mathrm{d}x = \mathrm{d}y$, we can write $y$ as a function of $x$. So, similarly to [9], we have established the change of variables, and we can also write $\eta(x) = (c(y(x)))^{1/2}$, $x \in \mathbb{R}$. (A16)

Corollary A2. The inverse travel time coordinate transform is a 2-diffeomorphism.
Proof. We have shown that $x(\cdot)$ and its inverse are continuous, and that their first and second derivatives are continuous and bounded. What is left to show is that the derivative of the inverse is bounded from below. Since $\eta(x)^2 = c(y(x))$ and $c(\cdot)$ is bounded from below, we obtain the result.
Remark A1. In view of the differential relations (A10) and (A11), we can express the one variable as a function of the other.
The 2-diffeomorphisms given by the travel time coordinate transform and its inverse define bounded linear operators from $H^2(\mathbb{R})$ to $H^2(\mathbb{R})$; see [29]. The static wave operator has domain of definition $H$. We will now show that the relations that lead from the 1D static wave equation to the Schrödinger equation are well defined in the $L^2$-sense. First, using the 2-diffeomorphism defined by $y(\cdot)$, we find the following. According to [24] (Proposition 9.6), for $v \in H \subset H^1(\mathbb{R})$, we get the $L^2(\mathbb{R}, \mathrm{d}x)$-relation
$$\frac{\mathrm{d}v(y(x))}{\mathrm{d}x} = \frac{\mathrm{d}v(y(x))}{\mathrm{d}y}\,\frac{\mathrm{d}y(x)}{\mathrm{d}x} = \frac{\mathrm{d}v(y(x))}{\mathrm{d}y}\, c(y(x)), \quad x \in \mathbb{R}. \quad \text{(A20)}$$
Now, the function $1/c(y(x))$, $x \in \mathbb{R}$, defines a multiplication operator $T$. Assuming also that $v \in H$, we obtain the $L^2(\mathbb{R}, \mathrm{d}y)$-relation defined by the static wave operator, with $\lambda \in \mathbb{C}$. Using the inverse travel time coordinate transform, we get the corresponding relation with $\Phi g = g \circ y$, $g \in L^2(\mathbb{R}, \mathrm{d}y)$. Now, observe that $c^2 v_y \in H^1(\mathbb{R}, \mathrm{d}y)$, so the chain rule applies again. Combining the above relations, we find that the transformed wave equation has the form
$$-\left(c(y(x))\, v_x\right)_x \frac{1}{c(y(x))} = \lambda\, v(y(x)), \quad x \in \mathbb{R}, \quad \text{(A27)}$$
in the $L^2(\mathbb{R}, \mathrm{d}x)$-sense. Again, using the multiplication operator $T$, we obtain the corresponding operator form. Previously, we defined $u$ via $v(y(x)) = \eta^{-1}(x)\, u(x)$, $x \in \mathbb{R}$. Let us view this as an $L^2(\mathbb{R}, \mathrm{d}x)$-relation for the moment. It follows that if $v \in H$, then $u \in H^2(\mathbb{R})$ (otherwise we obtain a contradiction by Corollary A3). Since by assumption $c \in C^1(\mathbb{R})$ and $y \in C^1(\mathbb{R})$, we have $\eta \in C^1(\mathbb{R})$, and the relation holds pointwise, since $H^2(\mathbb{R}) \subset C^1(\mathbb{R})$. Since $\eta^{-1}$ and $\eta_x \eta^{-1}$ are bounded, the relation is valid in the $L^2(\mathbb{R}, \mathrm{d}x)$-sense. Again, using a multiplication operator, we obtain the $L^2(\mathbb{R}, \mathrm{d}x)$-relation for $(\eta^2(x)\, v_x(y(x)))$, which we can weakly differentiate once more (using density arguments); the result (A34) is again valid in $L^2(\mathbb{R}, \mathrm{d}x)$.
We substitute (A34) into (A29) and, using once more a multiplication operator, we find the $L^2(\mathbb{R}, \mathrm{d}x)$-relation, which is our Schrödinger equation after rearranging. Now, for the sake of completeness, we prove the assertions used above.
Since Theorem 1 is not standard, we provide its proof.
Proof of Theorem 1. Let $\lambda = -k^2 < 0$ be an eigenvalue of $S$ and $u \in H^2(\mathbb{R})$ an associated eigenfunction, so that $Su = \lambda u$. We defined previously the static wave operator $W$ with domain $H = \{v \in H^1(\mathbb{R}) : c^2 v_y \in H^1(\mathbb{R})\}$. We showed above (Lemma A2) that the function $v$ defined by $\mathbb{R} \ni x \mapsto v(y(x)) = \eta^{-1}(x)u(x)$, or equivalently $y \mapsto v(y) = \eta^{-1}(x(y))u(x(y))$, belongs to $H$. Now, we want to show that if $u$ solves the Schrödinger equation, then $v$ solves the static wave equation. If $v$ did not solve the wave equation, then, since the calculations that lead from the static wave equation to the Schrödinger equation are well defined in the $L^2$-sense, $u$ would not solve the Schrödinger equation, which is a contradiction. We next show that $W$ is a positive operator; indeed, for all $v \in H$, this follows by integration by parts. Considering that $H^1(-R, R)$ is continuously embedded in $H^1(\mathbb{R})$ for all $R > 0$, and since the product rule of differentiation is still valid in $H^1(-R, R)$ for all $R > 0$ (see [24]), we obtain the above result. In addition, note that $c^2 v_y, v \in H^1(\mathbb{R})$ and that $H^1(\mathbb{R}) = H^1_0(\mathbb{R})$ is an algebra. Furthermore, the eigenvalues of a positive operator are positive; therefore, $\lambda = -k^2$ cannot be an eigenvalue of $W$. This contradicts the initial assumption that $\lambda$ is an eigenvalue of $S$.