Article

A Note on the Appearance of the Simplest Antilinear ODE in Several Physical Contexts

by Dmitry Ponomarev 1,2
1 Institute of Analysis & Scientific Computing, Vienna University of Technology (TU Wien), Wiedner Hauptstrasse 8-10, 1040 Wien, Austria
2 St. Petersburg Department of Steklov Mathematical Institute, Fontanka 27, 191023 St. Petersburg, Russia
AppliedMath 2022, 2(3), 433-445; https://doi.org/10.3390/appliedmath2030024
Submission received: 27 May 2022 / Revised: 20 June 2022 / Accepted: 20 June 2022 / Published: 19 July 2022
(This article belongs to the Special Issue Feature Papers in AppliedMath)

Abstract: We review several one-dimensional problems such as those involving the linear Schrödinger equation, the variable-coefficient Helmholtz equation, the Zakharov–Shabat system and the Kubelka–Munk equations. We show that they can all be reduced to solving one simple antilinear ordinary differential equation $u'(x) = f(x)\overline{u(x)}$ or its nonhomogeneous version $u'(x) = f(x)\overline{u(x)} + g(x)$, $x \in (0, x_0)$, $x_0 \in \mathbb{R}_+$. We point out some of the advantages of the proposed reformulation and call for further investigation of the obtained ODE.

1. Introduction

Many physical phenomena can be directly described by, or reduced to, systems of differential equations having certain structural properties. Restricting ourselves here to linear one-dimensional settings, we are concerned with a system of two first-order ODEs whose matrix is antidiagonal with complex-conjugate elements. Namely, given $x_0 \in \mathbb{R}_+$ and a complex-valued function $f(x)$, we consider the equation

$$U'(x) = \begin{pmatrix} 0 & f(x) \\ \overline{f(x)} & 0 \end{pmatrix} U(x), \quad x \in (0, x_0), \qquad (1)$$

as well as its nonhomogeneous analog

$$U'(x) = \begin{pmatrix} 0 & f(x) \\ \overline{f(x)} & 0 \end{pmatrix} U(x) + G(x), \quad x \in (0, x_0), \qquad (2)$$

where $U(x) \equiv (u_1(x), u_2(x))^T \in \mathbb{C}^2$ is an unknown solution-vector, $G(x) \equiv (g_1(x), g_2(x))^T \in \mathbb{C}^2$ is a given vector-function, and each of Equations (1) and (2) is supplemented by the initial condition $U(0) = U_0 \in \mathbb{C}^2$.
Here and onwards, we employ the notation · ¯ to denote complex conjugation.
Similarly to Hamiltonian, Dirac and more general canonical systems (see, e.g., [1]), Equations (1) and (2) constitute an important class of dynamical systems for two reasons. On the one hand, as we shall further see, formulations of several important problems are reducible to either (1) or (2). On the other hand, these systems are close to being exactly solvable in the following sense. Let us focus on (1) and consider the more general system
$$U'(x) = \begin{pmatrix} p(x) & r(x) \\ s(x) & q(x) \end{pmatrix} U(x), \quad x \in (0, x_0). \qquad (3)$$
We note that the diagonal elements in the matrix of (3) can be removed by the exponential multiplier transform. Namely, by setting

$$V(x) := \begin{pmatrix} e^{-\int_0^x p(\tau)d\tau} & 0 \\ 0 & e^{-\int_0^x q(\tau)d\tau} \end{pmatrix} U(x),$$

one can observe that $V(x)$ satisfies, for $x \in (0, x_0)$,

$$V'(x) = \begin{pmatrix} 0 & r(x)\exp\left(-\int_0^x \left[p(\tau)-q(\tau)\right]d\tau\right) \\ s(x)\exp\left(\int_0^x \left[p(\tau)-q(\tau)\right]d\tau\right) & 0 \end{pmatrix} V(x), \qquad (4)$$
with the initial condition $V(0) = U_0$. Now, if the antidiagonal elements of the matrix on the right-hand side of (4) are equal, i.e.,

$$r(x)\exp\left(-\int_0^x \left[p(\tau)-q(\tau)\right]d\tau\right) = s(x)\exp\left(\int_0^x \left[p(\tau)-q(\tau)\right]d\tau\right) =: c_1(x), \qquad (5)$$

then the solution can be written explicitly as

$$V(x) = \left[\cosh\left(\int_0^x c_1(\tau)d\tau\right) I + \sinh\left(\int_0^x c_1(\tau)d\tau\right) S\right] U_0, \qquad (6)$$
where

$$I := \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad S := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
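The closed-form solution (6) is easy to test numerically. The following minimal Python sketch (the coefficient $c_1(x)$ and the initial vector are hypothetical, chosen only for illustration) integrates the antidiagonal system directly and compares the result with the cosh/sinh formula:

```python
import numpy as np

c1 = lambda x: 0.7 + 0.4*np.sin(x)   # hypothetical equal antidiagonal entry c1(x)
xs = np.linspace(0.0, 1.0, 4001)
V0 = np.array([1.0, -0.3])

def rk4(F, y0, xs):
    # classical fourth-order Runge-Kutta integrator for y' = F(x, y)
    y = np.asarray(y0, float); out = [y]
    for x0, x1 in zip(xs[:-1], xs[1:]):
        h = x1 - x0
        k1 = F(x0, y); k2 = F(x0 + h/2, y + h/2*k1)
        k3 = F(x0 + h/2, y + h/2*k2); k4 = F(x1, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); out.append(y)
    return np.array(out)

# direct integration of V' = [[0, c1], [c1, 0]] V
num = rk4(lambda x, V: np.array([c1(x)*V[1], c1(x)*V[0]]), V0, xs)[-1]

# closed form: V(x) = [cosh(int c1) I + sinh(int c1) S] V0, S swaps the components
G = np.sum((c1(xs)[1:] + c1(xs)[:-1])/2 * np.diff(xs))   # trapezoidal integral of c1
closed = np.cosh(G)*V0 + np.sinh(G)*V0[::-1]
print(np.max(np.abs(num - closed)))   # agreement up to quadrature/integration error
```
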
However, condition (5), which amounts to the assumption

$$s(x) = r(x)\exp\left(-2\int_0^x \left[p(\tau)-q(\tau)\right]d\tau\right), \qquad (7)$$
may be too restrictive. Indeed, in view of the multiple possible similarity transformations that allow rewriting (3) in different equivalent forms, we want to have a clearly identifiable matrix structure which should be, on the one hand, immediately recognisable and, on the other hand, lead to a simplification of the solution or even an explicit solution. Such an identifiable structure may be, for example, a pairwise relation between some of the elements of the matrix in (3). The explicit solvability condition, nevertheless, plays against any visible structural property of the matrix: even though assumption (7) leads to a closed-form solution, it implies a rather complicated relation between the matrix elements. Condition (7) is very specific and thus unlikely to be satisfied for any easily describable class of matrix elements unless, of course, $p \equiv q$, which would then also entail $r \equiv s$. This exactly solvable case with equal diagonal and equal antidiagonal elements in (3) may sometimes be valuable, but it does not appear to cover many applications.
It turns out that condition (7) has an analog which is less stringent on the form of the matrix elements in (3), with more pertinence to important physical contexts, and which, at the same time, still leads to a significant simplification of the solution procedure (and, at least in some cases, also to closed-form solutions). This condition reads

$$s(x) = \overline{r(x)}\exp\left(-2\,\mathrm{Re}\int_0^x \left[p(\tau)-q(\tau)\right]d\tau\right). \qquad (8)$$
Despite the similarity to (7), condition (8) is easier to satisfy while preserving a visible matrix structure. Indeed, if $p(x) - q(x)$ is a purely imaginary function (in particular, when $p \equiv \overline{q}$), the complicated exponential factor in (8) disappears. In this case, the implied condition $s(x) = \overline{r(x)}$ is clearly identifiable but far from trivial since, as we shall see, it covers a variety of different practical applications. This reasoning motivates us to consider (1) as well as its nonhomogeneous analog (2).
We note that, in the context of second-order differential equations, the proposed reformulation into the antilinear ODE falls somewhere between two classical approaches: reduction to a linear dynamical system and reduction to a single nonlinear ODE such as the Riccati equation. Both of these approaches can be used to practical advantage. In the first case, perhaps the most meaningful way of vectorising the one-dimensional wave equation is splitting the problem into consecutive contributions of transmitted and reflected waves, as given by the Bremmer series and its analogs [2,3,4]. The computational advantage of the second approach can be achieved when the phase of the solution satisfying the nonlinear ODE is non-oscillatory, see [5].
In our approach, instead of a system, only one first-order equation has to be dealt with, whereas the antilinearity is perhaps the closest one can get to the linear realm in terms of extension of methods such as integral transforms and series expansions.
The plan of this note is as follows. Section 2 is dedicated to the transformation of (1) and (2) into the homogeneous antilinear ODE

$$u'(x) = f(x)\overline{u(x)}, \quad x \in (0, x_0), \qquad (9)$$

and its nonhomogeneous analog

$$u'(x) = f(x)\overline{u(x)} + g(x), \quad x \in (0, x_0), \qquad (10)$$
respectively. Next, in Section 3, we outline some relevant physical applications, i.e., problems which, upon appropriate transformations, can be recast as (1) or (2) and are thus reducible to formulations involving antilinear ODE (9), or, in one case, its nonhomogeneous version (10). Section 4 illustrates what can be achieved, thanks to the new reformulation, in one of the discussed physical contexts. Finally, in Section 5, we discuss some advantages of the proposed reformulation and conclude with some remarks on how antilinear ODEs can be constructively addressed further and hence bring advancements in the mentioned applied contexts.

2. Transformation of an Antidiagonal Problem into an Antilinear ODE

2.1. Homogeneous Case: From (1) to (9)

We consider (1) supplemented with the initial data $U(0) = U_0 \equiv (u_1^0, u_2^0)^T$, where $u_1^0, u_2^0 \in \mathbb{C}$ are arbitrary constants, and we devise a transformation that allows construction of the solution of system (1) in terms of the solution of an antilinear ODE of the form (9).
Let us first motivate our approach to the construction of such a transformation. To this end, given a complex-valued function $f(x)$, it is instructive to consider the elementary differential equation for $v(x)$

$$v'(x) = f(x)v(x), \quad x \in (0, x_0), \qquad (11)$$

with the initial condition $v(0) = 1$. On the one hand, (11) is in separable form and hence can be integrated directly to yield the solution

$$v(x) = \exp\left(\int_0^x f(\tau)d\tau\right). \qquad (12)$$
On the other hand, rewriting (11) in the integral form

$$v(x) = 1 + \int_0^x f(\tau)v(\tau)d\tau, \quad x \in (0, x_0),$$

the Picard iterative process gives

$$v(x) = 1 + \int_0^x f(\tau)d\tau + \int_0^x f(\tau_2)\int_0^{\tau_2} f(\tau_1)\,d\tau_1 d\tau_2 + \int_0^x f(\tau_3)\int_0^{\tau_3} f(\tau_2)\int_0^{\tau_2} f(\tau_1)\,d\tau_1 d\tau_2 d\tau_3 + \ldots. \qquad (13)$$
Comparison of (12) with (13) results in the important identities

$$\exp\left(\int_0^x f(\tau)d\tau\right) = 1 + \int_0^x f(\tau)d\tau + \int_0^x f(\tau_2)\int_0^{\tau_2} f(\tau_1)\,d\tau_1 d\tau_2 + \int_0^x f(\tau_3)\int_0^{\tau_3} f(\tau_2)\int_0^{\tau_2} f(\tau_1)\,d\tau_1 d\tau_2 d\tau_3 + \ldots,$$

$$\sinh\left(\int_0^x f(\tau)d\tau\right) = \int_0^x f(\tau)d\tau + \int_0^x f(\tau_3)\int_0^{\tau_3} f(\tau_2)\int_0^{\tau_2} f(\tau_1)\,d\tau_1 d\tau_2 d\tau_3 + \ldots, \qquad (14)$$

$$\cosh\left(\int_0^x f(\tau)d\tau\right) = 1 + \int_0^x f(\tau_2)\int_0^{\tau_2} f(\tau_1)\,d\tau_1 d\tau_2 + \int_0^x f(\tau_4)\int_0^{\tau_4} f(\tau_3)\int_0^{\tau_3} f(\tau_2)\int_0^{\tau_2} f(\tau_1)\,d\tau_1 d\tau_2 d\tau_3 d\tau_4 + \ldots, \qquad (15)$$

where we used the identity $\exp z = \cosh z + \sinh z$, $z \in \mathbb{C}$, and a parity argument to split the terms: $\sinh$ is an odd function and hence (14) may contain only an odd number of multiplicative instances of $f$, and, similarly, (15) may contain only terms with an even number of multiplications by $f$ due to $\cosh$ being an even function.
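The identities (14) and (15) can be verified numerically. The Python sketch below (with an arbitrary sample complex-valued $f$, chosen only for illustration) computes the iterated integrals by cumulative trapezoidal quadrature and compares the odd/even partial sums with $\sinh$ and $\cosh$ of $\int_0^x f$:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 20001)
f  = (0.8 + 0.6j)*np.cos(xs)            # sample integrand f(x)

def cumint(y):
    # cumulative trapezoidal integral of samples y over xs, starting at 0
    h = xs[1] - xs[0]
    out = np.zeros_like(y)
    out[1:] = np.cumsum(y[1:] + y[:-1])*h/2
    return out

F = cumint(f)                           # antiderivative of f vanishing at 0
# iterated integrals: I_n(x) = int_0^x f I_{n-1}, with I_0 = 1
I, terms = np.ones_like(f), []
for _ in range(8):
    I = cumint(f*I)
    terms.append(I)

sinh_series = sum(terms[0::2])          # odd numbers of f-factors
cosh_series = 1 + sum(terms[1::2])      # even numbers of f-factors
print(np.max(np.abs(np.sinh(F) - sinh_series)),
      np.max(np.abs(np.cosh(F) - cosh_series)))   # both small
```
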
Now, similarly to (14) and (15), let us consider the following quantities:

$$S_f(x) := \int_0^x f(\tau_1)d\tau_1 + \int_0^x f(\tau_3)\int_0^{\tau_3} \overline{f(\tau_2)}\int_0^{\tau_2} f(\tau_1)\,d\tau_1 d\tau_2 d\tau_3 + \ldots, \qquad (16)$$

$$C_f(x) := 1 + \int_0^x f(\tau_2)\int_0^{\tau_2} \overline{f(\tau_1)}\,d\tau_1 d\tau_2 + \int_0^x f(\tau_4)\int_0^{\tau_4} \overline{f(\tau_3)}\int_0^{\tau_3} f(\tau_2)\int_0^{\tau_2} \overline{f(\tau_1)}\,d\tau_1 d\tau_2 d\tau_3 d\tau_4 + \ldots. \qquad (17)$$
Let us show that (16) and (17) are inherent to an algebraic structure underlying (1). To this end, we rewrite (1) in the integral form

$$U(x) = U_0 + \int_0^x A(\tau)U(\tau)d\tau, \quad x \in (0, x_0), \qquad A(\tau) := \begin{pmatrix} 0 & f(\tau) \\ \overline{f(\tau)} & 0 \end{pmatrix}, \qquad (18)$$

and note that

$$A(\tau_2)A(\tau_1) = \begin{pmatrix} f(\tau_2)\overline{f(\tau_1)} & 0 \\ 0 & \overline{f(\tau_2)}f(\tau_1) \end{pmatrix},$$

$$A(\tau_3)A(\tau_2)A(\tau_1) = \begin{pmatrix} 0 & f(\tau_3)\overline{f(\tau_2)}f(\tau_1) \\ \overline{f(\tau_3)}f(\tau_2)\overline{f(\tau_1)} & 0 \end{pmatrix},$$

$$A(\tau_4)A(\tau_3)A(\tau_2)A(\tau_1) = \begin{pmatrix} f(\tau_4)\overline{f(\tau_3)}f(\tau_2)\overline{f(\tau_1)} & 0 \\ 0 & \overline{f(\tau_4)}f(\tau_3)\overline{f(\tau_2)}f(\tau_1) \end{pmatrix}, \quad \ldots$$
Therefore, writing out the Picard iterations for solving (18), we obtain

$$U(x) = \begin{pmatrix} C_f(x) & S_f(x) \\ S_{\overline{f}}(x) & C_{\overline{f}}(x) \end{pmatrix} U_0 = \begin{pmatrix} C_f(x) & S_f(x) \\ \overline{S_f(x)} & \overline{C_f(x)} \end{pmatrix} U_0. \qquad (19)$$
Furthermore, it is easy to see from (16) and (17) that $S_f(x)$, $C_f(x)$ obey the intertwining relations

$$C_f'(x) = f(x)\overline{S_f(x)}, \qquad S_f'(x) = f(x)\overline{C_f(x)}, \quad x \in (0, x_0), \qquad (20)$$

and the conditions $S_f(0) = 0$, $C_f(0) = 1$. Introducing another pair of functions

$$Z_+(x) := C_f(x) + S_f(x), \qquad Z_-(x) := C_f(x) - S_f(x),$$
we decouple (20) as

$$Z_+'(x) = f(x)\overline{Z_+(x)}, \quad x \in (0, x_0), \qquad Z_+(0) = 1, \qquad (21)$$

$$Z_-'(x) = -f(x)\overline{Z_-(x)}, \quad x \in (0, x_0), \qquad Z_-(0) = 1. \qquad (22)$$
Equations (21) and (22) are two separate instances of the initial-value problem for the antilinear ODE of the form (9). Solving this ODE thus yields the solutions of (21) and (22) and, consequently, of (20), providing the quantities $S_f(x)$, $C_f(x)$ appearing in (19), which furnishes the solution of (1).
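The whole reduction can be illustrated by a short numerical experiment. The Python sketch below (the coefficient $f$ and the initial vector are hypothetical, chosen only for illustration) solves the two antilinear problems (21) and (22), recombines them into $C_f$, $S_f$, and checks the reconstruction (19) against a direct integration of system (1):

```python
import numpy as np

def rk4(F, y0, xs):
    # classical fourth-order Runge-Kutta integrator for y' = F(x, y), complex y
    y = np.asarray(y0, complex); out = [y]
    for x0, x1 in zip(xs[:-1], xs[1:]):
        h = x1 - x0
        k1 = F(x0, y); k2 = F(x0 + h/2, y + h/2*k1)
        k3 = F(x0 + h/2, y + h/2*k2); k4 = F(x1, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); out.append(y)
    return np.array(out)

f = lambda x: (1 + 2j)*np.exp(1j*x)     # arbitrary sample coefficient
xs = np.linspace(0.0, 1.0, 2001)

# system (1): U' = [[0, f], [conj(f), 0]] U
U0 = np.array([0.3 - 0.1j, 1.2 + 0.5j])
U = rk4(lambda x, U: np.array([f(x)*U[1], np.conj(f(x))*U[0]]), U0, xs)

# antilinear scalar problems (21)-(22): Z+' = f conj(Z+), Z-' = -f conj(Z-)
Zp = rk4(lambda x, z: f(x)*np.conj(z), [1 + 0j], xs)[:, 0]
Zm = rk4(lambda x, z: -f(x)*np.conj(z), [1 + 0j], xs)[:, 0]
Cf, Sf = (Zp + Zm)/2, (Zp - Zm)/2

# reconstruction (19): U = [[Cf, Sf], [conj(Sf), conj(Cf)]] U0
Urec = np.stack([Cf*U0[0] + Sf*U0[1],
                 np.conj(Sf)*U0[0] + np.conj(Cf)*U0[1]], axis=1)
print(np.max(np.abs(U - Urec)))   # small: the scalar reduction reproduces (1)
```
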

2.2. Nonhomogeneous Case: From (2) to (10)

Let us now consider (2) with $G(x) \equiv (g_1(x), g_2(x))^T$ and subject to the initial condition $U(0) = U_0 \equiv (u_1^0, u_2^0)^T$. We are going to show that, in the particular case where

$$g_2(x) = i\,\overline{g_1(x)}, \qquad u_2^0 = i\,\overline{u_1^0}, \qquad (23)$$

the solution of (2) can be constructed in terms of solutions of two instances of problem (10). As we shall see in Section 3.2, assumption (23) is satisfied in at least one important practical context.
Similarly to (16) and (17), let us introduce

$$S_{f,h}(x) := \int_0^x f(\tau_1)h(\tau_1)d\tau_1 + \int_0^x f(\tau_3)\int_0^{\tau_3} \overline{f(\tau_2)}\int_0^{\tau_2} f(\tau_1)h(\tau_1)\,d\tau_1 d\tau_2 d\tau_3 + \ldots, \qquad (24)$$

$$C_{f,h}(x) := h(x) + \int_0^x f(\tau_2)\int_0^{\tau_2} \overline{f(\tau_1)}h(\tau_1)\,d\tau_1 d\tau_2 + \int_0^x f(\tau_4)\int_0^{\tau_4} \overline{f(\tau_3)}\int_0^{\tau_3} f(\tau_2)\int_0^{\tau_2} \overline{f(\tau_1)}h(\tau_1)\,d\tau_1 d\tau_2 d\tau_3 d\tau_4 + \ldots. \qquad (25)$$
Rewriting (2) in the integral form

$$U(x) = U_0 + \int_0^x G(\tau)d\tau + \int_0^x A(\tau)U(\tau)d\tau, \quad x \in (0, x_0), \qquad A(\tau) := \begin{pmatrix} 0 & f(\tau) \\ \overline{f(\tau)} & 0 \end{pmatrix}, \qquad (26)$$

it is straightforward to see that the Picard iterations give

$$U(x) = \begin{pmatrix} C_{f,h_1}(x) + S_{f,h_2}(x) \\ S_{\overline{f},h_1}(x) + C_{\overline{f},h_2}(x) \end{pmatrix}, \qquad (27)$$

where $S_{f,h}(x)$, $C_{f,h}(x)$ are as defined by (24) and (25), and

$$h_1(x) := u_1^0 + \int_0^x g_1(\tau)d\tau, \qquad h_2(x) := u_2^0 + \int_0^x g_2(\tau)d\tau.$$
By differentiating $S_{f,h_2}(x)$ and $C_{f,h_1}(x)$, we obtain the intertwining relations

$$C_{f,h_1}'(x) = h_1'(x) + f(x)S_{\overline{f},h_1}(x), \qquad S_{f,h_2}'(x) = f(x)C_{\overline{f},h_2}(x), \quad x \in (0, x_0), \qquad (28)$$

which are to be supplemented by the conditions $C_{f,h_1}(0) = h_1(0)$, $S_{f,h_2}(0) = 0$.
Note that, from (24) and (25), $S_{\overline{f},h_1}(x) = \overline{S_{f,\overline{h_1}}(x)}$ and $C_{\overline{f},h_2}(x) = \overline{C_{f,\overline{h_2}}(x)}$. Moreover, assumption (23) entails $h_2(x) = i\,\overline{h_1(x)}$, and, by linearity of $S_{f,h}(x)$, $C_{f,h}(x)$ in $h$, we have $S_{f,h_2}(x) = i\,S_{f,\overline{h_1}}(x)$ and $\overline{C_{f,\overline{h_2}}(x)} = i\,\overline{C_{f,h_1}(x)}$. Consequently, relation (28) becomes

$$C_{f,h_1}'(x) = h_1'(x) + f(x)\overline{S_{f,\overline{h_1}}(x)}, \qquad S_{f,\overline{h_1}}'(x) = f(x)\overline{C_{f,h_1}(x)}, \quad x \in (0, x_0). \qquad (29)$$
Setting

$$Z_+(x) := C_{f,h_1}(x) + S_{f,\overline{h_1}}(x), \qquad Z_-(x) := C_{f,h_1}(x) - S_{f,\overline{h_1}}(x), \qquad (30)$$

we obtain from (29) two decoupled ODE problems:

$$Z_+'(x) = f(x)\overline{Z_+(x)} + g_1(x), \qquad Z_+(0) = u_1^0, \qquad (31)$$

$$Z_-'(x) = -f(x)\overline{Z_-(x)} + g_1(x), \qquad Z_-(0) = u_1^0, \qquad (32)$$

each of which is of the form (10).
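The nonhomogeneous reduction can be checked numerically as well. In the Python sketch below (the sample $f$, $g_1$ and $u_1^0$ are hypothetical; $g_2$ and $u_2^0$ are then fixed by assumption (23)), the two scalar problems of the form (10) are solved and recombined, and the result is compared with a direct integration of system (2):

```python
import numpy as np

f  = lambda x: (0.6 - 0.8j)*np.cos(2*x)     # sample coefficient
g1 = lambda x: 0.5*np.exp(1j*x)             # sample source component
u10 = 0.4 + 0.2j                            # u_1^0; u_2^0 = i*conj(u_1^0) by (23)
xs = np.linspace(0.0, 1.0, 2001)

def rk4(F, y0, xs):
    # fourth-order Runge-Kutta integrator for y' = F(x, y), complex y
    y = np.asarray(y0, complex); out = [y]
    for x0, x1 in zip(xs[:-1], xs[1:]):
        h = x1 - x0
        k1 = F(x0, y); k2 = F(x0 + h/2, y + h/2*k1)
        k3 = F(x0 + h/2, y + h/2*k2); k4 = F(x1, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); out.append(y)
    return np.array(out)

# direct integration of system (2) under assumption (23)
def rhs(x, U):
    return np.array([f(x)*U[1] + g1(x),
                     np.conj(f(x))*U[0] + 1j*np.conj(g1(x))])
U = rk4(rhs, [u10, 1j*np.conj(u10)], xs)

# two scalar antilinear problems: Z+' = f conj(Z+) + g1, Z-' = -f conj(Z-) + g1
Zp = rk4(lambda x, z: f(x)*np.conj(z) + g1(x), [u10], xs)[:, 0]
Zm = rk4(lambda x, z: -f(x)*np.conj(z) + g1(x), [u10], xs)[:, 0]

# recombine: C = (Z+ + Z-)/2, S = (Z+ - Z-)/2, u1 = C + i S, u2 = i conj(u1)
C, S = (Zp + Zm)/2, (Zp - Zm)/2
u1 = C + 1j*S
Urec = np.stack([u1, 1j*np.conj(u1)], axis=1)
print(np.max(np.abs(U - Urec)))   # small: scalar reconstruction matches (2)
```
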

3. Some Physical Contexts Leading to (1) and (2)

3.1. Linear Schrödinger Equation

Consider the stationary linear Schrödinger equation in 1D, with a potential term $a(x) > 0$,

$$u''(x) + a(x)u(x) = 0, \quad x \in (0, x_0). \qquad (33)$$
We focus here on the initial-value problem, i.e., we supplement (33) with the conditions $u(0) = u_0$, $u'(0) = u_1$, but boundary-value problems on $(0, x_0)$, with $x_0$ finite or infinite, could also be treated. We assume $a \in C^1(0, x_0)$.
Introducing the vector-function

$$U(x) := \begin{pmatrix} u(x) \\ \dfrac{1}{a^{1/2}(x)}u'(x) \end{pmatrix}, \qquad (34)$$

we observe that $U(x)$ satisfies

$$U'(x) = \begin{pmatrix} u'(x) \\ -\dfrac{a'(x)}{2a^{3/2}(x)}u'(x) + \dfrac{1}{a^{1/2}(x)}u''(x) \end{pmatrix} = A(x)U(x), \qquad (35)$$

with

$$A(x) := \begin{pmatrix} 0 & a^{1/2}(x) \\ -a^{1/2}(x) & -\dfrac{a'(x)}{2a(x)} \end{pmatrix},$$

and $U(0) = (u_0, u_1/a^{1/2}(0))^T$. Here, in the second equality of (35), we used (33) to eliminate $u''(x)$.
By writing

$$A(x) = a^{1/2}(x)\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} - \dfrac{a'(x)}{2a(x)}\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},$$

we note that the first matrix on the right-hand side is diagonalisable as follows:

$$P\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}P^{-1} = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}, \qquad P := \dfrac{1}{2^{1/2}}\begin{pmatrix} i & 1 \\ 1 & i \end{pmatrix}, \qquad P^{-1} = \dfrac{1}{2^{1/2}}\begin{pmatrix} -i & 1 \\ 1 & -i \end{pmatrix}. \qquad (36)$$
Consequently, introducing $V(x) := PU(x)$, we multiply both sides of (35) by $P$ and thus transform it into

$$V'(x) = PA(x)P^{-1}V(x) = B(x)V(x), \quad x \in (0, x_0), \qquad (37)$$

with

$$B(x) := a^{1/2}(x)\begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} - \dfrac{a'(x)}{4a(x)}\begin{pmatrix} 1 & -i \\ i & 1 \end{pmatrix} = \begin{pmatrix} b_0(x) & 0 \\ 0 & \overline{b_0(x)} \end{pmatrix} + \dfrac{a'(x)}{4a(x)}\begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix},$$

$$b_0(x) := i\,a^{1/2}(x) - \dfrac{a'(x)}{4a(x)}, \qquad (38)$$

and supplemented by the initial condition

$$V(0) = PU(0) = \dfrac{1}{2^{1/2}}\begin{pmatrix} i\,u_0 + u_1/a^{1/2}(0) \\ u_0 + i\,u_1/a^{1/2}(0) \end{pmatrix}.$$
Furthermore, introducing

$$W(x) := \begin{pmatrix} \exp\left(-\int_0^x b_0(\tau)d\tau\right) & 0 \\ 0 & \exp\left(-\int_0^x \overline{b_0(\tau)}d\tau\right) \end{pmatrix} V(x),$$

we have

$$\dfrac{d}{dx}\begin{pmatrix} \exp\left(-\int_0^x b_0(\tau)d\tau\right) & 0 \\ 0 & \exp\left(-\int_0^x \overline{b_0(\tau)}d\tau\right) \end{pmatrix} = \begin{pmatrix} -b_0(x)\exp\left(-\int_0^x b_0(\tau)d\tau\right) & 0 \\ 0 & -\overline{b_0(x)}\exp\left(-\int_0^x \overline{b_0(\tau)}d\tau\right) \end{pmatrix}.$$
Therefore, (37) entails

$$W'(x) = \dfrac{a'(x)}{4a(x)}\begin{pmatrix} 0 & i\exp\left(-\int_0^x \left[b_0(\tau)-\overline{b_0(\tau)}\right]d\tau\right) \\ -i\exp\left(\int_0^x \left[b_0(\tau)-\overline{b_0(\tau)}\right]d\tau\right) & 0 \end{pmatrix}W(x),$$

which, recalling (38), we can rewrite as

$$W'(x) = C(x)W(x), \quad x \in (0, x_0), \qquad (39)$$

with

$$C(x) := \begin{pmatrix} 0 & c_0(x) \\ \overline{c_0(x)} & 0 \end{pmatrix}, \qquad c_0(x) := \dfrac{i\,a'(x)}{4a(x)}\exp\left(-2i\int_0^x a^{1/2}(\tau)d\tau\right),$$

and the initial condition

$$W(0) = V(0) = \dfrac{1}{2^{1/2}}\begin{pmatrix} i\,u_0 + u_1/a^{1/2}(0) \\ u_0 + i\,u_1/a^{1/2}(0) \end{pmatrix}.$$
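The chain of transformations can be tested numerically. The Python sketch below (with a hypothetical smooth coefficient $a(x) > 0$ and illustrative initial data; the sign conventions are the ones adopted in this subsection, where the antidiagonal coefficient is $i\,a'/(4a)$ times a phase factor $e^{-2i\int \sqrt{a}}$) integrates the reduced antidiagonal system, carrying the phase $\int\sqrt{a}$ as an extra state, then undoes the gauge and diagonalisation transforms and compares the reconstructed $u$ with a direct integration of (33):

```python
import numpy as np

a  = lambda x: 1.0 + 0.3*np.sin(x)          # sample coefficient a(x) > 0
ap = lambda x: 0.3*np.cos(x)                # its derivative a'(x)
u0, u1 = 1.0 + 0.0j, 0.5 + 0.0j             # initial data u(0), u'(0)
xs = np.linspace(0.0, 2.0, 4001)

def rk4(F, y0, xs):
    # fourth-order Runge-Kutta integrator for y' = F(x, y), complex y
    y = np.asarray(y0, complex); out = [y]
    for x0, x1 in zip(xs[:-1], xs[1:]):
        h = x1 - x0
        k1 = F(x0, y); k2 = F(x0 + h/2, y + h/2*k1)
        k3 = F(x0 + h/2, y + h/2*k2); k4 = F(x1, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); out.append(y)
    return np.array(out)

# direct integration of u'' + a u = 0 as a first-order system (u, u')
ref = rk4(lambda x, y: np.array([y[1], -a(x)*y[0]]), [u0, u1], xs)

# reduced system W' = C(x) W, with the phase phi = int sqrt(a) as a third state
def F(x, y):
    W1, W2, phi = y
    c0 = 1j*ap(x)/(4*a(x))*np.exp(-2j*phi)
    return np.array([c0*W2, np.conj(c0)*W1, np.sqrt(a(x))])

W0 = np.array([(1j*u0 + u1/np.sqrt(a(0)))/np.sqrt(2),
               (u0 + 1j*u1/np.sqrt(a(0)))/np.sqrt(2), 0.0])
sol = rk4(F, W0, xs)
W1, W2, phi = sol[:, 0], sol[:, 1], sol[:, 2].real

# undo the gauge and the diagonalisation: V = diag(e^{int b0}, e^{int conj(b0)}) W,
# where e^{int b0} = (a(0)/a(x))^{1/4} e^{i phi}; then U = P^{-1} V, u = U_1
amp = (a(0)/a(xs))**0.25
V1, V2 = amp*np.exp(1j*phi)*W1, amp*np.exp(-1j*phi)*W2
u = (-1j*V1 + V2)/np.sqrt(2)                # first row of P^{-1}
print(np.max(np.abs(u - ref[:, 0])))        # small: the reductions agree
```
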
The steps described above draw from [6] (see also [7]) and provide one way to rewrite the linear Schrödinger equation in the form (1), but this approach is not the only one. Alternative reduction procedures may be more cumbersome yet more beneficial in practice, depending on the final goal. See Section 4, drawing from [8], where the initial vectorisation of (33) differs from (34) while the other steps of the transformation are ideologically similar.

3.2. Helmholtz Equation

Stationary problems for wave propagation in heterogeneous media are described by the Helmholtz equation, whose 1D version is given by

$$\left(\alpha(x)u'(x)\right)' + \beta(x)u(x) = f(x), \quad x \in (0, x_0). \qquad (40)$$
Here, $\alpha(x), \beta(x) > 0$ are material parameters and $f(x)$ is the source term. As in Section 3.1, we suppose that (40) is supplemented by the initial conditions $u(0) = u_0$, $u'(0) = u_1$. Furthermore, we assume that $f(x)$, $u_0$, $u_1$ are all real-valued. This assumption does not reduce generality: since (40) is linear with real-valued $\alpha(x)$, $\beta(x)$, a real-valued problem can be solved separately for the real and imaginary parts of the solution of the original equation.
Setting

$$U(x) := \begin{pmatrix} \alpha^{1/2}(x)\beta^{1/2}(x)u(x) \\ \alpha(x)u'(x) \end{pmatrix},$$

we recast (40) in the vector form

$$U'(x) = A(x)U(x) + F_0(x), \quad x \in (0, x_0), \qquad (41)$$

with

$$A(x) := \begin{pmatrix} \dfrac{\left(\alpha(x)\beta(x)\right)'}{2\alpha(x)\beta(x)} & \left(\dfrac{\beta(x)}{\alpha(x)}\right)^{1/2} \\ -\left(\dfrac{\beta(x)}{\alpha(x)}\right)^{1/2} & 0 \end{pmatrix}, \qquad F_0(x) := \begin{pmatrix} 0 \\ f(x) \end{pmatrix},$$

and the initial condition

$$U(0) = \begin{pmatrix} \alpha^{1/2}(0)\beta^{1/2}(0)u_0 \\ \alpha(0)u_1 \end{pmatrix}.$$
We now follow reduction steps similar to those in Section 3.1. We write

$$A(x) = \left(\dfrac{\beta(x)}{\alpha(x)}\right)^{1/2}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} + \dfrac{\left(\alpha(x)\beta(x)\right)'}{2\alpha(x)\beta(x)}\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},$$

and note that we can diagonalise the first matrix with the help of the auxiliary constant matrix $P$ introduced in (36). Denoting $V(x) := PU(x)$, we hence have, from (41),

$$V'(x) = B(x)V(x) + F_1(x), \qquad (42)$$

with

$$B(x) := \begin{pmatrix} b_1(x) & 0 \\ 0 & \overline{b_1(x)} \end{pmatrix} + \dfrac{\left(\alpha(x)\beta(x)\right)'}{4\alpha(x)\beta(x)}\begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}, \qquad F_1(x) := PF_0(x) = \begin{pmatrix} f(x)/2^{1/2} \\ i\,f(x)/2^{1/2} \end{pmatrix},$$

$$b_1(x) := i\left(\dfrac{\beta(x)}{\alpha(x)}\right)^{1/2} + \dfrac{\left(\alpha(x)\beta(x)\right)'}{4\alpha(x)\beta(x)},$$

and the initial condition

$$V(0) = PU(0) = \dfrac{1}{2^{1/2}}\begin{pmatrix} i\,\alpha^{1/2}(0)\beta^{1/2}(0)u_0 + \alpha(0)u_1 \\ \alpha^{1/2}(0)\beta^{1/2}(0)u_0 + i\,\alpha(0)u_1 \end{pmatrix}.$$
Introducing

$$W(x) := \begin{pmatrix} \exp\left(-\int_0^x b_1(\tau)d\tau\right) & 0 \\ 0 & \exp\left(-\int_0^x \overline{b_1(\tau)}d\tau\right) \end{pmatrix} V(x),$$

Equation (42) transforms into

$$W'(x) = C(x)W(x) + G(x), \qquad (43)$$

where

$$C(x) := \begin{pmatrix} 0 & c_1(x) \\ \overline{c_1(x)} & 0 \end{pmatrix}, \qquad c_1(x) := \dfrac{i\left(\alpha(x)\beta(x)\right)'}{4\alpha(x)\beta(x)}\exp\left(-2i\int_0^x \left(\dfrac{\beta(\tau)}{\alpha(\tau)}\right)^{1/2}d\tau\right),$$

$$G(x) := \begin{pmatrix} g_1(x) \\ i\,\overline{g_1(x)} \end{pmatrix}, \qquad g_1(x) := \dfrac{f(x)}{2^{1/2}}\exp\left(-\int_0^x \left[i\left(\dfrac{\beta(\tau)}{\alpha(\tau)}\right)^{1/2} + \dfrac{\left(\alpha(\tau)\beta(\tau)\right)'}{4\alpha(\tau)\beta(\tau)}\right]d\tau\right),$$

and the initial condition

$$W(0) = \begin{pmatrix} w_1^0 \\ i\,\overline{w_1^0} \end{pmatrix}, \qquad w_1^0 := \dfrac{1}{2^{1/2}}\left(i\,\alpha^{1/2}(0)\beta^{1/2}(0)u_0 + \alpha(0)u_1\right).$$
Here, in relating the first and second components of the vector $G(x)$, and similarly of $W(0)$, we used the real-valuedness of $\alpha(x)$, $\beta(x)$, $u_0$, $u_1$ discussed at the beginning of this subsection.
It remains to observe that system (43) is such that the matrix C x and the vectors G x , W 0 fit the assumptions discussed in Section 2.2.

3.3. Zakharov–Shabat System

It is well known that the solution of a spectral problem for the linear Schrödinger equation appears as an intermediate step in solving the Korteweg–de Vries (KdV) equation by the inverse scattering transform. Zakharov–Shabat systems play the same role in the integrability of other nonlinear equations, see ([9] p. 10). In particular, the Zakharov–Shabat system

$$\partial_x\begin{pmatrix} v_1(x,t) \\ v_2(x,t) \end{pmatrix} = \begin{pmatrix} -i\xi & q(x,t) \\ \overline{q(x,t)} & i\xi \end{pmatrix}\begin{pmatrix} v_1(x,t) \\ v_2(x,t) \end{pmatrix}, \qquad (44)$$

with $\xi \in \mathbb{R}$ being a spectral parameter, is a linear problem pertinent to the integration of the defocusing cubic nonlinear Schrödinger (NLS) equation

$$i\,\partial_t q(x,t) = -\partial_x^2 q(x,t) + 2\left|q(x,t)\right|^2 q(x,t)$$

subject to the initial data $q(x,0) = q_0(x)$. We refer to [10] for more details on this matter.
We observe that, by setting

$$W(x,t) := \begin{pmatrix} e^{i\xi x} & 0 \\ 0 & e^{-i\xi x} \end{pmatrix}\begin{pmatrix} v_1(x,t) \\ v_2(x,t) \end{pmatrix},$$

the Zakharov–Shabat system (44) immediately reduces to

$$\partial_x W(x,t) = \begin{pmatrix} 0 & q(x,t)\,e^{2i\xi x} \\ \overline{q(x,t)}\,e^{-2i\xi x} & 0 \end{pmatrix}W(x,t),$$

which is a system of the form (1).
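Since the gauge transform here is explicit, the claim is straightforward to check numerically. A minimal Python sketch (with a hypothetical potential $q$ at a fixed time and illustrative initial data) integrates both systems and compares the transformed solutions:

```python
import numpy as np

xi = 0.7                                     # spectral parameter
q  = lambda x: np.exp(-x**2)*(1.0 + 0.5j)    # sample potential q(x, t) at fixed t

def rk4(F, y0, xs):
    # fourth-order Runge-Kutta integrator for y' = F(x, y), complex y
    y = np.asarray(y0, complex); out = [y]
    for x0, x1 in zip(xs[:-1], xs[1:]):
        h = x1 - x0
        k1 = F(x0, y); k2 = F(x0 + h/2, y + h/2*k1)
        k3 = F(x0 + h/2, y + h/2*k2); k4 = F(x1, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); out.append(y)
    return np.array(out)

xs = np.linspace(0.0, 2.0, 2001)
v0 = np.array([1.0 + 0j, 0.3 - 0.2j])

# Zakharov-Shabat system (44)
ZS = rk4(lambda x, v: np.array([-1j*xi*v[0] + q(x)*v[1],
                                np.conj(q(x))*v[0] + 1j*xi*v[1]]), v0, xs)

# reduced antidiagonal system for W = diag(e^{i xi x}, e^{-i xi x}) v
Wsys = rk4(lambda x, w: np.array([q(x)*np.exp(2j*xi*x)*w[1],
                                  np.conj(q(x))*np.exp(-2j*xi*x)*w[0]]),
           v0, xs)                           # gauge is the identity at x = 0
W_from_ZS = np.stack([np.exp(1j*xi*xs)*ZS[:, 0],
                      np.exp(-1j*xi*xs)*ZS[:, 1]], axis=1)
print(np.max(np.abs(Wsys - W_from_ZS)))      # small: the two agree
```
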

3.4. Kubelka–Munk Equations

The Kubelka–Munk equations are a simple phenomenological model [11] for computing reflected and transmitted optical fluxes without solving the significantly more complicated radiative transfer equations. Due to their simplicity, the Kubelka–Munk equations have been popular in practice (in paper, paint and visibility contexts), they have been extensively studied from a modelling viewpoint, and several generalisations have been proposed [12,13,14,15].
We consider the following model equations:

$$\dfrac{d}{dx}\begin{pmatrix} F_+(x) \\ F_-(x) \end{pmatrix} = \begin{pmatrix} -\left(K(x)+S(x)\right) & S(x) \\ -S(x) & K(x)+S(x) \end{pmatrix}\begin{pmatrix} F_+(x) \\ F_-(x) \end{pmatrix}, \qquad (45)$$

where $F_+$, $F_-$ are fluxes in the positive and negative directions, and $K$ and $S$ are related to absorption and scattering, respectively. Note that, unlike in the classical model, we take $K$, $S$ here to depend on the optical depth $x$ rather than simply being constants. This generalisation is expected to be useful since constant scattering and absorption coefficients are known to be a considerable limitation of the Kubelka–Munk model, see ([16] Sect. 4.5).
The procedure of reduction of (45) to (1) is similar to that performed in Section 3.1. Therefore, we shall omit any detailed calculation.
Let us write

$$A(x) := \begin{pmatrix} -\left(K(x)+S(x)\right) & S(x) \\ -S(x) & K(x)+S(x) \end{pmatrix} = S(x)\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} - \left(K(x)+S(x)\right)\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$

and compute

$$B(x) := PA(x)P^{-1} = S(x)\begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} - \left(K(x)+S(x)\right)\begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix},$$
with $P$ defined as in (36). Multiplying both sides of (45) by $P$ and introducing

$$V(x) := P\begin{pmatrix} F_+(x) \\ F_-(x) \end{pmatrix},$$

we obtain

$$V'(x) = B(x)V(x). \qquad (46)$$
Furthermore, setting

$$W(x) := \begin{pmatrix} \exp\left(-i\int_0^x S(\tau)d\tau\right) & 0 \\ 0 & \exp\left(i\int_0^x S(\tau)d\tau\right) \end{pmatrix} V(x),$$

$$C(x) := \begin{pmatrix} 0 & c_2(x) \\ \overline{c_2(x)} & 0 \end{pmatrix}, \qquad c_2(x) := -i\left(K(x)+S(x)\right)\exp\left(-2i\int_0^x S(\tau)d\tau\right),$$

we arrive at

$$W'(x) = C(x)W(x), \qquad (47)$$

which is a system of the form (1).
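As an aside, for constant $K$ and $S$ system (45) is explicitly solvable: the matrix $A$ of (45) satisfies $A^2 = (K^2 + 2KS)\,I$, so $\exp(xA)$ reduces to a cosh/sinh combination. A minimal Python check (the constants and the initial fluxes are chosen only for illustration):

```python
import numpy as np

K, S = 0.8, 1.5                       # constant absorption / scattering
A = np.array([[-(K + S), S],
              [-S,       K + S]])     # matrix of system (45)

# closed form: A^2 = (K^2 + 2 K S) I, hence
# exp(x A) = cosh(g x) I + (sinh(g x)/g) A with g = sqrt(K^2 + 2 K S)
g = np.sqrt(K**2 + 2*K*S)
def flux(x, F0):
    return (np.cosh(g*x)*np.eye(2) + np.sinh(g*x)/g*A) @ F0

# cross-check against a fine RK4 integration of (45)
def rk4(F, y0, xs):
    y = np.asarray(y0, float); out = [y]
    for x0, x1 in zip(xs[:-1], xs[1:]):
        h = x1 - x0
        k1 = F(x0, y); k2 = F(x0 + h/2, y + h/2*k1)
        k3 = F(x0 + h/2, y + h/2*k2); k4 = F(x1, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); out.append(y)
    return np.array(out)

xs = np.linspace(0.0, 1.0, 2001)
F0 = np.array([1.0, 0.4])
num = rk4(lambda x, y: A @ y, F0, xs)
print(np.max(np.abs(num[-1] - flux(1.0, F0))))   # small: closed form agrees
```
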

4. An Example of Application

Let us show a concrete application of the proposed reduction to the antilinear ODE. Namely, we consider a scattering problem for the stationary Schrödinger equation in the semiclassical regime:

$$\epsilon^2\psi''(x) + a_0(x)\psi(x) = 0, \qquad (48)$$

where $\epsilon \ll 1$ is a small parameter related to the Planck constant, $a_0(x) := E - q(x)$, $E$ is the prescribed energy, and $q$ is a potential. We assume that $q \in C^2(\mathbb{R})$ and $q(x) = \mathrm{const}$ for $x \in \mathbb{R}\setminus(0, x_0)$ with some $x_0 > 0$. Moreover, we suppose that $a_0(x) > 0$. Boundary conditions supplementing (48) are $\epsilon\psi'(0) + i\,a_0^{1/2}(0)\psi(0) = 0$ and $\epsilon\psi'(x_0) - i\,a_0^{1/2}(x_0)\psi(x_0) = -2i\,a_0^{1/2}(x_0)$. The second of these conditions describes a particle incident on the potential from the right, whereas the first condition is an open boundary condition representing the absence of influx from the left of the interval $(0, x_0)$. Due to linearity, it suffices to solve the initial-value problem [8]

$$\epsilon^2\phi''(x) + a_0(x)\phi(x) = 0, \qquad (49)$$

subject to the initial conditions $\phi(0) = 1$, $\epsilon\phi'(0) = -i\,a_0^{1/2}(0)$. The solution of (48) with the above-mentioned boundary conditions can then be reconstructed as

$$\psi(x) = \dfrac{-2i\,a_0^{1/2}(x_0)}{\epsilon\phi'(x_0) - i\,a_0^{1/2}(x_0)\phi(x_0)}\,\phi(x).$$
Setting

$$U(x) := \begin{pmatrix} \dfrac{a_0^{1/4}(x)}{\epsilon^{1/2}}\phi(x) \\ \dfrac{\epsilon^{1/2}}{a_0^{1/4}(x)}\phi'(x) + \dfrac{\epsilon^{1/2}a_0'(x)}{4a_0^{5/4}(x)}\phi(x) \end{pmatrix},$$

we have to solve

$$U'(x) = A_0(x)U(x), \qquad (50)$$

with

$$A_0(x) := \dfrac{a_0^{1/2}(x)}{\epsilon}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} + \dfrac{\epsilon}{4\,a_0^{1/4}(x)}\left(\dfrac{a_0'(x)}{a_0^{5/4}(x)}\right)'\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},$$

$$U(0) = \begin{pmatrix} \dfrac{a_0^{1/4}(0)}{\epsilon^{1/2}} \\ -\dfrac{i\,a_0^{1/4}(0)}{\epsilon^{1/2}} + \dfrac{\epsilon^{1/2}a_0'(0)}{4a_0^{5/4}(0)} \end{pmatrix}.$$
Proceeding as in Section 3.1, we arrive at

$$W'(x) = C_0(x)W(x),$$

with

$$C_0(x) := \begin{pmatrix} 0 & f_0(x) \\ \overline{f_0(x)} & 0 \end{pmatrix}, \qquad f_0(x) := \dfrac{i\epsilon}{8\,a_0^{1/4}(x)}\left(\dfrac{a_0'(x)}{a_0^{5/4}(x)}\right)'\exp\left(-\dfrac{2i}{\epsilon}\int_0^x a_0^{1/2}(\tau)d\tau\right),$$

$$W(0) = PU(0) = \dfrac{1}{2^{1/2}}\begin{pmatrix} \dfrac{\epsilon^{1/2}a_0'(0)}{4a_0^{5/4}(0)} \\ \dfrac{2\,a_0^{1/4}(0)}{\epsilon^{1/2}} + \dfrac{i\,\epsilon^{1/2}a_0'(0)}{4a_0^{5/4}(0)} \end{pmatrix}.$$
This formulation was the basis of asymptotic-numerical methods in [8,17]. The reformulation towards the antilinear ODE simplifies the asymptotic part in that it allows dealing only with scalar quantities. Indeed, according to the results of Section 2.1, in order to solve (50), it suffices to deal with
$$u'(x) = f_0(x)\overline{u(x)}, \quad x \in (0, x_0), \qquad (51)$$

subject to $u(0) = 1$. Picard iterations yield

$$u(x) = 1 + \int_0^x f_0(\tau)d\tau + \int_0^x f_0(\tau_2)\int_0^{\tau_2} \overline{f_0(\tau_1)}\,d\tau_1 d\tau_2 + \int_0^x f_0(\tau_3)\int_0^{\tau_3} \overline{f_0(\tau_2)}\int_0^{\tau_2} f_0(\tau_1)\,d\tau_1 d\tau_2 d\tau_3 + \ldots, \quad x \in (0, x_0), \qquad (52)$$
which, due to the form of $f_0$, is an expansion in powers of $\epsilon$. Note that, even though $\epsilon$ is still present in the phase factors, the magnitude of each term is proportional to a power of $\epsilon$. Moreover, this series is not merely asymptotic but a convergent expansion on $(0, x_0)$, similar to the Bremmer series, see [2].
This illustrates how the achieved structural simplification can make tedious higher-order expansions more tractable in practice.
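The smallness of the corrections can be probed numerically. The Python sketch below integrates an antilinear ODE with a generic coefficient of the same structure, $\epsilon\,h(x)\,e^{2i\varphi(x)/\epsilon}$, where $h$ and $\varphi$ are hypothetical smooth profiles (not the specific $f_0$ above), and observes that the deviation of $u$ from its initial value is small for small $\epsilon$:

```python
import numpy as np

# generic oscillatory coefficient f0 = eps * h(x) * exp(2i*phi(x)/eps);
# h and phi are hypothetical smooth profiles chosen for illustration
h   = lambda x: 0.3*np.cos(x) + 0.1
phi = lambda x: x + 0.2*np.sin(x)

def deviation(eps, n=50001):
    # RK4 integration of u' = f0(x) conj(u), u(0) = 1; returns |u(1) - 1|
    f0 = lambda x: eps*h(x)*np.exp(2j*phi(x)/eps)
    F = lambda x, v: f0(x)*np.conj(v)
    xs = np.linspace(0.0, 1.0, n)
    u = 1.0 + 0.0j
    for x0, x1 in zip(xs[:-1], xs[1:]):
        s = x1 - x0
        k1 = F(x0, u); k2 = F(x0 + s/2, u + s/2*k1)
        k3 = F(x0 + s/2, u + s/2*k2); k4 = F(x1, u + s*k3)
        u = u + s/6*(k1 + 2*k2 + 2*k3 + k4)
    return abs(u - 1.0)

print(deviation(0.02), deviation(0.01))   # both tiny: small-eps corrections
```
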

5. Discussion and Conclusions

We have introduced a new scalar differential equation of the first order which is curious for two principal reasons. First, it is, in some sense, the simplest nonlinear ODE (with or without a nonhomogeneous term), the nonlinearity being merely the complex conjugation. Second, this equation emerges, after appropriate reduction steps, in rather different physical contexts. Certainly, many more application areas can be identified (e.g., the telegrapher's equations or the Goldstein–Taylor model, see [18]), but already the context of the linear Schrödinger equation alone is a good enough motivation to further study the antilinear ODE $u'(x) = f(x)\overline{u(x)}$. For instance, as demonstrated in Section 4, the reduction of matrix-vector manipulations to those involving only scalar quantities already provides a simplification in tedious asymptotic constructions, which could allow advancement of asymptotic-numerical methods such as those in [8,17]. Therefore, this new reformulation yields concrete practical advantages. We believe that theoretical aspects of the mentioned models could benefit from it, too. This might be achievable, for instance, through newly produced forms of the Prüfer transformation, which is typically used for studying Sturm–Liouville problems, see, e.g., ([19] Sect. 5.2). Furthermore, it is important to identify classes of functions $f$ for which the antilinear ODE can be solved in closed form. This would, in particular, yield new solvable quantum mechanical potentials and sound-speed profiles important for generating reference solutions for the verification of numerical methods for wave propagation. Exploring this direction, we note that the Kubelka–Munk model context hints at the elementary exponential class of functions $f(x)$, since system (45) with constant $K$ and $S$ is solvable explicitly.
This can be generalised further since the form of the antilinear ODE is amenable to treatment by integral transform methods (unlike other nonlinearities), which are typically compatible with an exponential function and combinations thereof. Finally, the form of the antilinear ODE calls for a study of the possible connection with d-bar problems, see, e.g., [20]. In this case, an appropriate extension of the equation to the complex plane may yield a formulation that eventually produces a closed-form solution due to the numerous constructive results on Hilbert and Riemann–Hilbert problems.

Funding

The work has been partly financed by FWF (Austrian Science Fund) project I3538-N32 through the research group of Prof. Anton Arnold (TU Wien).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author is grateful to Juliette Leblond (Inria SAM) for reading the final version of the manuscript and for giving suggestions which improved the presentation of the material.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Remling, C. Spectral Theory of Canonical Systems; De Gruyter: Berlin, Germany, 2018.
2. Berk, H.L.; Book, D.L.; Pfirsch, D. Convergence of the Bremmer series for the spatially inhomogeneous Helmholtz equation. J. Math. Phys. 1967, 8, 1611–1619.
3. Popovic, J.; Runborg, O. Analysis of a fast method for solving the high frequency Helmholtz equation in one dimension. BIT Numer. Math. 2011, 51, 721–755.
4. Winitzki, S. Cosmological particle production and the precision of the WKB approximation. Phys. Rev. D 2005, 72, 104011.
5. Bremer, J. On the numerical solution of second order ordinary differential equations in the high-frequency regime. Appl. Comput. Harmon. Anal. 2018, 44, 312–349.
6. Lorenz, K.; Jahnke, T.; Lubich, C. Adiabatic integrators for highly oscillatory second-order linear differential equations with time-varying eigendecomposition. BIT Numer. Math. 2005, 45, 91–115.
7. Christ, M.; Kiselev, A. WKB asymptotic behavior of almost all generalized eigenfunctions for one-dimensional Schrödinger operators with slowly decaying potentials. J. Funct. Anal. 2001, 179, 426–447.
8. Arnold, A.; Ben-Abdallah, N.; Negulescu, C. WKB-based schemes for the oscillatory 1D Schrödinger equation in the semiclassical limit. SIAM J. Numer. Anal. 2011, 49, 1436–1460.
9. Ablowitz, M.J.; Segur, H. Solitons and the Inverse Scattering Transform; SIAM: Philadelphia, PA, USA, 1981.
10. Grébert, B.; Kappeler, T. The Defocusing NLS Equation and Its Normal Form; European Mathematical Society: Zurich, Switzerland, 2014.
11. Kubelka, P.; Munk, F. An article on optics of paint layers. Z. Tech. Phys. 1931, 12, 259–274. Available online: http://www.graphics.cornell.edu/~westin/pubs/kubelka.pdf (accessed on 7 June 2022). (In German, with English translation).
12. Sandoval, C.; Kim, A.D. Deriving Kubelka–Munk theory from radiative transport. J. Opt. Soc. Am. A 2014, 31, 628–636.
13. Yang, L.; Kruse, B. Revised Kubelka–Munk theory. I. Theory and application. J. Opt. Soc. Am. A 2004, 21, 1933–1941.
14. Yang, L.; Kruse, B.; Miklavcic, S.J. Revised Kubelka–Munk theory. II. Unified framework for homogeneous and inhomogeneous optical media. J. Opt. Soc. Am. A 2004, 21, 1942–1952.
15. Yang, L.; Miklavcic, S.J. Revised Kubelka–Munk theory. III. A general theory of light propagation in scattering and absorptive media. J. Opt. Soc. Am. A 2005, 22, 1866–1873.
16. Choudhury, A.K.R. Principles of Colour and Appearance Measurement: Visual Measurement of Colour, Colour Comparison and Management; Woodhead Publishing: Sawston, UK, 2014.
17. Körner, J.; Arnold, A.; Döpfner, K. WKB-based scheme with adaptive step size control for the Schrödinger equation in the highly oscillatory regime. J. Comput. Appl. Math. 2022, 404, 113905.
18. Dietert, H.; Evans, J. Finding the jump rate for fastest decay in the Goldstein–Taylor model. arXiv 2021, arXiv:2103.10064.
19. Pryce, J.D. Numerical Solution of Sturm–Liouville Problems; Oxford University Press: Oxford, UK, 1993.
20. Knudsen, K.; Mueller, J.; Siltanen, S. Numerical solution method for the dbar-equation in the plane. J. Comput. Phys. 2004, 198, 500–517.

