A Note on the Appearance of the Simplest Antilinear ODE in Several Physical Contexts

We review several one-dimensional problems such as those involving the linear Schrödinger equation, the variable-coefficient Helmholtz equation, the Zakharov–Shabat system and the Kubelka–Munk equations. We show that they can all be reduced to solving one simple antilinear ordinary differential equation u′(x) = f(x)ū(x), or its nonhomogeneous version u′(x) = f(x)ū(x) + g(x), x ∈ (0, x₀) ⊂ ℝ. We point out some of the advantages of the proposed reformulation and call for further investigation of the obtained ODE.


Introduction
Many physical phenomena can be directly described by or reduced to systems of differential equations having certain structural properties. Restricting ourselves here to linear one-dimensional settings, we are concerned with a system of 2 first-order ODEs whose matrix is antidiagonal with complex-conjugate elements. Namely, given x₀ ∈ ℝ and a complex-valued function f(x), we consider the equation

U′(x) = [[0, f(x)], [f̄(x), 0]] U(x), x ∈ (0, x₀), (1)

as well as its nonhomogeneous analog

U′(x) = [[0, f(x)], [f̄(x), 0]] U(x) + G(x), x ∈ (0, x₀), (2)

where U(x) ≡ (u₁(x), u₂(x))^T ∈ ℂ² is the unknown solution-vector, G(x) ≡ (g₁(x), g₂(x))^T ∈ ℂ² is a given vector-function, and each of Equations (1) and (2) is supplemented by the initial condition U(0) = U₀ ∈ ℂ².
Here and onwards, we employ an overbar, z̄, to denote complex conjugation. Similarly to Hamiltonian, Dirac and more general canonical systems (see, e.g., [1]), Equations (1) and (2) constitute an important class of dynamical systems for two reasons. On the one hand, as we shall further see, formulations of several important problems are reducible to either (1) or (2). On the other hand, these systems are close to being exactly solvable in the following sense.

Let us focus on (1) and consider the more general system

U′(x) = [[p(x), r(x)], [s(x), q(x)]] U(x), x ∈ (0, x₀). (3)

We note that the diagonal elements in the matrix of (3) can be removed by the exponential multiplier transform. Namely, by setting one can observe that V(x) satisfies, for x ∈ (0, x₀), with the initial condition V(0) = U₀. Now, if the anti-diagonal elements of the matrix in the right-hand side of (4) are equal, i.e., then the solution can be written explicitly. However, condition (5), which amounts to assumption (7), may be too restrictive.

Indeed, in view of the multiple similarity transformations that allow rewriting (3) in different equivalent forms, we want to have a clearly identifiable matrix structure which is, on the one hand, immediately recognisable and, on the other hand, leads to a solution simplification or even an explicit solution. Such an identifiable structure may be, for example, a pairwise relation between some of the elements of the matrix in (3). The explicit solvability condition, nevertheless, plays against any visible structural property of the matrix: even though assumption (7) leads to a closed-form solution, it implies a rather complicated relation between the matrix elements. Condition (7) is very specific and thus unlikely to be satisfied for any easily describable class of matrix elements unless, of course, p ≡ q, which would then also entail that r ≡ s.
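The explicit solvability in the equal-anti-diagonal case is easy to check numerically: when the matrix of the system is purely antidiagonal with both entries equal to w(x), the matrices at different points commute, and the fundamental matrix is built from cosh and sinh of the antiderivative of w. A minimal sketch in Python (the coefficient w and the data are our own illustrative choices):

```python
import numpy as np

def rk4(rhs, y0, x1, n=2000):
    """Integrate y' = rhs(x, y) from 0 to x1 with the classical RK4 scheme."""
    h = x1 / n
    x, y = 0.0, np.array(y0, dtype=complex)
    for _ in range(n):
        k1 = rhs(x, y)
        k2 = rhs(x + h/2, y + h/2*k1)
        k3 = rhs(x + h/2, y + h/2*k2)
        k4 = rhs(x + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
        x += h
    return y

w = lambda x: 1.0 + 0.5*np.sin(x)              # illustrative anti-diagonal entry
rhs = lambda x, y: np.array([w(x)*y[1], w(x)*y[0]])
U0 = np.array([1.0, 0.5], dtype=complex)
U_num = rk4(rhs, U0, 1.0)

# Explicit solution: with W(x) = integral of w from 0 to x,
# U(x) = [[cosh W, sinh W], [sinh W, cosh W]] U0.
W = 1.0 - 0.5*(np.cos(1.0) - 1.0)              # integral of 1 + 0.5 sin t over (0,1)
M = np.array([[np.cosh(W), np.sinh(W)], [np.sinh(W), np.cosh(W)]])
U_exact = M @ U0
```

The agreement of `U_num` with `U_exact` illustrates why commutativity of the system matrices at different points is the essential ingredient.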
This exactly solvable case with equal diagonal and equal anti-diagonal elements in (3) may sometimes be valuable, but it does not seem to cover many applications. It turns out that condition (7) has an analog which is less stringent on the form of the matrix elements in (3), is more pertinent to important physical contexts and, at the same time, still leads to a significant simplification of the solution procedure (and, at least in some cases, also to closed-form solutions). This condition is given by (8). Despite the similarity to (7), condition (8) is easier to satisfy while preserving a visible matrix structure. Indeed, if p(x) − q(x) is a purely imaginary function (in particular, when p ≡ q), the complicated exponential factor in (8) disappears. In this case, the implied condition s(x) = r̄(x) is clearly identifiable but far from trivial since, as we shall see, it covers a variety of different practical applications. This reasoning motivates us to consider (1) as well as its nonhomogeneous analog (2).
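To get a first feel for the antilinear ODE u′(x) = f(x)ū(x) itself, consider a constant purely imaginary coefficient f ≡ iω (our own illustrative choice): splitting u = a + ib gives a′ = ωb, b′ = ωa, so that u(x) = cosh(ωx) + i sinh(ωx) for u(0) = 1. A short numerical check:

```python
import numpy as np

omega = 0.8
f = lambda x: 1j * omega                      # purely imaginary, constant coefficient

def rk4_scalar(rhs, u0, x1, n=4000):
    """RK4 integration of a scalar (complex) ODE u' = rhs(x, u) on (0, x1)."""
    h = x1 / n
    x, u = 0.0, complex(u0)
    for _ in range(n):
        k1 = rhs(x, u)
        k2 = rhs(x + h/2, u + h/2*k1)
        k3 = rhs(x + h/2, u + h/2*k2)
        k4 = rhs(x + h, u + h*k3)
        u += h/6*(k1 + 2*k2 + 2*k3 + k4)
        x += h
    return u

# The antilinear ODE u'(x) = f(x) * conj(u(x)) with u(0) = 1.
u_num = rk4_scalar(lambda x, u: f(x) * np.conj(u), 1.0, 1.0)
# Splitting u = a + ib gives a' = omega*b, b' = omega*a, hence
# u(x) = cosh(omega*x) + 1j*sinh(omega*x).
u_exact = np.cosh(omega) + 1j*np.sinh(omega)
```

Note the hyperbolic (rather than oscillatory) behaviour: the conjugation turns what would be a rotation for the linear ODE into growth.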
We note that, in the context of second-order differential equations, the proposed reformulation into the antilinear ODE falls somewhere between two classical approaches: reduction to a linear dynamical system and reduction to a single nonlinear ODE such as the Riccati equation. Both of these approaches can be used to practical advantage. In the first case, perhaps the most meaningful way of vectorising the one-dimensional wave equation is splitting the problem into consecutive contributions of transmitted and reflected waves as given by the Bremmer series and its analogs [2][3][4]. The computational advantage of the second approach can be achieved when the phase of the solution satisfying the nonlinear ODE is non-oscillatory, see [5].
In our approach, instead of a system, only one first-order equation has to be dealt with, whereas antilinearity is perhaps the closest one can get to the linear realm in terms of extending methods such as integral transforms and series expansions.
The plan of this note is as follows. Section 2 is dedicated to the transformation of (1) and (2) into the homogeneous antilinear ODE and its nonhomogeneous analog respectively. Next, in Section 3, we outline some relevant physical applications, i.e., problems which, upon appropriate transformations, can be recast as (1) or (2) and are thus reducible to formulations involving antilinear ODE (9), or, in one case, its nonhomogeneous version (10). Section 4 illustrates what can be achieved, thanks to the new reformulation, in one of the discussed physical contexts. Finally, in Section 5, we discuss some advantages of the proposed reformulation and conclude with some remarks on how antilinear ODEs can be constructively addressed further and hence bring advancements in the mentioned applied contexts.

Transformation of an Antidiagonal Problem into an Antilinear ODE
2.1. Homogeneous Case: From (1) to (9). We consider (1) supplemented with the initial data U(0) = U₀ ≡ (u₁⁰, u₂⁰)^T, where u₁⁰, u₂⁰ ∈ ℂ are arbitrary constants, and we devise a transformation that allows construction of the solution of system (1) in terms of the solution of an antilinear ODE of the form (9).
Let us first motivate our approach to the construction of such a transformation. To this effect, given a complex-valued function f(x), it is instructive to consider the elementary differential equation

v′(x) = f(x)v(x), x ∈ (0, x₀), (11)

with the initial condition v(0) = 1. On the one hand, (11) is in separable form and hence can be integrated directly to yield the solution

v(x) = exp(∫₀ˣ f(t) dt). (12)

On the other hand, rewriting (11) in integral form and iterating, we obtain (13). Comparison of (12) with (13) results in the important identities (14) and (15), where we used the identity exp(z) = cosh(z) + sinh(z), z ∈ ℂ, and a parity argument to split the terms: sinh is an odd function, and hence (14) may contain only an odd number of multiplicative instances of f; similarly, (15) may contain only terms with an even number of multiplications by f, due to cosh being an even function.

Now, similarly to (14) and (15), let us consider the quantities (16) and (17). Let us show that (16) and (17) are inherent to an algebraic structure underlying (1). To this effect, we rewrite (1) in the integral form (18) and, writing out the Picard iterations for solving (18), we obtain (19). Furthermore, it is easy to see from (16) and (17) that S_f(x), C_f(x) obey the intertwining relation (20) and the conditions S_f(0) = 0, C_f(0) = 1. Introducing another pair of functions, we decouple (20) as (21) and (22), two separate instances of the initial-value problem for the antilinear ODE (9). The solution of this ODE thus yields the solutions of (21) and (22) and, consequently, also of (20), providing the functions S_f(x), C_f(x) appearing in (19), which furnishes the solution of (1).
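The parity argument behind (14) and (15) can be verified directly for a constant f, where the iterated integrals reduce to powers of fx over factorials:

```python
import math

# For constant f, exp(f*x) = cosh(f*x) + sinh(f*x); sinh collects the series
# terms with an odd number of factors of f, cosh the even ones.
f, x = 0.7, 1.3
z = f * x
odd_part  = sum(z**k / math.factorial(k) for k in range(1, 40, 2))
even_part = sum(z**k / math.factorial(k) for k in range(0, 40, 2))
# odd_part matches sinh(z), even_part matches cosh(z).
```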

2.2. Nonhomogeneous Case: From (2) to (10)
Let us now consider (2) with G(x) ≡ (g₁(x), g₂(x))^T, subject to the initial condition U(0) = U₀. We are going to show that, in the particular case singled out by condition (23), the solution of (2) can be constructed in terms of the solutions of two instances of problem (10).
As we shall see in Section 3.2, assumption (23) will be satisfied in at least one important practical context. Similarly to (16) and (17), let us introduce

Rewriting (2) in the integral form
it is straightforward to see that the Picard iterations give (24) and (25). By means of differentiation of S_{f,h₂}(x) and C_{f,h₁}(x), we obtain the following intertwining relation, which is to be supplemented by the conditions C_{f,h₁}(0) = h₁(0), S_{f,h₂}(0) = 0. Note also the identities that follow from (24) and (25). Setting the new unknowns accordingly, we obtain from (29) two decoupled ODE problems, each of which is of the form (10).
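As a sanity check of the nonhomogeneous antilinear ODE (10), note that for real constant f, g and real initial data the conjugation is inert and (10) collapses to a linear ODE with a familiar closed-form solution (the parameter values below are our own illustrative choices):

```python
import numpy as np

f, g, u0 = 0.6, 0.25, 1.0                     # real constants: conjugation is inert

def rk4(rhs, y0, x1, n=4000):
    """RK4 integration of a scalar (complex) ODE y' = rhs(x, y) on (0, x1)."""
    h = x1 / n
    x, y = 0.0, complex(y0)
    for _ in range(n):
        k1 = rhs(x, y); k2 = rhs(x + h/2, y + h/2*k1)
        k3 = rhs(x + h/2, y + h/2*k2); k4 = rhs(x + h, y + h*k3)
        y += h/6*(k1 + 2*k2 + 2*k3 + k4); x += h
    return y

# u' = f*conj(u) + g; with real data u stays real, so the equation is linear:
# u(x) = (u0 + g/f)*exp(f*x) - g/f.
u_num = rk4(lambda x, u: f*np.conj(u) + g, u0, 1.0)
u_exact = (u0 + g/f)*np.exp(f) - g/f
```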
Linear Schrödinger Equation
By writing (33) in the vector form (34), we note that the first matrix in the right-hand side is diagonalisable as in (36). Consequently, introducing V(x) := PU(x), we multiply both sides of (35) by P and thus transform it into (37), with (38), supplemented by the corresponding initial condition. Therefore, (37) entails a relation which, recalling (38), we can rewrite in terms of the phase ∫ₓ^{x₀} a^{1/2}(τ) dτ, together with the corresponding initial condition. The steps described above draw from [6] (see also [7]) and provide one way to rewrite the linear Schrödinger equation in the form (1), but this approach is not the only one. Alternative reduction procedures may be more cumbersome yet more beneficial in practice, depending on the final goal. See Section 4, drawing from [8], where the initial vectorisation of (33) is different from (34) while the other steps of the transformation are conceptually similar.
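The vectorisation step can be illustrated numerically. The sketch below does not reproduce the matrix P from (36); it simply assumes a constant coefficient a (our simplifying choice, purely for the check) and solves U′ = AU arising from u″ + au = 0 via the matrix exponential of xA:

```python
import numpy as np

# u'' + a*u = 0 rewritten as U' = A U with U = (u, u')^T, constant a for the check.
a = 4.0
A = np.array([[0.0, 1.0], [-a, 0.0]])
x = 0.8

# Matrix exponential via the eigendecomposition of A (eigenvalues +/- i*sqrt(a));
# the true exponential is real, so we discard the rounding-level imaginary part.
vals, vecs = np.linalg.eig(A)
expA = (vecs @ np.diag(np.exp(vals * x)) @ np.linalg.inv(vecs)).real

U0 = np.array([1.0, 0.0])                     # u(0) = 1, u'(0) = 0
U = expA @ U0
# Exact solution: u = cos(sqrt(a) x), u' = -sqrt(a) sin(sqrt(a) x).
```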

Helmholtz Equation
Stationary problems for wave propagation in heterogeneous media are described by the Helmholtz equation, whose 1D version is given by

(α(x) u′(x))′ + β(x) u(x) = f(x), x ∈ (0, x₀). (40)

Here, α(x), β(x) > 0 are material parameters and f(x) is the source term. As in Section 3.1, we suppose that (40) is supplemented by the initial conditions u(0) = u₀, u′(0) = u₁. Furthermore, we assume that f(x), u₀, u₁ are all real-valued. This assumption does not reduce generality since (40) is linear with real-valued α(x), β(x), and hence a real-valued problem can be solved separately for the real and imaginary parts of the solution of the original equation. Setting the vector unknown accordingly, we recast (40) in the vector form (41), with the corresponding matrix, right-hand side and initial condition. We now follow reduction steps similar to those in Section 3.1. We write the matrix of (41) as a sum whose first term involves the constant matrix [[1, 0], [0, 0]], and note that we can diagonalise the first matrix with the help of the auxiliary constant matrix P introduced in (36). Denoting V(x) := PU(x), we hence have, from (41), a transformed system with the corresponding initial condition. Introducing

Equation (42) transforms into
where the matrix C(x), the vector G(x) and the initial data W(0) are defined correspondingly. Here, in relating the first and second components of the vector G(x), and similarly of W(0), we employed the real-valuedness of α(x), β(x), u₀, u₁ discussed at the beginning of this subsection.
It remains to observe that system (43) is such that the matrix C(x) and the vectors G(x), W(0) fit the assumptions discussed in Section 2.2.
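The real/imaginary splitting argument invoked above can be illustrated for a toy case α ≡ β ≡ 1 with an illustrative complex source of our own choosing: solving once in complex arithmetic agrees with solving the real and imaginary parts separately and recombining, by linearity.

```python
import numpy as np

# u'' + u = fsrc(x), integrated as the system U = (u, u')^T with RK4.
def solve(fsrc, u0, u1, x1, n=4000, dtype=complex):
    h = x1 / n
    xc, y = 0.0, np.array([u0, u1], dtype=dtype)
    rhs = lambda x, y: np.array([y[1], fsrc(x) - y[0]], dtype=dtype)
    for _ in range(n):
        k1 = rhs(xc, y); k2 = rhs(xc + h/2, y + h/2*k1)
        k3 = rhs(xc + h/2, y + h/2*k2); k4 = rhs(xc + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); xc += h
    return y

fsrc = lambda x: np.exp(1j*x)                 # complex source term
u_c  = solve(fsrc, 1.0 + 2.0j, 0.5 - 1.0j, 1.0)
# Same problem split into real and imaginary parts:
u_re = solve(lambda x: np.cos(x), 1.0, 0.5, 1.0, dtype=float)
u_im = solve(lambda x: np.sin(x), 2.0, -1.0, 1.0, dtype=float)
# By linearity, u_c = u_re + 1j*u_im.
```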

Zakharov-Shabat System
It is well-known that the solution of a spectral problem for the linear Schrödinger equation appears as an intermediate step in solving the Korteweg-de Vries (KdV) equation by the inverse scattering transform. Zakharov-Shabat systems play the same role in the integrability of other nonlinear equations, see ([9] p. 10). In particular, the Zakharov-Shabat system (44), with ξ ∈ ℝ being a spectral parameter, is a linear problem pertinent to the integration of the defocusing cubic nonlinear Schrödinger (NLS) equation subject to the initial data q(x, 0) = q₀(x). We refer to [10] for more details on this matter. We observe that by setting

the Zakharov-Shabat system (44) immediately reduces to
which is a system of the form (1).
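Since display conventions vary, the following sketch assumes one common form of the Zakharov-Shabat system, v₁′ = −iξv₁ + qv₂, v₂′ = q̄v₁ + iξv₂ (an assumption of ours, not taken from the text), and checks numerically that stripping the phases e^{∓iξx} leaves an antidiagonal system of the form (1) with f(x) = q(x)e^{2iξx}:

```python
import numpy as np

xi = 0.7
q = lambda x: 0.3*np.exp(-x**2) * (1 + 0.5j)  # illustrative potential

def rk4(rhs, y0, x1, n=4000):
    """RK4 integration of a complex 2-vector ODE y' = rhs(x, y) on (0, x1)."""
    h = x1 / n
    x, y = 0.0, np.array(y0, dtype=complex)
    for _ in range(n):
        k1 = rhs(x, y); k2 = rhs(x + h/2, y + h/2*k1)
        k3 = rhs(x + h/2, y + h/2*k2); k4 = rhs(x + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); x += h
    return y

# Assumed Zakharov-Shabat form: v1' = -i*xi*v1 + q*v2, v2' = conj(q)*v1 + i*xi*v2.
zs = lambda x, v: np.array([-1j*xi*v[0] + q(x)*v[1],
                            np.conj(q(x))*v[0] + 1j*xi*v[1]])
v = rk4(zs, [1.0, 0.5], 1.0)

# Stripping the phases, u1 = e^{i xi x} v1 and u2 = e^{-i xi x} v2 satisfy the
# antidiagonal system U' = [[0, f], [conj(f), 0]] U with f(x) = q(x) e^{2 i xi x}.
f = lambda x: q(x)*np.exp(2j*xi*x)
ad = lambda x, u: np.array([f(x)*u[1], np.conj(f(x))*u[0]])
u = rk4(ad, [1.0, 0.5], 1.0)
# Then v1 = e^{-i xi x} u1 and v2 = e^{i xi x} u2 at every x.
```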

Kubelka-Munk Equations
The Kubelka-Munk equations are a simple phenomenological model [11] for computing reflected and transmitted optical fluxes without solving the significantly more complicated radiative transfer equations. Due to their simplicity, the Kubelka-Munk equations have been popular in practice (in paper, paint and visibility contexts); they have been extensively studied from the modelling viewpoint and several generalisations have been proposed: [12][13][14][15].
We consider the following model equations

dF⁺/dx = −(K(x) + S(x)) F⁺(x) + S(x) F⁻(x),
dF⁻/dx = (K(x) + S(x)) F⁻(x) − S(x) F⁺(x), (45)

where F⁺, F⁻ are the fluxes in the positive and negative directions, and K and S are related to absorption and scattering, respectively. Note that, unlike in the classical model, we take here K, S to be dependent on the optical depth x rather than simply being constants. This generalisation is expected to be useful since constant scattering and absorption coefficients are known to be a considerable limitation of the Kubelka-Munk model, see ([16] Sect. 4.5).
The procedure of reduction of (45) to (1) is similar to that performed in Section 3.1. Therefore, we shall omit any detailed calculation.
Let us write (45) in a transformed form with P defined as in (36). Multiplying both sides of (45) by P, introducing the corresponding new unknowns and, furthermore, performing a suitable exponential substitution, we arrive at a system of the form (1).
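For constant K and S the Kubelka-Munk system is solvable explicitly via a matrix exponential. A short cross-check (with an assumed sign convention for the system, since conventions vary in the literature):

```python
import numpy as np

K, S = 0.4, 1.1                               # constant absorption/scattering
# Assumed sign convention for the Kubelka-Munk system:
#   dF+/dx = -(K+S) F+ + S F-,   dF-/dx = (K+S) F- - S F+.
A = np.array([[-(K + S), S], [-S, K + S]])

# Constant coefficients: the solution is exp(x*A) F(0); here the exponential
# is computed via the eigendecomposition of A (its eigenvalues are real).
x = 1.0
vals, vecs = np.linalg.eig(A)
expA = (vecs @ np.diag(np.exp(vals*x)) @ np.linalg.inv(vecs)).real

def rk4(y0, x1, n=4000):
    """RK4 cross-check for the same constant-coefficient system."""
    h = x1 / n
    y = np.array(y0, dtype=float)
    for _ in range(n):
        k1 = A @ y; k2 = A @ (y + h/2*k1); k3 = A @ (y + h/2*k2); k4 = A @ (y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return y

F0 = np.array([1.0, 0.2])
F_exp = expA @ F0
F_rk4 = rk4(F0, x)
```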

An Example of Application
Let us show a concrete application of the proposed reduction to the antilinear ODE. Namely, we consider a scattering problem for the stationary Schrödinger equation in the semiclassical regime:

ε² ψ″(x) + a₀(x) ψ(x) = 0, x ∈ ℝ, (48)

where 0 < ε ≪ 1 is a small parameter related to Planck's constant, a₀(x) := E − q(x), E is the prescribed energy and q is a potential. We assume that q ∈ C²(ℝ) and q(x) ≡ const for x ∈ ℝ\(0, x₀) with some x₀ > 0. Moreover, suppose that a₀(x) > 0. Boundary conditions supplementing (48) are

ψ′(0) + (i/ε)√(a₀(0)) ψ(0) = 0, ψ′(x₀) − (i/ε)√(a₀(x₀)) ψ(x₀) = −(2i/ε)√(a₀(x₀)).

The second of these conditions describes the particle incident on the potential from the right, whereas the first condition is an open boundary condition representing the absence of influx from the left of the interval (0, x₀). Due to the linearity, it suffices to solve the initial-value problem [8] subject to the initial conditions φ(0) = 1, φ′(0) = −(i/ε)√(a₀(0)). The solution to (48) with the above-mentioned boundary conditions can then be reconstructed from φ. Setting the new unknowns and proceeding as in Section 3.1, we arrive at the formulation (50) with the corresponding coefficient f₀. This formulation was the basis of the asymptotic-numerical methods in [8,17]. The reformulation towards the antilinear ODE simplifies the asymptotic part in that it allows dealing only with scalar quantities. Indeed, according to the results of Section 2.1, in order to solve (50), it suffices to deal with a series for x ∈ (0, x₀) which, due to the form of f₀, is an expansion in powers of ε. Note that, even though ε is still present in the phase factors, the magnitude of each term is proportional to a power of ε. Moreover, this series is not only asymptotic but also a convergent expansion on [0, x₀], similar to the Bremmer series, see [2].
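A hedged numerical illustration of the scattering setup: assuming the initial conditions φ(0) = 1, φ′(0) = −(i/ε)√a and a constant coefficient a₀ ≡ a (our simplifying choice), the exact solution of (48) is the right-going plane wave e^{−i√a x/ε}, of unit modulus, which a direct RK4 integration reproduces:

```python
import numpy as np

eps, a = 0.5, 1.0                             # moderate eps, constant coefficient a0

# eps^2 * phi'' + a * phi = 0 with phi(0) = 1, phi'(0) = -(i/eps)*sqrt(a);
# exact solution: phi(x) = exp(-i*sqrt(a)*x/eps), a pure right-going wave, |phi| = 1.
def rk4(x1, n=8000):
    h = x1 / n
    x = 0.0
    y = np.array([1.0, -1j*np.sqrt(a)/eps], dtype=complex)   # (phi, phi')
    rhs = lambda x, y: np.array([y[1], -(a/eps**2) * y[0]])
    for _ in range(n):
        k1 = rhs(x, y); k2 = rhs(x + h/2, y + h/2*k1)
        k3 = rhs(x + h/2, y + h/2*k2); k4 = rhs(x + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); x += h
    return y

phi, dphi = rk4(1.0)
phi_exact = np.exp(-1j*np.sqrt(a)/eps)        # exact value at x = 1
```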
This illustrates how the achieved structural simplification can make tedious higher-order expansions more tractable in practice.

Discussion and Conclusions
We have introduced a new scalar differential equation of the first order which is curious for two principal reasons. First, it is, in some sense, the simplest nonlinear ODE (either with or without a nonhomogeneous term), the nonlinearity being merely the complex conjugation. Second, this equation emerges, after appropriate reduction steps, in rather different physical contexts. Certainly, many more application areas can be identified (e.g., the telegrapher's equations or the Goldstein-Taylor model, see [18]), but the context of the linear Schrödinger equation alone is already motivation enough to study the antilinear ODE u′(x) = f(x)ū(x) further. For instance, as demonstrated in Section 4, the reduction of matrix-vector manipulations to those involving only scalar quantities already provides a simplification in tedious asymptotic constructions, which could allow advancement of asymptotic-numerical methods such as those in [8,17]. Therefore, this new reformulation yields concrete practical advantages. We believe that theoretical aspects of the mentioned models could benefit from it, too. This might be achievable, for instance, through newly produced forms of the Prüfer transformation, which is typically used for studying Sturm-Liouville problems, see, e.g., ([19] Sect. 5.2). Furthermore, it is important to identify classes of functions f for which the antilinear ODE can be solved in closed form. This would, in particular, yield new solvable quantum-mechanical potentials and sound-speed profiles important for generating reference solutions for the verification of numerical methods for wave propagation. Exploring this direction, we note that the Kubelka-Munk context hints at the elementary exponential class of functions f(x), since system (45) with constant K and S is solvable explicitly.
This can be generalised further since the form of the antilinear ODE is amenable to treatment by integral transform methods (unlike other nonlinearities), which are typically compatible with exponential functions and combinations thereof. Finally, the form of the antilinear ODE calls for a study of its possible connection with d-bar problems, see, e.g., [20]. In this case, an appropriate extension of the equation to the complex plane may yield a formulation that eventually produces a closed-form solution, owing to the numerous constructive results on Hilbert and Riemann-Hilbert problems.
Funding: The work has been partly financed by FWF (Austrian Science Fund) project I3538-N32 through the research group of Prof. Anton Arnold (TU Wien).
Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.