Complex Connections between Symmetry and Singularity Analysis

Abstract: In this paper, it is noted that three apparently disparate areas of mathematics—singularity analysis, complex symmetry analysis and the distributional representation of special functions—have a basic commonality in the underlying methods used. The insights obtained from the first of these provide a much-needed explanation for the effectiveness of the latter two. The consequent explanations are provided in the form of two theorems and their corollaries.


Introduction
The shortest path between two truths in the real domain passes through the complex domain. (Jacques Hadamard, 1991)

Methods to solve linear ordinary differential equations (ODEs) were developed soon after differential calculus. However, solving nonlinear differential equations (DEs) was limited to some special cases, and no general methods were available until Sophus Lie and Paul Painlevé provided some generality by very different approaches. Lie [1] attempted to solve DEs by adapting the methods that Abel and Galois had used to resolve the issue of the solution of polynomial equations by means of radicals. Painlevé [2,3] extended the methods of Frobenius [4] for solving second-order, linear ODEs about regular, singular points to deal with more DEs. His basic new input was to take the dependent and independent variables to lie in the complex domain, $\mathbb{C}$, and allow the singularities to move off the real axis, $\mathbb{R}$. Whereas Painlevé's methods had to be applied on a case-by-case basis, with Lie's methods each symmetry can be used to reduce the number of variables in partial differential equations (PDEs) or to reduce the order of the equations, regardless of whether they are linear or not. As such, if there are enough symmetries available, the DE can be reduced from partial to ordinary and then solved by using one symmetry at a time to reduce the order down to zero. The key question of what would be "enough symmetries" was answered by Lie and others, who provided general criteria. Of course, Lie's methods cannot be applied if the equations do not have enough symmetries.
There have been various developments in symmetry analysis contributed by Lie and others that I will not go into at present. However, while Lie had assumed that the independent and dependent variables are complex, he never made explicit use of this fact. Of course, as Ali, Mahomed and Qadir (AMQ) [5,6] pointed out, the dependent variables must then be complex differentiable and, hence, complex analytic, thus satisfying the Cauchy-Riemann equations (CREs). The CREs have to be incorporated into the system of equations, so the symmetry structure is changed. It was shown [7] that if some criteria are met, there is a correspondence between two-dimensional systems of real ODEs and scalar equations of a complex dependent variable that depend on one real variable. This correspondence led [8] to solutions of two-dimensional real systems having fewer symmetries than are required for symmetry solutions of the systems, including no symmetry at all! However, it was not really clear how and why the complex procedure can provide the dramatic results it does. To be able to use all three techniques to obtain results, it is essential to obtain criteria for applying them generally. It is hoped that by identifying the commonality of the three methods, it will be possible to formulate such criteria.
The plan of the paper is as follows. In the next section, a very brief review of the relevant, salient points of singularity analysis is provided. In Section 3, symmetry analysis and the complex methods are presented, followed, in the subsequent section, by a review of the singular representation of some special functions. In Section 5, the problem of defining a complex variational principle and its resolution are discussed. In Section 6, the complex connection between them is identified, and the results are stated in the form of two theorems, each with two corollaries. A brief discussion and conclusion are given in the last section.

Review of Painlevé and Singularity Analysis
The power series method to solve linear ODEs uses a term-by-term cancellation of a power series with arbitrary coefficients. The cancellation imposes enough constraints on the coefficients so that the number of arbitrary coefficients equals the order of the ODE, $n$. The series converges if the points are regular, i.e., the coefficients of the ODE do not diverge. If some coefficients diverge at some point, $x_0$, that point is said to be singular. Writing the ODE as $\sum_{i=0}^{n} P_i(x)\,y^{(n-i)}(x) = 0$, with $P_0 = 1$, and assuming that the singular behaviour of the function can be approximated by $\alpha(x - x_0)^p$, where $p < 0$, Frobenius [4] extended the method to those singular points at which $(x - x_0)^i P_i(x)$ is regular. Such points are called regular, singular points. Staying in the real context, $p$ could even be a fraction, provided care is taken to approach $x_0$ only from above. The series generically converges in some restricted domain.
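As a concrete sketch of the leading-order step (my own illustration, not an equation from this paper), one can extract the indicial equation of Bessel's equation, a textbook example with a regular, singular point at $x = 0$, using sympy:

```python
import sympy as sp

x, nu = sp.symbols('x nu', positive=True)
r = sp.Symbol('r')

# Leading Frobenius behaviour y ~ x**r near the regular singular point x = 0
y = x**r

# Bessel's equation: x**2 y'' + x y' + (x**2 - nu**2) y = 0
expr = x**2*sp.diff(y, x, 2) + x*sp.diff(y, x) + (x**2 - nu**2)*y

# Divide out x**r and drop the higher-order x**2 term; what survives is the
# indicial equation, whose roots are the allowed leading exponents
indicial = sp.expand(sp.simplify(expr/x**r)).subs(x, 0)   # r**2 - nu**2
exponents = sp.solve(indicial, r)
print(exponents)
```

The two roots, $r = \pm\nu$, are the familiar leading exponents of the Bessel functions $J_{\pm\nu}$.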
Painlevé [2] took the natural next step of converting to the complex domain, writing the ODE as $\sum_{i=0}^{n} P_i(z)\,w^{(n-i)}(z) = 0$. A singular point forced on one by the ODE itself is called a "fixed singularity". On the other hand, in the complex domain there can also be singularities whose locations depend on the constants of integration; these are called "movable singularities", and one can search for them. Notice that the restriction on how to approach the singular point ($z_0$) no longer applies. For example, while $1/(x^2 + a^2)$ is not singular anywhere on the real line, $1/(z^2 + a^2)$ is singular at $\pm\iota a$. As such, there could be a number of singular points to expand the series about. Consequently, new solutions could be sought about each separate movable singularity. This opens the door to many more solutions than could exist in the real domain.
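A minimal illustration of a movable singularity (a standard example, not one from this paper): for $w' = w^2$, integration gives $w = -1/(z - z_0)$, a simple pole whose location $z_0$ is set by the initial condition rather than by the equation. sympy confirms the general solution:

```python
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')

# y' = y**2: the general solution is y = -1/(x + C1), with a movable
# simple pole at x = -C1, chosen by the constant of integration
ode = sp.Eq(y(x).diff(x), y(x)**2)
sol = sp.dsolve(ode, y(x))
print(sol)
```

Changing the initial condition moves the pole along the axis, which is exactly what "movable" means here.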
To follow Painlevé's procedure and its similarity to and difference from the complex methods of AMQ, it is worth looking at a simple illustrative example (given by Ramani, Grammaticos and Bountis (RGB) [9]). Consider a general first-order, nonlinear ODE (Equation (1)) and write it in parametric form in terms of the complex variable $z$ (Equation (2)). Painlevé assumed (what is now called the Painlevé property) that $f$ has only poles when viewed in the complex plane (not, for example, an essential singularity like $e^{-1/z}$). Let this ODE be singular at some $z_0$ and retain only the dominant terms for $x$ and $y$. Writing $z - z_0 = \tau$, $x = a\tau^p$, $y = b\tau^q$ ($p < 0$), there are four arbitrary parameters, $(p, q, a, b)$, to be determined. The parameters have to satisfy some constraints for the ODE to hold near the singularity. As such, though there is some freedom of choice in the values of the parameters, it is not total. One of the remaining free parameters is needed for the choice of the movable singularity.
Making the example more concrete, in Equation (2), take the form given in Equation (3), where $k$ is a given constant in the ODE. As it is a 2D system of first-order ODEs, there should be two free constants: one to locate the moving singularity and the second to give the arbitrary constant. Putting the leading terms of Equation (3) into Equation (2), there are two distinct cases: either (i) $q > p$ or (ii) $q = p$. In the first case, the $\tau^q$ term becomes irrelevant, and the first equation gives $p = -1$, $a = 1$, with no constraint on $b$; in the second case, they give the same $p$, but here, $a = -1$ and $b = 2$, so that the contributions of $x$ and $y$ in the first equation cancel. Notice that there is only one relevant, arbitrary constant to determine the position of the movable singularity in either case, which also has to be the constant of integration. Hence, the position is not determined but chosen.
Using the Laurent series to cancel the next-to-leading terms for $x$ and $y$, $c\tau^{p+1}$ and $d\tau^{q+1}$, in case (i), $c = k$, with no constraint on $d$, which is then the second constant; and in case (ii), $2d - c = k$; again, we have the second constant. Instead of the Laurent series, Painlevé's procedure puts the next terms at powers shifted by $\tau^r$ (Equation (4)). The leading terms are already cancelled, so one retains only the coefficients of the derivative of $\tau^r$, which means that the new leading term is linear in $r$. The new terms are called "resonances". Retaining only these new leading terms (as the first ones had already cancelled out) yields a matrix equation, $Q\mathbf{C} = 0$, where $\mathbf{C}$ is the vector with components $c$ and $d$. For a nontrivial solution, $\det(Q) = 0$. For case (i), this yields $r = -1, 0$. The first root does not satisfy the requirement that $r > 0$. This root is always present for autonomous systems and is not a resonance; it simply corresponds to the arbitrary constant for the first-order ODE. The second root does not alter the original solution, so it is trivial and does not give anything new. For case (ii), the values are $r = -1, 2$; here, the second root is non-trivial, and Equation (4) reduces correspondingly. If $r$ is fractional, one obtains branch points. However, it may be possible to find suitable transformations of variables in the case of fractional $r$ to make it an integer in the transformed equation. Also, one has not actually located the movable singularity in this example, as it is a free choice that corresponds to the integration constant. This is because it is an autonomous system, and no position is selected by the equation, as occurred in case (ii) for the term $k$. If it is not autonomous, it may be that, as with the Cauchy-Euler equations, a similarity transformation may be able to reduce it to autonomous form, as was done by Paliathanasis, Taves and Leach [10]. Otherwise, one might be able to deal with non-autonomous ODEs by some other transformation of variables.
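The resonance calculation can be sketched on the standard test case $w'' = 6w^2$ (my own example; the RGB system's equations are not reproduced here). The leading behaviour is $w \sim \tau^{-2}$; perturbing it by $c\,\tau^{r-2}$ and keeping only the terms linear in $c$ gives a polynomial in $r$ whose roots are the resonances:

```python
import sympy as sp

tau, r, c = sp.symbols('tau r c')

# Leading-order behaviour of w'' = 6 w**2 is w ~ tau**(-2) (p = -2, a = 1);
# perturb it by a term c*tau**(r - 2) and linearize in c
w = tau**(-2) + c*tau**(r - 2)
expr = sp.expand(sp.diff(w, tau, 2) - 6*w**2)

# Coefficient of the term linear in c, evaluated at tau = 1 to strip the
# overall power tau**(r - 4): this is the resonance polynomial in r
res_poly = sp.expand(expr.coeff(c, 1).subs(tau, 1))   # r**2 - 5*r - 6
resonances = sp.solve(res_poly, r)
print(resonances)   # roots -1 and 6
```

The root $r = -1$ is the ever-present one associated with the free position of the movable singularity; $r = 6$ is the genuine resonance at which the second arbitrary constant enters the Laurent series.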
The Painlevé procedure gives only an approximate solution near the movable singularity. One could now develop a power series solution by using this as an extension of Frobenius' method. Of course, there is no reason to restrict the system to two variables. In the example, the limitation came only because one started with a first-order scalar ODE. The procedure could be used for any n-dimensional system of ODEs, and the same search for poles and resonances could be carried out. For first-order ODEs, it turns out that the only one with the Painlevé property is the Riccati equation, which has a simple pole and can be solved more easily by a transformation of variables. One can also proceed to higher-order ODEs. At second order, it is found that there are fifty scalar ODEs with the Painlevé property of having only movable-pole singularities (see, for example, Ref. [11]). For higher-order ODEs, there is no complete classification, and it is a matter of trial and error to find ODEs with only movable-pole singularities. The purpose of this section is not to provide a primer for Painlevé analysis but to bring out the fact that it can provide solutions where other methods do not seem to work and to highlight the key required ingredient of movable-pole singularities for the system to be solvable. Of course, PDEs are not excluded, but they can only be solved by reducing them to ODEs, as is done by using transformations of variables.
It is of special interest to note that Painlevé analysis is also useful for integrating Hamiltonian systems. A Hamiltonian system, where $H[q_i(t), p_i(t)]$ is the Hamiltonian, gives the dynamical evolution of the system. It is said to be Liouville integrable if there exist $N$ constants of the motion relating the $2N$ dependent variables, $I_i(q, p) = C_i$, such that $\{I_i, I_j\} = 0$, where $\{\,,\,\}$ is the Poisson bracket defined by $\{A, B\} = \partial A/\partial q_i\,\partial B/\partial p_i - \partial A/\partial p_i\,\partial B/\partial q_i$, using the Einstein summation convention, i.e., that repeated indices are summed over.
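As a small sketch of these definitions (an illustration of mine, not an example from the paper), the 2D isotropic oscillator has the Hamiltonian and the angular momentum as two invariants in involution, so it is Liouville integrable:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')

def poisson(A, B, qs, ps):
    # {A, B} = sum_i (dA/dq_i dB/dp_i - dA/dp_i dB/dq_i)
    return sum(sp.diff(A, q)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, q)
               for q, p in zip(qs, ps))

# 2-D isotropic harmonic oscillator: H and the angular momentum L
H = (p1**2 + p2**2 + q1**2 + q2**2)/2
L = q1*p2 - q2*p1

bracket = sp.simplify(poisson(H, L, [q1, q2], [p1, p2]))
print(bracket)   # 0, i.e., {H, L} = 0: two invariants in involution
```

Since $N = 2$ and the two invariants commute under the bracket, the Liouville criterion stated above is met.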
If there exists a generating functional, $S(q, p)$, called the action, which is the time integral of the Lagrangian over a given time interval, Liouville's theorem guarantees the integrability of the system. (For completeness, I should mention that the Lagrangian is a quantity that is to be minimized over a time interval by selecting $q(t)$ and $\dot q(t)$ for this purpose. The optimality conditions give the Euler-Lagrange equations, which show that the Hamiltonian is a conserved quantity.) In that case, the system is said to be algebraically integrable, and the $N$ solutions correspond to $N$-dimensional real tori. Even if the system is not algebraically integrable, one may still be able to find solutions by using the complex domain. The solutions with real time involved polynomial functions, but for complex time, the functions can now be rational, and one can use Painlevé analysis. However, the solution space is no longer the earlier tori, as the tori are no longer real. Not only that, but they are no longer tori, as the space becomes non-compact. The question of complex time brings one to the use of complex Hamiltonians to solve problems of atomic physics [12], as the Hamiltonian corresponds to time translations. The original use had been in the context of symmetry analysis, and this is directly related to complex methods in symmetry analysis.

Review of Symmetry Analysis and Complex Methods
An object is said to be symmetric with respect to some operation if it remains invariant under the operation. For algebraic expressions of many variables, it means invariance under interchange of those variables. For geometrical objects, the operations can be translation, reflection, rotation or re-scaling. For DEs, the transformations must be not only continuous but also adequately differentiable. If the independent variable is complex, differentiability in a region guarantees complex analyticity. As such, Lie [1] assumed the variables to be complex but did not make explicit use of that analyticity. To start with, consider only scalar $n$th-order ODEs, $E(x, y, y', \ldots, y^{(n)}) = 0$. Regarding the independent and dependent variables as giving a point in a 2D space, Lie point transformations correspond to infinitesimal changes in the positions of the points of the space. Thus, the operator can be represented as a vector field in the tangent space at that point, $X = \xi(x, y)\partial/\partial x + \eta(x, y)\partial/\partial y$. To be able to apply it to the DE, the space needs to be enlarged or prolonged to an $(n + 2)$-D so-called "jet space". The corresponding prolonged symmetry generator is $X^{[n]} = X + \eta^{(1)}\partial/\partial y' + \cdots + \eta^{(n)}\partial/\partial y^{(n)}$. (7) To obtain the coefficients of the generator, one has to write the transformed coordinates as a series expansion in a small parameter. The coefficient of the linear term for $x$ is the required $\xi$, and that of $y$ is the relevant $\eta$. For the transformed variables, $y' = dy/dx$ gives the tangency condition $\eta^{(1)} = d\eta/dx - y'\,d\xi/dx = \eta_{,x} + y'(\eta_{,y} - \xi_{,x}) - y'^2\xi_{,y}$. The values of the other $\eta^{(k)}$s are obtained correspondingly. The ODE, $E$, is said to admit the symmetry generator, $X$, if $X^{[n]}E\big|_{E=0} = 0$, by which is meant that the generator acting on the algebraic function appearing on the left side of the equation annihilates it for solutions of the equation.
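These formulae can be checked mechanically. A sketch (my own toy example): the scaling generator $X = x\,\partial/\partial x + y\,\partial/\partial y$ is admitted by the ODE $E = y' - y/x = 0$, since the first prolongation coefficient vanishes and $X^{[1]}E = 0$:

```python
import sympy as sp

x, y, yp = sp.symbols('x y yp')   # yp stands for y'

# Scaling generator X = x d/dx + y d/dy for the ODE E = y' - y/x = 0
xi, eta = x, y

def D(f):
    # Total derivative d/dx + y' d/dy acting on functions of (x, y)
    return sp.diff(f, x) + yp*sp.diff(f, y)

# First prolongation coefficient: eta1 = D(eta) - y' D(xi)
eta1 = D(eta) - yp*D(xi)

E = yp - y/x
XE = sp.simplify(xi*sp.diff(E, x) + eta*sp.diff(E, y) + eta1*sp.diff(E, yp))
print(XE)   # 0: the prolonged generator annihilates E
```

Here $\eta^{(1)} = \dot y - y'\cdot 1 = 0$ identically, so the check reduces to $\xi E_{,x} + \eta E_{,y} = 0$, which holds everywhere, not just on solutions.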
There are various methods available for reducing the number of variables of the DE or reducing its order by using a symmetry. Perhaps the simplest is the construction of differential invariants. These are expressions involving the variables in the jet space, barring the highest-order terms, that remain constant under the symmetry. These can be used to write one of the highest derivatives in terms of the other variables, thereby reducing the number of variables or the order of the DE. Criteria for complete solvability are given in terms of what is called a "group classification". Since the symmetry is only identified locally, the classification is actually of the Lie algebra of the symmetry generators of the DE. Lie showed that the generators form a basis of vector fields for the tangent space of the solutions of the DE and, hence, satisfy a set of commutator relations, $[X_i, X_j] = C^k_{i,j} X_k$, where the $C^k_{i,j}$ are called structure constants. Clearly, the algebra is characterised by the complete set of structure constants.
To bring out the difference between the local symmetry of the algebra and the global symmetry of the group, consider the symmetry generators of the Euclidean plane, $so(2) \oplus_s \mathbb{R}^2$, where $so(2)$ is the generator of the rotation of the plane, each $\mathbb{R}$ is a translation and $\oplus_s$ is the semidirect sum, denoting that the product is non-commuting. Now imagine the plane wrapped into a cylinder. The symmetry of the cylinder is $so(2) \oplus \mathbb{R}$, where one of the translations has become a rotation about the axis of the cylinder and the original rotation is lost, as it yields a tilting of the cylindrical axis. Imagine a little square pinned to the cylinder at some point. Extended, this would be a tangent plane to the cylinder at that point and would possess the symmetries of the plane, but the cylinder would not possess all its symmetries. The Lie group for the cylinder is, thus, $SO(2) \times \mathbb{R}$, but that of the plane is $SO(2) \times_s \mathbb{R}^2$. The DE generators lie on the little square, which possesses all its symmetries, so the symmetries of the algebra are $so(2) \oplus_s \mathbb{R}^2$. If one wants to extend the solution well beyond the original point and, in fact, allow it to close up if it is compact, one needs other methods. In the complex domain, this leads to a change of the topology, which has to be dealt with. Nevertheless, one has compact and non-compact complex Lie groups. This is of relevance for the connection between complex methods for symmetry analysis and singularity analysis.
Of particular relevance for our purposes is the method of transforming the independent and dependent variables to transform DEs to a linear form, called linearization, thereby yielding their exact solutions. While all scalar first-order ODEs can be so transformed by Lie point transformations, this does not hold even for second-order ODEs. Lie proved that scalar second-order ODEs are linearizable (see, for example, [13]) only if they have eight symmetry generators. For the linearizability of scalar $n$th-order ODEs ($n \geq 3$), there are three classes with $(n + 1)$, $(n + 2)$ and $(n + 4)$ generators, respectively [14]. For $m$-D systems of second-order ODEs, there are $2m$ classes with $(2m + 1), \ldots, 4m$ generators, and then one class with $(m + 2)^2 - 1$ generators [15,16]. For systems of higher-order ODEs, the formula is obtained by putting the two together. For PDEs, the situation is more complicated and not relevant for the present purposes.
It was shown that there is a direct connection between geometric symmetries and systems of second-order ODEs [17-19], as the ODEs satisfied by the shortest paths between two points, i.e., geodesics, are second-order nonlinear systems. Specifically, the directions along which the metric tensor of the underlying space, $g_{ab}(x^c)$, is invariant, called isometries, are the symmetries of the geodesic equations, $\ddot x^a + \Gamma^a_{bc}\dot x^b \dot x^c = 0$, where $\dot x^a = dx^a/ds$, $s$ being the arc length parameter, and $\Gamma^a_{bc} = \frac{1}{2} g^{ad}(g_{bd,c} + g_{cd,b} - g_{bc,d})$ is called the Christoffel symbol, where $g^{ad}$ is the inverse metric tensor and the subscript comma denotes partial differentiation relative to the position vector.
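These quantities are easy to generate symbolically. A sketch (my illustration): for the flat 2D metric in polar coordinates, $g = \mathrm{diag}(1, r^2)$, the formula above gives the familiar nonzero Christoffel symbols $\Gamma^r_{\theta\theta} = -r$ and $\Gamma^\theta_{r\theta} = 1/r$, so the geodesic equations are quadratically semi-linear even though the space is flat:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.diag(1, r**2)    # flat 2-D metric in polar coordinates
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (g_{bd,c} + g_{cd,b} - g_{bc,d})
    return sp.Rational(1, 2)*sum(
        ginv[a, d]*(sp.diff(g[b, d], coords[c]) + sp.diff(g[c, d], coords[b])
                    - sp.diff(g[b, c], coords[d]))
        for d in range(len(coords)))

print(christoffel(0, 1, 1))   # Gamma^r_{theta theta} = -r
print(christoffel(1, 0, 1))   # Gamma^theta_{r theta} = 1/r
```

The geodesics of this metric are, of course, straight lines; transforming back to Cartesian coordinates makes all the Christoffel symbols vanish, which is the geometric picture behind linearization below.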
Upon using the translational invariance of the geodesic equations with respect to $s$, one can project the $m$-D system down to one of $(m - 1)$-D. While the original ODEs are quadratically semi-linear, the projected system is cubically semi-linear. Lie found that all his linearizable systems had to be cubically semi-linear and satisfy four first-order differential constraints involving two arbitrary functions. Tresse [20] eliminated the arbitrary functions and reduced the number of constraints to two by increasing their order to two. It is natural to expect that since a flat space has straight-line geodesics, the condition for linearizability would be that the curvature tensor of the manifold containing the geodesics be zero. That this is a sufficient condition for linearizability was proven in [21]. Since for a scalar second-order ODE the linearization is unique, in that case it is also a necessary condition, but there is no reason for it to be unique for higher orders or higher-dimensional systems. This issue was dealt with more generally in [22], and it was found that the Lie conditions came out automatically in projecting from 2D down to the scalar case, due to the coordinate freedom resulting in a free choice of two Christoffel symbols. The $n$-D generalization of the condition comes out automatically.
A code was constructed to convert the coefficients of the cubically semi-linear ODE to Christoffel symbols and, thereby, generate the metric tensor corresponding to the ODE [23]. An algorithm developed to write the metric of a flat space in Cartesian coordinates [24] can, thus, be used to directly obtain the linearizing transformation and, hence, to linearize the system. This was called "geometric linearization" [25]. It only yields linearizable systems with maximum symmetry, $sl(n + 2, \mathbb{R})$. For $n = 1$, this would be the only linearizable system, but even for $n = 2$, there are five classes with 5, 6, 7, 8 or 15 infinitesimal symmetry generators, of which only the last is obtained geometrically; hence, only its solution can be obtained by using codes. It would be most desirable to be able to use the power of the geometric method for the other classes and, perhaps, learn why they arose (much as one learned where the Lie conditions came from).
It is noted that explicit use of the analyticity of the dependent variable assumed by Lie would entail an apparent paradox. Splitting the variables into their real and imaginary parts would double the number of variables and generators. Also, a complex two-dimensional space corresponds to a real four-dimensional space. The maximum number of symmetry generators for the real four-dimensional space is 15, while the maximum number of complex generators is 8, the splitting of which should yield 16, not 15, generators. Omitting any one of the 8, the split system has only 14 generators instead of 15. The resolution of the paradox is that the system has to incorporate the Cauchy-Riemann equations, and so the set of symmetries is modified [5,6]. This explicit use of analyticity in the complex domain is called "complex symmetry analysis" (CSA).
A complex ODE, called a CODE, can be split into its real and imaginary parts by writing the independent and dependent variables in terms of their real and imaginary parts. Thus, for example, a scalar CODE splits into a pair of PDEs in two variables. More generally, an $n$-D system of ODEs yields a $2n$-D system of PDEs in two independent variables. To obtain ODEs from the CODE, one can instead require that the independent variable be restricted to being real, while the dependent variable remains complex. In this case, one obtains a system of $2n$ real ODEs, called RODEs. While for every CODE there is a system of RODEs, the converse is not true. However, criteria have been formulated to determine when a system of RODEs corresponds to a CODE.
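A minimal sketch of the splitting (my own toy CODE, not one from the paper): take the scalar CODE $w' = w^2$ with $w = u + \iota v$ and real $t$. Collecting real and imaginary parts yields the system of RODEs $u' = u^2 - v^2$, $v' = 2uv$:

```python
import sympy as sp

t = sp.Symbol('t', real=True)
u, v = sp.Function('u'), sp.Function('v')

# Scalar CODE w' = w**2, with w = u + i v and real independent variable t
w = u(t) + sp.I*v(t)
expr = sp.expand(sp.diff(w, t) - w**2)

# Split into real and imaginary parts (u and v are treated as real functions)
im_part = expr.coeff(sp.I)                 # v' - 2*u*v
re_part = sp.expand(expr - sp.I*im_part)   # u' - u**2 + v**2
print(re_part)
print(im_part)
```

The resulting coupled pair is the system of RODEs corresponding to this CODE; its solution is inherited from the single complex integration above.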
If the CODE is linearizable, it is not necessary that the corresponding system of RODEs be linearizable. Of course, we can obtain the complete solution of an $m$th-order linearizable $n$-D CODE involving $nm$ arbitrary constants, but the corresponding system of RODEs may not be linearizable; hence, its solution may not involve the corresponding $2nm$ constants. The use of the correspondence between linearizable CODEs and systems of RODEs to solve the problems for the systems of RODEs is called complex linearization. It yielded two of the four missing linearizable classes mentioned above [7,26,27], but the other two did not turn up there. Furthermore, there were nonlinearizable RODEs corresponding to linearizable CODEs. Nevertheless, the procedure did provide solutions to RODEs with insufficient symmetries for the purpose, and even to those that had no symmetries. The following question remained: "Why, and when, does complex linearization provide the results mentioned here?"

The Distributional Representation of Special Functions
Another development in CSA needed a formal development to make it rigorous, but at the time, it had been pursued without a proper base. That base came subsequently with some work in a totally different field: that of the so-called "special functions". These functions are normally thought of in the context of solutions of Sturm-Liouville systems, which are second-order linear ODEs with different types of boundary or initial conditions. To that extent, they seem to fit in the broad area of DEs. However, there is the gamma function, which is not of this type, and, even more, there is the Riemann zeta function, which arises in the theory of prime numbers, a far cry from DEs. The formal development arose in working with delta functions with a complex argument. To explain the context, a brief explanation of different representations of special functions is provided first; then, the discussion carries on to the problem of dealing with delta functions of a complex variable.
What exactly is a function? It can be represented in different ways: by an algorithm, algebraically, in tabular form, graphically, etc. For example, the factorial function $n! = n(n-1)\cdots 2\cdot 1$ is well defined for natural numbers ($\mathbb{N}$). Although $0!$ makes no sense as such a product, for consistency of the combinatorial notation $^nC_r = n!/r!(n-r)!$, it is defined as 1, thus adjoining 0 to the domain. In this form, there is no graphical representation. Defining it as an integral, one can extend the domain from $\mathbb{N} \cup \{0\}$ to $\mathbb{R} \setminus \{-\mathbb{N}\}$. This is the integral representation of the "generalized factorial" called the gamma function, $\Gamma(x + 1)$, which can now also be represented graphically, with infinitely many discontinuities at the negative integers. Extending the domain to $\mathbb{C} \setminus \{-\mathbb{N}\}$, the infinite discontinuities convert to simple poles, and the domain becomes connected. With this domain, it can also be regarded as an integral (Mellin) transform of $e^{-t}$, i.e., $\Gamma(x + 1) = \int_0^\infty t^x e^{-t}\,dt$, which can then be analytically continued to the entire complex plane (with poles at the negative integers). This is an integral transform representation. Throughout, it remains the same factorial function or its generalizations/extensions. Of particular interest is the Fourier transform representation (FTR) because it is easy to obtain its inverse transform, unlike for most other transforms, such as the Laplace or Mellin transforms. The Fourier and inverse Fourier transforms are defined (respectively) by $\tilde f(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)e^{-\iota kx}\,dx$ and $f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \tilde f(k)e^{\iota kx}\,dk$. At the base, there is an image of a "true function" laid out in some Platonic heaven, whose shadows are seen by its representations. (Even the name "representation" evokes this image.) Think of it in geometrical terms as the "function" defined in some manifold in the sky, of which we only see the coordinate form down on Earth. However, for the "distributional representation" to be discussed, this image no longer applies. The function has to descend from the sky and get its hands dirty by acting on other functions. It is no longer the usual function but what
is called a generalized function or a functional. A functional is normally defined as a mapping from the space of functions to that of real numbers. More generally, it is a mapping from a space of functions to a space of functions. The difference this distinction makes is one of cardinality. For transfinite numbers, the countable set ($\mathbb{N}$) has a cardinality of $\aleph_0$, and, assuming the continuum hypothesis, $\mathbb{R}$ is the power set of the natural numbers, with a cardinality of $\aleph_1 = 2^{\aleph_0}$. The space of all functions ($\mathbb{F}$) can then be regarded as the power set of $\mathbb{R}$ (see, for example, [28]), with a cardinality of $\aleph_2 = 2^{\aleph_1}$, and the space of generalized functions ($\mathbb{G}$) has a cardinality of $\aleph_3 = 2^{\aleph_2}$. The generalized function or distribution is defined by its action on "test functions" that belong to a class ($C \subset \mathbb{F}$) of "well-behaved" functions over a compact support ($K$) (see, for example, [29]). More specifically, they are defined by the inner product of the distribution with a test function over the compact support. Of course, all members of $C$ would, themselves, be distributions. However, the distributions also include, for example, the Heaviside step function, $\Theta(x - x_0)$, and the Dirac delta function, $\delta(x - x_0)$, which are not functions in the usual sense. Thus, for any $\phi(x)$, $\Theta(x - x_0)$ is defined by $\langle\Theta(x - x_0), \phi(x)\rangle = \int_{x > x_0} \phi(x)\,dx$, where the ">" is used in the sense of Pareto, i.e., each component of one vector is greater than those of the other. Similarly, $\delta(x - x_0)$ is defined by $\langle\delta(x - x_0), \phi(x)\rangle = \phi(x_0)$, provided $x_0 \in K$, and 0 otherwise. The distributional representation [30] of a function expresses it as a series of distributions, i.e., as a linear combination of a countable sequence of distributions over the field ($\mathbb{C}$). It originally arose in the use of the FTR for $\Gamma(x)$. Since the FTR must necessarily be complex, one uses $\Gamma(z)$ instead, which has simple pole singularities on the negative real axis, so the Cauchy integral formula yields a series of delta functions. In this guise, it can be seen as an operator that acts on any test function by
the inner product defined by an integral over the imaginary part of $z$. By taking the test function to be $\Gamma(z)$ itself, new identities are obtained, yielding the "norm square" ($|\Gamma(z)|^2 = \pi 2^{1-2x}\Gamma(2x)$) and the "norm fourth" ($|\Gamma(z)|^4 = 2\pi\Gamma^4(2x)/\Gamma(4x)$). There are numerous other very useful formulae obtained with this representation. (When presented at a conference, the identities created quite a stir [31].) It has also led to new identities for the Riemann zeta function (RZF) and its family [32,33], including ones for series of Dirichlet $\eta$ and $\Lambda$ functions and even of Bernoulli numbers!
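The integral representation in the preceding discussion is easy to check numerically. A crude sketch (mine, with an arbitrary cutoff and step count): midpoint quadrature of $\int_0^\infty t^x e^{-t}\,dt$ reproduces $\Gamma(x + 1) = x!$:

```python
import math

def gamma_integral(x, upper=60.0, n=200_000):
    # Midpoint-rule quadrature of the integral representation
    # Gamma(x + 1) = int_0^infty t**x e**(-t) dt, truncated at t = upper
    h = upper / n
    return sum(((i + 0.5)*h)**x * math.exp(-(i + 0.5)*h) for i in range(n)) * h

print(round(gamma_integral(5)))   # 120 = 5!
```

The truncation at `upper = 60` is safe here because the integrand decays like $e^{-t}$, so the discarded tail is negligible at machine precision.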

The Complex Variational Principle
The variational problem is to find the choice of objective functions (of the dependent variables and their derivatives) that minimizes their integral over a given interval (or domain) of the independent variable(s), subject to some constraints. The functional that is minimized is called the action. That nature (or people) follows the resulting "path" in physical (or economic) applications is called the "principle of least action". Using the method of Lagrange multipliers with the constraint functions, the resulting objective function is called the Lagrangian. The necessary condition for the extremal is that the variation of the functional (the action integral) be zero. This is the variational principle. Assuming that the Lagrangian depends only on the first derivative, there is a dual formulation that replaces the first derivative of the dependent variable(s) by a conjugate variable (or several conjugate variables) and leads to the conservation of the dual to the Lagrangian, called the Hamiltonian function. In either way of looking at the problem, one needs an extremal value. Since $\mathbb{C}$ is not an ordered set, this entails the requirement that the functional and, hence, the Lagrangian and Hamiltonian lie in $\mathbb{R}$. Of course, one could take the absolute value of the "complex functional", but that does not solve the original problem.
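The necessary condition can be made concrete with sympy's Euler-Lagrange helper (a sketch with the harmonic-oscillator Lagrangian, my choice of example): varying the action of $L = (\dot x^2 - x^2)/2$ gives $\ddot x + x = 0$:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.Symbol('t')
x = sp.Function('x')

# Harmonic-oscillator Lagrangian L = (x'^2 - x^2)/2
L = (sp.diff(x(t), t)**2 - x(t)**2)/2

# Vanishing first variation of the action gives the Euler-Lagrange equation
eqs = euler_equations(L, x(t), t)
print(eqs)   # equivalent to x'' + x = 0
```

Since $L$ has no explicit $t$-dependence, the corresponding Hamiltonian, $(\dot x^2 + x^2)/2$, is conserved along solutions, as noted earlier.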
Originally, the results of variational calculus were extended to the complex domain [6,34] without bothering with the corresponding rigorous extension of functionals. A crucial point is that for the extension for ODEs, one restricts the independent variable to $\mathbb{R}$. One needs to interpret the two components for physical and economic applications. This was done in a physical application [35]. It turns out that there has been work done on bi-Hamiltonian systems (see, for example, [36]) and even on "non-observable (i.e., non-Hermitian) Hamiltonians" [12], which are needed to explain some atomic spectra. For the economic example, there may be two independent objective functions to be minimized, subject to two sets of constraints, in which neither is given greater importance over the other. In this sense, each of the Lagrangians acts like a constraint for the other. It would be interesting to find some actual economic applications of the "bi-Lagrangian". There remained the problem of a rigorous extension of the functional from $\mathbb{R}$ to $\mathbb{C}$.
The rigorous formulation came from work on special functions arising from the distributional representation [37,38]. Notice that the space of distributions is not an inner product space, as a distribution is defined by its action on "well-behaved" functions and not on other distributions. For example, $\delta^2(x)$ is not a distribution and has no clear definition as one. Thus, the usual norm is also not defined for distributions. Essentially, this is the problem with defining a complex Lagrangian. For this purpose, it is necessary to use a generalization of the distribution, called an ultradistribution [39]. Ultradistributions are objects that act on distributions to produce distributions. The basis for this comes from the process of iterative integration, which may be regarded as "negative-order" differentiation [40]. Even $\delta'(x)$ is not directly defined as a distribution, but it can be evaluated by integration by parts when multiplied by a test function. In effect, it acts as the derivative of the test function evaluated at zero, with a sign flip. (It is worth pointing out that the space of higher-order ultradistributions has a correspondingly higher-order transfinite cardinality.) The net result is that we can rigorously deal with the bi-Lagrangian or bi-Hamiltonian as a complex Lagrangian or an "unobservable" Hamiltonian.
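The integration-by-parts rule just described can be verified symbolically (a sketch with a Gaussian test function of my choosing): $\langle\delta(x-1), \phi\rangle = \phi(1)$ and $\langle\delta'(x-1), \phi\rangle = -\phi'(1)$:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
phi = sp.exp(-x**2)    # a well-behaved test function

# <delta(x - 1), phi> = phi(1)
d0 = sp.integrate(sp.DiracDelta(x - 1)*phi, (x, -sp.oo, sp.oo))

# <delta'(x - 1), phi> = -phi'(1): integration by parts transfers the
# derivative onto the test function, with a sign flip
d1 = sp.integrate(sp.DiracDelta(x - 1, 1)*phi, (x, -sp.oo, sp.oo))

print(d0)                                              # exp(-1)
print(sp.simplify(d1 - (-sp.diff(phi, x).subs(x, 1))))  # 0
```

Note that neither `DiracDelta(x)**2` nor a product of two deltas would be accepted here, which is exactly the obstruction to a naive inner product on distributions mentioned above.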

The Complex Connection
Although complex numbers were introduced by Cardano (1501–1576), they were introduced as variables for functions by Euler (1707–1783) (see [41]). This not only extended the domain of the functions considered, but changed the very conception of a function. In particular, it led to the study of the nature of the singularities of functions and their use for evaluating contour integrals [42]. Thus, when Lie was developing his methods for solving DEs [1], he automatically took the variables and the functions to lie in the complex domain without adverting to its significance or concerning himself with the singularities of the functions. Although Frobenius used the method of expanding about regular singular points [4], he ignored the possibility of exploiting complex variables. It was only when Painlevé used complex variables and exploited the singular behaviour of the functions satisfying the DEs [2] that the full power of complex analysis could be used to solve them. What Euler did for the theory of functions and Painlevé did for Frobenius, the "complex methods" [5] try to do for Lie. As such, there should be much wisdom to be found by looking at those methods from the perspective of the two earlier developments.
The most obvious connection is the fact that the complex methods led to the solution of systems of RODEs through their correspondence to split CODEs, even though the system of RODEs was not linearizable. The question asked earlier was how that could be. Although the question was pondered at the time, no answer was forthcoming. However, the glimmering of an answer lay in the requirement that, when the CODE is split to obtain a real system, the independent variable has to remain real, as otherwise the resultant system would consist of PDEs. As such, it is necessary that the functions in the CODE be real analytic, i.e., analytic on the real axis, with any singularities lying in the complex plane off the real axis. If there are no singularities, as seen by recalling singularity analysis, no extra solution is provided. This leads to the following theorem.

Theorem 1. To give linearizable RODEs, the CODE must be real analytic, i.e., have enough singularities only in the real part of the domain where complex methods are applied and none outside it; and to provide solutions of the RODEs not available by classical methods, it must have movable singularities in that domain.
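The splitting referred to above can be illustrated by a minimal example (my own, not drawn from the cited works). Take the scalar CODE w′(x) = w², with complex dependent variable w = u + iv and the independent variable x kept real. Writing out the real and imaginary parts gives the coupled system of RODEs

```latex
u'(x) = u(x)^2 - v(x)^2, \qquad v'(x) = 2\,u(x)\,v(x),
```

which inherits its solution from the general solution w(x) = -1/(x - x_0) of the CODE, with the constant x_0 ∈ ℂ. The singularity at x = x_0 is a movable simple pole; whenever Im x_0 ≠ 0, it lies off the real axis, so the corresponding real solution (u, v) is regular for all real x, exactly as the theorem requires.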

Corollary 1.
If the CODE has enough singularities in the real domain for linearization and has some movable singularities, some of the other linearizable classes corresponding to the other movable singularities are obtained.
Corollary 2. For the solution of the RODEs to be global, the corresponding domains must be the entire ℝⁿ and ℂⁿ.
One now needs to see to what extent singularity analysis can help with understanding the "distributional representation" of special functions (as defined by Chaudhry and Qadir). Recalling that the space of distributions is not an inner product space, any inner product or norm can only be defined in some generalized sense. Such an "inner product" was defined for the distributional representation involving a series of delta functions, summed over the imaginary part of the independent variable. Obviously, this requires that there be movable singularities in the differential equation defining the special function, if any exist.
For this purpose, I give an example of the use of the DR to obtain new identities for the RZF. It is defined by its integral representation [43], which is then analytically continued to cover most of the complex plane. Using its Fourier transform representation leads to a distributional representation as a series of delta functions [32,33]. As explained earlier, this representation is only meaningful as an operator acting on a test function by integration over τ. In particular, it can act on the gamma function to yield a new identity. Taking the DR acting on the zeta function itself gives a norm in some sense. This "norm" is defined by integration over the imaginary part of the independent variable, τ, with the gamma function as a weight, leaving a function of the real part, σ, so it is not quite a norm in the usual sense. Call it a "τ-norm".
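For reference, the integral representation presumably intended by [43] is the classical one, valid for σ > 1 and then extended by analytic continuation to ℂ apart from the simple pole at s = 1:

```latex
\zeta(s) \;=\; \frac{1}{\Gamma(s)} \int_{0}^{\infty} \frac{t^{s-1}}{e^{t}-1}\, dt,
\qquad s = \sigma + i\tau,\quad \sigma > 1.
```

It is over the imaginary part τ of this variable that the DR of [32,33] acts, which is why the resulting "norm" remains a function of σ.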
A norm is generally defined for the special functions that are solutions of a Sturm-Liouville (SL) system with respect to a weight function. Treating the real part of the independent variable as a parameter, in this case, the SL system takes the standard self-adjoint form in τ, where ",τ" means "the derivative with respect to τ", and λ ∈ ℂ is the eigenvalue parameter that labels the solutions. Since p, q and λ can be chosen (not necessarily uniquely) by using the properties of the RZF, this gives a second-order ODE for the RZF in terms of the imaginary part of its independent variable. This unifies the RZF with the other special functions of mathematical physics! Here, then, is an example of how the complex connection can lead to new insights and results for special functions:

Theorem 2. If there is a DR for any special function, then there is a corresponding second-order ODE in terms of the imaginary part of the independent variable with respect to the weight function appearing in the representation.

Corollary 3. A τ-norm can be defined for any function with a DR.

Corollary 4. A second-order ODE for the gamma function in the τ sense can be obtained.
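The generic self-adjoint SL form invoked above, with σ held fixed as a parameter, would read as follows (a sketch of the standard SL equation, with the coefficient functions p, q, the weight w and the eigenvalue λ as named in the text):

```latex
\big( p(\tau)\, y_{,\tau} \big)_{,\tau} \;+\; q(\tau)\, y \;+\; \lambda\, w(\tau)\, y \;=\; 0 .
```

The associated SL norm is then ∫ |y|² w(τ) dτ, taken over the imaginary part of the independent variable, which is how the τ-norm of the DR fits into the usual SL framework.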

Conclusions and Discussion
We have seen that CSA and special functions have benefited from the insights obtained from singularity analysis. One could look for the reverse payback of those two to singularity analysis. There are 50 Painlevé types of second-order ODEs, of which 44 were solved using Lie symmetries, by linearization or quadrature, while the remaining 6 needed new transcendental functions to solve them [9]. To that extent, the payback is already there. It is possible that CSA methods using contact or higher-order symmetries could shed further light on those six types. Furthermore, CSA has been used for higher-order ODEs [44,45], so the Painlevé-type higher-order ODEs could be investigated using CSA. There is little more to be said here about the use of singularity analysis for CSA, as the new results of CSA come directly from it, and examples of its use are given in the literature cited above.
Coming to the use of DRs of special functions for singularity analysis, if they lead to DEs, those DEs will necessarily have movable singularities and, hence, lie among the Painlevé types. Once those Painlevé equations are identified, the well-known properties of the special functions can then be used for those Painlevé types. Furthermore, in the case of the RZF, it was pointed out that the equation would be obtainable for it as a function of the real part of its independent variable. However, since it would come from a movable singularity of the presumed ODE, there must be a sort of "dual" equation for the imaginary part. Furthermore, it should be possible to put the two together to give a second-order CODE for the special function of the full complex variable. This would be a line worth investigating, not only for the RZF but also for other functions that have not already been obtained by solving ODEs.

Funding:
This research received no funding.