Econophysics and Fractional Calculus: Einstein’s Evolution Equation, the Fractal Market Hypothesis, Trend Analysis and Future Price Prediction

This paper examines a range of results that can be derived from Einstein's evolution equation, focusing on the effect of introducing a Lévy distribution into the evolution equation. In this context, we examine the derivation (obtained exclusively from the evolution equation) of the classical and fractional diffusion equations, the classical and generalised Kolmogorov–Feller equations, the evolution of self-affine stochastic fields through the fractional diffusion equation, the fractional Poisson equation (for the time-independent case), and a derivation of the Lyapunov exponent and volatility. In this way, we provide a collection of results (which includes the derivation of certain fractional partial differential equations) that are fundamental to the stochastic modelling associated with elastic scattering problems, obtained under a unifying theme, i.e., Einstein's evolution equation. This includes an analysis of stochastic fields governed by a symmetric (zero-mean) Gaussian distribution and by a Lévy distribution characterised by the Lévy index γ ∈ [0, 2], and the derivation of two impulse response functions for each case. The relationship between non-Gaussian distributions and fractional calculus is examined, and applications to financial forecasting under the fractal market hypothesis are considered, the reader being provided with example software functions (written in MATLAB) so that the results presented may be reproduced and/or further investigated.

Mathematics 2019, 7, 1057; doi:10.3390/math7111057


Introduction
We study one of the principal field equations in statistical mechanics, namely, Einstein's evolution equation (EEE or E 3 ). This is done in order to derive mathematical models, and thereby specific financial indices, in a unified manner, an approach which includes the use of fractional calculus. E 3 models the random motion and (elastic) interactions of a canonical ensemble of particles. It provides a description for the time evolution of the spatial density field that represents the concentration of such particles in a macroscopic sense. In an n-dimensional space, each particle is taken to be undergoing a random walk in which the direction that a particle "propagates" after a "scattering event" (in which energy and momentum are conserved) is random, as is the length of propagation. The scattering angle θ is taken to conform to a distribution of angles Pr[θ(r)], r ∈ R^n, and the (free) propagation length is taken to conform to some distribution of lengths Pr[L(r)] whose mean value defines the mean free path (MFP). This was the basis for Albert Einstein's original study of Brownian motion in 1905 [1], albeit for the one-dimensional case.
In addition to the work of Josiah Gibbs, the evolution equation that Einstein derived is one of the foundations of statistical mechanics [2,3]. The approach can, for example, be applied equally well to modelling the diffusion of light propagating through a complex of scatterers. In this case, the light is taken to be a ray-field in which each ray (reflected from one particle to another) has a random path length and scattering angle.

Focus and Context
The focus of this paper is to derive a range of equations and metrics via an n-dimensional version of E 3 in order to demonstrate an inherent connectivity and association in a unified sense. These equations include the classical diffusion equation, the classical and generalised Kolmogorov-Feller equations and the evolution of self-affine stochastic fields through the fractional diffusion equation. The fractional form of these equations is shown to be a direct consequence of introducing non-Gaussian distributions as "governors" for the statistical characteristics under which random processes occur, subject to the condition that all such processes involve independent elastic interactions.
For certain non-Gaussian models such as Lévy processes, this leads naturally to the use of fractional calculus to develop solutions to the evolution equation as studied in this paper. Further, it is shown that such solutions are fundamental to the application of the fractal market hypothesis [4] for analysing financial time series and thereby in developing trading strategies based on this hypothesis. This approach represents an Econophysics methodology in which a fundamental model used to describe stochastic processes, originally developed in the study of Brownian motion, is used to solve problems in economics. In this paper, following developments published previously by Blackledge et al. (e.g., [5][6][7][8][9][10][11][12][13][14]), it is shown that this approach is inclusive of the application of fractional calculus.

Structure and Organisation
The structure of the paper is as follows. Section 2 provides a brief overview of the principal mathematical results used in this paper including basic definitions and notation. This section also includes a short introduction to fractional calculus, specifically some of the conventional definitions of a fractional integral and a fractional derivative. Section 3 presents E 3 upon which all the results derived in this paper are ultimately dependent, thereby providing a unifying framework for the work reported as discussed in Section 4, which provides a brief introduction to financial time series analysis in the context of E 3 .
Two equations that are a conditional representation of E 3 are considered in Section 5, namely, the Classical Kolmogorov-Feller and the Generalised Kolmogorov-Feller equations, which are studied later on in the paper, specifically in Section 14. In the context of E 3 , Section 6 provides statements on the random walk hypothesis and the efficient and fractal market hypotheses, coupled with a brief history associated with the development of such hypotheses for interpreting and analysing financial time series. As discussed in Section 3, E 3 is predicated on a model for the probability density function of a stochastic system using a continuous random walk model, and Section 7 therefore introduces density functions whose basic properties are important to appreciate in the context of the work reported here. Sections 8 and 9 study the derivation from E 3 of two metrics, namely, the Lyapunov exponent and the volatility, respectively. These metrics are then combined into a Lyapunov-to-volatility ratio (LVR) to develop a trend analysis algorithm which is presented in Section 10, the idea being to provide an indicator that flags when a financial time series changes its trending behaviour. This is based on a change in the polarity of the LVR, and it is shown, for example, that in order to obtain accuracies appropriate for algorithmic trading, both pre-filtering (of the financial signal) and post-filtering (of the LVR) are required. This is quantified in Section 10 using a back-testing strategy. In addition to being bi-polar, the amplitude of the LVR has values that reflect periods of relative stability in the dynamic behaviour of a financial signal, and in Section 11, a method is proposed to exploit this indication and provide short term predictions on future prices using the principles of evolutionary computing (EC). In this paper, EC is implemented using an online resource and applications package called 'Eureqa'.
The remaining sections of the paper deal with the classical and fractional diffusion equations, both of which are derived from E 3 in Sections 12 and 13 using Gaussian and non-Gaussian (Lévy) distributions, respectively. In the latter case, using the principles of fractional calculus established in Section 2, a time series model is developed that depends upon the Lévy index. Section 14 then provides a complementary approach to deriving similar results using the Generalised Kolmogorov-Feller equation and an orthonormal memory function, which yields the same scaling properties compounded in the impulse response function. The application of this index for financial trend analysis is provided in Section 15, illustrating that the Lyapunov exponent and the Lévy index have similar predictive power, provided the data is pre- and post-filtered. Section 16 provides a review and discussion of the results presented, followed by a general conclusion and some open questions to direct future research.

Original Contributions
Judging from the open literature, and, to the best of the authors' knowledge, the approach taken in this paper is original as are the numerical results presented. In regard to the latter case, an effort has been made by the authors to integrate important numerical functions with the derivation of certain important metrics associated with the theoretical models used and the mathematical analysis presented. These functions are given in Appendix A and their aim is to provide the reader with the opportunity to reproduce the results presented (the online data sources being referenced throughout) and investigate their performance for different financial data.

Mathematical Preliminaries
In this section, we provide a short overview of some of the mathematical results that are of importance to the material developed in this paper, specifically the short introduction to fractional calculus provided in Section 2.3.

Fourier Transformation and the Convolution Integral
The mathematical models developed in this paper rely on the properties of the Fourier transform coupled with the convolution and correlation integrals in n-dimensions. For a square integrable function f(r) ∈ L^2(R^n) : C → C, we define the Fourier and inverse Fourier transforms in the "non-unitary form" as

F(k) = ∫ f(r) exp(−ik · r) d^n r and f(r) = (1/(2π)^n) ∫ F(k) exp(ik · r) d^n k, (1)

respectively. Here, r is the n-dimensional spatial vector where r ≡ |r| = (r_1^2 + r_2^2 + ... + r_n^2)^{1/2}. Similarly, k is the spatial frequency vector where k ≡ |k| = 2π/λ for wavelength λ and k · r = k_1 r_1 + k_2 r_2 + ... + k_n r_n. These integral transforms define a Fourier transform pair which, in this paper, we write using the notation F(k) ↔ f(r).
We define the (n-dimensional) Dirac delta function as

δ^n(r) = (1/(2π)^n) ∫ exp(ik · r) d^n k, (2)

where

f(r) = δ^n(r) ⊗ f(r),

with ⊗ denoting the convolution integral, the convolution of two functions f(r) and g(r) being given by

s(r) = g(r) ⊗ f(r) ≡ ∫ g(r − r') f(r') d^n r',

where [s(r), g(r), f(r)] ∈ L^2(R^n) : C → C. Note that the dimension associated with the integral operators ⊗ and ∫ is taken to be inferred from the dimension of the functions to which these operators are applied. In addition, note that, strictly speaking, the Fourier transform is taken over a Schwartz tempered distributional space, and, in this context, the following theorems are fundamental:

g(r) ⊗ f(r) ↔ G(k)F(k), (3)

g(r)f(r) ↔ (1/(2π)^n) G(k) ⊗ F(k), (4)

where G(k) ↔ g(r) and F(k) ↔ f(r).
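The convolution theorem in Relationship (3) is the workhorse for much of what follows. As an illustrative aside (the paper's own reference implementations, in MATLAB, are given in Appendix A), its discrete (circular) analogue can be verified with a short Python sketch; the grid size and test functions below are arbitrary choices:

```python
import numpy as np

# Discrete (circular) analogue of the convolution theorem:
# g ⊗ f in the spatial domain equals G·F in the Fourier domain.
N = 256
x = np.linspace(-10.0, 10.0, N, endpoint=False)
f = np.exp(-x**2)            # Gaussian test function
g = np.exp(-np.abs(x))       # double-sided exponential test function

# Direct circular convolution, O(N^2) ...
conv_direct = np.array(
    [sum(f[n] * g[(m - n) % N] for n in range(N)) for m in range(N)]
)
# ... versus multiplication of the DFTs, O(N log N).
conv_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

assert np.allclose(conv_direct, conv_fft, atol=1e-8)
```

Because the FFT implements circular convolution, the identity holds to floating-point accuracy; for a linear (aperiodic) convolution the arrays would need to be zero-padded first.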

The p-Norm and the Uniform Norm
We define the p-norm as

‖f(r)‖_p = ( ∫ |f(r)|^p d^n r )^{1/p}, 1 ≤ p < ∞,

with the uniform norm being given by

‖f(r)‖_∞ = sup{|f(r)|, r ∈ R^n},

the principal properties being the triangle (Minkowski) inequality ‖f + g‖_p ≤ ‖f‖_p + ‖g‖_p and the Hölder inequality ‖fg‖_1 ≤ ‖f‖_p‖g‖_q, where 1/p + 1/q = 1.

Fractional Integrals and Differentials
Since, for n = 0, 1, 2, ...,

d^{±n} f(x)/dx^{±n} ↔ (ik)^{±n} F(k),

we can, in principle, generalise this result to the case when n is non-integer. Thus, suppose we wish to fractionally integrate the differential equation

d^α f(x)/dx^α = g(x), α > 0,

to obtain a solution for f(x) in terms of g(x). Fourier transforming, (ik)^α F(k) = G(k), so that F(k) = (ik)^{−α} G(k), and thus, using the convolution theorem, we can write

f(x) = h(x) ⊗ g(x), h(x) = x^{α−1}/Γ(α) ↔ (ik)^{−α}.

This important result is easily derived by expressing the inverse Fourier transform in terms of a Bromwich integral so that, with p = ik, we can write h(x) in terms of the inverse Laplace transform

h(x) = (1/2πi) ∫_{c−i∞}^{c+i∞} exp(px) p^{−α} dp.

Generalising the Laplace transform of the function x^n (for positive integer n), given by

∫_0^∞ x^n exp(−px) dx = n!/p^{n+1},

to non-integer orders, x^{α−1} ↔ Γ(α)/p^α, from which h(x) = x^{α−1}/Γ(α) follows. This expression for f(x) in terms of the convolution h(x) ⊗ g(x) is the basic fractional integral known as the Riemann-Liouville integral which, specifying the limits of integration, takes the form

aD_x^{−α} f(x) = (1/Γ(α)) ∫_a^x f(y)/(x − y)^{1−α} dy, α > 0, (5)

thereby expressing the integral in terms of an inverse differential operator D^{−α} over the limits a and x. This allows us to express a fractional differential, denoted by the operator aD_x^α ≡ d^α/dx^α, in terms of a fractional integral by noting that

aD_x^α f(x) = (d^m/dx^m) aD_x^{−(m−α)} f(x), m − 1 < α ≤ m, (6)

for integer m. When α is a negative value, and noting that Γ(1 + α) = αΓ(α), Equation (6) recovers the expression for a fractional integral given by Equation (5). Thus, combining the results, we can write (for non-integer α)

aD_x^α f(x) = (1/Γ(−α)) ∫_a^x f(y)/(x − y)^{1+α} dy. (7)

Since, for some scaling value λ (and with z = λy),

(1/Γ(α)) ∫_a^x f(λy)/(x − y)^{1−α} dy = λ^{−α} (1/Γ(α)) ∫_{λa}^{λx} f(z)/(λx − z)^{1−α} dz,

this operator has the self-affine scaling characteristic

aD_x^{−α} [f(λx)] = λ^{−α} [aD_z^{−α} f(z)]_{z=λx}. (8)

Another related approach to defining a fractional differential is through application of the delta function. For x ∈ R^1,

d^n f(x)/dx^n = δ^{(n)}(x) ⊗ f(x), δ^{(n)}(x) = (1/2π) ∫ (ik)^n exp(ikx) dk.

Generalising this result to the non-integer case, we write

d^α f(x)/dx^α = δ^{(α)}(x) ⊗ f(x),

where, from Equation (1),

δ^{(α)}(x) = (1/2π) ∫ (ik)^α exp(ikx) dk.

We can then write

d^α f(x)/dx^α = (1/2π) ∫ (ik)^α F(k) exp(ikx) dk.

A further definition of a fractional differential can be obtained using the sign function sgn(x) where, given that

sgn(x) ↔ 2/(ik),

for α = −1 we can write

d^{−1} f(x)/dx^{−1} = (1/2) sgn(x) ⊗ f(x).

This result becomes clear if we note that the unit step function is given by H(x) = [1 + sgn(x)]/2 with ∫_{−∞}^x f(y) dy = H(x) ⊗ f(x), and therefore that convolution with sgn(x)/2 differs from integration only by a constant. Defining a fractional differential and integral in terms of the operators aD_x^α and aD_x^{−α}, respectively, is based on a generalisation of the Fourier transform under differentiation and integration, respectively. Traditional (integer) calculus goes hand-in-hand with a geometrical interpretation of the associated operations, starting with a differential defining the gradient of a function at a point (at least for a piecewise continuous function). With fractional calculus, generalisations of this type do not easily lend themselves to a geometrical interpretation.
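As a numerical aside (not part of the MATLAB function set in Appendix A), the Riemann-Liouville integral of Equation (5) and the scaling property of Equation (8) can be checked in Python against the closed-form result 0D_x^{−α} x = x^{1+α}/Γ(2 + α); the quadrature scheme and sample values below are illustrative choices:

```python
import numpy as np
from math import gamma

def rl_integral(f, x, alpha, n=20000):
    # Riemann-Liouville fractional integral 0D_x^{-alpha} f(x):
    # (1/Gamma(alpha)) * int_0^x f(y) (x - y)^(alpha - 1) dy.
    # The substitution x - y = s**(1/alpha) absorbs the weak kernel
    # singularity, leaving a smooth integrand for the midpoint rule.
    s = (np.arange(n) + 0.5) * (x**alpha / n)
    integrand = f(x - s**(1.0 / alpha))
    return integrand.sum() * (x**alpha / n) / (alpha * gamma(alpha))

alpha = 0.5
f = lambda y: y                       # test function f(y) = y
num = rl_integral(f, 1.0, alpha)
exact = 1.0 / gamma(2.0 + alpha)      # 0D_x^{-alpha} y = x^(1+alpha)/Gamma(2+alpha) at x = 1
assert abs(num - exact) < 1e-4

# Self-affine scaling (Equation (8)): D^{-alpha} f(lambda x) picks up lambda^{-alpha}.
lam = 2.0
lhs = rl_integral(lambda y: f(lam * y), 1.0, alpha)
rhs = lam**(-alpha) * rl_integral(f, lam * 1.0, alpha)
assert abs(lhs - rhs) < 1e-4
```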
However, geometric and physical interpretations of fractional derivatives have been developed (e.g., [16,17]), including the connectivity between fractional calculus and fractal geometry [18], which is based on the scaling relationship compounded in Equation (8). An important characteristic of these interpretations that is relevant to the remit of this paper is that the operator aD_x^{−α} operating on a stochastic function is characterised by this scaling property, a property that yields self-affine stochastic fields, or random scaling fractals.
As discussed later on in this paper, many financial signals can be classified as random scaling fractal signals (with a fractal dimension D ∈ [1, 2]). This is the basis for the fractal market hypothesis in mathematical economics and hence for the applications of fractional calculus. Note, however, that the "process" of generalising the Fourier transform used above for defining fractional differentials and integrals is just one such generalisation that can be applied. Thus, the operators defined by Equations (5) and (6), for example, are not unique, and there are many definitions and generalisations of a fractional derivative that have been developed [19] and continue to be so [20].
Although there are, in principle, an unlimited number of definitions that may be "designed" to define a fractional derivative, there is a common theme to all of them, which is that they are expressed in terms of a convolution. For example, the Caputo fractional derivative is given by (for 0 < α < 1)

C aD_x^α f(x) = (1/Γ(1 − α)) ∫_a^x f'(y)/(x − y)^α dy,

which is easily formulated via application of the inverse Fourier transform given that if

f'(x) ↔ ikF(k) ≡ G(k),

then, from Relationship (3),

(ik)^{−(1−α)} G(k) ↔ [x^{−α}/Γ(1 − α)] ⊗ f'(x),

and hence

(ik)^α F(k) = (ik)^{−(1−α)} G(k) ↔ (1/Γ(1 − α)) ∫ f'(y)/(x − y)^α dy.

The results considered here are fundamental to the implementation of fractional calculus in econophysics (and physics in general) as they are predicated on the Fourier transform, which arguably plays the most pivotal role of all in so many aspects of physics and especially in the analysis and processing of signals, e.g., [21,22], including financial signals.
Irrespective of the non-unique definition of a fractional derivative, there is one fundamental difference between a classical and a fractional derivative, which is characterised by Equation (7), for example. An nth order derivative of a piecewise continuous function f(x) can be defined at a single point x_0, say, and is independent of any other values of f(x) for x < x_0 or x > x_0. However, given that a fractional derivative involves the convolution of the function f(x) with 1/x^{1+α}, for example, its value at a point x_0 depends on prior values of f(x) for x < x_0. Thus, the value of a fractional derivative of f(x) depends on its "history", and, unlike an integer derivative, a fractional derivative therefore incurs "memory". This "memory effect" is another way of approaching the analysis of financial signals using fractional calculus, as financial signals are influenced by the memory of past financial conditions, albeit within a stochastic context. This is a key element in the analysis of financial signals using fractional calculus and a fundamental component of applications of the fractal market hypothesis (as discussed later on in this paper).
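The "memory effect" described above can be made concrete with the Grünwald-Letnikov (GL) approximation, in which a fractional derivative is a discrete convolution with slowly decaying binomial weights; the Python sketch below (step size and perturbation index are arbitrary choices) contrasts this with the strictly local integer derivative:

```python
import numpy as np
from math import gamma

def gl_derivative(f_vals, alpha, h):
    # Grunwald-Letnikov fractional derivative at the last sample point,
    # using the recursive binomial weights w_0 = 1, w_k = w_{k-1}(k-1-alpha)/k.
    m = len(f_vals) - 1
    w = np.empty(m + 1)
    w[0] = 1.0
    for k in range(1, m + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return h**(-alpha) * np.sum(w * f_vals[::-1])

h = 0.01
x = np.arange(0.0, 1.0 + h / 2, h)
f = x**2

d_half = gl_derivative(f, 0.5, h)   # fractional derivative at x = 1
d_one = gl_derivative(f, 1.0, h)    # reduces to the backward difference

# For alpha = 1 the weights are (1, -1, 0, 0, ...): the derivative is local,
# so perturbing an early sample leaves it unchanged; the fractional
# derivative, a convolution with slowly decaying weights, "remembers" it.
f2 = f.copy()
f2[10] += 1.0
assert abs(gl_derivative(f2, 1.0, h) - d_one) < 1e-9
assert abs(gl_derivative(f2, 0.5, h) - d_half) > 1e-3

# Accuracy check against the analytic result D^{1/2} x^2 = 2x^{3/2}/Gamma(5/2).
assert abs(d_half - 2.0 / gamma(2.5)) < 0.05
```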

Einstein's Evolution Equation
Let p(r) denote a probability density function (PDF), where ∫ p(r) d^n r = 1, which characterises the position of particles in an n-dimensional space r ∈ R^n where, at any instant in time t, the particles exist as a result of some "random walk" generated by a sequence of "elastic scattering" processes (with other like particles in the same n-dimensional space) that have occurred over some period of time < t. Further, let u(r, t) denote the density function associated with a canonical ensemble of particles all undergoing the same random walk process (i.e., the number of particles per unit space, e.g., per unit volume for n = 3).
Consider the initial condition where we have an infinitely small concentration of such particles at a time t = 0 located at the origin r = 0. The density function at t = 0 is then given by u(r, 0) = δ^n(r), where δ^n(r) is the n-dimensional Dirac delta function. At some short time later, t = τ << 1, it can be expected that the density function will be determined by the PDF governing the distribution of particles after a (short duration) random walk. Thus we can write u(r, τ) = p(r) ⊗ u(r, 0) = p(r) ⊗ δ^n(r) = p(r), where ⊗ denotes the convolution integral over all r. The PDF p(r) therefore represents the response (in a statistical sense) associated with a short time random walk process, and, in this context, can be considered to be a statistical impulse response function (IRF). Thus, for any time t, the density field at some later time t + τ will be given by

u(r, t + τ) = p(r) ⊗ u(r, t). (10)

For any instant in time t, Equation (10) shows that the spatial behaviour of the density field at some future time τ is given by the convolution of the density of particles at a previous time with the PDF of the system that governs its "statistical evolution". In this sense, p(r) is analogous to the IRF of a linear stationary system when, for an initial condition u_0(r) ≡ u(r, t = 0), say,

u(r, t) = g(r, t) ⊗ u_0(r),

where g(r, t) is the characteristic Green's function of the system. However, in this case u(r, t) denotes a deterministic function associated with the behaviour of a deterministic system, whereas in Equation (10), u(r, t) is the density function associated with the evolution of a statistical system. This "system" is taken to be stationary in a statistical sense because it is assumed that p(r) does not vary in time, and the time evolution model given by Equation (10) is referred to as being "Ergodic". Further, we note that if the PDF is symmetric, then p(r) = p(−r).
Equation (10) is Einstein's evolution equation (E 3 ). It is a "master equation" for elastic scattering processes in statistical mechanics and is an example of a continuous time random walk model. On the basis of Equation (10), one can derive a variety of stochastic field equations as shall be shown later on in this paper.
In regard to the continuous time random walk model given by Equation (10), p(r) is the PDF for the displacement r of a particle's position over a time interval τ. The equivalent discrete time random walk model then takes the form

u(r_m, t_n + τ) = p(r_m) ⊗ u(r_m, t_n),

where r_m and t_n are discrete vector and scalar arrays, respectively, and ⊗ denotes the convolution sum. In this case, τ is a fixed time step and, in the context of the work reported in this paper, may be considered to be a time-unit for financial markets, i.e., a minute, hour, day or week associated with a price value u(t_n).
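A minimal numerical sketch of this discrete evolution (in Python; the paper's own functions, in MATLAB, are given in Appendix A) iterates the convolution from a delta-function initial condition; for a Gaussian PDF of variance σ², n steps yield a density of variance nσ², the classical diffusion signature:

```python
import numpy as np

# Discrete Einstein evolution u_{n+1} = p ⊗ u_n in one dimension,
# starting from a delta-function concentration at the origin.
N = 1024
x = np.arange(N) - N // 2
sigma = 2.0
p = np.exp(-x**2 / (2.0 * sigma**2))
p /= p.sum()                         # normalise: p is a PDF

u = np.zeros(N)
u[N // 2] = 1.0                      # u(r, 0) = delta
steps = 50
P = np.fft.fft(np.fft.ifftshift(p))  # kernel re-centred at index 0
for _ in range(steps):
    u = np.real(np.fft.ifft(np.fft.fft(u) * P))  # circular convolution

mean = np.sum(u * x)
var = np.sum(u * x**2) - mean**2
# n independent elastic scatterings give variance n * sigma^2:
assert abs(var - steps * sigma**2) / (steps * sigma**2) < 0.01
```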
For a source function s(r, t) (a source density), which may be a stochastic function, the evolution equation is

u(r, t + τ) = p(r) ⊗ u(r, t) + s(r, t). (11)
This equation describes the evolution of the density function u(r, t) when the initial particle concentration is replenished in space and/or time, and can be extended further to include a decay factor over time when it is required to consider an evolution equation of the type (for decay rate factor R)

u(r, t + τ) = p(r) ⊗ u(r, t) + s(r, t) − Ru(r, t).

The financial time series models and metrics that are considered in this paper are all derived from Equation (11) and, for this reason, a short introduction to financial time series analysis is provided in the following section. This is necessary for readers to appreciate the focus of the application that is considered in this paper.

Financial Time Series Analysis
A financial time series is a discrete set of price values, most commonly regular samples over a specific time interval (minutes, hours, days, etc.), which depend on the financial price index available (e.g., world-wide indices such as the FTSE100, S&P 500, FOREX, etc.). Over longer time intervals, the price index is usually an average of the samples taken over the next smallest time interval. Most financial data is available as a time series, and therefore developing mathematical models (both linear and non-linear) of time series data is an essential component underpinning many aspects of mathematical finance, leading to algorithms for day-to-day trading, forecasting and econometrics in general.
There are numerous internet resources that provide up-to-date and historical data of different indices over different time scales such as the data available at [23] which is the internet source used to access the data presented in this paper. Similarly, there are numerous "metrics" (also called a financial index) which are the result of processing samples of data over a look-back window of a specified length usually known as the "period". Such metrics range from statistical metrics based on an autoregressive moving average and nonlinear locally non-constant variance models (applicable to volatile financial returns, interest, exchange rates and futures) through to descriptive techniques for various features, such as long term level fluctuations and distributions, short and long memory dependence, directionality and volatility.
Methods of fitting time series models to time series data and their statistical validation determine the application to which they can (or otherwise) be successfully applied to forecasting, systematic trading, fund manager evaluation, hedging and simulation for example. The online resource 'Investopedia' [24] provides descriptions, computational algorithms and examples of the numerous metrics, indices and other parameters that have, and are continuing to be, developed for financial time series analysis.
In this paper, continuous time series models for a financial signal denoted by u(t) are derived exclusively from Equation (11), the associated discrete time series model being denoted by u n , n = 1, 2, ..., N which is taken to describe a digital financial signal consisting of N elements.

Einstein's Evolution Equation and the Kolmogorov-Feller Equations
The Classical and Generalised Kolmogorov-Feller Equations can be derived directly from E 3 through application of a Taylor series in time and a memory function (in time), respectively. They are in fact representations of E 3 for the case when τ << 1 and otherwise, respectively, as shall now be shown, both equations being studied later on in this paper.

The Classical Kolmogorov-Feller Equation
Consider the following Taylor series for the function u(r, t + τ) in Equation (10):

u(r, t + τ) = u(r, t) + τ ∂u(r, t)/∂t + O(τ²), (12)

and from Equation (10), we obtain the classical Kolmogorov-Feller equation (CKFE) [25,26]

τ ∂u(r, t)/∂t = −u(r, t) + p(r) ⊗ u(r, t), (13)

which is essentially a representation of Equation (10) for τ << 1. Equation (13) is based on a critical assumption, which is that the time evolution of the density field u(r, t) is influenced only by short term events and that longer term events have no influence on the behaviour of the field at any time t, i.e., the "system" described by Equation (13) has no "memory". This statement is the physical basis upon which the condition τ << 1 is imposed, thereby facilitating the Taylor series expansion of the function u(r, t + τ) to first order alone.

The Generalised Kolmogorov-Feller Equation
Given that Equation (13) is memory invariant, the question arises as to how longer term temporal influences can be modelled, other than by taking an increasingly larger number of terms in the Taylor expansion of u(r, t + τ), which is not of practical analytical value, i.e., writing Equation (10) in the form

Σ_{n=1}^∞ (τ^n/n!) ∂^n u(r, t)/∂t^n = −u(r, t) + p(r) ⊗ u(r, t).

The key to solving this problem is to express the infinite series on the left hand side of the equation above in terms of a "memory function" m(t) and write

τ m(t) ⊗ ∂u(r, t)/∂t = −u(r, t) + p(r) ⊗ u(r, t), (14)

where ⊗ here denotes the convolution integral over time t. This is the generalised Kolmogorov-Feller equation (GKFE), which reduces to the CKFE when m(t) = δ(t).
A characteristic time spectrum M(ω) for m(t) can be obtained by noting that we have, in effect, considered the result

τ m(t) ⊗ ∂u(r, t)/∂t = −u(r, t) + p(r) ⊗ u(r, t)

so that, after taking the Fourier transform with respect to t, we obtain

iωτ M(ω)U(r, ω) = −U(r, ω) + p(r) ⊗ U(r, ω),

where U(r, ω) ↔ u(r, t) and M(ω) ↔ m(t), from which it follows that we can write M(ω) as

M(ω) = [p(r) ⊗ U(r, ω) − U(r, ω)] / [iωτ U(r, ω)].

Orthonormal Memory Functions
For any inverse function or class of inverse functions of the type n(t), say, such that

n(t) ⊗ m(t) = δ(t),

the GKFE can be written in the form

τ ∂u(r, t)/∂t = −n(t) ⊗ u(r, t) + n(t) ⊗ p(r) ⊗ u(r, t),

where the CKFE is again recovered when n(t) = δ(t), given that δ(t) ⊗ δ(t) = δ(t). The function n(t) is an orthonormal function of m(t).
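The defining relation n(t) ⊗ m(t) = δ(t) is a deconvolution condition, so a discrete n(t) can be constructed from a given memory function by inversion in the Fourier domain; a Python sketch (the exponential memory function and grid size here are purely illustrative choices) is:

```python
import numpy as np

# Construct the inverse ("orthonormal") function n for a given memory
# function m by Fourier-domain deconvolution: n ⊗ m = δ implies N(ω) = 1/M(ω).
T = 256
t = np.arange(T)
m = np.exp(-t / 10.0)            # a causal, exponentially decaying memory
M = np.fft.fft(m)
assert np.min(np.abs(M)) > 1e-6  # invertible: no spectral zeros
n = np.real(np.fft.ifft(1.0 / M))

delta = np.real(np.fft.ifft(np.fft.fft(n) * M))  # n ⊗ m (circular)
target = np.zeros(T)
target[0] = 1.0
assert np.allclose(delta, target, atol=1e-10)
```

In practice the inversion is only well posed when M(ω) has no zeros; otherwise the division must be regularised.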

The Random Walk, the Efficient and the Fractal Market Hypotheses
From Equation (11) we can generate a simple (continuous) financial time series model by integrating over r to obtain

u(t + τ) = ∫ [p(r) ⊗ u(r, t)] d^n r + s(t),

where

u(t) = ∫ u(r, t) d^n r and s(t) = ∫ s(r, t) d^n r,

and, for p(r) = δ^n(r),

u(t + τ) = u(t) + s(t). (15)

If s(t) is taken to be a (bi-polar) stochastic function of time and u(t) is some price value (of some commodity), then Equation (15) describes the case in which the price at some future time t + τ is given by the known price at time t plus some random price value s(t). Note that for any value of t, s(t) may be a positive or negative value, thereby giving a higher or lower price value at t + τ. The principal point here is that although Equation (15) is the simplest of models for price variation, it can nevertheless be seen to be the result of a spatial integration of E 3 when p(r) = δ^n(r). Moreover, it is a model that encompasses some of the earliest questions associated with the dynamics of a free market economy, as discussed in the following section.

The Random Walk Hypothesis
In 1900, Louis Bachelier [27] concluded that the price of a commodity today is the best estimate of its price in the future (at least in the short term). The random behaviour of commodity prices was again noted by Holbrook Working in 1934 [28] in an analysis of time series data. In the 1950s, Maurice Kendall [29] attempted to find periodic cycles in the financial time series of various securities and commodities but did not observe any. Prices appeared to be yesterday's price plus some random change (up or down); he suggested that price changes were independent and that they followed random walks. Thus the first models conceived for price variation were based on the sum of independent random variations often referred to as Brownian motion and quantified in Equation (15). This led to the creation of the random walk hypothesis, and the closely related efficient market hypothesis which states that random price movements indicate a well-functioning or efficient market.
An example of the type of time series that illustrates this effect is given in Figure 1. The figure shows a signal obtained using a zero-mean Gaussian random number generator to compute s_n based on the iteration u_{n+1} = u_n + s_n, u_1 = 100, n = 1, 2, 3, ..., 999.
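The iteration described above can be reproduced directly (the paper generates Figure 1 with a MATLAB equivalent; a seeded Python sketch is given here for reproducibility, the seed being an arbitrary choice):

```python
import numpy as np

# u_{n+1} = u_n + s_n with zero-mean Gaussian increments s_n and u_1 = 100,
# the random walk iteration used to generate a Figure 1-type signal.
rng = np.random.default_rng(0)
s = rng.normal(0.0, 1.0, size=999)
u = np.empty(1000)
u[0] = 100.0
for n in range(999):
    u[n + 1] = u[n] + s[n]
```

Plotting u against n produces a signal qualitatively similar to Figure 1.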
Trivial though this model is, it nevertheless provides signals that are remarkably similar to many financial signals. However, it is an example of a stationary signal in the sense that the scale of random deviations is invariant in time and the trends (up and down) occur over similar amplitude and time scales, characteristics that are not properties of financial signals in general, at least over large time scales.

The Efficient Market Hypothesis
It is often stated that asset prices should follow Gaussian random walks because of the efficient market hypothesis (EMH), e.g., [30][31][32] (and references therein). The EMH states that the current price of an asset fully reflects all available information relevant to it and that new information is immediately incorporated into the price. Thus, in an efficient market, models for asset pricing are concerned with the arrival of new information which is taken to be independent and random.
The EMH implies independent price increments, but why should they be Gaussian distributed? A Gaussian PDF is chosen because price movements are presumed to be an aggregation of smaller ones and sums of independent random contributions have a Gaussian PDF due to the central limit theorem. This is equivalent to arguing that all financial time series used to construct an "averaged signal" such as the FTSE100 or Dow Jones Industrial Average are statistically independent. Such an argument is not fully justified because it assumes that the reaction of investors to one particular stock market is independent of investors in other stock markets which, in general, will not be the case as each investor may have a common reaction to economic issues that transcend any particular stock. In other words, asset management throughout the markets relies on a high degree of connectivity and the arrival of new information can send "shocks" through the market as people react to it and then to each other's reactions.
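The central-limit argument invoked above can be illustrated numerically: aggregating many independent, decidedly non-Gaussian (here, uniform) increments yields a standardized sum that is Gaussian to high accuracy, e.g., ~68.3% of values fall within one standard deviation. A seeded Python sketch (sample sizes are arbitrary choices):

```python
import numpy as np

# Central limit theorem demonstration: sums of independent uniform
# increments approach a Gaussian distribution.
rng = np.random.default_rng(1)
m, n = 50000, 200
steps = rng.uniform(-1.0, 1.0, size=(m, n))   # non-Gaussian increments
sums = steps.sum(axis=1) / np.sqrt(n / 3.0)   # Var(U(-1,1)) = 1/3, so this
                                              # standardizes to unit variance

inside_1sigma = np.mean(np.abs(sums) < 1.0)
# For a standard Gaussian, P(|Z| < 1) = 0.6827.
assert abs(inside_1sigma - 0.6827) < 0.01
```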
The EMH assumes that there is a rational and unique way to use available information, that all agents possess this knowledge, and that any chain reaction produced by a "shock" happens instantaneously. This is clearly not physically possible or financially viable, and financial models that are based on such a hypothesis have failed and will continue to fail.

The Fractal Market Hypothesis
One of the principal concerns with regard to the EMH relates to the assumption that the markets are Gaussian distributed. It has long been known that financial time series (specifically, price changes) do not adhere to a Gaussian distribution, and this is arguably the most important shortcoming of the EMH model (i.e., the failure of the assumption of independent and Gaussian distributed increments). It is fundamental to the inability of EMH-based analysis, such as the Black-Scholes model [33], to explain characteristics of financial signals such as clustering and flights, and to account for "boom-bust" events and, in particular, financial "crashes" leading to recession.
More recently, financial time series have been shown to be random self-affine signals, which has led to the related development of the fractal market hypothesis in which price variations are, in effect, random walks whose statistical distribution of values is similar over different time scales. Ralph Elliott (a professional accountant) first reported on the apparent self-affine properties of financial data in 1938 [34,35]. He was the first to observe that segments of financial time series data of different sizes could be scaled in such a way that they were statistically the same, producing so-called Elliott waves. He proposed that trends in financial prices resulted from investors' predominant psychology and found that swings in mass psychology always seemed to be a manifestation of the same recurring self-affine patterns in financial markets.
A primary goal of an investor is to attempt to obtain information that can provide some confidence in the immediate future of a commodity's price, based on patterns of the past. One of the principal components of this goal is based on the observation that there are "waves within waves" that appear to permeate financial signals when studied in sufficient detail and imagination. It is these repeating self-affine wave patterns that occupy both the financial investor and the financial systems modeller alike and it is clear that although economies have undergone many changes in the last 100 years, the dynamics of market data does not appear to have changed significantly (ignoring scale).
The Elliott wave principle developed in the late 1930s and the fractal market hypothesis developed in the late 1990s provide data-consistent models for the interpretation and analysis of financial signals and investment theory. In turn, and as discussed in this paper, fractal signals and fields can be cast in terms of solutions to certain fractional differential equations, for which an understanding of the fractional calculus is a pre-requisite. Hence, the application of fractional calculus has, and is likely to continue to have, a primary role in mathematical economics.
In this context, and on the basis of Equation (11), an overview of the contents of this paper and its subject connectivity is summarised in the flow diagram given in Table 1, where the discrete time-dependent behaviour of u(t) is taken to represent a digital financial time series u_n, n = 1, 2, ..., N. This flow diagram highlights the relationship between the E 3 and the applications of fractional calculus in mathematical economics, which is a theme of this paper. It is illustrative of the unified approach that has been taken in order to produce a coherent exposition for the development of three fundamental indices that are used to analyse financial signals, namely, the Lyapunov exponent, the volatility and the Lévy index. As shall be studied later on in this paper, these indices are used to undertake a trend analysis which, in turn, provides a confidence criterion for the application of evolutionary computing to predict future prices.

Principal Properties of Financial Signals
Whatever the hypothesis that is considered in regard to understanding and analysing financial signals, there are some basic characteristics of such signals that are common. These include the following:
• financial signals are stochastic signals;
• they are non-stationary signals;
• their distributions (specifically of the price differences) are non-Gaussian;
• they are often characterised by long-term historical correlations;
• they have random repeating patterns at different scales, i.e., they are statistically self-affine (random fractals);
• they have instabilities at all scales, sometimes referred to as "Lévy flights".
The models, metrics and computational algorithms reported in this paper attempt to take each of the above properties into account while maintaining adherence to E 3 as a unifying theme. Table 1. Flow diagram illustrating the connectivity between Einstein's Evolution Equation (E 3 ), two well-known financial indices (i.e., the volatility σ and the Lyapunov exponent λ) and the classical and fractional diffusion equations, both of which can be derived from the evolution equation using the Characteristic Functions (CFs) shown (where c is a constant, k is the spatial frequency and γ is the Lévy index). The flow diagram also illustrates the relationship between the evolution equation and two principal market hypotheses: the efficient market hypothesis and the fractal market hypothesis, the latter hypothesis being a concomitant of the fractional calculus. The asterisk ( * ) denotes the connection between the Generalised KFE and the introduction of a memory function which allows E 3 to be written in a different form without loss of generality.

Density Function Distributions
Suppose that the one-dimensional density function u(x, t) is ergodic and has a PDF p(x). If, for all time t > 0, the distributions of u(y, t) and u(z, t) are identical to that of u(x, t), what is the (symmetric) distribution of the density functions in the plane r ∈ R 2 and the volume r ∈ R 3 ?
It is clear that the cumulative distribution function of u(x, t) is given by

P(x) = ∫_{-∞}^{x} p(x') dx'

and hence, from the fundamental theorem of calculus,

p(x) = dP(x)/dx.

Thus, for r ∈ R 2 , when p(x) = p(y), the (circularly symmetric) cumulative distribution is (using polar coordinates (r, θ) with r = √(x² + y²))

P_2(r) = ∫_0^{2π} ∫_0^r p(r') r' dr' dθ = 2π ∫_0^r r' p(r') dr'

and so the PDF p_2(r), say, is given by

p_2(r) = 2πr p(r).

Similarly, for r ∈ R 3 , when p(x) = p(y) = p(z), then for the spherically symmetric case (using spherical polar coordinates (r, θ, φ) with r = √(x² + y² + z²)),

p_3(r) = 4πr² p(r).

Gaussian and Rayleigh Distributions
In the case when u(x, t), u(y, t) and u(z, t) are (zero-mean) Gaussian distributed with

p(x) = (1/(σ√(2π))) exp[−x²/(2σ²)],

where σ is the standard deviation, and when the characteristic function (CF) is given by [36]

exp(−σ²k²/2),

then, for r ∈ R 2 ,

p_2(r) = 2πr p(x)p(y) = (r/σ²) exp[−r²/(2σ²)], r ≥ 0,

which is a standard Rayleigh distribution with the characteristic function given in [36]. For the three-dimensional case,

p_3(r) = 4πr² p(x)p(y)p(z) = √(2/π) (r²/σ³) exp[−r²/(2σ²)], r ≥ 0,

which is the Maxwell-Boltzmann distribution with the CF given in [36]. The distributions p_2(r) and p_3(r) represent the random length of the two- and three-vectors, respectively.
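The Rayleigh result is easy to verify numerically: sampling independent zero-mean Gaussian pairs and computing the radius r = √(x² + y²) yields a Rayleigh-distributed variable, whose mean is σ√(π/2). A minimal Python sketch (the paper's example functions are written in MATLAB; this translation is for illustration only):

```python
import numpy as np

# Sample independent zero-mean Gaussian pairs (x, y) with standard deviation sigma
rng = np.random.default_rng(0)
sigma = 1.0
x = rng.normal(0.0, sigma, 200_000)
y = rng.normal(0.0, sigma, 200_000)

# Radius of the two-vector; for a Rayleigh distribution the mean is sigma*sqrt(pi/2)
r = np.sqrt(x**2 + y**2)
mean_r = r.mean()
```

With 200,000 samples the empirical mean agrees with σ√(π/2) ≈ 1.2533 to within a few thousandths.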
The case associated with p_2(r) frequently occurs when a random time signal u(t) has a distribution p(x). By computing the Hilbert transform of this signal, we obtain the quadrature component w(t), which has the same distribution as u(t). The analytic signal, s(t), is then given by

s(t) = u(t) + iw(t),

and the amplitude modulations, given by A(t) = [u²(t) + w²(t)]^{1/2}, are therefore 2πrp(r) distributed.
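The analytic signal construction just described can be sketched using the standard one-sided-spectrum FFT algorithm (a Python stand-in for the FFT-based MATLAB route referenced later in the text; the function name here is illustrative):

```python
import numpy as np

def analytic_signal(u):
    """Return s = u + iw, where w is the Hilbert transform (quadrature) of u.
    Computed by suppressing the negative-frequency half of the spectrum."""
    N = len(u)                 # assumed even here
    U = np.fft.fft(u)
    H = np.zeros(N)
    H[0] = 1.0                 # DC term unchanged
    H[1:N // 2] = 2.0          # double the positive frequencies
    H[N // 2] = 1.0            # Nyquist term unchanged
    return np.fft.ifft(U * H)

# For a pure cosine the amplitude modulation A(t) = |s(t)| is constant (= 1)
n = np.arange(128)
u = np.cos(2 * np.pi * 5 * n / 128)
A = np.abs(analytic_signal(u))
```

The real part of the analytic signal reproduces u(t), and for a single-tone input the amplitude envelope is flat, as expected.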

Lévy and Associated Distributions
The symmetric Lévy distribution features in material considered later on in this work and is key to the connectivity between E 3 and the fractional diffusion equation. We therefore take the opportunity at this point in the paper to consider some of the basic definitions and results associated with this distribution. The CF of a (zero-mean) Gaussian distribution can be written as

P(k) = exp(−ck²), c > 0.

The Lévy distribution is one whose CF is based on a generalisation of the CF of a Gaussian distribution to

P(k) = exp(−c | k |^γ),

where γ is the Lévy index. It is then clear that γ = 2 recovers a Gaussian PDF, and γ = 1 generates a Cauchy distribution, given that the inverse Fourier transform of exp(−c | k |) is (c/π)/(c² + x²). For γ ∈ (0, 2) it is possible to derive the asymptotic result [37]

p(x) ∼ 1/| x |^{1+γ}, | x | → ∞.

A simple derivation of this result can be obtained by noting that exp(−c | k |^γ) ≃ 1 − c | k |^γ for c | k |^γ << 1. The non-asymptotic Lévy distribution for arbitrary values of γ can easily be evaluated numerically through application of a discrete Fourier transform. Figure 2 shows examples of the Lévy distribution p(x) for different values of γ (with c = 2) and associated distributions xp(x) for the same values of γ but for c = 1/2. It is noted that the tails of each distribution for γ < 2 are longer than those for the case when γ = 2, thereby representing stochastic processes in which rare but extreme events are more likely to occur than with a Gaussian distributed process. These events include Lévy flights which, in financial time series analysis, mark positions in time when the value of a price may increase or decrease in a way that is inconsistent with the statistical signature of the series in a more general sense. An example of this is given in Figure 3, which shows Lévy flights in the complex plane associated with a FTSE 100 signal, the data having been obtained from [23].
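As a numerical check on the Cauchy and Gaussian limits, the symmetric Lévy PDF can be evaluated from its CF by an inverse Fourier (cosine) integral. The paper evaluates the distribution via a discrete Fourier transform; the direct quadrature below is an illustrative alternative (function name and integration limits are choices made here, not the paper's):

```python
import numpy as np

def levy_pdf(x, gamma, c):
    """p(x) = (1/pi) * integral_0^inf exp(-c*k^gamma) * cos(k*x) dk,
    evaluated with the trapezoidal rule (symmetric Levy distribution)."""
    k = np.linspace(0.0, 30.0, 30_001)   # the CF is negligible beyond k ~ 30 here
    f = np.exp(-c * k**gamma) * np.cos(k * x)
    dk = k[1] - k[0]
    return np.sum((f[1:] + f[:-1]) / 2) * dk / np.pi   # trapezoidal rule

# gamma = 1 is the Cauchy case, for which p(0) = 1/(pi*c); gamma = 2 is Gaussian
p0_cauchy = levy_pdf(0.0, 1.0, 2.0)
```

For γ = 1 and c = 2 the quadrature reproduces the exact Cauchy value p(0) = 1/(2π), and for γ = 2 it matches the corresponding Gaussian peak value.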
Identifying metrics that can flag the positions in time at which a Lévy flight may occur is an important feature of financial trading. The first of these that we consider in this paper is the Lyapunov exponent which, in the context of E 3 , is discussed in the following section. (Note that, in Figure 3, both time functions u(t) and s(t) are taken to be uniformly sampled discrete functions u_n and s_n, respectively; the analytic signal is computed using a fast Fourier transform (FFT) and the algorithm presented in [38].)

The Lyapunov Exponent
The Lyapunov exponent is a quantity that characterises the rate of separation of infinitesimally close trajectories, a trajectory being a time-ordered set of states of a dynamical system. Specifically, a trajectory is a sequence f^n(x), n ∈ N, calculated by the iterated application of a mapping f to an element x of its source. In this section, we illustrate the derivation of this exponent from E 3 . Consider some dynamical system that is modelled by an iterative equation characterised by a function f(t, x_0) which produces a solution at some time t given by x(t) for an initial condition x_0 ≡ x(t = 0). The system is such that it may be stable or unstable depending on the initial condition x_0 and system parameters, i.e., the numerical values of the parameters that characterise the function f(t, x_0). For stability, we can expect the solution x(t) to be characterised by convergence to a specific value (which could be zero) as t → ∞. If the solution is unstable, we can expect that x(t) might increase in value with t and/or have some chaotic behaviour where x(t) becomes a chaotic variable of time. In this case, a fundamental "diagnostic" is associated with asking the following question: Is a given system, characterised by the function f(t, x_0), unstable, and, if so, how unstable is it? The answer to this question is compounded in the Lyapunov exponent, whose value is typically taken to be a measure of how sensitive x(t) is to the initial condition x_0. If we denote δx(t) to be some change to the solution which depends on a change to the initial condition denoted by δx_0, then this sensitivity is compounded in the following equation:

(d/dt) δx(t) = λ δx(t),

where λ is referred to as the (leading) Lyapunov exponent, and which has the solution

δx(t) = δx_0 exp(λt), (18)

so that

λ = (1/t) ln[δx(t)/δx_0]. (19)

This exponent represents the mean rate of separation of trajectories of the system where the term "trajectory" refers to the time evolution of x(t) subject to the initial condition x_0.
Thus, any two trajectories x(t) and x(t) + δx(t), say, that are close to each other for t << 1 and that consequently separate exponentially with time, will represent a system defined by a function f(t, x_0) that has a large value of λ. On the other hand, if all values of x(t) and x(t) + δx(t) converge to the same value in some neighbourhood of time, then δx(t) must approach zero, and, from Equation (18), this implies that λ < 0. Thus, on the basis of Equation (18), a positive value of λ defines a system with chaotic behaviour in time and a negative value of λ characterises a stable system which converges in time. Moreover, the larger the magnitude of λ becomes, the faster the rate of convergence (for λ < 0) or the "route to chaos" (for λ > 0) [39-41].
Given the description above as to what the Lyapunov exponent is and what it characterises, we consider a derivation of this exponent within the context of Equation (10) for r ∈ R 3 and uniform discretisation in time so that we can write

u(r, t_{n+1}) = p(r) ⊗_r u(r, t_n), (20)

where ⊗_r denotes the convolution integral over r. Suppose that after many time steps, this iteration converges to the function u(r, t_∞), say. We can then represent the iteration in the form

u(r, t_n) = u(r, t_∞) + ϵ(r, t_n), (21)

where ϵ(r, t_n) denotes the error at any time step n. Convergence to the function u(r, t_∞) then occurs if ϵ(r, t_n) → 0 as n → ∞. If we now consider a model for the error at each time step given by (for some real constant ε)

ϵ(r, t_{n+1}) = ε exp(λt_n) with t_n = nτ, (22)

where τ defines the time sampling interval, it is clear that we can then write ϵ(r, t_{n+1}) = ϵ(r, t_n) exp(λτ), or, after taking the p = 1-norm of both sides, ϵ̄(t_{n+1}) = ϵ̄(t_n) exp(λτ), where ϵ̄(t_n) := ‖ϵ(r, t_n)‖_1. Thus we can consider an expression for λ given by

λ = (1/(Nτ)) Σ_{n=1}^{N} ln[ϵ̄(t_{n+1})/ϵ̄(t_n)].

If λ is negative, then the iterative process is stable since we can expect that for n >> 1, ϵ̄(t_{n+1})/ϵ̄(t_n) < 1 and thus ln[ϵ̄(t_{n+1})/ϵ̄(t_n)] < 0. If λ is positive then the iterative process will diverge, such a criterion for convergence/divergence being dependent on the exponential model given in Equation (22) used to represent the error function at each iteration. This result applies for any time iteration process. However, in the case of Equation (20), from Equation (21) we can now consider the equation u(r, t_n) = Gauss(r) + ϵ(r, t_n), or, after taking p = 1 norms, ū(t_n) ≤ 1 + ϵ̄(t_n), where ū(t_n) = ‖u(r, t_n)‖_1 and Gauss(r) denotes a (unit 1-norm) Gaussian function. For a discrete time series u_n > 0 ∀n, say, we compute the Lyapunov exponent using the relatively simple formula

λ = (1/(Nτ)) Σ_{n=1}^{N} ln(u_{n+1}/u_n). (23)

Hence, for a time series which is assumed to be predicated on Equation (10), we can compute the corresponding Lyapunov exponent using Equation (23), albeit, in practice, for a finite array of size N. This includes financial time series data, when λ can be computed for a moving look-back window to generate a signal composed of Lyapunov exponents.
In this context, the product Nτ merely scales the computed value of the exponent but if u n+1 > u n , ∀n = 1, 2, ..., N then λ > 0 and if u n+1 < u n , ∀n = 1, 2, ..., N then λ < 0. Hence, irrespective of the scale used, a change of polarity in the value of λ is a signature of a change in the gradient of the time series. For this reason a change in polarity of the Lyapunov exponent can be used to quantify the transition between the growth or decay of a financial series.
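Equation (23) is straightforward to implement. The sketch below is a Python stand-in for the MATLAB function Lyapunov in Appendix A.2 (for illustration only) and demonstrates the polarity property: a purely growing series gives λ > 0 and a purely decaying one gives λ < 0.

```python
import numpy as np

def lyapunov(u, tau=1.0):
    """Leading Lyapunov exponent of a positive series u_n via Equation (23):
    lambda = (1/(N*tau)) * sum_n ln(u_{n+1}/u_n)."""
    u = np.asarray(u, dtype=float)
    N = len(u) - 1                       # number of consecutive ratios
    return np.sum(np.log(u[1:] / u[:-1])) / (N * tau)

# For u_n = exp(a*n*tau) every log-ratio equals a*tau, so lambda = a exactly
tau, a = 0.01, 3.0
u = np.exp(a * np.arange(100) * tau)
lam = lyapunov(u, tau)
```

The sum telescopes, so λ depends only on the end points ln(u_N/u_1) scaled by 1/(Nτ), which is why a change in the net gradient of the series flips the polarity of λ.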
An example of this is given in Figure 4, which shows a financial signal (the first 1000 elements of the FTSE 100 prices given in Figure 3), from 14/03/2006 to 26/02/2010, which has been normalised for display purposes, i.e., u_n := u_n/‖u_n‖_∞. The associated Lyapunov exponent has been computed using the function Lyapunov given in Appendix A.2 and re-scaled for values τ = 0.01 and N = 32 according to Equation (23). Note that the first N values of the exponent in Figure 4 are missing, which is due to the window being a look-back window holding data that contributes to the first computation of the exponent at point N. As with other metrics computed from financial time series, the Lyapunov exponent obtained depends critically on the look-back window that is applied. However, subject to a delay which is proportional to the size of the look-back window used (determined by the time scale of the analysis), the polarity (and continuity thereof) of the signal can be used to estimate the macroscopic trends in a financial time series, as illustrated in Figure 4. This is discussed further later on in this paper.
Since we can write Equation (23) in the form

λ = (1/(Nτ)) Σ_{n=1}^{N} (ln u_{n+1} − ln u_n)

and note that (ln u_{n+1} − ln u_n)/τ is a forward-difference approximation to (d/dt) ln u(t), then, for the continuous case with time series function u(t), t ∈ [0, T], we can write

λ = (1/T) ∫_0^T (d/dt) ln u(t) dt = (1/T) ln[u(T)/u(0)],

giving a result that is analogous to Equation (19).

The Evolution Equation, Volatility and Risk
For a stochastic source term s(r, t), as given in Equation (11), Equation (14) acquires this source term. Consider the case when p(r) = δ^n(r). Integrating over r ∈ R n , we can then write a rate equation for u(t) alone. Writing this rate equation in a form suitable for iteration and considering an iterative solution for u(t), the first iterate u(t) := u_1(t) becomes the solution to the rate equation where, for u_0(t) = 1,

ln u(t) = σ s(t) ⊗_t ∫ n(t) dt. (24)

Equation (24) then shows that the volatility is a measure of the randomness of ln u(t) through the convolution of s(t) with the time integration of n(t). If, for example, | s(t) | ≤ 1, then, in the term σs(t), σ determines the amplitude of s(t).
Equation (24) does not provide a practically useful formula for σ as it relies on defining the functions n(t) and s(t), when what is ideally required is a definition for σ that relies on knowledge of u(t) alone. To do this, we are required to derive a formula for σ in terms of the function u(t) through the elimination of f(t) := s(t) ⊗_t n(t), and this requires a condition to be applied. In this context, suppose we assume that f(t) is a phase-only function (with unit amplitude) of compact support T and with a bandwidth Ω. This requires that both s(t) and n(t) are phase-only functions of the same compact support and bandwidth. In this case F(ω) = exp[iθ(ω)], where θ(ω) is the "Phase Spectrum", and, using Parseval's theorem, we have

∫ | f(t) |² dt = (1/2π) ∫ | F(ω) |² dω = Ω/(2π).

Hence, since (d/dt) ln u(t) = σ f(t), we obtain an expression for the volatility given by

σ = [(2π/Ω) ∫_0^T | (d/dt) ln u(t) |² dt]^{1/2}.

For a uniformly sampled discrete time series u_n, n = 1, 2, 3, ..., N, application of a forward differencing scheme for a time interval ∆t gives

(d/dt) ln u(t) ≃ (1/∆t) ln(u_{n+1}/u_n).

The sampling interval ∆t of u_n is related to the sampling interval ∆ω of the discrete Fourier transform of u_n by the equation ∆t∆ω = 2π/N, and since the bandwidth of the discrete spectrum of u_n is N∆ω, it is clear that ∆tΩ = 2π. Thus we derive a simple formula for the volatility given by

σ = [Σ_{n=1}^{N} | ln(u_{n+1}/u_n) |²]^{1/2}. (25)

Comparing Equation (25) with Equation (23), we observe similarities in regard to the commonality of the quotient u_{n+1}/u_n and the logarithmic function, but where λ < 0 or λ ≥ 0 while σ ≥ 0 ∀n.
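The discrete volatility can then be computed directly from the log-price ratios. The sketch below takes it as the l2-norm of ln(u_{n+1}/u_n); this is a Python stand-in for the MATLAB function volatility in Appendix A.3, whose normalisation convention may differ:

```python
import numpy as np

def volatility(u):
    """Volatility of a positive series via the l2-norm of log-price ratios:
    sigma = sqrt(sum_n (ln(u_{n+1}/u_n))^2). Always non-negative."""
    u = np.asarray(u, dtype=float)
    d = np.log(u[1:] / u[:-1])
    return np.sqrt(np.sum(d**2))

# A constant series has zero volatility; for u_n = exp(a*n) each log-ratio is a,
# so a series of length N gives sigma = |a| * sqrt(N - 1)
sigma_const = volatility(np.ones(50))
sigma_exp = volatility(np.exp(0.1 * np.arange(10)))
```

Note that, unlike the Lyapunov exponent, this quantity is insensitive to the sign of the price changes: a smoothly growing and a smoothly decaying series of the same rate have the same volatility.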
An example of the short time volatility is given in Figure 5 which shows a financial signal (the first 1000 elements of the FTSE 100 prices given in Figure 3), normalised for display purposes. In this example σ in Equation (25) was computed using function volatility given in Appendix A.3 for N = 32.
In financial time series modelling, the volatility is a measure of the noise in the signal. For data that has a stable trend (up or down), the volatility is relatively low, and, in this context, trading is best undertaken using financial signals that have a low volatility, other than in options trading, where there may have been a "bet" on a move of a certain magnitude. In this sense, the volatility of a signal provides a measure of the risk, a low risk loosely equating to a low volatility. In the derivation of the volatility provided in this section, σ = 1/τ, where τ is a coefficient in Equation (14). In this respect, and in the context of the evolution equation, τ is a measure of risk: the greater the value of τ, the lower the risk associated with an investment. For short δ(t)-type memory functions, the GKFE reduces to the classical Kolmogorov-Feller equation which, in terms of its relationship to the evolution equation, requires that τ << 1. Thus low risk requires that a financial time series is characterised by long memory functions, at least in terms of the model compounded in Equation (14)-a result that makes intuitive sense.

Trend Analysis Using the Lyapunov Exponent to Volatility Index Ratio
The changes in polarity or "zero-crossings" associated with the Lyapunov exponent (computed on a moving window basis) as discussed in Section 8 provide the positions in time where there is a transition in the type of trend (growth leading to decay and decay leading to growth). The value of the volatility indicates the "stability" of the time series, the temporal characteristics of all indicators being dependent on the size of the window or "period" used. This suggests scaling the Lyapunov exponent with the inverse of the volatility, i.e., computing the quotient

λ_σ = λ/σ, (26)

where σ is defined by Equation (25) and λ is defined by Equation (23) with τ = 1/N, thereby making λ_σ scale independent. This index then assesses not only changes in the direction of a trend but also the corresponding stability of that trend. This idea has obvious applications to a range of time series, but especially to financial time series analysis, where forecasting both the type and the characteristics of a trend is of fundamental importance, a positive trend with low volatility indicating a good investment horizon, for example. We define λ_σ as the Lyapunov-to-volatility ratio (LVR). Figure 6 shows the time varying LVR of a financial signal (the first 1000 elements of the FTSE 100 prices given in Figure 3 after normalisation) for N = 32.
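Computed on a moving look-back window, the LVR yields a time-varying index. A minimal sketch follows, assuming window-local definitions of λ (Equation (23) with τ = 1/N) and σ (Equation (25)); the NaN padding of the first N samples mirrors the look-back delay described in the text, though the edge handling is an illustrative choice:

```python
import numpy as np

def lvr(u, N):
    """Lyapunov-to-volatility ratio over a moving look-back window of length N.
    Returns an array aligned with u; the first N values are undefined (NaN)."""
    u = np.asarray(u, dtype=float)
    out = np.full(len(u), np.nan)
    for n in range(N, len(u)):
        w = u[n - N:n + 1]                  # look-back window (N + 1 samples)
        d = np.log(w[1:] / w[:-1])          # log-price ratios
        lam = np.sum(d) / (N * (1.0 / N))   # Equation (23) with tau = 1/N
        sigma = np.sqrt(np.sum(d**2))       # Equation (25)
        out[n] = lam / sigma if sigma > 0 else 0.0
    return out

# A steadily growing series has a positive LVR everywhere it is defined
index = lvr(np.exp(0.05 * np.arange(64)), N=16)
```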

Pre-and Post-Filtering
As shall be discussed later, the numerical accuracy of results obtained in predicting a trend and its longevity is critically dependent on the filtering of both the input data and λ_σ: pre- and post-filtering, respectively.

Pre-Filtering
The positions in time at which the zero crossings are evaluated using Equation (26) depend on the accuracy of the algorithm used to compute λ_σ which, in turn, depends on the intrinsic noise associated with the time series data. This can yield errors in the positions at which the zero-crossings are computed, especially in regard to changes associated with very short time micro-trends.
In the context of longer term macro-trends, such micro-trends may legitimately be interpreted as noise although, in the context of financial time series analysis, for example, the term "noise" must be understood to reflect legitimate price values. To overcome this effect, u_n is filtered using a moving average filter defined by

u_n := (1/W) Σ_{m=0}^{W−1} u_{n−m},

where W defines the length of the "moving window". The function given in Appendix A.4 provides a moving average filter for pre-filtering the data u_n using a window of size W.
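A causal (look-back) moving average of this kind can be sketched as follows, as a Python stand-in for the MATLAB function in Appendix A.4; the shortened windows at the start of the series are an illustrative edge-handling choice:

```python
import numpy as np

def moving_average(u, W):
    """Causal moving average: each output is the mean of the current sample
    and the W-1 preceding samples; the first W-1 outputs use shorter windows."""
    u = np.asarray(u, dtype=float)
    out = np.empty_like(u)
    for n in range(len(u)):
        out[n] = u[max(0, n - W + 1):n + 1].mean()
    return out

# A single price spike is spread over W samples and reduced in amplitude
smoothed = moving_average([1, 1, 1, 5, 1, 1, 1], W=3)
```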

Post-Filtering
In addition to pre-filtering the time series data, an option for post-filtering λ_σ is required to further control the dynamic behaviour of this index. We therefore again consider a moving average filter given by

λ_σ[n] := (1/T) Σ_{m=0}^{T−1} λ_σ[n − m],

where T defines the length of the "moving window", which need not equal W.

Zero-Crossings Analysis
On the basis of the ideas considered in the previous section, the critical points at which a trend forecasting decision is made are the zero crossing points associated with λ_σ. By computing λ_σ(t), where t is the position in time of the window, identification of the zero crossings, denoted by the function z_c(t), involves the following basic procedure:

z_c(t) = +1, if λ_σ(t − ε) < 0 and λ_σ(t + ε) > 0;
z_c(t) = −1, if λ_σ(t − ε) > 0 and λ_σ(t + ε) < 0;
z_c(t) = 0, otherwise;

where ε is a small perturbation in time. This procedure generates a series of Kronecker delta functions whose polarity determines the position(s) in time at which a trend is expected to be positive or negative. Thus the function z_c(t) identifies the zero crossings associated with the end of an upward trend and the start of a downward trend when z_c(t) = −1, and the end of a downward trend and the start of an upward trend when z_c(t) = +1. This is therefore a "critical indicator" in regard to forecasting the trending behaviour of a time series.
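For a uniformly sampled index λ_σ[n], the ε-perturbation of the continuous description becomes a one-sample shift, and the zero-crossing detector can be sketched as follows (an illustrative Python reading of the procedure, not the paper's MATLAB code):

```python
import numpy as np

def zero_crossings(x):
    """Kronecker-delta series z_c[n]: +1 where x crosses from negative to
    positive (start of an upward trend), -1 where it crosses from positive
    to negative (start of a downward trend), 0 elsewhere."""
    x = np.asarray(x, dtype=float)
    z = np.zeros(len(x), dtype=int)
    for n in range(1, len(x)):
        if x[n - 1] < 0 and x[n] >= 0:
            z[n] = 1
        elif x[n - 1] > 0 and x[n] <= 0:
            z[n] = -1
    return z

z = zero_crossings([-2.0, -1.0, 0.5, 1.0, -0.5, -1.0, 2.0])
```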

Back-Testing Evaluator
Back-testing algorithms are designed to "gauge" the accuracy of results in terms of trend predictions, for example, and are usually, but not exclusively, related to testing a strategy for forecasting the behaviour of a financial time series. They are usually designed to assess the overall accuracy of some trading strategy based on historical data when the future outcomes of such a strategy can be evaluated. In this context, the function given in Appendix A.5 evaluates the performance associated with the zero-crossings analysis discussed in the previous section. This evaluation operates on the basis that the price differences should reflect the interval between the start and end points of a predicted trend if the prediction is correct. Thus, in the case when z_c(t) > 0 and the trend is positive, the price difference between this point in the time series and the next point in the time series when z_c(t) < 0 should be positive, thereby representing a net price gain between the two zero crossings. Similarly, when z_c(t) < 0 and the trend is negative, the price difference between this point in the time series and the next point in the time series when z_c(t) > 0 should be negative, thereby representing a net price loss between the two zero crossings. In those cases where this occurs throughout the duration of the time series considered, the predicted entry and exit points are taken to be correct; otherwise, they are taken to be incorrect. The accuracy associated with this evaluation is computed as a percentage in terms of successful entries and exits, i.e., going "long" (when an investment might be made because the price of a commodity is increasing) and going "short" (when an investment would be held or sold at the start of a downward trend), respectively.
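The evaluation described above can be sketched as follows, as a Python stand-in for the MATLAB function evaluator in Appendix A.5; the scoring convention (each crossing-to-crossing interval counts as one prediction) is an illustrative reading of the text:

```python
def backtest_accuracy(prices, zc):
    """Percentage of correct predictions: a +1 (go long) crossing is correct
    if the price at the next crossing is higher; a -1 (go short) crossing is
    correct if the price at the next crossing is lower."""
    events = [(n, zc[n]) for n in range(len(zc)) if zc[n] != 0]
    correct = total = 0
    for (n, z), (m, _) in zip(events, events[1:]):
        gain = prices[m] - prices[n]       # net price change over the interval
        correct += (z > 0 and gain > 0) or (z < 0 and gain < 0)
        total += 1
    return 100.0 * correct / total if total else 0.0

# Toy series: buy at 1, sell at 3 (correct long), then the price falls (correct short)
prices = [1.0, 2.0, 3.0, 2.0, 1.0, 2.0, 3.0]
zc     = [1,   0,   -1,  0,   1,   0,   0]
acc = backtest_accuracy(prices, zc)
```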

Example Results
A function called Backtester is provided in Appendix A.6, which gives the user options on the look-back window lengths W and T and the size L of the data stream that is used from the available input data. This data is provided in the form of a column vector via a .txt file. The function normalises this data so that it can be plotted on a scale that is consistent with the scale of λ_σ[n]. The function provides a plot which shows the evolution of u_n (normalised), λ_σ[n] and z_c[n] and then evaluates the results using the function evaluator as discussed in the previous section. Note that both the Lyapunov exponent and the volatility are evaluated from the original data (and not the normalised data used for the plot) using a look-back window of T. Figure 7 shows some example results of running Backtester for the first 1000 elements of the FTSE 100 prices given in Figure 3. The three examples provided are for Backtester(10,10,1000), Backtester(20,10,1000) and Backtester(30,10,1000), for which the combined entry/exit (long/short) accuracy is 36.55%, 64.58% and 72.73%, respectively. From these results it is clear that the accuracy improves significantly with the extent of the pre-filtering that is applied to the time series before computation of the LVR. This is to be expected, as pre-filtering reduces the noise associated with the time series prior to the computation of the LVR. In order to quantify both the pre- and post-filtering effect on the combined accuracy of the long/short predictions, Figure 8 shows a surface (mesh) plot of the combined accuracy as a function of the pre- and post-filtering look-back window sizes W and T, respectively. The maximum value associated with this "WT-map" is 87.5%, which occurs at (W, T) coordinates (40, 10). From Figure 8 it can be seen that the highest combined accuracies (>70%) are obtained for approximate values of W ∈ [30, 50] and T ∈ [10, 20].
However, it should be noted that WT-maps of this kind are data dependent and will vary with the type of financial time series that is processed and on the non-stationary characteristics that occur over the length of the data series that is chosen (i.e., the input parameter L in function Backtester). Hence, WT-maps of the type given in Figure 8 provide a "signature" for a financial signal from which optimal values of the pre-and post filtering windows can be established. This optimisation is based on finding the smallest values of W and T that will maintain a combined accuracy compatible with an expected return over a given time scale.
As a comparative example, Figure 9 shows the equivalent WT-map for 1000 elements of the Euro-Dollar (USA) daily (averaged) exchange rates from 29/04/2008 to 27/02/2012, as given in Figure 10. In this case, a maximum value of 83.33% occurs at WT coordinates with minimum values of (38, 12). Although the quantitative details of this WT-map are unique to the data used, in qualitative terms it is similar to the WT-map given in Figure 8, revealing that greater accuracy is achieved for large values of W relative to T, which is intuitively to be expected. Clearly, for any specific financial data series, a WT-map is required to provide an optimal accuracy associated with the trend analysis of that series under the assumption that the stochastic behaviour of the series is stationary, i.e., the financial signal is ergodic.

Price Prediction Using Evolutionary Computing
The trend analysis considered in the previous sections provides evidence of being able to predict a positive or negative trend in a financial signal over a period of time that yields a net positive or negative gradient, respectively, before the trend is reversed. This is based on pre-filtering the financial signal and post-filtering the LVR, in which the accuracy achieved is based on the look-back windows of both filters. Such an approach only provides a statement on the expected future trend of a financial signal; it does not provide an estimate of the actual future price.
On the basis of Equation (15) and the random walk hypothesis it represents, it is not possible to determine a future price with 100% accuracy whatever the time scale, given that most financial signals are known to be self-affine stochastic fields which exhibit the same statistical distributions over all time scales. Thus, it is well known and understood (but not always appreciated) that in economics, only an estimate (essentially an informed guess) of a future price is possible. However, in principle, the lower the volatility of the signal, the less likely it is to exhibit large random variations at some future (short) time and hence, the larger the LVR, the more likely it is that an estimate of a future price will be an accurate prediction. In terms of Equation (15), this means that u(t + τ) ∼ u(t) given that s(t) ∼ 0, i.e., "tomorrow's price is likely to be close to today's price". This provides the basis for using evolutionary computing to estimate short time price values by using the LVR to flag when the approach can be used effectively, i.e., when the LVR reaches a maximum or minimum above or below a certain threshold, respectively-as illustrated in Figure 7 for a threshold of 2, for example.

Evolutionary Computing
Evolutionary computing (EC) involves "applying the Darwinian principles of natural selection to algorithmic problem solving" [42] and has its origins in the 1960s with the introduction of "evolutionary programming" [43], "genetic algorithms" [44], and "evolutionary strategies" [45]. Following independent developments in the 1990s these areas merged to form the discipline of genetic programming known today as EC in which a correlation exists between natural evolution and evolution by computational problem solving [46].
In the context of a local environment that has a population striving for survival and to reproduce, with natural evolution, the success (fitness) of each individual is dependent on their environment and how well they meet their goals. Similarly, with a trial-and-error mathematical process, a candidate solution is judged in the context of the problem that it is trying to solve and how well the candidate solves the problem which determines whether or not it is kept as a candidate solution. A common theme in EC is the idea of taking a population of individuals "operating" according to environmental pressures causing natural selection and thereby the growth of a fitter population. Many aspects of EC are stochastic and the starting point of candidate solutions can be either deterministic or stochastic. In either case, the aim is to produce a "solution" that minimises some fitness function.

Eureqa
Eureqa is an EC tool originally developed by the Cornell Creative Machine Laboratory (Cornell University) and commercialised by Nutonian Inc. (Boston, MA, USA) [47]. The underlying principle is to use genetic programming to generate equations, each of which provides an increasingly better fitness function to model a given dataset. The system iteratively generates a sequence of non-linear functions to describe input (digital) signals, which may include stochastic signals [48]. It is a modelling engine predicated on artificial intelligence, using evolutionary searches to determine an equation that represents a set of data [49]. The system automatically discovers formulae through evolutionary algorithms, requiring no human intervention, starting by randomly creating equations via sequences of mathematical building blocks based on a combination of common functions. The content of these formulae is ordered only by a basic syntax (e.g., two addition signs cannot appear one after the other). Beyond this basic syntax, the sequences generated by the program are entirely random.

Application to Financial Forecasting
With a little data, "Eureqa generates fundamental laws of nature" [50]. However, there have been few applications of EC to financial forecasting. This is partly due to the significance of Equation (15) and the basic random walk hypothesis to which financial signals adhere, albeit as self-affine stochastic fields. Thus, although EC can be used to generate a non-linear equation for some short time financial signal, no fundamental significance in terms of a "law of nature" can be inferred from such an equation due to the random walk nature of the data that is used. To date, the only "law of nature" that can be used to describe financial signals is that they are statistically self-affine fields, to which the fractal market hypothesis is thereby applicable. Nevertheless, EC can be used to provide short time predictions, including the performance of equity markets [51] and energy commodities [52], for example. This is done by using EC to generate representative equations for existing prices over a look-back window and can, in principle, be applied successively for a moving (look-back) window, especially for time periods where the volatility of the time series is low and future prices can be expected to be random but locally similar to past prices.

An Example Result
With reference to Figure 7, we consider the daily prices for array values between 870 and 900 (inclusively), which correspond to days 24/08/2009 to 08/09/2009, when the LVR is ∼3 and relatively flat. With these 30 price values, Eureqa provides the following formula:

f(t) = 5025.73417939762 + 8.96527863946579t² + 1.52597679067939 × 10⁻⁶t⁶ + cos(8.96527863946579t) − 76.8453768284695t − 0.253321938783733t³ − 48.5578781261177 sin(0.96446841878421 + 7.16244232473996t), (27)

obtained after 51,056 generations, giving a correlation coefficient of 0.98362871, an R² (coefficient of determination) goodness of fit of 0.96684623, a mean absolute error of 16.623419 and a complexity of 51. Figure 11 shows a comparison of the true price values with the estimates obtained using a discretised version of Equation (27) given by

f_n = 5025.73417939762 + 8.96527863946579t_n² + 1.52597679067939 × 10⁻⁶t_n⁶ + cos(8.96527863946579t_n) − 76.8453768284695t_n − 0.253321938783733t_n³ − 48.5578781261177 sin(0.96446841878421 + 7.16244232473996t_n), n = 1, 2, ..., N, t_n = n, (28)

for N = 30, and, additionally, for n = 31, 32 and 33, thereby providing future price predictions for three days ahead. A comparison of the numerical values for these price predictions is given in Table 2.

Table 2. Predicted and actual prices for n = 31, 32 and 33.

n                          31      32      33
Predicted price value f_n  5022.9  5066.6  5087.4
Actual price value u_n     5138.0  5108.9  5154.6
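The discretised model is easy to evaluate directly. The sketch below does so for the prediction points n = 31, 32 and 33, taking the indexing convention t_n = n stated in the text; note that re-evaluating the printed, rounded coefficients need not reproduce the tabulated values to the digit, so only the qualitative behaviour (a continuing short upward trend) is checked:

```python
import math

def f(t):
    """Discretised Eureqa model, Equation (28), with t_n = n."""
    return (5025.73417939762 + 8.96527863946579 * t**2
            + 1.52597679067939e-6 * t**6
            + math.cos(8.96527863946579 * t)
            - 76.8453768284695 * t
            - 0.253321938783733 * t**3
            - 48.5578781261177 * math.sin(0.96446841878421
                                          + 7.16244232473996 * t))

# Three-day-ahead predictions; the model continues the short upward trend
predictions = [f(n) for n in (31, 32, 33)]
```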

Discussion
With reference to Figure 11, the local trend in prices before and inclusive of element 30 (i.e., elements 26-30) is downward, and so, based on the principle of Equation (15) for s(t) ∼ 0 and application of exponential smoothing for time series forecasting [53], for example, continuation of this trend will lead to inaccurate predictions that are inconsistent with the local increase in prices for elements 31, 32 and 33 (the circled dots in Figure 11). However, the equivalent future predictions given by Equation (28) are consistent with the actual values, which represent a short time upward trend as shown in Table 2. The predictive ability of EC can only be considered for very short future time increments (a look-forward prediction window), but this example result does provide evidence for the success of using EC exercised on a moving look-back window basis.
A quantitative study on the accuracy of this approach in terms of the look-back window and the look-forward (prediction) window relative to the local LVR lies beyond the scope of this work. However, it is to be expected that the success of this approach will be predicated on the size of the amplitude of |λ_σ| when the volatility is low. Hence, based on the results given in Figure 7, an EC moving window approach may be used when |λ_σ| ≥ 2, where λ_σ is given by Equation (26). In this context, the LVR not only provides a method of predicting trends (subject to appropriate pre- and post-filtering) based on its change in polarity but also flags when to apply EC to generate future price estimates.

Derivations of the Diffusion Equation from the Evolution Equation
So far in this paper, we have developed a predictive indicator that is based on combining the Lyapunov exponent and the volatility into a ratio (the LVR), both parameters having been derived from E 3 and computed on a moving window basis. We have then used the amplitude of this ratio to gauge the likelihood of using EC to successfully predict short term future price values. We have not yet studied the effect of applying specific models for the PDF associated with E 3, which is the subject of later sections. In particular, we show how the classical diffusion equation is a result of considering a Gaussian PDF in E 3 and the non-classical fractional diffusion equation is the result of considering a non-Gaussian PDF, in particular, a Lévy distribution, the derivations being undertaken using the associated characteristic functions.
In this section, we consider three approaches to deriving the classical diffusion equation in order to show the connectivity between this equation and E 3 in terms of applying different conditions and approximations. We start with Einstein's original approach, which is independent of the specific PDF subject only to the condition that the PDF is symmetric.

Einstein's Derivation for r ∈ R 1
In his 1905 paper [1], Einstein considered the one-dimensional case, when r ∈ R 1, and where the PDF is taken to be symmetric so that p(x) = p(−x). In this case, Equation (10) can be written as

u(x, t + τ) = ∫ p(λ)u(x + λ, t) dλ

Taylor expanding u(x, t) to first order in time, and to second order in space, we then obtain

u(x, t) + τ ∂u(x, t)/∂t = u(x, t) ∫ p(λ) dλ + [∂u(x, t)/∂x] ∫ λp(λ) dλ + (1/2)[∂^2 u(x, t)/∂x^2] ∫ λ^2 p(λ) dλ

Since p(λ) is normalised to unity and symmetric (so that its first moment vanishes), we can thus write the equation

∂u(x, t)/∂t = D ∂^2 u(x, t)/∂x^2

where

D = (1/2τ) ∫ λ^2 p(λ) dλ (30)

which is the one-dimensional diffusion equation for diffusivity D with dimensions of Length^2/Time. This derivation of the diffusion equation relies on the conditions τ << 1 and λ^2 << 1, which are required in order to truncate the Taylor series expansions of u(x, t + τ) in time and u(x + λ, t) in space. However, this derivation of the diffusion equation is independent of the PDF (subject to the condition that the PDF is symmetric), which determines the diffusivity D through Equation (30).
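The derivation above can be illustrated numerically: iterating the convolution u(x, t + τ) = p(x) ⊗ u(x, t) for a symmetric PDF spreads an initial concentration diffusively, with variance 2Dt and D given by Equation (30). The following Python sketch (illustrative only; the paper's own code is MATLAB and does not include this check) assumes a Gaussian step PDF:

```python
import numpy as np

# Iterate the evolution equation u(x, t + tau) = p(x) (convolved with) u(x, t)
# for a symmetric step-length PDF p(x), and check that the variance of the
# density field grows as 2*D*t with D = <lambda^2>/(2*tau), i.e., Equation (30).
dx, tau, sigma, steps = 0.01, 1.0, 0.2, 100
x = np.linspace(-10.0, 10.0, 2001)           # odd length: grid symmetric about 0
p = np.exp(-x**2 / (2.0 * sigma**2))
p /= p.sum()                                 # discrete normalisation of the PDF
D = sigma**2 / (2.0 * tau)                   # diffusivity, Equation (30)

u = np.zeros_like(x)
u[len(x) // 2] = 1.0                         # delta-function initial density
for _ in range(steps):
    u = np.convolve(u, p, mode="same")       # one evolution step of duration tau

mean = np.sum(x * u) / np.sum(u)
variance = np.sum((x - mean)**2 * u) / np.sum(u)
# Diffusion predicts: variance = 2*D*(steps*tau) = sigma^2 * steps = 4.0
```

The measured variance matches the diffusion prediction 2D(steps × τ), confirming that the convolution dynamics and the diffusion equation agree in this regime.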

Einstein's Derivation for r ∈ R 3
A similar approach can be used to derive the diffusion equation for r ∈ R 3, as shall now be demonstrated. In this case

u(r, t + τ) = p(r) ⊗ u(r, t), p(r) = p(−r)

can be written out in the form

u(r, t + τ) = ∫ p(λ)u(r + λ, t) d^3 λ

where λ is a vector with dimensions of length and components λ_x, λ_y and λ_z. Expanding u(r + λ, t) in terms of a three-dimensional Taylor series and retaining terms up to second order, we then obtain an equation which can be written as

∂u(r, t)/∂t = ∇ · D∇u(r, t) + V · ∇u(r, t)

where D is the diffusion tensor given by

D_ij = (1/2τ)⟨λ_i λ_j⟩, i, j = x, y, z

(⟨·⟩ denoting the average over p), and V is a flow vector which describes any drift velocity that the particle ensemble may have, given by

V = (1/τ)⟨λ⟩

Note that, as ⟨λ_i λ_j⟩ = ⟨λ_j λ_i⟩, the diffusion tensor is diagonally symmetric (i.e., D_ij = D_ji). For isotropic diffusion, when ⟨λ_i λ_j⟩ = 0 for i ≠ j and ⟨λ_i λ_j⟩ = λ^2 for i = j, and with a zero drift velocity (V = 0), the classical diffusion equation is recovered with D = λ^2/(2τ).

PDF Dependent Derivation of the Diffusion Equation
Consider the case when, for r ∈ R 1, p(x) is a zero-mean normal (Gaussian) distribution with standard deviation σ and variance σ^2, i.e.,

p(x) = [1/(σ√(2π))] exp[−x^2/(2σ^2)]

Taylor expansion to first order (in time) of Equation (10), followed by application of the convolution theorem, yields the Fourier space equation

U(k, t) + τ ∂U(k, t)/∂t = P(k)U(k, t) (31)

where

P(k) = exp(−σ^2 k^2/2)

is the characteristic function.

Suppose we now consider the case when the variance is small, i.e., σ^2 << 1. Then

P(k) ≈ 1 − σ^2 k^2/2

and Equation (31) can be written as

∂U(k, t)/∂t = −(σ^2/2τ)k^2 U(k, t) = −Dk^2 U(k, t), D = σ^2/(2τ)

through which we again obtain the diffusion equation, given that −k^2 U(k, t) ↔ ∂^2 u(x, t)/∂x^2. In this case, the "key" to the derivation of the diffusion equation is the assumption that the variance of the normal distribution is small and that τ << 1. We note that an identical analysis in the two- and three-dimensional domains yields the two- and three-dimensional diffusion equation

∂u(r, t)/∂t = D∇^2 u(r, t), r ∈ R n, n = 2, 3

Generalisation
We can generalise this approach further by writing the evolution equation in Fourier space, using the convolution theorem, as U(k, t + τ) = P(k)U(k, t), and in expanded form as

U(k, t) + τ ∂U(k, t)/∂t + (τ^2/2!) ∂^2 U(k, t)/∂t^2 + ... = [1 − σ^2 k^2/2 + (1/2!)(σ^2 k^2/2)^2 − ...] U(k, t)

so that, upon inverse Fourier transformation, we have, for r ∈ R n, n = 1, 2, 3, a hierarchy of equations. Equating terms with the same coefficients in regard to powers of τ, we have (for any positive integer m)

∂u(r, t)/∂t = D∇^2 u(r, t), ∂^2 u(r, t)/∂t^2 = D^2 ∇^4 u(r, t), ..., ∂^m u(r, t)/∂t^m = D^m ∇^{2m} u(r, t), ...

Since all such equations can be constructed from the diffusion equation, i.e.,

∂^2 u(r, t)/∂t^2 = D∇^2 ∂u(r, t)/∂t = D∇^2 [D∇^2 u(r, t)], ...

this analysis confirms that the diffusion equation is E 3 for the case when the PDF is a Gaussian distribution.

The Black-Scholes Model
There is a synergy associated with the diffusion equation and the Black-Scholes model for a call premium, which is compounded in the partial differential equation [55]

∂c(x, t)/∂t + (σ^2 x^2/2) ∂^2 c(x, t)/∂x^2 + rx ∂c(x, t)/∂x = rc(x, t)

where c(x, t) is the call premium, x is the stock price, σ is the volatility and r is the (risk-free) interest rate. Subject to specific initial and boundary conditions, this equation can be transformed into the classical diffusion equation through application of a change of variables, when it can be written in the form

∂u(x, t)/∂t = ∂^2 u(x, t)/∂x^2

which has the same Green's function solution as given in the previous section, for n = 1 and initial condition u(x, t = 0). Thus, just as the classical diffusion equation is a manifestation of the PDF associated with E 3 being normal, so the Black-Scholes model may be taken to be predicated on Gaussian processes.

The Fractional Diffusion Equation
The fractional diffusion equation (FDE) can be derived by generalising the Gaussian characteristic function P(k) = exp(−σ^2 k^2/2) to the form

P(k) = exp(−c|k|^γ)

where γ ∈ [0, 2] is the Lévy index and c is a constant with dimensions of Length^γ, as previously discussed in Section 7.2.
Using the Riesz definition of the fractional Laplacian operator ∇^γ, r ∈ R n, namely, ∇^γ ↔ −|k|^γ, then, with D_γ = c/τ, repetition of the analysis given in Section 12.4 yields the homogeneous FDE

∂u(r, t)/∂t = D_γ ∇^γ u(r, t)

where D_γ is the fractional diffusivity with dimensions of Length^γ/Time, and, for r ∈ R 3 with Cartesian coordinates (x, y, z), ∇^γ = ∂^γ/∂x^γ + ∂^γ/∂y^γ + ∂^γ/∂z^γ. Thus, we obtain a fundamental connectivity between Einstein's evolution equation and fractional calculus, i.e., application of the Lévy distribution in Equation (10) yields the FDE.
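The generalisation of the characteristic function can be illustrated numerically: inverse Fourier transforming P(k) = exp(−c|k|^γ) reconstructs the corresponding PDF, with γ = 2 recovering a Gaussian and γ = 1 the heavy-tailed Cauchy distribution. A Python sketch (illustrative only; not part of the paper's MATLAB code):

```python
import numpy as np

# Construct the PDF p(x) of a symmetric Levy distribution by inverse Fourier
# transforming its characteristic function P(k) = exp(-c|k|^gamma). The case
# gamma = 2 is Gaussian; gamma = 1 is the heavy-tailed Cauchy distribution.
N, L, c = 4096, 200.0, 0.5
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

def levy_pdf(gamma):
    P = np.exp(-c * np.abs(k)**gamma)             # characteristic function
    p = np.fft.fftshift(np.fft.ifft(P).real) / dx # discrete inverse transform
    return p / (p.sum() * dx)                     # renormalise numerically

p2 = levy_pdf(2.0)                                # Gaussian case
p1 = levy_pdf(1.0)                                # Cauchy case

# Analytic references: exp(-c k^2) <-> N(0, 2c); exp(-c|k|) <-> Cauchy(c).
gauss = np.exp(-x**2 / (4.0 * c)) / np.sqrt(4.0 * np.pi * c)
cauchy = (c / np.pi) / (x**2 + c**2)
```

Comparing `p1` and `p2` far from the origin makes the heavy (algebraic) tail of the γ = 1 case, as against the exponential tail of the γ = 2 case, immediately visible.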

Continuity Equation
For the case when γ = 1, we can use the FDE to construct the transport equation, where n̂ is the unit vector. This is a continuity equation, and, in the context of the evolution equation, it illustrates the connectivity between the concept of flux (the flow of an ensemble of particles) and the Cauchy distribution (as discussed in Section 7.2).

Time-Independent Analysis
If we consider Equation (11), then, for the time independent case, the FDE becomes

∇^γ u(r) = s(r), r ∈ R n

where u(r) is a stochastic function. Since ∇^γ u(r) ↔ −|k|^γ U(k), we can construct the solution

U(k) = −S(k)/|k|^γ

Using Equation (4) and the convolution theorem, we can then write u(r) as a convolution of s(r) with the kernel 1/|r|^{n−γ} (scaled by a constant c_{n,γ}/(2π)^n). This solution for u(r) defines the Riesz potential and has a fundamental scaling property, obtained by considering the convolution of the source function for a scaling factor λ, when it is simple to show that

[c_{n,γ}/(2π)^n] (1/|r|^{n−γ}) ⊗ s(λr) = (1/λ^γ) u(λr)

Thus, for a stochastic source, the Riesz potential u(r) is a random scaling self-affine field: a random scaling fractal. In this context, Appendix B develops the relationship between the topological dimension n, the fractal dimension D and the Lévy index γ. Thus, for example, a Mandelbrot surface, which has a fractal dimension D = 4 − γ ∈ [2, 3], can be defined in terms of the solution to the two-dimensional fractional Poisson equation (FPE)

∇^γ u(r) = s(r), r ∈ R 2, γ ∈ [1, 2]

and if s(r) has a white spectrum, i.e., a spectrum whose power spectral density function (PSDF) is a constant, then the PSDF of u(r) is determined by 1/|k|^{4−γ}.
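A surface of this type can be synthesised by spectral filtering of a white-noise source, as in the following illustrative Python sketch (the paper provides no two-dimensional code, so this is an assumption-laden illustration of the stated PSDF scaling, not the authors' method):

```python
import numpy as np

# Synthesise a "Mandelbrot surface": the random scaling fractal solution of the
# two-dimensional FPE with a white-noise source, obtained by filtering the
# source spectrum so that the PSDF of u is proportional to 1/|k|^(4 - gamma).
rng = np.random.default_rng(42)
N, gamma_levy = 256, 1.5                         # fractal dimension D = 4 - gamma = 2.5
kx = np.fft.fftfreq(N)
K = np.sqrt(kx[:, None]**2 + kx[None, :]**2)     # radial spatial frequency
K[0, 0] = 1.0                                    # placeholder at DC (zeroed below)
S = np.fft.fft2(rng.standard_normal((N, N)))     # white-spectrum source s(r)
U = S / K**((4.0 - gamma_levy) / 2.0)            # amplitude filter: |U|^2 ~ 1/|k|^(4-gamma)
U[0, 0] = 0.0                                    # remove the (singular) DC component
u = np.fft.ifft2(U).real                         # the random scaling fractal surface
```

Rendering `u` as a grey-scale image gives the familiar cloud-like fractal texture, rougher for smaller D (larger γ).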
We note that, by Taylor expanding Equation (11) for the time-independent case, then in Fourier space we obtain U(k) = P(k)U(k) + S(k), and with P(k) = exp(−c|k|^γ) it can be shown that an asymptotic result is obtained [37] which yields a similar inverse power law scaling, a result that is characteristic of scale-invariant field theory, when the field equations are scale invariant so that, for any solution φ(r), say, of the field equations, there exist other solutions of the form λ^Δ φ(r) for an exponent Δ (not necessarily related to γ).

Time-Dependent Analysis
We study the FDE with r ∈ R 1 for a stochastic source s(x, t), and consider the generic Green's function solution

u(x, t) = g(|x|, t) ⊗ s(x, t)

where the convolution operation is taken to apply to both x and t. We are then required to compute the Green's function in this case.

Green's Function for the Fractional Diffusion Equation
We consider an evaluation of the Green's function for the fractional diffusion equation, which is defined as the solution to Equation (36). Writing the Green's function in terms of its Fourier transform g(|x|, ω), it is well known that the outgoing Green's function for the classical case is given by Equation (37). Generalising this result for Equation (36), we therefore consider the expression for g(|x|, ω), given that when γ = 2 and −iω/D_γ := ω^2/D_γ, Equation (37) is recovered. To find the time evolution of the Green's function, we are required to take the inverse Fourier transform of g(|x|, ω) and evaluate the resulting integral. This can be achieved by writing the exponential function exp[i(−iω/D_γ)^{1/γ}|x|] as a series, which yields the series solution given by Equation (40), where H(t) is the Heaviside step function, the function δ^{[(n−1)/γ]}(t) being defined in terms of Equation (9).
Note that, from Equation (39), when γ = 2,

g(|x|, t) = [H(t)/√(4πD_2 t)] exp[−x^2/(4D_2 t)]

which is the Green's function for the classical diffusion equation, where D_2 is the classical diffusivity.

Asymptotic Solution
From Equation (40), it is clear that we can define the time dependent Green's function for the case when x → 0. The Green's function solution to Equation (34) is then given by

u(t) = g(t) ⊗ s(t)

where u(t), g(t) and s(t) represent the functions u(0, t), g(0, t) and s(0, t), respectively. We note that as x → 0, Equation (41) reduces to Equation (44), and that this result is consistent with Equation (42), given that for γ = 2, Γ(1/2) = √π. The scaling relationship associated with Equation (43) is analogous to that of Equation (32), and from Equation (33), the relationship between the fractal dimension D and the Lévy index in this case is D = (5 − 2/γ)/2 ∈ [1, 2] ⇒ γ ∈ [2/3, 2]. Figure 12 shows examples of the function u_γ(t) for γ = 2/3, 1 and 3/2 using the same stochastic source function s(t). Comparing these results with the example given in Figure 3, it is clear that the case of γ ∼ 1 provides a time series that (through visual inspection) better matches that of the financial signal. This is verified through application of regression applied to the data given in Figure 3, which yields γ = 1.1455 based on assuming that the data has an amplitude spectrum |U(ω)| with the following spectral power law:

|U(ω)| ∼ 1/|ω|^{1/γ}, |ω| > 0 (45)

This value of γ is the one associated with the data given in Figure 3 in its entirety, and, like the Lyapunov exponent and the volatility, it can be computed on a moving window basis to obtain a (short) time dependent signature, which is explored further in Section 15. The regression algorithm used to achieve this result is given in Appendix A.7 and is based on computing the exponent α associated with the power law U(ω) ∼ |ω|^α using the least squares method (LSM). For a uniformly sampled frequency (or time) series u_n > 0 ∀n, n = 1, 2, ..., N, α is given by the LSM slope

α = [N Σ_n X_n Y_n − (Σ_n X_n)(Σ_n Y_n)] / [N Σ_n X_n^2 − (Σ_n X_n)^2], X_n = ln|ω_n|, Y_n = ln|U_n|

The Lévy index is then related to α by the equation γ = −1/α. Note that computing γ using the LSM requires computation of the amplitude spectrum using a discrete (fast) Fourier transform.
The data used is that in the positive half-space of the amplitude spectrum with the DC component removed, thereby adhering to the condition | ω |> 0 in the spectral power law defined by Equation (45). Thus, the LSM is applied for | U n |, n = 2, 3, ..., N/2.
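The following Python sketch (mirroring, but not reproducing, the MATLAB function of Appendix A.7) applies this LSM procedure to a synthesised signal with a known spectral power law, recovering γ = −1/α:

```python
import numpy as np

def alpha_lsm(u):
    # Least-squares fit of log|U_n| against log n over the positive half-space
    # of the amplitude spectrum, with the DC component removed.
    U = np.abs(np.fft.fft(u))[1:len(u) // 2]
    w = np.arange(1, len(u) // 2)
    return np.polyfit(np.log(w), np.log(U + 1e-12), 1)[0]   # slope = alpha

# Synthesise a self-affine signal whose amplitude spectrum ~ |w|^(-1/gamma).
rng = np.random.default_rng(1)
N, gamma_true = 4096, 1.5
W = np.fft.fft(rng.standard_normal(N))
w = np.abs(np.fft.fftfreq(N)) * N        # integer frequency index
w[0] = 1.0                               # avoid division by zero at DC
u = np.fft.ifft(W / w**(1.0 / gamma_true)).real

alpha = alpha_lsm(u)                     # expected: alpha close to -1/gamma
gamma_est = -1.0 / alpha
```

Because the periodogram of a single realisation is noisy, the recovered γ fluctuates about the true value; in practice (as in the paper) the fit is applied over a moving window to obtain a time-varying estimate.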

Discussion: Impulse Response Functions for Classical and Fractional Diffusion
Given Equation (43), it is clear that, in the asymptotic limit x → 0, the difference between classical and fractional diffusion is compounded in the different Green's functions given by Equations (42) and (44). Thus, ignoring the scaling parameters in Equations (42) and (44), as well as those of their Fourier transforms, we can compare the asymptotic solutions as follows:
• Classical Diffusion: u(0, t) ∼ (1/√t) ⊗ s(t) (46)
• Fractional Diffusion: u(0, t) ∼ (1/t^{1−1/γ}) ⊗ s(t), γ ∈ (1, 2] (47)
Unlike classical diffusion, fractional diffusion is characterised by a range of values of the Lévy index. The efficient market hypothesis is predicated on classical diffusion processes, based on E 3 for a Gaussian distribution. The ramification of this is that the time series model for u(0, t) given by Equation (46) is characterised by the impulse response function (IRF) 1/√t. By comparison, the fractal market hypothesis is predicated on fractional diffusion processes, based on E 3 for a Lévy distribution. The consequence of this is that the time series model for u(0, t) given by Equation (47) is characterised by the IRF 1/t^{1−1/γ}. Since financial signals tend to be non-stationary random fractals, variations in γ as a function of time are informative. However, before we study this, we consider another way to derive what is, in effect, the same basic result but via a different approach, an approach that is also based on E 3 but obtained via the GKFE subject to application of an appropriate memory function. This is discussed in the following section.

Solution to the GKFE for an Orthonormal Memory Function
In this section, we show that the temporal power law which characterises Equation (43), i.e., 1/t^{1−1/γ}, can be obtained from Equation (14) for a specific orthonormal memory function. The purpose of this is to show another route to deriving the power law, one which is informative in that it is based on the application of a memory function alone and does not involve specific application of the FDE as presented in the previous section. In this case, and for r ∈ R 1, by writing Equation (14) in an appropriate operator form, we can construct a Green's function solution in which g(t) is the associated Green's function. Provided the Laplace transform of the function n(t) exists, we can write this Green's function solution in a form involving the function h(t), where

h(t) ↔ n̄(s)/[τs + n̄(s)]

and ↔ denotes the Laplace transformation, i.e., the mutual transformation from t-space to s-space. This result is obtained by using the convolution theorems for the Fourier and Laplace transforms, when Equation (14) can be written in Fourier-Laplace space, leading to the equation ū(k, s) = h̄(s)ū(k, s)p̄(k).
Inverse Fourier-Laplace transformation then gives Equation (49). Equation (49) supports an iterative solution, and we may therefore consider an approximation based on the first iterate. The condition required for this approximation to apply is that ‖h(t)‖_1 << 1, where ‖·‖_1 denotes the 1-norm.
Further, if we consider the case when u_0(x, t) = δ(x)s(t), then we can write the first iterate in terms of t alone. If we now choose a memory function m(t) whose Laplace transform is s^{β−1}, then the orthonormality property n(t) ⊗ m(t) = δ(t) is satisfied if the Laplace transform of n(t) is s^{1−β}, given that, from the convolution theorem for Laplace transforms, n̄(s)m̄(s) = 1. In this case

u(t) = [1/Γ(β)] ∫_0^t s(ξ)/(t − ξ)^{1−β} dξ

This solution is characterised by the Riemann-Liouville (fractional) integral, which has self-affine properties, i.e., properties that exhibit "stochastic trending characteristics". In other words, u(t) defines a random scaling fractal function whose impulse response function is 1/t^{1−β}, a result that, in light of the above analysis, has been shown to be a PDF independent first order solution to the GKFE for a memory function

m(t) = 1/[Γ(1 − β)t^β]

In order to comply with Condition (50), we require a constraint which is satisfied for the case when τ >> 1, β ∈ [0, 1).
Clearly, ignoring differences in scaling, compatibility of this solution for u(t) with Equation (43) is obtained when β = 1/γ. Thus, subject to the conditions imposed in each case we have shown that there exist temporal solutions to the FDE and the GKFE that exhibit a fundamental power law of 1/t 1−1/γ for Lévy index γ. In the former case, the solution is predicated on defining the PDF in the evolution equation (a Lévy distribution) whereas in the latter case, the result is independent of the PDF but predicated on the definition of the memory function (with power law 1/t β ). In both cases, the solution is characterised by a fractional integral which is self-affine, a property that is fundamental to the analysis and interpretation of financial signals and underpins the fractal market hypothesis.

Time Varying Lévy-and α-Indices
As with the other indices considered in this paper, the time dependence of γ for a financial signal can be obtained by computing it over a moving (look-back) window. Figure 13 shows an example of this short time signature for a financial signal (the first 1000 elements of the FTSE 100 prices given in Figure 3), normalised for display purposes. In this example, γ has been computed using the function given in Appendix A.7. This result assumes that the short-time amplitude spectrum adheres to the scaling law |ω|^{−1/γ}, and, strictly within the context of this spectral model, the numerical range of γ is only limited by the original definition of a Lévy distribution, i.e., γ ∈ [0, 2] as given in Equation (17). The statement γ ∈ (1, 2] given in Equation (47) is a result of imposing the condition that 0 < 1/γ < 1 in order that the Fourier transform pair relationship given by Equation (3) is satisfied. However, if we arbitrarily consider a modified IRF given by 1/t^{1−1/γ}, γ ∈ [0, 2], then it is clear that we can consider a short time scaling function given by (for t > 0)

u(t) = 1/t^{1−1/γ} = t^α, α = (1 − γ)/γ

for the case when s(t) = δ(t). This result has similar properties to the Lyapunov exponent in terms of providing an 'α-index' that reflects upward (for α > 0) and downward (for α < 0) trends. Further, as with the LVR considered in Section 10 and compounded in Equation (26), we can scale the α-index by the inverse of the volatility to produce the alpha-to-volatility ratio (AVR) index given by

α_σ[n] = α[n]/σ[n]

In practice, the value of α can easily be computed using the LSM, which is compounded in the function Alpha given in Appendix A.8.
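A Python sketch of the AVR computation on a moving look-back window follows (illustrative only: the volatility proxy used here, the standard deviation of price differences over the window, is an assumption on our part; the paper's own Volatility and Alpha functions are those of Appendix A):

```python
import numpy as np

def alpha_index(u):
    # Log-log least-squares fit of the amplitude spectrum power law |U| ~ w^alpha
    # (positive half-space, DC removed), as in the LSM of the previous section.
    U = np.abs(np.fft.fft(u))[1:len(u) // 2]
    w = np.arange(1, len(u) // 2)
    return np.polyfit(np.log(w), np.log(U + 1e-12), 1)[0]

def avr(prices, T):
    # Alpha-to-volatility ratio over a moving look-back window of length T.
    out = np.zeros(len(prices))
    for m in range(T, len(prices)):
        window = prices[m - T:m]
        vol = np.std(np.diff(window)) + 1e-12   # assumed volatility proxy
        out[m] = alpha_index(window) / vol
    return out

rng = np.random.default_rng(0)
prices = 100.0 + np.cumsum(rng.standard_normal(1000))   # synthetic random walk
a = avr(prices, 64)
```

As with the LVR, the resulting signature `a` would in practice be pre- and post-filtered before being used to flag changes in trend polarity.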
Following the same procedure to that discussed in Section 10.4 (specifically, Figure 7), Figure 14 shows example results of running Backtester for the first 1000 elements of the FTSE 100 prices given in Figure 3, but for the AVR index α_σ[n] instead of λ_σ[n]. The example given is for Backtester(30,10,1000), which yields a combined entry/exit (long/short) accuracy of 60.98%. Note that this result is obtained by replacing the code

L(m)=Lyapunov(s,T,1); % Compute the Lyapunov Exponent.
V(m)=Volatility(s,T); % Compute the Volatility.
R(m)=L(m)/V(m);       % Compute the Lyapunov to Volatility Ratio (LVR).

in function Backtester given in Appendix A.6 with the equivalent computation of the AVR.
Apart from the scale in amplitude, the signature of the AVR is very similar to that of the LVR (comparing Figures 14 and 7). However, the trend prediction accuracy is relatively low and the computational time greater (due to the repeated application of the LSM), which suggests that the LVR is a more reliable and computationally efficient index. However, this statement must be understood within the context of the limited data used for this publication and must be quantified further using WT-maps for a range of financial signals and the functions given in Appendix A, a study that lies beyond the scope of this work.

Summary, Conclusions and Open Questions
One of the principal themes of this paper has been to develop financial indices that in all cases can be traced back to a fundamental field equation of statistical physics, namely, Einstein's evolution equation, Equation (10). In this context, we have developed expressions for the following financial indices:
• the Lyapunov Exponent;
• the Volatility;
• the Lévy Index.

Summary
We have explored the ability of the time varying Lyapunov-to-volatility ratio (LVR) to predict the trend of a financial signal in terms of a change in polarity and the period over which that polarity is sustained, subject to pre- and post-filtering as discussed in Sections 10.1.1 and 10.1.2, respectively. The filtering processes are critically dependent on the values of the look-back windows that are applied, and a quantification of the values required to optimise the predictive power has been explored in Section 10.4 in terms of the WT map. Application of the LVR provides a time signature whose maximum and minimum values correlate with regions of a financial signal that have upward and downward trends with low volatility, respectively. In Section 11, a short study has been presented to use this result as a criterion for the application of EC to predict short term future prices. In this context, computing the time varying LVR has two primary uses:
• predicting the entry points in time for making, holding or withdrawing an investment;
• assessing the position in time when application of EC can be expected to yield optimally accurate short term price predictions.
It is noted that in regard to the application of EC, the volatility alone can be used as an assessment criterion, low volatilities providing a flag for the use of EC on a moving window basis to update previous price predictions. While the derivation and the application of the LVR is predicated on the evolution equation (at least, as demonstrated in this paper), it does not rely on the application of fractional calculus which has been a focal issue in regard to the composition of this paper. Thus, the latter half of this paper was devoted to an analysis of fractional calculus with the aim of showing how, in particular, the classical diffusion and fractional diffusion equations are both directly related to the evolution equation and can be derived directly from it, the difference between the two equations being compounded in the PDF that "governs" the spatial distribution of the density field.
We have shown that the classical diffusion equation is predicated on a Gaussian distribution and that the fractional diffusion equation is predicated on a (symmetric) Lévy distribution. In turn, it has been shown that at the spatial origin (i.e., as x → 0), the temporal impulse response functions for these two cases are given by 1/ √ t and 1/t 1−1/γ , respectively, functions that underpin the efficient and fractal market hypotheses, respectively. In deriving these functions, we have attempted to show the intrinsic connectivity between the application of Lévy statistics to the evolution equation, the fractional diffusion, the application of fractional calculus for solving this equation and the analysis of the solution leading directly to the description of a stochastic self-affine field-a random scaling fractal signal.
In addition to the theoretical concepts presented in this paper, we have provided a set of numerical algorithms that allows the reader to reproduce the results given. These algorithms are based on the m-code given in Appendix A. They have been designed to give interested readers the facility to study the methods used for the wide variety of financial time series available online and to develop the algorithms further as required. Their development has been based on maintaining consistency with the theoretical analysis derived at the expense of any further and more sophisticated software engineering. Hence, issues such as error checks on input/output data, processing parameters and data/processor compatibility have not been considered.

Conclusions
The application of fractional calculus in mathematical finance is well known and in this paper we have provided a unified approach to showing that this is the case using Einstein's evolution equation as a fundamental field equation. This approach has the potential for the development of a range of new models for a financial signal by introducing different PDFs in Equation (11) to those that have been considered here, the categorisation of such models for different time series lying beyond the scope of this publication.
The primary results are given in Section 10, which shows that a relatively high accuracy for predicting upward and downward trends can be obtained, thereby providing the potential for a profitable trading strategy to be implemented. However, it must be noted that the quantitative results given in Section 10 in regard to this statement are strictly applicable only to the data used (i.e., the daily FTSE100 and Euro-US dollar exchange rate). Application of the algorithms presented must therefore be fully quantified and characterised for any and all specific financial time series data used, "quantification" being compounded in the associated WT map.
The use of EC discussed in Section 11 verifies that short time price prediction can be exercised if the LVR has reached a maximum or minimum threshold in excess of +2 or −2, respectively. However, as pointed out in Section 11, the material presented in this respect has only been introduced to complement the main theme of this paper. Further studies are required to assess the accuracy of EC prediction on a moving window basis in terms of the number of future projected price values which maintain an appropriate forecasting accuracy and the associated look-back window used to generate short time forecasting equations of the type given by Equation (27), for example.

Open Questions
There are a number of open questions which this paper has raised that are the subject of further investigation. The reader is invited to consider the following examples:
• The specific form of the evolution equation used in this work has been based on Equation (11), and it may be of value to consider the effect of the decay term −Ru(r, t) given in Equation (12).

• Given that the critical step in deriving the IRF 1/t^{1−1/γ} (from which γ can be computed) is the asymptotic condition x → 0, what are the consequences of developing a numerical algorithm to compute γ when this condition is negated?
• What is the impact of the LVR and AVR in terms of their possible inclusion into machine learning algorithms that use sets of more conventional financial indices and other statistical metrics for forecasting?
In regard to more generic questions, the following examples may be of interest:
• In regard to E 3, the PDFs considered in this work are the delta function, the Gaussian function and the Lévy distribution, which provide models associated with the random walk, efficient market and fractal market hypotheses, respectively. An investigation into the models for u(r, t), and metrics thereof, associated with the application of different PDFs (including non-symmetric distributions) is therefore warranted.

• Similarly, what is the effect of introducing different memory functions into the generalised Kolmogorov-Feller equation (i.e., E 3 in all but name, expressed in terms of a memory function m(t)), and, further, is it possible to develop an inverse solution in which a financial signal u(t) can be used to derive an estimate of m(t) for a known distribution p(r)?

• What is the relationship/connectivity (or otherwise) between fractional and Itô calculus in regard to E 3 ?

Final Remarks
One of the primary aims of this paper was to realise the connectivity compounded in Table 1, and, in this broader context, to show the relationship between E 3 and fractional calculus through the application of a non-Gaussian distribution, specifically a symmetric Lévy distribution whose characteristic function is a generalisation of the Gaussian function (for a real constant c) exp(−c|k|^2) to exp(−c|k|^γ), 0 < γ < 2. The effect of this has been to show that there is a close relationship between non-Gaussian processes of this type and the self-affine characteristics of stochastic signals modelled in terms of the solution to a fractional differential equation, i.e., the fractional diffusion equation. This approach provides the basis for a more general study that transcends the specific distributions considered in order to derive stochastic models that are a more complete and accurate description of the varied properties of financial signals, in which the application of fractional calculus is a central theme.
In terms of the computational methods presented, a primary aim is to classify the WT maps for a range of different financial data in terms of the LVR and AVR, and to further quantify the accuracy of these two indices in regard to different data types. The purpose of this is to categorise the type of financial time series that are best suited to the trend analysis proposed in terms of a robust predictive accuracy. In turn, this exercise will inform a quantification of the use of EC for predicting short term prices, with the aim of obtaining a quantitative relationship between the look-back window used, the number of future prices that can be predicted with a specified accuracy, and the amplitude of the LVR and/or AVR for a specific financial signal.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

Appendix B. Relationship between the Lévy Index and the Fractal Dimension
Consider a simple Euclidean straight line of length L over which we 'walk' a shorter 'ruler' of length δ. The number of steps taken to cover the line, N[L, δ], is then L/δ, which is not always an integer for arbitrary L and δ. For a fractal curve, N scales as δ^{−D}, where D is the fractal dimension, and for a fractal signal with Hurst exponent H this leads to D = 2 − H, r ∈ R 1. Thus, for example, a signal where H = 1/2 has a fractal dimension of 1.5. For higher topological dimensions n, using a similar box counting measure, we have

D = n + 1 − H, r ∈ R n (A3)

Consider a random scaling fractal signal defined by a time dependent function f(t). Let f_T(t) denote a component of the function which is of finite support:

f_T(t) = f(t), 0 < t < T; 0, otherwise

with Fourier transform F_T(ω), which has a power spectrum defined by

P_T(ω) = (1/T)|F_T(ω)|^2, P(ω) = lim_{T→∞} P_T(ω).
Let the function g(t) be the result of scaling the function f(t) by 1/a^H for a real constant a > 0. Then we can write

g_T(t) = g(t) = (1/a^H) f(at), 0 < t < T; 0, otherwise.
We can therefore construct the equation showing that

G_T(ω) = (1/a^{H+1}) F_T(ω/a).
The power spectrum of g_T(t) is therefore given by P_{gT}(ω) = (1/a^{2H+1}) P_T(ω/a), T → ∞, and setting ω = 1 and then replacing 1/a by ω, we obtain P(ω) ∝ 1/|ω|^β, β = 2H + 1. The corresponding amplitude spectrum A(ω) is therefore characterised by A(ω) ∝ 1/|ω|^{β/2}. The result β = 2H + 1 applies to the case when r ∈ R 1, and for r ∈ R n generalises to β = 2H + n, so that from Equation (A3) we obtain Equation (33).