Linear Stochastic Models in Discrete and Continuous Time

An autoregressive moving-average model in discrete time is driven by a forcing function that is necessarily limited in frequency to the Nyquist value of π radians per sampling interval. The linear stochastic model that is commonly regarded as the counterpart in continuous time of the autoregressive moving-average model is driven by a forcing function that consists of the increments of a Wiener process. This function is unbounded in frequency. The disparity in the frequency contents of the two forcing functions creates difficulties in defining a correspondence between the discrete-time and continuous-time models. These difficulties are alleviated when the continuous-time forcing function is limited in frequency by the Nyquist value. Then, there is an immediate one-to-one correspondence between the discrete-time autoregressive moving-average model and its continuous-time counterpart, of which the parameters can be readily inferred from those of the discrete-time model. These parameters can also serve as the starting point of an algorithm that seeks the parameters of the continuous-time model that is driven by a forcing function of unbounded frequencies.


Introduction
Statistical time series analysis commonly depends on data sampled at regular intervals from continuously varying signals. The relationships subsisting in the sampled data are usually characterised without reference to the underlying continuous signal; in which case, it is assumed that the domain of the process generating the data is the set of discrete points in time that includes the sample points.
Nevertheless, it is sometimes desirable to attempt to reconstitute the continuous signal from the sampled data by bridging the gaps between the data points. Also, it may be required to construct a model of the process generating the data that represents time as a continuum rather than as a discrete sequence of dates.
The Nyquist-Shannon sampling theorem establishes conditions under which it is possible to recover the continuous signal perfectly from its sampled points. The theorem presupposes that the signal is amenable to a Fourier decomposition that expresses it as a weighted combination of trigonometric functions that are bounded in frequency.
To enable the reconstruction of the signal, the sampling must be sufficiently rapid to capture all of its information. It is sufficient to make exactly two observations in the time that it takes for the signal component of highest frequency to complete a single cycle, whereby the highest frequency corresponds to π radians per sampling interval. Then, the signal can be reconstructed via a weighted combination of trigonometric functions that are equal in number to the data points and that are equally spaced in the Nyquist frequency interval [0, π].
An alternative, but equivalent, way of reconstructing the signal is via a process of interpolation in which kernel functions of the appropriate scales are attached to each of the data points. The values of the data points determine the scales of the associated functions. A continuous trajectory is derived by summing the superimposed kernel functions.
On the assumption that the data set is doubly infinite, the appropriate kernel is a sinc function, which is formed by applying a bi-directional, hyperbolic taper to a sine function with a frequency of π radians per sample interval. For a finite sample of T points, it is appropriate to employ a Dirichlet kernel, which would be formed by wrapping the sinc function around a circle of circumference T and by adding the overlying ordinates. In either case, this method of reconstructing the continuous trajectory may be described as sinc-function interpolation.
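The interpolation can be sketched in a few lines of code. The following Python fragment is a minimal illustration, in which a short finite sample stands in for the doubly-infinite sequence; the function names are merely illustrative.

```python
import math

def sinc(t):
    # The sinc kernel sin(pi*t)/(pi*t), with the limiting value of 1 at t = 0.
    if t == 0.0:
        return 1.0
    return math.sin(math.pi * t) / (math.pi * t)

def sinc_interpolate(samples, t):
    # Attach a sinc kernel to each sampled ordinate x_k and sum the
    # superimposed kernels to obtain the trajectory at the point t.
    return sum(x_k * sinc(t - k) for k, x_k in enumerate(samples))
```

At an integer value of t, every kernel but one vanishes, and the interpolated trajectory passes exactly through the corresponding data point.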
The conditions that are sufficient for the estimation of a model of the process generating the continuous trajectory on the basis of the sampled data are less stringent than those for the reconstruction of the trajectory. The first requirement is for a statistical specification of the forcing function or primum mobile of the process, which is assumed to be a continuous white noise.
The common assumption, which underlies a conventional linear stochastic differential equation (LSDE), is that this forcing function consists of the increments of a Wiener process, which constitute a continuous stream of infinitesimal and uncorrelated impulses. This function will deliver a discrete-time white-noise process whenever it is sampled at regular intervals, regardless of the length of those intervals. The process is also unbounded in frequency, whereas a discrete-time white-noise process has a limiting frequency of π radians per sample interval.
The disparity between the frequency limit of the discrete-time white noise and the unbounded frequencies of the conventionally defined continuous-time white noise makes it difficult to establish a correspondence between a discrete-time autoregressive moving-average (ARMA) model and an LSDE. In this paper, we shall alleviate the difficulty, whenever it is appropriate to do so, by defining a continuous white-noise process that is limited in frequency by the Nyquist value of π radians per sampling interval.
This will enable a simple one-to-one correspondence to be established between a discrete-time ARMA model and a continuous-time autoregressive moving-average (CARMA) model with the same frequency limit as the ARMA model. (In this paper, the acronym CARMA will be used exclusively to denote such a model, whereas it is also commonly used to denote an LSDE with a forcing function of unbounded frequencies.) It is notable that the continuous frequency-limited noise will deliver a discrete white-noise sequence only if it is sampled at the appropriate rate of exactly two observations in the time it takes the element at the Nyquist frequency to complete one cycle.
The next requirement in estimating a model of a continuous process from sampled data is that the frequency arguments of the poles of the transfer function mapping from the forcing function to the observable output should lie in a specified band of a width not exceeding 2π radians. It is common to specify the Nyquist frequency band [−π, π]; and this will be taken for granted in the paper.
The plan of the paper is as follows. In Section 2, the means of reconstructing a continuous signal from its sampled ordinates is considered, both in the case of a finite sequence and in the case of a doubly-infinite sequence, which is the subject of the sampling theorem of Nyquist (1924, 1928) and of Shannon (1949). Section 3 provides the essential results concerning discrete-time ARMA models and continuous-time LSDE models in which the forcing function is a white-noise process comprising the increments of a Wiener process, and Section 4 concerns the autocovariance functions and the spectral density functions of these models.
A central result of the paper is contained in Section 5, which derives a version of the frequency-limited ARMA process in continuous time. This is achieved by attaching sinc functions to the ordinates of the discrete-time ARMA process. However, it is also required to derive a frequency-limited LSDE to represent the process that generates the continuous trajectory of the signal.
The principle of impulse invariance serves to establish an immediate one-to-one relationship between the parameters of the ARMA model and those of the frequency-limited LSDE model, or a CARMA model in the terminology of this paper. To find the parameters of an LSDE that is unbounded in frequency, the alternative principle of autocovariance equivalence must be invoked. This requires an iterative procedure, which is described in Section 6. This section, which concludes the paper, also provides examples of LSDE models of both varieties.
By replacing in (2) the index t = 0, 1, ..., T − 1 of the sample by a continuous variable t ∈ [0, T), a trajectory is generated that interpolates the sampled elements according to the method of Fourier interpolation.
When T is even-valued, the nonzero frequencies of the trigonometric functions of (2) range from ω_1 = 2π/T to ω_{T/2} = π. The former corresponds to one cycle in the time spanned by the data, and the latter is the Nyquist frequency, which is the frequency of the function (−1)^t and which represents the highest frequency that is observable within sampled data.
If, in its trigonometrical representation, the underlying signal contains elements at frequencies in excess of π, then these frequencies will be confounded in the sampled sequence with frequencies falling within the observable Nyquist range of [0, π] radians per sampling interval, which are described as their aliases.
If the limiting frequency within the signal is π radians per unit interval, then it should be possible, via equation (2), to reconstruct the trajectory of the signal over the interval [0, T) by letting t vary continuously in the interval.
The Fourier synthesis of (2) depicts the segment of the signal over the interval [0, T) as a single cycle of a continuous periodic function. For this to be an accurate representation of the signal, it is necessary that x_{T−1} ≃ x_0, which is to say that there must be no significant discontinuity where one replication of the segment ends and another begins.
(In effect, the segment representing the transition between x_{T−1} and x_0 is required to be synthesised from the available Fourier frequencies. To replicate a jump discontinuity or "saltus" in the underlying signal would require an infinite Fourier sum comprising an unbounded set of frequencies.) Various recourses are available to ensure that this requirement is met. Thus, it may be appropriate to replace the sampled ordinates by their deviations from a line that passes through the points x_0 and x_{T−1} or that passes close to them. Then, once it has been generated by the Fourier interpolation of the deviations, the continuous trajectory may be added back to the line. This technique may be employed whenever there is a significant trend in the data.
An alternative recourse, which is available when there is no significant trend and which can also be applied to the detrended data, is to interpolate a segment of pseudo data between x_{T−1} and x_0 so as to ensure a continuous transition between these two points. Once a continuous trajectory has been created, the segment based on the pseudo data can be discarded. Both of these recourses are available in the computer program IDEOLOG.PAS, available on the author's website, and examples have been provided in Pollock (2015).
Using the expression for ξ_j from (1), the representation of the continuous signal x(t) over the interval [0, T) that is based on T elements of sampled data can be written as follows. Here, it is understood that ω_{(T/2)+j} = ω_{j−(T/2)}. Therefore, in the case where T is odd-valued, the inner summation of the final expression gives rise to the Dirichlet kernel of (4). (In the case where T is even-valued, there is a marginally more complicated expression for the Dirichlet kernel that has been given, for example, by Pollock (2008), where the derivations are provided in both the odd and even cases.) Thus, the continuous signal can also be expressed in terms of the Dirichlet kernel. This method of recovering the continuous signal from its sampled ordinates is described as kernel interpolation.
Since theoretical time-series analysis commonly presupposes that the data constitute a doubly-infinite sequence, it is also appropriate to consider the limiting form of the discrete Fourier transform as the size of the sample grows indefinitely. This is the discrete-time Fourier transform, which is the limiting form of equation (1). By allowing t to vary continuously, there arises an expression for an underlying continuous signal that is limited in frequency to π radians. The integral on the RHS of this expression is evaluated as a sinc function; and putting this into the RHS gives a weighted sum of sinc functions φ(t − k), each centred on an integer point t = k. Equation (9) represents sinc-function interpolation, and it corresponds to the classic result of Shannon (1949).
It is easy to see that the Dirichlet kernel represents a circularly wrapped version of the sinc function. Compare the relationship of the sinc function φ(t) to its Fourier transform with the analogous relationship for the Dirichlet kernel φ°(t). The equality φ°(ω_j) = φ(ω_j) must prevail at each of the Fourier frequencies ω_j, and, for the corresponding integrals with respect to t to be equal, it is necessary that the Dirichlet kernel should be the sum of the sinc functions displaced successively by T. Thus, the Dirichlet kernel would be created by wrapping the sinc function around a circle of circumference T and by adding the overlying ordinates.
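The wrapping can be verified numerically. In the following sketch, the Dirichlet kernel for an odd value of T, taken here in the form sin(πt)/{T sin(πt/T)}, is compared with a truncated sum of sinc functions displaced successively by T; the truncation point is an arbitrary choice.

```python
import math

def sinc(t):
    if t == 0.0:
        return 1.0
    return math.sin(math.pi * t) / (math.pi * t)

def dirichlet(t, T):
    # The Dirichlet kernel for odd-valued T, with a limiting value of 1
    # whenever t is a multiple of T.
    if abs(math.sin(math.pi * t / T)) < 1e-12:
        return 1.0
    return math.sin(math.pi * t) / (T * math.sin(math.pi * t / T))

def wrapped_sinc(t, T, n_wraps=10000):
    # Wrap the sinc function around a circle of circumference T by adding
    # the ordinates that come to overlie one another.
    return sum(sinc(t + j * T) for j in range(-n_wraps, n_wraps + 1))
```

At the integer points other than multiples of T, both kernels vanish, which is the interpolation property that makes the trajectory pass through the data points.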
An ARMA process is supported in the frequency domain on the Nyquist interval, which is (0, π], if one is considering trigonometric functions, or (−π, π], if one is considering complex exponential functions. It has been shown by Pollock (2017) that severe biases can arise in the estimation of such a model whenever the process is bounded in frequency by a value ω_c < π that is less than the Nyquist frequency.
An appropriate recourse, in that case, may be to reconstitute the continuous signal by Fourier interpolation and to resample this at the rate of one observation in each interval of π/ω_c time units. In the case where k = π/ω_c is an integer, it is appropriate to subsample the data by taking one in every k sample points. The effect, in either case, will be to expand the spectrum of the data to cover the full Nyquist interval.
A different circumstance arises when the maximum frequency ω_c > π of the underlying continuous signal exceeds the Nyquist value. The continuous signal x(t) and its Fourier transform ξ(ω) are related by the Fourier integral. For an element of the sampled sequence {x_t; t = 0, ±1, ±2, ...}, there is a corresponding expression in terms of a function ξ_S(ω) that is defined on the interval [−π, π). The equality of the two integrals at the points x_t = x(t) implies that ξ_S(ω) must be the sum of the displaced copies of ξ(ω). If the domain of ξ(ω) is not limited in frequency to [−π, π), then ξ_S(ω) ≠ ξ(ω) and aliasing will occur. It can be seen that, in this case, the function ξ_S(ω) arises from wrapping ξ(ω) around a circle in the frequency domain with a circumference of 2π.
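Aliasing is readily demonstrated. In the following fragment, a cosine at the frequency π + d, which exceeds the Nyquist value, is sampled at the integers; its samples coincide with those of its alias at the frequency π − d, which lies within the Nyquist interval.

```python
import math

def sampled(freq, n):
    # Sample cos(freq * t) at the integer points t = 0, 1, ..., n - 1.
    return [math.cos(freq * t) for t in range(n)]

d = 0.4
above_nyquist = sampled(math.pi + d, 20)   # frequency in excess of pi
alias = sampled(math.pi - d, 20)           # its alias within [0, pi]
```

The two sampled sequences are indistinguishable, which is why the frequencies in excess of π are confounded with those within the observable range.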

Models in Discrete and Continuous Time
The linear stochastic differential equation of orders p and q < p, denoted by LSDE(p, q), is specified by the following equation, wherein D is the derivative operator such that Dx(t) = dx(t)/dt.
In describing the mapping from the forcing function ζ(t) to the output variable y(t), we may consider the rational form of the differential equation, which, on the assumption that there are no repeated roots, has the following partial-fraction decomposition: The sequence {y_t; t = 0, ±1, ±2, ...}, which has been sampled at unit intervals from the continuous trajectory of y(t), can be construed as a process generated by a discrete-time ARMA(p, p − 1) model. The latter is the discrete counterpart of the LSDE(p, q) model, and it is described as the exact discrete linear model (EDLM).
The EDLM can be derived, in theory, by converting each partial-fraction component of the LSDE into a corresponding discrete-time component. In the case of the jth component, which may be real or complex valued, the integral on the interval (−∞, t] within the final expression can be separated into two parts, over the intervals (−∞, t − 1] and (t − 1, t]. Here, µ_j = e^{κ_j}, and ε_j(t) is a discrete white-noise process. The result can be written as (1 − µ_j L)ν_j(t) = ε_j(t), where L is the lag operator that has the effect that Lx(t) = x(t − 1) when applied to a sequence x(t) = {x_t; t = 0, ±1, ±2, ...}.
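The correspondence µ_j = e^{κ_j} can be illustrated numerically: the impulse response e^{κt} of a first-order continuous component, sampled at the integers, coincides with the impulse response µ^t of the discrete recursion (1 − µL)ν(t) = ε(t). The particular pole value below is an arbitrary choice.

```python
import cmath

kappa = complex(-0.25, 1.1)   # a continuous-time pole with Re(kappa) < 0
mu = cmath.exp(kappa)         # the corresponding discrete-time pole

# Impulse response of the continuous component sampled at the integers,
# beside the impulse response of the discrete AR(1) recursion.
continuous_response = [cmath.exp(kappa * t) for t in range(8)]
discrete_response = [mu ** t for t in range(8)]
```

A stable continuous pole, with a negative real part, maps to a discrete pole with a modulus of less than unity.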
The assemblage of these discrete-time processes ν_j(t); j = 1, 2, ..., p gives rise to an ARMA(p, p − 1) model. The white-noise sequence ε(t) is formed from a weighted combination of the white-noise sequences ε_j(t); j = 1, ..., p. Equation (22) represents the EDLM that is the discrete-time counterpart of the LSDE.
A straightforward comparison of the continuous-time LSDE and the EDLM is via their impulse response functions.
The function ψ(t), which is found within equation (19), is the impulse response function that characterises the mapping of the continuous LSDE model from the forcing function to the output. According to the sifting property of Dirac's delta function δ(τ), the impulse response function can be obtained by replacing the forcing function ζ(t) in (19) by δ(t) and by extending the integral over the negative real line, if necessary, depending on the construction of the limit process that leads to δ(t).
The analogous impulse response function of the discrete-time model with p > q, which represents its response to a discrete-time unit impulse, is given by equation (26). A special case of the continuous-time model arises when ζ(t) is replaced by a sequence of Dirac deltas separated by unit intervals and weighted by the ordinates of a discrete-time white-noise process. Then, of course, the continuous-time process coincides with the discrete-time process and, within equation (26), there is d_j = c_j for all j.
An equivalence, at the integer points, of the impulse response function of an ARMA model and that of a linear stochastic differential equation driven by a frequency-limited white noise will be used in finding a frequency-limited continuous-time version of the ARMA process. The latter will be described as a CARMA process, notwithstanding the common use of this acronym to denote an LSDE that is driven by a white-noise process of unbounded frequencies.

Autocovariance Functions and Spectra
In finding the parameters of an LSDE that is unbounded in frequency when its exact discrete counterpart is available, we shall make use of the autocovariance functions of the discrete and continuous processes.
The autocovariance generating function of the ARMA process is given by equation (27), wherein α(z) and β(z) are the z-transforms of the autoregressive and moving-average operators, respectively, and wherein ψ(z), which represents the series expansion of the rational function β(z)/α(z), is the z-transform of the impulse response function.
Setting z = exp{−iω} places it on the unit circle in the complex plane. By running the argument ω over the interval (−π, π], the spectral density function or "spectrum" f(ω) is generated. The spectrum is just the discrete-time Fourier transform of the sequence of autocovariances.
The spectrum and the autocovariance function provide alternative and equivalent characterisations of the ARMA process. The autocovariances are recovered from the spectrum via the inverse transform. On defining δ(z^{−1}) = σ²_ε β(z^{−1})/α(z^{−1}), equation (27) can be rewritten, and the coefficients of δ(z^{−1}) can be found by solving a set of linear equations. Then, given the known values on the RHS of (27), those equations can be solved for γ_0, ..., γ_p. Thereafter, the equation can be solved recursively to provide the succeeding values {γ_{p+1}, γ_{p+2}, ...}.
An analytic expression for the autocovariance function of an ARMA(p, q) model with p > q is also available that exploits the expression of (26) for the discrete-time impulse response function. The autocovariance function of the continuous-time LSDE process is also found via its impulse response function, on the assumption that the forcing function has the autocovariance of a continuous white noise. Substituting the expression of (24) for the continuous-time impulse response function ψ(t) into equation (35) gives an expression for the autocovariance function that is liable to contain complex-valued terms; it may be rendered in real terms by coupling the various conjugate complex terms. The spectral density function f(ω) of the continuous-time process is the Fourier integral transform of the autocovariance function. This is a symmetric function; but, in contrast to the spectral density function of an ARMA process, it is not a periodic function. In the case of an LSDE(p, q) model of equation (1), the spectral density function is given by equation (38).
Example 1. The rational transfer function of the LSDE(2, 1) model with complex-valued poles has a partial-fraction expansion in which κ = δ + iω, κ* = δ − iω are the poles and c = a + ib, c* = a − ib are the numerator coefficients. It follows that θ_0 = 2a and θ_1 = 2(aδ + bω) and, conversely, the coefficients a and b can be recovered from θ_0 and θ_1. According to the formula of (36), the autocovariance function of the LSDE(2, 1) process can be written in a form that gathers the conjugate complex terms. The conjugate complex exponential functions will give rise to real-valued trigonometric functions, and the terms may be combined using the result that holds when κ = δ + iω, κ* = δ − iω and c = a + ib, c* = a − ib. It follows that an alternative expression for the autocovariance function is available in (44). There is no difficulty in dealing with models of higher orders. The expression of (44) for the autocovariance function of an LSDE(2, 1) process with conjugate complex poles will be employed in a subsequent example.
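The discrete autocovariances γ_k = σ²_ε Σ_j ψ_jψ_{j+k} can also be computed from a truncated series expansion of ψ(z) = β(z)/α(z). The following sketch adopts that route; the function names and the truncation point are arbitrary choices, and the result is checked against the closed form σ²φ^k/(1 − φ²) of an AR(1) process.

```python
def psi_weights(alpha, beta, n):
    # Coefficients of the series expansion of beta(z)/alpha(z), with the
    # normalisation alpha[0] = 1.
    psi = []
    for j in range(n):
        b = beta[j] if j < len(beta) else 0.0
        psi.append(b - sum(alpha[i] * psi[j - i]
                           for i in range(1, min(j, len(alpha) - 1) + 1)))
    return psi

def autocovariance(alpha, beta, sigma2, k, n=400):
    # gamma_k = sigma^2 * sum_j psi_j * psi_{j+k}, truncated at n terms.
    psi = psi_weights(alpha, beta, n + k)
    return sigma2 * sum(psi[j] * psi[j + k] for j in range(n))
```

For the AR(1) process (1 − φL)y(t) = ε(t) with φ = 0.5 and σ² = 1, the routine reproduces γ_0 = 4/3 and γ_1 = 2/3 to within the truncation error.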

The Continuous-Time Frequency-Limited ARMA Process
The basis of the frequency-limited continuous-time CARMA model is a frequency-limited white-noise forcing function. This is derived from a discrete-time white-noise process ε(t) = {ε_t; t = 0, ±1, ±2, ...}, in accordance with the Nyquist-Shannon sampling theorem, by the simple expedient of attaching a sinc function to each of its ordinates and by adding together the overlapping kernel functions. The resulting process may be denoted by ε_c(t). (Hereafter, we shall omit the subscript c, which serves to distinguish the continuous function ε_c(t); t ∈ R from the sequence ε(t); t ∈ {0, ±1, ±2, ...} = Z.) The frequency-limited white noise, which is assumed to have a mean of zero, has an autocovariance function that is a scaled sinc function. This follows from the fact that ε_s = ε_t φ(t − s) + υ, where υ is independent of ε_t. On defining y(t) = Σ_k y_k φ(t − k), the equation of the continuous-time ARMA model may be written, with α_0 = 1, in the same form as that of the discrete-time model. The model has a moving-average representation in which the coefficients are from the series expansion of the rational function β(z)/α(z) = ψ(z). In this notation, the continuous-time ARMA model is indistinguishable from the discrete-time model. If y(t) = Σ_i ψ_i ε(t − i) and y(s) = Σ_j ψ_j ε(s − j), with t, s ∈ R and i, j ∈ Z, then the autocovariance function of y_t = y(t) and y_s = y(s) is γ(τ) = Σ_k γ_k φ(τ − k), where τ = t − s and where γ_k = σ²_ε Σ_j ψ_j ψ_{j+k} is the kth autocovariance of the discrete-time process. It can be seen immediately that γ(τ) = γ_τ is a discrete-time autocovariance when τ takes an integer value, and that the continuous-time autocovariance function is obtained from the discrete-time function by sinc-function interpolation.
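The sinc-function interpolation of the discrete autocovariances can be sketched as follows; the truncation of the infinite sum and the AR(1) example are arbitrary choices.

```python
import math

def sinc(t):
    if t == 0.0:
        return 1.0
    return math.sin(math.pi * t) / (math.pi * t)

def continuous_acf(gamma, tau, m=200):
    # Sinc-function interpolation of the discrete autocovariances:
    # gamma(tau) = sum_k gamma_k * sinc(tau - k), truncated at |k| <= m.
    return sum(gamma(k) * sinc(tau - k) for k in range(-m, m + 1))

# Discrete autocovariances of an AR(1) process with phi = 0.5 and a unit
# innovation variance: gamma_k = phi^{|k|} / (1 - phi^2).
phi = 0.5
gamma = lambda k: phi ** abs(k) / (1.0 - phi * phi)
```

At an integer value of τ, the interpolation returns the corresponding discrete autocovariance exactly, and the resulting function is symmetric in τ.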
A difficulty in realising this expression for the autocovariance function lies in the fact that the sinc functions are supported on the entire real line. An alternative approach is to pursue the (inverse) Fourier transform of the spectral density function. This function is equivalent to the central segment of the periodic spectral density function f(ω) of the discrete-time ARMA model. Thus, the continuous autocovariance function can be obtained from an integral over the interval [−π, π], where the second equality follows in consequence of the symmetry f(ω) = f(−ω).
To realise this expression, it is necessary to approximate the Fourier integral by a discrete cosine Fourier transform embodying a large number of points sampled from the function f(ω) over the interval [−π, π].
Although such an approximation of the integral can be highly accurate, it is altogether easier to generate the continuous autocovariance function by allowing the argument τ in the analytic expression of (33) for the discrete-time autocovariance function to vary continuously.
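For comparison, the following sketch approximates the inverse Fourier integral by a simple trapezoidal cosine transform of the sampled spectral ordinates. The AR(1) spectrum and the number of sampled points are arbitrary choices, and the convention γ(τ) = ∫ f(ω)e^{iωτ}dω over (−π, π] is assumed.

```python
import math

def ar1_spectrum(omega, phi=0.5, sigma2=1.0):
    # Spectral density of an AR(1) process under the convention
    # gamma(tau) = integral of f(omega)*e^{i*omega*tau} over (-pi, pi].
    return sigma2 / (2.0 * math.pi * (1.0 - 2.0 * phi * math.cos(omega) + phi * phi))

def acf_from_spectrum(tau, n=20000):
    # gamma(tau) = 2 * int_0^pi f(w)*cos(w*tau) dw, by the trapezoidal rule.
    h = math.pi / n
    total = 0.5 * (ar1_spectrum(0.0) + ar1_spectrum(math.pi) * math.cos(math.pi * tau))
    for j in range(1, n):
        w = j * h
        total += ar1_spectrum(w) * math.cos(w * tau)
    return 2.0 * h * total
```

At the integer lags, the result agrees with the discrete autocovariances γ_0 = 4/3 and γ_1 = 2/3 of the AR(1) process with φ = 0.5; at non-integer values of τ, it traces the continuous frequency-limited autocovariance function.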

Estimates of the Linear Stochastic Models
The available computer programs for estimating ARMA models are numerous and well developed. Those for estimating LSDE models are comparatively rare; and some of them are restricted to continuous-time autoregressive models that lack moving-average components. All of them presuppose that the processes are driven by forcing functions of unlimited frequencies.
The direct approach to the estimation of a linear stochastic differential equation (LSDE) depends on the optimisation of a criterion function such as the likelihood function or a residual sum of squares.Typically, a state-space representation of the model is employed in evaluating the function and in seeking its optimum.
The early approaches to estimating an LSDE concentrated on pure autoregressive models that lack a moving-average component. Thus, Bergstrom (1966) adopted purely autoregressive formulations in constructing multi-equation continuous-time econometric models. An interesting and unique approach to the estimation of a single purely autoregressive equation has been taken, more recently, by Hyndman (1993).
An important contribution to the estimation via the direct approach was made by Jones (1984), who provided a computer program for estimating an autoregressive LSDE that was subject to a white-noise contamination. More recent developments, which allow for a moving-average component and for the presence of stochastic trends, have been due to Harvey and Stock (1985, 1988) and to Chambers and Thornton (2012).
In the alternative indirect approach, the LSDE estimates are derived from prior estimates of an autoregressive moving-average (ARMA) model. First, the autoregressive dynamics of the ARMA model are translated to the continuous-time dynamics of the LSDE via a one-to-one mapping. Then, a moving-average component is sought that will match the continuous-time autocovariance function to the discrete-time function at the integer lags. An ARMA(p, q) model with q < p will lead, invariably, to an LSDE(p, p − 1) model.
The indirect approach has received less attention than the direct approach. However, Söderström (1991) has surveyed some of the available methods, and he has developed a state-space method that has been appraised, in comparison with other methods, by Larsson et al. (2006).
The methods of this paper also presuppose the availability of valid ARMA estimates to be used as the starting point. In the case of an ARMA model that is deemed to be free of the effects of aliasing, the corresponding frequency-limited continuous-time CARMA model is available through a one-to-one correspondence based on the equivalence of the impulse response functions.
In some cases, the estimates of an ARMA(p, p − 1) model are to be regarded as those of an exact discrete linear model, or EDLM, that is the counterpart of a linear stochastic differential equation, or LSDE, driven by a forcing function of unbounded frequencies.
In such cases, the EDLM will suffer the effects of aliasing, and the continuous-time moving-average parameters are no longer readily available from their discrete-time counterparts. Then, a special iterative estimation procedure is called for that fulfils the principle of autocovariance equivalence.
The task may also arise of converting an LSDE driven by a forcing function of unbounded frequencies to its exact discrete-time counterpart model, or EDLM. This is bound to be an ARMA(p, p − 1) model, in the case of an LSDE(p, q) model with p > q.
The principle of autocovariance equivalence indicates that this conversion may be achieved by applying a Cramér-Wold decomposition to the autocovariance function of the LSDE, which is shared by the EDLM. Since the discrete-time autoregressive parameters can be inferred directly from their continuous-time counterparts, the Cramér-Wold decomposition is concerned only with finding the moving-average parameters.

From ARMA to CARMA
The CARMA model is the continuous-time version of the discrete-time ARMA model. To find a continuous trajectory that interpolates the data from which the ARMA estimates have been derived, it is sufficient to replace the discrete temporal index of equation (2) by a continuous variable t ∈ [0, T). This amounts to a Fourier interpolation, which is equivalent to a sinc-function interpolation employing Dirichlet kernels.
To find a differential equation representing the CARMA model, it is necessary to find the partial-fraction decomposition of the ARMA operator β(z)/α(z), of which the jth coefficient is given by equation (53). The roots µ_1, µ_2, ..., µ_p of the polynomial α(z) can be calculated reliably via the procedure of Müller (1956), of which versions have been coded by Pollock (1999) in Pascal and C. The numerator coefficients d_1, d_2, ..., d_p are best calculated via the expression on the RHS of (53).
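The decomposition can be sketched as follows, on the assumption that, for distinct poles, it takes the form β(z)/α(z) = Σ_j d_j/(1 − µ_jz) with α(z) = Π_j(1 − µ_jz); the jth coefficient is then obtained by evaluating β(z)/Π_{k≠j}(1 − µ_kz) at the root z = 1/µ_j. The representation of the operators by lists of coefficients is an implementation choice.

```python
def poly(coeffs, z):
    # Evaluate a polynomial with the given coefficients at the point z.
    return sum(c * z ** k for k, c in enumerate(coeffs))

def partial_fractions(mu, beta):
    # Coefficients d_j in beta(z)/alpha(z) = sum_j d_j / (1 - mu_j * z),
    # where alpha(z) = prod_j (1 - mu_j * z) has distinct roots 1/mu_j.
    d = []
    for j, mu_j in enumerate(mu):
        z_j = 1.0 / mu_j
        denom = 1.0
        for k, mu_k in enumerate(mu):
            if k != j:
                denom *= 1.0 - mu_k * z_j
        d.append(poly(beta, z_j) / denom)
    return d
```

With µ = (0.5, 0.8) and β(z) = 1, the coefficients sum to β(0)/α(0) = 1, and the decomposition reproduces the rational function at any test point.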
A complex pole of the ARMA model takes the form µ = ρe^{iω}, where it is assumed that ω ∈ [0, π] and that ρ ∈ (0, 1). The corresponding pole of the CARMA differential equation is κ = δ + iω, with δ = ln(ρ) ∈ (−∞, 0), which puts it in the left half of the s-plane, as is necessary for the stability of the system. The remaining task is to assemble the roots and the coefficients to form the rational operator θ(s)/φ(s) of the differential equation. In effect, the differential equation of the CARMA model is provided by equation (19), with c_j = d_j for j = 1, 2, ..., p.
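The mapping of the poles can be sketched as follows; the particular values of ρ and ω are arbitrary.

```python
import cmath
import math

rho, omega = 0.7, 0.9                 # modulus and argument of the ARMA pole
mu = rho * cmath.exp(1j * omega)      # discrete-time pole mu = rho * e^{i*omega}

# The corresponding CARMA pole kappa = ln(mu) = delta + i*omega, with
# delta = ln(rho) < 0, which lies in the left half of the s-plane.
kappa = cmath.log(mu)
delta = math.log(rho)
```

Since ρ < 1 implies ln(ρ) < 0, a stable discrete pole within the unit circle maps to a stable continuous pole in the left half-plane, and the exponential map recovers the discrete pole exactly.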
Given the latter equalities and the correspondence between the ARMA poles µ_j and the poles κ_j = ln(µ_j) of the differential equation, it follows that the impulse response function of the CARMA model, provided by equation (24), is equal, at the integer points, to the impulse response function of the ARMA model, provided by equation (26). Likewise, the continuous autocovariance function of the CARMA model will be equal, at the integer points, to that of the ARMA model.
The spectral density function of the ARMA process is illustrated in Figure 1. Here, it will be observed that the function is virtually zero at the limiting Nyquist frequency of π. Therefore, it is reasonable to propose that the corresponding continuous-time model should be driven by a white-noise forcing function that is bounded by the Nyquist frequency.
The parameters of the resulting continuous-time CARMA model are displayed below, beside those of the ARMA model. In Figure 2, the discrete autocovariance function of the ARMA process is superimposed on the continuous autocovariance function of the CARMA process.
The former has been generated by the procedure described by the recursive equations (30)-(32). The latter has been generated by equation (33), wherein the index τ varies continuously.
The spectral density function of the CARMA process is the integral Fourier transform of the continuous autocovariance function, whereas the spectral density function of the ARMA process is the discrete Fourier transform of the autocovariance sequence. The frequency limitation of the CARMA process means that there is no aliasing in the sampling process. Therefore, the two spectra are identical.

From ARMA to LSDE
When a linear stochastic differential equation (LSDE) is driven by a white-noise forcing function of unbounded frequencies, the ARMA model that represents its discrete-time counterpart, which is the EDLM, is bound to be affected by aliasing, even if this is to a negligible extent. The aliasing also affects the impulse response function. Therefore, the principle of impulse invariance can no longer serve to determine the parameters of the continuous-time model. Instead, the principle of autocovariance equivalence, enunciated by Bartlett (1946), must be relied upon to translate from the discrete-time model to the continuous-time model. This principle asserts that the continuous-time autocovariance function must be equal, at the integer points, to that of the discrete-time model. Given that the autoregressive parameters of the LSDE are implied by those of the EDLM, it follows that only the moving-average parameters need to be determined in fulfilment of the principle.
Let γ_d(µ, d) be the autocovariance function of the EDLM, as given by equation (33), and let γ_c(κ, c) be the corresponding autocovariance function of the LSDE, which is sampled at the integer points from the function of (36). Since equation (54) expresses the poles of the LSDE in terms of those of the ARMA model, there is κ = κ(µ), and the principle of autocovariance equivalence can be expressed via the equation γ_d(µ, d) = γ_c(κ(µ), c). Then, the parameters of the LSDE can be derived once a value of c = [c_1, c_2, ..., c_p] has been found that satisfies this equation.
The value of c can be found by using an optimisation procedure to find the zero of the function

S(c) = Σ_{τ=1}^{p} w_τ {γ_d(τ; µ, d) − γ_c(τ; κ, c)}²,

wherein w_1, w_2, . . . , w_p are a set of positive weights. It will be observed that the coefficients c_j of the continuous autocovariance function of (36) are liable to be complex-valued. This can make it tedious, if not difficult, to calculate the analytic derivatives of S(c), which might be required by the optimisation procedure. Therefore, the derivative-free procedure of Nelder and Mead (1965) can be used to advantage. The algorithm has been coded in Pascal by Bunday and Garside (1987).
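The shape of the criterion can be illustrated with a toy stand-in: a hypothetical scalar parameter c and an exponential autocovariance γ_c(τ; c) = exp(−cτ), with the "discrete" ordinates sampled at an assumed true value c0 = 0.7. A coarse grid search replaces the Nelder-Mead procedure here; none of these choices come from the text.

```python
import math

# Toy criterion S(c): weighted squared discrepancies between discrete
# autocovariances and the ordinates of a continuous autocovariance at
# the integer lags.  gamma_c and c0 are illustrative assumptions, not
# the model functions of (33) and (36).
def gamma_c(tau, c):
    return math.exp(-c * tau)

c0 = 0.7
lags = list(range(1, 6))
gamma_d = [gamma_c(t, c0) for t in lags]    # "discrete" ordinates
weights = [1.0] * len(lags)

def S(c):
    return sum(w * (gd - gamma_c(t, c)) ** 2
               for w, gd, t in zip(weights, gamma_d, lags))

# A coarse grid search stands in for the derivative-free Nelder-Mead search
c_hat = min((0.01 * k for k in range(1, 200)), key=S)
```

At the minimising value the criterion vanishes, since the continuous ordinates then coincide with the discrete ones at every integer lag.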
The autocovariance function of the LSDE can also be rendered in a form suggested by equation (42), wherein the parameters c = (α, β) are real-valued.
Söderström (1990) has determined the possible locations of the zeros of an EDLM(2, 1) as a function of the poles of the LSDE to which it corresponds; and he has shown that the zeros can occur only in a restricted set within the unit circle. This affects the possibility of finding an LSDE to correspond to a specified ARMA model.
An ingredient of the algorithm of Söderström (1991) for translating an ARMA model to an LSDE is a spectral factorisation. If the factorisation cannot be accomplished, then there is no corresponding continuous-time model. Therefore, it has been proposed that the algorithm is an appropriate means of determining whether there exists an LSDE corresponding to a specified ARMA model.
For the algorithm that we are pursuing, the non-availability of an LSDE corresponding to the ARMA model is indicated by a nonzero minimum of the criterion function. However, when the minimum is close to zero, the ARMA model will be close to the EDLM that corresponds to the minimising LSDE.
In such cases, we might surmise that the divergence of the ARMA model from the EDLM may be due to the vagaries of the processes of sampling and estimation that have given rise to the ARMA model. We might also surmise that the minimising LSDE will be close to one that would be delivered by a direct method of estimation applied to the sample from which the ARMA model has arisen.
Example 3. The mapping from the discrete-time ARMA model to a continuous-time LSDE model can be illustrated, in the first instance, with the ARMA(2, 1) model of Example 2.
The parameters of the corresponding LSDE(2, 1) model are obtained by using the Nelder-Mead procedure to find the minimum of the criterion function of (59), where a value is assumed for the variance σ² of the forcing function. There are four points that correspond to zero-valued minima, where the ordinates of the discrete and continuous autocovariance functions coincide at the integer points. These points, together with the corresponding moving-average parameters, are as follows: Here, the parameter values of (1) and (4) are equivalent, as are those of (2) and (3). Their difference is a change of sign, which can be eliminated by normalising θ_0 at unity and by adjusting the variance of the forcing function accordingly.
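The normalisation can be sketched as follows; the rescaling of the variance by θ_0² is the standard convention, though the text's exact convention is assumed.

```python
def normalise_ma(theta, sigma2):
    # Normalise the leading moving-average coefficient theta_0 at unity,
    # absorbing its scale (and any common change of sign) into the
    # variance of the forcing function:
    # theta -> theta / theta_0,  sigma2 -> sigma2 * theta_0**2,
    # which leaves sigma2 * theta(z) * theta(1/z) invariant.
    t0 = theta[0]
    return [t / t0 for t in theta], sigma2 * t0 * t0

# Two parameter sets differing only by a common change of sign are
# mapped to the same normalised form:
theta_a, s2_a = normalise_ma([2.0, 1.0], 1.0)
theta_b, s2_b = normalise_ma([-2.0, -1.0], 1.0)
```

After normalisation, the sign ambiguity between pairs such as (2) and (3) disappears.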
The miniphase condition, which corresponds to the invertibility condition of a discrete-time model, requires the zeros to be in the left half of the s-plane.Therefore, (2) and (3) on the NE-SW axis are the chosen pair.
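A minimal check of the miniphase condition can be written as follows, assuming the moving-average polynomial is given by its coefficients in ascending powers of s.

```python
import numpy as np

def is_miniphase(theta):
    # theta = [theta_0, theta_1, ..., theta_q], ascending powers of s.
    # The miniphase condition requires every zero of theta(s) to lie in
    # the left half of the s-plane (Re(s) < 0), mirroring the
    # invertibility condition of a discrete-time model.
    zeros = np.roots(theta[::-1])    # np.roots expects descending order
    return bool(np.all(np.real(zeros) < 0))

# theta(s) = 1 + 0.5 s has its zero at s = -2: miniphase.
# theta(s) = 1 - 0.5 s has its zero at s = +2: not miniphase.
```

Applying such a test to the four minimising points selects the admissible pair.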
These estimates of the LSDE(2, 1) are juxtaposed below with those of the CARMA(2, 1) model derived from the same ARMA model: The autoregressive parameters of the CARMA model and of the LSDE model are, of course, identical. However, there is a surprising disparity between the two sets of moving-average parameters. Nevertheless, when they are superimposed on the same diagram, which is Figure 4, the spectra of the two models are seen virtually to coincide. Moreover, the parameters of the ARMA model can be recovered exactly from those of the LSDE by an inverse transformation, which will be described later. The explanation for this outcome is to be found in the remarkable flatness of the criterion function in the vicinity of the minimising points, which are marked on both sides of Figure 4 by black dots. The flatness implies that a wide spectrum of the parameter values of the LSDE will give rise to almost identical autocovariance functions and spectra.
The left side of Figure 4 shows some equally-spaced contours of the z-surface of the criterion function, which rise from an annulus that contains the minima. The minima resemble small indentations in the broad brim of a hat.
The right side of Figure 4, which is intended to provide more evidence of the nature of the criterion function in the vicinity of the minima, shows the contours of the function q = 1/(z + a), where a is a small positive number that prevents a division by zero. We set a = (X − RM)/(R − 1), where M = min(z), X = max(z) and where R = max(q)/min(q) = 60. The extended lenticular contours surrounding the minima are a testimony to the virtual equivalence of a wide spectrum of parameter values.
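The choice of a can be verified numerically; the z-values below are hypothetical stand-ins for ordinates of the criterion surface.

```python
# Check that a = (X - R*M)/(R - 1) delivers the target ratio
# R = max(q)/min(q) for q = 1/(z + a).  The z-values are illustrative
# only, not taken from the criterion function of the text.
z = [0.02, 0.5, 1.3, 4.0]
M, X, R = min(z), max(z), 60.0
a = (X - R * M) / (R - 1.0)          # a > 0 requires X > R*M
q = [1.0 / (v + a) for v in z]
ratio = max(q) / min(q)              # equals R, up to rounding
```

The identity follows from R = (X + a)/(M + a), which rearranges to the stated expression for a.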
A variant of the ARMA(2, 1) model is one that has a pair of complex conjugate poles ρ exp{±iθ} with the same argument as before, which is θ = tan⁻¹(β/α) = π/4 = 45°, and with a modulus that has been reduced to ρ = 0.5. The model retains the zero of 0.5. The ARMA parameters and those of the corresponding LSDE are as follows:

From LSDE to ARMA

The mapping from the LSDE to the EDLM also depends on the principle of autocovariance equivalence. Thus, the ARMA parameters will be derived from the sampled ordinates of the continuous-time autocovariance function of the LSDE. The essential equation is that of (27), which can be cast in the form of

β(z)β(z⁻¹) = σ_ε² α(z)γ_c(z)α(z⁻¹), (60)
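Equation (60) calls for a spectral factorisation of its right-hand side. A sketch of such a factorisation for a moving-average autocovariance sequence, via the roots of the covariance generating function, is given below; it assumes that no roots lie on the unit circle, and it is not the algorithm of the text.

```python
import numpy as np

def spectral_factorise(gamma):
    # gamma = [c0, c1, ..., cq]: the nonzero autocovariances of an MA(q)
    # process.  Returns beta = [b0, ..., bq] with b0 > 0 and the zeros of
    # beta(z) outside the unit circle, such that
    # beta(z) * beta(1/z) = sum over j of c_|j| z^j,  j = -q, ..., q.
    gamma = np.asarray(gamma, dtype=float)
    # z^q * G(z) is a palindromic polynomial of degree 2q (descending order)
    coeffs = np.concatenate([gamma[::-1], gamma[1:]])
    roots = np.roots(coeffs)
    # The 2q roots pair as (r, 1/r); assign the outer ones to beta(z).
    # A root on the unit circle would defeat the factorisation.
    outer = roots[np.abs(roots) > 1.0]
    beta = np.real(np.poly(outer))[::-1]            # ascending powers of z
    beta *= np.sqrt(gamma[0] / np.sum(beta ** 2))   # rescale to match c0
    return -beta if beta[0] < 0 else beta

# MA(1) with beta(z) = 1 + 0.5 z gives c0 = 1.25, c1 = 0.5
beta = spectral_factorise([1.25, 0.5])
```

In the ARMA translation, gamma would hold the coefficients of the product σ_ε² α(z)γ_c(z)α(z⁻¹) on the right of (60).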

Figure 3. The spectrum of the LSDE(2, 1) corresponding to the ARMA(2, 1) model of Example 1, plotted on top of the spectrum of that model, which is represented by the thick grey line. The two spectra virtually coincide over the interval [0, π].

Figure 4. Left: the contours of the criterion function z = z(α, β), together with the minimising values, marked by black dots. Right: the contours of the function q = 1/(z + a).

Figure 5 shows the spectral density function of the LSDE and of the ARMA model superimposed on the same diagram. The spectrum of the LSDE extends far beyond the Nyquist frequency of π, which is the limiting ARMA frequency. The ARMA process, which is to be regarded as a sampled version of the LSDE, is seen to suffer from a high degree of aliasing, whereby the spectral power of the LSDE that lies beyond the Nyquist frequency is mapped into the Nyquist interval [−π, π], with the effect that the profile of the ARMA spectrum is raised considerably. On this basis, it can be asserted that the ARMA model significantly misrepresents the underlying continuous-time process.

Figure 5. The spectrum of the revised ARMA model superimposed on the spectrum derived from the LSDE.