Article

Predicting Long-Term Stability of Precise Oscillators under Influence of Frequency Drift

1 GNSS (Global Navigation Satellite System) Research Center, Wuhan University, 129 Luoyu Road, 430079 Wuhan, China
2 Collaborative Innovation Center for Geospatial Information Technology, Wuhan University, 129 Luoyu Road, 430079 Wuhan, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(2), 502; https://doi.org/10.3390/s18020502
Submission received: 24 December 2017 / Revised: 2 February 2018 / Accepted: 4 February 2018 / Published: 7 February 2018
(This article belongs to the Collection Modeling, Testing and Reliability Issues in MEMS Engineering)

Abstract:
High-performance oscillators, such as atomic clocks, are important in modern industry, finance and scientific research. In this paper, the authors study the estimation and prediction of long-term stability based on convex optimization techniques and compressive sensing. To take frequency drift into account, its influence on the Allan and modified Allan variances is formulated. Meanwhile, expressions for the expectation and variance of the discrete-time Hadamard variance are derived. Methods that reduce the computational complexity of these expressions are also introduced. Tests against GPS precise clock data show that the method can correctly predict one-week frequency stability from 14-day measured data.

1. Introduction

Timing technology is important in modern finance [1], industry and scientific research [2]. High-frequency trading, real-time navigation and the verification of relativistic effects require accurate, high-resolution time and/or frequency information. Timing information is obtained by counting the periodic signals of a reference oscillator, and the frequencies of the timing signal are multiples of the reference oscillator frequency. A time-scale is accurate only if the participating oscillators produce frequencies consistent with their nominal values or are stable enough to be predictable. High resolution, in turn, requires a short period of the oscillator output signal. Unfortunately, no high-performance oscillator produces constant, high-resolution signals.
The difference between an oscillator’s output signal and its nominal value can be divided into deterministic and random parts. The oscillator’s random behavior is well documented by a class of noise processes called power-law noise (PLN) [3]. While the random variations are defined in the frequency domain, they are often measured in the time domain by a class of structure functions and referred to as the frequency stability of the oscillator. For example, the Allan (AVAR), modified Allan (MVAR) and Hadamard (HVAR) variances are commonly-used methods. These statistics can be improved by using a ‘total approach’ [4]. Recently, the Thêo- [5] and parabolic [6] variances were also proposed. The authors proposed an oscillator noise analysis method called stochastic ONA [7]. The method predicts long-term frequency stabilities using convex optimization techniques. Specifically, the confidence regions of long-term Hadamard variances (HVAR) predicted from 14-day GPS precise clock data include the HVAR estimated from 168-day measured data and are smaller than those estimated from 84-day time deviations.
On the other hand, distinctions between deterministic and random behavior are blurry [8]. It is often difficult to differentiate drift from frequency noises [9]. A main drawback of stochastic ONA is its requirement for drift-free input variances. For example, cesium frequency standards are conventionally believed to be free from drift [10]. However, analysis of historical data and current practice show that the performance of TAI (International Atomic Time) improved when taking the frequency drifts of participant cesium clocks into account [11].
This paper studies the estimation of oscillator stability under the influence of frequency drift. In Section 2, the basic concepts and methods of time domain stability are reviewed. Although time domain stability is related to the frequency domain, discrete sampling has different impacts on the two; the influence of discrete sampling on both domains is also reviewed in that section. In Section 3, we introduce a method called stochastic ONA, which extends the oscillator noise analysis problem to the prediction of long-term stability. In the following section, we describe methods to compute the coefficient matrices used in stochastic ONA. We also introduce a method that greatly reduces the computational complexity of Walter’s characterization of AVAR and MVAR. From these results, we can then predict long-term frequency stability contaminated by deterministic linear frequency drift. The proposed model is tested against GPS precise clock data in Section 5. The one-week AVAR, MVAR and HVAR predicted by stochastic ONA from 14-day measured data are consistent with those estimated from 84-day data. In addition, the fifteen-day variances predicted by stochastic ONA have more compact confidence regions than those estimated from 42–60-day data.

2. Review of Time Domain Stability

It is well documented that high performance oscillators are influenced by power-law noises (PLN). PLN processes are conventionally defined by their power spectral densities (PSD):
S_y(f) = \sum_{i=1}^{N_h} h_{\alpha_i}\, f^{\alpha_i} = (2\pi f)^2\, S_x(f)
where S_y(f) is the PSD of the oscillator fractional frequency y(t), S_x(f) the PSD of the time deviations x(t),
x(t) = \int_0^t y(t')\, dt',
f the (Fourier) frequency and h_α the noise intensity coefficient, α = α_1, α_2, …, α_{N_h}. Often, α = 2 (white phase modulation, WHPM), 1 (flicker PM, FLPM), 0 (white frequency modulation, WHFM), −1 (flicker FM, FLFM) or −2 (random walk FM, RWFM) [12]. However, in Global Positioning System (GPS) master control station (MCS) clock prediction, α = 2, 0, −2 and −4 (random run FM, RRFM), and the h_α coefficients are replaced by q_i [4]:
q_i = \begin{cases} \dfrac{h_2}{8\pi^2\tau_0}, & i = 0,\\[6pt] (2\pi)^{2(i-1)}\, h_{2(1-i)}\,\tau_0, & i = 1, 2\ \text{or}\ 3. \end{cases}
However, the PSD measured from an oscillator signal is rarely used directly in practice. Since PSD estimates are “noisy” [13], time domain statistics are often used instead. For instance, AVAR [13]:
\hat\sigma_y^2(\tau) = \frac{1}{2\,(N-2m)\,\tau^2} \sum_{i=1}^{N-2m} \left( x_{i+2m} - 2x_{i+m} + x_i \right)^2,
MVAR:
\mathrm{Mod}\,\hat\sigma_y^2(\tau) = \frac{\displaystyle\sum_{j=1}^{N-3m+1} \left[ \sum_{i=j}^{j+m-1} \left( x_{i+2m} - 2x_{i+m} + x_i \right) \right]^2}{2\, m^2\, \tau^2\, (N-3m+1)},
and HVAR:
\hat\sigma_z^2(\tau) = \frac{\displaystyle\sum_{i=1}^{N-3m} \left( x_{i+3m} - 3x_{i+2m} + 3x_{i+m} - x_i \right)^2}{6\,(N-3m)\,\tau^2}.
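As a concrete reference, the three estimators above transcribe directly into code. The sketch below is a minimal NumPy rendering of Equations (2)–(4); the function and variable names are ours, not from the paper:

```python
import numpy as np

def avar(x, m, tau0):
    # Allan variance, Eq. (2): second differences of the time deviations x
    N = len(x)
    d = x[2*m:] - 2*x[m:N-m] + x[:N-2*m]
    return np.sum(d**2) / (2 * (N - 2*m) * (m * tau0)**2)

def mvar(x, m, tau0):
    # Modified Allan variance, Eq. (3): average m adjacent second
    # differences before squaring
    N = len(x)
    d = x[2*m:] - 2*x[m:N-m] + x[:N-2*m]
    s = np.convolve(d, np.ones(m), mode="valid")   # running sums, length N - 3m + 1
    return np.sum(s**2) / (2 * m**2 * (m * tau0)**2 * (N - 3*m + 1))

def hvar(x, m, tau0):
    # Hadamard variance, Eq. (4): third differences cancel quadratic
    # time deviations, i.e., linear frequency drift
    N = len(x)
    d = x[3*m:] - 3*x[2*m:N-m] + 3*x[m:N-2*m] - x[:N-3*m]
    return np.sum(d**2) / (6 * (N - 3*m) * (m * tau0)**2)
```

With m = 1 the inner average in Mod σ̂_y² contains a single term, so mvar reduces to avar, which is a convenient self-check.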
A time-domain variance σ k 2 ( τ ) can be related to its PSD:
\sigma_k^2(\tau) = \int_0^{\infty} S_y(f)\, \left| H_k(f) \right|^2 df = \sum_{i=1}^{N_h} \Phi_k(\alpha_i, \tau)\, h_{\alpha_i}.
Here, τ = m τ 0 is the averaging time, τ 0 sampling period and H k ( f ) the transfer function of σ k 2 ( τ ) defined in [12]:
\Phi_k(\alpha, \tau) = \int_0^{\infty} f^{\alpha}\, \left| H_k(f) \right|^2 df.
Here, the subscript k is used as a generic label for the different variances. The majority of measured data nowadays are digital, and σ_k²(τ) estimated from finite data may differ from the value given by Equation (5). The former is usually denoted σ̂_k²(τ), and it can be viewed as a realization of the sample variance variable Σ_k(τ) [14]. The distribution function F_{Σ_k(τ)}(σ̆_k²(τ)) (F(σ̆_k²(τ)) for short) of the random variable Σ_k(τ) can be formulated as:
F\!\left(\breve\sigma_k^2(\tau)\right) = \int_0^{\breve\sigma_k^2(\tau)} \frac{u^{\mathrm{EDF}_k(\tau)/2 - 1}\, e^{-u}}{\Gamma\!\left(\mathrm{EDF}_k(\tau)/2\right)}\, du,
for an arbitrary positive real number σ̆_k²(τ), where e is Euler’s number, Γ(·) the Gamma function and EDF_k(τ) the equivalent degrees of freedom (EDF):
\mathrm{EDF}_k(\tau) = \frac{2\, \mathrm{E}\!\left[ \hat\sigma_k^2(\tau) \right]^2}{\mathrm{Var}\!\left[ \hat\sigma_k^2(\tau) \right]},
where E[·] denotes the expectation (the variance that would be estimated from infinitely many samples) and Var[·] the variance of the random variable σ̂_k²(τ).
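Equations (7) and (8) amount to a short numerical recipe: given a variance estimate and its EDF, the chi-square quantiles bracket the true variance. A minimal sketch, assuming SciPy is available (the helper name is ours; the same construction reappears in Section 5):

```python
from scipy.stats import chi2

def confidence_region(var_est, edf, eps=0.025):
    """1-2*eps confidence interval for the true variance, given a sample
    variance var_est whose distribution follows Eq. (7) with the given EDF."""
    lower = edf * var_est / chi2.ppf(1.0 - eps, edf)
    upper = edf * var_est / chi2.ppf(eps, edf)
    return lower, upper
```

As expected, larger EDF (more independent differences contributing to the estimate) gives a tighter interval.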
If we denote:
\mathrm{E}\!\left[ \hat\sigma_k^2(\tau) \right] = \sum_{i=1}^{N_h} \Phi_k(\alpha_i, \tau)\, h_{\alpha_i},
where the variance σ̂_k²(τ) is estimated from x[t], the discrete sampling of the time deviations x(t), then Φ_k(α_i, τ) does not equal Equation (6). Kasdin shows that it is the symmetric two-time autocorrelation function, rather than the PSD,
R_x(t, \tau) \triangleq \left\langle x\!\left(t - \tfrac{\tau}{2}\right) x\!\left(t + \tfrac{\tau}{2}\right) \right\rangle,
that is directly sampled by the discrete-time data [15]. For example, instead of Equation (1), Walter shows that the PSD measured from discretely sampled time deviations x[t] relates to S_y(f) in the following way [16]:
S_x^{(d)}(f) = \frac{h_\alpha}{4\pi^2} \left[ \frac{\sin(\pi f \tau_0)}{\pi \tau_0} \right]^{\alpha-2} = \frac{\tau_0^2\, S_y^{(d)}(f)}{4 \sin^2(\pi \tau_0 f)}.
The autocorrelation function of PLN processes has the following asymptotic form when t ≫ τ [15]:
R_x(t, \tau) \approx \frac{h_\alpha}{2 (2\pi)^\alpha} \left( \log 4t - \log \tau \right)
for α = 1 , and:
R_x(t, \tau) \approx \frac{Q\, \Gamma(\alpha-1)\, |\tau|^{1-\alpha}}{\Gamma(\alpha/2)\, \Gamma(1-\alpha/2)} + \frac{Q\, \Gamma(1-\alpha)\, t^{1-\alpha}}{\Gamma(2-\alpha)\, \Gamma^2(1-\alpha/2)}
for α ≠ 1, where:
Q = \frac{h_\alpha}{2\, (2\pi)^\alpha}.
To derive an expression for the discrete autocorrelation function, the deviation of Brownian motion is often replaced by the discrete Wiener process in time and frequency metrology [15,17]. If, in addition, the noise process is wide sense stationary, R x ( t , τ ) can be recast as [15]:
R_x^{(d)}[m] = \frac{Q\, \Gamma(m+1-\alpha/2)\, \Gamma(\alpha-1)\, \tau_0^{\,1-\alpha}}{\Gamma(m+\alpha/2)\, \Gamma(\alpha/2)\, \Gamma(1-\alpha/2)}.
Here, τ = m τ 0 . From Equations (12)–(14), Walter derives Φ k ( α i , τ ) :
\Phi_{\mathrm{AVAR}}(\alpha_i, \tau) = \frac{\pi\, \Gamma(\alpha_i - 1)}{m^2\, (2\pi\tau_0)^{\alpha_i+1}\, \Gamma^2(\alpha_i/2)} \left[ 3 - \frac{4\, \Gamma(m+1-\alpha_i/2)\, \Gamma(\alpha_i/2)}{\Gamma(m+\alpha_i/2)\, \Gamma(1-\alpha_i/2)} + \frac{\Gamma(2m+1-\alpha_i/2)\, \Gamma(\alpha_i/2)}{\Gamma(2m+\alpha_i/2)\, \Gamma(1-\alpha_i/2)} \right]
and Var σ ^ k 2 ( τ ) :
\mathrm{Var}\!\left[ \hat\sigma_y^2(\tau) \right] = \frac{h_\alpha^2\, \Gamma^2(\alpha-1)\, \sin^2(\alpha\pi/2)}{(2\pi\tau_0)^{2\alpha+2}\, (N-2m)\, m^4} \sum_{\ell=-N+2m+1}^{N-2m-1} \frac{N-2m-|\ell|}{2\,(N-2m)} \left[ 3\, \frac{\Gamma(|\ell|+1-\alpha/2)}{\Gamma(|\ell|+\alpha/2)} - 2\, \frac{\Gamma(|m+\ell|+1-\alpha/2)}{\Gamma(|m+\ell|+\alpha/2)} - 2\, \frac{\Gamma(|m-\ell|+1-\alpha/2)}{\Gamma(|m-\ell|+\alpha/2)} + \frac{\Gamma(|2m+\ell|+1-\alpha/2)}{2\, \Gamma(|2m+\ell|+\alpha/2)} + \frac{\Gamma(|2m-\ell|+1-\alpha/2)}{2\, \Gamma(|2m-\ell|+\alpha/2)} \right]^2
for the Allan variance (AVAR) [16]. It should be noted that variances estimated from discretely sampled data may be distorted when the averaging time τ = mτ₀ is near the sampling period τ₀. The distortions are caused by aliasing and measurement noise [15]. Equations (15) and (16) do not take these distortions into account. Furthermore, the influence of frequency drift is not included in these equations. As a development of Walter’s work, we formulate the effect of deterministic linear frequency drift on AVAR estimates and derive Φ_k(α_i, τ) and Var[σ̂_k²(τ)] for HVAR in Section 4.
On the other hand, if σ̂_k²(τ) is a measurement of the time-domain variance σ_k²(τ), we can estimate the h_α coefficients from σ̂_k²(τ). This can be formulated as a least-squares problem:
\text{minimize} \quad (\Phi h - \sigma)^T\, W\, (\Phi h - \sigma),
and called oscillator noise analysis. Suppose there are M different input variances. Therefore, the coefficient matrix Φ can be divided into M blocks:
\Phi = \begin{bmatrix} \Phi_{k_1}^T & \cdots & \Phi_k^T & \cdots & \Phi_{k_M}^T \end{bmatrix}^T,
whose k-th block Φ k , k = k 1 , k 2 , , k M ,
\Phi_k = \begin{bmatrix} \Phi_k(\alpha_1, \tau_0) & \cdots & \Phi_k(\alpha_{N_h}, \tau_0) \\ \Phi_k(\alpha_1, 2\tau_0) & \cdots & \Phi_k(\alpha_{N_h}, 2\tau_0) \\ \vdots & & \vdots \\ \Phi_k(\alpha_1, m_k\tau_0) & \cdots & \Phi_k(\alpha_{N_h}, m_k\tau_0) \end{bmatrix},
Φ k ( α , τ ) is defined in Equation (9). Φ can be formed by using the method we proposed in Section 4. Similarly, the column vector of input variances σ can be partitioned into M blocks:
\sigma = \begin{bmatrix} \sigma_{k_1}^T & \cdots & \sigma_k^T & \cdots & \sigma_{k_M}^T \end{bmatrix}^T,
the k-th block:
\sigma_k = \begin{bmatrix} \hat\sigma_k^2(\tau_0) & \cdots & \hat\sigma_k^2(m\tau_0) & \cdots & \hat\sigma_k^2(m_k\tau_0) \end{bmatrix}^T,
consists of σ̂_k²(mτ₀), m = 1, ⋯, m_k, estimated from the time deviations x[i], i = 1, 2, …, N. h is a column vector of noise intensity coefficients and W the weight matrix. The works in [18,19] give different ways to compute W.
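To make the least-squares formulation concrete, the toy sketch below fits two noise intensity coefficients to synthetic Allan variances. For brevity it uses the classical continuous-time coefficients Φ_AVAR(0, τ) = 1/(2τ) and Φ_AVAR(−2, τ) = (2π)²τ/6 for white and random walk FM instead of the discrete-time Φ of Section 4, and a non-negative least-squares solver in place of the full weighted problem (17); all numbers are ours:

```python
import numpy as np
from scipy.optimize import nnls

taus = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])           # averaging times (s)
Phi = np.column_stack([1.0 / (2.0 * taus),                   # WHFM column
                       (2.0 * np.pi)**2 * taus / 6.0])       # RWFM column

h_true = np.array([2.0e-22, 5.0e-27])                        # [h_0, h_{-2}]
sigma = Phi @ h_true                                         # synthetic input AVARs

# Row-scaling by 1/sigma corresponds to W = diag(1/sigma^2) in problem (17),
# so every averaging time contributes comparably; NNLS enforces h >= 0.
W = 1.0 / sigma
h_est, _ = nnls(Phi * W[:, None], sigma * W)
```

On noise-free synthetic data the fit recovers the generating coefficients; with real, correlated variance estimates the choice of W matters, which is why [18,19] are cited above.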

3. Stochastic ONA

We extend the oscillator noise analysis problem to the prediction of long-term stability. Since the likelihood (or conditional probability) of:
\sigma_k^2(\tau) = \text{any specified positive real number}
is zero, we estimate a 1−2ε confidence region of σ_k²(τ) instead. This extension is realized by using convex optimization techniques (Appendix A.2), and we call it stochastic ONA [7].
The basic idea of stochastic ONA is:
F\!\left( \sigma_k^2(\tau) < \sum_{i=1}^{N_h} B_k(\alpha_i, \tau, 1-\varepsilon)\, h_{\alpha_i} \right) \to 1
and:
F\!\left( \sigma_k^2(\tau) < \sum_{i=1}^{N_h} B_k(\alpha_i, \tau, \varepsilon)\, h_{\alpha_i} \right) \to 0
when ε → 0, where F(·) is the chi-square distribution function defined in Equation (7), ε > 0, and:
B_k(\alpha, \tau, \varepsilon) = F^{-1}(\varepsilon) \times \frac{\Phi_k(\alpha, m\tau_0)}{\left. \mathrm{EDF}_k(\tau) \right|_{\alpha}}.
Clearly,
B(\varepsilon)\, h \preceq \sigma \preceq B(1-\varepsilon)\, h
when ε is small enough. The matrices B ( ε ) and B ( 1 ε ) are obtained by substituting Φ k ( α , τ ) in the coefficient matrix Φ with B k ( α , τ , ε ) and B k ( α , τ , 1 ε ) . This can be cast into the following optimization problem:
\text{minimize} \quad (\Phi h - \sigma)^T\, W\, (\Phi h - \sigma),
where the N_h-dimensional column vector of noise intensity coefficients h is subject to:
\begin{aligned} B(\varepsilon)\, h &\preceq \sigma \\ B(1-\varepsilon)\, h &\succeq \sigma \\ h &\succeq 0. \end{aligned}
In practice, it is not always easy to find such an ε. In addition, Equation (21) does not model the uncertainty of the input variances completely. For example, neither the correlations among different averaging times nor those among different variances calculated from the same underlying time series are taken into account. We therefore prescribe a lower bound ε_l as a threshold; Equation (21) will be replaced by an alternative model if stochastic ONA fails to find an ε ≥ ε_l for which the inequalities hold. While different variances estimated from the same time series are correlated, they contain independent pieces of information. It is difficult to formulate the correlations of structure functions precisely, and it is even more difficult to solve a stochastic program under complex probabilistic constraints. When the unformulated information has a strong impact on only a small subset of the input variances, it can be treated as ‘violations’ using techniques from compressive sensing. The auxiliary variables μ and ν are used as indicators of the violations. Specifically, the \sum_{i=1}^{M} m_i-dimensional non-negative vector variables μ and ν are defined such that:
B(\varepsilon)\, h \preceq \operatorname{diag}(\mathbf{1}+\mu)\, \sigma, \qquad B(1-\varepsilon)\, h \succeq \operatorname{diag}(\mathbf{1}-\nu)\, \sigma.
Since B(\varepsilon)\, h \preceq \mathrm{E}[\sigma] \preceq B(1-\varepsilon)\, h,
\mathrm{E}\!\left[ \|\mu\| + \|\nu\| \right] = 0 = \min\left( \|\mu\| + \|\nu\| \right)
for any norm ‖·‖ of μ and ν. We choose the ℓ1-norm (see Appendix A.1 for details):
\|\nu\|_1 \triangleq \sum_{i=1}^{\sum_{j=1}^{M} m_j} |\nu_i|,
where ν_i is the i-th component of ν and |ν_i| its absolute value. The probabilistic fact that only a minority of the input variances violate Equation (21) can be formulated using a property of the ℓ1-norm: the minimizer of an ℓ1-norm objective is approximately sparse when there are more variables than problem data. The optimization problem can therefore be formulated as:
\text{minimize} \quad \|\mu\|_1 + \|\nu\|_1,
where the optimization variables (h, μ, ν) are subject to:
\begin{aligned} B(\varepsilon)\, h - \operatorname{diag}\{\sigma\}\, \mu &\preceq \sigma \\ B(1-\varepsilon)\, h + \operatorname{diag}\{\sigma\}\, \nu &\succeq \sigma \\ \mu &\succeq 0 \\ \nu &\succeq 0 \\ \nu &\preceq \mathbf{1} \\ h &\succeq 0. \end{aligned}
We adjust the values of the input variances according to the result of Equation (23); with the adjusted variances, an h satisfying Equation (21) can then be found. Suppose (h*, μ*, ν*) is the optimum of Equation (23). We label an input variance σ̂_k²(τ) as an ‘outlier’ when:
  • case I:
    \sum_{i=1}^{N_h} B_k(\alpha_i, \tau, \varepsilon)\, h^*_{\alpha_i} > \hat\sigma_k^2(\tau);
  • case II:
    \sum_{i=1}^{N_h} B_k(\alpha_i, \tau, 1-\varepsilon)\, h^*_{\alpha_i} < \hat\sigma_k^2(\tau).
An outlier will be adjusted in the following way:
\begin{cases} \displaystyle\sum_{i=1}^{N_h} \left[ (1-\psi)\, B_k(\alpha_i, \tau, \varepsilon) + \psi\, \Phi_k(\alpha_i, \tau) \right] h^*_{\alpha_i}, & \text{case I}; \\[8pt] \displaystyle\sum_{i=1}^{N_h} \left[ (1-\psi)\, B_k(\alpha_i, \tau, 1-\varepsilon) + \psi\, \Phi_k(\alpha_i, \tau) \right] h^*_{\alpha_i}, & \text{case II}, \end{cases}
where 0 < ψ < 1 . We set ψ = 0.5 in this article.
We can then either minimize or maximize the value of:
\sum_{i=1}^{N_h} \Phi_k(\alpha_i, \tau)\, h^*_{\alpha_i},
under the restriction of Equation (21). Here, k need not be one of k_1, k_2, …, k_M, nor does τ need to be smaller than Nτ₀, where τ₀ is the sampling interval and N the number of time deviations. If we denote the minimum and maximum of Equation (25) as σ̲_k²(τ) and σ̄_k²(τ), respectively, then [σ̲_k²(τ), σ̄_k²(τ)] can be approximately considered a 1−2ε confidence region of σ_k²(τ). This is the predictive model used in stochastic ONA.

4. Models for Discrete-Time Variances

In this section, we introduce a way to compute the coefficient matrices Φ, B(ε) and B(1−ε) used in stochastic ONA. B(ε) and B(1−ε) can be computed from Φ and the inverse of the chi-square distribution function (7); Equation (7) is well defined once the degrees of freedom EDF_k(τ) are known, and EDF_k(τ), in turn, is determined by Φ and Var[σ̂_k²(τ)]. Specifically, we: (i) formulate the influence of deterministic linear frequency drift on the Allan (AVAR) and modified Allan (MVAR) variances; and (ii) derive expressions for Φ_k(α, τ) and Var[σ̂_k²(τ)] of the discrete-time Hadamard variance (HVAR). Computing the values of Φ_k(α, τ) and Var[σ̂_k²(τ)] for discrete-time AVAR, MVAR and HVAR is a daunting task, since the gamma functions need to be evaluated O(mN) times (O(mN²) for MVAR). At the end of this section, we reduce this to three gamma-function evaluations per f^α noise.

4.1. Drift Model

To formulate the influence of deterministic linear frequency drift on AVAR and MVAR, we first assume the oscillator output signal to be contaminated by drift. Suppose its time deviations x[i] can be separated as:
x[i] = x'[i] + a\,(t + i\tau_0)^2, \quad i = 1, 2, \ldots, N,
where x[i] and x′[i] are discrete samplings of the continuous-time signals x(t) and x′(t), respectively. We also denote the AVAR and MVAR estimated from x′[i] as σ̂_y²(x′, m) and Mod σ̂_y²(x′, m), respectively. Apparently,
\mathrm{E}\!\left[ \hat\sigma_y^2(x', m) \right] = \sum_{i=1}^{N_h} \Phi_{\mathrm{AVAR}}(\alpha_i, \tau)\, h_{\alpha_i}
and:
\mathrm{E}\!\left[ \mathrm{Mod}\,\hat\sigma_y^2(x', m) \right] = \sum_{i=1}^{N_h} \Phi_{\mathrm{MVAR}}(\alpha_i, \tau)\, h_{\alpha_i}.
Since x′[i] is unknown, AVAR and MVAR can only be measured from x[i]. We denote them as σ̂_y²(x, m) and Mod σ̂_y²(x, m), respectively. It can be shown, from Equations (2) and (3), that:
\mathrm{E}\!\left[ \hat\sigma_y^2(x, m) \right] = \mathrm{E}\!\left[ \hat\sigma_y^2(x', m) \right] + 2\,(m\tau_0 a)^2
and:
\mathrm{E}\!\left[ \mathrm{Mod}\,\hat\sigma_y^2(x, m) \right] = \mathrm{E}\!\left[ \mathrm{Mod}\,\hat\sigma_y^2(x', m) \right] + 2\,(m\tau_0 a)^2.
In other words, the drift-free AVAR and MVAR can, in theory, be separated from the influence of drift.
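The drift term 2(mτ₀a)² of Equations (30) and (31), and the drift immunity of HVAR, can be checked numerically on a pure drift signal x[i] = a(iτ₀)² with no noise at all (a synthetic sanity check with arbitrary toy values, not the paper’s data):

```python
import numpy as np

a, tau0, N, m = 3.0e-19, 300.0, 4032, 96     # toy drift rate and a 14-day-style grid
i = np.arange(N)
x = a * (i * tau0)**2                        # quadratic time deviations = linear frequency drift

# Second differences of a quadratic are constant, exactly 2*a*(m*tau0)**2,
# so the AVAR estimator evaluates to 2*(m*tau0*a)**2, the drift term of Eq. (30).
d2 = x[2*m:] - 2*x[m:N-m] + x[:N-2*m]
avar_drift = np.sum(d2**2) / (2 * (N - 2*m) * (m * tau0)**2)

# Third differences of a quadratic vanish, so the HVAR estimator is drift-free.
d3 = x[3*m:] - 3*x[2*m:N-m] + 3*x[m:N-2*m] - x[:N-3*m]
hvar_drift = np.sum(d3**2) / (6 * (N - 3*m) * (m * tau0)**2)
```

Up to floating-point rounding, avar_drift equals 2(mτ₀a)² while hvar_drift is negligible, which is why HVAR is combined with AVAR in Section 4.2 to separate drift from random walk and random run FM.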
To predict long-term stability, the sign of a makes no difference. We can therefore treat a² as a component of h. The column vector h of a rubidium frequency standard, for instance, is:
h = \begin{bmatrix} a^2 & h_2 & h_1 & h_0 & h_{-1} & h_{-2} & h_{-4} \end{bmatrix}^T,
where h_α, α = 2, 1, 0, −1, −2 and −4, are the noise intensity coefficients of white and flicker PM and of white, flicker, random walk and random run FM, respectively. Accordingly, the m-th rows of Φ_k and B_k(ε) can be cast as:
\begin{bmatrix} 2(m\tau_0)^2 & \Phi_{\mathrm{AVAR}}(2, m\tau_0) & \Phi_{\mathrm{AVAR}}(1, m\tau_0) & \Phi_{\mathrm{AVAR}}(0, m\tau_0) & \Phi_{\mathrm{AVAR}}(-1, m\tau_0) & \Phi_{\mathrm{AVAR}}(-2, m\tau_0) & 0 \end{bmatrix}
and:
\begin{bmatrix} 2(m\tau_0)^2 \cdot F^{-1}(\varepsilon) \big/ \left. \mathrm{EDF}_{\mathrm{AVAR}}(\tau) \right|_{\alpha=-2} & \Phi_{\mathrm{AVAR}}(2, m\tau_0) \cdot F^{-1}(\varepsilon) \big/ \left. \mathrm{EDF}_{\mathrm{AVAR}}(\tau) \right|_{\alpha=2} & \Phi_{\mathrm{AVAR}}(1, m\tau_0) \cdot F^{-1}(\varepsilon) \big/ \left. \mathrm{EDF}_{\mathrm{AVAR}}(\tau) \right|_{\alpha=1} & \cdots & \Phi_{\mathrm{AVAR}}(-2, m\tau_0) \cdot F^{-1}(\varepsilon) \big/ \left. \mathrm{EDF}_{\mathrm{AVAR}}(\tau) \right|_{\alpha=-2} & 0 \end{bmatrix}^T
for AVAR, respectively. Here, the subscript α = 2, 1, 0, −1, −2 or −4 indicates the dominant PLN process; the other PLN processes are ignored in the computation of the inverse chi-square distribution function. While Φ_AVAR(α, mτ₀) is given explicitly in Equation (15) and Var[σ̂²_AVAR(τ)] in Equation (16), computing Φ_k and B_k(ε) directly from these equations is a daunting task. As we show at the end of this section, the computation can be greatly shortened by exploiting the properties of the gamma function. In particular, we can simplify Equation (15) in the case of GPS MCS clock prediction:
\mathrm{E}\!\left[ \hat\sigma_y^2(x, m) \right] = 2\,(m\tau_0 a)^2 + \frac{3 q_0}{m^2 \tau_0^2} + \frac{q_1}{m \tau_0^2} + \frac{q_2\,(2m^2+1)}{6m}.
On the other hand, the simplified expression for Var[σ̂²_AVAR(τ)] depends on the ratio of m to N. When m ≤ N/4,
\mathrm{Var}\!\left[ \hat\sigma_y^2(m\tau_0) \right] = \frac{q_0^2\,(35N - 88m)}{(N-2m)^2\,(m\tau_0)^4}
for α = 2 ,
\mathrm{Var}\!\left[ \hat\sigma_y^2(m\tau_0) \right] = \frac{q_1^2 \left( \frac{5}{3}N + \frac{4}{3}m^2 N - \frac{7}{2}m - \frac{1}{2}m^2 - \frac{3}{4}m^3 \right)}{(N-2m)^2\, m^3\, \tau_0^4},
for α = 0 , and:
\mathrm{Var}\!\left[ \hat\sigma_y^2(m\tau_0) \right] = \frac{q_2^2 \left( \frac{302}{35}m^6 N + 4 m^4 N + \frac{14}{5}m^2 N + \frac{18}{7}N - \frac{101}{5}m^7 - \frac{34}{5}m^5 - \frac{19}{5}m^3 - \frac{26}{5}m \right)}{144\,(N-2m)^2\, m^3}
for α = −2. Simplified expressions for Var[σ̂_y²(mτ₀)] with m > N/4 are given in Appendix B.

4.2. Hadamard Variance

To differentiate the influence of frequency drift from the random behavior of an oscillator, for example RWFM or RRFM, we can combine AVAR with statistics that are convergent for RRFM and free from drift. Among these, we choose HVAR. In order to form the coefficient matrices Φ, B(ε) and B(1−ε), we derive here Φ_k(α, τ) and Var[σ̂_k²(τ)] of the discrete-time HVAR.
Since discrete-time PSD is not a direct sampling of the corresponding continuous-time PSD, the discrete-time HVAR is not a discrete sampling of the continuous function defined by Equation (5). On the other hand, the discrete-time symmetric two-time autocorrelation function is a direct sampling of its continuous counterpart (10). If we can recast the continuous-time HVAR as a combination of autocorrelation functions, the discrete-time HVAR can be derived from directly sampling the autocorrelation function. Equivalently, if the discrete-time HVAR can be expanded as a combination of autocorrelation functions, an explicit expression of the variance can be derived by replacing the autocorrelation functions with Equations (12)–(14).
From Equations (4) and (10),
\mathrm{E}\!\left[ \hat\sigma_z^2(\tau) \right] = \frac{1}{6\tau^2} \Big[ R_x(t+3\tau, 0) + 9 R_x(t+2\tau, 0) + 9 R_x(t+\tau, 0) + R_x(t, 0) - 6 R_x\!\left(t+\tfrac{5}{2}\tau, \tau\right) + 6 R_x(t+2\tau, 2\tau) - 18 R_x\!\left(t+\tfrac{3}{2}\tau, \tau\right) - 2 R_x\!\left(t+\tfrac{3}{2}\tau, 3\tau\right) - 6 R_x\!\left(t+\tfrac{1}{2}\tau, \tau\right) + 6 R_x(t+\tau, 2\tau) \Big].
By substituting the autocorrelation functions in the equation above with Equations (12) and (13), we obtain:
\mathrm{E}\!\left[ \hat\sigma_z^2(\tau) \right] = \begin{cases} \dfrac{\sigma_{w\alpha}^2\, \Gamma(\alpha-1)\, (\tau/\tau_0)^{1-\alpha}}{\tau^2\, \Gamma(\alpha/2)\, \Gamma(1-\alpha/2)} \left( 2^{2-\alpha} - 5 - 3^{-\alpha} \right) + O(t^{\alpha-5}), & \alpha \ne 1, \\[10pt] \dfrac{\sigma_{w\alpha}^2}{3\,(\tau_0 \tau)^2} \left( 10 \ln|\tau| - \ln\frac{64}{3} \right) + O(t^{-6}), & \alpha = 1. \end{cases}
Obviously, Equation (35) converges when α > −5. Because all of the power-law noises (PLN) mentioned before have a power index α ≥ −4, Equation (35) holds for the problem discussed.
In addition, t ≫ τ₀ (the sampling interval τ₀ ranges from several minutes to days), and HVAR is approximately independent of t. (We assume here that the random behavior of the oscillator is unchanged; otherwise, HVAR is either divergent or changes with time t.) Hence, we replace the symmetric two-time autocorrelation function in Equation (35) with Equation (14). The expression for Φ_k(α, τ) of the discrete-time HVAR is thereby derived:
\Phi_z(\alpha, \tau) = \frac{\Gamma(\alpha-1)}{6 m^2 (2\pi)^\alpha \tau_0^{\alpha+1}\, \Gamma(\alpha/2)\, \Gamma(1-\alpha/2)} \left[ \frac{10\, \Gamma(1-\alpha/2)}{\Gamma(\alpha/2)} - \frac{15\, \Gamma(m+1-\alpha/2)}{\Gamma(m+\alpha/2)} + \frac{6\, \Gamma(2m+1-\alpha/2)}{\Gamma(2m+\alpha/2)} - \frac{\Gamma(3m+1-\alpha/2)}{\Gamma(3m+\alpha/2)} \right].
Likewise, we expand Var σ ^ z 2 ( τ ) with the symmetric two-time autocorrelation functions:
\mathrm{Var}\!\left[ \hat\sigma_z^2(\tau) \right] = \sum_{\ell=-N+3m+1}^{N-3m-1} \frac{N-3m-|\ell|}{18\,(N-3m)^2\,(m\tau_0)^4} \Big[ R_x(t+3m\tau_0, |\ell|\tau_0) - 3 R_x\!\left(t+\tfrac{5}{2}m\tau_0, |m+\ell|\tau_0\right) - 3 R_x\!\left(t+\tfrac{5}{2}m\tau_0, |m-\ell|\tau_0\right) + 9 R_x(t+2m\tau_0, |\ell|\tau_0) + 3 R_x(t+2m\tau_0, |2m+\ell|\tau_0) + 3 R_x(t+2m\tau_0, |2m-\ell|\tau_0) - R_x\!\left(t+\tfrac{3}{2}m\tau_0, |3m+\ell|\tau_0\right) - R_x\!\left(t+\tfrac{3}{2}m\tau_0, |3m-\ell|\tau_0\right) - 9 R_x\!\left(t+\tfrac{3}{2}m\tau_0, |m+\ell|\tau_0\right) - 9 R_x\!\left(t+\tfrac{3}{2}m\tau_0, |m-\ell|\tau_0\right) + 9 R_x(t+m\tau_0, |\ell|\tau_0) + 3 R_x(t+m\tau_0, |2m+\ell|\tau_0) + 3 R_x(t+m\tau_0, |2m-\ell|\tau_0) - 3 R_x\!\left(t+\tfrac{1}{2}m\tau_0, |m+\ell|\tau_0\right) - 3 R_x\!\left(t+\tfrac{1}{2}m\tau_0, |m-\ell|\tau_0\right) + R_x(t, |\ell|\tau_0) \Big]^2
by assuming that the third-order differences of x(t),
\left[ x(t+3m\tau_0) - 3\, x(t+2m\tau_0) + 3\, x(t+m\tau_0) - x(t) \right] \big/ \tau,
with m fixed, are normally distributed. It is easy to see that Equation (37) holds for α > −5 after substituting the autocorrelation functions in the above equation with Equations (12) and (13). Furthermore, Var[σ̂_z²(τ)] is approximately independent of the parameter t. Therefore, we replace the autocorrelation functions with Equation (14). Var[σ̂_k²(τ)] of HVAR is thereby cast as:
\mathrm{Var}\!\left[ \hat\sigma_z^2(\tau) \right] = \frac{h_\alpha^2\, \Gamma^2(\alpha-1)\, \sin^2(\alpha\pi/2)}{2\,(2\pi\tau_0)^{2\alpha+2}\,(N-3m)\, m^4} \sum_{\ell=-N+3m+1}^{N-3m-1} \frac{N-3m-|\ell|}{N-3m} \left[ \frac{20\, \Gamma(|\ell|+1-\alpha/2)}{3\, \Gamma(|\ell|+\alpha/2)} - \frac{5\, \Gamma(|m+\ell|+1-\alpha/2)}{\Gamma(|m+\ell|+\alpha/2)} - \frac{5\, \Gamma(|m-\ell|+1-\alpha/2)}{\Gamma(|m-\ell|+\alpha/2)} + \frac{2\, \Gamma(|2m+\ell|+1-\alpha/2)}{\Gamma(|2m+\ell|+\alpha/2)} + \frac{2\, \Gamma(|2m-\ell|+1-\alpha/2)}{\Gamma(|2m-\ell|+\alpha/2)} - \frac{\Gamma(|3m+\ell|+1-\alpha/2)}{3\, \Gamma(|3m+\ell|+\alpha/2)} - \frac{\Gamma(|3m-\ell|+1-\alpha/2)}{3\, \Gamma(|3m-\ell|+\alpha/2)} \right]^2.
By then, the coefficient matrices Φ , B ( ε ) and B ( 1 ε ) in stochastic ONA can be constructed in the following way:
\Phi = \begin{bmatrix} \Phi_{\mathrm{AVAR}} \\ \Phi_{\mathrm{HVAR}} \end{bmatrix}, \quad B(\varepsilon) = \begin{bmatrix} B_{\mathrm{AVAR}}(\varepsilon) \\ B_{\mathrm{HVAR}}(\varepsilon) \end{bmatrix}, \quad B(1-\varepsilon) = \begin{bmatrix} B_{\mathrm{AVAR}}(1-\varepsilon) \\ B_{\mathrm{HVAR}}(1-\varepsilon) \end{bmatrix},
where Φ AVAR is defined in Equation (28), B AVAR ( ε ) and B AVAR ( 1 ε ) in Equation (29). The m-th rows of Φ HVAR and B HVAR ( ε ) are defined as:
\begin{bmatrix} 0 & \Phi_{\mathrm{HVAR}}(2, m\tau_0) & \cdots & \Phi_{\mathrm{HVAR}}(-2, m\tau_0) & \Phi_{\mathrm{HVAR}}(-4, m\tau_0) \end{bmatrix}
and:
\begin{bmatrix} 0 & \Phi_{\mathrm{HVAR}}(2, m\tau_0) \cdot F^{-1}(\varepsilon) \big/ \left. \mathrm{EDF}_{\mathrm{HVAR}}(\tau) \right|_{\alpha=2} & \Phi_{\mathrm{HVAR}}(1, m\tau_0) \cdot F^{-1}(\varepsilon) \big/ \left. \mathrm{EDF}_{\mathrm{HVAR}}(\tau) \right|_{\alpha=1} & \cdots & \Phi_{\mathrm{HVAR}}(-2, m\tau_0) \cdot F^{-1}(\varepsilon) \big/ \left. \mathrm{EDF}_{\mathrm{HVAR}}(\tau) \right|_{\alpha=-2} & \Phi_{\mathrm{HVAR}}(-4, m\tau_0) \cdot F^{-1}(\varepsilon) \big/ \left. \mathrm{EDF}_{\mathrm{HVAR}}(\tau) \right|_{\alpha=-4} \end{bmatrix}^T,
respectively. When the time series contains random run FM, h₋₄ ≠ 0. While AVAR does not converge for α = −4 PLN, this noise process has little influence on the short-term AVAR estimated from real data. In such a case, the inconsistency between Equations (29) and (40) will be treated as ‘violations’ by the optimization problem (23), and the unformulated influence of α = −4 PLN in Equation (29) will be smoothed out by Equation (24).
In GPS MCS clock prediction, only PLN of α = 2, 0, −2 and −4 are considered. In that case, the flicker noise components in Φ, B(ε) and B(1−ε) should be removed, and the remaining components can be computed using the following simplified expression:
\mathrm{E}\!\left[ \hat\sigma_z^2(\tau) \right] = \frac{10 q_0}{3 m^2 \tau_0^2} + \frac{q_1}{m \tau_0^2} + \frac{q_2\,(m^2+1)}{6m} + \frac{q_3\, \tau_0^2\,(11 m^4 + 5 m^2 - 4)}{120 m}.
If, in addition, m ≤ N/6,
\mathrm{Var}\!\left[ \hat\sigma_z^2(\tau) \right] = \frac{(154 N - 562 m)\, q_0^2}{3\,(N-3m)^2\,(m\tau_0)^4}
for α = 2 ,
\mathrm{Var}\!\left[ \hat\sigma_z^2(\tau) \right] = \frac{q_1^2 \left( 56 m^3 N + 84 m N - 204 m^4 - 288 m^2 \right)}{144\,(N-3m)^2\,(m\tau_0)^4}
for α = 0 ,
\mathrm{Var}\!\left[ \hat\sigma_z^2(\tau) \right] = \frac{q_2^2 \left( 62 m^6 N + 92 m^4 N + 98 m^2 N + 108 N - \frac{1557}{7} m^7 - 312 m^5 - 309 m^3 - \frac{2496}{7} m \right)}{4320\,(N-3m)^2\, m^3}
for α = −2, and:
\mathrm{Var}\!\left[ \hat\sigma_z^2(\tau) \right] = \frac{\tau_0^4\, q_3^2}{4147200\,(N-3m)^2\, m^3} \left( \frac{2620708}{231} m^{10} N + \frac{16180}{3} m^8 N + 3844 m^6 N + \frac{63940}{21} m^4 N + \frac{7664}{3} m^2 N + \frac{28800}{11} N - \frac{2979934}{77} m^{11} - \frac{104194}{7} m^9 - \frac{70662}{7} m^7 - \frac{49702}{7} m^5 - \frac{50744}{7} m^3 - \frac{644544}{77} m \right)
for α = −4. Expressions of Var[σ̂_z²(τ)] for m > N/6 are given in Appendix B.

4.3. Quick Computation of Discrete-Time Variances

Although Walter’s characterizations of AVAR and HVAR, Equations (15), (16), (36) and (38), hold for real values of α, they impose a heavy computational burden. If computational complexity is measured by the number of gamma-function evaluations, then, for a given m, the complexity of Equations (15), (16), (36) and (38) is O(N), and that of Walter’s characterization of MVAR is O(mN). Here, we describe a method that reduces the number of gamma-function evaluations to three.
In order to evaluate Equations (15), (16), (36) and (38), we define an N-dimensional column vector b_Γ. The i-th component of b_Γ is:
b_\Gamma(i) = \sin\!\left( \frac{\alpha}{2}\pi \right) \frac{\Gamma\!\left( i - \frac{\alpha}{2} \right) \Gamma(\alpha-1)}{\Gamma\!\left( i - 1 + \frac{\alpha}{2} \right)}.
From the properties of the gamma function, we recast Equations (15), (16), (36) and (38) as functions of b Γ . For instance,
\Phi_{\mathrm{AVAR}}(\alpha, \tau) = \frac{3\, b_\Gamma(1) - 4\, b_\Gamma(m+1) + b_\Gamma(2m+1)}{m^2\, (2\pi\tau_0)^{\alpha+1}}
and:
\mathrm{Var}\!\left[ \hat\sigma_y^2(\tau) \right] = \sum_{\ell=-N+2m+1}^{N-2m-1} \frac{N-2m-|\ell|}{2} \left[ \sum_{i=-2}^{2} \frac{(-1)^i \binom{4}{i+2}\, h_\alpha\, b_\Gamma(|im+\ell|+1)}{(2\pi\tau_0)^{\alpha+1}\, (N-2m)\, m^2} \right]^2.
It is obvious that:
|im + \ell| \le N - 1, \qquad \ell \in \left[ -N+2m+1,\ N-2m-1 \right].
On the other hand, for any 0 ≤ j ≤ N−1, there exists an ℓ such that:
|im + \ell| = j,
for some −2 ≤ i ≤ 2. Hence, the auxiliary vector b_Γ is both sufficient and necessary for the computation of Equations (15), (16), (36) and (38).
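The equivalence between Equation (44) and the direct expression (15) can be checked numerically. The script below is our own verification sketch; α = −1.5 is an arbitrary non-integer exponent chosen so that every gamma argument stays off the poles:

```python
import math

alpha, tau0 = -1.5, 30.0

def b_gamma(i):
    # b_Gamma(i) from Eq. (43)
    return (math.sin(alpha * math.pi / 2)
            * math.gamma(i - alpha / 2) * math.gamma(alpha - 1)
            / math.gamma(i - 1 + alpha / 2))

def phi_avar_fast(m):
    # Eq. (44): three b_Gamma values per (alpha, m)
    return ((3 * b_gamma(1) - 4 * b_gamma(m + 1) + b_gamma(2 * m + 1))
            / (m**2 * (2 * math.pi * tau0)**(alpha + 1)))

def phi_avar_direct(m):
    # Eq. (15): Walter's characterization written out in gamma functions
    pre = (math.pi * math.gamma(alpha - 1)
           / (m**2 * (2 * math.pi * tau0)**(alpha + 1) * math.gamma(alpha / 2)**2))
    r = lambda j: (math.gamma(j + 1 - alpha / 2) * math.gamma(alpha / 2)
                   / (math.gamma(j + alpha / 2) * math.gamma(1 - alpha / 2)))
    return pre * (3 - 4 * r(m) + r(2 * m))
```

The two agree to rounding error because of the reflection formula Γ(α/2)Γ(1−α/2) = π/sin(απ/2), which is exactly how Equation (43) absorbs the gamma-function prefactors of Equation (15).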
To calculate the values of b Γ , we start by searching for the least positive i 0 such that:
i_0 - \frac{\alpha}{2} \ge 1
and:
i_0 - 1 + \frac{\alpha}{2} \ge 1.
Then, we compute the value of b Γ ( i 0 ) :
b_\Gamma(i_0) = (-1)^{\alpha/2}\, \frac{\sin\!\left( \frac{\alpha}{2}\pi \right)}{\frac{\alpha}{2}\pi} \cdot \frac{\Gamma\!\left( i_0 - \frac{\alpha}{2} \right) \Gamma(\alpha-1)}{\Gamma\!\left( i_0 - 1 + \frac{\alpha}{2} \right)}.
Other components of b Γ can be estimated recursively: Given the value of b Γ ( i ) :
  • if b Γ ( i 1 ) is unknown,
    b_\Gamma(i-1) = \frac{i - 2 + \alpha/2}{i - 1 - \alpha/2}\, b_\Gamma(i);
  • if b Γ ( i + 1 ) is unknown,
    b_\Gamma(i+1) = \frac{i - \alpha/2}{i - 1 + \alpha/2}\, b_\Gamma(i).
By using the auxiliary vector b_Γ, we reduce the number of gamma-function evaluations to three per f^α noise.
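The recursion is easy to verify against direct evaluation for a generic real exponent. Integer α make Γ(α−1) singular and require the limiting form above, so the check below uses α = −1.5 and, as our own simplification, seeds the recursion with the directly evaluated b_Γ(i₀) instead of the closed-form seed:

```python
import math

alpha = -1.5                       # a generic real power-law exponent

def b_direct(i):
    # b_Gamma(i) evaluated term by term from Eq. (43)
    return (math.sin(alpha * math.pi / 2)
            * math.gamma(i - alpha / 2) * math.gamma(alpha - 1)
            / math.gamma(i - 1 + alpha / 2))

# Least i0 with both gamma arguments >= 1 (for alpha = -1.5, i0 = 3),
# then recurse upward and downward from the seed value.
i0 = 3
b = {i0: b_direct(i0)}
for i in range(i0, 10):            # upward:  b(i+1) = (i - a/2)/(i - 1 + a/2) * b(i)
    b[i + 1] = (i - alpha / 2) / (i - 1 + alpha / 2) * b[i]
for i in range(i0, 1, -1):         # downward: b(i-1) = (i - 2 + a/2)/(i - 1 - a/2) * b(i)
    b[i - 1] = (i - 2 + alpha / 2) / (i - 1 - alpha / 2) * b[i]
```

Each recurrence step costs one multiplication and one division, so filling all N components touches the gamma function only for the seed, which is the source of the O(N)-to-O(1) saving claimed above.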

5. Results and Discussion

To test the method proposed in this article, we predict the τ = 15-day frequency stabilities of GPS onboard clocks. The predictions are made based on 14-day GPS precise clock data provided by the IGS (International GNSS Service). The IGS timescale is selected as the reference clock. For comparison, we also estimate variances from 42–60-day measured data. It should be noted that the method discussed in this paper assumes that power-law processes and deterministic frequency drift are the major components of the time-series data. Analysis of all thirty-two satellites shows that the method fails when periodic behaviors have a strong influence on the input variances. In this section, only the predictions of the GPS SVN.45 and SVN.41 satellite rubidium clock frequency stabilities are shown as representatives. This is because:
  • To test the modified stochastic ONA method, a strong presence of deterministic frequency drift should be seen. In the real-data test of [7], frequency drift did not have a significant influence on the behavior of some onboard rubidium frequency standards within 168 days. Such data cannot test the capability of stochastic ONA in predicting drift-contaminated stabilities.
  • The oscillator should be reasonably well modeled. If the dominant variation of the frequency standard has not been modeled and it has a major influence on the frequency stability estimates, stochastic ONA will not function properly. For instance, stochastic ONA fails to predict the long-term AVAR and HVAR from MJD 52437.0–52451.0 GPS SVN.36 (PRN.06) precise clock data; the AVAR and HVAR estimated from those data imply strong periodic behaviors.

5.1. SVN.45

The prediction of the GPS SVN.45 onboard clock long-term stability shows how stochastic ONA behaves when it cannot distinguish RWFM from frequency drift. As shown in Figure 1, we use (i) AVAR, (ii) MVAR and (iii) HVAR estimated from 84-day GPS SVN.45 rubidium precise clock data (‘—’) as the reference. Although a linear frequency drift was removed before the estimation, the variances are not free from drift. The predictions (‘–·–’) made by stochastic ONA are based on variances estimated from the first 14 days of the time deviations (‘⋯’). Since we set ε = 0.025 in the computation, the predicted confidence interval of the long-term variance has a 95% confidence level. For comparison, we also estimate (i) AVAR and (ii) MVAR from the first 42 days (‘– –’) and (iii) HVAR from the first 60 days (‘– –’) of measured data. By assuming RWFM (α = −2) to be the dominant noise, the 95%-confidence regions of these variances (‘▹–∘–◃’) are computed in the following way:
\left[ \frac{\mathrm{EDF}_k(\tau)}{F^{-1}(97.5\%)}\, \hat\sigma_k^2(\tau),\ \frac{\mathrm{EDF}_k(\tau)}{F^{-1}(2.5\%)}\, \hat\sigma_k^2(\tau) \right].
It can be seen from Figure 1 that the 0.1–1-day AVAR and HVAR estimated from the 14-day measured data are smaller than the referenced variances, while the τ > 1-day AVAR and HVAR estimated from 14 days are much larger than the reference. A conventional oscillator noise analyzer would interpret this as a weaker FLFM and a stronger RWFM. Stochastic ONA attributes the fluctuations to RWFM, because RWFM has much larger confidence regions at long averaging times. For example, the 1152-th (τ = 4 days) rows of Φ_y and B_y(ε = 0.025) have the following approximate values:
$$\big[\,2.39\times10^{-11}\ \ 1.06\times10^{-15}\ \ 5.60\times10^{-12}\ \ 1.45\times10^{-6}\ \ 1.40\ \ 2.27\times10^{6}\ \ 0\,\big]^T \quad\text{and}\quad \big[\,2.14\times10^{-11}\ \ 9.81\times10^{-16}\ \ 3.20\times10^{-12}\ \ 1.30\times10^{-7}\ \ 6.21\times10^{-2}\ \ 4.05\times10^{4}\ \ 0\,\big]^T,$$
respectively. Consequently, the predicted lower and upper bounds of the long-term variance are almost parallel to the references. Stochastic ONA cannot find any h for which Equation (21) holds, so it has to make a trade-off between the 0.1–1-day and the τ > 1-day input variances: the former lead to a smaller RWFM noise level, while the latter indicate a larger one. Although the predicted confidence regions are wider than those estimated from the 42-day measured data for averaging times τ ≤ 10 days, they are consistent with the referenced variances. The confidence regions of the variances estimated from the 42-day data, by contrast, do not encompass all referenced variances: the τ = 1- and 2-day referenced AVAR and MVAR, and the τ = 2∼7-day HVAR, are not included in the confidence regions calculated from the 42-day clock data.
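The confidence-region computation above can be sketched in a few lines. The sketch below assumes the inverse distribution F⁻¹ is a chi-squared quantile with EDF degrees of freedom (the standard choice for variance estimates); the Wilson–Hilferty approximation stands in for an exact quantile routine, and all function names are illustrative, not from the paper:

```python
from statistics import NormalDist

def chi2_ppf(p, k):
    """Approximate chi-squared quantile (Wilson-Hilferty), k = degrees of freedom."""
    z = NormalDist().inv_cdf(p)
    return k * (1.0 - 2.0 / (9.0 * k) + z * (2.0 / (9.0 * k)) ** 0.5) ** 3

def confidence_region(sigma2_hat, edf, eps=0.025):
    """(1 - 2*eps) confidence region for a variance estimate with the given EDF."""
    lower = edf / chi2_ppf(1.0 - eps, edf) * sigma2_hat
    upper = edf / chi2_ppf(eps, edf) * sigma2_hat
    return lower, upper

# e.g. a variance estimate of 1e-26 with 10 equivalent degrees of freedom
lo, hi = confidence_region(1.0e-26, edf=10.0)
```

Note that the interval is asymmetric about the estimate, which is why assuming a symmetric distribution (as the least-squares criterion does) costs robustness.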

5.2. SVN.41

When stochastic ONA does find an h for which Equation (21) holds, it predicts long-term variances with narrow confidence regions. This can be seen in Figure 2. The input variances (‘⋯’) of stochastic ONA are estimated from GPS SVN.41 rubidium clock time deviations from MJD.52018.0–MJD.52032.0. Stochastic ONA predicts the 2-day ≤ τ ≤ 15-day AVAR, MVAR and HVAR (‘–·–’) based on these variances with ε = 0.025. The referenced (i) AVAR and (ii) MVAR are measured from 84-day (‘—’), and the (iii) HVAR from 160-day (‘—’), time deviations of the same clock. For comparison, we also estimate the (i) AVAR from 42-day (‘– –’), (ii) MVAR from 45-day (‘– –’) and (iii) HVAR from 60-day (‘– –’) measured data. Their 95%-confidence regions (‘▹–∘–◃’) are computed by assuming RWFM as the dominant noise.
In Figure 2, only the τ ≤ 7-day referenced variances are included in the confidence intervals predicted by stochastic ONA. This phenomenon can be explained by the behavior of the input variances. Comparing Figure 2 with Figure 1, it is easy to see that the 0.1-day ≤ τ ≤ 1-day frequency stabilities of the GPS SVN.41 onboard clock do not fluctuate as strongly as GPS SVN.45's. On the other hand, the former has a tail tens of times smaller than the reference. As discussed in the previous subsection, stochastic ONA tends to attribute fluctuations at long-term averaging times to RWFM. Here, stochastic ONA finds an h for which Equation (21) holds, and it therefore fits the input variances using the inequality-bounded least-squares model (17). However, the least-squares criterion (17) is designed for Gaussian distributions. In addition, the existence of some 1 − 2ε confidence regions that hold for the input variances does not mean that none of the theoretical 1 − 2ε confidence intervals are violated. Consequently, as shown in Figure 2, stochastic ONA underestimates the influence of frequency drift and overestimates the noise level of RWFM. Despite their compactness, only the referenced variances up to τ = one week are included in the predicted regions. By contrast, all referenced variances of the SVN.45 satellite clock are included in the regions predicted without using the least-squares criterion.

6. Conclusions

In this article, we discussed the prediction of long-term stability in the presence of deterministic linear frequency drift. The fundamental theory of time stability analysis and the influences of discrete sampling were first revisited. Based on these theories, we constructed a method called stochastic ONA, which extends the capability of conventional oscillator noise analysis to the prediction of long-term frequency stability. We then introduced methods to model long-term variances contaminated by frequency drift. Specifically, we: (i) formulated the influence of frequency drift on the Allan (AVAR) and modified Allan (MVAR) variances; (ii) derived expressions for the discrete Hadamard variance (HVAR); (iii) simplified the formulations for the case of GPS MCS (master control station) clock prediction; and (iv) introduced a method that reduces the computational complexity of Walter's characterization of AVAR and MVAR.
To test stochastic ONA and the model, we predicted the τ ≤ 15-day AVAR, MVAR and HVAR based on 14-day GPS precise clock data. Due to limited space, we chose the results of GPS SVN.45 and SVN.41 as representatives:
  • For the SVN.45 satellite clock, stochastic ONA cannot find a set of noise intensity coefficients for which Equation (21) holds for the input variances. In such a case, stochastic ONA predicts long-term stabilities based on Equation (23). The criterion (23) takes the probability distributions of the input variances into account and produces robust results: all the referenced variances are included in the predicted confidence regions. By contrast, the τ = 1- and 2-day referenced AVAR and MVAR, and the τ = 2∼7-day HVAR, are not included in the 95%-confidence regions estimated from the 42-day clock data.
  • For the SVN.41 onboard clock, stochastic ONA does find noise intensity coefficient values that satisfy the probabilistic constraints (21). In this case, it predicts the long-term stability of the satellite frequency standard based on the least-squares criterion (17). Despite the compactness of the predicted confidence intervals, only the τ ≤ 7-day referenced variances are included in these regions. Specifically, the τ > 7-day referenced AVAR and MVAR are greater than the predicted ones, while the τ > 7-day referenced HVAR is smaller than the predictions. This suggests an overestimation of the RWFM noise level and an underestimation of the frequency drift. Alternatively, the inconsistency may indicate that the power-law model used in this paper is inappropriate for this clock.
In summary, the method introduced in this paper can predict long-term stability superimposed with the influences of frequency drift. Criterion (23) takes the probability distributions of the input variances into account by assuming that the majority of the input variances lie within their 1 − 2ε confidence regions. The predictions made with it have large uncertainty, but are robust. In contrast, the least-squares criterion (17) assumes non-existent symmetric distributions of the input variances, which reduces both the uncertainty and the robustness of the result. We intend to find an alternative to the least-squares criterion in a future study.

Acknowledgments

This work was supported by the Applied Basic Research Program No. 2015011701011639 of Wuhan Municipality, the National Natural Science Foundation of China through Grants 41074023 and 41304006 and the National Basic Research Program of China (973 Program) through Subprogram 2013CB733205.

Author Contributions

Weiwei Cheng designed the algorithm, analyzed the data and wrote the paper. Guigen Nie supervised its analysis and edited the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AVAR: Allan variance
EDF: Equivalent degrees of freedom
FLFM: Flicker frequency modulation
FLPM: Flicker phase modulation
GNSS: Global navigation satellite system
GPS: Global positioning system
HVAR: Hadamard variance
MCS: Master control station
MVAR: Modified Allan variance
ONA: Oscillator noise analysis
RRFM: Random run frequency modulation
RWFM: Random walk frequency modulation
PLN: Power-law noise
PRN: Pseudo-random noise
PSD: Power spectral density
SVN: Space vehicle number
TAI: International Atomic Time
WHFM: White frequency modulation
WHPM: White phase modulation

Appendix A. Convex Optimization Techniques

We solve the stochastic ONA models described in this paper using a primal-dual interior-point algorithm, whose foundation is the theory of convex optimization. This Appendix reviews the relevant concepts and techniques.

Appendix A.1. ℓ1 Norm and Compressive Sensing

We define the objective function in Equation (23) using the ℓ1-norm:
$$f_0(h,\mu,\nu)=\|\mu\|_1+\|\nu\|_1.$$
Since both μ and ν are positive,
$$f_0(h,\mu,\nu)=\sum_{i=1}^{\sum_{j=1}^{M}m_j}\left(\mu_i+\nu_i\right).$$
It has been shown that when the ℓ1-norm is applied to the optimization variable and
$$\dim(h,\mu,\nu)>\dim(\sigma),$$
estimates of the optima are approximately sparse [20]. Here, dim( σ ) denotes the dimension of σ . With high probability, the solution is stable and unique. The total dimension
$$\dim(h,\mu,\nu)=\dim(h)+\dim(\mu)+\dim(\nu)$$
makes little difference to the solutions. Instead, the number of nonzero entries of ( h , μ , ν ) should be less than C dim ( σ ) , where C is a constant given in [21]. Such problems are called compressive sensing problems and can be solved by the convex optimization techniques described below.

Appendix A.2. Solving Inequality Constrained Optimization Problems

Equations (20), (23) and (25) are all convex optimization problems: the sets defined by Equation (21) and by the inequality constraints of Equation (23) are intersections of half-spaces, and hence convex sets. In addition, the objective functions defined in Equations (20), (23) and (25) are convex, since they are either linear or quadratic with positive semidefinite Hessians.
Denote the objective functions of Equations (20), (23) and (25) as f 0 ( s ) , where s = h or ( h , μ , ν ) , and the inequality constraints as:
$$f_i(s)\le 0,\quad i=1,2,\ldots,l,$$
where $l=3\sum_{j=1}^{M}m_j$ or $6\sum_{j=1}^{M}m_j$. The (Lagrange) dual problem of Equations (20), (23) and (25) is then defined as:
$$\text{maximize } L(\lambda)\quad\text{subject to }\lambda\succeq 0,$$
where λ is an l-dimensional real column vector and L ( λ ) is the dual function of the original problem:
$$L(\lambda):=\inf_{s\in\mathcal{D}}\,r(s,\lambda),$$
i.e., the greatest lower bound of r ( s , λ ) over s in the problem domain 𝒟 for fixed λ . Here, r ( s , λ ) is the corresponding Lagrangian,
$$r(s,\lambda)=f_0(s)+\sum_{i=1}^{l}\lambda_i f_i(s).$$
Then, L ( λ ) ≤ f 0 ( s ) for every feasible s (weak duality). If, in addition, all inequalities f i ( s ) ≤ 0 can be satisfied strictly, and we denote the optima of f 0 ( s ) and L ( λ ) as s * and λ * , respectively, then:
$$f_0(s^*)=L(\lambda^*).$$
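Weak and strong duality can be checked numerically on a toy problem (not from the paper): minimize (s − 3)² subject to s − 1 ≤ 0, whose dual function has the closed form below. Since the constraint can be satisfied strictly (e.g., s = 0), the duality gap is zero:

```python
def f0(s):
    # primal objective: minimize (s - 3)^2 subject to s - 1 <= 0
    return (s - 3.0) ** 2

def dual(lam):
    # L(lam) = inf_s [ (s - 3)^2 + lam*(s - 1) ]; the infimum is at s = 3 - lam/2
    s = 3.0 - lam / 2.0
    return f0(s) + lam * (s - 1.0)

# the dual optimum over a grid of lam >= 0; it peaks at lam = 4 with L(4) = 4,
# matching the primal optimum f0(1) = 4 at the constrained minimizer s = 1
best = max(dual(l / 100.0) for l in range(0, 1001))
```

Every dual value lower-bounds every feasible primal value, and the maximized dual attains the primal optimum exactly, which is what the interior-point algorithm exploits.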
Since the objective functions f 0 ( s ) defined by Equations (20), (23) and (25) and their dual L ( λ ) are differentiable, the optima satisfy:
$$\begin{bmatrix}\partial r(s,\lambda)/\partial s\\[2pt] \partial r(s,\lambda)/\partial\lambda\end{bmatrix}\Bigg|_{(s,\lambda)=(s^*,\lambda^*)}=0.$$
These are the famous Karush–Kuhn–Tucker (KKT) conditions.
To handle the inequality constraints, we replace each f i ( s ) with an interior (logarithmic) barrier:
$$I_i(s)=-(1/\eta)\log\left(-f_i(s)\right),\quad i=1,\ldots,l,$$
where log is the natural logarithm function and η an arbitrary positive real number. These barriers will be used as ‘penalties’ in the cost function. Specifically, we substitute the original objective function f 0 ( s ) by:
$$f_0(s)-\sum_{i=1}^{l}(1/\eta)\log\left(-f_i(s)\right).$$
Since the inequality constraints become strict after introducing the interior barriers, the KKT conditions are both sufficient and necessary for the optima of Equation (A5). We therefore solve Equation (A5) in the following Newtonian framework:
  • ( s 0 , λ 0 ) is an arbitrary initial point of the original and dual problem
  • for k = 0 , 1, ⋯
  • solve the Newton step ( Δ s k , Δ λ k ) from:
    $$R\begin{bmatrix}\Delta s_k\\ \Delta\lambda_k\end{bmatrix}=\begin{bmatrix}-\,\partial f_0(s)/\partial s-DF(s)^T\lambda\\ \operatorname{diag}(\lambda)F(s)+(1/\eta)\mathbf{1}\end{bmatrix}$$
    at the point ( s , λ ) = ( s k , λ k ), where:
    $$F(s)=\begin{bmatrix}f_1(s)& f_2(s)&\cdots& f_l(s)\end{bmatrix}^T,$$
    1 is an l-dimensional column vector whose elements are one,
    $$R=\begin{bmatrix}\nabla^2 f_0(s)+\sum_{i=1}^{l}\lambda_i\nabla^2 f_i(s) & DF(s)^T\\ -\operatorname{diag}(\lambda)\,DF(s) & -\operatorname{diag}\!\left(F(s)\right)\end{bmatrix},$$
    diag ( λ ) returns a square diagonal matrix with the elements of vector λ on the main diagonal, and the operator ∇² returns the Hessian matrix of its operand; for example, for the f 0 ( s ) defined in Equation (17), the i-th row, j-th column component of ∇² f 0 ( s ) is:
    $$\nabla^2 f_0(s)[ij]=\frac{\partial^2 f_0(s)}{\partial h_{\alpha_i}\,\partial h_{\alpha_j}}=\left[\Phi^T Q\Phi\right][ij],$$
    where [ Φ^T Q Φ ][ij] is the i-th row, j-th column component of Φ^T Q Φ , λ i the i-th component of λ , and DF ( s ) the derivative (Jacobian) matrix of F ( s ) , whose i-th row, j-th column component is:
    $$DF(s)[ij]=\frac{\partial f_i(s)}{\partial h_{\alpha_j}};$$
  • set
    $$(s_{k+1},\lambda_{k+1})\leftarrow(s_k,\lambda_k)+\psi\,(\Delta s_k,\Delta\lambda_k),$$
    choosing ψ so that f i ( s ) < 0 , i = 1 , ⋯, l , and λ ⪰ 0 [22], where 0 < ψ ≤ 1; ψ can be determined using the backtracking line search described in [23]. When ψ = 1 , the primal-dual interior-point algorithm reduces to the pure Newton method.
If the difference between f 0 ( s k ) and r ( s k , λ k ) grows as k increases, following [24], we label Equation (A5) as ‘infeasible’ and change the model according to the description of Section 3. Otherwise, we increase η in Equation (A5) and use the result of the previous iteration as the initial point ( s 0 , λ 0 ) of the current iteration.
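The interaction between the barrier parameter and the Newton iteration can be illustrated on a one-dimensional toy problem (again, not one of the paper's models): minimize (s − 3)² subject to s − 1 ≤ 0. As η grows, the barrier-augmented minimizer approaches the true constrained optimum s = 1:

```python
def barrier_newton(eta, s0=0.0, tol=1e-10, max_iter=100):
    """Minimize f0(s) = (s - 3)^2 subject to f1(s) = s - 1 <= 0 by Newton steps
    on the log-barrier objective f0(s) - (1/eta) * log(-f1(s))."""
    s = s0
    for _ in range(max_iter):
        # first and second derivatives of the barrier-augmented objective
        g = 2.0 * (s - 3.0) + (1.0 / eta) / (1.0 - s)
        h = 2.0 + (1.0 / eta) / (1.0 - s) ** 2
        step = g / h
        # damp the step so the iterate stays strictly feasible (s < 1)
        while s - step >= 1.0:
            step *= 0.5
        s -= step
        if abs(step) < tol:
            break
    return s

# increasing the barrier parameter eta drives the solution toward s = 1
sols = [barrier_newton(eta) for eta in (1.0, 10.0, 1000.0)]
```

The step-halving loop plays the role of the feasibility-preserving line search above; restarting each solve from the previous solution is exactly the warm start described in the text.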

Appendix B. Simplified Formulations

Suppose we estimate the AVAR σ̂ y ²( m τ 0 ) and the HVAR σ̂ z ²( m τ 0 ) from the time deviations x [ i ], i = 1 , 2 , …, N , of a frequency standard. In Section 4, we give the expressions Var[ σ̂ k ²( m τ 0 )] of the AVAR for m ≤ N / 4 and of the HVAR for m ≤ N / 6 . Here, expressions of Var[ σ̂ k ²( m τ 0 )] for the remaining cases will be given.
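For reference, the estimators σ̂ y ² and σ̂ z ² themselves can be computed from the time deviations with the standard overlapping forms below (a sketch; whether the paper uses exactly these overlapping estimators, rather than non-overlapped ones, is an assumption):

```python
def avar(x, m, tau0):
    """Overlapping Allan variance at averaging time m*tau0 from time deviations x."""
    N = len(x)
    s = sum((x[i + 2 * m] - 2 * x[i + m] + x[i]) ** 2 for i in range(N - 2 * m))
    return s / (2.0 * (m * tau0) ** 2 * (N - 2 * m))

def hvar(x, m, tau0):
    """Overlapping Hadamard variance at averaging time m*tau0 from time deviations x."""
    N = len(x)
    s = sum((x[i + 3 * m] - 3 * x[i + 2 * m] + 3 * x[i + m] - x[i]) ** 2
            for i in range(N - 3 * m))
    return s / (6.0 * (m * tau0) ** 2 * (N - 3 * m))

# a pure quadratic in phase (i.e., a linear frequency drift): the second
# difference in avar() sees it, while the third difference in hvar() cancels it
x = [0.5 * i * i for i in range(100)]
```

This cancellation of quadratic phase is why HVAR is the drift-insensitive statistic among the three variances considered in this paper.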
When N/3 ≤ m < N/2,
$$\operatorname{Var}\hat{\sigma}_y^2(m\tau_0)=\frac{18\,q_0^2}{(N-2m)(m\tau_0)^4}$$
for α = 2,
$$\operatorname{Var}\hat{\sigma}_y^2(m\tau_0)=\frac{q_1^2\left[\tfrac{3}{4}N^4-8mN^3+32m^2N^2-\tfrac{3}{4}N^2-56m^3N+5mN+36m^4-7m^2\right]}{4(N-2m)^2(m\tau_0)^4}$$
for α = 0, and:
$$\operatorname{Var}\hat{\sigma}_y^2(m\tau_0)=\frac{q_2^2}{96(N-2m)^2m^4}\Big[\tfrac{3}{14}N^8-\tfrac{32}{7}mN^7+\tfrac{208}{5}m^2N^6-\tfrac{9}{5}N^6-\tfrac{1048}{5}m^3N^5+\tfrac{144}{5}mN^5+\tfrac{1904}{3}m^4N^4-\tfrac{560}{3}m^2N^4+\tfrac{9}{2}N^4-\tfrac{3520}{3}m^5N^3+\tfrac{1864}{3}m^3N^3-48mN^3+\tfrac{3872}{3}m^6N^2-1104m^4N^2+\tfrac{2816}{15}m^2N^2-\tfrac{102}{35}N^2-\tfrac{27392}{35}m^7N+\tfrac{14624}{15}m^5N-\tfrac{4688}{15}m^3N+\tfrac{88}{5}mN+\tfrac{22016}{105}m^8-\tfrac{4864}{15}m^6+\tfrac{2792}{15}m^4-\tfrac{824}{35}m^2\Big]$$
for α = −2.
When N/4 ≤ m < N/3,
$$\operatorname{Var}\hat{\sigma}_y^2(m\tau_0)=\frac{2q_0^2\,(17N-42m)}{(N-2m)^2(m\tau_0)^4}$$
and:
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{200\,q_0^2}{9(N-3m)(m\tau_0)^4}$$
for α = 2,
$$\operatorname{Var}\hat{\sigma}_y^2(m\tau_0)=\frac{q_1^2}{8(N-2m)^2(m\tau_0)^4}\Big[\tfrac{1}{6}N^4-2mN^3+8m^2N^2-mN^2-\tfrac{1}{6}N^2-8m^3N+8m^2N+5mN-6m^4-17m^3-11m^2\Big]$$
and:
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{q_1^2}{144(N-3m)^2(m\tau_0)^4}\Big[\tfrac{100}{3}N^4-480mN^3+2592m^2N^2-\tfrac{100}{3}N^2-6192m^3N+280mN+5508m^4-540m^2\Big]$$
for α = 0,
$$\operatorname{Var}\hat{\sigma}_y^2(m\tau_0)=\frac{q_2^2}{72m^4(N-2m)^2}\Big[\tfrac{1}{56}N^8-\tfrac{4}{7}mN^7+8m^2N^6-\tfrac{3}{20}N^6-64m^3N^5+\tfrac{18}{5}mN^5+320m^4N^4-36m^2N^4+\tfrac{3}{8}N^4-1024m^5N^3+192m^3N^3-6mN^3+2048m^6N^2-576m^4N^2+36m^2N^2-\tfrac{17}{70}N^2-\tfrac{81618}{35}m^7N+\tfrac{4628}{5}m^5N-\tfrac{466}{5}m^3N+\tfrac{158}{35}mN+\tfrac{40253}{35}m^8-\tfrac{3106}{5}m^6+\tfrac{461}{5}m^4-\tfrac{318}{35}m^2\Big]$$
and:
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{q_2^2}{648(N-3m)^2m^4}\Big[\tfrac{25}{28}N^8-\tfrac{180}{7}mN^7+\tfrac{1602}{5}m^2N^6-\tfrac{15}{2}N^6-\tfrac{11271}{5}m^3N^5+162mN^5+\tfrac{19575}{2}m^4N^4-1440m^2N^4+\tfrac{75}{4}N^4-26838m^5N^3+6735m^3N^3-270mN^3+45369m^6N^2-\tfrac{34911}{2}m^4N^2+\tfrac{7218}{5}m^2N^2-\tfrac{85}{7}N^2-\tfrac{1513107}{35}m^7N+23733m^5N-\tfrac{16923}{5}m^3N+\tfrac{666}{7}mN+\tfrac{2490669}{140}m^8-13203m^6+\tfrac{58653}{20}m^4-\tfrac{1233}{7}m^2\Big]$$
for α = −2, and:
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{\tau_0^4 q_3^2}{103680(N-3m)^2m^4}\Big[\tfrac{5}{66}N^{12}-\tfrac{36}{11}mN^{11}+64m^2N^{10}-\tfrac{35}{18}N^{10}-\tfrac{2245}{3}m^3N^9+70mN^9+\tfrac{81495}{14}m^4N^8-\tfrac{15675}{14}m^2N^8+\tfrac{755}{42}N^8-\tfrac{221962}{7}m^5N^7+\tfrac{73260}{7}m^3N^7-\tfrac{3624}{7}mN^7+\tfrac{619836}{5}m^6N^6-63243m^4N^6+\tfrac{32199}{5}m^2N^6-\tfrac{145}{2}N^6-\tfrac{1752408}{5}m^7N^5+257948m^5N^5-\tfrac{225667}{5}m^3N^5+1566mN^5+\tfrac{1423227}{2}m^8N^4-718917m^6N^4+194724m^4N^4-\tfrac{27821}{2}m^2N^4+\tfrac{1120}{9}N^4-1013166m^9N^3+1352484m^7N^3-529228m^5N^3+\tfrac{194878}{3}m^3N^3-1792mN^3+\tfrac{4809249}{5}m^{10}N^2-\tfrac{3291663}{2}m^8N^2+\tfrac{4424766}{5}m^6N^2-\tfrac{2351637}{14}m^4N^2+\tfrac{66942}{7}m^2N^2-\tfrac{5240}{77}N^2-\tfrac{211070799}{385}m^{11}N+\tfrac{8206362}{9}m^9N-\tfrac{29175021}{35}m^7N+\tfrac{1595052}{7}m^5N-\tfrac{156444}{7}m^3N+\tfrac{3648}{7}mN+\tfrac{21992877}{154}m^{12}-\tfrac{5208975}{14}m^{10}+\tfrac{4756509}{14}m^8-\tfrac{1773297}{14}m^6+\tfrac{134982}{7}m^4-\tfrac{73224}{77}m^2\Big]$$
for α = −4.
When N/5 ≤ m < N/4,
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{(425N-1500m)\,q_0^2}{9(N-3m)^2(m\tau_0)^4}$$
for α = 2,
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{q_1^2\left[\tfrac{25}{12}N^4-40mN^3+288m^2N^2-\tfrac{25}{12}N^2-908m^3N+40mN+1057m^4-115m^2\right]}{36(N-3m)^2(m\tau_0)^4}$$
for α = 0,
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{q_2^2}{864(N-3m)^2m^4}\Big[\tfrac{25}{84}N^8-\tfrac{80}{7}mN^7+\tfrac{956}{5}m^2N^6-15N^6-\tfrac{9098}{5}m^3N^5+72mN^5+10770m^4N^4-860m^2N^4+\tfrac{25}{4}N^4-40584m^5N^3+5450m^3N^3-120mN^3+95052m^6N^2-19314m^4N^2+\tfrac{4304}{5}m^2N^2-\tfrac{85}{21}N^2-\tfrac{632348}{5}m^7N+36284m^5N-\tfrac{13564}{5}m^3N+\tfrac{416}{7}mN+\tfrac{2560783}{35}m^8-28228m^6+\tfrac{15871}{5}m^4-\tfrac{1116}{7}m^2\Big]$$
for α = −2, and:
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{\tau_0^4 q_3^2}{138240(N-3m)^2m^4}\Big[\tfrac{5}{198}N^{12}-\tfrac{16}{11}mN^{11}+\tfrac{344}{9}m^2N^{10}-\tfrac{35}{54}N^{10}-\tfrac{5450}{9}m^3N^9+\tfrac{280}{9}mN^9+\tfrac{45065}{7}m^4N^8-\tfrac{14045}{21}m^2N^8+\tfrac{755}{126}N^8-\tfrac{1015316}{21}m^5N^7+\tfrac{59320}{7}m^3N^7-\tfrac{4832}{21}mN^7+\tfrac{1313888}{5}m^6N^6-70034m^4N^6+\tfrac{57706}{15}m^2N^6-\tfrac{145}{6}N^6-\tfrac{5202144}{5}m^7N^5+\tfrac{1182328}{3}m^5N^5-\tfrac{548318}{15}m^3N^5+696mN^5+1487529m^8N^4-763918m^6N^4+215737m^4N^4-\tfrac{74801}{9}m^2N^4+\tfrac{1120}{27}N^4-5984488m^9N^3+4024112m^7N^3-\tfrac{2427188}{3}m^5N^3+\tfrac{473732}{21}m^3N^3-\tfrac{7168}{9}mN^3+\tfrac{40142892}{5}m^{10}N^2-6881802m^8N^2+\tfrac{9403928}{5}m^6N^2-\tfrac{1304218}{7}m^4N^2+\tfrac{39964}{7}m^2N^2-\tfrac{5240}{231}N^2-\tfrac{289902724}{45}m^{11}N+\tfrac{433989944}{63}m^9N-\tfrac{37117292}{15}m^7N+\tfrac{22002368}{63}m^5N-\tfrac{162080}{9}m^3N+\tfrac{23168}{77}mN+\tfrac{1618838030}{693}m^{12}-\tfrac{192734554}{63}m^{10}+\tfrac{29575226}{21}m^8-\tfrac{17042854}{63}m^6+\tfrac{1331272}{63}m^4-\tfrac{60128}{77}m^2\Big]$$
for α = −4.
When 1/6 ≤ m/N < 1/5,
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{(461N-1680m)\,q_0^2}{9(N-3m)^2(m\tau_0)^4}$$
for α = 2,
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{q_1^2\left[\tfrac{1}{3}N^4-8mN^3+72m^2N^2-\tfrac{1}{3}N^2-232m^3N+88mN+228m^4-300m^2\right]}{144(N-3m)^2(m\tau_0)^4}$$
for α = 0,
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{q_2^2}{864(N-3m)^2m^4}\Big[\tfrac{1}{84}N^8-\tfrac{4}{7}mN^7+12m^2N^6-\tfrac{1}{10}N^6-144m^3N^5+\tfrac{18}{5}mN^5+1080m^4N^4-54m^2N^4+\tfrac{1}{4}N^4-5184m^5N^3+432m^3N^3-6mN^3+15552m^6N^2-1944m^4N^2+54m^2N^2-\tfrac{17}{105}N^2-\tfrac{932686}{35}m^7N+4684m^5N-\tfrac{982}{5}m^3N+\tfrac{824}{35}mN+\tfrac{698283}{35}m^8-4728m^6+\tfrac{1311}{5}m^4-\tfrac{540}{7}m^2\Big]$$
for α = −2, and:
$$\operatorname{Var}\hat{\sigma}_z^2(m\tau_0)=\frac{\tau_0^4 q_3^2}{4147200(N-3m)^2m^4}\Big[\tfrac{1}{33}N^{12}-\tfrac{24}{11}mN^{11}+72m^2N^{10}-\tfrac{7}{9}N^{10}-1440m^3N^9+\tfrac{140}{3}mN^9+19440m^4N^8-1260m^2N^8+\tfrac{151}{21}N^8-186624m^5N^7+20160m^3N^7-\tfrac{2416}{7}mN^7+1306368m^6N^6-211680m^4N^6+7248m^2N^6-29N^6-6718464m^7N^5+1524096m^5N^5-86976m^3N^5+1044mN^5+25194240m^8N^4-7620480m^6N^4+652320m^4N^4-15660m^2N^4+\tfrac{448}{9}N^4-67184640m^9N^3+26127360m^7N^3-3131136m^5N^3+125280m^3N^3-\tfrac{3584}{3}mN^3+120932352m^{10}N^2-58786560m^8N^2+9393408m^6N^2-563760m^4N^2+10752m^2N^2-\tfrac{2096}{77}N^2-\tfrac{30472331996}{231}m^{11}N+\tfrac{235162420}{3}m^9N-\tfrac{112693988}{7}m^7N+\tfrac{28477444}{21}m^5N-\tfrac{121360}{3}m^3N+\tfrac{226752}{77}mN+\tfrac{5076178850}{77}m^{12}-\tfrac{329308930}{7}m^{10}+\tfrac{84470010}{7}m^8-\tfrac{9520870}{7}m^6+\tfrac{400840}{7}m^4-\tfrac{720000}{77}m^2\Big]$$
for α = −4.
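As a sketch, the two WHPM ( α = 2 ) closed forms for Var[ σ̂ y ²] can be evaluated piecewise as below. The coefficients follow our reading of the (extraction-damaged) expressions above, so treat them as a reconstruction; a useful sanity check is that at m = N/3 both branches reduce to 54 q₀² / (N (m τ₀)⁴), i.e., the piecewise expression is continuous across the regime boundary:

```python
def var_avar_whpm(q0, m, N, tau0):
    """Variance of the AVAR estimate under WHPM (alpha = +2), following the
    piecewise closed forms above (coefficients as reconstructed from the text)."""
    c = q0 ** 2 / (m * tau0) ** 4
    if N / 4 <= m < N / 3:
        return 2.0 * c * (17 * N - 42 * m) / (N - 2 * m) ** 2
    if N / 3 <= m < N / 2:
        return 18.0 * c / (N - 2 * m)
    raise ValueError("these closed forms cover N/4 <= m < N/2")

# at m = N/3: 18/(N - 2m) = 54/N and 2*(17N - 42m)/(N - 2m)^2 = 54/N,
# so the two regimes match at their common boundary
```

The same boundary-matching check applies to the other noise types and to the HVAR expressions, and is a cheap way to validate an implementation of these formulas.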

References

  1. Shorter, G.; Miller, R. High-Frequency Trading: Background, Concerns, and Regulatory Developments; Congressional Research Service: Washington, DC, USA, 2014. [Google Scholar]
  2. McCarthy, D.; Seidelmann, P. Time; WILEY-VCH Verlag GmbH & Co. KGaA: Weinheim, Germany, 2009. [Google Scholar]
  3. Keshner, M. 1/f noise. Proc. IEEE 1982, 70, 212–218. [Google Scholar] [CrossRef]
  4. Howe, D.; Beard, R.; Greenhall, C.; Vernotte, F.; Riley, W.; Peppler, T. Enhancements to GPS Operations and Clock Evaluations Using a “Total” Hadamard Deviation. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2005, 52, 1253–1261. [Google Scholar] [CrossRef] [PubMed]
  5. Howe, D. ThêoH: A Hybrid, High-Confidence Statistic that Improves on the Allan Deviation. Metrologia 2006, 43, 322–331. [Google Scholar] [CrossRef]
  6. Vernotte, F.; Lenczner, M.; Bourgeois, P.Y.; Rubiola, E. The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2016, 63, 611–623. [Google Scholar] [CrossRef] [PubMed]
  7. Cheng, W.; Nie, G.; Wang, P.; Zhang, C.; Gao, Y. Predicting Long-Term Frequency Stability: Stochastic Oscillator Noise Analysis. IEEE Trans. Ultrason. Ferroelectr. Freq. Control, in press.
  8. Manolakis, D.; Ingle, V. Applied Digital Signal Processing; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  9. Vernotte, F.; Lantz, E. Metrology and 1/f noise: Linear regressions and confidence intervals in flicker noise context. Metrologia 2015, 52, 222–237. [Google Scholar] [CrossRef]
  10. Levine, J. The statistical modeling of atomic clocks and the design of time scales. Rev. Sci. Instrum. 2012, 83, 1–28. [Google Scholar] [CrossRef] [PubMed]
  11. Panfilo, G.; Harmegnies, A.; Tisserand, L. A new prediction algorithm for the generation of International Atomic Time. Metrologia 2012, 49, 49–56. [Google Scholar] [CrossRef]
  12. Institute of Electrical and Electronics Engineers. IEEE Standard Definitions of Physical Quantities for Fundamental Frequency and Time Metrology–Random Instabilities; IEEE: Piscataway, NJ, USA, 2009. [Google Scholar]
  13. Riley, W. Handbook of Frequency Stability Analysis; NIST Special Publication: Boulder, CO, USA, 2008. [Google Scholar]
  14. Kobayashi, H.; Mark, B.L.; Turin, W. Probability, Random Processes, and Statistical Analysis; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  15. Kasdin, N. Discrete simulation of colored noise and stochastic processes and l/fα power law generation. Proc. IEEE 1995, 83, 802–827. [Google Scholar] [CrossRef]
  16. Walter, T. Characterizing frequency stability: A continuous power-law model with discrete sampling. IEEE Trans. Instrum. Meas. 1994, 43, 69–79. [Google Scholar] [CrossRef]
  17. Barnes, J.; Allan, D. A statistical model of flicker noise. Proc. IEEE 1966, 54, 176–178. [Google Scholar] [CrossRef]
  18. Vernotte, F.; Lantz, E.; Groslambert, J.; Gagnepain, J. Oscillator Noise Analysis: Multivariance Measurement. IEEE Trans. Instrum. Meas. 1993, 42, 342–350. [Google Scholar] [CrossRef]
  19. Cheng, W.; Nie, G. An adaptive oscillator noise analysis: using factor analysis. Metrologia 2013, 50, 586–595. [Google Scholar] [CrossRef]
  20. Candes, E.; Romberg, J.; Tao, T. Stable Signal Recovery from Incomplete and Inaccurate Measurements. Commun. Pure Appl. Math. 2006, 59, 1207–1223. [Google Scholar] [CrossRef]
  21. Candes, E.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef]
  22. Wright, S. Primal-Dual Interior-Point Methods; Society for Industrial and Applied Mathematics (MOS-SIAM): Philadelphia, PA, USA, 1997. [Google Scholar]
  23. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  24. Kojima, M.; Megiddo, N.; Mizuno, S. A general framework of continuation methods for complementarity problems. Math. Oper. Res. 1993, 18, 945–963. [Google Scholar] [CrossRef]
Figure 1. Estimates and prediction of GPS SVN.45 (PRN.21) rubidium clock frequency stability.
Figure 2. Estimates and prediction of GPS SVN.41 (PRN.14) rubidium clock frequency stability.
