Article

Sine-Fitting Residual Root Mean Square, Mean, and Variance in the Presence of Phase Noise or Jitter

by
Francisco Alegria
Instituto de Telecomunicações and Instituto Superior Técnico, University of Lisbon, 1049-001 Lisbon, Portugal
Sci 2025, 7(4), 136; https://doi.org/10.3390/sci7040136
Submission received: 14 August 2025 / Revised: 10 September 2025 / Accepted: 25 September 2025 / Published: 1 October 2025

Abstract

Fitting a sinusoidal model to a set of data points is a common practice in engineering, where one wants to estimate some quantities of interest by carrying out a sequence of measurements on a physical phenomenon. Analytical expressions are derived for the statistics of the root mean square value of the residuals from the least-squares sine-fitting procedure when the data points are affected by phase noise or sampling jitter. The two analytical expressions derived, one for the mean and one for the variance, are numerically validated using a Monte Carlo-type procedure with simulated data for varying amounts of noise, varying numbers of data points, and varying signal amplitude. It will be shown that there is excellent agreement between the numerical values obtained and those given by the proposed analytical expressions. These can be of use to engineers who need to compute confidence intervals for their estimates or who need to choose the number of signal data points that should be acquired in a given application.

1. Introduction

The processing of sinusoidal signals is a fundamental technique in modern measurement and instrumentation systems, playing a critical role in the performance evaluation of electronic devices and the extraction of key system parameters. Fitting sinusoidal models to empirical data finds widespread use in various engineering fields, including the testing of analog-to-digital converters (ADCs), signal integrity analysis, precision metrology, and communication system characterization [1]. The popularity of sinusoidal processing stems from the natural occurrence of sinusoidal waveforms in physical systems and their deliberate use as controlled excitation signals in measurement setups. Sinusoidal signals also arise throughout physics, for example in nuclear magnetic resonance spectroscopy, in particle accelerators, where particles are forced into circular orbits by a magnetic field, and in gravitational wave detection, where the measured perturbations of spacetime are often modeled as sinusoids. The computation of the root mean square (RMS) of the residuals can be used to detect faint signals in noisy data.
The mathematical foundation of the current work is robust and general, and the application is a matter of mapping the concepts to the specific domain in question. The proposed analytical expressions serve as a versatile statistical tool. Their adaptation to a new domain primarily requires identifying the physical source of phase noise or timing jitter within that system’s measurement process. The core advantage remains consistent across all fields: providing a computationally efficient, analytical method to quantify uncertainty and assess model fidelity for any process that can be described by a sinusoidal model.
Apart from mere curve fitting, sinusoidal analysis is also a standard technique for investigating device behavior under controlled conditions. If a pure sinusoidal input is presented to a system, the output will display fundamental characteristics like linearity, nonlinear distortion, noise performance, and dynamic range limits. The residuals, resulting from subtracting the fitted sinusoidal model from measurement data, represent non-ideal effects, such as thermal noise, quantization errors, harmonic distortion, and timing jitter.
The RMS value of residuals is a metric that provides direct information on key system parameters, including signal-to-noise ratio (SNR), effective number of bits (ENOB), and noise floor characteristics [2]. Such parameters are critical to assess the performance of measurement equipment, data acquisition systems, and signal processing algorithms. For high-precision applications, accurate analysis of residuals enables engineers to establish confidence intervals, optimize system designs, and balance measurement accuracy and acquisition time.
Sinusoidal fitting is specifically crucial in ADC testing, where standard test procedures are based on sine wave stimuli to assess converter performance. The IEEE Standard 1241-2000 [3] mandates least-squares sine-wave fitting methods for the extraction of dynamic performance metrics. In precision oscillator characterization and frequency metrology, sinusoidal fitting is used to quantify phase noise and frequency stability—parameters that are of paramount importance to telecommunications and scientific instrumentation [4].
Notwithstanding its prevalent application, the implementation of sinusoidal fitting in practical measurements is complicated by non-ideal conditions, including phase noise and sampling jitter, both of which contribute substantial uncertainty. Phase noise, which signifies stochastic variations in the phase of the waveform, appears as discrepancies from ideal periodicity, thereby complicating the differentiation from other sources of noise [5]. Phase noise is caused by a multitude of sources, including thermal noise related to active devices, flicker noise in semiconductor junctions, and external perturbations like vibrations and temperature fluctuations. These effects propagate through reference oscillators, frequency synthesizers, and phase-locked loops, causing correlated errors that bias parameter estimates and increase perceived noise levels. Sampling jitter, that is, time uncertainty in the ADC sampling instants, additionally complicates measurements by introducing amplitude errors in the digitized samples. The magnitude of these errors depends on input signal amplitude, frequency, and the RMS jitter value, establishing complicated dependencies that complicate result interpretation [6]. In the mathematical analysis that follows, both phase noise and jitter are handled with the same formalism.
Although there has been much research aimed at refining sinusoidal fitting algorithms, there are few studies that give detailed analytical foundations for residual prediction in the presence of phase noise and jitter. Giaquinto and Trotta [1] created improved sine-wave fitting methods for ADC testing without explicitly considering the effects of phase noise. Likewise, multiharmonic sine-fitting (MHSF) algorithms [7] consider harmonic distortion without theoretical consideration of correlated noise sources. Phase noise measurement methods, e.g., cross-correlation techniques [8], have pushed precision metrology forward but are largely unexploited in more general sinusoidal fitting applications. The lack of analytical residual statistics models restricts uncertainty budgeting, optimization of measurement, and confidence interval estimation, especially in calibration and standards laboratories [9]. This paper fills these gaps by deriving analytical formulas for the mean and variance of residual RMS values in the presence of phase noise and jitter. These closed-form solutions allow for fast uncertainty assessment without lengthy Monte Carlo-type simulations, providing insight into noise dependencies and measurement optimization.
Validated by numerical simulations, the framework lends itself to applications from ADC testing to oscillator characterization. It also offers a basis for future studies of complicated noise situations, non-stationary effects, and nonlinear system behavior.
The impact of phase noise in measurement systems has been treated from several distinct viewpoints in the literature, each adding vital insights to specific aspects while leaving other areas less adequately covered. In the context of analog-to-digital conversion, significant work has focused on the impact of sampling jitter on signal-to-noise ratio (SNR) and spurious-free dynamic range (SFDR) degradation in high-speed converters. Brannon [10] performed an extensive analysis of the effects of jitter in data conversion systems, deriving equations for jitter-induced noise power as a function of input signal characteristics and jitter statistics. While this work provided a fundamental link between timing uncertainty and amplitude error, it did not address the residuals of sinusoidal fitting. Phase noise in communication systems has been studied quite thoroughly, particularly with regard to digital modulation methods and carrier recovery mechanisms. Meyr et al. [11] developed detailed mathematical models for describing phase noise in phase-locked loops and carrier tracking systems, from which analytical expressions were derived for the degradation in bit error rates under various phase noise conditions. Although this work provides important information on the statistical properties of sinusoidal signals corrupted by phase distortion, its focus on communication systems means that it does not directly address the unique requirements of measurement and instrumentation applications.
Robust parameter estimation of sinusoidal signals in noise has been a topic of study in signal processing for many years. Kay [12] presented a comprehensive analysis of spectral analysis and parameter estimation, including Cramér–Rao lower bounds for sinusoidal parameter estimation in additive white Gaussian noise. Although this work establishes ultimate accuracy limits on frequency, amplitude, and phase estimation, it does not address phase noise or residual fit statistics specifically. More recent research by Pintelon and Schoukens [13] has developed system identification methods with sinusoidal excitation, further including techniques to deal with colored noise and nonlinear distortions. Their approach encompasses uncertainty analysis for frequency-domain measurements but does not completely deal with phase noise and jitter in time-domain fitting.
This research fills essential voids in the literature through the derivation of analytical formulas for the mean and variance of root mean square (RMS) sinusoidal fitting residuals in the presence of phase noise and jitter. For the first time, this theoretical model stringently describes the impact of these noise sources on residual statistics and measurement uncertainty, which in the future can be utilized in metrics such as SNR and effective number of bits (ENOB) estimation, for example. The analytical strategy employed here has important benefits over numerical techniques. Firstly, closed-form solutions provide fast uncertainty calculation—a feature especially valuable for real-time adaptive measurements. Secondly, the formalism employed discloses functional dependencies among noise properties, signal parameters, and uncertainty, which makes it possible to optimize systems from first principles instead of empirical information.
The author has published several papers that deal with the effects of phase noise or sampling jitter. The one closest to the current work is [14]. It deals only with the expected value of the RMS value estimation and does not address the estimator's standard deviation. Furthermore, that study of the expected value of RMS estimation results in a simple, approximate linear expression that is valid only for small values of phase noise standard deviation (less than 0.5 rad) and not for the complete range of possible values, as is done in the current work. Naturally, the author has devoted his entire career to studying the uncertainty associated with diverse measurement methods. This includes techniques such as the time-domain polarization method for characterization of soils [15], as well as numerical and experimental validation of measurement methods using Monte Carlo-type procedures.
In this work, we derive analytical expressions for the mean and the variance of the root mean square estimation of the sine-fitting residuals in the presence of phase noise or jitter. These expressions take into account the noise standard deviation, the number of data points, and the sinusoidal amplitude.
Section 2 recapitulates the least-squares sine-fitting algorithm, and Section 3 calculates the statistics of the sinusoidal amplitude estimator under phase noise and jitter. Section 4 deals with the analytical derivation of the formulas for the statistics of the residuals, and Section 5 applies those findings to calculate the statistics of the RMS value of the residuals. Section 6 gives the findings of numerical simulations that confirm the analytical formulas introduced. Lastly, Section 7 draws conclusions and suggests future work.

2. Sine-Fitting

In the following, we present the analytical derivation of an expression that allows us to determine the mean and standard deviation of the RMS value of the sine-fitting residuals. The first step is, naturally, to start by describing the sinusoidal model used. We thus express the ideal sample values using
$$y_i = C + A\cos(\omega_x t_i + \varphi), \tag{1}$$
where $C$ is the sinusoidal offset, $A$ is the amplitude, and $\varphi$ is the initial phase. The sampling instants are given by $t_i$, where the sample index $i$ goes from 0 to $M-1$, with $M$ being the number of samples. The sinusoidal frequency is assumed to be known and is, thus, not estimated in the context of this work. It is given by $f_x$, and the corresponding angular frequency is $\omega_x$.
Here, we study the case where there is a phase noise that affects the sample values, so that the actual sample values are provided by
$$z_i = C + A\cos(\omega_x t_i + \varphi + \theta_i). \tag{2}$$
Note that, since we are considering that the sinusoidal frequency is known, the analysis presented next is also applicable to the case where sampling jitter is present instead of phase noise. In this case, we just need to make the correspondence
$$\theta_i = \omega_x \delta_i, \tag{3}$$
where $\delta_i$ is the amount of sampling jitter present. Naturally, the standard deviations of these two variables are related by
$$\sigma_\theta = \omega_x \sigma_\delta. \tag{4}$$
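For illustration, this conversion is a one-liner; a minimal sketch in Python (the numeric values are illustrative assumptions, not taken from the measurements in this paper):

```python
import numpy as np

f_x = 1.0e6                  # assumed sinusoid frequency (Hz)
sigma_delta = 2.0e-9         # assumed RMS sampling jitter (s)

# Eq. (4): equivalent phase-noise standard deviation (rad)
sigma_theta = 2 * np.pi * f_x * sigma_delta
print(sigma_theta)           # approx. 0.0126 rad
```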
Considering this set of data points, corrupted by noise, we wish to estimate the three unknown parameters that describe the sinusoid: offset ($\hat{C}$), amplitude ($\hat{A}$), and initial phase ($\hat{\varphi}$). Note the "hat" symbol used over the estimated quantities. We can thus use a three-parameter sine-fitting algorithm, which minimizes the square difference between the data points and the sinusoidal model; that is, it minimizes the summed squared residuals (SSRs) provided by
$$\widehat{SSR} = \sum_{i=0}^{M-1}\left(z_i - \hat{z}_i\right)^2, \tag{5}$$
where the estimated sinusoidal points are given by
$$\hat{z}_i = \hat{C} + \hat{A}\cos(\omega_x t_i + \hat{\varphi}). \tag{6}$$
We are going to write the estimated SSR value as
$$\widehat{SSR} = \sum_{i=0}^{M-1} r_i^2, \tag{7}$$
where
$$r_i = z_i - \hat{z}_i \tag{8}$$
are the residuals.
The three estimated parameters of the sinusoidal model are determined from the samples using
$$\hat{C} = \frac{1}{M}\sum_{i=0}^{M-1} z_i \tag{9}$$
for the offset,
$$\hat{A} = \sqrt{\hat{A}_I^2 + \hat{A}_Q^2} \tag{10}$$
for the amplitude, and
$$\hat{\varphi} = \arctan\left(-\frac{\hat{A}_Q}{\hat{A}_I}\right) \tag{11}$$
for the initial phase, where $\hat{A}_I$ and $\hat{A}_Q$ are the in-phase and in-quadrature amplitudes computed using
$$\hat{A}_I = \frac{2}{M}\sum_{i=0}^{M-1} z_i \cos(\omega_a t_i) \tag{12}$$
and
$$\hat{A}_Q = \frac{2}{M}\sum_{i=0}^{M-1} z_i \sin(\omega_a t_i). \tag{13}$$
Note that $\omega_a$ is the angular frequency used in the least-squares sine-fitting procedure, which we assume to be known in this work, so that we have
$$\omega_a = \omega_x. \tag{14}$$
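As an illustration of Equations (9)–(13), the following is a minimal sketch of the three-parameter estimator in Python with NumPy (the function and variable names are mine, not from the paper; the closed-form expressions coincide with the least-squares solution under the coherent-sampling assumption used throughout this work):

```python
import numpy as np

def three_param_sine_fit(z, wt):
    """Three-parameter least-squares sine fit with known frequency.

    z  : array of acquired samples
    wt : array of known phase arguments omega_a * t_i
    Returns the estimated offset, amplitude, and initial phase of the
    model z_i = C + A*cos(omega_a*t_i + phi), following Eqs. (9)-(13).
    """
    M = len(z)
    C_hat = np.mean(z)                        # offset, Eq. (9)
    A_I = 2.0 / M * np.sum(z * np.cos(wt))    # in-phase amplitude, Eq. (12)
    A_Q = 2.0 / M * np.sum(z * np.sin(wt))    # quadrature amplitude, Eq. (13)
    A_hat = np.hypot(A_I, A_Q)                # amplitude, Eq. (10)
    phi_hat = np.arctan2(-A_Q, A_I)           # initial phase, Eq. (11)
    return C_hat, A_hat, phi_hat
```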
The next step is the derivation of the statistics of the sinusoidal amplitude estimator, presented in the next section. Afterwards, we will focus on the expected value of the squared residuals and, from it, study the mean and variance of the root mean square value of the residuals.

3. Statistics of the Amplitude Estimator

Here, we derive the expected value and standard deviation of the estimated sinusoidal amplitude in the presence of phase noise or jitter. Later on, these results will be used to compute the statistics of the root mean square value of the sine-fitting residuals.
Considering the phase noise to be normally distributed with null mean and variance $\sigma_\theta^2$,
$$\theta_i \sim N\!\left(0, \sigma_\theta^2\right), \tag{15}$$
the characteristic function is
$$\psi_{\theta_i}(t) = E\!\left[e^{\mathrm{i}t\theta_i}\right] = e^{-\frac{\sigma_\theta^2}{2}t^2}. \tag{16}$$
We now use Euler's formula to write the complex exponential in (16) as
$$E[\cos(t\theta_i)] + \mathrm{i}\,E[\sin(t\theta_i)] = e^{-\frac{\sigma_\theta^2}{2}t^2}. \tag{17}$$
Since the right side of this equation is real, we must have
$$E[\sin(t\theta_i)] = 0 \tag{18}$$
and thus
$$E[\cos(t\theta_i)] = e^{-\frac{\sigma_\theta^2}{2}t^2}. \tag{19}$$
If we set t = 1 now, we obtain
$$E[\cos\theta_i] = e^{-\frac{\sigma_\theta^2}{2}}. \tag{20}$$
Later on, we will also need the expected value of the square of the cosine of $\theta_i$. We will compute it now by writing the square of the cosine as
$$\cos^2\theta_i = \frac{1}{2} + \frac{1}{2}\cos 2\theta_i. \tag{21}$$
The expected value is thus
$$E[\cos^2\theta_i] = \frac{1}{2} + \frac{1}{2}E[\cos 2\theta_i]. \tag{22}$$
To compute the expected value of $\cos 2\theta_i$, we use the same procedure as before, now setting $t = 2$ in (19), such that
$$E[\cos 2\theta_i] = e^{-2\sigma_\theta^2}. \tag{23}$$
Inserting this back into (22) leads to
$$E[\cos^2\theta_i] = \frac{1}{2} + \frac{1}{2}e^{-2\sigma_\theta^2}. \tag{24}$$
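These two expectations are easy to check numerically; a quick sketch (assuming NumPy; the parameter value is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_theta = 0.8                                  # phase-noise std (rad)
theta = rng.normal(0.0, sigma_theta, 1_000_000)

print(np.cos(theta).mean(),                        # Monte Carlo estimate
      np.exp(-sigma_theta**2 / 2))                 # Eq. (20)
print((np.cos(theta)**2).mean(),
      0.5 + 0.5 * np.exp(-2 * sigma_theta**2))     # Eq. (24)
```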
Focusing now on the bias of the amplitude estimation, we will use the fact that the sinusoidal amplitude can be estimated using
$$\hat{A} = \frac{\sum_{i=0}^{M-1} z_i\cos(\omega_a t_i + \hat{\varphi})}{\sum_{i=0}^{M-1}\cos^2(\omega_a t_i + \hat{\varphi})}. \tag{25}$$
Taking the expectation value and assuming that the initial phase estimation is unbiased, that is
$$\hat{\varphi} \approx \varphi, \tag{26}$$
we have
$$E[\hat{A}] = A\,E[\cos\theta_i]. \tag{27}$$
Making use of (20) leads to
$$E[\hat{A}] = A\,e^{-\frac{\sigma_\theta^2}{2}}. \tag{28}$$
The previous two sections (Section 2 and Section 3) contain material that is well known in statistics and estimation theory. It is included here to serve as context and reference for the novel derivations, which constitute the main contribution of this work, and which are presented in Section 4 and Section 5.

4. Statistics of the Residuals

The sine-fitting residuals, given by (8), are computed as the difference between the sample values and the values of the fitted sine wave at the same instants in time. The former depend directly on the phase noise or jitter present and are considered random variables. The latter depend on the sine-fitting parameters, namely amplitude, initial phase, and offset. These three parameters, since they are estimated from the data points, are, strictly speaking, also random variables. In the following theoretical analysis, however, we consider them to be deterministic variables, albeit with a biased amplitude given by (28). This simplifies the derivations, and it will be shown, using numerical simulations, that the resulting analytical expression proposed for the variance of the RMS value of the residuals is, in fact, very accurate despite this simplification.
Note that the analysis is conditional on the parameter estimation process. We are not analyzing the joint distribution of the data and the unknown parameters, but rather the distribution of the residuals given the fitted model. In mathematical terms, we are computing $E[r_i^2 \mid \hat{A}, \hat{\varphi}, \hat{C}]$ and $E[r_i^4 \mid \hat{A}, \hat{\varphi}, \hat{C}]$, where $\hat{A}$, $\hat{\varphi}$, and $\hat{C}$ are the estimated parameters. Once these parameters are estimated from the data, they become fixed values for the purpose of analyzing the residual statistics.
Our derivation makes several key assumptions about the parameter estimation process. The first one is that the initial phase and offset estimations are unbiased; that is, $E[\hat{\varphi}] = \varphi$ and $E[\hat{C}] = C$. This means that, on average, the phase and offset estimation procedure recovers the true phase and offset:
$$\hat{\varphi} = \varphi \tag{29}$$
and
$$\hat{C} = C. \tag{30}$$
For least-squares fitting of sinusoidal data with phase noise, this assumption is generally valid. We explicitly account for the amplitude estimation bias, as determined by (28), and consider that the sinusoidal amplitude is given by
$$\hat{A} = A\,e^{-\frac{\sigma_\theta^2}{2}}. \tag{31}$$
This bias is deterministic and depends only on the phase noise variance, not on random realizations. Finally, we assume that for sufficiently large datasets, the parameter estimates converge to their expected values, making the conditioning assumption more accurate.

4.1. Second Raw Moment Calculation

The starting point of the derivation is writing the cosine function found in (2) for the sample value affected by phase noise,
$$z_i = C + A\cos(\omega_x t_i + \varphi + \theta_i), \tag{32}$$
using the trigonometric identity
$$\cos(a+b) = \cos a \cos b - \sin a \sin b, \tag{33}$$
for any generic variables $a$ and $b$. Applying this to our case results in
$$\cos(\omega_x t_i + \varphi + \theta_i) = \cos(\omega_x t_i + \varphi)\cos\theta_i - \sin(\omega_x t_i + \varphi)\sin\theta_i. \tag{34}$$
The actual sample values, provided by (32), become
$$z_i = C + A\cos(\omega_x t_i + \varphi)\cos\theta_i - A\sin(\omega_x t_i + \varphi)\sin\theta_i. \tag{35}$$
The residual, provided by (8), can then be written as
$$r_i = C + A\cos(\omega_x t_i + \varphi)\cos\theta_i - A\sin(\omega_x t_i + \varphi)\sin\theta_i - \hat{C} - \hat{A}\cos(\omega_x t_i + \hat{\varphi}), \tag{36}$$
where we have made use of $\hat{z}_i$ given by (6). Assuming now that $\hat{C} \approx C$ and $\hat{\varphi} \approx \varphi$, we have
$$r_i = A\cos(\omega_x t_i + \varphi)\cos\theta_i - A\sin(\omega_x t_i + \varphi)\sin\theta_i - \hat{A}\cos(\omega_x t_i + \varphi). \tag{37}$$
Inserting the expected value of the amplitude, provided by (28), leads to
$$r_i = A\cos(\omega_x t_i + \varphi)\cos\theta_i - A\sin(\omega_x t_i + \varphi)\sin\theta_i - A\,e^{-\frac{\sigma_\theta^2}{2}}\cos(\omega_x t_i + \varphi). \tag{38}$$
Simplifying leads to
$$r_i = A\cos(\omega_x t_i + \varphi)\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right) - A\sin(\omega_x t_i + \varphi)\sin\theta_i. \tag{39}$$
Computing now the square of the residual results in
$$r_i^2 = A^2\cos^2(\omega_x t_i + \varphi)\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right)^2 + A^2\sin^2(\omega_x t_i + \varphi)\sin^2\theta_i - 2A^2\cos(\omega_x t_i + \varphi)\sin(\omega_x t_i + \varphi)\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right)\sin\theta_i. \tag{40}$$
We can now compute the expected value of the square of this residual by computing the expected value of each of the three terms:
$$E[r_i^2] = A^2\cos^2(\omega_x t_i + \varphi)\,E\!\left[\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right)^2\right] + A^2\sin^2(\omega_x t_i + \varphi)\,E[\sin^2\theta_i] - 2A^2\cos(\omega_x t_i + \varphi)\sin(\omega_x t_i + \varphi)\,E\!\left[\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right)\sin\theta_i\right]. \tag{41}$$
The expected value in the first term on the right side of (41) is, computing the square,
$$E\!\left[\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right)^2\right] = E[\cos^2\theta_i] + e^{-\frac{\sigma_\theta^2}{2}}\left(e^{-\frac{\sigma_\theta^2}{2}} - 2E[\cos\theta_i]\right). \tag{42}$$
The first expected value on the right side was computed in (24), while the second one was computed in (20). Using them both leads to
$$E\!\left[\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right)^2\right] = \frac{1}{2} + \frac{1}{2}e^{-2\sigma_\theta^2} + e^{-\frac{\sigma_\theta^2}{2}}\left(e^{-\frac{\sigma_\theta^2}{2}} - 2e^{-\frac{\sigma_\theta^2}{2}}\right). \tag{43}$$
Simplifying leads to
$$E\!\left[\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right)^2\right] = \frac{1 + e^{-2\sigma_\theta^2}}{2} - e^{-\sigma_\theta^2}. \tag{44}$$
Moving on to the second expected value in (41), we have
$$E[\sin^2\theta_i] = 1 - E[\cos^2\theta_i]. \tag{45}$$
Introducing (24) leads to
$$E[\sin^2\theta_i] = \frac{1 - e^{-2\sigma_\theta^2}}{2}. \tag{46}$$
Finally, computing the third term in (41) leads to
$$E\!\left[\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right)\sin\theta_i\right] = E[\cos\theta_i\sin\theta_i] - e^{-\frac{\sigma_\theta^2}{2}}E[\sin\theta_i]. \tag{47}$$
Since the sine function is an odd function, the second term is null.
$$E\!\left[\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right)\sin\theta_i\right] = E[\cos\theta_i\sin\theta_i]. \tag{48}$$
Writing the product of the cosine and sine functions as the sine of double the argument leads to
$$E\!\left[\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right)\sin\theta_i\right] = E\!\left[\frac{\sin 2\theta_i}{2}\right]. \tag{49}$$
For the same reason as above, this term is null. We thus have
$$E\!\left[\left(\cos\theta_i - e^{-\frac{\sigma_\theta^2}{2}}\right)\sin\theta_i\right] = 0. \tag{50}$$
Inserting into (41) the terms given in (44), (46), and (50) leads to
$$E[r_i^2] = A^2\cos^2(\omega_x t_i + \varphi)\left(\frac{1 + e^{-2\sigma_\theta^2}}{2} - e^{-\sigma_\theta^2}\right) + A^2\sin^2(\omega_x t_i + \varphi)\,\frac{1 - e^{-2\sigma_\theta^2}}{2} - 2A^2\cos(\omega_x t_i + \varphi)\sin(\omega_x t_i + \varphi)\cdot 0. \tag{51}$$
Simplifying, we obtain
$$E[r_i^2] = A^2\cos^2(\omega_x t_i + \varphi)\left(\frac{1 + e^{-2\sigma_\theta^2}}{2} - e^{-\sigma_\theta^2}\right) + A^2\sin^2(\omega_x t_i + \varphi)\,\frac{1 - e^{-2\sigma_\theta^2}}{2}. \tag{52}$$
Using $\sin^2 a = 1 - \cos^2 a$, we can write
$$E[r_i^2] = A^2\cos^2(\omega_x t_i + \varphi)\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right) + A^2\,\frac{1 - e^{-2\sigma_\theta^2}}{2}. \tag{53}$$
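As a sanity check on (53), one can draw many realizations of $\theta_i$ for a fixed sample instant, compute the residual via (39), and compare the empirical mean of $r_i^2$ with the analytical value (the degrees-of-freedom correction introduced later in Section 4.3 is not yet included here). A sketch, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
A, sigma_theta = 3.0, 0.5
alpha = 1.2                          # omega_x * t_i + phi for one fixed sample
theta = rng.normal(0.0, sigma_theta, 1_000_000)

k = np.exp(-sigma_theta**2 / 2)      # amplitude bias factor, Eq. (28)
r = (A * np.cos(alpha) * (np.cos(theta) - k)
     - A * np.sin(alpha) * np.sin(theta))            # Eq. (39)

lhs = np.mean(r**2)                                  # empirical E[r_i^2]
rhs = (A**2 * np.cos(alpha)**2
       * (np.exp(-2 * sigma_theta**2) - np.exp(-sigma_theta**2))
       + A**2 * (1 - np.exp(-2 * sigma_theta**2)) / 2)   # Eq. (53)
print(lhs, rhs)                      # the two values should agree closely
```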
We will now carry out a similar analysis for the case of the fourth raw moment of the residuals.

4.2. Fourth Raw Moment Calculation

In this section, we proceed to compute the expected value, as in the previous section, but now not of the square of the residuals but of their fourth power. This will be necessary later on when computing the variance of the summed squared residuals (SSRs).
Using (8) we have
$$E[r_i^4] = E\!\left[\left(z_i - \hat{z}_i\right)^4\right]. \tag{54}$$
Computing the binomial expansion leads to
$$E[r_i^4] = E[z_i^4] - 4\hat{z}_i E[z_i^3] + 6\hat{z}_i^2 E[z_i^2] - 4\hat{z}_i^3 E[z_i] + \hat{z}_i^4, \tag{55}$$
where $\hat{z}_i$ is given by (6).
We now have to compute each of the four expected values. The first one results in
$$E[z_i^4] = A^4\left[\frac{3}{8} + \frac{1}{2}\cos(2\omega_x t_i + 2\varphi)\,e^{-2\sigma_\theta^2} + \frac{1}{8}\cos(4\omega_x t_i + 4\varphi)\,e^{-8\sigma_\theta^2}\right]. \tag{56}$$
The second one results in
$$E[z_i^3] = A^3\left[\frac{3}{4}\cos(\omega_x t_i + \varphi)\,e^{-\frac{1}{2}\sigma_\theta^2} + \frac{1}{4}\cos(3\omega_x t_i + 3\varphi)\,e^{-\frac{9}{2}\sigma_\theta^2}\right]. \tag{57}$$
The third one results in
$$E[z_i^2] = A^2\left[\frac{1}{2} + \frac{1}{2}\cos(2\omega_x t_i + 2\varphi)\,e^{-2\sigma_\theta^2}\right]. \tag{58}$$
The fourth one results in
$$E[z_i] = A\cos(\omega_x t_i + \varphi)\,e^{-\frac{1}{2}\sigma_\theta^2}. \tag{59}$$
Inserting these four terms, together with (29), (30), and (31), into (55) leads to
$$E[r_i^4] = A^4\left[\frac{3}{8} + \frac{1}{2}\cos(2\omega_x t_i + 2\varphi)\,e^{-2\sigma_\theta^2} + \frac{1}{8}\cos(4\omega_x t_i + 4\varphi)\,e^{-8\sigma_\theta^2}\right] - 4A\cos(\omega_x t_i + \varphi)\,e^{-\frac{1}{2}\sigma_\theta^2}\,A^3\left[\frac{3}{4}\cos(\omega_x t_i + \varphi)\,e^{-\frac{1}{2}\sigma_\theta^2} + \frac{1}{4}\cos(3\omega_x t_i + 3\varphi)\,e^{-\frac{9}{2}\sigma_\theta^2}\right] + 6A^2\cos^2(\omega_x t_i + \varphi)\,e^{-\sigma_\theta^2}\,A^2\left[\frac{1}{2} + \frac{1}{2}\cos(2\omega_x t_i + 2\varphi)\,e^{-2\sigma_\theta^2}\right] - 4A^3\cos^3(\omega_x t_i + \varphi)\,e^{-\frac{3}{2}\sigma_\theta^2}\,A\cos(\omega_x t_i + \varphi)\,e^{-\frac{1}{2}\sigma_\theta^2} + A^4\cos^4(\omega_x t_i + \varphi)\,e^{-2\sigma_\theta^2}. \tag{60}$$
Simplifying, the final expression for the fourth raw moment of the residual values is
$$E[r_i^4] = A^4\left[\frac{3}{8} + \frac{1}{2}\cos(2\omega_x t_i + 2\varphi)\,e^{-2\sigma_\theta^2} + \frac{1}{8}\cos(4\omega_x t_i + 4\varphi)\,e^{-8\sigma_\theta^2} - 3\cos^4(\omega_x t_i + \varphi)\,e^{-2\sigma_\theta^2} + 3\cos^2(\omega_x t_i + \varphi)\cos(2\omega_x t_i + 2\varphi)\,e^{-3\sigma_\theta^2} - \cos(\omega_x t_i + \varphi)\cos(3\omega_x t_i + 3\varphi)\,e^{-5\sigma_\theta^2}\right]. \tag{61}$$
This is going to be used later on when computing the statistics of SSR and then the RMS estimation of the residuals.

4.3. Degrees of Freedom Correction

In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary. When we use a set of data to estimate parameters in a model, we "use up" some of those degrees of freedom: each parameter estimated from the data reduces the number of degrees of freedom by one.
Imagine you have a set of $M$ data points and you calculate their sample mean. The sum of the deviations of each data point from the sample mean is always zero. This means that if you know the first $M-1$ deviations, the last one is fixed: one degree of freedom was lost in the process of estimating the mean. This is why, when we calculate the sample variance (the average of the squared deviations from the mean), we divide by $M-1$ instead of $M$ to obtain an unbiased estimate of the true population variance. This is known as Bessel's correction.
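A small numerical illustration of Bessel's correction (a sketch assuming NumPy; the numbers are illustrative):

```python
import numpy as np

# 100,000 datasets of M = 10 points drawn from a unit-variance population
x = np.random.default_rng(0).normal(0.0, 1.0, size=(100_000, 10))

# Dividing by M biases the variance estimate low; M - 1 removes the bias.
print(x.var(axis=1, ddof=0).mean())   # approx. 0.9 = (M - 1)/M
print(x.var(axis=1, ddof=1).mean())   # approx. 1.0 (unbiased)
```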
Applying this to the current specific problem of three-parameter sine-fitting: when performing the sine fit, the sinusoidal amplitude, initial phase, and offset are estimated from the $M$ data points, imposing three constraints on the data and thus removing three degrees of freedom. It is therefore necessary, for the correct estimation of the statistics of the summed squared residuals (SSRs), to take this reduction in the number of degrees of freedom into account by multiplying the values obtained by the factor
$$k = \frac{M-3}{M}. \tag{62}$$
In particular, the analytical expression derived previously for the expected value of the squared residuals, namely Equation (53), needs to be corrected, resulting in
$$E[r_i^2] = A^2\left(1 - \frac{3}{M}\right)\left[\cos^2(\omega_x t_i + \varphi)\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right) + \frac{1 - e^{-2\sigma_\theta^2}}{2}\right]. \tag{63}$$
The same needs to be done for the expected value of the fourth power of the residuals, given by (61):
$$E[r_i^4] = A^4\left(1 - \frac{3}{M}\right)\left[\frac{3}{8} + \frac{1}{2}\cos(2\omega_x t_i + 2\varphi)\,e^{-2\sigma_\theta^2} + \frac{1}{8}\cos(4\omega_x t_i + 4\varphi)\,e^{-8\sigma_\theta^2} - 3\cos^4(\omega_x t_i + \varphi)\,e^{-2\sigma_\theta^2} + 3\cos^2(\omega_x t_i + \varphi)\cos(2\omega_x t_i + 2\varphi)\,e^{-3\sigma_\theta^2} - \cos(\omega_x t_i + \varphi)\cos(3\omega_x t_i + 3\varphi)\,e^{-5\sigma_\theta^2}\right]. \tag{64}$$
Later on, when validating the analytical expressions derived in this work using numerical simulations, it will be evident that the results of the simulation will not match the analytical expressions derived if this correction factor is not included.

4.4. Expected Value of SSR

The expected value of the sum of the squared residuals is obtained from (7) by summing the expected values over all samples:
$$E\!\left[\widehat{SSR}\right] = \sum_{i=0}^{M-1} E[r_i^2]. \tag{65}$$
Inserting (63), we have
$$E\!\left[\widehat{SSR}\right] = \sum_{i=0}^{M-1} A^2\left(1 - \frac{3}{M}\right)\left[\cos^2(\omega_x t_i + \varphi)\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right) + \frac{1 - e^{-2\sigma_\theta^2}}{2}\right]. \tag{66}$$
Considering that the data points cover an integer number of periods of the sinusoid (coherent sampling), the summation of the square of the cosine function becomes
$$\sum_{i=0}^{M-1}\cos^2(\omega_x t_i + \varphi) = \frac{M}{2}. \tag{67}$$
Inserting this into (66) leads to
$$E\!\left[\widehat{SSR}\right] = A^2\left(1 - \frac{3}{M}\right)\left[\frac{M}{2}\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right) + M\,\frac{1 - e^{-2\sigma_\theta^2}}{2}\right], \tag{68}$$
which simplifies to
$$E\!\left[\widehat{SSR}\right] = A^2\left(1 - \frac{3}{M}\right)\frac{M}{2}\left(1 - e^{-\sigma_\theta^2}\right). \tag{69}$$
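Expressed as code, Eq. (69) is a one-liner; a sketch (the function name is mine, not from the paper):

```python
import numpy as np

def expected_ssr(A, M, sigma_theta):
    """Expected value of the summed squared residuals, Eq. (69)."""
    return A**2 * (1 - 3 / M) * (M / 2) * (1 - np.exp(-sigma_theta**2))
```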

4.5. Second Raw Moment of SSR

In order to compute the variance of the SSR, it is necessary to also determine its second raw moment. Proceeding in a similar manner as in the previous section, we write
$$E\!\left[\widehat{SSR}^2\right] = E\!\left[\left(\sum_{i=0}^{M-1} r_i^2\right)^{\!2}\right]. \tag{70}$$
The square of the summation can be written as a double summation (omitting the limits of the summation for now):
$$\left(\sum_i r_i^2\right)^{\!2} = \sum_i \sum_j r_i^2\, r_j^2. \tag{71}$$
Separating the diagonal terms from the off-diagonal terms leads to
$$\left(\sum_i r_i^2\right)^{\!2} = \sum_i r_i^4 + \sum_i \sum_{j \neq i} r_i^2\, r_j^2. \tag{72}$$
Taking the expected value of both sides leads to
$$E\!\left[\widehat{SSR}^2\right] = E\!\left[\left(\sum_i r_i^2\right)^{\!2}\right] = E\!\left[\sum_i r_i^4\right] + E\!\left[\sum_i \sum_{j \neq i} r_i^2\, r_j^2\right]. \tag{73}$$
Moving the expected values on the right side past the summations leads to
$$E\!\left[\widehat{SSR}^2\right] = \sum_i E[r_i^4] + \sum_i \sum_{j \neq i} E[r_i^2\, r_j^2]. \tag{74}$$
Assuming that the residuals are independent, we have
$$E[r_i^2\, r_j^2] = E[r_i^2]\,E[r_j^2]. \tag{75}$$
Inserting this into (74) leads to
$$E\!\left[\widehat{SSR}^2\right] = \sum_{i=0}^{M-1} E[r_i^4] + \sum_i \sum_{j \neq i} E[r_i^2]\,E[r_j^2]. \tag{76}$$
We will now simplify the double summation, which can be written as
$$\sum_i \sum_{j \neq i} E[r_i^2]\,E[r_j^2] = \sum_i E[r_i^2] \sum_{j \neq i} E[r_j^2]. \tag{77}$$
Since
$$\sum_{j \neq i} E[r_j^2] = \sum_{j=0}^{M-1} E[r_j^2] - E[r_i^2], \tag{78}$$
we have
$$\sum_{i=0}^{M-1} \sum_{j \neq i} E[r_i^2]\,E[r_j^2] = \sum_{i=0}^{M-1} E[r_i^2]\left(\sum_{j=0}^{M-1} E[r_j^2] - E[r_i^2]\right) = \left(\sum_{i=0}^{M-1} E[r_i^2]\right)\left(\sum_{j=0}^{M-1} E[r_j^2]\right) - \sum_{i=0}^{M-1}\left(E[r_i^2]\right)^2 = \left(\sum_{i=0}^{M-1} E[r_i^2]\right)^{\!2} - \sum_{i=0}^{M-1}\left(E[r_i^2]\right)^2. \tag{79}$$
Inserting this back into (76) leads to
$$E\!\left[\widehat{SSR}^2\right] = \underbrace{\sum_{i=0}^{M-1} E[r_i^4]}_{P_A} + \underbrace{\left(\sum_{i=0}^{M-1} E[r_i^2]\right)^{\!2}}_{P_B} - \underbrace{\sum_{i=0}^{M-1}\left(E[r_i^2]\right)^2}_{P_C}. \tag{80}$$
We split this expression into three parts ($P_A$, $P_B$, and $P_C$) that we will compute next in turn.

4.5.1. Computation of Part A

Here, we will compute the term $P_A$ of (80),
$$P_A = \sum_{i=0}^{M-1} E[r_i^4]. \tag{81}$$
Inserting (64) leads to
$$P_A = A^4\left(1 - \frac{3}{M}\right)\left[\sum_{i=0}^{M-1}\frac{3}{8} + \frac{1}{2}e^{-2\sigma_\theta^2}\sum_{i=0}^{M-1}\cos(2\omega_x t_i + 2\varphi) + \frac{1}{8}e^{-8\sigma_\theta^2}\sum_{i=0}^{M-1}\cos(4\omega_x t_i + 4\varphi) - 3e^{-2\sigma_\theta^2}\sum_{i=0}^{M-1}\cos^4(\omega_x t_i + \varphi) + 3e^{-3\sigma_\theta^2}\sum_{i=0}^{M-1}\cos^2(\omega_x t_i + \varphi)\cos(2\omega_x t_i + 2\varphi) - e^{-5\sigma_\theta^2}\sum_{i=0}^{M-1}\cos(\omega_x t_i + \varphi)\cos(3\omega_x t_i + 3\varphi)\right]. \tag{82}$$
Considering that the summations cover an integer number of periods of the signal, the second, third, and sixth terms vanish. The first, fourth, and fifth summations become $3M/8$, $3M/8$, and $M/4$, respectively, leading to
$$P_A = A^4\left(1 - \frac{3}{M}\right)\left[\frac{3}{8}M + 0 + 0 - 3\cdot\frac{3}{8}M\,e^{-2\sigma_\theta^2} + 3\,\frac{M}{4}\,e^{-3\sigma_\theta^2} - 0\right]. \tag{83}$$
Simplifying, we obtain
$$P_A = A^4\left(1 - \frac{3}{M}\right)\frac{M}{8}\left(3 - 9e^{-2\sigma_\theta^2} + 6e^{-3\sigma_\theta^2}\right). \tag{84}$$

4.5.2. Computation of Part B

Here, we will compute the term $P_B$ of (80),
$$P_B = \left(\sum_{i=0}^{M-1} E[r_i^2]\right)^{\!2}. \tag{85}$$
We will start with the square root of this part and, after simplification, compute the square:
$$\sqrt{P_B} = \sum_{i=0}^{M-1} E[r_i^2]. \tag{86}$$
Inserting (63)
$$\sqrt{P_B} = \sum_{i=0}^{M-1} A^2\left(1 - \frac{3}{M}\right)\left[\cos^2(\omega_x t_i + \varphi)\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right) + \frac{1 - e^{-2\sigma_\theta^2}}{2}\right]. \tag{87}$$
Moving the summation into the square brackets leads to
$$\sqrt{P_B} = A^2\left(1 - \frac{3}{M}\right)\left[\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right)\sum_{i=0}^{M-1}\cos^2(\omega_x t_i + \varphi) + M\,\frac{1 - e^{-2\sigma_\theta^2}}{2}\right]. \tag{88}$$
The summation over an integer number of periods of the sine wave results in $M/2$, leading to
$$\sqrt{P_B} = A^2\left(1 - \frac{3}{M}\right)\left[\frac{M}{2}\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right) + M\,\frac{1 - e^{-2\sigma_\theta^2}}{2}\right]. \tag{89}$$
Simplifying results in
$$\sqrt{P_B} = \frac{A^2}{2}\left(M - 3\right)\left(1 - e^{-\sigma_\theta^2}\right). \tag{90}$$
Finally, computing the square, one obtains
$$P_B = \frac{A^4}{4}\left(M - 3\right)^2\left(1 - e^{-\sigma_\theta^2}\right)^2. \tag{91}$$

4.5.3. Computation of Part C

From (80) we have
$$P_C = \sum_{i=0}^{M-1}\left(E[r_i^2]\right)^2. \tag{92}$$
Inserting (63) leads to
$$P_C = \sum_{i=0}^{M-1} A^4\left(1 - \frac{3}{M}\right)^{\!2}\left[\cos^2(\omega_x t_i + \varphi)\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right) + \frac{1 - e^{-2\sigma_\theta^2}}{2}\right]^2. \tag{93}$$
Expanding the square inside the brackets leads to
$$P_C = \sum_{i=0}^{M-1} A^4\left(1 - \frac{3}{M}\right)^{\!2}\left[\cos^4(\omega_x t_i + \varphi)\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right)^2 + \left(\frac{1 - e^{-2\sigma_\theta^2}}{2}\right)^{\!2} + 2\cos^2(\omega_x t_i + \varphi)\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right)\frac{1 - e^{-2\sigma_\theta^2}}{2}\right]. \tag{94}$$
Moving the summation into the square bracket leads to
$$P_C = A^4\left(1 - \frac{3}{M}\right)^{\!2}\left[\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right)^2\sum_{i=0}^{M-1}\cos^4(\omega_x t_i + \varphi) + M\left(\frac{1 - e^{-2\sigma_\theta^2}}{2}\right)^{\!2} + 2\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right)\frac{1 - e^{-2\sigma_\theta^2}}{2}\sum_{i=0}^{M-1}\cos^2(\omega_x t_i + \varphi)\right]. \tag{95}$$
Using
$$\sum_{i=0}^{M-1}\cos^4(\omega_x t_i + \varphi) = \frac{3}{8}M \tag{96}$$
and
$$\sum_{i=0}^{M-1}\cos^2(\omega_x t_i + \varphi) = \frac{1}{2}M \tag{97}$$
in (95) leads to
$$P_C = A^4\left(1 - \frac{3}{M}\right)^{\!2}\left[\frac{3}{8}M\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right)^2 + M\left(\frac{1 - e^{-2\sigma_\theta^2}}{2}\right)^{\!2} + 2\,\frac{M}{2}\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right)\frac{1 - e^{-2\sigma_\theta^2}}{2}\right]. \tag{98}$$
Rearranging the factors of $M$ leads to
$$P_C = \frac{A^4}{M}\left(M - 3\right)^2\left[\frac{3}{8}\left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right)^2 + \left(\frac{1 - e^{-2\sigma_\theta^2}}{2}\right)^{\!2} + \left(e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2}\right)\frac{1 - e^{-2\sigma_\theta^2}}{2}\right]. \tag{99}$$
Simplifying leads to
$$P_C = \frac{A^4}{8M}\left(M - 3\right)^2\left(1 - e^{-\sigma_\theta^2}\right)^2\left(2 + e^{-2\sigma_\theta^2}\right). \tag{100}$$

4.5.4. Bringing the Three Parts Together

Adding the three parts of (80), using (84), (91), and (100), leads to
$$E\!\left[\widehat{SSR}^2\right] = A^4\left(1 - \frac{3}{M}\right)\frac{M}{8}\left(3 - 9e^{-2\sigma_\theta^2} + 6e^{-3\sigma_\theta^2}\right) + \frac{A^4}{4}\left(M - 3\right)^2\left(1 - e^{-\sigma_\theta^2}\right)^2 - \frac{A^4}{8M}\left(M - 3\right)^2\left(1 - e^{-\sigma_\theta^2}\right)^2\left(2 + e^{-2\sigma_\theta^2}\right). \tag{101}$$
Simplifying results in
$$E\!\left[\widehat{SSR}^2\right] = \frac{A^4}{8}\left(1 - \frac{3}{M}\right)\left(1 - e^{-\sigma_\theta^2}\right)^2\left[2M^2 - 5M + 6 + 6M e^{-\sigma_\theta^2} + \left(3 - M\right)e^{-2\sigma_\theta^2}\right]. \tag{102}$$

4.6. Computing the Variance

The variance of the SSR estimation, given by (7), is
$$\mathrm{VAR}\!\left[\widehat{SSR}\right] = E\!\left[\widehat{SSR}^2\right] - E^2\!\left[\widehat{SSR}\right]. \tag{103}$$
Inserting (102) and (69) into this leads to
$$\mathrm{VAR}\!\left[\widehat{SSR}\right] = \frac{A^4}{8}\left(1 - \frac{3}{M}\right)\left(1 - e^{-\sigma_\theta^2}\right)^2\left[2M^2 - 5M + 6 + 6M e^{-\sigma_\theta^2} + \left(3 - M\right)e^{-2\sigma_\theta^2}\right] - \left[A^2\left(1 - \frac{3}{M}\right)\frac{M}{2}\left(1 - e^{-\sigma_\theta^2}\right)\right]^2. \tag{104}$$
Simplifying results in
$$\mathrm{VAR}\!\left[\widehat{SSR}\right] = \frac{A^4}{8M}\left(M - 3\right)\left(1 - e^{-\sigma_\theta^2}\right)^2\left[6 + M + 6M e^{-\sigma_\theta^2} - \left(M - 3\right)e^{-2\sigma_\theta^2}\right]. \tag{105}$$
We have now derived analytical expressions for the mean and variance of the SSR estimation, given by Equations (69) and (105), respectively.
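For readers who want to evaluate (105) numerically, a direct transcription into code follows (a sketch; the function name is mine):

```python
import numpy as np

def var_ssr(A, M, sigma_theta):
    """Variance of the summed squared residuals, Eq. (105)."""
    u = np.exp(-sigma_theta**2)
    return (A**4 / (8 * M) * (M - 3) * (1 - u)**2
            * (6 + M + 6 * M * u - (M - 3) * u**2))
```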

5. Statistics of the Root Mean Square Value

The goal of this work is to derive analytical expressions for the mean and the variance of the RMS value of the residuals of sine-fitting in the presence of phase noise or jitter. Having computed the analytical expressions for the mean and variance of the summed squared residuals (SSRs), we can proceed to the computation of those statistics for the RMS value, which is obtained using
$$\widehat{RMS} = \sqrt{\frac{\widehat{SSR}}{M}}. \tag{106}$$

5.1. Mean of the Root Mean Square Value

To find the expected value of the RMS estimation, we can use a Taylor expansion of the square root function around the mean of its argument. If
$$X = \frac{\widehat{SSR}}{M}, \tag{107}$$
then
$$\widehat{RMS} = \sqrt{X}. \tag{108}$$
The Taylor expansion of $\sqrt{X}$ around the mean of $X$, $\mu_X$, is
$$\sqrt{X} \approx \sqrt{\mu_X} + \frac{1}{2\sqrt{\mu_X}}\left(X - \mu_X\right) - \frac{1}{8}\mu_X^{-3/2}\left(X - \mu_X\right)^2. \tag{109}$$
Using this, we can write the expected value of the RMS estimation as
$$E\!\left[\widehat{RMS}\right] = \sqrt{E\!\left[\frac{\widehat{SSR}}{M}\right]} - \frac{\mathrm{VAR}\!\left[\frac{\widehat{SSR}}{M}\right]}{8\,E^{3/2}\!\left[\frac{\widehat{SSR}}{M}\right]}. \tag{110}$$
Moving the number of data points leads to
$$E\!\left[\widehat{RMS}\right] = \frac{1}{\sqrt{M}}\left(\sqrt{E\!\left[\widehat{SSR}\right]} - \frac{\mathrm{VAR}\!\left[\widehat{SSR}\right]}{8\,E^{3/2}\!\left[\widehat{SSR}\right]}\right). \tag{111}$$
We can also factor out the $E\!\left[\widehat{SSR}\right]$ term, leading to
$$E\!\left[\widehat{RMS}\right] = \sqrt{\frac{E\!\left[\widehat{SSR}\right]}{M}}\left(1 - \frac{\mathrm{VAR}\!\left[\widehat{SSR}\right]}{8\,E^2\!\left[\widehat{SSR}\right]}\right). \tag{112}$$
Inserting (69) leads to
$$E\!\left[\widehat{RMS}\right] = \sqrt{\frac{A^2\left(1 - \frac{3}{M}\right)\frac{M}{2}\left(1 - e^{-\sigma_\theta^2}\right)}{M}}\left(1 - \frac{\mathrm{VAR}\!\left[\widehat{SSR}\right]}{8\left[A^2\left(1 - \frac{3}{M}\right)\frac{M}{2}\left(1 - e^{-\sigma_\theta^2}\right)\right]^2}\right). \tag{113}$$
Simplifying leads to
$$E\!\left[\widehat{RMS}\right] = \frac{A}{\sqrt{2}}\sqrt{\left(1 - \frac{3}{M}\right)\left(1 - e^{-\sigma_\theta^2}\right)}\left(1 - \frac{\mathrm{VAR}\!\left[\widehat{SSR}\right]}{8\left[A^2\left(1 - \frac{3}{M}\right)\frac{M}{2}\left(1 - e^{-\sigma_\theta^2}\right)\right]^2}\right). \tag{114}$$
Inserting (105) leads to
$$E\!\left[\widehat{RMS}\right] = \frac{A}{\sqrt{2}}\sqrt{\left(1 - \frac{3}{M}\right)\left(1 - e^{-\sigma_\theta^2}\right)}\left(1 - \frac{\frac{A^4}{8M}\left(M - 3\right)\left(1 - e^{-\sigma_\theta^2}\right)^2\left[6 + M + 6M e^{-\sigma_\theta^2} - \left(M - 3\right)e^{-2\sigma_\theta^2}\right]}{8\left[A^2\left(1 - \frac{3}{M}\right)\frac{M}{2}\left(1 - e^{-\sigma_\theta^2}\right)\right]^2}\right). \tag{115}$$
Simplifying leads to
$$E\!\left[\widehat{RMS}\right] = \frac{A}{\sqrt{2}}\sqrt{\left(1 - \frac{3}{M}\right)\left(1 - e^{-\sigma_\theta^2}\right)}\left(1 - \frac{6 + M + 6M e^{-\sigma_\theta^2} - \left(M - 3\right)e^{-2\sigma_\theta^2}}{16M\left(M - 3\right)}\right). \tag{116}$$
Now that we have obtained an expression for the expected value of the RMS of the residuals, we will move on to its variance.

5.2. Variance of the Root Mean Square Value

The variance of RMS can also be obtained using a Taylor expansion:
$$\mathrm{VAR}\!\left[\widehat{RMS}\right] = \frac{\mathrm{VAR}\!\left[\frac{\widehat{SSR}}{M}\right]}{4\,E\!\left[\frac{\widehat{SSR}}{M}\right]}. \tag{117}$$
Moving the number of samples leads to
$$\mathrm{VAR}\!\left[\widehat{RMS}\right] = \frac{1}{4M}\,\frac{\mathrm{VAR}\!\left[\widehat{SSR}\right]}{E\!\left[\widehat{SSR}\right]}. \tag{118}$$
Inserting (69) and (105) leads to
$$\mathrm{VAR}\!\left[\widehat{RMS}\right] = \frac{1}{4M}\,\frac{\frac{A^4}{8M}\left(M - 3\right)\left(1 - e^{-\sigma_\theta^2}\right)^2\left[6 + M + 6M e^{-\sigma_\theta^2} - \left(M - 3\right)e^{-2\sigma_\theta^2}\right]}{A^2\left(1 - \frac{3}{M}\right)\frac{M}{2}\left(1 - e^{-\sigma_\theta^2}\right)}. \tag{119}$$
Simplifying results in
$$\mathrm{VAR}\!\left[\widehat{RMS}\right] = \frac{A^2}{16M}\left(1 - e^{-\sigma_\theta^2}\right)\left(1 + \frac{6}{M} + 6e^{-\sigma_\theta^2} - e^{-2\sigma_\theta^2}\right). \tag{120}$$
This is the second main result of this work. Both analytical expressions presented, this one and the one for the mean of RMS estimation, provided in Equation (116), will be validated through numerical simulations in the next section.
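Both expressions are straightforward to evaluate numerically; a sketch follows (the function names are mine, and the printed values can be compared against Tables 1 and 2):

```python
import numpy as np

def mean_rms(A, M, sigma_theta):
    """Expected value of the residual RMS estimate, Eq. (116)."""
    u = np.exp(-sigma_theta**2)
    correction = (6 + M + 6 * M * u - (M - 3) * u**2) / (16 * M * (M - 3))
    return A / np.sqrt(2) * np.sqrt((1 - 3 / M) * (1 - u)) * (1 - correction)

def std_rms(A, M, sigma_theta):
    """Standard deviation of the residual RMS estimate, square root of Eq. (120)."""
    u = np.exp(-sigma_theta**2)
    return np.sqrt(A**2 / (16 * M) * (1 - u) * (1 + 6 / M + 6 * u - u**2))

# Example: A = 3 V, M = 100 samples, sigma_theta = 2 rad
print(mean_rms(3.0, 100, 2.0))   # approx. 2.07 V (cf. Table 1)
print(std_rms(3.0, 100, 2.0))    # approx. 0.080 V (cf. Table 2)
```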

6. Validation Using Numerical Simulations

At this point in the presentation, the two derived analytical expressions will be put to the test by numerically simulating a set of data points that are corrupted by phase noise, fitting them to the sinusoidal model described above, determining the resultant residuals, and computing their root mean square value. Three parameters are going to be varied in this study, namely the injected phase noise standard deviation, the number of data points, and the sinusoidal amplitude, all of which are parameters that these two analytical expressions depend on. In general, for each case, we will present the average and the standard deviation of the RMS estimation, together with the values given by the analytical expressions derived here. The numerically obtained values are presented in a graphical form using error bars, which correspond to a confidence interval computed for the confidence level of 99.9%. The values obtained with the analytical expressions will be plotted in the same charts using a solid line. The goal is to have all the error bars located around the solid lines. As we will see, this happens in almost all instances, showcasing the range of validity of the analytical expressions derived. A second set of charts will present the difference between the numerical simulation values and the ones given by the analytical expressions derived here.
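For readers who wish to reproduce this kind of validation, the following condensed sketch follows the procedure just described (coherent sampling, three-parameter fit, Monte Carlo repetitions). The parameter values are illustrative and the variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(42)
A, C, phi = 3.0, 0.0, 0.3        # true sinusoid (amplitude in V)
M, J = 100, 7                    # M samples covering J full periods (coherent)
sigma_theta = 2.0                # injected phase-noise standard deviation (rad)
R = 10_000                       # Monte Carlo repetitions

wt = 2 * np.pi * J * np.arange(M) / M          # omega_x * t_i

rms = np.empty(R)
for k in range(R):
    theta = rng.normal(0.0, sigma_theta, M)
    z = C + A * np.cos(wt + phi + theta)       # noisy samples, Eq. (2)
    C_hat = z.mean()                           # Eq. (9)
    A_I = 2 / M * np.sum(z * np.cos(wt))       # Eq. (12)
    A_Q = 2 / M * np.sum(z * np.sin(wt))       # Eq. (13)
    A_hat = np.hypot(A_I, A_Q)                 # Eq. (10)
    phi_hat = np.arctan2(-A_Q, A_I)            # Eq. (11)
    z_fit = C_hat + A_hat * np.cos(wt + phi_hat)     # fitted model, Eq. (6)
    rms[k] = np.sqrt(np.mean((z - z_fit)**2))        # Eqs. (7) and (106)

u = np.exp(-sigma_theta**2)
mean_theory = (A / np.sqrt(2) * np.sqrt((1 - 3 / M) * (1 - u))
               * (1 - (6 + M + 6 * M * u - (M - 3) * u**2)
                  / (16 * M * (M - 3))))             # Eq. (116)
std_theory = np.sqrt(A**2 / (16 * M) * (1 - u)
                     * (1 + 6 / M + 6 * u - u**2))   # Eq. (120)

print(rms.mean(), mean_theory)     # the pairs should agree closely
print(rms.std(ddof=1), std_theory)
```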
The first study presented here concerns the use of different amounts of injected phase noise. In Figure 1, we observe the mean values of RMS estimation as a function of phase noise standard deviation over a range up to 3 rad. In this instance, the error bars are so short that they are practically invisible. In any case, the circles representing the middle point of the error bars follow closely the values given by the analytical expression. In this example, a sinusoidal amplitude of 3 V and 100 samples ($M$) were used. Note that the number of simulation repetitions used to compute the error bars ($R$) was, in this case, 100. The more repetitions are used, the shorter the error bars become, but the longer the numerical simulation takes.
In this figure, we confirm the complete agreement between the numerical simulations and the theoretical expression (116). Furthermore, in Figure 2, we plot the difference between the values from the numerical simulation and the ones from the analytical expression. Once again, complete agreement is observed, since in this instance all error bars are around 0, which shows that the analytical expression indeed accurately reproduces the relationship between the phase noise (or jitter) standard deviation and the expected value of the estimated RMS value of the residuals.
In Table 1, one can see the theoretical and numerical mean values of RMS for some of the phase noise standard deviation cases.
The range of values tried in the numerical simulation is far greater than the one shown here; however, since the conclusions do not change, and for the sake of conciseness, we choose to present just a few illustrative cases.
In Figure 3 and Figure 4, we observe the equivalent results for the estimated RMS standard deviation. In this case, the theoretical expression is the one presented in (120). Again, complete agreement is found. In Figure 3, one can observe the theoretical expression depicted with the solid line and the error bars from the numerical simulations made with 1000 repetitions.
In Figure 4, one can see the difference between the analytical expression and the numerical simulation results. The error bars are very close to 0, which means that the approximated analytical expression for the standard deviation of the RMS value of the residuals provides an excellent approximation, albeit not exact, due to the use of a Taylor series approximation with just one term.
In Table 2, one can see the theoretical and numerical standard deviation values of RMS estimation for some of the phase noise standard deviation cases.
In the next set of charts, the number of samples was varied from 5 to 150, and the phase noise standard deviation was kept constant at 3 rad. Other values were also used, but the conclusions are the same, and so we refrain from showing them here. Figure 5 shows the mean value of the RMS value estimation as a function of the number of samples. The agreement between numerical simulation values and the analytical expression (solid line) is strong, except when the number of data points is very low (5). Recall that some approximations were performed during the theoretical derivation in order to obtain analytical expressions that were as simple as possible but still fairly accurate.
Figure 6 shows the difference between numerical simulation values and the ones provided by the analytical expression for varying numbers of data points. Again, we see that for a very small number of data points, the difference is not negligible.
In Figure 7 and Figure 8, we see the behavior of the standard deviation of the RMS value estimation as a function of the number of data points. Again, strong agreement is found, except for the first data points that correspond to a very low number of samples. In Figure 7, we see the standard deviation of the RMS value of the residuals as a function of the number of data points. The solid line, corresponding to the theoretical expression, lies around the error bars, which in some cases (large number of samples) are so small that they become invisible, and only the middle point (circles) can be observed. The exception is in the first four data points.
In Figure 8, we observe the difference between numerical simulation values and theoretical ones. Here, we can observe more clearly the discrepancy in the first four data points. As expected, the smaller the number of data points, the worse the Taylor series approximation. One could have used more terms of the Taylor series to achieve a better agreement, but that would lead to a more complex and cumbersome analytical expression.
The final set of numerical simulation results pertains to varying sinusoidal amplitude. In Figure 9, we observe the linear behavior provided by the analytical expression for the dependence of the mean value of the estimated residual RMS as a function of sinusoidal amplitude, as expressed by Equation (116). Again, for the number of repetitions used, R = 1000, the error bars for the numerical simulation results are invisible, and only the middle points (circles) are visible. As observed, all the circles are on top of the solid line. Note that this simulation was made for 100 samples. As mentioned before, if the number of samples is less than 10, the agreement deteriorates significantly.
In Figure 10, we see more clearly that the numerical simulation values and the analytical expression agree for different values of sinusoidal amplitude, since all the error bars are located around 0 for this specific case, where 100 (M) samples were used.
In Figure 11 and Figure 12, we see the last set of results that pertain to the RMS estimation standard deviation as a function of the sinusoidal amplitude. Again, the agreement with the theoretical expression derived here and presented in Equation (120) is strong. In Figure 11, all the vertical bars corresponding to the numerical simulation results are on top of the solid line representing the analytical expression.
In Figure 12, the difference between the theoretical values and those from the numerical simulations is more clearly seen. As expected, the error bars are all around 0.
The numerical simulation results presented in this section completely validate the analytical derivations presented in this work. This provides assurance to the engineer that the two analytical expressions obtained (for the mean and for the standard deviation of RMS value estimation) can be used with confidence.

7. Conclusions

This paper has developed a strong theoretical framework to understand and quantify the influence of phase noise and sampling jitter on sinusoidal fitting residuals, making a fundamental contribution to the measurement science literature. The rigorous derivation of closed-form analytical expressions for the mean and variance of the root mean square (RMS) estimation of sine-fitting residuals is a lasting contribution that furthers the knowledge of measurement uncertainty when either generator phase noise or sampling jitter is present. The detailed numerical validation through Monte Carlo simulations provides considerable confidence that the derived expressions accurately represent the actual behavior of the residuals across very diverse operating conditions. The excellent agreement between the analytical predictions and the simulations, shown for varying amounts of phase noise, different numbers of data points, and different signal amplitudes, clearly illustrates that the developed theory has wide applicability for real-world measurement purposes. Additionally, the validation confirmed that the assumptions and approximations made during the analytical derivation are reasonable within the intended domain of application. While this theoretical framework constitutes a significant development in the field of measurement uncertainty, it is important to keep in mind the assumptions and approximations that delimit its domain of application.
In the future, the results presented here can be used as a starting point to study the uncertainty of other estimators that are based on sine-fitting residuals, like signal-to-noise ratio, noise floor, or effective number of bits, for example.
As a final comment, the author wants to reiterate that the knowledge of the standard deviation value of an estimator is important for an engineer who, when performing a measurement or numerical estimation of some kind where random phenomena are at play, must always specify the confidence intervals for their estimates.

Funding

This research was supported by the Portuguese Science and Technology Foundation, under projects UID-BASE-LX-UIDB/50008/2020 and DOI identifier https://doi.org/10.54499/UIDB/50008/2020, POCTI/ESE/46995/2002, FCT/2022.72436.CPCA.A0, and by Instituto de Telecomunicações under the project IT/LA/295/2005.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Giaquinto, N.; Trotta, A. Fast and accurate ADC testing via an enhanced sine wave fitting algorithm. IEEE Trans. Instrum. Meas. 1997, 46, 1020–1025. Available online: https://ieeexplore.ieee.org/document/650820/ (accessed on 13 August 2025).
  2. Analog Devices. How to Calculate ENOB for ADC Dynamic Performance Measurement. Technical Article. June 2019. Available online: https://www.analog.com/en/resources/technical-articles/how-to-calculate-enob-for-adc-dynamic-performance-measurement.html (accessed on 13 August 2025).
  3. IEEE Standard 1241-2000; IEEE Standard for Terminology and Test Methods for Analog-to-Digital Converters. Institute of Electrical and Electronics Engineers: Manhattan, NY, USA, 2000.
  4. Rubiola, E. Phase Noise and Frequency Stability in Oscillators; Cambridge University Press: Cambridge, UK, 2008.
  5. Wikipedia. Phase Noise. Available online: https://en.wikipedia.org/wiki/Phase_noise (accessed on 13 August 2025).
  6. Texas Instruments. Dynamic Tests for A/D Converter Performance. Application Report SBAA002. 1995. Available online: https://www.ti.com/lit/pdf/sbaa002 (accessed on 13 August 2025).
  7. IEEE Standard 1057-2017; IEEE Standard for Digitizing Waveform Recorders. Institute of Electrical and Electronics Engineers: Manhattan, NY, USA, 2017.
  8. NIST Phase Noise Metrology Group. Time and Frequency Metrology; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2007. Available online: https://www.nist.gov/pml/time-and-frequency-division/time-and-frequency-metrology (accessed on 13 August 2025).
  9. JCGM 100:2008; Evaluation of Measurement Data—Guide to the Expression of Uncertainty in Measurement. International Organization for Standardization: Geneva, Switzerland, 2008.
  10. Brannon, B. Sampled Systems and the Effects of Clock Phase Noise and Jitter; Analog Devices Application Note AN-756; One Technology Way: Norwood, MA, USA, 2004.
  11. Meyr, H.; Moeneclaey, M.; Fechtel, S.A. Digital Communication Receivers: Synchronization, Channel Estimation, and Signal Processing; John Wiley & Sons: Hoboken, NJ, USA, 1998.
  12. Kay, S.M. Modern Spectral Estimation: Theory and Application; Prentice Hall: Toronto, ON, Canada, 1988.
  13. Pintelon, R.; Schoukens, J. System Identification: A Frequency Domain Approach; IEEE Press: Manhattan, NY, USA, 2001.
  14. Alegria, F.C. Expected Value of the Root Mean Square of Sinefitting Residuals in the Presence of Phase Noise or Sampling Jitter. tm-Tech. Mess. 2025, 92, 2025-0074.
  15. Alegria, F.C.; Martinho, E.; Almeida, F. Measuring Soil Contamination with the Time Domain Induced Polarization Method Using LabVIEW. Measurement 2009, 42, 1082–1091.
Figure 1. Mean RMS value as a function of phase noise standard deviation. The circles represent the values obtained with the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars (barely visible). The solid line represents the value given by the theoretical expression (116).
Figure 2. The difference between the mean of the estimated RMS value and the values given by the theoretical expression (116). The circles represent the values obtained from the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars.
Figure 3. Standard deviation of the estimated RMS value as a function of phase noise standard deviation. The circles represent the values obtained from the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars. The solid line represents the value given by the square root of the theoretical expression (120).
Figure 4. The difference between the standard deviation of the estimated RMS values obtained through numerical simulation and those provided by the theoretical expression (120). The circles represent the values obtained from the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars.
Figure 5. Mean of the estimated RMS value as a function of the number of samples. The circles represent the values obtained from the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars. The solid line represents the value provided by the theoretical expression derived here (Equation (116)).
Figure 6. Difference in the mean of the RMS value estimation and the values given by the analytical expression (116) as a function of the number of data points. The circles represent the values obtained with the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars.
Figure 7. Standard deviation of the estimated RMS as a function of the number of data points. The circles represent the values obtained from the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars. The solid line represents the value provided by the theoretical expression derived here (Equation (120)).
Figure 8. The difference between the standard deviation of the estimated RMS values and the ones provided by the analytical expression (120). The circles represent the values obtained from the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars.
Figure 9. Mean value of the estimated RMS value as a function of sinusoidal amplitude. The circles represent the values obtained from the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars (not visible). The solid line represents the value provided by the theoretical expression derived here, Equation (116).
Figure 10. The difference between the mean of the estimated RMS value and the values given by the theoretical expression (116) for different values of sinusoidal amplitude. The circles represent the values obtained from the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars.
Figure 11. Standard deviation of the estimated RMS value as a function of sinusoidal amplitude. The circles represent the values obtained with the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars. The solid line represents the value provided by the theoretical expression given in Equation (120).
Figure 12. The difference between the numerical simulation values and the ones provided by the theoretical expression (120) for different values of sinusoidal amplitude. The circles represent the values obtained from the Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars.
Table 1. Data points for the mean RMS for different values of phase noise standard deviation in the case of a sinusoidal amplitude of 3 V, null initial phase, and 100 data points. The number of repetitions is 100.

| Phase Noise Standard Deviation (rad) | Theoretical Value (V) | Error Bar Minimum (V) | Error Bar Maximum (V) | Simulation Value (V) |
|---|---|---|---|---|
| 1 | 1.626 | 1.615 | 1.693 | 1.654 |
| 2 | 2.071 | 2.037 | 2.090 | 2.063 |
| 3 | 2.090 | 2.057 | 2.114 | 2.086 |
Table 2. Data points for the standard deviation of RMS estimation for different values of phase noise standard deviation in the case of a sinusoidal amplitude of 3 V, null initial phase, and 100 data points. The number of repetitions is 100.

| Phase Noise Standard Deviation (rad) | Theoretical Value (V) | Error Bar Minimum (V) | Error Bar Maximum (V) | Simulation Value (V) |
|---|---|---|---|---|
| 1 | 0.106 | 0.098 | 0.114 | 0.105 |
| 2 | 0.080 | 0.075 | 0.087 | 0.081 |
| 3 | 0.077 | 0.073 | 0.085 | 0.079 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
