Proceeding Paper

An Autoregressive Moving Average Model for Time Series with Irregular Time Intervals †

by
Diana Alejandra Godoy Pulecio
* and
César Andrés Ojeda Echeverri
Escuela de Estadística, Universidad del Valle, Cali 760042, Colombia
*
Author to whom correspondence should be addressed.
Presented at the 11th International Conference on Time Series and Forecasting, Canaria, Spain, 16–18 July 2025.
Comput. Sci. Math. Forum 2025, 11(1), 8; https://doi.org/10.3390/cmsf2025011008
Published: 31 July 2025

Abstract

This research focuses on the study of stochastic processes observed at irregularly spaced time intervals, which arise in a wide range of fields such as climatology, astronomy, medicine, and economics. Previous studies have proposed irregular autoregressive (iAR) and irregular moving average (iMA) models separately, as well as an autoregressive moving average process (iARMA) restricted to positive autocorrelations. The objective of this work is to generalize the iARMA model to include negative correlations. A first-order autoregressive moving average model for irregularly observed discrete time series is presented; it is an ergodic and strictly stationary Gaussian process. Parameter estimation is performed by Maximum Likelihood, and its finite-sample performance is evaluated through Monte Carlo simulations. The autocorrelation function (ACF) is estimated using the DCF (Discrete Correlation Function) estimator, and its performance is assessed by varying the sample size and the average time interval. The model was implemented on real data from two different contexts: the first consists of two weeks of measurements of stellar flares in the Orion Nebula obtained during the Chandra Orion Ultradeep Project (COUP), and the second pertains to the measurement of sunspot cycles from 1860 to 1990 and their relationship with temperature variation in the Northern Hemisphere.

1. Introduction

Time-varying data are present in a wide variety of fields, such as economics [1], engineering [2], climatology [3], astronomy [4], and ecology [5], among others. The importance of modeling the behavior of these data is critical to generate predictions and anticipate their outcomes. Although most studies have focused on the analysis of data observed at regular time intervals, this approach does not always reflect the reality of many environments. Consequently, research has been encouraged in the development of models for time series observed at specific moments and not necessarily equidistantly.
In the literature, several alternatives are available to address the treatment of time series whose observations have been recorded at irregular time intervals. These proposals usually seek to adjust these irregular intervals to regular ones and make use of conventional models [6], or manipulate them as regular series with missing data [7,8]. These techniques work well as an approximation; however, they generally have a higher degree of bias in model fit than a model that accounts for the irregular nature of the sampling.
For the treatment of irregular series, studies have developed models with moving average structures, iMA [9]; with autoregressive structures, iAR [4,10,11]; and with autoregressive moving average structures for discrete time series observed at irregular time intervals, iARMA(1,1) [12]. However, these models are proposed under the assumption that their parameters must be positive, taking values between 0 and 1, which limits their application to time series that may have negative correlation structures. In this paper, the iARMA(1,1) model is made more flexible by extending the parameter space of (ϕ, θ), so that it is possible to model negative correlations.
Once the model is proposed, ACF estimators are evaluated in the context of irregular time series; these allow identifying from the data whether the model is a suitable candidate, in the same way that the ACF of an MA(q) model cuts off after lag q and that of an AR(p) model decays exponentially. In their work, Ref. [13] present the BINCOR (BINned CORrelation) algorithm to estimate the correlation between two unequally spaced climate time series that are not necessarily sampled at identical points in time, which contributes to the identification of the ACF for the proposed model.
The remainder of this paper is organized as follows. Section 2 introduces the model definition and its properties. Also, this section provides the one-step linear predictors and their mean squared errors. The maximum likelihood estimation and bootstrap method are introduced in Section 3. The finite-sample behavior of this estimator is studied via the Monte Carlo method in Section 4. Two real-life data applications are presented in Section 5. Finally, conclusions are given in Section 6.

2. The Model

We construct a stationary stochastic process with an autoregressive moving average structure for irregularly spaced times (iARMA), where $T = \{t_n\}_{n \in \mathbb{N}^+}$ is the set of observation times such that the consecutive differences satisfy $\Delta_{n+1} = t_{n+1} - t_n \geq 1$ for all $n \geq 1$.

2.1. iARMA Model

Let $\{\zeta_{t_n}\}_{t_n \in T}$ be a sequence of uncorrelated random variables with mean 0 and variance 1, where $\sigma^2 > 0$, $-1 < \phi < 1$, and $-1 < \theta < 1$ (with $\phi \cdot \theta > 0$), and define
$$c_1(\phi,\theta) = \frac{1 + 2\phi\theta + \theta^2}{1 - \phi^2},$$
$$c_{n+1}(\phi,\theta) = \big(1 - \phi_{n+1}^2\big)\,c_1(\phi,\theta) - \frac{\theta_{n+1}^2}{c_n(\phi,\theta)} - 2\,\phi_{n+1}\theta_{n+1}, \qquad (1)$$
with $\phi_{n+1} = \operatorname{sign}(\phi)\,|\phi|^{\Delta_{n+1}}$ and $\theta_{n+1} = \operatorname{sign}(\theta)\,|\theta|^{\Delta_{n+1}}$. The process $\{X_{t_n}\}_{t_n \in T}$ is said to be an iARMA process if $X_{t_1} = [\sigma^2 c_1(\phi,\theta)]^{1/2}\,\zeta_{t_1}$ and
$$X_{t_{n+1}} = \phi_{n+1} X_{t_n} + \big[\sigma^2 c_{n+1}(\phi,\theta)\big]^{1/2}\,\zeta_{t_{n+1}} + \frac{\sigma^2\,\theta_{n+1}\,\zeta_{t_n}}{\big[\sigma^2 c_n(\phi,\theta)\big]^{1/2}}. \qquad (2)$$
Note that this model allows both coefficients to take positive or negative values as long as ϕ · θ > 0, so it can accommodate both positive and negative autocorrelation structures.
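As an illustration, Equation (2) can be simulated directly by iterating the recursion (1) along the observation times. The following sketch (Python with NumPy; the function name iarma_sample and its arguments are illustrative, not from the paper) generates one trajectory:

```python
import numpy as np

def iarma_sample(times, phi, theta, sigma2=1.0, rng=None):
    """Simulate one iARMA(1,1) trajectory at the given (irregular) times.

    A minimal sketch of Equation (2); requires |phi| < 1, |theta| < 1 and
    phi * theta > 0 (or one of them equal to zero).
    """
    rng = np.random.default_rng() if rng is None else rng
    times = np.asarray(times, dtype=float)
    n = len(times)
    zeta = rng.standard_normal(n)                       # iid N(0, 1) innovations
    c1 = (1 + 2 * phi * theta + theta ** 2) / (1 - phi ** 2)
    x = np.empty(n)
    x[0] = np.sqrt(sigma2 * c1) * zeta[0]
    c_prev = c1
    for i in range(1, n):
        delta = times[i] - times[i - 1]
        phi_d = np.sign(phi) * np.abs(phi) ** delta     # phi_{n+1}
        theta_d = np.sign(theta) * np.abs(theta) ** delta
        c_cur = (1 - phi_d ** 2) * c1 - theta_d ** 2 / c_prev - 2 * phi_d * theta_d
        x[i] = (phi_d * x[i - 1]
                + np.sqrt(sigma2 * c_cur) * zeta[i]
                + np.sqrt(sigma2) * theta_d * zeta[i - 1] / np.sqrt(c_prev))
        c_prev = c_cur
    return x
```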

2.2. Properties

Let $\mathbf{X}_n = (X_{t_1}, \ldots, X_{t_n})'$ be a random vector of an iARMA process and let $\{\zeta_{t_n}\}$, for all n, be a sequence of uncorrelated normal random variables; then $\mathbf{X}_n$ is a normally distributed random vector with $E[\mathbf{X}_n] = \mathbf{0}$ and covariance matrix
$$\Sigma_n = \begin{pmatrix}
\gamma_0 & & & & & \\
\gamma_{1,2} & \gamma_0 & & & & \\
\phi_3\,\gamma_{1,2} & \gamma_{1,3} & \gamma_0 & & & \\
\phi_4\phi_3\,\gamma_{1,2} & \phi_4\,\gamma_{1,3} & \gamma_{1,4} & \gamma_0 & & \\
\vdots & \vdots & \vdots & & \ddots & \\
\left(\prod_{i=3}^{n}\phi_i\right)\gamma_{1,2} & \left(\prod_{i=4}^{n}\phi_i\right)\gamma_{1,3} & \left(\prod_{i=5}^{n}\phi_i\right)\gamma_{1,4} & \cdots & \gamma_{1,n} & \gamma_0
\end{pmatrix},$$
where only the lower triangle is shown since the matrix is symmetric, $\gamma_0 = \sigma^2\,(1 + 2\phi\theta + \theta^2)/(1 - \phi^2)$, and $\gamma_{1,n} = \phi_n\gamma_0 + \sigma^2\theta_n$ for $n \geq 2$. Thus, the iARMA process is an ergodic and weakly stationary Gaussian process and, hence, strictly stationary.
Another property of the iARMA model is that it is a more general process, since when ϕ = 0, an iMA process is obtained [9,14], while when θ = 0, an iAR process is obtained [10]. Likewise, when $\Delta_{n+1} = 1$ for all n, the traditional ARMA(1,1) model is obtained [12].
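For completeness, the covariance matrix $\Sigma_n$ above can be assembled numerically from $\gamma_0$, $\gamma_{1,n}$, and products of the $\phi_i$. A minimal sketch (illustrative name, assuming NumPy):

```python
import numpy as np

def iarma_cov_matrix(times, phi, theta, sigma2=1.0):
    """Covariance matrix Sigma_n of Section 2.2 (sketch, 0-based indexing)."""
    times = np.asarray(times, dtype=float)
    n = len(times)
    c1 = (1 + 2 * phi * theta + theta ** 2) / (1 - phi ** 2)
    gamma0 = sigma2 * c1

    def coef(p, delta):                                  # sign(p) * |p| ** delta
        return np.sign(p) * np.abs(p) ** delta

    S = np.zeros((n, n))
    np.fill_diagonal(S, gamma0)
    for i in range(n - 1):
        d = times[i + 1] - times[i]
        g = coef(phi, d) * gamma0 + sigma2 * coef(theta, d)   # gamma_{1, i+2}
        S[i, i + 1] = S[i + 1, i] = g
        acc = g
        for j in range(i + 2, n):
            acc *= coef(phi, times[j] - times[j - 1])         # multiply by phi_j
            S[i, j] = S[j, i] = acc
    return S
```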

2.3. Prediction

Using the innovations algorithm [8], the one-step linear predictors are $\hat{X}_{t_1}(\phi,\theta) = 0$ and
$$\hat{X}_{t_{n+1}}(\phi,\theta) = \phi_{n+1} X_{t_n} + \frac{\theta_{n+1}}{c_n(\phi,\theta)}\left[X_{t_n} - \hat{X}_{t_n}(\phi,\theta)\right], \quad n \geq 1, \qquad (3)$$
with mean squared errors
$$E\!\left[\big(X_{t_1} - \hat{X}_{t_1}(\phi,\theta)\big)^2\right] = \sigma^2 c_1(\phi,\theta) \quad \text{and} \quad E\!\left[\big(X_{t_{n+1}} - \hat{X}_{t_{n+1}}(\phi,\theta)\big)^2\right] = \sigma^2 c_{n+1}(\phi,\theta), \quad \text{for all } n. \qquad (4)$$
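The recursion in Equations (3) and (4) reduces to a short loop. A sketch under the notation of Section 2 (the helper name iarma_predict is illustrative); it returns the one-step predictors together with the scaled variances $c_n$, so that the mean squared error at step n is $\sigma^2 c_n$:

```python
import numpy as np

def iarma_predict(x, times, phi, theta):
    """One-step linear predictors and the c_n sequence (Equations (3)-(4), sketch)."""
    x = np.asarray(x, dtype=float)
    times = np.asarray(times, dtype=float)
    n = len(x)
    c1 = (1 + 2 * phi * theta + theta ** 2) / (1 - phi ** 2)
    x_hat = np.zeros(n)                                  # x_hat[0] = 0 by definition
    c = np.empty(n)
    c[0] = c1
    for i in range(1, n):
        delta = times[i] - times[i - 1]
        phi_d = np.sign(phi) * np.abs(phi) ** delta
        theta_d = np.sign(theta) * np.abs(theta) ** delta
        c[i] = (1 - phi_d ** 2) * c1 - theta_d ** 2 / c[i - 1] - 2 * phi_d * theta_d
        x_hat[i] = phi_d * x[i - 1] + (theta_d / c[i - 1]) * (x[i - 1] - x_hat[i - 1])
    return x_hat, c
```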

2.4. Theoretical ACF

Taking the covariance matrix presented in Section 2.2, the theoretical ACF of the iARMA model is determined as
$$\operatorname{Cor}\big(X_{t_n}, X_{t_{n+k}}\big) = \begin{cases} 1, & k = 0, \\ \rho_{1,n+1}, & k = 1, \\ \left(\prod_{i=2}^{k}\phi_{n+i}\right)\rho_{1,n+1}, & k \geq 2, \end{cases}$$
where $\rho_{1,n+1} = \gamma_{1,n+1}/\gamma_0 = \phi_{n+1} + \theta_{n+1}/c_1(\phi,\theta)$.
The behavior of the ACF for the iARMA model depends on both the lag k and the time intervals between observations, so a three-dimensional representation is presented for a case where both parameters are negative in Figure 1a and for a case where both parameters are positive in Figure 1b.
As expected, the correlation between the variables decreases as the lag increases, and the same happens as the time interval increases. The behavior of a regular ARMA(1,1) model can be visualized by taking a cross-sectional view at a time interval of 1.
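The theoretical ACF above can be evaluated numerically on any time grid; a minimal sketch (illustrative name, 0-based index n, which requires n + k to lie within the grid):

```python
import numpy as np

def iarma_theoretical_acf(times, n, k, phi, theta):
    """Theoretical Cor(X_{t_n}, X_{t_{n+k}}) of Section 2.4 (sketch)."""
    times = np.asarray(times, dtype=float)
    if k == 0:
        return 1.0
    c1 = (1 + 2 * phi * theta + theta ** 2) / (1 - phi ** 2)

    def coef(p, delta):
        return np.sign(p) * np.abs(p) ** delta

    d1 = times[n + 1] - times[n]
    rho1 = coef(phi, d1) + coef(theta, d1) / c1          # rho_{1, n+1}
    prod = 1.0
    for i in range(2, k + 1):                            # product of phi_{n+i}, i = 2..k
        prod *= coef(phi, times[n + i] - times[n + i - 1])
    return prod * rho1
```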

3. Estimation

3.1. Maximum Likelihood Estimation Method

The MLE is built from the one-step linear predictors (Equation (3)), whose prediction errors follow a N$(0, \sigma^2 c_n(\phi,\theta))$ distribution. Let $X_t$ be observed at points $t_1, \ldots, t_N$; the log-likelihood is
$$\ell(\phi,\theta) = -\frac{N}{2}\ln(2\pi) - \frac{N}{2}\ln(\sigma^2) - \frac{1}{2}\sum_{n=1}^{N}\ln\big(c_n(\phi,\theta)\big) - \frac{1}{2}\sum_{n=1}^{N}\frac{\big(X_{t_n} - \hat{X}_{t_n}(\phi,\theta)\big)^2}{\sigma^2 c_n(\phi,\theta)}. \qquad (5)$$
A concentrated criterion function is used to optimize (5): replacing $\sigma^2$ by its optimum $\hat{\sigma}_N^2(\phi,\theta) = \frac{1}{N}\sum_{n=1}^{N}\big(X_{t_n} - \hat{X}_{t_n}(\phi,\theta)\big)^2 / c_n(\phi,\theta)$ yields the reduced likelihood [8], $q_N(\phi,\theta) = \ln\big(\hat{\sigma}_N^2(\phi,\theta)\big) + \frac{1}{N}\sum_{n=1}^{N}\ln\big(c_n(\phi,\theta)\big)$. Thus, $q_N(\phi,\theta)$ is minimised with respect to ϕ and θ, obtaining $\hat{\phi}_N$ and $\hat{\theta}_N$, and $\hat{\sigma}_N^2(\hat{\phi}_N, \hat{\theta}_N)$ is then plugged in as the estimate of $\sigma^2$.
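A minimal sketch of this concentrated-likelihood optimization, assuming SciPy's general-purpose optimizer and the iarma_predict helper sketched in Section 2.3 (the optimizer and starting values are illustrative choices, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize

def iarma_fit_mle(x, times):
    """ML estimation via the reduced criterion q_N of Section 3.1 (sketch).

    Minimizes q_N(phi, theta) = ln(sigma2_hat) + mean(ln(c_n)) over the
    admissible region phi * theta > 0, |phi| < 1, |theta| < 1, then plugs
    the concentrated sigma2_hat back in.
    """
    x = np.asarray(x, dtype=float)

    def q_N(params):
        phi, theta = params
        if abs(phi) >= 1 or abs(theta) >= 1 or phi * theta <= 0:
            return np.inf                                 # outside the parameter space
        x_hat, c = iarma_predict(x, times, phi, theta)
        if np.any(c <= 0):                                # numerically invalid c_n
            return np.inf
        sigma2_hat = np.mean((x - x_hat) ** 2 / c)
        return np.log(sigma2_hat) + np.mean(np.log(c))

    best = None
    for x0 in ([0.5, 0.3], [-0.5, -0.3]):                 # try both admissible sign regions
        res = minimize(q_N, x0=np.array(x0), method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    phi_hat, theta_hat = best.x
    x_hat, c = iarma_predict(x, times, phi_hat, theta_hat)
    sigma2_hat = np.mean((x - x_hat) ** 2 / c)
    return phi_hat, theta_hat, sigma2_hat
```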

3.2. Bootstrap Method

The bootstrap method in time series is based on obtaining the residuals of a model fitted to the data and then generating new series by feeding random samples of those residuals back into the fitted model. The estimated standardized innovations for the iARMA model (assuming $\sigma^2 = 1$) are
$$e_{t_n}^{s} = \frac{X_{t_n} - \hat{X}_{t_n}(\hat{\phi}_N, \hat{\theta}_N)}{\sqrt{c_n(\hat{\phi}_N, \hat{\theta}_N)}},$$
where $\hat{\phi}_N$ and $\hat{\theta}_N$ are the ML estimates for the process $X_t$ observed at points $t_1, \ldots, t_N$. The residuals are centered so that they have the same mean as the model innovations, $e_{t_n}^{sc} = e_{t_n}^{s} - \bar{e}$ with $\bar{e} = \frac{1}{N}\sum_{n=1}^{N} e_{t_n}^{s}$. Model-based resampling then consists of equiprobable sampling with replacement from the centered residuals, obtaining the simulated innovations $e_{t_1}^{*sc}, \ldots, e_{t_N}^{*sc}$ at the same observation times, in order to define
$$X_{t_1}^{*} = \sqrt{c_1(\hat{\phi}_N, \hat{\theta}_N)}\; e_{t_1}^{*sc},$$
$$X_{t_{n+1}}^{*} = \hat{\phi}_{n+1} X_{t_n}^{*} + \sqrt{c_{n+1}(\hat{\phi}_N, \hat{\theta}_N)}\; e_{t_{n+1}}^{*sc} + \frac{\hat{\theta}_{n+1}\; e_{t_n}^{*sc}}{\sqrt{c_n(\hat{\phi}_N, \hat{\theta}_N)}},$$
as the bootstrapped sequence, where $\hat{\phi}_{n+1}$ and $\hat{\theta}_{n+1}$ are built from $\hat{\phi}_N$ and $\hat{\theta}_N$ as in Section 2.1. In this way, the parameters of the sequence $X_{t_n}^{*}$ can be estimated through MLE. Repeating this process B times generates a collection of bootstrapped parameter estimates that approximate the finite-sample distribution of the estimators $\hat{\phi}_N$ and $\hat{\theta}_N$.
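A sketch of this resampling scheme, assuming $\sigma^2 = 1$ and reusing the iarma_predict and iarma_fit_mle helpers sketched above (names and defaults are illustrative):

```python
import numpy as np

def iarma_bootstrap(x, times, phi_hat, theta_hat, B=500, rng=None):
    """Model-based (residual) bootstrap of Section 3.2; returns B replicates of (phi, theta)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    times = np.asarray(times, dtype=float)
    N = len(x)
    x_hat, c = iarma_predict(x, times, phi_hat, theta_hat)
    e = (x - x_hat) / np.sqrt(c)                          # standardized innovations
    e_c = e - e.mean()                                    # centered residuals
    out = np.empty((B, 2))
    for b in range(B):
        e_star = rng.choice(e_c, size=N, replace=True)    # equiprobable resampling
        x_star = np.empty(N)
        x_star[0] = np.sqrt(c[0]) * e_star[0]
        for i in range(1, N):
            delta = times[i] - times[i - 1]
            phi_d = np.sign(phi_hat) * np.abs(phi_hat) ** delta
            theta_d = np.sign(theta_hat) * np.abs(theta_hat) ** delta
            # c[i] depends only on the times and the fitted parameters, so it is reused here
            x_star[i] = (phi_d * x_star[i - 1]
                         + np.sqrt(c[i]) * e_star[i]
                         + theta_d * e_star[i - 1] / np.sqrt(c[i - 1]))
        out[b, 0], out[b, 1], _ = iarma_fit_mle(x_star, times)
    return out
```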

4. Simulation

In this study, M = 1000 trajectories of size N ∈ {30, 100, 500, 2000} are simulated from the iARMA model (Equation (2)), considering $\sigma^2 = 1$, $\phi \in \{-0.5, 0.7\}$ and $\theta \in \{-0.3, -0.8, 0.5, 0.9\}$, using the combinations in which both parameters have the same sign.
For each simulated series, the parameters ϕ and θ are estimated through the MLE, and their variances by bootstrap with B = 500 resamples (the methods described in Section 3.1 and Section 3.2, respectively). The irregular time increments are generated as $\Delta_{n+1} \sim 1 + \operatorname{Exp}(\lambda = 1)$ for $n \geq 1$.
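One replication of this design can be sketched as follows, drawing the increments as 1 + Exp(1) and reusing the helpers from earlier sections (the seed and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2025)
N = 500
# Irregular observation times with increments Delta_{n+1} ~ 1 + Exp(lambda = 1)
gaps = 1.0 + rng.exponential(scale=1.0, size=N - 1)
times = np.concatenate(([0.0], np.cumsum(gaps)))

x = iarma_sample(times, phi=-0.5, theta=-0.3, sigma2=1.0, rng=rng)
phi_hat, theta_hat, sigma2_hat = iarma_fit_mle(x, times)
boot = iarma_bootstrap(x, times, phi_hat, theta_hat, B=500, rng=rng)
se_phi_boot, se_theta_boot = boot.std(axis=0, ddof=1)    # bootstrap standard errors
```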

4.1. ML Estimator

We define $\hat{\phi}_m^{MLE}$ and $\hat{\theta}_m^{MLE}$ as the ML estimates for the m-th trajectory, obtained by minimizing $q_N(\phi,\theta)$. The standard errors are estimated via the curvature of the likelihood surface at $\hat{\phi}_m^{MLE}$ and $\hat{\theta}_m^{MLE}$. The calculated quantities of the M trajectories are summarized with an average, $\hat{E}[\hat{\phi}^{MLE}] = \frac{1}{M}\sum_{m=1}^{M}\hat{\phi}_m^{MLE}$, and their variance, $\widehat{se}^2(\hat{\phi}^{MLE}) = \frac{1}{M}\sum_{m=1}^{M}\big(\hat{\phi}_m^{MLE} - \hat{E}[\hat{\phi}^{MLE}]\big)^2$ (and analogously for θ).

4.2. Bootstrap Estimator

In order to determine the behavior of the estimators $\hat{\phi}_m^{MLE}$ and $\hat{\theta}_m^{MLE}$ and to estimate their respective variances, B resamplings are performed for each m-th ML trajectory, defining the bootstrap estimators and their variances as $\hat{\phi}_m^{boot} = \frac{1}{B}\sum_{b=1}^{B}\hat{\phi}_{m,b}^{boot}$ and $\widehat{se}^2(\hat{\phi}_m^{boot}) = \frac{1}{B-1}\sum_{b=1}^{B}\big(\hat{\phi}_{m,b}^{boot} - \hat{\phi}_m^{boot}\big)^2$, respectively. The variances obtained by bootstrap are then compared with the variances of the ML estimators.

4.3. Performance Measures

The root mean square error (RMSE) and the coefficient of variation (CV) are proposed as performance measures of the ML estimators. Taking $\widehat{bias}(\hat{\phi}^{MLE}) = \hat{E}[\hat{\phi}^{MLE}] - \phi$, we define $\widehat{RMSE}(\hat{\phi}^{MLE}) = \big[\widehat{se}^2(\hat{\phi}^{MLE}) + \widehat{bias}^2(\hat{\phi}^{MLE})\big]^{1/2}$ and $\widehat{CV}(\hat{\phi}^{MLE}) = \frac{\widehat{se}(\hat{\phi}^{MLE})}{|\hat{E}[\hat{\phi}^{MLE}]|} \times 100\%$ (and analogously for θ).
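These summaries amount to a few lines; a sketch (illustrative name) applied to the M Monte Carlo estimates of one parameter:

```python
import numpy as np

def mc_performance(estimates, true_value):
    """Bias, RMSE, and CV of Section 4.3 (sketch; 1/M variance convention of Section 4.1)."""
    estimates = np.asarray(estimates, dtype=float)
    mean_hat = estimates.mean()
    se_hat = estimates.std()                              # ddof = 0, i.e. divide by M
    bias = mean_hat - true_value
    rmse = np.sqrt(se_hat ** 2 + bias ** 2)
    cv = 100.0 * se_hat / abs(mean_hat)
    return {"mean": mean_hat, "se": se_hat, "bias": bias, "RMSE": rmse, "CV%": cv}
```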

4.4. Simulation Results

The bootstrap method is useful for estimating the variance of the ML estimators, since in practice there is usually only one observed trajectory, so the variance cannot be obtained from many ML estimates. As Table 1 shows, the performance of the bootstrap variance estimates improves when the sample size is large, so when the sample size of real data is small, the parameter estimates are more likely to show high variability. Bias, RMSE, and CV decrease as N increases, as expected. Although better results are obtained for ϕ than for θ, the values obtained are consistent.

4.5. ACF Estimation

The DCF estimator was used to estimate the autocorrelation function of the proposed model and to analyse its behavior. The ACF is fundamental for obtaining key information on the temporal dependencies between the values of the time series, and it facilitates the choice of an appropriate model that can also generate good forecasts by identifying cyclical behaviors or important dependencies.
The DCF estimator proposed by [15] was applied, following the algorithm outlined in [13]. The approach relies on transforming an irregularly spaced series into a regular one using an average time interval τ ¯ .
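A simplified, binning-based sketch of this idea (not the BINCOR/DCF implementation itself; the bin handling and names are illustrative assumptions):

```python
import numpy as np

def binned_acf(x, times, max_lag=10):
    """ACF after averaging the irregular series into bins of width tau_bar (sketch)."""
    x = np.asarray(x, dtype=float)
    times = np.asarray(times, dtype=float)
    tau_bar = np.mean(np.diff(times))                     # mean sampling interval
    idx = ((times - times[0]) // tau_bar).astype(int)     # bin index of each observation
    binned = np.array([x[idx == j].mean() for j in np.unique(idx)])
    y = binned - binned.mean()
    denom = np.sum(y ** 2)
    lags = range(min(max_lag, len(y) - 1) + 1)
    return np.array([np.sum(y[k:] * y[:len(y) - k]) / denom for k in lags])
```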
Trajectories were generated by varying the sample size N ∈ {30, 500}, with $\Delta_{n+1} \sim 1 + \operatorname{Exp}(\lambda = 1)$, for ϕ = −0.5, θ = −0.3 and for ϕ = 0.7, θ = 0.5, analysing the first ten lags. These estimates are compared with the theoretical ACF (Section 2.4), revealing that the DCF estimates in the negative-parameter case (Figure 2a,b) reproduce the theoretical ACF better than those in the positive-parameter scenarios (Figure 3a,b), since the latter series have smoother fluctuations, which allows several points of the primary series to be replaced by a single value in the binned series. Consequently, in these cases τ̄ differs significantly from the average interval of the irregular series, affecting the agreement with the theoretical ACF.

5. Applications

Two application cases with real data are presented. The first corresponds to two weeks of measurements of stellar flares from the Orion Nebula obtained during the Chandra Orion Ultradeep Project (COUP), and the second is related to the measurement of solar cycles and their relationship with temperature variation in the Northern Hemisphere.

5.1. Chandra Orion Ultradeep Project

For two weeks, the most extensive data collection in the field of X-ray astronomy was carried out through the COUP, in which NASA’s Chandra X-ray Observatory recorded the amount of energy released as flares in the young star region of the Orion Nebula. This study [16] resulted in the detection of 1616 X-ray sources, of which one (COUP263) was taken for the analysis of interest here. The simplified information consists of the photon arrival time (measured in seconds) and the photon energy (measured in kiloelectronvolts, keV); however, because the measurement in seconds generates very large time intervals between observations (values between 19 and 24,537 s, with an average of 4071 s, Figure 4a), the intervals were analysed as log(Δt_n). The ML estimates of the ϕ and θ parameters and their respective bootstrap variances (B = 500) are presented in Table 2, and they suggest a strong autoregressive structure.
Figure 4b presents the trajectory of COUP263, with the transformed times, as a black line. Good estimation behavior is observed (blue line), with normality of the residuals in Figure 4c and a p-value of 0.3887 in the Shapiro–Wilk test. However, the estimation of the ACF did not show the expected behavior: given the average time interval of 7.71 and the estimates obtained, the first coefficient was expected to be very close to 1, but ρ̂₁ took a negative value. Analysing this behavior, it became evident that the DCF adjustment created a binned series of 42 observations, which produced an average time interval much longer than the original one (38.44), affecting the behavior of the ACF.

5.2. Sunspot Cycle

Ref. [17] identified a direct relationship between the Earth’s climate and the variation in solar cycle length through measurements of air temperature in the Northern Hemisphere from 1860 to 1990 and their comparison with sunspot cycles over the same period. The information analysed consists of 24 observations of sunspot cycle length (in years) from 1860 to 1990, with a mean of 5.35 years. Examining the trajectory of the data (Figure 5a), two pronounced trends of decreasing and increasing mean are identified, indicating that stationarity is not guaranteed. To address this, the series is differenced to ensure a constant mean. Table 3 presents the values of the $\hat{\phi}^{MLE}$ and $\hat{\theta}^{MLE}$ estimators, a case in which both parameters are negative.
The residuals are tested for normality, yielding a p-value of 0.8647 in the Shapiro–Wilk test. This shows that, despite the sample being much smaller than in the previous example, the model maintains good predictions. In this case, the ACF (Figure 5d) correctly captures the trajectory behavior, with a negative ρ̂₁ and alternating signs in the subsequent correlation coefficients, approximating their values quite well. This good behavior is due to the fact that the average time interval of the binned series was not significantly affected, so it remains close to the expected one.

6. Conclusions

A first-order moving average autoregressive model for irregularly spaced time intervals is presented, based on the proposal of [12], extending the correlation structure for both negative and positive parameters, limited to cases where the parameters have the same sign. The model is an ergodic and strictly stationary Gaussian process.
The use of Maximum Likelihood estimation showed good results for both positive and negative structures; however, the result improves as the sample size increases. The best results were obtained when positive coefficients were used. The estimates for ϕ were better than those for θ , showing lower dispersion (smaller CV) and values closer to the true value (lower bias and RMSE). This can be analysed as an effect of the correlation structure, since ϕ relates to the autoregressive structure, where the data at time n is correlated with all past data for which information is available, while the moving average structure ( θ ) is only correlated with the immediately previous data, which generates a greater variability effect at the time of estimating θ .
The estimation of the variance of the ML estimators was performed by bootstrap, identifying better behavior as the sample size increases. Good performance for both positive and negative scenarios was obtained. In general terms, the estimates of s e ^ ϕ 2 were better than those of s e ^ θ 2 as they presented lower dispersion even with a small sample size.
The DCF estimator gave good results for estimating the ACF and for identifying whether the scenario has positive or negative parameters; however, as the average distance between the times increases, the estimation no longer gives the best results, mainly when the coefficients are very close to 0, with a higher probability of obtaining results with the opposite sign. This result is a consequence of the weakness of the autocorrelation when the observations are farther apart.
Finally, the practical application of the proposed model was exemplified by means of two real cases related to astronomical and climatic time series.
The model obtained has the limitation of being applicable only in cases where the signs of both parameters are equal (ϕ · θ > 0). For future research, it is proposed to include correlations in which the parameters have different signs, which can be widely found in real fields of application.

Author Contributions

Conceptualization, D.A.G.P. and C.A.O.E.; Data curation, D.A.G.P.; Formal analysis, D.A.G.P.; Methodology, D.A.G.P. and C.A.O.E.; Software, D.A.G.P.; Supervision, C.A.O.E.; Validation, D.A.G.P. and C.A.O.E.; Visualization, D.A.G.P.; Writing–original draft, D.A.G.P.; Writing–review and editing, D.A.G.P. and C.A.O.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available in this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Franses, P.H. Estimating persistence for irregularly spaced historical data. Qual. Quant. 2021, 55, 2177–2187. [Google Scholar] [CrossRef]
  2. Coelho, M.; Fernandes, C.V.S.; Detzel, D.H.M.; Mannich, M. Statistical validity of water quality time series in urban watersheds. Braz. J. Water Resour. 2017, 22, e51. [Google Scholar] [CrossRef]
  3. Ólafsdóttir, K.B.; Schulz, M.; Mudelsee, M. REDFIT-X: Cross-spectral analysis of unevenly spaced paleoclimate time series. Comput. Geosci. 2016, 91, 11–18. [Google Scholar] [CrossRef]
  4. Elorrieta, F.; Eyheramendy, S.; Palma, W. Discrete-time autoregressive model for unequally spaced time-series observations. Astron. Astrophys. 2019, 627, A120. [Google Scholar] [CrossRef]
  5. Liénard, J.F.; Gravel, D.; Strigul, N.S. Data-intensive modeling of forest dynamics. Environ. Model. Softw. 2015, 67, 138–148. [Google Scholar] [CrossRef]
  6. Shukla, S.N.; Marlin, B.M. A survey on principles, models and methods for learning from irregularly sampled time series. arXiv 2020, arXiv:2012.00168. [Google Scholar]
  7. Erdogan, E.; Ma, S.; Beygelzimer, A.; Rish, I. Statistical Models for Unequally Spaced Time Series; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2005. [Google Scholar]
  8. Brockwell, P.J.; Davis, R.A. Time Series: Theory and Methods; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1991. [Google Scholar]
  9. Ojeda, C.; Palma, W.; Eyheramendy, S.; Elorrieta, F. Extending time-series models for irregular observational gaps with a moving average structure for astronomical sequences. RAS Tech. Instrum. 2023, 2, 33–44. [Google Scholar] [CrossRef]
  10. Eyheramendy, S.; Elorrieta, F.; Palma, W. An irregular discrete time series model to identify residuals with autocorrelation in astronomical light curves. Mon. Not. R. Astron. Soc. 2018, 481, 4311–4322. [Google Scholar] [CrossRef]
  11. Rumyantseva, O.; Sarantsev, A.; Strigul, N. Time series analysis of forest dynamics at the ecoregion level. Forecasting 2020, 2, 20. [Google Scholar] [CrossRef]
  12. Ojeda, C.; Palma, W.; Eyheramendy, S.; Elorrieta, F. A Novel First-Order Autoregressive Moving Average Model to Analyze Discrete-Time Series Irregularly Observed. In Theory and Applications of Time Series Analysis and Forecasting; Springer: Cham, Switzerland, 2023; pp. 91–103. [Google Scholar]
  13. Polanco-Martinez, J.M.; Medina-Elizalde, M.A.; Sanchez Goni, M.F.; Mudelsee, M. BINCOR: An R package for estimating the correlation between two unevenly spaced time series. R J. 2019, 11, 170–184. [Google Scholar] [CrossRef]
  14. Ojeda, C.; Palma, W.; Eyheramendy, S.; Elorrieta, F. An irregularly spaced first-order moving average model. arXiv 2021, arXiv:2105.06395. [Google Scholar] [CrossRef]
  15. Edelson, R.; Krolik, J. The discrete correlation function-A new method for analyzing unevenly sampled variability data. Astrophys. J. Part 1 1988, 333, 646–659. [Google Scholar] [CrossRef]
  16. Getman, K.V.; Flaccomio, E.; Broos, P.S.; Grosso, N.; Tsujimoto, M.; Townsley, L.; Garmire, G.P.; Kastner, J.; Li, J.; Harnden, F.R., Jr.; et al. Chandra Orion Ultradeep Project: Observations and Source Lists. Astrophys. J. Suppl. Ser. 2005, 160, 319–352. [Google Scholar] [CrossRef]
  17. Friis-Christensen, E.; Lassen, K. Length of the Solar Cycle: An Indicator of Solar Activity Closely Associated with Climate. Science 1991, 254, 698–700. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a) Theoretical ACF when ϕ = −0.8 and θ = −0.7. (b) Theoretical ACF when ϕ = 0.8 and θ = 0.7.
Figure 2. Comparison between the theoretical ACF (right) and the ACF estimated with the DCF (left) for a binned series with ϕ = −0.5 and θ = −0.3. (a) N = 30. (b) N = 500.
Figure 3. Comparison between the theoretical ACF (right) and the ACF estimated with the DCF (left) for a binned series with ϕ = 0.7 and θ = 0.5. (a) N = 30. (b) N = 500.
Figure 4. (a) COUP263 trajectory in seconds. (b) In blue, COUP263 data forecast from the iARMA model. In black, the original data with the logarithm of the differences in photon arrival times, log ( s ) . (c) QQ-plot of residuals with normality bands. (d) ACF estimated from COUP263 data.
Figure 5. (a) Transformation of the logarithm of the sunspot cycle length from 1860 to 1990. (b) In blue, the sunspot cycle length forecast from the iARMA model. In black, the original data. (c) QQ plot of residuals with normality bands. (d) ACF estimated from transformed solar cycle length data.
Table 1. Monte Carlo results for the ML estimator with irregularly spaced times for different values of ϕ , θ , and N.
| ϕ | θ | N | ϕ̂_MLE | se(ϕ̂_MLE) | bias(ϕ̂_MLE) | RMSE(ϕ̂_MLE) | CV(ϕ̂_MLE) | θ̂_MLE | se(θ̂_MLE) | bias(θ̂_MLE) | RMSE(θ̂_MLE) | CV(θ̂_MLE) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| −0.5 | −0.3 | 30 | −0.3681 | 0.2760 | 0.1319 | 0.3059 | 74.98% | −0.4620 | 0.3442 | −0.1620 | 0.3804 | 74.50% |
| −0.5 | −0.3 | 100 | −0.4020 | 0.1928 | 0.0980 | 0.2163 | 47.96% | −0.4157 | 0.2558 | −0.1157 | 0.2807 | 61.54% |
| −0.5 | −0.3 | 500 | −0.4744 | 0.0997 | 0.0256 | 0.1029 | 21.01% | −0.3278 | 0.1766 | −0.0278 | 0.1787 | 53.85% |
| −0.5 | −0.3 | 2000 | −0.4932 | 0.0463 | 0.0068 | 0.0468 | 9.38% | −0.3079 | 0.0956 | −0.0079 | 0.0959 | 31.05% |
| −0.5 | −0.8 | 30 | −0.4366 | 0.2681 | 0.0634 | 0.2755 | 61.41% | −0.6968 | 0.3246 | 0.1032 | 0.3406 | 46.58% |
| −0.5 | −0.8 | 100 | −0.4952 | 0.1463 | 0.0048 | 0.1464 | 29.54% | −0.7259 | 0.2421 | 0.0741 | 0.2532 | 33.35% |
| −0.5 | −0.8 | 500 | −0.5010 | 0.0680 | −0.0010 | 0.0680 | 13.57% | −0.7795 | 0.1311 | 0.0205 | 0.1327 | 16.82% |
| −0.5 | −0.8 | 2000 | −0.5014 | 0.0309 | −0.0014 | 0.0309 | 6.15% | −0.7965 | 0.0539 | 0.0035 | 0.0541 | 6.77% |
| 0.7 | 0.5 | 30 | 0.6296 | 0.2108 | −0.0704 | 0.2222 | 33.48% | 0.5637 | 0.3805 | 0.0637 | 0.3858 | 67.49% |
| 0.7 | 0.5 | 100 | 0.6724 | 0.0978 | −0.0276 | 0.1017 | 14.55% | 0.5556 | 0.2935 | 0.0556 | 0.2987 | 52.82% |
| 0.7 | 0.5 | 500 | 0.6942 | 0.0437 | −0.0058 | 0.0441 | 6.29% | 0.5172 | 0.1642 | 0.0172 | 0.1651 | 31.75% |
| 0.7 | 0.5 | 2000 | 0.6989 | 0.0213 | −0.0011 | 0.0213 | 3.04% | 0.5007 | 0.0844 | 0.0007 | 0.0844 | 16.86% |
| 0.7 | 0.9 | 30 | 0.6693 | 0.1796 | −0.0307 | 0.1822 | 26.84% | 0.7364 | 0.3342 | −0.1636 | 0.3721 | 45.38% |
| 0.7 | 0.9 | 100 | 0.6957 | 0.0866 | −0.0043 | 0.0867 | 12.45% | 0.8071 | 0.2327 | −0.0929 | 0.2506 | 28.83% |
| 0.7 | 0.9 | 500 | 0.7016 | 0.0349 | 0.0016 | 0.0349 | 4.97% | 0.8712 | 0.1087 | −0.0288 | 0.1125 | 12.48% |
| 0.7 | 0.9 | 2000 | 0.6997 | 0.0177 | −0.0003 | 0.0177 | 2.54% | 0.8960 | 0.0548 | −0.0040 | 0.0549 | 6.12% |
Table 2. iARMA model estimates for COUP263 data.
| N | ϕ̂_MLE | ϕ̂_boot | se(ϕ̂_boot) | θ̂_MLE | θ̂_boot | se(θ̂_boot) |
|---|---|---|---|---|---|---|
| 209 | 0.9831 | 0.9811 | 0.0020 | 0.0112 | 0.0112 | 0.0002 |
Table 3. iARMA model estimates for sunspot cycle data.
| N | ϕ̂_MLE | ϕ̂_boot | se(ϕ̂_boot) | θ̂_MLE | θ̂_boot | se(θ̂_boot) |
|---|---|---|---|---|---|---|
| 24 | −0.8034 | −0.8018 | 0.0016 | −0.4841 | −0.4831 | 0.0009 |