Article

The Law of the Iterated Logarithm for the Error Distribution Estimator in First-Order Autoregressive Models

by Bing Wang, Yi Jin, Lina Wang, Xiaoping Shi and Wenzhi Yang

1 School of Big Data and Statistics, Anhui University, Hefei 230601, China
2 Irving K. Barber Faculty of Science, University of British Columbia, Kelowna, BC V1V 1V7, Canada
* Author to whom correspondence should be addressed.
Axioms 2025, 14(11), 784; https://doi.org/10.3390/axioms14110784
Submission received: 29 August 2025 / Revised: 22 October 2025 / Accepted: 23 October 2025 / Published: 26 October 2025

Abstract

This paper investigates the asymptotic behavior of kernel-based estimators for the error distribution in a first-order autoregressive model with dependent errors. The model assumes that the error terms form an α-mixing sequence with an unknown cumulative distribution function (CDF) and finite second moment. Due to the unobservability of true errors, we construct kernel-smoothed estimators based on residuals obtained via least squares. Under mild assumptions on the kernel function, bandwidth selection, and mixing coefficients, we establish a logarithmic law of the iterated logarithm (LIL) for the supremum norm difference between the residual-based kernel estimator and the true distribution function. The limiting bound is shown to be 1/2, matching the classical LIL for independent samples. To support the theoretical results, simulation studies are conducted to compare the empirical and kernel distribution estimators under various sample sizes and error term distributions. The kernel estimators demonstrate smoother convergence behavior and improved finite-sample performance. These results contribute to the theoretical foundation for nonparametric inference in autoregressive models with dependent errors and highlight the advantages of kernel smoothing in distribution function estimation under dependence.
Keywords: autoregressive model; α-mixing sequence; kernel estimation; law of the iterated logarithm; residual-based estimator
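To make the headline result concrete, the display below is one plausible formalization of the logarithmic LIL described in the abstract. It assumes the classical Chung–Smirnov normalization suggested by the phrase "matching the classical LIL for independent samples"; the paper's exact statement and its conditions on the kernel, bandwidth, and mixing coefficients may differ.

% Hedged sketch: normalization assumed to be the classical Chung--Smirnov one.
\[
  \limsup_{n \to \infty}
  \sqrt{\frac{n}{2 \log \log n}}\,
  \sup_{t \in \mathbb{R}} \bigl| \widehat{F}_n(t) - F(t) \bigr|
  \le \frac{1}{2} \quad \text{a.s.},
\]
where \(\widehat{F}_n\) denotes the residual-based kernel estimator of the error distribution function \(F\).

The following minimal Python sketch illustrates the construction the abstract describes: simulate a first-order autoregressive series, estimate the autoregressive coefficient by least squares, form residuals, and compare the residual-based kernel distribution estimator and the empirical distribution function of the residuals in supremum norm. The AR(1) coefficient 0.5, the Gaussian kernel, the bandwidth n^(-1/5), and i.i.d. standard normal errors (a special case of an α-mixing error sequence) are illustrative assumptions, not the paper's specification.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_ar1(n, phi=0.5, burn=200):
    """Generate x_t = phi * x_{t-1} + e_t with i.i.d. N(0, 1) errors (demo choice)."""
    e = rng.standard_normal(n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + e[t]
    return x[burn:]

def ls_residuals(x):
    """Least-squares estimate of the AR(1) coefficient and the resulting residuals."""
    y, z = x[1:], x[:-1]
    phi_hat = np.sum(y * z) / np.sum(z ** 2)
    return y - phi_hat * z, phi_hat

def kernel_cdf(resid, grid, h):
    """Kernel-smoothed CDF estimator: average of Phi((t - e_i) / h) over residuals e_i."""
    return norm.cdf((grid[:, None] - resid[None, :]) / h).mean(axis=1)

n = 2000
x = simulate_ar1(n)
resid, phi_hat = ls_residuals(x)
h = n ** (-1 / 5)                     # assumed bandwidth rate, for illustration only
grid = np.linspace(-4.0, 4.0, 401)
F_true = norm.cdf(grid)               # true error CDF in this demo (standard normal)
F_kernel = kernel_cdf(resid, grid, h)
F_emp = (resid[None, :] <= grid[:, None]).mean(axis=1)

lil_scale = np.sqrt(n / (2.0 * np.log(np.log(n))))   # classical LIL normalization (assumed)
print(f"phi_hat = {phi_hat:.3f}")
print(f"sup-norm, kernel estimator   : {np.max(np.abs(F_kernel - F_true)):.4f}")
print(f"sup-norm, empirical estimator: {np.max(np.abs(F_emp - F_true)):.4f}")
print(f"LIL-scaled kernel deviation  : {lil_scale * np.max(np.abs(F_kernel - F_true)):.3f}")

In repeated runs with growing n, both scaled deviations stay of order one, and the kernel estimator tends to track the true distribution function at least as smoothly as the empirical one, in line with the finite-sample behavior the abstract reports for the simulation study.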

Share and Cite

MDPI and ACS Style

Wang, B.; Jin, Y.; Wang, L.; Shi, X.; Yang, W. The Law of the Iterated Logarithm for the Error Distribution Estimator in First-Order Autoregressive Models. Axioms 2025, 14, 784. https://doi.org/10.3390/axioms14110784


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
