Article

Robust Adaptive Lasso via Robust Sample Autocorrelation Coefficient for the Autoregressive Models

Yunlu Jiang, Fudong Chen and Xiao Yan
1 School of Economics, Jinan University, Guangzhou 510632, China
2 School of Public Administration, Jinan University, Guangzhou 510632, China
* Author to whom correspondence should be addressed.
Axioms 2025, 14(9), 701; https://doi.org/10.3390/axioms14090701
Submission received: 3 August 2025 / Revised: 8 September 2025 / Accepted: 12 September 2025 / Published: 17 September 2025
(This article belongs to the Special Issue Advances in Statistical Simulation and Computing)

Abstract

For autoregressive models, classical estimation methods, including the least squares estimator and the maximum likelihood estimator, are not robust to heavy-tailed distributions or outliers in the dataset and lack sparsity, leading to potentially inaccurate estimation and poor generalization capability. Meanwhile, existing variable selection methods cannot handle the case where the influence of the explanatory variables on the dependent variable gradually weakens as the lag order increases. To address these issues, we propose a novel robust adaptive lasso method for autoregressive models. The proposed method uses partial autocorrelation coefficients as adaptive penalty weights to promote sparsity in parameter estimation, and employs a robust autocorrelation estimator based on the FQ_n statistic to enhance resistance to outliers. Numerical simulations and two real data analyses illustrate the promising performance of the proposed approach, which exhibits good robustness and sparsity in the presence of outliers in the dataset.

1. Introduction

Financial market stability is significantly influenced by stock index prices, which serve as crucial indicators of macroeconomic conditions. However, constructing robust and generalizable prediction models for stock indices is considerably challenging due to their inherent volatility, which is driven by complex interactions among economic, political, cultural, and demographic factors. Among various forecasting approaches, time series models, particularly autoregressive (AR) models, have been widely employed for stock price prediction.
Traditional estimation methods for AR models, such as the ordinary least squares (OLS) estimator and the maximum likelihood estimator, perform adequately under standard regularity conditions, but become unreliable when financial data contain outliers. Such outliers frequently arise from unexpected events, including policy changes, major accidents, and social phenomena. As demonstrated by Box et al. [1], outliers can severely distort time series model estimates. Two primary approaches have been developed to address this issue: outlier detection and removal, and robust estimation methods that mitigate outlier effects.
In outlier detection research, Huber [2] pioneered methods for identifying outliers in AR models. Chang et al. [3] subsequently extended these methods to ARIMA models with iterative procedures for innovational outliers (IOs) and additive outliers (AOs). Tsay et al. [4] generalized these approaches to multivariate cases, while McQuarrie and Tsai [5] proposed a detection method that does not require prior knowledge of the order, location, or type of the model. Further developments include techniques by Karioti and Caroni [6] for shorter time series and unequal-length AR models, and continuous detection methods introduced by Louni [7] that demonstrate superior performance for IOs. However, most detection methods rely on specific distributional assumptions that are often unrealistic in practice. To address this limitation, Čampulová et al. [8] proposed a nonparametric approach combining data smoothing with change point analysis of residuals.
Compared to outlier detection, robust estimation methods for time series have received less attention. Notable contributions include the weighted least absolute deviations estimation of Pan et al. [9] for periodic ARMA (PARMA) models and the exponential squared estimator of Jiang [10] for AR models with heavy-tailed errors. More recent advances include the weighted M-estimators with data-driven parameter selection of Callens et al. [11] and the reweighted multivariate least trimmed squares and MM-estimators of Chang and Shi [12] for VAR models.
Traditional time series estimation methods also suffer from poor generalization due to a lack of sparsity. This issue was addressed by the least absolute shrinkage and selection operator (lasso) method [13], which produces sparse and interpretable models. Subsequent developments include the smoothly clipped absolute deviation (SCAD) penalty [14] and the adaptive lasso [15], both designed to overcome the bias of the lasso. For AR models, Audrino and Camponovo [16] investigated the adaptive lasso's theoretical properties, Songsiri [17] formulated ℓ_1-regularized least squares problems, and Emmert-Streib and Dehmer [18] proposed a two-step lasso approach for vector autoregression with data-driven feature selection. However, traditional AR estimation methods are sensitive to outliers and cannot handle the case where the influence of the explanatory variables on the dependent variable gradually weakens as the lag order increases. To overcome this issue, we propose a novel robust adaptive lasso approach that uses partial autocorrelation coefficients to construct adaptive penalty weights and employs robust autocorrelation coefficients built from the FQ_n statistic. Extensive numerical simulations and two real data examples demonstrate the validity of the proposed method.
The rest of the paper is organized as follows: In Section 2, we first review traditional estimation methods for AR models, and present our proposed robust adaptive lasso method. In Section 3, we evaluate the finite-sample performance of the proposed method through Monte Carlo simulations and compare it with other methods. In Section 4, we employ the proposed method to analyze two distinct time series: the S&P 500 Index and the USD/CNY exchange rate. We conclude with some remarks in Section 5.

2. Robust Adaptive Lasso for AR Models

2.1. Least Squares Method

The autoregressive model of order p, denoted AR(p), can be expressed as follows:

y_t = β_0 + β_1 y_{t-1} + ⋯ + β_p y_{t-p} + ε_t,    (1)

where y_t is the dependent variable, y_{t-1}, …, y_{t-p} are explanatory variables, β = (β_0, β_1, …, β_p)^T is the vector of unknown parameters, and ε_t is an independent error term with E(ε_t) = 0. Given n observations, the coefficients in model (1) can be estimated by minimizing the residual sum of squares:

β̂_OLS = argmin_β Σ_{t=p+1}^{n} ( y_t - β_0 - Σ_{k=1}^{p} β_k y_{t-k} )².    (2)
However, the least squares method is not robust, cannot perform order selection, and does not yield sparse solutions.
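To make the estimator concrete, the following minimal sketch builds the lag design matrix and solves (2) by least squares; the simulated AR(1) series and the chosen orders are assumptions for illustration only.

```python
import numpy as np

def fit_ar_ols(y, p):
    """Least squares estimate of (beta_0, beta_1, ..., beta_p) in (2)."""
    n = len(y)
    # Rows are t = p+1, ..., n; columns are the intercept and y_{t-1}, ..., y_{t-p}.
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - k:n - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return beta

rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):                 # toy AR(1) series with coefficient 0.5
    y[t] = 0.5 * y[t - 1] + rng.normal()
print(fit_ar_ols(y, p=2))               # beta_1 near 0.5, beta_2 near 0
```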

2.2. Lasso and Adaptive Lasso Methods

Tibshirani [13] proposed the lasso method, which performs variable selection and parameter estimation simultaneously by adding a penalty term to the objective function in (2):

β̂_lasso = argmin_β Σ_{t=p+1}^{n} ( y_t - β_0 - Σ_{k=1}^{p} β_k y_{t-k} )² + λ_1 Σ_{k=1}^{p} |β_k|,    (3)

where λ_1 is a nonnegative tuning parameter.
However, the lasso applies a uniform penalty to all coefficients, which results in biased estimates. To address this limitation, Zou [15] proposed the adaptive lasso method, which assigns smaller penalties to larger coefficients and vice versa:

β̂_adl = argmin_β Σ_{t=p+1}^{n} ( y_t - β_0 - Σ_{k=1}^{p} β_k y_{t-k} )² + λ_2 Σ_{k=1}^{p} ω_{k1} |β_k|,    (4)

where λ_2 is a nonnegative penalty parameter, ω_{k1} = 1/|β̂_k|^{δ_1}, δ_1 is a positive constant, and β̂_k is an initial estimate of β_k.
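Any plain lasso solver can produce the adaptive lasso estimate (4) through the standard reweighting trick: scale column k by w_k = |β̂_k|^{δ_1}, fit an ordinary lasso, and scale the coefficients back. A minimal sketch follows, assuming scikit-learn's Lasso (whose alpha parameter scales the penalty differently from λ_2) and illustrative values of delta1 and alpha.

```python
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso_ar(y, p, delta1=0.5, alpha=0.1):
    n = len(y)
    X = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    target = y[p:]
    # Initial OLS fit supplies the beta_hat_k used in the weights.
    beta0, *_ = np.linalg.lstsq(np.column_stack([np.ones(n - p), X]),
                                target, rcond=None)
    w = np.abs(beta0[1:]) ** delta1        # w_k = |beta_hat_k|^{delta_1}
    fit = Lasso(alpha=alpha).fit(X * w, target)
    return fit.intercept_, fit.coef_ * w   # undo the column scaling
```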

2.3. Robust Adaptive Lasso for AR Models

Classical estimation approaches such as the least squares method lack robustness. For the AR(p) model, the Yule–Walker equations can be expressed as:

ρ_1 = β_1 ρ_0 + β_2 ρ_1 + ⋯ + β_p ρ_{1-p},
ρ_2 = β_1 ρ_1 + β_2 ρ_0 + ⋯ + β_p ρ_{2-p},
⋮
ρ_p = β_1 ρ_{p-1} + β_2 ρ_{p-2} + ⋯ + β_p ρ_0,    (5)

where ρ_j denotes the j-th order autocorrelation coefficient (with ρ_{-j} = ρ_j).
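Because the coefficient matrix of (5) is a symmetric Toeplitz matrix built from (ρ_0, …, ρ_{p-1}), the system can be solved efficiently. A minimal sketch, assuming SciPy's solve_toeplitz, follows.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker_coeffs(rho):
    """Solve (5) for (beta_1, ..., beta_p); rho holds (rho_1, ..., rho_p)."""
    first_col = np.r_[1.0, rho[:-1]]        # (rho_0, rho_1, ..., rho_{p-1})
    return solve_toeplitz(first_col, rho)   # symmetric Toeplitz system

print(yule_walker_coeffs(np.array([0.6, 0.36])))  # AR(1)-type decay: about (0.6, 0)
```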
The Yule–Walker equations form a system of linear equations linking the autocorrelation function to the AR coefficients, so robust coefficient estimates can be obtained by deriving robust autocorrelation estimates. However, the usual sample autocorrelation function of a weakly stationary time series involves the sample mean, which is highly sensitive to outliers, making the resulting ρ̂_k non-robust. Gnanadesikan and Kettenring [19] proposed an alternative definition of the correlation coefficient, given as follows:
ρ = ( var(U) - var(V) ) / ( var(U) + var(V) ),    (6)

where

U = ( X/σ_1 + Y/σ_2 ) / √2,    V = ( X/σ_1 - Y/σ_2 ) / √2,

with σ_1 and σ_2 being the standard deviations of X and Y, respectively. The sample correlation coefficient can then be expressed as follows:

ρ̂ = ( Ŝ²(U) - Ŝ²(V) ) / ( Ŝ²(U) + Ŝ²(V) ),    (7)

where Ŝ²(U) and Ŝ²(V) are the sample variances based on random samples from U and V, respectively, and are the estimators of var(U) and var(V).
Since ρ̂ in (7) involves the sample variance, it inherits its lack of robustness, so a robust alternative estimator of ρ is needed. The median absolute deviation (MAD) is a robust estimator of the standard deviation with the highest possible breakdown point of 0.5, but at the cost of an asymptotic efficiency of only 0.37. The breakdown point is a global robustness measure from the perspective of resistance to outliers, referring to the maximum proportion of contaminated data that an estimator can tolerate before becoming meaningless [20]. Refs. [21,22] point out that a 50% breakdown point means that the estimator is insensitive to corruption by outliers, provided that the outliers constitute less than 50% of the sample. Rousseeuw and Croux [23] proposed another robust estimator, the lower quartile of the absolute pairwise differences (Q_n), which attains the maximum breakdown point of 0.5 with an efficiency of 0.82. However, the Q_n estimator suffers from high computational complexity. Subsequently, Smirnov et al. [24] constructed an M-estimator by matching its influence function to that of Q_n, thereby retaining the high asymptotic efficiency of Q_n while avoiding its computational cost, resulting in a fast Q_n statistic, denoted FQ_n:
FQ_n(x) = 1.483 MAD_n(x) [ 1 - (Z_0 - n/2)/Z_2 ],    (8)

Z_k = Σ_{i=1}^{n} u_i^k e^{-u_i²/2},    k = 0, 2,    (9)

where u_i = (x_i - med(x)) / (1.483 MAD_n(x)), MAD_n is the median absolute deviation, and med(x) is the sample median.
Since (8) employs the k-step M-estimation approach, the final estimator inherits the breakdown point of the initial estimator (Rousseeuw and Croux [25]). Therefore, when MAD is selected as the initial estimator, FQ_n also achieves a breakdown point of 0.5. Using FQ_n to estimate the standard deviations yields the robust sample autocorrelation coefficient:

ρ̂ = ( FQ_n²(u) - FQ_n²(v) ) / ( FQ_n²(u) + FQ_n²(v) ).    (10)
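The sketch below implements our reading of (8)-(10); the exact form of (8) should be checked against Smirnov et al. [24], and the construction of the lag-k pairs (and dropping the 1/√2 factor, which cancels in the ratio) are illustrative choices.

```python
import numpy as np

def fqn(x):
    """FQ_n scale estimate, following our reconstruction of (8)-(9)."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    s0 = 1.483 * np.median(np.abs(x - med))   # initial scale: 1.483 * MAD_n
    u = (x - med) / s0
    w = np.exp(-u ** 2 / 2)
    z0, z2 = w.sum(), (u ** 2 * w).sum()      # Z_0 and Z_2 from (9)
    return s0 * (1 - (z0 - len(x) / 2) / z2)

def robust_acf(y, k):
    """Robust lag-k autocorrelation (10) from the pairs (y_t, y_{t+k})."""
    x, z = y[:-k], y[k:]
    u = x / fqn(x) + z / fqn(z)               # the 1/sqrt(2) factor cancels below
    v = x / fqn(x) - z / fqn(z)
    su, sv = fqn(u) ** 2, fqn(v) ** 2
    return (su - sv) / (su + sv)
```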
The Yule–Walker equations establish the relationship between sample autocorrelation coefficients and parameters to be estimated. By obtaining robust autocorrelation coefficients, we can derive robust parameter estimates through the corresponding algorithms. To achieve both robust and sparse results, these robust correlation coefficients are incorporated into the adaptive lasso method to enhance the model’s robustness.
Generally, the influence of explanatory variables on the dependent variable gradually weakens as the lag order increases. We therefore combine the partial autocorrelation coefficient, commonly used for order determination in time series, with the adaptive lasso method to improve the traditional lasso model:
β̂_ra = argmin_β Σ_{t=p+1}^{n} ( ρ̂_t - Σ_{k=1}^{p} β_k ρ̂_{t-k} )² + λ_3 Σ_{k=1}^{p} ω_{k3} |β_k|,    (11)

where

ω_{k3} = 1 / ( |β̂_k|^{δ_2} |φ̂_{kk}|^{δ_3} ),    δ_2, δ_3 > 0,    (12)

ρ̂_t is the t-th order robust autocorrelation coefficient, and the k-th order sample partial autocorrelation coefficient φ̂_{kk} is computed recursively (the Durbin–Levinson recursion) as follows:

φ̂_{kk} = ( ρ̂_k - Σ_{j=1}^{k-1} φ̂_{k-1,j} ρ̂_{k-j} ) / ( 1 - Σ_{j=1}^{k-1} φ̂_{k-1,j} ρ̂_j ),    (13)

where ρ̂_k is the k-th order sample autocorrelation coefficient, φ̂_{k-1,j} represents the j-th coefficient estimate from the (k-1)-th order recursion, and the recursion is initialized with φ̂_{11} = ρ̂_1.
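The recursion (13) is the Durbin–Levinson algorithm; a minimal sketch follows, including the lower-order update φ̂_{k,j} = φ̂_{k-1,j} - φ̂_{kk} φ̂_{k-1,k-j} that the recursion relies on, together with the weights (12). The function names are ours.

```python
import numpy as np

def pacf_from_acf(rho):
    """Durbin-Levinson recursion (13); rho holds (rho_1, ..., rho_p)."""
    p = len(rho)
    phi = np.zeros((p + 1, p + 1))            # phi[k, j] stores phi_{k,j}
    pacf = np.zeros(p)
    phi[1, 1] = pacf[0] = rho[0]              # initialization: phi_11 = rho_1
    for k in range(2, p + 1):
        num = rho[k - 1] - sum(phi[k - 1, j] * rho[k - 1 - j] for j in range(1, k))
        den = 1.0 - sum(phi[k - 1, j] * rho[j - 1] for j in range(1, k))
        phi[k, k] = pacf[k - 1] = num / den
        for j in range(1, k):                 # update the lower-order coefficients
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
    return pacf

def ra_weights(beta_hat, pacf, delta2=0.5, delta3=0.5):
    """Adaptive weights omega_k3 in (12)."""
    return 1.0 / (np.abs(beta_hat) ** delta2 * np.abs(pacf) ** delta3)
```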

3. Simulation Studies

In this section, we investigate the numerical performance of the proposed method using Monte Carlo simulations. We simulate 100 datasets from the following autoregressive model (14) with sample sizes n = 200 and 400:

y_t = β_1 y_{t-1} + ⋯ + β_{10} y_{t-10} + ε_t.    (14)

The data are generated under the following three scenarios (a data-generation sketch follows the list):
  • Scenario 1: The coefficients are (β_1, …, β_5) = (0.45, 0.37, 0.28, 0.20, 0.15) with β_j = 0 for j > 5. The error term follows a Gaussian mixture distribution, ε_t ∼ (1 - ε) N(0, 1) + ε N(0, 10), where the contamination proportion ε takes values in {0, 0.10, 0.20}.
  • Scenario 2: Keeping the same coefficients as in Scenario 1, the error distribution is replaced by ε_t ∼ (1 - ε) N(0, 1) + ε Cauchy(0, 1) with ε ∈ {0, 0.10, 0.20}. The Cauchy component introduces extreme outliers due to its heavy tails.
  • Scenario 3: The coefficients are (β_1, …, β_5) = (0.85, 0.20, 0.15, 0.10, 0.05) with β_j = 0 for j > 5. Characteristic root analysis confirms stationarity, with a dominant root modulus of 1.031 inducing the strong persistence and slow mean reversion typical of economic time series. The error term follows that of Scenario 1.
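The following sketch generates data under the three scenarios; the burn-in length, the seed, and reading N(0, 10) as a variance of 10 are assumptions.

```python
import numpy as np

def simulate_ar(beta, n, eps, heavy="normal", burn=200, seed=0):
    """Simulate model (14) with contaminated innovations."""
    rng = np.random.default_rng(seed)
    p = len(beta)
    y = np.zeros(n + burn)
    for t in range(p, n + burn):
        if rng.random() >= eps:
            e = rng.normal(0.0, 1.0)               # clean N(0, 1) innovation
        elif heavy == "normal":
            e = rng.normal(0.0, np.sqrt(10.0))     # Scenarios 1 and 3: N(0, 10)
        else:
            e = rng.standard_cauchy()              # Scenario 2: Cauchy(0, 1)
        y[t] = np.dot(beta, y[t - p:t][::-1]) + e  # uses y_{t-1}, ..., y_{t-p}
    return y[burn:]

beta1 = np.array([0.45, 0.37, 0.28, 0.20, 0.15] + [0.0] * 5)
y = simulate_ar(beta1, n=200, eps=0.10)            # Scenario 1, 10% contamination
```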
To demonstrate the advantages of our proposed method (RA-LASSO), we compare it with the traditional adaptive lasso method (LS-LASSO) and ordinary least squares estimation (OLS). Furthermore, the following four criteria are computed to evaluate finite-sample performance:
  • TP: The average accuracy rate of parameter estimates over 100 repetitions.
  • Size: The average number of non-zero coefficients in the estimation results over 100 repetitions.
  • AE: Mean absolute estimation error, Σ_{j=1}^{p} |β̂_{nj} - β_j|.
  • SE: Root mean squared estimation error, ( Σ_{j=1}^{p} (β̂_{nj} - β_j)² )^{1/2}.
For RA-LASSO and LS-LASSO, we set δ_1 = δ_2 = δ_3 = 0.5. These methods also require initial estimates β̂_k and the tuning parameters λ_2 and λ_3. In this simulation, we use the ordinary least squares estimates as the initial values β̂_k and select λ_2 and λ_3 by minimizing the following BIC criterion [26]:

BIC(λ) = log( SSE(β̂_λ) ) + k log(n)/n,    (15)

where SSE(β̂_λ) = Σ_{i=1}^{n} (y_i - ŷ_i)²/n, ŷ_i denotes the fitted value of y_i, and k denotes the number of non-zero coefficients in β̂_λ.
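A minimal sketch of the BIC tuning loop (15) follows; the fit callback and the candidate grid are placeholders rather than the paper's implementation.

```python
import numpy as np

def select_lambda(fit, y_target, lambdas, n):
    """fit(lam) -> (coef, y_hat); return the lambda minimizing BIC(lambda) in (15)."""
    best_lam, best_bic = None, np.inf
    for lam in lambdas:
        coef, y_hat = fit(lam)
        sse = np.mean((y_target - y_hat) ** 2)   # SSE(beta_lambda), divided by n
        k = np.count_nonzero(coef)               # number of non-zero coefficients
        bic = np.log(sse) + k * np.log(n) / n
        if bic < best_bic:
            best_bic, best_lam = bic, lam
    return best_lam
```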
The corresponding simulation results are shown in Table 1, Table 2 and Table 3. These results reveal the following:
  • For Scenario 1, in the absence of contamination (ε = 0), all methods perform well. As the contamination level increases to 0.1 and 0.2, the advantage of the robust method becomes pronounced: RA-LASSO consistently achieves a higher TP and lower AE and SE than the other methods under contamination.
  • For Scenario 2, under no contamination (ε = 0), all methods perform similarly. However, even mild contamination (ε = 0.1) drastically degrades the performance of OLS and LS-LASSO, whereas RA-LASSO remains relatively robust and outperforms the other two methods. Furthermore, the Size metric shows that LS-LASSO tends to overfit severely under contamination, while RA-LASSO effectively controls model complexity, selecting a model size much closer to the true value.
  • For Scenario 3, RA-LASSO consistently delivers the highest TP and the lowest estimation errors (AE and SE) under contamination. This confirms that the proposed method remains effective in the presence of both persistent serial correlation and outliers in the innovations.

4. Empirical Analysis

4.1. Application to S&P 500 Index

The closing price reflects the final trading price of a stock for the day. It is widely used as a stable indicator to measure stock performance because it accounts for all price movements during the trading session. We select the closing prices of the S&P 500 Index from 24 September 2020 to 18 February 2021 as our research dataset, as this period witnessed significant market volatility and substantial price fluctuations, making it suitable for empirical analysis.
We first plot the closing prices over time in Figure 1, which shows significant fluctuations. To check for stationarity, we perform the augmented Dickey–Fuller (ADF) test; the results are shown in Table 4. The p-value of 0.6109 exceeds 0.05, implying that the series is non-stationary. To address this, we take the first-order difference of the closing prices and plot the differenced series in Figure 2. As shown in Figure 2, the differenced series appears more stable, fluctuating randomly around a constant level, and another ADF test on it (Table 4) yields a p-value of 0.01, confirming stationarity. Although the differenced series is stationary, pronounced outliers are evident in its time series plot. To diagnose the presence of outliers more rigorously, we draw a leverage versus standardized residual plot based on a preliminary AR(p) model in Figure 3, where p was determined by the AIC. Figure 3 shows that the absolute standardized residuals of three distinct observations exceed the threshold of 2, some even exceeding 3. These extreme values are likely attributable to transient market shocks, such as unexpected earnings reports, geopolitical events, or abrupt changes in investor sentiment, which can introduce significant short-term volatility not eliminated by differencing. Next, we examine the autocorrelation (ACF) and partial autocorrelation (PACF) plots of the differenced series in Figure 4 and Figure 5; neither cuts off sharply. Based on Figure 4 and Figure 5, we proceed to fit an AR(p) model to the differenced series.
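For readers who wish to reproduce the stationarity check, a minimal sketch follows; statsmodels' adfuller is one common ADF implementation (the paper's software is not stated), and the synthetic random-walk series is a stand-in for the S&P 500 data.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

prices = 3500 + np.cumsum(np.random.default_rng(1).normal(size=100))  # stand-in series
stat, pvalue, *_ = adfuller(prices)
print(f"level series: ADF = {stat:.4f}, p = {pvalue:.4f}")   # large p: non-stationary
diff = np.diff(prices)                                       # first-order difference
stat, pvalue, *_ = adfuller(diff)
print(f"differenced:  ADF = {stat:.4f}, p = {pvalue:.4f}")   # small p: stationary
```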
Let y_t denote the first-order differenced closing price of the S&P 500 Index, with the lagged differences denoted y_{t-1}, …, y_{t-p}. We first apply the Akaike information criterion (AIC) to determine the order p of the AR(p) model. Figure 6 presents the AIC values for different model configurations; the AIC reaches its minimum at lag order 12, so we select p = 12 as the optimal order.
After determining the lag order, we apply the RA-LASSO, LS-LASSO, and OLS methods to fit the AR(12) model. The corresponding estimation results are presented in Table 5. We observe from Table 5 that the number of non-zero coefficients progressively decreases across the three fitted AR(12) models. The LS-LASSO method selects all lags except t-5 as explanatory variables, while the RA-LASSO method selects all lags except t-2, t-3, t-5, and t-8. Compared with LS-LASSO, RA-LASSO exhibits a faster shrinkage rate and selects fewer variables, indicating superior variable selection performance.
After obtaining the fitted models with the three methods, we further evaluate their prediction accuracy by applying each model to predict the first-order differenced S&P 500 Index. The mean absolute percentage error (MAPE) is calculated for each model, defined as follows:

MAPE = (1/n) Σ_{i=1}^{n} | (ŷ_i - y_i) / y_i |,    (16)

where y_i represents the actual value on day i and ŷ_i denotes the predicted value.
The MAPE results of the three methods are presented in Table 6. As shown in Table 6, the proposed RA-LASSO method achieves the smallest MAPE, demonstrating its excellent predictive capability.

4.2. Application to USD/CNY Exchange Rate

Next, we apply the proposed methodology to the exchange rate of the U.S. Dollar against the Chinese Yuan (USD/CNY) from 17 January 2023 to 8 June 2023, sourced from the Federal Reserve Economic Data (FRED) website of the Federal Reserve Bank of St. Louis (fred.stlouisfed.org, accessed on 23 August 2024). During this period, the exchange rate exhibits significant fluctuations, so we use these data to compare the robustness of the RA-LASSO, LS-LASSO, and OLS methods. Figure 7 depicts the exchange rate time series, which shows considerable variability. An ADF test is conducted to assess stationarity; the results in Table 7 yield a p-value of 0.8908, indicating non-stationarity. First-order differencing is applied to achieve stationarity. The differenced series, plotted in Figure 8, appears stable and fluctuates randomly around a constant level, and a follow-up ADF test confirms stationarity with a p-value of 0.01. Despite achieving stationarity, the differenced series contains noticeable outliers, likely attributable to transient market disturbances such as unexpected macroeconomic announcements, geopolitical tensions, or sudden shifts in monetary policy expectations; these factors can introduce short-term volatility that differencing alone cannot eliminate. To diagnose outliers and influential points rigorously, we employ a leverage versus standardized residual plot (Figure 9) based on a preliminary AR(p) model, where p was determined by the AIC. The plot reveals two distinct phenomena:
  • Numerous high-leverage observations, with leverage values exceeding 2h̄, where h̄ denotes the mean leverage (indicated by the vertical dashed line).
  • Several extreme residuals with absolute standardized values surpassing 2.
These high-leverage points, predominantly clustered during periods of exceptional market turbulence, may disproportionately influence parameter estimates. The simultaneous presence of large residuals violates the Gaussian error assumption underlying classical estimation methods. The ACF and PACF of the differenced series are presented in Figure 10 and Figure 11. Neither the ACF nor the PACF cuts off sharply.
Let y_t represent the first-differenced USD/CNY exchange rate, with y_{t-1}, …, y_{t-p} denoting the lagged values. The Akaike information criterion (AIC) is used to select the optimal lag order p. As shown in Figure 12, the AIC is minimized at p = 10, which is selected as the optimal order. The AR(10) model is estimated using RA-LASSO, LS-LASSO, and OLS, and the coefficient estimates are reported in Table 8. The number of non-zero coefficients decreases across methods: LS-LASSO retains all lags, while RA-LASSO excludes lags t-1, t-7, and t-9. This indicates that RA-LASSO promotes greater sparsity and exhibits a faster shrinkage rate.
Furthermore, the finite-sample performance is evaluated using the mean absolute error (MAE) and the median absolute deviation (MAD), defined as follows:

MAE = (1/n) Σ_{i=1}^{n} | ŷ_i - y_i |,    (17)

MAD = median_i | ŷ_i - y_i |,    (18)

where y_i denotes the actual exchange rate on the i-th day and ŷ_i represents the predicted exchange rate. The results of the three methods are presented in Table 9. The proposed RA-LASSO method achieves lower MAE and MAD values than the other two methods.
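For completeness, a small sketch of the three forecast-accuracy metrics (16)-(18) used in Sections 4.1 and 4.2 is given below; y and y_hat are placeholder arrays.

```python
import numpy as np

def mape(y, y_hat):
    return np.mean(np.abs((y_hat - y) / y))   # (16); requires y != 0

def mae(y, y_hat):
    return np.mean(np.abs(y_hat - y))         # (17)

def mad(y, y_hat):
    return np.median(np.abs(y_hat - y))       # (18)
```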

5. Discussion

This paper proposed a robust adaptive lasso method for autoregressive models that combines partial autocorrelation coefficients with robust autocorrelation coefficients. Simulation studies and two real data analyses demonstrated that the proposed method outperforms existing methods.
Several topics remain for future work. First, we will investigate the asymptotic properties of the proposed method. Second, we will extend the proposed method to other time series models, such as moving average models and autoregressive moving average models.

Author Contributions

Conceptualization, Y.J.; software, F.C.; data curation, F.C.; writing—original draft preparation, F.C. and X.Y.; writing—review and editing, Y.J. and X.Y.; visualization, F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China under Grant No. 12571284 and Grant No. 12171203, and the Fundamental Research Funds for the Central Universities under Grant No. 23JNQMX21.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Box, G.E.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  2. Huber, P.J. The 1972 Wald lecture. Robust statistics: A review. Ann. Math. Stat. 1972, 43, 1041–1067. [Google Scholar] [CrossRef]
  3. Chang, I.; Tiao, G.C.; Chen, C. Estimation of time series parameters in the presence of outliers. Technometrics 1988, 30, 193–204. [Google Scholar] [CrossRef]
  4. Tsay, R.S.; Pena, D.; Pankratz, A.E. Outliers in multivariate time series. Biometrika 2000, 87, 789–804. [Google Scholar] [CrossRef]
  5. McQuarrie, A.D.; Tsai, C.L. Outlier detections in autoregressive models. J. Comput. Graph. Stat. 2003, 12, 450–471. [Google Scholar] [CrossRef]
  6. Karioti, V.; Caroni, C. Simple detection of outlying short time series. Stat. Pap. 2004, 45, 267–278. [Google Scholar] [CrossRef]
  7. Louni, H. Outlier detection in ARMA models. J. Time Ser. Anal. 2008, 29, 1057–1065. [Google Scholar] [CrossRef]
  8. Čampulová, M.; Michálek, J.; Mikuška, P.; Bokal, D. Nonparametric algorithm for identification of outliers in environmental data. J. Chemom. 2018, 32, e2997. [Google Scholar] [CrossRef]
  9. Pan, B.; Chen, M.; Wang, Y. Weighted least absolute deviations estimation for periodic ARMA models. Acta Math. Sin. Engl. Ser. 2015, 31, 1273–1288. [Google Scholar] [CrossRef]
  10. Jiang, Y. An exponential-squared estimator in the autoregressive model with heavy-tailed errors. Stat. Interface 2016, 9, 233–238. [Google Scholar] [CrossRef]
  11. Callens, A.; Wang, Y.G.; Fu, L.; Liquet, B. Robust estimation procedure for autoregressive models with heterogeneity. Environ. Model. Assess. 2021, 26, 313–323. [Google Scholar] [CrossRef]
  12. Chang, L.; Shi, Y. A discussion on the robust vector autoregressive models: Novel evidence from safe haven assets. Ann. Oper. Res. 2024, 339, 1725–1755. [Google Scholar] [CrossRef]
  13. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 1996, 58, 267–288. [Google Scholar] [CrossRef]
  14. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360. [Google Scholar] [CrossRef]
  15. Zou, H. The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429. [Google Scholar] [CrossRef]
  16. Audrino, F.; Camponovo, L. Oracle properties and finite sample inference of the adaptive lasso for time series regression models. arXiv 2013, arXiv:1312.1473. [Google Scholar] [CrossRef]
  17. Songsiri, J. Sparse autoregressive model estimation for learning Granger causality in time series. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 3198–3202. [Google Scholar]
  18. Emmert-Streib, F.; Dehmer, M. High-dimensional LASSO-based computational regression models: Regularization, shrinkage, and selection. Mach. Learn. Knowl. Extr. 2019, 1, 359–383. [Google Scholar] [CrossRef]
  19. Gnanadesikan, R.; Kettenring, J.R. Robust estimates, residuals, and outlier detection with multiresponse data. Biometrics 1972, 28, 81–124. [Google Scholar] [CrossRef]
  20. Donoho, D.L.; Huber, P.J. The notion of breakdown point. In A Festschrift for Erich L. Lehmann; CRC Press: Boca Raton, FL, USA, 1983; pp. 157–184. [Google Scholar]
  21. Amini, M.; Roozbeh, M. Least trimmed squares ridge estimation in partially linear regression models. J. Stat. Comput. Simul. 2016, 86, 2766–2780. [Google Scholar] [CrossRef]
  22. Rousseeuw, P.; Leroy, A. Robust Regression and Outlier Detection; John Wiley & Sons: Hoboken, NJ, USA, 1987. [Google Scholar]
  23. Rousseeuw, P.J.; Croux, C. Alternatives to the median absolute deviation. J. Am. Stat. Assoc. 1993, 88, 1273–1283. [Google Scholar] [CrossRef]
  24. Smirnov, P.O.; Shevlyakov, G.L. Approximation of the QN-estimate of scale with the help of fast M-estimates. Sib. Aerosp. J. 2010, 11, 83–85. [Google Scholar]
  25. Rousseeuw, P.J.; Croux, C. The bias of k-step M-estimators. Stat. Probab. Lett. 1994, 20, 411–420. [Google Scholar] [CrossRef]
  26. Wang, H.; Li, G.; Jiang, G. Robust regression shrinkage and consistent variable selection through the LAD-Lasso. J. Bus. Econ. Stat. 2007, 25, 347–355. [Google Scholar] [CrossRef]
Figure 1. The S&P 500 daily closing price time series plot.
Figure 2. First-differenced series of the S&P 500 daily closing prices.
Figure 3. Leverage vs. standardized residuals of the first-differenced S&P 500 daily closing prices. The vertical dashed line indicates the threshold of 2h̄, and the horizontal dashed lines mark the ±2 standard deviation boundaries.
Figure 4. Sample autocorrelation function of the first-differenced S&P 500 daily closing prices. The blue dotted lines represent the approximate 95% confidence interval.
Figure 5. Sample partial autocorrelation function of the first-differenced S&P 500 daily closing prices. The blue dotted lines represent the approximate 95% confidence interval.
Figure 6. The AIC values for different model configurations of the first-differenced S&P 500 daily closing prices.
Figure 7. USD/CNY exchange rate time series plot.
Figure 8. First-differenced USD/CNY exchange rate.
Figure 9. Leverage vs. standardized residuals of the first-differenced USD/CNY exchange rate. The vertical dashed line indicates the threshold of 2h̄, and the horizontal dashed lines mark the ±2 standard deviation boundaries.
Figure 10. Sample autocorrelation function of the first-differenced USD/CNY exchange rate. The blue dotted lines represent the approximate 95% confidence interval.
Figure 11. Sample partial autocorrelation function of the first-differenced USD/CNY exchange rate. The blue dotted lines represent the approximate 95% confidence interval.
Figure 12. The AIC values of the first-differenced USD/CNY exchange rate.
Table 1. Simulation results for Scenario 1.

n     ε     Method      TP      Size    AE      SE
200   0     OLS         0.500   10.00   0.658   0.802
            LS-LASSO    0.820    5.56   0.615   0.768
            RA-LASSO    0.821    5.49   0.665   0.802
      0.1   OLS         0.500   10.00   1.049   1.017
            LS-LASSO    0.752    4.78   1.088   1.040
            RA-LASSO    0.781    5.33   0.889   0.935
      0.2   OLS         0.500   10.00   1.244   1.112
            LS-LASSO    0.674    5.00   1.213   1.100
            RA-LASSO    0.737    5.05   1.029   1.009
400   0     OLS         0.500   10.00   0.447   0.660
            LS-LASSO    0.853    6.35   0.390   0.614
            RA-LASSO    0.922    5.36   0.452   0.655
      0.1   OLS         0.500   10.00   0.982   0.988
            LS-LASSO    0.767    6.43   0.996   0.996
            RA-LASSO    0.856    5.04   0.826   0.902
      0.2   OLS         0.500   10.00   1.148   1.070
            LS-LASSO    0.673    6.81   1.150   1.071
            RA-LASSO    0.804    4.98   0.992   0.993
Table 2. Simulation results for Scenario 2.

n     ε     Method      TP      Size    AE      SE
200   0     OLS         0.500   10.00   0.654   0.798
            LS-LASSO    0.815    5.81   0.606   0.763
            RA-LASSO    0.807    5.73   0.664   0.801
      0.1   OLS         0.500   10.00   1.297   1.130
            LS-LASSO    0.634    6.10   1.283   1.126
            RA-LASSO    0.771    4.69   0.920   0.946
      0.2   OLS         0.500   10.00   1.423   1.189
            LS-LASSO    0.578    7.04   1.407   1.183
            RA-LASSO    0.735    4.39   1.032   1.009
400   0     OLS         0.500   10.00   0.455   0.668
            LS-LASSO    0.856    6.30   0.423   0.639
            RA-LASSO    0.902    5.20   0.541   0.719
      0.1   OLS         0.500   10.00   1.270   1.111
            LS-LASSO    0.671    7.41   1.276   1.115
            RA-LASSO    0.835    4.29   0.910   0.944
      0.2   OLS         0.500   10.00   1.423   1.190
            LS-LASSO    0.550    8.44   1.420   1.189
            RA-LASSO    0.797    3.93   1.041   1.015
Table 3. Simulation results for Scenario 3.

n     ε     Method      TP      Size    AE      SE
200   0     OLS         0.500   10.00   0.706   0.831
            LS-LASSO    0.716    3.76   0.566   0.748
            RA-LASSO    0.656    3.90   0.585   0.759
      0.1   OLS         0.500   10.00   1.224   1.102
            LS-LASSO    0.741    5.99   0.994   0.992
            RA-LASSO    0.672    3.92   0.661   0.807
      0.2   OLS         0.500   10.00   1.329   1.149
            LS-LASSO    0.704    6.76   1.158   1.072
            RA-LASSO    0.660    5.44   0.885   0.931
400   0     OLS         0.500   10.00   0.522   0.712
            LS-LASSO    0.744    4.56   0.477   0.683
            RA-LASSO    0.698    3.10   0.505   0.708
      0.1   OLS         0.500   10.00   1.065   1.030
            LS-LASSO    0.744    6.98   0.929   0.961
            RA-LASSO    0.732    3.82   0.560   0.744
      0.2   OLS         0.500   10.00   1.214   1.101
            LS-LASSO    0.678    7.90   1.122   1.057
            RA-LASSO    0.725    4.57   0.754   0.864
Table 4. ADF test results of the S&P 500 daily closing prices.

ADF Test         S&P 500    First-Differenced S&P 500
Dickey–Fuller    −1.2007    −6.0134
p-value           0.6109     0.01
Table 5. Coefficient estimates comparison of the first-differenced S&P 500 daily closing prices.

Method      t-1     t-2     t-3     t-4     t-5     t-6     t-7     t-8     t-9     t-10    t-11    t-12
OLS        −0.03    0.19    0.07   −0.13   −0.09   −0.23    0.11    0.07    0.10   −0.18   −0.05   −0.22
LS-LASSO   −0.14    0.02    0.16   −0.13    0.00   −0.26    0.09    0.06    0.17   −0.21   −0.17   −0.30
RA-LASSO   −0.04    0.00    0.00   −0.17    0.00   −0.20    0.07    0.00    0.06   −0.16   −0.05   −0.22
Table 6. MAPE comparison of the first-differenced S&P 500 daily closing prices.

Method      Size    MAPE
OLS         12      2.12
LS-LASSO    11      2.28
RA-LASSO     8      1.89
Table 7. ADF test results of the USD/CNY exchange rate.

ADF Test         Exchange Rate    First-Differenced Exchange Rate
Dickey–Fuller    −0.4449          −7.7716
p-value           0.8908           0.01
Table 8. Coefficient estimates comparison of the first-differenced USD/CNY exchange rate.

Method      t-1      t-2      t-3      t-4      t-5      t-6      t-7      t-8      t-9      t-10
OLS        −0.095   −0.013   −0.095   −0.047    0.184    0.047    0.055    0.373    0.137   −0.187
LS-LASSO   −0.005   −0.069   −0.026    0.079    0.111    0.120   −0.066    0.161   −0.018   −0.152
RA-LASSO    0.000   −0.063   −0.010    0.058    0.101    0.078    0.000    0.157    0.000   −0.132
Table 9. Results for the first-differenced USD/CNY exchange rate.

Method      Size    MAE       MAD
OLS         10      0.0115    0.0150
LS-LASSO    10      0.0114    0.0151
RA-LASSO     7      0.0111    0.0149