Mathematics
  • Article
  • Open Access

22 December 2025

Integrating Deep Learning into Semiparametric Network Vector AutoRegressive Models

1 School of Statistics and Mathematics, Shanghai Lixin University of Accounting and Finance, Shanghai 201209, China
2 Shanghai Municipal Big Data Center, Shanghai 200072, China
3 School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane, QLD 4068, Australia
4 School of Statistics and Data Science, Shanghai University of International Business and Economics, Shanghai 201620, China

Abstract

Network vector autoregressive models play a vital role in multivariate time series analysis. However, previous research on the classic Network vector AutoRegressive (NAR) model is limited by strict assumptions of linearity and of time-invariant node-specific covariates. In this study, we propose a Semiparametric NAR (SNAR) model that broadens existing research by (1) extending the node-specific covariate effects to a nonlinear framework, (2) incorporating high-dimensional time-varying covariates for a more comprehensive analysis, and (3) retaining the interpretability of the autoregressive effects of the NAR model. A deep learning-based method is presented to simultaneously estimate the nonparametric function and the parameters of the SNAR model. We also provide a theoretical proof of the convergence rate of the nonparametric deep neural network estimator to support the linear-to-nonlinear extension and show that the proposed method avoids the curse of dimensionality. Furthermore, we prove the asymptotic normality of the parametric estimators of the autoregressive effects, demonstrating that interpretability is maintained. Experiments on various simulated datasets confirm that the proposed method avoids the curse of dimensionality; for instance, in nonlinear settings, the SNAR model reduces the prediction MSE by approximately 69% compared to the classic NAR model (from 3.44 to 1.06). Furthermore, in a real-world stock return analysis, the SNAR model achieves an MSE of 0.9930, significantly outperforming the NAR baseline (MSE 1.6540) and other state-of-the-art methods.
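For orientation, the classic NAR model expresses each node's current value as a linear function of the average of its network neighbors' lagged values, its own lagged value, and static node-specific covariates. A minimal sketch of how the semiparametric extension described above could be written is given below, where $g(\cdot)$ denotes the unknown nonparametric function estimated by a deep neural network and the covariates $Z_{it}$ are allowed to be high-dimensional and time-varying; the exact specification and notation belong to the full paper, so this display is illustrative only:

$$ Y_{it} = \beta_1 \frac{1}{n_i} \sum_{j=1}^{N} a_{ij} Y_{j,t-1} + \beta_2 Y_{i,t-1} + g(Z_{it}) + \varepsilon_{it}, \qquad i = 1, \dots, N, $$

where $a_{ij}$ indicates an edge from node $i$ to node $j$, $n_i = \sum_{j} a_{ij}$ is the out-degree of node $i$, and the network effect $\beta_1$ and momentum effect $\beta_2$ remain interpretable parametric coefficients, in line with point (3) of the abstract.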
