Article

Dynamic Value at Risk Estimation in Multi-Functional Volterra Time-Series Model (MFVTSM)

1
Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2
Department of Mathematics, College of Science, King Khalid University, Abha 62223, Saudi Arabia
3
Laboratoire AGEIS, Université Grenoble Alpes (France), EA 7407, AGIM Team, UFR SHS, BP. 47, F38040 Grenoble Cedex 09, France
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(8), 1207; https://doi.org/10.3390/sym17081207
Submission received: 31 May 2025 / Revised: 24 June 2025 / Accepted: 14 July 2025 / Published: 29 July 2025
(This article belongs to the Section Mathematics)

Abstract

In this paper, we provide a new algorithm for managing financial risk in portfolios containing multiple high-volatility assets. We assess the variability of volatility with the Volterra model, and we construct an estimator of the Value-at-Risk (VaR) function using quantile regression. Because of its long-memory property, the Volterra model is particularly useful in financial time-series data analysis, and it constitutes a good alternative to the standard approach of Black–Scholes models. From the weighted asymmetric loss function, we construct a new estimator of the VaR function usable in the Multi-Functional Volterra Time-Series Model (MFVTSM). The constructed estimator highlights the multi-functional nature of the Volterra–Gaussian process. Mathematically, we derive the asymptotic consistency of the estimator by making precise the leading term of its convergence rate. Through an empirical experiment, we examine the applicability of the proposed algorithm, and we further demonstrate the effectiveness of the estimator through an application to real financial data.

1. Introduction

In financial mathematics, stochastic integral equations are commonly used to fit memory effects and model the movement of financial assets. These stochastic equations are useful for modeling volatility clustering, rough volatility, and heavy-tail characteristics of financial data. In the present work we aim to evaluate financial risk by modeling historical data as a continuous, multidimensional process sampled from the MFVTSM. This research topic is primarily motivated by the long-memory feature of the Volterra model, which is a critical factor in financial time-series analysis. Although the literature on the multivariate analysis of this model is abundant, its functional path has not been fully explored. For the first development of this topic, we mention Ref. [1]. The cited reference enhances stochastic volatility models by incorporating long-memory kernels, which improves the efficiency of financial modeling. The authors of Ref. [2] use Volterra-type equations to study volatility surfaces. Ref. [3] introduces the rough Heston model to improve derivative pricing; this method employs the Volterra model to fit rough volatility effects. For a deeper discussion of the role of non-Markovian processes in financial markets, we refer to [4]. Such pioneering works highlight the crucial role of the Volterra model in improving pricing precision and volatility calibration, as well as risk management. At this stage, evaluating financial risk through historical data remains a challenging issue for bankers and investors. The VaR function is commonly used to address this issue. The statistical estimation of the VaR function is commonly based on the unconditional quantile function. However, the conditional quantile is more appropriate for monitoring the dynamics of financial transactions. Pioneering investigations of the conditional VaR were conducted by [5], who consider VaR estimation under a conditional autoregressive model.
Since this cited work, significant progress in risk management using conditional (CAVaR) models has been observed. For instance, ref. [6] conducted a comprehensive comparative analysis between traditional VaR and a hybrid method that combines GARCH modeling with extreme value theory. Their work provides valuable insights into the relative performance of these risk measurement techniques. In a related study, ref. [7] investigated the optimal order selection for VaR estimation, offering an important methodological contribution to the specification of VaR models. Many alternative parametric approaches exist for estimating the conditional VaR. We mention the GARCH approach in ref. [8], Extreme Value (EV) theory in [9], and the conditional copula in [10], among others. The estimation of quantile regression is widely considered in mathematical statistics. The pioneering work in the multivariate case was developed by [11], who constructed an estimator using an empirical estimator of the conditional distribution function. Next, ref. [12] derived the asymptotic normality of the nonparametric conditional quantile estimator. For a comprehensive review of the literature on quantile regression, we refer to [13]. Furthermore, the functional version of quantile regression was introduced by [14]. The authors used B-spline smoothing to study a linear quantile regression model with Hilbertian explanatory variables, and they derived the $L^2$-convergence rate of their estimator. Nonparametric smoothing was considered by [15], who proved the almost-complete convergence (a.co.) of a kernel estimator of quantile regression. Ref. [16] proposed a local linear estimation method for functional quantiles by inverting the local linear estimator of the conditional cumulative distribution function. Subsequently, ref. [17] studied partial linear quantile regression, with particular attention to handling incomplete data. As an alternative approach, ref. [18] developed a functional partial quantile estimator using basis expansion techniques. More recently, ref. [19] introduced a robust estimation procedure for scalar-on-function quantile regression models that effectively handles outliers while maintaining prediction reliability. Despite its significance, research on multivariate functional data remains relatively scarce. To the best of our knowledge, only one reference [20] has addressed the estimation of unconditional quantiles in the context of multi-functional statistics.
As an alternative development to the cited works, we aim in this contribution to treat the conditional quantile case under multi-functional data. Under this consideration we construct an estimator and we establish its complete consistency. Moreover, we use this model to provide a new approach to assessing financial risk using a Gaussian Volterra chaotic model. The particularity of the introduced algorithm is the treatment of financial data as a continuous Gaussian Volterra process, which allows it to reflect the nature of financial time-series data. This approach is especially powerful for modeling memory effects and path-dependent market behaviors. Indeed, it is well known that the model’s kernel is adequate to detect the local interactions and long-range dependencies which are very beneficial to describe real-world market characteristics such as rough volatility, cluster effects, and heavy-tailed distributions. In this context the Volterra model has proven useful across various financial applications, from derivative pricing to the calibration of volatility surface and risk management. Additionally, the Volterra model offers a more robust alternative to the standard Black–Scholes framework, which suffers from several practical limitations. The most significant is Black–Scholes’s assumption of constant volatility and interest rates, an overly simplistic view that does not accurately represent real market conditions. Instead, the Volterra model accounts for fluctuating volatility and memory effects, allowing a more adaptable illustration of market dynamics. On the other hand, we have used nonparametric estimation techniques, which provide flexibility in modeling financial dynamics without relying on rigid assumptions. 
Typically, for financial time-series data analysis nonparametric functional data modeling permits the user to fit the dynamic of high-frequency and high-dimensional relationships of the data that traditional multivariate GARCH models fail to incorporate. More precisely, unlike standard multivariate GARCH models, which are based on some specific assumptions (linearity and limited discrete time-point observations), functional statistics offer a flexible approach to model the entire historical trajectory of financial data as continuous curves. Thus, the nonparametric functional approach allows a deeper understanding of time-varying volatility, risk spread, and non-linear time-varying dependencies. Therefore, the principal advantage of the present contribution is the multi-functional treatment offering a robust, data-driven alternative for risk management and portfolio optimization and enhancing the accuracy via the VaR model for more complex markets. Finally, we point out that all these advantages have been highlighted through simulation and real data application after establishing the mathematical consistency of the constructed estimator.
The paper is structured as follows: Section 2 introduces the general framework of the contribution. In Section 3, we present and discuss the main results. Section 4 focuses on the empirical study, which includes both simulated and real data applications. In Section 5, we provide a general conclusion along with some research perspectives for future work. Finally, Appendix A contains the proofs of the supporting results.

2. Methodology

2.1. Multifunctional Data Framework

In this paper we assume that we have a portfolio of $d$ financial assets, each with a trajectory $T_i(\cdot)$ that behaves as a Gaussian Volterra process:
$$T_i(s) = \int_0^s L(t,s)\, dB_i(t), \quad i = 1, \ldots, d, \quad s \in [0,1],$$
where $L$ is a kernel and the $B_i$ are Brownian motions.
The dynamic of the portfolio is formulated as
$$\mathbf{T}(s) = \big( T_1(s),\, T_2(s),\, \ldots,\, T_d(s) \big)^\top.$$
This setting is common in financial mathematics. Its popularity is justified by the long memory of stock market dynamics (see [21]). In this functional framework, we employ the Cameron–Martin space defined as follows:
$$\mathcal{H} = \Big\{ f : f(s) = \int_0^s L(t,s)\, h_f(t)\, dt \ \text{ for } s \in [0,1], \ h_f \in L^2([0,1]) \Big\}.$$
The space $\mathcal{H}$ carries the scalar product $\langle f, g \rangle = \langle h_f, h_g \rangle_{L^2([0,1])}$ and an orthonormal basis
$$f_n(s) = \int_0^s L(t,s)\, h_n(t)\, dt, \quad s \in (0,1),$$
where $(h_n)_n$ is an orthonormal basis of $L^2([0,1])$.
The main objective of this paper is to assess the long-term risk in financial asset movements using historical data. The primary contribution of this work is the exploration of the historical information of multiple returns in a continuous form. Indeed, we sample $N$ multi-functional variables as covariates $(C_k)_{k=1,2,\ldots,N}$ defined as follows:
$$\forall s \in [0,1], \quad C_k^i(s) = T_i\Big( \frac{(k-1)+s}{N} \Big), \quad i = 1, 2, \ldots, d.$$
The financial risk is assessed with respect to the future characteristics of the portfolio's returns. Among the principal characteristics are minimum or maximum values, daily range, variation over fixed periods, closing values, and values at a given time $t_0$, among others. Mathematically, we represent the future characteristics through a function $F_u$. This function defines the interest (response) variables $(S_k)_{k=1,2,\ldots,N}$ as follows:
$$S_k = F_u\big( C_k^1, \ldots, C_k^d \big), \quad k = 1, 2, \ldots, N.$$
Finally, we analyze the financial dynamics using $Z_k = (C_k, S_k)$, $k = 1, 2, \ldots, N$, as multi-functional random variables drawn from the Volterra process defined in (1) and (2).
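To make the sampling scheme concrete, the following Python sketch simulates the $d$ Gaussian Volterra paths by a left-point Riemann discretization of the stochastic integral, using the fractional kernel employed later in Section 4.1, and then splits each path into the $N$ functional covariates $C_k^i$. The function names and discretization choices are our own illustration, not the authors' code:

```python
import numpy as np

def simulate_volterra_paths(d=2, n_grid=200, h=0.6, seed=0):
    """Simulate d Gaussian Volterra paths T_i(s) = int_0^s L(t, s) dB_i(t)
    with the fractional kernel |t - s|^(h - 1/2), via a left-point Riemann
    sum of the stochastic integral (an illustrative discretization)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_grid
    t = np.arange(n_grid) * dt                            # integration grid
    dB = rng.normal(scale=np.sqrt(dt), size=(d, n_grid))  # Brownian increments
    T = np.zeros((d, n_grid))
    for j in range(1, n_grid):
        s = j * dt
        kern = (s - t[:j]) ** (h - 0.5)                   # kernel on t < s
        T[:, j] = dB[:, :j] @ kern
    return T

def to_functional_sample(T, N):
    """Split each path on [0, 1] into N sub-curves C_k^i, k = 1, ..., N,
    mimicking the sampling scheme C_k^i(s) = T_i(((k - 1) + s) / N)."""
    d, n_grid = T.shape
    m = n_grid // N                                       # points per curve
    return T[:, :N * m].reshape(d, N, m).transpose(1, 0, 2)   # (N, d, m)
```

For $h > 0.5$ the kernel is continuous and the plain Riemann sum behaves well; for rough paths ($h < 0.5$) the singularity at $t = s$ makes finer grids or more careful schemes preferable.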

2.2. Model and Estimation

The present contribution focuses on the estimation of the VaR function as a solution of the following optimization problem:
$$QP_p(V) = \arg\min_{y \in \mathbb{R}} \Big\{ \mathbb{E}\big[ p\,(S-y)\, 1\!{\rm I}_{\{S-y>0\}} \,\big|\, C=V \big] + \mathbb{E}\big[ (1-p)\,(y-S)\, 1\!{\rm I}_{\{S-y\le 0\}} \,\big|\, C=V \big] \Big\}, \quad p \in (0,1), \quad V = (V^1, \ldots, V^d) \in \mathcal{H}^d.$$
It is important to note that the scoring function in this optimization problem is convex with limit $+\infty$ at both $-\infty$ and $+\infty$ (with probability one), so it has at least one minimizing value over $\mathbb{R}$. It should also be noted that this scoring function is piecewise linear with monotonically increasing slope and may include a flat segment (where the slope is zero) corresponding to its minimum value. In this situation, we take $QP_p(V)$ as the smallest minimizing value. Otherwise, $QP_p(V)$ is unique and locates the turning point from negative to positive slope. Furthermore, it is shown in ref. [22] that $QP_p(V)$ is the root of
$$F(y, V) - p = 0,$$
where
$$F(y, V) = \mathbb{E}\big[ 1\!{\rm I}_{\{S-y\le 0\}} \,\big|\, C = V \big].$$
Now, the quantity $QP_p(V)$ is estimated by $\widehat{QP}_p(V)$, the zero of
$$\widehat{F}(y, V) - p = 0,$$
with
$$\widehat{F}(y, V) = \frac{ \sum_{k=1}^{N} \prod_{i=1}^{d} K\big( a^{-1} \| V^i - C_k^i \| \big)\, 1\!{\rm I}_{\{S_k - y \le 0\}} }{ \sum_{k=1}^{N} \prod_{i=1}^{d} K\big( a^{-1} \| V^i - C_k^i \| \big) },$$
where $K$ is a given measurable kernel, $a := a_N$ is a sequence of positive bandwidths, and $\|\cdot\|$ is the $L^2([0,1])$-norm.
Of course, the applicability of the risk detector $\widehat{QP}_p(V)$ necessitates a theoretical foundation establishing the conditions for its optimality.
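In practice, the estimator can be computed by forming the product-kernel weights, building the weighted empirical conditional CDF, and taking its generalized inverse at level $p$. A minimal Python sketch follows; the variable names, the quadratic kernel, and the Riemann approximation of the $L^2$ norm are our own assumptions for illustration, not the authors' code:

```python
import numpy as np

def var_estimate(V, C, S, p=0.05, a=1.0):
    """Kernel estimator of the conditional VaR QP_p(V): invert the weighted
    empirical CDF Fhat(y, V) at level p.  V: (d, m) query curves,
    C: (N, d, m) covariate curves, S: (N,) responses."""
    # L2([0,1]) norms ||V^i - C_k^i||, approximated by a Riemann sum
    dists = np.sqrt(np.mean((C - V[None]) ** 2, axis=2))      # (N, d)
    # quadratic kernel supported on [0, 1]
    u = dists / a
    Kvals = np.where(u <= 1.0, 1.0 - u ** 2, 0.0)
    w = Kvals.prod(axis=1)                                    # product over the d assets
    if w.sum() == 0.0:
        raise ValueError("empty neighbourhood: increase the bandwidth a")
    w = w / w.sum()
    # Fhat(., V) is the w-weighted CDF of the S_k; its level-p generalized
    # inverse is the smallest order statistic with cumulative weight >= p
    order = np.argsort(S)
    cumw = np.cumsum(w[order])
    return float(S[order][np.searchsorted(cumw, p)])
```

Taking the smallest order statistic whose cumulative weight reaches $p$ matches the convention of selecting the smallest minimizing value when the scoring function has a flat segment.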

3. The Mathematical Foundation of the Estimation

3.1. Assumptions

Hereafter, for $V \in \mathcal{H}^d$, we denote by $N_V$ a neighborhood of $V$, and $c_1, c_2, c, \ldots$ are positive constants. Furthermore, we set
$$B(V, a) = \Big\{ V' \in \mathcal{H}^d : \sum_{i=1}^{d} \| V^i - V'^i \| < a \Big\}.$$
We need the following assumptions:
(As1)
The functions $F(\cdot, V)$ are differentiable on $\mathbb{R}$ and satisfy the following:
$$\exists\, h > 0, \ \forall\, y \in \big[ QP_p(V) - h,\, QP_p(V) + h \big], \ \forall\, V_1, V_2 \in N_V,$$
$$| F(y, V_1) - F(y, V_2) | \le c_1 \Big( \sum_{i=1}^{d} \| V_1^i - V_2^i \| \Big)^{b_1}, \quad \text{with} \quad \frac{\partial F(QP_p(V), V)}{\partial y} > 0.$$
(As2)
The function $L(\cdot,\cdot)$ is a Hölder-continuous kernel such that:
(i) the map $t \mapsto L(t,s)$ is differentiable in $t$, with square-integrable derivative such that
$$\max_{0 \le s < t \le 1} \Big| \frac{\partial L(t,s)}{\partial t} \Big| \, (t-s)^{3/2} < \infty;$$
(ii) the map $t \mapsto \int_0^t L(t,s)\, ds$ has bounded variation.
(As3)
For all $a > 0$ and some $\nu > 1/2$, we suppose that $\mathbb{P}\big( C \in B(V, a) \big) = \zeta(d, V, a) > 0$ and, for all $i \ne j$,
$$\sup_{i \ne j} \mathbb{P}\big( (C_i, C_j) \in B(V, a) \times B(V, a) \big) = O\big( \zeta^{(\nu+1)/\nu}(d, V, a) \big), \quad \text{and} \quad \eta_\xi = e^{-\nu \xi}.$$
(As4)
There exist $c_1, c_2 > 0$ such that
$$\frac{(\log N)^5}{N} \le c_1\, \zeta(d, V, a) \le \frac{1}{(\log N)^{1+c_2}}.$$
(As5)
The function $K(\cdot)$ has support $[0,1]$ and satisfies
$$0 < c_4 \le K(\cdot) \le c_5 < \infty.$$

Some Comments

The assumed conditions are not restrictive. They allow us to address multiple aspects of the subject (model structure, data characteristics, correlation, and convergence rate). In this context, the imposed assumptions are not overly restrictive, given the complexity of the proposed functional time-series model and the strength of the Borel–Cantelli (BC) consistency result. We point out that not all of these assumptions are strictly necessary for financial data applications; rather, they are required to explore different dimensions of the topic. Specifically, each aspect of the subject is examined through a distinct assumption. For instance, (As1) defines the nonparametric model, while (As2) characterizes the Gaussian Volterra model and is used to identify the orthonormality of the basis functions. The functionality of the input variable $C$ is controlled by (As3), which relates the spectral measure to the Hilbert structure of the input variable as well as its local dependency. The remaining assumptions are technical and cover the kernel function and the bandwidth sequence, both of which are crucial in determining the convergence rate of the kernel estimator $\widehat{QP}_p(V)$ with respect to the Borel–Cantelli mode. In fact, an alternative form of consistency, weaker than BC consistency, can be established for the estimator without these assumptions. Specifically, by computing the expectation and variance of the estimator, we can prove consistency in probability. Although consistency in probability is weaker than Borel–Cantelli consistency, it is sufficient to support the practical use of the estimator. However, since this paper seeks to highlight both the theoretical generality of the model and its practical feasibility, we have opted to focus on the stronger consistency result, from which the weaker form naturally follows.

3.2. Asymptotic Result

Now, we can present our main results concerning the asymptotic properties of the estimator $\widehat{QP}_p(V)$ under the above assumptions:
Theorem 1.
Under (As1)–(As5), we have
$$\widehat{QP}_p(V) - QP_p(V) = O_{a.co.}\Big( a^{b_1} + \sqrt{ \frac{\log N}{N\, \zeta(d, V, a)} } \Big).$$
Proof. 
Since $F(\cdot, V)$ is an increasing function, we have
$$\sum_{N \ge 1} \mathbb{P}\big( | \widehat{QP}_p(V) - QP_p(V) | > \epsilon \big) \le \sum_{N \ge 1} \mathbb{P}\Big( \big| \widehat{F}\big( QP_p(V) - \epsilon, V \big) - F\big( QP_p(V) - \epsilon, V \big) \big| \ge \big| F\big( QP_p(V), V \big) - F\big( QP_p(V) - \epsilon, V \big) \big| \Big)$$
$$+ \sum_{N \ge 1} \mathbb{P}\Big( \big| \widehat{F}\big( QP_p(V) + \epsilon, V \big) - F\big( QP_p(V) + \epsilon, V \big) \big| \ge \big| F\big( QP_p(V), V \big) - F\big( QP_p(V) + \epsilon, V \big) \big| \Big).$$
The function $F(\cdot, V)$ is of class $C^1$ and such that $\frac{\partial F(QP_p(V), V)}{\partial y} > 0$. Then there exists $\delta > 0$ such that
$$\inf_{y \in [QP_p(V) - \delta,\, QP_p(V) + \delta]} \frac{\partial F(y, V)}{\partial y} \ge C > 0.$$
Therefore
$$\sum_{N \ge 1} \mathbb{P}\big( | \widehat{QP}_p(V) - QP_p(V) | > \epsilon \big) \le \sum_{N} \mathbb{P}\Big( \sup_{y \in [QP_p(V) - \delta,\, QP_p(V) + \delta]} \big| \widehat{F}(y, V) - F(y, V) \big| \ge C \epsilon \Big).$$
Finally, it suffices to show that, for some $\epsilon_0 > 0$,
$$\sum_{N} \mathbb{P}\Big( \sup_{y \in [QP_p(V) - \delta,\, QP_p(V) + \delta]} \big| \widehat{F}(y, V) - F(y, V) \big| \ge \epsilon_0 \Big( a^{b_1} + \sqrt{ \frac{\log N}{N\, \zeta(d, V, a)} } \Big) \Big) < \infty.$$
The latter is based on the decomposition
$$\widehat{F}(y, V) - F(y, V) = \frac{ \widetilde{F}_1(y, V) }{ \widetilde{F}_2(V) } - F(y, V) = \frac{1}{ \widetilde{F}_2(V) } \Big( \widetilde{F}_1(y, V) - F(y, V) \Big) + \frac{ F(y, V) }{ \widetilde{F}_2(V) } \Big( 1 - \widetilde{F}_2(V) \Big),$$
where
$$\widetilde{F}_1(y, V) = \frac{1}{ N\, \mathbb{E}\big[ \prod_{i=1}^{d} K\big( a^{-1} \| V^i - C_1^i \| \big) \big] } \sum_{k=1}^{N} \prod_{i=1}^{d} K\big( a^{-1} \| V^i - C_k^i \| \big)\, 1\!{\rm I}_{\{S_k - y \le 0\}},$$
and
$$\widetilde{F}_2(V) = \frac{1}{ N\, \mathbb{E}\big[ \prod_{i=1}^{d} K\big( a^{-1} \| V^i - C_1^i \| \big) \big] } \sum_{k=1}^{N} \prod_{i=1}^{d} K\big( a^{-1} \| V^i - C_k^i \| \big).$$
Consequently, the proof of the main result is based on the following intermediate results. □
Lemma 1.
Under Assumptions (As2)–(As5), we have
$$\sup_{y \in [QP_p(V) - \delta,\, QP_p(V) + \delta]} \big| \widetilde{F}_1(y, V) - \mathbb{E}\widetilde{F}_1(y, V) \big| = O_{a.co.}\Big( \sqrt{ \frac{\log N}{N\, \zeta(d, V, a)} } \Big),$$
and
$$\big| \widetilde{F}_2(V) - \mathbb{E}\widetilde{F}_2(V) \big| = O_{a.co.}\Big( \sqrt{ \frac{\log N}{N\, \zeta(d, V, a)} } \Big).$$
Lemma 2.
Assume that the hypotheses (As2), (As4), and (As5) are fulfilled; then
$$\big| \mathbb{E}\widetilde{F}_1(y, V) - F(y, V) \big| = O\big( a^{b_1} \big).$$
Lemma 3.
Under the hypotheses of Lemma 1, we have
$$\exists\, \epsilon_0 > 0 \ \text{such that} \ \sum_{N \ge 1} \mathbb{P}\big( \widetilde{F}_2(V) < \epsilon_0 \big) < \infty.$$

4. Empirical Analysis

4.1. Simulated Financial Time-Series Analysis

The main purpose of this section is to evaluate the applicability of the new estimator $\widehat{QP}_p(V)$ using simulated financial time series. Therefore, we assume control of the return of a portfolio with $d$ financial assets and examine the performance of $\widehat{QP}_p(V)$ as a risk detector for the average of the daily variation of the $d$ financial assets. For this simulation experiment we draw the functional covariate from the sampling process (1). More precisely, we simulate a Volterra process with $d = 2$ using the R package Sim.DiffProc (R version 4.3.1) through the routine st.int. In this routine we employ the fractional kernel with Hurst parameter $h$, defined by
$$L(t, s) = (t - s)^{h - 0.5}.$$
The different components of the multi-functional explanatory variables $C_k(s)$ are plotted in Figure 1, Figure 2 and Figure 3.
As mentioned above, the interest variables $S_k$ are chosen as the average of the variation of the components of the functional variables $C_k(s)$. We point out that the primary step in the applicability of the estimator $\widehat{QP}_p(V)$ lies in the determination of its parameters. In this empirical analysis we use the cross-validation rule of [15,23], which selects the smoothing parameter $a$ locally using the k-nearest-neighbors approach as follows:
$$a_{opt} = \arg\min_{a \in H} \frac{1}{n} \sum_{k=1}^{n} \Big( S_k - \widehat{QP}_{0.5}(C_k) \Big)^2,$$
where
$$H = \Big\{ a > 0 : \sum_{i=1}^{n} 1\!{\rm I}_{B(\theta, a)}(C_i) = k \Big\},$$
with $k \in \{5, 15, 25, \ldots, 0.5n\}$ and $\theta$ the curve at which the estimator is evaluated. Furthermore, we have used the quadratic kernel on $(0, 1)$ and employed the PCA metric, which is more adequate for this kind of functional input (see the curve shapes in Figure 1, Figure 2 and Figure 3). Finally, we examine the feasibility of the estimator $\widehat{QP}_p(\cdot)$ by comparing it to the parametric one obtained by the routine CVaR in the R package PerformanceAnalytics. The performance of both risk models is tested via backtesting using
$$ALS(p) = \frac{1}{N} \sum_{k=1}^{N} \rho_p\Big( S_k - \widehat{QP}_p(C_k) \Big),$$
where $\rho_p(t) = t \big( p - 1\!{\rm I}_{\{t \le 0\}} \big)$ is the usual check function.
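The local kNN bandwidth rule and the backtesting score can be sketched in Python as follows. In this hypothetical illustration (not the authors' R code), the leave-one-out criterion uses the sample median of the k nearest responses as a simplified stand-in for the median estimator, and $\rho_p$ is taken to be the standard check (pinball) function:

```python
import numpy as np

def knn_bandwidth(dist_to_query, k):
    """Smallest radius a such that the ball B(theta, a) around the query
    curve contains k of the covariate curves."""
    return float(np.partition(dist_to_query, k - 1)[k - 1])

def select_k(D, S, k_grid):
    """Leave-one-out selection of k: predict each S_i by the median of the
    responses of its k nearest curves (a simplified proxy for the p = 0.5
    estimator) and keep the k minimizing the squared prediction error.
    D: (n, n) matrix of distances between covariate curves."""
    n = D.shape[0]
    best_k, best_err = k_grid[0], np.inf
    for k in k_grid:
        err = 0.0
        for i in range(n):
            idx = np.argsort(D[i])[1:k + 1]   # k nearest, excluding i itself
            err += (S[i] - np.median(S[idx])) ** 2
        if err < best_err:
            best_k, best_err = k, err
    return best_k

def als(S, Q, p):
    """Backtesting score ALS(p) = (1/N) sum_k rho_p(S_k - Q_k), with the
    check function rho_p(t) = t * (p - 1{t <= 0}); smaller is better."""
    t = np.asarray(S, dtype=float) - np.asarray(Q, dtype=float)
    return float(np.mean(t * (p - (t <= 0))))
```

The chosen k is converted to a local bandwidth at each query curve via `knn_bandwidth`, which mirrors the ball-counting definition of the set $H$ above.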
The results obtained are summarized in Table 1, which presents the values of $ALS(p)$ for different sample sizes ($N = 50, 150, 250$), three probability levels ($p = 0.1, 0.05, 0.01$), and three distinct values of the Hurst parameter $h \in (0,1)$ (small, medium, and strong dependence): $h = 0.2, 0.6, 0.9$.
The obtained results illustrate the easy implementation of the estimator $\widehat{QP}_p(V)$. The behavior of $\widehat{QP}_p(V)$ is significantly influenced by the Hurst parameter $h$, which plays a crucial role in financial time-series analysis. Specifically, it is well established that the estimation error varies with different values of $h$. Lower values of $h$ are associated with rough volatility and yield errors ranging over $(0.64, 1.89)$, whereas higher values correspond to long-memory processes and give smaller errors, within $(0.18, 1.23)$. This result confirms the trend-reinforcing behavior of the Hurst parameter. Furthermore, the empirical findings demonstrate a strong alignment with the theoretical predictions for $\widehat{QP}_p(V)$. In that sense, the performance of the estimator is profoundly affected by the underlying characteristics of the MFVTSM process.

4.2. Real-World Financial Time Series

One of the most challenging issues in financial risk management is determining suitable decision rules that detect extreme losses or extreme gains. Based on the definition of quantile regression estimation, $\widehat{QP}_p(V)$ appears to be a promising solution to this problem. In fact, $\widehat{QP}_p(V)$ emphasizes analyzing financial time-series data in a functional nonparametric manner. In particular, functional estimation allows us to incorporate the functional aspects of financial time series. The particularity of this contribution lies in its consideration of multidimensional data, enabling us to assess the risk of multiple assets simultaneously. For this study, we assume that $d = 2$ and define $C_k$ as the daily peak values of two precious metals: Platinum (XPT) and Palladium (XPD). The data are available on the website [https://stooq.com/db/h/] (accessed on 28 March 2025). We analyze the daily dynamics of these metals during 2024 (from 1 January 2024 to 31 December 2024). The data for the two metals are displayed in Figure 4.
Usually we are interested in the log-return, defined by
$$z(s) := 100 \big( \log C(s) - \log C(s-1) \big),$$
and we forecast the average variation of the return given the process $z(s)$ for $s \in [0, T)$. It is evident that the modified financial time series $z(s)$ has a zero mean and shows considerable volatility. Before treating this financial time series as functional data, we begin by replicating the Volterra process and create a functional sample using the sampling method mentioned in Section 2.1. We divide $z(s)$ into $N = 220$ curves, each corresponding to one month. While our goal is to predict $S$, the average of the monthly variations of the two metals, we then compare the effectiveness of the proposed method against traditional risk measures such as $VaR(V)$, obtained by the routine CVaR in the R package PerformanceAnalytics. Furthermore, for the practical determination of $\widehat{QP}_p$, we use the quadratic kernel on $[0, 1]$ and the Principal Component Analysis (PCA)-based metric (refer to [15] for its definition). We choose the optimal smoothing parameter $a$ through the local k-nearest-neighbors approach, using the same rule as in the previous section. We compare both estimators by plotting in Figure 5 and Figure 6 the two estimators $\widehat{QP}_p$ (Method 1) and CVaR (Method 2) (the curves in red) against the true values of the process over the last three months. This reveals violation cases when the process $S(t)$ exceeds the estimators.
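The return transformation and the exceedance-based comparison used in this section can be sketched with two small helpers (illustrative functions; the names are our own):

```python
import numpy as np

def log_returns(prices):
    """Log-returns z(s) = 100 * (log C(s) - log C(s - 1))."""
    lp = np.log(np.asarray(prices, dtype=float))
    return 100.0 * np.diff(lp)

def exceedance_rate(S_true, Q_hat):
    """Fraction of test periods where the realized value exceeds the
    estimated risk level; compared against the nominal threshold p."""
    return float(np.mean(np.asarray(S_true) > np.asarray(Q_hat)))
```

A well-calibrated level-$p$ risk estimator should produce an exceedance rate close to $p$; rates far above $p$ signal that the estimator underestimates the risk.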
The obtained results demonstrate that Method 1 ($\widehat{QP}_p$) performs better in detecting financial risk than its competitor, Method 2 (CVaR). This conclusion is confirmed by the frequency of exceedance (represented by the red curves), which remains consistently closer to the nominal threshold $p$. More precisely, we examine the percentage of cases where the true curve exceeds $\hat{\theta}(t)$, with $\hat{\theta}$ representing either CVaR or $\widehat{QP}_p$. For $p = 0.1$, the exceedance rate is approximately 0.12 for $\widehat{QP}_p$ compared to 0.18 for CVaR. Similarly, when $p = 0.05$, the observed rates are 0.049 for $\widehat{QP}_p$ versus 0.058 for CVaR. These results consistently favor Method 1 across different threshold levels.
In the second illustration we examine the impact of the number of assets on the accuracy of $\widehat{QP}_p$ in detecting financial risk. For this second purpose, we consider the daily prices of a portfolio containing high values of a certain number of cryptocurrencies: Bitcoin, Biconomy, Binance Coin, BinaryX, and Bora. The data are available on the same website https://stooq.com/db/h/ (accessed on 8 June 2025). It is clear that this portfolio contains assets from different platforms. Typically, Bitcoin is the first and most accessible digital currency, characterized by its rapid expansion and high volatility, while the Biconomy cryptocurrency is based on simplifying Web3 interactions and cross-chain transfers. Binance Coin is a prominent cryptocurrency initially launched by the Binance exchange in 2017. BinaryX (BNX) is a cryptocurrency platform known for its unpredictable price, with increases and decreases that defy general market trends. Finally, BORA is a cryptocurrency associated with the BORA platform, which utilizes the Ethereum blockchain to enhance transactions. The feasibility of the model $\widehat{QP}_p$ is assessed by computing the percentage of exceedance points relative to the estimated quantity $\widehat{QP}_p$. To ensure robustness, we repeatedly split the dataset into learning and testing subsets. From a technical point of view, this backtesting measure helps investors assess the reliability of their risk models under conditions of market volatility. Specifically, we repeat the splitting process between learning and testing samples (60 times) and calculate the exceedance rate. Figure 7 and Figure 8 present the computational results obtained for two distinct probability thresholds ($p = 0.1$ and $p = 0.05$), considering two portfolio configurations: a five-asset portfolio and a specialized portfolio comprising Bitcoin and Biconomy.
Unsurprisingly, the accuracy of risk estimation exhibits a strong dependence on the number of components in the MFVTSM. More precisely, the precision of the estimator $\widehat{QP}_p$ deteriorates as the portfolio size increases. This relationship is quantitatively verified through the computation of the Average Exceedance Percentage (AEP). Our empirical analysis reveals that for a probability threshold of $p = 0.1$, the AEP increases from approximately 0.13 for portfolios containing two assets to 0.18 for portfolios comprising five assets. Similarly, when considering a more stringent threshold of $p = 0.05$, we observe an AEP of 0.049 for two-asset portfolios compared to 0.067 for five-asset portfolios. These findings highlight the significant impact of the dimensionality parameter $d$ (the number of regressor components, i.e., the number of assets in the portfolio) on the performance of the MFVTSM.

5. Conclusions

Motivated by the necessity of creating a modern algorithm for instant financial risk management, we have introduced innovative statistical methods for high-frequency data observed every minute. This novel approach serves as an alternative to traditional models, specifically those based on the multivariate GARCH model. Our proposed methods offer more insightful information than the conventional model, primarily because the multivariate GARCH model relies on specific assumptions about the data distribution that are typically unmet and fails to accommodate high-frequency data. Recall that, with advances in technology, digital financial risk management is becoming essential, necessitating the modernization of traditional models. In this context, our empirical analysis has shown that the multi-functional VaR model outperforms the common approach. Despite the significant practical impact of these results, they also pave the way for future developments. For instance, comparing semiparametric estimation with nonparametric functional techniques will be crucial moving forward, notably partial linear estimation using techniques similar to quantile regression. In particular, partial linear estimation is a good alternative, allowing the user to enhance the accuracy and robustness of quantile regression by combining parametric and nonparametric components. It would also be beneficial to explore alternative estimators for quantile regression using a k-nearest-neighbors (kNN) approach or treating more complicated structures.

Author Contributions

The authors contributed approximately equally to this work. Formal analysis, F.A.A.; Validation, M.B.A.; Writing—review & editing, A.L. and M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R515), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, and by the Deanship of Research and Graduate Studies at King Khalid University through the Small Research Project under grant number RGP1/41/46.

Data Availability Statement

The data used in this study are available through the link: https://stooq.com/db/h/ (accessed on 8 March 2024).

Acknowledgments

The authors thank and extend their appreciation to the funders of this work: This work was supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R515), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, and the Deanship of Research and Graduate Studies at King Khalid University for funding this work through Small Research Project under grant number RGP1/41/46.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The proofs of the intermediate results are based on some covariance inequalities stated in the following lemma.
Lemma A1 ([24]).
If the MFVTSM $(C_k)_k$ has a kernel $L(\cdot,\cdot)$ that satisfies (As2), then for all bounded Lipschitz-continuous functions $F$ and $G$ we have
$$\mathrm{Cov}\Big( F\big( Z_k,\, k \in K_1 \big),\, G\big( Z_k,\, k \in K_2 \big) \Big) \le C\, \# K_1\, \# K_2\, \mathrm{Lip}(F)\, \mathrm{Lip}(G)\, \eta_\xi,$$
with
$$\eta_\xi = \max_{k \in K_1} \sum_{\substack{k' \in K_2 \\ |k - k'| \ge \xi}} \sum_{l} \sum_{l'} \Big( \sum_{i=1}^{d} \sum_{j=1}^{d} \big| \mathrm{Cov}\big( C_{k,l}^i,\, C_{k',l'}^j \big) \big| + \big| \mathrm{Cov}\big( S_k,\, C_{k',l'}^j \big) \big| + \big| \mathrm{Cov}\big( C_{k,l}^i,\, S_{k'} \big) \big| + \big| \mathrm{Cov}\big( S_k,\, S_{k'} \big) \big| \Big),$$
for $\xi = d(K_1, K_2)$.
Proof of Lemma 1.
The compactness of $[QP_p(V) - \delta,\, QP_p(V) + \delta]$ implies that
$$[QP_p(V) - \delta,\, QP_p(V) + \delta] \subset \bigcup_{j=1}^{d_N} \big( t_j - l_N,\, t_j + l_N \big),$$
with $l_N = N^{-1/2}$ and $d_N = O\big( N^{1/2} \big)$. Since
$$\mathbb{E}\widetilde{F}_1(t_j - l_N, V) \le \sup_{y \in (t_j - l_N,\, t_j + l_N)} \mathbb{E}\widetilde{F}_1(y, V) \le \mathbb{E}\widetilde{F}_1(t_j + l_N, V)$$
and
$$\widetilde{F}_1(t_j - l_N, V) \le \sup_{y \in (t_j - l_N,\, t_j + l_N)} \widetilde{F}_1(y, V) \le \widetilde{F}_1(t_j + l_N, V),$$
it follows that
$$\sup_{y \in [QP_p(V) - \delta,\, QP_p(V) + \delta]} \big| \widetilde{F}_1(y, V) - \mathbb{E}\widetilde{F}_1(y, V) \big| \le \max_{1 \le j \le d_N} \max_{z \in \{t_j - l_N,\, t_j + l_N\}} \big| \widetilde{F}_1(z, V) - \mathbb{E}\widetilde{F}_1(z, V) \big| + 2 C_2\, l_N.$$
By (As4),
$$l_N = o\Big( \Big( \frac{\log N}{N\, \zeta(d, V, a)} \Big)^{1/2} \Big).$$
Thus, it suffices to assess
$$2\, d_N \max_{1 \le j \le d_N} \max_{z \in \{t_j - l_N,\, t_j + l_N\}} \mathbb{P}\Big( \big| \widetilde{F}_1(z, V) - \mathbb{E}\widetilde{F}_1(z, V) \big| > \epsilon_0 \sqrt{ \frac{\log N}{N\, \zeta(d, V, a)} } \Big).$$
Indeed, for all $y\in G_N$ we write
$$\mathbb{P}\Big(\big|\tilde F_1(y,V)-\mathbb{E}\tilde F_1(y,V)\big|>\varepsilon\Big)\ =\ \mathbb{P}\left(\left|\sum_{k=1}^{N}\Psi_k\right|>\varepsilon\right),$$
where
$$\Psi_k = \frac{1}{N\,\mathbb{E}\big[\prod_{i=1}^{d}K\big(a^{-1}\|V^i-C_1^i\|\big)\big]}\;\chi(C_k,S_k),$$
with
$$\chi(u,w) = Y(w)\prod_{i=1}^{d}K\big(a^{-1}\|V^i-u^i\|\big) - \mathbb{E}\left[Y(S_1)\prod_{i=1}^{d}K\big(a^{-1}\|V^i-C_1^i\|\big)\right],\qquad u\in\mathcal{H},\ w\in\mathbb{R},$$
and $Y(w) = \mathbb{1}_{\{w-y\le 0\}}$. Observe that
$$\|\chi\|_\infty\ \le\ 2\,C\,\|K\|_\infty^{d},\qquad \mathrm{Lip}(\chi)\ \le\ C\big(a^{-d}\,\mathrm{Lip}(K)+\|K\|_\infty^{d}\big)\ \le\ C\,a^{-d}\,\mathrm{Lip}(K).$$
We begin by evaluating $\mathrm{Var}\big(\sum_{k=1}^{N}\Psi_k\big)$ and $\mathrm{Cov}\big(\Psi_{u_1}\cdots\Psi_{u_s},\,\Psi_{v_1}\cdots\Psi_{v_t}\big)$, for all $(u_1,\ldots,u_s)\in I_N^s$ and $(v_1,\ldots,v_t)\in I_N^t$.
For the variance,
$$\mathrm{Var}\left(\sum_{i=1}^{N}\Psi_i\right) = \sum_{i=1}^{N}\sum_{j=1}^{N}\mathrm{Cov}(\Psi_i,\Psi_j) = N\,\mathrm{Var}(\Psi_1) + \sum_{i=1}^{N}\sum_{\substack{j=1\\ j\ne i}}^{N}\mathrm{Cov}(\Psi_i,\Psi_j).$$
Evidently,
$$\mathbb{E}\,\Psi_i^2\ \le\ \frac{C}{\big(N\,\zeta(d,V,a)\big)^2}\ \mathbb{E}\left[\prod_{i=1}^{d}K^2\big(a^{-1}\|V^i-C_1^i\|\big)\right].$$
As
$$\mathbb{E}\left[\prod_{i=1}^{d}K^j\big(a^{-1}\|V^i-C_1^i\|\big)\right] = O\big(\zeta(d,V,a)\big)\quad\text{for } j=1,2,$$
then
$$\mathrm{Var}(\Psi_1) = O\left(\frac{1}{N^2\,\zeta(d,V,a)}\right).$$
Concerning $\sum_{i\ne j}\mathrm{Cov}(\Psi_i,\Psi_j)$, we have
$$\sum_{i=1}^{N}\sum_{\substack{j=1\\ j\ne i}}^{N}\mathrm{Cov}(\Psi_i,\Psi_j) = \sum_{i=1}^{N}\sum_{\substack{j=1\\ 0<|i-j|\le m_N}}^{N}\mathrm{Cov}(\Psi_i,\Psi_j) + \sum_{i=1}^{N}\sum_{\substack{j=1\\ |i-j|>m_N}}^{N}\mathrm{Cov}(\Psi_i,\Psi_j) =: T_I + T_{II},$$
where $(m_N)$ is a sequence of integers that tends to infinity as $N\to\infty$.
  • Combining (As4) and (As5) leads to
$$\mathrm{Cov}(\Psi_i,\Psi_j)\ \le\ \frac{C}{\big(N\,\zeta(d,V,a)\big)^2}\Big(\zeta^{(\nu+1)/\nu}(d,V,a) + \zeta^{2}(d,V,a)\Big).$$
Consequently,
$$T_I\ \le\ C\,N\,m_N\,\frac{\zeta^{(\nu+1)/\nu}(d,V,a)}{\big(N\,\zeta(d,V,a)\big)^2}.$$
  • Lemma A1 implies
$$T_{II}\ \le\ C\,N\,a^{-d}\,\mathrm{Lip}(K)\,\frac{e^{-\nu m_N}}{\big(N\,\zeta(d,V,a)\big)^2}.$$
Therefore,
$$\sum_{i=1}^{N}\sum_{\substack{j=1\\ j\ne i}}^{N}\mathrm{Cov}(\Psi_i,\Psi_j)\ \le\ \frac{C}{\big(N\,\zeta(d,V,a)\big)^2}\Big(N\,m_N\,\zeta^{(\nu+1)/\nu}(d,V,a) + N\,a^{-d}\,\mathrm{Lip}(K)\,e^{-\nu m_N}\Big).$$
For $m_N = \nu^{-1}\log\!\left(\dfrac{a^{-d}\,\mathrm{Lip}(K)}{\nu\,\zeta^{(\nu+1)/\nu}(d,V,a)}\right)$, we have
$$N\,\zeta(d,V,a)\sum_{i=1}^{N}\sum_{\substack{j=1\\ j\ne i}}^{N}\mathrm{Cov}(\Psi_i,\Psi_j)\ \longrightarrow\ 0\quad\text{as } N\to\infty.$$
Finally,
$$\mathrm{Var}\left(\sum_{i=1}^{N}\Psi_i\right) = O\left(\frac{1}{N\,\zeta(d,V,a)}\right).$$
Next, we bound $\mathrm{Cov}\big(\Psi_{u_1}\cdots\Psi_{u_s},\,\Psi_{v_1}\cdots\Psi_{v_t}\big)$, using Lemma A1 for $v_1 > u_s$:
$$\big|\mathrm{Cov}\big(\Psi_{u_1}\cdots\Psi_{u_s},\,\Psi_{v_1}\cdots\Psi_{v_t}\big)\big|\ \le\ s\,t\,a^{-d}\,\mathrm{Lip}(K)\left(\frac{C}{N\,\zeta(d,V,a)}\right)^{t+s} e^{-\nu(v_1-u_s)}.$$
On the other hand, we have
$$\big|\mathrm{Cov}\big(\Psi_{u_1}\cdots\Psi_{u_s},\,\Psi_{v_1}\cdots\Psi_{v_t}\big)\big|\ \le\ \Big(\zeta^{(\nu+1)/\nu}(d,V,a)+\zeta^{2}(d,V,a)\Big)\left(\frac{C}{N\,\zeta(d,V,a)}\right)^{s+t}\ \le\ C\left(\frac{C}{N\,\zeta(d,V,a)}\right)^{s+t}\zeta^{(\nu+1)/\nu}(d,V,a).$$
Next, taking the $\frac{1}{2(\nu+1)}$-power of the first bound and the $\frac{2\nu+1}{2(\nu+1)}$-power of the second, we obtain
$$\big|\mathrm{Cov}\big(\Psi_{u_1}\cdots\Psi_{u_s},\,\Psi_{v_1}\cdots\Psi_{v_t}\big)\big|\ \le\ s\,t\,\zeta(d,V,a)\left(\frac{C}{N\,\zeta(d,V,a)}\right)^{t+s} e^{-\nu(v_1-u_s)/(2(\nu+1))}.$$
For the case $v_1 = u_s$ we have
$$\big|\mathrm{Cov}\big(\Psi_{u_1}\cdots\Psi_{u_s},\,\Psi_{v_1}\cdots\Psi_{v_t}\big)\big|\ \le\ \zeta(d,V,a)\left(\frac{C}{N\,\zeta(d,V,a)}\right)^{t+s}.$$
Now, Kallabis and Neumann's inequality (see [25]) applied with
$$K_N = \frac{C}{N\,\zeta(d,V,a)},\qquad M_N = \frac{C}{N\,\zeta(d,V,a)}\qquad\text{and}\qquad \mathrm{Var}\left(\sum_{k=1}^{N}\Psi_k\right) = O\left(\frac{1}{N\,\zeta(d,V,a)}\right)$$
gives
$$\begin{aligned}
\mathbb{P}\left(\big|\tilde F_1(y,V)-\mathbb{E}\tilde F_1(y,V)\big| > \epsilon_0\sqrt{\frac{\log N}{N\,\zeta(d,V,a)}}\right)
&\le \mathbb{P}\left(\left|\sum_{k=1}^{N}\Psi_k\right| > \epsilon_0\sqrt{\frac{\log N}{N\,\zeta(d,V,a)}}\right)\\
&\le \exp\left(-\frac{\epsilon_0^2\,\log N/\big(2\,N\,\zeta(d,V,a)\big)}{\mathrm{Var}\big(\sum_{k=1}^{N}\Psi_k\big) + C\,\big(N\,\zeta(d,V,a)\big)^{-1/3}\left(\frac{\log N}{N\,\zeta(d,V,a)}\right)^{5/6}}\right)\\
&\le \exp\left(-\frac{\epsilon_0^2\,\log N}{C + C\left(\frac{\log^5 N}{N\,\zeta(d,V,a)}\right)^{1/6}}\right)\\
&\le C\exp\big(-C\,\epsilon_0^2\,\log N\big).
\end{aligned}$$
A suitable choice of $\epsilon_0$ then yields
$$\sup_{y\in[Q^P_p(V)-\delta,\,Q^P_p(V)+\delta]}\big|\tilde F_1(y,V)-\mathbb{E}\tilde F_1(y,V)\big| = O_{a.co.}\left(\sqrt{\frac{\log N}{N\,\zeta(d,V,a)}}\right).$$
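To pass from the exponential bound to this almost-complete rate, one sums over the grid; a sketch of this standard step, using $d_N = O(N^{1/2})$ from the covering above and choosing $\epsilon_0$ so that $C\epsilon_0^2 > 3/2$:

```latex
\sum_{N\ge 1} 2\,d_N \max_{1\le j\le d_N}\ \max_{z\in\{t_j-l_N,\,t_j+l_N\}}
\mathbb{P}\!\left(\big|\tilde F_1(z,V)-\mathbb{E}\tilde F_1(z,V)\big|
  > \epsilon_0\sqrt{\tfrac{\log N}{N\,\zeta(d,V,a)}}\right)
\;\le\; C\sum_{N\ge 1} N^{1/2}\,e^{-C\epsilon_0^2\log N}
\;=\; C\sum_{N\ge 1} N^{1/2-C\epsilon_0^2} \;<\; \infty .
```

The Borel–Cantelli lemma then delivers the $O_{a.co.}$ rate.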
Now, for (10) we write
$$\mathbb{P}\Big(\big|\tilde F_2(V)-\mathbb{E}\tilde F_2(V)\big|>\varepsilon\Big)\ =\ \mathbb{P}\left(\left|\sum_{k=1}^{N}\Delta_k\right|>\varepsilon\right),$$
where
$$\Delta_k = \frac{1}{N\,\mathbb{E}\big[\prod_{i=1}^{d}K\big(a^{-1}\|V^i-C_1^i\|\big)\big]}\;\chi(C_k),$$
with
$$\chi(u) = \prod_{i=1}^{d}K\big(a^{-1}\|V^i-u^i\|\big) - \mathbb{E}\left[\prod_{i=1}^{d}K\big(a^{-1}\|V^i-C_1^i\|\big)\right],\qquad u=(u^1,\ldots,u^d),\ u^i\in L^2[0,1].$$
Clearly,
$$\|\chi\|_\infty\ \le\ 2\,C\,\|K\|_\infty^{d},\qquad \mathrm{Lip}(\chi)\ \le\ C\,a^{-d}\,\mathrm{Lip}(K).$$
The evaluations of $\mathrm{Var}\big(\sum_{k=1}^{N}\Delta_k\big)$ and $\mathrm{Cov}\big(\Delta_{u_1}\cdots\Delta_{u_s},\,\Delta_{v_1}\cdots\Delta_{v_t}\big)$, for all $(u_1,\ldots,u_s)\in I_N^s$ and $(v_1,\ldots,v_t)\in I_N^t$, are similar to the previous case. We get
$$\mathrm{Var}(\Delta_1) = O\left(\frac{1}{N^2\,\zeta(d,V,a)}\right)$$
and
$$\big|\mathrm{Cov}\big(\Delta_{u_1}\cdots\Delta_{u_s},\,\Delta_{v_1}\cdots\Delta_{v_t}\big)\big|\ \le\ s\,t\,\zeta(d,V,a)\left(\frac{C}{N\,\zeta(d,V,a)}\right)^{t+s} e^{-\nu(v_1-u_s)/(2(\nu+1))}.$$
Once again, we use the same parameters
$$K_N = \frac{C}{N\,\zeta(d,V,a)},\qquad M_N = \frac{C}{N\,\zeta(d,V,a)}\qquad\text{and}\qquad \mathrm{Var}\left(\sum_{k=1}^{N}\Delta_k\right) = O\left(\frac{1}{N\,\zeta(d,V,a)}\right)$$
to deduce that
$$\big|\tilde F_2(V)-\mathbb{E}\tilde F_2(V)\big| = O_{a.co.}\left(\sqrt{\frac{\log N}{N\,\zeta(d,V,a)}}\right).$$
Thus the proof is complete. □
Proof of Lemma 2.
The stationarity gives
$$\mathbb{E}\tilde F_1(y,V) - F(y,V) = \frac{1}{\mathbb{E}\big[\prod_{i=1}^{d}K\big(a^{-1}\|V^i-C_1^i\|\big)\big]}\,\mathbb{E}\left[\prod_{i=1}^{d}K\big(a^{-1}\|V^i-C_1^i\|\big)\Big(F(y,C_1)-F(y,V)\Big)\right],$$
and hence
$$\big|\mathbb{E}\tilde F_1(y,V) - F(y,V)\big|\ \le\ \frac{1}{\mathbb{E}\big[\prod_{i=1}^{d}K\big(a^{-1}\|V^i-C_1^i\|\big)\big]}\,\mathbb{E}\left[\prod_{i=1}^{d}K\big(a^{-1}\|V^i-C_1^i\|\big)\,\big|F(y,C_1)-F(y,V)\big|\,\mathbb{1}_{B(V^1,a)\times\cdots\times B(V^d,a)}(C_1)\right]\ \le\ C\,a^{b_1},$$
implying
$$\mathbb{E}\tilde F_1(y,V) - F(y,V) = O\big(a^{b_1}\big).\qquad\square$$
Proof of Lemma 3.
We have
$$\big|\tilde F_2(V)\big|\ \le\ \frac{1}{2}\ \Longrightarrow\ \big|\tilde F_2(V)-1\big|\ >\ \frac{1}{2},$$
implying
$$\mathbb{P}\left(\big|\tilde F_2(V)\big|\le\frac{1}{2}\right)\ \le\ \mathbb{P}\left(\big|\tilde F_2(V)-1\big|>\frac{1}{2}\right).$$
Observe that $\mathbb{E}\tilde F_2(V)=1$; we then apply Lemma 1 with the particular choice $\epsilon_0=\frac{1}{2}$ to conclude that
$$\sum_{N\ge 1}\mathbb{P}\left(\big|\tilde F_2(V)\big|\le\frac{1}{2}\right)\ <\ \infty.$$
Hence the proof is complete. □
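As a numerical aside (not part of the proofs), the object analyzed above — the ratio $\tilde F(y,V)=\tilde F_1(y,V)/\tilde F_2(V)$, a kernel-weighted conditional CDF inverted at level $p$ — can be sketched as follows. This is a minimal illustration under our own conventions: the function and variable names are ours, and scalar covariates stand in for the $d$ functional components (a true functional implementation would replace the componentwise distances with $L^2[0,1]$ norms).

```python
import numpy as np

def kernel_cdf_quantile(C, S, v, a, p):
    """Kernel-weighted conditional CDF inverted at level p.

    C : (N, d) array of covariates (scalar proxies for the d functional
        components), S : (N,) responses, v : (d,) evaluation point,
    a : bandwidth, p : quantile level in (0, 1).
    """
    # Product kernel over the d components (Epanechnikov-type,
    # bounded and Lipschitz as assumed in the proofs)
    u = np.abs(C - v) / a                       # (N, d) scaled distances
    K = np.where(u <= 1.0, 1.0 - u**2, 0.0)
    w = K.prod(axis=1)                          # product over components
    if w.sum() == 0:
        raise ValueError("empty neighborhood: increase bandwidth a")
    w = w / w.sum()
    # Weighted empirical CDF of S, then generalized inverse at level p
    order = np.argsort(S)
    cdf = np.cumsum(w[order])
    return S[order][np.searchsorted(cdf, p)]

# Toy model: S | C ~ N(sum(C), 1), so the conditional 5% quantile
# at v = 0 is Phi^{-1}(0.05), roughly -1.645
rng = np.random.default_rng(0)
N, d = 5000, 2
C = rng.uniform(-1, 1, (N, d))
S = C.sum(axis=1) + rng.normal(0, 1, N)
q = kernel_cdf_quantile(C, S, np.zeros(d), a=0.3, p=0.05)
print(round(q, 2))
```

The returned value is a dynamic VaR-type estimate: a conditional lower quantile of the response given that the covariates lie in a kernel neighborhood of $v$.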

References

  1. Comte, F.; Renault, E. Long memory in continuous-time stochastic volatility models. Math. Financ. 1998, 8, 291–323. [Google Scholar] [CrossRef]
  2. Alòs, E.; León, J.A.; Vives, J. On the short-time behavior of the implied volatility for jump-diffusion models with stochastic volatility. Financ. Stoch. 2007, 11, 571–589. [Google Scholar] [CrossRef]
  3. Fouque, J.P.; Papanicolaou, G.; Sircar, K.R. Derivatives in Financial Markets with Stochastic Volatility; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  4. Tankov, P. Financial Modelling with Jump Processes; Chapman and Hall/CRC: Boca Raton, FL, USA, 2003. [Google Scholar]
  5. Engle, R.F.; Manganelli, S. CAViaR: Conditional autoregressive value at risk by regression quantiles. J. Bus. Econ. Stat. 2004, 22, 367–381. [Google Scholar] [CrossRef]
  6. Kuester, K.; Mittnik, S.; Paolella, M.S. Value-at-risk prediction: A comparison of alternative strategies. J. Financ. Econ. 2006, 4, 53–89. [Google Scholar] [CrossRef]
  7. Sun, P.; Lin, F.; Xu, H.; Yu, K. Estimation of value-at-risk by Lp quantile regression. Ann. Inst. Stat. Math. 2025, 77, 25–59. [Google Scholar] [CrossRef]
  8. Lux, M.; Härdle, W.K.; Lessmann, S. Data driven value-at-risk forecasting using a SVR-GARCH-KDE hybrid. Comput. Stat. 2020, 35, 947–981. [Google Scholar] [CrossRef]
  9. Herrera, R.; Schipp, B. Value at risk forecasts by extreme value models in a conditional duration framework. J. Empir. Financ. 2013, 23, 33–47. [Google Scholar] [CrossRef]
  10. Huang, J.-J.; Lee, K.-J.; Liang, H.; Lin, W.-F. Estimating value at risk of portfolio by conditional copula-GARCH method. Insur. Math. Econ. 2009, 45, 315–324. [Google Scholar] [CrossRef]
  11. Stone, C.J. Consistent Nonparametric Regression. Ann. Statist. 1977, 5, 595–620. [Google Scholar] [CrossRef]
  12. Stute, W. Conditional empirical processes. Ann. Statist. 1986, 14, 638–647. [Google Scholar] [CrossRef]
  13. Koenker, R. Quantile regression: 40 years on. Annu. Rev. Econ. 2017, 9, 155–176. [Google Scholar] [CrossRef]
  14. Cardot, H.; Crambes, C.; Sarda, P. Quantile regression when the covariates are functions. J. Nonparametric Stat. 2005, 17, 841–856. [Google Scholar] [CrossRef]
  15. Ferraty, F.; Vieu, P. Nonparametric Functional Data Analysis: Theory and Practice; Springer Series in Statistics; Springer: New York, NY, USA, 2006. [Google Scholar]
  16. Messaci, F.; Nemouchi, N.; Ouassou, I.; Rachdi, M. Local polynomial modelling of the conditional quantile for functional data. Stat. Methods Appl. 2015, 24, 597–622. [Google Scholar] [CrossRef]
  17. Ling, N.; Yang, Y.; Peng, Q. Partial linear quantile regression model with incompletely observed functional covariates. J. Nonparametric Stat. 2025, 1–27. [Google Scholar] [CrossRef]
  18. Mutis, M.; Beyaztas, U.; Karaman, F.; Shang, H.L. On function-on-function linear quantile regression. J. Appl. Stat. 2025, 52, 814–840. [Google Scholar] [CrossRef] [PubMed]
  19. Beyaztas, U.; Tez, M.; Shang, H.L. Robust scalar-on-function partial quantile regression. J. Appl. Stat. 2024, 51, 1359–1377. [Google Scholar] [CrossRef] [PubMed]
  20. Agarwal, G.; Sun, Y. Bivariate Functional Quantile Envelopes with Application to Radiosonde Wind Data. Technometrics 2020, 63, 199–211. [Google Scholar] [CrossRef]
  21. Lim, K.P.; Brooks, R.D.; Kim, J.H. Financial crisis and stock market efficiency: Empirical evidence from Asian countries. Int. Rev. Financial Anal. 2008, 17, 571–591. [Google Scholar] [CrossRef]
  22. Laksaci, A.; Lemdani, M.; Ould-Saïd, E. A generalized L1-approach for a kernel estimator of conditional quantile with functional regressors: Consistency and asymptotic normality. Stat. Probab. Lett. 2009, 79, 1065–1073. [Google Scholar] [CrossRef]
  23. Rachdi, M.; Vieu, P. Nonparametric regression for functional data: Automatic smoothing parameter selection. J. Stat. Plan. Inference 2007, 137, 2784–2801. [Google Scholar] [CrossRef]
  24. Alkhaldi, S.H.; Alshahrani, F.; Alaoui, M.K.; Laksaci, A.; Rachdi, M. Multifunctional Expectile Regression Estimation in Volterra Time Series: Application to Financial Risk Management. Axioms 2025, 14, 147. [Google Scholar] [CrossRef]
  25. Doukhan, P.; Neumann, M.H. Probability and moment inequalities for sums of weakly dependent random variables, with applications. Stoch. Process. Their Appl. 2007, 117, 878–903. [Google Scholar] [CrossRef]
Figure 1. The components of the case h = 0.9.
Figure 2. The components of the case h = 0.6.
Figure 3. The components of the case h = 0.2.
Figure 4. The two components of the functional regressor C .
Figure 5. Comparison between Q P p ^ and CVaR: Case p = 0.05 .
Figure 6. Comparison between Q P p ^ and CVaR: Case p = 0.1 .
Figure 7. Comparison of the exceedance rate: Case p = 0.1 highlighted by the black line.
Figure 8. Comparison of the exceedance rate: Case p = 0.05 highlighted by the black line.
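The exceedance rate shown in Figures 7 and 8 is the proportion of observed returns falling below the predicted VaR; for a well-calibrated model it should be close to the nominal level $p$. A minimal sketch of this backtesting statistic (function names are ours, and the sign convention assumes the VaR forecast is reported as a lower quantile of the return distribution):

```python
import numpy as np

def exceedance_rate(returns, var_forecasts):
    """Fraction of returns that fall below the predicted VaR
    (lower-tail convention: VaR reported as a return quantile)."""
    returns = np.asarray(returns, dtype=float)
    var_forecasts = np.asarray(var_forecasts, dtype=float)
    return np.mean(returns < var_forecasts)

# Sanity check: with i.i.d. N(0,1) returns and the true 5% quantile
# Phi^{-1}(0.05) ~ -1.6449 as forecast, the rate should be near 0.05
rng = np.random.default_rng(1)
r = rng.normal(size=100_000)
rate = exceedance_rate(r, np.full(r.shape, -1.6449))
print(round(rate, 3))
```

Rates well above $p$ indicate the model underestimates risk; rates well below $p$ indicate over-conservative forecasts.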
Table 1. The $ALS(p)$ for different scenarios.

Model            | h   | N   | p = 0.1 | p = 0.05 | p = 0.01
$\widehat{QP_p}$ | 0.2 | 50  | 1.89    | 1.76     | 1.59
                 |     | 150 | 1.12    | 1.02     | 1.07
                 |     | 250 | 0.81    | 0.64     | 0.73
                 | 0.6 | 50  | 1.57    | 1.41     | 1.61
                 |     | 150 | 0.96    | 0.85     | 0.72
                 |     | 250 | 0.31    | 0.23     | 0.28
                 | 0.9 | 50  | 1.18    | 1.17     | 1.23
                 |     | 150 | 1.02    | 1.09     | 1.11
                 |     | 250 | 0.18    | 0.25     | 0.33
CVaR             | 0.2 | 50  | 1.19    | 1.23     | 1.25
                 |     | 150 | 1.07    | 1.12     | 1.26
                 |     | 250 | 0.97    | 0.74     | 0.83
                 | 0.6 | 50  | 1.85    | 1.91     | 1.71
                 |     | 150 | 1.11    | 1.03     | 1.12
                 |     | 250 | 0.71    | 0.46     | 0.53
                 | 0.9 | 50  | 2.20    | 2.21     | 2.33
                 |     | 150 | 2.14    | 2.19     | 2.081
                 |     | 250 | 1.078   | 1.05     | 0.76
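Table 1 compares forecasts by their average asymmetric loss. Assuming $ALS(p)$ denotes the mean asymmetric (pinball) loss at level $p$ — our reading, since the paper builds its estimator from a weighted asymmetric loss function — a minimal scoring sketch (function names are ours):

```python
import numpy as np

def als(returns, var_forecasts, p):
    """Average asymmetric (pinball) loss at level p:
    mean of (p - 1{r < q}) * (r - q), which penalizes errors on the
    two sides of the forecast quantile q asymmetrically."""
    r = np.asarray(returns, dtype=float)
    q = np.asarray(var_forecasts, dtype=float)
    u = r - q
    return np.mean((p - (u < 0)) * u)

# The true p-quantile minimizes the expected loss, so it should score
# better (lower) than a misplaced forecast
rng = np.random.default_rng(2)
r = rng.normal(size=50_000)
good = als(r, np.full(r.shape, -1.6449), p=0.05)  # true 5% quantile
bad = als(r, np.full(r.shape, 0.0), p=0.05)       # misplaced forecast
print(good < bad)
```

Lower $ALS(p)$ therefore means a better-placed quantile forecast, which is how the two methods in Table 1 are ranked.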