1. Introduction
Cryptocurrencies have emerged as a new asset class in recent years and have become a significant part of our global financial system (
Ji et al. 2019). Numerous financial organizations have extensively used cryptocurrencies to profit from the technology that underpins them. Many businesses have started accepting them alongside fiat money. According to
Shrivas and Yeboah (
2017), virtual currencies have given governments, businesses, and individuals new ways to access open, dependable, quick, and secure services that could potentially assist in sparking economic activity and growth in our global economy.
Broadly speaking, virtual currency exchanges are viewed as a popular and lucrative investment due to the technology underpinning them (
Derbentsev et al. 2020). The market capitalization of cryptocurrencies is expected to more than triple over the next 15 years, reaching up to
$10 trillion (
Steve 2018). As of 22 April 2020, there were over 5000 cryptocurrencies in the market, with a combined market value of over
$200 billion (
Peng and Yichao 2020). According to the popular cryptocurrency trading website CoinMarketCap, the most traded cryptocurrencies are Bitcoin, Ethereum, Ripple, Stellar, and Litecoin. Together, these five make up more than 75% of the total market capitalization of all digital assets.
Studies focusing on market efficiency (e.g.,
Nadarajah and Chu 2017;
Tran and Leirvik 2020), volatility patterns and investment behaviors (
El-Chaarani et al. 2023), as well as the portfolio and hedging implications of cryptocurrencies (e.g.,
Bouri et al. 2017;
Conlon and McGee 2020) show the importance of being able to accurately estimate and forecast the volatility behaviors of these unique digital assets. A vine copula approach is used by
Syuhada and Hakim (
2020) to build a dependence model and provide value-at-risk (VaR) projections. Using the prices of Bitcoin and Ethereum,
Chi and Hao (
2021) demonstrate that the GARCH model’s volatility forecast is superior to the option-implied volatility.
VaR modeling has become popular among financial market regulators and investors as a way to measure risk. It measures the loss that investors can expect to incur over a given time horizon with a given level of confidence. This risk measure is also popular among practitioners because it summarizes the downside risk of a portfolio in a single number, expressing the associated loss at a fixed probability (
Marimoutou et al. 2009).
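As a simple illustration of the concept (the paper's own computations were done in R; this sketch uses Python with invented numbers), a one-day VaR at the 95% level is simply the 5% quantile of the return distribution:

```python
import numpy as np

# Hypothetical daily log returns (in %); illustrative values only,
# drawn with crypto-like volatility.
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0, scale=3.0, size=1000)

# Historical-simulation VaR: the alpha-quantile of the empirical
# return distribution.
alpha = 0.05
var_95 = np.quantile(returns, alpha)

# With 95% confidence, the one-day loss should not exceed |var_95| percent.
print(f"1-day 95% VaR: {var_95:.2f}%")
```

Reading the quantile off the empirical distribution makes no parametric assumption; the models discussed below instead forecast the quantile from a fitted conditional distribution.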
Using a variety of volatility models,
Trucios (
2019) analyzes the VaR of the price dynamics of Bitcoin and shows how the presence of outliers can play a fundamental role in the modeling and forecasting of key Bitcoin risk metrics. Such outliers may, for example, signal a regime shift in the price dynamics of the asset. This is something that is explored further by
Ardia et al. (
2019). Specifically, according to
Ardia et al. (
2019), the Markov-switching GARCH (MSGARCH) model performs better than the single-regime GARCH model when it comes to forecasting VaR. The stochastic volatility with the co-jumps model was shown to be the best for predicting VaR and expected shortfall (ES) to assess cryptocurrency risk by
Nekhili and Sultan (
2020).
Jiménez et al. (
2020) argue that the median shortfall and GARCH and GAS models with semi-parametric specifications offer a more precise and reliable risk measure for assessing Bitcoin risk than other measures.
To study the tail behavior of cryptocurrencies,
Gkillas and Katsiampa (
2018) employed extreme value theory (EVT) to forecast VaR and ES. According to their findings, Bitcoin Cash had the greatest potential for gains and losses, making it the riskiest overall, whereas Bitcoin and Litecoin were found to be the least risky of the sampled currencies.
Successful investment decisions require precise estimates of downside risk measures, such as those derived from VaR (
Zahid et al. 2022). As a result, there is growing interest in using various backtesting methodologies to assess VaR’s accuracy. For instance,
Troster et al. (
2019) used backtesting techniques to compare the accuracy of VaR models. They found that a GAS model with a heavy-tailed distribution was the most precise for calculating Bitcoin risk.
The aforementioned research measures downside risk using various modeling techniques, including GARCH, SV, Markov-switching models, and EVT. Quantile models, on the other hand, have not received as much consideration regarding VaR modeling in cryptocurrency markets. However, empirical data shows that quantile models can compete with other VaR models (
Yu et al. 2010;
Laporta et al. 2018).
The quantile-based approach has produced two notable models: dynamic quantile regression (DQR) and CAViaR. The CAViaR models make no assumptions about the distribution of the error terms and instead explicitly model the quantile of the return distribution. This autoregressive model is well suited to cryptocurrency returns because their volatility tends to cluster over time, indicating some degree of autocorrelation. The VaR, which is closely related to the standard deviation of the return distribution, must therefore share the same property. A natural way to formalize this property is to use autoregressive specifications, such as the CAViaR model with its different specifications, which are preferred for VaR estimation (
Koenker and Bassett 1978).
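For concreteness, the symmetric absolute value (SAV) specification of the CAViaR recursion can be sketched as follows. The coefficients below are hypothetical placeholders; in practice they are estimated by minimizing the quantile (pinball) loss:

```python
import numpy as np

def caviar_sav(returns, beta, var0):
    """Symmetric absolute value (SAV) CAViaR recursion:
    VaR_t = b0 + b1 * VaR_{t-1} + b2 * |r_{t-1}|.
    VaR is expressed as a (negative) return quantile."""
    b0, b1, b2 = beta
    var = np.empty(len(returns))
    var[0] = var0                       # starting value for the quantile
    for t in range(1, len(returns)):
        var[t] = b0 + b1 * var[t - 1] + b2 * abs(returns[t - 1])
    return var

rng = np.random.default_rng(0)
r = rng.normal(0.0, 2.0, 500)
# Hypothetical coefficients: a persistent quantile (b1 close to 1)
# that is pushed further into the tail by large absolute returns.
v = caviar_sav(r, beta=(-0.1, 0.9, -0.3), var0=-3.0)
print(v[-1])
```

The recursion makes the autocorrelation of the risk measure explicit: today's quantile inherits most of yesterday's level, adjusted by the size of the latest return.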
In contrast, the DQR model is characterized by a stochastic process of the first order. It explicitly calculates the conditional quantile and specifies its evaluation by regression; as a result, this model can handle rapid price fluctuations, high volatility, and other distinctive characteristics of cryptocurrency returns. Such an approach is more well-suited to handle outliers arising from large shifts in price behaviors. The DQR approach can thus examine scenarios where a miscalculation of risk can result in significant losses, given its ability to predict VaR models with a high confidence level (
Laporta et al. 2018).
Several studies employ backtesting to determine the precision of risk metrics used to analyze cryptocurrencies. However, backtesting procedures do not always reveal the magnitude of the exceedances, so a researcher cannot simply compare models to determine the optimal one. While previous studies have attempted to use the MCS technique to select an optimal risk model, our study contributes to the literature by merging several VaR models to construct a univariate model for estimating cryptocurrencies’ downside risk.
While the MCS procedure can be used across sample periods and across models in order to select an optimal model, it is possible, as we argue here, to combine models in order to improve downside risk estimation. The various theoretical and empirical benefits of merging VaR forecasts have been recognized in previous studies (
Bernardi et al. 2017). Similarly,
Stock and Watson (
1999) demonstrate that combining forecasts yields better results than more conventional model selection strategies, such as MCS. This approach dynamically weights the VaR predictions generated by the models belonging to the best final set (
Laporta et al. 2018). The combination of VaR models is favorable since VaR is a narrow-coverage quantile measure that is sensitive to the few observations falling below the quantile estimate.
Ammann and Verhofen (
2005) discuss the impact of different VaR models on the computation of performance measures, providing insights into how different models can influence the combined forecast.
Timmermann (
2006) provides a comprehensive overview of forecast combination methods in economic forecasting. Additionally,
Pesaran et al. (
2009) point out that integrating many VaR models can actually make individual VaR forecasts more accurate.
Hansen et al. (
2011) introduce a method for comparing and selecting among a set of competing models, forming the basis for combined forecasts from selected models.
Due to the extreme price volatility that virtual currencies display, trading in them carries a higher level of risk than trading in traditional financial instruments. Given the escalating demand for cryptocurrencies, choosing precise models to calculate the investment’s downside risk is essential to minimize losses and maximize returns. Such downside risk estimation is the focal point of our study.
Our study’s major goal is to identify the best forecasting model that can effectively predict VaR while capturing the underlying traits (stylized facts) displayed in the most active cryptocurrency marketplaces. Econometrically, our study contributes to the existing literature in four different ways.
First, GARCH, EGARCH, GJR, and GAS models with different innovations are used to estimate and forecast VaR. Second, this study contributes to the literature by estimating VaR to forecast cryptocurrency downside risks using quantile-based models (CAViaR and DQR); the accuracy of these models for estimating VaR in cryptocurrency markets has not yet undergone thorough testing. Third, different backtesting strategies and the MCS method are applied to choose the best VaR model. Finally, a weighted aggregative approach is used to combine the various VaR models within a superior set of models to create an ideal forecast model, robustifying the individual VaR forecasts. The weighted aggregative approach has not yet been systematically applied to cryptocurrency price data. The results of our study have significant implications for risk managers, investors, and regulators who use cryptocurrencies in associated financial techniques such as optimal hedging (
Koutmos et al. 2021).
3. Value-at-Risk Estimation and Sample Data
All of the models in this research use a one-day-ahead conditional variance forecast to generate VaR. The one-day-ahead forecast of the conditional variance of returns is denoted by $\hat{\sigma}^2_{t+1|t}$. Since each model considered in this study may assume a different distribution for the error terms, the one-day-ahead VaR forecast of returns at the $(1-\alpha)$ confidence level is computed as
$$\mathrm{VaR}^{\alpha}_{t+1|t} = \hat{\mu}_{t+1|t} + \hat{\sigma}_{t+1|t}\, F^{-1}(\alpha),$$
where $F^{-1}(\alpha)$ represents the $\alpha$-quantile of the cumulative distribution function of the innovation distribution.
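A minimal sketch of this parametric VaR calculation, with hypothetical forecast values (zero conditional mean, 3% conditional volatility) and a standardized Student-t as the heavy-tailed alternative:

```python
from scipy.stats import norm, t

def parametric_var(mu, sigma, alpha=0.05, dist="normal", df=5):
    """One-day-ahead VaR = mu + sigma * F^{-1}(alpha), where F is the
    assumed innovation distribution."""
    if dist == "normal":
        q = norm.ppf(alpha)
    else:
        # Standardize the Student-t so the innovation has unit variance.
        q = t.ppf(alpha, df) * ((df - 2) / df) ** 0.5
    return mu + sigma * q

# Hypothetical forecasts: mu = 0, sigma = 3 (in %).
print(parametric_var(0.0, 3.0, alpha=0.05))            # normal innovations
print(parametric_var(0.0, 3.0, alpha=0.05, dist="t"))  # Student-t innovations
```

The choice of $F$ matters: different innovation distributions shift the quantile, which is why the paper compares Normal, skew-Normal, Student-t, and skew-Student-t specifications.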
3.1. Backtesting and Model Selection
The effectiveness of a VaR forecast depends on quantifiable tests that follow predetermined criteria and compare actual gains and losses to the predicted VaR estimates. Backtesting VaR can be done using a variety of tests, including the dynamic quantile (DQ) test, the actual over expected (AE) exceedance ratio, the conditional coverage (CC) test, and the unconditional coverage (UC) test. These tests assess a model’s effectiveness and precision as part of the backtesting methodology.
The DQ test of
Engle and Manganelli (
2004) employs a method based on the characteristics of a quantile regression. The DQ test compares the predicted VaR to the actual manifestation. The fundamental assumption is that realized returns should be below the VaR forecast with a probability equal to the quantile level if the model is accurate.
VaR models can be evaluated using the AE exceedance ratio. It compares the actual number of exceedances (when losses exceed the VaR estimate) and the expected number of exceedances. The actual number of exceedances is simply the count of how many times the losses actually did exceed the VaR estimate in the sample period. The AE exceedance ratio is then calculated by dividing the actual number by the expected number. A ratio of 1 indicates that the model performs as expected. A ratio greater than 1 indicates that the model underestimates risk (there are more exceedances than anticipated), whereas a ratio less than 1 indicates that the model overestimates risk (there are fewer exceedances than expected) (see
Kupiec 1995;
Jorion 2007 for details).
The UC test of
Christoffersen (
1998) verifies that the number of exceptions (instances in which the loss exceeds the VaR estimate) corresponds to the expected number based on the confidence level. The CC test, on the other hand, examines whether the exceptions are distributed independently over time. Both tests are based on the likelihood ratio, comparing the likelihood of a model in which the exceptions follow the assumed distribution (a Bernoulli distribution for the UC test and a first-order Markov chain for the CC test) to the likelihood of a model in which the exceptions are distributed differently.
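The AE ratio and Kupiec's UC likelihood ratio can be computed as in the following sketch (simulated data, not the paper's code; a well-calibrated 95% VaR on standard normal returns):

```python
import numpy as np
from scipy.stats import chi2

def backtest_var(returns, var_forecasts, alpha=0.05):
    """AE exceedance ratio and Kupiec's unconditional coverage LR test."""
    hits = returns < var_forecasts           # exceedance indicator
    n, x = len(returns), int(hits.sum())
    ae = x / (alpha * n)                     # actual / expected exceedances
    p_hat = x / n
    # Kupiec (1995) likelihood ratio; asymptotically chi-squared, 1 df.
    lr_uc = -2 * (x * np.log(alpha) + (n - x) * np.log(1 - alpha)
                  - x * np.log(p_hat) - (n - x) * np.log(1 - p_hat))
    p_value = 1 - chi2.cdf(lr_uc, df=1)
    return ae, lr_uc, p_value

rng = np.random.default_rng(1)
r = rng.normal(0.0, 1.0, 2000)
v = np.full(2000, -1.645)                    # true 5% quantile of N(0,1)
ae, lr_uc, p = backtest_var(r, v)
print(ae, lr_uc, p)
```

A small p-value rejects correct unconditional coverage; the CC test extends this by also testing the independence of the hit sequence.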
Even when backtesting techniques indicate that a VaR forecast is accurate, no single model can be declared superior to the others. In this scenario, market participants are unable to select just one volatility model from among the many on offer. In light of this, our study uses the MCS technique to conduct a thorough analysis and choose the model that best fits the data out of all the candidates.
3.2. Model Confidence Set
The MCS technique evaluates and analyzes the prediction abilities of various models. The null hypothesis that all currently chosen models have the same predictive capacity for user-defined loss function/measurement error is tested by this sequential testing process, which is based on the hypothesis of equal predictive ability (EPA). Through the use of the MCS technique, empiricists can create a set of models known as the superior set of models (SSM), in which the EPA null hypothesis is accepted with a given level of confidence. Using an “elimination rule” that is consistent with the statistical test, the worst-performing model is eliminated at each stage until the hypothesis of EPA is not rejected for all the models included in the SSM.
The MCS procedure considers the quantile loss function in our study. The VaR estimate of model $i$ at level $\alpha$ is represented by $\mathrm{VaR}^{\alpha}_{i,t}$, and the associated asymmetric loss function, which penalizes the large negative returns that fall below the VaR estimate, is defined as
$$\ell_{i,t} = \left(\alpha - \mathbb{1}\{r_t < \mathrm{VaR}^{\alpha}_{i,t}\}\right)\left(r_t - \mathrm{VaR}^{\alpha}_{i,t}\right).$$
The EPA test is built on the loss differences and the average loss differences. The loss difference between models $i$ and $j$ at time $t$ is
$$d_{ij,t} = \ell_{i,t} - \ell_{j,t},$$
while
$$d_{i\cdot,t} = \frac{1}{m-1}\sum_{j \in M,\; j \neq i} d_{ij,t}$$
represents the average loss difference between model $i$ and the other competing models in the initial set $M$ of $m$ models at time $t$. Intuitively, model $i$ is preferred to model $j$ when $d_{ij,t} < 0$.
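The quantile loss and the loss differences can be illustrated as follows (simulated returns; a well-calibrated VaR forecast compared against a deliberately shallow one):

```python
import numpy as np

def quantile_loss(returns, var_forecasts, alpha=0.05):
    """Asymmetric quantile (pinball) loss used by the MCS procedure:
    l_t = (alpha - 1{r_t < VaR_t}) * (r_t - VaR_t).
    Exceedances (returns below the VaR) are penalized with weight 1 - alpha."""
    hit = (returns < var_forecasts).astype(float)
    return (alpha - hit) * (returns - var_forecasts)

rng = np.random.default_rng(7)
r = rng.normal(0.0, 1.0, 1000)
loss_a = quantile_loss(r, np.full(1000, -1.645))  # well-calibrated 5% VaR
loss_b = quantile_loss(r, np.full(1000, -0.5))    # VaR far too shallow
d_ab = loss_a - loss_b                            # loss differences d_{ab,t}
print(loss_a.mean(), loss_b.mean(), d_ab.mean())
```

The expected pinball loss is minimized at the true quantile, so the calibrated forecast attains a lower average loss and the mean loss difference is negative, marking model a as preferred.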
The null and alternative hypotheses of the EPA test are described in Equation (23):
$$H_0:\; \mathrm{E}(d_{i\cdot,t}) = 0 \;\;\text{for all } i \in M, \qquad H_1:\; \mathrm{E}(d_{i\cdot,t}) \neq 0 \;\;\text{for some } i \in M.$$
The $t$-statistic is given as
$$t_{i\cdot} = \frac{\bar{d}_{i\cdot}}{\sqrt{\widehat{\mathrm{var}}(\bar{d}_{i\cdot})}},$$
where $\bar{d}_{i\cdot} = \frac{1}{T}\sum_{t=1}^{T} d_{i\cdot,t}$ is the sample loss of model $i$ relative to the average loss across the other models $j \in M$; it quantifies the average loss difference between model $i$ and its competitors. $\widehat{\mathrm{var}}(\bar{d}_{i\cdot})$ represents a bootstrap estimate of $\mathrm{var}(\bar{d}_{i\cdot})$. Finally, the EPA hypothesis developed in Equation (23) is tested by applying the statistic
$$T_{\max} = \max_{i \in M} t_{i\cdot}.$$
Intuitively, a high value of $t_{i\cdot}$ affirms that the estimates of model $i$ are far from the actual values relative to the other competing models $j \in M$. Consequently, the $i$-th model can be removed from $M$. The elimination rule is presented in Equation (28) as follows:
$$e_M = \arg\max_{i \in M} t_{i\cdot}.$$
The elimination rule in Equation (28) discards the model with the greatest statistic $t_{i\cdot}$, that is, the model with the greatest standardized loss relative to the average loss across all other competing models in $M$. At each iteration, if the null hypothesis is rejected at the prescribed confidence level, the elimination rule in Equation (28) drops that model and the statistics $t_{i\cdot}$ are computed again for all $i \in M$. Once the null hypothesis is no longer rejected, the iteration stops, and the surviving models form the superior set of models.
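A stylized sketch of the elimination loop follows. It is deliberately simplified: a plain normal approximation stands in for the block bootstrap that the MCS procedure uses in practice, and the average loss difference is taken against the set-wide mean:

```python
import numpy as np
from scipy.stats import norm

def mcs_sketch(losses, alpha=0.1):
    """Stylized MCS elimination loop: repeatedly drop the model with the
    largest t-statistic until equal predictive ability is not rejected."""
    models = list(losses)                        # model names
    while len(models) > 1:
        L = np.column_stack([losses[m] for m in models])
        d = L - L.mean(axis=1, keepdims=True)    # loss vs. set average
        d_bar = d.mean(axis=0)
        se = d.std(axis=0, ddof=1) / np.sqrt(len(d))
        t_stat = d_bar / se
        worst = int(np.argmax(t_stat))           # elimination rule: argmax t_i.
        p_value = 1.0 - norm.cdf(t_stat[worst])  # crude one-sided p-value
        if p_value >= alpha:                     # EPA no longer rejected
            break
        models.pop(worst)
    return models                                # surviving superior set

rng = np.random.default_rng(3)
losses = {"good": rng.normal(1.0, 0.1, 500),
          "also_good": rng.normal(1.0, 0.1, 500),
          "bad": rng.normal(1.5, 0.1, 500)}      # clearly inferior model
print(mcs_sketch(losses))
```

The clearly inferior model is eliminated first, while models with statistically indistinguishable losses tend to survive together in the superior set.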
3.3. VaR Aggregation
Instead of picking the single forecast model with the greatest performance, the MCS process delivers a set of models. This set of models demonstrates equal VaR forecasting ability, and in the worst case no model is ever deleted, namely when the EPA null hypothesis cannot be rejected for any model in the initial set.
However, choosing a single best-performing forecast model is of great importance to researchers and market participants. The issue of how best to combine the information from the models in the superior set has therefore drawn considerable attention, and it is the subject of our work. The necessity of pooling risk models, or of developing a method that can best pool these models, has already been highlighted in the literature (
Pesaran et al. 2009;
Stock and Watson 1999;
Bernardi et al. 2017;
Laporta et al. 2018;
Maciel 2021).
A weighted average approach can be used as a way of aggregating the VaR models in the best-performing collection of models. As a result, the pooled VaR forecasts of the models belonging to the superior set can be compared with those of the single models. The VaR estimate of model $i$ at time $t$ at level $\alpha$ in the superior set of models is presented by $\mathrm{VaR}^{\alpha}_{i,t}$, and $\{w_i\}$ is a set of weights. We can simply obtain the weighted VaR using Equation (29):
$$\mathrm{VaR}^{\alpha,\mathrm{agg}}_{t} = \sum_{i \in \mathrm{SSM}} w_i\, \mathrm{VaR}^{\alpha}_{i,t}.$$
In Equation (29), $\sum_i w_i = 1$. The uniform set of weights assigns the same (identical) weight to each model, that is, $w_i = 1/m$; consequently, $\mathrm{VaR}^{\alpha,\mathrm{agg}}_{t}$ is the average of every model's VaR. The statistic $t_{i\cdot}$ can be used to compare a model $i$ with the other models in the superior set, and we therefore also connect the weights $w_i$ to $t_{i\cdot}$ through the MCS framework in our study, as given in Equation (30):
$$w_i = \frac{\exp(-\tilde{t}_i)}{\sum_{j \in \mathrm{SSM}} \exp(-\tilde{t}_j)}, \qquad \text{where } \tilde{t}_i = t_{i\cdot} - \min_{j \in \mathrm{SSM}} t_{j\cdot}.$$
Hence the first-ranked model in the SSM has $\exp(-\tilde{t}_i) = 1$, since it has the lowest $t_{i\cdot}$, while the other models are characterized by $0 < \exp(-\tilde{t}_i) < 1$.
The MCS procedure assigns ranks to the models, which determine the relevance of each VaR forecast within the superior set: the lower the value of the statistic $t_{i\cdot}$, the higher the rank of the model and the higher its weight $w_i$. In this way, Equation (30) maps the statistics into $(0, 1]$, with the first-ranked model receiving the largest weight. This study considers both the uniform and the exponential weighting sets to compare the models.
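The uniform and exponential weighting schemes can be sketched as follows (the VaR forecasts and t-statistics are invented for illustration):

```python
import numpy as np

def aggregate_var(var_forecasts, t_stats=None):
    """Pool VaR forecasts from the superior set of models.
    Uniform weights if t_stats is None; otherwise exponential weights
    w_i proportional to exp(-(t_i - min_j t_j)), so the first-ranked
    model (lowest t-statistic) receives the largest weight."""
    var_forecasts = np.asarray(var_forecasts, dtype=float)
    if t_stats is None:
        w = np.full(len(var_forecasts), 1.0 / len(var_forecasts))
    else:
        e = np.exp(-(np.asarray(t_stats) - np.min(t_stats)))
        w = e / e.sum()                  # weights sum to one
    return float(w @ var_forecasts), w

# Hypothetical one-day VaR forecasts from three SSM models and their
# MCS t-statistics (invented numbers, illustration only).
vars_t = [-4.8, -5.2, -4.5]
t_stats = [0.2, 0.9, 1.5]
agg, w = aggregate_var(vars_t, t_stats)
print(agg, w)
```

With exponential weights, the pooled VaR is pulled toward the best-ranked model while still drawing on the rest of the superior set.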
3.4. Cryptocurrency Price Data
The five major cryptocurrencies used in this study’s empirical examination are Bitcoin, Ethereum, Ripple, Stellar, and Litecoin. Based on their market capitalization, these cryptocurrencies make up some of the most frequently traded cryptocurrencies worldwide. We examine these cryptocurrencies and apply the aforementioned methods based on their daily closing values. The information was taken from websites that focus on cryptocurrencies (
https://coinmarketcap.com/ accessed on 10 January 2022).
The Bitcoin and Litecoin data were collected from 29 April 2013, while the Ethereum, Ripple, and Stellar data start on 8 August 2015, 5 August 2013, and 5 August 2014, respectively. The data for all the cryptocurrencies end on 30 June 2020. There are a total of 2605, 1791, 2519, 2621, and 2157 observations for Bitcoin, Ethereum, Ripple, Litecoin, and Stellar, respectively. The programming language R (version 3.5) was used to carry out the statistical analysis.
The models’ parameters were estimated using the in-sample data, and forecast accuracy was assessed using the out-of-sample data. The forecasts for all cryptocurrencies were evaluated over the final year of data (365 days). As additional observations became available, all of the models’ parameters were updated using an expanding window until the end of the sample. In this investigation, we used a forecast horizon of one day (k = 1).
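The expanding-window scheme can be sketched as follows; the volatility model here is stubbed out with a sample standard deviation, standing in for the GARCH/GAS/quantile models that are refit in the paper:

```python
import numpy as np
from scipy.stats import norm

def expanding_window_var(returns, n_out=365, alpha=0.05):
    """Expanding-window one-step-ahead VaR forecasts (k = 1).
    The 'model' is a stand-in: at each step it refits on all past data,
    using the sample mean and standard deviation as forecasts."""
    n = len(returns)
    forecasts = np.empty(n_out)
    for i, step in enumerate(range(n - n_out, n)):
        train = returns[:step]              # window expands each day
        sigma = train.std(ddof=1)           # refit on the in-sample data
        forecasts[i] = train.mean() + sigma * norm.ppf(alpha)
    return forecasts

rng = np.random.default_rng(5)
r = rng.normal(0.0, 2.0, 1200)              # simulated return series
v = expanding_window_var(r)
print(len(v), v[-1])
```

Each of the 365 out-of-sample forecasts only ever uses information available up to the previous day, mimicking real-time risk forecasting.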
4. Major Findings
The descriptive statistics of log returns of five selected cryptocurrencies utilized in this study are shown in
Table 1. Every cryptocurrency has a mean value of nearly zero, with standard deviations varying from 1.9 to 3.3. All cryptocurrencies follow a heavy-tailed (leptokurtic) distribution, as the high kurtosis values show. Except for Ethereum and Ripple, which exhibit substantial negative skewness, all cryptocurrencies have positive skewness. Each cryptocurrency’s daily log returns fail the Jarque-Bera (JB) test for normality, supporting the idea that cryptocurrency price changes are not normally distributed. The ARCH LM test up to lag 20 is also rejected, providing further evidence of heteroscedasticity in the log returns. The Ljung-Box test for the squared returns up to lag 20 is highly significant, indicating serial correlation. Finally, all series reject the null of a unit root in the Augmented Dickey-Fuller (ADF) test, showing that all the series are stationary.
In
Figure 1, the daily closing prices of the five cryptocurrencies (left panel) and associated log returns (right panel) are plotted. The graphs clearly show that all cryptocurrency values spiked at the beginning of 2018 before falling off after a year. Volatility clustering is also seen in the log return series. This study considers Normal, skew-Normal, Student-
t and skew-Student-t distributions for innovation terms in the GARCH-type and GAS models.
4.1. Backtesting Results
The VaR backtesting findings for our five sampled cryptocurrencies at the 95% and 99% levels, respectively, are shown in
Table 2,
Table 3,
Table 4,
Table 5 and
Table 6. The critical values of 3.84, 5.99, and 9.49 for the UC, CC, and DQ tests, respectively, are used in all tables to reject the null hypothesis. An AE ratio close to 1.00, on the other hand, indicates that the VaR models are estimated accurately.
The quantile models, such as CAViaR and DQR, outperform the GARCH, EGARCH, GJR, and GAS-type models when analyzing the AE ratio, with values that are closer to 1.00. The quantile models’ values ranged from 0.877 to 1.150, compared with a range of 0.603 to 1.986 for the other models. This demonstrates that the quantile models permit a more precise risk measurement and may even be considered a better choice than the volatility-based VaR models. These models directly model the quantiles of the return distribution without making any distributional assumptions on the residuals, avoiding the risk of model misspecification. As a result, the procedure is more robust in detecting tail behaviors and outliers in the data.
The GARCH, EGARCH, and GJR specifications deliver accurate results for the five cryptocurrencies, passing both the UC and CC tests at the 99% VaR confidence level. The DQ test yielded mixed results, with the CAViaR model passing under the SAV, AS, and IG specifications, whereas the other models failed in at least one instance. These models’ AE ratio values are likewise found to be closer to 1.00, indicating their accuracy in modeling the VaR of cryptocurrencies.
4.2. MCS Procedure Results
Table 7 provides a rating of the models for each individual coin in addition to the results of the MCS process for all cryptocurrencies at 95% and 99% confidence levels. In addition, several competing SSM models that estimate VaR are listed in the table. Each value in this table represents the rank of a model inside SSM. Based on the
t-statistic, the ranks summarize the likelihood of discarding a model. The five cryptocurrencies’ SSM dimensions varied from 8 to 21, which shows that different models have comparable capacities for predicting VaR.
At the 95% confidence level, the GARCH-T specifications achieved high ranks in the MCS procedure for all cryptocurrencies, reaching the top rank for Stellar. The EGARCH-T and EGARCH-ST specifications were rejected several times by the MCS procedure for Stellar and Ethereum. The GJR specifications were not entirely discarded by the MCS procedure, although most of them received lower ranks. Similar findings hold for the GAS-ST specifications for Ethereum and Ripple.
All quantile-based models performed well with strong ranks (1st to 6th), except for the CAViaR-AD model, which was rejected for Stellar and placed 12th for Litecoin. The DQR model, in particular, performed well and consistently for each of the cryptocurrencies.
Similarly, at the 99% confidence level, the MCS procedure discarded GARCH-T and GARCH-ST specifications for Litecoin, while EGARCH-T and EGARCH-ST specifications for Ethereum and Litecoin were discarded multiple times. The GJR-T and GJR-ST specifications for Stellar and Litecoin were eliminated from the MCS procedure. The MCS procedure did not reject the GAS model specification, despite not all of them achieving relatively high ranks.
Similarly, all quantile models performed well with strong ranks (1st to 7th) at a 99% confidence level, except for the CAViaR-AD model, which was rejected for Stellar and Litecoin (and placed 18th for Ripple). Furthermore, the CAViaR-SAV ranked 12th for Ripple. The EGARCH specifications were the worst of all GARCH-type models (especially in the case of Litecoin).
4.3. Aggregate VaR Backtesting Results
This approach pools the different VaR forecasts and assesses the advantage of pooling the individual models within the SSM. All the models in the SSM are distributed into two subgroups. The first group includes the volatility models GARCH, EGARCH, GJR-GARCH, and GAS, while the second group contains the quantile models. After subgrouping, the aggregation procedure is constructed using the uniform and exponential weighting sets, respectively. Each VaR model is weighted equally in the uniform set, whereas the exponential set follows the combination procedure described in Equation (30).
Table 8 and
Table 9 report the backtesting results of the proposed VaR combinations.
The results of the backtesting of the two subgroups represented by a combination of VaR models at the 95% and 99% confidence levels are depicted in
Table 8. DIST-UNI and DIST-EXP denote the aggregation of GARCH, its variants, and the GAS models with uniform and exponential weights, while the aggregation of the quantile models with uniform and exponential weights is denoted by Q-UNI and Q-EXP, respectively.
At a confidence level of 95%, the volatility models (such as GARCH, EGARCH, and GJR) and the GAS models achieved superior results during the aggregation for Bitcoin, Ethereum, and Stellar, with all specifications passing both tests (including the DQ test). Except for Litecoin, the aggregation approach of quantile models yields favorable results for all other cryptocurrencies.
Using the aggregation method, the GARCH-based volatility models and the GAS models produced superior results for Bitcoin, Ethereum, and Stellar at the 99% confidence level. Similarly, the aggregation strategy for the quantile models yielded positive outcomes for the same cryptocurrencies. This reveals that the pooled VaR method improves downside risk estimation compared to estimating VaR with a single model. It also demonstrates that, in the presence of uncertainty regarding the dynamics of an asset’s returns, it is advantageous to use a combination of models, as this increases the probability that the empiricist captures the properties of the asset’s time series and more accurately describes the nature of its downside risks.
While
Table 8 presents the results of the backtesting of the two subgroups represented by a combination of VaR models,
Table 9 displays the results of the VaR aggregation procedure for all SSM models, regardless of group membership. This table contrasts, at the 95% and 99% confidence levels, the uniform weights (UNI) for a single model with the exponential weights (EXP) for the VaR aggregation model. We can compare UNI and EXP using the standard deviation and the AE ratio. In terms of the standard deviation and AE ratio evaluations, the EXP group provides more accurate results than UNI for Bitcoin, Ethereum, and Stellar at a confidence level of 95%.
Similarly, at the 99% confidence level, the EXP group provides more precise results for Bitcoin, Ethereum, and Stellar in terms of the SD and AE ratios than the UNI group. Compared to the individual models, the VaR combination has a lower SD and an AE ratio closer to 1.00. This indicates that the recommended weighted aggregation method mitigates the impact of extreme values and reduces forecasting errors. The final conclusion that can be drawn from these results is that the recommended aggregation method tempers overly optimistic or overly conservative forecasts, generating accurate VaR forecasts.
Nekhili and Sultan (
2020) and
Jiménez et al. (
2020) validate risk measures through backtesting procedures. In contrast to these studies, we conduct a comprehensive analysis utilizing the MCS procedure to select the optimal downside risk model from among various candidates.
Caporale and Zekokh (
2019) estimate risk measures using backtesting and the MCS procedure, similar to our research. Nevertheless, the MCS procedure typically constructs a superior set of models from competing models with identical forecasting abilities and cannot reject models with equivalent quality. In contrast, this study incorporates various VaR forecasts using the weighted average method to produce a single optimal model from the best set of models. When applied to our sampled cryptocurrencies, the results indicate that this method strengthens individual VaR projections.
Maciel (
2021) forecasts the VaR and ES of cryptocurrencies using jump-robust and regime-switching models, whereas our study uses a variety of models in addition to quantile regression models to estimate VaR, especially since quantile regression models have received relatively less attention for analyzing the downside risks of cryptocurrencies. Our findings support the notion that forecast combination strategies can produce more accurate risk management estimates, which can be beneficial for regulators, investors, and other market participants.
5. Conclusions
Price fluctuations of cryptocurrencies have significant effects on economic activity and financial markets. Price declines pose significant risks for cryptocurrency miners, regulatory authorities, businesses, and merchants. Prices of cryptocurrencies can be extremely volatile and incomparable to those of traditional asset classes, such as bonds and equities. Consequently, market participants can incur substantial losses at any given time. In this context, risk measurement is a crucial aspect of risk management, whereby it is critical to adopt an approach that appropriately models and quantifies the price risk exposure of cryptocurrencies. In this study, we measure the risk exposure via VaR estimation and examine the performance of various univariate VaR models for five widely traded cryptocurrencies. We incorporate popular models from the financial literature, including GARCH, GJR, EGARCH, CAViaR, and the GAS model. We also use the DQR that directly estimates the conditional quantile and describes its evolution over time using a regression approach where the coefficients follow a stationary autoregressive stochastic process. We conduct a backtesting analysis for each model, and its predictive ability of VaR forecasting is subsequently tested using the MCS procedure.
This empirical analysis suggests that quantile-based models such as CAViaR and DQR outperform other competitive models for estimating the downside risk of most cryptocurrencies. In addition, integrating multiple VaR models strengthens individual VaR forecasts, highlighting the significance of the weighted average approach. This study’s findings have significant ramifications for investors and risk managers utilizing cryptocurrencies for optimal hedging or investment strategies. First, in the framework of quantile models, distributional assumptions on the error term are unnecessary, reducing the risk of model misspecification. Second, quantile models estimate VaR directly, which is the primary purpose of this analysis. Moreover, combining the GARCH, EGARCH, GJR-GARCH, and GAS models may improve their performance in situations where they perform inadequately individually. This contribution is not exhaustive but demonstrates the usefulness of quantile models for VaR forecasting for the most extensively traded cryptocurrencies. Future research should focus on how risk models can be combined to produce optimal risk forecasts and on the methods and criteria for combining such models. Further, the univariate modeling framework could be extended to a multivariate context, considering potential correlations and interconnections between cryptocurrency markets. Cryptocurrency risk management will undoubtedly be an important field in the years to come, given its growing popularity and the growing demographic involved in active trading.