Jump Driven Risk Model Performance in Cryptocurrency Market

Abstract: This paper aims to identify a validated risk model for the cryptocurrency market. We propose a stochastic volatility model with co-jumps in returns and volatility (SVCJ) to highlight the role of jumps in returns and volatility in affecting Value-at-Risk (VaR) and Expected Shortfall (ES) in the cryptocurrency market. Validation results based on backtesting show that the SVCJ model is superior in terms of the statistical accuracy of VaR and ES estimates, compared to alternative models such as the TGARCH (Threshold GARCH) volatility and RiskMetrics models. The results imply that, for the cryptocurrency market, the best performing model is a stochastic process that accounts for jumps in both returns and volatility.


Introduction
Forecasting volatility is pivotal for developing accurate and realistic risk management models that perform well in good times and in bad. An accurate volatility forecast depends on the assumptions made by the analyst and selection of proper statistical models that can provide a parsimonious representation of the stylized features of the data. When risk management fails, the blame is squarely placed on risk models. According to Bernanke (2008), "Those institutions faring better during the recent turmoil generally placed relatively more emphasis on validation, independent review, and other controls for models and similar quantitative techniques. They also continually refined their models and applied a healthy dose of skepticism to model output". Hence, a crucial task facing a risk manager is to make sure the models are tested, back-tested, and validated to minimize expected losses.
Academics, practitioners, and regulators have commonly used risk models that were deemed sophisticated in terms of forecasting risk. For instance, JPMorgan and Bank of America use historical simulation to estimate their trading risk. Others rely on volatility forecasting models such as GARCH family models, exponentially weighted moving averages, JPMorgan's RiskMetrics, and extreme value theory models. In this respect, academics have provided various reality checks of these models and suggested different versions of the GARCH volatility models by alternating between Normal, Student-t, and Skewed-t distributions in an attempt to better capture tail events and the asymmetry of the data generating process (see, for example, Bauwens and Laurent (2005), Danielsson and Morimoto (2000)).
Other scholars suggested hybrid models combining, for instance, filtered historical simulation with GARCH models or assuming different error terms in the models. Nevertheless, such models require assumptions about the stochastic processes of the underlying asset prices that are subject to validation failure either because of misspecification or the latent characteristic of the parameters, especially during economic downturns.
On a more macro level, it is now evident that risk models remain fundamental for capital requirements as imposed by the Basel regulations. Decision-makers rely on these risk models as long as they have passed validation criteria adopted by financial institutions and regulatory authorities. Three critical model failures have been noted in the literature: the 1992 Deutsche Bank loss of $500 million, the 1998 collapse of Long Term Capital Management (LTCM), and the 2012 "London Whale" 1 debacle of JPMorgan Chase & Co. For the Deutsche Bank loss, the culprit was the assumption of flat volatility to price options and, in the case of the LTCM debacle, the blame was placed on the model's use of the Gaussian copula and the assumption of no contagion (Jorion 2000). 2 Finally, the 2012 loss of $6.2 billion, due to a spreadsheet error in calculating Value-at-Risk (VaR) and operational risk at JPMorgan Chase, highlights why it is important to validate risk models. 3 In light of these historical episodes, it is fitting that scholars shifted their approach to stochastic volatility risk models, postulating that volatility is driven by its own stochastic process that accounts for jump dynamics in the returns rather than relying on skewness or excess kurtosis alone. Such an approach, when pitted against other risk models, outperformed them in both in- and out-of-sample backtesting results (see, for example, Maheu and McCurdy (2005), Su and Hung (2011), and Ze-To (2012)). Their results supported a consensus that jumps cause extreme values in returns and that taking them into consideration provides better VaR forecasts for long and short positions at lower and higher VaR levels. Though such models were successfully validated, they accounted for jumps in the return series but not in volatilities.
In addition, many of these risk models were validated in a portfolio context, and little has been done with individual assets with a stochastic model that accounts for both jumps in returns and volatilities (see, for example, Eraker et al. (2003)).
The challenge, therefore, is to identify the best risk model that has passed some validation criteria using risk measures such as VaR and Expected Shortfall (ES), which remain the building-block of market risk regulations. One typical means for identification of the best risk forecast model is by analyzing violation ratios, which is better known as backtesting. Although some scholars argue that risk model choice is the least concern for decision-makers (see, for example, Danielsson et al. (2016)), the scenario takes a different path when dealing with individual financial assets and considering economic events affecting financial markets.
Risk validation in any financial asset that trades on organized platforms is critical for national and international regulatory bodies that are entrusted with providing a safe and sound environment for financial transactions. To this extent, investor safety is paramount in an assessment of the risks of cryptocurrencies so that proper regulatory controls, if needed, can be designed and implemented. The popular media have declared cryptocurrencies to be among the most volatile assets in financial markets worldwide. Such assertions must be validated using appropriate econometric risk models that incorporate the stylized features of the market to understand the evolution of risk and the factors responsible for it. Most importantly, the structure of the market, transaction costs, market microstructure, price formation, and volatility should be studied within an appropriate risk model. For the emerging cryptocurrency market, where governmental oversight and regulatory structure are still evolving, model risk due to wrong assumptions can lead to wrong conclusions and incorrect policy implementation.
Overall, cryptocurrencies have taken their place in the financial markets and in portfolio management. They may be useful in risk management and ideal for risk-averse investors in anticipation of negative shocks to the market. They are also considered investment assets useful for portfolio diversification and for hedging against movements in other financial assets such as commodities. To sum up, for an investor trying to manage tail risk in cryptocurrencies, choosing an appropriate model is critical for forecasting volatility.
1 The term "London Whale" was based on the enormous size of the bet on credit default swaps made by the London office of the bank's risk management division. 2 In addition, the LTCM model made several critical mistakes, including assuming that returns were normally distributed, and the time period used to establish the risk parameters was rather short. See Jorion (2000) for more. 3 Interestingly, JPM CEO Jamie Dimon had initially described the problem as "a tempest in a teapot".
This paper aims to identify a risk model that is valid for the cryptocurrency market. It also attempts to build on the consensus that cryptocurrencies exhibit extreme volatility that needs to be properly quantified for risk management purposes. The existing literature suggests that both stochastic volatility and jumps in returns are important components of returns in the equity market. Hence, we consider theoretical and applied return models that require the specification of a stochastic volatility component. The model that we select accommodates persistence in volatility and jumps in volatility to address the unpredictable and large movements in the price process. In essence, our objective is to examine whether jumps in returns and volatility can help us predict tail risk and expected shortfall more accurately, and whether they can help us manage expected losses from investing in cryptocurrencies. This particular focus on the volatility structure of the cryptocurrency market is missing from the literature.
Our risk model validation approach starts with a nonparametric test to detect jumps in the dynamics of the price process in the cryptocurrency market. Next, we introduce the price dynamics as inputs in a stochastic model that allows for jumps in both returns and volatility, as well as their correlation. We call this the Stochastic Volatility with Co-Jumps (SVCJ) model. We further study how such a model could be appropriate for risk measurement and compare its Value-at-Risk and Expected Shortfall predictions with competing models that are frequently applied to financial time series. Backtesting criteria are implemented to test the statistical accuracy of the models, followed by an examination of the statistical significance of the differences between the models.
Our results suggest that no one model universally fits all cryptocurrencies. We find that there are jumps in the returns and volatility of returns in the cryptocurrency market, though jump probability estimates vary across currencies. We find evidence of the leverage effect, whereby volatility responds asymmetrically to good news and bad news. Both the SVCJ and TGARCH models produce more accurate forecasts of tail risk and Expected Shortfall (ES) than the popular RiskMetrics model. Finally, the strongest result in the paper is that the proposed SVCJ model produces lower economic losses than the TGARCH and RiskMetrics models. This implies real savings for an investor in covering capital losses when investing in the cryptocurrency market.
The paper proceeds as follows. In Section 2, we discuss the proposed stochastic volatility model with jumps and leverage. In Section 3, we offer empirical results. The final section concludes the paper.

Methodology
An understanding of the volatility process of financial assets is necessary for investors to manage the risks of investing in financial markets. Equally important is that regulators have a more informed view of the underlying volatility structure of these assets so that appropriate regulatory policies can be designed to attract investors and potential new issuers. To this extent, it is important to examine whether assets have time-varying volatility, jumps, autocorrelation, and extreme risk, and how the volatility process responds to good news and bad news in the markets. These issues have been investigated in the literature individually in a disparate manner when they should be addressed simultaneously in an integrated model that allows interaction among these volatility parameters (see Ardia et al. (2019), Bariviera et al. (2017), and Segnon and Bekiros (2019), and references therein). Hence, we adopt a model that can capture quick and persistent movements of the conditional volatility of returns as in Eraker et al. (2003), which was an implementation of the model with jumps in both returns and volatility by Duffie et al. (2000). Such models, with jumps in both returns and stochastic volatility, performed better than competing models with different specifications of the volatility process. A number of papers have examined equity price models with jumps in returns and stochastic volatility (see, for example, Bakshi et al. (1997), Andersen et al. (2002), and Pan (2002)) and made it clear that both stochastic volatility and jumps in returns are important components of the time series properties of financial assets.
Let us begin by defining logP_t as the logarithmic price process with V_t as the stochastic variance. Both processes are assumed to follow continuous paths punctuated by the occurrence of occasional jumps:

dlogP_t = μ dt + √V_t dW^X_t + Z^X_t dN_t,
dV_t = κ(θ − V_t) dt + σ_V √V_t dW^V_t + Z^V_t dN_t,    (1)

where the stochastic volatility V_t has parameters κ and θ, which are the mean reversion rate and mean reversion level, respectively. W^X and W^V are correlated standard Brownian motions with Cov(dW^X_t, dW^V_t) = ρdt. Jumps arrive contemporaneously in both prices and volatility, driven by a single Poisson process N_t with constant intensity λ and jump sizes Z^X_t and Z^V_t. σ_V represents the volatility of volatility and measures the responsiveness of the variance to diffusive volatility shocks.
Because data are observed in discrete time, it is common to use an Euler discretization of the continuous-time process in Equation (1). Assuming a time discretization of one day (dt = 1) and X_t = logP_t − logP_{t−1}, the discrete model, labeled SVCJ, becomes:

X_t = μ + √V_{t−1} ε^X_t + J^X_t B_t,
V_t = V_{t−1} + κ(θ − V_{t−1}) + σ_V √V_{t−1} ε^V_t + J^V_t B_t,

where J^X_t and J^V_t are the correlated jump sizes with J^V_t ∼ Exp(μ_V) and J^X_t | J^V_t ∼ N(μ_X + ρ_J J^V_t, σ²_X), B_t is a Bernoulli(λ) jump indicator, and ε^X_t and ε^V_t are standard normal random variables with correlation ρ. We note that, when ρ_J = 0 and μ_V = 0, the model turns into the stochastic volatility with jumps model of Bates (1996), and, when ρ_J = 0, μ_V = 0, λ = 0, μ_X = 0, and σ_X = 0, the model is the stochastic volatility model of Heston (1993).
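To make the discretization concrete, the following minimal sketch simulates one path of the discretized SVCJ model. The parameter values are illustrative assumptions for the sketch, not estimates from the paper:

```python
import numpy as np

def simulate_svcj(T, mu, kappa, theta, sigma_v, rho,
                  lam, mu_x, sigma_x, mu_v, rho_j, v0, seed=0):
    """Simulate T days of returns X and variance V from the Euler-discretized SVCJ model."""
    rng = np.random.default_rng(seed)
    X = np.zeros(T)
    V = np.zeros(T + 1)
    V[0] = v0
    for t in range(T):
        # correlated diffusive shocks for returns and variance
        eps_x = rng.standard_normal()
        eps_v = rho * eps_x + np.sqrt(1 - rho**2) * rng.standard_normal()
        # common Bernoulli jump indicator with intensity lam
        B = rng.random() < lam
        # correlated jump sizes: J_v ~ Exp(mu_v), J_x | J_v ~ N(mu_x + rho_j * J_v, sigma_x^2)
        J_v = rng.exponential(mu_v)
        J_x = mu_x + rho_j * J_v + sigma_x * rng.standard_normal()
        sqv = np.sqrt(max(V[t], 0.0))  # truncate so the Euler variance stays non-negative
        X[t] = mu + sqv * eps_x + J_x * B
        V[t + 1] = max(V[t] + kappa * (theta - V[t]) + sigma_v * sqv * eps_v + J_v * B, 0.0)
    return X, V[1:]

X, V = simulate_svcj(T=1000, mu=0.001, kappa=0.02, theta=0.002, sigma_v=0.01,
                     rho=0.3, lam=0.05, mu_x=-0.02, sigma_x=0.03, mu_v=0.01,
                     rho_j=-0.5, v0=0.002)
```

The truncation at zero is a common pragmatic fix for the Euler scheme, since the discretized variance is not guaranteed to stay positive the way the continuous-time square-root process is.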
We use a likelihood-based framework for estimating multivariate jump-diffusion models via the Markov Chain Monte Carlo (MCMC) method. This method is based on Bayesian modeling, which requires a likelihood, a prior distribution, and a posterior distribution. Prior distributions are required for the initial volatility state, V_0, and for all parameters governing the dynamics of the volatilities. Moreover, the prior contains information about both the parameters and the structure of the latent processes: the stochastic specifications of the jump sizes and jump times. As in Eraker et al. (2003), the priors are always consistent with the intuition that jumps are "large" and infrequent. More specifically, we choose a prior that places low probability on the jump sizes being small, say less than one percent, and a prior that places low probability on the daily jump probability being greater than 10 percent. In this paper, we generate results with these priors.
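The full MCMC sampler for the SVCJ model is beyond a short sketch, but the role of such a prior can be illustrated on the jump intensity λ alone. In this toy example (the Beta hyperparameters and jump-day counts are hypothetical, chosen only to mimic a prior that keeps the daily jump probability mostly below 10 percent), conjugacy of the Beta prior with the Bernoulli jump indicators makes the posterior available in closed form:

```python
import numpy as np
from scipy.stats import beta

# Beta(2, 40) places most prior mass on daily jump probabilities below 10 percent
a0, b0 = 2.0, 40.0
prior_mass_below_10pct = beta.cdf(0.10, a0, b0)

# hypothetical data: 1500 trading days, 45 of them flagged as jump days
n_days, n_jumps = 1500, 45

# Beta is conjugate to the Bernoulli jump indicator, so the posterior is Beta again
a1, b1 = a0 + n_jumps, b0 + (n_days - n_jumps)
posterior_mean = a1 / (a1 + b1)
```

In the actual sampler, this conditional update for λ is one step inside a Gibbs sweep over all parameters and latent states.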
Next, the forecastability of the SVCJ model is compared to commonly adopted alternative volatility models within the popular GARCH family. For this, and to be in line with the stylized facts that financial time series exhibit leptokurtosis, heavy tails, and autocorrelation, we impose volatility dynamics within the universe of GARCH specifications. We choose the TGARCH specification of Glosten et al. (1993) due to its ability to capture the so-called leverage effect, the tendency of volatility to increase more with negative news than with positive news. Brownlees and Engle (2012) argued that this volatility model has superior forecasting performance over other known volatility models 4 . The model takes into consideration any presence of autocorrelation of order p and is presented as follows:

X_t = φ_0 + φ_1 X_{t−1} + … + φ_p X_{t−p} + u_t,
σ²_t = ω + α u²_{t−1} + γ u²_{t−1} I⁻_{t−1} + β σ²_{t−1},

4 Other volatility forecasting models would include ARCH, GARCH, I-GARCH, GARCH-M, GJR-GARCH, and TARCH, for example. However, it is very tough to generalize the statement because results from the above models may vary due to differences in assets, data, and time period under study. See, for example, Ali (2013).
with u_t ∼ D(0, σ²_t) representing independent and identically distributed shocks with zero mean and time-varying variance, and I⁻_{t−1} = 1 if u_{t−1} < 0, and zero otherwise. In this model, the parameters α and β are, respectively, the ARCH and GARCH coefficients, and the parameter γ captures the leverage effect of the returns. In line with the stylized facts observed in the cryptocurrency market (see, for example, Chan et al. (2017), Caporale and Zekokh (2019), and Ardia et al. (2019)), and because there is a large departure of cryptocurrency returns from normality, we allow the distribution D of the shocks to follow a Student-t or skewed Student-t with ν degrees of freedom.
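The variance recursion, including the leverage term, can be sketched as follows; the coefficients here are illustrative hand-picked values, not the fitted estimates reported later in the paper:

```python
import numpy as np

def tgarch_variance(u, omega, alpha, gamma, beta, sigma2_0):
    """TGARCH(1,1) conditional variance: negative shocks receive the extra gamma term."""
    sigma2 = np.empty(len(u) + 1)
    sigma2[0] = sigma2_0
    for t in range(1, len(u) + 1):
        neg = 1.0 if u[t - 1] < 0 else 0.0  # leverage indicator I_{t-1}
        sigma2[t] = (omega + alpha * u[t - 1]**2
                     + gamma * u[t - 1]**2 * neg
                     + beta * sigma2[t - 1])
    return sigma2[1:]

# a negative shock raises next-day variance more than a positive shock of equal size
s_neg = tgarch_variance(np.array([-0.05]), 1e-5, 0.05, 0.10, 0.85, 4e-4)
s_pos = tgarch_variance(np.array([0.05]), 1e-5, 0.05, 0.10, 0.85, 4e-4)
```

With γ > 0, the asymmetry is immediate: the two shocks have the same magnitude, but only the negative one activates the leverage term.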
We explore whether the forecasts generated from the two models are able to provide an investor with a valid tool to hedge risk. Therefore, we derive VaR and ES using the simulated volatility series while fixing the parameter estimates produced by the models. The one-step-ahead VaR at level τ is defined by

Pr[X_t < VaR^τ_t] = τ,

and, once X falls below VaR^τ, we define the Expected Shortfall as

ES^τ_t = E[X_t | X_t ≤ VaR^τ_t].

To concentrate on a specific return bracket, we adopt a non-parametric technique based on the Filtered Historical Simulation of Barone-Adesi et al. (1999) to simulate 5000 return paths from both the SVCJ and the AR(2)-TGARCH(1,1)∼t models. For the latter, we first standardize returns by quantiles and volatility estimates and then generate return paths serving as the basis for calculating VaR and ES.

Next, we evaluate the accuracy of each model by backtesting the estimated VaR and ES. The backtesting relies on comparing the risk measures estimated by the models under analysis with the actual trading results. The cases in which the actual loss exceeds the VaR estimate are called exceptions. According to Christoffersen (1998), the exception sequence is defined as:

I^τ_t = 1 if X_t < VaR^τ_t, and 0 otherwise,

for t = T + 1, . . . , T + n, where T is the number of return observations used to estimate the VaR of day T + 1, and n is the number of one-step-ahead estimates of that risk measure included in the test. Consequently, Christoffersen's conditional coverage test (LR_cc) for VaR backtesting consists of determining whether the probability of occurrence of an exception, p = Pr[X_t < VaR^τ_t], is significantly different from the defined τ (unconditional coverage test LR_uc) and whether the exception sequence is serially independent (independence test LR_ind) 5 . The likelihood ratio statistic for the test of correct conditional coverage is defined as:

LR_cc = −2 ln[(1 − τ)^{n_0} τ^{n_1}] + 2 ln[(1 − π_01)^{n_00} π_01^{n_01} (1 − π_11)^{n_10} π_11^{n_11}],

where n_0 and n_1 are, respectively, the number of 0s and 1s in the indicator series, and n_ij is the number of observations with value i followed by value j in the I^τ_t series. The value i, j = 0 denotes no violation, while i, j = 1 denotes a violation. The series I^τ_t is assumed to be a first-order Markov process with transition probabilities π_ij = n_ij / Σ_j n_ij 6 . The likelihood ratio LR_cc follows a χ²(2) distribution and tests the independence of exceedances (losses) across time periods. If the sequence of losses is independent, then π_01 = π_11 = p. Hence, this test can reject a model that generates too many or too few violations.
5 Under independence, the probability of an exception does not depend on the previous day's outcome.
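A compact implementation of the conditional coverage test might look as follows. The inputs are synthetic (i.i.d. normal returns against a fixed 5% VaR, so the test should typically not reject); xlogy handles empty transition counts, and LR_cc is formed as LR_uc + LR_ind, as is standard practice:

```python
import numpy as np
from scipy.stats import chi2
from scipy.special import xlogy

def christoffersen_cc(returns, var_forecasts, tau):
    """Christoffersen (1998) conditional coverage test for VaR exceptions."""
    I = (returns < var_forecasts).astype(int)  # exception indicator series
    n1 = I.sum()
    n0 = len(I) - n1
    p_hat = n1 / len(I)
    # transition counts n_ij: value i followed by value j
    n00 = np.sum((I[:-1] == 0) & (I[1:] == 0))
    n01 = np.sum((I[:-1] == 0) & (I[1:] == 1))
    n10 = np.sum((I[:-1] == 1) & (I[1:] == 0))
    n11 = np.sum((I[:-1] == 1) & (I[1:] == 1))
    pi01 = n01 / max(n00 + n01, 1)
    pi11 = n11 / max(n10 + n11, 1)
    pi1 = (n01 + n11) / max(len(I) - 1, 1)
    # unconditional coverage: is the exception rate equal to tau?
    lr_uc = -2 * ((xlogy(n0, 1 - tau) + xlogy(n1, tau))
                  - (xlogy(n0, 1 - p_hat) + xlogy(n1, p_hat)))
    # independence: first-order Markov exceptions vs. iid exceptions
    ll_ind = xlogy(n00 + n10, 1 - pi1) + xlogy(n01 + n11, pi1)
    ll_markov = (xlogy(n00, 1 - pi01) + xlogy(n01, pi01)
                 + xlogy(n10, 1 - pi11) + xlogy(n11, pi11))
    lr_ind = -2 * (ll_ind - ll_markov)
    lr_cc = lr_uc + lr_ind
    return lr_cc, chi2.sf(lr_cc, df=2)

rng = np.random.default_rng(1)
r = rng.standard_normal(1000)
lr, pval = christoffersen_cc(r, np.full(1000, -1.645), tau=0.05)
```

A p-value below the chosen significance level rejects the model, either because the exception rate differs from τ or because exceptions cluster in time.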
Given that a model's VaR passes this test, we then proceed with backtesting the excess loss component, using the 'zero mean test' of McNeil et al. (2005) and the bootstrap method of Efron and Tibshirani (1994), which requires no assumption on the distribution of S = (L − ES^τ) 1_{L > VaR^τ}.
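A bootstrap version of the zero mean test can be sketched as follows: under a correct ES forecast, the exceedance residuals S should average to zero, and resampling them yields a p-value without distributional assumptions. The inputs here are synthetic (losses taken as positive numbers, with the ES forecast computed in-sample so the test should not reject):

```python
import numpy as np

def es_zero_mean_bootstrap(losses, var_f, es_f, n_boot=5000, seed=0):
    """Bootstrap test that the exceedance residuals S = L - ES have zero mean, given L > VaR."""
    rng = np.random.default_rng(seed)
    S = losses[losses > var_f] - es_f  # excess of realized loss over forecast ES
    if len(S) == 0:
        return np.nan
    obs = abs(S.mean())
    # resample centered residuals to approximate the null distribution of the mean
    S0 = S - S.mean()
    boot = np.array([abs(rng.choice(S0, size=len(S), replace=True).mean())
                     for _ in range(n_boot)])
    return (boot >= obs).mean()  # two-sided bootstrap p-value

rng = np.random.default_rng(2)
L = rng.standard_normal(1000)           # synthetic daily losses
var95 = np.quantile(L, 0.95)            # 5% tail of the loss distribution
es95 = L[L > var95].mean()              # in-sample ES, so the test should not reject
p = es_zero_mean_bootstrap(L, var95, es95)
```

A small p-value indicates that realized losses beyond VaR are systematically larger (or smaller) than the forecast ES.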
Lastly, we test the superiority of a model vis-à-vis a competing model with respect to the loss function of Angelidis et al. (2004), using the 'zero median test' of Sarma et al. (2003). The loss function is defined as:

C_t = VaR^τ_t − X_t, if X_t < VaR^τ_t,
C_t = VaR^τ_t − q_τ[X_t]_{T+1}^{T+n}, otherwise,

where q_τ[X_t]_{T+1}^{T+n} is the τ-quantile of the out-of-sample returns used for backtesting. At each time t, C_t increases either by the excess loss, if a violation occurs, or by the difference between the VaR^τ_t forecast and the future quantile. It follows that choosing the more accurate model i over model j, i.e., the one that minimizes the total loss ∑_t C_t, can be decided by testing the hypothesis that the median of the distribution of B_t = C_it − C_jt is equal to 0. Here, B_t is known as the loss differential between model i and model j at time t, and a negative value indicates the superiority of model i over j. This loss function is of practical interest to investors seeking to reduce market risk while avoiding allocating more capital than needed.
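Given the per-day loss series C_it and C_jt of two competing models, the zero median test reduces to a sign test on the loss differential B_t. A minimal sketch with synthetic loss series (constructed so that model i is systematically cheaper, an assumption made only for illustration):

```python
import numpy as np
from scipy.stats import binomtest

def zero_median_test(C_i, C_j):
    """Sign test that the median of B_t = C_it - C_jt is zero."""
    B = C_i - C_j
    B = B[B != 0]  # ties carry no sign information
    n_neg = int((B < 0).sum())  # negative differentials favor model i
    result = binomtest(n_neg, n=len(B), p=0.5, alternative='two-sided')
    return B.mean(), result.pvalue

rng = np.random.default_rng(3)
C_j = rng.exponential(1.0, size=365)              # synthetic losses of model j
C_i = C_j * 0.8 + rng.normal(0, 0.01, size=365)   # model i systematically cheaper
b_mean, pval = zero_median_test(C_i, C_j)
```

Rejection of the zero-median null, together with a negative loss differential, is the evidence used to declare model i superior to model j.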

Data and Empirical Results
In this section, we describe the details of the procedures for the comparison of the previously discussed risk models for the purpose of validation and, for a better understanding of our results, we divide this section into three parts. In the first part, we describe the stylized facts of the sample and conduct preliminary diagnostics. The second part presents the details of the in-sample estimation of the risk models, namely SVCJ, TGARCH, and RiskMetrics (RM). In the third part, we evaluate the out-of-sample forecasting ability of the models in terms of VaR and ES, and then perform backtesting for validation purposes.

Data
Over the last few years, the most prominent aspect of cryptocurrencies in the media has been the realized market volatility. To be fair, the media's infatuation with cryptocurrencies is manifested in the actual market data. Between 26 April 2013 and 16 May 2019, the daily average return on the largest cryptocurrency, Bitcoin (BTC), was 0.3% with a 4.34% standard deviation. There were 174 days with daily returns falling by more than 5%, and 178 days with daily returns increasing by more than 5%. The maximum daily return during this period was 43.58% (19 November 2013), and the largest one-day decline was -23.43% (12 December 2013). On 18 December 2017, the market cap for BTC was $320 billion, and the price had soared to $19,783 (17 December 2017). One year later, the market cap for the currency had declined to $63 billion (28 December 2018). As of this writing (23 May 2019), BTC had a market cap of $138.5 billion. Such large, unprecedented swings in market value can be terrifying for some investors, while others see opportunities. More recently, however, there is much more emphasis on avoiding volatility and promoting the stability of cryptocurrencies to bring some sense of calm to the market. For example, companies like Google, IBM, and Facebook 7 have announced plans to introduce newer coins, and each one claims that its currency will be a more stable asset than the others (Forbes, 16 April 2019).
6 π_01 = Pr[I^τ_t = 1 | I^τ_{t−1} = 0], and π_11 = Pr[I^τ_t = 1 | I^τ_{t−1} = 1]. 7 In fact, Facebook is planning to introduce a cryptocurrency, appropriately named 'Stablecoin', for its "WhatsApp" platform.
We use daily prices of seven successful 8 cryptocurrencies: Bitcoin (BTC), Ripple (XRP), Litecoin (LTC), Stellar (XLM), Monero (XMR), Dash (DASH), and Bytecoin (BCN), all collected from cryptocompare.com 9 . The data span the period 5 August 2014 to 24 March 2019, with a total of 1693 daily observations. Table 1 reports the summary statistics, including the mean, standard deviation, minimum, maximum, skewness, kurtosis, and the p-values of the Ljung-Box test for first-order autocorrelation for all cryptocurrencies. Ripple has the highest mean of 0.24%, and Bytecoin has the highest standard deviation of 11.44%. All cryptocurrencies display excess kurtosis, and the Ljung-Box test shows that the data exhibit first- and second-order autocorrelation, except for Stellar, at the 5% significance level.
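The diagnostics in Table 1 can be reproduced from any return series. The sketch below computes the same statistics on simulated heavy-tailed returns (a Student-t stand-in, since the paper's dataset is not bundled here), with a hand-rolled Ljung-Box Q statistic:

```python
import numpy as np
from scipy.stats import skew, kurtosis, chi2

def ljung_box(x, lags=1):
    """Ljung-Box Q statistic and p-value for autocorrelation up to the given lag."""
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc**2)
    q = 0.0
    for k in range(1, lags + 1):
        rk = np.sum(xc[k:] * xc[:-k]) / denom  # lag-k sample autocorrelation
        q += rk**2 / (n - k)
    q *= n * (n + 2)
    return q, chi2.sf(q, df=lags)

rng = np.random.default_rng(4)
r = rng.standard_t(df=4, size=1693) * 0.04   # heavy-tailed stand-in for daily returns
stats = {"mean": r.mean(), "std": r.std(ddof=1),
         "skew": skew(r), "excess_kurt": kurtosis(r)}
q, p = ljung_box(r, lags=1)
```

A small Ljung-Box p-value indicates serial correlation, which is what motivates the AR terms in the return equations used later.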
The results also show a positive correlation, ρ, between the Brownian motions of returns and volatility for all cryptocurrencies except XRP and DASH, where it is negative. Where the correlation is negative, a negative shock to returns increases volatility, and we can infer that the leverage effect contributes to the effectiveness of fitting the volatility of cryptocurrency returns. Figure 1 displays the jumps in returns and volatility for selected cryptocurrencies with high and low jump intensity: XRP and LTC have high-intensity jumps, while BTC and BCN have low-intensity jumps.
8 Our sample of cryptocurrencies captures market dynamics for various market capitalizations, ranging from high to low. Among the largest market caps (22 May 2019), we have Bitcoin ($136.13 billion) and XRP ($15.88 billion); in the middle market cap category, we have Litecoin ($5.44 billion); and Bytecoin ($0.169 billion) represents the small market cap category. 9 It is important to acknowledge that there are significant differences in the quality of data available at multiple sites, including CoinAPI, Cryptodatadownload, Cryptocompare, Coinmarketcap, and Coingecko. According to Alexander and Dakos (2019), some of these data are traded prices while others are non-traded prices issued by the exchanges, leading to questionable results in empirical studies.
We have also estimated several AR(2) return models with various volatility specifications, namely asymmetric GARCH, IGARCH, TARCH, and GJR-GARCH, and by alternating between Student-t and Skewed Student-t errors. Table A1 (see Appendix A) displays the estimation results of these models for the cryptocurrencies. Each model was ranked on the basis of the log-likelihood function (the higher the better) and the AIC (the lower the better). Overall, the TGARCH with skewed t-distributed errors turns out to be the best volatility fitting model for the cryptocurrencies considered in this paper. These results contradict the findings of Chan et al.
(2017) that IGARCH and GJR-GARCH models provide the best fits for the most popular and largest cryptocurrencies. Table 3 summarizes these results by reporting the AR(2)-TGARCH(1,1)∼Skewed t parameter estimates. The parameters α and β, which represent short-run dynamics, are significant for all cryptocurrencies. This suggests that volatility reacts strongly to market movements and that shocks to the conditional variance take time to die out. The leverage effect γ is statistically significant for all series except XRP, DASH, and BCN. There was no remaining autocorrelation in either the standardized residuals or the squared standardized residuals.
Table 3. Summary of the estimation results of the AR(2)-TGARCH(1,1)∼Skewed t for the cryptocurrencies. Standard errors are in parentheses, and bold indicates insignificance at the 5% and 1% levels.
The estimated volatilities from these three distinctly different models are reported in Figure 2 for BTC, as an example. A visual examination shows that the volatility graphs are markedly different across models. The SVCJ model produces the smoothest plot because it includes all parameters of the volatility series. The plots generated from the remaining models are substantially jagged and show significant structural breaks, which can impede the estimation of tail risk.

Out-of-Sample Validation
We proceed with an out-of-sample comparison of the risk measures and forecasting ability of the two models, SVCJ and TGARCH. Our benchmark model is the RiskMetrics (RM) model of J.P. Morgan (1996). The risk measures VaR and ES were estimated with a rolling window of T − 365 = 1328 daily log-returns, and the remaining 365 days (24 March 2018 to 24 March 2019) are kept for out-of-sample forecasts and accuracy checks. We then simulate 5000 return paths from both models. For the AR(2)-TGARCH(1,1)∼Skewed t model, we used Filtered Historical Simulation, first extracting the standardized residuals using the fitted volatilities to form a new set of innovations, which are then utilized to obtain the conditional mean. These steps are repeated recursively to obtain different simulated pathways, with 5000 draws from the standardized residuals used to generate 1328 (the in-sample size) replicates of the returns.

Table 4 reports the out-of-sample backtesting results. The Christoffersen (1998) conditional coverage test confirms that the two models, SVCJ and TGARCH, accurately forecast the VaR, as the p-values are greater than 5%. There is an exception for XRP, where TGARCH performs better for the 1% VaR. Although the RiskMetrics model displays forecasting accuracy, it occasionally fails for the LTC and XLM cryptocurrencies. Speculative investors taking either a long or short position in a cryptocurrency can thus generate accurate VaR forecasts using these two models. Given the accuracy of the models, Table 5 reports the zero mean test of excess loss, provided that a model first passes the VaR test. The results indicate that the predictive power of the SVCJ model is better than that of the TGARCH and RM models at the 5% level (many of the p-values are less than 5%), suggesting that the forecasts of the TGARCH and RM models offer no significant gains over those of the SVCJ model.
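The filtered historical simulation loop described above can be sketched in simplified form. This version assumes a zero-mean return equation, a constant stand-in volatility series, and illustrative inputs rather than the paper's fitted AR(2)-TGARCH estimates:

```python
import numpy as np

def fhs_var_es(returns, sigma, sigma_next, tau=0.05, n_paths=5000, seed=0):
    """Filtered historical simulation: resample standardized residuals,
    rescale by the volatility forecast, and read VaR/ES off the simulated returns."""
    rng = np.random.default_rng(seed)
    z = returns / sigma                      # filter: standardized residuals
    draws = rng.choice(z, size=n_paths, replace=True)
    sims = sigma_next * draws                # one-step-ahead simulated returns
    var_t = np.quantile(sims, tau)
    es_t = sims[sims <= var_t].mean()
    return var_t, es_t

rng = np.random.default_rng(5)
sigma = 0.03 * np.ones(1328)                 # stand-in for fitted in-sample volatilities
r = sigma * rng.standard_normal(1328)        # in-sample returns
var5, es5 = fhs_var_es(r, sigma, sigma_next=0.05, tau=0.05)
```

Rolling this one-step forecast forward each day, refitting the volatility model on the updated window, produces the out-of-sample VaR and ES series that the backtests evaluate.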
This particular evidence supports our prior that accounting for jumps in returns and volatility is a reason for the SVCJ model's superior predictive power. Table 6 summarizes the test of the best performing model with respect to the quantile loss function of Angelidis et al. (2004). For each cryptocurrency and confidence level, we present the loss differential B and the p-values of the zero median test of Sarma et al. (2003). A p-value of less than 5% implies that the two competing models are significantly different from each other in terms of estimating risk; otherwise, the two competing models are not significantly different with respect to the quantile loss function, and regulators and risk managers remain indifferent between them. The results suggest that, at the 5% level, the SVCJ model is better than the TGARCH and RiskMetrics models because it produces lower economic losses. At the 1% level, some of the results show that a risk manager is indifferent between the models for VaR estimation. For instance, for Bitcoin and Stellar, the SVCJ and TGARCH models are not significantly different from each other with respect to the quantile loss function, while both perform better than the RiskMetrics model. Therefore, as far as loss is concerned, a risk manager would prefer either the SVCJ or the TGARCH model over RiskMetrics. Overall, as noted earlier in Table 5, there is a gap between the quantities of risk measured by VaR and ES at the 1% and 5% confidence levels. This suggests that ES gives a more accurate measure of risk than the traditional VaR measure. This finding supports the recommendation of the Basel Committee on Banking Supervision (2013) that banks use ES in lieu of VaR and that there should be a recalibration of the confidence level for consistency and accuracy of the risk measure.
In terms of forecast accuracy, our results show that SVCJ and TGARCH generate better forecasts at the 1% level than RM. This evidence clearly supports the notion that fat-tailed volatility models can predict risk more accurately than non-fat-tailed models. In summary, the combination of jumps in returns and volatility in a stochastic model yields the most accurate VaR forecasts for the majority of the cryptocurrencies studied in this paper.

Conclusions
It is now a widely accepted view that risk models should account for the stylized facts of the data in order to be successfully validated. Risk estimation has mainly been performed on established financial asset markets, not on the emerging cryptocurrency market, which has proven to be extremely volatile. Typical volatility models may not provide an adequate representation of the cryptocurrency volatility process for successful risk management purposes. In particular, risk models must be able to capture a volatility process that includes stochastic volatility, persistence in volatility, and jumps. All these stylized features are critical for capturing unpredictable and large movements in the price process and for accurately predicting tail risk and expected shortfall. There is limited research on this topic despite the fact that investors are exploring how cryptocurrencies can be integrated into a portfolio along with other traditional assets such as stocks, bonds, currencies, and commodities. Choosing a proper model that provides a parsimonious representation of the distribution of the return-generating process is the first step.
In this paper, we identified risk models for the cryptocurrency market and evaluated their performance for validation purposes. We evaluated models based on stochastic volatility with co-jumps in returns and volatility (SVCJ), threshold GARCH volatility (TGARCH), and RiskMetrics. Backtesting methods using the conditional and unconditional coverage were performed to test the validity of the models, and the regulatory loss function was applied to choose the most accurate model.
The validation results reveal that, although the models considered in this paper are effective for fitting the cryptocurrency returns, the SVCJ model more accurately forecasts risk in a VaR and ES sense, and the reality check proves its superiority over TGARCH and RiskMetrics models. Therefore, incorporating jumps in the cryptocurrency volatility model improves the forecasting ability of risk in terms of VaR and ES. This is important for risk-averse investors and for speculative investors who are particularly interested in hedging their risk in a VaR sense. It is, therefore, recommended to use a model that accounts for jumps, leptokurtosis, and leverage effects when dealing with cryptocurrency market data. Such a model improves risk forecasting in terms of VaR and Expected Shortfall.
The results in this study have several implications for applying the SVCJ model to other assets, including commodities, foreign currencies, and stock market indices, especially in times of stress. The global financial market has seen unprecedented volatility in recent days, given falling oil prices and concerns related to the COVID-19 pandemic. It would be interesting to see whether such wild swings in the market can be studied using the SVCJ model, incorporating the co-jumps in returns and volatility that affect the measurement of VaR and Expected Shortfall in a contagion-like period such as the present one. We leave that for a future study.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A

Table A1. Volatility estimates. Estimation results of various GARCH volatility models with t and Skewed t errors for the cryptocurrencies are reported in this table. For each parameter, we report the standard errors in parentheses, and bold indicates insignificance at the 5% and 1% levels. A higher log-likelihood and a lower AIC indicate the best-fitting model for the cryptocurrencies.