Article

Detecting Stablecoin Failure with Simple Thresholds and Panel Binary Models: The Pivotal Role of Lagged Market Capitalization and Volatility

Moscow School of Economics, Moscow State University, Leninskie Gory, 1, Building 61, 119992 Moscow, Russia
Forecasting 2025, 7(4), 68; https://doi.org/10.3390/forecast7040068
Submission received: 27 October 2025 / Revised: 10 November 2025 / Accepted: 17 November 2025 / Published: 19 November 2025

Highlights

What are the main findings?
  • A simple price threshold of 0.80 is introduced as a novel and robust real-time indicator for stablecoin failure, validated against benchmarks like CoinMarketCap delistings and volume-based methods.
  • Lagged monthly market capitalization and stablecoin volatility are identified as the most significant predictors of default. These coin-specific drivers consistently outperform macroeconomic factors, and the panel Cauchit model with fixed effects (Cauchit FE) delivers the best out-of-sample forecasting performance.
What is the implication of the main findings?
  • Investors and risk managers gain a practical, interpretable framework to assess stability. The 0.80 threshold can be used as a real-time signal to reassess risk or exit positions, while low volatility and high market capitalization serve as key indicators of a stablecoin’s resilience.
  • Regulators can use the proposed threshold and forecasting models to monitor stablecoin stability and potential systemic risks in the DeFi market. The findings also imply that oversight should prioritize stablecoin-specific factors like reserve quality and transparency over broader macroeconomic controls.

Abstract

In this study, we extend research on stablecoin credit risk by introducing a novel rule-of-thumb approach to determine whether a stablecoin is “dead” or “alive” based on a simple price threshold. Using a comprehensive dataset of 98 stablecoins, we classify a coin as failed if its price falls below a predefined threshold (e.g., $0.80), validated through sensitivity analysis against established benchmarks such as CoinMarketCap delistings and the Feder et al. (2018) methodology. We employ a wide range of panel binary models to forecast stablecoins’ probabilities of default (PDs), incorporating stablecoin-specific regressors. Our findings indicate that panel Cauchit models with fixed effects outperform other models across different definitions of stablecoin failure, while lagged average monthly market capitalization and lagged stablecoin volatility emerge as the most significant predictors—outweighing macroeconomic and policy-related variables. Random forest models complement our analysis, confirming the robustness of these key drivers. This approach not only enhances the predictive accuracy of stablecoin PDs but also provides a practical, interpretable framework for regulators and investors to assess stablecoin stability based on credit risk dynamics.
JEL Classification:
C32; C35; C51; C53; C58; G12; G17; G32; G33

1. Introduction

Stablecoins, designed to maintain a stable value typically pegged to $1, have emerged as critical components of decentralized finance (DeFi), powering transactions, lending, and liquidity provision in digital asset markets. With billions of dollars locked in DeFi protocols, stablecoins facilitate seamless interactions across blockchain ecosystems. However, their promise of stability has been repeatedly tested by high-profile failures. The collapse of TerraUSD in 2022, where the price fell precipitously below the $0.80 threshold we validate in this study, exposed catastrophic credit risks and triggered widespread market disruptions. These events underscore the urgent need for robust, real-time tools to assess and predict stablecoin stability, both to protect investors and to address concerns about systemic risks in digital markets raised by regulators such as the Financial Stability Oversight Council [1,2].
Despite their importance, the literature on stablecoin risk assessment remains limited in offering practical, real-time solutions. Prior studies, such as [3], focus on peg maintenance mechanisms, highlighting the role of reserves and arbitrage in ensuring stability. Others, like [2], warn of systemic risks from algorithmic stablecoins prone to runs, while [4] emphasize liquidity and regulatory challenges. However, these works often overlook simple, interpretable methods for detecting failures early. For instance, volume-based criteria, such as those adapted from [5] for cryptocurrencies, rely on lagging indicators like trading volume drops, which fail to capture depegging events in real time. Moreover, existing models rarely prioritize coin-specific predictors, such as market capitalization and volatility, which drive stablecoin survival more strongly than macroeconomic factors; see [6].
This study addresses these gaps by proposing a straightforward, data-driven framework to detect stablecoin failures and forecast probabilities of default (PDs). First, we introduce a novel rule-of-thumb: a stablecoin is classified as “dead” if its price falls below $0.80, a threshold validated against CoinMarketCap delistings and volume-based benchmarks. This approach enables real-time monitoring, overcoming the delays of traditional indicators. Second, we develop a large suite of panel binary models, including Cauchit fixed-effects (FE) and random-effects (RE) models, to forecast PDs, with lagged monthly market capitalization and stablecoin volatility as primary predictors. These models achieve superior out-of-sample performance, with AUC up to 0.947 and H-measure up to 0.842 for 30-day forecasts. Third, we validate the framework’s robustness using a time series-based Zero Price Probability (ZPP) model to forecast PDs, as well as different categorizations of stablecoins into long-lived and short-lived groups, depending on the length of their time series.
Beyond traditional binary response models, recent research in financial econometrics has increasingly emphasized frameworks that represent market conditions or asset states through latent binary or multi-state processes evolving over time. In particular, hidden Markov models (HMMs), Markov-switching regressions, and other state-space approaches have been employed to capture discrete shifts between stable and turbulent regimes in returns, volatility, or liquidity [7,8,9]. Similar probabilistic state representations have also been applied to the cryptocurrency market to identify regime transitions associated with crashes or depegging episodes [10,11]. These models share a conceptual link with the threshold-based binary framework adopted in this paper: in both cases, the system alternates between discrete states driven by past information. However, while regime-switching or state-space models infer transitions probabilistically, the present analysis relies on explicit and optimized threshold rules that directly map observable predictors (such as lagged market capitalization and volatility) into failure or survival states. Our threshold-based classification approach provides a transparent, real-time analog to these more complex state representations, offering practical advantages for regulatory monitoring and risk management applications.
The contributions of this study are threefold: it provides a simple, transparent $0.80 price threshold for immediate failure detection, a set of high-performing panel models for PD forecasting, and robustness checks that reinforce the importance of stablecoin market capitalization and volatility over external factors, aligning with S&P Global Ratings’ [12] emphasis on stablecoin reserve quality. These tools offer practical value for investors seeking to manage credit risk and regulators monitoring DeFi stability, addressing critical gaps in the literature. By focusing on coin-specific dynamics and real-time applicability, this study provides an actionable framework to navigate the complex and rapidly evolving stablecoin market.
The remainder of the paper is structured as follows: Section 2 reviews the stablecoin literature, while Section 3 describes the methodology. Section 4 presents the empirical analysis and its implications, and Section 5 discusses a series of robustness checks. Section 6 concludes.

2. Literature Review

Early studies on stablecoins focused on their design and stabilization mechanisms. Ref. [3] examined the factors maintaining stablecoin pegs, highlighting the role of reserve backing and market confidence in fiat-collateralized stablecoins like Tether (USDT) and USD Coin (USDC). They found that deviations from the peg are often temporary and corrected through arbitrage, though transparency in reserve management remains a persistent concern. In contrast, algorithmic stablecoins, which rely on smart contracts to adjust supply and demand, have been critiqued for their inherent fragility. Ref. [2] likened these to historical private currencies prone to runs—a view reinforced by the TerraUSD collapse, where a loss of investor confidence triggered a rapid depeg and systemic contagion within DeFi markets.
Building on this foundation, Ref. [6] marked a significant advancement by explicitly estimating the credit risk of stablecoins, in what was the first study to focus exclusively on this market segment. Using the methodology of [5], they identified that 21% of a sample of 121 stablecoins were “abandoned” at least once, with only 36% achieving subsequent “resurrection” and 11% maintaining that status. They employed structural break tests to detect significant peg deviations, finding an average of 10 days between a break and a coin’s collapse or stabilization. Probabilities of default (PDs) were estimated using market-capitalization-based forecasting models, revealing that stablecoins on robust blockchains like Ethereum exhibited lower default risk. These findings underscored the importance of coin-specific factors—such as market capitalization and blockchain ecosystem strength—in assessing credit risk, beyond the broader market dynamics explored in earlier works.
Prior financial literature has examined stablecoin credit risk, particularly in the context of regulatory and financial stability concerns. Ref. [4] analyzed stablecoins’ role within the crypto-asset ecosystem, noting their critical function as liquidity providers in DeFi and their potential to transmit risks to traditional financial systems. They highlighted operational, liquidity, and settlement risks, exacerbated by inadequate redemption policies among major issuers like Tether, which imposes weekly redemption limits or high minimum thresholds. This evidence aligns with recent observations of stablecoin vulnerabilities highlighted by [6] and motivates our proposal of a rule-of-thumb threshold to detect failure events.
More recently, the 2024 Annual Report by the U.S. Financial Stability Oversight Council (FSOC) [1], released in December 2024, warned that stablecoins remain a potential risk to financial stability, citing their vulnerability to runs in the absence of robust risk management standards. This report underscores ongoing policy attention to credit risk, complementing our empirical focus. Additionally, S&P Global Ratings [12] evaluated the creditworthiness of USDC, emphasizing reserve transparency and liquidity management as key stability drivers, though it stopped short of econometric modelling. These works reinforce the relevance of stablecoin-specific variables—such as market capitalization and volatility—over external factors, a hypothesis we test in this study.
Econometric approaches to stablecoin credit risk have also evolved. Ref. [6] employed the Zero-Price Probability (ZPP) model and the Proportional Hazards model of [13]—a widely used framework in survival analysis—to forecast stablecoin PDs. Ref. [14] modeled the probability of death for over two thousand cryptocurrencies, including a small number of stablecoins, using credit scoring models, machine learning, and time-series-based methods, identifying market capitalization as a critical predictor. Interestingly, he also found that the (pooled) Cauchit model and the ZPP model were the best approaches for newly established coins, whereas credit-scoring models and machine-learning methods were better suited for older coins. Given this evidence and our available dataset, we extend the Cauchit model to a panel setting to capture potential unobserved heterogeneity.
The literature has also explored alternative predictors of stablecoin stability. Ref. [15] modeled multivariate market and credit risks for cryptocurrencies, finding volatility to be a significant driver—a result we confirm for stablecoins in this study. Conversely, macroeconomic variables such as policy indices or Bitcoin volatility, while influential in broader crypto markets [4], appear less critical for forecasting stablecoin PDs, a distinction we investigate further. Economic intuition suggests that short-term lags (e.g., daily) capture immediate market shocks affecting peg stability, while longer lags (e.g., monthly) reflect persistent trends in investor confidence and liquidity, as stablecoins are less volatile than other cryptocurrencies [3]. Moreover, Ref. [3] examined how improved arbitrage mechanisms stabilize stablecoin prices and consistently highlighted market capitalization and volatility as dominant features.
In summary, the stablecoin literature has evolved from design analyses to PD modeling, emphasizing coin-specific variables under regulatory scrutiny. Our study advances this with threshold-based detection and panel models.

3. Methodology

This section outlines the methodology employed to assess stablecoin credit risk and forecast their probabilities of default (PDs). It is structured into three subsections: detecting stablecoin failure using threshold rules, modelling PDs with panel binary and random forest models, and evaluating forecasting performance with specific metrics.

3.1. Detecting Stablecoin Failure with Simple Thresholds

We introduce a novel rule-of-thumb approach to classify stablecoins as dead or alive, based on a simple price threshold applied to their closing prices. A stablecoin is deemed dead (Status = 1) if its price falls below a predefined threshold (e.g., $0.80), and alive (Status = 0) otherwise.
Economically, a threshold like $0.80 reflects the point where depegging erodes investor confidence, triggering potential runs or liquidity crises, as stablecoins’ value proposition relies on near-$1 parity; we validate this choice through a sensitivity analysis against benchmarks such as CoinMarketCap delistings. To operationalize the rule, we define Status variables across a range of thresholds from $0.05 to $0.95 in increments of $0.05, using two panel datasets: stablecoins with fewer than 730 observations (11,558 observations across 39 coins) and stablecoins with more than 730 observations (76,176 observations across 59 coins). We compare our approach to existing benchmarks, such as the CoinMarketCap delisting criterion and the methodology proposed in [5], both of which serve as reference classifications in the sensitivity analysis below.
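For concreteness, the classification rule reduces to a one-line comparison per threshold. The following is a minimal R sketch, where the data frame `panel` and its columns `coin`, `date`, and `close` are hypothetical placeholders for our dataset:

```r
# Minimal sketch of the rule-of-thumb classification across the threshold grid.
thresholds <- seq(0.05, 0.95, by = 0.05)

# Status = 1 ("dead") if the closing price is below the threshold, 0 otherwise
for (tau in thresholds) {
  col <- sprintf("status_%.2f", tau)
  panel[[col]] <- as.integer(panel$close < tau)
}
```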
CoinMarketCap, a leading cryptocurrency data aggregator, delists a crypto-asset when it no longer meets specific listing criteria, such as sufficient trading volume, liquidity, or project activity. See the detailed requirements here: (https://coinmarketcap.com/academy/glossary/delisting, accessed on 15 November 2025). Delisting typically occurs when a coin is deemed inactive or abandoned by its developers, often following a significant decline in market relevance or a failure to maintain its peg (in the case of stablecoins). In our sample of 98 stablecoins, 25 were delisted from CoinMarketCap by the end of the observation period, providing a benchmark for “dead” status. However, delisting is a lagging indicator, as it reflects decisions made after a coin’s decline and lacks transparency regarding exact thresholds, necessitating alternative methods for real-time detection.
The methodology in [5] offers another approach to identify crypto-asset failure. However, its original approach, which identifies failure based on price peaks, is unsuitable for stablecoins designed for price stability; see [6] and references therein. To address this, we adapt their method to focus solely on trading volume. In our modified approach, a “candidate peak” is defined as the date when the 7-day rolling average trading volume surpasses any recorded volume within a surrounding 30-day period, both forward and backward. Following a similar filtering process to the original method [5], we retain only those volume peaks that exceed the lowest observed volume in the preceding 30 days by at least 50% and account for at least 5% of the stablecoin’s all-time maximum trading volume. Once these volume-based peaks are identified, each is assessed against subsequent daily trading volumes, and we apply the final classification rules unchanged: a stablecoin is deemed “dead” if its average daily trading volume falls below 1% of its peak volume. Conversely, a previously “dead” stablecoin is reclassified as “resurrected” if its trading volume recovers to exceed 10% of its peak volume. This volume-centric adaptation provides a more effective measure of stablecoin abandonment.
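A compact R sketch of this adapted procedure, under the parameters stated above, is given below; the vector `vol`, holding one stablecoin’s daily trading volumes, is a hypothetical input, and the exact windowing conventions are one reasonable reading of the rules:

```r
# Sketch of the volume-only adaptation of Feder et al. (2018).
roll7 <- as.numeric(stats::filter(vol, rep(1/7, 7), sides = 1))  # 7-day average
n <- length(vol)

# Candidate peak: the 7-day average exceeds every raw volume recorded in the
# surrounding 30 days (backward and forward)
is_peak <- sapply(seq_len(n), function(t) {
  win <- setdiff(max(1, t - 30):min(n, t + 30), t)
  !is.na(roll7[t]) && roll7[t] >= max(vol[win])
})

# Filtering: keep peaks at least 50% above the lowest volume of the preceding
# 30 days and at least 5% of the all-time maximum volume
peaks <- Filter(function(t) {
  prev <- vol[max(1, t - 30):(t - 1)]
  roll7[t] >= 1.5 * min(prev) && roll7[t] >= 0.05 * max(vol)
}, which(is_peak))

# A coin is then flagged "dead" while its average daily volume is below 1% of
# the reference peak, and "resurrected" once it recovers above 10% of it.
```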
An important caveat of the volume-based Feder criterion, and any methodology relying on reported trading volume in cryptocurrency markets, is the potential impact of wash trading and other forms of volume manipulation. Wash trading—the practice of simultaneously buying and selling an asset to create artificial activity—is a known issue on some cryptocurrency exchanges [16,17]. This manipulation can artificially inflate reported trading volumes, creating a misleading impression of liquidity and market interest. In the context of our adapted Feder method, sustained wash trading could potentially delay the classification of a stablecoin as “dead” by keeping its average daily trading volume artificially above the 1% of peak volume threshold, even if genuine economic activity and organic market demand have evaporated. This inherent vulnerability to data manipulation further motivates the need for alternative, more robust failure detection methods, such as our price-threshold approach, which is based on the more directly observable and economically fundamental metric of price stability.
In this study, we applied only the volume-based Feder et al. method [5] to our sample of 98 stablecoins, alongside our threshold-based approach, to compare their effectiveness. Table 1 presents these classifications for all 98 stablecoins, displaying the CoinMarketCap (final) listing status, the outcomes from the volume-based approach [5] (number of deaths, resurrections, and final status), and the results of our price threshold method at $0.80 (number of deaths, resurrections, and final status).
For validation, we performed a sensitivity analysis comparing our threshold-based method to the CoinMarketCap and Feder benchmarks. We considered two classification criteria: Final Status (a stablecoin is classified as dead if its price remains below the threshold at the end of the sample) and Dynamic Threshold (a stablecoin is classified as dead if its price drops below the threshold at least once during its lifetime). We computed overlap percentages for dead coins, alive coins, and overall accuracy (i.e., both dead and alive combined). Table 2 reports these results for thresholds ranging from $0.05 to $0.95, using CoinMarketCap (25 dead, 73 alive) and the Feder volume-only method (39 dead/59 alive for Final Status and 41 dead/57 alive for Dynamic Threshold) as reference classifications.
At a threshold of $0.80, our approach identifies 25 stablecoins as “dead” under the Final Status criterion, with an overlap of 52% with CoinMarketCap delistings and 41.03% with the method proposed by [5]. When applying the Dynamic Threshold criterion, our method identifies 32 dead stablecoins, increasing the overlap to 56% with CoinMarketCap and 53.66% with the Feder method. Raising the threshold to $0.85 increases the proportion of detected dead coins (60% and 72% overlap with CoinMarketCap for the Final and Dynamic criteria, respectively), but decreases the overlap for alive coins. This trade-off suggests that $0.80 serves as a balanced and practical threshold for detecting stablecoin failure.
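The overlap percentages themselves reduce to simple conditional means. A sketch of one such convention (taking the benchmark’s dead set as the reference) follows, where `ours` and `benchmark` are hypothetical 0/1 vectors with one entry per stablecoin (1 = dead):

```r
# Overlap of our threshold classification with a reference classification
overlap_dead  <- mean(ours[benchmark == 1] == 1) * 100  # agreement on dead coins
overlap_alive <- mean(ours[benchmark == 0] == 0) * 100  # agreement on alive coins
accuracy      <- mean(ours == benchmark) * 100          # overall agreement
```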
This specific threshold is further supported by a straightforward price distribution analysis, where we examine histograms of both the final observed prices and the minimum prices of stablecoins over their respective time series. This visualization, presented in Figure 1, illustrates where most stablecoins’ prices lie relative to the $0.80 threshold, reinforcing its empirical relevance as a benchmark for classification.
At a conceptual level, the choice of a $0.80 threshold is rooted in established financial principles where a 20% deviation from a target price is widely recognized as a signal of a structural break or bear market. For a stablecoin, this threshold is not arbitrary: it represents a critical point where its primary function—to maintain its peg—has fundamentally failed. Economically, a persistent drop to $0.80 erodes the coin’s perceived ‘moneyness’ [3] and often triggers automated risk-management protocols, margin calls, and liquidation cascades, particularly in leveraged DeFi environments. It serves as a psychological coordination point where investors collectively reassess the asset’s credibility, potentially leading to a terminal loss of confidence, as exemplified by the TerraUSD collapse [18]. This threshold thus provides an economically meaningful and empirically observable boundary for classifying failure.
Therefore, by leveraging both sensitivity analysis and price distribution insights, our approach offers a real-time, transparent, and interpretable alternative to existing failure detection methods. Unlike CoinMarketCap’s delisting process, which operates with inherent delays, or the volume-based method in [5], which relies solely on trading activity, our price-based threshold provides an immediate and robust indicator of stablecoin distress. This novel approach enhances risk assessment by offering a clear, data-driven framework that can be easily implemented by researchers, investors, and regulators seeking timely insights into stablecoin stability.

3.2. Models for Stablecoin Probability of Default Forecasting

We briefly discuss the models employed to forecast the probability of default (PD) for stablecoins. Following [14,15,19], direct forecasts were computed using lagged regressors. Specifically, 1-day lagged regressors were used to forecast the 1-day ahead PD, 30-day lagged regressors for the 30-day ahead PD, and for stablecoins with over 730 observations, 365-day lagged regressors for the 365-day ahead PD. These specific forecast horizons were chosen to capture distinct economic risk dimensions: 1-day ahead for immediate market shocks and liquidity stress, 30-day ahead to align with medium-term portfolio and reporting cycles, and 365-day ahead to assess long-term viability, analogous to traditional annual credit risk. The regressors included lagged changes in market capitalization, stablecoin historical volatility, Bitcoin implied volatility, and economic policy uncertainty indexes (further details are provided in the Empirical section below).
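As an illustration, the direct-forecast design can be built by lagging each regressor by the forecast horizon within coins. The sketch below assumes a hypothetical data frame `panel` with columns `coin`, `date`, `status`, and illustrative regressor names:

```r
# Sketch of the direct-forecast design: today's status is matched with
# regressors lagged by h days, separately for each coin.
library(dplyr)

make_direct <- function(panel, h) {
  panel %>%
    group_by(coin) %>%
    arrange(date, .by_group = TRUE) %>%
    mutate(across(c(d_mcap_M, sigma_M, iv_btc_M, epu_M),
                  ~ dplyr::lag(.x, h), .names = "{.col}_lag")) %>%
    ungroup()
}

train_1d  <- make_direct(panel, 1)    # 1-day-ahead design
train_30d <- make_direct(panel, 30)   # 30-day-ahead design
```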
The following models estimate the probability of default $P(y_{it} = 1)$ for stablecoin $i$ at time $t$. The vector $x_{it}$ contains the lagged regressors for stablecoin $i$ at time $t$.
  • Pooled Logit Model: The pooled logit model assumes homogeneity across stablecoins, ignoring the panel structure. The probability of default $P(y_{it} = 1 \mid x_{it})$ is modeled as:
    $$P(y_{it} = 1 \mid x_{it}) = \frac{\exp(x_{it}'\beta)}{1 + \exp(x_{it}'\beta)},$$
    where $y_{it}$ is the binary outcome (1 for default, 0 otherwise), $x_{it}$ is the vector of regressors, and $\beta$ is the vector of coefficients.
  • Panel Logit Model with Fixed Effects: This model accounts for unobserved heterogeneity across stablecoins by including individual-specific intercepts $\alpha_i$:
    $$P(y_{it} = 1 \mid x_{it}, \alpha_i) = \frac{\exp(\alpha_i + x_{it}'\beta)}{1 + \exp(\alpha_i + x_{it}'\beta)}$$
  • Panel Logit Model with Fixed Effects and Asymptotic Bias Correction: Fixed effects logit models suffer from the incidental parameters problem, leading to biased estimates in short panels. To mitigate this, asymptotic bias correction methods based on [20] and [21,22] were applied. These methods adjust the estimated coefficients to reduce bias:
    $$\hat{\beta}_{\text{corrected}} = \hat{\beta}_{\text{FE}} + \text{bias correction},$$
    where $\hat{\beta}_{\text{FE}}$ is the fixed effects estimator. See also [23] for more details.
  • Conditional Logit Model: This model eliminates the individual fixed effects by conditioning on the sum of $y_{it}$ for each stablecoin. In the single-event case, the probability is given by:
    $$P\Big(y_{it} = 1 \,\Big|\, \textstyle\sum_{t=1}^{T} y_{it} = 1\Big) = \frac{\exp(x_{it}'\beta)}{\sum_{s=1}^{T} \exp(x_{is}'\beta)}$$
    The denominator sums over all time observations for each stablecoin, representing the conditional probability given that exactly one default of the stablecoin occurs.
  • Panel Logit Model with Random Effects: The random effects model treats individual-specific effects as random variables drawn from a distribution, typically normal:
    $$P(y_{it} = 1 \mid x_{it}, \alpha_i) = \frac{\exp(\alpha_i + x_{it}'\beta)}{1 + \exp(\alpha_i + x_{it}'\beta)},$$
    where $\alpha_i \sim N(0, \sigma_\alpha^2)$.
  • Pooled Cauchit Model: The pooled Cauchit model uses the Cauchy cumulative distribution function instead of the logistic function:
    $$P(y_{it} = 1 \mid x_{it}) = \frac{1}{\pi}\arctan(x_{it}'\beta) + \frac{1}{2}$$
  • Panel Cauchit Model with Fixed Effects: This model extends the pooled Cauchit model by incorporating individual-specific intercepts:
    $$P(y_{it} = 1 \mid x_{it}, \alpha_i) = \frac{1}{\pi}\arctan(\alpha_i + x_{it}'\beta) + \frac{1}{2}$$
  • Panel Cauchit Model with Random Effects: This model introduces random effects $\alpha_i \sim N(0, \sigma_\alpha^2)$ into the Cauchit framework:
    $$P(y_{it} = 1 \mid x_{it}, \alpha_i) = \frac{1}{\pi}\arctan(\alpha_i + x_{it}'\beta) + \frac{1}{2}$$
  • Random Forests Model: In addition to the panel binary models, a random forests model was employed. This non-parametric ensemble method builds multiple decision trees on random subsets of the data and averages their predictions. It was selected due to its proven effectiveness in forecasting PDs of cryptocurrencies, crypto exchanges, and in detecting pump-and-dump schemes involving crypto assets, as documented in [14,24,25].
The model was implemented using the randomForest package in R. We set the number of trees to $B = 500$ to ensure stable predictions. All other hyperparameters were set to the package’s default values, which are standard in the literature and yielded robust performance. The key defaults include: the number of variables randomly sampled as candidates at each split (‘mtry’) was set to $\lfloor\sqrt{p}\rfloor$, where $p$ is the total number of predictors; the minimum size of terminal nodes (‘nodesize’) was 1; and no maximum depth was enforced for the trees (‘maxnodes = NULL’). The Gini impurity index was used as the splitting criterion. Preliminary tuning indicated that the model’s performance was robust to variations in these parameters; therefore, we proceeded with these standard and well-established settings.
The predicted PD is given by:
$$\hat{P}(y_{it} = 1 \mid x_{it}) = \frac{1}{B}\sum_{b=1}^{B} T_b(x_{it}),$$
where $T_b(\cdot)$ is the $b$-th tree in the ensemble.
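A minimal sketch of this estimation, reusing the hypothetical training design `train_30d` from the lag construction above and a hypothetical hold-out frame `test_30d`:

```r
# Random forest PD model with the settings described above; the regressor
# names are hypothetical placeholders for our lagged variables.
library(randomForest)

set.seed(123)
rf_fit <- randomForest(
  factor(status) ~ d_mcap_M_lag + sigma_M_lag + iv_btc_M_lag + epu_M_lag,
  data = train_30d, ntree = 500        # mtry, nodesize left at package defaults
)

# Predicted PD: the share of the 500 trees voting for class "1" ("dead")
pd_hat <- predict(rf_fit, newdata = test_30d, type = "prob")[, "1"]
```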
We remark that the inclusion of the Cauchit model is motivated not only by its empirical performance but also by its theoretical suitability for the dynamics observed in cryptocurrency markets. The Cauchit specification, which employs the cumulative distribution function of the Cauchy distribution as its link function, features much heavier tails than the Logit or Probit alternatives. This heavy-tailed structure makes it more robust to outliers and extreme realizations, features that are pervasive in cryptocurrency data due to abrupt price jumps, liquidity shocks, and irregular trading activity. In such environments, the Cauchit link can better accommodate extreme values of the latent variable without allowing them to disproportionately influence coefficient estimates or predicted probabilities, thereby improving robustness and interpretability [26,27].
From a conceptual perspective, the Cauchit model is particularly well suited for modelling binary outcomes characterized by long periods of stability followed by sudden regime shifts, such as stablecoin defaults. For most of their lifespan, stablecoins exhibit near-zero probabilities of failure that remain largely insensitive to small fluctuations in volatility or market capitalization. However, when confidence erodes, due to events like a breakdown in arbitrage mechanisms or a run on reserves, the same explanatory variables can trigger a rapid, nonlinear jump in the probability of default. The fatter tails of the Cauchit link function capture this abrupt transition more effectively than the smoother response of the Logit model, allowing for a more realistic representation of these threshold-driven dynamics [28].
Moreover, the Cauchit model’s slower tail decay allows for flexible modelling of rare but economically meaningful events, where the probability mass in the extremes carries critical information about systemic fragility. This property makes it especially appropriate for stablecoin markets, where extreme depegging episodes (such as the TerraUSD collapse) represent defining moments for the system’s stability. Its robustness to heavy-tailed distributions ensures more stable parameter estimates even in the presence of volatile price movements or sudden market capitalization changes. Additionally, when applied to panel data, the Cauchit specification accommodates unobserved heterogeneity across stablecoins with varying lifespans, further enhancing its suitability for this context. These theoretical advantages, together with its strong empirical performance with cryptoassets [14], justify its inclusion alongside the Logit and Probit models as a robustness check against thin-tailed assumptions.
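In R, the Cauchit link is available natively in the binomial family, so a pooled Cauchit model and a dummy-variable fixed-effects variant can be sketched as follows (variable names are hypothetical, as above):

```r
# Pooled Cauchit model
pooled_cauchit <- glm(status ~ d_mcap_M_lag + sigma_M_lag,
                      family = binomial(link = "cauchit"), data = train_30d)

# Fixed effects via coin dummies (subject to the incidental-parameters bias
# discussed above in short panels)
cauchit_fe <- glm(status ~ d_mcap_M_lag + sigma_M_lag + factor(coin),
                  family = binomial(link = "cauchit"), data = train_30d)

pd_pooled <- predict(pooled_cauchit, type = "response")  # in-sample PDs
```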

3.3. Evaluation Metrics for Binary Models

To assess the predictive performance of our models in forecasting stablecoin probabilities of default (PDs), we employ several standard evaluation metrics for binary classification: the area under the receiver operating characteristic curve (ROC AUC), the H-measure of [29], the Brier score [30], and classification accuracy, sensitivity, and specificity under different thresholding approaches.
  • ROC AUC
The receiver operating characteristic (ROC) curve plots the true positive rate (TPR) against the false positive rate (FPR) at varying classification thresholds. The true positive rate, also known as sensitivity or recall, is given by:
$$\mathrm{TPR} = \frac{TP}{TP + FN},$$
where TP (true positives) represents correctly classified positive cases, and FN (false negatives) denotes misclassified positive cases. The false positive rate is defined as:
$$\mathrm{FPR} = \frac{FP}{FP + TN},$$
where FP (false positives) represents misclassified negative cases, and TN (true negatives) denotes correctly classified negative cases. The area under the ROC curve (AUC) summarizes the overall performance, with a value of 0.5 indicating random guessing and 1 representing perfect classification; see [31] for more details.
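The AUC can be computed directly from its rank (Wilcoxon) formulation, i.e., the probability that a randomly drawn dead coin-day receives a higher score than a randomly drawn alive one; a minimal sketch:

```r
# AUC via the Mann-Whitney rank formulation; mid-ranks handle tied scores
auc <- function(score, y) {
  r <- rank(score)
  n1 <- sum(y == 1); n0 <- sum(y == 0)
  (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

auc(pd_hat, test_30d$status)   # 0.5 = random guessing, 1 = perfect ranking
```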
  • H-measure
The H-measure, introduced by [29], addresses key limitations of the ROC AUC metric by explicitly incorporating application-specific misclassification costs. While AUC evaluates a classifier’s ability to rank positive instances above negative ones across all possible thresholds, it does not account for situations where ROC curves intersect or where certain regions of the curve are more relevant—such as minimizing false positives in financial risk assessment. The H-measure overcomes these limitations by defining an optimal classification threshold $T(c)$ that minimizes the expected misclassification cost, given by:
$$T(c) = \underset{t}{\operatorname{argmin}}\ \left\{ c\,\pi_0\,\big(1 - F_0(t)\big) + (1 - c)\,\pi_1\,F_1(t) \right\},$$
where $c = c_0/(c_0 + c_1)$ represents the severity ratio, which adjusts the relative misclassification costs for the two classes $c_i$, while $\pi_i$ denotes the class priors and $F_i(t)$ the cumulative distribution function of predicted scores for class $i$. The misclassification loss at a threshold $t$ is then:
$$Q(t; b, c) = b\left\{ c\,\pi_0\,\big(1 - F_0(t)\big) + (1 - c)\,\pi_1\,F_1(t) \right\}.$$
The general loss is obtained by substituting the optimal threshold $T(c)$ from (12) into (13). This value is then weighted using a Beta severity distribution $u(c)$ and integrated over all possible severity ratios $c$ [32]:
$$L_{\alpha,\beta} = \int Q\big(T(c); b, c\big)\, u_{\alpha,\beta}(c)\, dc,$$
where the Beta distribution parameters are defined as $\alpha = \pi_1 + 1$ and $\beta = \pi_0 + 1$.
The H-measure is then defined as the normalized ratio of the general loss to the worst-case loss scenario:
$$H = 1 - \frac{L_{\alpha,\beta}}{L_{\max}}.$$
This formulation allows the H-measure to adapt to real-world cost structures, making it particularly useful in domains such as fraud detection and credit risk modelling, where false positives and false negatives carry asymmetric consequences.
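In practice, an existing implementation can be used. The following is a sketch relying on the CRAN package hmeasure (assumed available, with the `HMeasure` interface shown), which implements the measure of [29] with the Beta severity distribution described above:

```r
# H-measure of the forecasted PDs against the observed statuses
library(hmeasure)

hm <- HMeasure(true.class = test_30d$status, scores = pd_hat)
hm$metrics$H   # the H-measure (the package also reports AUC, Gini, etc.)
```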
  • Brier Score
The Brier score quantifies the accuracy of probabilistic predictions by computing the mean squared error between the predicted probability $\hat{p}_i$ and the true binary outcome $y_i \in \{0, 1\}$:
$$\text{Brier Score} = \frac{1}{N}\sum_{i=1}^{N} (\hat{p}_i - y_i)^2,$$
where $N$ is the total number of observations. Lower Brier scores indicate better calibration of the model’s probability estimates.
  • Accuracy, Sensitivity, and Specificity
Classification performance is also assessed using accuracy, sensitivity, and specificity, computed under two thresholding schemes: (i) a fixed threshold of 50% and (ii) the empirical prevalence of positive cases in the dataset. Accuracy measures overall correctness and is given by:
$$\text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}.$$
Sensitivity (or recall) is computed using Equation (10), while specificity, which measures the proportion of correctly identified negative cases, is defined as:
$$\text{Specificity} = \frac{TN}{TN + FP}.$$
By evaluating these metrics across different thresholds, we ensure a comprehensive assessment of model discrimination, calibration, and classification performance.
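A sketch tying these metrics together, with `pd_hat` the predicted PDs and `y` the observed 0/1 statuses (both hypothetical, as above):

```r
# Confusion-matrix metrics at an arbitrary classification threshold
confusion_metrics <- function(pd_hat, y, thr) {
  pred <- as.integer(pd_hat >= thr)
  tp <- sum(pred == 1 & y == 1); tn <- sum(pred == 0 & y == 0)
  fp <- sum(pred == 1 & y == 0); fn <- sum(pred == 0 & y == 1)
  c(accuracy    = (tp + tn) / length(y),
    sensitivity = tp / (tp + fn),
    specificity = tn / (tn + fp))
}

confusion_metrics(pd_hat, y, 0.5)      # (i) fixed 50% threshold
confusion_metrics(pd_hat, y, mean(y))  # (ii) empirical prevalence threshold
mean((pd_hat - y)^2)                   # Brier score
```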
  • The Model Confidence Set (MCS) Procedure
The Model Confidence Set (MCS), introduced by [33], is a statistical framework designed to compare forecasting models while accounting for uncertainty. Unlike conventional selection methods that pinpoint a single best model, the MCS approach identifies a subset of models that are statistically equivalent to the best-performing model at a given confidence level. This procedure employs an iterative hypothesis testing process to systematically remove underperforming models. It begins with an initial set of candidate models, denoted as $\mathcal{M}_0$, consisting of $m_0$ models. The performance of each model is assessed using a predefined loss function. In the case of binary classification, a commonly used loss function is the Brier score (16).
The objective of the MCS procedure is to retain a subset of models, denoted as $\mathcal{M}^*$, that are not significantly worse than the best-performing model in $\mathcal{M}_0$. This is achieved by testing the null hypothesis that the expected loss difference between all model pairs is zero:
$$H_0: \mathbb{E}[d_{ij}] = 0 \quad \forall\, i, j \in \mathcal{M}_0,$$
where $d_{ij} = L_i - L_j$ represents the difference in loss between models $i$ and $j$. To assess the relative performance of models, test statistics based on the loss differences $d_{ij}$ are computed. Two commonly used statistics are the Range Statistic:
$$R = \max_{i,j \in \mathcal{M}_0} |\bar{d}_{ij}|,$$
where $\bar{d}_{ij}$ denotes the sample mean of $d_{ij}$, and the T-Statistic:
$$T = \max_{i \in \mathcal{M}_0} \frac{\bar{d}_{i+}}{\sqrt{\mathrm{Var}(\bar{d}_{i+})}},$$
where $\bar{d}_{i+}$ represents the average loss difference of model $i$ relative to all other models.
Through an iterative elimination process, models that fail the hypothesis test are progressively excluded until the remaining models form a set $\mathcal{M}^*$ in which performance differences are no longer statistically significant at a given confidence level $\alpha$. By applying the MCS procedure, we identify a robust subset of models that deliver comparably strong performance in probabilistic forecasting of binary outcomes. This approach is particularly valuable in financial applications, where ensuring model reliability and interpretability is essential, and minor predictive improvements can lead to meaningful practical outcomes.
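As a sketch, the procedure can be run with the CRAN package MCS (assumed available), feeding it a matrix of per-observation Brier losses; the model names and PD vectors below are hypothetical:

```r
# Model Confidence Set over per-observation Brier losses, one column per model
library(MCS)

loss_mat <- cbind(cauchit_fe = (pd_cauchit_fe - y)^2,
                  logit_fe   = (pd_logit_fe   - y)^2,
                  rf         = (pd_rf         - y)^2)

# Retain the set of models not significantly outperformed at the 85% level
mcs <- MCSprocedure(Loss = loss_mat, alpha = 0.15, B = 5000, statistic = "Tmax")
```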

4. Empirical Analysis

4.1. Data

We collected daily data on 98 stablecoins from three primary cryptocurrency data platforms: CoinMarketCap, CoinGecko, and TokenInsight. CoinMarketCap (https://coinmarketcap.com, accessed on 15 November 2025) is a widely recognized aggregator that provides historical and real-time data on cryptocurrency prices, market capitalization, and trading volumes, alongside listing status updates. CoinGecko (https://www.coingecko.com, accessed on 15 November 2025) offers a complementary dataset, tracking similar metrics with an emphasis on community-driven insights and additional market indicators. TokenInsight (https://tokeninsight.com, accessed on 15 November 2025) provides detailed analytics on blockchain assets, including stablecoins, with a focus on risk ratings and market performance. Given that data for some stablecoins are no longer available online due to delisting or project abandonment, we supplemented our dataset using the Wayback Machine (https://web.archive.org/, accessed on 15 November 2025) [34]. The Wayback Machine is an initiative by the Internet Archive that archives snapshots of websites over time, enabling us to retrieve historical data from these platforms when primary sources were inaccessible. The full list of stablecoins analyzed is reported in Table 1 of Section 3.1 (“Detecting Stablecoin Failure with Simple Thresholds”).
Our dataset includes daily observations of open, high, low, and close prices, market capitalization, and trading volume for these 98 stablecoins, spanning January 2019 to November 2024. Following [14], we categorized the coins into two groups based on observation count: 39 stablecoins with fewer than 730 observations, used to forecast 1-day and 30-day ahead probabilities of death, and 59 stablecoins with more than 730 observations, used to forecast 1-day, 30-day, and 365-day ahead probabilities of death. For each stablecoin, we computed several market capitalization differences: today’s value minus yesterday’s ($\Delta MCap_{D,t}$), today’s minus 7 days ago ($\Delta MCap_{W,t}$), and today’s minus 30 days ago ($\Delta MCap_{M,t}$). Additionally, we calculated daily stablecoin volatility using a modified estimator [35] that accounts for opening gaps, as proposed by [36], and defined as:
$$\sigma_{D,t}^{2} = (\ln O_t - \ln C_{t-1})^2 + 0.5\,(\ln H_t - \ln L_t)^2 - (2\ln 2 - 1)(\ln C_t - \ln O_t)^2,$$
where $O_t$, $H_t$, $L_t$, and $C_t$ denote the open, high, low, and close prices on day $t$. We also computed weekly ($\sigma_{W,t}$) and monthly ($\sigma_{M,t}$) rolling averages of the daily historical volatilities. This regressor structure, incorporating daily, weekly, and monthly horizons, draws inspiration from the Heterogeneous Auto-Regressive (HAR) model in [37].
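For reference, a minimal R sketch of this volatility construction; the data frame `df`, with columns `open`, `high`, `low`, and `close`, is a hypothetical stand-in for one coin’s series:

```r
# Daily volatility with the opening-gap adjustment defined above
gk_vol2 <- function(open, high, low, close) {
  prev_close <- c(NA, head(close, -1))                 # C_{t-1}
  (log(open) - log(prev_close))^2 +
    0.5 * (log(high) - log(low))^2 -
    (2 * log(2) - 1) * (log(close) - log(open))^2
}

# Trailing moving average over k days
roll_mean <- function(x, k) as.numeric(stats::filter(x, rep(1 / k, k), sides = 1))

df$sigma2_D <- gk_vol2(df$open, df$high, df$low, df$close)
df$sigma2_W <- roll_mean(df$sigma2_D, 7)    # weekly rolling average
df$sigma2_M <- roll_mean(df$sigma2_D, 30)   # monthly rolling average
```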
The selection of daily, weekly, and monthly horizons for market capitalization changes and volatility captures both immediate market reactions and sustained trends, providing a comprehensive view of stability dynamics across different time frames relevant to investors and regulators.
We included the T3 Bitcoin Volatility Index [38], a 30-day implied volatility (IV) measure for Bitcoin derived from Bitcoin option prices via linear interpolation between the expected variances of the two nearest expiration dates. The index was sourced from (https://t3index.com/indexes/bit-vol/, accessed on 15 January 2025) until its discontinuation in February 2025; the full methodology is archived at (https://web.archive.org/web/20221114185507/https://t3index.com/wp-content/uploads/2022/06/Bit-Vol-process_guide-Jan-2019-2022_03_22-06_02_32-UTC.pdf, accessed on 15 November 2025). We also computed weekly and monthly rolling averages of this index for each stablecoin. Alternative implied volatility indices exist, such as those from Deribit (https://www.deribit.com, accessed on 15 November 2025), a leading crypto options exchange that calculates volatility from its Bitcoin and Ethereum option markets, and other institutions like Skew or CryptoCompare, which provide similar metrics. The Bitcoin IV is potentially an important regressor because Bitcoin volatility often drives broader crypto market dynamics, potentially destabilizing stablecoins through correlated price shocks or shifts in investor confidence, thereby influencing their probability of default.
The daily news-based Economic Policy Uncertainty (EPU) Index, sourced from (https://www.policyuncertainty.com/us_monthly.html, accessed on 15 November 2025), is constructed from newspaper archives in the NewsBank Access World News database, which aggregates thousands of global news sources. We restricted our analysis to newspaper articles, excluding magazines and newswires, to ensure consistency. Weekly and monthly rolling averages of this index were also computed for each stablecoin. The EPU Index captures macroeconomic and policy-related uncertainty, which may affect stablecoin stability by altering investor risk appetite or triggering capital flows that impact peg maintenance, making it a relevant predictor of PDs. This index has been successfully used in a large number of academic and professional articles; see (https://www.policyuncertainty.com/research.html, accessed on 15 November 2025) and all references therein. Table A14 in Appendix A lists all the main variables and their notation.
Differently from past literature (see [14] and references therein), we excluded daily Google Trends data on stablecoin searches due to several limitations: for many stablecoins, daily data were unavailable because search volumes fell below Google Trends’ reporting threshold (the service standardizes searches on a 0–100 scale only when activity is sufficient). Moreover, for coins with available data, searches predominantly reflected interest in the real dollar rather than the stablecoin, introducing noise and potential bias. Thus, we discarded this variable entirely. Furthermore, we did not consider Google Trends data for Bitcoin in our analysis, following the evidence provided by [39,40]. These studies compared the predictive performance of models using implied volatility against those using Google search data for forecasting volatility and risk measures across various financial and commodity markets. Their findings indicate that implied volatility models generally outperform those based on Google Trends data. This result is attributed to the fact that the information captured by Google search activity is already embedded within implied volatility, whereas the reverse does not hold. These authors argue that this is likely because implied volatility reflects the forward-looking expectations of sophisticated market participants—such as institutional investors with access to superior information—while Google Trends primarily reflects the behavior and expectations of retail investors with limited information.
As detailed in Section 3.1, we applied two competing criteria to classify stablecoins daily as dead or alive/resurrected: the volume-based approach in [5], and our price threshold method at $0.80, deeming a coin dead if its price falls below 80 cents, and alive or resurrected if above or recovering above 80 cents, respectively. The CoinMarketCap final listing status was not used for daily analysis, as it reflects only the end-of-sample status. The dataset of 39 stablecoins with fewer than 730 observations spans February 2021 to November 2024 (11,558 daily observations), while the 59 stablecoins with more than 730 observations span January 2019 to November 2024 (76,176 daily observations). Following [14,15], we employed direct forecasts, using 1-day lagged regressors for 1-day ahead PDs, 30-day lagged regressors for 30-day ahead PDs, and, for coins with over 730 observations, 365-day lagged regressors for 365-day ahead PDs.
Table 3 reports the total number of “dead days”—days when stablecoins are classified as dead under the two criteria—both in absolute value and as a percentage.
It appears that the price threshold method is more restrictive, requiring a significant price drop below 80 cents, whereas the volume-based Feder method captures more days as dead, potentially including periods of low activity that do not necessarily reflect a loss of peg stability.
Figure 2 illustrates the total number of stablecoins available each day (blue line) and the number of dead stablecoins each day according to the Feder method (green line) and the $0.80 threshold (red line), split by observation groups.
For stablecoins with fewer than 730 observations, the $0.80 method reacts more quickly to failure events, as seen in sharper spikes (e.g., August 2022 and mid-2023), while the Feder method shows more sustained periods of dead stablecoins, reflecting its sensitivity to prolonged low volume. For stablecoins with 730 or more observations, the $0.80 method again responds faster, with notable peaks (e.g., 2022), compared to the Feder method’s broader plateaus. This quicker reaction aligns with the price threshold’s focus on immediate peg deviations, making it a more responsive indicator of stablecoin distress in real-time monitoring scenarios.

4.2. In-Sample Analysis

The in-sample analysis evaluates the performance of various panel models and a random forest model in explaining the probability of stablecoin failure, using the full available data sample and the two classification criteria outlined in Section 3.1: the $0.80 price threshold and the volume-based approach of [5].
The in-sample results are visualized in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. Specifically, Figure 3 and Figure 4 show the panel model coefficient estimates for the $0.80 price threshold approach, segmented by stablecoin lifespan (≥730 and <730 observations). The corresponding Random Forest variable importance measures are displayed in Figure 5. Similar visual summaries for the Volume-based Feder et al. (2018) approach are provided in Figure 6, Figure 7 and Figure 8. In the variable importance charts, results are broken down by lifespan and metric, with variables ranked and forecast horizons distinguished by color.
Detailed tabular results are available in the Appendix A. Panel model coefficient estimates are presented in Table A1 and Table A4 for stablecoins with ≥730 observations, and in Table A2 and Table A5 for those with fewer observations. Variable importance measures for the Random Forest models, covering both lifespan categories, are located in Table A3 and Table A6. The tables provide coefficient estimates for a suite of panel models (logit and cauchit variants with pooled, fixed effects (FE), bias-corrected FE, conditional logit, and random effects (RE) specifications) and key variable importance metrics (Mean Decrease Accuracy and Mean Decrease Gini) for 1-day, 30-day, and 365-day forecast horizons where applicable. In the random forest model, ‘Mean Decrease Accuracy’ (MDA) measures the average reduction in out-of-bag prediction accuracy when a variable is permuted, indicating its contribution to classification performance. ‘Mean Decrease Gini’ (MDG) quantifies the total decrease in node impurity (using the Gini index) across all splits involving the variable, reflecting its importance in tree construction.

4.2.1. Differences Between the $0.80 Price Threshold and the Feder Method

For stablecoins with fewer than 730 observations, the panel models under the $0.80 price threshold method (Table A2) reveal a mix of significant and insignificant regressors, with varying signs and magnitudes. The monthly market capitalization change ($\Delta MCap_{M,t-1}$) and the volatilities ($\sigma_{W,t-1}$, $\sigma_{M,t-1}$) consistently show statistical significance across most models for the 1-day horizon, with coefficients indicating strong effects—generally negative for $\Delta MCap_{M,t-1}$ (i.e., an increase in market cap reduces the PD) and positive for the volatilities (highlighting increased credit risk). Bitcoin volatility ($IV^{BTC}_{M,t-1}$) and the monthly Economic Policy Uncertainty Index ($\text{EPU-I}_{M,t-1}$) also exhibit significant negative impacts, suggesting that higher Bitcoin volatility and economic policy uncertainty are associated with a lower probability of stablecoin failure. In contrast, for stablecoins with 730 or more observations (Table A1), the results are more consistent across models and horizons. For instance, $\Delta MCap_{M,t-1}$ and $\sigma_{M,t-1}$ remain highly significant with mostly negative and positive signs, respectively, and the Bitcoin volatility terms ($IV^{BTC}_{D,t-1}$, $IV^{BTC}_{M,t-1}$) are uniformly negative and significant. This suggests that longer data histories enhance the stability and reliability of coefficient estimates, likely due to reduced noise and greater statistical power.
Under the Feder et al. (2018) volume-based criterion, differences between the two stablecoin groups are similarly pronounced. For coins with fewer than 730 observations (Table A5), the 1-day horizon shows fewer significant regressors, with $\sigma_{W,t-1}$ consistently negative and $\sigma_{M,t-1}$ positive, while $\Delta MCap_{M,t-1}$ is significant only in the pooled logit. For coins with 730 or more observations (Table A4), the 1-day horizon highlights $\sigma_{M,t-1}$ (positive sign) and $IV^{BTC}_{M,t-1}$ (negative sign) as key drivers, with more regressors achieving significance across models (particularly $\text{EPU-I}_{M,t-1}$, with negative effects). The 365-day horizon further amplifies these effects, with $\Delta MCap_{M,t-365}$ showing large negative coefficients. The increased significance and magnitude for coins with longer histories suggest that the volume-based Feder criterion benefits from extended data, capturing prolonged low-volume periods more effectively than the price threshold method, which reacts to immediate price drops.
Summarizing, the results exhibit notable differences depending on the method used to define stablecoin failures. For the $0.80 price threshold, variables such as the lagged monthly market capitalization change ($\Delta MCap_M$) and monthly volatility ($\sigma_M$) are consistently significant across most panel models, particularly for stablecoins with ≥730 observations. In contrast, the Feder method places greater emphasis on volatility measures, with $\sigma_M$ and $IV^{BTC}_M$ showing strong predictive power. The Feder method also highlights the importance of economic policy uncertainty ($\text{EPU-I}_M$), which is less pronounced in the price threshold analysis. This suggests that the Feder method, which incorporates trading volume, may capture broader market dynamics and systemic risks, while the price threshold focuses more on direct price declines.

4.2.2. Panel Models vs. Random Forest Models

Comparing panel models to random forest models reveals distinct interpretive strengths. Panel models (Table A1, Table A2, Table A4 and Table A5) provide coefficient estimates with statistical significance, allowing directional inference. For example, under the $0.80 threshold, $\sigma_{M,t-1}$ has a mainly positive effect, implying that higher volatility increases the failure probability. Random forest models (Table A3 and Table A6), however, focus on variable importance via Mean Decrease Accuracy (MDA) and Mean Decrease Gini (MDG), without directional insight. For stablecoins with fewer than 730 observations (Table A3), $\sigma_{M,t-1}$ tops the 1-day horizon for both criteria, followed by $\text{EPU-I}_M$ and $\Delta MCap_M$, aligning with panel model significance but offering a non-parametric perspective.
Therefore, panel models and random forest approaches offer complementary insights. Panel models provide interpretable coefficients, revealing that lagged volatility and market capitalization changes are critical predictors. In contrast, random forest models prioritize variables based on predictive accuracy and Gini importance, with the stablecoin monthly volatility $\sigma_M$ consistently ranking highest. This suggests that while panel models identify specific directional effects, random forest models capture non-linear relationships and interactions, particularly for volatility and market-derived variables.

4.2.3. Differences Across Forecast Horizons and Stablecoin Lifespans

Differences across forecast horizons are evident in both model types. For the $0.80 threshold with 730 or more observations (Table A1), the 1-day horizon shows strong effects from $\sigma_{M,t-1}$ (mainly positive) and $IV^{BTC}_{M,t-1}$ (mainly negative), which persist but weaken in magnitude at 30 days and further diminish at 365 days. This attenuation suggests that short-term volatility and market dynamics are more predictive of immediate failure, while longer horizons dilute these effects. For fewer than 730 observations (Table A2), the 30-day horizon shows fewer significant terms compared to the 1-day horizon, indicating limited predictive power with shorter data spans. Random forest results (Table A3) mirror this, with MDA values for $\sigma_M$ slightly decreasing from the 1-day to the 30-day horizon for stablecoins with fewer observations but remaining stable for those with longer histories, suggesting robustness across horizons when data is sufficient.
Under the Feder criterion, horizon effects differ. For 730 or more observations (Table A4), the 365-day horizon introduces more significant terms compared to the 1-day horizon, reflecting the criterion’s sensitivity to prolonged low volume. For fewer observations (Table A5), the 30-day horizon shows reduced significance, likely due to data constraints. Random forest results (Table A6) confirm $\sigma_M$ as the top predictor across all horizons, with $\text{EPU-I}_M$ and $\Delta MCap_M$ gaining importance at longer horizons, highlighting their role in sustained volume-based failure.
Summarizing, the forecast horizon significantly influences the results. For 1-day-ahead forecasts, daily and weekly variables play a more prominent role, while longer horizons (30-day and 365-day) emphasize monthly variables. The random forest results align with this pattern, showing higher importance scores for monthly variables at longer horizons. This indicates that short-term failures are driven by recent market movements, while long-term failures are influenced by sustained trends in volatility and market capitalization, as well as macroeconomic and systemic factors. Moreover, forecast horizon differences underscore short-term sensitivity in the $0.80 threshold and longer-term relevance in the Feder criterion, informing their suitability for different monitoring contexts.
Finally, we remark that stablecoins with 730 or more observations yield more robust and significant results across both forecasting models and definitions of stablecoin death, benefiting from longer data histories. These stablecoins exhibit more stable and statistically significant coefficients, particularly for monthly variables such as $\Delta MCap_M$ and $\sigma_M$. For these variables, the panel models consistently show strong negative coefficients for $\Delta MCap_M$, indicating that declines in market capitalization are a robust predictor of failure, and positive coefficients for $\sigma_M$, suggesting that increases in stablecoin volatility are a reliable signal of impending failure. In contrast, stablecoins with fewer than 730 observations display more erratic results, with fewer significant coefficients and larger standard errors, likely due to limited data availability. The random forest results corroborate this evidence, showing higher variable importance scores for $\sigma_M$ and $\Delta MCap_M$ in the larger dataset, further underscoring their reliability as predictors for more established stablecoins.

4.2.4. Economic Interpretation of In-Sample Drivers of Stablecoin Default

The in-sample results, consistently across model specifications, forecast horizons, sample lengths, and death definitions, reveal a coherent economic narrative linking market stability, investor confidence, and systemic trust mechanisms within the stablecoin ecosystem. Beyond statistical relationships, the estimated coefficients highlight how macro-financial and crypto-specific forces jointly determine the credibility and resilience of individual stablecoins.
First, the positive and highly significant effect of stablecoin volatility on the probability of default (higher $\sigma_M$ raises the estimated PD) underscores the central role of price stability as a coordination device. In theoretical terms, lower volatility reduces uncertainty about redemption value and reinforces expectations of convertibility, which in turn strengthens the self-fulfilling confidence loop sustaining the peg. This finding is consistent with models of monetary trust and coordination equilibria, where small deviations from parity are tolerated, but increasing fluctuations erode collective confidence and trigger runs or liquidity withdrawals. Across model types, this effect remains the most robust determinant of survival, particularly at shorter forecast horizons, indicating that day-to-day price discipline is essential for maintaining the perception of “moneyness” [3,41]. For example, when significant volatility emerges for algorithmic stablecoins, this could indicate a breakdown in the arbitrage incentives designed to correct price deviations [3]. For collateralized stablecoins, it may reflect mounting concerns over the quality, liquidity, or transparency of the underlying reserves, raising redemption fears akin to a bank run [2]. High volatility deters a stablecoin’s use as a medium of exchange or unit of account within the decentralized finance (DeFi) ecosystem, eroding its utility and demand [4]. Consequently, elevated volatility is not merely a statistical feature but a direct symptom of a loss of monetary confidence—the core of what defines a stablecoin’s failure—as seen in the TerraUSD collapse, where amplified fluctuations eroded trust and triggered systemic contagion [18]. Our models quantitatively confirm that periods of high price instability are strong leading indicators of a terminal loss of peg and its final demise.
Second, larger increases in market capitalization are associated with a lower probability of failure, confirming that market depth and adoption act as stabilizing forces. Expanding capitalization reflects growing transactional use, network effects, and greater distribution of coin holdings, all of which contribute to reducing idiosyncratic liquidity shocks [12]. Conversely, a declining market cap reflects net outflows, capital flight, and a collapse in confidence. This can trigger a negative feedback loop: as investors redeem or sell their holdings, the reduced market cap makes the coin more susceptible to large trades and liquidity crises, further increasing its fragility. Therefore, from an economic standpoint, higher capitalization signals stronger user trust and institutional anchoring, mitigating the risk of destabilizing redemptions. Interestingly, this protective effect becomes more pronounced in models estimated on longer time series (over 730 observations), where structural market expansion rather than short-term speculative inflows appears to drive stability. This distinction highlights how the accumulation of user base and transactional liquidity operates as a form of endogenous insurance for the peg (see [42,43]).
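To make these two coin-specific drivers concrete, the sketch below shows one plausible construction of $\sigma_M$ and $\Delta MCap_M$ from daily data for a single coin; the 30-day rolling window and the log-change definition are illustrative assumptions rather than the paper's exact definitions.

```python
# A minimal sketch, assuming a per-coin DataFrame indexed by date with
# 'price' and 'mcap' columns; the window lengths are illustrative assumptions.
import numpy as np
import pandas as pd

def monthly_drivers(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    ret = np.log(df["price"]).diff()               # daily log returns
    out["sigma_M"] = ret.rolling(30).std()         # monthly stablecoin volatility
    out["dMCap_M"] = np.log(df["mcap"]).diff(30)   # monthly market-cap change
    return out
```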
Third, a more nuanced finding is the generally negative relationship between Bitcoin’s implied volatility ($IV^{BTC}$) and stablecoin default risk. At first glance, one might expect that turmoil in the core crypto asset (Bitcoin) would spill over and destabilize the entire ecosystem, including stablecoins. However, the negative coefficient suggests the opposite: when Bitcoin becomes more volatile, stablecoins appear to become safer in a relative sense. This admits a compelling economic interpretation as a flight-to-quality or flight-to-safety phenomenon within the digital asset space. During periods of extreme uncertainty and price swings in the crypto market, investors seek to de-risk their portfolios. They often divest from volatile assets like Bitcoin and Ethereum and park their capital in stablecoins to preserve value and await clearer market directions. This surge in demand for stability can temporarily bolster stablecoin prices, increase their trading volumes, and reduce their perceived default risk. This effect positions certain stablecoins as a type of “safe haven” asset within the crypto ecosystem—a role that becomes particularly prominent during internal market crises, even if they remain risky from a traditional finance perspective. This inverse relationship, consistent across almost all our models, underscores that stablecoin stability cannot be analyzed in isolation but must be viewed as embedded within the larger crypto-financial network (see [44,45,46,47,48]).
Finally, the role of the Economic Policy Uncertainty Index (EPU-I) is less dominant than the coin-specific factors, and its effect varies. However, when significant, it often carries a negative sign, implying that higher traditional economic uncertainty is associated with lower stablecoin default risk. In periods of heightened policy uncertainty, investors and institutions may increase their use of stablecoins as settlement or collateral instruments that combine the benefits of digital liquidity with perceived U.S. dollar stability. This effect is consistent with the safe-haven hypothesis observed for other dollar-denominated instruments: when traditional financial markets become riskier or regulatory uncertainty rises, stablecoins (particularly asset-backed designs) gain attractiveness as alternative transactional media. Conversely, during tranquil periods of low EPU, demand for stablecoins as a hedging or reserve instrument diminishes, potentially amplifying idiosyncratic fragility. This finding highlights the hybrid nature of stablecoins as both crypto assets and synthetic money-like instruments whose demand is countercyclical to global uncertainty (see [49,50,51,52]). However, the varying significance across models indicates that while EPU-I plays a role, coin-specific factors dominate, consistent with S&P’s emphasis on internal reserve quality over external shocks [12].
When interpreted jointly, these results reveal a unified mechanism: stablecoin survival depends on the interaction between micro-level stability signals (volatility and capitalization) and macro-level credibility anchors (crypto and policy uncertainty). The consistency of these relationships across model types, forecast horizons, and definitions of failure suggests that the identified determinants reflect fundamental economic forces rather than model-specific artifacts. At a conceptual level, the evidence supports the view that stablecoins operate as endogenous confidence assets, sustained by liquidity, predictability, and systemic calm, whose failure dynamics mirror those of traditional monetary systems when the equilibrium of trust collapses. For investors, these insights advocate monitoring capitalization and volatility as early indicators, while regulators can leverage them to mitigate systemic risks, as warned by the Financial Stability Oversight Council [1]. Future research could extend this by incorporating real-time reserve data to refine these interpretations.

4.3. Out-of-Sample Analysis

We assess the forecasting performance of our eight panel models and the random forest model in detecting stablecoin failures, using both the $0.80 price threshold and the volume-based criterion of [5], as outlined in Section 3.1.
The initial dataset for estimating panel models and the Random Forest model spanned February 2021 to February 2022 for coins with fewer than 730 observations and January 2019 to January 2021 for coins with 730 or more observations. In essence, all stablecoin data were pooled together up to a specific time point (e.g., time t), and panel models along with the Random Forest model were estimated with this dataset to calculate the out-of-sample probabilities of death. Subsequently, the time window was extended by one day, and the process was repeated. For both the panel models and the Random Forest model, direct forecasts were generated by estimating the models multiple times, corresponding to the number of forecast horizons, using regressors lagged by the duration of each forecast horizon (e.g., 1-day lagged regressors for 1-day-ahead probability of death predictions, and so forth).
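The following sketch illustrates this expanding-window, direct-forecast scheme; the panel layout (a MultiIndex of coin and date with a binary 'dead' column) is an assumption for illustration, and the pooled logit is a placeholder for any of the eight panel specifications.

```python
# A minimal sketch, assuming panel: MultiIndex (coin, date) DataFrame with a 0/1
# 'dead' column and numeric features; not the paper's estimation code.
import pandas as pd
import statsmodels.api as sm

def expanding_direct_pds(panel: pd.DataFrame, features: list, h: int, start) -> dict:
    preds = {}
    dates = panel.index.get_level_values("date").unique().sort_values()
    for t in dates[dates >= start]:
        train = panel[panel.index.get_level_values("date") <= t]
        X = train.groupby(level="coin")[features].shift(h).dropna()  # regressors at s-h
        y = train.loc[X.index, "dead"]                               # death indicator at s
        fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)            # placeholder pooled logit
        x_now = sm.add_constant(panel.xs(t, level="date")[features], has_constant="add")
        preds[t] = fit.predict(x_now)                                # PDs for date t + h
    return preds
```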
The out-of-sample performance of nine different models, tested under the two competing definitions of stablecoin death, is visualized in Figure 9 and Figure 10. The plots evaluate performance using the AUC, H-measure, and Brier Score, segmenting the results by the stablecoin’s lifespan and the forecasting horizon. Models that are included in the Model Confidence Set (MCS), which is based on the Brier Score as a loss function at a 10% significance level, are highlighted with a solid point to denote them as statistically superior forecasting models. Detailed forecasting results are presented in tabular format in the Appendix A. Table A7 and Table A8 provide a comprehensive breakdown of performance, evaluated separately by stablecoin lifespan (<730 observations and ≥730 observations) and forecast horizon (1-day, 30-day, and 365-day, with the 365-day horizon applied only to stablecoins with longer histories). The tables include the aforementioned performance metrics (AUC, H-measure, Brier Score) along with standard classification metrics—Accuracy, Sensitivity, and Specificity—derived using two alternative thresholds (50% and empirical prevalence). Finally, the tables explicitly report each model’s inclusion in the Model Confidence Set.
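A minimal sketch of the per-model evaluation behind these tables is given below; the H-measure and the MCS test are omitted, and the variable names are illustrative.

```python
# A minimal sketch, assuming arrays of realized 0/1 death labels and forecast PDs.
import numpy as np
from sklearn.metrics import brier_score_loss, confusion_matrix, roc_auc_score

def evaluate(y_true: np.ndarray, pd_hat: np.ndarray, threshold=None) -> dict:
    thr = y_true.mean() if threshold is None else threshold   # empirical prevalence default
    tn, fp, fn, tp = confusion_matrix(y_true, pd_hat >= thr).ravel()
    return {"AUC": roc_auc_score(y_true, pd_hat),
            "Brier": brier_score_loss(y_true, pd_hat),
            "Accuracy": (tp + tn) / (tp + tn + fp + fn),
            "Sensitivity": tp / (tp + fn),
            "Specificity": tn / (tn + fp)}
```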
Among the evaluated forecasting models, the panel Cauchit model with fixed effects consistently emerges as the top performer across multiple evaluation criteria, forecast horizons, and stablecoin classifications. This model demonstrates superior predictive power, as evidenced by its frequent inclusion in the Model Confidence Set (MCS), its consistently high AUC and H-measure values, and its low Brier Scores under both the $0.80 price threshold and the volume-based Feder approach. Particularly noteworthy is its robustness at shorter forecast horizons, where it achieves near-optimal sensitivity and specificity, making it highly reliable for real-time failure detection. For stablecoins with fewer than 730 observations, the panel Cauchit FE model excels in navigating volatility and limited data availability, while for longer-lived stablecoins, it leverages historical patterns to maintain strong performance. While other models, such as Random Forest and Logit RE, also perform well in specific contexts—particularly for short-term forecasts of stablecoins with fewer than 730 observations—their results are less consistent compared to the panel Cauchit FE model. In contrast, the bias-corrected logit model with fixed effects and the Conditional Logit model consistently underperform across both classification criteria, likely due to overfitting or misspecification in sparse data settings.
Based on these findings, the Cauchit FE model clearly emerges as the preferred choice for predicting stablecoin failures, particularly when aiming for a balance between accuracy, adaptability, and robustness across varying conditions.
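As a reconstruction of this model class from its description (not the paper's estimation code), a Cauchit fixed-effects specification can be approximated as a binomial GLM with a Cauchy-CDF link and coin dummies standing in for the fixed effects:

```python
# A minimal sketch, assuming a pooled DataFrame with 'dead', a 'coin' identifier,
# and numeric features. Older statsmodels releases name the link `cauchy()`.
import pandas as pd
import statsmodels.api as sm

def fit_cauchit_fe(df: pd.DataFrame, features: list):
    fe = pd.get_dummies(df["coin"], prefix="fe", drop_first=True, dtype=float)
    X = sm.add_constant(pd.concat([df[features], fe], axis=1))
    cauchit = sm.families.links.Cauchy()          # Cauchy-CDF ("cauchit") link
    glm = sm.GLM(df["dead"].astype(float), X,
                 family=sm.families.Binomial(link=cauchit))
    return glm.fit()
```

The heavy tails of the Cauchy CDF let the model accommodate occasional extreme regressor values without saturating the predicted probabilities, which is one plausible reason this link copes well with the fat-tailed dynamics of stablecoin data.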
Turning to differences between classification criteria, models generally exhibit higher predictive power under the $0.80 price threshold (Table A7) than under the Feder criterion (Table A8), as evidenced by higher AUC and H-measure values across most horizons and lifespans. In contrast, the volume-based Feder approach incorporates trading volume dynamics, introducing additional complexity and noise into the classification process. Consequently, models evaluated under this criterion exhibit lower sensitivity and specificity, particularly at shorter forecast horizons, reflecting the inherent challenges of capturing failure events based on volume fluctuations.
The distinction between stablecoins with fewer than 730 observations and those with 730 or more observations also plays a significant role in shaping the results. Under the $0.80 threshold, stablecoins with fewer than 730 observations show excellent performance in short-term forecasts, particularly at the 1-day-ahead horizon. For stablecoins with 730 or more observations, performance remains strong, but sensitivity is generally lower, suggesting that longer histories dilute the ability to detect rare failure events amidst more stable periods. The Feder criterion generally mirrors this trend.
Across forecast horizons, predictive performance declines as the horizon extends, more markedly under the Feder criterion. This suggests that the $0.80 threshold retains more predictive power over longer horizons due to its focus on price signals, whereas the Feder criterion struggles as volume trends become less informative over time. Notably, models such as Logit RE and Cauchit RE maintain relatively strong performance across horizons for stablecoins with longer lifespans, underscoring their adaptability to varying forecasting requirements, even though they are considerably more difficult to estimate and computationally demanding.
In summary, where interpretability is less critical, the Random Forest model offers a competitive alternative to the recommended panel Cauchit FE model, especially for short-term forecasts. The $0.80 price threshold method appears to outperform the Feder criterion in out-of-sample forecasting, particularly for short-term horizons and stablecoins with shorter lifespans, due to its sensitivity to immediate price deviations. Stablecoins with fewer than 730 observations benefit from higher precision in short-term forecasts, while those with 730 or more observations show robust but less sensitive predictions. Forecasting performance degrades across horizons, more so under the Feder approach, where long-term volume signals weaken.

5. Robustness Checks

5.1. A Time Series-Based Method

The Zero Price Probability (ZPP) method, adapted from [6,14,24], was employed to estimate market-implied probabilities of default or “death” for stablecoins. Rather than relying on prices, the method uses market capitalization, which provides a more comprehensive metric by reflecting both price and circulating supply—offering a better gauge of market sentiment and stability compared to trading volume. The ZPP method estimates the probability that a stablecoin’s market capitalization will fall to zero within specified time horizons (1-day, 30-day, and 365-day ahead). It involves three steps: modelling changes in market capitalization using a conditional model, simulating multiple future trajectories, and calculating the probability of default as the proportion of simulations where market capitalization falls to zero. This approach allows for flexible distributions beyond the log-normal and accommodates truncated variables like market capitalization, making it a robust indicator of default risk. For further details, see [53]. We also tested the Cox Proportional Hazards model used in [6], but encountered several numerical instability issues, particularly for stablecoins with fewer than 730 observations, consistent with the problems discussed in [6]. For this reason, we did not include this model in our analysis. The investigation of survival-type models with shrinkage estimators for stablecoin credit risk is left for future research.
As a robustness check, we applied the ZPP method under the assumption that a stablecoin’s market capitalization follows a simple random walk [ZPP(RW)], making this approach suitable for coins with limited data availability. Unlike the panel models, the ZPP model was estimated individually for each stablecoin. Given the relatively short length of historical market capitalization time series, we adopted an initial estimation sample of 30 observations. In this robustness check, we applied the ZPP(RW) alongside the previous forecasting models and the two classification criteria: the $0.80 price threshold and the volume-based method of [5].
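A minimal sketch of the ZPP(RW) computation under these assumptions follows; the Gaussian increments, the 30-observation window, and the simulation count are illustrative choices, and the full ZPP additionally handles the truncation of market capitalization at zero.

```python
# A minimal sketch, assuming Gaussian increments for the market-cap random walk.
import numpy as np

def zpp_rw(mcap: np.ndarray, horizon: int, n_sims: int = 10_000,
           window: int = 30, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    diffs = np.diff(mcap[-window:])                   # recent daily changes in market cap
    steps = rng.normal(diffs.mean(), diffs.std(ddof=1), size=(n_sims, horizon))
    paths = mcap[-1] + np.cumsum(steps, axis=1)       # simulated forward trajectories
    return float((paths.min(axis=1) <= 0).mean())     # share of paths hitting zero
```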
It is important to note that the datasets used to estimate the panel and random forest models differed from those used for the ZPP models. As a result, there were certain dates for which not all models produced forecasts. Consequently, the evaluation metrics reported in Table A9 and Table A10 in the Appendix A are based exclusively on the subset of dates for which forecasts from all models were simultaneously available.
Overall, the results indicate that while the ZPP(RW) model performs reasonably well in terms of AUC, it generally exhibits lower accuracy, sensitivity, and specificity compared to more sophisticated models, and often fails to be included in the Model Confidence Set (MCS). Under the $0.80 price threshold, the ZPP(RW) achieves moderate AUC values for 1-day and 30-day forecasts, but its performance deteriorates sharply for longer horizons. The volume-based [5] criterion yields even weaker ZPP(RW) results, reflecting the method’s sensitivity to the failure definition. In contrast, models such as the panel Cauchit FE and Random Forest consistently outperform the ZPP(RW) under both criteria, underscoring their robustness.
The ZPP(RW) model performs relatively better for stablecoins with shorter lifespans (fewer than 730 observations), likely due to fewer structural breaks. However, even in this case, it is outperformed by panel models with fixed effects.
Across forecast horizons, the predictive power of the ZPP(RW) declines markedly beyond short-term forecasts (1-day and 30-day ahead), reflecting its simplicity and inability to capture complex dynamics over longer periods. This limitation arises because more advanced ZPP variants incorporating GARCH models require 500–1000 observations for stable parameter estimation and cannot be reliably applied to most of our dataset. See the large simulation studies on GARCH models by [54,55] for further details.
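For completeness, a hedged sketch of such a GARCH-based variant is shown below, usable only when the history is long enough; fitting a GARCH(1,1) to first differences of market capitalization with the arch package is an illustrative choice, not the exact specification of the cited studies.

```python
# A minimal sketch, assuming a long market-cap series; differences are standardized
# before fitting to keep the GARCH optimizer on a sensible scale.
import numpy as np
from arch import arch_model

def zpp_garch(mcap: np.ndarray, horizon: int, n_sims: int = 10_000) -> float:
    diffs = np.diff(mcap)
    scale = diffs.std(ddof=1)
    am = arch_model(diffs / scale, mean="Constant", vol="GARCH", p=1, q=1, rescale=False)
    res = am.fit(disp="off")
    fc = res.forecast(horizon=horizon, method="simulation",
                      simulations=n_sims, reindex=False)
    sim = fc.simulations.values[-1] * scale        # (n_sims, horizon) simulated changes
    paths = mcap[-1] + np.cumsum(sim, axis=1)
    return float((paths.min(axis=1) <= 0).mean())
```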
While the ZPP(RW) is outperformed by more complex panel models, its simplicity translates into specific practical advantages where sophisticated modelling is infeasible. This approach is particularly valuable in three scenarios:
(a) Severe data scarcity: For a newly launched stablecoin or one with a very short trading history (e.g., less than 30–60 days), estimating panel models with multiple regressors or even a GARCH-based ZPP model is impossible. The ZPP(RW), requiring only a minimal initial window (e.g., 30 observations), provides an immediate, data-parsimonious method to generate an initial, market-implied PD estimate where no other model can run.
(b) New stablecoin launches: Investors and exchanges listing a new stablecoin lack the coin-specific history needed for a fundamental credit assessment. In this vacuum, the ZPP(RW) can be applied from the first day of trading to monitor its market capitalization trajectory, offering a purely market-based, real-time signal of initial stability or distress that complements other due diligence.
(c) Real-time monitoring and baselining: The computational simplicity of the ZPP(RW) allows it to be updated and calculated instantaneously with new market data. This makes it suitable for building real-time dashboards that monitor a large universe of stablecoins. It can serve as a high-level baseline alert system; if the ZPP(RW) for a coin spikes, it can trigger a deeper, more resource-intensive analysis using the superior panel Cauchit FE model.
In essence, the ZPP(RW) should not be seen as a competitor to the panel models but as a complementary tool for early-stage, resource-constrained, or high-frequency monitoring situations. It embodies a trade-off, sacrificing predictive power for immediacy and applicability in data-scarce environments.
In conclusion, while the ZPP(RW) offers a viable benchmark for short-term forecasts with minimal data requirements, its simplicity comes at the cost of lower accuracy—particularly for longer horizons or more complex failure definitions. These results reinforce the superiority of structured models such as the panel Cauchit model with fixed effects and highlight the trade-off between simplicity and predictive power in small-sample settings.

5.2. Alternative Sample Cutoff at 365 Observations

In our baseline analysis, we adopted a cutoff of 730 observations for stablecoin time series to ensure sufficient data for forecasting 365-day ahead probabilities of stablecoin death. This cutoff was chosen because it allows for a minimum of one year of training data for model initialization, with an additional year to evaluate forecasting performance, ensuring robust model training and reliable long-term predictions. To assess the robustness of our findings to the choice of sample size cutoff, we conducted an additional analysis using a smaller cutoff of 365 observations. This alternative cutoff aligns with standard practices in financial time series analysis, where a one-year period is often sufficient to capture meaningful patterns while including a broader set of stablecoins with shorter histories. We applied this alternative cutoff to both panel models and Random Forest models, using both the $0.80 price threshold approach and the volume-based Feder et al. (2018) methodology, as presented in Table A2, Table A3, Table A4, Table A5, Table A6, Table A11, Table A12 and Table A13. Below, we discuss the key findings from comparing the results for stablecoins with fewer than 730 observations versus fewer than 365 observations.
In the panel models (Table A2 and Table A11), the results for both cutoffs confirm the importance of monthly variables, particularly monthly volatility ($\sigma_M$), monthly market capitalization changes ($\Delta MCap_M$), and monthly Bitcoin implied volatility ($IV^{BTC}_M$), which are significant across most models for both 1-day and 30-day ahead forecasts. However, the smaller 365-observation sample (Table A11) exhibits larger and more volatile coefficient estimates, especially in fixed effects (FE) and random effects (RE) models. For instance, in the 1-day ahead forecast, $\Delta MCap_{M,t-1}$ has coefficients of −222.158 *** in Logit FE and −32.471 in Cauchit RE for the 365-observation sample, compared to −70.313 *** and −41.741 in the 730-observation sample. This suggests increased sensitivity to data limitations or outliers in the smaller sample. Additionally, weekly volatility ($\sigma_{W,t-1}$) is more significant in the 365-observation sample for the 1-day forecast, while economic uncertainty ($\text{EPU-I}_M$) gains prominence in the 30-day forecast, indicating that shorter samples may amplify certain effects. Similarly, in the volume-based Feder et al. (2018) panel models (Table A5 and Table A12), monthly variables remain dominant predictors. The 365-observation sample (Table A12) shows significantly larger coefficients for $\Delta MCap_M$, such as −410.578 *** in Logit FE and −1126.429 *** in Cauchit FE for the 1-day forecast, compared to −6.279 and −3.702 in the 730-observation sample. The smaller sample also reverses the sign of $\sigma_{M,t-1}$ in FE models (e.g., −0.901 *** vs. 0.280 *** in Logit FE), suggesting potential overfitting or data-specific effects. Weekly economic uncertainty ($\text{EPU-I}_{W,t-30}$) is more significant in the smaller sample for the 30-day forecast, further highlighting the sensitivity of shorter time series to specific variables.
In the Random Forest models (Table A3 and Table A13 for the $0.80 price threshold approach, and Table A6 and Table A13 for the Feder et al. approach), monthly variables ($\sigma_M$, $\Delta MCap_M$, $IV^{BTC}_M$) consistently rank among the top predictors for both cutoffs and forecast horizons. However, the 730-observation sample (Table A3 and Table A6) yields higher Mean Decrease in Accuracy and Mean Decrease in Gini values, indicating greater model robustness with larger datasets. For example, in the Random Forest model for the $0.80 price threshold approach, $\sigma_{M,t-1}$ has an M.D. Accuracy of 71.59 and an M.D. Gini of 462.49 for the 1-day forecast in the 730-observation sample, compared to 51.218 and 197.714 in the 365-observation sample. Similarly, in the Feder et al. approach, $\sigma_{M,t-1}$ has an M.D. Accuracy of 92.731 and an M.D. Gini of 532.785 in the 730-observation sample, versus 55.495 and 100.240 in the 365-observation sample. The smaller 365-observation sample elevates the relative importance of certain variables, such as $\text{EPU-I}_M$ and $IV^{BTC}_W$ in the $0.80 price threshold approach, and $\Delta MCap_M$ and $IV^{BTC}_M$ in the Feder et al. approach, particularly for the 1-day forecast. Weekly volatility ($\sigma_W$) is more prominent in the 730-observation sample, especially in the Feder et al. approach, where it ranks second for both forecast horizons. These differences suggest that smaller samples may emphasize financial and economic indicators over volatility measures due to data constraints.
The robustness check using a 365-observation cutoff confirms that our primary findings are not overly sensitive to the choice of sample size, as monthly variables remain the most consistent predictors across both cutoffs, methodologies, and forecast horizons. The 730-observation cutoff, designed to support 365-day ahead forecasting with sufficient training data, provides more stable coefficient estimates in panel models and higher importance scores in Random Forest models, reflecting greater robustness with larger datasets. In contrast, the smaller 365-observation sample produces larger and more volatile coefficients in panel models and lower importance scores in Random Forest models, indicating potential overfitting or increased sensitivity to data limitations. These results highlight the trade-off between including more stablecoins with shorter histories and maintaining model stability with larger datasets. By presenting results for both cutoffs, we provide a comprehensive view of how sample size influences predictive performance, reinforcing the reliability of our conclusions.

6. Discussion and Conclusions

This study advances the understanding of stablecoin credit risk by introducing a novel rule-of-thumb approach to classify stablecoins as “dead” or “alive” using a $0.80 price threshold, validated against CoinMarketCap delistings and the volume-based methodology of [5]. The $0.80 threshold proved to be a robust and practical indicator of stablecoin failure, offering a real-time, transparent alternative to lagging indicators such as delistings or volume-based methods. Sensitivity analyses confirmed its balanced trade-off between detecting failures and preserving accuracy for surviving coins.
Employing a comprehensive dataset of 98 stablecoins, we utilized a suite of panel binary models and a random forest model to forecast probabilities of default (PDs), revealing that lagged monthly market capitalization ($\Delta MCap_M$) and stablecoin volatility ($\sigma_M$) are the most significant predictors across both in-sample and out-of-sample analyses. The panel Cauchit model with fixed effects (Cauchit FE) consistently outperformed alternative specifications, achieving high AUC and H-measure values while remaining robust across forecast horizons and failure criteria. Random forest models complemented these findings, likewise ranking $\sigma_M$ and $\Delta MCap_M$ as the top variables. In contrast, other factors such as Bitcoin implied volatility and economic policy uncertainty indices played a secondary role, underscoring the primacy of coin-specific dynamics in stablecoin stability.
The in-sample results reveal that stablecoin survival is not random but driven by identifiable and economically meaningful forces. The empirical evidence shows that lower price volatility and larger market capitalization significantly reduce default risk, confirming that internal stability and scale are key to maintaining investor trust and transactional credibility. At the same time, the counterintuitive negative relationships between Bitcoin implied volatility and the Economic Policy Uncertainty Index with the probability of death indicate that stablecoins tend to benefit from periods of heightened uncertainty, functioning as relative safe havens within both the crypto ecosystem and the broader financial landscape. These findings emphasize that stablecoins operate at the intersection of micro-level market dynamics and macro-level confidence channels, behaving simultaneously as digital assets and synthetic money substitutes. From a policy perspective, the evidence highlights the importance of transparency in reserves and mechanisms that anchor redemption credibility, since these characteristics underpin stability across regimes. Future research could extend this framework by examining whether the same economic mechanisms apply to newer stablecoins or to those integrated into regulated payment systems, thereby testing the resilience of these relationships as the market and regulatory landscape evolve.
The out-of-sample analysis further highlighted the superiority of the $0.80 threshold over the Feder criterion, particularly for short-term forecasts (1-day and 30-day), where the Cauchit FE and random forest models excelled. Stablecoins with fewer than 730 observations demonstrated higher precision in short-term predictions, likely due to reduced noise, while those with ≥730 observations exhibited robust performance but lower sensitivity, suggesting that longer time series dilute the ability to detect rare failure events amidst more stable periods. Model performance deteriorated over longer horizons, especially under the volume-based Feder method, as volume signals weakened. The robustness check using the Zero Price Probability with a random walk [ZPP(RW)] confirmed these general trends; however, this approach exhibited modest performance, limited by its simplicity and inability to capture complex dynamics, particularly beyond the 30-day forecast horizon.
The practical contributions of this study are directly applicable to financial regulators seeking to develop effective market surveillance tools. The simplicity and high accuracy of the $0.80 price threshold make it an ideal candidate for a first-tier, automated alert system to flag stablecoins experiencing acute distress. The TerraUSD collapse serves as a powerful case-study validation: had this framework been applied, the breach of the $0.80 threshold would have provided an unambiguous, real-time signal of its failure, far quicker than volume-based metrics or post-mortem delistings. Furthermore, our panel Cauchit FE model, which identified falling market capitalization and surging volatility as key predictors, would have shown a rapidly escalating probability of default (PD) in the days preceding the final de-peg. This demonstrates that regulators could use the price threshold for immediate triage and the panel model for more granular, forward-looking risk assessment, allowing for proactive intervention. Our findings thus provide a clear, data-driven roadmap for integrating stablecoin-specific risk factors into supervisory frameworks, consistent with the recommendations of the Financial Stability Oversight Council [1] and with S&P Global Ratings’ stablecoin assessment [12].
Despite its contributions, this study has limitations. The dataset, while extensive, may not capture all stablecoin failures due to missing data for delisted or obscure coins, potentially biasing results toward more prominent assets. The $0.80 threshold, though validated, is static and may not adapt to evolving market conditions or stablecoin designs (e.g., algorithmic vs. fiat-backed). Moreover, while the current framework relies on an instantaneous price threshold to classify stablecoin failures, it does not explicitly account for the duration of a depegging episode. Incorporating time as an additional dimension could meaningfully refine the identification of persistent versus transient instability. For instance, a stablecoin might be classified as failed only after remaining below the $0.80 threshold for a sustained period (e.g., several consecutive days), thereby distinguishing short-lived volatility from a genuine loss of confidence. Such a duration-based extension would align with survival or hazard models commonly used in credit-risk analysis, where the persistence of distress carries more predictive information than momentary breaches [56]. Although implementing duration models proved numerically challenging with the current dataset, this represents a promising avenue for future research, offering a natural bridge between threshold-based classification and state-transition modelling to better capture the temporal dynamics of stablecoin failure. In addition, the ZPP(RW)'s weaker performance reflects data constraints (most stablecoins lack the 1000+ observations required for advanced GARCH-based ZPP variants), limiting its utility as a benchmark. Additionally, unobserved heterogeneity beyond fixed effects (e.g., governance or blockchain-specific risks) may influence PDs, yet our models cannot fully account for these due to data limitations.
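As an illustration of the duration-based extension outlined above, a coin could be flagged as failed only after k consecutive days below the threshold; both the threshold and k are illustrative parameters here, not values estimated in this study.

```python
# A minimal sketch, assuming a daily price series for one coin; flags the coin as
# dead only after k consecutive days below the threshold.
import pandas as pd

def dead_after_k_days(price: pd.Series, threshold: float = 0.80, k: int = 5) -> pd.Series:
    below = (price < threshold).astype(int)
    # run-length of consecutive below-threshold days (resets at every regime change)
    run = below.groupby((below != below.shift()).cumsum()).cumsum()
    return run >= k
```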
Future research could address these gaps by expanding the dataset to include emerging stablecoins and refining the threshold approach with dynamic or coin-specific cutoffs, possibly using machine learning to optimize thresholds in real time. Incorporating blockchain-level data (e.g., transaction volumes, smart contract activity) could enhance model precision by capturing governance or operational risks absent from market-based regressors. Extending the ZPP framework with alternative time series models suited for small samples (e.g., Bayesian methods) could improve its performance with stablecoins exhibiting limited time series. Finally, exploring the interplay between stablecoin credit risk and DeFi ecosystem dynamics—such as liquidity pool dependencies—offers a promising avenue to assess systemic implications, building on [2,4]. Together, these efforts would refine the predictive tools and theoretical insights needed to navigate the evolving stablecoin landscape.

Funding

The author gratefully acknowledges financial support from the grant of the Russian Science Foundation n. 25-18-00319.

Data Availability Statement

The data are available from the author upon request.

Acknowledgments

During the preparation of this manuscript, the author used Grok 4 for proofreading the written English. The author has reviewed and edited the output and takes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Table A1. $0.80 price threshold: in-sample coefficient estimates for panel models. Regressors are lagged according to the forecast horizon used for the direct forecasts. Stablecoins with ≥730 observations.
1-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr.) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
$\Delta MCap_{D,t-1}$ | −0.006 | 0.169 ** | 0.003 | −0.004 | 0.162 ** | −0.024 | 0.663 *** | 0.563 *
$\Delta MCap_{W,t-1}$ | −0.016 | −0.226 *** | −0.012 | −0.001 | −0.218 *** | −0.022 | −0.704 | −0.312
$\Delta MCap_{M,t-1}$ | −0.089 *** | −1.483 *** | −0.276 *** | −0.066 *** | −1.417 *** | −0.136 * | −5.889 *** | −4.673 ***
$\sigma_{D,t-1}$ | 0.110 *** | 0.053 * | 0.066 | 0.005 | 0.053 * | 0.072 | 0.996 *** | 0.006
$\sigma_{W,t-1}$ | 0.397 *** | 0.115 *** | 0.331 *** | 0.029 *** | 0.115 *** | 0.470 *** | 0.608 *** | −0.118
$\sigma_{M,t-1}$ | 1.164 *** | 0.187 *** | 0.156 *** | 0.081 *** | 0.187 *** | 5.744 *** | 2.145 *** | 5.349 ***
$IV^{BTC}_{D,t-1}$ | −0.150 ** | −0.561 *** | −0.165 * | −0.198 *** | −0.562 *** | −0.502 *** | −1.067 *** | −0.993 ***
$IV^{BTC}_{W,t-1}$ | 0.150 * | 0.690 *** | 0.210 * | 0.181 ** | 0.690 *** | 0.291 * | 0.439 * | 0.434 *
$IV^{BTC}_{M,t-1}$ | −0.390 *** | −1.669 *** | −0.470 *** | −0.484 *** | −1.665 *** | −0.984 *** | −2.718 *** | −2.805 ***
$\text{EPU-I}_{D,t-1}$ | 0.001 | 0.017 | 0.001 | 0.003 | 0.017 | 0.003 | 0.030 | 0.039
$\text{EPU-I}_{W,t-1}$ | −0.023 | −0.034 | −0.028 | 0.011 | −0.034 | 0.004 | 0.155 | 0.164
$\text{EPU-I}_{M,t-1}$ | −0.217 *** | −1.236 *** | −0.305 *** | −0.282 *** | −1.231 *** | −1.041 *** | −2.610 *** | −2.491 ***
30-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr.) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
$\Delta MCap_{D,t-30}$ | −0.011 | −0.015 | −0.007 | −0.013 | −0.016 | −0.040 | −0.075 | −0.081
$\Delta MCap_{W,t-30}$ | −0.020 | −0.067 | −0.006 | 0.000 | −0.069 | −0.028 | 0.161 | 0.178
$\Delta MCap_{M,t-30}$ | −0.078 *** | −0.985 *** | −0.268 *** | −0.062 *** | −0.954 *** | −0.105 | −1.144 *** | −1.151 ***
$\sigma_{D,t-30}$ | 0.054 * | 0.016 | −0.021 | 0.003 | 0.016 | 0.113 * | 0.059 | 0.111 *
$\sigma_{W,t-30}$ | 0.036 | 0.039 | 0.007 | 0.013 | 0.039 | −0.338 *** | −0.015 | −0.205 **
$\sigma_{M,t-30}$ | 1.275 *** | 0.176 *** | 0.248 *** | 0.080 *** | 0.176 *** | 5.843 *** | 2.469 *** | 2.534 ***
$IV^{BTC}_{D,t-30}$ | −0.162 ** | −0.672 *** | −0.193 ** | −0.216 *** | −0.672 *** | −0.473 *** | −1.753 *** | −1.746 ***
$IV^{BTC}_{W,t-30}$ | 0.128 * | 0.585 *** | 0.194 * | 0.170 ** | 0.584 *** | 0.139 | 1.320 *** | 1.346 ***
$IV^{BTC}_{M,t-30}$ | −0.373 *** | −1.536 *** | −0.464 *** | −0.456 *** | −1.533 *** | −0.889 *** | −3.157 *** | −3.166 ***
$\text{EPU-I}_{D,t-30}$ | −0.000 | 0.006 | −0.000 | 0.002 | 0.006 | −0.010 | 0.007 | 0.008
$\text{EPU-I}_{W,t-30}$ | −0.041 | −0.135 * | −0.054 | −0.017 | −0.135 * | −0.045 | −0.034 | −0.041
$\text{EPU-I}_{M,t-30}$ | −0.175 *** | −1.204 *** | −0.280 *** | −0.287 *** | −1.199 *** | −0.867 *** | −2.605 *** | −2.564 ***
365-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr.) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
$\Delta MCap_{D,t-365}$ | −0.004 | −0.001 | −0.001 | −0.002 | −0.001 | −0.017 | −0.005 | −0.005
$\Delta MCap_{W,t-365}$ | −0.013 | 0.015 | 0.030 | 0.003 | 0.015 | −0.049 * | 0.049 | 0.049
$\Delta MCap_{M,t-365}$ | −0.018 | −0.041 | −0.136 *** | −0.019 | −0.041 | 0.037 | −0.109 * | −0.109 *
$\sigma_{D,t-365}$ | 0.024 | −0.010 | −0.014 | −0.002 | −0.010 | −0.038 | −0.030 | −0.030
$\sigma_{W,t-365}$ | 0.021 | −0.033 | −0.034 | −0.010 | −0.033 | −0.128 *** | −0.263 | −0.263
$\sigma_{M,t-365}$ | 0.515 *** | −0.197 *** | −0.860 *** | −0.036 * | −0.195 *** | 2.207 *** | −0.565 *** | −0.565 ***
$IV^{BTC}_{D,t-365}$ | −0.176 *** | −0.650 *** | −0.238 *** | −0.205 *** | −0.649 *** | −0.381 *** | −1.440 *** | −1.440 ***
$IV^{BTC}_{W,t-365}$ | 0.056 | 0.154 | 0.118 | 0.070 | 0.153 | 0.059 | 0.418 * | 0.418 *
$IV^{BTC}_{M,t-365}$ | −0.266 *** | −0.955 *** | −0.395 *** | −0.313 *** | −0.953 *** | −0.486 *** | −2.141 *** | −2.141 ***
$\text{EPU-I}_{D,t-365}$ | 0.005 | −0.000 | −0.004 | 0.004 | −0.000 | 0.032 | 0.015 | 0.015
$\text{EPU-I}_{W,t-365}$ | 0.014 | −0.034 | −0.054 | −0.009 | −0.033 | 0.080 | 0.046 | 0.046
$\text{EPU-I}_{M,t-365}$ | −0.086 ** | −0.808 *** | −0.186 *** | −0.178 *** | −0.805 *** | −0.422 *** | −2.723 *** | −2.722 ***
*** for p-value < 0.001, ** for p-value < 0.01, * for p-value < 0.05.
Table A2. $0.80 price threshold: in-sample coefficient estimates for panel models (no intercept). Regressors are lagged according to the forecast horizon used for the direct forecasts. Stablecoins with <730 observations.
1-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr.) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
$\Delta MCap_{D,t-1}$ | −0.008 | 1.242 | 0.110 | 0.194 | 0.026 | −0.019 | 0.273 | 0.171
$\Delta MCap_{W,t-1}$ | −0.007 | 1.321 | 0.390 | 2.201 | 0.400 | −0.024 | 4.687 | 5.251
$\Delta MCap_{M,t-1}$ | −0.111 * | −70.313 *** | −4.258 | −16.821 ** | −0.770 | −0.187 | −46.821 | −41.741
$\sigma_{D,t-1}$ | 0.069 | 0.010 | 0.057 | 0.011 | 0.014 | 0.306 ** | −0.003 | −0.003
$\sigma_{W,t-1}$ | 0.139 ** | 0.162 * | 0.130 * | 0.061 * | 0.159 ** | 1.483 *** | 0.191 * | 0.190 *
$\sigma_{M,t-1}$ | 0.597 *** | 0.175 ** | 0.448 *** | 0.180 *** | 0.190 *** | 0.807 *** | 0.199 * | 0.201 *
$IV^{BTC}_{D,t-1}$ | −0.065 | −0.658 *** | −0.085 | −0.236 ** | −0.668 *** | −1.114 *** | −1.659 *** | −1.654 ***
$IV^{BTC}_{W,t-1}$ | −0.174 | 0.066 | −0.140 | 0.063 | 0.106 | 0.833 *** | 0.378 | 0.377
$IV^{BTC}_{M,t-1}$ | 0.119 | −2.164 *** | −0.026 | −0.789 *** | −2.096 *** | −0.258 | −5.621 *** | −5.608 ***
$\text{EPU-I}_{D,t-1}$ | −0.007 | −0.011 | −0.004 | −0.002 | 0.005 | 0.027 | 0.023 | 0.021
$\text{EPU-I}_{W,t-1}$ | 0.077 | 0.306 *** | 0.083 | 0.107 ** | 0.308 *** | 0.056 | 0.285 * | 0.289 *
$\text{EPU-I}_{M,t-1}$ | 0.233 *** | −0.801 *** | 0.168 ** | −0.141 *** | −0.794 *** | 0.456 *** | −1.260 *** | −1.258 ***
30-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr.) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
$\Delta MCap_{D,t-30}$ | −0.008 | −0.236 | −0.059 | −0.257 | 0.032 | −0.018 | −0.296 | −0.241
$\Delta MCap_{W,t-30}$ | −0.008 | 1.698 | 0.019 | 1.492 | 0.234 | −0.015 | 1.425 | 1.377
$\Delta MCap_{M,t-30}$ | −0.115 * | 6.939 | 1.514 | 8.686 | −0.191 | −0.154 | 25.446 | 21.860
$\sigma_{D,t-30}$ | 0.056 | 0.011 | 0.041 | 0.012 | 0.012 | 1.840 *** | 0.003 | 0.004
$\sigma_{W,t-30}$ | 0.028 | −0.101 | 0.000 | −0.033 | −0.101 | 0.080 | −0.395 *** | −0.400 ***
$\sigma_{M,t-30}$ | 0.593 *** | 0.231 *** | 0.498 *** | 0.184 *** | 0.231 *** | 0.804 *** | 0.304 *** | 0.305 ***
$IV^{BTC}_{D,t-30}$ | −0.055 | −0.842 *** | −0.093 | −0.265 ** | −0.830 *** | −0.685 ** | −1.380 *** | −1.377 ***
$IV^{BTC}_{W,t-30}$ | −0.453 ** | −0.321 | −0.397 * | −0.258 * | −0.329 | −0.360 | −0.261 | −0.262
$IV^{BTC}_{M,t-30}$ | 0.439 *** | −0.984 *** | 0.292 ** | −0.307 *** | −0.977 *** | 0.453 ** | −2.478 *** | −2.473 ***
$\text{EPU-I}_{D,t-30}$ | −0.007 | −0.022 | −0.007 | −0.009 | −0.023 | 0.003 | 0.008 | 0.009
$\text{EPU-I}_{W,t-30}$ | 0.045 | −0.051 | 0.033 | −0.037 | −0.051 | 0.034 | −0.112 | −0.114
$\text{EPU-I}_{M,t-30}$ | 0.179 *** | −0.582 *** | 0.128 * | −0.109 ** | −0.576 *** | 0.271 *** | −1.013 *** | −1.012 ***
*** for p-value < 0.001, ** for p-value < 0.01, * for p-value < 0.05.
Table A3. $0.80 price threshold: Random Forest variable importance measures. Variables are sorted in decreasing order of Out-Of-Bag accuracy for each forecasting horizon. Regressors lagged according to forecast horizon.
Stablecoins with <730 observations.
1-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
$\sigma_{M,t-1}$ | 71.59 | 462.49
$\Delta MCap_{M,t-1}$ | 55.31 | 334.60
$IV^{BTC}_{M,t-1}$ | 52.14 | 137.11
$\Delta MCap_{D,t-1}$ | 47.55 | 243.89
$\Delta MCap_{W,t-1}$ | 47.20 | 295.53
$\sigma_{W,t-1}$ | 43.21 | 348.89
$\sigma_{D,t-1}$ | 41.74 | 293.42
$IV^{BTC}_{D,t-1}$ | 41.35 | 75.08
$IV^{BTC}_{W,t-1}$ | 39.24 | 82.86
$\text{EPU-I}_{M,t-1}$ | 38.55 | 63.55
$\text{EPU-I}_{W,t-1}$ | 26.20 | 42.90
$\text{EPU-I}_{D,t-1}$ | 9.05 | 30.92
30-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
$\sigma_{M,t-30}$ | 66.45 | 453.91
$\Delta MCap_{M,t-30}$ | 53.86 | 326.27
$\sigma_{W,t-30}$ | 52.89 | 371.98
$\Delta MCap_{W,t-30}$ | 50.90 | 258.33
$IV^{BTC}_{M,t-30}$ | 50.21 | 134.45
$\sigma_{D,t-30}$ | 49.29 | 276.18
$IV^{BTC}_{D,t-30}$ | 41.29 | 94.43
$IV^{BTC}_{W,t-30}$ | 41.00 | 110.23
$\text{EPU-I}_{M,t-30}$ | 40.36 | 89.73
$\Delta MCap_{D,t-30}$ | 39.41 | 195.03
$\text{EPU-I}_{W,t-30}$ | 37.81 | 62.88
$\text{EPU-I}_{D,t-30}$ | 14.30 | 37.26
Stablecoins with ≥730 observations.
1-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
$\sigma_{M,t-1}$ | 147.34 | 4170.00
$\text{EPU-I}_{M,t-1}$ | 114.78 | 1058.70
$IV^{BTC}_{M,t-1}$ | 109.24 | 1296.23
$\Delta MCap_{M,t-1}$ | 101.63 | 1816.94
$\sigma_{W,t-1}$ | 92.32 | 2801.37
$\sigma_{D,t-1}$ | 92.26 | 2048.69
$\text{EPU-I}_{W,t-1}$ | 89.60 | 700.83
$IV^{BTC}_{W,t-1}$ | 82.17 | 1018.08
$\Delta MCap_{W,t-1}$ | 72.72 | 1303.10
$IV^{BTC}_{D,t-1}$ | 72.64 | 898.06
$\Delta MCap_{D,t-1}$ | 60.84 | 1005.85
$\text{EPU-I}_{D,t-1}$ | 37.58 | 479.25
30-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
$\sigma_{M,t-30}$ | 148.36 | 4227.32
$\text{EPU-I}_{M,t-30}$ | 112.97 | 1067.48
$\Delta MCap_{M,t-30}$ | 108.88 | 1897.56
$IV^{BTC}_{M,t-30}$ | 102.36 | 1331.41
$\sigma_{W,t-30}$ | 95.58 | 2623.14
$\text{EPU-I}_{W,t-30}$ | 93.01 | 727.75
$\sigma_{D,t-30}$ | 89.36 | 1933.23
$IV^{BTC}_{W,t-30}$ | 77.90 | 1046.30
$IV^{BTC}_{D,t-30}$ | 76.23 | 926.57
$\Delta MCap_{W,t-30}$ | 73.03 | 1315.23
$\Delta MCap_{D,t-30}$ | 64.37 | 1008.46
$\text{EPU-I}_{D,t-30}$ | 44.30 | 494.27
365-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
$\sigma_{M,t-365}$ | 148.34 | 3380.54
$\Delta MCap_{M,t-365}$ | 132.88 | 2418.49
$\text{EPU-I}_{M,t-365}$ | 127.59 | 1191.95
$IV^{BTC}_{M,t-365}$ | 105.25 | 1299.73
$\sigma_{W,t-365}$ | 94.73 | 2263.76
$\Delta MCap_{W,t-365}$ | 88.02 | 1655.30
$\text{EPU-I}_{W,t-365}$ | 80.74 | 820.46
$\Delta MCap_{D,t-365}$ | 78.13 | 1295.40
$\sigma_{D,t-365}$ | 76.58 | 1682.79
$IV^{BTC}_{W,t-365}$ | 72.49 | 1079.72
$IV^{BTC}_{D,t-365}$ | 60.93 | 938.19
$\text{EPU-I}_{D,t-365}$ | 49.36 | 530.11
Table A4. Volume-based Feder et al. (2018): in-sample coefficient estimates for panel models. Regressors are lagged according to the forecast horizon used for the direct forecasts. Stablecoins with ≥730 observations.
1-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr.) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
$\Delta MCap_{D,t-1}$ | −0.005 | −0.001 | −0.003 | 0.000 | −0.001 | −0.013 | −0.007 | −0.007
$\Delta MCap_{W,t-1}$ | −0.013 | 0.008 | −0.015 | 0.007 | 0.007 | 0.040 | −0.072 | −0.071
$\Delta MCap_{M,t-1}$ | −0.031 * | 0.066 ** | 0.003 | 0.013 | 0.066 ** | −0.076 ** | 0.368 *** | 0.368 ***
$\sigma_{D,t-1}$ | 0.031 | 0.022 | 0.015 | 0.001 | 0.022 | 0.047 | 0.123 * | 0.123 *
$\sigma_{W,t-1}$ | −0.039 | −0.006 | −0.017 | 0.002 | −0.006 | −0.261 *** | −0.846 *** | −0.846 ***
$\sigma_{M,t-1}$ | 1.150 *** | 0.290 *** | 0.539 *** | 0.048 *** | 0.291 *** | 3.706 *** | 3.515 *** | 3.515 ***
$IV^{BTC}_{D,t-1}$ | −0.176 *** | −0.564 *** | −0.225 *** | −0.252 *** | −0.564 *** | −0.276 *** | −0.788 *** | −0.788 ***
$IV^{BTC}_{W,t-1}$ | 0.076 | 0.350 *** | 0.120 | 0.155 *** | 0.349 *** | 0.030 | 0.420 *** | 0.420 ***
$IV^{BTC}_{M,t-1}$ | −0.358 *** | −1.140 *** | −0.444 *** | −0.544 *** | −1.138 *** | −0.558 *** | −1.541 *** | −1.541 ***
$\text{EPU-I}_{D,t-1}$ | 0.002 | −0.000 | 0.001 | −0.001 | −0.000 | 0.004 | −0.000 | −0.000
$\text{EPU-I}_{W,t-1}$ | −0.010 | −0.086 * | −0.021 | −0.015 | −0.085 * | −0.035 | −0.071 | −0.071
$\text{EPU-I}_{M,t-1}$ | −0.035 | −0.538 *** | −0.084 ** | −0.153 *** | −0.537 *** | −0.079 * | −1.194 *** | −1.194 ***
30-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr.) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
$\Delta MCap_{D,t-30}$ | −0.004 | −0.001 | −0.003 | −0.000 | −0.001 | −0.021 | −0.008 | −0.008
$\Delta MCap_{W,t-30}$ | −0.016 | 0.001 | −0.018 | 0.003 | 0.001 | 0.042 * | −0.071 | −0.072
$\Delta MCap_{M,t-30}$ | −0.025 | 0.076 ** | 0.010 | 0.020 | 0.076 ** | −0.070 * | 0.394 *** | 0.400 ***
$\sigma_{D,t-30}$ | 0.048 | 0.030 | 0.024 | 0.002 | 0.030 | 0.094 * | −0.010 | −0.007
$\sigma_{W,t-30}$ | 0.012 | 0.029 | 0.013 | 0.008 | 0.029 | −0.242 *** | −0.228 *** | −0.234 ***
$\sigma_{M,t-30}$ | 1.110 *** | 0.265 *** | 0.528 *** | 0.041 *** | 0.266 *** | 3.582 *** | 3.406 *** | 3.459 ***
$IV^{BTC}_{D,t-30}$ | −0.192 *** | −0.570 *** | −0.238 *** | −0.260 *** | −0.569 *** | −0.301 *** | −0.733 *** | −0.732 ***
$IV^{BTC}_{W,t-30}$ | 0.093 | 0.374 *** | 0.138 * | 0.168 *** | 0.373 *** | 0.047 | 0.472 *** | 0.470 ***
$IV^{BTC}_{M,t-30}$ | −0.359 *** | −1.106 *** | −0.441 *** | −0.532 *** | −1.105 *** | −0.545 *** | −1.547 *** | −1.547 ***
$\text{EPU-I}_{D,t-30}$ | −0.000 | −0.007 | −0.002 | −0.004 | −0.007 | −0.003 | −0.010 | −0.010
$\text{EPU-I}_{W,t-30}$ | −0.014 | −0.087 * | −0.025 | −0.011 | −0.086 * | −0.045 | −0.076 | −0.077
$\text{EPU-I}_{M,t-30}$ | −0.031 | −0.499 *** | −0.078 ** | −0.165 *** | −0.497 *** | −0.079 * | −1.085 *** | −1.078 ***
365-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr.) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
$\Delta MCap_{D,t-365}$ | −0.010 | 0.023 | −0.005 | −0.012 | 0.022 | −0.043 ** | −0.256 *** | −0.257 ***
$\Delta MCap_{W,t-365}$ | −0.015 | −0.040 | −0.017 | −0.002 | −0.044 | 0.001 | −0.813 *** | −0.810 ***
$\Delta MCap_{M,t-365}$ | −0.170 *** | −0.999 *** | −0.266 *** | −0.109 *** | −0.974 *** | −0.162 *** | −2.619 *** | −2.622 ***
$\sigma_{D,t-365}$ | 0.010 | 0.004 | −0.002 | 0.000 | 0.004 | 0.055 | 0.009 | 0.009
$\sigma_{W,t-365}$ | −0.094 *** | −0.017 | −0.038 | −0.008 | −0.017 | −0.207 *** | −0.041 * | −0.041 *
$\sigma_{M,t-365}$ | 1.347 *** | 0.073 *** | 0.467 *** | 0.026 *** | 0.074 *** | 2.563 *** | 0.233 *** | 0.233 ***
$IV^{BTC}_{D,t-365}$ | −0.203 *** | −0.460 *** | −0.238 *** | −0.236 *** | −0.460 *** | −0.282 *** | −0.596 *** | −0.596 ***
$IV^{BTC}_{W,t-365}$ | 0.077 | 0.252 *** | 0.118 * | 0.124 ** | 0.252 *** | 0.108 | 0.199 * | 0.199 *
$IV^{BTC}_{M,t-365}$ | −0.396 *** | −0.909 *** | −0.449 *** | −0.479 *** | −0.908 *** | −0.602 *** | −1.176 *** | −1.177 ***
$\text{EPU-I}_{D,t-365}$ | −0.007 | −0.020 | −0.009 | −0.010 | −0.020 | −0.011 | −0.017 | −0.017
$\text{EPU-I}_{W,t-365}$ | −0.035 | −0.093 * | −0.044 | −0.035 | −0.093 * | −0.086 * | −0.118 * | −0.118 *
$\text{EPU-I}_{M,t-365}$ | −0.175 *** | −0.347 *** | −0.169 *** | −0.162 *** | −0.347 *** | −0.233 *** | −0.988 *** | −0.989 ***
*** for p-value < 0.001, ** for p-value < 0.01, * for p-value < 0.05.
Table A5. Volume-based Feder et al. (2018): in-sample coefficient estimates for panel models (no intercept). Regressors are lagged according to the forecast horizon used for the direct forecasts. Stablecoins with <730 observations.
1-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr.) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
$\Delta MCap_{D,t-1}$ | −0.007 | 0.367 | 0.270 | 0.234 | 0.057 | −0.011 | 0.183 | 0.184
$\Delta MCap_{W,t-1}$ | −0.009 | −0.374 | −0.322 | −0.167 | −0.211 | −0.027 | −0.069 | −0.068
$\Delta MCap_{M,t-1}$ | −0.121 ** | −6.279 | −4.466 | −4.274 | −0.450 | −0.122 | −3.702 | −3.731
$\sigma_{D,t-1}$ | −0.002 | 0.003 | 0.008 | 0.007 | 0.006 | −0.273 | 0.022 | 0.023
$\sigma_{W,t-1}$ | −0.300 *** | −0.306 *** | −0.227 *** | −0.228 *** | −0.298 *** | −0.760 *** | −0.234 ** | −0.233 **
$\sigma_{M,t-1}$ | 0.101 * | 0.280 *** | 0.248 *** | 0.212 *** | 0.276 *** | 0.341 *** | 0.409 *** | 0.409 ***
$IV^{BTC}_{D,t-1}$ | 0.041 | −0.163 | −0.086 | −0.060 | −0.175 | 0.229 | −0.166 | −0.166
$IV^{BTC}_{W,t-1}$ | −0.146 | 0.195 | 0.118 | 0.048 | 0.196 | −0.412 | 0.217 | 0.217
$IV^{BTC}_{M,t-1}$ | −0.083 | −1.030 *** | −0.648 *** | −0.481 *** | −1.014 *** | −0.251 * | −1.303 *** | −1.303 ***
$\text{EPU-I}_{D,t-1}$ | −0.011 | −0.011 | −0.007 | −0.011 | −0.011 | −0.003 | 0.002 | 0.002
$\text{EPU-I}_{W,t-1}$ | −0.016 | −0.017 | −0.015 | 0.025 | −0.015 | −0.053 | 0.022 | 0.022
$\text{EPU-I}_{M,t-1}$ | 0.360 *** | −0.092 | −0.046 | 0.003 | −0.095 | 0.655 *** | −0.127 | −0.127
30-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr.) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
$\Delta MCap_{D,t-30}$ | −0.009 | 0.255 | 0.195 | 0.197 | −0.007 | −0.015 | 0.140 | 0.141
$\Delta MCap_{W,t-30}$ | −0.012 | 0.005 | −0.051 | 0.107 | −0.135 | −0.027 | −0.035 | −0.033
$\Delta MCap_{M,t-30}$ | −0.122 ** | −6.392 | −4.690 | −4.889 * | −0.488 | −0.122 | −4.147 | −4.184
$\sigma_{D,t-30}$ | 0.005 | 0.003 | 0.005 | 0.004 | 0.006 | −0.013 | 0.036 | 0.035
$\sigma_{W,t-30}$ | 0.007 | −0.011 | −0.006 | −0.010 | −0.001 | −0.044 | 0.067 | 0.068
$\sigma_{M,t-30}$ | −0.004 | 0.211 *** | 0.194 *** | 0.180 *** | 0.205 *** | 0.147 ** | 0.675 *** | 0.676 ***
$IV^{BTC}_{D,t-30}$ | −0.003 | −0.361 * | −0.196 | −0.122 | −0.370 * | 0.116 | −0.359 | −0.359
$IV^{BTC}_{W,t-30}$ | −0.146 | 0.315 | 0.165 | 0.100 | 0.313 | −0.508 * | 0.299 | 0.300
$IV^{BTC}_{M,t-30}$ | −0.035 | −1.250 *** | −0.712 *** | −0.554 *** | −1.232 *** | −0.063 | −1.667 *** | −1.667 ***
$\text{EPU-I}_{D,t-30}$ | −0.012 | −0.055 | −0.028 | −0.025 | −0.055 | −0.010 | −0.083 | −0.083
$\text{EPU-I}_{W,t-30}$ | 0.110 ** | 0.118 | 0.075 | 0.045 | 0.118 | 0.149 * | 0.206 * | 0.206 *
$\text{EPU-I}_{M,t-30}$ | 0.232 *** | −0.209 *** | −0.102 | −0.009 | −0.212 *** | 0.449 *** | −0.479 *** | −0.479 ***
*** for p-value < 0.001, ** for p-value < 0.01, * for p-value < 0.05.
Table A6. Volume-based Feder et al. (2018): Random Forest variable importance measures. Variables are sorted in decreasing order of Out-Of-Bag accuracy for each forecasting horizon. Regressors lagged according to forecast horizon.
Stablecoins with <730 observations.
1-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
$\sigma_{M,t-1}$ | 92.731 | 532.785
$\sigma_{W,t-1}$ | 66.244 | 379.758
$\sigma_{D,t-1}$ | 60.904 | 284.065
$\Delta MCap_{M,t-1}$ | 58.361 | 280.940
$IV^{BTC}_{M,t-1}$ | 57.882 | 174.456
$\Delta MCap_{W,t-1}$ | 47.360 | 212.664
$IV^{BTC}_{W,t-1}$ | 45.059 | 132.145
$\Delta MCap_{D,t-1}$ | 45.047 | 247.512
$\text{EPU-I}_{M,t-1}$ | 40.069 | 120.647
$IV^{BTC}_{D,t-1}$ | 38.699 | 109.675
$\text{EPU-I}_{W,t-1}$ | 37.374 | 96.648
$\text{EPU-I}_{D,t-1}$ | 11.768 | 72.393
30-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
$\sigma_{M,t-30}$ | 91.517 | 527.257
$\sigma_{W,t-30}$ | 70.878 | 399.964
$IV^{BTC}_{M,t-30}$ | 67.102 | 187.943
$\sigma_{D,t-30}$ | 62.362 | 287.241
$\Delta MCap_{M,t-30}$ | 48.519 | 272.206
$\Delta MCap_{W,t-30}$ | 47.302 | 198.873
$\text{EPU-I}_{M,t-30}$ | 45.488 | 134.969
$IV^{BTC}_{W,t-30}$ | 45.327 | 133.427
$\Delta MCap_{D,t-30}$ | 43.199 | 225.551
$IV^{BTC}_{D,t-30}$ | 42.100 | 119.303
$\text{EPU-I}_{W,t-30}$ | 34.062 | 96.868
$\text{EPU-I}_{D,t-30}$ | 7.814 | 66.980
Stablecoins with ≥730 observations.
1-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
$\sigma_{M,t-1}$ | 156.038 | 4325.298
$\text{EPU-I}_{W,t-1}$ | 113.664 | 1057.233
$\text{EPU-I}_{M,t-1}$ | 104.507 | 1466.079
$\Delta MCap_{M,t-1}$ | 104.448 | 3895.016
$\sigma_{W,t-1}$ | 100.646 | 2754.708
$IV^{BTC}_{M,t-1}$ | 95.821 | 1786.739
$\Delta MCap_{W,t-1}$ | 88.308 | 3065.650
$\Delta MCap_{D,t-1}$ | 84.569 | 2582.196
$\sigma_{D,t-1}$ | 79.606 | 2032.422
$IV^{BTC}_{W,t-1}$ | 73.328 | 1456.258
$IV^{BTC}_{D,t-1}$ | 72.889 | 1266.673
$\text{EPU-I}_{D,t-1}$ | 48.288 | 822.535
30-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
$\sigma_{M,t-30}$ | 153.021 | 4402.147
$\text{EPU-I}_{M,t-30}$ | 118.101 | 1548.791
$\Delta MCap_{M,t-30}$ | 105.139 | 3733.134
$\text{EPU-I}_{W,t-30}$ | 103.306 | 1118.834
$IV^{BTC}_{M,t-30}$ | 98.341 | 1838.324
$\sigma_{W,t-30}$ | 97.645 | 2941.215
$\Delta MCap_{W,t-30}$ | 83.506 | 2858.668
$\sigma_{D,t-30}$ | 78.018 | 2094.880
$\Delta MCap_{D,t-30}$ | 76.798 | 2373.704
$IV^{BTC}_{W,t-30}$ | 71.111 | 1494.133
$IV^{BTC}_{D,t-30}$ | 67.097 | 1279.382
$\text{EPU-I}_{D,t-30}$ | 53.185 | 838.811
365-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
$\sigma_{M,t-365}$ | 154.711 | 4555.518
$\text{EPU-I}_{M,t-365}$ | 133.139 | 1769.350
$\Delta MCap_{M,t-365}$ | 120.479 | 3345.022
$\sigma_{W,t-365}$ | 110.163 | 3133.867
$\sigma_{D,t-365}$ | 99.865 | 2283.636
$IV^{BTC}_{M,t-365}$ | 94.857 | 2001.125
$\Delta MCap_{W,t-365}$ | 91.542 | 2326.249
$\Delta MCap_{D,t-365}$ | 85.819 | 1978.278
$\text{EPU-I}_{W,t-365}$ | 81.943 | 1250.345
$IV^{BTC}_{W,t-365}$ | 60.864 | 1655.588
$IV^{BTC}_{D,t-365}$ | 54.564 | 1339.696
$\text{EPU-I}_{D,t-365}$ | 50.066 | 846.033
Table A7. $0.80 price threshold: AUC, H-measure, Brier scores, models included in the MCS, and evaluation metrics for all models using two alternative thresholds for converting probabilities into binary variables (50% and empirical prevalence). Results are separated by stablecoin lifespan and forecast horizon for out-of-sample forecasts.
Stablecoins with <730 observations.
1-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Accuracy (50%) | Sensitivity (50%) | Specificity (50%) | Accuracy (emp. prev.) | Sensitivity (emp. prev.) | Specificity (emp. prev.) | MCS
Pooled Logit | 0.836 | 0.338 | 0.088 | 0.890 | 0.095 | 0.990 | 0.416 | 0.983 | 0.345 | No
Logit FE | 0.985 | 0.967 | 0.021 | 0.973 | 0.775 | 0.998 | 0.990 | 0.921 | 0.998 | No
Logit FE (Bias. Corr.) | 0.692 | 0.148 | 0.408 | 0.481 | 0.954 | 0.422 | 0.137 | 1.000 | 0.029 | No
Conditional Logit | 0.755 | 0.204 | 0.311 | 0.450 | 0.919 | 0.391 | 0.151 | 1.000 | 0.045 | No
Logit RE | 0.988 | 0.951 | 0.021 | 0.973 | 0.770 | 0.999 | 0.990 | 0.922 | 0.999 | No
Pooled Cauchit | 0.853 | 0.373 | 0.087 | 0.891 | 0.277 | 0.969 | 0.458 | 0.993 | 0.390 | No
Cauchit FE | 0.995 | 0.989 | 0.019 | 0.976 | 0.790 | 0.999 | 0.988 | 0.899 | 0.999 | No
Cauchit RE | 0.996 | 0.970 | 0.020 | 0.976 | 0.793 | 0.999 | 0.988 | 0.900 | 0.999 | No
Random Forest | 0.996 | 0.929 | 0.015 | 0.986 | 0.909 | 0.995 | 0.908 | 0.995 | 0.897 | YES
30-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Accuracy (50%) | Sensitivity (50%) | Specificity (50%) | Accuracy (emp. prev.) | Sensitivity (emp. prev.) | Specificity (emp. prev.) | MCS
Pooled Logit | 0.704 | 0.138 | 0.108 | 0.871 | 0.086 | 0.980 | 0.427 | 0.915 | 0.358 | No
Logit FE | 0.932 | 0.772 | 0.058 | 0.931 | 0.523 | 0.988 | 0.954 | 0.718 | 0.987 | No
Logit FE (Bias. Corr.) | 0.604 | 0.066 | 0.410 | 0.451 | 0.801 | 0.402 | 0.150 | 0.995 | 0.032 | No
Conditional Logit | 0.701 | 0.145 | 0.305 | 0.493 | 0.811 | 0.448 | 0.195 | 0.995 | 0.084 | No
Logit RE | 0.925 | 0.760 | 0.050 | 0.938 | 0.528 | 0.995 | 0.960 | 0.719 | 0.994 | No
Pooled Cauchit | 0.697 | 0.130 | 0.112 | 0.865 | 0.140 | 0.966 | 0.427 | 0.884 | 0.363 | No
Cauchit FE | 0.906 | 0.812 | 0.046 | 0.947 | 0.570 | 1.000 | 0.963 | 0.695 | 1.000 | YES
Cauchit RE | 0.905 | 0.770 | 0.055 | 0.938 | 0.563 | 0.990 | 0.954 | 0.696 | 0.990 | No
Random Forest | 0.931 | 0.613 | 0.055 | 0.926 | 0.622 | 0.968 | 0.815 | 0.908 | 0.802 | No
Stablecoins with ≥730 observations.
1-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Accuracy (50%) | Sensitivity (50%) | Specificity (50%) | Accuracy (emp. prev.) | Sensitivity (emp. prev.) | Specificity (emp. prev.) | MCS
Pooled Logit | 0.822 | 0.359 | 0.111 | 0.857 | 0.138 | 0.986 | 0.863 | 0.512 | 0.926 | No
Logit FE | 0.993 | 0.878 | 0.030 | 0.958 | 0.736 | 0.998 | 0.969 | 0.906 | 0.981 | No
Logit FE (Bias. Corr.) | 0.714 | 0.150 | 0.168 | 0.831 | 0.226 | 0.939 | 0.196 | 0.997 | 0.053 | No
Conditional Logit | 0.596 | 0.036 | 0.294 | 0.306 | 0.883 | 0.202 | 0.163 | 1.000 | 0.012 | No
Logit RE | 0.992 | 0.877 | 0.033 | 0.954 | 0.708 | 0.998 | 0.971 | 0.908 | 0.982 | No
Pooled Cauchit | 0.845 | 0.390 | 0.102 | 0.870 | 0.296 | 0.973 | 0.845 | 0.589 | 0.890 | No
Cauchit FE | 0.993 | 0.893 | 0.026 | 0.967 | 0.795 | 0.998 | 0.978 | 0.891 | 0.993 | YES
Cauchit RE | 0.992 | 0.890 | 0.029 | 0.963 | 0.763 | 0.998 | 0.979 | 0.895 | 0.994 | No
Random Forest | 0.966 | 0.723 | 0.051 | 0.932 | 0.614 | 0.990 | 0.885 | 0.921 | 0.878 | No
30-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Accuracy (50%) | Sensitivity (50%) | Specificity (50%) | Accuracy (emp. prev.) | Sensitivity (emp. prev.) | Specificity (emp. prev.) | MCS
Pooled Logit | 0.797 | 0.318 | 0.115 | 0.853 | 0.115 | 0.988 | 0.837 | 0.496 | 0.900 | No
Logit FE | 0.961 | 0.810 | 0.041 | 0.947 | 0.704 | 0.991 | 0.955 | 0.866 | 0.971 | No
Logit FE (Bias. Corr.) | 0.621 | 0.053 | 0.202 | 0.793 | 0.204 | 0.901 | 0.173 | 0.980 | 0.025 | No
Conditional Logit | 0.567 | 0.021 | 0.302 | 0.307 | 0.858 | 0.206 | 0.158 | 0.996 | 0.004 | No
Logit RE | 0.966 | 0.812 | 0.041 | 0.946 | 0.704 | 0.990 | 0.954 | 0.871 | 0.970 | No
Pooled Cauchit | 0.827 | 0.353 | 0.107 | 0.864 | 0.264 | 0.973 | 0.821 | 0.574 | 0.866 | No
Cauchit FE | 0.966 | 0.842 | 0.035 | 0.955 | 0.743 | 0.994 | 0.967 | 0.839 | 0.990 | YES
Cauchit RE | 0.974 | 0.844 | 0.035 | 0.955 | 0.742 | 0.994 | 0.967 | 0.841 | 0.991 | YES
Random Forest | 0.887 | 0.489 | 0.084 | 0.887 | 0.393 | 0.977 | 0.832 | 0.776 | 0.843 | No
365-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Accuracy (50%) | Sensitivity (50%) | Specificity (50%) | Accuracy (emp. prev.) | Sensitivity (emp. prev.) | Specificity (emp. prev.) | MCS
Pooled Logit | 0.697 | 0.135 | 0.161 | 0.804 | 0.020 | 0.985 | 0.807 | 0.109 | 0.967 | No
Logit FE | 0.798 | 0.416 | 0.126 | 0.860 | 0.375 | 0.972 | 0.874 | 0.501 | 0.960 | No
Logit FE (Bias. Corr.) | 0.638 | 0.074 | 0.269 | 0.614 | 0.129 | 0.725 | 0.251 | 0.815 | 0.120 | No
Conditional Logit | 0.530 | 0.007 | 0.270 | 0.550 | 0.472 | 0.569 | 0.197 | 0.993 | 0.013 | No
Logit RE | 0.806 | 0.427 | 0.122 | 0.863 | 0.356 | 0.980 | 0.877 | 0.483 | 0.968 | No
Pooled Cauchit | 0.686 | 0.121 | 0.158 | 0.805 | 0.044 | 0.980 | 0.804 | 0.162 | 0.952 | No
Cauchit FE | 0.780 | 0.464 | 0.110 | 0.879 | 0.377 | 0.995 | 0.883 | 0.433 | 0.987 | YES
Cauchit RE | 0.811 | 0.467 | 0.110 | 0.880 | 0.377 | 0.996 | 0.884 | 0.432 | 0.988 | YES
Random Forest | 0.721 | 0.188 | 0.151 | 0.809 | 0.094 | 0.974 | 0.802 | 0.364 | 0.903 | No
Table A8. Volume-based Feder et al. (2018): AUC, H-measure, Brier scores, models included in the MCS, and evaluation metrics for all models using two alternative thresholds for converting probabilities into binary variables (50% and empirical prevalence). Results are separated by stablecoin lifespan and forecast horizon for out-of-sample forecasts.
Stablecoins with <730 observations.
1-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
Pooled Logit | 0.642 | 0.097 | 0.118 | 0.861 | 0.000 | 0.996 | 0.528 | 0.655 | 0.508 | No
Logit FE | 0.970 | 0.952 | 0.048 | 0.942 | 0.596 | 0.996 | 0.962 | 0.747 | 0.995 | No
Logit FE (Bias. Corr.) | 0.650 | 0.095 | 0.295 | 0.546 | 0.708 | 0.520 | 0.307 | 0.924 | 0.211 | No
Conditional Logit | 0.581 | 0.053 | 0.305 | 0.449 | 0.633 | 0.420 | 0.212 | 0.995 | 0.089 | No
Logit RE | 0.991 | 0.941 | 0.045 | 0.944 | 0.602 | 0.997 | 0.965 | 0.766 | 0.996 | YES
Pooled Cauchit | 0.630 | 0.095 | 0.119 | 0.855 | 0.026 | 0.985 | 0.568 | 0.611 | 0.561 | No
Cauchit FE | 0.984 | 0.973 | 0.047 | 0.945 | 0.609 | 0.998 | 0.956 | 0.691 | 0.998 | YES
Cauchit RE | 0.994 | 0.976 | 0.052 | 0.933 | 0.519 | 0.998 | 0.964 | 0.749 | 0.997 | No
Random Forest | 0.961 | 0.710 | 0.045 | 0.943 | 0.667 | 0.986 | 0.843 | 0.930 | 0.829 | YES
30-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
Pooled Logit | 0.588 | 0.034 | 0.137 | 0.840 | 0.002 | 0.989 | 0.536 | 0.613 | 0.522 | No
Logit FE | 0.891 | 0.777 | 0.071 | 0.920 | 0.510 | 0.993 | 0.934 | 0.609 | 0.992 | YES
Logit FE (Bias. Corr.) | 0.636 | 0.074 | 0.345 | 0.473 | 0.792 | 0.416 | 0.274 | 0.932 | 0.157 | No
Conditional Logit | 0.560 | 0.031 | 0.314 | 0.471 | 0.614 | 0.445 | 0.238 | 0.975 | 0.107 | No
Logit RE | 0.897 | 0.741 | 0.069 | 0.920 | 0.504 | 0.994 | 0.935 | 0.611 | 0.992 | YES
Pooled Cauchit | 0.559 | 0.027 | 0.143 | 0.822 | 0.051 | 0.959 | 0.545 | 0.562 | 0.542 | No
Cauchit FE | 0.936 | 0.817 | 0.069 | 0.922 | 0.495 | 0.998 | 0.934 | 0.578 | 0.998 | YES
Cauchit RE | 0.893 | 0.794 | 0.068 | 0.923 | 0.501 | 0.998 | 0.936 | 0.590 | 0.998 | YES
Random Forest | 0.811 | 0.369 | 0.097 | 0.880 | 0.476 | 0.951 | 0.685 | 0.776 | 0.669 | No
Stablecoins with ≥730 observations.
1-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
Pooled Logit | 0.773 | 0.243 | 0.160 | 0.779 | 0.147 | 0.973 | 0.785 | 0.382 | 0.908 | No
Logit FE | 0.983 | 0.933 | 0.081 | 0.879 | 0.537 | 0.983 | 0.934 | 0.787 | 0.979 | No
Logit FE (Bias. Corr.) | 0.681 | 0.118 | 0.210 | 0.663 | 0.510 | 0.710 | 0.407 | 0.954 | 0.240 | No
Conditional Logit | 0.610 | 0.054 | 0.276 | 0.426 | 0.823 | 0.304 | 0.255 | 1.000 | 0.028 | No
Logit RE | 0.984 | 0.932 | 0.080 | 0.879 | 0.536 | 0.984 | 0.935 | 0.788 | 0.979 | No
Pooled Cauchit | 0.779 | 0.259 | 0.162 | 0.782 | 0.236 | 0.949 | 0.786 | 0.390 | 0.907 | No
Cauchit FE | 0.987 | 0.939 | 0.074 | 0.897 | 0.607 | 0.986 | 0.932 | 0.772 | 0.981 | YES
Cauchit RE | 0.987 | 0.930 | 0.080 | 0.890 | 0.570 | 0.988 | 0.926 | 0.746 | 0.981 | No
Random Forest | 0.936 | 0.620 | 0.084 | 0.886 | 0.569 | 0.983 | 0.852 | 0.857 | 0.850 | No
30-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
Pooled Logit | 0.771 | 0.253 | 0.134 | 0.823 | 0.176 | 0.975 | 0.774 | 0.518 | 0.834 | No
Logit FE | 0.940 | 0.830 | 0.081 | 0.884 | 0.469 | 0.981 | 0.936 | 0.750 | 0.979 | No
Logit FE (Bias. Corr.) | 0.672 | 0.105 | 0.203 | 0.732 | 0.399 | 0.810 | 0.385 | 0.915 | 0.261 | No
Conditional Logit | 0.624 | 0.065 | 0.263 | 0.475 | 0.727 | 0.417 | 0.215 | 1.000 | 0.032 | No
Logit RE | 0.949 | 0.839 | 0.081 | 0.882 | 0.461 | 0.981 | 0.936 | 0.752 | 0.979 | No
Pooled Cauchit | 0.772 | 0.252 | 0.139 | 0.820 | 0.236 | 0.957 | 0.808 | 0.463 | 0.889 | No
Cauchit FE | 0.947 | 0.842 | 0.079 | 0.898 | 0.538 | 0.982 | 0.928 | 0.700 | 0.981 | YES
Cauchit RE | 0.950 | 0.836 | 0.081 | 0.894 | 0.521 | 0.981 | 0.927 | 0.696 | 0.980 | No
Random Forest | 0.830 | 0.353 | 0.112 | 0.848 | 0.388 | 0.956 | 0.757 | 0.734 | 0.762 | No
365-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
Pooled Logit | 0.709 | 0.152 | 0.211 | 0.717 | 0.108 | 0.973 | 0.720 | 0.189 | 0.943 | No
Logit FE | 0.721 | 0.307 | 0.250 | 0.717 | 0.108 | 0.973 | 0.740 | 0.200 | 0.967 | No
Logit FE (Bias. Corr.) | 0.532 | 0.011 | 0.267 | 0.634 | 0.113 | 0.853 | 0.406 | 0.760 | 0.257 | No
Conditional Logit | 0.510 | 0.004 | 0.240 | 0.613 | 0.240 | 0.770 | 0.306 | 0.983 | 0.022 | No
Logit RE | 0.736 | 0.304 | 0.251 | 0.715 | 0.108 | 0.971 | 0.739 | 0.201 | 0.965 | No
Pooled Cauchit | 0.702 | 0.144 | 0.219 | 0.716 | 0.138 | 0.960 | 0.717 | 0.186 | 0.940 | No
Cauchit FE | 0.684 | 0.339 | 0.238 | 0.727 | 0.108 | 0.988 | 0.750 | 0.198 | 0.983 | No
Cauchit RE | 0.741 | 0.333 | 0.241 | 0.725 | 0.109 | 0.984 | 0.748 | 0.201 | 0.979 | No
Random Forest | 0.654 | 0.095 | 0.206 | 0.715 | 0.266 | 0.904 | 0.664 | 0.481 | 0.741 | YES
Table A9. $0.80 Price threshold: AUC, H-measure, Brier scores, models included in the MCS, and evaluation metrics for all models, including also the ZPP(RW), using two alternative thresholds for converting probabilities into binary variables (50% and empirical prevalence). Results are separated by stablecoin lifespan and forecasting horizon for out-of-sample forecasts.
Stablecoins with <730 observations.
1-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
ZPP(RW) | 0.955 | 0.730 | 0.283 | 0.677 | 0.980 | 0.635 | 0.637 | 0.989 | 0.588 | No
Pooled Logit | 0.834 | 0.335 | 0.093 | 0.880 | 0.096 | 0.989 | 0.483 | 0.973 | 0.415 | No
Logit FE | 0.990 | 0.973 | 0.022 | 0.972 | 0.783 | 0.998 | 0.990 | 0.929 | 0.998 | No
Logit FE (Bias. Corr.) | 0.702 | 0.159 | 0.399 | 0.493 | 0.958 | 0.429 | 0.149 | 1.000 | 0.031 | No
Conditional Logit | 0.743 | 0.184 | 0.316 | 0.452 | 0.922 | 0.387 | 0.166 | 1.000 | 0.050 | No
Logit RE | 0.991 | 0.960 | 0.021 | 0.972 | 0.778 | 0.999 | 0.990 | 0.928 | 0.998 | No
Pooled Cauchit | 0.850 | 0.367 | 0.092 | 0.882 | 0.276 | 0.966 | 0.516 | 0.979 | 0.452 | No
Cauchit FE | 0.996 | 0.991 | 0.020 | 0.975 | 0.799 | 0.999 | 0.988 | 0.909 | 0.999 | No
Cauchit RE | 0.996 | 0.974 | 0.020 | 0.975 | 0.802 | 0.999 | 0.988 | 0.910 | 0.998 | No
Random Forest | 0.996 | 0.932 | 0.015 | 0.985 | 0.915 | 0.995 | 0.926 | 0.992 | 0.917 | YES
30-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
ZPP(RW) | 0.925 | 0.644 | 0.358 | 0.626 | 0.989 | 0.572 | 0.603 | 0.999 | 0.544 | No
Pooled Logit | 0.712 | 0.141 | 0.113 | 0.863 | 0.087 | 0.979 | 0.477 | 0.866 | 0.419 | No
Logit FE | 0.936 | 0.781 | 0.059 | 0.928 | 0.542 | 0.986 | 0.953 | 0.743 | 0.985 | No
Logit FE (Bias. Corr.) | 0.621 | 0.075 | 0.399 | 0.467 | 0.830 | 0.412 | 0.161 | 0.995 | 0.037 | No
Conditional Logit | 0.695 | 0.136 | 0.311 | 0.488 | 0.813 | 0.440 | 0.213 | 0.993 | 0.096 | No
Logit RE | 0.928 | 0.781 | 0.051 | 0.936 | 0.547 | 0.994 | 0.962 | 0.747 | 0.994 | No
Pooled Cauchit | 0.701 | 0.129 | 0.118 | 0.855 | 0.130 | 0.963 | 0.491 | 0.843 | 0.439 | No
Cauchit FE | 0.907 | 0.817 | 0.047 | 0.946 | 0.592 | 0.999 | 0.962 | 0.721 | 0.999 | YES
Cauchit RE | 0.910 | 0.779 | 0.056 | 0.936 | 0.585 | 0.988 | 0.953 | 0.722 | 0.988 | No
Random Forest | 0.934 | 0.625 | 0.057 | 0.923 | 0.646 | 0.964 | 0.816 | 0.914 | 0.802 | No
Stablecoins with ≥730 observations.
1-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
ZPP(RW) | 0.751 | 0.208 | 0.296 | 0.649 | 0.714 | 0.638 | 0.612 | 0.820 | 0.574 | No
Pooled Logit | 0.823 | 0.361 | 0.111 | 0.856 | 0.138 | 0.987 | 0.865 | 0.500 | 0.932 | No
Logit FE | 0.993 | 0.877 | 0.031 | 0.957 | 0.736 | 0.998 | 0.970 | 0.906 | 0.981 | No
Logit FE (Bias. Corr.) | 0.713 | 0.151 | 0.167 | 0.831 | 0.225 | 0.942 | 0.200 | 0.997 | 0.054 | No
Conditional Logit | 0.591 | 0.034 | 0.294 | 0.302 | 0.883 | 0.196 | 0.165 | 1.000 | 0.013 | No
Logit RE | 0.992 | 0.877 | 0.033 | 0.953 | 0.708 | 0.998 | 0.971 | 0.908 | 0.983 | No
Pooled Cauchit | 0.845 | 0.391 | 0.103 | 0.869 | 0.297 | 0.973 | 0.847 | 0.582 | 0.895 | No
Cauchit FE | 0.993 | 0.893 | 0.026 | 0.967 | 0.796 | 0.998 | 0.978 | 0.891 | 0.994 | YES
Cauchit RE | 0.992 | 0.890 | 0.029 | 0.962 | 0.763 | 0.998 | 0.979 | 0.895 | 0.994 | No
Random Forest | 0.966 | 0.724 | 0.051 | 0.932 | 0.614 | 0.990 | 0.885 | 0.922 | 0.878 | No
30-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
ZPP(RW) | 0.722 | 0.180 | 0.397 | 0.563 | 0.872 | 0.505 | 0.505 | 0.911 | 0.429 | No
Pooled Logit | 0.797 | 0.320 | 0.116 | 0.851 | 0.115 | 0.989 | 0.846 | 0.481 | 0.914 | No
Logit FE | 0.961 | 0.810 | 0.041 | 0.946 | 0.704 | 0.991 | 0.955 | 0.865 | 0.972 | No
Logit FE (Bias. Corr.) | 0.620 | 0.053 | 0.202 | 0.793 | 0.204 | 0.903 | 0.176 | 0.979 | 0.026 | No
Conditional Logit | 0.562 | 0.019 | 0.302 | 0.301 | 0.858 | 0.198 | 0.160 | 0.995 | 0.004 | No
Logit RE | 0.966 | 0.812 | 0.041 | 0.946 | 0.704 | 0.991 | 0.955 | 0.869 | 0.970 | No
Pooled Cauchit | 0.827 | 0.354 | 0.107 | 0.863 | 0.265 | 0.974 | 0.823 | 0.567 | 0.870 | No
Cauchit FE | 0.965 | 0.842 | 0.036 | 0.955 | 0.743 | 0.994 | 0.966 | 0.838 | 0.990 | YES
Cauchit RE | 0.974 | 0.844 | 0.036 | 0.955 | 0.742 | 0.995 | 0.967 | 0.840 | 0.991 | YES
Random Forest | 0.887 | 0.489 | 0.085 | 0.885 | 0.394 | 0.977 | 0.832 | 0.776 | 0.843 | No
365-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
ZPP(RW) | 0.696 | 0.166 | 0.524 | 0.541 | 0.898 | 0.457 | 0.513 | 0.926 | 0.416 | No
Pooled Logit | 0.700 | 0.141 | 0.161 | 0.804 | 0.021 | 0.988 | 0.808 | 0.107 | 0.973 | No
Logit FE | 0.800 | 0.420 | 0.127 | 0.859 | 0.379 | 0.971 | 0.873 | 0.505 | 0.960 | No
Logit FE (Bias. Corr.) | 0.643 | 0.078 | 0.271 | 0.609 | 0.128 | 0.722 | 0.249 | 0.812 | 0.116 | No
Conditional Logit | 0.527 | 0.006 | 0.271 | 0.546 | 0.476 | 0.563 | 0.200 | 0.992 | 0.014 | No
Logit RE | 0.808 | 0.432 | 0.123 | 0.862 | 0.360 | 0.980 | 0.877 | 0.487 | 0.968 | No
Pooled Cauchit | 0.689 | 0.127 | 0.158 | 0.805 | 0.044 | 0.984 | 0.807 | 0.159 | 0.959 | No
Cauchit FE | 0.783 | 0.470 | 0.111 | 0.878 | 0.381 | 0.995 | 0.883 | 0.437 | 0.987 | YES
Cauchit RE | 0.814 | 0.472 | 0.111 | 0.879 | 0.381 | 0.996 | 0.883 | 0.436 | 0.988 | YES
Random Forest | 0.724 | 0.194 | 0.152 | 0.808 | 0.096 | 0.975 | 0.803 | 0.367 | 0.906 | No
Table A10. Volume-based Feder et al. (2018): AUC, H-measure, Brier scores, models included in the MCS, and evaluation metrics for all models, including also the ZPP(RW), using two alternative thresholds for converting probabilities into binary variables (50% and empirical prevalence). Results are separated by stablecoin lifespan and forecasting horizon for out-of-sample forecasts.
Stablecoins with <730 observations.
1-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
ZPP(RW) | 0.789 | 0.323 | 0.312 | 0.658 | 0.836 | 0.627 | 0.620 | 0.844 | 0.582 | No
Pooled Logit | 0.641 | 0.093 | 0.124 | 0.852 | 0.000 | 0.996 | 0.574 | 0.626 | 0.565 | No
Logit FE | 0.971 | 0.954 | 0.049 | 0.941 | 0.611 | 0.997 | 0.960 | 0.745 | 0.997 | YES
Logit FE (Bias. Corr.) | 0.647 | 0.097 | 0.297 | 0.543 | 0.706 | 0.515 | 0.310 | 0.921 | 0.207 | No
Conditional Logit | 0.584 | 0.052 | 0.307 | 0.451 | 0.640 | 0.419 | 0.228 | 0.994 | 0.098 | No
Logit RE | 0.991 | 0.948 | 0.047 | 0.942 | 0.617 | 0.997 | 0.962 | 0.765 | 0.996 | YES
Pooled Cauchit | 0.627 | 0.092 | 0.125 | 0.845 | 0.026 | 0.984 | 0.609 | 0.584 | 0.614 | No
Cauchit FE | 0.983 | 0.971 | 0.049 | 0.943 | 0.623 | 0.998 | 0.954 | 0.693 | 0.998 | YES
Cauchit RE | 0.994 | 0.976 | 0.054 | 0.931 | 0.536 | 0.998 | 0.958 | 0.729 | 0.997 | No
Random Forest | 0.960 | 0.708 | 0.047 | 0.940 | 0.679 | 0.985 | 0.862 | 0.917 | 0.852 | YES
30-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
ZPP(RW) | 0.779 | 0.335 | 0.393 | 0.619 | 0.884 | 0.570 | 0.601 | 0.904 | 0.545 | No
Pooled Logit | 0.582 | 0.034 | 0.141 | 0.834 | 0.002 | 0.988 | 0.554 | 0.586 | 0.548 | No
Logit FE | 0.887 | 0.767 | 0.072 | 0.920 | 0.529 | 0.993 | 0.931 | 0.606 | 0.991 | YES
Logit FE (Bias. Corr.) | 0.634 | 0.077 | 0.348 | 0.461 | 0.787 | 0.401 | 0.274 | 0.924 | 0.153 | No
Conditional Logit | 0.560 | 0.032 | 0.319 | 0.461 | 0.627 | 0.430 | 0.245 | 0.972 | 0.111 | No
Logit RE | 0.892 | 0.740 | 0.072 | 0.919 | 0.523 | 0.993 | 0.930 | 0.607 | 0.990 | YES
Pooled Cauchit | 0.554 | 0.026 | 0.147 | 0.818 | 0.054 | 0.959 | 0.562 | 0.540 | 0.567 | No
Cauchit FE | 0.931 | 0.806 | 0.072 | 0.920 | 0.504 | 0.996 | 0.931 | 0.577 | 0.996 | YES
Cauchit RE | 0.888 | 0.783 | 0.070 | 0.920 | 0.507 | 0.996 | 0.933 | 0.590 | 0.996 | YES
Random Forest | 0.808 | 0.373 | 0.099 | 0.876 | 0.492 | 0.947 | 0.682 | 0.775 | 0.665 | No
Stablecoins with ≥730 observations.
1-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
ZPP(RW) | 0.865 | 0.459 | 0.206 | 0.744 | 0.838 | 0.715 | 0.719 | 0.905 | 0.660 | No
Pooled Logit | 0.773 | 0.244 | 0.161 | 0.778 | 0.147 | 0.975 | 0.785 | 0.366 | 0.916 | No
Logit FE | 0.983 | 0.933 | 0.082 | 0.877 | 0.537 | 0.983 | 0.932 | 0.783 | 0.979 | No
Logit FE (Bias. Corr.) | 0.679 | 0.116 | 0.211 | 0.660 | 0.510 | 0.707 | 0.409 | 0.952 | 0.240 | No
Conditional Logit | 0.606 | 0.051 | 0.276 | 0.424 | 0.824 | 0.299 | 0.259 | 1.000 | 0.028 | No
Logit RE | 0.983 | 0.933 | 0.082 | 0.877 | 0.536 | 0.984 | 0.933 | 0.784 | 0.979 | No
Pooled Cauchit | 0.779 | 0.259 | 0.162 | 0.781 | 0.236 | 0.950 | 0.786 | 0.385 | 0.911 | No
Cauchit FE | 0.986 | 0.938 | 0.076 | 0.895 | 0.608 | 0.985 | 0.931 | 0.770 | 0.981 | YES
Cauchit RE | 0.987 | 0.929 | 0.081 | 0.888 | 0.570 | 0.987 | 0.924 | 0.744 | 0.981 | No
Random Forest | 0.936 | 0.619 | 0.085 | 0.884 | 0.569 | 0.983 | 0.851 | 0.857 | 0.849 | No
30-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
ZPP(RW) | 0.823 | 0.387 | 0.330 | 0.622 | 0.904 | 0.555 | 0.570 | 0.930 | 0.484 | No
Pooled Logit | 0.773 | 0.257 | 0.133 | 0.823 | 0.176 | 0.977 | 0.779 | 0.512 | 0.842 | No
Logit FE | 0.939 | 0.828 | 0.082 | 0.883 | 0.470 | 0.981 | 0.934 | 0.746 | 0.979 | No
Logit FE (Bias. Corr.) | 0.670 | 0.104 | 0.204 | 0.730 | 0.400 | 0.808 | 0.385 | 0.914 | 0.259 | No
Conditional Logit | 0.624 | 0.063 | 0.263 | 0.474 | 0.731 | 0.413 | 0.219 | 1.000 | 0.033 | No
Logit RE | 0.949 | 0.838 | 0.083 | 0.881 | 0.462 | 0.981 | 0.935 | 0.748 | 0.979 | No
Pooled Cauchit | 0.773 | 0.257 | 0.139 | 0.820 | 0.236 | 0.959 | 0.810 | 0.456 | 0.894 | No
Cauchit FE | 0.945 | 0.840 | 0.080 | 0.897 | 0.540 | 0.981 | 0.927 | 0.700 | 0.981 | YES
Cauchit RE | 0.949 | 0.834 | 0.082 | 0.892 | 0.522 | 0.980 | 0.925 | 0.696 | 0.980 | No
Random Forest | 0.832 | 0.356 | 0.112 | 0.848 | 0.388 | 0.957 | 0.758 | 0.734 | 0.764 | No
365-DAY AHEAD FORECAST
Model | AUC | H | Brier S. | Acc. (50%) | Sens. (50%) | Spec. (50%) | Acc. (prev.) | Sens. (prev.) | Spec. (prev.) | MCS
ZPP(RW) | 0.720 | 0.223 | 0.455 | 0.618 | 0.884 | 0.505 | 0.604 | 0.907 | 0.476 | No
Pooled Logit | 0.707 | 0.149 | 0.213 | 0.713 | 0.098 | 0.974 | 0.716 | 0.179 | 0.944 | No
Logit FE | 0.718 | 0.300 | 0.251 | 0.715 | 0.109 | 0.972 | 0.739 | 0.202 | 0.966 | No
Logit FE (Bias. Corr.) | 0.536 | 0.012 | 0.266 | 0.633 | 0.113 | 0.853 | 0.411 | 0.765 | 0.261 | No
Conditional Logit | 0.512 | 0.005 | 0.241 | 0.611 | 0.245 | 0.766 | 0.308 | 0.982 | 0.023 | No
Logit RE | 0.737 | 0.298 | 0.252 | 0.714 | 0.109 | 0.970 | 0.738 | 0.203 | 0.964 | No
Pooled Cauchit | 0.701 | 0.143 | 0.221 | 0.713 | 0.129 | 0.960 | 0.713 | 0.177 | 0.940 | No
Cauchit FE | 0.678 | 0.332 | 0.239 | 0.726 | 0.109 | 0.987 | 0.750 | 0.200 | 0.982 | No
Cauchit RE | 0.741 | 0.327 | 0.242 | 0.723 | 0.109 | 0.983 | 0.747 | 0.203 | 0.978 | No
Random Forest | 0.651 | 0.092 | 0.208 | 0.713 | 0.260 | 0.905 | 0.665 | 0.475 | 0.745 | YES
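For readers unfamiliar with the ZPP(RW) benchmark rows, the sketch below illustrates one common way a random-walk zero-price probability can be approximated by Monte Carlo simulation. It is a hedged approximation under a driftless Gaussian random walk for the price level; the function name, defaults, and simulation approach are illustrative and not the paper's implementation.

```python
# Hedged illustration of a ZPP(RW)-style benchmark: under a driftless random
# walk for the price level, the 'probability of death' over an h-day horizon
# is the chance that the simulated price falls to zero (or below a threshold).
import numpy as np

def zpp_rw(prices, horizon=30, threshold=0.0, n_sims=10_000, seed=42):
    """P(price <= threshold within `horizon` days) under a Gaussian random walk."""
    rng = np.random.default_rng(seed)
    diffs = np.diff(prices)                       # daily price changes
    sigma = diffs.std(ddof=1)                     # random-walk volatility estimate
    steps = rng.normal(0.0, sigma, (n_sims, horizon))
    paths = prices[-1] + steps.cumsum(axis=1)     # simulated future price levels
    return (paths.min(axis=1) <= threshold).mean()

# example: a noisy peg hovering near $1
hist = 1.0 + 0.02 * np.random.default_rng(1).standard_normal(365)
print("30-day ZPP:", zpp_rw(hist, horizon=30))
print("30-day P(price < $0.80):", zpp_rw(hist, horizon=30, threshold=0.80))
```

Because such a benchmark uses only the price history, its high sensitivity but poor specificity and Brier scores in the tables above are unsurprising relative to the covariate-based panel models.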
Table A11. $0.80 Price threshold: in-sample coefficient estimates for panel models (no intercept). Regressors are lagged according to the forecast horizon used for the direct forecasts. Stablecoins with <365 observations.
1-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
ΔMCap_{D,t−1} | −0.018 | 0.460 | 0.264 | 0.168 | 0.474 | −0.054 | −0.221 | −0.187
ΔMCap_{W,t−1} | −0.051 | −1.109 | 0.317 | −2.684 | −1.200 | −0.068 | 0.237 | −0.246
ΔMCap_{M,t−1} | −0.184 | −222.158 *** | −17.596 | −32.225 *** | −195.186 *** | −0.525 | −32.490 | −32.471
σ_{D,t−1} | 0.060 | 0.011 | 0.043 | 0.023 | 0.010 | 3.859 *** | 0.030 | 0.030
σ_{W,t−1} | 0.185 *** | 0.467 *** | 0.202 * | 0.197 ** | 0.471 *** | 0.376 *** | 0.734 ** | 0.733 **
σ_{M,t−1} | 0.188 *** | −0.560 *** | −0.053 | −0.260 *** | −0.546 *** | −0.637 *** | −1.104 *** | −1.102 ***
IV^BTC_{D,t−1} | 0.364 * | −0.185 | 0.302 | −0.118 | −0.174 | 0.431 | −0.861 * | −0.860 *
IV^BTC_{W,t−1} | −0.340 | −0.031 | −0.263 | −0.039 | −0.043 | −0.334 | 0.644 | 0.641
IV^BTC_{M,t−1} | 0.624 *** | −1.172 *** | 0.322 * | −0.666 *** | −1.138 *** | 1.803 *** | −3.112 *** | −3.105 ***
EPU-I_{D,t−1} | −0.020 | 0.016 | −0.012 | 0.010 | 0.016 | −0.037 | 0.087 | 0.087
EPU-I_{W,t−1} | 0.206 ** | 0.465 *** | 0.222 * | 0.277 *** | 0.465 *** | 0.524 ** | 0.513 * | 0.514 *
EPU-I_{M,t−1} | −0.016 | −0.905 *** | −0.095 | −0.583 *** | −0.899 *** | −1.597 *** | −1.392 *** | −1.391 ***
30-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
ΔMCap_{D,t−30} | −0.011 | 0.039 | −0.017 | −0.078 | −0.126 | −0.011 | −0.238 | −0.201
ΔMCap_{W,t−30} | −0.030 | 1.994 | 0.371 | 1.255 | 0.414 | −0.060 | 4.962 | 5.056
ΔMCap_{M,t−30} | −0.205 | −17.409 | −2.618 | −6.585 | −0.615 | −0.489 | 1.660 | 0.416
σ_{D,t−30} | 0.045 | 0.058 | 0.039 | 0.027 | 0.060 | 0.076 | −0.000 | −0.001
σ_{W,t−30} | 0.009 | −0.453 *** | −0.017 | −0.305 *** | −0.449 *** | −0.030 | −0.937 *** | −0.936 ***
σ_{M,t−30} | 0.144 * | −0.070 | −0.035 | −0.069 | −0.069 | −0.404 *** | 0.174 | 0.174
IV^BTC_{D,t−30} | 0.304 | −0.180 | 0.184 | −0.024 | −0.167 | 0.807 * | −0.591 | −0.589
IV^BTC_{W,t−30} | −0.732 ** | −0.570 | −0.452 | −0.641 ** | −0.570 | −0.316 | −0.709 | −0.710
IV^BTC_{M,t−30} | 1.264 *** | −0.148 | 0.798 *** | −0.010 | −0.135 | 1.844 *** | −1.393 *** | −1.391 ***
EPU-I_{D,t−30} | −0.019 | −0.078 | −0.024 | −0.052 | −0.075 | 0.018 | −0.038 | −0.039
EPU-I_{W,t−30} | 0.100 | −0.323 * | 0.053 | −0.169 * | −0.319 * | 0.376 * | −0.380 * | −0.381 *
EPU-I_{M,t−30} | −0.225 ** | −0.438 *** | −0.227 ** | −0.333 *** | −0.432 *** | −1.890 *** | −0.454 * | −0.454 *
*** for p-value < 0.001, ** for p-value < 0.01, * for p-value < 0.05.
Table A12. Volume-based Feder et al. (2018): in-sample coefficient estimates for panel models (no intercept). Regressors are lagged according to the forecast horizon used for the direct forecasts. Stablecoins with <365 observations.
1-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
ΔMCap_{D,t−1} | −0.020 | 10.067 | 3.657 | 1.321 | 9.304 | −0.030 | 22.004 | 12.839
ΔMCap_{W,t−1} | −0.051 | 9.363 | 4.210 | −2.736 | 9.136 | −0.173 | 14.611 | 8.781
ΔMCap_{M,t−1} | −0.134 | −410.578 *** | −119.639 *** | −41.651 *** | −389.344 *** | −0.232 | −1126.429 *** | −573.483 ***
σ_{D,t−1} | 0.008 | −0.096 | −0.028 | −0.014 | −0.091 | 0.010 | 0.269 | −0.070
σ_{W,t−1} | 0.046 | −0.009 | −0.029 | 0.050 | −0.012 | −0.042 | −0.535 | −0.021
σ_{M,t−1} | 0.028 | −0.901 *** | −0.127 | −0.271 * | −0.865 *** | 0.350 | −2.530 *** | −1.433 ***
IV^BTC_{D,t−1} | 0.193 | 0.240 | 0.070 | 0.103 | 0.236 | 0.863 | −0.046 | 0.085
IV^BTC_{W,t−1} | −0.167 | 0.345 | 0.203 | 0.235 | 0.340 | −0.089 | 0.978 * | 0.586
IV^BTC_{M,t−1} | −0.094 | −1.343 *** | −0.768 *** | −0.724 *** | −1.320 *** | −1.938 *** | −2.529 *** | −1.662 ***
EPU-I_{D,t−1} | −0.016 | 0.022 | 0.044 | 0.048 | 0.022 | −0.073 | 0.075 | 0.050
EPU-I_{W,t−1} | 0.096 | 0.073 | 0.031 | 0.057 | 0.076 | 0.039 | −0.036 | 0.013
EPU-I_{M,t−1} | 0.455 *** | 0.171 | 0.016 | 0.053 | 0.162 | 1.932 *** | 0.374 * | 0.203
30-DAY AHEAD FORECAST
Regressor | Pooled Logit | Logit FE | Logit FE (Bias Corr) | Conditional Logit | Logit RE | Pooled Cauchit | Cauchit FE | Cauchit RE
ΔMCap_{D,t−30} | −0.016 | 2.455 | 0.587 | 0.580 | −0.036 | −0.070 | 0.411 | 0.454
ΔMCap_{W,t−30} | −0.010 | 7.906 | 1.187 | 0.911 | 0.117 | 0.066 | 3.479 | 3.437
ΔMCap_{M,t−30} | −0.176 | −99.077 * | −22.419 | −22.809 * | −1.015 | −0.546 | −419.546 *** | −329.253 ***
σ_{D,t−30} | 0.012 | −0.075 | −0.005 | −0.048 | −0.053 | 0.033 | −0.162 | −0.140
σ_{W,t−30} | 0.010 | −0.187 | −0.021 | −0.094 | −0.176 | 0.005 | −0.212 | −0.199
σ_{M,t−30} | −0.077 | −0.547 ** | −0.167 | −0.530 *** | −0.530 ** | −0.003 | −0.928 ** | −0.857 **
IV^BTC_{D,t−30} | −0.156 | −0.274 | −0.206 | −0.156 | −0.248 | 0.671 | 0.017 | −0.000
IV^BTC_{W,t−30} | 0.143 | 0.605 | 0.280 | 0.463 | 0.583 | −0.987 | 0.648 | 0.628
IV^BTC_{M,t−30} | 0.019 | −1.173 *** | −0.315 | −0.837 *** | −1.122 *** | −0.241 | −1.764 *** | −1.657 ***
EPU-I_{D,t−30} | −0.024 | −0.082 | −0.039 | −0.055 | −0.090 | −0.081 | −0.093 | −0.090
EPU-I_{W,t−30} | 0.263 *** | 0.365 ** | 0.263 * | 0.302 ** | 0.366 ** | 0.913 ** | 0.451 ** | 0.425 **
EPU-I_{M,t−30} | 0.024 | −0.612 *** | −0.192 | −0.470 *** | −0.610 *** | 0.508 | −0.573 *** | −0.567 ***
*** for p-value < 0.001, ** for p-value < 0.01, * for p-value < 0.05.
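As a companion to these coefficient tables, the following minimal sketch (assumed data and variable names; pooling only, without the fixed or random effects estimated above) shows how a Cauchit specification can be fitted as a binomial GLM with a standard-Cauchy CDF link in statsmodels; the fat tails of the Cauchy link are what allow extreme regressor values to still map to non-degenerate PDs.

```python
# Minimal pooled Cauchit sketch (not the paper's code): a binomial GLM whose
# link function is the standard Cauchy CDF, F(eta) = 0.5 + arctan(eta)/pi.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "mcap_m_lag": rng.normal(0, 1, n),    # stand-in for lagged monthly mcap change
    "sigma_m_lag": rng.normal(0, 1, n),   # stand-in for lagged monthly volatility
})
# simulate a 'dead' indicator through the Cauchy CDF, mimicking the model form
eta = -1.0 - 2.0 * df["mcap_m_lag"] + 1.5 * df["sigma_m_lag"]
p = 0.5 + np.arctan(eta) / np.pi
df["dead"] = rng.binomial(1, p)

X = sm.add_constant(df[["mcap_m_lag", "sigma_m_lag"]])
cauchit = sm.GLM(df["dead"], X,
                 family=sm.families.Binomial(link=sm.families.links.Cauchy()))
print(cauchit.fit().summary())
```

Replacing the Cauchy link with sm.families.links.Logit() recovers the pooled logit benchmark, which is one way to reproduce the sign patterns contrasted in the two tables above.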
Table A13. Random Forest variable importance measures for stablecoins with <365 observations. The variables are sorted in decreasing order according to the Out-Of-Bag accuracy for each forecasting horizon and split by $0.80 price threshold and volume-based Feder et al. (2018) methods. Regressors lagged according to forecast horizon.
$0.80 price threshold
1-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
σ_{M,t−1} | 51.218 | 197.714
IV^BTC_{M,t−1} | 41.455 | 110.314
ΔMCap_{M,t−1} | 41.033 | 75.735
EPU-I_{M,t−1} | 39.297 | 42.345
IV^BTC_{W,t−1} | 32.014 | 71.497
ΔMCap_{W,t−1} | 30.594 | 54.046
σ_{W,t−1} | 29.079 | 124.679
IV^BTC_{D,t−1} | 28.454 | 52.358
ΔMCap_{D,t−1} | 26.437 | 43.999
EPU-I_{W,t−1} | 24.257 | 21.524
σ_{D,t−1} | 23.529 | 65.892
EPU-I_{D,t−1} | 10.105 | 12.925

30-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
σ_{M,t−30} | 57.873 | 202.703
σ_{W,t−30} | 43.801 | 121.357
ΔMCap_{M,t−30} | 43.641 | 72.645
IV^BTC_{M,t−30} | 40.291 | 113.190
EPU-I_{M,t−30} | 38.882 | 51.418
ΔMCap_{W,t−30} | 35.491 | 51.005
σ_{D,t−30} | 32.211 | 51.980
EPU-I_{W,t−30} | 29.856 | 29.149
IV^BTC_{W,t−30} | 29.434 | 75.038
IV^BTC_{D,t−30} | 26.973 | 59.338
ΔMCap_{D,t−30} | 20.008 | 27.836
EPU-I_{D,t−30} | 12.979 | 13.295
Volume-based Feder et al. (2018)
1-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
σ_{M,t−1} | 55.495 | 100.240
ΔMCap_{M,t−1} | 45.163 | 64.628
IV^BTC_{M,t−1} | 38.097 | 77.437
IV^BTC_{W,t−1} | 29.599 | 54.089
ΔMCap_{W,t−1} | 28.676 | 33.247
σ_{W,t−1} | 27.651 | 52.822
EPU-I_{M,t−1} | 27.135 | 34.766
σ_{D,t−1} | 26.549 | 40.108
IV^BTC_{D,t−1} | 25.579 | 40.400
ΔMCap_{D,t−1} | 20.964 | 19.562
EPU-I_{W,t−1} | 19.287 | 21.575
EPU-I_{D,t−1} | 8.295 | 12.344

30-DAY AHEAD FORECAST
Variable | M.D. Accuracy | M.D. Gini
σ_{M,t−30} | 64.181 | 112.743
IV^BTC_{M,t−30} | 44.259 | 80.842
ΔMCap_{M,t−30} | 36.548 | 44.714
σ_{D,t−30} | 33.699 | 44.866
ΔMCap_{W,t−30} | 31.564 | 28.382
σ_{W,t−30} | 30.259 | 59.981
EPU-I_{M,t−30} | 29.776 | 30.428
IV^BTC_{W,t−30} | 28.285 | 51.223
IV^BTC_{D,t−30} | 27.272 | 45.260
EPU-I_{W,t−30} | 22.678 | 23.881
ΔMCap_{D,t−30} | 20.761 | 18.704
EPU-I_{D,t−30} | 7.323 | 11.236
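The two columns in Table A13 correspond to the standard importance measures of the classical randomForest package. A scikit-learn-based sketch of how analogous numbers can be produced is shown below (feature names are illustrative, not the paper's code): permutation importance plays the role of the accuracy-based measure, while the impurity-based attribute corresponds to the Gini-based one.

```python
# Sketch of the two importance measures: impurity (Gini) importance comes
# directly from the fitted forest, while permutation importance approximates
# the Mean Decrease in Accuracy used by the classical randomForest package.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(1500, 3)),
                 columns=["sigma_m_lag", "mcap_m_lag", "epu_d_lag"])
y = (X["sigma_m_lag"] - X["mcap_m_lag"]
     + 0.5 * rng.normal(size=1500) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB accuracy:", rf.oob_score_)
print("Impurity (Gini) importance:", dict(zip(X.columns, rf.feature_importances_)))

perm = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
print("Permutation (accuracy-based):", dict(zip(X.columns, perm.importances_mean)))
```

On data of this kind, both measures rank the volatility and market-capitalization proxies far above the EPU stand-in, mirroring the ordering reported in Table A13.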

Notation Guide

Table A14. Summary of key variable notations used in the models and regressions.
Notation | Description
y_{i,t} | Binary dependent variable (1 if stablecoin i is dead at time t, 0 otherwise)
ΔMCap_{D,t} | Daily change in market capitalization (today’s value minus yesterday’s)
ΔMCap_{W,t} | Weekly change in market capitalization (today’s value minus 7 days ago)
ΔMCap_{M,t} | Monthly change in market capitalization (today’s value minus 30 days ago)
σ_{D,t} | Daily stablecoin volatility (Garman–Klass estimator)
σ_{W,t} | Weekly average of daily volatilities
σ_{M,t} | Monthly average of daily volatilities
IV^BTC_{D,t} | Daily Bitcoin implied volatility index (T3 Bit-Vol)
IV^BTC_{W,t} | Weekly average of Bitcoin implied volatility
IV^BTC_{M,t} | Monthly average of Bitcoin implied volatility
EPU-I_{D,t} | Daily Economic Policy Uncertainty Index
EPU-I_{W,t} | Weekly average of the EPU Index
EPU-I_{M,t} | Monthly average of the EPU Index
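To fix ideas, the volatility regressors above can be computed along the following lines. This is a sketch under stated assumptions (a daily OHLC DataFrame with lowercase column names; toy data), not the paper's code, but the Garman–Klass formula itself is the standard one.

```python
# Daily Garman-Klass range-based volatility and its 7-day / 30-day averages,
# corresponding to sigma_{D,t}, sigma_{W,t}, and sigma_{M,t} in Table A14.
import numpy as np
import pandas as pd

def garman_klass(ohlc: pd.DataFrame) -> pd.Series:
    """sqrt( 0.5*ln(H/L)^2 - (2*ln2 - 1)*ln(C/O)^2 ), computed day by day."""
    hl = np.log(ohlc["high"] / ohlc["low"]) ** 2
    co = np.log(ohlc["close"] / ohlc["open"]) ** 2
    return np.sqrt(0.5 * hl - (2 * np.log(2) - 1) * co)

# toy OHLC series for a coin hovering around its $1 peg
rng = np.random.default_rng(0)
open_ = 1 + 0.005 * rng.standard_normal(100)
close = open_ + 0.005 * rng.standard_normal(100)
high = np.maximum(open_, close) + 0.003 * rng.random(100)
low = np.minimum(open_, close) - 0.003 * rng.random(100)
px = pd.DataFrame({"open": open_, "high": high, "low": low, "close": close})

sigma_d = garman_klass(px)            # sigma_{D,t}
sigma_w = sigma_d.rolling(7).mean()   # sigma_{W,t}
sigma_m = sigma_d.rolling(30).mean()  # sigma_{M,t}
print(sigma_m.dropna().head())
```

Because high ≥ max(open, close) and low ≤ min(open, close) by construction, the quantity under the square root is non-negative, so the estimator is well defined on valid OHLC data.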

References

1. Financial Stability Oversight Council. 2024 Annual Report; Technical Report; US Department of the Treasury: Washington, DC, USA, 2024.
2. Gorton, G.B.; Zhang, J.Y. Taming Wildcat Stablecoins. Univ. Chic. Law Rev. 2023, 90, 909–971.
3. Lyons, R.K.; Viswanath-Natraj, G. What keeps stablecoins stable? J. Int. Money Financ. 2023, 131, 102777.
4. Adachi, M.; Cominetta, M.; Kaufmann, C.; van der Kraaij, A. A regulatory and financial stability perspective on global stablecoins. In Macroprudential Bulletin; European Central Bank: Frankfurt am Main, Germany, 2020; Volume 10.
5. Feder, A.; Gandal, N.; Hamrick, J.T.; Moore, T.; Vasek, M. The rise and fall of cryptocurrencies. In Proceedings of the 17th Workshop on the Economics of Information Security (WEIS), Innsbruck, Austria, 18–19 June 2018.
6. Fantazzini, D.; Korobova, E. Stablecoins and credit risk: When do they stop being stable? Appl. Econom. 2025, 77, 46–73.
7. Nystrup, P.; Madsen, H.; Lindström, E. Long memory of financial time series and hidden Markov models with time-varying parameters. J. Forecast. 2017, 36, 989–1002.
8. Pomorski, P.; Gorse, D. Improving on the Markov-switching regression model by the use of an adaptive moving average. In Proceedings of the New Perspectives and Paradigms in Applied Economics and Business: Select Proceedings of the 2022 6th International Conference on Applied Economics and Business, Stockholm, Sweden, 25–27 August 2022; Springer: Berlin/Heidelberg, Germany, 2023; pp. 17–30.
9. Oelschläger, L.; Adam, T.; Michels, R. fHMM: Hidden Markov Models for Financial Time Series in R. J. Stat. Softw. 2024, 109, 1–25.
10. Kurbucz, M.T.; Pósfay, P.; Jakovác, A. Linear laws of Markov chains with an application for anomaly detection in Bitcoin prices. arXiv 2022, arXiv:2201.09790.
11. An, S.; Gao, X.; An, F.; Wu, T. Early warning of regime switching in a complex financial system from a spillover network dynamic perspective. iScience 2025, 28, 111924.
12. S&P Global Ratings. Stablecoin Stability Assessment; Technical Report; S&P Global: New York, NY, USA, 2025.
13. Cox, D.R. Regression models and life-tables. J. R. Stat. Soc. Ser. B (Methodol.) 1972, 34, 187–202.
14. Fantazzini, D. Crypto-Coins and Credit Risk: Modelling and Forecasting Their Probability of Death. J. Risk Financ. Manag. 2022, 15, 304.
15. Fantazzini, D.; Zimin, S. A multivariate approach for the simultaneous modelling of market risk and credit risk for cryptocurrencies. J. Ind. Bus. Econ. 2020, 47, 19–69.
16. Le Pennec, G.; Fiedler, I.; Ante, L. Wash trading at cryptocurrency exchanges. Financ. Res. Lett. 2021, 43, 101982.
17. Cong, L.W.; Li, X.; Tang, K.; Yang, Y. Crypto wash trading. Manag. Sci. 2023, 69, 6427–6454.
18. Briola, A.; Vidal-Tomás, D.; Wang, Y.; Aste, T. Anatomy of a Stablecoin’s failure: The Terra-Luna case. Financ. Res. Lett. 2023, 51, 103358.
19. Lahiri, K.; Yang, L. Forecasting binary outcomes. In Handbook of Economic Forecasting; Elsevier: Amsterdam, The Netherlands, 2013; Volume 2, pp. 1025–1106.
20. Fernández-Val, I. Fixed effects estimation of structural parameters and marginal effects in panel probit models. J. Econom. 2009, 150, 71–85.
21. Fernández-Val, I.; Weidner, M. Individual and time effects in nonlinear panel models with large N, T. J. Econom. 2016, 192, 291–312.
22. Fernández-Val, I.; Weidner, M. Fixed effects estimation of large-T panel data models. Annu. Rev. Econ. 2018, 10, 109–138.
23. Greene, W. The behaviour of the maximum likelihood estimator of limited dependent variable models in the presence of fixed effects. Econom. J. 2004, 7, 98–119.
24. Fantazzini, D. Assessing the Credit Risk of Crypto-Assets Using Daily Range Volatility Models. Information 2023, 14, 254.
25. Magomedov, S.; Fantazzini, D. Modeling and Forecasting the Probability of Crypto-Exchange Closures: A Forecast Combination Approach. J. Risk Financ. Manag. 2025, 18, 48.
26. Koenker, R.; Yoon, J. Parametric links for binary choice models: A Fisherian–Bayesian colloquy. J. Econom. 2009, 152, 120–130.
27. Nolan, J.P. Financial modeling with heavy-tailed stable distributions. Comput. Stat. 2014, 6, 45–55.
28. Embrechts, P.; Klüppelberg, C.; Mikosch, T. Modelling Extremal Events: For Insurance and Finance; Springer Science & Business Media: New York, NY, USA, 2013; Volume 33.
29. Hand, D.J. Measuring classifier performance: A coherent alternative to the area under the ROC curve. Mach. Learn. 2009, 77, 103–123.
30. Brier, G. Verification of forecasts expressed in terms of probability. Mon. Weather Rev. 1950, 78, 1–3.
31. Sammut, C.; Webb, G. Encyclopedia of Machine Learning; Springer: Berlin/Heidelberg, Germany, 2011.
32. Hand, D.J.; Anagnostopoulos, C. A better Beta for the H measure of classification performance. Pattern Recognit. Lett. 2014, 40, 41–46.
33. Hansen, P.; Lunde, A.; Nason, J. The model confidence set. Econometrica 2011, 79, 453–497.
34. Internet Archive. Wayback Machine. 1996. Available online: https://web.archive.org/ (accessed on 13 October 2025).
35. Garman, M.B.; Klass, M.J. On the estimation of security price volatilities from historical data. J. Bus. 1980, 53, 67–78.
36. Molnár, P. Properties of range-based volatility estimators. Int. Rev. Financ. Anal. 2012, 23, 20–29.
37. Corsi, F. A simple approximate long-memory model of realized volatility. J. Financ. Econom. 2009, 7, 174–196.
38. T3 Bit-Vol Methodology Guide. Archived via the Internet Archive on 14 November 2022. 2022. Available online: https://web.archive.org/web/20221114185507/https://t3index.com/wp-content/uploads/2022/06/Bit-Vol-process_guide-Jan-2019-2022_03_22-06_02_32-UTC.pdf (accessed on 15 November 2025).
39. Basistha, A.; Kurov, A.; Wolfe, M. Volatility forecasting: The role of internet search activity and implied volatility. J. Risk Model Valid. 2020, 14, 35–63.
40. Lycheva, M.; Mironenkov, A.; Kurbatskii, A.; Fantazzini, D. Forecasting oil prices with penalized regressions, variance risk premia and Google data. Appl. Econom. 2022, 68, 28–49.
41. Brunnermeier, M.K.; James, H. The Digitalization of Money. NBER Work. Pap. 2019.
42. Gandal, N.; Hamrick, J.; Moore, T.; Vasek, M. The rise and fall of cryptocurrency coins and tokens. Decis. Econ. Financ. 2021, 44, 981–1014.
43. Arner, D.; Auer, R.; Frost, J. Stablecoins: Risks, Potential and Regulation. Univ. Hong Kong Fac. Law Res. Pap. 2020, 2021/57.
44. Baur, D.G.; Hoang, L.T. A crypto safe haven against Bitcoin. Financ. Res. Lett. 2021, 38, 101431.
45. Grobys, K.; Junttila, J.; Kolari, J.W.; Sapkota, N. On the stability of stablecoins. J. Empir. Financ. 2021, 64, 207–223.
46. Ti, A.; Husodo, Z.A. Navigating volatility spillover amidst investor extreme fear in stablecoin and financial markets. Cogent Econ. Financ. 2024, 12, 2408276.
47. Kristoufek, L. On the role of stablecoins in cryptoasset pricing dynamics. Financ. Innov. 2022, 8, 37.
48. Naifar, N. Mapping Systemic Tail Risk in Crypto Markets: DeFi, Stablecoins, and Infrastructure Tokens. J. Risk Financ. Manag. 2025, 18, 329.
49. Baker, S.R.; Bloom, N.; Davis, S.J. Measuring economic policy uncertainty. Q. J. Econ. 2016, 131, 1593–1636.
50. Zhang, P.; Kong, D.; Xu, K.; Qi, J. Global economic policy uncertainty and the stability of cryptocurrency returns: The role of liquidity volatility. Res. Int. Bus. Financ. 2024, 67, 102165.
51. He, C.; Li, Y.; Wang, T.; Shah, S.A. Is cryptocurrency a hedging tool during economic policy uncertainty? An empirical investigation. Humanit. Soc. Sci. Commun. 2024, 11, 1–10.
52. Feng, J.; Yuan, Y.; Jiang, M. Are stablecoins better safe havens or hedges against global stock markets than other assets? Comparative analysis during the COVID-19 pandemic. Int. Rev. Econ. Financ. 2024, 92, 275–301.
53. Fantazzini, D.; De Giuli, M.E.; Maggi, M.A. A new approach for firm value and default probability estimation beyond Merton models. Comput. Econ. 2008, 31, 161–180.
54. Hwang, S.; Valls Pereira, P. Small sample properties of GARCH estimates and persistence. Eur. J. Financ. 2006, 12, 473–494.
55. Fantazzini, D. The effects of misspecified marginals and copulas on computing the Value-at-Risk: A Monte Carlo study. Comput. Stat. Data Anal. 2009, 53, 2168–2188.
56. Shumway, T. Forecasting bankruptcy more accurately: A simple hazard model. J. Bus. 2001, 74, 101–124.
Figure 1. Visual justification for the $0.80 failure threshold. The figure displays histograms for two key price points across the 98 stablecoins in our sample: their final observed price (left panel) and their all-time minimum price (right panel). The red vertical line at $0.80 illustrates how this threshold effectively separates the distribution of failed or distressed stablecoins (concentrated below the line) from those that maintained their peg (concentrated near $1.00).
Figure 2. Evolution of stablecoin failures over time according to two classification methods. The panels show results for stablecoins with fewer than 730 observations (top) and 730 or more (bottom). The blue line tracks the total number of active stablecoins each day. The green line represents the count of ’dead’ stablecoins using the volume-based Feder et al. (2018) method, while the red line shows the count using our $0.80 price threshold.
Figure 3. In-sample coefficient estimates from various panel binary models for stablecoins with ≥730 observations, using the $0.80 price threshold for failure classification. Each panel corresponds to a different forecast horizon (1-day, 30-day, 365-day). The x-axis lists the predictor variables, and the y-axis shows the coefficients’ magnitudes and their statistical significance.
Figure 4. In-sample coefficient estimates from various panel binary models for stablecoins with <730 observations, using the $0.80 price threshold for failure classification. Each panel corresponds to a different forecast horizon (1-day, 30-day, 365-day). The x-axis lists the predictor variables, and the y-axis shows the coefficients’ magnitudes and their statistical significance.
Figure 5. In-sample variable importance from the Random Forest model, using the $0.80 price threshold definition of failure. The plots show results for stablecoins with fewer (<730) and more (≥730) observations across different forecast horizons. Importance is measured by Mean Decrease in Accuracy (MDA, higher is better) and Mean Decrease in Gini (MDG, higher is better). Variables are ranked by MDA, highlighting the most predictive features for each group and horizon.
Figure 6. In-sample coefficient estimates from various panel binary models for stablecoins with ≥730 observations, using the volume-based Feder et al. (2018) method for failure classification. Each panel corresponds to a different forecast horizon (1-day, 30-day, 365-day). The x-axis lists the predictor variables, and the y-axis shows the coefficients’ magnitudes and their statistical significance.
Figure 7. In-sample coefficient estimates from various panel binary models for stablecoins with <730 observations, using the volume-based Feder et al. (2018) method for failure classification. Each panel corresponds to a different forecast horizon (1-day, 30-day, 365-day). The x-axis lists the predictor variables, and the y-axis shows the coefficients’ magnitudes and their statistical significance.
Figure 8. In-sample variable importance from the Random Forest model, using the volume-based Feder et al. (2018) definition of failure. The plots show results for stablecoins with fewer (<730) and more (≥730) observations across different forecast horizons. Importance is measured by Mean Decrease in Accuracy (MDA, higher is better) and Mean Decrease in Gini (MDG, higher is better). Variables are ranked by MDA, highlighting the most predictive features for each group and horizon under this failure definition.
Figure 9. Out-of-sample forecasting performance of nine models using the $0.80 price threshold definition of failure. Results are grouped by stablecoin lifespan (<730 and ≥730 observations) and forecast horizon (1-day, 30-day, 365-day). Performance is measured by the Area Under the Curve (AUC) and H-measure (where higher is better) and the Brier Score (where lower is better). Models marked with a solid point are included in the Model Confidence Set (MCS), indicating they are statistically among the best-performing models based on the Brier Score.
Figure 10. Out-of-sample forecasting performance of nine models using the volume-based Feder et al. (2018) definition of failure. Results are grouped by stablecoin lifespan (<730 and ≥730 observations) and forecast horizon (1-day, 30-day, 365-day). Performance is measured by the Area Under the Curve (AUC) and H-measure (where higher is better) and the Brier Score (where lower is better). Models marked with a solid point are included in the Model Confidence Set (MCS), indicating they are statistically among the best-performing models based on the Brier Score.
Table 1. A stablecoin-by-stablecoin comparison of failure classification using three distinct methods. The table reports the final listing status from CoinMarketCap, alongside the number of failure (’death’) and recovery (’resurrection’) events detected by the volume-based Feder et al. (2018) method and our proposed $0.80 price threshold method. ’Final Status’ indicates whether the stablecoin was classified as ’Dead’ or ’Alive’ at the end of the sample period according to each dynamic method.
Name | Symbol | Listed on CoinMarketCap | Feder et al. (2018) method: N. of Deaths | N. of Resurrections | Final Status | Price threshold method at $0.80: N. of Deaths | N. of Resurrections | Final Status
Alchemix USD | ALUSD | No | 1 | 0 | Dead | 0 | 0 | Alive
Anchored Coins | AEUR | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Angle Protocol | EURA | Yes | 0 | 0 | Alive | 1 | 1 | Resurrected
Agora-finance | AUSD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Basis Cash | BAC | Yes | 0 | 0 | Alive | 1 | 0 | Dead
Bean | BEAN | No | 0 | 0 | Alive | 1 | 0 | Dead
BounceBit | BB | Yes | 0 | 0 | Alive | 0 | 0 | Alive
BUSD | BUSD | Yes | 1 | 0 | Dead | 0 | 0 | Alive
Celo Dollar | CUSD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Celo Euro | CEUR | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Coffin Dollar | COUSD | No | 0 | 0 | Alive | 3 | 2 | Dead
coin98 | CUSD | Yes | 1 | 0 | Dead | 1 | 0 | Dead
Criptodolar | UXD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
crvUSD | crvUSD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
CryptoFranc | XCHF | Yes | 1 | 0 | Dead | 0 | 0 | Alive
Dai | DAI | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Decentralized_USD | DUSD | Yes | 1 | 0 | Dead | 1 | 0 | Dead
DefiDollar | DUSD | No | 0 | 0 | Alive | 1 | 0 | Dead
Djed | DJED | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Dola-usd | DOLA | Yes | 0 | 0 | Alive | 0 | 0 | Alive
EOSDT | EOSDT | No | 1 | 0 | Dead | 2 | 1 | Dead
ESD Empty Set | ESD | Yes | 1 | 0 | Dead | 1 | 0 | Dead
EURC | EURC | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Eurite | EURI | Yes | 0 | 0 | Alive | 0 | 0 | Alive
EURO STASIS | EURS | Yes | 0 | 0 | Alive | 0 | 0 | Alive
SpiceEURO USD | EUROS-USD | No | 0 | 0 | Alive | 1 | 0 | Dead
EURt_Tether | EURT | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Fei USD | FEI | Yes | 1 | 0 | Dead | 1 | 1 | Resurrected
First Digital USD | FDUSD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Frax | FRAX | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Fuse dollar | FUSD | No | 0 | 0 | Alive | 0 | 0 | Alive
Frapped USDT | fUSDT | No | 0 | 0 | Alive | 1 | 0 | Dead
FEG wrapped USDT | fUSDT | No | 0 | 0 | Alive | 0 | 0 | Alive
Gemini Dollar | GUSD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
HUSD | HUSD | Yes | 1 | 0 | Dead | 1 | 0 | Dead
Inflation Adjusted USDS | IUSDS | No | 0 | 0 | Alive | 1 | 0 | Dead
Inter Stable Token | IST | Yes | 0 | 0 | Alive | 0 | 0 | Alive
IRON | IRON | No | 3 | 2 | Dead | 1 | 0 | Dead
KRT-terra | KRT | No | 2 | 1 | Dead | 1 | 0 | Dead
Lift Dollar | USDL | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Liquity USD | LUSD | Yes | 1 | 0 | Dead | 0 | 0 | Alive
Mad USD | MUSD | Yes | 1 | 0 | Dead | 2 | 1 | Dead
Magic Internet Money | MIM | Yes | 1 | 0 | Dead | 0 | 0 | Alive
MAI-MIMATIC | MAI | Yes | 1 | 0 | Dead | 1 | 0 | Dead
MNEE | MNEE | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Moremoney | MONEY | Yes | 3 | 2 | Dead | 0 | 0 | Alive
Moola Celo EUR | MCEUR | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Moola Celo USD | MCUSD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
mStable USD | MUSD | No | 2 | 1 | Dead | 0 | 0 | Alive
ONEICHI | ONEICHI | No | 2 | 1 | Dead | 0 | 0 | Alive
Origin Dollar | OUSD | Yes | 1 | 0 | Dead | 1 | 1 | Resurrected
Parallel | PAR | Yes | 5 | 4 | Dead | 1 | 1 | Resurrected
Pax Dollar | USDP | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Paypal-usd | PYUSD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Ratio stablecoin | USDR | No | 0 | 0 | Alive | 0 | 0 | Alive
Reserve Dollar | RSV | Yes | 1 | 0 | Dead | 0 | 0 | Alive
sEUR | SEUR | No | 0 | 0 | Alive | 0 | 0 | Alive
SORA Synthetic USD | XSTUSD | Yes | 2 | 1 | Dead | 2 | 1 | Dead
Sperax USD | USDS | Yes | 0 | 0 | Alive | 0 | 0 | Alive
SpiceUSD | SPICE | No | 0 | 0 | Alive | 1 | 0 | Dead
StablR Euro | EURR | Yes | 0 | 0 | Alive | 0 | 0 | Alive
STATIK | STATIK | No | 1 | 0 | Dead | 1 | 0 | Dead
Steem dollar (before fork) | SBD | No | 1 | 0 | Dead | 2 | 2 | Resurrected
sUSD | SUSD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
TerraClassicUSD | USTC | Yes | 1 | 0 | Dead | 1 | 0 | Dead
TOR | TOR | Yes | 1 | 0 | Dead | 1 | 0 | Dead
Tribe | TRIBE | Yes | 0 | 0 | Alive | 2 | 1 | Dead
TrueUSD | TUSD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
USD Tether | USDT | Yes | 0 | 0 | Alive | 0 | 0 | Alive
USDC | USDC | Yes | 0 | 0 | Alive | 0 | 0 | Alive
USDD | USDD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
USDe Ethena | USDE | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Usdex | USDEX | Yes | 1 | 0 | Dead | 1 | 0 | Dead
USDFL-USD | USDFL | No | 1 | 0 | Dead | 2 | 1 | Dead
USDH | USDH | Yes | 0 | 0 | Alive | 0 | 0 | Alive
USDI | USDI | No | 1 | 0 | Dead | 0 | 0 | Alive
USDJ | USDJ | Yes | 0 | 0 | Alive | 0 | 0 | Alive
USDK | USDK | No | 1 | 0 | Dead | 0 | 0 | Alive
USDP Stablecoin | USDP | Yes | 2 | 2 | Resurrected | 1 | 1 | Resurrected
USDQ USD | USDQ | No | 2 | 1 | Dead | 0 | 0 | Alive
Stably USD | USDSC | No | 1 | 0 | Dead | 0 | 0 | Alive
USDX Kava | USDX | Yes | 1 | 0 | Dead | 0 | 0 | Alive
USN | USN | Yes | 0 | 0 | Alive | 0 | 0 | Alive
NuBits | USNBT | No | 1 | 0 | Dead | 1 | 0 | Dead
sForce USD | USX | Yes | 2 | 1 | Dead | 0 | 0 | Alive
Vai | VAI | Yes | 1 | 0 | Dead | 1 | 1 | Resurrected
Verified USD | USDV | Yes | 0 | 0 | Alive | 0 | 0 | Alive
VNX Euro | VEUR | Yes | 0 | 0 | Alive | 0 | 0 | Alive
VNX Swiss Franc | VCHF | Yes | 0 | 0 | Alive | 0 | 0 | Alive
wanUSDT | WANUSDT | Yes | 1 | 0 | Dead | 0 | 0 | Alive
Worldwide USD | WUSD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
xDAI | xDAI | Yes | 0 | 0 | Alive | 0 | 0 | Alive
XUSD | XUSD | Yes | 0 | 0 | Alive | 0 | 0 | Alive
xXUSD | xXUSD | Yes | 1 | 0 | Dead | 0 | 0 | Alive
YUSD Stablecoin | YUSD | Yes | 1 | 0 | Dead | 0 | 0 | Alive
ZAI Stablecoin | ZAI | Yes | 0 | 0 | Alive | 0 | 0 | Alive
Zedxion | USDZ | Yes | 0 | 0 | Alive | 0 | 0 | Alive
ZUSD | ZUSD | Yes | 1 | 1 | Resurrected | 0 | 0 | Alive
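The event counts in Table 1 follow mechanically from each coin's daily price history. The small helper below is hypothetical (not the paper's code) but reproduces the counting logic under the $0.80 rule: a coin "dies" when its price first closes below the threshold and is "resurrected" if it later closes at or above it again.

```python
# Hypothetical helper: count death/resurrection events and the final status
# of a stablecoin from a daily price series, using the $0.80 price rule.
import pandas as pd

def death_resurrection_events(prices: pd.Series, threshold: float = 0.80):
    dead = (prices < threshold).astype(int)      # 1 on 'dead' days
    change = dead.diff().fillna(dead.iloc[0])    # first day counts if already dead
    deaths = int((change == 1).sum())            # alive -> dead transitions
    resurrections = int((change == -1).sum())    # dead -> alive transitions
    if dead.iloc[-1] == 1:
        final = "Dead"
    else:
        final = "Resurrected" if resurrections > 0 else "Alive"
    return deaths, resurrections, final

# example: a peg that breaks once and recovers
px = pd.Series([1.00, 0.99, 0.70, 0.75, 0.95, 1.00])
print(death_resurrection_events(px))             # -> (1, 1, 'Resurrected')
```

The same loop applied with the volume-based rule of Feder et al. (2018) in place of the price condition yields the left-hand block of Table 1.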
Table 2. Sensitivity analysis of our price-based failure classification method. This table evaluates the classification overlap (in percentage) between various price thresholds (from $0.05 to $0.95) and two benchmarks: CoinMarketCap delistings and the volume-based Feder et al. (2018) method. The analysis is presented for two scenarios: ’Final Status’ assesses classification at the end of the sample, while ’Dynamic Threshold’ classifies a coin as dead if it ever breached the threshold. The $0.80 threshold, used in our main analysis, is highlighted in bold.
FINAL STATUS AT THE END OF THE SAMPLE
Threshold | vs. CoinMarketCap: Only DEAD coins | Only ALIVE coins | DEAD and ALIVE | vs. volume-based FEDER: Only DEAD coins | Only ALIVE coins | DEAD and ALIVE
0.05 | 12.00 | 90.41 | 70.41 | 23.08 | 98.31 | 68.37
0.10 | 16.00 | 89.04 | 70.41 | 25.64 | 96.61 | 68.37
0.15 | 20.00 | 89.04 | 71.43 | 25.64 | 94.92 | 67.35
0.20 | 20.00 | 89.04 | 71.43 | 25.64 | 94.92 | 67.35
0.25 | 20.00 | 87.67 | 70.41 | 28.21 | 94.92 | 68.37
0.30 | 20.00 | 87.67 | 70.41 | 28.21 | 94.92 | 68.37
0.35 | 24.00 | 87.67 | 71.43 | 30.77 | 94.92 | 69.39
0.40 | 28.00 | 87.67 | 72.45 | 30.77 | 93.22 | 68.37
0.45 | 36.00 | 86.30 | 73.47 | 30.77 | 88.14 | 65.31
0.50 | 36.00 | 84.93 | 72.45 | 33.33 | 88.14 | 66.33
0.55 | 40.00 | 84.93 | 73.47 | 33.33 | 86.44 | 65.31
0.60 | 44.00 | 84.93 | 74.49 | 35.90 | 86.44 | 66.33
0.65 | 48.00 | 84.93 | 75.51 | 35.90 | 84.75 | 65.31
0.70 | 48.00 | 83.56 | 74.49 | 38.46 | 84.75 | 66.33
0.75 | 52.00 | 83.56 | 75.51 | 41.03 | 84.75 | 67.35
0.80 | 52.00 | 83.56 | 75.51 | 41.03 | 84.75 | 67.35
0.85 | 60.00 | 83.56 | 77.55 | 43.59 | 83.05 | 67.35
0.90 | 60.00 | 82.19 | 76.53 | 46.15 | 83.05 | 68.37
0.95 | 60.00 | 80.82 | 75.51 | 46.15 | 81.36 | 67.35
DYNAMIC THRESHOLD FOR THE FULL SAMPLE
Threshold | vs. CoinMarketCap: Only DEAD coins | Only ALIVE coins | DEAD and ALIVE | vs. volume-based FEDER: Only DEAD coins | Only ALIVE coins | DEAD and ALIVE
0.05 | 12.00 | 87.67 | 68.37 | 26.83 | 98.25 | 68.37
0.10 | 24.00 | 87.67 | 71.43 | 26.83 | 92.98 | 65.31
0.15 | 24.00 | 84.93 | 69.39 | 29.27 | 91.23 | 65.31
0.20 | 24.00 | 83.56 | 68.37 | 31.71 | 91.23 | 66.33
0.25 | 28.00 | 83.56 | 69.39 | 34.15 | 91.23 | 67.35
0.30 | 32.00 | 83.56 | 70.41 | 34.15 | 89.47 | 66.33
0.35 | 36.00 | 80.82 | 69.39 | 41.46 | 89.47 | 69.39
0.40 | 36.00 | 79.45 | 68.37 | 43.90 | 89.47 | 70.41
0.45 | 44.00 | 79.45 | 70.41 | 43.90 | 85.96 | 68.37
0.50 | 44.00 | 79.45 | 70.41 | 43.90 | 85.96 | 68.37
0.55 | 52.00 | 79.45 | 72.45 | 48.78 | 85.96 | 70.41
0.60 | 56.00 | 79.45 | 73.47 | 48.78 | 84.21 | 69.39
0.65 | 56.00 | 79.45 | 73.47 | 48.78 | 84.21 | 69.39
0.70 | 56.00 | 78.08 | 72.45 | 48.78 | 82.46 | 68.37
0.75 | 56.00 | 75.34 | 70.41 | 53.66 | 82.46 | 70.41
0.80 | 56.00 | 75.34 | 70.41 | 53.66 | 82.46 | 70.41
0.85 | 72.00 | 67.12 | 68.37 | 65.85 | 73.68 | 70.41
0.90 | 84.00 | 58.90 | 65.31 | 73.17 | 63.16 | 67.35
0.95 | 84.00 | 45.21 | 55.10 | 85.37 | 54.39 | 67.35
Table 3. Total count and percentage of “dead days” under two different failure classification criteria. The table shows the total number of days stablecoins were classified as failed, both for the volume-based Feder et al. (2018) method and our $0.80 price threshold method. Results are shown separately for stablecoins with shorter (<730 days) and longer (≥730 days) time series.
STABLECOINS WITH <730 OBSERVATIONS
Volume-based Feder et al. (2018): N. of dead days | % | Price threshold method at $0.80: N. of dead days | %
1526 | 13.20 | 1370 | 11.85
STABLECOINS WITH ≥730 OBSERVATIONS
Volume-based Feder et al. (2018): N. of dead days | % | Price threshold method at $0.80: N. of dead days | %
17112 | 22.46 | 10858 | 14.25
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.