A Liquidity Shortfall Analysis Framework for the European Banking Sector †

The analysis and the results presented in this paper are based on a dataset covering a specific historical period (up to Q4 2016). They should not be used as a reliable indicator of, or to draw any conclusions on, systemic resilience during any other present or future period, especially during periods of significant financial stress or market turmoil (for example, the recent market turbulence associated with the COVID-19 outbreak).

Abstract: This paper presents an analytical framework for the identification of vulnerabilities arising from the liquidity and funding profile of banks. It is composed of two pillars: the estimation of liquidity needs, and the counterbalancing capacity of total liquid assets. Together, these determine a liquidity surplus or shortfall and its drivers for a range of plausible scenarios. Granular bank-level data on the structure of liabilities, the maturity profile, the credit quality composition of liquid assets, and asset encumbrance are used for that purpose, also taking into account associated commonality effects. A new liquidity metric is introduced, the distance to liquidity stress indicator (DLSI), which measures the stress factor required for banks to become illiquid. The novelty of the approach (i.e., taking asset encumbrance into account to determine counterbalancing capacity) provides empirical evidence that asset encumbrance has a significant impact on a bank's liquidity position, leading to non-linear behavior of liquidity shortfalls even in the case of linear stress factors.


Introduction
The experiences from previous financial crises have shown that it is equally important to assess a bank's resilience from a solvency and a liquidity perspective. In most cases, liquidity distress is a symptom rather than the real cause of a bank's failure during a crisis (see Georgescu and Laux [1]).
Nevertheless, there have also been a few past episodes where liquidity deterioration has resulted in a crisis situation (see Shin [2] or Brunnermeier and Pedersen [3]). These imbalances in the liquidity and funding profile may have an impact on the propagation of asset quality and solvency shocks throughout the financial system. It is, therefore, essential from a financial stability perspective to monitor the liquidity conditions of banks and banking systems, as well as to test their resilience under stressed market conditions, allowing a better understanding of potential risks and vulnerabilities of the banking sector. This paper provides an analytical framework for the ex-ante assessment of the resilience of banks to liquidity shocks.
Stress tests have become very valuable tools to assess the resilience of the banking sector (see Baudino et al. [4]). Several central banks have incorporated liquidity stress test modules in their macroprudential stress testing framework. In some cases, the interaction between solvency risks and liquidity risks and their endogenous amplification is explicitly modeled (see the approach of the Bank of Canada based on Gauthier et al. [5] and Oesterreichische Nationalbank (OeNB) based on Puhr and Schmitz [6]). In Gauthier et al. [5], a two-period network setting is used to model rollover risk in the intermediary period conditional on solvency risk. Puhr and Schmitz [6] use supervisory data on weekly cash flows and counterbalancing capacity across six currencies and five maturity buckets to derive liquidity shortfalls. Higher solvency risk leads to lower collateral values and lower cash inflows, further amplifying the increase in funding costs and asset fire sales. In this context, an asset fire sale is the main driver of solvency losses. The European Central Bank (ECB) stress testing tool models liquidity risk through higher funding costs resulting from the macro scenario (see Henry et al. [7] or Halaj and Laliotis [8]). Second round effects arise due to liquidity spirals and counterparty defaults.
Aikman et al. [9] developed a macro stress testing framework, in which a liquidity risk arises endogenously as a function of a bank's solvency, liquidity position, similarity to other banks facing funding difficulties, and other market factors. Banks gradually lose access to unsecured funding as the above risk factors deteriorate. Asset fire sales of insolvent banks lead to mark-to-market losses for banks holding similar assets. This aspect of liquidity risk is also captured by the Bank of Korea [10]. Funding risk is integrated into their liquidity stress testing framework with mark-to-market losses and asset fire sales, amplifying solvency risk. Van den End [11] proposes a liquidity stress testing methodology based on supervisory data obtained from the De Nederlandsche Bank (DNB) monthly liquidity report. After an initial shock to haircut values and run-off rates, banks are assumed to react by using various liquidity sources (deposits with central banks, sale of assets, and interbank receivables or unsecured funding). In a second round, the resulting reputational loss leads to further haircuts and an increase in run-off rates. It is not clear what the economic rationale is for the functional form for reputational risk or why the use of central bank deposits or unsecured funding is associated with a reputational loss. Wong and Hui [12] apply a Monte Carlo simulation to generate market risk shocks for different asset classes for 12 Hong Kong banks. In a second step, these shocks are fed into equations describing market, default, and liquidity risks. In the liquidity risk module, it is assumed that deposit outflows are a function of the bank's default probability, with the threshold default probability beyond which a run is assumed calibrated based on the Bear Stearns default probability. The results suggest that the liquidity risk of banks in Hong Kong is contained.
A number of studies, such as Basel Committee on Banking Supervision (BCBS) [13], BCBS [14] and Schmieder et al. [15], have reviewed the main sources of liquidity risk and the best practices in banks' liquidity stress testing. Halaj and Henry [16] identify and specify guiding principles for designing and implementing a systemic liquidity stress test. While the conceptual framework in this paper is similar to Schmieder et al. [15], their analysis is rather theoretical, relying on a case study to illustrate the potential use of the proposed cash-flow-based tests by regulators and supervisors.
In this paper, we present a scenario-based analytical framework, where the impact on banks' respective liquidity needs and counterbalancing capacity is assessed under different liquidity stress scenarios referring to market funding availability and the level of haircuts that banks may face for secured funding. Based on the scenario-specific run-off factors and collateral haircuts, we estimate the liquidity shortfall and surplus by comparing projected liquidity needs with counterbalancing capacity for each individual bank. The main novelty of the proposed approach is that (i) asset encumbrance is explicitly considered, (ii) a very granular dataset is used by decomposing the sovereign and corporate debt holdings of the banks into different credit quality asset classes, and (iii) that the approach enables a system-wide assessment of stress scenario projections in order to provide a framework for analyzing systemic or second round effects.
Importantly, the "distance to liquidity stress indicator" (DLSI) is introduced, a metric that measures the level of resilience to liquidity stress conditions. The indicator integrates the available information on the liquidity condition of individual banks, sectors, clusters, or the market as a whole, conditional on a stress scenario. This metric allows a standardized identification of liquidity-related financial stability risks, facilitating analysis across market segments and through time. The approach can also be easily adapted to integrate fire sale impacts, contagion effects, and the impact of lender-of-last-resort (LOLR) policy decisions.
The results are based on a sample of 94 banks that were also grouped for the purposes of this analysis into clusters in line with their business model. This is done mainly for illustration purposes using a simple internally built clustering analysis scheme that focuses on separating banks on the basis of their business model characteristics.
The structure of this paper is as follows. In Section 2 we present the model approach and data sources, while the main results of the paper are presented in Section 3. Section 4 concludes the paper.

Model Approach Outline and Data Sources
The impact on the liquidity of individual banks is assessed under different scenarios in terms of the magnitude of funding freezes and the availability of liquidity buffers to counterbalance liquidity outflows. The scenarios vary by severity level and are applied both to the availability of funding and to the level of haircuts that banks may face in using their liquid assets to replenish liquidity outflows.
For that purpose, a four-step approach is applied. First, we replicate and calibrate the banks' funding mixes. Second, we apply progressive stress scenarios affecting the funding availability. In the third step, we estimate the counterbalancing capacity of banks to respond to liquidity stress by quantifying their available liquid assets in each stress scenario (appropriate haircuts are applied in each scenario to reflect discounts associated with asset sales or an increase in the collateral haircuts). Finally, in the fourth step, we estimate the liquidity shortfall and surplus by comparing projected liquidity needs with available liquidity buffers for each individual bank.
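The four-step logic above can be sketched in a few lines of Python. This is an illustrative sketch only: the function name, data structures, and all figures are ours, not the paper's implementation.

```python
def liquidity_position(funding_mix, liquid_assets, run_off, haircut):
    """Return the liquidity surplus (>0) or shortfall (<0) for one bank.

    funding_mix:   {segment: liquidity basis, i.e., amount maturing in 3 months}
    liquid_assets: {asset_class: market value}
    run_off:       {segment: scenario run-off rate in [0, 1]}
    haircut:       {asset_class: scenario haircut in [0, 1]}
    """
    # Step 2: liquidity needs = maturing funding that cannot be rolled over
    needs = sum(vol * run_off[seg] for seg, vol in funding_mix.items())
    # Step 3: counterbalancing capacity = liquid assets after scenario haircuts
    capacity = sum(val * (1 - haircut[cls]) for cls, val in liquid_assets.items())
    # Step 4: surplus (positive) or shortfall (negative)
    return capacity - needs

# Toy example with hypothetical figures (in, say, EUR bn)
bank = liquidity_position(
    funding_mix={"short_term_wholesale": 100.0, "deposits": 400.0},
    liquid_assets={"cash": 50.0, "sovereign": 120.0},
    run_off={"short_term_wholesale": 0.6, "deposits": 0.1},
    haircut={"cash": 0.0, "sovereign": 0.1},
)
# needs = 60 + 40 = 100; capacity = 50 + 108 = 158; surplus = 58
```

Step 1 (calibrating the funding mix from supervisory data) is taken as given here; the sketch covers only the mechanical offsetting of needs against post-haircut buffers.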
The main data source used in the analysis is supervisory reporting data (Financial Reporting (FINREP)/ Common Reporting (COREP)) up to Q4 2016, which were used to infer both liability (funding mix) and total liquid asset (TLA) sides of the exercise. In addition, recent data on asset encumbrance (AE), including the received collateral in secured lending and assuming that banks may freely re-hypothecate, were also incorporated in order to enhance the projection accuracy of the counterbalancing capacity of banks and in order to assess potential stress dependencies.
Maturity profiles and the breakdown by credit rating of banks' sovereign debt holdings were used as in the 2014 comprehensive assessment (CA) stress test (ST) exercise, under the assumption that these distributions remain constant or change very slowly over time. We used a sample of 94 banks, and we focused on the analysis of clusters of banks based on their business models for the purposes of this paper.

Scenario-Based Analysis
Three liquidity stress scenarios were defined and applied, with the most severe one (severely adverse) corresponding roughly to the market shock following the Lehman incident. Two less severe scenarios (mild and adverse) are also considered to simulate the effects of weaker but more plausible liquidity shocks, as seen in Table 1.
The approach cannot be directly compared to the Liquidity Coverage Ratio (LCR), as it assumes a three-month stress horizon rather than the one-month horizon of the LCR. Moreover, contrary to the LCR, no stable inflows are assumed. However, some comparison with the LCR can be made in terms of the severity of the assumed run-off rates (i.e., the LCR would fit somewhere between the mild and the adverse scenarios). The analysis assumes that stressed liquidity conditions persist for three months. This horizon, although different from the one used in the LCR metric, is considered more pragmatic in terms of the time span of a real liquidity stress incident, allowing for a more reasonable assessment of systemic spillovers and their impacts on the markets. Therefore, all liquidity needs are calibrated to a three-month horizon. This parameter plays an important role in the calibration of the run-off rates applied to long- and short-term funding sources. As a natural result, maturing liquidity needs are assumed to be higher for a short-term funding source than for a long-term source with a similar notional amount of funding.
The severity levels of the above scenarios have an impact on the run-off rates applied on the liability side (banks cannot roll over their maturing funding needs) and on the haircuts applied to liquid assets. These represent the limited efficacy of the relevant markets under the prevailing stressed conditions, which would require higher haircuts for funding, either using assets of similar quality as collateral (secured funding) or through fire sale pricing if asset disposal is considered a funding source.
In addition, the framework also allows for higher stress impacts by letting the severity of the scenario go beyond the level of the severely adverse scenario. Furthermore, we also allow for deviation from the almost linear stress factor mapping of the three basic scenarios to a non-linear functional form of the applied stress factors (either convex or concave). By setting the "mild" scenario to correspond to a stress factor of 0.25 and the "severely adverse" to correspond to a stress factor of 1, we use a simple exponential (non-linear) functional form to define the scenarios between those points and beyond the stress factor of 1.
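One possible parameterization of such a non-linear mapping is sketched below. The exact exponential form used in the paper is not specified, so this is an assumption for illustration: the curve is pinned at 0 for no stress and 1 for the severely adverse scenario, is convex or concave depending on a curvature parameter, and approaches the linear case as that parameter tends to zero.

```python
import math

def stress_factor(x, k=2.0):
    """Map a scenario severity x (0 = no stress, 1 = severely adverse,
    x > 1 = beyond the severely adverse scenario) to a stress factor.

    k > 0 gives a convex curve, k < 0 a concave one; as k -> 0 the
    mapping approaches the linear case s(x) = x. This functional form
    is one illustrative choice, not the paper's exact specification.
    """
    # (e^{kx} - 1) / (e^k - 1): equals 0 at x = 0 and 1 at x = 1
    return math.expm1(k * x) / math.expm1(k)
```

With this convention, the scenario set can be scanned continuously between the mild scenario (stress factor 0.25) and the severely adverse one (stress factor 1), and extended beyond 1 for more extreme what-if analyses.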

Funding Sources and Determination of Liquidity Needs
Data from the 2014 CA ST exercise are used for the calibration of the maturity ladder profile for each sub-segment of the wholesale market at the individual bank level.
A so-called "liquidity basis" calculation for each wholesale market segment is conducted, taking into account maturity profiles over the entire ST projection horizon. In practice, a conservative estimate of the funding that needs to be rolled over during any three-month window is determined and forms the "liquidity basis" of each funding segment. More precisely, the amount of maturing liabilities is determined for each funding segment on the basis of the ST maturity ladder data by taking the maximum amount maturing within the next 3 months across the 12 quarterly observations available from the 3-year ST projection horizon. Obviously, any other data source with similar information (e.g., COREP) can also be used for calibration purposes.
To ensure conservatism in defining this basis value, an outflow floor, expressed as a percentage of the outstanding volume, is also applied (defined at 10% or 20% for long-term funds and 60% for short-term funds). The liquidity needs for each bank are derived by applying run-off rates for the respective scenario to the estimated liquidity basis for the wholesale funding sources and the outstanding volume of deposits. Table 2 summarizes the run-off rates for different sources of funding for each scenario.

The increase in contingent liabilities (credit and liquidity commitments) and the increase in asset encumbrance levels for existing funding positions are not considered in this analysis due to the lack of reliable and consistent data. The framework enables the analysis of the impact of contingent liabilities and increases in asset encumbrance for existing positions, provided that the scaling factors that capture such contingent increases in funding needs are defined on the basis of existing evidence and that certain assumptions are also made for the TLAs that will be used to account for the asset encumbrance increase.

The approach for determining the abovementioned liquidity basis for the determination of liquidity needs in principle should not have a major impact on the estimated liquidity needs for short-term liabilities. However, the impact on rollover needs for long-term liabilities could be much more substantial due to likely concentrations of issuances or long-term debts becoming due. While one could argue that this is overly conservative, we consider it prudent for macroprudential purposes in view of the fact that liquidity stress can occur at any point in time and persist for a certain period of time. Therefore, such an approach is preferred and more prudent than a static point-in-time snapshot of maturing long-term wholesale funding sources. Obviously, the approach used to estimate rollover needs can be adjusted depending on the purpose of the exercise.
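The liquidity basis construction described above can be sketched as follows. This is an illustrative Python sketch: the segment data are toy values, and the exact floor levels per segment follow the figures quoted in the text.

```python
def liquidity_basis(quarterly_maturing, outstanding, floor_rate):
    """Conservative rollover basis for one funding segment.

    quarterly_maturing: amount maturing in each of the 12 quarters of the
                        3-year ST projection horizon (a quarter ~ 3 months)
    outstanding:        current outstanding volume of the segment
    floor_rate:         outflow floor as a share of outstanding volume
                        (e.g., 0.6 for short-term, 0.1 or 0.2 for long-term)
    """
    # maximum amount maturing in any single three-month window ...
    basis = max(quarterly_maturing)
    # ... floored at a fraction of the outstanding volume for conservatism
    return max(basis, floor_rate * outstanding)

def liquidity_needs(basis, run_off_rate):
    # scenario run-off rate applied to the basis (funding not rolled over)
    return run_off_rate * basis

# Toy example: a long-term segment with a concentration in quarter 2
maturing = [5.0, 20.0, 10.0, 8.0, 3.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
basis = liquidity_basis(maturing, outstanding=200.0, floor_rate=0.1)
```

In this example the maximum three-month maturing amount (20.0) exceeds the 10% floor on the outstanding volume (also 20.0), so the basis is 20.0; with a 60% short-term floor the floor of 120.0 would dominate instead.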

Total Liquid Assets (TLA) and Determination of Counterbalancing Capacity
Basic information about TLAs is obtained from FINREP data, and the capacity of banks to respond to liquidity stress is estimated by applying asset-and scenario-specific haircuts to their liquid assets.
The following five categories of assets are considered for the calculation of TLAs: (i) cash, cash equivalents, and central bank deposits; (ii) sovereign debt securities; (iii) other debt securities covering non-sovereign securities issued by financial corporations (FC); (iv) other debt securities covering non-sovereign securities issued by non-financial corporations (NFC); and (v) equity securities.
With the exception of sovereign debt securities, the other four categories of liquid assets are subject to explicit haircuts for each scenario (see Table 3).
For sovereign debt securities, a more granular calculation of haircuts is applied based on the composition of sovereign assets of the 2014 CA exercise. Concretely, sovereign debt holdings as reported in FINREP were broken down by issuer rating and remaining maturities, in accordance with the 2014 CA reported composition.
For the mild scenario, an average haircut value was selected for each credit rating (S&P ratings were used) based on an estimate of the current secured lending market conditions (i.e., current haircuts for collateralized funding). A more accurate calibration may always be performed on the basis of the regularly updated list of eligible collateral haircuts, as proposed by Halaj and Laliotis [8]. In a second step, haircuts for the shortest maturity and for the longest maturity are defined such that a plausible term structure is ensured. Finally, the haircuts for intermediate maturities are computed by logarithmic interpolation, subject to the constraint that the average haircut defined at the beginning is obtained for the entire tenor curve. For the adverse and severely adverse scenarios, haircuts were calculated by applying scaling factors of 1.2 and 1.5, respectively, to the haircut that corresponds to the mild scenario for each credit rating. The same approach as in the case of the mild scenario was used to obtain the haircuts across the maturity structure (see Tables 4 and 5). The haircut treatment of sovereign holdings thus maps scenario-based projections of the secured lending conditions using a simple scaling factor for the projection of haircuts in the adverse and severely adverse scenarios and logarithmic interpolation for the calculation of haircuts for intermediate tenors.
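The haircut term structure construction can be sketched as follows. This is an illustrative Python sketch: the rescaling step used to impose the average-haircut constraint is our assumption, as the paper does not spell out the exact procedure, and all numerical inputs are toy values.

```python
import math

def haircut_curve(h_short, h_long, h_avg, n_tenors):
    """Build a haircut term structure for one rating bucket.

    Interior tenors are filled by logarithmic interpolation between the
    shortest-maturity haircut (h_short) and the longest-maturity haircut
    (h_long); the curve is then rescaled so that its (equally weighted)
    average matches the target average haircut h_avg for that rating.
    """
    # log-interpolate between the two anchor haircuts
    curve = [
        math.exp(
            math.log(h_short)
            + (math.log(h_long) - math.log(h_short)) * i / (n_tenors - 1)
        )
        for i in range(n_tenors)
    ]
    # rescale so the average over the whole tenor curve equals h_avg
    scale = h_avg / (sum(curve) / n_tenors)
    return [h * scale for h in curve]

def stressed_curve(mild_curve, scaling):
    # adverse: scaling = 1.2; severely adverse: scaling = 1.5 (capped at 100%)
    return [min(1.0, h * scaling) for h in mild_curve]
```

For instance, `haircut_curve(0.02, 0.10, 0.05, 5)` yields a monotonically increasing five-tenor curve averaging exactly 5%, and applying `stressed_curve(..., 1.5)` produces the corresponding severely adverse curve.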
Although this approach may be regarded as simplistic in nature, the framework in general does not pose any restriction in choosing more complex haircut models or even in using projections calibrated on the (limited) instances of historical liquidity stress conditions. Alternatively, a more accurate sovereign haircuts calibration may also involve published eligible collateral haircuts for central bank monetary operations.
The approach above controls for the credit quality of the sovereign debt holdings of each bank when calculating the TLA's counterbalancing capacity. This has a significant impact on the average haircut applied across countries, due to the fragmentation in sovereign debt markets and the tendency of banks to hold domestic sovereign debt in excess of its share of total sovereign issuance.

Table 4. Sovereign liquid assets haircut scaling factor, showing the haircuts for sovereign liquid assets.

Macroprudential Implications: Country Analysis
Based on the described approach, the projected liquidity needs are offset by the available TLA after application of scenario haircuts, and the difference determines the liquidity shortfall or surplus for each individual bank under a given scenario. For the sake of simplicity, in this first set of results, we do not take asset encumbrance into account.
Several metrics can be used to investigate the results and benchmark proxies for the entire sample at the individual bank level in order to assess the resilience of liquidity buffers: the number of banks ending up with a shortfall over the total number of banks (the percentage that would require liquidity assistance), the absolute average shortfall (failing banks only) over total liabilities, the total shortfall and surplus over total liabilities for a country-by-country analysis, average TLAs over total liabilities (liquidity buffers), and TLAs post-haircut over initial TLAs (no haircut).
The resulting impact analysis is based on the relative metrics of each bank with respect to the total sample, subset of the sample or cluster sample, and vulnerable institutions can be identified. Figures 1-3 present the evolution of the three scenarios, allowing for a country-by-country analysis of the total liquidity needs (Figure 1), TLA counterbalancing capacity (Figure 2), and total liquidity shortfall and surplus (Figure 3), respectively. The countries are Euro area countries whose names have been anonymized and are presented in the figures in a random order.
In Figure 3, it is important to note that only in the severely adverse scenario would countries show an overall liquidity shortfall, with five of the sixteen countries affected. These results are also illustrative of how the presented framework can be used in a macroprudential context, since the relative resilience of a banking system at an aggregated level can be assessed. Therefore, systemic vulnerabilities pertaining to the funding mix of the banks that belong to the same cluster may be identified and further analyzed and monitored. The same framework can also provide additional insight into the main drivers of such vulnerabilities and can be used to further analyze the system's resilience on the basis of focused calibrations of shocks (country or sector specific) or the relative importance of vulnerabilities (funding or asset side impact). A cross-temporal dimension of this analysis may also result in reliable stress-test-based indicators that monitor the liquidity conditions within banking clusters and their evolution over time.
The method presented here also provides for numerous "what if" scenario analyses, where different scenario run-offs and haircuts are applied, along with reverse engineering analyses, where the target is to find the level of shock that would drive a bank to become illiquid. For example, the case where interbank liabilities correspond to within-group deposit-taking entities can easily be accommodated by switching part of this liability exposure to deposit funding and recalculating the liquidity needs. Since expected liquidity shocks cannot be easily confined to predetermined scenarios, a range of plausible scenarios is highly informative.

Liquid Shortfalls and Surpluses and Related Metrics
As stated above, the projected liquidity needs are offset by the available TLA after haircuts, and the difference determines the liquidity shortfall or surplus for each individual bank under a given scenario. We have already presented some metrics above. Additionally, another important metric has been introduced: the "distance to liquidity stress indicator" (DLSI), which measures the stress factor that has to be applied in order for the bank to reach the point where it becomes illiquid (surpluses turn into shortfalls). Since in our specification the mild scenario corresponds to a stress factor of 0.25 and the severely adverse to a stress factor of 1, any DLSI value below 1 suggests that the bank would face a liquidity shortfall at a stress level below the one corresponding to the severely adverse scenario. Conversely, a DLSI value above 1 suggests that the bank has the counterbalancing capacity to withstand the stress that corresponds to the severely adverse scenario (stress factor = 1), and even higher levels of stress would, therefore, be required to bring that bank to a liquidity shortfall.
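Given a function returning a bank's surplus or shortfall as a function of the stress factor, the DLSI can be computed numerically. The sketch below uses bisection and assumes the surplus decreases monotonically in the stress factor; the function name, the scanned range, and the tolerance are illustrative choices, not the paper's specification.

```python
def dlsi(surplus_at, s_max=3.0, tol=1e-6):
    """Distance to liquidity stress indicator for one bank.

    surplus_at(s) returns the liquidity surplus (positive) or shortfall
    (negative) at stress factor s. The DLSI is the stress factor at which
    the surplus crosses zero, found by bisection under the assumption
    that surplus_at is monotonically decreasing in s.
    """
    if surplus_at(s_max) > 0:
        return float("inf")  # resilient beyond the scanned stress range
    lo, hi = 0.0, s_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if surplus_at(mid) > 0:
            lo = mid  # still a surplus: the crossing lies above mid
        else:
            hi = mid  # shortfall: the crossing lies at or below mid
    return (lo + hi) / 2
```

A bank whose surplus profile is, say, `lambda s: 1.0 - 2.0 * s` has a DLSI of 0.5, i.e., it would become illiquid between the mild (0.25) and severely adverse (1.0) stress factors.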
In this calculation, we control for asset encumbrance. Asset encumbrance data are also used to produce estimates of the available (unencumbered) assets per asset class for each bank that can be used to meet liquidity needs, either through a direct asset sale or by being pledged in a collateralized funding transaction. By also assuming a pecking order on the unencumbered assets of each bank with regard to the coverage of liquidity needs (i.e., better quality assets are used first to obtain secured funding at lower haircuts, minimizing the effective cost of funding), one can also simulate the overall supply of assets (both in terms of fire sales and collateral supply), as briefly discussed in Halaj and Laliotis [8]. The presented framework is granular and robust enough to be used as the repository for the modeling of fire sales and system-wide amplifications; however, explicit modeling of fire sales and contagion impact falls outside the purposes of this paper.

Figure 4 presents results for the number of banks that reach a stressed level under the three main scenarios for the following four models. Model 1 does not take asset encumbrance into account, but includes collateral received as part of the liquid assets of the bank (TLA), assuming full rights of re-hypothecation. Model 2 does not take encumbrance into account, but the perimeter of the TLA is restricted to the bank's own assets (no use of collateral received, assuming it is not available for re-hypothecation). Model 3 uses asset encumbrance data, but applies a proportionate rule to assess which assets have already been used for funding purposes (encumbered). Model 4 uses asset encumbrance data and assumes a certain value-maximizing "pecking order" in terms of what drives assets to become encumbered, assuming that banks use better quality unencumbered assets first to obtain secured funding.
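The pecking-order allocation underlying Model 4 can be sketched as follows. This is an illustrative Python sketch: the asset-class names are hypothetical, and using the scenario haircut as the proxy for asset quality (lower haircut = better quality) is our assumption.

```python
def pecking_order_capacity(unencumbered, haircuts, needs):
    """Cover liquidity needs with unencumbered assets, best quality first.

    unencumbered: {asset_class: unencumbered market value}
    haircuts:     {asset_class: scenario haircut in [0, 1]}; a lower
                  haircut is treated here as a proxy for better quality
    needs:        scenario liquidity needs to be covered

    Returns the post-haircut amounts mobilized per asset class and the
    residual shortfall (zero if the needs are fully covered).
    """
    used, remaining = {}, needs
    # mobilize better-quality (lower-haircut) asset classes first
    for cls in sorted(unencumbered, key=lambda c: haircuts[c]):
        if remaining <= 0:
            break
        available = unencumbered[cls] * (1 - haircuts[cls])
        take = min(available, remaining)
        used[cls] = take
        remaining -= take
    return used, max(remaining, 0.0)

# Toy example: high-quality sovereigns are exhausted before equities
used, shortfall = pecking_order_capacity(
    unencumbered={"sov_AAA": 100.0, "equity": 100.0},
    haircuts={"sov_AAA": 0.05, "equity": 0.5},
    needs=120.0,
)
```

In the example, the AAA sovereigns contribute 95 post-haircut and equities cover the remaining 25, leaving no shortfall; reversing the ordering would mobilize more notional for the same needs, which is why the pecking order is value-maximizing.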
Special treatment is applied to sovereign debt holdings, both already encumbered and unencumbered, where the granularity of asset encumbrance reporting does not provide rating breakdowns. Therefore, we assume that the existing aggregate level of asset encumbrance is allocated to higher ratings first (based on the rating breakdown available from CA 2014), and the use of unencumbered assets as counterbalancing capacity follows the same logic.
It is apparent that asset encumbrance has a significant impact on the results, as the number of banks that reach the stress level in each scenario is significantly higher for the models that take asset encumbrance into account. With an average of approximately 33% of TLAs encumbered across the bank sample and significant variation between banks and business clusters (Figure 5a), asset encumbrance appears to be the driver that defines the counterbalancing capacity to withstand liquidity stress. In combination with the average haircut levels on TLAs that each bank faces for a given stress level (i.e., varying average TLA quality between banks, resulting in significant dispersion of average haircuts under stress; Figure 5b), pre-stress levels of asset encumbrance are the main determinant of TLA quality and adequacy.
The average shortfall of banks that face a shortfall, as a percentage of their liabilities, remains contained at approximately 2% in the adverse and 3.5% in the severely adverse scenario (Table 6). The counterintuitive reduction in the average shortfall under the pecking order option compared to the option with no pecking order in the adverse and severely adverse scenarios is mainly related to the fact that more banks are failing (with smaller shortfalls on average) and that we measure the shortfall as an average percentage of the total liabilities of failing banks. The numbers of banks failing are in line with expectations: 1, 9, and 34 banks without pecking order, and 1, 12, and 36 banks with pecking order, failed under the mild, adverse, and severely adverse scenarios, respectively.
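The denominator effect behind this counterintuitive reduction can be illustrated with a small sketch: adding marginal failing banks with small shortfalls to the averaging set can pull the ratio down even as stress spreads. All figures here are hypothetical, not taken from the sample.

```python
def avg_shortfall_of_failing(shortfalls, liabilities):
    """Average shortfall, as a percentage of total liabilities, computed
    only over banks that actually fail (shortfall > 0).  Adding marginal
    failers with small shortfalls can lower this average even though
    system-wide stress has increased."""
    failing = [(s, l) for s, l in zip(shortfalls, liabilities) if s > 0]
    if not failing:
        return 0.0
    return 100.0 * sum(s for s, _ in failing) / sum(l for _, l in failing)

# Hypothetical system of four banks: under the pecking-order rule three
# additional banks fail, each with a tiny shortfall, and the ratio falls.
no_pecking = avg_shortfall_of_failing([5, 0, 0, 0], [200, 100, 100, 100])
pecking = avg_shortfall_of_failing([5, 0.5, 0.5, 0.5], [200, 100, 100, 100])
print(no_pecking, pecking)
```

In this toy example the average falls from 2.5% to 1.3% even though total shortfalls rise, purely because the liabilities of the new failers enter the denominator.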
Figure 5. Total liquid assets: asset encumbrance. We present here asset encumbrance as a percentage of TLAs and the haircuts associated with TLAs: (a) TLA asset encumbrance levels; (b) TLA average haircut levels. (Note 1: For illustration purposes, and in order to demonstrate cross-sectoral variation, a different, more refined business-model-based clustering scheme was used for the analysis presented in Figures 5 and 7. This new scheme is similar in principles and approach for the breakdown of the data set to the clustering scheme previously introduced and used in Table 7 and Figure 6.) Table 6. Average shortfalls as a percentage of liabilities using proportional and pecking order rules.

Average Shortfall as a % of Liabilities

                      Mild     Adverse   Severely Adverse
Proportional Rule     0.12%    1.94%     3.54%
Pecking Order Rule    0.91%    1.74%     3.50%

As far as cluster analysis is concerned, Table 7 and Figure 6 present the respective aggregates for illustrative purposes, based on a stylized business-model-based clustering scheme built for this purpose. The scheme follows a business model clustering principle and combines balance sheet size (absolute asset size) and business orientation (local or global presence) indicators with expert judgment for the business model classification of non-standard entities.
It is important to note that even with this simple clustering, significant differences between clusters can be observed in terms of their ability to withstand a liquidity crisis (see Figure 6). The cluster associated with business model number five (BM 5) appears to be the most resilient, with significant amounts of excess TLA even in the severely adverse scenario. The BM 4 and BM 1 clusters have significantly higher average DLSIs than the BM 2 and BM 3 clusters. Note: Averages shown are the liability-weighted total. Positive signs correspond to a liquidity surplus, and negative signs correspond to a liquidity shortfall. Simulation results are based on the proportionate rule; asset encumbrance includes the collateral received but not pledged and the linear stress factor. Figure 6. Distance to liquidity stress indicator (DLSI) for clusters. We present here the DLSI (liability-weighted and unweighted average) stress factor required to cause the bank or cluster liquidity stress. Figure 7 visualizes the DLSI results for two options with regard to stress factors: one where the stress factors are kept linear (approximately in accordance with the three basic scenarios presented here with respect to the applied run-offs and haircuts) and one where the stress factors have a more convex functional form, with the mild and severely adverse scenarios kept the same (which allows for a higher impact from stress factors that go beyond the severely adverse scenario).
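The liability-weighted and unweighted cluster averages underlying Figure 6 can be sketched with a hypothetical helper; the cluster name and all figures below are illustrative, not from the dataset.

```python
from collections import defaultdict

def cluster_dlsi_averages(banks):
    """Liability-weighted and simple (unweighted) average DLSI per cluster.
    `banks` is a list of (cluster, dlsi, liabilities) tuples; weighting by
    liabilities lets the largest banks dominate their cluster's reading."""
    weighted = defaultdict(float)   # sum of dlsi * liabilities per cluster
    liab = defaultdict(float)       # total liabilities per cluster
    total = defaultdict(float)      # plain sum of DLSIs per cluster
    count = defaultdict(int)        # number of banks per cluster
    for cluster, dlsi, liabilities in banks:
        weighted[cluster] += dlsi * liabilities
        liab[cluster] += liabilities
        total[cluster] += dlsi
        count[cluster] += 1
    return {c: (weighted[c] / liab[c], total[c] / count[c]) for c in weighted}

# Hypothetical cluster: one large, resilient bank pulls the weighted
# average above the unweighted one.
avgs = cluster_dlsi_averages([("BM 5", 2.0, 800.0), ("BM 5", 1.0, 200.0)])
print(avgs["BM 5"])
```

Comparing the two averages per cluster shows whether resilience is concentrated in the large institutions (weighted above unweighted) or in the smaller ones.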
Figure 7 compares the DLSI outcomes for the entire set of banks and for different business-model clusters in the sample under linear and non-linear forms of stress factors, attempting to illustrate the impact of non-linearities on the way that liquidity stress scenarios materialize. Non-linearities tend to decrease average DLSIs across clusters, implying that linear modelling, or ignoring second round effects due to fire sales, contagion, and interconnectedness, would potentially lead to overestimating systemic resilience in a liquidity stress test. This is further shown in Figure 8, which presents the average effective haircut on TLAs that banks face as the applied stress factor increases from 0.25 (mild) to 1 (severely adverse) and beyond (Figure 8a), and the quantile distribution analysis of this effective haircut across the entire sample of banks (Figure 8b).
Mild encumbrance conditions are captured by the linear part of this curve: under low levels of encumbrance, banks tend to face haircuts that increase almost linearly with the stress level; in a more acute crisis, or in the case of elevated pre-stress encumbrance levels, the system as a whole faces effective haircuts that increase in a non-linear way. This non-linearity can be attributed to the fact that lower quality assets with higher market haircuts are used as collateral as the stress level increases and banks run down their high quality collateral, as specified in the pecking order policy. As already mentioned for the DLSI metric, this non-linear behavior is expected to be amplified further if second round effects are also taken into consideration.
Figure 8. Average effective haircut for TLAs. We present here the effective haircut level averaged across all banks, assuming a convex form of the stress factor: (a) average TLA effective haircuts for increasing stress factor levels; (b) TLA effective haircut for increasing stress factor levels: quantile distribution analysis.
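The two stress-factor options can be sketched as follows, assuming a hypothetical convex transform that coincides with the linear one at the mild (0.25) and severely adverse (1.0) anchor points, and a toy TLA pool with illustrative base haircuts; none of these figures come from the paper's calibration.

```python
def convex_stress(s: float, s_mild: float = 0.25, s_severe: float = 1.0) -> float:
    """Convex stress-factor transform that matches the linear transform at
    the mild and severely adverse anchors but amplifies stress beyond the
    severely adverse scenario."""
    if s <= s_mild:
        return s
    return s_mild + (s_severe - s_mild) * ((s - s_mild) / (s_severe - s_mild)) ** 2

def effective_haircut(s, assets, transform):
    """Average haircut faced at stress factor s, assuming the whole TLA
    pool is mobilized and per-class base haircuts are scaled by the
    chosen stress transform, capped at 100%."""
    f = transform(s)
    total = sum(amount for amount, _ in assets)
    cut = sum(amount * min(1.0, haircut * f) for amount, haircut in assets)
    return cut / total

# Hypothetical TLA pool: (amount, base haircut at stress factor 1).
pool = [(50.0, 0.02), (30.0, 0.10), (20.0, 0.40)]
for s in (0.25, 1.0, 1.5):
    print(s, round(effective_haircut(s, pool, convex_stress), 4))
```

At the two anchor points the convex and linear options give the same effective haircut; beyond a stress factor of 1, the convex transform makes the haircut curve steepen, mirroring the non-linear behavior discussed above.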

Conclusions
This paper outlines an analytical framework for assessing the resilience of banks with respect to a liquidity shock. The impact on the liquidity position of individual banks, in terms of the magnitude of funding freezes and the availability of liquidity buffers, is assessed under different scenarios. The granular analysis of both the liability and asset sides of the balance sheet relies on data sources that allow a regular update of the monitoring metrics.
The scenarios are applied both to the availability of funding, through the calibration of funding run-off rates, and to the level of haircuts that banks may face in using their liquid assets to replenish liquidity outflows. The methodology is applied to a large sample of banks in the euro area and proves very useful for the identification and assessment of liquidity- and funding-related vulnerabilities in a macroprudential context. Importantly, a new metric is introduced, the distance to liquidity stress indicator (DLSI), which integrates the available information on the liquidity stress condition of individual banks, aggregate sectors and clusters, or the market as a whole. This metric contributes to a more standardized framework for monitoring the liquidity-related condition or level of stress, while also facilitating analysis across market segments and through time.
The additional flexibility of user-defined stress factors, the adaptable functional form of the stress factors, and the extension of the analysis beyond the severely adverse scenario make the framework well suited for assessing financial stability risks. As an extension, behavioral aspects with system-wide implications (e.g., fire sales and contagion effects), such as the reactions of individual banks to liquidity stress and the implications of the policy reactions of the lender of last resort (LOLR), can be integrated to further enrich the analysis.
The proposed framework can be used to set the required analytical background for a systemic liquidity stress test and also to assess liquidity risk. Stress tests became an essential part of the financial stability toolkit after the 2008 financial crisis.
Future research could build on theoretical models linking solvency and liquidity risk to integrate solvency risk with a multiyear horizon and liquidity risk with a 3-6-month horizon.