Article

Development of an Impairment Point in Time Probability of Default Model for Revolving Retail Credit Products: South African Case Study

by Douw Gerbrand Breed 1, Niel van Jaarsveld 2, Carsten Gerken 3, Tanja Verster 1 and Helgard Raubenheimer 1,*

1 Centre for Business Mathematics and Informatics, North-West University, Private Bag X6001, Potchefstroom 2520, South Africa
2 Independent Researcher, 20 Timbavati, St. Christopher Rd., St. Andrews, Bedfordview 2007, South Africa
3 Independent Researcher, Hartmannsweilerstr. 55, 65933 Frankfurt am Main, Germany
* Author to whom correspondence should be addressed.
Risks 2021, 9(11), 208; https://doi.org/10.3390/risks9110208
Submission received: 16 August 2021 / Revised: 21 October 2021 / Accepted: 8 November 2021 / Published: 15 November 2021
(This article belongs to the Special Issue Quantitative Risk Modeling and Management—New Regulatory Challenges)

Abstract
A new methodology to derive IFRS 9 PiT PDs is proposed. The methodology first derives a PiT term structure with accompanying segmented term structures. Secondly, the calibration of credit scores using the Lorenz curve approach is used to create account-specific PD term structures. The PiT term structures are derived by using empirical information based on the most recent default information and account risk characteristics prior to default. Different PiT PD term structures are developed to capture the structurally different default risk patterns for different pools of accounts using segmentation. To quantify what a materially different term structure constitutes, three tests are proposed. Account specific PiT PDs are derived through the Lorenz curve calibration using the latest default experience and credit scores. The proposed methodology is illustrated on an actual dataset, using a revolving retail credit portfolio from a South African bank. The main advantages of the proposed methodology include the use of well-understood methods (e.g., Lorenz curve calibration, scorecards, term structure modelling) in the banking industry. Further, the inclusion of re-default events in the proposed IFRS 9 PD methodology will simplify the development of the accompanying IFRS 9 LGD model due to the reduced complexity for the modelling of cure cases. Moreover, attrition effects are naturally included in the PD term structures and no longer require a separate model. Lastly, the PD term structure is based on months since observation, and therefore the arrears cycle could be investigated as a possible segmentation.

1. Introduction

The International Accounting Standards Board (IASB) launched a project to substitute the International Accounting Standard (IAS) 39 with the International Financial Reporting Standard (IFRS) 9 that outlines the requirements for the recognition and measurement of financial instruments in the financial statements of a company (IFRS Foundation 2014). The new standard requires financial institutions to raise impairments on financial instruments using an expected credit loss (ECL) framework (IFRS Foundation 2014), moving away from the previous incurred loss model. IFRS 9 differentiates between 12 month and lifetime ECL. 12 month ECL (in monetary value) is defined as the “portion of lifetime expected credit losses that represent the expected credit losses that result from default events on a financial instrument that are possible within the 12 months after the reporting date” (IFRS Foundation 2014). Lifetime ECL is defined as the expected credit losses resulting from all possible default events over the financial instrument’s expected life.
IFRS 9 proposes a “three stage model” when estimating ECL, based on changes in credit quality since initial recognition (see Aptivaa (2016) and Beerbaum (2015) for further discussion on the three stage methodology). A financial instrument is assigned to Stage 1 if its credit risk (measured in terms of its probability of default as required by Section 5.5.9 of (IFRS Foundation 2014)) has not increased significantly since initial recognition. A 12 month ECL is recognised for such an instrument. On the other hand, a financial instrument is assigned to Stage 2 if its credit risk has increased significantly since initial recognition. A lifetime ECL is recognised for instruments in Stage 2. Finally, Stage 3 comprises all credit impaired (defaulted) financial instruments for which a lifetime ECL is recognised.
The quantification of ECL is often broken down into its three components, as similar metrics are also required for other risk management purposes such as the quantification of regulatory or economic capital requirements: the probability of default (PD), loss given default (LGD) and exposure at default (EAD). A simplified expression for calculating expected credit loss is $ECL = PD \times LGD \times EAD$. When using a PD term structure (marginal PDs), the expression to calculate the ECL at account level changes to $ECL_i = \sum_{t=1}^{T} p_{i,t}^m \times l_{i,t} \times e_{i,t}$, where $p_{i,t}^m$ is the marginal PD, $l_{i,t}$ is the LGD when an account defaults at time $t$, and $e_{i,t}$ the EAD at time $t$, for account $i$ (Schutte et al. 2020).
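To make the decomposition concrete, the following Python sketch computes the account-level ECL from marginal PD, LGD and EAD term structures; the numeric vectors are made-up placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical marginal PD, LGD and EAD term structures for one account over T = 12 months.
p_m = np.full(12, 0.01)                # marginal PD per month, p_{i,t}^m
lgd = np.full(12, 0.45)                # LGD if default occurs in month t, l_{i,t}
ead = np.linspace(10_000, 8_000, 12)   # EAD per month, e_{i,t}

# ECL_i = sum_t p_{i,t}^m * l_{i,t} * e_{i,t}
ecl = float(np.sum(p_m * lgd * ead))
print(f"12 month ECL for the account: {ecl:.2f}")
```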
IFRS 9, being a principle based accounting standard, does not prescribe specific methodologies for the 12 month or lifetime ECL estimation, and there is typically no single methodology suitable for all portfolios (Global Public Policy Committee (GPPC) 2016). Literature is scarce on IFRS 9 specific PD methodologies. This paper aims to develop a new methodology to derive the PiT (point in time) PD component (i.e., the marginal PD $p_{i,t}^m$ above) for the ECL calculations of financial instruments in Stages 1 and 2 (Stage 3 assets are subject to a constant 100% PD estimate). In a PiT approach, borrowers are regraded immediately as the economic cycle changes, instead of receiving a cycle-neutral, through-the-cycle (TTC) grading (Taylor 2003).
The proposed methodology to estimate the marginal PD consists of two parts, the term structure and the Lorenz calibration. The term structure part is a segment specific, point in time based PD term structure that uses default information from the most recent months. The Lorenz calibration is the calibration of credit scores to 12 month (Stage 1) and lifetime (Stage 2) PDs to create account specific PD term structures. The proposed methodology is illustrated in a case study for a revolving retail credit portfolio from a South African bank.

2. Literature Overview

Several PD modelling approaches exist that may be considered as possible methodologies to derive the PD component for IFRS 9 purposes, although very few IFRS 9 PD methodologies exist in published academic literature. In spite of the expansion of research with respect to IFRS 9 in the past few years, it is still in its infancy in developing countries (Dib and Feghali 2021). The focus of the literature overview will be on retail portfolios and not wholesale portfolios, because some methodologies for wholesale portfolios are not applicable to retail portfolios (see, e.g., Gubareva 2021).
Some of these possible approaches will now be discussed in the literature overview. How risk drivers are incorporated in each of these approaches will be mentioned, since it is crucial to derive sufficiently granular PD estimates to assess significant increases in credit risk as required by IFRS 9. Some advantages and disadvantages of each approach will be discussed with specific consideration of the requirements of IFRS 9 (McPhail and McPhail 2014). Țurlea (2021) highlights various rating methods and systems applicable under the IFRS 9 framework with some advantages and disadvantages.
Logistic regression (Țurlea 2021) can be used in a scorecard to predict PDs (Thomas 2009). Typically, risk drivers are included as independent variables (Anderson 2007). The advantages of logistic regression include that it is simple to use and well known in the industry (Siddiqi 2006), produces account-level estimates, and can regress multiple variables without the need for segmentation (Siddiqi 2017). The key disadvantage from an IFRS 9 perspective is that logistic regression is not designed for varying time horizons and, if used, may result in unnecessary complexity.
Cumulative default curves can be estimated from empirical default and closure data, an approach sometimes referred to as segmented empirical term structures (Schutte et al. 2020). The PD term structures are often segmented by different risk drivers. This method is generally intuitive and well understood (Yang 2017) and directly includes re-defaults and attrition effects. The main disadvantage is that the quality of the resulting estimates depends significantly on how the term structures are segmented, and granular segmentations are often not possible due to data limitations, such as an insufficient number of observations and defaults for stable estimates per segment.
Markov chains (Aalen and Johansen 1978) can also be used to derive PD term structures by multiplication of empirically derived migration matrices that describe the transition between risk states (Cziraky and Zink 2017). To account for different risk drivers, one can either use segmented migration matrices or incorporate these as specific risk states into the migration matrix (e.g., delinquency). The advantages include that one can produce PD estimates for any time horizon. They are easy to design and implement and provide forecasts of future portfolio risk profiles (e.g., what % of today’s portfolio is expected to be delinquent in 12 months) that can be used for budgeting/stress testing. However, the disadvantages are that the standard time-invariant Markov chain assumption typically does not result in a good fit for actual multi-period default behaviour. Additional time-specific (e.g., two months after observation) migration matrices are, in these cases, required to achieve acceptable fits, making the model very complex with limited benefit compared to direct estimation of PD term structures. Furthermore, minor deviations in monthly migrations may lead to significant over- or underestimation for multi-year PDs.
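As a rough illustration of the migration matrix multiplication described above (not part of the methodology proposed in this paper), the sketch below raises a hypothetical one-month migration matrix to higher powers and reads the cumulative PD off the absorbing default state; the matrix entries are invented for illustration.

```python
import numpy as np

# Hypothetical one-month migration matrix over three states:
# 0 = up to date, 1 = in arrears, 2 = default (absorbing).
P = np.array([
    [0.95, 0.04, 0.01],
    [0.30, 0.55, 0.15],
    [0.00, 0.00, 1.00],
])

def cumulative_pd(P, start_state, t):
    """Cumulative PD over t months: probability mass in the default state after t steps."""
    Pt = np.linalg.matrix_power(P, t)
    return Pt[start_state, -1]

for t in (1, 12, 36):
    print(t, round(cumulative_pd(P, 0, t), 4))
```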
Run-off triangles (Braun 2004) use the most recent default and closure information to predict the term structure for the performing portfolio. These run-off triangles are often segmented for different risk drivers. The most significant advantage of this approach is that it is easy to design and implement, easy to automate, and generates PiT estimates that are often predictive for the near future. The disadvantages are that the derivation of account specific PD term structures typically requires techniques like the simple linear scaling of segment level PD term structures which may not result in a good fit. Structural changes in the portfolio in the recent past might distort PD term structures (but can be corrected).
The chain ladder method (England and Verrall 2002) is a popular method that insurance companies use to estimate their required claim reserves and can also be used to determine PDs. The chain ladder method utilises run-off triangles and has similar advantages and disadvantages as the run-off triangles provided above.
Hazard models (Țurlea 2021) can be used to assess the riskiness of the obligor by computing a score that indicates whether the obligor defaults within the specified horizon. However, these models can be quite complex, and they do not determine when default will occur (Crook and Bellotti 2013). More generally, survival analysis can also be used (Chimezda and Marimo 2017).
Panel models (Țurlea 2021) are also an alternative and have the advantage of capturing effects over time (Crook and Bellotti 2010). However, these panel models can become quite complex.
Autoregressive models (Glen 2015) can also be used where future default rates are modelled as a function of previous default rates. Additional risk drivers can be included in the modelling process as separate covariates in the regression. One of the most significant advantages of autoregressive models is that they can incorporate macroeconomic forecasts in the regression. However, disadvantages include that very granular segmentation is often not possible, which leads to top-down approaches being required. Autoregressive models are sometimes sensitive to recent changes (e.g., driven by credit policy changes).
Techniques from the data mining environment can also be applied, such as neural networks (Țurlea 2021). However, there is no predetermined mechanism to determine the optimal network, and the resulting networks are often seen as black boxes that are difficult to interpret (Wielenga et al. 1999).
The PiT PD modelling approach presented here uses calibration as a key model component to derive account-level PD estimates. Calibration refers to the mapping of a credit score to a probability of default (Van Gestel and Baesens 2009). Several calibration methods have been suggested in the literature (Medema et al. 2009; Glößner 2003). Glößner (2003) describes a method to estimate PDs using a Lorenz curve constructed from credit scores, which this paper leverages.

3. Methodology

In the proposed methodology, we assume that the ECL for a period $(0, T]$, for an account with a credit score $s$, can be estimated as follows:
$$ECL_T(s) = \sum_{t=1}^{T} \tilde{p}_t^m(s) \times \tilde{e}_t(s) \times \tilde{l}_t(s),$$
where $\tilde{l}_t(s)$ is the estimated LGD, $\tilde{e}_t(s)$ is the estimated EAD and $\tilde{p}_t^m(s)$ is the estimated marginal PD in the period $(t-1, t]$ for an account with a credit score $s$. Note that for Stage 1 accounts $T = 12$, and for Stage 2 accounts the choice of $T$ is based on the expected lifetime of the product. For revolving retail products, the behavioural lifetime needs to be determined.
The proposed methodology to estimate p ˜ t m ( s ) consists of two parts with the PiT term structures discussed in Section 3.1 as the first part. This includes the segmented term structures and the three proposed tests to evaluate whether the segmented term structures differ from the unsegmented term structure. The second part refers to the Lorenz curve calibration, which calibrates credit scores to create account specific term structures discussed in Section 3.2. The strengths, weaknesses and assumptions of the methodology are summarised in Section 3.3.

3.1. Empirical PiT PD Term Structures

This section describes how marginal PDs (referred to as 'PD term structures' in what follows) are derived by using empirical information. The method creates PD term structures based on the most recent default information and accounts' risk characteristics prior to default. The approach accounts for attrition effects (i.e., accounts closing from a performing status that can thus no longer default in subsequent months) such that no separate adjustment is required in the calculation of ECL. The proposed methodology also includes re-default events in the PD term structures (i.e., scenarios where an account defaulted more than once during its lifetime) such that lifetime PDs can theoretically exceed 100% for high risk accounts, particularly for portfolios with long lifetime assumptions. This approach was chosen to capture the portfolio's actual default behaviour more accurately and to reduce complexity compared to a 'worst ever' modelling approach, typically used in regulatory models. The same approach must also be applied in the LGD model, e.g., by assigning a zero loss assumption to cure events to avoid double counting effects. Different PiT PD term structures are developed to capture the structurally different default risk patterns for different pools of accounts using segmentation.
To define the PD term structure, we define the cumulative and marginal PDs similarly to Yang (2017). Let $p_{k,t}^c$ be the cumulative PD for the period $(0, t]$ with respect to observation month $k$, i.e., the probability of defaulting in the total period $(0, t]$. The marginal PD in the period $(t-1, t]$ with respect to observation month $k$ is then defined as $p_{k,t}^m = p_{k,t}^c - p_{k,t-1}^c$. Note that, by definition, for the period $(0, 1]$ we have $p_{k,1}^m = p_{k,1}^c$.
To estimate the empirical marginal PDs, we construct a defaults table containing the number of defaults and performing accounts. Each row in the defaults table represents an observation month, where $\{M_1, M_2, \ldots, M_K\}$ is the set of observation months, e.g., {201501, 201502, ..., 201507}. Let $d_{k,t}$ be the number of accounts that were performing as at observation month $M_k$ and then defaulted in the period $(t-1, t]$ months after the observation month, and $n_{k,t}$ the number of performing accounts that survived the period $(0, t-1]$, where $t \in \{0, 1, \ldots, T_k\}$ is the number of months since observation month $k$; i.e., $n_{k,0}$ is the number of performing accounts as at the observation month.
Table 1 is an illustrative example of the defaults table. The number of performing accounts at the start of observation month 201501 (i.e., $M_1 = 201501$) was 500 (i.e., $n_{1,0} = 500$). Of these 500 accounts, 10 defaulted in 201502, during the period $(0, 1]$ (i.e., $d_{1,1} = 10$), and 5 defaulted in 201503, during the period $(1, 2]$ (i.e., $d_{1,2} = 5$). Note that no forecasting is required.
From the defaults table, the empirical PDs per observation month $M_k$ can be calculated as follows:
$$p_{k,t}^m = \frac{d_{k,t}}{n_{k,0}}, \quad \text{and}$$
$$p_{k,t}^c = \sum_{i=0}^{t} \frac{d_{k,i}}{n_{k,0}}.$$
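As a minimal sketch, the empirical marginal and cumulative PDs for the first observation month of Table 1 follow directly from its default counts:

```python
import numpy as np

# Defaults table row for observation month 201501 (Table 1): n_{1,0} = 500
# performing accounts, with d_{1,t} defaults t months after observation.
n_k0 = 500
d_kt = np.array([10, 5, 4, 8, 6, 3, 3])   # t = 1, ..., 7

marginal_pd = d_kt / n_k0                  # p_{k,t}^m = d_{k,t} / n_{k,0}
cumulative_pd = np.cumsum(d_kt) / n_k0     # p_{k,t}^c = sum_{i<=t} d_{k,i} / n_{k,0}

print(marginal_pd[0])    # p_{1,1}^m = 0.02
print(cumulative_pd[1])  # p_{1,2}^c = 0.03
```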
After having derived the empirical marginal PD estimates $p_{k,t}^m$ for the different observation months $M_k$ and outcome horizons $t \in \{1, 2, \ldots, T_k\}$, it needs to be decided which of these estimates should contribute to the final PD term structure and how these contributions should be weighted. As mentioned earlier, the model's objective is to yield PiT PD estimates such that only defaults from the most recent $R$ outcome months are used in the final PD estimates. $R$ is referred to as the 'reference period' and is a key parameter for estimating the PD term structure. A shorter reference period will make the resulting estimates more PiT but could result in unwanted volatility (and vice versa for longer reference periods). If one uses a very long reference period over an entire economic cycle, the resulting PD term structures can be considered to provide through-the-cycle (TTC) estimates. It may sometimes also be required not to use the most recent data, e.g., if models are refreshed in August but should only account for information until June. Hence, a reference month $M \in \{M_1, M_2, \ldots, M_K\}$ is used to denote the last observation month from the set of observation months that is used in the derivation of $p_{k,t}^m$. Therefore, only defaults from the outcome months $\{M-(R-2), \ldots, M, M+1\}$ are used for modelling purposes.
We estimated the marginal PD, $\tilde{p}_t^m$, from the defaults table as the weighted average of marginal PDs across the most recent $R$ observation months with available outcome horizons of at least $t$ months. A weighting by the number of observations is performed to smooth the impact of outlier months for segments with small and volatile population sizes.
Thus, given $R$ and a reference month $M$ (note that the term structure is generally calculated taking the reference month to be the most recent observation month, i.e., $M = M_K$), we estimate the marginal PD for time horizon $t$ as
$$\tilde{p}_t^m(R, M) = \frac{\sum_{i=M-(t-1)-(R-1)}^{M-(t-1)} d_{i,t}}{\sum_{i=M-(t-1)-(R-1)}^{M-(t-1)} n_{i,0}}.$$
For example, if a reference period of 3 months is chosen ($R = 3$) and the reference month is 201507 ($M = 201507$), then Table 2 illustrates the resulting marginal PDs.
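A short Python sketch of this weighting, assuming the Table 1 defaults table is stored in plain dictionaries, reproduces the marginal PDs of Table 2:

```python
# Defaults table from Table 1: performing accounts n_{k,0} and defaults d_{k,t}.
months = [201501, 201502, 201503, 201504, 201505, 201506, 201507]
n0 = {201501: 500, 201502: 550, 201503: 600, 201504: 650,
      201505: 700, 201506: 750, 201507: 800}
d = {201501: [10, 5, 4, 8, 6, 3, 3],
     201502: [11, 5, 6, 3, 7, 5],
     201503: [13, 5, 7, 4, 6],
     201504: [14, 6, 6, 5],
     201505: [15, 5, 7],
     201506: [14, 7],
     201507: [16]}

def pit_marginal_pd(t, R, M):
    """Weighted average marginal PD over the R most recent observation
    months with an outcome horizon of at least t months (reference month M)."""
    idx = months.index(M)
    obs = months[idx - (t - 1) - (R - 1): idx - (t - 1) + 1]
    return sum(d[m][t - 1] for m in obs) / sum(n0[m] for m in obs)

# Reproduces Table 2 with R = 3 and M = 201507.
for t in range(1, 6):
    print(t, round(100 * pit_marginal_pd(t, 3, 201507), 3), "%")
```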
To summarise, the approach will create PD term structures based on the most recent default information and initial performing accounts as at observation month. It does not require the explicit modelling of future cures or closures, as no survival analysis is required. Below we discuss how the PD term structure is segmented.

Segmentation of PD Term Structures

We have a single PD term structure at this stage, and typically, a revolving retail credit portfolio is not a homogeneous set of exposures. Hence, it is unlikely that a single PD term structure would adequately fit across different pools of accounts. If the portfolio is left unsegmented, then the resulting PD term structure will represent a combination of low risk PD term structures and high risk PD term structures. Such an unsegmented PD term structure will understate the default risk of high risk customers and overstate the default risk of low risk customers. Typical examples of effects that cause PD term structures to be materially different from a portfolio average are ageing effects and irregular payment behaviours. Young accounts tend to carry significant default risk in the first year or two after origination, but this improves significantly over time.
In contrast, more matured accounts do not show this significant further improvement in later time periods. Customers with irregular payment behaviour in the recent past have a higher risk of defaulting shortly after observation than in later periods, while very low risk customers often show a more constant (or even increasing) default risk over time. Therefore, a steep increasing cumulative PD term structure is needed for the high risk customers, whereas a more linear-shaped cumulative PD term structure would be needed for the low risk customers.
Therefore, it is necessary to identify segments for which the PD term structure shape is structurally different from the shape of the unsegmented PD term structure to ensure appropriate risk differentiation. Banks commonly segment their portfolios along business lines, product types and risk characteristics to model more homogeneous loan groups (McPhail and McPhail 2014). To identify these segments, detailed data analysis based on typical dimensions like delinquency cycle, account age or product type is used, often supported by additional insights from business areas.
It is also important to quantify what a materially different term structure constitutes. To assess whether segmented PD term structures are materially different from the corresponding unsegmented PD term structure, the following tests are proposed:
  • Visual comparison of the segmented PD term structures vs the unsegmented PD term structure: The segmented PD term structures should ideally show no crossings and provide clear risk differentiation.
  • Comparing the ratio of cumulative segmented PDs over different horizons to the 12 month cumulative segmented PD: Define $Ratio_T(R, M) = \frac{\tilde{p}_T^c(R, M)}{\tilde{p}_{12}^c(R, M)} = \frac{\sum_{t=1}^{T} \tilde{p}_t^m(R, M)}{\sum_{t=1}^{12} \tilde{p}_t^m(R, M)}$ for $T = 24, 36, 48$ months. These ratios assess the segmented PD term structure's steepness. The ratios will also show potential levels of over- or underestimation if only the unsegmented PD term structure is used. The ratios can be analysed over time by varying the reference month $M$.
  • Comparison of the cumulative segmented PDs to cumulative unsegmented PDs over time for different horizons: This test will confirm whether the difference between the segmented PD term structures and the unsegmented one remains consistent over different horizons $T = 12, 24, 36, 48$ months. The PDs can be analysed over time by varying the reference month $M$.
The validity of the suggested segments will be evaluated using these three tests. These three tests are another contribution of this paper and will be illustrated in the case study.
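A sketch of the second test follows, assuming each segment's marginal PD term structure is available as an array; the two example term structures below are hypothetical and only illustrate how the ratios expose differences in steepness.

```python
import numpy as np

def ratio_T(marginal_pd, T):
    """Ratio_T = cumulative PD over T months / cumulative 12 month PD,
    for a marginal PD term structure (index t-1 holds the PD for month t)."""
    cum = np.cumsum(marginal_pd)
    return cum[T - 1] / cum[11]

# Hypothetical 48 month marginal PD term structures for two segments.
high_risk = np.linspace(0.030, 0.005, 48)   # front-loaded default risk
low_risk = np.full(48, 0.002)               # flat default risk

for name, seg in [("high risk", high_risk), ("low risk", low_risk)]:
    print(name, [round(ratio_T(seg, T), 2) for T in (24, 36, 48)])
```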

3.2. Account-Specific Term Structures Using Lorenz Curve Calibration

Until now, segment level granularity has been achieved, but more granularity is required to estimate the IFRS 9 PiT PD accurately. In this section, a methodology is proposed to yield account level granularity for the PD estimates. The credit score, in conjunction with a calibration technique, will be used. Any type of credit score that yields good risk differentiation can be used for this approach. Behavioural scores typically provide the best predictive power for revolving retail credit portfolios. In this paper, we used the Lorenz curve as the calibration method (Glößner 2003). Once the base PD term structures are created using the methodology described in Section 3.1, account specific PiT PDs will be derived through the Lorenz curve calibration using the latest default experience together with the credit scores. Two PD term structures are created for each account, one to be used for Stage 1 ECL calculations and the other for Stage 2 ECL calculations. Note that separate calibrations will be required for the different segments used for the base PD term structures to improve the calibrated PDs. The two key reasons for performing this extra step after the creation of the base PD term structures are as follows:
  • To obtain more granular PD term structures per credit score, which will increase the accuracy of account level ECL estimates as well as the identification of Stage 2 accounts for which the default risk has significantly increased since origination; and
  • To make average PD estimates more point-in-time by accounting for changes in score levels.
Assume two cumulative distribution functions $F_d(s)$ and $F_a(s)$, where $F_d(s)$ is the cumulative distribution of defaulted accounts and $F_a(s)$ the cumulative distribution of all accounts for credit score $s$. The Lorenz curve is defined as the graphical plot of the proportion of defaulted accounts against the proportion of all accounts. Mathematically, the Lorenz curve is defined by Glößner (2003) as:
$$\mathcal{L}: [0, 1] \to [0, 1],$$
$$\mathcal{L}(x) = F_d\big(F_a^{-1}(x)\big).$$
An example of a Lorenz curve (solid line) is shown in Figure 1. The $x$-axis of the Lorenz curve indicates the cumulative proportion of all accounts, and the $y$-axis the cumulative proportion of defaulted accounts. Suppose the defaulted accounts are distributed evenly across scores in the portfolio. In that case, the $y$ fraction is more or less equal to the $x$ fraction, and the resulting Lorenz curve would be the 45-degree diagonal (dashed line), indicating that the credit score $s$ has no distinguishing power. The other extreme would be a credit score $s$ that separates the good accounts from the bad ones perfectly (dotted line). In this case, the Lorenz curve rises linearly to $y = 1$, which is reached at the $x$ value equal to the percentage of defaulted accounts, and stays there.
The Lorenz curve can be used to estimate the cumulative PD, over a horizon $t$, for accounts with a given credit score $s$ (Glößner 2003). The estimated PD of accounts with a given credit score $s$ is the fraction of defaulted accounts with credit score $s$ among all accounts with credit score $s$, i.e., $p_t^c(s) = f_d(s)/f_a(s)$, where $f_d(s)$ is the density of defaulted accounts and $f_a(s)$ the density of all accounts for credit score $s$. Glößner (2003) further shows that the cumulative PD over a horizon $t$ can also be written as:
$$p_t^c(s) = \frac{\mathcal{L}(F_a(s))}{F_a(s)} \times \bar{p} = \frac{F_d(s)}{F_a(s)}\,\bar{p},$$
where $\bar{p}$ is the average PD over the whole portfolio.
Specific invariancy properties (Glößner 2003) of the Lorenz curve make this method a stable way to estimate PDs. Considerably less information is needed to fit a numerically constructed Lorenz curve than to fit the two numerical distributions it is constructed from. When applying the Lorenz curve, we make the following two assumptions:
  • The validity of the calculated PD relies on the account's scoring results, i.e., the distinguishing power of the credit score used; and
  • The density functions $f_d(s)$ and $f_a(s)$ are assumed to be lognormal.
To ensure the lognormality assumption holds, the credit score is transformed (see Glößner 2003) using different transformations. Let $T(s)$ denote a transformation applied to the credit score $s$ with score range $[s_0 < s_1 < \cdots < s_J]$, where $s_0$ and $s_J$ are the minimum and maximum observed credit scores. Glößner (2003) proves the Lorenz curve's transformation invariance and proposes three possible transformations: the quadratic, the exponential and the logarithmic.
By assuming that the densities $f_d(s)$ and $f_a(s)$ are lognormal, the cumulative PD over a horizon $t$ can be estimated from the data as:
$$\hat{p}_t^c(s) = \frac{\hat{F}_d(s)}{\hat{F}_a(s)}\,\hat{\bar{p}},$$
where $\hat{F}_d(s) = LNorm(T(s); \mu_d, \sigma_d)$ and $\hat{F}_a(s) = LNorm(T(s); \mu_a, \sigma_a)$ are the lognormal CDFs of the transformed credit scores for defaulted and all accounts, respectively, with parameters $\mu_d$, $\sigma_d$, $\mu_a$ and $\sigma_a$, and $\hat{\bar{p}}$ is the average PD estimated from the data.
The parameters may be estimated from the same data as in Table 1. For example, to construct a three month cumulative PD from the data in Table 1, over an observation period of three months, Table 3 gives an example of the data required to construct the Lorenz curve.
Furthermore, consider a scoring range between 1 and 3 with 1 unit between each score, where 1 is the worst score and 3 is the best score. The number of performing accounts and cumulative defaulted accounts, $d_{k,t}^c(s_j) = \sum_{i=0}^{t} d_{k,i}(s_j)$, are determined for each score in the dataset, as illustrated in Table 4.
The average PD of the portfolio, $\bar{p}$, is estimated as $\hat{\bar{p}} = D/N$, where $D = \sum_{j=1}^{J} \sum_{k \in M} d_{k,t}^c(s_j)$ and $N = \sum_{j=1}^{J} \sum_{k \in M} n_{k,0}(s_j)$. Here $M$ is any subset of the observation months $\{M_1, \ldots, M_K\}$ (i.e., the sample window) and $t$ the horizon (i.e., the performance window) over which the Lorenz curve is calibrated. The parameters $\mu_d$, $\sigma_d$, $\mu_a$ and $\sigma_a$ of the calibrated Lorenz curve $\hat{\mathcal{L}}(x) = \hat{F}_d\big(\hat{F}_a^{-1}(x)\big)$ can be estimated by minimising the sum of squares of differences to the empirical Lorenz curve. The point-wise defined empirical Lorenz curve is
$$\left( \frac{\sum_{k \in M} n_{k,0}(s_j)}{N}, \frac{\sum_{k \in M} d_{k,t}^c(s_j)}{D} \right)_{j=1}^{J}.$$
The resulting objective function is
$$\sum_{j=1}^{J} \left( \hat{\mathcal{L}}\!\left( \frac{\sum_{k \in M} n_{k,0}(s_j)}{N} \right) - \frac{\sum_{k \in M} d_{k,t}^c(s_j)}{D} \right)^2.$$
The transformation $T(s)$ is chosen as the one that minimises the sum of squared differences (SSD) between the calibrated Lorenz curve and the empirical Lorenz curve (Glößner 2003).
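The following sketch illustrates such a fit on the Table 4 totals. It assumes the empirical Lorenz curve points are accumulated from the worst score upwards and uses the quadratic transformation; scipy's lognormal parametrisation (shape = sigma, scale = exp(mu)) stands in for LNorm, and the optimiser and starting values are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

# Per-score totals over the calibration window (Table 4, summed over the
# three observation months); scores run from 1 (worst) to 3 (best).
scores = np.array([1.0, 2.0, 3.0])
n_all = np.array([210, 830, 610])      # performing accounts per score
d_def = np.array([36, 20, 10])         # cumulative defaults per score

N, D = n_all.sum(), d_def.sum()
p_bar_hat = D / N                      # average PD estimated from the data

# Empirical Lorenz curve points, accumulated from the worst score upwards
# (an assumption about how the point-wise curve is built).
x_emp = np.cumsum(n_all) / N
y_emp = np.cumsum(d_def) / D

def T(s):
    return s ** 2                      # quadratic transformation of the score

def lorenz_hat(x, mu_d, sig_d, mu_a, sig_a):
    """L_hat(x) = F_d(F_a^{-1}(x)) with lognormal CDFs of the transformed score."""
    t_score = lognorm.ppf(x, s=sig_a, scale=np.exp(mu_a))    # F_a^{-1}(x)
    return lognorm.cdf(t_score, s=sig_d, scale=np.exp(mu_d))

def ssd(params):
    # sigmas are optimised on the log scale to keep them positive
    mu_d, sig_d = params[0], np.exp(params[1])
    mu_a, sig_a = params[2], np.exp(params[3])
    return np.sum((lorenz_hat(x_emp, mu_d, sig_d, mu_a, sig_a) - y_emp) ** 2)

res = minimize(ssd, x0=[0.5, -0.5, 1.0, -0.5], method="Nelder-Mead")
mu_d, sig_d = res.x[0], np.exp(res.x[1])
mu_a, sig_a = res.x[2], np.exp(res.x[3])

def calibrated_pd(s):
    """Calibrated cumulative PD: F_d(T(s)) / F_a(T(s)) * average PD."""
    Fd = lognorm.cdf(T(s), s=sig_d, scale=np.exp(mu_d))
    Fa = lognorm.cdf(T(s), s=sig_a, scale=np.exp(mu_a))
    return Fd / Fa * p_bar_hat

print("SSD:", round(ssd(res.x), 6))
print({int(s): round(calibrated_pd(s), 4) for s in scores})
```

In a real calibration the full score range is used, so the fit is much better constrained than in this three-point toy example.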
A Gini can be calculated to determine how well the calibrated PDs differentiate between the defaulted accounts and all accounts. Note that the calibration has no impact on the Gini as it does not change the rank order. The Gini statistic (Siddiqi 2006) quantifies a model’s ability to discriminate between two possible values of a binary target variable (Tevet 2013). Cases are ranked according to the predictions, and the Gini then provides a measure of predictive power. It is one of the most popular retail credit scoring measures (Baesens et al. 2016) and has the added advantage that it is a single value (Tevet 2013). For detailed Gini calculations, the reader can consult various literature sources, e.g., SAS Institute (2017) and Breed et al. (2019).
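A common equivalent formulation is Gini = 2 × AUC − 1; the sketch below applies it to hypothetical scores and default flags (the paper's own Gini calculations follow SAS Institute 2017 and Breed et al. 2019).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical account-level data: credit scores (higher = lower risk) and
# simulated default flags; not taken from the paper.
rng = np.random.default_rng(0)
scores = rng.normal(850, 60, size=5000)
pd_true = 1 / (1 + np.exp((scores - 800) / 25))   # risk decreases with score
defaults = rng.binomial(1, pd_true)

# Gini = 2 * AUC - 1, ranking by risk (negate the score so higher = riskier).
gini = 2 * roc_auc_score(defaults, -scores) - 1
print(round(gini, 3))
```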
To create an account specific PiT PD term structure for a given performance window $T$ (e.g., 12 months for Stage 1), a scaling factor is calculated as:
$$SF_T(s) = \frac{\hat{p}_T^c(s)}{\sum_{t=1}^{T} \tilde{p}_t^m(R, M)} = \frac{\hat{p}_T^c(s)}{\tilde{p}_T^c(R, M)},$$
where $\hat{p}_T^c(s)$ is the cumulative Lorenz curve calibrated $T$ month PD for an account with credit score $s$ and $\tilde{p}_T^c(R, M)$ the cumulative $T$ month term structure PD. The resulting marginal PD for accounts with a credit score $s$ in the period $(t-1, t]$ is then defined as
$$\tilde{p}_t^m(s, T) = SF_T(s) \times \tilde{p}_t^m(R, M).$$
This approach ensures that the cumulative PD assigned to a customer corresponds to their calibrated PD. Two Lorenz curve calibrations are required, one for Stage 1 and one for Stage 2. For Stage 1, the ECL is calculated over a 12 month period and therefore $T = 12$.
To illustrate, assume Customer A and Customer B are in the same segment (i.e., the same term structure) but have different credit scores ($s_A > s_B$). Customer A has a scaling factor of 0.76 ($SF_T(s_A) = 0.76$) and Customer B a scaling factor of 1.22 ($SF_T(s_B) = 1.22$). Suppose the marginal PD in the period $(t-1, t]$ of the PD term structure for the segment to which Customers A and B belong is 2.3%. The resulting marginal PD is then 1.748% (2.3% × 0.76) for Customer A and 2.806% (2.3% × 1.22) for Customer B.
For Stage 2, a lifetime ECL is calculated, and the choice of the performance window $T$ will depend on the product type, the quality of the data and the expected lifetime. Typically, for Stage 2, the performance window $T$ is shorter than the expected lifetime, since the confidence in the Lorenz curve calibration decreases for longer performance windows. In that case, the resulting marginal lifetime PDs may be unrealistically high (or low) for horizons $t > T$ due to the wide range of calibrated PDs. Therefore, only the marginal PDs for accounts with a credit score $s$ in the period $(t-1, t]$ with $t \le T$ are adjusted, and the remainder stay unchanged:
$$\tilde{p}_t^m(s, T) = \begin{cases} SF_T(s) \times \tilde{p}_t^m(R, M), & t \le T \\ \tilde{p}_t^m(R, M), & t > T. \end{cases}$$
This approach ensures that the average predicted lifetime PD remains aligned to actual default behaviour in the respective segment.
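A sketch of the scaling and the Stage 2 cap follows, assuming a hypothetical flat base term structure; it reproduces the Customer A and Customer B marginal PDs from the worked example above.

```python
import numpy as np

def scaling_factor(calibrated_cum_pd, base_marginal, T):
    """SF_T(s) = Lorenz-calibrated cumulative T month PD divided by the
    cumulative T month term structure PD (base_marginal[t-1] is month t)."""
    return calibrated_cum_pd / np.sum(base_marginal[:T])

def account_term_structure(base_marginal, sf, T):
    """Apply the scaling factor to marginal PDs up to horizon T and leave
    later horizons unchanged (the Stage 2 cap)."""
    adj = np.array(base_marginal, dtype=float)
    adj[:T] = sf * adj[:T]
    return adj

# Worked example from the text: a segment marginal PD of 2.3% in (t-1, t],
# scaled by 0.76 (Customer A) and 1.22 (Customer B); the flat 36 month base
# term structure is a hypothetical placeholder.
base = np.full(36, 0.023)
print(round(account_term_structure(base, 0.76, 12)[5], 5))   # 0.01748 for Customer A
print(round(account_term_structure(base, 1.22, 12)[5], 5))   # 0.02806 for Customer B
```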

3.3. Methodology: Strengths and Weaknesses

The methodology's strengths may be summarised as follows: The model methodology utilises well-understood methods (e.g., Lorenz curve calibration, scorecards, term structure modelling) used in the banking industry. The inclusion of re-default events in the proposed IFRS 9 PD methodology will simplify the development of the accompanying IFRS 9 LGD model due to the reduced complexity for the modelling of cure cases (Breed et al. 2019). Attrition effects are naturally included in the PD term structures and no longer require a separate model. The PD term structure is based on months since observation and not months on book (see, e.g., Schutte et al. 2020); therefore, the arrears cycle could be investigated as a possible segmentation.
On the other hand, the weaknesses are: The number of segments chosen will influence the results, requiring a balance between granularity and the number of observations needed to ensure a good fit of the Lorenz curve calibration. At account level, the inclusion of re-default events might lead to unintuitive PDs (higher than 100%); however, this rarely happens at segment or portfolio level. The Lorenz curve calibration is dependent on the rank order ability of the credit scores used.
The proposed methodology is loosely based on a combination of the run-off triangle method (Braun 2004), chain ladder method (England and Verrall 2002) and the segmented term structure method, e.g., (Yang 2017) and (Schutte et al. 2020). Some of the differences include that our proposed methodology does not require a survival component in the term structure and that the term structure is based on months since observation (as opposed to months on book). The segmented term structure is then combined with the Lorenz curve calibration to derive more granular account level PD estimates for both 12 months (Stage 1) and lifetime (Stage 2).

4. Case Study

This section illustrates the proposed IFRS 9 PiT PD methodology described in Section 3 on a revolving retail portfolio from one of the major banks in South Africa. Although little information will be provided due to the data’s confidential nature, the purpose of the case study is to show how our proposed methodology can be implemented. For confidentiality purposes, results were altered (PD values multiplied by a random value). In Section 4.1, the empirical PiT PD term structure will be constructed and further segmented by relevant risk drivers. In Section 4.2, the Lorenz curve calibration is performed.

4.1. Empirical PiT PD Term Structures

The empirical PiT PD term structures will be constructed using a dataset that contains monthly account level default statuses from September 2005 to April 2020. The set of observation months is therefore $\{200509, \ldots, 202004\}$ with $K = 176$. The reference period $R$ is taken as 24 months. For the portfolio, the cumulative ($\tilde{p}_t^c(24, 202004)$) and marginal ($\tilde{p}_t^m(24, 202004)$) PD term structures as of April 2020 are displayed in Figure 2, panels (a) and (b) respectively.
The first test is a visual comparison of the segmented PD term structures vs the unsegmented PD term structure indicated in Figure 3. The four segmented PD term structures do not cross and show clear risk differentiation.
In the second test, Figure 4, we compare the ratio of cumulative segmented PDs over different horizons to the 12 month cumulative segmented PD, using $Ratio_T(24, M)$ for $T = 24, 36, 48$ months and $M \in \{201504, \ldots, 202004\}$. The graph confirms that if no segmentation is applied, this would result in a significant over- and understatement of lifetime PDs. Note that the unintuitive pattern (high spike) for Segment 3 (see Figure 4, panel (c)) in earlier periods is due to low marginal PDs for small horizons, as shown in Figure 3 (an almost flat cumulative PD).
Figure 5 compares the cumulative segmented PDs to cumulative unsegmented PDs over time for different horizons in the third test. This test confirms that the difference between the segmented PD term structures and the unsegmented one remains relatively consistent over different horizons $T = 12, 24, 36, 48$. Again, the PDs were analysed over time with $M \in \{201504, \ldots, 202004\}$.
The three tests were performed for each of the four proposed segments and confirmed that these term structures are materially different; therefore, we continue with these four segments.

4.2. Lorenz Curve Calibration

Only three months of credit scores were available to calibrate the Lorenz curve, with $M = \{201712, 201803, 201804\}$ and a 12 and 22 month performance window for Stage 1 and Stage 2, respectively. The choice of a 22 month performance window for the Stage 2 PD was based on the bank-specific data availability, the economic cycle and credit score availability.
Four different calibrations were performed, one for each base PD term structure segment, to derive account level 12 month calibrated PDs and another four calibrations for the lifetime calibrated PDs. The behavioural credit score was used for these Lorenz curve calibrations as described in Section 3.2.
Table 5 shows the selected transformation, SSD and Gini for the 12 month Lorenz curve calibrations for the four segments. The fitted Lorenz curves are presented in Figure 6, and the calibrated PDs per credit score are presented in Figure 7.
Note that a Gini ranges from 0 to 1, and the higher the Gini, the better the scorecard differentiates between defaulters and non-defaulters. The calibration function yields a monotonic PD curve, as shown in Figure 7. This clearly indicates that the resulting calibrated PDs are higher for lower credit scores and lower for higher credit scores. All segments used the quadratic transformation except for Segment 3, where the exponential transformation was used because the quadratic function did not yield an acceptable fit. The resulting SSD for Segment 3 was higher than for the other segments, and its Gini was lower. The calibration function showed slight non-monotonic behaviour for the worst score buckets, which carry hardly any observations, as Segment 3 only consists of customers with very consistent payment behaviour and thus low default risk.
Table 6 shows the selected transformation, SSD and Gini for the 22 month Lorenz curve calibrations for the four segments. The fitted Lorenz curves are presented in Figure 8, and the calibrated PDs per credit score are presented in Figure 9.
Again, all segments used the quadratic transformation except for Segment 3, where the exponential transformation was used. The resulting SSD for Segment 3 was higher than for the other segments, and its Gini was lower. The calibration function showed slight non-monotonic behaviour for the worst score buckets, which carry hardly any observations. All four lifetime PD calibration functions yielded monotonic PD curves, except for buckets with very few observations.

4.3. Account-Specific Term Structures

We now illustrate the account-specific term structures. We present the scaling factors at the 25th percentile of the credit score distribution (25% of accounts had a credit score of 783 or less) and at the 75th percentile (75% of accounts had a credit score of 895 or less). Table 7 provides the associated calibrated Lorenz cumulative PDs as well as the scaling factors for Segment 1.
In Figure 10, we present the adjusted term structures for Stage 1 and Stage 2 for accounts in Segment 1 with the credit scores given above. As mentioned before, a performance window of 22 months is used for Stage 2. Note that the resulting cumulative lifetime PDs may be unrealistically high or low, as seen in Figure 10b, if scaling factors are applied to marginal PDs for horizons $t > T = 22$. Therefore, scaling factors are only applied to derive marginal PDs for $t \le T = 22$ (see 25th percentile (Cap) and 75th percentile (Cap) in Figure 10b).
Figure 10 illustrated the adjusted term structures for Stage 1 and Stage 2 and the capping effect for Segment 1. Figure 11 compares the calibrated and uncalibrated PDs for all four segments for Stage 1. Note that the scale for the cumulative PD differs across the four graphs to enhance visibility. In panel (a), for Segment 1, the 25th percentile calibrated PD (dotted line) represents the calibrated PD for an account with a credit score of 783, and the 75th percentile calibrated PD (dashed line) represents the calibrated PD for an account with a credit score of 895. Similarly, Segments 2 to 4 are illustrated in panels (b)–(d); the respective 25th and 75th percentile credit scores are 830 and 917 for Segment 2, 615 and 620 for Segment 3, and 864 and 903 for Segment 4. This comparative analysis demonstrates the advantage of the methodology in providing more granular PD term structures per credit score.

5. Conclusions and Future Recommendations

A new methodology to derive an IFRS 9 PiT PD model was described. It consists of, first, deriving a PiT term structure with accompanying segmented term structures and, secondly, calibrating credit scores using the Lorenz curve approach to create account-specific PD term structures.
The PiT term structures are derived by using empirical information based on the most recent default information and accounts’ risk characteristics prior to default. The approach directly accounts for attrition effects and also includes re-default events. Next, different PiT PD term structures are developed to capture the structurally different default risk patterns for different pools of accounts using segmentation.
It is important to quantify what a materially different term structure constitutes, and three tests were proposed: Visual comparison of the segmented PD term structures vs the unsegmented PD term structure; Comparing the ratio of cumulative segmented PDs over different horizons to the 12 month cumulative segmented PD; and Comparison of the cumulative segmented PDs to cumulative unsegmented PDs over time for different horizons.
Next, account-specific PiT PDs were derived through the Lorenz curve calibration using the latest default experience and accounts’ credit scores. For each account, two PD term structures are created, one to be used for Stage 1 ECL calculations and the other to be used for Stage 2 ECL calculations. This is done for each segment.
The proposed methodology was illustrated on an actual data set, using a large South African bank's revolving retail portfolio. First, an unsegmented term structure was derived; next, four segments were identified to create four segmented term structures. All three proposed tests confirmed that the segmented term structures materially differ from the unsegmented term structure. Account level specific PD term structures were then created using the Lorenz curve calibration.
According to IFRS 9, the ECL should include forward-looking macro-economic scenarios (IFRS Foundation 2014). This linkage to forward-looking information will determine how the central bank rate (e.g., the repurchase rate in South Africa), gross domestic product (GDP) and other macro-economic variables will affect the PD. This has not been done in this paper and could be researched in future.
Other aspects for future research include the derivation of SICR (significant increases in credit risk) rules; the treatment of unscored accounts; and the monitoring of the model after implementation. Specifically, the monitoring of lifetime PD models should be investigated further.
IFRS 9 methodologies are still relatively young and no clear market standard has yet emerged. A comparative analysis of IFRS 9 methodologies could be a future research study to be done once such a market standard has been observed.
Generally, there are two risk philosophies: TTC versus PiT philosophy (Taylor 2003). PiT and TTC approach the issue of cyclicality quite differently, but there is value in both. According to Baesens et al. (2016) there is quite a controversy regarding procyclical risk measures. A PiT philosophy, as used in IFRS 9, is intended to provide an accurate view of future losses, so if the economy fluctuates, the ECL should change and will impact financial statements. In practice, banks would often aim to smooth this effect somewhat. This effect of procyclicality can be researched in future.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This work is based on research supported in part by the Department of Science and Innovation (DSI) of South Africa. The grant holder acknowledges that opinions, findings and conclusions or recommendations expressed in any publication generated by DSI-supported research are those of the authors and that the DSI accepts no liability whatsoever in this regard.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aalen, Odd O., and Søren Johansen. 1978. An Empirical Transition Matrix for Non-Homogeneous Markov Chains Based on Censored Observations. Scandinavian Journal of Statistics 5: 141–56. [Google Scholar]
  2. Anderson, Raymond. 2007. The Credit Scoring Toolkit: Theory and Practice for Retail Credit Risk Management and Decision Automation. Oxford: Oxford University Press. [Google Scholar]
  3. Aptivaa. 2016. Building Blocks of Impairment Modeling (Issue 02). Available online: http://www.aptivaa.com/blog/wp-content/uploads/2016/04/Blog_02-Building-Blocks-of-Impairment-Modeling.pdf (accessed on 5 June 2021).
  4. Baesens, Bart, Daniel Rösch, and Harald Scheule. 2016. Credit Risk Analytics. Hoboken: SAS Institute, Wiley. [Google Scholar]
  5. Beerbaum, Dirk. 2015. Significant Increase in Credit Risk According to IFRS 9: Implications for Financial Institutions. International Journal of Economics and Management Sciences 4: 1–3. [Google Scholar] [CrossRef]
  6. Braun, Christian. 2004. The prediction error of the chain ladder method applied to correlated run-off triangles. Astin Bulletin 34: 399–423. [Google Scholar] [CrossRef] [Green Version]
  7. Breed, Douw G., Tanja Verster, Willem D. Schutte, and Naeem Siddiqi. 2019. Developing an impairment loss given default model using weighted logistic regression: Illustrated on a secured retail bank portfolio. Risks 7: 123. [Google Scholar] [CrossRef] [Green Version]
  8. Chimezda, Charles, and Mercy Marimo. 2017. Survival analysis of bank loans in the presence of long-term survivors. South African Statistical Journal 51: 199–216. [Google Scholar]
  9. Crook, Jonathan, and Tony Bellotti. 2010. Time varying and dynamic models for default risk in consumer loans. Journal of the Royal Statistical Society: Series A (Statistics in Society) 173: 283–305. [Google Scholar] [CrossRef] [Green Version]
  10. Crook, Jonathan, and Tony Bellotti. 2013. Forecasting and stress testing credit card default using dynamic models. International Journal of Forecasting 29: 563–74. [Google Scholar]
  11. Cziraky, Dario, and Ulrich Zink. 2017. Multi-State Markov Modelling of IFRS9 Default Probability. Oracle. Available online: https://www.oracle.com/us/industries/financial-services/multi-state-markov-model-wp-3432818.pdf (accessed on 19 November 2020).
  12. Dib, Darine, and Khalil Feghali. 2021. Preliminary Impact of IFRS 9 Implementation on The Lebanese Banking Sector. Journal of Accounting and Management Information Systems 20: 369–401. [Google Scholar] [CrossRef]
  13. England, Peter D., and Richard J. Verrall. 2002. Stochastic claims reserving in general insurance. British Actuarial Journal 8: 443–18. [Google Scholar] [CrossRef]
  14. Glen, Stephanie. 2015. Autoregressive Model: Definition & the AR Process. Available online: https://www.statisticshowto.datasciencecentral.com/autoregressive-model/ (accessed on 27 August 2019).
  15. Global Public Policy Committee (GPPC). 2016. The Implementation of IFRS 9 Impairment Requirements by Banks: Considerations for Those Charged with Governance of Systemically Important Banks. Global Public Policy Committee. Available online: http://www.ey.com/Publication/vwLUAssets/Implementation_of_IFRS_9_impairment_requirements_by_systemically_important_banks/$File/BCM-FIImpair-GPPC-June2016%20int.pdf (accessed on 10 April 2020).
  16. Glößner, Peter. 2003. Calculating Basel II Risk Parameters for a Portfolio of Retail Loans. Master’s Thesis, Kellogg College, Oxford University, Oxford, UK. [Google Scholar]
  17. Gubareva, Mariya. 2021. How to estimate expected credit losses–ECL–for provisioning under IFRS 9. Journal of Risk Finance 22: 169–90. [Google Scholar] [CrossRef]
  18. IFRS Foundation. 2014. IFRS 9 Financial Instruments: Project Summary. Available online: http://www.ifrs.org/Current-Projects/IASB-Projects/Financial-Instruments-A-Replacement-of-IAS-39-Financial-Instruments-Recognitio/Documents/IFRS-9-Project-Summary-July-2014.pdf (accessed on 31 January 2016).
  19. McPhail, Joseph, and Lihong McPhail. 2014. Forecasting lifetime credit losses: Modelling considerations for complying with the new FASB and IASB current expected loss models. Journal of Risk Management in Financial Institutions 7: 375–88. Available online: http://bit.ly/2qzrWAF (accessed on 8 September 2020).
  20. Medema, Lydian, Rudd H. Koning, and Robert Lensink. 2009. A practical approach to validating a PD model. Journal of Banking & Finance 33: 701–8. [Google Scholar]
  21. SAS Institute. 2017. Development of Credit Scoring Applications Using SAS Enterprise Miner (SAS Course Notes: LWCSEM42). Cary: SAS Institute, ISBN 978-1-63526-092-2. [Google Scholar]
  22. Schutte, Willem D., Tanja Verster, Derek Doody, Helgard Raubenheimer, and Peet J. Coetzee. 2020. A proposed benchmark model using a modularised approach to calculate IFRS9 expected credit loss. Cogent Economics & Finance 8: 1735681. [Google Scholar] [CrossRef]
  23. Siddiqi, Naeem. 2006. Credit Risk Scorecards: Developing and Implementing Intelligent Credit Scoring. Hoboken: John Wiley & Sons. [Google Scholar]
  24. Siddiqi, Naeem. 2017. Intelligent Credit Scoring: Building and Implementing Better Credit Risk Scorecards. Hoboken: John Wiley & Sons. [Google Scholar]
  25. Taylor, Jeremy. 2003. Risk-grading philosophy: Through the cycle versus point in time. The Risk Management Association Journal, 32–39. Available online: https://cms.rmau.org/uploadedFiles/Credit_Risk/Library/RMA_Journal/Risk_Ratings/Risk-Grading%20Philosophy_%20Through%20the%20Cycle%20versus%20Point%20in%20Time.pdf (accessed on 5 September 2020).
  26. Tevet, Dan. 2013. Exploring model lift: Is your model worth implementing? Actuarial Review 40: 10–13. [Google Scholar]
  27. Thomas, Lyn C. 2009. Consumer Credit Models: Pricing, Profit and Portfolios. Oxford: Oxford University Press. [Google Scholar]
  28. Țurlea, Ioan-Codruț. 2021. Development of Rating Models under IFRS 9. CECCAR Business Review 7: 64–72. [Google Scholar] [CrossRef]
  29. Van Gestel, Tony, and Bart Baesens. 2009. Credit Risk Management: Basic Concepts. Oxford: Oxford University Press. [Google Scholar]
  30. Wielenga, Doug, Bob Lucas, and Jim Georges. 1999. Enterprise Miner: Applying Data Mining. Cary: SAS Institute Inc. [Google Scholar]
  31. Yang, Bill H. 2017. Point-in-Time PD Term Structure Models for Multi-Period Scenario Loss Projection: Methodologies and Implementations for IFRS 9 ECL and CCAR Stress Testing. Paper No. 76271. Toronto: Munich Personal RePEc Archive (MPRA). [Google Scholar]
Figure 1. Lorenz curve example.
Figure 2. Cumulative and Marginal PD term structure.
Figure 3. Segmented cumulative PD term structures.
Figure 4. The ratios of cumulative segmented PDs over different horizons.
Figure 5. Cumulative segmented PDs to cumulative unsegmented PDs over different horizons.
Figure 6. 12 month Lorenz curves.
Figure 7. 12 month calibrated PD by score.
Figure 8. 22 month Lorenz curves.
Figure 9. 22 month calibrated PD by score.
Figure 10. Adjusted cumulative PD term structure for Segment 1.
Figure 11. Comparison of calibrated and uncalibrated PD for Stage 1.
Table 1. Example of the defaults table.

Observation Month (M_k)   Performing Accounts (n_{k,0})   Defaults d_{k,t} in month t = 1, ..., 7 after observation
201501                    500                             10, 5, 4, 8, 6, 3, 3
201502                    550                             11, 5, 6, 3, 7, 5
201503                    600                             13, 5, 7, 4, 6
201504                    650                             14, 6, 6, 5
201505                    700                             15, 5, 7
201506                    750                             14, 7
201507                    800                             16
Table 2. Example of the PD term structure.

Horizon   Sum of Performing Accounts   Sum of Defaults   Marginal PD
1         2250                         45                2.000%
2         2100                         18                0.857%
3         1950                         20                1.026%
4         1800                         12                0.667%
5         1650                         19                1.152%
Table 3. Lorenz curve data extract from defaults table.

Observation Month (M_k)   Performing Accounts (n_{k,0})   Defaults d_{k,t} in month t = 1, 2, 3 after observation
201501                    500                             10, 5, 4
201502                    550                             11, 5, 6
201503                    600                             13, 5, 7
Table 4. Data construction for Lorenz curve.

Observation Month (M_k)   Credit Score (s_j)   Performing Accounts (n_{k,0}(s_j))   Number of Defaults (d_{k,3}^c(s_j))
201501                    1                    50                                   10
201501                    2                    250                                  5
201501                    3                    200                                  4
201502                    1                    70                                   12
201502                    2                    280                                  7
201502                    3                    200                                  3
201503                    1                    90                                   14
201503                    2                    300                                  8
201503                    3                    210                                  3
Table 5. 12 month Lorenz curve calibration statistics.

Segment     Transformation   SSD     Gini
Segment 1   Quadratic        0.57%   56.60%
Segment 2   Quadratic        0.46%   61.30%
Segment 3   Exponential      2.44%   46.50%
Segment 4   Quadratic        0.83%   55.90%
Table 6. 22 month Lorenz curve calibration statistics.

Segment     Transformation   SSD     Gini
Segment 1   Quadratic        0.35%   54.50%
Segment 2   Quadratic        0.47%   58.30%
Segment 3   Exponential      2.59%   39.40%
Segment 4   Quadratic        0.62%   51.50%
Table 7. Illustration of scaling factors for Segment 1.

Performance Window (T)   Credit Score (s)        Lorenz Calibrated PD ($\hat{p}_t^c(s)$)   Scaling Factor ($SF_T(s)$)
12 month                 25th Percentile (783)   14.07%                                    1.294
12 month                 75th Percentile (895)   1.89%                                     0.173
22 month                 25th Percentile (783)   27.03%                                    2.486
22 month                 75th Percentile (895)   6.19%                                     0.569
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
