
Custom v. Standardized Risk Models

by Zura Kakushadze 1,2,* and Jim Kyung-Soo Liew 3
1 Quantigic Solutions LLC, 1127 High Ridge Road #135, Stamford, CT 06905, USA
2 Business School & School of Physics, Free University of Tbilisi, 240, David Agmashenebeli Alley, Tbilisi 0159, Georgia
3 The Johns Hopkins Carey Business School, 100 International Drive, Baltimore, MD 21202, USA
* Author to whom correspondence should be addressed.
Risks 2015, 3(2), 112-138; https://doi.org/10.3390/risks3020112
Submission received: 19 February 2015 / Accepted: 15 May 2015 / Published: 20 May 2015
(This article belongs to the Special Issue Financial Engineering to Address Complexity)

Abstract:
We discuss when and why custom multi-factor risk models are warranted and give source code for computing some risk factors. Pension/mutual funds do not require customization but standardization. However, using standardized risk models in quant trading with much shorter holding horizons is suboptimal: (1) longer horizon risk factors (value, growth, etc.) increase noise trades and trading costs; (2) arbitrary risk factors can neutralize alpha; (3) “standardized” industries are artificial and insufficiently granular; (4) normalization of style risk factors is lost for the trading universe; (5) diversifying risk models lowers P&L correlations, reduces turnover and market impact, and increases capacity. We discuss various aspects of custom risk model building.

1. Introduction

In most incarnations, multi-factor risk models for stocks (RM) are based on style and industry risk factors. Industry factors are based on a similarity criterion: stocks’ membership in industries under a given classification (e.g., GICS, ICB, BICS, etc.). Style factors are based on some estimated (or measured) properties of stocks. Examples of style factors are size [1], value and growth [2,3,4,5,6,7,8], momentum [9,10], liquidity [11,12,13,14], volatility [15], etc.1
Most commercial RM are standardized (SRM). The majority of their users are institutions with longer-term holdings (mutual funds, pension funds, etc.). To these users it is important that RM: (i) contain fundamental longer horizon style risk factors (value, growth, etc.); and (ii) be standardized, so that their risk reporting and cross-institutional communications can be uniform. If a mutual fund risk report says “Portfolio A has exposure B to the risk factor C under the risk model D”, its pension-fund client knows what that means. Typically, such portfolios are broad (low turnover allows lower cap/liquidity holdings) and fairly close to SRM coverage, so universe customization is not critical. Standardization is a much higher priority.
Quant trading with shorter-term holding portfolios has the opposite priorities. Such strategies trade smaller universes of more liquid stocks (roughly, around 1000–2500 names), and these universes differ from strategy to strategy. Standardization is not critical, as risk reporting is typically in-house. Customization is much more important, for a variety of reasons, which we discuss in more detail below. Here is a summary.
(1) Short v. Long Horizons. Strategies with shorter holding periods (e.g., statistical arbitrage and high frequency trading) do not benefit from longer horizon risk factors (value, growth, etc.)—shorter-term returns are not highly correlated with quantities such as book value, which updates quarterly and lacks predictive power for holding horizons measured in days or intraday. Moreover, when such risk factors are included in regression or optimization of shorter-term returns, they add noise to the holdings and trades, thereby increasing trading costs and reducing profitability.
(2) Inadvertent Alpha Neutralization. If, by design, some alphas take certain risk exposures, optimization using SRM typically will (partially) neutralize such alphas by producing a portfolio with more balanced risk exposure. E.g., if alphas are intentionally skewed toward small cap value, optimization using SRM, which typically contains a “size” risk factor, will muddle such alphas by producing a portfolio with more balanced larger cap exposure. In such cases it is important to use custom RM (CRM) with the corresponding undesirable risk factors carefully omitted.
(3) Insufficient Industry Granularity. The SRM coverage universe $U_{SRM}$ is “squeezed” into a modest number of fixed standardized “industries” (SRMI). Typically, this reduces industry granularity, which adversely affects hedging industry risk. Thus, under a given industry classification (e.g., GICS, ICB, BICS, etc.), the true number of industries2 into which a given trading universe U falls can be sizably higher than the number of SRMI. Also, different trading universes $U_1$ and $U_2$ fall into different (numbers of) true industries, while in SRM they are classified into the same SRMI.
(4) Trading v. Coverage Universe. Restricting $U_{SRM}$ to a substantially smaller trading universe U yields side effects: (i) for certain U one can have empty SRMI (e.g., U contains no telecom stocks), with no option to omit them from the factor covariance matrix (FCM) computation;3 and (ii) it spoils normalization of style factors—those normalized across $U_{SRM}$, e.g., conformed to a (log-)normal distribution.
(5) Herd Effect. Overusing SRM makes alphas more correlated, so when a shop blows up, it drags others with it (e.g., Aug’07 Quant Meltdown). This adversely affects shorter holding trading. Longer horizon strategies simply weather the storm.
(6) Diversifying Risk Models Lowers P&L Correlations. When evaluating two RM against each other (in regression or optimization of the same returns), one looks at both (A) the relative performance and (B) the P&L correlation. If the P&L correlation is low enough, it is more optimal to run a combined strategy using both RM, thereby reducing portfolio turnover and market impact, and increasing capacity and P&L.
(7) Pros and Cons of Custom Risk Models. CRM further provide: (i) ability to compute RM based on custom universes and add/subtract risk factors; (ii) transparency; (iii) flexibility with update frequency (daily, weekly, etc.) and integration into test/production environments. However, the portfolio manager (PM) must compute CRM as opposed to receiving a file from an SRM provider.4 Data required to build CRM for shorter horizon trading is typically available to PM, albeit self-consistently computing FCM and idiosyncratic risk (ISR) is not common knowledge.
We discuss the above points (1)–(7) in more detail below. Section 2 discusses RM in general, differences in their use in regression and optimization, and point (2). Section 3 discusses decoupling of time horizons relevant to point (1). Section 4 discusses points (3) and (4). We conclude in Section 5 by discussing points (5)–(7). Appendix A contains R code for some style risk factors. Appendix B contains C code for symmetric matrix inversion. Appendix C contains legal disclaimers.

2. Multi-factor Risk Models

In RM, a sample covariance matrix (SCM) $C_{ij}$ for N stocks, $i,j = 1,\dots,N$ (computed based on time series of stock returns) is modeled by $\Gamma_{ij}$ given by
$$\Gamma \equiv \Xi + \Omega\,\Phi\,\Omega^T \tag{1}$$
$$\Xi_{ij} \equiv \xi_i^2\,\delta_{ij} \tag{2}$$
where $\delta_{ij}$ is the Kronecker delta; $\Gamma_{ij}$ is an $N \times N$ matrix; $\xi_i$ is the specific, a.k.a. idiosyncratic, risk (ISR) for each stock; $\Omega_{iA}$ is an $N \times K$ factor loadings matrix (FLM); and $\Phi_{AB}$ is a $K \times K$ factor covariance matrix (FCM), $A,B = 1,\dots,K$. I.e., the random processes $\Upsilon_i$ corresponding to N stock returns are modeled via N random processes $\chi_i$ (ISR) together with K random processes $f_A$ (factor risk):
$$\Upsilon_i = \chi_i + \sum_{A=1}^K \Omega_{iA}\,f_A \tag{3}$$
$$\langle \chi_i, \chi_j \rangle = \Xi_{ij} \tag{4}$$
$$\langle \chi_i, f_A \rangle = 0 \tag{5}$$
$$\langle f_A, f_B \rangle = \Phi_{AB} \tag{6}$$
$$\langle \Upsilon_i, \Upsilon_j \rangle = \Gamma_{ij} \tag{7}$$
The main reason for replacing the sample covariance matrix $C_{ij}$ by $\Gamma_{ij}$ is that the off-diagonal elements of $C_{ij}$ typically are not expected to be too stable out-of-sample. A constructed factor model covariance matrix $\Gamma_{ij}$ is expected to be much more stable, as the number of risk factors, for which FCM $\Phi_{AB}$ needs to be computed, is $K \ll N$. Also, if $M < N$, where $M + 1$ is the number of observations in each time series, then $C_{ij}$ is singular with (at most) M nonzero eigenvalues. Assuming all $\xi_i > 0$ and $\Phi_{AB}$ is positive-definite, $\Gamma_{ij}$ is automatically positive-definite (and invertible).

2.1. Industry Risk Factors

RM can mix risk factors of different types. Typically, the most numerous factors are industry factors. In its simplest incarnation, the industry FLM $\Omega^{\rm ind}_{iA}$, $A = 1,\dots,K_{\rm ind}$ is a binary matrix of 1s and 0s indicating whether a stock belongs to a given industry:
$$\Omega^{\rm ind}_{iA} = \delta_{G(i),A} \tag{8}$$
$$G: \{1,\dots,N\} \mapsto \{1,\dots,K_{\rm ind}\} \tag{9}$$
$$\sum_{i=1}^N \Omega^{\rm ind}_{iA} = N_A \tag{10}$$
$$\sum_{A=1}^{K_{\rm ind}} \Omega^{\rm ind}_{iA} = 1 \tag{11}$$
where G is the map between stocks and industries, and $N_A > 0$ is the number of stocks in the industry labeled by A (we are assuming there are no empty industries).
More generally, we can allow each stock to belong to multiple industries (e.g., in the case of conglomerates) with some weights $\omega_{iA}$:
$$\Omega^{\rm ind}_{iA} = \sum_{B\in G(i)} \omega_{iB}\,\delta_{AB} \tag{12}$$
$$\sum_{B\in G(i)} \omega_{iB} = 1 \tag{13}$$
where $G(i) \subset \{1,\dots,K_{\rm ind}\}$ and $|G(i)| \equiv n_i$ need not be 1, albeit typically for most stocks $n_i = 1$, and (for conglomerates) $n_i > 1$ is not a large number.
Binary $\Omega^{\rm ind}_{iA}$ can be constructed from industry classifications such as GICS, ICB, BICS, etc. Naming conventions for levels differ by classification, e.g., sector, sub-sector, industry, sub-industry, etc. “Industry” here means the most detailed level in the classification tree. If there are too many industries (e.g., with small $N_A$), one can “prune” the tree by merging (small) industries at higher levels, e.g., in the tree “sector → sub-sector → industry”, industries are pruned at the sub-sector level.

2.2. Style Risk Factors

Industry factors are based on a similarity criterion: industry membership. Style risk factors are based on some estimated (or measured) properties of stocks. Examples of style factors are size, liquidity, volatility, momentum, growth, value, etc.
For illustrative purposes, let us discuss some style factors in more detail. In Appendix A we give R code for size, liquidity, intraday volatility and momentum style factors.5
• Size is a logarithm of market cap, normalized (conformed to normal distribution)—see R code in Appendix A. Note that ADRs are treated separately.
• Liquidity is a logarithm of the average daily dollar volume (ADDV),6 normalized similarly to size with ADRs treated separately—see Appendix A.
• Volatility style factor can be based on historical (relevant for longer-term models) or intraday (relevant for shorter-term models—see the next section) volatility. One way to define intraday volatility is to use intraday high and low price—see Appendix A. The log of intraday volatility is normalized similarly to size.
• Momentum can be defined as a normalized average over D trading days of d-trading-day moving-average returns (e.g., d = 5 , D = 252 )7—see Appendix A.
• Value can be defined via a normalized book-to-price ratio (negative values need to be dealt with). Growth requires earnings data. As discussed below, longer horizon style factors such as value and growth do not add value in shorter horizon strategies as book value and earnings data essentially are updated quarterly.

2.3. Factor Covariance Matrix and Specific Risk

Once FLM $\Omega_{iA}$ is defined,8 FCM $\Phi_{AB}$ and ISR $\xi_i$ must be constructed. Straightforwardly computing FCM as a sample9 covariance matrix of the risk factors $f_A$ is insufficient, as ISR $\xi_i$ computed using such sample FCM typically are ill-defined. Algorithms for consistently computing FCM and ISR usually are deemed proprietary.

2.4. Use of Factor Models

RM have a variety of uses, which are not limited to passively measuring risk exposures, but also include actively hedging such risks, e.g., by requiring that a portfolio be neutral w.r.t. various risk factors and/or such risk exposures be optimized.10 Here we outline the usage of RM in regression and optimization.

2.4.1. Regression

Let $R_i$, $i = 1,\dots,N$ be the stock expected returns. A weighted regression (without intercept) of R over FLM Ω with weights $z_i$ is given by (in matrix notation):11
$$\varepsilon \equiv R - \Omega\,Q^{-1}\,\Omega^T\,Z\,R \tag{14}$$
$$Z \equiv \mathrm{diag}(z_i) \tag{15}$$
$$Q \equiv \Omega^T\,Z\,\Omega \tag{16}$$
$$\widetilde{R} \equiv Z\,\varepsilon \tag{17}$$
Here $\varepsilon_i$ are the residuals of the weighted regression. Also, note that the “regressed” returns $\widetilde{R}_i$ are neutral w.r.t. the K risk factors corresponding to $\Omega_{iA}$:
$$\sum_{i=1}^N \widetilde{R}_i\,\Omega_{iA} = 0,\quad A = 1,\dots,K \tag{18}$$
If $\Omega_{iA}$ includes the intercept (a unit vector) as one of its columns, then we have
$$\sum_{i=1}^N \widetilde{R}_i = 0 \tag{19}$$
i.e., in this case the regressed returns are demeaned. The weights $z_i$ can be chosen to be unit weights, or, e.g., $z_i \equiv 1/\sigma_i^2$, where $\sigma_i$ is some historical volatility of $R_i$.
Using $\widetilde{R}_i$, a simple mean-reversion strategy can be constructed via
$$D_i = -\gamma\,\widetilde{R}_i \tag{20}$$
where $D_i$ are the desired dollar holdings and $\gamma > 0$ is fixed via
$$\sum_{i=1}^N |D_i| = I \tag{21}$$
where I is the total desired investment level. The portfolio (20) is dollar neutral if we have Equation (19). For this weighted regression all we need is FLM $\Omega_{iA}$—FCM $\Phi_{AB}$ is not needed,12 nor is ISR $\xi_i$, albeit if the latter are known, they can be used (instead of $\sigma_i$) in the regression weights via $z_i \equiv 1/\xi_i^2$ (see [50] for details).

2.4.2. Optimization

In optimization, rather than requiring strict neutrality w.r.t. the risk factors, one can require that risk exposure be optimized, albeit one can do both (see below). In its simplest incarnation, optimization requires that the Sharpe ratio of the portfolio (using the notations of the previous subsection)
$$S = \frac{\sum_{i=1}^N D_i\,R_i}{\sqrt{\sum_{i,j=1}^N C_{ij}\,D_i\,D_j}} \to \max \tag{22}$$
Assuming no costs, constraints or bounds, the Sharpe ratio is maximized by
$$D_i = \zeta \sum_{j=1}^N C^{-1}_{ij}\,R_j \tag{23}$$
where ζ is fixed via (21). In the RM context, one replaces $C_{ij}$ by $\Gamma_{ij}$, which gives:13
$$D_i = \frac{\zeta}{\xi_i^2}\left[R_i - \sum_{j=1}^N \frac{R_j}{\xi_j^2} \sum_{A,B=1}^K \Omega_{iA}\,\Omega_{jB}\,\widetilde{Q}^{-1}_{AB}\right] \tag{24}$$
$$\widetilde{Q}_{AB} \equiv \Phi^{-1}_{AB} + \sum_{i=1}^N \frac{1}{\xi_i^2}\,\Omega_{iA}\,\Omega_{iB} \tag{25}$$
where $\Phi^{-1}_{AB}$ is the inverse14 of $\Phi_{AB}$, and $\widetilde{Q}^{-1}_{AB}$ is the inverse of $\widetilde{Q}_{AB}$. The desired dollar holdings $D_i$ are not neutral w.r.t. the risk factors, nor are they dollar neutral. Dollar and/or various risk factor neutrality can be achieved via optimization with homogeneous linear constraints (see [50]). Note that to compute the desired dollar holdings (24), we need not only FLM $\Omega_{iA}$, but also FCM $\Phi_{AB}$ and ISR $\xi_i$. This is a key difference between using RM in optimization vs. regression.

2.4.3. “Risk-taking” Alphas

While in optimization the resulting desired dollar holdings $D_i$ are not neutral w.r.t. the risk factors, they are “approximately” neutral in the sense that the deviation from neutrality is due to ISR. Indeed, from Equation (24) we have
$$\sum_{i=1}^N D_i\,\Omega_{iA} = \zeta \sum_{i=1}^N \frac{R_i}{\xi_i^2} \sum_{B=1}^K \Omega_{iB}\,\Delta^{-1}_{BA} \tag{26}$$
$$\Delta_{AB} \equiv \delta_{AB} + \sum_{i=1}^N \sum_{C=1}^K \frac{1}{\xi_i^2}\,\Phi_{AC}\,\Omega_{iC}\,\Omega_{iB} \tag{27}$$
where $\Delta^{-1}$ is the inverse of $\Delta = \Phi\,\widetilde{Q}$. Let $\Phi_{AB} \equiv \kappa\,\widehat{\Phi}_{AB}$, and let us take the limit $\kappa \to \infty$ with $\widehat{\Phi}_{AB}$ fixed. In this limit the factor risk dominates and ISR is negligible, so optimization reduces to the weighted regression of the previous sub-subsection (see [50] for details), and we have
$$\sum_{i=1}^N D_i\,\Omega_{iA} \to 0 \tag{28}$$
So, regression yields risk neutrality, while optimization produces approximate risk neutrality. Either way, if the returns R i have exposure to a risk factor, it is either eliminated (regression) or substantially reduced (optimization). This has important implications for using SRM in certain types of trading.
Thus, imagine that by design the returns $R_i$ have desirable risk exposure, e.g., our strategy could be deliberately skewed toward small cap value stocks, or have exposure to momentum, volatility, etc.15 If such a risk factor is included in FLM $\Omega_{iA}$, then using the full FLM in regression and/or optimization would be to the detriment of our alpha. In the regression we can simply omit the corresponding risk factor(s). However, in the optimization merely omitting risk factors will not do—we also need to recompute FCM and ISR anew based on the remaining risk factors, otherwise we will get wrong predictions for the total risk:
$$\Gamma'_{ii} \equiv \xi_i^2 + \sum_{A,B\in H} \Omega_{iA}\,\Omega_{iB}\,\Phi_{AB} \neq \Gamma_{ii} \tag{29}$$
where $H \subset \{1,\dots,K\}$ is the subset corresponding to the remaining risk factors. In this case SRM simply will not do and CRM is required.

3. Decoupling of Time Horizons (Frequencies)

There is an important fundamental concept, which can be stated as decoupling of time horizons (or, equivalently, frequencies or wavelengths). In a nutshell, what happens at time horizon $T_1$ (or frequency $f_1 = 1/T_1$) is not affected by what happens at time horizon $T_2$ (or frequency $f_2 = 1/T_2$) if $T_1$ and $T_2$ are vastly different. By time horizon we mean the relevant time scales. E.g., the time horizon for a daily close-to-close return is 1 day. In terms of returns, the decoupling can be restated as the returns for long-term horizons $T_1$ being essentially uncorrelated with the returns for short-term horizons $T_2$.

3.1. Short v. Long Time Horizons

Here is a simple argument for a single stock (or security). Consider a time interval from time $t_0$ to time $t_M > t_0$. Let us divide it into M intervals $(t_0, t_1),\dots,(t_{M-1}, t_M)$. For simplicity, we can assume that these intervals are uniform, $t_s = t_0 + s\,\Delta t$, $s = 0,\dots,M$, albeit this is not critical here. Let the stock prices at times $t = t_s$ be $P(t_s)$. Let us define the return from time $t'$ to time $t$ as
$$R(t', t) \equiv \ln\frac{P(t)}{P(t')} \tag{30}$$
Then we have
$$\widetilde{R} \equiv R(t_0, t_M) = \sum_{s=1}^M R_s \tag{31}$$
$$R_s \equiv R(t_{s-1}, t_s) \tag{32}$$
Now let us ask the following question. How correlated is the return $R_M$ for the most recent period (i.e., $t_{M-1}$ to $t_M$) with the return $\widetilde{R}$ for the entire period (i.e., $t_0$ to $t_M$)? To define a “correlation”, we need multiple observations. So, we hang yet another index onto our returns, call it α, where $\alpha = 1,\dots,p$ labels different periods $t_0^\alpha$ to $t_M^\alpha$, each consisting of M periods ($t_{s-1}^\alpha$ to $t_s^\alpha$, $s = 1,\dots,M$), and we wish to compute the correlation between the return $R_M^\alpha$ for the last such period and the return $\widetilde{R}^\alpha$ for the entire such period, where α (not s) labels the series (which is a time series) over which the correlation is computed. For simplicity, we can assume that the periods labeled by α are “tightly packed”, i.e., $t_M^{\alpha-1} = t_{M-1}^\alpha$, albeit this is not crucial here. We then have $p + M$ time points $\tau_r$, $r = 0,1,\dots,p+M-1$ and, consequently, $p + M - 1$ returns $\widehat{R}_r$, $r = 1,\dots,p+M-1$, where
$$\tau_r \equiv t_0 + r\,\Delta t,\quad r = 0,1,\dots,p+M-1 \tag{33}$$
$$\widehat{R}_r \equiv \ln\frac{P(\tau_r)}{P(\tau_{r-1})},\quad r = 1,\dots,p+M-1 \tag{34}$$
$$R_s^\alpha = \widehat{R}_{s+\alpha-1},\quad s = 1,\dots,M,\quad \alpha = 1,\dots,p \tag{35}$$
$$\widetilde{R}^\alpha = \sum_{r=\alpha}^{M+\alpha-1} \widehat{R}_r \tag{36}$$
Note that p can be much larger than M, in fact, we will assume this to be the case.16
With the covariance $\langle *, * \rangle$ defined as above (i.e., computed over the series labeled by α), we have
$$\sigma_s^2 \equiv \langle R_s, R_s \rangle = \frac{1}{p}\sum_{\alpha=1}^p \widehat{R}_{s+\alpha-1}^2 - \frac{1}{p^2}\left(\sum_{\alpha=1}^p \widehat{R}_{s+\alpha-1}\right)^2 \approx \sigma^2 \tag{37}$$
where
$$\sigma^2 \equiv \frac{1}{p}\sum_{r=1}^p \widehat{R}_r^2 - \frac{1}{p^2}\left(\sum_{r=1}^p \widehat{R}_r\right)^2 \tag{38}$$
and we have used the fact that $p \gg M$, which implies that all M variances $\sigma_s^2$ are approximately the same. We then have
$$\langle R_s, R_{s'} \rangle = \sigma_s\,\sigma_{s'}\,\Psi_{ss'} \approx \sigma^2\,\Psi_{ss'} \tag{39}$$
$$\langle \widetilde{R}, \widetilde{R} \rangle = \sum_{s,s'=1}^M \langle R_s, R_{s'} \rangle \approx \sigma^2 \sum_{s,s'=1}^M \Psi_{ss'} \tag{40}$$
$$\langle R_s, \widetilde{R} \rangle = \sum_{s'=1}^M \langle R_s, R_{s'} \rangle \approx \sigma^2 \sum_{s'=1}^M \Psi_{ss'} \tag{41}$$
where $\Psi_{ss'}$ is the $M \times M$ correlation matrix of the returns $R_s$, and $\Psi_{ss} = 1$. We have
$$\rho_s \equiv \mathrm{Cor}(R_s, \widetilde{R}) \approx \frac{\sum_{s'=1}^M \Psi_{ss'}}{\sqrt{\sum_{s',s''=1}^M \Psi_{s's''}}} = \frac{1 + \psi_s}{\sqrt{M + \sum_{s'=1}^M \psi_{s'}}} \tag{42}$$
where
$$\psi_s \equiv \sum_{s'\in J_s} \Psi_{ss'} \tag{43}$$
and $J_s \equiv \{1,\dots,M\} \setminus \{s\}$.
Because we have $p \gg M$, the matrix $\Psi_{ss'}$ is approximately “self-similar” in the sense that all $m \times m$ sub-matrices of $\Psi_{ss'}$ with $s, s' \in \{k, k+1,\dots,k+m-1\}$, $k = 1,\dots,M-m+1$ and $1 < m < M$ (i.e., for each m there are $M-m+1$ such sub-matrices) are approximately the same. Put differently, $\Psi_{ss'}$ approximately depends only on the difference $s - s'$, and, in fact, only on $|s - s'|$ since $\Psi_{ss'}$ is symmetric. Let $\Psi_{s,1} \equiv \eta_{s-1}$, $s > 1$. Then we have $\Psi_{ss'} \approx \eta_{|s-s'|}$, $s \neq s'$, and
$$\psi_M \approx \sum_{s=1}^{M-1} \eta_s \tag{44}$$
$$\sum_{s=1}^M \psi_s \approx 2\sum_{s=1}^{M-1} (M-s)\,\eta_s \tag{45}$$
To estimate the correlation $\rho_M$, we need to make some assumptions about the correlations $\eta_s$. A reasonable assumption is that the correlations $\Psi_{ss'}$ decay as $|s - s'|$ grows. E.g., we can assume that $|\Psi_{ss'}| \leq \lambda^{|s-s'|}$ for some positive $\lambda < 1$, or, equivalently, that $\eta_s = \widetilde{\eta}_s\,\lambda^s$, $s = 1,\dots,M-1$, where $|\widetilde{\eta}_s| \leq 1$. We then have
$$\psi_M \approx f(\lambda) \equiv \sum_{s=1}^{M-1} \widetilde{\eta}_s\,\lambda^s \tag{46}$$
$$\sum_{s=1}^M \psi_s \approx 2\left[M\,f(\lambda) - \lambda\,f'(\lambda)\right] \tag{47}$$
$$f'(\lambda) \equiv \frac{\partial f(\lambda)}{\partial\lambda} \tag{48}$$
and our correlation $\rho_M$ reads
$$\rho_M \approx \frac{1 + f(\lambda)}{\sqrt{M\left[1 + 2 f(\lambda)\right] - 2\lambda\,f'(\lambda)}} \tag{49}$$
We also have
$$|f(\lambda)| \leq \frac{\lambda - \lambda^M}{1-\lambda} \leq \frac{\lambda}{1-\lambda} \tag{50}$$
$$|f'(\lambda)| \leq \frac{1 + (M-1)\,\lambda^M - M\,\lambda^{M-1}}{(1-\lambda)^2} \leq \frac{1}{(1-\lambda)^2} \tag{51}$$
where we have taken into account that $M \gg 1$.
So, if $f(\lambda) \geq 0$, we have (assuming M is large)
$$0 < \rho_M \lesssim \frac{1}{(1-\lambda)\sqrt{M}} \ll 1 \tag{52}$$
and for positive f ( λ ) the bound is even tighter.
What about $f(\lambda) < 0$? First, note that $\rho_M$ is still positive—for it to become negative, the argument of the square root in the denominator in Equation (49) would have to become negative, which is not possible (see below). However, for negative $f(\lambda)$ a priori it might appear that $\rho_M$ need not be small, as the denominator in Equation (49) could become small when $f(\lambda) = -1/2 + \epsilon$, where $0 < \epsilon \lesssim 1/M$. Nonetheless, this cannot be the case if there is randomness in the returns $R_s$. Indeed, we have17
$$\langle \widetilde{R}, \widetilde{R} \rangle \approx \sigma^2 \sum_{s,s'=1}^M \Psi_{ss'} \approx \sigma^2\left(M\left[1 + 2 f(\lambda)\right] - 2\lambda\,f'(\lambda)\right) \tag{53}$$
So, the argument of the square root in the denominator of Equation (49) is (up to $\sigma^2$) the variance of the return $\widetilde{R}$ for the period $t_M - t_0 = M\,\Delta t$. If there is randomness in the returns $R_s$, the variance $\langle \widetilde{R}, \widetilde{R} \rangle$ should scale linearly with $t_M - t_0$ and, consequently, with M. If this variance were of order $\sigma^2$, this would imply that the returns $R_s$ are highly anti-correlated with each other and the entire process is highly deterministic. Put differently, there would be essentially no dispersion in this case. Under normal circumstances, where we have randomness in the returns $R_s$, the variance $\langle \widetilde{R}, \widetilde{R} \rangle$ should be of order $M\,\sigma^2$. If there are any negative correlations $\eta_s$, they are offset by other positive correlations, so that $\langle \widetilde{R}, \widetilde{R} \rangle \sim M\,\sigma^2$ and we have (52).
The upshot is—this is a generalization of our example above—that quantities with long time horizons have low correlations with quantities with short horizons. What happens, say, at milliseconds gets diluted by the time one gets to, say, month-long horizons—and this dilution is due to the cumulative effect of everything that transpires in between such vastly different time scales. Randomness plays a crucial role in this dilution. If things were deterministic, such dilution would not occur.18

3.2. Implication for Risk Factors

A practical implication of the above discussion is that care is needed in choosing which risk factors to use in RM depending on what the time horizons of the strategies are for which RM is used. If these horizons are short, then risk factors such as value and growth, whose underlying fundamental data updates quarterly, should not be used as they add no value in short holding (a few days, overnight, intraday, etc.) strategies. Here is a simple argument. Consider high frequency trading at, say, millisecond time scales. Does book value make a difference to such trading? The answer is no. What is relevant here is the market microstructure at the millisecond timescales (bid, ask, bid and ask sizes, order book depth, hidden liquidity, posting orders fast on different exchanges, whether the trader’s collocation is close to the exchange connectivity hub, etc.).19 Whether the book value for stock XYZ is $100M or $1B does not directly affect the market microstructure at millisecond time scales.20
On the other hand, quantities such as liquidity and market cap21 do affect market microstructure. e.g., liquidity affects typical bid/ask sizes, print sizes, etc. More precisely, liquidity computed based on, say, 20-trading-day ADDV indirectly relates to such “micro” quantities because of the expected linear scaling of volumes.22 i.e., even though ADDV is computed using longer horizons, it is a relevant risk factor for shorter horizon strategies precisely because of the aforementioned linear scaling of volumes, allowing an extrapolation from longer to shorter horizons.
Similarly, volatility is a relevant style factor. Typically, it is computed as historical volatility of, say, close-to-close returns. As an extrapolation—based on the assumption that historically more volatile stocks are also more volatile intraday—one can use this style factor for shorter horizon strategies. Preferably, one can also define volatility style factor based on shorter horizons (e.g., intraday; see Section 2).
So, conceptually, if the underlying quantity (e.g., book value or earnings) has a long time horizon (i.e., changes, say, quarterly), then the corresponding risk factors are not relevant for shorter horizon strategies (e.g., those involving overnight returns),23 unless there is a linear extrapolating argument that reasonably relates such longer term quantities to their shorter term counterparts (as, e.g., in the case of liquidity). More technically, suppose we have K factors we know add value. How do we determine if a new, (K+1)-th, factor adds value?24 Here is a simple method.
Thus, suppose we have N stocks and FLM $\Omega_{iA}$, $i = 1,\dots,N$, $A = 1,\dots,K$. Let $\Omega'_{iA'}$, $i = 1,\dots,N$, $A' = 1,\dots,K'$, $K' \equiv K+1$ be the new FLM once we add a new, (K+1)-th, risk factor. (So we have $\Omega'_{iA'}|_{A'=A} = \Omega_{iA}$, $A = 1,\dots,K$.) Let $R_i$ be the returns used in our strategy, i.e., these returns have the time horizon relevant to our strategy.25 We can run two regressions (without intercept—unless it is included in $\Omega_{iA}$), first $R_i$ over $\Omega_{iA}$, and second $R_i$ over $\Omega'_{iA'}$. In R notations:
$$R \sim -1 + \Omega \tag{54}$$
$$R \sim -1 + \Omega' \tag{55}$$
In actuality, $R_i$, $\Omega_{iA}$ and $\Omega'_{iA'}$ are time series: $R_i(t_s) \equiv R_{si}$, $\Omega_{iA}(t_s) \equiv \Omega_{siA}$, $\Omega'_{iA'}(t_s) \equiv \Omega'_{siA'}$, $s = 0,1,\dots,M$. We can run the above two regressions for each value of s and look at, e.g., the two time-series vectors of the regression F-statistic to assess if the new risk factor improves the overall F-statistic.26 Alternatively, we can pull the $(M+1) \times N$ matrix $R_{si}$ into a vector $\widehat{R}_\sigma$ of length $(M+1)\,N$ (i.e., treat the index pair $(s,i)$ as a single index σ), and do the same with FLM: $\Omega_{siA} \to \widehat{\Omega}_{\sigma A}$, $\Omega'_{siA'} \to \widehat{\Omega}'_{\sigma A'}$. We can now run two regressions
$$\widehat{R} \sim -1 + \widehat{\Omega} \tag{56}$$
$$\widehat{R} \sim -1 + \widehat{\Omega}' \tag{57}$$
and compare the F-statistic.27 If K is not large, it is also informative to compare the t-values of the regression coefficients and assess the effect of the new factor.
For illustrative purposes, we ran such regressions for overnight returns $R_i \equiv \log(P_i(t_{\rm open})/P_i(t_{\rm close}))$, where the open $P_i(t_{\rm open})$ and the previous close $P_i(t_{\rm close})$ prices are adjusted for splits and dividends. In the case of, say, book value, as a benchmark it suffices to consider a K = 1 model, where the sole risk factor is the intercept. Then we add the second risk factor, which is (log of) book (or tangible book, price-to-book, etc.),28 so $K' = 2$. The regression F-statistic and t-values are given in Table 1 (for regressions (56) and (57)), which shows that the second regression (57) involving (tangible) book value does not have an improved statistic over the intercept-only regression. The 1-factor regressions other than the intercept-only regression can be thought of as regressions over “betas”. The log(Prc/Book) case (see Table 1) is the closest to the intercept-only case because the regression Prc ∼ 1 + Book has F-statistic 56,230 and t-value 237.1, i.e., price and book value are highly correlated. As to the 2-factor regressions, (T)Book does not improve the statistic. It is log(Prc) that makes an impact, precisely because prices change daily.
We also ran the (56) and (57) regressions with a K = 10 model as a benchmark, where the risk factors are 10 BICS sectors29 (so $K' = 11$). The results are given in Table 2, which shows that book value does not improve the regression statistic. As above, it is log(Prc) that provides improvement. We also ran the (54) and (55) regressions separately for each date (i.e., without pulling the index pair $(s,i)$ into a single index σ—see above) with the same K = 10 benchmark. The results are given in Table 3 and agree with those in Table 2. Log(Prc), not Book, has impact.
Table 1. Results for regressions (56) and (57) with the intercept-only 1-factor model as the benchmark. Int = intercept; (T)Book = (tangible) book; Prc = adjusted previous close; RPrc = raw (unadjusted) previous close. Next to Int+log(Prc) we also give Int+log(RPrc) results. We do this because adjusting the previous close introduces a bias of anticipating future splits and/or dividends. However, as can be seen from the Int+log(RPrc) row, this bias is relatively mild and does not affect our conclusions. The blank entries “—” stand for N/As.
Regression/Statistic | F-Statistic | Intercept t-Value | Second Coefficient t-Value
Int only | 737.7 | 27.16 | —
Book only | 237.2 | — | 15.40
TBook only | 191.2 | — | 13.83
Prc only | 1.34 | — | 1.16
Prc/Book only | 12.5 | — | 3.54
Prc/TBook only | 3.84 | — | 1.96
log(Book) only | 707.5 | — | 26.60
log(TBook) only | 583.7 | — | 24.70
log(Prc) only | 526.0 | — | 22.94
log(Prc/Book) only | 739.1 | — | −27.19
log(Prc/TBook) only | 608.7 | — | −24.67
Int+Book | 362.5 | 22.08 | 4.10
Int+TBook | 297.6 | 20.10 | 4.56
Int+(Prc/Book) | 354.3 | 26.38 | −0.66
Int+(Prc/TBook) | 287.2 | 23.89 | 0.15
Int+Prc | 368.9 | 27.14 | −0.24
Int+log(Book) | 354.2 | 0.98 | 0.53
Int+log(TBook) | 294.1 | −2.11 | 3.70
Int+log(Prc) | 473.9 | 20.53 | −14.48
Int+log(RPrc) | 468.7 | 20.18 | −14.12
Int+log(Prc/Book) | 394.0 | −6.99 | −8.93
Int+log(Prc/TBook) | 329.9 | −7.14 | −9.23
Finally, for the (54) and (55) regressions we computed the t-statistic of actual risk factor time series a la Fama and MacBeth [29], both for the K = 1 (intercept only) and K = 10 (BICS sectors) benchmark factor models. The results are given in Table 4 and Table 5 and agree with those in Table 1, Table 2 and Table 3.
Table 2. Results for regressions (56) and (57) with the BICS-sector 10-factor model as the benchmark. S = 10 BICS sectors labeled by S1(30), S2(63), S3(45), S4(30), S5(91), S6(75), S7(42), S8(48), S9(41) and S10(28) (the parentheticals show the number of tickers in each sector); X = the 11th factor (P, B, P/B, log(P), log(B) and log(P/B)); P = adjusted previous close; B = book; F = F-statistic; t = t-value. E.g., in the “Reg:” line “S+(P/B)” means that the returns R are regressed over FLM Ω containing 11 columns corresponding to the 10 sectors S1 through S10 plus the 11th factor X, which is (P/B) in this case. In the S+log(P) column we also give the values when P is taken to be the raw (unadjusted) previous close. We do this because adjusting the previous close introduces a bias of anticipating future splits and/or dividends. However, as can be seen from the S+log(P) column, this bias is relatively mild and does not affect our conclusions.
Reg:     S       S+P     S+B     S+(P/B)   S+log(P)        S+log(B)   S+log(P/B)
F        80.6    73.3    72.4    71.3      92.6/91.7       71.3       77.8
t:S1     6.40    6.40    6.16    6.40      15.07/14.80     1.56       –6.42
t:S2     6.67    6.67    5.94    6.50      15.74/15.44     1.19       –7.08
t:S3     5.57    5.57    5.02    5.60      15.08/14.76     1.56       –7.04
t:S4     5.40    5.40    5.08    5.41      14.13/13.87     1.32       –6.79
t:S5     13.31   13.28   11.12   13.36     19.26/18.93     1.80       –6.38
t:S6     13.13   13.13   12.26   12.50     19.29/18.99     1.98       –6.09
t:S7     4.97    4.97    3.25    3.74      14.40/14.15     0.89       –7.37
t:S8     6.85    6.85    6.42    6.88      15.88/15.58     1.35       –6.80
t:S9     12.83   12.84   11.90   12.92     19.40/19.12     2.49       –5.39
t:S10    8.63    8.63    8.21    8.87      16.17/15.92     2.19       –5.69
t:X      —       –0.42   3.42    –0.45     –14.57/–14.21   –0.20      –8.47
Table 3. Results for regressions (54) and (55) with the BICS-sector 10-factor model as the benchmark. The notations are the same as in Table 2, except that F = median F-statistic, and t = median t-value, where F-statistic and t-values are computed based on regressions (54) and (55) for each date, and the median is computed serially over all dates. The meaning of double entries in the S+log(P) column is the same as in Table 2.
Reg:     S      S+P    S+B    S+(P/B)   S+log(P)      S+log(B)   S+log(P/B)
F        13.5   12.2   12.0   12.0      12.3/12.3     12.1       12.2
t:S1     0.40   0.40   0.40   0.43      0.67/0.68     0.11       –0.18
t:S2     0.61   0.61   0.62   0.63      0.85/0.83     0.11       –0.16
t:S3     0.34   0.33   0.32   0.32      0.69/0.63     0.11       –0.20
t:S4     0.23   0.23   0.23   0.25      0.62/0.56     0.10       –0.21
t:S5     0.91   0.91   0.76   0.88      0.90/0.85     0.15       –0.19
t:S6     0.80   0.80   0.82   0.91      0.96/0.92     0.14       –0.14
t:S7     0.39   0.39   0.24   0.25      0.70/0.64     0.10       –0.20
t:S8     0.54   0.54   0.57   0.57      0.76/0.74     0.08       –0.19
t:S9     0.76   0.76   0.77   0.80      1.00/0.98     0.18       –0.12
t:S10    0.53   0.53   0.52   0.55      0.84/0.88     0.14       –0.14
t:X      —      –0.02  0.13   –0.04     –0.55/–0.52   –0.03      –0.30
Table 4. Results for regressions (54) and (55) with the intercept-only 1-factor model as the benchmark. The notations are the same as in Table 1, except that the t-statistic here refers to the t-statistic of the corresponding risk factor time series a la Fama and MacBeth [29]. These t-statistics are annualized, i.e., we compute the daily t-statistic and then multiply it by $\sqrt{252}$.
Regression          Intercept t-Statistic   Second Coefficient t-Statistic
Int only            0.90                    —
Int+Book            0.82                    2.21
Int+(Prc/Book)      0.90                    –0.69
Int+Prc             0.90                    –0.42
Int+log(Book)       0.23                    0.32
Int+log(Prc)        1.90                    –3.50
Int+log(RPrc)       1.78                    –3.10
Int+log(Prc/Book)   –2.15                   –2.90
Table 5. Results for regressions (54) and (55) with the BICS-sector 10-factor model as the benchmark. The notations are the same as in Table 2, except that “t:✭” refers to the annualized t-statistic of the corresponding risk factor “✭” time-series a la Fama and MacBeth [29], same as in Table 4. The meaning of double entries in the S+log(P) column is the same as in Table 2.
Reg:     S      S+P    S+B    S+(P/B)   S+log(P)      S+log(B)   S+log(P/B)
t:S1     0.76   0.76   0.74   0.78      1.76/1.64     0.39       –2.02
t:S2     0.59   0.59   0.54   0.60      1.61/1.50     0.28       –2.24
t:S3     0.76   0.76   0.68   0.77      2.07/1.91     0.30       –2.42
t:S4     1.09   1.09   1.01   1.10      2.32/2.14     0.37       –2.48
t:S5     0.88   0.88   0.77   0.88      1.75/1.65     0.45       –2.02
t:S6     1.07   1.07   1.02   1.05      2.01/1.88     0.51       –1.86
t:S7     0.83   0.83   0.56   0.66      2.14/1.99     0.22       –2.66
t:S8     0.64   0.64   0.60   0.65      1.72/1.59     0.32       –2.07
t:S9     1.09   1.09   1.02   1.09      1.87/1.77     0.61       –1.51
t:S10    1.17   1.17   1.13   1.23      2.04/1.92     0.58       –1.78
t:X      —      –0.56  2.15   –0.61     –3.45/–3.05   0.02       –3.12

4. Pitfalls of Standardized Risk Models

4.1. Industry Risk Factors

Suppose we have an industry classification. For the discussion below it makes no difference whether the FLM $\Omega^{\rm ind}_{iA}$, $A = 1, \dots, K_{\rm ind}$, is binary or whether conglomerates are allowed, so for simplicity we will assume that it is binary (see SubSection 2.1):
$$\Omega^{\rm ind}_{iA} = \delta_{G(i),A}$$
For definiteness, let us fix the names of the industry tree levels as "sectors → sub-sectors → industries", so that "industries" correspond to the most detailed level. The number of industries $K_{\rm ind}$ depends on the universe: different universes $U_1$ and $U_2$ can have vastly different sets of industries to which the corresponding stocks belong.
In SRM a large number of stocks (e.g., several thousand for the U.S. models) are "squeezed" into a relatively modest number $K^*_{\rm ind}$ of standardized industries, which can be substantially smaller than the number of true industries $K_{\rm ind}$ for a typical quantitative trading portfolio universe of, say, 1000–2500 names. Standardized industries therefore typically lose granularity, and granularity determines how well RM helps hedge a portfolio against industry risk. For illustrative purposes, let us look at the number of true industries for top-by-market-cap portfolios in BICS.30 We require at least 10 stocks in each industry. Smaller industries are pruned to the sub-sector level and, if need be, to the sector level; any leftover small industries are merged into larger ones. The result is given in Table 6. The numbers of true industries for the top 1500+ universes are sizably higher than the numbers of typical standardized industries.
Table 6. Number of BICS industries for portfolios of stocks in top X by market cap with at least 10 stocks in each industry as of August 19, 2014. Only U.S. listed common stocks and class shares are included (no OTCs, preferred shares, etc.).
Top X by Market Cap   Number of Industries
1000                  55
1500                  75
2000                  94
2500                  107
3000                  122
3500                  125
4000                  128
4500                  130
5000                  133

4.2. Empty Standardized Industries

Furthermore, for any given universe U, even if U contains, say, 1000–2500 names, we can have (almost) empty standardized industries—e.g., when the portfolio does not trade any stocks from a given (sub-)sector. Empty standardized industries would have been omitted had we built the RM based on the custom universe U. In an SRM we have no such option, so we must keep the empty industries. Why is this so bad?
Style factors are not important for our discussion here, so let us consider an RM with only industry factors. Let these be K standardized binary industries. Let the risk model universe be $U_{\rm SRM}$, and let our universe be $U \subset U_{\rm SRM}$. Let the FLM be $\Omega_{iA}$, $i \in \{1,\dots,N\} \equiv U$. Let some industries be empty, with $N_A = 0$, where
$$N_A \equiv \sum_{i=1}^N \Omega_{iA} = \sum_{i=1}^N \delta_{G(i),A}$$
Such industries must be omitted from the regressions, if that is how we use the RM.
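To make the bookkeeping concrete, here is a minimal Python sketch (ours, not the authors' code; all data is made up) that computes the per-industry counts $N_A$ for a binary FLM and drops the empty industry columns before a cross-sectional regression:

```python
import numpy as np

# Hypothetical illustration (not the authors' code): a binary industry FLM with
# Omega[i, A] = 1 if stock i belongs to industry A, else 0.
rng = np.random.default_rng(0)
N, K = 8, 5                        # 8 stocks, 5 standardized industries
G = rng.integers(0, 3, size=N)     # industry map G(i); stocks only ever land in
                                   # industries 0..2, so 3 and 4 are empty for U
Omega = np.zeros((N, K))
Omega[np.arange(N), G] = 1.0

# N_A = sum_i Omega[i, A]: per-industry stock counts for the universe U
N_A = Omega.sum(axis=0)

# keep only non-empty industries before running the cross-sectional regression
J = np.flatnonzero(N_A > 0)
Omega_reduced = Omega[:, J]
```

With a custom universe one would simply build the model from `Omega_reduced`; with an SRM the empty columns cannot be dropped from the model itself.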
On the other hand, suppose we are doing optimization, in which case we cannot omit the empty industries. Let $J \equiv \{A \mid N_A > 0\}$ and $J' \equiv \{A \mid N_A = 0\}$. We have
$$\Gamma_{ij} = \xi_i^2\,\delta_{ij} + \sum_{A,B \in J} \Omega_{iA}\,\Omega_{jB}\,\Phi_{AB} = \xi_i^2\,\delta_{ij} + \Phi_{G(i),G(j)} \qquad (60)$$
The number of risk factors in this model is $\widehat{K} = |J| < K$, yet the FCM $\Phi_{AB}$ is computed based on all K factors. So, from the viewpoint of the universe U, there are $|J'| = K - \widehat{K}$ "hidden" factors, to which U has no exposure, yet they affect the covariance matrix Γ. This does not sit well with the RM premise that all covariances are explained by a combination of: (i) some fixed risk factors, exposure to which for a given universe of stocks is well-defined; and (ii) ISR, which describe all uncertainty not described by the risk factors. Based on this premise, the correct way of modeling risk for our universe U would be to assume that we have $\widehat{K}$ risk factors and compute the FCM $\widehat{\Phi}_{AB}$, $A, B \in J$, for these factors along with the corresponding ISR $\widehat{\xi}_i$, i.e., to have
$$\widehat{\Gamma}_{ij} \equiv \widehat{\xi}_i^2\,\delta_{ij} + \sum_{A,B \in J} \Omega_{iA}\,\Omega_{jB}\,\widehat{\Phi}_{AB} = \widehat{\xi}_i^2\,\delta_{ij} + \widehat{\Phi}_{G(i),G(j)} \qquad (61)$$
At first it might appear that (60) and (61) are identical: if $\Phi_{AB}$ and $\widehat{\Phi}_{AB}$ are computed as the SCM of the corresponding risk factors, then we would have $\widehat{\Phi}_{AB} = \Phi_{AB}$, $A, B \in J$. However, as mentioned in SubSection 2.3, in real life the FCM is not computed as the SCM, because the ISR with such a computation are typically ill-defined. Consequently, $\widehat{\Phi}_{AB} \neq \Phi_{AB}$, $A, B \in J$, and $\widehat{\xi}_i \neq \xi_i$. Because of this interdependence of the FCM and ISR, it is more desirable to compute $\widehat{\Phi}_{AB}$ and $\widehat{\xi}_i$ directly based on the universe U without any empty industries—the latter only add noise to the computation. Any uncertainty not described by the relevant risk factors should be modeled via ISR, not via some "hidden" risk factors.
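The structure of (60) can be illustrated with a small Python sketch (our own illustration; the FCM and ISR values are made up) showing how, for a binary FLM, the factor part of the covariance collapses to $\Phi_{G(i),G(j)}$, while the rows and columns of Φ for an empty industry never enter Γ directly:

```python
import numpy as np

# A minimal sketch (ours, with made-up numbers) of the factor model covariance
# Gamma_ij = xi_i^2 delta_ij + sum_{A,B in J} Omega_iA Omega_jB Phi_AB.
rng = np.random.default_rng(1)
N, K = 6, 4
G = np.array([0, 0, 1, 1, 2, 2])   # industry map G(i); industry 3 is empty
Omega = np.zeros((N, K))
Omega[np.arange(N), G] = 1.0

xi = rng.uniform(0.1, 0.3, size=N) # idiosyncratic (specific) risks
M = rng.normal(size=(K, K))
Phi = M @ M.T + K * np.eye(K)      # some positive-definite K x K FCM

Gamma = np.diag(xi**2) + Omega @ Phi @ Omega.T

# for a binary Omega the factor part collapses to Phi[G(i), G(j)]; note that
# row/column 3 of Phi (the empty "hidden" industry) never enters Gamma
# directly, yet in real life it influenced how Phi and xi were estimated
assert np.isclose(Gamma[0, 3], Phi[G[0], G[3]])
assert np.isclose(Gamma[0, 0], xi[0]**2 + Phi[0, 0])
```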
That empty industries, to which the universe U has no exposure, add noise can be seen in the optimization context—they contribute to the desired dollar holdings (24) via (25).31 The effect is that the desired dollar holdings are approximately neutralized against these “hidden” industries (see (26) and (27)).32 Typically, this generates additional noise (“twitch”) trades, which on paper may appear harmless,33 but in real life can increase trading costs and reduce profitability, rendering empty industries undesirable. The same argument applies to irrelevant style factors (e.g., value/growth in short horizon strategies; see Section 3.2), rendering them harmful.34

4.3. Style Risk Factors

Some style factors—albeit not necessarily all35—are normalized. One way of normalizing a style factor labeled by A ∈ {1, …, K} is to conform the values of the A-th column of the FLM $\Omega_{iA}$ to a normal distribution with, e.g., mean 0 and standard deviation equal to the standard deviation (or MAD) of the original, unnormalized column. This is done in the R code in Appendix A for momentum, size and liquidity. If the distribution is expected to be log-normal, then one normalizes the log of the column and re-exponentiates. This is done for intraday volatility in the R code in Appendix A.
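For illustration, here is a Python analogue (our sketch, assuming only numpy and the standard library; the paper's actual code is the R version in Appendix A) of this rank-based normalization, mirroring R's qnorm(ppoints(x)[rank(x)], center, sdev):

```python
import numpy as np
from statistics import NormalDist

# Illustrative Python analogue (our sketch, not the authors' code) of the
# rank-based normalization in the Appendix A R code: conform a style factor
# column to a normal distribution with a given center and scale.
def normalize(x, center=None, sdev=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    if center is None:
        center = x.mean()
    if sdev is None:
        sdev = x.std(ddof=1)
    # R's ppoints(): (i - a) / (n + 1 - 2a), with a = 3/8 for n <= 10, else 1/2
    a = 3.0 / 8.0 if n <= 10 else 0.5
    p = (np.arange(1, n + 1) - a) / (n + 1 - 2 * a)
    ranks = x.argsort().argsort()      # 0-based ranks, smallest value first
    nd = NormalDist(mu=center, sigma=sdev)
    return np.array([nd.inv_cdf(q) for q in p[ranks]])

def mad(v):                            # R-style MAD, scaled to match sd for normals
    v = np.asarray(v, dtype=float)
    return 1.4826 * np.median(np.abs(v - np.median(v)))

raw = np.random.default_rng(2).lognormal(size=100)  # a skewed raw style factor
col = normalize(raw, center=0.0, sdev=mad(raw))     # normalized factor column
```

The mapping is rank-preserving, so the cross-sectional ordering of the stocks by the factor is unchanged; only the distribution is reshaped.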
Such normalizations of style factors are typically done across the entire coverage universe $U_{\rm SRM}$. Suppose we wish to use an SRM for our universe U, which is a fraction of $U_{\rm SRM}$. Then the values of the corresponding columns of $\Omega_{iA}$ with $i \in U$ are no longer normalized. For a random subset $U \subset U_{\rm SRM}$ one may expect them to still be approximately normalized if the number of stocks in U is large. However, there is typically nothing random about a real-life trading universe U: it is carefully selected based on requirements on market cap, liquidity, volatility, price and/or other relevant quantities, which can be rather skewed, so the style factor columns of $\Omega_{iA}$ truncated to U are no longer even approximately normalized. Why does this matter? If the FCM is computed based on normalized style factors (be it using $U_{\rm SRM}$ or $U^*$), then this FCM is not the same as (or even necessarily close to) the FCM one would obtain from style factors normalized based on U. This also affects the ISR. Therefore, for the same reasons as in the previous subsection, it is more desirable to compute the FLM, FCM and ISR based on the custom universe U.

5. Concluding Remarks

Above we discussed points (1)–(4) raised in the Introduction in relation to using SRM and when and why CRM are warranted. Let us conclude by briefly touching upon points (5)–(7) mentioned in the Introduction, which relate to other aspects.
Statistical Arbitrage (StatArb) "refers to highly technical short-term mean-reversion strategies involving large numbers of securities (hundreds to thousands, depending on the amount of risk capital), very short holding periods (measured in days to seconds), and substantial computational, trading, and information technology (IT) infrastructure" [39]. A quantitative framework for this "mean-reversion" was recently discussed in [50]. Schematically, one can think about this mean-reversion as follows. Pick some returns. Pick an RM. Then: (i) either regress your returns over the FLM (or a subset of its columns) with some weights, or (ii) do optimization using the RM (possibly with some constraints). One can hang various bells and whistles onto a strategy constructed this way, and these do contribute to differentiating models. Nonetheless, the choice of RM is a major factor (along with the choice of returns) in what the desired holdings and trades will look like. As discussed in Section 2.4.3, in optimization the desired holdings are approximately neutral w.r.t. the risk factors, while in regression they are exactly neutral. So the RM, in the uses discussed above, factors away (completely or approximately) the risk exposure in the returns w.r.t. the risk factors. Therefore, using the same RM for two sets of apparently different returns can substantially reduce the difference between the two—when it comes to the resulting desired holdings—thereby making the two strategies more correlated.
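The exact factor neutrality in the regression case can be checked numerically; the following Python sketch (our illustration with random data, not the paper's code) verifies that weighted-regression residuals have identically vanishing factor exposures:

```python
import numpy as np

# Sketch (our illustration with random data): residuals of a weighted
# cross-sectional regression of returns R over the FLM Omega satisfy
# sum_i Z_i eps_i Omega_iA = 0 exactly, for every risk factor A.
rng = np.random.default_rng(3)
N, K = 50, 5
Omega = rng.normal(size=(N, K))    # factor loadings (industry and/or style)
R = rng.normal(size=N)             # some returns
Z = rng.uniform(0.5, 2.0, size=N)  # regression weights

W = np.diag(Z)
beta = np.linalg.solve(Omega.T @ W @ Omega, Omega.T @ W @ R)
eps = R - Omega @ beta             # weighted-regression residuals

exposure = Omega.T @ (Z * eps)     # factor exposures of the Z-weighted residuals
assert np.allclose(exposure, 0.0)  # exactly neutral, up to machine precision
```

This is the precise sense in which regressing over the same FLM "washes out" whatever risk exposures differentiated the input returns.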
Therefore, to diversify strategies it is important to diversify not only the returns but also the RM. This is especially useful in "herd effect" situations, where many market participants—for whatever (temporary) underlying reason (cf. the Aug '07 Quant Meltdown)—are "compelled" to do the same thing. When it comes to unpleasant situations such as a potential book liquidation, being less correlated with others can make the difference between liquidating (and incurring huge transaction costs) and weathering the storm. Custom RM in such cases can make a difference.
We can take this further by noting that, even with the same returns, two substantially different RM, call them RM-A and RM-B, can produce P&L streams that are not that highly correlated. In this case, instead of running one strategy with the returns optimized using, say, only RM-A, it makes more sense to run two strategies—seamlessly, with trades crossed internally at the level of desired holdings, i.e., the two strategies are combined with some weights w A and w B (see below)—where the first strategy, call it Str-A, is optimized using RM-A, and the second strategy, call it Str-B, is optimized using RM-B. If both strategies have positive returns and are not too highly correlated, then even if Str-B has a lower return than Str-A, it still makes sense to combine them with some weights. In the zeroth approximation the weights w A and w B can be obtained, e.g., by requiring that the Sharpe ratio of the resulting combined strategy be maximized. However, in real life—so long as the Sharpe ratio is acceptably high—typically it is the P&L that matters. In this regard, combining the two strategies can increase the P&L, as the capacity bound of the combined strategy is higher than those of Str-A and Str-B: the Str-A and Str-B trades are not highly correlated, and by combining them one reduces turnover, thereby decreasing market impact, increasing capacity and lowering trading costs. Turnover reduction and increased capacity are two important incentives for diversifying and using CRM.
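In the zeroth approximation described above, the Sharpe-maximizing combination weights are the tangency weights, proportional to inv(C) @ mu, where C is the 2x2 covariance of the Str-A and Str-B P&L streams. A Python sketch (our illustration; all numbers are hypothetical):

```python
import numpy as np

# Zeroth-approximation sketch (ours; hypothetical numbers): combine Str-A and
# Str-B with weights maximizing the Sharpe ratio of the combined P&L.
mu = np.array([0.12, 0.08])        # hypothetical annualized strategy returns
vol = np.array([0.10, 0.09])       # hypothetical annualized volatilities
rho = 0.3                          # P&L correlation between Str-A and Str-B

C = np.array([[vol[0]**2, rho * vol[0] * vol[1]],
              [rho * vol[0] * vol[1], vol[1]**2]])
w = np.linalg.solve(C, mu)         # tangency weights, up to normalization
w = w / w.sum()                    # normalize to unit total weight

sharpe_combined = (w @ mu) / np.sqrt(w @ C @ w)
# even though Str-B has the lower return, it gets nonzero weight, and the
# combined Sharpe exceeds that of Str-A alone when the correlation is low
assert sharpe_combined > mu[0] / vol[0]
```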
Finally, let us mention that CRM provide the additional evident benefits listed in point (7) in the Introduction. Also, shorter-term applications (precisely when CRM are warranted) do not require long lookbacks or nontrivial fundamental data (such as earnings going back many years), so the data required for building a CRM is typically already available to the portfolio manager. All in all, it boils down to computing FCM and ISR in a self-consistent fashion.

Acknowledgments

We would like to thank two anonymous reviewers for valuable suggestions.

A. R Code for Some Style Risk Factors

Below we give R code for four style factors (function calc.add.fac()): momentum, liquidity, size and intraday volatility. All uninformative dependencies have been omitted; the code is unaltered otherwise. (The functions normalize(), calc.sr(), calc.eff.mad(), calc.ret.mv() and calc.ret.mv.clean() are auxiliary.) Below, is.adr is a binary N-vector (N is the number of stocks); hist.prc, hist.vol, hist.high, hist.low and hist.cap are d × N matrices (d is the number of trading dates in the historical data) containing the historical closing price, daily volume, daily high, daily low (all adjusted—this is necessary for momentum, albeit not for liquidity or intraday volatility) and market cap, respectively; dates is the vector of the last 252 trading dates. ADRs are normalized independently. days is the lookback (e.g., 252 trading days), while back is for out-of-sample backtesting (and is 0 for current-date calculations); both days and back are passed into calc.add.fac() via the omitted arguments.
normalize <- function(x, center = mean(x), sdev = sd(x)){
        return(qnorm(ppoints(x)[sort.list(sort.list(x), method = "radix")],
           center, sdev))
      }
      calc.sr <- function(tv){
        sr <- sqrt(tv)
        sr <- log(sr)
        sr <- normalize(sr, median(sr), mad(sr))
        sr <- exp(sr)
        return(sr)
      }
     calc.eff.mad <- function(ret){
       eff.mad <- (5 * outer(apply(ret, 1,
         mad, na.rm = T), apply(ret,
       2, mad, na.rm = T), pmax))
       return(eff.mad)
     }
      calc.ret.mv <- function(prc, back, days, d.r){
        last <- nrow(prc) - back
        first <- last - days
        today <- prc[last:first, ]
        yest <- 0
        for(i in 1:d.r)
          yest <- yest + prc[(last - i):(first - i), ]
        yest <- yest / d.r
        ret <- today/yest - 1
        dimnames(ret) <- dimnames(prc[last:first, ])
        return(ret)
      }
      calc.ret.mv.clean <- function(prc, back, days, d.r){
        ret <- calc.ret.mv(prc, back, days, d.r)
        eff.mad <- calc.eff.mad(ret)
        bad <- abs(ret - apply(ret, 1, median)) > eff.mad
        ret[bad] <- NA
        avg.ret <- matrix(rowMeans(ret, na.rm = T),
          nrow = nrow(ret), ncol = ncol(ret))
        ret[bad] <- avg.ret[bad]
        ret <- ret - avg.ret
        return(ret)
      }
      calc.add.fac <- function(...){
        ### MOMENTUM MOVING AVERAGE LENGTHS
        d.r <- 5
        ### ADDV MOVING AVERAGE LENGTHS
        d.addv <- 20
        ### MOMENTUM FACTOR
        ### BASED ON AVERAGE 5-DAY RETURNS (OUTLIERS REMOVED)
        ret.mom <- calc.ret.mv.clean(hist.prc, back, days, d.r)
        mom <- apply(ret.mom, 2, mean)
        mom <- normalize(mom, 0, mad(mom))
        ### AVERAGE DAILY DOLLAR VOLUME (ADDV) FACTOR
        ### BASED ON LAST 20 DAYS
        ### ADRS ARE NORMALIZED ACCORDING TO NON-ADR DISTRIBUTION
        not.adr <- !is.adr
        addv <- hist.prc[dates, ] * hist.vol[dates, ]
        addv <- addv[1:d.addv, ]
        addv[addv == 0] <- NA
        addv <- colMeans(log(addv), na.rm = T)
        addv[is.adr] <- normalize(addv[is.adr], 0, mad(addv[not.adr]))
        addv <- normalize(addv, 0, mad(addv[not.adr]))
        ### MARKET CAP FACTOR
        ### BASED ON 252 DAYS
        ### ADRS ARE NORMALIZED ACCORDING TO NON-ADR DISTRIBUTION
        mkt.cap <- hist.cap[dates, ]
        mkt.cap[mkt.cap == 0] <- NA
        mkt.cap <- colMeans(log(mkt.cap), na.rm = T)
        mkt.cap[is.adr] <- normalize(mkt.cap[is.adr], 0, mad(mkt.cap[not.adr]))
        mkt.cap <- normalize(mkt.cap, 0, mad(mkt.cap[not.adr]))
        ### INTRADAY VOLATILITY FACTOR
        ### BASED ON 252 DAYS
        hist.low <- hist.low[dates, ]
        hist.high <- hist.high[dates, ]
        hist.prc <- hist.prc[dates, ]
        high.low <- abs(hist.high - hist.low) / hist.prc
        high.low <- calc.sr(colMeans(high.low^2))
      }
	  

B. C Code for Symmetric Matrix Inversion

The vector a[] is a symmetric $n \times n$ matrix $A_{ij}$ to be inverted, pulled into a vector such that $A_{ij}$, $i, j = 0, 1, \dots, n-1$, is given by a[i + n * j]. The algorithm utilizes the fact that the matrix is symmetric, thereby reducing the number of operations (compared with the Gauss–Jordan method for a general matrix).
static void InvSymMat(double *a, int n){
       int i, j, k;
       double sum;
       for( i = 0; i < n; i++ )
         for( j = i; j < n; j++ )
         {
           sum = a[i + n * j];
            for( k = i - 1; k >= 0; k-- )
             sum -= a[i + n * k] * a[j + n * k];
           a[j + n * i] =
             ( j == i ) ? 1 / sqrt(sum) : sum * a[i * (n + 1)];
         }
       for( i = 0; i < n; i++ )
         for( j = i + 1; j < n; j++ )
         {
           sum = 0;
           for( k = i; k < j; k++ )
             sum -= a[j + n * k] * a[k + n * i];
           a[j + i * n] = sum * a[j * (n + 1)];
         }
       for( i = 0; i < n; i++ )
         for( j = i; j < n; j++ )
         {
           sum = 0;
           for( k = j; k < n; k++ )
             sum += a[k + n * i] * a[k + n * j];
           a[i + n * j] = a[j + n * i] = sum;
         }
      }
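As a sanity check, here is a Python sketch (ours, not part of the paper's code) of the same idea, inverting a symmetric positive-definite matrix through its Cholesky factor, verified against numpy.linalg.inv:

```python
import numpy as np

# Sanity-check sketch (ours): the C routine above inverts a symmetric
# positive-definite matrix via a Cholesky-style factorization; here we verify
# the same idea against numpy's general-purpose inverse.
rng = np.random.default_rng(4)
n = 5
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive-definite test matrix

L = np.linalg.cholesky(A)            # A = L @ L.T
Linv = np.linalg.solve(L, np.eye(n)) # invert the triangular factor
Ainv = Linv.T @ Linv                 # inv(A) = inv(L).T @ inv(L)

assert np.allclose(Ainv, np.linalg.inv(A))
assert np.allclose(Ainv, Ainv.T)     # the inverse is symmetric, as expected
```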
	  

C. Disclaimers

Wherever the context so requires, the masculine gender includes the feminine and/or neuter, and the singular form includes the plural and vice versa. The author of this paper (“Author”) and his affiliates including without limitation Quantigic® Solutions LLC (“Author’s Affiliates” or “his Affiliates”) make no implied or express warranties or any other representations whatsoever, including without limitation implied warranties of merchantability and fitness for a particular purpose, in connection with or with regard to the content of this paper including without limitation any code or algorithms contained herein (“Content”).
The reader may use the Content solely at his/her/its own risk and the reader shall have no claims whatsoever against the Author or his Affiliates and the Author and his Affiliates shall have no liability whatsoever to the reader or any third party whatsoever for any loss, expense, opportunity cost, damages or any other adverse effects whatsoever relating to or arising from the use of the Content by the reader including without any limitation whatsoever: any direct, indirect, incidental, special, consequential or any other damages incurred by the reader, however caused and under any theory of liability; any loss of profit (whether incurred directly or indirectly), any loss of goodwill or reputation, any loss of data suffered, cost of procurement of substitute goods or services, or any other tangible or intangible loss; any reliance placed by the reader on the completeness, accuracy or existence of the Content or any other effect of using the Content; and any and all other adversities or negative effects the reader might encounter in using the Content irrespective of whether the Author or his Affiliates is or are or should have been aware of such adversities or negative effects.
The R code included in Appendix A hereof is part of the copyrighted R code for Quantigic® Risk ModelTM and is provided herein with the express permission of Quantigic® Solutions LLC. The copyright owner retains all rights, title and interest in and to its copyrighted source code included in Appendix A and Appendix B hereof and any and all copyrights therefor.
The Content is not intended, and should not be construed, as an investment, legal, tax or any other such advice, and in no way represents views of Quantigic® Solutions LLC, the website: www.quantigic.com or any of their other affiliates.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. R. Banz. “The relationship between return and market value of common stocks.” J. Financ. Econ. 9 (1981): 3–18. [Google Scholar] [CrossRef]
  2. S. Basu. “The investment performance of common stocks in relation to their price to earnings ratios: A test of the efficient market hypothesis.” J. Financ. 32 (1977): 663–682. [Google Scholar] [CrossRef]
  3. E. Fama, and K. French. “The cross-section of expected stock returns.” J. Financ. 47 (1992): 427–465. [Google Scholar] [CrossRef]
  4. E.F. Fama, and K.R. French. “Common risk factors in the returns on stocks and bonds.” J. Financ. Econ. 33 (1993): 3–56. [Google Scholar] [CrossRef]
  5. J. Lakonishok, A. Shleifer, and R.W. Vishny. “Contrarian investment, extrapolation, and risk.” J. Financ. 49 (1994): 1541–1578. [Google Scholar] [CrossRef]
  6. J. Liew, and M. Vassalou. “Can book-to-market, size and momentum be risk factors that predict economic growth? ” J. Financ. Econ. 57 (2000): 221–245. [Google Scholar] [CrossRef]
  7. C. Asness, and R. Stevens. Intra- and Inter-Industry Variation in the Cross-Section of Expected Stock Returns. Working Paper. San Francisco, CA, USA: Goldman Sachs, Investment Management Division, 1995. [Google Scholar]
  8. R.A. Haugen. The New Finance: The Case Against Efficient Markets. Upper Saddle River, NJ, USA: Prentice Hall, 1995. [Google Scholar]
  9. C.S. Asness. The Power of Past Stock Returns to Explain Future Stock Returns. Working Paper. San Francisco, CA, USA: Goldman Sachs, Investment Management Division, 1995. [Google Scholar]
  10. N. Jegadeesh, and S. Titman. “Returns to buying winners and selling losers: Implications for stock market efficiency.” J. Financ. 48 (1993): 65–91. [Google Scholar] [CrossRef]
  11. C. Asness, R.J. Krail, and J.M. Liew. “Do hedge funds hedge? ” J. Portf. Manag. 28 (2001): 6–19. [Google Scholar] [CrossRef]
  12. M. Anson. “Performance measurement in private equity: Another look at the lagged beta effect.” J. Priv. Equity 17 (2013): 29–44. [Google Scholar] [CrossRef]
  13. L. Pastor, and R.F. Stambaugh. “Liquidity risk and expected stock returns.” J. Polit. Econ. 111 (2003): 642–685. [Google Scholar] [CrossRef]
  14. M. Scholes, and J. Williams. “Estimating betas from nonsynchronous data.” J. Financ. Econ. 5 (1977): 309–327. [Google Scholar] [CrossRef]
  15. A. Ang, R. Hodrick, Y. Xing, and X. Zhang. “The cross-section of volatility and expected returns.” J. Financ. 61 (2006): 259–299. [Google Scholar] [CrossRef]
  16. R. Jagannathan, and Z. Wang. “The conditional CAPM and the cross-section of expected returns.” J. Financ. 51 (1996): 3–53. [Google Scholar] [CrossRef]
  17. F. Black, M. Jensen, and M. Scholes. “The capital asset pricing model: Some empirical tests.” In Studies in the Theory of Capital Markets. Edited by M. Jensen. New York, NY, USA: Praeger Publishers, 1972, pp. 79–121. [Google Scholar]
  18. F. Black. “Capital market equilibrium with restricted borrowing.” J. Bus. 45 (1972): 444–455. [Google Scholar] [CrossRef]
  19. O. Blume, and L. Friend. “A new look at the capital asset pricing model.” J. Financ. 28 (1973): 19–33. [Google Scholar] [CrossRef]
  20. M.W. Brandt, A. Brav, J.R. Graham, and A. Kumar. “The idiosyncratic volatility puzzle: Time trend or speculative episodes? ” Rev. Financ. Stud. 23 (2010): 863–899. [Google Scholar] [CrossRef]
  21. J. Campbell. “Stock returns and the term structure.” J. Financ. Econ. 18 (1987): 373–399. [Google Scholar] [CrossRef]
  22. J. Campbell, and R. Shiller. “The dividend-price ratio and expectations of future dividends and discount factors.” Rev. Financ. Stud. 1 (1988): 195–227. [Google Scholar] [CrossRef]
  23. J.Y. Campbell, M. Lettau, B.G. Malkiel, and Y. Xu. “Have individual stocks become more volatile? An empirical exploration of idiosyncratic risk.” J. Financ. 56 (2001): 1–43. [Google Scholar]
  24. M.M. Carhart. “Persistence in mutual fund performance.” J. Financ. 52 (1997): 57–82. [Google Scholar] [CrossRef]
  25. N. Chen, R. Roll, and S. Ross. “Economic forces and the stock market.” J. Bus. 59 (1986): 383–403. [Google Scholar] [CrossRef]
  26. J.H. Cochrane. Asset Pricing. Princeton, NJ, USA: Princeton University Press, 2001. [Google Scholar]
  27. G. Connor, and R. Korajczyk. “Risk and return in an equilibrium APT: Application of a new test methodology.” J. Financ. Econ. 21 (1988): 255–289. [Google Scholar] [CrossRef]
  28. W. DeBondt, and R. Thaler. “Does the stock market overreact? ” J. Financ. 40 (1985): 739–805. [Google Scholar]
  29. E.F. Fama, and J.D. MacBeth. “Risk, return and equilibrium: Empirical tests.” J. Polit. Econ. 81 (1973): 607–636. [Google Scholar] [CrossRef]
  30. E. Fama, and K. French. “Multifactor explanations for asset pricing anomalies.” J. Financ. 51 (1996): 55–94. [Google Scholar] [CrossRef]
  31. W. Ferson, and C. Harvey. “The variation in economic risk premiums.” J. Polit. Econ. 99 (1991): 385–415. [Google Scholar] [CrossRef]
  32. W. Ferson, and C. Harvey. “Conditioning variables and the cross section of stock returns.” J. Financ. 54 (1999): 1325–1360. [Google Scholar] [CrossRef]
  33. A.D. Hall, S. Hwang, and S.E. Satchell. “Using bayesian variable selection methods to choose style factors in global stock return models.” J. Bank. Financ. 26 (2002): 2301–2325. [Google Scholar] [CrossRef]
  34. S. Kothari, and J. Shanken. “Book-to-market, dividend yield and expected market returns: A time series analysis.” J. Financ. Econ. 44 (1997): 169–203. [Google Scholar] [CrossRef]
  35. J.-H. Lee, and D. Stefek. “Do risk factors eat alphas? ” J. Portfolio Manag. 34 (2008): 12–24. [Google Scholar] [CrossRef]
  36. B. Lehmann, and D. Modest. “The empirical foundations of the arbitrage pricing theory.” J. Financ. Econ. 21 (1988): 213–254. [Google Scholar] [CrossRef]
  37. J. Lintner. “The valuation of risky assets and the selection of risky investments in stock portfolios and capital budgets.” Rev. Econ. Stat. 47 (1965): 13–37. [Google Scholar] [CrossRef]
  38. A.W. Lo, and A.C. MacKinlay. “Data-snooping biases in tests of financial asset pricing models.” Rev. Financ. Stud. 3 (1990): 431–468. [Google Scholar] [CrossRef]
  39. A.W. Lo. Hedge Funds: An Analytic Perspective. Princeton, NJ, USA: Princeton University Press, 2010, p. 260. [Google Scholar]
  40. A.C. MacKinlay. “Multifactor models do not explain deviations from the CAPM.” J. Financ. Econ. 38 (1995): 3–28. [Google Scholar] [CrossRef]
  41. R. Merton. “An intertemporal capital asset pricing model.” Econometrica 41 (1973): 867–887. [Google Scholar] [CrossRef]
  42. D. Mukherjee, and A.K. Mishra. “Multifactor Capital Asset Pricing Model Under Alternative Distributional Specification.” Available online: http://ssrn.com/abstract=871398 (accessed on 7 May 2015).
  43. V. Ng, R.F. Engle, and M. Rothschild. “A multi-dynamic-factor model for stock returns.” J. Econ. 52 (1992): 245–266. [Google Scholar] [CrossRef]
  44. S. Ross. “The arbitrage theory of capital asset pricing.” J. Econ. Theory 13 (1976): 341–360. [Google Scholar] [CrossRef]
  45. G. Schwert. “Stock returns and real activity: A century of evidence.” J. Financ. 45 (1990): 1237–1257. [Google Scholar] [CrossRef]
  46. W. Sharpe. “Capital asset prices: A theory of market equilibrium under conditions of risk.” J. Financ. 19 (1964): 425–442. [Google Scholar]
  47. R. Whitelaw. “Time variations and covariations in the expectation and volatility of stock market returns.” J. Financ. 49 (1997): 515–541. [Google Scholar] [CrossRef]
  48. C. Zhang. “A Re-examination of the causes of time-varying stock return volatilities.” J. Financ. Quant. Anal. 45 (2010): 663–684. [Google Scholar] [CrossRef]
  49. C.R. Harvey, Y. Liu, and H. Zhou. “...and the cross-section of expected returns.” Rev. Financ. Stud. Available online: http://ssrn.com/abstract=2249314 (accessed on 7 May 2015).
  50. Z. Kakushadze. “Mean-reversion and optimization.” J. Asset Manag. 16 (2015): 14–40. [Google Scholar] [CrossRef]
  • 1For an additional partial list (with some related literature), see, e.g., [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48], and references therein. For a literature survey, see, e.g., [49].
  • 2Naming conventions vary by industry classification. “Industry” here refers to the most detailed level (i.e., the terminal branch) in a given classification tree (see SubSection 2.1 for details).
  • 3In fact, SRM sometimes may use a (small) subset $U^*$ of $U_{\rm SRM}$ to compute FCM—the required historical data may not be available for the entire $U_{\rm SRM}$. However, it may be (and typically is) available for the trading universe U, so U, not the artificial $U^*$, should be used for computing FCM.
  • 4Principal components provide “customization” to some extent. However, see footnote 8.
  • 5Legal disclaimers regarding this code are included in Appendix C.
  • 6E.g., over the past 20 trading days. One may prefer to take, say, the last 3 months.
  • 7In the 0-th approximation, this is a D-day return. Removing outliers introduces d-dependence.
  • 8One can also use the first K_prin principal components (PC) of SCM C_ij as columns in FLM Ω_iA. However, the out-of-sample instability in the off-diagonal elements of SCM is also inherited by the PC. Furthermore, if M < N, SCM is singular and only M PC are available. It is for these reasons that style and industry factors are more widely used in practical applications.
  • 9Without delving into details, out-of-sample stability and singularity of FCM when M < K are issues to consider.
  • 10See, e.g., [50] for a recent discussion.
  • 11This is a cross-sectional regression; in R notation, ε = residuals(lm(R ∼ Ω − 1, weights = Z)).
  • 12Rotating Ω_iA by an arbitrary K × K nonsingular matrix U_AB, i.e., Ω → Ω U, does not change the regression residuals (14) or the risk neutrality conditions (18).
  • 13This follows from the expression for the inverse of Γ: Γ⁻¹ = Ξ⁻¹ − Ξ⁻¹ Ω Q̃⁻¹ Ωᵀ Ξ⁻¹.
  • 14Appendix B contains C code for symmetric matrix inversion.
  • 15Real-life alphas often have sizable exposure to risk—a real-life alpha is any reasonable expected return; e.g., momentum strategies often have substantial exposure to risk. Furthermore, there is no “perfect” risk model; otherwise, there would only be mean-reversion caused by temporary trading imbalances. For a complementary discussion, see, e.g., [35].
  • 16To illustrate, if, say, Δt is 1 day, then we are computing the correlation of the M-day moving average return R̄_α ≡ (1/M) Σ R̃_α (the sum running over the M days) with the last daily return in the moving average, and we have p rolling periods like this. We have p + M dates and, consequently, p + M − 1 daily returns.
  • 17Note that ρ_M > 0 unless f(λ) ≡ 1, for which ⟨R̃, R̃⟩ would be negative considering M ≫ 1.
  • 18This is “analogous” to what happens in quantum mechanics and quantum field theory. We put the adjective “analogous” in quotation marks because a stochastic process described by Brownian motion is nothing but a Euclidean quantum mechanical particle, so the “analogy” is in fact precise.
  • 19Similarly, growth does not add value in this context either. This is not to say that, e.g., earnings are not important in short-term trading. However, the way to implement them is via monitoring earnings announcements and, e.g., not trading stocks immediately following their earnings announcements, not by using growth style factor in, say, intraday regressions or optimization.
  • 20Arguably, there might be higher-order indirect effects via the book value affecting liquidity and market cap (see below). However, such higher-order effects are expected to be lost in all the noise. They might be ephemerally amplified around the time book value is updated (quarterly).
  • 21Market cap is relevant primarily because it is highly correlated with liquidity.
  • 22One can directly measure intraday liquidity based on “micro” quantities, which is more tedious. Typically, ADDV based computation reasonably agrees with such “micro” computation.
  • 23Conversely, value-based longer horizon strategies would not benefit from any risk factors based on “micro” quantities with, say, millisecond horizons. E.g., statistical arbitrage strategies have high turnover as they attempt to capture intraday mean-reversion effects due to market over-/under-reactions to news events, etc. Value-based strategies have very low turnover given that periods of extreme mispricings seldom occur (e.g., the ’87 Crash, the ’08 Meltdown).
  • 24In the next section we discuss why no-value-adding factors can increase trading costs.
  • 25E.g., R_i are overnight returns; we obtain alphas from these returns by regressing them (possibly, with some weights) over some FLM, and then we trade on these alphas right after the open.
  • 26To improve statistical significance, outliers can be removed (or smoothed, e.g., Winsorized).
  • 27When assessing the F-statistic, it needs to be taken into account that we have (K + 1) vs. K factors, as well as a possible change in the number of observations per factor due to any NAs.
  • 28We used fundamental data from stockpup.com (accessed 07/28/2014) and pricing data from finance.yahoo.com (accessed 07/29/2014) from 06/18/2009 through 06/20/2014 for a universe of 493 stocks, essentially from S&P500. Negative (tangible) book values were omitted for the entire backtesting period.
  • 29Stocks rarely jump industries, let alone sectors, so sector assignments are robust over time.
  • 30BICS naming convention is “sectors → industries → sub-industries”, so our “industries” correspond to BICS “sub-industries”, and our “sub-sectors” correspond to BICS “industries”.
  • 31FCM and ISR must be recomputed based on non-empty industries to remove this contribution.
  • 32These “hidden” industries might be correlated with the non-empty industries. However, such correlations should not be high—if they are high, then the industry classification is too granular (or deficient) and must be pruned (or replaced with a more precise industry classification). Including redundant noise-generating industries in RM is certainly not the right way to handle such cases.
  • 33On paper such noise trades typically have little effect on the simulated P&L, but can reduce the Sharpe ratio—if the approximate neutrality constraints effected by these empty industries do not add value, then the portfolio is suboptimal, i.e., it does not maximize the Sharpe ratio.
  • 34This brings us to another point we made in Introduction: in SRM typically FCM is computed based on some universe U * , a fraction of U S R M . For the same reasons as above, it is preferable to compute FCM based on the trading universe U as U * may not have substantial overlap with U.
  • 35e.g., style factors with discrete values are not normalized. An example is a binary style factor in some SRM indicating whether the stock belongs to the universe U * defined above.
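Footnotes 11 and 12 can be illustrated numerically. The following is a minimal sketch (not from the paper; the paper's own code is in R and C): it reproduces the weighted cross-sectional regression ε = residuals(lm(R ∼ Ω − 1, weights = Z)) in Python, and checks footnote 12's claim that the residuals are invariant under Ω → Ω U for a nonsingular U. The dimensions and random inputs are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 500, 10                      # stocks, risk factors (illustrative)
Omega = rng.normal(size=(N, K))     # factor loadings matrix (FLM)
R = rng.normal(size=N)              # cross-section of returns
Z = rng.uniform(0.5, 2.0, size=N)   # regression weights

def wls_residuals(R, Omega, Z):
    # Weighted least squares without intercept: minimize sum_i Z_i * eps_i^2,
    # i.e., the Python analogue of residuals(lm(R ~ Omega - 1, weights = Z)).
    w = np.sqrt(Z)
    beta, *_ = np.linalg.lstsq(Omega * w[:, None], R * w, rcond=None)
    return R - Omega @ beta

eps = wls_residuals(R, Omega, Z)

# Footnote 12: rotating Omega by a nonsingular K x K matrix U leaves the
# residuals unchanged (only the regression coefficients rotate).
U = rng.normal(size=(K, K))         # generic matrix, nonsingular a.s.
eps_rot = wls_residuals(R, Omega @ U, Z)
print(np.allclose(eps, eps_rot))
```

The invariance holds because Ω and Ω U span the same column space, so the projection of R (and hence the residual) is identical.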
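Footnote 13's inversion formula can likewise be verified. This sketch assumes the standard factor-model covariance Γ = Ξ + Ω Φ Ωᵀ (diagonal specific risk Ξ, FCM Φ) with Q̃ ≡ Φ⁻¹ + Ωᵀ Ξ⁻¹ Ω — the notation is taken from the main text and the identity is the usual Woodbury formula; the dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 200, 5
Omega = rng.normal(size=(N, K))                 # FLM
A = rng.normal(size=(K, K))
Phi = A @ A.T + K * np.eye(K)                   # FCM, positive definite
xi = rng.uniform(0.5, 2.0, size=N)              # specific variances

Gamma = np.diag(xi) + Omega @ Phi @ Omega.T     # model covariance matrix
Xi_inv = np.diag(1.0 / xi)

# Footnote 13: Gamma^{-1} = Xi^{-1} - Xi^{-1} Omega Q~^{-1} Omega^T Xi^{-1},
# where Q~ = Phi^{-1} + Omega^T Xi^{-1} Omega (Woodbury identity).
Q = np.linalg.inv(Phi) + Omega.T @ Xi_inv @ Omega
Gamma_inv = Xi_inv - Xi_inv @ Omega @ np.linalg.inv(Q) @ Omega.T @ Xi_inv

print(np.allclose(Gamma_inv, np.linalg.inv(Gamma)))
```

The practical point is that only the K × K matrix Q̃ needs inverting, not the N × N matrix Γ, which is what makes factor models computationally attractive for large universes.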
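The rolling construction in footnote 16 can be sketched as follows. This is an illustrative Python fragment under assumed inputs (random daily returns, window M, p rolling periods): for each period it forms the M-day moving average return and correlates it, across the p periods, with the last daily return entering that average.

```python
import numpy as np

rng = np.random.default_rng(2)
M, p = 10, 250                      # moving-average window, rolling periods
r = rng.normal(size=p + M - 1)      # p + M dates give p + M - 1 daily returns

# M-day moving average return for each of the p rolling periods.
R_bar = np.array([r[t:t + M].mean() for t in range(p)])
# The last daily return entering each moving average.
r_last = r[M - 1:M - 1 + p]

rho = np.corrcoef(R_bar, r_last)[0, 1]
print(round(rho, 3))
```

For i.i.d. returns one expects this correlation to be of order 1/√M, since the last return is one of the M equally weighted terms in the average; footnote 17's point is that for actual (mean-reverting) returns the sign and size of ρ_M carry information.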

Share and Cite

MDPI and ACS Style

Kakushadze, Z.; Liew, J.K.-S. Custom v. Standardized Risk Models. Risks 2015, 3, 112-138. https://doi.org/10.3390/risks3020112
