Dispersion Trading Based on the Explanatory Power of S&P 500 Stock Returns

This paper develops a dispersion trading strategy based on a statistical index subsetting procedure and applies it to the S&P 500 constituents from January 2000 to December 2017. In particular, our selection process determines appropriate subset weights by exploiting a principal component analysis to specify the individual index explanatory power of each stock. In the following out-of-sample trading period, we trade the most suitable stocks using hedged and unhedged approaches. Within the large-scale back-testing study, the trading frameworks achieve statistically and economically significant returns of 14.52 and 26.51 percent p.a. after transaction costs, as well as Sharpe ratios of 0.40 and 0.34, respectively. Furthermore, the trading performance is robust across varying market conditions. By benchmarking our strategies against a naive subsetting scheme and a buy-and-hold approach, we find that our statistical trading systems possess superior risk-return characteristics. Finally, a deep dive analysis shows synchronous developments between the chosen number of principal components and the S&P 500 index.


Introduction
Relative value trading strategies, often referred to as statistical arbitrage, were developed by Morgan Stanley's quantitative group in the mid-1980s and describe a market neutral trading approach [1]. Those strategies attempt to generate profits from the mean reversion of two closely related securities that diverge temporarily. Pairs trading, which in its plain form tries to exploit mispricing between two co-moving assets, is probably the most popular delta-one trading approach amongst relative value strategies. Several studies show that those procedures generate significant and robust returns (see [2][3][4][5][6]).
In the non-linear space, relative value strategies are also prominent. Dispersion approaches are among the most common trading algorithms and attempt to profit from implied volatility spreads of related assets and changes in correlations. Since index options usually incorporate higher implied volatility and correlation than an index replicating basket of single stock options, returns are generated by selling index options and buying the basket. Ultimately, the trader goes long volatility and short correlation [7]. As shown by [8,9], volatility based strategies generate meaningful and reliable returns. Dispersion trades are normally conducted by sophisticated investors such as hedge funds [10]. In 2011, the Financial Times speculated that Och-Ziff Capital Management (renamed Sculptor Capital Management, Inc. in 2019, headquartered in New York City, United States), an alternative asset manager with $34.5bn total assets under management [11], set up a dispersion trade worth around $8.8bn on the S&P 100 index [12]. In academia, References [13][14][15] examined the profitability of dispersion trades and delivered evidence of substantial returns across markets. However, Reference [16] reported that returns declined after the year 2000 due to structural changes in options markets. References [14][15][16] enhanced their returns by trading dispersion based on an index subset. All of those studies try to replicate the index with as few securities as possible but neglect the individual explanatory power of stocks in their weighting schemes. This provides a clear opportunity for further improvements of the subsetting procedure.
This manuscript contributes to the existing literature in several aspects. First, we introduce a novel statistical approach to select an appropriate index subset by determining weights based on the individual index explanatory power of each stock. Second, we provide a large-scale empirical study on a highly liquid market that covers the period from January 2000 to December 2017 and therefore includes major financial events such as 9/11 and the global financial crisis. Third, we benchmark our statistical trading approaches against baseline dispersion trading algorithms and a buy-and-hold strategy. Fourth, we conduct a deep dive analysis, including robustness, risk factor, and sub-period analysis, that reports economically and statistically significant annual returns of 14.51 percent after transaction costs. Our robustness analysis suggests that our approach produces reliable results independent of transaction costs, reinvestment rate, and portfolio size. Fifth, we evaluate in depth our innovative selection process and report the number of required principal components and selected sector exposures over the study period. We find a synchronous relationship between the number of principal components that are necessary to describe 90 percent of the stock variance and the S&P 500 index performance, i.e., if the market performs well, more components are required. Finally, we formulate policy recommendations for regulators and investors that could utilize our approach for risk management purposes and cost-efficient dispersion trade executions.
The remainder of this paper is structured as follows. Section 2 provides the underlying theoretical framework. In Section 3, we describe the empirical back-testing framework followed by a comprehensive result analysis in Section 4. Finally, Section 5 concludes our work, provides practical policy recommendations, and gives an outlook of future research areas.

Theoretical Framework
This section provides an overview of the theoretical framework of our trading strategy. Section 2.1 describes the underlying methodology and the drivers of dispersion trades. Different dispersion trading structures and enhancement methods are elaborated in Section 2.2.

Dispersion Foundation and Trading Rationale
The continuous process of a stock price S is defined as [17]:

dS_t = \mu(S_t, t) dt + \sigma(S_t, t) dW_t,    (1)

where the dt term represents the drift and the second term denotes the diffusion component. In one of its simplest forms, the process follows a constant drift \mu and incorporates a constant volatility \sigma. In various financial models, for example in the well-known Black-Scholes model (see [18]), it is assumed that the underlying asset follows a geometric Brownian motion (GBM):

dS_t = \mu S_t dt + \sigma S_t dW_t.    (2)

Following [19,20], we define return dispersion for an equity index at time t as:

D_t = \sqrt{\sum_{i=1}^{N} w_i (R_{i,t} - R_{I,t})^2},    (3)

where N represents the number of index members, w_i the index weight, and R_{i,t} the return of stock i at time t. Moreover, R_{I,t} denotes the index return with R_{I,t} = \sum_{i=1}^{N} w_i R_{i,t}. Return dispersion statistically describes the spread of returns in an index. References [21][22][23] showed in their seminal works that the realized variance (RV) is an accurate estimator of the actual variance (\sigma^2) as RV converges to the quadratic variation. Therefore, RV and \sigma^2 are used interchangeably in the following. Applying the definition of the realized variance of the last J returns, RV_{i,t} = \sum_{k=1}^{J} R_{i,t-k}^2, Equation (3) can be rewritten as:

D_t = \sqrt{\sum_{i=1}^{N} w_i RV_{i,t} - RV_{I,t}}.    (4)

Expanding Equation (4) by the index variance of perfectly correlated index constituents yields:

D_t = \sqrt{\Big[\sum_{i=1}^{N} w_i RV_{i,t} - \Big(\sum_{i=1}^{N} w_i \sqrt{RV_{i,t}}\Big)^2\Big] + \Big[\Big(\sum_{i=1}^{N} w_i \sqrt{RV_{i,t}}\Big)^2 - RV_{I,t}\Big]}.    (5)

The first term represents the variance dispersion of the single index constituents. This component is independent of the individual correlations. However, the second expression depends on realized correlations and describes the spread between the index variance under perfectly positively correlated index members and the realized correlations. Therefore, dispersion trades are exposed to volatility and correlation. This is reasonable as the variance of the index is eventually a function of the realized correlations between index members and their variances.
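To make the decomposition into a volatility and a correlation component concrete, the following sketch computes both parts of the squared dispersion from index weights, constituent variances, and the index variance (function and variable names are our own illustration, not from the study):

```python
import numpy as np

def dispersion_decomposition(weights, stock_vars, index_var):
    """Split squared return dispersion into a volatility and a correlation component.

    weights    : index weights w_i
    stock_vars : realized variances RV_i of the constituents
    index_var  : realized variance RV_I of the index
    """
    w = np.asarray(weights, dtype=float)
    v = np.asarray(stock_vars, dtype=float)
    basket_var = np.sum(w * v)                      # sum_i w_i RV_i
    perfect_corr_var = np.sum(w * np.sqrt(v)) ** 2  # index variance if rho = 1
    vol_part = basket_var - perfect_corr_var        # correlation independent part
    corr_part = perfect_corr_var - index_var        # correlation dependent part
    return vol_part, corr_part
```

For an equally weighted two-stock index with variances 0.04 and 0.08 and a pairwise correlation of 0.5, the two components sum to a squared dispersion of roughly 0.0159.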
Reference [24] already quantified this relationship in 1952:

\sigma_I^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} w_i w_j \sigma_i \sigma_j \rho_{i,j},    (6)

where \sigma_i (\sigma_j) represents the volatility of stock i (j) and \rho_{i,j} denotes the correlation coefficient between shares i and j. Figure 1 illustrates the dispersion drivers graphically. Dispersion consists of volatility and correlation dispersion, i.e., the deviation of the realized correlation from perfectly positively correlated index members. The missing diversification benefits of the index compared to uncorrelated constituents decrease the profit of a dispersion trade, highlighting the short correlation characteristic of a long dispersion trade. Hence, a long dispersion trade profits from an increase in individual stock volatility and a decrease in index volatility, which itself implies a decline in correlation. Shorting dispersion leads to a gain when index volatility rises while the constituents' volatility remains unchanged or falls.
In practice, multiple ways exist to structure dispersion trades, each possessing distinct merits. In our empirical study in Section 3, we develop a cost-efficient way to trade dispersion. Within the scope of this work, we focus on two trading approaches, namely at-the-money (ATM) straddles with and without delta hedging. Both strategies rely on options to trade volatility. When expressing a view on dispersion using non-linear products, one has to consider the implied volatility (IV), which essentially reflects the costs of an option. Profits from owning plain vanilla options are generally achieved when the realized volatility exceeds the implied one due to the payoff convexity. The contrary applies to short positions since these exhibit a concave payoff structure. Thus, a long dispersion trade would only be established if the expected realized volatility exceeds the implied one.
Two essential metrics exist for measuring the implied costs of dispersion trades. First, the costs can be expressed as implied volatility. To assess the attractiveness of a trade, the IV of the index has to be compared with that of the index replicating basket. This method has to assume an average correlation coefficient in order to calculate the IV of the basket. To estimate the average correlation, historical realizations are often used. Second, implied costs can also be directly expressed as the average implied correlation, which is computed based on the IVs of the index and its constituents. This measurement simply backs out the average implied correlation so that the IV of the basket equals the index IV. Through modification of the Markowitz portfolio variance equation (see Equation (6)) and assuming \rho = \rho_{i,j} for i \neq j and i, j = 1, ..., N, the implied volatility can be expressed as an average implied correlation (see [16]):

\rho_{imp} = \frac{\sigma_I^2 - \sum_{i=1}^{N} w_i^2 \sigma_i^2}{\sum_{i=1}^{N} \sum_{j \neq i} w_i w_j \sigma_i \sigma_j}.    (7)

For a better understanding, we give the following example: assume an equally weighted portfolio of two stocks with a pairwise correlation of 0.5 and realized variances of 0.04 and 0.08, respectively. The resulting index variance is 0.0441, and the squared dispersion of 0.0159 can be split, following Equation (5), into a volatility dispersion of 0.0017 and a correlation dispersion of 0.0141. It is easily perceivable that the index variance decreases with a lower correlation of the two stocks. If the two assets are perfectly negatively correlated, the index variance is virtually zero. This represents the best case for a long dispersion trade as the variance of the single index constituents is unaffected, while the index is exposed to minimal variance. Nevertheless, this extreme case is not realistic as indices normally incorporate more than two securities. Adding an additional stock inevitably leads to a positive linear relationship with one or the other constituent. Moreover, stocks are typically positively correlated as they exhibit similar risk factors.
The representation of implied costs as average implied correlation is a useful concept as it enables us to assess deviations of index and basket IV by computing a single figure that is independent of the index size. Hence, this metric is always one-dimensional. Typically, the implied average correlation is compared with historical or forecasted correlations to identify profitable trading opportunities [13,16]. We include this trading filter in our robustness check in Section 4.4 to examine whether it is economically beneficial to trade conditionally on correlation. Overall, investors would rather sell dispersion in an environment of exceptionally low implied correlation than build a long position since correlation is expected to normalize in the long run.
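As a sketch, the average implied correlation described above can be backed out from the index IV and the constituent IVs as follows (assuming one common pairwise correlation for all pairs; names are our own):

```python
import numpy as np

def avg_implied_correlation(iv_index, ivs, weights):
    """Back out the average implied correlation so that the basket IV matches the index IV.

    Assumes a single pairwise correlation rho for all pairs i != j.
    """
    ws = np.asarray(weights, dtype=float) * np.asarray(ivs, dtype=float)  # w_i * sigma_i
    diag = np.sum(ws ** 2)        # sum_i w_i^2 sigma_i^2
    num = iv_index ** 2 - diag
    den = np.sum(ws) ** 2 - diag  # sum over i != j of w_i w_j sigma_i sigma_j
    return num / den
```

For the two-stock example of Section 2.1, plugging in the index volatility implied by a correlation of 0.5 recovers exactly 0.5.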
However, most of the time investors engage in long dispersion trades. This has mainly two reasons. First, the implied volatility of index options has persistently been overpriced compared to that of their constituents. Thus, selling index options is more attractive than buying them. Second, index option sellers can generate profits from the embedded correlation risk premium. Buying index options is ultimately an insurance against decreasing diversification as correlations tend to rise in market downturns. An increase in correlation negatively affects any portfolio diversification. Therefore, many investors are willing to pay a premium to hedge against rising correlations, making long dispersion trades more attractive [25,26]. Nonetheless, Reference [16] showed that the correlation risk premium seems to play only a minor role in explaining the performance of dispersion trades and concluded that returns depend mainly on mispricing and market inefficiency. Over the years, several explanations for the overpricing of index options have emerged. The most prominent argument is related to the supply and demand of index and stock options. Changes in implied volatility are rooted in net buying and selling pressure. Amongst institutional investors, there is usually high demand for index options, especially puts, to hedge their portfolios. This creates net buying pressure, resulting in a higher implied volatility for index options [27][28][29]. Hedge funds and other sophisticated investors engage in call and put underwriting on single stock options to earn the negative volatility risk premium embedded in option prices [30]. Due to consistent overpricing, selling both insurance (puts) and lottery tickets (calls) generates positive net returns in the long run, despite a substantial crash risk [31]. Hence, sophisticated market participants would sell volatility especially when the implied volatility is high.
This was for example the case after the bankruptcy of Long-Term Capital Management in 1998 when several investors sold the high implied volatility [32]. The net buying pressure for index options and net selling pressure for stock options create the typical mispricing between index IV and the IV of a replicating portfolio. Dispersion trades therefore help balance the supply and demand of index and single name options as the strategy involves buying the oversupplied single stock options and selling the strongly demanded index options. In a long dispersion trade, investors ultimately act as the liquidity provider to balance single stock and index volatility.

Dispersion Trading Strategies
In the market, a variety of structures to capture dispersion are well established. In this subsection, we give an overview of the two most common variations, namely at-the-money straddles (Section 2.2.1) and at-the-money straddles with delta hedging (Section 2.2.2).

At-The-Money Straddle
One of the traditional and most transparent, as well as most liquid ways to trade volatility is buying and selling ATM straddles. A long straddle involves a long position in a call and a put with the same strike price (K) and maturity (T). Selling both options results in a short straddle. The payoff of a long straddle position at T with respect to the share price (S_T) is given by:

[S_T - K]^+ + [K - S_T]^+ = |S_T - K|,    (8)

where [x]^+ represents max(x, 0). Translating an ATM straddle into the context of a long dispersion trade, the payoff equals:

\sum_{i=1}^{N} w_i \left( [S_{i,T} - K_i]^+ + [K_i - S_{i,T}]^+ \right) - \left( [S_{I,T} - K_I]^+ + [K_I - S_{I,T}]^+ \right).    (9)

The first part denotes the single stock option portfolio while the second term corresponds to the index leg. When setting up an at-the-money straddle, the initial delta of the position is approximately zero as the put and call deltas offset each other [7]. Despite the low delta at inception, this structure is exposed to directional movements and is therefore not a pure volatility play. This is rooted in changes of the position's overall delta as the underlying moves or the time to maturity decreases.
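The two straddle payoffs above can be written as a short sketch (our own illustrative helper functions):

```python
def straddle_payoff(s_t, k):
    """Payoff of a long straddle at expiry: [S_T - K]+ + [K - S_T]+ = |S_T - K|."""
    return max(s_t - k, 0.0) + max(k - s_t, 0.0)

def long_dispersion_payoff(stock_prices, strikes, weights, index_price, index_strike):
    """Long dispersion at expiry: weighted basket of stock straddles minus the index straddle."""
    basket = sum(w * straddle_payoff(s, k)
                 for s, k, w in zip(stock_prices, strikes, weights))
    return basket - straddle_payoff(index_price, index_strike)
```

A move of the underlying in either direction away from the strike pays off, which is why the structure is a volatility play rather than a directional bet.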

At-the-Money Straddle with Delta Hedging
Delta hedging eliminates the directional exposure of ATM straddles. Through directional hedging, profits from an increase in volatility are locked in. Therefore, this structure generates income even if the underlying expires exactly at-the-money. The volatility profits are generated through gamma scalping. A long gamma position implies buying shares of the asset on the way down and selling them on the way up [7]. This represents every long investor's goal: buying low and selling high. A net profit is generated when the realized volatility is higher than the implied volatility at inception, as more gamma gains are earned than the market priced in. Hence, ATM straddles with frequent delta hedging are better suited to trade volatility than a plain option.
Assuming the underlying time series follows a GBM (Equation (2)) and the Black-Scholes assumptions hold, it can be shown that the gamma scalping gains are exactly offset by the theta bleeding when the risk-free rate equals zero [18,33]. Taking the Black-Scholes PDE:

\frac{\partial V}{\partial t} + r S \frac{\partial V}{\partial S} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} = r V,    (10)

and substituting the partial derivatives with the Greeks yields for a delta neutral portfolio:

\Theta + \frac{1}{2} \sigma^2 S^2 \Gamma = r \Pi.    (11)

Invoking Itô's lemma, this can be expressed for an infinitesimally small time change dt as:

\delta \Pi = \Theta \, \delta t + \frac{1}{2} \Gamma \, (\delta S)^2.    (12)

In the above-mentioned equations, \Pi describes the delta neutral portfolio, S represents the asset price, r illustrates the risk-free interest rate, \Theta denotes the option's price sensitivity with respect to the passage of time, and \Gamma denotes the sensitivity of the delta with respect to the underlying price.
In the real world, when using a discrete delta hedging method, theta and gamma do not necessarily offset each other. This results partially from the risk and randomness that an occasionally hedged straddle exhibits. In fact, the profit-and-loss (P&L) of a long straddle can be approximated as [17,33]:

P\&L \approx \sum_{i} \Gamma_{\$,t_i} \left( \left( \frac{\delta S_{t_i}}{S_{t_i}} \right)^2 - \sigma_{implied}^2 \, \delta t \right),    (13)

where \sigma_{implied} denotes the implied volatility and \delta describes the change in a variable. The first term \Gamma_{\$,t_i} is known as the dollar gamma, and the last expression illustrates the difference between realized and implied variance, implying that a hedged ATM straddle is indeed a volatility play. However, this approach is still not a pure volatility trade due to the interaction of the dollar gamma with the volatility spread. This relationship creates a path dependency of the P&L. Notably, the highest P&L is achieved when the underlying follows no clear trend and instead oscillates in relatively big movements around the strike price. This is driven by a high gamma exposure along the followed path since the straddle remains relatively close to ATM most of the time. An emerging trend in either direction would cut hedging profits, despite the positive variance difference. In conclusion, delta hedged ATM straddles provide a way to express a view on volatility with relatively low directional exposure.
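The discrete P&L approximation above can be sketched as follows; the price path and dollar gammas are illustrative inputs of our own choosing:

```python
import numpy as np

def hedged_straddle_pnl(prices, dollar_gammas, sigma_implied, dt=1.0 / 252.0):
    """Approximate P&L of a daily delta hedged long straddle:
    sum of dollar gamma times (squared realized return minus implied variance over dt)."""
    p = np.asarray(prices, dtype=float)
    rets = np.diff(p) / p[:-1]                            # daily returns along the path
    g = np.asarray(dollar_gammas, dtype=float)[:len(rets)]
    return float(np.sum(g * (rets ** 2 - sigma_implied ** 2 * dt)))
```

A flat price path earns no gamma gains and loses the full theta, while large oscillations around the strike maximize the P&L, reflecting the path dependency discussed above.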

Back-Testing Framework
This section describes the design of the back-testing study. First, an overview of the software and data used is provided (Section 3.1). Second, Section 3.2 introduces a method to subset index components (formation period). Third, we present our trading strategy in Section 3.3 (trading period). Following [2,34], we divide the dataset into formation-trading constellations, each shifted by approximately one month. Finally, Section 3.4 describes our return calculation method.

Data and Software
The empirical back-testing study is based on the daily option and price data of the S&P 500 and its constituents from January 2000 to December 2017. This index represents a highly liquid and broad equity market of leading U.S. companies. Hence, this dataset is suitable for examining any potential mispricing since investor scrutiny and analyst coverage are high. The stock price data, including outstanding shares, dividends, and stock split information, were retrieved from the Center for Research in Security Prices (CRSP). Information about the index composition and Standard Industrial Classification (SIC) codes was obtained from Compustat. For the options data of the S&P 500 and its members, we rely on the OptionMetrics Ivy database, which offers a comprehensive information set. From that database, all available one month options were extracted. We note that OptionMetrics provides only data on American options for single stocks. Thus, the returns in Section 4 are probably understated. This conclusion arises from two facts. First, the constructed strategies could also be established with European options, which usually trade at a discount to American ones due to the lack of the early exercise premium. Second, a dispersion strategy is typically long individual stock options, therefore resulting in higher initial costs. Nonetheless, this study provides a conservative evaluation of the attractiveness of dispersion trades. The above-mentioned data providers were accessed via Wharton Research Data Services (WRDS). The one month U.S. Dollar LIBOR, which serves as a risk-free rate proxy in the following, was directly downloaded from the Federal Reserve Economic Research database and transformed into a continuously compounded interest rate. Moreover, we use the Kenneth R. French data library to obtain all relevant risk factors.
To keep track of the changes in the index composition, a constituent matrix with appropriate stock weights is created. As the weights of the S&P 500 are not publicly available, the individual weights are reconstructed according to the market capitalization of every stock. This approach does not yield a perfectly accurate representation of the index as the free-float adjustment factors used by the index provider are not considered. However, it provides us with a reliable proxy. The options data were cleaned by eliminating all data points with missing quotes. Furthermore, the moneyness was calculated for every option to determine ATM options.
All concepts were implemented in the general-purpose programming language Python. For some calculations and graphs, the statistical programming language R was used as a supplement [35]. Computationally intensive calculations were outsourced to cloud computing platforms.

Formation Period
Transaction costs play a major role in trading financial securities (see [36,37]). In particular, when traded products are complex or exotic, transaction costs might be substantial. Reference [38] showed that portfolio construction is of great importance in order to execute strategies cost efficiently in the presence of transaction costs. Therefore, portfolio building represents a material optimization potential for trading strategies. One simple way to reduce transaction fees is to trade less. In dispersion trades, the single stock option basket is the main driver of trading costs. Therefore, it is desirable to reduce the number of traded assets, especially when the portfolio is delta hedged. However, trading fewer stocks than the index incorporates could result in an insufficient replication, which might not represent the desired dispersion exposure accurately.
To determine an appropriate subset of index constituents that acts as a hedge for the index position, a principal component analysis (PCA) is used. This statistical method converts a set of correlated variables into a set of linearly uncorrelated variables via an orthogonal transformation [39,40]. The main goal of this method is to find the dominant pattern of price movements, as well as stocks that incorporate this behavior most prominently. Selecting those assets leads to a portfolio that explains the majority of the index movement.
The fundamentals of the applied procedure are based on [15,16,41], but we improve the selection process by identifying stocks with the highest explanatory power. Therefore, we are in a position to get a basket of stocks that explains the index in a more accurate way. To be more specific, our method is comprised of six steps that recur every trading day:

1. Calculate the covariance matrix of all index members based on the trailing twelve month daily log returns.
2. Decompose the covariance matrix into its eigenvectors and order the principal components according to their explanatory power.
3. Determine the first I principal components that cumulatively explain 90% of the variance.
4. Compute the explained variation of every index constituent by performing Steps 1 and 2 while omitting the specific index member and comparing the new explained variance of the I components to that of the full index member set.
5. Select the top N stocks with the highest explained variation.
6. Calculate the individual weights as the ratio of one index member's explained variation to the total explained variation of the selected N stocks.
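The six steps above can be condensed into the following Python sketch (a simplified illustration; the function name and the leave-one-out comparison metric in Step 4 are our own reading of the procedure):

```python
import numpy as np

def pca_subset_weights(returns, var_threshold=0.90, top_n=5):
    """PCA based index subsetting (Steps 1-6).

    returns: (T, N) array of trailing daily log returns, one column per stock.
    Returns the column indices of the selected stocks and their portfolio weights.
    """
    # Step 1: covariance matrix of all index members
    cov = np.cov(returns, rowvar=False)

    # Step 2: eigendecomposition, components ordered by explanatory power
    eigvals = np.linalg.eigvalsh(cov)[::-1]        # descending eigenvalues
    explained = eigvals / eigvals.sum()

    # Step 3: first I components cumulatively explaining var_threshold
    i_comp = int(np.searchsorted(np.cumsum(explained), var_threshold)) + 1
    full_expl = np.cumsum(explained)[i_comp - 1]

    # Step 4: leave-one-out explained variance of the first I components
    n_assets = returns.shape[1]
    power = np.empty(n_assets)
    for j in range(n_assets):
        sub = np.delete(returns, j, axis=1)
        ev = np.linalg.eigvalsh(np.cov(sub, rowvar=False))[::-1]
        ev = ev / ev.sum()
        # change in explained variance when stock j is removed
        power[j] = full_expl - np.cumsum(ev)[i_comp - 1]

    # Step 5: top N stocks with the highest explained variation
    top = np.argsort(power)[::-1][:top_n]

    # Step 6: weights proportional to individual explanatory power
    weights = power[top] / power[top].sum()
    return top, weights
```

The leave-one-out loop in Step 4 dominates the runtime, which is why computationally intensive runs of the study were outsourced to cloud computing platforms (Section 3.1).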
To illustrate our approach in more detail, we report an example of our portfolio construction methodology for the trading day 29/11/2017 below.

1. Calculating the trailing 12 month return covariance matrix, after excluding stocks with missing data points, yields a 410 × 410 matrix.
2. Decomposing the covariance matrix from Step 1 into its eigenvectors results in 410 principal components. To keep this section concise, we report only the selected components in Table 1.
3. Examining the cumulative variance of the principal components in the last column of Table 1 shows that we have to set I = 36 to explain 90% of the variance.
4. After repeating Steps 1 and 2 while omitting one index member at a time, we receive the new cumulative variance of the 36 components for every stock, which enables us to calculate the individual explanatory power by comparing the new cumulative variance to that of the full index member set (Columns 3-5 in Table 2).
5. We select the top five stocks with the highest explained variation, which in our example are Xcel Energy Inc. (XEL), Lincoln National Corporation (LNC), CMS Energy Corporation (CMS), Bank of America Corporation (BAC), and SunTrust Banks, Inc. (STI).
6. Based on the explained variation of the top five stocks, we calculate the appropriate portfolio weights as the ratio of individually explained variation and total explained variation of the selected stocks. Eventually, we arrive at the portfolio weights reported in Table 2.

Trading Period
As [42] showed that even professional traders, who are normally considered rational and sophisticated, suffer from behavioral biases, our strategy is based on predefined and clear rules to alleviate any unconscious tendencies. In line with [15,16], we implement our trading strategies based on one month ATM options. The following rules specify our trading framework:

1. Whenever one month ATM options are available, a trading position is established.
2. A trading position always consists of a single stock option basket and an index leg.
3. Every position is held until expiry.
Whenever a new position is established, we invest 20% of our capital. The remaining capital stock acts as a liquidity buffer to cover obligations that may emerge from selling options. All uncommitted capital is invested at LIBOR. In total, we construct the four trading systems PCA straddle delta hedged (PSD), PCA straddle delta unhedged (PSU), largest constituents straddle delta hedged (LSD), and largest constituents straddle delta unhedged (LSU). In order to benchmark our index subsetting scheme (Section 3.2), we also apply our strategies to a naive subset of the index, consisting of the five largest constituents. As a point of reference, a simple buy-and-hold strategy on the index (MKT) is reported. Details can be found in the following lines.
• PCA straddle delta hedged (PSD): A dispersion position consisting of a single stock straddle basket based on the PCA selected subset (Section 3.2) and an index straddle leg, with daily delta hedging of the overall position.
• PCA straddle delta unhedged (PSU): The trade construction and selection process is identical to PSD. However, the overall delta position remains unhedged.
• Largest constituents straddle delta hedged (LSD): The trade construction is identical to PSD, but the subset consists of the five largest index constituents instead of the PCA selection.
• Largest constituents straddle delta unhedged (LSU): The trade construction and selection process is identical to LSD. However, the overall delta position remains unhedged, hence representing a less sophisticated approach than LSD.
• Naive buy-and-hold strategy (MKT): This approach represents a simple buy-and-hold strategy. The index is bought in January 2000 and held during the complete back-testing period. MKT is the simplest strategy in our study.
Transaction costs are inevitable when participating in financial markets. Thus, transaction fees have to be considered in evaluating trading strategies to provide a more realistic picture of profitability. Besides market impact, bid-ask spreads, and commissions, slippages are the main cost driver. Over the last few years, trading costs decreased due to electronic trading, decimalization, increased competition, and regulatory changes (see [43,44]). The retail sector also benefited from this development as brokers such as Charles Schwab first decreased and then completely eliminated fees for stocks, ETFs, and options that are listed on U.S. or Canadian exchanges [45]. However, transaction costs are hard to estimate as they depend on multiple factors such as asset class and client profile. To account for asset specific trading fees in our back-testing study, we apply 10 bps for every half turn per option, to which 2.5 bps are added if delta hedging is performed. In light of our trading strategy in a highly liquid equity market, the cost assumptions appear to be realistic.
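Under these assumptions, the fee schedule can be sketched as follows (applying the fee to a notional base per half turn is our own simplification for illustration):

```python
def transaction_cost(notional, half_turns=1, delta_hedged=False):
    """Fees assumed in the back-test: 10 bps of notional per half turn per option,
    plus 2.5 bps per half turn when delta hedging is performed."""
    bps = 10.0 + (2.5 if delta_hedged else 0.0)
    return notional * half_turns * bps / 10_000.0
```

Because every position is held until expiry, option legs only incur opening costs, while delta hedged variants pay the surcharge on each hedging transaction.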

Return Calculation
In contrast to [2,37,46], who constructed a fully invested portfolio, we base our returns on a partially invested portfolio. Our calculation is similar to the concept of return on risk-adjusted capital (RORAC). There are two reasons for choosing this method. First, short selling options incorporates extreme payoffs that can easily outstrip option premiums. As a result, investors need substantial liquidity to cover any future cash outflows to honor their obligations. Second, writing options requires a margin to enter and maintain the position. For example, the initial margin for a one month short index ATM call position at the Chicago Board Options Exchange (CBOE) amounts to 100% of the options proceeds plus 15% of the aggregated underlying index value minus the out-of-the-money amount [47].
The return of a dispersion trade is calculated as:

R_T = \frac{(V_{T,Basket} - V_{t,Basket}) - (V_{T,Index} - V_{t,Index})}{V_{t,Basket} + V_{t,Index}},    (14)

where V_t represents the initial costs of the individual legs: V_{t,Index} = C_{I,t}(K_I, T) + P_{I,t}(K_I, T) and, analogously, V_{t,Basket} = \sum_{i=1}^{N} w_i \left( C_{i,t}(K_i, T) + P_{i,t}(K_i, T) \right).
C_t and P_t denote the prices of calls and puts for a specific time to maturity (T) and strike (K). When delta hedging is conducted, the generated P&L is added to the numerator of Equation (14) for both legs. As the continuous hedging assumed by [18] is impracticable in reality due to transaction costs and limited order execution speed, we undertake daily delta hedging at market close. To calculate the delta exposure (\Delta), the Black-Scholes framework is used. IV_j \forall j \in (t, T) is assumed to be the annualized one month trailing standard deviation of log returns. Any proceeds (losses) from delta hedging are invested (financed) at dollar LIBOR (r). This rate also serves as the borrowing rate when cash is needed to buy shares (S). Hence, the delta P&L at T for a portfolio of N options is determined by:

P\&L_{\Delta,T} = \sum_{i=1}^{N} \sum_{j=t}^{T-1} \Delta_{i,j} \left( S_{i,j} e^{r \, \delta t} - S_{i,j+1} - D_{i,j+1} \right),    (15)

where D_{i,t} denotes the present value at time t of the dividend payment of stock i.
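The daily hedge ratio is computed in the Black-Scholes framework with the trailing one month volatility as IV input; a minimal sketch ignoring dividends could look as follows (function names are our own):

```python
from math import log, sqrt
from statistics import NormalDist

def bs_call_delta(s, k, t, r, sigma):
    """Black-Scholes call delta N(d1); the put delta follows as N(d1) - 1."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    return NormalDist().cdf(d1)

def straddle_delta(s, k, t, r, sigma):
    """Delta of a long straddle: call delta plus put delta, i.e., 2 N(d1) - 1."""
    return 2.0 * bs_call_delta(s, k, t, r, sigma) - 1.0
```

At inception of an ATM straddle, this delta is close to zero, consistent with the observation in Section 2.2.1.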

Results
We follow [48] and conduct a fully-fledged performance evaluation of the strategies PSD, PSU, LSD, and LSU from January 2000 to December 2017, compared to the general market MKT.
The key results for the options portfolio of the top five stocks are depicted in two panels, before and after transaction costs. First, we evaluate the performance of all trading strategies (Section 4.1), conduct a sub-period analysis (Section 4.2), and analyze the sensitivity to varying market frictions (Section 4.3). Second, Section 4.4 checks the robustness, and Section 4.5 examines the exposure to common systematic risk factors. Finally, we investigate the number of principal components and the corresponding sector exposure of our PCA based selection process (Section 5). Table 3 shows the risk-return metrics per trade and the corresponding trading statistics for the top five stocks per strategy from January 2000 to December 2017. We observe statistically significant returns for PSD and PSU, with Newey-West (NW) t-statistics above 2.20 before transaction costs and above 1.90 after transaction costs. A similar pattern emerges from the economic perspective: the mean return per trade is well above zero percent for PSD (0.84 percent) and PSU (2.14 percent) after transaction costs. In contrast, LSD produces a relatively small average return of 0.30 percent per trade. Interestingly, LSU achieves 1.28 percent, although this value is not statistically significant. The naive buy-and-hold strategy MKT achieves identical results before and after transaction costs because the one-off fees are negligible. The range, i.e., the difference between maximum and minimum, is substantially lower for the delta hedged strategies PSD and LSD, which reflects the lower directional risk of those approaches. Furthermore, the standard deviation of PSU (15.23 percent) and LSU (13.53 percent) is approximately two times higher than that of PSD (6.78 percent) and LSD (7.25 percent). We follow [49] and report the historical value at risk (VaR) figures.
Overall, the tail risk of the delta hedged strategies is greatly reduced, e.g., the historical VaR 5% after transaction costs for PSD is −11.13 percent compared to −20.90 percent for PSU. The decline from a historical peak, called maximum drawdown, is at a relatively low level for PSD (70.53 percent) compared to PSU (94.70 percent), LSD (82.01 percent), and LSU (85.64 percent). The hit rate, i.e., the number of trades with a positive return, varies between 52.15 percent (LSD) and 56.71 percent (PSD). Across all systems, the number of actually executed trades is 395 since none of the strategies suffers a total loss. Consequently, the average number of trades per year is approximately 22; this number is well in line with [50].
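The reported metrics can be reproduced with standard definitions. The following sketch shows one common way to compute the NW t-statistic of the mean return (Bartlett weights), the historical VaR, the maximum drawdown, and the hit rate; the function names and the automatic lag rule are our own assumptions, not the paper's code.

```python
import numpy as np

def newey_west_tstat(returns, lags=None):
    """t-statistic of the mean return with Newey-West (HAC) standard
    errors using Bartlett kernel weights."""
    r = np.asarray(returns, dtype=float)
    n = r.size
    if lags is None:  # common rule-of-thumb lag choice (an assumption here)
        lags = int(np.floor(4 * (n / 100.0) ** (2.0 / 9.0)))
    e = r - r.mean()
    var = np.sum(e * e) / n
    for lag in range(1, lags + 1):
        w = 1.0 - lag / (lags + 1.0)
        var += 2.0 * w * np.sum(e[lag:] * e[:-lag]) / n
    return r.mean() / np.sqrt(var / n)

def historical_var(returns, level=0.05):
    """Historical value at risk: the empirical return quantile."""
    return np.quantile(returns, level)

def max_drawdown(returns):
    """Maximum drawdown of the cumulative wealth path, as a positive fraction."""
    wealth = np.cumprod(1.0 + np.asarray(returns, dtype=float))
    peak = np.maximum.accumulate(wealth)
    return np.max(1.0 - wealth / peak)

def hit_rate(returns):
    """Share of trades with a strictly positive return."""
    return np.mean(np.asarray(returns) > 0)
```

Applied to the per-trade return series of each strategy, these functions yield the statistics reported in Table 3.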

Strategy Performance
In Table 4, we report annualized risk-return measures for the strategies PSD, PSU, LSD, and LSU. The mean return after transaction costs ranges from 0.77 percent for LSD to 26.51 percent for PSU. As anticipated from Table 3, the standard deviation of both delta hedged strategies amounts to approximately 20 percent, about half that of their unhedged counterparts. Notably, the Sharpe ratio, i.e., the excess return per unit of standard deviation, of PSD clearly outperforms the benchmarks with a value of 0.40 after transaction costs. In conclusion, PSD generates promising risk-return characteristics, even after transaction costs.

Sub-Period Analysis
Investors are concerned about the stability of the results, potential drawdowns, and the behavior during high market turmoil. Inspired by the time-varying returns of [51][52][53], we analyze the performance of our implemented trading strategies over the complete sample period; Figure 3 depicts the corresponding developments across three sub-periods. The first sub-period ranges from 2000 to 2005 and includes the dot-com crash, the 9/11 attacks, and the time of moderation. For PSD and LSD, 1 USD invested in January 2000 grows to more than 2.00 USD by the end of this time range; both show steady growth without any substantial drawdowns. PSU, LSU, and MKT exhibit a similar behavior, which is clearly worse than that of the hedged strategies. The second sub-period ranges from 2006 to 2011 and covers the global financial crisis and its aftermath. The strategy PSD appears robust against external shocks, since it copes with the global financial crisis in a convincing way. As expected, the unhedged trading approaches PSU and LSU show strong swings during high market turmoil. The third sub-period ranges from 2012 to 2017 and marks the period of regeneration and comebacks. The development of all strategies does not decline even after transaction costs; profits are not being arbitraged away. Especially PSU outperforms in this time range, with a cumulative return of up to 300, as a consequence of excessive index option premiums and high profits on single stock options, e.g., Goldman Sachs Group, Inc. and United Rentals, Inc.

Market Frictions
This subsection evaluates the robustness of our trading strategies in light of market frictions. Following [54], we analyze the annualized mean return and Sharpe ratio for varying transaction cost levels (see Table 5). Motivated by the literature, our back-testing study supposes transaction costs of 10 bps per straddle and 2.5 bps per stock for delta hedging. The results for 0 bps and 10 bps are identical to Table 4. Due to the same trading frequencies, all strategies are similarly affected by transaction costs; the naive buy-and-hold strategy MKT remains constant (see Section 4.1). Considering annualized returns, we observe that the breakeven point for PSD and LSU lies between 40 bps and 80 bps. As expected, PSU has a higher level at which costs and returns are equal; its breakeven point is around 90 bps. Additionally considering the risk side leads to the Sharpe ratio, with results similar to before. As expected, the breakeven points are reached at lower transaction costs, as the Sharpe ratio is calculated on the basis of excess returns. The thresholds vary between approximately 5 bps for LSD and 90 bps for PSU. In conclusion, the strategies PSD and PSU provide promising results in the context of risk-return measures, even for investors that are exposed to different market conditions and thus higher transaction costs.
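A breakeven analysis of this kind amounts to sweeping a grid of cost levels and finding where the net annualized return turns non-positive. The sketch below illustrates the mechanic under the simplifying assumption of one cost charge per trade and geometric compounding; it is not the paper's exact computation.

```python
def breakeven_cost(gross_return_per_trade, trades_per_year, cost_grid_bps):
    """Return the smallest cost level (in bps per trade) on the grid at
    which the annualized net mean return becomes non-positive; None if
    the strategy survives all tested levels."""
    for bps in cost_grid_bps:
        net = gross_return_per_trade - bps / 1e4      # subtract costs per trade
        annualized = (1.0 + net) ** trades_per_year - 1.0
        if annualized <= 0.0:
            return bps
    return None
```

For a hypothetical strategy earning 50 bps gross per trade with roughly 22 trades per year (the trade frequency reported above), the breakeven on a 10 bps grid is 50 bps.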

Robustness Check
As stated previously, the 20 percent reinvestment rate, the top stocks option portfolio, which is traded unconditionally on correlation, and the choice of five target stocks were motivated by the existing literature (Section 3.3). Since data snooping is an important issue in many research studies, we examine the sensitivity of our PSD results with respect to variations of these hyperparameters.
In contrast to the reinvestment rate and the number of target stocks, trading based on unconditional correlation is not a hyperparameter in our framework. However, it is worthwhile to examine whether changes in the entry signal lead to substantial changes in our strategy performance. In Section 2.1, the concept of measuring implied costs as average implied correlation is introduced. Based on this approach, the relative mispricing between the index and the replicating basket can be quantified. Applying Equation (7) to forecasted or historical volatility yields an average expected correlation. Comparing this metric to the implied average correlation provides insights regarding the current pricing of index options. As discussed in Section 2.1, index options usually trade richer in terms of implied volatility than the replication portfolio. However, there are times at which the opposite may be the case; in such situations, an investor would set up a short dispersion trade: selling the basket and buying the index. A simple trading signal can thus be derived from average correlation levels. For our robustness check, we rely on historical volatility as a baseline approach; however, more advanced statistical methods could be applied to forecast volatility and determine current pricing levels (see [55][56][57]). Table 6 reports the annualized return of PSD for a variety of replicating portfolio sizes, reinvestment rates, and correlation based entry signals. First of all, a higher reinvestment rate leads to higher annualized mean returns; concurrently, the higher risk aggravates the Sharpe ratio and the maximum drawdown. We observe several total losses for reinvestment rates of 80 percent and 100 percent, which is a result of the corresponding all-or-nothing strategy. Higher annualized returns and Sharpe ratios can generally be found for lower numbers of top stocks, indicating that our selection algorithm introduced in Section 3.2 is meaningful.
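The correlation-based entry signal can be sketched with the standard average-correlation proxy from the dispersion literature: the average correlation implied by an index volatility and its component volatilities. Whether this expression matches Equation (7) exactly is an assumption on our part, and the function names are illustrative.

```python
import numpy as np

def average_correlation(index_vol, weights, vols):
    """Average correlation implied by an index volatility sigma_I and
    component volatilities sigma_i with weights w_i:
    (sigma_I^2 - sum(w_i^2 sigma_i^2)) / ((sum(w_i sigma_i))^2 - sum(w_i^2 sigma_i^2))."""
    w, s = np.asarray(weights, dtype=float), np.asarray(vols, dtype=float)
    num = index_vol**2 - np.sum((w * s) ** 2)
    den = np.sum(w * s) ** 2 - np.sum((w * s) ** 2)
    return num / den

def dispersion_signal(implied_index_vol, hist_index_vol, weights,
                      implied_vols, hist_vols):
    """Long dispersion (sell index options, buy the basket) when the
    implied average correlation exceeds its historical counterpart,
    otherwise short dispersion."""
    rho_imp = average_correlation(implied_index_vol, weights, implied_vols)
    rho_hist = average_correlation(hist_index_vol, weights, hist_vols)
    return "long" if rho_imp > rho_hist else "short"
```

In a two-stock example with equal weights and 20 percent component volatilities, an index volatility of 20 percent implies a correlation of one, while an index volatility of sqrt(0.02) implies a correlation of zero.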
Regarding the correlation filter, we recognize that the strategy based on conditional correlation leads to worse results than its unconditional counterpart, e.g., the annualized mean return for the top five stocks and an investment rate of 20 percent is 9.53 percent (conditional correlation) vs. 18.41 percent (unconditional correlation). Summarizing, our initial hyperparameter setting does not hit the optimum; our selection procedure identifies the right top stocks; and considering unconditional correlation has a positive impact on the trading results.

Risk Factor Analysis
Table 7 evaluates the exposure of PSD and PSU after transaction costs to systematic sources of risk (see [48]). To this end, we apply the Fama-French three-factor model (FF3) and the Fama-French five-factor model (FF5) of [58,59], as well as the Fama-French 3+2 factor model (FF3+2) presented by [2]. FF3 measures the exposure to the general market, small minus big capitalization stocks (SMB), and high minus low book-to-market stocks (HML). FF3+2 enlarges the first model by a momentum factor and a short-term reversal factor. Finally, FF5 extends FF3 by a factor capturing robust minus weak (RMW5) profitability and a factor capturing conservative minus aggressive (CMA5) investment behavior.
Across all three models, we observe statistically significant monthly alphas ranging between 0.74% and 0.79% for PSD and between 2.01% and 2.19% for PSU. The highest explanatory content is given by FF3+2, with adjusted R² values of 0.0142 and 0.0118; presumably, the momentum and reversal factors possess the highest explanatory power. Insignificant loadings on SMB5, HML5, RMW5, and CMA5 confirm the long-short nature of the portfolios we construct. The exposure to HML (PSD) and the reversal factor (PSU) reflects our selection and trading process (see Section 3).

Analysis of PCA Components and Market Exposure
Following [60], we report the Kaiser-Meyer-Olkin criterion (KMO) and Bartlett's sphericity test in Table 8 to examine the suitability of our data for a PCA. Bartlett's test of sphericity, which tests the hypothesis that the correlation matrix is an identity matrix and therefore that the variables are unrelated to each other and unsuitable for detecting any structure, is significant at the 1% level on all of our trading dates [61]. KMO is a sampling adequacy measure that describes the proportion of variance in the variables that might be caused by shared underlying factors [62,63]. A low KMO score is driven by high partial correlations in the underlying data. The threshold for acceptable sampling adequacy for a factor analysis is normally considered to be 0.5 [64]. However, in our context of finding the top five stocks that describe the index as closely as possible, a low KMO value is actually favorable, as it indicates strong relationships amongst the S&P 500 constituents that we aim to exploit with our methodology. As expected, our KMO values lie below this threshold on almost all trading days, with an average and median of 0.2833 and 0.2619, respectively. Two determinants explain the low KMO values in our study and support our approach. (i) Stocks often depend on the same risk factors, one of the most prominent being the market (for more, see [65,66]), and therefore exhibit rather high partial correlations. (ii) On every trading day, we analyze around 450 S&P 500 index members, which results in 101,250 different partial correlations. Given this substantial number of combinations, structure and high partial correlations are bound to appear in the data. Last, but not least, Figure 5 shows the yearly sector exposure based on our PCA selection process from January 2000 to December 2017.
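Both diagnostics follow directly from the correlation matrix. The sketch below shows one standard way to compute them, with partial correlations derived from the inverse correlation matrix; the function names are our own, and the Bartlett statistic is returned together with its degrees of freedom for comparison against a chi-square distribution rather than as a p-value.

```python
import numpy as np

def kmo(corr):
    """Kaiser-Meyer-Olkin measure of sampling adequacy from a correlation
    matrix; low values signal high partial correlations in the data."""
    R = np.asarray(corr, dtype=float)
    inv = np.linalg.inv(R)
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale                      # partial correlation matrix
    off = ~np.eye(R.shape[0], dtype=bool)       # off-diagonal mask
    r2, p2 = np.sum(R[off] ** 2), np.sum(partial[off] ** 2)
    return r2 / (r2 + p2)

def bartlett_sphericity(corr, n_obs):
    """Bartlett's sphericity statistic and its degrees of freedom; the
    statistic is compared against a chi-square distribution with
    p(p-1)/2 degrees of freedom."""
    R = np.asarray(corr, dtype=float)
    p = R.shape[0]
    stat = -(n_obs - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    return stat, p * (p - 1) / 2.0
```

For a 2x2 correlation matrix the partial correlation equals the plain correlation, so the KMO is exactly 0.5; for strongly correlated data the Bartlett statistic grows large and the identity-matrix hypothesis is rejected.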
In line with the Standard Industrial Classification, all companies are categorized into the following nine economic sectors: "Agriculture, Forestry, and Fishing", "Mining", "Manufacturing", "Wholesale Trade", "Finance, Insurance, Real Estate", "Construction", "Transportation and Public Utilities", "Retail Trade", and "Services". Each point on the horizontal axis refers to the sector exposure employed in the year following that point. First of all, we observe that the sum of positive sector exposures increases in times of high market turmoil. In particular, the cumulative value exceeds three in 2013 because of a high exposure to "Finance, Insurance, Real Estate", especially through Bank of America Corporation and Goldman Sachs Group, Inc. Furthermore, the exposure of each sector varies over time, i.e., industry branches are preferred or avoided in times of bull and bear markets. As such, the percentage of financial stocks increases in 2009 and 2013, following the peaks of the global financial and European debt crises.

Conclusions and Policy Recommendations
In this manuscript, we developed a dispersion trading strategy based on a statistical stock selection process and applied our approach to the S&P 500 index and its constituents from January 2000 to December 2017. We contributed to the existing literature in four ways. First, we developed an index subsetting procedure that considers the individual index explanatory power of stocks in the weighting scheme. As a result, we are able to build a replicating option basket with as few as five securities. Second, the large-scale empirical study provides reliable back-testing for our dispersion trades. Hence, the profitability and robustness of those relative value trades can be examined across a variety of market conditions. Third, we analyzed the added value of our strategies by benchmarking them against a naive index subsetting approach and a simple buy-and-hold strategy. The trading frameworks that employ the PCA selection process outperformed their peers with an annualized mean return of 14.52 and 26.51 percent for PSD and PSU, respectively. The fourth contribution focuses on the conducted deep dive analysis of our selection process, i.e., sector exposure and number of required principal components over time, and the robustness checks. We showed that our trading systems possess superior risk-return characteristics compared to the benchmarking dispersion strategies.
Our study reveals two main policy recommendations. First of all, our framework shows that advanced statistical methods can be utilized to determine a portfolio replicating basket and could therefore be used for sophisticated risk management. Regulatory market risk assessments of financial institutions often rely on crude approaches that stress banks' capital requirements disproportionately to the underlying risk. Regulators should therefore explore the possibility of employing more advanced statistical models in their risk assessments, such as considering PCA built replicating baskets to hedge index exposures, to adequately reflect risk. Second, investors should be aware that a principal component analysis can be used to cost-efficiently set up dispersion trades, and they should use their comprehensive datasets to improve the stock selection process further.
For future research endeavors, we identify the following three areas. First, the back-testing framework should be applied to other indices and geographical areas, i.e., price weighted equity markets and emerging economies, to shed light on any idiosyncrasies related to geographical or index construction differences. Second, efforts could be undertaken to improve the correlation filter from Section 4.4, so that profitable short dispersion opportunities can be spotted more accurately. Third, financial product innovations in a dispersion context should be subject to future studies, particularly the third generation of volatility derivatives such as variance, correlation, and gamma swaps.
Author Contributions: L.S. conceived of the research method. The experiments were designed and performed by L.S. The analyses were conducted and reviewed by L.S. and J.S. The paper was initially drafted and revised by L.S. and J.S. It was refined and finalized by L.S. and J.S. All authors read and agreed to the published version of the manuscript.

Funding:
We are grateful to the "Open Access Publikationsfonds", which covered 75 percent of the publication fees.