Revolutionizing Hedge Fund Risk Management: The Power of Deep Learning and LSTM in Hedging Illiquid Assets

In the dynamic sphere of financial markets, hedge funds have emerged as a critical force, navigating through volatility with advanced risk management techniques yet grappling with the challenges posed by illiquid assets. This study aims to transcend traditional option pricing models, which struggle under the complexities of hedge fund investments, by exploring the applicability of machine learning in financial risk management. Leveraging Deep Neural Networks (DNNs) and Long Short-Term Memory (LSTM) cells, the research introduces a model-free, data-driven approach for discrete-time hedging problems. Through a comparative analysis of simulated data and the implementation of LSTM architectures, the paper elucidates the potential of these machine learning techniques to enhance the precision of risk assessments and decision-making processes in hedge fund investments. The findings reveal that DNNs and LSTMs offer significant advancements over conventional models, effectively capturing long-term dependencies and complex patterns within financial time series data. Consequently, the study underscores the transformative impact of machine learning on the methodologies employed in financial risk management, proposing a novel paradigm that promises to mitigate the intricacies of hedging illiquid assets. This research not only contributes to the academic discourse but also paves the way for the development of more adaptive and resilient investment strategies in the face of market uncertainties.


Introduction

Hedge funds are investment funds that pool capital from accredited or institutional investors and invest in a variety of assets. They usually deliver higher Sharpe ratios than buy-and-hold strategies on traditional asset classes, benefiting from sophisticated portfolio-construction and risk management techniques. Investors were especially attracted by the strong performance of the hedge fund industry during the bear market between 2000 and 2003. However, investments in hedge funds are illiquid, since they often require investors to keep their money in the fund for at least one year, a period known as 'the lock-up period'. Withdrawals may also only happen at certain frequencies, e.g., quarterly or bi-annually. In such cases, the Black-Scholes option pricing model may suffer from restrictive assumptions when applied to hedge fund index option pricing. Investors may also have to consider tail risk and hedge slippage in discrete-time hedging problems. In this report, we develop a model-free approach to this illiquid option hedging problem, using multiple criteria to measure hedge errors.

Literature Review
In the realm of options trading, delta hedging plays a pivotal role in portfolio management. Delta, the most critical hedge parameter, can be easily adjusted through trades in the underlying asset. Since the advent of exchange-traded options markets in 1973, option traders have frequently adjusted delta to near-zero levels by trading the underlying asset, highlighting its significance in risk mitigation strategies. In the literature, several stochastic volatility models have been proposed, including those by Hull and White (1987), Hull (1988), Heston (1993), and Hagan et al. (2002). Recent research by Hull and White (2017) noted that the conventionally calculated delta does not minimize portfolio variance, due to the correlation between asset price and volatility movements. The minimum variance delta accounts for both price fluctuations and volatility changes. They empirically derived a model for this delta, demonstrating its superiority over stochastic volatility models using S&P 500 options data.
Instead of directly pricing the European-style option on hedging non-traded assets, we investigate techniques to value a payoff in an incomplete financial market. One of the traditional hedging methodologies studying optimal policies under such conditions is 'mean-variance hedging'. Early research by Duffie et al. (1991) provided explicit optimal positions that minimize the quadratic objective, assuming that both tradable and non-tradable asset prices follow a geometric Brownian motion. Later, Schweizer (1995) provided a solution to one-dimensional mean-variance hedging with a non-stochastic interest rate. An optimal hedging strategy in terms of parameters from a specific non-tradable asset payoff decomposition was thereafter derived by Gourieroux et al. (1998). With the help of stochastic dynamic programming, Bertsimas et al. (2001) solved the minimization of the mean-squared error and numerically computed the optimal replication strategy. Subsequent work by Černý and Kallsen (2007, 2008) studied mean-variance hedging strategies in a locally square-integrable semi-martingale context; they also proposed solutions to the mean-variance hedging problem in Heston's model framework. A more recent study by Rémillard and Rubenthaler (2013) proposed the optimal solution for the hedging portfolio in a discrete-time context.
Another hedging methodology, CVaR (Conditional Value at Risk)-based hedging, is a risk management tool introduced by Rockafellar and Uryasev (2000). It measures the average loss of an asset or portfolio in the worst-case scenario within a given confidence level, typically 95% or 99%. Unlike VaR (Value at Risk), which focuses on the maximum potential loss within a specific confidence interval, CVaR considers the average loss beyond the VaR threshold, thus addressing "tail risk" more comprehensively. Further studies, including Rockafellar and Uryasev (2002) and Krokhmal et al. (2002), developed the potential and constraints of CVaR-based hedging. Alexander et al. (2003) discussed derivative portfolio hedging using CVaR.
As techniques in machine learning have evolved rapidly in recent decades, attempts to solve financial problems with neural networks have prospered. Promising results by Hutchinson et al. (1994) directly parameterized the pricing function of a derivative using a neural network, assuming relatively good liquidity and abundant historical data for the underlying. Moody and Wu (1997) and Jiang et al. (2017) also applied machine learning techniques to non-linear objective functions in the classic portfolio optimization setup. Solid outcomes by Du et al. (2016) and Lu (2017) further confirm the problem-solving competence of neural networks in algorithmic trading. Recent works by Lütkebohmert et al. (2022) and Mikkilä and Kanniainen (2023) provide further insight into the potential of empirical deep hedging and robust deep hedging.
Deep feedforward networks, as an extension of the first and simplest type of artificial neural network devised, are well known for their universal approximation properties. Early in the 1990s, Hornik (1991) revealed the effectiveness of deep feedforward networks in combining optimal approximation properties of all affine systems. Such efficiency in determining the optimal hedging strategy from the corresponding input factors turns out to be an edge in hedging problems where the availability of price data for the derivative to be hedged is limited. More importantly, the deep hedging methodology makes it possible to aggregate multiple hedging instruments and market frictions, which, in our case, are the transaction costs. Föllmer and Schied (2011) provide a general introduction to such incomplete markets. Modern reinforcement learning methods were applied by Buehler et al. (2019) to create a framework for hedging a portfolio of derivatives in the presence of market incompleteness. Several machine-learning-based algorithms were also developed by Fecamp et al. (2019) to solve hedging problems involving illiquidity, non-tradable risk factors, discrete hedging dates, and proportional transaction costs. A flexible and accurate model based on reinforcement learning also appeared in recent research by Kolm and Ritter (2019) to resolve hedging problems where trading decisions are discrete and trading costs are nonlinear.

Problem Formulation
The main objective of this paper is to simultaneously determine the option prices V(t, S(t)) and hedge ratios Φ(t, S(t)) at each time t up to maturity T, for the corresponding underlying S(t). Previous research on delta hedging by Hull and White (1987) and Hull (1988) is referred to for the following problem formulation. Special attention is paid to the initial endowment V(0, S(0)) and hedging strategy Φ(0, S(0)). Notice that the hedge ratio is treated as an independent quantity to determine: it is not simply the infinitesimal change in the option price relative to an infinitesimal change in the underlying asset price.
Consider the profit and loss (P&L) of an option seller over a time period (t, t + 1]. The wealth change of the delta-hedged portfolio consists of two parts, the option part and the hedge part:

∆W(t, t + 1) = ∆W_option(t, t + 1) + ∆W_hedge(t, t + 1). (1)

The option part is expressed in terms of the discounted quantity

G(t) = V(t, S(t)) df(t, t + 1) + P(t) df(t, t + 1), (3)

where V(t, S(t)) stands for the option price at time t and P(t) denotes the payoff of the contract at time t; in the case of European options, P(t) = 0 for all t ≠ T. df(t, t + 1) represents the risk-free discount factor from time t to t + 1. On the other hand, the option seller will also attempt to hedge the sold option position with a hedge ratio of Φ(t, S(t)) over the time period (t, t + 1]. The hedge part of the wealth change includes changes in the underlying asset price, financing costs, dividends received or paid, and transaction costs; without transaction costs, the corresponding term (5) involves the financing discount factor Df(t, t + 1) and the dividend term M(t + 1) df(t, t + 1). Here Df(t, t + 1) denotes the financing-cost discount factor based on the repurchase agreement (repo) rate of the underlying asset, which is in general different from df(t, t + 1); M(t) represents any discrete dividend paid by holding the underlying asset; and Z(t, S(t)) stands for the transaction costs of each hedging step.
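To make the decomposition concrete, the following sketch computes a one-period P&L under strong simplifying assumptions that are ours, not the paper's: discount factors df and Df are set to one, and the dividend and transaction cost terms are omitted. The function name `period_pnl` is hypothetical.

```python
import numpy as np

def period_pnl(V_t, V_next, phi_t, S_t, S_next):
    """One-period P&L of a delta-hedged short option position.

    Illustrative only: the discount factors df and Df are set to one, and
    the dividend M and transaction cost Z terms are omitted, so the option
    part reduces to -(V_next - V_t) and the hedge part to
    phi_t * (S_next - S_t)."""
    option_part = -(V_next - V_t)        # the seller loses when the option value rises
    hedge_part = phi_t * (S_next - S_t)  # gain on the long hedge in the underlying
    return option_part + hedge_part

# With phi = 1 and an option value that moves one-for-one with the
# underlying, the hedge is perfect and the period P&L is zero:
pnl = period_pnl(V_t=5.0, V_next=6.0, phi_t=1.0, S_t=100.0, S_next=101.0)
```

A non-unit hedge ratio leaves residual exposure, which is exactly the hedging error the loss functions below are designed to control.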
To hedge the portfolio and determine the initial endowment V(0, S(0)) and hedging strategy Φ(0, S(0)), the absolute value of the total wealth change ∆W should be minimized. Different criteria should be applied to the optimization for different purposes, and situations both with and without transaction costs should be investigated. We intend to develop a model-free machine learning approach that can be applied to discrete-time hedging problems of illiquid assets.
The rest of this report is arranged as follows. Section 2 discusses the methodology used in this project, including the hedging approach under two different loss functions, the structure of LSTM cells, and transaction costs. Section 3 provides results and relevant discussion of numerical and empirical results. Section 4 concludes our project and proposes possible topics for future research.

Hedging Approach
Similar to the binomial tree option pricing method, the simulation of all Monte Carlo paths for all time steps is required before further implementation. A specific stochastic process is chosen for the underlying asset for simulation. This procedure resembles building a full binomial tree before starting the option pricing. With the simulated data in hand, we can work backwards from maturity, solve for the option value V(t, S(t)) and the hedge ratio Φ(t, S(t)) at each time step, and finally reach the initial endowment V(0, S(0)) and hedging strategy Φ(0, S(0)). As in the binomial tree method, the European option value at maturity is the payoff of that specific Monte Carlo simulation path. The method to solve for V(0, S(0)) and Φ(0, S(0)) is based upon minimization of a loss function L(∆W) of the wealth change over all paths and all time steps. In this report, we discuss two different loss functions: mean-variance and CVaR.
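The simulate-first workflow can be sketched as follows. This is an illustration, not the paper's calibrated setup: a plain geometric Brownian motion stands in for whichever stochastic process is actually chosen, and all parameter values are placeholders.

```python
import numpy as np

# Illustrative sketch: all Monte Carlo paths are generated up front,
# exactly as a full binomial tree would be built before pricing.
rng = np.random.default_rng(0)
N, T = 10_000, 30                   # number of paths, number of time steps
S0, K, sigma, dt = 100.0, 100.0, 0.2, 1.0 / 252

z = rng.standard_normal((N, T))
log_increments = (-0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.concatenate([np.zeros((N, 1)),
                                np.cumsum(log_increments, axis=1)], axis=1))

# At maturity, the option value on each path is simply its payoff:
payoff = np.maximum(S[:, -1] - K, 0.0)  # European call
```

With the full array `S` of shape (paths, time steps + 1) in hand, one can then work backwards from the terminal payoff as described above.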

Mean Variance
Mean-variance analysis is a popular method of weighing risk, expressed as variance, against expected return. Results of mean-variance analysis may help investors decide which financial instruments to invest in, based on how much risk they are willing to take in exchange for different levels of reward. Mean-variance analysis allows investors to find the largest profit at a given level of risk, or the least risk at a given level of return.
In our particular hedging problem, the total wealth of the portfolio is set to be the required mean (expected return) for the mean-variance analysis, following Rémillard and Rubenthaler (2013). We would like to find the initial endowment V(0, S(0)) and the hedging strategy Φ(0, S(0)) such that the mean-variance loss function L(∆W) is minimized. Notice that here ∆W denotes the total wealth change over all time steps from 0 to T,

∆W = Σ_{t=0}^{T−1} ∆W(t, t + 1).

The distribution of this total wealth change yields an overall P&L distribution of the attempted option hedge through all time steps. This total wealth change distribution gives a complete picture of valuation and risk, as compared to methods producing one unique price with risk measures simply calculated from sensitivities to infinitesimal changes of various input parameters (i.e., "delta", "vega", "rho", etc.).
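A minimal sketch of such a loss is the quadratic criterion below. This is only one common formulation, assumed here for illustration; the exact mean-variance criterion of the text follows Rémillard and Rubenthaler (2013), and the function name `mean_variance_loss` is ours.

```python
import numpy as np

def mean_variance_loss(dW):
    """Quadratic hedging loss over all simulated paths.

    Since E[dW^2] = Var(dW) + E[dW]^2, penalising the mean square drives
    both the mean and the variance of the total wealth change toward zero.
    One common quadratic stand-in for the paper's mean-variance criterion."""
    dW = np.asarray(dW, dtype=float)
    return np.mean(dW ** 2)
```

A perfectly hedged book (∆W identically zero) attains loss zero, and any dispersion of the hedging error raises the loss.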

CVaR
Conditional value at risk (CVaR) is a risk measure that evaluates the market risk or credit risk of a portfolio. It is also known as 'the expected shortfall': the "expected shortfall at q% level" is the expected return on the portfolio in the worst q% of cases. CVaR is an alternative to value at risk (VaR) because it is a coherent, and moreover a spectral, measure of financial portfolio risk. It is calculated for a given quantile level q and is defined as the mean loss of portfolio value given that a loss is occurring at or below the q-quantile.
CVaR values are derived from the calculation of VaR itself. Therefore, the assumptions that VaR is based on all affect the value of CVaR, such as the shape of the distribution of returns, the cut-off level used, the periodicity of the data, and the assumptions about stochastic volatility. The value of CVaR equals the average of the values that fall beyond the VaR:

CVaR = (1/a) ∫_{−∞}^{VaR} x p(x) dx,

where a represents the cut-off point (significance level) on the distribution, p(x) is the probability density, and VaR is the agreed-upon VaR level.
The illiquid asset hedging problem requires a discrete-time context. In our particular setting, the value of CVaR equals the average of the smallest q% of all possible ∆W over all N simulations. The loss function we would like to minimize translates to

L(∆W) = −(1/n) Σ_{i=1}^{n} ∆W_(i),

where ∆W_(1) ≤ ∆W_(2) ≤ ... ≤ ∆W_(N) are the sorted total wealth changes and n = q% × N. Unlike the mean-variance approach, the CVaR approach emphasizes tail risk and tries to prevent extreme losses. This characteristic coincides with real-life concerns and explains its popularity in risk management.

Machine Learning Approach
A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers, where each layer represents a distinct mathematical transformation. As a cutting-edge approximation technique, DNNs approximate the function implied by given inputs and outputs by fitting each layer with appropriate weight parameters. A prominent advantage of a DNN is that it approximates the target function effectively, whether it is linear or non-linear, and we exploit this property to solve the aforementioned hedging problems. Hornik et al. (1989) showed that the multi-layer feedforward architecture gives neural networks the capacity for universal approximation.
Theorem 1 (Universal Approximation Theorem, Hornik et al. (1989) Corollary 2.4). For a given dimension I ∈ N, let C_I be the set of all continuous Borel-measurable functions from R^I to R. For any monotonically increasing and bounded function σ(·) (sigmoid activation function), any g ∈ C_I, and any ε > 0, there exist J ∈ N, v ∈ R^J, W ∈ R^{J×I}, and θ ∈ R^J such that the function f(x) = v · σ(Wx + θ) satisfies |f(x) − g(x)| < ε on compact subsets of R^I, where σ is applied element-wise. The operator · denotes the scalar product.
This theorem states that a feedforward neural network with one hidden layer (a three-layered feedforward neural network) has the capability to approximate any function in C_I. Corollary 1 further extends the theorem and shows that it holds for networks with multiple outputs.
Corollary 1 (Hornik et al. (1989) Corollary 2.6). Theorem 1 holds for the approximation of functions in C_{I,N} by extending the function to f(x) = V(σ(Wx + θ)), where V ∈ R^{N×J}, W ∈ R^{J×I}, θ ∈ R^J, and x ∈ R^I.
Consequently, three-layered multi-output feedforward neural networks are universal approximators for vector-valued functions. However, financial series data is time dependent. In this respect, recurrent neural networks (RNNs) show remarkable competency over regular DNNs for their capability in modeling sequences of time-dependent data. Schäfer and Zimmermann (2006) showed that RNNs in state space model form are also universal approximators and are able to approximate any open dynamical system with arbitrary accuracy.
Theorem 2 (Universal Approximation Theorem for RNN, Schäfer and Zimmermann (2006) Theorem 2). For a measurable function g(·): R^J × R^I → R^J and a continuous function h(·): R^J → R^N, the external inputs x_t ∈ R^I, the inner states s_t ∈ R^J, and the outputs y_t ∈ R^N (t = 1, ..., T), any open dynamical system of the form

s_t = g(s_{t−1}, x_t),  y_t = h(s_t)

can be approximated with arbitrary accuracy by a system of the form

s_t = σ(U s_{t−1} + W x_t + θ),  y_t = C s_t,

where σ(·) is a sigmoid activation function, the matrices U ∈ R^{J×J}, W ∈ R^{J×I}, and C ∈ R^{N×J}, and the bias θ ∈ R^J.
Nevertheless, basic RNNs suffer from a conspicuous deficiency: the vanishing gradient effect. To avoid this effect, Long Short-Term Memory (LSTM) cells were introduced by Hochreiter and Schmidhuber (1997) for their power to capture the long-range dependence of the data.

LSTM Cell
The architecture of a basic LSTM cell, unrolled, is illustrated in Figure 1. As input time series data is processed through the LSTM cell, structures named "gates" regulate the information by modifying its flow and produce two output vectors: a hidden state s_t (short-term memory) and a cell state c_t (long-term memory). The hidden state s_{t−1} from time t − 1 is passed down to the current time step t and goes through a sigmoid function known as the "forget gate layer", which determines the proportion of memory that is to be "remembered". The "input gate layer" decides how much of the input x_t is used for the calculation of the memory state c_t at time t. The "output gate layer" determines the final outputs s_t and c_t. Meanwhile, c_t is adjusted by the previous cell state c_{t−1} and the outcomes of the forget gate and the input gate. c_t together with s_t flows to the next time step, while a copy of s_t is extracted as the output of the LSTM cell at the current time step. As introduced in Hochreiter and Schmidhuber (1997), the compact forms of the equations for the forward pass of an LSTM unit with a forget gate are:

f_t = σ(W_f x_t + U_f s_{t−1} + θ_f),
i_t = σ(W_i x_t + U_i s_{t−1} + θ_i),
o_t = σ(W_o x_t + U_o s_{t−1} + θ_o),
c̃_t = tanh(W_c x_t + U_c s_{t−1} + θ_c),
c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t,
s_t = o_t ⊙ tanh(c_t),

where the initial values are c_0 = 0 and s_0 = 0, and the operator ⊙ denotes the Hadamard product (element-wise product). σ(·) is the logistic sigmoid function, defined as σ(s) = 1/(1 + e^{−s}). The subscript t indexes the time step. x_t ∈ R^I denotes the input vector to the LSTM unit. f_t ∈ R^J, i_t ∈ R^J, and o_t ∈ R^J represent the activation vectors of the forget gate, the input gate, and the output gate, respectively. s_t ∈ R^J is the hidden state vector, also known as the output vector of the LSTM unit, and c_t ∈ R^J is the cell state vector. The W ∈ R^{J×I}, U ∈ R^{J×J}, and θ ∈ R^J stand for the weight matrices and bias vector parameters to be trained, and the superscripts I and J refer to the number of input features and the number of hidden units, respectively.
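The forward pass above can be sketched directly in NumPy. This is an illustration of the gate equations with randomly initialized weights, not a trained network; the names `lstm_step` and the dictionary keys are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, s_prev, c_prev, p):
    """One forward step of an LSTM cell with a forget gate, following the
    gate equations described above. p holds weight matrices W* (J x I),
    U* (J x J) and biases b* (J,) for the forget (f), input (i), output (o)
    and candidate-cell (c) transformations."""
    f_t = sigmoid(p["Wf"] @ x_t + p["Uf"] @ s_prev + p["bf"])    # forget gate
    i_t = sigmoid(p["Wi"] @ x_t + p["Ui"] @ s_prev + p["bi"])    # input gate
    o_t = sigmoid(p["Wo"] @ x_t + p["Uo"] @ s_prev + p["bo"])    # output gate
    c_hat = np.tanh(p["Wc"] @ x_t + p["Uc"] @ s_prev + p["bc"])  # candidate cell
    c_t = f_t * c_prev + i_t * c_hat   # Hadamard products
    s_t = o_t * np.tanh(c_t)           # hidden state / cell output
    return s_t, c_t

# Tiny usage example with I = 2 input features and J = 3 hidden units.
rng = np.random.default_rng(1)
I, J = 2, 3
p = {k + g: rng.standard_normal((J, I) if k == "W" else (J, J))
     for k in ("W", "U") for g in ("f", "i", "o", "c")}
p.update({"b" + g: np.zeros(J) for g in ("f", "i", "o", "c")})
s, c = np.zeros(J), np.zeros(J)          # s_0 = 0, c_0 = 0
for x_t in rng.standard_normal((5, I)):  # five time steps
    s, c = lstm_step(x_t, s, c, p)
```

Because s_t = o_t ⊙ tanh(c_t) with o_t ∈ (0, 1), every component of the hidden state is bounded in magnitude by one, regardless of the weights.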

Recurrent and LSTM Networks for Option Hedging
Our multi-layer LSTM network consists of several basic LSTM cells, where the output of each individual cell is used as the input of the following cell. The LSTM network is fed successively with S(t), t ∈ {1, ..., N − 1}. For each pair (t, S(t)), the network provides the hedge ratio Φ_t(S(t), Θ), where Θ includes the biases and weights to be estimated. In the case of t = 0, the initial option price V_0 and hedge ratio Φ_0 are optimized outside the LSTM network. For convenience of computation, we ignore discounting and exclude the payment of dividends. Since V_0, Φ_0, and Θ are trainable variables, the optimization problem is equivalent to minimizing the loss function L(∆W), where

∆W = V_0 + Σ_{t=0}^{T−1} Φ_t(S(t), Θ)(S(t + 1) − S(t)) − P(T). (19)

We use TensorFlow to construct the LSTM neural network. The architecture of the LSTM recurrent neural network is illustrated in Figure 2. The adaptive moment estimation (Adam) optimization algorithm is used to update the network weights iteratively based on training data. The parameters used in the optimization process are as follows. The number of simulations used for each iteration of the AdamOptimizer, namely the batch size, is 1000. The initial learning rate for the AdamOptimizer is the default 0.001. The numbers of nodes in the LSTM network layers are [24, 12, 12, 1]. The input data is normalized batch-wise before being fed into the LSTM neural network; the mean and variance used for normalization are estimated from 10,000 simulations.

Transaction Costs
The transaction costs arise from changes in the hedge ratio during the dynamic hedging. Usually, the transaction costs include a proportion δ of the value of the transaction and a flat rate (i.e., c dollars per trade). Hence, the general fee structure is modeled in the following form:

χ(t) = δ |Φ(t, S(t)) − Φ(t − 1, S(t − 1))| S(t) + c,

where c is charged per executed trade. Taking the transaction costs χ into consideration, the total wealth change ∆W described in (19) becomes

∆W = V_0 + Σ_{t=0}^{T−1} Φ_t(S(t), Θ)(S(t + 1) − S(t)) − P(T) − Σ_t χ(t).

Since the impact of fixed transaction costs is generally overshadowed by that of the proportional part, we put our emphasis on the presence of δ in our numerical examples and keep c = 0.
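A minimal sketch of this fee structure follows, with δ = 0.2% as in the numerical examples later in the text; the function name `transaction_cost` is ours.

```python
import numpy as np

def transaction_cost(phi_new, phi_old, S_t, delta=0.002, c=0.0):
    """Cost of rebalancing the hedge from phi_old to phi_new at price S_t:
    a proportion delta of the transaction value plus a flat fee c per
    executed trade. As in the text, c is kept at zero and the emphasis is
    on the proportional part."""
    traded_value = np.abs(phi_new - phi_old) * S_t
    return delta * traded_value + c * (traded_value > 0)

# Rebalancing from 0.5 to 0.6 at S = 100 trades value 10, costing 0.2% of it:
cost = transaction_cost(phi_new=0.6, phi_old=0.5, S_t=100.0)
```

Doing nothing costs nothing: when the hedge ratio is unchanged, the traded value and hence the fee are zero.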

Results and Discussion
In this section we provide both analytic and empirical results of our machine learning hedging model and discuss their properties. We first test the validity of our model using analytic solutions of the Heston-Nandi GARCH (HN-GARCH) model. The initial endowment V(0, S(0)) and hedging strategy Φ(0, S(0)) will be justified, and the distribution of the total wealth change, i.e., the hedging error, will be examined under different loss functions, both with and without transaction costs. Similar analyses will then be applied to empirical results generated from a calibration of real-world data for particular illiquid assets using the Q-GARCH model.

Analytic Results
Before importing actual data into our model, we would like to justify the validity of our LSTM architecture with simulated data, by comparing the results of the analytic solution and of our hedging model. Given that the illiquid asset hedging problem requires a discrete-time context, the hedging model we implement should be model-free and data-driven; its applicability therefore does not depend on the particular calibration model chosen to generate the simulated data. The model we choose is the Heston-Nandi GARCH model, namely the HN-GARCH model.

HN-GARCH
We assume that we are equipped with a complete probability space (Ω, F, {F_t}_{t∈{0,1,...,N}}, P), where P is the physical measure. We denote by Y_t := log(S_t/S_{t−1}) the one-period log-return process, where S_t is the asset price at time t. The conditional variance h_t = Var[Y_t | F_{t−1}] is an F_t-predictable process. For the HN-GARCH model, one can derive the unconditional moment generating function of both log S_{t−1} and h_t in an exponential affine form, with coefficients satisfying certain recursive relationships; this is the key ingredient in deriving closed-form solutions for variance-optimal hedging.
The dynamics of the log-return process are assumed to follow the Heston-Nandi GARCH(1,1) model under the physical measure P:

Y_t = r + λ h_t + √h_t z_t,
h_t = ω + β h_{t−1} + α(z_{t−1} − γ √h_{t−1})².

In the conditional mean equation, r denotes the one-period risk-free interest rate, λ is the equity risk-premium parameter, and z_t is a sequence of i.i.d. standard Gaussian random variables. The conditional variance process h_t has an affine GARCH(1,1) structure with parameters ω, α, β, and γ satisfying the standard positivity and stationarity constraints. The γ parameter captures asymmetry in the response of volatility to positive versus negative return shocks, reflecting the leverage effect. Under the arbitrage-free condition, the price of any contingent claim can be expressed as the discounted expected value of its payoff at maturity under an equivalent martingale measure. Here, we use the exponential affine pricing kernel first introduced for derivative valuation under GARCH models by Siu et al. (2004). Under this pricing measure, denoted by Q, the risk-neutral return dynamics coincide with those derived in Heston and Nandi (2000) and are given by:

Y_t = r − h_t/2 + √h_t z*_t,
h_t = ω + β h_{t−1} + α(z*_{t−1} − γ* √h_{t−1})².

Here, the innovation process z*_t is standard Gaussian distributed under Q. The risk-neutral leverage effect parameter γ* is related to its physical counterpart by γ* = γ + λ + 1/2. The risk-neutral parameters for the HN-GARCH risk-neutral dynamics used for our numerical exercises, listed in Table 1, are taken from the GARCH Options Toolbox. Here we refer to Christoffersen et al. (2008) and Christoffersen et al.
(2012) for the analytic solution of the mean-variance hedging approach for our model. We generate N = 10,000 paths of T = 30 time steps, with initial underlying price S_0 = 100, strike price K = 100, and risk-free rate r = 0. After obtaining the analytic results of the HN-GARCH model, namely the initial endowment V(0, S(0)) and hedging strategy Φ(0, S(0)), together with the distribution of the hedging error ∆W, we analyze how well the hedging results from our model coincide with these. We first compare our LSTM results with the analytic solutions of the HN-GARCH model (Heston and Nandi 2000) under the mean-variance hedging approach to justify our model. The hedging results for the LSTM and the analytic solutions are listed in Table 2. Observations show that the discrepancies of the initial endowment V_0 and the hedging strategy Φ_0 both remain less than 5%, which indicates the validity of our model. After this validation, we can move on and examine the difference between the loss functions. A similar comparison is applied here: both initial endowments are around $2.28, and the discrepancy of the hedging strategies is slightly above 4%, as shown in Table 3. These results confirm that both loss functions are promising, and we further investigate the difference between the two.
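The risk-neutral simulation used above can be sketched as follows. This is an illustration of the HN-GARCH recursion with placeholder parameters chosen only to satisfy the positivity and stationarity constraints; they are not the Table 1 calibration, and the function name `simulate_hn_garch` is ours.

```python
import numpy as np

def simulate_hn_garch(N, T, S0, r, omega, alpha, beta, gamma_star, seed=0):
    """Simulate N paths of T steps under the risk-neutral HN-GARCH dynamics
        Y_t = r - h_t / 2 + sqrt(h_t) z*_t,
        h_{t+1} = omega + beta h_t + alpha (z*_t - gamma* sqrt(h_t))^2,
    with z*_t i.i.d. standard normal. The variance is started at its
    stationary level (omega + alpha) / (1 - beta - alpha gamma*^2)."""
    rng = np.random.default_rng(seed)
    h = np.full(N, (omega + alpha) / (1.0 - beta - alpha * gamma_star**2))
    logS = np.full(N, np.log(S0))
    paths = [np.exp(logS)]
    for _ in range(T):
        z = rng.standard_normal(N)
        logS = logS + r - 0.5 * h + np.sqrt(h) * z       # log-return step
        paths.append(np.exp(logS))
        h = omega + beta * h + alpha * (z - gamma_star * np.sqrt(h)) ** 2
    return np.column_stack(paths)                        # shape (N, T + 1)

# Placeholder parameters (illustrative, not the Table 1 calibration):
S = simulate_hn_garch(N=10_000, T=30, S0=100.0, r=0.0,
                      omega=1e-6, alpha=1e-6, beta=0.9, gamma_star=100.0)
```

With r = 0 the discounted price is a martingale under Q, so the cross-sectional mean of the terminal prices stays near S_0.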
We compare the hedge ratios of the two loss functions against the underlying price at a snapshot of t = 20 in Figure 3a. Both curves converge to 1.0 in the deep in-the-money range and get close to 0.0 in the out-of-the-money regime. However, the mean-variance curve shows a slightly larger slope than the CVaR curve. This pattern results from the nature of the CVaR loss function: it is concerned more with the distribution of the hedge error ∆W, so it is less sensitive to changes in the underlying price itself.
We move on to the distribution of the hedge error ∆W, displayed in Figure 3b. More statistics are shown in Table 4, and the mean and variance for the mean-variance loss function are clearly smaller than their counterparts for the CVaR loss function. At first glance, we might conclude from these statistics that the mean-variance loss function outperforms CVaR. On reflection, however, this is expected, since the mean-variance loss function explicitly targets these two statistics; the CVaR loss function remains reasonable, and arguably more practical, because it is concerned with the tail of the distribution. As shown in Figure 3b, the distribution for CVaR indeed has a shorter tail than that for the mean-variance loss function. In this case, we shall not say that one loss function is better than the other; each has advantages in its relevant field. The mean-variance loss function might be preferred by traders, because they care more about the expectation of their investments, while risk managers might prefer the CVaR loss function because their concern is how much money they might lose in the worst case. After the comparison between the two loss functions, we move on to examine the impact of transaction costs. In our project, we applied a proportional transaction cost of 0.2% and investigated its influence on the initial endowment V_0, the hedging strategy Φ_0, and the distribution of the hedging error ∆W. A similar comparison is shown in Table 5. We notice that the presence of transaction costs causes almost no change in the hedge ratio Φ_0, but increases the option price V_0. This is because the investor has to pay a small amount of money for every hedge step; these small amounts accumulate and are reflected in the final option price. The curves of hedge ratio versus underlying price are also presented in Figure 4a. Observation shows that the presence of transaction costs has no conspicuous impact on the hedge ratio. Furthermore, the distribution of the hedging error described in Figure 4b and Table 6 indicates that the distribution of ∆W remains essentially unchanged when a proportional transaction cost is applied. Since all aforementioned analyses use the mean-variance loss function, a similar investigation is carried out under the CVaR loss function, with the results displayed in Figure 4c,d.

Empirical Results
Empirical properties of asset returns are mainly characterized by volatility clustering, high kurtosis, and slow decay of the auto-correlations of squared returns. GARCH models are commonly employed in modelling financial series that exhibit these properties. However, standard GARCH models assume that positive and negative error terms have a symmetric effect on the volatility. In practice, this assumption is frequently violated due to the leverage effect, i.e., the asymmetric response of volatility to positive and negative returns. Therefore, the QGARCH(1,1) model is introduced as a realistic objective-measure model for equity index returns, to help us discover how the hedging performance is affected by the leverage effect. These properties of the QGARCH model fit the asymmetric characteristics of illiquid assets.

Q-GARCH
The QGARCH model was proposed by Sentana (1995) to overcome this weakness of the GARCH model. Under the QGARCH(1,1) framework, the asset return and its volatility evolve as follows:

Y_t = μ + ε_t,
σ_t² = ω + α(ε_{t−1} − γ)² + β σ_{t−1}²,

where the innovation process is defined as ε_t = σ_t z_t, and {z_t} is a sequence of i.i.d. random variables assumed to follow the standard normal distribution N(0, 1). The autoregressive parameter β partly determines the persistence of the variance in the model, and the innovation parameter α determines the volatility of volatility. When α is not zero, the kurtosis of the spot return increases and consequently the distribution of returns exhibits a fat-tail phenomenon. This characteristic renders the model consistent with the stylized facts that financial time series have positive excess kurtosis and heavy-tailed distributions.
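A single return path from this recursion can be sketched as follows. The parameterisation with the shifted squared innovation (ε_{t−1} − γ)² is one common form of Sentana's model, and the parameter values below are illustrative placeholders, not the HFRIFOF calibration; the function name `simulate_qgarch` is ours.

```python
import numpy as np

def simulate_qgarch(T, mu, omega, alpha, beta, gamma, seed=0):
    """Simulate one return path from a QGARCH(1,1) model:
        sigma_t^2 = omega + alpha (eps_{t-1} - gamma)^2 + beta sigma_{t-1}^2,
        eps_t = sigma_t z_t,  z_t ~ N(0, 1),  Y_t = mu + eps_t.
    For gamma > 0, a negative shock raises the variance more than a
    positive shock of the same size (the leverage effect)."""
    rng = np.random.default_rng(seed)
    sigma2 = omega / (1.0 - alpha - beta)   # start near the long-run level
    eps_prev, returns = 0.0, []
    for _ in range(T):
        sigma2 = omega + alpha * (eps_prev - gamma) ** 2 + beta * sigma2
        eps_prev = np.sqrt(sigma2) * rng.standard_normal()
        returns.append(mu + eps_prev)
    return np.array(returns)

# 126 monthly returns, matching the sample length used in the text:
y = simulate_qgarch(T=126, mu=0.005, omega=1e-4, alpha=0.1, beta=0.8, gamma=0.01)
```

Setting γ = 0 recovers a symmetric GARCH(1,1) news-impact curve, which makes the role of the asymmetry parameter easy to check numerically.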
The parameter γ captures the asymmetry in the response of volatility to positive versus negative return shocks, and it also captures the leverage effect.If the parameter γ is zero, the distribution is symmetric, while a value of γ different from zero results in asymmetric influences of the shocks, e.g., a large negative shock z t raises the variance more than a large positive shock does.

Data
The analysis in this chapter is based on the HFRI Fund of Funds Index (HFRIFOF). Our main tests use monthly data from 31 December 2005 to 30 June 2016. The dataset is obtained from Bloomberg, and the sample spans 126 trading months. The sample data are used to calibrate the Q-GARCH(1,1) model by the maximum likelihood method.
The HFRX Global Index (HFRX) is an investable index with daily liquidity, including a subset of managers from the HFR database (approximately 6800 funds) that are open for investment and will accept managed account investments from HFR, along with other restrictions. Survivorship bias commonly occurs in hedge fund indices: an upward bias is created when obsolete funds cease to report to a database. In addition, hedge funds may also choose to stop reporting, which can result in a downward bias.

Results
The analysis of the empirical results resembles the one carried out on the numerical results. We first investigate the difference between the two loss functions. As shown in Table 7, the discrepancies in the option price V_0 and the hedge ratio Φ_0 are all restricted within a safe range of 5%, which indicates that both loss functions are valid under real-life scenarios. Moving on, a snapshot of the hedge ratio versus the underlying price change is plotted in Figure 5a. Both curves converge to 1 in the money and decay to 0 out of the money, and the slope for mean variance is larger than that for CVaR, just as described in the numerical results. A significant difference, however, is that the points in the plot are rather scattered, compared with the clear curves obtained for the numerical results. This phenomenon comes from the use of the Q-GARCH model, which captures the asymmetric property of the illiquid asset while leaving each individual point path-dependent. Moreover, the distribution of the hedging error ∆W is investigated in Figure 5b and Table 8. The mean variance loss function shows promising capability in minimizing the total mean and variance, while the CVaR loss function indeed secures a shorter tail for the distribution.
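The two criteria compared above can be written down concretely. This is a minimal sketch of standard mean-variance and empirical CVaR objectives on a sample of hedging errors; the risk-aversion weight `lam`, the 95% confidence level, and the sign convention (larger ∆W means a larger loss) are assumptions, not taken from the paper.

```python
import numpy as np

def mean_variance_loss(dW, lam=1.0):
    """Mean-variance criterion on hedging errors dW (one value per path)."""
    return np.mean(dW) + lam * np.var(dW)

def cvar_loss(dW, level=0.95):
    """Empirical CVaR: the average of the worst (1 - level) fraction of errors.

    Convention here: larger dW means a larger loss.
    """
    var = np.quantile(dW, level)   # Value-at-Risk threshold
    tail = dW[dW >= var]
    return tail.mean()

# CVaR looks only at the tail, so it penalises extreme errors that the
# mean-variance criterion averages away.
errors = np.random.default_rng(1).normal(0.0, 1.0, 10_000)
mv = mean_variance_loss(errors)
cv = cvar_loss(errors)
```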
Similarly, we examine how transaction costs influence the hedging procedure in the empirical results. Table 9 demonstrates that the option price increases in the presence of transaction costs while the hedge ratio remains unchanged. Table 10 and Figure 6 confirm that the distribution of the hedging error ∆W is not influenced by transaction costs, no matter which loss function is applied.
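How proportional transaction costs enter the hedging error can be made explicit with a small sketch. This is an illustration under stated assumptions, not the paper's implementation: costs are proportional to the notional traded at rate `cost_rate`, and the payoff shown is an at-the-money call, chosen only as an example.

```python
import numpy as np

def hedging_pnl(S, phi, V0, cost_rate=0.0):
    """Terminal hedging error for one price path.

    S         : prices S_0 ... S_T (length T + 1)
    phi       : hedge ratios phi_0 ... phi_{T-1} (length T)
    V0        : initial option price charged
    cost_rate : proportional transaction cost per unit of notional traded
    """
    gains = np.sum(phi * np.diff(S))              # gains from the hedge
    trades = np.abs(np.diff(phi, prepend=0.0))    # units bought/sold each step
    costs = cost_rate * np.sum(trades * S[:-1])   # proportional costs
    payoff = max(S[-1] - S[0], 0.0)               # illustrative ATM call payoff
    return V0 + gains - costs - payoff
```

With `cost_rate = 0` the costs vanish, so turning costs on can only lower the terminal wealth for the same path and strategy; this is why the seller must charge a higher V_0 to keep the hedging error distribution unchanged.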

Conclusions
This study has introduced and validated a model-free, data-driven approach that leverages Long Short-Term Memory (LSTM) neural networks to address the hedging challenges associated with illiquid assets. We examined the effects of the initial endowment, the hedging strategy, and the distribution of hedging errors under two distinct loss functions, and assessed the impact of transaction costs on each. Our findings affirm the validity of both loss functions, each displaying unique advantages within specific contexts. Importantly, our results show that transaction costs raise the final option price without altering the hedging strategy or the extent of the hedging error. A critical strength of the proposed LSTM-based model lies in its flexibility: it does not rely on predefined models or assumptions, and it adapts robustly to a wide array of data within a discrete-time framework. The LSTM network captures data patterns that traditional models might overlook, significantly shortens the computation time compared with analytical approaches, and offers a more nuanced understanding of risk in hedge fund portfolios. Looking forward, potential avenues for further investigation include varying the hedging frequency and incorporating other option types, such as binary options or options with target volatility. This work broadens the toolkit of financial risk management and lays the groundwork for future innovations in the field.

Figure 3. Comparison of difference between loss functions. (a) Comparison of hedge ratio for different loss functions. (b) Distribution of hedge errors for different loss functions.

Figure 4. Comparison of difference with/without transaction costs. (a) Comparison of hedge ratio with/without transaction costs (mean variance). (b) Distribution of hedge errors with/without transaction costs (mean variance). (c) Comparison of hedge ratio with/without transaction costs (CVaR). (d) Distribution of hedge errors with/without transaction costs (CVaR).

Figure 5. Comparison of difference between loss functions (Q-GARCH). (a) Comparison of hedge ratio for different loss functions (Q-GARCH). (b) Distribution of hedge errors for different loss functions (Q-GARCH).

Figure 6. Comparison of with/without transaction costs (Q-GARCH). (a) Comparison of hedge ratio with/without transaction costs (mean variance). (b) Distribution of hedge errors for different loss functions (mean variance). (c) Comparison of hedge ratio for different loss functions (CVaR). (d) Distribution of hedge errors for different loss functions (CVaR).

Table 1. Parameters for the HN-GARCH model.

Table 2. Hedging results for LSTM and analytic solutions.

Table 3. Hedging results for different loss functions.

Table 4. Statistics of hedge error distribution for different loss functions.

Table 5. Hedging results with/without transaction costs.

Table 6. Statistics of hedge error distribution with/without transaction costs.

Table 7. Hedging results for different loss functions.

Table 8. Statistics of hedge error distribution for different loss functions.

Table 9. Hedging results with/without transaction costs.

Table 10. Statistics of hedge error distribution with/without transaction costs.