Article

A Novel Hybrid Temporal Fusion Transformer Graph Neural Network Model for Stock Market Prediction

by Sebastian Thomas Lynch, Parisa Derakhshan and Stephen Lynch *
Department of Computer Science, Loughborough University, Loughborough LE11 3TU, UK
* Author to whom correspondence should be addressed.
AppliedMath 2025, 5(4), 176; https://doi.org/10.3390/appliedmath5040176
Submission received: 29 October 2025 / Revised: 1 December 2025 / Accepted: 2 December 2025 / Published: 8 December 2025

Abstract

Forecasting stock prices remains a central challenge in financial modelling, as markets are influenced by factors such as market sentiment, firm-level fundamentals, and complex interactions between macroeconomic and microeconomic variables. This study evaluates the predictive performance of both classical statistical models and advanced attention-based deep learning architectures for daily stock price forecasting. Using a dataset of major U.S. equities and Exchange Traded Funds (ETFs) covering 2012–2024, we compare traditional statistical approaches, Seasonal Autoregressive Integrated Moving Average (SARIMA) and Exponential Smoothing (ES) in the Error, Trend, Seasonal (ETS) framework, with deep learning architectures such as the Temporal Fusion Transformer (TFT), and a novel hybrid model, the TFT-Graph Neural Network (TFT-GNN), which incorporates relational information between assets. All models are assessed under consistent experimental conditions in terms of forecast accuracy, computational efficiency, and interpretability. Our results indicate that while statistical models offer strong baselines with high stability and low computational cost, the TFT outperforms them in capturing short-term nonlinear dependencies. The hybrid TFT-GNN achieves the highest overall predictive accuracy, demonstrating that relational signals derived from inter-asset connections provide meaningful enhancements beyond traditional temporal and technical indicators. These findings highlight the advantages of integrating relational learning into temporal forecasting frameworks and emphasise the continued relevance of statistical models as interpretable and efficient benchmarks for evaluating deep learning approaches in high-frequency financial prediction.

1. Introduction

Stock price prediction is a major challenge for financial modelling due to market volatility and complexity. Despite advances in data availability and computation, developing models that consistently deliver accurate short-term forecasts remains difficult.
Accurate short-term stock forecasts are valuable across portfolio management, algorithmic trading, and risk control, helping practitioners such as quantitative analysts make data-driven decisions when assessing market conditions and managing exposure. Identifying which modelling approaches work best, and under which constraints, therefore remains highly relevant to both research and industry practice.
Although the Efficient Market Hypothesis (EMH) argues that prices fully reflect all available information and follow an unpredictable random walk [1], extensive work in behavioural finance shows that markets exhibit inefficiencies driven by investor biases, slow information diffusion, and structural frictions [2]. Such frictions create persistent cross-asset dependencies, for example within sectors or supply chains, providing an economic rationale for modelling relational structure rather than treating assets independently. This motivates the use of graph-based methods that capture how information and shocks propagate across related securities.
A recent review of stock market prediction using machine learning and deep learning techniques is given in [3]. Classical statistical approaches such as Seasonal Autoregressive Integrated Moving Average (SARIMA) and Exponential Smoothing (ES) within the Error Trend Seasonal (ETS) framework remain strong and interpretable baselines for financial forecasting. Their computational efficiency makes them attractive for high-frequency retraining, though their linear formulation limits their ability to capture nonlinear and rapidly shifting market dynamics [4]. To overcome these limitations, recurrent neural networks, particularly Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures, have been widely adopted in financial time-series analysis due to their capacity to model temporal dependence; see [5] for an introduction with Python (version 3.14), and [6] for an introduction to the MathWorks® Deep Learning Toolbox. Convolutional Neural Networks (CNNs) have also been explored for extracting local temporal patterns, and hybrid variants continue to appear in the literature, such as a CNN-LSTM model [7]. However, these architectures often struggle with long-range dependencies, vanishing gradients, and limited interpretability.
Recent attention-based architectures, especially the Temporal Fusion Transformer (TFT), offer significant improvements by combining recurrent encoders with interpretable attention layers for multi-horizon forecasting [8].
Hybrid deep learning models are increasingly prevalent, ranging from variance–mean decomposition approaches [9] to integrated time–frequency transformer pipelines [10], reflecting a broader trend toward combining complementary modelling components.
A major limitation in the existing literature is that nearly all models treat assets independently, ignoring cross-asset relationships arising from sectoral links, supply chains, co-movement patterns, and shared macro exposures. This omission persists even though relational networks have proven effective in other predictive domains. A survey of 124 stock prediction studies by Jiang et al. [11] shows that only 4.2% incorporate relational information, and several recent reviews omit graph-based models entirely [12,13]. This reveals a clear scientific gap.
While simple correlation structures capture linear co-movement, they cannot represent nonlinear, asymmetric, or higher-order dependencies between assets. Graph-based methods, by contrast, model how shocks propagate across multi-hop connections, such as through supply-chain networks, sector hierarchies, or shared macro exposures, providing a richer representation of market structure than pairwise correlation alone. This strengthening of the economic and statistical rationale explains why integrating GNN-derived relational signals into a temporal architecture may yield predictive gains.
To address this, the present study evaluates statistical baselines (SARIMA, ETS), a transformer model (TFT), and a novel Temporal Fusion Transformer-Graph Neural Network (TFT-GNN) hybrid for daily stock price forecasting using a sample of major U.S. equities and ETFs from 2012–2024. While GNNs have been applied independently to financial prediction tasks, prior work has not integrated graph-derived relational signals directly into a transformer-based temporal model, nor evaluated such a hybrid under a unified experimental framework. The TFT-GNN model embeds relational information, captured through graph attention mechanisms, into the TFT’s time-varying input layer, enabling joint temporal–relational learning.
Table 1 provides an overview of each of the deep learning models mentioned above.
The TFT-GNN’s structure allows the study to directly assess when relational information materially improves prediction accuracy, how it compares to statistical and transformer-only approaches, and what trade-offs arise in terms of interpretability and compute time.

2. Methodology

2.1. Overview

This study adopts a comparative modelling approach to evaluate statistical and deep learning methods for daily stock price forecasting. Specifically, we compare two statistical baselines, SARIMA and ES, with two attention-based architectures: the TFT and TFT-GNN hybrid. All models were implemented under consistent data, training, and evaluation settings to ensure a fair comparison.

2.2. Data and Preprocessing

Daily OHLCV (Open, High, Low, Close, Volume) data for Apple (AAPL), JPMorgan Chase (JPM), NVIDIA (NVDA), and the S&P 500 ETF (SPY) were retrieved using the Yahoo Finance API via the yfinance Python module, covering the period from 2012–2024. These four securities were selected to provide coverage of key market sectors: technology (AAPL, NVDA), financials (JPM), and a broad-market benchmark (SPY). They also encompass a range of volatility profiles, enabling a focused evaluation of model performance across diverse stock behaviours.
In addition to the four primary stocks/ETFs, the full set of NASDAQ-traded securities was retrieved, with ticker symbols obtained from the NASDAQ Trader symbol directory [14]. These additional securities were essential for constructing the relational data required by the hybrid TFT-GNN model. Since the data was downloaded directly from Yahoo Finance, there were no missing values or errors, and preprocessing was limited to feature engineering and scaling.
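For concreteness, a minimal Python sketch of this retrieval step is shown below. It follows the description above (yfinance, daily bars, 2012–2024); the exact download options and column handling are assumptions of the sketch rather than the study's verbatim code.

```python
import yfinance as yf

TICKERS = ["AAPL", "JPM", "NVDA", "SPY"]

# Download daily OHLCV bars for 2012-2024 in a single call; yfinance
# returns a DataFrame whose columns are indexed by (field, ticker).
prices = yf.download(TICKERS, start="2012-01-01", end="2024-12-31",
                     auto_adjust=False, progress=False)

# Extract one asset's OHLCV frame, e.g. AAPL.
aapl = prices.xs("AAPL", axis=1, level=1)
print(aapl[["Open", "High", "Low", "Close", "Volume"]].tail())
```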
While the statistical models were univariate, relying solely on closing prices, the deep learning models required extensive feature engineering, using OHLCV data to generate lagged variables and technical indicators. Feature selection was performed both before and after training. Highly correlated feature pairs were reduced to avoid redundancy. Weak linear correlation with the target did not preclude inclusion, as nonlinear, time-dependent patterns may still hold predictive value. Figure 1 shows a correlation heatmap of engineered and raw variables for AAPL, used to identify highly correlated features.
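The sketch below illustrates the kind of feature engineering described above, deriving lagged closing prices together with the RSI and MACD indicators that later prove influential (Section 3.3.1); the lag count and the EMA approximation to Wilder's RSI smoothing are assumptions of this sketch.

```python
import pandas as pd

def engineer_features(df: pd.DataFrame, n_lags: int = 5) -> pd.DataFrame:
    """Add lagged prices and technical indicators to an OHLCV frame."""
    out = df.copy()
    # Lagged closing prices as autoregressive-style inputs.
    for k in range(1, n_lags + 1):
        out[f"close_lag_{k}"] = out["Close"].shift(k)
    # 14-day RSI, with Wilder's smoothing approximated by an EMA.
    delta = out["Close"].diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / 14, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / 14, adjust=False).mean()
    out["rsi_14"] = 100 - 100 / (1 + gain / loss)
    # MACD: difference between the 12- and 26-day EMAs of the close.
    ema12 = out["Close"].ewm(span=12, adjust=False).mean()
    ema26 = out["Close"].ewm(span=26, adjust=False).mean()
    out["macd"] = ema12 - ema26
    return out.dropna()
```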
For the deep learning models, input features were automatically standardised (zero mean, unit variance) within the PyTorch Forecasting TimeSeriesDataSet pipeline, while the statistical models (SARIMA, ETS) operated on raw closing prices without scaling.

2.3. Problem Framing and Experimental Setup

The forecasting task was framed as a regression problem targeting the daily closing price. Three temporal splits were used to ensure robustness:
  • Training & validation 2012–2017, test 2018.
  • Training & validation 2015–2020, test 2021.
  • Training & validation 2018–2023, test 2024.
A rolling-window validation procedure was applied for hyperparameter tuning, and models were retrained on full training and validation data before evaluation on the holdout year. By evaluating all models on out-of-sample data from future periods unseen during training, this procedure effectively mitigates overfitting.
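A minimal sketch of the chronological splitting step is given below, assuming a frame indexed by trading date; exact boundary handling (calendar versus trading days) is an assumption of the sketch.

```python
# The three temporal splits used in the study.
SPLITS = [
    ("2012-01-01", "2017-12-31", "2018"),
    ("2015-01-01", "2020-12-31", "2021"),
    ("2018-01-01", "2023-12-31", "2024"),
]

def chronological_split(df, train_start, train_end, test_year):
    """Return (train+validation, holdout-year) frames; df needs a DatetimeIndex."""
    train_val = df.loc[train_start:train_end]
    test = df.loc[f"{test_year}-01-01":f"{test_year}-12-31"]
    return train_val, test
```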

2.4. Models

2.4.1. Statistical Models

SARIMA, introduced by Box et al. in 1970 [15], comprises autoregressive (dependence on past values, order p), integrated (differencing to ensure stationarity, order d), and moving average (lagged error terms, order q) components, together with their seasonal counterparts (P, D, Q)_s of period s. A full grid search over all seven hyperparameters was computationally infeasible, so the parameter space was constrained as follows (a sketch of the search appears after the list):
  • Non-seasonal orders: p ∈ {0, 1, 2}, d = 1, q ∈ {0, 1, 2, 3, 4}.
  • Seasonal orders: P ∈ {0, 1}, D = Q = 1, s ∈ {5, 21, 63}.
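A hedged sketch of this constrained grid search with statsmodels is given below; the AIC-based selection criterion is an assumption of the sketch, as the study does not specify how candidate orders were ranked.

```python
import itertools
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_best_sarima(y):
    """Select SARIMA orders by AIC over the constrained grid above."""
    non_seasonal = itertools.product([0, 1, 2], [1], [0, 1, 2, 3, 4])  # (p, d, q)
    seasonal = itertools.product([0, 1], [1], [1], [5, 21, 63])        # (P, D, Q, s)
    best_aic, best_fit = float("inf"), None
    for (p, d, q), (P, D, Q, s) in itertools.product(non_seasonal, seasonal):
        try:
            fit = SARIMAX(y, order=(p, d, q),
                          seasonal_order=(P, D, Q, s)).fit(disp=False)
        except Exception:
            continue  # skip candidates that fail to converge
        if fit.aic < best_aic:
            best_aic, best_fit = fit.aic, fit
    return best_fit
```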
ES, introduced by Robert G. Brown in 1956 [16] and further developed in his 1959 book [17], forecasts future values by assigning exponentially decreasing weights to past observations. Its simplicity and effectiveness in short-term forecasting contributed to its popularity [18]. In 2002, Hyndman et al. extended this approach by introducing the ETS framework [19], which explicitly models the error, trend, and seasonal components in either additive or multiplicative form; seasonal periods of 5, 21, and 63 days were considered here.
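By comparison with the SARIMA search, fitting a single ETS configuration is far cheaper. The sketch below shows a one-step-ahead forecast with statsmodels, using one additive candidate configuration from the search space described above.

```python
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

def ets_one_step(y, seasonal_periods=5):
    """Fit an additive ETS model and forecast the next trading day."""
    model = ETSModel(y, error="add", trend="add",
                     seasonal="add", seasonal_periods=seasonal_periods)
    fit = model.fit(disp=False)
    return fit.forecast(steps=1)
```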

2.4.2. Deep Learning Models

The TFT was implemented using the PyTorch Forecasting library, combining variable selection networks, gated residual blocks, and interpretable attention layers. Originally introduced by Lim et al. at Google Research in 2021 [20], the TFT is a transformer-based architecture designed specifically for multi-horizon time-series forecasting. It builds upon the transformer framework first established in the seminal work “Attention Is All You Need” by Vaswani et al. [21], which introduced the self-attention mechanism that allows models to efficiently capture long-term dependencies in sequential data.
Unlike standard recurrent or convolutional approaches, the TFT explicitly separates static, known, and observed time-varying inputs through dedicated encoders, allowing it to model both temporal dynamics and static metadata simultaneously. The model architecture consists of a sequence of Gated Residual Networks (GRNs) for feature processing and Variable Selection Networks (VSNs) that dynamically choose the most relevant covariates at each time step. Temporal dependencies are modelled through a sequence-to-sequence LSTM backbone combined with multi-head self-attention, enabling the model to focus selectively on informative past time steps.
One of the key innovations of the TFT is its interpretable attention layer, which provides insight into which historical inputs most influenced the model’s forecasts. Although computationally intensive, the TFT’s hybrid design of attention and gating mechanisms achieves a balance between model expressiveness and interpretability, making it well-suited for high-dimensional financial time-series data.
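A sketch of the corresponding PyTorch Forecasting setup is shown below, using the initial configuration reported in Appendix A.1; the long-format DataFrame `df` and its column names (`time_idx`, `close`, `ticker`, and the indicator columns) are assumptions introduced for illustration.

```python
from pytorch_forecasting import TemporalFusionTransformer, TimeSeriesDataSet
from pytorch_forecasting.metrics import SMAPE

# `df` is assumed to be a long-format frame of engineered features with an
# integer trading-day index ("time_idx") and a ticker identifier.
training = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="close",
    group_ids=["ticker"],
    max_encoder_length=30,        # 30-day lookback window
    max_prediction_length=1,      # one-step-ahead daily forecast
    time_varying_unknown_reals=["close", "volume", "rsi_14", "macd"],
)

tft = TemporalFusionTransformer.from_dataset(
    training,
    learning_rate=0.01,
    hidden_size=128,
    attention_head_size=2,
    dropout=0.1,
    loss=SMAPE(),
)
```

The `TimeSeriesDataSet` pipeline also performs the input standardisation noted in Section 2.2.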
GNNs, first proposed by Scarselli et al. [22], extend neural network operations to graph-structured data by iteratively aggregating information from neighboring nodes, enabling the model to capture relational dependencies. In financial contexts, this allows stocks to be represented as nodes linked by correlations, sector relationships, or supply-chain ties [23]. Graph Attention Networks (GATs), introduced by Veličković et al. [24], apply attention mechanisms to graph learning, allowing the network to assign adaptive importance weights to neighboring nodes. These architectures enable the modelling of both the structure and strength of inter-stock relationships, which is particularly valuable in stock market prediction [25].
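For concreteness, a minimal GAT sketch is given below, framing next-day movement as binary node classification over the stock graph; PyTorch Geometric is used here for illustration, and the layer sizes are assumptions rather than the study's tuned configuration.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class StockGAT(torch.nn.Module):
    """Two-layer GAT producing up/down logits for each stock (node)."""

    def __init__(self, in_features: int, hidden: int = 32, heads: int = 2):
        super().__init__()
        self.gat1 = GATConv(in_features, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, 2, heads=1)  # 2 classes: up/down

    def forward(self, x, edge_index):
        # x: [num_stocks, in_features]; edge_index: [2, num_edges]
        h = F.elu(self.gat1(x, edge_index))
        return self.gat2(h, edge_index)
```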
The TFT-GNN hybrid extended this framework by incorporating relational information from a GAT into the TFT’s temporal input layer. The GAT outputs were transformed into a simplified directional signal (up or down), allowing the TFT to leverage relational trends without excessive dimensional complexity. This integration enabled joint temporal–relational learning, improving predictive accuracy and interpretability compared to standalone temporal or graph-based models. Because the GAT edge attention weights are retained, the relational component remains fully interpretable: we can observe which neighbour assets most strongly influence the target each day.
While both TFTs and GNNs have been individually applied to stock market prediction [26,27], combining them into a hybrid model marks a novel contribution. The key innovations are as follows (a sketch of the distillation-and-injection step appears after the list):
  • A simplified directional GNN signal; a binary up/down indicator distilled from a GAT classifier rather than high-dimensional embeddings;
  • A lightweight integration strategy that injects relational signals as time-varying features, preserving the TFT’s variable-selection and attention-based interpretability.
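The sketch below illustrates this distillation-and-injection step; the column name `gnn_signal` is an assumption introduced for illustration.

```python
import torch

@torch.no_grad()
def relational_signal(gat, x, edge_index):
    """Collapse GAT logits to a 1 (up) / 0 (down) signal per stock."""
    logits = gat(x, edge_index)   # [num_stocks, 2]
    return logits.argmax(dim=1)

# Appended as an ordinary time-varying column, the signal passes through
# the TFT's variable-selection networks like any other covariate, e.g.
#   df["gnn_signal"] = daily_signals.astype(float)
# with "gnn_signal" added to time_varying_unknown_reals in the
# TimeSeriesDataSet sketched in Section 2.4.2.
```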
In-depth details on the hyperparameters of the TFT and TFT-GNN models can be found in Appendix A.1 and Appendix A.2, respectively.

2.5. Evaluation Metrics

Performance was assessed using Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and the Coefficient of Determination (R²). Together, these metrics capture error magnitude, scale-independent accuracy, and explanatory power. Model interpretability was further examined using attention weight visualisations for the TFT and TFT-GNN.
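The sketch below computes the four reported metrics with scikit-learn; it is a straightforward restatement of the standard definitions rather than study-specific code.

```python
import numpy as np
from sklearn.metrics import (mean_absolute_error,
                             mean_absolute_percentage_error,
                             mean_squared_error, r2_score)

def evaluate(y_true, y_pred):
    """Return RMSE, MAE, MAPE, and R^2 for one forecast series."""
    return {
        "RMSE": float(np.sqrt(mean_squared_error(y_true, y_pred))),
        "MAE": float(mean_absolute_error(y_true, y_pred)),
        # sklearn's MAPE is a fraction; multiply by 100 for percent.
        "MAPE": float(mean_absolute_percentage_error(y_true, y_pred)),
        "R2": float(r2_score(y_true, y_pred)),
    }
```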

2.6. Implementation

All experiments were implemented in Python 3.14 using Pandas, statsmodels, PyTorch 2.5, and PyTorch Forecasting. Training was conducted on Google Colab with GPU acceleration for the deep learning models. Code and data preprocessing scripts are available in Appendix B.

3. Results

3.1. Overview

This section presents the empirical results for the four selected models. All experiments focus on daily stock price forecasting, evaluated on a sample of major U.S. equities and ETFs from 2018–2024. Models were compared using consistent datasets and metrics, with performance assessed in terms of predictive accuracy, computational efficiency, and interpretability. A comprehensive table reporting all performance metrics for every model, security, and test period is provided in Appendix C. Example figures for each model, illustrating actual versus predicted values, are included in this section.
Due to computational constraints, SARIMA was implemented using a weekly sliding-window approach, where each model was refit weekly and used to generate daily predictions for the subsequent week. In contrast, ES, TFT, and TFT-GNN employed a daily rolling-window framework, updating the model each trading day using the most recent data.
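Schematically, the two schemes differ only in refit frequency, as in the sketch below; `fit_and_forecast` is a placeholder for any of the model wrappers sketched in Section 2, and a step of one trading day recovers the daily rolling window.

```python
def sliding_window_forecast(y, test_dates, fit_and_forecast, step=5):
    """Refit every `step` trading days and forecast until the next refit."""
    preds = []
    for i in range(0, len(test_dates), step):
        history = y.loc[:test_dates[i]].iloc[:-1]  # data strictly before the refit date
        horizon = min(step, len(test_dates) - i)
        preds.extend(fit_and_forecast(history, horizon=horizon))
    return preds
```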

3.2. Statistical Models

3.2.1. SARIMA

The SARIMA model produced mixed results across assets and time periods. While it captured recurring seasonal and autoregressive patterns, its linear formulation limited responsiveness to abrupt market shifts. Across all test periods, SARIMA achieved an average RMSE = 3.4862 and R² = 0.9168. Performance was similar across stocks, despite their varying levels of volatility. Figure 2 shows the SARIMA model’s prediction for SPY in 2024.
Hyperparameter optimisation required significant computational time, and full daily retraining proved infeasible. Each forecast required approximately 10–20 min per stock under the weekly sliding-window scheme (already several times longer than ES) and over an hour under a daily rolling window. Consequently, SARIMA was evaluated only under the weekly sliding-window scheme.
The use of a weekly sliding window introduces asymmetry relative to ES and the deep learning models, which use a daily updating scheme. This discrepancy inevitably affects the fairness of direct comparisons: SARIMA has fewer opportunities to adapt to fast-moving market dynamics. However, the reduced update frequency stems from its high computational burden rather than from any modelling preference or limitation of the approach itself.

3.2.2. ES Model in the ETS Framework

The ES model achieved strong predictive accuracy at a fraction of the computational cost of SARIMA. Its exponential updating mechanism enabled rapid adaptation to recent price dynamics, and daily retraining was computationally feasible.
Across assets, ES achieved an average RMSE = 2.7070 and R² = 0.9630, outperforming SARIMA in most daily settings. The model performed particularly well for volatile assets such as JPM and NVDA, as shown in Figure 3, where adaptive trend and seasonality components effectively captured short-term reversals.
Computation times averaged under five minutes per stock per year, making ES the only statistical model suited to true daily rolling-window forecasting. While interpretability remained high, ES occasionally lagged during sharp regime changes due to its assumption of locally smooth trends.

3.3. Deep Learning Models

3.3.1. TFT

The TFT achieved superior predictive performance compared with the statistical baselines while maintaining manageable training times. The architecture effectively captured nonlinear temporal dependencies and provided interpretable feature importance and attention weight visualisations. On average, the model achieved RMSE = 2.3369 and R² = 0.9577, representing an improvement over ES. Figure 4 shows one of the TFT’s strongest predictions: SPY in 2021.
Feature attribution (Figure 5) indicated that the Relative Strength Index (RSI) and Moving Average Convergence Divergence (MACD) were the most influential technical indicators, while trading volume contributed minimally. Temporal attention plots (Figure 6) showed that the model focused primarily on the most recent 5–10 trading days, consistent with short-term market memory. Training each period required approximately five minutes, confirming the model’s practicality for daily rolling forecasts.

3.3.2. Hybrid TFT-GNN

Integrating graph-based relational information further improved predictive accuracy. The TFT-GNN hybrid achieved the best overall results, with an average RMSE = 2.1662 and R² = 0.9645, outperforming the standalone TFT in 11 of 12 evaluated periods. The hybrid’s strongest performance occurred for SPY in 2024, shown in Figure 7, where it achieved R² = 0.9873, the highest of all models.
Using a simplified GNN-derived up/down signal produced superior results to high-dimensional graph embeddings, suggesting that compact relational cues are more informative for daily prediction tasks. Moreover, the GAT edge-attention weights provide explicit attribution of which assets contributed to the relational signal, offering interpretability beyond attention heatmaps alone. Figure 8 illustrates the eight stocks most frequently assigned high attention scores when predicting AAPL.
The resulting set, MSFT, LRCX, TMO, AMD, QCOM, GOOGL, TSLA, and MRK, combines both economically intuitive and broader market-driven relationships. Several top contributors (MSFT, GOOGL, AMD, QCOM, LRCX) belong to Apple’s core technology and semiconductor ecosystem. TSLA, while not directly connected to Apple, is a similarly large, high-profile, high-beta company whose stock movements often correlate with broad market trends. TMO and MRK, as healthcare-sector stocks, show limited structural linkage to Apple, reflecting more general market co-movements rather than industry-specific relationships.
Attention analysis of the TFT revealed that graph-derived relational signals were assigned greater weight than conventional indicators such as RSI and MACD, shown in Figure 9.
Training times averaged 10–20 min per period, around double the TFT baseline, confirming that superior predictive performance comes at a higher computational cost.

3.4. Comparative Summary

ES in the ETS framework provided the strongest statistical model, balancing accuracy, interpretability, and computational efficiency. The TFT-GNN achieved the highest overall predictive accuracy, demonstrating the value of combining temporal attention with relational graph information. Comprehensive results are in Appendix C, and the findings are summarised in Table 2.
Overall, the results show that while traditional statistical models remain competitive baselines, attention-based architectures, and particularly the TFT-GNN hybrid, offer significant improvements in predictive accuracy without prohibitive computational cost. These findings emphasise the practical benefits of integrating relational information into temporal forecasting models for daily financial prediction.

4. Discussion

The comparative evaluation reveals distinct trade-offs between predictive accuracy, computational feasibility, and interpretability across the model families examined. Although deep learning models often dominate in complex sequence modelling, the results here reaffirm the continued competitiveness of statistical methods for financial time series forecasting when appropriately tuned and constrained.

4.1. Statistical Models

The statistical models exhibited consistent and interpretable behaviour, with performance differences largely explained by each model’s capacity to handle volatility and adapt to evolving patterns. Due to the high computational cost of full daily retraining, SARIMA was evaluated using a weekly sliding-window scheme that still produced daily forecasts but updated parameters only once per week. This adjustment was driven entirely by computational burden rather than by methodological preference, and should be taken into account when comparing SARIMA’s performance with models trained under a true daily rolling-window framework.
Among the statistical approaches, ES achieved the strongest results. Its adaptive smoothing mechanisms provided effective short-horizon responsiveness with minimal training overhead. SARIMA’s accuracy was limited by the weekly sliding window, but optimal parameters were frequently close to (p, d, q) = (2, 1, 3), aligning with previous findings in the financial forecasting literature [28].
Overall, ES proved the most practical for daily deployment, while SARIMA demonstrated value in longer-horizon or lower-frequency contexts where retraining costs are less prohibitive. These findings reinforce prior conclusions that classical statistical models remain strong baselines for short-term market forecasting, particularly when interpretability and efficiency are prioritised over incremental accuracy gains.

4.2. TFT

The TFT outperformed the statistical baselines on the daily forecasting horizon. Its ability to selectively attend to recent observations and key covariates enabled the model to capture short-term dependencies more effectively than fixed-structure statistical methods. Attention-based interpretability further revealed that the most influential features were short-term momentum and volatility indicators, while volume contributed minimally, consistent with established findings in stock market prediction research.
Despite its greater complexity, the TFT trained efficiently on GPU, with convergence stability and reproducibility that contrast with the known difficulties of other deep learning models, such as LSTMs, on long sequences [29]. Without GPU acceleration, each period would require over an hour of computation. With GPU support, however, the TFT can be trained in a time frame comparable to ETS and SARIMA, though its larger hyperparameter space necessitates more extensive tuning. The TFT therefore represents a practical balance between model expressiveness and operational feasibility for daily financial forecasting tasks.

4.3. TFT-GNN Hybrid

The hybrid TFT-GNN model delivered the highest predictive accuracy of all tested approaches. By embedding graph-derived relational signals into the TFT architecture, the model captures inter-stock dependencies and latent market structures that univariate or purely temporal models cannot.
Examination of the GAT-derived relational signals shows that most of Apple’s top contributors (MSFT, GOOGL, AMD, QCOM, LRCX) are connected economically or by sector/supply chains, indicating that the GNN successfully identifies meaningful inter-stock dependencies. TSLA’s inclusion reflects broader market co-movement typical of large, high-beta companies. The presence of healthcare stocks (TMO, MRK) likely arises from occasional macro-level synchronisation rather than direct sectoral links. Overall, this mix demonstrates that the GNN captures both structural relationships and market-driven patterns, supporting the hybrid model’s improved forecasting performance.
Crucially, attention analysis from the TFT revealed that the GNN-derived relational features were consistently assigned greater weight than traditional indicators like RSI and MACD, highlighting the predictive value of inter-asset dependencies. Meanwhile, RSI and MACD continue to encode short-term momentum and mean-reversion effects driven by investor behaviour. Together, these findings suggest that combining relational structure with conventional temporal indicators enables the model to exploit both latent market connectivity and established financial patterns, enhancing forecasting performance.
High accuracy and interpretability come at a computational cost: training the TFT-GNN, including GNN components, can take up to 20 min on a GPU and requires extensive hyperparameter tuning. Once trained, however, the model can generate predictions quickly, making it feasible for practical, real-world applications.
From an applied perspective, the TFT-GNN strikes a favourable balance between accuracy, interpretability, and computational cost. While its training demands are higher than those of ES or SARIMA, the hybrid’s superior accuracy and transparency through attention mechanisms make it a compelling candidate for near-term, production-level financial forecasting systems.

4.4. Limitations

Several limitations should be acknowledged. First, the empirical analysis was conducted on four securities (AAPL, JPM, NVDA, and SPY). While these were selected to provide sectoral diversity and varying volatility profiles, the sample remains too narrow to support broad generalisation across the U.S. equity market. Future studies should extend the framework to larger cross-sectional universes or multi-market settings.
Second, both SARIMA and ETS are univariate models, whereas the deep learning models leverage multivariate and relational features. While this difference may exaggerate the apparent advantage of deep learning, it reflects the inherent design of classical statistical methods, which are not intended to handle high-dimensional covariates. Deep learning architectures, in contrast, are specifically designed to exploit multivariate information, so this capability should not be considered a methodological flaw. Third, SARIMA was evaluated using a weekly sliding-window retraining scheme due to computational constraints, whereas ETS and the deep learning models employed daily rolling windows. This discrepancy limits the fairness of direct comparisons, as SARIMA had fewer opportunities to adapt to rapidly changing market conditions.
The study did not implement a formal significance test such as the Diebold–Mariano test. Although the performance differences reported across RMSE, MAE, MAPE, and R² are consistent and substantial, statistical significance cannot be formally asserted. Furthermore, alternative measures such as SMAPE or Theil’s U could provide complementary perspectives on forecast accuracy. Incorporating these metrics may enhance comparability with broader financial forecasting literature.
Although the experimental design used three temporally separated test periods (2018, 2021, 2024), it did not evaluate model performance during crises such as the 2020 COVID-19 shock. The choice to omit 2020 was deliberate, as extreme market dislocations can dominate error statistics and obscure model behaviour under more typical conditions; however, this limits conclusions regarding robustness in turbulent markets.
Finally, while chronological train–test splits mitigate look-ahead bias, the study does not include additional robustness checks such as cross-market validation. Incorporating these checks would strengthen evidence for generalisability across regimes and asset classes.
Together, these limitations outline directions for future work aimed at expanding the empirical scope, increasing statistical rigour, and deepening robustness assessment.

4.5. Comparison to Other Studies

Direct comparison with existing forecasting studies is challenging due to substantial methodological heterogeneity across the literature. For example, many prior works do not employ strict chronological train–test splits [30], while others treat forecasting as a classification problem rather than a regression task [31]. Furthermore, most related studies focus on different securities or market indices, making results less directly comparable to the four U.S. equities examined in this work, though the S&P 500 is commonly used.
Despite these limitations, it is still informative to position the present findings relative to selected studies with conceptually similar settings. Parker et al. [32] applied a heterogeneous ensemble approach to forecasting SPY over a shorter test window and reported an R² of 0.9702. Although differences in time periods and window lengths complicate direct comparison, the TFT-GNN model in the present study achieved a comparable or higher level of explanatory accuracy across full-year test sets.
Other studies forecasting the S&P 500 index have reported similar performance ranges. For example, the machine learning approach in [33] achieved an R² of 0.9768 using a non-chronological split, while the PCA–ICA–LSTM hybrid model of [34] reported an R² of 0.963 based on a substantially longer training period and multi-year test horizon. Metrics such as MAPE show similar patterns: the GRU-based model in [35] produced a MAPE of 1.4994% over a 2-year window, whereas the TFT-GNN achieved an average MAPE of approximately 0.67% for SPY in the present study.
These comparisons must be interpreted with caution. Differences in market regimes, volatility, train–test splits, asset selection, and forecast horizons can materially affect error magnitudes. Nevertheless, the results indicate that the hybrid TFT-GNN model performs competitively relative to contemporary approaches in the financial forecasting literature, particularly given the more stringent chronological evaluation framework adopted here.

4.6. Summary

In summary, the comparative analysis underscores the nuanced trade-offs between traditional and neural forecasting paradigms. ES in an ETS framework remains an efficient and interpretable baseline, TFT provides substantial accuracy improvements with manageable computational cost, and the TFT-GNN extends this further by integrating relational structure into temporal modelling. The results collectively highlight the value of incorporating graph-based relational information into transformer-based frameworks for stock price forecasting, offering both empirical accuracy gains and deeper interpretive insight into market dependencies.

5. Conclusions

This study evaluated the performance of statistical and deep learning approaches for daily stock price forecasting, focusing on SARIMA, ES, TFT, and a hybrid TFT-GNN model. Among the statistical methods, ES proved most effective under the daily rolling-window framework due to its low computational cost and adaptive smoothing, while SARIMA remained valuable for longer-horizon or weekly sliding-window forecasts despite its higher computational demands. These results reaffirm the continued relevance of classical statistical models as robust and interpretable baselines.
The TFT model demonstrated clear advantages over standalone statistical methods by capturing short-term nonlinear dynamics and providing interpretable feature attributions. Incorporating relational information through a TFT-GNN hybrid further improved predictive accuracy, with attention analysis showing that graph-derived signals were consistently emphasised over conventional technical indicators. This result has broader theoretical implications: it suggests that in equity markets, temporal dependencies alone are insufficient for fully explaining price movements, and that relational structure, reflecting sectoral links, co-movement patterns, and shared risk exposures, forms an essential part of the data-generating process. In this sense, the study contributes a conceptual framework in which financial forecasting is treated as a joint temporal–relational learning problem rather than a purely sequential one.
The work also provides a methodological contribution by demonstrating that graph-based relational features can be embedded directly into transformer architectures through time-varying inputs, enabling end-to-end training without requiring separate graph-specific forecasting modules. This hybrid design offers a flexible template for future models seeking to integrate structural information with temporal signals.
In real-world decision making, financial institutions do not publicly disclose their forecasting methods, as these are proprietary and provide a competitive advantage. This underscores the importance and relevance of transparent and robust models like the TFT and the TFT-GNN hybrid to active market environments such as asset pricing and valuation, fraud detection, liquidity and market microstructure, portfolio optimisation, risk assessment, trading execution, and volatility forecasting.
Beyond financial forecasting, the proposed framework could be adapted to other time-series domains where relational dependencies and temporal dynamics play equally important roles, such as energy demand [36], traffic congestion [37], or healthcare analytics [38].
Overall, the results indicate that architectures combining temporal learning and relational reasoning offer a more theoretically grounded and empirically effective approach to daily stock price prediction. Future research could explore dynamic graph structures, probabilistic forecasting, cross-market generalisation, and integration of macroeconomic networks, further advancing the strategic development of hybrid temporal–relational models in financial prediction.

Author Contributions

Conceptualization, P.D.; Methodology, S.L.; Software, S.T.L.; Validation, S.L.; Formal analysis, S.T.L.; Investigation, P.D.; Writing—original draft, S.T.L.; Writing—review and editing, S.T.L., P.D. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank the anonymous referees for their valuable comments and suggestions that led to the improvement of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AAPL: Apple Inc.
CNN: Convolutional Neural Network
ES: Exponential Smoothing
ETF: Exchange Traded Fund
ETS: Error, Trend, Seasonal
GAT: Graph Attention Network
GNN: Graph Neural Network
GRU: Gated Recurrent Unit
JPM: JPMorgan Chase & Co.
LSTM: Long Short-Term Memory
MACD: Moving Average Convergence Divergence
MAE: Mean Absolute Error
MAPE: Mean Absolute Percentage Error
NVDA: NVIDIA
OHLCV: Open, High, Low, Close, Volume
R²: Coefficient of Determination
RMSE: Root Mean Squared Error
RSI: Relative Strength Index
SARIMA: Seasonal Autoregressive Integrated Moving Average
SPY: SPDR S&P 500 ETF Trust
TFT: Temporal Fusion Transformer
TFT-GNN: Temporal Fusion Transformer with Graph Neural Network integration

Appendix A. Hyperparameter Tuning

Appendix A.1. TFT Model

The TFT shares several hyperparameters with standard recurrent architectures, including dropout rate, learning rate, number of epochs, early stopping, batch size, loss function, optimizer, lookback window size, input features, and prediction horizon. Key TFT-specific hyperparameters are as follows:
  • Encoder and Decoder Layers: Number of stacked LSTM layers in the encoder and decoder modules. Deeper layers increase model capacity but risk overfitting.
  • Hidden Layer Size: Dimensionality of the model’s internal layers, influencing its representational capacity.
  • Attention Heads: Number of parallel attention mechanisms in the multi-head attention block; additional heads capture diverse patterns at higher computational cost.
  • Static Embedding Size: Dimensionality of learned embeddings for static covariates.
  • Time-Varying Embedding Size: Dimensionality of learned embeddings for time-varying features such as lagged prices or technical indicators.
  • Variable Selection: Boolean flag indicating whether to use input variable selection networks for dynamic feature relevance estimation.
  • Attention Window: Number of past time steps accessible to the temporal attention mechanism.
Initial configurations used Close and Volume as input variables, with a learning rate of 0.01, hidden size of 128, attention head size of 2, dropout rate of 0.1, batch size of 64, attention window size of 30, and SMAPE loss.
Empirical testing indicated that a lookback window of 30 days yielded the best results, with both shorter and longer windows reducing accuracy. Reducing the hidden size from 128 to 64 improved generalization. Increasing dropout to 0.2 or reducing batch size to 32 degraded performance. Varying attention heads confirmed that two heads were optimal, as one or three performed worse. Final configurations balanced interpretability and computational efficiency, providing consistent daily rolling-window performance across assets.

Appendix A.2. TFT-GNN Model

The GNN and subsequent TFT–GNN hybrid were trained using a diverse universe of U.S. equities and sector ETFs to capture inter-asset relationships. These were grouped as follows:
  • Apple Supply Chain and Related: AAPL, AMD, TSM, AVGO, ASML, QCOM, TXN, MU, NXPI, KLAC, LRCX, ADI, AMAT, MCHP
  • Big Tech Peers: GOOGL, MSFT, AMZN, META, NFLX, ORCL, SONY, CRM, ADBE
  • Financials: JPM, MS, BAC, BLK, GS, WFC, SCHW, BK, AXP, COF, MET
  • Semiconductors: NVDA, AMD, TSM, ASML, QCOM, MU, TXN, NXPI, KLAC, LRCX
  • Healthcare: UNH, JNJ, PFE, MRK, LLY, TMO, BMY, NVO
  • Automotive: TSLA, F, GM, HMC, TM
  • Consumer: WMT, HD, COST, PG, KO, MCD, TGT, PEP
  • Exchange-Traded Funds (ETFs): SPY, QQQ, DIA, IWM, XLK, XLF, XLE, XLI, XLV, XLY, XLP, VNQ, IYR, VGT, VTI, VUG, VTV, IWF, IWD, ITOT
The GNN was implemented as a Graph Attention Network (GAT) with tunable parameters including:
  • Node Features: Input features for each stock (e.g., OHLCV, RSI, MACD).
  • Hidden Channels: Dimensionality of intermediate node embeddings.
  • Number of GAT Layers: Controls network depth; deeper models capture higher-order relations but risk over-smoothing.
  • Attention Heads: Number of attention mechanisms applied per layer.
  • Dropout Rate, Learning Rate, and Optimizer: Standard training hyperparameters.
The TFT-GNN hybrid extended the base TFT by incorporating relational information from the Graph Neural Network (GNN). Initial experiments injected raw 128-dimensional GNN embeddings directly into the TFT input, but this degraded performance due to excessive dimensionality.
To address this, a multilayer perceptron (MLP) was introduced to progressively reduce embedding dimensionality. Reductions from 128 → 16 → 8 → 4 were tested, with smaller embeddings yielding improved stability, though performance remained near the TFT baseline.
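A minimal sketch of this staged reduction follows; the layer sizes match the text, while the ReLU activations are an assumption.

```python
import torch.nn as nn

# Compress 128-dimensional GNN embeddings in stages: 128 -> 16 -> 8 -> 4.
embedding_reducer = nn.Sequential(
    nn.Linear(128, 16), nn.ReLU(),
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 4),
)
```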
Subsequently, GNN embeddings were transformed into a simplified up/down signal, representing directional market tendencies. Incorporating this signal produced a measurable improvement in predictive accuracy and interpretability. Additional tuning revealed that employing a lower learning rate and retaining an attention head size of two further stabilized training and improved consistency across different assets.

Appendix B. Code Repository

An example code for each model can be found at this GitHub link: https://github.com/SebLynch5/Stock-Market-Forecasting, (accessed on 4 December 2025).
In addition, a complete archive of all notebooks used in the making of this article has been uploaded to the same repository to ensure full reproducibility.

Appendix C. Table of Results

Figure A1 shows the performance metrics for each model, across each time period and security.
Figure A1. Comprehensive results for each model across each test period and security. The top three performing models for each security are highlighted in bronze, silver and gold respectively.

References

  1. Fama, E.F. The behavior of stock-market prices. J. Bus. 1965, 38, 34–105. [Google Scholar] [CrossRef]
  2. Malkiel, B.G. The efficient market hypothesis and its critics. J. Econ. Perspect. 2003, 17, 59–82. [Google Scholar] [CrossRef]
  3. Saberironaghi, M.; Ren, J.; Saberironaghi, A. Stock market prediction using machine learning and deep learning techniques: A review. AppliedMath 2025, 5, 5030076. [Google Scholar] [CrossRef]
  4. Lara-Benítez, P.; Carranza-García, M.; Riquelme, J.C. An experimental review on deep learning architectures for time series forecasting. Int. J. Neural Syst. 2021, 31, 2130001. [Google Scholar] [CrossRef]
  5. Lynch, S. Python for Scientific Computing and Artificial Intelligence; Chapman and Hall/CRC: Boca Raton, FL, USA, 2023. [Google Scholar]
  6. Lynch, S. Dynamical Systems with Applications using MATLAB, 3rd ed.; Springer Nature: Berlin/Heidelberg, Germany, 2025. [Google Scholar]
  7. Shi, Z.; Ibrahim, O.; Hashim, H.I.C. A novel hybrid HO-CAL framework for enhanced stock index prediction. Int. J. Adv. Comput. Sci. Appl. 2025, 16, 333–342. [Google Scholar] [CrossRef]
  8. Olorunnimbe, K.; Viktor, H. Ensemble of temporal Transformers for financial time series. J. Intell. Inf. Syst. 2024, 62, 1087–1111. [Google Scholar] [CrossRef]
  9. Kautkar, H.; Das, S.; Gupta, H.; Ghosh, S.; Kanjilal, K. Leveraging an integrated first and second moments modeling approach for optimal trading strategies: Evidence from the Indian pharma sector in the pre- and post-COVID-19 era. J. Forecast. 2025, 70046. [Google Scholar] [CrossRef]
  10. Huang, Y.; Pei, Z.; Yan, J.; Zhou, C.; Lu, X. A combined adaptive Gaussian short-term Fourier transform and Mamba framework for stock price prediction. Eng. Appl. Artif. Intell. 2025, 162, 112588. [Google Scholar] [CrossRef]
  11. Jiang, W. Applications of deep learning in stock market prediction: Recent progress. Expert Syst. Appl. 2021, 184, 115537. [Google Scholar] [CrossRef]
  12. Ajiga, D.I.; Adeleye, R.A.; Tubokirifuruar, T.S.; Bello, B.G.; Ndubuisi, N.L.; Asuzu, O.F.; Owolabi, O.R. Machine learning for stock market forecasting: A review of models and accuracy. Financ. Account. Res. J. 2024, 6, 112–124. [Google Scholar]
  13. Kehinde, T.; Chan, F.T.; Chung, S.H. Scientometric review and analysis of recent approaches to stock market forecasting: Two decades survey. Expert Syst. Appl. 2023, 213, 119299. [Google Scholar] [CrossRef]
  14. NASDAQ. Nasdaqtraded.txt—NASDAQ Symbol Directory. Available online: https://www.nasdaqtrader.com/dynamic/SymDir/nasdaqtraded.txt (accessed on 23 April 2025).
  15. Box, G.E.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  16. Brown, R.G. Exponential Smoothing for Predicting Demand; Arthur D. Little: Cambridge, MA, USA, 1956. [Google Scholar]
  17. Brown, R. Statistical Forecasting for Inventory Control; McGraw-Hill: Columbus, OH, USA, 1959. [Google Scholar]
  18. Gardner Jr, E.S. Exponential smoothing: The state of the art. J. Forecast. 1985, 4, 1–28. [Google Scholar] [CrossRef]
  19. Hyndman, R.J.; Koehler, A.B.; Snyder, R.D.; Grose, S. A state space framework for automatic forecasting using exponential smoothing methods. Int. J. Forecast. 2002, 18, 439–454. [Google Scholar] [CrossRef]
  20. Lim, B.; Arık, S.Ö.; Loeff, N.; Pfister, T. Temporal fusion transformers for interpretable multi-horizon time series forecasting. Int. J. Forecast. 2021, 37, 1748–1764. [Google Scholar] [CrossRef]
  21. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010. [Google Scholar]
  22. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw. 2008, 20, 61–80. [Google Scholar] [CrossRef]
  23. Patel, M.; Jariwala, K.; Chattopadhyay, C. A Systematic Review on Graph Neural Network-based Methods for Stock Market Forecasting. ACM Comput. Surv. 2024, 57, 1–38. [Google Scholar] [CrossRef]
  24. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
  25. Feng, F.; He, X.; Wang, X.; Luo, C.; Liu, Y.; Chua, T.S. Temporal relational ranking for stock prediction. ACM Trans. Inf. Syst. (TOIS) 2019, 37, 1–30. [Google Scholar] [CrossRef]
  26. Hu, X. Stock price prediction based on temporal fusion transformer. In Proceedings of the 2021 3rd International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI), Taiyuan, China, 3–5 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 60–66. [Google Scholar]
  27. Wang, J.; Zhang, S.; Xiao, Y.; Song, R. A Review on Graph Neural Network Methods in Financial Applications. J. Data Sci. 2022, 20, 111–134. [Google Scholar] [CrossRef]
  28. Sun, Z. Comparison of trend forecast using ARIMA and ETS Models for S&P500 close price. In Proceedings of the 2020 4th International Conference on E-Business and Internet, Singapore, 9–11 October 2020; pp. 57–60. [Google Scholar]
  29. Schmidt, F. Generalization in generation: A closer look at exposure bias. arXiv 2019, arXiv:1910.00292. [Google Scholar] [CrossRef]
  30. Saiyyad, A.; Wankhade, S.; Sakhare, A.; Kale, P.; Yenchilwar, G.; Sharma, P. Stock Price Prediction for Stock Market Forecasting using Machine Learning. In Proceedings of the 2025 4th OPJU International Technology Conference (OTCON) on Smart Computing for Innovation and Advancement in Industry 5.0, Raigarh, India, 9–11 April 2025; IEEE: Piscataway, NJ, USA, 2025; pp. 1–5. [Google Scholar]
  31. Uzzal, M.H.; Ślepaczuk, R. The Performance of Time Series Forecasting Based on Classical and Machine Learning Methods for S&P 500 Index; University of Warsaw, Faculty of Economic Sciences: Warsaw, Poland, 2023. [Google Scholar]
  32. Parker, M.; Ghahremani, M.; Shiaeles, S. Stock Price Prediction Using a Stacked Heterogeneous Ensemble. Int. J. Financ. Stud. 2025, 13, 201. [Google Scholar] [CrossRef]
  33. Gasparėnienė, L.; Remeikiene, R.; Sosidko, A.; Vėbraitė, V. A modelling of S&P 500 index price based on US economic indicators: Machine learning approach. Eng. Econ. 2021, 32, 362–375. [Google Scholar]
  34. Sarıkoç, M.; Celik, M. PCA-ICA-LSTM: A hybrid deep learning model based on dimension reduction methods to predict S&P 500 index price. Comput. Econ. 2025, 65, 2249–2315. [Google Scholar]
  35. Chahuán-Jiménez, K. Neural network-based predictive models for stock market index forecasting. J. Risk Financ. Manag. 2024, 17, 242. [Google Scholar] [CrossRef]
  36. Simeunović, J.; Schubnel, B.; Alet, P.J.; Carrillo, R.E. Spatio-temporal graph neural networks for multi-site PV power forecasting. IEEE Trans. Sustain. Energy 2021, 13, 1210–1220. [Google Scholar] [CrossRef]
  37. Yu, B.; Yin, H.; Zhu, Z. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. arXiv 2017, arXiv:1709.04875. [Google Scholar]
  38. Liu, T.; Liang, L.; Che, C.; Liu, Y.; Jin, B. A transformer-based framework for temporal health event prediction with graph-enhanced representations. J. Biomed. Inform. 2025, 166, 104826. [Google Scholar] [CrossRef]
Figure 1. Feature correlation heatmap for AAPL. Each cell shows the Pearson correlation coefficient between pairs of features: stronger positive correlations appear in red, negative correlations in blue, and neutral colours indicate weak or no linear relationship.
Figure 2. SPY close price alongside SARIMA’s weekly sliding-window predictions for 2024. The blue line shows the actual values, and the orange line shows the model’s forecasts.
Figure 3. NVDA close price vs. ES in the ETS framework daily rolling-window prediction in 2024. The blue line shows the actual values, and the orange line shows the model’s forecasts.
Figure 4. SPY close price vs. TFT’s daily rolling-window prediction in 2021. The blue line shows the actual values, and the orange line shows the model’s forecasts.
Figure 5. Horizontal bar chart showing the relative importance of each input feature in the TFT model, expressed as a percentage of total attention weight.
Figure 6. Temporal attention weights in the TFT model across the 30-day lookback window. Higher values indicate time steps that the model considered more influential for the prediction, with the x-axis representing relative time indices (days).
Figure 7. SPY close price vs. TFT-GNN’s daily rolling-window prediction in 2024. The blue line shows the actual values, and the orange line shows the model’s forecasts.
Figure 8. Bar chart illustrating the eight stocks most consistently identified as similar to AAPL by the GAT attention mechanism in the TFT-GNN model, averaged across all trading days from 2018 to 2023.
Figure 9. Horizontal bar chart showing the relative importance of each input feature in the TFT-GNN model, expressed as a percentage of total attention weight.
Table 1. Strengths, limitations, and scientific gaps of major deep learning architectures for financial time-series forecasting.
Model | Strengths | Limitations | Scientific Gap Relevant to This Study
LSTM | Learns nonlinear temporal dependencies. | Weak long-range memory; training instability. | Cannot model cross-asset relationships; limited interpretability.
GRU | Computationally efficient recurrent model. | Reduced capacity for complex dynamics. | Cannot model cross-asset relationships; limited interpretability.
CNN | Captures local temporal patterns. | Poor long-sequence modelling. | Cannot model cross-asset relationships; limited interpretability.
TFT | Strong multivariate forecasting with interpretable attention. | Still treats assets independently; high computational cost. | Lacks mechanism for incorporating relational information.
GNN | Captures relational structure and cross-asset dependencies. | Not designed for sequence forecasting; limited temporal modelling. | Lacks integration with temporal architectures for joint relational–temporal prediction.
Hybrid DL Models | Combine complementary architectures. | Higher computational cost. | Typically do not integrate graph-based relational signals.
TFT–GNN (This Study) | Joint temporal–relational modelling; attention-based interpretability. | Higher computational cost. | Introduces relational information directly into a transformer forecasting pipeline.
Table 2. Summary of daily forecasting performance across models.
Model | RMSE (↓) | R² (↑) | Horizon | Interpretability | Compute Time
SARIMA | 3.4862 | 0.9168 | Weekly | Low | Moderate
ETS | 2.7070 | 0.9630 | Daily | Moderate | Low
TFT | 2.3369 | 0.9577 | Daily | High | High
TFT-GNN | 2.1662 | 0.9645 | Daily | High | Very high
The down arrow next to RMSE signifies that lower is better; the up arrow next to R² signifies that higher is better.
