Article

Symmetry-Aware Graph Neural Approaches for Data-Efficient Return Prediction in International Financial Market Indices

1 Department of Industrial and Management Systems Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
2 Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2025, 17(9), 1372; https://doi.org/10.3390/sym17091372
Submission received: 21 June 2025 / Revised: 21 July 2025 / Accepted: 5 August 2025 / Published: 22 August 2025
(This article belongs to the Special Issue Symmetry and Asymmetry in Machine Learning and Data Science)

Abstract

This research evaluates the suitability of Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs) for improving financial return predictions across 15 major worldwide stock indices. The proposed method represents relationships among financial indices as a graph, enabling the detection of symmetric market dependencies such as mutual spillover effects and bidirectional influence patterns. These symmetric network structures matter most during financial instability, when market interdependencies strengthen. The models are benchmarked against traditional forecasting approaches, including XGBoost, the Multi-Layer Perceptron (MLP), and the Support Vector Machine (SVM). Across 30 time-series cross-validation experiments, the GNN models produce lower RMSE and MAE values, especially during financial crises, recovery phases, and volatile market periods; their advantage shrinks when markets remain stable. The results demonstrate that graph-based forecasting models incorporating symmetry can effectively detect complex financial relationships, with important implications for investment strategy, financial risk management, and global economic forecasting.

1. Introduction

The increasing interconnection of global financial markets makes it difficult for economists to monitor economic disturbances as they propagate across worldwide index systems. Standard financial forecasting systems built on ML and DL techniques rely on static feature representations that fail to capture relationships between markets. Financial indices operate as part of a network in which spillovers propagate from one market to another, so models must account for network dynamics rather than analyzing each index independently.
Inter-market relationships commonly exhibit symmetric patterns, because markets influence each other through reciprocal responses and bidirectional spillover effects. Correctly modeling global financial networks requires capturing these symmetric dependencies, because they define the actual nature of the system.
GNNs offer an innovative solution for financial forecasting because they operate directly on relational data structures. Their ability to learn features from connectivity patterns makes them suitable for modeling the complex, nonlinear behavior of financial indices. GNNs have been applied to stock prediction, cryptocurrency markets, and single-market indices, but their use for modeling global financial spillovers remains underdeveloped. Because GNNs aggregate information from connected nodes, they naturally encode symmetric relationships in graph structures where edges indicate reciprocal market behavior. These models therefore show potential to improve prediction results by identifying concealed market connections.
Despite this potential, two fundamental barriers have prevented the adoption of GNNs in financial forecasting. First, current GNN finance research primarily studies localized financial data and has not demonstrated effective use in international markets; researchers rarely study how GNN-based models perform when processing multiple worldwide indices with varying economic conditions, risk profiles, and regulatory systems. Second, GNN forecasting methods still require thorough validation, because multiple studies claim predictive accuracy improvements on the basis of insufficient cross-validation and ambiguous statistical significance.
This study aims to address these gaps by answering the following core research questions: (1) Can GNN-based models such as GCN and GAT predict global financial index returns more effectively than traditional ML methods? (2) Under what market conditions do graph-based models exhibit performance advantages? (3) Can a correlation-thresholded graph structure constructed from return data capture global spillover dynamics without relying on external variables?
To this end, the objectives of this paper are threefold: (i) to design and validate a GNN-based framework for forecasting global financial indices using only return-based data; (ii) to benchmark GCN and GAT models against standard ML baselines such as RF, XGBoost, SVM, MLP, and KNN through repeated time-series cross-validation; and (iii) to identify market regimes (e.g., volatility, crisis recovery) where GNNs offer statistically significant advantages.
This study opens new possibilities for GNN-based forecasting by studying global market applications within a complete validation framework. It evaluates the performance of Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs) in predicting daily market returns for the fifteen most heavily capitalized global financial indices. The chosen indices represent diverse economic conditions and market interaction effects, generating results that transcend individual economies. The method operates solely on intrinsic market data, which enhances data efficiency and removes the requirement for external indicators that could introduce delays and bias.
The findings rest on 30 time-series cross-validation experiments, which provide a robust methodological framework for statistical assessment. Models are evaluated on RMSE and MAE, assessed at three significance levels: 10%, 5%, and 1%. The analysis explores the specific market environments in which GNNs outperform conventional models, distinguishing between periods of high market volatility, post-crisis recovery phases, and times of economic stability. This thorough analysis enables us to determine when GNNs excel and which conditions maximize their predictive benefits.
This work makes three notable contributions to financial econometrics and quantitative finance. First, it proposes a graph construction methodology based on thresholded correlation networks, specifically tailored to capturing dynamic interdependencies among global financial indices. Second, it conducts rigorous evaluation using statistical testing at multiple significance levels and repeated experiments across varied market environments, ensuring reliable comparisons across models and demonstrating the durability of graph-based methods. Third, the results show that GCN and GAT models significantly outperform traditional ML methods under specific market regimes by detecting the intricate nonlinear and symmetric interdependencies among worldwide indices, highlighting their practical value for investment strategy development, portfolio risk management, and macroeconomic forecasting.
The paper proceeds as follows. Section 2 reviews GNN applications in financial forecasting alongside their existing methodological limitations. Section 3 explains data acquisition, graph structure creation, and model implementation. Section 4 presents the experimental findings, comparing GNNs against traditional machine learning models and examining the market circumstances under which GNN-based forecasting provides better results. Future research directions are presented in the final section of the paper.

2. Literature Review

The identification of financial patterns remains a fundamental challenge pursued by research in econometrics, machine learning, and deep learning. Traditional time-series forecasting methods include autoregressive integrated moving average (ARIMA) models (Box et al., 2015) [1], vector autoregression (VAR) models (Sims, 1980) [2], and GARCH models (Bollerslev, 1986) [3]. These methods have proven useful for modeling historical financial patterns, yet they struggle to recognize intricate connections between worldwide financial indices during times of economic turbulence. ML and DL research introduced alternatives to traditional approaches through support vector regression (Drucker et al., 1997) [4], random forests (Breiman, 2001) [5], and long short-term memory (LSTM) networks (Hochreiter & Schmidhuber, 1997) [6]. Building on this trend, Choi and Kim (2025) [7] extend the scope of analysis beyond predictive accuracy, showing that diverse graph-based network analysis methods can reveal distinct perspectives and patterns in sector-based financial instruments’ price discrepancies, thereby underscoring the value of method-specific insights. Although such predictive models show enhanced accuracy, they often fail to capture the underlying financial market relationships, as noted by Ozbayoglu et al. (2019) [8]. Foroutan and Lahmiri (2024) [9] found that the Temporal Convolutional Network (TCN) proved most effective in forecasting WTI, Brent, and silver prices, while the BiGRU model excelled at gold price prediction, providing essential information for investors, policymakers, and other market stakeholders. Zhang et al. (2023) [10] explained that their EEMD-PSO-LSSVM-ICSS-GARCH hybrid model achieved superior prediction accuracy for the NASDAQ CTA Artificial Intelligence and Robotics (AIRO) Index returns because of its ability to handle complex structural characteristics.
Graph-based learning represents an innovative solution to financial forecasting problems because it provides an organized method for analyzing market dependency structures and spillover patterns. GNNs have been widely applied to structured data problems across various domains, including physics (Battaglia et al., 2018) [11], biology (Gilmer et al., 2017) [12], and social networks (Hamilton et al., 2017) [13]. Their adoption for financial market analysis began recently but shows fast-growing momentum. Xiang et al. (2023) [14] proved GNNs can identify temporal relationships in stock market predictions better than traditional machine learning and deep learning methods. Choi and Kim (2023) [15] applied a graph-based approach to forecast downside risks in global financial markets by constructing inflation rate-adjusted dependence networks among 21 major indices. Chen et al. (2023) [16] further developed this concept by uniting natural language processing (NLP) with GNNs to add sentiment analysis capabilities to financial prediction models. Zhou et al. (2025) [17] demonstrated how their evolving multiscale graph neural network (EMGNN) framework leads to superior cryptocurrency volatility prediction by modeling cryptocurrency and conventional financial market interactions thus helping risk management and policy development. Similarly, Yin et al. (2024) [18] proposed a GNN-based strategy that combines the financial stress index with cryptocurrency forecasting, confirming the model’s ability to capture macro-financial stress propagation mechanisms that affect digital asset pricing.
Recent innovations in graph structure learning further enhance graph and GNN capabilities. Fan et al. (2025) [19] proposed the CCGIB framework to balance shared and channel-specific representations, enabling richer modeling of multiview financial structures. Choi et al. (2024) [20] also contributed by introducing an augmented representation framework that encodes temporal statistical-space priors into graph models, showing improved accuracy in handling volatile time series under complex dependencies. Meanwhile, Fan et al. (2024) [21] introduced Neural Gaussian Similarity Modeling to enable differentiable and scalable graph construction, which is well-suited for financial data with high node similarity.
Zhang et al. (2024) [22] explored the application of graph neural networks to power grid operational risk assessment under dynamic topological conditions, demonstrating that GNNs can reliably predict system-wide and localized risk indicators despite uncertainty in future grid configurations. Dong et al. (2024) [23] developed a dynamic fraud detection framework by integrating reinforcement learning into graph neural networks, addressing key challenges such as label imbalance, feature distortion from highly connected nodes, and evolving fraud patterns.
The foundational Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017) [24] and Graph Attention Networks (GATs) (Veličković et al., 2018) [25] extract useful node-level features from network connectivity, making them suitable for financial forecasting applications. Researchers have also developed graph-based learning models such as GraphSAGE (Hamilton et al., 2017) [13] and temporal graph networks (Rossi et al., 2020) [26], which demonstrate the potential to model financial relationships as they change over time. Stock-level prediction remains the primary focus of current applications, while macroeconomic forecasting remains underdeveloped.
The use of GNNs for global financial forecasting needs further exploration despite recent progress in the field. Existing research mainly focuses on individual stocks, specific sector indices, and single-market datasets, which hinders the generalization of findings to interconnected financial markets. Most current GNN-based research also fails to study how economic disturbances transmit between markets, a crucial element for systemic risk assessment and contagion modeling. Applications of graph-based models to financial risk prediction by Cheng et al. (2022) [27], Choi and Kim (2024) [28], and Das et al. (2024) [29] did not perform rigorous tests against traditional econometric models and ignored how GNNs behave under varying economic scenarios. This highlights the need for a comprehensive and robust framework that evaluates GNN performance across diverse regimes, international indices, and volatility conditions.
This research addresses these knowledge gaps by applying GNNs to international financial indices within an advanced validation framework that surpasses previous studies. It employs extensive hyperparameter tuning combined with 30 repeated time-series cross-validation experiments instead of traditional single-period backtesting and static train-test splits. This evaluation method delivers robust and statistically sound performance assessments across multiple market conditions and time periods, and it tackles model overfitting and temporal data leakage to establish reliable performance comparisons between prediction models.
This research makes an original contribution to financial network analysis by using Graph Neural Networks (GNNs) to show how such networks identify spillover effects, systemic risk transmission, and structural dependencies between international markets. Current time-series methods treat financial indicators independently, disregarding the fundamental connections between them. The graph-based approach studied here reveals hidden network relationships, delivering an integrated predictive system for complex market connections.
Using correlation-based network formation, the research develops financial graphs that support an efficient forecasting system requiring no macroeconomic indicators. This benefits emerging markets, where abundant economic data are difficult to obtain. The study also sets itself apart from previous work by systematically examining the conditions that optimize GNN performance, evaluating predictive outcomes across different financial environments such as high-volatility crises, post-crisis recoveries, and stable economic conditions.
The research contributes to financial econometrics, quantitative finance, and systemic risk modeling. It connects financial network representation to predictive analytics by applying graph-based forecasting methods to international market indices and demonstrates that forecasting accuracy depends heavily on structural network properties, including symmetry and interconnectedness. The empirical results show that GNN-based methods deliver both robustness and data efficiency, providing new insight into how network structure affects macroeconomic prediction. This work establishes a solid base for hybrid modeling systems that combine graph neural networks with conventional econometric models to improve forecasting in the modern interconnected global financial system.

3. Methodology and Data

3.1. Methodology

This study aims to predict financial returns by measuring the spillover effects among global indices. Specifically, it examines whether utilizing graph-based embeddings improves predictive performance compared to benchmark models that rely solely on raw data. The benchmark models include Random Forest, XGBoost, Multi-Layer Perceptron (MLP), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM), which were used for baseline predictions. The results of these models were then compared with predictions made using embeddings generated through Graph Convolutional Networks (GCN) and Graph Attention Networks (GAT). Our research flow is presented in Figure 1.

3.1.1. Benchmark Regression Models

Random Forest Regressor
Random Forest (Breiman, 2001) [5] is an ensemble learning method that builds multiple decision trees and aggregates their outputs to enhance predictive performance. In regression tasks, it reduces variance by averaging the predictions of individual trees
$$y = f(x) = \frac{1}{N} \sum_{i=1}^{N} T_i(x)$$
where $T_i(x)$ is the prediction of the $i$-th tree and $N$ is the number of trees.
XGBoost Regressor
XGBoost (Chen & Guestrin, 2016) [30] is a gradient boosting algorithm that optimizes decision trees iteratively to minimize regression loss by reducing bias and variance. It uses a differentiable loss function, such as mean squared error, to improve predictive accuracy.
$$F_t(x) = F_{t-1}(x) + \eta\, h_t(x)$$
where $F_t(x)$ is the updated model, $\eta$ is the learning rate, and $h_t(x)$ is the new tree fitted to the current residuals.
Multi-Layer Perceptron Regressor
Multi-Layer Perceptron (Rosenblatt, 1958) [31] is a neural network model consisting of multiple layers of neurons that learn nonlinear mappings through backpropagation.
$$y = \sigma\left(W_L\, \sigma\left(W_{L-1} \cdots \sigma\left(W_1 x + b_1\right) \cdots + b_{L-1}\right) + b_L\right)$$
where $\sigma$ is the activation function, and $W_l$ and $b_l$ are the weights and biases of the $l$-th layer, $l \in \{1, 2, \ldots, L\}$.
K-Nearest Neighbors Regressor
K-Nearest Neighbors (Cover & Hart, 1967) [32] is a non-parametric algorithm that predicts a sample’s value by averaging the target values of its k-nearest neighbors in feature space.
$$\hat{y} = \frac{1}{k} \sum_{i=1}^{k} y_i$$
Support Vector Regressor
Support Vector Machine (Cortes & Vapnik, 1995) [33] is a supervised learning model that finds the optimal hyperplane to predict continuous values by minimizing error within a specified margin (ε-insensitive zone).
$$\begin{aligned}
\min_{w,\, b} \quad & \frac{1}{2} \lVert w \rVert^2 + C \sum_{i=1}^{n} \left(\xi_i + \xi_i^*\right) \\
\text{s.t.} \quad & y_i - \left(w \cdot x_i + b\right) \le \epsilon + \xi_i, \quad \forall i \\
& \left(w \cdot x_i + b\right) - y_i \le \epsilon + \xi_i^*, \quad \forall i \\
& \xi_i,\, \xi_i^* \ge 0, \quad \forall i
\end{aligned}$$
where $\xi_i$ and $\xi_i^*$ are slack variables, $w$ is the weight vector, $b$ is the bias, and $y_i$ is the label of $x_i$. The parameter $\epsilon$ defines the insensitive zone (the tolerated margin of error), and the term $C \sum_{i=1}^{n} (\xi_i + \xi_i^*)$ penalizes samples that fall outside that margin.
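For concreteness, the sketch below shows how the five baseline regressors can be instantiated with scikit-learn and xgboost; the random arrays are illustrative stand-ins for the lagged return features and next-day targets described in Section 3.4, not the actual dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X_train = rng.standard_normal((120, 15))   # lagged returns of 15 indices (toy data)
y_train = rng.standard_normal(120)         # next-day return of the target index
X_test = rng.standard_normal((120, 15))

baselines = {
    "Random Forest": RandomForestRegressor(),
    "XGBoost": XGBRegressor(),
    "MLP": MLPRegressor(),
    "KNN": KNeighborsRegressor(),
    "SVM": SVR(),                          # epsilon-insensitive support vector regression
}
# Fit each model on the training window and predict the test window
predictions = {name: m.fit(X_train, y_train).predict(X_test)
               for name, m in baselines.items()}
```

All models are used with their default settings here, mirroring the configuration policy described in Section 3.4 and Appendix B.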

3.1.2. Graph Models

A graph is a data structure composed of a set of nodes (also referred to as vertices) and edges that connect these nodes. It is generally defined as $G = (V, E)$, where $V$ represents the set of nodes and $E$ represents the set of edges connecting pairs of nodes. In this study, the nodes represent individual stock indices, while the edges are weighted by the correlation between two indices. GNNs generally utilize a message-passing mechanism to calculate the embeddings of each node: a node’s embedding is updated by leveraging its relationships with neighboring nodes. GNNs employ an iterative process in which the feature information of neighboring nodes is aggregated and integrated into the representation of the central node, through aggregation and update operations performed across multiple stacked layers. As a result, GNNs enable more accurate predictions based on the provided graph data. In this study, two variants of GNNs were used for prediction.
Graph Convolutional Network (GCN)
A Graph Convolutional Network (GCN) is a model that utilizes both the characteristics of nodes and the structure of the graph to learn from graph data. A node’s embeddings are updated by combining its own features with those of its neighbors. The embedding update of a node v in one layer is represented by the equation below.
$$H^{(l+1)} = \sigma\left(\hat{A} H^{(l)} W^{(l)}\right) = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A}\, \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right)$$
The input is the initial node feature matrix $H^{(0)} \in \mathbb{R}^{N \times F}$. $\tilde{A} = A + I$ is the adjacency matrix of the graph with self-loops added, where the element $A_{ij} \in \{0, 1\}$ indicates whether there is an edge between node $i$ and node $j$. The term $\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A}\, \tilde{D}^{-\frac{1}{2}}$ normalizes the data to stabilize learning, where $\tilde{D}$ is the diagonal degree matrix ($\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$). $H^{(l)}$ and $W^{(l)}$ denote the node embedding matrix and the learnable weight matrix at the $l$-th layer, and $\sigma$ is the activation function. The model aggregates information from neighboring nodes at each layer and updates the current node’s representation; the output values are obtained from the final layer $H^{(L)}$. This step is called the forward pass.
The backpropagation step is the process of updating the weight matrix W l for each layer, which minimizes the loss function below.
$$\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{Loss}\left(\hat{y}_i, y_i\right)$$
The gradient of $\mathcal{L}$ is computed and each weight matrix $W^{(l)}$ is updated accordingly.
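As an illustration of the propagation rule above, the following sketch implements one GCN layer in NumPy; the toy adjacency matrix, feature dimensions, and random inputs are assumptions made for the example only.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: sigma(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])                  # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt         # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)             # ReLU activation

# Toy usage: 4 indices, 3 input features, 2 output features
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)             # thresholded-correlation adjacency
H0 = rng.standard_normal((4, 3))                      # initial node features H^(0)
W0 = rng.standard_normal((3, 2))                      # learnable weights W^(0)
H1 = gcn_layer(A, H0, W0)                             # updated embeddings H^(1)
```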
Graph Attention Network (GAT)
The Graph Attention Network (GAT) introduces an attention mechanism to graph neural networks. Instead of treating all neighbors equally (as in GCN), GAT assigns different attention weights to neighboring nodes, allowing the model to focus on the most relevant neighbors during feature aggregation. The main purpose of the GAT layer is to generate a new set of node features $h' = \{h'_1, h'_2, \ldots, h'_n\}$ as output from its input $h = \{h_1, h_2, \ldots, h_n\}$. To do so, the model first calculates the attention coefficients
$$e_{ij} = \mathrm{LeakyReLU}\left(a^{T} \left[W h_i \,\Vert\, W h_j\right]\right),$$
where $a$ is a learnable attention vector and $W$ is a weight matrix trained by the model.
Next, the attention coefficient is normalized by using the softmax function.
$$\alpha_{ij} = \mathrm{softmax}_j\left(e_{ij}\right) = \frac{\exp\left(e_{ij}\right)}{\sum_{k \in N_i} \exp\left(e_{ik}\right)}$$
Lastly, the features of neighbor nodes are aggregated using the normalized attention coefficient.
$$h'_i = \sigma\left(\sum_{j \in N_i} \alpha_{ij} W h_j\right)$$
The GAT model utilizes a multi-head attention mechanism to improve model stability and expressiveness. In the intermediate layers, the attention heads are concatenated, whereas in the final layer, they are averaged to reduce dimensionality and enhance stability.
$$h'_i = \Big\Vert_{k=1}^{K}\, \sigma\left(\sum_{j \in N_i} \alpha_{ij}^{k} W^{k} h_j\right)$$
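To make the attention computation concrete, the sketch below implements a single-head GAT layer in NumPy; the explicit neighbor loop is written for clarity rather than speed, and all shapes and inputs are assumptions for the example (the adjacency matrix is taken to include self-loops).

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    # LeakyReLU used for the attention scores e_ij
    return np.where(x > 0, x, slope * x)

def gat_layer(A, H, W, a):
    """Single-head GAT layer: score, softmax-normalize, then aggregate."""
    Wh = H @ W                                   # transformed features, shape (N, F')
    H_out = np.zeros_like(Wh)
    for i in range(A.shape[0]):
        nbrs = np.flatnonzero(A[i])              # neighbors of node i (incl. self-loop)
        # e_ij = LeakyReLU(a^T [W h_i || W h_j]) for each neighbor j
        e = np.array([leaky_relu(a @ np.concatenate([Wh[i], Wh[j]])) for j in nbrs])
        alpha = np.exp(e - e.max())              # softmax over the neighborhood
        alpha /= alpha.sum()
        H_out[i] = np.tanh(alpha @ Wh[nbrs])     # weighted aggregation + nonlinearity
    return H_out

# Toy usage: 4 nodes, 3 input features, 2 output features
rng = np.random.default_rng(0)
A = np.ones((4, 4))                              # dense toy graph with self-loops
H = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))
a = rng.standard_normal(4)                       # attention vector of size 2 * F'
H_new = gat_layer(A, H, W, a)
```

A multi-head version would run this layer K times with separate $W^k$ and $a^k$ and concatenate (or average, in the final layer) the resulting features.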

3.1.3. Optimization and Computational Complexity

The model learns by minimizing the Mean Squared Error (MSE) loss between predicted and actual returns. The MSE loss is convex in the output predictions, but the overall optimization landscape is non-convex because of the nonlinear activation functions (e.g., ReLU, ELU) and stacked neural layers in both the GCN and GAT architectures. Such non-convexity is typical of deep learning models; in practice, stochastic gradient-based optimizers (e.g., Adam) find local minima that generalize well.
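A minimal sketch of this optimization, assuming PyTorch and a toy two-layer graph model, is shown below; the identity adjacency matrix, random features, and layer sizes are illustrative placeholders, not the paper's actual configuration.

```python
import torch

class TinyGCN(torch.nn.Module):
    """Two-layer GCN-style model using a fixed dense normalized adjacency."""
    def __init__(self, a_hat, in_dim, hid_dim):
        super().__init__()
        self.a_hat = a_hat                          # normalized adjacency A_hat
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, 1)
    def forward(self, x):
        h = torch.relu(self.a_hat @ self.lin1(x))   # aggregate + transform + ReLU
        return (self.a_hat @ self.lin2(h)).squeeze(-1)

n, f = 15, 8                                        # 15 indices, toy feature size
a_hat = torch.eye(n)                                # placeholder for D^{-1/2} A~ D^{-1/2}
x = torch.randn(n, f)                               # toy node features
y = torch.randn(n)                                  # toy next-day returns

model = TinyGCN(a_hat, f, 16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

for epoch in range(100):                            # 100 epochs, as in Section 3.4
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)                     # MSE: convex in outputs,
    loss.backward()                                 # non-convex in the weights
    optimizer.step()                                # Adam step toward a local minimum
```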
The proposed models, Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs), exhibit polynomial computational complexity, which supports their applicability to large-scale financial networks. For GCNs, the dominant computational cost arises from matrix multiplications involving node features and weight parameters, yielding a complexity of $O(N F F')$, where $N$ is the number of nodes, $F$ is the input feature dimension, and $F'$ is the output feature dimension. In practice, this cost is significantly reduced by leveraging sparse adjacency matrices, which is appropriate for financial graphs that typically exhibit sparse connectivity.
GATs have a complexity of $O(|E| F)$, where $|E|$ denotes the number of edges, because attention coefficients are computed only for connected node pairs. Although the self-attention mechanism could lead to a worst-case complexity of $O(N^2)$ in dense graphs, this is mitigated in real-world applications by limiting attention computations to each node's local neighbors.
Therefore, under the common assumption of sparsity in financial networks, both GCNs and GATs offer computationally feasible and scalable solutions for modeling complex inter-market relationships.

3.2. Data

To analyze global financial spillover effects, we constructed a dataset of major stock indices spanning from 1 January 2004 to 31 December 2024. The initial list of indices was obtained by crawling the main stock index listings available on Investing.com, which provides a comprehensive and up-to-date registry of representative indices from major global economies. For each index, daily closing prices and trading volume data were collected through automated crawling of Yahoo Finance, which offers a reliable source of historical financial market data.
The initial collection yielded 41 global stock indices, covering a diverse range of regions including North America, Europe, and Asia. During the data preprocessing phase, we evaluated the completeness of each index’s time series. Indices with excessive missing values or incomplete coverage over the 21-year sample period were excluded to maintain consistency in longitudinal analysis. This filtering process resulted in 25 indices with sufficient data quality.
Among these, we further narrowed the sample to 15 representative indices, selected based on relative market importance. Specifically, we computed a proxy for market capitalization by taking the cumulative sum of the product of daily closing price and trading volume over the full period. The top 15 indices with the highest cumulative value were retained, under the rationale that these markets play a more influential role in transmitting financial shocks across borders and thus offer richer insights into spillover dynamics.
To prepare the data for modeling and analysis, the closing prices were transformed into daily log return series, which standardizes the data and enhances stationarity—a critical requirement for many econometric and machine learning forecasting models.
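The transformation itself is a one-liner in pandas; the toy price table below is an assumed stand-in for the crawled closing prices.

```python
import numpy as np
import pandas as pd

# Toy price series standing in for the crawled daily closing prices
prices = pd.DataFrame({"S&P 500": [4700.0, 4712.3, 4695.8, 4730.1],
                       "KOSPI": [2600.0, 2588.4, 2602.7, 2615.9]})
log_returns = np.log(prices / prices.shift(1)).dropna()  # r_t = ln(P_t / P_{t-1})
```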

3.3. Graph Data Generation

The original tabular data were transformed into a graph-structured dataset to enable graph-based learning. In this graph, stock indices serve as nodes, while edges represent historical correlation relationships between them. During network construction, only edges with absolute correlation coefficients exceeding 0.3 were retained.
The chosen threshold followed both statistical conventions and financial network analysis principles. The absolute correlation coefficient of 0.3 or higher in correlation analysis indicates moderate to strong relationships because positive values (≥0.3) show meaningful positive associations and negative values (≤−0.3) indicate significant inverse relationships. The graph structure becomes less reliable when correlations fall below this threshold because they introduce excessive noise.
Financial network modeling research shows that low correlation values tend to change frequently which makes them unreliable for long-term market analysis. The 0.3 threshold selection helps the model detect important financial connections while avoiding temporary market fluctuations and random statistical noise. The model achieves better robustness through this balance because it selects meaningful financial dependencies which represent actual market structures. The graph structures are plotted in Appendix C, Figure A1a–oo.
The graph structure remains static throughout each 6-month evaluation period but changes dynamically between time segments. The graphs derive from thresholded correlation values (|ρ| ≥ 0.3) which produce sparse undirected weighted adjacency matrices. The models do not require uniform connectivity because they use natural variations in edge density and degree distribution which emerge from actual market data. The models maintain flexibility to adjust their financial interdependency strength during different periods. The graph structure gets reconstructed independently for each rolling window to handle time variations.
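A minimal sketch of this construction step, assuming a pandas DataFrame of log returns for one 6-month window, is given below; the random data and column names are illustrative only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Toy window of daily log returns: 126 trading days x 15 indices
log_returns = pd.DataFrame(rng.standard_normal((126, 15)),
                           columns=[f"index_{i}" for i in range(15)])

corr = log_returns.corr().to_numpy()                  # pairwise Pearson correlations
np.fill_diagonal(corr, 0.0)                           # drop trivial self-correlations
adjacency = np.where(np.abs(corr) >= 0.3, corr, 0.0)  # keep only |rho| >= 0.3
edges = np.argwhere(np.triu(adjacency) != 0)          # sparse undirected edge list
```

Repeating this per rolling window yields the sparse, undirected, weighted adjacency matrices described above, with edge density varying naturally across market regimes.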

3.4. Experimental Design

The baseline experiment used 15 indices in tabular form with five models from Section 3.1.1 for prediction. A total of 41 segments were tested, each comprising a 6-month training set and a 6-month test set, as detailed in Appendix A, Table A1.
The prediction task was one-day-ahead forecasting: each model predicts the next day's rate of return. For the feature variables, data from time steps 1 to N−1 (where N is the total number of rows in each test period) across all index columns were used. This process was repeated by shifting the target index i for each experiment, and 30 iterations were performed for each target to ensure robust results. Mean RMSE and MAE were calculated for every target index to evaluate model performance comprehensively.
The graph-based experiments were conducted using the GCN and GAT models. Embeddings were generated for each node and edge based on the previously constructed graph dataset. For each combination, experiments were performed for all target indices with 30 loops per target, and each loop consisted of 100 epochs to ensure the robustness of the results. Default settings for the ML models, as outlined in Table A7 of Appendix B, were based on the configurations specified in the respective papers or implementation packages.
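The overall protocol can be sketched as follows; `make_segments`, `build_graph`, and `train_and_predict` are hypothetical helper names introduced for illustration, not functions from the paper's codebase.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

results = []
for train, test in make_segments(log_returns):        # 41 six-month train/test segments
    graph = build_graph(train)                        # thresholded-correlation graph
    for target in train.columns:                      # each of the 15 target indices
        rmse_runs, mae_runs = [], []
        for _ in range(30):                           # 30 repetitions per target
            y_hat, y_true = train_and_predict(graph, train, test, target)
            rmse_runs.append(np.sqrt(mean_squared_error(y_true, y_hat)))
            mae_runs.append(mean_absolute_error(y_true, y_hat))
        results.append((target, np.mean(rmse_runs), np.mean(mae_runs)))
```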

4. Results

The full experimental results can be found in Table A1, Table A2, Table A3, Table A4 and Table A5 of Appendix A. GAT and GCN demonstrate their ability to track nonlinear financial relationships and dynamic dependencies, as presented in Figure A2, Figure A3, Figure A4, Figure A5, Figure A6 and Figure A7 of Appendix D.
(1) Pre-Crisis Period (2004–2007): Early Signal Detection
During the relatively stable pre-crisis years, both models showed moderate yet statistically significant improvements over traditional approaches. In particular, Test 6, conducted in the first half of 2007 amid geopolitical tension, revealed that GAT outperformed MLP (t = 3.8764, p = 0.0003), while GCN results were comparable (t = 3.8670, p = 0.0003).
These findings suggest that even under modest volatility, graph-based models began to detect early structural changes in global financial linkages—although their relative advantage over baselines was not yet dramatic.
(2) Crisis Periods: 2008 Global Financial Crisis & European Debt Crisis
The models achieved their strongest performance during the 2008 Global Financial Crisis, a time when market interdependencies intensified dramatically.
Test 10, conducted after the collapse of Lehman Brothers, showed that GAT significantly outperformed XGBoost (t = 3.4109, p = 0.0011) and MLP (t = 2.2339, p = 0.0169), while GCN produced slightly lower but still competitive results (XGBoost: t = 3.3862, p = 0.0012; MLP: t = 2.2054, p = 0.0180). These results confirm the superiority of GNNs in capturing systemic market shocks and nonlinear contagion paths.
Similarly, during the European Debt Crisis (2010–2012), the models continued to perform strongly. In Test 14 (early 2011), GAT significantly outperformed MLP (t = 3.6446, p = 0.0006). Under heightened uncertainty due to Greece’s bailout negotiations (Test 16), GAT again surpassed XGBoost (t = 1.9666), MLP (t = 2.0200), and KNN (t = 2.2302). GCN also showed competitive performance (e.g., KNN: t = 2.2227, p = 0.0175).
These results emphasize the models’ ability to monitor sovereign risk events and shifts in market sentiment.
(3) Recovery Phases: Post-GFC and Post-COVID-19
As markets entered recovery, GAT and GCN maintained strong predictive capabilities by adapting to evolving inter-market structures.
Test 11 (late 2009) indicated that GAT significantly outperformed XGBoost (t = 2.5188), MLP (t = 2.8342), and KNN (t = 3.5566), with similar results for GCN (e.g., KNN: t = 3.5029, p = 0.0008). During the COVID-19 crisis, both models again delivered outstanding results.
In Test 31 (early 2020), GAT strongly outperformed MLP in RMSE (t = 4.6089, p < 0.0001). In Test 33, GAT showed clear advantages over MLP (p = 0.0029) and SVM (p = 0.0034) in MAE, with GCN producing comparable outcomes.
Test 34 further showed that GAT remained superior even in the post-COVID adjustment phase, outperforming MLP (t = 2.9313) and SVM (t = 2.2648). These results demonstrate that GNNs are highly effective during periods of rapid regime shifts and volatility spikes.
(4) Stable and Mixed Regimes: 2013–2019 and Recent Inflationary Periods
The performance gap between GNNs and baseline models narrowed during relatively calm periods. From 2013 to 2019, when markets were supported by quantitative easing, GAT still managed to outperform traditional models in some cases such as Test 20 (2014) where it beat MLP (t = 2.8370, p = 0.0043).
During Test 25 (2016), amid the Brexit referendum, GAT demonstrated robustness again, outperforming MLP (t = 1.8035, p = 0.0410). In more recent years marked by inflation and monetary tightening, both GAT and GCN remained competitive.
In Test 37 (early 2022), GAT outperformed MLP (t = 1.7436, p = 0.0462), with GCN producing comparable results (t = 1.7395, p = 0.0465). In Test 39 (2023), both models significantly surpassed MLP (GAT: t = 2.3703, p = 0.0129; GCN: t = 2.3651, p = 0.0130), validating their responsiveness to macroeconomic structural changes and their ability to detect inflation-driven market adjustments.
Taken together, the experimental results demonstrate that graph-based models provide superior forecasting power compared to traditional methods, especially under periods of crisis and structural transition. GAT and GCN performed best during events such as the 2008 Global Financial Crisis, European Debt Crisis, Brexit referendum, COVID-19 pandemic, and recent inflationary shocks, all of which were marked by intense shifts in market connectivity. In contrast, during stable periods such as Tests 8 (2008) and 24 (2016), the performance gap narrowed, showing that GNNs’ effectiveness depends heavily on market conditions. The results confirm that GNNs are particularly effective in modeling dynamic, symmetric, and nonlinear spillover effects, which traditional models often fail to capture. These findings suggest that GAT and GCN are especially valuable for systemic risk analysis, adaptive forecasting, and financial decision support in volatile and interconnected market environments.

5. Discussion

This study presents robust empirical evidence indicating that Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs) outperform conventional machine learning and deep learning models in forecasting financial returns across international markets. The advantage of GNN-based models becomes particularly pronounced during periods of heightened market volatility, systemic crises, and subsequent recovery phases—periods characterized by strengthened interdependencies and structural changes within financial networks, which GNNs are uniquely equipped to capture.
Both GCNs and GATs learn representations directly from dynamically evolving network structures. Their architectural design allows for the modeling of symmetric financial relationships, including mutual influences and bidirectional spillovers. GATs, in particular, utilize attention mechanisms that assign learnable weights to neighbor nodes, enabling the model to emphasize the most salient inter-market connections. This adaptive capacity enhances the model’s responsiveness to regime shifts and sudden economic disturbances. Conversely, GCNs are more effective in relatively stable markets, where neighborhood aggregation suffices to capture persistent dependency structures.
These architectural differences explain why GATs consistently outperformed other models during crises such as the 2008 Global Financial Crisis and the COVID-19 shock, while GCNs maintained more stable performance during less volatile periods. These findings suggest that the choice of GNN architecture should be guided by the structural characteristics of the market network and prevailing economic conditions.
The current model operates under a centralized training framework, where financial index data is aggregated and processed in a single computational environment. This design avoids the communication overhead and compression bottlenecks that often occur in distributed systems. However, we acknowledge that centralized architectures may face scalability limitations, especially in real-time, multi-agent financial environments.
Recent advancements such as the work by Doostmohammadian et al. (2025) [34] in IEEE Transactions on Automation Science and Engineering have demonstrated that log-scale quantization in distributed first-order optimization algorithms can substantially reduce communication costs while preserving convergence performance. Their method enables learning over networks of geographically distributed agents by exchanging quantized gradient information, offering practical benefits in bandwidth-constrained settings.
While our current implementation does not incorporate distributed or quantized optimization techniques, we recognize the relevance of such approaches for future extensions of this work. In particular, integrating log-quantized GNN training within decentralized financial forecasting systems could enhance scalability, reduce latency, and support edge-based deployment.
The performance of GNN models is highly sensitive to the structure of the underlying financial network. In this study, graphs were constructed based on pairwise correlations with a threshold of |ρ| ≥ 0.3, producing sparse, undirected networks that evolve over time. The density and connectivity of these graphs vary across market regimes, directly influencing the flow of information during training. During crises, increased graph connectivity creates dense inter-market feedback loops, enhancing the predictive power of GNNs. In contrast, sparse or disconnected graphs—more common in stable periods—limit relational learning and reduce the model’s relative advantage. Thus, model effectiveness is closely tied to graph sparsity, degree distribution, and the temporal stability of edge weights.
While correlation-based graph construction offers computational efficiency and interpretability, it captures only linear and symmetric relationships. This approach cannot account for more complex, nonlinear, or causal dependencies often present in financial systems. Future research should explore advanced graph construction techniques based on mutual information, Granger causality, or transfer entropy to more accurately represent the dynamics of financial markets. Incorporating causal inference and temporal structure would improve both the robustness and interpretability of GNN-based models.
Another key limitation pertains to model interpretability. Although GATs provide some transparency via attention weights, deep graph models generally lack intuitive explanations for their predictions. For practical adoption in financial decision-making, enhanced explainability is essential. The development of visualization tools and post hoc interpretation techniques is therefore critical for promoting trust and accountability in graph-based forecasting systems. Also, integrating Explainable AI (XAI) techniques—such as feature attribution, counterfactual analysis, or graph-specific saliency mapping—into GNN-based forecasting frameworks could further enhance transparency and stakeholder trust. Such integration would allow market analysts to not only observe prediction outputs but also understand the underlying drivers of inter-market dependencies and systemic risk signals. Given that XAI approaches have already been successfully applied in diverse domains such as marketing, healthcare, and policy analytics, their adoption in financial forecasting could similarly improve interpretability, user confidence, and practical decision-making capabilities [35,36,37].
In summary, GNNs offer a flexible and powerful framework for capturing the dynamic, symmetric relationships that emerge during structural changes in global financial systems. Their effectiveness is most evident when traditional models fail to adapt to nonlinear regime shifts. However, their deployment requires careful consideration of graph structure, computational demands, and interpretability. Future progress in distributed graph learning, explainable GNNs, and causal graph construction will be key to realizing their full potential in real-time financial applications across interconnected markets.

6. Conclusions

This study demonstrates that Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs) are effective tools for forecasting financial returns, particularly in environments characterized by structural volatility and systemic risk. By leveraging the underlying financial network structures, these models capture complex, symmetric relationships and nonlinear interdependencies that traditional machine learning and deep learning approaches often fail to model adequately.
Through a comprehensive benchmarking framework, we evaluate GCNs and GATs against standard machine learning models across various market regimes. The results consistently show that graph-based models achieve superior predictive performance during periods of market disruption, such as the 2008 Global Financial Crisis and the COVID-19 pandemic. These performance gains stem from the models’ capacity to learn dynamic patterns and directional spillovers embedded within evolving financial networks—an ability that proves critical in capturing contagion effects and regime transitions.
Nevertheless, our findings also highlight several limitations. The computational complexity of GNNs increases with network size, and their relative advantage diminishes in stable market conditions where inter-index relationships are more static and linear. In such contexts, simpler models like XGBoost or MLP often perform comparably at significantly lower computational cost. These trade-offs suggest that GNNs should be applied selectively—ideally during periods of heightened interconnectivity or structural change, where their relational modeling capabilities provide meaningful benefits.
To enable broader adoption in real-world financial systems, further developments are required in three areas: (1) scalable and adaptive graph construction methods that reflect temporal changes in market topology, (2) computationally efficient training and inference procedures suited for high-frequency environments, and (3) enhanced model interpretability through explainable AI techniques.
Ultimately, this research contributes an integrated framework that connects financial network modeling with predictive modeling techniques, offering a graph-based approach to capturing structural evolution in global markets. The flexibility of GNNs in adapting to nonlinear dynamics and uncovering latent inter-market structures positions them as valuable tools for macroeconomic forecasting, portfolio allocation, and systemic risk monitoring. Their demonstrated ability to identify structural breaks and directional dependencies offers significant potential for informing institutional decision-making and developing next-generation decision-support systems in finance.

Author Contributions

Conceptualization, I.C.; methodology, I.C.; software, T.K.L. and I.C.; validation, I.C. and T.K.L.; formal analysis, T.K.L. and I.C.; investigation, I.C. and T.K.L.; resources, T.K.L. and W.C.K.; data curation, T.K.L. and I.C.; writing—original draft preparation, T.K.L. and I.C.; writing—review and editing, I.C. and T.K.L.; visualization, I.C.; supervision, W.C.K.; project administration, I.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data supporting the results of this study are publicly available and properly cited within the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

Abbreviation | Full Form
GNN | Graph Neural Network
GCN | Graph Convolutional Network
GAT | Graph Attention Network
ML | Machine Learning
DL | Deep Learning
ARIMA | Autoregressive Integrated Moving Average
VAR | Vector Autoregression
GARCH | Generalized Autoregressive Conditional Heteroskedasticity
MLP | Multi-Layer Perceptron
SVM | Support Vector Machine
KNN | K-Nearest Neighbors
RMSE | Root Mean Squared Error
MAE | Mean Absolute Error
NLP | Natural Language Processing

Appendix A

Table A1. Forecasting Results.
Test ID | Model | Mean RMSE | RMSE Summary | Mean MAE | MAE Summary
1 | Random Forest | 0.0089 | 0.0089 ± 0.0032 | 0.0071 | 0.0071 ± 0.0025
 | XGBoost | 0.0098 | 0.0098 ± 0.0035 | 0.0077 | 0.0077 ± 0.0027
 | MLP | 0.0101 | 0.0101 ± 0.0028 | 0.0081 | 0.0081 ± 0.0022
 | KNN | 0.0098 | 0.0098 ± 0.0033 | 0.0077 | 0.0077 ± 0.0027
 | SVM | 0.0100 | 0.0100 ± 0.0028 | 0.0080 | 0.0080 ± 0.0023
 | GCN | 0.0086 | 0.0086 ± 0.0027 | 0.0067 | 0.0067 ± 0.0022
 | GAT | 0.0086 | 0.0086 ± 0.0027 | 0.0067 | 0.0067 ± 0.0022
2 | Random Forest | 0.0080 | 0.0080 ± 0.0033 | 0.0063 | 0.0063 ± 0.0026
 | XGBoost | 0.0085 | 0.0085 ± 0.0033 | 0.0068 | 0.0068 ± 0.0027
 | MLP | 0.0100 | 0.0100 ± 0.0029 | 0.0079 | 0.0079 ± 0.0024
 | KNN | 0.0085 | 0.0085 ± 0.0034 | 0.0067 | 0.0067 ± 0.0027
 | SVM | 0.0081 | 0.0081 ± 0.0031 | 0.0064 | 0.0064 ± 0.0025
 | GCN | 0.0080 | 0.0080 ± 0.0031 | 0.0062 | 0.0062 ± 0.0025
 | GAT | 0.0080 | 0.0080 ± 0.0031 | 0.0062 | 0.0062 ± 0.0025
3 | Random Forest | 0.0086 | 0.0086 ± 0.0031 | 0.0067 | 0.0067 ± 0.0024
 | XGBoost | 0.0092 | 0.0092 ± 0.0033 | 0.0072 | 0.0072 ± 0.0025
 | MLP | 0.0102 | 0.0102 ± 0.0026 | 0.0080 | 0.0080 ± 0.0021
 | KNN | 0.0089 | 0.0089 ± 0.0031 | 0.0070 | 0.0070 ± 0.0026
 | SVM | 0.0094 | 0.0094 ± 0.0031 | 0.0074 | 0.0074 ± 0.0026
 | GCN | 0.0086 | 0.0086 ± 0.0028 | 0.0066 | 0.0066 ± 0.0023
 | GAT | 0.0086 | 0.0086 ± 0.0028 | 0.0066 | 0.0066 ± 0.0023
4 | Random Forest | 0.0112 | 0.0112 ± 0.0040 | 0.0085 | 0.0085 ± 0.0029
 | XGBoost | 0.0118 | 0.0118 ± 0.0043 | 0.0090 | 0.0090 ± 0.0032
 | MLP | 0.0153 | 0.0153 ± 0.0033 | 0.0117 | 0.0117 ± 0.0025
 | KNN | 0.0115 | 0.0115 ± 0.0041 | 0.0088 | 0.0088 ± 0.0031
 | SVM | 0.0116 | 0.0116 ± 0.0040 | 0.0088 | 0.0088 ± 0.0031
 | GCN | 0.0115 | 0.0115 ± 0.0041 | 0.0086 | 0.0086 ± 0.0031
 | GAT | 0.0115 | 0.0115 ± 0.0041 | 0.0086 | 0.0086 ± 0.0031
5 | Random Forest | 0.0091 | 0.0091 ± 0.0028 | 0.0070 | 0.0070 ± 0.0022
 | XGBoost | 0.0099 | 0.0099 ± 0.0032 | 0.0076 | 0.0076 ± 0.0025
 | MLP | 0.0106 | 0.0106 ± 0.0022 | 0.0083 | 0.0083 ± 0.0017
 | KNN | 0.0094 | 0.0094 ± 0.0030 | 0.0074 | 0.0074 ± 0.0024
 | SVM | 0.0103 | 0.0103 ± 0.0041 | 0.0082 | 0.0082 ± 0.0035
 | GCN | 0.0090 | 0.0090 ± 0.0026 | 0.0068 | 0.0068 ± 0.0019
 | GAT | 0.0090 | 0.0090 ± 0.0026 | 0.0068 | 0.0068 ± 0.0019
6 | Random Forest | 0.0097 | 0.0097 ± 0.0026 | 0.0073 | 0.0073 ± 0.0019
 | XGBoost | 0.0104 | 0.0104 ± 0.0028 | 0.0078 | 0.0078 ± 0.0021
 | MLP | 0.0131 | 0.0131 ± 0.0021 | 0.0100 | 0.0100 ± 0.0016
 | KNN | 0.0102 | 0.0102 ± 0.0028 | 0.0077 | 0.0077 ± 0.0021
 | SVM | 0.0103 | 0.0103 ± 0.0029 | 0.0078 | 0.0078 ± 0.0023
 | GCN | 0.0098 | 0.0098 ± 0.0026 | 0.0073 | 0.0073 ± 0.0020
 | GAT | 0.0098 | 0.0098 ± 0.0026 | 0.0073 | 0.0073 ± 0.0020
7 | Random Forest | 0.0142 | 0.0142 ± 0.0031 | 0.0110 | 0.0110 ± 0.0024
 | XGBoost | 0.0151 | 0.0151 ± 0.0032 | 0.0117 | 0.0117 ± 0.0025
 | MLP | 0.0177 | 0.0177 ± 0.0026 | 0.0140 | 0.0140 ± 0.0020
 | KNN | 0.0143 | 0.0143 ± 0.0033 | 0.0110 | 0.0110 ± 0.0025
 | SVM | 0.0161 | 0.0161 ± 0.0039 | 0.0131 | 0.0131 ± 0.0031
 | GCN | 0.0143 | 0.0143 ± 0.0033 | 0.0109 | 0.0109 ± 0.0025
 | GAT | 0.0143 | 0.0143 ± 0.0033 | 0.0109 | 0.0109 ± 0.0025
8 | Random Forest | 0.0171 | 0.0171 ± 0.0035 | 0.0128 | 0.0128 ± 0.0024
 | XGBoost | 0.0183 | 0.0183 ± 0.0039 | 0.0138 | 0.0138 ± 0.0028
 | MLP | 0.0180 | 0.0180 ± 0.0033 | 0.0135 | 0.0135 ± 0.0021
 | KNN | 0.0176 | 0.0176 ± 0.0035 | 0.0134 | 0.0134 ± 0.0025
 | SVM | 0.0172 | 0.0172 ± 0.0041 | 0.0129 | 0.0129 ± 0.0029
 | GCN | 0.0167 | 0.0167 ± 0.0039 | 0.0123 | 0.0123 ± 0.0026
 | GAT | 0.0167 | 0.0167 ± 0.0039 | 0.0123 | 0.0123 ± 0.0026
9 | Random Forest | 0.0320 | 0.0320 ± 0.0044 | 0.0232 | 0.0232 ± 0.0036
 | XGBoost | 0.0329 | 0.0329 ± 0.0045 | 0.0240 | 0.0240 ± 0.0036
 | MLP | 0.0354 | 0.0354 ± 0.0041 | 0.0259 | 0.0259 ± 0.0031
 | KNN | 0.0327 | 0.0327 ± 0.0046 | 0.0236 | 0.0236 ± 0.0037
 | SVM | 0.0344 | 0.0344 ± 0.0048 | 0.0254 | 0.0254 ± 0.0042
 | GCN | 0.0330 | 0.0330 ± 0.0041 | 0.0237 | 0.0237 ± 0.0032
 | GAT | 0.0330 | 0.0330 ± 0.0041 | 0.0237 | 0.0237 ± 0.0032
10 | Random Forest | 0.0230 | 0.0230 ± 0.0032 | 0.0182 | 0.0182 ± 0.0026
 | XGBoost | 0.0258 | 0.0258 ± 0.0040 | 0.0201 | 0.0201 ± 0.0033
 | MLP | 0.0239 | 0.0239 ± 0.0031 | 0.0189 | 0.0189 ± 0.0025
 | KNN | 0.0224 | 0.0224 ± 0.0036 | 0.0177 | 0.0177 ± 0.0030
 | SVM | 0.0244 | 0.0244 ± 0.0042 | 0.0195 | 0.0195 ± 0.0039
 | GCN | 0.0216 | 0.0216 ± 0.0026 | 0.0166 | 0.0166 ± 0.0023
 | GAT | 0.0216 | 0.0216 ± 0.0026 | 0.0165 | 0.0165 ± 0.0023
11 | Random Forest | 0.0128 | 0.0128 ± 0.0021 | 0.0101 | 0.0101 ± 0.0015
 | XGBoost | 0.0147 | 0.0147 ± 0.0024 | 0.0117 | 0.0117 ± 0.0019
 | MLP | 0.0147 | 0.0147 ± 0.0019 | 0.0116 | 0.0116 ± 0.0015
 | KNN | 0.0153 | 0.0153 ± 0.0021 | 0.0123 | 0.0123 ± 0.0017
 | SVM | 0.0138 | 0.0138 ± 0.0022 | 0.0108 | 0.0108 ± 0.0017
 | GCN | 0.0127 | 0.0127 ± 0.0020 | 0.0099 | 0.0099 ± 0.0016
 | GAT | 0.0127 | 0.0127 ± 0.0020 | 0.0099 | 0.0099 ± 0.0016
12 | Random Forest | 0.0143 | 0.0143 ± 0.0038 | 0.0104 | 0.0104 ± 0.0024
 | XGBoost | 0.0154 | 0.0154 ± 0.0039 | 0.0114 | 0.0114 ± 0.0026
 | MLP | 0.0184 | 0.0184 ± 0.0033 | 0.0137 | 0.0137 ± 0.0022
 | KNN | 0.0148 | 0.0148 ± 0.0037 | 0.0109 | 0.0109 ± 0.0023
 | SVM | 0.0146 | 0.0146 ± 0.0038 | 0.0108 | 0.0108 ± 0.0028
 | GCN | 0.0141 | 0.0141 ± 0.0037 | 0.0102 | 0.0102 ± 0.0024
 | GAT | 0.0141 | 0.0141 ± 0.0037 | 0.0102 | 0.0102 ± 0.0024
13 | Random Forest | 0.0106 | 0.0106 ± 0.0024 | 0.0081 | 0.0081 ± 0.0019
 | XGBoost | 0.0115 | 0.0115 ± 0.0026 | 0.0090 | 0.0090 ± 0.0021
 | MLP | 0.0117 | 0.0117 ± 0.0024 | 0.0092 | 0.0092 ± 0.0019
 | KNN | 0.0112 | 0.0112 ± 0.0024 | 0.0086 | 0.0086 ± 0.0019
 | SVM | 0.0138 | 0.0138 ± 0.0083 | 0.0115 | 0.0115 ± 0.0082
 | GCN | 0.0104 | 0.0104 ± 0.0024 | 0.0077 | 0.0077 ± 0.0018
 | GAT | 0.0103 | 0.0103 ± 0.0024 | 0.0077 | 0.0077 ± 0.0018
14 | Random Forest | 0.0105 | 0.0105 ± 0.0024 | 0.0080 | 0.0080 ± 0.0016
 | XGBoost | 0.0115 | 0.0115 ± 0.0026 | 0.0087 | 0.0087 ± 0.0018
 | MLP | 0.0133 | 0.0133 ± 0.0021 | 0.0104 | 0.0104 ± 0.0014
 | KNN | 0.0110 | 0.0110 ± 0.0026 | 0.0084 | 0.0084 ± 0.0018
 | SVM | 0.0104 | 0.0104 ± 0.0026 | 0.0080 | 0.0080 ± 0.0018
 | GCN | 0.0103 | 0.0103 ± 0.0025 | 0.0078 | 0.0078 ± 0.0015
 | GAT | 0.0103 | 0.0103 ± 0.0025 | 0.0078 | 0.0078 ± 0.0015
15 | Random Forest | 0.0186 | 0.0186 ± 0.0043 | 0.0143 | 0.0143 ± 0.0033
 | XGBoost | 0.0195 | 0.0195 ± 0.0041 | 0.0150 | 0.0150 ± 0.0032
 | MLP | 0.0200 | 0.0200 ± 0.0042 | 0.0155 | 0.0155 ± 0.0033
 | KNN | 0.0186 | 0.0186 ± 0.0044 | 0.0143 | 0.0143 ± 0.0034
 | SVM | 0.0201 | 0.0201 ± 0.0043 | 0.0157 | 0.0157 ± 0.0038
 | GCN | 0.0188 | 0.0188 ± 0.0040 | 0.0144 | 0.0144 ± 0.0031
 | GAT | 0.0188 | 0.0188 ± 0.0040 | 0.0144 | 0.0144 ± 0.0031
16 | Random Forest | 0.0118 | 0.0118 ± 0.0032 | 0.0092 | 0.0092 ± 0.0025
 | XGBoost | 0.0132 | 0.0132 ± 0.0034 | 0.0103 | 0.0103 ± 0.0026
 | MLP | 0.0131 | 0.0131 ± 0.0029 | 0.0104 | 0.0104 ± 0.0023
 | KNN | 0.0136 | 0.0136 ± 0.0036 | 0.0106 | 0.0106 ± 0.0028
 | SVM | 0.0130 | 0.0130 ± 0.0034 | 0.0105 | 0.0105 ± 0.0028
 | GCN | 0.0110 | 0.0110 ± 0.0028 | 0.0084 | 0.0084 ± 0.0021
 | GAT | 0.0110 | 0.0110 ± 0.0028 | 0.0084 | 0.0084 ± 0.0021
17 | Random Forest | 0.0097 | 0.0097 ± 0.0034 | 0.0074 | 0.0074 ± 0.0025
 | XGBoost | 0.0104 | 0.0104 ± 0.0035 | 0.0080 | 0.0080 ± 0.0027
 | MLP | 0.0109 | 0.0109 ± 0.0032 | 0.0084 | 0.0084 ± 0.0022
 | KNN | 0.0104 | 0.0104 ± 0.0036 | 0.0078 | 0.0078 ± 0.0026
 | SVM | 0.0101 | 0.0101 ± 0.0034 | 0.0077 | 0.0077 ± 0.0025
 | GCN | 0.0095 | 0.0095 ± 0.0034 | 0.0071 | 0.0071 ± 0.0023
 | GAT | 0.0095 | 0.0095 ± 0.0034 | 0.0071 | 0.0071 ± 0.0024
18 | Random Forest | 0.0108 | 0.0108 ± 0.0040 | 0.0080 | 0.0080 ± 0.0027
 | XGBoost | 0.0115 | 0.0115 ± 0.0042 | 0.0086 | 0.0086 ± 0.0029
 | MLP | 0.0133 | 0.0133 ± 0.0034 | 0.0101 | 0.0101 ± 0.0024
 | KNN | 0.0113 | 0.0113 ± 0.0040 | 0.0085 | 0.0085 ± 0.0028
 | SVM | 0.0112 | 0.0112 ± 0.0045 | 0.0085 | 0.0085 ± 0.0032
 | GCN | 0.0106 | 0.0106 ± 0.0040 | 0.0079 | 0.0079 ± 0.0028
 | GAT | 0.0106 | 0.0106 ± 0.0040 | 0.0079 | 0.0079 ± 0.0028
19 | Random Forest | 0.0094 | 0.0094 ± 0.0038 | 0.0072 | 0.0072 ± 0.0028
 | XGBoost | 0.0104 | 0.0104 ± 0.0042 | 0.0081 | 0.0081 ± 0.0031
 | MLP | 0.0108 | 0.0108 ± 0.0034 | 0.0085 | 0.0085 ± 0.0025
 | KNN | 0.0099 | 0.0099 ± 0.0037 | 0.0076 | 0.0076 ± 0.0027
 | SVM | 0.0120 | 0.0120 ± 0.0066 | 0.0098 | 0.0098 ± 0.0059
 | GCN | 0.0091 | 0.0091 ± 0.0037 | 0.0069 | 0.0069 ± 0.0026
 | GAT | 0.0091 | 0.0091 ± 0.0037 | 0.0069 | 0.0069 ± 0.0026
20 | Random Forest | 0.0089 | 0.0089 ± 0.0028 | 0.0068 | 0.0068 ± 0.0021
 | XGBoost | 0.0097 | 0.0097 ± 0.0032 | 0.0074 | 0.0074 ± 0.0025
 | MLP | 0.0115 | 0.0115 ± 0.0023 | 0.0089 | 0.0089 ± 0.0018
 | KNN | 0.0094 | 0.0094 ± 0.0031 | 0.0072 | 0.0072 ± 0.0024
 | SVM | 0.0092 | 0.0092 ± 0.0028 | 0.0071 | 0.0071 ± 0.0021
 | GCN | 0.0088 | 0.0088 ± 0.0028 | 0.0066 | 0.0066 ± 0.0022
 | GAT | 0.0088 | 0.0088 ± 0.0028 | 0.0066 | 0.0066 ± 0.0022
21 | Random Forest | 0.0103 | 0.0103 ± 0.0033 | 0.0079 | 0.0079 ± 0.0028
 | XGBoost | 0.0111 | 0.0111 ± 0.0035 | 0.0084 | 0.0084 ± 0.0029
 | MLP | 0.0119 | 0.0119 ± 0.0029 | 0.0092 | 0.0092 ± 0.0024
 | KNN | 0.0107 | 0.0107 ± 0.0035 | 0.0083 | 0.0083 ± 0.0028
 | SVM | 0.0108 | 0.0108 ± 0.0032 | 0.0083 | 0.0083 ± 0.0026
 | GCN | 0.0101 | 0.0101 ± 0.0033 | 0.0076 | 0.0076 ± 0.0027
 | GAT | 0.0101 | 0.0101 ± 0.0033 | 0.0076 | 0.0076 ± 0.0027
22 | Random Forest | 0.0099 | 0.0099 ± 0.0026 | 0.0076 | 0.0076 ± 0.0021
 | XGBoost | 0.0106 | 0.0106 ± 0.0031 | 0.0083 | 0.0083 ± 0.0026
 | MLP | 0.0110 | 0.0110 ± 0.0023 | 0.0086 | 0.0086 ± 0.0018
 | KNN | 0.0103 | 0.0103 ± 0.0030 | 0.0081 | 0.0081 ± 0.0023
 | SVM | 0.0100 | 0.0100 ± 0.0024 | 0.0078 | 0.0078 ± 0.0019
 | GCN | 0.0096 | 0.0096 ± 0.0024 | 0.0073 | 0.0073 ± 0.0019
 | GAT | 0.0096 | 0.0096 ± 0.0024 | 0.0073 | 0.0073 ± 0.0019
23 | Random Forest | 0.0133 | 0.0133 ± 0.0029 | 0.0100 | 0.0100 ± 0.0022
 | XGBoost | 0.0139 | 0.0139 ± 0.0032 | 0.0107 | 0.0107 ± 0.0026
 | MLP | 0.0147 | 0.0147 ± 0.0024 | 0.0112 | 0.0112 ± 0.0018
 | KNN | 0.0133 | 0.0133 ± 0.0027 | 0.0100 | 0.0100 ± 0.0020
 | SVM | 0.0136 | 0.0136 ± 0.0028 | 0.0102 | 0.0102 ± 0.0022
 | GCN | 0.0132 | 0.0132 ± 0.0026 | 0.0099 | 0.0099 ± 0.0019
 | GAT | 0.0132 | 0.0132 ± 0.0026 | 0.0099 | 0.0099 ± 0.0019
24 | Random Forest | 0.0139 | 0.0139 ± 0.0039 | 0.0104 | 0.0104 ± 0.0027
 | XGBoost | 0.0152 | 0.0152 ± 0.0040 | 0.0114 | 0.0114 ± 0.0027
 | MLP | 0.0157 | 0.0157 ± 0.0034 | 0.0120 | 0.0120 ± 0.0024
 | KNN | 0.0144 | 0.0144 ± 0.0039 | 0.0107 | 0.0107 ± 0.0028
 | SVM | 0.0149 | 0.0149 ± 0.0047 | 0.0116 | 0.0116 ± 0.0038
 | GCN | 0.0138 | 0.0138 ± 0.0039 | 0.0103 | 0.0103 ± 0.0027
 | GAT | 0.0138 | 0.0138 ± 0.0039 | 0.0103 | 0.0103 ± 0.0027
25 | Random Forest | 0.0092 | 0.0092 ± 0.0028 | 0.0068 | 0.0068 ± 0.0020
 | XGBoost | 0.0102 | 0.0102 ± 0.0029 | 0.0077 | 0.0077 ± 0.0022
 | MLP | 0.0106 | 0.0106 ± 0.0027 | 0.0082 | 0.0082 ± 0.0020
 | KNN | 0.0106 | 0.0106 ± 0.0034 | 0.0080 | 0.0080 ± 0.0025
 | SVM | 0.0139 | 0.0139 ± 0.0102 | 0.0116 | 0.0116 ± 0.0104
 | GCN | 0.0089 | 0.0089 ± 0.0027 | 0.0065 | 0.0065 ± 0.0019
 | GAT | 0.0089 | 0.0089 ± 0.0027 | 0.0065 | 0.0065 ± 0.0019
26 | Random Forest | 0.0072 | 0.0072 ± 0.0023 | 0.0053 | 0.0053 ± 0.0017
 | XGBoost | 0.0080 | 0.0080 ± 0.0026 | 0.0060 | 0.0060 ± 0.0020
 | MLP | 0.0093 | 0.0093 ± 0.0020 | 0.0071 | 0.0071 ± 0.0014
 | KNN | 0.0078 | 0.0078 ± 0.0027 | 0.0058 | 0.0058 ± 0.0019
 | SVM | 0.0089 | 0.0089 ± 0.0045 | 0.0070 | 0.0070 ± 0.0043
 | GCN | 0.0070 | 0.0070 ± 0.0023 | 0.0052 | 0.0052 ± 0.0016
 | GAT | 0.0070 | 0.0070 ± 0.0023 | 0.0052 | 0.0052 ± 0.0016
27 | Random Forest | 0.0064 | 0.0064 ± 0.0018 | 0.0049 | 0.0049 ± 0.0014
 | XGBoost | 0.0072 | 0.0072 ± 0.0020 | 0.0055 | 0.0055 ± 0.0016
 | MLP | 0.0079 | 0.0079 ± 0.0016 | 0.0061 | 0.0061 ± 0.0012
 | KNN | 0.0068 | 0.0068 ± 0.0019 | 0.0052 | 0.0052 ± 0.0015
 | SVM | 0.0096 | 0.0096 ± 0.0078 | 0.0082 | 0.0082 ± 0.0077
 | GCN | 0.0061 | 0.0061 ± 0.0017 | 0.0046 | 0.0046 ± 0.0013
 | GAT | 0.0061 | 0.0061 ± 0.0017 | 0.0046 | 0.0046 ± 0.0013
28 | Random Forest | 0.0094 | 0.0094 ± 0.0019 | 0.0070 | 0.0070 ± 0.0014
 | XGBoost | 0.0099 | 0.0099 ± 0.0019 | 0.0074 | 0.0074 ± 0.0014
 | MLP | 0.0117 | 0.0117 ± 0.0017 | 0.0088 | 0.0088 ± 0.0013
 | KNN | 0.0096 | 0.0096 ± 0.0020 | 0.0073 | 0.0073 ± 0.0015
 | SVM | 0.0098 | 0.0098 ± 0.0019 | 0.0074 | 0.0074 ± 0.0014
 | GCN | 0.0096 | 0.0096 ± 0.0020 | 0.0071 | 0.0071 ± 0.0014
 | GAT | 0.0095 | 0.0095 ± 0.0020 | 0.0071 | 0.0071 ± 0.0014
29 | Random Forest | 0.0114 | 0.0114 ± 0.0028 | 0.0084 | 0.0084 ± 0.0020
 | XGBoost | 0.0125 | 0.0125 ± 0.0033 | 0.0093 | 0.0093 ± 0.0024
 | MLP | 0.0125 | 0.0125 ± 0.0024 | 0.0095 | 0.0095 ± 0.0017
 | KNN | 0.0115 | 0.0115 ± 0.0028 | 0.0086 | 0.0086 ± 0.0021
 | SVM | 0.0123 | 0.0123 ± 0.0034 | 0.0095 | 0.0095 ± 0.0030
 | GCN | 0.0112 | 0.0112 ± 0.0025 | 0.0081 | 0.0081 ± 0.0018
 | GAT | 0.0112 | 0.0112 ± 0.0025 | 0.0081 | 0.0081 ± 0.0018
30 | Random Forest | 0.0092 | 0.0092 ± 0.0024 | 0.0070 | 0.0070 ± 0.0018
 | XGBoost | 0.0102 | 0.0102 ± 0.0026 | 0.0077 | 0.0077 ± 0.0020
 | MLP | 0.0113 | 0.0113 ± 0.0021 | 0.0088 | 0.0088 ± 0.0016
 | KNN | 0.0095 | 0.0095 ± 0.0024 | 0.0071 | 0.0071 ± 0.0019
 | SVM | 0.0118 | 0.0118 ± 0.0036 | 0.0096 | 0.0096 ± 0.0034
 | GCN | 0.0089 | 0.0089 ± 0.0023 | 0.0067 | 0.0067 ± 0.0016
 | GAT | 0.0089 | 0.0089 ± 0.0023 | 0.0067 | 0.0067 ± 0.0016
31 | Random Forest | 0.0086 | 0.0086 ± 0.0021 | 0.0066 | 0.0066 ± 0.0016
 | XGBoost | 0.0094 | 0.0094 ± 0.0024 | 0.0073 | 0.0073 ± 0.0019
 | MLP | 0.0113 | 0.0113 ± 0.0016 | 0.0086 | 0.0086 ± 0.0012
 | KNN | 0.0093 | 0.0093 ± 0.0022 | 0.0071 | 0.0071 ± 0.0017
 | SVM | 0.0093 | 0.0093 ± 0.0021 | 0.0071 | 0.0071 ± 0.0018
 | GCN | 0.0085 | 0.0085 ± 0.0018 | 0.0063 | 0.0063 ± 0.0013
 | GAT | 0.0085 | 0.0085 ± 0.0018 | 0.0063 | 0.0063 ± 0.0013
32 | Random Forest | 0.0266 | 0.0266 ± 0.0061 | 0.0178 | 0.0178 ± 0.0037
 | XGBoost | 0.0269 | 0.0269 ± 0.0059 | 0.0182 | 0.0182 ± 0.0036
 | MLP | 0.0313 | 0.0313 ± 0.0052 | 0.0209 | 0.0209 ± 0.0030
 | KNN | 0.0262 | 0.0262 ± 0.0060 | 0.0177 | 0.0177 ± 0.0037
 | SVM | 0.0267 | 0.0267 ± 0.0059 | 0.0185 | 0.0185 ± 0.0039
 | GCN | 0.0264 | 0.0264 ± 0.0059 | 0.0176 | 0.0176 ± 0.0037
 | GAT | 0.0264 | 0.0264 ± 0.0059 | 0.0176 | 0.0176 ± 0.0037
33 | Random Forest | 0.0178 | 0.0178 ± 0.0198 | 0.0104 | 0.0104 ± 0.0029
 | XGBoost | 0.0194 | 0.0194 ± 0.0193 | 0.0117 | 0.0117 ± 0.0032
 | MLP | 0.0228 | 0.0228 ± 0.0187 | 0.0127 | 0.0127 ± 0.0028
 | KNN | 0.0187 | 0.0187 ± 0.0195 | 0.0113 | 0.0113 ± 0.0029
 | SVM | 0.0226 | 0.0226 ± 0.0197 | 0.0156 | 0.0156 ± 0.0071
 | GCN | 0.0171 | 0.0171 ± 0.0200 | 0.0096 | 0.0096 ± 0.0029
 | GAT | 0.0171 | 0.0171 ± 0.0200 | 0.0096 | 0.0096 ± 0.0029
34 | Random Forest | 0.0101 | 0.0101 ± 0.0026 | 0.0077 | 0.0077 ± 0.0020
 | XGBoost | 0.0113 | 0.0113 ± 0.0027 | 0.0086 | 0.0086 ± 0.0020
 | MLP | 0.0126 | 0.0126 ± 0.0023 | 0.0097 | 0.0097 ± 0.0018
 | KNN | 0.0113 | 0.0113 ± 0.0025 | 0.0088 | 0.0088 ± 0.0020
 | SVM | 0.0134 | 0.0134 ± 0.0051 | 0.0110 | 0.0110 ± 0.0052
 | GCN | 0.0099 | 0.0099 ± 0.0026 | 0.0074 | 0.0074 ± 0.0021
 | GAT | 0.0099 | 0.0099 ± 0.0026 | 0.0074 | 0.0074 ± 0.0021
35 | Random Forest | 0.0105 | 0.0105 ± 0.0031 | 0.0080 | 0.0080 ± 0.0022
 | XGBoost | 0.0113 | 0.0113 ± 0.0035 | 0.0086 | 0.0086 ± 0.0025
 | MLP | 0.0131 | 0.0131 ± 0.0027 | 0.0101 | 0.0101 ± 0.0020
 | KNN | 0.0109 | 0.0109 ± 0.0032 | 0.0082 | 0.0082 ± 0.0022
 | SVM | 0.0120 | 0.0120 ± 0.0072 | 0.0097 | 0.0097 ± 0.0071
 | GCN | 0.0102 | 0.0102 ± 0.0029 | 0.0076 | 0.0076 ± 0.0021
 | GAT | 0.0102 | 0.0102 ± 0.0029 | 0.0076 | 0.0076 ± 0.0021
36 | Random Forest | 0.0154 | 0.0154 ± 0.0043 | 0.0120 | 0.0120 ± 0.0035
 | XGBoost | 0.0163 | 0.0163 ± 0.0043 | 0.0129 | 0.0129 ± 0.0036
 | MLP | 0.0166 | 0.0166 ± 0.0037 | 0.0131 | 0.0131 ± 0.0030
 | KNN | 0.0158 | 0.0158 ± 0.0043 | 0.0123 | 0.0123 ± 0.0035
 | SVM | 0.0152 | 0.0152 ± 0.0041 | 0.0120 | 0.0120 ± 0.0032
 | GCN | 0.0147 | 0.0147 ± 0.0041 | 0.0113 | 0.0113 ± 0.0034
 | GAT | 0.0147 | 0.0147 ± 0.0041 | 0.0113 | 0.0113 ± 0.0034
37 | Random Forest | 0.0138 | 0.0138 ± 0.0042 | 0.0104 | 0.0104 ± 0.0031
 | XGBoost | 0.0146 | 0.0146 ± 0.0044 | 0.0112 | 0.0112 ± 0.0032
 | MLP | 0.0157 | 0.0157 ± 0.0038 | 0.0122 | 0.0122 ± 0.0028
 | KNN | 0.0140 | 0.0140 ± 0.0046 | 0.0107 | 0.0107 ± 0.0034
 | SVM | 0.0157 | 0.0157 ± 0.0059 | 0.0127 | 0.0127 ± 0.0053
 | GCN | 0.0132 | 0.0132 ± 0.0042 | 0.0100 | 0.0100 ± 0.0029
 | GAT | 0.0132 | 0.0132 ± 0.0042 | 0.0100 | 0.0100 ± 0.0030
38 | Random Forest | 0.0113 | 0.0113 ± 0.0055 | 0.0088 | 0.0088 ± 0.0041
 | XGBoost | 0.0129 | 0.0129 ± 0.0057 | 0.0101 | 0.0101 ± 0.0044
 | MLP | 0.0129 | 0.0129 ± 0.0051 | 0.0101 | 0.0101 ± 0.0038
 | KNN | 0.0121 | 0.0121 ± 0.0056 | 0.0095 | 0.0095 ± 0.0043
 | SVM | 0.0115 | 0.0115 ± 0.0054 | 0.0089 | 0.0089 ± 0.0040
GCN0.01080.0108 ± 0.00540.00830.0083 ± 0.0041
GAT0.01080.0108 ± 0.00540.00830.0083 ± 0.0040
39Random Forest0.00900.0090 ± 0.00210.00710.0071 ± 0.0016
XGBoost0.00980.0098 ± 0.00230.00780.0078 ± 0.0018
MLP0.01070.0107 ± 0.00180.00850.0085 ± 0.0015
KNN0.00950.0095 ± 0.00230.00740.0074 ± 0.0018
SVM0.01040.0104 ± 0.00270.00840.0084 ± 0.0024
GCN0.00890.0089 ± 0.00220.00690.0069 ± 0.0018
GAT0.00890.0089 ± 0.00220.00690.0069 ± 0.0018
40Random Forest0.00860.0086 ± 0.00250.00660.0066 ± 0.0019
XGBoost0.00930.0093 ± 0.00250.00730.0073 ± 0.0020
MLP0.01000.0100 ± 0.00230.00780.0078 ± 0.0017
KNN0.00890.0089 ± 0.00260.00700.0070 ± 0.0020
SVM0.00940.0094 ± 0.00380.00740.0074 ± 0.0033
GCN0.00830.0083 ± 0.00250.00630.0063 ± 0.0019
GAT0.00830.0083 ± 0.00250.00630.0063 ± 0.0019
41Random Forest0.01040.0104 ± 0.00360.00780.0078 ± 0.0025
XGBoost0.01130.0113 ± 0.00380.00850.0085 ± 0.0027
MLP0.01220.0122 ± 0.00310.00940.0094 ± 0.0022
KNN0.01080.0108 ± 0.00400.00830.0083 ± 0.0029
SVM0.01130.0113 ± 0.00450.00870.0087 ± 0.0036
GCN0.01000.0100 ± 0.00350.00750.0075 ± 0.0024
GAT0.01000.0100 ± 0.00350.00750.0075 ± 0.0024
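Each table entry above reports the cross-index mean and standard deviation of the error metric within one test window. As a minimal sketch of this aggregation, assuming predictions and realized returns are stored as (days × indices) arrays (the array layout and function name are illustrative, not taken from the study's code):

```python
import numpy as np

def window_error_stats(y_true, y_pred):
    """Summarize one test window: RMSE and MAE per index, then mean and SD.

    y_true, y_pred: arrays of shape (n_days, n_indices) holding realized
    and predicted returns. This layout is an assumption for illustration.
    """
    err = y_pred - y_true
    rmse = np.sqrt((err ** 2).mean(axis=0))  # one RMSE per index
    mae = np.abs(err).mean(axis=0)           # one MAE per index
    return (rmse.mean(), rmse.std()), (mae.mean(), mae.std())
```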
Table A2. Independent T-Test Results (GAT Model, RMSE).
Test ID | Comparison Model | T-Statistic | Significance (10%) | Significance (5%) | Significance (1%) | p-Value
1 | RF | 0.2786 | FALSE | FALSE | FALSE | 0.3913
1 | XGB | 1.0160 | FALSE | FALSE | FALSE | 0.1595
1 | MLP | 1.5337 | TRUE | FALSE | FALSE | 0.0682
1 | KNN | 1.0278 | FALSE | FALSE | FALSE | 0.1566
1 | SVM | 1.3767 | TRUE | FALSE | FALSE | 0.0898
2 | RF | 0.0255 | FALSE | FALSE | FALSE | 0.4899
2 | XGB | 0.4112 | FALSE | FALSE | FALSE | 0.3422
2 | MLP | 1.7753 | TRUE | TRUE | FALSE | 0.0438
2 | KNN | 0.4253 | FALSE | FALSE | FALSE | 0.3371
2 | SVM | 0.1208 | FALSE | FALSE | FALSE | 0.4524
3 | RF | −0.0083 | FALSE | FALSE | FALSE | 0.5033
3 | XGB | 0.5314 | FALSE | FALSE | FALSE | 0.2997
3 | MLP | 1.5581 | TRUE | FALSE | FALSE | 0.0653
3 | KNN | 0.2953 | FALSE | FALSE | FALSE | 0.3850
3 | SVM | 0.6972 | FALSE | FALSE | FALSE | 0.2457
4 | RF | −0.2098 | FALSE | FALSE | FALSE | 0.5823
4 | XGB | 0.1885 | FALSE | FALSE | FALSE | 0.4259
4 | MLP | 2.8205 | TRUE | TRUE | TRUE | 0.0045
4 | KNN | 0.0278 | FALSE | FALSE | FALSE | 0.4890
4 | SVM | 0.1196 | FALSE | FALSE | FALSE | 0.4528
5 | RF | 0.1492 | FALSE | FALSE | FALSE | 0.4412
5 | XGB | 0.8320 | FALSE | FALSE | FALSE | 0.2064
5 | MLP | 1.8747 | TRUE | TRUE | FALSE | 0.0357
5 | KNN | 0.4484 | FALSE | FALSE | FALSE | 0.3287
5 | SVM | 1.0450 | FALSE | FALSE | FALSE | 0.1534
6 | RF | −0.0579 | FALSE | FALSE | FALSE | 0.5229
6 | XGB | 0.6543 | FALSE | FALSE | FALSE | 0.2592
6 | MLP | 3.8764 | TRUE | TRUE | TRUE | 0.0003
6 | KNN | 0.4172 | FALSE | FALSE | FALSE | 0.3399
6 | SVM | 0.4945 | FALSE | FALSE | FALSE | 0.3125
7 | RF | −0.1284 | FALSE | FALSE | FALSE | 0.5506
7 | XGB | 0.6534 | FALSE | FALSE | FALSE | 0.2594
7 | MLP | 3.1230 | TRUE | TRUE | TRUE | 0.0021
7 | KNN | −0.0184 | FALSE | FALSE | FALSE | 0.5073
7 | SVM | 1.3837 | TRUE | FALSE | FALSE | 0.0888
8 | RF | 0.3481 | FALSE | FALSE | FALSE | 0.3652
8 | XGB | 1.1803 | FALSE | FALSE | FALSE | 0.1239
8 | MLP | 1.0402 | FALSE | FALSE | FALSE | 0.1537
8 | KNN | 0.6995 | FALSE | FALSE | FALSE | 0.2450
8 | SVM | 0.3595 | FALSE | FALSE | FALSE | 0.3610
9 | RF | −0.6304 | FALSE | FALSE | FALSE | 0.7332
9 | XGB | −0.0477 | FALSE | FALSE | FALSE | 0.5189
9 | MLP | 1.5992 | TRUE | FALSE | FALSE | 0.0605
9 | KNN | −0.1780 | FALSE | FALSE | FALSE | 0.5700
9 | SVM | 0.8699 | FALSE | FALSE | FALSE | 0.1960
10 | RF | 1.3654 | TRUE | FALSE | FALSE | 0.0917
10 | XGB | 3.4109 | TRUE | TRUE | TRUE | 0.0011
10 | MLP | 2.2339 | TRUE | TRUE | FALSE | 0.0169
10 | KNN | 0.7610 | FALSE | FALSE | FALSE | 0.2268
10 | SVM | 2.2005 | TRUE | TRUE | FALSE | 0.0189
11 | RF | 0.1596 | FALSE | FALSE | FALSE | 0.4372
11 | XGB | 2.5188 | TRUE | TRUE | TRUE | 0.0090
11 | MLP | 2.8342 | TRUE | TRUE | TRUE | 0.0042
11 | KNN | 3.5566 | TRUE | TRUE | TRUE | 0.0007
11 | SVM | 1.4906 | TRUE | FALSE | FALSE | 0.0737
12 | RF | 0.1362 | FALSE | FALSE | FALSE | 0.4463
12 | XGB | 0.9320 | FALSE | FALSE | FALSE | 0.1797
12 | MLP | 3.4280 | TRUE | TRUE | TRUE | 0.0010
12 | KNN | 0.5392 | FALSE | FALSE | FALSE | 0.2970
12 | SVM | 0.3637 | FALSE | FALSE | FALSE | 0.3594
13 | RF | 0.2561 | FALSE | FALSE | FALSE | 0.3999
13 | XGB | 1.3099 | FALSE | FALSE | FALSE | 0.1005
13 | MLP | 1.5817 | TRUE | FALSE | FALSE | 0.0625
13 | KNN | 0.9919 | FALSE | FALSE | FALSE | 0.1649
13 | SVM | 1.5509 | TRUE | FALSE | FALSE | 0.0701
14 | RF | 0.2880 | FALSE | FALSE | FALSE | 0.3877
14 | XGB | 1.3051 | FALSE | FALSE | FALSE | 0.1013
14 | MLP | 3.6446 | TRUE | TRUE | TRUE | 0.0006
14 | KNN | 0.7966 | FALSE | FALSE | FALSE | 0.2162
14 | SVM | 0.1508 | FALSE | FALSE | FALSE | 0.4406
15 | RF | −0.1812 | FALSE | FALSE | FALSE | 0.5712
15 | XGB | 0.4278 | FALSE | FALSE | FALSE | 0.3360
15 | MLP | 0.7853 | FALSE | FALSE | FALSE | 0.2194
15 | KNN | −0.1309 | FALSE | FALSE | FALSE | 0.5516
15 | SVM | 0.8135 | FALSE | FALSE | FALSE | 0.2114
16 | RF | 0.7268 | FALSE | FALSE | FALSE | 0.2367
16 | XGB | 1.9666 | TRUE | TRUE | FALSE | 0.0298
16 | MLP | 2.0200 | TRUE | TRUE | FALSE | 0.0265
16 | KNN | 2.2302 | TRUE | TRUE | FALSE | 0.0172
16 | SVM | 1.8162 | TRUE | TRUE | FALSE | 0.0402
17 | RF | 0.1494 | FALSE | FALSE | FALSE | 0.4411
17 | XGB | 0.7153 | FALSE | FALSE | FALSE | 0.2402
17 | MLP | 1.1627 | FALSE | FALSE | FALSE | 0.1274
17 | KNN | 0.6538 | FALSE | FALSE | FALSE | 0.2593
17 | SVM | 0.4718 | FALSE | FALSE | FALSE | 0.3204
18 | RF | 0.1003 | FALSE | FALSE | FALSE | 0.4604
18 | XGB | 0.5849 | FALSE | FALSE | FALSE | 0.2817
18 | MLP | 1.9292 | TRUE | TRUE | FALSE | 0.0321
18 | KNN | 0.4369 | FALSE | FALSE | FALSE | 0.3328
18 | SVM | 0.3952 | FALSE | FALSE | FALSE | 0.3479
19 | RF | 0.2337 | FALSE | FALSE | FALSE | 0.4085
19 | XGB | 0.9203 | FALSE | FALSE | FALSE | 0.1827
19 | MLP | 1.3292 | TRUE | FALSE | FALSE | 0.0973
19 | KNN | 0.5832 | FALSE | FALSE | FALSE | 0.2822
19 | SVM | 1.4794 | TRUE | FALSE | FALSE | 0.0766
20 | RF | 0.1616 | FALSE | FALSE | FALSE | 0.4364
20 | XGB | 0.8601 | FALSE | FALSE | FALSE | 0.1986
20 | MLP | 2.8370 | TRUE | TRUE | TRUE | 0.0043
20 | KNN | 0.5262 | FALSE | FALSE | FALSE | 0.3015
20 | SVM | 0.4475 | FALSE | FALSE | FALSE | 0.3290
21 | RF | 0.1547 | FALSE | FALSE | FALSE | 0.4391
21 | XGB | 0.7662 | FALSE | FALSE | FALSE | 0.2250
21 | MLP | 1.5533 | TRUE | FALSE | FALSE | 0.0659
21 | KNN | 0.4532 | FALSE | FALSE | FALSE | 0.3269
21 | SVM | 0.5382 | FALSE | FALSE | FALSE | 0.2973
22 | RF | 0.3155 | FALSE | FALSE | FALSE | 0.3775
22 | XGB | 0.9859 | FALSE | FALSE | FALSE | 0.1669
22 | MLP | 1.5690 | TRUE | FALSE | FALSE | 0.0644
22 | KNN | 0.7473 | FALSE | FALSE | FALSE | 0.2309
22 | SVM | 0.4387 | FALSE | FALSE | FALSE | 0.3323
23 | RF | 0.0327 | FALSE | FALSE | FALSE | 0.4871
23 | XGB | 0.6647 | FALSE | FALSE | FALSE | 0.2560
23 | MLP | 1.5740 | TRUE | FALSE | FALSE | 0.0634
23 | KNN | 0.0809 | FALSE | FALSE | FALSE | 0.4681
23 | SVM | 0.3394 | FALSE | FALSE | FALSE | 0.3684
24 | RF | 0.0541 | FALSE | FALSE | FALSE | 0.4786
24 | XGB | 0.9568 | FALSE | FALSE | FALSE | 0.1734
24 | MLP | 1.3802 | TRUE | FALSE | FALSE | 0.0893
24 | KNN | 0.4114 | FALSE | FALSE | FALSE | 0.3419
24 | SVM | 0.7054 | FALSE | FALSE | FALSE | 0.2433
25 | RF | 0.3391 | FALSE | FALSE | FALSE | 0.3685
25 | XGB | 1.3699 | TRUE | FALSE | FALSE | 0.0908
25 | MLP | 1.8035 | TRUE | TRUE | FALSE | 0.0410
25 | KNN | 1.5381 | TRUE | FALSE | FALSE | 0.0679
25 | SVM | 1.8387 | TRUE | TRUE | FALSE | 0.0423
26 | RF | 0.1738 | FALSE | FALSE | FALSE | 0.4316
26 | XGB | 1.0438 | FALSE | FALSE | FALSE | 0.1528
26 | MLP | 2.8730 | TRUE | TRUE | TRUE | 0.0039
26 | KNN | 0.8468 | FALSE | FALSE | FALSE | 0.2022
26 | SVM | 1.4557 | TRUE | FALSE | FALSE | 0.0802
27 | RF | 0.4610 | FALSE | FALSE | FALSE | 0.3243
27 | XGB | 1.4787 | TRUE | FALSE | FALSE | 0.0758
27 | MLP | 2.7531 | TRUE | TRUE | TRUE | 0.0053
27 | KNN | 0.9242 | FALSE | FALSE | FALSE | 0.1820
27 | SVM | 1.6165 | TRUE | FALSE | FALSE | 0.0639
28 | RF | −0.1662 | FALSE | FALSE | FALSE | 0.5653
28 | XGB | 0.4307 | FALSE | FALSE | FALSE | 0.3353
28 | MLP | 2.9072 | TRUE | TRUE | TRUE | 0.0039
28 | KNN | 0.0855 | FALSE | FALSE | FALSE | 0.4663
28 | SVM | 0.2942 | FALSE | FALSE | FALSE | 0.3856
29 | RF | 0.2476 | FALSE | FALSE | FALSE | 0.4032
29 | XGB | 1.1865 | FALSE | FALSE | FALSE | 0.1234
29 | MLP | 1.4032 | TRUE | FALSE | FALSE | 0.0862
29 | KNN | 0.3224 | FALSE | FALSE | FALSE | 0.3749
29 | SVM | 0.9701 | FALSE | FALSE | FALSE | 0.1708
30 | RF | 0.3686 | FALSE | FALSE | FALSE | 0.3576
30 | XGB | 1.4167 | TRUE | FALSE | FALSE | 0.0839
30 | MLP | 2.9361 | TRUE | TRUE | TRUE | 0.0033
30 | KNN | 0.6214 | FALSE | FALSE | FALSE | 0.2697
30 | SVM | 2.6059 | TRUE | TRUE | TRUE | 0.0078
31 | RF | 0.1883 | FALSE | FALSE | FALSE | 0.4260
31 | XGB | 1.1666 | FALSE | FALSE | FALSE | 0.1270
31 | MLP | 4.6089 | TRUE | TRUE | TRUE | 0.0000
31 | KNN | 1.0459 | FALSE | FALSE | FALSE | 0.1525
31 | SVM | 1.1044 | FALSE | FALSE | FALSE | 0.1395
32 | RF | 0.0959 | FALSE | FALSE | FALSE | 0.4622
32 | XGB | 0.2404 | FALSE | FALSE | FALSE | 0.4060
32 | MLP | 2.3175 | TRUE | TRUE | FALSE | 0.0144
32 | KNN | −0.0702 | FALSE | FALSE | FALSE | 0.5277
32 | SVM | 0.1552 | FALSE | FALSE | FALSE | 0.4389
33 | RF | 0.1020 | FALSE | FALSE | FALSE | 0.4597
33 | XGB | 0.3234 | FALSE | FALSE | FALSE | 0.3744
33 | MLP | 0.8075 | FALSE | FALSE | FALSE | 0.2131
33 | KNN | 0.2212 | FALSE | FALSE | FALSE | 0.4133
33 | SVM | 0.7586 | FALSE | FALSE | FALSE | 0.2272
34 | RF | 0.2522 | FALSE | FALSE | FALSE | 0.4014
34 | XGB | 1.4415 | TRUE | FALSE | FALSE | 0.0807
34 | MLP | 2.9313 | TRUE | TRUE | TRUE | 0.0035
34 | KNN | 1.5001 | TRUE | FALSE | FALSE | 0.0728
34 | SVM | 2.2648 | TRUE | TRUE | FALSE | 0.0176
35 | RF | 0.2551 | FALSE | FALSE | FALSE | 0.4002
35 | XGB | 0.9461 | FALSE | FALSE | FALSE | 0.1762
35 | MLP | 2.7989 | TRUE | TRUE | TRUE | 0.0046
35 | KNN | 0.6289 | FALSE | FALSE | FALSE | 0.2673
35 | SVM | 0.8968 | FALSE | FALSE | FALSE | 0.1907
36 | RF | 0.4837 | FALSE | FALSE | FALSE | 0.3163
36 | XGB | 1.0497 | FALSE | FALSE | FALSE | 0.1518
36 | MLP | 1.3258 | TRUE | FALSE | FALSE | 0.0983
36 | KNN | 0.7227 | FALSE | FALSE | FALSE | 0.2382
36 | SVM | 0.3339 | FALSE | FALSE | FALSE | 0.3706
37 | RF | 0.3585 | FALSE | FALSE | FALSE | 0.3613
37 | XGB | 0.8954 | FALSE | FALSE | FALSE | 0.1891
37 | MLP | 1.7436 | TRUE | TRUE | FALSE | 0.0462
37 | KNN | 0.4769 | FALSE | FALSE | FALSE | 0.3186
37 | SVM | 1.3380 | TRUE | FALSE | FALSE | 0.0964
38 | RF | 0.2394 | FALSE | FALSE | FALSE | 0.4063
38 | XGB | 1.0362 | FALSE | FALSE | FALSE | 0.1545
38 | MLP | 1.1023 | FALSE | FALSE | FALSE | 0.1399
38 | KNN | 0.6259 | FALSE | FALSE | FALSE | 0.2682
38 | SVM | 0.3489 | FALSE | FALSE | FALSE | 0.3649
39 | RF | 0.1350 | FALSE | FALSE | FALSE | 0.4468
39 | XGB | 1.0794 | FALSE | FALSE | FALSE | 0.1452
39 | MLP | 2.3703 | TRUE | TRUE | FALSE | 0.0129
39 | KNN | 0.7016 | FALSE | FALSE | FALSE | 0.2446
39 | SVM | 1.5979 | TRUE | FALSE | FALSE | 0.0613
40 | RF | 0.3065 | FALSE | FALSE | FALSE | 0.3808
40 | XGB | 1.1106 | FALSE | FALSE | FALSE | 0.1385
40 | MLP | 1.8459 | TRUE | TRUE | FALSE | 0.0382
40 | KNN | 0.6396 | FALSE | FALSE | FALSE | 0.2640
40 | SVM | 0.8960 | FALSE | FALSE | FALSE | 0.1899
41 | RF | 0.2582 | FALSE | FALSE | FALSE | 0.3992
41 | XGB | 0.8995 | FALSE | FALSE | FALSE | 0.1884
41 | MLP | 1.7367 | TRUE | TRUE | FALSE | 0.0472
41 | KNN | 0.5695 | FALSE | FALSE | FALSE | 0.2870
41 | SVM | 0.8451 | FALSE | FALSE | FALSE | 0.2031
Table A3. Independent T-Test Results (GCN Model, RMSE).
Test ID | Comparison Model | T-Statistic | Significance (10%) | Significance (5%) | Significance (1%) | p-Value
1 | RF | 0.2671 | FALSE | FALSE | FALSE | 0.3957
1 | XGB | 1.0057 | FALSE | FALSE | FALSE | 0.1619
1 | MLP | 1.5225 | TRUE | FALSE | FALSE | 0.0695
1 | KNN | 1.0172 | FALSE | FALSE | FALSE | 0.1590
1 | SVM | 1.3654 | TRUE | FALSE | FALSE | 0.0915
2 | RF | 0.0215 | FALSE | FALSE | FALSE | 0.4915
2 | XGB | 0.4072 | FALSE | FALSE | FALSE | 0.3436
2 | MLP | 1.7710 | TRUE | TRUE | FALSE | 0.0442
2 | KNN | 0.4213 | FALSE | FALSE | FALSE | 0.3385
2 | SVM | 0.1166 | FALSE | FALSE | FALSE | 0.4540
3 | RF | −0.0125 | FALSE | FALSE | FALSE | 0.5049
3 | XGB | 0.5274 | FALSE | FALSE | FALSE | 0.3011
3 | MLP | 1.5540 | TRUE | FALSE | FALSE | 0.0658
3 | KNN | 0.2912 | FALSE | FALSE | FALSE | 0.3865
3 | SVM | 0.6932 | FALSE | FALSE | FALSE | 0.2470
4 | RF | −0.2128 | FALSE | FALSE | FALSE | 0.5835
4 | XGB | 0.1858 | FALSE | FALSE | FALSE | 0.4270
4 | MLP | 2.8192 | TRUE | TRUE | TRUE | 0.0045
4 | KNN | 0.0250 | FALSE | FALSE | FALSE | 0.4901
4 | SVM | 0.1168 | FALSE | FALSE | FALSE | 0.4539
5 | RF | 0.1407 | FALSE | FALSE | FALSE | 0.4446
5 | XGB | 0.8241 | FALSE | FALSE | FALSE | 0.2086
5 | MLP | 1.8649 | TRUE | TRUE | FALSE | 0.0364
5 | KNN | 0.4402 | FALSE | FALSE | FALSE | 0.3316
5 | SVM | 1.0382 | FALSE | FALSE | FALSE | 0.1549
6 | RF | −0.0654 | FALSE | FALSE | FALSE | 0.5258
6 | XGB | 0.6471 | FALSE | FALSE | FALSE | 0.2615
6 | MLP | 3.8670 | TRUE | TRUE | TRUE | 0.0003
6 | KNN | 0.4100 | FALSE | FALSE | FALSE | 0.3425
6 | SVM | 0.4874 | FALSE | FALSE | FALSE | 0.3149
7 | RF | −0.1301 | FALSE | FALSE | FALSE | 0.5513
7 | XGB | 0.6519 | FALSE | FALSE | FALSE | 0.2599
7 | MLP | 3.1222 | TRUE | TRUE | TRUE | 0.0022
7 | KNN | −0.0201 | FALSE | FALSE | FALSE | 0.5079
7 | SVM | 1.3825 | TRUE | FALSE | FALSE | 0.0890
8 | RF | 0.3441 | FALSE | FALSE | FALSE | 0.3667
8 | XGB | 1.1759 | FALSE | FALSE | FALSE | 0.1248
8 | MLP | 1.0354 | FALSE | FALSE | FALSE | 0.1548
8 | KNN | 0.6952 | FALSE | FALSE | FALSE | 0.2464
8 | SVM | 0.3558 | FALSE | FALSE | FALSE | 0.3623
9 | RF | −0.6348 | FALSE | FALSE | FALSE | 0.7346
9 | XGB | −0.0520 | FALSE | FALSE | FALSE | 0.5205
9 | MLP | 1.5950 | TRUE | FALSE | FALSE | 0.0610
9 | KNN | −0.1822 | FALSE | FALSE | FALSE | 0.5716
9 | SVM | 0.8660 | FALSE | FALSE | FALSE | 0.1970
10 | RF | 1.3383 | TRUE | FALSE | FALSE | 0.0960
10 | XGB | 3.3862 | TRUE | TRUE | TRUE | 0.0012
10 | MLP | 2.2054 | TRUE | TRUE | FALSE | 0.0180
10 | KNN | 0.7362 | FALSE | FALSE | FALSE | 0.2341
10 | SVM | 2.1774 | TRUE | TRUE | FALSE | 0.0198
11 | RF | 0.1116 | FALSE | FALSE | FALSE | 0.4560
11 | XGB | 2.4705 | TRUE | TRUE | FALSE | 0.0100
11 | MLP | 2.7788 | TRUE | TRUE | TRUE | 0.0048
11 | KNN | 3.5029 | TRUE | TRUE | TRUE | 0.0008
11 | SVM | 1.4420 | TRUE | FALSE | FALSE | 0.0803
12 | RF | 0.1327 | FALSE | FALSE | FALSE | 0.4477
12 | XGB | 0.9288 | FALSE | FALSE | FALSE | 0.1805
12 | MLP | 3.4254 | TRUE | TRUE | TRUE | 0.0010
12 | KNN | 0.5357 | FALSE | FALSE | FALSE | 0.2982
12 | SVM | 0.3603 | FALSE | FALSE | FALSE | 0.3607
13 | RF | 0.2391 | FALSE | FALSE | FALSE | 0.4064
13 | XGB | 1.2945 | FALSE | FALSE | FALSE | 0.1031
13 | MLP | 1.5660 | TRUE | FALSE | FALSE | 0.0643
13 | KNN | 0.9756 | FALSE | FALSE | FALSE | 0.1688
13 | SVM | 1.5444 | TRUE | FALSE | FALSE | 0.0709
14 | RF | 0.2791 | FALSE | FALSE | FALSE | 0.3911
14 | XGB | 1.2970 | FALSE | FALSE | FALSE | 0.1026
14 | MLP | 3.6374 | TRUE | TRUE | TRUE | 0.0006
14 | KNN | 0.7883 | FALSE | FALSE | FALSE | 0.2186
14 | SVM | 0.1422 | FALSE | FALSE | FALSE | 0.4440
15 | RF | −0.1837 | FALSE | FALSE | FALSE | 0.5722
15 | XGB | 0.4254 | FALSE | FALSE | FALSE | 0.3369
15 | MLP | 0.7830 | FALSE | FALSE | FALSE | 0.2201
15 | KNN | −0.1334 | FALSE | FALSE | FALSE | 0.5526
15 | SVM | 0.8112 | FALSE | FALSE | FALSE | 0.2121
16 | RF | 0.7185 | FALSE | FALSE | FALSE | 0.2392
16 | XGB | 1.9587 | TRUE | TRUE | FALSE | 0.0303
16 | MLP | 2.0114 | TRUE | TRUE | FALSE | 0.0270
16 | KNN | 2.2227 | TRUE | TRUE | FALSE | 0.0175
16 | SVM | 1.8084 | TRUE | TRUE | FALSE | 0.0409
17 | RF | 0.1428 | FALSE | FALSE | FALSE | 0.4437
17 | XGB | 0.7092 | FALSE | FALSE | FALSE | 0.2420
17 | MLP | 1.1566 | FALSE | FALSE | FALSE | 0.1286
17 | KNN | 0.6477 | FALSE | FALSE | FALSE | 0.2612
17 | SVM | 0.4654 | FALSE | FALSE | FALSE | 0.3226
18 | RF | 0.0976 | FALSE | FALSE | FALSE | 0.4615
18 | XGB | 0.5824 | FALSE | FALSE | FALSE | 0.2825
18 | MLP | 1.9270 | TRUE | TRUE | FALSE | 0.0322
18 | KNN | 0.4343 | FALSE | FALSE | FALSE | 0.3337
18 | SVM | 0.3928 | FALSE | FALSE | FALSE | 0.3487
19 | RF | 0.2273 | FALSE | FALSE | FALSE | 0.4109
19 | XGB | 0.9143 | FALSE | FALSE | FALSE | 0.1842
19 | MLP | 1.3227 | TRUE | FALSE | FALSE | 0.0983
19 | KNN | 0.5768 | FALSE | FALSE | FALSE | 0.2843
19 | SVM | 1.4751 | TRUE | FALSE | FALSE | 0.0772
20 | RF | 0.1580 | FALSE | FALSE | FALSE | 0.4378
20 | XGB | 0.8571 | FALSE | FALSE | FALSE | 0.1994
20 | MLP | 2.8345 | TRUE | TRUE | TRUE | 0.0043
20 | KNN | 0.5231 | FALSE | FALSE | FALSE | 0.3025
20 | SVM | 0.4442 | FALSE | FALSE | FALSE | 0.3302
21 | RF | 0.1502 | FALSE | FALSE | FALSE | 0.4408
21 | XGB | 0.7618 | FALSE | FALSE | FALSE | 0.2263
21 | MLP | 1.5486 | TRUE | FALSE | FALSE | 0.0664
21 | KNN | 0.4489 | FALSE | FALSE | FALSE | 0.3285
21 | SVM | 0.5337 | FALSE | FALSE | FALSE | 0.2989
22 | RF | 0.3117 | FALSE | FALSE | FALSE | 0.3789
22 | XGB | 0.9826 | FALSE | FALSE | FALSE | 0.1677
22 | MLP | 1.5653 | TRUE | FALSE | FALSE | 0.0648
22 | KNN | 0.7438 | FALSE | FALSE | FALSE | 0.2320
22 | SVM | 0.4347 | FALSE | FALSE | FALSE | 0.3337
23 | RF | 0.0272 | FALSE | FALSE | FALSE | 0.4892
23 | XGB | 0.6593 | FALSE | FALSE | FALSE | 0.2576
23 | MLP | 1.5674 | TRUE | FALSE | FALSE | 0.0642
23 | KNN | 0.0752 | FALSE | FALSE | FALSE | 0.4703
23 | SVM | 0.3338 | FALSE | FALSE | FALSE | 0.3705
24 | RF | 0.0462 | FALSE | FALSE | FALSE | 0.4818
24 | XGB | 0.9490 | FALSE | FALSE | FALSE | 0.1754
24 | MLP | 1.3718 | TRUE | FALSE | FALSE | 0.0906
24 | KNN | 0.4036 | FALSE | FALSE | FALSE | 0.3448
24 | SVM | 0.6983 | FALSE | FALSE | FALSE | 0.2455
25 | RF | 0.3229 | FALSE | FALSE | FALSE | 0.3746
25 | XGB | 1.3548 | TRUE | FALSE | FALSE | 0.0932
25 | MLP | 1.7883 | TRUE | TRUE | FALSE | 0.0423
25 | KNN | 1.5243 | TRUE | FALSE | FALSE | 0.0696
25 | SVM | 1.8329 | TRUE | TRUE | FALSE | 0.0428
26 | RF | 0.1683 | FALSE | FALSE | FALSE | 0.4338
26 | XGB | 1.0386 | FALSE | FALSE | FALSE | 0.1540
26 | MLP | 2.8667 | TRUE | TRUE | TRUE | 0.0039
26 | KNN | 0.8416 | FALSE | FALSE | FALSE | 0.2036
26 | SVM | 1.4520 | TRUE | FALSE | FALSE | 0.0806
27 | RF | 0.4497 | FALSE | FALSE | FALSE | 0.3283
27 | XGB | 1.4685 | TRUE | FALSE | FALSE | 0.0771
27 | MLP | 2.7426 | TRUE | TRUE | TRUE | 0.0055
27 | KNN | 0.9132 | FALSE | FALSE | FALSE | 0.1848
27 | SVM | 1.6130 | TRUE | FALSE | FALSE | 0.0643
28 | RF | −0.1731 | FALSE | FALSE | FALSE | 0.5680
28 | XGB | 0.4243 | FALSE | FALSE | FALSE | 0.3376
28 | MLP | 2.9030 | TRUE | TRUE | TRUE | 0.0039
28 | KNN | 0.0789 | FALSE | FALSE | FALSE | 0.4689
28 | SVM | 0.2876 | FALSE | FALSE | FALSE | 0.3880
29 | RF | 0.2439 | FALSE | FALSE | FALSE | 0.4046
29 | XGB | 1.1839 | FALSE | FALSE | FALSE | 0.1239
29 | MLP | 1.4007 | TRUE | FALSE | FALSE | 0.0866
29 | KNN | 0.3188 | FALSE | FALSE | FALSE | 0.3762
29 | SVM | 0.9674 | FALSE | FALSE | FALSE | 0.1715
30 | RF | 0.3680 | FALSE | FALSE | FALSE | 0.3578
30 | XGB | 1.4168 | TRUE | FALSE | FALSE | 0.0839
30 | MLP | 2.9374 | TRUE | TRUE | TRUE | 0.0033
30 | KNN | 0.6210 | FALSE | FALSE | FALSE | 0.2698
30 | SVM | 2.6063 | TRUE | TRUE | TRUE | 0.0078
31 | RF | 0.1803 | FALSE | FALSE | FALSE | 0.4291
31 | XGB | 1.1598 | FALSE | FALSE | FALSE | 0.1284
31 | MLP | 4.6031 | TRUE | TRUE | TRUE | 0.0000
31 | KNN | 1.0386 | FALSE | FALSE | FALSE | 0.1541
31 | SVM | 1.0970 | FALSE | FALSE | FALSE | 0.1411
32 | RF | 0.0943 | FALSE | FALSE | FALSE | 0.4628
32 | XGB | 0.2389 | FALSE | FALSE | FALSE | 0.4065
32 | MLP | 2.3162 | TRUE | TRUE | FALSE | 0.0144
32 | KNN | −0.0718 | FALSE | FALSE | FALSE | 0.5283
32 | SVM | 0.1537 | FALSE | FALSE | FALSE | 0.4395
33 | RF | 0.1012 | FALSE | FALSE | FALSE | 0.4601
33 | XGB | 0.3226 | FALSE | FALSE | FALSE | 0.3747
33 | MLP | 0.8068 | FALSE | FALSE | FALSE | 0.2133
33 | KNN | 0.2204 | FALSE | FALSE | FALSE | 0.4136
33 | SVM | 0.7578 | FALSE | FALSE | FALSE | 0.2274
34 | RF | 0.2447 | FALSE | FALSE | FALSE | 0.4043
34 | XGB | 1.4338 | TRUE | FALSE | FALSE | 0.0818
34 | MLP | 2.9226 | TRUE | TRUE | TRUE | 0.0036
34 | KNN | 1.4921 | TRUE | FALSE | FALSE | 0.0739
34 | SVM | 2.2598 | TRUE | TRUE | FALSE | 0.0178
35 | RF | 0.2514 | FALSE | FALSE | FALSE | 0.4017
35 | XGB | 0.9425 | FALSE | FALSE | FALSE | 0.1771
35 | MLP | 2.7941 | TRUE | TRUE | TRUE | 0.0047
35 | KNN | 0.6251 | FALSE | FALSE | FALSE | 0.2685
35 | SVM | 0.8947 | FALSE | FALSE | FALSE | 0.1912
36 | RF | 0.4832 | FALSE | FALSE | FALSE | 0.3165
36 | XGB | 1.0491 | FALSE | FALSE | FALSE | 0.1519
36 | MLP | 1.3251 | TRUE | FALSE | FALSE | 0.0984
36 | KNN | 0.7221 | FALSE | FALSE | FALSE | 0.2383
36 | SVM | 0.3334 | FALSE | FALSE | FALSE | 0.3708
37 | RF | 0.3542 | FALSE | FALSE | FALSE | 0.3629
37 | XGB | 0.8913 | FALSE | FALSE | FALSE | 0.1902
37 | MLP | 1.7395 | TRUE | TRUE | FALSE | 0.0465
37 | KNN | 0.4728 | FALSE | FALSE | FALSE | 0.3200
37 | SVM | 1.3346 | TRUE | FALSE | FALSE | 0.0970
38 | RF | 0.2368 | FALSE | FALSE | FALSE | 0.4073
38 | XGB | 1.0334 | FALSE | FALSE | FALSE | 0.1551
38 | MLP | 1.0993 | FALSE | FALSE | FALSE | 0.1405
38 | KNN | 0.6232 | FALSE | FALSE | FALSE | 0.2691
38 | SVM | 0.3462 | FALSE | FALSE | FALSE | 0.3659
39 | RF | 0.1285 | FALSE | FALSE | FALSE | 0.4494
39 | XGB | 1.0738 | FALSE | FALSE | FALSE | 0.1464
39 | MLP | 2.3651 | TRUE | TRUE | FALSE | 0.0130
39 | KNN | 0.6958 | FALSE | FALSE | FALSE | 0.2464
39 | SVM | 1.5930 | TRUE | FALSE | FALSE | 0.0618
40 | RF | 0.2987 | FALSE | FALSE | FALSE | 0.3838
40 | XGB | 1.1030 | FALSE | FALSE | FALSE | 0.1401
40 | MLP | 1.8383 | TRUE | TRUE | FALSE | 0.0388
40 | KNN | 0.6319 | FALSE | FALSE | FALSE | 0.2665
40 | SVM | 0.8900 | FALSE | FALSE | FALSE | 0.1914
41 | RF | 0.2557 | FALSE | FALSE | FALSE | 0.4001
41 | XGB | 0.8975 | FALSE | FALSE | FALSE | 0.1889
41 | MLP | 1.7353 | TRUE | TRUE | FALSE | 0.0473
41 | KNN | 0.5674 | FALSE | FALSE | FALSE | 0.2877
41 | SVM | 0.8432 | FALSE | FALSE | FALSE | 0.2036
Table A4. Independent T-Test Results (GAT Model, MAE).
Test ID | Comparison Model | T-Statistic | Significance (10%) | Significance (5%) | Significance (1%) | p-Value
1 | RF | 0.4108 | FALSE | FALSE | FALSE | 0.3422
1 | XGB | 1.1188 | FALSE | FALSE | FALSE | 0.1366
1 | MLP | 1.7498 | TRUE | TRUE | FALSE | 0.0456
1 | KNN | 1.0827 | FALSE | FALSE | FALSE | 0.1443
1 | SVM | 1.6132 | TRUE | FALSE | FALSE | 0.0590
2 | RF | 0.1116 | FALSE | FALSE | FALSE | 0.4560
2 | XGB | 0.5359 | FALSE | FALSE | FALSE | 0.2983
2 | MLP | 1.8668 | TRUE | TRUE | FALSE | 0.0366
2 | KNN | 0.4860 | FALSE | FALSE | FALSE | 0.3155
2 | SVM | 0.1347 | FALSE | FALSE | FALSE | 0.4470
3 | RF | 0.0340 | FALSE | FALSE | FALSE | 0.4865
3 | XGB | 0.6169 | FALSE | FALSE | FALSE | 0.2712
3 | MLP | 1.7234 | TRUE | TRUE | FALSE | 0.0480
3 | KNN | 0.4417 | FALSE | FALSE | FALSE | 0.3311
3 | SVM | 0.8769 | FALSE | FALSE | FALSE | 0.1941
4 | RF | −0.1220 | FALSE | FALSE | FALSE | 0.5481
4 | XGB | 0.3646 | FALSE | FALSE | FALSE | 0.3591
4 | MLP | 2.9805 | TRUE | TRUE | TRUE | 0.0030
4 | KNN | 0.1782 | FALSE | FALSE | FALSE | 0.4299
4 | SVM | 0.2099 | FALSE | FALSE | FALSE | 0.4177
5 | RF | 0.3619 | FALSE | FALSE | FALSE | 0.3601
5 | XGB | 1.0454 | FALSE | FALSE | FALSE | 0.1527
5 | MLP | 2.4066 | TRUE | TRUE | FALSE | 0.0115
5 | KNN | 0.7966 | FALSE | FALSE | FALSE | 0.2164
5 | SVM | 1.3614 | TRUE | FALSE | FALSE | 0.0938
6 | RF | −0.0334 | FALSE | FALSE | FALSE | 0.5132
6 | XGB | 0.6721 | FALSE | FALSE | FALSE | 0.2535
6 | MLP | 3.9890 | TRUE | TRUE | TRUE | 0.0002
6 | KNN | 0.4716 | FALSE | FALSE | FALSE | 0.3204
6 | SVM | 0.5947 | FALSE | FALSE | FALSE | 0.2785
7 | RF | 0.0271 | FALSE | FALSE | FALSE | 0.4893
7 | XGB | 0.8407 | FALSE | FALSE | FALSE | 0.2038
7 | MLP | 3.7928 | TRUE | TRUE | TRUE | 0.0004
7 | KNN | 0.1349 | FALSE | FALSE | FALSE | 0.4468
7 | SVM | 2.1117 | TRUE | TRUE | FALSE | 0.0221
8 | RF | 0.5553 | FALSE | FALSE | FALSE | 0.2915
8 | XGB | 1.5265 | TRUE | FALSE | FALSE | 0.0691
8 | MLP | 1.3918 | TRUE | FALSE | FALSE | 0.0877
8 | KNN | 1.1138 | FALSE | FALSE | FALSE | 0.1374
8 | SVM | 0.5533 | FALSE | FALSE | FALSE | 0.2923
9 | RF | −0.4032 | FALSE | FALSE | FALSE | 0.6551
9 | XGB | 0.2449 | FALSE | FALSE | FALSE | 0.4042
9 | MLP | 1.9274 | TRUE | TRUE | FALSE | 0.0321
9 | KNN | −0.0438 | FALSE | FALSE | FALSE | 0.5173
9 | SVM | 1.2285 | FALSE | FALSE | FALSE | 0.1151
10 | RF | 1.8607 | TRUE | TRUE | FALSE | 0.0368
10 | XGB | 3.5261 | TRUE | TRUE | TRUE | 0.0008
10 | MLP | 2.7020 | TRUE | TRUE | TRUE | 0.0058
10 | KNN | 1.1975 | FALSE | FALSE | FALSE | 0.1209
10 | SVM | 2.5575 | TRUE | TRUE | TRUE | 0.0089
11 | RF | 0.4627 | FALSE | FALSE | FALSE | 0.3236
11 | XGB | 2.9518 | TRUE | TRUE | TRUE | 0.0032
11 | MLP | 3.1681 | TRUE | TRUE | TRUE | 0.0018
11 | KNN | 4.0375 | TRUE | TRUE | TRUE | 0.0002
11 | SVM | 1.6521 | TRUE | FALSE | FALSE | 0.0549
12 | RF | 0.2493 | FALSE | FALSE | FALSE | 0.4025
12 | XGB | 1.2890 | FALSE | FALSE | FALSE | 0.1040
12 | MLP | 4.1612 | TRUE | TRUE | TRUE | 0.0001
12 | KNN | 0.8258 | FALSE | FALSE | FALSE | 0.2080
12 | SVM | 0.6097 | FALSE | FALSE | FALSE | 0.2735
13 | RF | 0.4969 | FALSE | FALSE | FALSE | 0.3116
13 | XGB | 1.7445 | TRUE | TRUE | FALSE | 0.0461
13 | MLP | 2.0865 | TRUE | TRUE | FALSE | 0.0231
13 | KNN | 1.2108 | FALSE | FALSE | FALSE | 0.1181
13 | SVM | 1.7190 | TRUE | FALSE | FALSE | 0.0528
14 | RF | 0.3651 | FALSE | FALSE | FALSE | 0.3589
14 | XGB | 1.6022 | TRUE | FALSE | FALSE | 0.0603
14 | MLP | 5.0067 | TRUE | TRUE | TRUE | 0.0000
14 | KNN | 1.0967 | FALSE | FALSE | FALSE | 0.1412
14 | SVM | 0.3996 | FALSE | FALSE | FALSE | 0.3463
15 | RF | −0.1434 | FALSE | FALSE | FALSE | 0.5565
15 | XGB | 0.4731 | FALSE | FALSE | FALSE | 0.3199
15 | MLP | 0.8719 | FALSE | FALSE | FALSE | 0.1953
15 | KNN | −0.0877 | FALSE | FALSE | FALSE | 0.5346
15 | SVM | 0.9952 | FALSE | FALSE | FALSE | 0.1642
16 | RF | 0.9820 | FALSE | FALSE | FALSE | 0.1674
16 | XGB | 2.1903 | TRUE | TRUE | FALSE | 0.0186
16 | MLP | 2.4577 | TRUE | TRUE | FALSE | 0.0102
16 | KNN | 2.4406 | TRUE | TRUE | FALSE | 0.0109
16 | SVM | 2.2612 | TRUE | TRUE | FALSE | 0.0162
17 | RF | 0.3525 | FALSE | FALSE | FALSE | 0.3636
17 | XGB | 1.0313 | FALSE | FALSE | FALSE | 0.1557
17 | MLP | 1.6123 | TRUE | FALSE | FALSE | 0.0591
17 | KNN | 0.8227 | FALSE | FALSE | FALSE | 0.2088
17 | SVM | 0.7681 | FALSE | FALSE | FALSE | 0.2244
18 | RF | 0.1177 | FALSE | FALSE | FALSE | 0.4536
18 | XGB | 0.6886 | FALSE | FALSE | FALSE | 0.2484
18 | MLP | 2.3394 | TRUE | TRUE | FALSE | 0.0134
18 | KNN | 0.6174 | FALSE | FALSE | FALSE | 0.2710
18 | SVM | 0.5719 | FALSE | FALSE | FALSE | 0.2860
19 | RF | 0.3510 | FALSE | FALSE | FALSE | 0.3641
19 | XGB | 1.1733 | FALSE | FALSE | FALSE | 0.1254
19 | MLP | 1.6938 | TRUE | FALSE | FALSE | 0.0507
19 | KNN | 0.7818 | FALSE | FALSE | FALSE | 0.2205
19 | SVM | 1.7681 | TRUE | TRUE | FALSE | 0.0464
20 | RF | 0.2411 | FALSE | FALSE | FALSE | 0.4056
20 | XGB | 0.9607 | FALSE | FALSE | FALSE | 0.1725
20 | MLP | 3.1355 | TRUE | TRUE | TRUE | 0.0020
20 | KNN | 0.7222 | FALSE | FALSE | FALSE | 0.2381
20 | SVM | 0.5827 | FALSE | FALSE | FALSE | 0.2824
21 | RF | 0.2993 | FALSE | FALSE | FALSE | 0.3835
21 | XGB | 0.8820 | FALSE | FALSE | FALSE | 0.1927
21 | MLP | 1.7734 | TRUE | TRUE | FALSE | 0.0436
21 | KNN | 0.7003 | FALSE | FALSE | FALSE | 0.2448
21 | SVM | 0.7997 | FALSE | FALSE | FALSE | 0.2153
22 | RF | 0.4384 | FALSE | FALSE | FALSE | 0.3324
22 | XGB | 1.1200 | FALSE | FALSE | FALSE | 0.1369
22 | MLP | 1.8023 | TRUE | TRUE | FALSE | 0.0416
22 | KNN | 0.9221 | FALSE | FALSE | FALSE | 0.1827
22 | SVM | 0.6067 | FALSE | FALSE | FALSE | 0.2746
23 | RF | 0.1662 | FALSE | FALSE | FALSE | 0.4346
23 | XGB | 0.9731 | FALSE | FALSE | FALSE | 0.1698
23 | MLP | 1.9253 | TRUE | TRUE | FALSE | 0.0322
23 | KNN | 0.2463 | FALSE | FALSE | FALSE | 0.4036
23 | SVM | 0.4778 | FALSE | FALSE | FALSE | 0.3183
24 | RF | 0.0706 | FALSE | FALSE | FALSE | 0.4721
24 | XGB | 1.0778 | FALSE | FALSE | FALSE | 0.1452
24 | MLP | 1.7822 | TRUE | TRUE | FALSE | 0.0429
24 | KNN | 0.4216 | FALSE | FALSE | FALSE | 0.3383
24 | SVM | 1.0187 | FALSE | FALSE | FALSE | 0.1590
25 | RF | 0.5037 | FALSE | FALSE | FALSE | 0.3092
25 | XGB | 1.6200 | TRUE | FALSE | FALSE | 0.0584
25 | MLP | 2.3904 | TRUE | TRUE | FALSE | 0.0119
25 | KNN | 1.8088 | TRUE | TRUE | FALSE | 0.0411
25 | SVM | 1.8900 | TRUE | TRUE | FALSE | 0.0392
26 | RF | 0.2712 | FALSE | FALSE | FALSE | 0.3941
26 | XGB | 1.2876 | FALSE | FALSE | FALSE | 0.1045
26 | MLP | 3.6410 | TRUE | TRUE | TRUE | 0.0006
26 | KNN | 1.0303 | FALSE | FALSE | FALSE | 0.1560
26 | SVM | 1.5893 | TRUE | FALSE | FALSE | 0.0648
27 | RF | 0.6036 | FALSE | FALSE | FALSE | 0.2757
27 | XGB | 1.7036 | TRUE | FALSE | FALSE | 0.0504
27 | MLP | 3.2180 | TRUE | TRUE | TRUE | 0.0017
27 | KNN | 1.0815 | FALSE | FALSE | FALSE | 0.1448
27 | SVM | 1.7218 | TRUE | FALSE | FALSE | 0.0538
28 | RF | −0.0829 | FALSE | FALSE | FALSE | 0.5327
28 | XGB | 0.6236 | FALSE | FALSE | FALSE | 0.2694
28 | MLP | 3.3402 | TRUE | TRUE | TRUE | 0.0014
28 | KNN | 0.3431 | FALSE | FALSE | FALSE | 0.3672
28 | SVM | 0.5088 | FALSE | FALSE | FALSE | 0.3078
29 | RF | 0.4124 | FALSE | FALSE | FALSE | 0.3417
29 | XGB | 1.5142 | TRUE | FALSE | FALSE | 0.0715
29 | MLP | 2.0031 | TRUE | TRUE | FALSE | 0.0279
29 | KNN | 0.6579 | FALSE | FALSE | FALSE | 0.2583
29 | SVM | 1.5083 | TRUE | FALSE | FALSE | 0.0731
30 | RF | 0.4572 | FALSE | FALSE | FALSE | 0.3256
30 | XGB | 1.5552 | TRUE | FALSE | FALSE | 0.0658
30 | MLP | 3.5428 | TRUE | TRUE | TRUE | 0.0007
30 | KNN | 0.7514 | FALSE | FALSE | FALSE | 0.2294
30 | SVM | 3.0135 | TRUE | TRUE | TRUE | 0.0034
31 | RF | 0.4838 | FALSE | FALSE | FALSE | 0.3162
31 | XGB | 1.5898 | TRUE | FALSE | FALSE | 0.0622
31 | MLP | 4.8798 | TRUE | TRUE | TRUE | 0.0000
31 | KNN | 1.3608 | TRUE | FALSE | FALSE | 0.0925
31 | SVM | 1.3149 | FALSE | FALSE | FALSE | 0.1000
32 | RF | 0.1974 | FALSE | FALSE | FALSE | 0.4225
32 | XGB | 0.4386 | FALSE | FALSE | FALSE | 0.3323
32 | MLP | 2.6620 | TRUE | TRUE | TRUE | 0.0067
32 | KNN | 0.1054 | FALSE | FALSE | FALSE | 0.4584
32 | SVM | 0.6385 | FALSE | FALSE | FALSE | 0.2644
33 | RF | 0.7412 | FALSE | FALSE | FALSE | 0.2324
33 | XGB | 1.8787 | TRUE | TRUE | FALSE | 0.0354
33 | MLP | 2.9828 | TRUE | TRUE | TRUE | 0.0029
33 | KNN | 1.5789 | TRUE | FALSE | FALSE | 0.0628
33 | SVM | 3.0404 | TRUE | TRUE | TRUE | 0.0034
34 | RF | 0.3723 | FALSE | FALSE | FALSE | 0.3564
34 | XGB | 1.5592 | TRUE | FALSE | FALSE | 0.0655
34 | MLP | 3.1324 | TRUE | TRUE | TRUE | 0.0022
34 | KNN | 1.7891 | TRUE | TRUE | FALSE | 0.0426
34 | SVM | 2.4275 | TRUE | TRUE | FALSE | 0.0133
35 | RF | 0.4152 | FALSE | FALSE | FALSE | 0.3406
35 | XGB | 1.2023 | FALSE | FALSE | FALSE | 0.1198
35 | MLP | 3.2584 | TRUE | TRUE | TRUE | 0.0015
35 | KNN | 0.7474 | FALSE | FALSE | FALSE | 0.2306
35 | SVM | 1.0773 | FALSE | FALSE | FALSE | 0.1485
36 | RF | 0.5028 | FALSE | FALSE | FALSE | 0.3097
36 | XGB | 1.1551 | FALSE | FALSE | FALSE | 0.1293
36 | MLP | 1.4326 | TRUE | FALSE | FALSE | 0.0820
36 | KNN | 0.7309 | FALSE | FALSE | FALSE | 0.2357
36 | SVM | 0.4845 | FALSE | FALSE | FALSE | 0.3160
37 | RF | 0.4071 | FALSE | FALSE | FALSE | 0.3435
37 | XGB | 1.0908 | FALSE | FALSE | FALSE | 0.1424
37 | MLP | 2.1546 | TRUE | TRUE | FALSE | 0.0200
37 | KNN | 0.6142 | FALSE | FALSE | FALSE | 0.2721
37 | SVM | 1.7004 | TRUE | FALSE | FALSE | 0.0516
38 | RF | 0.3273 | FALSE | FALSE | FALSE | 0.3729
38 | XGB | 1.1536 | FALSE | FALSE | FALSE | 0.1292
38 | MLP | 1.2453 | FALSE | FALSE | FALSE | 0.1117
38 | KNN | 0.7595 | FALSE | FALSE | FALSE | 0.2270
38 | SVM | 0.3832 | FALSE | FALSE | FALSE | 0.3522
39 | RF | 0.3119 | FALSE | FALSE | FALSE | 0.3788
39 | XGB | 1.3974 | TRUE | FALSE | FALSE | 0.0871
39 | MLP | 2.6063 | TRUE | TRUE | TRUE | 0.0076
39 | KNN | 0.7980 | FALSE | FALSE | FALSE | 0.2160
39 | SVM | 1.8952 | TRUE | TRUE | FALSE | 0.0351
40 | RF | 0.4552 | FALSE | FALSE | FALSE | 0.3264
40 | XGB | 1.3542 | TRUE | FALSE | FALSE | 0.0937
40 | MLP | 2.2107 | TRUE | TRUE | FALSE | 0.0181
40 | KNN | 0.9156 | FALSE | FALSE | FALSE | 0.1842
40 | SVM | 1.0985 | FALSE | FALSE | FALSE | 0.1422
41 | RF | 0.2933 | FALSE | FALSE | FALSE | 0.3858
41 | XGB | 1.0116 | FALSE | FALSE | FALSE | 0.1606
41 | MLP | 2.0640 | TRUE | TRUE | FALSE | 0.0246
41 | KNN | 0.7109 | FALSE | FALSE | FALSE | 0.2418
41 | SVM | 1.0217 | FALSE | FALSE | FALSE | 0.1588
Table A5. Independent T-Test Results (GCN Model, MAE).
Test ID | Comparison Model | T-Statistic | Significance (10%) | Significance (5%) | Significance (1%) | p-Value
1 | RF | 0.3980 | FALSE | FALSE | FALSE | 0.3469
1 | XGB | 1.1072 | FALSE | FALSE | FALSE | 0.1390
1 | MLP | 1.7377 | TRUE | TRUE | FALSE | 0.0466
1 | KNN | 1.0710 | FALSE | FALSE | FALSE | 0.1468
1 | SVM | 1.6013 | TRUE | FALSE | FALSE | 0.0603
2 | RF | 0.1094 | FALSE | FALSE | FALSE | 0.4568
2 | XGB | 0.5343 | FALSE | FALSE | FALSE | 0.2989
2 | MLP | 1.8668 | TRUE | TRUE | FALSE | 0.0366
2 | KNN | 0.4843 | FALSE | FALSE | FALSE | 0.3161
2 | SVM | 0.1324 | FALSE | FALSE | FALSE | 0.4478
3 | RF | 0.0280 | FALSE | FALSE | FALSE | 0.4889
3 | XGB | 0.6112 | FALSE | FALSE | FALSE | 0.2730
3 | MLP | 1.7175 | TRUE | TRUE | FALSE | 0.0485
3 | KNN | 0.4360 | FALSE | FALSE | FALSE | 0.3331
3 | SVM | 0.8713 | FALSE | FALSE | FALSE | 0.1955
4 | RF | −0.1259 | FALSE | FALSE | FALSE | 0.5497
4 | XGB | 0.3612 | FALSE | FALSE | FALSE | 0.3603
4 | MLP | 2.9798 | TRUE | TRUE | TRUE | 0.0030
4 | KNN | 0.1746 | FALSE | FALSE | FALSE | 0.4313
4 | SVM | 0.2063 | FALSE | FALSE | FALSE | 0.4190
5 | RF | 0.3545 | FALSE | FALSE | FALSE | 0.3628
5 | XGB | 1.0379 | FALSE | FALSE | FALSE | 0.1544
5 | MLP | 2.3949 | TRUE | TRUE | FALSE | 0.0118
5 | KNN | 0.7891 | FALSE | FALSE | FALSE | 0.2185
5 | SVM | 1.3555 | TRUE | FALSE | FALSE | 0.0947
6 | RF | −0.0450 | FALSE | FALSE | FALSE | 0.5178
6 | XGB | 0.6609 | FALSE | FALSE | FALSE | 0.2570
6 | MLP | 3.9765 | TRUE | TRUE | TRUE | 0.0002
6 | KNN | 0.4605 | FALSE | FALSE | FALSE | 0.3243
6 | SVM | 0.5841 | FALSE | FALSE | FALSE | 0.2820
7 | RF | 0.0234 | FALSE | FALSE | FALSE | 0.4907
7 | XGB | 0.8368 | FALSE | FALSE | FALSE | 0.2049
7 | MLP | 3.7870 | TRUE | TRUE | TRUE | 0.0004
7 | KNN | 0.1312 | FALSE | FALSE | FALSE | 0.4483
7 | SVM | 2.1078 | TRUE | TRUE | FALSE | 0.0223
8 | RF | 0.5492 | FALSE | FALSE | FALSE | 0.2936
8 | XGB | 1.5199 | TRUE | FALSE | FALSE | 0.0699
8 | MLP | 1.3841 | TRUE | FALSE | FALSE | 0.0888
8 | KNN | 1.1070 | FALSE | FALSE | FALSE | 0.1389
8 | SVM | 0.5477 | FALSE | FALSE | FALSE | 0.2942
9 | RF | −0.4093 | FALSE | FALSE | FALSE | 0.6573
9 | XGB | 0.2387 | FALSE | FALSE | FALSE | 0.4065
9 | MLP | 1.9205 | TRUE | TRUE | FALSE | 0.0325
9 | KNN | −0.0498 | FALSE | FALSE | FALSE | 0.5197
9 | SVM | 1.2228 | FALSE | FALSE | FALSE | 0.1161
10 | RF | 1.8368 | TRUE | TRUE | FALSE | 0.0386
10 | XGB | 3.5045 | TRUE | TRUE | TRUE | 0.0009
10 | MLP | 2.6770 | TRUE | TRUE | TRUE | 0.0062
10 | KNN | 1.1758 | FALSE | FALSE | FALSE | 0.1252
10 | SVM | 2.5389 | TRUE | TRUE | TRUE | 0.0092
11 | RF | 0.4217 | FALSE | FALSE | FALSE | 0.3382
11 | XGB | 2.9138 | TRUE | TRUE | TRUE | 0.0035
11 | MLP | 3.1249 | TRUE | TRUE | TRUE | 0.0021
11 | KNN | 3.9967 | TRUE | TRUE | TRUE | 0.0002
11 | SVM | 1.6132 | TRUE | FALSE | FALSE | 0.0590
12 | RF | 0.2420 | FALSE | FALSE | FALSE | 0.4053
12 | XGB | 1.2820 | FALSE | FALSE | FALSE | 0.1052
12 | MLP | 4.1540 | TRUE | TRUE | TRUE | 0.0001
12 | KNN | 0.8184 | FALSE | FALSE | FALSE | 0.2100
12 | SVM | 0.6029 | FALSE | FALSE | FALSE | 0.2757
13 | RF | 0.4835 | FALSE | FALSE | FALSE | 0.3162
13 | XGB | 1.7333 | TRUE | TRUE | FALSE | 0.0471
13 | MLP | 2.0753 | TRUE | TRUE | FALSE | 0.0236
13 | KNN | 1.1984 | FALSE | FALSE | FALSE | 0.1204
14 | RF | 0.3516 | FALSE | FALSE | FALSE | 0.3639
14 | XGB | 1.5903 | TRUE | FALSE | FALSE | 0.0616
14 | MLP | 4.9959 | TRUE | TRUE | TRUE | 0.0000
14 | KNN | 1.0846 | FALSE | FALSE | FALSE | 0.1438
14 | SVM | 0.3872 | FALSE | FALSE | FALSE | 0.3508
15 | RF | −0.1481 | FALSE | FALSE | FALSE | 0.5583
15 | XGB | 0.4686 | FALSE | FALSE | FALSE | 0.3215
15 | MLP | 0.8676 | FALSE | FALSE | FALSE | 0.1965
15 | KNN | −0.0923 | FALSE | FALSE | FALSE | 0.5365
15 | SVM | 0.9913 | FALSE | FALSE | FALSE | 0.1652
16 | RF | 0.9721 | FALSE | FALSE | FALSE | 0.1698
16 | XGB | 2.1808 | TRUE | TRUE | FALSE | 0.0190
16 | MLP | 2.4477 | TRUE | TRUE | FALSE | 0.0105
16 | KNN | 2.4316 | TRUE | TRUE | FALSE | 0.0111
16 | SVM | 2.2523 | TRUE | TRUE | FALSE | 0.0165
17 | RF | 0.3439 | FALSE | FALSE | FALSE | 0.3667
17 | XGB | 1.0235 | FALSE | FALSE | FALSE | 0.1575
17 | MLP | 1.6042 | TRUE | FALSE | FALSE | 0.0600
17 | KNN | 0.8147 | FALSE | FALSE | FALSE | 0.2111
17 | SVM | 0.7599 | FALSE | FALSE | FALSE | 0.2269
18 | RF | 0.1118 | FALSE | FALSE | FALSE | 0.4559
18 | XGB | 0.6832 | FALSE | FALSE | FALSE | 0.2501
18 | MLP | 2.3348 | TRUE | TRUE | FALSE | 0.0136
18 | KNN | 0.6118 | FALSE | FALSE | FALSE | 0.2728
18 | SVM | 0.5666 | FALSE | FALSE | FALSE | 0.2878
19 | RF | 0.3423 | FALSE | FALSE | FALSE | 0.3673
19 | XGB | 1.1651 | FALSE | FALSE | FALSE | 0.1270
19 | MLP | 1.6846 | TRUE | FALSE | FALSE | 0.0516
19 | KNN | 0.7729 | FALSE | FALSE | FALSE | 0.2230
19 | SVM | 1.7629 | TRUE | TRUE | FALSE | 0.0468
20 | RF | 0.2389 | FALSE | FALSE | FALSE | 0.4065
20 | XGB | 0.9590 | FALSE | FALSE | FALSE | 0.1730
20 | MLP | 3.1346 | TRUE | TRUE | TRUE | 0.0020
20 | KNN | 0.7204 | FALSE | FALSE | FALSE | 0.2387
20 | SVM | 0.5807 | FALSE | FALSE | FALSE | 0.2831
21 | RF | 0.2943 | FALSE | FALSE | FALSE | 0.3853
21 | XGB | 0.8770 | FALSE | FALSE | FALSE | 0.1940
21 | MLP | 1.7677 | TRUE | TRUE | FALSE | 0.0441
21 | KNN | 0.6954 | FALSE | FALSE | FALSE | 0.2463
21 | SVM | 0.7945 | FALSE | FALSE | FALSE | 0.2168
22 | RF | 0.4350 | FALSE | FALSE | FALSE | 0.3336
22 | XGB | 1.1172 | FALSE | FALSE | FALSE | 0.1375
22 | MLP | 1.7992 | TRUE | TRUE | FALSE | 0.0418
22 | KNN | 0.9190 | FALSE | FALSE | FALSE | 0.1835
22 | SVM | 0.6032 | FALSE | FALSE | FALSE | 0.2758
23 | RF | 0.1617 | FALSE | FALSE | FALSE | 0.4363
23 | XGB | 0.9690 | FALSE | FALSE | FALSE | 0.1708
23 | MLP | 1.9202 | TRUE | TRUE | FALSE | 0.0326
23 | KNN | 0.2416 | FALSE | FALSE | FALSE | 0.4054
23 | SVM | 0.4732 | FALSE | FALSE | FALSE | 0.3199
24 | RF | 0.0623 | FALSE | FALSE | FALSE | 0.4754
24 | XGB | 1.0695 | FALSE | FALSE | FALSE | 0.1470
24 | MLP | 1.7734 | TRUE | TRUE | FALSE | 0.0436
24 | KNN | 0.4135 | FALSE | FALSE | FALSE | 0.3412
24 | SVM | 1.0119 | FALSE | FALSE | FALSE | 0.1606
25 | RF | 0.4827 | FALSE | FALSE | FALSE | 0.3165
25 | XGB | 1.6017 | TRUE | FALSE | FALSE | 0.0604
25 | MLP | 2.3722 | TRUE | TRUE | FALSE | 0.0124
25 | KNN | 1.7920 | TRUE | TRUE | FALSE | 0.0424
25 | SVM | 1.8846 | TRUE | TRUE | FALSE | 0.0396
26 | RF | 0.2646 | FALSE | FALSE | FALSE | 0.3966
26 | XGB | 1.2812 | FALSE | FALSE | FALSE | 0.1056
26 | MLP | 3.6320 | TRUE | TRUE | TRUE | 0.0006
26 | KNN | 1.0238 | FALSE | FALSE | FALSE | 0.1575
26 | SVM | 1.5858 | TRUE | FALSE | FALSE | 0.0652
27 | RF | 0.5854 | FALSE | FALSE | FALSE | 0.2817
27 | XGB | 1.6873 | TRUE | FALSE | FALSE | 0.0520
27 | MLP | 3.2009 | TRUE | TRUE | TRUE | 0.0018
27 | KNN | 1.0642 | FALSE | FALSE | FALSE | 0.1486
27 | SVM | 1.7173 | TRUE | FALSE | FALSE | 0.0542
28 | RF | −0.0933 | FALSE | FALSE | FALSE | 0.5368
28 | XGB | 0.6140 | FALSE | FALSE | FALSE | 0.2725
28 | MLP | 3.3345 | TRUE | TRUE | TRUE | 0.0014
28 | KNN | 0.3334 | FALSE | FALSE | FALSE | 0.3709
28 | SVM | 0.4991 | FALSE | FALSE | FALSE | 0.3111
29 | RF | 0.4068 | FALSE | FALSE | FALSE | 0.3438
29 | XGB | 1.5099 | TRUE | FALSE | FALSE | 0.0720
29 | MLP | 1.9986 | TRUE | TRUE | FALSE | 0.0281
29 | KNN | 0.6527 | FALSE | FALSE | FALSE | 0.2599
29 | SVM | 1.5045 | TRUE | FALSE | FALSE | 0.0736
30 | RF | 0.4580 | FALSE | FALSE | FALSE | 0.3253
30 | XGB | 1.5563 | TRUE | FALSE | FALSE | 0.0657
30 | MLP | 3.5448 | TRUE | TRUE | TRUE | 0.0007
30 | KNN | 0.7523 | FALSE | FALSE | FALSE | 0.2292
30 | SVM | 3.0143 | TRUE | TRUE | TRUE | 0.0034
31 | RF | 0.4702 | FALSE | FALSE | FALSE | 0.3210
31 | XGB | 1.5783 | TRUE | FALSE | FALSE | 0.0635
31 | MLP | 4.8684 | TRUE | TRUE | TRUE | 0.0000
31 | KNN | 1.3482 | TRUE | FALSE | FALSE | 0.0945
31 | SVM | 1.3028 | FALSE | FALSE | FALSE | 0.1021
32 | RF | 0.1929 | FALSE | FALSE | FALSE | 0.4243
32 | XGB | 0.4339 | FALSE | FALSE | FALSE | 0.3340
32 | MLP | 2.6571 | TRUE | TRUE | TRUE | 0.0068
32 | KNN | 0.1008 | FALSE | FALSE | FALSE | 0.4602
32 | SVM | 0.6340 | FALSE | FALSE | FALSE | 0.2658
33 | RF | 0.7284 | FALSE | FALSE | FALSE | 0.2362
33 | XGB | 1.8650 | TRUE | TRUE | FALSE | 0.0364
33 | MLP | 2.9664 | TRUE | TRUE | TRUE | 0.0031
33 | KNN | 1.5649 | TRUE | FALSE | FALSE | 0.0644
33 | SVM | 3.0328 | TRUE | TRUE | TRUE | 0.0035
34 | RF | 0.3639 | FALSE | FALSE | FALSE | 0.3594
34 | XGB | 1.5508 | TRUE | FALSE | FALSE | 0.0665
34 | MLP | 3.1234 | TRUE | TRUE | TRUE | 0.0022
34 | KNN | 1.7805 | TRUE | TRUE | FALSE | 0.0434
34 | SVM | 2.4231 | TRUE | TRUE | FALSE | 0.0134
35 | RF | 0.4084 | FALSE | FALSE | FALSE | 0.3430
35 | XGB | 1.1954 | FALSE | FALSE | FALSE | 0.1211
35 | MLP | 3.2488 | TRUE | TRUE | TRUE | 0.0015
35 | KNN | 0.7404 | FALSE | FALSE | FALSE | 0.2326
35 | SVM | 1.0745 | FALSE | FALSE | FALSE | 0.1491
36 | RF | 0.5011 | FALSE | FALSE | FALSE | 0.3103
36 | XGB | 1.1533 | FALSE | FALSE | FALSE | 0.1297
36 | MLP | 1.4305 | TRUE | FALSE | FALSE | 0.0823
36 | KNN | 0.7291 | FALSE | FALSE | FALSE | 0.2362
36 | SVM | 0.4827 | FALSE | FALSE | FALSE | 0.3167
37 | RF | 0.3988 | FALSE | FALSE | FALSE | 0.3465
37 | XGB | 1.0831 | FALSE | FALSE | FALSE | 0.1440
37 | MLP | 2.1470 | TRUE | TRUE | FALSE | 0.0203
37 | KNN | 0.6064 | FALSE | FALSE | FALSE | 0.2746
37 | SVM | 1.6950 | TRUE | FALSE | FALSE | 0.0522
38 | RF | 0.3220 | FALSE | FALSE | FALSE | 0.3749
38 | XGB | 1.1478 | FALSE | FALSE | FALSE | 0.1304
38 | MLP | 1.2389 | FALSE | FALSE | FALSE | 0.1129
38 | KNN | 0.7539 | FALSE | FALSE | FALSE | 0.2286
38 | SVM | 0.3777 | FALSE | FALSE | FALSE | 0.3543
39 | RF | 0.3052 | FALSE | FALSE | FALSE | 0.3813
39 | XGB | 1.3919 | TRUE | FALSE | FALSE | 0.0879
39 | MLP | 2.6016 | TRUE | TRUE | TRUE | 0.0077
39 | KNN | 0.7921 | FALSE | FALSE | FALSE | 0.2177
39 | SVM | 1.8907 | TRUE | TRUE | FALSE | 0.0354
40 | RF | 0.4448 | FALSE | FALSE | FALSE | 0.3301
40 | XGB | 1.3439 | TRUE | FALSE | FALSE | 0.0953
40 | MLP | 2.1999 | TRUE | TRUE | FALSE | 0.0185
40 | KNN | 0.9055 | FALSE | FALSE | FALSE | 0.1868
40 | SVM | 1.0910 | FALSE | FALSE | FALSE | 0.1438
41 | RF | 0.2920 | FALSE | FALSE | FALSE | 0.3863
41 | XGB | 1.0110 | FALSE | FALSE | FALSE | 0.1607
41 | MLP | 2.0646 | TRUE | TRUE | FALSE | 0.0246
41 | KNN | 0.7100 | FALSE | FALSE | FALSE | 0.2421
41 | SVM | 1.0211 | FALSE | FALSE | FALSE | 0.1590
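The significance flags in Tables A2–A5 follow from one-sided independent t-tests on the per-index errors of each window: a positive t-statistic indicates that the benchmark's average error exceeds the GNN's, and a flag is TRUE when the one-sided p-value falls below the 10%, 5%, or 1% threshold. A minimal sketch of this computation with SciPy, assuming error vectors as inputs (the helper name and error layout are illustrative; the study's exact test configuration may differ):

```python
import numpy as np
from scipy import stats

def significance_row(benchmark_errors, gnn_errors, alphas=(0.10, 0.05, 0.01)):
    """Build one row of Tables A2-A5 from two samples of per-index errors.

    A positive t-statistic favors the GNN (benchmark errors are larger on
    average); the one-sided p-value is derived from SciPy's two-sided result.
    """
    t_stat, p_two = stats.ttest_ind(benchmark_errors, gnn_errors)
    p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2
    flags = [p_one < a for a in alphas]  # significance at 10%, 5%, 1%
    return t_stat, flags, p_one
```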

Appendix B

Table A6. Train Periods and Test Periods.
Test ID | Train Set Periods | Test Set Periods
1 | 2004.01.–2004.06. | 2004.07.–2004.12.
2 | 2004.07.–2004.12. | 2005.01.–2005.06.
3 | 2005.01.–2005.06. | 2005.07.–2005.12.
4 | 2005.07.–2005.12. | 2006.01.–2006.06.
5 | 2006.01.–2006.06. | 2006.07.–2006.12.
6 | 2006.07.–2006.12. | 2007.01.–2007.06.
7 | 2007.01.–2007.06. | 2007.07.–2007.12.
8 | 2007.07.–2007.12. | 2008.01.–2008.06.
9 | 2008.01.–2008.06. | 2008.07.–2008.12.
10 | 2008.07.–2008.12. | 2009.01.–2009.06.
11 | 2009.01.–2009.06. | 2009.07.–2009.12.
12 | 2009.07.–2009.12. | 2010.01.–2010.06.
13 | 2010.01.–2010.06. | 2010.07.–2010.12.
14 | 2010.07.–2010.12. | 2011.01.–2011.06.
15 | 2011.01.–2011.06. | 2011.07.–2011.12.
16 | 2011.07.–2011.12. | 2012.01.–2012.06.
17 | 2012.01.–2012.06. | 2012.07.–2012.12.
18 | 2012.07.–2012.12. | 2013.01.–2013.06.
19 | 2013.01.–2013.06. | 2013.07.–2013.12.
20 | 2013.07.–2013.12. | 2014.01.–2014.06.
21 | 2014.01.–2014.06. | 2014.07.–2014.12.
22 | 2014.07.–2014.12. | 2015.01.–2015.06.
23 | 2015.01.–2015.06. | 2015.07.–2015.12.
24 | 2015.07.–2015.12. | 2016.01.–2016.06.
25 | 2016.01.–2016.06. | 2016.07.–2016.12.
26 | 2016.07.–2016.12. | 2017.01.–2017.06.
27 | 2017.01.–2017.06. | 2017.07.–2017.12.
28 | 2017.07.–2017.12. | 2018.01.–2018.06.
29 | 2018.01.–2018.06. | 2018.07.–2018.12.
30 | 2018.07.–2018.12. | 2019.01.–2019.06.
31 | 2019.01.–2019.06. | 2019.07.–2019.12.
32 | 2019.07.–2019.12. | 2020.01.–2020.06.
33 | 2020.01.–2020.06. | 2020.07.–2020.12.
34 | 2020.07.–2020.12. | 2021.01.–2021.06.
35 | 2021.01.–2021.06. | 2021.07.–2021.12.
36 | 2021.07.–2021.12. | 2022.01.–2022.06.
37 | 2022.01.–2022.06. | 2022.07.–2022.12.
38 | 2022.07.–2022.12. | 2023.01.–2023.06.
39 | 2023.01.–2023.06. | 2023.07.–2023.12.
40 | 2023.07.–2023.12. | 2024.01.–2024.06.
41 | 2024.01.–2024.06. | 2024.07.–2024.12.
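Table A6 implements a non-overlapping walk-forward scheme: each model trains on one calendar half-year and is evaluated on the following half-year. A sketch that enumerates these 41 splits with pandas (the date boundaries and helper name are illustrative assumptions):

```python
import pandas as pd

def half_year_splits(first="2004-01-01", last="2025-01-01"):
    """Enumerate the walk-forward splits of Table A6: each six-month block
    is a training period and the next six-month block is the test period."""
    edges = pd.date_range(first, last, freq="6MS")  # 2004-01, 2004-07, ...
    splits = []
    for i in range(len(edges) - 2):  # yields 41 train/test pairs
        train = (edges[i], edges[i + 1] - pd.Timedelta(days=1))
        test = (edges[i + 1], edges[i + 2] - pd.Timedelta(days=1))
        splits.append({"test_id": i + 1, "train": train, "test": test})
    return splits
```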
Table A7. Hyperparameter Settings.
Model | Parameter | Value
RF | Number of Estimators | 100
RF | Maximum Depth | None
RF | Minimum Samples Split | 5
RF | Minimum Samples Leaf | 3
RF | Random Seed | Random integer (0 to 10,000)
XGB | Number of Estimators | 100
XGB | Maximum Depth | 2000
XGB | Minimum Samples Split | ReLU
XGB | Minimum Samples Leaf | 0.001
XGB | Random Seed | TRUE
MLP | Hidden Layer Size | Random integer (3 to 8)
MLP | Maximum Iterations | 30
MLP | Activation Function | None
MLP | Learning Rate Initialization | Radial Basis Function (RBF)
MLP | Early Stopping | Random value (0.1 to 10)
MLP | Random Seed | 0.1
KNN | Number of Neighbors | Integer
KNN | Leaf Size | Integer
KNN | Number of Jobs | TRUE
SVM | Kernel | TRUE
SVM | Regularization Parameter (C) | 64
SVM | Epsilon | 64
GCN | Input Channels | 32
GCN | Output Channels | 0.01
GCN | Normalization | 4
GCN | Bias | 100
GCN | Hidden Dimension 1 | None
GCN | Hidden Dimension 2 | 5
GCN | Embedding Dimension | 3
GCN | Learning Rate | Random integer (0 to 10,000)
GAT | Input Channels | 100
GAT | Output Channels | 2000
GAT | Normalization | ReLU
GAT | Bias | 0.001
GAT | Number of Heads | TRUE
GAT | Hidden Dimension 1 | Random integer (3 to 8)
GAT | Hidden Dimension 2 | 30
GAT | Embedding Dimension | None
GAT | Learning Rate | Radial Basis Function (RBF)
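For reference, the following is a minimal sketch of how a two-layer GCN regressor of the kind configured above can be defined in PyTorch Geometric; the channel sizes and activation are illustrative assumptions rather than the tuned values of Table A7:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class ReturnGCN(torch.nn.Module):
    """Two-layer GCN regressor producing one return prediction per node
    (market index). Dimensions here are assumptions for illustration."""
    def __init__(self, in_channels, hidden=32, out_channels=1):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden)
        self.conv2 = GCNConv(hidden, out_channels)

    def forward(self, x, edge_index, edge_weight=None):
        # Message passing over the (symmetric) market graph, then readout.
        x = F.relu(self.conv1(x, edge_index, edge_weight))
        return self.conv2(x, edge_index, edge_weight)
```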

Appendix C

Figure A1. Training Graph Structures Based on Pearson Correlation Coefficients. (a)–(oo): Graph Plots for Test IDs 1–41, respectively.
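The graph structures in Figure A1 are derived from pairwise Pearson correlations between index returns, with each retained edge added in both directions so the adjacency remains symmetric. A sketch of this construction (the 0.5 cutoff is an assumed illustration; the study's edge criterion may differ):

```python
import numpy as np

def correlation_graph(returns, threshold=0.5):
    """Build an undirected graph from pairwise Pearson correlations.

    returns: array of shape (n_days, n_indices) of index returns.
    Returns edge_index of shape (2, n_edges) and matching edge weights;
    each undirected link is stored as two directed edges, keeping the
    adjacency symmetric.
    """
    corr = np.corrcoef(returns.T)                       # (n_idx, n_idx)
    src, dst = np.where(np.triu(corr > threshold, k=1))  # upper triangle
    weights = corr[src, dst]
    edge_index = np.vstack([np.concatenate([src, dst]),
                            np.concatenate([dst, src])])
    edge_weight = np.concatenate([weights, weights])
    return edge_index, edge_weight
```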

Appendix D

Figure A2. RMSE Bar Chart with Error Bars.
Figure A3. MAE Bar Chart with Error Bars. Note: The models related to the graphs are represented in red hues, while the benchmarks are represented in blue hues. The error bars indicate a length of 1 sigma, and the dot in the middle of the error bar represents the mean. A smaller mean (further to the left) indicates better performance, while smaller error bars signify greater robustness.
Figure A4. p-Value Bar Chart for RMSE Using GAT. Note: p-value bars are used, and smaller p-values (from an independent t-test) indicate more significant differences between the corresponding model and the benchmark. The color scheme is the same as in Figure A2 and Figure A3. Vertical lines at p-values of 0.1, 0.05, and 0.01, representing significance thresholds, are marked in green, yellow, and red, respectively.
Figure A5. p-Value Bar Chart for MAE Using GAT.
Figure A6. p-Value Bar Chart for RMSE Using GCN.
Figure A7. p-Value Bar Chart for MAE Using GCN.

References

1. Box, G.E.P.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015.
2. Sims, C.A. Macroeconomics and reality. Econometrica 1980, 48, 1–48.
3. Bollerslev, T. Generalized autoregressive conditional heteroskedasticity. J. Econom. 1986, 31, 307–327.
4. Drucker, H.; Burges, C.J.; Kaufman, L.; Smola, A.; Vapnik, V. Support vector regression machines. Adv. Neural Inf. Process. Syst. 1996, 9, 155–161.
5. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
6. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
7. Choi, I.; Kim, W.C. A multifaceted graph-wise network analysis of sector-based financial instruments' price-based discrepancies with diverse statistical interdependencies. N. Am. J. Econ. Financ. 2025, 75, 102316.
8. Ozbayoglu, A.M.; Gudelek, M.U.; Sezer, O.B. Deep learning for financial applications: A survey. Appl. Soft Comput. 2020, 93, 106384.
9. Foroutan, P.; Lahmiri, S. Deep learning systems for forecasting the prices of crude oil and precious metals. Financ. Innov. 2024, 10, 111.
10. Zhang, Y.-J.; Zhang, H.; Gupta, R. A new hybrid method with data-characteristic-driven analysis for artificial intelligence and robotics index return forecasting. Financ. Innov. 2023, 9, 75.
11. Battaglia, P.W.; Hamrick, J.B.; Bapst, V.; Sanchez-Gonzalez, A.; Zambaldi, V.; Malinowski, M.; Tacchetti, A.; Raposo, D.; Santoro, A.; Faulkner, R.; et al. Relational inductive biases, deep learning, and graph networks. arXiv 2018, arXiv:1806.01261.
12. Gilmer, J.; Schoenholz, S.S.; Riley, P.F.; Vinyals, O.; Dahl, G.E. Neural message passing for quantum chemistry. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1263–1272.
13. Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), Long Beach, CA, USA, 4–9 December 2017; pp. 1025–1035.
14. Xiang, S.; Cheng, D.; Shang, C.; Zhang, Y.; Liang, Y. Temporal and Heterogeneous Graph Neural Network for Financial Time Series Prediction. arXiv 2023, arXiv:2305.08740.
15. Choi, I.; Kim, W.C. Estimating historical downside risks of global financial market indices via inflation rate-adjusted dependence graphs. Res. Int. Bus. Financ. 2023, 66, 102077.
16. Chen, Z.; Zheng, L.N.; Lu, C.; Yuan, J.; Zhu, D. ChatGPT Informed Graph Neural Network for Stock Movement Prediction. arXiv 2023, arXiv:2306.03763.
17. Zhou, Y.; Xie, C.; Wang, G.-J.; Gong, J.; Zhu, Y. Forecasting cryptocurrency volatility: A novel framework based on the evolving multiscale graph neural network. Financ. Innov. 2025, 11, 87.
18. Yin, W.; Chen, Z.; Luo, X.; Kirkulak-Uludag, B. Forecasting cryptocurrencies' price with the financial stress index: A graph neural network prediction strategy. Appl. Econ. Lett. 2024, 31, 630–639.
19. Fan, X.; Gong, M.; Wu, Y.; Tang, Z.; Liu, J. CCGIB: A Cross-Channel Graph Information Bottleneck Principle. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 9488–9499.
20. Choi, I.; Koh, W.; Kang, G.; Jang, Y.; Kim, W.C. Encoding Temporal Statistical-Space Priors via Augmented Representation. arXiv 2024, arXiv:2401.16808.
21. Fan, X.; Gong, M.; Wu, Y.; Tang, Z.; Liu, J. Neural Gaussian Similarity Modeling for Differential Graph Structure Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38.
22. Zhang, Y.; Karve, P.M.; Mahadevan, S. Graph neural networks for power grid operational risk assessment under evolving grid topology. arXiv 2024, arXiv:2405.07343.
23. Dong, Y.; Yao, J.; Wang, J.; Liang, Y.; Liao, S.; Xiao, M. Dynamic fraud detection: Integrating reinforcement learning into graph neural networks. In Proceedings of the 2024 6th International Conference on Data-driven Optimization of Complex Systems, Hangzhou, China, 16–18 August 2024; pp. 818–823.
24. Kipf, T.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
25. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph Attention Networks. In Proceedings of the International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018.
26. Rossi, E.; Chamberlain, B.; Frasca, F.; Eynard, D.; Monti, F.; Bronstein, M. Temporal graph networks for deep learning on dynamic graphs. arXiv 2020, arXiv:2006.10637.
27. Cheng, D.; Yang, F.; Xiang, S.; Liu, J. Financial time series forecasting with multi-modality graph neural network. Pattern Recognit. 2022, 121, 108218.
28. Choi, I.; Kim, W.C. Practical forecasting of risk boundaries for industrial metals and critical minerals via statistical machine learning techniques. Int. Rev. Financ. Anal. 2024, 94, 103252.
29. Das, N.; Sadhukhan, B.; Chatterjee, R.; Chakrabarti, S. Integrating sentiment analysis with graph neural networks for enhanced stock prediction: A comprehensive survey. Decis. Anal. J. 2024, 10, 100417.
30. Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
31. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386–408.
32. Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27.
33. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
34. Doostmohammadian, M.; Qureshi, M.I.; Khalesi, M.H.; Rabiee, H.R.; Khan, U.A. Log-scale quantization in distributed first-order methods: Gradient-based learning from distributed data. IEEE Trans. Autom. Sci. Eng. 2025, 22, 10948–10959.
35. Lee, M.; Choi, I.; Kim, W.C. Predicting mobile payment behavior through explainable machine learning and application usage analysis. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 117.
36. Černevičienė, J.; Kabašinskas, A. Explainable artificial intelligence (XAI) in finance: A systematic literature review. Artif. Intell. Rev. 2024, 57, 216.
37. Choi, I.; Kim, W.C. A transparent single financial asset trading framework via reinforcement learning. In Proceedings of the 10th International Conference on E-Business and Applications, Singapore, 24–26 February 2024; Springer Nature: Singapore; pp. 72–79.