USTGCN: A Unified Spatio-Temporal Graph Convolutional Network for Stock-Ranking Prediction
Abstract
1. Introduction
- We design a dual-stream temporal encoder that uses ALSTM to capture short-term volatility and GRU to summarize longer-term structural patterns.
- We introduce a rolling-window correlation smoothing strategy to stabilize the dynamic graph and reduce the effect of transient micro-structure noise.
- We build a shared fusion layer to aggregate dynamic and structural relations within one downstream prediction pipeline, while skip connections retain the original temporal features.
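The rolling-window smoothing idea in the second contribution can be illustrated in a few lines. This is a minimal sketch under assumed shapes (a list of daily adjacency matrices averaged over the last W days); it is not the paper's exact implementation:

```python
import numpy as np

def smooth_adjacency(adj_history, window=20):
    """Average the most recent `window` daily adjacency matrices to
    damp transient micro-structure noise in the dynamic graph."""
    recent = adj_history[-window:]           # list of (N, N) matrices
    return np.mean(recent, axis=0)           # smoothed (N, N) adjacency

# Toy usage: 30 random daily graphs over N = 5 stocks.
rng = np.random.default_rng(0)
history = [rng.random((5, 5)) for _ in range(30)]
A_smooth = smooth_adjacency(history, window=20)
```

Averaging over a window trades responsiveness for stability: a larger `window` suppresses one-day correlation spikes at the cost of reacting more slowly to genuine regime shifts.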
2. Related Work
2.1. Deep Temporal Models for Financial Time-Series
2.2. Spatio-Temporal Graph Neural Networks
2.3. Dynamic Graph Learning and Temporal Fusion
3. Preliminary
3.1. Problem Formulation
3.2. Graph Construction Paradigm
- Dynamic Graph: Designed to encapsulate transient and highly reactive inter-stock correlations. It is dynamically generated via the cosine similarity of short-horizon latent representations extracted from recent temporal patterns.
- Static Graph: Designed to capture relatively stable structural dependencies. It is derived from the cosine similarity of longer-horizon latent representations encoded via a separate recurrent network and is refreshed at each trading day, but typically evolves more smoothly than the dynamic graph.
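Both graph types above reduce to the same primitive: a cosine-similarity matrix over per-stock latent vectors. A minimal NumPy sketch (variable names and the epsilon guard are illustrative assumptions):

```python
import numpy as np

def cosine_adjacency(H, eps=1e-8):
    """Pairwise cosine similarity of latent representations.
    H: (N, d) array with one latent vector per stock.
    Returns an (N, N) similarity matrix with entries in [-1, 1]."""
    norms = np.linalg.norm(H, axis=1, keepdims=True)
    Hn = H / np.maximum(norms, eps)          # row-normalize each vector
    return Hn @ Hn.T                         # dot products of unit vectors

rng = np.random.default_rng(1)
A = cosine_adjacency(rng.standard_normal((4, 8)))
```

Feeding short-horizon latents produces the dynamic graph and longer-horizon latents the static graph; only the input representations differ.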
4. Methodology
4.1. Dual-Stream Temporal Encoding
- Dynamic Features: To capture transient market shocks and short-term sequential patterns, we deploy an Attention-augmented LSTM (ALSTM). The attention mechanism assigns varying importance weights to informative trading days, and attention-based LSTM models have been used in stock forecasting to highlight important time steps [23,39]; this makes the branch well suited to emphasizing abrupt short-horizon fluctuations.
- Static Features: Conversely, to extract robust, longer-horizon structural characteristics that are less sensitive to daily noise, we use a standard Gated Recurrent Unit (GRU). GRU-based recurrent modeling provides a compact gated summary of sequence histories [21] and has also been adopted for relatively long prediction windows in stock trading applications [40]. Taking the final hidden state over the whole lookback window yields a smoother historical summary for each asset.
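The two read-out strategies can be contrasted compactly. This sketch assumes precomputed recurrent hidden states H of shape (T, d); the single-vector attention scoring is a simplification for illustration, not the paper's exact ALSTM parameterization:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Dynamic branch (ALSTM-style): weight each day's hidden state
    by an attention score so that informative days dominate."""
    scores = softmax(H @ w)                  # (T,) importance weights
    return scores @ H                        # (d,) dynamic feature

def last_state(H):
    """Static branch (GRU-style): the final hidden state summarizes
    the whole lookback window smoothly."""
    return H[-1]

rng = np.random.default_rng(2)
H = rng.standard_normal((60, 128))           # T = 60 days, d = 128
dyn = attention_pool(H, rng.standard_normal(128))
sta = last_state(H)
```

The attention-pooled vector can spike toward any single day, while the final hidden state integrates the full window, matching the dynamic/static split of the two graph branches.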
4.2. Robust Graph Construction
4.3. Unified Spatio-Temporal Fusion
4.4. Ranking Prediction
5. Experiments
5.1. Datasets and Experimental Setup
5.2. Baseline Methods
- Traditional ML: MLP, SFM.
- Deep Sequence Models: GRU, LSTM, ALSTM, Transformer, ALSTM+TRA.
- Advanced Graph Models: GATs, HIST, SDGNN, DTSRN.
5.3. Evaluation Metrics
5.3.1. Predictive Metrics
- Information Coefficient (IC): The cross-sectional Pearson correlation coefficient between the predicted scores and the realized returns. A higher IC indicates stronger linear predictive power:
$$\mathrm{IC} = \frac{\sum_{i=1}^{N} (\hat{y}_i - \bar{\hat{y}})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N} (\hat{y}_i - \bar{\hat{y}})^2}\,\sqrt{\sum_{i=1}^{N} (y_i - \bar{y})^2}},$$
where $\hat{y}_i$ and $y_i$ are the predicted score and true return of stock $i$, and $\bar{\hat{y}}$ and $\bar{y}$ are the cross-sectional means of the predictions and true returns.
- Rank Information Coefficient (Rank IC): Spearman's rank correlation coefficient. It measures the monotonic relationship by replacing the continuous values with their ordinal ranks $\hat{r}_i$ and $r_i$:
$$\mathrm{RankIC} = \frac{\sum_{i=1}^{N} (\hat{r}_i - \bar{r})(r_i - \bar{r})}{\sqrt{\sum_{i=1}^{N} (\hat{r}_i - \bar{r})^2}\,\sqrt{\sum_{i=1}^{N} (r_i - \bar{r})^2}},$$
where $\bar{r}$ is the mean of the ranks. Compared with the standard IC, Rank IC is considerably more robust to extreme financial outliers and directly evaluates the effectiveness of the portfolio sorting order.
- Precision@N (P@N): Measures the accuracy of the top algorithmic recommendations. It is defined as the proportion of the true top-N most profitable stocks that appear in the model's top-N predictions:
$$\mathrm{P@}N = \frac{\lvert \mathcal{T}_N \cap \widehat{\mathcal{T}}_N \rvert}{N},$$
where $\mathcal{T}_N$ and $\widehat{\mathcal{T}}_N$ denote the sets of the true and predicted top-N stocks, respectively.
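All three predictive metrics can be computed from two cross-sectional vectors. A NumPy sketch (ties in the rank computation are ignored for brevity; a production implementation would use averaged ranks):

```python
import numpy as np

def ic(pred, true):
    """Cross-sectional Pearson correlation (Information Coefficient)."""
    p, t = pred - pred.mean(), true - true.mean()
    return (p @ t) / np.sqrt((p @ p) * (t @ t))

def rank_ic(pred, true):
    """Spearman correlation: Pearson IC computed on ordinal ranks."""
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return ic(rank(pred), rank(true))

def precision_at_n(pred, true, n):
    """Share of the true top-n stocks inside the predicted top-n."""
    top = lambda x: set(np.argsort(x)[-n:])
    return len(top(pred) & top(true)) / n

# Toy cross-section of 4 stocks with perfectly aligned orderings.
scores = np.array([0.3, -0.1, 0.5, 0.2])
rets   = np.array([0.02, -0.01, 0.04, 0.01])
```

Because `scores` and `rets` induce the same ordering, `rank_ic` and `precision_at_n` reach their maximum of 1 here even though the Pearson `ic` stays slightly below 1.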
5.3.2. Portfolio Backtesting Metrics
- Annualized Return (Ann. Ret.): The geometric average annual rate of wealth accumulation. Given the cumulative return $R_{\mathrm{cum}}$ realized over $T_d$ trading days, the annualized return is calculated as:
$$R_{\mathrm{ann}} = \left(1 + R_{\mathrm{cum}}\right)^{252 / T_d} - 1.$$
- Maximum Drawdown (Max DD): Measures downside risk by computing the maximum observed loss from a historical peak to a subsequent trough of the portfolio's net asset value (NAV):
$$\mathrm{MaxDD} = \min_{t} \frac{\mathrm{NAV}_t - P_t}{P_t}, \qquad \text{where } P_t = \max_{s \le t} \mathrm{NAV}_s.$$
- Sharpe Ratio (Sharpe): Evaluates risk-adjusted performance. Setting the risk-free rate to zero for strict daily evaluation, it is computed by scaling the daily ratio by $\sqrt{252}$:
$$\mathrm{Sharpe} = \sqrt{252} \cdot \frac{\bar{r}}{\sigma_r},$$
where $\bar{r}$ and $\sigma_r$ are the sample mean and standard deviation of the daily strategy returns.
- Information Ratio (IR): Measures the active return of the strategy relative to its tracking error against the benchmark:
$$\mathrm{IR} = \sqrt{252} \cdot \frac{\overline{r_t - b_t}}{\sigma(r_t - b_t)},$$
where $b_t$ is the daily benchmark return and the $\sqrt{252}$ factor applies the same daily-to-annual scaling used for the Sharpe ratio.
- Annualized Alpha ($\alpha$): Gauges excess profitability independent of systematic market movements (Beta). We estimate the daily alpha $\alpha_d$ via an Ordinary Least Squares (OLS) regression of strategy returns on benchmark returns, $r_t = \alpha_d + \beta b_t + \epsilon_t$, and compound it to the annualized alpha:
$$\alpha = (1 + \alpha_d)^{252} - 1.$$
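The return-based backtest metrics can be sketched from a vector of daily strategy returns, assuming 252 trading days per year and a zero risk-free rate as stated above:

```python
import numpy as np

TRADING_DAYS = 252  # annualization constant assumed above

def annualized_return(r):
    """Geometric annualization of compounded daily returns r."""
    cum = np.prod(1.0 + r)                       # 1 + cumulative return
    return cum ** (TRADING_DAYS / len(r)) - 1.0

def max_drawdown(r):
    """Largest peak-to-trough NAV loss (a non-positive number)."""
    nav = np.cumprod(1.0 + r)                    # net asset value curve
    peak = np.maximum.accumulate(nav)            # running historical peak
    return float((nav / peak - 1.0).min())

def sharpe(r):
    """Daily mean/volatility ratio scaled by sqrt(252)."""
    return np.sqrt(TRADING_DAYS) * r.mean() / r.std()

def information_ratio(r, bench):
    """Active return over tracking error versus a benchmark series."""
    active = r - bench
    return np.sqrt(TRADING_DAYS) * active.mean() / active.std()
```

For example, a strategy earning a constant 0.1% per day for one year has zero drawdown and an annualized return of roughly 28.6%.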
5.4. Implementation Details and Hyperparameter Configuration
5.5. Results and Quantitative Analysis
5.6. Ablation Studies
5.7. Hyperparameter Sensitivity
5.8. Practical Backtest Performance and Risk Management
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Feng, F.; He, X.; Wang, X.; Luo, C.; Liu, Y.; Chua, T. Temporal Relational Ranking for Stock Prediction. ACM Trans. Inf. Syst. 2019, 37, 27:1–27:30. [Google Scholar] [CrossRef]
- He, Y.; Li, Q.; Wu, F.; Gao, J. Static-dynamic graph neural network for stock recommendation. In 34th International Conference on Scientific and Statistical Database Management; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1–4. [Google Scholar] [CrossRef]
- Zhong, Y.; Chen, J.; Gao, J.; Wang, J.; Wan, Q. DTSRN: Dynamic Temporal Spatial Relation Network for Stock Ranking Recommendation. In International Conference on Neural Information Processing; Springer: Cham, Switzerland, 2023; pp. 346–359. [Google Scholar] [CrossRef]
- Fama, E.F. Efficient capital markets: A review of theory and empirical work. J. Financ. 1970, 25, 383–417. [Google Scholar] [CrossRef]
- Markowitz, H.M. Portfolio Selection. J. Financ. 1952, 7, 77–91. [Google Scholar] [CrossRef]
- Baek, Y.; Kim, H.Y. ModAugNet: A new forecasting framework for stock market index value with an overfitting prevention LSTM module and a prediction LSTM module. Expert Syst. Appl. 2018, 113, 457–480. [Google Scholar] [CrossRef]
- Patel, J.; Shah, S.; Thakkar, P.; Kotecha, K. Predicting stock market index using fusion of machine learning techniques. Expert Syst. Appl. 2015, 42, 2162–2172. [Google Scholar] [CrossRef]
- Bathla, G. Stock Price prediction using LSTM and SVR. In 2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC); IEEE: Piscataway, NJ, USA, 2020; pp. 211–214. [Google Scholar] [CrossRef]
- Matsunaga, D.; Suzumura, T.; Takahashi, T. Exploring Graph Neural Networks for Stock Market Predictions with Rolling Window Analysis. arXiv 2019, arXiv:1909.10660. [Google Scholar] [CrossRef]
- Xu, W.; Liu, W.; Wang, L.; Xia, Y.; Bian, J.; Yin, J.; Liu, T.Y. HIST: A graph-based framework for stock trend forecasting via mining concept-oriented shared information. arXiv 2021, arXiv:2110.13716. [Google Scholar] [CrossRef]
- Shi, H.; Song, W.; Zhang, X.; Shi, J.; Luo, C.; Ao, X.; Arian, H.; Seco, L.A. Alphaforge: A framework to mine and dynamically combine formulaic alpha factors. Proc. AAAI Conf. Artif. Intell. 2025, 39, 12524–12532. [Google Scholar] [CrossRef]
- Duan, Y.; Wang, W.; Li, J. FactorGCL: A Hypergraph-Based Factor Model with Temporal Residual Contrastive Learning for Stock Returns Prediction. Proc. AAAI Conf. Artif. Intell. 2025, 39, 173–181. [Google Scholar] [CrossRef]
- Noorizadegan, A.; Cavoretto, R.; Young, D.L.; Chen, C.S. Stable weight updating: A key to reliable PDE solutions using deep learning. Eng. Anal. Bound. Elem. 2024, 168, 105933. [Google Scholar] [CrossRef]
- Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar] [CrossRef]
- Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and applications. AI Open 2020, 1, 57–81. [Google Scholar] [CrossRef]
- Chen, Y.; Wei, Z.; Huang, X. Incorporating corporation relationship via graph convolutional neural networks for stock price prediction. In 27th ACM International Conference on Information and Knowledge Management; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1655–1658. [Google Scholar] [CrossRef]
- Yu, B.; Yin, H.; Zhu, Z. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. arXiv 2017, arXiv:1709.04875. [Google Scholar] [CrossRef]
- Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Zhang, C. Graph wavenet for deep spatial-temporal graph modeling. arXiv 2019, arXiv:1906.00121. [Google Scholar] [CrossRef]
- Ariyo, A.A.; Adewumi, A.O.; Ayo, C.K. Stock price prediction using the ARIMA model. In 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation; IEEE: Piscataway, NJ, USA, 2014; pp. 106–112. [Google Scholar] [CrossRef]
- Hochreiter, S.; Schmidhuber, J. Long Short-term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
- Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555. [Google Scholar] [CrossRef]
- Kim, T.; Kim, H.Y. Forecasting stock prices with a feature fusion LSTM-CNN model using different representations of the same data. PLoS ONE 2019, 14, e0212320. [Google Scholar] [CrossRef]
- Feng, F.; Chen, H.; He, X.; Ding, J.; Sun, M.; Chua, T.S. Enhancing stock movement prediction with adversarial training. arXiv 2018, arXiv:1810.09936. [Google Scholar] [CrossRef]
- Nie, Y.; Nguyen, N.H.; Sinthong, P.; Kalagnanam, J. A time series is worth 64 words: Long-term forecasting with transformers. arXiv 2022, arXiv:2211.14730. [Google Scholar] [CrossRef]
- Liu, Y.; Hu, T.; Zhang, H.; Wu, H.; Wang, S.; Ma, L.; Long, M. iTransformer: Inverted transformers are effective for time series forecasting. arXiv 2023, arXiv:2310.06625. [Google Scholar] [CrossRef]
- Wen, Q.; Zhou, T.; Zhang, C.; Chen, W.; Ma, Z.; Yan, J.; Sun, L. Transformers in time series: A survey. arXiv 2022, arXiv:2202.07125. [Google Scholar] [CrossRef]
- Sezer, O.B.; Gudelek, M.U.; Ozbayoglu, A.M. Financial time series forecasting with deep learning: A systematic literature review: 2005–2019. Appl. Soft Comput. 2020, 90, 106181. [Google Scholar] [CrossRef]
- Patel, M.; Jariwala, K.; Chattopadhyay, C. A Systematic Review on Graph Neural Network-based Methods for Stock Market Forecasting. ACM Comput. Surv. 2024, 57, 34. [Google Scholar] [CrossRef]
- Veličković, P.; Cucurull, G.; Casanova, A.; Romero, P.; Liò, P.; Bengio, Y. Graph attention networks. arXiv 2018, arXiv:1710.10903. [Google Scholar] [CrossRef]
- Li, Y.; Yu, R.; Shahabi, C.; Liu, Y. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. arXiv 2017, arXiv:1707.01926. [Google Scholar] [CrossRef]
- Jiang, W.; Luo, J. Graph neural network for traffic forecasting: A survey. Expert Syst. Appl. 2022, 207, 117921. [Google Scholar] [CrossRef]
- Bai, L.; Yao, L.; Li, C.; Wang, X.; Wang, C. Adaptive graph convolutional recurrent network for traffic forecasting. Adv. Neural Inf. Process. Syst. 2020, 33, 17804–17815. [Google Scholar] [CrossRef]
- Pareja, A.; Domeniconi, G.; Chen, J.; Ma, T.; Suzumura, T.; Kanezashi, H.; Kaler, T.; Schardl, T.; Leiserson, C. EvolveGCN: Evolving graph convolutional networks for dynamic graphs. Proc. AAAI Conf. Artif. Intell. 2020, 34, 5363–5370. [Google Scholar] [CrossRef]
- Rossi, E.; Chamberlain, B.; Frasca, F.; Eynard, D.; Monti, F.; Bronstein, M. Temporal graph networks for deep learning on dynamic graphs. arXiv 2020, arXiv:2006.10637. [Google Scholar] [CrossRef]
- Gao, J.; Ying, X.; Xu, C.; Wang, J.; Zhang, S.; Li, Z. Graph-Based Stock Recommendation by Time-Aware Relational Attention Network. ACM Trans. Knowl. Discov. Data 2022, 16, 4:1–4:21. [Google Scholar] [CrossRef]
- Hsu, Y.-L.; Tsai, Y.-C.; Li, C.-T. FinGAT: Financial Graph Attention Networks for Recommending Top-K Profitable Stocks. IEEE Trans. Knowl. Data Eng. 2023, 35, 469–481. [Google Scholar] [CrossRef]
- Sawhney, R.; Agarwal, S.; Wadhwa, A.; Shah, R.R. Exploring the Scale-Free Nature of Stock Markets: Hyperbolic Graph Learning for Algorithmic Trading. In Proceedings of the Web Conference 2021, ACM/IW3C2, Ljubljana, Slovenia, 19–23 April 2021; pp. 11–22. [Google Scholar] [CrossRef]
- Sawhney, R.; Agarwal, S.; Wadhwa, A.; Shah, R. Deep attentive learning for stock movement prediction from social media text and company correlations. In 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP); Association for Computational Linguistics: Stroudsburg, PA, USA, 2020; pp. 8415–8426. [Google Scholar] [CrossRef]
- Yu, Y.; Kim, Y.J. Two-dimensional attention-based LSTM model for stock index prediction. J. Inf. Process. Syst. 2019, 15, 1231–1242. [Google Scholar] [CrossRef]
- Touzani, Y.; Douzi, K. An LSTM and GRU based trading strategy adapted to the Moroccan market. J. Big Data 2021, 8, 126. [Google Scholar] [CrossRef] [PubMed]







| Category | Hyperparameter | Value | Remark/Functional Purpose |
|---|---|---|---|
| Data and Input | Feature Dimension (F) | 6 | Raw daily trading indicators (e.g., OHLCV, VWAP). |
| | Temporal Lookback Window (T) | 60 | Length of the historical trading day sequence. |
| | Batch Size | Daily Cross-Section | Preserves holistic market spatial topology per day. |
| Model Architecture | Dual-Stream Hidden Dim. (d) | 128 | Latent representation capacity for sequence encoders. |
| | Recurrent Layers | 2 | Depth of ALSTM and GRU sequential extractors. |
| | Graph Rolling Window (W) | 20 | Days aggregated to smooth transient market noise. |
| | Attention Reduction Factor (v) | 4 | Reduces parameter overhead in self-attention projection. |
| | Predictor MLP Cascade | | Progressive feature-fusion mapping to final alpha scores. |
| | Dropout Rate | 0.0 | Network regularization threshold (empirically disabled). |
| Training and Opt. | Optimizer | Adam | Adaptive gradient descent algorithm. |
| | Learning Rate | | Optimal step size determined via sensitivity analysis. |
| | Maximum Epochs | 100 | Upper bound for training iterations. |
| | Early Stopping Patience | 30 | Epochs without validation gain before early termination. |
| Model | CSI100 IC (↑) | CSI100 Rank IC (↑) | CSI100 P@3 (% ↑) | CSI100 P@5 (% ↑) | CSI100 P@10 (% ↑) | CSI100 P@30 (% ↑) | CSI300 IC (↑) | CSI300 Rank IC (↑) | CSI300 P@3 (% ↑) | CSI300 P@5 (% ↑) | CSI300 P@10 (% ↑) | CSI300 P@30 (% ↑) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MLP | 0.071 | 0.067 | 56.53 | 56.17 | 55.49 | 53.55 | 0.082 | 0.079 | 57.21 | 57.10 | 56.75 | 55.56 |
| SFM | 0.081 | 0.074 | 57.79 | 56.96 | 55.92 | 53.88 | 0.102 | 0.096 | 59.84 | 58.28 | 57.89 | 56.82 |
| GATs | 0.096 | 0.090 | 59.17 | 58.71 | 57.48 | 54.59 | 0.111 | 0.105 | 60.49 | 59.96 | 59.02 | 57.41 |
| LSTM | 0.097 | 0.091 | 60.12 | 59.49 | 59.04 | 54.77 | 0.104 | 0.098 | 59.51 | 59.27 | 58.40 | 56.98 |
| ALSTM | 0.102 | 0.097 | 60.79 | 59.76 | 58.13 | 55.00 | 0.115 | 0.109 | 59.51 | 59.33 | 58.92 | 57.47 |
| GRU | 0.103 | 0.097 | 59.97 | 58.99 | 58.37 | 55.09 | 0.113 | 0.108 | 59.95 | 59.28 | 58.59 | 57.43 |
| Transformer | 0.089 | 0.090 | 59.62 | 59.20 | 57.94 | 54.80 | 0.106 | 0.104 | 60.76 | 60.06 | 59.48 | 57.71 |
| ALSTM+TRA | 0.107 | 0.102 | 60.27 | 59.09 | 57.66 | 55.16 | 0.119 | 0.112 | 60.45 | 59.52 | 59.16 | 58.24 |
| HIST | 0.120 | 0.115 | 61.87 | 60.82 | 59.38 | 56.04 | 0.131 | 0.126 | 61.60 | 61.08 | 60.51 | 58.79 |
| SDGNN | 0.126 | 0.120 | 62.49 | 61.41 | 59.81 | 56.39 | 0.137 | 0.132 | 62.23 | 61.76 | 61.18 | 59.56 |
| DTSRN | 0.137 | 0.132 | 62.85 | 61.79 | 60.68 | 56.84 | 0.146 | 0.141 | 62.72 | 62.03 | 61.37 | 59.74 |
| USTGCN | 0.141 | 0.137 | 63.56 | 62.60 | 61.33 | 57.42 | 0.154 | 0.148 | 63.62 | 62.92 | 62.21 | 60.50 |
| Variant | CSI100 IC (↑) | CSI100 Rank IC (↑) | CSI100 P@3 (% ↑) | CSI100 P@5 (% ↑) | CSI100 P@10 (% ↑) | CSI100 P@30 (% ↑) | CSI300 IC (↑) | CSI300 Rank IC (↑) | CSI300 P@3 (% ↑) | CSI300 P@5 (% ↑) | CSI300 P@10 (% ↑) | CSI300 P@30 (% ↑) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| USTGCN | 0.141 | 0.137 | 63.56 | 62.60 | 61.33 | 57.42 | 0.154 | 0.148 | 63.62 | 62.92 | 62.21 | 60.50 |
| USTGCN-RW | 0.139 | 0.135 | 64.17 | 62.58 | 61.23 | 57.22 | 0.149 | 0.144 | 63.96 | 62.55 | 61.48 | 60.48 |
| USTGCN-UC | 0.133 | 0.129 | 63.30 | 62.71 | 60.88 | 56.95 | 0.135 | 0.129 | 62.15 | 61.48 | 60.89 | 59.46 |
| USTGCN-DS | 0.126 | 0.120 | 63.88 | 62.23 | 60.26 | 56.30 | 0.138 | 0.132 | 62.01 | 61.27 | 61.24 | 59.54 |
| USTGCN-SK | 0.117 | 0.113 | 62.15 | 60.89 | 60.14 | 56.17 | 0.126 | 0.123 | 61.09 | 60.72 | 59.96 | 59.25 |
| Dataset | Method | Dynamic | Static | IC | Rank IC |
|---|---|---|---|---|---|
| CSI100 | USTGCN (Default) | ALSTM | GRU | 0.141 | 0.137 |
| | Swap-1 | GRU | ALSTM | 0.138 | 0.134 |
| | Swap-2 | GRU | GRU | 0.138 | 0.132 |
| | Swap-3 | ALSTM | ALSTM | 0.128 | 0.125 |
| CSI300 | USTGCN (Default) | ALSTM | GRU | 0.154 | 0.148 |
| | Swap-1 | GRU | ALSTM | 0.152 | 0.148 |
| | Swap-2 | GRU | GRU | 0.146 | 0.140 |
| | Swap-3 | ALSTM | ALSTM | 0.148 | 0.143 |
| Model | CSI100 Ann. Ret. (% ↑) | CSI100 Sharpe (↑) | CSI100 Max DD (% ↑) | CSI100 IR (↑) | CSI100 Alpha (% ↑) | CSI300 Ann. Ret. (% ↑) | CSI300 Sharpe (↑) | CSI300 Max DD (% ↑) | CSI300 IR (↑) | CSI300 Alpha (% ↑) |
|---|---|---|---|---|---|---|---|---|---|---|
| MLP | 9.74 | 0.606 | −24.65 | −0.602 | −2.40 | 21.86 | 1.051 | −23.67 | 0.858 | 8.88 |
| SFM | 8.75 | 0.551 | −25.63 | −0.670 | −3.26 | 26.97 | 1.249 | −19.33 | 1.303 | 12.95 |
| GATs | 16.68 | 0.924 | −20.36 | 0.211 | 3.54 | 25.69 | 1.182 | −22.68 | 1.096 | 12.07 |
| LSTM | 15.43 | 0.870 | −25.24 | 0.075 | 2.46 | 24.89 | 1.094 | −26.66 | 1.009 | 10.83 |
| ALSTM | 16.37 | 0.905 | −25.85 | 0.178 | 3.24 | 25.72 | 1.197 | −21.04 | 1.142 | 12.08 |
| GRU | 18.10 | 0.974 | −20.81 | 0.360 | 4.63 | 27.09 | 1.227 | −24.02 | 1.215 | 13.06 |
| Transformer | 14.15 | 0.812 | −20.08 | −0.069 | 1.43 | 22.28 | 1.027 | −23.71 | 0.863 | 8.85 |
| ALSTM+TRA | 14.72 | 0.839 | −21.98 | −0.006 | 2.03 | 21.74 | 1.021 | −28.22 | 0.865 | 8.41 |
| HIST | 14.66 | 0.828 | −20.25 | −0.008 | 1.75 | 26.38 | 1.180 | −27.11 | 1.116 | 12.42 |
| SDGNN | 18.75 | 1.002 | −18.11 | 0.414 | 5.24 | 26.85 | 1.273 | −19.27 | 1.230 | 13.30 |
| DTSRN | 15.40 | 0.852 | −17.39 | 0.073 | 2.52 | 28.13 | 1.265 | −24.72 | 1.255 | 13.99 |
| USTGCN | 20.75 | 1.117 | −17.48 | 0.632 | 7.09 | 31.16 | 1.396 | −20.65 | 1.538 | 16.36 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Yao, W.; Gao, L.; Zhang, X.; Chen, H.; Liu, M.; Hu, Y. USTGCN: A Unified Spatio-Temporal Graph Convolutional Network for Stock-Ranking Prediction. Electronics 2026, 15, 1317. https://doi.org/10.3390/electronics15061317

