Search Results (286)

Search Parameters:
Keywords = learning portfolio

18 pages, 289 KB  
Article
The Impact of Product Variety on Quality Conformance in Continuous Process Manufacturing: A Quantitative Investigation in a Chemical Industry Context
by Mads Andersson, Lars Hvam, Cipriano Forza and Niels Henrik Mortensen
Appl. Sci. 2025, 15(21), 11618; https://doi.org/10.3390/app152111618 - 30 Oct 2025
Abstract
Product variety in manufacturing has increased significantly, driven by technological advancements and growing demand for customisation. To meet diverse customer preferences, companies often overextend their portfolios without fully considering the resulting impact on manufacturing effectiveness. This study investigates how product variety is associated with quality conformance in continuous process manufacturing, an area underexplored in the existing literature, which predominantly focuses on discrete or assembly-based operations. Utilising production data from a large chemical manufacturer, logistic regression analysis was applied to examine how variety-related engineering parameters relate to the probability of non-conformance at the big-bag level. The analysis shows that ramp-ups, especially those associated with major changeovers, and short production uptimes are correlated with an increased likelihood of quality issues. Very infrequent production also appears to increase quality risks. Contrary to learning-curve expectations, products with medium and low production intensity showed lower odds of non-conformance than high-intensity products. These findings clarify the quality implications of product variety in continuous manufacturing environments. By identifying variety parameters that appear to contribute to quality risks, this study offers initial guidance for production planners and product portfolio managers aiming to balance product variety with quality conformance and overall manufacturing effectiveness.
(This article belongs to the Special Issue Quality Control and Product Monitoring in Manufacturing)
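The study's logistic-regression findings are reported as odds of non-conformance. As a reminder of how logit coefficients map to odds ratios, here is a minimal stdlib sketch; the coefficient values and predictor names are hypothetical illustrations, not the paper's estimates.

```python
import math

# Hypothetical logit coefficients for variety-related predictors of
# big-bag non-conformance (illustrative values, not the paper's data).
coeffs = {
    "major_changeover_rampup": 0.62,
    "short_uptime": 0.41,
    "very_infrequent_production": 0.35,
    "medium_intensity": -0.28,
}

# In logistic regression, exp(beta) is the odds ratio: the multiplicative
# change in the odds of non-conformance per unit change in the predictor.
odds_ratios = {name: math.exp(b) for name, b in coeffs.items()}
for name, ratio in odds_ratios.items():
    print(f"{name}: odds ratio {ratio:.2f}")
```

An odds ratio above 1 (e.g., for ramp-ups after major changeovers) raises the odds of a quality issue; below 1 (medium production intensity) lowers them, matching the direction of the reported effects.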
19 pages, 257 KB  
Review
From Recall to Resilience: Reforming Assessment Practices in Saudi Theory-Based Higher Education to Advance Vision 2030
by Mubarak S. Aldosari
Sustainability 2025, 17(21), 9415; https://doi.org/10.3390/su17219415 - 23 Oct 2025
Viewed by 220
Abstract
Assessment practices are central to higher education, and particularly critical in theory-based programs, where they facilitate the development of conceptual understanding and higher-order cognitive skills. They also support Saudi Arabia’s Vision 2030 agenda, which aims to drive educational innovation. This narrative review examines assessment practices in theory-based programs at a Saudi public university, identifies discrepancies with learning objectives, and proposes potential solutions. The review synthesised peer-reviewed literature (2015–2025) from Scopus, Web of Science, ERIC, and Google Scholar, focusing on traditional and alternative assessments, barriers, progress, and comparisons with international standards. It found that traditional summative methods (quizzes, final exams) still dominate and emphasise memorisation, limiting the development of higher-order skills. Emerging techniques, such as projects, portfolios, oral presentations, and peer assessment, are gaining traction but face institutional constraints and resistance from faculty. Digital adoption is growing: 63% of students are satisfied with learning management system tools, and 75% find online materials easy to understand; yet advanced analytics and AI-based assessments are rare. A comparative analysis reveals that international standards favour formative feedback, adaptive technologies, and holistic competencies. The misalignment between current practices and Vision 2030 highlights the need to broaden assessment portfolios, integrate technology, and provide faculty training. Saudi theory-based programs must transition from memory-oriented evaluations to student-centred, evidence-based assessments that foster critical thinking and real-world application. The review recommends adopting diverse assessments (projects, portfolios, peer reviews), investing in digital analytics and adaptive learning, aligning assessments with learning outcomes and Vision 2030 competencies, and implementing ongoing faculty development. The study offers practical pathways for reform and highlights strategic opportunities for achieving Saudi Arabia’s national learning outcomes.
(This article belongs to the Section Sustainable Education and Approaches)
31 pages, 3540 KB  
Article
Bi-Objective Portfolio Optimization Under ESG Volatility via a MOPSO-Deep Learning Algorithm
by Imma Lory Aprea, Gianni Bosi, Gabriele Sbaiz and Salvatore Scognamiglio
Mathematics 2025, 13(20), 3308; https://doi.org/10.3390/math13203308 - 16 Oct 2025
Viewed by 251
Abstract
In this paper, we tackle a bi-objective optimization problem in which we aim to maximize the portfolio diversification and, at the same time, minimize the portfolio volatility, incorporating ESG (Environmental, Social, and Governance) information. More specifically, we extend the standard portfolio volatility framework, based purely on financial aspects, to a new paradigm in which sustainability criteria are taken into account. In constructing the portfolio, we impose the classical budget and box constraints. To solve these new asset allocation models, we develop an improved Multi-Objective Particle Swarm Optimizer (MOPSO) embedded with ad hoc repair and projection operators that enforce the constraints. Moreover, we implement a deep learning architecture to improve the quality of estimating the portfolio diversification objective. Finally, we conduct empirical tests on datasets from three different countries’ markets to illustrate the effectiveness of the proposed strategies, accounting for various levels of ESG volatility.
(This article belongs to the Special Issue Multi-Objective Optimization and Applications)
43 pages, 4746 KB  
Article
The BTC Price Prediction Paradox Through Methodological Pluralism
by Mariya Paskaleva and Ivanka Vasenska
Risks 2025, 13(10), 195; https://doi.org/10.3390/risks13100195 - 4 Oct 2025
Viewed by 1336
Abstract
Bitcoin’s extreme price volatility presents significant challenges for investors and traders, necessitating accurate predictive models to guide decision-making in cryptocurrency markets. This study compares the performance of machine learning approaches for Bitcoin price prediction, specifically examining XGBoost gradient boosting, Long Short-Term Memory (LSTM), and GARCH-DL neural networks, using comprehensive market data spanning December 2013 to May 2025. We employed extensive feature engineering incorporating technical indicators, applied multiple machine and deep learning model configurations, including standalone and ensemble approaches, and utilized cross-validation techniques to assess model robustness. Based on the empirical results, the most significant practical implication is that traders and financial institutions should adopt a dual-model approach, deploying XGBoost for directional trading strategies and utilizing LSTM models for applications requiring precise magnitude predictions, due to their superior continuous forecasting performance. This research demonstrates that traditional technical indicators, particularly market capitalization and price extremes, remain highly predictive in algorithmic trading contexts, validating their continued integration into modern cryptocurrency prediction systems. For risk management applications, the attention-based LSTM’s superior risk-adjusted returns, combined with enhanced interpretability, make it particularly valuable for institutional portfolio optimization and regulatory compliance requirements. The findings suggest that ensemble methods offer balanced performance across multiple evaluation criteria, providing a robust foundation for production trading systems where consistent performance is more valuable than optimization for single metrics. These results enable practitioners to make evidence-based decisions about model selection based on their specific trading objectives, whether focused on directional accuracy for signal generation or precision of magnitude for risk assessment and portfolio management.
(This article belongs to the Special Issue Portfolio Theory, Financial Risk Analysis and Applications)
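The dual-model recommendation rests on two different evaluation criteria: directional accuracy (for signal generation) versus a magnitude error such as RMSE (for risk assessment). A minimal stdlib sketch of both metrics on a made-up price series (the numbers are illustrative, not from the study):

```python
import math

# Toy actual vs. predicted price series (hypothetical values).
actual    = [100.0, 103.0, 101.0, 105.0, 104.0, 108.0]
predicted = [100.5, 102.0, 102.5, 104.0, 104.5, 107.0]

def directional_accuracy(a, p):
    """Share of steps where the predicted move has the same sign as the actual move."""
    hits = sum(
        1 for i in range(1, len(a))
        if (a[i] - a[i - 1]) * (p[i] - p[i - 1]) > 0
    )
    return hits / (len(a) - 1)

def rmse(a, p):
    """Root mean squared error of the level predictions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, p)) / len(a))

print(f"directional accuracy: {directional_accuracy(actual, predicted):.2f}")
print(f"RMSE: {rmse(actual, predicted):.3f}")
```

A model can score well on one criterion and poorly on the other, which is why the study pairs XGBoost (direction) with LSTM (magnitude).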
25 pages, 3228 KB  
Article
Sustainable vs. Non-Sustainable Assets: A Deep Learning-Based Dynamic Portfolio Allocation Strategy
by Fatma Ben Hamadou and Mouna Boujelbène Abbes
J. Risk Financial Manag. 2025, 18(10), 563; https://doi.org/10.3390/jrfm18100563 - 3 Oct 2025
Viewed by 750
Abstract
This article aims to investigate the impact of sustainable assets on dynamic portfolio optimization under varying levels of investor risk aversion, particularly during turbulent market conditions. The analysis compares the performance of two portfolio types: (i) portfolios composed of non-sustainable assets such as fossil energy commodities and conventional equity indices, and (ii) mixed portfolios that combine non-sustainable and sustainable assets, including renewable energy, green bonds, and precious metals. Portfolios are optimized with advanced Deep Reinforcement Learning (DRL) models (including TD3 and DDPG) that incorporate risk and transaction-cost sensitivity, benchmarked against the traditional Mean-Variance (MV) model. Results show that incorporating clean and sustainable assets significantly enhances portfolio returns and reduces volatility across all risk aversion profiles. Moreover, the Deep Reinforcement Learning optimization models outperform classical MV optimization, and the RTC-LSTM-TD3 optimization strategy outperforms all others. The RTC-LSTM-TD3 optimization achieves an annual return of 24.18% and a Sharpe ratio of 2.91 in mixed portfolios (sustainable and non-sustainable assets) under low risk aversion (λ = 0.005), compared to a return of only 8.73% and a Sharpe ratio of 0.67 in portfolios excluding sustainable assets. To the best of the authors’ knowledge, this is the first study that employs the DRL framework integrating risk sensitivity and transaction costs to evaluate the diversification benefits of sustainable assets. Findings offer important implications for portfolio managers to leverage the benefits of sustainable diversification, and for policymakers to encourage the integration of sustainable assets, while addressing fiduciary responsibilities.
(This article belongs to the Special Issue Sustainable Finance for Fair Green Transition)
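The reported performance figures (annual return, Sharpe ratio) follow standard definitions. A small stdlib sketch of an annualized Sharpe ratio on a hypothetical daily return stream (the values are invented for illustration, not the paper's data):

```python
import math

def annualized_sharpe(daily_returns, rf_daily=0.0, periods=252):
    """Annualized Sharpe ratio: mean excess return over its sample
    standard deviation, scaled by sqrt(trading periods per year)."""
    n = len(daily_returns)
    excess = [r - rf_daily for r in daily_returns]
    mean = sum(excess) / n
    var = sum((r - mean) ** 2 for r in excess) / (n - 1)
    return mean / math.sqrt(var) * math.sqrt(periods)

# Hypothetical daily return stream for a mixed portfolio.
rets = [0.004, -0.002, 0.006, 0.001, -0.003, 0.005, 0.002, -0.001]
print(f"Sharpe: {annualized_sharpe(rets):.2f}")
```

With real data, the same calculation over the mixed and non-sustainable portfolios would reproduce the kind of Sharpe-ratio comparison (2.91 vs. 0.67) reported in the abstract.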
14 pages, 696 KB  
Article
Portfolio Management Strategies Based on Deep Temporal Clustering
by Eleftherios Kouloumpris, Panagiotis Doupidis, Konstantinos Moutsianas and Ioannis Vlahavas
Appl. Sci. 2025, 15(19), 10439; https://doi.org/10.3390/app151910439 - 26 Sep 2025
Viewed by 492
Abstract
Portfolio management (PM) facilitates optimal investing decisions and enables organizations to control risks and achieve stable financial growth. Advances in machine learning, mostly through supervised learning, are drastically changing the way in which PM is conducted. More recently, unsupervised learning is also emerging as a paradigm that can support the creation of diversified and profitable portfolios through stock clustering. In the corresponding literature, there is significant evidence that cluster-informed methods can outperform both traditional and supervised approaches to PM. However, these works are few and have not considered state-of-the-art deep learning approaches for clustering, while stock allocation is often limited to equally weighted portfolios or mean-variance optimization (MVO). To address these issues, we propose a cluster-informed PM method based on deep temporal clustering (DTC) along with our recommended parameters for training convergence, combined with the conditional drawdown at risk (CDaR) portfolio allocation method. Unlike MVO, CDaR considers tail risk and can minimize extreme price drawdowns. Cluster validity metrics reveal that DTC outperforms previously proposed stock clustering methods. Furthermore, DTC enhanced by CDaR achieves a higher expected Sortino ratio (1.1) compared to previous works in clustering-based PM. Additional Brinson attribution and maximum drawdown analyses further confirm the robustness of our method.
(This article belongs to the Section Computing and Artificial Intelligence)
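Unlike mean-variance optimization, CDaR works on the drawdown distribution. A compact stdlib sketch of the drawdown series and the conditional drawdown at risk on a toy value path (illustrative numbers; the paper's allocation method optimizes over this risk measure rather than merely computing it):

```python
# Hypothetical cumulative portfolio value path.
values = [100, 104, 102, 107, 103, 101, 108, 106, 104, 110]

def drawdowns(path):
    """Drawdown at each step: fractional drop from the running peak."""
    peak, dds = path[0], []
    for v in path:
        peak = max(peak, v)
        dds.append((peak - v) / peak)
    return dds

def cdar(path, alpha=0.95):
    """Conditional drawdown at risk: mean of the worst (1 - alpha)
    share of observed drawdowns (at least one observation)."""
    dds = sorted(drawdowns(path), reverse=True)
    k = max(1, int(round((1 - alpha) * len(dds))))
    return sum(dds[:k]) / k

print(f"CDaR(95%): {cdar(values):.4f}")
```

Because CDaR averages only the tail of the drawdown distribution, minimizing it targets exactly the extreme drops that MVO's variance objective does not distinguish from ordinary fluctuations.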
24 pages, 5860 KB  
Review
Mapping the Rise in Machine Learning in Environmental Chemical Research: A Bibliometric Analysis
by Bojana Stanic and Nebojsa Andric
Toxics 2025, 13(10), 817; https://doi.org/10.3390/toxics13100817 - 26 Sep 2025
Viewed by 659
Abstract
Machine learning (ML) is reshaping how environmental chemicals are monitored and how their hazards are evaluated for human health. Here, we mapped this landscape by analyzing 3150 peer-reviewed articles (1985–2025) from the Web of Science Core Collection. Co-citation, co-occurrence, and temporal trend analyses in VOSviewer and R reveal an exponential publication surge from 2015, dominated by environmental science journals, with China and the United States leading in output. Eight thematic clusters emerged, centered on ML model development, water quality prediction, quantitative structure–activity applications, and per-/polyfluoroalkyl substances, with XGBoost and random forests as the most cited algorithms. A distinct risk assessment cluster indicates migration of these tools toward dose–response and regulatory applications, yet keyword frequencies show a 4:1 bias toward environmental endpoints over human health endpoints. Emerging topics include climate change, microplastics, and digital soil mapping, while lignin, arsenic, and phthalates appear as fast-growing but understudied chemicals. Our findings expose gaps in chemical coverage and health integration. We recommend expanding the substance portfolio, systematically coupling ML outputs with human health data, adopting explainable artificial intelligence workflows, and fostering international collaboration to translate ML advances into actionable chemical risk assessments.
(This article belongs to the Section Novel Methods in Toxicology Research)
32 pages, 2647 KB  
Review
Adapting the Baldrige Framework for Sustainable Creative Education: Urban Design, Architecture, Art, and Design Programs
by Kittichai Kasemsarn, Ukrit Wannaphapa, Antika Sawadsri, Amorn Kritsanaphan, Rittirong Chutapruttikorn and Farnaz Nickpour
Sustainability 2025, 17(19), 8540; https://doi.org/10.3390/su17198540 - 23 Sep 2025
Viewed by 630
Abstract
Two critical research problems emerge in creative education quality management: the framework misalignment problem, where business-oriented performance metrics inadequately assess design creativity and innovation, and the sustainability integration gap, reflecting limited incorporation of environmental and social sustainability dimensions into excellence models. This review article addresses these problems by developing an initial framework that adapts the Baldrige framework for urban design, architecture, art, and design education with integrated sustainability principles. Drawing on literature review and theoretical synthesis, the article proposes a framework that introduces three key epistemological shifts: prioritizing process over product, supporting non-linear and reflective learning pathways, and recognizing tacit, embodied, and experiential knowledge as central to creative education. The framework incorporates the United Nations Sustainable Development Goals (SDGs) as core design challenges and introduces innovative evaluation tools, including portfolios with iterative review processes, community feedback loops, and SDG mapping rubrics. This research contributes to the educational quality management literature by offering a systematic framework that bridges business excellence models with creative education paradigms while positioning sustainability as a core educational objective rather than a peripheral concern.
(This article belongs to the Section Sustainable Management)
18 pages, 788 KB  
Article
Self-Coded Digital Portfolios as an Authentic Project-Based Learning Assessment in Computing Education: Evidence from a Web Design and Development Course
by Manuel B. Garcia
Educ. Sci. 2025, 15(9), 1150; https://doi.org/10.3390/educsci15091150 - 4 Sep 2025
Viewed by 1809
Abstract
Digital portfolios have become an essential assessment tool in project-based and student-centered learning environments. Unfortunately, students exert minimal effort in creating digital portfolios because they find the writing component unchallenging. This issue is concerning since existing research predominantly focuses on the use of pre-existing platforms for building digital portfolios. This gap presents an opportunity to explore more challenging approaches to digital portfolio creation. Consequently, this study employs a project-based learning (PBL) approach within a website design and development course, where 176 undergraduate students completed weekly coding tasks culminating in a self-coded digital portfolio. Using a one-group posttest-only research design, data were collected through a structured questionnaire that included demographic items and validated scales measuring learning effectiveness and ownership of learning. The survey was administered electronically after students submitted their digital portfolio projects. The results reveal that device ownership shows only weak associations with students’ perceptions, while internet connectivity and self-reported academic performance demonstrate stronger relationships with engagement and ownership of learning. Additionally, prior experience with digital portfolios positively influences students’ engagement, motivation, and ownership of learning. Implications of these findings are discussed for supporting the integration of digital portfolios into technical disciplines. Overall, this study contributes to the literature on PBL methodology, expands our understanding of digital portfolio integration, and underscores the significance of student-centered pedagogies.
30 pages, 7088 KB  
Article
Cascade Hydropower Plant Operational Dispatch Control Using Deep Reinforcement Learning on a Digital Twin Environment
by Erik Rot Weiss, Robert Gselman, Rudi Polner and Riko Šafarič
Energies 2025, 18(17), 4660; https://doi.org/10.3390/en18174660 - 2 Sep 2025
Viewed by 635
Abstract
In this work, we propose the use of a reinforcement learning (RL) agent for the control of a cascade hydropower plant system. Generally, this job is handled by power plant dispatchers who manually adjust power plant electricity production to meet the changing demand set by energy traders. This work explores the more fundamental problem in cascade hydropower plant operation of flow control for power production, in a highly nonlinear setting, on a data-based digital twin. Using deep deterministic policy gradient (DDPG), twin delayed DDPG (TD3), soft actor-critic (SAC), and proximal policy optimization (PPO) algorithms, we can generalize the characteristics of the system and approach the human dispatcher’s level of control over the entire system of eight hydropower plants on the river Drava in Slovenia. The creation of an RL agent that makes decisions similar to a human dispatcher is interesting not only in terms of control but also in terms of long-term decision-making analysis in an ever-changing energy portfolio. The specific novelty of this work lies in training an RL agent on an accurate testing environment of eight real-world cascade hydropower plants on the river Drava in Slovenia and comparing the agent’s performance to human dispatchers. The results show that the RL agent’s absolute mean error of 7.64 MW is comparable to the general human dispatcher’s absolute mean error of 5.8 MW at a peak installed power of 591.95 MW.
18 pages, 465 KB  
Article
Empirical Calibration of XGBoost Model Hyperparameters Using the Bayesian Optimisation Method: The Case of Bitcoin Volatility
by Saralees Nadarajah, Jules Clement Mba, Ndaohialy Manda Vy Ravonimanantsoa, Patrick Rakotomarolahy and Henri T. J. E. Ratolojanahary
J. Risk Financial Manag. 2025, 18(9), 487; https://doi.org/10.3390/jrfm18090487 - 2 Sep 2025
Cited by 1 | Viewed by 938
Abstract
Ensemble learning techniques continue to attract growing interest for forecasting the volatility of cryptocurrency assets. In particular, XGBoost, an ensemble learning technique, has been shown in recent studies to provide the most accurate forecast of Bitcoin volatility. However, the performance of XGBoost largely depends on the tuning of its hyperparameters. In this study, we examine the effectiveness of the Bayesian optimization method for tuning the XGBoost hyperparameters for Bitcoin volatility forecasting. We chose to explore this method rather than the more commonly used manual, grid, and random hyperparameter searches because of its ability to identify the most promising areas of the hyperparameter space through exploitation and exploration using acquisition functions, and its ability to minimize error with a reduced amount of time and resources required to find an optimal configuration. The obtained XGBoost configuration improves the forecast accuracy of Bitcoin volatility. Our empirical results, based on letting the data speak for themselves, could be used for a comparative study on Bitcoin volatility forecasting. This would also be important for volatility trading, option pricing, and managing portfolios related to Bitcoin.
(This article belongs to the Section Mathematics and Finance)
15 pages, 601 KB  
Article
Cryptocurrency Futures Portfolio Trading System Using Reinforcement Learning
by Jae Heon Chun and Suk Jun Lee
Appl. Sci. 2025, 15(17), 9400; https://doi.org/10.3390/app15179400 - 27 Aug 2025
Viewed by 2481
Abstract
This paper proposes a cryptocurrency portfolio trading system (CPTS) that optimizes trading performance in the cryptocurrency futures market by leveraging reinforcement learning and timeframe analysis. By employing the advantage actor–critic (A2C) algorithm and analysis of variance (ANOVA), portfolios are constructed over multiple timeframes. Data on the trading of 18 major cryptocurrencies on Binance Futures, between January 2022 and December 2023, are used to show that trading strategies can be effectively categorized into those with high-frequency (10, 30, and 60 min) and low-frequency (daily) timeframes. Empirical results demonstrate statistically significant differences in returns between these timeframe groups, with major cryptocurrencies (e.g., Bitcoin and Ethereum) exhibiting higher returns in high-frequency trading (16–17%) than in daily trading (6–7%) during training. Performance evaluation during the test period revealed that the low-frequency group achieved a 43.06% average return, significantly outperforming the high-frequency group (5.68%). The ANOVA results confirm that both the frequency type and portfolio selection significantly influence trading performance at the 5% significance level. This study offers a novel approach to cryptocurrency trading that considers the distinct characteristics of different timeframes. The effectiveness of combining reinforcement learning with statistical analysis for portfolio optimization in highly volatile cryptocurrency markets is demonstrated.
(This article belongs to the Section Computing and Artificial Intelligence)
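The significance test behind the timeframe comparison is a one-way ANOVA. A stdlib sketch of the F statistic on two hypothetical groups whose means echo the reported averages (5.68% high-frequency vs. 43.06% low-frequency; the individual per-asset values are invented):

```python
# Hypothetical per-asset test-period returns (%) for two timeframe groups.
high_freq = [4.2, 6.1, 5.0, 7.3, 5.8]
low_freq  = [38.5, 45.2, 41.0, 47.9, 42.7]

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across the given groups:
    between-group mean square over within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

print(f"F = {one_way_anova_f(high_freq, low_freq):.1f}")
```

A large F relative to the critical value at the 5% level would lead to rejecting equal group means, as the study reports for the frequency-type factor.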
30 pages, 651 KB  
Article
A Fusion of Statistical and Machine Learning Methods: GARCH-XGBoost for Improved Volatility Modelling of the JSE Top40 Index
by Israel Maingo, Thakhani Ravele and Caston Sigauke
Int. J. Financial Stud. 2025, 13(3), 155; https://doi.org/10.3390/ijfs13030155 - 25 Aug 2025
Viewed by 1034
Abstract
Volatility modelling is a key feature of financial risk management, portfolio optimisation, and forecasting, particularly for market indices such as the JSE Top40 Index, which serves as a benchmark for the South African stock market. This study investigates volatility modelling of the JSE Top40 Index log-returns from 2011 to 2025 using a hybrid, two-step approach that integrates statistical and machine learning techniques. The ARMA(3,2) model was chosen as the optimal mean model, using the auto.arima() function from the forecast package in R (version 4.4.0). Several alternative variants of GARCH models, including sGARCH(1,1), GJR-GARCH(1,1), and EGARCH(1,1), were fitted under various conditional error distributions (i.e., STD, SSTD, GED, SGED, and GHD). Model selection was based on the AIC, BIC, HQIC, and LL criteria, and ARMA(3,2)-EGARCH(1,1) achieved the lowest values across these criteria. Residual diagnostic results indicated that the model adequately captured autocorrelation, conditional heteroskedasticity, and asymmetry in JSE Top40 log-returns. Volatility persistence was also detected, confirming the persistence attributes of financial volatility. Thereafter, the ARMA(3,2)-EGARCH(1,1) model was coupled with XGBoost, using standardised residuals extracted from ARMA(3,2)-EGARCH(1,1) as lagged features. The data were split into training (60%), testing (20%), and calibration (20%) sets. Based on the lowest values of the forecast accuracy measures (i.e., MASE, RMSE, MAE, MAPE, and sMAPE), along with prediction intervals and their evaluation metrics (i.e., PICP, PINAW, PICAW, and PINAD), the hybrid model captured residual nonlinearities left by the standalone ARMA(3,2)-EGARCH(1,1) and demonstrated improved forecasting accuracy. The hybrid ARMA(3,2)-EGARCH(1,1)-XGBoost model outperforms the standalone ARMA(3,2)-EGARCH(1,1) model across all forecast accuracy measures. This highlights the robustness and suitability of the hybrid ARMA(3,2)-EGARCH(1,1)-XGBoost model for financial risk management in emerging markets and signifies the strengths of integrating statistical and machine learning methods in financial time series modelling.
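EGARCH extends the basic GARCH recursion with asymmetry terms; as a baseline for the model family compared here, the plain sGARCH(1,1) conditional-variance update can be sketched in a few lines. The parameters and return series below are assumed for illustration, not fitted to JSE data.

```python
# Toy daily log-return series (hypothetical values).
returns = [0.012, -0.008, 0.021, -0.015, 0.004, -0.019, 0.009]
omega, alpha, beta = 1e-5, 0.08, 0.90  # assumed, not estimated

# sGARCH(1,1): sigma^2_t = omega + alpha * r^2_{t-1} + beta * sigma^2_{t-1}
sigma2 = [sum(r * r for r in returns) / len(returns)]  # init at sample variance
for r in returns:
    sigma2.append(omega + alpha * r * r + beta * sigma2[-1])

# alpha + beta close to 1 corresponds to the volatility persistence
# the study detects in the JSE Top40 log-returns.
print("persistence (alpha + beta):", alpha + beta)
print("conditional variances:", [f"{s:.2e}" for s in sigma2[1:]])
```

EGARCH instead models log(sigma^2_t) and lets negative shocks raise volatility more than positive ones, which is the asymmetry the diagnostics confirm; XGBoost is then stacked on the standardised residuals this filtering leaves behind.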
27 pages, 1363 KB  
Article
FSTGAT: Financial Spatio-Temporal Graph Attention Network for Non-Stationary Financial Systems and Its Application in Stock Price Prediction
by Ze-Lin Wei, Hong-Yu An, Yao Yao, Wei-Cong Su, Guo Li, Saifullah, Bi-Feng Sun and Mu-Jiang-Shan Wang
Symmetry 2025, 17(8), 1344; https://doi.org/10.3390/sym17081344 - 17 Aug 2025
Cited by 1 | Viewed by 1808
Abstract
Accurately predicting stock prices is crucial for investment and risk management, but the non-stationarity of financial markets and the complex correlations among stocks challenge traditional models (ARIMA, LSTM, XGBoost), which struggle to capture dynamic patterns and deliver limited prediction accuracy. To this end, this paper proposes the Financial Spatio-Temporal Graph Attention Network (FSTGAT), with the following core innovations: temporal modelling through gated causal convolutions, which avoid future-information leakage and capture both long- and short-term fluctuations; enhanced spatial correlation learning through a dynamic graph attention mechanism (GATv2) that incorporates industry information; a multiple-input multiple-output (MIMO) architecture grouped by industry, which simultaneously learns intra-group synergies and inter-group influences; and a symmetric fusion of the spatio-temporal modules into a hierarchical feature extraction framework. Experiments on the commercial banking and metals sectors of the New York Stock Exchange (NYSE) show that FSTGAT significantly outperforms benchmark models, especially in high-volatility scenarios, where prediction error is reduced by 45–69% and price turning points are captured accurately. This study confirms the potential of graph neural networks to model the structure of financial interconnections, providing an effective tool for stock forecasting in non-stationary markets; its forecasting accuracy and ability to capture industry correlations can support portfolio optimization, improved risk management, and supply-chain decision guidance. Full article
(This article belongs to the Section Computer)
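The "gated causal convolution" named in the FSTGAT abstract can be illustrated in a few lines: a 1-D convolution padded only on the left, so the output at time t never sees future inputs, combined through the usual tanh/sigmoid gate. This is a generic numpy sketch of the mechanism, not FSTGAT's actual layer; the kernel sizes and weights are illustrative assumptions.

```python
import numpy as np

def causal_conv1d(x, w):
    """1-D convolution that only looks backwards in time: the output at
    step t depends on x[t-k+1..t], never on future values."""
    k = len(w)
    xp = np.concatenate([np.zeros(k - 1), x])  # left padding only
    return np.array([xp[t : t + k] @ w[::-1] for t in range(len(x))])

def gated_causal_conv(x, w_filter, w_gate):
    """Gated activation unit: the tanh branch carries the signal, the
    sigmoid branch decides how much of it passes through."""
    f = np.tanh(causal_conv1d(x, w_filter))
    g = 1.0 / (1.0 + np.exp(-causal_conv1d(x, w_gate)))
    return f * g

# Changing a future input must not change any earlier output.
x = np.arange(6.0)
w = np.array([0.5, -0.25, 0.1])
y1 = gated_causal_conv(x, w, w)
x2 = x.copy()
x2[5] = 100.0
y2 = gated_causal_conv(x2, w, w)
```

The left-only padding is what prevents the future-information leakage the abstract warns about; a symmetrically padded convolution would let outputs depend on later prices.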

20 pages, 1527 KB  
Article
Trends in Patent Applications for Technologies in the Automotive Industry: Applications of Deep Learning and Machine Learning
by ChoongChae Woo and Junbum Park
AI 2025, 6(8), 185; https://doi.org/10.3390/ai6080185 - 13 Aug 2025
Viewed by 2113
Abstract
This study investigates global innovation trends in machine learning (ML) and deep learning (DL) technologies within the automotive sector through a patent analysis of 5314 applications filed between 2005 and 2022 across the five major patent offices (IP5). Using Cooperative Patent Classification (CPC) codes and keyword analysis, we identify seven sub-technology domains and examine both geographical and corporate patenting strategies. Our findings show that the United States dominates in overall filings, while Japan demonstrates a notably high share of triadic patents, which reflects a strong global-reach strategy. Patent activity is heavily concentrated in vehicle control and infrastructure traffic control, with emerging growth observed in battery management and occupant analytics. In contrast, security-related technologies remain underrepresented, indicating a potential blind spot in current innovation efforts. Corporate strategies diverge markedly; for example, some firms, such as Toyota and Bosch, pursue balanced tri-regional protection, whereas others, including Ford and GM, focus on dual-market coverage in the United States and China. These patterns illustrate how market priorities, regulatory environments, and technological objectives influence patenting behavior. By mapping the technological and strategic landscape of ML/DL innovation in the automotive industry, this study provides actionable insights for industry practitioners seeking to optimize intellectual property portfolios and for policymakers aiming to address gaps such as automotive cybersecurity in future R&D agendas. Full article
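The two descriptive statistics underlying the patent study (filings per sub-technology domain via CPC codes, and the share of triadic patents per applicant) reduce to simple counting. A minimal Python sketch, with toy records: the applicants, CPC codes, and office names below are illustrative assumptions, not the paper's data.

```python
from collections import Counter

# Toy patent records: one dict per filing, with its CPC code and the
# set of offices where protection was sought (all values hypothetical).
patents = [
    {"applicant": "A", "cpc": "B60W", "offices": {"USPTO", "EPO", "JPO"}},
    {"applicant": "A", "cpc": "G08G", "offices": {"USPTO", "EPO", "JPO"}},
    {"applicant": "B", "cpc": "B60W", "offices": {"USPTO", "CNIPA"}},
    {"applicant": "B", "cpc": "H04L", "offices": {"USPTO"}},
]

# A patent is "triadic" when it is filed at all three triad offices.
TRIAD = {"USPTO", "EPO", "JPO"}

# Filings per sub-technology domain, keyed by CPC code.
domain_counts = Counter(p["cpc"] for p in patents)

def triadic_share(applicant):
    """Fraction of an applicant's filings covering the full triad."""
    mine = [p for p in patents if p["applicant"] == applicant]
    triadic = sum(1 for p in mine if TRIAD <= p["offices"])
    return triadic / len(mine)
```

With real data the same logic distinguishes a tri-regional strategy (high triadic share, as the abstract reports for Japan-style filers) from dual-market coverage such as US plus China.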
