
Search Results (2,199)

Search Parameters:
Keywords = financial network

21 pages, 2139 KB  
Article
Structural Symmetry Modeling and Network Optimization for Evaluating Industrial Chain Integration and Firm Performance: Evidence from Xinjiang’s Characteristic Food Processing Industry Under the Big Food Concept
by Ting Wang and Reziyan Wakasi
Symmetry 2026, 18(5), 735; https://doi.org/10.3390/sym18050735 (registering DOI) - 25 Apr 2026
Abstract
Agricultural industrial chains are currently fragmented and do not support the development of resource-based competitive advantages, particularly under the strategic orientation of the Big Food Framework (BFF). This research develops a new analytical framework for evaluating pathways to agricultural industrial chain integration and its impact on the performance of food-processing companies in Xinjiang, using a mixed-method, exploratory sequential design. Data were collected primarily through case studies, supplemented by a questionnaire survey of 145 agricultural enterprises. Structural equation modeling (SEM) is used to test causal paths linking the managerial cognition of firms that have implemented the BFF to integration outcomes, and evidence is presented on the relationships among three modes of integration in the agricultural industrial chain (vertical, horizontal, and lateral), dynamic capabilities, and company performance. Network topology and optimization simulations are also conducted to assess how effectively the companies' integration structures are organized. The research yields the following key findings. Managerial cognition shaped by the BFF plays a key role in enhancing the depth and structural balance of industry-chain integration. Complementary performance effects were found: vertical integration improves operational and financial efficiency, horizontal integration improves market and brand competitiveness, and lateral integration facilitates innovative growth. Dynamic capabilities are a significant mediating mechanism linking institutional support and digital capability with the depth of integration across the different modes of integration.
The network optimization findings suggest that balanced connectivity across the integration dimensions improves overall system efficiency and reduces structural inefficiencies. Based on these findings, the authors recommend that organizations establish governance mechanisms that facilitate coordinated connectivity, strengthen adaptive capabilities within the firm, and promote balanced integration across industrial networks. Future research should apply these findings in longitudinal studies of network evolution, integrate sustainability measures into the analysis, and conduct comparative validation across regions and industry systems. Full article
(This article belongs to the Section Chemistry: Symmetry/Asymmetry)
42 pages, 3269 KB  
Systematic Review
Artificial Intelligence in Disaster Supply Chain Risk Management: A Bibliometric Analysis with Financial Risk Implications
by Ioannis Dimitrios Kamperos, Nikolaos Giannakopoulos, Damianos Sakas and Niki Glaveli
J. Risk Financial Manag. 2026, 19(5), 310; https://doi.org/10.3390/jrfm19050310 (registering DOI) - 25 Apr 2026
Abstract
Disruptions caused by disasters, pandemics, and systemic crises have increased the complexity and vulnerability of global supply chains, highlighting the need for advanced analytical approaches to risk and resilience management. In this context, artificial intelligence (AI) has emerged as a promising analytical capability for improving risk assessment and decision-making in disrupted supply chains. The study follows PRISMA 2020 reporting guidelines adapted for bibliometric research and presents a bibliometric and knowledge-mapping analysis of artificial intelligence applications in disaster supply chain risk and resilience management. Using the Web of Science Core Collection, a dataset of 288 peer-reviewed publications was analyzed through keyword co-occurrence, bibliographic coupling, citation analysis, and collaboration network mapping. The findings indicate a rapidly expanding research field in which AI supports predictive risk assessment, real-time monitoring, and resilience-oriented decision-making in disaster-prone supply networks. The analysis identifies dominant thematic clusters, emerging research directions, and opportunities for integrating AI-enabled analytics into supply chain risk management frameworks. The mapped literature also suggests secondary interpretive implications for financial risk exposure and supply chain finance, rather than indicating a separately operationalized finance-specific bibliometric subfield. To enhance interpretive depth, an AI-assisted analytical layer was applied to refine thematic clusters and detect emerging trends. However, this layer operates as a complementary interpretive tool and is subject to methodological limitations, including sensitivity to keyword semantics, dependence on bibliometric outputs, and potential interpretive bias in AI-assisted thematic labeling. Consequently, the AI-assisted analysis is used to support, rather than replace, bibliometric findings. 
Overall, this study contributes to the emerging literature on artificial intelligence in disaster supply chain risk management and highlights future research opportunities, including improved methodological integration and enhanced analytical transparency in AI-assisted bibliometric research. Full article
(This article belongs to the Special Issue Supply Chain Finance and Management)
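Keyword co-occurrence analysis of the kind applied in this bibliometric study reduces to counting how often keyword pairs appear in the same paper. A minimal sketch with hypothetical keyword lists (not the study's Web of Science dataset):

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists for four papers (illustrative only).
papers = [
    ["artificial intelligence", "supply chain", "resilience"],
    ["artificial intelligence", "risk management", "supply chain"],
    ["supply chain", "resilience", "disaster"],
    ["artificial intelligence", "disaster", "risk management"],
]

def cooccurrence(keyword_lists):
    """Count how often each unordered keyword pair appears in the same paper."""
    pairs = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

edges = cooccurrence(papers)
# The heaviest edges become the links of the co-occurrence network.
strongest = max(edges, key=edges.get)
```

Tools such as VOSviewer cluster exactly this kind of weighted edge list into the thematic clusters the paper maps.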
28 pages, 11068 KB  
Article
Dynamic Interlinkages Between Energy, Food and Metal Prices Under the Geopolitical Tension
by Linda Karlina Sari, Muchamad Bachtiar, Noer Azam Achsani and Reni Lestari
Resources 2026, 15(5), 61; https://doi.org/10.3390/resources15050061 (registering DOI) - 24 Apr 2026
Abstract
This study examines the dynamic interlinkages among energy, food, and metal commodity markets under geopolitical tensions using daily data from January 2022 to July 2025. The empirical framework integrates correlation analysis, Granger causality tests, and a Vector Error Correction Model (VECM) to capture both short- and long-run transmission mechanisms, with robustness assessed through impulse response functions, forecast error variance decomposition, and a Diebold–Yilmaz connectedness analysis across three structurally distinct geopolitical event windows. The results reveal asymmetric and sector-specific transmission patterns in which geopolitical risk significantly influences key commodity prices—particularly WTI crude oil, wheat, copper, and aluminium—confirming its role as a primary external shock driver. WTI emerges as the dominant transmitter of shocks, while industrial metals exhibit strong internal connectedness. Critically, gold’s role proves to be conditional and context-dependent: within an integrated energy–food–metal network under geopolitical stress, it functions primarily as a net receiver and passive absorber of macroeconomic uncertainty rather than as a systemic transmitter, a finding that complements, rather than contradicts, its established safe-haven role in financial asset pricing frameworks. These findings are subject to limitations, including reliance on futures price data and a linear VECM framework that may not fully capture nonlinear or regime-dependent dynamics. Full article
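The Diebold–Yilmaz connectedness measures mentioned above are simple aggregates of a forecast-error-variance-decomposition (FEVD) matrix. A minimal sketch with an illustrative matrix (not the paper's estimates):

```python
# Row i of the FEVD matrix gives the shares of variable i's forecast-error
# variance attributed to shocks in each variable j; rows sum to 1.
# Values below are illustrative, not the paper's estimates.
fevd = {
    "WTI":    {"WTI": 0.70, "wheat": 0.10, "copper": 0.20},
    "wheat":  {"WTI": 0.30, "wheat": 0.60, "copper": 0.10},
    "copper": {"WTI": 0.25, "wheat": 0.05, "copper": 0.70},
}
names = list(fevd)
n = len(names)

# "From" connectedness: share of i's variance due to others' shocks.
from_others = {i: sum(fevd[i][j] for j in names if j != i) for i in names}
# "To" connectedness: share of others' variance due to i's shocks.
to_others = {j: sum(fevd[i][j] for i in names if i != j) for j in names}
# Net directional connectedness and the total connectedness index.
net = {i: to_others[i] - from_others[i] for i in names}
total = sum(from_others.values()) / n
```

A positive net value marks a net transmitter of shocks (here WTI, echoing the paper's finding) and a negative one a net receiver.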
21 pages, 1596 KB  
Article
Integration of Building Information Modelling and Economic Multi-Criteria Decision-Making with Neural Networks: Towards a Smart Renewable Energy Community
by Helena M. Ramos, Ana Paula Falcao, Praful Borkar, Oscar E. Coronado-Hernández, Francisco-Javier Sánchez-Romero and Modesto Pérez-Sánchez
Algorithms 2026, 19(5), 327; https://doi.org/10.3390/a19050327 - 23 Apr 2026
Viewed by 76
Abstract
This research introduces a novel methodology that combines Building Information Modelling (BIM) and Economic Multi-Criteria Decision-Making (EMCDM) with Neural Networks to optimize hybrid renewable energy systems in small communities. Its core aim is to improve sustainability, technical performance, and financial viability through integrated modelling and decision-making. The approach is applied to a hydropower site, evaluating five scenarios (IDs 1–5) under a Community and Industry model. Financial benchmarks include a 10% Minimum Required Return and a 7-year payback period. ID3 (hydropower, solar, and wind) proves most effective, with an ANPV of €10,905 (wet) and €4501 (dry), and an ROI of 155%/64%. Its ROIA/MRA Index peaks at 539%, and Payback/N ratios remain within acceptable limits (55%/96%). The LCOE stays stable in average conditions (0.042–0.046 €/kWh), rising in dry years (0.07–0.10 €/kWh). Profitability differences stem primarily from demand and curtailment rather than production costs. The NARX neural network reliably models SS% values from renewable inputs with low error across scenarios. The integrated BIM–EMCDM framework supports transparent, sustainable, and risk-balanced energy system decisions for long-term autonomy. Full article
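The 10% minimum required return and 7-year payback benchmarks are standard discounted-cash-flow screens. A minimal sketch with hypothetical cash flows (not the paper's scenario data):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[t] is received at the end of year t (t=0 today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_years(cashflows):
    """First year in which cumulative (undiscounted) cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # never pays back within the horizon

# Hypothetical project: 10,000 upfront, 2,500 per year for 8 years.
flows = [-10_000] + [2_500] * 8
mrr = 0.10  # the 10% Minimum Required Return used as a benchmark above

project_npv = npv(mrr, flows)
project_payback = payback_years(flows)
```

A project clears both screens when its NPV at the MRR is positive and its payback falls within the 7-year limit; here NPV is roughly 3,337 and payback is 4 years.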
22 pages, 566 KB  
Article
Towards Sustainable Inventory Systems: Multi-Objective Optimisation of Economic Cost and CO2 Emissions in Multi-Echelon Supply Chains
by Joaquim Jorge Vicente
Sustainability 2026, 18(9), 4205; https://doi.org/10.3390/su18094205 - 23 Apr 2026
Viewed by 115
Abstract
Effective supply chain planning increasingly requires balancing cost-efficiency with environmental responsibility, particularly as organisations face growing pressure to reduce the carbon footprint of logistics operations. This study develops a mixed-integer linear programming model to optimise inventory and transportation decisions in a multi-echelon distribution network comprising a central warehouse, regional warehouses, and retailers. The model integrates a continuous-review (r,Q) replenishment policy, stochastic demand, safety stock requirements, transportation lead times, and stockout behaviour, enabling a detailed representation of operational dynamics under uncertainty and environmental concerns. Unlike most sustainable inventory models—which typically treat environmental impacts and replenishment control separately or rely on simplified service assumptions—this study provides an integrated framework that jointly embeds (r,Q) policies, stochastic demand, stockouts and distance-based CO2 metrics within a unified optimisation structure. The model advances prior work by explicitly integrating continuous-review (r,Q) replenishment policies with distance-based CO2 metrics under stochastic demand, a combination rarely addressed in sustainable multi-echelon inventory models. A multi-objective formulation captures the trade-off between economic performance and CO2 emissions, allowing the identification of Pareto-efficient strategies that reconcile financial and environmental goals. Reducing emissions by over 90% requires an additional cost of only about 4%, demonstrating that substantial emission reductions can be achieved at relatively low additional cost. The findings offer practical insights for managers seeking to design more sustainable and cost-effective distribution policies, highlighting the value of integrated optimisation approaches in contemporary logistics systems. Full article
(This article belongs to the Special Issue Green Supply Chain and Sustainable Economic Development—2nd Edition)
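The continuous-review (r,Q) policy embedded in the model can be illustrated with the textbook reorder-point and lot-size formulas. A minimal sketch with illustrative parameters (not the paper's data):

```python
import math

def reorder_point(mu_d, sigma_d, lead_time, z):
    """Continuous-review (r,Q) reorder point: expected lead-time demand plus
    safety stock, with z the service-level factor (normal demand assumed)."""
    return mu_d * lead_time + z * sigma_d * math.sqrt(lead_time)

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity, a common choice for the fixed lot size Q."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Illustrative parameters: daily demand ~ N(100, 20^2), 4-day lead time,
# z = 1.65 (about a 95% cycle service level).
r = reorder_point(mu_d=100, sigma_d=20, lead_time=4, z=1.65)
Q = eoq(annual_demand=36_500, order_cost=50, holding_cost=2)
```

The multi-objective MILP then trades the cost implications of r and Q against distance-based CO2 from shipments; this sketch covers only the replenishment-policy inputs.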
39 pages, 3419 KB  
Review
Opportunities and Challenges of Sensor- and Acoustic-Based Irrigation Monitoring Technologies in South Africa: A Scoping Review with Machine Learning-Enhanced Evidence Synthesis
by Gift Siphiwe Nxumalo, Tondani Sanah Ramabulana, Noxolo Felicia Vilakazi and Attila Nagy
AgriEngineering 2026, 8(5), 161; https://doi.org/10.3390/agriengineering8050161 - 23 Apr 2026
Viewed by 78
Abstract
South African irrigation schemes face critical challenges of water scarcity, infrastructure deterioration, and limited monitoring capacity, threatening agricultural productivity and food security. This scoping review systematically analyses 59 peer-reviewed publications (2000–2025) on sensor-based and acoustic irrigation monitoring technologies in South Africa, using transformer-based natural language processing (Sentence-BERT embeddings), unsupervised Machine Learning (UMAP dimensionality reduction, HDBSCAN clustering), and geospatial mapping applied to literature retrieved from Web of Science and Scopus. Results show that water quality monitoring (42.4% of studies) and remote sensing (25.4%) dominate the national research landscape, while soil moisture sensing and modelling remain comparatively limited. Notably, no peer-reviewed studies applying acoustic monitoring technologies to irrigation were identified, representing a critical gap despite proven international applications for leak detection (95–98% accuracy), widespread infrastructure aging (over 50% of schemes exceeding 30 years), and reported water losses of 30–60% in poorly managed systems. Reported experimental water savings range from 15% to 30%, yet applications remain largely confined to pilot-scale implementations concentrated within a limited number of Water Management Areas. Persistent adoption barriers include infrastructure unreliability, financial inaccessibility, limited digital literacy, and weak institutional coordination. The review recommends: (i) expanding research coverage across underrepresented regions and Water Management Areas; (ii) strengthening extension support and technical training to enable broader adoption; and (iii) integrating low-cost sensor networks with predictive, data-driven irrigation advisory systems. These priorities aim to support scalable, context-sensitive irrigation modernisation under increasing water scarcity pressures. Full article
(This article belongs to the Section Agricultural Irrigation Systems)
30 pages, 4257 KB  
Article
A Sustainable and Resilient Distribution System Restoration Framework Based on Intentional Islanding and Blockchain-Based P2P Insurance
by Amany El-Zonkoly
Sustainability 2026, 18(9), 4163; https://doi.org/10.3390/su18094163 - 22 Apr 2026
Viewed by 184
Abstract
Extreme weather events have raised the frequency of power outages, posing critical challenges to the sustainability and resilience of modern power systems. In such cases, distributed energy resources (DERs) can effectively support the re-establishment of sustainable power supply for critical loads within the distribution network and reduce power outage losses. In this paper, a sustainable fault recovery framework based on an intentional islanding scheme is proposed to partition the distribution system in order to optimize the priority restoration of critical loads, while taking the operational constraints of the system into consideration. In addition, a blockchain-based P2P insurance mechanism is applied to mitigate the outage losses of the network’s users with a higher degree of security and transparency. By linking technical restoration decisions with financial risk-sharing mechanisms, the proposed framework improves economic sustainability and social equity among network users. For this purpose, a multi-layer, multi-objective optimization algorithm is proposed for optimal partitioning of the distribution network, management of DERs, and demand side management of flexible loads in order to minimize the outage losses and the insurance premium, while maintaining satisfactory performance of the network. To validate the feasibility of the proposed algorithm, the 45-node distribution network of Alexandria, Egypt is used. The results show that a reduction in peak load, outage losses, and operational costs are achieved, with an overall saving of 17.34%, in addition to a premium reduction of 41.3%. These results highlight the effectiveness of the proposed framework in enhancing the environmental, economic, and operational sustainability of distribution systems under outage conditions. Full article
42 pages, 966 KB  
Article
Garbage In, Garbage Out? The Impact of Data Quality on the Performance of Financial Distress Prediction Models
by Veronika Labosova, Lucia Duricova, Katarina Kramarova and Marek Durica
Forecasting 2026, 8(3), 35; https://doi.org/10.3390/forecast8030035 - 22 Apr 2026
Viewed by 253
Abstract
Financial distress prediction remains a central topic in corporate finance and risk management, with extensive research devoted to improving classification accuracy through increasingly sophisticated statistical and machine learning techniques. Nevertheless, the influence of data preparation on predictive performance has received comparatively less systematic attention. This study examines how an economically grounded data-preparation process affects the predictive performance of selected statistical and machine-learning models dedicated to predicting corporate financial distress. Using the chosen financial ratios, generally accepted indicators of corporate financial stability and economic performance, financial distress models are estimated on both raw, unprocessed input data and pre-processed data involving the exclusion of economically implausible accounting values, treatment of missing observations, and class balancing. In light of the above, the study adopts a structured methodological approach to assess the predictive performance of selected classification models, namely decision tree algorithms (CART, CHAID, and C5.0), artificial neural networks (ANNs), logistic regression (LR), and linear discriminant analysis (DA), using confusion-matrix–based evaluation and a comprehensive set of evaluation measures. The results suggest that the process of input data preparation is a critical factor, significantly improving the predictive performance of financial distress prediction models across most modelling techniques employed. The most pronounced gains are observed in decision tree models. ANNs also demonstrate marked improvement after input data preparation, whereas LR benefits more moderately, and linear DA remains limited despite preprocessing. 
The average gain in accuracy across all six modelling techniques, calculated as the difference between pre-processed and raw performance for each method and averaged across methods, was approximately 15.6 percentage points, with specificity improving by approximately 26.9 percentage points on average, amounting to roughly half the performance variation attributable to algorithm choice, which underscores that data preparation is a primary determinant of model reliability alongside algorithm selection. A step-level detailed analysis further shows that missing value imputation is the dominant driver of improvement for tree-based models, while class balancing contributes most for ANNs and logistic regression. The findings highlight that reliable financial distress prediction depends not only on technique selection but also on the consistency and economic plausibility of the input data, underscoring the central role of structured data preparation in developing robust early-warning models. Full article
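The confusion-matrix measures behind these accuracy and specificity gains are straightforward to compute. A minimal sketch with illustrative counts (not the study's figures):

```python
def confusion_metrics(tp, fp, fn, tn):
    """Confusion-matrix measures for a binary distress classifier
    (positive class = financially distressed firm)."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),  # distressed firms correctly flagged
        "specificity": tn / (tn + fp),  # healthy firms correctly cleared
    }

# Illustrative counts before and after data preparation (not the study's data).
raw      = confusion_metrics(tp=60, fp=45, fn=40, tn=55)
prepared = confusion_metrics(tp=85, fp=18, fn=15, tn=82)

accuracy_gain_pp    = 100 * (prepared["accuracy"] - raw["accuracy"])
specificity_gain_pp = 100 * (prepared["specificity"] - raw["specificity"])
```

Percentage-point gains of this form are what the study averages across its six modelling techniques.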
21 pages, 470 KB  
Article
Regulating the Crypto-Laundering Chain: A Comparative Study of Scam Compounds and Money Mule Mechanisms Within Criminal Networks
by Gioia Arnone
Risks 2026, 14(4), 96; https://doi.org/10.3390/risks14040096 - 21 Apr 2026
Viewed by 128
Abstract
This paper examines how scam compounds, money mules and crypto-assets operate as interdependent elements of contemporary money-laundering chains. It assesses whether existing anti-money laundering (AML) and crypto-asset regulatory frameworks are capable of disrupting these chains holistically, rather than addressing individual components in isolation, with particular reference to scam-compound activity in Southeast Asia. The study adopts a qualitative comparative case-study methodology grounded in legal and regulatory analysis. Four empirically grounded cases are examined: two Southeast Asian scam-compound enforcement cases (Cambodia and Myanmar) and two European crypto-asset seizure cases (Ireland and Italy). Judicial decisions, enforcement actions and regulatory instruments are analysed through a chain-based analytical framework aligned with Financial Action Task Force (FATF) standards, the EU Markets in Crypto-Assets Regulation (MiCA) and the Anti-Money Laundering Authority (AMLA) framework. The analysis reveals a structural divergence in enforcement strategies: Southeast Asian responses increasingly prioritise network- and infrastructure-level disruption of scam compounds, whereas European approaches remain largely centred on post-offence crypto-asset seizure through traditional proceeds-of-crime mechanisms. Across all jurisdictions, money mules emerge as a critical yet systematically under-regulated intermediary layer enabling the resilience of crypto-laundering operations. The paper advances existing AML typologies by conceptualising scam compounds, money mules and crypto-assets as interconnected components of a single crypto-laundering chain. This chain-based perspective offers a novel analytical and regulatory lens for understanding organised crypto-enabled fraud. The study is based on a qualitative, case-based design and does not aim for statistical generalisation. 
However, the analytical framework developed is transferable to other jurisdictions experiencing similar scam-compound and crypto-laundering dynamics. The findings suggest that effective AML enforcement requires coordinated intervention across multiple nodes of the laundering chain, including scam compound infrastructure and money mule networks, alongside traditional asset-seizure mechanisms and CASP supervision. By highlighting the structural links between scam compounds, coercive labour and crypto-laundering mechanisms, the paper underscores the broader social harms of crypto-enabled fraud and the need for integrated regulatory responses that address both financial crime and human exploitation. Full article
28 pages, 1008 KB  
Review
Deep Learning for Credit Risk Prediction: A Survey of Methods, Applications, and Challenges
by Ibomoiye Domor Mienye, Ebenezer Esenogho and Cameron Modisane
Information 2026, 17(4), 395; https://doi.org/10.3390/info17040395 - 21 Apr 2026
Viewed by 160
Abstract
Credit risk prediction is central to financial stability and regulatory compliance, guiding lending decisions and portfolio risk management. While traditional approaches such as logistic regression and tree-based models have long been the industry standard, recent advances in deep learning (DL) have introduced architectures capable of capturing complex nonlinearities, temporal dynamics, and relational dependencies in borrower data. This study provides a comprehensive review of DL methods applied to credit risk prediction, covering multi-layer perceptron, recurrent and convolutional neural networks, transformer, and graph neural networks. We examine benchmark and large-scale datasets, highlight peer-reviewed applications across corporate, consumer, and peer-to-peer lending, and evaluate the benefits of DL relative to classical machine learning. In addition, we critically assess key challenges and identify emerging opportunities. By synthesising methods, applications, and open challenges, this paper offers a roadmap for advancing trustworthy deep learning in credit risk modelling and bridging the gap between academic research and industry deployment. Full article
(This article belongs to the Special Issue Predictive Analytics and Data Science, 3rd Edition)
29 pages, 1833 KB  
Article
MSTFNet: Multi-Scale Temporal Fusion Network with Frequency-Enhanced Attention for Financial Time Series Forecasting
by Qian Xia and Wenhao Kang
Mathematics 2026, 14(8), 1391; https://doi.org/10.3390/math14081391 - 21 Apr 2026
Viewed by 129
Abstract
Financial time series forecasting remains a persistent challenge due to the non-stationary nature, inherent noise, and multi-scale temporal dependencies present in market data. This paper presents MSTFNet, a multi-scale temporal fusion network that combines dilated causal convolutions with a frequency-enhanced sparse attention mechanism for improved financial prediction. The proposed architecture consists of three core components: a multi-scale dilated causal convolution module that extracts temporal patterns across different time horizons through parallel convolutional branches with varying dilation rates, a frequency-enhanced sparse attention mechanism that leverages the Fast Fourier Transform to identify dominant periodic components and modulate attention weights accordingly, and an adaptive scale fusion gate that learns to dynamically combine representations from multiple temporal scales. Extensive experiments conducted on three public financial datasets (S&P 500, CSI 300, and NASDAQ Composite) spanning the period from January 2015 to December 2024 show two key results. First, consistent with near-efficient markets, the random-walk benchmark (ŷ_{t+1} = y_t) outperforms all the data-driven models on level-error metrics (MAE, RMSE, MAPE, and R²), establishing the martingale as the binding lower bound on point-prediction error. Second, MSTFNet achieves the highest directional accuracy (DA) across all three indices (56.3% on the S&P 500 versus 50.0% for the martingale), representing a 6.3 percentage-point improvement that generates positive pre-cost returns in a trading strategy backtest. Among the eight data-driven baselines (LSTM, GRU, TCN, Transformer, Autoformer, FEDformer, PatchTST, and iTransformer), MSTFNet also achieves the lowest MAE, reducing it by 13.6% relative to the strongest data-driven baseline (iTransformer) on the S&P 500.
These results confirm that integrating multi-scale temporal modeling with frequency-domain guidance extracts a real, if modest, directional signal from financial time series. Full article
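Directional accuracy and the random-walk (martingale) baseline ŷ_{t+1} = y_t reduce to a simple sign comparison. A minimal sketch on a short hypothetical price series (not the paper's data):

```python
def directional_accuracy(actual, forecast):
    """Fraction of steps where the forecast calls the sign of the price change
    correctly. forecast[t] is the predicted level for time t; moves are taken
    relative to actual[t-1]. A zero predicted move counts as a "down" call."""
    hits = 0
    for t in range(1, len(actual)):
        true_up = actual[t] - actual[t - 1] > 0
        pred_up = forecast[t] - actual[t - 1] > 0
        hits += true_up == pred_up
    return hits / (len(actual) - 1)

# Hypothetical prices and forecasts (illustrative only).
y = [100, 102, 101, 104, 103, 105]
rw_forecast = [y[0]] + y[:-1]                    # random walk: forecast[t] = y[t-1]
model_forecast = [100, 103, 100, 105, 105, 106]  # some hypothetical model

da_model = directional_accuracy(y, model_forecast)
da_rw = directional_accuracy(y, rw_forecast)
```

The random walk's implied move is always zero, so any genuine directional signal shows up as DA above the baseline, mirroring the 56.3% versus 50.0% comparison reported.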
34 pages, 22620 KB  
Article
Improved Secretary Bird Optimization Algorithm Based on Financial Investment Strategy for Global Optimization and Real Application Problems
by Yiming Liu, Bingchun Yuan and Shuqi Yuan
Symmetry 2026, 18(4), 688; https://doi.org/10.3390/sym18040688 - 21 Apr 2026
Viewed by 209
Abstract
This paper proposes a multi-strategy Secretary Bird Optimization Algorithm (MS-SBOA) for solving global optimization problems and 3D wireless sensor network deployment. While preserving the original two-phase search framework of SBOA, the proposed algorithm achieves a dynamic balance between global exploration and local exploitation [...] Read more.
This paper proposes a multi-strategy Secretary Bird Optimization Algorithm (MS-SBOA) for solving global optimization problems and 3D wireless sensor network deployment. While preserving the original two-phase search framework of SBOA, the proposed algorithm achieves a dynamic balance between global exploration and local exploitation through the synergistic integration of multiple enhancement strategies, including a hybrid initialization scheme combining Latin hypercube sampling and quasi-opposition-based learning, a success-history-based adaptive parameter learning mechanism, a finance-inspired market-state trading operator, and an elite-guided population regulation strategy. Experimental results on the IEEE CEC2020 and CEC2022 benchmark test suites demonstrate that MS-SBOA significantly outperforms nine comparative algorithms, including VPPSO, IAGWO, and QHSBOA, under both 10-dimensional and 20-dimensional settings. The proposed algorithm exhibits superior optimization accuracy, faster convergence speed, and stronger robustness. Statistical analyses using the Wilcoxon rank-sum test and the Friedman mean rank test further confirm that the observed performance improvements are statistically significant. Moreover, MS-SBOA is applied to three-dimensional wireless sensor network (3D WSN) deployment optimization problems, where the average coverage rates reach 76.22% and 82.32% for 30-node and 50-node deployment scenarios, respectively. The resulting node distributions are more uniform, and the computational efficiency is improved compared with competing algorithms. Full article
(This article belongs to the Special Issue Symmetry in Optimization Algorithms and Applications)
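The hybrid initialization scheme mentioned in the abstract, Latin hypercube sampling combined with quasi-opposition-based learning, can be sketched as follows. The keep-the-n-fittest merging rule and the sphere test function are illustrative assumptions; the paper's exact scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, dim, lb, ub):
    """One sample per stratum per dimension; strata are shuffled independently
    so every dimension is evenly covered."""
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n   # stratified in [0, 1)
    for d in range(dim):
        u[:, d] = u[rng.permutation(n), d]
    return lb + u * (ub - lb)

def quasi_opposite(pop, lb, ub):
    """Quasi-opposition: a uniform point between the centre of the search
    space and the mirror image lb + ub - x of each individual."""
    centre = (lb + ub) / 2.0
    mirror = lb + ub - pop
    lo, hi = np.minimum(centre, mirror), np.maximum(centre, mirror)
    return lo + rng.random(pop.shape) * (hi - lo)

def init_population(n, dim, lb, ub, fitness):
    """Merge the LHS points with their quasi-opposites and keep the n fittest."""
    pop = latin_hypercube(n, dim, lb, ub)
    cand = np.vstack([pop, quasi_opposite(pop, lb, ub)])
    order = np.argsort([fitness(x) for x in cand])
    return cand[order[:n]]

pop = init_population(10, 3, -5.0, 5.0, lambda x: float(np.sum(x ** 2)))
```

Doubling the candidate pool with quasi-opposites and keeping the fittest half gives the initial population both even coverage and a head start toward promising regions.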

29 pages, 1027 KB  
Review
The Impact of Dementia Caregiving on the Health of the Spousal Caregiver
by Donna de Levante Raphael, Lora J. Kasselman, Wendy Drewes, Isabella Wolff, Luke Betlow, Joshua De Leon and Allison B. Reiss
Medicina 2026, 62(4), 796; https://doi.org/10.3390/medicina62040796 - 21 Apr 2026
Viewed by 551
Abstract
Dementia caregiving represents a major public health challenge, with spousal caregivers assuming the greatest burden. Spouses, themselves typically older adults, provide high-intensity, long-term, and largely unpaid care across all stages of cognitive decline. Despite their central role in dementia care, the health consequences experienced by spousal caregivers remain insufficiently characterized in the literature and inadequately addressed in clinical and public health practice. This structured narrative review synthesizes current evidence on the multidimensional impact of dementia caregiving on the physical, psychological, cognitive, social, and financial health of spousal caregivers. It further contextualizes these consequences within the trajectory of dementia progression and identifies interventions, support systems, and policy considerations necessary to mitigate caregiver burden. Spousal caregivers experience disproportionate burden due to continuous, escalating responsibilities that often mirror the progressive deterioration of their partners. Emotional burdens include uncertainty during pre-diagnostic stages, role strain, conflict, loss of intimacy, and anticipatory grief. Physically, spouses endure musculoskeletal strain, sleep disruption, poor nutrition, and heightened frailty risk. Psychologically, spousal caregivers exhibit elevated rates of depression, anxiety, loneliness, and stress-related disorders. Socially, caregivers experience substantial isolation, stigma, and erosion of social networks. Financial hardship, including early retirement, reduced employment, and uncompensated care hours, further exacerbates stress. Evidence suggests that chronic caregiving stress contributes to biological changes such as immune dysregulation, inflammation, accelerated aging, and potential cognitive decline in caregivers themselves. 
Caregiver burden influences patient outcomes, as evidenced by increased emergency department use, falls, and earlier institutionalization in persons with dementia whose caregiver is subjected to a high burden. Current care models rarely include routine caregiver assessment or structured guidance following diagnosis, resulting in substantial unmet needs. Effective mitigation requires integrated, stage-sensitive interventions, including psychosocial support, caregiver education, respite services, culturally tailored programs, and digital health tools, alongside broader policy reforms to reduce financial and structural barriers. Full article
(This article belongs to the Section Neurology)

20 pages, 977 KB  
Article
An Enhanced Multi-Task Deep Learning Framework for Joint Prediction of Customer Churn and Downsell
by Qiang Zhang, Lihong Zhang and Yanfeng Chai
Appl. Sci. 2026, 16(8), 4014; https://doi.org/10.3390/app16084014 - 21 Apr 2026
Viewed by 203
Abstract
Customer churn refers to the termination of a customer’s business relationship with a bank, representing a direct loss of future revenue. Product downsell manifests as a reduction in the number of financial products held or a downgrade in service tier, often signaling early customer disengagement. Accurately identifying customers at risk of these two behaviors has become a cornerstone of profitable growth in the competitive retail banking industry as downsell frequently serves as a precursor to total churn. However, the existing research typically treats these highly correlated behaviors as independent prediction tasks, overlooking their intrinsic link and failing to address the critical challenges of class imbalance and regulatory demands for model interpretability. To tackle these problems, we propose an enhanced multi-task learning network (EMTL-Net), a deep learning framework specifically designed to capture the nuanced interplay between churn and downsell behaviors. EMTL-Net introduces an explicit feature interaction module to enhance the modeling of high-order feature relationships and utilizes a shared representation layer to extract universal customer risk patterns, enabling the joint prediction of churn and downsell. Furthermore, we employ Focal Loss as the training objective to dynamically adjust sample weights, effectively mitigating the class imbalance problem. Critically, to meet financial compliance requirements, we implement a SHAP-based interpretation mechanism that is compatible with multi-task outputs, providing preliminary insights into feature importance. Formal validation of interpretability claims remains an important direction for future research. The experimental results on a publicly available pedagogical bank customer benchmark dataset demonstrate that EMTL-Net achieves excellent performance on both tasks. 
For churn prediction, the model achieves an AUC of 0.8259, an accuracy of 0.8361, and an F1-score of 0.6235, significantly outperforming the existing baseline models. For downsell prediction (noting that the downsell label is rule-derived from the number of products held), the model achieves an AUC of 0.8932, an accuracy of 0.8571, and an F1-score of 0.7504. Ablation studies confirm the critical contributions of the explicit feature interaction module, Focal Loss, and the residual structure to model performance. Crucially, the interpretability analysis corroborates business intuition by identifying customer age, account balance, and product holdings as dominant churn drivers—a consistency that reinforces the model’s credibility and practical utility in high-stakes financial environments. Full article
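The Focal Loss objective used to counter class imbalance can be sketched in NumPy. This is the standard binary form with the commonly used defaults γ = 2 and α = 0.25; EMTL-Net's actual hyperparameter settings are an assumption here.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-12):
    """Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma so that
    well-classified (easy) samples contribute little to the gradient."""
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)          # prob. assigned to the true class
    a_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balance weight
    return -a_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A confident correct prediction is down-weighted far more than a hard one,
# which keeps rare churn/downsell positives from being drowned out.
easy = focal_loss(np.array([0.95]), np.array([1]))[0]
hard = focal_loss(np.array([0.30]), np.array([1]))[0]
```

With γ = 0 and α = 1 the expression reduces to plain cross-entropy, which makes the modulating factor easy to isolate when tuning.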

19 pages, 1412 KB  
Article
A Micro-Manifold Identity-Preserving Spatiotemporal Graph Neural Network for Financial Risk Early Warning
by Jin Kuang, Fusheng Chen, Te Guo and Chiawei Chu
Mathematics 2026, 14(8), 1388; https://doi.org/10.3390/math14081388 - 21 Apr 2026
Viewed by 209
Abstract
Traditional financial early warning models often rely on the independent and identically distributed (IID) assumption, failing to adequately capture cross-sectional spatial contagion effects and temporal dynamic mutations, and are susceptible to the over-smoothing problem when processing highly imbalanced graph networks. To address these limitations, this study proposes a micro-manifold-based identity-preserving spatiotemporal graph neural network framework (Micro-STAGNN). In the spatial dimension, an identity-preserving graph convolutional operator (IP-GCN) is constructed. By hard-coding a self-preservation coefficient (λ=0.8), it quantifies peer risk spillover while mitigating feature dilution, ensuring the transmission of heterogeneous default signals. In the temporal dimension, Long Short-Term Memory networks are cascaded with a temporal attention mechanism to capture the nonlinear temporal inflection points that trigger financial distress. The empirical study utilizes a sample of China’s A-share market from 2015 to 2025, evaluating the model using an Out-of-Time Validation protocol and Focal Loss. Results indicate that under a highly imbalanced distribution with a positive-to-negative sample ratio of approximately 1:50, Micro-STAGNN achieves an OOT ROC-AUC of 0.9095, a minority class default recall of 89%, and reduces the missed detection rate to 11%, outperforming traditional nonlinear cross-sectional models such as XGBoost. Furthermore, temporal attention weights provide explainable support for the early warning results. Full article
(This article belongs to the Special Issue Mathematical Methods for Economics, Finance and Actuarial Sciences)
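The identity-preserving propagation rule at the heart of IP-GCN can be sketched on a minimal two-node example. The learnable weight matrix and nonlinearity of a full GCN layer are omitted; only the hard-coded self-preservation coefficient λ = 0.8 comes from the abstract.

```python
import numpy as np

def ip_gcn_layer(H, A, lam=0.8):
    """Identity-preserving propagation: each node keeps lam of its own
    features and absorbs (1 - lam) of the average of its neighbours,
    modelling peer risk spillover without diluting the node's own signal."""
    deg = A.sum(axis=1, keepdims=True)
    neigh = np.divide(A @ H, deg, out=np.zeros_like(H), where=deg > 0)
    return lam * H + (1.0 - lam) * neigh

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # two mutually linked firms
H = np.array([[1.0], [0.0]])             # node 0 carries a default signal
H1 = ip_gcn_layer(H, A)                  # node 0 -> 0.8, node 1 -> 0.2
```

Because λ bounds how much of a node's own feature survives each layer, stacking layers cannot average the default signal away, which is the over-smoothing failure mode the abstract targets.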
