Article

Improving Real-Time Economic Decisions Through Edge Computing: Implications for Financial Contagion Risk Management

Department of Economic Informatics and Cybernetics, Bucharest University of Economic Studies, 010552 Bucharest, Romania
* Author to whom correspondence should be addressed.
Computers 2025, 14(5), 196; https://doi.org/10.3390/computers14050196
Submission received: 28 April 2025 / Revised: 13 May 2025 / Accepted: 15 May 2025 / Published: 18 May 2025

Abstract

In the face of accelerating digitalization and growing systemic vulnerabilities, the ability to make accurate, real-time economic decisions has become a critical capability for financial and institutional stability. This study investigates how edge computing infrastructures influence decision-making accuracy, responsiveness, and risk containment in economic systems, particularly under the threat of financial contagion. A synthetic dataset simulating the interaction between economic indicators and edge performance metrics was constructed to emulate real-time decision environments. Composite indicators were developed to quantify key dynamics, and a range of machine learning models, including XGBoost, Random Forest, and Neural Networks, were applied to classify economic decision outcomes. The results indicate that low latency, efficient resource use, and balanced workload distribution are significantly associated with higher decision quality. XGBoost outperformed all other models, achieving 97% accuracy and an ROC-AUC of 0.998. The findings suggest that edge computing performance metrics can act as predictive signals for systemic fragility and may be integrated into early warning systems for financial risk management. This study contributes to the literature by offering a novel framework for modeling the economic implications of edge intelligence and provides policy insights for designing resilient, real-time financial infrastructures.

1. Introduction

In today’s highly interconnected and volatile economic landscape, the ability to make accurate, real-time decisions is more essential than ever. Financial markets react within milliseconds to new information, and delays in data processing or transmission can amplify systemic vulnerabilities. Traditional centralized computing infrastructures, while powerful, often fail to meet the ultra-low latency requirements of modern economic systems [1,2], especially during periods of financial turbulence [3,4].
Edge computing has emerged as a transformative solution to this challenge by enabling data processing at or near the source [5,6], significantly reducing the response time [7,8,9] and improving operational resilience [10,11]. When integrated with Internet of Things (IoT) technologies, edge computing facilitates localized analytics [12,13,14], real-time optimization [15,16,17], and faster reaction cycles in economic systems [18,19]. This distributed model is particularly relevant for institutions and platforms that manage financial flows, risk signals, and decision triggers at scale.
One of the critical applications of edge computing in the financial domain lies in the management of financial contagion risk, the rapid transmission of financial shocks from one institution or market segment to others [4,20,21,22]. During such events, the ability to localize decisions, filter signals at the edge, and dynamically adjust resource allocation can mitigate systemic collapse and improve stability.
The main purpose of this research is to explore how edge computing infrastructures influence the accuracy, responsiveness, and quality of economic decision-making under uncertainty, particularly in the context of financial contagion. Unlike previous studies that typically examine the technological and financial domains separately, this work introduces a hybrid analytical framework that integrates economic behavior signals with real-time edge performance metrics, employing machine learning techniques for predictive and interpretative modeling.
This study contributes to the literature by (i) introducing composite indicators that link computation dynamics with economic decision outputs; (ii) simulating real-time decision environments using IoT-augmented edge systems and synthetic economic scenarios; and (iii) providing empirical evidence on how edge-enhanced infrastructure can be leveraged in order to mitigate systemic risk propagation in complex economic systems.
To achieve this goal, the study addresses the following research questions (RQs):
  • RQ1: How can composite indicators be constructed to effectively reflect the interplay between edge system performance and real-time economic decision variables?
  • RQ2: What are the significant relationships between edge computing performance metrics and the outcomes of economic decision-making processes?
  • RQ3: Which machine learning algorithms provide the most accurate and robust classification of economic decision outcomes in edge-enhanced environments, and how can these models be optimized for effective risk management?
The structure of the paper is as follows. Section 2 reviews the relevant literature on financial contagion, edge computing, and real-time economic systems. Section 3 details the methodology, including data description, composite indicator construction, and machine learning models. Section 4 presents the empirical results and visual analysis. Section 5 offers a comprehensive discussion on the economic and technological implications of the findings. Finally, Section 6 concludes this study by answering the research questions, outlining the key contributions, acknowledging limitations, and suggesting avenues for future research.
Unlike previous studies that focus either on technical aspects of edge computing (e.g., latency reduction, blockchain integration) or on network-based models of financial contagion (e.g., exposure matrices, risk spillover effects), this study proposes a hybrid framework that integrates edge performance metrics into economic decision modeling under uncertainty. By constructing composite indicators and applying machine learning classification on real-time decision outcomes, we offer a novel systems-level approach to understanding and mitigating financial contagion risk through technological responsiveness.

2. Related Work

In recent years, the intersection between real-time decision-making, financial risk, and edge computing has become a central point in academic and industry research. This literature review outlines key contributions from studies that address financial contagion, edge-enabled economic decision systems, and the technological infrastructure that supports them.
The reviewed literature shows a convergence of interest around the use of edge computing for enhancing decision-making processes in financial and economic systems. From predictive analytics in investment contexts to structural models of contagion risk, these contributions provide theoretical and empirical foundations for integrating edge intelligence into economic resilience frameworks. However, challenges remain in terms of data integration, scalability, and maintaining trust in decentralized environments.

2.1. Financial Contagion and Network Analytics

Cheng et al. [4] explored the dynamics of financial contagion using network analytics, emphasizing the interconnectedness among financial institutions through lending–borrowing relationships. Their model integrated balance sheet data, exposure matrices, and market value adjustments to simulate contagion effects. They proposed a contagion algorithm and analyzed how system heterogeneity affects financial stability. Their findings underscore the importance of regulatory mechanisms capable of responding to structural vulnerabilities within financial networks.
Liao and Li [23] adopted a tail-event driven network approach (TENET) to study contagion between international commodity markets and China’s financial sector. Using a combination of TENET and ARDL-ECM models, they revealed strong bidirectional contagion effects, particularly from industrial commodities to currency markets. Their results support the establishment of real-time monitoring frameworks to detect and mitigate systemic risk.
Building upon these traditional models of contagion, recent studies have explored the intersection of edge computing, IoT, and financial risk management. Zeng [24] proposed a financial early-warning model based on backpropagation neural networks (BPNNs) and mobile edge computing (MEC). The model improved the responsiveness of corporate financial systems by predicting risk through IoT-enhanced data flows and optimizing service preloading. This approach not only enhanced computational efficiency but also enabled real-time alerts, with experimental results indicating a financial health prediction accuracy of 91.6%.
Li et al. [25] tackled the challenge of securing financial network transactions through a hybrid edge computing and blockchain framework. They introduced an anonymous storage protocol and a synchronization system that strengthens supply chain finance (SCF) against multiple cyber threats, including Man-in-the-Middle and replay attacks. The system exhibited low latency (<215 ms), highlighting edge computing’s feasibility in protecting financial infrastructures under dynamic conditions.
In a more recent study, Liao et al. [26] analyzed tail-risk contagion between digital and traditional financial assets using a downside risk network and the TVP-SV-VAR model. Their results underscore the increasing interconnectedness of asset classes like NFTs, stock indices, and commodities, particularly under exogenous shocks such as policy uncertainty or financial crises. The authors identified key spillover directions and influential nodes, offering important insights for portfolio diversification, systemic risk containment, and digital asset regulation.
While network-based contagion models offer valuable insights into structural vulnerabilities, they generally lack integration with real-time computational infrastructures. Most approaches do not consider how the performance of edge computing systems may influence the speed and quality of decisions under financial stress. Although this study does not explicitly simulate the propagation of financial contagion, it proposes a complementary approach in which the quality of economic decisions, shaped by edge computing performance, is used as a proxy indicator for systemic fragility in real-time environments.

2.2. Edge Computing and Financial Risk Management

Zhou [22] developed an intelligent financial investment risk prediction system utilizing edge computing principles. The system was designed to process investment risk indicators in real time, leveraging mobile edge computing (MEC) frameworks to support the early detection of anomalies in venture capital and investment activities. This approach demonstrated its effectiveness in reducing operational risk and enhancing the resilience of financial institutions. Mitsis et al. [27] addressed the challenge of data offloading in edge computing systems, proposing a game-theoretic approach that considers user behavior and server pricing strategies. Their model accounted for risk awareness and economic incentives, offering a framework applicable to resource allocation and distributed decision-making in economic systems.
Kong and Lu [28] applied MEC in the context of rural cooperative financial institutions, addressing structural deficiencies in financial service delivery in agricultural areas. Their model introduced a collaborative machine learning mechanism (LECC) to improve resource efficiency and system responsiveness. Experimental simulations showed substantial improvements in hit rates compared to traditional strategies, emphasizing the potential of edge computing in expanding financial inclusion and managing risk in underbanked regions. Liu et al. [29] integrated edge computing with blockchain technology to construct a real-time Cooperative Intrusion Detection System (CIDS) aimed at securing financial networks. Using deep learning models such as LSTM and Bayesian networks, their system achieved a high detection accuracy of 97.54%, outperforming single intrusion systems. The proposed architecture underscores the role of edge-enabled cybersecurity frameworks in safeguarding financial infrastructures and ensuring business continuity.
Cheng and Huang [20] introduced an edge-based intelligent system for forecasting financial risks in investment platforms. By employing a hybrid CNN-LSTM model and leveraging knowledge graphs, their system predicted financial vulnerabilities with high accuracy. Their research highlights how edge computing can support early warning mechanisms and enable adaptive financial risk management strategies under volatile conditions.
Existing research on edge computing in finance predominantly focuses on technological innovations such as blockchain integration or mobile edge architectures, with limited attention to their economic implications. Furthermore, few studies quantify how edge performance affects actual decision outcomes. This work contributes by modeling decision outcomes as a function of composite edge metrics, offering an interpretable and predictive approach to risk management in real time.
While prior studies have explored edge computing applications in financial risk prediction, blockchain integration, and decentralized data architectures, very few have systematically linked edge performance metrics to real-time systemic risk mitigation, particularly in the context of financial contagion dynamics. The intersection of edge computing infrastructures and financial contagion management remains largely unexplored, creating a significant research gap that this study addresses by modeling the impact of computational responsiveness and system resource balance on decision quality under contagion-prone conditions.

2.3. Edge Computing and Broader Economic Context

Akbari [30] presented a systematic review highlighting how edge computing supports circular economy strategies in supply chains. By analyzing 103 scholarly articles, three thematic clusters emerged: technology adoption, optimization, and sustainability. Their study found that edge-enabled supply chains are more agile and responsive, but face challenges related to data security, device integration, and cost. Yang et al. [31] proposed a hybrid LSTM-GAN-edge computing algorithm for smart grid enterprise decision-making. Their method outperformed traditional models in forecasting economic performance and energy demand, emphasizing the efficiency gains provided by edge-enhanced prediction. Shen et al. [32] introduced a signaling game framework for assessing the availability of edge-based IoT systems under malware dissemination. The authors demonstrated how probabilistic modeling and Markov chains can guide infrastructure resilience strategies and inform risk-based decision-making.
Bhambri and Khang [18] examined the role of edge computing in enabling sustainable and intelligent transportation systems. By integrating edge computing with IoT and AI technologies, the authors showed how traffic management, energy efficiency, and emission reduction can be significantly improved. Their study explored use cases such as smart traffic lights, vehicle tracking, EV charging stations, and autonomous driving, underscoring how real-time processing at the edge contributes to greener mobility ecosystems.
Mohsin et al. [15] addressed the growing challenge of e-waste management by proposing an edge-enabled automated disassembly framework for waste printed circuit boards (WPCBs). Leveraging edge computing and IoT, the system utilized the YOLOv10 model for real-time component recognition, achieving 99.9% precision. Their results demonstrate the transformative potential of edge computing in circular economy applications, specifically in improving both efficiency and safety in electronic waste recycling.
While prior studies highlight the benefits of edge computing for supply chains, energy systems, or transportation, they seldom address the economic decision-making dimension under uncertainty. This study advances the discussion by embedding edge metrics into economic decision logic, thus extending the application of edge computing beyond logistics and control into financial behavior modeling.

2.4. Economic Systems and IoT Data Reliability

Chuang et al. [33] developed RIDES, a blockchain-based economic system leveraging edge computing to manage IoT data transactions. Their system integrated smart contracts and automated trust management to support reliable, real-time data exchange, essential for financial systems that rely on continuous information flow. Kubiak et al. [34] conducted a systematic literature review on the applications of edge computing in manufacturing, noting that EC technologies enable faster feedback loops and real-time decision-making, which are increasingly important in dynamic industrial and economic environments. Feng and Ran [35] addressed the challenge of optimizing distributed energy systems by combining edge computing with machine learning. Their proposed architecture improved energy allocation efficiency by 12% and reduced energy waste by 18% compared to traditional models, while also cutting system response times by 30%. These improvements demonstrate the role of edge-enhanced architectures in ensuring the operational reliability and sustainability of economic systems built on IoT data. Zheng and Tan [36] proposed a novel decentralized task offloading framework for large-scale edge computing environments using the TD3 algorithm. By simulating real-world user dynamics, their DSMECO-DP scheme balanced performance benefits for users and profitability for service providers through dynamic pricing. Their work underscores the scalability and reliability of edge systems in complex economic infrastructures facing unpredictable loads and behavioral volatility. Bhutiani [37] introduced an edge-based AI system for automated waste classification, employing CNNs and YOLO to process visual data in real time. Designed for deployment on low-latency devices, their system enhanced the intelligence and responsiveness of circular economy operations, improving material recovery efficiency and reducing processing delays in sustainable resource systems.
Despite recent efforts to improve the reliability and security of economic systems using IoT and edge-enabled infrastructures, most studies treat these elements as exogenous. In contrast, this study positions them as endogenous variables that shape, and are shaped by, economic decision dynamics, providing a systems-level view of interaction between digital infrastructure and economic performance.

3. Materials and Methods

This study employed a data-driven analytical framework designed to explore the intersection between edge computing performance and economic decision-making. The primary objective was to evaluate how real-time computational metrics influence decision accuracy and systemic economic outcomes under uncertain and dynamic conditions. A combination of statistical techniques and machine learning algorithms was used to identify patterns, assess predictive capabilities, and reduce systemic complexity.
The dataset was obtained from Kaggle [38] and was adapted to simulate real-time economic environments enhanced through edge computing and IoT infrastructure.
The dataset comprises 500 observations and includes both raw and engineered variables that reflect the real-time interaction between digital infrastructure and economic behavior. Although the dataset contains 500 synthetic observations, its design emulates real-time decision-making dynamics in edge-enhanced economic environments, making it suitable for exploratory modeling and interpretability-focused research. Core indicators were grouped into two main categories:
  • Economic indicators, which capture the behavioral and transactional dimensions of the system:
    • Transaction volume: simulates the load of economic activity per decision window;
    • Market behavior index: approximates aggregated sentiment or fluctuation signals;
    • Financial metric: reflects the monetary intensity or return associated with each decision.
  • Technological indicators, which measure edge system performance in real time:
    • Edge processing latency: the delay incurred at the computation node;
    • System throughput: the amount of successfully processed data;
    • Resource utilization and workload distribution efficiency: indicate system strain and balance;
    • Decision accuracy: the performance of the edge system in making correct decisions;
    • Decision outcome: the binary result of the decision process (0 = negative, 1 = positive).
The variable decision accuracy is included directly in the simulated dataset provided by Kaggle [38] and reflects the correctness level of economic decisions based on system conditions (e.g., latency, throughput, workload balance). It is generated independently from the target variable decision outcome and is not derived from any predictive model trained in this study.
To capture deeper interactions between system dynamics and economic outcomes, four composite indicators were constructed:
  • Transaction efficiency
  • Latency per transaction
  • Utilization to efficiency ratio
  • Decision quality score
These derived metrics are grounded in operational logic and are intended to improve the analytical capacity of the models. The ratio from relation (1) reflects the computational effectiveness of the system in handling economic activity. A higher value indicates that the infrastructure can support a greater number of successful outputs relative to the volume of incoming transactions, serving as a proxy for digital scalability and processing responsiveness.
\[ \text{transaction efficiency} = \frac{\text{system throughput}}{\text{transaction volume}} \quad (1) \]
The construction of the transaction efficiency indicator is grounded in operational research on throughput-based performance evaluation in edge-enhanced financial systems. As shown by Li et al. [25], latency and the response time are critical dimensions in the security and operational viability of real-time financial networks, particularly in contexts such as supply chain finance. Our indicator builds on this logic by quantifying how efficiently the system handles the transaction load relative to the processing throughput, thus offering a scalable proxy for digital responsiveness.
In relation (2), we describe the latency per transaction metric. This indicator assesses the average processing delay incurred per transaction. It provides insight into the temporal cost of system decisions, with elevated values potentially signaling bottlenecks or inefficiencies in real-time operations that could degrade the decision quality under pressure.
\[ \text{latency per transaction} = \frac{\text{edge processing latency}}{\text{transaction volume}} \quad (2) \]
The latency per transaction indicator is derived based on established principles in edge-computing architectures that are aimed at reducing network transmission delays and improving responsiveness in financial systems. As shown in [39], positioning the edge nodes closer to users and applying predictive caching can reduce overall latency by up to 38%, with edge processing delays maintained under 120 ms. Our indicator captures this temporal performance dimension by normalizing edge latency with respect to the transaction volume, thus reflecting real-time responsiveness under varying economic loads.
The indicator from relation (3) captures the balance between how intensively system resources are being used and how efficiently workloads are being distributed. A mismatch, such as high utilization but low distribution efficiency, may suggest resource saturation or poor orchestration within the edge environment.
\[ \text{utilization to efficiency ratio} = \frac{\text{resource utilization}}{\text{workload distribution efficiency}} \quad (3) \]
The utilization to efficiency ratio introduced in this study reflects the balance between how intensively system resources are used and how effectively workloads are distributed across the edge infrastructure. While previous studies, such as [40], have employed mobile edge computing for financial risk evaluation, our indicator goes a step further by quantifying operational saturation versus orchestration efficiency, providing a novel proxy for real-time system stability in economic contexts.
The composite score from relation (4) embeds both the correctness and the nature of the decision (positive or negative). By combining these elements, the score quantifies decision effectiveness in a way that is sensitive to the outcome polarity, allowing for a nuanced interpretation of decision performance in response to system or market conditions.
\[ \text{decision quality score} = \text{decision accuracy} \times \text{decision outcome} \quad (4) \]
These derived variables provide additional granularity in modeling the decision environment and help reveal the underlying mechanisms that may influence economic behavior and systemic risk, especially under high-frequency or stress-prone conditions.
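For illustration, relations (1)–(4) can be computed with a few lines of pandas. This is a minimal sketch: the file name and the column names below are assumptions based on the variable descriptions above, not the exact schema of the Kaggle dataset.

```python
import pandas as pd

# Sketch of relations (1)-(4); file and column names are illustrative
# assumptions mirroring the variable descriptions above.
df = pd.read_csv("edge_economic_decisions.csv")

# (1) Transaction efficiency: throughput relative to the transaction load.
df["transaction_efficiency"] = df["system_throughput"] / df["transaction_volume"]

# (2) Latency per transaction: average temporal cost of each decision.
df["latency_per_transaction"] = df["edge_processing_latency"] / df["transaction_volume"]

# (3) Utilization-to-efficiency ratio: resource saturation vs. orchestration balance.
df["utilization_to_efficiency_ratio"] = (
    df["resource_utilization"] / df["workload_distribution_efficiency"]
)

# (4) Decision quality score: correctness weighted by outcome polarity.
df["decision_quality_score"] = df["decision_accuracy"] * df["decision_outcome"]
```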
To evaluate the relationship between edge computing metrics and economic decision outcomes, the analysis followed a structured multi-step methodology, integrating both descriptive statistics and predictive modeling techniques. The analytical workflow is illustrated in Figure 1, which outlines six core phases designed to explore, preprocess, and model the dataset in a robust and interpretable manner.
The first stage focused on descriptive analysis and data exploration, including the computation of summary statistics and the visualization of distributions and boxplots for key variables. This was followed by a correlation analysis using both Pearson and Kendall methods to capture linear and monotonic relationships between economic and technological indicators. The third phase addressed potential multicollinearity among predictors, assessed via the Variance Inflation Factor (VIF) and Tolerance scores.
To address potential multicollinearity among predictors and to simplify the model without compromising informational value, a dimensionality reduction step was incorporated using Principal Component Analysis (PCA). This dimensionality reduction aimed to consolidate performance indicators into a single composite feature to streamline model training and interpretation.
In the model development stage, five supervised learning algorithms were trained to classify binary decision outcomes: Logistic Regression, Random Forest, Support Vector Machines (SVMs), Neural Networks, and XGBoost. These models were subsequently evaluated in the final validation phase based on classification performance metrics including accuracy, precision, recall, F1-Score, and ROC-AUC (Receiver Operating Characteristic–Area Under the Curve).
To classify economic decision outcomes in edge-enhanced environments, we employed five supervised machine learning algorithms. Each of these models offers different mechanisms for learning patterns in the data and handling non-linear relationships, noise, and feature interactions. Below, we briefly describe the technical foundation of each algorithm.
Logistic Regression (LR) is a linear probabilistic model used for binary classification problems [41]. It estimates the probability $P(y = 1 \mid X)$ by applying the logistic function to a linear combination of input features [42], according to Equation (5):
\[ P(y = 1 \mid X) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 X_1 + \dots + \beta_n X_n)}} \quad (5) \]
In Equation (5), $\beta_0$ is the model intercept, and $\beta_1, \dots, \beta_n$ are the regression coefficients associated with each feature $X_i$.
LR provides interpretability and a strong baseline for classification, especially when the relationship between features and the target variable is approximately linear [43].
The Support Vector Machine with RBF Kernel (SVM-RBF) constructs a hyperplane that separates data points of different classes by maximizing the margin between them [42]. The radial basis function (RBF) kernel allows the model to capture non-linear relationships by projecting data into a higher-dimensional space, according to Equation (6):
\[ K(x_i, x_j) = \exp\left( -\gamma \, \lVert x_i - x_j \rVert^2 \right) \quad (6) \]
Random Forest (RF) is an ensemble learning method that builds a collection of decision trees using bootstrapped samples and random feature selection. The final prediction is obtained through majority voting (for classification). RF is robust to overfitting and handles feature interactions well [44]:
\[ \hat{y} = \operatorname{majority\_vote}\left( T_1(x), T_2(x), \dots, T_m(x) \right) \quad (7) \]
where, in Equation (7), $T_i(x)$ represents the $i$-th decision tree's prediction.
Extreme Gradient Boosting (XGBoost) is a highly efficient and scalable gradient boosting framework that sequentially adds decision trees to minimize a regularized objective function [45], as given in Equation (8):
\[ \mathcal{L} = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i^{(t)}\right) + \sum_{k=1}^{t} \Phi(f_k) \quad (8) \]
where, in Equation (8), $\Phi(f) = \gamma T + \frac{1}{2}\lambda \lVert w \rVert^2$ penalizes the model's complexity. XGBoost supports missing values, handles imbalance well, and includes built-in regularization, making it well suited for real-time classification under uncertainty. Also, $l(y_i, \hat{y}_i^{(t)})$ is the loss function, and $\gamma$ and $\lambda$ are the hyperparameters that control regularization. $\gamma$ represents the minimum loss reduction required to make a further partition on a tree node. A higher $\gamma$ value leads to fewer splits, thus producing simpler trees and reducing the overfitting risk. In this study, $\gamma$ was set to 0.2, which encourages the model to create splits only when they yield meaningful improvements in predictive accuracy, leading to more generalizable trees in noisy, synthetic data environments. $\lambda$ corresponds to the L2 regularization term on leaf weights. This penalizes extreme weight values and stabilizes the model when dealing with multicollinearity or outlier-sensitive data. A value of $\lambda = 1$ was selected to maintain moderate regularization strength, ensuring that the model remains flexible while avoiding excessive sensitivity to specific features or noise. Together, these parameters contribute to a better trade-off between bias and variance, ultimately enhancing the robustness of the XGBoost classifier in edge-enhanced decision scenarios.
A Neural Network (NN) is a feed-forward multilayer perceptron (MLP) that is trained via backpropagation. It consists of an input layer, one or more hidden layers, and an output layer with a sigmoid activation function for binary classification [46]. The forward pass is computed according to Equation (9):
\[ \hat{y} = \sigma\left( W^{(2)} \, \mathrm{ReLU}\left( W^{(1)} x + b^{(1)} \right) + b^{(2)} \right) \quad (9) \]
where, in Equation (9), $W^{(1)}$ and $W^{(2)}$ are weight matrices, $b^{(1)}$ and $b^{(2)}$ are biases, $\sigma$ represents the sigmoid activation function for binary classification, and $\mathrm{ReLU}$ is the Rectified Linear Unit activation function [47].
All machine learning algorithms were implemented using the Python 3.10 programming language. The primary libraries utilized were Scikit-learn (for Logistic Regression, Support Vector Machine, and Random Forest), XGBoost (for gradient boosting), and TensorFlow/Keras (for the construction and training of the Neural Network model). The LR model was implemented using the LogisticRegression class from the sklearn.linear_model module, with default settings for binary classification. The SVM-RBF model was implemented using the SVC class from sklearn.svm, with the kernel parameter set to "rbf" and the gamma parameter tuned via internal scaling. The RF classifier was implemented using the RandomForestClassifier class from sklearn.ensemble, with 100 trees and a fixed random seed to ensure reproducibility. The XGBoost model was implemented using the XGBClassifier class from the xgboost package; parameters such as the learning rate, the maximum tree depth, and the number of estimators were fine-tuned to optimize predictive performance. The NN model was built as a feed-forward MLP using the Sequential API from TensorFlow/Keras. The network architecture consisted of one hidden layer with ReLU activation and one output layer with sigmoid activation for binary classification.
The confusion matrix is a commonly used diagnostic tool in classification tasks, capturing the distribution of predicted versus actual class labels. It is applicable to both binary and multiclass classification problems. Unlike aggregate metrics such as overall accuracy, the confusion matrix allows for a more granular analysis by highlighting how well each individual class is predicted. This is particularly relevant when the model has difficulty with specific classes, which might not be reflected in overall accuracy results. For instance, a model may consistently misclassify a particular category, a limitation that standard accuracy metrics cannot reveal [48,49]. According to Table 1, the confusion matrix consists of four core components: (i) TP—correctly predicted positive instances; (ii) FP—negative instances incorrectly predicted as positive; (iii) FN—positive instances incorrectly predicted as negative; and (iv) TN—correctly predicted negative instances.
This structure enables a detailed breakdown of classification performance, serving as the foundation for several derived metrics such as accuracy (10), recall (11), precision (12), and F1 Score (13), which are used to assess model robustness across different types of error. The evaluation metrics used to assess classification models are computed based on the components of the confusion matrix [48,50].
\[ \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (10) \]
\[ \mathrm{Recall} = \frac{TP}{TP + FN} \quad (11) \]
\[ \mathrm{Precision} = \frac{TP}{TP + FP} \quad (12) \]
\[ F_1\ \mathrm{Score} = \frac{2}{\frac{1}{\mathrm{Recall}} + \frac{1}{\mathrm{Precision}}} \quad (13) \]
Before fitting the classification models, it is essential to assess multicollinearity among the independent variables, as high intercorrelations can distort coefficient estimates, reduce model interpretability, and inflate variance. Two widely used diagnostic measures for multicollinearity are the VIF and Tolerance. The VIF quantifies how much the variance of a regression coefficient is inflated due to multicollinearity. It is calculated for each predictor $X_j$ as follows:
\[ \mathrm{VIF}(X_j) = \frac{1}{1 - R_j^2} \quad (14) \]
where, in Equation (14), $R_j^2$ is the coefficient of determination obtained when $X_j$ is regressed on all other independent variables. A VIF value of 1 indicates no correlation with other variables, whereas values above 5 (and especially above 10) are often interpreted as indicative of problematic multicollinearity [51,52].
The Tolerance is the reciprocal of VIF (Equation (15)):
\[ \mathrm{Tolerance}(X_j) = 1 - R_j^2 = \frac{1}{\mathrm{VIF}(X_j)} \quad (15) \]
Low Tolerance values (typically $< 0.1$) suggest that the predictor is highly collinear with others and may be redundant in the model [52,53].
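In practice, both diagnostics can be obtained jointly. The sketch below relies on the variance_inflation_factor routine from statsmodels and assumes the predictors are held in a pandas DataFrame; it is an illustrative implementation, not an excerpt from the study's code.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

def vif_table(X: pd.DataFrame) -> pd.DataFrame:
    """VIF and Tolerance for each predictor, per Equations (14) and (15)."""
    exog = add_constant(X)  # include an intercept so VIFs are not distorted
    rows = []
    for i, col in enumerate(exog.columns):
        if col == "const":
            continue
        vif = variance_inflation_factor(exog.values, i)
        rows.append({"variable": col, "VIF": vif, "Tolerance": 1.0 / vif})
    return pd.DataFrame(rows)
```

Predictors flagged by these thresholds are then candidates for exclusion or for consolidation via PCA, as described next.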
These diagnostics were applied to all input variables prior to model estimation. Based on the computed VIF and Tolerance values, variables exhibiting high multicollinearity were either excluded or transformed using PCA to ensure model robustness and avoid instability during training. PCA is an unsupervised statistical method that transforms the original correlated variables into a new set of uncorrelated variables called principal components, which are linear combinations of the original variables. The first few principal components capture most of the variance in the data, allowing for a more compact and stable representation of the feature space. Mathematically, the transformation is defined as follows:
\[ Z_k = a_{k1} X_1 + a_{k2} X_2 + \dots + a_{kn} X_n \quad (16) \]
where, in Equation (16), $Z_k$ is the $k$-th principal component, and the coefficients $a_{ki}$ are the elements of the eigenvectors of the covariance matrix of the original variables $X_i$.
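A sketch of this consolidation step is given below, assuming the three high-VIF predictors later identified in Section 4.4 and the DataFrame df from the earlier sketch. Note that applying PCA on the raw scales lets the highest-variance feature dominate the loadings, which is consistent with the loading pattern reported in Figure 17; standardizing first would balance the contributions.

```python
from sklearn.decomposition import PCA

# Collapse the three collinear indicators into one composite feature
# ("performance PCA"); column names follow the variable descriptions above.
high_vif_cols = ["decision_accuracy", "edge_processing_latency", "transaction_efficiency"]

pca = PCA(n_components=1)
df["performance_PCA"] = pca.fit_transform(df[high_vif_cols]).ravel()

# Section 4.4 reports ~96.46% of total variance retained by this component.
print(pca.explained_variance_ratio_)
```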
To assess the performance and generalizability of the classification models, the dataset was randomly split into training (80%) and testing (20%) subsets [48,54]. All models were trained on the training set and evaluated on the unseen test set to ensure a fair assessment of the generalization capability. The evaluation was conducted using both threshold-dependent and threshold-independent performance metrics derived from the confusion matrix, including accuracy, precision, recall, F1-Score, and ROC-AUC.
These metrics offer complementary perspectives: while accuracy provides a global measure of correctness, precision and recall are particularly useful in understanding the balance between false positives and false negatives. The F1-Score captures the trade-off between the two, and the ROC-AUC reflects the model’s discriminative power across all classification thresholds.
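For concreteness, the split-and-score procedure can be sketched as follows. The fixed seed value, the dropped columns, and the use of predicted probabilities for the AUC are assumptions for illustration rather than a verbatim excerpt from the published repository; the decision quality score is dropped alongside the target to avoid the leakage issue discussed in Section 4.4.

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# 80/20 split; decision_quality_score is excluded to prevent target leakage.
X = df.drop(columns=["decision_outcome", "decision_quality_score"])
y = df["decision_outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42  # the seed value is an assumption
)

def evaluate(model):
    """Return the five metrics used in Section 4.5 for a fitted classifier."""
    y_pred = model.predict(X_test)
    y_prob = model.predict_proba(X_test)[:, 1]  # threshold-independent AUC input
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred),
        "recall": recall_score(y_test, y_pred),
        "f1": f1_score(y_test, y_pred),
        "roc_auc": roc_auc_score(y_test, y_prob),
    }
```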

4. Results

In practical terms, the problem addressed in this study relates to how financial and economic institutions can respond rapidly and effectively to fast-moving, uncertain events, such as market turbulence or financial contagion, where milliseconds can determine substantial gains or losses. Traditional decision-making architectures, often relying on centralized data processing, may introduce latency that compromises responsiveness and risk mitigation. In contrast, edge computing enables data to be processed near their source, thus reducing latency and enhancing local autonomy in decision-making. In this context, the real-time decision quality becomes a proxy for system resilience. This study models the problem by simulating how variations in the edge system performance, such as processing delays, throughput, and resource efficiency, can impact the accuracy and reliability of economic decisions, with implications for systemic stability.

4.1. Simulation Setup and Experimental Configuration

All simulations and model training were conducted in Python 3.10 using the Google Colab environment. Additionally, the full source code, including preprocessing, feature engineering, model training, and evaluation steps, is openly available on GitHub at [https://github.com/IonutNica/Edge_Computing, accessed on 1 May 2025] to facilitate replication and transparency. The implementation relied on widely used machine learning libraries: Scikit-learn (for Logistic Regression, Support Vector Machine, and Random Forest), XGBoost (for gradient boosting), and TensorFlow/Keras (for Neural Network modeling). The dataset includes synthetic observations simulating real-time economic behavior and edge infrastructure dynamics. The data were split randomly into training (80%) and testing (20%) subsets to ensure fair evaluation of generalization performance.
Logistic Regression was implemented using the LogisticRegression class from sklearn.linear_model, with default settings for binary classification.
The Support Vector Machine with RBF kernel used the SVC class from sklearn.svm, with kernel="rbf" and gamma="scale" for automatic kernel coefficient adjustment.
Random Forest was implemented via RandomForestClassifier from sklearn.ensemble, using n_estimators=100 and a fixed random seed to ensure reproducibility.
XGBoost was developed using the XGBClassifier class from the xgboost library, with tuned hyperparameters: max_depth=6, learning_rate=0.1, lambda=1, gamma=0.2, and n_estimators=150.
The Neural Network was constructed using the Sequential API from tensorflow.keras.models, consisting of one hidden layer with 32 neurons (ReLU activation) and one output layer with sigmoid activation. It was compiled with binary cross-entropy loss and trained over 50 epochs with a batch size of 16.
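Under this configuration, the five classifiers can be instantiated as in the sketch below. The Adam optimizer, the seed value, the probability=True flag (needed so that the SVM exposes predict_proba for the ROC-AUC), and the reg_lambda spelling of λ in XGBoost's scikit-learn API are assumptions not fixed by the text; X_train and y_train are the training split from the earlier sketch.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

models = {
    "LR": LogisticRegression(),  # default settings for binary classification
    "SVM-RBF": SVC(kernel="rbf", gamma="scale", probability=True),
    "RF": RandomForestClassifier(n_estimators=100, random_state=42),
    "XGBoost": XGBClassifier(max_depth=6, learning_rate=0.1,
                             reg_lambda=1, gamma=0.2, n_estimators=150),
}

# Feed-forward MLP: one 32-neuron ReLU hidden layer, sigmoid output.
nn = Sequential([
    Input(shape=(X_train.shape[1],)),
    Dense(32, activation="relu"),
    Dense(1, activation="sigmoid"),
])
nn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
nn.fit(X_train, y_train, epochs=50, batch_size=16, verbose=0)
```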
Model performance was evaluated on the test set using standard classification metrics: accuracy, precision, recall, F1-Score, and ROC-AUC.

4.2. Descriptive Statistics

Following the data preparation and feature engineering steps described in the previous section, a sequence of statistical and machine learning analyses was conducted to uncover patterns and predictive signals in real-time economic decision-making supported by edge computing. The results are structured in several stages: exploratory data analysis and descriptive statistics; correlation and multicollinearity assessments; dimensionality reduction via PCA; and finally, predictive modeling using various supervised learning algorithms.
Table 2 presents the descriptive statistics of the dataset used to evaluate the interplay between the edge computing performance and economic decision-making under risk conditions. The transaction volume displays high variability (mean = 5591.86; std. dev. = 2523.18), suggesting substantial fluctuations in economic activity across instances, which may indicate varying system loads or market demands in different real-time contexts. The market behavior index shows a mean value near zero (−0.006) with a relatively balanced distribution (std. dev. = 0.595), indicating that market dynamics oscillate around an equilibrium, with no significant trend toward bullish or bearish conditions. The financial metric exhibits a positively skewed distribution (mean = 111.21; median = 71.27), reflecting the existence of high outliers in financial intensity across scenarios. This may suggest that some decisions are associated with extreme financial performance outcomes. The edge processing latency varies widely (mean = 164.40 ms; std. dev. = 149.76), highlighting inconsistent processing delays at the edge level, a key concern for time-sensitive economic decisions. Resource utilization and workload distribution efficiency demonstrate a discrepancy in scale. While utilization is normalized (mean = 0.55), efficiency varies more widely (mean = 101.00), indicating that higher workload distribution efficiency is not necessarily associated with proportional resource use. Decision accuracy centers around a high average (73.36%) but shows significant variance, which could reflect instability in the decision-support system under changing computational or market stress. The system throughput values are expressed in normalized units, scaled to simulate microsecond-level data transmission rates typically encountered in edge-computing environments; its interpretation should therefore be relative (e.g., in ratios) rather than absolute. Transaction efficiency, the ratio between throughput and volume, shows small values, consistent with the system throughput scale, but indicates how well the infrastructure handles the data load. The latency per transaction (mean = 0.040) indicates an average of 40 ms per transaction, highlighting edge computing's potential in real-time scenarios. The utilization to efficiency ratio (mean = 0.023) shows low correspondence between resource use and workload efficiency, suggesting optimization issues in the edge architecture. The decision quality score, computed as the product of decision accuracy and decision outcome, has a right-skewed distribution (mean = 34.83; median = 7.33), implying that while most decisions are moderate or poor in quality, a few reach very high performance.

4.3. Correlation, Exploratory Analysis and Multicollinearity Diagnostics

The Pearson correlation heatmap from Figure 2 reveals several insightful relationships between technological and economic indicators in the dataset.
Notably, transaction efficiency is negatively correlated with the transaction volume (−0.66) and the latency per transaction (−0.47), suggesting that increased transaction loads and processing delays hinder efficiency. A strong positive correlation (0.88) exists between the decision outcome and the decision quality score, validating the constructed metric. Additionally, system throughput correlates moderately with both the transaction efficiency (0.53) and the latency per transaction (0.42), emphasizing its role in real-time performance. Most variables exhibit weak pairwise associations, implying low multicollinearity and diverse influences on decision outcomes, suitable for robust multivariate modeling.
According to Figure 3, the Kendall correlation heatmap provides a non-parametric assessment of the monotonic relationships among the variables. It confirms several key findings from the Pearson analysis, but with subtle differences in strength. The decision quality score remains strongly correlated with the decision outcome (0.81), reinforcing the reliability of this composite indicator. A moderate positive correlation exists between the edge processing latency and the latency per transaction (0.71), and between resource utilization and the utilization to efficiency ratio (0.56), indicating consistent directional trends in these dimensions. Transaction efficiency exhibits a negative correlation with transaction volume (−0.50), suggesting that efficiency diminishes as the volume increases. Overall, Kendall's tau supports the robustness of relationships detected via the Pearson analysis while being less sensitive to outliers and distributional assumptions, making it a valuable complement in evaluating dependencies among real-time economic indicators.
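Both correlation matrices can be reproduced directly from the engineered dataset; the seaborn rendering below is one common way to obtain heatmaps such as those in Figures 2 and 3, again assuming the DataFrame df from the earlier sketches.

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Pearson captures linear association; Kendall's tau captures monotonic,
# rank-based association and is less sensitive to outliers.
for method in ("pearson", "kendall"):
    corr = df.corr(method=method, numeric_only=True)
    sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm", center=0)
    plt.title(f"{method.capitalize()} correlation heatmap")
    plt.tight_layout()
    plt.show()
```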
Figure 4 presents both the histogram and the boxplot of the standardized transaction volume. The histogram indicates a relatively uniform distribution with no strong skewness or pronounced peaks, suggesting a balanced spread of values across the sample. The boxplot reinforces this observation, showing no significant outliers and a symmetric distribution around the median. This stable pattern supports the reliability of transaction volume as a consistent metric in the analysis of economic decisions facilitated by edge computing.
The visualization of the market behavior index reveals a nearly symmetric and uniform spread across its range, according to Figure 5. The histogram displays a slight bimodal tendency, suggesting some variation in behavioral clustering within the market. Meanwhile, the boxplot indicates a well-balanced distribution with no extreme outliers, and a median that aligns closely with the center of the interquartile range. This implies that market sentiment, as captured by this index, fluctuates consistently around a central value, making it a stable input for real-time economic modeling based on edge computing.
Figure 6 provides insights into the distribution of the financial metric variable. The histogram shows a left-skewed distribution, indicating that lower financial metric values are more frequent in the dataset. This asymmetry suggests that most economic scenarios captured by the system are associated with lower financial outputs or returns. The boxplot further confirms this skewness, with the median located closer to the lower quartile and a longer right tail. These characteristics may reflect the volatility or imbalance in financial outcomes within edge-enabled decision-making environments.
Figure 7 displays the distribution of the edge processing latency variable, highlighting a strongly right-skewed pattern in the histogram. This indicates that most latency values are low, while a smaller number of cases experience significantly higher delays. The boxplot reinforces this observation, showing a concentration of data on the lower end with a long right whisker, suggesting the presence of high-latency outliers. Such variability in processing time may reflect uneven computational loads across edge nodes, which could affect real-time decision-making efficiency in edge-enabled economic systems.
The distribution of the resource utilization variable, shown in Figure 8, shows an approximately symmetric pattern centered around the mean, indicating a consistent and balanced use of computational resources across the edge computing infrastructure. The histogram indicates that resource utilization values are well dispersed without strong skewness, while the boxplot confirms the absence of significant outliers. This balanced spread implies the efficient allocation of resources within the network, which is essential for maintaining system stability and performance in real-time economic environments.
The observed left-skewness in the distribution of workload distribution efficiency from Figure 9 indicates that, in several instances, computational tasks are unequally balanced across the edge infrastructure. Such inefficiencies may lead to localized system slowdowns or bottlenecks, which in turn hinder the system’s capacity to process economic data promptly. In high-volatility contexts, this latency can delay or change critical financial decisions, amplifying systemic vulnerabilities and increasing the risk of financial contagion propagation across interconnected institutions or markets.
Figure 10 shows that decision accuracy values are relatively uniformly distributed, with no extreme skewness or pronounced outliers. This suggests that the system maintains a balanced performance in terms of decision-making, with a substantial proportion of observations concentrated around the median. Such consistency is fundamental in real-time economic environments, where the reliability of automated or assisted decisions under edge computing architectures directly impacts risk management and systemic stability.
Figure 11 illustrates the distribution of the system throughput variable, which appears approximately uniform and symmetric across its range. The boxplot confirms the absence of significant outliers, suggesting stable data flow rates within the edge computing environment. This stability is decisive in real-time decision-making systems, as it ensures consistent performance and low latency, factors that are essential for timely responses during financial stress or contagion propagation scenarios.
As depicted in Figure 12, the variable decision outcome follows a perfectly binary distribution, with standardized values clustered exclusively at −1 and 1, corresponding to the two possible decision outcomes. The histogram highlights an almost equal frequency of both outcomes, while the boxplot confirms the absence of intermediate values or variability. This binarized structure is essential for classification tasks such as Logistic Regression, enabling a clear evaluation of the predictive performance of decision-making systems under edge computing conditions.
Figure 13 illustrates the distribution of latency per transaction, showing a clear right-skewed pattern where most values are concentrated toward the lower latency end. This distribution supports the premise that reduced latency can enhance real-time responsiveness, which is critical in economic contexts where rapid decision-making is essential to prevent the amplification of systemic risks such as financial contagion. The observed variation, including occasional higher latencies, provides a useful framework for assessing the stability and resilience of edge-assisted economic infrastructures.
The distribution of the utilization-to-efficiency ratio from Figure 14 is notably left-skewed, with a concentration of observations at lower ratio values. This pattern may reflect an operational scenario in which high efficiency is maintained despite conservative or modest resource usage, an important attribute in edge-based economic environments aiming to balance cost and responsiveness. From the perspective of economic decision-making, such distributions can suggest systems that are structurally optimized to mitigate overloads and ensure continuity during periods of financial stress or contagion transmission, underscoring the strategic value of resource calibration in real-time infrastructures.
Figure 15 illustrates a bimodal distribution for the decision quality score, with peaks near the extremes, suggesting a polarization in decision outcomes, either highly effective or largely ineffective. This pattern highlights the sensitivity of real-time economic systems to both system-level computational performance and contextual economic volatility. Such distributional traits are especially relevant in financial contagion scenarios, where abrupt shifts in accuracy and outcomes can propagate across interconnected agents or nodes, amplifying systemic risk. The variation in decision quality emphasizes the need for adaptive edge-driven architectures that can stabilize performance, even in uncertain environments.
Table 3 presents the Variance Inflation Factor (VIF) and the corresponding Tolerance values for each predictor variable used in the regression analysis. VIF values above 5 typically signal potential multicollinearity concerns, which may distort coefficient estimates and reduce model interpretability. Notably, transaction efficiency (VIF = 6.18), decision accuracy (VIF = 5.96), and edge processing latency (VIF = 5.55) exceed this threshold, indicating a high degree of linear dependency with other variables in the model. These findings suggest that additional variable selection, dimensionality reduction, or regularization techniques may be necessary to ensure model stability, especially in scenarios that simulate real-time decision-making under interconnected economic and computational conditions.

4.4. Dimensionality Reduction via PCA

Most variables demonstrated acceptable multicollinearity thresholds, although indicators such as decision accuracy, edge processing latency, and transaction efficiency exhibited VIF values exceeding 5, signaling moderate to high multicollinearity. To mitigate this, we applied PCA on these three variables and extracted a single composite feature labeled performance PCA, which retained 96.46% of the total variance according to Figure 16. This new feature encapsulates the essential variation across performance-critical indicators, ensuring dimensionality reduction while preserving relevant information.
To further interpret the nature of this component, Figure 17 presents the feature loadings on the first principal component. The results clearly indicate that the edge processing latency dominates the composite dimension (loading ≈ 0.9999), while the contributions of decision accuracy (≈0.0087) and transaction efficiency (≈0.0000) are negligible. This distribution confirms that edge latency dominates the performance variation captured by the PCA score. Nonetheless, retaining the PCA-based composite variable helps maintain a unified representation of performance in the predictive models and ensures consistency with the multicollinearity diagnosis based on the VIF. This supports the validity of using the PCA-based score as a robust proxy for real-time system responsiveness in subsequent predictive models.
Figure 18 illustrates the distribution of the composite performance scores obtained through PCA, grouped by decision outcomes. The results reveal distinct distributional characteristics between the two groups. Although both “Negative” and “Positive” decisions share comparable interquartile ranges, their median values differ slightly, suggesting a potential performance threshold that separates decision outcomes. The spread of the PCA scores is notably wider for the “Positive” outcomes, indicating greater variability in the underlying performance features when the decisions were successful. This variability could reflect a more dynamic and responsive computational environment supporting effective economic decisions. The figure supports the hypothesis that performance-related features, such as latency, decision accuracy, and transaction efficiency, aggregated via PCA, are meaningfully associated with the quality of economic decision-making in edge computing contexts.
To ensure the robustness of all of the machine learning models applied in this study, multicollinearity among predictor variables was evaluated using the Variance Inflation Factor (VIF) and Tolerance statistics. As presented in Table 4, all of the VIF values were below the commonly accepted threshold of 5, and the Tolerance values were well above 0.1. Although all of the variables presented in Table 4 pass the multicollinearity diagnostics, specifically, with all VIF values below 5 and Tolerance scores above 0.1, the variable decision quality score was excluded from the subsequent machine learning models. Despite its acceptable multicollinearity metrics, this variable is mathematically constructed using the target variable (decision outcome), which could introduce data leakage and artificially inflate the model’s predictive performance. Therefore, to ensure model integrity and avoid circular reasoning, the decision quality score was retained only for exploratory and descriptive analysis.

4.5. Predictive Modeling and Performance Evaluation

Table 5 provides a comparative overview of various machine learning models evaluated for predicting binary decision outcomes in the context of real-time economic systems enhanced by edge computing.
The classification models were evaluated using five standard performance metrics: accuracy, precision, recall, F1-Score, and ROC-AUC. Accuracy measures the overall proportion of correct predictions across both classes and provides a general assessment of model correctness. Precision reflects the proportion of predicted positive decisions that are truly positive, indicating how reliable the model is when signaling a favorable economic decision. Recall (also known as sensitivity) measures the model’s ability to correctly identify all actual positive decisions, which is essential for minimizing missed opportunities in high-impact contexts. The F1-Score is the harmonic mean of precision and recall, balancing both metrics when there is a trade-off between false positives and false negatives. Finally, the ROC-AUC provides a threshold-independent measure of the model’s discriminative ability, summarizing how well the model distinguishes between positive and negative decision outcomes across different probability cutoffs.
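With scikit-learn, these five metrics are computed as follows, given a held-out test set and any fitted classifier's hard predictions (y_pred) and positive-class probabilities (y_proba), such as those produced in the sketch after the next paragraph:

```python
# The five evaluation metrics used in Table 5.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

scores = {
    "accuracy":  accuracy_score(y_test, y_pred),
    "precision": precision_score(y_test, y_pred),
    "recall":    recall_score(y_test, y_pred),
    "f1":        f1_score(y_test, y_pred),
    "roc_auc":   roc_auc_score(y_test, y_proba),  # threshold-independent
}
print(scores)
```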
Among the evaluated classifiers, XGBoost achieved the strongest overall performance, with 97% accuracy, 1.00 precision, 0.94 recall, 0.97 F1-Score, and an ROC-AUC of 0.998. This consistent superiority across all performance metrics highlights XGBoost’s ability to model the complex, nonlinear relationships that characterize real-time, edge-enabled economic decision environments. RF closely followed, also reaching high performance levels (accuracy = 95%, ROC-AUC = 0.997), while the NN demonstrated solid but slightly lower scores (e.g., ROC-AUC = 0.965). In contrast, LR, traditionally used as a baseline model, showed the weakest performance, with only 64% accuracy and an ROC-AUC of 0.663, indicating a limited capacity to capture the underlying nonlinear dynamics. Even Support Vector Machines, which are typically robust to noise and effective in high-dimensional settings, failed to outperform XGBoost, scoring only 0.77 across accuracy, precision, recall, and F1-Score. These results confirm that the proposed framework, based on engineered edge-computing indicators and XGBoost, not only outperforms traditional models statistically but also offers greater reliability in real-time risk-sensitive applications, such as the early detection of financial instability or contagion propagation.
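A minimal training sketch for the best-performing configuration is given below; the hyperparameters and the 80/20 split are assumptions (the confusion matrices in Tables 6 and 7 imply a 100-observation test set), not the paper's exact settings.

```python
# Train/test split and an XGBoost classifier as a baseline reproduction.
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, stratify=target, random_state=42)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
y_proba = model.predict_proba(X_test)[:, 1]  # input to the ROC-AUC metric
```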
Table 6 presents the confusion matrix for the Random Forest model. Out of all of the actual positive decision outcomes, 46 were correctly classified (true positives), while five were incorrectly predicted as negative (false negatives), indicating a small but non-negligible rate of missed opportunities. All of the negative cases were correctly identified (true negatives = 49), and there were no false positives, demonstrating excellent specificity. This outcome suggests that the Random Forest model is highly reliable for identifying unfavorable decisions, while still maintaining a strong ability to detect positive ones.
Table 7 displays the confusion matrix for XGBoost, which shows further improved performance. It correctly classified 48 positive cases and 49 negative cases, with only three false negatives and no false positives. This perfect precision (no FPs) combined with very high recall (only three FNs) is particularly valuable in risk-sensitive environments, where false alarms must be minimized and undetected risks (FNs) can propagate into systemic failures. From a risk management perspective, this result highlights the robustness of XGBoost in ensuring both accuracy and early detection of potentially dangerous decision scenarios.
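As a worked check, the headline XGBoost scores in Table 5 follow directly from the counts in Table 7:

```python
# Deriving the headline metrics from the XGBoost confusion matrix (Table 7).
tp, fp, fn, tn = 48, 0, 3, 49

accuracy  = (tp + tn) / (tp + fp + fn + tn)                # 97/100 = 0.97
precision = tp / (tp + fp)                                 # 48/48  = 1.00
recall    = tp / (tp + fn)                                 # 48/51  ≈ 0.94
f1        = 2 * precision * recall / (precision + recall)  # ≈ 0.97
```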
In contexts such as financial contagion prevention, the trade-off between false positives (resource misallocation) and false negatives (undetected shocks) is critical. XGBoost’s ability to eliminate false positives and minimize false negatives offers a decisive advantage for real-time, edge-enhanced financial decision systems. By contrast, although RF performs very well, the existence of five false negatives could translate into unflagged risks in practice.
This result provides valuable insights into how edge-enabled decision frameworks can support the design of adaptive and resilient economic infrastructures in peripheral regions. By integrating low-latency, high-precision computational mechanisms (as demonstrated through the performance of XGBoost), local institutions can significantly enhance their capacity to identify early signals of economic stress and respond with greater autonomy. This technological augmentation aligns with the principles of place-based policy, promoting decentralized, context-sensitive solutions that reinforce systemic stability at the regional level. Furthermore, by minimizing false positives and negatives in decision classification, the proposed framework contributes to more efficient resource allocation, reducing vulnerability to financial contagion and supporting long-term socioeconomic resilience in underdeveloped or structurally fragile territories.

5. Discussion

In this study, ML algorithms were applied to predict economic decision outcomes using a comprehensive set of features derived from real-time edge computing environments. The dependent variable, the decision outcome, is a binary indicator that reflects whether a decision was classified as positive or negative. This classification task aligns naturally with supervised learning methods, where the goal is to learn underlying patterns in the data that contribute to decision performance. The independent variables used in the models encompass both economic metrics (e.g., the transaction volume, the financial metric, and the market behavior index) and technological performance indicators (e.g., resource utilization, edge processing latency, system throughput, and performance PCA). These variables capture the hybrid nature of the system, integrating edge infrastructure dynamics with economic behavioral signals.
The core purpose of employing ML models is threefold: (i) predictive decision support; (ii) systemic risk mitigation; and (iii) feature importance assessment. From a strategic perspective, the ML algorithms serve a broader analytical role in (i) evaluating decision efficiency under real-time constraints, especially in scenarios with potential systemic instability; (ii) identifying signals indicative of suboptimal or risky decisions, which may propagate across interconnected financial or operational networks; and (iii) supporting edge-based infrastructure monitoring, where fluctuations in latency or throughput might lead to degraded decision quality and increased exposure to contagion risk.
Moreover, the comparative model performance shows that more complex algorithms, such as XGBoost and Random Forest, significantly outperform traditional Logistic Regression in predictive accuracy and ROC-AUC. This suggests that nonlinear relationships and interaction effects between variables play a substantial role in shaping decision outcomes, an insight highly relevant for policy designers and infrastructure engineers alike.
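As an illustration of the feature importance assessment mentioned above, a gain-style importance ranking can be read directly off the fitted ensemble, reusing `model` and `features` from the earlier sketches:

```python
# Rank predictors by their contribution to the XGBoost decisions.
import pandas as pd

importances = pd.Series(model.feature_importances_, index=features.columns)
print(importances.sort_values(ascending=False))
```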
The increasing reliance on machine learning algorithms and edge computing in financial decision-making raises several ethical concerns, particularly in high-stakes environments where misclassifications may result in systemic harm. Automated models, while efficient, may perpetuate biases embedded in training data or overlook qualitative contextual factors that human judgment would otherwise consider. In real-time financial ecosystems, this could lead to the unfair exclusion of vulnerable actors, the misallocation of capital, or the amplification of existing inequalities.
Furthermore, the opacity of complex models such as XGBoost and Neural Networks may limit transparency and explainability, posing challenges for accountability and regulatory compliance. Ensuring that such systems are interpretable, auditable, and aligned with the principles of fairness and inclusivity is critical, especially when deployed in sectors that affect public welfare and economic stability.
As such, any deployment of edge-enhanced decision systems in financial contexts must be accompanied by robust ethical oversight frameworks, including algorithmic audits, transparency protocols, and mechanisms for human-in-the-loop validation. These safeguards are essential for building trust, avoiding unintended consequences, and ensuring that technological advancements contribute positively to equitable and resilient economic development.
These findings have significant economic implications, particularly in the context of financial contagion and systemic risk propagation. In highly interconnected economic ecosystems, decisions made under suboptimal technological or informational conditions can lead to cascading effects, not only within a single system but across multiple institutions or sectors. The ability to predict and interpret decision outcomes in real-time thus becomes a critical capability for early warning systems and crisis prevention frameworks.
The integration of performance metrics derived from edge computing (e.g., latency, utilization, and throughput) into predictive economic models offers a novel approach to understanding operational fragility. For instance, an elevated latency per transaction or an elevated utilization-to-efficiency ratio may signal early signs of stress in the digital infrastructure, potentially correlating with diminished decision accuracy or higher error rates in automated financial systems. These inefficiencies, if left undetected, can amplify exposure to financial contagion, especially in markets where high-frequency transactions and automated decisions dominate.
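For concreteness, these two stress signals can be expressed as simple ratios; the definitions below are assumptions chosen to be consistent with the reported summary statistics, not the paper's verbatim formulas.

```python
# Illustrative constructions of two composite stress indicators.
df["latency_per_transaction"] = (df["edge_processing_latency"]
                                 / df["transaction_volume"])
df["utilization_to_efficiency_ratio"] = (df["resource_utilization"]
                                         / df["workload_distribution_efficiency"])
```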
Moreover, from a macroeconomic perspective, the results suggest that system-level technical variables are not just peripheral operational concerns but can serve as leading indicators of economic disruption. Edge computing environments, due to their decentralized and adaptive nature, provide a rich stream of granular data that can be used to map emerging vulnerabilities in near real-time. Thus, incorporating such data into machine learning frameworks enhances our ability to forecast not only individual decision failures but also the broader structural risks they may entail.

6. Conclusions

This study examined the intersection of edge computing performance and economic decision-making in real-time environments, with particular attention to the mitigation of financial contagion risk. By constructing a dataset that integrates economic behavior signals with edge infrastructure metrics, and applying multiple machine learning models, we addressed three key research questions.
Regarding RQ1, we designed four composite indicators (transaction efficiency, latency per transaction, the utilization-to-efficiency ratio, and the decision quality score) that effectively capture the interplay between system performance and economic decision dynamics. These indicators allowed for granular modeling of computational and economic interactions.
For RQ2, correlation analysis and PCA revealed significant and interpretable relationships between edge performance (e.g., latency, resource use) and decision outcomes, highlighting that low latency and balanced resource utilization are associated with more favorable economic decisions.
With respect to RQ3, our results demonstrated that ensemble-based models such as XGBoost and Random Forest outperformed traditional classifiers, achieving high levels of predictive accuracy (up to 97%) and excellent ROC-AUC scores. These findings underscore the effectiveness of ML-based architectures in forecasting and managing decision risk under edge-enhanced conditions.
The economic implications of this research are multifaceted. At the micro level, localized decision-making enabled by edge computing can reduce reaction times and mitigate error propagation in automated economic processes. At the macro level, edge performance metrics may serve as early warning signals for systemic fragility, supporting the development of real-time monitoring tools and adaptive response systems.
In the context of financial contagion, our results suggest that technological inefficiencies, such as uneven workload distribution or delayed edge response, can indirectly influence the transmission of shocks across economic agents. Thus, integrating edge monitoring into financial infrastructure design could provide a new layer of resilience against cascading failures.
While this study did not simulate financial contagion events explicitly, the strong association between improved edge system performance (e.g., reduced latency, balanced resource utilization) and higher decision quality suggests a potential pathway for mitigating contagion risk. In complex economic networks, faster and more accurate local decisions can prevent the amplification of shocks across interconnected agents. Future research could build on this foundation by modeling systemic contagion dynamics under varying edge performance scenarios, offering more direct empirical validation of this critical link.
Based on the findings of this study, several actionable recommendations can be formulated to enhance economic resilience and improve decision-making accuracy in edge-enabled environments:
  • Integrate edge monitoring into financial regulation frameworks: Regulators should require financial institutions to monitor and report key edge performance metrics (e.g., processing latency, system throughput, and resource efficiency) as part of operational risk assessments. These indicators can serve as early warning signals for systemic stress or contagion risk;
  • Promote edge infrastructure standardization in financial ecosystems: Establishing common standards for edge computing performance and data interoperability in financial platforms will ensure consistency and reliability across institutions, especially during high-volatility periods;
  • Mandate stress testing of digital infrastructure under simulated contagion scenarios: Financial institutions should be required to perform digital stress tests simulating edge node failures or high-latency conditions to understand potential vulnerabilities in decentralized environments;
  • Incentivize decentralized and adaptive risk management solutions: Governments and central banks should support the development of edge-based decision-support systems that can adapt in real time to economic shocks, improving reaction speed and local autonomy in decision-making.
Despite these contributions, several limitations must be acknowledged. First, the dataset used was synthetic, and although it was carefully constructed to reflect key features of real-time economic dynamics, it may not capture the full complexity, heterogeneity, and emergent behavior characteristic of real-world systems; the generalizability of the results must therefore be interpreted cautiously. Second, the analysis focused on binary outcomes and did not explore sequential decision processes or temporal feedback loops. Future studies should aim to validate these findings using empirical datasets derived from operational financial systems or IoT-enabled economic infrastructures.
Furthermore, while this study focused on binary decision outcomes (positive or negative), real-world economic decision-making often involves multi-class or continuous outcomes, representing a richer spectrum of financial states and risk levels. Generalizing the current framework to multiclass classification, ordinal decision outcomes, or sequential decision processes (e.g., using reinforcement learning) would improve applicability and realism in dynamic economic environments.
In addition, scalability remains a critical concern, as deploying resource-intensive machine learning models (such as XGBoost) across heterogeneous and constrained edge devices may encounter technical limitations. Computational costs associated with real-time inference could impact system responsiveness and energy efficiency. Moreover, although XGBoost achieved the highest predictive performance, its lower interpretability compared to simpler models (e.g., Logistic Regression) could pose challenges for regulatory compliance, auditability, and stakeholder trust. Future research should explore lightweight model alternatives (e.g., model pruning or distillation), scalable distributed edge architectures, and explainable AI techniques (e.g., SHAP values) to overcome these challenges. These limitations open valuable avenues for future research at the intersection of edge computing, economic decision-making, and financial risk management.
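As a sketch of the SHAP-based direction suggested above, per-feature attributions can be obtained with the shap library's exact explainer for tree ensembles, reusing the fitted `model` and `X_test` from the earlier sketches:

```python
# Per-feature attributions for the tree ensemble via SHAP.
import shap

explainer = shap.TreeExplainer(model)        # exact explainer for tree models
shap_values = explainer.shap_values(X_test)  # one attribution per feature/sample
shap.summary_plot(shap_values, X_test)       # global importance overview
```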
Future research could address these limitations by (i) validating findings on real-time data from financial markets or IoT-based economic systems; (ii) extending the framework to multiclass or time-series classification tasks; (iii) incorporating reinforcement learning or adaptive edge optimization algorithms for dynamic resource management; and (iv) exploring cross-sectoral contagion mechanisms beyond the financial domain, such as supply chain disruptions or energy markets.
Overall, this study provides a robust foundation for understanding how edge computing can shape economic decision architectures and offers actionable insights into the design of intelligent, real-time financial systems in an increasingly connected and uncertain world.

Author Contributions

Conceptualization, Ș.I. and I.N.; Data curation, Ș.I., C.D. and I.N.; Formal analysis, Ș.I.; Investigation, Ș.I. and I.N.; Methodology, Ș.I. and I.N.; Project administration, C.D.; Resources, Ș.I., C.D. and I.N.; Software, Ș.I.; Supervision, C.D.; Validation, Ș.I., C.D. and I.N.; Visualization, Ș.I., C.D. and I.N.; Writing—original draft, Ș.I.; Writing—review and editing, C.D. and I.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the EU’s NextGenerationEU instrument through the National Recovery and Resilience Plan of Romania-Pillar III-C9-I8, managed by the Ministry of Research, Innovation, and Digitization, within the project entitled “Place-based Economic Policy in EU’s Periphery–fundamental research in collaborative development and local resilience. Projections for Romania and Moldova (PEPER)”, contract no. 760045/23.05.2023, code CF 275/30.11.2022. This paper was co-financed by the Bucharest University of Economic Studies during the Ph.D. program.

Data Availability Statement

The dataset used in this study is publicly available on Kaggle under the CC0: Public Domain license. The dataset includes economic and technological performance indicators, such as transaction volume, market behavior, financial metrics, edge processing latency, throughput, and resource utilization. It can be accessed at the following link: https://www.kaggle.com/datasets/ziya07/iot-driven-economic-decision-making (accessed on 4 April 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IoT: Internet of Things
AI: Artificial Intelligence
ML: Machine Learning
CNN: Convolutional Neural Network
AUC: Area Under the Curve
ROC: Receiver Operating Characteristic
LR: Logistic Regression
MLP: Multilayer Perceptron
NN: Neural Network
PCA: Principal Component Analysis
RF: Random Forest
ReLU: Rectified Linear Unit
SVM: Support Vector Machine
VIF: Variance Inflation Factor
XGBoost: Extreme Gradient Boosting
TP: True Positive
TN: True Negative
FP: False Positive
FN: False Negative

References

  1. Kelechi, A.H.; Alsharif, M.H.; Ramly, A.M.; Abdullah, N.F.; Nordin, R. The Four-C Framework for High Capacity Ultra-Low Latency in 5G Networks: A Review. Energies 2019, 12, 3449. [Google Scholar] [CrossRef]
  2. Jiang, K.; Zhou, H.; Chen, X.; Zhang, H. Mobile Edge Computing for Ultra-Reliable and Low-Latency Communications. IEEE Commun. Stand. Mag. 2021, 5, 68–75. [Google Scholar] [CrossRef]
  3. Zachariadis, M.; Hileman, G.; Scott, S.V. Governance and Control in Distributed Ledgers: Understanding the Challenges Facing Blockchain Technology in Financial Services. Inf. Organ. 2019, 29, 105–117. [Google Scholar] [CrossRef]
  4. Cheng, X.; Wu, J.; Liao, S.S. A Study of Contagion in the Financial System from the Perspective of Network Analytics. Neurocomputing 2017, 264, 42–49. [Google Scholar] [CrossRef]
  5. Qiu, T.; Chi, J.; Zhou, X.; Ning, Z.; Atiquzzaman, M.; Wu, D.O. Edge Computing in Industrial Internet of Things: Architecture, Advances and Challenges. IEEE Commun. Surv. Tutorials 2020, 22, 2462–2488. [Google Scholar] [CrossRef]
  6. Shahzadi, S.; Iqbal, M.; Dagiuklas, T.; Qayyum, Z.U. Multi-Access Edge Computing: Open Issues, Challenges and Future Perspectives. J. Cloud Comput. 2017, 6, 30. [Google Scholar] [CrossRef]
  7. Sharma, M.; Tomar, A.; Hazra, A. Edge Computing for Industry 5.0: Fundamental, Applications, and Research Challenges. IEEE Internet Things J. 2024, 11, 19070–19093. [Google Scholar] [CrossRef]
  8. Xie, J.; Zhou, X.; Cheng, L. Edge Computing for Real-Time Decision Making in Autonomous Driving: Review of Challenges, Solutions, and Future Trends. IJACSA 2024, 15, 598–607. [Google Scholar] [CrossRef]
  9. Modupe, O.T.; Otitoola, A.A.; Oladapo, O.J.; Abiona, O.O.; Oyeniran, O.C.; Adewusi, A.O.; Komolafe, A.M.; Obijuru, A. Reviewing the Transformational Impact of Edge Computing on Real-Time Data Processing and Analytics. Comput. Sci. IT Res. J. 2024, 5, 693–702. [Google Scholar] [CrossRef]
  10. Arroba, P.; Buyya, R.; Cárdenas, R.; Risco-Martín, J.L.; Moya, J.M. Sustainable Edge Computing: Challenges and Future Directions. Softw. Pract. Exp. 2024, 54, 2272–2296. [Google Scholar] [CrossRef]
  11. Veeramachaneni, V. Edge Computing: Architecture, Applications, and Future Challenges in a Decentralized Era. Recent Trends Comput. Graph. Multimed. Technol. 2024, 7, 8–23. [Google Scholar] [CrossRef]
  12. Butt, A.U.R.; Saba, T.; Khan, I.; Mahmood, T.; Khan, A.R.; Singh, S.K.; Daradkeh, Y.I.; Ullah, I. Proactive and Data-Centric Internet of Things-Based Fog Computing Architecture for Effective Policing in Smart Cities. Comput. Electr. Eng. 2025, 123, 110030. [Google Scholar] [CrossRef]
  13. Larian, H.; Safi-Esfahani, F. InTec: Integrated Things-Edge Computing: A Framework for Distributing Machine Learning Pipelines in Edge AI Systems. Computing 2025, 107, 41. [Google Scholar] [CrossRef]
  14. Salman, O.; Elhajj, I.; Kayssi, A.; Chehab, A. Edge Computing Enabling the Internet of Things. In Proceedings of the 2015 IEEE 2nd World Forum on Internet of Things (WF-IoT), Milan, Italy, 14–16 December 2015; IEEE: New York, NY, USA, 2015; pp. 603–608. [Google Scholar]
  15. Mohsin, M.; Rovetta, S.; Masulli, F.; Cabri, A. Automated Disassembly of Waste Printed Circuit Boards: The Role of Edge Computing and IoT. Computers 2025, 14, 62. [Google Scholar] [CrossRef]
  16. Singh, S.; Madaan, G.; H. R., S.; Singh, A.; Pandey, D.; George, A.S.; Pandey, B.K. Empowering Connectivity: Exploring the Internet of Things. In Advances in Computational Intelligence and Robotics; Pandey, D., Muniandi, B., Pandey, B.K., George, A.S., Eds.; IGI Global: Hershey, PA, USA, 2024; pp. 89–116. ISBN 9798337310329. [Google Scholar]
  17. Routaib, H.; Seddik, S.; Elmounadi, A.; Haddadi, A.E. Enhancing E-Business in Industry 4.0: Integrating Fog/Edge Computing with Data LakeHouse for IIoT. Future Gener. Comput. Syst. 2025, 166, 107653. [Google Scholar] [CrossRef]
  18. Bhambri, P.; Khang, A. Edge Computing for Enhancing Efficiency and Sustainability in Green Transportation Systems. In Driving Green Transportation System Through Artificial Intelligence and Automation; Khang, A., Ed.; Lecture Notes in Intelligent Transportation and Infrastructure; Springer Nature Switzerland: Cham, Switzerland, 2025; pp. 43–65. ISBN 978-3-031-72616-3. [Google Scholar]
  19. Priya, P.K.; Sivaranjani, R.; Sathyamoorthy, M.; Dhanaraj, R.K. Efficient Network and Communication Technologies for Smart and Sustainable Society 5.0. In Networked Sensing Systems; Dhanaraj, R.K., Sathyamoorthy, M., Balasubramaniam, S., Kadry, S., Eds.; Wiley: Hoboken, NJ, USA, 2025; pp. 63–100. ISBN 978-1-394-31086-9. [Google Scholar]
  20. Cheng, C.; Huang, H. Smart Financial Investor’s Risk Prediction System Using Mobile Edge Computing. J. Grid Comput. 2023, 21, 76. [Google Scholar] [CrossRef]
  21. Abid, M.; Saqlain, M. Utilizing Edge Cloud Computing and Deep Learning for Enhanced Risk Assessment in China’s International Trade and Investment. IJKIS 2023, 1, 1–9. [Google Scholar] [CrossRef]
  22. Zhou, H. Design of Intelligent Financial Investment Risk Prediction System Based on Edge Computing. Mob. Inf. Syst. 2022, 2022, 7822292. [Google Scholar] [CrossRef]
  23. Liao, X.; Li, W. Research on the Tail Risk Contagion in the International Commodity Market on the China’s Financial Market: Based on a Network Perspective. Kybernetes 2025, 54, 807–831. [Google Scholar] [CrossRef]
  24. Zeng, H. Influences of Mobile Edge Computing-Based Service Preloading on the Early-Warning of Financial Risks. J. Supercomput. 2022, 78, 11621–11639. [Google Scholar] [CrossRef]
  25. Li, Z.; Liang, X.; Wen, Q.; Wan, E. The Analysis of Financial Network Transaction Risk Control Based on Blockchain and Edge Computing Technology. IEEE Trans. Eng. Manag. 2024, 71, 5669–5690. [Google Scholar] [CrossRef]
  26. Liao, X.; Li, Q.; Chan, S.; Chu, J.; Zhang, Y. Interconnections and Contagion among Cryptocurrencies, DeFi, NFT and Traditional Financial Assets: Some New Evidence from Tail Risk Driven Network. Phys. A Stat. Mech. Its Appl. 2024, 647, 129892. [Google Scholar] [CrossRef]
  27. Mitsis, G.; Tsiropoulou, E.E.; Papavassiliou, S. Price and Risk Awareness for Data Offloading Decision-Making in Edge Computing Systems. IEEE Syst. J. 2022, 16, 6546–6557. [Google Scholar] [CrossRef]
  28. Kong, F.; Lu, H. Risk Control Management of New Rural Cooperative Financial Organizations Based on Mobile Edge Computing. Mob. Inf. Syst. 2021, 2021, 5686411. [Google Scholar] [CrossRef]
  29. Liu, S.; Wang, C.; Zhou, Y. Analysis of Financial Data Risk and Network Information Security by Blockchain Technology and Edge Computing. IEEE Trans. Eng. Manag. 2024, 71, 12579–12592. [Google Scholar] [CrossRef]
  30. Akbari, M. Revolutionizing Supply Chain and Circular Economy with Edge Computing: Systematic Review, Research Themes and Future Directions. Manag. Decis. 2024, 62, 2875–2899. [Google Scholar] [CrossRef]
  31. Yang, P.; Li, S.; Qin, S.; Wang, L.; Hu, M.; Yang, F. Smart Grid Enterprise Decision-Making and Economic Benefit Analysis Based on LSTM-GAN and Edge Computing Algorithm. Alex. Eng. J. 2024, 104, 314–327. [Google Scholar] [CrossRef]
  32. Shen, Y.; Shen, S.; Wu, Z.; Zhou, H.; Yu, S. Signaling Game-Based Availability Assessment for Edge Computing-Assisted IoT Systems with Malware Dissemination. J. Inf. Secur. Appl. 2022, 66, 103140. [Google Scholar] [CrossRef]
  33. Chuang, I.-H.; Weng, T.-C.; Tsai, J.-S.; Horng, M.-F.; Kuo, Y.-H. A Reliable IoT Data Economic System Based on Edge Computing. In Proceedings of the 2018 IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Bologna, Italy, 9–12 September 2018; IEEE: New York, NY, USA, 2018; pp. 1–5. [Google Scholar]
  34. Kubiak, K.; Dec, G.; Stadnicka, D. Possible Applications of Edge Computing in the Manufacturing Industry—Systematic Literature Review. Sensors 2022, 22, 2445. [Google Scholar] [CrossRef]
  35. Feng, N.; Ran, C. Design and Optimization of Distributed Energy Management System Based on Edge Computing and Machine Learning. Energy Inform. 2025, 8, 17. [Google Scholar] [CrossRef]
  36. Zheng, L.; Tan, L. A Decentralized Scheme for Multi-User Edge Computing Task Offloading Based on Dynamic Pricing. Peer-to-Peer Netw. Appl. 2025, 18, 91. [Google Scholar] [CrossRef]
  37. Bhutiani, A. Designing Real-Time Image and Video Processing Algorithms for Automated Waste Classification and Sorting in Circular Economy Systems. J. Artif. Intell. Mach. Learn. Data Sci. 2024, 2, 1871–1874. [Google Scholar] [CrossRef] [PubMed]
  38. Kaggle IoT-Driven Economic Decision-Making 2024. Available online: https://www.kaggle.com/datasets/ziya07/iot-driven-economic-decision-making (accessed on 2 April 2025).
  39. Zhu, J.; Xu, T.; Zhang, Y.; Fan, Z. Scalable Edge Computing Framework for Real-Time Data Processing in Fintech Applications. Int. J. Adv. Appl. Sci. Res. 2024, 3, 85–92. [Google Scholar]
  40. Liu, C.; Zhang, L. Financial Risk Management of Listed Companies Based on Mobile Edge Computing. Math. Probl. Eng. 2022, 2022, 8804988. [Google Scholar] [CrossRef]
  41. Suresh Kumar, S.; Stephen, S.; Suhainul Rumysia, M. Rootkit Detection Using Hybrid Machine Learning Models and Deep Learning Model: Implementation. In Proceedings of the 2024 International Conference on Advances in Computing, Communication and Applied Informatics (ACCAI), Chennai, India, 9 May 2024; IEEE: New York, NY, USA, 2024; pp. 1–7. [Google Scholar]
  42. Prathom, K.; Sujitapan, C. Performance of Logistic Regression and Support Vector Machine Conjunction with the GIS and RS in the Landslide Susceptibility Assessment: Case Study in Nakhon Si Thammarat, Southern Thailand. J. King Saud Univ.-Sci. 2024, 36, 103306. [Google Scholar] [CrossRef]
  43. Hosmer, D.W.; Lemeshow, S. Applied Logistic Regression, 1st ed.; Wiley: Hoboken, NJ, USA, 2000; ISBN 978-0-471-35632-5. [Google Scholar]
  44. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  45. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13 August 2016; ACM: New York, NY, USA, 2016; pp. 785–794. [Google Scholar]
  46. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; Adaptive computation and machine learning; The MIT Press: Cambridge, MA, USA, 2016; ISBN 978-0-262-03561-3. [Google Scholar]
  47. Hammad, M.M. Deep Learning Activation Functions: Fixed-Shape, Parametric, Adaptive, Stochastic, Miscellaneous, Non-Standard, Ensemble. arXiv 2024, arXiv:2407.11090. [Google Scholar]
  48. Nica, I.; Alexandru, D.B.; Crăciunescu, S.L.P.; Ionescu, Ș. Automated Valuation Modelling: Analysing Mortgage Behavioural Life Profile Models Using Machine Learning Techniques. Sustainability 2021, 13, 5162. [Google Scholar] [CrossRef]
  49. Haghighi, S.; Jasemi, M.; Hessabi, S.; Zolanvari, A. PyCM: Multiclass Confusion Matrix Library in Python. JOSS 2018, 3, 729. [Google Scholar] [CrossRef]
  50. Ionescu, Ș.; Chiriță, N.; Nica, I.; Delcea, C. An Analysis of Residual Financial Contagion in Romania’s Banking Market for Mortgage Loans. Sustainability 2023, 15, 12037. [Google Scholar] [CrossRef]
  51. Kalnins, A.; Praitis Hill, K. The VIF Score. What Is It Good For? Absolutely Nothing. Organ. Res. Methods 2025, 28, 58–75. [Google Scholar] [CrossRef]
  52. Mahmood, S.H. Estimating Models and Evaluating Their Efficiency under Multicollinearity in Multiple Linear Regression: A Comparative Study. ZJHS 2024, 28, 264–277. [Google Scholar] [CrossRef]
  53. Shrestha, N. Detecting Multicollinearity in Regression Analysis. Am. J. Appl. Math. Stat. 2020, 8, 39–42. [Google Scholar] [CrossRef]
  54. Hastari, D.; Winanda, S.; Pratama, A.R.; Nurhaliza, N.; Ginting, E.S. Application of Convolutional Neural Network ResNet-50 V2 on Image Classification of Rice Plant Disease. Public Res. J. Eng. Data Technol. Comput. Sci. 2024, 1, 71–77. [Google Scholar] [CrossRef]
Figure 1. Methodological workflow.
Figure 2. Pearson correlation.
Figure 3. Kendall correlation.
Figure 4. Distribution and variability of transaction volume.
Figure 5. Distribution and variability of market behavior index.
Figure 6. Distribution and variability of financial metric.
Figure 7. Distribution and variability of edge processing latency.
Figure 8. Distribution and variability of resource utilization.
Figure 9. Distribution and variability of workload distribution efficiency.
Figure 10. Distribution and variability of decision accuracy.
Figure 11. Distribution and variability of system throughput.
Figure 12. Distribution and variability of decision outcome.
Figure 13. Distribution and variability of latency per transaction.
Figure 14. Distribution and variability of utilization to efficiency ratio.
Figure 15. Distribution and variability of decision quality score.
Figure 16. PCA-derived composite variable.
Figure 17. Loadings of original features on the first principal component (performance_pca).
Figure 18. PCA score distribution by decision outcome.
Table 1. Confusion matrix.

                   | Actual Positive     | Actual Negative
Predicted Positive | TP (True Positive)  | FP (False Positive)
Predicted Negative | FN (False Negative) | TN (True Negative)
Table 2. Summary statistics.

Indicator | Mean | Std. dev. | Min | First Quartile | Median | Third Quartile
Transaction volume | 5591.86 | 2523.18 | 1005.00 | 3430.50 | 5767.00 | 7787.50
Market behavior index | 0.006 | 0.595 | −0.987 | −0.545 | 0.001 | 0.517
Financial metric | 111.21 | 127.98 | 0.00 | 43.09 | 71.27 | 99.04
Edge processing latency | 164.40 | 149.76 | 0.03 | 36.60 | 109.53 | 270.67
Resource utilization | 0.55 | 0.26 | 0.10 | 0.34 | 0.54 | 0.78
Workload distribution efficiency | 101.00 | 128.74 | 0.71 | 59.90 | 75.87 | 89.12
Decision accuracy | 73.36 | 28.71 | 0.01 | 73.59 | 81.96 | 90.85
System throughput | 5.48 × 10^11 | 2.62 × 10^11 | 1.01 × 10^11 | 3.11 × 10^11 | 5.53 × 10^11 | 7.76 × 10^11
Decision outcome | 0.51 | 0.50 | 0.00 | 0.00 | 1.00 | 1.00
Transaction efficiency | 1.39 × 10^−14 | 1.38 × 10^−14 | 1.15 × 10^−15 | 5.74 × 10^−15 | 9.55 × 10^−15 | 1.66 × 10^−14
Latency per transaction | 0.040 | 0.051 | 0.001 | 0.006 | 0.020 | 0.054
Utilization to efficiency ratio | 0.023 | 0.101 | 0.001 | 0.003 | 0.007 | 0.011
Decision quality score | 34.83 | 38.26 | 0.00 | 0.00 | 7.33 | 76.81
Table 3. Variance Inflation Factor and Tolerance scores for multicollinearity diagnosis.

Variable | VIF | Tolerance
Transaction volume | 1.18 | 0.84
Market behavior index | 1.01 | 0.98
Financial metric | 1.74 | 0.57
Edge processing latency | 5.55 | 0.17
Resource utilization | 4.87 | 0.20
Workload distribution efficiency | 1.63 | 0.61
Decision accuracy | 5.96 | 0.16
System throughput | 4.34 | 0.23
Transaction efficiency | 6.18 | 0.16
Latency per transaction | 4.74 | 0.21
Utilization to efficiency ratio | 1.11 | 0.89
Decision quality score | 1.88 | 0.53
Table 4. Variance Inflation Factor and Tolerance scores for the final model variables.

Variable | VIF | Tolerance
Transaction volume | 4.59 | 0.21
Market behavior index | 1.01 | 0.98
Financial metric | 1.74 | 0.57
Resource utilization | 4.69 | 0.20
Workload distribution efficiency | 1.63 | 0.61
System throughput | 4.43 | 0.22
Latency per transaction | 3.55 | 0.28
Utilization to efficiency ratio | 1.11 | 0.89
Decision quality score | 1.84 | 0.54
Performance PCA | 2.06 | 0.48
Table 5. Performance comparison of machine learning models in predicting decision outcomes.

Model | Accuracy | Precision | Recall | F1-Score | ROC-AUC
Logistic Regression | 64% | 0.63 | 0.70 | 0.67 | 0.663
Random Forest | 95% | 1.00 | 0.94 | 0.97 | 0.997
XGBoost | 97% | 1.00 | 0.94 | 0.97 | 0.998
SVM RBF | 77% | 0.77 | 0.77 | 0.77 | 0.834
Neural Network | 87% | 0.87 | 0.86 | 0.87 | 0.965
Table 6. Confusion matrix for Random Forest.

                   | Actual Positive | Actual Negative
Predicted Positive | 46              | 0
Predicted Negative | 5               | 49
Table 7. Confusion matrix for XGBoost.

                   | Actual Positive | Actual Negative
Predicted Positive | 48              | 0
Predicted Negative | 3               | 49
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
