Article

Financial Institutions of Emerging Economies: Contribution to Risk Assessment

1 Transport and Telecommunication Institute, Faculty of Management and Logistics, LV-1019 Riga, Latvia
2 SIA StarBridge, LV-1050 Riga, Latvia
3 Institute of Life Sciences and Technologies, Daugavpils University, LV-5401 Daugavpils, Latvia
4 Higher School of Economics and Business, Al-Farabi Kazakh National University, Almaty 050051, Kazakhstan
5 JSC “The Fund of Problem Loans”, Almaty 050051, Kazakhstan
* Authors to whom correspondence should be addressed.
Risks 2025, 13(9), 167; https://doi.org/10.3390/risks13090167
Submission received: 28 June 2025 / Revised: 19 August 2025 / Accepted: 27 August 2025 / Published: 1 September 2025

Abstract

Conventional risk assessment frameworks usually define risk as a function of vulnerabilities and threats, but they frequently lack a single quantitative model that incorporates the unique features of each element. In order to close this gap, this paper creates a flexible, open, and theoretically sound risk assessment formula that is still reliable even in the absence of complete vulnerability data. This is particularly important for financial institutions operating in emerging markets, where regulators rarely provide centralized vulnerability assessments and where Basel-type frameworks are only partially implemented. The contribution of the paper is a practically verified Bayesian network model that integrates threat likelihoods, vulnerability likelihoods, and their impacts within a probabilistic structure. Using 500 stratified Monte Carlo scenarios calibrated to real fintech and banking institutions operating under EU and national supervision, we demonstrate that excluding vulnerability impact from the model does not significantly reduce the predictive performance. These findings advance the theory of risk assessment, simplify practical implementation, and enhance the scalability of risk modeling for both traditional banks and fintech institutions in emerging economies.

1. Introduction

Risk management is very important in many areas of financial institutions, and it affects decisions about operations, compliance, cybersecurity, credit, market activity, and strategic planning (Cernisevs et al. 2023b; Popova et al. 2024, 2025). Although international standards such as Basel II and Basel III have established important benchmarks for capital adequacy and stress testing, modern financial systems are regulated in a much broader context. Supervisory authorities, including the European Banking Authority (EBA), the European Securities and Markets Authority (ESMA), and national regulators, have issued extensive requirements relating to information and communication technology (ICT) risks, operational resilience, anti-money laundering, and financial technology supervision, forming a risk-based approach (Khan and Malaika 2021; Kuerbis and Badiei 2017; Sanchez-Zurdo and San-Martín 2024). This reflects a recognition that the risks facing financial institutions today extend well beyond traditional credit exposures and must be captured by models that incorporate technological, systemic, and probabilistic dimensions. Moreover, risk is used in operations to address internal failures or external attacks to ensure service continuity and security (Gao et al. 2023; Nadler et al. 2022; Special Announcement 2019). Risk modeling helps banks and other financial institutions to make lending and investment decisions by weighing the risks of credit exposure or market volatility against the potential for profit (Agarwal and Zhang 2020; Yves Mersch 2019). Therefore, it is important to have a consistent and accurate way to measure and understand risks taking into account both threats and vulnerabilities. This is important for aligning risk appetite with business strategy and planning for resilience (Dhar and Stein 2016; Ansoff 1965; Andrews 1971; Teece 2010).
This has special importance for emerging economies. On the one hand, they have a powerful impulse for contemporary technological banking development (Popova 2021), which allows them to strengthen their financial markets significantly. On the other hand, this creates additional challenges for financial institutions in emerging economies: they do not have a large set of data on vulnerabilities, given the short history of these economies in their contemporary state, and the volatility of these markets is very high.
For example, information systems, critical infrastructure, and industrial control systems all require risk-based decisions under uncertainty and threat (Aven 2016; Cernisevs et al. 2023a; da Silva et al. 2017). According to Dawson (2015), the idea of risk in classical constructs comprises two main parts: threats and vulnerabilities. This threat-and-vulnerability construct also underlies many subsequent risk assessment approaches. It can be expressed in the following way (Cox 2008):
Risk = Threat × Vulnerabilities
However, this is a simplistic model that does not account for the complex interlocking relationships between threat scenarios, vulnerabilities, and their likelihood and impact (Ledwaba and Venter 2017; Rossebo et al. 2007). This view is contested by many more granular, parameterized approaches that decompose risk into subcomponents such as
  • Threat Likelihood (TL) and Threat Impact (Ti)
  • Vulnerability Likelihood (VL) and Vulnerability Impact (Vi)
Even though frameworks like the NIST SP 800-30 Guide for Conducting Risk Assessments, the Common Vulnerability Scoring System (CVSS), and Factor Analysis of Information Risk (FAIR) are widely used in practice, there is still no single formula that consistently combines all four parameters—the impacts and likelihoods of threats and vulnerabilities (Giboney et al. 2023b; Liu et al. 2022; Luo et al. 2024; Manzoor et al. 2021). Different risk assessment approaches prioritize these factors differently, which makes risk estimation less reliable and harder to compare across domains.
There is no agreement on how to combine these factors into a single risk expression, which is a major point of this study. Some models use TL and VL together, or Ti and Vi together, to develop risk scores, but very few consider all four parameters in relation to each other (Cernisevs et al. 2023a, 2023b; Popova et al. 2024). In addition, there is not much real-world evidence to support these kinds of relationships in current models.
The second argument comes from the idea that the number of distinct risks for a given system should equal the number of threats the analyst considers, not the number of vulnerabilities (Sadeghi and Javan 2024). From the subject’s point of view, different vulnerabilities that contribute to a threat in different ways do not change the risk that is posed. This contrasts with most traditional models, which multiply the number of risks by enumerating threat–vulnerability pairs (ESMA 2022; Fülöp et al. 2022; Griffin et al. 2023; da Silva et al. 2017; Kaddumi and Al-Kilani 2022; Rastogi et al. 2022).
This article also makes the assumption that when a risk event happens, the effect of a threat (Ti) is the same as the effect of the exploited vulnerability (Vi). This suggests that threat and vulnerability parameters may depend on each other, which is not taken into account by current independent models. This approach is especially important for emerging economies where the historical records of vulnerabilities and their impacts and likelihoods are not well represented. Therefore, it is necessary to choose the tool corresponding to the set tasks. The authors have decided to use Bayesian networks as a way to model conditional dependencies and bring together subjective and objective knowledge into a single structure.
The article’s goal is the development of a theoretically sound and practically verified formula for risk assessment applicable to the situations when financial institutions of emerging economies have no full data on vulnerabilities.

2. Materials and Methods

Bayesian networks (BNs) are directed acyclic graphs (DAGs) that show how variables depend on each other (McAllester et al. 2008; Getoor et al. 2001, 2002). The edges of the network show the direction of probabilistic influence, and each node stands for a variable, like the likelihood of a threat, the effect of a threat, the likelihood of a vulnerability, or the effect of a vulnerability. Bayes’ theorem enables reasoning under uncertainty by adjusting the probabilities of outcomes based on observed evidence. This study employed two Bayesian network configurations: the reduced model, which excludes Vulnerability Impact (Vi) to assess its effect on predictive accuracy, and the full model, which incorporates all four critical parameters (TL, Ti, VL, and Vi). This dual-model approach enables an examination of Vi’s informational contribution through a comparison of model performance, focusing on error metrics, structural simplicity, and statistical significance. The Python package PGMPy [0.1.26-2024-08-09] (Probabilistic Graphical Models in Python) was utilized to construct and evaluate the models (Ankan and Textor 2024; Ankan and Panda 2015). This aids in inference, the estimation of conditional probability tables, and the understanding of structure. This method provides a flexible, data-driven approach to comprehending complex dependencies in risk modeling and evaluating the trade-offs between model completeness and computational efficiency.
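To make the reasoning step concrete, the following is a minimal numerical illustration of Bayes’ theorem updating a probability in the light of evidence. The probabilities used here are hypothetical and unrelated to the study’s dataset:

```python
# Minimal illustration of Bayesian updating (hypothetical numbers).
def bayes_update(prior, likelihood, likelihood_given_not):
    """P(H | E) = P(E | H) * P(H) / P(E), with P(E) by total probability."""
    evidence = likelihood * prior + likelihood_given_not * (1.0 - prior)
    return likelihood * prior / evidence

# Prior belief that a threat materializes: 20%.
# A monitoring alert fires in 90% of true threat cases and 10% of benign cases.
posterior = bayes_update(prior=0.2, likelihood=0.9, likelihood_given_not=0.1)
print(round(posterior, 3))  # 0.692 -- the alert raises the belief from 0.2 to ~0.69
```

The same mechanism, applied node by node over the DAG, is what PGMPy automates at scale.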

2.1. Research Design Overview

The study adopts a quantitative modeling approach, structured in three stages:
  • Parameter definition and assumptions.
  • Model construction using Bayesian networks.
  • Validation via expert scenarios and simulated data.
The goal is to develop and test a Bayesian network (BN) (Cheimonidis and Rantos 2025) model that captures the probabilistic relationships among the four key risk parameters:
  • TL: Threat Likelihood;
  • Ti: Threat Impact;
  • VL: Vulnerability Likelihood;
  • Vi: Vulnerability Impact.
The expected result is a composite risk associated with each defined threat, with the model designed to compute this risk in a way that can incorporate both data-driven and expert-assessed values.
  • Variable Definitions and Hypotheses
For further use, the authors define the variables within Table 1.
  • Research Hypotheses
Based on the conceptual model, the authors propose the following hypotheses:
Hypothesis 1.
(Impact Equivalence Hypothesis): When threats successfully take advantage of weaknesses in risk events, the Vulnerability Impact (Vi) is conditionally dependent on and moves toward Threat Impact (Ti).
Hypothesis 2.
(Redundancy Hypothesis): It is possible to make a correct risk model using only Ti, TL, and VL, leaving out Vi, without losing a statistically significant amount of predictive accuracy.
Hypothesis 3.
The four parameters (Ti, TL, Vi, VL) show measurable probabilistic dependencies that can be formally captured using Bayesian network inference.
The Hypothesis Testing Framework is then defined.
  • Hypothesis Testing Framework
To test the first hypothesis, we used an analytical framework that involved building and comparing two Bayesian network models: a full model that includes all four core risk parameters (TL, Ti, VL, and Vi) and a reduced model that leaves out Vi to see whether it is needed. The framework uses a set of 500 risk scenarios, each with its own set of parameter values. According to Giboney et al. (2023a), real-world data are very important. The authors use a set of 500 entries, randomly chosen from the set of 2950 entries used for analysis in (Cernisevs 2024). These are real data for 5 fintech companies operating in all EU countries; they are Financial fintech, Payment fintech, or Asset Management fintech, and they all deal with payment operations. These institutions are supervised by regulatory authorities and licensed as credit organizations, electronic money institutions (EMIs), payment initiation service providers, or virtual asset management institutions (Cernisevs et al. 2023a; Popova and Cernisevs 2022).
Then, the framework uses statistical tools such as Pearson’s correlation coefficient, conditional entropy, KL-divergence, and mutual information to investigate how the variables are related. We use Bayesian inference to see how close the estimates are to the observed values, and we use model selection criteria like MAE, RMSE, and BIC to compare how well the two configurations perform and what their trade-offs are. Edge probability analysis and bootstrapping provide two further checks on the reliability of the learned network structures. This data-driven approach helps in understanding whether Vi can be derived from Ti and whether the exclusion of Vi significantly changes the model’s accuracy or structure. The authors also investigate whether excluding Vi as a parameter makes the risk estimate more or less accurate.
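For discretized variables, the information-theoretic quantities named above reduce to simple frequency computations. The sketch below is illustrative of the definitions only, not the study’s actual analysis pipeline:

```python
import math
from collections import Counter

def pearson(xs, ys):
    """Pearson's correlation coefficient for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def entropy(xs):
    """Shannon entropy H(X) in bits, estimated from sample frequencies."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    # I(X; Y) = H(X) + H(Y) - H(X, Y)
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def conditional_entropy(xs, ys):
    # H(X | Y) = H(X, Y) - H(Y): residual uncertainty about X once Y is known
    return entropy(list(zip(xs, ys))) - entropy(ys)
```

For perfectly dependent binary samples, e.g. `xs = ys = [0, 0, 1, 1]`, mutual information equals 1 bit and conditional entropy equals 0, which is the pattern H1 looks for between Ti and Vi.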
To estimate the obtained models, certain criteria applicable to Bayesian networks are used (see Table 2).
Based on the unique characteristics of each hypothesis and the particular analytical viewpoint it necessitates, various Bayesian network model criteria are used to confirm the three hypotheses (H1–H3) (see Table 3). Direct dependency and information flow between variables are fundamental to H1, which examines whether Vulnerability Impact (Vi) can be deduced from Threat Impact (Ti). To capture the strength, uncertainty reduction, and structural presence of this dependency, metrics such as conditional entropy, edge probability, and Pearson’s correlation were used. H2, on the other hand, asks whether Vi can be eliminated from the model without significantly compromising the predictive accuracy; as a result, it needs performance-based metrics like MAE, RMSE, BIC, and KL-divergence, which evaluate the relative performance of the full and reduced models. To detect nonlinear dependencies between all parameter pairs, H3 requires mutual information scores to verify that all core variables are interdependent and suitably connected within the network. Therefore, every set of criteria is designed to best test the type of claim being evaluated, whether it be global interconnectedness, model efficiency, or structural dependence.
Bayesian network inference is used to model the conditional likelihood:
P(Vi | Ti, VL, TL),    P(R | Ti, TL, VL, Vi)
In alternative configurations, Vi is excluded from the risk formula to test Hypothesis 2. Conditional probabilities such as P(Vi | Ti, VL, TL) are estimated from the empirical frequency counts obtained in the simulated dataset used for Bayesian network parameter learning. For each parent configuration (Ti, VL, TL), we compute the relative frequency of each outcome Vi in the training data. To avoid zero-probability issues, especially for infrequent configurations, Laplace smoothing is applied to the counts before normalization. A step-by-step calculation example is presented in Appendix A.
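The frequency counting with Laplace smoothing described here can be sketched in plain Python; the function name `estimate_cpt` and the sample records are hypothetical, chosen only to show the add-one correction:

```python
from collections import defaultdict

def estimate_cpt(samples, alpha=1.0, outcomes=("Low", "High")):
    """Estimate P(Vi | Ti, VL, TL) from (parent_config, vi) samples,
    applying Laplace (add-alpha) smoothing before normalization."""
    counts = defaultdict(lambda: {o: 0 for o in outcomes})
    for parents, vi in samples:
        counts[parents][vi] += 1
    cpt = {}
    for parents, row in counts.items():
        total = sum(row.values()) + alpha * len(outcomes)
        cpt[parents] = {o: (row[o] + alpha) / total for o in outcomes}
    return cpt

# Hypothetical training records: ((Ti, VL, TL), Vi)
data = [(("High", "High", "High"), "High")] * 8 + \
       [(("High", "High", "High"), "Low")] * 2
cpt = estimate_cpt(data)
print(cpt[("High", "High", "High")])  # {'Low': 0.25, 'High': 0.75}
```

Without smoothing the raw frequencies would be 0.2 / 0.8; the add-one counts (3 and 9 over a total of 12) pull both estimates slightly toward uniform, which is what prevents zero-probability artifacts for unseen configurations.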

2.2. Experimental Scenarios and Inference Tests

The hypotheses are evaluated under two experimental configurations:
Scenario A—Full Model
  • All four parameters (Ti, TL, Vi, VL) are used to compute risk scores.
  • A full Bayesian network structure is used for inference.
  • Risk estimates:
R_A = f(Ti, TL, VL, Vi)
Scenario B—Reduced Model (Vi excluded)
  • Vi is excluded to test whether Ti alone accounts for impact.
  • The network simplifies to
R_B = f(Ti, TL, VL)
A Bayesian network represents the probabilistic dependencies between parameters. The model structure is based on the domain knowledge and literature, as illustrated in Figure 1.
Each node is assigned using the following:
  • A conditional probability table (CPT) based on
    Expert judgment (Cernisevs 2024);
    Empirical distributions (Cernisevs 2024).
  • Continuous variables (e.g., Ti and Vi) are discretized for computational tractability.
  • Data Sources
The data used in this study come from real-life examples from different regulatory and regional settings, which makes the parameter distributions differ (Cernisevs 2024). We normalized the raw values and mapped them into the four-dimensional risk parameter framework that is the focus of this article. These datasets were then fed into the Bayesian network model for structural learning and conditional probability inference.
  • Bayesian Inference and Learning
The study proceeds in two parts:
  • Learning Parameters. We use both real-world and simulated datasets to estimate CPTs using frequency-based learning. For instance, the obtained proportions can be used to estimate the probability that Vi will be high given Ti, TL, and VL.
  • Inference and Scoring. We use Bayesian inference to investigate the posterior probabilities of different levels of risk. We examine the full and reduced models (without Vi) and compare their predictive accuracy (Mean Absolute Error, RMSE), model complexity (Bayesian Information Criterion), and structural stability (edge confidence from bootstrapping).
We use KL-divergence to understand how different the outputs of the full and reduced models are, and we use mutual information metrics to study how strong the relationships are between the parameters (Al-Labadi et al. 2021).
  • Validation and Sensitivity Analysis
We used a bootstrap resampling method to find out how stable the network edges and CPTs are, to be confident that the results are correct. We ran scenario sensitivity tests to see how changes in TL or VL propagate through the model and alter Vi and the overall risk scores. The model’s generalizability and limits were checked by comparing real data to synthetic data.
  • Tools and Implementation
In this study, the Bayesian network model was implemented with open-source Python tools to make risk inferences. We chose these tools because they are flexible, easy to use, and support probability-based reasoning. The modular implementation made it possible to check each step—data preprocessing, model training, probabilistic inference, and evaluation—using different data sources.
We used the PGMPy (Probabilistic Graphical Models in Python) library to perform the main modeling. This library allows the definition of the structures, learning of the parameters, and making inferences for discrete Bayesian networks (Grassi n.d.). The following reasons led to the choice of PGMPy:
  • Support for conditional probability tables (CPTs) built in;
  • Algorithms for learning structure that are built in, such as Hill Climbing and constraint-based search;
  • The availability of inference engines like Variable Elimination and Belief Propagation.
There is a binary risk parameter (High or Low) for each node in the network. The graph was made by hand based on domain-driven causal assumptions and then checked using data-driven scoring metrics like the Bayesian Information Criterion (BIC) and log-likelihood.
The full network includes the following directed dependencies:
  • TL_High → Ti_High.
  • VL_High → Ti_High.
  • Ti_High → Vi_High.
  • TL_High → Vi_High.
  • VL_High → Vi_High.
CPTs were derived using empirical frequency distributions from the datasets. When a parent–child configuration did not yield sufficient empirical support (for example, for rare combinations), Laplace smoothing was applied to prevent zero-probability artifacts.
To enable integration into the Bayesian network and reduce the dimensionality of the conditional probability tables (CPTs), continuous inputs for TL, VL, Ti, and Vi were discretized based on their respective measurement scales and empirical distributions. Likelihood values (TL, VL), expressed as probabilities on a 0–1 scale, were discretized using a threshold of 0.5, representing the conventional boundary between low- and high-probability events in probabilistic modeling. Impact scores (Ti, Vi), measured on a 0–100 scale, were discretized using a threshold of 60, identified through exploratory data analysis as corresponding to the upper quantile of observed cases and marking the onset of severe outcomes.
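The discretization rules above can be sketched as follows. Treating the threshold values as inclusive on the “High” side is an assumption on our part, since the article does not specify how boundary values are handled:

```python
def discretize_likelihood(p, threshold=0.5):
    """TL, VL on a 0-1 scale: 'High' at or above the 0.5 boundary
    (inclusive boundary handling is assumed)."""
    return "High" if p >= threshold else "Low"

def discretize_impact(score, threshold=60):
    """Ti, Vi on a 0-100 scale: 'High' at or above the severity onset of 60
    (inclusive boundary handling is assumed)."""
    return "High" if score >= threshold else "Low"

print(discretize_likelihood(0.7), discretize_impact(45))  # High Low
```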
All data pipelines were versioned using Jupyter Notebooks (Libouban et al. 2024), and a consistent random seed was used to generate the synthetic 500-scenario dataset in order to promote reproducibility.
A simplified risk engine was constructed to generate full-model and reduced-model risk scores for each scenario, using the following formulae:
Risk_full = ((TL + VL) / 2) × Vi
Risk_reduced = ((TL + VL) / 2) × Ti
This made it possible to assess the model’s performance in terms of operational output as well as conditional dependencies. To compare the fidelity of the reduced model to the full model, the engine was benchmarked using Mean Absolute Error (MAE) and KL-Divergence.
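The two formulae and the MAE benchmark can be sketched in a few lines; the scenario values below are hypothetical, chosen only to show the mechanics of the comparison:

```python
def risk_full(tl, vl, vi):
    # Risk_full = ((TL + VL) / 2) * Vi
    return (tl + vl) / 2 * vi

def risk_reduced(tl, vl, ti):
    # Risk_reduced = ((TL + VL) / 2) * Ti
    return (tl + vl) / 2 * ti

def mae(a, b):
    """Mean Absolute Error between two score series."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Hypothetical scenarios: (TL, VL, Ti, Vi)
scenarios = [(0.8, 0.6, 70, 65), (0.3, 0.5, 40, 45), (0.9, 0.9, 85, 80)]
full = [risk_full(tl, vl, vi) for tl, vl, ti, vi in scenarios]
reduced = [risk_reduced(tl, vl, ti) for tl, vl, ti, vi in scenarios]
print(round(mae(full, reduced), 2))  # 3.33 -- small gap when Ti tracks Vi closely
```

The closer Ti tracks Vi across scenarios, the smaller this MAE, which is exactly the fidelity question H2 poses.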
To support adoption by small institutions with limited staff and historical data, we propose a staged roadmap for implementing the Bayesian network risk assessment model. The process begins with predefined templates, leveraging publicly available or sector-specific threat and vulnerability catalogs to provide an immediate operational baseline. Expert-based parameterization follows, where internal subject-matter experts assign initial likelihood and impact values to populate the conditional probability tables (CPTs), ensuring that the model reflects the institution’s context even in the absence of rich datasets. As operational data accumulate (incident reports, audit findings, and relevant external intelligence), parameters can be incrementally updated using frequency-based learning with smoothing, while retaining expert priors for data-sparse nodes. Regular iterative refinement cycles (for example, quarterly) should be conducted to validate and adjust both the model structure and parameters, gradually replacing template assumptions with empirical evidence. To lower barriers to entry, implementation can begin with open-source Bayesian network tools, which allow for flexible configuration and scaling as institutional capabilities expand. This roadmap aligns with best-practice recommendations for operational model deployment in risk-sensitive domains, enabling actionable outputs from the outset while supporting continuous improvement.
Simulation design note. We used Monte Carlo simulation with stratified sampling to ensure the coverage of tail combinations of TL, VL, Ti, Vi. The sample size N = 500 was selected after a saturation check in which MAE/RMSE, MI, and edge probabilities stabilized for N ≥ 400; a value of 500 provided stable estimates at modest computational cost. Likelihoods TL,VL ∈ [0, 1] were drawn from Beta(α,β) distributions calibrated to empirical means/variances; impacts Ti,Vi ∈ [0, 100] were drawn from truncated lognormal distributions to empirically reflect observed right-tail risk severity. We conducted a distributional sensitivity analysis (light-tailed truncated normal vs. heavy-tailed truncated lognormal/Pareto-type) and obtained qualitatively unchanged results, indicating robustness to tail assumptions; a fixed random seed ensured reproducibility.
Monte Carlo simulation methodology. Within the simulation procedure, we use Monte Carlo simulation with stratified sampling. We employ this approach to prevent the under-sampling of tail events, typical to simple random draws, and ensure that rare but high-impact combinations of risk parameters are explicitly represented. The simulation covered TL, VL, Ti, Vi, with stratification targeted at extreme quantiles of the distributions. Such an approach guarantees sufficient tail coverage; the authors believe that this is critical for stress testing of financial institutions operating in volatile markets.
Distributional assumptions. The authors use Beta distributions to derive the likelihood variables TL and VL, constrained to the [0, 1] interval. The Beta family was selected due to its flexibility in representing skewed probability profiles and its interpretability in Bayesian settings. Parameters (α, β) were estimated for empirical data fitting by the method of moments, in order to align simulated draws with observed fintech datasets. To reflect the empirically observed right-tail severity of financial and cybersecurity incidents, we used lognormal distributions with a truncation level of 100 for the impact variables Ti and Vi, which are inherently unbounded but truncated by operational definitions (0–100 scale).
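A minimal sketch of these distributional draws using only the standard library follows. The parameter values (α = 2, β = 5 for the Beta; μ = 3.5, σ = 0.6 for the lognormal) are illustrative placeholders, not the calibrated values from the study, and truncation is implemented by rejection sampling as one possible choice:

```python
import random

random.seed(42)  # fixed seed for reproducibility, as in the study (seed value assumed)

def draw_likelihood(alpha=2.0, beta=5.0):
    """TL or VL in [0, 1], drawn from a Beta distribution (illustrative parameters)."""
    return random.betavariate(alpha, beta)

def draw_impact(mu=3.5, sigma=0.6, upper=100.0):
    """Ti or Vi from a right-skewed lognormal, truncated at 100 by rejection."""
    while True:
        x = random.lognormvariate(mu, sigma)
        if x <= upper:
            return x

# 500 simulated risk scenarios, one dict of the four parameters each.
scenarios = [
    {"TL": draw_likelihood(), "VL": draw_likelihood(),
     "Ti": draw_impact(), "Vi": draw_impact()}
    for _ in range(500)
]
print(len(scenarios))  # 500
```

Stratified sampling over extreme quantiles, as used in the study, would additionally partition the parameter space and draw a fixed quota per stratum; the plain draws above show only the marginal distributions.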
Sensitivity to tails. With the purpose of testing robustness, we repeated simulations with light-tailed (truncated normal) and heavy-tailed (Pareto-type) alternatives. Across specifications, the resulting Bayesian network inferences remained qualitatively consistent, suggesting that the model’s conclusions are not unduly impacted by tail assumptions. This allows us to validate that the stratified Monte Carlo method produced stable estimates of risk relationships and enhance the findings’ external validity.

3. Results

A total of 500 simulated threat scenarios were generated, encompassing varying levels of threat likelihood, vulnerability likelihood, and their associated impacts. Key summary statistics are provided below in Table 4, Table 5, Table 6.
Two sources of input were used: expert simulations and synthetic scenarios. Risk analysts and domain professionals estimated Ti, TL, VL, and Vi for a predefined threat catalog; then, controlled scenarios were generated with predefined conditional rules to mimic attacks and their effects.
These scenarios help to investigate sensitivity and redundancy when the parameters are distributed in different ways.
We train and test the models on N = 500 simulated instances, which are split into training (30%) and validation (70%) sets.
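A reproducible version of this split with a fixed seed might look like the following sketch; the seed value and function name are assumptions for illustration:

```python
import random

def train_validation_split(scenarios, train_fraction=0.3, seed=42):
    """Shuffle with a fixed seed and split into 30% training / 70% validation,
    matching the proportions stated in the study."""
    rng = random.Random(seed)
    shuffled = scenarios[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, validation = train_validation_split(list(range(500)))
print(len(train), len(validation))  # 150 350
```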
  • Hypothesis Evaluation Criteria
We used the 500-scenario dataset to calculate a number of statistical parameters to assess whether the proposed hypotheses (H1–H3) hold. All statistical tests were conducted at a 95% confidence level, and bootstrapping was used to ensure that the results were robust across different samples. The hypothesis evaluation criteria are shown in Table 7.
The first hypothesis of the article can be tested empirically and probabilistically due to this framework. We assess the importance and informational contribution of each parameter by contrasting the risk prediction performance of two network configurations: one with Vi and one without this parameter. Transparency and flexibility in modeling expert-derived and data-driven risk assessments are further supported by the application of Bayesian networks.

Result Assessment

The results of the simulation tests and hypothesis assessments using the developed Bayesian network models are shown in this section. The findings are arranged according to the hypotheses outlined in Section 2.1 and contrast the reduced model (which does not include Vi) and the full model (which includes all four parameters: TL, Ti, VL, and Vi).
The validation of each hypothesis (H1–H3) is supported by specific statistical and Bayesian network metrics, and their respective values provide strong empirical justification for the conclusions drawn.
Hypothesis 1 (H1)
The Impact Equivalence Hypothesis claims that Vulnerability Impact (Vi) can be inferred from Threat Impact (Ti). This is strongly supported by several indicators. The Pearson correlation coefficient between Ti and Vi is 0.83, which is considered excellent and indicates a strong linear relationship. The conditional entropy H(Vi|Ti) is 0.87, which falls into the “Good” range and shows that knowledge of Ti significantly reduces the uncertainty about Vi. In contrast, the marginal entropy of Vi is 2.3, indicating that Vi on its own has high variability, most of which is explained by Ti. Furthermore, the edge probability from Ti to Vi in the learned Bayesian network is 0.92, which confirms that this dependency appears in the vast majority of bootstrapped model structures. Together, these results validate H1 with strong statistical and structural evidence.
Hypothesis 2 (H2)
The Redundancy Hypothesis asserts that Vulnerability Impact (Vi) can be eliminated from the model without appreciably impairing predictive performance, and it is fully supported by several complementary criteria. There are only small differences between the full model (RA) and reduced model (RB) in terms of prediction accuracy: the Root Mean Square Error (RMSE) increases slightly from 8.1 to 8.6, and the Mean Absolute Error (MAE) increases only slightly from 6.2 to 6.7. The reduced model is still very accurate because both values remain in the “Good” range for performance. The Bayesian Information Criterion (BIC) decreases by 52.4 points, from 1504.7 in RA to 1452.3 in RB, which is an “Excellent” score. The improved (lower) BIC means that the simpler model achieves a better balance of fit and parsimony. The KL-Divergence between the full and reduced models is 0.41, which is a “Good” score; the outputs of the two models are therefore very similar in probabilistic terms. These results support Hypothesis H2: excluding Vi does not materially change the accuracy of the risk predictions, and the simplified model remains a sound and useful choice.
Hypothesis 3 (H3)
The Parameter Dependency Hypothesis asserts that all four parameters—Threat Likelihood (TL), Threat Impact (Ti), Vulnerability Likelihood (VL), and Vulnerability Impact (Vi)—are interdependent and appropriately structured in the Bayesian network. This is confirmed by mutual information scores between key variable pairs: Ti and Vi share 1.34 bits of information, TL and VL share 0.98 bits, and TL and Ti share 1.01 bits. All of these values fall in the “Excellent” range, meaning that these pairs of variables share substantial information and that the connections in the BN reflect real statistical dependencies.
In short, there is a lot of statistical and structural evidence that supports H1 and H3. This proves that Ti and Vi work together and that the BN structure is correct. H2 is fully supported: although there is a small drop in predictive performance when Vi is excluded, the reduced model benefits from better simplicity and efficiency without a significant loss in accuracy.
Using Bayesian inference, posterior risk scores were computed for each scenario. The inferred risk scores showed the following:
  • High sensitivity to variations in TL and VL, especially when Ti was high.
  • Stability in risk estimation whether Vi was included or replaced by Ti as its proxy.
  • Risk scores ranged from 5.3 (low risk) to 91.2 (critical risk), with most values clustering between 35 and 65, indicating a nonlinear mapping of parameters to risk.
  • A comparative plot of predicted risk scores between the full and reduced models shows near-linear agreement (R2 = 0.96), reinforcing the conclusion that Vi may be substitutable by Ti in many cases.
The graph below shows how the predicted risk scores from the full model (RA) and the reduced model (RB) agree with each other. Each point represents a scenario, positioned by its predicted risk from RA (x-axis) and RB (y-axis). The R2 value of 0.96 shows a nearly linear relationship between the two: even after removing Vulnerability Impact (Vi) from the model, the predicted risks remain very close to those of the full model. This high level of agreement statistically supports the idea that Ti can often be used instead of Vi, which supports Hypothesis H2 in terms of predictive consistency. The predicted risk scores for the full and reduced models are presented in Figure 2.
In summary, all three hypotheses are supported by the data. The findings suggest that Threat Impact is a strong indicator of Vulnerability Impact, and that simplified risk models can still be accurate and reliable. Based on the above confirmed hypothesis, the authors propose the following formula for risk calculation:
Risk = ((VL + TL) / 2) × Ti
This formula allows risk assessment in the absence of a good set of historical data on Vulnerability Impact.
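As a minimal sketch, the proposed formula can be implemented directly. The function name and the range check on the likelihoods are our own additions; the example call simply plugs in the mean parameter values from Table 4.

```python
def risk_score(tl: float, vl: float, ti: float) -> float:
    """Reduced-model risk: the average of threat and vulnerability
    likelihoods, scaled by threat impact (Ti acting as a proxy for Vi)."""
    if not (0.0 <= tl <= 1.0 and 0.0 <= vl <= 1.0):
        raise ValueError("likelihoods must lie in [0, 1]")
    return (vl + tl) / 2 * ti

# Example with the mean parameter values from Table 4
print(round(risk_score(tl=0.54, vl=0.47, ti=62.1), 2))  # 31.36
```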

4. Discussion

The most important finding of this study is the strong conditional link between Threat Impact (Ti) and Vulnerability Impact (Vi). This finding, supported by both correlation and conditional entropy analysis, suggests that Ti approximates or subsumes Vi in many real-life risk situations: Vi is not a fully independent variable but largely reflects effects already captured in Ti. This is especially important for emerging economies, where financial institutions often lack the historical data needed to maintain a complete inventory of vulnerabilities and their likelihoods.
This confirms the Impact Equivalence Hypothesis (H1) and offers theoretical justification for using Ti as a substitute for Vi, especially in situations where estimating Vi is challenging or where vulnerability data quality is low.
Another important contribution is the reframing of risk as a threat-centric issue. Traditional models treat risk as the result of every threat–vulnerability combination, which often makes them unnecessarily complicated. This paper argues instead that vulnerabilities only modulate how likely or visible a risk is, and that each threat should be treated as a separate unit of risk.
This change makes risk portfolios easier to understand and manage, which coincides with the views of Ucedavélez and Morana (2015) and Macher et al. (2016). It also gives people in organizations a clearer common vocabulary for discussing risk, with threats as the central concept.
The Redundancy Hypothesis (H2) is also confirmed by the research. The reduced model, which excludes Vi, performs similarly to the full model while being less complex. This finding has practical value: professionals can estimate risk with fewer inputs, particularly when resources are limited, without sacrificing reliability. For tool designers, it supports the development of leaner risk engines that retain probabilistic rigor without overburdening users with hard-to-estimate parameters.
However, this should not be interpreted as grounds to eliminate Vi in all contexts. In systems where vulnerabilities vary widely in type and exposure, Vi may provide critical nuance, and omitting vulnerability analysis may oversimplify the assessment and obscure the true exposure (Ledwaba et al. 2018; Rossebo et al. 2007). Rather, the implication is that Vi’s value is context-sensitive and that flexible models should accommodate both explicit and inferred forms of Vi.
The results of this study have important practical benefits for organizations that manage risk, especially in the financial sector.
First, the results show that focusing on threat likelihood and impact can serve as a sound basis for risk assessment without calculating each vulnerability impact. This makes the process faster and simpler for managers and security teams making decisions about risk, especially when they lack complete information about every vulnerability.
Second, a Bayesian network is a powerful tool for combining expert opinions with real data. This makes it possible to model risk in a way that is more flexible and realistic, especially when the data is incomplete or uncertain. For instance, if a business knows a threat exists but does not know the specifics of the vulnerability, the model can still give a good estimate of the overall risk.
Third, this method helps with planning. Companies can focus their efforts on the most important issues if they know how different risk factors are related. If a high-impact threat is closely linked to a weakness in a system, that area should be protected first.
Finally, the model can be changed when new information comes in because it is adaptable. This helps businesses to keep track of how much risk they are taking on and respond to changes in the threat landscape.
In short, the suggested method helps businesses perform risk assessments that are faster, more accurate, and better informed.
The Bayesian network framework developed in this study not only offers a unified representation of threat and vulnerability dependencies but also provides a natural interface with modern coherent risk measures. By generating posterior distributions of risk assessments, the model produces the probabilistic inputs needed for advanced metrics that are sensitive to tail effects. In particular, we discuss the following:
  • Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) can be applied directly to the posterior distribution of risk scores, enabling quantification of both threshold exceedance probabilities and expected shortfall beyond that threshold (Rockafellar and Uryasev 2002, 2000).
  • Entropic Value-at-Risk (EVaR), introduced by Ahmadi-Javid (2012), provides a tighter and more conservative bound on tail risk by exploiting moment-generating functions. Our Bayesian inference engine naturally produces the conditional probabilities needed for EVaR evaluation.
  • Expectile Risk Measures (ERM) (Ziegel 2016) can be integrated with our framework to capture asymmetries in loss distributions and to support scenario-specific stress testing.
These measures enhance the practical applicability of our model, especially for institutions in emerging economies where vulnerability data may be sparse but tail events are disproportionately important. By combining Bayesian inference with coherent risk measures, practitioners can move beyond simple point estimates and incorporate the full range of distributional information into their decision-making processes.
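To illustrate how posterior risk scores feed such measures, the following sketch computes empirical VaR and CVaR from a sample of scores. The sample is synthetic (drawn to roughly match the summary statistics in Table 5) and the helper is a textbook empirical estimator, not the paper's implementation.

```python
import math
import random

def var_cvar(scores, alpha=0.95):
    """Empirical Value-at-Risk and Conditional Value-at-Risk of a
    risk-score sample at confidence level alpha."""
    s = sorted(scores)
    idx = math.ceil(alpha * len(s)) - 1   # index of the alpha-quantile
    var = s[idx]
    tail = s[idx:]                        # scores at or beyond the VaR
    cvar = sum(tail) / len(tail)          # expected shortfall in the tail
    return var, cvar

# Illustrative posterior sample (assumption: roughly mean 50, std 14.5,
# clipped to [0, 100] as in the reported score range)
random.seed(0)
posterior = [min(100, max(0, random.gauss(50, 14.5))) for _ in range(500)]
var95, cvar95 = var_cvar(posterior)
print(round(var95, 1), round(cvar95, 1))
```

CVaR is always at least as large as VaR, since it averages the losses beyond the VaR threshold.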
Overall, the proposed Bayesian-based method offers several advantages over traditional approaches, including the integration of expert priors with empirical data in a unified framework, the explicit separation of threat and vulnerability likelihoods and impacts to improve interpretability, and a modular structure that supports incremental refinement as new information becomes available. Nonetheless, certain limitations should be acknowledged, such as the potential subjectivity of initial expert estimates, sensitivity to discretization thresholds, and the need for regular parameter recalibration to reflect evolving risk landscapes. By redefining risk in terms of conditional relationships between likelihood and impact for both threats and vulnerabilities, the approach enables more granular reasoning, supports targeted mitigation strategies, and aligns with established probabilistic risk assessment principles, making it particularly applicable in critical infrastructure and cybersecurity domains.

5. Conclusions

This study employs Bayesian networks to estimate risk from four parameters: Threat Likelihood (TL), Threat Impact (Ti), Vulnerability Likelihood (VL), and Vulnerability Impact (Vi). The authors address a principal gap in the scientific literature on the topic: the absence of a unified risk assessment model that rests on formalized principles while statistically supporting the integration of threat and vulnerability parameters.
The empirical results support the first hypothesis that Threat Impact (Ti) and Vulnerability Impact (Vi) are closely related and that Ti often serves as a good substitute for Vi in practice. The study also shows that risk can be accurately estimated using a smaller model that excludes Vi, so less data is needed without a significant loss of predictive accuracy.
The study not only confirms these interdependencies but also supports a threat-centered view of risk estimation. It emphasizes that threats, not threat–vulnerability pairings, should be the main units of risk from the system owner’s point of view. This change makes modeling simpler and more understandable, and it fits how organizations usually manage risk portfolios.
The use of Bayesian networks proves especially effective: the method offers an adjustable framework for combining expert knowledge with empirical data under uncertainty, and it allows dynamic updates. It is therefore well suited to practical application in cybersecurity, infrastructure protection, and enterprise risk management.
In addition to validating the three hypotheses, our study demonstrates that Bayesian networks can serve as a flexible input generator for coherent risk measures. The posterior risk distributions derived here can be directly integrated into Value-at-Risk (VaR), Conditional Value-at-Risk (CVaR), Entropic Value-at-Risk (EVaR), and Expectile Risk Measure (ERM) frameworks, thereby aligning the model with best practices in financial risk management. Future research should focus on extending the Bayesian-based approach to incorporate these measures explicitly, further enhancing its robustness in the presence of extreme events and providing decision-makers with richer tail-sensitive assessments of risk.

Author Contributions

Conceptualization, Y.P. and O.C.; methodology, O.C.; software, O.C.; validation, Y.P., S.P. and A.K.; formal analysis, O.C.; investigation, S.P.; resources, A.K.; data curation, A.K.; writing—original draft preparation, O.C.; writing—review and editing, Y.P.; visualization, Y.P. and S.P.; supervision, O.C.; project administration, Y.P.; funding acquisition, Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by the specific support objective activity (Project id. No. 1.1.1.2/16/I/001) of the Republic of Latvia, funded by the European Regional Development Fund, research project No. 1.1.1.2/VIAA/3/19/458.

Data Availability Statement

The data presented in this study are not publicly available due to institutional confidentiality policies and the inclusion of simulated and sensitive financial information. However, the datasets generated and/or analyzed during the current study are available from the corresponding author upon request.

Conflicts of Interest

Author Olegs Cernisevs was employed by the company SIA StarBridge. Almas Kalimoldayev was employed by the company JSC “The Fund of Problem Loans”. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The SIA StarBridge and JSC “The Fund of Problem Loans” had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Step-by-Step Example

Network structure and state definitions. The Bayesian network comprises four nodes:
  • TL = Threat Likelihood ∈ {Low (<0.5), High (≥0.5)}.
  • VL = Vulnerability Likelihood ∈ {Low, High}.
  • Ti = Threat Impact ∈ {Normal, Stress}.
  • Vi = Vulnerability Impact ∈ {Low (<60), High (≥60)}.
Directed edges are TL → Ti, VL → Vi, and Ti → Vi.
Evidence (inputs). Consider a case in which TL is observed as high (≥0.5) and VL is observed as high. TL and VL are therefore treated as evidence variables.
Parameterization (illustrative CPT excerpts). Parameters were estimated by frequency counts with Laplace smoothing. The relevant conditional probabilities are
P(Ti = Stress|TL = High) = 0.70; P(Ti = Stress|TL = Low) = 0.30.
For Vi given (VL, Ti):
  • If VL = Low and Ti = Normal: P(Vi = High) = 0.10.
  • If VL = Low and Ti = Stress: P(Vi = High) = 0.50.
  • If VL = High and Ti = Normal: P(Vi = High) = 0.40.
  • If VL = High and Ti = Stress: P(Vi = High) = 0.80.
Posterior for Ti. Conditioning on the observed TL = High yields
P(Ti = Stress|TL = High) = 0.70; P(Ti = Normal|TL = High) = 0.30.
Posterior for Vi (law of total probability). With VL = High and TL = High,
P(Vi = High|VL = High, TL = High)
= P(Vi = High|VL = High, Ti = Stress) × P(Ti = Stress|TL = High) + P(Vi = High|VL = High, Ti = Normal) × P(Ti = Normal|TL = High)
= 0.80 × 0.70 + 0.40 × 0.30
= 0.56 + 0.12
= 0.68.
Hence, P(Vi = Low|VL = High, TL = High) = 1 − 0.68 = 0.32.
Decision/interpretation. The probability of a high Vulnerability Impact (≥60) is 0.68 (68%). Using a 0.50 action threshold, the case is classified as high impact, warranting elevated mitigation.
Sensitivity check (alternate inputs). If TL = Low and VL = Low,
P(Vi = High|VL = Low, TL = Low)
= P(Vi = High|VL = Low, Ti = Stress) × P(Ti = Stress|TL = Low) + P(Vi = High|VL = Low, Ti = Normal) × P(Ti = Normal|TL = Low)
= 0.50 × 0.30 + 0.10 × 0.70
= 0.15 + 0.07
= 0.22,
which corresponds to a substantially lower risk classification.
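The appendix computation can be checked mechanically. The sketch below hard-codes the CPT excerpts above and applies the law of total probability over Ti; the function and variable names are our own.

```python
# CPT excerpts from the appendix (frequency estimates with Laplace smoothing)
P_TI_STRESS_GIVEN_TL = {"High": 0.70, "Low": 0.30}
P_VI_HIGH_GIVEN_VL_TI = {
    ("Low", "Normal"): 0.10, ("Low", "Stress"): 0.50,
    ("High", "Normal"): 0.40, ("High", "Stress"): 0.80,
}

def p_vi_high(tl: str, vl: str) -> float:
    """P(Vi = High | VL, TL) via the law of total probability over Ti."""
    p_stress = P_TI_STRESS_GIVEN_TL[tl]
    return (P_VI_HIGH_GIVEN_VL_TI[(vl, "Stress")] * p_stress
            + P_VI_HIGH_GIVEN_VL_TI[(vl, "Normal")] * (1 - p_stress))

print(round(p_vi_high("High", "High"), 2))  # 0.68, as in the worked example
print(round(p_vi_high("Low", "Low"), 2))    # 0.22, as in the sensitivity check
```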

References

  1. Agarwal, Sumit, and Jian Zhang. 2020. FinTech, Lending and Payment Innovation: A Review. Asia-Pacific Journal of Financial Studies 49: 353–67. [Google Scholar] [CrossRef]
  2. Ahmadi-Javid, A. 2012. Entropic Value-at-Risk: A New Coherent Risk Measure. Journal of Optimization Theory and Applications 155: 1105–23. [Google Scholar] [CrossRef]
  3. Al-Labadi, Luai, Vishakh Patel, Kasra Vakiloroayaei, and Clement Wan. 2021. Kullback–Leibler Divergence for Bayesian Nonparametric Model Checking. Journal of the Korean Statistical Society 50: 272–89. [Google Scholar] [CrossRef]
  4. Andrews, Kenneth R. 1971. The Concept of Corporate Strategy. Homewood: Dow Jones-Irwin. [Google Scholar]
  5. Ankan, Ankur, and Abinash Panda. 2015. pgmpy: Probabilistic Graphical Models Using Python. Paper presented at the 14th Python in Science Conference (SCIPY 2015), Austin, TX, USA, July 6–12; pp. 6–11. [Google Scholar] [CrossRef]
  6. Ankan, Ankur, and Johannes Textor. 2024. Pgmpy: A Python Toolkit for Bayesian Networks. Journal of Machine Learning Research 25: 1–8. [Google Scholar]
  7. Ansoff, H. Igor. 1965. Corporate Strategy: An Analytic Approach to Business Policy for Growth and Expansion. Columbus: McGraw-Hill. [Google Scholar]
  8. Aven, Terje. 2016. Risk Assessment and Risk Management: Review of Recent Advances on Their Foundation. European Journal of Operational Research 253: 1–13. [Google Scholar] [CrossRef]
  9. Cernisevs, Olegs. 2024. Regional Dimension in European Union: Shaping Key Performance Indicators for Financial Institutions. Doctoral thesis, Rīga Stradiņš University, Riga, Latvia. [Google Scholar]
  10. Cernisevs, Olegs, Yelena Popova, and Dmitrijs Cernisevs. 2023a. Business KPIs Based on Compliance Risk Estimation. Journal of Tourism and Services 14: 222–48. [Google Scholar] [CrossRef]
  11. Cernisevs, Olegs, Yelena Popova, and Dmitrijs Cernisevs. 2023b. Risk-Based Approach for Selecting Company Key Performance Indicator in an Example of Financial Services. Informatics 10: 54. [Google Scholar] [CrossRef]
  12. Cheimonidis, Pavlos, and Konstantinos Rantos. 2025. A Dynamic Risk Assessment and Mitigation Model. Applied Sciences 15: 2171. [Google Scholar] [CrossRef]
  13. Cox, Louis. 2008. Some Limitations of “Risk = Threat × Vulnerability × Consequence” for Risk Analysis of Terrorist Attacks. Risk Analysis: An Official Publication of the Society for Risk Analysis 28: 1749–61. [Google Scholar] [CrossRef] [PubMed]
  14. da Silva, Claudio Junior Nascimento, Denise Xavier Fortes, and Rogério Patrício Chagas do Nascimento. 2017. ICT Governance, Risks and Compliance—A Systematic Quasi-Review. Paper presented at the 19th International Conference on Enterprise Information Systems, Porto, Portugal, April 26–29; Setúbal: SCITE-PRESS—Science and Technology Publications, pp. 417–24. [Google Scholar] [CrossRef]
  15. Dawson, Richard J. 2015. Handling Interdependencies in Climate Change Risk Assessment. Climate 3: 1079–96. [Google Scholar] [CrossRef]
  16. Dhar, Vasant, and Roger M. Stein. 2016. FinTech Platforms and Strategy. Communications of the ACM 60: 32–35. [Google Scholar] [CrossRef]
  17. ESMA. 2022. Monitoring Environmental Risks in EU Financial Markets. Paris: ESMA. [Google Scholar]
  18. Fülöp, Melinda Timea, Dan Ioan Topor, Constantin Aurelian Ionescu, Sorinel Căpușneanu, Teodora Odett Breaz, and Sorina Geanina Stanescu. 2022. Fintech accounting and Industry 4.0: Future-proofing or threats to the accounting profession? Journal of Business Economics and Management 23: 997–1015. Available online: https://journals.vilniustech.lt/index.php/JBEM/article/view/17695 (accessed on 7 October 2022). [CrossRef]
  19. Gao, Lei, Ying Wang, and Jing Zhao. 2023. (How) Does Mutual Fund Dual Ownership Affect Shareholder and Creditor Conflict of Interest? Evidence from Corporate Innovation. Journal of Risk and Financial Management 16: 287. [Google Scholar] [CrossRef]
  20. Getoor, Lise, Nir Friedman, Daphne Koller, and Benjamin Taskar. 2001. Learning Probabilistic Models of Relational Structure. ICML 1: 170–77. [Google Scholar]
  21. Getoor, Lise, Nir Friedman, Daphne Koller, and Benjamin Taskar. 2002. Learning Probabilistic Models of Link Structure. Journal of Machine Learning Research 3: 679–707. [Google Scholar]
  22. Giboney, Justin Scott, Bonnie Brinton Anderson, Geoffrey A. Wright, Shayna Oh, Quincy Taylor, Megan Warren, and Kylie Johnson. 2023a. Barriers to a Cybersecurity Career: Analysis across Career Stage and Gender. Computers & Security 132: 103316. [Google Scholar] [CrossRef]
  23. Giboney, Justin Scott, Ryan M. Schuetzler, and G. Mark Grimes. 2023b. Know Your Enemy: Conversational Agents for Security, Education, Training, and Awareness at Scale. Computers & Security 129: 103207. [Google Scholar] [CrossRef]
  24. Grassi, Stefano. n.d. A Decision Support System for Credit Risk Assessment Using Bayesian Networks. Preprint. Available online: https://www.researchgate.net/publication/390577527_A_Decision_Support_System_for_Credit_Risk_Assessment_Using_Bayesian_Networks (accessed on 21 June 2025).
  25. Griffin, Naomi N., Gerardo Uña, Majid Bazarbash, and Alok Verma. 2023. Fintech Payments in Public Financial Management: Benefits and Risks. IMF Working Papers. Washington, DC: International Monetary Fund, vol. 2023, p. 1. [Google Scholar] [CrossRef]
  26. Kaddumi, Thair, and Qais Adib Al-Kilani. 2022. Operational Risks and Financial Performance—The Context of the Jordanian Banking Environment. Journal of Southwest Jiaotong University 57: 338–49. [Google Scholar] [CrossRef]
  27. Khan, Ashraf, and Majid Malaika. 2021. Central Bank Risk Management, Fintech, and Cybersecurity. IMF Working Papers. Washington, DC: International Monetary Fund, vol. 2021, p. 1. [Google Scholar] [CrossRef]
  28. Kuerbis, Brenden, and Farzaneh Badiei. 2017. Mapping the Cybersecurity Institutional Landscape. Digital Policy, Regulation and Governance 19: 466–92. [Google Scholar] [CrossRef]
  29. Ledwaba, Lehlogonolo, and H. S. Venter. 2017. A Threat-Vulnerability Based Risk Analysis Model for Cyber Physical System Security. Paper presented at the 50th Hawaii International Conference on System Sciences, Hawaii, HI, USA, January 4–7. [Google Scholar]
  30. Ledwaba, Lehlogonolo P. I., Gerhard P. Hancke, Hein S. Venter, and Sherrin J. Isaac. 2018. Performance Costs of Software Cryptography in Securing New-Generation Internet of Energy Endpoint Devices. IEEE Access 6: 9303–23. [Google Scholar] [CrossRef]
  31. Libouban, Romane, Eva Mercier, Thomas Chaussepied, Bérénice Batut, Anthony Bretaudeau, and Gildas Le Corguillé. 2024. Misconceptions about Galaxy Debunked by the (French) Galaxy Community. Paper presented at JOBIM 2024, Toulouse, France, June 25–28. [Google Scholar]
  32. Liu, Zi-Yuan, Yi-Fan Tseng, Raylin Tso, Peter Shaojui Wang, and Qin-Wen Su. 2022. Extension of Elliptic Curve Qu–Vanstone Certificates and Their Applications. Journal of Information Security and Applications 67: 103176. [Google Scholar] [CrossRef]
  33. Luo, Sha, Jiteng Ma, Chuanting Zhang, Shuping Dang, and Raed Shubair. 2024. Federated Learning for Microwave Filter Behavior Prediction. IEEE Microwave and Wireless Technology Letters 34: 255–58. [Google Scholar] [CrossRef]
  34. Macher, Georg, Eric Armengaud, Eugen Brenner, and Christian Kreiner. 2016. A Review of Threat Analysis and Risk Assessment Methods in the Automotive Context. Paper presented at 35th International Conference, SAFECOMP 2016, Trondheim, Norway, September 21–23; pp. 130–41. [Google Scholar]
  35. Manzoor, Sumaira, Yuri Goncalves Rocha, Sung-Hyeon Joo, Sang-Hyeon Bae, Eun-Jin Kim, Kyeong-Jin Joo, and Tae-Yong Kuc. 2021. Ontology-Based Knowledge Representation in Robotic Systems: A Survey Oriented toward Applications. Applied Sciences 11: 4324. [Google Scholar] [CrossRef]
  36. McAllester, David, Michael Collins, and Fernando Pereira. 2008. Case-Factor Diagrams for Structured Probabilistic Modeling. Journal of Computer and System Sciences 74: 84–96. [Google Scholar] [CrossRef]
  37. Mersch, Yves. 2019. Lending and Payment Systems in Upheaval: The Fintech Challenge. Paper presented at 3rd Annual Conference on Fintech and Digital Innovation, Brussels, Belgium, February 26; Frankfurt am Main: European Central Bank. [Google Scholar]
  38. Nadler, Asaf, Ron Bitton, Oleg Brodt, and Asaf Shabtai. 2022. On the Vulnerability of Anti-Malware Solutions to DNS Attacks. Computers & Security 116: 102687. [Google Scholar] [CrossRef]
  39. Popova, Yelena. 2021. Economic Basis of Digital Banking Services Produced by FinTech Company in Smart City. Journal of Tourism and Services 12: 86–104. [Google Scholar] [CrossRef]
  40. Popova, Yelena, and Olegs Cernisevs. 2022. Smart City: Sharing of Financial Services. Social Sciences 12: 8. [Google Scholar] [CrossRef]
  41. Popova, Yelena, Olegs Cernisevs, and Sergejs Popovs. 2024. Impact of Geographic Location on Risks of Fintech as a Representative of Financial Institutions. Geographies 4: 753–68. [Google Scholar] [CrossRef]
  42. Popova, Yelena, Olegs Cernisevs, and Sergejs Popovs. 2025. Contribution to Modern Economic Region Theory: Factor of Intangible Digital Resources. Geographies 5: 8. [Google Scholar] [CrossRef]
  43. Rastogi, Shailesh, Arpita Sharma, Geetanjali Pinto, and Venkata Mrudula Bhimavarapu. 2022. A Literature Review of Risk, Regulation, and Profitability of Banks Using a Scientometric Study. Future Business Journal 8: 1–17. [Google Scholar] [CrossRef]
  44. Rockafellar, R. Tyrrell, and Stanislav Uryasev. 2000. Optimization of Conditional Value-at-Risk. Journal of Risk 2: 21–41. [Google Scholar] [CrossRef]
  45. Rockafellar, R. Tyrrell, and Stanislav Uryasev. 2002. Conditional Value-at-Risk for General Loss Distributions. Journal of Banking & Finance 26: 1443–71. [Google Scholar] [CrossRef]
  46. Rossebo, Judith E. Y., Scott Cadzow, and Paul Sijben. 2007. eTVRA, a Threat, Vulnerability and Risk Assessment Method and Tool for eEurope. Paper presented at the Second International Conference on Availability, Reliability and Security (ARES’07), Vienna, Austria, April 10–13; Piscataway: IEEE, pp. 925–33. [Google Scholar] [CrossRef]
  47. Sadeghi, Hojatollah, and Farhad Javan. 2024. The Evaluation of Tourist Villages of Iran in Terms of Geophysical Vulnerability Using Fuzzy Scenarios. Journal of Rural Research 15: 85–100. Available online: https://jrur.ut.ac.ir/article_100498.html (accessed on 15 June 2025).
  48. Sanchez-Zurdo, Javier, and Jose San-Martín. 2024. A Country Risk Assessment from the Perspective of Cybersecurity in Local Entities. Applied Sciences 14: 12036. [Google Scholar] [CrossRef]
  49. Special Announcement. 2019. Risk Management and Insurance Review 22: 441. [CrossRef]
  50. Teece, David J. 2010. Business Models, Business Strategy and Innovation. Long Range Planning 43: 172–94. [Google Scholar] [CrossRef]
  51. Ucedavélez, Tony, and Marco M. Morana. 2015. Risk Centric Threat Modeling: Process for Attack Simulation and Threat Analysis. Hoboken: John Wiley & Sons, Inc. 664p. [Google Scholar]
  52. Ziegel, Johanna F. 2016. Coherence and Elicitability. Mathematical Finance 26: 901–18. [Google Scholar] [CrossRef]
Figure 1. Bayesian network for risk assessment.
Figure 2. Predicted risk scores: full vs. reduced model.
Table 1. Defined variables.
Parameter | Description | Type | Role
TL | Likelihood that a threat source will attempt to exploit a system | Probabilistic (prior) | Independent
VL | Likelihood that a vulnerability will be successfully exploited | Probabilistic (conditional) | Independent
Ti | Expected impact (damage or cost) if the threat is realized | Continuous or categorical | Independent
Vi | Severity of the vulnerability’s consequences if exploited | Continuous or categorical | Dependent
R | Risk | Computed | Dependent
Table 2. Suggested evaluation for Bayesian network metrics.
Parameters | Description | Criteria
Mean Absolute Error (MAE) | Average of absolute differences between predicted and true values | Bad > 1.0; Acceptable 0.5–1.0; Good 0.2–0.5; Excellent < 0.2
Root Mean Square Error (RMSE) | Square root of average squared errors; penalizes larger errors more | Bad > 1.5; Acceptable 0.8–1.5; Good 0.4–0.8; Excellent < 0.4
Bayesian Information Criterion (BIC) | Penalized likelihood that discourages overly complex models | Bad > 5000; Acceptable 2000–5000; Good 500–2000; Excellent < 500
Akaike Information Criterion (AIC) | Similar to BIC, with a less strict penalty; used for model comparison | Bad > 4800; Acceptable 1800–4800; Good 400–1800; Excellent < 400
Log-Likelihood | Measures how likely the observed data are under the model | Bad < −3000; Acceptable −3000 to −1000; Good −1000 to −200; Excellent > −200
KL-Divergence (KL-D) | Difference between predicted and true probability distributions | Bad > 1.0; Acceptable 0.5–1.0; Good 0.1–0.5; Excellent < 0.1
Pearson’s r | Measures linear correlation between variables | Bad < 0.3; Acceptable 0.3–0.5; Good 0.5–0.7; Excellent > 0.7
Conditional Entropy H(X|Y) | Quantifies the uncertainty remaining in a random variable X after another variable Y is known | Bad > 1.5; Acceptable 1.0–1.5; Good 0.5–1.0; Excellent < 0.5
Marginal Entropy H(X) | Quantifies the uncertainty or randomness of a single random variable X, without conditioning on any other variable | Bad < 0.3; Acceptable 0.3–0.5; Good 0.5–0.75; Excellent > 0.75
Edge Probability (P) | Estimated probability that a directed edge (i.e., dependency) exists between two nodes (variables) in the network | Bad < 0.5; Acceptable 0.5–0.7; Good 0.7–0.9; Excellent > 0.9
Mutual Information (MI) I(X;Y) | Measures how much information two variables share, i.e., how much knowing one reduces uncertainty about the other | Bad < 0.1; Acceptable 0.1–0.3; Good 0.3–0.8; Excellent > 0.8
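As an illustration of how the entropy-based metrics in Table 2 can be computed from discretized samples, the helper below estimates marginal entropy, conditional entropy, and mutual information in bits from (x, y) pairs. It is a generic empirical estimator of our own, not the toolchain used in the study (which the references suggest was pgmpy).

```python
from math import log2
from collections import Counter

def entropy_metrics(pairs):
    """Return (H(X), H(X|Y), I(X;Y)) in bits for a sample of (x, y) pairs,
    using empirical frequencies; I(X;Y) = H(X) - H(X|Y)."""
    n = len(pairs)
    pxy = Counter(pairs)                  # joint counts c(x, y)
    px = Counter(x for x, _ in pairs)     # marginal counts c(x)
    py = Counter(y for _, y in pairs)     # marginal counts c(y)
    h_x = -sum(c / n * log2(c / n) for c in px.values())
    # H(X|Y) = sum_{x,y} p(x,y) * log2(1 / p(x|y)), with p(x|y) = c(x,y)/c(y)
    h_x_given_y = sum(c / n * log2(py[y] / c) for (x, y), c in pxy.items())
    return h_x, h_x_given_y, h_x - h_x_given_y

# Illustrative check: if Ti fully determined Vi, H(Vi|Ti) would be 0 and
# I(Vi;Ti) would equal the full marginal entropy H(Vi).
sample = [("High", "Stress"), ("Low", "Normal")] * 50   # (Vi, Ti) pairs
h_vi, h_vi_given_ti, mi = entropy_metrics(sample)
print(h_vi, h_vi_given_ti, mi)  # 1.0 0.0 1.0
```

The same estimator applied to the study’s discretized Ti and Vi samples would yield the conditional entropy and mutual information values reported in Table 7.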
Table 3. Hypothesis validation based on the Bayesian network metrics.
Hypothesis | Bayesian Network Metrics
H1
  • Pearson’s r (Ti, Vi)—essential to detect the strength and direction of a linear relationship between Threat Impact (Ti) and Vulnerability Impact (Vi), supporting the hypothesis that Vi can be functionally deduced from Ti.
  • Conditional Entropy H(Vi|Ti)—provides assessment of how much uncertainty remains in Vi once Ti is known, enabling an information-theoretic validation of that relationship.
  • Marginal Entropy H(Vi)—quantifies the overall uncertainty in Vi before any conditioning, providing the baseline against which the reduction achieved by conditioning on Ti is measured.
  • Edge Probability P(Ti→Vi)—derived from Bayesian structure learning, captures how consistently a causal link is discovered in bootstrapped networks, serving as a structural confirmation of the dependency.
H2
  • Mean Absolute Error (MAE)—directly measures how well the models predict risk values, allowing for empirical comparison of the full model (with Vi) and reduced model (without Vi).
  • Root Mean Square Error (RMSE)—directly measures how well the models predict risk values, allowing for empirical comparison of the full model (with Vi) and the reduced model (without Vi).
  • Bayesian Information Criterion (BIC)—adds a complexity penalty to model evaluation, enabling a decision framework that favors simpler models unless the added parameter (Vi) yields a significant performance benefit.
  • KL-Divergence (RA‖RB)—used to quantify how much the output distribution of the reduced model deviates from that of the full model, offering a probabilistic perspective on how much information is lost when Vi is excluded.
H3
  • Mutual Information I(Ti;Vi)—This criterion is crucial for confirming that the relationships among parameters are statistically meaningful and not merely artifacts of the data or structure.
  • Mutual Information I(TL;VL)—The same as above.
  • Mutual Information I(TL;Ti)—The same as above.
Table 4. Key statistics.
Parameter | Mean | Std. Dev | Min | Max
TL | 0.54 | 0.18 | 0.10 | 0.95
VL | 0.47 | 0.21 | 0.005 | 0.99
Ti | 62.1 | 19.8 | 10 | 100
Vi | 59.3 | 22.4 | 5 | 100
Table 5. Risk score summary statistics.
Min | Max | Mean | Std. Dev
5.3 | 91.2 | 50.07 | 14.54
Table 6. Risk score distribution by category.
Very Low (0–20) | Low (20–35) | Medium (35–65) | High (65–80) | Critical (>80)
7 | 69 | 353 | 58 | 13
Table 7. Hypothesis evaluation criteria.
Metric | Hypothesis | Full Model (RA) | Reduced Model (RB) | Difference | Assessment
Pearson’s r (Ti, Vi) | H1 | 0.83 | – | – | Excellent
Conditional Entropy H(Vi|Ti) | H1 | 0.87 | – | – | Good
Marginal Entropy H(Vi) | H1 | 2.3 | – | – | Excellent
Edge Probability P(Ti→Vi) | H1 | 0.92 | – | – | Excellent
Mean Absolute Error (MAE) | H2 | 6.2 | 6.7 | +0.5 | Good
Root Mean Square Error (RMSE) | H2 | 8.1 | 8.6 | +0.5 | Good
Bayesian Information Criterion (BIC) | H2 | 1504.7 | 1452.3 | −52.4 | Excellent
KL-Divergence (RA‖RB) | H2 | – | – | 0.41 | Good
Mutual Information I(Ti;Vi) | H3 | 1.34 | – | – | Excellent
Mutual Information I(TL;VL) | H3 | 0.98 | – | – | Excellent
Mutual Information I(TL;Ti) | H3 | 1.01 | – | – | Excellent
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
