Article

Bayesian Network Applications in Decision Support Systems

Ron S. Kenett 1,2
1 The Samuel Neaman Institute, Technion—Israel Institute of Technology, Haifa 3200003, Israel
2 The KPA Group, Raanana 4365413, Israel
Mathematics 2025, 13(21), 3484; https://doi.org/10.3390/math13213484
Submission received: 15 August 2025 / Revised: 20 October 2025 / Accepted: 25 October 2025 / Published: 1 November 2025

Abstract

Decision support systems are designed to provide decision makers with a view of the present and of the future under alternative scenarios. A decision support system differs from a dashboard application, which represents current conditions and trends using a set of indicators and descriptive statistics. This paper focuses on decision support systems implementing Bayesian networks, with three case studies presenting applications in different areas. The first case study is about the integration of mobility data available from Google with hospitalization data related to COVID-19. This data from the pandemic era provides an impact assessment of non-pharmaceutical interventions such as the closure of airports. A second case study concerns a website usability assessment based on data describing web surfing characteristics. A third case study is a conflict resolution politography application in which economic, demographic, and other types of data are analyzed to create a data-driven narrative for decision makers and researchers. These three examples show how Bayesian networks are used in different contexts to underpin decision support systems. The paper begins with an introduction to decision support systems in general, continues with the three case studies, and concludes with a section describing future research pathways.

1. Introduction to Decision Support Systems

Decision support systems combine data with visualization methods and analytics to support decision making. A related platform is known as a situation room or war room, which is a dedicated space designed to enable teams to collaborate in order to solve complex problems and make critical decisions. Such rooms are often set up during emergencies and crisis management events. Decision support systems are also used more routinely than situation rooms. Examples of the application of such systems include industrial processes, customer service centers, and health care systems.
Keen [1], Sprague [2], and Bonczek et al. [3] consider decision support systems as data-driven systems for use by managers. This covers a range of definitions determined by strategies for development, usage, and implementation. The development and operation of a decision support system can involve a management information system (MIS) manager, who manages the process of developing and installing it; an information or data science specialist, who builds it; and/or system designers, who create and assemble the technology on which these systems are based.
Decision support systems integrate data from different sources with decision-making models. For a modern approach to model-based systems engineering (MBSE), see Hanke et al. [4]. Decision support systems are themselves engineered systems to which systems engineering principles apply. Here, we consider decision support systems as data-driven systems delivering a current view and providing the ability to evaluate the impact of alternative scenarios. This combination of data and analytic capabilities aims to support decision makers and researchers in the formulation of policies and decisions.
The next three sections cover case studies that include (i) the management of the COVID-19 pandemic, (ii) the operation of a website through usability indicators, and (iii) studies of conflict resolution situations for designing policies and strategies. In all three case studies, the decision support system provides a rendering of current conditions together with “what-if” evaluations that enable the assessment of alternative scenarios. Implementations of such decision support systems typically start with a prototype followed by adaptations to users’ changing demands. A management approach that implements this is agile development, which consists of short development cycles (“sprints”) that deliver specific functionalities, with review meetings (“scrums”, a term borrowed from rugby) at the end of each cycle. For more on agile development, see Kenett et al. [5]. Here, we focus on the functionality of decision support systems with a special emphasis on applications of Bayesian networks.
This paper does not introduce Bayesian network theory, but shows examples of their application. References that include details on Bayesian networks are Pearl [6], Kenett [7,8,9], Pearl and Mackenzie [10], Zhang and Kim [11], and Scutari and Denis [12]. The next three sections are self-contained case studies of decision support system applications. We start with a decision support system used in managing pandemic outbreaks.

2. The COVID-19 Case Study

The COVID-19 pandemic had far-reaching consequences on global, national, and local scales. Managing the pandemic was based on national policies that included movement restrictions (such as lockdowns) and massive testing to detect outbreaks. An important factor in handling pandemic effects is population behavior and public cooperation with health authorities’ instructions. Adherence to Ministry of Health instructions—e.g., wearing face masks, maintaining physical distance, and washing hands—is affected by psycho-social aspects such as trust in policy makers, fear and anxiety, and the quality of risk communication.
With the ongoing waves of the pandemic around the world, efforts to model population behavior were initiated to shed light on social processes that impact compliance with such instructions, as well as to reveal social patterns and trends over time. The decision support system described below was part of this effort, which included the integration of health-related data from hospitals with population mobility data captured by Google mobility indicators.
COVID-19 pandemic management policies were based on national and local lockdowns, including the closure of air traffic hubs, education institutions, and economic sectors. Citizens were called upon to adhere to protection measures such as maintaining physical distance, practicing hygiene, and using face masks. Government policies of lockdowns and re-openings were decided according to the perceived “acceptable loss”. Such decisions were based on identifying a balance between keeping the economy running, thereby minimizing economic and social damage, and the need to save lives. As expected, the easing of lockdowns frequently led to increases in morbidity. At the same time, lockdowns and quarantines caused severe social and economic damage, with increased rates of domestic violence, unemployment, depression, and non-normative behavior (alcohol and drug abuse), among others. The decision support system presented in Kenett et al. [13] was designed to answer specific questions through an assessment of the impact of lockdowns on hospital admissions and COVID-19-related deaths.
Research has shown that citizens’ compliance with stay-at-home policies is predicted by perceived risks and trust in science, scientists, and the authorities (Bargain and Aminjonov [14]), and by social capital (Borgonovi and Andrieu [15]). Restricted mobility directives (such as the closure of airports, shopping malls, and education institutions and the cancellation of sports events) were found to effectively prevent outbreaks and reduce the number of deaths. Furthermore, research found that citizens voluntarily decreased their mobility even without official instructions to stay at home (Yilmazkuday [16]). This theoretical understanding needed to be quantified in specific areas with the modeling of relevant data.
The pandemic management decision support system presented in Kenett et al. [13] was designed and implemented by integrating hospital data from ministries of health in Israel and Italy with Google mobility data that was available during the pandemic (https://www.google.com/covid19/mobility/, accessed on 20 October 2024).
To set up the decision support system, a database of health and mobility data was organized for each country with appropriate time lags. The data-driven learning of Bayesian network structures is possible with various algorithms (He et al. [17]). Here, a hill-climbing algorithm was applied to the data in order to derive a Bayesian network structure, accounting for whitelists and blacklists of required and forbidden arcs.
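To make this step concrete, the following R sketch learns a network structure by hill climbing with the bnlearn package cited later in this paper [18]. The simulated data, node names, and whitelist/blacklist entries are hypothetical placeholders and do not reproduce the configuration used in Kenett et al. [13].

```r
# Minimal sketch: learning a Bayesian network structure by hill climbing with
# whitelists (required arcs) and blacklists (forbidden arcs) in bnlearn.
# Data, node names, and arc lists are illustrative placeholders.
library(bnlearn)

set.seed(123)
n <- 300
# Simulated discrete data standing in for lagged mobility and health indicators
mobility <- data.frame(
  workplaces = factor(sample(c("low", "medium", "high"), n, replace = TRUE)),
  transit    = factor(sample(c("low", "medium", "high"), n, replace = TRUE)),
  hosp       = factor(sample(c("1", "2", "3"), n, replace = TRUE)),
  death      = factor(sample(c("1", "2", "3"), n, replace = TRUE))
)

# Domain knowledge: require the hosp -> death arc, forbid arcs from the health
# nodes back into the mobility nodes
wl <- data.frame(from = "hosp", to = "death")
bl <- expand.grid(from = c("hosp", "death"),
                  to   = c("workplaces", "transit"),
                  stringsAsFactors = FALSE)

dag <- hc(mobility, whitelist = wl, blacklist = bl)  # hill-climbing search
fitted <- bn.fit(dag, mobility)                      # estimate the conditional probability tables
print(dag)
```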
The learning algorithm was applied to the number of hospitalized COVID-19 patients, the number of COVID-19-related deaths, and mobility data in various settings such as workplaces, shopping centers, and public transportation facilities. A structural equation model based on the learned network was used to test the significance of links between variables. Structural equation models, closely related to path analysis, are often used in the social sciences to represent links between observed and latent variables. As a next step, a Bayesian network of statistically significant effects was constructed. Through conditioning on this Bayesian network, one can assess alternative scenarios corresponding to non-pharmaceutical interventions (NPIs) such as restricting access to parks or airports. Figure 1 shows a scenario from the COVID-19 study, based on data from Israel, in which all of the NPI restrictions are set to state = 1 (close), and Figure 2 shows a scenario in which all of the NPI restrictions are set to state = 2 (open). The figures are adapted from Kenett et al. [13]. The analysis shows the impact of non-pharmaceutical interventions in Israel on the number of patients admitted to hospital intensive care units (ICUs) and COVID-19-related deaths.
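As an illustration of the link-significance step, the R sketch below fits a path (structural equation) model with the lavaan package and extracts the p-values of the regression paths. The path specification, variable names, and simulated data are assumptions for illustration; they do not reproduce the model of Kenett et al. [13].

```r
# Minimal sketch: testing the significance of links implied by a learned
# structure with a path (structural equation) model in lavaan.
# The path specification and simulated data are hypothetical stand-ins.
library(lavaan)

set.seed(123)
n <- 300
workplaces <- rnorm(n)
transit    <- 0.5 * workplaces + rnorm(n)
hosp       <- 0.7 * transit + 0.3 * workplaces + rnorm(n)
death      <- 0.6 * hosp + rnorm(n)
df <- data.frame(workplaces, transit, hosp, death)

model <- '
  transit ~ workplaces
  hosp    ~ transit + workplaces
  death   ~ hosp
'
fit <- sem(model, data = df)

# p-values of the regression paths; non-significant links would be dropped
pe <- parameterEstimates(fit)
pe[pe$op == "~", c("lhs", "rhs", "est", "pvalue")]
```

In the actual workflow, only the links retained by such a test enter the Bayesian network used for scenario analysis.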
Figure 1 and Figure 2 show the distributions of discretized data representing mobility and COVID-19 health-related variables. The discretization was carried out separately for Israel and Italy to reflect the different country-specific decision threshold guidelines. The arrows in the figures correspond to links between these discretized variables. Comparing Figure 1, with full restrictions, to Figure 2, with no restrictions, we see how the distributions of hosp and death change. In particular, the percentage of level 3 (high) increases in hosp from 19% to 39% and in death from 30% to 36%. The Bayesian network thus provides an analysis of scenarios with and without enforcing (or lifting) mobility restrictions, supporting the choices made by decision makers.
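The kind of what-if comparison shown in Figure 1 and Figure 2 can, in principle, be reproduced by conditioning a fitted discrete network on the NPI state and reading off the resulting distributions. The hedged R sketch below does this with bnlearn's cpquery on a toy network; the node names, states, and probabilities are invented and do not correspond to the published figures.

```r
# Minimal sketch: comparing P(hosp = "3") under "closed" vs. "open" transit
# scenarios by conditioning a fitted discrete Bayesian network.
# Node names, states, and the simulated dependence are invented.
library(bnlearn)

set.seed(123)
n <- 2000
transit <- factor(sample(c("closed", "open"), n, replace = TRUE))
# Simulate a dependence: high hospitalization is likelier when transit is open
p_high  <- ifelse(transit == "open", 0.4, 0.2)
hosp    <- factor(ifelse(runif(n) < p_high, "3", "1"))
death   <- factor(ifelse(runif(n) < ifelse(hosp == "3", 0.5, 0.2), "high", "low"))
df <- data.frame(transit, hosp, death)

dag    <- model2network("[transit][hosp|transit][death|hosp]")
fitted <- bn.fit(dag, df)

# Approximate conditional probabilities under the two scenarios
p_closed <- cpquery(fitted, event = (hosp == "3"), evidence = (transit == "closed"))
p_open   <- cpquery(fitted, event = (hosp == "3"), evidence = (transit == "open"))
cat("P(hosp high | transit closed):", round(p_closed, 2), "\n")
cat("P(hosp high | transit open):  ", round(p_open, 2), "\n")
```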
Kenett et al. [13] show how this combination of structural equation models and Bayesian networks sets up a decision support system for pandemic management. Conditioning on variables in this network enables a quantitative analysis of alternative scenarios. Figure 1 and Figure 2 are examples of a decision support system with interactive functionality. This case study also demonstrates the ability of Bayesian networks to integrate data from different sources and to evaluate alternative scenarios. Bayesian networks also provide an effective visual rendering: see, for example, the bnlearn R package (Scutari [18]) or the GeNIe software (https://www.bayesfusion.com/genie/, accessed on 20 October 2024). The next case study is about a decision support system used in managing the usability of websites.

3. The Website Usability Case Study

This section presents a decision support system used in managing the usability of a website, the decision support system for user interface design (DSUID). For more details, see Harel et al. [19]. A DSUID includes an analytic method based on the integration of models for estimating and analyzing website visitors’ activities. It consists of a seven-layer model with a Bayesian network applied to clickstream data.
The goal of usability design diagnostics is to identify, for each website page, design deficiencies that hamper the user navigation experience. To understand the user experience, we need to know the user activity and the user expectations. The diagnosis of design deficiencies involves measurements of the site navigation, statistical calculations, and statistical decisions.
How can we tell whether website visitors encounter difficulties in exploring a particular page, and if so, what kind of difficulty do they experience? Website visitors are usually task-driven, but we do not know whether the visitors’ goals are related to a specific web page. We also usually cannot tell whether visitors know anything a priori about the site, whether they believe that the site is relevant to their goals, or whether they have visited it before. It may be that the visitors are simply exploring the site, or that they are following a procedure to accomplish a task. Yet, their behavior reflects their perceptions of the site content. Server logs provide time stamps for all hits, including those of page HTML files, but also of the image files and scripts used for the page display. The time stamps of these additional files enable us to estimate three important time intervals (a computational sketch follows the list below):
i. The time the visitors wait until the beginning of the file download. This is used as a measure of page responsiveness.
ii. The download time. This is used as a measure of page performance.
iii. The time from download completion to the visitor’s request for the next page. This is the time during which the visitor reads the page content, but may also do other things, some of them unrelated to the page content.
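As a minimal illustration of how these intervals might be computed from raw server-log time stamps, consider the R sketch below. The log layout (one HTML page request followed by its auxiliary file hits and then the next page request) is a simplifying assumption, not the exact DSUID log format.

```r
# Minimal sketch: estimating page responsiveness, download time, and reading
# time from server-log time stamps. Each row is one file hit; the log layout
# and file names are simplifying assumptions.
hits <- data.frame(
  visitor = "v1",
  file    = c("/home.html", "/logo.png", "/site.js", "/about.html"),
  time    = as.POSIXct(c("2024-01-01 10:00:00", "2024-01-01 10:00:02",
                         "2024-01-01 10:00:04", "2024-01-01 10:00:40"))
)

page_request <- hits$time[hits$file == "/home.html"]    # request for the page itself
aux_times    <- hits$time[hits$file %in% c("/logo.png", "/site.js")]
next_request <- hits$time[hits$file == "/about.html"]   # visitor asks for the next page

responsiveness <- as.numeric(difftime(min(aux_times), page_request, units = "secs"))  # (i) wait until downloads begin
download_time  <- as.numeric(difftime(max(aux_times), page_request, units = "secs"))  # (ii) time to complete the page display
reading_time   <- as.numeric(difftime(next_request, max(aux_times), units = "secs"))  # (iii) time until the next page request

cat("responsiveness:", responsiveness, "s\n",
    "download time: ", download_time, "s\n",
    "reading time:  ", reading_time, "s\n")
```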
The DSUID needs to decide, based on the statistics of these time intervals, whether the visitors feel comfortable during website navigation, for instance, whether they feel that they waited too long for the page to download and how they feel about what they see on the webpage screen.
We want to determine whether a time interval is acceptable for a page visit, i.e., whether it is too short or too long. For example, consider an average page download time of 5 s. Visitors expecting a quick response to a search request may regard 5 s as too long. However, 5 s may be acceptable if the user’s goal is to learn or explore specific information. Same 5 s, different goal-driven experience.
Diagnostic-oriented time analysis examines the relationship between the page download time and page exits. If visitors are indifferent to the download time, then the page exit rate will be invariant with respect to the page download time. However, if the download time matters, then the page exit rate depends on it. When the page download time is acceptable, most visitors stay on the site, looking for additional information. When the download time is too long, more visitors abandon the site and go to other websites. The longer the download time, the higher the exit rate (a sketch of such an exit-rate analysis appears after the list below). In operating a DSUID, one distinguishes between three status conditions:
  • Design—At this stage, data is collected and analyzed. System architects and designers develop guidelines and operating procedures representing accumulated knowledge and experience on preventing operational failures. A prototype DSUID is then developed.
  • Testing—The prototype DSUID is subjected to beta testing. This is repeated when new website versions are launched, to evaluate the way they are actually being used.
  • Tracking—Ongoing DSUID tracking systems are required to handle changing operational patterns. Statistical process control (SPC) is employed to monitor the user experience by comparing actual results to expected results and acting on the gaps.
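The exit-rate diagnostic described above can be sketched as a simple logistic regression of page exits on download time; the simulated data, coefficients, and bucket boundaries below are illustrative assumptions.

```r
# Minimal sketch: checking whether page exits depend on download time.
# A significantly positive coefficient on download_time means longer downloads
# are associated with higher exit rates. All data are simulated.
set.seed(42)
n <- 500
download_time <- rexp(n, rate = 1 / 4)          # download times in seconds
p_exit <- plogis(-2 + 0.35 * download_time)     # exits become likelier as downloads slow
exit   <- rbinom(n, size = 1, prob = p_exit)

fit <- glm(exit ~ download_time, family = binomial)
summary(fit)$coefficients

# Equivalent descriptive check: observed exit rate by download-time bucket
bucket <- cut(download_time, breaks = c(0, 2, 5, 10, Inf))
tapply(exit, bucket, mean)
```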
In setting up a DSUID, we identify several layers of data collection and data analytics:
  • The first and lowest layer—user activity. This layer records significant user actions (involving screen changes or server-side processing).
  • The second layer—page hit attributes. This layer consists of download time, processing time, and user response time.
  • The third layer—transition analysis. The third-layer data are about transitions and repeated form submission (indicative of visitors’ difficulties in form filling).
  • The fourth layer—user problem indicator identification. Indicators of possible navigational difficulty, including (a) estimates of site exit predicted from the time elapsed until the next user action (as no exit indication is recorded in the server log file), (b) backward navigation, and (c) transitions to main pages, interpreted as escaping the current subtask.
  • The fifth layer—usage data. This consists of usage statistics, such as the following:
    • Average entry time;
    • Average download time;
    • Average time between repeated form submission;
    • Average time on website (indicating content-related behavior);
    • Average time on a previous screen (indicating ease of link finding).
  • The sixth layer—statistical decision. For each of the page attributes, the DSUID compares the data over the exceptional page views to the data over all page views. The null hypothesis is that (for each attribute) the statistics of both samples are the same. A simple two-tailed t test can be used to reject it, and therefore to conclude that certain page attributes are potentially problematic. A typical significance level is 5% (see the sketch after this list).
  • The seventh and top layer—interpretation. For each of the page attributes, the DSUID provides a list of possible reasons for the difference between the statistics over the exceptional navigation patterns and those over all of the page hits. Typically, the usability analyst decides which of the potential sources of visitors’ difficulties is applicable to the particular deficiency.
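The layer-six decision can be illustrated with a basic two-sample, two-tailed t test in R, comparing a page attribute (here, download time) over the exceptional page views against all page views. The 5% threshold follows the description above; the samples themselves are simulated placeholders.

```r
# Minimal sketch of the layer-six statistical decision: compare a page
# attribute (download time, in seconds) over the exceptional page views
# against all page views with a two-tailed two-sample t test at the 5% level.
set.seed(7)
all_views         <- rexp(1000, rate = 1 / 4)   # simulated download times, all page views
exceptional_views <- rexp(80,   rate = 1 / 7)   # simulated download times, views flagged in layer 4

test <- t.test(exceptional_views, all_views)    # two-tailed by default
test$p.value
if (test$p.value < 0.05) {
  cat("Download time is potentially problematic for this page.\n")
} else {
  cat("No evidence that download time differs for the exceptional views.\n")
}
```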
Bayesian networks are used in layers four to seven of a DSUID. Figure 3 and Figure 4, prepared with GeNIe 2.0, present Bayesian networks derived from the output of web log analyzers. The networks indicate associations between page size, text size, download time, reading time, seek time, back activations, and exits.
With low seek time (Figure 3), the high (s4) and very high (s5) exit rate levels together account for 49%. With high seek time (Figure 4), this total jumps to 96%, indicating that seek time strongly affects user behavior.
This case study introduces a seven-layer architecture used to design a decision support system aimed at tracking the usability of websites. The characteristics of DSUID can be generalized to other applications such as industrial process management where online data from sensors is used to manage processes. For a more general introduction and a state-of-the-art review of dashboards used in web usability, see Almasi et al. [20].

4. The Political Conflict Resolution Case Study

Politography refers to an analysis of how nations, states, or other political entities interact in relation to each other. In specific projects, it consists of a data-driven tool describing, measuring, and evaluating the level of control between political entities over particular territories or domains. This context is primarily empirical and analytic, aiming to inform political decision-making about current and projected changes in loci of control. A more general perspective applies politography to regional conflicts, where it is classified as a conflict resolution or conflict management tool. Here, we consider politography as a decision support system.
Below, we describe the design and construction of a politography decision support system for policy makers concerned with intergroup conflict management, focused on the Israeli–Palestinian conflict. It provides a case study that can be generalized to other applications. The decision support system developed in this case study has over 9000 indicators pertaining to geographical areas labeled A, B, and C, each with a distinct legal status specified in the Oslo Accords. A second level of the decision support system hierarchy consists of three domains that characterize levels of control in each one of the geographic areas: security, geo-spatial, and economic. In addition, the system contains contextual data from different sources such as the political, diplomatic, social, and legal domains. For more details on this case study, see Arieli et al. [21].
In developing a politography decision support system, the main methodological challenges include the following:
(i) Integrating data from different sources, with different update frequencies and units;
(ii) Defining composite indicators that provide unified views;
(iii) Tracking and modeling trends at various levels of the system hierarchy;
(iv) Analyzing alternative scenarios for supporting decision makers.
The methodology applied in designing and implementing the politography decision support system involves four parts: (i) Map, (ii) Construct, (iii) Identify, and (iv) Analyze.
I. Map: Mapping categorizes factors into economic, security, and geo-spatial (demographic) domains. In this phase, experts determine indicators reflecting domains of control by geographical area. A methodology supporting this part is the Goal–Question–Metric (GQM) approach presented in Van Solingen et al. [22]. The GQM steps are to (1) generate a set of goals, (2) derive a set of questions relating to the goals, and (3) develop a set of metrics needed to answer the questions. Data can be viewed using dynamic graphs, with the ability to zoom in on any individual indicator and to navigate the data hierarchy.
II. Construct: This stage involves developing an integrated database combining indicators from different domains, by year or by quarter. In this phase, research teams load data into a database and compute indices relative to a common annual baseline. The politography decision support system updates a configuration file with indicator names and identifies missing values and outliers. We determine data subsets for use in integration (by year or quarter) and conduct linkage analysis to obtain integrated data.
III. Identify: Here, trends are identified in individual and composite indicators, and relative control levels are computed by year and by entity over the predetermined territories listed above. For trend analysis, we compute composite indicators by domain using the median. Bar charts, trend charts, and variable cluster analysis are used to identify the most representative cluster indicator. In addition, domains are combined to compute an overall trend with a composite indicator. The method applied to define composite indicators is to compute individual indicators relative to a base year and then take the yearly median across indicators. To derive the combined composite indicator, we define, for each indicator Yi, a desirability function di(Yi), which assigns numbers between 0 and 1 to the values of Yi. The value di(Yi) = 0 represents an undesirable value of Yi and di(Yi) = 1 represents a desirable or ideal value. The individual desirabilities are then combined into an overall desirability index using the geometric mean of the individual desirabilities:
Overall Desirability Function $= \left[ d_1(Y_1) \times d_2(Y_2) \times \cdots \times d_k(Y_k) \right]^{1/k}$
where k denotes the number of indicators. Notice that if any response Yi is completely undesirable (di(Yi) = 0), then the overall desirability is zero. To account for this “zero control”, we apply an additional step that mitigates such cases, and the resulting desirability function is used as a composite indicator based on the individual indicators. The final composite indicators are plotted on a Y by X graph with four triangular quadrants, where each triangle represents a different combination of Israeli and PA control levels. For more on desirability functions, see Derringer and Suich [23]. A computational sketch of this composite indicator construction appears after this list.
IV. Analyze: Scenarios are analyzed by determining the list and contribution of indicators affecting target indicators. Here, we apply Bayesian network analysis in order to understand the links between indicators from the same or different domains, and how changes in the level of one indicator influence the other indicators. This provides the ability to run what-if scenarios that assist policymakers in making informed decisions accounting for the consequences of policy choices. Below, we demonstrate an application of Bayesian networks to a subset of 18 indicators, labeled I1–I16, over 12 years (2010–2021). The data analyzed is calibrated to the year 2022 as the baseline. In the underlying data table, the last column, labeled MEDIAN, is the median of the row, used as a composite indicator representing the specific year. The indicator data was first discretized into three groups of equal width.
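As a computational illustration of the Construct and Identify steps, the R sketch below indexes hypothetical indicator series to a base year, takes the yearly median as a simple composite, and combines per-indicator desirabilities with a geometric mean, including a small floor to mitigate the zero-desirability case. The indicator values, desirability bounds, and the floor value are assumptions for illustration only.

```r
# Minimal sketch: baseline-indexed indicators, a yearly median composite, and
# a geometric-mean desirability index with a small floor to mitigate the
# zero-desirability case. Values, bounds, and the floor are illustrative.
set.seed(1)
years <- 2010:2021
raw <- data.frame(
  year = years,
  I1 = cumsum(rnorm(length(years), mean = 2,   sd = 1)) + 100,
  I2 = cumsum(rnorm(length(years), mean = -1,  sd = 1)) + 50,
  I3 = cumsum(rnorm(length(years), mean = 0.5, sd = 1)) + 20
)

base_year <- 2010
indexed <- raw
for (col in c("I1", "I2", "I3")) {
  indexed[[col]] <- 100 * raw[[col]] / raw[[col]][raw$year == base_year]  # base year = 100
}

# Median across indicators as a simple yearly composite
indexed$MEDIAN <- apply(indexed[, c("I1", "I2", "I3")], 1, median)

# Desirability of each indicator: linear rescaling to [0, 1] between assumed bounds
desirability <- function(y, lo, hi) pmin(pmax((y - lo) / (hi - lo), 0), 1)
d1 <- desirability(indexed$I1, lo = 80, hi = 140)
d2 <- desirability(indexed$I2, lo = 60, hi = 120)
d3 <- desirability(indexed$I3, lo = 90, hi = 130)

eps <- 0.01                                                   # floor for the zero-desirability case
D <- (pmax(d1, eps) * pmax(d2, eps) * pmax(d3, eps))^(1 / 3)  # geometric mean, k = 3

data.frame(year = indexed$year,
           MEDIAN = round(indexed$MEDIAN, 1),
           desirability = round(D, 2))
```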
From this data, we can derive a directed acyclic graph, as shown in Figure 5.
In Figure 6, we show the network conditioned on the year 2010 and, in Figure 7, the network conditioned on the years after 2019. In Figure 6, indicator I10 is in the lowest category with a probability of 60%; in Figure 7, roughly nine years later, this probability drops to 20%.
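A sketch of the discretization and conditioning steps, in the spirit of Figures 6 and 7, is given below: indicator series are cut into three equal-width levels with bnlearn's discretize function and a small fitted network is conditioned on an early versus late period. The data, period split, and network structure are invented for illustration.

```r
# Minimal sketch: discretize indicator series into three equal-width levels
# (bnlearn::discretize, method = "interval") and condition a small fitted
# network on an early/late period, in the spirit of Figures 6 and 7.
# The data, period split, and network structure are invented.
library(bnlearn)

set.seed(2)
n <- 120                                    # e.g., simulated monthly indicator values
period <- factor(rep(c("early", "late"), each = n / 2))
I10 <- c(rnorm(n / 2, mean = 10, sd = 3), rnorm(n / 2, mean = 16, sd = 3))
I12 <- 0.5 * I10 + rnorm(n, sd = 2)

disc <- discretize(data.frame(I10, I12), method = "interval", breaks = 3)
disc$period <- period

dag    <- model2network("[period][I10|period][I12|I10]")
fitted <- bn.fit(dag, disc)

low_level <- levels(disc$I10)[1]            # the lowest of the three categories
# Probability that I10 is in its lowest category, early vs. late period
cpquery(fitted, event = (I10 == low_level), evidence = (period == "early"))
cpquery(fitted, event = (I10 == low_level), evidence = (period == "late"))
```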
These model-driven estimates provide probabilistic statements about changes over time. This Bayesian network analysis gives the decision support system the ability to consider forward-looking alternative scenarios.

5. Discussion and Future Research Pathways

As mentioned in the Introduction, decision support systems use multivariate data to provide a status report in specific contexts. Bayesian networks can be used to link multiple variables and, by conditioning the network, evaluate the impact of specific decisions. The situation rooms and war rooms mentioned in the Introduction were initially designed to provide a reflection of reality such as a combat zone. Decision support systems with Bayesian networks take this a step further.
With modern sensor technology, powerful computing, advanced analytics, and flexible systems, we are seeing the ubiquitous development of digital twins that consist of digital assets in parallel to physical assets (see Kenett and Bortman [24]). Digital assets are designed and implemented to provide monitoring, diagnostic, prognostic, and prescriptive capabilities supporting the performance of systems. Digital twins have been implemented in a wide range of domains including health care, energy management, and social ecosystems (see Elkefi and Asan [25], and Yossef Ravid and Aharon-Gutman [26]).
The scope of Bayesian network applications has been extended to veterinary epidemiology, maternal health, and road safety. For papers that introduce methodological developments in additive Bayesian networks, see [27,28].
A more general perspective is to consider developments in analytics in the context of decision support systems. The exponential growth of interest in artificial intelligence and data science has forced some rethinking of the role and integration of statistics and analytics, especially with big or massive datasets (see Ruggeri et al. [29]).
An additional direction with significant impact is the widespread interest in large language models (LLMs) and related prompt engineering methodologies. These generative pre-trained transformers can operate on data obtained from open media and publications or on specific corpora of documents, as in Google NotebookLM (https://notebooklm.google/, accessed on 20 October 2024) or OpenAI’s Deep Research (https://openai.com/index/introducing-deep-research/, accessed on 20 October 2024). Properly integrating LLMs into decision support systems remains a challenge.
The paper covers three case studies where Bayesian networks were applied in the context of decision support systems. The case studies are from epidemiology, website engineering, and political science. They cover completely different domains and show different methodologies of design and implementation. A comprehensive approach to such initiatives is needed and requires future research covering both applied and theoretical developments. Such developments in decision support systems can impact the future of civic societies as envisaged over 120 years ago (Geddes [30]).

Funding

This research received no external funding.

Data Availability Statement

The original data presented in the study are openly available at the following websites and on request: Case study 1: https://doi.org/10.3390/ijerph19084859; case study 2: https://doi.org/10.1002/9780470315262.ch7; case study 3: https://doi.org/10.20944/preprints202508.1309.v1.

Conflicts of Interest

The author is chairman of KPA Ltd., and the third case study presented in the paper was conducted in this context; no conflict of interest is declared by the author.

References

1. Keen, P.G. Decision support systems: A research perspective. In Decision Support Systems: Issues and Challenges: Proceedings of an International Task Force Meeting; Center for Information Systems Research, Massachusetts Institute of Technology, Sloan School of Management: Boston, MA, USA, 1980; pp. 23–44.
2. Sprague, R.H., Jr. A framework for the development of decision support systems. In MIS Quarterly; Carlson School of Management, University of Minnesota: Minneapolis, MN, USA, 1980; pp. 1–26.
3. Bonczek, R.H.; Holsapple, C.W.; Whinston, A.B. Foundations of Decision Support Systems; Academic Press: Cambridge, MA, USA, 2014.
4. Hanke, F.; Bita, I.M.; von Heißen, O.; Julian, W.; Aschot, H.; Roman, D. AI-augmented systems engineering: Conceptual application of retrieval-augmented generation for model-based systems engineering graph. Proc. Des. Soc. 2025, 5, 439–448.
5. Kenett, R.S.; Harel, A.; Ruggeri, F. Agile Testing with User Data in Cloud and Edge Computing Environments. In Analytic Methods in Systems and Software Testing; John Wiley and Sons: Hoboken, NJ, USA, 2018; pp. 353–371.
6. Pearl, J. Causal diagrams for empirical research. Biometrika 1995, 82, 669–688.
7. Kenett, R.S. On generating high InfoQ with Bayesian networks. Qual. Technol. Quant. Manag. 2016, 13, 309–332.
8. Kenett, R.S. Bayesian networks: Theory, applications and sensitivity issues. Encycl. Semant. Comput. Robot. Intell. 2017, 1, 1630014.
9. Kenett, R.S. Introduction aux Réseaux Bayésiens et Leurs Applications. In Statistique et Causalité; Bertrand, F., Saporta, G., Thomas-Agnan, C., Eds.; Editions Technip: Paris, France, 2021.
10. Pearl, J.; Mackenzie, D. The Book of Why: The New Science of Cause and Effect; Basic Books: New York, NY, USA, 2018.
11. Zhang, Y.; Kim, S. Gaussian Graphical Model Estimation and Selection for High-Dimensional Incomplete Data Using Multiple Imputation and Horseshoe Estimators. Mathematics 2024, 12, 1837.
12. Scutari, M.; Denis, J.B. Bayesian Networks: With Examples in R; Chapman and Hall/CRC: Boca Raton, FL, USA, 2021.
13. Kenett, R.S.; Manzi, G.; Rapaport, C.; Salini, S. Integrated analysis of behavioural and health COVID-19 data combining Bayesian networks and structural equation models. Int. J. Environ. Res. Public Health 2022, 19, 4859.
14. Bargain, O.; Aminjonov, U. Trust and compliance to public health policies in times of COVID-19. J. Public Econ. 2020, 192, 104316.
15. Borgonovi, F.; Andrieu, E. Bowling together by bowling alone: Social capital and COVID-19. Soc. Sci. Med. 2020, 265, 113501.
16. Yilmazkuday, H. Stay-at-home works to fight against COVID-19: International evidence from Google mobility data. J. Hum. Behav. Soc. Environ. 2021, 31, 210–220.
17. He, C.; Di, R.; Tan, X. Bayesian Network Structure Learning Using Improved A* with Constraints from Potential Optimal Parent Sets. Mathematics 2023, 11, 3344.
18. Scutari, M. Learning Bayesian networks with the bnlearn R package. J. Stat. Softw. 2010, 35, 1–22.
19. Harel, A.; Kenett, R.S.; Ruggeri, F. Modeling web usability diagnostics on the basis of usage statistics. In Statistical Methods in e-Commerce Research; John Wiley and Sons: Hoboken, NJ, USA, 2008; pp. 131–172.
20. Almasi, S.; Bahaadinbeigy, K.; Ahmadi, H.; Sohrabei, S.; Rabiei, R. Usability evaluation of dashboards: A systematic literature review of tools. BioMed Res. Int. 2023, 2023, 9990933.
21. Arieli, S.; Jacob, R.B.; Hirschberger, G.; Hirsch-Hoefler, S.; Kenett, A.; Kenett, R.S. A Decision Support Tool Integrating Data and Advanced Modeling. 2024. Available online: https://www.preprints.org/frontend/manuscript/2ca01bfd9883060d7dc9f200b43b2a46/download_pub (accessed on 20 October 2024).
22. Van Solingen, R.; Basili, V.; Caldiera, G.; Rombach, H.D. Goal Question Metric approach. In Encyclopedia of Software Engineering; John Wiley and Sons: Hoboken, NJ, USA, 2002.
23. Derringer, G.; Suich, R. Simultaneous optimization of several response variables. J. Qual. Technol. 1980, 12, 214–219.
24. Kenett, R.S.; Bortman, J. The digital twin in Industry 4.0: A wide-angle perspective. Qual. Reliab. Eng. Int. 2022, 38, 1357–1366.
25. Elkefi, S.; Asan, O. Digital twins for managing health care systems: Rapid literature review. J. Med. Internet Res. 2022, 24, e37641.
26. Yossef Ravid, B.; Aharon-Gutman, M. The social digital twin: The social turn in the field of smart cities. Environ. Plan. B Urban Anal. City Sci. 2023, 50, 1455–1470.
27. Pittavino, M.; Dreyfus, A.; Heuer, C.; Benschop, J.; Wilson, P.; Collins-Emerson, J.; Torgerson, P.R.; Furrer, R. Comparison between generalized linear modelling and additive Bayesian network; identification of factors associated with the incidence of antibodies against Leptospira interrogans sv Pomona in meat workers in New Zealand. Acta Trop. 2017, 173, 191–199.
28. Carrodano, C. Data-driven risk analysis of nonlinear factor interactions in road safety using Bayesian networks. Sci. Rep. 2024, 14, 18948.
29. Ruggeri, F.; Banks, D.; Cleveland, W.S.; Fisher, N.I.; Escobar-Anel, M.; Giudici, P.; Raffinetti, E.; Hoerl, R.W.; Lin, D.K.J.; Kenett, R.S.; et al. Is There a Future for Stochastic Modeling in Business and Industry in the Era of Machine Learning and Artificial Intelligence? Appl. Stoch. Models Bus. Ind. 2025, 41, e70004.
30. Geddes, P. Civics: As applied sociology. 1904; Volume 1, pp. 100–118; Project Gutenberg eBook 13205. Available online: https://www.gutenberg.org/files/13205/13205-h/13205-h.htm (accessed on 20 October 2024).
Figure 1. Bayesian network with the scenario when transport NPI restrictions are enforced, i.e., set to state = 1 (close).
Figure 2. Bayesian network conditioned on a scenario when transport NPI restrictions are lifted, i.e., set to state = 2 (open).
Figure 3. A Bayesian network of web log data, conditioned on low average seek time (at level s2).
Figure 4. A Bayesian network of web log data, conditioned on high average seek time (at level s5).
Figure 5. Bayesian network from 18 indicators.
Figure 6. Bayesian network, conditioned on the year 2010.
Figure 7. Bayesian network, conditioned on the years after 2019.