Algorithms in Decision Support Systems Vol. 2

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (1 June 2022) | Viewed by 27598

Special Issue Editor


Prof. Dr. Edward Rolando Núñez-Valdez
Guest Editor
Department of Computer Science, University of Oviedo, 33007 Oviedo, Spain
Interests: web engineering; artificial intelligence; recommendation systems; health informatics; modeling software with DSL and MDE

Special Issue Information

Dear Colleagues,

Decision support systems (DSSs) are now essential tools that provide information and support for decision making on problems whose complexity makes them difficult for humans to solve unaided. These systems allow information to be extracted and manipulated in a flexible way, letting users define what information they need and how to combine it. Because of their ability to analyze and process data, they are widely used in fields such as population health management, education, medical diagnosis, catastrophe avoidance, agriculture, sustainable development, sales projections, inventory organization, production design, etc. The basic components of a DSS architecture are (1) the knowledge base or database required for the storage of structured or unstructured information; (2) the user interface, essential for interaction between users and machines; (3) the model, used to infer decisions; and (4) the user, who operates the system and makes the decisions. The models used by DSSs are usually based on different types of algorithms, such as neural networks, logistic regression, classification trees, fuzzy logic, etc.
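
To make the four components above concrete, the following minimal Python sketch wires together a knowledge base, a simple model, and a query-style user interface. Every class, attribute, and threshold here is hypothetical and serves only to illustrate the architecture, not any particular DSS discussed in this issue.

```python
# Minimal, illustrative sketch of the four DSS components described above.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class KnowledgeBase:
    """(1) Knowledge base: stores structured records the DSS can query."""
    records: List[dict]

    def query(self, predicate: Callable[[dict], bool]) -> List[dict]:
        return [r for r in self.records if predicate(r)]


class ThresholdModel:
    """(3) Model: a deliberately simple rule that infers a recommendation.

    Real DSSs would plug in neural networks, logistic regression,
    classification trees, fuzzy logic, etc.
    """

    def __init__(self, threshold: float):
        self.threshold = threshold

    def infer(self, records: List[dict]) -> str:
        if not records:
            return "no data"
        risk = sum(r["risk_score"] for r in records) / len(records)
        return "escalate" if risk >= self.threshold else "monitor"


def user_interface(kb: KnowledgeBase, model: ThresholdModel, region: str) -> str:
    """(2) User interface: lets (4) the user choose which information to combine,
    then presents the model's suggestion for the final human decision."""
    relevant = kb.query(lambda r: r["region"] == region)
    return model.infer(relevant)


if __name__ == "__main__":
    kb = KnowledgeBase(records=[
        {"region": "north", "risk_score": 0.8},
        {"region": "north", "risk_score": 0.6},
        {"region": "south", "risk_score": 0.2},
    ])
    print(user_interface(kb, ThresholdModel(threshold=0.5), "north"))  # -> escalate
```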

As DSSs take on an increasingly central role in decision making across different scenarios, it becomes ever more important for researchers and developers to refine and propose new algorithms that optimize the performance of these systems, since such algorithms are usually adapted to the data available for a particular domain of knowledge. This Special Issue on “Algorithms in Decision Support Systems” provides a platform for researchers and practitioners to exchange new ideas in the field of decision support systems and their applications in many areas. We encourage authors across the world to submit their original and unpublished work. We have a special interest in works focusing on the topics listed below, but we are open to other works that fit the theme of the Special Issue.

Prof. Dr. Edward Rolando Núñez-Valdez
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning algorithms
  • machine learning algorithms
  • genetic algorithms
  • evolutionary algorithms
  • fuzzy logic
  • probabilistic reasoning
  • rule-based algorithms
  • statistical methods
  • atmospheric models
  • data mining algorithms
  • recommender systems algorithms
  • explainable AI algorithms

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (8 papers)


Editorial


3 pages, 193 KiB  
Editorial
Special Issue on Algorithms in Decision Support Systems Vol.2
by Edward Rolando Núñez-Valdez
Algorithms 2023, 16(11), 512; https://doi.org/10.3390/a16110512 - 8 Nov 2023
Viewed by 1316
Abstract
Currently, decision support systems (DSSs) are essential tools that provide information and support for decision making on possible problems that, due to their level of complexity, cannot be easily solved by humans [...] Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)

Research


19 pages, 3484 KiB  
Article
Reasoning about Confidence in Goal Satisfaction
by Malak Baslyman, Daniel Amyot and John Mylopoulos
Algorithms 2022, 15(10), 343; https://doi.org/10.3390/a15100343 - 23 Sep 2022
Cited by 2 | Viewed by 2282
Abstract
Goal models are commonly used requirements engineering artefacts that capture stakeholder requirements and their inter-relationships in a way that supports reasoning about their satisfaction, trade-off analysis, and decision making. However, when there is uncertainty in the data used as evidence to evaluate goal models, it is crucial to understand the confidence or trust level in such evaluations, as uncertainty may increase the risk of making premature or incorrect decisions. Different approaches have been proposed to tackle goal model uncertainty issues and risks. However, none of them considers simple quality measures of collected data as a starting point. In this paper, we propose a Data Quality Tagging and Propagation Mechanism to compute the confidence level of a goal’s satisfaction level based on the quality of input data sources. The paper uses the Goal-oriented Requirement Language (GRL), part of the User Requirements Notation (URN) standard, in examples, with an implementation of the proposed mechanism and a case study conducted in order to demonstrate and assess the approach. The availability of computed confidence levels as an additional piece of information enables decision makers to (i) modulate the satisfaction information returned by goal models and (ii) make better-informed decisions, including looking for higher-quality data when needed. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)
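
As a rough illustration of the idea described in this abstract (not the paper's actual Data Quality Tagging and Propagation Mechanism), the sketch below tags leaf-level evidence with a data-quality score and propagates it up a goal tree. The weighted-average propagation rule, the goal names, and the scores are all assumptions made for the example.

```python
# Illustrative sketch: propagate data-quality tags from leaf-level evidence
# up a goal tree to obtain a confidence level for the root goal's satisfaction.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Goal:
    name: str
    quality: float = 1.0          # data-quality tag in [0, 1] for leaf evidence
    children: List["Goal"] = field(default_factory=list)
    weights: List[float] = field(default_factory=list)  # children's contribution weights

    def confidence(self) -> float:
        if not self.children:     # leaf: confidence equals the quality of its data source
            return self.quality
        total = sum(self.weights)  # assumed rule: weighted average of child confidences
        return sum(w * c.confidence() for w, c in zip(self.weights, self.children)) / total


if __name__ == "__main__":
    root = Goal("Improve patient throughput",
                children=[Goal("Reduce waiting time", quality=0.9),
                          Goal("Increase staff availability", quality=0.4)],
                weights=[0.7, 0.3])
    print(f"confidence in goal satisfaction: {root.confidence():.2f}")  # -> 0.75
```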

30 pages, 1344 KiB  
Article
Designing Reinforcement Learning Algorithms for Digital Interventions: Pre-Implementation Guidelines
by Anna L. Trella, Kelly W. Zhang, Inbal Nahum-Shani, Vivek Shetty, Finale Doshi-Velez and Susan A. Murphy
Algorithms 2022, 15(8), 255; https://doi.org/10.3390/a15080255 - 22 Jul 2022
Cited by 20 | Viewed by 4647
Abstract
Online reinforcement learning (RL) algorithms are increasingly used to personalize digital interventions in the fields of mobile health and online education. Common challenges in designing and testing an RL algorithm in these settings include ensuring the RL algorithm can learn and run stably under real-time constraints, and accounting for the complexity of the environment, e.g., a lack of accurate mechanistic models for the user dynamics. To guide how one can tackle these challenges, we extend the PCS (predictability, computability, stability) framework, a data science framework that incorporates best practices from machine learning and statistics in supervised learning to the design of RL algorithms for the digital interventions setting. Furthermore, we provide guidelines on how to design simulation environments, a crucial tool for evaluating RL candidate algorithms using the PCS framework. We show how we used the PCS framework to design an RL algorithm for Oralytics, a mobile health study aiming to improve users’ tooth-brushing behaviors through the personalized delivery of intervention messages. Oralytics will go into the field in late 2022. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)
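
The sketch below is a toy version of the workflow this abstract describes: evaluating a candidate online RL algorithm inside a simulation environment before deployment. The simulated user model, the two-action setup, and the Thompson sampling algorithm are illustrative assumptions; they are not the Oralytics design nor the PCS framework itself.

```python
# Toy simulation environment plus a candidate online RL algorithm
# (Beta-Bernoulli Thompson sampling over "send message" vs. "no message").
import random


class SimulatedUser:
    """Stand-in environment: the chance of brushing depends on whether a message is sent."""
    def __init__(self, base=0.35, lift=0.15):
        self.p = {0: base, 1: base + lift}   # action 0 = no message, 1 = send message

    def step(self, action: int) -> int:
        return 1 if random.random() < self.p[action] else 0   # reward: brushed (1) or not (0)


class BetaThompson:
    """Beta-Bernoulli Thompson sampling over the available actions."""
    def __init__(self, n_actions=2):
        self.successes = [1] * n_actions   # Beta(1, 1) priors
        self.failures = [1] * n_actions

    def act(self) -> int:
        samples = [random.betavariate(self.successes[a], self.failures[a])
                   for a in range(len(self.successes))]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, action: int, reward: int) -> None:
        if reward:
            self.successes[action] += 1
        else:
            self.failures[action] += 1


if __name__ == "__main__":
    random.seed(0)
    env, alg = SimulatedUser(), BetaThompson()
    total = 0
    for _ in range(2000):               # evaluate the candidate algorithm in simulation
        action = alg.act()
        reward = env.step(action)
        alg.update(action, reward)
        total += reward
    print("average reward in simulation:", total / 2000)
```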

22 pages, 5237 KiB  
Article
BAG-DSM: A Method for Generating Alternatives for Hierarchical Multi-Attribute Decision Models Using Bayesian Optimization
by Martin Gjoreski, Vladimir Kuzmanovski and Marko Bohanec
Algorithms 2022, 15(6), 197; https://doi.org/10.3390/a15060197 - 7 Jun 2022
Cited by 3 | Viewed by 2245
Abstract
Multi-attribute decision analysis is an approach to decision support in which decision alternatives are evaluated by multi-criteria models. An advanced feature of decision support models is the possibility to search for new alternatives that satisfy certain conditions. This task is important for practical decision support; however, the related work on generating alternatives for qualitative multi-attribute decision models is quite scarce. In this paper, we introduce Bayesian Alternative Generator for Decision Support Models (BAG-DSM), a method to address the problem of generating alternatives. More specifically, given a multi-attribute hierarchical model and an alternative representing the initial state, the goal is to generate alternatives that demand the least change in the provided alternative to obtain a desirable outcome. The brute force approach has exponential time complexity and has prohibitively long execution times, even for moderately sized models. BAG-DSM avoids these problems by using a Bayesian optimization approach adapted to qualitative DEX models. BAG-DSM was extensively evaluated and compared to a baseline method on 43 different DEX decision models with varying complexity, e.g., different depth and attribute importance. The comparison was performed with respect to: the time to obtain the first appropriate alternative, the number of generated alternatives, and the number of attribute changes required to reach the generated alternatives. BAG-DSM outperforms the baseline in all of the experiments by a large margin. Additionally, the evaluation confirms BAG-DSM’s suitability for the task, i.e., on average, it generates at least one appropriate alternative within two seconds. The relation between the depth of the multi-attribute hierarchical models—a parameter that increases the search space exponentially—and the time to obtaining the first appropriate alternative was linear and not exponential, by which BAG-DSM’s scalability is empirically confirmed. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)
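
To clarify the task that BAG-DSM addresses (though not its Bayesian optimization method), the sketch below enumerates alternatives ordered by the number of attribute changes until a hypothetical qualitative model returns the desired outcome; it also makes the exponential cost of brute-force search easy to see. The toy model, attributes, and value scale are assumptions for the example.

```python
# Illustration of the alternative-generation task (not BAG-DSM itself):
# find alternatives that change as few attributes as possible in the initial
# alternative while reaching a desired outcome of a qualitative model.
# Brute-force enumeration like this grows exponentially with the number of attributes.
from itertools import combinations, product

SCALE = ["low", "medium", "high"]


def toy_model(alt: dict) -> str:
    """Hypothetical qualitative model: 'acceptable' only when no attribute is 'low'."""
    return "acceptable" if all(v != "low" for v in alt.values()) else "unacceptable"


def generate_alternatives(initial: dict, desired: str, max_changes: int = 2):
    attrs = list(initial)
    for k in range(1, max_changes + 1):              # fewest changes first
        for changed in combinations(attrs, k):
            for values in product(SCALE, repeat=k):
                alt = dict(initial)
                alt.update(dict(zip(changed, values)))
                if alt != initial and toy_model(alt) == desired:
                    yield k, alt


if __name__ == "__main__":
    initial = {"soil": "low", "irrigation": "medium", "fertilisation": "low"}
    for n_changes, alt in generate_alternatives(initial, "acceptable"):
        print(n_changes, alt)
```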

13 pages, 2486 KiB  
Article
EEG Pattern Classification of Picking and Coordination Using Anonymous Random Walks
by Inon Zuckerman, Dor Mizrahi and Ilan Laufer
Algorithms 2022, 15(4), 114; https://doi.org/10.3390/a15040114 - 26 Mar 2022
Cited by 6 | Viewed by 2452
Abstract
Tacit coordination games are games where players are trying to select the same solution without any communication between them. Various theories have attempted to predict behavior in tacit coordination games. Until now, research combining tacit coordination games with electrophysiological measures was mainly based on spectral analysis. In contrast, EEG coherence enables the examination of functional and morphological connections between brain regions. Hence, we aimed to differentiate between different cognitive conditions using coherence patterns. Specifically, we have designed a method that predicts the class label of coherence graph patterns extracted out of multi-channel EEG epochs taken from three conditions: a no-task condition and two cognitive tasks, picking and coordination. The classification process was based on a coherence graph extracted out of the EEG record. To assign each graph into its appropriate label, we have constructed a hierarchical classifier. First, we have distinguished between the resting-state condition and the other two cognitive tasks by using a bag of node degrees. Next, to distinguish between the two cognitive tasks, we have implemented an anonymous random walk. Our classification model achieved a total accuracy value of 96.55%. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)
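
The anonymous-random-walk step mentioned in the abstract can be sketched in a few lines: walks are sampled on a graph, and each walk is anonymized by replacing nodes with the index of their first occurrence, so that only the walk's "shape" remains. The tiny electrode graph and the feature normalization below are illustrative assumptions, not the authors' EEG coherence pipeline.

```python
# Sketch of the anonymous-random-walk step only (not the full EEG classification method):
# sample random walks on a graph, anonymize them, and count walk shapes as features.
import random
from collections import Counter


def anonymize(walk):
    """Map each node to the position of its first occurrence, e.g. (a, b, a, c) -> (0, 1, 0, 2)."""
    first_seen = {}
    return tuple(first_seen.setdefault(node, len(first_seen)) for node in walk)


def anonymous_walk_features(adjacency, walk_length=4, n_walks=1000, seed=0):
    rng = random.Random(seed)
    nodes = list(adjacency)
    counts = Counter()
    for _ in range(n_walks):
        node = rng.choice(nodes)
        walk = [node]
        for _ in range(walk_length - 1):
            node = rng.choice(adjacency[node])
            walk.append(node)
        counts[anonymize(walk)] += 1
    total = sum(counts.values())
    return {shape: c / total for shape, c in counts.items()}   # normalized feature vector


if __name__ == "__main__":
    # Tiny, made-up "coherence graph" given as an adjacency list of electrode labels.
    graph = {"Fz": ["Cz", "Pz"], "Cz": ["Fz", "Pz"], "Pz": ["Fz", "Cz", "Oz"], "Oz": ["Pz"]}
    features = anonymous_walk_features(graph)
    print(sorted(features.items(), key=lambda kv: -kv[1])[:3])
```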

19 pages, 2867 KiB  
Article
Forecast of Medical Costs in Health Companies Using Models Based on Advanced Analytics
by Daniel Ricardo Sandoval Serrano, Juan Carlos Rincón, Julián Mejía-Restrepo, Edward Rolando Núñez-Valdez and Vicente García-Díaz
Algorithms 2022, 15(4), 106; https://doi.org/10.3390/a15040106 - 23 Mar 2022
Viewed by 3251
Abstract
Forecasting medical costs is crucial for planning, budgeting, and efficient decision making in the health industry. This paper introduces a proposal to forecast costs through techniques such as a standard model of long short-term memory (LSTM); and patient grouping through k-means clustering in the Keralty group, one of Colombia’s leading healthcare companies. It is important to highlight its implications for the prediction of cost time series in the health sector from a retrospective analysis of the information of services invoiced to health companies. It starts with the selection of sociodemographic variables related to the patient, such as age, gender and marital status, and it is complemented with health variables such as patient comorbidities (cohorts) and induced variables, such as service provision frequency and time elapsed since the last consultation (hereafter referred to as “recency”). Our results suggest that greater accuracy can be achieved by first clustering and then using LSTM networks. This implies that a correct segmentation of the population according to the usage of services represented in costs must be performed beforehand. Through the analysis, a cost projection from 1 to 3 months can be conducted, allowing a comparison with historical data. The reliability of the model is validated by different metrics such as RMSE and Adjusted R2. Overall, this study is intended to be useful for healthcare managers in developing a strategy for medical cost forecasting. We conclude that the use of analytical tools allows the organization to make informed decisions and to develop strategies for optimizing resources with the identified population. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)
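
The cluster-first, forecast-second structure described in the abstract can be outlined as follows. Scikit-learn's KMeans stands in for the patient segmentation step, and a naive mean-of-recent-months forecaster replaces the LSTM purely to keep the sketch short; the synthetic patient features and cost series are assumptions.

```python
# Structure of the "cluster first, then forecast per cluster" pipeline.
# A naive forecaster stands in for the per-cluster LSTM used in the paper.
import numpy as np
from sklearn.cluster import KMeans


def naive_forecast(series: np.ndarray, horizon: int = 3) -> np.ndarray:
    """Stand-in for the LSTM: repeat the mean of the last 3 observations."""
    return np.repeat(series[-3:].mean(), horizon)


rng = np.random.default_rng(42)
# Hypothetical data: per-patient features (age, recency, frequency) and 24 months of costs.
features = rng.normal(size=(200, 3))
monthly_costs = rng.gamma(shape=2.0, scale=100.0, size=(200, 24))

# Segment the population, then forecast the aggregate cost series of each segment.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

for c in range(4):
    cluster_costs = monthly_costs[clusters == c].mean(axis=0)
    print(f"cluster {c}: next 3 months ~ {naive_forecast(cluster_costs).round(1)}")
```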

18 pages, 576 KiB  
Article
Evolutionary Optimization of Spiking Neural P Systems for Remaining Useful Life Prediction
by Leonardo Lucio Custode, Hyunho Mo, Andrea Ferigo and Giovanni Iacca
Algorithms 2022, 15(3), 98; https://doi.org/10.3390/a15030098 - 19 Mar 2022
Cited by 9 | Viewed by 4544
Abstract
Remaining useful life (RUL) prediction is a key enabler for predictive maintenance. In fact, the possibility of accurately and reliably predicting the RUL of a system, based on a record of its monitoring data, can allow users to schedule maintenance interventions before faults occur. In the recent literature, several data-driven methods for RUL prediction have been proposed. However, most of them are based on traditional (connectionist) neural networks, such as convolutional neural networks, and alternative mechanisms have barely been explored. Here, we tackle the RUL prediction problem for the first time by using a membrane computing paradigm, namely that of Spiking Neural P (in short, SN P) systems. First, we show how SN P systems can be adapted to handle the RUL prediction problem. Then, we propose the use of a neuro-evolutionary algorithm to optimize the structure and parameters of the SN P systems. Our results on two datasets, namely the CMAPSS and new CMAPSS benchmarks from NASA, are fairly comparable with those obtained by much more complex deep networks, showing a reasonable compromise between performance and number of trainable parameters, which in turn correlates with memory consumption and computing time. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)
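
Spiking Neural P systems are too specialized to reproduce here, so the sketch below shows only the generic neuro-evolutionary loop the abstract refers to, applied to a plain linear RUL predictor on synthetic data. The (mu + lambda) scheme, mutation scale, and data are all assumptions and do not implement SN P systems.

```python
# Generic (mu + lambda) evolutionary loop: evolve the weights of a linear RUL
# predictor to minimize RMSE on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensor" data with a linear ground-truth remaining useful life (RUL).
X = rng.normal(size=(300, 5))
true_w = np.array([3.0, -2.0, 1.5, 0.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=300)


def rmse(w: np.ndarray) -> float:
    return float(np.sqrt(np.mean((X @ w - y) ** 2)))


mu, lam, generations = 10, 40, 60
population = [rng.normal(size=5) for _ in range(mu)]

for _ in range(generations):
    parent_idx = rng.integers(0, mu, size=lam)                                  # parent selection
    offspring = [population[i] + rng.normal(scale=0.3, size=5) for i in parent_idx]  # mutation
    population = sorted(population + offspring, key=rmse)[:mu]                  # (mu + lambda) survival

print("best weights:", np.round(population[0], 2))
print("best RMSE:", round(rmse(population[0]), 3))
```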

40 pages, 8026 KiB  
Article
A Temporal Case-Based Reasoning Platform Relying on a Fuzzy Vector Spaces Object-Oriented Model and a Method to Design Knowledge Bases and Decision Support Systems in Multiple Domains
by Joël Colloc, Relwendé Aristide Yameogo, Peter Summons, Lilian Loubet, Jean-Bernard Cavelier and Paul Bridier
Algorithms 2022, 15(2), 66; https://doi.org/10.3390/a15020066 - 19 Feb 2022
Cited by 7 | Viewed by 3895
Abstract
Knowledge bases in complex domains must take into account many attributes describing numerous objects that are themselves components of complex objects. Temporal case-based reasoning (TCBR) requires comparing the structural evolution of component objects and their states (attribute values) at different levels of granularity. This paper provides some significant contributions to computer science. It extends a fuzzy vector space object-oriented model and method (FVSOOMM) to present a new platform and a method guideline capable of designing objects and attributes that represent timepoint knowledge objects. It shows how temporal case-based reasoning can use distances between temporal fuzzy vector functions to compare these knowledge objects’ evolution. It describes examples of interfaces that have been implemented on this new platform. These include an expert’s interface that describes a knowledge class diagram; a practitioner’s interface that instantiates domain objects and their attribute constraints; and an end-user interface to input attribute values of the real cases stored in a domain case database. This paper illustrates resultant knowledge bases in different domains, with examples of pulmonary embolism diagnosis in medicine and decision making in French municipal territorial recomposition. The paper concludes with the current limitations of the proposed model, its future perspectives and possible platform enhancements. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)
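
One ingredient of this abstract, comparing cases through distances between temporal fuzzy membership trajectories, can be illustrated as below. The trapezoidal fuzzy set, the attribute, the timepoints, and the averaged absolute-difference metric are assumptions made for the example and do not reproduce the FVSOOMM model.

```python
# Sketch: compare two stored cases by the distance between their fuzzy membership
# trajectories over time (illustrative; not the paper's FVSOOMM platform).
import numpy as np


def trapezoid(x: np.ndarray, a: float, b: float, c: float, d: float) -> np.ndarray:
    """Trapezoidal fuzzy membership function evaluated elementwise."""
    rising = np.clip((x - a) / (b - a), 0.0, 1.0)
    falling = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rising, falling)


def temporal_distance(values_1, values_2, timepoints, fuzzy_set=(60, 80, 100, 120)):
    """Time-averaged absolute difference of memberships, sampled at shared timepoints."""
    m1 = trapezoid(np.asarray(values_1, dtype=float), *fuzzy_set)
    m2 = trapezoid(np.asarray(values_2, dtype=float), *fuzzy_set)
    return float(np.trapz(np.abs(m1 - m2), timepoints) / (timepoints[-1] - timepoints[0]))


if __name__ == "__main__":
    t = np.array([0.0, 6.0, 12.0, 24.0])          # hours since admission (hypothetical)
    heart_rate_case_a = [70, 95, 110, 125]        # attribute values of two stored cases
    heart_rate_case_b = [72, 78, 85, 90]
    print("distance:", round(temporal_distance(heart_rate_case_a, heart_rate_case_b, t), 3))
```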
