Special Issue "Algorithms for Decision Making"

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: 30 November 2018

Special Issue Editor

Guest Editor
Prof. Dr. Thomas Hanne

Institute for Information Systems, University of Applied Sciences and Arts Northwestern Switzerland, Riggenbachstr. 16, CH-4600 Olten, Switzerland
Interests: computational intelligence; optimization; simulation; multicriteria decision analysis; logistics and supply chain management; evolutionary computation

Special Issue Information

Dear Colleagues,

The use of advanced methods in decision making has not only improved a plethora of application areas during the last few decades; it has also advanced theoretical research and the development of methods in various areas, such as multicriteria decision analysis, machine learning, fuzzy systems, neural networks, nature-inspired methods, and other metaheuristics. One can say that a new wave of artificial intelligence is largely inspired by applications related to decision making, such as recommender systems, smart home applications, process automation, intelligent manufacturing, advanced solutions for logistics and supply chain management, data mining, text mining, and audio, image, and video analysis.

The aim of this Special Issue is to present recent applications from such innovative areas, as well as recent developments related to the theory and methodology in decision making.

The topics include, but are not limited to, the following areas:

Keywords:

  • Complex and multicriteria decision analysis
  • Decision support systems
  • Computational intelligence in decision support
  • Machine learning in decision making
  • Optimization in decision making
  • Soft computing for decision making
  • Applications
  • Case studies

Prof. Dr. Thomas Hanne
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 850 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (11 papers)


Research

Open Access Article: Towards the Verbal Decision Analysis Paradigm for Implementable Prioritization of Software Requirements
Algorithms 2018, 11(11), 176; https://doi.org/10.3390/a11110176
Received: 18 September 2018 / Revised: 22 October 2018 / Accepted: 30 October 2018 / Published: 3 November 2018
Abstract
The activity of prioritizing software requirements should be performed as efficiently as possible. Selecting the most stable requirements for the most important customers of a development company can be a positive factor, considering that the available resources do not always allow the implementation of all requirements. There are many quantitative methods for the prioritization of software releases in the field of search-based software engineering (SBSE). However, we show that it is also possible to solve this type of problem with qualitative verbal decision analysis (VDA) methods. Specifically, we use the ZAPROS III-i method to prioritize requirements, taking into account the opinion of the decision-maker, who participates in the process. The results obtained with VDA-structured methods were quite satisfactory when compared to SBSE methods. A comparison between the quantitative and qualitative results is presented and discussed, and the results were reviewed and corroborated using performance metrics.
(This article belongs to the Special Issue Algorithms for Decision Making)
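The abstract above does not detail ZAPROS III-i, but the core idea of turning qualitative pairwise preferences into an ordinal ranking can be sketched with a simple Copeland-style win count. This is an illustrative stand-in, not the ZAPROS III-i procedure; the requirement names and the preference rule below are hypothetical.

```python
def copeland_ranking(items, prefer):
    """Rank items by how many pairwise comparisons each one wins.

    `prefer(a, b)` returns True when the decision-maker prefers a over b;
    it stands in for the qualitative comparisons elicited in VDA.
    """
    wins = {item: 0 for item in items}
    for a in items:
        for b in items:
            if a != b and prefer(a, b):
                wins[a] += 1
    return sorted(items, key=lambda item: wins[item], reverse=True)

# Hypothetical requirements with (business value, volatility) assessments.
reqs = {"login": (9, 1), "reports": (5, 3), "theming": (2, 5)}
# Prefer higher value; break ties on lower volatility.
prefer = lambda a, b: (reqs[a][0], -reqs[a][1]) > (reqs[b][0], -reqs[b][1])
ranking = copeland_ranking(list(reqs), prefer)
```

The point of the sketch is only that a purely ordinal, comparison-based procedure can produce an implementable priority order without numeric scores.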

Open Access Article: Application of Data Science Technology on Research of Circulatory System Disease Prediction Based on a Prospective Cohort
Algorithms 2018, 11(10), 162; https://doi.org/10.3390/a11100162
Received: 11 September 2018 / Revised: 8 October 2018 / Accepted: 11 October 2018 / Published: 20 October 2018
Abstract
Chronic diseases, represented by circulatory diseases, have gradually become the main types of diseases affecting the health of our population. Establishing a circulatory system disease prediction model to predict the occurrence of diseases and control them is therefore of great significance to population health. This article draws on prospective population cohort data on chronic diseases in China. Building on existing medical cohort studies, the Kaplan–Meier method was used for feature selection, the Cox proportional hazards model was applied as the traditional medical analysis model, and support vector machine (SVM) methods from machine learning were used to establish circulatory system disease prediction models. The paper also introduces the proportion of explained variation (PEV) and a shrinkage factor to improve the Cox proportional hazards model, and uses the particle swarm optimization (PSO) algorithm to optimize the parameters of the SVM model. Finally, the prediction models are verified experimentally using model training time, accuracy (ACC), the area under the receiver operating characteristic (ROC) curve (AUC), and other forecasting indicators. The experimental results show that the PSO-SVM-CSDPC disease prediction model and the S-Cox-CSDPC circulatory system disease prediction model offer fast model solving speed, accurate prediction results, and strong generalization ability, which is helpful for the intervention and control of chronic diseases.
(This article belongs to the Special Issue Algorithms for Decision Making)
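The PSO-for-SVM-parameters step described above can be sketched independently of any SVM library: PSO simply minimizes a black-box error function over (C, gamma). Below is a minimal pure-Python PSO; the surrogate error surface, with a hypothetical optimum at C = 10 and gamma = 0.1, stands in for the cross-validation error the paper would compute.

```python
import math
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box `bounds` with a basic global-best PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp to the search box after the velocity update.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical stand-in for SVM cross-validation error over (C, gamma),
# smooth in log space with its minimum at C = 10, gamma = 0.1.
def surrogate_error(p):
    C, gamma = p
    return (math.log10(C) - 1.0) ** 2 + (math.log10(gamma) + 1.0) ** 2

best, err = pso(surrogate_error, [(0.1, 100.0), (0.001, 1.0)])
```

In the paper's setting, `surrogate_error` would be replaced by the SVM's validation error, which is the only point of contact PSO needs with the model.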

Open Access Article: Incremental Learning for Classification of Unstructured Data Using Extreme Learning Machine
Algorithms 2018, 11(10), 158; https://doi.org/10.3390/a11100158
Received: 27 September 2018 / Revised: 13 October 2018 / Accepted: 16 October 2018 / Published: 17 October 2018
Abstract
Unstructured data are irregular information with no predefined data model. Streaming data, which arrive continuously over time, are unstructured, and classifying them is a tedious task because they lack class labels and accumulate over time. As the data keep growing, it becomes difficult to train and create a model from scratch each time. Incremental learning, a self-adaptive algorithm, uses the previously learned model information, then learns and accommodates new information from newly arrived data to provide an updated model, which avoids retraining. The incrementally learned knowledge helps to classify the unstructured data. In this paper, we propose CUIL (Classification of Unstructured data using Incremental Learning), a framework which clusters the metadata, assigns a label to each cluster, and then incrementally creates a model using the Extreme Learning Machine (ELM), a feed-forward neural network, for each batch of data that arrives. The proposed framework trains the batches separately, significantly reducing memory resources and training time, and is tested with metadata created for standard image datasets such as MNIST, STL-10, CIFAR-10, Caltech101, and Caltech256. Based on the tabulated results, the proposed work shows greater accuracy and efficiency.
(This article belongs to the Special Issue Algorithms for Decision Making)
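The batch-wise flow that CUIL relies on — keep one model and update it per arriving batch instead of retraining from scratch — can be illustrated with a deliberately simple incremental learner. The nearest-centroid classifier below replaces the paper's ELM purely for illustration (an ELM update would need matrix algebra); the update rule and data are hypothetical.

```python
class IncrementalCentroidClassifier:
    """Toy incremental learner: per-class running sums, so each new batch
    only updates counters and never requires revisiting old batches."""

    def __init__(self):
        self.sums = {}    # label -> per-dimension feature sums
        self.counts = {}  # label -> number of samples seen

    def partial_fit(self, X, y):
        """Absorb one batch; previously learned state is kept, not rebuilt."""
        for x, label in zip(X, y):
            if label not in self.sums:
                self.sums[label] = [0.0] * len(x)
                self.counts[label] = 0
            self.counts[label] += 1
            for d, v in enumerate(x):
                self.sums[label][d] += v

    def predict(self, x):
        def sq_dist(label):
            centroid = [s / self.counts[label] for s in self.sums[label]]
            return sum((a - b) ** 2 for a, b in zip(x, centroid))
        return min(self.sums, key=sq_dist)

clf = IncrementalCentroidClassifier()
clf.partial_fit([[0.0, 0.1], [0.2, -0.1]], ["a", "a"])  # batch 1
clf.partial_fit([[5.0, 5.1], [4.9, 5.2]], ["b", "b"])   # batch 2, no retraining
```

The design point mirrored from the abstract is that each `partial_fit` call touches only the new batch, so memory and training time stay bounded as the stream grows.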

Open Access Article: Two Hesitant Multiplicative Decision-Making Algorithms and Their Application to Fog-Haze Factor Assessment Problem
Algorithms 2018, 11(10), 154; https://doi.org/10.3390/a11100154
Received: 29 August 2018 / Revised: 28 September 2018 / Accepted: 2 October 2018 / Published: 10 October 2018
Cited by 1
Abstract
The hesitant multiplicative preference relation (HMPR) is a useful tool for problems in which experts use Saaty's 1–9 scale to express their preference information over paired comparisons of alternatives. It is known that a lack of acceptable consistency easily leads to inconsistent conclusions; therefore, consistency improvement processes and the derivation of a reliable priority weight vector for alternatives are two significant and challenging issues in decision-making problems with hesitant multiplicative information. In this paper, some new concepts are first introduced, including the HMPR, the consistent HMPR, and the consistency index of an HMPR. Then, based on a logarithmic least squares model and a linear optimization model, two novel automatic iterative algorithms are proposed to enhance the consistency of an HMPR and to generate its priority weights, and both are proved to be convergent. Finally, the proposed algorithms are applied to the factors affecting the selection of fog-haze weather. The comparative analysis shows that the decision-making process in our algorithms is more straightforward and efficient.
(This article belongs to the Special Issue Algorithms for Decision Making)
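For a crisp multiplicative preference relation, the logarithmic least squares model mentioned in the abstract has a well-known closed form: each priority weight is the normalized geometric mean of a row. The sketch below shows only that crisp special case; extending it to hesitant entries, and the consistency-improving iteration, follow the paper's algorithms.

```python
import math

def llsm_weights(A):
    """Priority weights of a multiplicative preference relation A,
    where a_ij estimates w_i / w_j (Saaty-style judgments).
    Logarithmic least squares solution = normalized row geometric means."""
    n = len(A)
    gm = [math.prod(row) ** (1.0 / n) for row in A]
    total = sum(gm)
    return [g / total for g in gm]

# A perfectly consistent relation built from known weights is recovered exactly.
w_true = [0.5, 0.3, 0.2]
A = [[wi / wj for wj in w_true] for wi in w_true]
w = llsm_weights(A)
```

For an inconsistent relation, the gap between `a_ij` and `w[i] / w[j]` is exactly what a consistency index measures and what the paper's iterative algorithms reduce.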
Open Access Article: Multiple Attribute Decision-Making Method Using Linguistic Cubic Hesitant Variables
Algorithms 2018, 11(9), 135; https://doi.org/10.3390/a11090135
Received: 9 August 2018 / Revised: 28 August 2018 / Accepted: 5 September 2018 / Published: 7 September 2018
Cited by 1
Abstract
Linguistic decision making (DM) is an important research topic in DM theory and methods, since using linguistic terms to assess the objective world fits human thinking and expression habits well. However, there is both uncertainty and hesitancy in the linguistic arguments of human thinking and judgments of an evaluated object. Such hybrid information, involving both uncertain linguistic arguments and hesitant linguistic arguments, cannot be expressed through the various existing linguistic concepts. To express it reasonably, this study presents a linguistic cubic hesitant variable (LCHV) based on the concepts of a linguistic cubic variable and a hesitant fuzzy set, together with its operational relations and a linguistic score function for ranking LCHVs. An objective extension method based on the least common multiple number/cardinality of LCHVs and weighted aggregation operators of LCHVs are then proposed to aggregate LCHV information reasonably, because existing aggregation operators cannot aggregate LCHVs whose numbers of hesitant components may differ. Next, a multi-attribute decision-making (MADM) approach is proposed based on the weighted arithmetic averaging (WAA) and weighted geometric averaging (WGA) operators of LCHVs. Lastly, an illustrative example indicates the applicability of the proposed approaches.
(This article belongs to the Special Issue Algorithms for Decision Making)
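The least-common-multiple extension idea in the abstract can be sketched directly: repeat each hesitant component set until all sets share the LCM cardinality, then aggregate element-wise. This is a minimal illustration of the extension plus a weighted arithmetic average on plain numeric hesitant sets; the paper's LCHV structure also carries uncertain linguistic bounds, which are omitted here.

```python
from math import lcm

def extend_to_lcm(hesitant_sets):
    """Repeat each set's (sorted) elements so all sets reach the LCM length,
    enabling element-wise aggregation across sets of different cardinality."""
    target = lcm(*(len(h) for h in hesitant_sets))
    return [[v for v in sorted(h) for _ in range(target // len(h))]
            for h in hesitant_sets]

def waa(hesitant_sets, weights):
    """Element-wise weighted arithmetic average after LCM extension."""
    ext = extend_to_lcm(hesitant_sets)
    return [sum(w * h[k] for w, h in zip(weights, ext))
            for k in range(len(ext[0]))]

# Two hesitant sets of cardinality 2 and 3 extend to the LCM cardinality 6.
agg = waa([[0.2, 0.4], [0.1, 0.5, 0.9]], [0.5, 0.5])
score = sum(agg) / len(agg)  # simple averaging score for ranking
```

The extension step is what makes `waa` well-defined when the numbers of hesitant components differ, which is exactly the obstacle the abstract raises for existing operators.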
Open Access Article: Probabilistic Interval-Valued Hesitant Fuzzy Information Aggregation Operators and Their Application to Multi-Attribute Decision Making
Algorithms 2018, 11(8), 120; https://doi.org/10.3390/a11080120
Received: 9 July 2018 / Revised: 27 July 2018 / Accepted: 28 July 2018 / Published: 6 August 2018
Abstract
Based on probabilistic interval-valued hesitant fuzzy information aggregation operators, this paper investigates a novel multi-attribute group decision making (MAGDM) model to address the serious loss of information in a hesitant fuzzy information environment. Firstly, the definition of the probabilistic interval-valued hesitant fuzzy set is introduced, and then, using the Archimedean norm, some new probabilistic interval-valued hesitant fuzzy operations are defined. Secondly, based on these operations, the generalized probabilistic interval-valued hesitant fuzzy ordered weighted averaging (GPIVHFOWA) operator and the generalized probabilistic interval-valued hesitant fuzzy ordered weighted geometric (GPIVHFOWG) operator are proposed, and their desirable properties are discussed. We further study their common forms and analyze the relationships among the proposed operators. Finally, a new probabilistic interval-valued hesitant fuzzy MAGDM model is constructed, and the feasibility and effectiveness of the proposed model are verified with an example of supplier selection.
(This article belongs to the Special Issue Algorithms for Decision Making)
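One building block behind probabilistic interval-valued hesitant fuzzy information can be shown concretely: collapsing a probabilistic element (intervals with occurrence probabilities) into an expected interval, then comparing elements by midpoint. This is an illustrative reduction only, not the paper's GPIVHFOWA/GPIVHFOWG operators, and the example data are hypothetical.

```python
def expected_interval(element):
    """element: list of ([lower, upper], probability) pairs whose probabilities
    sum to 1. Returns the probability-weighted expected interval."""
    lo = sum(p * iv[0] for iv, p in element)
    hi = sum(p * iv[1] for iv, p in element)
    return (lo, hi)

def midpoint(interval):
    return (interval[0] + interval[1]) / 2

# Two hypothetical suppliers assessed with probabilistic interval-valued judgments.
a = [([0.6, 0.8], 0.7), ([0.3, 0.5], 0.3)]
b = [([0.4, 0.6], 0.5), ([0.5, 0.7], 0.5)]
ea, eb = expected_interval(a), expected_interval(b)
better = "a" if midpoint(ea) > midpoint(eb) else "b"
```

Keeping the probabilities attached to each interval until aggregation is what distinguishes this representation from plain interval-valued hesitant sets, which is the information loss the abstract targets.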
Open Access Article: Improving Monarch Butterfly Optimization Algorithm with Self-Adaptive Population
Algorithms 2018, 11(5), 71; https://doi.org/10.3390/a11050071
Received: 21 March 2018 / Revised: 27 April 2018 / Accepted: 27 April 2018 / Published: 14 May 2018
Abstract
Inspired by the migration behavior of monarch butterflies in nature, Wang et al. proposed a novel, promising, intelligent swarm-based algorithm, monarch butterfly optimization (MBO), for tackling global optimization problems. In the basic MBO algorithm, the butterflies in land 1 (subpopulation 1) and land 2 (subpopulation 2) are determined according to the parameter p, which is unchanged during the entire optimization process. In the present work, a self-adaptive strategy is introduced to dynamically adjust the butterflies in lands 1 and 2. Accordingly, the sizes of subpopulations 1 and 2 change dynamically and linearly as the algorithm evolves. With this self-adaptive strategy, an improved MBO algorithm, called monarch butterfly optimization with self-adaptive population (SPMBO), is put forward. In SPMBO, only newly generated individuals that are better than their predecessors are accepted as new individuals for the next generation in the migration operation. Finally, the proposed SPMBO algorithm is benchmarked on thirteen standard test functions with dimensions of 30 and 60. The experimental results indicate that the search ability of the proposed SPMBO approach significantly outperforms the basic MBO algorithm on most test functions, which also implies that the self-adaptive strategy is an effective way to improve the performance of the basic MBO algorithm.
(This article belongs to the Special Issue Algorithms for Decision Making)
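The self-adaptive population idea is easy to make concrete: instead of a fixed ratio p (5/12 in the basic MBO), the share of butterflies assigned to land 1 changes linearly with the generation counter. The end ratio and the exact schedule below are illustrative assumptions, not the paper's settings.

```python
def split_population(n, t, t_max, p_start=5/12, p_end=0.6):
    """Sizes of subpopulations 1 and 2 at generation t (0..t_max),
    with the land-1 share moving linearly from p_start to p_end."""
    p = p_start + (p_end - p_start) * t / t_max
    n1 = max(1, min(n - 1, round(n * p)))  # keep both lands non-empty
    return n1, n - n1

# Land-1 share grows linearly as the run progresses.
sizes = [split_population(40, t, 100) for t in (0, 50, 100)]
```

In the basic MBO, the same function would be called with a constant p; making p a function of t is the entire change the self-adaptive strategy introduces at the population-partition level.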

Open Access Article: The Supplier Selection of the Marine Rescue Equipment Based on the Analytic Hierarchy Process (AHP)-Limited Diversity Factors Method
Algorithms 2018, 11(5), 63; https://doi.org/10.3390/a11050063
Received: 23 February 2018 / Revised: 18 April 2018 / Accepted: 3 May 2018 / Published: 4 May 2018
Cited by 1
Abstract
Supplier selection is an important decision-making step in bidding activity. When the overall scores of several suppliers are similar, it is hard to obtain an accurate ranking of these suppliers. Applying the Diversity Factors Method (DFM) may lead to an over-correction of weights, which degrades the ability of the indexes to reflect their importance. A Limited Diversity Factors Method (LDFM) based on entropy is presented in this paper to adjust the weights, relieving the over-correction in DFM and improving the discriminating capability of the indexes in supplier selection. An example of salvage ship bidding demonstrates the advantages of the LDFM, in which the ranking of the suppliers' overall scores is more accurate.
(This article belongs to the Special Issue Algorithms for Decision Making)
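The entropy foundation of the LDFM can be sketched with the classical entropy weight method: indexes whose values barely vary across suppliers carry little discriminating information and get low weight. The cap ("limiting") that LDFM adds to avoid over-correction is not reproduced here; this is only the plain entropy weighting step, with hypothetical scores.

```python
import math

def entropy_weights(matrix):
    """matrix[i][j] = positive score of supplier i on index j.
    Returns entropy-based weights over the indexes (columns)."""
    m = len(matrix)
    diversities = []
    for col in zip(*matrix):
        total = sum(col)
        p = [v / total for v in col]
        entropy = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        diversities.append(1 - entropy)  # low entropy => high diversity weight
    s = sum(diversities)
    return [d / s for d in diversities]

# Index 1 is identical for all suppliers; index 2 discriminates strongly.
w = entropy_weights([[7, 1], [7, 5], [7, 9]])
```

An unbounded version of this adjustment is what can over-correct when scores are nearly tied, which is the behavior the limited variant is designed to rein in.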

Open Access Article: Vessel Traffic Risk Assessment Based on Uncertainty Analysis in the Risk Matrix
Algorithms 2018, 11(5), 60; https://doi.org/10.3390/a11050060
Received: 5 April 2018 / Revised: 28 April 2018 / Accepted: 2 May 2018 / Published: 3 May 2018
Abstract
Uncertainty analysis is considered a necessary step in the process of vessel traffic risk assessment. The purpose of this study is to propose an uncertainty analysis algorithm that can be used to investigate the reliability of risk assessment results. Probability and possibility distributions are used to quantify the two types of uncertainty identified in the risk assessment process. In addition, the algorithm for selecting an appropriate time window is chosen by considering the uncertainty of vessel traffic accident occurrence and the variation trend of vessel traffic risk caused by maritime rules becoming operative. Vessel traffic accident data from the United Kingdom's Marine Accident Investigation Branch are used for the case study. Based on a comparison with the common method of estimating vessel traffic risk and with an uncertainty quantification algorithm that does not consider time window selection, the availability of the proposed algorithms is verified; they can provide guidance for vessel traffic risk management.
(This article belongs to the Special Issue Algorithms for Decision Making)
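A risk matrix maps frequency and severity categories to a risk level; propagating an uncertain frequency through it yields a set of possible levels rather than a single one, which is the flavor of uncertainty the abstract addresses. The 5x5 banding below is a common but illustrative choice, not the paper's calibration.

```python
def risk_level(frequency, severity):
    """Map 1-5 frequency and severity categories to a risk band."""
    score = frequency * severity
    if score >= 15:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

def risk_levels_under_uncertainty(freq_lo, freq_hi, severity):
    """All risk bands reachable when only an interval of frequencies is known."""
    return sorted({risk_level(f, severity) for f in range(freq_lo, freq_hi + 1)})

# An uncertain frequency of 2-4 at severity 4 straddles two bands.
levels = risk_levels_under_uncertainty(2, 4, 4)
```

When the resulting set contains more than one band, the assessment result is sensitive to the input uncertainty, which is exactly what a reliability analysis of the risk matrix has to flag.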

Open Access Article: Decision-Making Approach Based on Neutrosophic Rough Information
Algorithms 2018, 11(5), 59; https://doi.org/10.3390/a11050059
Received: 13 February 2018 / Revised: 20 April 2018 / Accepted: 1 May 2018 / Published: 3 May 2018
Cited by 3
Abstract
Rough set theory and neutrosophic set theory are mathematical models for dealing with incomplete and vague information. The two theories can be combined into a framework for modeling and processing incomplete information in information systems. The neutrosophic rough set hybrid model thus gives the system more precision, flexibility, and compatibility than the classic and fuzzy models. In this research study, we develop neutrosophic rough digraphs based on the neutrosophic rough hybrid model. Moreover, we discuss regular neutrosophic rough digraphs, and we solve decision-making problems using our proposed hybrid model. Finally, we give a comparative analysis of two hybrid models, namely, neutrosophic rough digraphs and rough neutrosophic digraphs.
(This article belongs to the Special Issue Algorithms for Decision Making)
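The neutrosophic side of the hybrid model works with truth/indeterminacy/falsity triples. The standard single-valued neutrosophic union, intersection, and a commonly used score function can be sketched directly; the rough-approximation and digraph layers of the paper are not reproduced here.

```python
def svn_union(a, b):
    """(T, I, F) triples in [0, 1]: union takes the max truth and the
    min indeterminacy and falsity."""
    return (max(a[0], b[0]), min(a[1], b[1]), min(a[2], b[2]))

def svn_intersection(a, b):
    """Dual of the union: min truth, max indeterminacy and falsity."""
    return (min(a[0], b[0]), max(a[1], b[1]), max(a[2], b[2]))

def svn_score(a):
    """A commonly used score function: (2 + T - I - F) / 3, higher is better."""
    return (2 + a[0] - a[1] - a[2]) / 3

x, y = (0.6, 0.3, 0.4), (0.5, 0.2, 0.6)
u = svn_union(x, y)
```

Allowing the independent indeterminacy component I, rather than forcing I = 1 - T - F, is what lets the neutrosophic model express vagueness that classic and fuzzy sets cannot.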

Open Access Article: Failure Mode and Effects Analysis Considering Consensus and Preferences Interdependence
Algorithms 2018, 11(4), 34; https://doi.org/10.3390/a11040034
Received: 4 February 2018 / Revised: 15 March 2018 / Accepted: 15 March 2018 / Published: 21 March 2018
Cited by 1
Abstract
Failure mode and effects analysis (FMEA) is an effective and powerful risk evaluation technique in the field of risk management, and it has been extensively used in various industries for identifying and reducing known and potential failure modes in systems, processes, products, and services. Traditionally, a risk priority number is applied to capture the ranking order of failure modes in FMEA. However, this method has several drawbacks and deficiencies that need to be addressed to enhance its applicability; for instance, it ignores the consensus-reaching process and the correlations among the experts' preferences. Therefore, the aim of this study was to present a new risk priority method to determine the risk priority of failure modes in an interval-valued Pythagorean fuzzy environment, combining an extended geometric Bonferroni mean operator, a consensus-reaching process, and an improved Multi-Attributive Border Approximation area Comparison (MABAC) approach. Finally, a case study concerning product development is described to demonstrate the feasibility and effectiveness of the proposed method. The results show that the risk priority of failure modes obtained by the proposed method is more reasonable in practical application than that of other FMEA methods.
(This article belongs to the Special Issue Algorithms for Decision Making)
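A drawback of the traditional risk priority number that motivates work like the above is easy to demonstrate: RPN = S x O x D collapses very different (severity, occurrence, detection) profiles into identical scores. A minimal sketch, with hypothetical failure modes:

```python
def rpn(severity, occurrence, detection):
    """Traditional FMEA risk priority number on 1-10 scales."""
    return severity * occurrence * detection

modes = {
    "seal leak":     (9, 2, 1),  # severe, but rare and easy to detect
    "cosmetic flaw": (1, 2, 9),  # mild and rare, but hard to detect
    "sensor drift":  (5, 4, 3),
}
scores = {name: rpn(*sod) for name, sod in modes.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

The first two modes receive identical RPNs despite clearly different risk profiles; resolving such ties (and incorporating expert consensus and preference interdependence) is what the proposed interval-valued Pythagorean fuzzy method addresses.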
