Special Issue "Algorithms for Decision Making"

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (30 November 2018).

Special Issue Editor

Prof. Dr. Thomas Hanne
Guest Editor
Institute for Information Systems, University of Applied Sciences and Arts Northwestern Switzerland, Riggenbachstr. 16, CH-4600 Olten, Switzerland
Interests: computational intelligence; optimization; simulation; multicriteria decision analysis; logistics and supply chain management; evolutionary computation

Special Issue Information

Dear Colleagues,

The use of advanced methods in decision making has not only improved a plethora of application areas during the last few decades, it has also advanced theoretical research and the development of methods in various areas, such as multicriteria decision analysis, machine learning, fuzzy systems, neural networks, nature-inspired methods, and other metaheuristics. One can say that a new wave of artificial intelligence is largely inspired by applications related to decision making, such as recommender systems, smart home applications, process automation, intelligent manufacturing, advanced solutions for logistics and supply chain management, data mining, text mining, and audio, image, and video analysis.

The aim of this Special Issue is to present recent applications from such innovative areas, as well as recent developments related to the theory and methodology in decision making.

The topics include, but are not limited to, the following areas:

Keywords:

  • Complex and multicriteria decision analysis
  • Decision support systems
  • Computational intelligence in decision support
  • Machine learning in decision making
  • Optimization in decision making
  • Soft computing for decision making
  • Applications
  • Case studies

Prof. Dr. Thomas Hanne
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (20 papers)


Research

Open Access Article
An Improved Genetic Algorithm for Emergency Decision Making under Resource Constraints Based on Prospect Theory
Algorithms 2019, 12(2), 43; https://doi.org/10.3390/a12020043 - 18 Feb 2019
Cited by 2
Abstract
The study of emergency decision making (EDM) is helpful to reduce the difficulty of decision making and improve the efficiency of decision makers (DMs). The purpose of this paper is to propose an innovative genetic algorithm for emergency decision making under resource constraints. Firstly, this paper analyzes the emergency situation under resource constraints; then, according to prospect theory (PT), we propose an improved value measurement function and an emergency loss levels weighting algorithm. Secondly, we assign weights to all emergency locations using the best-worst method (BWM). Then, an improved genetic algorithm (GA) based on PT is established to solve the problem of emergency resource allocation among multiple emergency locations under resource constraints. Finally, the analysis of an example shows that the algorithm can shorten the decision-making time and provide a better decision scheme, which has practical significance.
(This article belongs to the Special Issue Algorithms for Decision Making)
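
The value measurement side of such an approach builds on prospect theory's S-shaped value function. As a point of reference, here is a minimal sketch of the classical Tversky-Kahneman form (the parameter values alpha = beta = 0.88 and lam = 2.25 are the classic estimates from the literature, not the paper's improved calibration):

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Classical prospect-theory value of an outcome x measured against a
    reference point of 0: concave for gains, convex and steeper for losses
    (lam > 1 encodes loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta
```

Loss aversion means a loss outweighs an equal gain: pt_value(10) is about 7.59, while pt_value(-10) is about -17.07.
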

Open Access Article
A Hybrid Adaptive Large Neighborhood Heuristic for a Real-Life Dial-a-Ride Problem
Algorithms 2019, 12(2), 39; https://doi.org/10.3390/a12020039 - 16 Feb 2019
Abstract
The transportation of elderly and impaired people is commonly solved as a Dial-A-Ride Problem (DARP). The DARP aims to design pick-up and delivery vehicle routing schedules. Its main objective is to accommodate as many users as possible with a minimum operation cost. It adds realistic precedence and transit time constraints on the pairing of vehicles and customers. This paper tackles the DARP with time windows (DARPTW) from a new and innovative angle as it combines hybridization techniques with an adaptive large neighborhood search heuristic algorithm. The main objective is to improve the overall real-life performance of vehicle routing operations. Real-life data are refined and fed to a hybrid adaptive large neighborhood search (Hybrid-ALNS) algorithm which provides a near-optimal routing solution. The computational results on real-life instances, in the Canadian city of Vancouver and its region, and DARPTW benchmark instances show the potential improvements achieved by the proposed heuristic and its adaptability.
(This article belongs to the Special Issue Algorithms for Decision Making)
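
The adaptive layer of an ALNS heuristic can be illustrated independently of the routing details. Below is a generic sketch (class and parameter names are ours, not the paper's) of the roulette-wheel operator selection and segment-wise weight update that make the neighborhood search "adaptive":

```python
import random

class AdaptiveSelector:
    """Roulette-wheel selection of destroy/repair operators with periodic
    weight updates: operators that recently produced good solutions are
    picked more often in the next segment."""

    def __init__(self, operators, reaction=0.5):
        self.operators = list(operators)
        self.weights = {op: 1.0 for op in self.operators}
        self.scores = {op: 0.0 for op in self.operators}
        self.uses = {op: 0 for op in self.operators}
        self.reaction = reaction  # how fast weights track recent scores

    def pick(self, rng=random):
        total = sum(self.weights.values())
        r = rng.uniform(0, total)
        for op in self.operators:
            r -= self.weights[op]
            if r <= 0:
                return op
        return self.operators[-1]

    def reward(self, op, score):
        # e.g. score 3 for a new best, 2 for an improvement, 1 if accepted
        self.scores[op] += score
        self.uses[op] += 1

    def update(self):
        # Blend the historical weight with the segment's average score.
        for op in self.operators:
            if self.uses[op]:
                avg = self.scores[op] / self.uses[op]
                self.weights[op] = ((1 - self.reaction) * self.weights[op]
                                    + self.reaction * avg)
            self.scores[op] = 0.0
            self.uses[op] = 0
```

After a segment in which one operator repeatedly improved the solution, `update()` raises its weight, so `pick()` selects it more frequently.
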

Open Access Article
A Distributed Execution Pipeline for Clustering Trajectories Based on a Fuzzy Similarity Relation
Algorithms 2019, 12(2), 29; https://doi.org/10.3390/a12020029 - 22 Jan 2019
Abstract
The proliferation of indoor and outdoor tracking devices has led to a vast amount of spatial data. Each object can be described by several trajectories that, once analysed, can yield significant knowledge. In particular, pattern analysis by clustering generic trajectories can give insight into objects sharing the same patterns. Still, sequential clustering approaches fail to handle large volumes of data, hence the need for distributed systems that can infer knowledge within a reasonable time. In this paper, we detail an efficient, scalable and distributed execution pipeline for clustering raw trajectories. The clustering is achieved via a fuzzy similarity relation obtained by the transitive closure of a proximity relation. Moreover, the pipeline is integrated in Spark, implemented in Scala, and leverages the Core and GraphX libraries, making use of Resilient Distributed Datasets (RDDs) and graph processing. Furthermore, a new simple, but very efficient, partitioning logic has been deployed in Spark and integrated into the execution process. The objective of this logic is to distribute the load equally among all executors by considering the complexity of the data; resolving the load-balancing issue has reduced the execution time considerably. The performance of the whole distributed process has been evaluated on the Geolife project's GPS trajectory dataset.
(This article belongs to the Special Issue Algorithms for Decision Making)
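
The clustering step can be reproduced in miniature without Spark. The sketch below (pure Python, with our own naming) computes the max-min transitive closure of a proximity matrix and extracts clusters from an alpha-cut of the resulting fuzzy similarity relation:

```python
def max_min_closure(R):
    """Transitive closure of a fuzzy proximity relation R (a square matrix
    of similarities in [0, 1]) under max-min composition. Iterates
    R <- max(R, R o R) until a fixpoint; the result is a fuzzy similarity
    relation whose alpha-cuts induce hard clusters."""
    n = len(R)
    R = [row[:] for row in R]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                comp = max(min(R[i][k], R[k][j]) for k in range(n))
                if comp > R[i][j]:
                    R[i][j] = comp
                    changed = True
    return R

def alpha_cut_clusters(R, alpha):
    """Group indices whose closed similarity is at least alpha."""
    n, seen, clusters = len(R), set(), []
    for i in range(n):
        if i in seen:
            continue
        group = {j for j in range(n) if R[i][j] >= alpha}
        seen |= group
        clusters.append(sorted(group))
    return clusters
```

Lowering alpha merges clusters: on a three-trajectory proximity matrix, an alpha of 0.7 may keep two groups where an alpha of 0.5 yields one.
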

Open Access Article
A Pricing Strategy of E-Commerce Advertising Cooperation in the Stackelberg Game Model with Different Market Power Structure
Algorithms 2019, 12(1), 24; https://doi.org/10.3390/a12010024 - 18 Jan 2019
Cited by 1
Abstract
A lot of research work has studied the auction mechanism of uncertain advertising cooperation between e-commerce platforms and advertisers, but little has focused on pricing strategy in stable advertising cooperation under a certain market power structure. To fill this gap, this paper studies the interest distribution of the two parties in such cooperation. We propose a pricing strategy by building two Stackelberg leader-follower models in which the e-commerce platform and the advertiser, respectively, lead the cooperation. We analyze how the optimal profits of both parties and of the total system are affected by the main decision factors, including the income commission proportion, the advertising product price, and the cost of each party's brand advertising effort, in the different dominance models. Then, some numerical studies are used to verify the effectiveness of the models. Finally, we draw conclusions and make some suggestions to platforms and advertisers in e-commerce advertising cooperation.
(This article belongs to the Special Issue Algorithms for Decision Making)
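
The leader-follower logic can be illustrated with a toy backward-induction computation. The profit functions below are hypothetical stand-ins, not the paper's models: the advertiser chooses its advertising effort given the platform's commission rate, and the platform, moving first, anticipates that response:

```python
def follower_effort(c, efforts):
    """Advertiser (follower): choose the effort e maximizing its revenue
    share minus effort cost, for a hypothetical revenue curve 10 * sqrt(e)."""
    return max(efforts, key=lambda e: (1 - c) * 10 * e ** 0.5 - e)

def leader_commission(commissions, efforts):
    """Platform (leader): anticipate the follower's best response to each
    commission rate and maximize commission income (backward induction)."""
    return max(commissions,
               key=lambda c: c * 10 * follower_effort(c, efforts) ** 0.5)

# Analytically, e*(c) = 25 (1 - c)^2 for these toy functions, and the
# platform's optimum works out to a commission rate of c* = 1/2.
```

Searching discrete grids for both decisions recovers the analytical equilibrium of this toy model.
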
Open Access Article
Total Optimization of Energy Networks in a Smart City by Multi-Population Global-Best Modified Brain Storm Optimization with Migration
Algorithms 2019, 12(1), 15; https://doi.org/10.3390/a12010015 - 07 Jan 2019
Cited by 2 | Correction
Abstract
This paper proposes total optimization of energy networks in a smart city by multi-population global-best modified brain storm optimization (MP-GMBSO). Efficient utilization of energy is necessary for the reduction of CO2 emissions, and smart city demonstration projects have been conducted around the world in order to reduce total energy consumption and CO2 emissions. The problem can be formulated as a mixed integer nonlinear programming (MINLP) problem, and various evolutionary computation techniques such as particle swarm optimization (PSO), differential evolution (DE), Differential Evolutionary Particle Swarm Optimization (DEEPSO), Brain Storm Optimization (BSO), Modified BSO (MBSO), Global-best BSO (GBSO), and Global-best Modified Brain Storm Optimization (GMBSO) have been applied to it. However, there is still room for improving solution quality, and multi-population evolutionary computation methods have been verified to do so. The proposed MP-GMBSO utilizes only migration among sub-populations, rather than abest (the best individual among all sub-populations so far) or a combination of migration and abest. Various multi-population models, migration topologies, migration policies, and numbers of sub-populations are also investigated. It is verified that the proposed MP-GMBSO-based method with ring topology, the W-B policy, and 320 individuals is the most effective among all tested multi-population configurations.
(This article belongs to the Special Issue Algorithms for Decision Making)
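
The migration mechanism of a multi-population model with ring topology can be sketched generically (a standard island-model illustration with our own naming, not the paper's W-B policy):

```python
def migrate_ring(subpops, fitness, n_migrants=1):
    """One migration step on a ring topology: each sub-population sends a
    copy of its n_migrants best individuals to its clockwise neighbour,
    where they replace the neighbour's worst (fitness is minimized)."""
    k = len(subpops)
    # Select all migrants first, so every move uses the pre-migration state.
    migrants = [sorted(pop, key=fitness)[:n_migrants] for pop in subpops]
    for i, best in enumerate(migrants):
        dest = subpops[(i + 1) % k]
        dest.sort(key=fitness, reverse=True)  # worst individuals first
        dest[:n_migrants] = best              # overwrite worst with neighbour's best
    return subpops
```

Good solutions thus spread gradually around the ring instead of being broadcast instantly, which preserves diversity among sub-populations.
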

Open Access Article
Multi-Objective Bi-Level Programming for the Energy-Aware Integration of Flexible Job Shop Scheduling and Multi-Row Layout
Algorithms 2018, 11(12), 210; https://doi.org/10.3390/a11120210 - 17 Dec 2018
Cited by 2
Abstract
The flexible job shop scheduling problem (FJSSP) and the multi-row workshop layout problem (MRWLP) are two major focuses in sustainable manufacturing processes. There is a close interaction between them, since the FJSSP provides the material handling information that guides the optimization of the MRWLP, while the layout scheme affects the scheduling scheme through the transportation time of jobs. However, traditional methods regard them as separate tasks performed sequentially, which ignores this interaction. Therefore, developing effective methods for the multi-objective energy-aware integration of the FJSSP and MRWLP (MEIFM) problem in a sustainable manufacturing system is increasingly important. Based on the interaction between the FJSSP and MRWLP, the MEIFM problem can be formulated as a multi-objective bi-level programming (MOBLP) model. The upper-level model for the FJSSP is employed to minimize the makespan and total energy consumption, while the lower-level model for the MRWLP is used to minimize the material handling quantity. Because the MEIFM problem is a mixed integer non-linear programming model, it is difficult to solve using traditional methods. Thus, this paper proposes an improved multi-objective hierarchical genetic algorithm (IMHGA) to solve it. Finally, the effectiveness of the method is verified through comparative experiments.
(This article belongs to the Special Issue Algorithms for Decision Making)

Open Access Article
On the Use of Learnheuristics in Vehicle Routing Optimization Problems with Dynamic Inputs
Algorithms 2018, 11(12), 208; https://doi.org/10.3390/a11120208 - 15 Dec 2018
Cited by 2
Abstract
Freight transportation is becoming an increasingly critical activity for enterprises in a global world. Moreover, the distribution activities have a non-negligible impact on the environment, as well as on the citizens’ welfare. The classical vehicle routing problem (VRP) aims at designing routes that minimize the cost of serving customers using a given set of capacitated vehicles. Some VRP variants consider traveling times, either in the objective function (e.g., including the goal of minimizing total traveling time or designing balanced routes) or as constraints (e.g., the setting of time windows or a maximum time per route). Typically, the traveling time between two customers or between one customer and the depot is assumed to be both known in advance and static. However, in real life, there are plenty of factors (predictable or not) that may affect these traveling times, e.g., traffic jams, accidents, road works, or even the weather. In this work, we analyze the VRP with dynamic traveling times. Our work assumes not only that these inputs are dynamic in nature, but also that they are a function of the structure of the emerging routing plan. In other words, these traveling times need to be dynamically re-evaluated as the solution is being constructed. In order to solve this dynamic optimization problem, a learnheuristic-based approach is proposed. Our approach integrates statistical learning techniques within a metaheuristic framework. A number of computational experiments are carried out in order to illustrate our approach and discuss its effectiveness.
(This article belongs to the Special Issue Algorithms for Decision Making)

Open Access Article
MapReduce Algorithm for Location Recommendation by Using Area Skyline Query
Algorithms 2018, 11(12), 191; https://doi.org/10.3390/a11120191 - 25 Nov 2018
Cited by 1
Abstract
Location recommendation is essential for various map-based mobile applications. However, it is not easy to generate location-based recommendations with the changing contexts and locations of mobile users. Skyline operation is one of the most well-established techniques for location-based services. Our previous work proposed a new query method, called “area skyline query”, to select areas in a map. However, it is not efficient for large-scale data. In this paper, we propose a parallel algorithm for processing the area skyline using MapReduce. Intensive experiments on both synthetic and real data confirm that our proposed algorithm is sufficiently efficient for large-scale data.
(This article belongs to the Special Issue Algorithms for Decision Making)
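
The core of any skyline method is the dominance test. A minimal sequential version (the baseline that a MapReduce algorithm parallelizes; the area-skyline specifics are omitted here) looks like this, assuming all dimensions are to be minimized:

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (all dimensions minimized)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Keep the points that no other point dominates. In a MapReduce
    setting, mappers compute local skylines per partition and a reducer
    merges them; since dominance is transitive, no skyline point is lost."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, skyline([(1, 9), (3, 3), (9, 1), (5, 5), (4, 4)]) keeps only the first three points, since (3, 3) dominates both (5, 5) and (4, 4).
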

Open Access Article
Pricing Strategies of Logistics Distribution Services for Perishable Commodities
Algorithms 2018, 11(11), 186; https://doi.org/10.3390/a11110186 - 17 Nov 2018
Cited by 1
Abstract
The problem of pricing distribution services is challenging due to the loss in value of a product during its distribution process. Four logistics service pricing strategies are constructed in this study: a fixed pricing model, a fixed pricing model with time constraints, a dynamic pricing model, and a dynamic pricing model with time constraints, in combination with factors such as the distribution time, customer satisfaction, and optimal pricing. By analyzing the relationship between optimal pricing and key parameters (such as the value decay index, consumer satisfaction, dispatch time, and the storage cost of the commodity), it is found that the larger the attenuation coefficient, the more easily the perishable goods spoil, which leads to lower distribution prices and reduced consumer satisfaction. Moreover, an analysis of the average profit of the logistics service providers in these four pricing models shows that the average profit in the dynamic pricing model with time constraints is the highest. Finally, a numerical experiment is given to support the findings.
(This article belongs to the Special Issue Algorithms for Decision Making)
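
The role of the decay index can be made concrete with a toy exponential-decay value model (an illustrative form of our own; the paper's models additionally involve satisfaction and storage-cost terms):

```python
import math

def distribution_price(base_price, decay, t):
    """Willingness-to-pay for a perishable commodity after t time units in
    transit, assuming its value decays exponentially at rate `decay`:
    a larger decay index means faster spoilage and a lower price."""
    return base_price * math.exp(-decay * t)
```

For a 10-hour delivery, distribution_price(100, 0.05, 10) is about 60.65, while a faster-decaying commodity gives distribution_price(100, 0.10, 10) of about 36.79, mirroring the finding that a larger attenuation coefficient pushes distribution prices down.
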

Open Access Article
Towards the Verbal Decision Analysis Paradigm for Implementable Prioritization of Software Requirements
Algorithms 2018, 11(11), 176; https://doi.org/10.3390/a11110176 - 03 Nov 2018
Cited by 1
Abstract
The activity of prioritizing software requirements should be done as efficiently as possible. Selecting the most stable requirements for the most important customers of a development company can be a positive factor, considering that available resources do not always allow the implementation of all requirements. There are many quantitative methods for the prioritization of software releases in the field of search-based software engineering (SBSE). However, we show that it is possible to use qualitative verbal decision analysis (VDA) methods to solve this type of problem. Moreover, we use the ZAPROS III-i method to prioritize requirements considering the opinion of the decision-maker, who participates in this process. Results obtained using VDA-structured methods were found to be quite satisfactory when compared to methods using SBSE. A comparison of results between the quantitative and qualitative methods is presented and discussed, and the results were reviewed and corroborated with the use of performance metrics.
(This article belongs to the Special Issue Algorithms for Decision Making)

Open Access Article
Application of Data Science Technology on Research of Circulatory System Disease Prediction Based on a Prospective Cohort
Algorithms 2018, 11(10), 162; https://doi.org/10.3390/a11100162 - 20 Oct 2018
Abstract
Chronic diseases, represented by circulatory diseases, have gradually become the main types of diseases affecting the health of our population. Establishing a circulatory system disease prediction model to predict and control the occurrence of diseases is therefore of great significance to public health. Based on prospective population cohort data on chronic diseases in China and on existing medical cohort studies, the Kaplan–Meier method was used for feature selection, the Cox proportional hazards model, a traditional medical analysis model, was applied, and support vector machine (SVM) methods from machine learning were used to establish circulatory system disease prediction models. This paper also introduces the proportion of explained variation (PEV) and a shrinkage factor to improve the Cox proportional hazards model, and uses the Particle Swarm Optimization (PSO) algorithm to optimize the parameters of the SVM model. Finally, the prediction models are verified experimentally, using the model training time, the accuracy rate (ACC), the area under the Receiver Operating Characteristic curve (AUC), and other forecasting indicators. The experimental results show that the PSO-SVM-CSDPC disease prediction model and the S-Cox-CSDPC circulatory system disease prediction model offer fast solving speed, accurate prediction results and strong generalization ability, which is helpful for the intervention and control of chronic diseases.
(This article belongs to the Special Issue Algorithms for Decision Making)

Open Access Article
Incremental Learning for Classification of Unstructured Data Using Extreme Learning Machine
Algorithms 2018, 11(10), 158; https://doi.org/10.3390/a11100158 - 17 Oct 2018
Cited by 2
Abstract
Unstructured data are irregular information with no predefined data model. Streaming data, which constantly arrive over time, are unstructured, and classifying these data is a tedious task as they lack class labels and accumulate over time. As the data keep growing, it becomes difficult to train and create a model from scratch each time. Incremental learning, a self-adaptive algorithm, uses the previously learned model information, then learns and accommodates new information from the newly arrived data to provide a new model, which avoids retraining. The incrementally learned knowledge helps to classify the unstructured data. In this paper, we propose a framework, CUIL (Classification of Unstructured data using Incremental Learning), which clusters the metadata, assigns a label to each cluster, and then incrementally creates a model using the Extreme Learning Machine (ELM), a feed-forward neural network, for each batch of data that arrives. The proposed framework trains the batches separately, significantly reducing memory use and training time, and is tested with metadata created for standard image datasets such as MNIST, STL-10, CIFAR-10, Caltech101, and Caltech256. Based on the tabulated results, the proposed work shows greater accuracy and efficiency.
(This article belongs to the Special Issue Algorithms for Decision Making)

Open Access Article
Two Hesitant Multiplicative Decision-Making Algorithms and Their Application to Fog-Haze Factor Assessment Problem
Algorithms 2018, 11(10), 154; https://doi.org/10.3390/a11100154 - 10 Oct 2018
Cited by 1
Abstract
The hesitant multiplicative preference relation (HMPR) is a useful tool for problems in which experts use Saaty's 1-9 scale to express their preferences over paired comparisons of alternatives. Since a lack of acceptable consistency easily leads to inconsistent conclusions, consistency improvement processes and the derivation of a reliable priority weight vector for alternatives are two significant and challenging issues for hesitant multiplicative information decision-making problems. In this paper, some new concepts are first introduced, including the HMPR, the consistent HMPR and the consistency index of an HMPR. Then, based on the logarithmic least squares model and a linear optimization model, two novel automatic iterative algorithms are proposed to enhance the consistency of an HMPR and generate its priority weights, and they are proved to be convergent. Finally, the proposed algorithms are applied to the assessment of factors affecting fog-haze weather. The comparative analysis shows that the decision-making process in our algorithms is more straightforward and efficient.
(This article belongs to the Special Issue Algorithms for Decision Making)
Open Access Article
Multiple Attribute Decision-Making Method Using Linguistic Cubic Hesitant Variables
Algorithms 2018, 11(9), 135; https://doi.org/10.3390/a11090135 - 07 Sep 2018
Cited by 3
Abstract
Linguistic decision making (DM) is an important research topic in DM theory and methods, since using linguistic terms for the assessment of the objective world fits human thinking and expression habits well. However, there is both uncertainty and hesitancy in the linguistic arguments of human thinking and judgments of an evaluated object, and this hybrid information, involving both uncertain linguistic arguments and hesitant linguistic arguments, cannot be expressed through the various existing linguistic concepts. To reasonably express it, this study presents a linguistic cubic hesitant variable (LCHV) based on the concepts of a linguistic cubic variable and a hesitant fuzzy set, its operational relations, and a linguistic score function for ranking LCHVs. Then, an objective extension method based on the least common multiple number/cardinality for LCHVs and weighted aggregation operators of LCHVs are proposed to reasonably aggregate LCHV information, because existing aggregation operators cannot aggregate LCHVs whose numbers of hesitant components may differ. Next, a multi-attribute decision-making (MADM) approach is proposed based on the weighted arithmetic averaging (WAA) and weighted geometric averaging (WGA) operators of LCHVs. Lastly, an illustrative example is provided to indicate the applicability of the proposed approaches.
(This article belongs to the Special Issue Algorithms for Decision Making)
Open Access Article
Probabilistic Interval-Valued Hesitant Fuzzy Information Aggregation Operators and Their Application to Multi-Attribute Decision Making
Algorithms 2018, 11(8), 120; https://doi.org/10.3390/a11080120 - 06 Aug 2018
Cited by 5
Abstract
Based on probabilistic interval-valued hesitant fuzzy information aggregation operators, this paper investigates a novel multi-attribute group decision making (MAGDM) model to address the serious loss of information in a hesitant fuzzy information environment. Firstly, the definition of the probabilistic interval-valued hesitant fuzzy set is introduced, and then, using the Archimedean norm, some new probabilistic interval-valued hesitant fuzzy operations are defined. Secondly, based on these operations, the generalized probabilistic interval-valued hesitant fuzzy ordered weighted averaging (GPIVHFOWA) operator and the generalized probabilistic interval-valued hesitant fuzzy ordered weighted geometric (GPIVHFOWG) operator are proposed, and their desirable properties are discussed. We further study their common forms and analyze the relationships among the proposed operators. Finally, a new probabilistic interval-valued hesitant fuzzy MAGDM model is constructed, and the feasibility and effectiveness of the proposed model are verified using an example of supplier selection.
(This article belongs to the Special Issue Algorithms for Decision Making)
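
The "ordered weighted" idea underlying these operators is easiest to see in the classical crisp OWA operator, where the weights attach to rank positions rather than to particular arguments (a sketch of the standard operator, not the paper's interval-valued generalization):

```python
def owa(values, weights):
    """Ordered weighted averaging: sort the arguments in descending order,
    then take the weighted sum with the position weights. The weight vector
    tunes the operator between max ([1, 0, ...]) and min ([..., 0, 1])."""
    if len(values) != len(weights) or abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("need one weight per value, summing to 1")
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))
```

With weights [1, 0, 0] the operator returns the maximum of three arguments, with [0, 0, 1] the minimum, and with equal weights the arithmetic mean.
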
Open Access Article
Improving Monarch Butterfly Optimization Algorithm with Self-Adaptive Population
Algorithms 2018, 11(5), 71; https://doi.org/10.3390/a11050071 - 14 May 2018
Cited by 1
Abstract
Inspired by the migration behavior of monarch butterflies in nature, Wang et al. proposed a novel, promising, intelligent swarm-based algorithm, monarch butterfly optimization (MBO), for tackling global optimization problems. In the basic MBO algorithm, the butterflies in land 1 (subpopulation 1) and land [...] Read more.
Inspired by the migration behavior of monarch butterflies in nature, Wang et al. proposed a novel and promising swarm intelligence algorithm, monarch butterfly optimization (MBO), for tackling global optimization problems. In the basic MBO algorithm, the butterflies in land 1 (subpopulation 1) and land 2 (subpopulation 2) are determined by the parameter p, which remains unchanged during the entire optimization process. In the present work, a self-adaptive strategy is introduced to dynamically adjust the number of butterflies in lands 1 and 2, so that the sizes of the two subpopulations change linearly as the algorithm evolves. With this self-adaptive strategy, an improved MBO algorithm, called monarch butterfly optimization with self-adaptive population (SPMBO), is put forward. In SPMBO, a newly generated individual is accepted into the next generation's migration operation only if it improves on its predecessor. Finally, the proposed SPMBO algorithm is benchmarked on thirteen standard test functions with dimensions of 30 and 60. The experimental results indicate that the proposed SPMBO approach significantly outperforms the basic MBO algorithm on most test functions, which also implies that the self-adaptive strategy is an effective way to improve the performance of the basic MBO algorithm.
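The linearly self-adaptive population split can be sketched as follows; the start and end fractions below are illustrative assumptions, not the tuned values from the paper:

```python
# Sketch of SPMBO's linearly self-adaptive subpopulation split. In basic
# MBO the fraction p of butterflies assigned to land 1 is fixed; here p
# is interpolated linearly over the generations. p_start/p_end are
# hypothetical values chosen for illustration.

def subpopulation_sizes(n_total, gen, max_gen, p_start=5/12, p_end=7/12):
    """Return (size of land 1, size of land 2) at generation `gen`,
    with p interpolated linearly from p_start to p_end."""
    p = p_start + (p_end - p_start) * gen / max_gen
    n_land1 = int(round(n_total * p))
    return n_land1, n_total - n_land1

for gen in (0, 25, 50):
    print(subpopulation_sizes(50, gen, 50))
```

At the start most butterflies sit in land 1's fraction given by p_start; by the final generation the balance has shifted toward p_end, while the total population size stays constant.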

Open Access Article
The Supplier Selection of the Marine Rescue Equipment Based on the Analytic Hierarchy Process (AHP)-Limited Diversity Factors Method
Algorithms 2018, 11(5), 63; https://doi.org/10.3390/a11050063 - 04 May 2018
Cited by 1
Abstract
Supplier selection is an important decision-making step in bidding activity. When the overall scores of several suppliers are similar, it is hard to obtain an accurate ranking of these suppliers. Applying the Diversity Factors Method (DFM) may lead to over-correction of the weights, which degrades the ability of the indexes to reflect their importance. To relieve the over-correction in DFM and to improve the discriminating power of the indexes in supplier selection, a Limited Diversity Factors Method (LDFM) based on entropy is presented in this paper to adjust the weights. An example of salvage ship bidding demonstrates the advantages of the LDFM, in which the ranking of the suppliers' overall scores is more accurate.
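The entropy-based weighting that such methods start from can be sketched as below: indexes whose scores vary more across suppliers carry more discriminating information and therefore receive larger weights. The data and the plain entropy-weight formulation are illustrative assumptions, not the paper's LDFM adjustment:

```python
import math

# Sketch of the classical entropy weight method: for each index (column),
# compute the normalized Shannon entropy of the suppliers' scores; the
# weight is proportional to 1 - entropy (the "divergence"), so more
# dispersed columns get larger weights. Scores are hypothetical.

def entropy_weights(matrix):
    """matrix[i][j]: positive score of supplier i on index j.
    Returns one weight per index, summing to 1."""
    m, n = len(matrix), len(matrix[0])
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [x / total for x in col]
        entropy = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        divergences.append(1 - entropy)
    s = sum(divergences)
    return [d / s for d in divergences]

# Index 1 barely separates the suppliers; index 2 separates them strongly.
print(entropy_weights([[0.9, 0.5], [0.8, 0.9], [0.85, 0.1]]))
```

LDFM's contribution, per the abstract, is to *limit* how far such dispersion-driven adjustment can push the weights, so that an index's intrinsic importance is not drowned out.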

Open Access Article
Vessel Traffic Risk Assessment Based on Uncertainty Analysis in the Risk Matrix
Algorithms 2018, 11(5), 60; https://doi.org/10.3390/a11050060 - 03 May 2018
Abstract
Uncertainty analysis is considered a necessary step in the process of vessel traffic risk assessment. The purpose of this study is to propose an uncertainty analysis algorithm that can be used to investigate the reliability of the risk assessment result. Probability and possibility distributions are used to quantify the two types of uncertainty identified in the risk assessment process. In addition, an algorithm for selecting an appropriate time window is chosen by considering the uncertainty of vessel traffic accident occurrence and the variation trend of the vessel traffic risk caused by maritime rules coming into force. Vessel traffic accident data from the United Kingdom's Marine Accident Investigation Branch are used for the case study. A comparison with the common method of estimating vessel traffic risk, and with the uncertainty quantification algorithm without time window selection, verifies the validity of the proposed algorithms, which can provide guidance for vessel traffic risk management.
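For readers unfamiliar with the underlying tool, a risk matrix maps categorical accident frequency and severity to a qualitative risk level; the uncertainty analysis then asks how reliable that mapping is given sparse accident data. The categories and assignments below are illustrative assumptions, not the paper's matrix:

```python
# Minimal risk-matrix lookup of the kind vessel traffic risk assessment
# is built on. Frequency and severity categories, and the level assigned
# to each cell, are hypothetical examples.

FREQ_LEVELS = ("remote", "occasional", "frequent")
SEV_LEVELS = ("minor", "significant", "severe")

# RISK[frequency][severity] -> qualitative risk level
RISK = {
    "remote":     {"minor": "low",    "significant": "low",    "severe": "medium"},
    "occasional": {"minor": "low",    "significant": "medium", "severe": "high"},
    "frequent":   {"minor": "medium", "significant": "high",   "severe": "high"},
}

def assess(frequency, severity):
    """Look up the risk level for one frequency/severity pair."""
    return RISK[frequency][severity]

print(assess("occasional", "severe"))  # high
```

The paper's contribution sits on top of such a matrix: since the frequency category is estimated from limited accident counts within a chosen time window, both the estimate and the window choice carry uncertainty that should be quantified.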

Open Access Article
Decision-Making Approach Based on Neutrosophic Rough Information
Algorithms 2018, 11(5), 59; https://doi.org/10.3390/a11050059 - 03 May 2018
Cited by 5
Abstract
Rough set theory and neutrosophic set theory are mathematical models for dealing with incomplete and vague information. The two theories can be combined into a framework for modeling and processing incomplete information in information systems. The resulting neutrosophic rough set hybrid model thus offers more precision, flexibility and compatibility than the classical and fuzzy models. In this research study, we develop neutrosophic rough digraphs based on the neutrosophic rough hybrid model. Moreover, we discuss regular neutrosophic rough digraphs, and we solve decision-making problems by using our proposed hybrid model. Finally, we give a comparative analysis of two hybrid models, namely, neutrosophic rough digraphs and rough neutrosophic digraphs.
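For orientation, a single-valued neutrosophic element carries three independent memberships, which is what distinguishes it from a fuzzy membership degree. The score function below is one common choice from the neutrosophic literature, not necessarily the one used in this paper:

```python
from dataclasses import dataclass

# Sketch of a single-valued neutrosophic element: independent truth (T),
# indeterminacy (I), and falsity (F) memberships, each in [0, 1]. Unlike
# fuzzy sets, T + I + F need not sum to 1.

@dataclass
class NeutrosophicElement:
    t: float  # truth membership
    i: float  # indeterminacy membership
    f: float  # falsity membership

    def score(self):
        """One common score function: higher is better; rewards truth,
        penalizes indeterminacy and falsity. Maps into [0, 1]."""
        return (2 + self.t - self.i - self.f) / 3

# Ranking two alternatives by score:
a = NeutrosophicElement(0.8, 0.2, 0.1)
b = NeutrosophicElement(0.6, 0.1, 0.3)
print(a.score() > b.score())  # True: a ranks above b
```

In the rough-set hybrid, such elements additionally get lower and upper approximations with respect to a neutrosophic relation, which is what the digraph construction operates on.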

Open Access Article
Failure Mode and Effects Analysis Considering Consensus and Preferences Interdependence
Algorithms 2018, 11(4), 34; https://doi.org/10.3390/a11040034 - 21 Mar 2018
Cited by 4
Abstract
Failure mode and effects analysis is an effective and powerful risk evaluation technique in the field of risk management, and it has been extensively used in various industries for identifying and reducing known and potential failure modes in systems, processes, products, and services. Traditionally, a risk priority number is used to rank failure modes in failure mode and effects analysis. However, this method has several drawbacks and deficiencies: for instance, it ignores the consensus-reaching process and the correlations among the experts' preferences. Therefore, the aim of this study was to present a new risk priority method for determining the risk priority of failure modes under an interval-valued Pythagorean fuzzy environment, which combines an extended geometric Bonferroni mean operator, a consensus-reaching process, and an improved Multi-Attributive Border Approximation area Comparison (MABAC) approach. Finally, a case study concerning product development is described to demonstrate the feasibility and effectiveness of the proposed method. The results show that the risk priority of failure modes obtained by the proposed method is more reasonable in practical application than that of other failure mode and effects analysis methods.
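The traditional risk priority number that the paper improves on is the product of three 1-10 ratings, and one of its well-known weaknesses is easy to show: different rating triples collapse to the same priority.

```python
# Traditional FMEA risk priority number: RPN = Severity x Occurrence x
# Detection, each rated on a 1-10 scale. A known deficiency is that
# distinct (S, O, D) combinations can produce identical RPNs, hiding
# differences in risk profile.

def rpn(severity, occurrence, detection):
    for rating in (severity, occurrence, detection):
        assert 1 <= rating <= 10, "ratings must be on the 1-10 scale"
    return severity * occurrence * detection

# Two quite different failure modes, identical priority:
print(rpn(9, 2, 4))  # 72 (severe but rare)
print(rpn(3, 6, 4))  # 72 (mild but frequent)
```

This ambiguity, together with the neglect of expert consensus and preference interdependence, is what motivates replacing the plain product with the fuzzy aggregation and MABAC ranking described in the abstract.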
