Article

Are MCDA Methods Benchmarkable? A Comparative Study of TOPSIS, VIKOR, COPRAS, and PROMETHEE II Methods

by Wojciech Sałabun 1,*, Jarosław Wątróbski 2 and Andrii Shekhovtsov 1,*

1 Research Team on Intelligent Decision Support Systems, Department of Artificial Intelligence Methods and Applied Mathematics, Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, Szczecin ul. Żołnierska 49, 71-210 Szczecin, Poland
2 Department of Information Systems Engineering, Faculty of Economics and Management, University of Szczecin, Mickiewicza 64, 71-101 Szczecin, Poland
* Authors to whom correspondence should be addressed.
Symmetry 2020, 12(9), 1549; https://doi.org/10.3390/sym12091549
Submission received: 23 August 2020 / Revised: 10 September 2020 / Accepted: 14 September 2020 / Published: 20 September 2020
(This article belongs to the Special Issue Uncertain Multi-Criteria Optimization Problems)

Abstract:
Multi-Criteria Decision-Analysis (MCDA) methods are successfully applied in different fields and disciplines. However, in many studies, the problem of selecting the proper methods and parameters for the decision problems is raised. This paper attempts to benchmark selected MCDA methods. To achieve that, a set of feasible MCDA methods was identified. Based on reference literature guidelines, a simulation experiment was planned. The formal foundations of the authors’ approach provide a reference set of MCDA methods (Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), Complex Proportional Assessment (COPRAS), and Preference Ranking Organization Method for Enrichment of Evaluations II (PROMETHEE II)) along with their similarity coefficients (the Spearman correlation coefficient and the WS coefficient). This allowed the generation of a set of models differentiated by the number of attributes and decision variants, as well as a similarity analysis of the obtained ranking sets. As the authors aim to build a complex benchmarking model, additional dimensions were taken into account during the simulation experiments. The aspects of the performed analysis and benchmarking include various weighting methods (results obtained using the entropy and standard deviation methods) and varied techniques of normalization of the MCDA model input data. Comparative analyses showed the detailed influence of the values of particular parameters on the final form and similarity of the final rankings obtained by different MCDA methods.

1. Introduction

Making decisions is an integral part of human life. All such decisions are made based on the assessment of individual decision options, usually based on preferences, experience, and other data available to the decision maker. Formally, a decision can be defined as a choice made based on the available information, or a method of action aimed at solving a specific decision problem [1]. Taking into account the systematics of the decision problem itself and the classical paradigm of single-criterion optimization, it should be noted that it is now widely accepted to extend the process of decision support beyond the classical model of single-goal optimization described on the set of acceptable solutions [2]. This extension allows one to tackle multi-criteria problems with a focus on obtaining a solution that satisfies many, often contradictory, goals [3,4,5,6,7].
The concept of rational decisions is, at the same time, a paradigm of multi-criteria decision support and is the basis of the whole family of Multi-Criteria Decision-Analysis (MCDA) methods [8]. These methods aim to support the decision maker in the process of finding a solution that best suits their preferences. Such an approach is widely discussed in the literature. In the course of the research, whole groups of MCDA methods and even “schools” of multi-criteria decision support have developed. There are also many different individual MCDA methods and their modifications developed so far. The common MCDA methods belonging to the American school include the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) [9,10,11], VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) [12], the Analytic Hierarchy Process (AHP) [13,14], and Complex Proportional Assessment (COPRAS) [15]. Examples of the most popular methods belonging to the European school are the ELECTRE [16,17] and Preference Ranking Organization Method for Enrichment of Evaluations (PROMETHEE) [18] method families. The third best-known group of methods are mixed approaches, based on decision-making rules [19,20,21]. This research has also resulted in the dynamic development of new MCDA methods and extensions of existing ones [22].
Despite the large number of MCDA methods, it should be remembered that no method is perfect, and none can be considered suitable for use in every decision-making situation or for solving every decision problem [23]. Therefore, using different multi-criteria methods may lead to different decision recommendations [24]. It should be noted, however, that if different multi-criteria methods achieve contradictory results, then the correctness of the choice of each of them is questioned [25]. In such a situation, the choice of a decision support method appropriate to the given problem [22] becomes an important research issue, as only an adequately chosen method allows one to obtain a correct solution reflecting the preferences of the decision maker [2,26]. The importance of this problem is raised by Roy, who points out that in stage IV of the decision-making model defined by him, particular attention should be paid to the choice of the calculation procedure [2]. This is also confirmed by Hanne [27] and Cinelli et al. [26].
It should be noted that it is difficult to answer the question of which method is the most suitable to solve a specific type of problem [24]. This is related to the apparent universality of MCDA methods, because many methods meet the formal requirements of a given decision-making problem so that they can be selected independently of the specificity of a particular problem (e.g., the existence of a finite set of alternatives may be a determinant for the decision-maker when choosing a method) [27]. Therefore, Guitouni et al. recommend studying different methods and determining their areas of application by identifying their limitations and conditions of applicability [23]. This is very important because different methods can provide different solutions to the same problem [28]. Differences in results using different calculation procedures can be influenced by, for example, the following factors: individual techniques use different criteria weights in calculations, the algorithms of the methods themselves differ significantly, and many algorithms attempt to scale the objectives [24].
Many methodologies, frameworks, and formal approaches to identify a subset of MCDA methods suitable for the given problem have been developed in prior works [20,22]. The synthesis of available terms is presented later in the article. It should be noted that the assessment of the accuracy and reliability of results obtained using various MCDA methods remains a separate research problem. Examples of work on accuracy assessment and, more broadly, the benchmarking of MCDA methods include Zanakis et al. [24], Chang et al. [29], Hajkowicz and Higgins [30], Żak [31], and others. Their broader discussion is also conducted later in the paper. The authors, focusing their research on selected MCDA methods, effectively use a simulation environment (e.g., Monte Carlo simulation) to generate a set of ranks. The authors most often use Spearman’s rank correlation coefficient to assess and analyze the similarity of the rankings. However, the shortcomings of the indicated approaches should be pointed out. In the vast majority of the available literature, the approaches are focused on a given application domain of the MCDA method (or a subset of methods). Thus, despite the use of a simulation environment, the studies are not comprehensive, limiting the range of obtained results. Most of the papers are focused on the assessment of only selected aspects of the accuracy of single MCDA methods or contain narrow comparative studies of selected MCDA methods. Another important challenge related to multi-criteria decision-making problems that is not addressed in the works mentioned above is the proper determination of weights [32]. An open question is how various methods of determining criteria weights (subjective and objective weighting methods) affect the final ranking [33]. In our research, we attempt a complex benchmarking for a subset of carefully selected MCDA methods.
Taking care of the correctness and comprehensiveness of the conducted research, and at the same time following the guidelines of the authors of publications in this area, we use a simulation environment and apply multiple model input parameters (see Figure 1). The aspects of the conducted analysis and benchmarking of methods include not only the generation of final rankings for a variable number of decision-making options, but also take into account different weighting methods (results obtained using the entropy and standard deviation methods) and different techniques of normalizing the input data of the MCDA model. The analysis of ranking similarities obtained under different conditions and using different MCDA methods was carried out based on reference publications and using Spearman’s rank correlation coefficient and the WS coefficient.
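The two similarity measures named above can be sketched in a few lines of Python. This is our own minimal illustration (the function names are ours), with the WS coefficient reproduced from its published definition under the assumption that `x` holds the reference ranking:

```python
import numpy as np

def spearman(x, y):
    """Spearman's rank correlation for two rankings of 1-based ranks
    (no ties), via the classical 1 - 6*sum(d^2)/(n*(n^2-1)) formula."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    return 1 - 6 * np.sum((x - y) ** 2) / (n * (n ** 2 - 1))

def ws_coefficient(x, y):
    """WS rank similarity coefficient: positions at the top of the
    reference ranking x are weighted more heavily (factor 2^-x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    return 1 - np.sum(2.0 ** (-x) * np.abs(x - y)
                      / np.maximum(np.abs(x - 1), np.abs(x - n)))
```

For identical rankings both measures return 1; for fully reversed rankings, Spearman drops to -1 while WS stays positive, reflecting its asymmetric weighting of top ranks.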
The study aims to analyze the similarity of rankings obtained using the selected four MCDA methods. It should be noted that in this research, we analyze the problem only from the technical (algorithmic) point of view, leaving in the background the conceptual aspects of the methods and the systemic assumptions of obtaining model input data. For each of these methods, several rankings were calculated using a simulation environment. The simulation itself was divided into two parts. The first part refers to a comparison of the similarity of the results separately for TOPSIS, VIKOR, COPRAS, and PROMETHEE II. In the second part, these methods were compared with each other. As illustrated in Figure 1, the calculation procedure took into account different normalization methods, weighting methods, preference thresholds, and preference functions. Thus, in the case of the TOPSIS method, for each decision matrix, 12 separate rankings were created, where each of them is a different combination of a normalization method and a weighting method. Then, for the same matrix, a set of rankings using the other MCDA methods (VIKOR, COPRAS, and PROMETHEE II) was created. This procedure was repeated for different parameters of the decision matrix (in terms of the number of analyzed alternatives and the number of criteria). In this way, the simulation provides a complex dataset in which similarities were analyzed using the Spearman and WS coefficients.
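The two objective weighting methods used in the experiments (entropy and standard deviation) can be sketched as follows; this is a minimal illustration assuming a decision matrix with strictly positive entries and one alternative per row, not the exact implementation used in our simulation:

```python
import numpy as np

def std_weights(matrix):
    """Objective weights proportional to each criterion's standard deviation."""
    s = np.std(matrix, axis=0)
    return s / s.sum()

def entropy_weights(matrix):
    """Objective weights from the Shannon entropy of each criterion column
    (assumes strictly positive matrix entries)."""
    m, _ = matrix.shape
    p = matrix / matrix.sum(axis=0)                  # column-wise proportions
    e = -np.sum(p * np.log(p), axis=0) / np.log(m)   # normalized entropy in [0, 1]
    d = 1 - e                                        # degree of divergence
    return d / d.sum()
```

Both functions return weight vectors summing to one; a criterion on which all alternatives score identically carries zero weight under either method, since it cannot discriminate between alternatives.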
In this paper, we used several MCDA methods (TOPSIS, VIKOR, COPRAS, and PROMETHEE II) to make comparative tests. The choice of this set was dictated by the properties and popularity of these methods. These methods and their modifications have found applications in many different domains such as sustainability assessment [34,35,36], logistics [37,38,39], supplier selection [7,40,41], manufacturing [42,43,44], environment management [45,46,47], waste management [48,49], energy management [50,51,52,53,54,55], chemical engineering [56,57,58], and many more [59,60]. The choice of the group of TOPSIS, VIKOR, and COPRAS methods is justified, as they form a coherent group of methods of the American MCDA school and are based on the same principles, using the concepts of so-called reference points. At the same time, unlike other methods of the American school, they are not merely trivial (in the algorithmic sense) elaborations of simple additive or multiplicative weighted aggregation. The choice of the PROMETHEE II method was dictated by the fact that this method belongs to the European school and that the PROMETHEE II algorithm implements the properties of other European school-based MCDA methods (outranking relations, thresholds, and different preference functions). It should be noted that for benchmarking purposes, the choice of this method has one more justification—unlike other methods of this school, it provides a full, quantitative final ranking of decision-making options.
The rest of the article is structured as follows. Section 2.1.1 presents the most important MCDA foundations. The operational point of view and preference aggregation techniques are presented in Section 2.1.2. Section 2.1.3 and Section 2.1.4 describe the American and European school-based MCDA methods, respectively. Mixed and rule-based methods are shown in Section 2.1.5. Section 2.2 presents the MCDA method selection and benchmarking problem. Section 3.1.1 contains a description of the TOPSIS method and Section 3.1.2 describes the VIKOR method. A description of the COPRAS method can be found in Section 3.1.3 and a description of the PROMETHEE II method is in Section 3.1.4. Section 3.2 describes the normalization methods, which can be applied to the data before executing any MCDA method. Section 3.3 contains a description of the weighting methods used in this article. Section 3.4 describes the correlation coefficients that will be used to compare results. The research experiment and the numerical example are presented in Section 4. Section 5 provides the most relevant results and their discussion. Finally, the conclusions are formulated in Section 6.

2. Literature Review

2.1. MCDA State of the Art

In this section, we introduce the methodological assumptions of MCDA, taking into account the operational point of view and available data aggregation techniques. At the same time, we provide an outline of existing methods and decision-making schools.

2.1.1. MCDA Foundations

In almost every case, the nature of the decision problem makes it a multi-criteria problem. This means that making a “good” decision requires considering many decision options, where each option should be considered in terms of many factors (criteria) that characterize its acceptability. The values of these factors may limit the number of variants in case their values exceed the assumed framework. They can also serve for grading the admissibility of variants in the situation when each of them is admissible, and the multi-criteria problem consists in choosing subjectively the best of them. For different decision makers, different criteria may have different relevance, so in no case can a multi-criteria decision be considered completely objective. Only the ranking of individual variants with given weights of individual criteria is objective here, as this ranking is usually generated using a specific multi-criteria method. Therefore, concerning recommended multi-criteria decisions, the term “optimal” is not used; instead, one speaks of the decision most satisfactory to the decision-maker [23], which means an optimum in the sense of Pareto [2]. In conclusion, multi-criteria models should take into account elements that can be described as a multi-criteria paradigm: the existence of multiple criteria, the existence of conflicts between criteria, and the complex, subjective, and poorly structured nature of the decision-making problem [61].
The MCDA methods are used to solve decision problems where there are many criteria. The literature provides various models of the decision-making process, e.g., those of Roy, Guitouni, and Keeney. For example, Guitouni’s model distinguishes five stages: decision problem structuring, articulating and modeling preferences, preference aggregation, exploitation of aggregation, and obtaining a solution recommendation [62]. The essential elements of a multi-criteria decision problem are formed by a triangle (A, C, and E), where A defines a set of alternative decision options, C is a set of criteria, while E represents the criteria performance of the options [62]. Modeling of the decision maker’s preferences can be done in two ways, i.e., directly or indirectly, using so-called disaggregation procedures [63]. The decision maker’s preferences are expressed using binary relations. When comparing decision options, two fundamental relations may occur: indifference (a_i I a_j) and strict preference (a_i P a_j). Moreover, the set of basic preferential relations may be extended by the relations of a weak preference of one variant over another and of incomparability between variants [64], creating, together with the basic relations, the so-called outranking relation.

2.1.2. Operational Point of View and Preference Aggregation Techniques

The structured decision problem, for which the modeling of the decision maker’s preferences was carried out, is an input for the multi-criteria preference aggregation procedure (MCDA methods). This procedure should take into account the preferences of the decision-maker, modeled using the weighting of criteria and preference thresholds. It is responsible for aggregating the criteria performance of individual variants to obtain a global result of comparing the variants, consistent with the considered multi-criteria problem. The individual aggregation procedures can be divided according to their operational approach. In the literature, three main approaches exist [63]:
  • The use of a single synthesized criterion: In this approach, the result of the variants’ comparisons is determined for each criterion separately. Then the results are synthesized into a global assessment. The full order of variants is obtained here [65];
  • The synthesis of the criteria based on the relation of outranking: Due to the occurrence of incomparability relations, this approach allows for obtaining the partial order of variants [65];
  • Aggregation based on decision rules [63]: This approach is based on rough set theory [66]. It uses cases and reference rankings from which decision rules are generated [67].
The difference between the use of a single synthesized criterion and the synthesis based on the outranking relation is that in methods using synthesis into one criterion there is compensation, while methods using the outranking relation are considered by many researchers to be non-compensatory [68,69].
Research on multi-criteria decision support has developed two main groups of methods. These groups can be distinguished due to the operational approach used in them. These are the methods of the so-called American school of decision support and the methods of the European school [70,71]. There is also a group of so-called basic methods, most of which are similar in terms of the operational approach to the American school methods. Examples of basic methods are the lexicographic method, the elimination method based on the minimum attribute value, the maximum method, or the additive weighting method. In addition, there is a group of methods combining elements of the American and European approaches, as well as methods based on the previously mentioned rule approach.

2.1.3. American School-Based MCDA Methods

The methods of the American school of decision support are based on a functional approach [63], namely, the use of a value or utility function. These methods usually do not take into account the uncertainty and inaccuracy that can occur in data or decision-maker preferences [1]. This group of methods is strongly connected with the operational approach using a single synthesized criterion. The basic methods of the American school are MAUT, AHP, ANP, SMART, UTA, MACBETH (Measuring Attractiveness by a Categorical Based Evaluation Technique), and TOPSIS.
In the MAUT method, the most critical assumption is that the preferences of the decision-maker can be expressed using a global utility function covering all the considered criteria. The AHP method is the best known and most commonly used functional method. This method allows one to structure the decision-making problem hierarchically. The ANP method is a generalization of AHP. Instead of a hierarchy of the decision problem, it allows the building of a network model in which there may be links between criteria and variants and their feedback. In the SMART method, the criteria values of variants are converted to a common internal scale. This is done mathematically by the decision-maker, and a value function is used [72]. In the UTA method, the decision maker’s preferences are extracted from a reference set of variants [73]. The MACBETH method is based on qualitative evaluations. The individual variants are compared here in pairs in a comparison matrix. The criterion preferences of the variants are aggregated as a weighted average [74,75]. In the TOPSIS method, the decision options considered are compared to the ideal and the anti-ideal solution [76,77,78,79].
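The ideal/anti-ideal comparison underlying TOPSIS can be sketched in a few lines; this is a minimal illustration (vector normalization, Euclidean distances, the classical closeness coefficient), not necessarily the exact variant benchmarked later in the paper:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS sketch. matrix: alternatives x criteria; weights sum
    to 1; benefit[j] is True for benefit criteria, False for cost criteria.
    Returns closeness coefficients (higher = better)."""
    m = np.asarray(matrix, float)
    r = m / np.linalg.norm(m, axis=0)   # vector normalization per criterion
    v = r * weights                     # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

An alternative that dominates on every criterion coincides with the ideal solution and obtains a closeness coefficient of 1, while a fully dominated one obtains 0.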

2.1.4. European School-Based MCDA Methods

The methods of the European school use a relational model. Thus, they use a synthesis of criteria based on the outranking relation. This relation expresses the outranking between pairs of decision options. Among the methods of the European school of decision support, the groups of ELECTRE and PROMETHEE methods should be mentioned above all [1].
ELECTRE I and ELECTRE Is methods are used to solve the selection problem. In the ELECTRE I method there is a true criterion (there are no indifference and preference thresholds), while in ELECTRE Is, pseudo-criteria including thresholds have been introduced. It should be noted here that indifference and preference thresholds can be expressed directly as fixed quantities for a given criterion or as functions, which allows distinguishing the relations of weak and strong preference. The outranking relations occurring between the variants are presented on a graph, and the best variants are those which are not outranked by any other [80,81]. The ELECTRE II method is similar to ELECTRE I in that no indifference and preference thresholds are defined, i.e., true criteria are also present here. Furthermore, the calculation algorithm is the same almost throughout the procedure. However, the ELECTRE II method distinguishes weak and strong preference [82]. The ELECTRE III method is one of the most frequently used methods of multi-criteria decision support and it deals with the ranking problem. The ELECTRE IV method is similar to ELECTRE III in terms of using pseudo-criteria. Similarly, the final ranking of variants is also determined here. The ELECTRE IV method determines two orders (ascending and descending) from which the final ranking of variants is generated. However, the ELECTRE IV method does not define the weights of criteria, so all criteria are equal [83]. ELECTRE Tri is the last of the discussed ELECTRE family methods. It deals with the classification problem and uses pseudo-criteria. This method is very similar to ELECTRE III in procedural terms. However, in the ELECTRE Tri method, the decisional variants are compared with so-called variants’ profiles, i.e., “artificial variants” limiting particular quality classes [84].
PROMETHEE methods are used to determine a synthetic ranking of alternatives. Depending on the implementation, they operate on true or pseudo-criteria. The methods of this family combine most of the ELECTRE methods as they allow one to apply one of the six preference functions, reflecting, among others, the true criterion and pseudo-criteria. Moreover, they enrich the ELECTRE methodology at the stage of object ranking. These methods determine the input and output preference flows, based on which we can create a partial ranking in the PROMETHEE I method [85]. In contrast, in the PROMETHEE II method, the net preference flow values for individual variants are calculated based on input and output preference flows. Based on net values, a complete ranking of variants is determined [86,87].
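The net flow computation of PROMETHEE II described above can be sketched as follows. This is a minimal illustration assuming benefit criteria only and a single V-shape (linear) preference function with threshold `p`; the full method admits six preference functions and per-criterion thresholds:

```python
import numpy as np

def promethee_ii(matrix, weights, p):
    """Minimal PROMETHEE II sketch: pairwise preference indices with a
    V-shape preference function of threshold p, then positive (leaving)
    and negative (entering) flows, returning net flows (higher = better)."""
    m = np.asarray(matrix, float)
    n = m.shape[0]
    pi = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            d = m[a] - m[b]                   # per-criterion differences
            pref = np.clip(d / p, 0, 1)       # V-shape preference in [0, 1]
            pi[a, b] = np.dot(weights, pref)  # aggregated preference index
    phi_plus = pi.sum(axis=1) / (n - 1)       # positive preference flow
    phi_minus = pi.sum(axis=0) / (n - 1)      # negative preference flow
    return phi_plus - phi_minus               # net flow
```

Ranking the alternatives by descending net flow yields the complete, quantitative final ranking that motivated the choice of PROMETHEE II for this benchmark.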
NAIADE (Novel Approach to Imprecise Assessment and Decision Environment) methods are similar to PROMETHEE in terms of calculation because the ranking of variants is determined based on input and output preference flows [88]. However, when comparing the variants, six preferential relations defined based on trapezoidal fuzzy numbers are used (apart from the indifference of variants, weak and strong preference are distinguished). The methods of this family do not define the weights of criteria [89].
Other examples of methods from the European MCDA field are ORESTE, REGIME, ARGUS, TACTIC, MELCHIOR, or PAMSSEM. The ORESTE method requires the presentation of variant evaluations and a ranking of criteria on an ordinal scale [90]. Then, using the distance function, the total order of variants is determined with respect to the subsequent criteria [91]. The REGIME method is based on the analysis of variants’ compatibility. The probability of dominance for each pair of variants being compared is determined, and on this basis the order of variants is established [92]. In the ARGUS method, qualitative measures are used for the representation of preferences on the ordinal scale [93]. The TACTIC method (Treatment of the Alternatives According To the Importance of Criteria) is based on quantitative assessments of alternatives and weights of criteria. Furthermore, it allows the use of true criteria and quasi-criteria, thus using the indifference threshold as well as the veto threshold. Similarly to ELECTRE I, the TACTIC and ARGUS methods use preference aggregation based on concordance and discordance analysis [94]. In the MELCHIOR method, pseudo-criteria are used and the calculation is similar to ELECTRE IV, but an order relationship between the criteria is established [93]. The PAMSSEM I and II methods are a combination of ELECTRE III, NAIADE, and PROMETHEE and implement the computational procedure used in these methods [95,96].

2.1.5. Mixed and Rule-Based Methods

Many multi-criteria methods combine the approaches of the American and European decision support schools. An example is the EVAMIX method [23], which allows taking into account both quantitative and qualitative criteria, using two separate measures of domination [97]. Another mixed method is the QUALIFLEX method [98], which allows for the use of qualitative evaluations of variants and both quantitative and qualitative criteria weights [99]. PCCA (Pairwise Criterion Comparison Approach) methods can be treated as a separate group of multi-criteria methods. They focus on the comparison of variants with respect to different pairs of the considered criteria instead of single criteria. The partial results obtained in this way are then aggregated into the evaluation and final ranking [100]. The methods based on the PCCA approach include MAPPAC, PRAGMA, PACMAN, and IDRA [101].
The last group consists of methods based strictly on decision rules [102]. These are methods using fuzzy set theory (COMET: Characteristic Objects Method) [22] and rough set theory (DRSA: Dominance-based Rough Set Approach) [103]. In the methods belonging to this group, the decision rules are built first. Then, based on these rules, variants are compared and evaluated, and a ranking is generated.
The COMET (Characteristic Objects Method) requires defining triangular fuzzy numbers for each criterion [104], determining the degree to which variants belong to particular linguistic values describing the criteria [105]. Then, from the values of the vertices of the particular fuzzy numbers, the characteristic objects are generated. These objects are compared in pairs by the decision-maker, and their model ranking is generated. The characteristic objects, together with their aggregated ranking values, create a fuzzy rule database [21]. After the considered variants are entered into the decision system, each of them activates appropriate rules, and its aggregated rating is determined as the sum of the products of the degrees to which the variant activates individual rules [106,107].
The DRSA method (Dominance-based Rough Set Approach) is based on rough set theory and requires the definition of a decision table taking into account the values of criteria and consequences of previous decisions (in other words, the table contains historical decision options together with their criteria assessments as well as their aggregated global assessments) [108]. The decision table defines the outranking relation. The final assessment of a variant is determined as the number of variants which, based on the rules, the considered variant outranks or is not outranked by. Moreover, the number of variants that outrank the considered variant or are not outranked by it is deducted from the assessment [109,110].

2.2. MCDA Methods Selection and Benchmarking Problem

The complexity of the problem of choosing the right multi-criteria method to solve a specific decision problem results in numerous works in the literature where this issue is addressed. These works can be divided according to their approach to method selection. Their authors apply approaches based on benchmarks, multi-criteria analysis, and informal and formal structuring of the problem or decision-making situation. An attempt at a short synthesis of available approaches to the selection of MCDA methods to a decision-making problem is presented below.
The selection of a method based on multi-criteria analysis requires defining criteria against which individual methods will be evaluated. This makes it necessary to determine the structure of the decision problem at least partially [111]. Moreover, it should be noted that treating MCDA method selection as a multi-criteria problem leads to a circularity, because for this selection problem, an appropriate multi-criteria method should also be chosen [112]. Nevertheless, the multi-criteria approach to MCDA method selection is used in the literature. Examples include works by Gershon [113], Al-Shemmeri et al. [111], and Celik and Deha Er [114].
The informal approach to method selection consists of selecting the method for a given decision problem based on a heuristic analysis performed by the analyst/decision-maker [115]. This analysis is usually based on the author’s considerations and an unstructured description of the decision problem and the characteristics of particular methods. The semi-formal approach is similar, with the difference that the characteristics of individual MCDA methods are to some extent formalized here (e.g., a table describing the methods). The informal approach was used in Adil et al. [115] and Bagheri Moghaddam et al. [116]. The semi-formal approach has been used in the works of Salinesi and Kornyshov [117], De Montis et al. [118], and Cinelli et al. [119].
In the formal approach to the selection of the MCDA method, the description of individual methods is fully structured (e.g., a taxonomy or a table of features of individual MCDA methods) [120,121]. The decision problem and the method of selecting a single MCDA method or a group of MCDA methods from among those considered are formally defined (e.g., based on decision rules [122], artificial neural networks [123], or decision trees [23,124]). These are frameworks which enable the selection of an MCDA method based on a formal description of the methods and the decision problem. Such an approach is proposed, among others, in the works of Hwang and Yoon [125], Moffett and Sarkar [124], Guitouni and Martel [23], Guitouni et al. [62], Wątróbski [120], Wątróbski and Jankowski [122], Celik and Topcu [126], Cicek et al. [127], and Ulengin et al. [123].
The benchmarking approach seems particularly important. It focuses on a comparison of the results obtained by individual methods. The main problem in applying this approach is finding a reference point against which the results of the examined multi-criteria methods can be compared. Some authors take an expert ranking as the point of reference, whilst others compare the results to the performance of one selected method or examine the agreement of the individual rankings obtained using particular MCDA methods. Examples of the benchmark-based approach to the selection/comparison of MCDA methods are the works of Zanakis et al. [24], Chang et al. [29], Hajkowicz and Higgins [30], and Żak [31].
The publication of Zanakis et al. [24] presents the results of benchmarks of eight MCDA methods (Simple Additive Weighting, Multiplicative Exponential Weighting, TOPSIS, ELECTRE, and four AHP variants). The simulation test scenario assumed randomly generated decision problems in which the number of variants was 3, 5, 7, or 9; the number of criteria was 5, 10, 15, or 20; and the criteria weights could be equal, follow a uniform distribution on [0, 1] with a standard deviation of 1/12, or follow a U-shaped beta distribution on [0, 1] with a standard deviation of 1/24. Moreover, the assessments of the alternatives were randomly generated according to a uniform distribution on [0, 1]. The number of repetitions was 100 for each combination of criteria, variants, and weights. Therefore, 4800 decision problems were considered in the benchmark (4 numbers of criteria × 4 numbers of variants × 3 weighting types × 100 repetitions) and 38,400 solutions were obtained in total (4800 problems × 8 MCDA methods). Within the tests, the average results of all rankings generated by each method were compared with the average results of the rankings generated by the SAW method, which served as the reference point. Comparisons were made using, among others, Spearman's rank correlation coefficient.
The benchmark also examined the phenomenon of rank reversal after introducing an additional non-dominated variant into the evaluation. The research on the rank reversal problem was carried out with similar measures, with the basic ranking generated by a given MCDA method being the point of reference in this case. The authors of the study stated that the AHP method gives rankings closest to the SAW method, while in terms of rank reversal, the TOPSIS method turned out to be the best.
Chang et al. [29] took up a slightly different problem. They presented the procedure of selecting a group fuzzy multi-criteria method generating the most preferred group ranking for a given problem. The authors defined 18 fuzzy methods, which are combinations of two methods of group rating averaging (arithmetic and geometric mean), three multi-criteria methods (Simple Additive Weighting, Weighted Product, and TOPSIS), and three methods of results defuzzification (Center-of-area, graded mean integration, and metric distance). The best group ranking was selected by comparing each of the 18 rankings with nine individual rankings of each decision maker created using methods that are a combination of multi-criteria procedures and defuzzification methods. Spearman’s correlation was used to compare group and individual rankings.
In the work of Hajkowicz and Higgins [30], rankings obtained using five methods (Simple Additive Weighting, Range of Value Method, PROMETHEE II, Evamix, and Compromise Programming) were compared. To compare the rankings, Spearman's and Kendall's correlations for full rankings were used, together with a coefficient that directly measures the agreement on the first three positions of each compared ranking. The study considered six decision-making problems in the field of water resources management taken from the scientific literature. Nevertheless, it should be noted that the authors' conclusions are based on an analysis of the features and possibilities offered by the individual MCDA methods; they do not result from the conducted benchmark.
Another publication in which a benchmark was applied is the work of Żak [31]. The author considered five multi-criteria methods (ELECTRE, AHP, UTA, MAPPAC, and ORESTE). The study examined the indicated methods on three decision-making problems related to transport, i.e., (I) evaluation of public transportation system development scenarios, (II) ranking of maintenance and repair contractors in the public transportation system, and (III) selection of the means of transport used in the public transportation system. The benchmark uses expert evaluations of each method in terms of versatility and relevance to the problem, computational performance, modeling capabilities for decision-makers, reliability, and usefulness of the ranking.
The problem of MCDA method benchmarking is also addressed in many up-to-date studies. In the paper [128], the reliability of rankings generated using the AHP, TOPSIS, ELECTRE III, and PROMETHEE II methods was evaluated using building performance simulation. In the paper [129], the Weighted Sum and Weighted Product Methods (WSM/WPM), TOPSIS, AHP, PROMETHEE I, and ELECTRE I were compared using Monte Carlo simulation. The next study [130] adopted the same benchmarking environment as the study [24]; its authors empirically compared the rankings produced by the MULTIMOORA, TOPSIS, and VIKOR methods. In the paper [131], another benchmark of selected MCDA methods was presented, covering the AHP, Fuzzy AHP, TOPSIS, Fuzzy TOPSIS, and PROMETHEE I methods; the similarity of rankings was evaluated using Spearman's rank correlation coefficient. In the next paper [132], the impact of different uncertainty sources on the rankings of MCDA problems was analyzed in the context of food safety; in this study, MULTIMOORA, TOPSIS, VIKOR, WASPAS, and ELECTRE II were compared. In the last example [133], the impact of different normalization techniques in the TOPSIS method on the final rankings was examined using a simulation environment. The above literature analysis unambiguously shows the effectiveness of simulation environments in benchmarking methods from the MCDA family and, at the same time, justifies the research methods used in this article.

3. Preliminaries

3.1. MCDA Methods

In this section, we introduce the formal foundations of the MCDA methods used during the simulation. We selected three methods belonging to the American school (TOPSIS, VIKOR, and COPRAS) and one popular method of the European school, PROMETHEE II.

3.1.1. TOPSIS

The first one is the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). In this approach, we measure the distance of the alternatives from reference elements, which are the positive and the negative ideal solution, respectively. This method was widely presented in [9,134]. The TOPSIS method is a simple MCDA technique used in many practical problems; thanks to its simplicity, it is widely used in solving multi-criteria problems. Below we present its algorithm [9]. We assume that a decision matrix with m alternatives and n criteria is represented as X = (x_{ij})_{m \times n}.
Step 1. Calculate the normalized decision matrix. The normalized values r_{ij} are calculated according to Equation (1) for profit criteria and Equation (2) for cost criteria. We use this normalization method because [11] shows that it performs better than classical vector normalization; however, any other normalization method can also be used.
r_{ij} = \frac{x_{ij} - \min_i(x_{ij})}{\max_i(x_{ij}) - \min_i(x_{ij})} \quad (1)
r_{ij} = \frac{\max_i(x_{ij}) - x_{ij}}{\max_i(x_{ij}) - \min_i(x_{ij})} \quad (2)
Step 2. Calculate the weighted normalized decision matrix v_{ij} according to Equation (3).
v_{ij} = w_j r_{ij} \quad (3)
Step 3. Calculate the Positive Ideal Solution (PIS) and Negative Ideal Solution (NIS) vectors. The PIS is defined as the maximum value of each criterion (4) and the NIS as the minimum value (5). We do not need to split the criteria into profit and cost here, because the normalization used in Step 1 turns cost criteria into profit criteria.
v^+ = \{ v_1^+, v_2^+, \ldots, v_n^+ \} = \{ \max_i(v_{ij}) \} \quad (4)
v^- = \{ v_1^-, v_2^-, \ldots, v_n^- \} = \{ \min_i(v_{ij}) \} \quad (5)
Step 4. Calculate the distance of each alternative from the PIS and the NIS, as shown in Equations (6) and (7).
D_i^+ = \sqrt{\sum_{j=1}^{n} \left( v_{ij} - v_j^+ \right)^2} \quad (6)
D_i^- = \sqrt{\sum_{j=1}^{n} \left( v_{ij} - v_j^- \right)^2} \quad (7)
Step 5. Calculate each alternative's score according to Equation (8). This value always lies between 0 and 1, and alternatives with values closer to 1 are better.
C_i = \frac{D_i^-}{D_i^- + D_i^+} \quad (8)
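The five steps above can be sketched in a few lines of Python (a minimal illustration using NumPy; the function name and the encoding of profit/cost criteria via a `types` vector are our own conventions, not from the paper):

```python
import numpy as np

def topsis(matrix, weights, types):
    """TOPSIS with min-max normalization (Equations (1)-(8)).

    matrix  -- (m, n) array: m alternatives evaluated on n criteria
    weights -- (n,) criteria weights summing to 1
    types   -- (n,) vector with +1 for profit and -1 for cost criteria
    """
    x = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Step 1: min-max normalization; cost criteria become profit criteria.
    span = x.max(axis=0) - x.min(axis=0)
    r = np.where(np.asarray(types) == 1,
                 (x - x.min(axis=0)) / span,
                 (x.max(axis=0) - x) / span)
    # Step 2: weighted normalized decision matrix.
    v = r * w
    # Step 3: positive and negative ideal solutions (PIS, NIS).
    pis, nis = v.max(axis=0), v.min(axis=0)
    # Step 4: Euclidean distances to PIS and NIS.
    d_plus = np.sqrt(((v - pis) ** 2).sum(axis=1))
    d_minus = np.sqrt(((v - nis) ** 2).sum(axis=1))
    # Step 5: closeness coefficient; values closer to 1 are better.
    return d_minus / (d_plus + d_minus)

# Toy example: criterion 1 is a profit criterion, criterion 2 a cost criterion.
scores = topsis([[3, 7], [5, 4], [9, 1]], [0.5, 0.5], [1, -1])
```

In this toy matrix the third alternative dominates on both criteria, so it receives the maximal closeness coefficient of 1.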

3.1.2. VIKOR

VIKOR is an acronym in Serbian that stands for VlseKriterijumska Optimizacija I Kompromisno Resenje. The decision maker chooses the alternative that is the closest to the ideal, and the solutions are assessed according to all considered criteria. The VIKOR method was originally introduced by Opricovic [135], and the whole algorithm is presented in [134]. Both VIKOR and TOPSIS are based on closeness to ideal objects [136]. However, they differ in their operational approach and in how they treat the concept of proximity to the ideal solution.
The VIKOR method, similarly to the TOPSIS method, is based on distance measurements; in this approach, a compromise solution is sought. The description of the method is quoted according to [135,136]. Let us say that a decision matrix with m alternatives and n criteria is represented as X = (f_{ij}(A_i))_{m \times n}. Before applying the method, the decision matrix can be normalized with one of the methods described in Section 3.2.
Step 1. Determine the best f_j^* and the worst f_j^- values of each criterion function. Use (9) for profit criteria and (10) for cost criteria.
f_j^* = \max_i f_{ij}, \quad f_j^- = \min_i f_{ij} \quad (9)
f_j^* = \min_i f_{ij}, \quad f_j^- = \max_i f_{ij} \quad (10)
Step 2. Calculate the S i and R i values by Equations (11) and (12).
S_i = \sum_{j=1}^{n} w_j \left( f_j^* - f_{ij} \right) / \left( f_j^* - f_j^- \right) \quad (11)
R_i = \max_j \left[ w_j \left( f_j^* - f_{ij} \right) / \left( f_j^* - f_j^- \right) \right] \quad (12)
Step 3. Compute the Q i values using Equation (13).
Q_i = v \left( S_i - S^* \right) / \left( S^- - S^* \right) + (1 - v) \left( R_i - R^* \right) / \left( R^- - R^* \right) \quad (13)
where
S^* = \min_i S_i, \quad S^- = \max_i S_i
R^* = \min_i R_i, \quad R^- = \max_i R_i
and v is introduced as a weight of the strategy of "the majority of criteria". We use v = 0.5 here.
Step 4. Rank the alternatives by sorting the values of S, R, and Q in ascending order. The result is three ranking lists.
Step 5. Normally, the S, R, and Q ranking lists should be used to propose a compromise solution or a set of compromise solutions, as shown in [134,136]. However, in this paper we use only the Q ranking list.
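The computation of the Q values can be sketched as follows (an illustrative NumPy implementation; the function name and `types` encoding are ours, and no prior normalization is applied, matching the "none" variant used later in the experiments):

```python
import numpy as np

def vikor(matrix, weights, types, v=0.5):
    """VIKOR Q values (Equations (9)-(13)); a lower Q means a better rank.

    types: +1 for profit criteria, -1 for cost criteria.
    """
    x = np.asarray(matrix, dtype=float)
    t = np.asarray(types)
    # Step 1: best f* and worst f- value of each criterion function.
    f_star = np.where(t == 1, x.max(axis=0), x.min(axis=0))
    f_minus = np.where(t == 1, x.min(axis=0), x.max(axis=0))
    # Step 2: weighted, scaled regrets: S_i (group utility), R_i (max regret).
    d = np.asarray(weights) * (f_star - x) / (f_star - f_minus)
    S, R = d.sum(axis=1), d.max(axis=1)
    # Step 3: compromise index Q_i; v weighs the "majority of criteria" strategy.
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q

Q = vikor([[3.0, 7.0], [5.0, 4.0], [9.0, 1.0]], [0.5, 0.5], [1, -1])
```

For the same toy matrix as before, the dominating third alternative obtains Q = 0, i.e., the best compromise rank. Note that the sketch assumes non-degenerate data (the denominators in Steps 2 and 3 must not be zero).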

3.1.3. COPRAS

The third method used is COPRAS (Complex Proportional Assessment), introduced by Zavadskas [137,138]. This approach assumes a direct and proportional dependence of the significance of the investigated variants on a system of criteria adequately describing the decision variants and on the values and weights of the criteria [139]. The method ranks alternatives based on their relative significance (weight); the final ranking is created using the positive and negative ideal solutions [138,140]. Assuming that a decision matrix with m alternatives and n criteria is represented as X = (f_{ij}(A_i))_{m \times n}, the COPRAS method is defined in five steps:
Step 1. Calculate the normalized decision matrix using Equation (14).
r_{ij} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}} \quad (14)
Step 2. Calculate the weighted normalized decision matrix, obtained by multiplying the elements of the normalized decision matrix by the appropriate weight coefficients, using Equation (15).
v_{ij} = r_{ij} \cdot w_j \quad (15)
Step 3. Determine the sums of the weighted normalized values calculated previously. Equation (16) should be used for profit criteria and Equation (17) for cost criteria.
S_{+i} = \sum_{j=1}^{k} v_{ij} \quad (16)
S_{-i} = \sum_{j=k+1}^{n} v_{ij} \quad (17)
where k is the number of attributes that must be maximized (for the remaining attributes, from k + 1 to n, lower values are preferred). The S_{+i} and S_{-i} values show the level of goal achievement for the alternatives. A higher value of S_{+i} means that an alternative is better, and a lower value of S_{-i} also points to a better alternative.
Step 4. Calculate the relative significance of alternatives using Equation (18).
Q_i = S_{+i} + \frac{S_{-\min} \cdot \sum_{i=1}^{m} S_{-i}}{S_{-i} \cdot \sum_{i=1}^{m} \left( S_{-\min} / S_{-i} \right)} \quad (18)
Step 5. The final ranking is performed according to the U_i values (19).
U_i = \frac{Q_i}{Q_i^{\max}} \cdot 100\% \quad (19)
where Q_i^{\max} stands for the maximum value of the utility function. Better alternatives have a higher U_i value; e.g., the best alternative has U_i = 100%.
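A compact NumPy sketch of the five COPRAS steps follows (illustrative only; the function name and `types` encoding are our assumptions, and criteria need not be pre-sorted with profit criteria first, since a boolean mask is used):

```python
import numpy as np

def copras(matrix, weights, types):
    """COPRAS utility degree U_i in percent (Equations (14)-(19)).

    types: +1 for profit criteria, -1 for cost criteria.
    """
    x = np.asarray(matrix, dtype=float)
    t = np.asarray(types)
    # Steps 1-2: sum normalization followed by weighting.
    v = x / x.sum(axis=0) * np.asarray(weights)
    # Step 3: sums over profit (S+) and cost (S-) criteria.
    s_plus = v[:, t == 1].sum(axis=1)
    s_minus = v[:, t == -1].sum(axis=1)
    # Step 4: relative significance Q_i (Equation (18)).
    Q = s_plus + (s_minus.min() * s_minus.sum()
                  / (s_minus * (s_minus.min() / s_minus).sum()))
    # Step 5: utility degree relative to the best alternative.
    return Q / Q.max() * 100

U = copras([[3, 7], [5, 4], [9, 1]], [0.5, 0.5], [1, -1])
```

As expected, the dominating third alternative of the toy matrix obtains U = 100%.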

3.1.4. PROMETHEE II

The Preference Ranking Organization Method for Enrichment of Evaluations (PROMETHEE) is a family of MCDA methods developed by Brans [18,141]. Its input data are similar to those of the other methods, but it additionally requires choosing a preference function and some other parameters. In this article, we use the PROMETHEE II method because its output is a full ranking of the alternatives. In this approach, a complete ranking of the actions is based on the multi-criteria net flow, and it includes preferences and indifferences (preorder) [142]. According to [134,141], PROMETHEE II is designed to solve the following multi-criteria problem:
\max \left\{ g_1(a), g_2(a), \ldots, g_n(a) \mid a \in A \right\} \quad (20)
where A is a finite set of alternatives and g_i(\cdot) is a set of evaluation criteria either to be maximized or minimized. In other words, g_i(a_j) is the value of criterion i for alternative a_j. With these values and the weights, we can define the evaluation table (see Table 1).
Step 1. After defining the problem as described above, calculate the preference function values, defined as (21) for profit criteria.
P(a, b) = F\left[ d(a, b) \right], \quad \forall a, b \in A \quad (21)
where d ( a , b ) is the difference between two actions (pairwise comparison):
d(a, b) = g(a) - g(b) \quad (22)
The value of the preference function P is always between 0 and 1, and it is calculated for each criterion. Table 2 presents the possible preference functions.
These preference functions and parameters such as p and q allow one to customize the PROMETHEE model. For our experiments, these parameters are calculated according to Equations (23) and (24):
q = \bar{D} - k \cdot \sigma_D \quad (23)
p = \bar{D} + k \cdot \sigma_D \quad (24)
where D denotes the positive values of the differences d (22) for each criterion and k is a modifier. In our experiments, k \in \{0.25, 0.5, 1.0\}.
Step 2. Calculate the aggregated preference indices (25).
\pi(a, b) = \sum_{j=1}^{n} P_j(a, b) \, w_j, \quad \pi(b, a) = \sum_{j=1}^{n} P_j(b, a) \, w_j \quad (25)
where a and b are alternatives and \pi(a, b) expresses how much alternative a is preferred to b over all criteria. The properties (26) must hold for every pair of alternatives from the set A.
\pi(a, a) = 0, \quad 0 \le \pi(a, b) \le 1, \quad 0 \le \pi(b, a) \le 1, \quad 0 \le \pi(a, b) + \pi(b, a) \le 1 \quad (26)
Step 3. Next, calculate positive (27) and negative (28) outranking flows.
\phi^+(a) = \frac{1}{m - 1} \sum_{x \in A} \pi(a, x) \quad (27)
\phi^-(a) = \frac{1}{m - 1} \sum_{x \in A} \pi(x, a) \quad (28)
Step 4. In this article, we use only PROMETHEE II, which results in a complete ranking of the alternatives. The ranking is based on the net flow \Phi (29).
\Phi(a) = \phi^+(a) - \phi^-(a) \quad (29)
A larger value of \Phi(a) means a better alternative.
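The flow computation can be sketched in NumPy for the simplest ("usual") preference function (an illustration under our own conventions; for the usual function only the sign of d(a, b) matters, so cost criteria can be handled by negation):

```python
import numpy as np

def promethee_ii(matrix, weights, types):
    """PROMETHEE II net flows with the 'usual' preference function
    (P(d) = 1 if d > 0, else 0); a larger net flow means a better alternative.
    """
    # Negating cost criteria turns them into profit criteria; this is safe
    # here because the usual preference function depends only on sign(d).
    x = np.asarray(matrix, dtype=float) * np.asarray(types)
    m = x.shape[0]
    # d(a, b) for every ordered pair of alternatives and every criterion.
    d = x[:, None, :] - x[None, :, :]
    P = (d > 0).astype(float)                    # usual preference function
    pi = (P * np.asarray(weights)).sum(axis=2)   # aggregated indices (25)
    # Positive (27), negative (28), and net (29) outranking flows.
    phi_plus = pi.sum(axis=1) / (m - 1)
    phi_minus = pi.sum(axis=0) / (m - 1)
    return phi_plus - phi_minus

flows = promethee_ii([[3, 7], [5, 4], [9, 1]], [0.5, 0.5], [1, -1])
```

For the toy matrix, the net flows sum to zero (a general property of PROMETHEE II) and rank the dominating third alternative first. Other preference functions (Table 2) would additionally use the q and p thresholds from (23) and (24).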

3.2. Normalization Methods

In the literature, there is no clear assignment of data normalization methods to particular MCDA methods. This poses a problem, as it is necessary to consider the influence of a particular normalization on the result. The most common normalization methods used in MCDA can be divided into two groups [143], i.e., methods designed for profit criteria ((30), (32), (34), and (36)) and for cost criteria ((31), (33), (35), and (37)).
The minimum-maximum method: In this approach, the greatest and the least values in the considered set are used. The formulas are described as follows (30) and (31):
r_{ij} = \frac{x_{ij} - \min_i(x_{ij})}{\max_i(x_{ij}) - \min_i(x_{ij})} \quad (30)
r_{ij} = \frac{\max_i(x_{ij}) - x_{ij}}{\max_i(x_{ij}) - \min_i(x_{ij})} \quad (31)
The maximum method: In this technique, only the greatest value in the considered set is used. The formulas are described as follows (32) and (33):
r_{ij} = \frac{x_{ij}}{\max_i(x_{ij})} \quad (32)
r_{ij} = 1 - \frac{x_{ij}}{\max_i(x_{ij})} \quad (33)
The sum method: In this method, the sum of all values in the considered set is used. The formulas are described as follows (34) and (35):
r_{ij} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}} \quad (34)
r_{ij} = \frac{1 / x_{ij}}{\sum_{i=1}^{m} \left( 1 / x_{ij} \right)} \quad (35)
The vector method: In this method, each value is divided by the square root of the sum of squares of all values in the considered set. The formulas are described as follows ((36) and (37)):
r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^2}} \quad (36)
r_{ij} = 1 - \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^2}} \quad (37)
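The four normalization pairs can be gathered into one helper function (an illustrative sketch; the function name and the `cost` flag are our own, and each criterion column is assumed to be strictly positive where reciprocals or divisions are taken):

```python
import numpy as np

def normalize(column, method, cost=False):
    """The four normalizations of Section 3.2 applied to one criterion column;
    cost=True selects the cost-criteria variants (31), (33), (35), and (37)."""
    x = np.asarray(column, dtype=float)
    if method == "minmax":
        r = (x - x.min()) / (x.max() - x.min())
    elif method == "max":
        r = x / x.max()
    elif method == "sum":
        # Profit: x / sum(x); cost: (1/x) / sum(1/x) -- note the different form.
        return (1 / x) / (1 / x).sum() if cost else x / x.sum()
    elif method == "vector":
        r = x / np.sqrt((x ** 2).sum())
    else:
        raise ValueError(f"unknown method: {method}")
    # For minmax, max, and vector, the cost variant is simply 1 - r.
    return 1 - r if cost else r

col = [1.0, 2.0, 4.0]
```

For example, `normalize(col, "minmax")` maps the column to [0, 1/3, 1], while the cost variants reverse the preference direction.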

3.3. Weighting Methods

In this section, we present three popular methods related to objective criteria weighting. These are the most popular methods currently found in the literature. In the future, this set should be extended to other methods.

3.3.1. Equal Weights

The first and least effective weighting method is the equal weights method. All criteria weights are equal and calculated by Equation (38), where n is the number of criteria.
w_j = 1/n \quad (38)

3.3.2. Entropy Method

According to [33], the entropy method is based on a measure of uncertainty in the information. It is calculated using Equations (39)–(41) below.
p_{ij} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}, \quad i = 1, \ldots, m; \; j = 1, \ldots, n \quad (39)
E_j = - \frac{\sum_{i=1}^{m} p_{ij} \ln(p_{ij})}{\ln(m)}, \quad j = 1, \ldots, n \quad (40)
w_j = \frac{1 - E_j}{\sum_{i=1}^{n} (1 - E_i)}, \quad j = 1, \ldots, n \quad (41)
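Equations (39)-(41) translate directly into a few NumPy lines (a sketch under our naming; a strictly positive decision matrix is assumed so the logarithms are defined):

```python
import numpy as np

def entropy_weights(matrix):
    """Entropy weighting (Equations (39)-(41))."""
    x = np.asarray(matrix, dtype=float)
    m = x.shape[0]
    p = x / x.sum(axis=0)                          # (39)
    E = -(p * np.log(p)).sum(axis=0) / np.log(m)   # (40)
    return (1 - E) / (1 - E).sum()                 # (41)

# A constant column carries no information, so it receives weight 0.
w = entropy_weights([[1.0, 1.0], [1.0, 9.0]])
```

In this two-criteria example, the first (constant) criterion has maximal entropy E = 1 and therefore zero weight, while all the weight goes to the discriminating second criterion.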

3.3.3. Standard Deviation Method

This method is to some extent similar to the entropy method; it assigns small weights to an attribute that has similar values across the alternatives. The SD method is defined by Equations (42) and (43), where w_j is the weight of criterion j and \sigma_j is its standard deviation [33].
\sigma_j = \sqrt{\frac{\sum_{i=1}^{m} \left( x_{ij} - \bar{x}_j \right)^2}{m}}, \quad j = 1, \ldots, n \quad (42)
w_j = \sigma_j \Big/ \sum_{j=1}^{n} \sigma_j, \quad j = 1, \ldots, n \quad (43)
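A corresponding sketch (names are ours; note that NumPy's default population standard deviation, i.e., division by m, matches Equation (42)):

```python
import numpy as np

def std_weights(matrix):
    """Standard deviation weighting (Equations (42)-(43))."""
    x = np.asarray(matrix, dtype=float)
    sigma = x.std(axis=0)        # ddof=0 by default, i.e., division by m
    return sigma / sigma.sum()   # (43)

# Column 1 varies five times more than column 2, so it gets 5x the weight.
w = std_weights([[0.0, 0.2], [1.0, 0.4]])
```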

3.4. Correlation Coefficients

Correlation coefficients make it possible to compare the obtained results and determine how similar they are. In this paper, we compare the ranking lists obtained by several MCDA methods using the Spearman rank correlation coefficient ((44) and (45)), the weighted Spearman correlation coefficient (46), and the rank similarity coefficient (47).

3.4.1. Spearman’s Rank Correlation Coefficient

For rank vectors rg_X and rg_Y, the coefficient is defined as (44). However, if the preference values are unique and do not repeat (each variant has a different position in the ranking), the simplified formula (45) can be used.
r_s = \frac{\operatorname{cov}(rg_X, rg_Y)}{\sigma_{rg_X} \, \sigma_{rg_Y}} \quad (44)
r_s = 1 - \frac{6 \sum_{i=1}^{N} \left( rg_{X_i} - rg_{Y_i} \right)^2}{N \left( N^2 - 1 \right)} \quad (45)

3.4.2. Weighted Spearman’s Rank Correlation Coefficient

For a sample of size N with rank values x_i and y_i, the coefficient is defined as (46). In this approach, the positions at the top of both rankings are more important: a significance weight is calculated for each comparison. This is the main difference from the Spearman rank correlation coefficient, which examines whether differences appeared, not where they appeared.
r_w = 1 - \frac{6 \sum_{i=1}^{N} (x_i - y_i)^2 \left( (N - x_i + 1) + (N - y_i + 1) \right)}{N^4 + N^3 - N^2 - N} \quad (46)

3.4.3. Rank Similarity Coefficient

For a sample of size N with rank values x_i and y_i, the coefficient is defined as (47) [144]. It is an asymmetric measure: the weight of a given comparison is determined by the significance of the position in the first ranking, which serves as the reference ranking during the calculation.
WS = 1 - \sum_{i=1}^{N} 2^{-x_i} \cdot \frac{\left| x_i - y_i \right|}{\max \left( \left| x_i - 1 \right|, \left| x_i - N \right| \right)} \quad (47)
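Both coefficients used in the remainder of the paper can be sketched as follows (illustrative NumPy implementations of (46) and (47); function names are ours):

```python
import numpy as np

def weighted_spearman(x, y):
    """Weighted Spearman coefficient r_w (46); x, y are rank vectors 1..N."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    N = len(x)
    num = 6 * ((x - y) ** 2 * ((N - x + 1) + (N - y + 1))).sum()
    return 1 - num / (N**4 + N**3 - N**2 - N)

def ws_coefficient(x, y):
    """Rank similarity coefficient WS (47); x is the reference ranking."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    N = len(x)
    return 1 - (2.0 ** -x * np.abs(x - y)
                / np.maximum(np.abs(x - 1), np.abs(x - N))).sum()

a, b = [1, 2, 3, 4, 5], [5, 4, 3, 2, 1]
```

For identical rankings both coefficients equal 1; for fully reversed rankings r_w equals -1, while WS remains positive because top-weighted discrepancies are bounded. Swapping the argument order of `ws_coefficient` generally changes its value, which illustrates the asymmetry of the measure.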

4. Study Case and Numerical Examples

The main goal of the experiments is to test whether the choice of the MCDA method or the weight calculation method has an impact on the final ranking and how significant this impact is. For this purpose, we applied four commonly used classical MCDA methods, listed in Section 3.1. In the case of the TOPSIS and VIKOR methods, we use different normalization methods, and for the PROMETHEE method, we use various preference functions and different p and q values. The primary way of analyzing the results obtained from the numerical experiments is to use the selected correlation coefficients described in Section 3.4.
Algorithm 1 presents simplified pseudo-code of the experiment, in which we process matrices with different numbers of criteria and alternatives. The number of criteria changes from 2 to 5, and the number of alternatives belongs to the set { 3, 5, 10, 50, 100 }. For each of the 20 combinations of the numbers of alternatives and criteria, 1000 random decision matrices were generated. They contain the attribute values of all analyzed alternatives for all analyzed criteria. The preference values of the drawn alternatives are not known, but three different vectors of criteria weights are derived from these data. Rankings are then calculated using the different methods with different settings. In this way, we obtain research material in the form of rankings calculated using different approaches, and by analyzing the results with similarity coefficients, we try to determine the similarity of the obtained results. For each matrix, we perform the following steps:
Step 1.
Calculate 3 vectors of weights, using equations described in Section 3.3;
Step 2.
Split the criteria into profit and cost criteria: assuming we have n criteria, the first n/2 are considered profit criteria and the remaining ones are considered cost criteria;
Step 3.
Compute 3 rankings using MCDA methods listed in Section 3.1 and three different weighting vectors.
Algorithm 1 Research algorithm
N ← 1000
for num_of_crit = 2 to 5 do
    types ← generate_crit_types(num_of_crit)
    equal_weights ← generate_equal_weights(num_of_crit)
    for num_of_alts in [3, 5, 10, 50, 100] do
        for i = 1 to N do
            matrix ← generate_random_matrix(num_of_alts, num_of_crit)
            entropy_weights ← entropy_weights(matrix)
            std_weights ← std_weights(matrix)
            result ← Result()
            for method in methods do
                result.add(method(matrix, equal_weights, types))
                result.add(method(matrix, entropy_weights, types))
                result.add(method(matrix, std_weights, types))
            end for
            save_result(result)
        end for
    end for
end for
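A runnable Python counterpart of the research loop can be sketched as follows (a simplified illustration: the function names, the seeded random generator, and the restriction to the equal-weights branch are our own choices; the profit/cost split of Step 2 is implemented as "first half profit", assuming integer division for odd n):

```python
import numpy as np

rng = np.random.default_rng(0)

def run_experiment(methods, n_repeats=1000):
    """Sketch of the research algorithm. `methods` maps a name to a callable
    method(matrix, weights, types) returning a score vector per alternative."""
    results = []
    for num_of_crit in range(2, 6):
        # Step 2 of the procedure: first half profit (+1), the rest cost (-1).
        types = np.where(np.arange(num_of_crit) < num_of_crit // 2, 1, -1)
        equal_weights = np.full(num_of_crit, 1 / num_of_crit)
        for num_of_alts in [3, 5, 10, 50, 100]:
            for _ in range(n_repeats):
                matrix = rng.random((num_of_alts, num_of_crit))
                for name, method in methods.items():
                    results.append((name, method(matrix, equal_weights, types)))
    return results
```

In the full experiment, the entropy and standard deviation weight vectors would be computed per matrix and passed to each method as well, exactly as in the pseudo-code above.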
The remainder of this section presents two examples intended to better explain our simulation study. The sample data and the way they are handled are reviewed in the following subsections. Due to the vast number of generated results, some figures have been placed in Appendix A for clarity.

4.1. Decision Matrices

We have chosen two random matrices with three criteria and five alternatives to show exactly how these matrices are processed during the experiment. The matrices were chosen to demonstrate how different or similar the rankings obtained with different MCDA methods can be in particular cases. Table 3 and Table A1 contain the chosen matrices and their weights, calculated with the three methods described in Section 3.3. In the following sections, the example from Table 3 is discussed separately for clarity; the results for the second example (Table A1) are shown in Appendix B.

4.2. TOPSIS

Processing a matrix with the TOPSIS method, using four different normalization methods and three different weighting methods, gives us 12 rankings. Table 4 presents all rankings for the analyzed example shown in Table 3, and the rankings for the second example are shown in Table A2. The orders in both cases are not identical, and we can observe that the impact of the particular parameters varies for almost every case in the first example (Table 3). It depends mainly on the applied normalization method and the selected weight vector. However, we need to determine the similarity of the considered rankings precisely. Therefore, we use similarity coefficients, which are presented in the heat maps in Figure 2 and Figure 3. The results for the second numerical example are presented in Figure A1 and Figure A2. These figures show exactly how different these rankings are according to the r_w and WS coefficients described in Section 3.4. Each figure shows three heat maps corresponding to the three weighting methods.
Figure 2 and Figure 3 present the r_w and WS correlations between rankings obtained using different normalization methods with the TOPSIS method. The most significant difference is obtained for the ranking calculated using the entropy weighting method and sum-based normalization: a single discrepancy appears for the alternatives A_1 and A_5 when comparing this ranking with the rest of the rankings, i.e., a swap between positions 2 and 5. These figures also show that the WS coefficient is asymmetric, as opposed to r_w; therefore, both coefficients will also be used in the further analyses.
Next, Figure A1 and Figure A2 show the r_w and WS correlations for the second matrix. The rankings obtained using different normalization methods with TOPSIS are far more correlated than for the first matrix. The entropy weights gave us identical rankings, and only the ranking for the minmax normalization method is slightly different under the equal weights and standard deviation weighting methods.

4.3. VIKOR

Next, we calculate rankings for both matrices using the VIKOR method in combination with four different normalization methods and three weighting methods. Additionally, we also use VIKOR without normalization (represented by "none" in the tables and figures). The rankings are shown in Table 5 and Table A3. These rankings differ more from one another than the rankings obtained using the TOPSIS method with different normalization methods.
Figure 4 and Figure 5, with the r_w and WS correlation coefficients, show that in this case the entropy weighting method performed worse than the other two weighting methods. In addition, it is noticeable how small the correlation is between VIKOR without normalization and the other variants. For the entropy weighting method, the r_w value is −1.0, which means that the ranking obtained with VIKOR without normalization is the reverse of VIKOR with any of the other normalization methods.
The rankings calculated for the second matrix are more correlated with one another, as presented in the heat maps in Figure A2 and Figure A3. Similarly to TOPSIS, the entropy weighting method gives perfectly correlated rankings, while the other two weighting methods give less correlated rankings. Moreover, it is noticeable that the ranking obtained using VIKOR without normalization is less correlated with VIKOR combined with the normalization methods.

4.4. PROMETHEE II

For exemplary purposes, we use the PROMETHEE II method with five different preference functions, and the q and p values for them are calculated as follows:
q = \bar{D} - (0.25 \cdot \sigma_D),
p = \bar{D} + (0.25 \cdot \sigma_D),
where D stands for the positive values of the differences (see Section 3.1.4 for a more detailed explanation). As previously mentioned, Table 6 and Table A4 contain the rankings obtained using the different preference functions and different weighting methods.
According to Figure 6 and Figure 7, the entropy weighting method gives slightly less correlated rankings than the other two methods. It is noticeable that for equal weights and for the standard deviation method, the ranking obtained using the U-shape preference function was not the same as the other rankings.
For the second matrix, the rankings obtained with the usual preference function and equal weights are quite different from the other rankings, as shown in Figure A5 and Figure A6. For the entropy weighting method, we can see that the rankings obtained with the usual and the v-shape 2 preference functions are equal, but applying the other preference functions gives slightly different rankings. In the case of the standard deviation weighting method, the usual, level, and v-shape 2 preference functions give equal rankings, which, as previously mentioned, differ from the rankings obtained with the other preference functions.

4.5. Different Methods

In this part, we show how different the rankings obtained by different methods can be. We compare rankings obtained by TOPSIS with minmax normalization, VIKOR without normalization, PROMETHEE II with the usual preference function, and COPRAS. The rankings obtained for the first and second matrices are shown in Table 7 and Table A5, respectively.
The correlation heat maps are shown in Figure 8 and Figure 9. The differences between the rankings are most visible in the case of the first example. It can be seen that the ranking obtained using VIKOR with the entropy weighting method is the reverse of the rankings obtained using the TOPSIS minmax and PROMETHEE II methods; it is also poorly correlated with the ranking obtained with the COPRAS method. It is also noticeable that for the other two weighting methods the VIKOR ranking is much less correlated with the other rankings.
For the second matrix, the situation is the opposite. As shown in Figure A7 and Figure A8, the rankings obtained using the entropy weighting method are identical. For the other two weighting methods, we notice that the rankings obtained by VIKOR are less correlated with the other rankings.

4.6. Summary

Based on the results of the two numerical examples presented in Section 4.1, Section 4.2, Section 4.3, Section 4.4 and Section 4.5, it can be seen that the selection of the MCDA method has an important influence on the final ranking. The two numerical examples were presented to show positive and negative cases. In both samples, it can be seen that once a given method is selected, the weighting method and the normalization used within this method play a key role. The impact of the parameters on the decision-making process differed between the discussed examples. Therefore, simulations should be performed to examine the typical similarity between the obtained rankings for different numbers of alternatives. In Section 4.5, comparisons are made between methods using the most common configurations, where each of the analyzed scenarios has a huge impact on the final ranking. Therefore, simulation studies showing the similarity of the examined rankings are conducted in the next section.

5. Results and Discussion

5.1. TOPSIS

In this section, we compare TOPSIS across the different normalization methods. Figure 10, Figure 11 and Figure 12 contain the results of the simulations. Each dot represents the correlation between two rankings for a certain random matrix, and its color indicates the weighting method used. For TOPSIS, we compare rankings with the three correlation coefficients described in Section 3.4, but the other methods will be compared only using the r_w and WS correlation coefficients, because the Spearman correlation has some limitations. First, we could not use the r_s correlation for rankings whose standard deviation is 0, which frequently occurs for the VIKOR method with decision matrices containing a small number of alternatives. Second, the cardinality of the set of values of the Spearman correlation coefficient is strictly smaller than the cardinality of the sets of possible values of the other two correlation coefficients. This is clearly visible in Figure 10 for five alternatives.
Figure 11 shows the r_w correlation between rankings obtained using the TOPSIS method with different normalization methods. We observe some reversed rankings for matrices with three alternatives, but with a greater number of alternatives, the rankings become increasingly similar. It is noticeable that for a large number of alternatives, such as 50 or 100, the rankings obtained using the minmax vs. max, minmax vs. vector, and max vs. vector normalization methods are almost perfectly correlated. This is clearly seen in Table A6, which contains the mean values of these correlations. The table gives the average correlation values for each number of alternatives (3, 5, 10, 50, or 100) and number of criteria (2, 3, 4, and 5). Overall, the value of r_w reaches its lowest values for five criteria and three alternatives. The closest results are obtained for the max and minmax normalizations when the equal weights method is not applied: in the worst case, we then get a coefficient value of 0.897, whereas for the equal weights method we get 0.735. The other pairs of normalizations reach, in the worst case, an r_w correlation of 0.571. It is also visible that the rankings obtained using the sum normalization method are less correlated with the rankings obtained with the other normalization methods. We can therefore conclude that sum normalization performs poorly with TOPSIS compared to the other normalization methods.
Figure 12 shows results similar to those for r w . Rankings obtained using sum normalization are less correlated with the other rankings. It is noticeable that, according to the W S similarity coefficient, there are more poorly correlated rankings for the sum vs. vector normalization case than for the r w correlation. Interpretation may be complicated by the fact that the two coefficients have different domains. Nevertheless, this is the only case where increasing the number of alternatives brings no improvement in the similarity of the obtained rankings, which the detailed data in Table A7 confirm. It may also come as a surprise that the choice of the weighting method is not very important: relatively similar results are obtained regardless of the method used.
In general, it follows that with a small number of alternatives, not exceeding 10 decision-making options, the rankings may vary considerably depending on the normalization chosen. Thus, the choice of the normalization method is a problem as important as the choice of the weighting method. For larger sets of alternatives the differences are also significant, although smaller ones can be expected. The analysis of the data in Table A6 and Table A7 shows that increasing the complexity of the problem, understood as an increase in the number of criteria, almost always decreases the similarity between the results obtained with different normalizations.
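The four normalization techniques compared above can be sketched as follows; these are the standard benefit-criterion variants (cost criteria would need the mirrored formulas), shown here on criterion C 1 of the example matrix from Table A1:

```python
import numpy as np

def minmax_norm(x):
    # Rescale to [0, 1] using the column range.
    return (x - x.min()) / (x.max() - x.min())

def max_norm(x):
    # Divide by the column maximum.
    return x / x.max()

def sum_norm(x):
    # Divide by the column sum (values then sum to 1).
    return x / x.sum()

def vector_norm(x):
    # Divide by the Euclidean norm of the column.
    return x / np.sqrt((x ** 2).sum())

col = np.array([0.947, 0.018, 0.565, 0.423, 0.664])  # C1 from Table A1
for f in (minmax_norm, max_norm, sum_norm, vector_norm):
    print(f.__name__, np.round(f(col), 3))
```

Note that minmax is the only technique of the four that maps the column minimum to 0, which is one source of the divergent rankings observed for small matrices.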

5.2. VIKOR

As mentioned previously, we use only the r w and W S correlation coefficients for the following comparisons. Figure 13 shows the correlation between rankings obtained using VIKOR with different normalizations and without normalization. It is clearly seen that VIKOR without normalization is poorly correlated with VIKOR when normalization is applied. Table A8 shows that the mean correlation between VIKOR without normalization and the cases where normalization was applied is around 0. It is also noticeable that VIKOR with minmax, max, and vector normalization gives very similar rankings.
This is similar to the TOPSIS method, where sum normalization produces less similar rankings than the other methods. Interestingly, correlations between VIKOR without normalization and VIKOR with any normalization have a mean around zero, but the rankings obtained with equal weights have less variance. In this case, less variability may be a concern, as it means that compatible rankings cannot be obtained. Apart from the three exceptions mentioned earlier, it should be noted that the choice of normalization has a significantly stronger impact on the similarity of rankings than in the TOPSIS method.
We can observe a similar situation in Figure 14, where the W S similarity coefficient values are presented. It confirms the results obtained using the r w correlation coefficient: rankings obtained using VIKOR without normalization are less correlated with rankings obtained using VIKOR with normalization than the other rankings are with each other. It is also noticeable that in the sum vs. vector normalization case, the W S similarity coefficient values are visibly smaller, as was the case for TOPSIS. Therefore, we can conclude that rankings obtained using the sum and vector normalization methods are usually quite different.
A detailed analysis of the mean values of r w (Table A8) and the mean W S (Table A9) shows that the mean similarity is significantly lower than for the TOPSIS method. Besides, the variation of both coefficients across the tables is more random and more challenging to predict. It means that whether we apply the method with or without normalization, and which normalization we apply, significantly affects the final ranking. Again, the influence of the selection of the criteria weighting method was not as significant as one might expect. Generally, this demonstrates that normalization can have a considerable impact on the final result.
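For reference, the VIKOR variant underlying this comparison can be sketched with the standard S/R/Q formulation. The sketch below assumes all criteria are benefit-type and the usual compromise parameter v = 0.5; applied to the matrix 2 data from Table A1 with equal weights, it reproduces the no-normalization column (a) of Table A3:

```python
import numpy as np

def vikor(matrix, weights, v=0.5):
    """Standard VIKOR S/R/Q formulation (benefit criteria assumed).
    Lower Q is better; normalization, if any, is applied beforehand."""
    f_best = matrix.max(axis=0)
    f_worst = matrix.min(axis=0)
    frac = weights * (f_best - matrix) / (f_best - f_worst)
    S = frac.sum(axis=1)          # group utility
    R = frac.max(axis=1)          # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q

# Matrix 2 from Table A1, equal weights.
m = np.array([[0.947, 0.957, 0.275],
              [0.018, 0.631, 0.581],
              [0.565, 0.295, 0.701],
              [0.423, 0.602, 0.509],
              [0.664, 0.637, 0.786]])
Q = vikor(m, np.full(3, 1 / 3))
print((Q.argsort().argsort() + 1).tolist())  # -> [3, 5, 4, 2, 1]
```

Because the S and R terms already rescale each criterion by its range, applying an additional explicit normalization beforehand changes the inputs of this built-in rescaling, which is consistent with the near-zero correlations observed above.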

5.3. PROMETHEE II

For PROMETHEE II, normalization of the decision matrix does not apply. Therefore, this section analyzes the rankings obtained with different values of the parameters p and q and different preference functions. Figure 15 shows the results of the r w coefficient for the U-shape preference function with respect to different techniques of criteria weighting. The values of p and q are calculated automatically, and we only scale them. The differences between the individual rankings are very similar. If we analyze up to 10 alternatives, using any scaling value, we get significantly different results. As the number of alternatives increases, the spread of possible correlation values decreases. However, when analyzing the values contained in Table A10 and Table A11, we see that the average values indicate a much smaller impact than in the case of the selection of normalization methods for the two previous MCDA methods.
For comparison, Figure 16 presents another preference function, also for the r w coefficient. It turns out that by choosing the V-shape preference function instead of the U-shape we obtain more similar results. Again, the weighting methods do not have a significant impact on the similarity results obtained.
An important observation is that despite the lack of input data normalization, there are still differences depending on the selected parameters and preference functions. The other two preference functions for the r w coefficient are shown in Figure A9 and Figure A10; the results indicate a similar level of similarity between the rankings. The W S coefficient for the corresponding cases is shown in Figure A11, Figure A12, Figure A13 and Figure A14. Both coefficients indicate the same nature of the influence of the applied parameters on the similarity of rankings. To sum up, in all the discussed cases, we see significant differences between the rankings obtained. Thus, not only normalization but also other parameters matter for the results, and an in-depth analysis is always necessary when selecting a specific method and its assumptions, such as normalization.
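The preference functions compared in this section can be sketched as follows. Each maps a pairwise difference d = f(a) − f(b) on one criterion to a preference degree in [0, 1]; q denotes the indifference threshold and p the strict-preference threshold (in the experiment both are derived automatically from the data and only scaled). The "V-shape 2" function is assumed here to be the linear function with an indifference zone:

```python
import numpy as np

def usual(d, q=None, p=None):
    return float(d > 0)

def u_shape(d, q, p=None):
    return float(d > q)

def v_shape(d, q, p):  # q unused in this variant
    return float(np.clip(d / p, 0, 1)) if d > 0 else 0.0

def level(d, q, p):
    if d <= q:
        return 0.0
    return 0.5 if d <= p else 1.0

def v_shape_2(d, q, p):  # linear with an indifference zone
    if d <= q:
        return 0.0
    return float(min((d - q) / (p - q), 1.0))

for f in (usual, u_shape, v_shape, level, v_shape_2):
    print(f.__name__, [f(d, 0.1, 0.5) for d in (0.05, 0.3, 0.9)])
```

The jump-style functions (usual, U-shape, level) discretize the differences more aggressively than the linear ones, which helps explain why the V-shape variants produced more mutually similar rankings.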

5.4. Comparison of the MCDA Methods

Figure 17 shows how strongly correlated the rankings obtained by four different methods can be: TOPSIS with minmax normalization, VIKOR without normalization, PROMETHEE II with the usual preference function, and COPRAS. We can see that VIKOR produces quite different rankings in comparison to the other methods.
Table A12 shows that the mean values of the correlation between VIKOR rankings and the other methods' rankings are around zero. It is also noticeable that, in the VIKOR vs. other method cases, the correlation for rankings obtained using equal weights has a smaller variance than for the other two weighting methods. Next, the most correlated rankings are obtained with the TOPSIS minmax and PROMETHEE II usual methods. The comparisons of TOPSIS minmax vs. COPRAS and PROMETHEE II usual vs. COPRAS look quite similar, and these methods' rankings are less correlated with each other than the TOPSIS minmax and PROMETHEE II usual rankings.
The general situation is quite similar for the W S correlation coefficient, as Figure 18 shows. The rankings obtained by VIKOR are far less correlated with the rankings obtained using the other three methods than those rankings are with each other. Similarly to r w , W S indicates that rankings obtained using the TOPSIS minmax and PROMETHEE II usual methods are strongly correlated. Rankings obtained by TOPSIS minmax, COPRAS, and PROMETHEE II with the usual preference function are slightly less correlated. However, the correlations between them are stronger than in the case of VIKOR vs. the other methods.
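The "TOPSIS minmax" variant used in this comparison can be sketched as below. Benefit criteria are assumed (cost criteria would use the mirrored min-max formula), and the input data are a hypothetical illustration rather than a matrix from the experiment:

```python
import numpy as np

def topsis_minmax(matrix, weights):
    """TOPSIS with min-max normalization (benefit criteria assumed):
    relative closeness to the positive ideal solution, higher is better."""
    rng = matrix.max(axis=0) - matrix.min(axis=0)
    n = (matrix - matrix.min(axis=0)) / rng       # min-max normalization
    v = n * weights                               # weighted normalized matrix
    pis, nis = v.max(axis=0), v.min(axis=0)       # positive/negative ideal
    d_pos = np.sqrt(((v - pis) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - nis) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

# Illustrative data (hypothetical): the first alternative dominates and
# the last is dominated, so their closeness scores must be 1 and 0.
m = np.array([[5.0, 9.0],
              [3.0, 7.0],
              [1.0, 2.0]])
scores = topsis_minmax(m, np.array([0.5, 0.5]))
print(np.round(scores, 3))  # scores[0] == 1.0, scores[2] == 0.0
```

A ranking is then obtained by sorting the closeness scores in descending order.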

5.5. Dependence of Ranking Similarity Coefficients on the Distance between Weight Vectors

The last section is devoted to a short presentation of the distributions of distances between weight vectors and similarity coefficients. All these distributions are asymmetric and will be briefly discussed for each of the four methods used. Figure 19 shows the relations between the r w coefficient and the TOPSIS method. The smallest distance between weight vectors occurs between the equal weights and those obtained by the std method. These values refer to all simulations that have been performed and cannot be generalized to the whole space. Figure A15 presents the distribution for the W S coefficient, which confirms that there is a moderate relation between the distance of weight vectors and the similarity coefficient of the obtained rankings.
Figure 20 presents the results for the VIKOR method. The distribution of the similarity coefficient r w is very similar to the distribution obtained for the TOPSIS method. The distance distributions between the weight vectors of the respective methods are the same in all the presented graphs. Figure A16 shows the relationships for the VIKOR method and the W S coefficient.
Using the PROMETHEE II method (usual), we can observe changes in the relation, which is presented in Figure 21 and Figure A17. This means that the use of different weights in the presented simulation was less important for the PROMETHEE II method than for the TOPSIS or VIKOR methods. The distance between the weight vectors was least important for the COPRAS method, whose results are presented in Figure 22 and Figure A18.
These results are important preliminary research on whether weights and their differences always have an important influence on the agreement of the obtained rankings. TOPSIS and VIKOR showed the highest sensitivity in the presented experiment, and COPRAS the lowest.
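The two objective weighting methods and the distance measure used in this analysis can be sketched as follows. Applied to the example matrix 2, the sketch reproduces the entropy and std weight vectors listed in Table A1, and the resulting Euclidean distances show that the equal and std vectors are indeed the closest pair for this matrix:

```python
import numpy as np

def entropy_weights(matrix):
    """Entropy weighting on a column-sum-normalized matrix
    (requires strictly positive entries)."""
    p = matrix / matrix.sum(axis=0)
    e = -(p * np.log(p)).sum(axis=0) / np.log(matrix.shape[0])
    d = 1 - e                     # degree of divergence per criterion
    return d / d.sum()

def std_weights(matrix):
    """Standard-deviation weighting (population std, ddof=0)."""
    s = matrix.std(axis=0)
    return s / s.sum()

m = np.array([[0.947, 0.957, 0.275],
              [0.018, 0.631, 0.581],
              [0.565, 0.295, 0.701],
              [0.423, 0.602, 0.509],
              [0.664, 0.637, 0.786]])
w_equal = np.full(3, 1 / 3)
w_entropy = entropy_weights(m)    # ~ [0.678, 0.172, 0.151], cf. Table A1
w_std = std_weights(m)            # ~ [0.442, 0.303, 0.255], cf. Table A1

for a, b, name in ((w_equal, w_entropy, "equal/entropy"),
                   (w_equal, w_std, "equal/std"),
                   (w_std, w_entropy, "std/entropy")):
    print(name, round(float(np.linalg.norm(a - b)), 3))
```

The Euclidean distance between two weight vectors is the quantity plotted on the horizontal axes of Figures 19-22 and A15-A18.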

6. Conclusions

The results of the conducted research indicate that when choosing an MCDA method, not only the method itself but also the normalization method and other parameters should be carefully selected. Almost every combination of a method and its parameters may bring different results. In our study, we have checked how much influence these decisions can have on the ranking of a decision problem. As it turns out, they may affect not only the correct identification of the best alternative but also the whole ranking.
For the TOPSIS method, rankings obtained using the minmax, max, and vector normalization methods can be quite similar, especially for a large number of alternatives. In this case, equal weights performed worse than the entropy or standard deviation methods. Furthermore, with these normalization methods, the correlation of rankings had a smaller variance when the entropy weighting method was used. For VIKOR, rankings obtained using any normalization method could even be reversed in comparison to rankings obtained using VIKOR without normalization. Thus, although it is not necessary to apply normalization when using VIKOR, applying one can noticeably change the rankings and the overall performance of the method. Equal weights performed better with VIKOR. The PROMETHEE II method, despite not using normalization, returned quite different results depending on the set of parameters used, which clearly shows that the choice of the method and its configuration for a decision-making problem is important and should be the subject of further benchmarking. In the comparison of the four methods, rankings obtained using VIKOR without normalization were very different from rankings obtained by the other methods. Some identical rankings were achieved for small numbers of alternatives. However, for a greater number of alternatives, correlations between VIKOR's rankings and the other methods' rankings oscillated around zero, which means that there was no correlation between these rankings. The most similar rankings were obtained using the TOPSIS minmax and PROMETHEE II usual methods; in this case, equal weighting performed slightly better. This shows that it is worthwhile to pursue research toward reliable and generalized benchmarks.
Future work should focus on the development of an algorithm for estimating the accuracy of multi-criteria decision analysis methods and on estimating the accuracy of selected methods. Special attention should be paid to the family of fuzzy set-based MCDA methods and to group decision-making methods as well. Another challenge is to create a method that, based on benchmark results, will be able to recommend a proper solution.

Author Contributions

Conceptualization, A.S. and W.S.; methodology, A.S. and W.S.; software, A.S.; validation, A.S., J.W., and W.S.; formal analysis, A.S., J.W., and W.S.; investigation, A.S. and W.S.; resources, A.S.; data curation, A.S.; writing—original draft preparation, A.S. and W.S.; writing—review and editing, A.S., J.W., and W.S.; visualization, A.S. and W.S.; supervision, W.S.; project administration, W.S.; funding acquisition, W.S. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the National Science Centre, Decision number UMO-2018/29/B/HS4/02725.

Acknowledgments

The authors would like to thank the editor and the anonymous reviewers, whose insightful comments and constructive suggestions helped us to significantly improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
TOPSIS     Technique for Order of Preference by Similarity to Ideal Solution
VIKOR      VlseKriterijumska Optimizacija I Kompromisno Resenje (Serbian)
COPRAS     Complex Proportional Assessment
PROMETHEE  Preference Ranking Organization Method for Enrichment of Evaluations
MCDA       Multi-Criteria Decision Analysis
MCDM       Multi-Criteria Decision Making
PIS        Positive Ideal Solution
NIS        Negative Ideal Solution
SD         Standard Deviation

Appendix A. Figures

Figure A1. W S correlations heat map for TOPSIS with different normalization methods (matrix 2).
Figure A2. W S correlations heat map for TOPSIS with different normalization methods (matrix 2).
Figure A3. r w correlations heat map for VIKOR with different normalization methods (matrix 2).
Figure A4. W S correlations heat map for VIKOR with different normalization methods (matrix 2).
Figure A5. r w correlations heat map for PROMETHEE II with different preference functions (matrix 2).
Figure A6. W S correlations heat map for PROMETHEE II with different preference functions (matrix 2).
Figure A7. r w correlations heat map for PROMETHEE II with different preference functions (matrix 2).
Figure A8. W S correlations heat map for PROMETHEE II with different preference functions (matrix 2).
Figure A9. Comparison of the r w similarity coefficient for the PROMETHEE II method with level preference function and different k values.
Figure A10. Comparison of the r w similarity coefficient for the PROMETHEE II method with V-shape 2 preference function and different k values.
Figure A11. Comparison of the W S similarity coefficient for the PROMETHEE II method with U-shape preference function and different k values.
Figure A12. Comparison of the W S similarity coefficient for the PROMETHEE II method with V-shape preference function and different k values.
Figure A13. Comparison of the W S similarity coefficient for the PROMETHEE II method with level preference function and different k values.
Figure A14. Comparison of the W S similarity coefficient for the PROMETHEE II method with V-shape 2 preference function and different k values.
Figure A15. Relationship between the euclidean distance of weights and W S similarity coefficient for rankings obtained by the TOPSIS method with different weighting methods, where (left) equal/entropy, (center) equal/std, and (right) std/entropy.
Figure A16. Relationship between the euclidean distance of weights and W S similarity coefficient for rankings obtained by the VIKOR method with different weighting methods, where (left) equal/entropy, (center) equal/std, and (right) std/entropy.
Figure A17. Relationship between the euclidean distance of weights and W S similarity coefficient for rankings obtained by the PROMETHEE II (usual) method with different weighting methods, where (left) equal/entropy, (center) equal/std, and (right) std/entropy.
Figure A18. Relationship between the euclidean distance of weights and W S similarity coefficient for rankings obtained by the COPRAS method with different weighting methods, where (left) equal/entropy, (center) equal/std, and (right) std/entropy.

Appendix B. Tables

Table A1. The second example decision matrix with three different criteria weighting vectors.
          C1      C2      C3
A1        0.947   0.957   0.275
A2        0.018   0.631   0.581
A3        0.565   0.295   0.701
A4        0.423   0.602   0.509
A5        0.664   0.637   0.786
w_equal   0.333   0.333   0.333
w_entropy 0.678   0.172   0.151
w_std     0.442   0.303   0.255
Table A2. TOPSIS rankings for matrix 2: (a) Equal weights, (b) entropy method, and (c) std method.
      Minmax        Max           Sum           Vector
      (a) (b) (c)   (a) (b) (c)   (a) (b) (c)   (a) (b) (c)
A1    1   1   1     1   1   1     1   1   1     1   1   1
A2    4   5   5     5   5   5     5   5   5     5   5   5
A3    5   3   4     4   3   3     4   3   3     4   3   3
A4    2   4   3     3   4   4     3   4   4     3   4   4
A5    3   2   2     2   2   2     2   2   2     2   2   2
Table A3. VIKOR rankings for matrix 2: (a) Equal weights, (b) entropy method, and (c) std method.
      None          Minmax        Max           Sum           Vector
      (a) (b) (c)   (a) (b) (c)   (a) (b) (c)   (a) (b) (c)   (a) (b) (c)
A1    3   1   2     1   1   1     1   1   1     1   1   1     1   1   1
A2    5   5   5     4   5   5     4   5   5     4   5   5     4   5   5
A3    4   3   4     5   3   4     5   3   4     5   3   4     5   3   4
A4    2   4   3     2   4   2     2   4   2     2   4   3     2   4   2
A5    1   2   1     3   2   3     3   2   3     3   2   2     3   2   3
Table A4. PROMETHEE II rankings for matrix 2: (a) Equal weights, (b) entropy method, and (c) std method.
      Usual         U-Shape       V-Shape       Level         V-Shape 2
      (a) (b) (c)   (a) (b) (c)   (a) (b) (c)   (a) (b) (c)   (a) (b) (c)
A1    1   1   1     1   1   1     1   1   1     1   1   1     1   1   1
A2    4   5   5     4   5   5     4   5   5     5   5   5     5   5   5
A3    5   3   4     4   4   4     5   4   4     4   4   4     4   3   4
A4    3   4   3     2   3   2     2   3   2     2   3   3     2   4   3
A5    2   2   2     3   2   3     3   2   3     3   2   2     3   2   2
Table A5. Rankings obtained with different MCDA methods for matrix 2: (a) Equal weights, (b) entropy method, and (c) std method.
      TOPSIS        VIKOR         PROM. II      COPRAS
      (a) (b) (c)   (a) (b) (c)   (a) (b) (c)   (a) (b) (c)
A1    1   1   1     3   1   2     1   1   1     1   1   1
A2    4   5   5     5   5   5     4   5   5     5   5   5
A3    5   3   4     4   3   4     5   3   4     4   3   4
A4    2   4   3     2   4   3     3   4   3     3   4   3
A5    3   2   2     1   2   1     2   2   2     2   2   2
Table A6. Mean values of the r w correlation coefficient for the TOPSIS method with different normalization methods: (a) Minmax/max, (b) minmax/sum, (c) minmax/vector, (d) max/sum, (e) max/vector, and (f) sum/vector.
Norm  Weighting  2 Criteria                      | 3 Criteria                      | 4 Criteria                      | 5 Criteria
      Method     (3, 5, 10, 50, and 100 alternatives in each criteria group)
(a)   equal      0.828 0.923 0.977 0.999 1.000 | 0.755 0.897 0.966 0.999 1.000 | 0.744 0.887 0.966 0.998 1.000 | 0.735 0.871 0.964 0.998 1.000
      entropy    0.979 0.986 0.991 0.999 1.000 | 0.973 0.980 0.989 0.999 1.000 | 0.977 0.974 0.989 0.999 1.000 | 0.973 0.973 0.988 0.999 1.000
      std        0.916 0.960 0.985 0.999 1.000 | 0.913 0.949 0.977 0.999 1.000 | 0.904 0.938 0.976 0.998 1.000 | 0.897 0.933 0.974 0.998 1.000
(b)   equal      0.740 0.808 0.821 0.815 0.807 | 0.651 0.754 0.805 0.825 0.821 | 0.596 0.679 0.714 0.733 0.735 | 0.571 0.658 0.702 0.742 0.748
      entropy    0.916 0.895 0.851 0.812 0.804 | 0.905 0.878 0.847 0.826 0.820 | 0.878 0.796 0.755 0.731 0.733 | 0.865 0.796 0.766 0.742 0.745
      std        0.836 0.844 0.832 0.813 0.805 | 0.823 0.815 0.819 0.824 0.820 | 0.783 0.735 0.727 0.732 0.734 | 0.762 0.722 0.720 0.741 0.747
(c)   equal      0.803 0.900 0.957 0.994 0.997 | 0.703 0.856 0.934 0.991 0.996 | 0.696 0.839 0.929 0.989 0.995 | 0.701 0.818 0.922 0.989 0.995
      entropy    0.972 0.977 0.984 0.996 0.998 | 0.961 0.969 0.976 0.993 0.997 | 0.963 0.959 0.973 0.992 0.996 | 0.960 0.958 0.967 0.991 0.996
      std        0.899 0.942 0.970 0.994 0.997 | 0.890 0.921 0.951 0.991 0.996 | 0.875 0.903 0.947 0.990 0.996 | 0.874 0.893 0.938 0.989 0.995
(d)   equal      0.870 0.849 0.827 0.813 0.806 | 0.842 0.829 0.821 0.824 0.821 | 0.785 0.747 0.732 0.732 0.733 | 0.782 0.741 0.723 0.741 0.747
      entropy    0.925 0.895 0.850 0.810 0.802 | 0.913 0.882 0.847 0.824 0.819 | 0.883 0.802 0.756 0.729 0.732 | 0.873 0.802 0.769 0.740 0.744
      std        0.895 0.864 0.833 0.811 0.804 | 0.880 0.850 0.827 0.822 0.819 | 0.833 0.763 0.734 0.731 0.733 | 0.839 0.763 0.731 0.739 0.746
(e)   equal      0.966 0.971 0.980 0.995 0.998 | 0.926 0.950 0.965 0.992 0.996 | 0.917 0.941 0.960 0.991 0.996 | 0.923 0.934 0.956 0.990 0.995
      entropy    0.986 0.986 0.989 0.996 0.998 | 0.976 0.980 0.983 0.994 0.997 | 0.974 0.972 0.977 0.993 0.996 | 0.970 0.964 0.972 0.992 0.996
      std        0.975 0.979 0.983 0.995 0.998 | 0.970 0.963 0.971 0.992 0.996 | 0.954 0.952 0.966 0.991 0.996 | 0.947 0.943 0.960 0.990 0.996
(f)   equal      0.868 0.850 0.829 0.812 0.806 | 0.861 0.846 0.833 0.826 0.822 | 0.788 0.750 0.729 0.732 0.733 | 0.795 0.751 0.731 0.742 0.748
      entropy    0.922 0.897 0.849 0.808 0.801 | 0.925 0.890 0.852 0.824 0.819 | 0.886 0.803 0.755 0.727 0.730 | 0.874 0.808 0.771 0.740 0.744
      std        0.899 0.870 0.835 0.811 0.804 | 0.888 0.859 0.837 0.825 0.821 | 0.830 0.767 0.734 0.731 0.733 | 0.848 0.764 0.736 0.741 0.747
Table A7. Mean values of the W S correlation coefficient for the TOPSIS method with different normalization methods: (a) Minmax/max, (b) minmax/sum, (c) minmax/vector, (d) max/sum, (e) max/vector, and (f) sum/vector.
Norm  Weighting  2 Criteria                      | 3 Criteria                      | 4 Criteria                      | 5 Criteria
      Method     (3, 5, 10, 50, and 100 alternatives in each criteria group)
(a)   equal      0.864 0.931 0.980 0.999 1.000 | 0.837 0.910 0.967 0.999 1.000 | 0.826 0.900 0.966 0.999 1.000 | 0.821 0.886 0.962 0.998 1.000
      entropy    0.983 0.986 0.988 0.999 1.000 | 0.978 0.979 0.987 0.999 1.000 | 0.982 0.973 0.985 0.999 1.000 | 0.979 0.972 0.984 0.999 1.000
      std        0.939 0.961 0.985 0.999 1.000 | 0.934 0.946 0.974 0.999 1.000 | 0.925 0.940 0.974 0.999 1.000 | 0.921 0.936 0.971 0.999 1.000
(b)   equal      0.812 0.842 0.865 0.926 0.944 | 0.774 0.813 0.857 0.915 0.932 | 0.748 0.776 0.810 0.868 0.885 | 0.740 0.767 0.809 0.869 0.886
      entropy    0.936 0.908 0.889 0.927 0.945 | 0.928 0.898 0.887 0.917 0.932 | 0.908 0.843 0.836 0.873 0.887 | 0.897 0.841 0.838 0.873 0.889
      std        0.889 0.871 0.874 0.926 0.944 | 0.878 0.852 0.865 0.916 0.932 | 0.852 0.805 0.816 0.869 0.886 | 0.840 0.801 0.815 0.870 0.886
(c)   equal      0.847 0.912 0.964 0.998 0.999 | 0.805 0.881 0.945 0.996 0.999 | 0.800 0.868 0.939 0.995 0.998 | 0.803 0.852 0.932 0.993 0.998
      entropy    0.977 0.976 0.981 0.998 0.999 | 0.969 0.969 0.975 0.996 0.999 | 0.970 0.957 0.970 0.995 0.998 | 0.968 0.957 0.965 0.995 0.998
      std        0.927 0.946 0.972 0.998 0.999 | 0.919 0.923 0.955 0.996 0.999 | 0.906 0.910 0.949 0.995 0.998 | 0.909 0.905 0.944 0.994 0.998
(d)   equal      0.909 0.871 0.868 0.925 0.944 | 0.889 0.865 0.865 0.915 0.932 | 0.853 0.809 0.818 0.867 0.885 | 0.852 0.812 0.816 0.869 0.886
      entropy    0.943 0.909 0.891 0.927 0.945 | 0.936 0.902 0.890 0.917 0.932 | 0.913 0.847 0.839 0.873 0.887 | 0.906 0.844 0.840 0.873 0.889
      std        0.924 0.884 0.875 0.926 0.944 | 0.914 0.878 0.871 0.916 0.932 | 0.883 0.823 0.821 0.868 0.886 | 0.890 0.824 0.821 0.869 0.886
(e)   equal      0.972 0.969 0.980 0.998 0.999 | 0.945 0.951 0.967 0.997 0.999 | 0.939 0.943 0.960 0.995 0.998 | 0.943 0.937 0.956 0.994 0.998
      entropy    0.989 0.984 0.987 0.998 0.999 | 0.983 0.979 0.981 0.997 0.999 | 0.979 0.969 0.976 0.996 0.998 | 0.978 0.963 0.971 0.995 0.998
      std        0.980 0.978 0.983 0.998 0.999 | 0.977 0.963 0.971 0.996 0.999 | 0.964 0.950 0.963 0.996 0.999 | 0.960 0.945 0.960 0.994 0.998
(f)   equal      0.906 0.870 0.848 0.825 0.821 | 0.897 0.866 0.839 0.791 0.773 | 0.857 0.800 0.774 0.745 0.734 | 0.852 0.801 0.770 0.730 0.724
      entropy    0.942 0.909 0.872 0.821 0.816 | 0.942 0.904 0.865 0.781 0.769 | 0.913 0.841 0.802 0.740 0.731 | 0.905 0.843 0.802 0.725 0.721
      std        0.926 0.885 0.854 0.823 0.821 | 0.917 0.879 0.846 0.788 0.774 | 0.881 0.817 0.782 0.744 0.733 | 0.891 0.813 0.774 0.730 0.722
Table A8. Mean values of the r w correlation coefficient for the VIKOR method with different normalization methods, (a) none/minmax, (b) none/max, (c) none/sum, (d) none/vector, (e) minmax/max, (f) minmax/sum, (g) minmax/vector, (h) max/sum, (i) max/vector, and (j) sum/vector.
Norm  Weighting  2 Criteria                           | 3 Criteria                      | 4 Criteria                           | 5 Criteria
      Method     (3, 5, 10, 50, and 100 alternatives in each criteria group; pairs (a)-(j) as in the caption)
(a)   equal      0.083 −0.003 −0.029 −0.028 −0.030 | 0.237 0.317 0.313 0.344 0.338 | 0.043 0.058 0.066 0.067 0.069 | 0.162 0.223 0.238 0.255 0.255
      entropy    0.010 −0.030 −0.036 −0.016 −0.026 | 0.310 0.345 0.336 0.361 0.330 | 0.031 0.006 0.037 0.053 0.066 | 0.211 0.192 0.260 0.236 0.233
      std        0.022 −0.033 −0.040 −0.027 −0.032 | 0.270 0.329 0.304 0.345 0.329 | 0.038 0.014 0.033 0.057 0.065 | 0.199 0.175 0.226 0.237 0.246
(b)   equal      0.083 −0.003 −0.029 −0.028 −0.030 | 0.237 0.317 0.313 0.344 0.338 | 0.043 0.058 0.066 0.067 0.069 | 0.162 0.223 0.238 0.255 0.255
      entropy    0.010 −0.030 −0.036 −0.016 −0.026 | 0.310 0.345 0.336 0.361 0.330 | 0.031 0.006 0.037 0.053 0.066 | 0.211 0.192 0.260 0.236 0.233
      std        0.022 −0.033 −0.040 −0.027 −0.032 | 0.270 0.329 0.304 0.345 0.329 | 0.038 0.014 0.033 0.057 0.065 | 0.199 0.175 0.226 0.237 0.246
(c)   equal      0.088 −0.033 −0.020 0.222 0.358 | 0.250 0.265 0.231 0.377 0.468 | 0.055 0.018 −0.029 0.121 0.235 | 0.159 0.210 0.129 0.237 0.335
      entropy    0.022 0.003 0.046 0.277 0.386 | 0.315 0.356 0.370 0.499 0.532 | 0.026 −0.006 0.048 0.173 0.248 | 0.194 0.181 0.240 0.305 0.356
      std        0.017 −0.014 0.014 0.242 0.367 | 0.266 0.305 0.290 0.426 0.487 | −0.002 −0.044 −0.025 0.094 0.186 | 0.164 0.108 0.141 0.220 0.290
(d)   equal      0.083 −0.003 −0.029 −0.028 −0.030 | 0.237 0.317 0.313 0.344 0.338 | 0.043 0.058 0.066 0.067 0.069 | 0.162 0.223 0.238 0.255 0.255
      entropy    0.010 −0.030 −0.036 −0.016 −0.026 | 0.310 0.345 0.336 0.361 0.330 | 0.031 0.006 0.037 0.053 0.066 | 0.211 0.192 0.260 0.236 0.233
      std        0.022 −0.033 −0.040 −0.027 −0.032 | 0.270 0.329 0.304 0.345 0.329 | 0.038 0.014 0.033 0.057 0.065 | 0.199 0.175 0.226 0.237 0.246
(e)   equal      1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      entropy    1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      std        1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
(f)   equal      0.950 0.889 0.851 0.791 0.760 | 0.964 0.915 0.865 0.826 0.812 | 0.925 0.903 0.843 0.793 0.778 | 0.925 0.918 0.852 0.805 0.795
      entropy    0.938 0.901 0.865 0.795 0.759 | 0.937 0.909 0.884 0.846 0.821 | 0.892 0.846 0.813 0.767 0.748 | 0.909 0.857 0.849 0.793 0.772
      std        0.917 0.880 0.847 0.791 0.758 | 0.922 0.901 0.867 0.833 0.814 | 0.873 0.839 0.806 0.765 0.749 | 0.896 0.849 0.836 0.784 0.769
(g)   equal      1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      entropy    1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      std        1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
(h)   equal      0.950 0.889 0.851 0.791 0.760 | 0.964 0.915 0.865 0.826 0.812 | 0.925 0.903 0.843 0.793 0.778 | 0.925 0.918 0.852 0.805 0.795
      entropy    0.938 0.901 0.865 0.795 0.759 | 0.937 0.909 0.884 0.846 0.821 | 0.892 0.846 0.813 0.767 0.748 | 0.909 0.857 0.849 0.793 0.772
      std        0.917 0.880 0.847 0.791 0.758 | 0.922 0.901 0.867 0.833 0.814 | 0.873 0.839 0.806 0.765 0.749 | 0.896 0.849 0.836 0.784 0.769
(i)   equal      1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      entropy    1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      std        1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
(j)   equal      0.950 0.889 0.851 0.791 0.760 | 0.964 0.915 0.865 0.826 0.812 | 0.925 0.903 0.843 0.793 0.778 | 0.925 0.918 0.852 0.805 0.795
      entropy    0.938 0.901 0.865 0.795 0.759 | 0.937 0.909 0.884 0.846 0.821 | 0.892 0.846 0.813 0.767 0.748 | 0.909 0.857 0.849 0.793 0.772
      std        0.917 0.880 0.847 0.791 0.758 | 0.922 0.901 0.867 0.833 0.814 | 0.873 0.839 0.806 0.765 0.749 | 0.896 0.849 0.836 0.784 0.769
Table A9. Mean values of the W S correlation coefficient for the VIKOR method with different normalization methods, (a) none/minmax, (b) none/max, (c) none/sum, (d) none/vector, (e) minmax/max, (f) minmax/sum, (g) minmax/vector, (h) max/sum, (i) max/vector, and (j) sum/vector.
Norm  Weighting  | 2 Criteria                    | 3 Criteria                    | 4 Criteria                    | 5 Criteria
      Method     | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100
(a)   equal      | 0.458 0.487 0.475 0.369 0.340 | 0.570 0.596 0.621 0.601 0.567 | 0.537 0.521 0.533 0.468 0.441 | 0.570 0.582 0.610 0.592 0.575
      entropy    | 0.599 0.526 0.489 0.392 0.355 | 0.686 0.662 0.643 0.613 0.570 | 0.570 0.528 0.519 0.457 0.435 | 0.623 0.590 0.613 0.570 0.554
      std        | 0.549 0.493 0.474 0.374 0.343 | 0.622 0.624 0.621 0.605 0.567 | 0.540 0.511 0.520 0.463 0.441 | 0.593 0.571 0.599 0.584 0.571
(b)   equal      | 0.458 0.487 0.475 0.369 0.340 | 0.570 0.596 0.621 0.601 0.567 | 0.537 0.521 0.533 0.468 0.441 | 0.570 0.582 0.610 0.592 0.575
      entropy    | 0.599 0.526 0.489 0.392 0.355 | 0.686 0.662 0.643 0.613 0.570 | 0.570 0.528 0.519 0.457 0.435 | 0.623 0.590 0.613 0.570 0.554
      std        | 0.549 0.493 0.474 0.374 0.343 | 0.622 0.624 0.621 0.605 0.567 | 0.540 0.511 0.520 0.463 0.441 | 0.593 0.571 0.599 0.584 0.571
(c)   equal      | 0.470 0.507 0.564 0.747 0.829 | 0.571 0.588 0.644 0.814 0.878 | 0.540 0.514 0.542 0.700 0.790 | 0.566 0.580 0.595 0.746 0.823
      entropy    | 0.607 0.560 0.593 0.763 0.836 | 0.691 0.686 0.712 0.844 0.891 | 0.580 0.550 0.580 0.710 0.780 | 0.628 0.614 0.650 0.761 0.818
      std        | 0.558 0.531 0.580 0.754 0.832 | 0.627 0.639 0.675 0.828 0.883 | 0.533 0.515 0.549 0.675 0.755 | 0.588 0.564 0.606 0.726 0.789
(d)   equal      | 0.458 0.487 0.475 0.369 0.340 | 0.570 0.596 0.621 0.601 0.567 | 0.537 0.521 0.533 0.468 0.441 | 0.570 0.582 0.610 0.592 0.575
      entropy    | 0.599 0.526 0.489 0.392 0.355 | 0.686 0.662 0.643 0.613 0.570 | 0.570 0.528 0.519 0.457 0.435 | 0.623 0.590 0.613 0.570 0.554
      std        | 0.549 0.493 0.474 0.374 0.343 | 0.622 0.624 0.621 0.605 0.567 | 0.540 0.511 0.520 0.463 0.441 | 0.593 0.571 0.599 0.584 0.571
(e)   equal      | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      entropy    | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      std        | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
(f)   equal      | 0.959 0.897 0.890 0.946 0.962 | 0.970 0.916 0.884 0.937 0.957 | 0.942 0.909 0.874 0.931 0.952 | 0.944 0.923 0.876 0.923 0.949
      entropy    | 0.950 0.914 0.903 0.948 0.963 | 0.951 0.924 0.909 0.945 0.960 | 0.917 0.875 0.868 0.911 0.930 | 0.929 0.882 0.883 0.915 0.933
      std        | 0.933 0.896 0.889 0.946 0.962 | 0.944 0.909 0.890 0.940 0.958 | 0.907 0.863 0.858 0.914 0.935 | 0.926 0.871 0.867 0.912 0.933
(g)   equal      | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      entropy    | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      std        | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
(h)   equal      | 0.959 0.897 0.890 0.946 0.962 | 0.970 0.916 0.884 0.937 0.957 | 0.942 0.909 0.874 0.931 0.952 | 0.944 0.923 0.876 0.923 0.949
      entropy    | 0.950 0.914 0.903 0.948 0.963 | 0.951 0.924 0.909 0.945 0.960 | 0.917 0.875 0.868 0.911 0.930 | 0.929 0.882 0.883 0.915 0.933
      std        | 0.933 0.896 0.889 0.946 0.962 | 0.944 0.909 0.890 0.940 0.958 | 0.907 0.863 0.858 0.914 0.935 | 0.926 0.871 0.867 0.912 0.933
(i)   equal      | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      entropy    | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
      std        | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000 | 1.000 1.000 1.000 1.000 1.000
(j)   equal      | 0.954 0.897 0.874 0.874 0.876 | 0.971 0.917 0.878 0.870 0.869 | 0.943 0.910 0.869 0.873 0.876 | 0.945 0.922 0.873 0.865 0.869
      entropy    | 0.950 0.913 0.889 0.885 0.884 | 0.950 0.922 0.900 0.892 0.885 | 0.919 0.872 0.858 0.856 0.853 | 0.930 0.881 0.873 0.866 0.866
      std        | 0.934 0.894 0.872 0.877 0.878 | 0.944 0.908 0.882 0.879 0.875 | 0.906 0.863 0.849 0.862 0.865 | 0.928 0.869 0.865 0.866 0.865
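Table A9 compares rankings produced under pairs of the four normalization techniques (minmax, max, sum, vector). A sketch of the four normalizations in their common textbook forms, applied to a profit-criterion column — cost criteria are usually handled by reversing these values, and the exact cost variants used in the experiment may differ:

```python
import math

def minmax_norm(col):
    # Rescale a criterion column to [0, 1].
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) for v in col]

def max_norm(col):
    # Divide by the column maximum.
    hi = max(col)
    return [v / hi for v in col]

def sum_norm(col):
    # Divide by the column sum (values then add up to 1).
    s = sum(col)
    return [v / s for v in col]

def vector_norm(col):
    # Divide by the Euclidean norm of the column.
    norm = math.sqrt(sum(v * v for v in col))
    return [v / norm for v in col]
```

The "none" rows in Table A9 correspond to running VIKOR directly on the raw decision matrix with no normalization step.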
Table A10. Mean values of the WS correlation coefficient for the PROMETHEE II method with different k values and different preference functions: (a) U-shape; (b) V-shape; (c) Level; and (d) V-shape2.
Type  k         Weighting  | 2 Criteria                    | 3 Criteria                    | 4 Criteria                    | 5 Criteria
                Method     | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100
(a)   0.25/0.5  equal      | 0.967 0.930 0.959 0.992 0.996 | 0.951 0.929 0.956 0.994 0.997 | 0.952 0.929 0.957 0.994 0.997 | 0.944 0.923 0.958 0.993 0.997
                entropy    | 0.971 0.937 0.955 0.992 0.996 | 0.965 0.932 0.958 0.993 0.997 | 0.958 0.926 0.954 0.993 0.997 | 0.968 0.927 0.956 0.993 0.997
                std        | 0.967 0.933 0.957 0.992 0.996 | 0.956 0.928 0.954 0.993 0.997 | 0.952 0.919 0.956 0.993 0.997 | 0.957 0.924 0.958 0.993 0.997
      0.25/1    equal      | 0.946 0.885 0.938 0.990 0.995 | 0.934 0.875 0.928 0.989 0.995 | 0.931 0.874 0.928 0.988 0.995 | 0.931 0.860 0.925 0.988 0.994
                entropy    | 0.969 0.898 0.931 0.989 0.995 | 0.945 0.877 0.927 0.988 0.995 | 0.948 0.874 0.924 0.988 0.994 | 0.952 0.873 0.924 0.988 0.994
                std        | 0.951 0.887 0.934 0.990 0.995 | 0.932 0.870 0.926 0.989 0.995 | 0.929 0.865 0.925 0.988 0.994 | 0.931 0.862 0.926 0.988 0.994
      0.5/1     equal      | 0.958 0.906 0.946 0.991 0.995 | 0.950 0.897 0.941 0.991 0.996 | 0.944 0.900 0.940 0.990 0.996 | 0.944 0.888 0.939 0.990 0.995
                entropy    | 0.973 0.915 0.943 0.991 0.995 | 0.952 0.902 0.941 0.990 0.996 | 0.956 0.899 0.935 0.990 0.995 | 0.961 0.894 0.939 0.990 0.995
                std        | 0.964 0.906 0.945 0.991 0.995 | 0.944 0.892 0.941 0.991 0.996 | 0.949 0.889 0.937 0.990 0.996 | 0.946 0.889 0.939 0.991 0.995
(b)   0.25/0.5  equal      | 1.000 0.991 0.994 0.999 1.000 | 0.996 0.990 0.994 0.999 1.000 | 0.997 0.992 0.994 0.999 1.000 | 0.993 0.985 0.994 0.999 1.000
                entropy    | 0.996 0.993 0.995 0.999 1.000 | 0.997 0.992 0.995 0.999 1.000 | 0.997 0.992 0.994 0.999 1.000 | 0.994 0.991 0.994 0.999 1.000
                std        | 0.995 0.991 0.995 0.999 1.000 | 0.996 0.991 0.994 0.999 1.000 | 0.994 0.990 0.994 0.999 1.000 | 0.995 0.990 0.994 0.999 1.000
      0.25/1    equal      | 1.000 0.980 0.989 0.998 0.999 | 0.995 0.980 0.984 0.998 0.999 | 0.992 0.980 0.985 0.998 0.999 | 0.987 0.977 0.986 0.998 0.999
                entropy    | 0.992 0.985 0.989 0.998 0.999 | 0.986 0.982 0.988 0.998 0.999 | 0.990 0.982 0.986 0.998 0.999 | 0.986 0.980 0.986 0.998 0.999
                std        | 0.995 0.982 0.989 0.998 0.999 | 0.984 0.982 0.987 0.998 0.999 | 0.990 0.980 0.986 0.998 0.999 | 0.985 0.978 0.986 0.998 0.999
      0.5/1     equal      | 1.000 0.987 0.993 0.999 0.999 | 0.993 0.988 0.989 0.998 0.999 | 0.991 0.985 0.990 0.998 0.999 | 0.987 0.986 0.990 0.999 0.999
                entropy    | 0.992 0.989 0.992 0.999 0.999 | 0.986 0.988 0.992 0.999 0.999 | 0.991 0.987 0.991 0.999 0.999 | 0.987 0.986 0.990 0.999 0.999
                std        | 0.994 0.986 0.993 0.999 1.000 | 0.986 0.987 0.992 0.999 0.999 | 0.990 0.988 0.990 0.999 0.999 | 0.983 0.985 0.990 0.998 0.999
(c)   0.25/0.5  equal      | 0.936 0.930 0.967 0.994 0.997 | 0.930 0.935 0.964 0.995 0.998 | 0.940 0.935 0.962 0.995 0.998 | 0.929 0.929 0.962 0.995 0.998
                entropy    | 0.954 0.938 0.962 0.993 0.997 | 0.949 0.934 0.964 0.994 0.998 | 0.947 0.937 0.960 0.994 0.997 | 0.944 0.935 0.960 0.994 0.997
                std        | 0.948 0.937 0.965 0.994 0.997 | 0.943 0.937 0.963 0.995 0.997 | 0.944 0.935 0.962 0.994 0.998 | 0.930 0.932 0.964 0.995 0.998
      0.25/1    equal      | 0.854 0.888 0.943 0.992 0.996 | 0.858 0.876 0.934 0.988 0.995 | 0.870 0.880 0.930 0.987 0.994 | 0.859 0.864 0.928 0.988 0.993
                entropy    | 0.942 0.898 0.935 0.991 0.996 | 0.917 0.886 0.931 0.988 0.996 | 0.901 0.878 0.924 0.987 0.993 | 0.900 0.873 0.926 0.987 0.993
                std        | 0.904 0.889 0.939 0.991 0.996 | 0.871 0.877 0.930 0.988 0.995 | 0.873 0.876 0.929 0.987 0.994 | 0.867 0.868 0.928 0.987 0.993
      0.5/1     equal      | 0.862 0.909 0.952 0.993 0.996 | 0.876 0.899 0.946 0.991 0.996 | 0.879 0.903 0.945 0.990 0.995 | 0.874 0.891 0.941 0.990 0.995
                entropy    | 0.941 0.917 0.948 0.993 0.996 | 0.922 0.909 0.944 0.990 0.996 | 0.912 0.899 0.940 0.990 0.995 | 0.907 0.902 0.939 0.990 0.995
                std        | 0.913 0.911 0.950 0.993 0.996 | 0.884 0.900 0.945 0.990 0.996 | 0.883 0.896 0.944 0.990 0.995 | 0.885 0.894 0.940 0.990 0.995
(d)   0.25/0.5  equal      | 0.962 0.965 0.986 0.997 0.999 | 0.970 0.970 0.985 0.998 0.999 | 0.974 0.975 0.985 0.998 0.999 | 0.972 0.973 0.984 0.998 0.999
                entropy    | 0.969 0.970 0.983 0.997 0.999 | 0.976 0.970 0.985 0.998 0.999 | 0.977 0.975 0.983 0.998 0.999 | 0.980 0.973 0.985 0.998 0.999
                std        | 0.973 0.971 0.986 0.997 0.999 | 0.979 0.974 0.986 0.998 0.999 | 0.974 0.979 0.986 0.998 0.999 | 0.973 0.973 0.987 0.998 0.999
      0.25/1    equal      | 0.922 0.933 0.973 0.995 0.998 | 0.932 0.937 0.968 0.995 0.998 | 0.948 0.941 0.967 0.995 0.997 | 0.936 0.938 0.965 0.995 0.997
                entropy    | 0.939 0.942 0.966 0.995 0.998 | 0.946 0.935 0.968 0.995 0.998 | 0.943 0.944 0.964 0.994 0.997 | 0.951 0.940 0.966 0.994 0.997
                std        | 0.940 0.940 0.972 0.995 0.998 | 0.946 0.936 0.968 0.995 0.998 | 0.942 0.945 0.966 0.995 0.997 | 0.946 0.938 0.966 0.995 0.997
      0.5/1     equal      | 0.937 0.957 0.983 0.997 0.999 | 0.948 0.958 0.978 0.997 0.999 | 0.960 0.959 0.978 0.996 0.998 | 0.956 0.953 0.977 0.996 0.998
                entropy    | 0.963 0.964 0.978 0.997 0.999 | 0.964 0.956 0.978 0.997 0.999 | 0.962 0.959 0.976 0.996 0.998 | 0.968 0.960 0.976 0.996 0.998
                std        | 0.961 0.961 0.980 0.997 0.999 | 0.960 0.956 0.977 0.997 0.998 | 0.961 0.960 0.977 0.996 0.998 | 0.964 0.958 0.976 0.996 0.998
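Tables A10 and A11 vary the PROMETHEE II preference function and its thresholds; the k column (e.g., 0.25/0.5) is read here as a q/p threshold pair, which is an assumption about the experimental setup rather than a detail stated in the captions. A sketch of the four preference functions in their standard forms, mapping a criterion difference d to a preference degree in [0, 1]:

```python
def u_shape(d, q):
    # U-shape: indifference up to threshold q, strict preference beyond.
    return 0.0 if d <= q else 1.0

def v_shape(d, p):
    # V-shape: preference grows linearly up to threshold p.
    if d <= 0:
        return 0.0
    return min(d / p, 1.0)

def level(d, q, p):
    # Level: indifference up to q, half-preference between q and p.
    if d <= q:
        return 0.0
    return 0.5 if d <= p else 1.0

def v_shape_2(d, q, p):
    # V-shape with indifference area: linear growth between q and p.
    if d <= q:
        return 0.0
    if d <= p:
        return (d - q) / (p - q)
    return 1.0
```

The "usual" function referenced in Tables A12 and A13 is the threshold-free special case that returns 1 for any positive difference and 0 otherwise.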
Table A11. Mean values of the r_w correlation coefficient for the PROMETHEE II method with different k values and different preference functions: (a) U-shape; (b) V-shape; (c) Level; and (d) V-shape2.
Type  k         Weighting  | 2 Criteria                    | 3 Criteria                    | 4 Criteria                    | 5 Criteria
                Method     | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100
(a)   0.25/0.5  equal      | 0.969 0.942 0.967 0.994 0.997 | 0.952 0.940 0.965 0.993 0.997 | 0.961 0.940 0.965 0.993 0.997 | 0.944 0.928 0.962 0.993 0.996
                entropy    | 0.970 0.940 0.963 0.993 0.997 | 0.955 0.929 0.961 0.993 0.996 | 0.944 0.923 0.958 0.993 0.996 | 0.957 0.923 0.957 0.993 0.996
                std        | 0.958 0.934 0.961 0.994 0.997 | 0.942 0.927 0.957 0.993 0.997 | 0.935 0.915 0.958 0.993 0.996 | 0.942 0.916 0.959 0.993 0.996
      0.25/1    equal      | 0.939 0.892 0.937 0.986 0.992 | 0.924 0.871 0.923 0.982 0.990 | 0.926 0.871 0.925 0.982 0.989 | 0.924 0.842 0.919 0.982 0.989
                entropy    | 0.965 0.895 0.929 0.985 0.991 | 0.925 0.864 0.918 0.982 0.989 | 0.929 0.852 0.915 0.981 0.989 | 0.938 0.849 0.909 0.981 0.989
                std        | 0.940 0.881 0.930 0.985 0.991 | 0.907 0.852 0.917 0.982 0.990 | 0.900 0.842 0.917 0.981 0.989 | 0.905 0.833 0.914 0.981 0.989
      0.5/1     equal      | 0.954 0.916 0.952 0.989 0.994 | 0.945 0.902 0.943 0.988 0.993 | 0.943 0.902 0.943 0.987 0.993 | 0.939 0.885 0.942 0.987 0.993
                entropy    | 0.968 0.918 0.946 0.989 0.994 | 0.938 0.899 0.938 0.987 0.993 | 0.941 0.884 0.935 0.987 0.993 | 0.947 0.879 0.933 0.987 0.993
                std        | 0.955 0.906 0.946 0.989 0.994 | 0.925 0.883 0.939 0.987 0.993 | 0.928 0.876 0.937 0.987 0.993 | 0.927 0.875 0.936 0.987 0.993
(b)   0.25/0.5  equal      | 1.000 0.992 0.996 1.000 1.000 | 0.995 0.991 0.996 1.000 1.000 | 0.997 0.993 0.996 0.999 1.000 | 0.992 0.987 0.996 0.999 1.000
                entropy    | 0.995 0.994 0.997 1.000 1.000 | 0.996 0.993 0.996 0.999 1.000 | 0.997 0.993 0.996 0.999 1.000 | 0.993 0.992 0.996 0.999 1.000
                std        | 0.994 0.992 0.996 1.000 1.000 | 0.995 0.992 0.996 0.999 1.000 | 0.993 0.991 0.996 1.000 1.000 | 0.994 0.991 0.996 0.999 1.000
      0.25/1    equal      | 1.000 0.982 0.991 0.999 1.000 | 0.994 0.982 0.989 0.999 0.999 | 0.990 0.981 0.990 0.999 0.999 | 0.984 0.979 0.989 0.999 0.999
                entropy    | 0.991 0.987 0.993 0.999 1.000 | 0.983 0.983 0.991 0.999 0.999 | 0.988 0.983 0.990 0.999 0.999 | 0.983 0.981 0.989 0.999 0.999
                std        | 0.994 0.984 0.991 0.999 1.000 | 0.981 0.984 0.991 0.999 0.999 | 0.988 0.981 0.990 0.999 0.999 | 0.981 0.981 0.990 0.999 0.999
      0.5/1     equal      | 1.000 0.989 0.995 0.999 1.000 | 0.992 0.989 0.993 0.999 1.000 | 0.990 0.987 0.994 0.999 1.000 | 0.984 0.987 0.993 0.999 1.000
                entropy    | 0.990 0.990 0.995 0.999 1.000 | 0.983 0.989 0.994 0.999 1.000 | 0.989 0.988 0.994 0.999 1.000 | 0.983 0.988 0.993 0.999 1.000
                std        | 0.992 0.988 0.995 0.999 1.000 | 0.982 0.989 0.994 0.999 1.000 | 0.987 0.989 0.993 0.999 1.000 | 0.979 0.987 0.993 0.999 1.000
(c)   0.25/0.5  equal      | 0.940 0.941 0.974 0.995 0.998 | 0.930 0.943 0.971 0.995 0.997 | 0.944 0.945 0.970 0.995 0.997 | 0.926 0.935 0.968 0.995 0.997
                entropy    | 0.947 0.941 0.969 0.995 0.997 | 0.932 0.936 0.968 0.995 0.997 | 0.932 0.935 0.965 0.994 0.997 | 0.926 0.932 0.965 0.994 0.997
                std        | 0.933 0.940 0.969 0.995 0.998 | 0.926 0.937 0.967 0.995 0.997 | 0.926 0.935 0.966 0.994 0.997 | 0.902 0.927 0.967 0.994 0.997
      0.25/1    equal      | 0.802 0.893 0.944 0.985 0.990 | 0.827 0.867 0.930 0.981 0.987 | 0.841 0.874 0.929 0.980 0.986 | 0.812 0.848 0.924 0.980 0.986
                entropy    | 0.924 0.894 0.935 0.983 0.989 | 0.884 0.871 0.923 0.979 0.986 | 0.856 0.859 0.919 0.979 0.986 | 0.858 0.849 0.914 0.979 0.985
                std        | 0.865 0.882 0.937 0.984 0.989 | 0.815 0.860 0.924 0.980 0.987 | 0.815 0.858 0.923 0.979 0.986 | 0.806 0.841 0.920 0.979 0.986
      0.5/1     equal      | 0.807 0.918 0.958 0.989 0.993 | 0.854 0.902 0.948 0.987 0.992 | 0.855 0.904 0.948 0.987 0.991 | 0.844 0.888 0.943 0.987 0.991
                entropy    | 0.921 0.918 0.951 0.988 0.993 | 0.893 0.902 0.944 0.986 0.991 | 0.877 0.888 0.940 0.986 0.991 | 0.873 0.889 0.937 0.986 0.991
                std        | 0.876 0.908 0.952 0.988 0.993 | 0.834 0.895 0.945 0.987 0.991 | 0.831 0.887 0.944 0.986 0.991 | 0.830 0.881 0.940 0.986 0.991
(d)   0.25/0.5  equal      | 0.966 0.970 0.988 0.999 0.999 | 0.971 0.973 0.988 0.998 0.999 | 0.973 0.978 0.988 0.998 0.999 | 0.968 0.976 0.988 0.998 0.999
                entropy    | 0.965 0.974 0.988 0.999 0.999 | 0.970 0.973 0.989 0.998 0.999 | 0.972 0.977 0.988 0.998 0.999 | 0.975 0.976 0.989 0.998 0.999
                std        | 0.972 0.974 0.989 0.999 0.999 | 0.975 0.976 0.989 0.999 0.999 | 0.967 0.981 0.988 0.998 0.999 | 0.966 0.975 0.990 0.998 0.999
      0.25/1    equal      | 0.924 0.935 0.976 0.996 0.998 | 0.924 0.937 0.972 0.995 0.997 | 0.941 0.942 0.970 0.995 0.997 | 0.921 0.938 0.970 0.995 0.997
                entropy    | 0.933 0.946 0.973 0.995 0.997 | 0.932 0.935 0.972 0.995 0.997 | 0.929 0.945 0.968 0.994 0.997 | 0.936 0.939 0.970 0.994 0.997
                std        | 0.934 0.943 0.975 0.996 0.998 | 0.935 0.936 0.972 0.995 0.997 | 0.927 0.946 0.970 0.994 0.997 | 0.928 0.937 0.971 0.995 0.997
      0.5/1     equal      | 0.929 0.958 0.985 0.998 0.999 | 0.936 0.959 0.983 0.997 0.998 | 0.950 0.961 0.982 0.997 0.998 | 0.945 0.955 0.982 0.997 0.998
                entropy    | 0.958 0.967 0.984 0.997 0.999 | 0.954 0.959 0.982 0.997 0.998 | 0.953 0.960 0.981 0.997 0.998 | 0.960 0.960 0.980 0.997 0.998
                std        | 0.956 0.965 0.984 0.997 0.999 | 0.950 0.958 0.982 0.997 0.998 | 0.952 0.962 0.981 0.997 0.998 | 0.953 0.960 0.981 0.997 0.998
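The r_w values in Tables A11 and A12 come from the weighted Spearman rank correlation, which penalizes rank disagreements near the top of the rankings more heavily than those at the bottom. A sketch of the coefficient as it is usually defined in the rank-similarity literature (assuming this matches the variant used in the experiments):

```python
def rw(x, y):
    # Weighted Spearman rank correlation between two rankings.
    # x, y: rank vectors of equal length N, with 1 = best position.
    # Positions near the top contribute larger weights (N - rank + 1).
    n = len(x)
    num = 6 * sum((xi - yi) ** 2 * ((n - xi + 1) + (n - yi + 1))
                  for xi, yi in zip(x, y))
    return 1 - num / (n ** 4 + n ** 3 - n ** 2 - n)
```

Identical rankings yield 1 and fully reversed rankings yield -1, matching the range of the ordinary Spearman coefficient.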
Table A12. Mean values of the r_w correlation coefficient for different MCDA methods: (a) TOPSIS minmax/VIKOR, (b) TOPSIS minmax/PROMETHEE II usual, (c) TOPSIS minmax/COPRAS, (d) VIKOR/PROMETHEE II usual, (e) VIKOR/COPRAS, and (f) PROMETHEE II usual/COPRAS.
Method  Weighting  | 2 Criteria                         | 3 Criteria                    | 4 Criteria                         | 5 Criteria
        Method     | Alternatives: 3 5 10 50 100        | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100        | Alternatives: 3 5 10 50 100
(a)     equal      | 0.060 −0.009 −0.024 −0.025 −0.029 | 0.181 0.261 0.272 0.300 0.296 | 0.030 −0.019 −0.003 −0.014 −0.011 | 0.135 0.133 0.148 0.165 0.170
        entropy    | 0.018 −0.024 −0.033 −0.010 −0.023 | 0.307 0.330 0.317 0.326 0.294 | 0.011 −0.025 −0.002 −0.014 −0.004 | 0.182 0.156 0.207 0.159 0.157
        std        | 0.024 −0.034 −0.032 −0.021 −0.030 | 0.246 0.304 0.277 0.307 0.290 | 0.011 −0.028 −0.013 −0.015 −0.008 | 0.166 0.128 0.167 0.158 0.167
(b)     equal      | 0.891 0.889 0.934 0.985 0.992 | 0.896 0.879 0.919 0.982 0.990 | 0.881 0.876 0.925 0.983 0.990 | 0.860 0.872 0.922 0.983 0.991
        entropy    | 0.900 0.898 0.936 0.980 0.988 | 0.850 0.866 0.913 0.969 0.983 | 0.837 0.848 0.899 0.967 0.980 | 0.790 0.831 0.883 0.965 0.980
        std        | 0.848 0.865 0.925 0.983 0.990 | 0.789 0.852 0.907 0.977 0.988 | 0.798 0.836 0.904 0.977 0.987 | 0.771 0.821 0.903 0.978 0.988
(c)     equal      | 0.754 0.832 0.853 0.873 0.872 | 0.735 0.803 0.852 0.884 0.885 | 0.751 0.851 0.906 0.944 0.948 | 0.760 0.835 0.905 0.941 0.946
        entropy    | 0.919 0.900 0.874 0.869 0.868 | 0.914 0.880 0.867 0.877 0.880 | 0.912 0.889 0.910 0.938 0.944 | 0.900 0.884 0.903 0.934 0.941
        std        | 0.839 0.860 0.859 0.871 0.870 | 0.847 0.840 0.855 0.881 0.884 | 0.854 0.875 0.909 0.942 0.947 | 0.855 0.860 0.909 0.939 0.944
(d)     equal      | 0.173 0.044 −0.011 −0.028 −0.029 | 0.209 0.259 0.266 0.293 0.292 | 0.084 0.004 −0.003 −0.018 −0.017 | 0.169 0.150 0.149 0.159 0.164
        entropy    | 0.019 −0.019 −0.031 −0.016 −0.025 | 0.269 0.310 0.298 0.313 0.288 | 0.010 −0.039 −0.004 −0.017 −0.012 | 0.167 0.127 0.180 0.150 0.152
        std        | 0.004 −0.017 −0.026 −0.025 −0.030 | 0.197 0.271 0.264 0.298 0.287 | −0.006 −0.021 −0.020 −0.017 −0.014 | 0.144 0.110 0.155 0.153 0.162
(e)     equal      | 0.014 −0.018 0.002 0.085 0.124 | 0.113 0.171 0.218 0.305 0.324 | −0.003 −0.039 −0.005 −0.009 −0.003 | 0.115 0.080 0.113 0.138 0.142
        entropy    | 0.005 −0.010 0.014 0.098 0.130 | 0.289 0.305 0.306 0.345 0.337 | 0.002 −0.040 0.002 −0.007 0.003 | 0.181 0.127 0.177 0.138 0.139
        std        | 0.006 −0.021 0.003 0.089 0.125 | 0.233 0.247 0.248 0.317 0.325 | −0.022 −0.041 −0.011 −0.007 0.000 | 0.133 0.098 0.133 0.135 0.142
(f)     equal      | 0.776 0.820 0.853 0.882 0.883 | 0.742 0.778 0.832 0.883 0.889 | 0.764 0.818 0.879 0.940 0.949 | 0.741 0.793 0.871 0.935 0.945
        entropy    | 0.895 0.877 0.877 0.885 0.884 | 0.857 0.836 0.859 0.887 0.890 | 0.855 0.847 0.890 0.942 0.950 | 0.811 0.838 0.883 0.938 0.946
        std        | 0.811 0.826 0.854 0.882 0.883 | 0.767 0.793 0.836 0.884 0.889 | 0.774 0.826 0.878 0.940 0.949 | 0.758 0.799 0.871 0.936 0.945
Table A13. Mean values of the WS correlation coefficient for different MCDA methods: (a) TOPSIS minmax/VIKOR, (b) TOPSIS minmax/PROMETHEE II usual, (c) TOPSIS minmax/COPRAS, (d) VIKOR/PROMETHEE II usual, (e) VIKOR/COPRAS, and (f) PROMETHEE II usual/COPRAS.
Method  Weighting  | 2 Criteria                    | 3 Criteria                    | 4 Criteria                    | 5 Criteria
        Method     | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100   | Alternatives: 3 5 10 50 100
(a)     equal      | 0.463 0.465 0.464 0.374 0.339 | 0.561 0.569 0.592 0.590 0.561 | 0.528 0.489 0.495 0.442 0.420 | 0.562 0.546 0.563 0.566 0.558
        entropy    | 0.596 0.520 0.490 0.403 0.357 | 0.672 0.652 0.638 0.614 0.569 | 0.569 0.527 0.512 0.452 0.424 | 0.611 0.588 0.611 0.560 0.542
        std        | 0.540 0.477 0.466 0.380 0.342 | 0.611 0.604 0.598 0.596 0.561 | 0.519 0.494 0.490 0.443 0.423 | 0.573 0.544 0.571 0.559 0.555
(b)     equal      | 0.873 0.876 0.933 0.990 0.995 | 0.903 0.878 0.924 0.988 0.994 | 0.884 0.876 0.924 0.986 0.993 | 0.875 0.879 0.926 0.985 0.992
        entropy    | 0.919 0.905 0.933 0.990 0.995 | 0.885 0.881 0.924 0.987 0.994 | 0.880 0.872 0.914 0.985 0.993 | 0.851 0.857 0.906 0.983 0.992
        std        | 0.876 0.879 0.932 0.990 0.995 | 0.847 0.873 0.923 0.988 0.994 | 0.859 0.860 0.916 0.986 0.993 | 0.842 0.854 0.916 0.985 0.992
(c)     equal      | 0.822 0.857 0.881 0.941 0.955 | 0.821 0.840 0.879 0.935 0.947 | 0.830 0.873 0.912 0.958 0.965 | 0.836 0.861 0.911 0.954 0.962
        entropy    | 0.938 0.910 0.900 0.942 0.955 | 0.935 0.898 0.897 0.937 0.948 | 0.933 0.903 0.918 0.958 0.966 | 0.922 0.896 0.914 0.955 0.963
        std        | 0.890 0.879 0.887 0.941 0.955 | 0.891 0.867 0.882 0.935 0.947 | 0.891 0.890 0.915 0.957 0.965 | 0.891 0.877 0.914 0.955 0.963
(d)     equal      | 0.492 0.505 0.523 0.510 0.507 | 0.567 0.586 0.642 0.730 0.745 | 0.532 0.500 0.522 0.506 0.508 | 0.556 0.549 0.586 0.632 0.644
        entropy    | 0.626 0.544 0.520 0.512 0.506 | 0.674 0.639 0.651 0.725 0.737 | 0.570 0.515 0.525 0.508 0.508 | 0.601 0.564 0.601 0.624 0.635
        std        | 0.578 0.506 0.519 0.511 0.507 | 0.595 0.609 0.642 0.730 0.744 | 0.521 0.508 0.520 0.507 0.509 | 0.567 0.549 0.590 0.630 0.643
(e)     equal      | 0.481 0.545 0.604 0.701 0.741 | 0.547 0.595 0.682 0.806 0.838 | 0.518 0.516 0.570 0.627 0.657 | 0.562 0.553 0.612 0.703 0.730
        entropy    | 0.604 0.564 0.598 0.700 0.740 | 0.676 0.670 0.706 0.809 0.836 | 0.573 0.540 0.575 0.625 0.653 | 0.619 0.594 0.642 0.701 0.728
        std        | 0.562 0.547 0.600 0.702 0.741 | 0.617 0.634 0.690 0.808 0.838 | 0.531 0.520 0.570 0.627 0.658 | 0.579 0.566 0.622 0.703 0.730
(f)     equal      | 0.777 0.831 0.882 0.945 0.958 | 0.804 0.824 0.875 0.940 0.951 | 0.811 0.845 0.898 0.959 0.967 | 0.807 0.836 0.894 0.956 0.965
        entropy    | 0.919 0.893 0.898 0.945 0.958 | 0.893 0.864 0.892 0.942 0.952 | 0.890 0.870 0.905 0.960 0.967 | 0.861 0.861 0.899 0.957 0.966
        std        | 0.870 0.850 0.880 0.944 0.958 | 0.837 0.836 0.877 0.940 0.951 | 0.840 0.854 0.898 0.958 0.967 | 0.834 0.843 0.893 0.957 0.965
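The WS coefficient reported in Tables A9, A10, and A13 is a rank-similarity measure in which the weight of a disagreement decays exponentially with the position in the reference ranking, so swaps at the top matter most. A sketch of its standard definition (assuming this matches the variant used in the experiments):

```python
def ws(x, y):
    # WS rank-similarity coefficient between two rankings.
    # x: reference rank vector, y: compared rank vector (1 = best),
    # both of equal length N.  The 2^(-rank) factor makes top
    # positions dominate the score.
    n = len(x)
    return 1 - sum(2.0 ** -xi * abs(xi - yi) / max(abs(xi - 1), abs(xi - n))
                   for xi, yi in zip(x, y))
```

Identical rankings give WS = 1; unlike Spearman-type coefficients, WS is asymmetric in its arguments, since the weights are driven by the reference ranking x.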

References

  1. Greco, S.; Figueira, J.; Ehrgott, M. Multiple Criteria Decision Analysis; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  2. Roy, B. Multicriteria Methodology for Decision Aiding; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 12. [Google Scholar]
  3. Kodikara, P.N. Multi-Objective Optimal Operation of Urban Water Supply Systems. Ph.D. Thesis, Victoria University, Footscray, Australia, 2008. [Google Scholar]
  4. Mulliner, E.; Malys, N.; Maliene, V. Comparative analysis of MCDM methods for the assessment of sustainable housing affordability. Omega 2016, 59, 146–156. [Google Scholar] [CrossRef]
  5. Tzeng, G.H.; Chiang, C.H.; Li, C.W. Evaluating intertwined effects in e-learning programs: A novel hybrid MCDM model based on factor analysis and DEMATEL. Expert Syst. Appl. 2007, 32, 1028–1044. [Google Scholar] [CrossRef]
  6. Deveci, M.; Özcan, E.; John, R.; Covrig, C.F.; Pamucar, D. A study on offshore wind farm siting criteria using a novel interval-valued fuzzy-rough based Delphi method. J. Environ. Manag. 2020, 270, 110916. [Google Scholar] [CrossRef] [PubMed]
  7. Chatterjee, K.; Pamucar, D.; Zavadskas, E.K. Evaluating the performance of suppliers based on using the R’AMATEL-MAIRCA method for green supply chain implementation in electronics industry. J. Clean. Prod. 2018, 184, 101–129. [Google Scholar] [CrossRef]
  8. Zopounidis, C.; Doumpos, M. Multicriteria classification and sorting methods: A literature review. Eur. J. Oper. Res. 2002, 138, 229–246. [Google Scholar] [CrossRef]
  9. Behzadian, M.; Otaghsara, S.K.; Yazdani, M.; Ignatius, J. A state-of the-art survey of TOPSIS applications. Expert Syst. Appl. 2012, 39, 13051–13069. [Google Scholar] [CrossRef]
  10. Palczewski, K.; Sałabun, W. The fuzzy TOPSIS applications in the last decade. Procedia Comput. Sci. 2019, 159, 2294–2303. [Google Scholar] [CrossRef]
  11. Sałabun, W. The mean error estimation of TOPSIS method using a fuzzy reference models. J. Theor. Appl. Comput. Sci. 2013, 7, 40–50. [Google Scholar]
  12. Opricovic, S.; Tzeng, G.H. Extended VIKOR method in comparison with outranking methods. Eur. J. Oper. Res. 2007, 178, 514–529. [Google Scholar] [CrossRef]
  13. Saaty, T. The Analytic Hierarchy Process; Mcgraw Hill: New York, NY, USA, 1980. [Google Scholar]
  14. Saaty, T.L. A scaling method for priorities in hierarchical structures. J. Math. Psychol. 1977, 15, 234–281. [Google Scholar] [CrossRef]
  15. Zavadskas, E.K.; Kaklauskas, A.; Peldschus, F.; Turskis, Z. Multi-attribute assessment of road design solutions by using the COPRAS method. Balt. J. Road Bridge Eng. 2007, 2, 195–203. [Google Scholar]
  16. Govindan, K.; Jepsen, M.B. ELECTRE: A comprehensive literature review on methodologies and applications. Eur. J. Oper. Res. 2016, 250, 1–29. [Google Scholar] [CrossRef]
  17. Hashemi, S.S.; Hajiagha, S.H.R.; Zavadskas, E.K.; Mahdiraji, H.A. Multicriteria group decision making with ELECTRE III method based on interval-valued intuitionistic fuzzy information. Appl. Math. Model. 2016, 40, 1554–1564. [Google Scholar] [CrossRef]
  18. Brans, J.P.; Vincke, P.; Mareschal, B. How to select and how to rank projects: The PROMETHEE method. Eur. J. Oper. Res. 1986, 24, 228–238. [Google Scholar] [CrossRef]
  19. Uhde, B.; Hahn, W.A.; Griess, V.C.; Knoke, T. Hybrid MCDA methods to integrate multiple ecosystem services in forest management planning: A critical review. Environ. Manag. 2015, 56, 373–388. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Wątróbski, J.; Jankowski, J.; Ziemba, P.; Karczmarczyk, A.; Zioło, M. Generalised framework for multi-criteria method selection: Rule set database and exemplary decision support system implementation blueprints. Data Brief 2019, 22, 639. [Google Scholar] [CrossRef]
  21. Więckowski, J.; Kizielewicz, B.; Kołodziejczyk, J. Application of Hill Climbing Algorithm in Determining the Characteristic Objects Preferences Based on the Reference Set of Alternatives. In Proceedings of the International Conference on Intelligent Decision Technologies; Springer: Berlin/Heidelberg, Germany, 2020; pp. 341–351. [Google Scholar]
  22. Wątróbski, J.; Jankowski, J.; Ziemba, P.; Karczmarczyk, A.; Zioło, M. Generalised framework for multi-criteria method selection. Omega 2019, 86, 107–124. [Google Scholar] [CrossRef]
  23. Guitouni, A.; Martel, J.M. Tentative guidelines to help choosing an appropriate MCDA method. Eur. J. Oper. Res. 1998, 109, 501–521. [Google Scholar] [CrossRef]
  24. Zanakis, S.H.; Solomon, A.; Wishart, N.; Dublish, S. Multi-attribute decision making: A simulation comparison of select methods. Eur. J. Oper. Res. 1998, 107, 507–529. [Google Scholar] [CrossRef]
  25. Gershon, M. The role of weights and scales in the application of multiobjective decision making. Eur. J. Oper. Res. 1984, 15, 244–250. [Google Scholar] [CrossRef]
  26. Cinelli, M.; Kadziński, M.; Gonzalez, M.; Słowiński, R. How to Support the Application of Multiple Criteria Decision Analysis? Let Us Start with a Comprehensive Taxonomy. Omega 2020, 96, 102261. [Google Scholar] [CrossRef]
  27. Hanne, T. Meta decision problems in multiple criteria decision making. In Multicriteria Decision Making; Springer: Berlin/Heidelberg, Germany, 1999; pp. 147–171. [Google Scholar]
  28. Wang, X.; Triantaphyllou, E. Ranking irregularities when evaluating alternatives by using some ELECTRE methods. Omega 2008, 36, 45–63. [Google Scholar] [CrossRef]
  29. Chang, Y.H.; Yeh, C.H.; Chang, Y.W. A new method selection approach for fuzzy group multicriteria decision making. Appl. Soft Comput. 2013, 13, 2179–2187. [Google Scholar] [CrossRef]
  30. Hajkowicz, S.; Higgins, A. A comparison of multiple criteria analysis techniques for water resource management. Eur. J. Oper. Res. 2008, 184, 255–265. [Google Scholar] [CrossRef]
  31. Zak, J. The comparison of multiobjective ranking methods applied to solve the mass transit systems’ decision problems. In Proceedings of the 10th Jubilee Meeting of the EURO Working Group on Transportation, Poznan, Poland, 13–16 September 2005; pp. 13–16. [Google Scholar]
  32. Pamučar, D.; Stević, Ž.; Sremac, S. A new model for determining weight coefficients of criteria in mcdm models: Full consistency method (fucom). Symmetry 2018, 10, 393. [Google Scholar] [CrossRef] [Green Version]
  33. Zardari, N.H.; Ahmed, K.; Shirazi, S.M.; Yusop, Z.B. Weighting Methods and Their Effects on Multi-Criteria Decision Making Model Outcomes in Water Resources Management; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  34. Wątróbski, J.; Ziemba, E.; Karczmarczyk, A.; Jankowski, J. An index to measure the sustainable information society: The Polish households case. Sustainability 2018, 10, 3223. [Google Scholar] [CrossRef] [Green Version]
  35. Sałabun, W.; Palczewski, K.; Wątróbski, J. Multicriteria approach to sustainable transport evaluation under incomplete knowledge: Electric bikes case study. Sustainability 2019, 11, 3314. [Google Scholar] [CrossRef] [Green Version]
  36. Wątróbski, J.; Sałabun, W. Green supplier selection framework based on multi-criteria decision-analysis approach. In Proceedings of the International Conference on Sustainable Design and Manufacturing; Springer: Berlin/Heidelberg, Germany, 2016; pp. 361–371. [Google Scholar]
  37. Alimardani, M.; Hashemkhani Zolfani, S.; Aghdaie, M.H.; Tamošaitienė, J. A novel hybrid SWARA and VIKOR methodology for supplier selection in an agile environment. Technol. Econ. Dev. Econ. 2013, 19, 533–548. [Google Scholar] [CrossRef]
  38. Chu, T.C. Selecting plant location via a fuzzy TOPSIS approach. Int. J. Adv. Manuf. Technol. 2002, 20, 859–864. [Google Scholar] [CrossRef]
  39. Madić, M.; Marković, D.; Petrović, G.; Radovanović, M. Application of COPRAS method for supplier selection. In The Fifth International Conference Transport and Logistics-TIL 2014; Proceedings: Niš, Serbia, 2014; pp. 47–50. [Google Scholar]
  40. Elevli, B. Logistics freight center locations decision by using Fuzzy-PROMETHEE. Transport 2014, 29, 412–418. [Google Scholar] [CrossRef] [Green Version]
  41. Stević, Ž.; Pamučar, D.; Puška, A.; Chatterjee, P. Sustainable supplier selection in healthcare industries using a new MCDM method: Measurement of alternatives and ranking according to COmpromise solution (MARCOS). Comput. Ind. Eng. 2020, 140, 106231. [Google Scholar] [CrossRef]
  42. Ahmadi, A.; Gupta, S.; Karim, R.; Kumar, U. Selection of maintenance strategy for aircraft systems using multi-criteria decision making methodologies. Int. J. Reliab. Qual. Saf. Eng. 2010, 17, 223–243. [Google Scholar] [CrossRef]
  43. Venkata Rao, R. Evaluating flexible manufacturing systems using a combined multiple attribute decision making method. Int. J. Prod. Res. 2008, 46, 1975–1989. [Google Scholar] [CrossRef]
  44. Aghdaie, M.H.; Zolfani, S.H.; Zavadskas, E.K. Decision making in machine tool selection: An integrated approach with SWARA and COPRAS-G methods. Eng. Econ. 2013, 24, 5–17. [Google Scholar]
  45. Hashemi, H.; Bazargan, J.; Mousavi, S.M. A compromise ratio method with an application to water resources management: An intuitionistic fuzzy set. Water Resour. Manag. 2013, 27, 2029–2051. [Google Scholar] [CrossRef]
  46. Liu, C.; Frazier, P.; Kumar, L.; Macgregor, C.; Blake, N. Catchment-wide wetland assessment and prioritization using the multi-criteria decision-making method TOPSIS. Environ. Manag. 2006, 38, 316–326. [Google Scholar] [CrossRef] [PubMed]
  47. Roozbahani, A.; Ghased, H.; Hashemy Shahedany, M. Inter-basin water transfer planning with grey COPRAS and fuzzy COPRAS techniques: A case study in Iranian Central Plateau. Sci. Total Environ. 2020, 726, 138499. [Google Scholar] [CrossRef]
  48. Kapepula, K.M.; Colson, G.; Sabri, K.; Thonart, P. A multiple criteria analysis for household solid waste management in the urban community of Dakar. Waste Manag. 2007, 27, 1690–1705. [Google Scholar] [CrossRef]
  49. Carnero, M.C. Waste Segregation FMEA Model Integrating Intuitionistic Fuzzy Set and the PAPRIKA Method. Mathematics 2020, 8, 1375. [Google Scholar] [CrossRef]
  50. Boran, F.; Boran, K.; Menlik, T. The evaluation of renewable energy technologies for electricity generation in Turkey using intuitionistic fuzzy TOPSIS. Energy Sources Part B Econ. Plan. Policy 2012, 7, 81–90. [Google Scholar] [CrossRef]
  51. Kaya, T.; Kahraman, C. Multicriteria renewable energy planning using an integrated fuzzy VIKOR & AHP methodology: The case of Istanbul. Energy 2010, 35, 2517–2527. [Google Scholar]
  52. Krishankumar, R.; Ravichandran, K.; Kar, S.; Cavallaro, F.; Zavadskas, E.K.; Mardani, A. Scientific decision framework for evaluation of renewable energy sources under q-rung orthopair fuzzy set with partially known weight information. Sustainability 2019, 11, 4202. [Google Scholar] [CrossRef] [Green Version]
  53. Rehman, A.U.; Abidi, M.H.; Umer, U.; Usmani, Y.S. Multi-Criteria Decision-Making Approach for Selecting Wind Energy Power Plant Locations. Sustainability 2019, 11, 6112. [Google Scholar] [CrossRef] [Green Version]
  54. Wątróbski, J.; Ziemba, P.; Wolski, W. Methodological aspects of decision support system for the location of renewable energy sources. In Proceedings of the 2015 Federated Conference on Computer Science and Information Systems (FedCSIS), Lodz, Poland, 13–16 September 2015; pp. 1451–1459. [Google Scholar]
  55. Riaz, M.; Sałabun, W.; Farid, H.M.A.; Ali, N.; Wątróbski, J. A Robust q-Rung Orthopair Fuzzy Information Aggregation Using Einstein Operations with Application to Sustainable Energy Planning Decision Management. Energies 2020, 13, 2155. [Google Scholar] [CrossRef]
  56. Tong, L.I.; Chen, C.C.; Wang, C.H. Optimization of multi-response processes using the VIKOR method. Int. J. Adv. Manuf. Technol. 2007, 31, 1049–1057. [Google Scholar] [CrossRef]
  57. Tong, L.I.; Wang, C.H.; Chen, H.C. Optimization of multiple responses using principal component analysis and technique for order preference by similarity to ideal solution. Int. J. Adv. Manuf. Technol. 2005, 27, 407–414. [Google Scholar] [CrossRef]
  58. Mlela, M.K.; Xu, H.; Sun, F.; Wang, H.; Madenge, G.D. Material Analysis and Molecular Dynamics Simulation for Cavitation Erosion and Corrosion Suppression in Water Hydraulic Valves. Materials 2020, 13, 453. [Google Scholar] [CrossRef] [Green Version]
  59. Yazdani, M.; Graeml, F.R. VIKOR and its applications: A state-of-the-art survey. Int. J. Strateg. Decis. Sci. (IJSDS) 2014, 5, 56–83. [Google Scholar] [CrossRef] [Green Version]
  60. Behzadian, M.; Kazemzadeh, R.; Albadvi, A.; Aghdasi, M. PROMETHEE: A comprehensive literature review on methodologies and applications. Eur. J. Oper. Res. 2010, 200, 198–215. [Google Scholar] [CrossRef]
  61. Zavadskas, E.K.; Turskis, Z. Multiple criteria decision making (MCDM) methods in economics: An overview. Technol. Econ. Dev. Econ. 2011, 17, 397–427. [Google Scholar] [CrossRef] [Green Version]
  62. Guitouni, A.; Martel, J.M.; Vincke, P.; North, P. A Framework to Choose a Discrete Multicriterion Aggregation Procedure; Defence Research Establishment Valcatier (DREV): Ottawa, ON, Canada, 1998; Available online: https://pdfs.semanticscholar.org/27d5/9c846657268bc840c4df8df98e85de66c562.pdf (accessed on 28 June 2020).
  63. Spronk, J.; Steuer, R.E.; Zopounidis, C. Multicriteria decision aid/analysis in finance. In Multiple Criteria Decision Analysis: State of the Art Surveys; Springer: Berlin/Heidelberg, Germany, 2005; pp. 799–848. [Google Scholar]
  64. Roy, B. The outranking approach and the foundations of ELECTRE methods. In Readings in Multiple Criteria Decision Aid; Springer: Berlin/Heidelberg, Germany, 1990; pp. 155–183. [Google Scholar]
  65. Ishizaka, A.; Nemery, P. Multi-Criteria Decision Analysis: Methods and Software; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  66. Stević, Ž.; Pamučar, D.; Subotić, M.; Antuchevičiene, J.; Zavadskas, E.K. The location selection for roundabout construction using Rough BWM-Rough WASPAS approach based on a new Rough Hamy aggregator. Sustainability 2018, 10, 2817. [Google Scholar] [CrossRef] [Green Version]
  67. Fortemps, P.; Greco, S.; Słowiński, R. Multicriteria choice and ranking using decision rules induced from rough approximation of graded preference relations. In Proceedings of the International Conference on Rough Sets and Current Trends in Computing; Springer: Berlin/Heidelberg, Germany, 2004; pp. 510–522. [Google Scholar]
  68. e Costa, C.A.B.; Vincke, P. Multiple criteria decision aid: An overview. In Readings in Multiple Criteria Decision Aid; Springer: Berlin/Heidelberg, Germany, 1990; pp. 3–14. [Google Scholar]
  69. Wang, J.J.; Yang, D.L. Using a hybrid multi-criteria decision aid method for information systems outsourcing. Comput. Oper. Res. 2007, 34, 3691–3700. [Google Scholar] [CrossRef]
  70. Figueira, J.; Mousseau, V.; Roy, B. ELECTRE methods. In Multiple Criteria Decision Analysis: State of the Art Surveys; Springer: Berlin/Heidelberg, Germany, 2005; pp. 133–153. [Google Scholar]
  71. Blin, M.J.; Tsoukiàs, A. Multi-criteria methodology contribution to the software quality evaluation. Softw. Qual. J. 2001, 9, 113–132. [Google Scholar] [CrossRef] [Green Version]
  72. Edwards, W.; Newman, J.R.; Snapper, K.; Seaver, D. Multiattribute Evaluation; Number 26; Chronicle Books; SAGE Publications: London, UK, 1982. [Google Scholar]
  73. Jacquet-Lagreze, E.; Siskos, J. Assessing a set of additive utility functions for multicriteria decision-making, the UTA method. Eur. J. Oper. Res. 1982, 10, 151–164. [Google Scholar] [CrossRef]
  74. e Costa, C.A.B.; Vansnick, J.C. MACBETH—An interactive path towards the construction of cardinal value functions. Int. Trans. Oper. Res. 1994, 1, 489–500. [Google Scholar] [CrossRef]
75. e Costa, C.A.B.; Vansnick, J.C. Applications of the MACBETH approach in the framework of an additive aggregation model. J. Multi-Criteria Decis. Anal. 1997, 6, 107–114. [Google Scholar] [CrossRef]
  76. Chen, C.T.; Lin, C.T.; Huang, S.F. A fuzzy approach for supplier evaluation and selection in supply chain management. Int. J. Prod. Econ. 2006, 102, 289–301. [Google Scholar] [CrossRef]
  77. Vahdani, B.; Mousavi, S.M.; Tavakkoli-Moghaddam, R. Group decision making based on novel fuzzy modified TOPSIS method. Appl. Math. Model. 2011, 35, 4257–4269. [Google Scholar] [CrossRef]
  78. Rashid, T.; Beg, I.; Husnine, S.M. Robot selection by using generalized interval-valued fuzzy numbers with TOPSIS. Appl. Soft Comput. 2014, 21, 462–468. [Google Scholar] [CrossRef]
  79. Krohling, R.A.; Campanharo, V.C. Fuzzy TOPSIS for group decision making: A case study for accidents with oil spill in the sea. Expert Syst. Appl. 2011, 38, 4190–4197. [Google Scholar] [CrossRef]
  80. Roy, B. Classement et choix en présence de points de vue multiples. Rev. Française Inform. Rech. Oper. 1968, 2, 57–75. [Google Scholar] [CrossRef]
  81. Roy, B.; Skalka, J.M. ELECTRE IS: Aspects Méthodologiques et Guide D’utilisation; LAMSADE, Unité Associée au CNRS no 825; Université de Paris Dauphine: Paris, France, 1987. [Google Scholar]
82. Roy, B.; Bertier, P. La méthode ELECTRE II: Une application au média planning; North Holland: Amsterdam, The Netherlands, 1973. [Google Scholar]
  83. Roy, B.; Hugonnard, J.C. Ranking of suburban line extension projects on the Paris metro system by a multicriteria method. Transp. Res. Part A Gen. 1982, 16, 301–312. [Google Scholar] [CrossRef]
  84. Roy, B.; Bouyssou, D. Aide Multicritère à la Décision: Méthodes et Cas; Economica Paris: Paris, France, 1993. [Google Scholar]
85. Mareschal, B.; Brans, J.P.; Vincke, P. PROMETHEE: A New Family of Outranking Methods in Multicriteria Analysis; Technical Report; ULB—Université Libre de Bruxelles: Brussels, Belgium, 1984. [Google Scholar]
  86. Janssens, G.K.; Pangilinan, J.M. Multiple criteria performance analysis of non-dominated sets obtained by multi-objective evolutionary algorithms for optimisation. In Proceedings of the IFIP International Conference on Artificial Intelligence Applications and Innovations, Larnaca, Cyprus, 6–7 October 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 94–103. [Google Scholar]
  87. Chen, T.Y. A PROMETHEE-based outranking method for multiple criteria decision analysis with interval type-2 fuzzy sets. Soft Comput. 2014, 18, 923–940. [Google Scholar] [CrossRef]
  88. Martinez-Alier, J.; Munda, G.; O’Neill, J. Weak comparability of values as a foundation for ecological economics. Ecol. Econ. 1998, 26, 277–286. [Google Scholar] [CrossRef]
  89. Munda, G. Cost-benefit analysis in integrated environmental assessment: Some methodological issues. Ecol. Econ. 1996, 19, 157–168. [Google Scholar] [CrossRef]
  90. Ana, E., Jr.; Bauwens, W.; Broers, O. Quantifying uncertainty using robustness analysis in the application of ORESTE to sewer rehabilitation projects prioritization—Brussels case study. J. Multi-Criteria Decis. Anal. 2009, 16, 111–124. [Google Scholar] [CrossRef]
  91. Roubens, M. Preference relations on actions and criteria in multicriteria decision making. Eur. J. Oper. Res. 1982, 10, 51–55. [Google Scholar] [CrossRef]
  92. Hinloopen, E.; Nijkamp, P. Qualitative multiple criteria choice analysis. Qual. Quant. 1990, 24, 37–56. [Google Scholar] [CrossRef]
  93. Martel, J.M.; Matarazzo, B. Other outranking approaches. In Multiple Criteria Decision Analysis: State of the Art Surveys; Springer: Berlin/Heidelberg, Germany, 2005; pp. 197–259. [Google Scholar]
  94. Marchant, T. An axiomatic characterization of different majority concepts. Eur. J. Oper. Res. 2007, 179, 160–173. [Google Scholar] [CrossRef] [Green Version]
  95. Bélanger, M.; Martel, J.M. An Automated Explanation Approach for a Decision Support System based on MCDA. ExaCt 2005, 5, 4. [Google Scholar]
96. Guitouni, A.; Martel, J.; Bélanger, M.; Hunter, C. Managing a Decision-Making Situation in the Context of the Canadian Airspace Protection; Faculté des Sciences de l’Administration de l’Université Laval, Direction de la Recherche: Quebec City, QC, Canada, 1999. [Google Scholar]
  97. Nijkamp, P.; Rietveld, P.; Voogd, H. Multicriteria Evaluation in Physical Planning; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
  98. Vincke, P. Outranking approach. In Multicriteria Decision Making; Springer: Berlin/Heidelberg, Germany, 1999; pp. 305–333. [Google Scholar]
  99. Paelinck, J.H. Qualitative multiple criteria analysis, environmental protection and multiregional development. In Papers of the Regional Science Association; Springer: Berlin/Heidelberg, Germany, 1976; Volume 36, pp. 59–74. [Google Scholar]
  100. Matarazzo, B. MAPPAC as a compromise between outranking methods and MAUT. Eur. J. Oper. Res. 1991, 54, 48–65. [Google Scholar] [CrossRef]
101. Greco, S. A new PCCA method: IDRA. Eur. J. Oper. Res. 1997, 98, 587–601. [Google Scholar] [CrossRef]
  102. Sałabun, W. The Characteristic Objects Method: A New Distance-based Approach to Multicriteria Decision-making Problems. J. Multi-Criteria Decis. Anal. 2015, 22, 37–50. [Google Scholar] [CrossRef]
103. Kujawińska, A.; Rogalewicz, M.; Diering, M.; Piłacińska, M.; Hamrol, A.; Kochański, A. Assessment of ductile iron casting process with the use of the DRSA method. J. Min. Metall. Sect. B Metall. 2016, 52, 25–34. [Google Scholar] [CrossRef] [Green Version]
  104. Więckowski, J.; Kizielewicz, B.; Kołodziejczyk, J. Finding an Approximate Global Optimum of Characteristic Objects Preferences by Using Simulated Annealing. In Proceedings of the International Conference on Intelligent Decision Technologies, Split, Croatia, 17–19 June 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 365–375. [Google Scholar]
  105. Sałabun, W.; Wątróbski, J.; Piegat, A. Identification of a multi-criteria model of location assessment for renewable energy sources. In Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland, 12–16 June 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 321–332. [Google Scholar]
  106. Sałabun, W.; Karczmarczyk, A.; Wątróbski, J.; Jankowski, J. Handling data uncertainty in decision making with COMET. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; pp. 1478–1484. [Google Scholar]
  107. Więckowski, J.; Kizielewicz, B.; Kołodziejczyk, J. The Search of the Optimal Preference Values of the Characteristic Objects by Using Particle Swarm Optimization in the Uncertain Environment. In Proceedings of the International Conference on Intelligent Decision Technologies, Split, Croatia, 17–19 June 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 353–363. [Google Scholar]
  108. Inuiguchi, M.; Yoshioka, Y.; Kusunoki, Y. Variable-precision dominance-based rough set approach and attribute reduction. Int. J. Approx. Reason. 2009, 50, 1199–1214. [Google Scholar] [CrossRef] [Green Version]
  109. Greco, S.; Matarazzo, B.; Slowinski, R. Rough approximation by dominance relations. Int. J. Intell. Syst. 2002, 17, 153–171. [Google Scholar] [CrossRef]
  110. Słowiński, R.; Greco, S.; Matarazzo, B. Rough-set-based decision support. In Search Methodologies; Springer: Berlin/Heidelberg, Germany, 2014; pp. 557–609. [Google Scholar]
  111. Al-Shemmeri, T.; Al-Kloub, B.; Pearman, A. Model choice in multicriteria decision aid. Eur. J. Oper. Res. 1997, 97, 550–560. [Google Scholar] [CrossRef]
  112. Kornyshova, E.; Salinesi, C. MCDM techniques selection approaches: State of the art. In Proceedings of the 2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, Honolulu, HI, USA