Identification of Relevant Criteria Set in the MCDA Process—Wind Farm Location Case Study

Bartłomiej Kizielewicz 1, Jarosław Wątróbski 2,* and Wojciech Sałabun 1,*

1 Research Team on Intelligent Decision Support Systems, Department of Artificial Intelligence and Applied Mathematics, Faculty of Computer Science and Information Technology, West Pomeranian University of Technology in Szczecin, ul. Żołnierska 49, 71-210 Szczecin, Poland; bartlomiej-kizielewicz@zut.edu.pl
2 Institute of Management, University of Szczecin, Cukrowa 8, 71-004 Szczecin, Poland
* Correspondence: jaroslaw.watrobski@usz.edu.pl (J.W.); wojciech.salabun@zut.edu.pl (W.S.); Tel.: +48-91-449-5580 (W.S.)


Introduction
In recent times, decision-making in the field of energy policy has been determined not only by technological and economic factors [1]. The transfer of the principles of sustainability to the energy sector results in the inclusion of important groups of pro-environmental factors in the decision-making process [2] and also implies the assessment of future actions in the social dimension [3]. This results in the fact that planning or evaluation of energy policies for countries and regions becomes a complex process [4]. The same is true for the problem of evaluation of individual actions in the area of selection of type and location of energy sources [5]. The indicated conflicting objectives (technological, economic, environmental and social) provide the formal background for using Multi-Criteria Decision Analysis (MCDA) methods in this research area [6]. The current state of the art provides a solid justification for this fact by showing a strong potential of methods of multi-criteria decision support in the area of planning and evaluation of energy activities [7][8][9].
It should be pointed out that the classic paradigm of multi-criteria decision making assumes that it is a process composed of successive stages [10][11][12]. Figure 1 shows that the fundamental stages are problem structuring, preference modelling, data aggregation and recommendation generation [10,13,14]. It is worth recalling that a vital role in this process is played by the decision-maker and the system analyst [15,16]. Objectification of the developed MCDA models and generation of the correct final recommendation require appropriate structuring of the model by the system analyst [14] and proper selection of the appropriate MCDA method for the given decision problem [14,16]. The problem of choosing the right MCDA method is current and is addressed in many works [10,[14][15][16][17]. The analysis of the literature provides several approaches, guidelines and frameworks containing algorithms or guidelines for choosing the proper MCDA method for a given decision-making situation. Nonetheless, from the perspective of the current state of the art, there is a visible gap in terms of the lack of formal guidance to support the structuring stage of a decision problem. It is worth remembering that at the structuring phase, the task of the decision-maker and/or the system analyst is to identify a complete set of decision options and a set of criteria for their evaluation [10,14,15]. While the identification of a set of alternatives to a given decision problem is relatively simple (it involves identifying the set of non-dominated solutions in the Pareto sense), defining and properly mapping the full set of criteria is a complex process [16,18].
The referential literature-based guidelines contained in the works [10,11,[14][15][16] dictate scrupulosity in the construction of the criteria set at the stage of model structuring. In other words, the set of criteria should be comprehensive yet not redundant [10,15]. These guidelines, although they are fundamental and commonly used by analysts, do not have a formal and algorithmic form. In practice, this means that the same reference decision-making problems, depending on the assumptions of the authors of decision models, differ in their structuring. For example, the problems of energy policy evaluation [19][20][21][22], wind farm location (onshore [23][24][25][26] or offshore [27][28][29]) and photovoltaic farm location [30][31][32][33] are handled in different ways. The indicated examples significantly differ in terms of the criteria used. In other words, the same decision-making problems are solved with the use of different sets of criteria. It should be noticed that in most cases the final form of the criteria sets is justified by the individual research authors on the basis of their previous usage in the relevant bibliography. Based on this exemplary analysis, the question arises: what set of criteria for model structuring is the reference for a given class of decision problems, and how can the model structuring stage be objectified (in scientific terms)?
Additionally, it should be pointed out that the number and form of the family of criteria in the constructed decision-making model are also significant at the next stage of modelling the decision maker's preferences (criteria weighting or evaluation of alternatives) [10,15]. In the methodological dimension, this is connected to model complexity, where the number of available criteria translates into the number of errors (during the transfer of the preferences of the decision-maker to the resulting model), as well as decreased consistency of experts' judgments and evaluations in the final model [11,34,35]. As indicated by Saaty [36], the number of criteria is directly related to the number of wrong expert judgments, which in practice causes errors in the priorities vector [37] or even in the final alternative assessments [38].
The current works in the MCDA area [18,39] are mainly focused on mapping the natural imprecision of decision makers' preferences and developing efficient mechanisms for uncertain data handling and aggregation [40,41]. The fuzzy set theory [42] and some newly developed generalizations of fuzzy numbers proved to be powerful tools to deal with various forms of uncertainty, being at the same time the formal foundation for many new MCDA methods [40,[43][44][45][46]. Additionally, fuzzy numbers have provided the basis for some new developments of popular methods such as AHP [19,47] and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [48,49]. It is essential that the newly developed MCDA methods take into account the imprecision of preferential information and uncertainty in the model data as well [44], leading to more accurate models and thus fulfilling Roy's system paradigm [10,15], which requires that the developed decision support models be objective. However, these methods are not free of shortcomings. The adaptation of new fuzzy generalizations in new models increases the dimensionality of the decision-making problem, which in turn results in a significant increase in the computational complexity of the considered problem. Therefore, there is a visible need to decrease the input data set in terms of the initial set of criteria and the number of variants, as pointed out in [44][45][46].
Consequently, the authors propose a formal approach to identifying relevant criteria in a given multi-criteria decision problem. For this purpose, with the use of multi-criteria methods and dedicated similarity coefficients, the authors analyze the relevance of the criteria in the decision model for the problem of inland wind farm location. In this paper, three popular MCDA methods are used. The literature review indicated the leading popularity of TOPSIS [49][50][51], however, for methodological correctness, it was decided to also use VIKOR [52,53] and COMET [54] methods for comparison.
The rest of the paper is organized as follows. Section 2 introduces the topic of wind farm locations. Section 3 presents the research methods used, including the formal basis of the used MCDA methods and the mechanisms used to measure the similarity of rankings. Section 4 uses the presented research methods to investigate the relevance of the criteria in the reference wind farm location problem and discusses the results. The most important conclusions and future works are in Section 5.

Renewable Energy Sources
In recent years, an increased interest in renewable energy sources (RES) has been observed [55]. The conditions of such a situation can be found, among others, in the development of technologies and the search for ways to make the national economies of many countries independent of conventional energy sources [56]. Additionally, the progressive decline of natural energy resources, with a simultaneous increase in their prices on the global market, forces changes in macro and micro strategies of energy generation [57]. In the face of global energy and climate challenges, renewable energy sources play an important role in building a safer and more competitive energy system [58]. For example, in Europe itself, which is the leader in the use of RES, it is assumed that the share of energy from renewable sources in total energy consumption will increase to at least 32% in 2030 [59]. Therefore, the fight against climate change with a simultaneous global growth of energy demand implies intensified efforts to develop new RES technologies [60] as well as their effective use [61]. It is worth noting that RES have a number of positive technological properties (e.g., low or zero CO2 emission or a lower degree of production instability and variability compared to conventional energy sources) as well as political and economic ones (independence of countries from energy and fossil fuel imports, creation of new jobs) [62].
The development of technology and infrastructure, as well as political strategies (e.g., guaranteed price policy for energy obtained from RES), result in a continuous increase in the use of energy from RES with a simultaneous decrease in energy production prices [63]. It should be noted that the range of RES is wide and constantly growing, and the main types of RES are biomass energy, hydro or geothermal, solar, wind and marine energy (tidal energy). The basic types of investments using RES consist of solar, wind, water, geothermal, biofuel or cogeneration plants [64].
The most widespread and at the same time economical and rapidly developing renewable energy source is wind energy [65]. The development of technology has resulted in a significant reduction of expenditures on the exploitation of wind energy [66], making it competitive with many conventional energy production technologies [67]. The essence of wind technology is to convert available energy from the wind into mechanical or electrical energy by using wind turbines [68]. The best-known types of wind farm infrastructure are offshore wind farms and onshore wind farms [68]. Onshore wind power plants are characterized by relatively low investment and maintenance costs and high predictability of wind parameters [69]. Despite this, offshore wind farms are currently becoming increasingly popular [70]. This is due both to the higher force of the wind at sea and to the lower negative impact on the environment, combined with the remoteness of potentially burdensome phenomena related to their operation (e.g., landscape impact, noise), which could disturb local communities [71].
Renewable energy sources can be used almost anywhere in the world [72]. However, the main problem is the economically, technologically, ecologically and socially correct justification of the location and construction of infrastructure using this type of resources [73]; e.g., an improperly located farm can be a source of negative environmental and social impacts [74]. Literature analysis indicates the possibility of using multi-criteria methods to support decision making in the problem of location selection for various types of RES [8]. However, it is extremely important to choose an appropriate family of criteria, which determines the correctness of the whole decision-making process [20]. The multiplicity of often conflicting criteria [6] causes the problem to be methodically reduced to a multi-criteria decision-making problem [30]. Therefore, an important research task remains the correct modelling of the structure of this class of decision-making problems [75], as well as guaranteeing appropriate analytical capabilities in the developed model [24].

Application of MCDA Methods in the RES Domain
MCDA methods are widely used in solving RES decision-making problems. This issue is widely discussed in the literature. Many works concern both the selection or evaluation of RES technologies [6] themselves, their potential locations [29], as well as the evaluation of political [76], economic [8], environmental [77] and social [22] aspects or, more broadly, sustainability [7,22,77] of various RES. In this context, MCDA methods have proven to be a useful research tool not only supporting the process of assessment and prioritization of alternatives [76] but also providing guidance on how to modify strategies and actions to maximize the intended purpose of decision support [8]. This fact can be confirmed in "Literature review" papers, where the authors show the high effectiveness of MCDA methodology in the domain of RES decision-making problems. For example, the paper [77] contains a complete study on the use of MCDA methods in the field of RES decision making. A multidimensional meta-analysis of the use of MCDA methods for the RES domain was done in [7,8,73,76]. Additionally, in work [78], a meta-analysis of the use of MCDA methods in sustainable energy policy has been performed. Among the studies, we can also point to the work [6], where a complete analysis of the use of multi-criteria methods in the "RES in households" domain is included.
The issue of wind farm location is an exemplary one and is widely discussed in the literature [9]. This issue is sometimes related to other decision-making problems [7,79]. The state of the art for this problem can be found in works [9,22,79]. Based on the taxonomy of MCDA methods provided in [80], it is possible to demonstrate the effectiveness of different multi-criteria approaches (American, European and mixed approaches) in the issues related to the selection and evaluation of wind farm locations [9]. For example, when analyzing the methods of the American MCDA school, one can indicate the popularity of not only the AHP [81] or TOPSIS [82] methods themselves, but also their hybrids [24] in the problem of wind farm location. Researchers usually use the AHP method in the process of building a weights vector, and the final evaluation of decision options is done using the TOPSIS method [24,83].
Other methods of this group, such as BWM and MAIRCA [84] or DEMATEL, ANP and COPRAS [85], are also used here. An undeniable methodological deficiency of this group of methods is the undesirable effect of linear substitution of criteria [17], which in practice makes it impossible to realize the so-called strong sustainability paradigm in the constructed model [29]. However, in practice, this effect is often minimized by the authors of MCDA models, e.g., by introducing threshold values of minimum and maximum individual criteria [8,24]. As already indicated, the methods originating from the European school of multi-criteria decision support have shown their high usefulness in building multi-criteria wind farm location models [79]. The methods of this group, in contrast to the methods of the "American school", are characterized by a limited effect of linear compensation of criteria, and their foundations lie in the proper reflection of the natural imprecision of data and decision maker preferences [17]. From a formal point of view, the "European school" methods use the outranking relation. Examples of their effective use are publications [86][87][88]. In the publication [86], the ELECTRE-III method was used in offshore wind power station site selection problems. In contrast, in [87], the ELECTRE-II method was used in the site selection of a wind/solar hybrid power station. Other examples of using MCDA methods of this school include the PROMETHEE method [88] and the PROSA method [29].
Current methodological challenges in the MCDA area include the correct reflection of various forms of uncertainty in terms of both model measurement data and the preferences of decision-makers [89]. Moreover, here we can see a widespread adaptation of MCDA models based on successive generations of fuzzy numbers in the field of wind farm site selection [90]. For example, in work [91] fuzzy extensions of the AHP and TOPSIS methods were used in onshore wind farm site selection. In [92] fuzzy extensions of the AHP method and cumulative prospect theory were successfully applied in a similar problem. In the paper [86] the ELECTRE-III method under an intuitionistic fuzzy environment was used in a wind power station site selection problem. The Choquet integral, also under intuitionistic fuzzy sets, was used in the paper [93]. The current state of the art of fuzzy MCDA methodologies for RES evaluation and site selection can be found in [20]. However, it should be pointed out that despite the huge potential of developing fuzzy MCDA methods, they cause undesirable, significant limitations in the size of input models. In particular, they limit the number of criteria and evaluated variants that can be handled at one time, which significantly affects the practical possibilities of their use in the RES domain [91,93].
The above analysis shows a huge potential of using various MCDA methods in the RES domain, including wind farm site selection and evaluation. It should be noted that, as indicated by the authors [7,22], none of the MCDA methods can be considered objectively "better" or "best", and the task of the decision-maker/analyst is always to choose the right decision support method for the problem [94]. This, together with the proper structuring of the problem (identification of decision options and criteria for their assessment) are essential elements of objectivization of the whole process of building assessment models and, more broadly, the process of decision support [8].
As indicated in Figure 1, the correct structuring of the decision-making model is ensured by the proper identification of the family of criteria. Its character should be complete and not excessive [15]. For the considered issue of wind farm location, the identification of reference criteria was based on literature. As a result of the analysis [7,9,22,24,73,78,[81][82][83]91,93], the following sets of criteria were identified:
• technical aspects of the wind farm operation,
• spatial aspects of wind farm location,
• economic aspects (in particular those related to the planned costs of investment implementation and maintenance),
• a group of social factors resulting from the construction and operation of a wind farm,
• ecological aspects of investment,
• a group of environmental factors surrounding a wind farm,
• legal and political aspects related to the construction of wind farms.
Within the first group, technical aspects, the authors [24,82,83] identified a number of factors related to technical efficiency, including power or capacity, as well as the height of installation, wind energy generator properties (e.g., real and technical availability, micro-siting, computerized supervision), technical risks, power transmission safety, regular wind farm testing, and spare parts stock. Within the next group, which includes spatial factors [91], the following can be indicated: distance from the road network, distance from Natura 2000 areas and nature reserves, distance to urban areas and sand dunes, acceptability in terms of both safety and aesthetics for airports or city centres, acceptable proximity of transmission lines, and distance from specific sites (archaeological sites, tourism facilities, historical sites) [81,82,91,93]. The group of economic factors includes, of course, the cost of investment together with operational and maintenance costs [22]. In this group, tax rate discounts, investment and production incentives or a reasonable power pricing program can be positively indicated [22,78]. The analysis of the group of social factors allows for distinguishing the following specific criteria: social acceptance, visual impact, potential conflict among entrepreneurs, policymakers and residents, local benefits, and visual coordination [7,9,73]. Another important group of factors includes the ecological aspects of investments. The literature studies [22,24,83] indicate the following criteria here: noise, impact on ecosystems, acceptability in terms of bird habitat, ecological restoration conduct, energy conservation, carbon reduction effect, and environmental ecology monitoring.
Within the next group (environmental factors) the studies [78,81,82] indicate the following specific criteria: wind power density, annual mean wind speed, peak hours matching, wind occurrence >5 m, turbulence intensity, wind occurrence >20 m, the geographical distribution of wind speed frequency or uncertainty of land (geology suitability). The last group of factors are legal and political aspects related to the construction of wind farms [78]. The following specific criteria can be indicated here [7,9,78]: regulation for energy safety, energy subsidy policy, wind power concession program, clean development mechanisms program, other policy supports, or establishment of complete supply chain.
Of course, it is also easy to demonstrate the impact of other criteria on the final form of the decision model. Examples include a different assessment of a given technology in the perspective of a given strategy of a region or country, as well as a different model of financing of RES investments [22,78]. It is also worth noting that the identified criteria are related to onshore wind farms. Identification of a set of criteria for the problem of offshore wind farms requires the analysis of different criteria, such as technological (turbine foundation, the possibility of connecting to the power grid) [86], environmental (e.g., depth and type of seabed) [86] or ecological and social (impact on the marine ecosystem or fisheries management) [29]. Despite the number of identified criteria, the analysis of the literature in the area of inland wind farm location shows different forms of MCDA model structuring carried out by the authors of particular studies. Detailed studies contained in works [9,29,78,95] also show different forms of structuring of individual models. These differences concern both the representation itself (single-level vs. hierarchical sets of criteria) and the number of evaluation criteria (ranging from 5 to 32 criteria in individual models). The difference in the developed decision support models results from the fact that the authors of individual studies assumed different goals and scopes for the built assessment models, which is consistent with the paradigm of multi-criteria decision support indicated by Roy [15], which requires the construction of personalized models reflecting the preferences of decision-makers in particular decision-making situations [94]. Nevertheless, the open research challenge undertaken in this paper is the question of the completeness and redundancy of the sets of criteria in individual models.
In this aspect, the authors of this paper attempt to search for algorithmic procedures to identify relevant criteria in the decision model.

Conceptual Framework
The problem studied in this paper concerns the objectification of a set of criteria in a given decision problem. For this purpose, the authors developed a framework (see Figure 2) composed of two basic methodological elements: (1) a set of reference MCDA methods (TOPSIS, VIKOR, COMET); and (2) a set of similarity coefficients (Spearman correlation coefficients and the WS coefficient). These elements are the basis for further relevance analysis of individual criteria in a considered MCDA model. In the first step, a primary ranking (including all the criteria for the given model) was computed, which in further studies was treated as the reference ranking. Subsequently, using the indicated MCDA methods, a set of rankings was prepared in which a single criterion was iteratively excluded. This was the basis for testing the similarity of the obtained solutions to the reference solution, as well as for the recommendations in terms of both indicating the high significance and the possible elimination of individual criteria of the original model. In the subsequent research steps, the number of criteria eliminated from the original model was extended to 2 and 3, once again examining the similarities of the obtained sets of rankings with the reference solution. When carrying out the analysis for reference points, both the final rankings and the utility function values of individual decision variants were taken into account. As a result of a detailed analysis of the obtained results, recommendations were obtained regarding the reference criteria set for the decision problem under consideration. It is worth noting that there is a significant similarity between these three methods: TOPSIS and VIKOR are based on the same assumption of reference points. They differ only in the adopted techniques of normalization and data aggregation [96]. The TOPSIS method assumes minimizing the distance to the ideal solution and maximizing the distance from the anti-ideal solution [51].
In contrast, the VIKOR method minimizes only the distance to the ideal solution [97], which in practice results in the desired significant reduction of the substitution of individual criteria. In the TOPSIS method, vector normalization is used, while in the VIKOR method, normalization is linear [96]. The COMET method is based on the space of so-called characteristic objects. This technique solves the problem of the ranking reversal paradox because it compares characteristic objects instead of the alternatives. The principles of the COMET, TOPSIS and VIKOR methods, as mentioned above, make them a comprehensive group of methods based on so-called "reference points". It is worth noting that TOPSIS does not require identification of the dependence of the component attributes (criteria) of the decision model [50,98,99]. The situation is similar for the COMET method; additionally, this method implicitly identifies the weights of attributes and allows modelling any form of preference function [100,101]. The above indications show a great potential of using the whole group of indicated methods. What is important is that they can be successfully applied even in cases in which we do not yet have full scientific knowledge about the dependence or independence of model attributes [96,102].
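The iterative exclusion procedure described above can be sketched in a few lines of code. This is an illustrative outline only: the generic `mcda(matrix, weights, types)` scoring function and the re-normalization of weights after removing a criterion are assumptions of this sketch, while the study itself applies TOPSIS, VIKOR and COMET and, besides Spearman's coefficient, also the r_w and WS coefficients.

```python
import numpy as np
from itertools import combinations

def rank(scores):
    """Convert higher-is-better scores into ranks (1 = best)."""
    order = (-scores).argsort()
    ranks = np.empty(len(scores), dtype=int)
    ranks[order] = np.arange(1, len(scores) + 1)
    return ranks

def spearman(x, y):
    """Spearman correlation between two rank vectors."""
    n = len(x)
    return 1 - 6 * ((x - y) ** 2).sum() / (n * (n ** 2 - 1))

def relevance_analysis(matrix, weights, types, mcda, max_removed=3):
    """Compare the reference (all-criteria) ranking with the rankings obtained
    after removing every subset of 1..max_removed criteria."""
    m = matrix.shape[1]
    reference = rank(mcda(matrix, weights, types))
    similarity = {}
    for k in range(1, max_removed + 1):
        for removed in combinations(range(m), k):
            keep = [j for j in range(m) if j not in removed]
            w = weights[keep] / weights[keep].sum()  # re-normalize weights
            reduced = rank(mcda(matrix[:, keep], w, types[keep]))
            similarity[removed] = spearman(reference, reduced)
    # subsets with similarity close to 1 indicate removable criteria;
    # large drops mark criteria that are highly relevant to the model
    return similarity
```

In this sketch, a subset whose removal leaves the similarity near 1 is a candidate for elimination, mirroring the recommendation step of the framework.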

The TOPSIS Method
The concept of the TOPSIS method is to specify the distance of the considered objects from the ideal and anti-ideal solution [99,103,104]. The final effect of the study is a synthetic coefficient which forms a ranking of the studied objects. The best object is defined as the one with the shortest distance from the ideal solution and, at the same time, the greatest distance from the anti-ideal solution [105,106]. The formal description of the TOPSIS method should be shortly mentioned [50]: Step 1. Create a decision matrix consisting of n alternatives with the values of k criteria. Then normalize the decision matrix according to Equation (1):

$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{n} x_{ij}^2}} \quad (1)$$
where x ij and r ij are the initial and normalized value of the decision matrix.
Step 2. Then create a weighted normalized decision matrix according to Equation (2):

$$v_{ij} = w_j r_{ij} \quad (2)$$
where v ij is the value of the weighted normalized decision matrix and w j is the weight for j criterion.
Step 3. Determine the best ($A^{*}$) and worst ($A^{-}$) alternative according to Equation (3):

$$A^{*} = \left\{ \max_i v_{ij} \mid j \in J_b ; \; \min_i v_{ij} \mid j \in J_c \right\}, \quad A^{-} = \left\{ \min_i v_{ij} \mid j \in J_b ; \; \max_i v_{ij} \mid j \in J_c \right\} \quad (3)$$

where $J_b$ and $J_c$ denote the sets of benefit and cost criteria, respectively.

Step 4. Calculate the separation measures from the best and worst alternative for each decision variant according to Equation (4):

$$S_i^{*} = \sqrt{\sum_{j=1}^{k} \left( v_{ij} - v_j^{*} \right)^2}, \quad S_i^{-} = \sqrt{\sum_{j=1}^{k} \left( v_{ij} - v_j^{-} \right)^2} \quad (4)$$
Step 5. Calculate the relative closeness to the worst condition according to Equation (5):

$$C_i = \frac{S_i^{-}}{S_i^{*} + S_i^{-}} \quad (5)$$

Step 6. Rank the alternatives by $C_i$: the larger the value (i.e., the further from the worst state), the better the alternative.
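The steps above can be condensed into a short illustrative implementation. This is a sketch, not the authors' code; the `types` vector marking benefit (+1) and cost (-1) criteria is a convention assumed here.

```python
import numpy as np

def topsis(matrix, weights, types):
    """TOPSIS: rank alternatives by relative closeness to the ideal solution.
    matrix: n alternatives x k criteria; types: +1 benefit, -1 cost."""
    matrix = np.asarray(matrix, dtype=float)
    types = np.asarray(types)
    # Step 1: vector normalization, Equation (1)
    r = matrix / np.sqrt((matrix ** 2).sum(axis=0))
    # Step 2: weighted normalized decision matrix, Equation (2)
    v = r * np.asarray(weights, dtype=float)
    # Step 3: ideal (best) and anti-ideal (worst) solutions, Equation (3)
    best = np.where(types == 1, v.max(axis=0), v.min(axis=0))
    worst = np.where(types == 1, v.min(axis=0), v.max(axis=0))
    # Step 4: Euclidean separation measures, Equation (4)
    s_best = np.sqrt(((v - best) ** 2).sum(axis=1))
    s_worst = np.sqrt(((v - worst) ** 2).sum(axis=1))
    # Step 5: relative closeness, Equation (5); Step 6: sort descending
    return s_worst / (s_best + s_worst)
```

An alternative that dominates all others on every criterion obtains a closeness of 1, since its distance from the ideal solution is zero.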

The VIKOR Method
The VIKOR method was developed to solve discrete decision problems with conflicting criteria. The technique focuses on ranking and choosing from a set of alternatives and on finding compromise solutions for the problem [107]. It can be presented in the following steps [108,109]: Step 1. Create a decision matrix consisting of n alternatives with the values of k criteria. Then normalize the decision matrix linearly according to Equation (6):

$$r_{ij} = \frac{x_{ij} - \min_i x_{ij}}{\max_i x_{ij} - \min_i x_{ij}} \quad (6)$$
where x ij and r ij are the initial and normalized value of the decision matrix.

Step 2. Determine the best $f_j^{*}$ and the worst $f_j^{-}$ values of all criterion functions according to Equation (7):

$$f_j^{*} = \max_i r_{ij}, \; f_j^{-} = \min_i r_{ij} \;\; \text{if the j-th function represents a benefit}; \qquad f_j^{*} = \min_i r_{ij}, \; f_j^{-} = \max_i r_{ij} \;\; \text{if a smaller response is desired} \quad (7)$$
Step 3. Determine the utility $S_i$ and regret $R_i$ measures using Equations (8) and (9):

$$S_i = \sum_{j=1}^{k} w_j \frac{f_j^{*} - r_{ij}}{f_j^{*} - f_j^{-}} \quad (8)$$

$$R_i = \max_j \left[ w_j \frac{f_j^{*} - r_{ij}}{f_j^{*} - f_j^{-}} \right] \quad (9)$$
where w j is the weight of the jth criterion.
Step 4. Evaluate the VIKOR index using Equation (10):

$$Q_i = v \frac{S_i - S^{*}}{S^{-} - S^{*}} + (1 - v) \frac{R_i - R^{*}}{R^{-} - R^{*}} \quad (10)$$

where $S^{*} = \min_i S_i$, $S^{-} = \max_i S_i$, $R^{*} = \min_i R_i$, $R^{-} = \max_i R_i$, and $v$ is the strategy weight; in this case, it is equal to 0.5.
Step 5. Rank the alternatives by sorting the values $Q_i$ in ascending order, starting from the minimum value. In this way, the final ranking is obtained.
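An illustrative sketch of the VIKOR steps follows. Assumptions of this sketch: a benefit/cost `types` vector, the best and worst values f* and f- taken directly from the decision matrix (so the scaling of Equation (6) is folded into the S and R formulas), and no handling of the degenerate case where all S or all R values coincide.

```python
import numpy as np

def vikor(matrix, weights, types, v=0.5):
    """VIKOR: smaller Q_i is better. types: +1 benefit, -1 cost."""
    f = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    types = np.asarray(types)
    # Step 2: best f* and worst f- value for each criterion, Equation (7)
    f_best = np.where(types == 1, f.max(axis=0), f.min(axis=0))
    f_worst = np.where(types == 1, f.min(axis=0), f.max(axis=0))
    # Step 3: utility S_i and regret R_i measures, Equations (8) and (9)
    d = w * (f_best - f) / (f_best - f_worst)
    S, R = d.sum(axis=1), d.max(axis=1)
    # Step 4: VIKOR index, Equation (10); v weighs group utility vs regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q  # Step 5: rank ascending by Q
```

Note that the same ratio works for both criterion types: for cost criteria both the numerator and denominator change sign, so the normalized distance stays in [0, 1].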

The COMET Method
Many MCDM methods exhibit the rank reversal phenomenon; however, the Characteristic Objects Method (COMET) is completely free of this problem [110]. In previous works, the accuracy of the COMET method was verified [102,111]. The formal notation of the COMET method should be briefly recalled [42,101,112]: Step 1. Definition of the space of the problem: the expert determines the dimensionality of the problem by selecting r criteria, $C_1, C_2, \ldots, C_r$. Then, a set of fuzzy numbers is selected for each criterion (11):

$$C_r = \left\{ \tilde{C}_{r1}, \tilde{C}_{r2}, \ldots, \tilde{C}_{rc_r} \right\} \quad (11)$$

where $c_1, c_2, \ldots, c_r$ are the numbers of fuzzy numbers for the respective criteria.
Step 2. Generation of the characteristic objects: the characteristic objects (CO) are obtained with the usage of the Cartesian product of the fuzzy numbers' cores of all the criteria (12):

$$CO = C(C_1) \times C(C_2) \times \cdots \times C(C_r) \quad (12)$$

As a result, an ordered set of all CO is obtained (13):

$$CO_1, CO_2, \ldots, CO_t \quad (13)$$

where t is the count of COs and is equal to (14):

$$t = \prod_{i=1}^{r} c_i \quad (14)$$

Step 3. Evaluation of the characteristic objects: the expert determines the Matrix of Expert Judgment (MEJ) by comparing the COs pairwise. The matrix is presented below (15):

$$MEJ = \begin{pmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1t} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2t} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{t1} & \alpha_{t2} & \cdots & \alpha_{tt} \end{pmatrix} \quad (15)$$

where $\alpha_{ij}$ is the result of comparing $CO_i$ and $CO_j$ by the expert. The function $f_{exp}$ denotes the mental judgement function of the expert. It depends solely on the knowledge of the expert. The expert's preferences can be presented as (16):

$$\alpha_{ij} = \begin{cases} 0.0, & f_{exp}(CO_i) < f_{exp}(CO_j) \\ 0.5, & f_{exp}(CO_i) = f_{exp}(CO_j) \\ 1.0, & f_{exp}(CO_i) > f_{exp}(CO_j) \end{cases} \quad (16)$$

After the MEJ matrix is prepared, a vertical vector of the Summed Judgments (SJ) is obtained as follows (17):

$$SJ_i = \sum_{j=1}^{t} \alpha_{ij} \quad (17)$$

Eventually, the values of preference are approximated for each characteristic object. As a result, a vertical vector P is obtained, where the i-th row contains the approximate value of preference for $CO_i$.
Step 4. The rule base: each characteristic object and its value of preference is converted to a fuzzy rule as (18):

$$\text{IF } C\left(\tilde{C}_{1i}\right) \text{ AND } C\left(\tilde{C}_{2i}\right) \text{ AND } \ldots \text{ THEN } P_i \quad (18)$$

In this way, a complete fuzzy rule base is obtained.
Step 5. Inference and the final ranking: each alternative is presented as a set of crisp numbers, e.g., $A_i = \{a_{i1}, a_{i2}, \ldots, a_{ir}\}$. This set corresponds to the criteria $C_1, C_2, \ldots, C_r$. Mamdani's fuzzy inference method is used to compute the preference of the i-th alternative. The rule base guarantees that the obtained results are unequivocal. The whole process of the COMET method is presented in Figure 3.
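The COMET workflow can be illustrated with a simplified sketch. Assumptions of this sketch: the expert's pairwise judgements are simulated by a hypothetical scoring function `expert`; preferences P are obtained by mapping distinct SJ values onto evenly spaced levels in [0, 1]; and the inference step is reduced to the product of triangular memberships over the characteristic objects, which is how it resolves for crisp inputs on a triangular grid.

```python
import numpy as np
from itertools import product

def tri(x, a, m, b):
    """Triangular membership function with core m and support [a, b]."""
    if x == m:
        return 1.0
    if a < x < m:
        return (x - a) / (m - a)
    if m < x < b:
        return (b - x) / (b - m)
    return 0.0

def memberships(x, grid):
    """Memberships of x in the triangular fuzzy numbers whose cores are
    the sorted characteristic values in grid."""
    g = [grid[0]] + list(grid) + [grid[-1]]
    return [tri(x, g[i], g[i + 1], g[i + 2]) for i in range(len(grid))]

def comet(char_values, expert, alternatives):
    """char_values: per-criterion lists of characteristic (core) values;
    expert: hypothetical stand-in for the expert's judgement function."""
    # Step 2: characteristic objects as the Cartesian product of cores
    cos = list(product(*char_values))
    scores = [expert(co) for co in cos]
    # Step 3: MEJ matrix (Equation (16)) and SJ vector (Equation (17))
    mej = np.array([[1.0 if a > b else 0.5 if a == b else 0.0
                     for b in scores] for a in scores])
    sj = mej.sum(axis=1)
    # approximate preferences P: distinct SJ values -> even levels in [0, 1]
    # (assumes at least two distinct SJ values)
    levels = sorted(set(sj))
    p = np.array([levels.index(s) / (len(levels) - 1) for s in sj])
    # Steps 4-5: rule base and product-based inference per alternative
    prefs = []
    for alt in alternatives:
        mus = [memberships(alt[j], grid) for j, grid in enumerate(char_values)]
        idx = product(*[range(len(grid)) for grid in char_values])
        prefs.append(sum(pi * np.prod([mus[j][co[j]] for j in range(len(co))])
                         for co, pi in zip(idx, p)))
    return np.array(prefs)
```

Because every alternative is evaluated only against the fixed characteristic objects, adding or removing alternatives cannot change their preferences, which is the essence of COMET's rank-reversal freedom.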

Similarity Coefficients
The similarity coefficients of the rankings make it possible to assess how much the order of variants differs between two compared rankings. It is important to choose coefficients that work well in the decision-making field. The paper uses three such coefficients, i.e., the Spearman correlation coefficient (19), the Spearman weighted correlation coefficient (20) and the WS similarity coefficient (21) [114]. The simplest way is to check whether the rankings are equal. The much more common way is to use one of the coefficients of dependence for two variables, where the obtained rankings for a set of alternatives are our variables. The most frequently used symmetrical coefficient is Spearman's coefficient (19):

$$r_s = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n \left( n^2 - 1 \right)} \quad (19)$$
where d i is defined as the difference between the ranks d i = x i − y i and n is the number of elements in the ranking.
The weighted rank measure of correlation r_w is also a symmetric coefficient, introduced in [115]. The equation is presented as (20):

r_w = 1 − (6 Σ_{i=1}^{n} (x_i − y_i)² ((n − x_i + 1) + (n − y_i + 1))) / (n⁴ + n³ − n² − n) (20)

The WS coefficient is a new ranking similarity factor, which is sensitive to significant changes in the ranking. This indicator is strongly related to the difference between two rankings at particular positions, and the top of the ranking has a more significant influence on similarity than its bottom [114]. The WS coefficient is asymmetrical, and the equation is presented as (21):

WS = 1 − Σ_{i=1}^{n} 2^{−x_i} · |x_i − y_i| / max(|1 − x_i|, |n − x_i|) (21)

where x_i is the position of an element in the reference ranking and y_i is its position in the compared ranking.
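With x_i denoting positions in the reference ranking and y_i positions in the compared one, the three coefficients above can be sketched in Python as follows; the function names are illustrative, and the formulas follow the definitions (19)–(21):

```python
import numpy as np

def spearman(x, y):
    # r_s = 1 - 6*sum(d_i^2) / (n(n^2 - 1))   -- Equation (19)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    return 1 - 6 * np.sum((x - y) ** 2) / (n * (n ** 2 - 1))

def weighted_spearman(x, y):
    # r_w weights each disagreement by its position in both rankings (20)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    num = 6 * np.sum((x - y) ** 2 * ((n - x + 1) + (n - y + 1)))
    return 1 - num / (n ** 4 + n ** 3 - n ** 2 - n)

def ws_coefficient(x, y):
    # WS is asymmetric: x must be the reference ranking (21)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    return 1 - np.sum(2.0 ** -x * np.abs(x - y)
                      / np.maximum(np.abs(1 - x), np.abs(n - x)))

ref = [1, 2, 3, 4, 5]
alt = [1, 3, 2, 4, 5]          # positions 2 and 3 swapped
print(round(spearman(ref, alt), 4))            # 0.9
print(round(weighted_spearman(ref, alt), 4))   # 0.8833
print(round(ws_coefficient(ref, alt), 4))      # 0.8542
```

Note how WS penalizes the same swap less when it happens lower in the ranking, which is exactly the top-weighted behaviour described above.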

Results and Discussion
The practical organization of the research experiments presented below is as follows. Section 4.1 presents the similarity analysis of the rankings obtained by the TOPSIS method, assuming the elimination of any single criterion from the reference model. The corresponding results obtained with the VIKOR and COMET methods can be found in Appendices A and B. Section 4.2 includes analogous rank similarity studies, assuming the elimination of pairs of criteria from the original model. Here, too, the TOPSIS method was used, and the results of the VIKOR and COMET methods are included in Appendices C and D. The research was then extended to the possible elimination of three criteria from the reference model. The results of the TOPSIS method are presented in Section 4.3, while the results of the VIKOR and COMET methods are available in Appendices E and F. A synthetic analysis of the tests performed for all three MCDA methods is presented in Section 4.4. Since the analyses so far used only the positions of the decision variants in the rankings, Section 4.5 presents a full quantitative analysis of the similarity of the rankings, using the utility function value of each decision variant instead of its rank. This made it possible to indicate more precisely the areas of relevance of the analyzed criteria of the decision model.
The three MCDA methods presented in Section 3, i.e., COMET, TOPSIS and VIKOR, have been utilized to determine the similarity of rankings of alternative datasets. For this purpose, an exemplary wind farm location problem [95] was chosen, from which a set of criteria and a set of alternatives were taken (see Table 1). The types of criteria are divided equally, with half of them being of the cost type and half of the benefit type. Table 2 presents the set of alternatives and contains 12 decision-making variants. The similarity between the reference ranking of a particular method and the ranking in which a particular criterion was excluded was examined by means of the similarity indicators. The reference ranking was obtained by assessing the alternatives based on all defined criteria. Table 1. Selected criteria C_1–C_10 with their names, units and types [95].
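As a rough illustration of how such a reference ranking can be computed, the sketch below implements a minimal TOPSIS with vector normalization on hypothetical data. The matrix, weights and criterion types are assumptions for illustration only, not the authors' actual configuration:

```python
import numpy as np

def topsis(matrix, weights, types):
    """Minimal TOPSIS: vector normalization, distances to the ideal
    and anti-ideal solutions, closeness coefficient.

    types[j] = 1 for a benefit criterion, -1 for a cost criterion.
    """
    x = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    t = np.asarray(types)
    v = x / np.sqrt((x ** 2).sum(axis=0)) * w      # normalize and weight
    ideal = np.where(t == 1, v.max(axis=0), v.min(axis=0))
    anti = np.where(t == 1, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                 # closeness in [0, 1]

# Hypothetical 3 alternatives x 2 criteria (one benefit, one cost)
scores = topsis([[7, 3], [8, 5], [6, 2]], [0.5, 0.5], [1, -1])
ranking = scores.argsort()[::-1] + 1   # alternatives ordered best-to-worst
```

Running the same scorer on a matrix with one column removed, as in the experiments below, then yields the rankings whose similarity is measured.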

Rankings Comparison-One Criterion Excluded Case
The rankings of alternatives for particular variants of excluded criteria, together with the similarity coefficients of these rankings with the reference ranking for the TOPSIS method, are placed in Table 3. The reference ranking in the table is presented as the excluded "None" criterion. The rankings created with the criteria C_1, C_2 and C_8 excluded have the highest correlation with the reference ranking among the considered variants. However, it should be mentioned that the distance of the ranking created with the exclusion of criterion C_1 is much bigger than in the case of excluding criteria C_4, C_9 and C_10 from the ranking process, even though C_1 has much higher values of the similarity indicators [116]. For the criteria C_10, C_7 and C_4 excluded from the ranking process, the correlation between the resulting ranking and the reference ranking is high. For the excluded criteria C_9 and C_5, the correlation between the created rankings and the reference ranking reflected by the similarity indicators is also large. However, there is a great difference between the indicators: the WS ratio is much smaller than the r_s and r_w ratios in both cases. The smallest correlation can be seen in the rankings created when the criteria C_3 and C_6 are excluded. These rankings have the greatest distance from the reference ranking. Table 3. Rankings of alternatives with similarity coefficients and distance for criteria not included in the ranking process (method: TOPSIS).
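The exclusion procedure behind Table 3 (and its pair and triple variants in the later sections) can be sketched as a loop over criterion subsets. In the sketch below, weighted_sum is a simple stand-in scorer, since the actual experiments use TOPSIS, VIKOR and COMET; all names, data and the weight renormalization are illustrative assumptions:

```python
import numpy as np
from itertools import combinations

def weighted_sum(matrix, weights, types):
    """Simple SAW-style scorer standing in for TOPSIS/VIKOR/COMET."""
    signed = np.asarray(matrix, float) * np.asarray(types)  # flip cost criteria
    rng = signed.max(axis=0) - signed.min(axis=0)
    norm = (signed - signed.min(axis=0)) / np.where(rng == 0, 1, rng)
    return (norm * np.asarray(weights)).sum(axis=1)

def exclusion_rankings(matrix, weights, types, evaluate, k=1):
    """Rank the alternatives with every k-subset of criteria removed."""
    m = np.asarray(matrix, float)
    results = {}
    for excl in combinations(range(m.shape[1]), k):
        keep = [j for j in range(m.shape[1]) if j not in excl]
        w = np.asarray(weights, float)[keep]
        w = w / w.sum()                    # renormalize remaining weights
        scores = evaluate(m[:, keep], w, [types[j] for j in keep])
        results[excl] = scores.argsort()[::-1].argsort() + 1  # rank 1 = best
    return results

# Toy data: 3 alternatives, 2 criteria (benefit, cost); exclude one at a time
res = exclusion_rankings([[7, 3], [8, 5], [6, 2]], [0.5, 0.5], [1, -1],
                         weighted_sum, k=1)
print(res[(0,)], res[(1,)])    # [2 3 1] [2 1 3]
```

Setting k=2 or k=3 reproduces the pair and triple exclusion schemes of Sections 4.2 and 4.3; each resulting ranking is then compared with the reference ranking via r_s, r_w and WS.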

The charts showing relations between the reference ranking and the rankings excluding one criterion for the TOPSIS method are presented in Figure 4. The x-axis shows the values of the reference ranking, and the y-axis presents the values of the ranking in which a given criterion was excluded. The highest similarity between the considered rankings with excluded criteria and the reference ranking is noticeable for the criteria C_1, C_2 and C_8; the alternatives in these rankings do not differ from the reference ranking. Slightly less similarity between the rankings is visible in charts (a) and (g) for the criteria C_10 and C_4 excluded from the ranking process; these rankings differ from the reference ranking in two positions. For the criteria C_9 and C_7 excluded from the ranking process, the rankings differ from the reference ranking in four and six positions, respectively. In the case where criterion C_5 is excluded from the evaluation of alternatives, the obtained ranking differs from the reference ranking in five positions. The lowest similarity between the considered rankings with excluded criteria and the reference ranking is noticeable for criteria C_5 and C_3, where the positions in the rankings were the same only for the alternative A_8 for criterion C_3 and for the alternatives A_2, A_9 and A_8 for the excluded criterion C_5.

Rankings Comparison-Two Criteria Excluded Case
The rankings of alternatives for particular variants of excluded pairs of criteria, together with the similarity coefficients of these rankings with the reference ranking for the TOPSIS method, are presented in Table 4. The reference ranking in the table is presented as the excluded "None" criterion. For the rankings created with the criterion pairs C_1C_2 and C_8C_2 excluded, the similarity with the reference ranking is the highest; the resulting rankings have a value of 1 for all the similarity coefficients under consideration. However, in the case of the ranking created with the exclusion of the criterion pair C_1C_2, the preference distance of this ranking from the reference ranking is greater than in the case of the rankings created with the exclusion of the criterion pairs C_10C_2, C_9C_2 and C_4C_2. The rankings created with the excluded criterion pairs C_4C_2, C_7C_2 and C_10C_2 had slightly lower correlation with the reference ranking. For the rankings with the excluded criterion pairs C_9C_2 and C_5C_2, the difference between the values of the similarity ratios r_w and r_s and the WS ratio is large. It means that according to the WS similarity coefficient, the correlation between the rankings mentioned above and the reference ranking is small.
On the other hand, according to the r_w and r_s similarity coefficients, the correlation is large. The smallest similarity to the reference ranking exists for the rankings from which the criterion pairs C_3C_2 and C_6C_2 are excluded. There is a big difference between the similarity coefficients WS and r_w and the r_s coefficient for the ranking that excludes the C_3C_2 criterion pair. In the case of the ranking that excludes the C_6C_2 pair of criteria, all the similarity coefficients under consideration differ significantly. Table 4. Rankings of alternatives with similarity and distance coefficients for criterion pairs not included in the ranking process (method: TOPSIS). The graphs showing the relations between the reference ranking and the rankings in which pairs of criteria were excluded for the TOPSIS method are visualized in Figure 5. The rankings that have the same alternative positions as the reference ranking were created with the criterion pairs C_8C_2 and C_1C_2 excluded. Slightly less similar to the reference ranking are the rankings which exclude the criterion pairs C_3C_2 and C_10C_2; these rankings differ from the reference ranking in only two positions, which means that these criteria do not have much influence in the ranking process. The least similar rankings to the reference ranking were created with the exclusion of the criterion pairs C_3C_2 and C_6C_2; most of the alternatives have entirely different positions than in the reference ranking, so these pairs of criteria are significant in the ranking process. The rankings created by excluding the remaining criterion pairs differ from the reference ranking in four, five or six positions.

Rankings Comparison-Three Criteria Excluded Case
The rankings of alternatives for particular variants of excluded criteria triples, together with the similarity coefficients of these rankings with the reference ranking for the TOPSIS method, are placed in Table 5. The reference ranking in the table is presented as the excluded "None" criterion. The highest similarity to the reference ranking was achieved by excluding the criteria triple C_10C_2C_7; it has the highest values of the similarity indicators WS and r_w among the considered rankings with three criteria excluded. However, its distance to the reference ranking is much greater than in the case of the rankings from which the triples C_9C_2C_7, C_8C_2C_7, C_4C_2C_7 and C_1C_2C_7 were excluded. It should also be mentioned that the r_s similarity ratio for the ranking with the triple C_10C_2C_7 excluded does not have the highest value in the table. The similarity between the reference ranking and the rankings created when the triples C_8C_2C_7, C_4C_2C_7 and C_1C_2C_7 were excluded is comparable to the similarity for the ranking in which the triple C_10C_2C_7 was excluded. However, the WS coefficient value for the latter ranking is much higher than the WS coefficient values of the other rankings. A big difference between the WS similarity coefficient and the similarity coefficients r_w and r_s is found for the rankings created when the triples C_5C_2C_7 and C_9C_2C_7 were excluded. The value of the WS coefficient in these rankings is much smaller than the values of the r_w and r_s coefficients, which means that according to the latter ratios there is a strong correlation between these rankings with excluded criteria and the reference ranking. In the case of the rankings from which the triples C_6C_2C_7 and C_3C_2C_7 have been excluded, the considered similarity indicators received the lowest values. This means that these three criteria have a significant influence on the final ranking.
The distances for the rankings from which these triples of criteria were excluded were also the highest among the considered rankings. Table 5. Rankings of alternatives with similarity and distance factors for triples of criteria not included in the ranking process (method: TOPSIS).

The charts illustrating the relationship between the reference ranking and the rankings in which three criteria were excluded for the TOPSIS method are visualized in Figure 6. The values of the reference ranking are on the x-axis, and on the y-axis there are the values of the ranking created when the three criteria were excluded. The most similar to the reference ranking is the ranking that excludes the triple C_10C_2C_7; these three criteria are of low importance for the ranking process. Less similar to the reference ranking are the rankings that exclude the triples C_1C_2C_7, C_8C_2C_7 and C_9C_2C_7. The rankings with the excluded triples C_1C_2C_7 and C_8C_2C_7 differ from the reference ranking in four positions, while the ranking with the excluded triple C_9C_2C_7 differs from the reference ranking in five positions. The rankings that exclude the triples C_4C_2C_7 and C_5C_2C_7 have little similarity to the reference ranking: the ranking from which the triple C_4C_2C_7 has been excluded differs in six positions, while the ranking from which C_5C_2C_7 has been excluded differs in eight positions from the reference ranking. The least similar to the reference ranking are the rankings that exclude the triples C_3C_2C_7 and C_6C_2C_7. The ranking from which the triple C_6C_2C_7 has been excluded has only three positions in common with the reference ranking, while the ranking from which C_3C_2C_7 has been excluded has no positions in common with the reference ranking. It means that the aforementioned triples of criteria have the most significant influence on the ranking process.

Results Analysis and Discussion
In order to graphically present the relations of the MCDA methods used and their impact on the WS and r_w coefficient values, histograms were used (see Figures 7–9). Figure 7 refers to the rankings in which one given criterion has been excluded. It shows a large difference between the WS coefficient and the r_w coefficient. The r_w ratio has a much higher value than the WS ratio for the criterion C_9 for the COMET and VIKOR methods. On the other hand, in the case of the criterion C_6 excluded, the WS ratio has a greater value than the r_w ratio for the COMET method. The value of the r_w similarity coefficient is greater than the WS similarity coefficient for the criterion C_5 for the COMET and VIKOR methods. Both coefficients for the TOPSIS method have relatively similar values, and the rest of the considered coefficient values are also similar for the COMET and VIKOR methods. The similarity coefficients relating to the pairs of criteria are presented in Figure 8. In the case of the C_9C_2 criterion pair, the COMET and VIKOR methods have a greater value of r_w than of WS. Meanwhile, for the criterion pair C_7C_2, the WS similarity factor has a smaller value than r_w for the VIKOR method, whereas for the criterion pair C_3C_2 the r_w ratio has the smaller value for the VIKOR method. When using COMET for the C_6C_2 criterion pair, the r_w ratio has a smaller value than the WS ratio. For the TOPSIS method, the WS similarity factor for the criterion pairs C_5C_2, C_4C_2 and C_1C_2 has a greater value than the r_w similarity factor. The similarity coefficients for the criteria triples have been visualized in Figure 9. In the case of the triples C_9C_2C_7 and C_5C_2C_7, the r_w similarity coefficient has a greater value than the WS coefficient for the VIKOR and COMET methods. For the triples C_8C_2C_7 and C_1C_2C_7, the WS coefficient has a smaller value than the r_w coefficient for the TOPSIS method.
In the COMET method, the triple C_6C_2C_7 has a greater value of WS than of r_w. However, for the triple C_3C_2C_7, the WS value is much smaller than the r_w value for the COMET method. The value of r_w for the triple C_4C_2C_7 is smaller than the value of WS for the TOPSIS method. However, for the triple C_3C_2C_7, the WS similarity factor has a smaller value than the r_w similarity factor for the TOPSIS method.

Results Analysis Based on Utility Values of Decision Variants
Contrary to the previous sections, in this section a quantitative analysis of the resulting rankings was carried out based on the resultant utility values of the alternatives in particular rankings. This analysis is important, as it provides a more complete and valuable insight into the effects of excluding particular criteria of the decision-making model. It is worth recalling that in the previous sections, only the places of the alternatives in the rankings were examined. Here, the study was also conducted by excluding 1, 2 and 3 criteria in turn; the results are shown in Figures 10–12. In Figure 10, the calculated utility values of the alternatives for COMET, VIKOR and TOPSIS with single criteria excluded are presented. The "None" criterion means that no criterion is excluded. The differences between the utility values of the alternatives for the COMET and TOPSIS methods are minimal. On the other hand, the differences between the utility values of the alternatives from VIKOR and COMET, and between those from VIKOR and TOPSIS, are large: the utility values of the alternatives for the VIKOR method are much higher than those for COMET and TOPSIS. The utility values of the alternatives from the COMET method with the criterion C_2 excluded and with no criterion excluded are almost identical.
On the other hand, the utility values of the alternatives with the criterion C_6 excluded are much higher than those with no criterion excluded and with the criterion C_2 excluded. In the case of the VIKOR method, the differences between the utility values for the excluded criteria are not as significant as for the COMET method. The utility values of the alternatives with the criterion C_6 excluded are more similar to the reference utility values of the alternatives than the utility values with the criterion C_2 excluded. In the TOPSIS method, the alternatives' utility values for the excluded criterion C_2 and for no excluded criterion are very similar to each other. On the other hand, in the case of the utility values for the excluded criterion C_6, the difference from the utility values for the excluded criterion C_2 and for no excluded criterion is very large; the utility values of the alternatives for the excluded criterion C_6 are the highest of the considered preferences. The calculated utility values of the alternatives using the TOPSIS, VIKOR and COMET methods for excluded pairs of criteria are visualized in Figure 11. The "None" criterion means that no criterion is excluded. The utility values of the TOPSIS and COMET alternatives are similar. However, in the case of the utility values for the VIKOR and COMET methods, as well as for the VIKOR and TOPSIS methods, a significant difference is visible because the VIKOR method has much higher values. The utility values of the alternatives for the excluded criterion pair C_7C_2 and for no excluded pair are similar for COMET. However, this cannot be said about the criterion pair C_3C_2, because the difference between its utility values and the reference ones is large. The utility values for the criterion pairs C_3C_2 and C_7C_2 also differ significantly from each other.
For the VIKOR method, the utility values for the criterion pair C_3C_2 and for no excluded pair do not differ significantly. However, the criterion pair C_7C_2 has much lower utility values of the decision options than the pair C_3C_2 and the non-excluded case. For the TOPSIS method, the difference between the utility values of the decision options for the excluded criterion pair C_7C_2 and for no excluded pair is minimal. On the other hand, for the excluded criterion pair C_3C_2, the difference is enormous; the excluded criterion pair C_3C_2 has the highest utility values of the TOPSIS alternatives. The utility values of the decision options calculated using the VIKOR, TOPSIS and COMET methods for excluded criteria triples are presented in Figure 12. The "None" criterion means that no criterion is excluded. The utility values of the alternatives in the triple-exclusion cases are similar for TOPSIS and COMET. In the case of the VIKOR method, the utility values of the decision options differ significantly from the utility values for the TOPSIS and COMET methods. The difference in the utility values of the alternatives between the excluded triple C_8C_2C_7 and the non-excluded case is much smaller than between the excluded triple C_6C_2C_7 and the non-excluded case for COMET. The reference utility values of the alternatives have the smallest values, while the utility values for the excluded triple C_6C_2C_7 have the highest values. In the case of the VIKOR method, the utility values of the alternatives for the excluded triple C_6C_2C_7 are the lowest. For the triple C_8C_2C_7, the difference between the utility values of the decision options and the reference utility values of the alternatives is not significant.
The utility values of the alternatives when excluding the triples C_8C_2C_7 and C_6C_2C_7 are similar for TOPSIS. The difference between the reference utility values of the alternatives and the utility values of the alternatives for the excluded triples C_8C_2C_7 and C_6C_2C_7 is significant (WS = 0.8713 and r_w = 0.8273).
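The distances reported alongside the similarity coefficients throughout this analysis can be read as plain Euclidean distances between utility vectors. A minimal sketch with hypothetical utility values (the function name and the data are illustrative):

```python
import numpy as np

def utility_distance(ref_utils, excl_utils):
    """Euclidean distance between the reference utility values of the
    alternatives and the utilities obtained after excluding criteria."""
    ref_utils = np.asarray(ref_utils, dtype=float)
    excl_utils = np.asarray(excl_utils, dtype=float)
    return float(np.linalg.norm(ref_utils - excl_utils))

# Hypothetical utility vectors for three alternatives
print(round(utility_distance([0.8, 0.5, 0.3], [0.7, 0.6, 0.3]), 4))  # 0.1414
```

Unlike the rank-based coefficients, this distance also reacts when utilities shift without changing the order of the alternatives, which is what makes the utility-level analysis in this section complementary to the ranking-level one.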

Conclusions
In this paper, we focused on the structuring phase of the MCDA process. In particular, in order to make decision support more effective, we examined the relevance of the set of input decision criteria in the model. Our research was embedded in a reference practical problem of wind farm location [95]. In the methodological dimension, three MCDA decision-making methods, i.e., COMET, TOPSIS and VIKOR, were used. Using similarity coefficients, in particular Spearman's and WS coefficients, we showed in the analysis that TOPSIS and COMET are the most resistant to omitting one criterion, a pair of criteria or three criteria in the ranking process.
In terms of the analysis of the used criteria, the research showed that the most crucial criteria for the COMET method are C_3 and C_6. In the studies conducted to exclude a single criterion, as well as pairs and triples of criteria, these criteria had the lowest values of the similarity coefficients WS, r_w and r_s with the reference ranking. Moreover, the Euclidean distance between the utility values of the alternatives calculated without these criteria and the reference utility values of the alternatives was high. Therefore, the similarity of the resulting rankings to the reference ranking when excluding the criteria mentioned above is minimal for the COMET method.
Concerning the TOPSIS method, the most influential criteria for the final ranking, for single, double and triple exclusions, are the criteria C_6 and C_3. The resulting rankings, which excluded these criteria, had the lowest similarity coefficients, and these rankings differed in a large number of positions from the reference ranking. Moreover, the utility values of the alternatives calculated when excluding these criteria (singly, in pairs and in triples) differed significantly from the reference utility values of the alternatives calculated taking into account all the defined criteria.
Unlike the COMET and TOPSIS methods, for the VIKOR method the criteria C_4 and C_6 have the most significant impact on the positions of the considered alternatives in the ranking. These criteria are the most important because their similarity coefficients are much smaller than those of the rest of the considered criteria. Also, the Euclidean distance between the utility values of the alternatives calculated without these criteria and the reference utility values of the alternatives is considerable. Moreover, the positions of the alternatives in the rankings with the criteria C_4 and C_6 excluded differ from the positions of the alternatives in the TOPSIS method reference ranking.
Compared to COMET and TOPSIS, in the VIKOR method the utility values of the alternatives calculated excluding the essential criteria differ significantly less from the reference utility values of the alternatives. In the COMET and TOPSIS methods, the utility value distribution of the alternatives with the least significant single criterion excluded is similar to that of the reference alternatives. For TOPSIS and COMET, the utility values of the alternatives excluding the most significant criterion (single, pair or triple) are much higher than the reference utility values and the utility values excluding the least significant criterion. It can be concluded that TOPSIS and COMET evaluate the decision variants similarly.
It should be pointed out that the advisability and effectiveness of the proposed approach in the objectification of decision support models have been demonstrated. The use of reference MCDA methods, as well as the proposed coefficients, is a useful tool in the process of eliminating redundant and irrelevant criteria from the decision support model. Importantly, the approach is highly universal and can be used by analysts whenever multi-criteria decision models are built.
Since only an exemplary decision problem was used in the research, a direction for further research is to build reference models of criteria sets for given decision problems. The next step in improving the efficiency of the proposed approach is its further algorithmization, in which the applied factor values will describe the relevance of individual criteria and their sets more analytically. Another challenge is to apply this approach in an uncertain data environment with the use of various fuzzy number generalizations.

Acknowledgments:
The authors would like to thank the editor and the anonymous reviewers, whose insightful comments and constructive suggestions helped us to significantly improve the quality of this paper.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

Appendix A
Table A1 presents the rankings of alternatives for particular variants of excluded criteria, together with the similarity coefficients and the distance from the reference ranking for the VIKOR method. The reference ranking in the table is presented as the excluded "None" criterion. The largest total correlation of the three coefficients with the reference ranking was obtained with the criterion C_10 excluded. In this ranking, there is a big difference between the factors r_s and r_w and the factor WS. The WS similarity coefficient is much smaller in the case of the ranking with the criterion C_10 excluded than in the case of the rankings with the criteria C_5 and C_7 excluded. Slightly worse values of the similarity coefficients were achieved in the rankings with the criteria C_2, C_5 and C_7 excluded. In the case of the ranking where the criterion C_2 was excluded, the r_s similarity coefficient is much higher than the WS similarity coefficient. In the case of the ranking where the criterion C_1 was excluded, the value of the similarity coefficient r_s is the same as in the case of the ranking where the criterion C_10 was excluded. The difference between the WS ratio and the ratios r_w and r_s is relatively large, which means that according to the indicators r_s and r_w, the similarity of the ranking with the reference ranking is much greater than according to the indicator WS. The rankings in which the criteria C_9 and C_8 are excluded have similar values of the correlation indicators; however, they are much smaller than in the other rankings, and the biggest difference is visible in the r_w value. In the case of the ranking with the criterion C_6 excluded, the similarity with the reference ranking is small, and the distance from the reference ranking is large. The smallest values of the correlation indicators with the reference ranking occur for the ranking with the criterion C_4 excluded, which also has the greatest distance from the reference ranking. This means that it is the most influential criterion in the VIKOR method. Table A1.
Rankings of alternatives with similarity coefficients and distance for criteria not included in the ranking process (method: VIKOR).

Figure A1 shows charts presenting the relations between the reference ranking and the rankings excluding one criterion for the VIKOR method. The values of the reference ranking are on the x-axis, and on the y-axis there are the values of the ranking created when a given criterion was excluded. If the criterion C_10 or C_5 is excluded, the difference between the resulting ranking and the reference ranking is the smallest among the considered variants, because seven alternatives are on the same positions as in the reference ranking. For the resulting rankings with the criteria C_1, C_2, C_3 and C_8 excluded, six of the alternatives were on the same positions as in the reference ranking, while the rest of the alternatives took entirely different positions. The similarity of the ranking created when excluding the criterion C_6 with the reference ranking is smaller, as only five alternatives keep their positions. When excluding the criteria C_4, C_7 and C_9, only four positions do not differ, and these rankings are the least similar to the reference ranking. This means that the criteria for evaluating alternatives in the VIKOR method are very influential. Figure A1. Visualizations of the dependence of the reference ranking on rankings without a particular criterion (method: VIKOR).

Appendix B
Table A2 presents the rankings of alternatives for particular variants of excluded criteria and the similarity coefficients of these rankings with the reference ranking. The reference ranking in the table is presented as the excluded "None" criterion. The best correlation with the reference ranking has the ranking with the criterion C_2 excluded, because the similarity coefficients have a value of 1 and the distance between the preferences was 0.0038. The similarity coefficients for the rankings with the criteria C_10, C_8 and C_7 excluded were slightly worse. For the coefficient values for the criteria C_8 and C_7 there is no significant difference, while for the criterion C_10 the difference between the indicator WS and the indicators r_w and r_s is much bigger than for the criteria C_8 and C_7. For the rankings with the criteria C_9 and C_5 excluded, there is a big difference between the ratio WS and the ratios r_w and r_s: the r_w and r_s similarity ratios are much higher than the WS ratio. It means that according to the WS ratio, the correlation between the reference ranking and the ranking in which the criterion was excluded is much smaller than according to the rest of the considered correlation coefficients. The coefficients WS, r_w and r_s take the same values in the case of the correlation between the reference ranking and the rankings with the criteria C_1 and C_4 excluded. The lowest similarity of the rankings was for the rankings with the criteria C_3 and C_6 excluded. In the case of the excluded criterion C_3, the indicators WS and r_w indicate a much greater correlation between the rankings than the indicator r_s. On the other hand, in the case of the excluded criterion C_6, the biggest difference is between the WS indicator and the r_w indicator. It is also worth mentioning that for this criterion, the difference between the values of each indicator is large. Table A2. Rankings of alternatives with similarity coefficients and distance for criteria not included in the ranking process (method: COMET).

Figure A2 shows charts comparing the rankings of the COMET method. The x-axis is a representation of the reference ranking, while the y-axis is a representation of the ranking calculated by excluding one criterion. In the case of excluding the criterion C_2, there is no difference between the resulting ranking and the reference ranking. When the criterion C_8 or C_7 is excluded in the process of evaluating the alternatives, the rankings differ in only two positions. If the criterion C_10 is excluded, however, the rankings differ in four positions. When excluding the criterion C_4 or C_1, the difference in the ranking occurs for the swapped pairs A_6 and A_10 between positions 9 and 10 and A_1 and A_5 between positions 3 and 4; the rest of the ranking positions remain unchanged. A little less similarity between the reference ranking and the ranking with an excluded criterion is found for the criteria C_5 and C_9. The lowest similarity between the reference ranking and the ranking with an excluded criterion is for the criteria C_6 and C_3. It can be stated that these criteria have the highest impact during the ranking process, while the criterion C_2 has the lowest impact in the COMET method ranking.

Appendix C
Table A3 shows the rankings of alternatives for each variant of excluded criterion pairs, together with the similarity coefficients and the distance from the reference ranking. The reference ranking is given in the table as the row with "None" excluded. For most of the rankings created with a pair of criteria excluded, there are significant differences among the values of the similarity coefficients, so it is difficult to determine which of these rankings has the highest similarity to the reference ranking. The ranking created with the pair C8C2 excluded has the smallest distance to the reference ranking. However, its WS value with respect to the reference ranking is much smaller than for the rankings created with the pairs C9C2, C7C2 and C5C2 excluded.
On the other hand, the value of r_w for the ranking excluding the pair C8C2 is higher than for the rankings excluding the pairs mentioned above. For the ranking created with the pair C10C2 excluded, the WS value is similar to that of the ranking excluding C8C2; however, the r_s and r_w coefficients are considerably higher for the ranking excluding C8C2. The rankings created with the pairs C3C2 and C6C2 excluded have small values of the similarity coefficients, which means that their correlation with the reference ranking is weak. It can also be seen that for the ranking with the pair C3C2 excluded, the distance to the reference preferences is much smaller than for the ranking with the pair C10C2 excluded, although the C10C2 ranking has higher values of the similarity coefficients. The smallest similarity to the reference ranking occurs for the ranking that excludes the pair C4C2. There is a large difference between the WS and r_s coefficients on the one hand and r_w on the other: according to WS and r_s, the ranking with this pair excluded is more similar to the reference ranking than r_w indicates.
Table A3. Rankings of alternatives with similarity and distance coefficients for criterion pairs not included in the ranking process (method: VIKOR).

Figure A3 shows charts illustrating the relations between the reference ranking and the rankings with a pair of criteria excluded for the VIKOR method. The values of the reference ranking are on the x-axis, and the y-axis shows the values of the ranking created with the given pair of criteria excluded. The rankings most similar to the reference ranking are those in which the pairs C5C2 and C9C2 were excluded, because the r_w and WS coefficients matter more here than the distance. Alternatives in these rankings occupy the same eight positions as in the reference ranking, so these rankings are very similar to it and the excluded pairs have little influence on the ranking process. The remaining rankings, with other pairs of criteria excluded, are only weakly connected with the reference ranking: most of the alternative positions they determine differ from the positions in the reference ranking. The lowest similarity is found for the rankings with the pairs C4C2, C6C2, C8C2 and C10C2 excluded. The rankings with the pairs C4C2, C6C2 and C8C2 excluded agree with the reference ranking in only three positions, and the ranking created with the pair C10C2 excluded agrees in only two positions. This means that these pairs of criteria have a strong influence on the positions of the alternatives in the ranking process.
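The exclusion experiment behind these tables can be sketched as follows: for each subset of criteria to be excluded, the remaining weights are renormalised and the ranking is recomputed, after which it is compared with the reference ranking. In the sketch below a simple weighted-sum aggregation stands in for VIKOR and COMET, and the decision matrix and weights are hypothetical, not the values used in the study.

```python
from itertools import combinations

# Leave-k-out sensitivity sketch: drop a subset of criteria, renormalise the
# remaining weights, recompute the ranking, and compare with the reference.
# A weighted-sum aggregation is used here as a stand-in for VIKOR/COMET.

def rank(scores):
    """Convert scores (higher is better) into ranking positions 1..n."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    positions = [0] * len(scores)
    for pos, i in enumerate(order, start=1):
        positions[i] = pos
    return positions

def ranking_without(matrix, weights, excluded):
    """Ranking computed with the criteria in `excluded` removed."""
    kept = [j for j in range(len(weights)) if j not in excluded]
    total = sum(weights[j] for j in kept)  # renormalise remaining weights
    scores = [sum(row[j] * weights[j] / total for j in kept) for row in matrix]
    return rank(scores)

# Hypothetical normalised decision matrix: 4 alternatives x 3 criteria.
matrix = [[0.9, 0.2, 0.5],
          [0.4, 0.8, 0.6],
          [0.7, 0.5, 0.1],
          [0.1, 0.9, 0.9]]
weights = [0.5, 0.3, 0.2]

reference = ranking_without(matrix, weights, excluded=set())
for pair in combinations(range(len(weights)), 2):
    print(pair, ranking_without(matrix, weights, excluded=set(pair)))
```

Each printed ranking would then be compared with the reference ranking using the WS, r_w and r_s coefficients, exactly as in Tables A3–A6.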

Appendix D
Table A4 presents the rankings of alternatives for the particular variants of excluded criterion pairs, together with the similarity coefficients of these rankings with the reference ranking for the COMET method. The reference ranking is given in the table as the row with "None" excluded. The rankings most similar to the reference ranking are those in which the pairs C1C2, C10C2, C4C2, C7C2, C8C2 and C9C2 were excluded; the values of their similarity coefficients are close to 1, which means that these pairs do not have much influence on the ranking. Slightly smaller values of the similarity coefficients are obtained for the rankings with the pairs C5C2 and C9C2 excluded. However, there is a large difference between the WS coefficient and the r_s and r_w coefficients: WS is much smaller, so according to r_s and r_w these rankings are more similar to the reference ranking than WS indicates. The least similarity to the reference ranking is found for the rankings in which the pairs C3C2 and C6C2 were excluded. The values of their similarity coefficients are much smaller than for the rest of the rankings with excluded pairs, and the distance between the preferences of the reference and the calculated ranking is much larger than for the remaining rankings. There is also a large difference between the WS and r_w coefficients on the one hand and the r_s coefficient on the other for the ranking with the pair C3C2 excluded, while for the ranking excluding the pair C6C2 all three similarity coefficients differ from one another.
Table A4. Rankings of alternatives with similarity and distance coefficients for criterion pairs not included in the ranking process (method: COMET).

Figure A4 shows charts that illustrate the relationship between the reference ranking and the rankings created with a pair of criteria excluded for the COMET method. The rankings excluding the pairs C8C2 and C7C2 have the greatest similarity to the reference ranking, so these pairs have the least influence on the ranking. The pairs C6C2 and C3C2 have the highest impact on the ranking process: the graphs show that most of the alternatives occupied a different position in the rankings calculated with these pairs excluded than in the reference ranking. Excluding the pair C9C2 also disturbs the positions of the alternatives in the ranking, but not as significantly. In the remaining rankings with a pair of criteria excluded, most of the alternatives occupy the same position as in the reference ranking.

Appendix E
Table A5 shows the rankings of alternatives for the individual variants of excluded criteria triads, together with the similarity coefficients and the distance from the reference ranking. The reference ranking is given in the table as the row with "None" excluded. The rankings in which the triads C9C2C7 and C8C2C7 are excluded are closest to the reference ranking, and the values of their similarity coefficients are large, although the WS coefficient takes larger values than r_s and r_w. Their distance to the reference ranking is also small, which means that these triads have the least significant impact on the ranking. The rankings with the triads C1C2C7 and C3C2C7 excluded are less similar to the reference ranking; they have smaller values of the similarity coefficients than the rankings excluding C9C2C7 and C8C2C7. The WS coefficient in these rankings is much higher than r_s and r_w, which means that according to WS these rankings retain a strong correlation with the reference ranking. For the ranking created with the triad C10C2C7 excluded, large differences between the values of the similarity coefficients can be noticed: WS reaches the highest value, r_w a lower one, and r_s the lowest of the three. Small similarity to the reference ranking is found for the rankings with the triads C5C2C7 and C6C2C7 excluded; they have small values of the similarity coefficients as well as a large distance, which means that these criteria are important in the ranking of the decision-making variants. The triad C4C2C7 has the largest influence on the ranking process: the values of its similarity coefficients are much lower than for the remaining triads, the distance is larger, and the ranking created with this triad excluded has the smallest similarity to the reference ranking for the VIKOR method.
Table A5. Rankings of alternatives with similarity and distance coefficients for triads of criteria not included in the ranking process (method: VIKOR).

Figure A5 shows charts representing the relations between the reference ranking and the rankings with a triad of criteria excluded for the VIKOR method. The values of the reference ranking are on the x-axis, and the y-axis shows the values of the ranking created with the given triad excluded. The rankings excluding the triads C8C2C7 and C9C2C7 share the most positions with the reference ranking for the VIKOR method; however, these are only five positions, so most of the alternative positions still differ. The remaining rankings with a triad excluded are even less similar to the reference ranking. The smallest similarity is found for the rankings excluding the triads C6C2C7 and C4C2C7, in which all of the alternative positions differ from the reference ranking. This means that the impact of these triads on the ranking process is enormous, and they are essential in the model.

Appendix F
Table A6 presents the rankings of alternatives for the individual variants of excluded criteria triads, together with the similarity coefficients of these rankings with the reference ranking for the COMET method. The reference ranking is given in the table as the row with "None" excluded. The rankings most similar to the reference ranking are those with the triads C1C2C7, C4C2C7, C8C2C7 and C10C2C7 excluded; their similarity coefficients are close to 1, which means that their correlation with the reference ranking is very high. These rankings also have a much smaller distance to the reference ranking than the other rankings created with excluded criteria. The distance between the preferences of the reference ranking and the ranking determined with the triad C9C2C7 excluded is much smaller than for the rankings created with the triads C1C2C7, C4C2C7 and C10C2C7 excluded; however, the WS value for this ranking is much lower than for those rankings. Much less similar to the reference ranking is the ranking that excludes the triad C5C2C7; here there is also a large difference between the WS coefficient and the r_w and r_s coefficients, with r_w and r_s taking much higher values than WS. The smallest similarity to the reference ranking is found for the rankings in which the triads C3C2C7 and C6C2C7 were excluded during the ranking process. These rankings have the smallest values of the similarity coefficients in the table, and their distance is much larger than for the rest of the rankings. This means that these triads are very important for the ranking process.
Table A6. Rankings of alternatives with similarity and distance coefficients for triads of criteria not included in the ranking process (method: COMET).

The diagrams in Figure A6 illustrate the relationship between the reference ranking and the rankings created with a triad of criteria excluded for the COMET method. The values of the reference ranking are on the x-axis, and the y-axis shows the values of the ranking created with the given triad excluded. Among the rankings with a triad excluded, the most similar to the reference ranking are those excluding the triads C1C2C7, C4C2C7 and C8C2C7; the alternatives in these rankings differ from the reference ranking in only two positions, and the remaining positions are identical. The relevance of these triads in the ranking process is therefore small. Slightly less similar to the reference ranking is the ranking created with the triad C10C2C7 excluded, in which the alternatives differ from the reference ranking in three positions. Most of the positions in the ranking with the triad C9C2C7 excluded are the same as in the reference ranking, while only a minority of positions agree for the ranking with the triad C5C2C7 excluded. The least similarity to the reference ranking is found for the rankings with the triads C3C2C7 and C6C2C7 excluded, which means that these triads have the most significant influence on the ranking of the decision-making options.
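The position-by-position comparisons read off the charts in these appendices reduce to counting the positions at which a reduced ranking agrees with the reference ranking. A small helper illustrates this; the example rankings are hypothetical, not values from the figures.

```python
# Count the positions at which two rankings of the same alternatives agree.
# Rankings are lists of positions: ranking[i] is the position of alternative i.

def matching_positions(reference, other):
    """Number of alternatives placed at the same position in both rankings."""
    return sum(1 for r, o in zip(reference, other) if r == o)

# Hypothetical example: a reduced ranking that swaps four pairs of neighbours.
reference = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
reduced = [2, 1, 3, 4, 6, 5, 7, 8, 10, 9]
print(matching_positions(reference, reduced))  # → 4
```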