Comparative Assessment of the Stability of AHP and FAHP Methods

Abstract: Mathematical models describing physical, technical, economic, and other processes can be used to analyze these processes and predict their results, provided that these models are stable and their results are stable with respect to the model parameters used: small changes in the values of the model parameters should correspond to small changes in the results. Multicriteria decision-making models need a check of the stability of their results against the models' main components: the values of the criteria weights and the elements of the decision matrix. In this article, we study the stability of models associated with the calculation of criteria weights. For the analysis, the most commonly used models are taken: the Analytic Hierarchy Process (AHP) method and the fuzzy Analytic Hierarchy Process (FAHP) method, in which fuzzy numbers are used under conditions of data uncertainty. Both mathematically well-founded methods verify the consistency of the expert evaluations. The method of statistical simulation (Monte Carlo) is the basis for studying the stability of the results. The study checks the consistency of the evaluations provided by the experts, calculates the criteria weights, and evaluates their relative errors after a slight change in the estimates of the pairwise comparisons of the criteria provided by the experts. The comparison matrix of the FAHP method is constructed on the basis of the assessments of the entire expert group. It estimates the boundaries of variance of the fuzzy criteria weights. This paper estimates the stability of the criteria weights associated both with the mathematical methods themselves and with the experts' estimates. The results are useful for studying the stability of specific MCDM methods when ranking alternatives.


Introduction
A mathematical model makes practical sense if its results are stable with respect to the model parameters, that is, if slight variations in the model parameters correspond to only insignificant variations in the values of the resulting characteristics of the model. The components of Multicriteria Decision-Making (MCDM) models are the criteria characterizing the process under evaluation and the weights of these criteria.
The criteria weights provide a quantitative estimation of the importance of the criteria. Given that the use of criteria weights in MCDM methods has an essential influence on the result of the evaluations and on the making of the proper decision, an investigation of the accuracy of such evaluations is interesting and important from both the theoretical and the practical point of view. This paper contains an investigation of the stability of the evaluations of the subjective weights of the criteria and the influence of data uncertainty upon the results.
Uncertainty of data may result from the evaluation of a subjective expert, the extent of the expert's interest, ambiguity, inaccuracy of measurements, or improperly applied methods. Various approaches, such as, in particular, fuzzy set theory and the methods of mathematical statistics, Boolean logic, logistic regression, Monte Carlo simulation, Bayesian networks, and neural networks, are used to evaluate the influence of the degree of uncertainty [1].
So-called subjective weights, obtained on the basis of peer reviews, are most frequently applied in practice [2][3][4][5][6]. By its nature, a subjective evaluation carries uncertainty in itself. Despite the experience and competence of an expert, the evaluations provided by the same expert may vary when solving complex problems with a large number of criteria. For example, if an expert fills in the same questionnaire several times at different moments, the resulting evaluations usually differ from each other.
There are various techniques applied in the evaluation of the weight criteria. The simplest methods are based on a ranking of the criteria depending on their significance and on the direct evaluation of the weights when the sum of the evaluations is equal to one or to 100%. The use of other scales with normalization of the results is also possible. More complex subjective methods for weight evaluation, like the AHP and FAHP, use mathematical theories and verify the consistency of the expert evaluations.
There is another approach to the quantitative evaluation of the importance of the criteria. This approach evaluates the structure of the data array, that is, the criteria values for all the alternatives [3,7,8]. Methods like this are called objective. Objective weights are rarely applied in practice, and we disregard them in this paper. A combined evaluation of the weights, based on the integration of subjective and objective evaluations, is also possible [9][10][11][12].
Evans [13] related the concept of sensitivity analysis in decision-making theory to the stability of an optimal solution under variation of the model parameters and to the accurate evaluation of the values of such parameters. The first significant papers in the field of sensitivity analysis were written on the basis of using the concepts of sensitivity analysis in linear programming for the development of an optimal approach that could be applied to the classical problems of decision theory [13], and on using entropy and the least squares method [14].
Zhou et al. [15] suggested a method for the calculation of the entropy weights in situations when the evaluation of the criteria might contain uncertainties, such as interval values, or both uncertainties and incompleteness, for example, with a distribution of judgements. Wolters and Mareschal suggested three types of sensitivity analysis: (1) a fixed relation between the variation of the ranking and the variations of the alternatives based on certain criteria, (2) the influence exerted by specific variations of the scores/criteria of the alternative, and (3) the minimum modification of the weights necessary for the alternative to take first place [16]. The analysis was focused on and developed for the preference ranking organization method for enrichment evaluation (PROMETHEE) methods. Triantaphyllou and Sánchez [17] presented a methodology for performing sensitivity analysis of the weights of the decision-making criteria and the alternative efficiency values for the weighted sum model (WSM), the weighted product model (WPM), and the analytic hierarchy process (AHP) methods. Evaluation of the influence exerted by uncertainty in the SAW method was performed by Podvezko [18], who evaluated the ranges of weight intervals for the process criteria, the levels of matching and stability of the expert evaluations, and the influence of uncertainty on the ranking of the matched objects. The influence of weight variation upon the final result in the SAW method was studied by Zavadskas et al. [19], and sensitivity analysis of the SAW, TOPSIS, MOORA, and PROMETHEE methods was studied by Vinogradova [20]. Memariani et al. [21] and Alinezhad [22] studied the influence of the values of the decision matrix elements upon the results of the ranking. Moghassem [23] increased and decreased the weights of all the criteria by 5, 10, 15, and 20 percent in a sensitivity analysis of the TOPSIS and VIKOR methods. Hsu et al.
performed sensitivity analysis of the TOPSIS method by increasing the three maximum weights by 10% and decreasing the three minimum ones by 10% [24]. Erkut and Tarimcilar [25] suggested dividing the problems of the stability of the AHP method into two groups. The approach of the first group assumes operations over the criteria proper, calculating the alternative evaluation as the sum of the alternative evaluations multiplied by the corresponding weights, based on the criteria under evaluation. The problems of the second group are solved as decision-making problems under conditions of uncertainty, with the uncertainty meaning that there are a number of possible states of nature and only one of them can be transformed into a true state. The very meaning of state probability is directly related to the meaning of the uncertainty of the problem to be solved within the framework of risk evaluation. The authors of that paper follow the first approach, solving the problems graphically by creating the weight space, which is represented as all the possible combinations of the weights for the first tier of the hierarchy. Consequently, by separating the weight space into sets, the spatial data can be generated from the space. In every one of the subsets, one of the alternatives possesses the highest evaluation ranking [25].
In his paper, Masuda [26] studied how variations of entire columns of the decision-making matrix might influence the values of the alternative priorities. He suggested using the sensitivity coefficient of the final vector of the alternative priorities for each of the column vectors in the decision matrix to show how significantly the values of the final alternative priorities vary. Warren [27] studied in more detail the theoretical aspects of the AHP method related to the evaluation scale, the determination of the vector of eigenvalues, the issue of normalization of the weights, and so on. Mimović et al. [28] suggested an integrated application of the analytic hierarchy process (AHP) and Bayesian analysis. The Bayesian formula managed to increase the input data accuracy for the analytic hierarchy process. The AHP method was used in that paper for the representation of the objectivized input data for the Bayesian formula in situations in which statistical evaluations of the probability are not possible. In the same way, Wu et al. [29], who generated pairwise comparison matrices and verified their stability, also suggested one of the methods for verification of the stability of the AHP method. The paper by Aguarón et al. shows the development of the theoretical basis for improvement of the AHP matrix inconsistency, when the Row Geometric Mean (RGM) is used as the prioritization procedure and the Geometric Consistency Index (GCI) as the inconsistency measure [30].
A number of papers with a genuine use of the MCDM model have appeared recently in which the stability of the results of the methods is studied. The paper by Chen et al. [31] evaluated the stability of the multicriteria weights by studying a GIS-based MCDM model, showing the influence of the variation of the criteria weights upon the model results in the spatial dimension and graphically. The weights were determined with the help of the AHP method and were varied from their initial values within limits of 20%. This range of variation of the initial weights was applied either to all the criteria or to each criterion, as required [31]. The paper by Deepa and Swamynathan [32] facilitated an improvement in the efficiency of internet networks through increasing their throughput capacity, by suggesting a mathematical model of a clustering protocol known as AETCP (a clustering protocol based on AHP-Entropy-TOPSIS). The mobile nodes were hierarchically organized into different clusters based on certain criteria. The integrated method for evaluation of the subjective and objective weights was applied to the evaluation of the mobile nodes. Later on, ranking of the sets of nodes was performed with the purpose of selecting the nodes with the largest weight as the corresponding cluster-head nodes [32]. The paper by Zyoud and Fuchs-Hanusch [33] addresses the severe problem of water deficit in water supply networks. The FAHP method was used to evaluate the factors influencing the loss of water. The decision was made by diagnosing the loss-of-water risk index at the level of the pipes and the areas. The Fuzzy Synthetic Evaluation Technique (FSET) was used to evaluate the water loss index at the level of the water supply system, and Ordered Weighted Averaging (OWA) was used to aggregate individual values of the index applied to each area. A Monte Carlo simulation model was used to generate the final ranking of the areas.
The results of this modeling showed sufficient stability from the point of view of the ranking of the investigated areas. Xue et al. [34] suggested a method for the evaluation of stability and safety in the construction of engineering facilities for a protective tunnel under a river, using the AHP-entropy weight method and the ideal point evaluation model. The paper by Kumar et al. [35] contains an evaluation of the stability of the factor model for the environmental impact risk for materials (products/services) related to pharmaceutical drugs. The model sensitivity was checked in terms of the pro rata variation of the considered risk factor with respect to the other factors, varying the weight value from 0.9 to 0.1; variations of inconsistencies were also observed for the other risks. An evaluation of the stability of a supplier selection model was suggested by Stević et al. [36] and executed by varying the weights and recording the variations in the ranking of the alternatives. The weights were varied in such a manner that, when one criterion was increased by a conditional unit (for example, 12%), another criterion was decreased correspondingly in order to satisfy the condition that the sum of the values of all the criteria remained unchanged. Continuing with the topic of the estimation of supplier model quality, Stojić et al. [37] used the WASPAS method and suggested the calculation of a coefficient α to generate the number of relative values of the alternative; the coefficient depended on the weight parameter (α lay within the limits of 0 to 1, with increments of 0.1).
The generation of individual values of the criteria weights in the SWARA method was suggested by Zavadskas et al. [38]. The paper by Pamucar et al. [39] also evaluated the influence of the criteria (and the sub-criteria) upon the order of ranking of the alternatives, which were represented by suppliers, within the framework of the problem of increasing the service quality of third-party logistics providers. To process uncertain data under the procedure of group decision-making, the paper considered interval rough numbers (IRN) and the IRN-BWM (best worst method). The stability of the obtained ranking of alternatives was checked by varying the values of the coefficients of the linear combination and by applying the operational competitiveness rankings analysis (OCRA) method [40]. Sensitivity analysis has also been used to confirm the stability of the final rankings of the results [41,42], to verify and evaluate the feasibility of the optimal alternative [43], and to study the influence of the variation of the parameters and criteria weights upon the final results of the ranking of the alternatives [44].
The stochastic approach to determining the uncertainty of the AHP weights has been used in different ways. Janssen [45] studied the sensitivity of the ranking of the alternatives using the effects table and compared this with the maximum evaluation of the decision maker. The sensitivities of the rankings of alternatives to overall uncertainty in scores and priorities were analyzed using a Monte Carlo approach. Eskandari and Rabelo [46] followed another stochastic approach, which gave these authors the opportunity to calculate the dispersions of the AHP weights and to process their uncertain behavior with the help of a Monte Carlo simulation. An approach that applied fuzzy logic, an analytic hierarchy process, and a Monte Carlo simulation was used to solve the problem of the overexpenditure of funds within the framework of urban transit projects, to facilitate effective planning of the future budget by the decision makers [47].
The stability of models related to the calculation of subjective criteria weights is investigated in this paper. The methods most frequently used in MCDM models, the analytic hierarchy process (AHP) method and the FAHP method, in which fuzzy numbers are used under conditions of data uncertainty, are taken for the analysis. The method of statistical simulation (Monte Carlo) is used in order to investigate the stability issues. The practical realization of the algorithm is written in the Python programming language. This paper's results can be used as an integral part of the study of the stability of MCDM methods in the ranking of alternatives.

Analytic Hierarchy Process (AHP) Method
The AHP method is the most frequently used in practice among all the subjective methods for the evaluation of criteria weights. The reason is that this method is mathematically substantiated, logically understandable, and allows a quantitative determination of the consistency of the evaluations provided by each of the experts. The experts compare all the possible pairs of criteria with each other. The pairwise comparison matrix $P = (p_{ij})$, $i, j = 1, \dots, m$, where $m$ is the number of criteria, is theoretically a ratio of unknown criteria weights: $p_{ij} = \omega_i / \omega_j$. The element $p_{ij}$ shows by how many times the $i$-th criterion is more important than the $j$-th one. The scale 1-3-5-7-9 suggested by the author of the method, Saaty [2], is applied to the evaluation. The criteria weights $\Omega = (\omega_j)$ are the normalized values of the eigenvector of the matrix $P$ corresponding to the largest eigenvalue $\lambda_{\max}$ of the matrix:

$$P\,\Omega = \lambda_{\max}\,\Omega. \quad (1)$$

The degree of consistency (internal consistency) of the expert evaluations determines the Consistency Index $CI$ and the Consistency Ratio $CR$:

$$CI = \frac{\lambda_{\max} - m}{m - 1}, \quad (2) \qquad CR = \frac{CI}{RI}, \quad (3)$$

where $RI$ is the average value of the Consistency Index of randomly simulated matrices of order $m$ [2]. The evaluations are considered to be consistent if the Consistency Ratio satisfies $CR \le 0.1$.
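As an illustration, this calculation can be sketched in Python, the language used for the practical realization in this paper. The function name `ahp_weights` and the RI table are our own assumptions; RI = 1.25 for m = 6 matches the examples below, while published RI tables differ slightly in the second decimal.

```python
# Sketch of formulas (1)-(3): the principal eigenvector of the pairwise
# comparison matrix P gives the criteria weights; CI and CR measure consistency.
# The helper name and the RI table are assumptions for illustration.
import numpy as np

# Random Index by matrix order; RI = 1.25 for m = 6 as in this paper's examples
# (published tables differ slightly, e.g. 1.24 is also cited for m = 6).
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.25, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(P):
    """Return (weights, CR) for an m x m pairwise comparison matrix P."""
    P = np.asarray(P, dtype=float)
    m = P.shape[0]
    eigvals, eigvecs = np.linalg.eig(P)
    k = np.argmax(eigvals.real)              # index of lambda_max
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                             # normalized eigenvector, formula (1)
    ci = (eigvals[k].real - m) / (m - 1)     # Consistency Index, formula (2)
    return w, ci / RI[m]                     # Consistency Ratio, formula (3)
```

For a perfectly consistent matrix ($p_{ij} = \omega_i / \omega_j$), $\lambda_{\max} = m$ and therefore $CR = 0$.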

Fuzzy Analytic Hierarchy Process (FAHP) Method
In the case of weight calculations using the FAHP method, the experts evaluate the criteria using interval values. Therefore, the uncertainty is included in the evaluations themselves. Unlike the deterministic AHP case, each evaluation is represented as a triangular fuzzy number (L, M, U). The most probable evaluation M corresponds to the evaluation provided by the AHP method. The number L shows the lowest possible boundary of the evaluation, and the number U the corresponding upper boundary [48].
The matrix for the pairwise comparison of the criteria by an expert (or the total evaluation by the entire group of experts) has the following representation: $\tilde{P} = (\tilde{p}_{ij})$, $\tilde{p}_{ij} = (L_{ij}, M_{ij}, U_{ij})$. The fuzzy numbers symmetric with respect to the main diagonal are $\tilde{p}_{ji} = (1/U_{ij},\, 1/M_{ij},\, 1/L_{ij})$; the main diagonal elements are $(1, 1, 1)$.
The Chang algorithm [49] is used for the calculation of the criteria weights. The value $S_i$, called the fuzzy synthetic extent, is calculated for the $i$-th criterion using the following formula:

$$S_i = \sum_{j=1}^{m} \tilde{p}_{ij} \otimes \left[ \sum_{i=1}^{m} \sum_{j=1}^{m} \tilde{p}_{ij} \right]^{-1}. \quad (4)$$

All the criteria are compared pairwise using the values $S_i$. The theory of the comparison of fuzzy numbers is applied: for two triangular fuzzy numbers $S_i = (L_i, M_i, U_i)$ and $S_j = (L_j, M_j, U_j)$, the degree of possibility of $S_i \ge S_j$ is

$$V(S_i \ge S_j) = \begin{cases} 1, & \text{if } M_i \ge M_j, \\ 0, & \text{if } L_j \ge U_i, \\ \dfrac{L_j - U_i}{(M_i - U_i) - (M_j - L_j)}, & \text{otherwise}, \end{cases} \quad (5)$$

and the degree of possibility of $S_i$ being greater than all the other fuzzy values is

$$d'(C_i) = V(S_i \ge S_1, \dots, S_m) = \min_{j \in \{1, \dots, m\},\, j \ne i} V(S_i \ge S_j), \quad i = 1, \dots, m. \quad (6)$$

The weight vector of the criteria is calculated using the following formulas:

$$W' = \left( d'(C_1), d'(C_2), \dots, d'(C_m) \right)^{T}, \quad (7)$$

$$W = \left( \frac{d'(C_1)}{\sum_{i=1}^{m} d'(C_i)}, \frac{d'(C_2)}{\sum_{i=1}^{m} d'(C_i)}, \dots, \frac{d'(C_m)}{\sum_{i=1}^{m} d'(C_i)} \right)^{T}. \quad (8)$$
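A minimal sketch of Chang's extent-analysis calculation in Python follows; the function name and the array layout (triangular numbers stored as (L, M, U) triples) are our assumptions.

```python
# A sketch of Chang's extent analysis: fuzzy synthetic extents S_i, pairwise
# degrees of possibility V(S_i >= S_j), and normalized crisp weights.
# Triangular fuzzy numbers are (L, M, U) triples; F has shape (m, m, 3).
import numpy as np

def chang_weights(F):
    """Return crisp criteria weights from a fuzzy comparison matrix F."""
    F = np.asarray(F, dtype=float)
    m = F.shape[0]
    row = F.sum(axis=1)                      # row sums, each an (L, M, U) triple
    total = F.sum(axis=(0, 1))               # grand total (L, M, U)
    # Fuzzy synthetic extent: S_i = row_i * (1/U_total, 1/M_total, 1/L_total)
    S = row * np.array([1.0 / total[2], 1.0 / total[1], 1.0 / total[0]])

    def V(a, b):                             # degree of possibility V(a >= b)
        if a[1] >= b[1]:
            return 1.0
        if b[0] >= a[2]:
            return 0.0
        return (b[0] - a[2]) / ((a[1] - a[2]) - (b[1] - b[0]))

    d = np.array([min(V(S[i], S[j]) for j in range(m) if j != i)
                  for i in range(m)])        # d'(C_i) = min over j != i
    return d / d.sum()                       # normalized weight vector W
```

For a matrix in which every element is (1, 1, 1), all extents coincide and the weights are equal.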

Stability Check for the AHP and FAHP Methods
The AHP and FAHP methods are applied in Multicriteria Decision-Making (MCDM) evaluations in order to determine the criteria weights. The AHP method is applied in the deterministic case, when the significance (weight) of each of the criteria is determined with one number. In this case, each expert determines the criterion significance in the matrix of pairwise comparisons of the criteria with one number (taken from the Saaty scale: 1-3-5-7-9). The FAHP method is used in conditions of data uncertainty, when interval fuzzy evaluations are used for the calculations. In this case, the evaluations of the pairwise comparison of the criteria (that is, the values of the FAHP method matrix) are also represented by interval fuzzy numbers.
The criteria weights can be used in the MCDM model methods if the weight evaluation methods, that is, the AHP and FAHP methods, are stable (resistant) in relation to natural random variations of the evaluations. Considering that the value M of a triangular fuzzy number is matched with the most probable evaluation of the AHP method, it is of interest to perform a parallel investigation of the stability of the AHP and FAHP methods. The references suggest more than 25 scales that can be used to form the values of the triangular fuzzy numbers. The frequently applied symmetric and asymmetric scales of triangular fuzzy numbers are used in this paper.
The stability of the AHP method can be understood in two ways: the stability of the method itself, which depends upon the essence of the method, that is, its mathematical basis, and the stability of the results, that is, the values of the criteria weights depending upon the evaluations provided by the experts, which vary due to the uncertainty inherent in their thinking processes.
Both stability options are studied in this paper.

Stability Check Algorithm for the AHP Method
The quantitative evaluation of the stability and the criterion for evaluating the stability depend upon the specific problem to be solved. Thus, for the evaluation of the stability of a method in the MCDM model, we can apply the percentage of cases in which the best alternative loses its leading position, the maximum inconsistency of the evaluations of the method, the percentage of variation in the order of the ranking of the alternatives, etc. [20].
The stability is evaluated in this paper as the maximum relative error of the criteria weights:

$$\delta_j^{(\xi)} = \frac{\left| \omega_j - \omega_j^{(\xi)} \right|}{\omega_j}, \quad (9)$$

where $\omega_j$ is the weight of the $j$-th criterion, $\omega_j^{(\xi)}$ is the weight of the $j$-th criterion obtained as the result of the simulation, $\xi$ is the simulation number, $1 \le \xi \le T$, and $T$ is the number of simulations.
The stability of the AHP method itself is understood as follows. The Saaty scale for the AHP method uses only integer evaluation numbers from 1 to 9, showing by how many times one (the $i$-th) criterion is more important than the other (the $j$-th). The evaluations symmetric with respect to the main diagonal are $p_{ji} = 1/p_{ij}$, where numbers less than 1 show by how many times the second ($j$-th) criterion is less important than the first ($i$-th) one. To check the stability of the AHP method itself, we expand the evaluation scale and assume that any real numbers can act as evaluations. That allows us to vary the evaluations provided by each of the experts in a random manner, using the method of statistical simulation (Monte Carlo) to simulate the evaluations within the limits of certain intervals, performing a consistency check of the evaluations each time, and recording the weight variation intervals. The statistical modeling method (Monte Carlo) allows a real situation to be reproduced on a computer many times, which cannot be replicated in practice, or whose implementation would require significant resources and time.
Why did Saaty suggest an integer number scale and not expand the scale to the set of real numbers? In the latter case, it would be sufficient to compare the importance of only one criterion (for example, the most important one) in relation to all the other criteria, that is, to fill in only one column (or row). It would then immediately be possible to fill in all the remaining rows (or columns) of the matrix, since their elements would be proportional to the elements of the one filled-in column.
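This remark can be illustrated with a short sketch (the example values are hypothetical): on an unrestricted real-valued scale, a single filled-in row of a consistent matrix determines every other element, since $p_{ij} = p_{1j} / p_{1i}$.

```python
# Illustration: one filled-in row of a consistent comparison matrix determines
# all the others on a real-valued scale, because p_ij = p_1j / p_1i.
import numpy as np

first_row = np.array([1.0, 3.0, 5.0, 7.0])   # criterion 1 compared to all others
P = first_row[np.newaxis, :] / first_row[:, np.newaxis]
# P is perfectly consistent: p_ij * p_jk == p_ik for all i, j, k,
# and its first row coincides with first_row.
```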
Considering that the criteria weights are related to the eigenvector of the comparison matrix, it is important to determine how small variations of the matrix elements affect the values of the eigenvector elements and, correspondingly, influence the values of the criteria weights-the normalized values of the eigenvector.
The stability check algorithm for the AHP method can be represented in the following manner.
Step 1. The matrix $P$ for the pairwise comparison of the criteria by one of the experts ($k = 1$) is selected. The consistency of the evaluations ($CR \le 0.1$) is verified. The criteria weights $\Omega = (\omega_j)$, $j = 1, 2, \dots, m$, are calculated.
Step 2. The percentage $q$ of deviation of the simulated elements $\hat{p}_{ij}$ from the expert evaluations $p_{ij}$ is determined ($q = 5\%$, $q = 10\%$). Therefore, the random simulated values of the evaluations vary within the interval $\hat{p}_{ij} \in [\,p_{ij} - q\,p_{ij},\; p_{ij} + q\,p_{ij}\,]$. The elements of the main diagonal remain unchanged: $\hat{p}_{ii} = 1$.
Only the integer evaluations $p_{ij} = 1, 2, \dots, 9$ on both sides of the main diagonal are varied. The elements symmetric with respect to the main diagonal are $\hat{p}_{ji} = 1/\hat{p}_{ij}$.
Step 3. A sequence of random numbers ($r = 1$), uniformly distributed within the interval $[0, 1]$, is selected using the method of statistical simulation (Monte Carlo). The random evaluation $\hat{p}_{ij}$ of the $k$-th expert with the deviation $q$ is calculated as $\hat{p}_{ij} = p_{ij} - q\,p_{ij} + 2 q\,p_{ij}\,u$, where $u$ is a random number from the sequence. A new random number from the sequence is used for each element of the matrix.
Steps 4-5. The consistency of the simulated matrix is verified ($CR \le 0.1$) and the criteria weights $\Omega^{(r)} = (\omega_j^{(r)})$ are calculated; then new sequences of random numbers ($r = 2, 3, \dots, T$) are selected, where $T$ is the number of repetitions (simulations), and Steps 2 and 3 are repeated.

Step 6. The relative errors of the criteria weights for every $j$-th criterion of the AHP method are calculated for every simulation $\xi$: $\delta_j^{(\xi)} = \left| \omega_j - \omega_j^{(\xi)} \right| / \omega_j$.
Step 7. The largest value of the relative errors of the criteria weights over all the criteria is calculated for every simulation $\xi$: $\delta^{(\xi)} = \max_j \delta_j^{(\xi)}$.
Step 8. The largest value of the relative errors of the criteria weights over all the simulations $\xi$ is taken as the AHP method error for the given comparison matrix: $\delta = \max_\xi \delta^{(\xi)}$.
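The steps above can be sketched as follows. The helper names, the additional return of the share of consistent matrices, and the explicit Random Index argument are our own assumptions; this is a sketch rather than the authors' exact implementation.

```python
# A sketch of Steps 1-8: every off-diagonal evaluation is perturbed uniformly
# within [p_ij - q*p_ij, p_ij + q*p_ij], reciprocity p_ji = 1/p_ij is restored,
# only consistent simulated matrices (CR <= 0.1) are kept, and the largest
# relative error of the criteria weights (formula (9)) is returned.
import numpy as np

def weights_cr(P, ri):
    """Eigenvector weights and Consistency Ratio; ri is the Random Index."""
    m = P.shape[0]
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    return w, ((vals[k].real - m) / (m - 1)) / ri

def stability_algorithm_1(P, ri, q=0.10, T=10_000, seed=0):
    rng = np.random.default_rng(seed)
    m = P.shape[0]
    w0, cr0 = weights_cr(P, ri)              # Step 1
    assert cr0 <= 0.1, "the initial matrix must be consistent"
    delta, consistent = 0.0, 0
    for _ in range(T):                       # Steps 2-5: T simulations
        Q = P.copy()
        for i in range(m):
            for j in range(i + 1, m):
                u = rng.random()             # uniform on [0, 1], Step 3
                Q[i, j] = P[i, j] * (1.0 - q + 2.0 * q * u)
                Q[j, i] = 1.0 / Q[i, j]      # keep reciprocity
        w, cr = weights_cr(Q, ri)
        if cr <= 0.1:                        # keep consistent matrices only
            consistent += 1
            # Steps 6-8: largest relative weight error over criteria and runs
            delta = max(delta, float(np.max(np.abs(w0 - w) / w0)))
    return delta, consistent / T
```

For the 6 × 6 matrices used in the Results section, `ri = 1.25` would be passed, matching the RI value reported there.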

Stability Check Algorithm for the AHP Method Related to the Evaluations of the Experts
The stability of the results, that is, the values of the criteria weights depending upon the psychological state of the experts and the incomplete certainty of their evaluations, is understood as follows.
It has been observed repeatedly that one and the same expert provides differing evaluations when performing comparative evaluations of the importance of the same criteria, and even when ranking their importance, at different moments in time. Naturally, the logic of the expert's thinking process does not undergo major changes during that time, and the evaluations provided by the expert do not differ significantly.
Therefore, we vary the evaluations provided by the experts using, naturally, the Saaty scale, varying their values by 1 (or 2), both towards an increase and towards a decrease of the values, while the comparative evaluations of the other criteria are also varied. However, internal inconsistency of the evaluations must not occur: the Consistency Ratio CR must be less than 0.1.
The stability check algorithm for the AHP method depending on the state and psychological condition of the experts can be represented in the following manner.
Step 1. The matrix for the pairwise comparison of the criteria by one of the experts ($k = 1$) is selected. The consistency of the evaluations ($CR \le 0.1$) is verified. The criteria weights $\Omega = (\omega_j)$, $j = 1, 2, \dots, m$, are calculated.
Step 2. A sequence of random numbers ($r = 1$), uniformly distributed within the interval $[0, 1]$, is selected using the statistical simulation method (Monte Carlo). The values of all the evaluations of the experts, the elements $p_{ij}$ of the matrix $P$, are varied (increased or decreased) by 1. To do that, if $0 < u < 0.5$, the value is increased; in the other case ($0.5 < u \le 1$), the value is decreased. In order to attain complete symmetry, the value $u = 0.5$ is excluded; that is, for $u = 0.5$ the elements of the matrix are not varied. The two options have equal probability. If $p_{ij} = 1$, the value is always increased; if $p_{ij} = 9$, the value is always decreased. The elements of the main diagonal remain unchanged: $\hat{p}_{ii} = 1$.

Step 3. The consistency of the evaluations ($CR \le 0.1$) of the simulated matrix is verified and the criteria weights $\Omega^{(1)} = (\omega_j^{(1)})$ are calculated.

Step 4. New sequences of random numbers ($r = 2, 3, \dots, T$) are selected, where $T$ is the number of repetitions (simulations). Steps 2 and 3 are repeated. The criteria weights $\Omega^{(r)} = (\omega_j^{(r)})$, $r = 2, 3, \dots, T$, are calculated.
Step 5. The relative errors of the weight values for each $j$-th criterion of the AHP method are calculated for every simulation $\xi$: $\delta_j^{(\xi)} = \left| \omega_j - \omega_j^{(\xi)} \right| / \omega_j$.
Step 6. The largest value of the relative errors of the criteria weights over all the criteria is calculated for every simulation $\xi$: $\delta^{(\xi)} = \max_j \delta_j^{(\xi)}$.
Step 7. We take the largest value of the relative errors of the criteria weights over all the simulations $\xi$ as the AHP method error for the given comparison matrix: $\delta = \max_\xi \delta^{(\xi)}$.
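The perturbation in Step 2 can be sketched as follows; for simplicity, the sketch assumes that the evaluations above the main diagonal are the integer values 1 to 9 (with their reciprocals below the diagonal), and the function name is our own. With floating-point random numbers, the excluded value $u = 0.5$ occurs with probability zero, so it needs no special handling. The relative errors would then be accumulated exactly as in Steps 3-7.

```python
# A sketch of the +/-1 perturbation on the Saaty scale described in Step 2.
# Assumption: the evaluations above the main diagonal are integers 1..9;
# the symmetric elements below the diagonal are restored as reciprocals.
import numpy as np

def perturb_saaty(P, rng):
    """Shift each upper-diagonal evaluation by +/-1 with equal probability."""
    Q = P.copy()
    m = P.shape[0]
    for i in range(m):
        for j in range(i + 1, m):
            v = int(round(P[i, j]))
            u = rng.random()                 # a new random number per element
            if v == 1:
                v += 1                       # 1 can only be increased
            elif v == 9:
                v -= 1                       # 9 can only be decreased
            elif u < 0.5:
                v += 1
            else:                            # u >= 0.5 (u == 0.5 is negligible)
                v -= 1
            Q[i, j] = float(v)
            Q[j, i] = 1.0 / v                # keep reciprocity
    return Q                                 # the main diagonal stays equal to 1
```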

Stability Check Algorithm for the FАНР Method
Various types of uncertainty influence the evaluation of the stability of the FAHP method. As in the deterministic case, the values of the elements of the criteria comparison matrix depend on the logic of the thinking of the experts and on their state at the moment of evaluation. Besides, the fuzzy method itself includes uncertainty in the evaluations: a triad of values is used instead of a one-point evaluation. It should also be kept in mind that the stability evaluation for the FAHP method refers solely to the weight evaluation algorithm used by us, formulas (4)-(8).
The stability check for the FAHP method for the calculation of the criteria weights can be represented in the form of the following steps.
Step 1. A fuzzy pairwise comparison matrix $\tilde{P} = (\tilde{p}_{ij})$ for the criteria is formed on the basis of the AHP matrix $P$: $\tilde{p}_{ij} = (L_{ij}, M_{ij}, U_{ij})$, where $M_{ij} = p_{ij}$. The values $L_{ij}$ and $U_{ij}$ vary depending on the selected symmetric or asymmetric scale of the fuzzy numbers.
Step 2. The consistency of the evaluations ($CR \le 0.1$) of the matrix of modal values $M = (M_{ij})$ is verified. If the Consistency Ratio is greater than 0.1, the pairwise comparison matrix is discarded.
Step 3. A sequence of random numbers ($r = 1$), uniformly distributed within the interval $[0, 1]$, is selected using the method of statistical simulation (Monte Carlo), the evaluations are varied, and the criteria weights $W^{(1)}$ ($r = 1$) are calculated.
Step 4. New sequences of random numbers ($r = 2, 3, \dots, T$) are selected, where $T$ is the number of repetitions (simulations), and Steps 2 and 3 are repeated.
Step 5. The relative errors of the weight values for each $j$-th criterion of the FAHP method are calculated for every simulation $\xi$: $\delta_j^{(\xi)} = \left| \omega_j - \omega_j^{(\xi)} \right| / \omega_j$.
Step 6. The largest value of the relative errors of the criteria weights over all the criteria is calculated for every simulation $\xi$: $\delta^{(\xi)} = \max_j \delta_j^{(\xi)}$.
Step 7. We take the largest value of the relative errors of the criteria weights over all the simulations $\xi$ as the FAHP method error for the given comparison matrix: $\delta = \max_\xi \delta^{(\xi)}$.
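Step 1 can be illustrated with a small sketch. The symmetric scale $(L, M, U) = (M - \text{spread},\, M,\, M + \text{spread})$, clipped to the interval $[1, 9]$, and the function name are our own assumptions for illustration; as noted above, more than 25 such scales are suggested in the references.

```python
# A sketch of Step 1: building a triangular fuzzy comparison matrix from a
# crisp AHP matrix with a symmetric scale (an assumption for illustration).
# Assumes the evaluations above the main diagonal are Saaty integers >= 1.
import numpy as np

def fuzzify(P, spread=1.0):
    """Return an m x m x 3 array of (L, M, U) triangular fuzzy numbers."""
    m = P.shape[0]
    F = np.ones((m, m, 3))                   # diagonal elements stay (1, 1, 1)
    for i in range(m):
        for j in range(i + 1, m):
            M = P[i, j]
            L = max(1.0, M - spread)         # clip to the 1..9 scale
            U = min(9.0, M + spread)
            F[i, j] = (L, M, U)
            F[j, i] = (1.0 / U, 1.0 / M, 1.0 / L)  # reciprocal fuzzy number
    return F
```

The resulting array can be passed directly to a Chang-style weight calculation, after which the Monte Carlo variation of the evaluations proceeds as in the previous algorithms.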

Results
This part illustrates the implementation of the above algorithms for checking the stability with several examples. For clarity, the algorithms for checking the stability of the AHP method use the same pairwise comparison matrices. In the first case, to assess the stability of the methods, a 6 × 6 matrix with a good consistency index is taken: CI = 0.025, RI = 1.25, CR = 0.02 < 0.1. The weights of the criteria of the first matrix are 0.0873, 0.4246, 0.149, 0.2585, 0.0496, and 0.0311 (Table 1). In the second case, to assess the stability of the first and second algorithms, a 6 × 6 matrix with a critical consistency index is taken: CI = 0.116, RI = 1.25, CR = 0.09 < 0.1 (Table 2). When the data of such a matrix change, even a small percentage of deviation can change the consistency of the data. The weights of the criteria of the second matrix are 0.3601, 0.1544, 0.2804, 0.0712, 0.0433, and 0.0905 (Table 2). In all the above algorithms for checking stability, the value of the largest relative error of the values of the criteria weights is taken. The stability of the method is established using different numbers of iterations: 100, 10,000, and 100,000. As with a small number of iterations the values of the largest relative errors change with each new check, the check is carried out several times (ten attempts). From the values obtained in the ten attempts, an interval is established, that is, the smallest and the largest values of the largest relative errors for each of the criteria, calculated by formula (9).

Practical Application and Analysis of the Implementation of the First Algorithm for Checking the Stability
Using the first algorithm for checking the stability, different percentages of deviation of all the elements are set: in the first case, the deviation is q = 5%, while in the second it is q = 10%. In each case, ten attempts are made to record the interval of the largest relative errors.
In the first case, the matrix data with a good consistency index are used (Table 1). Table 3 shows the results of ten attempts to check the stability of the method and the largest relative errors, with the number of iterations equal to 100 and a 10% deviation, using the data matrix from Table 1. The results show small values of the largest relative errors when all elements deviate by 10%. During the checking process, the percentage of consistent matrices among the total number of simulated matrices is recorded. At 5% and 10% deviation and with the number of simulations ranging from 100 to 100,000, all the matrices, that is, 100%, are consistent.
The intervals of the largest relative errors for different numbers of iterations are shown in Tables 4 and 5 (5% and 10% deviation, respectively). With an increase in the number of iterations, the range of values of the largest relative errors narrows. With a deviation q of 5%, the relative errors are smaller than with a deviation of 10%. In both verification cases, the results show the stability of the method: δ = 0.0579 (at 5% deviation) and δ = 0.1149 (at 10% deviation).

Table 5. The interval of the largest relative errors of the criteria weights when checking the stability by the first algorithm, q = 10%, 10 repetitions, 1st matrix.
A similar stability check is implemented with the data of the other matrix, for which the data consistency is critical (Table 2). With the number of simulations equal to 100, the consistency interval is wider than with 10,000 and 100,000 simulations. In general, with a deviation of 5%, the percentage of consistent matrices ranges from 97% to 100%; with a deviation of 10%, the percentage of consistent matrices is, on average, 16% less and fluctuates in the range from 77% to 89% (Table 6).

Table 6. The percentage of consistent matrices when checking the stability of the first algorithm using the data of the second matrix (5%, 10% deviation).

The intervals of the largest relative errors for the different numbers of iterations are shown in Tables 7 and 8 (5% and 10% deviation, respectively). With a deviation of 5%, the largest relative errors are smaller than with a deviation of 10%. In both verification cases, the results show the stability of the method: δ = 0.0531 (at 5% deviation) and δ = 0.116 (at 10% deviation).

Table 7. The interval of the largest relative errors of the criteria weights when checking the stability by the first algorithm, q = 5%, 10 repetitions, 2nd matrix.

Table 8. The interval of the largest relative errors of the criteria weights when checking the stability by the first algorithm, q = 10%, 10 repetitions, 2nd matrix.

The results obtained for the largest relative errors of the second matrix differ little from those for the first matrix (Tables 4 and 5). Testing the AHP method with the first algorithm shows a good result.
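The first algorithm, that is, randomly deviating all the elements by up to q%, restoring reciprocity, and discarding inconsistent samples, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation:

```python
import numpy as np

# Saaty's random index for common matrix orders
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.25, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    """Normalized principal eigenvector and lambda_max of A."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

def first_algorithm(A, q=0.05, n_iter=10_000, seed=0):
    """Deviate every upper-triangle element by up to +/- q, restore the
    reciprocal lower triangle, keep only consistent samples (CR < 0.1),
    and return the largest relative error per criterion together with
    the share of consistent matrices."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    w0, _ = ahp_weights(A)
    iu = np.triu_indices(n, 1)
    max_err, consistent = np.zeros(n), 0
    for _ in range(n_iter):
        B = np.eye(n)
        B[iu] = A[iu] * (1 + rng.uniform(-q, q, size=iu[0].size))
        B[iu[1], iu[0]] = 1.0 / B[iu]          # reciprocity
        w, lam = ahp_weights(B)
        if (lam - n) / (n - 1) / RI[n] >= 0.1:
            continue                            # discard inconsistent samples
        consistent += 1
        max_err = np.maximum(max_err, np.abs(w - w0) / w0)
    return max_err, consistent / n_iter
```

For a well-consistent base matrix and q = 5%, essentially all the simulated matrices remain consistent, matching the behavior reported for the first matrix.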

Practical Application and Analysis of the Implementation of the Second Algorithm for Checking the Stability
Using the second algorithm for checking stability, the elements of the pairwise comparison matrix are changed by one. The stability of the method is again established using different numbers of iterations: 100; 10,000; and 100,000. The relative errors of the criteria weights are recorded, and in each case ten attempts are made to fix their range. In the first case, the matrix data with a good consistency index are used (Table 1). Table 9 shows the results of ten attempts to check the stability of the method, together with the largest relative errors, with the number of iterations equal to 100, using the first matrix data (Table 1). The results show small values of the largest relative errors of the criteria weights. Compared with the results of the first algorithm at a 10% deviation (Table 3), the relative errors of the criteria weights (Table 10) have increased on average by 23%. For the first matrix, the largest relative errors of the second algorithm (Table 10) are smaller than those with a 10% data deviation, but larger than those with a 5% deviation. With the number of iterations equal to 100,000, the intervals of the criteria practically narrow to a single value. When checking consistency using the first matrix, all the generated matrices are consistent, in contrast to the second matrix, for which 55-70% of the matrices are inconsistent and discarded (Table 11).

Table 11. The percentage of consistent matrices when checking the stability by the second algorithm using the data of the first and second matrices.

The largest relative error of the results for the second matrix is small (Table 12). The results turn out to be better than when using the first matrix with the same algorithm. This can be explained by the smaller number of consistent matrices used (30-45%).
In both verification cases, the results show the stability of the method: δ = 0.5121 (using the first matrix) and δ = 0.7739 (using the second matrix).
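The ±1 change of pairwise scores used by the second algorithm can be sketched as follows. This is an illustrative reconstruction: the stepping of a score along the 1/9…9 Saaty ladder (so that reciprocal values are handled uniformly) and the consistency filter are assumptions about the procedure, not the authors' code:

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.25}
# Saaty's scale as an ordered ladder, reciprocals included
LADDER = [1/9, 1/8, 1/7, 1/6, 1/5, 1/4, 1/3, 1/2,
          1, 2, 3, 4, 5, 6, 7, 8, 9]

def ahp_weights(A):
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

def shift_score(x, d):
    """Move a pairwise score one step (d = +1 or -1) along the Saaty
    ladder, snapping to the nearest ladder value and clipping at 1/9, 9."""
    i = min(range(len(LADDER)), key=lambda k: abs(LADDER[k] - x))
    return LADDER[max(0, min(len(LADDER) - 1, i + d))]

def second_algorithm(A, n_iter=1000, seed=0):
    """Per sample, change one randomly chosen off-diagonal element by one
    scale step, discard inconsistent matrices (CR >= 0.1), and track the
    largest relative errors of the weights."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    w0, _ = ahp_weights(A)
    errs, consistent = np.zeros(n), 0
    for _ in range(n_iter):
        B = A.copy()
        i, j = rng.choice(n, size=2, replace=False)
        B[i, j] = shift_score(A[i, j], int(rng.choice([-1, 1])))
        B[j, i] = 1.0 / B[i, j]
        w, lam = ahp_weights(B)
        if (lam - n) / (n - 1) / RI[n] >= 0.1:
            continue
        consistent += 1
        errs = np.maximum(errs, np.abs(w - w0) / w0)
    return errs, consistent / n_iter
```

Because a one-step change of a score near 1 is a large multiplicative change, the resulting relative errors are substantially larger than those of the percentage-deviation algorithm, as the text reports.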

Practical Application and Analysis of the Implementation of the Third Algorithm for Checking the Stability
First, to give a better understanding of the stability check of the FAHP method, the step-by-step calculation of the weights of the original matrix using Chang's method is illustrated. This paper uses a matrix with triangular fuzzy numbers, formed from the AHP matrix. In many papers, the authors prefer to use symmetric triangular fuzzy numbers, such as 8: [7, 8, 9] and 9: [8, 9, 9]. There are also different modifications of this scale, for example, 1: [0, 1, 1] or 9: [7, 9, 9], among others. Ishizaka et al. [50] describe different modifications of these scales. Chang himself used non-symmetric triangular fuzzy numbers in his paper [49]. This paper uses both symmetric and asymmetric scales (Tables 13 and 14). The scales used do not go beyond the AHP scale proposed by Saaty, in the range from 1 to 9, so in the cases of "Equally important" and "Absolutely important", the symmetry of the triangle is broken (Table 13).

In previous papers [51,52], it was noted that, when forming a triangular fuzzy matrix from separate AHP matrices, the value M is not equidistant from L and U, so the asymmetric triangular fuzzy scale is formed with unequal distances from L to M and from M to U (Table 14). An important point in Chang's method for calculating the criteria weights is the comparison of the extensions of the fuzzy synthesis values S_i (5). The essence of calculating the value V(S_i ≥ S_j) is finding the ordinate of the intersection point of two triangular numbers (Figure 1). The triangles S_i and S_j do not intersect when the L value of one exceeds the U value of the other. Then, in the further calculation using formulas (4)-(8), the criterion weight is zero. That is, if a triangle does not intersect with at least one of the other triangles, its weight is zero. This case is possible if the distance between the M values is large and the triangles themselves are narrow (the distance from L to M and from M to U is 1), as with the symmetric scale from Table 13. To better explain the case of zero weights when using narrow symmetric triangles, we illustrate it with an example. In solving real problems, when forming a fuzzy matrix from several AHP matrices, the values are most often averaged [49,51]. Chang's own work uses triangular numbers in which the values do not exceed three [49]. In the following sections, symmetric and asymmetric triangular fuzzy numbers are used to form the fuzzy AHP matrix of pairwise comparisons. The implementation of the third algorithm is illustrated using different matrices of pairwise comparisons, depending on the scale of the triangular fuzzy numbers used.
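The degree-of-possibility comparison just described, equal to 1 when one peak lies to the right of the other, 0 when the triangles do not intersect, and otherwise the ordinate of their intersection point, can be written compactly. This is an illustrative sketch of Chang's comparison rule:

```python
def possibility(s1, s2):
    """Degree of possibility V(s1 >= s2) for triangular fuzzy numbers
    s = (L, M, U), following Chang's extent analysis."""
    l1, m1, u1 = s1
    l2, m2, u2 = s2
    if m1 >= m2:
        return 1.0            # the peak of s1 is at or right of the peak of s2
    if l2 >= u1:
        return 0.0            # the triangles do not intersect
    # ordinate of the intersection point of the two triangles
    return (l2 - u1) / ((m1 - u1) - (m2 - l2))
```

The disjoint case, V = 0, is exactly what produces zero criterion weights when narrow symmetric triangles are used.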

Using the Symmetric Scale for the Fuzzy Triangle
The fuzzy matrix (Table 15) is formed in such a way that the most likely estimate M corresponds to the estimate of the AHP method. A 6 × 6 matrix with a good consistency index is used (Table 1). Fuzzy numbers replace the natural numbers, using the scales specified in Tables 13 and 14. The remaining matrix estimates are calculated using formula (4). The matrix uses narrow symmetric triangular fuzzy numbers and the full scale of Saaty scores from 1 to 9. Step by step, we illustrate the calculation of the weights using Chang's method.
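The fuzzification step can be sketched as follows. The exact triples of the symmetric scale in Table 13 are not reproduced here, so the `SYMMETRIC` mapping below is an illustrative assumption, with the symmetry broken at the ends of the 1..9 range as the text notes; scores below 1 are handled by the reciprocal rule of formula (4):

```python
# Illustrative symmetric triangular scale (the exact triples of
# Table 13 are an assumption); symmetry is broken at 1 and 9.
SYMMETRIC = {1: (1, 1, 2), 2: (1, 2, 3), 3: (2, 3, 4), 4: (3, 4, 5),
             5: (4, 5, 6), 6: (5, 6, 7), 7: (6, 7, 8), 8: (7, 8, 9),
             9: (8, 9, 9)}

def fuzzify(A, scale=SYMMETRIC):
    """Replace every AHP score with a triangular number (L, M, U);
    the diagonal stays (1, 1, 1), and scores below 1 become the
    reciprocal (1/U, 1/M, 1/L) of the fuzzified inverse score."""
    n = len(A)
    F = [[(1, 1, 1)] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if A[i][j] >= 1:
                F[i][j] = scale[round(A[i][j])]
            else:
                l, m, u = scale[round(1 / A[i][j])]
                F[i][j] = (1 / u, 1 / m, 1 / l)
    return F
```

Swapping in the asymmetric scale of Table 14 only requires passing a different `scale` dictionary.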
Consider the summed values of the fuzzy numbers in each row (Table 16). The L values of the second and fourth criteria differ greatly from the L values of the other criteria. Note that the L value of the second criterion is greater than the U values of the first, third, fifth, and sixth criteria, and the L value of the fourth criterion is greater than the U values of the first, fifth, and sixth criteria.
For each i-th criterion, the extension of the fuzzy synthesis S_i is calculated using formula (5), and the resulting values are rounded. The normalized summed values are shown in Table 17. After normalization, the sums of the L, M, and U values of the triangular numbers are, respectively, less than 1, equal to 1, and greater than 1. Let us analyze how normalization changed the L and U values while expanding the triangular numbers. Taking the example of the first criterion, the difference between U and L increased from 1.85 to 3. Unfortunately, normalization did not solve all the problems of the second and fourth criteria: their L values still exceed the U values of some of the other criteria. These inequalities show that the weights of the first, fifth, and sixth criteria will be zero, and the further calculations confirm this.
To avoid this situation, it is recommended that, when using a narrow symmetric scale to fill in the matrix of pairwise comparisons, extreme values, such as (8, 9, 9), (7, 8, 9), and so on, are not used. Another possible way to avoid zero values is to use an asymmetric scale, which widens the triangles and the differences between the L and U values. Then, using the extension of the fuzzy synthesis, all the criteria are compared in pairs using formula (6). In our further study of the stability of the fuzzy AHP method, we use the matrix from Table 18, with a good consistency index (CR = 0.022). Using the third algorithm, the stability of the method is again established using different numbers of iterations: 100; 10,000; and 100,000. The largest relative errors of the criteria weights are recorded. At least ten attempts are made to fix the interval of the largest relative errors for the non-zero values of the weights.
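The zero-weight effect discussed above can be reproduced with a compact sketch of Chang's extent analysis (formulas (4)-(8)). The 3 × 3 fuzzy data in the test below are invented solely to show a dominated criterion receiving weight zero under narrow triangles:

```python
def chang_weights(F):
    """Chang's extent analysis for a fuzzy matrix F of (L, M, U) triples:
    fuzzy synthetic extents S_i, pairwise degrees of possibility,
    minimum over the rivals, then normalization."""
    n = len(F)
    # row sums and the grand total of the triangular numbers
    rows = [tuple(sum(F[i][j][k] for j in range(n)) for k in range(3))
            for i in range(n)]
    total = tuple(sum(r[k] for r in rows) for k in range(3))
    # S_i = rowsum_i (x) (1/sum U, 1/sum M, 1/sum L)
    S = [(r[0] / total[2], r[1] / total[1], r[2] / total[0]) for r in rows]

    def V(a, b):  # degree of possibility V(a >= b)
        if a[1] >= b[1]:
            return 1.0
        if b[0] >= a[2]:
            return 0.0        # disjoint triangles -> zero contribution
        return (b[0] - a[2]) / ((a[1] - a[2]) - (b[1] - b[0]))

    d = [min(V(S[i], S[j]) for j in range(n) if j != i) for i in range(n)]
    s = sum(d)
    return [x / s for x in d]
```

When one criterion is strongly dominated and the triangles are narrow, its synthetic extent does not intersect the others and its weight comes out exactly zero, which is the pathology the asymmetric scale is meant to avoid.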
To analyze the stability with a symmetric scale of triangular numbers, we use the matrix presented in Table 18. When simulating an AHP matrix with 100 iterations, 60-70% of the generated matrices are consistent. The weights of the criteria of the initial fuzzy matrix are 0.171, 0.288, 0.227, 0.071, 0.187, and 0.055. When calculating the criteria weights of the generated and consistent pairwise comparison matrices, 70-80% of the matrices have zero weights, with the largest relative error being 1. The non-zero values of the weights and their largest relative errors are shown in Table 19. The results show a high maximum relative error, δ = 0.9797. In general, the relative errors of the third algorithm with a symmetric scale of triangular numbers (Table 19) are greater than those of the first and second algorithms.

Using the Asymmetric Scale for the Fuzzy Triangle
A fuzzy matrix using the asymmetric triangular scale is formed in the same way as with the symmetric one, only using the scale from Table 14. The same 6 × 6 matrix with a good consistency index is used (Table 18). The weights of the criteria of the initial fuzzy matrix are 0.171, 0.253, 0.208, 0.095, 0.179, and 0.093. In contrast to the symmetric scale, zero values of the weights are extremely rare (1-3%). The values of the criteria weights obtained using the asymmetric scale differ from those obtained using the symmetric one. However, the correlation coefficient of the weights obtained with the symmetric and asymmetric scales is significant and equals 0.9987. The criteria weights of the third matrix, obtained using the symmetric and asymmetric fuzzy scales from Tables 13 and 14, are shown in Figure 2. This last check was implemented to avoid zero weights under the symmetric scale, using a third matrix with scores not exceeding 3. The result shows that the correlation of the weights obtained using the symmetric and asymmetric scales is high. Small changes to the symmetric scale do not significantly affect the scores, but they do avoid zero weights.
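As a quick check of the correlation claim, the Pearson coefficient of the two weight vectors reported above for the Table 18 matrix (symmetric versus asymmetric scale) can be computed directly; the value should agree with the reported 0.9987 up to rounding:

```python
import numpy as np

# weights of the Table 18 matrix under the symmetric and asymmetric scales
w_sym  = np.array([0.171, 0.288, 0.227, 0.071, 0.187, 0.055])
w_asym = np.array([0.171, 0.253, 0.208, 0.095, 0.179, 0.093])

r = np.corrcoef(w_sym, w_asym)[0, 1]
print(round(r, 4))
```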
Analyzing the results of 15 repetitions of 100 iterations each (Table 20), the largest relative errors of the weights of the second and fourth criteria are high, which means that in some cases the weights are more than doubled.
The intervals of the relative errors for different numbers of iterations are presented in Table 21. As the number of iterations increases, the largest relative errors also increase: the weights increase and decrease by almost a factor of two, and the largest change, by almost a factor of four, is in the weight of the second criterion. The use of the asymmetric scale of triangular numbers improves the result compared with the symmetric scale. There are practically no zero values of the weights, but even with a small fluctuation in the values of the matrix, the weights vary greatly. In this case, the scale used in the AHP matrix varies from 1 to 4 or, correspondingly, the fuzzy numbers from (1, 1, 1) to (3, 4, 5).
To further test the algorithm, the triangular values of the fuzzy numbers are widened. We use the asymmetric scale from Table 22 in order to avoid zero weights for the criteria of the first matrix (Table 1). The initial M values are taken from the first matrix, which has a high consistency score (Table 1). The weights of the criteria of the fuzzy matrix formed using triangular fuzzy numbers with the new scale from Table 22 are 0.159, 0.286, 0.205, 0.247, 0.088, and 0.017. Figure 3 shows a high degree of agreement between the weight estimates of the AHP and FAHP methods using Chang's algorithm. This result could not be predicted, because Chang's algorithm uses both different rating scales and the theory of comparing fuzzy numbers. The most significant difference in the weights of the first-matrix criteria obtained by the AHP and FAHP methods is observed in the first to third criteria. An analysis of the largest relative errors over ten repetitions shows good results for the algorithm for criteria 1 to 5 (Table 23). The large relative error for the sixth criterion is explained by the fact that its weight in the original matrix is small (0.017). When checking the stability of the algorithm with a large number of iterations, the result remains the same, but the interval narrows to a single value (Table 24). Despite the result for criterion 6 (δ = 4.6606), whose weight varies from 0.017 to 0.08, the method using the new scale for forming the triangular numbers shows a good result, comparable to the results of algorithms 1 and 2.

Discussion
Using the results of calculations of MCDM methods to choose the best alternative and to make the right decision makes practical sense if the models used are stable with respect to possible minor fluctuations in the initial data. Experts play a significant, and often crucial, role in the preparation of these MCDM methods. They form a set of criteria that characterize the process being evaluated. The criteria weights are usually calculated on the basis of their estimates, and often experts evaluate the values of the criteria themselves. Experts' estimates are characterized by uncertainty. Therefore, when using MCDM methods, it is very important to investigate the influence of incomplete certainty about the data on the results of the calculations, and to assess the stability of the methods themselves.
The stability of MCDM models and the stability of the results depend both on the methods used and on the problem data themselves. Therefore, for each specific problem solved by MCDM, the use of methods for calculating the criteria weights and the specific methods for evaluating alternatives can be selected after checking these methods and the data for stability.
This paper should be considered as an integral part of the general task of studying the stability of MCDM models. For each specific MCDM method, it is necessary to investigate the stability of the weight estimates (this paper), as well as to evaluate the stability of the MCDM method itself. After that, the total error of the calculations can be estimated and the model with the smallest errors accepted.
The assessment of MCDM methods' stability is often expressed as a change in the ranking of the alternatives. When solving specific problems with many possible MCDM methods, the result of the method with the lowest degree of change in the rank estimates is used.
Various methods can establish the criteria weights. Still, the peculiarity of the AHP and FAHP methods lies in checking the consistency of the expert assessments, which makes it possible to control the correctness of filling out the questionnaire.
The AHP method, as a mathematical method, shows a high degree of stability. This is to be expected: the method is mathematically justified, the elements of the matrix of the pairwise comparison of the criteria are ideally the ratio of the unknown criteria weights, and the weights themselves are normalized values of the eigenvector of the matrix.
Issues related to changes in the criteria weights that depend on ambiguous expert assessments have not received sufficient attention in the scientific literature. An expert, when filling out the same questionnaire again, usually fills it out a little differently. The errors in the calculated weights of the AHP method described in the second algorithm, which are related to the estimates of the experts themselves and the logic of their thinking, are significantly higher than the errors of the mathematical method itself. That, too, was to be expected. Reducing or increasing the comparative estimates of the experts by one significantly changes the values of the components of the eigenvector and the values of the criteria weights, and the relative error of the weights increases. When comparing one criterion with all the others, the expert should also remember his previous assessments of the other criteria. With a large number of criteria, this task is not simple, and the comparison matrix is often contradictory, that is, not consistent. The expert is then forced to fill in a new matrix, and his new estimates, of course, differ significantly from the original ones, as do the weights of the criteria; the relative error is quite large. It can be recommended that experts, before filling out an AHP matrix, rank the criteria according to their significance and, when filling out the matrix, constantly take the ranking results into account.
The greatest problems arise when evaluating the stability of the FAHP method weights. This is mainly due to the Chang algorithm used. Checking the AHP fuzzy method shows non-unambiguous results when using different scales of triangular numbers. In the analysis of the algorithm proposed by Chang for calculating weights, the possibility of zero weights for the criteria is emphasized. This is due to the possible excess of the value of L over U (L > U), which occurs because of the narrow scale of triangular numbers. Chang's proposed algorithm includes normalization, which partially solves the problem of decreasing L and increasing U values, by expanding the triangular numbers and increasing the probability of an intersection of the values. This paper proposed an asymmetric scale of triangular numbers, which excluded the appearance of zero weights. At the same time, the calculated fuzzy weights correlated well with the weights of the AHP method. In subsequent work, the authors plan to study in more detail the influence on the final result of the scale used for the triangular numbers and suggest other ways of normalization, to avoid zero values for the criteria weights. Chang's algorithm is also sensitive to large estimates from the Saaty scale (close to 9), in which case the criteria weights can take zero values.
The criteria weights calculated using the AHP and FAHP methods are naturally different, but as the average M values of the FAHP matrix coincide with the values of the AHP matrix, the weights of the two methods should correlate with each other. However, when some of the criteria weights are zero, there is no need to talk about compliance. The situation is "corrected" by the use of the asymmetric scale proposed in this article.
A review of the scientific literature confirms the relevance of the problems studied in this paper. Despite the presence of papers that use a stability check for the AHP method, the results of the algorithms could not be compared because of different interpretations of the results. It should be noted that, despite the widespread use of the FAHP method, little attention has been paid to checking its stability, so the results of this paper are of scientific interest.
The theory of interval numbers is a universal approach for solving many applied problems. The FAHP method is useful for solving problems using linguistic scales that cannot be written down in a single number. In this case, the sensitivity test of the FAHP method to select the scale of triangular numbers is recommended. Otherwise, the AHP method is recommended.

Conclusions
The stability of multicriteria MCDM methods is associated with the incomplete certainty of the data used for calculations.
This uncertainty is particularly evident when calculating the subjective weights of criteria based on expert assessments. Unstable estimates (rankings) in MCDM methods reduce the quality of the estimates and the reliability of the decision. The instability of MCDM methods can result in an incorrect ranking of the evaluated alternatives, an improper choice of the best option, and inaccurate estimates of the significance of the criteria and of the values of the criteria weights in a particular situation and environment. Therefore, the problem investigated in this publication is relevant. The calculations show that the AHP method, as a mathematical method, is stable with respect to minor fluctuations in the elements of the comparison matrix. The transition from the Saaty scale integers 1-3-5-7-9 to close real numbers changes the values of the weights only slightly. The relative errors of the weight estimates were insignificant, varying between δ = 0.0531 (at 5% deviation) and δ = 0.116 (at 10% deviation).
The maximum relative error of the AHP method related to the assessments of the experts themselves, the logic of their thinking, and their psychology is significantly greater than the error of AHP as a mathematical method. Changing the elements of the comparison matrix by one significantly affects the values of the eigenvector components of the matrix. The relative error of the weight estimates was much higher, varying in the range from δ = 0.5121 to δ = 0.7739.
The stability of the weights of the FAHP method is related not only to the factors listed above for the AHP method, but also to the use of Chang's algorithm for estimating weights. The calculations show that the algorithm itself is not universal, is not applicable for all matrices, and depends on the scale used for the estimates of the triangular numbers. If the evaluation scale is incorrectly selected, the weights may be zero for some matrices. The relative error of the FAHP method is significantly higher than that for the AHP method and varied from δ = 0.2421 to δ = 0.9797. It was anomalous (δ = 4.6606) in the case of a very small weight for a criterion.
The proposed asymmetric FAHP scale significantly improved the results: it eliminated the appearance of zero weights for criteria, reduced the values of the maximum relative errors, and showed a high degree of correlation with the weights obtained by the AHP method.
Regarding the novelty and relevance of this paper, we can point to the study of the stability of the AHP method that depends on the instability of the estimates of the experts themselves, and to the analysis of the stability of the FAHP method, which was clearly insufficiently studied earlier. This paper can be used to analyze the stability of specific MCDM methods when ranking the alternatives.